Multi-Objective Optimization in Computer Networks Using Metaheuristics
Multi-Objective Optimization in Computer Networks Using Metaheuristics Yezid Donoso Ramon Fabregat
Boca Raton New York
Auerbach Publications is an imprint of the Taylor & Francis Group, an informa business
Auerbach Publications
Taylor & Francis Group
6000 Broken Sound Parkway NW, Suite 300
Boca Raton, FL 33487-2742

© 2007 by Taylor & Francis Group, LLC
Auerbach is an imprint of Taylor & Francis Group, an Informa business

No claim to original U.S. Government works
Printed in the United States of America on acid-free paper
10 9 8 7 6 5 4 3 2 1

International Standard Book Number-10: 0-8493-8084-7 (Hardcover)
International Standard Book Number-13: 978-0-8493-8084-6 (Hardcover)

This book contains information obtained from authentic and highly regarded sources. Reprinted material is quoted with permission, and sources are indicated. A wide variety of references are listed. Reasonable efforts have been made to publish reliable data and information, but the author and the publisher cannot assume responsibility for the validity of all materials or for the consequences of their use.

No part of this book may be reprinted, reproduced, transmitted, or utilized in any form by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying, microfilming, and recording, or in any information storage or retrieval system, without written permission from the publishers.

For permission to photocopy or use material electronically from this work, please access www.copyright.com (http://www.copyright.com/) or contact the Copyright Clearance Center, Inc. (CCC) 222 Rosewood Drive, Danvers, MA 01923, 978-750-8400. CCC is a not-for-profit organization that provides licenses and registration for a variety of users. For organizations that have been granted a photocopy license by the CCC, a separate system of payment has been arranged.

Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation without intent to infringe.

Library of Congress Cataloging-in-Publication Data

Donoso, Yezid.
Multi-objective optimization in computer networks using metaheuristics / Yezid Donoso, Ramon Fabregat.
p. cm.
Includes bibliographical references and index.
ISBN 0-8493-8084-7 (alk. paper)
1. Computer networks. 2. Mathematical optimization. I. Fabregat, Ramon, 1963- II. Title.
TK5105.5.D665 2007
004.6--dc22    2007060385

Visit the Taylor & Francis Web site at http://www.taylorandfrancis.com and the Auerbach Web site at http://www.auerbach-publications.com
Dedication To my wife, Adriana For her love and tenderness and for our future together To my children, Andres Felipe, Daniella, Marianna, and the following with Adry … a gift of God to my life Yezid
To my wife, Telvys, and my children … “continuarem caminant cap a Ìtaca” Ramon
Contents

1 Optimization Concepts .............................................................. 1
    1.1 Local Minimum ................................................................ 2
    1.2 Global Minimum ............................................................... 3
    1.3 Convex and Nonconvex Sets .................................................... 3
    1.4 Convex and Concave Functions ................................................. 5
    1.5 Minimum Search Techniques ................................................... 11
        1.5.1 Breadth First Search .................................................. 11
        1.5.2 Depth First Search .................................................... 12
        1.5.3 Best First Search ..................................................... 14

2 Multi-Objective Optimization Concepts ............................................. 15
    2.1 Single-Objective versus Multi-Objective Optimization ........................ 17
    2.2 Traditional Methods ......................................................... 19
        2.2.1 Weighted Sum .......................................................... 19
        2.2.2 ε-Constraint .......................................................... 23
        2.2.3 Distance to a Referent Objective Method ............................... 26
        2.2.4 Weighted Metrics ...................................................... 28
        2.2.5 The Benson Method ..................................................... 30
    2.3 Metaheuristics .............................................................. 32
        2.3.1 Convergence Toward Optimal ............................................ 33
        2.3.2 Optimal Solutions Notwithstanding Convexity of the Problem ............ 33
        2.3.3 Avoiding Local Optima ................................................. 33
        2.3.4 Polynomial Complexity of Metaheuristics ............................... 34
        2.3.5 Evolutionary Algorithms ............................................... 35
            2.3.5.1 Components of a General Evolutionary Algorithm .................. 36
            2.3.5.2 General Structure of an Evolutionary Algorithm .................. 38
        2.3.6 Ant Colony ............................................................ 40
            2.3.6.1 General Structure of an Ant Colony Algorithm .................... 40
        2.3.7 Memetic Algorithm ..................................................... 42
            2.3.7.1 Local Searches .................................................. 42
            2.3.7.2 General Structure of a Memetic Algorithm ........................ 43
        2.3.8 Tabu Search ........................................................... 44
            2.3.8.1 General Structure of a Tabu Search .............................. 45
        2.3.9 Simulated Annealing ................................................... 46
            2.3.9.1 General Structure of Simulated Annealing ........................ 49
    2.4 Multi-Objective Solution Applying Metaheuristics ............................ 50
        2.4.1 Procedure to Assign Fitness to Individuals ............................ 51
        2.4.2 Reducing the Nondominated Set Using Clustering ........................ 52
            2.4.2.1 Representation of the Chromosome ................................ 53
            2.4.2.2 Crossover Operator .............................................. 53
            2.4.2.3 Mutation Operator ............................................... 55

3 Computer Network Modeling ........................................................ 57
    3.1 Computer Networks: Introduction ............................................. 57
        3.1.1 Reference Models ...................................................... 57
            3.1.1.1 OSI Reference Model ............................................. 58
            3.1.1.2 TCP/IP Reference Model .......................................... 59
        3.1.2 Classification of Computer Networks Based on Size ..................... 60
            3.1.2.1 Personal Area Networks (PANs) ................................... 61
            3.1.2.2 Local Area Networks (LANs) ...................................... 61
            3.1.2.3 Metropolitan Area Networks (MANs) ............................... 62
            3.1.2.4 Wide Area Networks (WANs) ....................................... 62
        3.1.3 Classification of Computer Networks Based on Type of Transmission ..... 64
            3.1.3.1 Unicast Transmissions ........................................... 64
            3.1.3.2 Multicast Transmissions ......................................... 64
            3.1.3.3 Broadcast Transmissions ......................................... 64
    3.2 Computer Network Modeling ................................................... 65
        3.2.1 Introduction to Graph Theory .......................................... 65
        3.2.2 Computer Network Modeling in Unicast Transmission ..................... 66
        3.2.3 Computer Network Modeling in Multicast Transmission ................... 68

4 Routing Optimization in Computer Networks ........................................ 71
    4.1 Concepts .................................................................... 71
        4.1.1 Unicast Case .......................................................... 71
        4.1.2 Multicast Case ........................................................ 72
    4.2 Optimization Functions ...................................................... 74
        4.2.1 Hop Count ............................................................. 74
            4.2.1.1 Unicast Transmission ............................................ 74
            4.2.1.2 Multicast Transmission .......................................... 75
        4.2.2 Delay ................................................................. 78
            4.2.2.1 Unicast Transmission ............................................ 78
            4.2.2.2 Multicast Transmission .......................................... 80
        4.2.3 Cost .................................................................. 82
            4.2.3.1 Unicast Transmission ............................................ 82
            4.2.3.2 Multicast Transmission .......................................... 84
        4.2.4 Bandwidth Consumption ................................................. 84
            4.2.4.1 Unicast Transmission ............................................ 84
            4.2.4.2 Multicast Transmission .......................................... 86
        4.2.5 Packet Loss Rate ...................................................... 89
        4.2.6 Blocking Probability .................................................. 90
            4.2.6.1 Unicast Transmission ............................................ 90
            4.2.6.2 Multicast Transmission .......................................... 92
        4.2.7 Maximum Link Utilization .............................................. 92
            4.2.7.1 Unicast Transmission ............................................ 92
            4.2.7.2 Multicast Transmission .......................................... 94
        4.2.8 Other Multicast Functions ............................................. 95
            4.2.8.1 Hop Count Average ............................................... 95
            4.2.8.2 Maximal Hop Count ............................................... 96
            4.2.8.3 Maximal Hop Count Variation ..................................... 96
            4.2.8.4 Average Delay ................................................... 97
            4.2.8.5 Maximal Delay ................................................... 97
            4.2.8.6 Maximal Delay Variation ......................................... 97
            4.2.8.7 Average Cost .................................................... 98
            4.2.8.8 Maximal Cost .................................................... 98
    4.3 Constraints ................................................................. 98
        4.3.1 Unicast Transmission .................................................. 98
        4.3.2 Multicast Transmission ............................................... 100
    4.4 Functions and Constraints .................................................. 102
        4.4.1 Unicast Transmissions ................................................ 102
        4.4.2 Multicast Transmissions .............................................. 102
    4.5 Single-Objective Optimization Modeling and Solution ........................ 102
        4.5.1 Unicast Transmission Using Hop Count and Delay ....................... 106
            4.5.1.1 Weighted Sum ................................................... 106
            4.5.1.2 ε-Constraint ................................................... 107
            4.5.1.3 Weighted Metrics ............................................... 110
            4.5.1.4 Benson Method .................................................. 113
        4.5.2 Multicast Transmission Using Hop Count and Delay ..................... 114
            4.5.2.1 Weighted Sum ................................................... 114
            4.5.2.2 ε-Constraint ................................................... 116
            4.5.2.3 Weighted Metrics ............................................... 119
            4.5.2.4 Benson Method .................................................. 121
        4.5.3 Unicast Transmission Using Hop Count, Delay, and Bandwidth Consumption ... 123
        4.5.4 Multicast Transmission Using Hop Count, Delay, and Bandwidth Consumption ... 126
        4.5.5 Unicast Transmission Using Hop Count, Delay, Bandwidth Consumption, and Maximum Link Utilization ... 129
        4.5.6 Multicast Transmission Using Hop Count, Delay, Bandwidth Consumption, and Maximum Link Utilization ... 133
    4.6 Multi-Objective Optimization Modeling ...................................... 138
        4.6.1 Unicast Transmission ................................................. 138
        4.6.2 Multicast Transmission ............................................... 140
    4.7 Obtaining a Solution Using Metaheuristics .................................. 142
        4.7.1 Unicast for the Hop Count and Delay Functions ........................ 143
            4.7.1.1 Coding of a Chromosome ......................................... 143
            4.7.1.2 Initial Population ............................................. 144
            4.7.1.3 Selection ...................................................... 144
            4.7.1.4 Crossover ...................................................... 144
            4.7.1.5 Mutation ....................................................... 145
        4.7.2 Multicast for the Hop Count and Delay Functions ...................... 146
            4.7.2.1 Coding of a Chromosome ......................................... 146
            4.7.2.2 Initial Population ............................................. 148
            4.7.2.3 Selection ...................................................... 148
            4.7.2.4 Crossover ...................................................... 148
            4.7.2.5 Mutation ....................................................... 150
        4.7.3 Unicast Adding the Bandwidth Consumption Function .................... 151
            4.7.3.1 Coding of a Chromosome ......................................... 151
            4.7.3.2 Initial Population ............................................. 151
            4.7.3.3 Selection ...................................................... 152
            4.7.3.4 Crossover ...................................................... 152
            4.7.3.5 Mutation ....................................................... 152
        4.7.4 Multicast Adding the Bandwidth Consumption Function .................. 153
            4.7.4.1 Coding of a Chromosome ......................................... 154
            4.7.4.2 Initial Population ............................................. 154
            4.7.4.3 Selection ...................................................... 154
            4.7.4.4 Crossover ...................................................... 155
            4.7.4.5 Mutation ....................................................... 157
        4.7.5 Unicast Adding the Maximum Link Utilization Function ................. 157
            4.7.5.1 Coding of a Chromosome ......................................... 157
            4.7.5.2 Initial Population ............................................. 158
            4.7.5.3 Selection ...................................................... 158
            4.7.5.4 Crossover ...................................................... 158
            4.7.5.5 Mutation ....................................................... 159
        4.7.6 Multicast Adding the Maximum Link Utilization Function ............... 160
            4.7.6.1 Coding of a Chromosome ......................................... 161
            4.7.6.2 Initial Population ............................................. 161
            4.7.6.3 Selection ...................................................... 162
            4.7.6.4 Crossover ...................................................... 162
            4.7.6.5 Mutation ....................................................... 163

5 Multi-Objective Optimization in Optical Networks ................................ 165
    5.1 Concepts ................................................................... 165
        5.1.1 Multiplexing of the Network .......................................... 167
        5.1.2 Multiprotocol λ Switching Architecture (MPλS) ........................ 167
        5.1.3 Optical Fiber ........................................................ 167
            5.1.3.1 Types of Fibers ................................................ 168
    5.2 New Optimization Functions ................................................. 168
        5.2.1 Number of λ .......................................................... 170
            5.2.1.1 Unicast ........................................................ 171
            5.2.1.2 Multicast ...................................................... 171
        5.2.2 Optical Attenuation .................................................. 171
            5.2.2.1 Unicast ........................................................ 171
            5.2.2.2 Multicast ...................................................... 172
    5.3 Redefinition of Optical Transmission Functions ............................. 172
        5.3.1 Unicast .............................................................. 172
        5.3.2 Multicast ............................................................ 172
    5.4 Constraints ................................................................ 175
        5.4.1 Unicast Transmission ................................................. 175
        5.4.2 Multicast Transmission ............................................... 177
    5.5 Functions and Constraints .................................................. 179
        5.5.1 Unicast Transmissions ................................................ 179
        5.5.2 Multicast Transmissions .............................................. 179
    5.6 Multi-Objective Optimization Modeling ...................................... 179
        5.6.1 Unicast Transmissions ................................................ 179
        5.6.2 Multicast Transmissions .............................................. 184
    5.7 Obtaining a Solution Using Metaheuristics .................................. 185
        5.7.1 Unicast Transmissions ................................................ 185
            5.7.1.1 Codification of a Chromosome ................................... 185
            5.7.1.2 Initial Population ............................................. 186
            5.7.1.3 Selection ...................................................... 186
            5.7.1.4 Crossover ...................................................... 186
            5.7.1.5 Mutation ....................................................... 187
        5.7.2 Multicast Transmissions .............................................. 187
            5.7.2.1 Codification of a Chromosome ................................... 187
            5.7.2.2 Initial Population ............................................. 188
            5.7.2.3 Selection ...................................................... 189
            5.7.2.4 Crossover ...................................................... 189
            5.7.2.5 Mutation ....................................................... 192

6 Multi-Objective Optimization in Wireless Networks ............................... 195
    6.1 Concepts ................................................................... 195
    6.2 New Optimization Function .................................................. 196
        6.2.1 Free Space Loss ...................................................... 196
            6.2.1.1 Unicast ........................................................ 197
            6.2.1.2 Multicast ...................................................... 197
    6.3 Constraints ................................................................ 198
        6.3.1 Unicast Transmission ................................................. 198
        6.3.2 Multicast Transmission ............................................... 199
    6.4 Function and Constraints ................................................... 200
        6.4.1 Unicast Transmissions ................................................ 200
        6.4.2 Multicast Transmissions .............................................. 200
    6.5 Multi-Objective Optimization Modeling ...................................... 200
        6.5.1 Unicast Transmission ................................................. 200
        6.5.2 Multicast Transmission ............................................... 203
    6.6 Obtaining a Solution Using Metaheuristics .................................. 204

Annex A ........................................................................... 205
Annex B ........................................................................... 275
Bibliography ...................................................................... 435
Index ............................................................................. 441
Preface

Many new multicast applications emerging from the Internet, such as Voice over IP (VoIP), videoconferencing, TV over the Internet, radio over the Internet, multipoint video streaming, etc., have resource requirements involving bandwidth consumption, end-to-end delay, delay jitter, packet loss ratio, and so forth. It is therefore necessary to formulate a proposal to specify and provide the resources these kinds of applications need to function properly. To show how these new applications can comply with such requirements, the book presents a multi-objective optimization scheme in which we analyze and solve the problems related to resource optimization in computer networks. Once readers have studied this book, they will be able to extend these models by adding new objective functions, new functions that act as constraints, new network models, and new types of services or applications.

This book is intended for an academic and scientific setting. In the professional environment, it focuses on the resource optimization a carrier needs to master to profit from its computing resources and network infrastructure. It is very useful as a textbook, mainly for master's- or Ph.D.-level courses related to traffic engineering in computer networks, but it can also be used for an advanced or specialized course in the senior year of an undergraduate program. On the other hand, it can be of great use for a multi-objective optimization course that deals with graph theory, since the computer networks here are represented as graphs.

The book structure is as follows:

Chapter 1: Analyzes the basic optimization concepts, as well as several techniques and algorithms for the search of minima.
Chapter 2: Analyzes the basic multi-objective optimization concepts and the ways to solve such problems through traditional techniques and several metaheuristics.
Chapter 3: Shows how to analytically model the computer network problems dealt with in this book.
Chapter 4: The book's main chapter; it shows the multi-objective models in computer networks and the applied way in which we can solve them.
Chapter 5: An extension of Chapter 4, applied to optical networks.
Chapter 6: An extension of Chapter 4, applied to wireless networks.

Lastly, Annex A provides the source code to solve the mathematical model problems presented in this book through solvers. Annex B includes some source code programmed in the C language, which solves some of the multi-objective optimization problems presented. These source files are available online at http://www.crcpress.com/e_products/downloads/default.asp.
The Authors

Yezid Donoso, Ph.D., is a professor at the Universidad del Norte in Barranquilla, Colombia, South America. He teaches courses in computer networks and multi-objective optimization. He is also director of the computer network postgraduate program and of the master's program in systems and computer engineering. In addition, he is a consultant in computer networks and optimization for Colombian industries. He earned his bachelor's degree in systems and computer engineering from the Universidad del Norte, Barranquilla, Colombia, in 1996; an M.Sc. degree in systems and computer engineering from the Universidad de los Andes, Bogotá, Colombia, in 1998; a D.E.A. in information technology from Girona University, Spain, in 2002; and a Ph.D. (cum laude) in information technology from Girona University in 2005. Dr. Donoso is a senior member of IEEE as well as a distinguished visiting professor (DVP) of the IEEE Computer Society. His biography has been published in Who's Who in the World (2006) and Who's Who in Science and Engineering (2006) by Marquis, U.S.A., and in 2000 Outstanding Intellectuals of the 21st Century (2006) by the International Biographical Centre, Cambridge, England. His awards include the title of distinguished professor from the Universidad del Norte (October 2004) and the National Award of Operations Research from the Colombian Society of Operations Research (2004).

Ramon Fabregat, Ph.D., earned his degree in computer engineering from the Universitat Autònoma de Barcelona (UAB), Spain, and his Ph.D. in information technology (1999) from Girona University, Spain. Currently, he is a professor in the electrical engineering, computer science, and automatic control departments and a researcher at the Institute of Informatics and Applications at Girona University. His teaching duties include graduate- and postgraduate-level courses on operating systems, computer communication
networks, and the performance evaluation of telecommunication systems. His research interests are in the fields of management and performance evaluation of communication networks, network management based on intelligent agents, MPLS and GMPLS, and adaptive hypermedia systems. He coordinated the participation of the Broadband Communications and Distributed Systems (BCDS) research group in the ADAPTPlan project (a Spanish national research project). He is a member of the Spanish Network of Excellence in MPLS/GMPLS networks, which involves several Spanish institutions. He has participated in the technical program committees of several conferences and has coauthored several papers published in international journals and presented at leading international conferences.
Chapter 1
Optimization Concepts

In the field of engineering, solving a problem is not enough; the solution found must be the best one possible. In other words, one must find the optimal solution to the problem. We speak of the best possible solution because in the real world a problem may have constraints under which the solutions found are either feasible (they can be implemented in practice) or infeasible (they cannot be implemented). In engineering, one speaks of optimization when one wants to solve complex problems. Such complexity may be associated with the kind of problem one wants to solve (e.g., whether the problem is nonlinear) or the kind of solution one wishes to obtain (e.g., whether the solution is exact or an approximation). There are five basic ways to solve such problems: analytically, numerically, algorithmically through heuristics, algorithmically through metaheuristics, or through simulation.

Analytical solutions are practical only for simple problems; for complex or large-sized problems they are very difficult to obtain and require too much computational time. When the analytical model is very complex, problems can be solved approximately using numerical methods. To obtain such approximate optimal values, the functions analyzed must usually meet a series of conditions. If those conditions are not met, the numerical method may not converge toward the optimal value. In any event, these techniques are very useful when the problems are mono-objective, whether linear or not. However, when a problem is multi-objective, numerical methods may fail to converge, depending on the model used. For example, if one attempts to solve a multi-objective optimization scheme
by means of numerical methods using the mono-objective scheme of the weighted sum of the functions (which will be explained later), then in addition to the conditions that the functions must meet for the specific numerical method, such a multi-objective scheme presents inconveniences if the search space is not convex, because certain solutions cannot be found. One can also find the solution by applying computational algorithms called heuristics. In this case, a heuristic presents a computational scheme that can reach the optimal value in an acceptable computational time. But the search for solutions with this type of algorithm may exhibit serious problems if, for example, the search spaces are nonconvex or the nature of the problem is combinatorial. Furthermore, heuristics may present serious computational time problems when the problem is NP-Hard. A recognition problem P1 is said to be NP-Hard if all other problems in the class NP polynomially reduce to P1, and we say that a recognition problem P1 is in the class NP if for every yes instance of P1 there is a short (i.e., polynomial length) verification that the instance is a yes instance [AHU93]. To overcome these inconveniences, metaheuristics have been created, which obtain an approximate solution to practically any kind of problem that is NP-Hard, complex, or combinatorial. Among existing metaheuristics we can mention genetic algorithms, Tabu search, ant colony, simulated annealing, memetic algorithms, etc. Many of these metaheuristics have been redesigned to provide solutions to multi-objective problems, which are the main interest of this book. This chapter provides an introduction to the fundamentals of local and global minima and some existing techniques to search for such minima.
1.1 Local Minimum

When optimizing a function f(x), one may want to find the minimum value in an [a, b] interval; that is, for a ≤ x ≤ b. This minimum value is called a local minimum (Figure 1.1).
Figure 1.1 Local minimum.
Optimization Concepts 䡲 3
Given function f(x), suppose we want to find the minimum value, but only in the interval a ≤ x ≤ b. The resulting f(x*) value is called the local minimum of function f(x) in the interval [a, b]. As shown in Figure 1.1, this f(x*) value is the minimum value in the [a, b] interval, but it is not the minimum value of function f(x) in the (–∞, ∞) interval. Traditionally, search techniques for local minima are simpler than search techniques for global minima due, among other reasons, to the complexity of the search space when the interval is (–∞, ∞).
1.2 Global Minimum When the function minimized is not constrained to a specific interval of the function, then one says that the value found is a global minimum. In this case, the search space interval is associated with (–∞, ∞). Even though in Figure 1.1 the value of function f(x*) is a minimum, we can see that f(x**) < f(x*). If there is no other value f(x'), so that f(x') < f(x**) in the (–∞, ∞) interval, then one says that f(x**) is a global minimum of function f(x). To find the global maximum value of a function, the analysis would be exactly the same, but in this case there should not exist another f(x') value so that f(x') > f(x**) in the (–∞, ∞) interval.
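The distinction between a local and a global minimum can be illustrated with a small sketch (not from the book): a dense grid search over a restricted interval [a, b] versus a much wider interval, on a hypothetical function with two basins.

```python
# Sketch: locating a local minimum on [a, b] versus a global minimum,
# by dense grid search on a function with two basins of different depth.

def argmin_on_interval(f, a, b, steps=100000):
    """Return (x, f(x)) minimizing f over a dense grid on [a, b]."""
    best_x, best_y = a, f(a)
    for i in range(1, steps + 1):
        x = a + (b - a) * i / steps
        y = f(x)
        if y < best_y:
            best_x, best_y = x, y
    return best_x, best_y

# Two basins: a shallow minimum near x = 2 and a deeper one near x = -1.
f = lambda x: (x + 1) ** 2 * (x - 2) ** 2 + 0.5 * x

x_local, _ = argmin_on_interval(f, 1.0, 3.0)      # local minimum on [1, 3]
x_global, _ = argmin_on_interval(f, -10.0, 10.0)  # "global" minimum
print(x_local, x_global)
```

The search restricted to [1, 3] returns the local minimum near x = 2, while the wider search finds the deeper minimum near x = –1, mirroring the f(x*) versus f(x**) discussion above.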
1.3 Convex and Nonconvex Sets

Definition 1

A set S ⊆ ℜⁿ is convex if for any pair of points P1, P2 ∈ S and for every λ ∈ [0, 1] it holds that P = λP1 + (1 − λ)P2 ∈ S. Point P is a linear combination of points P1 and P2. In other words, a set S ⊆ ℜⁿ is convex if the linear combination of any two points of S also belongs to S. On the other hand, a set is nonconvex if some linear combination of two points of the set falls outside the set. Taking these definitions into account, Figure 1.2 represents convex solution sets and Figure 1.3 represents nonconvex solution sets. Below are examples of convex sets.
Exercise

a. Is the set S = {(x1, x2) ∈ ℜ² / x2 ≥ x1} convex?
Figure 1.2 Convex sets.

Figure 1.3 Nonconvex sets.
Proof: Let x = (x1, x2), y = (y1, y2), and x, y ∈ S. We must prove that for every λ ∈ [0, 1], z = (z1, z2) = λx + (1 – λ)y ∈ S; in this case, that z2 ≥ z1.

λx + (1 – λ)y = (λx1 + (1 – λ)y1, λx2 + (1 – λ)y2)

Because x, y ∈ S, we have x2 ≥ x1 and y2 ≥ y1, and also λ ≥ 0 and (1 – λ) ≥ 0. Then λx2 ≥ λx1 and (1 – λ)y2 ≥ (1 – λ)y1. Adding both inequalities, we have

λx2 + (1 – λ)y2 ≥ λx1 + (1 – λ)y1

By definition of z, z2 = λx2 + (1 – λ)y2 and z1 = λx1 + (1 – λ)y1. Substituting in the foregoing inequality, we have z2 ≥ z1; therefore we have proven that S is a convex set.
b. Let S = {x ∈ ℜ / |x| ≤ 1}. Is it convex?
Proof: Let x, y ∈ S, that is, |x| ≤ 1 and |y| ≤ 1. We must prove that for every λ ∈ [0, 1], z = λx + (1 – λ)y ∈ S; in this case, that |z| ≤ 1. Applying the absolute value to the equality and using the triangle inequality, together with λ ∈ [0, 1], |x| ≤ 1, and |y| ≤ 1, we have |z| = |λx + (1 – λ)y| ≤ |λx| + |(1 – λ)y| = λ|x| + (1 – λ)|y| ≤ λ + (1 – λ) = 1, and therefore S is convex.
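The two convexity proofs above can be spot-checked numerically. The following sketch samples convex combinations of points of each set and verifies Definition 1; sampling can only corroborate convexity, not prove it.

```python
# Numerical spot-check of Definition 1 for the two sets proved convex
# above: S1 = {(x1, x2) : x2 >= x1} and S2 = {x : |x| <= 1}.
import random

def convex_combination_stays_in(in_set, points, trials=1000):
    """Sample pairs P1, P2 in the set and lambda in [0, 1]; check that
    lambda*P1 + (1 - lambda)*P2 is still in the set."""
    for _ in range(trials):
        p1, p2 = random.choice(points), random.choice(points)
        lam = random.random()
        p = tuple(lam * a + (1 - lam) * b for a, b in zip(p1, p2))
        if not in_set(p):
            return False
    return True

# Sample points of S1: pairs (x1, x2) with x2 >= x1.
s1_points = [(x, x + abs(y)) for x in range(-5, 6) for y in range(-5, 6)]
print(convex_combination_stays_in(lambda p: p[1] >= p[0] - 1e-9, s1_points))

# Sample points of S2: |x| <= 1 (represented as 1-tuples).
s2_points = [(i / 10.0,) for i in range(-10, 11)]
print(convex_combination_stays_in(lambda p: abs(p[0]) <= 1 + 1e-9, s2_points))
```

Both checks print True, matching the proofs.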
1.4 Convex and Concave Functions

Definition 2

Let S be a convex, nonempty subset of ℜⁿ and f a function defined from S into ℜ. Function f is convex in S if and only if for any pair of points P1, P2 ∈ S and for every λ ∈ [0, 1] it holds that f(λP1 + (1 – λ)P2) ≤ λf(P1) + (1 – λ)f(P2).

Definition 3

Let S be a convex, nonempty subset of ℜⁿ and f a function defined from S into ℜ. Function f is concave in S if and only if for any pair of points P1, P2 ∈ S and for every λ ∈ [0, 1] it holds that f(λP1 + (1 – λ)P2) ≥ λf(P1) + (1 – λ)f(P2).

Definition 4

Let S be a convex, nonempty subset of ℜⁿ and f a function defined from S into ℜ. Function f is strictly convex in S if and only if for any pair of distinct points P1, P2 ∈ S and for every λ ∈ (0, 1) it holds that f(λP1 + (1 – λ)P2) < λf(P1) + (1 – λ)f(P2).

Definition 5

Let S be a convex, nonempty subset of ℜⁿ and f a function defined from S into ℜ. Function f is strictly concave in S if and only if for any pair of distinct points P1, P2 ∈ S and for every λ ∈ (0, 1) it holds that f(λP1 + (1 – λ)P2) > λf(P1) + (1 – λ)f(P2).
Exercise

Prove whether the following functions are concave or convex.

a. Let y = f(x) = ax + b, on ℜ, with a, b ∈ ℜ.

Proof:

f(λx + (1 – λ)y) = a[λx + (1 – λ)y] + b
= λax + (1 – λ)ay + λb + (1 – λ)b
= λ[ax + b] + (1 – λ)[ay + b]
= λf(x) + (1 – λ)f(y)

Because equality holds, function y = f(x) = ax + b is both convex and concave.

b. Let y = f(x) = ∑i=1..n ci·xi, from ℜⁿ into ℜ, with ci, xi positive.
Proof: Due to the function we are stating, in this case the inequality < does not apply, because these types of functions are not convex. Let n
f (λx + (1 – λ)y) ≥
∑ c . (λx + (1 − λ ) y ) i
i
i
i =1
n
≥
∑ i =1
n
λci xi +
∑ (1 − λ )c y i
i =1
i
Optimization Concepts 䡲 7
n
≥λ
∑
n
ci xi + (1 − λ )
i =1
∑c y i
i
i =1
≥ λf(x) + (1 – λ)f(y) n
Consequently, function y = f ( x ) =
∑ c .x i
is concave.
i
i =1
The following definitions can be used when function f is differentiable.
Definition 6

Let S be a convex, nonempty subset of ℜⁿ and f a differentiable function from S into ℜ. Then function f is convex in S if and only if for any pair of points x, y ∈ S it holds that [∇f(y) − ∇f(x)]·(y − x) ≥ 0.

Definition 7

Let S be a convex, nonempty subset of ℜⁿ and f a differentiable function from S into ℜ. Then function f is strictly convex in S if and only if for any pair of distinct points x, y ∈ S it holds that [∇f(y) − ∇f(x)]·(y − x) > 0.

Definition 8

Let S be a convex, nonempty subset of ℜⁿ and f a differentiable function from S into ℜ. Function f is concave in S if and only if for any pair of points x, y ∈ S it holds that [∇f(y) − ∇f(x)]·(y − x) ≤ 0.

Definition 9

Let S be a convex, nonempty subset of ℜⁿ and f a differentiable function from S into ℜ. Function f is strictly concave in S if and only if for any pair of distinct points x, y ∈ S it holds that [∇f(y) − ∇f(x)]·(y − x) < 0.
Exercise

Show whether the following functions are concave or convex.

c. Let y = f(x) = ∑i=1..n xi², from ℜⁿ into ℜ.
Proof: In this case we obtain the gradient vector of f(x), which consists of the first partial derivatives of f(x):

∇f(x) = (∂f/∂x1, ∂f/∂x2, …, ∂f/∂xn) = (2x1, 2x2, …, 2xn)

Afterwards, we calculate the Hessian matrix, which consists of the second partial derivatives of f(x). Because ∂²f/∂xi² = 2 and ∂²f/∂xi∂xj = 0 for i ≠ j,

Hf(x) = diag(2, 2, …, 2)

that is, a diagonal matrix with 2 in every diagonal entry and 0 elsewhere.
Because the Hessian matrix is positive definite, function y = f(x) = ∑i=1..n xi² is convex.

d. Let f(x) = eˣ.

Proof: Substituting the function into the expression [∇f(y) − ∇f(x)]·(y − x), we get:

[∇f(y) − ∇f(x)]·(y − x) = (e^y − e^x)(y − x)

Because x ≠ y, if x > y we have e^x > e^y, and if y > x we have e^y > e^x. The sign of (e^y − e^x) is therefore the same as the sign of (y − x), so the product (e^y − e^x)(y − x) will always be positive, and function f(x) = eˣ is strictly convex.

e. Let f(x) = ln x, with x > 0.

Proof: Substituting the function into the expression [∇f(y) − ∇f(x)]·(y − x), we get:

[∇f(y) − ∇f(x)]·(y − x) = (1/y − 1/x)(y − x) = (x − y)(y − x)/(xy)

Because x ≠ y and x, y > 0, the quantity (x − y)(y − x)/(xy) = −(x − y)²/(xy) will always be negative, and therefore function f(x) = ln x is strictly concave.
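In one dimension, the gradient tests of Definitions 6 to 9 reduce to checking the sign of (f′(y) − f′(x))·(y − x). A small sketch corroborates the two classifications just proven:

```python
# Spot-check of the differentiable criteria in one dimension: the sign
# of (f'(y) - f'(x)) * (y - x) over random pairs classifies e^x as
# strictly convex and ln x as strictly concave.
import math
import random

def gradient_test(fprime, xs):
    """Return the set of signs of (f'(y) - f'(x)) * (y - x) over pairs."""
    signs = set()
    for x in xs:
        for y in xs:
            if x != y:
                v = (fprime(y) - fprime(x)) * (y - x)
                signs.add(1 if v > 0 else (-1 if v < 0 else 0))
    return signs

xs = [random.uniform(0.1, 5.0) for _ in range(50)]
print(gradient_test(math.exp, xs))           # derivative of e^x is e^x
print(gradient_test(lambda x: 1.0 / x, xs))  # derivative of ln x is 1/x
```

The first call yields only positive signs (strictly convex) and the second only negative signs (strictly concave), as in the proofs above.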
Proposition 1

Let (g ∘ f)(x), with f: H ⊂ ℜⁿ → M and g: M ⊂ ℜ → ℜ, where H is a convex subset of ℜⁿ and M is an interval of ℜ. Then:

If f is convex and g is increasing and convex, then function (g ∘ f)(x) is convex.
If f is concave and g is increasing and concave, then function (g ∘ f)(x) is concave.
If f is concave and g is decreasing and convex, then function (g ∘ f)(x) is convex.
If f is convex and g is decreasing and concave, then function (g ∘ f)(x) is concave.
Exercise

Prove whether the following functions are concave or convex.

f. Let y = f(x) = ln(∑i=1..n xi), on ℜⁿ, with x positive.

Proof: Because we have previously proven that function ∑i=1..n xi is concave, and because function ln x is increasing and concave, applying Proposition 1 we have that y = f(x) = ln(∑i=1..n xi) is a concave function.

g. Let y = f(x) = (∑i=1..n xi²)², with x ∈ ℜⁿ.

Proof: Because we have proven that function ∑i=1..n xi² is convex, and because function x² is increasing and convex on the nonnegative values taken by ∑i=1..n xi², by Proposition 1 we have that y = f(x) = (∑i=1..n xi²)² is a convex function.
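The two compositions can also be checked numerically against the defining inequalities of Definitions 2 and 3 (a sketch; random sampling over positive points only corroborates, it does not prove):

```python
# Numerical check of the two compositions from Proposition 1:
# f(x) = ln(sum(x)) should be concave and g(x) = (sum(x_i^2))^2 convex.
import math
import random

def jensen_ok(f, convex, dim, trials=2000, tol=1e-6):
    """Check f(lam*x + (1-lam)*y) <= (convex) or >= (concave)
    lam*f(x) + (1-lam)*f(y) on random positive points."""
    for _ in range(trials):
        x = [random.uniform(0.1, 5.0) for _ in range(dim)]
        y = [random.uniform(0.1, 5.0) for _ in range(dim)]
        lam = random.random()
        z = [lam * a + (1 - lam) * b for a, b in zip(x, y)]
        lhs = f(z)
        rhs = lam * f(x) + (1 - lam) * f(y)
        if convex and lhs > rhs + tol:
            return False
        if not convex and lhs < rhs - tol:
            return False
    return True

print(jensen_ok(lambda v: math.log(sum(v)), convex=False, dim=3))
print(jensen_ok(lambda v: sum(t * t for t in v) ** 2, convex=True, dim=3))
```

Both checks print True, consistent with Proposition 1.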
1.5 Minimum Search Techniques

Given a graph that could, for example, represent a computer network connectivity scheme, there are different search techniques for finding optimal values to get from one point of the graph to another. The following sections discuss how some of these search techniques work; more specifically, the breadth first search, depth first search, and best first search techniques.
1.5.1 Breadth First Search

This method (Figure 1.4) consists of expanding the search through the neighboring nodes, level by level. The algorithm begins by searching all the nodes directly connected to the start node. The nodes through which it passes are stored in a queue; once the last node of the current level has been explored, nodes are removed from the queue and the algorithm continues executing from each of them, expanding successively.
Figure 1.4 The different steps of a tree search using the Breadth First Search method.
The following is an example of the Breadth First Search algorithm:

Algorithm BFS(G, s)
  L0 ← new empty sequence
  L0.insertLast(s)
  setLabel(s, VISITED)
  i ← 0
  while ¬Li.isEmpty()
    Li+1 ← new empty sequence
    for all v ∈ Li.elements()
      for all e ∈ G.incidentEdges(v)
        if getLabel(e) = UNEXPLORED
          w ← opposite(v, e)
          if getLabel(w) = UNEXPLORED
            setLabel(e, DISCOVERY)
            setLabel(w, VISITED)
            Li+1.insertLast(w)
          else
            setLabel(e, CROSS)
    i ← i + 1
end BFS
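A runnable counterpart of the pseudocode above can be sketched as follows (the graph is an assumed plain adjacency dictionary; edge labels are tracked implicitly through the visited set):

```python
# Breadth First Search over an adjacency-list graph with an explicit
# FIFO queue, mirroring the level-by-level expansion of BFS(G, s).
from collections import deque

def bfs(graph, start):
    """Return the order in which vertices are visited breadth-first."""
    visited = {start}
    order = [start]
    queue = deque([start])
    while queue:
        v = queue.popleft()
        for w in graph.get(v, []):
            if w not in visited:      # edge (v, w) acts as a discovery edge
                visited.add(w)
                order.append(w)
                queue.append(w)
            # otherwise (v, w) acts as a cross edge and is skipped
    return order

g = {1: [2, 3], 2: [1, 4], 3: [1, 4], 4: [2, 3, 5], 5: [4]}
print(bfs(g, 1))  # [1, 2, 3, 4, 5]
```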
1.5.2 Depth First Search

This method (Figure 1.5) consists of expanding the deepest nonterminal node. The algorithm begins its search down one tree branch until a goal node is reached. The nodes through which it passes to reach the goal node are stored in a stack; once the end of a branch is reached, nodes are removed from the stack and the algorithm continues executing, expanding in depth successively. The Depth First Search algorithm is as follows:

Algorithm DFS(G, v)
  Input: graph G and a start vertex v of G
  Output: labeling of the edges of G in the connected component of v
Figure 1.5 The different steps of searching a tree using the Depth First Search method.
          as discovery edges and back edges
  setLabel(v, VISITED)
  for all e ∈ G.incidentEdges(v)
    if getLabel(e) = UNEXPLORED
      w ← opposite(v, e)
      if getLabel(w) = UNEXPLORED
        setLabel(e, DISCOVERY)
        DFS(G, w)
      else
        setLabel(e, BACK)
end DFS
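A runnable counterpart of the DFS pseudocode can be sketched as follows (the recursion stack plays the role of the explicit stack; edges are labeled DISCOVERY or BACK as in the pseudocode):

```python
# Depth First Search over an adjacency-list graph, labeling each edge
# once as DISCOVERY or BACK, as in DFS(G, v) above.
def dfs(graph, v, visited=None, labels=None):
    """Label edges of the connected component of v; return visit order."""
    if visited is None:
        visited, labels = set(), {}
    visited.add(v)
    order = [v]
    for w in graph.get(v, []):
        edge = frozenset((v, w))
        if edge not in labels:        # edge still UNEXPLORED
            if w not in visited:
                labels[edge] = "DISCOVERY"
                order += dfs(graph, w, visited, labels)
            else:
                labels[edge] = "BACK"
    return order

g = {1: [2, 3], 2: [1, 4], 3: [1, 4], 4: [2, 3, 5], 5: [4]}
print(dfs(g, 1))  # [1, 2, 4, 3, 5]
```

Compared with the BFS output on the same graph, node 4 is reached before node 3 because the search descends along one branch before backtracking.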
1.5.3 Best First Search

This technique (Figure 1.6) expands the most promising node first; it can be considered an improved Breadth First Search. In this case, each tree branch has an associated weight, and the decision on which direction to take in the search is based on the value of that weight.
Figure 1.6 The different steps of a tree search using the Best First Search.
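A minimal best-first search can be sketched with a priority queue; the per-node weights here are assumed example values, not from the book.

```python
# Best First Search: always expand the frontier node with the smallest
# associated weight, using a binary-heap priority queue.
import heapq

def best_first(graph, weight, start, goal):
    """Expand nodes in order of weight[node]; return the path found."""
    frontier = [(weight[start], start, [start])]
    seen = {start}
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(frontier, (weight[nxt], nxt, path + [nxt]))
    return None

g = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
w = {"a": 3, "b": 2, "c": 1, "d": 0}
print(best_first(g, w, "a", "d"))  # ['a', 'c', 'd']
```

Because node c has a smaller weight than node b, the search expands c first and reaches the goal through it.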
Chapter 2
Multi-Objective Optimization Concepts

Optimization problems are normally stated in a single-objective way. In other words, the process must optimize a single objective function subject to a series of constraints given by the real world. A single-objective optimization problem may be stated as follows:

Optimize [minimize/maximize]

f(X)
(1)
subject to

H(X) = 0
G(X) ≤ 0

In this case, the function to be optimized (minimized or maximized) is f(X), where vector X is the set of independent variables. Functions H(X) and G(X) are the constraints of the model. For this problem we can define three sets of solutions:

1. The universal set, which in this case is all possible values of X, whether feasible or infeasible.
2. The set of feasible solutions, which are all the values of X that comply with the H(X) and G(X) constraints. In the real world, these would be all the solutions that can actually be implemented.
3. The set of optimal solutions, which are those values of X that, in addition to being feasible, attain the optimal value (minimum or maximum) of function f(X), whether in a specific [a, b] interval or in a global context (–∞, ∞). The set of optimal solutions may consist of a single element or of several elements, the latter provided that the following characteristic is met: f(x) = f(x′), where x ≠ x′. In that case, we can say that there are two optimal solutions to the problem, X = {x, x′}.

But in real life, it is possible that when wanting to solve a problem, we may need to optimize more than one objective function. When this happens, we speak of multi-objective optimization. Figure 2.1 illustrates the topology of a network with five nodes and five links, each with a specific cost, on which one wishes to transmit a File Transfer Protocol (FTP) packet from node 1 to node 5. The problem consists of determining which of the two possible paths one must take to transmit this packet. In Figure 2.1, a cost has been assigned to each pair of nodes (link). An analysis of Figure 2.1 shows that if we take the path given by links (1, 2) and (2, 5), we have a 2-jump path with a cost of 20 units. If we take the path given by links (1, 3), (3, 4), and (4, 5), however, we have a 3-jump path with a cost of 15 units. In this specific case, we can identify two feasible paths: the first given by links (1, 2) and (2, 5) and the second given by links (1, 3), (3, 4), and (4, 5). If we only want to optimize (in this case minimize) the number of jumps, we can see that the minimum value is 2 jumps, and it is given by path (1, 2), (2, 5). On the other hand, if we only want to minimize the cost objective function, the minimum value would be 15 units and would be given by path (1, 3), (3, 4), (4, 5). This example shows that the optimal solution
Figure 2.1 Network topology.
Multi-Objective Optimization Concepts 䡲 17
and value obtained depend on the objective function optimized, with the drawback that the other results are only contemplated as feasible solutions, not as optimal solutions. In the example above, if we optimize the number-of-jumps function, we pay the highest cost for transporting the packet on this network. But it can also happen that the network administrator prefers the path with fewer jumps. For this reason, we will state a model with more than one objective function, in which the solution consists of a set of optimal solutions rather than a single optimal solution. A multi-objective optimization model may be stated as follows:

Optimize [minimize/maximize]

F(X) = {f1(X), f2(X), …, fn(X)}
(2)
subject to

H(X) = 0
G(X) ≤ 0

In this case, the functions to be optimized (whether minimized or maximized) are the set of functions F(X), where the vector X is the set of independent variables. Functions H(X) and G(X) are the constraints of the model. The solutions found must satisfy the objectives even when they are conflicting, that is, when minimizing one function may worsen other functions. Stating and solving multi-objective optimization problems applied to computer networks is the purpose of this book. The first part of this chapter makes a comparative analysis between performing single-function and multiple-function optimization processes. Next, the chapter shows some of the traditional methods used to solve multi-objective optimization problems, and finally, the last section shows some metaheuristics used to solve multi-objective optimization processes.
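The two-path example of Figure 2.1 can be evaluated directly in code, showing that neither path wins on both objectives (the link costs are transcribed from the figure discussion above):

```python
# The two candidate paths of Figure 2.1, evaluated on both objectives:
# number of jumps and total cost.
cost = {(1, 2): 10, (2, 5): 10, (1, 3): 5, (3, 4): 5, (4, 5): 5}
paths = [[1, 2, 5], [1, 3, 4, 5]]

results = []
for p in paths:
    hops = len(p) - 1
    total = sum(cost[(p[i], p[i + 1])] for i in range(hops))
    results.append((hops, total))

print(results)  # [(2, 20), (3, 15)]
```

The first path minimizes jumps (2 versus 3) while the second minimizes cost (15 versus 20), so each is optimal for one objective and dominated in the other.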
2.1 Single-Objective versus Multi-Objective Optimization When the optimization process is performed on a single-objective problem, only one function will be minimized or maximized, and therefore, it will
be necessary to find a minimum or maximum, whether local or global, for that objective function. When we speak of multiple objective functions, we wish to find the set of values that minimize or maximize each of these functions. Moreover, a typical feature of this type of problem is that the functions may conflict with each other; in other words, when one function is optimized, other functions may worsen. In multi-objective optimizations we may find the following situations: minimize all objective functions, maximize all objective functions, or minimize some and maximize others. The following example, which serves as an introduction to the formal theory of multi-objective optimization, explains the difference between optimizing a single function and optimizing multiple functions simultaneously. Suppose you want to go from one city to another and there are different means of transportation to choose from: airlines, with or without a nonstop flight, train, automobile, etc. The actual duration and cost of the trip will vary according to the option selected. Table 2.1 and Figure 2.2 show cost and trip duration data for each option. An analysis of Table 2.1 or Figure 2.2 shows that by minimizing the time function only, we will reach the city of destination in 1 h, but at the highest cost ($600). By minimizing the cost function only, we will obtain the most inexpensive solution ($100), but the duration of the trip will be the longest (16 h). By optimizing a single function, we obtain a partial view of the results, when in the real problem more than one function may influence the decision of which option to select. However, if the problem is analyzed in a multi-objective context, the idea of this optimization process is that the result is shown as a set of optimal solutions (optimal Pareto front) for the different objective functions jointly. This can be seen in the table.
Later on, we will show that all of these points are optimal in the multi-objective context. Therefore, if we were to optimize both functions simultaneously, these would be the values obtained.

Table 2.1 Multi-Objective Example

Option         1    2    3    4    5    6    7    8    9   10
Cost ($)     600  550  500  450  400  300  250  200  150  100
Time (hours)   1    2    3    4    5    6    8   10   14   16
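The claim that every option in the table is Pareto-optimal can be spot-checked with a small non-dominated filter (a sketch; the option list is transcribed from Table 2.1):

```python
# Non-dominated filtering of the ten (cost, time) options of Table 2.1:
# an option is Pareto-optimal if no other option is at least as good in
# both objectives and strictly better in at least one.
options = [(600, 1), (550, 2), (500, 3), (450, 4), (400, 5),
           (300, 6), (250, 8), (200, 10), (150, 14), (100, 16)]

def dominates(a, b):
    """True if option a is no worse than b in both objectives and a != b."""
    return a[0] <= b[0] and a[1] <= b[1] and a != b

pareto = [o for o in options
          if not any(dominates(other, o) for other in options)]
print(len(pareto))  # 10: every option in the table is Pareto-optimal
```

Because cost strictly decreases while time strictly increases across the options, no option dominates another, so all ten belong to the optimal Pareto front.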
Figure 2.2 Optimal values in multi-objective optimization (the optimal Pareto front: cost ($) on the horizontal axis versus time (hours) on the vertical axis).
2.2 Traditional Methods

Solving the previous scheme can be quite complex. It requires optimizers that, instead of providing a single optimal solution, yield as a result a set of optimal solutions, and general-purpose optimizers of this kind do not exist. In this section we introduce some methods to solve multi-objective optimization problems in a single-objective context, so that the existing traditional optimizers can be used. We will also show their drawbacks and constraints when finding the multiple solutions. There are several methods that can be used to solve multi-objective problems using single-objective approximations. Some of them are weighted sum, ε-constraint, weighted metrics, Benson, lexicographic, and min-max, among others. All of these methods try to find the optimal Pareto front using different approximation techniques. Later on we will analyze methods that use metaheuristics to solve multi-objective optimization problems in a truly multi-objective context.
2.2.1 Weighted Sum

This method consists of creating a single-objective model by weighting the n objective functions, assigning a weight to each of them. Through the weighted sum method, the multi-objective model (2) can be restated in the following way:

Optimize [minimize/maximize]

F′(X) = ∑i=1..n ri·fi(X)    (3)

subject to

H(X) = 0
G(X) ≤ 0
0 ≤ ri ≤ 1, i = {1, …, n}
∑i=1..n ri = 1
In this case one can see that each function is multiplied by a weight (ri) whose value must lie between 0 and 1. Also, the sum of all the weights applied to the functions must be 1. An analysis of this type of solution shows that the function F′(X) obtained from the sum is really a linear combination of the functions fi(X). What the weighted sum method does is find points of the optimal Pareto front, which consists of all the optimal solutions in the multi-objective optimization, through the combinations given by the weight vector R = {r1, r2, …, rn}. For example, in the case of two objective functions we would have the following equation:

F′(X) = r1·f1(X) + r2·f2(X)

The idea is to assign values to r1 and r2 to find two lines that tangentially touch the set of solutions (K). If in Figure 2.3 one is minimizing functions
Figure 2.3 Solution using the weighted sum method: (a) convex solution set; (b) nonconvex solution set.
F1 and F2 through weights (r1, r2), by means of this method the optimization process will find the crossing point P of the two lines (linear combinations) L1 and L2. This point P is the optimal solution found by means of this method with weights (r1, r2), and it belongs to the optimal Pareto front in the multi-objective context. If we change the values of (r1, r2), we find another point P′ that also belongs to the optimal Pareto front. If the set of solutions K is convex, any element of the optimal Pareto front may be found by changing the values of the weight vector R (r1 and r2 in the example in Figure 2.3a). But this is only true when the set of solutions K is convex. If the solution set K is nonconvex (Figure 2.3b), there are points of K that cannot be represented by a linear combination, and therefore such points cannot be found by means of this method. For this method to work completely, the search space of the set of solutions must be convex; otherwise, we cannot guarantee that all optimal Pareto front points will be found. In a convex set of solutions, all elements of the optimal Pareto front have a probability p > 0 of being found, but in nonconvex sets of solutions, this probability can be 0 for certain elements of the optimal Pareto front. Another drawback of this method concerns the range of values that can be obtained in the functions optimized. The problem arises when the ranges of the functions (a ≤ f1 ≤ b, c ≤ f2 ≤ d) have different magnitudes, and especially when such differences are very large. If this happens, the function having the highest range values will predominate in the result. For example, if 0 < f1 < 1 and 0 < f2 < 1000, all solutions would be biased toward function f2, which is the function with the larger range.
To solve the previous drawback, it would be necessary to normalize the values of all objective functions, and this would generate a greater effort to obtain the results of the optimization model.
Example

The following shows how we can apply this method to solve a simple problem with two objective functions that we want to minimize:

f1(x) = x⁴
f2(x) = (x – 2)⁴

subject to

–4 ≤ x ≤ 4
Figure 2.4 The f1(x) = x⁴ and f2(x) = (x – 2)⁴ functions.
Figure 2.4 shows both functions in terms of the independent variable x. Because both functions in this case are convex, the weighted sum method can be used. According to this method, the optimization function would look as follows:

F(x) = w1·f1(x) + w2·f2(x)

subject to

–4 ≤ x ≤ 4

If we execute the model using any numerical solver for different values of wi, we can obtain the solutions for functions f1 and f2 shown in Table 2.2.
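If no numerical solver is at hand, the scalarized model can be spot-checked with a plain grid search over x in [–4, 4] (a sketch; the step count is an arbitrary assumption):

```python
# Weighted-sum scalarization of the example, minimized by grid search:
# F(x) = w1 * x^4 + w2 * (x - 2)^4 over x in [-4, 4].
def weighted_sum_min(w1, w2, steps=80000):
    best_x, best_v = None, float("inf")
    for i in range(steps + 1):
        x = -4 + 8 * i / steps
        v = w1 * x ** 4 + w2 * (x - 2) ** 4
        if v < best_v:
            best_x, best_v = x, v
    return best_x

print(round(weighted_sum_min(0.5, 0.5), 3))  # 1.0, as in solution 6 of Table 2.2
print(round(weighted_sum_min(0.0, 1.0), 3))  # 2.0, as in solution 1 of Table 2.2
```

Sweeping the weights from (0, 1) to (1, 0) reproduces the whole sequence of x values in Table 2.2.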
Table 2.2 Optimal Values with Weighted Sum

Sol   w1    w2    f1     f2     x
1     0     1     16.0   0      2.0
2     0.1   0.9   3.32   0.18   1.35
3     0.2   0.8   2.27   0.36   1.23
4     0.3   0.7   1.69   0.55   1.14
5     0.4   0.6   1.30   0.76   1.07
6     0.5   0.5   1      1      1
7     0.6   0.4   0.76   1.30   0.93
8     0.7   0.3   0.55   1.69   0.86
9     0.8   0.2   0.36   2.27   0.77
10    0.9   0.1   0.18   3.33   0.649
11    1     0     0      16     0
Figure 2.5 shows a graph with the optimal points obtained using the weighted sum method for the stated functions. The horizontal axis is associated with f1, and the vertical axis with function f2. The figure only shows the values of f1 and f2 in the range [0, 5]; for this reason, solutions 1 and 11 are not shown.
Figure 2.5 Optimal values with weighted sum.

2.2.2 ε-Constraint

This method consists of creating a single-objective model in which only one of the functions is optimized; the remaining functions, which are not optimized, become constraints of the model. The multi-objective optimization model shown above (2) can be restated through the ε-constraint model as follows:

Optimize [minimize/maximize]

fi(X)    (4)

subject to

fk(X) ≤ εk, k = 1, …, n and k ≠ i
H(X) = 0
G(X) ≤ 0

In this case, function fi(X) is the only one optimized; the other n – 1 functions become constraints and are limited by their corresponding εk values. The objective in this method consists of changing the ε values of each of the constrained functions and, in this way, obtaining different optimal values of function fi(X). Figure 2.6 shows an example in which two functions (f1 and f2) are optimized using the ε-constraint model. In the example, f2 is optimized and f1 has become a constraint. For the two solutions P1a and P1b shown in Figure 2.6, one observes that when function f1 is a constraint with upper limit ε1a, the optimization point found for f2 is P1a. When in the model the constraint value for f1 is changed to ε1b, the optimization value found for f2 is P1b. By changing the constraint value for f1, we can obtain different values for f2, and therefore obtain the values of the optimal Pareto front. One can also optimize the other functions by changing the objective function to be optimized. In this case, the model would be
P1b ε1a
ε1b
Figure 2.6 Solution using the ε-constraint method.
f1
Optimize [minimize/maximize]

fj(X)    (5)

subject to

fk(X) ≤ εk, k = 1, …, n and k ≠ j
H(X) = 0
G(X) ≤ 0
Example Apply the ε-constraint method to the two previous functions. Considering f2 as the objective function and f1 as the constraint, the model would be Min f2(x) = (x – 2)4 Min f2(x) = (x – 2)4 Subject to x4 ≤ ε1 –4 ≤ x ≤ 4 In this case, given different values of ε1, we can obtain the following solutions, again using any numerical solver, such as sparse nonlinear optimizer (SNOPT). (Table 2.3)
Table 2.3 Optimal Values with ε-Constraint

Sol   ε1     f1     f2      x
1     20.0   16     0       2
2     5.0    5      0.06    1.50
3     4.5    4.5    0.09    1.46
4     3.0    3.0    0.22    1.32
5     2.5    2.5    0.3     1.26
6     2.0    2.0    0.43    1.19
7     1.5    1.5    0.64    1.11
8     1.0    1.0    1.0     1.0
9     0.5    0.5    1.81    0.84
10    0.3    0.3    2.52    0.74
11    0.2    0.2    3.14    0.67
12    0      0      15.67   0.01
Figure 2.7 shows the optimal points obtained using the weighted sum method and the ε-constraint method for the functions discussed.
Figure 2.7 Optimal values with ε-constraint versus weighted sum.

2.2.3 Distance to a Referent Objective Method

This method, like the weighted sum method, allows one to transform a multi-objective optimization problem into a single-objective problem. The function traditionally used in this method is a distance.
Through the distance to a referent objective method, the multi-objective model (2) can be rewritten as follows:

Optimize [minimize/maximize]

F′(X) = [∑i=1..n |Zi − fi(X)|^r]^(1/r)    (6)

subject to

H(X) = 0
G(X) ≤ 0
1 ≤ r < ∞

In this method, one must set a Zi value for each of the functions. These Zi values serve as reference points for finding the values of the optimal Pareto front by means of the distance function, as shown in Figure 2.8. Another value that must be set is r, which determines the distance function to be used. For example, with r = 1 the solution is equivalent to the weighted sum method but without normalizing the values of the functions; with r = 2 one would be using the Euclidean distance; and as r → ∞ the problem is called the min-max or Tchebycheff problem. In the specific case of r → ∞, the formula of the method can be written as

Optimize [minimize/maximize]

F′(X) = max i=1..n |Zi − fi(X)|    (7)

subject to

H(X) = 0
G(X) ≤ 0

Figure 2.8 shows examples of this method with two functions, using r with values 1, 2, and ∞.
Figure 2.8 Solution using the distance to a referent objective method.
As can be seen in Figure 2.8, this method is sensitive to the values of Z_i and r. Figures 2.8a and 2.8b show that through this method it is not possible to find certain values of the optimal Pareto front between A and B. However, Figure 2.8c (with r → ∞) shows that it is possible to find such points. For certain values of r, it may not be possible to find solutions on the optimal Pareto front when the set of solutions is nonconvex. Therefore, if the solution space is nonconvex, or one is uncertain whether it is convex, one should use the method with r → ∞. Finally, analyzing the graphs in Figure 2.8, one can see that it is necessary to provide a good value of Z as the reference point. Providing a poor point Z may result in divergence instead of convergence toward the optimum. This can be a problem when the set of solutions is unknown.
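As a concrete illustration of the scalarization above (this sketch and its names are assumptions of this edit, not the book's), the distance function for r = 1, 2, and ∞ can be minimized by brute force for the example pair f1(x) = x^4, f2(x) = (x − 2)^4 with reference Z = (0, 0):

```python
# Illustrative sketch: minimize the distance-to-reference scalarization
# for f1(x) = x^4, f2(x) = (x - 2)^4 with reference point Z = (0, 0).
def scalarize(x, r, z=(0.0, 0.0)):
    d = (abs(z[0] - x**4), abs(z[1] - (x - 2) ** 4))
    if r == float("inf"):                  # Tchebychev (min-max) case, Eq. (7)
        return max(d)
    return (d[0] ** r + d[1] ** r) ** (1 / r)   # general case, Eq. (6)

def minimize(r, steps=80_001):
    xs = (-4 + 8 * k / (steps - 1) for k in range(steps))
    return min(xs, key=lambda x: scalarize(x, r))

for r in (1, 2, float("inf")):
    print(r, round(minimize(r), 2))
```

With this convex example and Z = (0, 0), all three values of r recover the same solution, x = 1; the choice of r matters when the set of solutions is nonconvex, as discussed above.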
2.2.4 Weighted Metrics

This method is similar to the previous one, with the difference that each of the functions is normalized with respect to a weight. Using the weighted metrics method, the multi-objective model (2) can be rewritten by adding a weight multiplying each function to normalize its values.

Optimize [minimize/maximize]

    F′(X) = [ Σ_{i=1..n} w_i |Z_i − f_i(X)|^r ]^(1/r)    (8)

Subject to

    H(X) = 0
    G(X) ≥ 0
    1 ≤ r < ∞
    0 ≤ w_i ≤ 1, i = {1, …, n}
    Σ_{i=1..n} w_i = 1

In the specific case of r → ∞, the formula of the method can be rewritten in the following way:

Optimize [minimize/maximize]

    F′(X) = max_{i=1..n} [ w_i |Z_i − f_i(X)| ]    (9)

Subject to

    H(X) = 0
    G(X) ≥ 0
    0 ≤ w_i ≤ 1, i = {1, …, n}
    Σ_{i=1..n} w_i = 1

This method has the same drawbacks as the method previously mentioned.
Example Apply this method when r → ∞ to the two previous objective functions, f1(x) = x^4 and f2(x) = (x − 2)^4:

Min z = max [w1·|z1 − f1(x)|, w2·|z2 − f2(x)|]

Subject to
−4 ≤ x ≤ 4

Solving for different values of wi and zi, we obtain the solutions in Table 2.4.
Table 2.4 Optimal Values with Weighted Metrics

Sol  w1   w2   z1  z2  f1     f2     x
1    0    1    0   0   15.87  0      2.0
2    0.1  0.9  0   0   2.59   0.29   1.27
3    0.2  0.8  0   0   1.88   0.47   1.17
4    0.3  0.7  0   0   1.49   0.64   1.11
5    0.4  0.6  0   0   1.22   0.81   1.05
6    0.5  0.5  0   0   1      1      1
7    0.6  0.4  0   0   0.81   1.22   0.95
8    0.7  0.3  0   0   0.64   1.49   0.89
9    0.8  0.2  0   0   0.47   1.88   0.83
10   0.9  0.1  0   0   0.29   2.59   0.73
11   1    0    0   0   0      16     0
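One row of Table 2.4 can be checked with a brute-force sketch (an assumption of this edit, not the book's solver): for w1 = w2 = 0.5 and z1 = z2 = 0, the weighted Tchebychev scalarization should be minimized near x = 1.

```python
# Hypothetical check of one Table 2.4 row: weighted metrics with r -> infinity,
# z1 = z2 = 0, for f1(x) = x^4 and f2(x) = (x - 2)^4 on -4 <= x <= 4.
def weighted_tcheby(x, w1, w2, z1=0.0, z2=0.0):
    return max(w1 * abs(z1 - x**4), w2 * abs(z2 - (x - 2) ** 4))

def argmin(w1, w2, steps=80_001):
    xs = [-4 + 8 * k / (steps - 1) for k in range(steps)]
    return min(xs, key=lambda x: weighted_tcheby(x, w1, w2))

x = argmin(0.5, 0.5)
print(round(x, 2), round(x**4, 2), round((x - 2) ** 4, 2))  # x ~ 1, f1 ~ 1, f2 ~ 1
```

Sweeping w1 from 0 to 1 (with w2 = 1 − w1) reproduces the remaining rows of the table.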
Figure 2.9 shows the optimal points obtained using the weighted metrics method for the given functions and compares them with the methods previously analyzed for this same example.
2.2.5 The Benson Method

This method is similar to the weighted metrics method, but in this case the reference value Z must be a feasible solution. Using the Benson method, one can rewrite the multi-objective model (2) in the following way:

Maximize

    F′(X) = Σ_{i=1..n} max(0, Z_i − f_i(X))
Figure 2.9 Optimal values with weighted metric versus ε-constraint versus weighted sum.
Subject to

    f_i(X) ≤ Z_i, i = 1, 2, …, n
    H(X) = 0
    G(X) ≥ 0

As can be seen, instead of minimizing, one maximizes the resulting function F′, because this is how the method works. In this case, the objective functions are constrained by the vector Z. Using this method, one can obtain Pareto front values even in a nonconvex solution space. The drawback consists of finding good values of Z with which to find the points on the Pareto front. One must know the space of feasible solutions of the problem being solved or, lacking that, generate such values randomly.
Example Apply this method to the two objective functions already mentioned:

Max z = max(0, z1 − f1(x)) + max(0, z2 − f2(x))

Subject to
x^4 ≤ z1
(x − 2)^4 ≤ z2
−4 ≤ x ≤ 4

Executing the method for different values of zi, we obtain the solutions in Table 2.5.

Table 2.5 Optimal Values with the Benson Method

Sol  z1  z2  f1    f2  x
1    5   2   0.43  2   0.81
2    5   1   1     1   1
3    3   4   0.19  4   0.59
4    2   5   0.06  5   0.51
5    1   3   0.22  3   0.68
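Row 2 of Table 2.5 (z1 = 5, z2 = 1) can be checked with a hypothetical brute-force sketch of the Benson model; the function name and grid are assumptions of this edit:

```python
# Illustrative brute-force sketch of the Benson method for f1(x) = x^4 and
# f2(x) = (x - 2)^4, checking Table 2.5 row 2 (z1 = 5, z2 = 1).
def benson(z1, z2, steps=80_001):
    best_x, best_val = None, -float("inf")
    for k in range(steps):
        x = -4 + 8 * k / (steps - 1)
        f1, f2 = x**4, (x - 2) ** 4
        if f1 <= z1 and f2 <= z2:          # Z must dominate the candidate
            val = max(0, z1 - f1) + max(0, z2 - f2)
            if val > best_val:             # maximize the total slack
                best_x, best_val = x, val
    return best_x

x = benson(5, 1)
print(round(x, 2))   # Table 2.5 reports x = 1 for this Z
```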
Figure 2.10 shows the optimal points obtained using the Benson method for the given functions and compares them with the methods previously analyzed for this same example.
Figure 2.10 Optimal Benson values versus other methods.
After analyzing the different methods and applying them to a simple case, we can say that these methods converge toward the optimal Pareto front, although one must take into consideration the specific drawbacks of each. There are other traditional methods that can be used to solve these multi-objective problems. They include the Keeney–Raiffa, compromise, goal attainment, lexicographic, proper equality constraints, and goal programming methods and others that belong to the group of interactive methods and will not be considered in this book.
2.3 Metaheuristics

Metaheuristics are high-level algorithmic strategies used to guide other heuristics or algorithms in their search of the space of feasible solutions, toward the optimal value (in the single-objective case) or the set of optimal values (in the multi-objective case). Traditionally, metaheuristics are good techniques for solving optimization problems in which convergence toward the optimum requires combinatorial analysis of solutions: new, previously unknown feasible solutions are found by combining solutions already known. These techniques exhibit the following functional features.
2.3.1 Convergence toward Optimal As the algorithm is executed, feasible solutions found must converge toward the optimal value. Figure 2.11 shows that when both functions (f1 and f2) are minimized, the solutions found in iteration i + 1, compared to solutions found in iteration i, converge to the optimal Pareto front. This does not mean that every new solution obtained has to be better than the previous one, but this process does have to show this behavior.
2.3.2 Optimal Solutions Notwithstanding Convexity of the Problem

When metaheuristics operate through combinatorial analysis, any solution belonging to the set of feasible solutions must have a probability p > 0 of being found. If a metaheuristic meets this requirement, then any optimal solution may be found. In this case, it does not matter whether the set of solutions is convex, because the technique can always find the optimal solution. Figure 2.12 shows that despite a nonconvex set of solutions, the solutions of iteration i + 1 can be found through combinatorial analysis of the solutions obtained in iteration i.
Figure 2.11 Convergence toward optimal.

Figure 2.12 Solution in nonconvexity.

2.3.3 Avoiding Local Optima

There are heuristic algorithms that can find local optima. If the problem consists of finding local optima, one does not need metaheuristic techniques to solve it. If the problem consists of finding global optima, metaheuristic techniques are required so that the search does not get trapped in local optima; in other words, local optima may be visited, but the final solution must be the global optimum. This is accomplished through the combinatorial analysis functions: instead of dismissing a poor solution found, it is kept, and through this poor solution one may eventually reach a global optimum. Figure 2.13 shows that even though the solution found in iteration i + 1 is worse than the solutions found in iteration i, this solution draws the optimization process away from the local optimum and toward another optimum (which in the example of the figure corresponds to the global optimum). The solution found in iteration i + 2 is better than the one found in iteration i + 1, even though the latter was worse than the one found in iteration i. Metaheuristics usually implement precisely this behavior to leave local optima and be able to converge toward global optima.
Figure 2.13 Global optimum.

2.3.4 Polynomial Complexity of Metaheuristics

This is one of the best contributions these techniques make to the solution of engineering problems: metaheuristics are used when, due to the complexity of the problem, one has not found
algorithmic solutions with polynomial computational time. Because of their probabilistic work scheme, metaheuristics traditionally solve the problem in polynomial time. This implies, for example, that even though we cannot guarantee that the optimal value will be found, we can find a good approximate value that we can actually use. Because the process is probabilistic, it can also happen that one finds a value in one execution and a different, but equally good, value in another execution. The foregoing features are among the most important exhibited by these types of techniques for solving optimization problems. Different metaheuristics have been developed over time. Some have been based on biological processes, others on engineering processes, and others on social or cultural processes. Some of the metaheuristics developed so far include evolutionary algorithms, ant colony, memetic algorithms, Tabu search, and simulated annealing. Other techniques used to solve optimization problems include scatter search, cultural algorithms, and neural networks, but they will not be covered in this book.
2.3.5 Evolutionary Algorithms

Evolutionary algorithms are among the types of algorithms that solve combinatorial optimization problems whose traditional solutions are very complex at a computational level. Evolutionary algorithms originate in Darwin's theory of evolution, which explains the creation of species based on the evolution of life on Earth. Darwin introduced three fundamental components of evolution: replication, variation, and natural selection. Replication is the formation of a new organism from a previous one, but replication alone would only produce identical copies of organisms, thereby stalling evolution. However, during the replication process a series of errors called variations occur that allow individuals to change. One form of variation is sexual reproduction. In addition to replication and variation, evolution needs natural selection, which happens when individuals of the same species compete for the scarce resources of their environment and for the possibility of reproducing. Such competition allows the fittest individuals to survive and the weakest to die.

Simulating the biological evolutionary process, evolutionary algorithms use a structure or individual to represent a solution to a problem. This representation is generally a bit string or data structure that corresponds to the biological genotype. The genotype defines an individual organism that is expressed in a phenotype and is made up of one or several chromosomes, which in turn are made up of separate genes that take on certain values (alleles) from a genetic alphabet. A locus identifies a gene's position in the chromosome. Hence, each individual codifies the set of parameters used as input to the function under consideration. Finally, a set of chromosomes is called the population. Just as in nature, varied evolutionary operators act on the algorithm's population, trying to produce better individuals. The three operators most used in evolutionary algorithms are mutation, recombination, and selection.
2.3.5.1 Components of a General Evolutionary Algorithm An evolutionary algorithm consists of the individuals, the population of such individuals, and the fitness of each of the individuals and of the genetic operators. We will explain in a very simple manner what each of these components consists of.
2.3.5.1.1 Individual As previously mentioned, an individual is the representation of a solution to the problem. Representations include bit strings, trees, graphs, or any data structure suited to the problem being solved.
2.3.5.1.2 Population A population is the set of individuals used by the algorithm to perform the evolutionary process. The initial population generally is generated randomly. Subsequently, it is modified iteratively by the operators with the purpose of improving the individuals.
2.3.5.1.3 Aptitude Function (Fitness) The aptitude function assigns each individual a real value that shows how well that individual is adapted to its environment. In many algorithms, especially those that optimize a single objective, this fitness equals the function being optimized. In multi-objective algorithms, however, it is a more elaborate function that takes into account other factors important in this type of problem, for example, how close individuals are to one another or how many individuals dominate a specific individual from another population.
2.3.5.1.4 Genetic Operators Genetic operators are applied to individuals of the population, modifying them to obtain a better solution. The concept of population improvement is very dependent on the problem. In a multi-objective problem, as previously mentioned, it is important to obtain a Pareto front as close to the real one as possible. The three most common operators used in an evolutionary algorithm are selection, recombination, and mutation.
2.3.5.1.4.1 Selection Operator — To obtain the next population, genetic operators are applied to the best or fittest individuals (the set of parents). The roulette and tournament methods are the two most common selection operators.

Roulette Method — Let Ω be the set of individuals that represent the feasible solutions to the problem and f : Ω → ℜ the fitness function. If x ∈ Ω, then

    p(x) = f(x) / Σ_{y ∈ Ω} f(y)

defines the probability of individual x becoming part of the set of parents.

Tournament Method — Let Ω be the set of individuals representing the feasible solutions to the problem and f : Ω → ℜ the fitness function. To select the elements that will make up the set of parents, one forms small subsets S ⊂ Ω (usually at random); from each subset, the selected parent is the element x ∈ S such that f(x) = max_{y ∈ S} f(y).
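The two selection operators can be sketched as follows (a minimal illustration; the function names and toy fitness are assumptions of this edit, not the book's):

```python
# Minimal sketches of roulette and tournament selection for a
# maximization fitness function f (assumed nonnegative for roulette).
import random

def roulette(population, fitness):
    total = sum(fitness(x) for x in population)
    r = random.uniform(0, total)           # spin the wheel
    acc = 0.0
    for x in population:
        acc += fitness(x)                  # each slice is proportional to fitness
        if acc >= r:
            return x
    return population[-1]

def tournament(population, fitness, k=3):
    subset = random.sample(population, k)  # small random subset
    return max(subset, key=fitness)        # the fittest element wins

pop = [0.2, 0.5, 0.9, 1.4, 1.9]
f = lambda x: 1 / (1 + (x - 1) ** 2)       # toy fitness, peaked at x = 1
print(roulette(pop, f), tournament(pop, f))
```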
2.3.5.1.4.2 Crossover Operator — Let Ω be the set of individuals representing the feasible solutions to the problem. A crossover operator is a function λ : Ω × Ω → ρ(Ω), where ρ(Ω) is the set of all subsets of Ω. The crossover function combines two elements of the current population to produce one or more offspring. Figure 2.14 illustrates an example of the crossover operator, where the selection of two chromosomes (parents) produces two offspring chromosomes. Operators may select more than two parents and may also produce more than two offspring.
2.3.5.1.4.3 Mutation Operator — Let Ω be the set of individuals representing the feasible solutions to the problem. A mutation operator is a function β : Ω → Ω that takes an individual and transforms it into another. Figure 2.15 illustrates an example of the mutation operator, where the selected chromosome is altered to produce a new chromosome. Mutation operators may alter more than one value in the chromosome.
Figure 2.14 Crossover function.

Figure 2.15 Mutation function.
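The operators of Figures 2.14 and 2.15 can be sketched for bit-string chromosomes (a hypothetical illustration; the names and parameters are assumptions of this edit):

```python
# One-point crossover and bit-flip mutation on bit-string chromosomes,
# mirroring Figures 2.14 and 2.15.
import random

def crossover(parent1, parent2):
    point = random.randrange(1, len(parent1))   # cut point inside the string
    child1 = parent1[:point] + parent2[point:]  # swap the tails
    child2 = parent2[:point] + parent1[point:]
    return child1, child2

def mutate(chromosome, pm=0.1):
    # Each gene flips independently with probability pm.
    return [1 - g if random.random() < pm else g for g in chromosome]

c1, c2 = crossover([0, 0, 0, 0], [1, 1, 1, 1])
print(c1, c2)   # e.g. [0, 0, 1, 1] and [1, 1, 0, 0]
```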
2.3.5.2 General Structure of an Evolutionary Algorithm

In this book we will show other metaheuristics that are also used to solve optimization problems, although greater emphasis will be placed on evolutionary algorithms. Figure 2.16 shows the general structure of an evolutionary algorithm for solving an optimization problem. Population P0 is initialized in line 2. These individuals are usually generated randomly. Normally, the end condition of these algorithms is the completion of a constant number of iterations. In certain cases, additional conditions are introduced to control convergence of the algorithm to the optimal value sought. In P' one places the individuals that the selection operator takes from Pt (line 4). In line 5 one selects from P' the individuals that will be the parents of the next generation, that is, those to which the crossover operator will be applied. In line 6 one assigns to P'' all the individuals produced by the crossover operator. To a subset of these individuals one applies the mutation operator (lines 7 and 8). In line 9 one determines the individuals of the next generation (Pt+1). The individuals that continue into the next generation are usually those produced by the crossover and mutation operators. However, certain algorithms retain high-quality individuals from previous generations. Evolutionary algorithms comply with the following metaheuristic characteristics:
t: iteration number
Pt: population in iteration t
pc: probability of performing crossover
pm: probability of performing mutation
λ: crossover operator
β: mutation operator
S: a subset of P' × P' obtained randomly
W: a subset of P'' obtained randomly
Selection_Operator(): selection operator

Begin:
1.   t = 0
2.   Initialize P0
3.   While (condition of completion == false)
4.       P' = Selection_Operator(Pt)
5.       Select a subset S ⊆ P' × P'
6.       P'' = { λ(x) | x ∈ S }
7.       Select a subset W ⊆ P''
8.       P''' = { β(x) | x ∈ W }
9.       Pt+1 = M ⊆ P'' ∪ P'''
10.      t = t + 1
11.  EndWhile
End.

Figure 2.16 Evolutionary algorithm.
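The loop of Figure 2.16 can be sketched for a single objective (a compact illustration under this edit's assumptions: real-valued individuals, tournament selection, arithmetic crossover, Gaussian mutation, and elitist replacement):

```python
# Self-contained sketch of Figure 2.16's loop, minimizing
# f(x) = x^4 + (x - 2)^4; all names and parameters are illustrative.
import random

def evolve(f, pop_size=30, generations=60, pm=0.1):
    pop = [random.uniform(-4, 4) for _ in range(pop_size)]      # initialize P0
    for _ in range(generations):
        parents = [min(random.sample(pop, 3), key=f)            # tournament selection
                   for _ in range(pop_size)]
        offspring = []
        for i in range(0, pop_size, 2):
            a, b = parents[i], parents[i + 1]
            w = random.random()                                 # arithmetic crossover
            offspring += [w * a + (1 - w) * b, w * b + (1 - w) * a]
        offspring = [x + random.gauss(0, 0.3) if random.random() < pm else x
                     for x in offspring]                        # mutation
        pop = sorted(pop + offspring, key=f)[:pop_size]         # elitist next generation
    return min(pop, key=f)

best = evolve(lambda x: x**4 + (x - 2) ** 4)
print(round(best, 1))   # should settle near the minimizer x = 1
```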
Convergence to optimum: In the multi-objective case, this usually takes place through the elitist or dominant functions. This way, one can eliminate the poor individuals in each generation, while only the good individuals remain. Because the process is iterative, one expects each generation to present better individuals, guaranteeing convergence toward optimal.
Optimal solutions notwithstanding convexity of the problem: This feature is accomplished by means of the combinatorial analysis performed by the crossover and mutation operators.

Avoiding local optima: This feature is accomplished through the values of the crossover and mutation probabilities. If the mutation probability is high, the algorithm may converge toward local optima. Normally, one recommends a value of 0.05, or 0.1 as a maximum. Another fact that helps accomplish this feature is that nonoptimal individuals may remain in the population, confirming what is illustrated in Figure 2.13: operators can produce a good individual from bad individuals.

Polynomial complexity of metaheuristics: Because the algorithm is probabilistic, its execution is nonexhaustive.

For more information about evolutionary algorithms, read [BAC00] and [DEB01].
2.3.6 Ant Colony

Optimization based on ant colonies (ACO) is a metaheuristic approach for solving combinatorial optimization problems. This metaheuristic is based on the foraging behavior of certain ant species. The ants cooperate among themselves to find food quickly. This cooperation is possible due to the indirect communication of the ants. Initially, scouting ants start searching for food sources. As they return to the colony, they leave markers, called pheromones, on the path leading to the food, which lead other ants onto the right track. The amount of pheromone tells other ants how good a path is. When other ants follow the path, they leave their own markers. As time goes by and more ants take the path, the pheromone increases and, therefore, there is a greater probability of the path being taken. In the absence of pheromones, ants scout for paths randomly. To explain this behavior, Dorigo [DOR04] designed an experiment that served as a starting point for combinatorial optimization algorithms based on ant colonies. This experiment established two paths, one long and one short, between the colony and the food source. During these experiments he observed that ants tend to take the shorter path.
2.3.6.1 General Structure of an Ant Colony Algorithm

Figure 2.17 shows the pseudocode of an algorithm based on ant colonies. Three procedures must be implemented: construct ant solutions, update pheromones, and daemon actions. Dorigo et al. [DOR97] and [DOR04] say that this metaheuristic does not specify whether these procedures must be executed fully in parallel and independently, or whether some sort of synchronization is needed.

Procedure ACOMetaheuristic
1.   ScheduleActivities
2.       ConstructAntsSolutions
3.       UpdatePheromones
4.       DaemonActions
5.   End-ScheduleActivities
end-Procedure

Figure 2.17 Ant colony algorithm.
2.3.6.1.1 Ant-Based Solution Construction This procedure is responsible for managing the ants that concurrently and asynchronously visit adjacent states of the problem considered, moving through the neighbors of the construction graph. The transition from one state to the next is made using the pheromone trail and the heuristic information on how to recalculate the value of the pheromones. In this fashion, the ants construct the solutions that will later be evaluated against the objective function, so that the pheromone update can decide what amount of pheromone to add.
2.3.6.1.2 Pheromone Trail Update This is the procedure by which pheromone trails are modified. The pheromone trail in the link, in the case of application to computer networks, can increase by pheromone addition from passing ants or decrease due to pheromone evaporation. From a practical point of view, an increase in the amount of pheromone increases the possibility of an ant using a component or link to build the solution. On the other hand, pheromone evaporation is used to avoid too rapid convergence toward suboptimal regions in the space of solutions, and would therefore favor the exploration of new areas in the search space.
2.3.6.1.3 Daemon Actions This procedure is used to implement centralized actions that cannot be performed by ants. For example, local searches and processes that take
42 䡲 Multi-Objective Optimization in Computer Networks
the best solutions and deposit additional pheromone on the links or components. For more information about ant colonies, read [DOR04].
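The pheromone trail update described above can be sketched as follows (a hypothetical rule with this edit's names; q and rho are assumed parameters):

```python
# Hypothetical sketch of the pheromone-update rule: evaporation on every
# edge, then a deposit proportional to each ant's solution quality.
def update_pheromones(tau, ant_paths, path_costs, rho=0.1, q=1.0):
    # tau: dict mapping edge -> pheromone level
    for edge in tau:                        # evaporation on all edges
        tau[edge] *= (1 - rho)
    for path, cost in zip(ant_paths, path_costs):
        deposit = q / cost                  # shorter (cheaper) paths deposit more
        for edge in path:
            tau[edge] += deposit
    return tau

tau = {("a", "b"): 1.0, ("b", "c"): 1.0, ("a", "c"): 1.0}
tau = update_pheromones(tau, [[("a", "c")], [("a", "b"), ("b", "c")]], [1.0, 2.0])
print(tau[("a", "c")], tau[("a", "b")])   # the direct edge ends up with more pheromone
```

Evaporation keeps old, unreinforced trails from dominating, which is what allows the search to keep exploring new areas.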
2.3.7 Memetic Algorithm

A memetic algorithm is a combination of an evolutionary algorithm and a local search. This combination is used to build algorithms that mainly exhibit the following two characteristics: flexibility to incorporate problem-specific heuristics and specialized operators, and efficiency in producing acceptable solutions in less time. Some authors see memetic algorithms as a simulation of cultural evolution. This simulation is conceived as the transmission, imitation, modification, and selection of the ideas, trends, methods, and all information that defines a culture. The minimum unit of cultural transmission is called a meme, the equivalent of a gene in biological evolution. In the context of cultural evolution, memetic algorithms are defined as those that make an analogy with cultural evolution. Because the general scheme is basically the same, memetic algorithms can be conceived generically both as a hybrid of evolutionary algorithms and local search, and as a simulation of cultural evolution.
2.3.7.1 Local Searches

Before introducing the general structure of a memetic algorithm, one must explain local searches. A neighborhood function can be defined as follows. Let (X, f) (where X is the set of feasible solutions and f is the function to be optimized) be an instance of a combinatorial optimization problem. A neighborhood function is a mapping ξ : X → ρ(X) (where ρ(X) is the set of all subsets of X) that defines for every solution i ∈ X a set ξ(i) ⊆ X of solutions that are, in a certain sense, close to i. The set ξ(i) is the neighborhood of i, and every j ∈ ξ(i) is a neighbor of i. One assumes that i ∈ ξ(i) for every i ∈ X. Once the neighborhood function is defined, a local search is a method that, starting from a solution, repeatedly tries to find better solutions by searching among the neighbors (obtained by the neighborhood function) of the current solution.
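Under the neighborhood definition above, a local search can be sketched as follows (a minimal illustration; the grid neighborhood and step size are assumptions of this edit):

```python
# Minimal local-search sketch: the neighbors xi(x) of a point x are the
# two points one step to either side on a grid; stop at a local optimum.
def local_search(f, x0, step=0.01, max_iters=10_000):
    x = x0
    for _ in range(max_iters):
        neighbors = [x - step, x + step]    # xi(x), the neighborhood of x
        best = min(neighbors, key=f)
        if f(best) >= f(x):                 # no better neighbor: local optimum
            return x
        x = best
    return x

f = lambda x: (x - 1.5) ** 2
print(round(local_search(f, x0=0.0), 2))    # converges to the minimizer near 1.5
```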
t: iteration number
Pt: population in iteration t
pc: probability of performing crossover
pm: probability of performing mutation
λ: crossover operator
β: mutation operator
δ: local search
Selection_Operator(): selection operator

Begin:
1.   t = 0
2.   Initialize P0
3.   While (condition of completion == false)
4.       P' = Selection_Operator(Pt)
5.       Select a subset S ⊆ P' × P'
6.       P'' = { λ(x) | x ∈ S }
7.       L = { δ(x) | x ∈ P'' }
8.       Select a subset W ⊆ P''
9.       P''' = { β(x) | x ∈ W }
10.      L' = { δ(x) | x ∈ P''' }
11.      Pt+1 = M ⊆ L ∪ P'' ∪ L'
12.      t = t + 1
13.  EndWhile
End.

Figure 2.18 Memetic algorithm.
2.3.7.2 General Structure of a Memetic Algorithm

Figure 2.18 shows the general structure of a memetic algorithm. The operators used in memetic algorithms are the same as those used in evolutionary algorithms. Memetic algorithms have the same characteristics as evolutionary algorithms because they operate similarly, except for the local search added to find better solutions in less time.
For more information about memetic algorithms, read [MER00] and [MOS03].
2.3.8 Tabu Search

Tabu search (TS) is a metaheuristic procedure used to guide a local search heuristic algorithm in exploring the space of solutions beyond the simple local optimum. It has been applied to a wide range of practical optimization applications, producing accelerated growth in Tabu search algorithms in recent years. Using TS, or TS hybrids with other heuristics or algorithmic procedures, one can set new records in finding better solutions to problems in programming, sequencing, resource assignment, investment planning, telecommunications, etc. Tabu search is based on the premise that, for the solution process of a problem to qualify as intelligent, it must incorporate adaptive memory and responsive (sensible) exploration. Adaptive memory is the mechanism by which the Tabu search technique ensures that solutions already found in previous steps are not repeated. This mechanism stores solutions, or the steps taken to find such solutions, in a temporary memory. The use of adaptive memory contrasts with "forgetful" designs, like those inspired by metaphors from physics or biology; with "rigid memory" designs, such as those exemplified by branch and bound; and with its "cousins" in artificial intelligence (AI), which normally do not manage this type of memory scheme. The importance of responsive exploration in Tabu search, whether in a deterministic or probabilistic implementation, is based on the assumption that a poor strategic selection may produce more information than a good random selection. In a memory-using system, a poor selection based on a strategy may provide useful clues on how to make beneficial modifications to the strategy. The basis for Tabu search can be described as follows. Given a function f(x) to be optimized over a set X, TS starts like any local search, going iteratively from one point (solution) to another until a given stopping criterion is met.
Each x ∈ X has an associated neighborhood (or vicinity) N(x) ⊆ X, and every solution x' ∈ N(x) can be reached from x by means of an operation called a movement. TS goes beyond local search by employing a strategy of modifying N(x) as the local search continues, replacing it with a neighborhood N*(x). As noted in the previous discussion, a key aspect of Tabu search is the use of memory structures that help determine N*(x), and thus organize the way the space is explored.
The solutions admitted to N*(x) by these memory structures are determined in several ways. Specifically, the one that gives Tabu search its name identifies solutions found over a specified horizon (and, implicitly, certain solutions identified with them) and excludes them from N*(x), classifying them as Tabu. In principle, the Tabu terminology implies a type of inhibition with a cultural connotation (i.e., something subject to the influence of history and context) that can be overcome under appropriate conditions. The process by which solutions acquire Tabu status has several facets designed to promote an aggressive examination guided by new points. One useful way to visualize and implement this process is to replace the original evaluation of solutions with Tabu evaluations, which introduce penalties to significantly discourage the selection of Tabu solutions (i.e., those that will preferably be excluded from the N*(x) neighborhood, according to their dependence on the elements that make up the Tabu status). Further, Tabu evaluations also periodically include incentives to stimulate the selection of other types of solutions, as a result of aspiration levels and long-term influence. TS profits from memory (and hence from the learning process) to perform these functions. The memory used in TS may be explicit or attribute-based, although the two modes are not mutually exclusive. Explicit memory conserves complete solutions and typically consists of an elite list of solutions visited during the search (or of highly attractive but unexplored neighbors of such solutions). These special solutions are strategically introduced to expand N*(x), and thus present useful options not found in N(x). Memory in TS is also designed to introduce a more subtle effect into the search by means of attribute-based memory, which saves information about solution attributes that change when moving from one solution to another.
For example, in a graph or network context, attributes may consist of nodes or arcs that are added, deleted, or replaced by the movements executed. In more complex problem formulations, attributes may represent function values. Sometimes attributes can also be strategically combined to create other attributes, through hashing procedures, AI-related segmentations, or vocabulary construction methods.
2.3.8.1 General Structure of a Tabu Search Figure 2.19 is the general structure of an algorithm through Tabu search. This algorithm randomly finds an initial solution. Subsequently, and although the exit condition has not been met, one finds new solutions based on the existing one. These new solutions must be neighbors of the current solution and also cannot be in the Tabu list. The Tabu list is used
46 䡲 Multi-Objective Optimization in Computer Networks
F represents feasible solutions x’ is the best solution found so far c: iteration counter T set of “tabu” movements N(x) is the neighborhood function
1. Select x
F
2. x’ = x 3. c = 0 4. T = 5. If {N(x)
T} = , goto step 2
6. Otherwise, c Select nc
c+1
{N(x)
T} such that: nc(x) = opt(n(x) : n
{N(x)
T)}
opt() is an evaluation function defined by the user 7. x
nc(x)
If f(x) < f(x’) then x’
x
8. Check stopping conditions: Maximum number of iterations has been reached N(x)
T=
after reaching this
step directly from step 2. 9. If stopping conditions are not met, update T and return to step 2
Figure 2.19 Tabu search algorithm.
to ensure that solutions that have been visited are not visited again. For more information about Tabu search, read [GLO97].
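The loop of Figure 2.19 can be sketched in a few lines of Python. This is a minimal illustration only: the objective, the neighborhood, and the fixed-length tabu list standing in for the set T are our own choices, not taken from the text.

```python
from collections import deque

def tabu_search(f, x0, neighbors, max_iters=100, tabu_size=10):
    """Minimal tabu search: keep a short-term memory (tabu list) of
    visited solutions and always move to the best non-tabu neighbor."""
    x = x0
    best = x0
    tabu = deque([x0], maxlen=tabu_size)   # T: tabu solutions
    for _ in range(max_iters):
        candidates = [n for n in neighbors(x) if n not in tabu]
        if not candidates:                 # {N(x) - T} is empty: stop
            break
        x = min(candidates, key=f)         # opt(): user-defined evaluation
        tabu.append(x)                     # update T
        if f(x) < f(best):                 # keep the best solution found
            best = x
    return best

# Illustrative example: minimize f(x) = (x - 7)^2 over the integers.
f = lambda x: (x - 7) ** 2
result = tabu_search(f, x0=0, neighbors=lambda x: [x - 1, x + 1])
```

Note how the tabu list lets the search walk past already-visited points instead of cycling between them, which is exactly the role of T in the figure.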
2.3.9 Simulated Annealing

Simulated annealing uses concepts originally described in statistical mechanics. According to Aarts [AAR1989], annealing is a thermal process
Multi-Objective Optimization Concepts 䡲 47
in which, through a thermal bath, a low-energy state is produced in a solid. This physical annealing process first softens the solid by heating it to a high temperature and then cools it down slowly until the particles rearrange themselves into a solid configuration. At each temperature reached in the annealing process, the solid can reach thermal balance if the heating occurs slowly. If cooling is done correctly, the solid reaches its fundamental state, in which the particles form a perfect lattice and the system is at its lowest energy level. However, if cooling takes place too rapidly, the solid may reach a metastable state, in which there are defects in the form of high-energy structures. The evolution of a solid in the thermal bath can be simulated using the Metropolis algorithm, based on Monte Carlo techniques. In this algorithm, thermal balance at a given temperature, described by the Boltzmann distribution, is obtained by generating a large number of transitions. Very briefly, the algorithm moves from one state to the next according to the following rules: if the energy of the new state is lower than that of the current state, the new state is accepted as the current state; otherwise, the new state is accepted with a certain probability (based on the Boltzmann distribution). This acceptance probability is a function of the temperature and the difference between the two energy levels. The lower the temperature, the lower the probability of a transition to a higher-energy state, and the greater the energy of the new state, the lower the probability of it being accepted. Therefore, every state can be reached, but with a different probability, depending on the temperature. Simulated annealing can be seen as an iterative Metropolis process executed with decreasing values of the control parameter (temperature).
Conceptually, it is a neighborhood search method in which the selection criterion follows the transition rules of the Metropolis algorithm: the algorithm randomly selects a candidate from the neighborhood of the current solution. If the candidate is better in terms of the evaluation criterion, it is accepted as the current solution; otherwise, it is accepted with a probability that decreases as the difference in cost between the candidate solution and the current solution increases. When a candidate is rejected, the algorithm randomly selects another candidate and the process is repeated. Randomization in the selection of the next solution is designed to reduce the probability of being trapped in a local optimum. It has been proved, as will be seen later, that simulated annealing is capable of finding the optimal solution asymptotically with probability one. Although guaranteed, optimality will only be reached after an infinite number of steps in the worst case. Therefore, the asymptotic convergence of the algorithm can only be approximated in practice, which, fortunately,
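The Metropolis acceptance rule just described is easy to express numerically. The sketch below (our own illustration, not from the text) shows the Boltzmann factor shrinking as the temperature drops for the same uphill move:

```python
import math

def acceptance_probability(delta_e, temperature):
    """Metropolis rule: always accept improvements; accept a worse
    state with probability given by the Boltzmann factor."""
    if delta_e < 0:
        return 1.0
    return math.exp(-delta_e / temperature)

# The same uphill move (delta_e = 1.0) at decreasing temperatures:
probs = [acceptance_probability(1.0, t) for t in (10.0, 1.0, 0.1)]
# probs decreases monotonically as the temperature falls
```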
is done in polynomial time. Good performance of the metaheuristic depends largely on the design of the neighborhood structure, the cooling schedule, and the data structures. In the early 1980s, working on the design of electronic circuits, Kirkpatrick et al. [KIR83] (and, independently, Cerny [CER85], working on the traveling salesman problem) considered applying the Metropolis algorithm to some of the combinatorial minimization problems that appear in these types of designs. They felt it was possible to establish an analogy between the parameters of the thermodynamic simulation and local optimization methods. Thus, they related the following:

Thermodynamics ⇔ Optimization
Configuration ⇔ Feasible solution
Fundamental configuration ⇔ Optimum solution
Energy of configuration ⇔ Cost of solution
Temperature ⇔ ?
The physical concept of temperature has no direct meaning in the field of optimization; rather, it has to be considered a parameter, T, that must be adjusted. In a similar way, one can compare the processes that take place when the molecules of a substance move among the different energy levels in search of balance at a given temperature with those that occur in minimization (or maximization) processes in local optimization. In the first case, once the temperature is fixed, the distribution of the particles among the different levels follows the Boltzmann distribution; therefore, when a molecule moves, the movement is accepted in the simulation if the energy decreases, and otherwise is accepted with a probability proportional to the Boltzmann factor. In optimization, once the parameter T is set, we produce an alteration of the current solution and directly accept the new solution when its cost decreases. If the cost does not decrease, the new solution is accepted with a probability proportional to the Boltzmann factor. This is the key to simulated annealing, as it is basically a local search strategy in which the selection of the new element in the neighborhood N(s) is done randomly. As seen previously, the drawback of a pure local search is that if it falls into a local optimum during the search, it is unable to leave it. To avoid this, simulated annealing allows, with a certain probability (smaller as we approach the optimal solution), moves to worse solutions. Indeed, analyzing the behavior of the Boltzmann factor as a function of temperature, we see that as temperature decreases, the
probability of accepting a solution that is worse than the previous one also decreases rapidly. The strategy followed in simulated annealing starts with a high temperature (allowing changes to worse solutions during the first steps, when we are still far from the global optimum) and subsequently decreases the temperature (reducing the possibility of changes to worse solutions once we are closer to the optimum sought). This procedure gives the algorithm its name: annealing is a metallurgical process (used, for example, to eliminate internal stress in cold-laminated steel) in which the material is heated and then slowly cooled for hours in a controlled way.
2.3.9.1 General Structure of Simulated Annealing

Figure 2.20 shows the general structure of the simulated annealing algorithm. The input parameters for this algorithm are the initial temperature, T0, the cooling velocity, α, and the final temperature, Tf. This cooling velocity is associated with

Input (T0, α, Tf)
1.  T ← T0
2.  Sact ← Initial_Solution
3.  While T ≥ Tf do
4.  Begin
5.    For cont ← 1 to L(T) do
6.    Begin
7.      Scand ← Select_Solution_N(Sact)
8.      δ ← cost(Scand) – cost(Sact)
9.      If (U(0,1) < e^(–δ/T)) or (δ < 0) Then
10.       Sact ← Scand
11.    End
12.    T ← α(T)
13.  End
14.  Write the best solution of Sact visited.

Figure 2.20 Simulated annealing algorithm.
the way the value Ti+1 is updated from Ti when the temperature decreases after L(T) iterations. An initial solution belonging to the solution space Ω is generated and, until the end of the process is reached, for every temperature T a number L(T) gives the number of iterations (before the temperature is reduced) during which candidate solutions are generated. These new solutions must lie in the neighborhood N(Sact) of the current one; a candidate replaces the current solution if it costs less, or otherwise with probability e^(–δ/T). To calculate this probability, a uniform random number in [0, 1), represented as U(0, 1), is generated. Finally, the solution offered is the best of all the Sact visited. For more information about simulated annealing, read [VAN89].
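The structure of Figure 2.20 can be rendered as a short Python sketch. The geometric cooling schedule α(T) = 0.95 T, the parameter values, and the example objective are illustrative assumptions, not values given in the text:

```python
import math
import random

def simulated_annealing(cost, s0, neighbor, t0=100.0, tf=1e-3,
                        alpha=0.95, iters_per_temp=50):
    """Minimal simulated annealing following the shape of Figure 2.20."""
    s_act = s0
    best = s0
    t = t0
    while t > tf:
        for _ in range(iters_per_temp):          # L(T) iterations
            s_cand = neighbor(s_act)
            delta = cost(s_cand) - cost(s_act)
            # Accept improvements, or worse moves with prob. e^(-delta/T)
            if delta < 0 or random.random() < math.exp(-delta / t):
                s_act = s_cand
                if cost(s_act) < cost(best):
                    best = s_act
        t = alpha * t                            # geometric cooling: one choice of alpha(T)
    return best

# Illustrative example: minimize (x - 3)^2 over the reals.
random.seed(1)
sol = simulated_annealing(lambda x: (x - 3) ** 2, s0=0.0,
                          neighbor=lambda x: x + random.uniform(-1, 1))
```

At high T almost any move is accepted; as T shrinks the loop behaves more and more like pure hill climbing, which is the behavior the text describes.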
2.4 Multi-Objective Solution Applying Metaheuristics

As we have seen in previous sections, the use of traditional techniques allows us to obtain optimal Pareto front points, yet some of these techniques have certain constraints. Bear in mind that computer network problems are usually NP problems; hence, to obtain good solutions one resorts to metaheuristics. Also bear in mind that the metaheuristics explained previously are aimed at solving single-objective problems, and therefore must be adapted to solve multi-objective problems. In the case of evolutionary algorithms, the Multi-Objective Evolutionary Algorithm (MOEA) is such an adaptation. Similarly, other metaheuristic techniques can be adapted to address multi-objective problems. In this section we will discuss the MOEA technique. There are different MOEA algorithms, which can be divided into elitist (SPEA, PAES, NSGA II, DPGA, MOMGA, etc.) and nonelitist (VEGA, VOES, MOGA, NSGA, NPGA, etc.). They can also be classified based on the generation in which they were developed. The first generation includes those that do not work with Pareto dominance (VEGA, among others) and those that do work with the Pareto dominance concept (MOGA, NSGA, NPGA, and NPGA 2, among others). The second generation includes PAES, PESA, PESA II, SPEA, SPEA 2, NSGA II, MOMGA, and MOMGA II, among others. Because the purpose of this book is to show how metaheuristics solve computer network problems, we will not provide an analysis of the different evolutionary algorithms; one can consult some of these algorithms in the books [DEB01] and [COE02]. We will concentrate on the application of the SPEA algorithm to solve multi-objective problems through MOEA.
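All of the MOEA variants just listed rely on the Pareto dominance relation. For two objectives to be minimized, a minimal dominance test (our own sketch, not from the text) might look as follows:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

# With f1 and f2 both minimized: (1, 4) dominates (2, 5), while
# (1, 4) and (3, 2) are mutually nondominated.
```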
SPEA is a combination of new and previously used techniques for finding multiple optimal Pareto solutions in parallel. It shares the following characteristics with previously proposed algorithms: It saves the nondominated solutions found so far in an external population, thus implementing elitism. It uses the concept of Pareto dominance to assign a fitness value to individuals. It performs clustering [MOR80] to reduce the number of stored nondominated solutions without destroying the characteristics of the Pareto front outlined so far. On the other hand, SPEA exclusively uses the following concepts: It combines the three above-mentioned characteristics in a single algorithm (so far they had always been proposed separately). It determines the fitness of an individual based solely on the solutions stored in the external population of nondominated individuals; whether members of the general population dominate one another is irrelevant, since only the external set of nondominated solutions participates in the selection. It proposes a new method for inducing the formation of niches within the population, thus preserving genetic diversity; this method is based on the Pareto concept and does not require prior knowledge or determination of any distance parameter (such as the niche radius in the sharing function (SF) method). Figure 2.21 shows the pseudocode of SPEA.
2.4.1 Procedure to Assign Fitness to Individuals

One of SPEA's characteristics is the way it assigns fitness to each individual. The process is carried out so that the dominant and most characteristic individuals have better fitness values. At the same time, the process is designed to induce the maintenance of genetic diversity and the obtaining of solutions distributed along the complete Pareto front.

1. Begin
2. Randomly generate the initial population P0 with size N
3. Initialize the set PE as an empty set
4. Initialize the generation counter t to 0
5. While t < gmax
6.   Evaluate the objectives on the members of Pt and PEt
7.   Calculate the fitness of each of the individuals in Pt and PEt
8.   Perform the environmental selection to form the new external population PEt+1
9.   Apply the selection operator by binary tournament with replacement on PEt+1
10.  Apply the crossover and mutation operators on the selected population
11.  Assign the new population to Pt+1
12.  t ← t + 1
13. End While
14. End
Figure 2.21 SPEA algorithm.
The fitness assignment process takes place in two stages: first, a fitness value is assigned to the individuals in the set P' (the external, nondominated set); then the members of P (the set of general solutions) are evaluated. To assign fitness to members of the dominant set, each solution i ∈ P' is assigned a real value s_i ∈ [0, 1), called strength, which is proportional to the number of individuals j ∈ P for which i ≻ j (that is, it is a measure of the individuals dominated by i). Let n be the number of individuals in P that are dominated by i, and let N be the size of P. Then s_i is defined as

s_i = n / (N + 1)

The fitness of i is the inverse of its strength:

F(i) = 1 / s_i

The strength of an individual j ∈ P is calculated as the sum of the strengths of all external, nondominated solutions i ∈ P' that dominate j, plus 1; adding 1 to that total ensures that members of P' have better fitness than the members of P. Hence, the fitness of individual j is stated as

f_j = 1 / s_j, with s_j = 1 + Σ_{i ∈ P', i ≻ j} s_i, where s_j ∈ [1, N).
The idea that underlies this mechanism is to always prefer individuals that are closer to the optimal Pareto front and at the same time distribute them through the whole feasible surface.
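Under the definitions above, the strength and fitness assignment can be sketched directly. This is our illustration, not the authors' code; it assumes two-objective minimization and follows the book's convention that fitness is the inverse of strength:

```python
def dominates(a, b):
    """a Pareto-dominates b (minimization)."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def spea_fitness(external, population):
    """SPEA fitness: strength s_i = n/(N+1) for external members,
    F(i) = 1/s_i; a population member j gets s_j = 1 + sum of the
    strengths of the external solutions dominating it, f_j = 1/s_j."""
    n_pop = len(population)
    strength = {i: sum(dominates(ext, p) for p in population) / (n_pop + 1)
                for i, ext in enumerate(external)}
    f_external = {i: 1.0 / s if s > 0 else float('inf')  # dominates nobody
                  for i, s in strength.items()}
    f_population = []
    for p in population:
        s_j = 1.0 + sum(strength[i] for i, ext in enumerate(external)
                        if dominates(ext, p))
        f_population.append(1.0 / s_j)
    return f_external, f_population

# Example: one external solution, dominating one of two population members.
f_ext, f_pop = spea_fitness([(1, 1)], [(2, 2), (0, 3)])
```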
2.4.2 Reducing the Nondominated Set Using Clustering

In certain problems, the optimal Pareto front can be very large and may even contain an infinite number of solutions. However, from the standpoint of the network administrator, once a reasonable limit is reached, no advantage is gained by keeping all the nondominated solutions found up to that point. Furthermore, the size of the external population of nondominated solutions affects the fitness assignment procedure used by the algorithm: when the external dominant set exceeds the established limit, the algorithm no longer finds good solutions and has a greater probability of stagnating in a local minimum. For these reasons, purging the external dominant set may be not only advisable, but mandatory.
One method that has been applied successfully to this problem, and studied extensively in the same context, is cluster analysis [MOR80]. Overall, cluster analysis partitions a collection of p elements into q groups of homogeneous elements, where q < p; a characteristic (centroid) element is then selected for each cluster. The average link method [MOR80] has been shown to behave adequately for this problem and is the one chosen in [ZIT99] for SPEA. For this reason, it is also the method used in the implementations subsequently described to control the size of the external population of SPEA and prevent losing good and significant solutions already found. The clustering procedure is expressed in the pseudocode of Figure 2.22. Next we will show how SPEA can be applied to solve a simple problem with the same two objective functions used with the traditional methods. As in the previous cases, we will minimize both functions but, in this case, in a real multi-objective context.

Min F(X) = [f1(x) = x^4, f2(x) = (x – 2)^4]
Subject to –4 ≤ x ≤ 4
2.4.2.1 Representation of the Chromosome

In this case, one chromosome stands exactly for a value of x in the mathematical model. For this reason, it is represented by a real value. For example,

Chromosome → 2.5

As selection operator we have chosen the roulette method, by means of which we obtain the chromosomes on which to execute the crossover and mutation operators.
2.4.2.2 Crossover Operator

By means of the roulette method we select, for example, the following chromosomes:

Chromosome 1 → 2.5
Chromosome 2 → 1.3
C: union of all clusters
i: one chromosome of P'
P': nondominated population
P: dominated population

1. Clustering Procedure()
2. Begin
3. Initialize the set of clusters C; each individual i ∈ P' constitutes a different cluster, hence C = ∪_i {i}.
4. While |C| > N'
5.   Calculate the distance d of all possible pairs of clusters, where the distance between two clusters c1, c2 ∈ C is the average distance between pairs of individuals of these two clusters:
     d = (1 / (|c1| · |c2|)) Σ_{i1 ∈ c1, i2 ∈ c2} dist(i1, i2)
     where the metric dist reflects the distance between two individuals i1 and i2.
6.   Determine the two clusters c1 and c2 with minimum distance d and join them to create a single one.
7. End While
8. Compute the reduced nondominated set, selecting an individual that is characteristic of each cluster. The centroid (the point with minimum average distance with respect to all other cluster points) is the characteristic solution.
9. End

Figure 2.22 Clustering procedure.
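The average link procedure of Figure 2.22 can be sketched as follows. The Euclidean metric and the function shape are our own choices for illustration; the centroid selection in step 8 follows the text:

```python
import math

def avg_link_reduce(points, n_max):
    """Reduce a nondominated set to n_max representatives by
    average-link hierarchical clustering."""
    dist = math.dist                             # Euclidean metric
    clusters = [[p] for p in points]             # each point is a cluster
    while len(clusters) > n_max:
        best = None
        # Find the pair of clusters with minimum average distance.
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = (sum(dist(a, b) for a in clusters[i] for b in clusters[j])
                     / (len(clusters[i]) * len(clusters[j])))
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i].extend(clusters.pop(j))      # merge the two clusters
    # Keep the centroid-most member of each cluster as representative.
    return [min(c, key=lambda p: sum(dist(p, q) for q in c))
            for c in clusters]

# Example: two tight pairs of points collapse to one representative each.
reps = avg_link_reduce([(0, 0), (0.1, 0), (5, 5), (5.1, 5)], n_max=2)
```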
The crossover function takes the integer part of chromosome 1 (2) and the decimal part of chromosome 2 (.3), combining these two values to create the first offspring:

Offspring 1 → 2.3

The second offspring is created by taking the integer part of chromosome 2 (1) and the decimal part of chromosome 1 (.5):

Offspring 2 → 1.5
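This integer-part/decimal-part crossover, together with the δ mutation described in the next subsection, can be sketched in Python (function names are ours; floating-point rounding makes the results approximate):

```python
import math
import random

def crossover(chrom1, chrom2):
    """Swap the integer and decimal parts of two real-valued chromosomes."""
    int1, dec1 = math.floor(chrom1), chrom1 - math.floor(chrom1)
    int2, dec2 = math.floor(chrom2), chrom2 - math.floor(chrom2)
    return int1 + dec2, int2 + dec1      # offspring 1, offspring 2

def mutate(chrom, delta=0.01):
    """Randomly add or subtract delta."""
    return chrom + delta if random.random() < 0.5 else chrom - delta

# As in the worked example: approximately (2.3, 1.5).
offspring1, offspring2 = crossover(2.5, 1.3)
```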
2.4.2.3 Mutation Operator

In this case, we select one chromosome. For example,

Chromosome → 2.5

The mutation function adds or subtracts a value δ. If a chromosome is selected for mutation, the function randomly determines whether the value δ is added or subtracted. Suppose that in this case δ = 0.01 and the random choice is to add. The result will be

Chromosome → (2.5 + 0.01) = 2.51

Figure 2.23 shows the optimal points obtained by means of the SPEA algorithm for the stated functions and compares them to the traditional methods previously analyzed for this same example. In this case, the following values have been used for SPEA:

Probability of crossover → 0.7
Probability of mutation → 0.1
Maximum population size → 50
Maximum number of generations → 20

In this case, with a single execution we have obtained multiple values (represented by filled-in circles) of the optimal Pareto front.
Figure 2.23 Optimal values of SPEA versus traditional methods. [Plot of f1 versus f2 comparing the optimal Pareto front with the points obtained by the weighted sum, ε-constraint, weighted metric, Benson, and SPEA methods.]
Chapter 3
Computer Network Modeling

This chapter briefly introduces computer networks. Readers will learn basic concepts and fundamentals that will allow them to understand the application of metaheuristic techniques to multi-objective problems in computer networks.
3.1 Computer Networks: Introduction

A computer network can be defined as a set of computers that interact among themselves, sharing resources or information. Reference models have been developed to establish the functions of the various layers of a network for proper performance. The following section introduces two reference models: the Open Systems Interconnection (OSI) model established by the International Organization for Standardization (ISO) and the Transmission Control Protocol/Internet Protocol (TCP/IP) model.
3.1.1 Reference Models

Designing, implementing, and manufacturing computer networks and related devices are very complex activities. Therefore, in order for this technology to be successful and massively used, the manufacturers' community saw the need to follow a series of standards and common models, allowing interoperability among the products of different brand names.
3.1.1.1 OSI Reference Model

The OSI reference model was developed by the ISO for international standardization of the protocols used at the various layers. The model uses well-defined descriptive layers that specify what happens at each stage of data processing during transmission. It is important to note that this model is not a network architecture, because it does not specify the exact services and protocols to be used in each layer. The OSI model has seven layers, each described next.
3.1.1.1.1 Physical Layer

The physical layer is responsible for transmitting bits over a physical medium. It provides services to the data-link layer, receiving the blocks the latter generates when emitting messages, and delivering the bit streams when receiving information. At this layer it is important to define the type of physical medium to be used, the type of interface between the medium and the device, and the signaling scheme.
3.1.1.1.2 Data-Link Layer

The data-link layer is responsible for transferring data between the ends of a physical link. It must also detect errors, create blocks made up of bits, called frames, and control data flow to reduce congestion. Additionally, the data-link layer must correct problems resulting from damaged, lost, or duplicate frames. The main function of this layer is switching.
3.1.1.1.3 Network Layer

The network layer provides the means for connecting and delivering data from one end to another. It also controls interoperability problems between intermediate networks. The main function performed at this layer is routing.
3.1.1.1.4 Transport Layer

The transport layer receives data from the upper layers, divides it into smaller units if necessary, transfers it to the network layer, and ensures that all the information arrives correctly at the other end. Connection
Computer Network Modeling 䡲 59
between two applications located in different machines takes place at this layer, for example, client–server connections through logical application ports.
3.1.1.1.5 Session Layer

This layer provides services when two users establish connections. Such services include dialog control, token management (preventing two sessions from trying to perform the same operation simultaneously), and synchronization.
3.1.1.1.6 Presentation Layer

The presentation layer takes care of the syntax and semantics of transmitted data. It encodes and compresses messages for electronic transmission. For example, it can mediate between a device that works with ASCII coding and one that works with binary-coded decimal (BCD), even though in both cases the information transmitted is identical.
3.1.1.1.7 Application Layer

The protocols of applications commonly used in computer networks are defined in the application layer. Applications found in this layer include Internet surfing (Hypertext Transfer Protocol (HTTP)), file transfer (File Transfer Protocol (FTP)), voice over networks (Voice over IP (VoIP)), videoconferencing, etc.
3.1.1.2 TCP/IP Reference Model

The set of TCP/IP protocols allows communication between different machines that run completely different operating systems. This model was developed mainly to solve interoperability problems between heterogeneous networks, so that hosts need not know the characteristics of intermediate networks. The following is a description of the four layers of the TCP/IP model.
3.1.1.2.1 Network Interface Layer

The network interface layer connects the equipment to the local network hardware, connects with the physical medium, and uses a specific protocol to access the medium. This layer performs all the functions of the first two layers of the OSI model.
3.1.1.2.2 Internet Layer

The Internet layer provides a connectionless datagram service. Based on a metric, it establishes the routes that the data will take. This layer uses IP addresses to locate equipment on the network. It depends on the routers, which forward packets over a specific interface depending on the IP address of the destination equipment. It is equivalent to layer 3 of the OSI model.
3.1.1.2.3 Transport Layer

The transport layer is based on two protocols: the User Datagram Protocol (UDP) and the Transmission Control Protocol (TCP). UDP is a protocol that is not connection oriented: it provides unreliable datagram service (there is no end-to-end detection or correction of errors), it does not retransmit any data that has not been received, and it requires little overhead. This protocol is used, for example, for real-time audio and video transmission, where retransmissions are not possible due to the strict delay requirements in these cases. TCP is a connection-oriented protocol that provides reliable data transmission, guarantees exact and orderly data transfer, retransmits data that has not been received, and provides guarantees against data duplication. TCP performs the task of layer 4 of the OSI model. All applications working over this protocol require a value called a port, which identifies an application in an entity and through which the connection is made with another application in another entity. TCP supports many of the most popular Internet applications, including HTTP, Simple Mail Transfer Protocol (SMTP), FTP, and Secure Shell (SSH).
3.1.1.2.4 Application Layer

The application layer is similar to the OSI application layer, serving as a communication interface and providing specific application services. There are many protocols in this layer, among them FTP, HTTP, Internet Message Access Protocol (IMAP), Internet Relay Chat (IRC), Network File System (NFS), Network News Transport Protocol (NNTP), Network Time Protocol (NTP), Post Office Protocol version 3 (POP3), SMTP, Simple Network Management Protocol (SNMP), SSH, Telnet, etc.
3.1.2 Classification of Computer Networks Based on Size

Computer networks may be classified in several ways according to the context of the study being conducted. The following is a classification by
Figure 3.1 PAN design.
size. Later we will specify to which of the networks we will apply the concepts of this book.
3.1.2.1 Personal Area Networks (PANs)

Personal area networks are small home computer networks. They generally connect the home computers to share other devices, such as printers, stereo equipment, etc. Technologies such as Bluetooth are included in PANs. A typical example of a PAN is a connection to the Internet through a cellular network: the PC is connected via Bluetooth to the cell phone, and through this cell phone we connect to the Internet, as illustrated in Figure 3.1.
3.1.2.2 Local Area Networks (LANs)

Local area networks generally connect businesses, public institutions, libraries, universities, etc., to share services and resources such as the Internet, databases, printers, etc. They include technologies such as Ethernet (at any of its speeds, today reaching up to 10 Gbps), Token Ring, 100VG-AnyLAN, etc. The main structure of a LAN traditionally consists of a core switch to which the edge switches connecting office PCs are attached. Corporate servers and other main equipment are also connected to the core switch. Traditionally, this core switch can be a layer 3 switch, or a router connected to it can link the LAN to the Internet. This connection from the LAN to the carrier, or Internet service provider (ISP), is called the last mile. Figure 3.2 shows a traditional LAN design.
Figure 3.2 LAN design.
3.1.2.3 Metropolitan Area Networks (MANs)

Metropolitan area networks cover the geographical area of a city, interconnecting, for instance, different offices of an organization that lie within the perimeter of the same city. Within these networks one finds technologies such as Asynchronous Transfer Mode (ATM), Frame Relay, xDSL (any type of Digital Subscriber Line), cable modem, Integrated Services Digital Network (ISDN), and even Ethernet. A MAN can be used to connect different LANs, whether among themselves or to a wide area network (WAN) such as the Internet. LANs connect to MANs through what is called the last mile, using technologies such as ATM/Synchronous Digital Hierarchy (SDH), ATM/SONET, Frame Relay/xDSL, ATM/T1, ATM/E1, Frame Relay/T1, Frame Relay/E1, ATM/Asymmetric Digital Subscriber Line (ADSL), Ethernet, etc. Traditionally, the metropolitan core is made up of high-speed switches, such as ATM switches over an SDH or SONET ring. The new technological platforms establish that MAN or WAN rings can work over Dense Wavelength Division Multiplexing (DWDM); they can go from current rates of 10 Gbps to transmission rates of 1.3 Tbps. These high-speed switches can also be layer 3 equipment, and therefore may perform routing. Figure 3.3 shows a traditional MAN design.
3.1.2.4 Wide Area Networks (WANs)

Wide area networks span a wide geographical area. They typically connect several local area networks, providing connectivity to devices located in different cities or countries. The technologies applied to these networks are the same as those applied to MANs, but in this case, a larger geographical
Figure 3.3 MAN design.
area is spanned, and therefore a larger number of devices and greater complexity in the analysis are required to develop the optimization process. The most familiar case of a WAN is the Internet, as it connects many networks worldwide. A WAN design may consist of a combination of layer 2 (switches) and layer 3 (routers) equipment, and the analysis depends exclusively on the layer under consideration. Traditionally, this type of network is analyzed under a layer 3 perspective. Figure 3.4 shows a traditional WAN design. In this book we will work with several kinds of networks. Chapters 4 and 5 discuss MANs and WANs. Said chapters do not include LANs because problems found therein are simpler than problems found in MANs and WANs. Because Ethernet applies to both LANs and MANs, the same analysis used for WANs can be applied to LANs. Chapter 6 covers wireless networks, which do not use physical cables; that analysis can be applied to PANs, LANs, MANs, and WANs.

Figure 3.4 WAN design.

Figure 3.5 Unicast transmission. [Diagram: an FTP client and an FTP server communicate across a MAN/WAN, from source node s = 1 to destination node t = 5 through intermediate nodes 2, 3, and 4, each end on a LAN.]
3.1.3 Classification of Computer Networks Based on Type of Transmission

Based on the type of transmission, computer networks fall into the following classifications.
3.1.3.1 Unicast Transmissions

A transmission between a single transmitter and a single receiver is a unicast transmission (Figure 3.5). Examples include applications such as HTTP, FTP, Telnet, VoIP, point-to-point videoconferences, etc.
3.1.3.2 Multicast Transmissions

The transmission of information between one or several transmitters and several receivers is called multicast transmission (Figure 3.6). Applications such as multipoint videoconferences, video streaming, etc., are examples of multicast transmissions.
3.1.3.3 Broadcast Transmissions

The transmission of information between a transmitter and all receivers in a computer network is a broadcast transmission. Usually, these are control messages or messages sent by certain applications for proper operation.
Figure 3.6 Multicast transmission. [Diagram: a video server at source node s = 1 transmits across a MAN/WAN with intermediate nodes 2, 3, and 4 to video client 1 at t1 = 5 and video client 2 at t2 = 6, each end on a LAN.]
This book will focus on unicast and multicast transmissions because they are traditionally the ones intentionally generated by computer network users.
3.2 Computer Network Modeling

A computer network can be represented as a graph in which the nodes represent the devices for network interconnection, such as routers, switches, modems, satellites, and others, and the edges represent the physical connection links between such devices. In this book, the nodes will represent routers or switches (in an Internet analysis they would be routers). The links will represent the media and physical transmission technologies between such routers or switches, for example, optical fiber, copper, satellite links, microwaves, etc. In the case of satellite communications, the nodes could be the satellites, and the links the uplink and downlink channels. In the next section we introduce graph theory in order to represent and model computer networks as graphs and, consequently, be able to apply different graph algorithms to computer networks.
3.2.1 Introduction to Graph Theory Graphs are a data structure that consists of two basic components: the vertices or nodes, and the edges or links that connect them. Graphs are divided into undirected graphs and directed graphs.
Figure 3.7 Graph.
Figure 3.8 Directed graph.
Undirected graphs are made up of a set of nodes and a set of links that are unordered node pairs. Figure 3.7 shows an undirected graph with six nodes and eight links, in which we refer indistinctly, for example, to link (1, 2) or (2, 1). Directed graphs are a set of nodes and a set of edges whose elements are ordered pairs of distinct nodes. Figure 3.8 shows a directed graph (also with six nodes and eight links) with the following links: {(1, 2), (1, 4), (2, 3), (2, 5), (3, 6), (4, 3), (4, 6), (5, 6)}. In this case, there is a link (1, 2) but no link (2, 1); for link (2, 1) to exist, it would have to appear explicitly in the set of links. Because we can represent a computer network through graphs, we can use different graph algorithms (shortest path, shortest path tree, minimum spanning tree, maximum flow, breadth-first search, depth-first search, etc.) to optimize resources in computer networks.
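The directed graph of Figure 3.8 can be held in ordinary data structures. The following sketch (function and variable names are illustrative, not from the book) stores the edge set as ordered pairs, so (1, 2) does not imply (2, 1), and derives an adjacency list from it.

```python
# The directed graph of Figure 3.8: nodes 1..6, eight ordered-pair links.
edges = {(1, 2), (1, 4), (2, 3), (2, 5), (3, 6), (4, 3), (4, 6), (5, 6)}

def build_adjacency(edge_set):
    """Build an adjacency list: node -> set of successor nodes."""
    adj = {}
    for i, j in edge_set:
        adj.setdefault(i, set()).add(j)
        adj.setdefault(j, set())  # ensure sink nodes also appear as keys
    return adj

adj = build_adjacency(edges)
```

Note that `adj[1]` contains 2, but `adj[2]` does not contain 1, matching the book's observation that link (2, 1) does not exist unless listed explicitly.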
3.2.2 Computer Network Modeling in Unicast Transmission In the unicast case the network is modeled as a directed graph G = (N, E), where N is the set of nodes and E is the set of links. n denotes the number of network nodes, i.e., n = |N|. Among the nodes, we have a source s ∈ N (ingress node) and a destination t ∈ N (egress node). Let (i, j) ∈ E be the link from node i to node j. Let f ∈ F be any unicast flow, where F is the flow set; we denote the number of flows by |F|. Let cij be the weight of each link (i, j), and let bwf be the traffic demand of a flow f from the ingress node s to the egress node t. Figure 3.9a illustrates a computer network for unicast transmissions. We can see that the set of nodes N is {1, 2, 3, 4, 5}, and therefore n = |N| = 5. The set of links E is {(1, 2), (1, 3), (2, 5), (3, 4), (4, 5)}. Because this graph is directed, there is no possibility of transmitting from node 2 to node 1, because link (2, 1) is not part of set E. G = (N, E) is the graph that represents the network. The nodes, for example, would represent the routers, and the links the optical fiber or the terrestrial or satellite microwave connections. In this example, the origin or transmitter node is node 1 (labeled s) and the destination or receiver node is node 5 (labeled t). In this type of network we suppose that the origin and destination nodes are connected to a LAN, such as Ethernet, to which the clients or servers transmitting the information are connected. In Figure 3.9b, both the origin node s and the destination node t are connected to an Ethernet LAN, whether through a hub or a switch. Each link in the network is assigned a weight; in this example, cij stands for the weight of each link. The weight can represent the transmission delay on the link, the link capacity, the available capacity, the cost of transmitting information through the link, etc. Finally, computer networks transmit the traffic produced by certain applications. In this book, when we talk about applications, we generally refer to applications over IP, because IP is the transmission protocol of the Internet.
These unicast applications can be, for example, FTP for file transfers, HTTP for Web browsing, Telnet for terminal emulation, HTTPS (secure HTTP), SSH (secure Telnet), VoIP for telephone calls over IP, videoconferences over IP, video streaming, etc. In this example, we have established a single flow (|F| = 1), which is an FTP transmission from the origin node s to the destination node t. The FTP flow is denoted f1, and the set of flows consists of a single element, that is, F = {f1}. The path through which the flow will be transmitted from the emitting node to the receiving node will be determined by a routing algorithm.
Figure 3.9 Computer network modeling in unicast transmission.
In this section, we have shown how the topology of computer networks transmitting unicast applications can be represented through graphs. In later chapters, this modeling will help us develop the optimization models at the routing level.
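The unicast model of Figure 3.9 can be written down directly as data. In this sketch the weight and demand values are illustrative placeholders (the book does not fix them at this point), and the helper name is an assumption.

```python
# Unicast model of Figure 3.9: directed graph, one ingress and one egress node.
N = {1, 2, 3, 4, 5}
E = {(1, 2), (1, 3), (2, 5), (3, 4), (4, 5)}
s, t = 1, 5
c = {link: 1.0 for link in E}   # link weights c_ij (placeholder values)
bw = {"f1": 32.0}               # traffic demand bw_f of flow f1 (illustrative)

def is_valid_path(path, links):
    """Check that consecutive nodes in `path` are joined by directed links."""
    return all((path[k], path[k + 1]) in links for k in range(len(path) - 1))
```

For instance, [1, 2, 5] is a valid directed path, while [2, 1] is not, since (2, 1) is absent from E.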
3.2.3 Computer Network Modeling in Multicast Transmission As previously mentioned, other types of applications transmit their data to several destination nodes. These transmissions are called multicast transmissions. In the multicast case, the network can also be modeled as a directed graph G = (N, E), where N is the set of nodes and E is the set of links. n denotes the number of network nodes, i.e., n = |N|. Among the nodes, we have a source s ∈ N (ingress node) and a set of destinations T (the set of egress nodes). Let t ∈ T be any egress node. Let (i, j) ∈ E be the link from node i to node j. Let f ∈ F be any multicast flow, where F is the flow set and Tf is the egress node subset for the multicast flow f. We use |F| to denote the number of flows. Note that T = ∪f∈F Tf.
Figure 3.10a shows a computer network for multicast transmission. In this figure we can see that the set of nodes N is {1, 2, 3, 4, 5, 6}, and therefore n = |N| = 6. The set of links E is {(1, 2), (1, 3), (1, 4), (2, 5), (3, 6), (4, 5), (4, 6)}. The graph that represents the network is G = (N, E). In this example, the origin or transmitter is node 1 (labeled s) and the set of destination or receiving nodes consists of nodes 5 and 6 (labeled t1 and t2, respectively).
Figure 3.10 Computer network modeling in multicast transmission.
As in the unicast case, in this type of network we assume that the origin and destination nodes are connected to a LAN, such as Ethernet, to which the clients or servers performing the transmission of information are connected (Figure 3.10b). The origin router s and the destination routers t1 and t2 are connected to an Ethernet LAN through a hub or switch. As in the unicast case, the weight of each link is represented by cij. Examples of multicast applications include multipoint videoconferences, multipoint video streaming, multipoint audio conferences, etc. In this example, we have a single flow (|F| = 1), which is a video-streaming transmission from origin node s to destination nodes t1 and t2. We represent this video-streaming flow as f1; the set of flows F consists of a single element, F = {f1}, and T = Tf1. One can see that the modeling of a network for multicast transmission is similar to the unicast case, with the difference that the flow of information will be received by multiple destinations.
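The multicast model of Figure 3.10 adds only one ingredient to the unicast model: a set of egress nodes per flow, with T the union of those sets. A minimal sketch (names are illustrative):

```python
# Multicast model of Figure 3.10: one source, a set of egress nodes per flow.
N = {1, 2, 3, 4, 5, 6}
E = {(1, 2), (1, 3), (1, 4), (2, 5), (3, 6), (4, 5), (4, 6)}
s = 1
F = {"f1": {5, 6}}              # flow -> its egress node subset T_f
T = set().union(*F.values())    # T is the union of the T_f over all flows
```

With a single flow, T equals Tf1 = {5, 6}, as in the example of the text.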
Chapter 4
Routing Optimization in Computer Networks 4.1 Concepts 4.1.1 Unicast Case To solve an optimization problem in computer networks, we need to define a variable that tells us the route through which the flow of information will be transmitted. We define the variables vector X_ij^f as follows:
X_ij^f = 1 if link (i, j) is used for flow f; X_ij^f = 0 if link (i, j) is not used for flow f.
This means that the variable X_ij^f is 1 if link (i, j) is used to transmit flow f; if it is not used, the variable is 0. In Figure 4.1 we see that there are two possible routes from origin node 1 to destination node 5. The first route is through links (1, 2) and (2, 5). The second path is through links (1, 3), (3, 4), and (4, 5). Assuming the first path ((1, 2), (2, 5)) has been selected as the route through which to transmit flow f1, X_12^f1 = 1 and X_25^f1 = 1. Because links (1, 3), (3, 4), and (4, 5) are not used, the values of X_ij^f1 for these links equal 0 (Figure 4.1). Table 4.1 summarizes the parameters used for the unicast case.
Figure 4.1 Computer network optimization in unicast transmission.
Table 4.1
Parameters Used for the Unicast Case

G(N, E) — Graph of the topology
N — Set of nodes
E — Set of links
s — Ingress node
t — Egress node
(i, j) — Link from node i to node j
F — The flow set
f — Any unicast flow
X_ij^f — Indicates whether link (i, j) is used for flow f
c_ij — The available capacity of each link (i, j)
bw_f — The traffic demand of a flow f
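The decision variables of Table 4.1 fit naturally in a dictionary keyed by (i, j, f). The following sketch (an illustration, not the book's notation) marks the first path of Figure 4.1 and recovers its hop count by summing the variables.

```python
# Decision variables X_ij^f as a dict; 1 marks links carrying the flow.
E = {(1, 2), (1, 3), (2, 5), (3, 4), (4, 5)}
X = {(i, j, "f1"): 0 for (i, j) in E}

# Select the first path (1, 2), (2, 5) for flow f1, as in Figure 4.1.
X[(1, 2, "f1")] = 1
X[(2, 5, "f1")] = 1

# Summing the variables of one flow yields its hop count.
hop_count = sum(X[(i, j, "f1")] for (i, j) in E)
```
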
4.1.2 Multicast Case For the multicast case, we must also define a variable that indicates whether a link is used to transmit a specific flow. We define the variables vector X_ij^{t,f} as follows:
X_ij^{t,f} = 1 if link (i, j) is used for flow f with destination t; X_ij^{t,f} = 0 if link (i, j) is not used for flow f with destination t.
This means that variable X_ij^{t,f} is 1 if link (i, j) is used to transmit multicast flow f toward destination node t, which belongs to the set of destination nodes Tf; if it is not used, it is 0. In Figure 4.2 we can see that there are four possible trees from origin node 1 to destination nodes 5 and 6. The first tree consists of paths {(1, 2), (2, 5)} and {(1, 3), (3, 6)}. The second tree consists of {(1, 4), (4, 5)} and {(1, 4), (4, 6)}. The third tree
Figure 4.2 Computer network optimization in multicast transmission.
consists of paths {(1, 2), (2, 5)} and {(1, 4), (4, 6)}. Finally, the fourth tree consists of paths {(1, 4), (4, 5)} and {(1, 3), (3, 6)}. Assuming that the second tree ({(1, 4), (4, 5)} and {(1, 4), (4, 6)}) was selected as the tree to transmit multicast flow f1, X_14^{5,f1} = 1 and X_14^{6,f1} = 1, because link (1, 4) is used to transmit flow f1 toward both destination nodes 5 and 6; likewise X_45^{5,f1} = 1 and X_46^{6,f1} = 1. Because links (1, 2), (2, 5), (1, 3), and (3, 6) are not used, the values of X_ij^{t,f1} for these links, for both destination nodes 5 and 6, are zero (Figure 4.2). Variables corresponding to links through which one cannot reach a specific destination are not shown in Figure 4.2; for example, variable X_45^{6,f1} = 0, because it is impossible to reach node 6 through link (4, 5). Table 4.2 summarizes the parameters used for the multicast case.
Table 4.2
Parameters Used for the Multicast Case

G(N, E) — Graph of the topology
N — Set of nodes
E — Set of links
s — Ingress node
T — Set of egress nodes (t denotes any egress node)
(i, j) — Link from node i to node j
F — The flow set
f — Any multicast flow
T_f — The egress node subset for the multicast flow f
X_ij^{t,f} — Indicates whether link (i, j) is used for flow f toward egress node t
c_ij — The available capacity of each link (i, j)
bw_f — The traffic demand of a flow f
4.2 Optimization Functions In this section we will define certain functions that will later be used for multi-objective optimization. This book will show some of the functions most commonly used, but we are not attempting an exhaustive analysis of all possible functions that can be considered.
4.2.1 Hop Count 4.2.1.1 Unicast Transmission The first function to be analyzed is hop count. This function represents the number of links through which packets must pass from the time they leave the origin node until they reach the destination node. Figure 4.3 shows that there are two possible paths to carry the flow of information f1 (File Transfer Protocol (FTP)) between origin node 1 and destination node 5: the first path is formed by links (1, 2) and (2, 5), and therefore the number of hops to be taken by the packets is 2; the second path is formed by links (1, 3), (3, 4), and (4, 5), and in this case the number of hops is 3. If we use the number of hops as the optimization function, the path chosen to transmit the packets from origin node 1 to destination node 5 will be the first path. The routing protocol that uses the hop count function to calculate the shortest path is the Routing Information Protocol (RIP). This protocol is used mainly in small metropolitan area networks (MANs) or wide area networks (WANs), or for routing in local area networks (LANs) through virtual LANs (VLANs).
Figure 4.3 Hop count function in unicast transmission.
To describe the hop count function mathematically we will use an example. In Figure 4.3 we have the variables X_ij^f describing the paths. If we describe the paths mentioned above with these variables, the number of hops for the first path would be given by X_12^f1 + X_25^f1. If this is the path used, the value of each of these variables would be 1 and the sum would yield 2. Similarly, for the second path, the value of the function would be given by X_13^f1 + X_34^f1 + X_45^f1, and if this is the path used, the value of each of these variables would be 1, and therefore the sum would yield 3. When we want to find the shortest path based on the hop count function, only one path will be selected; for the example shown previously, it would be the path given by links (1, 2) and (2, 5). The function to be optimized consists of minimizing the sum over all the paths, but only the variables X_ij^f of one path will have a value of 1. Therefore, the hop count function can be stated in the following way:
min Σ_{f∈F} Σ_{(i,j)∈E} X_ij^f

Σ_{(i,j)∈E} X_ij^f denotes the hop count of the path selected for flow f (of the possible paths, only one will be taken), and Σ_{f∈F} runs over all the flows that will be transmitted over the network. In the solution, for every flow f, one will get a path with the minimum number of hops from its origin node s to its destination node t.
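For a single flow, minimizing Σ X_ij^f is equivalent to finding the fewest-hop path, which breadth-first search does directly. A minimal sketch over the Figure 4.3 topology (the function name is illustrative):

```python
from collections import deque

def min_hop_path(edges, s, t):
    """Return the path from s to t with the fewest hops, or None if unreachable."""
    adj = {}
    for i, j in edges:
        adj.setdefault(i, []).append(j)
    parent = {s: None}
    queue = deque([s])
    while queue:
        u = queue.popleft()
        if u == t:                      # reconstruct the path back to s
            path = []
            while u is not None:
                path.append(u)
                u = parent[u]
            return path[::-1]
        for v in adj.get(u, []):
            if v not in parent:         # first visit = fewest hops
                parent[v] = u
                queue.append(v)
    return None

# Figure 4.3: two candidate paths; BFS picks the 2-hop one.
E = {(1, 2), (1, 3), (2, 5), (3, 4), (4, 5)}
```

Here `min_hop_path(E, 1, 5)` returns [1, 2, 5], the first path of the example, while the reverse direction is unreachable in this directed graph.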
4.2.1.2 Multicast Transmission In this section we will analyze how we can mathematically express the hop count function when we perform multicast flow transmissions, that is, when the information is sent to a set of destination nodes. Figure 4.4 shows that there are four possible trees to carry the flow of information f1 (video) between node 1 and destination nodes 5 and 6: The first tree (Figure 4.5) is formed by paths {(1, 2), (2, 5)} and {(1, 3), (3, 6)}, and therefore, the number of total hops for this tree will be the sum of the number of hops in each path. Because there are 2 hops in the first path and also 2 hops in the second path, the total number of hops in the tree is 4. The second tree (Figure 4.6) is formed by paths {(1, 4), (4, 5)} and {(1, 4), (4, 6)}, and therefore, the total number of hops in this tree is 4. The third tree (Figure 4.7) is formed by paths {(1, 2), (2, 5)} and {(1, 4), (4, 6)}, and therefore, the total number of hops is 4. The fourth tree (Figure 4.8) is formed by paths {(1, 4), (4, 5)} and {(1, 3), (3, 6)}, and therefore, the total number of hops is 4.
Figure 4.4 Hop count function in multicast transmission.
Figure 4.5 First solution.
Figure 4.6 Second solution.
Figure 4.7 Third solution.
Figure 4.8 Fourth solution.
Because we are using the total number of hops as the optimization function, any of the four trees can be taken as the optimal tree. In multicast transmission, the routing protocol that works based on this hop count function is the Distance Vector Multicast Routing Protocol (DVMRP). To describe the hop count function for the multicast case mathematically we will use an example. In the previous figures one can see the variables X_ij^{t,f} that describe the trees. The number of hops for the first tree would be given by (X_12^{5,f1} + X_25^{5,f1}) + (X_13^{6,f1} + X_36^{6,f1}). If this is the tree used, the value of each of these variables would be 1, and therefore the sum would be 4. We can calculate the other three trees in a similar way. When we want to find the shortest path tree based on the hop count function, only one tree will be selected for transmission of the multicast flow, but any of these four trees could be used. The function to be optimized consists of minimizing the sum over all the paths toward each destination node t. The hop count function for multicast transmission can be stated in the following way:
min Σ_{f∈F} Σ_{t∈Tf} Σ_{(i,j)∈E} X_ij^{t,f}

Σ_{(i,j)∈E} X_ij^{t,f} denotes one of the possible paths for flow f with destination node t, Σ_{t∈Tf} denotes that one must find a path for each of the destination nodes t of flow f, and Σ_{f∈F} runs over all multicast flows transmitted over the network.
For each flow f, one will get a tree with the minimum number of hops from its origin node s to the set of destination nodes T.
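The multicast objective can be evaluated for the four trees of Figures 4.5 to 4.8. Note the counting convention the X_ij^{t,f} variables imply: a link shared by two destinations, such as (1, 4) in the second tree, is counted once per destination. A sketch (names are illustrative):

```python
# Total hops of a multicast tree, summed per destination path as the
# X_ij^{t,f} variables do (shared links count once per destination).
def tree_hop_count(paths_by_dest):
    return sum(len(path) for path in paths_by_dest.values())

trees = [
    {5: [(1, 2), (2, 5)], 6: [(1, 3), (3, 6)]},  # first tree (Figure 4.5)
    {5: [(1, 4), (4, 5)], 6: [(1, 4), (4, 6)]},  # second tree (Figure 4.6)
    {5: [(1, 2), (2, 5)], 6: [(1, 4), (4, 6)]},  # third tree (Figure 4.7)
    {5: [(1, 4), (4, 5)], 6: [(1, 3), (3, 6)]},  # fourth tree (Figure 4.8)
]
counts = [tree_hop_count(t) for t in trees]
```

All four trees yield a count of 4, which is why any of them is optimal under this function.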
4.2.2 Delay 4.2.2.1 Unicast Transmission There are different kinds of delay that complement each other. Some authors showed that delay has three basic components: switching delay, queuing delay, and propagation delay. The switching delay is a constant value and can be added to the propagation value. The queuing delay is already reflected in the bandwidth consumption; the authors state that the queuing delay is used as an indirect measure of buffer overflow probability (to be minimized). Other computational studies have shown that it makes little difference whether the cost function used in routing includes the queuing delay or the much simpler form of link utilization. Because later in this book we will analyze the bandwidth consumption function, we will only analyze the propagation delay here. The unit associated with the delay function is time, usually expressed in milliseconds (ms). Because the objective of this function is to calculate and obtain the minimum end-to-end delay, this function is cumulative. Using the same network as in the previous examples, we will analyze how the delay function can change the resulting shortest path. Figure 4.9 shows a graph for the analysis by means of the delay function. A weight dij has been added to each link; it denotes the propagation delay between node i and node j. As in the previous example, Figure 4.9 shows that there are two possible paths to carry the flow of information f1 (FTP)
Figure 4.9 Delay function.
between origin node 1 and destination node 5: the first path is formed by links (1, 2), with a delay d12 of 5 ms, and (2, 5), with a delay d25 of 6 ms. Because this is a cumulative function, the end-to-end delay experienced by the packets during transmission using this path would be 11 ms. The second path is formed by links (1, 3), with a delay of 3 ms, (3, 4), with a delay of 2 ms, and (4, 5), with a delay of 2 ms; the end-to-end delay is 7 ms. If we use the end-to-end delay as the optimization function, the second path will be selected to transmit the packets from origin node 1 to destination node 5. If we compare this result with the analysis using the hop count function (Figure 4.3), we see that the best path is different in each case. By means of multi-objective optimization, we can find all of these solutions when the functions are in conflict. Among the unicast routing protocols that can work with this type of function are Open Shortest Path First (OSPF), Interior Gateway Routing Protocol (IGRP), Enhanced Interior Gateway Routing Protocol (EIGRP), Intermediate System to Intermediate System (IS-IS), and Border Gateway Protocol (BGP). To describe the end-to-end delay function mathematically we will use an example (Figure 4.9). In this case, the delay for the first path would be given by d12·X_12^f1 + d25·X_25^f1. If this is the path used, the variables X_12^f1 and X_25^f1 would be 1, and therefore the end-to-end delay for this path would be (5*1) + (6*1) = 11 ms. Likewise, for the second path the value of the function would be given by d13·X_13^f1 + d34·X_34^f1 + d45·X_45^f1. If this is the path used, the value of these variables X_ij^f would be 1; therefore, the calculation would be (3*1) + (2*1) + (2*1) = 7 ms. When we want to find the shortest path based on the end-to-end delay function, only one path will be selected (the second path in this example).
Figure 4.9 shows the solution found, and therefore the values of the variables X_ij^f of the second path are 1, while for the other path the value of these variables is 0. The function to be optimized consists of minimizing the sum of the product of the delay dij of each link (i, j) times the associated variable X_ij^f. The function to minimize the end-to-end delay can be stated in the following way:
min Σ_{f∈F} Σ_{(i,j)∈E} dij · X_ij^f

Σ_{(i,j)∈E} dij · X_ij^f denotes the end-to-end delay of the possible paths for flow f (only one will be taken), and Σ_{f∈F} runs over all flows that will be transmitted over the network. For each flow f, one will get a path with the minimum end-to-end delay from its origin node s to its destination node t.
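For a single flow with nonnegative link delays, this objective is exactly the shortest path under additive weights, which Dijkstra's algorithm solves. A minimal sketch over the delays of Figure 4.9 (function name is illustrative):

```python
import heapq

def min_delay_path(delays, s, t):
    """delays: dict (i, j) -> delay in ms per directed link. Returns (path, delay)."""
    adj = {}
    for (i, j), d in delays.items():
        adj.setdefault(i, []).append((j, d))
    best = {s: 0.0}
    parent = {s: None}
    heap = [(0.0, s)]
    while heap:
        dist, u = heapq.heappop(heap)
        if dist > best.get(u, float("inf")):
            continue                     # stale heap entry
        if u == t:                       # reconstruct the path back to s
            path = []
            while u is not None:
                path.append(u)
                u = parent[u]
            return path[::-1], dist
        for v, d in adj.get(u, []):
            nd = dist + d
            if nd < best.get(v, float("inf")):
                best[v] = nd
                parent[v] = u
                heapq.heappush(heap, (nd, v))
    return None, float("inf")

# Link delays of Figure 4.9 (ms).
delays = {(1, 2): 5, (2, 5): 6, (1, 3): 3, (3, 4): 2, (4, 5): 2}
path, total = min_delay_path(delays, 1, 5)
```

The result is the second path, [1, 3, 4, 5] with 7 ms, matching the worked example, even though it has more hops than [1, 2, 5].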
4.2.2.2 Multicast Transmission In this section we will analyze how we can mathematically express the end-to-end delay function when we transmit multicast flows. Figure 4.10 shows that there are four possible trees to carry the flow of information f1 (video) between node 1 and destination nodes 5 and 6.
Figure 4.10 Delay function in multicast transmission.
Figure 4.11 to Figure 4.14 show the same four possible trees shown previously for the hop count function. In this case, the total end-to-end delays of the trees are 8 ms (Figure 4.11), 20 ms (Figure 4.12), 14 ms (Figure 4.13), and 14 ms (Figure 4.14), respectively. Therefore, in this case, the solution is given by the tree formed by paths {(1, 2), (2, 5)} and {(1, 3), (3, 6)}. Multicast routing protocols that can work with this type of function include, among others, Protocol Independent Multicast — Dense Mode (PIM-DM), Multicast Open Shortest Path First (MOSPF), and Border Gateway Multicast Protocol (BGMP). In this book we do not analyze the analytical model for transmission with shared trees, which is how Protocol Independent Multicast — Sparse Mode (PIM-SM) works, but with some minor changes it could be implemented. In this case, the change would consist of creating a unicast shortest path (the same model as the delay function
Figure 4.11 First solution.
Figure 4.12 Second solution.
Figure 4.13 Third solution.
Figure 4.14 Fourth solution.
(Link delays in all four figures: d12 = 2, d25 = 2, d13 = 2, d36 = 2, d14 = 5, d45 = 5, d46 = 5 ms.)
in the unicast case) between origin node s and the rendezvous point (RP) node, which works as the root of the shared tree, and subsequently building the shortest path tree (the same model as the delay function for the multicast case) between the RP node and each of the destination nodes in Tf. To describe the end-to-end delay function mathematically we will use an example (Figure 4.11). In this case, the total delay would be given by (d12·X_12^{5,f1} + d25·X_25^{5,f1}) + (d13·X_13^{6,f1} + d36·X_36^{6,f1}). If this tree is used, the value of each of these variables would be 1, and therefore the value would be ((2*1) + (2*1)) + ((2*1) + (2*1)) = 8 ms. The calculation for the other three trees can be done in the same way. The function to be optimized consists of minimizing the sum over all the paths. The function to minimize the end-to-end delay for multicast transmission can be stated as follows:
min Σ_{f∈F} Σ_{t∈Tf} Σ_{(i,j)∈E} dij · X_ij^{t,f}
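Evaluating this objective for the four trees of Figures 4.11 to 4.14 confirms the delays quoted in the text. A sketch using the link delays shown in those figures (names are illustrative):

```python
# Link delays of Figures 4.11-4.14 (ms).
d = {(1, 2): 2, (2, 5): 2, (1, 3): 2, (3, 6): 2,
     (1, 4): 5, (4, 5): 5, (4, 6): 5}

def tree_delay(paths_by_dest, delays):
    """Sum link delays over every destination's path, per the X_ij^{t,f} variables."""
    return sum(delays[link] for path in paths_by_dest.values() for link in path)

trees = [
    {5: [(1, 2), (2, 5)], 6: [(1, 3), (3, 6)]},  # first tree
    {5: [(1, 4), (4, 5)], 6: [(1, 4), (4, 6)]},  # second tree
    {5: [(1, 2), (2, 5)], 6: [(1, 4), (4, 6)]},  # third tree
    {5: [(1, 4), (4, 5)], 6: [(1, 3), (3, 6)]},  # fourth tree
]
delays_per_tree = [tree_delay(t, d) for t in trees]
```

The values are 8, 20, 14, and 14 ms, so the first tree is selected, in contrast to the hop count function, under which all four trees tied.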
4.2.3 Cost 4.2.3.1 Unicast Transmission The cost function may be correlated with the delay function. We analyze it separately because there will be cases where the cost of transmission is not correlated with other objective functions such as delay or hops. The analysis of the cost function is similar to that performed with the delay function. The unit of the cost function is monetary; for example, the cost of each link may be associated with the cost of using the links in a clear-channel transmission with Point-to-Point Protocol (PPP) or High-Level Data-Link Control (HDLC) technology, with the use of a switched carrier through a virtual circuit with Asynchronous Transfer Mode (ATM) or Frame Relay technology, or with a label switched path (LSP) in Multi-Protocol Label Switching (MPLS) technology. Because the objective of this function is to calculate and obtain the minimum end-to-end cost, this function is also cumulative. Figure 4.15 shows a network to perform the analysis by means of the cost function. In this figure one sees that a weight wij, which denotes the cost of transmitting any flow between nodes i and j, has been added to
Figure 4.15 Cost function.
every link. As in the case of the delay, in Figure 4.15 one can see that there are two possible paths to carry the flow of information f1 (FTP) between origin node 1 and destination node 5. The cost of the first path would be $11, and the cost of the second would be $7. If we use the cost of using the links as the optimization function, the second path would be selected to transmit the packets from origin node 1 to destination node 5. If we want to describe the above-mentioned paths mathematically, we can say that the cost of the first path would be given by w12·X_12^f1 + w25·X_25^f1. If this is the path used, the value of variables X_12^f1 and X_25^f1 would be 1, and therefore the total cost for this path would be (5*1) + (6*1) = $11. Likewise, for the second path the value of the function would be given by w13·X_13^f1 + w34·X_34^f1 + w45·X_45^f1. If this is the path used, the value of variables X_13^f1, X_34^f1, and X_45^f1 would be 1, and therefore the total cost for this path would be (3*1) + (2*1) + (2*1) = $7. When we want to find the shortest path based on the cost function, only one path will be selected (the second path in the example). Figure 4.15 shows the solution found when cost is minimized, and therefore the values of the variables X_ij^f of this path are 1, while for the other path the value of these variables is 0. The function to be optimized consists of minimizing the sum of the product of the cost wij of every link (i, j) times the associated variable X_ij^f. The function to minimize cost may be stated in the following way:
min Σ_{f∈F} Σ_{(i,j)∈E} wij · X_ij^f

Σ_{(i,j)∈E} wij · X_ij^f denotes the end-to-end cost of the possible paths for flow f (only one will be taken), and Σ_{f∈F} runs over all flows that will be transmitted over this network. For every flow f, one will obtain a path with the minimum end-to-end cost from its origin node s to its destination node t.
4.2.3.2 Multicast Transmission The cost function will be analyzed when we present the bandwidth consumption function because these functions exhibit similar behaviors when multicast transmissions are performed.
4.2.4 Bandwidth Consumption 4.2.4.1 Unicast Transmission The bandwidth consumption function represents the bandwidth used on every link along the path to transmit the flow of information from origin node s to destination node t. One needs to mention here that the function analyzed in this item refers to the bandwidth consumed in the whole network to carry the flow of information, not to how much bandwidth is available on a path, which would be denoted by a different mathematical expression. The bandwidth consumption function is associated with the transmission capacity used on each of the links, and its values are given in bits per second (bps). It is common that the capacities of the links in low-speed WAN or MAN channels are in kilobits per second (Kbps). In high-speed WAN and MAN links and in LANs it is common to use magnitudes on the order of megabits per second (Mbps) or gigabits per second (Gbps). Finally, in the new Generalized Multiprotocol Label Switching (GMPLS) networks using Dense Wavelength Division Multiplexing (DWDM), it is possible to obtain measurements on the order of terabits per second (Tbps). Because the objective of this function consists of calculating and obtaining the minimum bandwidth consumption over the whole path, this function is cumulative. (The available bandwidth function, in contrast, is concave: the available bandwidth of a path is the minimum over its links.) Figure 4.16 shows a network to perform the analysis through the bandwidth consumption function. In contrast to the networks used for the previous functions, link (1, 5) has been added in this figure. Variable cij denotes the available capacity of link (i, j). One can see in Figure
Figure 4.16 Bandwidth consumption function.
4.16 that there are three possible paths to carry the flow of information f1 (FTP) between origin node 1 and destination node 5. In this example, the first path is formed by links (1, 2), with a capacity of 64 Kbps, and (2, 5), with a capacity of 64 Kbps. Because this function is cumulative and the application has a traffic demand bwf1 of 32 Kbps that must pass through the two links, the bandwidth consumption when transmitting the flow through this path would be 64 Kbps: 32 Kbps consumed by (1, 2) plus 32 Kbps consumed by (2, 5). The second path consists only of link (1, 5), but there is a drawback: it cannot carry flow f1, because the flow demands a capacity (bwf1) of 32 Kbps and the capacity of channel c15 is only 16 Kbps. The third path is formed by links (1, 3), (3, 4), and (4, 5), each with a capacity of 64 Kbps; following the reasoning for the first path, the bandwidth consumption over the complete path would be 96 Kbps. If we use bandwidth consumption as the optimization function, the first path would be selected to transmit the packets from origin node 1 to destination node 5, with a bandwidth consumption of 64 Kbps. Comparing this result with the one obtained by means of the hop count function (see Figure 4.3), it could appear that both functions are correlated. However, in this example one can see that if we only minimize the hop count function, the result may not be feasible, because the second path, even though it has only 1 hop, does not have enough capacity to carry the demand of flow f1. When we analyze this same function in multicast transmission, we will see that it is very useful due to the tree structure that multicast transmission creates. To describe the bandwidth consumption function mathematically we will use an example (Figure 4.16). In this case, the bandwidth consumption for the first path would be given by bwf1·X_12^f1 + bwf1·X_25^f1. If this is the path used, the value of the variables X_12^f1 and X_25^f1 would be 1, and therefore,
86 䡲 Multi-Objective Optimization in Computer Networks
the value for this path would be (32·1) + (32·1) = 64 Kbps. Similarly, for the second path the value of the function would be given by $bw_1 \cdot X_{15}^1$. If this is the path used, the value of variable $X_{15}^1$ would be 1, and therefore the calculation would be 32·1 = 32 Kbps. But because the capacity of this channel is only 16 Kbps, this path is not a feasible solution for the problem. For the third path, the value of the function would be given by $bw_1 \cdot X_{13}^1 + bw_1 \cdot X_{34}^1 + bw_1 \cdot X_{45}^1$; now, if this is the path used, the value of the variables $X_{13}^1$, $X_{34}^1$, and $X_{45}^1$ would be 1, and therefore the value would be (32·1) + (32·1) + (32·1) = 96 Kbps. When we want to find the shortest path based on the bandwidth consumption function, only one path will be selected; for the example previously shown, it would be the first path, given by links (1, 2) and (2, 5). Figure 4.16 shows the solution found when minimizing bandwidth consumption: the values of the variables $X_{ij}^f$ of this path are 1, while for the other paths the value of these variables is 0. The function to be optimized consists of minimizing the sum, over all flows f, of the demand $bw_f$ times the value of the associated variable $X_{ij}^f$, and can be expressed in the following way:
$$\min \sum_{f \in F} \sum_{(i,j) \in E} bw_f \cdot X_{ij}^f$$
$\sum_{(i,j) \in E} bw_f \cdot X_{ij}^f$ denotes the bandwidth consumption value of the possible paths for flow f (only one will be taken).
$\sum_{f \in F}$ denotes all flows that will be transmitted over this network. For each flow f, one will obtain a path with the minimum bandwidth consumption from its origin node s to its destination node t.
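The selection just described can be sketched in a few lines of Python (an illustrative enumeration, not from the book; the book solves these models with a solver). Link capacities, the demand $bw_{f1}$, and the three candidate paths are taken from Figure 4.16:

```python
# Sketch: bandwidth consumption of the paths in Figure 4.16.
LINKS = {  # (i, j) -> capacity in Kbps
    (1, 2): 64, (2, 5): 64, (1, 5): 16,
    (1, 3): 64, (3, 4): 64, (4, 5): 64,
}
PATHS = [[(1, 2), (2, 5)], [(1, 5)], [(1, 3), (3, 4), (4, 5)]]
DEMAND = 32  # bw_f1 in Kbps

def bandwidth_consumption(path, demand):
    """Cumulative function: the demand is consumed on every link of the path."""
    return demand * len(path)

def feasible(path, demand):
    """Capacity constraint: the demand may not exceed any link capacity."""
    return all(demand <= LINKS[link] for link in path)

best = min((p for p in PATHS if feasible(p, DEMAND)),
           key=lambda p: bandwidth_consumption(p, DEMAND))
print(best, bandwidth_consumption(best, DEMAND))  # [(1, 2), (2, 5)] 64
```

As in the text, path {(1, 5)} is discarded as infeasible (32 Kbps > 16 Kbps), and the first path wins with 64 Kbps against 96 Kbps for the third.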
4.2.4.2 Multicast Transmission

This function makes an interesting contribution in multicast transmission. As discussed in previous chapters, in multicast transmission, when one wants to send a packet to more than one destination and packets with different destinations pass through the same link, only one of them is transmitted through that link. The router then copies this packet and resends it through more than one exit port. Figure 4.17 shows that there are four possible trees to transmit flow f1 from origin node s to destination nodes t1 and t2. The first tree would
Routing Optimization in Computer Networks 䡲 87
[Figure: multicast topology; video flow f1 with $bw_{f1} = 32$ Kbps; all link capacities $c_{ij} = 64$ Kbps; link delays $d_{ij}$ of 2 ms or 5 ms; source s = 1, destinations t1 = 5 and t2 = 6]
Figure 4.17 Bandwidth consumption function in multicast transmission.
be formed by paths {(1, 2), (2, 5)} and {(1, 3), (3, 6)}. As we have seen previously, the hop count, total delay, and bandwidth consumption functions would give the following values: 4 hops, 8-ms delay, and 128-Kbps bandwidth consumption, respectively. In this case, the bandwidth consumption function is calculated as 32 Kbps of flow f1 passing through links (1, 2) and (2, 5) to reach destination t1, plus another 32 Kbps passing through links (1, 3) and (3, 6) to reach destination t2. Therefore, total consumption is the demand of flow f1 (32 Kbps) passing through the four links, that is, 32 Kbps · 4 = 128 Kbps. The second tree would be formed by paths {(1, 4), (4, 5)} and {(1, 4), (4, 6)}. In this tree the values would be 4 hops for the hop count and 20 ms for the delay. But because the transmission is multicast, only one packet would pass through link (1, 4) toward destinations t1 and t2, instead of one for each destination, and therefore the bandwidth consumption in link (1, 4) would be only 32 Kbps. These packets still arrive at nodes t1 and t2 because at node 4 they are copied to exit through links (4, 5) and (4, 6), consuming 32 Kbps at each of them. Therefore, the value of the bandwidth consumption function when transmitting the flow through this tree would be 32 Kbps · 3 = 96 Kbps. If we compare this tree with the first one, we can see that for the delay function the first tree is better; for the hop count function, both trees have the same value; and for the bandwidth consumption function, the second tree shows a better value than the first. At the multi-objective optimization level these two trees are noncomparable solutions. The third tree would be formed by paths {(1, 2), (2, 5)} and {(1, 4), (4, 6)}. In this case, the values of the functions would be 4 hops, 14 ms, and 128 Kbps.
Finally, the fourth tree would be formed by paths {(1, 4), (4, 5)} and {(1, 3), (3, 6)}, and the values would be 4 hops, 14 ms, and 128 Kbps.
We can see that if we minimize this function, one tends to find trees whose paths intersect each other on the way to the destination nodes, and in this way profit from the functionality of multicast transmission. If we optimize the transmission as a set of unicast flows, one per destination, the function can be expressed in the following way:
$$\min \sum_{f \in F} \sum_{t \in T_f} \sum_{(i,j) \in E} bw_f \cdot X_{ij}^{tf}$$
But in the case of a multicast transmission, the mathematical formula is expressed differently. Once again, let us take the second tree that we previously analyzed, which showed the best value for the bandwidth consumption function. Variables $X_{14}^{51}$, $X_{14}^{61}$, $X_{45}^{51}$, and $X_{46}^{61}$ would be 1, and the value of the rest of the variables $X_{ij}^{tf}$ would be 0. In this case, variables $X_{14}^{51}$ and $X_{14}^{61}$ both have a value of 1, showing that the multicast packets of flow f1 that travel to destinations 5 and 6 pass through the same link, (1, 4). But, as mentioned previously, duplicate packets do not pass through this link, and we can use the max function to determine the maximum value of the variables that pass through this link for the same multicast flow but for different destinations. Hence, the value would be given by $\max\{X_{14}^{51}, X_{14}^{61}\} = \max\{1, 1\} = 1$. By means of this scheme, we express mathematically in the model that packets are not transmitted in duplicate on any link shared by a same multicast flow. Therefore, the bandwidth consumption function for multicast transmissions can be restated in the following way:

$$\min \sum_{f \in F} \sum_{(i,j) \in E} bw_f \cdot \max_{t \in T_f} \left( X_{ij}^{tf} \right)$$
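A minimal sketch of this per-link max, using the second tree of Figure 4.17 (the dictionary layout is an assumption for illustration, not the book's notation):

```python
# Sketch: bandwidth consumption of tree 2 in Figure 4.17, where the max
# over destinations prevents counting duplicate packets on link (1, 4).
BW = 32  # demand of flow f1 in Kbps
# X[t][(i, j)] = 1 if the path toward destination t uses link (i, j)
X = {
    5: {(1, 4): 1, (4, 5): 1},
    6: {(1, 4): 1, (4, 6): 1},
}
links = {link for path in X.values() for link in path}

# sum over links of bw_f * max_t(X_ij^tf)
consumption = sum(BW * max(X[t].get(link, 0) for t in X) for link in links)
print(consumption)  # 96: link (1, 4) is counted once, not twice
```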
If we analyze the cost function, its behavior is similar to that of the bandwidth consumption function: because the same link is used to reach several destinations, only one packet is transmitted, and the cost is associated with the transmission of that packet. Therefore, the cost function in multicast transmissions can be expressed as
$$\min \sum_{f \in F} \sum_{(i,j) \in E} w_{ij} \cdot \max_{t \in T_f} \left( X_{ij}^{tf} \right)$$

where $w_{ij}$ is the cost of link (i, j).
4.2.5 Packet Loss Rate

The packet loss rate function consists of calculating and obtaining the minimum end-to-end packet loss rate. Being a probabilistic value at each link, this function is multiplicative. Figure 4.18 shows a network used to perform the analysis with the packet loss rate function. In this figure one can see that a variable $plr_{ij}$, the packet loss rate between nodes i and j, has been added to each link. As in the previous case, in Figure 4.18 one can see that there are two possible paths to carry the flow of information f1 (FTP) between origin node 1 and destination node 5. The first path is formed by link (1, 2), with a packet loss rate $plr_{12}$ of 0.5, and link (2, 5), with a packet loss rate $plr_{25}$ of 0.6. Because this is a multiplicative function, the value for packets transmitted through this path would be 0.5·0.6 = 0.3. The second path is formed by links (1, 3), with a packet loss rate of 0.2; (3, 4), with a packet loss rate of 0.2; and (4, 5), with a packet loss rate of 0.2; in this case, the end-to-end value would be 0.2·0.2·0.2 = 0.008. If we use the end-to-end packet loss rate as the optimization function, the second path would be selected to transmit the packets from origin node 1 to destination node 5. To mathematically describe the end-to-end packet loss rate function we will use an example (Figure 4.18). In this example, the packet loss rate for the first path would be given by $(plr_{12} \cdot X_{12}^1)(plr_{25} \cdot X_{25}^1)$. If this is the path used, the values of $X_{12}^1$ and $X_{25}^1$ would be 1, and therefore the calculation for this path would be (0.5·1)·(0.6·1) = 0.3. Similarly, for the second path, the value of the function would be given by $(plr_{13} \cdot X_{13}^1)(plr_{34} \cdot X_{34}^1)(plr_{45} \cdot X_{45}^1)$; now, if this is the path used, the value
[Figure: network with link packet loss rates $plr_{12} = 0.5$, $plr_{25} = 0.6$, $plr_{13} = 0.2$, $plr_{34} = 0.2$, $plr_{45} = 0.2$; FTP flow f1; the selected solution sets $X_{13}^1 = X_{34}^1 = X_{45}^1 = 1$ and $X_{12}^1 = X_{25}^1 = 0$; source s = 1, destination t = 5]

Figure 4.18 Packet loss rate function.
of these $X_{ij}^1$ variables would be 1; hence, the calculation would be (0.2·1)·(0.2·1)·(0.2·1) = 0.008. When we want to find the shortest route based on the packet loss rate function, only one path will be selected; in the example previously shown, it would be the second path, given by links (1, 3), (3, 4), and (4, 5). Figure 4.18 shows the solution found when minimizing the packet loss rate: the values of the variables $X_{ij}^f$ of this path are 1, whereas for the other path the value of these variables is 0. The function to minimize the packet loss rate may be expressed in the following way:
$$\min \sum_{f \in F} \prod_{(i,j) \in E} plr_{ij} \cdot X_{ij}^f$$
$\prod_{(i,j) \in E} plr_{ij} \cdot X_{ij}^f$ denotes the end-to-end packet loss rate value of the possible paths for flow f (only one will be taken). The packet loss rate at every link may be calculated in the following way: if $Packet_{tx\_i}$ is the number of packets transmitted by node i and $Packet_{rx\_j}$ is the number of packets received by node j over link (i, j), we can express the packet loss rate at link (i, j) as

$$plr_{ij} = 1 - \frac{Packet_{rx\_j}}{Packet_{tx\_i}}$$
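The two path evaluations above can be sketched as follows (illustrative only, not from the book; loss rates taken from Figure 4.18):

```python
from math import prod

# Sketch: multiplicative packet loss rate function of Figure 4.18.
PLR = {(1, 2): 0.5, (2, 5): 0.6, (1, 3): 0.2, (3, 4): 0.2, (4, 5): 0.2}
PATHS = [[(1, 2), (2, 5)], [(1, 3), (3, 4), (4, 5)]]

def path_plr(path):
    # multiplicative composition over the links of the path
    return prod(PLR[link] for link in path)

best = min(PATHS, key=path_plr)
print(best, round(path_plr(best), 6))  # the three-link path, value 0.008
```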
4.2.6 Blocking Probability

4.2.6.1 Unicast Transmission

The next function that we can optimize is the blocking probability function. This function behaves like the packet loss function, with the difference that while the packet loss function analyzes packets received versus packets transmitted, the blocking probability function analyzes the number of connections (flows f) that can actually be transmitted versus the total number of flows requesting transmission (|F|). To calculate the value of the blocking probability function, we can establish the following definitions. If $Connection_{total} = |F|$ is the total number of connections that one wishes to establish and $Connection_{real}$ is the number of connections that can actually be established due to the
Routing Optimization in Computer Networks 䡲 91
network’s bandwidth resources, we can express the blocking probability as 1 – BP, where
$$BP = \frac{Connection_{real}}{Connection_{total}}$$
The expression $Connection_{real}$ can be defined as a function of the variables $X_{ij}^f$:

$$Connection_{real} = \sum_{f \in F} \max_{(i,j) \in E} \left( X_{ij}^f \right)$$

Optimization of this function is associated with maximum flow theory. The max function calculates the maximum value of the variables $X_{ij}^f$ on the path from origin node s to destination node t for every flow f. If for a flow f there is a path to transmit that flow, the value of the corresponding variables $X_{ij}^f$ will be 1, and therefore the value of $\max_{(i,j) \in E}(X_{ij}^f)$ will also be 1. But if for a flow f one cannot build a path due to lack of network bandwidth availability, the value of the variables $X_{ij}^f$ will be 0, and therefore the value of $\max_{(i,j) \in E}(X_{ij}^f)$ will also be 0.
$\sum_{f \in F}$ denotes all the flows that will be transmitted over the network, and therefore tells us how many flows can actually be transmitted with the current characteristics of the bandwidth resources. Because $Connection_{real} \le Connection_{total}$, we have $\sum_{f \in F} \max_{(i,j) \in E}(X_{ij}^f) \le |F|$, and therefore the function to optimize may be

$$\min\ (1 - BP), \text{ where } BP = \frac{Connection_{real}}{Connection_{total}}$$

or

$$\max\ Connection_{real} = \sum_{f \in F} \max_{(i,j) \in E} \left( X_{ij}^f \right)$$
4.2.6.2 Multicast Transmission

For this function we can refer to the analysis performed for unicast transmissions. The function to be optimized consists of maximizing the number of connections, that is, the number of flows f that can be transmitted over the network, and the function to minimize the blocking probability for multicast transmissions can be expressed in the following way:
$$\min\ (1 - BP), \text{ where } BP = \frac{Connection_{real}}{Connection_{total}}$$

or

$$\max\ Connection_{real} = \sum_{f \in F} \max_{(i,j) \in E,\ t \in T_f} \left( X_{ij}^{tf} \right)$$

where $Connection_{total} = |F|$ is defined as in the unicast transmission case.
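As a hedged illustration of how $Connection_{real}$ is counted from the X variables (the two-flow scenario below is hypothetical, not from the book):

```python
# Sketch: Connection_real counted from the X variables.
# X[f][(i, j)] holds the link variables of flow f (1 on the links of its path).
X = {
    "f1": {(1, 2): 1, (2, 5): 1},  # flow f1 was routed
    "f2": {},                      # flow f2 could not be routed: all X = 0
}
total = len(X)                                        # |F|
real = sum(max(xvars.values(), default=0) for xvars in X.values())
bp = real / total                                     # BP = real / total
print(real, 1 - bp)  # 1 connection routed, blocking probability 0.5
```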
4.2.7 Maximum Link Utilization

4.2.7.1 Unicast Transmission

When we analyzed the bandwidth consumption function, we saw that the path given by link {(1, 5)} was a nonfeasible solution because the capacity of the link was 16 Kbps and the demand of flow f1 was 32 Kbps. Paths {(1, 2), (2, 5)} and {(1, 3), (3, 4), (4, 5)}, however, were feasible solutions because the capacity of all their links was 64 Kbps, and flow f1 could be transmitted through them. But if the origin node wishes to transmit an application f1 at a rate of 128 Kbps (Figure 4.19), no single path of this network can carry it, and the problem would have no solution. If we somehow split flow f1 into two subflows of, for example, 64 Kbps each, one subflow could be transmitted through path {(1, 2), (2, 5)} and the other through path {(1, 3), (3, 4), (4, 5)}, and then there would be a feasible solution. Note that path {(1, 5)} is not used in this split. This technique is called load balancing, and it is precisely through the maximum link utilization function that one can obtain solutions of this type.
[Figure: network with capacities $c_{12} = c_{25} = c_{13} = c_{34} = c_{45} = 64$ Kbps and $c_{15} = 16$ Kbps; FTP flow f1 with $bw_{f1} = 128$ Kbps; source s = 1, destination t = 5; the solution sets $X_{12}^1 = X_{25}^1 = X_{13}^1 = X_{34}^1 = X_{45}^1 = 0.5$ and $X_{15}^1 = 0$]
Figure 4.19 Maximum link utilization function.
In this case, the vector of variables $X_{ij}^f \in \mathbb{R}$, $X_{ij}^f \in [0, 1]$, indicates the fraction of flow f that is transmitted through link (i, j). To mathematically describe the maximum link utilization function we will use an example (Figure 4.19). Here, the maximum link utilization for the first path would be given by $\max\{bw_1 \cdot X_{12}^1 / c_{12},\ bw_1 \cdot X_{25}^1 / c_{25}\}$. If this is the path used, the value of variables $X_{12}^1$ and $X_{25}^1$ would be 0.5, because in this example each path transmits 50 percent of the demand of flow f1, and therefore the value for this path would be max{(128·0.5/64), (128·0.5/64)} = 1. Similarly, for the second path, the value of the function would be given by $\max\{bw_1 \cdot X_{13}^1 / c_{13},\ bw_1 \cdot X_{34}^1 / c_{34},\ bw_1 \cdot X_{45}^1 / c_{45}\}$. If this is the path used, the value of variables $X_{13}^1$, $X_{34}^1$, and $X_{45}^1$ would also be 0.5, and therefore the value would be max{(128·0.5/64), (128·0.5/64), (128·0.5/64)} = 1. Because no single path has enough capacity to transmit the 128-Kbps demand of flow f1 on its own, the maximum link utilization for the whole network would be $\alpha = \max(MLU\_path1, MLU\_path2) = \max(1, 1) = 1$, where MLU is maximum link utilization. In this solution the links of paths {(1, 2), (2, 5)} and {(1, 3), (3, 4), (4, 5)} are used completely, and therefore MLU = 1 on them. Link (1, 5), however, is not used: the value of variable $X_{15}^1$ would be 0, and consequently its utilization is 0. The function to be optimized consists of minimizing α, the maximum utilization over the links of the whole network. This function may be expressed in the following way:
$$\min\ \alpha$$

where

$$\alpha = \max \left\{ \alpha_{ij} \right\}, \text{ where } \alpha_{ij} = \frac{\sum_{f \in F} bw_f \cdot X_{ij}^f}{c_{ij}}$$

$\sum_{f \in F} bw_f \cdot X_{ij}^f$ denotes the total traffic demand of all flows f ∈ F that are transmitted through link (i, j). Dividing this value by the capacity of link (i, j), we obtain the percentage of usage of link (i, j), denoted $\alpha_{ij}$. Because the objective is to minimize the maximum usage value, we define $\alpha = \max\{\alpha_{ij}\}$. Suppose we have a link (i, j) with a capacity $c_{ij}$ (Figure 4.20a). When the maximum link utilization function is used, one establishes a new upper limit $\alpha \cdot c_{ij}$ on this link so that it does not use its full capacity (Figure 4.20b). This way, when the new upper limit would be exceeded, the flow of information is transmitted through other paths instead of using the total capacity of this link. In this way we perform load balancing.
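A short sketch of the α computation for the load-balanced solution of Figure 4.19 (illustrative only; the fractional X values are those shown in the figure):

```python
# Sketch: maximum link utilization alpha when flow f1 (128 Kbps)
# is split 50/50 over the two feasible paths of Figure 4.19.
CAP = {(1, 2): 64, (2, 5): 64, (1, 5): 16, (1, 3): 64, (3, 4): 64, (4, 5): 64}
BW = 128
X = {(1, 2): 0.5, (2, 5): 0.5, (1, 5): 0.0, (1, 3): 0.5, (3, 4): 0.5, (4, 5): 0.5}

# alpha_ij = (sum of demand routed over the link) / capacity
alpha_ij = {link: BW * X[link] / CAP[link] for link in CAP}
alpha = max(alpha_ij.values())
print(alpha, alpha_ij[(1, 5)])  # 1.0 0.0
```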
4.2.7.2 Multicast Transmission

To analyze this function we can resort to the bandwidth consumption function in multicast transmissions and the maximum link utilization function in unicast transmissions. In multicast transmissions, we had expressed the bandwidth consumption function as

$$\min \sum_{f \in F} \sum_{(i,j) \in E} bw_f \cdot \max_{t \in T_f} \left( X_{ij}^{tf} \right)$$

In unicast transmissions, we had expressed the maximum link utilization function as $\min\ \alpha$, where
[Figure: a link with its full capacity $c_{ij}$ and the same link limited to $\alpha \cdot c_{ij}$]

Figure 4.20 (a) Normal channel. (b) Channel with the function.
$$\alpha = \max \left\{ \alpha_{ij} \right\}, \text{ where } \alpha_{ij} = \frac{\sum_{f \in F} bw_f \cdot X_{ij}^f}{c_{ij}}$$
As discussed for the bandwidth consumption function in multicast transmissions, when the same link is used to send information to more than one destination, only one packet is sent through that link. Therefore, one has to add the function $\max_{t \in T_f}(X_{ij}^{tf})$ to the maximum link utilization function for multicast transmissions to express this behavior. Hence, the maximum link utilization function in multicast transmissions is represented by the following mathematical statement:
$$\min\ \alpha$$

where

$$\alpha = \max \left\{ \alpha_{ij} \right\}, \text{ where } \alpha_{ij} = \frac{\sum_{f \in F} bw_f \cdot \max_{t \in T_f} \left( X_{ij}^{tf} \right)}{c_{ij}}$$
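The multicast $\alpha_{ij}$ can be sketched the same way, combining the per-link max over destinations with the utilization ratio (tree 2 of Figure 4.17, all capacities 64 Kbps as in that figure; layout is illustrative):

```python
# Sketch: multicast maximum link utilization for tree 2 of Figure 4.17.
BW, CAP = 32, 64  # demand of f1 in Kbps, capacity of every link in Kbps
X = {5: {(1, 4): 1, (4, 5): 1}, 6: {(1, 4): 1, (4, 6): 1}}
links = {link for path in X.values() for link in path}

# alpha_ij = bw_f * max_t(X_ij^tf) / c_ij; alpha is the max over links
alpha = max(BW * max(X[t].get(link, 0) for t in X) / CAP for link in links)
print(alpha)  # 0.5
```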
4.2.8 Other Multicast Functions

We can define other functions for multicast transmission based on the functions mentioned above, for example, hop count average, maximal hop count, maximal hop count variation, average delay, maximal delay, maximal delay variation, average cost, maximal cost, etc. These new functions can be associated with the functions analyzed in previous sections, but in real situations they may be very useful for network administrators and designers.
4.2.8.1 Hop Count Average

Hop count average consists of minimizing the average number of hops. Because in multicast transmissions there are different paths to reach the destination nodes, the idea is to minimize the average over the shortest paths to each destination.
This function may be expressed in the following way:
$$\min \frac{Total\_Hop\_Count}{\sum_{f \in F} |T_f|} = \min \frac{\sum_{f \in F} \sum_{t \in T_f} \sum_{(i,j) \in E} X_{ij}^{tf}}{\sum_{f \in F} |T_f|}$$
where $|T_f|$ denotes the cardinality of $T_f$, which is the number of destination nodes for flow f.
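A small sketch of the average (first tree of Figure 4.17; the dictionary layout is an assumption for illustration):

```python
# Sketch: hop count average for the first tree of Figure 4.17
# (paths {(1, 2), (2, 5)} to t = 5 and {(1, 3), (3, 6)} to t = 6).
trees = {"f1": {5: [(1, 2), (2, 5)], 6: [(1, 3), (3, 6)]}}

total_hops = sum(len(path) for dests in trees.values() for path in dests.values())
total_dests = sum(len(dests) for dests in trees.values())  # sum of |Tf|
print(total_hops / total_dests)  # 2.0 hops per destination on average
```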
4.2.8.2 Maximal Hop Count This function is useful for the maximum requirements of quality of service (QoS). It may be expressed in the following way:
⎫ ⎧ ⎪ ⎪ min max ⎨ X ijtf ⎬ f ∈F , ⎪⎭ t ∈T f ⎪ ⎩(i , j )∈E
∑
4.2.8.3 Maximal Hop Count Variation

This function is useful when all destination nodes must receive the information within a short time interval. It may be expressed in the following way:

$$\min \max_{f \in F} \left\{ H_f \right\}$$

where

$$H_f = \max_{t \in T_f} \left\{ \sum_{(i,j) \in E} X_{ij}^{tf} \right\} - \min_{t \in T_f} \left\{ \sum_{(i,j) \in E} X_{ij}^{tf} \right\}$$
4.2.8.4 Average Delay

This function consists of minimizing the average of the delays to each destination. Because in multicast transmissions there are different paths to reach the destination nodes, the idea is to minimize the average over the paths with the smallest delay. This function may be expressed in the following way:
$$\min \frac{Total\_Delay}{\sum_{f \in F} |T_f|} = \min \frac{\sum_{f \in F} \sum_{t \in T_f} \sum_{(i,j) \in E} d_{ij} \cdot X_{ij}^{tf}}{\sum_{f \in F} |T_f|}$$

where $|T_f|$ denotes the cardinality of $T_f$, that is, the number of destination nodes for flow f.
4.2.8.5 Maximal Delay

This function is useful for maximum quality-of-service (QoS) requirements. It may be expressed in the following way:

$$\min \max_{f \in F,\ t \in T_f} \left\{ \sum_{(i,j) \in E} d_{ij} \cdot X_{ij}^{tf} \right\}$$
4.2.8.6 Maximal Delay Variation

This function is useful when all the destination nodes must receive the information within a short time interval. It may be expressed in the following way:

$$\min \max_{f \in F} \left\{ \Delta_f \right\}$$

where
$$\Delta_f = \max_{t \in T_f} \left\{ \sum_{(i,j) \in E} d_{ij} \cdot X_{ij}^{tf} \right\} - \min_{t \in T_f} \left\{ \sum_{(i,j) \in E} d_{ij} \cdot X_{ij}^{tf} \right\}$$
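For instance, $\Delta_f$ for the third tree of Figure 4.17 can be computed as follows (an illustrative sketch; the delays are those of the figure, 2 ms per link toward t = 5 and 5 ms per link toward t = 6):

```python
# Sketch: maximal delay variation for the third tree of Figure 4.17:
# path to t=5 via {(1, 2), (2, 5)} (2+2 = 4 ms),
# path to t=6 via {(1, 4), (4, 6)} (5+5 = 10 ms).
DELAY = {(1, 2): 2, (2, 5): 2, (1, 4): 5, (4, 6): 5}
paths = {5: [(1, 2), (2, 5)], 6: [(1, 4), (4, 6)]}

per_dest = [sum(DELAY[link] for link in path) for path in paths.values()]
delta_f = max(per_dest) - min(per_dest)
print(delta_f)  # 6 ms between the earliest and the latest destination
```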
4.2.8.7 Average Cost

This function consists of minimizing the average of the costs to each destination. It may be expressed in the following way:

$$\min \frac{Total\_Cost}{\sum_{f \in F} |T_f|} = \min \frac{\sum_{f \in F} \sum_{(i,j) \in E} w_{ij} \cdot \max_{t \in T_f} \left( X_{ij}^{tf} \right)}{\sum_{f \in F} |T_f|}$$
4.2.8.8 Maximal Cost

This function is useful for maximum quality-of-service (QoS) requirements. It may be expressed in the following way:

$$\min \max_{f \in F} \left\{ \sum_{(i,j) \in E} \max_{t \in T_f} \left( X_{ij}^{tf} \right) \right\}$$
4.3 Constraints

In this section we will introduce the fundamental constraints that are necessary for the solutions found to be feasible.
4.3.1 Unicast Transmission

The first three constraints are the flow conservation constraints, which guarantee that the solutions obtained are valid paths from the origin s to the destination t. Variable $X_{ij}^f$ tells us whether link (i, j) is used by flow f. We will consider this variable positive when the link leaves the node, and negative in the opposite case.
The first constraint ensures that for every flow f only one path exits from origin node s. The equation that models this constraint is given by the following expression:
$$\sum_{(i,j) \in E} X_{ij}^f = 1,\quad f \in F,\ i = s$$
In Figure 4.21, adding the links that exit origin node s = 1, we obtain $X_{12}^1 + X_{13}^1 = 1$, and therefore this constraint is met. The second constraint ensures that for every flow f only one path reaches the destination node t. The equation that models this constraint is given by the following expression:
$$\sum_{(j,i) \in E} X_{ji}^f = -1,\quad i = t,\ f \in F$$
In Figure 4.21, adding the links that reach destination node t = 5, we obtain $-X_{25}^1 - X_{45}^1 = -1$, and therefore this constraint is met. The third constraint ensures that for every flow f, everything that reaches an intermediate node (a node that is neither origin node s nor destination node t) also exits that node through another link. That is, if for every flow f we add the exit links and subtract the entry links of an intermediate node, the value must be 0. The equation that models this constraint is given by the following expression:
$$\sum_{(i,j) \in E} X_{ij}^f - \sum_{(j,i) \in E} X_{ji}^f = 0,\quad f \in F,\ i \ne s,\ i \ne t$$
In Figure 4.21 the intermediate nodes are nodes 2, 3, and 4. For node 2, if we add its exits and subtract its entries, we obtain $X_{25}^1 - X_{12}^1 = 0$.
[Figure: FTP flow f1 from source s = 1 to destination t = 5; the selected path sets $X_{12}^1 = X_{25}^1 = 1$, while $X_{13}^1 = X_{34}^1 = X_{45}^1 = 0$]

Figure 4.21 Unicast constraints.
For node 3, we would obtain $X_{34}^1 - X_{13}^1 = 0$, and, finally, for node 4, we obtain $X_{45}^1 - X_{34}^1 = 0$. Therefore, this constraint is met for all intermediate nodes. On the other hand, because in the models shown we have analyzed the traffic demand necessary for each flow of data and the capacity of the links, one needs to define a constraint to prevent sending a traffic demand larger than the link's capacity. This constraint can be expressed in the following way:
$$\sum_{f \in F} bw_f \cdot X_{ij}^f \le c_{ij},\quad (i,j) \in E$$

$\sum_{f \in F} bw_f \cdot X_{ij}^f$ denotes the total demand of all flows f in bits per second (bps) that is transmitted over a link $(i,j) \in E$. The expression $\le c_{ij}$ does not allow the demand to be higher than the capacity $c_{ij}$ of the link.
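The three conservation constraints can be checked mechanically for any candidate solution; the following sketch (not from the book) validates the path of Figure 4.21:

```python
# Sketch: verifying the flow conservation constraints for the solution
# of Figure 4.21 (path {(1, 2), (2, 5)} for flow f1, s = 1, t = 5).
NODES, S, T = [1, 2, 3, 4, 5], 1, 5
X = {(1, 2): 1, (2, 5): 1, (1, 3): 0, (3, 4): 0, (4, 5): 0}

def net_flow(i):
    """Exit links minus entry links at node i (positive out, negative in)."""
    out = sum(x for (a, b), x in X.items() if a == i)
    inn = sum(x for (a, b), x in X.items() if b == i)
    return out - inn

assert net_flow(S) == 1                  # c1: one path leaves the source
assert net_flow(T) == -1                 # c2: one path reaches the destination
assert all(net_flow(i) == 0 for i in NODES if i not in (S, T))  # c3
print("valid path")
```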
4.3.2 Multicast Transmission

Just as in unicast transmissions, the first three constraints ensure that the solutions obtained are valid paths from origin s to every one of the destinations $t \in T_f$. We will also consider that variable $X_{ij}^{tf}$ is positive when the link exits the node, and negative in the opposite case. The first constraint ensures that for every destination $t \in T_f$ of every flow f, only one path leaves origin node s. The equation that models this constraint is given by the following expression:
$$\sum_{(i,j) \in E} X_{ij}^{tf} = 1,\quad t \in T_f,\ f \in F,\ i = s$$
In Figure 4.22, for example, variables $X_{ij}^{tf}$ have not been placed on link (1, 3) for destination node 5, because it is impossible to reach destination node 5 through that link. In the optimization process, the value of these variables $X_{ij}^{tf}$ would be 0. This constraint is met because, adding the exit links of origin node s = 1 for destination node t = 5, we obtain $X_{12}^{51} + X_{14}^{51} = 1$, and adding the exit links of origin node s = 1 for destination node t = 6, we obtain $X_{13}^{61} + X_{14}^{61} = 1$. The second constraint ensures that for every flow f only one path reaches each of the destination nodes t. The equation that models this constraint is given by the following expression:
[Figure: video flow f1 from source s = 1 to destinations t1 = 5 and t2 = 6; the selected tree sets $X_{14}^{51} = X_{45}^{51} = X_{14}^{61} = X_{46}^{61} = 1$, while $X_{12}^{51} = X_{25}^{51} = X_{13}^{61} = X_{36}^{61} = 0$]
Figure 4.22 Multicast constraints.
$$\sum_{(j,i) \in E} X_{ji}^{tf} = -1,\quad i = t,\ t \in T_f,\ f \in F$$
In Figure 4.22, adding the entry links of the destination nodes, we obtain $-X_{25}^{51} - X_{45}^{51} = -1$ for t = 5 and $-X_{36}^{61} - X_{46}^{61} = -1$ for t = 6, and therefore this constraint is met. The third constraint ensures that for every flow f, everything that enters an intermediate node also exits that node through another link. This means that if we add the exit links and subtract the entry links in an intermediate node, the value must be 0 for every flow f. The equation that models this constraint is given by the following expression:
$$\sum_{(i,j) \in E} X_{ij}^{tf} - \sum_{(j,i) \in E} X_{ji}^{tf} = 0,\quad t \in T_f,\ f \in F,\ i \ne s,\ i \notin T_f$$
In Figure 4.22 the intermediate nodes are nodes 2, 3, and 4. For node 2, if we add the exits and subtract the entries, we obtain $X_{25}^{51} - X_{12}^{51} = 0$. For node 3, we obtain $X_{36}^{61} - X_{13}^{61} = 0$, and, finally, for node 4, we obtain $X_{45}^{51} - X_{14}^{51} = 0$ and $X_{46}^{61} - X_{14}^{61} = 0$. Therefore, this constraint is met for all intermediate nodes. With the previous constraints we ensure that for every flow the multicast transmission generates a single feasible path between the origin node s and every one of the destination nodes t. It is important to note that the set of all these paths creates a tree. On the other hand, because in the models discussed we have analyzed the traffic demand needed for each flow of data and the capacity of the channels, one needs to define a constraint to avoid sending a traffic demand larger than the capacity of the link. This constraint can be expressed in the following way:
$$\sum_{f \in F} bw_f \cdot \max_{t \in T_f} \left( X_{ij}^{tf} \right) \le c_{ij},\quad (i,j) \in E$$

$\sum_{f \in F} bw_f \cdot \max_{t \in T_f}(X_{ij}^{tf})$ denotes the complete demand of all multicast flows f in bps that is transmitted over a link $(i,j) \in E$. This mathematical expression was analyzed in the bandwidth consumption function for multicast transmissions. The expression $\le c_{ij}$ does not allow this flow demand to exceed the capacity $c_{ij}$ of the link. In certain works some optimization functions are used as constraints; this is true of the maximal delay function or the maximal hop count function. But because the purpose of this book is to perform a multi-objective analysis, these functions will be used exclusively as objective functions and not as constraints.
4.4 Functions and Constraints

In this section we will summarize the functions and constraints discussed above.
4.4.1 Unicast Transmissions

Table 4.3 is a summary of the functions previously defined for the unicast transmission case.
4.4.2 Multicast Transmissions

Table 4.4 is a summary of the functions previously defined for the multicast transmission case.
4.5 Single-Objective Optimization Modeling and Solution

In this section we will introduce various models to solve multi-objective problems using traditional methods that convert the multi-objective problem into a single-objective approximation. In the first cases, we will show how the single-objective approximation is formulated through the different methods, as well as the source code to solve the problem through a solver. For problems with more functions we will only use the weighted
Table 4.3 Objective Functions

f1 Hop count: $\min \sum_{f \in F} \sum_{(i,j) \in E} X_{ij}^f$

f2 End-to-end delay: $\min \sum_{f \in F} \sum_{(i,j) \in E} d_{ij} \cdot X_{ij}^f$

f3 Cost: $\min \sum_{f \in F} \sum_{(i,j) \in E} w_{ij} \cdot X_{ij}^f$

f4 Bandwidth consumption: $\min \sum_{f \in F} \sum_{(i,j) \in E} bw_f \cdot X_{ij}^f$

f5 Packet loss rate: $\min \sum_{f \in F} \prod_{(i,j) \in E} plr_{ij} \cdot X_{ij}^f$

f6 Blocking probability: $\min\ (1 - BP)$, where $BP = \frac{Connection_{real}}{Connection_{total}}$, or $\max\ Connection_{real} = \sum_{f \in F} \max_{(i,j) \in E} (X_{ij}^f)$

f7 Maximum link utilization: $\min\ \alpha$, where $\alpha = \max\{\alpha_{ij}\}$ and $\alpha_{ij} = \frac{\sum_{f \in F} bw_f \cdot X_{ij}^f}{c_{ij}}$

Constraints

c1 Source node: $\sum_{(i,j) \in E} X_{ij}^f = 1,\ f \in F,\ i = s$

c2 Destination node: $\sum_{(j,i) \in E} X_{ji}^f = -1,\ i = t,\ f \in F$

c3 Intermediate node: $\sum_{(i,j) \in E} X_{ij}^f - \sum_{(j,i) \in E} X_{ji}^f = 0,\ f \in F,\ i \ne s,\ i \ne t$

c4 Link capacity: $\sum_{f \in F} bw_f \cdot X_{ij}^f \le c_{ij},\ (i,j) \in E$
Table 4.4 Objective Functions

f1 Hop count: $\min \sum_{f \in F} \sum_{t \in T_f} \sum_{(i,j) \in E} X_{ij}^{tf}$

f2 End-to-end delay: $\min \sum_{f \in F} \sum_{t \in T_f} \sum_{(i,j) \in E} d_{ij} \cdot X_{ij}^{tf}$

f3 Cost: $\min \sum_{f \in F} \sum_{(i,j) \in E} w_{ij} \cdot \max_{t \in T_f} (X_{ij}^{tf})$

f4 Bandwidth consumption: $\min \sum_{f \in F} \sum_{(i,j) \in E} bw_f \cdot \max_{t \in T_f} (X_{ij}^{tf})$

f5 Blocking probability: $\min\ (1 - BP)$, where $BP = \frac{Connection_{real}}{Connection_{total}}$, or $\max\ Connection_{real} = \sum_{f \in F} \max_{(i,j) \in E,\ t \in T_f} (X_{ij}^{tf})$

f6 Maximum link utilization: $\min\ \alpha$, where $\alpha = \max\{\alpha_{ij}\}$ and $\alpha_{ij} = \frac{\sum_{f \in F} bw_f \cdot \max_{t \in T_f} (X_{ij}^{tf})}{c_{ij}}$

Other multicast functions

f7 Hop count average: $\min \frac{Total\_Hop\_Count}{\sum_{f \in F} |T_f|} = \min \frac{\sum_{f \in F} \sum_{t \in T_f} \sum_{(i,j) \in E} X_{ij}^{tf}}{\sum_{f \in F} |T_f|}$

f8 Maximal hop count: $\min \max_{f \in F,\ t \in T_f} \left\{ \sum_{(i,j) \in E} X_{ij}^{tf} \right\}$

f9 Maximal hop count variation: $\min \max_{f \in F} \{H_f\}$, where $H_f = \max_{t \in T_f} \left\{ \sum_{(i,j) \in E} X_{ij}^{tf} \right\} - \min_{t \in T_f} \left\{ \sum_{(i,j) \in E} X_{ij}^{tf} \right\}$

f10 Average delay: $\min \frac{Total\_Delay}{\sum_{f \in F} |T_f|} = \min \frac{\sum_{f \in F} \sum_{t \in T_f} \sum_{(i,j) \in E} d_{ij} \cdot X_{ij}^{tf}}{\sum_{f \in F} |T_f|}$

f11 Maximal delay: $\min \max_{f \in F,\ t \in T_f} \left\{ \sum_{(i,j) \in E} d_{ij} \cdot X_{ij}^{tf} \right\}$

f12 Maximal delay variation: $\min \max_{f \in F} \{\Delta_f\}$, where $\Delta_f = \max_{t \in T_f} \left\{ \sum_{(i,j) \in E} d_{ij} \cdot X_{ij}^{tf} \right\} - \min_{t \in T_f} \left\{ \sum_{(i,j) \in E} d_{ij} \cdot X_{ij}^{tf} \right\}$

f13 Average cost: $\min \frac{Total\_Cost}{\sum_{f \in F} |T_f|} = \min \frac{\sum_{f \in F} \sum_{(i,j) \in E} w_{ij} \cdot \max_{t \in T_f} (X_{ij}^{tf})}{\sum_{f \in F} |T_f|}$

f14 Maximal cost: $\min \max_{f \in F} \left\{ \sum_{(i,j) \in E} \max_{t \in T_f} (X_{ij}^{tf}) \right\}$

Constraints

c1 Source node: $\sum_{(i,j) \in E} X_{ij}^{tf} = 1,\ t \in T_f,\ f \in F,\ i = s$

c2 Destination node: $\sum_{(j,i) \in E} X_{ji}^{tf} = -1,\ i = t,\ t \in T_f,\ f \in F$

c3 Intermediate node: $\sum_{(i,j) \in E} X_{ij}^{tf} - \sum_{(j,i) \in E} X_{ji}^{tf} = 0,\ t \in T_f,\ f \in F,\ i \ne s,\ i \notin T_f$

c4 Link capacity: $\sum_{f \in F} bw_f \cdot \max_{t \in T_f} (X_{ij}^{tf}) \le c_{ij},\ (i,j) \in E$
sum method from among the traditional methods explained in previous chapters. In Chapter 5, we will use only metaheuristics to solve the problems discussed.
4.5.1 Unicast Transmission Using Hop Count and Delay

To understand the multi-objective optimization process, in this section we will use several traditional methods to analyze the hop count (f1) and delay (f2) functions in a unicast transmission over a very simple network topology (Figure 4.23). In the results, we will add the values of the bandwidth consumption function (f4) so that these results may later be analyzed when function f4 is also optimized.
4.5.1.1 Weighted Sum

The first traditional method we will use is the weighted sum method. To solve the problem using this method, we must rewrite the functions that we are going to optimize.

Figure 4.23 Example 1: a five-node topology carrying an FTP flow f1 with bw_f1 = 32 Kbps from node 1 to node 5. Link delays (ms): d12 = 5, d13 = 3, d15 = 20, d25 = 6, d34 = 2, d45 = 2. Link capacities: 64 Kbps on every link except c15 = 16 Kbps.
\min Z = r_1 \sum_{f \in F} \sum_{(i,j) \in E} X_{ij}^{f} + r_2 \sum_{f \in F} \sum_{(i,j) \in E} d_{ij} \cdot X_{ij}^{f} \qquad (1)

subject to

\sum_{(i,j) \in E} X_{ij}^{f} = 1, \quad f \in F,\ i = s

\sum_{(j,i) \in E} X_{ji}^{f} = -1, \quad i = t,\ f \in F

\sum_{(i,j) \in E} X_{ij}^{f} - \sum_{(j,i) \in E} X_{ji}^{f} = 0, \quad f \in F,\ i \neq s,\ i \neq t

\sum_{f \in F} bw_f \cdot X_{ij}^{f} \leq c_{ij}, \quad (i,j) \in E

X_{ij}^{f} \in \{0, 1\}, \quad 0 \leq r_i \leq 1, \quad \sum_{i=1}^{2} r_i = 1
To solve this problem, we can use a solver such as CPLEX, which handles mixed integer programming (MIP) problems. Figure 4.24 shows the three possible values. Solution {(1, 5)} is not feasible due to the link capacity constraint (1.c.4): the demand of f1 is 32 Kbps, while the capacity of link (1, 5) is only 16 Kbps. If this constraint did not exist, path {(1, 5)} would be the best value at the hop-count level. For different values of r_i we obtain the solutions in Table 4.5.
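At this scale the weighted-sum optimum can also be reproduced without a MIP solver, by enumerating the simple paths of Figure 4.23 directly. A minimal Python sketch (the topology data is read off the figure; `paths` and `weighted_sum` are our own helper names):

```python
# Brute-force weighted-sum routing on the Figure 4.23 topology.
# Link data: (i, j) -> (delay_ms, capacity_kbps); flow f1: 32 Kbps, node 1 -> 5.
links = {(1, 2): (5, 64), (1, 3): (3, 64), (1, 5): (20, 16),
         (2, 5): (6, 64), (3, 4): (2, 64), (4, 5): (2, 64)}
BW = 32  # demand of flow f1 (Kbps)

def paths(src, dst, seen=()):
    """Enumerate simple paths as link lists, skipping links that cannot
    carry the demand (constraint 1.c.4 rules out link (1, 5))."""
    if src == dst:
        yield []
        return
    for (i, j), (_, cap) in links.items():
        if i == src and j not in seen and cap >= BW:
            for rest in paths(j, dst, seen + (src,)):
                yield [(i, j)] + rest

def weighted_sum(r1, r2):
    """min Z = r1 * hops + r2 * delay over all feasible paths (model (1))."""
    return min(paths(1, 5),
               key=lambda p: r1 * len(p) + r2 * sum(links[e][0] for e in p))

best = weighted_sum(1, 0)   # pure hop-count objective, as in Table 4.5, sol 1
print(best, len(best), sum(links[e][0] for e in best))
```

With r1 = 1 this returns the two-hop path {(1, 2), (2, 5)} and with r2 = 1 the three-hop, 7-ms path, matching Table 4.5.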
4.5.1.2 ε-Constraint

The second traditional method that we will use is ε-constraint. To solve the problem using this method we must rewrite the functions. Because in this case we have two objective functions, the model may be rewritten in two ways:
Figure 4.24 Optimal values in multi-objective optimization (optimal Pareto front of hops versus delay in ms; the point corresponding to path {(1, 5)} is marked as an infeasible solution).
Table 4.5 Weighted Sum Solutions

Sol | r1  | r2  | f1 (hops) | f2 (ms) | f4 (Kbps) | Path
1   | 1   | 0   | 2         | 11      | 64        | {(1, 2), (2, 5)}
2   | 0   | 1   | 3         | 7       | 96        | {(1, 3), (3, 4), (4, 5)}
3   | 0.5 | 0.5 | 3         | 7       | 96        | {(1, 3), (3, 4), (4, 5)}
4   | 0.1 | 0.9 | 3         | 7       | 96        | {(1, 3), (3, 4), (4, 5)}
5   | 0.9 | 0.1 | 2         | 11      | 64        | {(1, 2), (2, 5)}
6   | 0.8 | 0.2 | 2         | 11      | 64        | {(1, 2), (2, 5)}
7   | 0.7 | 0.3 | 3         | 7       | 96        | {(1, 3), (3, 4), (4, 5)}
- When function f1 is an objective function and function f2 is a constraint
- When function f2 is an objective function and function f1 is a constraint

When function f1 is an objective function, the model is
\min Z = \sum_{f \in F} \sum_{(i,j) \in E} X_{ij}^{f} \qquad (2)

subject to

\sum_{f \in F} \sum_{(i,j) \in E} d_{ij} \cdot X_{ij}^{f} \leq \varepsilon_2

\sum_{(i,j) \in E} X_{ij}^{f} = 1, \quad f \in F,\ i = s

\sum_{(j,i) \in E} X_{ji}^{f} = -1, \quad i = t,\ f \in F

\sum_{(i,j) \in E} X_{ij}^{f} - \sum_{(j,i) \in E} X_{ji}^{f} = 0, \quad f \in F,\ i \neq s,\ i \neq t

\sum_{f \in F} bw_f \cdot X_{ij}^{f} \leq c_{ij}, \quad (i,j) \in E

X_{ij}^{f} \in \{0, 1\}

When function f2 is an objective function, the model is
\min Z = \sum_{f \in F} \sum_{(i,j) \in E} d_{ij} \cdot X_{ij}^{f} \qquad (3)

subject to

\sum_{f \in F} \sum_{(i,j) \in E} X_{ij}^{f} \leq \varepsilon_1

\sum_{(i,j) \in E} X_{ij}^{f} = 1, \quad f \in F,\ i = s

\sum_{(j,i) \in E} X_{ji}^{f} = -1, \quad i = t,\ f \in F

\sum_{(i,j) \in E} X_{ij}^{f} - \sum_{(j,i) \in E} X_{ji}^{f} = 0, \quad f \in F,\ i \neq s,\ i \neq t

\sum_{f \in F} bw_f \cdot X_{ij}^{f} \leq c_{ij}, \quad (i,j) \in E

X_{ij}^{f} \in \{0, 1\}

We can use CPLEX, which can solve MIP type problems, to solve this problem using the ε-constraint method. Tables 4.6 and 4.7 show the solutions to models (2) and (3) when different values of ε2 and ε1, respectively, are considered.
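The ε-constraint runs of Tables 4.6 and 4.7 follow the same pattern: minimize one objective while the other is capped. A minimal sketch over the two feasible paths of Figure 4.23 (candidate metrics taken from Table 4.5; the helper function is illustrative):

```python
# epsilon-constraint on the two feasible paths of Figure 4.23:
# each candidate is (hops, delay_ms, bandwidth_kbps, path).
candidates = [
    (2, 11, 64, [(1, 2), (2, 5)]),
    (3, 7, 96, [(1, 3), (3, 4), (4, 5)]),
]

def eps_constraint(objective, bound_index, eps):
    """Models (2)/(3): minimize one objective subject to the other <= eps.
    Returns None when no candidate satisfies the bound (infeasible)."""
    feasible = [c for c in candidates if c[bound_index] <= eps]
    if not feasible:
        return None
    return min(feasible, key=lambda c: c[objective])

# Model (2): minimize hops (index 0) with delay (index 1) <= eps2.
print(eps_constraint(0, 1, 30))   # 2-hop path, as in Table 4.6, sol 1
print(eps_constraint(0, 1, 6))    # None: solution not feasible (sol 5)
```

Swapping the roles of the indices reproduces model (3) and Table 4.7, including the infeasible row for ε1 = 1.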
Table 4.6 ε-Constraint Solution 1

Sol | ε2 | f1 (hops) | f2 (ms) | f4 (Kbps) | Path
1   | 30 | 2         | 11      | 64        | {(1, 2), (2, 5)}
2   | 20 | 2         | 11      | 64        | {(1, 2), (2, 5)}
3   | 10 | 3         | 7       | 96        | {(1, 3), (3, 4), (4, 5)}
4   | 7  | 3         | 7       | 96        | {(1, 3), (3, 4), (4, 5)}
5   | 6  | —         | —       | —         | Solution not feasible

Table 4.7 ε-Constraint Solution 2

Sol | ε1 | f1 (hops) | f2 (ms) | f4 (Kbps) | Path
1   | 10 | 3         | 7       | 96        | {(1, 3), (3, 4), (4, 5)}
2   | 5  | 3         | 7       | 96        | {(1, 3), (3, 4), (4, 5)}
3   | 3  | 3         | 7       | 96        | {(1, 3), (3, 4), (4, 5)}
4   | 2  | 2         | 11      | 64        | {(1, 2), (2, 5)}
5   | 1  | —         | —       | —         | Solution not feasible

4.5.1.3 Weighted Metrics

The third traditional method that we will use is the weighted metrics method. To solve the problem using this method we must rewrite the functions. First, we will consider a value of r = 2 and then a value of r = ∞. When r = 2, the model is

\min Z = \left[ w_1 \left| Z_1 - \sum_{f \in F} \sum_{(i,j) \in E} X_{ij}^{f} \right|^2 + w_2 \left| Z_2 - \sum_{f \in F} \sum_{(i,j) \in E} d_{ij} \cdot X_{ij}^{f} \right|^2 \right]^{1/2} \qquad (4)
subject to

\sum_{(i,j) \in E} X_{ij}^{f} = 1, \quad f \in F,\ i = s

\sum_{(j,i) \in E} X_{ji}^{f} = -1, \quad i = t,\ f \in F

\sum_{(i,j) \in E} X_{ij}^{f} - \sum_{(j,i) \in E} X_{ji}^{f} = 0, \quad f \in F,\ i \neq s,\ i \neq t

\sum_{f \in F} bw_f \cdot X_{ij}^{f} \leq c_{ij}, \quad (i,j) \in E

X_{ij}^{f} \in \{0, 1\}, \quad 1 \leq r < \infty, \quad 0 \leq w_i \leq 1,\ i \in \{1, 2\}, \quad \sum_{i=1}^{2} w_i = 1
and when r = ∞, the model is rewritten in the following way:

\min Z = \max\left[ w_1 \left| Z_1 - \sum_{f \in F} \sum_{(i,j) \in E} X_{ij}^{f} \right|,\ w_2 \left| Z_2 - \sum_{f \in F} \sum_{(i,j) \in E} d_{ij} \cdot X_{ij}^{f} \right| \right] \qquad (5)

subject to

\sum_{(i,j) \in E} X_{ij}^{f} = 1, \quad f \in F,\ i = s

\sum_{(j,i) \in E} X_{ji}^{f} = -1, \quad i = t,\ f \in F

\sum_{(i,j) \in E} X_{ij}^{f} - \sum_{(j,i) \in E} X_{ji}^{f} = 0, \quad f \in F,\ i \neq s,\ i \neq t

\sum_{f \in F} bw_f \cdot X_{ij}^{f} \leq c_{ij}, \quad (i,j) \in E

X_{ij}^{f} \in \{0, 1\}, \quad 0 \leq w_i \leq 1,\ i \in \{1, 2\}, \quad \sum_{i=1}^{2} w_i = 1
To solve this method, we can use a solver such as SBB (standard branch and bound), which handles mixed integer nonlinear programming (MINLP) problems, because the model includes the absolute value function |x|. Table 4.8 shows the solutions to model (4) when r = 2 and different values of Z1, Z2, w1, and w2 are considered. For r = ∞ one obtains exactly the same results.

Table 4.8 Weighted Metric Solutions with r = 2 and r = ∞

Sol | w1  | w2  | Z1 | Z2 | f1 (hops) | f2 (ms) | f4 (Kbps) | Path
1   | 0.9 | 0.1 | 5  | 20 | 3         | 7       | 96        | {(1, 3), (3, 4), (4, 5)}
2   | 0.9 | 0.1 | 1  | 20 | 2         | 11      | 64        | {(1, 2), (2, 5)}
3   | 0.8 | 0.2 | 1  | 20 | 2         | 11      | 64        | {(1, 2), (2, 5)}
4   | 0.1 | 0.9 | 2  | 20 | 2         | 11      | 64        | {(1, 2), (2, 5)}
5   | 0.1 | 0.9 | 3  | 20 | 2         | 11      | 64        | {(1, 2), (2, 5)}
6   | 0.5 | 0.5 | 3  | 3  | 3         | 7       | 96        | {(1, 3), (3, 4), (4, 5)}
7   | 0.9 | 0.1 | 3  | 3  | 3         | 7       | 96        | {(1, 3), (3, 4), (4, 5)}
8   | 0.1 | 0.9 | 3  | 3  | 3         | 7       | 96        | {(1, 3), (3, 4), (4, 5)}
9   | 0.9 | 0.1 | 1  | 1  | 2         | 11      | 64        | {(1, 2), (2, 5)}
10  | 0.5 | 0.5 | 1  | 1  | 3         | 7       | 96        | {(1, 3), (3, 4), (4, 5)}
11  | 0.1 | 0.9 | 1  | 1  | 3         | 7       | 96        | {(1, 3), (3, 4), (4, 5)}
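The weighted-metrics selection can be sketched by computing the L_r distance of each candidate's objective vector to the reference point (Z1, Z2). The sketch below uses the L_r form for r = 2 and the max (Tchebycheff) form for r = ∞; the test points are rows of Table 4.8:

```python
# Weighted-metrics objective (model (4) for r = 2, model (5) for r = inf)
# evaluated over the two feasible paths of Figure 4.23.
# Each candidate: (hops, delay_ms) as reported in Table 4.5.
candidates = {
    "{(1, 2), (2, 5)}": (2, 11),
    "{(1, 3), (3, 4), (4, 5)}": (3, 7),
}

def metric(f1, f2, w1, w2, z1, z2, r):
    """Weighted L_r distance of (f1, f2) to the reference point (Z1, Z2)."""
    if r == float("inf"):                       # model (5)
        return max(w1 * abs(z1 - f1), w2 * abs(z2 - f2))
    return (w1 * abs(z1 - f1) ** r + w2 * abs(z2 - f2) ** r) ** (1 / r)

def best_path(w1, w2, z1, z2, r=2):
    return min(candidates, key=lambda p: metric(*candidates[p], w1, w2, z1, z2, r))

# Table 4.8, solution 4: w1=0.1, w2=0.9, Z1=2, Z2=20 -> the 2-hop path.
print(best_path(0.1, 0.9, 2, 20))
```

Note that the chosen path depends on the reference point as much as on the weights: with (Z1, Z2) = (3, 3) even w1 = 0.9 selects the three-hop path (solution 7).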
4.5.1.4 Benson Method

The last traditional method that we will use to solve the multi-objective problem is the Benson method. To solve the problem using this method, we must rewrite the functions in the following way:

\max Z = \max\left(0,\ Z_1 - \sum_{f \in F} \sum_{(i,j) \in E} X_{ij}^{f}\right) + \max\left(0,\ Z_2 - \sum_{f \in F} \sum_{(i,j) \in E} d_{ij} \cdot X_{ij}^{f}\right) \qquad (6)

subject to

\sum_{f \in F} \sum_{(i,j) \in E} X_{ij}^{f} \leq Z_1

\sum_{f \in F} \sum_{(i,j) \in E} d_{ij} \cdot X_{ij}^{f} \leq Z_2

\sum_{(i,j) \in E} X_{ij}^{f} = 1, \quad f \in F,\ i = s

\sum_{(j,i) \in E} X_{ji}^{f} = -1, \quad i = t,\ f \in F

\sum_{(i,j) \in E} X_{ij}^{f} - \sum_{(j,i) \in E} X_{ji}^{f} = 0, \quad f \in F,\ i \neq s,\ i \neq t

\sum_{f \in F} bw_f \cdot X_{ij}^{f} \leq c_{ij}, \quad (i,j) \in E

X_{ij}^{f} \in \{0, 1\}

Table 4.9 shows the solutions to the model for the values of Z1 and Z2 for which a solution was found.
Table 4.9 Benson Solutions

Sol | Z1 | Z2 | f1 (hops) | f2 (ms) | f4 (Kbps) | Path
1   | 5  | 20 | 2         | 11      | 64        | {(1, 2), (2, 5)}
2   | 2  | 20 | 2         | 11      | 64        | {(1, 2), (2, 5)}
3   | 3  | 20 | 2         | 11      | 64        | {(1, 2), (2, 5)}
4   | 3  | 7  | 3         | 7       | 96        | {(1, 3), (3, 4), (4, 5)}
5   | 4  | 10 | 3         | 7       | 96        | {(1, 3), (3, 4), (4, 5)}
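The Benson method can likewise be sketched by enumeration: candidates that exceed the reference point (Z1, Z2) in either objective are discarded, and among the rest the total improvement over the reference is maximized. A minimal sketch (candidate metrics from Table 4.5; the helper name is our own):

```python
# Benson method (model (6)) over the two feasible paths of Figure 4.23.
candidates = [(2, 11, "{(1, 2), (2, 5)}"),
              (3, 7, "{(1, 3), (3, 4), (4, 5)}")]

def benson(z1, z2):
    """Maximize max(0, Z1 - f1) + max(0, Z2 - f2) subject to f1 <= Z1, f2 <= Z2.
    Returns None when no candidate satisfies the reference bounds."""
    feasible = [(f1, f2, p) for f1, f2, p in candidates if f1 <= z1 and f2 <= z2]
    if not feasible:
        return None
    return max(feasible, key=lambda c: max(0, z1 - c[0]) + max(0, z2 - c[1]))

print(benson(2, 20))   # only the 2-hop path fits the bound Z1 = 2 (Table 4.9, sol 2)
```

For tight reference points the constraint part does most of the work: Z1 = 3, Z2 = 7 leaves only the three-hop, 7-ms path, as in solution 4 of Table 4.9.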
4.5.2 Multicast Transmission Using Hop Count and Delay

As we did in the previous section for unicast transmission, we will use several traditional methods to analyze the hop count (f1) and delay (f2) functions in a multicast transmission over a very simple network topology (Figure 4.25).
4.5.2.1 Weighted Sum

To solve the problem with this method, we must rewrite the functions in the following way:

\min Z = r_1 \sum_{f \in F} \sum_{t \in T_f} \sum_{(i,j) \in E} X_{ij}^{tf} + r_2 \sum_{f \in F} \sum_{t \in T_f} \sum_{(i,j) \in E} d_{ij} \cdot X_{ij}^{tf} \qquad (7)

Figure 4.25 Example 1: multicast topology with source s = 1 and destination nodes t1 = 5 and t2 = 6. All link capacities are c_ij = 64 Kbps, (i, j) ∈ E; flow f1 is a video flow with bw_f1 = 32 Kbps, and link delays take values of 2, 5, and 20 ms.
subject to

\sum_{(i,j) \in E} X_{ij}^{tf} = 1, \quad t \in T_f,\ f \in F,\ i = s

\sum_{(j,i) \in E} X_{ji}^{tf} = -1, \quad i = t,\ t \in T_f,\ f \in F

\sum_{(i,j) \in E} X_{ij}^{tf} - \sum_{(j,i) \in E} X_{ji}^{tf} = 0, \quad t \in T_f,\ f \in F,\ i \neq s,\ i \neq t

\sum_{f \in F} bw_f \cdot \max_{t \in T_f}(X_{ij}^{tf}) \leq c_{ij}, \quad (i,j) \in E \qquad (7.c.4)

X_{ij}^{tf} \in \{0, 1\}, \quad 0 \leq r_i \leq 1, \quad \sum_{i=1}^{2} r_i = 1
If we remove the link capacity constraint (7.c.4), the problem is of the mixed integer programming (MIP) type and we can use CPLEX as the solver. Considering the link capacity constraint, because a max function is included, we have a DNLP (nonlinear programming with discontinuous derivatives) type problem, and in this case one must use a solver such as SNOPT (Sparse Nonlinear Optimizer) or CONOPT. Table 4.10 shows these solutions.

Table 4.10 Optimal and Feasible Solutions

Sol | f1 (hops) | f2 (ms) | f4 (Kbps) | Tree
1   | 3         | 30      | 96        | [{(1, 5)}, {(1, 4), (4, 6)}]
2   | 3         | 30      | 96        | [{(1, 4), (4, 5)}, {(1, 6)}]
3   | 4         | 20      | 96        | [{(1, 4), (4, 5)}, {(1, 4), (4, 6)}]
4   | 4         | 14      | 128       | [{(1, 2), (2, 5)}, {(1, 4), (4, 6)}]
5   | 4         | 14      | 128       | [{(1, 4), (4, 5)}, {(1, 3), (3, 6)}]
6   | 2         | 40      | 64        | [{(1, 5)}, {(1, 6)}]
7   | 4         | 8       | 128       | [{(1, 2), (2, 5)}, {(1, 3), (3, 6)}]
8   | 3         | 24      | 96        | [{(1, 5)}, {(1, 3), (3, 6)}]
9   | 3         | 24      | 96        | [{(1, 2), (2, 5)}, {(1, 6)}]
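The objective values in Table 4.10 can be reproduced by hand. In the multicast model, hop count and delay are summed per destination, while the bandwidth term charges each physical link only once per flow through the max over destinations (constraint 7.c.4), so links shared by several destinations are not double-counted. A minimal sketch (encoding a tree as one link list per destination is our own illustrative layout):

```python
# Evaluating the multicast objectives for one tree of Table 4.10.
# A tree is one path (list of links) per destination; flow bw = 32 Kbps.
tree = {5: [(1, 5)],               # path to destination t1 = 5
        6: [(1, 3), (3, 6)]}       # path to destination t2 = 6
BW = 32

# f1: hop count is summed over destinations (one X_ij^tf per destination t).
hops = sum(len(path) for path in tree.values())

# f4: bandwidth consumption charges each physical link once per flow,
# via the max over destinations of X_ij^tf.
used_links = {link for path in tree.values() for link in path}
bandwidth = BW * len(used_links)

print(hops, bandwidth)   # matches Table 4.10, solution 8: 3 hops, 96 Kbps
```

For solution 3, where both destinations share link (1, 4), the hop count is 4 but only three distinct links are charged, giving the same 96 Kbps.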
Figure 4.26 Optimal values in multi-objective optimization (optimal Pareto front of hops versus delay in ms, showing optimal and feasible values).
Table 4.11 Weighted Sum Solutions

Sol | r1 | r2 | f1 (hops) | f2 (ms) | f4 (Kbps) | Tree
1   | 1  | 0  | 2         | 40      | 64        | [{(1, 5)}, {(1, 6)}]
2   | 0  | 1  | 4         | 8       | 128       | [{(1, 2), (2, 5)}, {(1, 3), (3, 6)}]
3   | 1  | 0  | 3         | 24      | 96        | [{(1, 5)}, {(1, 3), (3, 6)}]
Figure 4.26 shows these feasible solutions. Because for some pairs of solutions (1 and 2, 4 and 5, and 8 and 9) the values of f1 and f2 coincide, the network administrator has to select one of them. If we execute the model for different values of r_i, we obtain the solutions in Table 4.11. In this case, if there is more than one solution that gives the same values for the optimization function, it may happen that only one of those solutions is found. For example, as we saw in the previous case, the trees [{(1, 2), (2, 5)}, {(1, 6)}] and [{(1, 5)}, {(1, 3), (3, 6)}] have exactly the same values for hops (3) and delay (24), but here only one of the trees has been found.
4.5.2.2 ε-Constraint

As we mentioned for the unicast transmission case, this model may be rewritten in two ways:

- When function f1 is an objective function and function f2 is a constraint
- When function f2 is an objective function and function f1 is a constraint

When function f1 is the objective function, the model is
\min Z = \sum_{f \in F} \sum_{t \in T_f} \sum_{(i,j) \in E} X_{ij}^{tf} \qquad (8)

subject to

\sum_{f \in F} \sum_{t \in T_f} \sum_{(i,j) \in E} d_{ij} \cdot X_{ij}^{tf} \leq \varepsilon_2

\sum_{(i,j) \in E} X_{ij}^{tf} = 1, \quad t \in T_f,\ f \in F,\ i = s

\sum_{(j,i) \in E} X_{ji}^{tf} = -1, \quad i = t,\ t \in T_f,\ f \in F

\sum_{(i,j) \in E} X_{ij}^{tf} - \sum_{(j,i) \in E} X_{ji}^{tf} = 0, \quad t \in T_f,\ f \in F,\ i \neq s,\ i \neq t

\sum_{f \in F} bw_f \cdot \max_{t \in T_f}(X_{ij}^{tf}) \leq c_{ij}, \quad (i,j) \in E

X_{ij}^{tf} \in \{0, 1\}

When function f2 is the objective function, the model is
\min Z = \sum_{f \in F} \sum_{t \in T_f} \sum_{(i,j) \in E} d_{ij} \cdot X_{ij}^{tf} \qquad (9)

subject to

\sum_{f \in F} \sum_{t \in T_f} \sum_{(i,j) \in E} X_{ij}^{tf} \leq \varepsilon_1

\sum_{(i,j) \in E} X_{ij}^{tf} = 1, \quad t \in T_f,\ f \in F,\ i = s

\sum_{(j,i) \in E} X_{ji}^{tf} = -1, \quad i = t,\ t \in T_f,\ f \in F

\sum_{(i,j) \in E} X_{ij}^{tf} - \sum_{(j,i) \in E} X_{ji}^{tf} = 0, \quad t \in T_f,\ f \in F,\ i \neq s,\ i \neq t

\sum_{f \in F} bw_f \cdot \max_{t \in T_f}(X_{ij}^{tf}) \leq c_{ij}, \quad (i,j) \in E

X_{ij}^{tf} \in \{0, 1\}

As we have already mentioned, if we remove the link capacity constraint (7.c.4), the problem is of the MIP type; if we consider it, we have a DNLP type problem. When f1 and f2 are used as the objective function, for different values of ε2 and ε1 we obtain the solutions in Tables 4.11 and 4.12, respectively. When we optimize f1 we find the tree [{(1, 2), (2, 5)}, {(1, 6)}], and when we optimize f2 we find the tree [{(1, 5)}, {(1, 3), (3, 6)}], whose function values are the same. This shows that by optimizing each function in turn we can find more solutions than with the weighted sum method, where only one of these trees was found.

Table 4.11 ε-Constraint Solution (f1 as objective)

Sol | ε2 | f1 (hops) | f2 (ms) | f4 (Kbps) | Tree
1   | 50 | 2         | 40      | 64        | [{(1, 5)}, {(1, 6)}]
2   | 30 | 3         | 24      | 96        | [{(1, 2), (2, 5)}, {(1, 6)}]
3   | 20 | 4         | 8       | 128       | [{(1, 2), (2, 5)}, {(1, 3), (3, 6)}]

Table 4.12 ε-Constraint Solution (f2 as objective)

Sol | ε1 | f1 (hops) | f2 (ms) | f4 (Kbps) | Tree
1   | 10 | 4         | 8       | 128       | [{(1, 2), (2, 5)}, {(1, 3), (3, 6)}]
2   | 3  | 3         | 24      | 96        | [{(1, 5)}, {(1, 3), (3, 6)}]
3   | 2  | 2         | 40      | 64        | [{(1, 5)}, {(1, 6)}]
4.5.2.3 Weighted Metrics

To solve the problem with this method, we must rewrite the functions. First, we will consider a value of r = 2 and then a value of r = ∞. When r = 2, the model is

\min Z = \left[ w_1 \left| Z_1 - \sum_{f \in F} \sum_{t \in T_f} \sum_{(i,j) \in E} X_{ij}^{tf} \right|^2 + w_2 \left| Z_2 - \sum_{f \in F} \sum_{t \in T_f} \sum_{(i,j) \in E} d_{ij} \cdot X_{ij}^{tf} \right|^2 \right]^{1/2} \qquad (10)

subject to

\sum_{(i,j) \in E} X_{ij}^{tf} = 1, \quad t \in T_f,\ f \in F,\ i = s

\sum_{(j,i) \in E} X_{ji}^{tf} = -1, \quad i = t,\ t \in T_f,\ f \in F

\sum_{(i,j) \in E} X_{ij}^{tf} - \sum_{(j,i) \in E} X_{ji}^{tf} = 0, \quad t \in T_f,\ f \in F,\ i \neq s,\ i \neq t

\sum_{f \in F} bw_f \cdot \max_{t \in T_f}(X_{ij}^{tf}) \leq c_{ij}, \quad (i,j) \in E

X_{ij}^{tf} \in \{0, 1\}, \quad 1 \leq r < \infty, \quad 0 \leq w_i \leq 1,\ i \in \{1, 2\}, \quad \sum_{i=1}^{2} w_i = 1
When r = ∞, the model is

\min Z = \max\left[ w_1 \left| Z_1 - \sum_{f \in F} \sum_{t \in T_f} \sum_{(i,j) \in E} X_{ij}^{tf} \right|,\ w_2 \left| Z_2 - \sum_{f \in F} \sum_{t \in T_f} \sum_{(i,j) \in E} d_{ij} \cdot X_{ij}^{tf} \right| \right] \qquad (11)

subject to

\sum_{(i,j) \in E} X_{ij}^{tf} = 1, \quad t \in T_f,\ f \in F,\ i = s

\sum_{(j,i) \in E} X_{ji}^{tf} = -1, \quad i = t,\ t \in T_f,\ f \in F

\sum_{(i,j) \in E} X_{ij}^{tf} - \sum_{(j,i) \in E} X_{ji}^{tf} = 0, \quad t \in T_f,\ f \in F,\ i \neq s,\ i \neq t

\sum_{f \in F} bw_f \cdot \max_{t \in T_f}(X_{ij}^{tf}) \leq c_{ij}, \quad (i,j) \in E

X_{ij}^{tf} \in \{0, 1\}, \quad 0 \leq w_i \leq 1,\ i \in \{1, 2\}, \quad \sum_{i=1}^{2} w_i = 1
As happened with unicast transmission, this is a mixed integer nonlinear programming (MINLP) type problem because it includes the absolute value function |x|, and to solve it we must use the SBB solver. Tables 4.13 and 4.14 show the solutions to the model for r = 2 and r = ∞, respectively, when different values of Z1, Z2, w1, and w2 are considered. Certain values obtained with r = 2 (for example, solution 3) cannot easily be obtained when r = ∞, or even when other methods are used.
Table 4.13 Weighted Metric Solutions with r = 2

Sol | w1  | w2  | Z1 | Z2 | f1 (hops) | f2 (ms) | f4 (Kbps) | Tree
1   | 0.1 | 0.9 | 5  | 50 | 2         | 40      | 64        | [{(1, 5)}, {(1, 6)}]
2   | 0.9 | 0.1 | 5  | 50 | 2         | 40      | 64        | [{(1, 5)}, {(1, 6)}]
3   | 0.1 | 0.9 | 5  | 25 | 3         | 24      | 96        | [{(1, 5)}, {(1, 3), (3, 6)}]
4   | 0.1 | 0.9 | 5  | 10 | 4         | 8       | 128       | [{(1, 2), (2, 5)}, {(1, 3), (3, 6)}]

Table 4.14 Weighted Metric Solutions with r = ∞

Sol | w1  | w2  | Z1 | Z2 | f1 (hops) | f2 (ms) | f4 (Kbps) | Tree
1   | 0.1 | 0.9 | 5  | 50 | 2         | 40      | 64        | [{(1, 5)}, {(1, 6)}]
2   | 0.9 | 0.1 | 5  | 50 | 2         | 40      | 64        | [{(1, 5)}, {(1, 6)}]
3   | 0.1 | 0.9 | 5  | 10 | 4         | 8       | 128       | [{(1, 2), (2, 5)}, {(1, 3), (3, 6)}]

4.5.2.4 Benson Method

As we did for the unicast transmission, the last traditional method that we will use to solve the multi-objective problem is the Benson method. To solve the problem using this method we must rewrite the functions:

\max Z = \max\left(0,\ Z_1 - \sum_{f \in F} \sum_{t \in T_f} \sum_{(i,j) \in E} X_{ij}^{tf}\right) + \max\left(0,\ Z_2 - \sum_{f \in F} \sum_{t \in T_f} \sum_{(i,j) \in E} d_{ij} \cdot X_{ij}^{tf}\right) \qquad (12)
subject to

\sum_{f \in F} \sum_{t \in T_f} \sum_{(i,j) \in E} X_{ij}^{tf} \leq Z_1

\sum_{f \in F} \sum_{t \in T_f} \sum_{(i,j) \in E} d_{ij} \cdot X_{ij}^{tf} \leq Z_2

\sum_{(i,j) \in E} X_{ij}^{tf} = 1, \quad t \in T_f,\ f \in F,\ i = s

\sum_{(j,i) \in E} X_{ji}^{tf} = -1, \quad i = t,\ t \in T_f,\ f \in F

\sum_{(i,j) \in E} X_{ij}^{tf} - \sum_{(j,i) \in E} X_{ji}^{tf} = 0, \quad t \in T_f,\ f \in F,\ i \neq s,\ i \neq t

\sum_{f \in F} bw_f \cdot \max_{t \in T_f}(X_{ij}^{tf}) \leq c_{ij}, \quad (i,j) \in E

X_{ij}^{tf} \in \{0, 1\}

Table 4.15 shows the solutions to model (12) for different values of Z1 and Z2.
Table 4.15 Benson Method Solutions

Sol | Z1 | Z2 | f1 (hops) | f2 (ms) | f4 (Kbps) | Tree
1   | 5  | 50 | 4         | 8       | 128       | [{(1, 2), (2, 5)}, {(1, 3), (3, 6)}]
2   | 5  | 25 | 4         | 8       | 128       | [{(1, 2), (2, 5)}, {(1, 3), (3, 6)}]
3   | 3  | 24 | 3         | 24      | 96        | [{(1, 2), (2, 5)}, {(1, 6)}]
4   | 2  | 50 | 2         | 40      | 64        | [{(1, 5)}, {(1, 6)}]
4.5.3 Unicast Transmission Using Hop Count, Delay, and Bandwidth Consumption

For this case, we will use the hop count (f1), delay (f2), and bandwidth consumption (f4) functions in a unicast transmission. In this section we will consider the traditional weighted sum method, so that we can subsequently compare the results with a metaheuristic. To solve the problem using this method, we must rewrite the functions in the following way:

\min Z = r_1 \sum_{f \in F} \sum_{(i,j) \in E} X_{ij}^{f} + r_2 \sum_{f \in F} \sum_{(i,j) \in E} d_{ij} \cdot X_{ij}^{f} + r_3 \sum_{f \in F} \sum_{(i,j) \in E} bw_f \cdot X_{ij}^{f} \qquad (13)

subject to

\sum_{(i,j) \in E} X_{ij}^{f} = 1, \quad f \in F,\ i = s

\sum_{(j,i) \in E} X_{ji}^{f} = -1, \quad i = t,\ f \in F

\sum_{(i,j) \in E} X_{ij}^{f} - \sum_{(j,i) \in E} X_{ji}^{f} = 0, \quad f \in F,\ i \neq s,\ i \neq t

\sum_{f \in F} bw_f \cdot X_{ij}^{f} \leq c_{ij}, \quad (i,j) \in E

X_{ij}^{f} \in \{0, 1\}, \quad 0 \leq r_i \leq 1, \quad \sum_{i=1}^{3} r_i = 1
The problem is a MIP type problem, and therefore we can use CPLEX to solve it. Figure 4.27 shows the topology considered. In this case the links are T1, and therefore the capacity considered is 1536 Kbps.

Figure 4.27 Example 2 network: a 14-node topology (N0–N13) whose links are labeled with their delays in milliseconds.
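For the examples that follow, the weighted-sum choice between competing paths can be reproduced by evaluating model (13) over the candidate 0-to-13 paths; the per-path metrics below are taken from Table 4.17, and the helper name is our own:

```python
# Weighted sum over three objectives (model (13)) for the two candidate
# 0 -> 13 paths of Table 4.17: (hops, delay_ms, bandwidth_kbps).
candidates = {
    "{(0, 2), (2, 7), (7, 13)}": (3, 43, 1536),
    "{(0, 3), (3, 10), (10, 12), (12, 13)}": (4, 40, 2048),
}

def weighted_sum(r1, r2, r3):
    cost = lambda m: r1 * m[0] + r2 * m[1] + r3 * m[2]
    return min(candidates, key=lambda p: cost(candidates[p]))

# Only a pure-delay objective (r2 = 1) prefers the longer 4-hop path.
print(weighted_sum(0, 1, 0))
```

Because the objectives are on very different scales (hops in units, bandwidth in thousands of Kbps), any nonzero r3 tends to dominate the sum, which is visible in the mixed-weight rows of Tables 4.17 through 4.19.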
Example 1

Table 4.16 shows the solutions for different values of r_i when we transmit a single 512-Kbps flow f1 from node 0 to node 12. In this table we can see that, regardless of the values of r_i, there is a single path that simultaneously optimizes the three functions.
Table 4.16 Example 1 Solutions

Sol | r1  | r2  | r3  | f1 (hops) | f2 (ms) | f4 (Kbps) | Path
1   | 0   | 0   | 1   | 3         | 36      | 1536      | {(0, 3), (3, 10), (10, 12)}
2   | 0   | 1   | 0   | 3         | 36      | 1536      | {(0, 3), (3, 10), (10, 12)}
3   | 1   | 0   | 0   | 3         | 36      | 1536      | {(0, 3), (3, 10), (10, 12)}
4   | 0.3 | 0.3 | 0.4 | 3         | 36      | 1536      | {(0, 3), (3, 10), (10, 12)}

Example 2

Table 4.17 shows the solutions for different values of r_i when we transmit a single 512-Kbps flow f1 from node 0 to node 13.
Table 4.17 Example 2 Solutions

Sol | r1  | r2  | r3  | f1 (hops) | f2 (ms) | f4 (Kbps) | Path
1   | 0   | 0   | 1   | 3         | 43      | 1536      | {(0, 2), (2, 7), (7, 13)}
2   | 0   | 1   | 0   | 4         | 40      | 2048      | {(0, 3), (3, 10), (10, 12), (12, 13)}
3   | 1   | 0   | 0   | 3         | 43      | 1536      | {(0, 2), (2, 7), (7, 13)}
4   | 0.3 | 0.3 | 0.4 | 3         | 43      | 1536      | {(0, 2), (2, 7), (7, 13)}
Example 3

Table 4.18 shows the solutions for different values of r_i when we transmit three 512-Kbps flows, f1, f2, and f3, from node 0 to node 13.

Table 4.18 Example 3 Solutions

Sol | r1  | r2  | r3  | f1 (hops) | f2 (ms) | f4 (Kbps) | Path (all three flows)
1   | 0   | 0   | 1   | 9         | 129     | 4608      | {(0, 2), (2, 7), (7, 13)}
2   | 0   | 1   | 0   | 12        | 120     | 6144      | {(0, 3), (3, 10), (10, 12), (12, 13)}
3   | 1   | 0   | 0   | 9         | 129     | 4608      | {(0, 2), (2, 7), (7, 13)}
4   | 0.3 | 0.3 | 0.4 | 9         | 129     | 4608      | {(0, 2), (2, 7), (7, 13)}
Example 4

Table 4.19 shows the solutions for different values of r_i when we transmit two 1024-Kbps flows, f1 and f2, from node 0 to node 13.

Table 4.19 Example 4 Solutions

Sol | r1  | r2  | r3  | f1 (hops) | f2 (ms) | f4 (Kbps) | Flow: Path
1   | 0   | 0   | 1   | 7         | 83      | 7168      | f1: {(0, 3), (3, 10), (10, 12), (12, 13)}; f2: {(0, 2), (2, 7), (7, 13)}
2   | 0   | 1   | 0   | 7         | 83      | 7168      | f1: {(0, 3), (3, 10), (10, 12), (12, 13)}; f2: {(0, 2), (2, 7), (7, 13)}
3   | 1   | 0   | 0   | 7         | 83      | 7168      | f1: {(0, 3), (3, 10), (10, 12), (12, 13)}; f2: {(0, 2), (2, 7), (7, 13)}
4   | 0.3 | 0.3 | 0.4 | 7         | 83      | 7168      | f1: {(0, 3), (3, 10), (10, 12), (12, 13)}; f2: {(0, 2), (2, 7), (7, 13)}
Example 5

Lastly, if in the previous example we consider an additional third flow f3 with the same transmission rate of 1024 Kbps, any solution obtained is infeasible due to the link capacity constraint. Using the maximum link utilization function, the flows can be fractioned across different paths and it becomes possible to find feasible solutions. In this case, as we have seen previously, the variables X_{ij}^{f} ∈ [0, 1].
4.5.4 Multicast Transmission Using Hop Count, Delay, and Bandwidth Consumption

In this section we will perform the analysis using the hop count, delay, and bandwidth consumption functions in a multicast transmission over the topology of Figure 4.27. As we did in the previous section, we will use the traditional weighted sum method and then compare the results with a metaheuristic. To solve the problem with this method, we must rewrite the functions in the following way:

\min Z = r_1 \sum_{f \in F} \sum_{t \in T_f} \sum_{(i,j) \in E} X_{ij}^{tf} + r_2 \sum_{f \in F} \sum_{t \in T_f} \sum_{(i,j) \in E} d_{ij} \cdot X_{ij}^{tf} + r_3 \sum_{f \in F} \sum_{(i,j) \in E} bw_f \cdot \max_{t \in T_f}(X_{ij}^{tf}) \qquad (14)

subject to

\sum_{(i,j) \in E} X_{ij}^{tf} = 1, \quad t \in T_f,\ f \in F,\ i = s

\sum_{(j,i) \in E} X_{ji}^{tf} = -1, \quad i = t,\ t \in T_f,\ f \in F

\sum_{(i,j) \in E} X_{ij}^{tf} - \sum_{(j,i) \in E} X_{ji}^{tf} = 0, \quad t \in T_f,\ f \in F,\ i \neq s,\ i \neq t

\sum_{f \in F} bw_f \cdot \max_{t \in T_f}(X_{ij}^{tf}) \leq c_{ij}, \quad (i,j) \in E

X_{ij}^{tf} \in \{0, 1\}, \quad 0 \leq r_i \leq 1, \quad \sum_{i=1}^{3} r_i = 1
Example 1 We will transmit a 512-Kbps single multicast flow f1 from node 0 to nodes 8 and 12. In this case we will need a solver DNLP (nonlinear programming with discontinuous derivatives) because the bandwidth consumption function uses the max function. Table 4.20 shows the solutions for different values ri. In solutions 1 and 2 we see that only one function has been minimized, hop count and delay, respectively, and in both cases, the resulting tree is the same (Figure 4.28). In this case, SNOPT was used as the solver. In solution 3 only the bandwidth function has been optimized (Figure 4.29). The resulting tree is a different tree that tried to use the same links for both destinations and, in this way, Table 4.20 Sol
r1
Example 1 Solution
r2
r3
f1 (hops) f2 (ms) f4 (Kbps)
1
1
0
0
6
76
3072
2
0
1
0
6
76
3072
3
0
0
1
7
79
2048
4
0.5 0.49 0.01 7
79
2048
5
0.5 0.5
76
3072
0
6
Tree
[{(0, 2), (2, 7), (7, 8)}, {(0, 3), (3, 10), (10, 12)}] [{(0, 2), (2, 7), (7, 8)}, {(0, 3), (3, 10), (10, 12)}] [{(0, 3), (3, 10), (10, 12), (12, 8)}, {(0, 3), (3, 10), (10, 12)}] [{(0, 3), (3, 10), (10, 12), (12, 8)}, {(0, 3), (3, 10), (10, 12)}] [{(0, 2), (2, 7), (7, 8)}, {(0, 3), (3, 10), (10, 12)}]
Figure 4.28 Minimizing hop count or delay (the tree [{(0, 2), (2, 7), (7, 8)}, {(0, 3), (3, 10), (10, 12)}] highlighted on the Example 2 network).

Figure 4.29 Minimizing bandwidth consumption (the tree [{(0, 3), (3, 10), (10, 12), (12, 8)}, {(0, 3), (3, 10), (10, 12)}] highlighted on the Example 2 network).
minimize the bandwidth consumption used; CONOPT was used as the solver in this case. In solution 4 we can see that even though the weight associated with the bandwidth consumption function is low (r3 = 0.01), the result still slants toward that function. Lastly, in solution 5 we see that if we remove the bandwidth consumption function, the optimization process finds the tree with the minimum hop count and delay, and the solution corresponds to the one previously found in solutions 1 and 2.

Example 2

We will transmit three 512-Kbps multicast flows, f1, f2, and f3, each of them from origin node 0 to destination nodes 8 and 12. Table 4.21 shows the only solution that was found in the tests performed with different values of r_i.

Table 4.21 Example 2 Solution

Sol | r1 | r2 | r3 | f1 (hops) | f2 (ms) | f4 (Kbps) | Tree
1   | 0  | 1  | 0  | 18        | 228     | 9216      | [{(0, 2), (2, 7), (7, 8)}, {(0, 3), (3, 10), (10, 12)}]
4.5.5 Unicast Transmission Using Hop Count, Delay, Bandwidth Consumption, and Maximum Link Utilization

In this section we will use the hop count (f1), delay (f2), bandwidth consumption (f4), and maximum link utilization (f7) functions in a unicast transmission, to find a feasible solution for the case analyzed at the end of Section 4.5.3, where the problem had no feasible solution due to the capacity of the links. By adding the maximum link utilization function we apply a technique known as load balancing, through which each flow f may be divided and transmitted through different paths. In this case, the variables X_{ij}^{f} pass from binary to real values between 0 and 1.

If we use the maximum link utilization function with real variables X_{ij}^{f}, the hop count (f1) and delay (f2) functions must be redefined. We define a new vector of variables Y_{ij}^{f}:

Y_{ij}^{f} = \left\lceil X_{ij}^{f} \right\rceil = \begin{cases} 0, & X_{ij}^{f} = 0 \\ 1, & 0 < X_{ij}^{f} \leq 1 \end{cases}

where Y_{ij}^{f} denotes whether link (i, j) is used (1) or not (0) for the unicast transmission of flow f. The hop count function is then redefined as

\sum_{f \in F} \sum_{(i,j) \in E} Y_{ij}^{f}

and the delay function as

\sum_{f \in F} \sum_{(i,j) \in E} d_{ij} \cdot Y_{ij}^{f}
To solve the problem with this method, we must rewrite the functions in the following way:

\min Z = r_1 \sum_{f \in F} \sum_{(i,j) \in E} Y_{ij}^{f} + r_2 \sum_{f \in F} \sum_{(i,j) \in E} d_{ij} \cdot Y_{ij}^{f} + r_3 \sum_{f \in F} \sum_{(i,j) \in E} bw_f \cdot X_{ij}^{f} + r_4 \cdot \alpha \qquad (15)

subject to

\alpha = \max\{\alpha_{ij}\}, \quad \text{where } \alpha_{ij} = \frac{\sum_{f \in F} bw_f \cdot X_{ij}^{f}}{c_{ij}}

\sum_{(i,j) \in E} X_{ij}^{f} = 1, \quad f \in F,\ i = s

\sum_{(j,i) \in E} X_{ji}^{f} = -1, \quad i = t,\ f \in F

\sum_{(i,j) \in E} X_{ij}^{f} - \sum_{(j,i) \in E} X_{ji}^{f} = 0, \quad f \in F,\ i \neq s,\ i \neq t

\sum_{f \in F} bw_f \cdot X_{ij}^{f} \leq c_{ij}, \quad (i,j) \in E

Y_{ij}^{f} = \left\lceil X_{ij}^{f} \right\rceil = \begin{cases} 0, & X_{ij}^{f} = 0 \\ 1, & 0 < X_{ij}^{f} \leq 1 \end{cases}

0 \leq X_{ij}^{f} \leq 1, \quad 0 \leq r_i \leq 1, \quad \sum_{i=1}^{4} r_i = 1
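The interplay between the fractional variables X_{ij}^{f}, the usage indicators Y_{ij}^{f} = ⌈X_{ij}^{f}⌉, and the maximum link utilization α can be checked numerically. The sketch below reproduces solution 1 of Table 4.22 (flow f2 split 50/50 over two paths); the dictionary layout is our own illustrative encoding:

```python
import math

# Fractional routing of three 1024-Kbps flows, 0 -> 13 (Table 4.22, sol 1):
# X[flow][link] is the fraction of the flow carried on that link.
P_A = [(0, 3), (3, 10), (10, 12), (12, 13)]
P_B = [(0, 2), (2, 7), (7, 13)]
BW, CAP = 1024, 1536
X = {"f1": {l: 1.0 for l in P_A},
     "f2": {**{l: 0.5 for l in P_A}, **{l: 0.5 for l in P_B}},
     "f3": {l: 1.0 for l in P_B}}

# Y = ceil(X): a link counts as one hop as soon as any fraction uses it.
Y = {f: {l: math.ceil(x) for l, x in fl.items()} for f, fl in X.items()}
hops = sum(sum(fl.values()) for fl in Y.values())

# alpha = max over links of (sum_f bw_f * X_ij^f) / c_ij.
all_links = {l for fl in X.values() for l in fl}
alpha = max(sum(BW * X[f].get(l, 0.0) for f in X) / CAP for l in all_links)

print(hops, alpha)   # 14 hops and alpha = 1.0 (100%), as in Table 4.22
```

Link (0, 3), for instance, carries all of f1 plus half of f2: (1024 + 512)/1536 = 100 percent utilization, which is why f7 is 100 percent in every solution of Table 4.22.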
Example

Between origin node 0 and destination node 13 we will transmit three flows, f1, f2, and f3, with a transmission rate of 1024 Kbps each. Table 4.22 shows the solutions obtained for different values of r_i.

Table 4.22 Optimal Solutions in Unicast

Sol | Weights | f1 (hops) | f2 (ms) | f4 (Kbps) | f7 (%)
1   | r4 = 1  | 14        | 166     | 10752     | 100
      f1 100%: {(0, 3), (3, 10), (10, 12), (12, 13)}
      f2 50%: {(0, 2), (2, 7), (7, 13)}; f2 50%: {(0, 3), (3, 10), (10, 12), (12, 13)}
      f3 100%: {(0, 2), (2, 7), (7, 13)}
2   | r3 = 1  | 14        | 166     | 10752     | 100 (same routing as solution 1)
3   | r2 = 1  | 16        | 178     | 11776     | 100
      f1 100%: {(0, 2), (2, 7), (7, 13)}
      f2 50%: {(0, 1), (1, 6), (6, 9), (9, 8), (8, 12), (12, 13)}; f2 50%: {(0, 2), (2, 7), (7, 13)}
      f3 100%: {(0, 3), (3, 10), (10, 12), (12, 13)}
4   | r1 = 1  | 16        | 178     | 11776     | 100 (same routing as solution 3)
As we can see, with the maximum link utilization function we can find feasible solutions; in this case the unicast flow must be divided among several paths. This mathematical model can be better expressed using the subflow concept. The set of flows is denoted by F. Every flow f ∈ F can be divided into |K_f| subflows that, once normalized, are denoted f_k, where k = 1, …, |K_f|; f_k is the fraction of flow f ∈ F carried by subflow k. For each flow f ∈ F we have an origin node s_f ∈ N and a destination node t_f. It is necessary to include the constraint

\sum_{k=1}^{|K_f|} f_k = 1

to normalize the values f_k.
In this new mathematical formulation the vector of variables X_{ij}^{f} is replaced by X_{ij}^{f_k}, which denotes the fraction of flow f transmitted by subflow k through link (i, j). Similarly, Y_{ij}^{f_k} is redefined as

Y_{ij}^{f_k} = \left\lceil X_{ij}^{f_k} \right\rceil = \begin{cases} 0, & X_{ij}^{f_k} = 0 \\ 1, & 0 < X_{ij}^{f_k} \leq 1 \end{cases}

To solve the problem with this method, we must rewrite the functions, adding the new index k:

\min Z = r_1 \sum_{f \in F} \sum_{k \in K_f} \sum_{(i,j) \in E} Y_{ij}^{f_k} + r_2 \sum_{f \in F} \sum_{k \in K_f} \sum_{(i,j) \in E} d_{ij} \cdot Y_{ij}^{f_k} + r_3 \sum_{f \in F} \sum_{k \in K_f} \sum_{(i,j) \in E} bw_f \cdot X_{ij}^{f_k} + r_4 \cdot \alpha \qquad (16)

subject to

\alpha = \max\{\alpha_{ij}\}, \quad \text{where } \alpha_{ij} = \frac{\sum_{f \in F} \sum_{k \in K_f} bw_f \cdot X_{ij}^{f_k}}{c_{ij}}

\sum_{(i,j) \in E} \sum_{k \in K_f} X_{ij}^{f_k} = 1, \quad f \in F,\ i = s_f

\sum_{(j,i) \in E} \sum_{k \in K_f} X_{ji}^{f_k} = -1, \quad i = t_f,\ f \in F

\sum_{(i,j) \in E} X_{ij}^{f_k} - \sum_{(j,i) \in E} X_{ji}^{f_k} = 0, \quad f \in F,\ k \in K_f,\ i \neq s_f,\ i \neq t_f

\sum_{f \in F} \sum_{k \in K_f} bw_f \cdot X_{ij}^{f_k} \leq c_{ij}, \quad (i,j) \in E

Y_{ij}^{f_k} = \left\lceil X_{ij}^{f_k} \right\rceil = \begin{cases} 0, & X_{ij}^{f_k} = 0 \\ 1, & 0 < X_{ij}^{f_k} \leq 1 \end{cases}

0 \leq X_{ij}^{f_k} \leq 1, \quad 0 \leq r_i \leq 1, \quad \sum_{i=1}^{4} r_i = 1
This new mathematical model makes it easy to identify the traffic carried by each of the subflows. In addition, it allows each of these subflows to be mapped to an LSP in the MPLS technology.
4.5.6 Multicast Transmission Using Hop Count, Delay, Bandwidth Consumption, and Maximum Link Utilization

In this case, we will use the hop count (f1), delay (f2), bandwidth consumption (f4), and maximum link utilization (f7) functions in a multicast transmission. The variables X_{ij}^{tf} also pass from binary to real values between 0 and 1, and every multicast flow f can be divided and transmitted through different trees. If we use the maximum link utilization function with these real variables X_{ij}^{tf}, the hop count and delay functions must be redefined. Let us define a new vector of variables Y_{ij}^{tf}:

Y_{ij}^{tf} = \left\lceil X_{ij}^{tf} \right\rceil = \begin{cases} 0, & X_{ij}^{tf} = 0 \\ 1, & 0 < X_{ij}^{tf} \leq 1 \end{cases}

which represents whether link (i, j) is used (1) or not (0) for the multicast transmission of flow f. The hop count function is then redefined as

\sum_{f \in F} \sum_{t \in T_f} \sum_{(i,j) \in E} Y_{ij}^{tf}

and the delay function as

\sum_{f \in F} \sum_{t \in T_f} \sum_{(i,j) \in E} d_{ij} \cdot Y_{ij}^{tf}
To solve the problem with this method, we must rewrite the functions:

\min Z = r_1 \sum_{f \in F} \sum_{t \in T_f} \sum_{(i,j) \in E} Y_{ij}^{tf} + r_2 \sum_{f \in F} \sum_{t \in T_f} \sum_{(i,j) \in E} d_{ij} \cdot Y_{ij}^{tf} + r_3 \sum_{f \in F} \sum_{(i,j) \in E} bw_f \cdot \max_{t \in T_f}(X_{ij}^{tf}) + r_4 \cdot \alpha \qquad (17)

subject to

\alpha = \max\{\alpha_{ij}\}, \quad \text{where } \alpha_{ij} = \frac{\sum_{f \in F} bw_f \cdot \max_{t \in T_f}(X_{ij}^{tf})}{c_{ij}}

\sum_{(i,j) \in E} X_{ij}^{tf} = 1, \quad t \in T_f,\ f \in F,\ i = s

\sum_{(j,i) \in E} X_{ji}^{tf} = -1, \quad i = t,\ t \in T_f,\ f \in F

\sum_{(i,j) \in E} X_{ij}^{tf} - \sum_{(j,i) \in E} X_{ji}^{tf} = 0, \quad t \in T_f,\ f \in F,\ i \neq s,\ i \neq t

\sum_{f \in F} bw_f \cdot \max_{t \in T_f}(X_{ij}^{tf}) \leq c_{ij}, \quad (i,j) \in E

Y_{ij}^{tf} = \left\lceil X_{ij}^{tf} \right\rceil = \begin{cases} 0, & X_{ij}^{tf} = 0 \\ 1, & 0 < X_{ij}^{tf} \leq 1 \end{cases}

0 \leq X_{ij}^{tf} \leq 1, \quad 0 \leq r_i \leq 1, \quad \sum_{i=1}^{4} r_i = 1
Example

We will transmit a single flow f1 with a transmission rate of 2048 Kbps from origin node 0 to destination nodes 8 and 12. Table 4.23 shows one solution obtained.

Table 4.23 Optimal Solutions in Multicast

Sol | r1, r2, r3, r4                        | f1 (hops) | f2 (ms) | f4 (Kbps) | f7 (%)
1   | r1 = 0.1, r2 = 0.1, r3 = 0.1, r4 = 0.7 | 14        | 166     | 9728      | 100
      f1 75%: tree {(0, 2), (2, 7), (7, 8), (7, 13), (13, 12)}
      f1 25%: tree {(0, 3), (3, 10), (10, 12), (12, 13)}

With this function we can find feasible and optimal solutions for the problem when it is necessary to divide the flow into different subflows. As in the unicast transmission, this mathematical model can be better expressed in multicast transmission using the subflow concept. In this case, the set of multicast flows is denoted by F. Every flow f ∈ F can be divided into |K_f| subflows that, once normalized, are denoted f_k, where k = 1, …, |K_f|; f_k is the fraction of the multicast flow f ∈ F carried by subflow k. For every flow f ∈ F we have an origin node s_f ∈ N and a set of destination nodes T_f ⊂ N associated with the flow. A destination node t of a flow satisfies t ∈ T_f, and T = \bigcup_{f \in F} T_f. One must include the constraint

\sum_{k=1}^{|K_f|} f_k = 1

to normalize the values f_k.

In this mathematical formulation the vector of variables X_{ij}^{tf} is replaced by X_{ij}^{tf_k}, which denotes the fraction of flow f transmitted by subflow k toward destination node t through link (i, j). Similarly, Y_{ij}^{tf_k} is redefined as

Y_{ij}^{tf_k} = \left\lceil X_{ij}^{tf_k} \right\rceil = \begin{cases} 0, & X_{ij}^{tf_k} = 0 \\ 1, & 0 < X_{ij}^{tf_k} \leq 1 \end{cases}

For the case of multicast flow transmission through different subflows it is necessary to add a new constraint, a subflow uniformity constraint, to ensure that a subflow f_k always carries the same information:

X_{ij}^{tf_k} = \begin{cases} \max_{t' \in T_f}\left\{ X_{ij}^{t'f_k} \right\}, & \text{if } Y_{ij}^{tf_k} = 1 \\ 0, & \text{if } Y_{ij}^{tf_k} = 0 \end{cases}, \quad (i,j) \in E,\ t \in T_f

Without this constraint, two positive fractions X_{ij}^{tf_k} and X_{i'j'}^{t'f_k} could differ, and the same subflow f_k would then not carry the same data to different destinations t and t'. As a consequence of this new constraint, mapping subflows to LSPs is easy.
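A candidate solution can be validated against the subflow uniformity requirement mechanically: wherever a subflow uses a link for any destination, it must carry its single common fraction. A minimal sketch (the `X[t][(i, j)]` layout and the simplified equality test are our own illustrative choices; the model states the constraint via the max over destinations):

```python
# Check subflow uniformity for one subflow f_k:
# X[t][(i, j)] is the fraction sent toward destination t over link (i, j).
def uniform(X, fk):
    """True iff every nonzero entry equals the subflow's common fraction fk."""
    return all(x == fk
               for per_dest in X.values()
               for x in per_dest.values() if x > 0)

# A subflow carrying 25% of the flow to both destinations (Table 4.23 style).
ok = {8: {(0, 3): 0.25, (3, 10): 0.25},
      12: {(0, 3): 0.25, (3, 10): 0.25, (10, 12): 0.25}}
bad = {8: {(0, 3): 0.25}, 12: {(0, 3): 0.40}}   # same subflow, different rates

print(uniform(ok, 0.25), uniform(bad, 0.25))
```

The `bad` assignment is exactly the situation the constraint forbids: subflow f_k would deliver different data rates to destinations 8 and 12, so it could not be realized as a single LSP.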
Without this constraint, X ijf kt > 0 may differ from X i ′fjk′t ′ > 0, and therefore, the same subflow fk may not carry the same data to different destinations t and t'. As a consequence of this new constraint, mapping subflows to LSPs is easy. To solve the problem with this method, we must rewrite the functions with the addition of the new index k:
Routing Optimization in Computer Networks 䡲 137
$$\min Z = r_1 \sum_{f \in F} \sum_{k \in K_f} \sum_{t \in T_f} \sum_{(i,j) \in E} Y_{ij}^{tf_k} + r_2 \sum_{f \in F} \sum_{k \in K_f} \sum_{t \in T_f} \sum_{(i,j) \in E} d_{ij}\, Y_{ij}^{tf_k} + r_3 \sum_{f \in F} \sum_{k \in K_f} \sum_{(i,j) \in E} bw_f \max_{t \in T_f}\left( X_{ij}^{tf_k} \right) + r_4\, \alpha \qquad (18)$$

subject to

$$\alpha = \max\left\{ \alpha_{ij} \right\}, \text{ where } \alpha_{ij} = \frac{\sum_{f \in F} \sum_{k \in K_f} bw_f \max_{t \in T_f}\left( X_{ij}^{tf_k} \right)}{c_{ij}}$$

$$\sum_{(i,j) \in E} \sum_{k \in K_f} X_{ij}^{tf_k} = 1, \quad t \in T_f,\ f \in F,\ i = s_f$$

$$\sum_{(j,i) \in E} \sum_{k \in K_f} X_{ji}^{tf_k} = -1, \quad i = t,\ t \in T_f,\ f \in F$$

$$\sum_{(i,j) \in E} X_{ij}^{tf_k} - \sum_{(j,i) \in E} X_{ji}^{tf_k} = 0, \quad t \in T_f,\ f \in F,\ k \in K_f,\ i \ne s_f,\ i \ne t$$

$$X_{ij}^{f_k t} = \begin{cases} X^{f_k} = \max\limits_{t \in T_f,\, (i,j) \in E} \left\{ X_{ij}^{f_k t} \right\}, & \text{if } Y_{ij}^{f_k t} = 1 \\ 0, & \text{if } Y_{ij}^{f_k t} = 0 \end{cases} \qquad (i,j) \in E,\ t \in T_f$$

$$\sum_{f \in F} \sum_{k \in K_f} bw_f \max_{t \in T_f}\left( X_{ij}^{tf_k} \right) \le c_{ij}, \quad (i,j) \in E$$

$$Y_{ij}^{tf_k} = \left\lceil X_{ij}^{tf_k} \right\rceil = \begin{cases} 0, & X_{ij}^{tf_k} = 0 \\ 1, & 0 < X_{ij}^{tf_k} \le 1 \end{cases}$$

$$0 \le X_{ij}^{tf_k} \le 1, \quad 0 \le r_i \le 1, \quad \sum_{i=1}^{4} r_i = 1$$
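To illustrate the structure of objective (18), the following C sketch evaluates the weighted sum for one candidate solution whose per-objective totals are already known. This is our own illustrative code, not the book's: the function name, the use of raw (unnormalized) objective values, and the argument layout are assumptions.

```c
/* Weighted single-objective value of (18) for one candidate solution,
 * given its precomputed totals: hop count, total delay, bandwidth
 * consumption, and maximum link utilization alpha. The r[] weights
 * must satisfy 0 <= r_i <= 1 and sum to 1. */
double weighted_z(const double r[4], double hops, double delay,
                  double bandwidth, double alpha) {
    return r[0] * hops + r[1] * delay + r[2] * bandwidth + r[3] * alpha;
}
```

Note that the four objectives have different units, so in practice each term would be normalized before weighting; the raw sum above is only meant to show how the weights r_i combine the objectives.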
4.6 Multi-Objective Optimization Modeling

In this section we present the model as a true multi-objective scheme. In this case, the functions are handled as independent objectives and no approximation is made through a single-objective method. In other words, we are going to write the vector of objective functions instead of a single-objective approximation. The constraints remain exactly the same.
4.6.1 Unicast Transmission

If we are going to optimize the hop count and delay functions, the vector of objective functions would be given by

$$F\left( X_{ij}^{f} \right) = \left[ \sum_{f \in F} \sum_{(i,j) \in E} X_{ij}^{f},\ \sum_{f \in F} \sum_{(i,j) \in E} d_{ij}\, X_{ij}^{f} \right]$$
If, in addition, we want to optimize the bandwidth consumption function, the vector of objective functions would be given by

$$F\left( X_{ij}^{f} \right) = \left[ \sum_{f \in F} \sum_{(i,j) \in E} X_{ij}^{f},\ \sum_{f \in F} \sum_{(i,j) \in E} d_{ij}\, X_{ij}^{f},\ \sum_{f \in F} \sum_{(i,j) \in E} bw_f\, X_{ij}^{f} \right] \qquad (19)$$

subject to

$$\sum_{(i,j) \in E} X_{ij}^{f} = 1, \quad f \in F,\ i = s$$

$$\sum_{(j,i) \in E} X_{ji}^{f} = -1, \quad i = t,\ f \in F$$

$$\sum_{(i,j) \in E} X_{ij}^{f} - \sum_{(j,i) \in E} X_{ji}^{f} = 0, \quad f \in F,\ i \ne s,\ i \ne t$$

$$\sum_{f \in F} bw_f\, X_{ij}^{f} \le c_{ij}, \quad (i,j) \in E$$

$$X_{ij}^{f} \in \{0, 1\}$$

This way, we can add the functions that we need to optimize without interfering with the analytical model to be solved. If, in addition, we want to optimize the maximum link utilization, the model would be the following (in this case, it has been necessary to add some constraints):
$$F\left( X_{ij}^{f_k} \right) = \left[ \sum_{f \in F} \sum_{k \in K_f} \sum_{(i,j) \in E} Y_{ij}^{f_k},\ \sum_{f \in F} \sum_{k \in K_f} \sum_{(i,j) \in E} d_{ij}\, Y_{ij}^{f_k},\ \sum_{f \in F} \sum_{k \in K_f} \sum_{(i,j) \in E} bw_f\, X_{ij}^{f_k},\ \alpha \right] \qquad (20)$$

subject to

$$\alpha = \max\left\{ \alpha_{ij} \right\}, \text{ where } \alpha_{ij} = \frac{\sum_{f \in F} \sum_{k \in K_f} bw_f\, X_{ij}^{f_k}}{c_{ij}}$$

$$\sum_{(i,j) \in E} \sum_{k \in K_f} X_{ij}^{f_k} = 1, \quad f \in F,\ i = s_f$$

$$\sum_{(j,i) \in E} \sum_{k \in K_f} X_{ji}^{f_k} = -1, \quad i = t_f,\ f \in F$$

$$\sum_{(i,j) \in E} X_{ij}^{f_k} - \sum_{(j,i) \in E} X_{ji}^{f_k} = 0, \quad f \in F,\ k \in K_f,\ i \ne s_f,\ i \ne t_f$$

$$\sum_{f \in F} \sum_{k \in K_f} bw_f\, X_{ij}^{f_k} \le c_{ij}, \quad (i,j) \in E$$

$$Y_{ij}^{f_k} = \left\lceil X_{ij}^{f_k} \right\rceil = \begin{cases} 0, & X_{ij}^{f_k} = 0 \\ 1, & 0 < X_{ij}^{f_k} \le 1 \end{cases}$$

$$0 \le X_{ij}^{f_k} \le 1, \quad 0 \le r_i \le 1, \quad \sum_{i=1}^{4} r_i = 1$$
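To make the vector form concrete, the following C sketch evaluates the three objective components of (19) (hop count, delay, bandwidth consumption) for a single unicast flow, given which links the flow uses. This is our own illustrative code; the per-link arrays and the function name are assumptions, not the book's implementation.

```c
/* Evaluate the objective vector (hops, delay, bandwidth consumption)
 * of one unicast flow. used[e] is the binary X variable for link e,
 * d[e] is the link delay, and bw is the flow's bandwidth demand.
 * The three component values are written to out[]. */
void objective_vector(const int *used, const double *d, int nlinks,
                      double bw, double out[3]) {
    out[0] = out[1] = out[2] = 0.0;
    for (int e = 0; e < nlinks; e++) {
        if (used[e]) {
            out[0] += 1.0;   /* hop count */
            out[1] += d[e];  /* delay contribution of this link */
            out[2] += bw;    /* bandwidth consumed on this link */
        }
    }
}
```

In the multi-objective setting these three values are kept as a vector and compared by Pareto dominance rather than collapsed into one number.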
From these models we can include any other objective function that we wish to optimize or any other constraint that we need to add.
4.6.2 Multicast Transmission

If we are going to optimize the hop count and delay functions, the vector of objective functions would be given by

$$F\left( X_{ij}^{tf} \right) = \left[ \sum_{f \in F} \sum_{t \in T_f} \sum_{(i,j) \in E} X_{ij}^{tf},\ \sum_{f \in F} \sum_{t \in T_f} \sum_{(i,j) \in E} d_{ij}\, X_{ij}^{tf} \right] \qquad (21)$$
If, in addition, we want to optimize the bandwidth consumption function, the vector of objective functions would be given by

$$F\left( X_{ij}^{tf} \right) = \left[ \sum_{f \in F} \sum_{t \in T_f} \sum_{(i,j) \in E} X_{ij}^{tf},\ \sum_{f \in F} \sum_{t \in T_f} \sum_{(i,j) \in E} d_{ij}\, X_{ij}^{tf},\ \sum_{f \in F} \sum_{(i,j) \in E} bw_f \max_{t \in T_f}\left( X_{ij}^{tf} \right) \right] \qquad (22)$$

subject to

$$\sum_{(i,j) \in E} X_{ij}^{tf} = 1, \quad t \in T_f,\ f \in F,\ i = s$$

$$\sum_{(j,i) \in E} X_{ji}^{tf} = -1, \quad i = t,\ t \in T_f,\ f \in F$$

$$\sum_{(i,j) \in E} X_{ij}^{tf} - \sum_{(j,i) \in E} X_{ji}^{tf} = 0, \quad t \in T_f,\ f \in F,\ i \ne s,\ i \ne t$$

$$\sum_{f \in F} bw_f \max_{t \in T_f}\left( X_{ij}^{tf} \right) \le c_{ij}, \quad (i,j) \in E$$

$$X_{ij}^{tf} \in \{0, 1\}$$

And if we want to optimize the maximum link utilization function, the model would be the following:
$$F\left( X_{ij}^{tf_k} \right) = \left[ \sum_{f \in F} \sum_{k \in K_f} \sum_{t \in T_f} \sum_{(i,j) \in E} Y_{ij}^{tf_k},\ \sum_{f \in F} \sum_{k \in K_f} \sum_{t \in T_f} \sum_{(i,j) \in E} d_{ij}\, Y_{ij}^{tf_k},\ \sum_{f \in F} \sum_{k \in K_f} \sum_{(i,j) \in E} bw_f \max_{t \in T_f}\left( X_{ij}^{tf_k} \right),\ \alpha \right] \qquad (23)$$

subject to

$$\alpha = \max\left\{ \alpha_{ij} \right\}, \text{ where } \alpha_{ij} = \frac{\sum_{f \in F} \sum_{k \in K_f} bw_f \max_{t \in T_f}\left( X_{ij}^{tf_k} \right)}{c_{ij}}$$

$$\sum_{(i,j) \in E} \sum_{k \in K_f} X_{ij}^{tf_k} = 1, \quad t \in T_f,\ f \in F,\ i = s_f$$

$$\sum_{(j,i) \in E} \sum_{k \in K_f} X_{ji}^{tf_k} = -1, \quad i = t,\ t \in T_f,\ f \in F$$

$$\sum_{(i,j) \in E} X_{ij}^{tf_k} - \sum_{(j,i) \in E} X_{ji}^{tf_k} = 0, \quad t \in T_f,\ f \in F,\ k \in K_f,\ i \ne s_f,\ i \ne t$$

$$X_{ij}^{f_k t} = \begin{cases} X^{f_k} = \max\limits_{t \in T_f,\, (i,j) \in E} \left\{ X_{ij}^{f_k t} \right\}, & \text{if } Y_{ij}^{f_k t} = 1 \\ 0, & \text{if } Y_{ij}^{f_k t} = 0 \end{cases} \qquad (i,j) \in E,\ t \in T_f$$

$$\sum_{f \in F} \sum_{k \in K_f} bw_f \max_{t \in T_f}\left( X_{ij}^{tf_k} \right) \le c_{ij}, \quad (i,j) \in E$$

$$Y_{ij}^{tf_k} = \left\lceil X_{ij}^{tf_k} \right\rceil = \begin{cases} 0, & X_{ij}^{tf_k} = 0 \\ 1, & 0 < X_{ij}^{tf_k} \le 1 \end{cases}$$

$$0 \le X_{ij}^{tf_k} \le 1, \quad 0 \le r_i \le 1, \quad \sum_{i=1}^{4} r_i = 1$$
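The subflow uniformity constraint above can also be checked numerically for a candidate solution. The following C sketch is our own (the function name, the flat array of per-link fractions, and the tolerance parameter are assumptions, not the book's code): it verifies that every strictly positive fraction carried by one subflow equals the common value X^{f_k}.

```c
#include <math.h>

/* Check the subflow uniformity constraint for one subflow f_k:
 * x[i] holds the fraction X carried on link i toward some destination;
 * all strictly positive entries must be equal (within eps). */
int uniform_subflow(const double *x, int nlinks, double eps) {
    double ref = -1.0;                  /* common value X^{f_k}, unset */
    for (int i = 0; i < nlinks; i++) {
        if (x[i] > 0.0) {
            if (ref < 0.0) ref = x[i];  /* first positive entry fixes X^{f_k} */
            else if (fabs(x[i] - ref) > eps) return 0;  /* violated */
        }
    }
    return 1;                           /* constraint satisfied */
}
```

For the solution of Table 4.23, the 75 percent subflow would pass this check because every link of its tree carries exactly 0.75 of the flow.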
4.7 Obtaining a Solution Using Metaheuristics

To solve multi-objective optimization problems applied to computer networks, one can use certain types of metaheuristics. In this section we will discuss source code, written in C, for a metaheuristic that solves the optimization problem in a simple way; it is not intended to be the most effective and efficient algorithm for this purpose. Readers who want to analyze advanced algorithms for these types of problems can review some of the works and books recommended in the bibliography. Even though some of the metaheuristics mentioned in Chapter 2 could be used, we will concentrate on working with the Multi-Objective Evolutionary Algorithm (MOEA). Specifically, we will work with the methodology proposed by the Strength Pareto Evolutionary Algorithm (SPEA) to solve the stated problems. In this section we will obtain a computational solution to the problem previously stated, for unicast as well as multicast transmissions. To execute the evolutionary algorithm, one must first define how solutions are going to be represented, that is, the structure of a chromosome. In addition, one must define how the search process for the initial population will take place, how the chromosome selection process will be developed, what the crossover function will be, and, lastly, what the mutation function will be.
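The comparison at the heart of any MOEA is Pareto dominance between objective vectors. As a minimal C sketch (our own code, assuming minimization of every objective; the book's actual implementation may differ):

```c
/* Pareto dominance test for minimization: vector a dominates vector b
 * when a is no worse than b in every objective and strictly better in
 * at least one. Returns 1 if a dominates b, 0 otherwise. */
int dominates(const double *a, const double *b, int nobj) {
    int strictly_better = 0;
    for (int i = 0; i < nobj; i++) {
        if (a[i] > b[i]) return 0;       /* a is worse in objective i */
        if (a[i] < b[i]) strictly_better = 1;
    }
    return strictly_better;
}
```

The non-dominated chromosomes of a generation, those that no other chromosome dominates, form the elitist population used by SPEA.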
4.7.1 Unicast for the Hop Count and Delay Functions

Figure 4.30 shows the topology that we will use in the unicast case.
4.7.1.1 Coding of a Chromosome

Figure 4.31 shows a chromosome (or a solution) corresponding to a path from origin node s to destination node t. It is a vector that contains all the nodes that form that path. Overall, the path represented by Figure 4.31 would be given by nodes s, n1, n2, …, ni, …, t. Figure 4.32 shows a representation of the chromosomes corresponding to the five solutions that can be considered in Figure 4.30.

[Figure 4.30 Unicast topology: nodes 1 to 5, with FTP flow f1 from source s = 1 to destination t = 5.]

[Figure 4.31 Representation of a chromosome: Path = [s, n1, n2, …, ni, …, t].]

[Figure 4.32 Solutions:
Path 1: 1, 2, 5
Path 2: 1, 5
Path 3: 1, 3, 4, 5
Path 4: 1, 2, 4, 5
Path 5: 1, 3, 5]
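A minimal C coding of such a chromosome could look as follows. The struct name, the fixed capacity, and the helper function are our own assumptions for illustration, not the book's code:

```c
#define MAX_PATH 32   /* assumed capacity; plenty for the small examples */

/* A unicast chromosome as in Figure 4.31: the vector of nodes from the
 * origin s (nodes[0]) to the destination t (nodes[len - 1]). */
typedef struct {
    int nodes[MAX_PATH];
    int len;
} Chromosome;

/* Hop count of the path encoded by the chromosome (objective f1). */
int hop_count(const Chromosome *c) {
    return c->len - 1;
}
```

Path 4 of Figure 4.32, for example, would be stored as nodes = {1, 2, 4, 5} with len = 4, giving three hops.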
4.7.1.2 Initial Population

To establish the initial population we use the breadth-first search (BFS) graph algorithm, which was explained in Chapter 1. Here, the algorithm is executed probabilistically: given a path toward destination t, if probability P is not met, that path is not explored. Because it is probabilistic, the search is not exhaustive. The idea is to find several paths from origin node s to destination node t and consider them as the initial population. The paths not found by the BFS algorithm can be found by the crossover and mutation procedures of the MOEA. This way, we obtain a population of feasible paths for the problem.
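A possible C sketch of this probabilistic search follows. It is our own illustration, not the book's code: we use a randomized depth-first walk rather than a queue-based BFS, and the adjacency matrix is an assumption about the links of Figure 4.30, so the names, links, and probability test are all illustrative.

```c
#include <stdlib.h>

#define NNODES 5   /* nodes 1..5 of Figure 4.30 mapped to indices 0..4 */

/* Assumed adjacency matrix; the real link set of Figure 4.30 may differ. */
static const int adj[NNODES][NNODES] = {
    {0, 1, 1, 0, 1},
    {1, 0, 0, 1, 1},
    {1, 0, 0, 1, 1},
    {0, 1, 1, 0, 1},
    {1, 1, 1, 1, 0},
};

static int walk(int u, int t, double p, int *path, int len, int *seen) {
    path[len++] = u;
    if (u == t) return len;          /* reached the destination */
    seen[u] = 1;
    for (int v = 0; v < NNODES; v++)
        if (adj[u][v] && !seen[v] && (double)rand() / RAND_MAX <= p) {
            int r = walk(v, t, p, path, len, seen);
            if (r > 0) return r;
        }
    return 0;                        /* this probabilistic attempt failed */
}

/* One probabilistic attempt to find an s-t path; returns the number of
 * nodes written to path[], or 0 when the attempt found no path. */
int random_path(int s, int t, double p, int path[NNODES]) {
    int seen[NNODES] = {0};
    return walk(s, t, p, path, 0, seen);
}
```

Calling random_path repeatedly with, say, p = 0.8 yields a varied set of feasible paths with which to seed the population.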
4.7.1.3 Selection

Selection of a chromosome is done probabilistically. The value associated with the selection probability is given by the chromosome's fitness, which tells how good the chromosome is compared with the others in its population. The algorithm is simple; the idea is to select all chromosomes that are not dominated in a generation, that is, in an iteration of the algorithm. These chromosomes form an elitist population called Pnd. The other chromosomes, dominated by population Pnd, form a population called P. The fitness value of each chromosome, in population Pnd as well as in P, is calculated by SPEA in such a way that the worst fitness value in the elitist population Pnd is better than the best value in population P. Hence, chromosomes in Pnd have a greater probability of being selected. Once fitness values are found for all chromosomes, one uses the roulette selection operator or the binary tournament (explained in Chapter 2) on the union of populations Pnd and P to select one chromosome at a time, every time one is needed.
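The roulette step itself is easy to sketch in C. This is our own illustration, not the book's code: we assume the fitness values have already been scaled so that a larger weight means a better chance of selection, and we pass the random draw in explicitly to keep the function deterministic for testing.

```c
/* Roulette-wheel selection over the union of populations Pnd and P:
 * individual i is chosen with probability proportional to weight w[i].
 * r01 is a uniform random number in [0, 1) supplied by the caller.
 * Returns the selected index. */
int roulette_select(const double *w, int n, double r01) {
    double total = 0.0;
    for (int i = 0; i < n; i++) total += w[i];
    double spin = r01 * total;       /* position of the roulette pointer */
    double acc = 0.0;
    for (int i = 0; i < n; i++) {
        acc += w[i];
        if (spin < acc) return i;
    }
    return n - 1;                    /* guard against rounding at the edge */
}
```

With SPEA's fitness convention, the weights would be derived so that every member of Pnd outranks every member of P, as described above.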
4.7.1.4 Crossover

The crossover and mutation functions perform the combinatorial analysis that finds new chromosomes from existing chromosomes. We propose the following scheme as the crossover operator. Using the selection procedure, one selects the two chromosomes from which the offspring chromosomes will be produced. Next, one checks the crossover probability to determine whether the crossover will take place. If it will, one selects a cutting point, which can be obtained randomly, in any position after the origin node s and before destination node t. The crossover function considered in this book has only one cutting point, but one could consider crossover functions with more than one cutting point.
[Figure 4.33 Crossover operator: parent chromosomes 1 and 2, each of the form [s, n1, …, ni, …, nm-1, t], are cut at the same point and their left and right parts are exchanged to produce offspring 1 and 2.]

[Figure 4.34 Crossover example: parent paths 1, 2, …, 5 and 1, 3, …, 4, 5, cut at the second position, produce the offspring 1, 2, …, 4, 5 and 1, 3, …, 5.]
Once the cutting point is determined, one obtains the offspring in the following way: the left part of the first parent chromosome is selected, the right part of the second parent chromosome is selected, and both are combined to produce a new offspring. An additional crossover can be done to produce another offspring; in this case, the left part of the second parent chromosome is combined with the right part of the first parent chromosome (see Figure 4.33). Lastly, one must verify that the offspring created (one or two) are really feasible solutions to the problem. Figure 4.34 shows how, from two paths existing in the initial population (parent chromosomes), one obtains two new paths (offspring chromosomes). The crossover point has been selected in the second position. Finally, we must verify that both paths are feasible in the topology shown in Figure 4.30.
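The cut-and-recombine step can be sketched in C as follows. This is our own code over the array representation; the names are assumptions, and the feasibility check mentioned above is left to the caller.

```c
/* One-point crossover as in Figure 4.33: the child keeps the first `cut`
 * nodes of parent p1 and the nodes of parent p2 from position `cut` on.
 * The caller must still verify that the child is a feasible path.
 * Returns the child's length. */
int crossover(const int *p1, const int *p2, int len2, int cut, int *child) {
    int n = 0;
    for (int i = 0; i < cut; i++) child[n++] = p1[i];     /* left of p1 */
    for (int j = cut; j < len2; j++) child[n++] = p2[j];  /* right of p2 */
    return n;
}
```

For the parents of Figure 4.34, crossover with p1 = {1, 2, 5}, p2 = {1, 3, 4, 5}, and cut = 2 yields the offspring {1, 2, 4, 5}; swapping the parents yields {1, 3, 5}.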
4.7.1.5 Mutation

We propose the following scheme as the mutation operator. By means of the selection method we obtain a chromosome. Next, one checks the

[Figure 4.35 Mutation operator: the chromosome [s, n1, …, ni, …, nm-1, t] is cut at point ni, and a new path from ni to t replaces the old tail, giving the new chromosome.]

[Figure 4.36 Mutation example: from the chromosome 1, 2, …, 5, cutting at node 2 and finding the new path 4, 5 gives the new chromosome 1, 2, 4, 5.]
probability of mutation to verify whether the mutation will take place. If it will, one selects a cutting point at which to perform it. This point can be deterministic, for example, always at the center of the chromosome, or it can be selected probabilistically, the latter being the most recommended. The part of the chromosome between origin node s and cutting point ni is maintained, and from node ni one performs, for example through the BFS algorithm, the search for a new path between node ni and destination node t. Figure 4.35 shows how the mutation operator works. Figure 4.36 shows how, in the topology analyzed (Figure 4.30), from the existing path {(1, 2), (2, 5)} one obtains the new path {(1, 2), (2, 4), (4, 5)}. In this case, the mutation point is in the second position, and through, for example, the BFS algorithm, one has found the new path {(2, 4), (4, 5)} between nodes 2 and 5. Finally, we must verify that this path corresponds to a feasible path in the topology considered.
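The splice itself can be sketched in C. This is our own code; the tail argument stands for the new ni-t path found by, for example, the probabilistic BFS, and it starts at node ni itself.

```c
/* Mutation as in Figure 4.35: keep the chromosome up to (but excluding)
 * position `cut`, then append the replacement path tail[], whose first
 * node is the node previously at position `cut` (ni). Returns the new
 * length. Names and the array representation are illustrative. */
int mutate(const int *chrom, int cut, const int *tail, int tail_len, int *out) {
    int n = 0;
    for (int i = 0; i < cut; i++) out[n++] = chrom[i];    /* kept prefix */
    for (int j = 0; j < tail_len; j++) out[n++] = tail[j]; /* new tail   */
    return n;
}
```

For Figure 4.36, mutating {1, 2, 5} with cut = 1 and tail = {2, 4, 5} yields the new chromosome {1, 2, 4, 5}.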
4.7.2 Multicast for the Hop Count and Delay Functions

Figure 4.37 shows the topology that we will use in the multicast case.
4.7.2.1 Coding of a Chromosome

Figure 4.38 shows a chromosome (or a solution) corresponding to a tree from origin node s to each of the destination nodes t. It is a vector with

[Figure 4.37 Multicast topology: nodes 1 to 6, with video flow f1 from source s = 1 to destinations t1 = 5 and t2 = 6.]

[Figure 4.38 Representation of a chromosome: Tree = [P1, #, P2, …, Pi, …, #, Pn].]

[Figure 4.39 Representation of the nodes of each path: Tree = [s, n1, …, ni, …, t1, #, s, n1, …, ni, …, t2].]
n paths (one to each of the destination nodes). Each of these n paths Pi, with i = 1, …, n, is separated by the special character #. As we considered for the unicast case, every path Pi is formed by the nodes that are part of that path. Figure 4.39 shows the tree that corresponds to a multicast transmission with two destination nodes (t1 and t2). The tree could also be represented using matrices; in this case, each row represents a path to one destination node t. There are other ways to represent these trees, for example, with dynamic structures that would optimize the use of memory in PCs. In this book we will use the tree representations shown in Figure 4.39 and Figure 4.40. Figure 4.41 shows some of the trees that can be obtained in the topology of Figure 4.37 for a multicast flow from origin node 1 to destination nodes 5 and 6.
[Figure 4.40 Representation in matrix form: each row holds one path, P1 = [s, n1, …, ni, …, t1] and P2 = [s, n1, …, ni, …, t2].]

[Figure 4.41 Solutions:
Tree 1: 1, 5, #, 1, 6
Tree 2: 1, 2, 5, #, 1, 3, 6
Tree 3: 1, 4, 5, #, 1, 4, 6
Tree 4: 1, 2, 5, #, 1, 4, 6
Tree 5: 1, 4, 5, #, 1, 3, 6]
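In C, the flat vector-with-separator coding of Figures 4.38 and 4.41 can be sketched as follows. This is our own illustration, not the book's code; in particular, using -1 as the in-array stand-in for the separator # is an assumption.

```c
#define SEP (-1)   /* in-array stand-in for the path separator '#' */

/* Number of paths (one per destination) encoded in a tree chromosome:
 * paths are stored back-to-back in tree[] and separated by SEP. */
int path_count(const int *tree, int len) {
    if (len == 0) return 0;
    int n = 1;                       /* a non-empty tree has at least one path */
    for (int i = 0; i < len; i++)
        if (tree[i] == SEP) n++;
    return n;
}
```

Tree 2 of Figure 4.41 would be stored as {1, 2, 5, SEP, 1, 3, 6} and contains two paths, one per destination.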
4.7.2.2 Initial Population

To establish the initial population, we use the BFS algorithm probabilistically. Here, the idea is to find an initial population formed by several trees from origin node s to the set of destination nodes t.
4.7.2.3 Selection

The chromosome selection process (where a chromosome in the multicast case represents a tree) can be done the same way as in the unicast case.
4.7.2.4 Crossover

By means of the crossover and mutation functions one can perform the combinatorial analysis that finds new chromosomes from existing chromosomes. As the crossover operator we propose the following scheme. Using the selection procedure, select the two chromosomes with which one will create the offspring chromosomes. Next, check the crossover probability to verify whether the crossover will effectively take place. In the multicast
[Crossover operator for trees: parent chromosomes 1 and 2, each of the form [P1, #, P2, …, Pi, …, #, Pn], are cut at a selected point.]