Multicriteria Location Analysis 3031238753, 9783031238758

This book applies Multicriteria Decision Making (MCDM) tools and techniques to problems in location analysis.

English Pages 316 [317] Year 2023


Table of contents:
Preface
Contents
Chapter 1: Introduction
References
Part I: Tools and Techniques of Multicriteria Location Analysis
Chapter 2: Single-Objective Optimization
Appendix
A.1 Sensitivity Analysis
A.2 Visualizations of Sensitivity Analyses
A.3 Modeling Nonlinear Functions
A.4 Piecewise Linearization of Nonlinear Constraints
A.5 Heuristics
References
Chapter 3: Multicriteria Decision Making
3.1 The Basic Setting
3.2 Some Approaches in the MOP and MADM Settings
3.3 Methods for MOP Problems
3.4 Methods for MADM Problems
3.4.1 Making It Work
References
Chapter 4: Location Models
4.1 The Basic Setting
4.2 The Main Features of Location Problems
4.3 The Basic Models and Their Extensions
4.3.1 Problems with "Pull" Objective
4.3.2 Problems with "Push" Objective
4.3.3 Problems with "Balancing" Objective
4.4 Evaluation of Point Patterns
4.5 Making It Work
References
Chapter 5: Mathematical and Geospatial Tools
5.1 Interpolation/Curve Fitting
5.1.1 Regressions
5.1.2 Voronoi Diagrams
5.2 GIS Tools
5.2.1 Basic Elements and Features
5.2.2 Geoprocessing Tools
5.3 Voting Procedures
5.4 Making It Work
5.4.1 Interpolation and Curve Fitting
5.4.2 Voting Procedures
References
Software Resources
Some GIS Tools
Part II: Model Applications
Chapter 6: Locating a Landfill with Vector Optimization
6.1 Making It Work
References
Chapter 7: Using Goal Programming to Locate a New Fire Hall
7.1 Making It Work
References
Chapter 8: Locating a Hospital with a Zoom Approach
8.1 Making It Work
References
Chapter 9: Locating a Private School with a Generic Method
9.1 Making It Work
References
Chapter 10: Locating Jails with a Target Value Approach
10.1 Making It Work
References
Chapter 11: A Rejection Approach to Locate a New City Park
11.1 Making It Work
References
Chapter 12: Locating a New Fulfillment Center with the Domain Criterion
12.1 Making It Work
References
Chapter 13: Locating Fast Food Restaurants with the Analytic Hierarchy Process
13.1 Making It Work
References
Chapter 14: Locating Billboards with the Jefferson-d'Hondt Method
14.1 Making It Work
References
Chapter 15: Locating an Airport by Voting
15.1 Making It Work
References
Part III: Practitioner Perspective
Chapter 16: How Location Works in Practice: A Perspective from the United States
16.1 Introduction
16.2 Area Development's Annual Survey of Critical Site Selection Factors
16.3 Workforce-Related Factors
16.4 Infrastructure and Proximity-Related Factors
16.5 Tax Climate and Incentives
16.6 Locational Decision-Making Process
16.7 Global Location
16.8 Influence of State Rankings
16.9 Trends in Economic Development Strategies to Facilitate Site Selection
16.10 Location of "Undesirable" Facilities
References

International Series in Operations Research & Management Science

H. A. Eiselt Vladimir Marianov Joyendu Bhadury

Multicriteria Location Analysis

International Series in Operations Research & Management Science

Founding Editor: Frederick S. Hillier, Stanford University, Stanford, CA, USA

Volume 338

Series Editor: Camille C. Price, Department of Computer Science, Stephen F. Austin State University, Nacogdoches, TX, USA

Editorial Board Members: Emanuele Borgonovo, Department of Decision Sciences, Bocconi University, Milan, Italy; Barry L. Nelson, Department of Industrial Engineering & Management Sciences, Northwestern University, Evanston, IL, USA; Bruce W. Patty, Veritec Solutions, Mill Valley, CA, USA; Michael Pinedo, Stern School of Business, New York University, New York, NY, USA; Robert J. Vanderbei, Princeton University, Princeton, NJ, USA

Associate Editor: Joe Zhu, Foisie Business School, Worcester Polytechnic Institute, Worcester, MA, USA

The book series International Series in Operations Research and Management Science encompasses the various areas of operations research and management science. Both theoretical and applied books are included. It describes current advances anywhere in the world that are at the cutting edge of the field. The series is aimed especially at researchers, advanced graduate students, and sophisticated practitioners. The series features three types of books:

• Advanced expository books that extend and unify our understanding of particular areas.
• Research monographs that make substantial contributions to knowledge.
• Handbooks that define the new state of the art in particular areas. Each handbook will be edited by a leading authority in the area who will organize a team of experts on various aspects of the topic to write individual chapters. A handbook may emphasize expository surveys or completely new advances (either research or applications) or a combination of both.

The series emphasizes the following four areas:

Mathematical Programming: Including linear programming, integer programming, nonlinear programming, interior point methods, game theory, network optimization models, combinatorics, equilibrium programming, complementarity theory, multiobjective optimization, dynamic programming, stochastic programming, complexity theory, etc.

Applied Probability: Including queuing theory, simulation, renewal theory, Brownian motion and diffusion processes, decision analysis, Markov decision processes, reliability theory, forecasting, other stochastic processes motivated by applications, etc.

Production and Operations Management: Including inventory theory, production scheduling, capacity planning, facility location, supply chain management, distribution systems, materials requirements planning, just-in-time systems, flexible manufacturing systems, design of production lines, logistical planning, strategic issues, etc.
Applications of Operations Research and Management Science: Including telecommunications, health care, capital budgeting and finance, economics, marketing, public policy, military operations research, humanitarian relief and disaster mitigation, service operations, transportation systems, etc. This book series is indexed in Scopus.

H. A. Eiselt · Vladimir Marianov · Joyendu Bhadury

Multicriteria Location Analysis

H. A. Eiselt Faculty of Management University of New Brunswick Faculty of Business Administration Fredericton, NB, Canada

Vladimir Marianov Department of Electrical Engineering Pontificia Universidad Católica de Chile Santiago, Chile

Joyendu Bhadury Davis College of Business and Economics Radford University Radford, VA, USA

ISSN 0884-8289 ISSN 2214-7934 (electronic) International Series in Operations Research & Management Science ISBN 978-3-031-23875-8 ISBN 978-3-031-23876-5 (eBook) https://doi.org/10.1007/978-3-031-23876-5 © Springer Nature Switzerland AG 2023 This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. This Springer imprint is published by the registered company Springer Nature Switzerland AG The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland

In the beginning, two deputies of the chief decision maker, Ángel Blanco and Diablo Rojo, met to decide where this newly created species called anthropoids should reside. Mr. Blanco insisted that they should live in a place, where the streets are paved with gold, while Mr. Rojo strongly demanded that they live in a very warm place. The two argued for a long time, considering a variety of planets with different features–some barren but good for mining, some with flat fertile fields suitable for farming, some with great seas for fishing. However, there was no single solution that satisfied them both. They determined that their objectives were just too different. After long and heated discussions, they settled for a solution that had a little bit of everything. Neither of them was very happy about it, but—at least for the time being—it was a viable compromise. And that’s where the anthropoids have lived ever since. Jesús Peregrino

Preface

While location models have interested mathematicians and geographers since the seventeenth century, the operations research community was introduced to the problem in the 1960s. Since then, a wide variety of models has been established and specialized solution techniques have been devised. One of the more important developments of the last half century is geographical information systems (GIS), which allow users to store, manipulate, and display spatial data. Decision analysis tools, on the other hand, have been devised since the late 1940s. A variety of tools was developed in the 1960s and 1970s, in part based on input from psychologists, so as to capture aspects related to behavioral components. Many contributions have been written in both fields, so that one could consider the areas well sedimented. Why then yet another book on the subject? Just to lower the center-of-gravity when we put it on the bookshelf? As the reader will probably have guessed, the answer is a resounding "no." The main idea was to marry the two fields and demonstrate their uses and applications by way of a number of cases that use real data and reasonably real situations. In that, our effort resembles the Olson book from 1996, which introduces a variety of techniques in decision analysis (without the location component, though). The book is organized as follows. The first four chapters introduce the basic tools and techniques in single-objective optimization, multicriteria decision making, location analysis, and other tools, such as statistical regression and the aforementioned geographical information systems. This is followed by ten chapters of model applications, each of which introduces one location problem and applies one technique for its solution. The book is then wrapped up in one chapter, which looks at the location process from a practitioner's point of view. Each chapter in the main body of the book is subdivided into two parts. The first part introduces the problem and describes the approach taken in broad strokes (something a decision maker would want to read), while the second part, called "Making it work," includes the nuts and bolts, i.e., the model and its solution, which is of special interest to the analyst.


The book is designed for third- or fourth-year undergraduate students, as well as anybody with an interest in the subject who has a minor background in optimization. The first four chapters of this book can then be used as a refresher. Every book is an effort by many people. In addition to the "tres amigos" who acted as authors, we would like to thank Camille Price, who coaxed us into undertaking this project, and the project coordinator at Springer, Mrs. P. Chockalingam, who kept us on the straight and narrow by reminding us how far behind schedule the submission of our book already was. Thanks are also due to Mrs. K. Selvaraju, Mrs. J. Yan, and Mrs. R. Prakash of Springer for their help and encouragement. Special thanks are due to Siobhan Hanratty from the University of New Brunswick, who not only produced some of the maps, but also helped us with the many copyright issues. We very much appreciate her assistance.

H. A. Eiselt, Fredericton, NB, Canada
Vladimir Marianov, Santiago, Chile
Joy Bhadury, Radford, VA, USA


Chapter 1

Introduction

The purpose of this volume is to demonstrate the practical possibilities and the limitations of solving location problems with tools from multicriteria decision making. The reason for considering solving location problems in this way, rather than resorting to the standard tools available in location analysis, is based on the positioning of location problems on the strategic—tactical—operational scale. Most location problems are located (sic) high on that scale, as the facilities they locate tend to be expensive to build and to (re-)locate. This is in contrast to the related routing problems, which are typically found on the operational level. In the extreme, it is much more complex to plan, finance, and build, say, a new stadium than it is to change a bus route. Given the high costs and the many stakeholders that are usually involved in the siting of a facility, the process typically involves multiple decision makers, each with a number of different criteria (in addition to a multitude of regulatory constraints). In contrast to the process as it pans out in reality, most location problems in the literature use a single criterion, viz., distance, i.e., a (dis-)utility for spatial separation. We should point out that the term "distance" is used here in the widest possible sense: in the case of emergency vehicles, it may refer to the time that is needed to reach the site of an emergency; in the case of polluting facilities, it measures the attenuation of pollution; whereas in the case of commercial facilities such as warehouses or distribution centers, it is typically used as a proxy for costs. The following paragraphs will first briefly introduce the different strands of scientific inquiry and provide a glimpse of their origins.
After reviewing the pillars on which this book rests, viz., optimization, multicriteria decision making, location theory, and statistical and geographical/spatial techniques, we then provide a summary of a number of applications of multicriteria decision making to facility location problems. They have been chosen so as to provide a glimpse at the variety of applications with respect to the country in which the application takes place, the criteria that are relevant in the application, and the technique that is used to solve the problem. Rather than following the aforementioned sequence, we will start our discussion with location problems, as they are at the heart of the applications, while the other three pillars provide the support methodology.

Location problems have been around as long as mankind. Where to spend the night (in which of the caves in the neighborhood), where to build a fortified city, where to construct boats for trade, and, more recently, where to build multi-mode airline hubs, or where to position communication satellites? The history of formal location models begins with early discussions in the works of Torricelli, Fermat, and Cavalieri in the seventeenth century, Simpson in the mid-eighteenth century, von Thünen (1966) in 1826 in the context of land use, and finally Alfred Weber in 1909 (even though the mathematical tools in his book are relegated to the Appendix, which was written by Georg Pick) in the geographical-industrial context. Christaller (1933), Lösch (1954), and Isard (1956) made some of the landmark contributions in location analysis, based on their geographical background. With the advent of operations research in the late 1940s, location theory took a more quantitative turn, later to be further emphasized by the "New Economic Geography" and its emphasis on economies of scale, agglomeration, and monopolistic competition. Among the first operations research contributions to location analysis are Hakimi's (1964) major result on network locations, Cooper's (1963) methods to solve location problems in the plane, and the rediscovery of Weiszfeld's (1937) method for planar location problems, which was later translated into English by Weiszfeld and Plastria (2009). Also worth mentioning is Hakimi's (1965) contribution, which resulted in Toregas et al.'s (1971) location set covering problem, followed by the more practical max cover problem first described by Church and ReVelle (1974). An important development was initiated by Hakimi (1983), who was among the first operations researchers to describe location problems in a competitive context.
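Weiszfeld's method, mentioned above, is an iteratively re-weighted averaging scheme for the planar weighted 1-median (Fermat-Weber) problem: each iterate is the average of the demand points weighted by weight divided by current distance. A minimal sketch in Python (function and variable names, the centroid start, and the stopping rule are our own choices, not taken from this book):

```python
import math

def weiszfeld(points, weights, iters=200, tol=1e-9):
    """Approximate the point minimizing the weighted sum of Euclidean
    distances to the given demand points (Fermat-Weber point)."""
    # Start at the weighted centroid of the demand points.
    total = sum(weights)
    x = sum(w * px for (px, _), w in zip(points, weights)) / total
    y = sum(w * py for (_, py), w in zip(points, weights)) / total
    for _ in range(iters):
        num_x = num_y = denom = 0.0
        for (px, py), w in zip(points, weights):
            d = math.hypot(x - px, y - py)
            if d < tol:            # iterate coincides with a demand point
                return (px, py)
            num_x += w * px / d    # re-weight each point by w / distance
            num_y += w * py / d
            denom += w / d
        nx, ny = num_x / denom, num_y / denom
        if math.hypot(nx - x, ny - y) < tol:
            break                  # negligible movement: converged
        x, y = nx, ny
    return (x, y)
```

For symmetric instances the iteration settles on the obvious center; when one demand point carries a dominant weight, the iterates are drawn to that point.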
A detailed survey of the origins of location science is provided by Eckhardt (2008, in German). Eiselt and Marianov (2011) supply a short history of location problems and their components in their book (Eiselt and Marianov 2011a), which features a collection of original contributions as viewed by today's researchers. Up-to-date surveys of many aspects of location science can be found in the volume edited by Laporte et al. (2019). A large variety of applications of location models is described in the literature; see, e.g., Eiselt (1992), Eiselt and Sandblom (2004), Eiselt and Marianov (2011), Daskin (2013), Eiselt et al. (2015), or Laporte et al. (2019). All of the early contributions and many of the newer theoretical models in the literature used distance as the only criterion in their models. Reilly (1931) appears to have been one of the early contributors to use features other than distance in his location models. Huff (1964) followed in his footsteps with a probabilistic version of the model. Naturally, such models appeal mostly to researchers who deal with models in the retail context. It was Hodgson (1978, 1981) who, starting with spatial interaction models, also used attraction functions for retail facilities, in which the "attractiveness" of the facility is a one-dimensional factor that measures store-specific features, which make a customer prefer one store over another, given equal customer-facility distances. The initial attempts by Hodgson were followed by Eiselt and Laporte (1988) and T. Drezner (1994), who applied the concept in many different contexts, e.g., competitive location models. Another attempt by Hodgson (1988) to make location models more life-like was to build a hierarchical location model for referral systems in health care in India.

Among the most notable developments in the context of location analysis in the last 50 years are geographical information systems (GIS). Following the timeline of ESRI, a company founded in Redlands in 1969 by Jack and Laura Dangermond (ESRI, undated) and today the main player in the software field for GIS, the development started in the early 1960s with the demand for an inventory of natural resources in Canada. The field bloomed as computers became more powerful and especially when more advanced computer graphics were developed. Today, many geographical information systems include not only extensive visualization and mapping features, but also distance computations, inclusion and exclusion zones around prescribed features, and even some standard location optimization models. All of this makes GIS an invaluable tool in location analysis. An up-to-date exposé of geographical information systems and their capabilities can be found in Murray et al. (2019).

Solving location models will necessarily require the use of some optimization theory. As could be expected, the development of optimization methods started with single-objective problems in linear, integer, and nonlinear programming. In the case of linear programming, the early days saw Leontief's work on input-output models in the 1930s, followed by Kantorovich's research on the allocation of scarce resources, and crowned in 1947 by Dantzig's description of the simplex method, the go-to solution technique for linear programming problems.
Whereas the discussion of integer problems dates back to Euclid (third century BC) and Diophantos (third century AD) and his work on equations and their solutions in integers (Fermat's famous "last theorem" in 1641 also deserves some mention in this context), methods for the solution of integer programming problems were independently developed by Dantzig (1959) and Gomory (1958, 1963) in the late 1950s and early 1960s, and by others such as Balas (1965) in the 1960s. Single-objective nonlinear problems date back to the seventeenth century, when Fermat and Simpson worked on, ironically, a location problem. Newton's work on calculus was developed independently by Leibniz, and Lagrange's famed multipliers were described in the late eighteenth century. "Modern" nonlinear programming was developed and driven forward by Fritz John, Karush in his Master's thesis in 1939, John von Neumann, and, most prominently, Kuhn and Tucker (1951). An excellent summary of the history of nonlinear programming can be found in Giorgi and Kjeldsen (2014).

These single-objective optimization problems were generalized to multiobjective programming problems. As early as 1951, Kuhn and Tucker (1951) laid the foundation for what is known as vector optimization. Zeleny (1974) provided a good account of the field at the time and described an algorithm that generates all nondominated solutions. An interesting account of the very early history of problems with multiple objectives is provided by DeWeck (2004). Charnes et al. (1955) were the first to describe goal programming (incidentally, the name "goal programming" appears to first have been used by Charnes and Cooper 1961). If the number of books in a field is a sign of its maturity, then goal programming most certainly has come of age. Some prominent earlier examples are the volumes by Lee (1972), Ignizio and Cavalier (1994), and Schniederjans (1995). A somewhat more recent account of progress in the field is provided by Aouni and Kettani (2001). Many others followed, and multiobjective programming holds its own as a separate discipline with many different approaches, some of which are covered in Chap. 3 of this volume.

A different approach is taken by multiattribute decision making problems, which deal with only a finite number of decisions, coupled with a finite number of criteria to be considered. The roots of this discipline go back to a number of contributions in voting theory. The connection between voting and multicriteria decision making is that in both cases, the attempt is to reconcile different criteria or opinions into one result. The earliest reference in this context is most likely Pliny the Younger (AD 105), followed by Ramon Llull in the thirteenth and early fourteenth century, whose methods were rediscovered some 450 years later by Borda and Marie Jean Antoine Nicolas de Caritat, Marquis of Condorcet (mostly just referred to as "Condorcet"). Modern multiattribute decision making is based on the work by John von Neumann on game theory in the early twentieth century, which culminated in the publication of the tome by von Neumann and Morgenstern (1944). Outranking methods were first suggested by Roy (1968), and much of the work on multiattribute decision making problems is based on works by psychologists such as Tversky and Kahneman (1973; for a good review, see, e.g., Heukelom 2007), as the approaches attempt to incorporate observations about human behavior in the mathematical methods.

Given the very general description of location problems, viz., "find sites for 'facilities,' so as to optimize a function of the degree of interaction between the facilities and a set of fixed customer points," it is little surprise that location problems have been applied in a large variety of contexts.
A general description such as this fits contexts as diverse as the location of ATMs in town, the location of search & rescue vessels in the ocean, the location of church camps, and even the positioning of politicians on the issues of the day and brand positioning and design. A (non-comprehensive) list of some selected applications is shown in Table 1.1. This is, of course, not the first attempt to collect and display applications of location models. An early listing of applications in location analysis can be found in Eiselt (1992); somewhat newer listings are found in Table 2 of Melo et al. (2009) and in Eiselt et al. (2015), as well as, more recently, in Adeleke and Olukanni (2020) and Turkoglu and Genevois (2020). Most of these references use just a single objective, though. Lists that specialize in applications in specific fields are also available: see, e.g., Eiselt and Marianov (2015) or Yu and Solvang (2017) for landfills, Khan et al. (2022) and Gul and Guneri (2021) for health care, Boonmee et al. (2017) for the location of disaster facilities, Church and Drezner (2022) for obnoxious facility location problems, and Alumur et al. (2021) for hub location problems, just to name a few. We hope that this short account of applications of location analysis has whetted your appetite to discover more. The following chapters in this book are an attempt to provide food for thought.

1

Introduction

5

Table 1.1 Some applications of location problems

| #ᵃ | Year | Country | Facility | Tool | Criteria |
|----|------|---------|----------|------|----------|
| 1 | 2005 | Chile | Jails | Multiobjective mixed integer | Fixed opening costs, expansion costs, overpopulation costs, sentenced inmate transportation costs |
| 2 | 2009 | Portugal | Incinerator | Vector optimization, weighting method | Min operating costs, min fixed costs, min total risk, min the max avg. risk on any region |
| 3 | 2010 | Turkey | Warehouses | Choquet integral | Costs, labor characteristics, infrastructure, proximity to customers, laws & policies |
| 4 | 2012 | Spain | Industrial area | Fuzzy logic, AHP | Environmental, socioeconomic, infrastructure & urban development |
| 5 | 2012 | Iran | Greenhouse | ANP-COPRAS | Costs, environmental concerns, access to transportation, supplies, markets & power |
| 6 | 2015 | Morocco | Solar farm | AHP | Climate, slope, distances, land use |
| 7 | 2015 | Iran | Hospitals | Multiobjective genetic heuristic plus TOPSIS | Min total travel cost, min access inequity, min land use incompatibility, min fixed cost |
| 8 | 2015 | China | Forest fire watchtowers | Bi-objective optimization | Min budget & max cover |
| 9 | 2015 | Iran | Earthquake shelters | PROMETHEE | Age of buildings, population density, construction quality, land availability |
| 10 | 2015 | Turkey | University faculty | Fuzzy TOPSIS | Geological risk, environmental risk, costs, proximity |
| 11 | 2016 | China | Earthquake evacuation shelter | MOP formulation & heuristic | Min customer-facility distances & two max cover population objectives |
| 12 | 2016 | Vietnam | Dry (inland) port | Additive weighting, SWING, AHP | Criteria for customers, firms, & community, incl. accessibility cost, pollution, employment generation, &c |
| 13 | 2016 | Chile | Firefighting service | Mixed integer MOP | Coverage of different types of emergencies with different types of vehicles |
| 14 | 2016 | Turkey | Search & rescue vessels | AHP, clustering, vector optimization, weighting method | Min exceeding response time, min exceeding budget, min supply exceeding capacity |
| 15 | 2017 | Serbia | Wind farm | ANP | Wind speed avg., land use, distance from urban areas, protected areas, road network, telecom, tourist facilities, power lines, slope of land, population density |
| 16 | 2017 | Spain | Biomass plant | GIS & weighted linear combination | Transport cost, access, environmental impact |
| 17 | 2018 | Serbia | Multimodal logistics center | DEMATEL-MAIRCA | Infrastructure, environmental impact, conformity with plans, area for expansion, distance to users, &c |
| 18 | 2018 | Turkey | Landfill | Fuzzy sets, information axiom | Distances to roads, transfer stations, &c, environmental factors, water, land prices |
| 19 | 2018 | Mexico | Flood emergency facilities (shelters, meeting points, distribution centers) | MOP | Min the max evacuation time, min the max time for aid distribution, min total cost |
| 20 | 2019 | Serbia | Tourist hotel | SWARA for weights, then weighted sum | Infrastructure, access, environment, investment, resources |
| 21 | 2019 | Colombia | Landfill | AHP with fuzzy logic | Social impact, transportation & other costs, access & availability of land, risk |
| 22 | 2019 | China | Water reservoir | Fuzzy TODIM | Costs, environmental factors, weather conditions, topography, distance to grid |
| 23 | 2020 | Spain | Logistics center | FUCOM | Accessibility, security, connectivity, proximity to highways; land costs, proximity to customers, suppliers & intermodal transport, environmental impact, cost |
| 24 | 2020 | Iran | Hybrid wind/solar power plant | Fuzzy VIKOR | 14 criteria, including altitude, sunny hours, population, unemployment rate, avg. temperature & humidity, distance between site & main road, land price |
| 25 | 2020 | Turkey | Temporary hospital | Gray-based combined compromise solutions | Traffic congestion, accessibility by road, accessibility by air, number of health centers in district, dist. from populated residential area, land price, transportation cost, expansion potential, distance from industrial areas (air quality), land regulation |
| 26 | 2021 | Ethiopia | Landfill | Weighted average | Proximity to wells, rivers, road networks, population, fault areas |
| 27 | 2022 | Brazil | Warehouse | Fuzzy decision making, MAUT, MAVT | Distances to consumer, container port, int'l airports, & highways; land costs |
| 28 | 2022 | China | Earthquake shelters | Bi-level optimization | Min cost of shelters, min time to reach shelter |
| 29 | 2022 | Senegal | Distribution points in drought | Bilevel optimization, value-at-risk, stochastic | Minimize fixed costs, minimize undersupply |

ᵃ 1—Marianov and Fresard (2005), 2—Alçada-Almeida et al. (2009), 3—Demirel et al. (2010), 4—Ruiz et al. (2012), 5—Rezaeiniya et al. (2012), 6—Tahri et al. (2015), 7—Beheshtifar and Alimoahmmadi (2015), 8—Bao et al. (2015), 9—Esmaelian et al. (2015), 10—Suder and Kahraman (2015), 11—Xu et al. (2016), 12—Nguyen and Notteboom (2016), 13—Pérez et al. (2016), 14—Razi and Karatas (2016), 15—Gigović et al. (2017), 16—Jeong and Ramírez-Gómez (2017), 17—Pamucar et al. (2018), 18—Kahraman et al. (2018), 19—Mejia-Argueta et al. (2018), 20—Popovic et al. (2019), 21—Rojas-Trejos and González-Velasco (2019), 22—Wu et al. (2019), 23—Yazdani et al. (2020), 24—Rezaei et al. (2020), 25—Zolfani et al. (2020), 26—Mussa and Suryabhagava (2021), 27—Torre et al. (2022), 28—He and Xie (2022), 29—Nazemi et al. (2022)

References

O.J. Adeleke, D.O. Olukanni, Facility location problems: models, techniques, and applications in waste management. Recycling 5(2) (2020). https://www.mdpi.com/2313-4321/5/2/10. Accessed 15 Sept 2022
L. Alçada-Almeida, J. Coutinho-Rodrigues, J. Current, A multiobjective modeling approach to locating incinerators. Socio Econ Plan Sci 43, 111–120 (2009)
S.A. Alumur, J.F. Campbell, I. Contreras, B.Y. Kara, V. Marianov, M.E. O'Kelly, Perspectives on modeling hub location problems. Eur J Oper Res 291(1), 1–17 (2021)
B. Aouni, O. Kettani, Goal programming model: a glorious history and a promising future. Eur J Oper Res 133(2), 225–231 (2001)
E. Balas, Discrete programming by the filter method. Oper Res 13, 915–957 (1965)
S. Bao, N. Xiao, Z. Lai, H. Zhang, C. Kim, Optimizing watchtower locations for forest fire monitoring using location models. Fire Saf J 71, 100–109 (2015)
S. Beheshtifar, A. Alimoahmmadi, A multiobjective optimization approach for location-allocation of clinics. Int Trans Oper Res 22, 313–328 (2015)
C. Boonmee, M. Arimura, T. Asada, Facility location optimization model for emergency humanitarian logistics. Int J Disaster Risk Reduct 24, 485–498 (2017)


A. Charnes, W.W. Cooper, Management models and industrial applications of linear programming (Wiley, New York, 1961)
A. Charnes, W.W. Cooper, R. Ferguson, Optimal estimation of executive compensation by linear programming. Manag Sci 1, 138–151 (1955)
W. Christaller, Die zentralen Orte in Süddeutschland (Gustav Fischer, Jena, 1933). Partial English translation: Central places in Southern Germany (Prentice Hall, 1966)
R.L. Church, Z. Drezner, Review of obnoxious facilities location problems. Comput Oper Res 138 (2022)
R. Church, C. ReVelle, The maximal covering location problem. Papers Reg Sci Assoc 32, 101–118 (1974)
L. Cooper, Location–allocation problems. Oper Res 11(3), 331–343 (1963)
G.B. Dantzig, Note on solving linear programs in integers. Nav Res Logist Q 6, 75–76 (1959)
M.S. Daskin, Network and discrete location: models, algorithms, and applications, 2nd edn. (Wiley, Hoboken, NJ, 2013)
T. Demirel, N.Ç. Demirel, C. Kahraman, Multi-criteria warehouse location selection using Choquet integral. Expert Syst Appl 37, 3943–3952 (2010)
O. DeWeck, Multiobjective optimization: history and promise. Invited keynote paper at the third China-Japan-Korea joint symposium on optimization of structural and mechanical systems (2004). http://strategic.mit.edu/docs/3_46_CJK-OSM3-Keynote.pdf. Accessed 6 Oct 2022
T. Drezner, Locating a single new facility among existing, unequally attractive facilities. J Reg Sci 34, 237–252 (1994)
U. Eckhardt, Kürzeste Wege und optimale Standorte – von Industriestandorten, Bomben und Seifenblasen. Presentation on April 11, 2008 in the series "Spektrum der Wissenschaftsgeschichte III" at the University of Hamburg (2008). https://www4.math.unihamburg.de/home/eckhardt/Standort.pdf. Accessed 6 Oct 2022
H.A. Eiselt, Location modeling in practice. Am J Math Manag Sci 12(1), 3–18 (1992)
H.A. Eiselt, G. Laporte, Trading areas of facilities with different sizes. Recherche Operationnelle/Oper Res 22(1), 33–44 (1988)
H.A. Eiselt, V. Marianov, Chapter 1: Pioneering developments in location analysis, in Foundations of location analysis, ed. by H.A. Eiselt, V. Marianov (Springer, New York, 2011), pp. 3–22
H.A. Eiselt, V. Marianov, Foundations of location analysis (Springer, New York, 2011a)
H.A. Eiselt, V. Marianov, Location modeling for municipal solid waste facilities. Comput Oper Res 62, 305–315 (2015)
H.A. Eiselt, C.-L. Sandblom, Decision analysis, location models, and scheduling problems (Springer, Berlin-Heidelberg-New York, 2004)
H.A. Eiselt, V. Marianov, J. Bhadury, Location analysis in practice, in Applications of location analysis, ed. by H.A. Eiselt, V. Marianov (Springer, Cham, Heidelberg, 2015), pp. 1–24
M. Esmaelian, M. Tavana, F.J. Santos Arteaga, S. Mohammadi, A multicriteria spatial decision support system for solving emergency service station location problems. Int J Geogr Inf Sci 29(7), 1187–1213 (2015)
ESRI, History of GIS (undated). https://www.esri.com/en-us/what-is-gis/history-of-gis. Accessed 15 Sept 2022
L. Gigović, D. Pamučar, D. Božanić, S. Ljubojević, Application of the GIS-DANP-MABAC multi-criteria model for selecting the location of wind farms: a case study of Vojvodina, Serbia. Renew Energy 103, 501–521 (2017)
G. Giorgi, T.H. Kjeldsen, Traces and emergence of nonlinear programming (Birkhäuser Verlag, Basel, 2014)
R.E. Gomory, Outline of an algorithm for integer solutions to linear programs. Bull Am Math Soc 64, 275–278 (1958)
R.E. Gomory, An algorithm for integer solutions to linear programs, in Recent advances in mathematical programming, ed. by R.L. Graves, P. Wolfe (McGraw-Hill, New York, 1963), pp. 269–302


M. Gul, A.F. Guneri, Hospital location selection: a systematic literature review on methodologies and applications. Math Probl Eng (2021). https://www.hindawi.com/journals/mpe/2021/6682958/. Accessed 6 Oct 2022
S.L. Hakimi, Optimum locations of switching centers and the absolute centers and medians of a graph. Oper Res 12(3), 450–459 (1964)
S.L. Hakimi, Optimum distribution of switching centers in a communication network and some related graph theoretic problems. Oper Res 13(3), 462–475 (1965)
S.L. Hakimi, On locating new facilities in a competitive environment. Eur J Oper Res 12(1), 29–35 (1983)
L. He, Z. Xie, Optimization of urban shelter locations using bi-level multi-objective location-allocation model. Int J Environ Res Public Health 19(7), 4401 (2022)
F. Heukelom, Kahneman and Tversky and the origin of behavioral economics. Tinbergen Institute discussion paper, no. 07-003/1 (2007). https://papers.ssrn.com/sol3/papers.cfm?abstract_id=956887. Accessed 15 Sept 2022
M.J. Hodgson, Toward more realistic allocation in location-allocation models: an interaction approach. Environ Plan A: Econ Space 10(11), 1273–1285 (1978)
M.J. Hodgson, A location–allocation model maximizing consumers' welfare. Reg Stud 15(6), 493–506 (1981)
M.J. Hodgson, An hierarchical location-allocation model for primary health care delivery in a developing area. Soc Sci Med 26(1), 153–161 (1988)
D.L. Huff, Defining and estimating a trade area. J Mark 28, 34–38 (1964)
J.P. Ignizio, T.M. Cavalier, Linear programming (Prentice-Hall, Englewood Cliffs, NJ, 1994)
W. Isard, Location and space-economy: a general theory relating to industrial location, market areas, land use, trade and urban structure (MIT Press, Cambridge, MA, 1956)
J.S. Jeong, A. Ramírez-Gómez, A multicriteria GIS-based assessment to optimize biomass facility sites with parallel environment – a case study in Spain. Energies 10, 2095 (2017)
C. Kahraman, S. Cebi, S.C. Onar, B. Oztaysi, A novel trapezoidal intuitionistic fuzzy information axiom approach: an application to multicriteria landfill site selection. Eng Appl Artif Intell 67, 157–172 (2018)
I. Khan, L. Pintelon, H. Martin, The application of multicriteria decision analysis methods in health care: a literature review. Med Decis Mak 42(2), 262–274 (2022)
H.W. Kuhn, A.W. Tucker, Nonlinear programming, in Proceedings of the second Berkeley symposium on mathematical statistics and probability, ed. by J. Neyman (University of California Press, Berkeley, 1951), pp. 481–492. https://projecteuclid.org/proceedings/berkeley-symposium-on-mathematical-statistics-and-probability/proceedings-of-the-second-berkeley-symposium-on-mathematical-statistics-and-probability/toc/bsmsp/1200500213. Accessed 6 Oct 2022
G. Laporte, S. Nickel, F. Saldanha da Gama (eds.), Location science, 2nd edn. (Springer, Cham, Switzerland, 2019)
S.M. Lee, Goal programming for decision analysis (Auerbach, Philadelphia, PA, 1972)
A. Lösch, The economics of location (Yale University Press, New Haven, CT, 1954). Translated from the original German: A. Lösch, Die räumliche Ordnung der Wirtschaft: Eine Untersuchung über Standort, Wirtschaftsgebiete und internationalen Handel (Gustav Fischer Verlag, Jena, 1940)
V. Marianov, F. Fresard, A procedure for the strategic planning of locations, capacities and districting of jails: application to Chile. J Oper Res Soc 56(3), 244–251 (2005)
C. Mejia-Argueta, J. Gaytán, R. Caballero, J. Molina, D. Vitoriano, Multicriteria optimization approach to deploy humanitarian logistic operations integrally during floods. Int Trans Oper Res 25, 1053–1079 (2018)
M.T. Melo, S. Nickel, F. Saldanha-da-Gama, Facility location and supply chain management – a review. Eur J Oper Res 196, 401–412 (2009)
A.T. Murray, J. Xu, Z. Wang, R.L. Church, Commercial GIS location analytics: capabilities and performance. Int J Geogr Inf Sci 33(5), 1106–1130 (2019)


A. Mussa, K.V. Suryabhagava, Solid waste dumping site selection using GIS-based multi-criteria spatial modeling: a case study in Logia town, Afar region, Ethiopia. Geol Ecol Landsc 5(3), 186–198 (2021)
N. Nazemi, S.N. Parragh, W.J. Gutjahr, Bi-objective risk-averse facility location using a subset-based representation of the conditional value-at-risk, in Proceedings ICORES (2022), pp. 77–85. https://arxiv.org/pdf/2007.07767.pdf. Accessed 6 Oct 2022
L.C. Nguyen, T. Notteboom, A multi-criteria approach to dry port location in developing economies with application to Vietnam. Asian J Shipp Logist 32(1), 23–32 (2016)
D.S. Pamucar, S.P. Tarle, T. Parezanovic, New hybrid multi-criteria decision-making DEMATEL-MAIRCA model: sustainable selection of a location for the development of multimodal logistics centre. Econ Res-Ekon Istraž 31(1), 1641–1665 (2018)
J. Pérez, S. Maldonado, V. Marianov, A reconfiguration of fire station and fleet locations for the Santiago Fire Department. Int J Prod Res 54(11), 3170–3186 (2016)
G. Popovic, D. Stanujkic, M. Brzakovic, D. Karabasevic, A multiple-criteria decision-making model for the selection of a hotel location. Land Use Policy 84, 49–58 (2019)
N. Razi, M. Karatas, A multi-objective model for locating search and rescue boats. Eur J Oper Res 254, 279–293 (2016)
W.J. Reilly, The law of retail gravitation (Knickerbocker Press, New York, NY, 1931)
M. Rezaei, K.R. Khalilpour, M. Jahangiri, Multi-criteria location identification for wind/solar based hydrogen generation: the case of capital cities of a developing country. Int J Hydrog Energy 45, 33151–33168 (2020)
N. Rezaeiniya, S.H. Zolfani, E.K. Zavadskas, Greenhouse locating based on ANP-COPRAS-G methods—an empirical study based on Iran. Int J Strateg Prop Manag 16, 188–200 (2012)
C.A. Rojas-Trejos, J. González-Velasco, Chapter 4: A multicriteria location model for a solid waste disposal center in Valle del Cauca, Colombia, in Supply Chain Management and Logistics in Latin America: A Multi-country Perspective, ed. by Y. HTY, J.C. Velázquez Martínez, C.M. Argueta (Emerald Publishing, Bingley, UK, 2019), pp. 37–53
B. Roy, Classement et choix en présence de points de vue multiples: La méthode ELECTRE. Revue Française d'Informatique et de Recherche Opérationnelle 8, 57–75 (1968)
M.C. Ruiz, E. Romero, M.A. Pérez, I. Fernández, Development and application of a multi-criteria spatial decision support system for planning sustainable industrial areas in Northern Spain. Autom Constr 22, 320–333 (2012)
M.J. Schniederjans, Goal programming: methodology and applications (Kluwer Academic, Norwell, 1995)
A. Suder, C. Kahraman, Minimizing environmental risks using fuzzy TOPSIS: location selection for the ITU Faculty of Management. Hum Ecol Risk Assess Int J 21(5), 1326–1340 (2015)
M. Tahri, M. Hakdaoui, M. Maanan, The evaluation of solar farm locations applying geographic information system and multi-criteria decision-making methods: case study in southern Morocco. Renew Sust Energ Rev 51, 1354–1362 (2015)
C. Toregas, R. Swain, C. ReVelle, L. Bergman, The location of emergency service facilities. Oper Res 19, 1363–1373 (1971)
N.M.M. Torre, V.A.P. Salomon, E. Loche, S.A. Gazale, V.M. Palerm, Warehouse location for product distribution by e-commerce in Brazil: comparing symmetrical MCDM applications. Symmetry 14(10) (2022). https://www.mdpi.com/2073-8994/14/10/1987/htm. Accessed 5 Oct 2022
D.C. Turkoglu, M.E. Genevois, A comparative survey of service facility location problems. Ann Oper Res 292, 399–468 (2020)
A. Tversky, D. Kahneman, Judgment under uncertainty: heuristics and biases. Science 185(4157), 1124–1131 (1973)
J. von Neumann, O. Morgenstern, Theory of games and economic behavior (Princeton University Press, Princeton, 1944)
J.H. von Thünen, von Thünen's isolated state, 1st German edn. 1826 (Pergamon Press, Oxford, 1966). Translated by C.M. Wartenberg


A. Weber, Über den Standort der Industrien. 1. Teil: Reine Theorie des Standorts (Mohr Siebeck Verlag, Tübingen, Germany, 1909)
E. Weiszfeld, Sur le point pour lequel la somme des distances de n points donnés est minimum. Tôhoku Mathematical Journal (first series) 43, 355–386 (1937)
E. Weiszfeld, F. Plastria, On the point for which the sum of the distances to n given points is minimum. Ann Oper Res 167, 7–41 (2009)
Y. Wu, T. Zhang, C. Xu, B. Zhang, L. Li, Y. Ke, Y. Yan, R. Xu, Optimal location selection for offshore wind-PV-seawater pumped storage power plant using a hybrid MCDM approach: a two-stage framework. Energy Convers Manag 199, 112066 (2019)
J. Xu, X. Yin, D. Chen, J. An, G. Nie, Multi-criteria location model of earthquake evacuation shelters to aid in urban planning. Int J Disaster Risk Reduct 20, 51–62 (2016)
M. Yazdani, P. Chatterjee, D. Pamucar, S. Chakraborty, Development of an integrated decision making model for location selection of logistics centers in the Spanish autonomous communities. Expert Syst Appl 148, 113208 (2020)
H. Yu, W.D. Solvang, A multi-objective location-allocation optimization for sustainable management of municipal solid waste. Environ Syst Decis 37, 289–308 (2017)
M. Zeleny, Linear multiobjective programming. Lecture notes in economics and mathematical systems, vol. 95 (Springer, Berlin-Heidelberg-New York, 1974)
S.H. Zolfani, M. Yazdani, A.E. Torkayesh, A. Derakhati, Application of a gray-based decision support framework for location selection of a temporary hospital during COVID-19 pandemic. Symmetry 12(6), 886 (2020). https://www.mdpi.com/2073-8994/12/6/886. Accessed 6 Oct 2022

Part I

Tools and Techniques of Multicriteria Location Analysis

Chapter 2: Single-Objective Optimization

Optimization is a framework that encompasses the modeling and solution of complex decision situations. In essence, it can be thought of as a toolkit that allows users to choose the best solution among a set of candidates. It is important to realize that the optimal solution is generally not the best solution under any condition and in any context: it is merely the best solution among all of those possible, given some measure (a value statement) that associates a value with each course of action. One of the best explanations of the difference between "absolute best" and "best possible" was provided by Haywood (1954), who demonstrated in the military context that even though a commander's decision resulted in a disastrous defeat, it was an optimal decision: the other choices were even worse.

All optimization problems contain two components, viz., constraints and objective functions. Simply speaking, constraints are mathematical expressions of what a decision maker can do (or is permitted to do), while objective functions express what a decision maker would like to do. Examples of objectives include the maximization of revenue, profit, or return on investment (or, in general, any utility) or the minimization of costs, distances, or other disutilities. While it is possible to have more than a single objective, even with a single decision maker, most of the discussion in this chapter assumes a single objective. The main reason is that with more than one objective, the concept of optimality, a central property in optimization, ceases to apply. More about decision making in the presence of multiple objectives can be found in Chap. 3. Turning to the constraints, examples include limits on the amounts a pension fund can invest in individual stocks, limits on the distance between a fire station and any of the buildings it is designed to protect, budget constraints for firms or households, etc. Semi-formally, we can write

© Springer Nature Switzerland AG 2023 H. A. Eiselt et al., Multicriteria Location Analysis, International Series in Operations Research & Management Science 338, https://doi.org/10.1007/978-3-031-23876-5_2


P: Max objective function
   s.t. all constraints,

where s.t. stands for "subject to."

Most optimization models consider constraints as absolute, i.e., they will not consider solutions that violate any one of the constraints, no matter by how small an amount. Clearly, this does not necessarily reflect practical situations. For example, while it may be desirable to stay within a given budget, it may be possible to violate this constraint, albeit at a cost, viz., the cost of borrowing (the interest on a loan). Approaches such as goal programming (see Chaps. 3 and 7) or models with penalty costs do take this softness of the constraints into consideration. Solutions that satisfy all given constraints are said to belong to the feasible set.

Besides the structure of a model, which expresses the interdependencies of its elements, each model includes two types of numbers: those that are within the control of the decision maker, and those that are not. The numbers that are within a decision maker's control but whose values are not yet known (it is the model's task to determine them) are referred to as variables. On the other hand, the numbers of the model that are beyond the decision maker's control and whose values are known are referred to as parameters. It may well be that the values of some parameters are not known with certainty; in that case, the parameters are assumed to follow known probability distributions.

Figure 2.1 shows a hierarchy of optimization problems, based on some of the standard assumptions. At each node in the figure, the right branch leads to a more difficult type of problem than the left branch. The simplest problem, at the bottom left of the figure, is Linear Programming (LP) with its assumptions of linearity of all functions, variables that can take any value, possibly within a range (divisibility), and the deterministic property, i.e., the assumption that all parameters of the problem are known with certainty.
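To make the semi-formal problem P and the notion of a feasible set concrete, here is a small sketch (all numbers invented for illustration): a two-variable linear program solved by enumerating the vertices of its feasible set, which is where an optimal solution of a linear program must lie.

```python
from itertools import combinations

# Toy LP (illustrative numbers): max 3x + 2y
# s.t. x + y <= 4, x <= 2, x >= 0, y >= 0.
# Each constraint is stored as (a, b, c), meaning a*x + b*y <= c.
constraints = [(1, 1, 4), (1, 0, 2), (-1, 0, 0), (0, -1, 0)]

def objective(x, y):
    return 3 * x + 2 * y

def feasible(x, y, eps=1e-9):
    return all(a * x + b * y <= c + eps for a, b, c in constraints)

# An optimal LP solution lies at a vertex: the intersection of two
# constraint boundaries. Enumerate all pairs and keep feasible points.
vertices = []
for (a1, b1, c1), (a2, b2, c2) in combinations(constraints, 2):
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:        # parallel boundaries, no unique intersection
        continue
    x = (c1 * b2 - c2 * b1) / det
    y = (a1 * c2 - a2 * c1) / det
    if feasible(x, y):
        vertices.append((x, y))

best = max(vertices, key=lambda v: objective(*v))
print(best, objective(*best))   # optimum at x = 2, y = 2, value 10
```

Every point kept by `feasible` satisfies all constraints simultaneously, i.e., lies in the feasible set; the objective is then simply evaluated over the finitely many candidate vertices.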
To its right we have Integer Linear Programming (ILP), where at least some of the variables no longer need to satisfy the assumption of divisibility but are required to be integers. This is the case when, for instance, we obviously cannot build 2.8 houses or assign 5.2 drivers to a specific trucking route. Even though the set of feasible solutions of an integer programming problem is a proper subset of the set of feasible solutions of the corresponding linear programming problem with the same constraints, the integer optimization problem (at least if it does not possess any special structure) will typically be significantly more difficult to solve than the linear programming problem. The reason is that the integrality requirement destroys one of the key properties of linear programming, viz., convexity, which, loosely speaking, guarantees that each feasible point can be reached from any other feasible point while staying within the feasible set. This is important, as most standard optimization techniques are myopic in that they can only detect what is happening in the direct vicinity of the solution they have determined. Simply speaking, if we are standing on a slope and are interested in reaching the top of a mountain (i.e., a maximization problem with


[Fig. 2.1 Hierarchy of optimization problems: optimization branches into deterministic and probabilistic optimization; deterministic optimization branches into linear optimization (LP, ILP) and nonlinear optimization (NLP, INLP)]

the criterion "altitude"), we consider single steps in all possible directions, and then choose to move in the direction in which the altitude increases. Convexity is the feature that allows us to apply this approach and reach the top. The lack of convexity makes matters much more difficult. Given that most optimization procedures are myopic (similar to hiking in dense fog), they can only see or assess their immediate vicinity. Thus, it is not clear to an optimizer whether a trail that leads down at this point will later climb again and lead to the summit. Hence, if in the direct vicinity of our present location the terrain slopes down in all directions, we may conclude that we are at the top of the hill. This is correct if the problem is convex (for a maximization, a concave objective), but it may not be correct otherwise: in nonconvex cases, we may have reached only a local optimum (the highest point in a small neighborhood) instead of a global optimum (the highest point anywhere in the predefined area). Only so-called global optimization techniques can find solutions that are better than the current one, even though this typically requires moving through other, worse solutions first.

When they hear about integer variables, most people think of items such as cans of food, bags of flour, etc., which are only sold in integer quantities. While this is true, such "natural integralities" are not the most useful aspect of modeling in integers. Take, for instance, the case of a decision maker who faces the task of


[Fig. 2.2 Biomass of a tree over time]

purchasing or leasing one of a number of available machines that are substitutes for each other. Choosing one machine means that the machine-specific constraints, such as capacity constraints, will have to hold, while the constraints related to all other machines do not apply. The modeling of this situation requires so-called conditional constraints, in the sense that "if this happens (i.e., we purchase/lease this machine), then that set of constraints has to hold" (only the constraints related to the purchased/leased machine). Constraints such as these, along with other types of constraints, such as "choose one out of a given number of possibilities," either-or constraints, or others, cannot be formulated as linear programs, but require the use of logical variables, i.e., variables that can assume a value of one ("true," "yes," or "purchase") or zero ("false," "no," or "do not purchase"). A number of formulations in zero-one variables are shown in classical references such as Garfinkel and Nemhauser (1972) or Taha (1975), or in more recent references such as Eiselt and Sandblom (2000), Plastria (2002), Bertsimas and Weismantel (2005), Nemhauser and Wolsey (2014), or Eiselt and Sandblom (2022).

Continuing at the bottom of Fig. 2.1, the next node includes nonlinear programming (NLP). Nonlinearities occur in many processes. A good example is provided by sigmoidal growth functions, in which, say, trees grow very slowly at first, then go through a period of rapid growth, after which the growth rate declines and growth eventually terminates altogether. This is shown in Fig. 2.2, in which the abscissa represents time, while the ordinate indicates the quantity of biomass of a tree. Another area in which nonlinear functions occur is when variables are joined in some sort of multiplicative fashion. Take, for instance, the case in which the price of a good is a function of the quantity that is offered, and price as well as quantity are both under the control of the decision maker.
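The conditional "if we purchase this machine, then its constraints must hold" logic described above is commonly modeled with a zero-one variable and a big-M term. A minimal sketch (machine data, prices, and the unit profit are invented for illustration), using brute-force enumeration in place of a solver:

```python
# Conditional constraints via big-M: "if machine i is chosen (y_i = 1),
# then its capacity limit applies." All data below are invented.
machines = {"A": {"capacity": 40, "cost": 100},
            "B": {"capacity": 70, "cost": 180}}
M = 10_000  # any bound safely larger than feasible production

def satisfies_big_m(q, choice):
    """Check q <= cap_i + M*(1 - y_i) for every machine i, where y_i = 1
    only for the chosen machine: only that machine's capacity binds,
    while the constraints of all other machines are deactivated."""
    return all(q <= m["capacity"] + M * (1 - (name == choice))
               for name, m in machines.items())

# Pick the machine and integer production level q maximizing
# profit = 5*q - fixed cost, by enumerating all (machine, q) pairs.
best = max((5 * q - machines[c]["cost"], c, q)
           for c in machines for q in range(0, 201)
           if satisfies_big_m(q, c))
print(best)   # (170, 'B', 70): choose machine B, produce at its capacity
```

In an actual mixed-integer program the same effect is achieved by adding the constraints q ≤ capᵢ + M(1 − yᵢ) and Σ yᵢ = 1 with binary yᵢ; the enumeration above merely makes the logic visible on a toy instance.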
Since revenue equals price times quantity, the two variables are multiplied, and thus revenue becomes a nonlinear function of price and quantity. Some examples of nonlinear functions in the spatial


context are shown in Appendix A.3. It is also possible to linearize nonlinear functions, which requires the use of integer variables, thus transforming a nonlinear optimization problem into an integer programming problem. An example is shown in Appendix A.4. Which of the two formulations is easier to solve depends on the specific problem. In addition, as the size of the problem increases, so does the degree of difficulty in solving it.

The last node in the tree on the lowest level includes integer nonlinear optimization problems (INLP). They combine the possibilities and features, but also the difficulties, of integer and nonlinear optimization. Problems of this nature are almost always very difficult to solve. Depending on their size and structure, some may actually be intractable.

This leads us directly to another important issue. So far, we have always assumed that we will use an exact method, meaning a solution technique that, after a number of steps, will (provably) find an optimal, i.e., the best possible, solution. The obvious advantage of such a method is that it finds the best possible solution, whereas the disadvantage is that, especially for difficult problems, it may take a very long time to do so. Depending on the problem, there may not be sufficient time before the solution is required in the decision-making process. An alternative is offered by heuristic methods, or simply heuristics. They are essentially approximation techniques, which attempt to find (hopefully) good solutions quickly. The two extremes are exemplified by location problems, typically strategic multimillion-dollar problems that require a long time to plan and thus allow ample time to optimize, and by the routes of automated guided vehicles, which need to be determined in real time. Among the advantages of heuristics is the fast access to a solution, while the downside is that the solution that has been found may be of poor quality.
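The exact-versus-heuristic trade-off can be made concrete on a tiny location instance (all distances invented): exhaustive enumeration provably finds the best pair of sites, while a greedy heuristic runs much faster on large instances but, as in this deliberately constructed example, can be trapped by a myopic first choice.

```python
from itertools import combinations

# Tiny p-median-style instance (all numbers invented): open p = 2 of
# 3 candidate sites; each customer is served by its nearest open site.
# cost[i][j] = distance from customer i to candidate site j.
cost = [[3, 0, 10],
        [3, 0, 10],
        [3, 10, 0],
        [3, 10, 0]]
p = 2
sites = range(3)

def total_cost(open_sites):
    return sum(min(row[j] for j in open_sites) for row in cost)

# Exact method: enumerate every subset of size p (exponential in
# general, but guaranteed to find the optimum).
exact = min(combinations(sites, p), key=total_cost)

# Greedy heuristic: repeatedly open the site that helps most right now.
chosen = []
while len(chosen) < p:
    chosen.append(min((s for s in sites if s not in chosen),
                      key=lambda s: total_cost(chosen + [s])))
greedy = tuple(sorted(chosen))

print(exact, total_cost(exact))    # (1, 2) 0: sites 1 and 2 serve everyone at cost 0
print(greedy, total_cost(greedy))  # (0, 1) 6: the myopic first pick of site 0 is never undone
```

Site 0 looks best when only one site may be opened, so the greedy method commits to it; the exact enumeration reveals that the pair {1, 2} serves both customer clusters at zero cost.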
The specific application will have to determine whether the time is available to find high-quality solutions. A hybrid of exact and heuristic methods are the so-called matheuristics, approximation methods that include exact methods as subalgorithms. A good pertinent reference is Michalewicz and Fogel (2004).

Clearly, most of the world is stochastic, and optimization models should reflect that. This is done by solving probabilistic optimization problems (the top branch to the right in Fig. 2.1). These can take different forms:

1. Chance-constrained programming, which includes constraints in the problem to guarantee that the solution will stay within certain limits with a preset probability, given an analytic expression of the probability distribution of the probabilistic parameters (see, e.g., Charnes and Cooper 1959).
2. Stochastic programming, which makes use of the knowledge of the probability distribution of the parameters to generate as many possible scenarios of the future as possible and to use them in the optimization problem (see, e.g., Birge and Louveaux 2011).
3. Robust optimization, which refers to a formulation of an optimization problem whose parameters are defined within given ranges, as a proxy for their unknown
probabilistic distributions. The search then is for solutions that provide reasonably good values of the objective as different values of the parameters apply. Pertinent references are Ben-Tal et al. (2009) and Gorissen et al. (2015). In other words, robust optimization is used to find stable solutions. The form chosen depends on each case, particularly on the knowledge that is available about the parameters and their distributions. In general, the probabilistic nature of a problem significantly increases its difficulty. Hence, in order to keep the computational burden within a workable limit, most models assume that the parameters are deterministic. The resulting gap between the real problem and the computational model may then, at least partially, be bridged by sensitivity analyses. As a matter of fact, some authors actually refer to them as "what-if modeling": what happens to our solution and profit if we change the price of a product, or if the demand for a product is different from what we assumed? Broadly speaking, sensitivity analyses examine what happens to the solution of a model when some of the parameters change. Two issues are relevant in this context: one concerns the changes of the actual solution based on the modification of a parameter, e.g., the output of a firm, the amount of a specific food in a diet, or the funds allocated to an activity, while the other concerns the changes in the value of the objective function, e.g., the profit, the cost, or whatever else the objective may measure.
Most automated sensitivity analyses found in the report of the optimal solution provided by the solver will indicate what effect a demand change has on the objective function, but they will not indicate in detail how the actual solution will have to be rearranged to achieve that objective. The reason is simple: the change of, say, a price may result in hundreds of adjustments in a production process, while the effect on the objective value is simple to measure: for instance, each increase of the price of an input by $1 may result in a decrease of the profit by $4 due to the decrease of the demand for the product. In its most basic version, a sensitivity analysis can be performed by simply solving a given model multiple times with different values of an uncertain parameter. Sometimes, however, e.g., in linear programming and in differentiable nonlinear programming, it may not be necessary to repeatedly solve the problem, as the information is included in, or can be calculated from, the optimal solution. Specific details are provided in Appendices A.1 and A.2.

Finally, we turn to the issue of stability, which, in the context of optimization problems, refers to two issues. The first arises in the context of sensitivity analyses, where the stability of a solution (sometimes also referred to as the robustness of a solution) is a measure of the volatility a solution exhibits if the problem parameters change. A stable solution is one on which parameter changes over a reasonably wide range have very little effect, both on the solution itself and on its value of the objective function. On the other hand, a non-stable solution will react significantly to even small changes of the input parameters (or assumptions). This stability is inherent in the problem, and it cannot be changed. Information concerning the lack
of stability may serve as a note of caution to a decision maker that specific care must be taken to estimate the input parameters, and the solution should be taken with a grain of salt. On the other hand, a stable solution will provide the decision maker with much confidence in the results of the model. Typical and well-known examples of non-stable solutions are many competitive location models (see, e.g., Bhadury and Eiselt 1995), while some of the standard inventory problems are known to be very stable and do not change much for a wide range of parameters. As an example, Eiselt et al. (1998), in their work concerning the cleanup of the Halifax harbor, find a solution that happens to be optimal or near optimal (within 5% of optimality) for a very wide range of the uncertain input parameters interest rate and rate of inflation. Pertinent discussions of the subject are Aldrich (1989) and Woodward (2006). We will finish this chapter by listing some classical references regarding operations research and optimization. For linear programming there is the book by Bertsimas (1997), for nonlinear programming there are Luenberger and Ye (2008) and Eiselt and Sandblom (2019), and for integer programming there are Schrijver (1998) and Papadimitriou and Steiglitz (1998). Finally, on general model building, decision making, and operations research, there are the works by Williams (1999), Winston (2004), Murty (2010), and Hillier et al. (2017).

Appendix

A.1 Sensitivity Analysis

This section will briefly introduce sensitivity analyses and their displays. In order to simplify matters, we will put the analysis in the context of a simple diet problem. The objective is to minimize the cost of a diet, which may include up to five foodstuffs, whose quantities (in terms of numbers of servings) are denoted by x1, x2, . . ., x5 and whose prices per serving are $1.50, $2.80, etc. The constraints are a lower and an upper bound on calories of 1800 and 2200, respectively, a lower bound on fat of 80, an upper bound on cholesterol of 300, and an upper bound on sodium of 2300. Furthermore, each serving of the first food contains 200 calories, 3 units of fat, 7 units of cholesterol, and 150 units of sodium. These are the coefficients of x1 in the formulation below. The nutritional contents of the other foods are also attached to their respective variables. The problem can be formulated in LINGO-style (Lindo Systems Inc 2022) as follows.

Min = 1.5*x1 + 2.8*x2 + 1.1*x3 + 1.9*x4 + 1.2*x5;
200*x1 + 100*x2 + 50*x3 + 320*x4 + 70*x5 >= 1800; !calories;
200*x1 + 100*x2 + 50*x3 + 320*x4 + 70*x5 <= 2200; !calories;
3*x1 + 22*x3 + 13*x4 + 5*x5 >= 80; !fat;
7*x3 + 110*x4 + 20*x5 <= 300; !cholesterol;
150*x1 + 390*x2 + 880*x3 + 112*x4 + 80*x5 <= 2300; !sodium;
end

The optimal solution is then provided in Table 2.1.

Table 2.1 Solution of the diet problem

Decision variable | Optimal value | Reduced cost
x1 | 4.417350 | 0.000000
x2 | 0.000000 | 2.104767
x3 | 1.478005 | 0.000000
x4 | 2.633218 | 0.000000
x5 | 0.000000 | 0.6761239
z  | 13.25494 | –

Typically, there is additional information. Each constraint is either a "≤", "≥", or "=" relation. In case of a "≤" relation, a slack variable is automatically added to the left-hand side to make it equal to the right-hand side, while in case of "≥" inequalities, an excess variable (also referred to as a surplus variable) is subtracted from the left-hand side to transform the inequality into an equation. The slack variable measures the amount by which the left-hand side, typically the actual use of resources, falls short of the benchmark on the right-hand side. In the above case, a slack variable indicates the actual quantity of nutrients the diet includes in relation to the recommended value or bound. Similarly, the excess variable indicates the use of resources beyond the prescribed limit. The reduced costs in Table 2.1 indicate by how much the objective function coefficients of the variables will have to change so that the activity is undertaken (in our case: the food will be included in the diet). As an example, Food 2 is presently not included in the diet, as its present price of $2.80 is $2.10 too high to be included. In other words, once the price of Food 2 drops below 2.80 − 2.10 = 70¢, we will include it in our diet.

Table 2.2 summarizes the values of the slack and excess variables.

Table 2.2 Optimal values of slack and excess variables

Slack/excess variable | Optimal value | Dual price
E1 | 0.000000 | −0.006952327
S2 | 400.0000 | 0.000000
E3 | 0.000000 | −0.03651156
S4 | 0.000000 | 0.007267225
S5 | 41.83238 | 0.000000

For instance, it indicates that in the fifth constraint, which relates to sodium, the optimal diet suggested in Table 2.1 has a slack, i.e., falls short of the prescribed limit of 2300 units, by 41.83 units. In other words, the optimal diet has 2300 − 41.83 = 2258.17 units of sodium. This information also allows us to identify bottlenecks in an optimization problem.
A bottleneck is defined as a constraint that is satisfied as an equation at the optimal solution, where, consequently, its slack or excess variable equals zero. In practical terms, this means that this resource or demand is completely used up, so that each change of the availability of that resource will have an immediate consequence for the solution of the problem. In our example, the requirements on fat and cholesterol are such bottlenecks.

Consider now the dual prices that are also indicated in the report. For instance, the third constraint (which, as we recall, relates to fat) has a dual price of −0.0365, i.e., −3.65¢. This means that if the requirement regarding fat, which is presently 80, were to be increased by 1 unit, the cost of the diet would increase by 3.65¢. As indicated above, this information specifies the effect of such a change on the objective function, but it does not show how the solution will have to change to produce that effect. In order to find out how the solution, i.e., the individual foods consumed in the diet, will change, we will have to re-solve the problem with the new parameters.

The sensitivity analyses provided by most optimization solvers will investigate the changes of the value of the objective function, given changes of the parameters in the objective function and of the right-hand side values. The information shown in Tables 2.1 and 2.2 shows what happens if objective function coefficients or right-hand side values change by a single unit. Suppose we find out that such a change is beneficial; then we would like to make the change as large as possible to generate as much benefit as possible. There are, of course, limits. For instance, when we determined above that 4.42 units of the first food are included in the optimal diet, then this will be true only within certain limits: if the price of the first food, presently set at $1.50 per serving, changes too much, these savings will no longer be realized. These bounds or ranges are also provided in what we may call the extended solution.

Table 2.3 Ranges of objective function coefficients within which the solution does not change

Price | Lower bound | Upper bound
p1 | 0.9813772 | 5.621151
p2 | 0.695233 | ∞
p3 | 0.3431818 | 6.799187
p4 | −149.0714 | 2.6797647
p5 | 0.5238761 | ∞
For instance, Table 2.3 indicates that the present optimal solution will remain optimal as long as the per-serving price of the first product remains between 98¢ and $5.62. Note, however, that while the solution will not change, the value of the objective function, i.e., the cost of the diet, will. As another example, as long as the price of the third food is anywhere between 34.32¢ and $6.80, the optimal composition of the diet will not change. Similarly, the second food, which is not included in the optimal diet, will not be included as long as its price exceeds 69.52¢. Changes of the right-hand side values are not as simple to analyze, beyond some obvious information. As discussed above, the optimal diet has 2258.17 units of sodium, so that we can infer that the solution of our problem will not change as long as the upper bound on sodium is anywhere above 2258.17 units. Beyond this basic information, most software packages do not provide the appropriate information to conduct
further sensitivity analyses that are meaningful for decision makers. However, there is a relatively simple way out. Suppose that we are interested in conducting a sensitivity analysis on the fat constraint. We solve the same problem as before, however with the fat benchmark not at 80, but at 81 (or changed by some other "sufficiently small" increment). For the value of 81, we obtain the optimal solution (x1, x2, x3, x4, x5) = (4.410201, 0, 1.526249, 2.630148, 0) with an associated optimal value of the objective function of z = 13.29146. Thus, for each unit increase of the fat benchmark, x1 decreases by 0.007149, x3 increases by 0.048244, and x4 decreases by 0.003070, while x2 and x5 remain at the zero level. The relative sensitivity of x3 is easily explained by the fact that x3 is the least expensive food in the diet (and thus highly desirable), but it is rich in fat. Increasing the fat requirement by one unit allows us to increase the quantity of this (cheap) food. Given that an increase of the lower limit of fat consumption represents a strengthening of the fat constraint, it is no surprise that the cost of the diet increases by 3.652¢. Repeating the procedure for another right-hand side value, say, cholesterol, we increase its value from 300 to 301 and solve the problem again. The optimal solution is (x1, x2, x3, x4, x5) = (4.403338, 0, 1.474409, 2.642538, 0) with a value of the objective function of z = 13.24768. Thus, the consequences of a unit increase of the cholesterol benchmark are as follows: x1 decreases by 0.014012, x3 decreases by 0.003596, and x4 increases by 0.009320, while x2 and x5 remain unchanged at the zero level. The objective value decreases by 0.00726, which is not surprising, as the increase of the benchmark represents a loosening of the cholesterol constraint, so that the cost decreases.
Similarly, the changes of the solution are also to be expected: again, the quantity of the cholesterol-rich food x4 increases.
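The re-solving approach just described can be sketched in a few lines. The following is a minimal sketch, assuming scipy is available; any LP solver would do. It solves the diet LP above, then re-solves it with the fat benchmark raised from 80 to 81 and reports the resulting changes:

```python
# Sketch: the diet LP of Appendix A.1 solved with scipy's linprog (HiGHS),
# then re-solved with the fat bound raised from 80 to 81 to obtain the
# sensitivity information discussed in the text.
from scipy.optimize import linprog

c = [1.5, 2.8, 1.1, 1.9, 1.2]          # prices per serving

def solve_diet(fat_lb=80.0):
    # linprog expects A_ub x <= b_ub, so the ">=" rows are negated
    A_ub = [
        [-200, -100, -50, -320, -70],  # calories >= 1800
        [ 200,  100,  50,  320,  70],  # calories <= 2200
        [  -3,    0, -22,  -13,  -5],  # fat      >= fat_lb
        [   0,    0,   7,  110,  20],  # cholesterol <= 300
        [ 150,  390, 880,  112,  80],  # sodium   <= 2300
    ]
    b_ub = [-1800, 2200, -fat_lb, 300, 2300]
    return linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 5)

base = solve_diet()        # z = 13.25494, x = (4.4174, 0, 1.4780, 2.6332, 0)
bumped = solve_diet(81.0)  # z = 13.29146; cost rises by ~3.65 cents/unit
print(base.x, base.fun)
print(bumped.x, bumped.fun)
print("cost change per unit of fat bound:", bumped.fun - base.fun)
```

Comparing `bumped.x` with `base.x` reproduces the per-variable changes reported above, information that the solver's automated sensitivity report does not provide.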

A.2 Visualizations of Sensitivity Analyses

Performing sensitivity analyses on the parameters of an optimization problem, be it by re-solving the problem or via information gleaned from the software's output, yields a treasure trove of information. However, this information should be prepared for decision makers so as to provide information at a glance and to distinguish between the important and the less important. Tornado diagrams (see, e.g., Howard 1988) are one way to depict the essential results of such analyses. To construct such a diagram, it is necessary to re-solve an optimization problem multiple times, changing one of the parameters at a time. Typically, the parameter is set to its normal (e.g., estimated) value, say 100%, to a reasonable lower limit (e.g., 50%), and finally to a reasonable upper limit (e.g., 200%). For reasons of comparability, analysts often use the same range for all parameters, although this may not always be meaningful: alternatively, we may range each parameter from a reasonable lower bound to a reasonable upper bound. These bounds may very well differ, simply because some parameters exhibit more variability than others. For each value of these parameters, the value of
Table 2.4 Effects of price changes on the objective value z

pj | Base price | Lower limit (50%) | Upper limit (200%) | z-value at low price | z-value at high price | Relative z-value at low price | Relative z-value at high price
p1 | 1.50 | 0.75 | 3.00 | 9.9098 | 19.8810 | 74.76 | 149.99
p2 | 2.80 | 1.40 | 5.60 | 13.2549 | 13.2549 | 100 | 100
p3 | 1.10 | 0.55 | 2.20 | 12.4420 | 14.8808 | 93.87 | 112.27
p4 | 1.90 | 0.95 | 3.80 | 10.7534 | 18.1546 | 81.13 | 136.97
p5 | 1.20 | 0.60 | 2.40 | 13.2549 | 13.2549 | 100 | 100

Fig. 2.3 Tornado diagram for the example of Appendix A.1

the objective function is computed. If desired, we can then set the objective value for all parameters at their respective original, i.e., 100%, values to 100% and indicate the percentage variation. Table 2.4 shows the data for all prices (objective function coefficients) of the diet problem in Appendix A.1. The relative z-values are calculated by putting the costs of the diet with prices at the lower and upper limits, respectively, in relation to the cost of the diet when prices are at their original levels (i.e., z = 13.25494). It is apparent that changes of price p1 have a dramatic effect on the objective value, while changes of the prices p2 and p5 do not change the objective value at all. We now order the parameters from highest to lowest with respect to their effect on the objective value. We can then display them in a so-called tornado diagram by plotting the changes in a horizontal bar chart, in which the abscissa measures the (relative) objective value anywhere between the lowest and highest parameter value. For the figures in Table 2.4, the tornado diagram is shown in Fig. 2.3. From Fig. 2.3 it becomes apparent that changes of the price p1 and, to a somewhat lesser degree, price p4, have a major effect on the objective value, while changes of the other prices have a comparatively negligible effect (or, actually, no effect at all). So far, the analyses have focused on the changes of the objective value as a function of parameter changes. A different sensitivity analysis investigates the changes of the actual solution based on the changes of the parameters. One display of such an analysis is the Lego diagram, named after the well-known building blocks. To illustrate the general principle, consider a discrete location problem in which three facilities are to be located. Here, we will consider the effects of one

Table 2.5 Optimal locations and objective values for a fictitious location problem

Parameter value | Locations at (solution) | Objective value
≤ 50 | 2, 6, 9 | 880
]50, 55] | 3, 6, 10 | 910
]55, 65] | 3, 6, 12 | 930
]65, 80] | 5, 8, 12 | 950
]80, 110] | 4, 8, 12 | 1000
]110, 125] | 5, 8, 11 | 1100
]125, 130] | 6, 8, 11 | 1150
]130, 150] | 1, 6, 11 | 1200
>150 | 2, 6, 11 | 1230

Fig. 2.4 Lego diagram

parameter at a time. For the original estimate of the parameter, the solution (i.e., the locations of the three facilities) and, if required, the corresponding objective value are recorded. The problem is then solved repeatedly, and optimal solutions are determined for ranges of values of this parameter. Table 2.5 shows results for the fictitious location problem. As an example, as long as the value of the parameter in question is anywhere between 80 and 110 (with 100, say, being the original estimate), facilities are located at sites 4, 8, and 12. Once the parameter drops to or below 80, the optimal solution locates facilities at sites 5, 8, and 12. In other words, one facility has changed location, namely from site 4 to site 5. If the same parameter were to change further to a value at or below 65, a second facility changes its site. In comparison with the original locations, two facilities have then changed sites, while only one facility is still at the same location (site 12). The changes shown in Table 2.5 can then be plotted in a two-dimensional diagram, which has the parameter values (actual or percentage) on the abscissa, while the ordinate measures the number of changes as compared to the original solution. Figure 2.4 shows the Lego diagram for the example in Table 2.5. In order to further condense the information (similar to the condensation of a Lorenz curve into a Gini index; see, e.g., Lorenz 1905; Gini 1912, 1921; Ceriani and Verme 2012), we may calculate the area below the function between reasonable lower and upper bounds of the values of the parameter in question. Let this area be denoted by A. Suppose now that the range is of width w and p is the number of facilities to be located; then the maximum area (one in which all facilities change their sites the
moment the parameter changes by an arbitrarily small amount up or down) is wp. A measure of stability is then S = 1 − A/(wp). In our example, the area underneath the curve is A = 135, while wp = 300, so that the stability measure of the solution is S = 0.55, indicating a fairly low degree of stability. Most location problems exhibit much higher degrees of stability. Similar diagrams can also be devised for continuous problems.
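The re-solving loop behind Table 2.4 can be sketched as follows, again assuming scipy as the LP solver. Each price of the diet problem of Appendix A.1 is set in turn to 50% and 200% of its base value, the problem is re-solved, and the (relative) objective values are collected; sorting the rows by their swing then gives the bar order of the tornado diagram:

```python
# Sketch: generating the tornado-diagram data of Table 2.4 by re-solving the
# diet LP of Appendix A.1 with each price scaled to 50% and 200% in turn.
from scipy.optimize import linprog

base_prices = [1.5, 2.8, 1.1, 1.9, 1.2]
A_ub = [[-200, -100, -50, -320, -70],   # calories >= 1800 (negated)
        [ 200,  100,  50,  320,  70],   # calories <= 2200
        [  -3,    0, -22,  -13,  -5],   # fat >= 80 (negated)
        [   0,    0,   7,  110,  20],   # cholesterol <= 300
        [ 150,  390, 880,  112,  80]]   # sodium <= 2300
b_ub = [-1800, 2200, -80, 300, 2300]

def z(prices):
    return linprog(prices, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 5).fun

z_base = z(base_prices)                 # 13.25494
rows = []
for j in range(5):
    for scale in (0.5, 2.0):
        p = base_prices[:]
        p[j] = scale * base_prices[j]
        rows.append((j + 1, scale, z(p), 100 * z(p) / z_base))

for r in rows:
    print("p%d at %3d%%: z = %8.4f (%6.2f%%)" % (r[0], int(r[1] * 100), r[2], r[3]))
```

As in the table, scaling p2 or p5 leaves the objective value unchanged, while p1 produces the largest swing.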

A.3 Modeling Nonlinear Functions

This appendix will describe two applications of nonlinear functions. The first part discusses the occurrence of nonlinear objective functions, while the second deals with nonlinear constraints. First consider possible nonlinear objective functions. Suppose that a firm considers determining its output of a single product, whose quantity is x and whose price is p. The decision maker has determined that there is a relation between the quantity x of the product and its price p. Both the quantity, i.e., the firm's output, and its price are under the control of the decision maker. Suppose for now that the relation between price p and quantity x is linear, e.g., p = 10 − 2x. The revenue is then R = px = 10x − 2x², which is quadratic. Assuming further that the company's output must be between 3 and 4, we obtain the optimization problem

Max z = 10x − 2x²
s.t. x ≥ 3
     x ≤ 4
     x ≥ 0.

The optimal solution is x = 3, the price at that output equals 4, so that the revenue at the optimum is z = 12. The price-quantity relation in Fig. 2.5 is shown in blue, while the revenue function is shown in red. The two black lines with the gray flags indicate the feasible set, and the optimal solution is shown as a red dot. The revenue function is nonnegative within the range x ∈ [0, 5] and it reaches a maximum at x = 2.5, at which point the price is p = 5 and the resulting revenue is 12.5. While this is the overall maximum, the constraints of the problem render it infeasible. Suppose now that the price-quantity relation is nonlinear, in particular p = 10e^(−x), which is shown in Fig. 2.6 as the blue line. The revenue 10xe^(−x) is shown as the red line. With the same revenue maximization and constraints, we obtain the problem

Max z = 10xe^(−x)


Fig. 2.5 Linear price-quantity relation and revenue

Fig. 2.6 Nonlinear price-quantity relation and revenue


Fig. 2.7 Location with forbidden areas

s.t. x ≥ 1.5
     x ≤ 2
     x ≥ 0.

The solution is x = 1.5, so that p = 2.231302 and the revenue is z = 3.346952. As far as nonlinear optimization goes, nonlinearities in the objective function can usually be dealt with more efficiently than nonlinearities in the constraints. Some special cases, e.g., optimization problems with a quadratic objective function and linear constraints, are considered well-solved; see, e.g., Eiselt and Sandblom (2019). Our discussion of nonlinear constraints is based on a hypothetical instance with more or less real data. Suppose that we consider the Potomac River just south of Washington and we do not allow facilities to be located anywhere south of the river. In addition, there may be another sensitive site (such as the Marine Headquarters in Quantico, VA), in whose neighborhood we also cannot locate our facility. In order to design the model, we would first choose points on or alongside the river in order to determine a nonlinear function that reasonably well approximates the course of the Potomac. Typically, there are a variety of nonlinear functions that more or less model reality. In our case, given a system of coordinates (x, y) with some arbitrary origin, we obtain the function y = −40 + 1600/x − 11,000/x² + 22,000/x³, which is shown as the purple line in Fig. 2.7. Requiring us to be located north of the river will have the relation written as a "≥" inequality. In addition, suppose that there is a forbidden area around a place with the coordinates (x, y) = (20, 30). More specifically, our facility cannot be located any closer than ten miles to that point. In order to model this constraint, we first have to decide on a distance function. Here, we will use simple straight-line (Euclidean) distances.


Formally, the distance between a point Ai with coordinates (ai, bi) and a point Xj with coordinates (xj, yj) is measured as d(Ai, Xj) = √((ai − xj)² + (bi − yj)²). More on distances is found in Chap. 4. Given that, we can write this constraint as

√((x − 20)² + (y − 30)²) ≥ 10.

In Fig. 2.7, the feasible set is shown as the gray area above the purple line and outside the teal circle. Suppose now that we have three customers located at the demand points (5, 5), (25, 5), and (15, 15) with demands of 30, 20, and 20, respectively. The objective is then the minimization of the sum of the straight-line (Euclidean) distances between the customer points and the facility location, each weighted by the demand of the customer. This results in the formulation

Min z = 30√((5 − x)² + (5 − y)²) + 20√((25 − x)² + (5 − y)²) + 20√((15 − x)² + (15 − y)²)
s.t. y ≥ −40 + 1600/x − 11,000/x² + 22,000/x³
     √((x − 20)² + (y − 30)²) ≥ 10
     x, y ≥ 0.

The optimal solution is then (x, y) = (21.56229, 12.73877) with an objective value of 856.6110 (the blue dot in the above figure), found with a computation time of 1.14 s. Even though the problem has only two variables, it takes significantly longer to solve than a linear programming problem of the same size. The use of a global solver is strongly recommended.
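The model above can be sketched with scipy's local SLSQP solver; this is an assumption about tooling, and, as the text cautions, a local solver may stop at a local optimum, so the starting point matters. From a feasible start north of the river it reaches the reported solution:

```python
# Sketch: the nonlinear location model with forbidden areas, solved locally
# with scipy.optimize.minimize (SLSQP). A global solver is preferable; here a
# feasible start near the river leads to the solution reported in the text.
import numpy as np
from scipy.optimize import minimize

pts = np.array([[5.0, 5.0], [25.0, 5.0], [15.0, 15.0]])  # demand points
w = np.array([30.0, 20.0, 20.0])                         # demands (weights)

def obj(v):
    x, y = v
    return float(np.sum(w * np.sqrt((pts[:, 0] - x) ** 2 + (pts[:, 1] - y) ** 2)))

def river(v):   # feasibility: y >= -40 + 1600/x - 11000/x**2 + 22000/x**3
    x, y = v
    return y - (-40 + 1600 / x - 11000 / x**2 + 22000 / x**3)

def circle(v):  # feasibility: at least 10 miles from the point (20, 30)
    x, y = v
    return float(np.hypot(x - 20, y - 30) - 10)

res = minimize(obj, x0=[20.0, 16.0], method="SLSQP",
               bounds=[(1, 40), (0, 40)],
               constraints=[{"type": "ineq", "fun": river},
                            {"type": "ineq", "fun": circle}])
print(res.x, res.fun)   # close to (21.56229, 12.73877), z = 856.6110
```

At the optimum the river constraint is binding (the facility sits on the approximated riverbank), while the forbidden circle around (20, 30) is inactive.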

A.4 Piecewise Linearization of Nonlinear Constraints

The material in this appendix will demonstrate how to approximate a nonlinear function by a piecewise linear function. This approach can be used in the objective function and/or in the constraints. It will transform a nonlinear variable into a series of zero-one variables. The exposition here roughly follows the formulation in Eiselt and Sandblom (2000). To illustrate, consider the function


Fig. 2.8 Piecewise linear approximation of a nonlinear function

y = 5x − 12x² + 2x³ + 60.

The graph of the function is shown as the red line in Fig. 2.8, and the linearization as the three teal linear segments in the figure. The red dots in Fig. 2.8 show the limits of the intervals in which the linear approximation of the function is defined. In particular, we have r = 3 intervals I1 = [v1, v′1], I2 = [v2, v′2], and I3 = [v3, v′3], so that the approximation is

y = 29.86x + 53.96, if x ∈ [−1.81, 0.22]
y = −12.67x + 63.33, if x ∈ [0.22, 3.78]
y = 24.30x − 76.38, if x ∈ [3.78, 5.5].

The limits of the intervals are then v1 = −1.81, v2 = 0.22, v3 = 3.78, v′1 = 0.22, v′2 = 3.78, and v′3 = 5.5, and their functional values are f(v1) = 0, f(v2) = 60.54, f(v3) = 15.46, f(v′1) = 60.54, f(v′2) = 15.46, and f(v′3) = 57.25. The integer formulation will then include the following relations:

f(x) = Σk [(f(vk) − f(v′k)) λk + f(v′k) yk]
x = Σk (vk λk + v′k λ′k)
λk + λ′k = yk, k = 1, . . ., r
Σk yk = 1
λk, λ′k ∈ [0, yk]
yk = 0 or 1, k = 1, . . ., r.

In our case, this formulation is

f(x) = −60.54λ1 + 45.08λ2 − 41.79λ3 + 60.54y1 + 15.46y2 + 57.25y3
x = −1.81λ1 + 0.22λ2 + 3.78λ3 + 0.22λ′1 + 3.78λ′2 + 5.5λ′3
λ1 + λ′1 = y1
λ2 + λ′2 = y2
λ3 + λ′3 = y3
y1 + y2 + y3 = 1
λ1 ≤ y1
λ2 ≤ y2
λ3 ≤ y3
λ′1 ≤ y1
λ′2 ≤ y2
λ′3 ≤ y3
λ1, λ2, λ3, λ′1, λ′2, λ′3 ≥ 0
y1, y2, y3 = 0 or 1.

This partial formulation can then be inserted into the full formulation whenever x and f(x) are needed.
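The encoding can be checked without a solver: for any given x in the linearized range there is exactly one active segment (yk = 1), and the weights λk, λ′k are determined by where x lies in that segment. A minimal sketch, using the segment data above:

```python
# Sketch: recovering the (lambda_k, lambda'_k, y_k) encoding of a given x
# from the three segments above, and evaluating f(x) from the formulation.
# (v_k, v'_k, f(v_k), f(v'_k)) for the r = 3 segments:
segments = [(-1.81, 0.22,  0.00, 60.54),
            ( 0.22, 3.78, 60.54, 15.46),
            ( 3.78, 5.50, 15.46, 57.25)]

def encode(x):
    for k, (v, vp, fv, fvp) in enumerate(segments):
        if v <= x <= vp:
            lam_p = (x - v) / (vp - v)   # lambda'_k
            lam = 1.0 - lam_p            # lambda_k; lam + lam_p = y_k = 1
            fx = (fv - fvp) * lam + fvp  # f(x) from the formulation
            return k, lam, lam_p, fx
    raise ValueError("x outside the linearized range")

k, lam, lam_p, fx = encode(2.0)
print(k, lam, lam_p, fx)   # segment 2 (index 1), both weights 0.5
```

At x = 2 the formulation returns the same value as the second linear piece, −12.67·2 + 63.33, up to the rounding of the segment coefficients.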

A.5 Heuristics

In case the decision maker has decided to make do with an approximate solution, the choice is a heuristic algorithm, or heuristic for short. There are two different types of heuristics: so-called "Phase 1 heuristics" or construction heuristics, which are designed to determine a solution to the problem from scratch, and "Phase 2 heuristics" or improvement heuristics, which start with a feasible solution (possibly one that was generated by a construction heuristic) and attempt to improve it as much as possible.


Table 2.6 Steps in the Greedy method for the example

Step # | x | LHS | z | Δz(xj)
0 | 0, 0, 0, 0, 0 | 0 | 0 | 0.2, 8, 1.22, 2, 4
1 | 0, 1, 0, 0, 0 | 3 | 8 | 0.2, 3.31, 1.22, 2, 4
2 | 0, 1, 0, 0, 1 | 7 | 12 | 0.2, 3.31, 1.22, 2, 4
3 | 0, 1, 0, 0, 2 | 11 | 16 | 0.2, 3.31, 1.22, 2, 4
4 | 0, 1, 0, 0, 3 | 15 | 20 | 0.2, 3.31, 1.22, 2, 4
5 | 0, 1, 0, 0, 4 | 19 | 24 | 0.2, 3.31, 1.22, 2, 4
6 | 0, 1, 0, 0, 5 | 23 | 28 | 0.2, 3.31, 1.22, 2, 4

This appendix will illustrate a so-called knapsack problem by means of a somewhat crude heuristic that is designed to illustrate the general principle. To begin, a knapsack problem is an integer programming problem with just a single constraint. The idea behind it is to maximize the "value" of the contents of the knapsack, while the single constraint ensures that the weight of the knapsack is restricted by an upper bound. An obvious simple application is the choice of stocks, so that the expected payoff is maximized and no more money than is available can be invested. Knapsack problems come in many varieties; some restrict the variables to zero or one, while others accept general nonnegative integers as values of the variables. The following example is a nonlinear knapsack problem with general integer values:

Max z = 0.2x1³ + 8√x2 + e^(0.2x3) + 3x1x2 + 2x4² + 4x5
s.t. 4x1 + 3x2 + 2x3 + 5x4 + 4x5 ≤ 25
     xj ∈ ℕ0 ∀ j.

If solved to optimality with a global optimizer, we obtain the optimal solution x = [4, 3, 0, 0, 0] with an objective value of z = 63.65641. In Phase 1, we will use a Greedy algorithm to determine a feasible solution. Starting with xj = 0 ∀ j and its associated objective value z = 0, we determine for each variable how much a unit increase of the value of that variable changes the objective value. This value is referred to as Δz(xj), j = 1, . . ., 5. The variable with the maximal such increase is increased by one, and the process is repeated until the "knapsack is full," i.e., none of the variables can be increased any further without violating feasibility. The individual steps of this process are shown in Table 2.6. At the end, only one unit of x3 still fits into the knapsack, so the final solution found with Greedy is x = [0, 1, 1, 0, 5] with an objective value of 29.72. This is very different from the actual optimal solution. Phase 2 will now use an improvement heuristic that starts with the present solution, obtained by the Greedy method, and, step by step, attempts to improve it. The technique we illustrate here is sometimes referred to as the "Swap" method, as it swaps some units of one variable for some units of another variable. It does so whenever such a swap move results in an improvement of the objective function. The method continues until no swap movements will result in an


Table 2.7 Some steps of the Swap improvement technique

| Increasing variable | Decreasing variable | New solution  | LHS | Objective value | Conclusion                   |
|---------------------|---------------------|---------------|-----|-----------------|------------------------------|
| x1 (+1)             | x5 (-1)             | 1, 1, 1, 0, 4 | 25  | 28.42 (< 29.72) | Reject change                |
| x4 (+3)             | x5 (-4)             | 0, 1, 1, 3, 1 | 24  | 31.22 (> 29.72) | Accept change, new benchmark |
| x1, x2 (+1)         | x4, x5 (-1)         | 1, 3, 1, 2, 0 | 25  | 32.28 (> 31.22) | Accept change, new benchmark |
| x1 (+1)             | x4 (-1)             | 2, 3, 1, 1, 0 | 24  | 36.68 (> 32.28) | Accept change, new benchmark |
| x1, x2 (+1)         | x3, x4 (-1)         | 3, 4, 0, 0, 0 | 24  | 57.4 (> 36.68)  | Accept change, new benchmark |

improvement of the objective function, or some other stopping criterion is satisfied. Clearly, this process can be interrupted at any time. Recall that the last solution obtained by the Greedy method was x = [0, 1, 1, 0, 5], its use of the single resource was (left-hand side) LHS = 25 out of the available 25, and its objective value was z = 29.72. The method now decreases the value of one variable (decreasing resource use and thus freeing up resources) and increases the value of another variable (increasing resource use and the value of the objective function). The choice of the variables to be increased and decreased could be guided by some reasonable criterion, e.g., the change of the objective value. If any such change results in an improvement of the solution, the move is kept, and the process continues from this point. Often the changes are conducted in unit steps, which may not be a good idea in the case of nonlinear functions. Table 2.7 shows a number of steps, which are chosen somewhat haphazardly just to illustrate the general concept. The process could continue from here as long as desired.
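The two phases can be sketched in a few lines of Python. This is a minimal sketch of the idea, not the book's exact procedure: the swap neighborhood below exchanges units between one pair of variables at a time (in steps larger than one, for the reason just mentioned), whereas Table 2.7 also shows moves that change three variables at once.

```python
import math

WEIGHTS = [4, 3, 2, 5, 4]   # resource use per unit of x1..x5
CAPACITY = 25

def z(x):
    # objective of the nonlinear knapsack example
    return (0.2 * x[0] ** 3 + 8 * math.sqrt(x[1]) + math.exp(0.2 * x[2])
            + 3 * x[0] * x[1] + 2 * x[3] ** 2 + 4 * x[4])

def lhs(x):
    return sum(w * v for w, v in zip(WEIGHTS, x))

def greedy():
    # Phase 1: repeatedly add one unit of the variable with the largest gain
    x = [0] * 5
    while True:
        best, best_gain = None, 0.0
        for j in range(5):
            if lhs(x) + WEIGHTS[j] <= CAPACITY:
                y = list(x); y[j] += 1
                gain = z(y) - z(x)
                if gain > best_gain:
                    best, best_gain = j, gain
        if best is None:
            return x                     # knapsack is full
        x[best] += 1

def swap(x, max_step=5):
    # Phase 2: best-improvement swap; raise one variable, lower another
    while True:
        best, best_z = None, z(x)
        for i in range(5):
            for j in range(5):
                if i == j:
                    continue
                for up in range(1, max_step + 1):
                    for down in range(1, x[j] + 1):
                        y = list(x); y[i] += up; y[j] -= down
                        if lhs(y) <= CAPACITY and z(y) > best_z + 1e-9:
                            best, best_z = y, z(y)
        if best is None:
            return x                     # no improving swap left
        x = best

g = greedy()    # -> [0, 1, 1, 0, 5], as in the text
s = swap(g)     # a feasible solution at least as good as the greedy one
```

Running the sketch reproduces the Greedy solution x = [0, 1, 1, 0, 5]; the swap phase then climbs to a better feasible solution, although, like any local search, it may still stop short of the global optimum x = [4, 3, 0, 0, 0].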

References

J. Aldrich, Autonomy. Oxf. Econ. Pap. 41, 15–34 (1989)
A. Ben-Tal, L. El Ghaoui, A. Nemirovski, Robust Optimization (Princeton University Press, Princeton, NJ, 2009)
D. Bertsimas, Introduction to Linear Optimization (Athena Scientific, Nashua, NH, 1997)
D. Bertsimas, R. Weismantel, Optimization Over Integers (Dynamic Ideas Publishers, Charlestown, MA, 2005)
J. Bhadury, H.A. Eiselt, Stability of Nash equilibria in locational games. RAIRO-Oper. Res. 29(1), 19–33 (1995)
J. Birge, F. Louveaux, Introduction to Stochastic Programming, Springer Series in Operations Research and Financial Engineering, 2nd edn. (Springer, 2011)
L. Ceriani, P. Verme, The origins of the Gini index: extracts from Variabilità e Mutabilità (1912) by Corrado Gini. J. Econ. Inequal. 10, 421–443 (2012)


A. Charnes, W.W. Cooper, Chance-constrained programming. Manag. Sci. 6, 73–79 (1959)
H.A. Eiselt, C.-L. Sandblom (eds. & authors), Integer Programming and Network Models (Springer, Berlin, 2000)
H.A. Eiselt, C.-L. Sandblom, Nonlinear Programming (Springer, Berlin, 2019)
H.A. Eiselt, C.-L. Sandblom, Operations Research: A Model-Based Approach, 3rd edn. (Springer, Berlin, 2022)
H.A. Eiselt, C.-L. Sandblom, N. Jain, A spatial criterion as decision aid for capital projects: locating a sewage treatment plant in Halifax, Nova Scotia. J. Oper. Res. Soc. 49(1), 23–27 (1998)
R.S. Garfinkel, G.L. Nemhauser, Integer Programming (Wiley, New York, 1972)
C. Gini, Variabilità e mutabilità (1912). Reprinted in E. Pizetti, T. Salvemini (eds.), Memorie di Metodologica Statistica (Libreria Eredi Virgilio Veschi, Rome, 1955)
C. Gini, Measurement of inequality of incomes. Econ. J. 31(121), 124–126 (1921)
B.L. Gorissen, I. Yanıkoğlu, D. den Hertog, A practical guide to robust optimization. Omega 53, 124–137 (2015)
O.G. Haywood Jr., Military decision and game theory. J. Oper. Res. Soc. Am. 2, 365–385 (1954)
F.S. Hillier, G.J. Lieberman, B. Nag, P. Basu, Introduction to Operations Research, 10th edn. (McGraw-Hill India, New Delhi, 2017)
R.A. Howard, Decision analysis: practice and promise. Manag. Sci. 34(6), 679–695 (1988)
Lindo Systems Inc, Demo versions of their products, available online at https://lindo.com/, last accessed 9/15/2022 (2022)
M.O. Lorenz, Methods of measuring the concentration of wealth. Publ. Am. Stat. Assoc. 9(70), 209–219 (1905)
D.G. Luenberger, Y. Ye, Linear and Nonlinear Programming, 3rd edn. (Springer, New York, 2008)
Z. Michalewicz, D.B. Fogel, How To Solve It: Modern Heuristics (Springer, Cham, 2004)
K.G. Murty, Optimization Models for Decision Making (Springer, Cham, 2010)
G.L. Nemhauser, L.A. Wolsey, Integer and Combinatorial Optimization. Wiley Series in Discrete Mathematics and Optimization (Wiley, New York, 2014)
C.H. Papadimitriou, K. Steiglitz, Combinatorial Optimization: Algorithms and Complexity (Dover, Mineola, NY, 1998)
F. Plastria, Formulating logical implications in combinatorial optimisation. Eur. J. Oper. Res. 140, 338–353 (2002)
A. Schrijver, Theory of Linear and Integer Programming (Wiley, Hoboken, NJ, 1998)
H.A. Taha, Integer Programming: Theory, Applications, and Computations (Academic, New York, 1975)
H.P. Williams, Model Building in Mathematical Programming (Wiley, Hoboken, NJ, 1999)
W.L. Winston, Operations Research: Applications and Algorithms, 4th edn. (Duxbury Press, Pacific Grove, CA, 2004)
J. Woodward, Some varieties of robustness. J. Econ. Methodol. 13(2), 219–240 (2006)

Chapter 3: Multicriteria Decision Making

3.1 The Basic Setting

While the previous chapter of this book dealt with single-objective optimization, we expand the view in this chapter so as to allow multiple concerns to be included by one or multiple decision makers. For now, we will use fuzzy terms such as "concerns" until we formalize the terminology. It is apparent that multiple decision makers will have multiple views, but even a single decision maker may, and usually does, have more than a single concern. For instance, when purchasing a vehicle, the buyer will not only look at the price, but at other features as well, such as resale value, gas mileage, comfort, horsepower, etc. Similarly, in the corporate context, while profitability is most likely to be the overriding concern in the short run, long-term viability is an important consideration that, at least in the retail context, could potentially be quantified by using customer satisfaction as a proxy measure. At this point we need to introduce some terminology in order to facilitate our discussion below. A multicriteria decision making problem (MCDM, sometimes also referred to as multicriteria decision analysis, or MCDA) is any problem that involves more than a single concern of interest to the decision maker. Two types of problems belonging to this class can be distinguished. A multiobjective optimization problem (MOP) is a standard linear or nonlinear, continuous or integer optimization problem, like the ones introduced in Chap. 2, except that it has more than a single objective. A multiattribute decision making problem (MADM or MADA) comprises a number of solution alternatives (or decisions), a number of criteria that the decisions are evaluated on, and a number of attributes that indicate how each decision is evaluated on each of the criteria.
Typically, MADM problems are stated in matrix form: the decisions are shown as rows, the criteria are represented by the columns, and the attributes are the (qualitative or quantitative) evaluations in the evaluation or attribute matrix. The decision maker's task is then to choose exactly one of the solutions included in the problem,
© Springer Nature Switzerland AG 2023 H. A. Eiselt et al., Multicriteria Location Analysis, International Series in Operations Research & Management Science 338, https://doi.org/10.1007/978-3-031-23876-5_3


Table 3.1 Three foods in comparison based on weight

| Food                          | Carbohydrates (in g) | Protein (in g) |
|-------------------------------|----------------------|----------------|
| Swiss cheese (1 oz)           | 1.5                  | 7.5            |
| Chocolate chip cookies (1 oz) | 17                   | 1.25           |
| Buttermilk biscuit (1 oz)     | 12.6                 | 2              |

Table 3.2 Three foods in comparison based on cost

| Food                         | Carbohydrates (in g) | Protein (in g) |
|------------------------------|----------------------|----------------|
| Swiss cheese ($1)            | 1.5                  | 7.5            |
| Chocolate chip cookies ($1)  | 6.07                 | 0.446          |
| Buttermilk biscuit ($1)      | 6.3                  | 1              |

Table 3.3 Characteristics of the vehicles in the sample

| Vehicle      | Price (in US$)         | Horsepower |
|--------------|------------------------|------------|
| Cadillac CT6 | 59,000                 | 335        |
| Nissan Rogue | 28,000                 | 170        |
| Lada Granta  | 6100 (419,900 rubles)  | 87         |

making it a so-called selection problem. As a matter of fact, Tables 3.1, 3.2 and 3.3 are matrices that belong to MADM problems. The main difference between MOP and MADM problems is that in MOP problems, the feasible solutions are implicitly specified by the constraints of the model, while in MADM problems, the feasible solutions are explicitly specified as the rows of the evaluation matrix. A second relevant difference is related to objectives and criteria. An objective indicates what feature a decision maker wants to actively find in the solutions, e.g., highly profitable or low-cost solutions. A criterion, on the other hand, exists solely for the purpose of evaluating a known solution. As an example, consider a location problem that attempts to (1) minimize the average time between a customer and his closest facility, and (2) maximize the total number of customers within a 30-min radius of the facilities to be located. These are expressed as objectives, so that the approach would be an MOP. In most cases, the objectives are not aligned, i.e., a solution that minimizes objective (1) will not necessarily maximize objective (2). In these cases, a number of solutions can be found such that no solution is better than any other one with respect to both objectives simultaneously. These are called pareto-optimal solutions. Once a number of pareto-optimal solutions has been determined, they can be evaluated by the decision makers on the two objectives in order to find a compromise solution. Suppose now that in the above location problem we have n possible locations for the p facilities that we attempt to locate. In the absence of any constraints, we have C(n, p) = n!/[p!(n − p)!] possible solutions. This tends to be a very large number, e.g., for n = 100 potential locations and p = 5 facilities to be located, there are no less than 75,287,520 solutions.
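The count of candidate solutions quoted above is easy to verify:

```python
from math import comb

# number of ways to place p = 5 facilities on n = 100 candidate sites
n, p = 100, 5
print(comb(n, p))  # 75287520
```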
In its simplest form without any additional input, an MOP problem would choose only the pareto-optimal solutions among those and display


them to the decision maker, who, in turn, will have to manually compare and evaluate these solutions. The number of pareto-optimal solutions will typically be much smaller than 75 million, but comparing all of them is often not possible, in which case only the most representative or promising solutions will be used for the final evaluation. If, instead of casting the problem using objectives (1) and (2), the decision makers somehow know a manageable number of solutions (i.e., facility locations), they can compute the attributes of these solutions, measured according to two criteria: (a) the average time between a customer and his closest facility, and (b) the total number of customers within a 30-min radius of the facilities located in that solution. Once this is done, an evaluation matrix can be constructed and an MADM problem solved to find the best solution. Notice the difference: in this approach we have not actively searched for solutions with good performance on criteria (a) and (b); we are just evaluating existing solutions on two criteria. This way, the pareto-optimal solutions of the MOP problem can be evaluated as if they formed an MADM problem. Given that all criteria are also used as objectives, the implicit/explicit divide and the number of solutions that the method can manage are really the only differences between MOP and MADM problems. The techniques to solve the two problems are also very similar, as will be demonstrated in the remainder of this chapter. The models will then be applied to a common location problem in other chapters of this volume. In the following parts, we will first discuss MOP and MADM together whenever possible. We begin with ways to extract only meaningful, i.e., pareto-optimal, solutions from the large, potentially infinite, pool of solutions. MADM problems typically comprise no more than a few dozen (at most!) solutions.
In the case of multiobjective programming, pareto-optimal solutions first have to be generated from a potentially very large pool of feasible solutions. In other words, we start with what is generally known as a vector optimization problem, i.e., a problem identical to a standard optimization problem, except that it has multiple objectives, which are to be "optimized," i.e., maximized or minimized. The terms "maximize" and "minimize" are typically written in quotation marks to indicate that we are looking for so-called pareto-optima rather than the usual optima, as in the great majority of cases, it is not possible to reach the maximum of all of the objectives simultaneously. The first to suggest formulations as vector optimization problems were Kuhn and Tucker (1951). For a brief history of the origins of vector optimization problems, see, e.g., DeWeck (2004). At this point, we have a number of different ways to deal with the problem. Without any further input from the decision maker, we can use Zeleny's (1974) complete enumeration method (see also Yu and Zeleny 1975), which determines all pareto-optimal extreme points. One problem with the approach is that, depending on the size and structure of the problem, the computational burden may be tremendous, as the number of pareto-optimal extreme points may be astronomically large. This fact is, however, dwarfed by the real problem: the potentially huge number of pareto-optimal points, each of which represents a solution, will have to be manually


compared by the decision maker. Except in very special cases, in which the number of pareto-optimal solutions is very small, this appears to be infeasible. In order to keep the computational and managerial burden reasonable, it may be desirable to merely approximate the nondominated frontier (i.e., find some of the points located on this frontier) rather than explore it in its totality. Some techniques that perform this task can be found below. The limited number of solutions generated by such a procedure could then be used as an input to an MADM model in which new criteria can be used. One problem associated with multiple objectives is the presence of correlation. It is obvious that the objectives profit, revenue, and cost are not independent of each other, and it would be problematic to have them as three objectives in a problem. Returning to the example in which the purchase of a vehicle is discussed, the size of a car, its cost, its engine displacement, and its horsepower are all related to each other. It is probably best to group concerns into clearly distinct classes, e.g., costs, features, power, etc., in the automobile example. Good examples are found in Larichev and Olson (2001), who, for a problem that involves the location of a pipeline, have identified classes of concerns that include cost, time, environmental impact, risk of rupture, and others. There is some, but comparatively little, correlation between those factors, which are subsequently subdivided further. A good collection of methods and models is provided by Figueira et al. (2005). Another issue that figures prominently in problems with multiple concerns is that of optimality. Whereas in single-objective problems optimality is clearly defined (no other solution has a higher, or lower in the case of minimization, value of the objective than the one labeled "optimal"), this is no longer the case once more than one objective is included.
Actually, with more than a single objective, the concept of optimality ceases to apply, and it needs to be replaced by another concept. The first step towards that goal is to introduce the concept of dominance. A decision or solution dominates another decision or solution if it is at least as good in all, and strictly better in at least one, of the components it is compared on. For example, in the context of the diet problem introduced in the previous chapter, a 3 oz. piece of pork loin has 23.2 g of protein and zero carbohydrates, while a 3 oz. piece of cauliflower has 2.1 g of protein and 5.3 g of carbs. Assuming that we would like to get as much protein and as many carbs as possible in our diet (within reason, of course) and want to eat exactly 3 oz. of food, pork is definitely superior in terms of protein, while cauliflower is better in terms of carbohydrates. In other words, there is no dominance of one of these two foods over the other. All of this is true if we can assume that the different foodstuffs blend linearly (which they probably do). While the concept of dominance is appealing, it does not solve the problem: without further input from the decision maker, we are left with the two decision alternatives pork and cauliflower that cannot be compared. Furthermore, consider the following example with the three foods shown in Table 3.1. Again, we assume that protein and carbs are both to be maximized. The three foodstuffs are plotted in Fig. 3.1 in the (carbs, protein) space. In addition, we have connected the "cheese" and "cookie" points with a straight line. This line actually represents all combinations of cookies and cheese of 1 oz. in


Fig. 3.1 Continuous linear dominance and nondominated frontier

total. For instance, 1/4 oz. of cheese and 3/4 oz. of cookies has 13.125 g of carbs and 2.8125 g of protein, i.e., this combination of cheese and cookies has more of both nutrients in question than the 1 oz. biscuit. This clearly indicates that the biscuit is dominated, and thus it can be eliminated from further consideration, at least as long as only protein and carbs are looked at. Graphically speaking, all points below the cheese-cookie line are dominated. Strictly speaking, this should be referred to as continuous linear dominance. The line connecting adjacent nondominated points is the nondominated frontier. It is important to realize that this type of dominance is based on the possibility of using some combination of other decision alternatives for the comparison. Note that the concept of dominance as we presented it here depends on the use of a common yardstick. Here, we have used weight as the benchmark, so that we can state that, pound for pound, biscuits are dominated by cheese and cookies. This dominance may or may not persist if we change the common yardstick to, say, dollars. For instance, if cheese were $1, cookies were $2.80, and biscuits were $2 per ounce, then Table 3.1 must be modified to Table 3.2, where it is apparent that biscuits actually dominate the cookies. Clearly, it is not always possible to use combinations of two or more products for comparison with another item. Take the (admittedly extreme) example of choosing a car among three alternatives. Assume now that the only criteria to be used for the evaluation of the vehicles are price and horsepower. The figures are shown in Table 3.3. First, we realize that we have to prepare the data before making any comparisons: while we may like to maximize horsepower, we certainly wish to minimize the amount of money we pay for the vehicle.
By using "$100,000 minus the price," i.e., the money left over from a budget of, say, $100,000, we have converted the cost into something that we want to maximize. In the above example, the appropriate numbers for the three vehicles are 41,000, 72,000, and 93,900, respectively. Then a simple (0.4, 0.6) combination of the Cadillac and the Lada results in a cost of $27,260 (i.e., $72,740 left over from the original $100,000) and 186.2 horsepower, thus superior to the specifications of the Nissan, hinting at dominance. However, in contrast to the


above example concerning the diet problem, automobiles are not divisible, so that the Nissan is not dominated here but can be considered a relevant decision alternative. This type of dominance, in which linear combinations of alternatives cannot be used for the comparison with a solution, could be referred to as discrete linear dominance. A technique to determine dominances in a specific context is data envelopment analysis (DEA). Data envelopment analysis was developed by Charnes et al. (1978), and it was originally designed to compare existing branches of a company and determine their relative efficiency. It starts with all relevant input and output factors of the problem. One of the branches is then called efficient if it is not possible to achieve at least the same level of output with at most the same level of input by a linear convex combination of the other branches, i.e., a branch is efficient if it is not continuously linearly dominated. It must be pointed out that the verdict "efficient" for a branch or decision does not mean that it is actually efficient, only that one cannot do better by combining the other branches or decisions. In our context, the technique can be used to determine whether or not a point in an MOP problem is continuously linearly dominated. In order to do so, we need to compare it against all nondominated points. This is a problem: in order to determine actual efficiency, we need to know all nondominated points, which is typically not possible or practical. This severely limits the usefulness of DEA in this context. The set of all nondominated decision alternatives or solutions is also referred to as the set of noninferior, efficient, or pareto-optimal solutions. Without any further input from the decision maker, all we can do is determine all nondominated solutions.
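Both kinds of dominance checks are mechanical. The sketch below (ours, not the book's) filters the nondominated rows of an attribute matrix in which every criterion is oriented so that more is better; on the vehicle data of Table 3.3 (with price converted to money left over from a $100,000 budget) all three cars survive, while the 1/4-cheese, 3/4-cookie blend from Table 3.1 confirms the continuous linear dominance of the biscuit.

```python
def dominates(a, b):
    # a dominates b: at least as good everywhere, strictly better somewhere
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def nondominated(rows):
    # keep the rows that no other row dominates (discrete dominance)
    return [r for r in rows if not any(dominates(o, r) for o in rows if o is not r)]

# (money left over in $, horsepower), both to be maximized
cars = {"Cadillac": (41000, 335), "Nissan": (72000, 170), "Lada": (93900, 87)}
pareto = nondominated(list(cars.values()))
# all three vehicles survive: none dominates another pairwise

# continuous linear dominance (Table 3.1): a cheese/cookie blend beats the biscuit
cheese, cookie, biscuit = (1.5, 7.5), (17.0, 1.25), (12.6, 2.0)
lam = 0.25   # 1/4 oz cheese + 3/4 oz cookies
blend = tuple(lam * c + (1 - lam) * k for c, k in zip(cheese, cookie))
assert dominates(blend, biscuit)   # (13.125, 2.8125) beats (12.6, 2.0)
```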
The drawback is that in most practical cases, the number of nondominated solutions is huge, and almost always the comparison of all of these solutions by the decision maker is simply not practical. This leads us directly to the specification of tradeoffs by the decision maker. A tradeoff is simply a value statement by a decision maker that compares the value of a unit of one concern with a unit of a different concern. To be more specific, a decision maker searching for a vehicle could state that each additional mile per gallon of gas is worth an increase in price of $500 to him. Notice that while the units in which individual concerns are measured are important, they do not have to be commensurable. Clearly, each tradeoff is a value statement by a specific decision maker and valid only for him at this time; others may have very different ideas concerning tradeoffs. Clearly, it will be very difficult for any decision maker to specify precise tradeoffs between criteria. However, if this is possible, the benefits are tremendous: if a decision maker were to state that in his opinion, one unit of protein in a diet is as important as three units of carbohydrates, then we can measure a food's value entirely in either protein or carbs; the tradeoff between them allows us to reduce the problem to only one nutrient, thus greatly simplifying the problem. As a matter of fact, applying this idea repeatedly will reduce a problem to an equivalent problem with just a single nutrient, or whatever else the concern may be. This idea is applied in the weighting method and the weighted average method in the next section. An important issue is the consistency of the tradeoffs. If, for instance, a decision maker specifies that one unit of criterion A is three times as important as one unit of criterion B,


which in turn is twice as important as one unit of criterion C, then it must be understood that, by way of transitivity, one unit of A is considered worth six times one unit of C. This issue is further addressed in our discussion of the analytic hierarchy process further below. The reduction of the perceived value of individual concerns to one measure is, of course, nothing new. The invention of money did exactly the same thing in a barter economy, as money expresses tradeoffs between itself and each other item, and thus between the items themselves. For a thorough discussion of criterion weights, readers are referred to Choo et al. (1999). At this point, it appears appropriate to take a step back and reconsider the issue of weights. Suppose that a problem has two objectives, and the decision maker has assigned weights of 3 and 1 to them, respectively. By stating that each unit of the first objective is three times as important as a unit of the second objective, a specific rate of substitution is implied. In this case, each time we decrease the first objective by one unit, we have to make up for it by increasing the second objective by three units. Using such an additive function also implies that this rate of substitution is valid regardless of the solution. For instance, if a decision maker has determined that one hour of leisure time is the equivalent of $20, then that may be true if he presently has only one hour of leisure time. However, in case he already has 12 h of leisure time per day, it is quite likely that his valuation of extra time will be different. Whenever that is the case, an additive function is not appropriate. Alternatively, decision makers can apply a multiplicative rule. In such a rule, the rates of substitution are not constant on the domain, but the percentage rates of substitution remain constant. This appears much more relevant in many instances.
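The level-curve argument of the following paragraphs can be replayed numerically on the vehicle data. In the sketch below, the additive scores are computed on min-max-scaled criteria (an assumption we make so that the weights act like the slopes of the level curves in Fig. 3.2; the book does not state the figure's scaling), while the multiplicative (Cobb-Douglas-style) score is applied to the raw figures.

```python
def minmax(vals):
    # scale a criterion to [0, 1] so that weights are comparable across criteria
    lo, hi = min(vals), max(vals)
    return [(v - lo) / (hi - lo) for v in vals]

cars = ["Cadillac", "Nissan", "Lada"]
money = [41000, 72000, 93900]   # left over from a $100,000 budget
hp = [335, 170, 87]

m, h = minmax(money), minmax(hp)

def additive(w_money, w_hp):
    # constant rates of substitution: weighted sum of scaled criteria
    scores = [w_money * mi + w_hp * hi for mi, hi in zip(m, h)]
    return cars[scores.index(max(scores))]

def multiplicative(w_money, w_hp):
    # constant percentage rates of substitution, on the raw figures
    scores = [mo ** w_money * ho ** w_hp for mo, ho in zip(money, hp)]
    return cars[scores.index(max(scores))]

print(additive(4, 1))            # Lada
print(additive(1, 3))            # Cadillac
print(multiplicative(2/3, 1/3))  # Nissan
```

No nonnegative additive weights ever select the Nissan, since its scaled point lies below the line connecting the other two, while the multiplicative rule with importances 2/3 and 1/3 does pick it, exactly as argued below.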
To highlight the difference between additive and multiplicative rules, suppose the concerns are horsepower H and the remaining budget after buying a car, R. An additive function that exchanges three units of H for one unit of R is 3H + 1R. In contrast, a multiplicative function in which the two objectives are to be maximized, in which a 1% change in H results in a 0.33% change in the valuation of the solution and a 1% change in R results in a 0.66% change in the value of the solution, is H^0.33 R^0.66. (Note the similarity of this function to the well-known Cobb-Douglas production function; see, e.g., Douglas 1976). Another shortcoming of the additive model is shown in Fig. 3.2. The red dots in the figure represent the coordinates of the three solutions in the automobile example of Table 3.3. If the decision maker had a constant rate of substitution of 4 and 1 for money left over and horsepower, respectively, the blue line is a level curve that includes all points that are considered to have the same value to the decision maker. For each level of "achievement," there is such a line, and all of these lines are parallel. The objective is then to push these lines as far as possible in a northeasterly direction until the last of the existing points is reached. This is the blue line shown in the figure, which indicates that this value assessment by the decision maker results in the Lada being considered the best solution. A similar situation presents itself for weights of 1 and 3. The solution point on the highest teal level curve is the Cadillac, which is thus the chosen solution. It is apparent that different rates of substitution are represented by level curves with different slopes. Furthermore, it can be seen that whatever the slope of the chosen level curve may be, the point that


Fig. 3.2 Rates of substitution in a multiplicative model

represents the Nissan will never be on the highest level curve, and thus it will never be chosen by the decision maker. This appears to be counterintuitive, as among the three vehicles in this example, it could be considered a reasonable middle-of-the-road solution. A multiplicative model, however, may choose the Nissan for some weight combinations. For instance, if the decision maker were to specify an importance of 2/3 for the first criterion (money left over) and 1/3 for the second criterion (horsepower), the brown line indicates that the point on the highest (i.e., most northeasterly) level curve is indeed the Nissan. Such a model may better reflect a decision maker's preferences. An excellent discussion of the issues involved is provided by Tofallis (2014).

3.2 Some Approaches in the MOP and MADM Settings

Arguably, the most popular way to deal with vector optimization problems is the weighting method, first proposed by Cohon (1978). The method itself is simple: given an optimization problem with multiple objectives, assign a positive weight to each of the objectives, and determine the weighted sum of the given objectives, which is now a single composite objective. Due to this aggregation, the problem can now be solved with any standard single-objective solver, and an optimal solution can be determined. It can then be proved that while the solution found in this manner is optimal for the problem with the composite objective (which is quite irrelevant to the decision maker), it is also pareto-optimal for the original problem. Repeating this procedure multiple times with different sets of weights will result in multiple pareto-optimal solutions (possibly the same solution is generated by different weight combinations), so as to obtain a subset of the nondominated frontier. This allows an


interactive approach that puts a reasonable burden on the decision maker: First, generate a reasonably small number of solutions that are clearly distinct. In other words, the solutions will have been generated by widely different sets of weights. Then present the decision maker with the results. At this point, the task is not to make a final choice among the resulting solutions, but to express a preference. In the next iteration, the analyst will take the solution most favored by the decision maker and create a new set of solutions, with weights that are still distinct from each other but similar to those that generated the solution the decision maker expressed a preference for. This is continued a few times, until a solution has been found that is considered reasonable by the decision maker. Alternatively, a few clearly distinct solutions are generated with any MOP procedure, and the results are then used as input into an MADM method. However, such a procedure does not allow the generation of new compromise solutions. An alternative to the weighted sum aggregation is the weighted product aggregation. It does have some desirable properties, including avoiding the pitfalls of the additive model with respect to rates of substitution, discussed above in the context of Fig. 3.2. However, weighted product aggregation results in a nonlinear programming problem, which adds to the degree of difficulty, particularly if the original problem was an integer problem already. A very similar approach can also be used for MADM problems. The simple additive weighting (SAW) method assigns a weight to each of the criteria and then, similar to calculating an expected value, aggregates the criteria by determining the weighted average. This single-dimensional measure (hence the term scalarization for methods that aggregate different criteria into a single scalar) then allows a straightforward determination of the decision that maximizes or minimizes this aggregate value.
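To make the weighting method concrete, consider a small bi-objective toy problem of our own (not from the book): "maximize" z1 = x1 and z2 = x2 subject to x1 + 2x2 ≤ 4, 2x1 + x2 ≤ 4, x1, x2 ≥ 0. Rather than calling an LP solver, the sketch enumerates the feasible vertices directly; every strictly positive weight vector then returns a pareto-optimal vertex, and different weights trace out different points of the nondominated frontier.

```python
from itertools import combinations

# constraints a1*x1 + a2*x2 <= b, including the nonnegativity bounds
CONS = [(1, 2, 4), (2, 1, 4), (-1, 0, 0), (0, -1, 0)]

def vertices():
    # intersect every pair of constraint lines; keep the feasible points
    pts = []
    for (a1, a2, b1), (c1, c2, b2) in combinations(CONS, 2):
        det = a1 * c2 - a2 * c1
        if abs(det) < 1e-12:
            continue                      # parallel constraint lines
        x = (b1 * c2 - a2 * b2) / det
        y = (a1 * b2 - b1 * c1) / det
        if all(a * x + c * y <= b + 1e-9 for a, c, b in CONS):
            pts.append((x, y))
    return pts

def weighting_method(w1, w2):
    # maximize the composite objective w1*z1 + w2*z2 over the feasible region
    return max(vertices(), key=lambda p: w1 * p[0] + w2 * p[1])

corner1 = weighting_method(0.7, 0.3)   # favors z1: the vertex (2, 0)
middle = weighting_method(0.5, 0.5)    # the compromise vertex (4/3, 4/3)
corner2 = weighting_method(0.3, 0.7)   # favors z2: the vertex (0, 2)
```

Each returned point is pareto-optimal; sweeping the weights is exactly the repetition described above, and widely spaced weights give the clearly distinct solutions the interactive approach starts from.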
Tofallis (2014) presents a highly readable account that highlights the problems with aggregation approaches. Similar to MOP problems, we can use weighted multiplicative aggregation for MADM problems as well, usually referred to as multiplicative exponential weighting (MEW). In this type of problem, such a procedure will not result in any undue computational complications. Multiobjective optimization problems also allow a different procedure, called the constraint or ε-constraint method. Given a standard MOP problem, this method imposes a minimum acceptable value on each objective but one. These bounds are specific to each objective and represent maximal acceptable costs, minimally acceptable customer satisfaction, etc. In doing so, all objectives but one are transformed into constraints, so that the resulting problem can now be solved as a single-objective optimization problem. Since the levels of acceptability are somewhat arbitrary, it is mandatory to perform sensitivity analyses on them. In other words, we parametrically change the acceptability levels, re-solve the problem, and thus obtain more solutions on the nondominated frontier, among which the decision maker will then choose the solution that best fits the present situation. It is necessary to point out that this method can potentially find all pareto-optimal solutions, as opposed to the weighting


method, which can miss some non-dominated solutions, e.g., Nissan in the example of Fig. 3.2. There is no direct equivalent of the constraint method for MADM problems. The technique closest to the constraint method will specify upper and lower bounds on the criteria that need to be achieved. Then the decisions that do not achieve these bounds will be discarded from consideration. Subsequent modifications of the bounds, i.e., sensitivity analyses, will reveal the stability and performance of solutions regarding changes of the right-hand side values. Another approach, applied to both MOP and MADM problems, is ideal point programming, also referred to as reference point methods or compromise programming. The basic idea proceeds in three phases. The first phase determines an ideal point or a utopia point. Such a point represents levels of the given objective functions that are not achievable; still, the point should be "reasonable" rather than having utopian achievement levels. One possibility to generate this point is to take one objective at a time and solve a problem with the given constraints and this objective, while ignoring all other objectives. The objective value of this objective is then considered ideal for this objective. This is repeated for all objectives, one at a time. The combination of all objective values at their respective optimal levels is then the ideal point. In the second phase, the decision maker will choose a disutility (or distance) function, which expresses the disutility of a solution compared to the ideal point, i.e., the underachievement (in case of objectives to be maximized) and the overachievement (in case of objectives to be minimized). Loosely speaking, the farther a solution is located from the ideal point, the higher its disutility. Typical choices for the disutility/distance functions are (possibly weighted) Minkowski distance functions to be defined in Chap. 4, or some concave or convex functions.
The distance function will implicitly include the tradeoffs between criteria. The third and final phase minimizes the distance between any point in the feasible set and the ideal point. Reference point methods have also been seen as a close relative of goal programming techniques, which are described below. Note also the similarity between reference point methods and the "regrets" in the Savage-Niehans rule in decision analysis, see, e.g., Niehans (1948), Savage (1951), or Gaspars-Wieloch (2018). One thought that has been floated is to determine the least ideal point or nadir point and, rather than trying to pull a solution towards its ideal point, push it away from the least ideal point. In general, this does not appear to be a good idea, as moving away from something undesirable does not mean that the solution gets better; it may merely move towards something less undesirable. One possibility is to combine the minimization of the solution-to-utopia-point distance and the maximization of the solution-to-nadir-point distance in one objective. Ideal point programming has sometimes been suggested as a tool for robust optimization. Exactly the same concept can also be applied to MADM problems. Here, it is usually referred to as TOPSIS (Technique for Order of Preference by Similarity to Ideal Solution) due to Hwang and Yoon (1981). An application of the TOPSIS method can be found in Chap. 10.
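The combination of both distances just mentioned is exactly what the TOPSIS closeness coefficient does. A sketch with invented, already-normalized scores (this is the standard TOPSIS formula, not code from the book):

```python
import numpy as np

# Normalized utility matrix: one row per alternative, higher is better
# on every criterion. The numbers are invented for illustration.
scores = np.array([
    [0.8, 0.7, 0.2],
    [0.6, 0.9, 0.5],
    [0.4, 0.5, 0.9],
])

ideal = scores.max(axis=0)    # component-wise best: the ideal point
nadir = scores.min(axis=0)    # component-wise worst: the nadir point

d_plus = np.linalg.norm(scores - ideal, axis=1)    # distance to ideal
d_minus = np.linalg.norm(scores - nadir, axis=1)   # distance to nadir

# TOPSIS closeness coefficient: combines closeness to the ideal point and
# remoteness from the nadir point in a single measure between 0 and 1.
closeness = d_minus / (d_plus + d_minus)
ranking = np.argsort(-closeness)    # most preferred alternative first
```

Ranking by closeness rather than by the distance to the ideal point alone is precisely the combination of the solution-to-utopia and solution-to-nadir measures in one objective.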

3.3 Methods for MOP Problems

The most popular methods to solve MOP problems are vector optimization, ideal point programming, and goal programming. Since the first two have already been discussed above, the remainder of this section will concentrate on goal programming. (An application of vector optimization is found in Chap. 6.) The procedure was first suggested (without that name) by Charnes et al. (1955). The name "goal programming" was first introduced by Charnes and Cooper (1961). An interesting account of the history of goal programming is provided by Aouni and Kettani (2001). Early books on the subject are those by Lee (1972) and Ignizio (1976). Without attempting to change the name of the method, it would more aptly be named target value programming, as goals are typically defined as overall long-term (often qualitative) directions the decision maker wants to move into, whereas the setting of targets for individual objectives or constraints is central to this technique. Having said that, here are some of the main ideas. One of the main features of goal programming is to bridge the huge gap between objectives and constraints. As in all optimization problems, objectives express features the decision maker wants to achieve, while constraints indicate the issues that must be satisfied. In other words, constraints are considered absolute by the model. No matter how small the degree to which a constraint is violated, the optimizer will not consider the solution. Many real situations are not like that. For instance, if a planned investment exceeds the financial capabilities of a decision maker, a loan can be considered. If we were to formulate the situation as a standard budget constraint, the optimizer would consider the constraint absolute and not consider any investment strategy that exceeds the given budget. Goal programming softens a budget constraint as follows. First, it defines a target value, which represents a desired level of achievement.
In case of the budget constraint, the target value could actually denote the budget or any other value that represents the maximally acceptable expenditures. Deviational variables are then defined that represent the overachievement and underachievement (the terms are used in the wide sense) of a solution over or under the target value. For instance, if the budget is 100 and a solution has expenditures of 80, then the underachievement is 20 (which is the same as the value of the slack variable, had this relation been written as a constraint). Similarly, if we were to spend 110, then the overachievement would be 10 (which is the same as the value of the excess or surplus variable, had we written the relation as a constraint). Once that has been done, the goal constraint with its deviational variables is written just like a regular constraint. In contrast to the usual formulations, we now use the deviational variables in the objective function, minimizing some function of the over- and/or underachievements, i.e., the amounts by which the solution exceeds or falls short of the target value. For instance, in case of a budget constraint, we would put a term into the objective function that minimizes overspending, whereas in case of a constraint that requires satisfaction of some demand, the objective function could include a term that minimizes the underachievement of demand. In both cases, staying within the


budget and satisfaction of the demand are no longer absolute (as they are now included in the objective function). It is also worth noting that we need not necessarily simply minimize the over- and/or underachievement, but may minimize some function of it. For instance, this could mean that we progressively penalize overspending, so that overspending by, say, $1 carries a penalty of 2, while overspending by, say, $2 carries a penalty of 5. As a matter of fact, this is the same approach used in barrier and penalty methods in nonlinear optimization, such as the sequential unconstrained minimization technique or Lagrangean approaches; see, e.g., Eiselt and Sandblom (2004). Finally, we would like to mention that it is still possible to incorporate the usual maximization (e.g., of profits or revenue) and/or minimization of costs (or any other objective) in the minimization objective in goal programming. All we need to do is to define an unachievable target value for the objective (very high for maximization objectives or very low for minimization objectives) and then define a part of the new objective that minimizes the underachievement of the profit target or the overachievement of the cost target. We then continue to assemble all the individual terms with the over- and underachievements into a single objective. As always in such cases, commensurability is an obvious problem. For example, if we attempt to minimize overspending (in dollars), the overuse of a machine (in hours), and the underperformance on customer satisfaction (measured on some Likert scale) all in one objective, we need weights not only to express the relative importance of the individual over- and underachievements, but also to convert them into commensurable units. A single-level goal programming model again uses tradeoffs for the individual under- and overachievements to arrive at a single objective function. The discussion concerning tradeoffs earlier in this chapter applies in this case.
One of the features available to modelers who employ goal programming formulations is the possibility to use preemptive priority levels, in which the objective on one level is considered to be "infinitely more important" than the one on the next level. While this may sound contrived at first, the concept is well known to modelers: in each standard optimization problem, constraints are infinitely more important than objectives, as without them being satisfied, there is no acceptable solution. However, in general it seems somewhat artificial to decide whether, for instance, the underachievement of a target regarding customer satisfaction is infinitely more important than the overachievement of a given budget. An application that uses goal programming is provided in Chap. 7. Sometimes, we do not want to use absolute scores but apply some normalization first. One way to do this is shown just below Table 3.7 in the next section of this chapter.
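Preemptive priorities can be implemented by solving a sequence of linear programs, one per level, freezing each level's achievement before moving to the next. A sketch with an invented two-level example (targets, bounds, and coefficients are illustrative, not taken from the chapter), assuming scipy is available:

```python
import numpy as np
from scipy.optimize import linprog

# Variable order: x1, x2, u1, o1, u2, o2 (u = under-, o = overachievement).
# Priority 1: reach a demand target of 10 on x1 + x2   -> minimize u1.
# Priority 2: stay within a budget of 12 on 2x1 + x2   -> minimize o2.
A_eq = np.array([
    [1, 1, 1, -1, 0,  0],   # x1 + x2  + u1 - o1 = 10
    [2, 1, 0,  0, 1, -1],   # 2x1 + x2 + u2 - o2 = 12
])
b_eq = [10, 12]
bounds = [(0, 8), (0, 8)] + [(0, None)] * 4

# Level 1: minimize u1 alone.
res1 = linprog([0, 0, 1, 0, 0, 0], A_eq=A_eq, b_eq=b_eq, bounds=bounds)

# Level 2: minimize o2 while freezing the level-1 achievement
# (u1 may not exceed its level-1 optimum).
bounds2 = list(bounds)
bounds2[2] = (0, res1.x[2])
res2 = linprog([0, 0, 0, 0, 0, 1], A_eq=A_eq, b_eq=b_eq, bounds=bounds2)
```

The freezing step is what makes level 1 "infinitely more important": no amount of improvement on level 2 is allowed to degrade the level-1 achievement.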

3.4 Methods for MADM Problems

In addition to the aforementioned techniques, such as weighted additive and multiplicative aggregation and TOPSIS, many other techniques exist. Here, we will describe two classes of methods in some detail, and then we will briefly mention some other approaches. A good account of MADM methods is found in Olson (1996). The first class of methods are the so-called outranking methods. All outranking methods have in common that they use pairwise comparisons of decision alternatives and express their preferences either in a strict better/no better way or by defining a softer degree of preference. Given that, one may view outranking relations as a way to determine "soft dominances." One class of methods, termed ELECTRE (Elimination et (and) Choice Translating Algorithms), was first described by Roy and Sussman (1964) and subsequently by Roy (1971). Like other techniques, it starts with an attribute matrix (here, typically all attributes are quantitative and normalized, as is the case with utilities of decision alternatives, measured with respect to the criteria) and a set of decision-maker-specified weights that indicate the importance of and the tradeoffs between attributes. The technique first develops a concordance matrix, which, for each pair of decisions, expresses the magnitude of the preference of one decision over another. Similarly, a discordance matrix is determined, which indicates for each pair of decisions how much worse one decision is compared to another. Then, if the concordance of a decision di over a decision dk exceeds a prespecified threshold and the discordance of di under dk is below a prespecified threshold, we say that decision di outranks decision dk. The outranking of decisions can be considered a weak form of dominance. The outranking relations can then be visualized in a graph, on which additional computations may be made.
Over the years, Roy and others have refined the technique and produced a variety of versions; see, e.g., Rui Figueira et al. (2010). An application that uses the ELECTRE method can be found in Chap. 9. Another outranking technique is PROMETHEE (Preference Ranking Organization Method for Enrichment Evaluations), first described by Brans and Vincke (1985). Like the ELECTRE method, the technique starts out with a utility matrix that shows the attributes of the individual decisions, as well as a vector of weights associated with the attributes. Using a number of functions, the utilities are then transformed into preferences for each attribute individually. These functions can be binary (if one decision performs better by a prespecified margin than another, it has total preference), or based on linear, step, exponential, or other functions. The weights are then used to aggregate the preferences into an overall preference matrix. The different versions of PROMETHEE will then produce either partial or complete outranking relations based on the average preferences of one decision over another. Additional analyses are typically performed. An application that applies the PROMETHEE method can be found in Chap. 8. Another technique that also uses pairwise comparisons of decision alternatives was suggested by Saaty (1980). The so-called analytic hierarchy process (AHP)


starts with the assertion that individuals are typically unable to compare more than a fairly small number of alternatives. In order to avoid such difficulties, the analytic hierarchy process uses only pairwise comparisons. Note that this creates the problem of consistency: if the decision maker were to assert that alternative A is, say, 2 times as important as alternative B, that B is 3 times as important as alternative C, and that A is considered 5 times as important as C, then this information is not consistent, as transitivity would require A to be 6 times as important as C. It is highly unlikely that decision makers are totally consistent when they state their preferences. This requires that, once the inconsistencies exceed a certain level, corrective action must be taken, e.g., a re-evaluation by the decision maker. In his treatise, Saaty (1980) suggests an index for the consistency of a set of assessments. Alternatively, we may use one of the standard statistical tools, such as the coefficient of variation, as suggested by Eiselt and Sandblom (2004). Once the degree of inconsistency has been determined and found to be unacceptable, analysts will either re-interview decision makers, or they will use a mathematical programming formulation that rectifies the problem while minimizing the deviations of the actual weights from those suggested by the decision maker. The process first requires the decision maker to make comparisons between all pairs of decisions on each individual criterion. In addition, comparisons are also needed between all pairs of criteria regarding their importance. After a number of normalizations (or, alternatively, eigenvector computations), a utility matrix and a weight vector result. Finally, the method applies the generic method of weighted averages to arrive at a ranking of the decision alternatives. The method is demonstrated in an application in Chap. 13.
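Saaty's consistency check for the A/B/C example above can be sketched in a few lines; the consistency index and the random index value 0.58 for 3 × 3 matrices are the standard ones from Saaty (1980):

```python
import numpy as np

# Pairwise comparison matrix for the example in the text: A is 2 times as
# important as B, B is 3 times as important as C, but A is judged only
# 5 times as important as C (6 would be perfectly consistent).
M = np.array([
    [1,   2,   5],
    [1/2, 1,   3],
    [1/5, 1/3, 1],
])
n = M.shape[0]

# The principal eigenvalue equals n exactly when the matrix is consistent;
# any inconsistency pushes it above n.
lam_max = float(np.max(np.linalg.eigvals(M).real))

# Saaty's consistency index and consistency ratio; 0.58 is his random
# index for 3x3 matrices, and 0.1 his usual acceptance threshold.
CI = (lam_max - n) / (n - 1)
CR = CI / 0.58
acceptable = CR < 0.1
```

For this particular matrix the principal eigenvalue is only slightly above 3, so the stated 5 (instead of 6) is an inconsistency that Saaty's 10% rule of thumb would still tolerate.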
The MOORA method (Multi-Objective Optimization on the basis of Ratio Analysis), described by Brauers (see, e.g., Brauers 2018), is quite simple and heavily based on normalization techniques (with all of their aforementioned problems). Pairwise comparisons are used by the SMART and ZAPROS methods, described by Edwards (1971, 1977) and by Larichev (1982) and Larichev et al. (1974), respectively. SMART describes a ten-step procedure that uses a normalization technique (with all its inherent problems) and an additive aggregation technique (again with all of the strong assumptions it requires). ZAPROS makes pairwise comparisons based on the decision alternatives' ranks, and it results in a partial ordering of the alternatives. VIKOR (Vise Kriterijumska Optimizacija I Kompromisno Resenje, Serbian for Multicriteria Optimization and Compromise Solution) was developed by Opricovic in the 1970s, and an early application is found in Duckstein and Opricovic (1980); papers by Opricovic and Tzeng (2004, 2007) extend and compare the VIKOR method with other MADM techniques. The technique uses ideas from the ELECTRE method as well as the Hurwicz and Hodges-Lehmann rules (see, e.g., Eiselt and Sandblom 2004) known from games against nature. It incorporates the concept of regrets and uses an additive approach for aggregation. Two questions remain to be discussed: how do the individual techniques compare with each other, and which technique should a modeler use? References related to the former question are Zanakis (1998), Chiandussi et al.


(2012) and Hamdy et al. (2016), while the latter question was addressed by Cinelli et al. (2020).

3.4.1 Making It Work

In order to illustrate the concepts introduced earlier in this chapter, we will use the following problem for all multiobjective decision making approaches.

P: Max z1 = 12x1 + 7x2
   Max z2 = 2x1 + 5x2
   Min z3 = 6x1 + 1x2

The gradients of the objective functions, emanating from some arbitrary anchor point, are shown in Fig. 3.3. In addition, the lines in red in the figure, which are perpendicular to the gradients, are referred to as level curves or iso-profit (or cost, return, . . .) lines. They represent a certain level of the objective function to whose gradient they are perpendicular. The points on the side of each line marked with small stars form a half plane, and all points in this half plane have objective values that are better than the points on the line itself. The intersection of the half planes of all objective functions forms what is called an improvement cone. The improvement cone comprises all points that are better than the anchor point. This allows decision makers to evaluate each point: if the improvement cone anchored at the point has a nonempty intersection with the feasible set (other than the anchor point itself), then the anchor point is dominated, as by moving from the anchor point into the improvement cone, we improve all objectives simultaneously while staying feasible. The objectives shown above are now to be optimized given the following set of constraints:

x1 ≤ 4
x2 ≤ 8
x1 + x2 ≤ 10
–x1 + 2x2 ≥ 4
x1, x2 ≥ 0.

The feasible set generated by these constraints is shown in Fig. 3.4. Plotting the improvement cone at some point on each of the faces of the feasible set, we can determine the nondominated frontier, which ranges from (0, 2) to (0, 8), to (2, 8), and to (4, 6). The improvement cones and the nondominated frontier (the bold red line) are shown in Fig. 3.5.
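The geometric dominance test just described (does the improvement cone at a point meet the feasible set?) can be phrased as a small linear program: maximize a common improvement ε over all three objectives. This is a sketch for the sample problem, assuming scipy is available; it is not a procedure from the book:

```python
import numpy as np
from scipy.optimize import linprog

# Feasible set as A @ x <= b (-x1 + 2x2 >= 4 becomes x1 - 2x2 <= -4).
A = np.array([[1, 0], [0, 1], [1, 1], [1, -2]], dtype=float)
b = np.array([4, 8, 10, -4], dtype=float)

# Objective gradients: z1, z2 are maximized, z3 is minimized.
C_max = np.array([[12, 7], [2, 5]], dtype=float)
c_min = np.array([6, 1], dtype=float)

def is_dominated(x0):
    """True if some feasible point improves all three objectives at once,
    i.e., if the improvement cone at x0 intersects the feasible set."""
    z1_0, z2_0 = C_max @ x0
    z3_0 = c_min @ x0
    # Variables (x1, x2, eps): maximize eps subject to feasibility and an
    # improvement of at least eps on every objective.
    A_ub = np.zeros((7, 3))
    A_ub[:4, :2] = A
    A_ub[4, :2], A_ub[4, 2] = -C_max[0], 1.0   # z1(x) >= z1(x0) + eps
    A_ub[5, :2], A_ub[5, 2] = -C_max[1], 1.0   # z2(x) >= z2(x0) + eps
    A_ub[6, :2], A_ub[6, 2] = c_min, 1.0       # z3(x) <= z3(x0) - eps
    b_ub = np.concatenate([b, [-z1_0, -z2_0, z3_0]])
    res = linprog([0, 0, -1], A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3)
    return res.status == 0 and -res.fun > 1e-8
```

An interior point such as (1, 4) is dominated, while (2, 8), a corner of the nondominated frontier, is not.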


Fig. 3.3 Improvement cone in multiobjective linear programming


Fig. 3.4 Feasible set of the sample problem

As it is usually not practical to generate the entire nondominated frontier, we may use the weighting method to approximate this frontier. Table 3.4 shows a number of weight combinations along with the composite objectives they generate, the solutions that result from their use, and the associated objective values. The first three rows simply optimize one objective while disregarding the other two, whereas the other rows of the table represent different linear combinations of the objectives. Note that the third objective, being a minimization objective, is subtracted from the weighted combination of the other two. For instance, the weights 1, 6, and 4 result in the composite objective Max z = 1(12x1 + 7x2) + 6(2x1 + 5x2) – 4(6x1 + 1x2) = 33x2.
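Each row of Table 3.4 is a single-objective linear program; a sketch of how the weighting method could be set up for this example with scipy (not code from the book):

```python
import numpy as np
from scipy.optimize import linprog

# Feasible set of the sample problem as A_ub @ x <= b_ub
# (the constraint -x1 + 2x2 >= 4 is rewritten as x1 - 2x2 <= -4).
A_ub = [[1, 0], [0, 1], [1, 1], [1, -2]]
b_ub = [4, 8, 10, -4]

# Gradients of z1 and z2 (to be maximized) and z3 (to be minimized).
grad_z1, grad_z2, grad_z3 = np.array([12, 7]), np.array([2, 5]), np.array([6, 1])

def weighting_method(w1, w2, w3):
    """Maximize the composite objective w1*z1 + w2*z2 - w3*z3."""
    composite = w1 * grad_z1 + w2 * grad_z2 - w3 * grad_z3
    res = linprog(-composite, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2)
    return res.x

x_opt = weighting_method(1, 6, 4)   # composite 33*x2; any point with x2 = 8 is optimal
```

Note that for weights 1, 6, 4 the composite objective involves only x2, so the solver may return any point on the face x2 = 8; degenerate weight choices like this are one reason the method can miss parts of the frontier.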


Fig. 3.5 Feasible set and improvement cones in the sample problem

Table 3.4 Using the weighting method to approximate the nondominated frontier

Weights w   Composite objective     Optimal solution x   Objective values z
1, 0, 0     Max z = 12x1 + 7x2      4, 6                 90, 38, 30
0, 1, 0     Max z = 2x1 + 5x2       2, 8                 80, 44, 20
0, 0, 1     Max z = –6x1 – 1x2      0, 2                 14, 10, 2
1, 1.5, 2   Max z = 3x1 + 12.5x2    2, 8                 80, 44, 20
3, 2, 1     Max z = 34x1 + 30x2     4, 6                 90, 38, 30
2, 4, 1     Max z = 26x1 + 33x2     2, 8                 80, 44, 20
1, 6, 4     Max z = 33x2            0, 8                 56, 40, 8
1, 1, 12    Max z = –58x1           0, 2                 14, 10, 2

Consider now the constraint method. We use the same example, in which we use the first objective as the only objective and transform the second and third objectives into constraints with varying bounds/achievement levels. Table 3.5 provides some of the results obtained with the constraint method. Note that we need to specify a lower bound on objective 2 (as we want to ensure that the solutions we generate have at least that achievement), while we need an upper bound on objective 3 (as we want to ensure that our solutions do not have a higher disutility than the one specified). Apply now the ideal point method to the same example. Suppose that we choose the ideal point as the best possible achievements on each of the respective objectives. Above, we have determined that z1 = 90, z2 = 44, and z3 = 2. The objective of the ideal point method with unweighted straight-line distances can be


Table 3.5 Solutions obtained with the constraint method

Lower bound on z2   Upper bound on z3   Optimal solution x     Objective value z1
30                  10                  0.3333, 8              60
30                  8                   0, 8                   56
35                  8                   0, 8                   56
35                  6                   No feasible solution   –
30                  6                   0, 6                   42
40                  20                  2, 8                   80
42                  26                  2.6667, 7.3333         83.3333
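The runs in Table 3.5 can be reproduced by appending the two objective bounds to the constraint set and solving a single-objective LP; a sketch assuming scipy is available (not code from the book):

```python
from scipy.optimize import linprog

def eps_constraint(lb_z2, ub_z3):
    """Maximize z1 = 12x1 + 7x2; z2 and z3 become constraints."""
    A_ub = [
        [1, 0], [0, 1], [1, 1], [1, -2],   # the original constraint set
        [-2, -5],                          # z2 = 2x1 + 5x2 >= lb_z2
        [6, 1],                            # z3 = 6x1 + x2  <= ub_z3
    ]
    b_ub = [4, 8, 10, -4, -lb_z2, ub_z3]
    res = linprog([-12, -7], A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2)
    if res.status != 0:                    # bounds 35 and 6, for instance
        return None, None                  # are mutually infeasible
    return res.x, -res.fun

x, z1 = eps_constraint(30, 10)             # first row of Table 3.5
```

Sweeping the two bounds parametrically and re-solving is exactly the sensitivity analysis the text calls for.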

represented graphically in the (z1, z2, z3)-space by the point (90, 44, 2); the surface of each sphere with this point at its center contains points with a given deviation from it. Consequently, the ideal point method will determine the smallest such sphere that touches a feasible point. The unweighted objective can then be written as

Min z = [(z1 – 90)² + (z2 – 44)² + (z3 – 2)²]^½.

Applied to our problem, it results in the solution x = (2, 8) with achievement levels of 80, 44, and 20, respectively. We can also use weights associated with the distances, i.e., multiply each term in the above objective by some weight chosen by the decision maker. Choosing weights of 10, 5, and 1, we obtain the objective function

Min z = [10(z1 – 90)² + 5(z2 – 44)² + 1(z3 – 2)²]^½

and the optimal solution x = (3.2813, 6.7188) with achievement levels of 86.4072, 40.1566, and 26.4066, respectively. Note that the higher weight (i.e., penalty on the deviation from the first objective) resulted in a significantly higher achievement on that objective. Consider now goal programming. In the single-level goal programming formulation, there are two tools for fine-tuning available to the modeler. First, there is the definition of the target values for the individual achievements we are trying to reach, and then there are the weights associated with the over- or underachievements. To illustrate by means of our example, suppose that we have identified target values of 80, 35, and 6 for the three objectives. The "goal constraints" can then be written as

12x1 + 7x2 + d1− − d1+ = 80,
2x1 + 5x2 + d2− − d2+ = 35, and
6x1 + 1x2 + d3− − d3+ = 6.

Here, the deviational variables dk+ and dk− measure the deviation of the actual achievement from the target value. For instance, if d2− were to equal 8, then the present solution underachieves the target of 35 by 8, i.e., the value of the second


Table 3.6 Solutions for different target values and weights with goal programming

Target values   Weights   Optimal solution x   Deviations d1−, d2−, d3+   Objective values z = (z1, z2, z3)
80, 35, 6       2, 4, 7   0, 8                 24, 0, 2                   56, 40, 8
50, 30, 10      2, 4, 7   0.6667, 6            0, 0, 0                    50, 31.33, 10
80, 30, 10      2, 4, 7   0.3333, 8            20, 0, 0                   60, 40.67, 10
85, 40, 5       9, 7, 1   3, 7                 0, 0, 20                   85, 41, 25
100, 50, 0      5, 1, 3   4, 6                 10, 12, 30                 90, 38, 30
90, 44, 2       1, 1, 1   2, 8                 10, 0, 18                  80, 44, 20

objective equals 27. Similarly, if, say, d3+ equals 2, then z3, the value of the third objective function associated with the present solution, equals 8. Suppose now that we associate weights of 2 and 4 with the underachievements under the first two target values (we try to push up the achievements), and a weight of 7 with the overachievement of the third target value (which we try to reduce as much as possible). The objective will then attempt to minimize the weighted average of these over- and underachievements. The complete formulation is then as follows:

P: Min z = 2d1− + 4d2− + 7d3+
s.t. 12x1 + 7x2 + d1− − d1+ = 80
     2x1 + 5x2 + d2− − d2+ = 35
     6x1 + 1x2 + d3− − d3+ = 6
     x1 ≤ 4
     x2 ≤ 8
     x1 + x2 ≤ 10
     –x1 + 2x2 ≥ 4
     x1, x2 ≥ 0.

Table 3.6 shows solutions for a number of combinations of target values and weights. Note that some combinations of target values cannot be reached, e.g., 100, 50, and 0. One possible addition is to add nonlinear penalty functions that penalize over- and/or underachievements by, e.g., squaring them. Consider now approaches for multi-attribute decision making problems, i.e., those problems that specify a payoff or attribute matrix with finite numbers of decisions and criteria. In order to illustrate the different techniques, we have chosen a small number of decisions and criteria. (It so happens that these decisions are solutions to the above problem, but that is not necessary; we have done

Table 3.7 Different decisions and their objective values for multi-attribute decision making problems

                 Criteria
Decision         z1      z2   z3
d1: 4, 6         90      38   30
d2: 2, 8         80      44   20
d3: 0, 8         56      40   8
d4: 0, 2         14      10   2
d5: 2.67, 7.33   83.33   42   39.33

this for the ease of comparison between MOP and MADM approaches.) Table 3.7 shows the solutions and their respective achievements. It may be desired to convert the actual outcomes such as profits, costs, customer satisfaction, and other criteria to a common measure. Typically, analysts divide each attribute by the sum of the attributes in its column, divide each attribute by the maximum attribute in its column, or normalize the attributes in a column between zero and one. As Tofallis (2014) pointed out in his summary, each of these procedures is conceptually problematic. Some of these normalization procedures may have undesirable features, such as rank reversal, etc. Notwithstanding these caveats, suppose we normalize the attributes in Table 3.7. In order to do so, we need some definitions. For each criterion, one at a time, define zi as the outcome of decision di on the criterion under consideration, let zmin denote the lowest outcome of any decision on that criterion, and define zmax as the highest outcome of any decision on this criterion. Note that so far, we do not make any distinction between criteria that are to be maximized and those to be minimized. The normalized attributes can be viewed as utilities. We can then construct a utility matrix U = (uij) by calculating the utilities u = (zi − zmin)/(zmax − zmin) for "maximization criteria" (i.e., those for which more is better) and u = (zmax − zi)/(zmax − zmin) for "minimization criteria" (i.e., those for which less is better). Note that we have ignored the subscript for the column, so as to avoid heavy notation. The utility is simply the difference between the value under consideration and the smallest (largest) value in the column for a maximization (minimization) criterion, divided by the range, i.e., the difference between the largest and smallest value in the column. The normalized matrix for our example is then

U =
| 1       .8235   .2499 |
| .8684   1       .5178 |
| .5526   .8824   .8393 |
| 0       0       1     |
| .9122   .9412   0     |
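The min-max normalization just defined can be sketched in a few lines of numpy; the outcomes are those of Table 3.7, and the matrix reproduces U above:

```python
import numpy as np

# Outcomes from Table 3.7: rows d1..d5; columns z1, z2 (more is better)
# and z3 (less is better).
Z = np.array([
    [90.00, 38, 30.00],
    [80.00, 44, 20.00],
    [56.00, 40,  8.00],
    [14.00, 10,  2.00],
    [83.33, 42, 39.33],
])
maximize = np.array([True, True, False])

z_min, z_max = Z.min(axis=0), Z.max(axis=0)

# Min-max normalization: (z - z_min)/range for maximization criteria and
# (z_max - z)/range for minimization criteria.
U = np.where(maximize, (Z - z_min) / (z_max - z_min),
                       (z_max - Z) / (z_max - z_min))
```

Every column now runs from 0 (the worst decision on that criterion) to 1 (the best), regardless of the criterion's original direction.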

Note that there are no dominances in the matrix. The simple additive weighting (SAW) method simply post-multiplies the utility matrix by a vector of weights,


Table 3.8 The simple additive weighting method with some weight combinations

                     Values = weighted average utilities
Weights w1, w2, w3   d1      d2      d3      d4      d5      Best decision
.5, .3, .2           .7970   .8378   .7089   .2000   .7385   d2
.4, .2, .4           .6647   .7545   .7332   .4000   .5531   d2
.2, .3, .5           .5966   .7326   .7949   .5000   .4648   d3
.3, .1, .6           .5323   .6712   .7576   .6000   .3678   d3
.2, .5, .3           .6867   .8290   .8035   .3000   .6530   d2
.3, .3, .4           .6447   .7676   .7662   .4000   .5560   d2

Table 3.9 The multiplicative exponential weighting (MEW) with some weight combinations

Weights w1, w2, w3   d1        d2        d3        d4       d5        Best decision
.5, .3, .2           14.3098   15.2889   14.9312   6.4992   13.4410   d2
.4, .2, .4           3.2122    3.7112    4.5546    3.4517   2.8515    d3
.2, .3, .5           1.3373    1.6716    2.3917    2.3917   1.1851    d3 and d4
.3, .1, .6           0.7211    0.9009    1.3893    1.8332   0.6050    d4
.2, .5, .3           5.4652    6.4869    7.5812    4.3543   5.2164    d3
.3, .3, .4           2.9469    3.4958    4.4039    3.3375   2.6627    d3

which expresses the decision maker's preferences. In other words, it computes the values of the decisions di as aggregated outcomes

vi = Σj uij wj.

The decisions can then be ranked on the basis of these values and the top decision is chosen. Table 3.8 shows the different sets of values associated with the five decisions given different sets of weights. The best possible outcomes in each row are indicated in boldface. It is apparent from the table that decisions d2 and d3 appear to be optimal for a wide range of weight combinations. Furthermore, even in cases in which they are not optimal, they rank a fairly close second. Consider now the multiplicative exponential weighting method MEW. Note that for this procedure, scaling is not necessary, so that we can work directly with the numbers in Table 3.7. We will use the formula

vi = ∏j∈C+ aij^wj / ∏j∈C− aij^wj,

Table 3.10 Solutions of the reference point method for Euclidean distances with different sets of weights

             Distance between ideal point and solution di
Weights      d1      d2      d3      d4      d5
.5, .3, .2   .3491   .2349   .3308   .8944   .4526
.4, .2, .4   .4809   .3161   .3052   .7746   .6354
.2, .3, .5   .5391   .3461   .2387   .7071   .7089
.3, .1, .6   .5837   .3804   .2774   .6325   .7763
.2, .5, .3   .4294   .2706   .2339   .8367   .5507
.3, .3, .4   .5644   .3134   .2730   .7746   .6351

where C+ is the set of "maximization" criteria (i.e., the more, the better), while C− is the set of "minimization" criteria (i.e., the less, the better). In our example, the first two criteria belong to C+, while criterion 3 belongs to C−. The results for some selected weight combinations are shown in Table 3.9. It is apparent that the additive and the multiplicative models result in rather different recommendations. This is largely due to the normalization. In order to work with the reference point method, we first need to define a reference point. Given the utilities, the point (1, 1, . . ., 1) appears obvious. The simplest version of the reference point method uses unweighted ℓ1 distances between the outcomes of a solution and the ideal point. Another version adds up the squared differences between the utilities of the decision under consideration and the ideal point. The squaring of the differences emphasizes large differences, which may be a desired effect: a good compromise solution should not perform very poorly on any of the criteria. Finally, we can calculate weighted Euclidean distances as a compromise. The results for our example given weighted Euclidean distances with the weights shown are displayed in Table 3.10, where again, the most preferred (i.e., shortest) distances are indicated in boldface. The results in Table 3.10 are very stable and d3 is the preferred alternative for a wide variety of weights. Suppose our task is now to solve the above example with outranking methods. For convenience, we repeat the utility matrix of our example. It was

              | 1       .8235   .2499 |
              | .8684   1       .5178 |
U = (uij) =   | .5526   .8824   .8393 |
              | 0       0       1     |
              | .9122   .9412   0     |

and as weights we use w1 = 0.3, w2 = 0.5, and w3 = 0.2. The first outranking method to be applied is ELECTRE. The simplest version of the ELECTRE class of methods constructs a concordance matrix C = (cik), so that

cik = (Σℓ: uiℓ > ukℓ wℓ) / (Σℓ wℓ).

In general terms, the element cik denotes the degree to which the decision maker prefers decision di over dk, while the subscript ℓ is used for the criteria. More specifically, an element cik of the concordance matrix is the sum of the weights of all those criteria for which the decision maker (strictly) prefers di over dk. As an example, in the above utility matrix, the decision maker prefers d2 over d3 on criteria 1 and 2 (but not 3), so that the concordance of d2 over d3 is the sum of the first two weights, viz., 0.8. The concordance matrix is then

C =
[ -   .3  .3  .8  .5
  .7  -   .8  .8  .7
  .7  .2  -   .8  .2
  .2  .2  .2  -   .2
  .5  .3  .8  .8  - ].

The next step is to construct a discordance matrix D = (d_ik). Again, we conduct pairwise comparisons of decisions on all criteria. For each comparison between decisions di and dk, we consider only criteria, for which di is strictly worse than dk. For each such criterion, we compute

d_ik = max_{ℓ: u_kℓ > u_iℓ} { w_ℓ (u_kℓ − u_iℓ) } / max_{s,t} max_ℓ { w_ℓ |u_sℓ − u_tℓ| },

where the subscripts s and t are simply iterators. The numerator computes the largest weighted difference in outcomes (i.e., the difference in outcomes on the criterion, multiplied by the weight). This discordance is then normalized by dividing it by the largest weighted difference among all criteria. As an example, the discordance of decision d3 under decision d5 is max {(0.9122 − 0.5526)(0.3), (0.9412 − 0.8824)(0.5)}/0.5 = 0.2158. The discordance matrix is then

D =
[ -      .1765  .2358  .3     .1177
  .0790  -      .1286  .1929  .0263
  .2688  .1895  -      .0643  .2158
  .8236  1      .8824  -      .9412
  .1     .2071  .3357  .4     - ].

Given the concordances and discordances, the decision maker needs to specify thresholds above/below which a concordance/discordance is considered relevant.
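The concordance and discordance computations can be sketched as follows; the threshold pair used for the arc test is one illustrative choice.

```python
# ELECTRE concordance/discordance for the example; rows of U are decisions d1..d5.
U = [
    [1.0,    0.8235, 0.2499],
    [0.8684, 1.0,    0.5178],
    [0.5526, 0.8824, 0.8393],
    [0.0,    0.0,    1.0],
    [0.9122, 0.9412, 0.0],
]
w = [0.3, 0.5, 0.2]
m = len(U)

# c_ik: sum of the weights of the criteria on which d_i strictly beats d_k
# (the weights already sum to one, so no further normalization is needed).
C = [[sum(wl for wl, ui, uk in zip(w, U[i], U[k]) if ui > uk)
      for k in range(m)] for i in range(m)]

# d_ik: largest weighted shortfall of d_i against d_k, normalized by the largest
# weighted difference between any two decisions on any criterion.
norm = max(wl * abs(U[s][l] - U[t][l])
           for l, wl in enumerate(w) for s in range(m) for t in range(m))
D = [[max((wl * (uk - ui) for wl, ui, uk in zip(w, U[i], U[k]) if uk > ui),
          default=0.0) / norm
      for k in range(m)] for i in range(m)]

# Outranking arcs for illustrative thresholds: concordance at least 0.6 and
# discordance at most 0.2 (a decision never outranks itself).
arcs = sorted((i + 1, k + 1) for i in range(m) for k in range(m)
              if i != k and C[i][k] >= 0.6 and D[i][k] <= 0.2)
```

With these thresholds, d2 outranks every other decision, consistent with the clearer picture described for the second outranking graph below.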


Fig. 3.6 Outranking graph for the ELECTRE problem with c̄ = 0.5 and d̄ = 0.3

Fig. 3.7 Outranking graph for the ELECTRE problem with c̄ = 0.6 and d̄ = 0.2

Suppose now that the decision maker has determined that c̄ = 0.5 and d̄ = 0.3. In the outranking graph, each decision is represented by a node, and there will be a directed arc from node di to node dk, if the concordance of di over dk is at least 0.5 and if the discordance of di under dk is at most 0.3. This leads to the graph shown in Fig. 3.6. While decision d2 can be seen to outrank most other decisions, there are simply too many arcs for a clear image to emerge. Changing the threshold values to c̄ = 0.6 and d̄ = 0.2 results in the graph in Fig. 3.7, which displays a much clearer image. Note that it is not necessary to convert the decision maker's assessment to utilities first, since concordances and discordances can also be derived from the payoff matrix, i.e., the decision maker's evaluations of the decisions on the given criteria.

Consider now Brans and Vincke's PROMETHEE method. Again, we will use the same example with the utility matrix shown above. In contrast to the ELECTRE method above, PROMETHEE considers one criterion at a time. Rather than using the simple difference between payoffs, it defines one of a variety of functions for each difference Δik between the payoffs of decisions di and dk. In other words, it compares attributes of decisions with each other to determine relative preferences, rather than comparing outcomes to an external yardstick. For instance, there are binary preferences, awarded when the payoff of one decision over another exceeds a given threshold. Alternatively, there are two thresholds, below which the preference of one decision over another is zero, between which it is some intermediate value (e.g., 0.5, but any


other value is fine, too), and above which the preference is total, i.e., one. Similarly, the preference between the two thresholds can be any linear or nonlinear function. Or the preference is expressed as some nonlinear, typically exponential, function. For the construction of the first preference matrix, we define the preference function p1_ik = 1 − 0.1e^(−3Δik), with negative values truncated to zero, where Δik denotes the difference between the outcomes of the ith and the kth decisions on the first criterion. These results are collected in the preference matrix P1:

P1 =
[ -      .9326  .9739  .9950  .9232
  .8516  -      .9612  .9926  .8860
  .6173  .7421  -      .9809  .7059
  0      0      .4752  -      0
  .8699  .9123  .9660  .9935  - ].
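The entries of P1 can be reproduced as follows. The cut-off at zero is our reading of the zeros in the printed matrix rather than an explicitly stated rule.

```python
import math

# First-criterion utilities of decisions d1..d5.
u1 = [1.0, 0.8684, 0.5526, 0.0, 0.9122]

# p1_ik = 1 - 0.1*exp(-3*delta) with delta = u1[i] - u1[k]; negative function
# values are truncated to zero; the diagonal is not used (None).
P1 = [[max(0.0, 1.0 - 0.1 * math.exp(-3.0 * (u1[i] - u1[k]))) if i != k else None
       for k in range(5)] for i in range(5)]
```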

For the second criterion, we have chosen a piecewise linear function. It is again based on Δik, the difference between two decisions di and dk, but this time evaluated on the second criterion. The preference function will be zero if Δik is below −0.15, it is linear with a preference of p = 0.6 + 4Δik between −0.15 and 0.1, and it equals one for differences above 0.1. The preference matrix for the second criterion is then

P2 =
[ -      0      .3644  1      .1292
  1      -      1      1      .8352
  0      .1296  -      1      .3648
  0      0      0      -      0
  1      .3648  .8352  1      - ].

Finally, for the third criterion, the preference for decision di over dk equals 0 if Δik < −0.2, it is 0.5 if Δik is between −0.2 and 0.2, and it is 1 if Δik exceeds 0.2. The preference matrix for the third criterion is then

P3 =
[ -   0   0   0   1
  1   -   0   0   1
  1   1   -   .5  1
  1   1   .5  -   1
  0   0   0   0   - ].
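Given the three preference matrices, the aggregation and flow computations described next can be sketched as follows (diagonal entries are marked None).

```python
# Aggregate the three PROMETHEE preference matrices with the criteria weights,
# then compute leaving (p+), entering (p-), and net flows.
w = [0.3, 0.5, 0.2]
P1 = [[None, .9326, .9739, .9950, .9232],
      [.8516, None, .9612, .9926, .8860],
      [.6173, .7421, None, .9809, .7059],
      [0.0, 0.0, .4752, None, 0.0],
      [.8699, .9123, .9660, .9935, None]]
P2 = [[None, 0.0, .3644, 1.0, .1292],
      [1.0, None, 1.0, 1.0, .8352],
      [0.0, .1296, None, 1.0, .3648],
      [0.0, 0.0, 0.0, None, 0.0],
      [1.0, .3648, .8352, 1.0, None]]
P3 = [[None, 0.0, 0.0, 0.0, 1.0],
      [1.0, None, 0.0, 0.0, 1.0],
      [1.0, 1.0, None, 0.5, 1.0],
      [1.0, 1.0, 0.5, None, 1.0],
      [0.0, 0.0, 0.0, 0.0, None]]
m = 5

P = [[None if i == k else w[0] * P1[i][k] + w[1] * P2[i][k] + w[2] * P3[i][k]
      for k in range(m)] for i in range(m)]

p_plus = [sum(P[i][k] for k in range(m) if k != i) / (m - 1) for i in range(m)]
p_minus = [sum(P[i][k] for i in range(m) if i != k) / (m - 1) for k in range(m)]
net = [a - b for a, b in zip(p_plus, p_minus)]
ranking = sorted(range(m), key=lambda i: -net[i])  # 0-based: d2, d5, d3, d1, d4
```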

The overall preference matrix P is then constructed as the weighted sum of the individual preference matrices, i.e., P = Σk wk Pk. In our example, we obtain


P =
[ -      .2798  .4744  .7985  .5416
  .9555  -      .7884  .7978  .8834
  .3852  .4874  -      .8943  .5942
  .2000  .2000  .2426  -      .2000
  .7610  .4561  .7074  .7981  - ].

The penultimate step is to calculate the vectors p+ and p−, which are the row averages and column averages, respectively. In some sense, they express concordances and discordances. For our example, we obtain p+ = [.5236, .8563, .5903, .2107, .6807] and p− = [.5754, .3558, .5532, .8222, .5548], respectively. For PROMETHEE I, the outranking graph has directed arcs aik whenever decision di has a higher p+ value and a lower p− value than decision dk. The resulting graph is shown in Fig. 3.8. While the graph provides only a partial order, PROMETHEE II results in a total order. This method simply computes p = p+ − p− = [−.0518, .5005, .0371, −.6115, .1259], which results in the preference order d2 ≻ d5 ≻ d3 ≻ d1 ≻ d4, which again confirms the superiority of decision d2.

Brauers's MOORA method starts with the usual matrix of evaluations/attributes A = (aij). It first calculates the normalized attributes a^N_ij = a_ij / √(Σ_i a_ij²), then defines J⁺ (J⁻) as the set of criteria to be maximized (minimized), and finally calculates the normalized value of decision di as

v(di) = Σ_{j∈J⁺} a^N_ij − Σ_{j∈J⁻} a^N_ij  ∀i = 1, . . ., m,

which then results in a ranking. Several extensions are available, some using TOPSIS. As an example, consider again the evaluation matrix shown in Table 3.7, which we restate here for convenience:

A:

      c1 (max)   c2 (max)   c3 (min)
d1     90         38         30
d2     80         44         20
d3     56         40          8
d4     14         10          2
d5     83.33      42         39.33

With √(Σ_i a_ij²) = [157.4053, 82.7285, 53.9893], we obtain the matrix


Fig. 3.8 Outranking graph for the PROMETHEE example

AN = (a^N_ij) =
[ .5718  .4593  .5557
  .5082  .5319  .3704
  .3558  .4835  .1482
  .0889  .1209  .0370
  .5294  .5077  .7285 ],

so that vN(di) = [.4754, .6697, .6911, .1728, .3086]. This indicates that d3 ≻ d2 ≻ d1 ≻ d5 ≻ d4.

Opricovic's VIKOR method uses the concepts of average and maximum individual regret and combines them into a unified criterion. In doing so, it uses a concept applied to decision making against nature by Hodges and Lehmann, as mentioned above. More specifically, the method first determines a⁺_j as the best and a⁻_j as the worst attribute of criterion j achieved by any of the decisions, i.e., a⁺_j = max_i {a_ij} and a⁻_j = min_i {a_ij} for criteria that are to be maximized, as well as a⁺_j = min_i {a_ij} and a⁻_j = max_i {a_ij} for criteria that are to be minimized. Defining w_j as the weight of the jth criterion, we can then determine M_i as the average (mean) deviation of a decision's achievement of a criterion from the best achievement of any decision on that criterion, i.e., the weighted average regret. It is formally defined as

M_i = Σ_j w_j (a⁺_j − a_ij) / (a⁺_j − a⁻_j)  ∀i.

Similarly, we can define the weighted highest individual deviation of the actual solution from the best achievement on a single criterion, i.e., the worst individual regret of a decision, as Ii, which is formally defined as

I_i = max_j { w_j (a⁺_j − a_ij) / (a⁺_j − a⁻_j) }.

Define now M⁺ = max_i {M_i} and M⁻ = min_i {M_i}, as well as I⁺ = max_i {I_i} and I⁻ = min_i {I_i}. Given weights v ∈ [0, 1] for the mean regret and (1 − v) for the individual regret, we can then define the unified regret U_i as

U_i = v (M_i − M⁻)/(M⁺ − M⁻) + (1 − v) (I_i − I⁻)/(I⁺ − I⁻).

At this point, we can rank the decisions with respect to M, I, and U, resulting in decisions d^M_1, d^M_2, . . ., d^M_m; d^I_1, d^I_2, . . ., d^I_m; and d^U_1, d^U_2, . . ., d^U_m. The method then tests whether the following two conditions are satisfied:

Condition (1): U(d^U_2) − U(d^U_1) ≥ 1/(m − 1) ("acceptable advantage," whose right-hand side is the average spacing of m facilities on a line segment with length 1), and
Condition (2): d^U_1 = d^I_1 = d^M_1 ("acceptable stability").

If only condition (2) is not satisfied, then designate the decisions d^U_1 and d^U_2 as candidates for further consideration. If condition (1) is not satisfied, then submit d^U_1, d^U_2, . . ., d^U_k as candidates for further consideration, so that k is the largest subscript, for which U(d^U_k) − U(d^U_1) < 1/(m − 1).
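Both the MOORA ranking above and the VIKOR steps can be traced on the Table 3.7 data. The VIKOR weights w = (0.3, 0.5, 0.2) and v = 0.5 used here are illustrative assumptions, not values taken from the text.

```python
import math

# Evaluation matrix of Table 3.7: c1, c2 are to be maximized, c3 minimized.
A = [
    [90.0,  38.0, 30.0],
    [80.0,  44.0, 20.0],
    [56.0,  40.0,  8.0],
    [14.0,  10.0,  2.0],
    [83.33, 42.0, 39.33],
]
maximize = [True, True, False]
m, n = len(A), len(A[0])

# --- MOORA: vector-normalize each column, add "max" and subtract "min" columns.
norms = [math.sqrt(sum(row[j] ** 2 for row in A)) for j in range(n)]
v_moora = [sum(row[j] / norms[j] * (1 if maximize[j] else -1) for j in range(n))
           for row in A]
moora_best = max(range(m), key=lambda i: v_moora[i])  # d3

# --- VIKOR: mean regret M, worst individual regret I, unified regret U.
w = [0.3, 0.5, 0.2]   # assumed criterion weights
v = 0.5               # assumed weight of the mean regret
best = [max(r[j] for r in A) if maximize[j] else min(r[j] for r in A) for j in range(n)]
worst = [min(r[j] for r in A) if maximize[j] else max(r[j] for r in A) for j in range(n)]
regret = [[w[j] * (best[j] - A[i][j]) / (best[j] - worst[j]) for j in range(n)]
          for i in range(m)]
M = [sum(r) for r in regret]
I = [max(r) for r in regret]
Ureg = [v * (M[i] - min(M)) / (max(M) - min(M))
        + (1 - v) * (I[i] - min(I)) / (max(I) - min(I)) for i in range(m)]
order = sorted(range(m), key=lambda i: Ureg[i])  # d2 comes first
```

Under these assumed weights, the runner-up lies within 1/(m − 1) of the winner, so the "acceptable advantage" test fails and a set of candidate decisions would be reported instead of a single winner.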
For the first criterion, we then obtain the comparison matrix

C1 =
[ 1    3    6    6    2
  1/3  1    5    6    1/2
  1/6  1/5  1    6    1/5
  1/6  1/6  1/6  1    1/6
  1/2  2    5    6    1 ].

Using the same preferences for the second criterion, we obtain the comparison matrix

C2 =
[ 1    1/3  1/2  6  1/3
  3    1    3    6  2
  2    1/3  1    6  1/2
  1/6  1/6  1/6  1  1/6
  3    1/2  2    6  1 ],


and with the same preferences for the third criterion, we obtain the comparison matrix

C3 =
[ 1    1/4  1/6  1/6  4
  4    1    1/5  1/6  6
  6    5    1    1/3  6
  6    6    3    1    6
  1/4  1/6  1/6  1/6  1 ].

Finally, the decision maker needs to compare the criteria with each other. Using the same scale as before, we obtain the comparison matrix

C =
[ 1    1/3  2
  3    1    4
  1/2  1/4  1 ].

The next step is to compute the normalized comparison matrices C̃k, k = 1, 2, 3, as well as C̃. This is achieved by dividing each element in a matrix by its column sum. The results are

C̃1 =
[ .4615  .4712  .3495  .2400  .5172
  .1538  .1571  .2913  .2400  .1293
  .0769  .0314  .0583  .2400  .0517
  .0769  .0262  .0097  .0400  .0431
  .2308  .3141  .2913  .2400  .2586 ],

C̃2 =
[ .1091  .1428  .0750  .2400  .0833
  .3273  .4286  .4500  .2400  .5000
  .2182  .1428  .1500  .2400  .1250
  .0182  .0714  .0250  .0400  .0417
  .3273  .2143  .3000  .2400  .2500 ],

C̃3 =
[ .0580  .0201  .0368  .0909  .1739
  .2319  .0805  .0441  .0909  .2609
  .3478  .4027  .2206  .1818  .2609
  .3478  .4832  .6618  .5455  .2609
  .0145  .0134  .0368  .0909  .0435 ],

and

C̃ =
[ .2222  .2105  .2857
  .6667  .6316  .5714
  .1111  .1579  .1429 ].

ek We now construct the attribute matrix U, by including the row averages of matrix C e which and using them in the kth column of A, and the row average of the matrix C, are used in a weight vector w. In our example, we obtain 2

:4079 6 6 :1943 6 U=6 6 :0917 6 4 :0392 :2670

:1300 :3892 :1752 :0393 :2663

3 :0759 7 :1417 7 7 :2828 7 7 7 :4598 5 :1991

and w = ½:2395 :6232 :1373005D:
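The full AHP aggregation (column-normalize, row-average, combine with the criteria weights) can be sketched as follows; small last-digit differences from the printed tables are possible due to rounding.

```python
from fractions import Fraction as F

# Pairwise comparison matrices for the three criteria and for the criteria themselves.
C1 = [[1, 3, 6, 6, 2], [F(1, 3), 1, 5, 6, F(1, 2)], [F(1, 6), F(1, 5), 1, 6, F(1, 5)],
      [F(1, 6), F(1, 6), F(1, 6), 1, F(1, 6)], [F(1, 2), 2, 5, 6, 1]]
C2 = [[1, F(1, 3), F(1, 2), 6, F(1, 3)], [3, 1, 3, 6, 2], [2, F(1, 3), 1, 6, F(1, 2)],
      [F(1, 6), F(1, 6), F(1, 6), 1, F(1, 6)], [3, F(1, 2), 2, 6, 1]]
C3 = [[1, F(1, 4), F(1, 6), F(1, 6), 4], [4, 1, F(1, 5), F(1, 6), 6],
      [6, 5, 1, F(1, 3), 6], [6, 6, 3, 1, 6], [F(1, 4), F(1, 6), F(1, 6), F(1, 6), 1]]
Ccrit = [[1, F(1, 3), 2], [3, 1, 4], [F(1, 2), F(1, 4), 1]]

def priorities(matrix):
    """Normalize each column to sum 1, then average across each row."""
    n = len(matrix)
    col_sums = [sum(row[j] for row in matrix) for j in range(n)]
    return [float(sum(row[j] / col_sums[j] for j in range(n))) / n for row in matrix]

w = priorities(Ccrit)
per_criterion = [priorities(C1), priorities(C2), priorities(C3)]
scores = [sum(w[k] * per_criterion[k][i] for k in range(3)) for i in range(5)]
ranking = sorted(range(5), key=lambda i: -scores[i])  # d2, d5, d1, d3, d4
```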

We then obtain the final "quality measurements" of the individual decisions in the vector UwT = [.1891, .3085, .1700, .0970, .2572], which results in the ranking of the decisions as d2 ≻ d5 ≻ d1 ≻ d3 ≻ d4, which is quite similar to the rankings found with the VIKOR method, albeit quite different from the MOORA technique.

At this point, we would like to introduce a tool that is frequently applied to visualize the payoffs of different decisions. This spider plot, also referred to as radar plot or web plot, works with any number of axes rather than just two. These m axes, one for each criterion, are positioned at 360/m degrees from each other, and the attributes of each of the decisions are then plotted on the respective axes. To complete the plot, neighboring points that belong to one decision are then connected with each other, which is then repeated for all decisions. The result is one polygon for each decision. This provides a good visualization of the differences between the outcomes of the decisions in the model. As a numerical illustration, consider the following example.

Example: Consider the attribute matrix

:9 6 A = 4 :5

:7 :9

3 :5 7 :3 5:

:7

:3

:9

The spider plot is then shown in Fig. 3.9. A further aggregation of the informational content can be achieved by calculating the area that is associated with the polygon of each of the decisions. However, the shapes and volumes associated with the decisions very much depend on a number of

68

3

Multicriteria Decision Making

Fig. 3.9 Spider plot for sample problem

Fig. 3.10 Spider plots for decision makers with the same numerical assessments assigned to different criteria

factors, among them the order in which the criteria are plotted. For instance, consider a product that has six criteria, and its attributes are 80, 90, 70, 20, 10, and 30. Assigning these values to the six axes, starting with the abscissa (the line leading from the black dot to the right), and continuing in counterclockwise fashion, the spider plot looks like the one in Fig. 3.10a. Suppose now that another decision maker has the very same numerical assessments but assigns them to different criteria. In particular, in our numerical example, the order of the criteria is changed to 80, 20, 90, 10, 70, and 30. Given that, we then obtain the plot shown in Fig. 3.10b. The shapes and the areas within the polygons are obviously very different from each other. A few words of caution regarding spider plots are put forward in Data to viz (undated).
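The order dependence is easy to quantify: on a radar chart with m equally spaced axes, the polygon is a union of triangles between neighboring axes, so reshuffling the same values changes the enclosed area.

```python
import math

def radar_area(values):
    """Area of the radar-chart polygon: neighboring axes are 2*pi/m apart, and each
    pair of adjacent values spans a triangle of area 0.5*v_i*v_{i+1}*sin(2*pi/m)."""
    m = len(values)
    s = math.sin(2 * math.pi / m)
    return 0.5 * s * sum(values[i] * values[(i + 1) % m] for i in range(m))

area_a = radar_area([80, 90, 70, 20, 10, 30])  # ordering as in Fig. 3.10a
area_b = radar_area([80, 20, 90, 10, 70, 30])  # reshuffled ordering of Fig. 3.10b
# Same six numbers, yet the enclosed areas differ markedly.
```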


For those interested in software related to multicriteria decision making methods, we refer to the International Society on MCDM (undated) that lists a number of programs.

References

B. Aouni, O. Kettani, Goal programming model: a glorious history and a promising future. Eur. J. Oper. Res. 133(2), 225–231 (2001)
J.P. Brans, P. Vincke, A preference ranking organization method. Manag. Sci. 31, 647–656 (1985)
W.K.M. Brauers, Location theory and multi-criteria decision making: an application of the MOORA method. Contemp. Econ. 12(3), 241–252 (2018)
A. Charnes, W.W. Cooper, Management Models and Industrial Applications of Linear Programming (Wiley, New York, 1961)
A. Charnes, W.W. Cooper, R. Ferguson, Optimal estimation of executive compensation by linear programming. Manag. Sci. 1, 138–151 (1955)
A. Charnes, W.W. Cooper, E. Rhodes, Measuring efficiency of decision making units. Eur. J. Oper. Res. 2, 429–444 (1978)
G. Chiandussi, M. Codegone, S. Ferrero, F.E. Varesio, Comparison of multi-objective optimization methodologies for engineering applications. Comput. Math. Appl. 63(5), 912–942 (2012)
E.U. Choo, B. Schoner, W.C. Wedley, Interpretation of criteria weights in multicriteria decision making. Comput. Ind. Eng. 37, 527–541 (1999)
M. Cinelli, M. Kadziński, M. Gonzalez, R. Słowiński, How to support the application of multiple criteria decision analysis? Let us start with a comprehensive taxonomy. Omega 96 (2020). Available online as journal pre-proof via https://reader.elsevier.com/reader/sd/pii/S0305048319310710?token=C1E18D92EEE24AA8254223F856023CBCAEC2589A35582AB18EDB7BEC3D88DDC92974E792782AF14015C63C2A3CB5C4F4, last accessed on 9/15/2022
J.L. Cohon, Multiobjective Programming and Planning (Academic, New York, 1978)
Data to viz, The Radar Chart and Its Caveats (undated). Available online at https://www.data-to-viz.com/caveat/spider.html, last accessed on 9/21/2022
O. DeWeck, Multiobjective Optimization: History and Promise. Invited keynote paper at the Third China-Japan-Korea Joint Symposium on Optimization of Structural and Mechanical Systems (2004). Available online at http://strategic.mit.edu/docs/3_46_CJK-OSM3-Keynote.pdf, last accessed on 10/6/2022
P.H. Douglas, The Cobb-Douglas production function once again: its history, its testing, and some new empirical values. J. Polit. Econ. 84(5), 903–915 (1976). Available online at http://ecocritique.free.fr/douglas1976.pdf, last accessed on 10/15/2022
L. Duckstein, S. Opricovic, Multiobjective optimization in river basin development. Water Resour. Res. 16(1), 14–20 (1980)
W. Edwards, Social utilities. Eng. Econ. Summer Symposium Series 6, 119–129 (1971)
W. Edwards, How to use multiattribute utility measurement for social decisionmaking. IEEE Trans. Syst. Man Cybern. SMC-7(5), 326–340 (1977)
H.A. Eiselt, C.-L. Sandblom (eds. & authors), Decision Analysis, Location Models, and Scheduling Problems (Springer, Berlin, 2004)
J. Figueira, S. Greco, M. Ehrgott, Multiple Criteria Decision Analysis: State of the Art Surveys (Springer Science+Business Media, New York, 2005)
H. Gaspars-Wieloch, The impact of the structure of the payoff matrix on the final decision made under uncertainty. Asia-Pac. J. Oper. Res. 35(1), 1850001 (27 pages) (2018). Available online at https://www.worldscientific.com/doi/pdf/10.1142/S021759591850001X, last accessed on 9/15/2022
M. Hamdy, A.T. Nguyen, J.L. Hensen, A performance comparison of multi-objective optimization algorithms for solving nearly-zero-energy-building design problems. Energ. Buildings 121, 57–71 (2016)
C.-L. Hwang, K. Yoon, Multiple Attribute Decision Making: Methods and Applications. A State-of-the-Art Survey (Springer, Berlin, 1981)
J.P. Ignizio, Goal Programming and Extensions (Lexington Books, Lexington, MA, 1976)
International Society on MCDM, Software Related to MCDM (undated). Available online at https://www.mcdmsociety.org/content/software-related-mcdm-0, last accessed on 9/15/2022
H.W. Kuhn, A.W. Tucker, Nonlinear programming, in Proceedings of the Second Berkeley Symposium on Mathematical Statistics and Probability, vol. 2, ed. by J. Neyman, 1950 (1951). Available online at https://projecteuclid.org/proceedings/berkeley-symposium-on-mathematical-statistics-and-probability/proceedings-of-the-second-berkeley-symposium-on-mathematical-statistics-and-probability/toc/bsmsp/1200500213, last accessed on 10/9/2022
O.I. Larichev, A method for evaluating R & D proposals in large research organizations. Collaborative Paper CP-82-75, IIASA (1982)
O.I. Larichev, Y.A. Zuev, L.S. Gnedenko, Method for classification of applied R & D projects, in Perspectives in Applied R & D Planning, ed. by S.V. Emelyanov (Nauka Press, 1974), pp. 28–57 (in Russian)
L.I. Larichev, D.L. Olson, Multiple Criteria Analysis in Strategic Siting Problems (Kluwer Academic, Boston, 2001)
S.M. Lee, Goal Programming for Decision Analysis (Auerbach, Philadelphia, 1972)
J. Niehans, Zur Preisbildung bei ungewissen Erwartungen. Schweizerische Zeitschrift für Volkswirtschaft und Statistik 84(5), 433–456 (1948)
D.L. Olson, Decision Aids for Selection Problems (Springer, New York, 1996)
S. Opricovic, G.H. Tzeng, Compromise solution by MCDM methods: a comparative analysis of VIKOR and TOPSIS. Eur. J. Oper. Res. 156(2), 445–455 (2004)
S. Opricovic, G.H. Tzeng, Extended VIKOR method in comparison with outranking methods. Eur. J. Oper. Res. 178(2), 514–529 (2007)
B. Roy, Problems and methods with multiple objective functions. Math. Program. 1, 280–283 (1971)
B. Roy, B. Sussman, Les problèmes d'ordonnancement avec contraintes disjonctives. Note DS no. 9 bis, SEMA, Montrouge (1964)
J. Rui Figueira, S. Greco, B. Roy, R. Słowiński, ELECTRE methods: main features and recent developments, in Handbook of Multicriteria Analysis, Applied Optimization, vol. 103, ed. by C. Zopounidis, P. Pardalos (Springer, Berlin, 2010)
T.L. Saaty, The Analytic Hierarchy Process (McGraw-Hill, New York, 1980)
L.J. Savage, The theory of statistical decision. J. Am. Stat. Assoc. 46, 55–67 (1951)
C. Tofallis, Add or multiply? A tutorial on ranking and choosing with multiple criteria. INFORMS Trans. Educ. 14(3), 109–119 (2014)
P.L. Yu, M. Zeleny, The set of all nondominated solutions in linear cases and a multicriteria simplex method. J. Math. Anal. Appl. 49(2), 430–468 (1975)
S.H. Zanakis, A. Solomon, N. Wishart, S. Dublish, Multi-attribute decision making: a simulation comparison of select methods. Eur. J. Oper. Res. 107, 507–529 (1998)
M. Zeleny, Linear multiobjective programming, in Lecture Notes in Economics and Mathematical Systems, vol. 95 (Springer, Berlin, 1974)

Chapter 4: Location Models

4.1 The Basic Setting

Location problems occur all around us: finding a new location for a warehouse, a location for a fast-food outlet, a newspaper transfer point, an ambulance, or any other of a myriad of facilities that are to be sited. All location problems, regardless of how different they may sound or actually are, have four components in common: space, distance, customers, and facilities. As far as space goes, we either work in an n-dimensional space or on a network. In the case of an n-dimensional space, most often we work in two dimensions, e.g., on a surface such as a map. However, locations in three-dimensional space may also be considered, e.g., when locating satellites for the transmission of information, or satellites whose task is to destroy incoming objects that are considered threatening to a country. Once the space has been determined, the next step is to choose a distance function. When discussing location models, distances should be understood as disutilities. Typically, in many location models, distances represent costs, the cost of transporting physical shipments from one place to another. However, the concept is very general and applies also to other types of disutilities. For instance, the spatial separation of an individual's home and, say, a library also represents a disutility. Even if there are no costs involved, when the individual concerned needs to walk for any distance, that detracts from his enjoyment of the facility. All of this assumes that long distances are detractors from good solutions. This implies that the facility we are about to locate is considered desirable, so that customers would like to have the facility close by (or, alternatively, decision makers at the facility would like customers to be close to them). On the other hand, there are undesirable facilities such as welding shops, crematories, prisons, and similar facilities. Traditionally, location scientists distinguished between noxious and obnoxious facilities.
© Springer Nature Switzerland AG 2023
H. A. Eiselt et al., Multicriteria Location Analysis, International Series in Operations Research & Management Science 338, https://doi.org/10.1007/978-3-031-23876-5_4

One could argue that noxious facilities can be dealt with by putting limits on the polluting and other ill effects and


deal with them as constraints, while obnoxious facilities could have been dealt with by an appropriate objective. However, modelers appeared to have used the terms interchangeably, and the more general term “undesirable” stuck. In practice, the general sentiment NIMBY (“not in my back yard”) is well known, even though the more extreme version BANANA (“build absolutely nothing anywhere near anything”) seems to have taken over (see, e.g., The Economist 2022). In reality, most facilities have desirable and undesirable aspects. Take, for instance, a supermarket. There is little doubt that most people will consider a supermarket in close proximity to their residence a benefit, as it allows them to purchase foods without making extended special trips. On the other hand, the same supermarket will receive deliveries in the middle of the night, which tends to be noisy and is a decidedly undesirable aspect. Prisons are similar in that they combine desirable and undesirable aspects: most people would like such facilities “somewhere else,” while the employees of prisons would prefer not having to drive too far to their place of work and relatives of inmates may (or may not) want to be close to the facility for potential (conjugal) visits. Similarly, while a polluting facility may be highly undesirable to the population living in its vicinity, the owner of the facility might say “Son, that smells like money to me” (Abbey 1991). Measuring distances in the space under consideration is crucial as it measures the degree of utility or disutility a decision maker wants to express. If we are looking at a two-dimensional map, the easiest way to measure the distance between two points is to use a straight line. Such straight-line (or Euclidean) distances are suitable for transportation modes such as emergency helicopters or Coast Guard vessels. Another distance measure allows movements only parallel to the axes at right angles. 
Such Manhattan or rectilinear distances can be suitable in the context of inner-city traffic in places such as Manhattan, Washington, DC, or parts of central Phoenix, Arizona. The case of traffic networks is somewhat different. For any two points on a traffic network, almost all location models will apply the shortest distance measured over the network in their model. The reason is that most transportation (assuming rational planners with an objective that emphasizes economy) will attempt to bridge space in the fastest or most cost-efficient manner. In the case of, say, ambulances, this would refer to the fastest route; for garbage collection trucks, it would be the shortest route, etc. Very few models take into account issues in traffic assignment, which do consider congestion in networks. The third component of location models is customers. They represent entities that have fixed locations in the space under consideration. In addition, they also represent demand, which is typically referred to as "weight." Usually, these weights are assumed to be deterministic, which is typically an approximation of the more realistic probabilistic weights. They also may depend on the facilities that customers somehow interact with. Furthermore, in some applications, especially in the context of retail, customers may have a utility function, which determines the quantity they will purchase. Finally, there are facilities. They are entities that are to be located in the space under consideration. Typically, facilities provide a service or a good, and in doing so, they interact with customers. This may either be a service for which customers


travel to the facilities (as is the case for retail facilities), or a service, which the facilities take to the customers (such as fire protection or other first responders). In addition to facility—customer interaction, there may also be interactions between the facilities themselves.
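The two planar distance measures discussed in this section can be sketched as follows; the sample coordinates are purely illustrative.

```python
import math

# Euclidean ("as the crow flies") vs. rectilinear (Manhattan) distance in the plane.
def euclidean(p, q):
    return math.hypot(q[0] - p[0], q[1] - p[1])

def rectilinear(p, q):
    return abs(q[0] - p[0]) + abs(q[1] - p[1])

customer, facility = (1.0, 2.0), (4.0, 6.0)
d_air = euclidean(customer, facility)      # suitable for, e.g., helicopters or vessels
d_street = rectilinear(customer, facility) # suitable for a rectangular street grid
```

The rectilinear distance is never shorter than the Euclidean distance between the same two points, which is why the choice of metric can change a location model's recommendation.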

4.2 The Main Features of Location Problems

Having delineated the major components of all location problems, we will now investigate some of the details of how these elements interact and what other components may be present in the location decision. Among the most obvious decisions is the number of facilities that are to be located, typically denoted by p. Often, this decision has already been made in advance, in which case the number of facilities is known when location decisions are made. If this is not the case, an explicit budget constraint can be included that puts an upper bound on the value of p. Alternatively, the number of facilities is decided together with their locations, by considering the opening cost of the facilities and its trade-off with other objectives of the problem, thus clearly complicating the problem. In most practical problems, p is a small single-digit number, often p = 1. The next feature of the model is whether the model is continuous or discrete. In continuous models, facilities can be sited anywhere in the space under consideration, whereas in discrete models, there is a finite (albeit often very large) number of possible locations for the facilities. Both types of models exist in the plane as well as in network settings. In the case of network locations, often the nodes of the network are considered to be the possible locations of the facilities. However, when locating, say, tow trucks, locations along stretches of highway and not just at interchanges or intersections will often be considered. Often, continuous models are used on the macro scale to obtain a general idea of the region in which a facility should be located, before zooming in and constructing a discrete model in that region with potential facility locations enumerated. The third question to ask: is the problem competitive? Other things being equal, competitive models are much more complicated than noncompetitive models.
Simply speaking, in competitive models the decision maker will take possible reactions by other competitors into consideration, whereas in noncompetitive models, the other competitors are assumed to continue without reacting. Assuming that competitors react will lead to so-called bilevel optimization problems, in which the decision makers at one facility, the leader, will act, assuming that his competitor(s), the followers, will react to the new situation by doing what is best for them. The economist Hotelling (1929) investigated stability issues in competitive location problems, while the economist von Stackelberg (1943) was the first to formally describe leader–follower problems. There is, of course, a wide world of decisions to be made that influence the location decision. There has been much discussion about whether these features should be


integrated with the location decision and made at the same time, or whether the decision maker should adopt a "locate first, plan the other things later" attitude. One of the reasons to decouple the location decision from other related decisions is that while the location of a facility is a strategic problem that, while not irreversible, will almost always be very costly to revise, other features of the facility may be adjusted much more rapidly and easily. As an example, consider the pricing decision. While the general pricing level is a tactical decision (open a discount store or a designer outlet), the actual pricing decision is on the operational level and can be changed at any time. As an example, take the price of gasoline, which is often changed multiple times in a day, or the price of airline tickets, which changes based on the demand for specific flights (revenue management). Another obvious feature is the capacity of a facility to be established. The capacity will drive not only the number of customers that can be served, but also the costs. Its discussion will, for instance, answer the question whether to construct a small number of large facilities or a large number of smaller facilities. Another, less tangible, albeit very important, feature is the attractiveness of a facility. Many researchers use just a single parameter to express this "attractiveness." The parameter expresses many of the facility's features, such as the size of a facility (in the context of retail facilities, this could be a proxy for the variety of goods it offers), the friendliness and knowledgeability of the staff (again in the context of retail facilities), the availability of (free) parking, the accessibility of the facility via public transport, positive online reviews (which are not directly under the control of the decision maker), and others. A very important feature for a facility to locate is its interactions with other economic agents.
Upstream, there are the suppliers, on the same level are the competitors, and downstream are the customers. First, we have to deal with the relations between the facilities and their suppliers. Relevant issues include whether the suppliers, e.g., production facilities, have limiting capacities, what their pricing policies are, and whether their facilities are in close proximity in case problems with the product have to be ironed out. Again, distance is a natural measure, but so is the question of whether or not routing is required. As a matter of fact, routing issues have spawned a special field in location analysis, referred to as "location-routing problems." Secondly, consider the firm's competitors. What is the distance between a facility and its competitors (is there an agglomeration of facilities by different competitors as often observed, e.g., in the context of fast-food outlets)? What is the relative attractiveness of a facility as compared to that of the competitors of the industry under consideration? Finally, there are the customers. This aspect is of paramount importance in the context of retail facilities. Here, the most obvious relation to customers is, of course, their proximity to the facility. However, many other aspects are to be taken into consideration as well. For instance, do customers make special trips to the facility when making their purchases? Do they engage in multipurpose shopping? In such a case, a facility's decision to agglomerate with other similar outlets may be a worthwhile consideration. Do the customers comparison shop? Here, the distance to be kept from other competitors will depend on how similar the products and the price decisions by all agents are. Agglomeration of facilities may and usually does

4.3

The Basic Models and Their Extensions

75

increase the collective market, while it also increases the level of competitiveness and with it potentially decreases prices. Restrictions on the facilities will define feasible sets for the location of the facilities. Good examples are hard constraints, such as zoning laws, regulations regarding the location of facilities that deal with hazardous materials, and similar rules, laws, and bylaws. Practical examples include the many regulations that involve the location of landfills, such as restrictions concerning water supplies, natural features, public buildings, and many others. Soft constraints regarding restrictions may also exist. They are not based on laws, but on desirable features. For instance, locating a strip club in relative proximity of a residential area may not be prohibited, but it most certainly makes for ill will and potential lawsuits. Desirable features, in addition to the aforementioned proximity to suppliers and customers, may include proximity to a paved road, which would limit the amount of new construction required when accessing the new facility. Another desirable feature would be the availability of sufficient quality and quantity of the required workforce. Further desirable features include potential government subsidies (either in the form of actual cash payments, in-kind payments, worker training, etc.), agreeable labor laws, and similar features. All of the above considerations are used to outline a feasible set or, alternatively, allow the decision maker to formulate the objective function(s). Finally, an often-overlooked part of location analysis deals with the “de-location” of facilities. This process is employed whenever a decision maker wants to close one or more of the existing facilities, while minimizing the level of service or other damage by the deletion of facilities. Technically, the process is fairly similar to the original optimization problem, and usually actually much simpler. 
Details are shown in the “making it work” section of this chapter. Table 4.1 summarizes some of the major features of location models.

4.3 The Basic Models and Their Extensions

This section will first describe some of the basic models, followed by an examination of some possible extensions. They are formulated in a somewhat unusual way, viz., by concentrating on the meaning of the objective function and the constraints rather than the mathematical formalism. This is followed by short descriptions of some mixed forms of the objectives. As usual, mathematical details are relegated to the second part of the chapter. In this book, we will adopt the framework that distinguishes between three distinct types of objectives, viz., pull, push, and balance, as advocated by Eiselt and Laporte (1995). In models with “pull” objectives, customers exert a pull force on the facility towards themselves, clearly indicating that this is a desirable facility that customers want to be close to. On the other hand, in the case of “push” objectives, customers push facilities away from themselves, indicating that undesirable facilities are to be located. Finally, there is a somewhat odd class of objectives, in which some feature related to the facility is to be balanced with the same feature of other facilities. For instance, when locating a number of libraries, planners may want to ensure that each facility is more or less equally accessible to (potential) patrons. Many authors have dubbed balancing objectives “equity objectives.” This is a misnomer, as just about all of these objectives do not maximize equity but equality. For a pertinent discussion, see again the aforementioned reference by Eiselt and Laporte (1995). While a large number of problems with balancing objectives are found in the public sector (e.g., choosing or supporting locations that ensure an equal level of employment), problems with balancing objectives are also at home in the private sector. Examples are a more or less equal distribution of the workload among employees (a proxy for maximizing employee satisfaction), or a design of sales territories that ensures similar sales potential for the salesmen in a company.

Table 4.1 Features of location models

Feature                                 Options
Space                                   Euclidean; network
Location where?                         Continuous; discrete
Distances                               Minkowski; network, shortest; network, congestion
Number of facilities                    Fixed number; endogenous
Nature of facilities                    Desirable; undesirable
Number of independent decision makers   1 (central planning for all facilities); >1 (competitive)
Customer behavior                       Fixed demand; proximity-dependent; facility-dependent

4.3.1 Problems with “Pull” Objective

The first class of models in the “pull” category optimizes a function of features of the facilities or the customers, rather than an explicit function of the distances. In the wide sense, problems of this nature are referred to as capture problems. Loosely speaking, a customer is considered captured by a facility if the distance between the customer and the facility is less than a prespecified distance δ. This distance δ is usually referred to as the covering distance or service standard, and it is exogenously set by the decision maker or prescribed by law. Covering models in the context of location analysis were first introduced by Hakimi (1965), and one version, the so-called location set covering problem (LSCP), was first formulated as a linear integer optimization problem by Toregas et al. (1971). The main idea of the problem is to locate the smallest number of facilities that are still able to cover all customers. The minimization of the number of facilities is essentially a proxy expression that minimizes the expenditures required to provide a service (i.e., cover all customers), given that all locations are equally expensive. Semi-formally, we can write the problem as follows.

P_LSCP: Min the number of facilities to be located
s.t. all customers are covered.

Dropping the equal-cost assumption and introducing individual location costs, we can then solve a minor variation of the LSCP with the objective of minimizing actual location costs. One of the limitations of the location set covering model is the requirement to cover everybody. Often, such a 100% service level will not be feasible. This was recognized by Church and ReVelle (1974), who described the max cover model. The main idea of this formulation is to cover (or capture) as many customers as possible with a given number of facilities, i.e., with a given budget. In other words, this formulation maximizes a function of the weights of the customers. A more formal description is this.

P_max-cover: Max the demand that is covered within the prescribed distance δ
s.t. locate no more than p facilities.

Applications of max cover models are found in many different contexts. An important application is that of emergency facilities. For instance, when locating hospitals, it is important to ensure that patients arrive at the hospital within half an hour of the incident (the “golden half hour”), as this dramatically increases their chances of survival. The idea is then to locate a health care facility to ensure that as many potential patients as possible are within that distance from the facility. For a pertinent discussion in the context of a location model, see, e.g., Burkey et al. (2012). Other service facilities, such as libraries, public pools, etc., use a similar model in order to ensure that as many people as possible can find their service within a given distance or travel time. Two extensions of max cover problems can be thought of. One concerns the de-location problem that optimally removes a predetermined number of facilities from the existing (optimal or non-optimal) solution, while still covering as many customers as possible. On the other hand, there are conditional location problems that keep the facilities that have already been located and optimally add a predetermined number of facilities to the solution (optimal expansion). Yet another version of the conditional problem has the existing facilities being operated by one chain of facilities, while a new chain attempts to enter the market and optimally locate its own facilities. From a computational point of view, all three aforementioned problems are easy to solve. However, this problem cannot be formulated without specifying what happens to customers that are within reach of multiple competing facilities. One of the shortcomings of the model is the concept of cover. 
In its original form, any customer whose distance to a facility is smaller than δ (regardless of how much smaller) is considered covered, while any customer that is farther from its closest facility than δ (again, no matter how much farther) is considered not covered. Clearly, this distinction is somewhat arbitrary. Church and Roberts (1983) were the first to introduce what became known as the gradual covering problem. In it, the covered/not-covered dichotomy is replaced by a variety of functions that express the degree of coverage. Another extension of the basic model is a formulation that includes backup coverage; see, e.g., Daskin and Stern (1981). The main idea is to guard against congestion and the possibility of non-availability of service from a facility. Another way to deal with congestion is an approach referred to as expected coverage. It was first described by Daskin (1983) and rests on some rather strong assumptions. The second class of models attempts to minimize the disutility that results from the spatial separation of facilities and customers (plus potentially other features outlined below). This class includes the most popular and most frequently used location model, the p-median problem. This problem will locate p facilities in order to minimize the total customer-facility distance (which, since the total demand is constant, is proportional to the average facility-customer distance), given that customers are served by their respective closest facilities. Formally, we can write:

P_p-median: Min the average customer-facility distance
s.t. locate exactly p facilities and satisfy the demands of all customers.

It should be noted that the objective function will arrange allocations so that each customer is served by his closest facility; there is no need to specifically require this. However, if the objective function changes, this feature may no longer apply. Using, as usual, the weights as customer demand and the distances as a proxy for transportation costs, the p-median objective, if applied to the private sector, will minimize the transportation cost. In the case of public facilities, the weights may symbolize the number of potential patrons of a facility, and the objective will then minimize the average customer-facility distance, a typical proxy for accessibility. The wide applicability of this model explains its appeal to modelers. Clearly, the de-location version and the conditional location version of the problem are again available and comparatively easy to solve. It is worth noting that the p-median problem, when it is used in other fields, e.g., to determine optimal clusters of objects, is sometimes referred to as “k-means.” Another, closely related, formulation is the simple plant location problem (SPLP), also referred to as the UPLP (uncapacitated plant location problem). In contrast to the p-median problem, it does not prespecify the number of facilities that are to be located. Their number is open, but, in addition to the total transportation costs, the objective function includes fixed costs related to the location of the facilities. This feature allows location costs to differ for different locations.
That way, the model balances high transportation and low location costs in the case of few facilities, and low transportation and high location costs in the case of many facilities. A semi-formal statement of the problem is this.

P_SPLP: Min the sum of total transportation and facility location costs
s.t. each customer's demand is completely satisfied.

Again, each customer is automatically served from its closest facility. A good survey of the basics of this problem is provided by Krarup and Pruzan (1983). A slight modification of the simple plant location problem attaches capacities to the facilities. This is the so-called CPLP (capacitated plant location problem), which is identical to the SPLP above, except that each facility now has a finite capacity. The computational effects of this seemingly minor change are significant, though, as the capacitated version tends to be much more difficult than the uncapacitated formulation. Furthermore, while the objective function of the SPLP automatically ensures that each customer's total demand will be shipped from its closest facility, this may no longer be the case in the capacitated version. This leads to two versions of the capacitated plant location problem, the single-source CPLP and the multiple-source CPLP. In the former, it is required that each customer's demand be entirely satisfied from a single facility, while the latter relaxes this condition. In many industrial applications, it does not matter if a customer's demand is satisfied from a single source or multiple sources. However, in applications such as the location of emergency shelters (see, e.g., Bayram et al. 2023), single-source solutions are preferred, as they minimize confusion among those trying to find their way to the shelter. Furthermore, additional closest-assignment constraints may be added, which force all demands to be served by their closest facility. De-location and conditional location versions of the problem exist as well. The third class of models deals with the minimization of the single longest distance or weighted distance. For centuries, mathematicians have dealt with location problems that are today referred to as center problems. The first operations researcher to formally introduce the general problem appears to have been Hakimi (1964). The idea is to locate facilities (in the basic form without using the weights, i.e., the number of customers or customer demands) so that the longest distance between any customer and its closest facility is as short as possible. Formally, we can write:

P_p-center: Min the longest individual facility-customer distance
s.t. locate exactly p facilities, given that all customers are allocated to their closest facility.
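The p-median and p-center models differ only in the function applied to the closest-facility distances (the SPLP would additionally add site-dependent fixed costs). A brute-force sketch with an invented distance matrix makes the contrast explicit:

```python
from itertools import combinations

# Illustrative (invented) data: dist[i][j] = distance from customer i
# to candidate site j; weights[i] = demand of customer i.
dist = [[0, 4, 7, 9],
        [4, 0, 3, 6],
        [7, 3, 0, 2],
        [9, 6, 2, 0]]
weights = [5, 1, 1, 5]

def locate(p, objective):
    """Choose p sites minimizing `objective` of the closest-site distances."""
    def cost(chosen):
        closest = [min(dist[i][j] for j in chosen) for i in range(len(dist))]
        return objective(closest)
    return min(combinations(range(len(dist[0])), p), key=cost)

# p-median: weighted sum of closest distances; p-center: longest distance.
p_median = locate(2, lambda d: sum(w * x for w, x in zip(weights, d)))
p_center = locate(2, max)
print(p_median, p_center)    # → (0, 3) (0, 2)
```

Note how the two objectives pick different site pairs on the same data: the median solution chases the heavy demands, while the center solution protects the single worst-off customer.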

The idea underlying center problems is Rawls's “theory of justice,” according to which a solution is no better than the worst-off element in it. This, in the context of location models, is the customer who is most distant from a facility. Center problems were originally designed for emergency facilities. However, their exclusive focus on the farthest customer, while ignoring all others, makes their use highly problematic. They share this problem with Wald's decision rule in games against nature. Attempts to avoid very long customer-facility distances have been made by using medi-centers, to be discussed below. For locations in the plane with straight-line distances, a 1-center is the center of the smallest enclosing circle, i.e., the circle that includes all customers. The p-center then has circles around all p facility locations, so that the diameter of the largest circle is as small as possible. Many variations of the basic models exist. Rather than always patronizing the closest facility, customers may use some deterministic or probabilistic utility function to plan their purchases. This may also include reservation prices, i.e., upper limits on the prices customers are prepared to pay for the good (which obviously is only applicable in the case of nonessential goods), and/or upper limits on the distances that customers are prepared to travel to satisfy their needs. One of the first multiobjective location formulations was provided by Halpern (1976), who proposed what he referred to as the cent-dian. As the name suggests, it is a linear convex combination of the center and median objectives. A formulation that incorporates features from the minisum (median) and the minimax (center) problems is what became known as the medi-center. A medi-center is a point that minimizes the average customer-facility distance, subject to constraints that ensure that none of the customer-facility distances exceeds a preset limit. Early work on medi-centers was published by Toregas et al. (1971), who referred to the problem as a “constrained p-median,” followed by Toregas and ReVelle (1972), and Khumawala (1973). A unifying concept was described by Slater (1978), who proposed the k-centrum. His work was extended to p > 1 facilities by Tamir (2001). The objective of this model is to minimize the sum of the k longest customer-facility distances. Note that for k = 1, the model coincides with the 1-center, while for k = n (given that the underlying network has n nodes), we obtain the 1-median. Another generalization is the ordered median of Nickel and Puerto (1999). Finally, there are again de-location and conditional versions of this problem. In the competitive context, Hakimi (1983) was the first to categorize concepts that are based on von Stackelberg's (1943) leader-follower concept. His medianoid is a set of locations for the follower that, given that each customer patronizes the facility closest to him, maximizes the demand captured by the follower. Similarly, a centroid is defined as the leader's set of locations, which capture the largest possible demand after the competing follower has located at the locations most advantageous for him, viz., the medianoid. It is clear that while the medianoid is a conditional location problem, in that the follower locates with full knowledge of the existing locations of the leader, the centroid locates facilities given the follower's reaction, which in itself is an integer linear optimization problem. This is the feature that makes centroid problems much more difficult than medianoid problems. Also note that since part of the leader's problem is the solution to the follower's problem, the leader must have knowledge concerning the objective and perception of the follower.
Actually, medianoid and centroid are misnomers rather than counterparts of median and center in the competitive world, as the names may suggest. The reason is that while median and center minimize functions of (weighted) distances, medianoid and centroid problems maximize functions of weights associated with customers, while distances are relegated to customer behavior. A few years after Hakimi’s contribution, ReVelle (1986) described what he aptly referred to as the maxcap model, which is nothing but the medianoid problem. Its name “maxcap” clearly indicates the problem’s lineage: it is much more closely related to capturing problems than median and center problems. Hakimi’s names are probably derived from the fact that median and medianoid (center and centroid) problems are both formulations that minimize the sum (minimize the maximum) of a function. To add to the confusion, Slater’s (1975) “centroids” are what we nowadays call medians.
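A small medianoid (maxcap) instance can likewise be enumerated. The sketch below assumes, as one possible tie rule, that a customer equidistant to both chains stays with the leader; the distance matrix, weights, and leader location are invented.

```python
from itertools import combinations

# Illustrative (invented) data: rows = customers, columns = candidate sites.
dist = [[1, 5, 8],
        [6, 2, 7],
        [8, 6, 1],
        [4, 3, 9]]
weights = [10, 20, 30, 40]

def medianoid(r, leader):
    """Follower's best r sites: maximize demand captured from the leader.
    Assumed tie rule: a customer equidistant to both stays with the leader."""
    free = [j for j in range(len(dist[0])) if j not in leader]
    def captured(chosen):
        return sum(w for i, w in enumerate(weights)
                   if min(dist[i][j] for j in chosen)
                   < min(dist[i][j] for j in leader))
    best = max(combinations(free, r), key=captured)
    return set(best), captured(best)

print(medianoid(1, leader={0}))    # → ({1}, 90)
```

The centroid problem would wrap a second maximization around this function, choosing the leader set whose best follower response captures the least, which is what makes it so much harder.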

4.3.2 Problems with “Push” Objective

So far, we have implicitly assumed that the facilities to be located are desirable; in other words, the customers would prefer to have them closer rather than farther away, i.e., they were assumed to have a “pull” objective, as customers try to pull the facilities towards them. On the other hand, there are facilities from which customers wish to be as far away as possible. Following the same logic as above, these would be models with a “push” objective. Models in this category typically fall under the header of undesirable facility location, and they include jails, landfills, nuclear power plants, chemical processing plants, and other such facilities. We should note that all location problems with a push objective, in contrast to those with a pull objective, need to include a finite set in which the facilities are to be located. Otherwise, the “push” motion would move the facilities towards infinity. First, there is the anti-cover problem, which minimizes the total weight (i.e., the total number of customers) located within a predetermined covering distance δ from any of the facilities. Formally, we can write

P_anti-cover: Min the number of customers covered
s.t. locate p facilities; customers are defined as covered if they are closer than δ to any of the facilities.

A typical application involves polluting facilities that, given some attenuation function, are considered damaging to customers sited closer than a given distance δ to any of the facilities. In such a case, we would like to locate a facility so as to minimize the number of customers within the prespecified distance δ. An extension of the problem defines two distances δ1 and δ2, so that no customers are closer than δ1 to a facility (as pollution levels at these distances are dangerous, so that such a restriction would be formulated as an absolute constraint), while for customers living at a distance between δ1 and δ2 from a facility, the pollution effects would be a mere nuisance to be minimized. Customers living farther than δ2 from a facility would not be affected. The maxian (or, alternatively, anti-median) has the same general structure as the median problem, except that it maximizes the sum of all customer-facility distances.
There is one technical hitch, though: recall that in the standard median formulation, customers are allocated to their closest facility, as this very often describes customer behavior. In the median formulation, we do not explicitly need to include the assignment of each customer to his closest facility; the minimization objective guarantees that there is at least one optimal solution in which customers are assigned to their closest facilities (unless some of the capacities are finite, as explained above). Once the objective changes to maximization, though, this is no longer the case. If we just change the minimization objective to a maximization objective, customers will be assigned to the facility farthest from them, which is meaningless. As an example of why such constraints are needed: the polluting facility closest to a customer will have the most effect on that customer, not the most distant one. Thus, we need to change the formulation to explicitly include constraints that guarantee the closest customer-to-facility assignment. Formally, we can write

P_p-maxian: Max the average customer-facility distance
s.t. locate exactly p facilities, satisfy the demands of all customers, and ensure that each customer is served or “affected” by his closest facility.

A similar argument can be applied to center problems. The p-anticenter problem can be described as

P_p-anticenter: Max the shortest individual facility-customer distance
s.t. locate exactly p facilities so that all customers are allocated to their closest facility.

In the plane with straight-line distances, the 1-anticenter is the center of the largest empty circle, i.e., the circle that does not include any customers. Another problem supposes that legislation sets a limit (e.g., in terms of ppm for air pollution) on what is acceptable to the population. Customers are affected by the pollution from all facilities, i.e., the sum over all pollution generators. Then it may be desired to minimize the population that is exposed to levels of this pollutant above the limit. Semi-formally, we write

P_total-pollution: Min the number of customers that are exposed to levels higher than legislated
s.t. locate p facilities.

In addition to the aforementioned models, there are two other classes of models that have been dealt with by a significant number of researchers. The first of these problems is the so-called p-dispersion model, which appears to have been first described by Shier (1977); see also, e.g., Kuby (1987). This problem includes no customers at all: the task is to locate the facilities so as to maximize the minimum distance between them. Dispersion of facilities may be desirable from a security point of view: take, for instance, the location of ammunition dumps or nuclear reactors. If a single such facility were to receive a hit and it were located too close to another, similar facility, the damage could easily spread to the other facilities. This, of course, assumes that collateral damage is irrelevant in this context.
A semi-formal description would read as follows:

P_p-dispersion: Max the minimal distance between any two facilities
s.t. locate p facilities.

A similar problem that also includes only facilities is the p-defense formulation, which, as the name suggests, also has military parentage. It can be written as

P_p-defense: Max the sum of distances between all pairs of facilities
s.t. locate p facilities.

An interesting extension was put forward by Curtin and Church (2006), who included different types of facilities in their model. In particular, they used incinerators (undesirable) and parks (desirable), between which interaction is undesirable. A good reference in this context is Moon and Chaudhry (1984). A general reference for locating undesirable facilities is Erkut and Neuman (1989). We would like to mention that versions of the aforementioned problems exist in which facility planners do not just plan the location of the facilities, but also their respective capacities. While the continuous version results in nonlinear integer (i.e., very difficult) problems, the results could answer the “a few big facilities vs. many small ones” question. Regarding the location of undesirable facilities, though, it is important to note that even facilities that are quite obviously undesirable typically do have desirable (i.e., “pull”) features: while customers would, for instance, dread the direct proximity of a landfill, they will not want the facility to be pushed too far away, as this would increase the costs of the operation, which, ultimately, they will have to pay. As a result, most facilities combine desirable features (with pull objectives) as well as undesirable features (with push objectives). As far as modeling is concerned, they can then either be formulated as multiobjective models with pull and push objectives in the objective function, as a model with a standard pull objective and a minimum buffer in the constraints, or as a model with a standard push objective and a maximum buffer in the constraints. While the first option corresponds to the weighting method in multiobjective optimization, the second and third versions correspond to the constraint method. For details, see Chap. 3.
In general, a formal mathematical treatment of multiobjective location problems is provided by Nickel et al. (2019).
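The basic push formulations above can be prototyped just like their pull counterparts. The sketch below enumerates tiny anti-cover and p-dispersion instances; all distances, weights, and the nuisance distance δ are invented.

```python
from itertools import combinations

# Illustrative (invented) data: customer-site and site-site distances.
dist = [[1, 6, 9],
        [2, 5, 8],
        [7, 1, 4]]
weights = [10, 20, 30]
site_dist = [[0, 5, 9],
             [5, 0, 6],
             [9, 6, 0]]
delta = 3            # nuisance (covering) distance

def anti_cover(p):
    """Locate p sites minimizing the demand lying within distance delta."""
    def nuisance(chosen):
        return sum(w for i, w in enumerate(weights)
                   if min(dist[i][j] for j in chosen) < delta)
    return min((set(c) for c in combinations(range(len(site_dist)), p)),
               key=nuisance)

def p_dispersion(p):
    """Locate p sites maximizing the minimum inter-site distance."""
    def min_sep(chosen):
        return min(site_dist[a][b] for a, b in combinations(chosen, 2))
    return max(combinations(range(len(site_dist)), p), key=min_sep)

print(anti_cover(1), p_dispersion(2))    # → {2} (0, 2)
```

The p-defense variant would simply replace `min` by `sum` in `min_sep`, maximizing the total rather than the smallest inter-site separation.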

4.3.3 Problems with “Balancing” Objective

Finally, there are a number of location models that do not fit into the “pull” or “push” framework. They are often said to have “balancing objectives.” Their idea is to provide “equity” in the solution. First of all, due to the inherent vagueness of the concept, “equity” is replaced by “equality” by almost all authors. In the context of location modeling, this could mean that a facility is to be positioned so that all of the population has (more or less) equal access to that facility. Caution is advised, though, as equity objectives by themselves may lead to meaningless solutions. One popular example has three customers in the plane with straight-line distances, with the middle customer located just slightly below the straight line that connects the other two customers. This situation is illustrated in Fig. 4.1.

Fig. 4.1 Problem with balancing objectives

Given three customers located at the points P1, P2, and P3, respectively, the solution that best balances their facility-customer distances is at M, the center of the circle on whose circumference the three customer points are located. The optimality of this point is apparent, as here, all three customers are at the same distance from the facility at M. Moving the facility from M to M′ has two effects. The first is that all customers benefit from such a move, as M′ is closer to each of the three customers than M. On the other hand, the three customer-facility distances are now no longer equal, so that a balancing objective would prefer a facility location at M over one at M′. This demonstrates that balancing objectives are best relegated to secondary objectives, coupled with some objective that measures efficiency. The inequality of solutions can be measured in many ways, as shown by Marsh and Schilling (1994) or Eiselt and Laporte (1995). One of these measures is the Gini index, which was originally described to measure the inequality of income distributions. More specifically, the Lorenz curve (Lorenz 1905) plots the proportion of the total income of a population (on the abscissa) against the proportion of the population (ordinate); see, e.g., Fig. 4.2. The 45° line that connects the origin and the point (1, 1) is the line that indicates total equality: any x% of the population command x% of the total income. Total inequality would be represented by a curve that starts at the origin, moves to (almost) the point (1, 0), and then to the point (1, 1). It indicates that a very small proportion of the population has control over virtually all the money. Between these extremes are realistic distributions, such as the nonlinear line in Fig. 4.2. The sociologist Gini (1912) measured inequality as the area between the Lorenz curve and the 45° line of complete equality, in relation to the area below the 45° line. The normalized measure will approach 0 as solutions near equality, while it will approach 1 in case of very unequal solutions.
As mentioned earlier, equality measures can be used either as secondary objectives (in addition to an efficiency objective such as maximization of profit or the minimization of cost) or as a criterion to evaluate solutions.
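The Gini index of, say, a vector of customer-facility distances can be computed directly from the sorted values via the trapezoidal area under the Lorenz curve; the distance vectors below are illustrative.

```python
def gini(values):
    """Gini index of a nonnegative vector: twice the area between the
    Lorenz curve and the 45-degree equality line (0 = perfect equality,
    approaching 1 = extreme inequality)."""
    xs = sorted(values)                 # Lorenz curve needs ascending order
    n, total = len(xs), sum(xs)
    # Closed form of the trapezoidal area under the Lorenz curve.
    cum = sum((i + 1) * x for i, x in enumerate(xs))
    return 2 * cum / (n * total) - (n + 1) / n

# Illustrative numbers, e.g., customer-facility distances of two solutions.
print(gini([4, 4, 4, 4]))      # → 0.0     (perfectly balanced)
print(gini([1, 1, 1, 13]))     # → 0.5625  (quite unbalanced)
```

Used as a secondary criterion, such a score lets a planner compare two efficient solutions by how evenly they treat the customers.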


Fig. 4.2 Lorenz curve and Gini index

4.4 Evaluation of Point Patterns

This section will concentrate on two issues that are relevant after the optimization. The first deals with the assessment of the actual solution, rather than just its objective function value, and the second defines and assesses the stability/robustness of a solution. In order to discuss these issues, we first need to distinguish between objectives, i.e., issues that are relevant to the decision maker, and criteria, i.e., issues that are relevant to all individuals involved in or affected by the process. For instance, if a firm plans a multimodal transport hub in some area, its objective may very well be the minimization of the firm's costs, whereas the criteria may include, in addition to the firm's profitability, the level of unemployment in the area (something the firm is not likely to have included in its objective function, but the state or county will), the inequality of income (something of interest to the state but not the decision maker), the resulting agglomeration/dispersion of firms (of only moderate interest to the decision maker and unlikely to have been included in his objective), the effect on the service level to customers (measured in terms of the time to obtain goods or services), and many others. Suppose now that some process, be it an optimization procedure or a political process, has resulted in some pattern of public services, such as hospitals, police stations, barracks, or similar facilities. Our first potential task is to evaluate this pattern. In an early contribution, Boots and Getis (1988) devised a number of statistical tests that determine whether a point (e.g., location) pattern could have resulted from a random process. Often, however, it is more interesting to determine “how different” two point or location patterns actually are. For instance, one could compare a set of locations that has arisen from a planning and political process, such as a pattern of public facilities, e.g., hospitals or landfills, with a pattern that would result from an optimization based on today's demand, similar to the approach used for zero-based budgeting. In regional science, approaches of this nature have been described by Fisher and Rushton (1979) and Kohler et al. (2000). A historical-vs-optimized comparison in the context of hospitals can be found in Burkey et al. (2012). One way to compare two patterns that include the same number of facilities is the use of the well-known assignment problem. It starts by defining Set 1 as the set of facility locations in the first pattern and Set 2 as the set of facility locations in the second pattern. The distances between all pairs consisting of a facility in Set 1 and a facility in Set 2 are known. The assignment problem then minimizes the sum of all distances between pairs of facilities.
Semi-formally, we can write the assignment problem as

Passignment: Minimize the sum of distances between points allocated to each other
s.t. each point in the first set has exactly one point in the second set allocated to it, and vice versa.

By minimizing the total distance between points from the two sets, this procedure simulates a move of all points in Set 1 to the points in Set 2 that minimizes the overall effort. In other words, we have connected points of the two sets that belong to each other by way of proximity. By comparing an actually existing set and an optimized set, we obtain information on whether or not the existing set is close to optimality. Results of this nature have to be taken with a grain of salt, though, as the existing location pattern has typically resulted from a long-term process with different decision makers and priorities, whereas the optimization has not suffered from political influences and is based on today's data. Especially in location theory, where most decisions are strategic, political influences start at very low levels. When one of us once advised an ambulance service and suggested publishing the results, the administrators immediately refused, so as to "not have the Government of the day look bad." That, of course, prohibits any optimization or, for that matter, any improvement.
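The matching step can be sketched in a few lines of Python. The brute force over permutations shown here is only workable for small patterns (an LP or the Hungarian algorithm would be used in practice); the two patterns compared are hypothetical, and the 7 × 7 distance matrix is the one employed later in this chapter.

```python
from itertools import permutations

# Distance matrix used in Sect. 4.5 (rows: points 1-7, columns: points 1-7).
D = [[0, 12, 56, 35, 28, 42, 41],
     [12, 0, 23, 70, 46, 35, 51],
     [48, 23, 0, 60, 52, 34, 72],
     [35, 73, 66, 0, 25, 74, 52],
     [22, 46, 52, 68, 0, 46, 20],
     [48, 28, 34, 74, 47, 0, 46],
     [41, 52, 72, 11, 36, 46, 0]]

def pattern_distance(set1, set2):
    """Solve the small assignment problem by brute force: match every
    facility of set1 (1-based labels) one-to-one with a facility of set2
    so that the total distance is minimal."""
    return min(sum(D[i - 1][j - 1] for i, j in zip(set1, perm))
               for perm in permutations(set2))

# Hypothetical comparison of an "existing" and an "optimized" pattern:
print(pattern_distance([3, 4, 5], [2, 4, 7]))  # -> 43 (3->2, 4->4, 5->7)
```

Two identical patterns yield a distance of zero, so the value can serve as a rough "how different" score between a historical and an optimized pattern.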


Another analysis of a point pattern involves the issue of agglomeration. Agglomeration may be desirable in the case of retail facilities, if customers engage in multi-stop shopping, or in the case of tourist sites, where customers can be expected to patronize different facilities in a cluster, while they would not make an extra (long) trip for just one of these facilities. Agglomeration also facilitates comparison shopping, in which customers visit different stores to acquire information on the products before making a decision on where to purchase. A simple tool to measure agglomeration is provided by the 1-center problem discussed above. The diameter of the smallest enclosing circle (in networks, defined as the longest path between any two facilities in the network) will indicate the degree of agglomeration: a small diameter indicates a high degree of agglomeration. Extensions of the concept may be required in the case of multiple centers, e.g., those that form in population agglomerations around megacities. To illustrate the concept, consider two dense agglomerations that are far apart. The smallest enclosing circle will have a large diameter, while two circles that together enclose all points may each have a fairly small diameter. The choice of the value of p in the case of p-center problems is, however, not straightforward. We may also be interested in the relative importance of individual facilities. Since any solution assigns customers to each of the facilities according to some rule, we can determine the proportion of a company's total business that can be attributed to each of the facilities. This idea is similar to the usual 80-20 rule in finance and the A-B-C classification in inventory management. A facility's importance can also be displayed in a Lorenz curve, which plots the proportion of facilities against the proportion of business that they serve. This information is crucial in the case of de-location, i.e., the closing of facilities.
In case a state faces an actual financial crisis, this process will identify the least-used facilities, whose closure may not have much effect on the general service level.

The second issue discussed in this section deals with the stability or robustness of the solution. The typical definition of stability/robustness concerns the sensitivity of the solution. We can distinguish between two main issues, viz., sensitivity of the value of the objective function with respect to input factors beyond the decision maker's control (i.e., problem parameters such as demand, price for price-setting firms, machine-based capacities, etc.), and sensitivity of the solution of a problem with respect to the number and location of the facilities that are sited (which are clearly within a decision maker's control). The former refers to the traditional sensitivity analysis. The well-known individual sensitivity analysis investigates the effect that the change of an input parameter has on the optimal solution. Probably more than in other fields, decision makers in location problems are interested not just in changes to the value of the objective function (profit, costs, etc.), but also in the actual solution, i.e., the locational pattern. This is, at least in part, due to the large capital layout required in typical location problems. A sensitivity analysis will then determine for what range of individual parameters, such as demand or prices, the location will remain unchanged (or, in an extension, will remain sufficiently similar to the solution under investigation by the decision maker). A solution that does not react much to reasonable changes of the input parameters will then be considered fairly robust. Decision makers will have considerably more faith in solutions that are stable or robust.

A different type of measure is obtained when we determine the robustness of choice, i.e., the likelihood that an individual location is included in an optimal solution for changing values of the total number of facilities to be located. For instance, suppose that we are locating regional distribution centers in a state, and it has been determined that we will need a total of 5–10 such centers. Optimizing the problem for 5, 6, . . ., 10 centers individually, we can then determine the number of times that a facility is located at each of the sites in an optimal solution. The higher this number, the higher is the likelihood that a specific site is included in an optimal solution given some specific number of facilities to be located. This can be important in cases of expansion or contraction, when p facilities have been located optimally and, after some time, either an additional facility is to be located or an already located facility is to be closed. Having included sites that are likely part of an optimal solution means a better chance of obtaining an overall optimal solution after the addition or deletion.

4.5 Making It Work

This part of the chapter will illustrate the concepts presented above. First, we need to formally define a central concept in location theory, viz., distances. Distances (or metrics) are defined as sets of numbers that satisfy three axioms, viz., identity (the distance between a point and itself is zero), symmetry (the distance between points A and B is the same as the distance between points B and A), and the triangle inequality (the distance between points A and B is never longer than the distance between the two points via some intermediate point C). Formally, denoting by dij the distance between some point i and some point j, we have the requirements

• dii = 0,
• dij = dji, and
• dik + dkj ≥ dij ∀ k.

Many types of distances have been suggested for different purposes, e.g., gauges (distance functions that allow for asymmetries, such as traffic networks with congestion; see, e.g., Plastria 1995), or the Hamming and Levenshtein distances in information technology. Location scientists have almost exclusively applied distance functions from a family of functions referred to as Minkowski distances. Formally, the Minkowski distance between points A and B with coordinates (a1, a2) and (b1, b2), respectively, and some given parameter p > 0 is

dAB(p) = [|a1 − b1|^p + |a2 − b2|^p]^(1/p).

It is easy to demonstrate that for p = 1, the Minkowski distance simplifies to the rectilinear, Manhattan, or ℓ1 distance, in which the distance is taken as the sum of the distances along the individual axes of the system of coordinates. In cities that are planned on a grid, such as New York, Washington, or Phoenix, such a distance may be appropriate when we can assume that there are no one-way streets. For p = 2, the Minkowski distance reduces to the Euclidean, straight-line, or ℓ2 distance, which is simply the length of the straight line between the two points in the Euclidean plane. Finally, if we let p → ∞, we obtain the Chebyshev or max distance, in which the distance between the two points is measured as the longer of the two coordinate differences. Most applications of this type of distance function are in the nonphysical domain. Another popular measure are the squared Euclidean distances. Actually, they do not measure distances at all but areas. Despite that, they have a number of very desirable properties, which accounts for their applications in statistics (the ordinary least squares regression comes to mind), while in location problems, as we will see below, the median with squared Euclidean distances is the center-of-gravity of the customer points.

Throughout this chapter, we will use the distance matrix D = (dij) between seven customer points, represented by the rows, and seven potential facility locations, shown as the columns of the matrix. Note that this particular matrix D is not symmetric.

D = [  0  12  56  35  28  42  41
      12   0  23  70  46  35  51
      48  23   0  60  52  34  72
      35  73  66   0  25  74  52
      22  46  52  68   0  46  20
      48  28  34  74  47   0  46
      41  52  72  11  36  46   0 ].
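The three special cases of the Minkowski distance discussed above can be sketched as follows (the two points used are arbitrary illustration values):

```python
# Minkowski distance in the plane for p = 1 (rectilinear), p = 2
# (Euclidean), and the limiting case p -> infinity (Chebyshev/max).
def minkowski(a, b, p):
    if p == float("inf"):
        return max(abs(a[0] - b[0]), abs(a[1] - b[1]))
    return (abs(a[0] - b[0]) ** p + abs(a[1] - b[1]) ** p) ** (1 / p)

a, b = (0, 0), (3, 4)
print(minkowski(a, b, 1))             # -> 7.0 (sum of coordinate differences)
print(minkowski(a, b, 2))             # -> 5.0 (straight-line distance)
print(minkowski(a, b, float("inf")))  # -> 4   (longer coordinate difference)
```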

The first problem to be addressed is the location set covering model (LSCP). Recall that its objective is to minimize the number of facilities that are to be located, so that each customer has at least one facility within a given distance δ. Specifying δ = 34, we can determine the covering matrix A = (aij), so that aij = 1, if dij ≤ δ, and 0 otherwise. In our example,

A = [ 1  1  0  0  1  0  0
      1  1  1  0  0  0  0
      0  1  1  0  0  1  0
      0  0  0  1  1  0  0
      1  0  0  0  1  0  1
      0  1  1  0  0  1  0
      0  0  0  1  0  0  1 ].
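The covering matrix is obtained mechanically from D; a minimal Python sketch:

```python
# Build the covering matrix A from the distance matrix D for delta = 34:
# a_ij = 1 if customer i is within delta of site j, and 0 otherwise.
D = [[0, 12, 56, 35, 28, 42, 41],
     [12, 0, 23, 70, 46, 35, 51],
     [48, 23, 0, 60, 52, 34, 72],
     [35, 73, 66, 0, 25, 74, 52],
     [22, 46, 52, 68, 0, 46, 20],
     [48, 28, 34, 74, 47, 0, 46],
     [41, 52, 72, 11, 36, 46, 0]]
delta = 34
A = [[1 if d <= delta else 0 for d in row] for row in D]
for row in A:
    print(row)
```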

We can now define binary location variables yj, which equal 1 if we locate a facility at site j, and 0 otherwise. The problem can then be written as

P: Min z = Σj yj
s.t. Σj aij yj ≥ 1 ∀ i
yj = 0 or 1 ∀ j.

In the above formulation, the objective minimizes the number of facilities that are located, while the constraints ensure for each customer that there is at least one facility within reach. In our example, an optimal solution for this problem is to locate facilities at sites 2, 5, and 7 (or, alternatively, at sites 2, 4, and 7), i.e., 3 facilities are required. A minor modification of the location set covering model has the facility planner, who pays for the operation of the facilities, minimize his cost, while guaranteeing that none of the customers, who pay for the trips themselves, has to travel more than a preset distance of δ. Defining cj as the (annualized) cost of establishing and running a facility at site j, the formulation is identical to the one above for the LSCP, except that the objective is now Min z = Σj cj yj. Given costs of 400, 500, 500, 300, 500, 500,

and 100 to establish a facility at the seven sites, respectively, we obtain a solution in which the decision maker locates facilities at sites 2, 4, and 7 for total costs of z = 900.

The second model is the maximum covering model, which maximizes the number of customers reached with a given number of facilities. In addition to the location variables yj, we need to define covering variables xi, which assume a value of 1 if customers at site i are covered, and 0 otherwise. We can then formulate the max cover problem as

P: Max z = Σi wi xi
s.t. Σj yj = p
xi ≤ Σj aij yj ∀ i
xi = 0 or 1 ∀ i; yj = 0 or 1 ∀ j.

Again, consider the distance matrix above and the maximum distance δ = 34, within which a customer is considered covered. In addition, suppose that the number of customers (a proxy for demand) at the seven points is collected in the vector of weights w = [4, 2, 8, 6, 7, 4, 2]. Assuming that we are to locate 2 facilities, the optimal solution to this problem locates facilities at sites 3 and 5, covering sites 1, 2, . . ., 6 and leaving only site 7 uncovered. Hence the total coverage is 31, i.e., about 94%.

In the case of the de-location problem, suppose that the task is to eliminate one of the facilities. From a formal point of view, all we need to do is set the location variables of all sites not in the previous solution equal to zero and adjust the number of desired facilities from p to p′. In our example, if we were to reduce the existing two facilities by one, the solution recommends to close the facility at node 3 and retain the facility at node 5, which serves the customers at nodes 1, 4, and 5 and thus covers a total of 17 customers, a loss of more than half the customers covered before. Semiformally, we could write

Pdelocation: Max cover
s.t. choose p′ locations among the previously chosen p location points.

In the case of a conditional location problem, in which the firm wants to optimally add facilities to the existing facilities, we need to define p′ as the total number of facilities after the addition and add yj = 1 for all sites that were included in the previous solution. In our example, this means that the constraints y3 = 1 and y5 = 1 are added. The solution locates facilities at sites 3, 4, and 5 with a total capture of 33 customers. Adding another facility will result in a new location anywhere, as with 3 facilities, all customers are already captured.
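For an instance of this size, both covering models can be checked by brute-force enumeration (an integer-programming solver would be used for realistic instances); a minimal sketch:

```python
from itertools import combinations

# Data of the running example: distances, covering distance, site costs.
D = [[0, 12, 56, 35, 28, 42, 41],
     [12, 0, 23, 70, 46, 35, 51],
     [48, 23, 0, 60, 52, 34, 72],
     [35, 73, 66, 0, 25, 74, 52],
     [22, 46, 52, 68, 0, 46, 20],
     [48, 28, 34, 74, 47, 0, 46],
     [41, 52, 72, 11, 36, 46, 0]]
delta = 34
cost = [400, 500, 500, 300, 500, 500, 100]
A = [[1 if d <= delta else 0 for d in row] for row in D]

def covers_all(sites):
    # every customer must have at least one open site within delta
    return all(any(A[i][j] for j in sites) for i in range(7))

covering_sets = [s for k in range(1, 8)
                 for s in combinations(range(7), k) if covers_all(s)]

# LSCP: minimize the number of facilities.
print(min(len(s) for s in covering_sets))  # -> 3

# Cost version: minimize the total location cost instead.
best = min(covering_sets, key=lambda s: sum(cost[j] for j in s))
print([j + 1 for j in best], sum(cost[j] for j in best))  # -> [2, 4, 7] 900
```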
Semi-formally, we can write

Pconditional: Max cover
s.t. locate among the previously unchosen sites and locate an additional p′ − p facilities.

In the conditional problem with two or more firms that each locate their own branches (the "old" firms have already located existing branches, while the "new" firm is about to enter the market), the problem for the new firm cannot be formulated without additional information. In particular, it has to be determined which firm captures customers in case a customer is within the capturing distance of multiple facilities. We should point out that de-location and conditional problems also exist for problems with objective functions other than "max cover."
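The max cover, de-location, and conditional variants above can likewise be verified by enumeration at this scale; a brute-force sketch:

```python
from itertools import combinations

D = [[0, 12, 56, 35, 28, 42, 41],
     [12, 0, 23, 70, 46, 35, 51],
     [48, 23, 0, 60, 52, 34, 72],
     [35, 73, 66, 0, 25, 74, 52],
     [22, 46, 52, 68, 0, 46, 20],
     [48, 28, 34, 74, 47, 0, 46],
     [41, 52, 72, 11, 36, 46, 0]]
w = [4, 2, 8, 6, 7, 4, 2]
delta = 34

def covered_weight(sites):
    # total demand of customers within delta of at least one open site
    return sum(w[i] for i in range(7)
               if any(D[i][j] <= delta for j in sites))

# Max cover with p = 2 (31 of the 33 customers, about 94%).
print(max(covered_weight(s) for s in combinations(range(7), 2)))  # -> 31

# De-location: keep the better of the two sites 3 and 5 (0-based 2 and 4).
print(max(covered_weight((j,)) for j in (2, 4)))                  # -> 17

# Conditional location: add one site to the fixed pair {3, 5}.
print(max(covered_weight((2, 4, j))
          for j in range(7) if j not in (2, 4)))                  # -> 33
```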


A modification of the conditional problem is the Maxcap (maximum capture) problem first proposed by ReVelle (1986). The scenario includes a location leader (a firm) that has already located a number of facilities at some locations, which are irreversible. The task is now for the location follower to decide where to locate its own facilities. For simplicity, we assume that there are no ties in the customer-facility distances (the formulation in the original paper does not make such an assumption and is consequently somewhat more involved). The formulation again uses location variables yj, which assume a value of 1 if the follower locates a facility at site j, and 0 otherwise, and coverage variables xi, which equal 1 if the follower captures site i, and 0 otherwise. We also need parameters aij, which equal 1 if site j is closer to customer i than the closest of the leader's facilities, and 0 otherwise. We can then formulate the maxcap (maximum capture) problem as

Pmaxcap: Max z = Σi wi xi
s.t. Σj yj = p
xi ≤ Σj aij yj ∀ i
xi ∈ {0, 1} ∀ i; yj ∈ {0, 1} ∀ j.

Note that the formulation is just about identical to that of the max cover problem, with the exception that the parameters aij in the max cover problem are based on a fixed standard δ, while in the maxcap problem, they are based on the locations of the leader's facilities. In our numerical example, assume that the leader has already located facilities at sites 4 and 5, and the follower wants to locate a single new facility. The problem then indicates that the new facility should be located at site 2, so that the follower will capture the demand at nodes 1, 2, 3, and 6 with a total of 18 units. If the follower were to locate 2 facilities, they would be located at sites 2 and 7, and the follower would capture the demand of all nodes except those occupied by the leader, with a total capture of 20.

Consider now the p-median model. Here, we need not only the usual location variables yj, but also continuous allocation variables xij, which are defined as the proportion of customer i's demand that is satisfied by facility j. The problem that locates p facilities so as to minimize the sum of distances between the customers and their respective closest facilities can then be formulated as follows:

Pp-median: Min z = Σi Σj wi dij xij
s.t. Σj yj = p
Σj xij = 1 ∀ i
xij ≤ yj ∀ i, j
yj = 0 or 1; xij ≥ 0 ∀ i, j.

In our example, the optimal solution of this problem for p = 3 locates the three facilities at sites 3, 4, and 5, and the total weighted distance between customers and their respective facilities is 316. In this solution, the facility at site 3 serves the customers at nodes 2, 3, and 6; the facility at site 4 serves the customers at nodes 4 and 7; and the facility at site 5 serves the customers at nodes 1 and 5. The average distance between a customer and its assigned facility is then 316/33 = 9.58.

At this point we will introduce some heuristics. We have chosen to illustrate these heuristics on p-median problems, as this is the most popular location model. As a simple construction heuristic, we choose the Greedy technique. It will sequentially locate one facility at a time by choosing the site with the least cost associated with the present (so far incomplete) solution. Once the required p facilities have been located, the algorithm terminates. To illustrate the concept, consider again the example introduced at the beginning of this chapter and assume that p = 2 facilities are to be located. As before, w denotes the vector of weights and D is the distance matrix. We first calculate the cost of locating a single facility at each potential location. This is done by calculating wD and choosing the smallest element in the vector. The site associated with this element is the first site to be chosen. In our example, the costs of locating a single facility at one of the seven sites are wD = [1046, 1208, 1310, 1554, 1030, 1368, 1478] with a minimum of 1030, so that we position the first facility at site 5. We now try all possibilities for adding a second site. First, try sites 1 and 5. The distances between each customer and the closest facility are d15 = [0, 12, 48, 25, 0, 47, 36], so that the costs of locating one facility at site 5 and the second facility tentatively at site 1 are wd15 = 818. Try now siting the second facility at site 2 (rather than at 1).
The closest distances to the customers are d25 = [12, 0, 23, 25, 0, 28, 36], so that the costs with one facility at site 5 and another at site 2 are wd25 = 566. Similarly, for sites 3 and 5 we obtain distances d35 = [28, 23, 0, 25, 0, 34, 36] and costs of wd35 = 516; for sites 4 and 5, the shortest distances are d45 = [28, 46, 52, 0, 0, 47, 11] and the costs are wd45 = 830; for sites 6 and 5, the shortest distances are d65 = [28, 35, 34, 25, 0, 0, 36] and the costs are wd65 = 676; and for sites 7 and 5, the shortest distances are d75 = [28, 46, 52, 25, 0, 46, 0] and the costs are wd75 = 954. In this conditional location problem, site 3 is best, and it is now permanently chosen. As we now have the required p = 2 facilities, the Greedy algorithm terminates with facilities located at sites 3 and 5 at a cost of 516, which happens to be the optimal solution.

In order to illustrate two improvement heuristics, start with the (nonoptimal) solution with facilities sited at nodes 1 and 2. The shortest customer-facility distances for this locational arrangement are d12 = [0, 0, 23, 35, 22, 28, 41], so that the costs of this solution are wd12 = 742. The first of the two heuristics described here is the location-allocation algorithm first described by Cooper (1964). The method first determines the cluster Ck of customers associated with the facility at node k, and it then determines the reduced distance matrix Dk and the weights wk of all customers allocated to the facility at node k. Given this cluster of customers, the method then optimally locates a single facility for all customers in Ck. This becomes the new location of that facility. The procedure is repeated for all clusters, resulting in the new solution. The new solution has new clusters of customers, and the process is repeated until it converges. In our example, the two facilities are located at sites 1 and 2, so that the clusters are C1 = {1, 4, 5, 7} and C2 = {2, 3, 6}. The reduced distance matrix for the facility at the first site is

D1 = [  0  35  28  41
       35   0  25  52
       22  68   0  20
       41  11  36   0 ]  and the vector of weights w1 = [4, 6, 7, 2],

so that the costs are w1D1 = [446, 638, 334, 616], with the minimum in the third position, which belongs to site 5, the best location within this cluster. In the second cluster, we obtain

D2 = [  0  23  35
       23   0  34
       28  34   0 ]  and  w2 = [2, 8, 4],

so that we obtain costs of w2D2 = [296, 182, 342] with the minimum in the second position, which belongs to site 3, which is then chosen as the new location. The facilities are now located at sites 3 and 5, and the improvement step is repeated. The allocations of customers to facilities are C3 = {2, 3, 6} and C5 = {1, 4, 5, 7}. These are the same clusters as before, so that the process has converged, and it terminates. The fact that this solution also happens to be the optimal solution is a coincidence.

The second improvement heuristic was suggested by Teitz and Bart (1968). It is commonly referred to as the vertex substitution, swap, one-opt, or one-exchange method. While the idea behind the method is very simple, it has proven to be quite effective in practice. The technique starts with any solution and then tentatively exchanges one location that is presently included in the solution against one that is not, giving the method the title of "pairwise exchange." If this exchange does not result in a cost reduction, it is rejected, and another pair is chosen for an exchange. If costs are reduced in such a step, the previous solution is replaced by the current one and the process continues from there. In our example, we choose again sites 1 and 2 as the initial locations of the two facilities. This solution has costs of 742. Table 4.2 shows a summary of the procedure.

Table 4.2 Detailed progress of the vertex substitution method

Outgoing variable | Incoming variable | Facility locations | Objective value | Decision
Site 1 | Site 3 | Sites 2 and 3 | 982 (> 742) | Swap is rejected
Site 1 | Site 4 | Sites 2 and 4 | 688 (< 742) | Swap is accepted, new solution at 2 and 4
Site 2 | Site 1 | Sites 1 and 4 | 776 (> 688) | Swap is rejected
Site 2 | Site 3 | Sites 3 and 4 | 708 (> 688) | Swap is rejected

The process shown in Table 4.2 continues until either no further improvements are possible or the user interrupts the process.

Consider now the de-location problem with the "minsum distance" objective. Again, all we need to do is revise the original p-median optimization problem by adding constraints yj = 0 for all sites that are not in the optimal solution and change p to p′. Recall that the optimal solution for p = 3 facilities was to locate facilities at nodes 3, 4, and 5, which resulted in total transportation costs of z = 316. For the de-location problem, we add the constraints y1 = y2 = y6 = y7 = 0 and, if only two facilities can be located, we also replace p = 3 by p′ = 2; we then obtain an optimal solution with facilities located at sites 3 and 5 with an objective value of z = 516, an increase of more than 60% of the total/average customer-facility distance(s).
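At this problem size, the exact p-median solution and both heuristics can be sketched by plain enumeration (a first-improvement variant of the swap method is used below; the book's Table 4.2 shows only the first few swaps of such a run):

```python
from itertools import combinations

D = [[0, 12, 56, 35, 28, 42, 41],
     [12, 0, 23, 70, 46, 35, 51],
     [48, 23, 0, 60, 52, 34, 72],
     [35, 73, 66, 0, 25, 74, 52],
     [22, 46, 52, 68, 0, 46, 20],
     [48, 28, 34, 74, 47, 0, 46],
     [41, 52, 72, 11, 36, 46, 0]]
w = [4, 2, 8, 6, 7, 4, 2]

def cost(sites):
    # weighted distance, each customer served by its closest open site
    return sum(w[i] * min(D[i][j] for j in sites) for i in range(7))

# Exact p-median for p = 3 by complete enumeration.
best = min(combinations(range(7), 3), key=cost)
print(sorted(j + 1 for j in best), cost(best))    # -> [3, 4, 5] 316

# Greedy construction for p = 2: site 5 first, then site 3.
sites = []
for _ in range(2):
    sites.append(min((j for j in range(7) if j not in sites),
                     key=lambda j: cost(sites + [j])))
print(sorted(j + 1 for j in sites), cost(sites))  # -> [3, 5] 516

# Teitz-Bart vertex substitution, started from the solution {1, 2}.
cur = [0, 1]
improved = True
while improved:
    improved = False
    for out in cur:
        for inn in range(7):
            if inn in cur:
                continue
            cand = [s for s in cur if s != out] + [inn]
            if cost(cand) < cost(cur):
                cur, improved = cand, True
                break
        if improved:
            break
print(sorted(j + 1 for j in cur), cost(cur))      # -> [3, 5] 516
```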
A further decrease to just a single facility results in locating the lone facility at site 5 with a total distance/cost of 1030, another 100% increase over the solution with 2 facilities. In the case of the conditional location problem for one firm (i.e., no competition), if we wanted to optimally add a single facility, we would change p = 3 to the new value p′ = 4 and add constraints yj = 1 for the sites at which facilities have already been located; in our case, this means that we add y3 = 1, y4 = 1, and y5 = 1. The new solution locates the additional facility at site 6, and the total cost/distance of all shipments is now 180, a reduction by almost half.
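The de-location and conditional steps reduce to small searches over the fixed sites; a minimal sketch:

```python
from itertools import combinations

D = [[0, 12, 56, 35, 28, 42, 41],
     [12, 0, 23, 70, 46, 35, 51],
     [48, 23, 0, 60, 52, 34, 72],
     [35, 73, 66, 0, 25, 74, 52],
     [22, 46, 52, 68, 0, 46, 20],
     [48, 28, 34, 74, 47, 0, 46],
     [41, 52, 72, 11, 36, 46, 0]]
w = [4, 2, 8, 6, 7, 4, 2]

def cost(sites):
    return sum(w[i] * min(D[i][j] for j in sites) for i in range(7))

fixed = (2, 3, 4)  # 0-based indices of the optimal p = 3 sites 3, 4, 5

# De-location: the best p' = 2 subset of the three open facilities ...
best2 = min(combinations(fixed, 2), key=cost)
print(sorted(j + 1 for j in best2), cost(best2))  # -> [3, 5] 516

# ... and further down to a single remaining facility.
best1 = min(combinations(fixed, 1), key=cost)
print(best1[0] + 1, cost(best1))                  # -> 5 1030

# Conditional location: optimally add a fourth facility to sites 3, 4, 5.
add = min((j for j in range(7) if j not in fixed),
          key=lambda j: cost(fixed + (j,)))
print(add + 1, cost(fixed + (add,)))              # -> 6 180
```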


Closely related to the p-median problem is the simple plant location problem (SPLP). The difference to the p-median problem is that the SPLP treats the number of facilities that are to be located not as a parameter, but as a variable. In order to make that work, we need to know the costs of locating facilities. These costs may be site-specific, so that we define fj as the location cost if we locate a facility at site j. The formulation of the problem is very similar to that of the p-median problem, except that the SPLP lacks the single constraint that requires p facilities to be located; instead, it includes the location costs in the objective function.

PSPLP: Min z = Σi Σj wi dij xij + Σj fj yj
s.t. Σj xij = 1 ∀ i
xij ≤ yj ∀ i, j
yj = 0 or 1; xij ≥ 0 ∀ i, j.

Using our example and adding fixed costs of 400, 500, 500, 300, 500, 500, and 100 to the formulation, we obtain a solution that locates facilities at sites 3, 4, and 7. To facility 3, it allocates the customers at sites 2, 3, and 6; facility 4 serves customers 1 and 4; and facility 7 serves customers 5 and 7. The total costs are 1362.

The capacitated plant location problem (CPLP) is the same as the SPLP, except that it adds capacities to the facilities. The capacity of a facility at site j is denoted by κj. We again define locational variables yj, which equal 1 if we locate a facility at site j and zero otherwise, and allocation variables xij, which indicate the proportion of customer i's demand that is satisfied by facility j. We can then formulate the problem as follows.

PCPLP: Min z =

Σi Σj wi dij xij + Σj fj yj
s.t. Σi wi xij ≤ κj yj ∀ j
Σj xij = 1 ∀ i
yj = 0 or 1; xij ≥ 0 ∀ i, j (alternatively, xij = 0 or 1 ∀ i, j).

In our example, using again customer weights/demands of 4, 2, 8, 6, 7, 4, and 2 as well as facility capacities of 8, 7, 9, 4, 5, 7, and 6, we obtain the following solution. Facilities are located at sites 1, 3, 4, 6, and 7 with a total cost of 2098. As permitted


by the formulation, some of the customer-to-facility allocations are fractional. While customers 1, 3, 6, and 7 are each assigned to the facility at the same place, customers 2, 4, and 5 are served by more than one facility each. In particular, customer 2 is served half by facility 3 and half by facility 6; customer 4 is served 1/3 by facility 1 and 2/3 by facility 4; and customer 5 receives 2/7 of its demand from facility 1, 1/7 from facility 6, and 4/7 from facility 7. Requiring the allocation variables xij also to be binary (the resulting model is also referred to as the "single-source CPLP"), the location pattern changes slightly to facilities being located at sites 1, 2, 3, 6, and 7, with costs of 2606. Now, facility 1 serves customer 5, facility 2 serves customers 1 and 2, facility 3 serves customer 3, facility 6 serves customers 6 and 7, and facility 7 serves customer 4. Note that a facility no longer necessarily serves a customer that is located at the same site, even though the transportation costs for those shipments would be zero. Back in the multi-source CPLP, suppose now that the decision maker has the ability to raise the capacity of the unused site 5 from 5 to 6. This changes the solution so as to include facilities at sites 1, 3, 4, 5, and 7 (i.e., instead of site 6, we now use site 5), and the costs decrease from 2098 to 2086. Whether such a capacity expansion is (cost-)beneficial is a decision to be made by the decision maker.

Consider now the p-center problem. The objective is to locate p facilities so as to ensure that the longest customer-facility distance is as short as possible.

Pp-center: Min z
s.t. Σj yj = p

z ≥ Σj dij xij ∀ i
xij ≤ yj ∀ i, j
Σj xij = 1 ∀ i
yj = 0 or 1 ∀ j; xij = 0 or 1 ∀ i, j.

(The first set of constraints is aggregated in this formulation. They may be disaggregated to z ≥ dij xij ∀ i, j.) For our example with p = 3, we obtain a solution that locates facilities at nodes 2, 5, and 7 with the longest customer-facility distance (given that customers are allocated to their closest facilities) being 28. Note that there is an alternative optimal solution with facilities at nodes 2, 4, and 7. Starting with the former solution and solving the de-location problem for 2 remaining facilities, i.e., closing one facility, we obtain the solution that closes the facility at


node 7 and retains the facilities at nodes 2 and 5, with the longest customer-facility distance being 36.

So far, we have assumed that the facilities that are to be located are desirable, so that proximity to customers was an asset. In contrast, the facilities in the models below are undesirable, so that customers would like them to be located as far as possible from their own location. The anti-cover problem minimizes the total number of customers that are located closer than a predetermined covering distance δ to any of the facilities. The problem is formulated with the usual location variables yj and covering variables xi. The formulation is then

Panti-cover: Min z = Σi wi xi
s.t. Σj yj = p
xi ≥ yj ∀ i, j: dij ≤ δ
yj = 0 or 1 ∀ j; xi ≥ 0 ∀ i.

In our example with p = 2 facilities to be located and the covering distance δ = 34, we determine that facilities are to be located at sites 3 and 6. Given this location pattern, the customers at sites 2, 3, and 6 are affected by at least one facility each. Given their weights, a total of 14 customers are affected by the undesirable facilities.

In the literature, we also occasionally find so-called anti-median (maxian) or anti-center problems. See Pravas and Vijayakumar (2017) and Klein and Kincaid (1994) as appropriate references for each model. They are the natural counterparts of the median and center problems for undesirable facilities. Since we are not using either problem in this book, we will leave their discussion to the pertinent literature.

The next model minimizes the number of people that are exposed to pollution levels higher than deemed acceptable or legal. We will refer to it as the "at-risk coverage" model. For that purpose, suppose that the highest acceptable pollution level at customer node i is ui. If all nodes have the same acceptable level, we can set u = ui ∀ i. These levels may be legislated or conform to some communal standard. Define now cij as the pollution level at customer i that emanates from a facility at site j (we do not need an explicit attenuation function, just the value of the pollution, as the facility-customer distance is known), and define binary variables zi, which equal 1 if the pollution level at site i exceeds ui, and 0 otherwise. The pollution at site i is then the sum of the pollution from all facilities. We can then formulate a problem that minimizes the population that is exposed to higher levels of this pollutant.
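Both the p-center and the anti-cover results above can be reproduced by enumeration at this instance size; a brute-force sketch:

```python
from itertools import combinations

D = [[0, 12, 56, 35, 28, 42, 41],
     [12, 0, 23, 70, 46, 35, 51],
     [48, 23, 0, 60, 52, 34, 72],
     [35, 73, 66, 0, 25, 74, 52],
     [22, 46, 52, 68, 0, 46, 20],
     [48, 28, 34, 74, 47, 0, 46],
     [41, 52, 72, 11, 36, 46, 0]]
w = [4, 2, 8, 6, 7, 4, 2]

# p-center: minimize the longest customer-to-closest-facility distance.
def longest(sites):
    return max(min(D[i][j] for j in sites) for i in range(7))

print(min(longest(s) for s in combinations(range(7), 3)))  # -> 28

# De-location within {2, 5, 7}: the best two sites to retain.
best = min(combinations((1, 4, 6), 2), key=longest)
print(sorted(j + 1 for j in best), longest(best))          # -> [2, 5] 36

# Anti-cover: minimize the demand within delta = 34 of any open facility.
delta = 34
def affected(sites):
    return sum(w[i] for i in range(7)
               if any(D[i][j] <= delta for j in sites))

worst = min(combinations(range(7), 2), key=affected)
print(sorted(j + 1 for j in worst), affected(worst))       # -> [3, 6] 14
```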

Pat-risk coverage: Min z = Σi wi zi
s.t. Σj yj = p
Σj cij yj ≤ ui + Mzi ∀ i   (*)
yj = 0 or 1 ∀ j; zi = 0 or 1 ∀ i.

The constraint (*) can be explained as follows. In simple words, it reads: (the total pollution at site i) cannot exceed (the legal limit) plus either zero or a very large number, depending on the value of zi. Note that since a positive function of the binary variable zi is minimized in the objective function, zi will equal 0 whenever possible and will only be 1 if it needs to be. In other words, if the pollution at site i is less than the legal limit, the constraint will be satisfied even if zi equals zero, which it will. On the other hand, if the pollution exceeds the legal limit, then the only way that the constraint can be satisfied is if zi assumes a value of 1, in which case we count the population at that point in the objective function. This is the purpose of the model. In order to illustrate the concept, consider again the distances introduced at the beginning of this chapter and suppose that the pollution decay is obtained as the inverse value of the distance, except when a facility is located at the same site as a customer, in which case the pollution is set to 0.2. This is shown in the matrix

      .2000  .0833  .0179  .0286  .0357  .0244  .0192
      .0833  .2000  .0435  .0143  .0217  .0286  .0196
      .0208  .0435  .2000  .0167  .0192  .0294  .0139
D =   .0286  .0137  .0152  .2000  .0400  .0135  .0192
      .0455  .0217  .0192  .0147  .2000  .0217  .0500
      .0208  .0357  .0294  .0135  .0213  .2000  .0217
      .0139  .0909  .0278  .0238  .0244  .0217  .2000

Given the population weights w = [4, 2, 8, 6, 7, 4, 2] and a common acceptable pollution level of u = 0.22, we obtain the results shown in Table 4.3.
The remaining models with “push” objective include only facilities and no customers. The p-dispersion model attempts to locate p facilities so as to maximize the minimum distance between them. Most applications of this model are found in the military context, e.g., locating facilities so as to ensure that no other facility sustains a loss in case one facility is hit. Given distances dij, a “sufficiently large” number M >> 0, and the usual location variables yj, we can write (see, e.g., Kuby 1987)

Table 4.3 Optimal solutions of the at-risk coverage problem for different values of p

p    Locate at              z    % of population affected
1    1                      0    0
2    3, 7                   0    0
3    2, 6, 7                8    24
4    1, 2, 6, 7             12   36
5    1, 2, 4, 6, 7          18   55
6    1, 2, 4, 5, 6, 7       25   76
7    1, 2, 3, 4, 5, 6, 7    33   100

Table 4.4 Solutions of dispersion problems for various values of p

p    Open facilities at    z
2    4, 6                  74
3    1, 6, 7               41
4    1, 3, 4, 6            34
5    2, 3, 4, 5, 6         23
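The z-values in Table 4.3 can be verified by brute force on this small instance. The sketch below (our own enumeration, not the book's solution method; it assumes the pollution matrix and weights from the example above) checks all p-element subsets of sites; note that ties among optimal site sets are possible, so the reported sites may differ from the table while z agrees.

```python
from itertools import combinations

# Pollution c[i][j] at customer i caused by a facility at site j
# (values from the example above; self-pollution is fixed at 0.2)
C = [
    [.2000, .0833, .0179, .0286, .0357, .0244, .0192],
    [.0833, .2000, .0435, .0143, .0217, .0286, .0196],
    [.0208, .0435, .2000, .0167, .0192, .0294, .0139],
    [.0286, .0137, .0152, .2000, .0400, .0135, .0192],
    [.0455, .0217, .0192, .0147, .2000, .0217, .0500],
    [.0208, .0357, .0294, .0135, .0213, .2000, .0217],
    [.0139, .0909, .0278, .0238, .0244, .0217, .2000],
]
w = [4, 2, 8, 6, 7, 4, 2]   # customer weights (population)
u = 0.22                    # common acceptable pollution level

def at_risk(p):
    """Return (min population at risk, one optimal 1-indexed site subset)."""
    best = (sum(w) + 1, None)
    for sites in combinations(range(7), p):
        # a customer is at risk if its summed pollution exceeds u
        z = sum(w[i] for i in range(7)
                if sum(C[i][j] for j in sites) > u)
        if z < best[0]:
            best = (z, tuple(s + 1 for s in sites))
    return best
```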

P p-dispersion: Max z
s.t. Σ_j yj = p
z ≤ dij + M(2 − yi − yj) ∀ i, j: i < j
yj = 0 or 1 ∀ j.

Using the distances employed throughout this appendix and p = 2 facilities to be located, the solution has facilities located at sites 4 and 6 with an inter-facility distance of z = 74. Table 4.4 provides solutions and the largest minimum inter-facility distances for different values of p.
In a slightly different version, the model below does not maximize the minimum inter-facility distance, but rather the sum of distances between open facilities. It may be called the p-defense problem. Employing the usual definitions of variables, with additional binary variables zij, which equal 1 if facilities are located at both sites i and j, the problem can be written as

P p-defense: Max z = Σ_{i=1}^{n} Σ_{j=i+1}^{n} dij zij
s.t. Σ_j yj = p
zij ≤ yi ∀ i, j
zij ≤ yj ∀ i, j
yj = 0 or 1 ∀ j
zij = 0 or 1 ∀ i, j
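For small instances, dispersion-type problems can also be solved by complete enumeration rather than integer programming. A minimal sketch for the p-dispersion problem, using a small hypothetical distance matrix (not the chapter's data):

```python
from itertools import combinations

# Hypothetical symmetric distance matrix between four candidate sites
D = [[0, 12, 56, 35],
     [12, 0, 23, 70],
     [56, 23, 0, 66],
     [35, 70, 66, 0]]

def p_dispersion(p):
    """Maximize the minimum pairwise distance among p chosen sites."""
    best = (-1, None)
    for s in combinations(range(len(D)), p):
        # smallest inter-facility distance within this candidate subset
        m = min(D[i][j] for i, j in combinations(s, 2))
        if m > best[0]:
            best = (m, s)
    return best
```

For p = 2 this simply picks the farthest pair; in the formulation above, the big-M term plays the corresponding role of deactivating the constraint z ≤ dij whenever site i or site j is closed.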

Table 4.5 Solutions for the p-defense problem with total and average distances

p    Facilities open    z-value   Average distance
2    4, 6               74        74 (1 arc)
3    3, 4, 7            184       61.33 (3 arcs)
4    3, 4, 6, 7         338       56.33 (6 arcs)
5    2, 3, 4, 6, 7      517       51.7 (10 arcs)
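Since z in Table 4.5 totals the distances over all ½p(p−1) open-facility pairs, the average column follows by direct division; a quick check:

```python
# z-values from Table 4.5, keyed by p; with p facilities there are
# p*(p-1)/2 inter-facility arcs, so average = z / (number of arcs)
totals = {2: 74, 3: 184, 4: 338, 5: 517}

averages = {p: round(z / (p * (p - 1) // 2), 2) for p, z in totals.items()}
print(averages)  # matches the "Average distance" column of Table 4.5
```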

Note that the objective value z represents the total distance between all facilities that have been located. Since with p facilities there are ½p(p−1) edges between them, the average inter-facility distance for p facilities is z/[½p(p−1)]. The results for various values of p are shown in Table 4.5.
Finally, we would like to formulate and discuss a problem with a balancing objective. Many different types of balancing objectives are possible. One could, for instance, balance the use of the facilities, measured in terms of the number of customers they serve, coupled with a constraint that limits the average traveling distance between a customer and the facility at which the customer is served. Alternatively, one could balance the deviation of individual travel distances from the average. This is the road we take here. Our formulation is based to a large extent on the p-median formulation. More specifically, we formulate the following problem:

P deviation: Min z = Σ_i Σ_j (v − dij)² wi xij
s.t. v = (Σ_i Σ_j wi dij xij) / (Σ_i wi)    (*)
Σ_j yj = p
Σ_j xij = 1 ∀ i
xij ≤ yj ∀ i, j
yj = 0 or 1 ∀ j
xij ≥ 0 ∀ i, j.

With the exception of the objective and constraint (*), this is a standard p-median problem. Constraint (*) defines the additional variable v as the average customer–facility distance. The objective then minimizes the weighted sum of squared deviations of customers’ distances from the facilities they patronize. In order to be able to compare, we use the same numerical example we used to illustrate the p-median problem. Table 4.6 summarizes the results, starting with the 3-median solution obtained earlier. Using the above distance-balancing objective and leaving the average distance unrestricted results in the solution in the bottom row


Table 4.6 Solutions of formulations with balancing objective compared to the 3-median solution

Solution                                               Facility locations   Average traveling distance
3-median                                               3, 4, 5              9.58
Balancing objective, average distance v ≤ 12           2, 4, 5              11.0909
Balancing objective, average distance v ≤ 20           2, 5, 7              19.2121
Balancing objective, average distance v unrestricted   1, 6, 7              47.64

of the table: the average customer-facility distance is almost five times as long as that in the 3-median solution. The distances between customers and facilities are all very similar, but also very long. In order to avoid this usually undesirable feature, we have added an upper bound on the average distance v. Bounds for v-values of 12 and 20 are shown in Table 4.6 as well.
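To make the balancing objective concrete, the two quantities involved — the average distance v from constraint (*) and the weighted sum of squared deviations — can be computed directly for a fixed customer-to-facility assignment. A sketch on hypothetical data (not the chapter's instance):

```python
# Hypothetical data: customer weights and each customer's distance
# to the facility it is assigned to (i.e., with xij already fixed)
w = [4, 2, 8, 6]
d = [10.0, 14.0, 8.0, 12.0]

# Constraint (*): v is the weighted average customer-facility distance
v = sum(wi * di for wi, di in zip(w, d)) / sum(w)

# Objective of P deviation: weighted sum of squared deviations from v
z = sum(wi * (v - di) ** 2 for wi, di in zip(w, d))
```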

References

E. Abbey, In defense of the redneck, in Abbey’s Road: Take the Other (Penguin Books, London, 1991)
V. Bayram, B.Y. Kara, F. Saldanha-da-Gama, H. Yaman, Humanitarian logistics under uncertainty: planning for sheltering and evacuation, in Uncertainty in Facility Location Models – Incorporating Location Science and Randomness, ed. by H.A. Eiselt, V. Marianov (Springer, Cham, 2023)
B.N. Boots, A. Getis, Point Pattern Analysis (Sage, Newbury Park, CA, 1988)
M.L. Burkey, J. Bhadury, H.A. Eiselt, A location-based comparison of health care services in four US states with efficiency and equity. Socio Econ. Plan. Sci. 46(2), 157–163 (2012)
R.L. Church, C.S. ReVelle, The maximal covering location problem. Pap. Reg. Sci. Assoc. 32, 101–118 (1974)
R.L. Church, K.L. Roberts, Generalized coverage models and public facility location. Pap. Reg. Sci. Assoc. 53, 117–135 (1983)
L. Cooper, Heuristic methods for location–allocation problems. SIAM Rev. 6, 37–53 (1964)
K.M. Curtin, R.L. Church, A family of location models for multiple-type discrete dispersion. Geogr. Anal. 38, 248–270 (2006)
M.S. Daskin, Maximum expected covering location model: formulation, properties, and heuristic solution. Transp. Sci. 17, 48–70 (1983)
M.S. Daskin, E.H. Stern, A hierarchical objective set covering model for emergency medical service vehicle deployment. Transp. Sci. 15(2), 137–152 (1981)
H.A. Eiselt, G. Laporte, Objectives in location problems, in Facility Location: A Survey of Applications and Methods, ed. by Z. Drezner (Springer, New York, 1995), pp. 151–180
E. Erkut, S. Neuman, Analytical models for locating undesirable facilities. Eur. J. Oper. Res. 40(3), 275–291 (1989)
H.B. Fisher, G. Rushton, Spatial efficiency of service locations and the regional development process. Pap. Reg. Sci. 42(1), 83–97 (1979)
C. Gini, Variabilità e mutabilità (1912). Reprinted in E. Pizetti, T. Salvemini (eds.), Memorie di Metodologica Statistica (Libreria Eredi Virgilio Veschi, Rome, 1955)
S.L. Hakimi, Optimum locations of switching centers and the absolute centers and medians of a graph. Oper. Res. 12, 450–459 (1964)


S.L. Hakimi, Optimum distribution of switching centers in a communication network, and some related graph theoretic problems. Oper. Res. 12, 462–475 (1965)
S.L. Hakimi, On locating new facilities in a competitive environment. Eur. J. Oper. Res. 12, 29–35 (1983)
J. Halpern, The location of a center-median convex combination on an undirected tree. J. Reg. Sci. 16, 237–245 (1976)
H. Hotelling, Stability in competition. Econ. J. 39, 41–57 (1929)
B.M. Khumawala, An efficient algorithm for the p-median problem with maximum-distance constraint. Geogr. Anal. 5, 309–321 (1973)
C.M. Klein, R.K. Kincaid, The discrete anti-p-center problem. Transp. Sci. 28(1), 77–79 (1994)
T.A. Kohler, J. Kresl, C. Van West, E. Carr, R.H. Wilshusen, Be there then: a modeling approach to settlement determinants and spatial efficiency among late ancestral Pueblo populations of the Mesa Verde region, US Southwest, in Dynamics in Human and Primate Societies: Agent-Based Modeling of Social and Spatial Processes, ed. by T.A. Kohler, G.J. Gumerman (Oxford University Press, Oxford, 2000), pp. 145–178
J. Krarup, P.M. Pruzan, The simple plant location problem: survey and synthesis. Eur. J. Oper. Res. 12, 36–81 (1983)
M.J. Kuby, Programming models for facility dispersion: the p-dispersion and maxisum dispersion problem. Geogr. Anal. 19(4), 315–329 (1987)
M.O. Lorenz, Methods of measuring the concentration of wealth. Publ. Am. Stat. Assoc. 9(70), 209–219 (1905)
M.T. Marsh, D.A. Schilling, Equity measurement in facility location analysis: a review and framework. Eur. J. Oper. Res. 74, 1–17 (1994)
I.D. Moon, S.S. Chaudhry, An analysis of network location problems with distance constraints. Manag. Sci. 30(3), 290–307 (1984)
S. Nickel, J. Puerto, A unified approach to network location problems. Networks 34(4), 283–290 (1999)
S. Nickel, J. Puerto, A.M. Rodríguez-Chía, Location problems with multiple criteria, Chapter 9, in Location Science, ed. by G. Laporte, S. Nickel, F. Saldanha da Gama (Springer Nature, Cham, 2019), pp. 215–260
F. Plastria, Continuous location problems, in Facility Location: A Survey of Applications and Methods, ed. by Z. Drezner (Springer-Verlag, New York, 1995)
K. Pravas, A. Vijayakumar, Convex median and anti-median at prescribed distance. J. Comb. Optim. 33(3), 1021–1029 (2017)
C. ReVelle, The maximum capture or “sphere of influence” location problem: Hotelling revisited on a network. J. Reg. Sci. 26(2), 343–358 (1986)
D.R. Shier, A min-max theorem for p-center problems on a tree. Transp. Sci. 11, 243–252 (1977)
P.J. Slater, Maximin facility location. J. Res. Natl. Bur. Stand. B Math. Sci. 79B(3–4), 107–115 (1975). Available online at https://books.google.ca/books?hl=en&lr=&id=f58QuJJTc-YC&oi=fnd&pg=RA1-PA107&dq=Slater+PJ+(1975)+Journal+of+Research+of+the+National+Bureau+of+Standards+79B.&ots=oAH9zrjpo&sig=wya_UstQx5R9pVDQu2l8ehjnQRg&redir_esc=y#v=onepage&q&f=false, last accessed on 9/15/2022
P. Slater, Centers to centroids in graphs. J. Graph Theory 2(3), 209–222 (1978)
A. Tamir, The k-centrum multi-facility location problem. Discret. Appl. Math. 109(3), 293–307 (2001)
M.B. Teitz, P. Bart, Heuristic methods for estimating the generalized vertex median of a weighted graph. Oper. Res. 16, 955–961 (1968)


The Economist, Britain’s failure to build is throttling its economy. The Economist, September 1, 2022. Available online behind paywall at https://www.economist.com/leaders/2022/09/01/britains-failure-to-build-is-throttling-its-economy, last accessed on 9/15/2022
C.R. Toregas, C. ReVelle, Optimal location under time or distance constraints. Pap. Reg. Sci. Assoc. 28, 133–144 (1972)
C. Toregas, R. Swain, C. ReVelle, L. Bergman, The location of emergency service facilities. Oper. Res. 19, 1363–1373 (1971)
H. von Stackelberg, Grundlagen der theoretischen Volkswirtschaftslehre (translated as The Theory of the Market Economy) (W. Hodge, London, 1943)

Chapter 5

Mathematical and Geospatial Tools

This chapter briefly presents some of the mathematical and geospatial tools (for geospatial analysis, see, e.g., de Smith et al. 2018) that are important in the context of multicriteria location decisions (see Malczewski 1999). The first section discusses interpolation and curve fitting techniques. These methods are important in case measurements of some attribute have been taken at certain sites (so-called observation points), whereas the values of these attributes are needed at other points. For instance, seismic tests have been taken at some discrete points that provide a profile of different subsoil strata. Information regarding the thickness of, say, coal seams is needed in areas in which the Federal Government offers land leases for the exploitation of coal. Given that the observation points are reasonably close and there are no major geological faults in between, interpolation of known data can provide important clues regarding the feasibility of exploiting the natural resource. The second section in this chapter deals with tools available in geographical information systems (GIS). Arguably, GIS is one of the most important developments in the field of location theory in the last 60 years. Some information about the history of the field can be gleaned from ESRI (a company founded in Redlands in 1969 by Jack and Laura Dangermond). GIS is all about the collection, representation, manipulation, and visualization of spatial data. As such, it is not only an invaluable tool for the analyst, but also a great means to demonstrate solutions and their usefulness to decision makers. The third and final section in this chapter deals with voting methods. These techniques are useful whenever multiple decision makers are present. Just like in general elections, voting processes are designed to aggregate the diverse assessments of multiple decision makers into a reasonable compromise.

© Springer Nature Switzerland AG 2023 H. A. Eiselt et al., Multicriteria Location Analysis, International Series in Operations Research & Management Science 338, https://doi.org/10.1007/978-3-031-23876-5_5

5.1 Interpolation/Curve Fitting

Consider the following scenario. In addition to the usual two sets of points in location problems, viz., the customer points that represent the locations of customers in the chosen space and the location points that represent the potential sites of the facilities (often these two sets are collapsed into one for reasons of simplicity), we also have measurement points or observation points. At these points, measurements have been taken concerning some variable of interest, e.g., population, noise pollution, income, air quality, etc. While it would be beneficial to have measurements taken at the customer points, this may either not be practical or may not have been done for some other reason. The task is then to interpolate the measurements between the measurement points and find their levels at customer points or any other desired point. As far as terminology is concerned, we distinguish between independent variables, which are variables that are supplied by the decision maker, and dependent variables, which are outcomes determined by the interplay between the independent variables. In the locational context, in the plane, there are typically two independent variables, viz., the coordinates of the point at which measurements are desired.

5.1.1 Regressions

The task of interpolation is accomplished by statistical regression. We distinguish between three types of regression: simple regression with a single independent and one dependent variable, multiple regression with multiple independent variables and one dependent variable, and finally multivariate regression with an arbitrary number of independent and dependent variables. Examples of these types are as follows. Suppose that we have a timeline with values assumed by one independent variable representing individual years, while the dependent variable may represent the consumer price index, annual inflation, or any other measure of interest. Or, imagine a cross section of a city, along which surveys regarding income values have been taken and plotted for some measurement points. In either case there is only one independent variable (the year, or the position along the cross section) and one dependent variable, viz., the value of interest, so this is a case for simple regression, which allows us to determine the relationship between the independent variable and, say, the income value. On the other hand, consider two independent variables, e.g., age and education level, and estimate the dependent variable, which may be income. Or, in the spatial context, two independent variables could be the coordinates of a customer, for whom we are then to estimate the dependent variable, such as the demand. In both cases, multiple regression is called for. Finally, given a multidimensional input such as a location in a two-dimensional plane and maybe also the time of year as independent variables, we may want to estimate the demand for a number of different products. This is a case for multivariate regression.


In addition, regressions may also include indicator variables (also referred to as dummy variables), which are zero-one variables that indicate non-quantitative attributes, such as gender, religion, etc. Each regression function may be either linear or nonlinear. The linear case is easy—there is only one linear function for the chosen number of independent variables and the dependent variable. The nonlinear case is trickier, as there is a large variety of nonlinear functions the user has to choose from. Note that it is always possible to fit a linear function to any set of data points; the question is how good a fit it is. For instance, if we were to choose a number of points on, say, a parabola, a simple linear regression will calculate the best-fitting straight line for the points. It will be the best fit of any linear function, but it will not be a good fit. In order to determine how well the calculated function fits the dataset, a measure R², the coefficient of determination, has been developed. It is always a number between zero and one, such that values close to 1 indicate a good fit, while values close to zero indicate that the line is not a very good fit. Slightly more technically, R² indicates the proportion of the variation in the data that is explained by the line. In multiple regression, the addition of more independent variables will always cause the R² value to increase, even if those independent variables have nothing to do with the dependent variable or are even random. This has given rise to so-called adjusted R² values, which differ, albeit usually only slightly, from the original R² value. An online multiple regression calculator can be found in Social Science Statistics (2022).
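The parabola example can be made concrete in a few lines. The sketch below fits a least-squares line by the standard closed-form formulas and computes R²; for points placed symmetrically on a parabola, the best line is flat and R² is exactly 0, i.e., the line explains none of the variation:

```python
def simple_regression(x, y):
    """Least-squares line y = a + b*x; returns (a, b, R^2)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    a = my - b * mx
    ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return a, b, 1 - ss_res / ss_tot

# Points on the parabola y = x^2: the best linear fit is horizontal
xs = [-2, -1, 0, 1, 2]
ys = [x * x for x in xs]
a, b, r2 = simple_regression(xs, ys)
```

The adjusted R² mentioned above would replace 1 − SSres/SStot by 1 − (1 − R²)(n − 1)/(n − k − 1), where k is the number of independent variables.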

5.1.2 Voronoi Diagrams

The concept of Voronoi diagrams (or Thiessen polygons or Wigner–Seitz cells) has been reinvented multiple times. To our knowledge, Voronoi (1908) was the first to describe the concept in the context of his work on quadratic forms. Three years later, Thiessen (1911) reinvented the concept for a statistical problem in geography, and Wigner and Seitz described it in 1933 in the context of solid-state physics. A reasonably recent survey of the concept can be found in Burkey et al. (2011). Since it is closest to our purpose in this book, we will use Thiessen’s problem to illustrate the concept.
Thiessen’s original problem concerned the estimation of rainfall by using data from a number of rain gauges in a district. In order to calculate the average (or total) rainfall in the district, he could have simply computed the plain average. However, a measurement taken in the center of a district with no other gauges in the vicinity would surely be more telling than a measurement at some gauge in the corner of the district surrounded by other gauges. The idea was then to start with the locations of the facilities, or “seeds” as they are often referred to, and to determine for each seed an area associated with it. Such an area consists of all points in the plane that are closer (with respect to some metric) to the seed it is associated with than to any other seed. These areas, referred to as Thiessen polygons or Voronoi areas, are mutually exclusive and collectively exhaustive, so that their union is a tessellation of the given space. Such tessellations, most often referred to as Voronoi diagrams, can be constructed for Euclidean spaces, networks, and others. Of specific interest to us are the Euclidean plane and networks.

Fig. 5.1 Voronoi diagram with straight-line distances for five seed points

Voronoi diagrams have often been used in the context of marketing, where each Thiessen polygon represents the market area of its seed point. The idea then is that any point in some Thiessen polygon is more influenced by the seed point the polygon is associated with than by any other seed. Given that, Thiessen suggested computing the average rainfall in the total area as a weighted average, where the weights are the areas of the Thiessen polygons associated with the seeds. The difference between using regression analysis and Voronoi diagrams is that regression functions assume that each point in the plane is influenced by all facilities or seed points, whereas in Voronoi diagrams, a point is influenced only by its closest facility. While the construction of Voronoi diagrams is computationally easy in networks and in the Euclidean plane (see, e.g., Fortune 1986, or, for a survey of methods and applications in the field, de Berg et al. 2008), optimizing facility locations by means of Voronoi diagrams (e.g., locating a facility so as to maximize its market area) is difficult in the plane, where global optimization techniques are required.
Consider now two-dimensional Voronoi diagrams. The input will be a set of seeds and their coordinates, as well as an indication of the metric that is to be used. Here, we will restrict ourselves to the Euclidean metric, i.e., straight-line distances. It is then possible via Fortune’s algorithm (1986) to determine the Voronoi diagram

Fig. 5.2 Tessellation of Florida given seed points shown in green (Source: U.S. Census Bureau)

efficiently. An example of a Voronoi diagram is shown in Fig. 5.1 for five seed points. For example, the seed in the northwest has the area surrounding it associated with it (bounded by whatever natural borders are present in the application); in the context of retail location, the size of this area could give an indication of the firm’s market area and with it the potential sales, at least as long as we can assume that customers patronize the facility closest to them. While the construction of a Voronoi diagram may be easy, optimization problems that involve Voronoi diagrams are anything but: typically, they require tools from global optimization. Such a problem arises in the seemingly simple case in which one of the facilities in Fig. 5.1 considers relocating. If any of the seeds were to move, all line segments that border its Thiessen polygon would tilt and its market area would change. Another example of a Voronoi diagram is shown in Fig. 5.2, which depicts the state of Florida and a tessellation of the state given the seeds indicated as green dots.
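Thiessen's weighting scheme is easy to sketch with a discrete approximation: assign each cell of a fine grid to its nearest seed (a raster Voronoi diagram), use the cell counts as stand-ins for the polygon areas, and average the gauge readings with those areas as weights. All coordinates and readings below are hypothetical:

```python
import math

seeds = [(2, 8), (7, 9), (5, 5), (2, 2), (8, 3)]   # rain-gauge locations
rain  = [30.0, 22.0, 25.0, 40.0, 18.0]             # readings at the gauges

# Rasterize a 10 x 10 region; count cells nearest to each seed
# (Euclidean metric); the counts approximate the Thiessen polygon areas
steps = 100
counts = [0] * len(seeds)
for a in range(steps):
    for b in range(steps):
        x, y = 10 * (a + 0.5) / steps, 10 * (b + 0.5) / steps
        nearest = min(range(len(seeds)),
                      key=lambda s: math.dist((x, y), seeds[s]))
        counts[nearest] += 1

# Thiessen's estimate: readings weighted by (approximate) polygon areas
thiessen_avg = sum(r * c for r, c in zip(rain, counts)) / sum(counts)
```

Fortune's sweep-line algorithm constructs the exact diagram in O(n log n) time; the grid approximation above trades that precision for a few transparent lines of code.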

5.2 GIS Tools

Geographic information systems provide a wealth of tools and options. This section will survey just a very small portion of the so-called geoprocessing tools, which are important for the process of finding a suitable location.

5.2.1 Basic Elements and Features

Geographical information systems, or GIS for short, are a set of tools that are used to collect, store, modify, analyze, and present spatial data. Spatial data are all kinds of data that can be pinpointed to a place on a map. In some sense, geographical information systems share some features and goals with statistics, in the sense that both branches attempt to make quantitative and qualitative information more visible. One of the basic tenets of geographical information systems is Tobler’s first law of geography (Tobler 1970): “Everything is related to everything else, but near things are more related than distant things.” This principle was also at work in an incident more than a hundred years ago, which can be seen as one of the forerunners of GIS applications. It concerns the outbreak of cholera (a disease that is transmitted by food and water) in Broad Street, London in 1854. The physician Dr. John Snow mapped the locations at which the instances of cholera occurred, as well as the locations of the pumps at which residents obtained their water. The slightly modified map is shown in Fig. 5.3. The figure shows the occurrences of cholera as black rectangles (whose size indicates the number of incidents) and the locations of the water pumps as red dots. The proximity of many of the black rectangles to the pump on Broad Street at about the center of the map suggested that water from this pump could be the culprit. And indeed, upon examination it turned out that the water from this pump was untreated and taken from a particularly dirty stretch of the River Thames. Modern geographical information systems have formalized this idea and apply the Voronoi diagrams that were discussed earlier. Useful references are the books by Law and Collins (2015), Bolstad (2016), Graser (2016), and the three-part tutorial series by Gorr and Kurland (2016), Allen (2013), and Allen and Coffey (2010).
For the history of geographical information systems, see, e.g., Esri (undated-a). The concept of geographical information systems is based on maps. Maps include, at the very least, the boundaries of the area under consideration (continents, countries, states, etc.) plus the features of interest. These will or may include cities and towns, bodies of water and other geographical features, but also rainfall, income distributions, demographics, crime statistics, or any other features that may be of interest to the decision maker. Based on the information included on the map, we can distinguish between geographical and thematic maps. In addition to the features, all maps will (or should) include

Fig. 5.3 Dr. Snow’s map with the occurrences of cholera and the locations of the water pumps (Snow 1854, drawn and lithographed by Charles Cheffins, public domain, red points added)

• A north symbol. It is shown as an arrow that typically, but not necessarily, points to the top of the map. Examples in which this is not the case can be found, e.g., in the Osher Map Library (2018) (last map on the site). Furthermore, north may also point in a different direction, as map makers try to fit the relevant part of the map onto a page.
• Scale. The scale of a map is the ratio of one unit of distance on the map to the number of the same units of distance in “the field,” i.e., in reality. While “small-scale maps” show a large area on the map, large-scale maps show only a small area. As an example, a map of Europe on a letter-sized page may be of scale 1:12,000,000, a map of the State of Florida on the same-sized page would be approximately 1:2,000,000, while larger-scale road maps are typically of scale 1:300,000. More detailed hiking maps are often 1:50,000 topographical maps, or the also popular 1:63,360 maps: “an inch (on the map) to a mile” (in reality). Even larger-scale maps could be those used for the design of a subdivision, where

Fig. 5.4 (a) Map with scale 1:500,000. (b) Map with scale 1:250,000. (c) Map with scale 1:50,000. (d) Map with scale 1:25,000

scales such as 1:1250 may be used. Even though this is not necessarily the case, larger-scale maps tend to provide more detail than small-scale maps. Examples of maps with different scales are shown in Fig. 5.4a–d (the source for all four maps is Canada, Department of Energy, Mines and Resources, Surveys and Mapping Branch).
• Legend. The legend includes information on how the features are shown on the map. For instance, it will indicate bodies of water in blue, wooded areas in dark green, fields and agriculturally used areas in light green, roads in white, and so forth. On thematic maps, it may show high-crime areas (properly defined) in dark red and low-crime areas in white or very light red, with all shades of differences in between. In addition, the legend will typically also include a copyright notice and, for small-scale maps, the type of projection that is used. There are different types of projections that project a roughly spherical body such as the earth onto a planar map. There is no single projection that can simultaneously preserve the shape of the areas, the size of the areas, the distances, and other features.
Figure 5.5a–d shows a number of different thematic maps. They include income distributions (useful for the planning of public and private facilities), the age

Fig. 5.5 (a) Median household incomes in Colorado. (b) Proportion of population over 65 in Colorado. (c) Education levels in Colorado. (d) Number of employment opportunities in Colorado

distribution of seniors in an area (which will be useful to plan establishments that cater to seniors), the distribution of people with different educational levels (important for companies that plan to open facilities in the area in order to ensure that an appropriately educated labor force is available), the number of employment opportunities (important for families who want to move to the area), and many others. In addition to the thematic maps based on socio-economic data, many other thematic maps can be thought of for different purposes. Pertinent examples are land use maps, topographical maps, geological maps, weather maps, and many more. The features on a map are organized in so-called layers, each of which comprises one feature. For instance, one layer may include only the bodies of water, one layer may include boundaries between counties, one could feature roads or major cities, etc. Layers can be thought of as the transparencies of old, each including some type of information, but when put on top of each other, they provide a wealth of all types of information. The concept of layers is well known to users of photo, video, and audio editing programs, in which individual features, such as lettering in images, tracks such as subtitles in videos, and tracks of instruments, all can be addressed individually and separately. The final map that is shown to the user is then the union of all those layers that are deemed of interest to the user. Again, this is similar to


flattening and rendering in photo and video editing, where the different layers are put together into a final product, in which parts of the image or the video can no longer be addressed individually.
There are two types of representations used in geographical information systems: raster and vector representation, each of which has its own advantages and disadvantages. Raster representation is similar to digital photography: the space under consideration is subdivided into squares or cells, each of which is then assigned one property, e.g., a color that represents a certain feature such as average temperature in January, income level, altitude, etc. The advantages are that this representation is the native format of satellite imagery and scanned images, that it is easy to manage, and that it lends itself to quantitative analyses. It is also not difficult to mix discrete and analog data. The disadvantages are its fixed resolution and that it produces jagged edges (a pinking-shears effect) on lines, which does not result in pleasant output. An excellent description is provided by Gomez Graphics (2020). On the other hand, there is vector representation. This representation uses points, straight lines, and polygons (sets bounded by straight lines) as its basic elements. All figures displayed in this format consist of a combination of these three elements; the maps displayed by GPS devices are typical examples. An advantage of vector representation is that the quality of the output does not depend on some fixed resolution, while its major disadvantage is that vector data are more difficult to manage and manipulate.

5.2.2 Geoprocessing Tools

Once the features deemed important by the decision maker are included on a layer, we are ready to manage and manipulate them. In order to do so, we will use a number of so-called geoprocessing tools. Below, we describe and illustrate seven of these tools using three features: a short definition, a pictorial illustration, and a brief discussion of their applications in spatial decision making. Whenever convenient, we will describe the functions as O = I ★ M, where I and O symbolize input and output, respectively, ★ indicates the type of operation we desire to make, and M denotes the modifier used in the operation. For many of these operations, distance functions are required. For information regarding different distance functions, see, e.g., Chap. 4. In all schematic graphs below, the input I is shown as a box, the modifier M is shown as a circle, and the output O is shown in red. 1. The intersect operation. This operation has ★ = ∩, i.e., the output will include all information included in both the input I and the modifier M. Here, I and M can both be layers. This situation is illustrated in Fig. 5.6, in which the two original sets are represented by a rectangle and a circle, respectively, while their intersection is shown in red. One application is a situation, in which the user is planning to market luxury products such as cruises or seniors’ homes. Hence, one of the inputs I could be a

5.2 GIS Tools

117

Fig. 5.6 The “intersect” operation

Fig. 5.7 Example of the “intersect” operation in Calgary, Alberta (Source: Statistics Canada, 2017, 2016 Census of Population) Fig. 5.8 The “clip” operation

map of counties with at least δ percent of residents in the 65 plus age bracket, while the modifier M could be a map of the areas in which the average income is above, say, $100,000. The intersection O will then provide areas that have at least δ percent of the population in the 65 plus age bracket and that comprise households with incomes of at least $100,000. An example is shown in Fig. 5.7a–c: Fig. 5.7a shows areas of Calgary, Alberta, Canada, with household incomes of $100,000 and above, Fig. 5.7b shows areas of Calgary in which 15% or more of the population is 65 years of age and above, and Fig. 5.7c is the intersection of the two, i.e., areas with at least $100,000 household income and at least 15% of residents aged 65 and above. 2. The clip operation. This operation is a special case of the intersection, in which the modifier M is a blank layer with no information other than its boundary. In other words, clip will result in that part of the original input map I that is of interest to the decision maker. An illustration of this operation is shown in Fig. 5.8.
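The intersect and clip operations can be sketched with the shapely library, a common Python package for this kind of vector-geometry work. The rectangles below are hypothetical stand-ins for the Calgary layers, not the book's data:

```python
# Two hypothetical rectangular "layers": tracts with many seniors (input I)
# and tracts with high household income (modifier M).
from shapely.geometry import box

seniors = box(0, 0, 4, 3)              # I: at least delta percent seniors
income = box(2, 1, 6, 5)               # M: household income >= $100,000
target = seniors.intersection(income)  # O = I ∩ M
print(target.bounds)                   # (2.0, 1.0, 4.0, 3.0)

# "clip" is the same operation, with a blank boundary layer as the modifier:
frame = box(1, 0, 3, 10)               # boundary of the area of interest
clipped = seniors.intersection(frame)
print(clipped.area)                    # 6.0
```

In a full GIS, the same operation runs on entire polygon layers rather than on single rectangles, but the set logic is identical.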


Fig. 5.9 Maine and New Brunswick (left) and just Maine with the non-U.S. parts blocked out (right) (Source: Natural Resources Canada) Fig. 5.10 The “erase” operation

The clip operation will result in small maps that show only those parts of the entire map (such as a state or a county) that are of interest to the decision maker. For example, if we have an input I that is a topographical map of a border region of the United States (bottom left area) and Canada (top right area) and a modifier M that is just the outline of the State of Maine, the clip operation will show a map of Maine with its topographical features. This is shown in Fig. 5.9. 3. The erase operation. With this operation, ★ = \, i.e., the set difference. This operation is illustrated in Fig. 5.10, in which the gray rectangle symbolizes the map, while the white circle shows the part of the map that is to be erased. Note that the erase operation can be seen as an intersection operation with the modifier M being the complement of the part that is to be erased. An example of the erase operation is a map, from which certain areas are to be deleted. These areas could be sensitive military installations, nuclear power plants, communication hubs, and so forth. For instance, on Google Maps, the airport of the Greek island of Skiathos is pixelated for some reason. To demonstrate the effect, a part of the city of Fredericton, New Brunswick, Canada, is “erased” by pixelating it as shown in Fig. 5.11. 4. The buffer operation. This operation takes the vector input and generates a buffer around it at a predefined distance δ. In other words, the boundary of the buffer is


Fig. 5.11 Erased area on map of Fredericton, NB, Canada (Source: Statistics Canada, 2016 Census of population)

Fig. 5.12 The “buffer” operation. (a) Points with buffer. (b) Lines with buffer. (c) Polygon with buffer

the set of points that are δ units away from the closest input point. Fig. 5.12 shows buffers for the main types of shapes, viz., points, lines, and polygons. Buffers are very useful in locating facilities. They can either indicate areas, in which it is prohibited to locate a facility (e.g., landfills cannot be located closer than a certain distance to residential buildings or bodies of water), which are then


Fig. 5.13 Exclusion zones in Saint John, New Brunswick, Canada (Contains information licensed under the Open Government License – City of Saint John)

Fig. 5.14 The “union” operation

referred to as exclusion zones. Alternatively, it may be required that a landfill cannot be farther away than a certain distance from an existing road, so as to avoid costly road construction. This would then define an inclusion zone (which is, of course, the difference between the universe and the exclusion zone). An example of exclusion zones is shown in Fig. 5.13. The figure shows a part of downtown Saint John, New Brunswick, Canada, with exclusion zones in pink around schools, municipal buildings, and a major tourist area. Such exclusion


Fig. 5.15 (a) Shows the areas of Vancouver City, BC, Canada, with census tracts that include at least 21% of seniors (65 years and higher), (b) shows all census tracts with an average household income of at least $100,000, and (c) shows all areas that satisfy at least one of the two criteria (Source: Statistics Canada, Census of Population, 2021)

zones may apply to locations of seedier facilities, such as strip clubs, or similar facilities. 5. The union operation. Here, we set ★ = ∪, i.e., the output will include all information that is included on at least one of the layers. This operation is shown in Fig. 5.14. As an example of this operation, consider the location of a Cadillac dealership. The owner wants to be located in an area that represents the clientele he wants to address. On the one hand, large and comfortable cars have a very strong appeal to seniors, and, on the other hand, luxury cars tend to be expensive, so he wants the area he locates in to reflect that. More specifically, the planner wants to locate in an area that has at least 21% seniors or in which the average household income is at least $100,000. This is illustrated in Fig. 5.15 for the city of Vancouver, British Columbia, Canada. 6. The merge operation. This operation puts together I and M, given that I ∩ M = ∅. An illustration of the merge operation is shown in Fig. 5.16. A typical example of merge is the combination of maps of two adjacent areas, e.g., the States of Arizona and New Mexico, into one larger map as shown in Fig. 5.17. This operation is important if, for instance, one retail chain acquires


Fig. 5.16 The “merge” operation

Fig. 5.17 “Merging” the states of Arizona and New Mexico (Map data © 2022 Google) Fig. 5.18 The “dissolve” operation

another chain, and the objective is to determine the market areas of the joint operation. However, the “merge” operation is not restricted to the assembly of individual map parts as shown in Fig. 5.17. We also “merge” if we take layers with different parts of a map, e.g., one with political boundaries, cities, and roads, and merge it with another set of layers that may include topographical features such as rivers and contour lines. 7. The dissolve operation. This operation takes a layer with multiple types of information and amalgamates all those features that share a common property into one. The general idea is shown in Fig. 5.18, where the graph on the left includes not only the individual counties (the polygons) of an area, but also


Fig. 5.19 Example of the “dissolve operation” in the 1968 U.S. presidential election (Source: US Census Bureau 2020, TIGER/Line Shapefiles)

smaller units such as municipalities. In the figure on the right, the municipalities have been dissolved, thus providing a clearer picture of the area and its counties. As an example of the dissolve operation, suppose that you have one layer of a map of the counties in some state that also shows the voting behavior of individual areas, e.g., red and blue. In Figure 5.19, the states that voted for Nixon in the 1968 election are shown in red, those for Humphrey in blue, and those for Wallace in yellow. The output will put together all adjacent states of the same color and delete the individual state lines, so as to increase the visibility of the patterns on the map.
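Under the same assumptions as before (shapely as a stand-in for a full GIS, toy geometries instead of real layers), the buffer, union, and dissolve operations can be sketched as follows:

```python
from shapely.geometry import Point, box
from shapely.ops import unary_union

# buffer: all points within distance 1 of a (hypothetical) school location
exclusion = Point(0, 0).buffer(1.0)
print(exclusion.area)                 # close to pi (polygonal approximation)

# union: O = I ∪ M; the overlap is counted only once
a, b = box(0, 0, 2, 2), box(1, 1, 3, 3)
print(a.union(b).area)                # 7.0, not 4.0 + 4.0

# dissolve: amalgamate features sharing a property into one shape
same_color = [box(0, 0, 1, 1), box(1, 0, 2, 1), box(5, 5, 6, 6)]
dissolved = unary_union(same_color)   # internal boundaries disappear
print(len(dissolved.geoms))           # 2 disjoint patches remain
```

Note how the two adjacent unit squares are dissolved into a single polygon, while the distant square remains a separate patch, exactly the behavior sketched in Fig. 5.18.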

5.3 Voting Procedures

In case the problem under consideration features multiple decision makers, it will be necessary to “somehow” aggregate the decision makers’ preferences. Pertinent references are Smith (1973), Vincke (1982). We need to keep in mind that we are dealing with selection problems, i.e., problems in which we need to identify one solution that appears to be most preferable. Apart from using specific techniques, it would be beneficial if the decision makers could agree on their objectives or “align their objectives” before assessing the direction they want to move into. A very good example of common mergansers agreeing on a direction is shown in Fig. 5.20a, b. In order to initiate the process, it will be necessary for the decision makers to agree on a number of issues. More specifically, the decision makers will identify a number of relevant decisions and a number of criteria. While this sounds obvious, it is a task that may not be trivial. However, in case the decision makers cannot agree on acceptable decisions, we can consider the union of all decisions that are considered by the decision makers, and if one decision maker does not like or agree with a decision, he may rate or rank them as low as desired, or, depending on the system used, can assign low attributes to it. Similarly, if it is not possible to agree on the


Fig. 5.20 (a) Indecision (© H.A. Eiselt). (b) A common objective has been found (© H.A. Eiselt)

relevant criteria, a decision maker who does not consider a criterion important can either put a zero weight on it or, equivalently, simply ignore it. At this point, each decision maker has the task to either rate each decision on each criterion (on a ratio scale) or simply rank the decisions (on an ordinal scale), which just indicates that a decision maker prefers one decision over another, but not by how much. As the goal is to choose one decision only, ratings are only used for the aggregation of preferences. It would be tempting to use one of the many different voting procedures described in the literature (see, e.g., Balinski and Young 2001), but there is a main difference between elections and multicriteria decision making. In addition to the much smaller number of decision makers (as opposed to voters), the goal is not to elect somebody or something, but to find a consensus. Nevertheless, we will summarize some voting procedures here, which can be either used directly or suitably modified for our purposes. In the case of multiple decision makers, we could proceed as follows. As discussed in Chap. 3, each decision maker can separately and individually combine the rankings or ratings of the relevant criteria into a ranking that reflects his own preference structure. These rankings can then be aggregated among the decision makers by way of some function, so as to arrive at an overall ranking that is, at least hopefully, agreeable to at least a plurality of decision makers. Some refinements of this procedure can be thought of. One such example is to deal with decisions that (in the eyes of a single decision maker or in the aggregated assessment) are very close together, so that distinguishing them by very small differences is meaningless. One way to avoid making decisions based on minuscule differences is to define a certain interval of outcomes, within which decisions are considered equally good.


Suppose now that each decision maker has determined an order of the decisions, so that the highest-ranked decision comes first, followed by the second-highest, etc. We can then use the concept of plurality (or simple majority) to determine a winner, in case such a winner exists. A decision has a plurality if it is preferred by more voters than any other decision, i.e., it is the best decision of the lot. While in typical political voting processes a plurality decision (or, in that application, candidate) almost always exists, this is no longer necessarily the case in multicriteria problems with only a handful of decision makers. Plurality systems are also referred to as “first past the post” or “winner-take-all” systems. We can also look at the decision makers’ individual preferences. Typically, each decision maker has a structure such as d1 ≻ d2 ≻ . . . ≻ dm (where “≻” reads “is preferred to”) for the given decisions. If there exists a decision that wins against each other decision in individual pairwise votes, that decision is a Condorcet winner. In any specific case, a Condorcet (1785) winner may or may not exist. In case no such winner exists, one could also attempt to call votes on each individual pair of decisions di, dj, i ≠ j, and record the winners. First of all, even though each decision maker is consistent in his preference relations, the majority voting may violate transitivity. This means that there may be a plurality for the statement “d1 is preferred over d2,” a plurality may also agree to “d2 is preferred over d3,” but then there may also be a plurality for “d3 is preferred over d1.” In such a case we may end up using alternate voting methods such as the Simpson Point solution (Simpson 1969), which will seek a decision from all the available decisions with the property that the number of better decisions is minimal. Alternatively, we can solve a tournament, which is briefly discussed further below.
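A pairwise-majority check of this kind is easy to script. The sketch below uses three hypothetical preference orders (not data from the book) and reports a Condorcet winner when one exists:

```python
# Each ranking lists the decisions from most to least preferred
# (hypothetical data: three decision makers, four decisions).
rankings = [
    ["d1", "d2", "d3", "d4"],
    ["d1", "d3", "d2", "d4"],
    ["d2", "d1", "d4", "d3"],
]

def beats(a, b):
    """True if a plurality of decision makers prefers a to b."""
    wins = sum(r.index(a) < r.index(b) for r in rankings)
    return wins > len(rankings) - wins

def condorcet_winner(decisions):
    for d in decisions:
        if all(beats(d, e) for e in decisions if e != d):
            return d
    return None  # no Condorcet winner exists

print(condorcet_winner(["d1", "d2", "d3", "d4"]))  # d1
```

Here d1 wins every head-to-head vote and is therefore the Condorcet winner; with cyclic majorities the function returns None, which is exactly the Condorcet paradox described in the text.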
A popular type of aggregation is the scoring method. In such methods, each time a decision among m is ranked first, i.e., highest, by a decision maker, it receives v1 points, each time it is ranked second by any decision maker, it receives v2 points, and so forth. Naturally, v1 > v2 > . . . > vm. The decision with the highest number of points is then the winner. The Borda count (1781; see also Fraenkel and Grofman 2014) is a special case of a scoring method, in which the decision ranked highest by a decision maker receives (m–1) points, the second place receives (m–2) points, etc. This allows for a simple aggregation. Alternative methods may have more desirable features. For instance, in the spirit of finding a compromise, decisions ranked low by a decision maker could receive very few points in contrast to high-ranking decisions. That way, the final choice based on the aggregation will attempt to avoid decisions that are ranked low by one or more decision makers. Borda’s idea could also be used in a multi-stage process. Nanson (1882) suggested to delete all decisions with a Borda score below the average score and then repeat the procedure. Somewhat similarly, Baldwin (1926) proposed to delete the decision with the worst Borda count and then repeat the procedure. This is what we call a “rejection approach” in Chap. 11. It has, at least from a psychological point of view, very desirable properties. Rather than choosing one decision, calling it best, and having no recourse, deleting just the least-favored alternative does not really make a decision; it just restricts the decision space. This continues until there are only two decisions left, at which time an actual choice has to be made. This


procedure has been used by landfill planners in Saint John, New Brunswick (Canada), who were looking for a new location. They started with about 1000 alternatives and one by one rejected alternatives based on criteria and rules that were introduced successively. Based on the work by Ramon Llull (Ars electionis, 1299), Copeland (1951; see also Saari and Merlin 1996) devised a procedure that aggregates votes. For each pair of decisions, we calculate the number of times that the decision is approved of by a plurality in pairwise comparisons against all other decisions minus the number of times the decision loses in one-to-one votes against all other decisions. This is the number of points associated with this decision. The process is repeated for all decisions, and the decision with the highest number of points is chosen. One problem with this procedure in our context is that it tends to result in multiple ties. Condorcet’s voting system is able to find a solution even in case there is no Condorcet winner and there is a solution that violates transitivity. In such a case, we construct a graph, and the task is to find a tournament (a type of tour). Details are provided in the second section of this chapter. Finally, we may also consider using approval voting. In the case of approval voting, decision makers do not choose one decision or a ranking among the decisions (a competitive process); instead, they simply indicate all decisions they approve of. The procedure will then simply count the number of decision makers who approve of a decision, and the decision with the highest degree of approval will then be chosen. Again, one problem associated with this process is the fact that, due to the relatively small number of decision makers (as opposed to voters), it will often result in ties.
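The Borda count and Baldwin's elimination scheme can be sketched in a few lines. The rankings below are hypothetical, and the scores follow Borda's scheme of m−1 points for first place down to 0 for last:

```python
rankings = [
    ["d1", "d2", "d3", "d4"],
    ["d2", "d1", "d3", "d4"],
    ["d3", "d1", "d2", "d4"],
]

def borda_scores(rankings, alive):
    m = len(alive)
    score = dict.fromkeys(alive, 0)
    for r in rankings:
        for place, d in enumerate(d for d in r if d in alive):
            score[d] += (m - 1) - place  # top place earns m-1 points
    return score

# Baldwin's procedure: repeatedly reject the worst-scoring decision
alive = {"d1", "d2", "d3", "d4"}
while len(alive) > 2:
    score = borda_scores(rankings, alive)
    alive.remove(min(alive, key=score.get))
print(sorted(alive))  # ['d1', 'd2']
```

Recomputing the scores after every rejection matters: dropping a decision changes the relative places of those that remain, which is what distinguishes Baldwin's multi-stage procedure from a one-shot Borda count.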

5.4 Making It Work

5.4.1 Interpolation and Curve Fitting

Let us start our discussion of regression with the simplest case, viz., a simple linear regression. To formalize, the known measurement points (observations) are (Xi, Yi) with independent variable Xi and dependent variable Yi. The relationship between the two variables is Yi = β0 + β1Xi + εi with parameters β0, β1 and an error term εi that has a mean of zero and constant variance. Using ordinary least squares, i.e., minimizing the sum of squared vertical differences between the points and the regression line, we obtain the estimated regression line Ŷ = β0 + β1X, with estimated parameters


Table 5.1 Demographic indexes for selected countries

#    Country        Human development index   Mortality rate (per 1000)   Life expectancy at birth   Population below poverty line
1    Burkina Faso   .434                      8.2                         62.98                      40.1%
2    Canada         .922                      7.9                         82.96                      9.4%
3    Chile          .847                      6.5                         80.74                      14.4%
4    Haiti          .503                      7.4                         64.99                      58.5%
5    India          .647                      7.3                         70.42                      21.9%
6    Indonesia      .707                      6.6                         72.32                      10.9%
7    Kenya          .579                      5.2                         67.47                      36.1%
8    Niger          .377                      10.2                        63.62                      45.4%
9    Switzerland    .946                      8.5                         84.25                      6.6%
10   Venezuela      .726                      7.5                         72.34                      19.7%

β1 = [Σ XiYi − (1/n)(Σ Xi)(Σ Yi)] / [Σ Xi² − (1/n)(Σ Xi)²], and

β0 = Ȳ − β1X̄,

with X̄ and Ȳ denoting the averages of the observed values Xi and Yi, respectively. The aforementioned coefficient of determination R² can be calculated as

R² = Σi (Ŷi − Ȳ)² / Σi (Yi − Ȳ)²,

where Ŷi denotes the point on the regression line that is determined by using the input Xi. As an example, consider the data in Table 5.1, which include the human development index, the mortality rate, the life expectancy at birth, and the proportion of the population below the poverty line. Information was gleaned from The World Factbook (2020), Human Development Reports (2020), and similar sources. At first glance, we may wish to express mortality rates as a function of, say, the human development index. However, as the above data already suggest, such a regression may not be meaningful (see, e.g., the similar mortality rates of Switzerland and Burkina Faso), as mortality is not only a function of the level of medical care available in a country, but also of the average age. In order to avoid such problems, we may instead use the proportion of the population that is below the poverty line as a predictor X for the life expectancy at birth Y; we obtain the regression line
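The estimates β1, β0, and R² can be computed directly from the formulas above. The sketch below uses three hypothetical observations so that the results are easy to check by hand:

```python
def simple_ols(xs, ys):
    """Estimate beta0, beta1, and R^2 by ordinary least squares,
    following the textbook formulas term by term."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    b1 = (sum(x * y for x, y in zip(xs, ys)) - sx * sy / n) / \
         (sum(x * x for x in xs) - sx * sx / n)
    b0 = sy / n - b1 * sx / n
    ybar = sy / n
    yhat = [b0 + b1 * x for x in xs]
    r2 = sum((yh - ybar) ** 2 for yh in yhat) / sum((y - ybar) ** 2 for y in ys)
    return b0, b1, r2

b0, b1, r2 = simple_ols([2, 6, 8], [8, 3, 4])
print(round(b0, 4), round(b1, 4), round(r2, 4))  # 9.0 -0.75 0.75
```

These three points reappear in the rainfall discussion later in this section, where the same line Ŷ = 9 − 0.75X is used.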


Table 5.2 Various nonlinear regression functions predicting life expectancy at birth by development index

R²     Regression function
0.99   y = 59.1133e^(0.40362022x²)
0.99   y = 40.9550x² − 15.4529x + 62.7762
0.99   y = −106.2381x³ + 252.6000x² − 149.3376x + 89.4893
0.98   y = 89.5459x^(0.9106x)
0.97   y = 49.8694e^(0.5458x)
0.95   y = 88.2210 ln(x + 1.6062)
0.94   y = [5551.5011x + 1555.1508]^(1/2)
0.94   y = 62.2207x^(1/2) + 21.8516
0.92   y = 83.2310x^0.3314
0.90   y = 24.0299 ln(x) + 82.90

Ŷ = 82.3914 − 38.7165X (with R² = 0.74), indicating that only about ¾ of the variation is explained. On the other hand, we may use the human development index as a predictor X of the life expectancy at birth Y. We obtain

Ŷ = 45.8789 + 39.3692X,

indicating that there is a base life expectancy of about 45 years (even with the lowest possible development index), and for each increase of the development index by 0.1, i.e., ten percentage points, the life expectancy increases by about 4 years, up to a maximum of about 85 years. The value of R² = 0.9617 indicates that the human development index appears to be a very good predictor in that it explains about 96% of the variation. Simple nonlinear regressions are also possible, all with x denoting the human development index and y the life expectancy at birth; Table 5.2 lists a few of them (obtained via Agricultural and Meteorological Software 2020). Consider now the possibility of explaining a dependent variable by more than one independent variable. In a multiple regression, we start with observations (X1i, X2i, . . ., Xni, Yi), in which the first n variables are independent, i.e., predictors, and the last variable is the dependent variable, i.e., the feature that is to be predicted. The regression line is then

Ŷ = β0 + β1X1 + β2X2 + . . . + βnXn

with (n + 1) parameters β0, β1, . . ., βn. For the special case of two independent variables, we obtain the normal equations

Σ Yi = nβ0 + β1 Σ X1i + β2 Σ X2i,


Table 5.3 Automobile characteristics and prices

Years old (X1)   Mileage (X2)   Asking price (Y)
5                 20.7          13.0
5                147.8           7.0
4                 28.0          15.0
4                115.6          11.0
3                 22.0          19.5
3                 33.8          19.0
3                  6.9          21.0
2                  9.1          19.5
1                  1.4          21.4
1                 45.5          18.0

Σ X1iYi = β0 Σ X1i + β1 Σ X1i² + β2 Σ X1iX2i, and

Σ X2iYi = β0 Σ X2i + β1 Σ X1iX2i + β2 Σ X2i².
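The normal equations form a 3×3 linear system in β0, β1, and β2. A sketch that solves the system with numpy, using synthetic data generated from a known plane so that the solution is recoverable exactly:

```python
import numpy as np

x1 = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
x2 = np.array([2.0, 1.0, 4.0, 3.0, 5.0])
y = 2.0 + 3.0 * x1 - 1.0 * x2       # known plane: beta = (2, 3, -1)

# Coefficient matrix and right-hand side of the normal equations
n = len(y)
A = np.array([
    [n,        x1.sum(),        x2.sum()],
    [x1.sum(), (x1 * x1).sum(), (x1 * x2).sum()],
    [x2.sum(), (x1 * x2).sum(), (x2 * x2).sum()],
])
rhs = np.array([y.sum(), (x1 * y).sum(), (x2 * y).sum()])
beta = np.linalg.solve(A, rhs)
print(np.round(beta, 6))            # recovers [2, 3, -1]
```

With noisy observations such as those in Table 5.3, the same system yields the least-squares estimates rather than an exact fit.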

As an example, consider the price Y of a used vehicle (here: a Jeep Compass Latitude, 4WD) based on the two predictors age of the car (X1, in years) and mileage (X2, in thousands of miles) (Autotrader.ca 2022). Some characteristics of the automobiles and their asking prices (in $1,000s) are shown in Table 5.3. Performing a multiple linear regression, we obtain the regression line

Ŷ = 23.7 − 1.48X1 − 0.06226X2

with R² = 0.91. In other words, other things being equal, each year the value of the vehicle decreases by $1,480, and each 1,000 miles decrease the value by about $62. A polynomial regression has the form

Ŷ = −0.7164X1² − 0.0073X1X2 + 0.00044X2² + 2.7960X1 − 0.0860X2 + 19.1597,

which explains R² = 0.97 of the total variation. Other means of interpolation exist as well. Inverse distance weighting has been shown to be successful in many contexts, and splines are frequently used for single- or multiple-dimensional problems; for a short introduction, see, e.g., Curve Global Interpolation (see, e.g., Michigan Technological University undated) or Numerical Interpolation: Natural Cubic Spline (see, e.g., Geeks for Geeks 2021 or Leal 2018). Yet another method of interpolation is kriging. It was developed in the context of geology, but it has rarely been used in the context of location analysis; for an example, see, e.g., Trujillo-Ventura and Ellis (1991). A good introduction can be found in GIS Resources (2022) or many advanced statistics texts. In order to discuss Voronoi diagrams in detail, let us begin with the simpler one-dimensional case. In particular, assume that we have a line segment from 0 to 10. We can think of it as a cross-section of an area, in which rain gauges are


Fig. 5.21 Rainfall data on a line segment

positioned, and the task is to determine the total rainfall in the area. Such a setting is shown in Fig. 5.21, in which the three reporting stations are located at points 2, 6, and 8, and the rainfall amounts they report are 8″, 3″, and 4″, respectively. The unweighted average is (8 + 3 + 4)/3 = 5 for any unit distance on the segment and 5(10) = 50 for the total in the area. We notice, though, that the gauge on the left, which measures high rainfall, is the only measuring station in more than half the entire area. The Voronoi diagram consists of three Thiessen polygons, one assigned to each of the facilities. The boundary points between two neighboring polygons are the halfway marks between two neighboring seeds. From left to right, this results in the intervals [0, 4], [4, 7], and [7, 10]. Since all points in the first interval are closer to the first facility, it may be assumed that they are more influenced by it than by other facilities (here: rain gauges). This leads to the assertion that all points in the interval [0, 4] receive a rainfall of 8″, all points in the interval [4, 7] receive 3″ of rain, and all points in the interval [7, 10] receive 4″ of rain. The total amount of rain is then the weighted sum 4(8) + 3(3) + 3(4) = 53″. Suppose now that we assert that the rainfall at some point is not just determined by the reporting station closest to it, but by some function of all of the given points. First, run a linear regression for the three points in the example. The regression line is

Ŷ = 9 − (3/4)X

with R² = 0.75. Given that, the total rainfall in the area is the integral

∫₀¹⁰ (9 − 0.75X) dX,

which equals 52.5, quite close to the result obtained with the Voronoi diagram. Performing a second-degree polynomial regression, we obtain the function


Fig. 5.22 Estimated rainfall interpolated with different methods

Ŷ = 0.2917X² − 3.5833X + 14

with R² = 1. Integrating over the region results in a total rainfall estimate of 58.065″, again reasonably close to the other results. The estimated rainfall amounts throughout the region with the three methods, viz., linear regression (green), quadratic regression (red), and Thiessen polygons (purple), are shown in Fig. 5.22.
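The Thiessen-polygon total and the linear-regression integral for the gauges above can be reproduced in a few lines:

```python
seeds = [2, 6, 8]   # gauge positions on the [0, 10] segment
rain = [8, 3, 4]    # reported rainfall, in inches

# Thiessen polygons: cut points halfway between neighboring seeds
cuts = [0] + [(a + b) / 2 for a, b in zip(seeds, seeds[1:])] + [10]
total = sum(r * (hi - lo) for r, lo, hi in zip(rain, cuts, cuts[1:]))
print(total)     # 4*8 + 3*3 + 3*4 = 53.0

# linear regression Y = 9 - 0.75X, integrated over [0, 10]
integral = 9 * 10 - 0.75 * 10 ** 2 / 2
print(integral)  # 52.5
```

The cut points come out as [0, 4, 7, 10], matching the intervals [0, 4], [4, 7], and [7, 10] derived above.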

5.4.2 Voting Procedures

Consider a setting with s decision makers DM1, DM2, . . ., DMs, m decisions d1, d2, . . ., dm, and n criteria c1, c2, . . ., cn. For the remainder of this section, we will use j as subscript for criteria, i for decisions, and k for decision makers. Furthermore, each decision maker has a vector of weights w_k, which indicates in its jth component the weight that decision maker DMk associates with criterion cj. Finally, there is a set of attribute matrices A^1, A^2, . . ., A^s, one for each decision maker, with A^k = (a^k_ij), in which rows are associated with decisions and columns with criteria, i.e., a^k_ij is the level at which criterion j is attained by decision i, in the opinion of decision maker k. We also assume that a higher payoff value listed in the matrix denotes a better decision. Throughout this section, we will consider the following example with three decision makers, four decisions, three criteria, and normalized weight vectors.


DM1: A^1 = [3 7 4
            5 4 6
            8 2 1
            4 4 5],   w_1 = [.5  .3  .2]

DM2: A^2 = [5 8 4
            5 4 3
            2 4 3
            6 7 4],   w_2 = [.3  .4  .3]

DM3: A^3 = [5 4 5
            6 5 3
            2 4 7
            5 5 4],   w_3 = [.2  .2  .6]

Note that DM1 considers d4 dominated by d2. We can then calculate the weighted averages of the decisions, i.e., A^k w_k^T. In particular, we obtain

A^1 w_1^T = [4.4, 4.9, 4.8, 4.2]^T,  A^2 w_2^T = [5.9, 4.0, 3.1, 5.8]^T,  and  A^3 w_3^T = [4.8, 3.9, 5.4, 4.4]^T.

Consider now first the possibility of the decision makers agreeing on a set of common weights. For simplicity, we take an unweighted average, which, in our example, is w_avg = [.3333, .3000, .3667], and with these weights, we obtain the following ratings:

DM1: [4.5667, 3.5667, 3.6333, 4.3682],
DM2: [5.5333, 3.9667, 2.9667, 5.5667], and
DM3: [4.7, 4.6, 4.4335, 4.6333].

This results in the rankings d1 ≻ d4 ≻ d3 ≻ d2, d4 ≻ d1 ≻ d2 ≻ d3, and d1 ≻ d4 ≻ d2 ≻ d3 for decision makers DM1, DM2, and DM3, respectively. All decision makers agree via transitivity that d1 ≻ d2 and d1 ≻ d3, leaving only decisions d1 and d4. Note that d4 is one of the remaining decisions, even though DM1 considered d4 to be dominated by d2, the latter having been eliminated here. In order to make a final decision, we could consider the direct comparison between d1 and d4, in which d1 wins with 2 votes to 1. In case the decision makers have agreed on a common attribute matrix, the resulting ranking solves the problem. Now use the standard procedure, in which each decision maker aggregates his own attributes according to his own weight vector, resulting in the assessments A^k w_k^T, k = 1, 2, 3, of the decisions by each of the decision makers. The task then is to aggregate these assessments into some overall rating. For now, we use an unweighted average in this aggregation. In our example, we obtain the ratings d1: 5.0333, d2: 4.2667, d3: 4.4333, and d4: 4.8, resulting in the preference structure d1 ≻ d4 ≻ d3 ≻ d2, so that, again, decision d1 is chosen.
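The aggregation across decision makers can be sketched directly from the individual assessments A^k w_k^T of the example:

```python
# Individual assessments A^k w_k^T from the example above
assessments = [
    [4.4, 4.9, 4.8, 4.2],   # DM1
    [5.9, 4.0, 3.1, 5.8],   # DM2
    [4.8, 3.9, 5.4, 4.4],   # DM3
]
s, m = len(assessments), 4

# Unweighted average across the decision makers
avg = [sum(a[i] for a in assessments) / s for i in range(m)]
print([round(v, 4) for v in avg])   # [5.0333, 4.2667, 4.4333, 4.8]

best = max(range(m), key=avg.__getitem__)
print("d" + str(best + 1))          # d1
```

The averages reproduce the overall ratings quoted above, and d1 again emerges as the chosen decision.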


Fig. 5.23 Linear order in the sample problem

This process has many degrees of freedom. One possibility concerns the individual aggregation. Rather than using the simple weighted-sum aggregation, i.e.,

AA^k_i(w^k) = Σ_j a^k_ij w^k_j   ∀ i = 1, . . ., m,

we could apply other types of aggregation. For instance, in order to avoid a strong influence of individual very large attributes (outliers), we may apply the aggregation

AA^k_i(w^k) = [∏_j a^k_ij w^k_j]^(1/2)   ∀ i = 1, . . ., m,

which, in our example, results in 1.5873, 1.8974, 0.6928, and 1.5492 for decision maker DM1, and similarly for the other decision makers. It is apparent that an aggregation function of this type prefers middle-of-the-road solutions, such as d2 in our example, as seen by decision maker DM1. One way to avoid generating rankings based on very small differences between decisions is to group the decisions. This could be done via some cluster analysis with the use of some chosen parameters. For our purpose, suppose that if two decisions are within, say, 10% (of the higher value) of each other, we will consider them equally attractive. Given the average weight vector, DM1 considers d1 and d4 equally attractive, and also d2 and d3 equally attractive (and the former two more attractive than the latter two). For DM2, d1 and d4 are considered equally good (and best), while for DM3, the decisions d3 and d1 are clustered as top choices, while d4 and d2 are clustered at the bottom of the preferences. It is apparent that decision d1 is in each of the decision makers’ top clusters and thus a good candidate to be chosen. To facilitate our discussion, we have to recall some properties of partially ordered sets (posets). In our case, the term “relation” between two elements means that one element is judged to “be equal or preferred to” the other. The properties include reflexivity (an element of the set is related to itself), asymmetry (if an element A is not related to some other element B, then B is related to A), transitivity (if an element A is related to an element B, and B is related to another element C, then A is also

134

5

Mathematical and Geospatial Tools

related to C), and completeness (between any pair of elements A and B, either A relates to B or B relates to A, but not both).

We may use a graph to show the relations between individual elements. In our context, such a graph includes nodes that represent the decisions, and directed arcs that indicate that one decision is preferred over another. Ideally, our preference relations form a linear order, which is represented by a linear graph in which all transitivity relations are satisfied as well. Such an acyclic graph with m nodes includes one source, one sink, (m–1) arcs leading out of the source, (m–2) arcs leading out of one of the other nodes, and so forth. An example of such a graph is shown in Fig. 5.23.

The aggregation of rankings via votes will, however, not always result in a linear order. It may happen that, while each individual's rankings are transitive, an aggregation that results from plurality voting on all pairs of decisions does not satisfy transitivity. A case in point is our example: the rankings by the three decision makers, given their attributes and weights, are

DM1: d2 ≻ d3 ≻ d1 ≻ d4, DM2: d1 ≻ d4 ≻ d2 ≻ d3, and DM3: d3 ≻ d1 ≻ d4 ≻ d2.

Using simple plurality voting, we find that d1 ≻ d2 by a vote of 2:1 (decision makers 2 and 3 agree, while decision maker 1 dissents), and so forth. Defining a matrix C, in which cij = 1 if di ≻ dj and 0 otherwise, we obtain

C = [ 0 1 0 1
      0 0 1 0
      1 0 0 1
      0 1 0 0 ].

We note that transitivity is violated (the so-called Condorcet paradox), as d1 ≻ d2 and d2 ≻ d3, but d3 ≻ d1. One way to end up with a decision was suggested by Copeland (1951). The procedure first determines the outcome of all ½m(m–1) individual comparisons, i.e., the matrix C. For each decision, we then determine its number of points, defined as the difference between the number of wins and the number of losses. Given the matrix C, the number of points assigned to decision di equals P(di) = Σ_j cij – Σ_j cji, i.e., the sum of the elements in row i minus the sum of the elements in column i. The decision with the highest number of points is then chosen. In our example, the row sums are [2, 1, 2, 1], while the column sums are [1, 2, 1, 2], so that the differences are [1, –1, 1, –1]. This means that decisions d1 and d3 are tied for
Copeland winner. With relatively few decision makers (as opposed to voters), this process tends to result in ties.

In order to overcome the lack of transitivity and still end up with a decision via majority rule, we may also apply sequential voting. In other words, we take one pair of decisions at a time and compare them; the decision that does not obtain a majority is rejected. It is apparent that the order in which the comparisons are made will influence the result. In our example, it is possible to choose sequences of comparisons so that each of the four decisions can come out on top via majority decision. To wit,

Sequence 1: Compare d2 and d3 (d2 wins and d3 is eliminated), followed by d1 vs d4 (d1 wins and d4 is eliminated), and finally vote on d2 and d1 (d1 wins and d2 is eliminated), so that d1 is finally chosen as the most preferred alternative.

Sequence 2: Compare d1 and d3 (d3 wins and d1 is eliminated), followed by d3 vs d4 (d3 wins and d4 is eliminated), and finally vote on d2 and d3 (d2 wins and d3 is eliminated), so that d2 is finally chosen as the most preferred alternative.

Sequence 3: Compare d1 and d2 (d1 wins and d2 is eliminated), followed by d1 vs d4 (d1 wins and d4 is eliminated), and finally vote on d1 vs d3 (d3 wins and d1 is eliminated), so that d3 is finally chosen as the most preferred alternative.

Sequence 4: Compare d1 and d3 (d3 wins and d1 is eliminated), followed by d2 vs d3 (d2 wins and d3 is eliminated), and finally vote on d2 vs d4 (d4 wins and d2 is eliminated), so that d4 is finally chosen as the most preferred alternative.

In other words, in this example the individual who chooses the sequence of comparisons can not only influence the outcome, he can obtain whatever result he wants by choosing the "appropriate" sequence. Back now to simultaneous comparisons.
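The agenda effect just described is easy to reproduce. In the sketch below (illustrative names, not any library's API), each agenda is encoded as the order in which decisions enter the pairwise vote, with each newcomer challenging the current survivor; the agendas differ slightly in form from the sequences in the text, but they demonstrate the same point: any of the four decisions can be made the winner.

```python
# Agenda (sequential pairwise) voting: decisions are compared one pair at a
# time, and the majority loser of each comparison is eliminated.
rankings = [
    ["d2", "d3", "d1", "d4"],  # DM1
    ["d1", "d4", "d2", "d3"],  # DM2
    ["d3", "d1", "d4", "d2"],  # DM3
]

def majority_winner(a, b):
    """Return whichever of a, b a majority of decision makers ranks higher."""
    votes_a = sum(r.index(a) < r.index(b) for r in rankings)
    return a if votes_a > len(rankings) / 2 else b

def agenda_vote(agenda):
    """Run pairwise eliminations in the given order; return the survivor."""
    survivor = agenda[0]
    for challenger in agenda[1:]:
        survivor = majority_winner(survivor, challenger)
    return survivor

for agenda in (["d2", "d3", "d1", "d4"], ["d1", "d3", "d4", "d2"],
               ["d2", "d1", "d4", "d3"], ["d1", "d3", "d2", "d4"]):
    print(agenda, "->", agenda_vote(agenda))
# The four agendas produce d1, d2, d3, and d4, respectively:
# the agenda setter controls the outcome.
```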
We will now take the individual majority votes based on the assessments in the attribute matrices A^k and, in order to determine a linear order that respects the plurality votes as much as possible, proceed as follows. Rather than recording the outcome of a direct comparison between all pairs of decisions (di, dj), as was done in the matrix C, we now count the number of votes we obtain for each such comparison and collect them in the matrix D = (dij). More specifically, dij denotes the number of votes for the proposition di ≻ dj. We then compute the matrix E = (eij) with eij = dij – dji. This allows us to set up a complete graph GE = (N, A) with nodes di representing the decisions and arcs (di, dj) for all pairs (i, j), so that the graph with its m nodes has m(m–1) arcs. With each arc (di, dj), we associate the number eij. This structure is referred to as a tournament. The task is then to find a linear order as a subgraph of GE that respects the votes as much as possible, i.e., a linear order for which Σ_{(i,j)∈A} eij is maximized. In our example, the matrix D is
Fig. 5.24 Tournament for the sample problem

Fig. 5.25 Linear order resulting from the tournament

D = [ 0 2 1 3
      1 0 2 1
      2 1 0 2
      0 2 1 0 ],

so that we obtain

E = [  –   1  –1   3
      –1   –   1  –1
       1  –1   –   1
      –3   1  –1   – ].
The graph GE is shown in Fig. 5.24. The unique subgraph of GE that represents a linear order is shown in Fig. 5.25. Here, the optimal linear order is d3 ≻ d1 ≻ d4 ≻ d2 with a sum of 6, so that decision d3 will be chosen.

A different type of approach is given by the so-called scoring methods. Simply speaking, each decision maker ranks the decisions (e.g., based on ratings obtained by some individual aggregation function, as demonstrated above). Each rank is worth a certain number of points. For each decision, the points are added up, and the decision with the highest number of points is chosen. Note that plurality voting is a special case of
a scoring method, in which the decision with the highest rank receives one point and all other decisions receive zero points. Borda (1733–1799) suggested in his Borda count (de Borda 1784) that each decision maker allocate (m–i) points to the decision ranked ith; the decision with the highest count is then declared the winner. One could easily use other, similar schemes that score low in case a decision ranks low and thus favor a middle-of-the-road solution, or other desired features. Given our example, a few scoring schemes are as follows:

Example 1 Point distribution: 3, 2, 1, 0 (the original Borda count).
Decision d1 is ranked third, first, and second and thus receives 1 + 3 + 2 = 6 points,
Decision d2 is ranked first, third, and fourth and thus receives 3 + 1 + 0 = 4 points,
Decision d3 is ranked second, fourth, and first and thus receives 2 + 0 + 3 = 5 points, and
Decision d4 is ranked fourth, second, and third and thus receives 0 + 2 + 1 = 3 points,
so that decision d1 is chosen.

Example 2 Point distribution: 5, 3, 1, 0 (this distribution weighs high-ranking decisions more than the Borda count and tries to avoid low-ranking decisions). Then we obtain:
Decision d1 gets 1 + 5 + 3 = 9 points,
Decision d2 gets 5 + 1 + 0 = 6 points,
Decision d3 gets 3 + 0 + 5 = 8 points, and
Decision d4 gets 0 + 3 + 1 = 4 points,
and again decision d1 is chosen.

Example 3 Point distribution: 8, 4, 2, 1 (this distribution is even more skewed towards high-ranking decisions than the distribution in Example 2). We obtain:
Decision d1 gets 2 + 8 + 4 = 14 points,
Decision d2 gets 8 + 2 + 1 = 11 points,
Decision d3 gets 4 + 1 + 8 = 13 points, and
Decision d4 gets 1 + 4 + 2 = 7 points.
Again, decision d1 is chosen, which by now can be considered fairly stable, i.e., it remains optimal for a wide variety of point distributions.

Example 4 Point distribution: 1, 0, 0, 0 (which is the same as plurality voting).
We obtain:
Decision d1 gets 0 + 1 + 0 = 1 point,
Decision d2 gets 1 + 0 + 0 = 1 point,
Decision d3 gets 0 + 0 + 1 = 1 point, and
Decision d4 gets 0 + 0 + 0 = 0 points.
Here, either d1, d2, or d3 is chosen.

An extension of the regular plurality vote is the plurality vote with runoff, as used in some countries, e.g., in France for presidential elections. In this system, if one decision obtains a majority in a regular first-past-the-post vote, the voting
terminates and a decision has been made; otherwise, the two decisions with the most votes are voted upon in a runoff election, which will then end with a majority for one decision (unless there is a tie, which, with a small number of decision makers, is a distinct possibility).

Other extensions of the Borda count have been proposed. One such (sequential) system is Nanson voting. In its first stage, the usual Borda counts are determined. All decisions with fewer than the average number of points are eliminated. (Notice the similarity to the aforementioned runoff elections.) The process is repeated until a decision has been made. Another closely related system was proposed by Baldwin. Again, the first stage is the usual Borda voting. Once the results are in, the decision with the lowest number of points is deleted and the process is repeated with the remaining decisions. This process is identical to our "rejection approach" in Chap. 11. The complexity of voting procedures is explored by Hudry (2015), while the mathematics of voting procedures is discussed by Brams (2008).

Finally, in approval voting each decision maker approves of as many decisions as desired. This is not a fixed number; it could differ among decision makers. The decision with the highest level of approval is then chosen. Recall that in our example, we have the following preference structure by the decision makers:

DM1: d2 ≻ d3 ≻ d1 ≻ d4, DM2: d1 ≻ d4 ≻ d2 ≻ d3, DM3: d3 ≻ d1 ≻ d4 ≻ d2.

Suppose that all decision makers choose their respective top two decisions as the decisions they approve of. For DM1, they are d2 and d3; for DM2, they are d1 and d4; and for DM3, they are d3 and d1. This means that the four decisions have approval ratings of 2, 1, 2, and 1, respectively, so that d1 and d3 are tied for best decision. At this point, we notice that two out of three decision makers prefer d3 over d1, so that d3 may be chosen.
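The scoring rules of Examples 1–4 differ only in their point vectors, so one small routine covers them all (a sketch; `scores` is an illustrative name, not a library function):

```python
# Scoring-rule tally for the chapter's three rankings. The argument `points`
# lists the points awarded per rank position, best rank first.
rankings = [
    ["d2", "d3", "d1", "d4"],  # DM1
    ["d1", "d4", "d2", "d3"],  # DM2
    ["d3", "d1", "d4", "d2"],  # DM3
]

def scores(points):
    """Total points per decision under the given scoring vector."""
    tally = {}
    for r in rankings:
        for rank, d in enumerate(r):
            tally[d] = tally.get(d, 0) + points[rank]
    return tally

print(scores([3, 2, 1, 0]))  # Borda count of Example 1: d1 scores 6 and wins
```

Passing [5, 3, 1, 0], [8, 4, 2, 1], or [1, 0, 0, 0] reproduces Examples 2–4, and Nanson or Baldwin voting can be obtained by calling such a routine repeatedly on shrinking sets of decisions.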
In case each of the decision makers approves of his top three choices, we obtain the approval sets {d1, d2, d3} for decision maker 1, {d1, d2, d4} for decision maker 2, and {d1, d3, d4} for decision maker 3. This gives us approvals of 3, 2, 2, and 2 for the respective decisions, so that d1 is chosen. Similarly, assume that each decision maker approves of all decisions within 10% of his own top aggregated average. In our example, this means that the first decision maker, whose top aggregated score is 4.9, will approve of all decisions with an aggregated score of at least (0.9)(4.9) = 4.41, and similarly for the other decision makers. We then obtain the sets {d2, d3}, {d1, d4}, and {d3}, respectively, so that decision d3 is chosen.

The last topic in this section deals not so much with voting itself, but with the allocation of seats to parties, given that a certain number of votes has been cast. Such a technique can be very useful in location decisions if we decide not only the location of one or more facilities, but also another feature, such as their respective sizes. The idea behind this procedure is as follows. Given the ratings of different
Table 5.4 Allocations of tokens to counties

Divisor | Decision A | Decision B | Decision C | Decision D
1       | 6.8 (1)    | 3.8 (4)    | 5.0 (3)    | 5.8 (2)
2       | 3.4 (5)    | 1.9        | 2.5 (7)    | 2.9 (6)
3       | 2.27 (8)   |            | 1.67       | 1.93 (9)
4       | 1.7        |            |            | 1.45

(The superscripts in parentheses indicate the order in which the nine tokens are allocated.)

alternatives (assume for now that the attributes or their utilities have been aggregated by some technique, e.g., the simple weighting method, into a single achievement), these are then our "votes." Given a "budget" that consists of a certain number of tokens (the "seats"), we can use any technique, such as the method described below, to allocate tokens to individual decisions. The number of tokens allocated to a decision can then be used as an indicator of the size of the facility located at the site represented by the decision. This does, of course, assume that there are no economies of scale and that the utilities and disutilities of the facilities at the individual locations are additive.

One technique that allocates seats to parties or, in our context, tokens to decisions, is generally known as the Jefferson–d'Hondt method. The idea behind the method is to allocate the (obviously integer) numbers of seats so that the proportions of the seats allocated to the parties best reflect the proportions of their votes. Jefferson developed the method in 1792 for the House of Representatives, while d'Hondt, a Belgian mathematician, described it in 1878. The method is a "highest averages" technique and, in simple words, proceeds as follows. The numbers of votes obtained by each party are first divided by the same divisor of 1. The party with the highest quotient is assigned one seat, its divisor is increased by 1, and its original number of votes is divided by the new divisor. This process is then repeated until all seats have been allocated.

As an illustration, consider the problem of allocating facilities to counties. A small facility costs, say, 1 token, while a large facility costs 3 tokens. Allocating three small facilities or one large facility to a county is considered the same (even though we may prefer one large facility over three small ones, which can easily be accommodated).
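The highest-averages allocation just described can be sketched as follows (`dhondt` is an illustrative name; the scores are the composite "votes" derived in the illustration below):

```python
# Jefferson-d'Hondt "highest averages" allocation: repeatedly give the next
# token to the decision with the largest quotient score / (tokens_won + 1).
def dhondt(votes, seats):
    tokens = {d: 0 for d in votes}
    for _ in range(seats):
        winner = max(votes, key=lambda d: votes[d] / (tokens[d] + 1))
        tokens[winner] += 1
    return tokens

# Composite scores of the four counties and a budget of 9 tokens.
print(dhondt({"A": 6.8, "B": 3.8, "C": 5.0, "D": 5.8}, 9))
# {'A': 3, 'B': 1, 'C': 2, 'D': 3}
```

Note that with ties (quite possible with few decisions), `max` simply picks the first key in iteration order; a production version would need an explicit tie-breaking rule.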
Assume that there are four counties (decisions) and two criteria on which the facility locations are to be evaluated. The attribute matrix is

A = [ 10 2
       1 8
       5 5
       7 4 ],

and the weights of the criteria are w1 = 0.6 and w2 = 0.4, so that aggregation of the two criteria with these weights (a simple weighted average) results in the composite attributes 6.8, 3.8, 5.0, and 5.8, respectively. These are our "votes." Suppose now that the goal is to allocate our budget of 9 tokens to the four counties. In order to do so, we can use Table 5.4.
Since the largest number is 6.8, we allocate one token to Decision A and divide its original score of 6.8 by 2. This first allocation is indicated by the superscript "1." The second, third, and fourth tokens are then allocated to Decisions D, C, and B, in that order, and each of the original scores of these decisions is divided by 2. At this point, the highest number is 3.4, so that Decision A receives another token, and the original score of 6.8 is now divided by 3, resulting in 2.27. The next two allocations go to Decisions D and C, in that order. At this point, the largest number is 2.27, so that Decision A receives another token, and its original score is divided by 4. The highest number is now 1.93, and Decision D receives another token. We have now allocated all available tokens, and the final tally is 3 tokens each for Decisions A and D, 2 tokens for Decision C, and 1 token for Decision B. This means that Counties A and D will each receive one large facility, County C will receive two small facilities, and County B will be allocated a single small facility.

Below, we include the references used in the text as well as some separate references to available software for geographical information systems.

References

Agricultural and Meteorological Software (2020) Available online at https://agrimetsoft.com/regressions/Multinomial-Logistic, last accessed on 10/7/2022
D.W. Allen, GIS Tutorial 2: Spatial Analysis Workbook, 3rd edn. (Esri Press, Toronto, ON, 2013)
D.W. Allen, J.M. Coffey, GIS Tutorial 3: Advanced Workbook (Esri Press, Toronto, ON, 2010)
Autotrader.ca (2022) Available online at https://www.autotrader.ca/cars/?rcp=0&rcs=0&prx=100&hprc=True&wcp=True&iosp=True&sts=New-Used&inMarket=basicSearch&mdl=Compass&make=Jeep&loc=E3B5A3, data checked on 9/7/2020
J.M. Baldwin, The technique of the Nanson preferential majority system of election. Proc. R. Soc. Vic. 39, 42–52 (1926). Available online at https://archive.org/details/proceedingsroyaxxxvroyaa/page/42/mode/2up, last accessed on 9/22/2022
M.L. Balinski, H.P. Young, Fair Representation: Meeting the Ideal of One Man, One Vote, 2nd edn. (Brookings Institution Press, Washington, DC, 2001)
P. Bolstad, GIS Fundamentals: A First Text on Geographic Information Systems, 5th edn. (XanEdu Publishing Inc, Livonia, MI, 2016)
S.J. Brams, Mathematics and Democracy: Designing Better Voting and Fair-Division Procedures (Princeton University Press, Princeton, 2008)
M.L. Burkey, J. Bhadury, H.A. Eiselt, Voronoi diagrams and their uses. Chapter 19, in Foundations of Location Analysis, ed. by H.A. Eiselt, V. Marianov (Springer, Berlin, 2011), pp. 445–470
A.H. Copeland, A "Reasonable" Social Welfare Function. Seminar on Mathematics in Social Sciences (University of Michigan, 1951)
M. de Berg, O. Cheong, M. van Kreveld, M. Overmars, Chapter 7: Voronoi Diagrams, in Computational Geometry: Algorithms and Applications, 2nd rev. edn. (Springer, 2008)
J.C. de Borda, Mémoire sur les élections au scrutin. Histoire de l'Académie Royale des Sciences (Baudoin, Paris, 1784). Available online at https://archive.org/details/mmoiresurleslec00daungoog/page/n16/mode/2up, last accessed on 9/22/2022
M. de Condorcet, Essai sur l'application de l'analyse à la probabilité des décisions rendues à la pluralité des voix (1785). Available online at https://archive.org/details/essaisurlapplic00conggoog, last accessed on 9/22/2022
M.J. DeSmith, M.F. Goodchild, P.A. Longley, Geospatial Analysis: A Comprehensive Guide to Principles, Techniques and Software Tools, 6th edn (2018). Available online at http://www.spatialanalysisonline.com/HTML/?software_tools_and_companion_m.htm, last accessed on 9/22/2022
Esri, History of GIS (undated-a). Available online at https://www.esri.com/en-us/what-is-gis/history-of-gis, last accessed on 9/30/2022
S. Fortune, Sweepline algorithm for Voronoi diagrams, in SCG '86: Proceedings of the Second Annual Symposium on Computational Geometry, pp. 313–322 (1986). Available online at https://dl.acm.org/doi/10.1145/10515.10549, last accessed on 9/22/2022
J. Fraenkel, B. Grofman, The Borda count and its real-world alternatives: comparing scoring rules in Nauru and Slovenia. Aust. J. Polit. Sci. 49(2), 186–205 (2014)
Geeks for Geeks, Natural cubic spline (2021). Available online at https://www.geeksforgeeks.org/natural-cubic-spline/, last accessed on 9/22/2022
GIS Resources, Choosing the right interpolation method (2022). Available online at https://www.gisresources.com/choosing-the-right-interpolation-method_2/, last accessed on 9/22/2022
Gomez Graphics, Painting with pixels/drawing with vectors (2020). Available online at http://vector-conversions.com/vectorizing/raster_vs_vector.html, last accessed on 9/22/2022
W.L. Gorr, K.S. Kurland, GIS Tutorial 1: Basic Workbook, 6th edn. (Esri Press, Toronto, ON, 2016)
A. Graser, Learning QGIS: Create Great Maps and Perform Geoprocessing Tasks with Ease, 3rd edn. (Packt Publishing, Birmingham, 2016)
O. Hudry, Voting procedures, complexity of, in Encyclopedia of Complexity and Systems Science (Springer Science+Business Media, New York, 2015). Available online at https://link.springer.com/referenceworkentry/10.1007/978-3-642-27737-5_585-4, last accessed on 9/22/2022
Human Development Reports, Human development index (HDI) (2020). Available online at https://hdr.undp.org/data-center/human-development-index#/indicies/HDI, last accessed on 10/7/2022
M. Law, A. Collins, Getting to Know ArcGIS, 4th edn. (Esri Press, Toronto, ON, 2015)
L. Leal, Numerical interpolation: natural cubic spline. Towards Data Science (2018). Available online at https://towardsdatascience.com/numerical-interpolation-natural-cubic-spline-52c1157b98ac, last accessed on 9/22/2022
J. Malczewski, GIS and Multicriteria Decision Analysis (Wiley, New York, 1999)
Michigan Technological University, Curve Global Interpolation (undated). Available online at https://pages.mtu.edu/~shene/COURSES/cs3621/NOTES/INT-APP/CURVE-INT-global.html, last accessed on 9/22/2022
E.J. Nanson, Methods of election. Trans. Proc. R. Soc. Vic. 19, 197–240 (1882). Available online at https://archive.org/details/transactionsproc1719roya/page/197/mode/2up, last accessed on 9/22/2022
Osher Map Library (2018). http://www.oshermaps.org/exhibitions/road-maps-american-way/iv-showing-road, "upside down map" on the last page, last accessed 9/22/2022
D.G. Saari, V.R. Merlin, The Copeland method. I: relationships and the dictionary. Econ. Theory 8, 51–76 (1996)
P.B. Simpson, On defining areas of voter choice: Professor Tullock on stable voting. Q. J. Econ. 83(3), 478–490 (1969)
J.H. Smith, Aggregation of preferences with variable electorate. Econometrica 41(6), 1027–1041 (1973)
J. Snow, On the Mode of Communication: Cholera, 2nd edn. (John Churchill, London, 1854). Available online at https://archive.org/details/b28985266/page/n57/mode/2up?view=theater, last accessed on 9/22/2022
Social Science Statistics, Multiple Regression Calculator (2022). Available online at https://www.socscistatistics.com/tests/multipleregression/default.aspx, last accessed on 9/30/2022
The World Factbook (2020) Available online at https://www.cia.gov/the-world-factbook/field/population-below-poverty-line/, last accessed on 10/7/2022
A.H. Thiessen, Precipitation averages for large areas. Mon. Weather Rev. 39, 1082–1084 (1911)
W. Tobler, A computer movie simulating urban growth in the Detroit region. Econ. Geogr. 46(Suppl), 234–240 (1970)
A. Trujillo-Ventura, J.H. Ellis, Multiobjective air pollution monitoring network design. Atmos. Environ. 25A(2), 469–479 (1991)
University of Michigan, Curve Global Interpolation (undated). Available online at https://pages.mtu.edu/~shene/COURSES/cs3621/NOTES/INT-APP/CURVE-INT-global.html, last accessed on 9/22/2022
P. Vincke, Aggregation of preferences: a review. Eur. J. Oper. Res. 9, 17–22 (1982)
G. Voronoi, Nouvelles applications des paramètres continus à la théorie des formes quadratiques, deuxième mémoire, recherches sur les paralléloèdres primitifs. Journal für die reine und angewandte Mathematik 134, 198–287 (1908)
E. Wigner, F. Seitz, On the constitution of metallic sodium. Phys. Rev. 43, 804–810 (1933)

Software Resources

Some GIS Tools

Esri, ArcGIS (undated-b). Available online at https://www.esri.com/en-us/arcgis/about-arcgis/overview, last accessed on 9/30/2022
GIS Geography (2022) 13 Free GIS Software Options: Map the World in Open Source. Available online at https://gisgeography.com/free-gis-software/, last accessed on 9/30/2022
Grass GIS (2022) Available online at https://grass.osgeo.org/, last accessed on 9/30/2022
MapWindow (2022) About the MapWindow GIS Open Source Project. Available online at https://www.mapwindow.org/, last accessed on 9/30/2022
MicroImages (2022) TNTgis. Available online at https://www.microimages.com/downloads/tntmips.htm, last accessed on 9/30/2022
QGIS (2022) QGIS, A Free and Open Source Geographic Information System. Available online at https://www.qgis.org/en/site/, last accessed on 9/30/2022

Part II

Model Applications

Chapter 6

Locating a Landfill with Vector Optimization

Municipal solid waste (a.k.a. garbage) is a major concern in developed and developing countries alike. Garbage generation has been increasing for a long time and has now (2018 data) reached 5.69 lbs. per capita per day in the United States, 4.23 lbs. in France, 2.87 lbs. in Saudi Arabia, and 2.25 lbs. in China, as per Statista (2020a). In Chile, the OECD (2017) reports a daily per-capita generation of 2.66 lbs. In most countries, a large proportion of the garbage is landfilled. The number of landfills per capita differs greatly between countries. For instance, in 2002, there were 1775 landfills in Germany (via How many landfills are there, 2014), which, in conjunction with its population of 83,784,000 (as per Worldometer 2020), results in 47,200 people per landfill, while Italy's 642 landfills serve 60,460,000 people, i.e., 94,174 people per landfill, all the way to China's 654 landfills (A rubbish story, 2019), which translates to 2.2 million people per landfill. However, rather than the number of people per landfill, a (probably) more meaningful measure would be the landfill area required per capita, certainly in densely populated countries, in which space is at a premium. Pertinent references are BBC News (2019) and Rubbish Prohibited (2014).

Given this development of garbage generation, one would assume that the number of landfills has increased over the years. This is, however, not the case. For instance (Number of municipal waste landfills, Statista 2020b), there were 6326 landfills in the United States in 1990, but only 1269 landfills in 2017, a decrease of 80% in 27 years. Given ever-increasing environmental demands, landfills have gotten larger and larger, as it is more economical to install environmental controls in larger landfills. This, of course, also implies that the average distances between garbage generators, i.e., customers, and landfills have increased.
This has led to the establishment of transfer stations, which are places between garbage generators and landfills, at which the trash is compressed (at times) and loaded from smaller collection trucks onto larger transport trucks, whose per-ton transportation costs are only about one third of those of the collection trucks.

© Springer Nature Switzerland AG 2023 H. A. Eiselt et al., Multicriteria Location Analysis, International Series in Operations Research & Management Science 338, https://doi.org/10.1007/978-3-031-23876-5_6

145


Fig. 6.1 (a) Cell in a landfill (© H.A. Eiselt). (b) Compacting trash (© H.A. Eiselt)

As far as modeling landfill location is concerned, we are then really facing the design of an entire system, with the landfill being but one part. First of all, a landfill, in contrast to a dump, is an engineered facility that deals with a number of adverse environmental impacts that are present when a dump is located. A typical landfill is shown in Fig. 6.1. A landfill system comprises not only the landfill itself, but also garbage transfer stations, a recycling station, incinerators, a leachate pond (which collects the leachate, i.e., the liquid that accumulates at the bottom of the landfill, where it is collected and treated so as to avoid it getting into the ground water), and the gas management system, in which methane gas (CH4), a product of anaerobic decomposition in the landfill, is either converted to electricity rather than simply flared off, or sold to power vehicles directly. In other words, in addition to the location of the landfill and its components, we need to determine many design features of the landfill and the associated facilities and, of course, the routes used for the collection of garbage; see, e.g., Ghiani et al. (2014), Eiselt and Marianov (2014), Giannikos (1998), Mitropoulos et al. (2009), Adamides et al. (2009), Prodhon and Prins (2014). For simplicity, we will concentrate on the location of landfills and transfer stations in our study.

The main concerns in this application are the costs to build and run the landfill on the one hand, and the public opposition on the other. This is not to say that these are the only factors that are relevant or that have been discussed in the pertinent literature (for a survey of contributions, the criteria they involve, and other features, see, e.g., Eiselt and Marianov (2015)).
Among other factors, pollution, the amount landfilled, the geology of the site, land claims, effects on property values in surrounding areas, various types of risk, as well as other environmental, social, and economic concerns have been included in landfill models. While the costs of establishing and operating a landfill system are reasonably easy to approximate, public acceptance or opposition is a different story. Typically, opposition to landfills is based on pollution (odors and general emissions from the landfill, or dust and noise from passing trucks) and on environmental concerns regarding ground water, landfill gases, and similar considerations.


A procedural question is then whether to aggregate all these effects in some way into a general cost function, or to keep the objectives separate. The former route was taken by Hirshfeld et al. (1992), while we treat the two concerns as separate objectives in this chapter. Since there is no reason to assume that the costs of operating the landfill will differ among the potential facility locations, and in order to avoid additional environmental damage (which should also go a long way toward appeasing the public opposition to a large landfill), it was decided to locate the new landfill at one of the existing dump sites, after it had been ascertained that (1) all of the sites satisfied the regulations for landfills, and (2) the surrounding land was suitable for a landfill and was available. This was the case for all six existing dumps. The transfer station, in case such a station was to be built, would also be located at the site of one of the dumps. As the purpose of this exercise is to describe a method to locate landfills, we did not consider the effects of closing the dumps.

For simplicity, we model public opposition as a function of the straight-line distances between the landfill and the cities and towns in the model. Given that landfills emit only a limited amount of pollution, the opposition beyond 6 miles was set to zero. Note that the model includes only opposition by people who are actually affected by the landfill, not professional protesters or other out-of-towners. The model was formulated and solved as a bi-objective optimization problem with the minimization of costs and the minimization of public opposition as objectives.
After some initial testing (the results are shown in the technical section below), we used two methods to explore solutions for the case of one landfill and zero or one transfer stations, assuming that the per-ton mileage costs of a transport truck are only 30% of those of a collection truck, a well-known industry figure quoted, for instance, by Eiselt and Marianov (2015). The costs of the design and construction of a landfill have been assumed to be the same for all locations and have thus been omitted, leaving only transportation costs in the model.

6.1

Making It Work

The area in our application is located in Chile, about 350 miles driving distance (325 miles straight-line distance) south-southwest of the capital, Santiago. The blue dot in Fig. 6.2 shows the city of Santiago, whereas the red dot indicates the location of the Malleco region, i.e., the study area. The region's main population agglomerations are six cities with populations between 7000 and 48,000 and four towns with between 1000 and 2000 people. At present, the region has neither a landfill nor transfer stations, but six dumps. The plan is to locate one properly engineered landfill and possibly one garbage transfer station, if the additional costs justify it. The locations of the population agglomerations are shown as red stars and the existing dumps as green dots in Fig. 6.3. In order to formulate the problem, we first structure the situation in general terms. The general structure of the problem is shown in Fig. 6.4.


Fig. 6.2 Location of the study area (Map data ©2022 Google)

In other words, we have two choices to haul the garbage from customers to the landfill: either transport it directly, or haul it via a transfer station. Below, we present two formulations of the problem: an arc-based formulation that requires two subscripts, and a path-based formulation that requires three subscripts. We start with the arc-based formulation, which has a p-median model at its core, extended to include potential transshipment points and capacities.

Fig. 6.3 The area with the main cities and towns (with urban areas surrounded by a red line) and the existing dumps (shown as green dots) (Map data ©2022 Google)

Fig. 6.4 The general structure of the landfill problem

We first define the parameters of the problem. Central to the model are the distances

c_ij: distance between customer i and landfill j,
c′_ik: distance between customer i and transfer station k, and
c″_kj: distance between transfer station k and landfill j.

Furthermore, we need to define the unit transportation costs for the collection trucks and the transport trucks:

α_1: unit transportation cost (per ton-mile) for collection trucks, and
α_2: unit transportation cost (per ton-mile) for transport trucks.

We also specify the number of landfills and transfer stations to be located as

p: number of landfills to be located, and
q: number of transfer stations to be located.

Finally, we need to define the capacities of the landfills and the transfer stations. In our formulation, they are

κ_j: capacity of landfill j, and
κ_k: capacity of transfer station k.

As far as variables are concerned, we need to define binary variables that indicate whether a facility is sited at any of the given potential locations. We formulate

y_j = 1, if a landfill is located at site j, and 0 otherwise, and
z_k = 1, if a transfer station is located at site k, and 0 otherwise.

The arc-based formulation needs three different sets of variables for the transportation. In particular, we need the quantities hauled by the collection trucks from the customers directly to the landfills, the quantities collected from customers and moved to the transfer stations by collection trucks, and finally the quantities taken from the transfer stations and shipped to the landfills they are assigned to. More specifically, we define

x_ij: the quantity shipped from customer i to landfill j,
x′_ik: the quantity shipped from customer i to transfer station k, and
x″_kj: the quantity shipped from transfer station k to landfill j.

The arc-based cost-minimizing objective can then be written as

Min z_c = α_1 Σ_i Σ_j c_ij x_ij + α_1 Σ_i Σ_k c′_ik x′_ik + α_2 Σ_k Σ_j c″_kj x″_kj.
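As an illustration of the arc-based objective, the following sketch (in Python, with entirely invented tonnages and distances: three customers and two candidate sites, each usable for the landfill or the transfer station) evaluates z_c for every landfill/transfer-station pair, letting each customer take the cheaper of the direct route and the two-leg route:

```python
from itertools import product

# Hypothetical data: 3 customers, 2 candidate sites. Tonnages w[i];
# distances in miles; all numbers invented for illustration only.
w = [10.0, 25.0, 8.0]                       # tons generated by customer i
c = [[4.0, 9.0], [7.0, 3.0], [5.0, 6.0]]    # customer i -> landfill j
cp = [[2.0, 8.0], [6.0, 2.0], [4.0, 7.0]]   # customer i -> transfer station k
cpp = [[0.0, 5.0], [5.0, 0.0]]              # transfer station k -> landfill j
a1, a2 = 1.0, 0.3  # cost per ton-mile: collection trucks vs. transport trucks

def arc_cost(j, k):
    """z_c with landfill j and transfer station k open: every customer
    ships along the cheaper of the direct and the two-leg route."""
    total = 0.0
    for i in range(len(w)):
        direct = a1 * c[i][j]
        via = a1 * cp[i][k] + a2 * cpp[k][j]
        total += w[i] * min(direct, via)
    return total

# Enumerate all (landfill, transfer station) pairs.
best = min(product(range(2), range(2)), key=lambda jk: arc_cost(*jk))
print(best, arc_cost(*best))
```

For the six candidate sites of the case study, the same enumeration over site pairs remains trivially fast; the integer program of the text is needed once capacities and multiple facilities enter the picture.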

An alternative path-based formulation (see, e.g., Eiselt 2006, 2007) requires the three-index variables

x_ikj: the quantity shipped from customer i via transfer station k to landfill j,

which allow us to write the problem as

Min z_c = Σ_i Σ_j w_i d_ij x_ij + Σ_i Σ_k Σ_j w_i (d_ik + α_2 d_kj) x_ikj.
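The size difference between the two formulations is easy to quantify. A quick count for the dimensions of the case study (ten population agglomerations, six candidate sites usable for both landfill and transfer station) reproduces the figure of 168 variables quoted below for the arc-based model:

```python
# n customers, m candidate sites (each usable as landfill or transfer station).
n, m = 10, 6

# Arc-based: flows x[i][j], x'[i][k], x''[k][j] plus binaries y[j], z[k].
arc_vars = n * m + n * m + m * m + m + m

# Path-based: direct flows x[i][j], three-index flows x[i][k][j],
# plus the same location binaries.
path_vars = n * m + n * m * m + m + m

print(arc_vars, path_vars)
```

The constraint count (210 in the text) depends on the exact formulation and is not reproduced here.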

Even though the path-based formulation typically behaves very well computationally, we use the simpler arc-based formulation, which has fewer variables. In our example, the problem with six potential sites for the landfill and the transfer station has 168 variables and 210 constraints (plus the specifications of the variables that require nonnegativity or the binary property).

Fig. 6.5 Opposition as a function of the distance from the landfill

In addition to the above formulation, we need to quantify and formulate a function for the public opposition. Assume that a landfill generates opposition based on the Euclidean distance c_ij between a customer at site i (here, an inhabitant of one of the cities or towns) and the potential landfill site j; the number of people living at customer site i is again w_i. The number of people in town i opposed to a landfill at site j is then assumed to be w_i/(0.5 + c_ij), which is based on the well-known gravity models; see, e.g., Reilly (1931) and Huff (1964). Based on its much smaller impact, the opposition to a transfer station is assumed to be only one third of that to a landfill. The "opposition graph" is depicted in Fig. 6.5. Table 6.1 shows the 4 towns and 6 cities, their populations, the road distances, the Euclidean distances, and the per-capita opposition regarding each of the potential sites. The objective that measures public opposition can then be written as

Min z_o = Σ_i Σ_j c_ij w_i y_j + Σ_i Σ_k c_ik w_i z_k,

where c_ij here denotes the per-capita opposition of the inhabitants of town i to a landfill at site j (and c_ik, analogously, the per-capita opposition to a transfer station at site k, set at one third of the landfill value), as tabulated in Table 6.1.
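The per-capita opposition figures in Table 6.1 follow directly from the gravity-type function 1/(0.5 + c_ij); a short spot check against values that appear in the table (Euclidean distances in km):

```python
def per_capita_opposition(d):
    """Gravity-type per-capita opposition at Euclidean distance d."""
    return 1.0 / (0.5 + d)

# Spot checks against Table 6.1: 1.8 km -> 0.4348 (Capitán Pastene/Lumaco),
# 0.5 km -> 1.0000 (Angol), 0.8 km -> 0.7692 (Victoria).
for d, expected in [(1.8, 0.4348), (0.5, 1.0000), (0.8, 0.7692)]:
    print(d, round(per_capita_opposition(d), 4), expected)
```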

Testing the model, we first consider only the cost objective and use a variety of combinations of the number of transfer stations and the discount factor for the transport trucks. Some results are shown in Table 6.2. The results show that the optimal cost-minimizing solutions are very stable: for numerous discount factors, landfill site 3 is always chosen, and the transfer station, if one exists, will be at site 5. For a discount factor of 0.3 (i.e., a ton-mile of the transport truck costs only 30% of that of a collection truck), the introduction of a single transfer station saves $1,414,589, while the introduction of a second transfer station saves only an additional $277,453. Note that these results assume a single service run to each town/city. In what follows, we will use the discount factor α_2 = 0.3. This value has been suggested by Antunes (1999), EPA (2002), and Duffy (2009). We first use the weighting method (see Chap. 3), in which the cost objective z_c and the opposition


Table 6.1 Distances between towns/cities and dumps as well as anticipated opposition

| ID | From | Population | Dump (ID) | Road distance [km] | Euclidean distance [km] | Public opposition |
|---|---|---|---|---|---|---|
| Town 1 | Capitán Pastene | 2612 | Lumaco (128) | 3.1 | 1.8 | 0.4348 |
| | | | Purén (112) | 30.5 | 15.3 | 0.0633 |
| | | | Angol (111) | 62.1 | | |
| | | | Ercilla (113) | 102.0 | | |
| | | | Victoria (114) | 72.8 | | |
| | | | Curacautín (115) | 169.0 | | |
| Town 2 | Lumaco | 1408 | Lumaco (128) | 10.9 | 5.7 | 0.1613 |
| | | | Purén (112) | 21.2 | 16.1 | 0.0602 |
| | | | Angol (111) | 52.8 | | |
| | | | Ercilla (113) | 92.7 | | |
| | | | Victoria (114) | 63.5 | | |
| | | | Curacautín (115) | 159.7 | | |
| Town 3 | Ercilla | 2302 | Lumaco (128) | 83.7 | | |
| | | | Purén (112) | 92.9 | | |
| | | | Angol (111) | 50.8 | | |
| | | | Ercilla (113) | 12.8 | 6.1 | 0.1515 |
| | | | Victoria (114) | 27.7 | 15.9 | 0.0610 |
| | | | Curacautín (115) | 86.6 | | |
| Town 4 | Pailahueque | 1311 | Lumaco (128) | 81.7 | | |
| | | | Purén (112) | 90.9 | | |
| | | | Angol (111) | 57.7 | | |
| | | | Ercilla (113) | 19.7 | 15.5 | 0.0625 |
| | | | Victoria (114) | 15.2 | 8.4 | 0.1120 |
| | | | Curacautín (115) | 76.6 | | |
| City 1 | Purén | 7524 | Lumaco (128) | 36.0 | 18.4 | 0.0529 |
| | | | Purén (112) | 6.0 | 3.1 | 0.2778 |
| | | | Angol (111) | 51.4 | | |
| | | | Ercilla (113) | 71.6 | | |
| | | | Victoria (114) | 91.9 | | |
| | | | Curacautín (115) | 151.0 | | |
| City 2 | Traiguén | 14,257 | Lumaco (128) | 38.0 | 24.5 | 0.0400 |
| | | | Purén (112) | 47.2 | | |
| | | | Angol (111) | 61.6 | | |
| | | | Ercilla (113) | 63.4 | | |
| | | | Victoria (114) | 34.4 | 28.1 | 0.0350 |
| | | | Curacautín (115) | 93.3 | | |
| City 3 | Angol | 48,608 | Lumaco (128) | 64.2 | | |
| | | | Purén (112) | 57.0 | | |
| | | | Angol (111) | 2.4 | 0.5 | 1.0000 |
| | | | Ercilla (113) | 42.5 | 30.5 | 0.0323 |
| | | | Victoria (114) | 71.4 | | |
| | | | Curacautín (115) | 130.0 | | |


Table 6.1 (continued)

| ID | From | Population | Dump (ID) | Road distance [km] | Euclidean distance [km] | Public opposition |
|---|---|---|---|---|---|---|
| City 4 | Collipulli | 16,175 | Lumaco (128) | 90.6 | | |
| | | | Purén (112) | 83.3 | | |
| | | | Angol (111) | 34.1 | 25.5 | 0.0385 |
| | | | Ercilla (113) | 11.4 | 5.1 | 0.1786 |
| | | | Victoria (114) | 40.3 | 29.0 | 0.0339 |
| | | | Curacautín (115) | 99.2 | | |
| City 5 | Victoria | 24,773 | Lumaco (128) | 69.0 | | |
| | | | Purén (112) | 78.2 | | |
| | | | Angol (111) | 69.9 | | |
| | | | Ercilla (113) | 31.8 | 24.8 | 0.0395 |
| | | | Victoria (114) | 2.7 | 0.8 | 0.7692 |
| | | | Curacautín (115) | 63.1 | | |
| City 6 | Curacautín | 12,679 | Lumaco (128) | 125.0 | | |
| | | | Purén (112) | 134.0 | | |
| | | | Angol (111) | 127.0 | | |
| | | | Ercilla (113) | 89.0 | | |
| | | | Victoria (114) | 60.0 | | |
| | | | Curacautín (115) | 9.4 | 6.0 | 0.1538 |

Table 6.2 Optimal solutions for the model with the cost objective only, for various numbers of transfer stations and discount factors

| Number of transfer stations | α_2 (discount factor) | Other | Location(s) of transfer station(s): customers allocated to it/them | Landfill location: customers with direct shipment | Transportation costs |
|---|---|---|---|---|---|
| 0 | 0.3 | None | ./. | 3: all | 4,425,682 |
| 1 | 0.3 | None | 5: 3, 4, 6, 9, 10 | 3: 1, 2, 5, 7, 8 | 3,011,093 |
| 1 | 0.5 | None | 5: 4, 9, 10 | 3: 1, 2, 3, 5, 6, 7, 8 | 3,504,444 |
| 1 | 0.2 | None | 5: 3, 4, 6, 9, 10 | 3: 1, 2, 5, 7, 8 | 2,703,811 |
| 2 | 0.3 | None | 2: 1, 2, 5; 5: 3, 4, 6, 9, 10 | 3: 7, 8 | 2,733,640 |

objective z_o have weights of λ_c and λ_o, respectively, associated with them. The results of our computations are shown in Table 6.3. Starting with a solution that considers only costs, the importance of the "public opposition" objective increases as we move down the list; the solutions at the bottom of the list consider only the opposition objective. The top and bottom solutions in the list are labeled with a star. This indicates that the formulations that


Table 6.3 Solutions of the bi-objective landfill location problem with the weighting method

| (λ_c, λ_o) | Landfill location | Transfer station location | Direct shipments from customers | Shipments via transfer station | Cost | Opposition |
|---|---|---|---|---|---|---|
| 1, 0* | 3 | 5 | 1, 2, 5, 7, 8 | 3, 4, 6, 9, 10 | 3,011,093 | 56,027.40 |
| 1, 0 | 3 | 5 | 1, 2, 5, 7, 8 | 3, 4, 6, 9, 10 | 3,011,093 | 56,027.40 |
| 1, 10 | 3 | 5 | 1, 2, 5, 7, 8 | 3, 4, 6, 9, 10 | 3,011,093 | 56,027.40 |
| 1, 50 | 4 | 3 | 3, 4, 6, 8, 9, 10 | 1, 2, 5, 7 | 3,721,237 | 22,278.37 |
| 1, 100 | 4 | 2 | 3, 4, 6, 7, 8, 9, 10 | 1, 2, 5 | 4,645,802 | 6648.21 |
| 1, 1000 | 2 | 4 | 1, 2, 5, 6, 7 | 3, 4, 8, 9, 10 | 5,923,821 | 4296.31 |
| 1, 1090 | 2 | 4 | 1, 2, 5, 6, 7 | 3, 4, 8, 9, 10 | 5,923,821 | 4296.31 |
| 1, 2000 | 2 | 6 | 1, 2, 3, 4, 5, 6, 7, 8, 9 | 10 | 7,350,311 | 2990.28 |
| 1, 20,000 | 6 | 2 | 3, 4, 8, 9, 10 | 1, 2, 5, 6, 7 | 10,219,950 | 2730.12 |
| 0, 1 | 6 | 1 | 1, 2, 3, 4, 5, 6, 7, 9 | 8, 10 | 13,829,080 | 2727.07 |
| 0, 1* | 6 | 6 | All | None | 12,943,690 | 2600.41 |
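The weighting method behind Table 6.3 scalarizes the two objectives into λ_c z_c + λ_o z_o and re-optimizes for each weight pair. As an illustrative shortcut (in the real computation the integer program is re-solved for every weight pair), we can mimic this by picking the best of the distinct (cost, opposition) outcomes that appear in the table:

```python
# Distinct (cost, opposition) outcomes appearing in Table 6.3.
solutions = {
    "landfill 3, TS 5": (3_011_093, 56_027.40),
    "landfill 4, TS 3": (3_721_237, 22_278.37),
    "landfill 4, TS 2": (4_645_802, 6_648.21),
    "landfill 2, TS 4": (5_923_821, 4_296.31),
    "landfill 2, TS 6": (7_350_311, 2_990.28),
    "landfill 6, TS 2": (10_219_950, 2_730.12),
    "landfill 6, TS 1": (13_829_080, 2_727.07),
    "landfill 6, TS 6": (12_943_690, 2_600.41),
}

def weighting_method(lam_c, lam_o):
    """Solution minimizing the weighted sum lam_c * z_c + lam_o * z_o."""
    return min(solutions,
               key=lambda s: lam_c * solutions[s][0] + lam_o * solutions[s][1])

print(weighting_method(1, 0))      # pure cost minimization
print(weighting_method(1, 1000))
print(weighting_method(0, 1))      # pure opposition: double occupancy at 6
```

The three calls reproduce the solutions listed in Table 6.3 for the weights (1, 0), (1, 1000), and (0, 1*).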

Table 6.4 Solutions of the bi-objective landfill location problem with the constraint method

| Limit on z_o | Landfill location | Transfer station location | Direct shipments from customers | Shipments via transfer station | Cost | Opposition |
|---|---|---|---|---|---|---|
| 100,000 | 3 | 5 | 1, 2, 5, 7, 8 | 3, 4, 6, 9, 10 | 3,011,093 | 56,027.40 |
| 80,000 | 3 | 5 | 1, 2, 5, 7, 8 | 3, 4, 6, 9, 10 | 3,011,093 | 56,027.40 |
| 60,000 | 3 | 5 | 1, 2, 5, 7, 8 | 3, 4, 6, 9, 10 | 3,011,093 | 56,027.40 |
| 50,000 | 5 | 3 | 1, 2, 3, 4, 6, 8, 9, 10 | 5, 7 | 3,482,046 | 36,800.22 |
| 40,000 | 5 | 3 | 1, 2, 3, 4, 6, 8, 9, 10 | 5, 7 | 3,482,046 | 36,800.22 |
| 20,000 | 4 | 5 | 3, 4, 5, 7, 8 | 1, 2, 6, 9, 10 | 4,247,443 | 12,664.78 |
| 10,000 | 4 | 2 | 3, 4, 6, 7, 8, 9, 10 | 1, 2, 5 | 4,645,802 | 6648.21 |
| 5000 | 2 | 4 | 1, 2, 5, 6, 7 | 3, 4, 8, 9, 10 | 5,923,821 | 4296.31 |
| 3000 | 2 | 6 | 1–9 | 10 | 7,350,311 | 2990.28 |
| 2750 | 6 | 2 | 3, 4, 8, 9, 10 | 1, 2, 5, 6, 7 | 10,219,950 | 2730.12 |
| 2730 | 6 | 1 | 3, 4, 8, 9, 10 | 1, 2, 5, 6, 7 | 10,310,540 | 2727.07 |
| 2700 | No feasible solution | | | | | |

lead to these solutions allow double occupancy, i.e., landfill and transfer station may be located at the same site. In the constraint method, we use the cost objective as our primary objective and put various limits on the public opposition. The results are shown in Table 6.4. It is not a surprise that for high limits on the acceptable public opposition, the landfill will be located near City 3, the largest population agglomeration, which allows short trips


Fig. 6.6 Tradeoffs between costs and public opposition

for many of the customers. On the other end of the spectrum, if only a very low level of opposition is acceptable, the landfill is moved away from the largest city (City 3), as that city generates a large part of the opposition. Figure 6.6 shows the tradeoffs between cost and opposition summarized in Table 6.4. The figure shows only nondominated solutions. Note that the solutions with costs between 7,000,000 and 14,000,000 are very similar in public opposition, while their costs are dramatically different. The converse holds for the solutions with public opposition between 20,000 and 55,000: their costs vary only slightly. A decision maker will most probably be interested in solutions in the cost range between 4,000,000 and 8,000,000. The graph provides the decision maker(s) with a wealth of information regarding the tradeoffs between the two objectives, thus allowing them to make an informed decision. This is one of those techniques that require no prior input from the decision makers: they are presented with results similar to those in Fig. 6.6, and this is the information on which their decision is based. With this approach, the operations research analyst has, as is almost always the case, not made or suggested a decision, but prepared it. Extensions of the model will include capacity considerations (as well as the possibility of potential expansions), alternatives to landfilling (e.g., recycling and incineration), the incorporation of a hazmat facility, and the inclusion of the generation and sale of electricity from landfill gas. Another extension could include the routing of collection and transport trucks as well as the potential establishment of additional transfer stations. Internal layout problems could also be part of a comprehensive plan.
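The restriction of Fig. 6.6 to nondominated solutions corresponds to a simple Pareto filter over (cost, opposition) pairs. A generic sketch (the sample points are invented, roughly echoing the magnitudes of Table 6.4 in millions and thousands):

```python
def nondominated(points):
    """Keep pairs not dominated by another pair; (c, o) dominates (c', o')
    if it is no worse in both coordinates and different."""
    return sorted(
        p for p in points
        if not any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in points)
    )

# Invented (cost, opposition) pairs, e.g. in millions and thousands.
pts = [(3.0, 56.0), (3.5, 36.8), (4.2, 40.0), (4.2, 12.7), (5.9, 4.3), (6.0, 4.3)]
print(nondominated(pts))
```

Here (4.2, 40.0) and (6.0, 4.3) are dropped because another point is at least as good in both objectives.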


References

E.D. Adamides, P. Mitropoulos, I. Giannikos, I. Mitropoulos, A multi-methodological approach to the development of a regional solid waste management system. J Oper Res Soc 60(6), 758–770 (2009)
A.P. Antunes, Location analysis helps manage solid waste in central Portugal. Interfaces 29(4), 32–43 (1999)
BBC News, A Rubbish Story (2019). Available online at https://www.bbc.com/news/world-asia-50429119, last accessed on 9/23/2022
D.P. Duffy, From Transfer Station to MRF (2009). Available online at https://www.mswmanagement.com/collection/article/13004365/from-transfer-station-to-mrf, last accessed on 9/23/2022
H.A. Eiselt, Locating landfills and transfer stations in Alberta. INFOR 44(4), 285–298 (2006)
H.A. Eiselt, Locating landfills—optimization vs. reality. Eur J Oper Res 179(3), 1040–1049 (2007)
H.A. Eiselt, V. Marianov, A bi-objective model for the location of landfills for municipal solid waste. Eur J Oper Res 235(1), 187–194 (2014)
H.A. Eiselt, V. Marianov, Location modeling for municipal solid waste facilities. Comput Oper Res 62, 305–315 (2015)
EPA, Waste Transfer Stations: A Manual for Decision-Making (2002). Available online at https://www.epa.gov/sites/production/files/2016-03/documents/r02002.pdf, last accessed on 9/23/2022
G. Ghiani, D. Laganà, E. Manni, R. Musmanno, D. Vigo, Operations research in solid waste management: A survey of strategic and tactical issues. Comput Oper Res 44, 22–32 (2014)
I. Giannikos, A multiobjective programming model for locating treatment sites and routing hazardous wastes. Eur J Oper Res 104, 333–342 (1998)
S. Hirshfeld, P.A. Vesilind, E.I. Pas, Assessing the true cost of landfills. Waste Manag Res 10, 471–484 (1992)
D.L. Huff, Defining and estimating a trading area. J Mark 28, 334–338 (1964)
P. Mitropoulos, I. Giannikos, I. Mitropoulos, E. Adamides, Developing an integrated solid waste management system in western Greece: a dynamic location analysis. Int Trans Oper Res 16, 391–407 (2009)
OECD, Data: Municipal Waste (2017). Available online at https://data.oecd.org/waste/municipal-waste.htm, last accessed on 9/23/2022
C. Prodhon, C. Prins, A survey of recent research on location-routing problems. Eur J Oper Res 238, 1–17 (2014)
W.J. Reilly, The Law of Retail Gravitation (Knickerbocker Press, New York, 1931)
Rubbish Prohibited, How Many Landfills Are There? (2014). Available online at https://landfill-site.com/how-many-landfills/, last accessed on 9/23/2022
Statista, Daily Municipal Solid Waste Generation Per Capita Worldwide in 2018, by Select Country (2020a). Available online at https://www.statista.com/statistics/689809/per-capital-msw-generation-by-country-worldwide/, last accessed on 9/23/2022
Statista, Number of Municipal Waste Landfills in the U.S. from 1990 to 2018 (2020b). Available online at https://www.statista.com/statistics/193813/number-of-municipal-solid-waste-landfills-in-the-us-since-1990/, last accessed on 9/23/2022
Worldometer, Countries in the World by Population (2020). Available online at https://www.worldometers.info/world-population/population-by-country/, last accessed on 9/23/2022

Chapter 7: Using Goal Programming to Locate a New Fire Hall

Firefighting has a long and illustrious history. The first to deal with firefighting equipment appears to have been Ctesibius of Alexandria (about 270 BC), who invented, among other things, a force pump (see Britannica, undated), which could be used for fighting fires. His invention was pretty much forgotten afterwards, though. The ancient Roman empire also had its fire brigades, which comprised thousands of slaves, who were responsible not only for fighting fires but also for enforcing the fire codes of the day. Not much is known about fighting fires in the Middle Ages. In the seventeenth century in the New World, Boston was the first city to have firefighting co-ops, which comprised property owners who vowed to help each other out in case of a fire. The first fire chief was Benjamin Franklin in about 1736 in Philadelphia, whose idea was to protect not just the houses of the members of some co-op, but all houses (presumably since a fire in any of the wooden houses would spread quickly and thus endanger all of them). In 1699, François du Mouriez introduced some more modern equipment and provided a number of pumps in Paris. A century later, Napoleon Bonaparte established a segment of the army, the sapeurs-pompiers, that was equipped with manual fire pumps. In 1824, the Scottish city of Edinburgh established its own fire department, to be followed by London 8 years later. Another important day occurred in 1853, when the city of Cincinnati created one of the first salaried fire departments (CFD History, undated). They were also frontrunners when it came to using a steam engine in their department. For an excellent account of the history of firefighting, see Fireserviceinfo (2011). Figure 7.1 shows a manual pumper fire wagon from the 1860s. Another, somewhat curious, development dates back to 1878 and the invention of the fire pole (Rossen 2021). Similarly, fire nets were invented in 1887 and were used for close to a century before being replaced by ladder trucks (see Adams 2011).

This chapter was co-written by Gonzalo Méndez-Vogel, Pontificia Universidad Católica de Chile, Santiago, Chile. His contribution is much appreciated. © Springer Nature Switzerland AG 2023 H. A. Eiselt et al., Multicriteria Location Analysis, International Series in Operations Research & Management Science 338, https://doi.org/10.1007/978-3-031-23876-5_7


Fig. 7.1 Manual Pumper Fire Wagon, 1860s, Firefighter’s Museum, Saint John, NB (© H.A. Eiselt)

Fig. 7.2 4-man fire crew in Ottawa, Ontario, Canada (© H.A. Eiselt)

Today, the tasks and the equipment of fire departments are manifold: equipment includes fire engines (pumpers) and fire trucks (ladders) (see Fig. 7.2), firebombers, i.e., airplanes that aid in extinguishing forest fires, foam pumpers, drones for information gathering in unsafe areas, and many other tools. Today's firefighters' tasks are equally diverse: in addition to the traditional tasks of extinguishing fires in houses and forests (see, e.g., Fig. 7.3), they include rescuing people from buildings and cars (with the well-known jaws of life), freeing people or some of their body parts stuck in appliances (Phillips 2022), and assisting in crime investigations, particularly when arson is suspected, all the way to rescuing cats from trees. New challenges also include extinguishing burning batteries in electric cars, a problem that does not appear to have been solved in a satisfactory manner at this time. Each city houses one or more fire stations. Given that time is of the essence, response time is crucial. In the United States, the National Fire Protection Association (NFPA) has outlined a guideline called "NFPA 1710," which sets standards for the service level of fire departments; see, e.g., Moore-Merrell (2019). Among them is a requirement that the first engine arrive at the site within 4 min 90% of the time, with at least 4 firefighters. A similar type of standard, albeit much looser, was specified by Medavie (2022) for ambulances in rural areas: "In rural areas, it [the ambulance, eds.] is supposed to respond within 22 min 90% of the time." The standard for urban areas is 9 min 90% of the time. Actually, this standard was exceeded, as the ambulances arrived at the scene within the required time more than 92% of the time. Clearly, in order to achieve such a standard, it is necessary to decentralize the services. In other words, it will be necessary, in particular in the case of larger areas, to have multiple fire stations. It is not surprising that this has been done.

Fig. 7.3 Fighting a hill fire in Hong Kong (© H.A. Eiselt)

A few


examples may explain the practice (taken from various internet sources): Los Angeles has 114 fire stations, Houston has 102, Chicago has 93, San Francisco and San Diego have 51 each, and so forth. Given that demographic changes (see Burns 2021), such as population increases (e.g., Raleigh, North Carolina, an increase of 72.6% since 2000) or decreases (e.g., Detroit, a decrease of 30% since 2000), do not occur evenly throughout a city, the present locations of the fire halls may no longer be at the best possible sites, even if they had been optimized when they were planned. This takes us to Santiago, Chile, where one of us was commissioned to perform a study on the existing service. The pertinent results of the single-objective model were published in Pérez et al. (2015), who maximize the demand that is covered within a standard response time. The fire department of the city of Santiago was established in 1863; it comprises an all-volunteer workforce that operates 17 fire stations. The authors of the study also point out that the population distribution in the city has changed dramatically over the years. This chapter will take another look at the problem, but this time using multiple objectives. The tool that we employ for that purpose is goal programming; see, e.g., Charnes et al. (1955), Ignizio (1985), Romero (1991), Schniederjans (1995), and Jones and Tamiz (2010). The main reason for this choice is that we can use target values in goal programming and minimize the over- or underachievement over and/or under these target values. In other words, while "hard" targets have been specified, the method will attempt to get as close to achieving them as possible rather than forcing solutions to satisfy them. In our problem, we will consider the following objectives:

1. Minimize the underachievement under the 90% target for covering buildings within 8 min,
2. Minimize the underachievement under the 95% target for covering people within 6 min,
3. Minimize the overachievement over some target value for the average fire station-to-customer distance, and
4. Minimize the maximum number of calls for assistance directed to any of the fire stations.

The first two objectives measure the level of protection, the first for structures and the second for people. The third objective essentially minimizes the average customer-to-fire station distance, while the fourth objective balances the traffic at the individual fire stations. One issue that should also be addressed concerns capacities at the fire stations. Ways to deal with this issue include probabilistic models (with the obvious drawback of difficulties in solving the model) or the consideration of backup coverage, as introduced by Hogan and ReVelle (1986) and further elaborated upon by Pirkul and Schilling (1989) and Murray et al. (2010). In order to simplify matters, we will ignore this (albeit important) issue. Given the above scenario and objectives, we will need to perform a number of postoptimality analyses to get some feel for the problem and the sensitivity of the


solution. This concerns specifically the target values in the first two objectives, which, at least in some jurisdictions, are even mandated.

7.1 Making It Work

Our area of study is the city of Santiago, the capital of Chile. The city was founded in 1541 by Pedro de Valdivia in agreement with the local Picunche people. De Valdivia had been sent by Pizarro from Cuzco, Peru. Today, the city has about 6.8 million people, similar to Madrid, Houston, or Baghdad; see, e.g., Macrotrends (2022). The city is subdivided into 35 neighborhoods, so-called comunas (Wikipedia 2022). A map of these neighborhoods is shown in Fig. 7.4. The individual neighborhoods

Fig. 7.4 Neighborhoods in Santiago (by Osmar Valdebenito, CC BY-SA 2.5, https://commons. wikimedia.org/w/index.php?curid=1055869)


differ greatly in terms of the composition of their inhabitants, their habits, income, and other demographic features. For historical reasons, different fire brigades serve different areas of the city. Naturally, they cooperate with each other. We focus on the "Cuerpo de Bomberos de Santiago," which serves the largest area, including the downtown area of the city. For reasons of manageability, the area under study has been subdivided into 225 cells of size 1 km × 1 km. The cells are shown in Fig. 7.5. The 17 red cells in the figure show sites that house at least one fire station. For simplicity, we will assume that customers and fire stations are located at the center of each of the cells, and, given the existing road infrastructure of Santiago, distances are Manhattan distances. The overall goal of this case study is to examine the present situation and explore the effects of the relocation of a single facility and, budget permitting, the location of a new fire station in addition to relocating one station. These effects are studied given a number of different objectives. In order to formulate the problem, we first need to define a variety of subscripts, sets, and variables. In general, the subscript i refers to customers, while the subscript j refers to one of the existing or new facility locations. The set N includes all sites of existing and potential future facility locations. As in virtually all discrete location problems, we need to define location variables y_j, which assume a value of 1 if we locate at site j and 0 otherwise, as well as allocation variables x_ij ∈ [0, 1], which measure the proportion of the customers at site i that are served by a facility at site j. Note that we do not distinguish between existing facility locations and locations that a new facility is assigned to. As far as parameters are concerned, we define b_i as the number of buildings in cell i, and B denotes the total number of buildings in all 225 cells. Additional variables and parameters will be introduced whenever they are needed. The first objective concerns the coverage of the structures in the city. The target value is to cover at least 90% of the buildings in the city within 8 min. In order to do so, we need to define additional covering variables v_Bi, which assume a value of 1 if a building in cell i is covered within 8 min, and 0 if it is not. The first objective can then be written as

Min z_1 = 0.9B − Σ_i b_i v_Bi.
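Before the goal-programming form is introduced, z_1 is just a coverage shortfall and can be computed directly; a toy example with invented cell data (five cells, coverage indicators set by hand):

```python
# Invented data: buildings b[i] per cell and coverage indicators v[i]
# (1 if cell i is within 8 minutes of an open fire station).
b = [120, 80, 200, 50, 150]
v = [1, 1, 1, 0, 1]

B = sum(b)                                    # total number of buildings
covered = sum(bi * vi for bi, vi in zip(b, v))
z1 = 0.9 * B - covered
print(B, covered, z1)
```

Here z_1 is negative, i.e., the 90% target is exceeded; the deviational-variable version introduced later keeps only the underachievement d_1^−.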

The second objective concerns the protection of the residents of the city. The potential demand for fire protection services is based on calls that were made to the fire department in the past, using data from 5 years. This does, of course, assume that future demand behaves similarly to past demand. With C denoting the total number of past calls within the planning period and c_i the calls to the fire department made from cell i, we define covering variables v_Ci, which assume a value of 1 if a call that originates in cell i can be covered within 6 min, and 0 otherwise. The target value here is to cover 95% of the calls from within the city. The second objective can then be formulated as

Fig. 7.5 The study area and the location of the existing fire stations


Min z_2 = 0.95C − Σ_i c_i v_Ci.

Consider now the third objective. It was originally designed as a general measure of efficiency. In essence, it minimizes the total or average distance (the two are proportional to each other as long as the number of people who are served is constant). This is the objective in well-known standard problems such as the p-median problem, the simple plant location problem, and others. One problem is that by minimizing the average, there may be some really long distances in the solution. Sometimes, analysts employ minimax functions to avoid that. The unique focus of such objectives is the worst case, similar to the covering objectives we are already using. In order to fit the minsum distance objective into the goal programming framework, we need to specify a target value and minimize the over- or underachievement of the objective over or under that value. In order to do so, we first define a sufficiently small total distance D, so that the objective can be written as

Min z_3 = Σ_{i,j} c_i d_ij x_ij − D.

Finally, the purpose of the fourth objective is to minimize the maximum number of calls assigned to a facility, given that each customer is assigned to his closest fire station. The minimax function can be written as

Min z_4 = max_j { Σ_i c_i x_ij }.
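Once the assignments are fixed, the fourth objective is a simple maximum over station loads; a minimal sketch with invented call counts:

```python
# Invented call volumes per cell and a fixed closest-station assignment.
calls = {"cell1": 40, "cell2": 25, "cell3": 60, "cell4": 10}
station_of = {"cell1": "A", "cell2": "A", "cell3": "B", "cell4": "B"}

# Accumulate the call load of each station from its assigned cells.
load = {}
for cell, station in station_of.items():
    load[station] = load.get(station, 0) + calls[cell]

z4 = max(load.values())  # the maximum load over all stations
print(load, z4)
```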

Before rewriting the objectives in the usual goal programming-style form with the minimization of the over- or underachievement, we first need to formulate the constraints. We begin with the constraints that are independent of the objectives. We first need to ensure that each customer is served by exactly one facility, i.e.,

Σ_j x_ij = 1 ∀i. (7.1)

We also need to require that we can only serve a customer from a facility that actually exists, i.e.,

x_ij ≤ y_j ∀i, j. (7.2)

So far, the constraints are identical to those used in p-median problems. However, while in the p-median problem the objective ensures that each customer is assigned to his closest facility, this is not the case in this problem with multiple objectives and, especially, when minimizing the maximum load of a station. In order to ensure the "closest" assignments, we require that


x_ij ≥ y_j − Σ_{k: d_ik < d_ij} y_k ∀i, j. (7.3)
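The effect of constraint (7.3) can be checked by brute force on a tiny invented instance: reading the sum as running over open facilities strictly closer to customer i than site j, the right-hand side equals 1 exactly when j is the closest open site, and is nonpositive otherwise.

```python
# One customer, four candidate sites: distances d[j] and open sites y[j].
# All numbers invented for illustration.
d = [7.0, 3.0, 5.0, 9.0]
y = [1, 0, 1, 1]

def rhs(j):
    """Right-hand side of (7.3): y_j minus the number of open facilities
    strictly closer to the customer than site j."""
    return y[j] - sum(y[k] for k in range(len(d)) if d[k] < d[j])

print([rhs(j) for j in range(len(d))])

# Only site 2 (the closest open site) gets a right-hand side of 1,
# which forces x_{i,2} = 1; all other sites are not bounded from below.
closest_open = min((j for j in range(len(d)) if y[j]), key=lambda j: d[j])
```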

In order to demonstrate that these constraints will ensure the “closest” assignments, consider the following. In case the existing facility at site j (thus y_j = 1) is closest to customer i, the sum on the right-hand side of the constraint equals zero, as none of the other facilities is closer. Hence the constraint reduces to x_ij ≥ 1 and, given that the variables x_ij are binary, this forces x_ij to 1, so that the allocation needs to be made. On the other hand, suppose that fire hall j is not the closest facility to customer i. In this case, the sum on the right-hand side of the constraint equals at least one (more, if site j is the third- or lower-ranking facility as seen from customer i), so that, given that y_j is a zero-one variable, the right-hand side of the constraint is zero or negative. Hence it allows the allocation variable x_ij to assume a value of either zero or one. However, since the constraints (7.1) require that customer i be allocated to exactly one facility and, as shown above, the closest allocation needs to be made, this forces the allocation variable x_ij to zero if facility j is not the closest facility to customer i. This feature of the model is supported by the fact that in reality, it would be uncommon (or even prohibited) for a non-closest fire station to respond to a call. In order to express that we want to locate exactly one new facility, we first define N = {1, 2, . . ., n} as the set of sites with existing or potential new facilities. Then we can write

∑_{j∈N} y_j = p + 1,    (7.4)
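The mechanics of the closest-assignment constraint (7.3) can be checked numerically. The sketch below is illustrative only: the distances and the open-facility vector y are made up, and `rhs` is a hypothetical helper computing the right-hand side of (7.3) for one customer.

```python
# Right-hand side of constraint (7.3) for customer i and site j:
# y_j minus the number of open facilities strictly closer to i than site j.
d_i = [4.0, 2.0, 7.0]   # assumed distances from customer i to sites 0, 1, 2
y = [1, 1, 0]           # sites 0 and 1 are open, site 2 is closed

def rhs(j, d_i, y):
    closer = sum(y[k] for k in range(len(y)) if d_i[k] < d_i[j])
    return y[j] - closer

print(rhs(1, d_i, y))   # 1:  site 1 is the closest open site, so x_i1 is forced to 1
print(rhs(0, d_i, y))   # 0:  site 0 is open but not closest, so x_i0 is left free
print(rhs(2, d_i, y))   # -2: site 2 is closed and two open sites are closer
```

Together with the assignment constraints, a right-hand side of 1 forces the allocation, while a right-hand side of zero or less leaves x_ij unconstrained, exactly as argued in the text.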

where p is the number of existing facilities. In order to formulate the possibility that at most one of the existing facilities may be relocated, we first need to define the set N_E ⊂ N, which includes the sites of all existing facilities. The formulation is then

∑_{j∈N_E} y_j ≥ p − 1.    (7.5)

Simply put, the constraint indicates that at least p − 1 of the existing facilities have to stay where they are. These are the regular (absolute) constraints of the problem. We now formulate the constraints needed for the objectives. First, we need to define the parameters t_ij, the time it takes a fire truck to reach customer i from fire station j. Without making any assumptions about specific routes, and assuming that a fire truck has an average speed of 25 km/h in the city, we will use t_ij = δd_ij with δ = 60/25. Note that while d_ij is defined here in terms of km, the time t_ij is measured in minutes. Starting with the first (protection of buildings) objective, we first define the set N_i^B = {j | t_ij ≤ 8}, which includes all existing and potential facility locations that are no farther away from customer i than 8 min. We first need to ensure that a customer is only considered


covered if there exists at least one existing or new facility within an 8-minute distance. This is achieved by formulating

v_i^B ≤ ∑_{j∈N_i^B} y_j ∀ i.
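A sketch of how the travel times t_ij and the coverage sets N_i^B could be built; only the factor δ = 60/25 comes from the text, the distance values are assumptions.

```python
# Travel times t_ij = (60/25) * d_ij in minutes; coverage sets N_i^B = {j : t_ij <= 8}.
DELTA = 60 / 25                       # minutes per km at an average speed of 25 km/h

d = {(1, 'A'): 2.0, (1, 'B'): 5.0,    # hypothetical distances d_ij in km
     (2, 'A'): 4.0, (2, 'B'): 1.5}
t = {ij: DELTA * dij for ij, dij in d.items()}

N_B = {i: [j for j in ('A', 'B') if t[(i, j)] <= 8] for i in (1, 2)}
print(N_B)   # {1: ['A'], 2: ['B']}
```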

We then use the binary variables v_i^B to formulate the constraint that specifies 90% building coverage (if possible) within 8 min as

∑_i b_i v_i^B + d_1^− − d_1^+ = 0.9B,

where d_1^− and d_1^+ are continuous under- and overachievement (deviational) variables, respectively. The sum on the left-hand side of the equation expresses the number of buildings actually covered within 8 min of the closest fire hall, and the right-hand-side value equals 90% of the total number of buildings in the study area. Since the desired situation has the left-hand side larger than the right-hand side, Objective 1 is the minimization of the underachievement under the target value, i.e., Min z_1 = d_1^−. The second objective is formulated along similar lines. The set N_i^C = {j | t_ij ≤ 6} includes all existing and potential facility sites that are no farther than 6 min from a customer i who initiated a call in the past. Using the binary variables v_i^C, defined analogously to the above as 1 if a call that originates in cell i can be covered within 6 min, and 0 otherwise, we can write

v_i^C ≤ ∑_{j∈N_i^C} y_j ∀ i.
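At any given solution, the under- and overachievement variables of such a goal constraint take the values sketched below; all numbers are made up for illustration.

```python
# For a goal constraint "achieved + d_minus - d_plus = target", at most one of the
# two deviational variables is positive at an optimal solution.
def deviations(achieved: float, target: float) -> tuple[float, float]:
    """Return (underachievement d_minus, overachievement d_plus)."""
    return max(target - achieved, 0.0), max(achieved - target, 0.0)

print(deviations(1200.0, 0.95 * 1500))   # (225.0, 0.0): 225 calls short of the target
print(deviations(1500.0, 0.95 * 1500))   # (0.0, 75.0): target exceeded by 75 calls
```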

The requirement of covering at least 95% of the calls (if possible) can then be written as

∑_i c_i v_i^C + d_2^− − d_2^+ = 0.95C,

where d_2^− and d_2^+ denote the under- and overachievement (deviational) variables, respectively. Again, we attempt to minimize the underachievement under the target value 0.95C, so that the objective can be written as Min z_2 = d_2^−. Consider now the third objective. Above, it was written in its preliminary form as Min z_3 = ∑_{i,j} c_i d_ij x_ij, with a target value of D. Introducing again deviational variables d_3^− and d_3^+, the appropriate goal constraint can be written as


∑_{i,j} c_i t_ij x_ij + d_3^− − d_3^+ = D.

The goal D is obtained by solving a standard p-median problem with ∑_{j∈N} y_j = p + 1. In this case, the goal has to be a lower bound of the weighted distance, and we compute this lower bound by solving the problem using only the p-median objective ∑_{i,j} c_i t_ij x_ij and allowing two of the current fire stations to be moved from their locations, i.e., using the constraint ∑_{j∈N_E} y_j ≥ p − 2. This is a relaxation of one of the requirements. Then, D = ∑_{i,j} c_i t_ij x*_ij. Thus the third objective can now be written as Min z_3 = d_3^+. Finally, consider the fourth objective that minimizes the maximal number of calls handled by a fire station. The number of calls allocated to facility j is ∑_i c_i x_ij, while the average number of calls at a facility is C/(p + 1). Thus we can write, for each station,

∑_i c_i x_ij + d_j^− − d_j^+ = C/(p + 1) ∀ j.
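With hypothetical call counts, the station loads and their deviations from the average C/(p + 1) can be evaluated as follows; the maximal total deviation is what the fourth objective will minimize.

```python
# Assumed call counts per station (so p + 1 = 3 stations in this toy example).
loads = {1: 900, 2: 1400, 3: 700}
C = sum(loads.values())              # 3000 calls in total
avg = C / len(loads)                 # average load C/(p+1) = 1000.0

# (d_j^-, d_j^+) per station, then the maximal total deviation over all stations.
dev = {j: (max(avg - L, 0), max(L - avg, 0)) for j, L in loads.items()}
max_dev = max(dm + dp for dm, dp in dev.values())
print(max_dev)   # 400.0: station 2 is 400 calls above the average
```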

The fourth objective is then

Min z_4 = max_j {d_j^+ + d_j^−},

which, given the standard transformation (see, e.g., Eiselt and Sandblom 2007), can be rewritten as a regular minimization problem by defining a new variable w and formulating the constraints

w ≥ d_j^+ + d_j^− ∀ j,

coupled with the objective Min z_4 = w. Rather than using a somewhat artificial preemptive priority structure, we will aggregate the four individual objectives into a single weighted objective. As discussed in Chap. 3, this obviously raises the problem of the commensurability of the individual deviational variables. In our case, d_1^− is measured in the number of buildings, d_2^− is measured in the number of calls, d_3^+ is expressed in time, and d_4^+ is expressed as capacity, i.e., the number of incidents assigned to a fire station. Clearly, these measures are noncommensurable. In order to ensure commensurability, we either explicitly specify tradeoffs between the individual objectives, or we normalize the objectives. For this case, we have chosen the latter option. Following one option suggested by Jones and


Tamiz (2010), we divide the value of the deviational variable by the target value. The normalized first objective then reads

Min z_1′ = d_1′^−, where

d_1′^− = (0.9B − ∑_i b_i v_i^B) / 0.9B.

Here, the right-hand-side value is the percentage deviation from the target value of 0.9B. Similarly, the second objective is

Min z_2′ = d_2′^−, where

d_2′^− = (0.95C − ∑_i c_i v_i^C) / 0.95C.

The third objective is

Min z_3′ = d_3′^+, where

d_3′^+ = (∑_{i,j} c_i t_ij x_ij − D) / D.

Finally, the fourth objective is Min z_4′ = w′, where

(C/(p + 1) − ∑_i c_i x_ij) / 1.5L = d_j′^− − d_j′^+ ∀ j,

with L denoting the highest load that a station has in the current scenario. To compute this load, we solve the problem using only the p-median objective and


Table 7.1 Characteristics of the present solution

| Solution | % coverage of buildings & population | % coverage of calls | Average distance (km) | Maximal load of any given station |
|---|---|---|---|---|
| Present solution (benchmark) | 84.65% | 67.49% | 1.69 | 8,047 |

compute the maximum load that a station has in the solution. This load is denoted L. Since the full problem has other objectives, and we do not know a priori whether the maximum load will increase or decrease, we have to use both variables d_j′^− and d_j′^+, so that

w′ ≥ d_j′^− + d_j′^+ ∀ j,

and the objective will then be as shown above. The overall objective can then be written as

Min z* = ∑_k α_k z_k′,

with weights α_k that indicate the importance of the kth objective. As far as the results are concerned, we first solved the base scenario, which is the present solution with the fire stations in the places where they are currently found. Table 7.1 shows the results of this benchmark solution. Here, we are assuming that a building or its inhabitants are covered if they are within 8 min of the nearest fire station, that a call is covered if it is within 6 min of the nearest station, and that the speed of the vehicle is set to 25 km/h. Also, for the first criterion, we use as demand the number of houses plus eight times the number of apartment buildings, as each apartment building has on average eight individual apartments. That way, we count people and dwellings the same way. At first glance, the present results are acceptable, albeit not outstanding. The fact that 85% of the buildings and people can be reached by a fire truck within 8 min is reasonably comforting. The coverage of the calls is less impressive, but we should recall that the standard for this cover is stricter, viz., only 6 min. The average distance is quite short, which was to be expected in an urban environment with a fairly high population density (8460 people per km² as per Versus, undated; in comparison, London and Madrid are at about 5500, New York at 10,200, and Delhi at 9340). A bit surprising is the maximal load of any of the fire stations. With more than 8000 calls per year, it is far higher than the average of just over 1400 calls per year. In order to improve the solution, we first consider the relocation of a single fire station. The reason is that while there will, of course, be costs to move the station, once the move has been accomplished, there are no other resource implications. In order to relocate an existing facility, we will use our four individual objectives, one at a time, to do so.
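The normalization and weighted aggregation can be sketched as follows; every number below (the targets derived from B, C, D, and L, and the deviation values) is hypothetical, only the weights match those used for the compromise solution.

```python
# Hypothetical targets and deviations illustrating the normalized objectives z'_k
# and the weighted overall objective z*.
targets = {1: 0.9 * 50000,   # 90% of an assumed B buildings
           2: 0.95 * 3000,   # 95% of an assumed C calls
           3: 40000,         # an assumed lower bound D on the weighted time
           4: 1.5 * 8047}    # 150% of the current maximal load L
devs = {1: 2500.0, 2: 180.0, 3: 1500.0, 4: 600.0}   # assumed deviation values
alpha = {1: 0.25, 2: 0.4, 3: 0.15, 4: 0.2}          # the weights of the objectives

z_norm = {k: devs[k] / targets[k] for k in targets}  # normalized objectives z'_k
z_star = sum(alpha[k] * z_norm[k] for k in alpha)    # overall weighted objective z*
print(round(z_star, 4))
```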
Then, as a compromise solution, we will use the goal programming approach with weights α_1 = 0.25, α_2 = 0.4, α_3 = 0.15, and α_4 = 0.2. These weights have been chosen to reflect the importance of the objectives: Covering


Table 7.2 Solutions when exactly one fire hall is moved

| Solution | % coverage of buildings & population | % coverage of calls | Average distance (km) | Maximal load of any given station | Station # deleted | Station # added |
|---|---|---|---|---|---|---|
| Objective 1 | 87.61% | 68.69% | 1.66 | 8,047 | 72 | 128 |
| Objective 2 | 85.45% | 88.72% | 1.11 | 6,677 | 59 | 68 |
| Objective 3 | 85.45% | 88.72% | 1.11 | 6,677 | 59 | 68 |
| Objective 4 | 85.78% | 88.38% | 1.50 | 6,494 | 64 | 76 |
| Goal programming | 85.45% | 88.72% | 1.11 | 6,677 | 59 | 68 |

objectives account for about two thirds of the total, and the remainder is shared by the third and fourth objectives. The results are shown in Table 7.2. Considering the results in Table 7.2, we observe that Objectives 2, 3, and the goal programming approach, highlighted in yellow, all lead to the same solution. This solution is two percentage points worse than the solution obtained with Objective 1 on the first criterion, but significantly better in the remaining three criteria, by 29%, 33%, and 17%, respectively. Objective 4 results in attributes that are very similar to those obtained by Objectives 2, 3, and goal programming for the first, second, and fourth criteria, but it falls far behind in the third criterion. In other words, when moving one fire hall is permitted, the “yellow solutions” appear to be the most promising options. Comparing the “yellow solution” with the present situation, we first notice that it dominates the status quo. It covers a few more buildings and people, covers 31% more calls, has customers on average 34% closer to their nearest fire hall, and has a maximal load that is 17% lower. It appears that the “yellow solution” is indeed worth considering. Consider now the possibility that funds were available to add one fire station in addition to the move of another station. Again, we solve the four individual objectives and then the goal programming problem with the aforementioned weights. The results are displayed in Table 7.3. The results shown in Table 7.3 reveal that the solutions obtained by Objective 2 and the goal programming approach are identical, and both of them (the “blue solutions”) dominate the solution obtained with Objective 3. The solution obtained with Objective 1 performs somewhat better than the blue solutions on the first criterion, but it fares significantly worse on the other three criteria.


Table 7.3 Solutions when one fire hall is moved, and one additional fire hall is constructed

| Solution | % coverage of buildings & population | % coverage of calls | Average distance (km) | Maximal load of any given station | Station # deleted | Station # added (moved) | Station # added (new) |
|---|---|---|---|---|---|---|---|
| Objective 1 | 90.00% | 69.44% | 1.64 | 8,047 | 72 | 128 | 216 |
| Objective 2 | 87.08% | 92.86% | 1.06 | 6,677 | 58 | 68 | 221 |
| Objective 3 | 85.06% | 88.85% | 1.06 | 6,677 | 79 | 68 | 97 |
| Objective 4 | 85.78% | 89.02% | 1.49 | 6,388 | 59 | 62 | 76 |
| Goal programming | 87.08% | 92.86% | 1.06 | 6,677 | 58 | 68 | 221 |

Table 7.4 Comparison of the best solutions in the three scenarios

| Solution | % coverage of buildings & population | % coverage of calls | Average distance (km) | Maximal load of any given station |
|---|---|---|---|---|
| Benchmark | 84.65% | 67.49% | 1.69 | 8,047 |
| Move 1 | 85.45% | 88.72% | 1.11 | 6,677 |
| Move 1, add 1 | 87.08% | 92.86% | 1.06 | 6,677 |

In order to facilitate comparisons, we summarize the three cases, i.e., the benchmark (present situation), moving one facility, and moving one and adding one facility, in a single table. Table 7.4 shows the results in direct comparison. It is apparent that moving a single facility increases the coverage of buildings and people only somewhat, but causes major improvements in the other three objectives. This includes a much improved average access time as well as a reduced maximum load for a more balanced workload. Moving from the benchmark solution to the “yellow solution” involves closing one of the two stations presently located in Square 59 and moving it to the more southerly located Square 68. It results in a somewhat more dispersed pattern of the locations of the fire halls.


Adding a new fire hall (the “blue solution”) will further improve the attributes of three of the four criteria, but only by a modest degree. Whether this improvement is affordable must, of course, be left to the decision maker. Again, this demonstrates that operations research tools are mainly (except on the lower end of the operational level) used to prepare the decisions and make them transparent to the decision maker, not to automatically make them. Furthermore, concerning the maximum load on a particular station, this issue can be managed later, after the location process has been completed, by increasing the resources allocated to that station. As far as extensions are concerned, a number of issues could be thought of so as to make the model more lifelike. One would be to distinguish between different types of calls to the fire department. Clearly, a fire in a high-rise building requires very different tools from a fire in a barn, or a wrecked vehicle whose occupants have to be literally cut out of their car. This would complicate things, as not all fire stations may be equipped with all available tools. Another issue concerns the amount of time it takes to reach the scene. Here, we have taken the distances between buildings and a fixed speed to calculate the approximate time to reach a potential fire. In reality, some time will be needed to alert the fire department, get the firefighters on the way, and later, when they have arrived, to set up operations. This is a fixed time that would have to be added to the driving time we have calculated. Along a similar line, a more realistic model would have to include potential congestion affecting travel times, or congestion in calls, meaning that a call arrives at a time at which the fire station assigned to that site has its resources busy attending another call. One possibility to include this last issue is via secondary coverage, as mentioned earlier.

References

C. Adams, Did firemen once use nets to rescue people from burning buildings? The Straight Dope (2011). Available online at https://www.straightdope.com/21344100/did-firemen-once-usenets-to-rescue-people-from-burning-buildings, last accessed on 9/25/2022
The Editors of Encyclopaedia Britannica, Ctesibius of Alexandria. Encyclopedia Britannica (undated). Available online at https://www.britannica.com/biography/Ctesibius-of-Alexandria, last accessed on 9/25/2022
S. Burns, Population growth in America’s largest cities since 2000. 24/7 Wall Street (2021). Available online at https://247wallst.com/special-report/2021/12/22/population-growth-since2000-in-americas-largest-cities/2/, last accessed on 9/25/2022
CFD History, The Diary (undated). Available online at http://www.cfdhistory.com/htmls/eventsofday.html, last accessed on 8/4/2022
A. Charnes, W.W. Cooper, R. Ferguson, Optimal estimation of executive compensation by linear programming. Manag Sci 1, 138–151 (1955)
H.A. Eiselt, C.-L. Sandblom, Linear Programming and Its Applications (Springer, Berlin, 2007)
Fireserviceinfo, A Little Fire Service History (2011). Available online at https://www.fireserviceinfo.com/history.html, last accessed on 9/25/2022
K. Hogan, C. ReVelle, Concepts and applications of backup coverage. Manag Sci 32(11), 1434–1444 (1986)
J.P. Ignizio, Introduction to Linear Goal Programming (Sage, Newbury Park, CA, 1985)


D. Jones, M. Tamiz, Practical Goal Programming (Springer, New York, 2010)
Macrotrends, Largest World Cities by Population (2022). Available online at https://www.macrotrends.net/cities/largest-cities-by-population, last accessed on 9/25/2022
Medavie Health Services NB, Statistics (2022). Available online at https://ambulancenb.ca/en/accountability/how-we-are-doing/response-times/, last accessed on 9/25/2022
L. Moore-Merrell, Understanding and measuring fire department response times. Lexipol (2019). Available online at https://www.lexipol.com/resources/blog/understanding-and-measuring-firedepartment-response-times/, last accessed on 9/25/2022
A.T. Murray, D. Tong, K. Kim, Enhancing classic coverage location models. Int Reg Sci Rev 33(2), 115–133 (2010)
J. Pérez, S. Maldonado, V. Marianov, A reconfiguration of fire station and fleet locations for the Santiago Fire Department. Int J Prod Res 54(11), 3170–3186 (2015)
J. Phillips, London Fire Brigade Lists Strangest Call-Outs Over Last Three Years Including Child Stuck in Bin (2022). Available online at https://www.getsurrey.co.uk/news/uk-world-news/london-fire-brigade-lists-strangest-18896026, last accessed on 9/25/2022
H. Pirkul, D. Schilling, The capacitated maximal covering location problem with backup service. Ann Oper Res 18, 141–154 (1989)
C. Romero, Handbook of Critical Issues in Goal Programming (Elsevier, Oxford, 1991)
J. Rossen, Why the fire pole is beginning to disappear. Mental Floss (2021). Available online at https://www.mentalfloss.com/article/643965/fire-pole-history-facts, last accessed on 9/25/2022
M.J. Schniederjans, Goal Programming: Methodology and Applications (Springer Science + Business Media LLC, New York, 1995)
Versus, Compare Cities (undated). Available online at https://versus.com/en/city, last accessed on 9/25/2022
Wikipedia, Anexo: Comunas de Santiago de Chile (2022). Available online at https://es.wikipedia.org/wiki/Anexo:Comunas_de_Santiago_de_Chile, last accessed on 9/25/2022

Chapter 8: Locating a Hospital with a Zoom Approach

Health care is one of the main services provided to a population, by either public (i.e., taxpayer-funded) services or private services, typically paid for by individuals or their insurance companies. The task of providing health care services is multi-faceted. It starts with the usual hierarchical system that provides primary health care by a family physician, followed (if required) by a referral to either a specialist such as a dermatologist or a cardiologist, or to services available in a regular or specialized hospital. For each hospital that is sited, the decision is whether to make it a general hospital that provides the usual services, a hospital that focuses on specialized illnesses, or any combination thereof. Either of these types of hospitals will be joined by an emergency system that includes emergency medical staff, ambulances, etc. The importance of establishing a viable health care system is magnified by the skyrocketing costs of salaries and of technology for diagnostics and surgery. Among the workflow decisions to be made is a referral system that avoids inefficient situations, e.g., those in which patients without family physicians use the emergency system for regular doctor’s visits; see, e.g., Fraser (2018). A pertinent hierarchical system for Goa, India, has been described by Hodgson (1988). Different health care systems have evolved over the centuries, ranging from public health care systems such as those in Japan, Canada, Germany, the Scandinavian countries, and others, to countries with private, non-universal health care systems such as the United States, the UAE, and others, as well as those with universal private care such as the systems in Switzerland, the Netherlands, and Israel. For the history of health care, see Burnham (2015) for the United States, Paim et al. (2011) for Brazil, and Messac (2020) for Malawi. The history of special types of surgery is recounted by Hines et al. (2020) and Magarinos et al. (2022).
Finally, Salmon (1990, ed.) and, more recently, Shi and Singh (2023, 6th ed.) outline the path of health care into the future.

© Springer Nature Switzerland AG 2023. H. A. Eiselt et al., Multicriteria Location Analysis, International Series in Operations Research & Management Science 338, https://doi.org/10.1007/978-3-031-23876-5_8

Being a very important and costly proposition with a number of different stakeholders, viz., public health planners, employees such as doctors and nurses, the


neighboring population, and, last but not least, the population the system is designed to serve, it is no surprise that the location of health care facilities is a multicriteria decision problem. As far as the objectives or criteria are concerned, while different authors use different criteria, there is a pattern. The survey by Soltani and Marandi (2011) lists recommendations by eight agencies and authors. The main criteria in all these studies include cost, demographics (i.e., demand for hospital services), the size, zoning, and ownership of the potential site, geological and geographical properties of the site, as well as access and travel time. Burkey et al. (2012) apply the traditional average distance and the number of people within a given radius, typically the well-known “golden (half) hour” frequently used for stroke, heart attack, and similar events; see, e.g., Lou (2022). The contribution by Khaksefidi and Miri (2015) is a bit different, as the authors use the distances to the center of the province the facility is to be located in and the distances from other locations as criteria. Aytaç Adalı and Tuş (2021) again apply the usual criteria, including cost, demographics, transportation issues, as well as geological and geographical issues. Our analysis will deal with a typical situation: rather than starting with a blank slate, we already have a number of hospitals in existence, and the plan is to site a new hospital. In this chapter, we deal exclusively with the issue of location, while in reality many other issues have to be dealt with, viz., availability of employees, capacity, specialization of services (if any), etc. In other words, it is a conditional location problem: locate a new facility, given that some similar facilities already exist. In the location process, we will consider the following criteria:

1. Maximize the number of people that are within 30 min of any one of the hospitals.
2. Minimize the average distance between any potential patient (i.e., any member of the population) and his nearest hospital.
3. Minimize the cost of the project.
4. Minimize the (noise) pollution to the people in the neighborhood of the new facility.
5. Provide similarly “good” service to all of your constituents.

While the first three criteria are fairly easy to quantify, the last two criteria are more difficult to measure. First consider the noise pollution. Clearly, there will be additional traffic on the access roads and streets to the hospital. Each homeowner will know that additional houses in his neighborhood will mean additional traffic, and, within reason, he will have to tolerate this. Large facilities such as gas stations, shopping centers, or hospitals are a different matter. In part, existing zoning laws are designed to deal with this issue. However, they are mostly, albeit not exclusively, designed to manage the use of a specific lot, rather than its surroundings. While the proximity of one’s dwelling to a hospital may be comforting to some, the all-night siren wailing of the emergency vehicles is less so, not to mention the noise of low-flying emergency helicopters that may take off or land at any time of day or night. The difficulty will be the tradeoff between the number of people that are inconvenienced and the degree to which they are disturbed. Simply speaking, what is worse: bothering 100 people a lot or 200 a bit? The decision maker, preferably in


consultation with nearby homeowners, will have to decide this. Furthermore, it is not only that some people get inconvenienced for the benefit of others: following Pan (2016) and Prischak (2022), there is a 3% decrease in the value of real estate if a hospital is nearby. This is a factor that should be included in the facility’s budget in order to make up for the lost value, but we are not aware of a single instance in which this has actually been done. The final factor on our list is more of a political criterion. It is usually referred to as “equity,” meaning something akin to “fairness,” an equally nebulous concept. The main idea behind this concept is that all members of the population should have more or less “equal access” to a service, meaning, in the locational context, that all members of the population should be at approximately equal distances to the facility. As has been argued elsewhere (see, e.g., Eiselt and Laporte 1995), this means that the concept of “equity” has been replaced by equality. It is very well known that “equity” objectives without efficiency objectives may, and typically do, lead to nonsensical solutions; see, e.g., Eiselt and Sandblom (2022). The conflict between efficiency and “equity” had already been remarked upon by Marsh and Schilling (1994). Finally, in real-world planning, the siting of public facilities is often not so much based on rational criteria such as those mentioned above, but rather on political considerations. For instance, the representative of a district will try to attract desirable facilities such as hospitals, schools, etc., to his own district, while pushing undesirable facilities, such as prisons, (polluting) power plants, and similar facilities, to other districts. For obvious reasons, this chapter will disregard such distorting factors. The general idea of the “zoom approach” applied in this chapter is this.
Given the main constituents and stakeholders of the problem under consideration, the first phase uses a single- or multiple-objective formulation of the continuous version of the problem to arrive at some “tentative optimum,” which will serve as a starting point. We then “zoom in,” meaning that we first look at available properties in the vicinity of the “tentative optimum.” Here, the term “vicinity” can be considered in the narrow or wide sense. If planning is done within a county, maybe a 5-mile radius is appropriate; if planning within a state, this may be increased to 20 or 30 miles; and within an entire country (this depends on the country, of course: a location in 3.2-million-square-mile Brazil will be considered differently from a location in, say, 62-square-mile Liechtenstein), the area to be considered may be considerably larger. Given that a finite number of suitable properties will exist in this second phase, we will then use a discrete method, often a multiattribute decision making model. This is the approach taken in this chapter. Note that the generic approach in Chap. 9 of this book is somewhat similar in that its first phase uses geographical information systems to reduce the set of feasible solutions, before moving into its second phase, which is identical to the second phase in this approach.

8.1 Making It Work

The study area is York County in the southern tip of the state of Maine, U.S.A. With its population of 213,000, it is the third-largest county in Maine (York County, Maine, Population 2022). Figure 8.1a shows the location of the State of Maine inside the United States, while Fig. 8.1b shows the location of York County inside the State of Maine. Within York County, we have identified ten major population concentrations (shown as red dots numbered 1, 2, . . ., 10 in Fig. 8.2), as well as three existing hospitals (shown as green dots labeled A, B, and C). Next, consider the population, i.e., the demand for hospital services. We will concentrate the demand at ten discrete points, based on the population data within each of the zip codes in the county. Overall, York County, Maine, has a total of 42 zip codes, of which 36 are populated; the others are mere post office boxes with zero population. Table 8.1 shows the ten population agglomerations 1, 2, . . ., 10 in its rows, their latitudes and longitudes, their coordinates in a fictitious system of coordinates, the population aggregated via zip codes at each point as weights, and the road distances (as per Google Maps) between the population agglomerations and the existing hospitals. For each population agglomeration, the closest of the existing hospitals A, B, and C is chosen and its distance is highlighted in yellow. This will indicate the present customer–facility allocation. In order to formulate the problem for Phase 1 of the “zoom procedure,” we need the distances between customers and the existing hospitals (shown in Table 8.1) as well as the distances between the customers and the new hospital. Since road distances cannot be calculated to a yet unknown point, we will first determine the Euclidean distances, which then have to be scaled so as to resemble road distances. Using a factor of 1.5, we convert Euclidean distances to approximate road distances.
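As a small sketch of this conversion (the customer coordinates are taken from Table 8.1; the candidate site is purely illustrative):

```python
# Approximate road distance = 1.5 * Euclidean distance in the fictitious coordinates.
import math

ROAD_FACTOR = 1.5

def road_distance(p, q):
    return ROAD_FACTOR * math.hypot(p[0] - q[0], p[1] - q[1])

# Customer 1 (York, at (2.1, 0.8)) to a hypothetical candidate site at (2.3, 3.2):
print(round(road_distance((2.1, 0.8), (2.3, 3.2)), 3))   # 3.612
```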

Fig. 8.1 (a) The State of Maine in the United States. (b) York County in the State of Maine (Source for both maps: U.S. Census Bureau, 2020 TIGER/Line Shapefiles, machine-readable data files)


Fig. 8.2 York County, Maine, with population agglomerations and existing hospitals (Source: U.S. Geological Survey, National Geospatial Program)

We will now solve two continuous single-facility location problems. The first problem locates the new facility so as to minimize the average distance between a customer and his closest facility, new or existing. The second problem will put an upper bound on the total distance all customers have to travel (which is, of course,


Table 8.1 Demand points, population, customer–(existing) hospital distances in miles

| # | Demand point | Latitude/longitude | Coordinates | Weight | Site A: Biddeford | Site B: Sanford | Site C: York |
|---|---|---|---|---|---|---|---|
| 1 | York | 43°09'03.4"N 70°39'48.3"W | (2.1, 0.8) | 20,010 | 26.5 | 25.7 | 0 |
| 2 | Wells | 43°19'22.8"N 70°36'31.0"W | (2.25, 1.7) | 19,927 | 14.1 | 12.4 | 13.0 |
| 3 | Acton | 43.531548, −70.909014 | (1.25, 2.75) | 11,893 | 24.7 | 9.6 | 35.1 |
| 4 | Lebanon | 43°24'20.1"N 70°54'23.8"W | (1.1, 2.2) | 15,668 | 24.9 | 8.3 | 24.2 |
| 5 | Kennebunk | 43°23'05.0"N 70°32'35.2"W | (2.4, 1.85) | 18,306 | 7.8 | 14.0 | 21.4 |
| 6 | Sanford | 43°26'25.5"N 70°46'37.6"W | (1.65, 2.3) | 23,540 | 16.7 | 0 | 25.6 |
| 7 | Biddeford | 43°28'58.7"N 70°28'15.2"W | (2.75, 2.55) | 23,242 | 0 | 16.7 | 27.1 |
| 8 | Saco | 43°30'47.5"N 70°23'25.8"W | (3.15, 2.7) | 27,106 | 5.3 | 23.9 | 33.5 |
| 9 | Hollis Center | 43°36'04.7"N 70°35'41.4"W | (2.3, 3.2) | 20,008 | 12.3 | 17.1 | 35.1 |
| 10 | Limington | 43°43'53.2"N 70°42'34.2"W | (1.9, 3.9) | 11,431 | 24.0 | 24.2 | 46.8 |

proportional to the average travel distance, given that the number of customers is fixed), while balancing the capacity usage of the existing and the new hospital. That way, we obtain two tentative optimal solutions, which will allow us to search for existing properties in between these two points when we enter the second phase of the Zoom procedure. At this point, we have all the information needed to formulate the conditional location problem: road distances cij between customers i and existing facilities j, the coordinates (ai, bi) of customers i for the calculation of estimated distances ci* between an existing customer i and the new facility, and the population wi concentrated at site i. Defining now location variables x and y for the site of the new facility, and allocation variables zij and zi* for the proportion of customer i's demand satisfied by existing facility j and by the new facility, respectively, we can formulate a problem that minimizes the total/average distance between the customers and their closest existing or new facilities as

P1: Min z = Σi Σj wi cij zij + Σi wi ci* zi*
    s.t.  Σj zij + zi* = 1  ∀ i
          zij, zi* ≥ 0  ∀ i, j,

where ci* = √((ai − x)² + (bi − y)²). The definition of ci* transforms the otherwise trivial formulation into a somewhat challenging nonlinear optimization problem. As a matter of fact, the global optimizer applied to our small example had not solved the problem after 13 h. There are, however, some considerations that significantly simplify the problem. Given that, in the absence of capacity constraints, all customers' demands will automatically be assigned to the closest existing or new facility (due to the minimization objective), we only need to define allocation variables for the closest connections, i.e., those highlighted in Table 8.1, plus all possible allocations to the new facility. In other words, we can set allocation variables z1A, z1B, z2A, z2C, etc., equal to zero. The simplified problem was then solved within a few seconds.
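Under this simplification, P1 reduces to the unconstrained problem of minimizing f(x, y) = Σi wi · min(ci^min, ci*(x, y)), where ci^min is customer i's road distance to its closest existing hospital. A minimal sketch (data from Table 8.1; as in the text, road miles for the existing hospitals are mixed with planar distances in map units for the new site, and the grid search below is only a crude stand-in for a global solver):

```python
import math

# Data from Table 8.1: (x, y) map coordinates, population weight, and the
# road distance (miles) to the closest of the three existing hospitals.
customers = [
    ((2.10, 0.80), 20010,  0.0),  # 1 York (closest: C)
    ((2.25, 1.70), 19927, 12.4),  # 2 Wells (B)
    ((1.25, 2.75), 11893,  9.6),  # 3 Acton (B)
    ((1.10, 2.20), 15668,  8.3),  # 4 Lebanon (B)
    ((2.40, 1.85), 18306,  7.8),  # 5 Kennebunk (A)
    ((1.65, 2.30), 23540,  0.0),  # 6 Sanford (B)
    ((2.75, 2.55), 23242,  0.0),  # 7 Biddeford (A)
    ((3.15, 2.70), 27106,  5.3),  # 8 Saco (A)
    ((2.30, 3.20), 20008, 12.3),  # 9 Hollis Center (A)
    ((1.90, 3.90), 11431, 24.0),  # 10 Limington (A)
]

def objective(x, y):
    """Simplified P1: every customer is served by the cheaper of its
    closest existing hospital and the new facility at (x, y)."""
    return sum(w * min(c_min, math.hypot(a - x, b - y))
               for (a, b), w, c_min in customers)

# Crude grid search over the study area as a stand-in for a global solver.
grid = [i / 20 for i in range(81)]  # 0.00, 0.05, ..., 4.00
best = min((objective(x, y), x, y) for x in grid for y in grid)
```

The grid includes demand point 9 at (2.3, 3.2), so the search can at least match the solution the text reports for P1.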


Table 8.2 The optimal locations of the new facility, usage rates, and total distances for various values of c

c          Location (x, y)  uA      uB      uC      u*      Actual total distance
1,000,000  2.3, 3.2         68,654  71,028  20,010  31,439  962,080
1,200,000  2.3, 3.2         68,654  59,135  20,010  43,332  1,119,630.3
1,500,000  2.4181, 1.9242   58,545  51,101  20,010  61,475  1,377,234.8
2,000,000  2.3731, 2.2972   58,545  55,360  20,010  57,216  1,800,457.2
3,000,000  2.4087, 2.1886   58,545  55,360  20,010  57,216  1,790,556

Before reporting the results of the optimization, we will describe a second formulation that generates another solution point, based on a different angle. More specifically, we will formulate a problem that minimizes the maximal capacity usage of any of the hospitals, so as to avoid a situation in which some hospitals see heavy use, while others are only used by a few patients. Defining variables uj as the usage (number of patients) of existing facility j and u* as the usage of the new facility, as well as a limit c on the total customer-mileage to the hospitals, the problem can be formulated as

P2: Min z = max {uj, u*}
    s.t.  uj = Σi wi zij  ∀ j
          u* = Σi wi zi*
          Σi Σj wi cij zij + Σi wi ci* zi* ≤ c
          Σj zij + zi* = 1  ∀ i
          zij, zi* ≥ 0  ∀ i, j,

where again ci* = √((ai − x)² + (bi − y)²). After the usual reformulation of the objective, we obtain a nonlinear optimization problem that can be solved. The results for a variety of values of c are shown in Table 8.2. For each value of c, the highest usage of any of the facilities is highlighted in yellow. An interesting side effect of the optimizations is that in all cases, hospital C (located at York) receives the least amount of traffic. This would be of particular interest if future hospital closures were considered. While the optimization of problem P1 results in a location of the new hospital that coincides with demand point 9 at Hollis Center (see the map in Fig. 8.2), problem P2 results in solutions near the blue dot in the figure. Somewhere in the vicinity of


Fig. 8.3 Degrees of coverage as function of patient-hospital distance

these two points we will look for suitable real estate for Phase 2 of the Zoom procedure. Our search in Zillow (2022) resulted in seven properties I, II, . . ., VII, which appeared to be suitable. For our purposes, it is sufficient to indicate roughly where they are located. Property I is located in Lyman far from any existing road, Property II is in Waterboro next to a road, Property III is located in Biddeford next to a road, Property IV is located in Saco, about 1500 ft. from a country road, Property V is also located in Saco close to Property IV but also very close to an existing subdivision (which could present problems regarding noise pollution), Property VI is located in Buxton close to a country road, and finally Property VII is located in Waterboro next to a cemetery at a relatively short distance from a country road.
At this point we need to operationalize the criteria we briefly outlined earlier. The first measure deals with the number of people who live within a given distance of their closest hospital. A typical threshold distance/travel time is the "golden (half) hour": the reasoning behind it is that life expectancy in the case of heart attacks and strokes drops off sharply after that time. The appropriate tool is a covering model, first described by Church and ReVelle (1974). However, the covered-not covered dichotomy does not necessarily reflect reality: the life expectancy of an individual who lives 29 min from the closest hospital does not dramatically differ from that of someone who lives 31 min from the closest hospital. The decline is steep, but it is not a jump; it is a gradual decrease. This gave rise to the concept of gradual covering; see Berman and Krass (2002), Berman et al. (2003), Drezner et al. (2004), and Eiselt and Marianov (2009), in which covering is considered to be a continuous measure, ranging from 0 (no coverage at all) to 1 (full coverage).
A typical way to model this is by way of a step function that provides full, i.e., 100%, coverage for distances within ½ h, partial, e.g., 50%, coverage for patient-hospital distances between 30 and 60 min, and no coverage beyond that. This is shown in Fig. 8.3. The second criterion is the average patient-hospital distance. In order to compute these distances, it is necessary, for each potential location of the new facility, to first allocate all customers to their respective closest hospitals. Once that has been done, we can compute the population-weighted road distances (here done via Google Maps) and then divide by a constant, the total population in the county. This provides the desired result. The third criterion concerns the cost of the project. Since the properties are in relative vicinity of each other (i.e., in the same county), we may assume that the construction cost of the facility and the cost of equipment are the same for all potential locations. This implies that the only differences are the


Table 8.3 Attribute matrix at the beginning of Phase 2

Location  Coverage, ½ h  Coverage, 1 h  Avg dist  Cost      Land size  Poll, # houses  Gini
I         127,872        179,700        6.4947    $55,000   37.33      50              0.2363
II        147,880        191,131        5.3926    $595,000  19.43      6               0.2223
IV        147,880        179,700        5.9847    $399,000  17.7       71              0.2432
V         147,880        179,700        5.9025    $499,999  69.86      81              0.2438
VI        147,880        191,131        5.4180    $225,000  18.0       12              0.2368
VII       127,872        191,131        6.2644    $100,000  20.0       50              0.2321

respective costs of the land, which are shown on the website. Another related item that is also readily available is the size of the property. While a regional hospital may not need more than the 15-plus acres that have been required, having extra land for potential future expansions is, while not a dealbreaker, a nice feature to have. The fourth criterion concerns polluting the neighbors. Given that a hospital is not a polluting facility per se, the main source of pollution in this context is noise. In particular, we are talking about the sirens of ambulances at any time of day or night and, if the facility is designed to have such a feature, the noise due to helicopters taking off and landing near the hospital. In order to model this, one would need (a) an appropriate attenuation function that measures the noise at all neighboring houses, and (b) a function that specifies the tradeoff between a little noise that affects many people and a lot of noise that affects a few. For our purposes, we have simply counted the number of residences along the road that connects the potential facility location to the nearest numbered highway. We assume that on these numbered highways, residents are used to noise and do not have to be considered. Finally, decision makers may be interested in providing hospital care of the same quality to all residents in the study area. This refers not to the actual care in the hospital, but to its accessibility. In other words, decision makers find it desirable to create a situation in which hospitals are equally distant (or close) to all residents in the county. Of course, this will have to be coupled with an efficiency criterion, which is the average customer-hospital distance already discussed above. Many equity measures have been suggested; for a summary, see, e.g., Marsh and Schilling (1994) and Eiselt and Laporte (1995). For our purpose, we will calculate the Gini index and use it as a criterion. The attribute matrix is shown in Table 8.3.
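The Gini column of the attribute matrix measures how unequally accessibility is distributed across the population. A minimal sketch of a population-weighted Gini computation (the helper and its sample data below are illustrative assumptions; the values in Table 8.3 are based on the actual road distances and populations):

```python
def weighted_gini(distances, weights):
    """Population-weighted Gini index: the weighted mean absolute
    difference between all pairs of observations, divided by twice
    the weighted mean distance."""
    pairs = list(zip(distances, weights))
    total_w = sum(w for _, w in pairs)
    mean = sum(d * w for d, w in pairs) / total_w
    mad = sum(wi * wj * abs(di - dj)
              for di, wi in pairs for dj, wj in pairs) / total_w ** 2
    return mad / (2 * mean)

# Illustrative only: three hypothetical demand points with equal weights.
g = weighted_gini([2.0, 6.0, 10.0], [1, 1, 1])  # = 32/108, about 0.296
```

Perfectly equal access (all distances identical) yields a Gini of 0, which is why the nearly identical values in Table 8.3 carry so little discriminating power.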
As mentioned above, the cost of the land is a very minor factor in terms of the cost of the entire hospital, and since the costs of land in these cases are fairly low, the criterion will be deleted. Similarly, we notice that the Gini indices are virtually identical for the given potential locations, so we delete them as well. For our further discussion, note that coverage and land size are "max" criteria, whereas average distance and pollution are "min" criteria. Upon inspection of the remaining attribute matrix, we notice that decision II dominates decisions IV and VI, so that these decisions will be deleted from further


Table 8.4 Reduced attribute matrix

Decisions  Coverage, ½ h  Coverage, 1 h  Avg distance  Land size  Poll, # houses
I          127,872        179,700        6.4947        37.33      50
II         147,880        191,131        5.3926        19.43      6
V          147,880        179,700        5.9025        69.86      81
VII        127,872        191,131        6.2644        20.0       50

Fig. 8.4 Preference graph for capture

consideration. This leaves a reduced decision matrix, which, for convenience, is shown in Table 8.4. We note that, with the exception of the land size, decision II dominates all other decisions, which could lead to a decision right here. However, we will examine the set of decisions by way of the PROMETHEE method (see Chap. 3 of this volume). First, we will consider the captures of customers, and, in order to avoid duplication and dependencies, we will agglomerate the two coverage measures (for ½ and 1 h) into one. We define the preference graph for the difference between two captures i and j, denoted by Δ1ij, as

p1(Δ1ij) = { … }.

For the second criterion, "average distance," a piecewise linear preference function p2(Δ2ij) with p2 = 0 for Δ2ij ≥ 1.333 is used; applied to one of the pairwise comparisons, it

will result in 0.6209. This preference function is shown in Fig. 8.5. The preferences with respect to the second criterion “average distance” are summarized in the matrix


Fig. 8.6 Preference function for the comparison of land sizes

P2 = [0       0.0447  0.1715  0.2967
      1.0     0       0.7422  1.0
      0.8207  0.1968  0       0.6209
      0.5301  0.0964  0.2468  0     ].

Consider now the third criterion, "land size." Comparing the sizes of two properties i and j, we obtain Δ3ij, and the piecewise linear preference function we have chosen is

p3(Δ3ij) = { … }.
d1 ≻ d3: 1 + 3w1 + w2 ≥ 3 − w1 − 2w2, or 4w1 + 3w2 ≥ 2,
d1 ≻ d4: 1 + 3w1 + w2 ≥ 5 − 5w1 − 5w2, or 8w1 + 6w2 ≥ 4,
d2 ≻ d3: 2 − 2w1 + 3w2 ≥ 3 − w1 − 2w2, or −w1 + 5w2 ≥ 1,
d2 ≻ d4: 2 − 2w1 + 3w2 ≥ 5 − 5w1 − 5w2, or 3w1 + 8w2 ≥ 3, and
d3 ≻ d4: 3 − w1 − 2w2 ≥ 5 − 5w1 − 5w2, or 4w1 + 3w2 ≥ 2.
Figure 12.2 shows these preferences in the weight space, where the preference di ≻ dk is shown next to the hyperplane in the appropriate halfspace.


Fig. 12.2 Weight space in the numerical example

We define the domain Di of a decision di as the set of weight combinations for which decision di has a better (in maximization problems: higher) aggregate attribute than all other decisions. Figure 12.2 shows the domains of the individual decisions. Being a convex polygon, each domain can be described as the convex hull of its extreme points. The extreme points of the individual domains are:
D1: (1, 0), (0.4286, 0.5714), (0.3043, 0.2609), and (0.5, 0),
D2: (0, 1), (0, 0.375), (0.3043, 0.2609), and (0.4286, 0.5714),
D3: the empty set, and
D4: (0, 0), (0.5, 0), (0.3043, 0.2609), and (0, 0.375).
In order to determine the areas of the individual domains, we need to perform triangulations. This means that the quadrangles that describe the domains (in general, they could be any other convex polygons) have to be subdivided into triangles that are mutually exclusive and collectively exhaustive. For instance, one triangulation of the domain D1 consists of the triangle with extreme points (1, 0), (0.4286, 0.5714), and (0.3043, 0.2609), as well as the triangle with extreme points (1, 0), (0.3043, 0.2609), and (0.5, 0). The areas of the two triangles are then determined by the well-known formula

area(D1) = ½ |det [1       0       1
                   0.4286  0.5714  1
                   0.3043  0.2609  1]| + ½ |det [1       0       1
                                                 0.3043  0.2609  1
                                                 0.5     0       1]|
         = ½(0.2484) + ½(0.13045) = 0.189425.
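The triangle-by-triangle determinant computation is equivalent to applying the shoelace formula once to each domain polygon. A sketch using the extreme points listed above (vertex order as given; the empty domain D3 is omitted):

```python
def polygon_area(vertices):
    """Shoelace formula: area of a simple polygon whose vertices are
    given in boundary order; equal to the sum of the triangle
    determinants of any fan triangulation."""
    s = sum(x1 * y2 - x2 * y1
            for (x1, y1), (x2, y2) in zip(vertices, vertices[1:] + vertices[:1]))
    return abs(s) / 2

domains = {
    "D1": [(1, 0), (0.4286, 0.5714), (0.3043, 0.2609), (0.5, 0)],
    "D2": [(0, 1), (0, 0.375), (0.3043, 0.2609), (0.4286, 0.5714)],
    "D4": [(0, 0), (0.5, 0), (0.3043, 0.2609), (0, 0.375)],
}
areas = {name: polygon_area(v) for name, v in domains.items()}
# areas: D1 ≈ 0.1894, D2 ≈ 0.1883, D4 ≈ 0.1223; they sum to 0.5
```

The three areas sum to 0.5, the area of the whole weight triangle, which provides a quick consistency check on the extreme points.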


Here, the first two columns include the coordinates of the three extreme points, while the last column is a vector of ones. The areas of the domains D2 and D4 are calculated in similar fashion, and we obtain 0.1883 and 0.122275, respectively. Clearly, the sum of the areas of the domains equals 0.5, the area of the triangle (0, 0), (1, 0), (0, 1). The results demonstrate that the decisions d1 and d2 are best for the largest number of weight combinations; their domains are significantly larger than the domain of d4. If this criterion were to be used, the decision maker would choose either decision d1 or d2. Notice that with this criterion it is no longer necessary to know the exact values of the weights. However, there are a number of somewhat troublesome assumptions in this criterion. The strongest such assumption is that all weight combinations are equally likely. Inspection of the domains in Fig. 12.2 shows that each of the three domains with positive area includes one point at which at least one of the weights is zero: the point (1, 0) for decision d1 (which means w1 = 1, w2 = w3 = 0), the point (0, 1) for decision d2 (meaning w2 = 1 and w1 = w3 = 0), and the point (0, 0) for decision d4 (which means w3 = 1 and w1 = w2 = 0). None of these points is meaningful, as a criterion with zero weight might as well not have been included in the model in the first place. Further elicitation concerning the preference structure of the decision maker may, for instance, reveal that none of the weights is smaller than a lower bound, e.g., w1 ≥ 0.1, w2 ≥ 0.2, and w3 ≥ 0.15. Graphically, this moves the ordinate to the right, the abscissa up, and the line through (1, 0) and (0, 1) towards the southwest, respectively. Clearly, this will change the areas of the domains, but it makes the results more meaningful.
One of the problems with domain criteria is their dimensionality. It is apparent that each criterion requires one dimension. While this presents no problems as far as computations are concerned, higher-dimensional spaces cannot be plotted, thus making it more difficult for the decision maker to see what is happening. This problem is, of course, shared with any general optimization procedure that also does not allow decision makers to visualize the process. As a matter of fact, in our example above we have chosen to avoid a three-dimensional graph by using a normalization. However, this procedure already distorts the graph and the relations between the sizes of the domains (Schneller and Sphicas 1983). In order to explain the general concept, consider the simple four-decision, two-criterion matrix

C = [10  2
      6  8
      5  9
      7  3].

We first define weights w1 and w2 for the two criteria, respectively, which are then normalized, so that w1 + w2 = 1. Furthermore, defining the aggregate values of the decisions di as v(di), i = 1, . . ., 4, the aggregation via the simple weighted sum is calculated as v(di) = Σj cijwj ∀ i. In our example, we obtain v(d1) = 2 + 8w1,


Fig. 12.3 (a) Sizes of domains in the additive model. (b) Sizes of domains in the multiplicative model

v(d2) = 8 − 2w1, v(d3) = 9 − 4w1, and v(d4) = 3 + 4w1. The sizes of the domains are shown in Fig. 12.3a, where the value function of the first decision is shown in red, that of the second decision in purple, that of the third decision in teal, and the function associated with the fourth decision in gray. The upper envelope of these functions shows which decision has the highest value v(di). In our example, v(d1) is largest for weights w1 ∈ [.6, 1], v(d2) is largest for weights w1 ∈ [.5, .6], and v(d3) is largest for weights w1 ∈ [0, .5]. In other words, for small values of the first weight, we will choose the third decision, for a small range of intermediate values of w1, we will choose the second decision, and finally, for high values of w1, the first decision comes out on top. The advantage of this analysis is that exact weight values do not have to be known. Clearly, the simple weighted average method is not the only aggregation technique. One obvious alternative is the simple weighted multiplicative method. The multiplicative value function determines aggregate scores v′(di) = Πj cij^wj, which, for

our purposes, we again normalize so that w1 + w2 = 1. The resulting value functions are then v′(d1) = 10^w1 · 2^(1−w1), shown in red, v′(d2) = 6^w1 · 8^(1−w1), shown in purple, v′(d3) = 5^w1 · 9^(1−w1), shown in teal, and v′(d4) = 7^w1 · 3^(1−w1), shown in gray. Again, we determine the upper envelope. This leads to the following results:


Fig. 12.4 Axes in a spider plot


For w1 ∈ [.7307, 1], decision d1 has the highest value; for w1 ∈ [.3925, .7307], decision d2 is best; and for w1 ∈ [0, .3925], decision d3 comes out on top. Note that decision d4, while not dominated, is never best. In this case, the weight space is just a [0, 1] line segment associated with w1, which is separated into the aforementioned segments. Figure 12.3b shows the results. It is apparent that while the numbers of the two aggregation methods are somewhat different, they do agree on the general result (i.e., high values of w1 favor the first decision d1, intermediate values lead to d2 being best, and low values of w1 suggest we choose decision d3).
Another possibility is to use spider diagrams. A spider plot (see also Chap. 3 in this volume) is a diagram that has m axes, one for each criterion, between which there are angles of 360°/m each. That way, it is possible to display a problem with virtually any number of criteria. For each criterion, we can then plot the attribute or its utility on the appropriate axis, resulting in points whose convex hull delineates a region, not unlike the domains in weight spaces. We can then measure the quality of the decision associated with it, for instance as the area of the polygon bounded by the convex hull that belongs to a decision. One pertinent problem occurs if at least one of the regions is not of full dimensionality, as is the case in our example. It will then have an area of zero. Before discussing this case any further, we first want to demonstrate how a spider plot looks and works in our example. Given that we have three criteria, there will be three axes, separated by 120°. Suppose that the first axis points in a northeasterly direction 30° above the positive abscissa, the second axis points in a northwesterly direction 30° above the negative abscissa, and the third axis points straight down. We now need the actual coordinates of the points on those axes. Consider the first axis.
Suppose that the attribute to be plotted is d, which has to appear at coordinates (a, b). In Fig. 12.4, the angle α = 30°, so that sin(α) = ½ = b/d, or simply b = ½d. Furthermore, we know that a² + b² = d², so that a = ½√3·d. The coordinates for points on the second axis are calculated similarly, except that the first coordinate is negative, while points (a, b) on the third axis have a = 0 and b equal to the negative of the third attribute of the decision under consideration. The maximal area possible in this case occurs when all three utilities of a decision are 1, meaning that the coordinates are (½, ½√3), (−½, ½√3), and (0, −1). The area bordered by the convex hull is then ½ + ¼√3 = 0.9330. We can divide by this term in case we want to normalize the results.
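The axis mapping can be sketched directly from the stated maximal-area extreme points, which act as unit vectors for the three axes; the polygon area then follows from the shoelace formula. (The assumption here is that the three utilities are plotted in axis order and joined into a triangle, as in Fig. 12.8.)

```python
import math

# Axis unit vectors implied by the maximal-area extreme points
# (1/2, sqrt(3)/2), (-1/2, sqrt(3)/2), and (0, -1) stated in the text.
AXES = [(0.5, math.sqrt(3) / 2), (-0.5, math.sqrt(3) / 2), (0.0, -1.0)]

def spider_points(utilities):
    """Place utility u_k at distance u_k along axis k."""
    return [(u * ax, u * ay) for u, (ax, ay) in zip(utilities, AXES)]

def polygon_area(vertices):
    """Shoelace formula for vertices in boundary order."""
    s = sum(x1 * y2 - x2 * y1
            for (x1, y1), (x2, y2) in zip(vertices, vertices[1:] + vertices[:1]))
    return abs(s) / 2

max_area = polygon_area(spider_points([1, 1, 1]))  # 1/2 + sqrt(3)/4, about 0.9330
```

Dividing any decision's region area by `max_area` yields the normalized score mentioned in the text.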


In our example, the regions of decisions d1, d2, d3, and d4, defined as R1, . . ., R4, are as follows:
R1: (1.7321, 2), (−0.8660, 1), (0, −1),
R2: (0, 0), (−2.1651, 2.5), (0, −2),
R3: (0.8660, 1), (−0.4330, 0.5), (0, −3), and
R4: (0, 0), (0, 0), (0, −5).
The areas of the regions are then 6.0222, 4.3302, and 4.763 (region R4 has an empty interior), so that a decision maker facing this maximization problem would choose decision d1. So much for the method that will be employed in this chapter.
Recall that in our specific application, we consider the following three criteria:
1. Minimization of road distances,
2. Minimization of property cost, and
3. Maximization of the reduction of unemployment.
First, consider criterion (1). In order to be able to employ a discrete MADM technique, we first need to determine the shortest road distances between the existing and the proposed centers and the customer sites. Table 12.2 shows these distances. The customers are shown at the heads of the columns, the existing centers (in italics) are shown as the first five rows, and the proposed centers are shown in the remaining six rows. In each column, the distance to the closest existing facility is shown in boldface. Furthermore, for each of the candidate locations, a star next to a sum of distances indicates that it is lower than the total sum of distances between customers and their closest existing facility. Consider now criterion (2). As explained above, the cost of the lots from real estate websites plus the cost of the building yields the cost of constructing the fulfillment center shown in Table 12.1. Finally, consider criterion (3). The relations (1) and (2) are used to calculate the results for the new center and are again shown in Table 12.1. The pertinent results are summarized in the attribute table, Table 12.3. Note that the three criteria are of the min, min, and max type, respectively.
We now convert these attributes to utilities by using the relations

u = (actual − min)/(max − min) for maximization criteria and
u = (max − actual)/(max − min) for minimization criteria.

This process results in the utility matrix

U = [0.9880  0.3678  0.4130
     1       0.4886  0.0522
     0.4845  0       0.9348
     0.1764  0.7176  1
     0.0686  0.7046  0.6261
     0       1       0     ].
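The utility matrix U can be reproduced from Table 12.3 with exactly these min–max relations. A sketch (attribute values from Table 12.3; criteria types min, min, max):

```python
# Attribute values from Table 12.3; criteria are (min, min, max).
attrs = {
    "Saragosa": (349486, 16115, 1.70),
    "Lubbock":  (346281, 14621, 0.87),
    "Abilene":  (483949, 20662, 2.90),
    "Uvalde":   (566245, 11790, 3.05),
    "Victoria": (595042, 11950, 2.19),
    "Waco":     (613354,  8298, 0.75),
}
KINDS = ("min", "min", "max")

def to_utilities(attrs, kinds):
    """Min-max normalization: (actual - min)/(max - min) for 'max'
    criteria and (max - actual)/(max - min) for 'min' criteria."""
    cols = list(zip(*attrs.values()))
    lo = [min(c) for c in cols]
    hi = [max(c) for c in cols]
    return {
        name: tuple((a - l) / (h - l) if k == "max" else (h - a) / (h - l)
                    for a, l, h, k in zip(row, lo, hi, kinds))
        for name, row in attrs.items()
    }

U = to_utilities(attrs, KINDS)
```

Each row of the resulting dictionary matches the corresponding row of the matrix U above to four decimals.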

Table 12.2 Populations of customer locations (in 000) and fulfillment center-to-customer distances (miles)

Center        Houston  San Antonio  Dallas  Austin  Fort Worth  El Paso  Corpus Christi  Laredo  Lubbock  Amarillo
(Pop in 000)  2320     1532         1345    964     895         683      327             262     259      200
Houston       0        197          239     162     262         746      210             315     516      600
San Antonio   197      0            274     80      268         551      143             157     386      510
Dallas        239      274          0       195     32          635      411             445     346      366
Austin        162      80           219     0       190         576      202             236     373      496
Fort Worth    262      268          32      190     0           604      381             423     314      339
Saragosa      554      362          456     386     425         195      499             414     234      357
Lubbock       516      386          346     373     314         342      529             503     0        124
Abilene       351      246          181     220     150         439      389             376     163      270
Uvalde        245      85           368     162     339         492      196             131     374      497
Victoria      207      117          295     126     289         665      87              187     501      624
Waco          184      181          97      102     91          615      300             337     348      429


Table 12.3 The attribute table of the application

Decisions  Total distance  Cost of new center  Unemployment reduction
Saragosa   349,486         $16,115             1.7
Lubbock    346,281         $14,621             0.87
Abilene    483,949         $20,662             2.9
Uvalde     566,245         $11,790             3.05
Victoria   595,042         $11,950             2.19
Waco       613,354         $8,298              0.75
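The "Total distance" column of Table 12.3 can be reproduced from Table 12.2: for each candidate, each customer's population is multiplied by the smaller of its distance to the closest existing center and its distance to the candidate. A sketch (data transcribed from Table 12.2; populations in 000):

```python
# Populations (in 000) from Table 12.2, in column order.
pops = {"Houston": 2320, "San Antonio": 1532, "Dallas": 1345, "Austin": 964,
        "Fort Worth": 895, "El Paso": 683, "Corpus Christi": 327,
        "Laredo": 262, "Lubbock": 259, "Amarillo": 200}

# Distance from each customer to its closest EXISTING center (the column
# minima over the first five rows of Table 12.2).
closest_existing = {"Houston": 0, "San Antonio": 0, "Dallas": 0, "Austin": 0,
                    "Fort Worth": 0, "El Paso": 551, "Corpus Christi": 143,
                    "Laredo": 157, "Lubbock": 314, "Amarillo": 339}

# Candidate rows of Table 12.2 (distances to customers, same order as pops).
candidates = {
    "Saragosa": [554, 362, 456, 386, 425, 195, 499, 414, 234, 357],
    "Lubbock":  [516, 386, 346, 373, 314, 342, 529, 503,   0, 124],
    "Abilene":  [351, 246, 181, 220, 150, 439, 389, 376, 163, 270],
    "Uvalde":   [245,  85, 368, 162, 339, 492, 196, 131, 374, 497],
    "Victoria": [207, 117, 295, 126, 289, 665,  87, 187, 501, 624],
    "Waco":     [184, 181,  97, 102,  91, 615, 300, 337, 348, 429],
}

def total_distance(cand):
    """Criterion 1: population-weighted distance to the closest center
    once the candidate is added to the five existing ones."""
    return sum(w * min(closest_existing[c], d)
               for (c, w), d in zip(pops.items(), candidates[cand]))

totals = {c: total_distance(c) for c in candidates}
```

The five metro customers co-located with existing centers contribute zero, so only the El Paso, Corpus Christi, Laredo, Lubbock, and Amarillo columns determine the differences between candidates.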

It is immediately apparent that decision d4 dominates decision d5, i.e., the location Uvalde dominates the location Victoria. Hence, from now on, we will delete decision d5. In the simple additive weighting procedure, we determine the following aggregated utilities:
u1 = .9880w1 + .3678w2 + .4130w3 for decision d1,
u2 = 1w1 + .4886w2 + .0522w3 for decision d2,
u3 = .4845w1 + 0w2 + .9348w3 for decision d3,
u4 = .1764w1 + .7176w2 + 1w3 for decision d4, and
u6 = 0w1 + 1w2 + 0w3 for decision d6.
Using the normalization w1 + w2 + w3 = 1, we can eliminate w3, resulting in
u1 = .4130 + .5750w1 − .0452w2,
u2 = .0522 + .9478w1 + .4364w2,
u3 = .9348 − .4503w1 − .9348w2,
u4 = 1 − .8236w1 − .2824w2, and
u6 = w2.
At this point, it is possible to delineate the domains of the remaining decisions that result from requiring that one decision be better than all others. Simply stated, the domain Di of a decision di is the set of all weight combinations for which the utility ui is higher than the utility of any of the other decisions. For instance, the first domain D1 is obtained by requiring that u1 ≥ u2, u1 ≥ u3, u1 ≥ u4, and u1 ≥ u6, respectively. This results in the inequalities .3728w1 + .4816w2 ≤ .3608, 1.0253w1 + .8896w2 ≥ .5218, 1.3986w1 + .2372w2 ≥ .5870, and −.5750w1 + 1.0452w2 ≤ .4130. The domains of the other decisions are computed similarly. The resulting tessellation of the weight space is shown in Fig. 12.5. The next task involves the areas of the domains. For that purpose, we need the corner points whose convex hulls are the domains. They are as follows (each in counterclockwise order):


Fig. 12.5 Domains of the decisions in the application

For D1: (0.9678, 0), (0.5089, 0), (0.3980, 0.1278), and (0.3368, 0.4884),
for D2: (1, 0), (0.9678, 0), (0.3368, 0.4884), (0.2957, 0.5899), and (0.3384, 0.6616),
for D3: (0.5089, 0), (0.1747, 0), and (0.3980, 0.1278),
for D4: (0.1747, 0), (0, 0), (0, 0.7798), (0.2957, 0.5899), (0.3368, 0.4884), and (0.3980, 0.1278), and
for D6: (0.3384, 0.6616), (0.2957, 0.5899), (0, 0.7798), and (0, 1).
For each of these domains, we now have to perform a triangulation, which will allow us to compute the area of the domains. For instance, domain D1 can be subdivided into the triangles I: (0.9678, 0), (0.5089, 0), (0.3980, 0.1278), and II: (0.9678, 0), (0.3980, 0.1278), (0.3368, 0.4884). Their areas are computed as

area(D1) = ½ |det [0.9678  0       1
                   0.5089  0       1
                   0.3980  0.1278  1]| + ½ |det [0.9678  0       1
                                                 0.3980  0.1278  1
                                                 0.3368  0.4884  1]|
         = ½(0.0586 + 0.1976) = 0.1281,

and similarly for the other domains. The areas of the domains of the five nondominated decisions are as follows:
Area (D1) = .1281
Area (D2) = .0693
Area (D3) = .0214


Area (D4) = .2292
Area (D6) = .0519
The sum is 0.5, and the resulting ranking is then d4 ≻ d1 ≻ d2 ≻ d6 ≻ d3, which indicates a clear preference for d4, i.e., the location Uvalde. Looking at the map, the location is a compromise between locations in east and west Texas. At this point, various types of sensitivity analyses are possible. We can distinguish three types that are of particular interest to decision makers:
1. There are limitations on the individual weights.
2. There are minimum requirements regarding the utilities of the individual decisions (i.e., target values).
3. Domains are redefined, so that the domain of a decision includes all weight combinations for which this decision is no worse than x% off the best decision.
In the first sensitivity analysis (1), we may superimpose limits on the weights associated with the criteria. In our example, we could, for instance, require that w1 ≥ .15, w2 ≥ .1, and w3 ≥ .2. The addition of a positive lower bound on w1 means shifting the ordinate to the right, increasing the lower bound of w2 to a positive value means moving the abscissa upwards, and increasing the lower bound of w3 to a positive value means shifting the line through (1, 0) and (0, 1) parallel in a southwesterly direction. As an example, in our application in Fig. 12.5, a (reasonably small) increase of the lower bound on w1 would change the sizes of the areas of domains D4 and D6, while leaving the areas of the remaining domains unchanged. This will, however, increase the relative importance of the areas of the decisions d1, d2, and d3. An increase of the lower bound of w2 would change the areas of the domains D1, D2, D3, and D4, leaving the domain of decision d6 unchanged. Finally, increasing the lower bound of w3 will change the sizes of the domains D2 and D6, while leaving the other domains (and with it the largest domain D4) unchanged.
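These areas can be checked numerically by sampling the weight triangle and counting which decision attains the highest aggregated utility at each sample point. A sketch (utility expressions as derived above; the grid resolution is an arbitrary choice, and the factor 0.5 is the area of the full triangle):

```python
# Aggregated utilities after eliminating w3 = 1 - w1 - w2 (see above).
def agg(w1, w2):
    return {
        "d1": 0.4130 + 0.5750 * w1 - 0.0452 * w2,
        "d2": 0.0522 + 0.9478 * w1 + 0.4364 * w2,
        "d3": 0.9348 - 0.4503 * w1 - 0.9348 * w2,
        "d4": 1.0000 - 0.8236 * w1 - 0.2824 * w2,
        "d6": w2,
    }

n = 400  # grid resolution over the triangle w1, w2 >= 0, w1 + w2 <= 1
wins = {d: 0 for d in ("d1", "d2", "d3", "d4", "d6")}
total = 0
for i in range(n + 1):
    for j in range(n + 1 - i):
        u = agg(i / n, j / n)
        wins[max(u, key=u.get)] += 1
        total += 1
# Scale the win fractions by 0.5, the area of the weight-space triangle.
areas = {d: 0.5 * c / total for d, c in wins.items()}
```

The estimates agree with the exact polygon areas and reproduce the ranking d4 ≻ d1 ≻ d2 ≻ d6 ≻ d3.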
In the second sensitivity analysis (2), we may only be interested in weight combinations that result in utilities of at least a given target value. As an example, consider a requirement that demands a utility of at least 1.0. There are only very few points that satisfy this requirement: decision d1 can never achieve it, decision d2 achieves it only at point (1, 0), decision d3 cannot achieve it, decision d4 achieves it only at (0, 0), and decision d6 achieves it only at point (0, 1). Relaxing the requirement to 0.8, the weight combinations that achieve it are shown in Fig. 12.6, where the decisions that satisfy the requirement are indicated in the respective areas. It is apparent that only "extreme" weight combinations satisfy the requirement. For instance, if wi ≥ 0.2 ∀ i, no weight combination can achieve a utility of 0.8. It is also apparent from the figure that the union of the sets in which at least one decision satisfies the utility requirement may be a proper subset of the weight space


Fig. 12.6 The application with lower bounds on the utilities of 0.8

(as is the case in our application), and that for some weight combinations multiple decisions satisfy the utility requirement. Finally, consider the third sensitivity analysis (3). Rather than having target values, the decision maker here is interested in being no more than a certain percentage off the optimal solution. The reasoning behind it is somewhat reminiscent of regret criteria. The area in yellow in Fig. 12.7 indicates all weight combinations for which decision d1 is no more than 10% off the best solution.
Consider now the use of spider plots. Figure 12.8 shows the plots for decisions d1, d2, d3, and d4 (decision d6 is not shown, as it has an empty region). Calculating the sizes of the regions using the same determinant-based formulas as before, we obtain Area (R1) = .4, Area (R2) = .2452, Area (R3) = .1961, and Area (R4) = .4420. This leads to a ranking of d4 ≻ d1 ≻ d2 ≻ d3 ≻ d6. Recall that the ranking based on the domains was d4 ≻ d1 ≻ d2 ≻ d6 ≻ d3. We notice the strong similarities, most importantly the same ranking of the first three alternatives. That should provide some confidence in the decision based on this criterion.
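The yellow region of sensitivity analysis (3) can be approximated by the same sampling idea: keep each weight combination at which d1's utility is at least 90% of the best utility there. A sketch (utility expressions as in the application; the threshold 0.9 encodes "10% off"):

```python
# Aggregated utilities after eliminating w3 = 1 - w1 - w2 (see above).
def agg(w1, w2):
    return {
        "d1": 0.4130 + 0.5750 * w1 - 0.0452 * w2,
        "d2": 0.0522 + 0.9478 * w1 + 0.4364 * w2,
        "d3": 0.9348 - 0.4503 * w1 - 0.9348 * w2,
        "d4": 1.0000 - 0.8236 * w1 - 0.2824 * w2,
        "d6": w2,
    }

n = 200
hits, total = 0, 0
for i in range(n + 1):
    for j in range(n + 1 - i):
        u = agg(i / n, j / n)
        if u["d1"] >= 0.9 * max(u.values()):  # within 10% of the best
            hits += 1
        total += 1
area_within_10pct = 0.5 * hits / total  # approximates the yellow region of Fig. 12.7
```

By construction, this region contains the entire domain D1, so its area is at least 0.1281.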


Fig. 12.7 Area (in yellow) of all weight combinations for which decision d1 is no more than 10% off of the optimal solution

Fig. 12.8 Spider plot for the application

References


Buildingsguide, What does it cost to build a warehouse? (2021), https://www.buildingsguide.com/blog/planning-steel-warehouse-building/. Accessed 30 Sept 2022
S. Chevalier, Global retail e-commerce sales 2014–2026 (2022), https://www.statista.com/statistics/379046/worldwide-retail-e-commerce-sales/. Accessed 30 Sept 2022
D. Coppola, E-commerce in the United States – statistics & facts (2022), https://www.statista.com/topics/2443/us-ecommerce/#dossierKeyfigures. Accessed 30 Sept 2022
Easyship, Top 10 ecommerce markets you should be targeting (2021), https://www.easyship.com/blog/10-ecommerce-destinations-to-target. Accessed 30 Sept 2022
ecommerce CEO, Fulfillment center: what is it and how it works? (2021), https://www.ecommerceceo.com/learn/what-is-a-fulfillment-center/. Accessed 30 Sept 2022
H.A. Eiselt, A. Langley, Some extensions of domain criteria in decision making under uncertainty. Decis Sci 21(1), 138–153 (1990)
H.A. Eiselt, G. Laporte, The use of domains in multicriteria decision making. Eur J Oper Res 61(3), 292–298 (1992)
J.D. Herman, P.M. Reed, H.B. Zeff, G.W. Characklis, How should robustness be defined for water systems planning under change? J Water Resour Plan Manag 141(10), 04015012-1–04015012-14 (2015)
Homefacts, Texas unemployment rate report (2022), https://www.homefacts.com/unemployment/Texas.html. Accessed 30 Sept 2022
IndexMundi, Texas average household size by county (undated), https://www.indexmundi.com/facts/united-states/quick-facts/texas/average-household-size#map. Accessed 30 Sept 2022
Landsearch (2021), https://www.landsearch.com/properties. Accessed 30 Sept 2022
Map of Texas (2021), https://upload.wikimedia.org/wikipedia/commons/5/53/Map_of_Texas_NA.png. Accessed 30 Sept 2022
C. McPhail, H.R. Maier, J.H. Kwakkel, M. Giuliani, A. Castelletti, S. Westra, Robustness metrics: how are they calculated, when should they be used and why do they give different results? Earth's Future 6, 169–191 (2018)
D. Pahwa, This old marketing tool will give you an explosive advantage (2014), https://medium.com/@pahwadivya/the-history-of-the-catalog-b5334841e941. Accessed 30 Sept 2022
Publitas, A visual history of the catalog (undated), https://www.publitas.com/blog/a-visual-history-of-the-catalog/. Accessed 30 Sept 2022
G.O. Schneller, G.P. Sphicas, Decision making under uncertainty: Starr's domain criterion. Theory Decis 15, 321–336 (1983)
Sears Archives, Chronology of the Sears Catalog (2021), http://www.searsarchives.com/catalogs/chronology.htm. Accessed 30 Sept 2022
M.K. Starr, Product Design and Decision Theory (Prentice-Hall, Englewood Cliffs, NJ, 1962)
M.K. Starr, A discussion of some normative criteria for decision-making under uncertainty. Ind Manag Rev 8, 71–78 (1966)
World Population Review, Population of counties in Texas (2021), https://worldpopulationreview.com/us-counties/states/tx. Accessed 30 Sept 2022

Chapter 13

Locating Fast Food Restaurants with the Analytic Hierarchy Process

Fast food restaurants are typically thought of as having originated in the early years of the twentieth century in the United States. However, archaeologists have discovered that fast food was already known (and probably appreciated) in ancient Italy, more specifically in Pompeii (see, e.g., Cascone 2021; BBC News 2020; NPR 2020). In modern times, if we follow Cyprus (2022), the "White Castle" in Wichita, Kansas, was the first fast food restaurant to open its doors in 1921. Other sources, such as Oldest.org (2020), credit Roy W. Allen and Frank Wright with the honor. These two entrepreneurs started with a lemonade stand in Lodi, California, which morphed into the A & W chain we know today. Based on the number of store locations, Subway, McDonald's, and Starbucks are leading this sector (in that order; the ranks of Subway and Starbucks are reversed if revenue is used as the criterion of size; see, e.g., BizVibe undated). As a matter of fact, fast food, while not a uniquely U.S. American phenomenon, is overwhelmingly concentrated in the United States. This is demonstrated vividly by the fact that, again according to the number of locations, 67 of the 100 largest chains are U.S. American (Statistics & Data 2022). The continuing popularity of fast-food outlets can be seen in long takeout lines; see, e.g., Recipes-Online (2020). Given that many employees work quite far from their own homes, "eating out" is almost a necessity, which has further fueled the growth of fast-food outlets. Their design favors easy access, proximity to places in which people congregate (i.e., work, learn, or shop), and high visibility to passers-by. As such, fast-food outlets tend to cluster, as shown, for instance, on the map of Fredericton, New Brunswick, Canada, in Fig. 13.1. Since fast-food outlets have been located for many years, it is beneficial to investigate the processes that have been used in locating them and what criteria have been applied.
Ungerleider (2014) emphasizes the benefits of geographical information systems in the process. Others have delineated the criteria that are important. Kiyak (2016) characterizes the visibility of the store, demography, parking, costs, the property condition, and the general neighborhood of the site as important for the determination of the quality of the site.

© Springer Nature Switzerland AG 2023
H. A. Eiselt et al., Multicriteria Location Analysis, International Series in Operations Research & Management Science 338, https://doi.org/10.1007/978-3-031-23876-5_13

Fig. 13.1 Clustering of fast-food outlets in Fredericton, NB (Map data ©2021 Google)

Larkin (2017) also mentions visibility, parking, and costs, but adds crime rates and safety as criteria. Webstaurant Store (2018) reiterates the importance of visibility, demographics, safety, and costs, but adds labor costs, proximity to suppliers, and the site's potential for future growth. Mealey (2018) likewise emphasizes visibility, demographics, costs, and parking, but also points to competition in the area: having a restaurant with the same basic theme in the neighborhood could increase the competition, but "successful businesses attract other successful businesses." Similar results have been found by Marianov et al. (2019). Finally, EHL Insights (2022) adds parking for easy pickup, competition, and zoning regulations to the list.

For our purposes, we will apply a two-step procedure that is typical of most operations research processes: in the first phase, determine feasible solutions (or, as is the case here, delete solutions that are not feasible); then optimize in the second phase. In the first step, we apply a yes/no filter (a "binary machine," to coin a more fanciful phrase) that considers the constraints: sites with insufficient parking, sites in areas with significant crime rates, buildings of insufficient size or with layouts that do not fit our purpose and are not easily modifiable (assuming that we purchase a site rather than just a lot on which we build), and areas that are not zoned for our purpose (and that cannot easily be rezoned) will all be deleted before we apply a solution technique. This is most easily accomplished by the use of geographical information systems but can be done by other means as well.

The second step will consider a number of criteria that are mentioned by many of the planners cited above. In particular, we have chosen:

• The expected number of patrons, which is subdivided into one part that comprises planned purchases and another that consists of unplanned purchases,
• The cost of the site, and
• The safety of the area.

The traffic count is relevant solely in order to describe the number of people who are exposed to a restaurant by passing by it. These are potential casual patrons, who may make an unplanned visit and purchase at a restaurant. A traffic count as a measure of the degree of exposure of passersby to the restaurant captures the same effect that a billboard has on passing drivers (see Chap. 14 of this volume). The distance to the places in which people congregate attempts to measure a more difficult process. While the traffic count addresses casual passersby who, more or less, follow a random instinct to patronize or not patronize the store, people who congregate in nearby places may make a conscious decision to patronize a particular fast-food restaurant. For this purpose, we need to develop a mechanism that attempts to model this decision making. In order to do so, we look at the literature on discrete choice theory, i.e., the theory that deals with how individuals choose among a finite number of alternatives. Discrete choice models describe scenarios, and potential solution techniques, in which a decision maker chooses between a finite number of alternatives; see, e.g., Anderson et al. (1992) in the context of product differentiation and Train (2009) in general. A large variety of methods has been described in the literature, e.g., utility functions, logit models, and many others.
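Before turning to these choice models, the phase-1 "binary machine" described above can be sketched as a simple feasibility filter. All site records and thresholds below are hypothetical illustrations, not data from the case study:

```python
# Phase-1 "binary machine": keep only candidate sites that pass every
# yes/no constraint. All site records and thresholds are hypothetical.
candidates = [
    {"id": 1, "parking_spots": 25, "crime_index": 0.3, "sqft": 1200, "zoned": True},
    {"id": 2, "parking_spots": 8,  "crime_index": 0.2, "sqft": 1500, "zoned": True},
    {"id": 3, "parking_spots": 30, "crime_index": 0.9, "sqft": 900,  "zoned": False},
]

def feasible(site):
    # each test mirrors one of the constraints named in the text
    return (site["parking_spots"] >= 10      # sufficient parking
            and site["crime_index"] <= 0.5   # acceptable crime rate
            and site["sqft"] >= 1000         # sufficient building size
            and site["zoned"])               # zoned (or easily rezonable)

survivors = [s["id"] for s in candidates if feasible(s)]
print(survivors)  # → [1]
```

Only the surviving sites enter the phase-2 multicriteria comparison.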
Almost a century ago, Reilly (1931) described what are now known as gravity models. Huff (1964) applied a probabilistic version of the gravity principle in the retail context. It can most easily be described by a utility function w/d^λ, where w denotes a weight (sometimes referred to as a base attraction) that expresses the attractiveness of a store, measured, e.g., by its floor space or similar parameters, d denotes the distance between a customer and the store in question, and λ is a parameter that measures distance decay. The expression w/d^λ then measures the degree to which a customer at a distance of d is attracted to the store in question. The market share of the store under investigation, estimated with this measure, is then calculated as the attraction of this facility in relation to the sum of the attractions of all facilities included in the model. The use of this measure requires a number of assumptions, among them a rational customer and full information, which is obtained by a "fact-finding mission" in which the customer visits all potential stores (online or in person) and compares prices and attributes. Spatial interaction models may help to overcome this deficiency. Already in the mid-1980s, Fotheringham (1985) argued that a facility location should also be seen in relation to other facilities. Three years later, he concluded in a piece about choice theory ". . . that individuals initially evaluate clusters of alternatives and then only evaluate alternatives within a chosen cluster" (Fotheringham 1988).
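A minimal sketch of Huff's rule may help fix the idea. The weights, distances, and λ = 2 below are made-up illustrations, not data from the chapter:

```python
# Huff-style attraction w/d^lambda and the resulting market shares for
# one customer facing three stores; all numbers are made up.
w = [2000, 1500, 3000]     # base attractions, e.g., floor space
d = [1.0, 2.0, 4.0]        # customer-to-store distances
lam = 2.0                  # distance-decay parameter

attractions = [wi / di**lam for wi, di in zip(w, d)]
shares = [a / sum(attractions) for a in attractions]
print([round(s, 3) for s in shares])  # → [0.78, 0.146, 0.073]
```

Note how strongly distance decay dominates: the largest store (w = 3000) captures the smallest share because it is the farthest away.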


For our purposes, we consider the following two-stage decision-making process. Again, we assume a rational decision maker with complete information regarding the facilities and the distances to and between them. In a preprocessing phase, the decision maker determines clusters of facilities, i.e., areas in which facilities are close to each other. In reality, this could be a commercial lot with fast-food outlets, a downtown area with restaurants, or a similar area. In the first decision-making phase, the decision maker then determines the attraction of each individual cluster (where some "degenerate" clusters may consist of only a single facility). This is achieved by applying the usual gravity function with the weight raised to some power γ. This parameter expresses the benefits of belonging to a cluster. In a more elaborate model, the exponent would be linked not to the facility alone, but to the facility and the cluster. The benefit of such a refinement is that it becomes possible to express a complementarity effect, i.e., the benefit of being in a group with others (by choosing a value γ > 1), as well as a competitiveness effect, i.e., the added competitive pressure from another facility in essentially the same trade that customers consider an alternative to the new facility; in that case we would choose a value γ < 1. Once a cluster has been chosen, the customer will then choose the facility with the highest attraction within that cluster. This process can be anticipated by the facility planner and incorporated into the site selection process. The price of the property is fairly straightforward, even though it remains to be decided whether an outright purchase or a lease of the facility is preferable. Safety, of course, has many faces: it ranges from the different types of crime (whose rates may differ dramatically not only between districts but between individual street segments) to possibilities of improving the situation by way of CCTV cameras, lighting, private security, etc.

13.1 Making It Work

The study area is a part of Scarborough, a suburb of Toronto, Canada, located just to the east of the city's downtown area. We are particularly interested in the area to the east of the junction of Highway 401 and Highway 2A (Kingston Road) and west of the Don Valley Parkway, as well as north of Highway 2A (Kingston Road) and south of Sheppard Avenue. In order to initialize the process, we first need to determine what properties are actually available in this area. Here, we have used the offerings of the real estate market on January 13, 2022. All information was taken from Realtor.ca (2022). The relevant data are shown in Table 13.1. Next, we determine clusters of existing fast-food restaurants; the existing restaurants were put into four clusters, named Cluster 1, Cluster 2, etc. Our next step is to determine the locations and numbers of people who are interested in patronizing restaurants in the area. In order to approximate the numbers,

Table 13.1 Available properties, their sizes and prices

#   Address                   Price     Size (sq ft)
1   1348 Kennedy Road         79,000    1000
2   4002 Sheppard Ave East    65,000    1150
3   2598 Eglinton Ave East    165,000   1782
4   3867 Lawrence Ave East    150,000   1200
5   1302 Ellesmere Rd         199,000   1050
6   3561 Sheppard Ave East    149,000   1500
7   795 Milner                180,000   1111
8   4190 Kingston             179,000   1680

Fig. 13.2 The study area in Scarborough, Ontario, Canada (a suburb of Toronto, Ontario) (Map data ©2021 Google)

we first delineate the centers of the main points at which people congregate, such as malls, office buildings, etc. We then use the size of the "place of congregation" as a proxy for the magnitude of the potential planned demand. For the amount of space (square footage) per employee, see, e.g., Metropolitan Council (2016): typical values range from about 550 sq ft in a medical clinic to 2500 in hotels, with 1500 in warehouses and between 500 and 1000 in retail. In our analysis, we will use a factor of 1000. To illustrate the process, the data delineated above are now put into the map of the area shown in Fig. 13.2. In this figure, the four black ovals denote the existing fast-food restaurant clusters. At present, the four clusters include 3, 7, 6, and 3 restaurants, respectively, i.e., a total of 19 restaurants, which are indicated by red dots on the map. The red clusters named A, B, . . ., L are demand points, i.e., places in which people aggregate, e.g., malls or offices. These are the facilities included in Table 13.2. Finally, the potential locations addressed in Table 13.1 are shown as blue dots with numbers 1, 2, . . ., 8. The ovals surrounding the supply and demand points have been estimated visually.

Table 13.2 Demand points and the square footage of the building

Point #   Coordinates               Weight/size
A         43.757593, -79.312757    66,550
B         43.755241, -79.290717    1,283,900
C         43.762469, -79.274579    1,840,000
D         43.780880, -79.281495    570,000
E         43.773837, -79.264958    1,800,000
F         43.778965, -79.243725    1,785,000
G         43.788461, -79.250116    1,035,000
H         43.759855, -79.225691    80,000
I         43.743187, -79.218865    138,000
J         43.782626, -79.204925    133,400
K         43.785452, -79.193569    434,500
L         43.768485, -79.186191    165,000
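The purchasing-power figures used later in the analysis (e.g., in Table 13.5) follow from Table 13.2 by dividing the square footage by the assumed 1000 sq ft per person. A quick check of this conversion:

```python
# Convert the Table 13.2 square footages into demand weights by dividing
# by the assumed 1000 sq ft per person and rounding to whole people.
sqft = {"A": 66550, "B": 1283900, "C": 1840000, "D": 570000,
        "E": 1800000, "F": 1785000, "G": 1035000, "H": 80000,
        "I": 138000, "J": 133400, "K": 434500, "L": 165000}

# round half up to match the published figures (Python's built-in
# round() would round 434.5 down to 434 under round-half-to-even)
demand = {k: int(v / 1000 + 0.5) for k, v in sqft.items()}
print(demand["A"], demand["K"], sum(demand.values()))  # → 67 435 9332
```

The total of 9332 matches the total demand used in Table 13.5.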

For our case study, we have identified the following criteria that are to be used in the analysis:

1. The number of customers who make an unplanned purchase at the fast-food restaurant,
2. The number of customers who may make a planned purchase,
3. The price of the property, and
4. The safety of the area.

First consider the number of individuals who make unplanned purchases. These are the people who see the store and make their purchase on the spot. Their number is best estimated by traffic counts, which we have obtained for the area via City of Toronto, Transportation Services (2018). Here, we have added the pedestrian and vehicle peak hour counts to arrive at visibility figures. In order to avoid repetition, we refer to the attribute matrix in Table 13.6, which includes these figures.

The case of planned visits to restaurants is considerably more complex. We first need to measure the distances between the centers of the demand clusters A, B, . . ., L and the restaurant cluster centers as well as the stand-alone potential new store locations. The results are shown in Table 13.3. To model the decision-making process, we assume that potential customers use the following two-stage thought process. In Stage 1, customers at an office or a mall determine where to go, i.e., which cluster to patronize. Once they have arrived at the cluster, they choose one of the facilities in that cluster in Stage 2. More specifically, each customer applies a gravity-like attraction function in Stage 1 and a different allocation function in Stage 2. In order to explain the idea, we consider a small example, in which a customer faces four clusters of restaurants, viz., A, B, C, and D. The situation is shown in Fig. 13.3. The clusters include 0, 1, 2, and 3 existing facilities, respectively, which are shown as solid black dots; the respective distances between the customer and the clusters are also shown in the figure. A new facility may now locate in cluster A (a presently nonexistent cluster, i.e., the facility will locate by itself), in cluster B together with

Table 13.3 Distances between demand points A–L and the cluster centers and stand-alone potential new store locations

Point (coordinates)          Cluster 1  Cluster 2  Cluster 3  Cluster 4  Store 2  Store 3  Store 6  Store 7  Store 8
A (43.757822, -79.315134)       7.7        7.4       10.5       12.1       5.3     10.9      4.0     12.6     15.8
B (43.761179, -79.300798)       6.8        6.5        7.3        8.6       3.7      6.8      3.0     11.7     10.8
C (43.760999, -79.268659)       4.5        2.9        4.3        5.3       3.8      3.5      4.6      9.4      7.7
D (43.782288, -79.276843)       1.8        3.9        6.4        7.7       2.1      5.9      3.0      7.1     10.2
E (43.772277, -79.251160)       2.0        0.5        3.2        4.8       4.8      5.2      7.4      7.1      6.9
F (43.778435, -79.222252)       4.8        3.0        3.7        5.1       7.3      7.6      9.8      4.7      4.4
G (43.794815, -79.219571)       3.9        4.5        6.6        8.0       6.9      9.1      7.9      1.4      6.7
H (43.759813, -79.225327)       5.6        3.8        1.7        3.0       8.4      4.8      9.4      6.2      3.3
I (43.743221, -79.219014)       7.4        5.6        3.5        1.1      10.3      2.9     11.3      8.2      3.5
J (43.782766, -79.205034)       6.2        4.5        5.5        6.9       8.7      8.5      9.7      3.2      4.2
K (43.789372, -79.194359)       7.1        7.5        6.9        7.5       9.5      9.3     10.5      1.7      3.7
L (43.768465, -79.186169)       9.9        7.2        5.0        5.2      12.3      7.0     13.3      4.5      1.4

Column coordinates: Cluster 1 (43.789594, -79.258798), Cluster 2 (43.772273, -79.251556), Cluster 3 (43.755767, -79.245156), Cluster 4 (43.740281, -79.232069), Store 2 (43.783335, -79.288326), Store 3 (43.735686, -79.253254), Store 6 (43.780657, -79.299889), Store 7 (43.798117, -79.203677), Store 8 (43.759402, -79.197328)

Note that the coordinates in Table 13.3 are not the same as those in Table 13.2. The reason is that while Table 13.2 has planned demand concentrated at offices and malls, this table includes unplanned demand at street corners. The number of demand points is the same, though.


Fig. 13.3 Example for gravity function with clusters

one other facility, or in cluster C together with two other facilities. Cluster D presently comprises three facilities, but has no space for the newcomer. The possible sites for the new facility are shown as white circles in the clusters. Here, we use a modified gravity function of the type

(attraction of a cluster) = (sum of the base attractions of all members of the cluster)^γ / (distance to the closest point in the cluster)^λ

with λ denoting the attenuation of the effect of distance (λ = 2 for the standard gravity model) and γ a measure of the aggregation advantage, γ ≥ 1. Here, we apply the usual λ = 2, and we use γ = 1.5 to model the advantage associated with facilities that are part of a larger cluster. This is supposed to express the fact that larger clusters have an advantage, because they provide the individual with more choices. A further refinement could count only the number of restaurants that offer different types of food, i.e., a cluster with three chicken restaurants and one burger restaurant may be counted only as a place with two different restaurants rather than four. Another refinement could include loyalty to a brand, e.g., via loyalty points or similar measures. Having arrived at the cluster, the customer is then presented with the restaurants among which he will have to choose. Normally, we could use different weights associated with the individual restaurants. However, since we have made no assumptions about the type of restaurant we attempt to locate or about the popularity of the other restaurants in the cluster, we apply Laplace's principle of insufficient reason (or, alternatively, the rule used by Buridan's ass, or, alternatively again, Aristotle's reasoning in 350 BC) and assume that whenever a customer chooses a certain cluster, he will patronize all stores in the cluster with equal probability. For simplicity, we will assume that all facilities, new and old, have a base attraction of w = 2.
In order to illustrate the concept, consider the example shown in Fig. 13.3. Suppose first that the new facility locates in cluster A, i.e., away from all other existing facilities. Its cluster (i.e., the facility itself) then has an attraction of 2^1.5/1.5^2 = 1.2571. Clusters B, C, and D have cluster attractions of 2^1.5/1^2 = 2.8284, 4^1.5/3^2 = 0.8889, and 6^1.5/4^2 = 0.9186, respectively. Dividing each cluster attraction by the sum of all attractions, which is 5.893, we obtain estimated market shares of 21.33, 48.00, 15.08, and 15.59% for the four clusters. Since clusters A and B comprise only a single facility each, their market shares are as shown above. However, the market share of each facility in Cluster C is half of the market share of

Table 13.4 Attractions of demand clusters to restaurant clusters with the new restaurant at Site 1

Customer   Cluster 1                Cluster 2                Cluster 3                 Cluster 4                 Σ of attractions
A          3^1.5/7.7^2 = 0.0876     8^1.5/7.4^2 = 0.4132     6^1.5/10.5^2 = 0.1333     3^1.5/12.1^2 = 0.0355     0.6696
B          3^1.5/6.8^2 = 0.1124     8^1.5/6.5^2 = 0.5355     6^1.5/7.3^2 = 0.2758      3^1.5/8.6^2 = 0.0703      0.9940
C          3^1.5/4.5^2 = 0.2566     8^1.5/2.9^2 = 2.6905     6^1.5/4.3^2 = 0.7949      3^1.5/5.3^2 = 0.1850      3.9270
D          3^1.5/1.8^2 = 1.6038     8^1.5/3.9^2 = 1.4877     6^1.5/6.4^2 = 0.3588      3^1.5/7.7^2 = 0.0876      3.5379
E          3^1.5/2.0^2 = 1.2990     8^1.5/0.5^2 = 90.51      6^1.5/3.2^2 = 1.4352      3^1.5/4.8^2 = 0.2255      93.4697
F          3^1.5/4.8^2 = 0.2255     8^1.5/3.0^2 = 2.5142     6^1.5/3.7^2 = 1.0736      3^1.5/5.1^2 = 0.1998      4.0131
G          3^1.5/3.9^2 = 0.3416     8^1.5/4.5^2 = 1.1174     6^1.5/6.6^2 = 0.3374      3^1.5/8.0^2 = 0.0812      1.8776
H          3^1.5/5.6^2 = 0.1657     8^1.5/3.8^2 = 1.5670     6^1.5/1.7^2 = 5.0854      3^1.5/3.0^2 = 0.5773      7.3954
I          3^1.5/7.4^2 = 0.7022     8^1.5/5.6^2 = 0.7215     6^1.5/3.5^2 = 1.1998      3^1.5/1.1^2 = 4.2943      6.9178
J          3^1.5/6.2^2 = 0.1352     8^1.5/4.5^2 = 1.1174     6^1.5/5.5^2 = 0.4858      3^1.5/6.9^2 = 0.1091      1.8475
K          3^1.5/7.1^2 = 0.1031     8^1.5/7.5^2 = 0.4023     6^1.5/6.9^2 = 0.3087      3^1.5/7.5^2 = 0.0924      0.9065
L          3^1.5/9.9^2 = 0.0530     8^1.5/7.2^2 = 0.4365     6^1.5/5.0^2 = 0.5879      3^1.5/5.2^2 = 0.1922      1.2696

the cluster (i.e., 7.54% market share each), while that of each facility in Cluster D is one third of the market share of the entire cluster (i.e., 5.20% each). In summary, the solitary location in A results in an estimated market share of 21.33%. Repeating the procedure for the case in which the new facility locates in cluster B, we obtain projected market shares of 0 (Cluster A is empty), 81.57% for Cluster B, 9.06% for Cluster C, and 9.37% for Cluster D. This results in 40.79% for the new facility. Finally, the market shares for the clusters, given that the new facility locates within Cluster C, are 0 (again, no facility in Cluster A), 52.57% for Cluster B, 30.35% for Cluster C, and 17.07% for Cluster D. This results in an estimated 10.12% market share for the new facility. Comparing the results, it is best for the new facility to locate in Cluster B and achieve a 40.79% market share, which is significantly above average, considering that the average market share is 1/7 = 14.29%. Going through the calculations, we also observe that with this choice of parameters, customers fairly strongly favor facilities in their proximity.

We will now apply this concept to our case study. In order to do so, we have to consider eight cases, in each of which we locate our new restaurant at one of the potential locations outlined in Table 13.1. In order to avoid repetition and save space, we will explicitly describe only one of these cases. Suppose now that we tentatively locate our restaurant at potential Site 1, which is contained in Cluster 2. With the new facility located, the four clusters now have 3, 8, 6, and 3 facilities, respectively. Table 13.4 shows the calculation of the attractions of the individual demand clusters to the restaurant clusters. The last column shows the sum of the attractions between a customer cluster and all restaurant clusters. These values will be used to normalize the attractions.
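The two-stage computation for the small example of Fig. 13.3 can be sketched as follows (w = 2, λ = 2, γ = 1.5, with the cluster sizes and distances read off the worked calculations above; the function name is ours):

```python
# Two-stage cluster-gravity model for the Fig. 13.3 example: the new
# facility may join cluster A (empty), B (1 facility), or C (2
# facilities); cluster D (3 facilities) has no room. Every facility has
# base attraction w = 2; customer-to-cluster distances as in the text.
W, LAM, GAM = 2.0, 2.0, 1.5

def market_share_of_new_facility(chosen_cluster):
    counts = {"A": 0, "B": 1, "C": 2, "D": 3}      # before the newcomer
    dists = {"A": 1.5, "B": 1.0, "C": 3.0, "D": 4.0}
    counts[chosen_cluster] += 1
    # Stage 1: gravity-like attraction of each nonempty cluster
    attr = {c: (W * n) ** GAM / dists[c] ** LAM
            for c, n in counts.items() if n > 0}
    share = attr[chosen_cluster] / sum(attr.values())
    # Stage 2: equal split among the facilities of the chosen cluster
    return share / counts[chosen_cluster]

for c in "ABC":
    print(c, round(100 * market_share_of_new_facility(c), 2))
# prints: A 21.33, B 40.79, C 10.12
```

These figures reproduce the market shares derived in the text, confirming that cluster B is the best choice for the newcomer.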


Table 13.5 Market capture by clusters, given the new restaurant at Site 1

Customer   Purchasing power   Cluster 1         Cluster 2           Cluster 3          Cluster 4
A          67                 (.1308) 8.76      (.6171) 41.35       (.1991) 13.34      (.0530) 3.55
B          1284               (.1131) 145.22    (.5387) 691.69      (.2775) 356.31     (.0707) 90.78
C          1840               (.0653) 120.15    (.6851) 1260.58     (.2024) 372.42     (.0471) 86.66
D          570                (.4533) 258.38    (.4205) 239.69      (.1014) 57.80      (.0248) 14.14
E          1800               (.0139) 25.02     (.9683) 1742.94     (.0154) 27.72      (.0024) 4.32
F          1785               (.0562) 100.32    (.6265) 1118.30     (.2675) 477.49     (.0498) 88.89
G          1035               (.1819) 188.27    (.5951) 615.93      (.1797) 185.99     (.0432) 44.71
H          80                 (.0224) 1.79      (.2119) 16.95       (.7916) 63.33      (.0781) 6.25
I          138                (.1015) 14.01     (.1043) 14.39       (.1734) 23.93      (.6208) 85.67
J          133                (.0732) 9.74      (.6048) 80.44       (.2629) 34.97      (.0591) 7.86
K          435                (.1137) 49.46     (.4438) 193.05      (.3405) 148.12     (.1019) 44.33
L          165                (.0417) 6.88      (.3438) 56.73       (.4631) 76.41      (.1514) 24.98
Total      9332               928.00            6072.04             1837.83            502.14

The individual customer-to-restaurant-cluster attractions are now normalized by dividing the individual attractions in Table 13.4 by their respective row sums, i.e., by the last column of that table. These are the numbers in brackets shown in Table 13.5. The numbers right next to them indicate the actual demand (to be) satisfied by the restaurant cluster, which is the product of the normalized attraction and the purchasing power of the customer cluster. The bottom row of the last four columns in Table 13.5 shows the demands that are attracted by the restaurant clusters. Given the total demand of 9332, the new facility at Site 1 in Cluster 2 receives one eighth of the cluster's total demand, i.e., 6072.04/8 = 759 units, which is a market share of 8.15%. Note that, including the new facility, there are 20 facilities, so that the average market share is 5%.

Finally, consider the safety of the individual areas. We have taken data from Toronto Life (2022), which has subdivided the Toronto area into neighborhoods for which safety rankings are available. We use the rankings for all potential new locations for our restaurant. Had they been available, we would have preferred more detailed safety assessments: the number of home invasions in an area, for instance, is of no great interest to an individual who decides whether or not to patronize a restaurant, whereas assaults, robberies, and carjackings are of major interest. The results are shown in the attribute matrix in Table 13.6.

In the attribute matrix, we note a significant number of dominances. More specifically, Site 1 dominates Sites 4, 5, 7, and 8, while Site 2 dominates Site 3. This leaves us with the reduced attribute matrix in Table 13.7. We will now take the reduced attribute matrix and apply the analytic hierarchy process (AHP) to it. Even though we have "hard," i.e., quantitative, data that we could use in a simple weighted multiplicative method, the AHP allows us to incorporate human perception in the model.
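The dominance check that reduces Table 13.6 to Table 13.7 can be sketched as follows (data copied from the attribute matrix; price is negated because lower prices are better, all other criteria are "more is better"):

```python
# Pareto-dominance filter over the Table 13.6 attribute matrix.
# Each tuple: (planned purchases, peak hour visibility, -price, safety).
sites = {
    1: (759, 30518, -79000, 18.6),
    2: (205, 38686, -65000, 29.3),
    3: (133, 30145, -165000, 26.8),
    4: (342, 17578, -150000, 4.3),
    5: (759, 25723, -199000, 8.6),
    6: (214, 30318, -149000, 38.6),
    7: (470, 16817, -180000, 16.4),
    8: (146, 24802, -179000, 12.9),
}

def dominates(a, b):
    # a dominates b: at least as good everywhere, strictly better somewhere
    return all(x >= y for x, y in zip(a, b)) and a != b

nondominated = [s for s, v in sites.items()
                if not any(dominates(w, v) for w in sites.values())]
print(nondominated)  # → [1, 2, 6]
```

As in the text, only Sites 1, 2, and 6 survive and enter the AHP stage.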
As usual, we consider one criterion at


Table 13.6 The attribute matrix

New location at   Planned purchases   Peak hour total visibility   Price     Safety
1                 759                 30,518                       79,000    18.6
2                 205                 38,686                       65,000    29.3
3                 133                 30,145                       165,000   26.8
4                 342                 17,578                       150,000   4.3
5                 759                 25,723                       199,000   8.6
6                 214                 30,318                       149,000   38.6
7                 470                 16,817                       180,000   16.4
8                 146                 24,802                       179,000   12.9

Table 13.7 The reduced attribute matrix

New location at   Planned purchases   Peak hour total visibility   Price     Safety
1                 759                 30,518                       79,000    18.6
2                 205                 38,686                       65,000    29.3
6                 214                 30,318                       149,000   38.6

a time, and as a result, we obtain as many pairwise comparison matrices as we have criteria. As far as the planned purchases are concerned, the decision maker views the attribute of Site 1 as four times as high as that of either of the other two sites, so that we obtain

C1 = [ 1    4    4 ]
     [ .25  1    1 ]
     [ .25  1    1 ]

with column sums 1.5, 6, and 6. Considering the visibility of the sites, we consider Site 2 to be 25% better than the other two sites, so that we obtain

C2 = [ 1     .8   1    ]
     [ 1.25  1    1.25 ]
     [ 1     .8   1    ]

with column sums 3.25, 2.6, and 3.25. As far as the price of the property is concerned, the decision maker considers Sites 1 and 2 to be more or less equal, while each is deemed twice as good as Site 6 (as they are roughly half its price). Thus, we obtain the matrix

C3 = [ 1   1   2 ]
     [ 1   1   2 ]
     [ .5  .5  1 ]

with column sums 2.5, 2.5, and 5. Finally, regarding safety, Site 1 is considered half as good as Site 2 and only one third as good as Site 6, while Site 6 is considered twice as good as Site 2, leading to the matrix

C4 = [ 1   .5   .33 ]
     [ 2   1    .5  ]
     [ 3   2    1   ]

with column sums 6, 3.5, and 1.83.


The difficult part is the quantification of the decision maker's preferences when it comes to the criteria themselves. Here, we assume that the indicators for planned and unplanned purchases are equally important, each of which, in turn, is considered five times as important as the purchase price of the property (the price of the purchase is paid once, whereas the revenues recur), and the planned purchases are considered twice as important as the safety index. The remaining preferences are shown in the criterion preference matrix

C̃ = [ 1    1    5   2   ]
    [ 1    1    4   3   ]
    [ .2   .25  1   .33 ]
    [ .5   .33  3   1   ]

with column sums 2.7, 2.58, 13, and 6.33. Normalizing the matrices by dividing each element by the appropriate column sum results in the matrices

C1′ = [ .67  .67  .67 ]     C2′ = [ .31  .31  .31 ]     C3′ = [ .4  .4  .4 ]
      [ .17  .17  .17 ]           [ .38  .38  .38 ]           [ .4  .4  .4 ]
      [ .17  .17  .17 ]           [ .31  .31  .31 ]           [ .2  .2  .2 ]

C4′ = [ .17  .14  .18 ]     C̃′ = [ .37  .39  .38  .32 ]
      [ .33  .29  .27 ]          [ .37  .39  .31  .47 ]
      [ .50  .57  .55 ]          [ .07  .08  .08  .05 ]
                                 [ .19  .13  .23  .16 ]

At this point we note that the assessments of the first three criteria are completely consistent, as all elements in each of the rows are identical. This is not true for the last two matrices. The coefficients of variation of the rows of matrix C4′ are .1, .08, and .05 (probably acceptable), while those of the matrix C̃′ are .07, .15, .18, and .21. This would normally be cause for an intervention, as such variations are rather sizeable. Here, we will continue with the assessments the way they are and compute the utility matrix

U = [ .67  .31  .4  .16 ]
    [ .17  .38  .4  .30 ]
    [ .17  .31  .2  .54 ]

and the weight vector w = [.37 .39 .07 .18]ᵀ, so that we obtain, via standard matrix-vector multiplication, Uw = [.4256 .2931 .2950]ᵀ. This determines Site 1 as the clear winner, with the remaining two options being virtually equal. Site 1's rather dismal safety assessment has not stopped it from being chosen, due to the low ranking of safety in comparison to most other criteria (the bottom row of C̃).
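The entire AHP computation above (column-normalize each comparison matrix, average its rows, then combine criterion utilities with the criterion weights) can be sketched in a few lines. Working with unrounded normalized values, the scores come out slightly different from the rounded figures in the text (.4256, .2931, .2950), but the ranking, with Site 1 as the clear winner, is the same:

```python
# AHP sketch: priorities = row averages of the column-normalized
# pairwise-comparison matrix. Matrix entries are copied from the text.
def priorities(M):
    ncols = len(M[0])
    col_sums = [sum(row[j] for row in M) for j in range(ncols)]
    # normalize each column, then average across each row
    return [sum(row[j] / col_sums[j] for j in range(ncols)) / ncols
            for row in M]

C1 = [[1, 4, 4], [.25, 1, 1], [.25, 1, 1]]          # planned purchases
C2 = [[1, .8, 1], [1.25, 1, 1.25], [1, .8, 1]]      # visibility
C3 = [[1, 1, 2], [1, 1, 2], [.5, .5, 1]]            # price
C4 = [[1, .5, 1/3], [2, 1, .5], [3, 2, 1]]          # safety
Cp = [[1, 1, 5, 2], [1, 1, 4, 3],
      [.2, .25, 1, 1/3], [.5, 1/3, 3, 1]]           # criterion preferences

cols = [priorities(C) for C in (C1, C2, C3, C4)]    # per-criterion utilities
U = [[cols[c][s] for c in range(4)] for s in range(3)]  # rows: Sites 1, 2, 6
w = priorities(Cp)                                   # criterion weights
scores = [sum(U[s][c] * w[c] for c in range(4)) for s in range(3)]
print([round(x, 4) for x in scores])                 # Site 1 ranks first
```

The scores sum to one by construction, since each priority vector is itself normalized.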

References

S.P. Anderson, A. De Palma, J.-F. Thisse, Discrete Choice Theory of Product Differentiation (The MIT Press, Cambridge, MA, 1992)
BBC News, Pompeii: ancient 'fast food' counter to open to the public (2020), https://www.bbc.com/news/world-europe-55454717. Accessed 1 Oct 2022
BizVibe, Top 10 largest fast food chains in the world 2020 (undated), https://blog.bizvibe.com/blog/largest-fast-food-chain. Accessed 1 Oct 2022
S. Cascone, An ancient fast food restaurant in Pompeii that served honey-roasted rodents is now open to the public. Artnet News (2021), https://news.artnet.com/art-world/pompeii-opens-recently-discovered-ancient-fast-food-restaurant-1998265. Accessed 1 Oct 2022
City of Toronto, Transportation Services, Traffic signal vehicle and pedestrian volumes. City of Toronto (2018), https://open.toronto.ca/dataset/traffic-signal-vehicle-and-pedestrian-volumes/. Accessed 4 Oct 2022
S. Cyprus, What was the first fast-food restaurant? Delighted Cooking (2022), https://www.delightedcooking.com/what-was-the-first-fast-food-restaurant.htm. Accessed 1 Oct 2022
EHL Insights, Restaurant management: how to choose the right location? (2022), https://hospitalityinsights.ehl.edu/restaurant-management-location. Accessed 1 Oct 2022
A.S. Fotheringham, Spatial competition and agglomeration in urban modelling. Environ Plan A 17, 213–230 (1985)
A.S. Fotheringham, Customer store choice and choice set definition. Mark Sci 7(3), 299–310 (1988)
D.L. Huff, Defining and estimating a trading area. J Mark 28, 34–38 (1964)
O. Kiyak, Restaurant: choosing a location. Gourmet Marketing (2016), https://www.gourmetmarketing.net/restaurant-essentials/choosing-a-restaurant-location/. Accessed 1 Oct 2022
T. Larkin, 8 factors for choosing a new restaurant location. FSR Magazine (2017), https://www.fsrmagazine.com/expert-takes/8-factors-choosing-new-restaurant-location. Accessed 1 Oct 2022
V. Marianov, H.A. Eiselt, A. Lüer-Villagra, The follower competitive location problem with comparison-shopping. Netw Spat Econ 20, 367–393 (2019)
L. Mealey, Tips on where to locate your restaurant (2018), https://www.thebalancesmb.com/choosing-restaurant-location-2888543. Accessed 1 Oct 2022
Metropolitan Council, Measuring employment (2016), https://metrocouncil.org/Handbook/Files/Resources/Fact-Sheet/ECONOMIC-COMPETITIVENESS/How-to-Measure-Employment-Intensity-and-Capacity.aspx. Accessed 6 June 2022
NPR, What's on the menu in ancient Pompeii? Duck, goat, snail, researchers say (2020), https://www.npr.org/2020/12/27/950645473/whats-on-the-menu-in-ancient-pompeii-duck-goat-snail-researchers-say. Accessed 1 Oct 2022
Oldest.org, Oldest fast food chains in the world (2020), https://www.oldest.org/food/fast-food-chains/. Accessed 1 Oct 2022
Realtor.ca, MLS® and real estate map (2022), https://www.realtor.ca/map#view=list&Sort=6-D&GeoIds=g20_dpz9hct3&GeoName=Scarborough%2C%20ON&PropertyTypeGroupID=2&PropertySearchTypeId=0&TransactionTypeId=2&Currency=CAD. Accessed 1 Oct 2022
Recipes-Online, A drone's eye view of LA's longest drive-thru lines (2020), https://la.eater.com/2020/5/12/21255064/drive-thru-fast-food-restaurants-los-angeles-lines-coronavirus-drone. Accessed 1 Oct 2022
W.J. Reilly, The Law of Retail Gravitation (Knickerbocker Press, New York, 1931)
Statistics & Data, Biggest fast food chains in the world (2022), https://statisticsanddata.org/biggest-fast-food-chains-in-the-world-71-2019/. Accessed 1 Oct 2022
Toronto Life, The ultimate neighborhood rankings (2022), https://torontolife.com/neighbourhood-rankings/#Rouge. Accessed 1 Oct 2022
K. Train, Discrete Choice Methods with Simulation, 2nd edn. (Cambridge University Press, 2009), https://eml.berkeley.edu/books/choice2.html. Accessed 1 Oct 2022
D. Ungerleider, How fast food chains pick their next location. Restaurant Magazine (2014), https://www.restaurantmagazine.com/how-fast-food-chains-pick-their-next-location/. Accessed 1 Oct 2022
Webstaurant Store, Restaurant location analysis (2018), https://www.webstaurantstore.com/article/81/restaurant-environmental-analysis.html. Accessed 1 Oct 2022

Chapter 14

Locating Billboards with the Jefferson-d’Hondt Method

Out-of-home (OOH) advertising has been around for a very long time. Following BMedia (2019), Jared Bell was the first to create outdoor advertising, mostly in the form of murals. More about the general history of outdoor advertising can be found in Capitol (2022). An out-of-home advertising sign of a grocery store is shown in Fig. 14.1. OOH advertising received a big boost during the Depression, when William ("Bill") Borders, youngest son of the owner of a clothing store, appeared on the street as a "sandwich man," advertising his dad's business. This was to become "Bill's Boards," or, as we know them today, billboards. For a detailed history, see, e.g., OOH Today (2018).

Typically, billboards are either directly attached to the facility that offers the services advertised (as in Fig. 14.1 in the case of a grocery store), or they are wooden or metal boards next to a highway or somewhere in town. Following Burt (2006), there are no fewer than half a million billboards along highways in the United States, with an annual increase of 1–3%. The billboard industry is a $1.8 billion enterprise. It also has its very own language; see, e.g., Fliphound (undated).

As far as modeling is concerned, it is useful to distinguish between advertisements that offer services of interest to travelers and available nearby, such as restaurants, hotels/motels, and similar facilities, as opposed to services that are more of interest to the local population, such as radio stations, lawyers, and others. In this chapter we will discuss the placement of billboards for travelers' services. It is apparent that in most cases, billboards that offer a facility or a service will be located reasonably close to it. If a billboard were to offer something that is many hours away, its usefulness is limited, as few people will remember its message. Taylor et al. (2006) suggest that billboards promoting facilities such as motels will be remembered 30–60 min after the message was first seen.
Personal observations by the authors of this chapter put this estimate squarely in the "optimistic" category. There are, of course, exceptions. One of the best-known exceptions is Wall Drug in Wall, South Dakota, a mall that includes a western wear store, a restaurant, a chapel, and, yes, a drug store. Roadside advertisements for the store can be found hundreds of miles away on I-90. A short account of the company's history is found on Wall Drug (2021). While the company has paid for some 300 billboards, many more exist. As a matter of fact, their advertising has risen to cult status, with some people going to extremes and posing with signs to the facility some 11,568 miles away (location unspecified, but it does look like Antarctica), as shown on the aforementioned company's website.

© Springer Nature Switzerland AG 2023 H. A. Eiselt et al., Multicriteria Location Analysis, International Series in Operations Research & Management Science 338, https://doi.org/10.1007/978-3-031-23876-5_14

Fig. 14.1 Outdoor advertising near Cerrillos, New Mexico (© H.A. Eiselt)

This chapter will investigate the best location of a billboard for travelers' services. In order to do so, we need to look a bit more closely into different features of billboards. First of all, we typically distinguish between three different sizes. The largest size is the bulletin, which is 14 × 48 ft. with a total area of 672 sq. ft. The next smaller size is the poster, which is 10 × 22 ft., with a total area of 220 sq. ft. Finally, there is the junior size, which is 6 × 12 ft., with an area of 72 sq. ft. We wish to point out that a poster has roughly three times the area of a junior, while a bulletin is roughly three times the size of a poster.

In addition to their sizes, other features of billboards are important in determining their cost and application. For instance, a static billboard, i.e., one that has a message permanently printed on it, is typically found along highways. Electronic billboards, which have the advantage of being able to change messages over time and thus even allow sharing among a number of different companies, are often found at intersections with traffic lights, which allows the messages to make an impression on the drivers and passengers in the waiting cars.

Billboards can either be purchased or rented. Which of those is better will depend on the type of product advertised. According to Lendvo (2019), building a wooden billboard (and that is just the structure without actual content) costs between $15,000 and $20,000, while its steel counterpart will cost between $40,000 and $100,000.
According to Queiroz (2019), the basic rate for renting a billboard is US$3000 for a 4-week period. There are, however, many factors that will (sometimes dramatically) influence the cost. In particular, they include
• the town the billboard is installed in,
• the type of road the billboard is located on and its traffic count,
• how many other billboards are around the one we rent,
• which side of the road the billboard is on,
• the size of the billboard, and
• if the billboard is lighted.

It is not much of a surprise that while a bulletin-sized billboard in Albuquerque may be rented for $1200 for a 4-week period, the same-sized billboard will cost $3000 in any major U.S. city, and plenty more in New York City. For instance, a bulletin-sized billboard in the SoHo district of New York City will cost as much as $18,000–$30,000 per month (all information via Lendvo 2019 and Queiroz 2019).

Another important feature is the type of road and the traffic count on the stretch of road the billboard is placed on. A billboard on a minor country road with only light traffic will be much less expensive than a billboard located next to a major interstate highway. However, this reflects not just the traffic count, but also the type of driver on the road: small country roads are often used more by local drivers, while travelers often use the interstate system. As a result, an advertisement for a motel will not be very effective if placed along a small country road, as it addresses the wrong clientele.

Another feature that determines the rental cost of a billboard is the number of other billboards around it. The idea is that a solitary advertisement will make much more of an impression on the driver than a billboard that is one among many other ads. Yet another determinant of the cost is the side of the road a billboard is installed on. A bulletin on the right side of the road is typically significantly more expensive than one on the left side. Yes, the latter is the driver's side, but the driver must look across a number of lanes to see it. The size of the billboard will also determine its rental cost, and it is also straightforward that lighted billboards are (about 25%) more expensive than their unlighted counterparts, as they simply have more hours in the day during which they can make an impression. We should not forget that these costs just allow a company to display its advertisement for a given time.
In order to do so, we need to design and produce the banner first. Typically, the design costs something in the order of $500–$1000, and printing the design onto vinyl presently costs about 50¢ per sq. ft. In general, billboard advertising is regulated by the Outdoor Advertising Association of America (OAAA), which was established as early as 1872 and regulates outdoor advertising to this day.

Given all this, it is now our task to build a number of billboards in order to advertise our product, which we assume is a local facility, such as a motel or a restaurant. The location of the actual facility is given, and so are the sites at which billboards may be constructed. Our task is to choose the locations that will give us the most "bang for the buck," i.e., make the most impression(s). Doing that, we have decided to concentrate on the following major concerns:
• maximizing the number of cars driving by a specific site,
• optimizing the distance between the billboard and the designated exit to the actual facility (e.g., the motel), and
• maximizing the impression of our billboard, given the number of other billboards in the vicinity, i.e., in the same zone our billboard is located in.

The cost of designing, constructing, and placing a billboard will be included in a budget constraint. Since we are considering a specific stretch of highway of a few miles, costs will not differ much between the individual sites; it is not as if we were to compare locations in rural Montana with those in downtown Manhattan. For the purpose of this work, we use the traffic count as a proxy for the impression a billboard makes on a driver. Clearly, the highway speed plays a key role: slow speeds give a billboard much more of a chance to make an impression on a driver than high speeds, where drivers have to pay more attention and billboards appear to just fly by.

Another issue concerns multiple billboards of the same firm. Clearly, the effect of the billboards does not increase linearly with a driver's exposure to them: the first board makes a bit of an impression, the second solidifies the impression, the third intensifies it, and from that point on the repetition gets to be annoying and the attractiveness actually decreases.

Billboard location problems have been described by researchers for a long time. Much of the "marketing-based" work deals with the determination and assessment of factors relevant for locating billboards. For instance, Inman (2016) outlines some of the main factors for the location of billboards. They include the traffic count at the site and the visibility of the billboard (how high, how far away from the road). The audience demographics play an important role: advertising, say, a pricey perfume or a luxury car in a working-class neighborhood is probably not as effective as one may wish. If the billboard advertises a specific service, the proximity of the advertisement to the facility is, of course, also a factor.
This does not apply to advertisements for, say, travel destinations, radio stations, or similar products or services that are not tied to a specific seller that needs to be physically visited by a customer. A similar listing is provided by Adquick (2020), which also includes a number of other factors, such as the type of land use, i.e., the type of surrounding area, as well as visibility. This includes not only the size of the billboard and its positioning, so that the driver can view it with a fairly slight movement of his head to the right (i.e., the board should be visible to the driver through the center of the windshield), but also the positioning of the billboard on the right side of the road, where the advertisement is much more visible to the driver than a billboard located across a number of lanes of traffic on the left side. Liu et al. (2017) stress similar points. Wilson (2016) also includes dwell time, i.e., the amount of time a driver has to read the billboard, comprehend its message, and potentially act on it. This time will depend a lot on the speed at which cars travel on the particular segment of highway, whether there are frequent traffic jams (which dramatically increase the dwell time), and similar factors.

Diaz (2018) quotes a study by Airdoor that has determined the proportion of drivers who notice billboards (74%) and the proportion of billboards that are actually read (48%). The study also determined that an 18–35 year-old driver is more likely to notice a billboard than


a 35–49 year-old driver. Slightly more than one in four drivers noted a phone number or web address on a billboard. There is, however, no indication what proportion of drivers actually acted on the information on a billboard.

Another aspect is investigated by Zalesinska (2018), who examines illuminated billboards and drivers' reactions to them. The general issue concerns LED billboards, which offer both possibilities and challenges. The possibility is to include a variety of dynamically changing advertisements, which is not particularly effective along highways but is near traffic lights, where drivers have to stop and can be reached for a minute or so by the changing displays. Illuminated static billboards along highways are challenging, as they produce glare, which poses a safety risk. On the other hand, bright spots draw the eyes of the driver towards them, resulting in a lack of attention to whatever is going on on the road and thus creating a potential hazard.

As far as solution methods for billboard location problems are concerned, Taylor et al. (2006) promote gravity models that were first suggested by Reilly (1931) and whose stochastic counterpart was suggested by Huff (1964). Based on original work by Atashpaz-Gargari and Lucas (2007), who devised what they call an "imperialist competitive algorithm," Azad and Boushehri (2014) employ this algorithm to solve a billboard location problem that minimizes costs and maximizes traffic. They then apply the technique to a situation in Sioux Falls, South Dakota. Lotfi et al. (2017) first provide an authoritative review of the literature up to that time and then describe a bi-objective multiproduct billboard location model that includes costs and a traffic count embedded in an attraction function. The authors suggest a genetic algorithm for large-scale problems. Another strand of research is found in the location science literature.
Until the late 1980s, location problems determined one or more sites for facilities to be located so as to minimize or maximize a function of distance, where distance was defined between the known "customers" and the yet unknown facilities. In the case of desirable facilities, a function of these distances was minimized. This means that the underlying assumption is that for each unit of demand a customer has, there has to be a separate trip to or from a (typically the closest) facility. Clearly, this is often not the case. If, for instance, furniture is delivered to a customer, then there will most likely not be separate trips; rather, tours are routed on which deliveries are made. This immediately leads to location-routing models; see, e.g., Schneider and Drexl (2017) or Schiffer et al. (2019). Similarly, if customers decide which purchases to make and where, a very similar situation leads to multipurpose shopping models; see, e.g., Marianov et al. (2018) or Lüer-Villagra et al. (2022). Hodgson (1990) was the first to remedy this situation by introducing flow-capturing (or, equivalently, flow-interception) models. Typical examples are locations of day care facilities or gas stations, both facilities whose services are obtained on a customer's way to work. As such, it is not the customer-facility (i.e., point-to-point) distance that is relevant, but the distance between the facility and the customer's path to his place of work (i.e., the point-to-path distance). An update was provided by Berman et al. (1995). A probabilistic version of the problem was discussed by Berman et al. (1997), while the problem of double counting was addressed by Averbakh and Berman (1996).


Fig. 14.2 The study area in Edmonton, Alberta (Map data ©2022 Google)

The location of billboards was first discussed in the location literature by Hodgson and Berman (1997). This paper locates the facilities at the nodes of the network. In an extension, Hodgson (1998) looks again at the day care problem, finds it to be structurally identical to the max cover problem (see, e.g., Sect. 4.3.1 of this volume), and locates facilities on the links of the underlying network, while explicitly considering the issue of multiple captures.

14.1 Making It Work

For our illustration, we have chosen the city of Edmonton, Alberta, Canada. Traffic counts are available at three points of the Sherwood Park Highway east of 17th, 34th, and 50th Streets, as shown on the map in Fig. 14.2 (via Average Annual Weekday Traffic Volumes (2009–2019) 2021). The red dots with the labels A, B, and C indicate the sites at which we can construct our billboards. The hotel, which the billboards advertise, is shown in blue. Since the problem deals with the location of possibly multiple billboards, we have chosen to apply the Jefferson-d'Hondt method, which is described in some detail in Sect. 5.3 of this volume.

Consider now a driver's utility based on the distance between a billboard and the designated exit to the hotel. Here, we will use the same exit for all possible sites of the billboard, which is crucial if we have multiple billboards along the way; otherwise confusion is sure to arise. If the distance to the exit is very short, drivers do not have enough time to make the decision to exit, and they will not patronize the hotel we advertise. On the other hand, if the distance to the designated exit is too long, drivers will have experienced other stimuli and may have forgotten important details such as the exit number. For a background on the rules that govern the placement of such advertisement signs on a highway in the United States, see Byrd et al. (2017). Specifically, we design a relation between the billboard-to-designated-exit distance and the billboard's usefulness as shown in Fig. 14.3, which shows an initial increase and a decrease after a maximum point. The function we use here is u = 100d²e^(−d)/54.135, with u symbolizing the usefulness of a billboard at a site and d the distance (in km) between the site and the designated exit. The function has its maximum at a distance of 2 km, which, given a speed of 62 mph (100 km/h), corresponds to about 1¼ min of driving time.
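The distance utility above is easy to reproduce numerically. The following short sketch (the function name is ours) evaluates u = 100d²e^(−d)/54.135; note that 100d²e^(−d) peaks at d = 2 with value 400/e² ≈ 54.134, so the divisor 54.135 scales the maximum to roughly 1:

```python
import math

def exit_distance_utility(d_km: float) -> float:
    """Usefulness of a billboard located d_km before the designated exit:
    u = 100 * d^2 * exp(-d) / 54.135, which peaks (at about 1) for d = 2 km."""
    return 100 * d_km**2 * math.exp(-d_km) / 54.135

# Utilities for the three candidate sites A, B, C (distances in km):
for site, d in [("A", 3.7), ("B", 2.1), ("C", 0.9)]:
    print(site, round(exit_distance_utility(d), 4))
# A 0.6252
# B 0.9976
# C 0.6083
```

The three printed values are exactly the "distance to designated exit" utilities reported later in Table 14.2.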


Fig. 14.3 Utility of a billboard based on its distance to the designated exit

Fig. 14.4 Utility as a function of the number of billboards in the vicinity

Another issue concerns the number of billboards in the vicinity of our board. Here, we simply use the utility function u = 1/(number of billboards including ours). The function, shown in Fig. 14.4, plots the number of billboards on the abscissa against their utility on the ordinate. More specifically, the utility equals 1 in case our billboard is the only one along that stretch of highway.

Given the three potential sites for putting up billboards along the highway (A, B, and C in Fig. 14.2) and the three criteria "traffic count," "distance to designated exit," and "number of billboards along the stretch of highway," the observed data (the number of billboards was assumed) are shown in Table 14.1.

Table 14.1 Raw data for the example

Decision   Traffic count   Distance to designated exit (km)   # of billboards in area excl. ours^a
A          33,300          3.7                                3
B          40,300          2.1                                9
C          36,300          0.9                                6

^a Assumed

The next task is to translate the raw data in Table 14.1 into utilities. For the traffic count, the utility is determined by dividing the traffic count by the maximum (here 40,300). For the distance to the designated exit, we apply the formula that is the basis of Fig. 14.3. Finally, for the number of billboards we start with the inverse values of the number of all existing boards (including ours), which are then normalized so that the highest utility equals 1. We thus obtain the utilities shown in Table 14.2.

Table 14.2 Utilities of the decisions as measured on the three criteria

Decision   Traffic count   Distance to designated exit   # of billboards in area
A          0.8263          0.6252                        1.0000
B          1.0000          0.9976                        0.4000
C          0.9007          0.6083                        0.5716

We note that there are no dominances among the three decisions. However, even if there were, this would be no reason to exclude a decision, as we will be assigning multiple facilities among the different sites. Given the relative similarities in the assessments of the three sites, we may expect that our billboards are divided somewhat evenly among the sites.

One way to initialize the process begins with the usual aggregation technique. This ensures that the importance of the different criteria is included in the model, assuming that there is consensus among the decision makers, which we assume here. In our example, we apply weights of .5, .3, and .2, respectively. Next, we assume that our budget is expressed in a number of "tokens," with each token being equivalent to some amount of money. For instance, if a poster-sized billboard costs, say, $10,000, while a banner-sized board goes for $30,000, then one token could be the equivalent of $10,000. An allocation of three tokens then equals either three poster-sized billboards or one banner-sized billboard. We apply this 3-to-1 ratio between banner and poster in our numerical example, which also requires the assumption that there are no economies of scale. The results are shown in the first row of Table 14.3. Given that we have a total of eight tokens (our budget), the remaining rows of Table 14.3 show the application of the Jefferson-d'Hondt method as described in Sect. 5.3 of this book. The numbers in parentheses indicate the order in which individual tokens are assigned.

Table 14.3 The Jefferson-d'Hondt procedure for our application

Divisor   Site A      Site B      Site C
1         .8007 (2)   .8793 (1)   .7492 (3)
2         .4004 (5)   .4397 (4)   .3736 (6)
3         .2669 (8)   .2931 (7)   .2491
4         .2002       .2198
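The procedure in Table 14.3 can be sketched in a few lines (a minimal illustration; the function and variable names are ours). Starting from the weighted aggregate scores in the first row of the table, the Jefferson-d'Hondt method repeatedly awards the next token to the site with the largest current quotient score/(tokens already assigned + 1):

```python
def dhondt_allocation(scores: dict[str, float], tokens: int) -> dict[str, int]:
    """Assign `tokens` one at a time to the site with the largest quotient
    score / (already-assigned + 1), as in the Jefferson-d'Hondt method."""
    allocation = {site: 0 for site in scores}
    for _ in range(tokens):
        best = max(scores, key=lambda s: scores[s] / (allocation[s] + 1))
        allocation[best] += 1
    return allocation

# Weighted aggregate scores (first row of Table 14.3) and the budget of 8 tokens:
scores = {"A": 0.8007, "B": 0.8793, "C": 0.7492}
print(dhondt_allocation(scores, 8))  # {'A': 3, 'B': 3, 'C': 2}
```

The result of three tokens each for sites A and B and two tokens for site C matches the assignment order marked in Table 14.3.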


At the end of the process, eight tokens have been assigned. Assume now that only two sizes of boards have been considered, viz., posters and banners. The junior size is not considered, as the speed along this stretch of highway does not allow a driver to comprehend the important content of the message ("what?", "where?", "how much?", etc.). Given the total budget of eight tokens, the allocation is one banner in each of zones A and B, and two posters in zone C.

As usual, the solution obtained in an optimization process is not the end, but a beginning. At this point, we can compare build vs. rent options, lighting vs. no lighting, three posters vs. one bulletin, and other billboard options. Most importantly, though, we will also need to compare the cost and effectiveness of out-of-home advertising with that of other means of promotion, e.g., television ads, radio advertising, newspaper and magazine advertising, online advertising, and others.
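Since a banner is worth three tokens and a poster one, each zone's token count can be spent in more than one way. A small helper (hypothetical, ours) enumerates the feasible (banners, posters) combinations per zone under this 3-to-1 ratio; the allocation chosen above corresponds to (1, 0) for zones A and B and (0, 2) for zone C:

```python
def board_options(tokens: int, banner_cost: int = 3) -> list[tuple[int, int]]:
    """All (banners, posters) combinations that spend `tokens` exactly,
    with a banner worth `banner_cost` tokens and a poster worth one."""
    return [(b, tokens - b * banner_cost) for b in range(tokens // banner_cost + 1)]

# Token allocation from the Jefferson-d'Hondt step: A: 3, B: 3, C: 2
for zone, t in {"A": 3, "B": 3, "C": 2}.items():
    print(zone, board_options(t))
# A [(0, 3), (1, 0)]
# B [(0, 3), (1, 0)]
# C [(0, 2)]
```

Enumerating the options makes explicit the "three posters vs. one bulletin" type of comparison mentioned above.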

References

Adquick, Billboard placement: why location is so critical (2020), https://www.adquick.com/blog/why-location-is-so-critical/. Accessed 1 Oct 2022
E. Atashpaz-Gargari, C. Lucas, Imperialist competitive algorithm: an algorithm for optimization inspired by imperialistic competition, in IEEE Congress on Evolutionary Computation (2007), pp. 4661–4667
Average Annual Weekday Traffic Volumes (2009–2019), City of Edmonton (2021), https://data.edmonton.ca/Transportation/Average-Annual-Weekday-Traffic-Volumes-2011-2019-/b58q-nxjr (esri subscription required). Accessed 1 Oct 2022
I. Averbakh, O. Berman, Locating flow-capturing units on a network with multi-counting and diminishing returns to scale. Eur J Oper Res 91(3), 495–506 (1996)
H.R.L. Azad, N.S. Boushehri, Billboard advertising modeling by using network count location problem. Int J Traff Transp Eng 4(2), 146–160 (2014)
O. Berman, D. Krass, C.W. Xu, Locating flow-intercepting facilities: new approaches and results. Ann Oper Res 60, 121–143 (1995)
O. Berman, D. Krass, C.W. Xu, Generalized flow-interception facility location models with probabilistic customer flows. Commun Stat Stoch Models 13(1), 1–25 (1997)
BMedia, History of billboard advertising (2019), https://www.bmediagroup.com/news/history-of-billboard-advertising/. Accessed 1 Oct 2022
J.R. Burt, Speech interests inherent in the location of billboards and signs: a method for unweaving the tangled web of Metromedia, Inc. v. City of San Diego. BYU Law Rev 6(4), 473 (2006), https://digitalcommons.law.byu.edu/lawreview/vol2006/iss2/4. Accessed 4 July 2021
E.T. Byrd, J. Bhadury, S.P. Troy, Wine tourism signage programs in the USA. Int J Wine Bus Res 29(4), 457–483 (2017)
Capitol, History of billboard advertising (2022), https://www.capitoloutdoor.com/history-of-billboard-advertising/. Accessed 1 Oct 2022
E. Diaz, How to choose an effective billboard location. Movia (2018), https://movia.media/moving-billboard-blog/how-to-choose-an-effective-billboard-location/. Accessed 1 Oct 2022
Fliphound, Glossary of billboard industry terminology (undated), https://fliphound.com/outdoor-billboard-advertising-ooh-industry-terms-and-definitions-glossary. Accessed 1 Oct 2022
J. Hodgson, A flow-capturing location-allocation model. Geogr Anal 22(3), 270–279 (1990)
M.J. Hodgson, Developments in flow-based location-allocation models, in Econometric Advances in Spatial Modelling and Methodology. Advanced Studies in Theoretical and Applied Econometrics, ed. by D.A. Griffith, C.G. Amrhein, J.M. Huriot, vol. 35 (Springer, Boston, MA, 1998)
J. Hodgson, O. Berman, A billboard location model. Geogr Environ Model 1(1), 25–43 (1997)
D.L. Huff, Defining and estimating a trading area. J Mark 28, 34–38 (1964)
P. Inman, Choosing an ideal billboard location: 7 things to consider. 75Media (2016), https://75media.co.uk/blog/choosing-an-ideal-billboard-location-7-things-to-consider/. Accessed 1 Oct 2022
Lendvo, Guide to renting a billboard (2019), https://www.lendvo.com/guide-renting-billboard/. Accessed 1 Oct 2022
D. Liu, D. Weng, Y. Li, J. Bao, Y. Zheng, H. Qu, Y. Wu, SmartAdP: visual analytics of large-scale taxi trajectories for selecting billboard locations. IEEE Trans Vis Comput Graph 23(1), 1–10 (2017)
R. Lotfi, Y.Z. Mehrjerdi, N. Mardani, A multi-objective and multi-product advertising billboard location model with attraction factor mathematical modeling and solutions. Int J Appl Logist 7(1), 64–86 (2017)
A. Lüer-Villagra, V. Marianov, H.A. Eiselt, G. Méndez-Vogel, The leader multipurpose shopping location problem. Eur J Oper Res 302(2), 470–481 (2022)
V. Marianov, H.A. Eiselt, A. Lüer-Villagra, Effects of multipurpose shopping trips on retail store location. Eur J Oper Res 269(2), 782–792 (2018)
OOH Today, The history of the first billboard (2018), https://oohtoday.com/the-story-of-the-first-billboard-created/. Accessed 1 Oct 2022
R. Queiroz, How much does billboard advertising cost? Dash Two (2019), https://dashtwo.com/blog/how-much-does-billboard-advertising-cost/. Accessed 1 Oct 2022
W.J. Reilly, The Law of Retail Gravitation (Knickerbocker Press, New York, 1931)
M. Schiffer, M. Schneider, G. Walther, G. Laporte, Vehicle routing and location routing with intermediate stops: a review. Transp Sci 53(2) (2019)
M. Schneider, M. Drexl, A survey of the standard location-routing problem. Ann Oper Res 259, 389–414 (2017)
C.R. Taylor, G.R. Franke, H.-K. Bang, Use and effectiveness of billboards. J Advert 35(4), 21–34 (2006)
Wall Drug (2021), https://www.walldrug.com/. Accessed 1 Oct 2022
R.T. Wilson, The role of location and visual saliency in capturing attention to outdoor advertising: how location attributes increase the likelihood for a driver to notice a billboard ad. J Advert Res 56(3), 259–273 (2016)
M. Zalesinska, The impact of the luminance, size and location of LED billboards on drivers' visual performance – laboratory tests. Accid Anal Prev 117, 439–448 (2018)

Chapter 15

Locating an Airport by Voting

Since the 1920s, airports have become an integral part of the landscape. Life today is hard to imagine without airports; not just for passengers, but also, and increasingly importantly, for freight. Following Our World in Data (undated), there were no fewer than 4.32 billion passengers in 2018. Furthermore, given the increasing trend to purchase items online, Statista (2022a) asserts that 63 million metric tons of freight were shipped by airplane, with an increasing tendency.

Given the tremendous costs of locating, relocating, or expanding an airport, this is only done in exceptional circumstances. One such case is the relocation of Hong Kong's Kai Tak airport to the formerly quaint island of Lantau at Chek Lap Kok. The new airports in Hong Kong and in Kansai near Osaka, Japan, cost US$20 billion each. In comparison, the new airport in London (Heathrow) cost US$10.5 billion, while the airport in Denver, Colorado cost US$4.8 billion (Dingman 2013).

An example of an exceedingly costly mistake is Montreal's Mirabel airport. Conceived in the early 1970s for the 1976 Olympic Games, no fewer than 98,000 acres (153 square miles!) were originally expropriated for the purpose, generating fierce opposition. Only 17,000 acres, i.e., less than 20% of this area, were actually in the operations zone. The decision to use Mirabel only for international flights, requiring passengers to make a 1-h drive to Dorval airport for connections, proved fatal. The last commercial flight into Mirabel was in 2004. The terminal building was demolished 10 years later, and part of the area is now used for commercial and aviation activities; 81,000 acres have been deeded back to their original owners. Part of this history, along with some other historical accounts, can be found on the Airport History.org (undated) website.

The tremendous costs that are incurred in the construction of an airport result from its components.
The main components are land, runways, terminals, hangars, and access roads/infrastructure. The area of an airport is—surprisingly weakly— correlated with its passenger numbers. For instance, the huge Sacramento airport comprises no less than 6000 acres and it serves close to 5 million passengers, while the much smaller San Diego airport has an area of only about 660 acres, while © Springer Nature Switzerland AG 2023 H. A. Eiselt et al., Multicriteria Location Analysis, International Series in Operations Research & Management Science 338, https://doi.org/10.1007/978-3-031-23876-5_15

15  Locating an Airport by Voting

Fig. 15.1 Lukla airport, Nepal with steep runway in the foreground (© H.A. Eiselt)

serving twice the number of passengers (see, e.g., List of airports in California 2021). Even the number of runways is not necessarily related to the number of passengers: the 10-million-passenger San Diego airport has only a single runway (making it the busiest single-runway airport), while Oakland, with close to 6 million passengers, has no less than four runways. On the other hand, the size of a terminal building usually reflects the magnitude of passenger traffic or, at least, the expectation of future traffic. For instance, the 18.4 million sq. ft. international terminal of the Dubai International airport cost $4.5 billion and served close to 90 million passengers in 2018 (Statista 2022b). In contrast, the much smaller Barcelona-El Prat airport, with a terminal size of under 6 million square feet, served close to 53 million passengers (Barcelona Airport 2022). Most runways of commercial airports are somewhere between 1 and 2 miles long. Clearly, longer runways, coupled with increased width, allow larger airplanes to land, which, in turn, is a reflection of passenger traffic. Some exceptions are "specialty airports" that serve specific clienteles or purposes. Examples are the Lukla airport (the gateway to Everest) at 9333 ft. altitude, with a runway of only 1730 ft. built at a 10° angle (see Fig. 15.1), or the airport on Saba Island (east of Puerto Rico) with its 1300 ft. runway, the shortest of any commercial airport. Airplane hangars cost about US$25 per square foot (Kompareit 2021), while a mile-long runway costs about US$12.5 million (Compass International, Inc 2022), a figure that may change dramatically with different pavement classification numbers (Skybrary undated). For a collection of individual items and their costs, readers are referred again to the aforementioned Compass International, Inc (2022) website.
It is worth mentioning that standard airport economics take construction costs as about one third and operating costs as about two thirds of the total costs (38 and 62%, to be precise; see, e.g., State of Airport Economics 2015). The specific case under consideration in this chapter is that of New Brunswick, a province in the east of Canada, bordered by Maine, U.S., in the west, Quebec in the north, and Nova Scotia in the southeast. At present, the province has a total of 39 airports/airstrips/helicopter landing pads, many for aircraft with no more than 15 passengers. These airports serve the province's total population of 780,000


people (World Population Review 2022). Among these 39 facilities are three main airports located not far from the three largest cities, Moncton, Saint John, and Fredericton, with their populations of 79,000, 71,000, and 64,000 (city only), respectively (City Population 2022). The Moncton airport was built in 1936 (there had been an earlier airport at a different site), and it played a role in World War II. The Saint John airport opened in 1952, but it had been a military airfield earlier. Finally, construction on Fredericton's airport began in 1948. Its capacity was recently increased from 200,000 passengers to 500,000 passengers in a Can$30 million expansion (Bird Construction News 2019). It is noteworthy that the three main airports in New Brunswick are 9, 17, and 15 km from the respective city centers, which is much farther than the airports of comparable cities such as Regina (3 km) and Saskatoon (4 km), and more in line with significantly larger airports such as those near Vancouver (13 km), Halifax (19.5 km), or Montreal (19 km). For many years, New Brunswickers have asked whether the present system of three airports is serving them well. Pertinent examples are found in NB Datapoints (2014) and Southerland (2020), while the quick response by the Chambers of Commerce is found in Cromwell (2021). The question of one large vs. multiple smaller airports was also discussed by two architects in SOM (2017). In it, Derek Moore states:

If the transport links work, it's clearly better to have a single airport. Otherwise, bus lines and roads all need to be duplicated. Low-cost airlines would usually prefer to fly to a smaller airport, but these are nowadays stretched to the breaking point. Also, airline alliances are constantly changing. This dynamic is better managed at a single airport. The same is true of the civil aviation authorities, which operate more effectively within a single airport.

We will address this question in the remainder of this chapter. First, consider the criteria. Many different criteria have been used in the available literature, including many engineering reports. For instance, the International Transport Forum (2017) lists economic and environmental objectives, along with challenges that include risk and uncertainty (e.g., of future demand), communication of results to the public, as well as dealing with NIMBY (not in my back yard) and PIMBY (please in my back yard) syndromes. In her work, Bahçelioğ (2014) lists topography, environmental impact, land development, obstruction and airspace, accessibility, residential impact, and meteorology as the main criteria. The author suggests the initial use of geographical information systems and the narrowing of the possible locations to maybe 4-5 sites, which are subsequently investigated in greater detail. Bollig, Inc (2020) provides an engineering report regarding a small number of possible sites for the new Karlstad (Minnesota) Municipal Airport. The criteria include the avoidance of wetlands and floodplains, culturally and architecturally significant sites, surface access, proximity of market areas, and, on the micro level, obstructions such as power lines and zoning regulations. The report also points out that not all lots and parcels are fee simple absolute (absolute ownership); there may be mineral rights, rights of way, etc., that need to be considered. In our analysis, we will consider the following five criteria.


1. Average/total distance between travelers and the airport. First and foremost, this criterion should usually be taken as a proxy for the time required to travel from a traveler's home to the airport, a distinction particularly important in larger cities. More generally speaking, distance also measures the convenience to travelers and, with it, of course, the acceptance of the airport by the travelers. Naturally, distance is a disutility that is to be minimized. However, disutility and distance are not usually proportional. We would expect the disutility caused by distance to be concave: while it makes a big difference to a passenger whether he is, say, 25 miles from the airport rather than 5, it is unlikely to make a difference whether his distance to the airport is, say, 100 or 120 miles. Other factors, such as congestion, also play a role.

2. Noise and other types of pollution affecting the population near the airport. The most obvious pollution is noise pollution. And while the intensity of sound diminishes with the square of the distance, jet engines (and, to a somewhat lesser degree, turboprop engines) emit a lot of noise. This noise is emitted not only during takeoff and landing, but also during the first and last few minutes of a flight, including the final approach. The issue is complicated further, as it is not just the fairly short noise of an aircraft during takeoff, landing, and final approach that needs to be measured. Close to an airport, we experience a constant drone (not unlike that of wind turbines) plus occasional very loud noise during the aforementioned periods. In order to measure this, a variety of measures have been developed. These include the equivalent sound level, which equates the ups and downs of noise levels to the same sound energy of a constant noise. The day-night average sound level is a cumulative measure in which noise during the night is weighted more heavily. A good description of the issue is found in Federal Aviation Administration (2020); see also Purdue.edu (2000). To simplify matters, we will ignore one important aspect of noise pollution: noise pollution does not occur more or less evenly; it is concentrated at the airport itself, but also directly underneath the typical flight paths in and out of the airport.

3. The third criterion concerns opposition to the project. Here, we do not refer to those segments of the population who oppose any construction project, but to those segments of the population that are actually affected by it. In the case of relocating a large facility such as an airport, this will include hotel and restaurant owners close to the airport, whose facilities will now no longer be used, restaurants in the old airport's vicinity, and employees who worked at the old facility and now have to either relocate as well or search for different employment. On the other hand, one should not expect much approval from potential hotelkeepers and restaurant owners or prospective future employees at the site of a new airport, as presently, there are no capacities available at the site in question.

4. Another important criterion concerns the cost of building an airport. As alluded to above, there are many components, and the costs vary tremendously, depending on the location of the airport (the cost of real estate, expropriation, lawsuits, etc.) as well as the type of aircraft that will use the airport.

5. Finally, the cost of operating a terminal (or, in the case of smaller airports, an area) associated with a specific airline must be considered. These costs are to be


Table 15.1 Operating costs, number of passengers, and number of destinations for various Canadian airports

Airport             Operating expenses   # of         Can$ per    Number of
                    (in Can$000)         passengers   passenger   destinations*
Calgary                229,081           17,196,644     13.32        86
Charlottetown            9,325              317,827     29.33         4
Fredericton, NB          8,952              316,888     28.25         6
Halifax, NS            102,879            4,128,243     24.92        21
Moncton, NB             17,300              674,406     25.65         8
Montreal, QC           609,250           19,563,661     31.14       143
Ottawa, ON             101,838            5,110,801     19.93        32
Prince George, BC        7,813              496,714     15.73        10
Red Deer, AB             1,653               25,819     64.02         0
Regina, SK              16,698            1,179,485     14.16        17
Saint John, NB           5,990              281,069     21.31         4
Sudbury, ON              8,168              230,233     35.48         4
Thunder Bay, ON         11,086              833,105     13.31         8
Toronto, ON          1,086,148           49,182,691     22.08       183
Vancouver, BC          448,386           26,380,000     17.00       108
Winnipeg, MB            99,691            4,273,893     23.33        37
Saskatoon, SK           25,142            1,490,000     16.87        15
St. John's, NL          45,943            1,500,000     30.63        22

* Number of destinations via the website Flightsfrom.com (2022)

absorbed by the airline, and it will be a concern mostly to them. In the end, these costs will, of course, be passed on to the passengers and thus be a concern to them as well. Another apparent criterion is the number of destinations. As is apparent in Table 15.1 even by visual inspection, there is a strong correlation between the size of an airport (measured by the number of passengers passing through) and the number of destinations an airport offers. Clearly, an airport that offers many (direct) destinations is more convenient than one that offers only a few choices, as this may reduce the number of layovers, thus shortening the duration of the trip. Purely from a passenger's point of view, there is thus a tradeoff: a bigger airport, typically farther away and thus less attractive, may offer more destinations, which makes it again more attractive to travelers.


Due to the strong correlation between the number of destinations and the number of passengers, and in order to avoid double counting, we have chosen not to include the number of destinations as one of the criteria. Finally, we need to identify the different stakeholders in the decision-making process. A good summary of all those who are affected by airport operations can be found in Schaar and Sherry (2010), which is used and further discussed by Omondi and Kimutai (2018). The main groups, along with the bodies that represent them, are

• the population in the state or province (via the political parties, special interest groups, etc.),
• the communities and businesses near the airport (via individual political representatives and chambers of commerce),
• potential travelers in the state or province, and
• the airlines.

Each of these groups has its own concerns and preferences. The state or airport authority is mostly interested in the construction cost, as most airports are publicly owned. The communities in the airport's vicinity will be the ones whose businesses benefit from the airport and who will be the ones affected by the pollution inherent in the construction and operation of an airport. Potential travelers in the catchment area (not necessarily just the state or province, in the case of airports close to the border of the state) will want to have an airport reasonably close to their respective homes, and the airlines will be mostly interested in the construction and operational cost associated with the area in the airport that they rent or own.

15.1  Making It Work

In order to analyze the situation, we first need demographic data. All cities, towns, and villages above 5000 people are included, which comprises 18 places (City Population 2022). However, quite a few of these towns are in close vicinity to larger agglomerations, so we have aggregated them. More specifically, Moncton now also includes Dieppe, Riverview, Shediac, and Memramcook; Saint John includes Quispamsis, Rothesay, and Grand Bay/Westfield; and Fredericton includes Oromocto. We should note here that these 18 places cover a population of 392,813 out of 781,315 people, i.e., 50.28% (World Population Review 2022). The results are the ten places shown as the rows in Table 15.2, along with their (aggregated) populations, as well as the distances between these places and the existing airports in Moncton, Saint John, and Fredericton and a potential new airport near Sussex. The new site near Sussex has been chosen on the basis of the availability of land in the area (700 acres for Can$1.5 million) and the centrality of the location. Figure 15.2 shows the three existing airports and the approximate location of the potential new airport near Sussex as red dots. The location of the potential new airport has been chosen as a site that is located approximately 25 miles southeast of the center-of-gravity calculated on the basis of the ten towns in Table 15.2 (using ℓ2 distances, of course).


Table 15.2 Largest ten population agglomerations in New Brunswick and their distances (in km) to existing and potential airports

Place          Population   Moncton   Saint John   Fredericton   Sussex
Moncton          140,977        10        147          162          69
Saint John       107,540       165         25           96          85
Fredericton       73,997       189        136           15         126
Miramichi         17,787       139        268          187         188
Edmundston        16,841       453        400          286         391
Bathurst          12,172       213        341          270         262
Campbellton        6,915       314        443          372         363
Sackville          5,808        41        195          210         117
Woodstock          5,517       282        229          115         220
Grand Falls        5,259       391        339          224         329

Fig. 15.2 Existing and potential airports in New Brunswick (Source: Statistics Canada, Boundary Files, 2016 Census. Statistics Canada)


Given the setting shown above, we now have the following five potential decisions:

Decision 1: The present arrangement with three regional airports, with some possibly necessary expansions.
Decision 2: A single new airport near the center of the main demand points.
Decision 3: Expand the Moncton airport and close the other two existing airports.
Decision 4: Expand the Saint John airport and close the other two existing airports.
Decision 5: Expand the Fredericton airport and close the other two existing airports.

The first option is essentially the status quo, whereas the other options are new, each of them carrying the potential of significant opposition. Consider now the criteria that are applied in this chapter. The first criterion concerns the (average) distance between potential passengers and their closest airport. The quantitative measure that will be applied here is the weighted passenger location-to-airport distance. For Decision 1, i.e., the status quo, we first need to allocate the demand points to the three existing airports. Simple visual inspection reveals that the airport at Moncton will serve customers in Moncton, Miramichi, Bathurst, Campbellton, and Sackville, the Saint John airport serves just the city itself (and those places aggregated with it), while Fredericton serves customers in Fredericton, Edmundston, Woodstock, and Grand Falls. We will assume that the likelihood that a customer goes on a trip is independent of the customer's location. This implies that the number of prospective passengers is proportional to the population in these places. For simplicity, we will simply use the population itself, rather than making assumptions about the propensity of someone taking a trip. The weighted mileages to the three airports are then 9,776,272, 2,688,500, and 7,738,952, for a total of 20,203,724 km. It is apparent that the mileages of all other decisions will be higher.
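Although the chapter's figures were presumably computed by hand or spreadsheet, the population-weighted distances for the single-airport options can be reproduced from Table 15.2 with a few lines of code. The following Python sketch (not part of the original analysis; place names and variable names are ours) recomputes the totals for the Sussex, Moncton, and Fredericton options:

```python
# Population-weighted travel distances (km) for the single-airport
# decisions, computed from the data in Table 15.2.
populations = {
    "Moncton": 140_977, "Saint John": 107_540, "Fredericton": 73_997,
    "Miramichi": 17_787, "Edmundston": 16_841, "Bathurst": 12_172,
    "Campbellton": 6_915, "Sackville": 5_808, "Woodstock": 5_517,
    "Grand Falls": 5_259,
}

# Distances (km) from each place (in the order above) to a candidate site.
dist = {
    "Sussex":      [69, 85, 126, 188, 391, 262, 363, 117, 220, 329],
    "Moncton":     [10, 165, 189, 139, 453, 213, 314, 41, 282, 391],
    "Fredericton": [162, 96, 15, 187, 286, 270, 372, 210, 115, 224],
}

def weighted_distance(site: str) -> int:
    """Total population-weighted distance if `site` were the only airport."""
    return sum(p * d for p, d in zip(populations.values(), dist[site]))

print(weighted_distance("Sussex"))  # -> 47443418
```

The Moncton and Fredericton totals computed this way likewise reproduce the 51,854,806 km and 51,305,735 km reported below.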
As a matter of fact, in the case of Decision 2 (a single new airport near Sussex), everybody will have to travel farther, which, with respect to this criterion alone, renders this a dominated decision. More specifically, the total weighted distance in the case of Decision 2 is 47,443,418 km. The three remaining decisions feature weighted distances of 51,854,806 km (Moncton), 56,371,439 km (Saint John), and 51,305,735 km (Fredericton). The second criterion concerns pollution in general and noise in particular. Jet aircraft during takeoff generate 150 decibels of noise at 25 m (Purdue Chemistry Department undated); note that decibels are a logarithmic scale, so that, say, 150 dB corresponds to ten times the sound intensity of 140 dB. In comparison, daytime noise in a residential area comes in at about 50 dB. In addition, exhaust from (partially) burnt fuel comprises chemicals such as sulfur dioxide, nitrogen oxide, benzpyrene, and others, potentially resulting in asthma, emphysema, bronchitis, lung cancer, and other diseases. Cohen (2018) reports that a distance of 6 miles from an airport is reasonably safe, but being directly under a flight path makes a big difference. More cautious sites such as Aviation Justice (2011) advocate 25 miles. For the purpose of this study, we use a circle with a 20-mile radius, even though tear-drop shapes would probably be more appropriate, given the pollution dispersion based on prevailing winds (somewhat reminiscent of Gaussian plumes). In order to account for the fact that larger airports


Fig. 15.3 Existing and proposed sites with 20-mile radius around them (Source: Statistics Canada, 2016 Census, Geo Suite (numeric) & Census & Dissemination Block Boundary File (geospatial))

Table 15.3 Population within 20-mile radius of existing and proposed airport sites

Airport site       Population
Fredericton           114,248
Moncton               173,318
Saint John            124,237
Sussex/Newtown         22,111

produce larger emissions, we will use the number of flights out of an airport (which is mostly what causes pollution) as a weight that is multiplied by the size of the population within the 20-mile radius. For that purpose, we use ArcGIS to determine the population within a 20-mile radius around the existing and proposed sites. This is done on the basis of dissemination blocks, and we have chosen to include all those blocks that are fully or partially included in the circle (the difference compared with including only fully covered dissemination blocks is minor). The areas within a 20-mile radius of the three existing sites and the proposed site are shown in Fig. 15.3, where the colored polygons in the background are the dissemination blocks. Given the calculations on the basis of Fig. 15.3, Table 15.3 lists the population within a 20-mile radius of the existing and proposed airport sites. In order to determine not only how many people are negatively affected by the pollution of an airport but also the degree to which they are affected, we use the number of flights out of an airport, as they are the root cause of the pollution: the


noise from aircraft taking off and landing, as well as the traffic to and from the airport (even though the latter is probably more accurately measured by the number of passengers). The number of daily flights out of Moncton is 12, Saint John 6, and Fredericton 9 (these numbers were taken during the pandemic in 2021, so they are not representative of the actual traffic; however, we may expect the ratios between them to be reasonably accurate). We then use these numbers as weights, so that the present solution with three airports has a pollution figure of (173,318)(12) + (124,237)(6) + (114,248)(9) = 3,853,470. For the solution with Moncton as the only airport, the pollution figure is (173,318)(27) = 4,679,586, given that we assume that all 27 flights now leave out of Moncton. Similar figures for Saint John and Fredericton are 3,354,399 and 3,084,696, respectively. If a new airport in Sussex were to be constructed, its pollution figure would be 596,997. The third criterion deals with the general opposition to the project. The main components included in this criterion are the facilities that have to be relocated and the employees who have to either relocate or find alternative employment. The facilities in question include hotels that serve predominantly passengers with long layovers, restaurants in or close to the airport, and similar facilities that serve travelers. Hotels for business travelers are typically in town, and they may exist regardless of whether the town has an airport. A long-range question, which cannot be addressed here, is whether or not a town without an airport will actually be used for, say, conferences or other business meetings. A similar issue concerns the employees, including airline clerks, maintenance workers, security personnel, baggage handlers, as well as employees who work in the hospitality industry that serves the airport and its travelers.
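Returning briefly to the second criterion, the flight-weighted pollution figures above are simple products of the populations in Table 15.3 and the daily flight counts. A small sketch (not from the original text; names are ours):

```python
# Flight-weighted pollution scores: population within a 20-mile radius
# (Table 15.3) multiplied by the number of daily departures.
pop_20mi = {"Fredericton": 114_248, "Moncton": 173_318,
            "Saint John": 124_237, "Sussex": 22_111}
daily_flights = {"Moncton": 12, "Saint John": 6, "Fredericton": 9}
total_flights = sum(daily_flights.values())  # 27 flights if consolidated

# Status quo: each existing airport keeps its own flights.
status_quo = sum(pop_20mi[a] * f for a, f in daily_flights.items())
# Single-airport options: all 27 flights leave from one site.
single = {a: pop_20mi[a] * total_flights for a in pop_20mi}

print(status_quo)        # -> 3853470
print(single["Sussex"])  # -> 596997
```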
It is apparent that the distinction between facilities and jobs that are directly related to the airport's existence and those that would exist even if the airport no longer existed is fuzzy. The fourth criterion concerns the construction costs. These costs depend on the individual case, and there is no way for us to find out what they actually may be. One possibility is to use qualitative assessments. However, knowing that Fredericton's recent capacity increase from 200,000 to 500,000 passengers cost Can$30 million, we will take this expansion by 300,000 passengers as a datum for the cost of expansions as well as for new construction. At present (see Table 15.1), the three airports in New Brunswick are used by 700,000, 300,000, and 500,000 passengers, respectively, for a total of 1.5 million passengers. Since the construction is designed for future use, we will plan for 2 million passengers. Construction of a new airport with that capacity can then be assessed at Can$200 million, the status quo will be Can$50 million, while individual expansions of each of the existing airports cost $130, $170, and $150 million, respectively. The fifth and final criterion concerns the operating costs of the airport and the airlines. The costs of a terminal or part of a terminal are borne by the airline(s) that use(s) it and are one of the major expense items of the airlines. Connecting features such as DTW's (Detroit Metropolitan Wayne County Airport's) "light tunnel" are also part of this cost component.
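The construction-cost figures above follow directly from the Fredericton datum of Can$30 million per 300,000 passengers, i.e., Can$100 per passenger of added capacity. A quick arithmetic check (an illustrative sketch; the decision labels are ours):

```python
# Construction/expansion costs at Can$100 per passenger of added capacity,
# derived from the Fredericton datum (Can$30M for 300,000 passengers).
COST_PER_PAX = 30_000_000 / 300_000   # Can$100 per passenger
TARGET = 2_000_000                    # planned capacity (passengers)

current_capacity = {                  # present passenger volumes/capacities
    "As is (three airports)": 1_500_000,
    "Sussex (new)": 0,
    "Moncton": 700_000,
    "Saint John": 300_000,
    "Fredericton": 500_000,
}

cost_millions = {k: (TARGET - v) * COST_PER_PAX / 1e6
                 for k, v in current_capacity.items()}
print(cost_millions)
# -> {'As is (three airports)': 50.0, 'Sussex (new)': 200.0,
#     'Moncton': 130.0, 'Saint John': 170.0, 'Fredericton': 150.0}
```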


Table 15.4 The attribute matrix

Decisions          Distances    Pollution   General opposition   Construction cost
d1: As is          20,203,724   3,853,470          0                    50
d2: Sussex         47,443,418     596,997         10                   200
d3: Moncton        51,854,806   4,679,586          1                   130
d4: Saint John     56,371,439   3,354,399          2                   170
d5: Fredericton    51,305,735   3,084,696          1.5                 150

Table 15.5 The utility matrix

Decisions          Distances   Pollution   General opposition   Construction cost
d1: As is           1            .2024            1                   1
d2: Sussex           .2469      1                 0                   0
d3: Moncton          .1249       0                 .9                  .4667
d4: Saint John      0            .3246             .8                  .2
d5: Fredericton      .1401       .3907             .85                 .3333
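Since all four criteria in Table 15.4 are of the "minimization" variety, each column of Table 15.5 is the usual min-max normalization, with utility 1 for the best (smallest) value and 0 for the worst. A sketch of this transformation (illustrative only; names are ours):

```python
# Min-max normalization of the attribute matrix (all criteria minimized):
# utility = (max - value) / (max - min), column by column.
attributes = {   # columns of Table 15.4; rows are d1..d5
    "distances":    [20_203_724, 47_443_418, 51_854_806, 56_371_439, 51_305_735],
    "pollution":    [3_853_470, 596_997, 4_679_586, 3_354_399, 3_084_696],
    "opposition":   [0, 10, 1, 2, 1.5],
    "construction": [50, 200, 130, 170, 150],
}

utilities = {
    crit: [(max(col) - x) / (max(col) - min(col)) for x in col]
    for crit, col in attributes.items()
}
print([round(u, 4) for u in utilities["distances"]])
# -> [1.0, 0.2469, 0.1249, 0.0, 0.1401]
```

Rounding to four decimals reproduces every entry of Table 15.5.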

Other criteria can be thought of as well. One such criterion, particularly important for travelers, is the number of destinations that can be reached from an airport via direct flights. Our tests revealed a strong correlation between the number of passengers and the number of destinations offered by an airport. In our study, we have chosen not to include this criterion. The reason is that in the present situation, the airports in Moncton, Saint John, and Fredericton have 8, 4, and 6 direct destinations, respectively. If we were to combine the airports into a single airport, there is no reason that the new single airport should have any more or fewer destinations than the union of the presently existing airports. This argument holds unless the single (new or existing) airport assumes some hub functions or increases service due to long-term changes of demand, in which case additional destinations may be introduced. The above discussion is summarized in Table 15.4, which includes the distances, pollution, and construction costs as calculated above, and the general opposition as expressed on a 1-10 scale. Note that all criteria are of the "minimization" variety. Before considering the main stakeholders in the process, we normalize the attribute matrix in the usual way, so that each column has a maximum utility of 1 and a minimum utility of 0. The utility matrix is shown in Table 15.5. Consider now the main stakeholders in the location process and their main concerns. The general population (or its government as speaker for the population at large) is mostly concerned about the construction costs that will be incurred now, and moderately interested in the potential or already existing benefits of airports as generators of increased economic activity, partly now and partly in the future. The local communities are mostly interested in the immediate effects of the new or existing airport(s) on their lives: pollution and jobs are the main concerns on


Table 15.6 Weights associated to criteria by stakeholders

Stakeholders        Distances   Pollution   General opposition   Construction cost
State/population      .05         .15             .30                  .50
Local communities     .05         .60             .30                  .05
Travelers             .70         .10             .10                  .10

Table 15.7 Assessment of the decisions by the major stakeholders

Decision           State/population   Local communities   Travelers   Airlines
d1: As is               .96012             .14168           .92024      .7
d2: Sussex              .062345            .7               .29752     0
d3: Moncton             .469605            .27              .14659     1.0
d4: Saint John          .25623             .46722           .05246      .9
d5: Fredericton         .37818             .52849           .18448      .95

their agenda. Finally, travelers will care a lot about the time it takes to get from their home to their chosen destination. Part of that is determined by the distance to the airport, another part by the number of direct connections an airport offers, which was already discussed above. These weights are summarized in Table 15.6. The airlines have a special place in this context. Their interests coincide with none of the above criteria but are related to their own expenditures for a terminal of their own and similar expenditures. Unable to assign specific costs to these expenses, we assume that the airlines favor the already existing Moncton airport (the largest of the existing airports) slightly over the existing airport in Fredericton, which, in turn, is slightly preferred over Saint John. All of these options (which require a service area in only a single airport) are preferred over the present solution that requires the maintenance of three separate service areas. All of these solutions are preferred over a newly to be constructed service area in Sussex. Thus, the chosen utilities for the status quo, Sussex, Moncton, Fredericton, and Saint John are 7, 0, 10, 9.5, and 9, respectively. For the first three decision makers, we calculate the weighted average utility (based on the decision maker's assessment of the importance of a criterion) for each of the five potential decisions. The results, together with the independent assessment of the decisions by the airlines, are summarized in Table 15.7. These assessments result in the following preference structures:

State/population: d1 ≻ d3 ≻ d5 ≻ d4 ≻ d2,
Local communities: d2 ≻ d5 ≻ d4 ≻ d3 ≻ d1,
Travelers: d1 ≻ d2 ≻ d5 ≻ d3 ≻ d4, and
Airlines: d3 ≻ d5 ≻ d4 ≻ d1 ≻ d2.

While not necessary, it is always advisable to find some consensus and use it in the decision-making process. Here, we see that it is universally agreed that d5 ≻ d4, so


Table 15.8 Votes resulting from pairwise comparisons

        d1     d2     d3     d5
d1      –      3:1    2:2    2:2
d2      1:3    –      2:2    2:2
d3      2:2    2:2    –      2:2
d5      2:2    2:2    2:2    –

(An entry gives the votes for the row decision vs. those for the column decision.)

Table 15.9 Borda count of the groups of stakeholders

Decision           State/population   Local communities   Travelers   Airlines   Sum
d1: As is                 4                  0                 4           1        9
d2: Sussex                0                  4                 3           0        7
d3: Moncton               3                  1                 1           4        9
d4: Saint John            1                  2                 0           2        5
d5: Fredericton           2                  3                 2           3       10
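The Borda sums in Table 15.9 follow mechanically from the four stakeholder preference orders derived above. A sketch (illustrative only; with m = 5 decisions, the top-ranked decision earns 4 points):

```python
# Borda count: with m = 5 decisions, a stakeholder's top choice earns
# m - 1 = 4 points, the second choice 3 points, and so on.
rankings = {   # preference orders of the four stakeholder groups
    "state":     ["d1", "d3", "d5", "d4", "d2"],
    "local":     ["d2", "d5", "d4", "d3", "d1"],
    "travelers": ["d1", "d2", "d5", "d3", "d4"],
    "airlines":  ["d3", "d5", "d4", "d1", "d2"],
}

m = 5
borda = {d: 0 for d in ["d1", "d2", "d3", "d4", "d5"]}
for order in rankings.values():
    for position, decision in enumerate(order):
        borda[decision] += m - 1 - position

print(borda)  # -> {'d1': 9, 'd2': 7, 'd3': 9, 'd4': 5, 'd5': 10}
```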

that we can eliminate decision d4. (Note that we could have eliminated decision d4 earlier, as it can be seen to be dominated by d5 in Table 15.5.) At this point, we may wish to determine whether there is some additional consensus hidden in the problem. For that purpose, we may see if there is a plurality for any pairwise decision. Table 15.8 shows the votes that result from pairwise comparisons. It is apparent that there are no preferences other than a plurality for d1 ≻ d2. We should note that the Copeland method, based on the much earlier work by Ramon Llull, shows its weakness here due to the many ties. For details on voting procedures, readers are referred to Sect. 5.3 of this volume. Instead, we may opt to calculate the Borda count (for an account of its history, see Borda Count 2022) for the remaining decisions. Recall that, given m (here m = 5 in the original setting) decisions, the method allocates m - 1 points to a decision maker's top-ranked decision, m - 2 points to its second-ranked decision, and so forth. The Borda counts of the individual groups of stakeholders regarding the five decisions are shown in Table 15.9. This method, somewhat surprisingly, has Fredericton as the top choice, albeit by a small margin. Probably more meaningful is to leave a short list of the choices d1, d3, and d5. It is important to note that the option of a new airport in or around Sussex is not included in the short list. Another approach could use the concept of a weighted majority. We could start with a situation in which all decision makers carry the same weight, i.e., have the same importance. The calculation can be made on the basis of Table 15.7 by taking the simple (unweighted) averages in the rows. The results are .6805, .2650, .4715, .4185, and .5103, i.e., a clear vote for the status quo, with the option of a single new airport receiving an extremely low score.
Rather than taking an unweighted average of the individual assessments of the decisions by the stakeholders, we could assign differential weights to them, based on their relative importance. For instance, if all stakeholders had the same weight except for the local communities, which were


considered to be twice as important as anyone else (i.e., a weight vector of w = [.2, .4, .2, .2]), we obtain values of .5727, .3520, .4312, .4286, and .5139. Again, the status quo comes up first, with Fredericton being second. Finally, we would like to demonstrate the potential use of approval voting in this context. Since we do not know what approvals the individual decision makers would have expressed, we define a threshold: if a decision maker's weighted utility for an option exceeds this threshold, we assume that he approves of this option. In order to remove at least some of the arbitrariness of the choice of the threshold, we can vary it parametrically. Starting with a threshold α = 0.3, we obtain approvals for the following decisions:

State/population: d1, d3, d5,
Local communities: d2, d4, d5,
Travelers: d1, and
Airlines: d1, d3, d4, d5.

This results in approval votes of 3, 1, 2, 2, and 3 for the five respective decisions, which has decisions d1 and d5 coming out on top. Changing the threshold to a stricter value of α = 0.4, we obtain (very similar) approvals for the following decisions:

State/population: d1, d3,
Local communities: d2, d4, d5,
Travelers: d1, and
Airlines: d1, d3, d4, d5,

which results in approval votes of 3, 1, 2, 2, and 2, so that now decision d1 is the sole winner. Strengthening the threshold further to α = 0.5, the approvals are now

State/population: d1,
Local communities: d2, d5,
Travelers: d1, and
Airlines: d1, d3, d4, d5,

so that we obtain approval votes of 3, 1, 1, 1, and 2. Again, decision d1 is the sole winner, with d5 coming in second. In other words, this solution is the same as that of the weighted majority decision explored earlier. It is also similar to other solutions, such as the Borda count, and as such, it makes a fairly strong point for the status quo.
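The parametric approval-voting exercise can be sketched in a few lines (illustrative only; the utilities are the rows of Table 15.7, and a stakeholder is taken to approve of a decision whenever his utility for it strictly exceeds the threshold α):

```python
# Approval voting with a parametric threshold alpha: count, for each
# decision d1..d5, the stakeholders whose weighted utility exceeds alpha.
utilities = {   # rows of Table 15.7
    "state":     [0.96012, 0.062345, 0.469605, 0.25623, 0.37818],
    "local":     [0.14168, 0.7, 0.27, 0.46722, 0.52849],
    "travelers": [0.92024, 0.29752, 0.14659, 0.05246, 0.18448],
    "airlines":  [0.7, 0.0, 1.0, 0.9, 0.95],
}

def approval_votes(alpha: float) -> list[int]:
    """Number of approving stakeholders for each of d1..d5."""
    return [sum(u[i] > alpha for u in utilities.values()) for i in range(5)]

for alpha in (0.3, 0.4, 0.5):
    print(alpha, approval_votes(alpha))
# -> 0.3 [3, 1, 2, 2, 3]
#    0.4 [3, 1, 2, 2, 2]
#    0.5 [3, 1, 1, 1, 2]
```

Varying α over a range, rather than picking a single value, is what removes the arbitrariness noted in the text.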

References

Airport History.org, Montreal-Mirabel International Airport: Part 1: a grand vision (undated), https://www.airporthistory.org/mirabel-1.html. Accessed 2 Oct 2022
Aviation Justice, Aviation and air pollution (2011), https://aviationjustice.org/impact/aviation-and-air-pollution/. Accessed 2 Oct 2022
İ. Bahçelioğ, Airport site selection (2014), https://www.academia.edu/11118936/Airport_Site_Selection. Accessed 2 Oct 2022
Barcelona Airport, Statistics for Barcelona Airport (2022), https://barcelonaairport.com/statistics/. Accessed 2 Oct 2022
Bird Construction News, $30-million Fredericton International Airport expansion project underway (2019), https://www.bird.ca/who-we-are/news/2019/07/20/30-million-fredericton-international-airport-expansion-project-underway. Accessed 2 Oct 2022
Bollig, Inc, Karlstad Municipal Airport (23D) (2020), cityofkarlstad.com/files/AirportSiteStudy.pdf. Accessed 2 Oct 2022
Borda Count (2022), https://en.wikipedia.org/wiki/Borda_count. Accessed 13 Oct 2022
City Population, Canada: New Brunswick (2022), https://www.citypopulation.de/en/canada/cities/newbrunswick/. Accessed 2 Oct 2022
H. Cohen, A safe distance from the airport. Baseline of Health Foundation (2018), https://www.jonbarron.org/detoxing-full-body-detox/safe-distance-airport. Accessed 2 Oct 2022
Compass International, Inc, US regional airport cost model benchmarks (2022), https://compassinternational.net/us-regional-airport-cost-model-benchmarks/. Accessed 2 Oct 2022
A. Cromwell, Chambers of Commerce in N.B.'s 3 major cities rally against idea of 1 regional airport (2021), https://globalnews.ca/news/7608097/nb-reject-regional-airport-for-province/. Accessed 2 Oct 2022
L. Dingman, 10 most expensive airports ever built in the world. The Richest (2013), https://www.therichest.com/luxury-architecture/10-most-expensive-airports-ever-built-in-the-world/. Accessed 2 Oct 2022
Federal Aviation Administration, Fundamentals of noise and sound (2020), https://www.faa.gov/regulations_policies/policy_guidance/noise/basics/. Accessed 2 Oct 2022
FlightsFrom.com, All routes and scheduled flights from every airport (2022), https://www.flightsfrom.com/. Accessed 2 Oct 2022
International Transport Forum, Airport site selection. OECD/ITF 2017 (2017), https://www.itf-oecd.org/sites/default/files/docs/airport-site-selection.pdf. Accessed 2 Oct 2022
Kompareit, Compare steel building prices today & save an average of 27% (2021), https://www.kompareit.com/business/steel-buildings-airplane-hangar-cost.html. Accessed 2 Oct 2022
List of Airports in California (2021), https://en.wikipedia.org/wiki/List_of_airports_in_California. Accessed 2 Oct 2022
NB Datapoints, Needs versus wants: 'International' airports in New Brunswick (2014), http://nbdatapoints.ca/needs-versus-wants-international-airports-in-nb/. Accessed 2 Oct 2022
N.N. Omondi, G. Kimutai, Stakeholder engagement conflicts and implementation of expansion and modernisation projects at Jomo Kenyatta International Airport in Nairobi, Kenya. Int Acad J Inform Sci Proj Manag 3(2), 12–36 (2018), http://www.iajournals.org/articles/iajispm_v3_i2_12_36.pdf. Accessed 2 Oct 2022
Our World in Data (undated), https://ourworldindata.org/grapher/number-airline-passengers?tab=table. Accessed 2 Oct 2022
Purdue.edu, Noise sources and their effects (2000), https://www.chem.purdue.edu/chemsafety/Training/PPETrain/dblevels.htm. Accessed 2 Oct 2022
Purdue Chemistry Department (undated), https://www.chem.purdue.edu/chemsafety/Training/PPETrain/dblevels.htm. Accessed 20 Jan 2023
D. Schaar, L. Sherry, Analysis of airport stakeholders, in Proceedings of the Conference on Integrated Communications Navigation and Surveillance (ICNS) 2010 (2010), https://catsr.vse.gmu.edu/pubs/ICNS_Schaar_AirportStakeholders.pdf. Accessed 2 Oct 2022
Skybrary, Pavement classification number (PCN) (undated), https://skybrary.aero/index.php/Pavement_Classification_Number_(PCN). Accessed 2 Oct 2022
SOM, Building a better airport (2017), https://som.medium.com/building-a-better-airport-92ff32e826f3. Accessed 2 Oct 2022
M. Southerland, Higgs puts airports back on the table, and reaction has been swift (2020), https://www.cbc.ca/news/canada/new-brunswick/nb-airports-study-higgs-1.5806735. Accessed 2 Oct 2022
State of Airport Economics (2015), https://www.icao.int/sustainability/Airport_Economics/State of Airport Economics.pdf. Accessed 2 Oct 2022
Statista, Worldwide air freight traffic from 2004 to 2022 (2022a), https://www.statista.com/statistics/564668/worldwide-air-cargo-traffic/. Accessed 2 Oct 2022
Statista, Total number of passengers at Dubai International Airport in the Emirate of Dubai from 2013 to 2018 (2022b), https://www.statista.com/statistics/641255/dubai-total-passengers-at-dubai-international-airport/. Accessed 2 Oct 2022
World Population Review (2022), https://worldpopulationreview.com/canadian-provinces/new-brunswick-population. Accessed 2 Oct 2022

Part III

Practitioner Perspective

Chapter 16

How Location Works in Practice: A Perspective from the United States

16.1 Introduction

Location and/or relocation of facilities is a strategic decision that involves long-term planning and, as such, considerable effort and lead time. Academic researchers have devoted a considerable amount of attention to locational modeling, and there are at least 3500 publications on the subject; see Domschke and Drexl (1985) and Hale and Moberg (2003). The focus of this chapter, however, is on the practice of locational modeling and a survey of the practitioner literature on the subject. In practice, the term commonly used by industry for location decisions is "site selection."

Although companies in the United States make scores of location decisions annually, decisions made by major corporations often attract significant attention. A case in point is the recent decision by Amazon to open its second headquarters in the United States, outside of Seattle. Since its announcement by Amazon in 2017, this has set off an intense competition among states lobbying to land the project and has garnered a significant amount of publicity. For example, research done for this chapter indicates that the leading national newspapers, such as the New York Times, Wall Street Journal, USA Today, and Washington Post, published over two dozen articles covering this issue in a matter of months. The significance of this location decision is evident from the fact that, as of the October 2017 deadline, 41 US states, southern Canada, Mexico, and Puerto Rico had submitted a combined 238 proposals for evaluation by Amazon. The primary driver of such interest by states is the potential economic impact of such a location, which is expected to generate in excess of 50,000 jobs and US$5 billion in

This chapter was co-written by M.S. Canbolat and M.L. Waite, School of Business and Management, State University of New York—The College at Brockport, Brockport, New York, USA. Their contribution is much appreciated. © Springer Nature Switzerland AG 2023 H. A. Eiselt et al., Multicriteria Location Analysis, International Series in Operations Research & Management Science 338, https://doi.org/10.1007/978-3-031-23876-5_16

investment that Amazon has said it plans to make in its second headquarters. A similar example of another recent high-profile location reported in the media is the plan by the Taiwanese company FoxConn, one of the world's largest electronics manufacturers, to locate its first manufacturing facility in the United States, and its final decision to do so in Wisconsin.

While these may be two recent and extraordinarily prominent cases of location decisions that have attracted widespread national attention, many other examples exist of location decisions by corporations that have had a significant long-term economic impact on a region. One example is the well-known case of Mercedes-Benz locating its manufacturing facility in Tuscaloosa, Alabama in 1993. According to Alabama government reports, Mercedes-Benz today "is responsible for more than 24,500 direct and indirect jobs in the region, with an estimated US$1.5 billion annual economic impact on the state. It is consistently Alabama's largest exporter, and in 2015 alone, shipped more than US$5 billion in products to 135 markets around the globe." For Alabama, this location by Mercedes-Benz was merely the start of the auto industry in that state; as of the writing of this chapter, other auto manufacturers such as Honda, Hyundai, and Toyota, along with hundreds of suppliers, have set up shop in the state.

Another significant location was that by the auto manufacturer BMW, which in 1994 opened a manufacturing facility in Spartanburg County, South Carolina. The US$2.2 billion plant, which presently employs 8800, is part of the company's global five-plant production network. Two decades in, this location has been significant for the state economy. An economic impact study by the University of South Carolina begins with the statement "It is now clear that BMW has had a potent, enduring effect on the state's economy." The same report found that BMW's annual economic output amounts to US$16.6 billion, and that the company has created and supports 30,777 jobs directly and indirectly in South Carolina, generating US$1.8 billion in labor income that otherwise would not have existed. The authors of the study estimate that the "total impact of BMW on South Carolina's value-added (similar to state gross domestic product) amounts to US$2.8 billion."

Outside of the United States, Mexico has attracted more than US$11 billion in automotive-related investments since 2012, mostly in central Mexico. A Toyota assembly plant, Ford engine and transmission plants, and a BMW assembly plant are among the many. According to an official statement from BMW, "international trade agreements were a decisive factor" in planting its flag in the country.

Outside of the auto manufacturing industry, locations by service-oriented companies can have a similarly long-term positive impact on the economy. A case in point is the decision by UPS, one of the world's leaders in freight services, to relocate its headquarters from Connecticut to Atlanta, Georgia in 1991. According to estimates provided by the state government, UPS now employs in excess of 14,000 Georgia citizens in package delivery operations, ground freight, aircraft operations, data center management, and contract logistics. In addition, higher-end jobs related to product and technology development, network planning, and other corporate roles are also based in metro Atlanta. Beyond UPS itself, this move essentially established Atlanta as the logistics hub of the southern USA, which now
has attracted other top logistics and transportation companies such as C.H. Robinson, J.B. Hunt, Schneider Logistics, and DHL Supply Chain, to name a few, all of which have located facilities there. This agglomeration of logistics and transportation companies has had spillover effects on other modes of transportation beyond trucking: from the most extensive railroad network in the southeastern USA (CSX Railroad, one of the largest companies in the sector, has numerous facilities in and around Atlanta), to Hartsfield-Jackson International Airport for airfreight, currently the busiest airport in the USA (Delta Air Lines has its headquarters in Atlanta), to the Port of Savannah, presently the fourth largest port in the USA. It has even spawned one of the biggest clusters of companies related to information technology applications in logistics, including prominent ones such as Manhattan Associates and SMC3. In the academic literature, this is referred to as the "spatial spillover effect" and has been empirically observed in other cases, such as the effect of transport infrastructure on the urban environment in China by Xie et al. (2016), as well as the tourism industry in that country by Ma et al. (2015); the concept stems originally from the well-known Porter hypothesis (Porter 1991).

Boyd (2014) presents 15 examples of relocations of headquarters/offices by prominent Fortune 500 companies between 1975 and 2014. His analysis reflects that the factors driving such relocations include comparative operating costs, a desire by companies to have a smaller footprint at the headquarters, and a shift by major companies to locate important offices in urban centers rather than suburbs. It is interesting to note that besides these driving forces, there are often other reasons why companies may choose to relocate their facilities.

For example, these may be political (Sun Life Insurance moved its offices from Montréal to Toronto in 1978, citing political instability surrounding Bill 101, which mandated French as Québec's official language; Philip Morris moved from New York City to Richmond, Virginia in 2003, days after Mayor Bloomberg's announcement of tough anti-smoking laws), larger incentives and subsidies (Panasonic moved from Secaucus to Newark, New Jersey in 2012 to take advantage of the state's innovative Urban Transit Tax Incentive), socioeconomic decline (American Airlines moved its headquarters from New York City to Dallas, Texas in 1979, at a time when New York City was devastated by fiscal crisis, crime, and racial unrest), connectivity of the new site to the markets served (UPS moved from Greenwich, Connecticut to Atlanta, Georgia in 1991, given Atlanta's rising prominence as the logistics hub of the southern USA), or consolidation of facilities (Hertz consolidated its facilities in New Jersey and Oklahoma into its new headquarters in Estero, Florida in 2013). Nonetheless, Boyd (2014) cites five primary factors used by companies in major relocation decisions, which he abbreviates as TALIO:

1. Talent: the desire to hire top professionals or technical talent.
2. Access: easy access to a hub airport with strong international connections.
3. Lifestyle: the requirement for host cities to have "superior educational facilities, an accommodating housing market, state-of-the-art healthcare and a cultural and recreational bill of fare."
4. Incentives and subsidies offered by the host city, region, and state.
5. Operating costs at the new location.

It is apparent from the above that decision-making regarding the location of facilities by corporations involves a substantial amount of research, planning, and negotiation. As such, while some companies, such as Amazon and General Motors, have internal divisions that make such decisions, others work in collaboration with professional site selection consultants. These consulting organizations range from large multinationals such as Ernst & Young, Deloitte Consulting, CBRE, AECOM, Crowe Horwath International, and Global Location Strategies to smaller ones such as Site Selection Group, Atlas Insight, and Ginovus, to name but a few. One good source for names of such site selection consultants is the Site Selectors Guild. Given the significant economic impact of the location of important facilities, regional economic development agencies work closely with these consultants and corporations throughout the decision-making process. These agencies range from local Chambers of Commerce to public-private partnerships. Therefore, a good source of practitioner literature on site selection issues is magazines and journals, many of which also cover economic development issues. For this chapter, we have chosen to focus on a few prominent sources of this literature. These include:

1. Site Selection Magazine, whose stated mission is "to publish information for expansion planning decision-makers—CEOs, corporate real estate executives and facility planners, human resource managers and consultants to corporations."
2. Area Development (2022), which considers itself "the leading executive magazine covering corporate site selection and relocation," and whose coverage "provides valuable information pertinent to the factors, key issues, and criteria that affect a successful decision."
3. The IEDC Economic Development Journal, the official publication of the International Economic Development Council.
4. The Journal of Applied Research in Economic Development, the official publication of C2ER (the Council for Community and Economic Research).

16.2 Area Development's Annual Survey of Critical Site Selection Factors

Perhaps the most important initial issue that deserves study is the answer to the following question: what are the most common decision-making factors considered by companies in site selection? While numerous articles in the practitioner literature pertain to this question, one of the most authoritative sources is an annual review of these factors published by Area Development (2022), which compiles a list of the top ten factors used by companies each year through an extensive survey of executives. As expected, these factors change annually according to changes in the overall economy. The latest 2017 compilation is based on a survey of 143 executives, who collectively stated the following among the primary reasons for new facility development/facility location: expansion necessitated by increased sales/production; introduction of new products/services; better access to existing or new markets; and finally, relocations necessitated by mergers and acquisitions.

As for the geographic spread of the facilities planned, the survey results indicated that the following three regions attracted the most interest: the US South (Alabama, Florida, Georgia, Louisiana, Mississippi) together with the South Atlantic (North Carolina, South Carolina, Virginia, West Virginia); the Midwest (Illinois, Indiana, Michigan, Ohio, Wisconsin); and the Southwest (Arizona, New Mexico, Oklahoma, Texas). Interestingly, the regions of the United States that attracted the least interest were New England (Connecticut, Massachusetts, Maine, New Hampshire, Rhode Island, Vermont) and the Middle Atlantic (Delaware, Maryland, New Jersey, New York, Pennsylvania). While it is difficult to generalize, the most probable reasons for these preferences have to do with the local business climates in these states, especially laws pertaining to unionization.

The median investment in a planned facility was less than US$10 million, with an additional 25% of respondents planning investments in the range of US$10-50 million and 15% in the range of US$50-100 million. Only 5% were planning "mega facilities" requiring investments between US$100 million and US$500 million.

Studying the results of this survey between 2014 and 2017, it can be concluded that the following factors have generally been among the top ten considered by companies in locational decision-making.

16.3 Workforce-Related Factors

Perhaps no factors are more important to a company in selecting a site than the availability of a qualified workforce. This is particularly important for companies in the "knowledge economy." With that, the workforce-related factors identified by Area Development's Annual Survey are as follows.

• Availability of skilled labor required by the company: This is almost always a top concern for most companies, primarily driven by the infusion of computing and automation into manufacturing and other business processes, which necessitates the hiring of personnel with skills and training in advanced manufacturing and information technology. In recent times, however, most manufacturing companies require employees with "middle skills," i.e., personnel with some training beyond high school, but not necessarily an undergraduate degree. According to one report (Webster 2014), this sector alone is projected to be responsible for 40% of all future job growth in the United States. As a result, a number of states, including Georgia, Louisiana, South Carolina, and Tennessee, have initiated noteworthy job-training programs that bring together companies and the technical college system of the state. One of the most notable examples of such a successful program is Georgia Quick Start (2019), which has successfully brought facilities from companies such as Toyo Tire to that state.

• Cost of labor: Given that personnel generally comprises the largest share of expenses for most companies, it is natural to expect labor costs to be considered one of the most important factors in site selection. Interestingly, most executives, especially in manufacturing, seem to associate unionization with an increase in labor costs, which is why the Area Development surveys consistently indicate that manufacturers often will not consider a state that is not a "right-to-work" state, i.e., one in which union security agreements between companies and workers' unions are not mandated and hence, even in unionized organizations, employees cannot be compelled to join or pay for union representation. This has also been borne out by the well-known ALEC-Laffer State Competitiveness Index produced annually by the American Legislative Exchange Council (see Laffer et al. 2022), which consistently shows right-to-work states outperforming others in terms of attracting new business growth.

Given this importance of labor availability in site selection decisions, an important question for both companies and economic development agencies is how to accurately capture and present labor data to companies during the recruitment process. In that regard, two references in the practitioner literature are important. First, Chmura (2016) presents a novel way to calculate the number of available labor candidates for an opening at a company, computed for the region that includes contiguous counties, stating that firms or site selectors generally want this ratio to exceed 50 potential candidates per available position when selecting appropriate locations. Second, Rees (2017) prescribes ten important factors, beyond the labor data made available by economic development agencies, that companies must keep in mind when looking to expand or open a new facility.
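As a rough illustration of the kind of ratio Chmura (2016) describes (this is not the actual methodology), the sketch below pools a qualified labor force over a host county and its contiguous counties and divides by the number of openings; all county names and figures are invented.

```python
# Candidates-per-opening ratio over a county and its contiguous counties.
# Each entry: (labor force in the relevant occupations, share of that labor
# force realistically available to a new employer). All numbers are made up.

region = {
    "Host County":   (12_000, 0.04),
    "Neighbor East": (8_500, 0.05),
    "Neighbor West": (6_000, 0.06),
}

def candidates_per_opening(region, openings):
    """Pooled available candidates divided by the number of open positions."""
    available = sum(labor * rate for labor, rate in region.values())
    return available / openings

print(candidates_per_opening(region, openings=25))
```

For these invented figures, the ratio works out to 50.6, just above the 50-candidates-per-position rule of thumb mentioned above.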
The salience of workforce talent and availability in the site selection process has also attracted a considerable amount of academic research. In her well-known book, Markusen (1985) postulated that company locations are driven by naturally occurring "profit cycles," which, akin to the well-known product life cycle, were hypothesized as comprising five stages: zero profit, super profit, normal profit, normal-plus and normal-minus profit, and finally, negative profit. This can explain how, in search of lower labor costs to prevent eroding profit margins, textile manufacturers moved from the northeastern states of the USA to the southern states at the turn of the twentieth century and thereafter offshored their manufacturing facilities to cheaper countries such as Mexico, China, and presently, Vietnam and Bangladesh.

With the advent of the internet and the rise of connectivity, Friedman (2005) postulated that geography would become less important, as companies would be able to conduct their business regardless of their location. Nonetheless, that scenario has not panned out as expected in the United States. As noted in Moretti (2012), the need for skilled labor, especially around information technology, has instead created isolated hubs of haves (referred to as "brain hubs" in Moretti (2012)) and have-nots with regard to locations by companies and the consequent economic development. In fact, Florida (2014) goes further to describe the rise
of a "creative class" of highly skilled and innovative workers who choose to locate in areas that suit their lifestyles; that, in turn, has influenced location decisions by companies toward such "creative class hubs." This has been observed, in part, both by Moretti (2012) and in the practitioner literature; see our discussion below on information and communication technology infrastructure as a decision factor in site selection. Further attestation of Florida's hypotheses about creative-class hubs and their impact on the economy is available in Carlino and Saiz (2008), who showed that metropolitan areas in the United States with extensive cultural and leisure-oriented amenities (they call them "beautiful cities") exhibited higher population and employment growth in the 1990s. This has caught the attention of practitioners (see, for example, Houstoun 2010), who have called for deemphasizing incentives for business attraction in favor of lifestyle-focused infrastructure development as a tool to attract company locations.

Finally, it is important to note that proximity to higher education institutions sometimes plays a major role in locational decisions. For example, in 2011 Pfizer signed a 10-year lease agreement with the Massachusetts Institute of Technology (MIT) and moved to Cambridge to be close to "Cambridge's world-class institutions" (Site Selection 2011). Pfizer is not the only company attracted to this well-educated talent base and these research-focused institutions; General Electric (GE), Philips, Lutron, and Toyota are some others. For example, GE had called Westchester, NY home for more than 60 years, but recently signed a partnership with MIT and moved its headquarters to Cambridge, Massachusetts (Bruns 2016a). Some GE-backed companies, and some companies that want to be a part of this dynamic, have also started following. A similar development occurred in the greater St. Louis area (Bruns 2016b), where bioscience and agroscience companies have started to cluster around higher education institutions. It is evident that when research in higher education institutions is shaped by its environment, it attracts and creates a cluster of for-profit companies, and both enjoy synergistic benefits.

16.4 Infrastructure and Proximity-Related Factors

While proximity to suppliers and customers is expectedly an important factor for companies in the manufacturing and transportation sectors, with the emergence of technology in virtually all aspects of managing a company, the health of the available infrastructure has also become an important site selection factor. As is evident below, infrastructure and proximity are important even for companies that do not need to be close to their suppliers and customers, but do require proximity to their available workforce.

• Highway access: Excellent highway connectivity is essential to most companies, especially those with geographically spread supply chains.

• Proximity to major markets and suppliers: This is another major factor, especially given the rising trend of customers expecting shorter lead
times due to the adoption of just-in-time and lean manufacturing policies. This trend is further exacerbated by the rise of online retailing and the expectation of quick delivery to customers. For example, Amazon, the world's largest online retailer, has located or plans to locate 35 warehouses across the United States to guarantee quick deliveries to its customers.

• Available buildings and sites: Companies frequently seek sites, be they for manufacturing, warehousing, research and development, or office facilities, that are "shovel ready," i.e., ready to be moved into with minimal additional expenditures. The survey indicates that across the United States, both occupancy and construction costs are on the rise, and hence companies prefer locations where suitable buildings and sites are already available.

• The state of information and communications technology infrastructure at the site and in the region: This is a corollary of the first factor, namely, the availability of skilled labor, and is also influenced by the growing trend of using "big data" for business analytics by today's companies, be they large or small. It is interesting to note that, according to Area Development, this strongly favors well-known high-technology centers such as Austin and San Antonio, Texas; Provo and Salt Lake City, Utah; San Jose, California; Phoenix, Arizona; Nashville, Tennessee; Charlotte and Raleigh-Durham, North Carolina; Portland, Oregon; and Atlanta, Georgia.

• Energy availability and costs: The importance of this factor is due to the energy-intensive nature of many manufacturing facilities and data centers, and also because it is an important determinant of the overall operational costs of a facility.

16.5 Tax Climate and Incentives

It is obvious that a state's corporate tax rate is an important determinant of the overall costs of doing business there. Beyond that, the survey results indicate that for most executives, the corporate tax rate imposed by a state is generally considered an indicator of the overall business-friendliness of the state. In addition, an important factor, especially of late, has been the total package of incentives and subsidies offered to a company for locating in a state. Story (2012) has written a three-part series in the New York Times describing how states in the United States avidly compete with each other in offering large subsidies to companies in order to persuade them to locate within their borders. According to this study, the total amount in subsidies given to companies for site selection exceeds US$80 billion annually. A case in point is the 2017 decision of FoxConn to locate in Wisconsin, mentioned at the beginning of this chapter. According to public records, the total value of the subsidies offered to FoxConn exceeds US$3 billion.

Whereas companies normally expect to be incentivized to locate within a state or region, it is the overall corporate tax rate that is the greater influence. As stated by a site selection consultant, "Incentives go away, but in the long term, you're stuck with the (corporate tax) rates." Bays (2017) goes further to mention that allowing economic incentives to drive site selection decisions early in the process is one of the top
mistakes that companies make. With this in mind, states such as North Carolina, Ohio, Missouri, Colorado, Kansas, and Michigan have recently revised their corporate tax structures with a view to lowering the overall corporate tax burden, thereby trying to attract businesses for the long haul in lieu of an initial package of subsidies.

16.6 Locational Decision-Making Process

The previous discussion describes the major factors used by organizations in the site selection process. As for the process itself, not all of these attributes are equally important to every company, and furthermore, they are not used simultaneously in the locational decision-making process. Instead, decision-making generally tends to be a sequential process that narrows down the possibilities as it unfolds, until the final site is chosen. For locations in the USA, the first stage generally involves the selection of the appropriate state. Once that is done, the next step is the selection of a region within the state. Often, especially when state-level agencies submit formal bids to companies, as in the case of Amazon, they propose specific regions within the state. The final step is the selection of an actual site for the location.

While there is a good deal of practitioner literature on the appropriate factors that decision-makers in different industries should use for site selection, there are fewer case studies or descriptions in the published literature of real-life site selection processes. Two such descriptions are Thuermer (2007) and Bhadury et al. (2015). Thuermer (2007) describes the process used by Best Buy, one of the largest electronics retailers in the USA, in selecting a distribution center (DC) for the East Coast. The company began with the rule of thumb that a DC should be within a 100-mile radius of the center of the market being served. Following this rule, Best Buy and its site selection consultant (Cushman & Wakefield) identified a region comprising eastern New York, northeastern Pennsylvania, western Massachusetts, and western Connecticut, covering 40 counties. Imposing the additional conditions that the DC be reasonably proximate to New York City and Boston (the major markets to be served), within 10 miles of an interstate highway, and within 100 miles of an airport, the number of possible sites was reduced to seven.
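The sequential narrowing described above can be sketched as a chain of filters. The candidate counties, distances, and cutoffs below are fabricated for illustration; only the screening logic (market proximity, then interstate access, then airport access) mirrors the Best Buy example.

```python
# Sequential site screening: apply one criterion at a time and keep only the
# candidates that pass, mimicking the stage-by-stage narrowing in the text.
# All candidate data are fabricated.

candidates = [
    # (name, miles to NYC, miles to Boston, miles to interstate, miles to airport)
    ("County A", 20, 190, 4, 15),
    ("County B", 85, 140, 12, 60),
    ("County C", 140, 90, 6, 130),
    ("County D", 60, 160, 8, 40),
]

filters = [
    ("within 100 miles of a major market", lambda c: c[1] <= 100 or c[2] <= 100),
    ("within 10 miles of an interstate", lambda c: c[3] <= 10),
    ("within 100 miles of an airport", lambda c: c[4] <= 100),
]

remaining = candidates
for label, keep in filters:
    remaining = [c for c in remaining if keep(c)]
    print(f"{label}: {len(remaining)} candidates remain")

print([name for name, *_ in remaining])
```

Each stage reports how many candidates survive, which corresponds to the stage-by-stage county counts that Thuermer (2007) tabulates for the actual study.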
The final selection of a site in Yonkers, New York, was based on site-specific factors and negotiations with the local county economic development and planning agency on preparation of the site (bringing water, utilities, and electricity to the site and building an access road). In the paper, Thuermer (2007) provides a six-step description of the entire decision-making process, showing which factor was used at each stage, how it was evaluated, and the total number of counties retained at the end of each stage. Opening a new facility involves opportunities and risks at the same time, and Levine (2002) argues that in most facility location decisions the location that minimizes the risks becomes the final choice. He also notes that quantifying risk requires a holistic approach, as a purely functional risk assessment may overlook cross-functional risks and result in suboptimal decisions: "For example, a site-by-site analysis may overlook a situation in which a critical combination of facilities and/or suppliers is overly dependent upon a specific transport carrier or share an emerging political risk that may affect finance or trade relationships." The author suggests evaluating workflows for the most important business operations and identifying the risk sources for each business operation; the outcomes can then be aggregated by risk source.

16 How Location Works in Practice: A Perspective from the United States

Bhadury et al. (2015) describe the process used by the state of North Carolina in deciding where to locate seven logistics parks across the state. As background, a logistics park is generally an industrial park populated by warehouses, distribution centers, and logistics-related companies and offices. In almost all cases, it is also an intermodal facility where truck trailers and containers are transferred between trucks and the railroad. The logistics park concept has been applied in various international settings and may be referred to by other names, such as Freight Village, Güterverkehrszentrum, or Interporto. Many such facilities exist throughout the world; a few examples include the Burlington Northern Logistics Park (Illinois, USA), large cargo airports such as Hong Kong and Memphis, the Pinghu Logistics Park in Shenzhen, China, and the Schiphol Logistics Park in the Netherlands. To begin with, the state legislature of North Carolina decided that the logistics parks would be distributed throughout the state for reasons of geographical equity in fostering regional economic development. As such, each of the seven economic development zones of North Carolina was selected as a region in which to locate a logistics park. Bhadury et al. (2015) describe the decision-making process used to determine the location of a logistics park in one of these regions.
For that purpose, the study used a structured locational decision-making framework known by its acronym SIRC, which stands for the following steps, executed in the prescribed order:

Step 1: Situational analysis of the region where the logistics park will be located.
Step 2: Initial selection of candidate sites for locating the logistics park in the region.
Step 3: Readiness assessment for each of the candidate sites selected in Step 2. Such an assessment should include the current infrastructure, the infrastructure desired for peak performance of the logistics park, and the gap between the two.
Step 4: Competitive summary of the candidate sites, stating the strengths and weaknesses of each. This step is based on the data collected in the prior three steps.

Following SIRC, four sites were identified, and based on the readiness assessment and competitive summary (Steps 3 and 4), the strengths and weaknesses of each site were stated in a manner that made comparison possible. It is interesting to note that, as is common in the location of major public facilities such as logistics parks, state authorities did not want specific recommendations for the location of each park. Instead, what they wanted as an outcome was a small list of possible sites and their relative comparisons. This is because the final decision to locate an important facility such as this is usually made at the legislative level with substantial stakeholder input.
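The SIRC sequence can be sketched as a small data pipeline. The framework itself prescribes only the ordering of the four steps; the data fields, sites, and infrastructure items below are hypothetical illustrations, not the actual North Carolina data.

```python
# A minimal sketch of the SIRC sequence as a data pipeline. Only the
# ordering of the steps comes from the framework; all fields and example
# values are hypothetical.

from dataclasses import dataclass, field

@dataclass
class CandidateSite:
    name: str
    current_infrastructure: set = field(default_factory=set)
    desired_infrastructure: set = field(default_factory=set)
    strengths: list = field(default_factory=list)
    weaknesses: list = field(default_factory=list)

    def readiness_gap(self):
        """Step 3: gap between desired and current infrastructure."""
        return self.desired_infrastructure - self.current_infrastructure

def competitive_summary(sites):
    """Step 4: side-by-side comparison of the shortlisted sites."""
    return {s.name: {"gap": sorted(s.readiness_gap()),
                     "strengths": s.strengths,
                     "weaknesses": s.weaknesses} for s in sites}

# Steps 1-2 (situational analysis, initial selection) yield a shortlist:
shortlist = [
    CandidateSite("Site 1", {"rail spur", "water"}, {"rail spur", "water", "fiber"},
                  strengths=["intermodal access"], weaknesses=["no fiber"]),
    CandidateSite("Site 2", {"water"}, {"rail spur", "water", "fiber"},
                  strengths=["cheap land"], weaknesses=["no rail spur", "no fiber"]),
]
summary = competitive_summary(shortlist)
print(summary["Site 1"]["gap"])  # ['fiber']
```

Note that, consistent with the process described above, the pipeline deliberately ends with a comparison table rather than a single recommended site.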


Dunbar (2015) states that site selectors consider about 75 different criteria when they decide on an optimal location for a business. Many states provide a ready-made solution for this difficulty: certified sites for specific industry needs. Some examples include Wisconsin's "Certified in Wisconsin," Minnesota's "Shovel Ready," and Ohio's "Speed-to-Build" programs. The major benefit of these certified sites is that they allow for an easy site selection process and expedited development, as companies can query their criteria against the certified site databases and find a location easily. Goldsmith (2011) indicates that for some companies the availability of a "ready to go" site is one of the reasons why they selected a specific location. But what makes a certified site a good site? A sample of advertisements of certified sites in Site Selection shows that the keywords used include runway or access to an airport, freeway, railway or port access, foreign-trade-zone status, access to a quality workforce, and others. Some also categorize sites by industry needs, such as technology, life sciences, education, retail, etc. No decision-making process is failproof, especially one as complex and involving as many different stakeholders as site selection. With that in mind, Bays (2017) provides a comprehensive overview of the top five mistakes commonly observed in the site selection decision-making process. According to Bays (2017), these are:

1. Assembling an incomplete internal project team that excludes appropriate stakeholders from within the organization, site selection consultants, and economic development organizations
2. Loosely (or incorrectly) defining project specifications
3. Letting economic incentives drive the project early in the process
4. (Mis-)understanding the tightness of the industrial real estate market at the site being considered
5. Internal company bias towards an early selection of locations in the process

16.7 Global Location

The literature surveyed for this chapter also indicates that economic development agencies in the United States and Canada have become increasingly interested in attracting locations by companies from abroad. Historically, this has been facilitated by the general attractiveness of the United States as a place to do business, as exhibited by regular global surveys performed by Site Selection Magazine. As Rasmussen (2015) points out, 88.5% of the global executives surveyed expressed confidence in the ability of North America, comprising the United States and Canada, to attract foreign direct investment (FDI) in the form of new locations or expansions by foreign companies. The same survey also shows that the regions in North America most favored by the global executives include Salt Lake City, Utah; Boston, Massachusetts; Raleigh-Durham, North Carolina; Southern California; Portland, Oregon; Houston and Dallas-Fort Worth, Texas; Reno-Sparks, Nevada; and Winnipeg, Manitoba, among others. Annually, the United States attracts

significant foreign direct investment; Anderson (2017) estimates total FDI into the U.S.A. at US$373.4 billion in 2016, down 15% from US$439.6 billion in 2015. However, most of this FDI is in acquisitions of existing US facilities, and only a small fraction represents new facility locations or expansions of existing domestic locations, referred to as "Greenfield Investment Expenditures." In 2016 (respectively, 2015), these amounted to US$7.7 billion (respectively, US$13.8 billion), representing approximately 2% (respectively, 3.1%) of the total FDI. More than two-thirds of greenfield expenditures came from European and Asia-Pacific countries, and greenfield expenditures in 2016 were largest in California (US$1.3 billion) and New York (US$1.2 billion). Another interesting point of contrast with the within-country locations reported by the Area Development survey is that the foreign companies locating in the United States in 2016 did not overwhelmingly come from the manufacturing sector. In fact, by industry, the largest share (20.7%) of the greenfield expenditures in 2016 was in the real estate sector, followed by the electric power generation industries. Additionally, Van Den Berghe and Steele (2016) point out that overall FDI has stagnated in recent years and in some years shows signs of decline. The authors argue that there has been a shift in strategic focus: over the past 30 years the main focus was on minimizing cost and easy access to resources, whereas now the focus tends to be more on market and asset access. Therefore, FDI does not take place as site ownership in free zones but rather as joint ventures and mergers and acquisitions.
Although this approach has some drawbacks, such as loss of control and the inability to bring the unique company culture to the host country, the benefits, such as gaining access to existing supply chains and know-how of how to operate in that particular market, exceed the detriments. With this in mind, Newman (2013) advocates that economic development agencies use a "soft landing in the community" approach towards attracting such greenfield investments from foreign companies by using available regional assets to offer an investor "what it will need, as a stranger in a strange land, to acclimatize, function and ultimately thrive in an unfamiliar environment." In fact, this is similar to the ideal business incubation process for entrepreneurs advocated by the National Business Incubation Association (NBIA) as part of its Soft Landings Program, which awards the NBIA Soft Landings International Incubator designation to business incubators that meet its criteria. Since 2005, NBIA has made 23 such awards, including to many technology or science parks operated by US universities, such as the University City Science Center in Philadelphia, the Rutgers EcoComplex Clean Energy Innovation Center and the Rutgers Food Innovation Center (both operated by Rutgers University), the Enterprise Development Center at the New Jersey Institute of Technology, the Center for Innovation at the University of North Dakota, and the Innovation Laboratory at East Tennessee State University, to name a few. Another interesting recent force driving the location of facilities in the United States is the phenomenon of "reshoring," which refers to bringing back manufacturing that went offshore. A comprehensive source of information on this is the nonprofit Reshoring Initiative, which lists almost 1500 US companies that have reshored their facilities back to the USA since 2010, creating over 300,000 jobs in

the process. The list includes some well-known cases that have been highly publicized in the media, such as two prominent multinationals: General Electric and Walmart. The primary reasons quoted for such relocations back to the USA include: rising labor costs in developing countries such as China that make labor costs in the United States competitive; problems resulting from currency fluctuations; quality, warranty, and rework issues associated with foreign suppliers; increasing costs and lead times involved in logistics and transportation for offshored manufacturing and their adverse impact on inventory; and, finally, the risk of loss of intellectual property in foreign countries. The website also contains a Total Cost of Ownership (TCO) Estimator that helps a company examine the total of all relevant costs associated with making or sourcing a product domestically in the USA or offshore. The TCO Estimator classifies these costs into five categories:

1. Cost of goods sold versus landed cost
2. "Hard" costs, such as on-hand and in-transit inventory costs
3. Potential risk-related costs associated with rework, product liability, low quality, theft of intellectual property, etc.
4. Strategic costs, including the adverse impact of separating manufacturing from design and engineering on product/service innovation
5. Cost of the overall impact on society, including environmental impact

Mataloni (2008) presents an interesting study on the process and criteria by which U.S. multinational companies (MNCs) in the manufacturing sector select locations for investments abroad by examining investments in seven European countries between 1989 and 2003. One surprising conclusion from the study was that "MNCs are attracted to high-wage locations, even after adjusting for location attributes generally thought to be associated with high-wage levels," explained in part by the fact that these high-wage regions also contained an adequate supply of labor with the high skills required.
This is in stark contrast to the "anecdotal evidence of companies using relatively low-wage European countries as 'export platforms' to the rest of the European Union." Two factors were found to be strong determinants of location. The first was "industrial agglomeration," i.e., the tendency for specific industries to agglomerate geographically around a given region. The study found that MNCs are attracted to countries, and to regions within those countries, that have a relatively high proportion of firms in their own industry (for example, Germany's North Rhine-Westphalia region has Europe's largest concentration of chemical manufacturing). The second factor was transportation and other infrastructure (such as telecommunications); regions with relatively well-developed road networks attract more investment. As to the locational decision-making process, Mataloni (2008) concludes that MNCs use "a sequential choice process in which a country is first selected based on one set of attributes and then a region within that country is selected based on another—largely separate—set of attributes."

16.8 Influence of State Rankings

In the U.S.A., different agencies, foundations, think tanks, and magazines publish rankings of states and cities that purport to compare them on their "business friendliness," based on a host of economic, social, and political factors. The practitioner literature makes it evident that several of these have a strong influence on site selection consultants as well as companies in the initial part of the locational decision-making process, when the selection is among states in which to locate the new facility. For example, the ALEC-Laffer State Competitiveness Index produced annually by the American Legislative Exchange Council (Laffer et al. 2022), referenced before, is one such ranking often quoted by site selection consultants. Curren (2014) is a comprehensive reference for the 13 most influential of these rankings and the factors that each considers in compiling its own unique ranking:

1. Forbes: Best States for Business
2. CNBC: America's Top States for Business
3. Site Selection Magazine: Top US Business Climates, Governors Cup and Top Ten Competitive States
4. Area Development: Top States for Doing Business, Gold and Silver Shovel Awards, and Leading Locations
5. Tax Foundation: State Business Tax Climate Index
6. Pollina Corporate Real Estate: Top Ten Pro-Business States
7. Chief Executive: Best & Worst States for Business
8. Sperling's Best Places
9. fDi Intelligence Unit: The fDi Report
10. Bloomberg Business Week/A.T. Kearney: Global Cities Index
11. Gallup: State of the States
12. Brookings Institute: The Metro Monitor
13. Business Facilities: Business Facilities Rankings Report

Notwithstanding the widespread attention given to these state rankings, they are not without their critics. Research by the Ewing Marion Kauffman Foundation (Motoyama and Konczal 2013) outlines the pitfalls of these rankings and their shortcomings.
Beginning with the fundamental question of whether a business climate index can usefully measure the real-life business climate of a region, given the immense amount of subjectivity involved, they perform statistical analyses showing that many of these indices are weakly correlated, or not correlated at all, with several empirically grounded measures of business growth related to entrepreneurship and innovation in each state. Going further, they show how different aggregations of the factors used by the rankings above can result in inconsistent orderings of states, to the point that virtually any state can be made the top-ranking state through simple manipulations of the data. Motoyama and Hui (2015) use a survey of 3600 small business owners, mainly in the personal and business service sectors, to show that "many of the popular state rankings either do not associate with individual

perceptions of business climate or predict in the wrong direction." Based on such results, Motoyama and Konczal (2013) recommend that:

• Policymakers should not rely on a single indicator to gauge economic conditions.
• Aggregating indicators does not provide solutions because indicators are highly variable.
• Policymakers should not focus on improving their states' rankings because the rankings lack meaning.
• Rather, they should employ a scorecard approach, which does not create a normative, quantified measure, but descriptively assesses various conditions of each state.

16.9 Trends in Economic Development Strategies to Facilitate Site Selection

The surveyed literature points to a few prominent trends among economic development agencies seeking to facilitate site selection within their regions; below we focus on four of them. First among these is the realization that regions that have been successful in attracting new company locations have a strategic plan for their overall economic development. Hamilton (2007) is an early reference, with a case study on how the city of Enterprise in Alabama developed and used its economic development strategic plan as a guide to attract a tier 1 automotive parts supplier to locate there, creating 350 new jobs and attracting US$20 billion in capital investment. Presently, site selection consultants such as Uminski (2017) state that "sophisticated organizations are demanding a relatively new, but critical, component for streamlining their site selection process and increasing the likelihood of success," and point to the presence of community strategic plans as this critical factor. Simply put, major organizations wish to see that any region where they are planning to locate has a well-formulated long-term plan for economic development that is widely supported by all stakeholders. The above, in turn, has given rise to the second trend among regions that have been successful at attracting companies, namely, collaboration among different economic development agencies. As background, responsibility for corporate recruitment and economic development is typically handled by several agencies, both public and private, in most regions of the United States. Involved organizations frequently include local Chambers of Commerce, state-funded and appointed agencies (for example, the Finger Lakes Economic Development Council in Western New York), public-private partnerships (for example, the Piedmont Triad Partnership in central North Carolina), and a host of other city- or county-based offices.
For example, in Reid and Smith (2009), the authors identify over 30 entities, both public and private, in the city of Toledo, Ohio and surrounding Lucas County, each purporting to perform economic development activities, especially corporate recruitment for site selection. As they state, “fragmented economic development systems possess a number of inherent dangers including duplication of efforts, unwillingness to share ideas and information, hesitation to collaborate on development

opportunities, and jurisdictional and institutional territoriality." This underscores the need for regional cooperation in the successful recruitment of companies. Berzina and Rosa (2014) give an overview of how cooperation among the different economic development agencies of the cities of Dallas, Fort Worth, and Irving in Texas and the utility TXU Energy, especially in forming the DFW Marketing Team in 2003, has helped that region grow into one of the largest corporate centers in the United States. Cooperation between these agencies made possible the DFW Airport, currently one of the largest in the world, and the associated economic activity that it spawned. This cooperation and regional sales pitch has resulted in numerous Fortune 500 companies locating in that region, including AMR/American Airlines, Exxon Mobil, Fluor Corporation, AT&T, RadioShack, Texas Instruments, and Kimberly-Clark. Despite containing only one quarter of the population of Texas, the area surrounding the DFW Airport produces one third of the entire state's gross domestic product. The success of efforts such as these has led practitioners to include such regional collaboration in the decision-making process while scouting for sites on behalf of companies. Wagner (2017) has developed "collaboration scores" to ascertain "how well stakeholders in the region are fully aligned with respect to business attraction strategies and are committed to working collaboratively with the prospective company to ensure its project comes to fruition." Reid and Smith (2009) have used another novel approach to measure cooperation between economic development practitioners in a region. Using Toledo, Ohio as an example, they show how social network analysis can be used to gauge regional cooperation.
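Measures of this kind are straightforward to compute. The sketch below derives two of them, network density (the share of actual connections out of all possible pairwise connections) and degree centrality, for a small hypothetical network of agencies; the agency names and ties are invented for illustration.

```python
# Sketch: two social-network measures of the kind used to gauge regional
# cooperation, computed for a small hypothetical network of economic
# development agencies (the agencies and ties are invented).

from itertools import combinations

agencies = ["Chamber", "CountyEDO", "CityOffice", "Partnership"]
ties = {("Chamber", "CountyEDO"), ("Chamber", "CityOffice"),
        ("CountyEDO", "Partnership")}

def density(nodes, edges):
    """Actual connections as a fraction of the n*(n-1)/2 possible ones."""
    possible = len(list(combinations(nodes, 2)))
    return len(edges) / possible

def degree_centrality(node, edges, n):
    """Share of the other n-1 actors this node is directly tied to."""
    degree = sum(1 for e in edges if node in e)
    return degree / (n - 1)

print(density(agencies, ties))                            # 3 of 6 possible ties
print(degree_centrality("Chamber", ties, len(agencies)))  # 2 of 3 other actors
```

A fragmented system in the sense of Reid and Smith would show up here as low density and a few isolated clusters, whereas a well-aligned region would exhibit high density and no single dominant gatekeeper.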
They propose using three graph-theoretic measures, namely, centrality (a person's position in a network, in that a person with high centrality is well connected to other people in the network), density (the actual number of connections within a network as a percentage of the maximum number of potential connections), and spatiality (the geographical spread of a social network), as important determinants of regional cooperation among economic development agencies. The third trend that can be gleaned from the practitioner literature is the infusion of information technology into the site selection process. As explained in Holbrook (2005), a significant impediment in the site selection process is the lack of a useful, working database of buildings, both nationally and regionally, paired with community profiling tools that are information-rich and continuously updated. With that in mind, Holbrook (2005) describes the critical issues involved in developing such an integrated system that can be used by economic development agencies and site selection consultants alike, along with an overall framework for it. Jones (2014) approaches site selection from an analytics point of view, claiming that most long-term supply chain infrastructure decisions are made without input from the real estate group of a company, as that input does not drive strategy. This perspective suggests that decisions such as how many plants, warehouses, and distribution centers to open, and where they should be located, are made using optimization software and then fine-tuned by customer requests, as the location suggested by the software is rarely the best location given the other factors that a decision maker may consider. The major factors used by the software include

transportation, labor, and service, whereas the intangible fine-tuning factors include tradition, timeliness, and quality of life. In recognition of this issue, in 2002 the IEDC developed the Site Selection Data Standards to "assist economic development organizations in presenting relevant data to site selectors" and to allow for meaningful comparisons between alternative sites. Collaboratively developed by site selection consultants and economic developers, the standards comprise over 1000 data elements in a 25-tab spreadsheet that "comprehensively covers key data points in topics from infrastructure to pollution levels to patent rates," the intention being to support "communities in the collecting, analyzing, and delivering data in a coordinated and consistent manner." Nonetheless, adoption of these standards among economic development organizations has been low; in fact, IEDC states that only 16% of its own members use them to guide their data collection and presentation efforts. Regardless, IEDC regularly conducts studies of how economic development organizations have incorporated data analytics and information technology into the site selection process. Their latest report, Hurwitz and Brown (2016), documents changes in the data needs of the users, finds that the 2002 IEDC Data Standards are still largely relevant to the needs of the site selection process, and notes the increasing influence of open, mobile, and big data computing on the information technology needs of economic development organizations. Finally, it is noteworthy that despite such attention, location consultants still cite site selection processes as deficient in the efficient utilization of information technology resources (Jetli 2017). The fourth and final trend identified from the surveyed literature points to the influence of long-term sustainability on economic development strategies for site selection.
Borrowing from the management literature, Hammer (2015) advocates the use of the "triple bottom line philosophy" in evaluating sites for location. This concept, first enunciated by Elkington (2004), calls for the simultaneous consideration of people (who are at the heart of economic and community development), place (the location where the development is expected to happen), and prosperity (which, in this case, implies long-term well-being and quality of life). Hammer (2015) provides a framework for doing so and illustrates it with a case study from Port Townsend in Washington.
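One simple way to operationalize a three-dimension evaluation of this kind is a weighted score per site. The sketch below is only an illustration of that idea: the weights, scores, and site names are hypothetical, and Hammer (2015) does not prescribe this particular arithmetic.

```python
# Sketch: weighted-score comparison of candidate sites along the three
# "triple bottom line" dimensions. All weights, scores, and sites are
# hypothetical; this is not Hammer's actual evaluation procedure.

weights = {"people": 0.4, "place": 0.3, "prosperity": 0.3}

sites = {
    "Waterfront": {"people": 7, "place": 9, "prosperity": 6},
    "Industrial": {"people": 6, "place": 5, "prosperity": 8},
}

def tbl_score(scores):
    """Weighted sum of the three dimension scores."""
    return sum(weights[d] * scores[d] for d in weights)

ranked = sorted(sites, key=lambda s: tbl_score(sites[s]), reverse=True)
print(ranked[0], round(tbl_score(sites[ranked[0]]), 2))
```

The interesting modeling decision is the weights: putting people first (0.4 here) reflects the philosophy's claim that people are at the heart of economic and community development, and changing the weights can reverse the ranking.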

16.10 Location of “Undesirable” Facilities

The discussion thus far has focused exclusively on the location of facilities that are deemed "desirable," in that the site selection process seeks proximity to users in choosing their locations. In contrast, there are other types of facilities that are considered "undesirable" or "obnoxious," in that citizens typically do not wish to be close to them. Examples include landfills, dump sites, prisons, etc.; see, e.g., Chaps. 6 and 10 in this volume. Due to the diversity of such facilities, there is a wide swath of literature dedicated to the location of "undesirable" facilities, but each strand is unique to its particular application. For example, locating prisons is different

than locating landfills; while transportation costs from waste collection centers are a primary determinant for the latter, such costs are relatively immaterial in selecting sites for the former. As mentioned above, undesirable or obnoxious facilities include landfills, prisons, nuclear power plants, cell phone towers, cemeteries, etc. In general, people realize the need for these facilities but do not want them in close proximity. Therefore, siting an obnoxious facility is always a challenge. These facilities are also called LULUs in the literature, which stands for "Locally Unwanted Land Use" (Myers and Martin 2004). Academic research in facility location approaches this by using the objective of maximizing the minimum distance in distance-based obnoxious facility models (Love et al. 1988), which ensures that the facility is located at least a certain distance away from the closest demand point. In practice, however, there are more criteria to consider than just distance when locating any facility. To begin with, consider the case of a landfill, an obnoxious facility that people do not wish to be close to. People generate a lot of waste but do not want to store it close to where they live. Opening a waste facility or expanding an existing one therefore always faces the challenge of acquiring public acceptance, and the idea brings a lot of controversy (see, e.g., Agrawal 2017 or Johnson and Scicchitano 2012). The site selection for one also requires many criteria to be considered. Walsh and O'Leary (2002) indicate that aside from technical requirements such as "drainage patterns, geologic formations, groundwater depth," etc., there are other important factors. These include "local public opinion, hauling distance, accessibility, climate, and economics." Erkut and Moran (1991) came to similar conclusions in their study of the location of a landfill in Edmonton, Canada.
Their study identified environmental, social, and economic factors as the major ones to consider in this locational decision-making process. Pairwise comparisons among these three factors, done by a city official, indicated that the social factors were the most important in the decision process, followed by the environmental and economic factors, respectively. Both studies show the importance of public acceptance, since without public support such locations can be delayed or even stopped. According to Abrams et al. (1992), the general public uses three major methods to stop or delay the siting of a facility: court suits, legislative intervention, and zoning challenges. Public opposition is sometimes so strong that some of these projects are forced to halt (see, e.g., Kasuba 2021). Recent practices of abandoning dumping or burning and adopting emerging technologies, such as recycling materials, creating energy, and providing better containment (Townsend et al. 2015), may make these facilities less obnoxious and hence decrease public opposition. Finally, Kleindorfer and Kunreuther (1994) describe the features of the problems related to the siting of unwanted facilities and propose a set of guidelines providing alternatives to the usual "decide—announce—defend" approach traditionally used in practice. One way to increase public support for obnoxious facilities is to convince residents that these facilities help improve the local economy. We see evidence of that in the location of correctional facilities, or prisons. Although once considered obnoxious facilities, prisons are now seen as an engine of economic salvation in rural

communities. In fact, Whitfield (2008) points out that "many small towns and rural counties actively lobbied state legislatures for prisons to be located in their communities." A survey by Myers and Martin (2004) indicated that only 25% of the respondents thought that property values would decrease as a result of locating a prison, and that opposition came mainly from the immediate area; the county where the prison was to be located saw it as an economic advantage. These findings support the earlier results of Sechrest (1992), who found no evidence that correctional facilities have negative effects such as decreases in public safety or land values. As a result, the majority of new prisons built since 1980 are located in rural America (Huling 2002). According to Besser and Hanson (2016), these rural communities were hit by the major shift in industry from manufacturing to services and started looking for alternative options for economic development. This made prisons one of the few drivers of rural growth, along with gambling casinos and concentrated animal feeding operations. Accordingly, Cherry and Kunce (2001) suggest that to efficiently locate an obnoxious facility such as a prison, policymakers should consider the heterogeneous economic conditions of potential sites, such as the poverty rate, the number of manufacturing sites, and the unemployment rate. The logit model they developed gives evidence that state policymakers indeed do that. Another well-known study on siting correctional facilities is that by Abrams and Lyons (1987), who conducted a survey to understand whether siting correctional facilities affects the respective communities' property values, economy, public safety, law enforcement capabilities, and quality of life. By comparing data from communities with a correctional facility and control groups, they concluded that the quality of life and public safety indicators were not significantly different between the two.
However, they also stated that “it is possible that residential property values may be sensitive to negative public opinion.” Moreover, the increase in property values in selected prison cities was higher than in non-prison cities. Nonetheless, there is a strand of literature that does not attest to any substantially positive impact of siting correctional facilities in rural areas. Glasmeier and Farrigan (2007) represent one comprehensive study of such correctional facility locations; their analysis suggests a limited economic effect on rural places in general, but a possibly positive impact on poverty rates in persistently poor rural counties, as measured by diminishing transfer payments and increasing state and local government earnings in places with relatively good economic health. However, there is little evidence that prison impacts were significant enough to foster structural economic change.

Additionally, Besser and Hanson (2016) found some evidence of a lack of economic benefits from a new prison: small towns that hosted a new prison in the 1990s witnessed higher poverty levels and lower median housing values in the 2000s. However, that study considers highly unequal sample sizes for prison towns and non-prison towns. Thus, we can conclude that while some communities may still want such facilities as far away as possible, and while siting a correctional facility in close proximity to a community causes controversy, there is evidence that these facilities are no longer obnoxious for communities in economic difficulties. Alternatively, years after location, evidence of their impact is more mixed: while they may still be considered “obnoxious,” they do generate spillover benefits to the local community. Travis and Sheridan (1983) reached a similar conclusion and pointed out that rural communities with weak economies are more open to the siting of new prisons. The U.S. Bureau of Labor Statistics (2016) supports this by reporting that the states with the highest concentration of prison jobs and location quotients are Mississippi, Arizona, New Mexico, West Virginia, and Louisiana, respectively. Three of these five states (Mississippi, Louisiana, and West Virginia) are ranked at the bottom of the Best State Economies list produced by WalletHub. Thus, the available evidence seems to suggest that regions with weaker economies are the ones that typically attract prisons; however, the economic development spawned by such locations remains, by itself, insufficient for these regions to register significant improvements.

References

K.S. Abrams, W. Lyons, Impact of Correctional Facilities on Land Values and Public Safety, in FAU-FIU Joint Center for Environmental and Urban Problems (Florida International University, North Miami, 1987), https://www.ojp.gov/ncjrs/virtual-library/abstracts/impact-correctional-facilities-land-values-and-public-policy. Accessed 13 Oct 2022
K.S. Abrams, W. Lyons, R. Cruz, A. Dahbura, L. de Haven-Smith, P. Johnson, et al., Issues in Siting Correctional Facilities: An Informational Brief (US Department of Justice, National Institute of Corrections, 1992), https://www.ncjrs.gov/pdffiles1/Digitization/151564NCJRS.pdf. Accessed 13 Oct 2022
AECOM, ESG report (2022), http://www.aecom.com. Accessed 13 Oct 2022
N. Agrawal, Controversial landfill in northern L.A. County to be expanded. Los Angeles Times (2017), http://www.latimes.com/local/lanow/la-me-ln-chiquita-canyon-landfill-20170627story.html (behind paywall). Accessed 3 Oct 2022
T. Anderson, New Foreign Direct Investment in the United States in 2016 (US Bureau of Economic Analysis, 2017), https://apps.bea.gov/scb/pdf/2017/08-August/0817-new-foreign-direct-investment-in-the-united-states.pdf. Accessed 3 Oct 2022
Area Development, Corporate survey results: site selection factors (2017), http://www.areadevelopment.com/corpSurveyResults/. Accessed 3 Oct 2022
Area Development (2022), http://www.areadevelopment.com. Accessed 13 Oct 2022
J. Bays, Top mistakes companies make during the site selection process. Area Development (2017), http://www.areadevelopment.com/corporate-site-selection-factors/Q4-2017/top-mistakes-companies-make-during-site-selection-process.shtml. Accessed 3 Oct 2022
D. Berzina, M. Rosa, Coopetition. Econ Dev J 13(2), 7–11 (2014)
T.L. Besser, M.M. Hanson, Development of last resort: the impact of new state prisons on small town economies in the United States. Commun Econ Develop 73 (2016)
J. Bhadury, M.L. Burkey, S.P. Troy, Location Modeling for Logistics Parks, in Applications of Location Analysis, International Series in Operations Research & Management Science, ed. by H.A. Eiselt, V. Marianov, vol. 232 (Springer, 2015), pp. 55–83
Bloomberg Business Week/A.T. Kearney: global cities index, https://www.kearney.com/globalcities/2021. Accessed 13 Oct 2022
J.H. Boyd, Corporate headquarters mobility is at an all-time high. Site Selection Magazine (2014), https://siteselection.com/issues/2014/mar/sas-headquarters.cfm. Accessed 3 Oct 2022


A. Bruns, Talent first. Site Selection (2016a), http://siteselection.com/issues/2016/may/newengland.cfm. Accessed 3 Oct 2022
A. Bruns, Why clusters matter. Site Selection (2016b), https://siteselection.com/LifeSciences/2016/feb/clusters.cfm. Accessed 3 Oct 2022
Business Facilities, Business facilities rankings report (undated), https://businessfacilities.com. Accessed 13 Oct 2022
G.A. Carlino, A. Saiz, Beautiful city: leisure amenities and urban growth. J Reg Sci 59(3), 369–408 (2008)
CBRE (2022), https://www.cbre.us. Accessed 13 Oct 2022
T. Cherry, M. Kunce, Do policymakers locate prisons for economic development? Growth Chang 32(4), 533–547 (2001)
Chief Executive: best & worst states for business, https://chiefexecutive.net/2017-best-worst-statesbusiness/. Accessed 15 Oct 2022
C. Chmura, Labor economics information and data. IEDC Econ Develop J 15(1), 13–17 (2016), https://www.iedconline.org/clientuploads/Economic%20Development%20Journal/EDJ_16_Winter_Chmura.pdf. Accessed 3 Oct 2022
CNBC, America’s top states for business (2017), https://www.cnbc.com/2017/07/11/americas-topstates-for-business-2017.html. Accessed 13 Oct 2022
Crowe Horwath International (2022), https://www.crowehorwath.com. Accessed 13 Oct 2022
D.Y. Curren, Taking the mystery out of best places rankings. IEDC Econ Develop J 13(4), 5–12 (2014), https://www.iedconline.org/clientuploads/Economic%20Development%20Journal/EDJ_14_Fall_Curren.pdf. Accessed 3 Oct 2022
W. Domschke, A. Drexl, Location and Layout Planning: An International Bibliography, in Lecture Notes in Economics and Mathematical Systems, vol. 238 (Springer, Berlin-Heidelberg-New York, 1985)
C. Dunbar, Time is money. Site Selection (2015), http://siteselection.com/issues/2015/mar/sascertified-sites.cfm. Accessed 3 Oct 2022
J. Elkington, Enter the Triple Bottom Line, in The Triple Bottom Line: Does It All Add Up?, ed. by A. Henriques, J. Richardson (Earthscan, Totnes, UK, 2004)
E. Erkut, S.R. Moran, Locating obnoxious facilities in the public sector: an application of the analytic hierarchy process to municipal landfill siting decisions. Socio Econ Plan Sci 25(2), 89–102 (1991)
fDi Intelligence, The fDi report (2022), https://www.fdiintelligence.com/Rankings. Accessed 13 Oct 2022
R. Florida, The Rise of the Creative Class – Revisited: Revised and Expanded (Basic Books, New York, 2014)
Forbes: Best States for Business (2017), https://www.forbes.com/best-states-for-business/list/. Accessed 13 Oct 2022
T.L. Friedman, The World is Flat: A Brief History of the Twenty-First Century (Farrar, Straus and Giroux, New York, 2005)
Gallup, State of the states (2022), http://news.gallup.com/poll/125066/state-states.aspx. Accessed 13 Oct 2022
Georgia Quick Start (2019), https://www.georgiaquickstart.org. Accessed 13 Oct 2022
Ginovus (2022), http://ginovus.com. Accessed 13 Oct 2022
A.K. Glasmeier, T. Farrigan, The economic impacts of the prison development boom on persistently poor rural places. Int Reg Sci Rev 30(3), 274–299 (2007)
Global Location Strategies, Manufacturing and industrial site finders (undated), http://www.globallocationstrategies.com. Accessed 15 Oct 2022
J.T. Goldsmith, What is the true value of a shovel-ready site? Site Selection (2011), https://siteselection.com/issues/2011/may/upload/1105SAS_ShovelReady.pdf. Accessed 2 Oct 2022
T.S. Hale, C.R. Moberg, Location science research: a review. Ann Oper Res 123(1), 21–35 (2003)
F.L. Hamilton, Economic development strategic planning. IEDC Econ Develop J 6(1), 44–50 (Winter 2007)


J. Hammer, When three equals one – triple bottom line economic development. IEDC Econ Develop J 14(3), 5–10 (2015)
J.A. Hawes, Cities with Prisons: Do They Have Higher or Lower Crime Rates? (Office of Justice Programs, US Department of Justice, 1985), https://www.ojp.gov/ncjrs/virtual-library/abstracts/cities-prisons-do-they-have-higher-or-lower-crime-rates. Accessed 13 Oct 2022
D.A. Holbrook, Site selection technology – paradigm shift. Econ Dev J (Summer), 43–47 (2005), https://www.iedconline.org/clientuploads/Economic%20Development%20Journal/EDJ_05_Summer_Paradigm_Shift.pdf. Accessed 3 Oct 2022
L. Houstoun, Amenity-driven economic growth: is it worth betting on? IEDC Econ Develop J 9(1), 19–23 (2010)
Atlas Insight, http://atlasinsight.com
T. Huling, Building a Prison Economy in Rural America, in Invisible Punishment: The Collateral Consequences of Mass Imprisonment (2002), pp. 197–213
J. Hurwitz, E.J. Brown, A New Standard: Achieving Data Excellence in Economic Development (International Economic Development Council (IEDC), 2016), https://www.iedconline.org (available to members only)
International Economic Development Council (undated), https://www.iedconline.org. Accessed 15 Oct 2022
R. Jetli, Utilizing advanced technologies to evaluate the location decision. Area Development (2017), http://www.areadevelopment.com/corporate-site-selection-factors/Q4-2017/utilizingadvanced-technologies-evaluate-location-decision.shtml. Accessed 3 Oct 2022
R.J. Johnson, M.J. Scicchitano, Don’t call me NIMBY: public attitudes toward solid waste facilities. Environ Behav 44(3), 410–426 (2012)
M. Jones, The most important thing to know about distribution site selection is . . . that it’s not about the site. Site Selection (2014), http://siteselection.com/issues/2014/jul/sas-logistics.cfm. Accessed 3 Oct 2022
J. Kasuba, Riverview repeats request to expand landfill; add 15 years of disposal capacity (2021), https://www.thenewsherald.com/2021/06/24/riverview-repeats-request-to-expand-landfill-add15-years-of-disposal-capacity/. Accessed 13 Oct 2022
P.R. Kleindorfer, H.C. Kunreuther, Siting of Hazardous Facilities, in Handbooks in Operations Research and Management Science, vol. 6 (1994), pp. 403–440
A.B. Laffer, S. Moore, J. Williams, Rich States, Poor States: ALEC-Laffer State Competitiveness Index, 15th edn. (American Legislative Exchange Council, 2022), https://www.richstatespoorstates.org/app/uploads/2022/04/2022-15th-RSPS.pdf. Accessed 2 Oct 2022
D.H. Levine, Risk deconstruction and the site selection process. Site Selection (2002), http://siteselection.com/issues/2002/mar/p118/. Accessed 2 Oct 2022
R.F. Love, J.G. Morris, G.O. Wesolowsky, Chapter 6: Facilities Location (1988), pp. 133–135
T. Ma, T. Hong, H. Zhang, Tourism spatial spillover effects and urban economic growth. J Bus Res 68(1), 74–80 (2015)
A. Markusen, Profit Cycle, Oligopoly and Regional Development (MIT Press, Cambridge, MA, 1985)
R.J. Mataloni Jr., Foreign Location Choices by U.S. Multinational Companies (US Bureau of Economic Analysis, 2008), https://apps.bea.gov/scb/pdf/2008/03%20March/0308_locations.pdf. Accessed 2 Oct 2022
A. McCann, Best & worst state economies. WalletHub (2022), https://wallethub.com/edu/stateswith-the-best-economies/21697. Accessed 13 Oct 2022
E. Moretti, The New Geography of Jobs (Houghton Mifflin Harcourt, 2012)
Y. Motoyama, I. Hui, How do business owners perceive the state business climate? Using hierarchical models to examine the business climate perceptions, state rankings, and tax rates. Econ Dev Q 29(3), 262–274 (2015)
Y. Motoyama, J. Konczal, How can I create my favorite state ranking? The hidden pitfalls of statistical indexes. J Appl Res Econ Develop 10, 1–12 (2013), Ewing Marion Kauffman Foundation


D.L. Myers, R. Martin, Community member reactions to prison siting: perceptions of prison impact on economic factors. Crim Justice Rev 29(1), 115–144 (2004)
P.B. Newman, Attracting foreign direct investment through “soft landing” initiatives. IEDC Econ Develop J 12(4), 20–25 (2013)
Pollina Corporate Real Estate, Top 10 pro-business states (undated), http://www.pollina.com/top10probusiness.html. Accessed 15 Oct 2022
M.E. Porter, Towards a dynamic theory of strategy. Strateg Manag J 12(S2), 95–117 (1991)
P. Rasmussen, Locations of the future. Site Selection Magazine (2015), https://siteselection.com/issues/2015/jul/fdi-outlook.cfm. Accessed 2 Oct 2022
J. Rees, Scouting for locations in an era of labor scarcity: 10 considerations. Area Development (2017), http://www.areadevelopment.com/skilled-workforce-STEM/workforce-q2-2017/scouting-locations-in-era-of-labor-scarcity-10-consideration.shtml. Accessed 2 Oct 2022
N. Reid, B.W. Smith, IEDC Econ Develop J 8(1), 48–55 (2009)
Reshoring Initiative (2022), http://reshorenow.org. Accessed 13 Oct 2022
D.K. Sechrest, Locating prisons: open versus closed approaches to siting. Crime Delinq 38(1), 88–104 (1992)
Site Selection, Potent formula (2011), http://siteselection.com/LifeSciences/2011/nov/biopharma.cfm. Accessed 15 Oct 2022
Site Selection Group (2022), https://www.siteselectiongroup.com. Accessed 13 Oct 2022
Site Selection Magazine (2022), https://siteselection.com. Accessed 13 Oct 2022
Site Selectors Guild (undated), http://siteselectorsguild.com. Accessed 13 Oct 2022
Sperling’s Best Places (undated), http://www.bestplaces.net. Accessed 13 Oct 2022
L. Story, United States of subsidies: a series examining business incentives and their impact on jobs in local economies. New York Times (2012)
Tax Foundation, State business tax climate index (2021), https://taxfoundation.org/publications/state-business-tax-climate-index/. Accessed 13 Oct 2022
The Council for Community and Economic Research (2022), https://www.c2er.org. Accessed 13 Oct 2022
The Metro Monitor, Brookings Institution (2022), https://www.brookings.edu/research/metromonitor-2017/. Accessed 13 Oct 2022
K.E. Thuermer, Where, why, and how many? Logist Manag 46(3), 47–49 (2007), https://trid.trb.org/view/811825
T.G. Townsend, J. Powell, P. Jain, Q. Xu, T. Tolaymat, D. Reinhart, The Landfill’s Role in Sustainable Waste Management, in Sustainable Practices for Landfill Design and Operation, ed. by T.G. Townsend (Springer, New York, NY, 2015), pp. 1–12
K.M. Travis, F.J. Sheridan, Community involvement in prison siting. Correct Today 45(2), 14–15 (1983)
U.S. Bureau of Labor Statistics, Occupational employment and wage statistics (2016), https://www.bls.gov/oes/current/oes333012.htm. Accessed 3 Oct 2022
D.J. Uminski, The importance of community strategic planning in the location decision. Area Development (2017), http://www.areadevelopment.com/corporate-site-selection-factors/Q4-2017/importance-of-community-strategic-planning-location-decision.shtml. Accessed 2 Oct 2022
D. Van Den Berghe, C. Steele, The future of free zones. Site Selection (2016), http://siteselection.com/issues/2016/nov/the-future-of-free-zones-tapping-into-new-reservoirs-of-investment-andstimulating-innovation-in-service-delivery.cfm. Accessed 2 Oct 2022
L. Wagner, The collaboration score: the missing link in site selection. Area Development (2017), http://www.areadevelopment.com/business-climate/Q4-2017/collaboration-score-missing-linkin-site-selection.shtml. Accessed 2 Oct 2022
P. Walsh, P. O’Leary, Evaluating a potential sanitary landfill site. Waste Age 33(5), 74–75 (2002)


I. Watt, Pursuing the best and brightest: talent attraction as an economic driver. IEDC Econ Develop J 9(1), 13–18 (2010)
M.J. Webster, Where the jobs are: the new blue collar. USA Today (2014), https://www.usatoday.com/story/news/nation/2014/09/30/job-economy-middle-skill-growth-wage-blue-collar/14797413/. Accessed 2 Oct 2022
D. Whitfield, Economic impact of prisons in rural areas: a literature review. European Services Strategy Unit (2008), https://www.european-services-strategy.org.uk/wp-content/uploads/2008/12/prison-impact-review.pdf. Accessed 2 Oct 2022
R. Xie, J. Fang, C. Liu, Impact and spatial spillover effect of transport infrastructure on urban environment. Energy Procedia 104, 227–232 (2016)