Optimization and Decision Science: Operations Research, Inclusion and Equity. ODS, Florence, Italy, August 30–September 2, 2022 (AIRO Springer Series, 9). ISBN: 3031288629, 9783031288623

This volume collects peer-reviewed short papers presented at the Optimization and Decision Science conference (ODS 2022)


Table of contents:
Organization
Preface
Contents
About the Editors
Variational Inequalities, Equilibria and Games
Time-Dependent Generalized Nash Equilibria in Social Media Platforms
1 Introduction
2 The Game Theory Model of Digital Content Competition
3 The Generalized Nash Equilibrium Formulation
3.1 A Differential Game Model
4 An Illustrative Example
5 Conclusions
References
A General Cournot-Nash Equilibrium Principle and Applications to the COVID-19 Pandemic
1 Introduction
2 Preliminaries and Notations
3 The Model
4 Numerical Example
References
Environmental Damage Reduction: When Countries Face Conflicting Objectives
1 Introduction
2 The Global Emission Game
3 Sets of Equilibria for the Global Emission Game
4 Concluding Remarks
References
Nonsmooth Hierarchical Multi Portfolio Selection
1 Introduction
2 A Non-cooperative Multi-agent Hierarchical Model for Multi-portfolio Selection
3 Lower-Level NEP Properties
4 Upper-Level GNEP Properties
5 Conclusions
References
A Multiclass Network International Migration Model Under Shared Regulations
1 Introduction
2 Model
3 Numerical Examples
4 Conclusion and Further Research Perspectives
References
Optimization and Machine Learning
GPS Data Mining to Infer Fleet Operations for Personalised Product Upselling
1 Introduction
2 The System
2.1 Hotspot Detection and Classification
2.2 Route Identification
2.3 Operations Identification
2.4 Route Optimisation
2.5 Return on Investment, Fleet Feasibility and Personalised Marketing Material
3 Experimental Results
4 Conclusion
References
Voronoi Recursive Binary Trees for the Optimization of Nonlinear Functionals
1 Introduction
2 Description of the Voronoi Optimizing Tree Method
3 Ensembles of Voronoi Optimizing Trees
4 Experimental Results
References
Mathematical Programming and Machine Learning for a Task Allocation Game
1 Introduction
2 Models
2.1 Score Forecasting Models
2.2 Optimization Models
3 Computational Evaluation
3.1 Quality of Regression Models
3.2 Integration of Regression Models in Mixed Integer Programs
3.3 Comparing Optimization Models with Allocation Policies
4 Conclusions
References
Global Optimization (Continuous, Non-linear and Multiobjective Optimization)
Random Projections for Semidefinite Programming
1 Introduction
2 Understanding Random Projections
3 Projected SDP: Motivation and Formulation
4 Theory of Projected SDP
4.1 A New Solution Retrieval Method
5 Computational Results
5.1 New Tests on Random Instances
5.2 Tests on SDP Relaxations of the DGP
5.3 Tests on SDP Relaxations of the ACOPF
5.4 Closing Remarks
References
Approaches to ESG—Integration in Portfolio Optimization Using MOEAs
1 Introduction
2 Literature Review
3 Approximating the M-V-ESG Surface with ev-MOGA
4 Illustrative Example
5 Conclusion
References
Optimization Under Uncertainty
Complexity Issues in Interval Linear Programming
1 Introduction
1.1 Linear Programming
1.2 Interval Analysis
1.3 Interval Linear Programming
1.4 Notation
2 Image of Optimal Value
3 Connectedness
3.1 Sufficient Condition
4 Convexity
5 Strong Optimality
6 Conclusion
References
Dynamic Pricing in the Electricity Retail Market: A Stochastic Bi-Level Approach
1 Introduction
1.1 Literature Review and Contribution
2 Problem Definition and Model Formulation
2.1 The LL Problem
3 Problem Solution and Computational Experiments
3.1 The Single Level Reformulation
3.2 The Case Study
3.3 Results and Discussion
4 Conclusions
References
Risk-averse Approaches for a Two-Stage Assembly-to-Order Problem
1 Introduction and Paper Positioning
2 Mathematical Models
3 Instance Generation
4 Computational Experiments
5 Conclusions and Future Work
References
Combinatorial Optimization
Bi-dimensional Assignment in 5G Periodic Scheduling
1 Introduction
2 System Model
3 Conflict-Based (CB) Formulations
4 Matrix-Based (MB) Formulations
5 Computational Results
References
Capacitated Disassembly Lot-Sizing Problem with Disposal Decisions for Multiple Product Types with Parts Commonality
1 Introduction
2 Problem Statement and Formulation
3 Fix-and-Optimize Heuristic
4 Computational Experiments
5 Conclusion
References
The Crop Plant Scheduling Problem
1 Introduction
2 Related Literature
3 Methodology
3.1 Growing Degree Units Forecasting
3.2 Data Preprocessing
3.3 Mathematical Model
4 Results
5 Conclusions
References
A MILP Formulation and a Metaheuristic Approach for the Scheduling of Drone Landings and Payload Changes on an Automatic Platform
1 Introduction
2 Problem Formulation
3 A Metaheuristic Algorithm Based on Direct Search
4 Numerical Results
5 Conclusions
References
A Flexible Job Shop Scheduling Model for Sustainable Manufacturing
1 Introduction
1.1 Literature Review
2 Optimization Model for Efficient Production Management
2.1 Optimization Model Formulation
2.2 Implementation and Testing of FJSS Optimization Model
3 Case Study: Multinational Corporate
3.1 Model Customization on Product
3.2 Results and Conclusion
References
Selection of Cultural Sites via Optimization
1 Introduction
2 Problem Description and Mathematical Formulation
3 Computational Experience
4 Conclusions
References
Integer Linear Programming Formulations for the Fleet Quickest Routing Problem on Grids
1 Introduction
2 Literature Review and Relevant Results
3 Integer Linear Programming Formulations
4 Computational Results and Conclusions
References
Transportation and Mobility
C-Weibit Discrete Choice Model: A Path Based Approach
1 Introduction
2 Literature Review
3 From Weibit to C-weibit Path Choice Model
3.1 Introducing the Weibit Model
3.2 C-Weibit Model Formulation
4 Experimentation
5 Conclusions
References
Receding-Horizon Dynamic Optimization of Port-City Traffic Interactions Over Shared Urban Infrastructure
1 Introduction
2 System Description and Dynamic Model
3 Receding-Horizon Model-Predictive Control
4 Simulation Results
5 Conclusions
References
Ten Years of Routist: Vehicle Routing Lessons Learned from Practice
1 Introduction
2 Where we Started
2.1 Route Evaluation
2.2 The Optimization Algorithm
3 Algorithmic Improvements
3.1 Set Covering
3.2 Caching
3.3 Nearest Drivers
3.4 Equivalent Stops
4 Making the Customers Happier
4.1 Route Editing
4.2 Balancing the Routes
4.3 Variants of Interest
5 Conclusions
References
Assisting Passengers on Rerouted Train Service Using Vehicle Sharing System
1 Introduction
2 Related Work
3 Mathematical Model
3.1 Problem Data
3.2 Space-Time Graph
3.3 ILP Formulation
4 Case Study
4.1 Case Description
4.2 Results
5 Conclusion
References
Health Care Management
Optimization for Surgery Department Management: An Application to a Hospital in Naples
1 Introduction
2 Problem Definition
3 Mathematical Model
4 Experimental Results
5 Conclusions
References
The Ambulance Diversion Phenomenon in an Emergency Department Network: A Case Study
1 Introduction
2 Generalities on Ambulance Diversion and the Case Study
3 The Discrete Event Simulation Model
4 Statement of the Simulation–Based Optimization Problem
5 SBO Implementation and Experimental Results
6 Conclusions
References
Applications
Reducing the Supply-Chain Nervosity Thanks to Flexible Planning
1 Introduction
2 Considered Problem and Proposed Approach
3 Results
4 Conclusion
References
Design Forward and Reverse Closed-Loop Supply Chain to Improve Economic and Environmental Performances
1 Introduction
2 Literature Review
3 Problem Statement and Modelling
4 Numerical Analysis
5 Conclusions
References
Supply Chain Design and Cost Allocation in a Collaborative Three-Echelon Supply Network: A Literature Review
1 Introduction
2 Methodology
3 Literature Review
4 Discussion
5 Conclusion
References
A Mathematical Model to Locate Services of General Economic Interest
1 Introduction
2 Mathematical Model
3 Case Study
4 Computational Experiments
4.1 Results
5 Conclusions
References
Author Index

AIRO Springer Series 9

Paola Cappanera · Matteo Lapucci · Fabio Schoen · Marco Sciandrone · Fabio Tardella · Filippo Visintin (Editors)

Optimization and Decision Science: Operations Research, Inclusion and Equity. ODS, Florence, Italy, August 30–September 2, 2022

AIRO Springer Series Volume 9

Editor-in-Chief
Daniele Vigo, Dipartimento di Ingegneria dell'Energia Elettrica e dell'Informazione "Guglielmo Marconi", Alma Mater Studiorum Università di Bologna, Bologna, Italy

Series Editors
Alessandro Agnetis, Dipartimento di Ingegneria dell'Informazione e Scienze Matematiche, Università degli Studi di Siena, Siena, Italy
Edoardo Amaldi, Dipartimento di Elettronica, Informazione e Bioingegneria (DEIB), Politecnico di Milano, Milan, Italy
Francesca Guerriero, Dipartimento di Ingegneria Meccanica, Energetica e Gestionale (DIMEG), Università della Calabria, Rende, Italy
Stefano Lucidi, Dipartimento di Ingegneria Informatica Automatica e Gestionale "Antonio Ruberti" (DIAG), Università di Roma "La Sapienza", Rome, Italy
Enza Messina, Dipartimento di Informatica Sistemistica e Comunicazione, Università degli Studi di Milano-Bicocca, Milan, Italy
Antonio Sforza, Dipartimento di Ingegneria Elettrica e Tecnologie dell'Informazione, Università degli Studi di Napoli Federico II, Naples, Italy

The AIRO Springer Series focuses on the relevance of operations research (OR) in the scientific world and in real-life applications. The series publishes only peer-reviewed works, such as contributed volumes, lecture notes, and monographs in English, resulting from workshops, conferences, courses, schools, seminars, and research activities carried out by AIRO, Associazione Italiana di Ricerca Operativa – Optimization and Decision Sciences: http://www.airo.org/index.php/it/. The books in the series discuss recent results and analyze new trends in the following areas: Optimization and Operations Research, including Continuous, Discrete and Network Optimization, and related industrial and territorial applications. Interdisciplinary contributions, showing a fruitful collaboration of scientists with researchers from other fields to address complex applications, are welcome. The series is aimed at providing useful reference material to students, and to academic and industrial researchers, at an international level. Should an author wish to submit a manuscript, please note that this can be done by directly contacting the series Editorial Board, which is in charge of the peer-review process. THE SERIES IS INDEXED IN SCOPUS.

Paola Cappanera · Matteo Lapucci · Fabio Schoen · Marco Sciandrone · Fabio Tardella · Filippo Visintin Editors

Optimization and Decision Science: Operations Research, Inclusion and Equity ODS, Florence, Italy, August 30–September 2, 2022

Editors

Paola Cappanera, Department of Information Engineering, University of Florence, Florence, Italy
Matteo Lapucci, Department of Information Engineering, University of Florence, Florence, Italy
Fabio Schoen, Department of Information Engineering, University of Florence, Florence, Italy
Marco Sciandrone, Department of Computer, Control and Management Engineering, Sapienza University of Rome, Rome, Italy
Fabio Tardella, Department of Information Engineering, University of Florence, Florence, Italy
Filippo Visintin, Department of Industrial Engineering, University of Florence, Florence, Italy

ISSN 2523-7047 ISSN 2523-7055 (electronic) AIRO Springer Series ISBN 978-3-031-28862-3 ISBN 978-3-031-28863-0 (eBook) https://doi.org/10.1007/978-3-031-28863-0 © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. This Springer imprint is published by the registered company Springer Nature Switzerland AG The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland

Organization

Program Committee Chairs: Paola Cappanera, Fabio Schoen

Program Committee Members: Alessandro Agnetis, Daniela Ambrosino, Stefania Bellavia, Patrizia Beraldi, Immanuel Bomze, Valentina Cacchiani, Paola Cappanera, Raffaele Cerulli, Andrea D'Ariano, Patrizia Daniele, Mauro Dell'Amico, Carlo Filippi, Francesca Guerriero, Marco Locatelli, Matteo Lapucci, Francesca Maggioni, Carlo Meloni, Dario Pacciarelli, Lorenzo Peirano, Alice Raffaele, Alberto Santini, Fabio Schoen, Antonio Sforza, Marco Sciandrone, M. Grazia Speranza, Giuseppe Stecca, Fabio Tardella, Filippo Visintin

Reviewers: Alessandro Agnetis, Daniela Ambrosino, Stefania Bellavia, Patrizia Beraldi, Immanuel Bomze, Valentina Cacchiani, Paola Cappanera, Lorenzo Castelli, Mirko Cavecchia, Raffaele Cerulli, Luca Consolini, Ciriaco D'Ambrosio, Andrea D'Ariano, Patrizia Daniele, Luigi De Giovanni, Mauro Dell'Amico, Carlo Filippi, Francesca Guerriero, Matteo Lapucci, Marco Locatelli, Francesca Maggioni, Carlo Meloni, Dario Pacciarelli, Alberto Santini, Lorenzo Peirano, Fabio Schoen, Marco Sciandrone, Antonio Sforza, M. Grazia Speranza, Andrea Spinelli, Giuseppe Stecca, Fabio Tardella, Filippo Visintin

Preface

This volume is devoted to the peer-reviewed short papers accepted for presentation at the Optimization and Decision Science conference (ODS 2022), held in Florence (Italy) from August 30 to September 2. The conference was organized by the Global Optimization Laboratory of the University of Florence, with the support of DINFO (Dipartimento di Ingegneria dell'Informazione) and AIRO (the Italian Association for Operations Research). As usual, the conference was open to the fields of operations research, optimization, problem solving, decision-making and their applications in the most diverse domains. However, a special focus was set on the theme "Operations Research: Inclusion and Equity". Part of the success of the conference was surely due to the very high quality of our keynote speakers:

• Prof. Anna Nagurney (Amherst University—"Labor and Supply Chain Networks: It's All About People");
• Prof. Dimitris Bertsimas (MIT—"HAIM: Holistic AI for Medicine");
• Prof. Dick den Hertog (University of Amsterdam—"Analytics for a Better World");
• Prof. Maria Paola Scaparra (University of Kent—"Leveraging OR to build more sustainable, resilient, and equitable communities in Southeast Asia").

All of them, whom we deeply thank, offered wide, deep and stimulating points of view, closely connected to one another, on how Operational Research, Analytics and Artificial Intelligence can, and in fact do, have a very strong impact on improving the quality of life in fields as diverse as health care, the support of developing countries, the "Zero Hunger" UN objective and much more. Thanks to a commendable initiative promoted by Springer, and thanks to the four keynote speakers who gave their consent, a recording of their presentations is available, after a free registration, on the AIRO channel of the Cassyni system (https://cassyni.com/s/airo-springer).

A total of 30 contributions were eventually accepted. They cover a wide spectrum of methodologies and applications. Specifically, they feature the following topics: (i) Variational Inequalities, Equilibria and Games, (ii) Optimization and Machine Learning, (iii) Global Optimization (Continuous, Non-linear and Multiobjective Optimization), (iv) Optimization Under Uncertainty, (v) Combinatorial Optimization, (vi) Transportation and Mobility, (vii) Health Care Management, and (viii) Applications. Each contribution has undergone a peer-review process involving at least two reviewers selected from the Program Committee.

We would like to thank the four keynote speakers, all the members of the Program Committee, the AIRO Steering Committee, who gave us the opportunity to host ODS 2022, and all the speakers, authors, attendees and reviewers for their valuable help in guaranteeing the quality of the accepted contributions. Each of them has contributed to the success of ODS 2022.

Paola Cappanera (Florence, Italy)
Matteo Lapucci (Florence, Italy)
Fabio Schoen (Florence, Italy)
Marco Sciandrone (Rome, Italy)
Fabio Tardella (Florence, Italy)
Filippo Visintin (Florence, Italy)

Contents

Variational Inequalities, Equilibria and Games

Time-Dependent Generalized Nash Equilibria in Social Media Platforms (Georgia Fargetta and Laura Scrimali)
A General Cournot-Nash Equilibrium Principle and Applications to the COVID-19 Pandemic (Annamaria Barbagallo and Serena Guarino Lo Bianco)
Environmental Damage Reduction: When Countries Face Conflicting Objectives (M. A. Caraballo, A. Zapata, L. Monroy, and A. M. Mármol)
Nonsmooth Hierarchical Multi Portfolio Selection (Lorenzo Lampariello, Simone Sagratella, and Valerio Giuseppe Sasso)
A Multiclass Network International Migration Model Under Shared Regulations (Mauro Passacantando and Fabio Raciti)

Optimization and Machine Learning

GPS Data Mining to Infer Fleet Operations for Personalised Product Upselling (Luca Bravi, Andrew Harbourne-Thomas, Alessandro Lori, Peter Mitchell, Samuele Salti, Leonardo Taccari, and Francesco Sambo)
Voronoi Recursive Binary Trees for the Optimization of Nonlinear Functionals (Cristiano Cervellera, Danilo Macciò, and Francesco Rebora)
Mathematical Programming and Machine Learning for a Task Allocation Game (Alberto Ceselli and Elia Togni)

Global Optimization (Continuous, Non-linear and Multiobjective Optimization)

Random Projections for Semidefinite Programming (Leo Liberti, Benedetto Manca, Antoine Oustry, and Pierre-Louis Poirion)
Approaches to ESG—Integration in Portfolio Optimization Using MOEAs (Ana Garcia-Bernabeu, Adolfo Hilario-Caballero, José Vicente Salcedo, and Francisco Salas-Molina)

Optimization Under Uncertainty

Complexity Issues in Interval Linear Programming (Milan Hladík)
Dynamic Pricing in the Electricity Retail Market: A Stochastic Bi-Level Approach (Patrizia Beraldi and Sara Khodaparasti)
Risk-averse Approaches for a Two-Stage Assembly-to-Order Problem (Edoardo Fadda, Daniele Giovanni Gioia, and Paolo Brandimarte)

Combinatorial Optimization

Bi-dimensional Assignment in 5G Periodic Scheduling (Giulia Ansuini, Antonio Frangioni, Laura Galli, Giovanni Nardini, and Giovanni Stea)
Capacitated Disassembly Lot-Sizing Problem with Disposal Decisions for Multiple Product Types with Parts Commonality (Meisam Pour-Massahian-Tafti, Matthieu Godichaud, and Lionel Amodeo)
The Crop Plant Scheduling Problem (Nikola Obrenović, Selin Ataç, Stefano Bortolomiol, Sanja Brdar, Oskar Marko, and Vladimir Crnojević)
A MILP Formulation and a Metaheuristic Approach for the Scheduling of Drone Landings and Payload Changes on an Automatic Platform (Elena Ausonio, Patrizia Bagnerini, and Mauro Gaggero)
A Flexible Job Shop Scheduling Model for Sustainable Manufacturing (Rosita Guido, Gabriele Zangara, Giuseppina Ambrogio, and Domenico Conforti)
Selection of Cultural Sites via Optimization (Annarita De Maio, Roberto Musmanno, Aurora Skrame, and Francesca Vocaturo)
Integer Linear Programming Formulations for the Fleet Quickest Routing Problem on Grids (Carla De Francesco and Luigi De Giovanni)

Transportation and Mobility

C-Weibit Discrete Choice Model: A Path Based Approach (Massimo Di Gangi, Antonio Polimeni, and Orlando Marco Belcore)
Receding-Horizon Dynamic Optimization of Port-City Traffic Interactions Over Shared Urban Infrastructure (Cristiano Cervellera, Danilo Macciò, and Francesco Rebora)
Ten Years of Routist: Vehicle Routing Lessons Learned from Practice (David Di Lorenzo, Tommaso Bianconcini, Leonardo Taccari, Marco Gualtieri, Paolo Raiconi, and Alessandro Lori)
Assisting Passengers on Rerouted Train Service Using Vehicle Sharing System (Maksim Lalić, Nikola Obrenović, Sanja Brdar, Ivan Luković, and Michel Bierlaire)

Health Care Management

Optimization for Surgery Department Management: An Application to a Hospital in Naples (Maurizio Boccia, Andrea Mancuso, Adriano Masone, Francesco Messina, Antonio Sforza, and Claudio Sterle)
The Ambulance Diversion Phenomenon in an Emergency Department Network: A Case Study (Christian Piermarini and Massimo Roma)

Applications

Reducing the Supply-Chain Nervosity Thanks to Flexible Planning (Nicolas Zufferey, Marie-Sklaerder Vié, and Leandro Coelho)
Design Forward and Reverse Closed-Loop Supply Chain to Improve Economic and Environmental Performances (E. P. Mezatio, M. M. Aghelinejad, L. Amodeo, and I. Ferreira)
Supply Chain Design and Cost Allocation in a Collaborative Three-Echelon Supply Network: A Literature Review (Tatiana Grimard, Nadia Lehoux, and Luc Lebel)
A Mathematical Model to Locate Services of General Economic Interest (S. Baldassarre, G. Bruno, M. Cavola, and E. Pipicelli)

Author Index

About the Editors

Paola Cappanera is Associate Professor of Operations Research at the Department of Information Engineering of the University of Florence. She has a multidisciplinary educational background. Her main research interests are in combinatorial optimization and in the design of efficient algorithms, with particular attention to health care, telecommunications and transportation settings. She has investigated in depth several research topics stemming from very different real contexts; the unifying feature is the focus on network-flow-based methodologies and problem structure. She received the Omega Best Paper Award 2018 in the health care and optimization area. She currently serves on the Editorial Boards of Flexible Services and Manufacturing Journal, Health Care Management Science and Operations Research for Health Care.

Matteo Lapucci received his bachelor's and master's degrees in Computer Engineering from the University of Florence in 2015 and 2018, respectively. In 2022, he received his Ph.D. degree in Smart Computing jointly from the Universities of Florence, Pisa and Siena. He is currently a Postdoctoral Research Fellow at the Department of Information Engineering of the University of Florence. His main research interests include theory and algorithms for sparse, multi-objective and large-scale constrained non-linear optimization.

Fabio Schoen is a Professor of Operations Research at the University of Florence (Italy). His main research interests deal with optimization methods, with particular reference to algorithms for non-convex optimization, the interface between machine learning and optimization, and the development of optimization models for applications. He is the scientific coordinator of the Ph.D. program in Information Engineering at the University of Florence, and a member of the Editorial Boards of EURO Journal on Computational Optimization, Computational Optimization and Applications, Journal of Global Optimization, Optimization Methods and Software, and Operations Research Letters. He was one of the founders of KKT srl (now Verizon Connect Italy) and Intuendi.com.


Marco Sciandrone is a Full Professor of Operations Research at Sapienza University of Rome. He received his M.Sc. degree in electrical engineering and his Ph.D. degree in systems engineering from Sapienza University of Rome, in 1991 and 1997, respectively. From 1998 to 2006, he was a researcher at the Institute of Systems Analysis and Computer Science (National Research Council) in Rome. From 2006 to 2017 and from 2018 to 2021, he was, respectively, associate professor and full professor at the University of Florence. His research interests include operations research, non-linear optimization and machine learning. He has published more than 60 papers in international journals. He is Associate Editor of the journals Optimization Methods and Software and 4OR.

Fabio Tardella is Full Professor in Operations Research at the Department of Information Engineering (DINFO) of the University of Florence, formerly at Sapienza University of Rome, where he served as President of the Master Program in Finance and Insurance and Head of the Department of Mathematics for Economic, Financial and Insurance Decisions (now merged into the MEMOTEF Department). His research interests include optimization in finance and insurance, combinatorial optimization and its links with continuous optimization, quadratic programming, discrete convexity, lattice programming and submodularity.

Filippo Visintin is Associate Professor of Service Design and Management at the School of Engineering of the University of Florence. He is the Scientific Director of the Information Based Industrial Services Laboratory (IBIS Lab, www.ibis.unifi.it). He is a member of the Italian Management Engineering Association (Associazione italiana Ingegneria Gestionale, AiIG). He is Co-founder and Co-owner of SmartOperations Srl, a University of Florence spin-off (www.smartoperations.it). His research interests include servitization of manufacturing, healthcare operations management, discrete-event simulation and circular economy. He is the author of several research papers published in prestigious international journals such as the European Journal of Operational Research, Industrial Marketing Management, International Journal of Production Economics, Computers in Industry, Computers and Industrial Engineering, Flexible Services and Manufacturing Journal, Management Decision, Journal of Intelligent Manufacturing, Production Planning and Control, IMA Journal of Management Mathematics, and PLOS ONE. He served as guest editor for Management Decision and Flexible Services and Manufacturing Journal.

Variational Inequalities, Equilibria and Games

Time-Dependent Generalized Nash Equilibria in Social Media Platforms

Georgia Fargetta and Laura Scrimali

Abstract In this paper, we develop a dynamic network model of the competition of digital contents on social media platforms, assuming that there is a known and fixed upper bound on the total amount of views. In particular, we consider a two-layer network consisting of creators and viewers. Each creator seeks to maximize the profit by determining views and likes. The problem is formulated as a time-dependent generalized Nash equilibrium for which we provide the associated evolutionary variational inequality, using the variational equilibrium concept. We also discuss a possible differential game formulation. Finally, using a discrete-time approximation of the continuous time adjustment process, we present a numerical example.

(G. Fargetta · L. Scrimali: Department of Mathematics and Computer Science, University of Catania, Catania, Italy. Contribution to the Invited Session "Recent Advances in Variational Inequalities and Equilibrium Problems".)

1 Introduction

Nowadays, due to the development of new technologies, smartphones and internet connectivity devices are becoming cheaper and easier to access, thus expanding the possibilities of reaching other people. Every year, an increasing number of people sign up for social media: in 2019, around 2.77 billion people were using social media, and, in 2021, more than 3 billion. Of course, this rising popularity boosts social media company profits. YouTube is the largest video-sharing social media site in the world. Users may upload videos on the platform, view videos from other users, and interact with them. In 2019, it had an average of 2.3 billion monthly active users, and YouTube users spend an average of 23.2 hours per month watching videos on the platform through the Android app.

In this paper, we study the competition among the creators of user-generated contents posted on a social media platform. Specifically, we consider a two-layer dynamic network consisting of creators and viewers. Each creator seeks to determine the optimal views and likes, so as to maximize the profit. We assume that there is a known and fixed upper bound on the total amount of views and, hence, we formulate the competitive interaction of creators as a time-dependent generalized Nash equilibrium. Generalized Nash equilibrium problems (GNEPs) are non-cooperative games where the strategy of each player may depend on the strategies of the rivals. A large class of problems can be formulated as GNEPs, such as oligopolies, transportation networks, and electricity market models. In [17], Rosen introduced a case of GNEPs where the players have to share some constraints. For this class of problems, the authors of [9] showed that, in finite-dimensional spaces, certain solutions can be computed by solving a variational inequality and that the KKT multipliers of all players are equal. An infinite-dimensional formulation of GNEPs was studied in [2], where the formulation of the GNEP as an evolutionary variational inequality problem is proved in the general setting of quasi-convex functions. In [11], the authors extended the result in [9] to an infinite-dimensional functional setting. In [15], the authors studied GNEPs in Lebesgue spaces by means of a family of variational inequalities parameterized by an $L^\infty$ vector. Other contributions to the search for non-variational solutions are given in [3, 10]. Motivated by all the above analysis, we construct a dynamic equilibrium model of the competition among creators on a social media platform. Our contribution consists in improving the models in [12, 13] by considering time-varying data. In addition, we consider as our decision variables the views, given by a fraction of the number of subscribers, and the likes. Then, the problem is modelled as a time-dependent generalized Nash equilibrium and, using the concept of variational equilibrium, we derive an evolutionary variational inequality formulation. This gives rise to challenging problems in both theory and computations (see [8]). Moreover, following [5], we formulate the competition problem as a dynamic game, derived from an infinitely repeated simultaneous game. Therefore, we may propose a unified approach and provide a differential game formulation for the dynamic generalized Nash equilibrium model.

The paper is organized as follows. In Sect. 2, we introduce the game theory model and give the related optimization problem. In Sect. 3, we characterize the GNEP as a solution of an evolutionary variational inequality. Moreover, we discuss a differential game approach. In Sect. 4, we provide an illustrative example, and, finally, Sect. 5 is dedicated to the conclusions.

2 The Game Theory Model of Digital Content Competition

In this section, we present a dynamic game theory model of the competition among digital creators. We consider a network that consists of m creators and n groups of viewers with homogeneous interests, feelings and age. Each creator posts a content in the planning horizon $[0, T]$. The topology of the network of this model is the same in $[0, T]$. Each creator $i \in \{1, \dots, m\}$ posts a content at time t that can be accessed by each group $j \in \{1, \dots, n\}$. In order to express the time dependence, we choose as our functional setting the Hilbert space $L^2([0, T], \mathbb{R}^k)$ of square-integrable functions defined in the closed interval $I = [0, T]$, endowed with the scalar product

$$\langle \cdot, \cdot \rangle_{L^2} = \int_0^T \langle \cdot, \cdot \rangle \, dt$$

and the usual associated norm $\| \cdot \|_{L^2}$. In particular, we suppose that the functional space for the trajectories of views is $L^2(I, \mathbb{R}^{mn})$, while that for the trajectories of likes is $L^2(I, \mathbb{R}^m)$. Let $v_{ij}(t)$ denote the views given by the fraction of subscribers who have accessed the content generated by i at time $t \in I$. We group the $\{v_{ij}\}$ elements for all j into the vector $v_i \in L^2(I, \mathbb{R}^n_+)$, and then we group all the vectors $v_i$ for all i into the vector $v \in L^2(I, \mathbb{R}^{mn}_+)$. In addition, $\ell_i(t)$ denotes the percentage of likes of content i at time t, and takes values in the interval $[0, 1]$. We group the likes of all creators into the vector $\ell \in L^2(I, [0, 1]^m)$. Usually, a content must reach a minimum amount of accesses to raise the interest of viewers and enter the competition with the other contents. We denote this threshold by $s > 0$. Thus, for each posted content i, the views must satisfy the condition

$$\sum_{j=1}^{n} v_{ij}(t) \ge s, \quad i = 1, \dots, m, \ \text{a.e. } t \in I. \tag{1}$$

In addition, since each content has a lifetime, we denote by $\bar{v}_j(t)$ the upper bound on the total amount of views at time t in each group j. We also assume $\bar{v}_j(t)$ to be known and fixed, and the following condition must hold:

$$\sum_{i=1}^{m} v_{ij}(t) \le \bar{v}_j(t), \quad j = 1, \dots, n, \ \text{a.e. } t \in I. \tag{2}$$

Each creator i incurs a production cost $\pi_i(v, \ell_i)$, $i = 1, \dots, m$. We assume that the production cost of i may depend upon the entire amount of views and on its own likes. We also assume that creators may accelerate the viewcount by paying a fee for the advertisement service in the social media platform. Hence, for each creator i, we introduce the advertisement cost function

$$c_i(t) \sum_{j=1}^{n} v_{ij}(t), \quad i = 1, \dots, m, \ \text{a.e. } t \in I, \tag{3}$$

with $c_i(t) \ge 0$, $i = 1, \dots, m$, a.e. $t \in I$. Similarly, the revenue of creator i, deriving from hosting advertisements and benefits from firms, is given by

$$p_i(t) \sum_{j=1}^{n} v_{ij}(t), \quad i = 1, \dots, m, \ \text{a.e. } t \in I, \tag{4}$$

with $p_i(t) \ge 0$, $i = 1, \dots, m$, a.e. $t \in I$. The advertisements hosted in the videos can be of several types, i.e. the creator can decide to insert a small advertising spot, shot by himself, during his video, or to interrupt the video by broadcasting advertisements managed by the YouTube platform. We associate to each group of viewers j the feedback function $f_j(t, v, \ell)$, which represents the evaluation of the contents and depends upon the entire amount of views and likes. Now, we define the popularity function of creator i as the function

$$\sum_{j=1}^{n} f_j(t, v, \ell)\, v_{ij}, \quad i = 1, \dots, m, \ \text{a.e. } t \in I. \tag{5}$$

We consider the content diffusion model as a game where the players are the creators, who compete for the diffusion of their contents. Strategic variables are content views v and likes $\ell$. The profit for player i, denoted by $U_i(t, v, \ell)$, $i = 1, \dots, m$, is the difference between total revenue and total costs, namely, for a.e. $t \in I$,

$$U_i(t, v, \ell) = \sum_{j=1}^{n} f_j(t, v, \ell)\, v_{ij}(t) + p_i(t) \sum_{j=1}^{n} v_{ij}(t) - \pi_i(t, v, \ell_i) - c_i(t) \sum_{j=1}^{n} v_{ij}(t). \tag{6}$$

Thus, the set of strategies of creator i is given by

$$\mathcal{K}_i = \Big\{ (v_i, \ell_i) \in L^2(I, \mathbb{R}^{n+1}) : v_{ij}(t) \ge 0, \ \forall j; \ \sum_{j=1}^{n} v_{ij}(t) \ge s; \ 0 \le \ell_i(t) \le 1, \ \text{a.e. in } I \Big\}.$$

We also define $\mathcal{K} = \prod_{i=1}^{m} \mathcal{K}_i$. In addition, players have to satisfy the shared constraints (2). Hence, we define the set S as follows:

$$S = \Big\{ (v, \ell) \in L^2(I, \mathbb{R}^{mn+m}) : \sum_{i=1}^{m} v_{ij}(t) \le \bar{v}_j(t), \ j = 1, \dots, n, \ \text{a.e. in } I \Big\}.$$

We suppose that the production cost function $\pi_i(t, v, \ell_i)$, $\forall i$, is defined from $I \times \mathbb{R}^{mn} \times \mathbb{R}$ to $\mathbb{R}_+$, is measurable in t and continuous with respect to v and $\ell_i$. Moreover, we assume that $\frac{\partial \pi_i}{\partial v_{ij}}$ and $\frac{\partial \pi_i}{\partial \ell_i}$ exist and that they are measurable in t and continuous with respect to v and $\ell_i$. We also require that the feedback function $f_j(t, v, \ell)$, $\forall j$, is defined from $I \times \mathbb{R}^{mn} \times \mathbb{R}^m$ to $\mathbb{R}_+$, is measurable in t and continuous with respect to v and $\ell$. In addition, we assume that $\frac{\partial f_j}{\partial v_{ij}}$ and $\frac{\partial f_j}{\partial \ell_i}$ exist and that they are measurable in t and continuous with respect to v and $\ell$. Further, we set $u_i = (v_i, \ell_i)$, $u = (v, \ell)$, and require the following growth conditions, $\forall i, j$ and a.e. in I:

$$|\pi_i(t, u)| \le \alpha_1 (1 + \|u_i\|), \ \forall \ell, \qquad |f_j(t, u)| \le \alpha_2 (1 + \|u\|), \ \forall v, \ell, \tag{7}$$

$$\Big| \frac{\partial \pi_i}{\partial v_{ij}} \Big| \le \beta_1 (1 + \|v\|), \ \forall v, \qquad \Big| \frac{\partial \pi_i}{\partial \ell_i} \Big| \le \beta_2 (1 + \|\ell\|), \ \forall \ell, \tag{8}$$

$$\Big| \frac{\partial f_j}{\partial v_{ij}} \Big| \le \beta_3 (1 + \|v\|), \ \forall v, \qquad \Big| \frac{\partial f_j}{\partial \ell_i} \Big| \le \beta_4 (1 + \|\ell\|), \ \forall \ell, \tag{9}$$

with $\alpha_1, \alpha_2, \beta_1, \beta_2, \beta_3, \beta_4 > 0$. The above conditions will be useful in the following, since they ensure that problem (12) is well defined. In fact, if $(v_i, \ell_i, v_{-i}, \ell_{-i}) \in L^2(I, \mathbb{R}^{mn+m})$ and conditions (7)–(9) hold, then $U_i(t, v_i, \ell_i, v_{-i}, \ell_{-i}) \in L^2(I, \mathbb{R})$.

3 The Generalized Nash Equilibrium Formulation

In our model, the creators behave in a non-cooperative fashion, each one trying to maximize her profit. Since players have to satisfy the shared constraints (2), each player's strategy vector $(v_i, \ell_i)$ belongs to the set $\mathcal{K}_i$, but depends also on the rival players' strategies through the constraint $v \in S$. Therefore, the underlying equilibrium concept will be that of a time-dependent generalized Nash equilibrium; see [2, 9, 11, 15]. We use the notation $(v_{-i}, \ell_{-i})$ to indicate the variables of all players except i, so that, for any $t \in I$, $(v_{-i}(t), \ell_{-i}(t))$ is the vector formed by the decision variables of all players except player i at time $t \in I$. Following [2, 17], the strategy profile is chosen in a common subset K, and thus the admissible strategy set of each player is defined as

$$K_i(v_{-i}, \ell_{-i}) = \{ (v_i, \ell_i) \in L^2(I, \mathbb{R}^{n+1}) : (v_i, \ell_i, v_{-i}, \ell_{-i}) \in K \}. \tag{10}$$

Definition 1 Let $U_i : L^2(I, \mathbb{R}^{mn+m}) \to \mathbb{R}$ be the profit function of player i. A strategy $(v^*, \ell^*) \in K \subseteq L^2(I, \mathbb{R}^{mn+m})$ is a time-dependent generalized Nash equilibrium if and only if, for each player i, we have $(v_i^*, \ell_i^*) \in K_i(v_{-i}^*, \ell_{-i}^*)$ and

$$U_i(t, v_i, \ell_i, v_{-i}^*, \ell_{-i}^*) \le U_i(t, v^*, \ell^*), \quad \forall (v_i, \ell_i) \in K_i(v_{-i}^*, \ell_{-i}^*). \tag{11}$$

This means that $(v^*, \ell^*) \in K \subseteq L^2(I, \mathbb{R}^{mn+m})$ is a time-dependent generalized Nash equilibrium if, for each creator i, $(v_i^*, \ell_i^*) \in L^2([0, T], \mathbb{R}^{n+1})$ solves the following optimization problem:

$$\max_{(v_i, \ell_i) \in K_i(v_{-i}^*, \ell_{-i}^*)} \int_I U_i(t, v_i, \ell_i, v_{-i}^*, \ell_{-i}^*)\, dt. \tag{12}$$


Since the convex sets $K_i(v_{-i}^*, \ell_{-i}^*)$, $\forall i$, depend on the solution, the GNEP can be formulated as a quasi-variational inequality. However, following [11, 17], considering the structure of the feasible strategies, we are allowed to reduce the problem to a variational inequality.
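In other words (a heuristic sketch of our own, ignoring for brevity the nonnegativity and threshold constraints in $\mathcal{K}_i$), at a variational equilibrium the KKT multiplier $\lambda_j(t) \ge 0$ associated with the shared constraint (2) is the same for every player:

$$\frac{\partial U_i(t, v^*, \ell^*)}{\partial v_{ij}} = \lambda_j(t), \qquad \lambda_j(t)\Big(\sum_{i=1}^{m} v_{ij}^*(t) - \bar{v}_j(t)\Big) = 0, \qquad \forall i, j, \ \text{a.e. } t \in I,$$

whereas a generic generalized Nash equilibrium allows player-dependent multipliers. This is the sense in which the variational inequality of Definition 2 below selects particular solutions of the GNEP (cf. [9, 11]).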

Definition 2 Let us assume that, for each creator i, the profit function $U_i(t, v, \ell)$ is concave with respect to the variables $(v_{i1}, \dots, v_{in})$ and $\ell_i$, continuously differentiable, and that the growth conditions (7)–(9) hold. A variational inequality approach to finding a GNE is to define the set $K = \mathcal{K} \cap S$ and to solve the evolutionary variational inequality: find $(v^*, \ell^*) \in K$ such that

$$\int_0^T \Bigg[ -\sum_{i=1}^{m} \sum_{j=1}^{n} \frac{\partial U_i(t, v^*, \ell^*)}{\partial v_{ij}} \big( v_{ij}(t) - v_{ij}^*(t) \big) - \sum_{i=1}^{m} \frac{\partial U_i(t, v^*, \ell^*)}{\partial \ell_i} \big( \ell_i(t) - \ell_i^*(t) \big) \Bigg] dt \ge 0, \quad \forall (v, \ell) \in K. \tag{13}$$
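To make the operator in (13) explicit, the partial derivatives of the profit can be computed directly from (6); under the smoothness assumptions of Sect. 2, a routine differentiation gives

$$\frac{\partial U_i(t, v, \ell)}{\partial v_{ij}} = f_j(t, v, \ell) + \sum_{k=1}^{n} \frac{\partial f_k(t, v, \ell)}{\partial v_{ij}}\, v_{ik}(t) + p_i(t) - c_i(t) - \frac{\partial \pi_i(t, v, \ell_i)}{\partial v_{ij}},$$

$$\frac{\partial U_i(t, v, \ell)}{\partial \ell_i} = \sum_{k=1}^{n} \frac{\partial f_k(t, v, \ell)}{\partial \ell_i}\, v_{ik}(t) - \frac{\partial \pi_i(t, v, \ell_i)}{\partial \ell_i}.$$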

For a discussion on the existence of solutions, we refer the reader to [16]. As in [9, 11], we have the following result.

Theorem 1 Every solution of the variational inequality (13) is a solution of the GNEP.

3.1 A Differential Game Model

In contrast to the differential game approach in [1], the evolution of the state in our model is not governed by differential equations. In fact, we present the competition problem of creators as an evolutionary variational inequality. Thus, we are able to describe how creators adapt the choice of contents to be posted, and the acceleration mechanisms, in response to the reactions of viewers over the time horizon. However, we may provide a unified approach and present a differential game formulation for the dynamic generalized Nash equilibrium model. The model presented in this paper can be regarded as an infinitely repeated simultaneous-move game. In repeated games, the same static game, called the stage game, is repeated a finite or infinite number of times, and the result of each stage is observed by all the players before the new stage starts. In our case, the competitive game represents the stage game that is repeated and played in almost the same way at all instants t. In fact, at each repetition the game is slightly different, since creators adjust their strategies according to the demands that change over time. We also emphasize that all the creators make their decisions simultaneously at the beginning of the game and such simultaneous moves are repeated indefinitely. Therefore, each creator chooses his decisions without any knowledge of the decisions taken by the other creators.


Now, we focus on differential games, where time evolves continuously and the state evolution can be modelled by a set of differential equations given by

$$\dot{x}(t) = g(x(t), z_1(t), \dots, z_n(t)), \quad x(0) = x_0,$$

where $x(t)$ is the state, $z_1(t), \dots, z_n(t)$ are the controls and t is the time. We can transform the digital content competition game into a problem that can be approached as a differential game. Nevertheless, we emphasize that the game itself is not a differential game. We set

$$\Phi_i(t, v(t), \ell(t)) = -\sum_{j=1}^{n} \frac{\partial U_i(t, v^*, \ell^*)}{\partial v_{ij}} \big( v_{ij}(t) - v_{ij}^*(t) \big) - \frac{\partial U_i(t, v^*, \ell^*)}{\partial \ell_i} \big( \ell_i(t) - \ell_i^*(t) \big),$$

and observe that the competition problem of creator i is equivalent to

$$\min_{(v, \ell) \in K} \int_0^T \Phi_i(t, v(t), \ell(t))\, dt.$$

Now, we introduce the profit of creator i over the interval $[0, t]$:

$$G_i(t) = \int_0^t \Phi_i(\tau, v(\tau), \ell(\tau))\, d\tau.$$

Then, the problem of creator i becomes

$$\min\ G_i(T) \qquad \text{subject to} \qquad \frac{dG_i(t)}{dt} = \Phi_i(t, v(t), \ell(t)), \quad G_i(0) = 0,$$

together with the constraints in K. In the above formulation, $G_i(t)$ represents the state, whereas $v(t)$ and $\ell(t)$ are the controls.

Then, the problem of creator i becomes min G i (T ) dG i (t) = i (t, v(t), (t)), G i (0) = 0, subject to dt and the constraints in K. In the above formulation, G i (t) represents the state, whereas v(t) and (t) are the controls.

4 An Illustrative Example In this section, we present a numerical example. Even if the problem could be solved using optimal control tools, we will adopt a variational inequality approach. We focus on an example in which we consider two creators and one group of viewers who have the same preferences about the contents. The steps that we use to solve the example are the following: Firstly, we use a discretization procedure and select discrete points in the time interval (see [6, 7]). Then, We reduce our problem to solve a static variational inequality at each discrete point. Finally, We solve each static

[Fig. 1: Total number of views (a) and total percentage of likes (b) for each creator, for each time t ∈ {0, ..., 6}.]

[Fig. 2: Heat maps of the equilibrium solutions: (a) number of views and (b) percentage of likes $\ell_i$, over time t ∈ {0, ..., 6}, for each creator i = 1, 2 and the group of viewers j = 1.]

[Fig. 3: The net profit $U_i^{net}(t, v_{ij}(t), \ell_i(t))$ and gross profit $U_i(t, v_{ij}(t), \ell_i(t))$, considering the net and the gross viewcount, at each time t ∈ {0, ..., 6}, for all creators.]

Figure 1b shows the total percentage of likes that the group of viewers gives to the video at each instant of time. We note that the video of creator 2 has a small increase over time, whereas the other creator has a significant increase. Indeed, this confirms once again that video 1 is more appreciated by the audience. As a consequence, this appreciation is also reflected in Fig. 1a, where the number of views of video 1 is greater than the number of views of video 2. The same trend of the equilibrium solutions concerning the percentage of likes, i.e. $\ell_i$, can also be observed in Fig. 2b. It represents a heat map, where the x-axis and the y-axis are the creators i = 1, 2 and the time interval from the starting point until the sixth minute after the video has been posted, respectively. Figure 3 shows that obtaining a percentage of likes and a number of views with a slight increase over time brings a greater profit $U_2$ than $U_1$ in the first six minutes after the video has been posted. As we can observe in Figs. 1 and 2, the strategies of creator 1 could pay off over time with respect to the strategies of creator 2, even though the profit of the second creator remains higher than the profit of the first one in the first six minutes. In Fig. 3, we characterize the net profit of each creator with dotted lines. The difference between the net and the gross profit depends on the profit that a content earns from advertisement. Indeed, only 15% of the views count as profit from advertisement strategies, because the only views that make a creator earn money are those in which viewers have watched the advertisement for at least 30 s, or half of the advertisement for a very short video.

5 Conclusions

In this paper, we focused on a dynamic network model that allowed us to describe the complex social media platform mechanisms and the evolution of views over time. We showed that the underlying generalized Nash equilibrium problem can be represented by means of an evolutionary variational inequality. This may give the opportunity to use the powerful variational inequality theory for existence results, stability and sensitivity of solutions, and computational procedures. Moreover, we remark that a static approach is not suitable to follow the behavior of phenomena evolving in time, whereas a dynamic approach is more efficient and desirable. Finally, we suggested a possible differential game formulation, so as to unify GNEPs, variational inequalities and differential games. As a future research issue, we could conduct a Lagrange analysis of the multipliers to assess the role of the constraints.

References

1. Altman, E., Jain, A., Shimkin, N., Touati, C.: Dynamic games for analyzing competition in the Internet and in on-line social networks. In: Lasaulce, S., et al. (eds.) Network Games, Control, and Optimization, pp. 11–22. Springer, Berlin (2007)
2. Aussel, D., Gupta, R., Mehra, A.: Evolutionary variational inequality formulation of the generalized Nash equilibrium problem. J. Optim. Theory Appl. 169(1), 74–90 (2016)
3. Aussel, D., Sagratella, S.: Sufficient conditions to compute any solution of a quasivariational inequality via a variational inequality. Math. Methods Oper. Res. 85(1), 3–18 (2017)
4. Barbagallo, A.: On the regularity of retarded equilibria in time dependent traffic equilibrium problems. Nonlinear Anal. 71, 2406–2417 (2009)
5. Chan, C.K., Zhou, Y., Wong, K.H.: A dynamic equilibrium model of the oligopolistic closed-loop supply chain network under uncertain and time-dependent demands. Transp. Res. Part E: Logist. Transp. Rev. 118, 325–354 (2018)
6. Cojocaru, M.-G., Daniele, P., Nagurney, A.: Double-layered dynamics: a unified theory of projected dynamical systems and evolutionary variational inequalities. Eur. J. Oper. Res. 175, 494–507 (2006)
7. Cojocaru, M.-G., Daniele, P., Nagurney, A.: Projected dynamical systems, evolutionary variational inequalities, applications, and a computational procedure. In: Chinchuluun, A., Pardalos, P.M., Migdalas, A., Pitsoulis, L. (eds.) Pareto Optimality, Game Theory and Equilibria. Springer Optimization and Its Applications, vol. 17, pp. 387–406. Springer, New York (2005)
8. Daniele, P.: Dynamic Networks and Evolutionary Variational Inequalities. Edward Elgar Publishing, Cheltenham (2006)
9. Facchinei, F., Fischer, A., Piccialli, V.: On generalized Nash games and variational inequalities. Oper. Res. Lett. 35, 159–164 (2007)
10. Facchinei, F., Sagratella, S.: On the computation of all solutions of jointly convex generalized Nash equilibrium problems. Optim. Lett. 5(3), 531–547 (2011)
11. Faraci, F., Raciti, F.: On generalized Nash equilibrium in infinite dimension: the Lagrange multipliers approach. Optimization 64(2), 321–338 (2015)
12. Fargetta, G., Scrimali, L.: A game theory model of online content competition. In: Paolucci, M., Sciomachen, A., Uberti, P. (eds.) Advances in Optimization and Decision Science for Society, Services and Enterprises. AIRO Springer Series, vol. 3. Springer, Cham (2019)
13. Fargetta, G., Scrimali, L.R.M.: Generalized Nash equilibrium and dynamics of popularity of online contents. Optim. Lett. (2020)
14. Korpelevich, G.M.: The extragradient method for finding saddle points and other problems. Matekon 13, 35–49 (1977)
15. Mastroeni, G., Pappalardo, M., Raciti, F.: Generalized Nash equilibrium problems and variational inequalities in Lebesgue spaces. Minimax Theory Appl. 20(1–2), 47–64 (2020)
16. Maugeri, A., Raciti, F.: On existence theorems for monotone and nonmonotone variational inequalities. J. Convex Anal. 16(3–4), 899–911 (2009)
17. Rosen, J.B.: Existence and uniqueness of equilibrium points for concave n-person games. Econometrica 33, 520–534 (1965)

A General Cournot-Nash Equilibrium Principle and Applications to the COVID-19 Pandemic

Annamaria Barbagallo and Serena Guarino Lo Bianco

Abstract The paper deals with the study of an oligopolistic market equilibrium problem in which each firm produces several commodities and both production and demand excesses occur. The equilibrium definition is presented by means of a general Cournot-Nash equilibrium principle. Such a condition is characterized by a tensor variational inequality. Hence, the existence and uniqueness of an equilibrium solution is established by using theoretical results on tensor variational inequalities. Finally, an example is provided.

(A. Barbagallo: Department of Mathematics and Applications "R. Caccioppoli", University of Naples Federico II, Complesso Universitario Monte S. Angelo, Via Cintia, 80126 Naples, Italy. S. Guarino Lo Bianco: Department of Agricultural Sciences, University of Naples Federico II, Via Università 100, 80055 Portici (NA), Italy.)

1 Introduction

The aim of the paper is to present a general oligopolistic market equilibrium problem in the presence of production and demand excesses, namely the problem of finding a trade equilibrium in a supply-demand market between a finite number of spatially separated firms which produce several commodities and ship them to some demand markets. It is worth highlighting that demand excesses may occur when the supply cannot satisfy the demand, especially for fundamental goods. Moreover, since the commodity shipments are limited because of the capacity constraints of transportation (for example, trains, trucks and planes), production excesses may occur simultaneously. As a consequence, not only the production and transportation costs have to be considered, but storage costs also have to be introduced. Hence, the profit of each firm is the difference between the price that the demand markets are willing to pay for all the kinds of goods purchased and the previous costs.

15

16

A. Barbagallo and S. Guarino Lo Bianco

to such an observation, the model improves the ones presented in [2, 3, 5]. Indeed in the model in [2] any excesses do not occur. Instead in the ones introduced in [3, 5] only production excesses and demand excesses are considered, respectively. Again in this model, the equilibrium conditions are given by using a generalization of the Cournot-Nash principle. Under suitable assumptions, the equilibrium definition is characterized by a tensor variational formulation. For this reason it is fundamental to show some theoretical results for tensor variational inequalities which are very useful in order to obtain the existence and uniqueness of a general oligopolistic market equilibrium distribution in presence of production and demand excesses. The COVID-19 pandemic has shown that the presence of both production and demand excesses can occur in an oligopolistic market. Indeed if we think about the supplies of personal protective equipment (masks, gloves and others also for hospitals) necessary to counter the spread of the COVID-19 infection, there may be situations in which there is an excess of demand (masks that cannot be found in periods of high spread of the virus) even if companies have produced in large quantities but cannot send them due to shipping constraints. Furthermore we have to consider the more general situation in which a company can produce for various geographic areas where the pandemic is at different stages. The first scholar who investigated the behaviour of two firms who produce an homogeneous commodity was Cournot in [6]. Then Nash [9, 10] extended the model for n agents, introducing the noncooperative game which has been studied intensively by many authors (see for instance [7]). In [8] the variational formulation of the oligopolistic market equilibrium problem is presented, existence results are established and numerical schemes are introduced. Recently finite dimensional variational inequalities modeled in the class of tensors have been introduced and studied. Results on existence, uniqueness and regularity of solutions are obtained (see [2, 3] and the reference therein). This class of inequalities has an important role to study some economic equilibrium problems, especially the general oligopolistic market equilibrium problem. Furthermore the ill-posedness of tensor variational inequalities is investigated in [3]. Such an analysis allows us to reach a sequence of solutions to regularized tensor variational inequalities which converges to a solution to the ill-posedness inequality with minimal norm. In [4, 5] some numerical methods to compute solutions to tensor variational inequalities are introduced. Under suitable conditions, the convergence of the algorithms are guaranteed. Finally in [1] inverse tensor variational inequalities are introduced in order to study the policymarker’s point of view of the general oligopolistic market equilibrium model. We organize the paper as follows. Section 2 is devoted to present some notations and definitions on tensors and recall some existence results for tensor variational inequalities. Section 3 deals with the general demand-supply market model in presence of production and demand excesses. The equivalence between the general Cournot-Nash equilibrium principle and a suitable tensor variational inequality is established and existence results are shown. At last, in Sect. 4, a numerical example is discussed.

A General Cournot-Nash Equilibrium Principle …

17

2 Preliminaries and Notations Let V be a finite-dimensional vector space endowed with an inner product. A N -order tensor is the element of the product space V × · · · × V , i.e. a multidimensional array. Tensors are denoted by an italic capital letter X, Y, . . . . We remark that matrices, vectors and scalars are tensors of order two, one and zero, respectively. A N -order tensor on a vector space V of dimension m has m N entries. Let us indicate the set of all the N -order tensors on the m-dimensional vector space V with T [N ,m] . The entries of a N -order tensor X are denoted by xi1 ,...,i N . Moreover, we simply indicate the set of N -order tensors of Rm 1 × · · · × Rm N with R[m 1 ...m N ] . We endow the vector space T [N ,m] with an inner product ·, · defined as follows X, Y =

m  i 1 =1

...

m 

xi1 ,...,i N yi1 ,...,i N , ∀X, Y ∈ T [N ,m] .

i N =1

It is easy to show that (T [N ,m] , ·, ·) is a Hilbert space. In the next we can introduce tensor variational inequalities. Definition 1 Let K be a nonempty closed convex subset of T [N ,m] and F : K → T [N ,m] be a tensor mapping. The tensor variational inequality is the problem of finding X ∈ K such that F(X), Y − X ≥ 0, ∀Y ∈ K .

(1)

We recall some existence and uniqueness results for tensor variational inequalities proved in [2, 3]. In the bounded case of the set K , we state: Theorem 1 Let K be a nonempty compact convex subset of T [N ,m] and F : K → T [N ,m] be a continuous tensor mapping. Then the tensor variational inequality (1) admits at least one solution. Without the boundness assumption on the set K , the existence of a solution is guaranteed supposing also that the operator F is coercive, as below. Theorem 2 Let K be a nonempty closed convex subset of T [N ,m] and F : K → T [N ,m] be a continuous tensor mapping satisfying the coercivity condition F(X) − F(X0 ), X − X0  = +∞, X→+∞ X − X0  lim

for some X0 ∈ K . Then the tensor variational inequality (1) admits a solution. In addition, when the set K is unbounded, existence results can be obtained under monotonicity assumptions. To this aim we recall some definitions.

18

A. Barbagallo and S. Guarino Lo Bianco

Definition 2 A tensor mapping F : K → T [N ,m] is said to be • monotone on K if, for every X, Y ∈ T [N ,m] , F(X) − F(Y), X − Y ≥ 0; • strictly monotone on K if, for every X, Y ∈ T [N ,m] , with X = Y, F(X) − F(Y), X − Y > 0; • strongly monotone on K if, for every X, Y ∈ T [N ,m] , there exists ν > 0 such that F(X) − F(Y), X − Y ≥ νX − Y2 . Now we are able to show the next result. Theorem 3 Let K be a nonempty closed convex subset of T [N ,m] and F : K → T [N ,m] be a tensor mapping. The following statements hold: (a) if F is continuous and monotone, then the set of solutions Sol(F, K ) of (1) is nonempty closed and convex; (b) if F is strictly monotone and there exists a solution to (1), then it is unique; (c) if F is continuous and strongly monotone, then there exists a unique solution to (1).

3 The Model Let us treat an economic model in which not only the firms produce several commodities but also both production and demand excesses occur. For this reason the model improves the ones presented in [2, 4, 5]. Let us consider m firms Pi , i = 1, . . . , m, and n demand markets Q j , j = 1, . . . , n, that are generally spatially separated. We suppose that each firm Pi produces l different commodities which are denoted by k = 1, . . . , l. Let us introduce • xikj the nonnegative variable expressing the commodity shipment of kind k between the producer Pi and the demand market Q j , i = 1, . . . , m, j = 1, . . . , n,, k = 1, . . . , l; • εik the nonnegative variable expressing the production excess for the commodity of kind k produced by the firm Pi , i = 1, . . . , m, k = 1, . . . , l; • δ kj the nonnegative variable expressing the demand excess for the commodity of kind k of the demand market Q j , j = 1, . . . , n, k = 1, . . . , l. We assume that the following capacity constraints hold x ikj ≤ xikj ≤ x ikj , ∀i = 1, . . . , m, ∀ j = 1, . . . , n, ∀k = 1, . . . , l,

A General Cournot-Nash Equilibrium Principle …

19

where x ikj , x ikj are nonnegative, for every i = 1, . . . , m, j = 1, . . . , n,, k = 1, . . . , l. We group X = (xikj ) ∈ R[mnl] , as well as, X = (x ikj ) and X = (x ikj ) belong to R[mnl] , whereas E = (εik ) ∈ Rml and  = (δ kj ) ∈ Rnl . Furthermore we consider • pik the variable expressing the commodity output of kind k produced by the firm Pi , i = 1, . . . , m, k = 1, . . . , l; • q kj the variable expressing the demand for the commodity of kind k of the demand market Q j , j = 1, . . . , n, k = 1, . . . , l. We assume that the following feasibility conditions hold pik

=

n 

xikj + εik , ∀i = 1, . . . , m, ∀k = 1, . . . , l.

(2)

xikj + δ kj , ∀ j = 1, . . . , n, ∀k = 1, . . . , l.

(3)

j=1

q kj

=

n  i=1

Condition (2) states that the quantity produced by each firm Pi of kind k must be equal to the commodity shipments of such kind from that firm to all the demand markets plus the production excess for such kind of good. In addition (3) declares that the quantity demanded by each demand market Q j of kind k must be equal to the commodity shipments of such kind from all the firms to that demand market plus the demand excesses for such kind of good. Then the feasible set is given by   = (X, E, ) ∈ R[mnl] × Rml × Rnl : K 0 ≤ x ikj ≤ xikj ≤ x ikj , ∀i = 1, . . . , m, ∀ j = 1, . . . , n, ∀k = 1, . . . , l, εik ≥ 0, ∀i = 1, . . . , m, ∀k = 1, . . . , l n  xikj + εik , ∀i = 1, . . . , m, ∀k = 1, . . . , l pik = j=1

δ kj

≥ 0, ∀ j = 1, . . . , n, ∀k = 1, . . . , l

q kj =

m 

 xikj + δ kj , ∀ j = 1, . . . , n, ∀k = 1, . . . , l .

i=1

 is a nonempty convex compact subset of the Hilbert space R[mnl] × Let us note that K ml nl R ×R . Finally, we denote by •  f ik the variable expressing the production cost of the firm Pi for each good of type k, i = 1, . . . , m, k = 1, . . . , l, which may depend upon the entire production f ik (X, E), X ∈ R[mnl] , E ∈ Rml ; pattern, namely  f ik = 

20

A. Barbagallo and S. Guarino Lo Bianco

• dkj the variable expressing the demand price for unity of the commodity of kind k associated to the demand market Q j , j = 1, . . . , n, k = 1, . . . , l, which may depend upon the entire consumption pattern, namely dkj = dkj (X, ), X ∈ R[mnl] ,  ∈ Rnl ; • cikj the variable expressing the transaction cost between the firm Pi and the demand market Q j regarding the good of kind k, i = 1, . . . , m, j = 1, . . . , n,, k = 1, . . . , l, which depends upon the entire shipment pattern, namely cikj = cikj (X), X ∈ R[mnl] ; • gik the variable expressing the storage cost of the commodity of kind k produced by the firm Pi , i = 1, . . . , m, k = 1, . . . , l, which may depend upon the entire gik (X, E), X ∈ R[mnl] , E ∈ Rml . production pattern, namely  gik =  Taking into account the above notations, the profit vi of the firm Pi , i = 1, . . . , m, is given by ⎡ ⎤ l n n    ⎣  vi (X, E, ) = f ik (X, E) −  gik (X, E) − cikj (X)xikj ⎦ , dkj (X, )xikj −  k=1

j=1

j=1

i.e. it is equal to the sum with respect to the l commodities of the price that the demand markets are disposed to pay minus the production cost, the storage cost and the transportation costs. Let us observe that, by (2) and (3), we can express the production and demand excesses as follows εik = pik −

n 

xikj , ∀i = 1, . . . , m, ∀k = 1, . . . , l,

(4)

xikj , ∀ j = 1, . . . , n, ∀k = 1, . . . , l,

(5)

j=1

δ kj = q kj −

m  i=1

respectively. By virtue of the nonnegativity of the production and demand excesses, we can write an equivalent formulation of the feasible set  K = X ∈ R[mnl] : 0 ≤ x ikj ≤ xikj ≤ x ikj , ∀i = 1, . . . , m, ∀ j = 1, . . . , n, ∀k = 1, . . . , l, n  j=1 m  i=1

xikj ≤ pik , ∀i = 1, . . . , m, ∀k = 1, . . . , l  xikj ≤ q kj , ∀ j = 1, . . . , n, ∀k = 1, . . . , l .

A General Cournot-Nash Equilibrium Principle …

21

It is worth to underline that K includes the presence of production and demand  excesses as K. Taking into account (4) and (5), we can replace the production costs and the storage costs as: f ik (X, E), gik (X) =  gik (X, E), ∀i = 1, . . . , m, ∀k = 1, . . . , l, f ik (X) =  respectively, and the demand price as: d kj (X) = dkj (X, ), ∀ j = 1, . . . , n, ∀k = 1, . . . , l. As a consequence, by means of same observations, the profit function of firm Pi , for i = 1, . . . , m, can be rewritten as ⎡ ⎤ l n n    ⎣ vi (X) =  vi (X, E, ) = d kj (X)xikj − f ik (X) − gik (X) − cikj (X)xikj ⎦ . k=1

j=1

j=1

Each firm follows a noncooperative behavior trying to maximize its own profit function considering the optimal distribution pattern of the others. Therefore the goal is to determine a nonnegative tensor feasible commodity distribution X for which the m firms and the n demand markets will be in a state of equilibrium according to the following general Cournot-Nash equilibrium principle. Definition 3 A feasible tensor X∗ ∈ K is a general oligopolistic market equilibrium distribution in presence of production and demand excesses if and only if, for each i = 1, . . . , m, it results vi (X∗ ) ≥ vi (X i , X∗−i ), ∗ ∗ , X i+1 , . . . , X m∗ ) and X i is a slice of X of dimension nl. where X∗−i = (X 1∗ , . . . , X i−1

In the sequel the following hypothesis must be satisfied: Assumption C The profit function vi is continuously differentiable and pseudoconcave1 with respect to the variables X i , for each i = 1, . . . , m.

1

The profit function vi (X) is pseudoconcave with respect to the variable X i ∈ Rnl if and only if

∂vi (X 1 , . . . , X i , . . . , X m ), X i − Yi ≥ 0 ∂ Xi ⇒ vi (X 1 , . . . , X i , . . . , X m ) ≥ vi (X 1 , . . . , Yi , . . . , X m ).

22

A. Barbagallo and S. Guarino Lo Bianco

Let us indicate with

∂vi ∇D v = , i = 1, . . . , m, j = 1, . . . , n, k = 1, . . . , l. ∂ xikj We are able to establish the tensor variational formulation of the equilibrium problem. Theorem 4 Let us suppose that Assumption C is satisfied. Then, X∗ ∈ K is a general oligopolistic market equilibrium distribution in presence of production and demand excesses if and only if it is a solution to the following tensor variational inequality −∇ D v(X∗ ), X − X∗  = −

m  n  l  ∂vi (X∗ ) i=1 j=1 k=1

∂ xikj

(xikj − (xikj )∗ ) ≥ 0, ∀X ∈ K. (6)

Proof Let us start supposing that X∗ ∈ K is a solution to (6). We prove that it is a general oligopolistic market equilibrium distribution in presence of production and demand excesses. By contradiction, we assume that there exists i ∗ such that vi ∗ (X∗ ) < vi ∗ (X i ∗ , X∗−i ∗ ). By virtue of the pseudoconcavity of the profit function vi with respect to the variable X i ∈ Rnl , for each i = 1, . . . , m, we deduce

∂vi ∗ (X∗ ) ∗ , X i ∗ − X i ∗ < 0. ∂ Xi∗

Choosing X ∈ K such that X−i ∗ = X∗−i ∗ in (6), we have a contradiction. The opposite implication follows easily.  Making use of Theorem 3 and taking into account that the feasible set K is nonempty, convex and compact, the following existence and uniqueness result holds. Theorem 5 Let us suppose that Assumption C is satisfied and the tensor mapping −∇ D v is strictly monotone, then there exists a unique general oligopolistic market equilibrium distribution in presence of production and demand excesses.

4 Numerical Example We analyze a general oligopolistic market network made up of two companies of personal protective equipment P1 and P2 : each one produces masks and gloves. Furthermore the demand markets are three hospitals Q 1 , Q 2 and Q 3 . In Fig. 1 dashed

A General Cournot-Nash Equilibrium Principle …

23

P1

Fig. 1 Oligopolistic market network

P2

Q1

Q2

Q3

and continuous lines represent the shipping of masks and gloves, respectively. As before, we denote by xikj the k-th commodity shipment from Pi to Q j , i = 1, 2, j = 1, . . . , 3, k = 1, 2, and for convenience we rescale each shipping by a factor 104 . We suppose that the capacity constraints 0 ≤ xikj ≤ 50 hold, for every i = 1, 2, j = 1, . . . , 3, k = 1, 2. The production and the commodity demand are   20 10 p= , 15 15



⎞ 10 15 q = ⎝15 10⎠ , 10 10

respectively. Hence the feasible set is  K = X ∈ R[12] : 0 ≤ xikj ≤ 50, ∀i = 1, 2, ∀ j = 1, 2, 3, ∀k = 1, 2 3 

xikj ≤ pik , ∀i = 1, 2, ∀k = 1, 2,

j=1 2 

 xikj



q kj ,

∀ j = 1, 2, 3, ∀k = 1, 2 .

i=1

Let f : R[12] → R4 be the production cost mapping defined by 1 1 1 1 1 f 11 (X) = x11 x12 + x21 x12 − 3x13 ,

1 2 2 2 2 f 12 (X) = 2x13 x11 + (x13 ) + 27x12 ,

1 2 1 1 1 x23 + x21 x22 + 13x21 , f 21 (X) = 2x11

1 2 2 f 22 (X) = x23 x21 + 8x21 ,

and g : R[12] → R4 be the storage cost mapping given by 1 1 g11 (X) = x21 x22 ,

2 2 2 g12 (X) = x11 x21 + 16x13 ,

1 2 1 x22 + 12x22 , g21 (X) = 3x12

2 2 2 2 g22 (X) = (x21 ) + x13 x23 .

Moreover, let us define the demand price mapping d : R[12] → R6 as

24

A. Barbagallo and S. Guarino Lo Bianco

d11 (X) = 13,

2 d12 (X) = x21 + 14,

1 + 12, d21 (X) = x21

d22 (X) = 27,

d31 (X) = 5,

2 d32 (X) = x13 + 16.

Finally, let us assume that the transportation mapping c : R[12] → R[12] is given by 1 1 c11 (X) = 2x11 ,

1 1 c12 (X) = x12 ,

1 1 c13 (X) = 2x13 ,

1 1 (X) = 4x21 , c21

1 1 c22 (X) = 3x22 ,

1 1 c23 (X) = x23 ,

2 2 (X) = 3x11 , c11

2 2 c12 (X) = 3x12 ,

2 2 c13 (X) = 4x13 ,

2 2 (X) = 2x21 , c21

2 2 c22 (X) = 3x22 ,

2 2 c23 (X) = 2x23 .

Hence the profit functions of the two firms are given by: 1 2 1 2 1 2 2 2 2 2 2 2 1 1 v1 (X) = − 2(x11 ) − (x12 ) − 2(x13 ) − 3(x11 ) − 4(x13 ) − 3(x12 ) − x11 x12 1 2 1 1 1 1 1 2 − 2x13 x11 − x21 x22 + 13x11 + 12x12 + 8x13 + 14x11 , 2 2 2 2 1 2 1 2 1 2 2 2 1 2 ) − 2(x23 ) − 4(x21 ) − 3(x22 ) − (x23 ) − 2(x21 ) − 2x11 x23 v2 (X) = − 3(x22 2 1 1 2 2 2 1 2 − 3x22 x12 − x23 x21 + 27x22 + 16x23 + 5x23 + 6x21 .

Then the components of ∇ D v are ∂v1 1 x11 ∂v1 1 x12 ∂v1 1 x13 ∂v1 2 x11 ∂v1 2 x12 ∂v1 2 x13

1 1 = −4x11 − x12 + 13, 1 1 = −x11 − 2x12 + 12, 1 2 = −4x13 − 2x11 + 8, 1 2 = −2x13 − 6x11 + 14, 2 = −6x12 , 2 = −8x13 ,

∂v2 1 x21 ∂v2 1 x22 ∂v2 1 x23 ∂v2 2 x21 ∂v2 2 x22 ∂v2 2 x23

1 = −8x21 , 1 = −6x22 , 1 2 = −2x23 − x21 + 5, 1 2 = −x23 − 4x21 + 6, 2 1 = −6x22 − 3x12 + 27, 2 1 = −4x23 − 2x11 + 16.

By Theorem 4, the general oligopolistic market equilibrium distribution is a solution to the following tensor variational inequality ∗



−∇ D v(X ), X − X  = −

3  2 2   ∂vi (X∗ ) i=1 j=1 k=1

∂ xikj

(xikj − (xikj )∗ ) ≥ 0, ∀X ∈ K.

A General Cournot-Nash Equilibrium Principle …

25

Taking into account Corollary 2.1 in [5], we obtain the numerical equilibrium distribution solving the following system −∇ D v(X) = 0R[12] and verifying that its solution belongs to the interior of the feasible set K. Therefore we have     251 002 1 ∗ 2 ∗ , (X ) = . (X ) = 200 125 As a consequence the production and demand excesses are   12 8 E= , 13 7



⎞ 6 14  = ⎝10 8 ⎠ , 9 3

respectively. Acknowledgements The authors were partially supported by PRIN 2017 Nonlinear Differential Problems via Variational, Topological and Set-valued Methods (Grant 2017AYM8XW).

References 1. Anceschi, F., Barbagallo, A., Guarino Lo Bianco, S.: Inverse tensor variational inequalities and applications. J. Optim. Theor. Appl. 196, 570–589 (2023) 2. Barbagallo, A., Guarino Lo Bianco, S.: Variational inequalities on a class of structured tensors. J. Nonlinear Convex Anal. 19, 711–729 (2018) 3. Barbagallo, A., Guarino Lo Bianco, S.: On ill-posedness and stability of tensor variational inequalities: application to an economic equilibrium. J. Global Optim. 77, 125–141 (2020) 4. Barbagallo, A., Guarino Lo Bianco, S.: A new tensor projection method for tensor variational inequalities. J. Nonlinear Var. Anal. 6, 213–226 (2022) 5. Barbagallo, A., Guarino Lo Bianco, S., Toraldo, G.: Tensor variational inequalities: theoretical results, numerical methods and applications to an economic equilibrium model. J. Nonlinear Var. Anal. 4, 87–105 (2020) 6. Cournot, A.: Researches into the Mathematical Principles of the Theory of Wealth. MacMillan, London (1897) 7. Dafermos, S., Nagurney, A.: Oligopolistic and competitive behavior of spatially separated markets. Reg. Sci. Urban Econ. 17, 245–254 (1987) 8. Nagurney, A.: Network Economics: A Variational Inequality Approach. Kluwer Academic Publishers, Boston (1998) 9. Nash, J.F.: Equilibrium points in n-person games. Proc. Natl. Acad. Sci. USA 36, 48–49 (1950) 10. Nash, J.F.: Non-cooperative games. Ann. Math. 54, 286–295 (1951)

Environmental Damage Reduction: When Countries Face Conflicting Objectives M. A. Caraballo, A. Zapata, L. Monroy, and A. M. Mármol

Abstract Numerous international environmental agreements of countries have been aimed at limiting their polluting emissions. However, this is an arduous goal to achieve, since it involves two often conflicting objectives for their governments: maximizing their monetary benefits and minimizing the perception of environmental damage, both of which depend on the level of pollution emitted by the set of all countries. Taking these two objectives into account, the situation is analyzed as a two-criteria game in which each country has a tolerance threshold with respect to global emissions. The approach considered makes it possible to deal with a key issue in the analysis: the fact that it is not possible to compare in monetary terms the results obtained when countries act strategically in pursuit of their objectives. We show that depending on the relationships between the thresholds for each country, different sets of equilibria arise. A significant consequence of this research is that all of these equilibria provide strategies with a positive effect on emission reductions and can play an important role in reversing climate change.

1

Introduction

The recent report of the Intergovernmental Panel on Climate Change (IPCC [3]) warns that the scope and magnitude of climate change impacts are greater than estimated in previous assessments, which makes the objective of reducing greenhouse M. A. Caraballo (B) · A. Zapata · L. Monroy · A. M. Mármol Universidad de Sevilla, Avda. Ramón y Cajal n.1., 41018 Sevilla, Spain e-mail: [email protected] A. Zapata e-mail: [email protected] L. Monroy e-mail: [email protected] A. M. Mármol e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 P. Cappanera et al. (eds.), Optimization and Decision Science: Operations Research, Inclusion and Equity, AIRO Springer Series 9, https://doi.org/10.1007/978-3-031-28863-0_3

27

28

M. A. Caraballo et al.

gas emissions even more pressing. There are quite a few hindrances that make it difficult to achieve this goal. Some of them have been highlighted in recent literature. Verma and Kauskal [5] showed that reduction in gas emissions is possible only when all countries work together under incentive. However, it should be noted that it requires the commitment of all countries, but there is no supranational authority that enforces compliance with the commitments. In addition, given that climate change mitigation is a global public good (Buchholz and Sandler [1]), there are strong freeriding incentives for each country to let other countries carry out emissions reduction efforts, since the curbing of emissions could lead to lower monetary benefits. At the same time, citizens are increasingly sensitive to the global damage caused by climate change and demand effective measures in this regard. As Ertör-Akyazi and Akçay [2] pointed out, understanding the variation in individual moral opinions related to climate change is important for the design of public policies. Therefore, governments face two objectives, on the one hand, to maximize monetary benefits derived from the emissions and, on the other hand, to minimize the perception of environmental damages in order to satisfy the preferences of the citizens. The key element is the fact that it is not possible to compare in monetary terms the results obtained when the countries act strategically in pursuit of their goals, nor is there a clear way to aggregate their values. In this paper, in order to account for this issue and according to Mármol et al. [4], the situation is formalized as a game with vector-valued utilities. Moreover, the perception of the population about the damage of global emissions has been represented by a tolerance threshold. Depending on the relationships between the thresholds of the countries, various sets of equilibria are obtained. All these equilibria include strategies with a positive effect in the reduction of emissions with respect to scenarios where the preferences of the citizens are not taken into account.

2 The Global Emission Game We consider two or more countries that emit polluting gases to the environment. Each country takes into account the own benefits and the damage from the total pollutant produced on each country. In the present paper, this situation is analyzed as a game, called global emission game, in normal-form, G = {(E i , πi )i∈N }, where N = {1, . . . , n} is the set of agents, which in this case are countries or regions, and E i is the set of strategies (possible emissions) that country i ∈ N can adopt. The strategy profile, denoted by e = (e1 , . . . , en ) with ei ∈ E i , can be written as e = (ei , e−i ), where ei represents the quantity of gases emitted by country i, with ei ≤ ei0 , where ei0 is an upper bound for the emission of country i, and e−i = (e1 , . . . , ei−1 , ei+1 , . . . , en ) stands for the strategy combination of all countries except country i. Moreover, πi : ×i∈N E i → IR2 is the vector-valued utility function of country i, given by:

Environmental Damage Reduction: When Countries …





πi (e) = ⎝βi (e), −φi ⎝

n 

29

⎞⎞ e j ⎠⎠ , ∀i ∈ N ,

(1)

j=1

where the first component of πi , βi , represents the benefit for country i derived from production of a good, and φi represents the damage perceived for country i derived from global emission. As usual, the monetary benefit function, βi , is derived from the production of a good produced in quantity xi = xi (ei ). The benefits are calculated in terms of income and  costs. Income depends on the inverse demand function P(X ), with X = X (e) = nj=1 x j (e j ). We consider xi , increasing in ei , as xi (ei ) = αi ei , the inverse demand function P(X ) = a − bX , and the cost function, Ci , depending linearly on production (and therefore, also on emissions), Ci (ei ) = ci xi (ei ) = ci αi ei , where ci represents the unitary cost of production for country i, with a, b, ci , αi > 0 and a > ci for all i ∈ N . Formally, the benefit for country i derived from production is ⎛ βi (e) = xi (ei )P ⎝

n 





x j (e j )⎠ − Ci (ei ) = αi ei ⎝a − ci − b

j=1

Note

that

∂βi ∂ei

n 

⎞ αjej⎠ .

(2)

∂ 2 βi ∂ei2

(e) =

j=1

(e) = αi (a − ci ) − 2bαi2 ei − bαi

−2bαi2

 j=i

αjej

and

< 0, that is, βi is a strictly concave function in its emissions. We assume that the benefit for each country increases with emissions up to a level (that could be different for each country), and that extremely large emissions are excluded. Thus, for each i ∈ N , E i is a bounded set. On the other hand, the function that represents environmental damage perception, φi , depends on the global pollution emitted by all countries. This function is defined as a quantification of the externality, in terms of the damage and of the population sensitivity to damage. Formally, ⎫⎞γi ⎧ ⎛ ⎞ ⎛ n n ⎬ ⎨  e j ⎠ = ⎝max e j − Ki , 0 ⎠ , (3) φi ⎝ ⎭ ⎩ j=1

j=1

where K i > 0 is the tolerance threshold of country i with respect to global emissions and γi is the elasticity of damage perception with respect to the excesses of the global emissions over the tolerance threshold of country i. If the threshold K i is not reached, then the damage is null and so is the damage perceived for country i. If the threshold is exceeded, then γi captures the degree in which the violation of the threshold is perceived as a damage. In the present setting, it can be assumed that variations in the excesses with respect to the threshold generate a greater percentage variation in the perception of damage. Therefore, we consider γi > 1.

30

M. A. Caraballo et al.

n Note that if and second derivatives of φi are j=1 e j ≤ K i , then the first n   n  > K i , then φi ( j=1 e j ) = γi ( nj=1 e j − K i )γi −1 and equal to 0, and if j=1 e j    φi ( nj=1 e j ) = γi (γi − 1)( nj=1 e j − K i )γi −2 . Since γi > 1, then φi ( nj=1 e j ) ≥ 0  and φi ( nj=1 e j ) ≥ 0, that is, damage perception, φi , is an increasing convex function.

3 Sets of Equilibria for the Global Emission Game In what follows, we consider the following notation for vector inequalities: for x, y ∈ IRn , x ≥ y means that xi ≥ yi for all i = 1, . . . , n, with at least one strict inequality and x > y means that xi > yi for all i = 1, . . . , n. This section is devoted to identifying the sets of equilibria to which the countries will eventually arrive under the original assumptions. Definition 1 A strategy profile e∗ is an equilibrium for the game G = {(E i , πi )i∈N } ∗ ∗ ) ≥ πi (ei∗ , e−i ). if ∃/ i ∈ N with ei ∈ E i such that πi (ei , e−i The set of equilibria of G is denoted as E(G). Definition 2 A strategy profile e∗ is a weak equilibrium for the game G = ∗ ∗ ) > πi (ei∗ , e−i ). {(E i , πi )i∈N } if ∃/ i ∈ N with ei ∈ E i such that πi (ei , e−i ˜ The set of weak equilibria of G is denoted as E(G). As Mármol et al. [4] established, the whole set of equilibria can be characterized by taking into account the best response correspondences of the components of the vector utility functions of each country. We denote by ri1 (e−i ) the best response function of country i to the strategies of  all remaining countries when only considering their own individual benefits βi . i , then the best response of country i is obtained by means of If j=i α j e j ≤ a−c b  i i the first-order conditions, that is, ∂β (e) = 0, and if j=i α j e j > a−c , then the best ∂ei b response is to emit nothing. Thus, the best response correspondence of country i to the emissions of the remaining countries is  ri1 (e−i )

=

1 αi



a−ci 2b

− 0

 j =i

2

αjej



 if αjej ≤  j=i if j=i α j e j >

a−ci b a−ci b

(4)

Analogously, the best response correspondence of country i to the strategies of all ri2 (e−i ). remaining countries, considering only the damage function, φi , is denoted by  If j= i e j ≤ K i , then the best response is any value between 0 and K i − j=i e j , and if j=i e j > K i , then the best response consists of not emitting at all. Thus,     0, K − e e j ≤ Ki i j if 2 j = i ri (e−i ) = (5)  j=i 0 if j=i e j > K i

Environmental Damage Reduction: When Countries …

31

Since for all i ∈ N , E i is a nonempty convex compact subset, and βi and −φi are continuous and concave in its own action, then the set of weak equilibria of the vector-valued game G = {(E i , πi )i∈N } is given by ˜ E(G) = {(e1 , . . . , en ) ∈ ×i∈N E i : r i (e−i ) ≤ ei ≤ r¯ i (e−i ), i ∈ N },

(6)

where r i (e−i ) = min{ri1 (e−i ), ri2 (e−i )}, and r¯i (e−i ) = max{ri1 (e−i ), ri2 (e−i )}. Note that, when a function is not strictly concave (for instance, −φi ), the minimum and/or the maximum of the best responses may not bea singleton. It is clear that in our case r i (e−i ) = 0 and r¯i (e−i ) = max{ri1 (e−i ), K i − j=i e j }. In the characterization of the set of equilibria of the global emission game, E(G), two aspects of the game play an important role, namely the tolerance threshold of each country and the total emissions produced at the Nash equilibrium of the scalar game in which only benefits derived from production are considered. This total amount of emissions constitutes an upper bound for the pollution emitted at the equilibria of the global emission game. The Nash equilibrium of this scalar game is e N E = (eiN E )i∈N , with eiN E

=

a − (n + 1)ci +



(n + 1)αi b

j∈N

cj

.

Next results establish some conditions for an strategy profile to be an equilibrium. Lemma 1 Let G = {(E i , πi )i∈N } be the game with vector-valued utilities. If e is an equilibrium (in the strong sense), then ei ≤ ri1 (e−i ) for all i ∈ N .  Proof If e is an equilibrium of G, then 0 ≤ ei ≤ max{ri1 (e−i ), K i − j=i e j } since ˜ E(G) ⊂ E(G). If ei > ri1 (e−i ) for some i ∈ N , consider ε > 0 such that ei − ε > 1 ri (e−i ). Since βi is strictly concave and ri1 is its best response, then βi (ei − ε, e−i ) > βi (e).  On the other hand, if K i − j=i e j ≤ 0, the result follows. Otherwise, ri1 (e−i ) <    ei ≤ K i − j=i e j . Since in this case φi ( nj=1 e j − ε) = φi ( nj=1 e j ) and βi (ei −  ε, e−i ) > βi (e), this contradicts the fact that e is an equilibrium of G. Proposition 1 Let G = {(E i , πi )i∈N } be the game with vector-valued utilities and let K = min i {K i }. When nj=1 e Nj E ≤ K , e is an equilibrium of G if and only if e = eN E .  Proof We first prove that e N E is an equilibrium for G: for ε > 0, −φi ( nj=1 e Nj E +  NE ε) ≤ −φi ( nj=1 e Nj E ) = 0 and βi (eiN E + ε, e−i ) < β (e N E ) for all i ∈ N . Analon n i N E  NE gously for ε > 0, −φi ( j=1 e j − ε) = −φi ( j=1 e j ) = 0, since nj=1 e Nj E − NE ε < K and βi (eiN E − ε, e−i ) < βi (e N E ) for all i ∈ N . That is, e N E is an equilibrium for G. exists i ∈ N such that ei = Let e be an equilibrium such that e = e N E . There  ri1 (e−i ). It follows from Lemma 1 that ei < ri1 (e−i ). If nj=1 e j < K , consider ε >

32

M. A. Caraballo et al.

Fig. 1 Best response functions and the equilibrium of G when K 1 , K 2 > e1N E + e2N E

   0 such that nj=1 e j + ε < K , then −φi ( nj=1 e j + ε) = −φi ( nj=1 e j ) = 0 and + ε, e−i ) > βi (e) for all i ∈ N . Hence e is not an equilibrium. βi (ei   If nj=1 e j ≥ K , then there exists l ∈ N such that nj=1 e j ≥ K l > nj=1 e Nj E . Hence, there exists k ∈ N such that ek > ekN E . Consider ε > 0, such that ek − ε > ekN E , then βk (ek − ε, e−k ) > βk (e) since βk is strictly concave, and −φk ( nj=1 e j −  ε) > −φk ( nj=1 e j ). That is, both of the components of πk improve, and e is not an equilibrium. Therefore, the result follows.  In order to illustrate the results, we focus on the analysis of two countries such that c1 > c2 and, therefore, e1M < e2M , where eiM denotes the emission level of country i when e j = 0 for j = i. Figure 1 shows the best response functions and the equilibrium for the global emission game when K 1 , K 2 > e1N E + e2N E . Total emissions at equilibrium are below K 1 and K 2 , and therefore, the tolerance threshold of either of the countries is not exceeded. Since  the structure of the set of equilibria depends on the values of K i and on the total nj=1 e Nj E , in the next propositions the sets of equilibria are characterized based on the relationship between these values. We consider n = 2, for n ≥ 3, analogous characterization can be obtained. Proposition 2 Let K 1 ≤ K 2 . If K 2 ≤ e1N E + e2N E , then the set of equilibria is E(G) = {e ∈ E 1 × E 2 : e1 ≤ r11 (e2 ), e2 ≤ r21 (e1 ), e1 + e2 ≥ K 2 }∪ ∪{e ∈ E 1 × E 2 : e2 = r21 (e1 ), K 1 ≤ e1 + e2 < K 2 }.

Environmental Damage Reduction: When Countries …

33

Proof If e is an equilibrium of G, then ei ≤ ri1 (e j ) for i, j = 1, 2, i = j as a consequence of Lemma 1. Three cases can be considered: e1 + e2 ≥ K 2 , K 1 ≤ e1 + e2 < K 2 and e1 + e2 < K 1 . (a) If e1 + e2 ≥ K 2 , any point e such that ei ≤ ri1 (e j ), i, j = 1, 2, i = j, is an equilibrium for G: For ε > 0, −φ1 (e1 + e2 + ε) < −φ1 (e1 + e2 ) since φi is strictly increasing when e1 + e2 ≥ K i and e1 + e2 + ε ≥ K 2 ≥ K 1 . On the other hand, βi (ei − ε, e j ) < βi (e) since βi is strictly concave and ei − ε < ri1 (e j ) for i, j = 1, 2, i = j. (b) If K 1 ≤ e1 + e2 < K 2 and e2 = r21 (e1 ), then e is an equilibrium for G : For ε > 0, −φ1 (e1 + e2 + ε) < −φ1 (e1 + e2 ), since φi is strictly increasing when e1 + e2 ≥ K i and e1 + e2 + ε ≥ K 1 . On the other hand, β2 (e2 − ε, e1 ) < β2 (e) since β2 is strictly concave and e2 − ε < r21 (e1 ) = e2 . Moreover, if K 1 ≤ e1 + e2 < K 2 and e2 = r21 (e1 ), e is not an equilibrium for G. For ε > 0, β2 (e1 + e2 + ε) > β2 (e1 + e2 ) and φ2 (e1 + e2 + ε) = φ2 (e1 + e2 ) = 0 for ε such that e1 + e2 + ε < K 2 . (c) If e1 + e2 < K 1 , then e1 + e2 < K 1 ≤ e1N E + e2N E . Hence, there exists i ∈ N such that ei < eiN E . Consider ε > 0, such that ei + ε < eiN E and e1 + e2 + ε < K i , then βi (ei + ε, e j ) > βi (e) since βi is strictly concave, and φi (e1 + e2 + ε) = φi (e1 + e2 ) = 0 since e1 + e2 < K 1 ≤ K i . Hence e is not an equilibrium. Therefore, the result follows.  In Fig. 2 the best response functions and the equilibria for the global emission game in the case of Proposition 2 are represented. Total emissions at every equilibria can be at K 2 or above, as in the case of Fig. 2 (left). It may happen that total emissions at some equilibria can be below K 2 , but never at K 1 (Fig. 2, right). In these cases, at equilibria, the tolerance threshold of one of the countries is always exceeded. In addition, there are also situations where some equilibria exist in which the tolerance threshold of one of the countries may not be exceeded and the tolerance of the other country is at its limit. Proposition 3 If K 1 ≤ e1N E + e2N E ≤ K 2 , then the set of equilibria is E(G) = {e ∈ E 1 × E 2 : e2 = r21 (e−i ), K 1 ≤ e1 + e2 ≤ K 2 }. Proof The result follows with an analogous reasoning as the one of Proposition 2.  The best response functions and the set of equilibria given by Proposition 3 can be seen in Fig. 3. Total emissions at equilibria are below K 2 and above K 1 (Fig. 3, left) or total emissions at equilibria are below K 2 and might be at K 1 (Fig. 3, right). Hence, at equilibria, the tolerance threshold of one of the countries is never exceeded and some equilibria may exist in which the tolerance of the other country is at its limit. Note that the Nash equilibrium is an equilibrium of the game with vector-valued utilities in every case, and when nj=1 e j < K i for all i ∈ N and there exists l ∈ N  such that K l < nj=1 e Nj E , then e is not an equilibrium of the game G.

34

M. A. Caraballo et al.

Fig. 2 Best response functions and sets of equilibria of G when K 1 , K 2 < e1N E + e2N E

Fig. 3 Best response functions and sets of equilibria of G when K 1 < e1N E + e2N E < K 2

4 Concluding Remarks This paper has analysed the decisions on the level of emission of polluting gases that governments should take when they consider not only the monetary benefit derived from the emissions, but also the perception of the population on the global damage caused by those emissions. The results show that the consideration of this second issue leads to some equilibria that reduce the emission of polluting gases with respect to those equilibria achieved when only monetary benefits are taken into account. We have represented the perception of the population about the damage of global emissions by a tolerance threshold. Depending on the value of this threshold, several situations emerge. Focusing on the case of two countries, a first result shows that there can be cases where, at equilibrium, the tolerance threshold of either of the

Environmental Damage Reduction: When Countries …

35

countries is not exceeded. In a second case, at equilibria, the tolerance threshold of one of the countries may not be exceeded and some equilibria could exist in which the tolerance of the other country is at its limit. Finally, equilibria may exist where the tolerance threshold of one of the countries is never exceeded, and in some situations, the tolerance threshold of the other country is at its limit. These results show that the sensitivity of citizens towards environmental issues can play a relevant role in the reduction of emissions. In fact, since the satisfaction of citizens’ preferences is usually the main issue for governments, they should strive to develop the necessary economic measures to maintain pollution levels within tolerance regions. Acknowledgements The research of the authors is a part of the Project PGC2018-095786-B-I00 funded by MCIN/AEI/10.13039/501100011033/ and by “ERDF A way of making Europe”, and by the Andalusian Government, Project P20-00628 (PAIDI 2020) and the groups SEJ-183 and SEJ-258.

References 1. Buchholz, W., Sandler, T.: Global public goods: a survey. J. Econ. Lit. 59(2), 488–545 (2021) 2. Ertör-Akyazi, P., Akçay, Ç.: Moral intuitions predict pro-social behaviour in a climate commons game. Ecol. Econ. 181, 106918 (2021) 3. Intergovernmental Panel on Climate Change (IPCC) (2022) Climate Change: Impacts. Adaptation and Vulnerability, Summary for Policymakers (2022) 4. Mármol, A.M., Monroy, L., Caraballo, M.A., Zapata, A.: Equilibria with vector-valued utilities and preference information. The analysis of a mixed duopoly. Theory Dec. 83, 365–383 (2017) 5. Verma, S., Kauskal, R.K.: A game theoretic approach for global cooperation in climate control. J. Environ. Eng. Studies 1, 1–16 (2016)

Nonsmooth Hierarchical Multi Portfolio Selection Lorenzo Lampariello, Simone Sagratella, and Valerio Giuseppe Sasso

Abstract We focus on the case of a financial service provider having to manage different clients’ accounts via assigning them to multiple managers. In this multi-agent scenario, we introduce sparsity-enhancing terms in the objectives of both clients and managers. The resulting decision problem can be modeled as a hierarchical GNEP that is Jointly-Convex with nonsmooth objectives. We study the main theoretical properties of this multi-agent problem, and show that it is solvable under mild conditions. [Invited session Equilibria, variational models and applications.]

1 Introduction We consider multiple portfolios to be jointly optimized. Since multiple accounts from different clients are typically accommodated simultaneously and their trades are pooled for common execution, the multiple portfolios are linked through the transaction costs that depend on the trades from all accounts (see e.g. [10, 12, 13, 17] for some recent results). In this context, and for the first time in the literature, we focus on the case of a financial service provider where different clients’ accounts are assigned to some managers. The complex nature of such decision problem stems from the need to take into consideration both each account owner’s and managers’ interests. This results in a non-cooperative multi-agent scenario involving two decision levels. Assuming the following blanket conditions: agents are rational with complete information and they act simultaneously and non-cooperatively, these multi-agent frameworks are L. Lampariello Roma Tre University, Rome, Italy e-mail: [email protected] S. Sagratella · V. G. Sasso (B) Sapienza University of Rome, Rome, Italy e-mail: [email protected]; [email protected] S. Sagratella e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 P. Cappanera et al. (eds.), Optimization and Decision Science: Operations Research, Inclusion and Equity, AIRO Springer Series 9, https://doi.org/10.1007/978-3-031-28863-0_4

37

38

L. Lampariello et al.

commonly modeled as Nash games, see [6]. Therefore, at the lower level, the different clients’ accounts act as players of a Nash Equilibrium Problem (NEP). Moreover, at the upper level, the managers play in turn a different Nash Game, but, to guarantee that every account is treated fairly, they can make decisions choosing exclusively among the lower-level accounts’ equilibria. Summarizing, from a modeling standpoint, a novelty of our study with respect to the current literature consists in the introduction of the managers’ Nash game played at the upper-level, resulting in a Generalized Nash Equilibrium Problem (GNEP) whose feasible region is implicitly defined as the solution set of the lower-level NEP. This hierarchical GNEP is a Jointly-Convex problem since all the managers share the same feasible region, which turns out to be convex under mild assumptions, see [8]. Despite being GNEPs, a subset of equilibria can be computed for such problems employing techniques that, roughly speaking, are similar to those used to address the conceptually simpler NEPs. Additionally, both at the upper- and lower-level portfolio selection, we include 1 sparsity-enhancing criteria. These nonsmooth terms prevent one from resorting to standard Variational Inequalities as an analytical tool; one has to rely, instead, on Generalized Variational Inequalities (GVI) (see [3, 9]). In particular, leveraging GVIs, we show that the hierarchical Jointly-Convex GNEP is solvable under mild conditions.

2 A Non-cooperative Multi-agent Hierarchical Model for Multi-portfolio Selection We consider N accounts indexed by ν = 1, . . . , N . Each account ν manages a budget bν ∈ R+ , to be invested in K assets of a market. The decision variables y ν ∈ Yν ⊆ R K denote the percentage of bν to invest in each asset, i.e. account ν invests bν yiν on each asset i. We indicate with Yν the nonempty convex compact set of feasible portfolios:  Yν 

y ν ∈ [lν , u ν ] K :

K 

 yiν ≤ 1 ,

i=1

where lν ∈ R and u ν ∈ R represent lower and upper bounds on the percentages of budget for each account. By letting lν < 0, the possibility of player ν shortselling assets is taken into account. Denoting by r ∈ R K the random variables where rk is the return of asset k ∈ {1, . . . , K }, we introduce π ν = Eν (r ) ∈ R K as player ν’s expected values of assets’ returns over a single-period investment. To take into consideration the transaction costs effect, we rely on the market impact matrix ν ∈ R K ×K , which represents, for each pair (i, j), the impact of the liquidity of asset i on that of asset j. We assume ν to be positive semidefinite for all ν, but not necessarily diagonal nor symmetric, in order to take into consideration the cross-asset effects (for further information

Nonsmooth Hierarchical Multi Portfolio Selection

39

concerning these choices, the reader can refer to [12]). The adopted measure for portfolio net income Iν (y ν ) is as follows: ν

Iν (y )  π

νT



ν

ν

ν 

(bν y ) − bν (y − v ) 



Expected return





ν



 

Invested capital

N  λ=1

bλ (y λ − v λ ) . 

(1)



Unitary transaction costs

We chose a linear market impact unitary cost function, where v λ ∈ R K denotes the vector of the current positions for account λ. This, since all trades are executed simultaneously, depends on the aggregated trades from all accounts and is then multiplied by account ν’s invested capital to compute total transaction costs. To model risk Rν (y ν ), we use the portfolio variance, where  ν = Eν ((r − ν π )(r − π ν ) ) is the positive semidefinite covariance matrix: Rν (y ν ) 

1 (bν y ν )  ν (bν y ν ) . 2   

(2)

Variance

Overall, we face a multi-objective optimization problem, which is solved by weighting the two functions (see (1) and (2)) via the risk aversion parameter ρν ∈ R+ , which is modeled to be different for each account ν. Thus, the lower-level NEP consists of the collection of N (parametric) optimization problems, each borne by account ν, with ν = 1, . . . , N . We denote by y the vector formed by all the decision variables, and by y −ν the vector composed by all the players’ decision variables except those of player ν: ⎛



⎞ y1 ⎜ ⎟ y  ⎝ ... ⎠ ∈ R N K , yN

y −ν

⎞ y1 ⎜ .. ⎟ ⎜ . ⎟ ⎜ ν−1 ⎟ ⎜y ⎟ (N −1)K ⎟ ⎜ . ⎜ y ν+1 ⎟ ∈ R ⎜ ⎟ ⎜ . ⎟ ⎝ .. ⎠ yN

To emphasize player ν’s decision variables within y, we sometimes write (y ν , y −ν ) instead of y. Note that this still stands for the vector y = (y 1 , . . . , y ν , . . . , y N ) and that, in particular, the notation (y ν , y −ν ) does not mean that the block components of y are reordered in such a way that y ν becomes the first block. For each account, the objective function to minimize is given by the sum of a smooth term θνL : R N K → R, obtained by suitably weighing I and R and depending on all account’s variables y, and a nonsmooth locally Lipschitz continuous term ϕνL : R K → R depending on variables y ν only:

40

L. Lampariello et al.

θνL (y ν , y −ν )  −Iν (y ν ) + ρν Rν (y ν ) = −(π ν ) (bν y ν ) + ρν 21 (bν y ν )  ν (bν y ν ) N bλ (y λ − v λ ). + bν (y ν − v ν ) ν λ=1 To promote sparsity, thus reducing monitoring costs and simplifying portfolio management, while also mitigating errors in parameters estimation (see, e.g., see [1, 2] for further details), the following nonsmooth terms are introduced: ϕνL (y ν )  τν y ν 1 , where τν > 0. Alternatively, it could be also possible to consider the sparsity term of the difference with respect to the current positions, i.e. y ν − v ν 1 . Summarizing, the NEP we consider at the lower level consists of the following account-related parametric optimization problems: minimize y ν θνL (y ν , y −ν ) + ϕνL (y ν ) s.t.

y ν ∈ Yν .

(PνL )

Denoting by Y  Y1 × · · · × Y N ⊆ R N K the overall feasible region, the NEP is, thus, the problem of finding y ∈ Y such that θνL (y ν , y −ν ) + ϕνL (y ν ) ≤ θνL (w ν , y −ν ) + ϕνL (w ν ),

∀w ν ∈ Yν ,

ν = 1, . . . , N . (3) Any y ∈ Y satisfying (3) is an equilibrium or a solution of the NEP, i.e. a point such that for no player, given the other players’ choices, the objective function can be decreased by unilaterally changing decision variables to any other feasible solution. Accordingly, we indicate with E the (non-parametric) set of equilibria of the NEP:  E  y ∈ Y : θνL (y ν , y −ν ) + ϕνL (y ν ) ≤

 θνL (w ν , y −ν ) + ϕνL (w ν ), ∀w ν ∈ Yν , ν = 1, . . . , N ⊆ R N K .

In the next section we show that, for our choice of objective functions, the set E turns out to be nonempty, convex and compact. Over the solution set of the lower-level NEP, a Nash game is played, at the upperlevel, among M account managers, each one being responsible of deciding trades for a subset of the accounts. To be precise, this is in fact a Jointly-Convex GNEP since the shared convex feasible region E does not have a separable structure among the upper-level players, see [5, 8]. Assuming each account to be controlled by one single manager, we define μ as the set of indexes of lower-level accounts controlled by the manager μ. Accordingly, we denote by x μ the collection of variables y ν with ν ∈ μ for each manager μ. Therefore variables x are nothing else but a different way of partitioning the overall lower-level variables y among the upper-level managers. Similarly to what happens at the lower level, the objective functions of each manager μ model the performances of all the different accounts controlled by manager μ:

Nonsmooth Hierarchical Multi Portfolio Selection

θμU (x μ , x −μ )  −



41

π νT (bν y ν ) + ρ μ

ν∈ μ

+



μ bν (y ν − v ν ) 

ν∈ μ

1  (bν y ν )  ν (bν y ν ) 2 ν∈

N 

μ

bλ (y λ − v λ ),

λ=1

and ϕμU (x μ ) =  τμ



y ν 1 ,

ν∈ μ

μ 0 is the transaction cost matrix, where ρ μ ≥ 0 is the risk aversion parameter,  and  τμ ≥ 0 is the regularization parameter. These upper-level objective functions reflect the compensation structure involving bonuses that depend on the performance of all accounts handled by each manager. Overall, player μ, with μ = 1, . . . , M, in the hierarchical Jointly-Convex GNEP chooses the decision variables x μ ∈ R K ·| μ | , so as to solve the following (parametric in x −μ ) optimization problem: minimizex μ θμU (x μ , x −μ ) + ϕμU (x μ ) s.t.

(x μ , x −μ ) ∈ E.

(PμU )

We remark that θμU : R N K → R is a smooth function, and ϕμu : R K ·| μ | → R is a nonsmooth locally Lipschitz continuous term. In the forthcoming sections we discuss the properties of this hierarchical Jointly-Convex GNEP via the use of GVIs.

3 Lower-Level NEP Properties In this section we study the main theoretical properties of the lower-level NEP played by the accounts. Specifically, we rely on finite-dimensional GVIs that provide an analytical tool to reformulate the solution set of the lower-level NEP (3). In particular, the latter problem turns out to be equivalent to GVI(F, Y ), which is the problem of finding y ∈ Y such that: ∃ f y ∈ F(y) :

f yT (w − y) ≥ 0, ∀w ∈ Y,

where  ⎞ ∂ y 1 θ1L (y) + ϕ1L (y 1 ) ⎟ ⎜ .. NK ⇒ RN K , F(y)  ⎝ ⎠:R .  L  ∂ y N θ N (y) + ϕ NL (y N ) ⎛

42

L. Lampariello et al.

  where ∂ y ν θνL (y) + ϕνL (y ν ) indicates the set of (partial) subgradients of θνL (•; y −ν ) + ϕνL (•) evaluated at y ν . We also denote by SOL(F, Y ) the solution set of GVI(F, Y ). Note that, due to the point-to-set nature of F, we cannot rely on the standard Variational Inequality problem, as usually done in NEP contexts. Remark 1 The objective function for each player ν is convex with respect to variables y ν , for every fixed y −ν . This is true because, for all ν, Yν is a convex nonempty set, and  ν and ν are positive semidefinite. Also, ϕνL is a convex function. This implies that the objective functions are regular (see [15, Proposition 7.27]), so that we can write (see [15, Proposition 10.9]):  ⎞ ⎛ ⎞ ⎛ ⎞ ∂ y 1 ϕ1L (y 1 ) ∇ y 1 θ1L (y) ∂ y 1 θ1L (y) + ϕ1L (y 1 ) ⎟ ⎜ ⎟ ⎜ ⎟ ⎜ .. .. .. F(y)  ⎝ ⎠+⎝ ⎠. ⎠=⎝ . . .  L  L L N L N N N N ∇ y θ N (y) ∂ y ϕ N (y ) ∂ y θ N (y) + ϕ N (y ) ⎛

(4)

The explicit expression of ∇ y ν θνL (y) is given by    ∇ y ν θνL (y ν , y −ν ) = − bν π ν + 2(bν )2 sym(ν )v ν + bν ν λ =ν bλ v λ  +(bν )2 (ρν  ν + 2 sym(ν )) y ν + bν ν λ =ν bλ y λ , where we denote by sym(ν )  21 (ν + ν  ) the symmetric part of ν , while ⎧ ⎪ for yiν < 0 ⎨−τν ∂ yiν ϕνL (y ν ) = [−τν , τν ] for yiν = 0 ⎪ ⎩ for yiν > 0. τν Since convexity of each player ν’s objective function with respect to y ν is all one needs to exploit the minimum principle (see Remark 1), the lower-level NEP can be recast as GVI(F, Y ) via the following proposition. Proposition 1 The identity E = S O L(F, Y ) holds. This allows us to analyze the lower-level NEP’s solution set via GVI theory. Specifically, since ∇ y ν θνL is continuous, ∂ y ν ϕνL is outer-semicontinuous, and, in turn, F is outer-semicontinuous and convex-valued, the following result holds. Proposition 2 The equilibrium set E is nonempty and compact. Proof E is nonempty thanks to [9, Theorem 3.1] and bounded, since Y is compact. Regarding its closedness, the proof is obtained by contradiction. If E is not closed, there exists a sequence yk ∈ E for every k, i.e. ∃ f yk ∈ F(yk ) : f yTk (w − yk ) ≥ 0 ∀w ∈ Y, / E, i.e. and such that yk → y¯ ∈

(5)

Nonsmooth Hierarchical Multi Portfolio Selection

∀ f y¯ ∈ F( y¯ ), ∃w¯ ∈ Y :

43

f y¯T (w¯ − y¯ ) < 0.

(6)

Since F is locally bounded over the bounded set Y , for an appropriately chosen subsequence such that k ∈ K, ∃ limk∈K f yk = f¯. Moreover, since F is outersemicontinuous, f¯ ∈ F( y¯ ), taking the subsequential limit on both sides of (5), we get 0 ≤ lim f yTk (w¯ − yk ) = f¯(w¯ − y¯ ), k∈K

which contradicts (6).



In order to show the convexity of E, we investigate conditions for SOL(F, Y ) to be convex. A sufficient condition for this property to hold is the maximal monotonicity of F, which is a stronger condition than simple monotonicity. Such property, for an operator T , where gph T  {(u, t)|u ∈ Rn , t ∈ T (u)}, is as follows. Definition 1 The monotone mapping T : Rn ⇒ Rn is maximal monotone if for u , t) ∈ gph T with ( u − u )T ( t− every pair ( u , t) ∈ (Rn × Rn ) \ gph T there exists (  t) < 0. To clarify this property, we consider the following example: ⎧ ⎪ for u < 0 ⎨−1, T (u) = {−1, 1} for u = 0 ⎪ ⎩ 1 for u > 0. Such mapping is monotone and outer-semicontinuous, but it is not convex-valued, which means it cannot be maximal monotone: considering ( u , t) = (0, 0), for all T    ( u , t) ∈ gph T , it holds that ( u − u ) (t − t) ≥ 0. A similar counter-example can be obtained by considering convex-valued monotone maps that are not outersemicontinuous. For F to be maximal monotone, thanks to the particular structure of the term N N   ∂ y ν ϕνL ν=1 in (4), a sufficient condition is the monotonicity of ∇ y ν θνL ν=1 , which has been proven in [12, Sect. 3.1] under mild conditions. The last step in order to prove the convexity of E is to call upon [9, Theorem 4.4], so that the following result holds. N  Proposition 3 If ∇ y ν θνL ν=1 is monotone, the equilibrium set E is convex.

4 Upper-Level GNEP Properties As done for the lower-level, in order to address the hierarchical GNEP given by the collection of the parametric problems (PμU ), we resort to GVI(G, E), that is the problem to find x ∈ E such that:

44

L. Lampariello et al.

∃gx ∈ G(x) :

gxT (z − x) ≥ 0, ∀z ∈ E,

where  ⎞ ∂x 1 θ1U (x) + ϕ1L (x 1 ) ⎟ ⎜ .. | |K ⇒ R| |K . G(x)  ⎝ ⎠:R .   U L (x M ) ∂x M θ M (x) + ϕ M ⎛

We denote by SOL(G, E) the solution set of GVI(G, E). Remark 2 We can say also for the upper level that the objective functions are playerμ are positive semidefinite. This implies that the objective convex, because  ν and  functions are regular, and we can write: ⎞ ⎛  ⎞ ⎛ ⎞ ∂x 1 θ1U (x) + ϕ1U (x 1 ) ∇x 1 θ1U (x) ∂x 1 ϕ1U (x 1 ) ⎜ ⎟ ⎜ ⎟ ⎜ ⎟ .. .. .. G(x)  ⎝ ⎠=⎝ ⎠+⎝ ⎠. . . .  U  U U U M M ∇x M θ M (x) ∂x M ϕ M (x ) ∂x M θ M (x) + ϕ N (x ) ⎛

Notice that E is nonempty, convex and compact, but it is a set coupling the variables of the different upper-level account managers. However, E is shared among the problems (PμU ), therefore the hierarchical GNEP is a Jointly-Convex one. Similarly to Proposition 1, leveraging joint convexity (see [4]), we have the following inclusion. Proposition 4 SOL(G, E) is included in the equilibrium set H of the hierarchical GNEP, where H  {x ∈ E : θμU (x μ , x −μ ) + ϕμU (x μ ) ≤

 θμU (z μ , x −μ ) + ϕμU (z μ ), ∀(z μ , x −μ ) ∈ E, μ = 1, . . . , M ⊆ R| |K .

Mimicking the definition of variational equilibria for Jointly-Convex GNEPs in the smooth case (see [8]), we call $\mathrm{SOL}(G, E)$, i.e. the set of solutions that can be computed by resorting to GVI$(G, E)$, the set of variational equilibria for the hierarchical GNEP. It can easily be shown that $\mathrm{SOL}(G, E)$ is nonempty and compact. The following result trivially holds in view of the properties of $\mathrm{SOL}(G, E)$ and $E$.

Proposition 5 The equilibrium set $H$ is nonempty and bounded.

We are therefore able, by means of Proposition 5, to prove that the Jointly-Convex hierarchical GNEP with nonsmooth terms is solvable.


5 Conclusions

We propose a general multi-portfolio selection model consisting of a hierarchical structure where lower-level accounts are controlled by upper-level managers. Additionally, we generalize the state of the art by considering nonsmooth objective functions and by solving the hierarchical Jointly-Convex GNEP via GVIs. Summarizing, our main contributions are:

• the introduction of an upper-level non-cooperative game among managers handling lower-level accounts;
• the inclusion of nonsmooth sparsity-enhancing terms both at the lower and at the upper level;
• the theoretical analysis of the nonsmooth Jointly-Convex hierarchical GNEP framework by means of GVIs;
• the proof of existence of equilibria for the Jointly-Convex hierarchical GNEP.

As future work, we aim to develop effective numerical methods to tackle these nonsmooth hierarchical games. We wish to employ Tikhonov-like approaches (generalizing and building on the results in [7, 11, 14, 16]) to develop globally convergent algorithms with complexity guarantees.

Acknowledgements Lorenzo Lampariello was partially supported by the MIUR PRIN 2017 (grant 20177WC4KE).

References

1. Bertsimas, D., Darnell, C., Soucy, R.: Portfolio construction through mixed-integer programming at Grantham, Mayo, Van Otterloo and Company. Interfaces 29(1), 49–66 (1999). https://doi.org/10.1287/inte.29.1.49
2. Cesarone, F., Scozzari, A., Tardella, F.: A new method for mean-variance portfolio optimization with cardinality constraints. Ann. Oper. Res. 205(1), 213–234 (2013). https://doi.org/10.1007/s10479-012-1165-7
3. Chan, D., Pang, J.S.: The generalized quasi-variational inequality problem. Math. Oper. Res. 7(2), 211–222 (1982). https://doi.org/10.1287/moor.7.2.211
4. Facchinei, F., Fischer, A., Piccialli, V.: On generalized Nash games and variational inequalities. Oper. Res. Lett. 35(2), 159–164 (2007). https://doi.org/10.1016/j.orl.2006.03.004
5. Facchinei, F., Lampariello, L.: Partial penalization for the solution of generalized Nash equilibrium problems. J. Global Optim. 50(1), 39–57 (2011). https://doi.org/10.1007/s10898-010-9579-8
6. Facchinei, F., Pang, J.S. (eds.): Finite-Dimensional Variational Inequalities and Complementarity Problems. Springer, New York (2003)
7. Facchinei, F., Pang, J.S., Scutari, G., Lampariello, L.: VI-constrained hemivariational inequalities: distributed algorithms and power control in ad-hoc networks. Math. Prog. 145(1), 59–96 (2014)
8. Facchinei, F., Sagratella, S.: On the computation of all solutions of jointly convex generalized Nash equilibrium problems. Optim. Lett. 5(3), 531–547 (2011). https://doi.org/10.1007/s11590-010-0218-6
9. Fang, S.C., Peterson, E.L.: Generalized variational inequalities. J. Optim. Theory Appl. 38(3), 363–383 (1982). https://doi.org/10.1007/BF00935344
10. Iancu, D., Trichakis, N.: Fairness and efficiency in multiportfolio optimization. Oper. Res. 62(6), 1285–1301 (2014). https://doi.org/10.1287/opre.2014.1310
11. Lampariello, L., Neumann, C., Ricci, J.M., Sagratella, S., Stein, O.: An explicit Tikhonov algorithm for nested variational inequalities. Comput. Optim. Appl. 77(2), 335–350 (2020)
12. Lampariello, L., Neumann, C., Ricci, J.M., Sagratella, S., Stein, O.: Equilibrium selection for multi-portfolio optimization. Eur. J. Oper. Res. 295(1), 363–373 (2021). https://doi.org/10.1016/j.ejor.2021.02.033
13. Lampariello, L., Priori, G., Sagratella, S.: On nested affine variational inequalities: the case of multi-portfolio selection. In: Optimization in Artificial Intelligence and Data Sciences. AIRO Springer Series, pp. 1–9 (2022). To appear
14. Lampariello, L., Priori, G., Sagratella, S.: On the solution of monotone nested variational inequalities (2021). arXiv:2105.13463
15. Rockafellar, R.T., Wets, R.J.B.: Variational Analysis, vol. 317. Springer Science & Business Media (2009)
16. Scutari, G., Facchinei, F., Pang, J.S., Lampariello, L.: Equilibrium selection in power control games on the interference channel. In: 2012 Proceedings IEEE INFOCOM, pp. 675–683 (2012)
17. Yang, Y., Rubio, F., Scutari, G., Palomar, D.P.: Multi-portfolio optimization: a potential game approach. IEEE Trans. Signal Proc. 61(22), 5590–5602 (2013). https://doi.org/10.1109/TSP.2013.2277839

A Multiclass Network International Migration Model Under Shared Regulations Mauro Passacantando and Fabio Raciti

Abstract In this note we extend a previously proposed model of international human migration by introducing the possibility that some of the destination countries agree to establish a common upper bound on the migratory flows they are willing to accept jointly. In this framework, we propose a new equilibrium definition and prove its equivalence to a suitably defined variational inequality. Some numerical examples show that the flow distribution under joint regulations can differ from those corresponding to a situation where each government autonomously establishes migration bounds.

1 Introduction

International human migration can be defined as the movement of people from one country to another in order to improve their overall wealth. Specific reasons that motivate individuals to migrate range from the desire to find a (possibly better) job to the need to flee wars or natural disasters. As a matter of fact, the volume of migratory flows has been growing at a high pace in the last twenty years, as confirmed by a recent report of the United Nations [15]. More precisely, in 2017 the number of international migrants was an estimated 258 million persons, with around 10% of them being refugees or asylum seekers. At the time of writing this note, over two million refugees from Ukraine have reached several European countries, with around seventy thousand of them choosing Italy.

International migration is a complex phenomenon with different aspects, and scholars from various fields (e.g., political sciences, sociology, statistics, economics) have devoted many papers to its investigation. In this note, we deal with the research strand that focuses on the mathematical modeling of migration flows. Let us recall that the first attempt to frame the migration process in a formal manner dates back to the seminal work of the English geographer Ernest Ravenstein, who formulated his seven laws of migration in 1885 [14], using for his investigation real data coming from population censuses. An account of mathematical models of migration, with a historical perspective, can be found in the interesting survey by Aleshkovski and Iontsev [2]. Mathematicians have focused on some particular features of this complex process and have, in turn, used tools ranging from statistics to mathematical physics or optimization.

Our contribution is inspired by the microeconomic approach, which focuses on the equilibrium analysis from the user point of view, where each country is endowed with a utility function describing the benefits that migrants expect to obtain by choosing that country. A migration cost is also assigned to each pair of origin-destination countries, in order to describe not only the economic cost of displacement, but also the social and psychological cost associated with the migration choice. Within this approach it is also possible to introduce constraints so as to describe some regulatory mechanisms that countries may want to apply to control the incoming flows. The impact of some migration regulations on the different social components of citizens, and on the perceived national sovereignty, has been investigated in the interesting paper [7]. The user point of view has been developed in a series of papers by A. Nagurney, with numerous co-authors, by applying the unifying theory of variational inequalities to compute the equilibrium flows (see, e.g., [10, 11]). Variational inequalities have proved to be very effective in modeling and numerically solving a great variety of equilibrium problems in many fields of applied science [4, 8, 9].

We modify the very recent model in [12], where the authors include a regulatory mechanism in the previous work [11], as an upper bound on the incoming flows, for each country. Instead, we consider the possibility that some group of countries reach an agreement on the maximum incoming flow that they are willing to accept. Thus, a shared upper bound is imposed by coalitions and not, in general, by single countries. As a consequence, a new definition of migration equilibrium flow is put forward, together with a related variational inequality.

The paper is arranged as follows. In Sect. 2 we first introduce the notation and the general mathematical framework. We then present our model of shared regulations, provide the new definition of equilibrium, and prove its equivalence to a suitably formulated variational inequality. Section 3 is devoted to the numerical solution of some small-size examples, where we also show that the equilibrium flow under shared constraints can differ from the one where each country imposes its own upper bound. In the short concluding section we outline some future research perspectives.


2 Model

In what follows, vectors in $\mathbb{R}^m$ are thought of as columns when involved in matrix operations; $a^\top$ denotes the transpose of vector $a$, and $a^\top b$ the canonical scalar product in $\mathbb{R}^m$. We consider a set $\mathcal{N} = \{1, \ldots, N\}$ of $N$ countries and assume that the population of migrants of each country can be divided into $K$ different classes, which constitute the set $\mathcal{K} = \{1, \ldots, K\}$. The countries are thus considered as the nodes of a network where the arcs represent the migratory routes. Let $b_i^k$ denote the initial population of migrants of class $k$ in country $i$, and $p_i^k$ the current population of migrants of class $k$ in country $i$ after a migratory phase has occurred (no repeat or chain migration is considered in this model); populations can be grouped into a vector $p \in \mathbb{R}^{KN}$ such that
$$p = (p_1^1, \ldots, p_N^1, \ldots, p_1^K, \ldots, p_N^K) = (p^1, \ldots, p^k, \ldots, p^K).$$
The migratory flow from country $i$ to country $j$, with $i \neq j$, of class $k$ is denoted by $f_{ij}^k$, and we group the flows into a vector $f \in \mathbb{R}^{KN(N-1)}$ such that $f = (f^1, \ldots, f^k, \ldots, f^K)$, where each $f^k$ is a subvector with the $N(N-1)$ components $f_{ij}^k$, ordered in an arbitrarily prescribed manner. Each class of migrants chooses a destination country according to the perceived attractiveness of that country, which is embodied in a utility function $u_i^k$, $i \in \mathcal{N}$, $k \in \mathcal{K}$. The utilities are grouped into the vector $u = (u_1^1, \ldots, u_N^1, \ldots, u_1^K, \ldots, u_N^K) = (u^1, \ldots, u^k, \ldots, u^K)$. A common assumption is that the utility of each country can depend on the whole population vector, hence $u : p \mapsto u(p) \in \mathbb{R}^{KN}$, so as to take into account possible competition or saturation effects. Another factor that influences the migration choice is the cost $c_{ij}^k$ faced by migrants of class $k$ in order to migrate from country $i$ to country $j$; all the costs are grouped into a vector in the same manner as the flows. For the sake of generality, the cost is assumed to depend on the whole network flow, hence $c : f \mapsto c(f) \in \mathbb{R}^{KN(N-1)}$. As remarked in [11], $c$ does not represent the mere economic cost of migration but also encompasses the social and psychological difficulties connected with the migration decision. The current population after migration can be expressed, for each class and each country, in terms of the initial one and the net flow as
$$p_i^k = b_i^k + \sum_{j \in \mathcal{N}\setminus\{i\}} f_{ji}^k - \sum_{j \in \mathcal{N}\setminus\{i\}} f_{ij}^k, \quad \forall\, i \in \mathcal{N},\ \forall\, k \in \mathcal{K}, \qquad (1)$$


where, for each class and each country, the outgoing flow cannot exceed the initial population:
$$\sum_{j \in \mathcal{N}\setminus\{i\}} f_{ij}^k \le b_i^k, \quad \forall\, i \in \mathcal{N},\ \forall\, k \in \mathcal{K}, \qquad (2)$$

and $f_{ij}^k \ge 0$. For later use, we observe that the utility function can be expressed as a function of the flows by using the conservation equation (1) and defining
$$U(f) := u(b + Df), \qquad (3)$$

where $D$ is a block-diagonal matrix of dimension $KN \times KN(N-1)$ whose diagonal blocks are all equal to the node-arc incidence matrix $E$ of the graph associated with the countries, whose elements $e_{va}$ are given by
$$e_{va} = \begin{cases} -1 & \text{if } v \text{ is the origin of arc } a, \\ 1 & \text{if } v \text{ is the destination of arc } a, \\ 0 & \text{otherwise.} \end{cases}$$

Now we wish to consider the possibility that some countries gather (i.e., form coalitions) to establish a joint bound on migrants' flows. To this end, denote by $\mathcal{R}$ the set of country coalitions and by $r$ the generic coalition. Moreover, let

$S_r^d$ = the set of countries in coalition $r$;
$S_r^o$ = a subset of countries which do not belong to coalition $r$;
$S_r^m$ = the set of migrant classes on which coalition $r$ imposes an upper bound.

For each $r \in \mathcal{R}$, we also define an indicator function connected with the above sets as follows:
$$I_{S_r^o \times S_r^d \times S_r^m}(i, j, k) = \begin{cases} 1 & \text{if } i \in S_r^o,\ j \in S_r^d,\ k \in S_r^m, \\ 0 & \text{otherwise.} \end{cases}$$

We further denote by $B_r$ the upper bound mentioned above, that is, the maximum number of migrants belonging to some classes (in $S_r^m$) arriving from a group of countries which are not in the coalition. Within this framework, the feasible set of flows is given by
$$C := \Big\{ f = (f_{ij}^k)_{k \in \mathcal{K},\, i \in \mathcal{N},\, j \in \mathcal{N}\setminus\{i\}} :\ f \ge 0,\ \sum_{j \in \mathcal{N}\setminus\{i\}} f_{ij}^k \le b_i^k\ \ \forall\, i \in \mathcal{N},\ \forall\, k \in \mathcal{K},\ \ \sum_{i \in S_r^o} \sum_{j \in S_r^d} \sum_{k \in S_r^m} f_{ij}^k \le B_r\ \ \forall\, r \in \mathcal{R} \Big\}.$$
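For illustration, the matrices $E$ and $D$, and hence the population update (1), can be built in a few lines of Python. The function names and the arc ordering below are our own choices, given as a sketch rather than the authors' code.

```python
import numpy as np

# Build the node-arc incidence matrix E for N countries (arcs ordered as
# all pairs (i, j) with i != j) and the block-diagonal matrix D of (3).
# The final populations of (1) are then p = b + D @ f.
def incidence_matrix(N):
    arcs = [(i, j) for i in range(N) for j in range(N) if i != j]
    E = np.zeros((N, len(arcs)))
    for a, (i, j) in enumerate(arcs):
        E[i, a] = -1.0   # i is the origin of arc a
        E[j, a] = 1.0    # j is the destination of arc a
    return E

def D_matrix(N, K):
    # one identical block per migrant class
    return np.kron(np.eye(K), incidence_matrix(N))

# usage: p = b + D_matrix(N, K) @ f, with b, f stacked class by class
```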


Definition 1 (Equilibrium under joint regulations) A flow $\bar f \in C$ is an equilibrium flow if, $\forall\, r \in \mathcal{R}$, $\forall\, i \in \mathcal{N}$, $\forall\, k \in \mathcal{K}$, there exist $\gamma_r$ and $\beta_i^k$ such that the following three conditions hold:
$$\gamma_r \begin{cases} = 0 & \text{if } \sum_{i \in S_r^o} \sum_{j \in S_r^d} \sum_{k \in S_r^m} \bar f_{ij}^k < B_r, \\ \ge 0 & \text{if } \sum_{i \in S_r^o} \sum_{j \in S_r^d} \sum_{k \in S_r^m} \bar f_{ij}^k = B_r, \end{cases} \qquad (4)$$
$$\beta_i^k \begin{cases} = 0 & \text{if } \sum_{j \in \mathcal{N}\setminus\{i\}} \bar f_{ij}^k < b_i^k, \\ \ge 0 & \text{if } \sum_{j \in \mathcal{N}\setminus\{i\}} \bar f_{ij}^k = b_i^k, \end{cases} \qquad (5)$$
$$U_j^k(\bar f) - U_i^k(\bar f) - c_{ij}^k(\bar f) \begin{cases} = \beta_i^k + \sum_{r \in \mathcal{R}} \gamma_r\, I_{S_r^o \times S_r^d \times S_r^m}(i, j, k) & \text{if } \bar f_{ij}^k > 0, \\ \le \beta_i^k + \sum_{r \in \mathcal{R}} \gamma_r\, I_{S_r^o \times S_r^d \times S_r^m}(i, j, k) & \text{if } \bar f_{ij}^k = 0. \end{cases} \qquad (6)$$

In order to illustrate the meaning of the conditions above, we can consider the particular case where $\mathcal{R}$ consists of only one coalition. The left-hand side of (6) represents the gain that migrants coming from country $i$ perceive when moving towards country $j$. Let us notice that when the upper bound $B_r$ is not reached, then $\gamma_r$ equals zero and the gain of the migrants only depends on the country of origin. Otherwise, the gain can depend on both the country of origin and destination, and is greater than in the previous case. For a more detailed discussion we refer to the numerical examples in Sect. 3.

In view of equilibria computation, as well as to deepen our analysis, we now provide a variational inequality formulation of the equilibrium defined above.

Theorem 1 Let $T : \mathbb{R}^{KN(N-1)} \to \mathbb{R}^{KN(N-1)}$ be the operator defined by $T(f) := -D^\top U(f)$, where $D$ is the matrix defined in (3). Then $\bar f \in C$ is an equilibrium flow according to Definition 1 if and only if $\bar f$ solves the variational inequality problem of finding $\bar f \in C$ such that
$$T(\bar f)^\top (f - \bar f) + c(\bar f)^\top (f - \bar f) \ge 0 \quad \forall\, f \in C. \qquad (7)$$

Proof We first observe that $(U(\bar f)^\top D)_{ij}^k = U_j^k(\bar f) - U_i^k(\bar f)$, which yields the decomposed form of (7):
$$\sum_{i \in \mathcal{N}} \sum_{j \in \mathcal{N}\setminus\{i\}} \sum_{k \in \mathcal{K}} \big[ c_{ij}^k(\bar f) - U_j^k(\bar f) + U_i^k(\bar f) \big]\, (f_{ij}^k - \bar f_{ij}^k) \ge 0. \qquad (8)$$


We can now associate to the variational inequality on $C$ its KKT system in the usual manner, by introducing the auxiliary function $g(f) = T(\bar f)^\top f + c(\bar f)^\top f$, which satisfies
$$g(\bar f) \le g(f), \quad \forall\, f \in C,$$
that is, $\bar f$ is a minimum point for $g$ in $C$. Considering the KKT system for the above minimum problem we arrive, after some algebra, at the following conditions:
$$c_{ij}^k(\bar f) - U_j^k(\bar f) + U_i^k(\bar f) - \alpha_{ij}^k + \beta_i^k + \sum_{r \in \mathcal{R}} \gamma_r\, I_{S_r^o \times S_r^d \times S_r^m}(i, j, k) = 0, \quad \forall\, i \in \mathcal{N},\ \forall\, j \in \mathcal{N}\setminus\{i\},\ \forall\, k \in \mathcal{K}, \qquad (9)$$
$$\bar f_{ij}^k\, \alpha_{ij}^k = 0, \quad \alpha_{ij}^k \ge 0, \quad \forall\, i \in \mathcal{N},\ \forall\, j \in \mathcal{N}\setminus\{i\},\ \forall\, k \in \mathcal{K}, \qquad (10)$$
$$\beta_i^k \Big( \sum_{j \in \mathcal{N}\setminus\{i\}} \bar f_{ij}^k - b_i^k \Big) = 0, \quad \beta_i^k \ge 0, \quad \forall\, i \in \mathcal{N},\ \forall\, k \in \mathcal{K}, \qquad (11)$$
$$\gamma_r \Big( \sum_{i \in S_r^o} \sum_{j \in S_r^d} \sum_{k \in S_r^m} \bar f_{ij}^k - B_r \Big) = 0, \quad \gamma_r \ge 0, \quad \forall\, r \in \mathcal{R}, \qquad (12)$$
$$\bar f \in C. \qquad (13)$$

It is straightforward to verify that the above KKT conditions coincide with the equilibrium conditions (4)–(6). Moreover, since the KKT conditions are both necessary and sufficient in order that $\bar f$ be a solution to (7), the proof is complete. □

Remark 1 Since the set $C$ is compact, it is sufficient to assume the continuity of $U$ and $c$ to ensure that (7) admits solutions (see, e.g., [4]).

3 Numerical Examples

In this section we report some numerical examples that show the impact of possible regulations on the equilibrium solution. We consider a problem with N = 3 countries and K = 2 classes of migrants (see Fig. 1). The initial populations of both classes are defined as follows:

Fig. 1 Network structure of the problem considered in Sect. 3: the three countries are the nodes of the network and the arcs are the possible migratory routes, drawn separately for Migration Class 1 and Migration Class 2

$$b_1^1 = 10{,}000, \quad b_2^1 = 5{,}000, \quad b_3^1 = 1{,}000 \quad \text{(Class 1)}$$
$$b_1^2 = 5{,}000, \quad b_2^2 = 3{,}000, \quad b_3^2 = 500 \quad \text{(Class 2)}$$

The utility functions associated with the countries are:

Class 1:
$$u_1^1(p) = -p_1^1 - 0.5\,p_2^1 - 0.5\,p_1^2 + 30{,}000,$$
$$u_2^1(p) = -p_1^1 - 2\,p_2^1 - p_2^2 + 20{,}000,$$
$$u_3^1(p) = -0.5\,p_2^1 - 3\,p_3^1 - p_3^2 + 10{,}000.$$

Class 2:
$$u_1^2(p) = -p_1^1 - 2\,p_1^2 + 25{,}000,$$
$$u_2^2(p) = -p_2^1 - 3\,p_2^2 + 15{,}000,$$
$$u_3^2(p) = -0.5\,p_3^1 - p_3^2 + 20{,}000.$$

The migration cost functions are:

Class 1:
$$c_{12}^1(f) = 2 f_{12}^1 + 20, \quad c_{13}^1(f) = f_{13}^1 + 30, \quad c_{21}^1(f) = 5 f_{21}^1 + 40,$$
$$c_{23}^1(f) = 4 f_{23}^1 + 20, \quad c_{31}^1(f) = 6 f_{31}^1 + 80, \quad c_{32}^1(f) = 4 f_{32}^1 + 60.$$

Class 2:
$$c_{12}^2(f) = 2 f_{12}^2 + 10, \quad c_{13}^2(f) = f_{13}^2 + 20, \quad c_{21}^2(f) = 3 f_{21}^2 + 10,$$
$$c_{23}^2(f) = 2 f_{23}^2 + 30, \quad c_{31}^2(f) = f_{31}^2 + 25, \quad c_{32}^2(f) = 2 f_{32}^2 + 15.$$

We consider three scenarios.

Scenario 1: no regulation is imposed.

Scenario 2: countries 1 and 3 each impose a cap on the total number of migrants entering their country, corresponding to 2,000 and 3,000 migrants, respectively; that is, the following constraints are imposed:
$$f_{21}^1 + f_{31}^1 + f_{21}^2 + f_{31}^2 \le 2{,}000,$$
$$f_{13}^1 + f_{23}^1 + f_{13}^2 + f_{23}^2 \le 3{,}000.$$


Scenario 3: countries 1 and 3 form a coalition and establish that the maximum number of migrants coming from country 2 (to country 1 or 3) equals 4,000, i.e.,
$$f_{21}^1 + f_{23}^1 + f_{21}^2 + f_{23}^2 \le 4{,}000.$$

Notice that in Scenario 3 no constraint is imposed on the migratory flow from country 1 to country 3 or vice versa.

Since the variational inequality (7) to be solved has a polyhedral feasible region and an affine, strongly monotone map, we reformulated it as an equivalent convex quadratic optimization problem (see [1]) and solved it by means of the MATLAB function quadprog from the Optimization Toolbox. Computations were implemented in MATLAB R2021 and run on an Intel Core i7 system at 2.5 GHz, with 16 GB of RAM, under macOS 12.2. In all cases considered, the computational time was less than 0.01 s.
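For illustration, the following minimal sketch solves an affine variational inequality of the same form as (7) with the extragradient method, projecting onto the polyhedral feasible set via cvxpy. This is not the authors' quadprog-based QP reformulation, and M, q, A, d below are toy placeholders rather than the migration instance.

```python
import numpy as np
import cvxpy as cp

# Solve the affine VI: find x in C with (M x + q)^T (y - x) >= 0 for all
# y in C = {y : A y <= d, y >= 0}, via the extragradient method.
def project(z, A, d):
    y = cp.Variable(len(z))
    cp.Problem(cp.Minimize(cp.sum_squares(y - z)),
               [A @ y <= d, y >= 0]).solve()
    return y.value

def extragradient(M, q, A, d, gamma=0.05, iters=1000, tol=1e-8):
    F = lambda x: M @ x + q                    # affine, monotone map
    x = project(np.zeros(M.shape[0]), A, d)    # feasible starting point
    for _ in range(iters):
        x_half = project(x - gamma * F(x), A, d)
        x_new = project(x - gamma * F(x_half), A, d)
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x_new

M = np.array([[2.0, 1.0], [-1.0, 2.0]])        # sym(M) positive definite
q = np.array([-2.0, -2.0])
A, d = np.eye(2), np.array([1.0, 1.0])         # C = [0, 1]^2
print(extragradient(M, q, A, d))
```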

Results for Scenario 1 are reported in Table 1, where, for each class $k$ of migrants, we report the initial population $b^k$, the equilibrium flows $\bar f^k$, the final population $p^k$, the equilibrium gains $U_j^k(\bar f) - U_i^k(\bar f) - c_{ij}^k(\bar f)$ and the vector of KKT multipliers $\beta^k$. Notice that the equilibrium conditions (4)–(6) are satisfied: any zero equilibrium flow $\bar f_{ij}^k = 0$ is associated with a negative equilibrium gain, while any positive equilibrium flow corresponds to a zero gain, except for $\bar f_{31}^1$, which corresponds to a positive gain (equal to $\beta_3^1$) since all the migrants of class 1 in country 3 leave the country.

Table 1 Results for scenario 1: no regulations (* marks the diagonal i = j)

Class 1:
  b_i^1     i\j   flows: 1      2      3    final pop.   gains: 1       2        3      β_i^1
  10,000    1            *      0      0      13,741            *    −13,763  −12,908     0
   5,000    2          2,741    *     211      2,048            0       *         0       0
   1,000    3          1,000    0      *         211          6,798   −925        *     6,798

Class 2:
  b_i^2     i\j   flows: 1      2      3    final pop.   gains: 1       2        3      β_i^2
   5,000    1            *     65    4,013       922            *       0         0       0
   3,000    2            0      *    1,932     1,132          −149      *         0       0
     500    3            0      0      *       6,446        −4,058  −3,909        *       0


Table 2 reports the results for Scenario 2. We remark that, due to the upper bound imposed by country 1, not all class 1 migrants from country 3 leave the country; thus $\beta^1 = \beta^2 = 0$ in this case. However, both constraints imposed by countries 1 and 3 are tight at equilibrium, and the corresponding KKT multipliers are $\gamma_1 = 8093$ and $\gamma_3 = 7811$. Notice that the equilibrium conditions (4)–(6) are satisfied: positive equilibrium flows entering countries 1 and 3 correspond to a maximal equilibrium gain, equal to $\gamma_1$ and $\gamma_3$ respectively, while zero equilibrium flows correspond to (positive or negative) non-maximal gains.

Table 2 Results for scenario 2: regulations individually imposed by countries 1 and 3 (γ_1 = 8093, γ_3 = 7811; * marks the diagonal i = j)

Class 1:
  b_i^1     i\j   flows: 1      2      3    final pop.   gains: 1       2        3      β_i^1
  10,000    1            *      0      0      12,000            *    −15,524  −11,358     0
   5,000    2          1,474    *      0       3,526          8,093     *      4,157      0
   1,000    3            526    0      *         474          8,093  −4,237       *       0

Class 2:
  b_i^2     i\j   flows: 1      2      3    final pop.   gains: 1       2        3      β_i^2
   5,000    1            *      0    1,811     3,189            *     −588     7,811      0
   3,000    2            0      *    1,189     1,811           568      *      7,811      0
     500    3            0      0      *       3,500        −9,667  −10,235       *       0

Finally, Table 3 reports the results for Scenario 3. We remark that the joint constraint imposed by countries 1 and 3 (on flows from country 2) is tight at equilibrium and the corresponding KKT multiplier is $\gamma_{1,3} = 2582$. Notice that, unlike Scenario 2, the equilibrium gains corresponding to positive equilibrium flows are not necessarily positive nor the same. Specifically, the equilibrium gains corresponding to positive flows from country 2, i.e., $\bar f_{21}^1$ and $\bar f_{23}^2$, equal $\gamma_{1,3} = 2582$; the gain related to $\bar f_{31}^1$ equals $\beta_3^1 = 6174$; while the gain corresponding to $\bar f_{13}^2$ equals $\beta_1^2 = 0$.

Table 3 Results for scenario 3: regulation imposed jointly by countries 1 and 3 (γ_{1,3} = 2582; * marks the diagonal i = j)

Class 1:
  b_i^1     i\j   flows: 1      2      3    final pop.   gains: 1       2        3      β_i^1
  10,000    1            *      0      0      13,441            *    −14,845  −12,284     0
   5,000    2          2,441    *      0       2,559          2,582     *      2,551      0
   1,000    3          1,000    0      *           0          6,174  −2,631       *     6,174

Class 2:
  b_i^2     i\j   flows: 1      2      3    final pop.   gains: 1       2        3      β_i^2
   5,000    1            *      0    4,090       910            *    −1,631       0       0
   3,000    2            0      *    1,559     1,441         1,611      *      2,582      0
     500    3            0      0      *       6,150        −4,135  −5,746        *       0
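As a numerical sanity check on the equilibrium conditions, the following sketch recomputes the Scenario 1 equilibrium gains from the flows reported in Table 1. All data are transcribed from this section; since the reported flows are rounded, the recomputed gains match the table only up to a few units.

```python
import numpy as np

# Scenario 1 data: initial populations b and reported equilibrium flows
# f[k][i, j] (class k, origin i, destination j), transcribed from Table 1.
b = {1: np.array([10_000., 5_000., 1_000.]),
     2: np.array([5_000., 3_000., 500.])}
f = {1: np.array([[0., 0., 0.], [2_741., 0., 211.], [1_000., 0., 0.]]),
     2: np.array([[0., 65., 4_013.], [0., 0., 1_932.], [0., 0., 0.]])}

# final populations p_i^k = b_i^k + inflows - outflows, as in (1)
p = {k: b[k] + f[k].sum(axis=0) - f[k].sum(axis=1) for k in (1, 2)}
p1, p2 = p[1], p[2]

u = {1: np.array([-p1[0] - 0.5 * p1[1] - 0.5 * p2[0] + 30_000,
                  -p1[0] - 2 * p1[1] - p2[1] + 20_000,
                  -0.5 * p1[1] - 3 * p1[2] - p2[2] + 10_000]),
     2: np.array([-p1[0] - 2 * p2[0] + 25_000,
                  -p1[1] - 3 * p2[1] + 15_000,
                  -0.5 * p1[2] - p2[2] + 20_000])}

# linear costs c_ij^k(f) = a_ij^k f_ij^k + q_ij^k (diagonal unused)
a = {1: np.array([[0., 2., 1.], [5., 0., 4.], [6., 4., 0.]]),
     2: np.array([[0., 2., 1.], [3., 0., 2.], [1., 2., 0.]])}
q = {1: np.array([[0., 20., 30.], [40., 0., 20.], [80., 60., 0.]]),
     2: np.array([[0., 10., 20.], [10., 0., 30.], [25., 15., 0.]])}

for k in (1, 2):
    # gain[i, j] = U_j^k - U_i^k - c_ij^k
    gain = u[k][None, :] - u[k][:, None] - (a[k] * f[k] + q[k])
    print(f"class {k} gains (compare with Table 1):")
    print(np.round(gain))
```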




4 Conclusion and Further Research Perspectives

In this note we extended a previous model of equilibrium of international migration flows by considering the possibility that a group of destination countries form a coalition and establish a shared upper bound on the incoming flows. This modification of the constraint set suggests a new definition of equilibrium, which admits an equivalent variational inequality formulation. Future work regards the introduction of uncertain data, along the same lines as in [3], which will require the use of stochastic variational inequalities (see, e.g., [6, 13]). The model could also be refined by considering the case where the constraints are satisfied on average, following the approach in [5].

Acknowledgements The authors are members of the Gruppo Nazionale per l'Analisi Matematica, la Probabilità e le loro Applicazioni (GNAMPA—National Group for Mathematical Analysis, Probability and their Applications) of the Istituto Nazionale di Alta Matematica (INdAM—National Institute of Higher Mathematics). This research was partially supported by the research project "Programma ricerca di ateneo UNICT 2020-22 linea 2-OMNIA" of the University of Catania. This support is gratefully acknowledged.

References

1. Aghassi, M., Bertsimas, D., Perakis, G.: Solving asymmetric variational inequalities via convex optimization. Oper. Res. Lett. 34, 481–490 (2006)
2. Aleshkovski, I., Iontsev, V.: Mathematical models of migration. In: Livchits, V.N., Tokarev, V.V. (eds.) Systems Analysis and Modeling of Integrated World System, vol. II, pp. 185–214. EOLSS Publishers, Oxford (2009)
3. Causa, A., Jadamba, B., Raciti, F.: A migration equilibrium model with uncertain data and movement costs. Decis. Econ. Finance 40, 159–175 (2017)
4. Facchinei, F., Pang, J.-S.: Finite-Dimensional Variational Inequalities and Complementarity Problems. Springer, Berlin (2003)
5. Faraci, F., Jadamba, B., Raciti, F.: On stochastic variational inequalities with mean value constraints. J. Optim. Theory Appl. 171, 675–693 (2016)
6. Gwinner, J., Jadamba, B., Khan, A.A., Raciti, F.: Uncertainty Quantification in Variational Inequalities: Theory, Numerics, and Applications. Chapman and Hall/CRC (2021). ISBN 9781138626324
7. Helbling, M., Leblang, D.: Controlling immigration? How regulations affect migration flows. Eur. J. Polit. Res. 58, 248–269 (2019)
8. Konnov, I.: Equilibrium Models and Variational Inequalities. Elsevier (2007)
9. Nagurney, A.: Network Economics: A Variational Inequality Approach. Springer (1999)
10. Nagurney, A.: Migration equilibrium and variational inequalities. Econ. Lett. 31, 109–112 (1989)
11. Nagurney, A.: A network model of migration equilibrium with movement costs. Math. Comput. Model. 13, 79–88 (1990)
12. Nagurney, A., Daniele, P.: International human migration networks under regulations. Eur. J. Oper. Res. 291, 894–905 (2021)
13. Passacantando, M., Raciti, F.: A performance measure analysis for traffic networks with random data and general monotone cost functions. Optimization 71, 2375–2401 (2022)
14. Ravenstein, E.G.: The laws of migration. J. Stat. Soc. Lond. 48, 167–235 (1885). http://www.jstor.org/stable/2979181
15. United Nations: Population facts. Department of Economic and Social Affairs, Population Division, New York, December (2017). https://www.un.org/en/%20development/desa/population/publications/pdf/popfacts/PopFacts%20_%202017-5.pdf

Optimization and Machine Learning

GPS Data Mining to Infer Fleet Operations for Personalised Product Upselling Luca Bravi, Andrew Harbourne-Thomas, Alessandro Lori, Peter Mitchell, Samuele Salti, Leonardo Taccari, and Francesco Sambo

Abstract In the Fleet Management Software business, selecting the correct targets for a marketing campaign and personalising the content of the marketing material requires a deep understanding of the operations of potential customers, i.e. fleet owners. We present a system that mines raw GPS data from a fleet of vehicles with the aim of inferring fleet operations. The inference proceeds in subsequent steps of increased understanding, from the location of fleet hot-spots (depots, driver homes) to vehicle routes and daily work shifts and work stops. We present an application of our system where such information is exploited to select among a set of fleets the best candidates for the adoption of vehicle routing software and to create for them personalised marketing material, based on the estimated savings for the fleet due to use of the routing software. Experimental results from an email marketing campaign confirm the effectiveness of our system and of the personalised marketing material it delivers.

1 Introduction

Commercial vehicle fleets are used for daily operations in a wide range of industries, such as parcel delivery, healthcare, human transportation and field construction. To manage a vehicle fleet, a great deal of expertise is required in the areas of scheduling orders, planning routes, assigning vehicles and drivers, tracking vehicles while in transit and maintaining vehicles. To this end, fleets of commercial vehicles often rely on fleet management software, such as the Reveal solution offered by Verizon Connect (VZC).¹ The core functionality of this type of software is to provide real-time information on the positions and the operations of the vehicles of the fleet, along with metrics to monitor, over a given timeframe, the overall behaviour of the fleet or of any single vehicle. Fleet management software companies achieve this by providing to their customers vehicle tracking units (VTUs), which are mounted on vehicles to track their real-time GPS position. Please note that, in this context, with the term customer we refer to a commercial fleet company, i.e. the user of our fleet management software, and not to the customers of that company, as would be more common in the route optimisation literature.

¹ https://www.verizonconnect.com.

In addition to core functionalities, fleet management software often offers add-ons such as a routing tool, to compute the optimal daily routes of the fleet given a sequence of work stops, a start and stop location for each available vehicle, and several other constraints, such as vehicle capacities, driver skills, coverage areas and service time windows for work stops [1]. The output of the routing algorithm is a selection of the available vehicles, each with an assigned driver and a planned route across work stops.

Having a fleet owner adopt a routing service is not straightforward. A fleet owner may be skeptical, or not even aware, that effective routing algorithms are available, and could be reluctant to replace an apparently working strategy, even one based on manual planning, without concrete proof of the benefits that can be obtained. One possible way to demonstrate the improvement enabled by a routing solution is to ask the fleet owner to provide additional information and domain-specific constraints on historical routes, and then apply the routing algorithm to show the potential improvement brought by the new solution. However, this process is undesirable because of the effort required of the fleet owner.

To address this customer need, we developed a system to automatically derive operational information from the GPS traces of the VZC customer fleets and then apply the VZC routing solution to estimate the potential return on investment (ROI) that fleet owners would accrue if they were to adopt a routing solution. The ROI is estimated as a potential reduction in driven miles and/or driver working hours. By applying filters to the estimated ROI, combined with additional fleet statistics, our system is then able to identify promising targets among VZC customers for marketing campaigns to upsell routing solutions. Furthermore, the system can compose personalised marketing messages for the campaigns, based on the customers' historical data and potential ROI.

The remainder of this paper is structured as follows: Sect. 2 presents the structure of our pipeline, with each step detailed in the corresponding subsection. Experimental settings and results on a marketing campaign are reported in Sect. 3, while conclusions and possible future developments are discussed in Sect. 4.


2 The System

Given a fleet and an observation period, our system takes as input the GPS messages sent by all the fleet vehicles in the given period. Each GPS message includes a time stamp, the vehicle location (as a latitude, longitude pair), and an event code indicating whether the vehicle is running or in the act of turning the engine on or off. The GPS sampling frequency with engine on is normally 90 s, but can vary in the case of asynchronous messages (such as engine on/off). Let us define as engine-off stop the time interval between an engine-off and the subsequent engine-on event. Our pipeline is depicted in Fig. 1 and consists of the following steps:

• detection and classification of recurrent engine-off stop locations, or hotspots, such as depots or driver homes;
• aggregation of GPS points into routes, as sequences of stops;
• identification of fleet operations, as daily patterns of visits to depots, homes and work/non-work stops, and selection of a subset of desired operations;
• computation of the optimal set of vehicles, drivers and routes serving all the identified work stops via the VZC routing software;
• estimation of the return on investment and feasibility of the fleet as a VZC routing customer, and generation of the personalised marketing material.

Each step is described in more detail in what follows.

Fig. 1 Pipeline of the whole system: (a) GPS points; (b) hotspot detection and classification; (c) route identification; (d) operations identification (with daily patterns such as "depot, 11 work stops, depot", "home, depot, 5 work stops, depot, 1 stop, home", "depot, 10 work stops, depot"); (e) route optimisation


2.1 Hotspot Detection and Classification

The hotspot detection and classification procedure, summarized in this section, is described in more detail in [2]. This is a two-step process: hotspot detection, i.e. identification of the location and shape of the places where the fleet frequently stops, and classification of the hotspots into categories such as Home, Depot or Other [3].

To detect hotspots, we adopt the following procedure, starting from the engine-off stops of all the fleet vehicles (see Fig. 2). Engine-off stops are first aggregated into rectangular cells, defined by a regular longitude × latitude grid with a spacing of 0.0008 degrees (roughly corresponding to 80 m at US latitudes). For each cell, we compute the total engine-off time spent by all the vehicles of the fleet, and retain as initial hotspots all cells with a cumulative engine-off time larger than a given threshold. We then aggregate adjacent hotspots into larger hotspots of rectangular shape, with boundaries defined by the minimum and maximum latitude and longitude of the aggregated hotspots. For all the resulting hotspots, we re-compute the cumulative engine-off time and retain as final hotspots the ones with a cumulative engine-off time larger than a second given threshold. In the literature, grid-based approaches to hotspot detection [4], similar to ours, are not infrequently preferred to more complex clustering solutions [3, 5], because of their sufficiently good performance in practice and their high level of parallelism.

Fig. 2 Hotspot detection procedure. Blue: uniformly spaced grid to aggregate vehicle stops. Red: grid cells with sufficiently large cumulative stop time. Orange: final hotspots after aggregation of contiguous cells
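A minimal sketch of the grid aggregation and of the merging of adjacent cells follows; the data layout, thresholds and function names are illustrative assumptions, not the production implementation.

```python
from collections import defaultdict

# stops: iterable of (lat, lon, engine_off_seconds) tuples.
CELL = 0.0008  # grid spacing in degrees (~80 m at US latitudes)

def detect_cells(stops, cell_threshold=3600.0):
    """Aggregate engine-off time on a regular grid and keep the cells
    whose cumulative engine-off time exceeds the threshold (seconds)."""
    cells = defaultdict(float)
    for lat, lon, dwell in stops:
        cells[(int(lat // CELL), int(lon // CELL))] += dwell
    return {k for k, total in cells.items() if total >= cell_threshold}

def merge_adjacent(cells):
    """Merge 8-connected cells into rectangular hotspots, returned as
    bounding boxes (min_row, min_col, max_row, max_col)."""
    remaining, boxes = set(cells), []
    while remaining:
        stack, comp = [remaining.pop()], []
        while stack:
            r, c = stack.pop()
            comp.append((r, c))
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    if (r + dr, c + dc) in remaining:
                        remaining.discard((r + dr, c + dc))
                        stack.append((r + dr, c + dc))
        rows = [r for r, _ in comp]
        cols = [c for _, c in comp]
        boxes.append((min(rows), min(cols), max(rows), max(cols)))
    return boxes
```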

To classify the detected hotspots into the three classes Home, Depot and Other, we collect the following features for each hotspot, based on the engine-off stops falling within it:

• average number of engine-off stops per day;
• percentage of fleet vehicles that stop at least once in the hotspot;
• mean and standard deviation of engine-off stop duration;
• maximum and minimum engine-off stop duration;
• average cumulative engine-off stop duration per day;
• percentage of overnight engine-off stops;
• area and aspect ratio (height/width) of the hotspot bounding box;
• mean spatial density (number/area) of engine-off stops per day;
• percentage of engine-off stops starting or ending in one of two specific times of the day (morning, i.e. from 5 AM to 2 PM, and afternoon/evening, i.e. from 2 PM to 11 PM).

These 15 features are used by a Random Forest classifier [6] to estimate the correct class for each hotspot. A labelled dataset for the hotspot detection and classification procedure was generated by collecting two weeks of GPS data for 1025 VZC customers and by manually detecting and labelling all fleet hotspots. Labelling was accomplished by visual inspection of the stop patterns, of the vehicle routes and of the hotspot locations via Google Maps and Google Street View. To compare several variants of the hotspot detection procedure and of the hotspot feature set, we first split the dataset into a training set of 820 fleets and a test set of 205 fleets. For each variant, we tuned the random forest parameters via k-fold cross validation on the training set, then retrained the random forest with the best parameters on the entire training set and assessed the classification performance on the test set. The variant exhibiting the best performance was then re-trained on the entire dataset and used for our system.
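The training and selection protocol can be sketched with scikit-learn as follows. Note that the authors split the data by fleet (820/205 fleets), while, for brevity, this sketch splits individual rows; the parameter grid is a placeholder, not the tuned configuration.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

# X: (n_hotspots, 15) feature matrix; y: labels in {"Home","Depot","Other"}.
def fit_hotspot_classifier(X, y):
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                              random_state=0)
    search = GridSearchCV(RandomForestClassifier(random_state=0),
                          param_grid={"n_estimators": [100, 300],
                                      "max_depth": [None, 10, 20]},
                          cv=5)                 # k-fold CV on the train set
    search.fit(X_tr, y_tr)                      # tune, refit best on X_tr
    print("held-out accuracy:", search.score(X_te, y_te))
    return search.best_estimator_.fit(X, y)     # final re-train on all data
```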

2.2 Route Identification

The identification of vehicle routes starting from GPS messages requires the aggregation of subsequent messages into alternating segments of type journey and stop. The procedure we adopt to aggregate GPS messages into stops is a finite state machine, which scans the sequence of received messages and detects the beginning of a stop whenever

• an engine-off message is encountered, or
• the distance between two consecutive GPS messages is lower than a given threshold, or
• two consecutive GPS messages fall within the same Home or Depot hotspot.

Once the detection of a stop begins, we aggregate to that same stop all subsequent GPS messages whose distance from the previous message is lower than the given threshold, or that fall within the same hotspot. The rationale is to aggregate all GPS messages related to a still position or to small movements that can happen at a working location or at the depot (i.e. while loading the vehicle), so that we obtain the correct duration of the stop. As soon as the first message that does not meet any of the criteria for aggregation is encountered, the stop segment is terminated and a journey segment begins. As distance threshold, we set 100 m.
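A minimal sketch of this finite state machine follows; the message layout, the hotspot_id helper and the haversine distance are our own simplifications of the described procedure, with the 100 m threshold taken from the text.

```python
import math

THRESHOLD_M = 100.0  # distance threshold for stop aggregation

def haversine_m(p, q):
    """Great-circle distance in metres between (lat, lon) pairs."""
    (la1, lo1), (la2, lo2) = p, q
    la1, lo1, la2, lo2 = map(math.radians, (la1, lo1, la2, lo2))
    a = (math.sin((la2 - la1) / 2) ** 2
         + math.cos(la1) * math.cos(la2) * math.sin((lo2 - lo1) / 2) ** 2)
    return 2 * 6371000.0 * math.asin(math.sqrt(a))

def segment(messages, hotspot_id):
    """messages: (t, lat, lon, engine_off) tuples; hotspot_id(lat, lon)
    returns the Home/Depot hotspot containing the point, or None.
    Emits alternating ('journey' | 'stop', [messages]) segments."""
    segments, current, in_stop, prev = [], [], False, None
    for msg in messages:
        _t, lat, lon, engine_off = msg
        same_place = prev is not None and (
            haversine_m((prev[1], prev[2]), (lat, lon)) < THRESHOLD_M
            or (hotspot_id(lat, lon) is not None
                and hotspot_id(lat, lon) == hotspot_id(prev[1], prev[2])))
        stopping = engine_off or same_place
        if stopping != in_stop and current:      # state change: flush segment
            segments.append(("stop" if in_stop else "journey", current))
            current = []
        in_stop = stopping
        current.append(msg)
        prev = msg
    if current:
        segments.append(("stop" if in_stop else "journey", current))
    return segments
```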

2.3 Operations Identification

Different businesses can have different patterns of operations, such as start and end times of daily work shifts or vehicle parking habits, be it at the depot or at the driver's home. Our system is designed to recognise a single specific type of daily operation for the fleet vehicles, namely:

1. starting the work day by leaving from a known hotspot (a Home or a Depot);
2. serving all work stops without passing by any other known hotspot;
3. ending the work day in a known hotspot.

Other activities before or after the main working route, like driving from home to the depot in the morning, are recognised and ignored before optimising the route.

Recognition of the fleet operations starts by identifying the fleet work shifts, i.e. the daily periods of activity during which the fleet work stops are served. To this aim, we first identify a fleet-specific daily cut-off, as the hour of the day with the highest number of stopped vehicles during the observed period. Each work shift is then set to start at the beginning of the first journey after the daily cut-off and to end with the end of the latest daily journey.

Once work shifts are identified, we analyse the pattern of visits to the Home and Depot hotspots along the sequence of daily stops for each vehicle and work shift. We recognise operations for vehicles having both one of the following starting sequences and one of the following ending sequences (a sketch of this rule-based matching is given after the lists).

Starting sequences:
• starting from a Home;
• starting from a Home, stopping at most three times outside of any Home or Depot (supposed to be driver personal stops), then stopping at a Depot;
• starting from a Depot, or with a sequence of stops at multiple Depots.

Ending sequences:
• ending in a Home;
• stopping in a Depot, then stopping at most four times outside of any Home or Depot (supposed to be driver personal stops), then ending in a Home;
• ending in a Depot, or with a sequence of stops at multiple Depots.

For each work shift, we retain all vehicles that exhibit any combination of the starting and ending sequences and that, outside of starting and ending, never pass by any other Home or Depot and stop at least once, i.e. serve at least one work stop. The last hotspot of the starting sequence and the first hotspot of the ending sequence are considered, for each vehicle, as the start and end of the vehicle route, respectively, and all interim stops are classified as work stops to be optimised.
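The starting/ending sequences admit a compact rule-based sketch: encoding each shift as a string of stop labels is our own device, and the matcher below only approximates the rules (it cannot distinguish driver personal stops from work stops when a label sequence is ambiguous).

```python
import re

# Encode a shift's stop sequence as H (Home), D (Depot), W (other stop):
# the interim W's between the matched start and end are the work stops.
START = r"(H|HW{0,3}D|D+)"   # home | home, <=3 personal stops, depot | depot(s)
END = r"(H|DW{0,4}H|D+)"     # home | depot, <=4 personal stops, home | depot(s)
PATTERN = re.compile(rf"^{START}W+{END}$")   # W+ : at least one work stop

def recognised(labels):
    """labels: e.g. ['depot', 'work', 'work', 'home'] -> 'DWWH'."""
    code = "".join({"home": "H", "depot": "D", "work": "W"}[x] for x in labels)
    return bool(PATTERN.match(code))

assert recognised(["home", "work", "depot", "work", "work", "depot"])
assert not recognised(["depot", "depot", "home"])  # no work stop served
```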


2.4 Route Optimisation

The result of the previous step of the analysis is the re-partition of the observation period into work shifts and, for each work shift, the identification of the start and end points and of a set of work stops to be served by the fleet. Furthermore, all vehicles whose operations are recognised by the system in a work shift can be considered as available to the optimisation in that shift, with the further constraint of respecting the starting and ending hotspot of the vehicle in the optimised solution.

The VZC routing optimisation engine can handle several other constraints in the optimisation problem, such as vehicle capacities, vehicle- and driver-specific skills and work stop service time windows. In the current version of our system, though, we do not bound the vehicle capacity, avoid requiring any vehicle or driver skill, and set the service time windows of all work stops in a work shift as the start and end of the shift itself. We are aware that in a real-world optimisation scenario there are tight constraints on skills, capacity and time windows, and therefore our approach could over-estimate the potential improvement due to the optimisation of the routes: we currently cope with this issue by discarding those fleets for which our system estimates an implausibly large improvement, as further explained in the next section.

The optimisation criterion of the VZC routing algorithm can be chosen between the driven mileage and the number of working hours. A third criterion, solution cost, is derived by assigning a cost per driven mile and a cost per worked hour to the algorithm solution and then summing the two. For each work shift, the result of the optimisation is a set of vehicles, potentially smaller than the set of vehicles available in that work shift, and an assignment to each vehicle of a set of work stops and of a route to serve them all.

2.5 Return on Investment, Fleet Feasibility and Personalised Marketing Material

We estimate the return on investment (ROI) deriving from the adoption of routing software as an add-on to VZC Reveal as the average savings in the observation period due to route optimisation, compared to what had actually happened, in terms of either mileage, worked hours or cost. We consider a fleet as a prospect for upselling of VZC routing solutions according to several filters:

• number of active vehicles in the observation period greater than a given threshold;
• average number of active vehicles per work shift greater than a given threshold;
• average number of work stops per work shift greater than a given threshold;
• ROI in the observation period higher than a given threshold;
• ROI in the observation period lower than a given threshold.

The latter two filters are meant to capture fleets that show a sufficiently large improvement, but not too large a one, as the latter probably indicates that our system is ignoring some important constraints of the optimisation problem that are not easily detectable from GPS data alone, like narrow service time windows or a limited vehicle capacity for the type of goods to be delivered. A sketch of the resulting prospect filter is given below.

For fleets that pass all the filters, the system can generate personalised marketing material based on their driving data and estimated savings due to optimisation. An example of such material is given in Fig. 3.

some important constraints of the optimisation problem that are not easily detectable from GPS data alone, like narrow service time windows or a limited vehicle capacity for the type of goods to be delivered. For fleets that pass all the filters, the system can generate personalised marketing material based on their driving data and estimated savings due to optimisation. An example of such material is given in Fig. 3.

3 Experimental Results

To test the effectiveness of our system, we sampled 7200 customers with a number of active vehicle subscriptions to VZC Reveal between 5 and 50. For them we collected one month of data, totalling over 100 million GPS messages. Our system is designed to scale and is based on the Apache Spark™ [7] engine for cluster computing. For this experiment, we used a small cluster of Amazon Web Services (AWS) instances² with one master node of type r3.xlarge (4 virtual CPUs, 30.5 GB RAM) and five slave nodes of type r3.large (2 virtual CPUs, 15.25 GB RAM). Processing the whole dataset, excluding route optimisation with the VZC routing software, took 4 h and 45 min.

² https://aws.amazon.com/.

Table 1 Marketing email campaign results

  Batch                              Size    Opens   Clicks   Open rate (%)   Click-through rate (%)
  A: prospects, personalised email   1000    301     42       30.1            4.2
  B: prospects, standard email       1000    272     13       27.2            1.3
  C: non-prospects, standard email   1000    244     11       24.4            1.1

From the original set of 7200 accounts, we identified 2910 potential prospects for upselling (40.5%). To separately assess the effectiveness of our prospect selection procedure and of the personalised marketing material based on our analysis, we designed the following experiment. Two batches of 1000 accounts were randomly sampled from the prospects (batches A and B) and one batch of 1000 accounts from the set of non-prospects (batch C). A standard, non-personalised marketing email was sent to batches B and C, while batch A received a personalised marketing email with content similar to the one in Fig. 3. Both email types had the same header. To assess the effectiveness of the email campaigns, we used the two standard indicators open rate and click-through rate [8], defined as the fraction of opened emails and of clicked in-email links over the total, respectively. Absolute values and rates of opens and click-throughs, measured one month after the email campaign, are reported in Table 1.

As is clear from the results, prospects exhibited a significantly higher open rate than non-prospects (Pearson's χ² test p-value < 0.017), and personalised emails received a significantly higher click-through rate than standard emails (Pearson's χ² test p-value < 3 × 10⁻⁷). The results support the hypothesis that prospects are more interested in a routing software add-on and that the personalised marketing content, whose creation is not possible without the analyses accomplished by our system, is effective for upselling such an add-on.
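The two significance tests can be reproduced with a few lines of scipy. The pooling of batches below (A+B as "prospects" for the open-rate test, B+C as "standard emails" for the click-through test) is our reading of the comparisons, inferred from the reported significance levels.

```python
from scipy.stats import chi2_contingency

# Open-rate test: prospects (batches A+B) vs non-prospects (batch C).
opens = [[301 + 272, 2000 - (301 + 272)],   # prospects: opened / not opened
         [244, 1000 - 244]]                 # non-prospects
chi2, p, _, _ = chi2_contingency(opens)
print(f"open rate: chi2={chi2:.2f}, p={p:.4f}")       # p < 0.017

# Click-through test: personalised (A) vs standard emails (B+C).
clicks = [[42, 1000 - 42],                  # personalised: clicked / not
          [13 + 11, 2000 - (13 + 11)]]      # standard
chi2, p, _, _ = chi2_contingency(clicks)
print(f"click-through: chi2={chi2:.2f}, p={p:.2e}")   # p < 3e-7
```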

4 Conclusion

In this paper, we presented a system for mining the historical GPS data of a fleet, such as the data collected by VZC Reveal, to infer fleet operations. The system provides valuable insights on customer and fleet behaviour that lend themselves to several useful applications, such as identifying customers who can enjoy a healthy ROI, supporting the subsequent sales process.

The analysis pipeline of our system starts with the raw GPS messages and proceeds by building increasingly richer layers of information stemming from the previously inferred layers. By grouping nearby GPS engine-off stops we detect hotspots and classify them based on aggregated statistics of the stops. A finite state machine is used to reconstruct vehicle routes, as sequences of journeys and stops, starting from the raw GPS messages. Finally, vehicle routes and hotspots are exploited in a rule-based fashion to infer daily vehicle operations, such as the work stops and the start and end of the work routes.

The specific application presented in this paper exploits this information to assess the fleet feasibility as a customer of the Verizon Connect routing software, sold as an add-on to VZC customers. To this end, we process the fleet data for a given observation period and apply the VZC routing algorithm to optimise the daily work routes with the available vehicles. The return on investment due to the possible subscription to VZC routing is estimated as the saving in working hours and/or driven miles that derives from optimisation. For feasible prospects, the insights provided by our system can be synthesised in an effective personalised marketing message that can be used, for example, in an email marketing campaign.

To assess the effectiveness of our system in detecting good prospects and in extracting highly enriched information for use in marketing material, we ran an experimental email marketing campaign. The results of the marketing campaign confirmed the value of the prospect identification procedure and of the derived marketing material.

As future directions, it would be of interest to refine the categorization of vehicle stops according to their purpose, e.g. work-related vs non-work stops, with machine learning approaches similar to [9], and then consider only work-related stops in the route optimisation task. Another possible addition is to leverage GPS data to estimate vehicle class or size, and then allow work stops to be assigned only to vehicles of the same class or size in the optimisation process [10]. Finally, we plan to improve our ability to recognise fleet operations by detecting optimisation constraints, such as vehicle capacity or service time windows, from the historical data of the fleet.

References

1. Toth, P., Vigo, D.: Vehicle Routing: Problems, Methods, and Applications. SIAM (2014)
2. Sambo, F., Salti, S., Bravi, L., Simoncini, M., Taccari, L., Lori, A.: Integration of GPS and satellite images for detection and classification of fleet hotspots. In: 2017 IEEE 20th International Conference on Intelligent Transportation Systems (ITSC) (2017)
3. Lv, M., Chen, L., Xu, Z., Li, Y., Chen, G.: The discovery of personally semantic places based on trajectory data mining. Neurocomputing 173(3), 1142–1153 (2016)
4. Gingerich, K., Maoh, H., Anderson, W.: Classifying the purpose of stopped truck events: an application of entropy to GPS data. Transp. Res. Part C: Emerg. Technol. 64, 17–27 (2016)
5. Montini, L., Rieser-Schüssler, N., Horni, A., Axhausen, K.: Trip purpose identification from GPS tracks. Transp. Res. Record 2405(1), 16–23 (2014)
6. Breiman, L.: Random forests. Mach. Learn. 45(1), 5–32 (2001)
7. Shanahan, J., Dai, L.: Large scale distributed data science using Apache Spark. In: Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 2323–2324 (2015)
8. Bonfrer, A., Drèze, X.: Real-time evaluation of E-mail campaign performance. Market. Sci. 251–263 (2009)
9. Sarti, L., Bravi, L., Sambo, F., Taccari, L., Simoncini, M., Salti, S., Lori, A.: Stop purpose classification from GPS data of commercial vehicle fleets. In: 2017 IEEE International Conference on Data Mining Workshops (ICDMW), pp. 280–287 (2017)
10. Simoncini, M., Taccari, L., Sambo, F., Bravi, L., Salti, S., Lori, A.: Vehicle classification from low-frequency GPS data with recurrent neural networks. Transp. Res. Part C: Emerg. Technol. 91, 176–191 (2018)

Voronoi Recursive Binary Trees for the Optimization of Nonlinear Functionals Cristiano Cervellera, Danilo Macciò, and Francesco Rebora

Abstract We propose an algorithm for the approximate solution of general nonlinear functional optimization problems through recursive binary Voronoi tree models. Unlike typical binary tree structures commonly employed for classification and regression problems, where splits are performed parallel to the coordinate axes, here the splits are based on Voronoi cells defined by a pair of centroids. Models of this kind are particularly suited to functional optimization, where the optimal solution function can easily be discontinuous even for very smooth cost functionals. In fact, the flexible nature of Voronoi recursive trees allows the model to adapt very well to possible discontinuities. In order to improve efficiency, accuracy and robustness, the proposed algorithm exploits randomization and the ensemble paradigm. To this purpose, an ad hoc aggregation scheme is proposed. Simulation tests involving various test problems, including the optimal control of a crane-like system, are presented, showing how the proposed algorithm can cope well with discontinuous optimal solutions and outperform trees based on the standard split scheme.

1 Introduction

Classes of approximating functions based on recursive partitioning of the input space, whose most popular instances are binary decision trees, are routinely employed in machine learning problems such as regression, classification and density estimation [2]. The reason for their popularity is due to many factors, like their simple application, robustness, a consolidated literature, software availability and a performance that, for some applications, proves to be competitive with much more sophisticated and computationally heavy methods, such as those belonging to the category of deep learning [8].

Despite this popularity in the aforementioned standard problems, little research has been performed on the application of recursive tree methods to the solution of general functional optimization problems, in which the cost is arbitrary and not limited to the typical loss functions used for regression or classification. In particular, we consider the problem of minimizing the expected value of the functional
$$u^* = \arg\min_{u \in \mathcal{U}} J(u) = \mathbb{E}_{x \sim p}\,[L(x, u(x))] \qquad (1)$$

where $x \in X \subset \mathbb{R}^n$, $u : X \to \mathbb{R}^m$, $\mathcal{U}$ is a suitable class of functions, and the expectation over $x$ is defined through a density $p(x)$ whose support is $X$. The main difference with respect to typical regression or classification problems is that here there is no label associated with the input $x$. The need to solve integral minimization problems of this kind arises in many optimization contexts [16] and finds application in Operations Research, Optimal Control and Statistics (see, e.g., dynamic programming [1], maximum likelihood estimation [14], model predictive control [6], etc.).

Functional optimization problems are often addressed through techniques typical of machine learning, such as neural networks and linear combinations of basis functions (see, e.g., [5, 9, 10]). A significant challenge in this kind of problem is due to the fact that the optimal solution $u^*$ can easily be discontinuous over $X$ even when the functional $L$ is very smooth. Thus, classes of functions that are structurally based on a partitioning of the input space are an attractive option for the approximate solution of (1), since such models show a better capability to adapt their structure to the possible discontinuities of $u^*$, naturally providing discontinuous outputs over $X$.

This motivates the investigation of binary recursive trees for the direct solution of (1). In particular, in this work we propose an algorithm based on greedy recursive splits, in line with the typical construction schemes of binary trees, aimed at minimizing $L$ and providing an approximate optimal solution $u^*$ for all the points in $X$. The algorithm is based on randomization and quasi-random sampling of the input space, which makes the construction of the optimizing tree computationally manageable. Furthermore, in order to adapt with better accuracy to the possibly complex shape of discontinuous optimal solutions, we consider tree architectures with cells different from those generally adopted in the literature, which have "cuts" parallel to the axes. In particular, we propose a structure whose basic tree elements are binary Voronoi cells, whose centroids have been appropriately chosen in a greedy manner at each iteration of the algorithm. The resulting trees have splits that are not bound to be parallel to the axes, which greatly improves the flexibility and adaptability of the models. At the same time, due to the binary nature of the splits, the tree can be navigated without a significant increase in computational burden, since at each level we only have to compare two distances between points. This kind of tree structure has recently proved to be successful in other data-driven contexts, such as distribution-preserving random sampling and generation [7].


Finally, since the proposed algorithm is based on randomization, we also consider the use of the models in ensemble fashion for improved accuracy and robustness, eventually leading to a forest of optimizing trees. We remark that ensembles of trees can in principle be employed to minimize a generic functional through the gradient boosting paradigm [11, Sect. 10.10]. Yet, in the literature this technique is typically employed for regression and robust classification problems, and it is not commonly applied to general functional optimization with arbitrary cost functions. In fact, being based on gradient descent, the method can be slow to converge for generic complex functionals. Furthermore, penalty functions must be employed to enforce possible constraints on the solution, which is cumbersome and requires painful trial-and-error procedures. An example exploiting ensembles of neural networks is proposed in [5]. Here we propose an ensemble aggregation method, to be applied to the solution of the general problem (1), that exploits a few weak optimized trees, in which diversity is obtained by employing different input samplings and randomized splits. Then, in order to adapt the ensemble to the possibly discontinuous nature of the optimal solution, we propose an aggregation scheme different from the typical averaging approach of forests with continuous outputs.

To showcase the application of the proposed method, simulation tests have been carried out involving different test functionals and an example of application to an optimal control problem. The tests show that the proposed Voronoi tree models yield better performance than standard trees with cuts parallel to the axes, being able to approximate with greater accuracy optimal functions characterized by discontinuous outputs. The tests also show how ensembles of optimizing trees with the proposed aggregation scheme increase the accuracy of the approximate solution.

2 Description of the Voronoi Optimizing Tree Method

In order to address the original problem defined in Sect. 1, we estimate the cost in integral form using an empirical approximation based on a sample $X_N = \{x_i\}_{i=1}^{N}$, with $x_i \in X$, distributed according to p:

$$u^* = \arg\min_{u \in U} \tilde{J}(u) = \frac{1}{N}\sum_{i=1}^{N} L(x_i, u(x_i)).$$

Unless there are reasons to focus on specific input zones, we consider the case in which p is the uniform distribution. In this case, we can resort to a sampling X_N of the input space based on low-discrepancy sequences, which have proved to yield good accuracy in learning and decision-making problems [3, 4].

Formally, a tree can be defined as a piecewise constant function over subsets $V_t$, $t = 1, \ldots, T$, such that $X = \bigcup_{t=1}^{T} V_t$. The reason why the partition is called a "tree" is that the leaves, that is, the subsets $V_t$, are added in a recursive way to form, conceptually, a tree structure. If we denote by $\mathcal{V} = \{V_t\}_{t=1}^{T}$ the set of leaves and by $\mathcal{Y} = \{u_t\}_{t=1}^{T}$ the corresponding set of values, a tree model is defined as the function

$$f(\mathcal{V}, \mathcal{Y})(x) = \sum_{t=1}^{T} u_t\, I_{V_t}(x),$$

where $I_{V_t}$ is the indicator function of the set $V_t$, defined as $I_{V_t}(x) = 1$ if $x \in V_t$ and $I_{V_t}(x) = 0$ otherwise.


Algorithm 1 VOT
Require: X_N: the set of N input points; the parameter δ; the standard deviation σ.
1: Initialization: set the family of nodes and of split candidate nodes to the input space, V = V* = X; set the set of optimal values to the empty set, Y = ∅.
2: while V* ≠ ∅ do
3:   Process the next unprocessed node V_j ∈ V*.
4:   for k = 1, …, K do
5:     Extract randomly two points x_k^(l), x_k^(r) from X_j.
6:     Update x_k^(l) ← x_k^(l) + n^(l), where n^(l) ∼ N(0, σ).
7:     Update x_k^(r) ← x_k^(r) + n^(r), where n^(r) ∼ N(0, σ).
8:     Consider the Voronoi cells V_k^(l), V_k^(r) ⊂ V_j corresponding to x_k^(l), x_k^(r).
9:     Set g_k(u_1, u_2) ← (1/N_j) ( Σ_{x_i ∈ V_k^(l)} L(x_i, u_1) + Σ_{x_i ∈ V_k^(r)} L(x_i, u_2) ).
10:    Evaluate (u_1^(k), u_2^(k)) ← arg min_{u_1, u_2} g_k(u_1, u_2).
11:  end for
12:  Evaluate k* ← arg min_{1≤k≤K} g_k(u_1^(k), u_2^(k)).
13:  Update Y ← Y ∪ {(u_1^(k*), u_2^(k*))}.
14:  Update V ← V ∪ {V_{k*}^(l), V_{k*}^(r)} \ {V_j}.
15:  Update V* ← V* \ {V_j}.
16:  if A(X_j, V_{k*}^(l)) > δ · N then
17:    Update V* ← V* ∪ {V_{k*}^(l)}.
18:  end if
19:  if A(X_j, V_{k*}^(r)) > δ · N then
20:    Update V* ← V* ∪ {V_{k*}^(r)}.
21:  end if
22: end while
23: Output: f(V, Y).


The proposed method, called Voronoi optimizing tree (VOT), involves approximating the target function with a tree model that, unlike classic ones that perform splits parallel to the axes, makes use of binary Voronoi cells for the definition of the subsets $V_t$. In particular, at every iteration a greedy procedure is employed to identify a suitable splitting of one of the leaves belonging to the current tree. More specifically, if $V_t$ is the leaf to be split, we consider two points $x_t^{(l)}$ and $x_t^{(r)}$ contained within it and perform a partitioning into two cells $V_t^{(l)}$ and $V_t^{(r)}$ defined as


$$V_t^{(l)} = \{x \in V_t : \|x - x_t^{(l)}\| \le \|x - x_t^{(r)}\|\},$$
$$V_t^{(r)} = \{x \in V_t : \|x - x_t^{(r)}\| \le \|x - x_t^{(l)}\|\}.$$

The VOT procedure, described in Algorithm 1, starts by considering the entire domain X as the first node to split and, at every iteration, updates the set of leaves V and the set of split candidate nodes V* until the latter is empty (line 1). At each iteration we consider a node $V_j$ belonging to V* and perform a splitting by considering a set of K pairs randomly drawn among the points of $X_j$, choosing the one with the lowest associated cost within the leaf $V_j$. More specifically, for a node $V_j$ denote by $X_j$ the set of points of $X_N$ contained within it, by $x_k^{(l)}$ and $x_k^{(r)}$ the k-th pair of randomly extracted centroids, and by $V_k^{(l)}$ and $V_k^{(r)}$ the corresponding Voronoi cells inside $V_j$. In order to introduce more variability in the candidate splits, we add to the centroids drawn from $X_j$ a noise term, evaluated as a realization of a normal random variable with zero mean and variance σ², that is, n ∼ N(0, σ) (lines 5–7). Then, for every k = 1, …, K, we associate the function

$$g_k(u_1, u_2) = \frac{1}{N_j}\Big(\sum_{x_i \in V_k^{(l)}} L(x_i, u_1) + \sum_{x_i \in V_k^{(r)}} L(x_i, u_2)\Big),$$

and consider $(u_1^{(k)}, u_2^{(k)}) = \arg\min_{u_1, u_2} g_k(u_1, u_2)$ (lines 8–10).¹ The sets Y and V are then updated by adding, respectively, the points $(u_1^{(k^*)}, u_2^{(k^*)})$ and the corresponding Voronoi cells $V_{k^*}^{(l)}$ and $V_{k^*}^{(r)}$, where $k^* = \arg\min_{1 \le k \le K} g_k(u_1^{(k)}, u_2^{(k)})$, and removing $V_j$ from V (lines 12–14). Then we update the set V* by removing the node $V_j$ and by introducing each of the two cells $V_{k^*}^{(l)}$ and $V_{k^*}^{(r)}$ whose number of contained points is at least δ · N, that is, $A(X_j, V_{k^*}^{(l)}) > \delta N$ and $A(X_j, V_{k^*}^{(r)}) > \delta N$, respectively, where $A(X_j, V_{k^*}^{(l)})$ denotes the number of points of $X_j$ in the cell $V_{k^*}^{(l)}$ (lines 15–21).

¹ Note that computationally it is more convenient to compute $u_1^{(k)}$ and $u_2^{(k)}$ separately, as $u_1^{(k)} = \arg\min_{u_1} \sum_{x_i \in V_k^{(l)}} L(x_i, u_1)$ and $u_2^{(k)} = \arg\min_{u_2} \sum_{x_i \in V_k^{(r)}} L(x_i, u_2)$, since $g_k(u_1, u_2)$ is the sum of two functions depending only on $u_1$ and $u_2$, respectively.

We remark that the choice of a finite number K of random candidate splits is introduced for the sake of keeping the computational burden small. This is especially relevant in view of the use of Voronoi optimizing trees in ensemble fashion, as will be described in Sect. 3. However, if computational burden is not an issue, the optimal pair of split points $(x^{(l*)}, x^{(r*)})$ may be obtained by an optimization routine, jointly with $(u_1^*, u_2^*)$.

In order to navigate the VOT and generate u*(x) for a given x, at each level we choose the next child by comparing x with the two centroids defining the current binary Voronoi partition. Thus, the output generation implies at most 2 · W_max distance comparisons, where W_max is the maximum depth of the VOT, which makes the method computationally light also in high dimensions.
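To make the split selection and navigation steps concrete, here is a minimal Python sketch (our own illustration, not the authors' code). It assumes a scalar control u and a cost L(points, u) that is vectorized over the sample points; the nested-tuple tree representation and all names are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def best_voronoi_split(X_j, L, K=10, sigma=0.001, rng=np.random.default_rng(0)):
    """Greedy binary Voronoi split of one leaf: lines 4-12 of Algorithm 1."""
    best = None
    for _ in range(K):
        # lines 5-7: draw two centroids from the leaf sample and perturb them
        c_l, c_r = X_j[rng.choice(len(X_j), size=2, replace=False)]
        c_l = c_l + rng.normal(0.0, sigma, size=c_l.shape)
        c_r = c_r + rng.normal(0.0, sigma, size=c_r.shape)
        # line 8: binary Voronoi cells induced by the two centroids
        left = np.linalg.norm(X_j - c_l, axis=1) <= np.linalg.norm(X_j - c_r, axis=1)
        if left.all() or (~left).all():
            continue  # degenerate candidate, skip it
        # lines 9-10: u1 and u2 can be optimized separately (see footnote 1)
        u1 = minimize_scalar(lambda u: L(X_j[left], u).sum()).x
        u2 = minimize_scalar(lambda u: L(X_j[~left], u).sum()).x
        g = (L(X_j[left], u1).sum() + L(X_j[~left], u2).sum()) / len(X_j)
        if best is None or g < best[0]:
            best = (g, c_l, c_r, u1, u2, left)
    return best  # (cost, centroids, constant outputs, membership mask)

def navigate(tree, x):
    """Output generation: compare x with the two centroids at each level.
    A node is either a leaf value u_t or a tuple (c_l, c_r, left, right)."""
    while isinstance(tree, tuple):
        c_l, c_r, left, right = tree
        tree = left if np.linalg.norm(x - c_l) <= np.linalg.norm(x - c_r) else right
    return tree
```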



3 Ensembles of Voronoi Optimizing Trees

The VOT algorithm yields a piecewise-constant function that approximates the minimum of the functional (1) through a Voronoi tree. An inherent issue of such models is that the accuracy of the approximation provided by a constant output may vary significantly between adjacent zones of the input space, and this is more evident in the case of discontinuous optimal solutions. In this case, the minimum points $u_1^{(k)}$ and $u_2^{(k)}$ determined at lines 10–12 of Algorithm 1 would lead to a suboptimal approximation of the objective function. In order to cope with this phenomenon, we propose to create an ensemble of approximate functions and, at each point x, exploit the output provided by many VOTs.

As discussed in Sect. 1, methods based on ensembles of prediction models have widely proved their estimation capabilities in classical learning problems, especially when the estimator is a regression or classification tree. In typical learning applications, the most popular ensemble aggregation scheme when the output is continuous is mean aggregation, according to which the final output is the (possibly weighted) average of the outputs of the single ensemble elements. In our functional optimization context the mean aggregation strategy is not viable since, in the presence of discontinuous outputs, the average of the VOT outputs may be very far from the true values in zones close to discontinuity boundaries. Thus, we propose to employ an aggregation strategy based on the minimum of the single element outputs, exploiting the best available solution in each zone of the input space. More formally, we generate Q VOTs $f_q(\mathcal{V}_q, \mathcal{Y}_q)$, $q = 1, \ldots, Q$, through Algorithm 1 and, for a given x, we set

$$\hat{u}(x) = \arg\min_{1 \le q \le Q} L\big(x, f_q(\mathcal{V}_q, \mathcal{Y}_q)(x)\big). \qquad (2)$$

It is known that the advantages of an ensemble increase if its various elements are sufficiently diverse (see, e.g., [15] for a detailed discussion of the various measures of ensemble diversity). In our case the diversity is inherently obtained through the randomization introduced in various steps of Algorithm 1. This makes it possible to obtain accurate approximations of the optimal function through ensembles of even a few VOTs with a limited number of cells, as shown by the results presented in Sect. 4.
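A minimal Python sketch of the min-aggregation (2), reusing the hypothetical navigate function sketched above; all names are illustrative:

```python
import numpy as np

def ensemble_output(x, trees, L):
    """Aggregation (2): among the Q VOT outputs f_q(V_q, Y_q)(x), return the
    one achieving the least cost L at the query point x."""
    outputs = [navigate(t, x) for t in trees]
    costs = [L(x, u) for u in outputs]
    return outputs[int(np.argmin(costs))]
```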

4 Experimental Results

In this section we present simulation results designed to showcase the proposed methodology in various functional optimization examples. The test functions have been devised to highlight the performance of the proposed method in problems where the optimal solution to be approximated presents discontinuities not parallel to the axes. In all the tests, we consider both VOTs (also in ensemble fashion) trained as described in Algorithm 1 and, for baseline comparison, classic trees obtained


with cuts parallel to the axes in place of the binary Voronoi splitting scheme. In the classic tree case, the construction algorithm remains the same as in Algorithm 1, but the candidate splits at lines 5–8 are now defined by randomly sampled pairs (i, s_i), where 1 ≤ i ≤ n is the component to split and s_i is the split point value, sampled within the range of the node values for component i. Notice that for all the tests we scale the input state between 0 and 1, so that the sample X_N always lies in the unit hypercube. To this purpose, in all the tests the low-discrepancy Sobol' sequence [13] was employed to generate X_N. All the tests have been run on a 2.7 GHz Intel Core i7 with 16 GB of RAM and coded in Python, using functions from the numpy and scipy packages where needed.

To give an idea of the kind of approximation that can be provided by Voronoi tree optimizing models, we first consider a parametric programming problem involving the following 2-dimensional test function for x₁, x₂ ∈ [0, 1]:

$$L(x, u) = \frac{1}{4} u^4 + \frac{1}{3}\big((1 - x_1^4)\sqrt{x_2} - \cos(x_2)\big)\, u^3 - \frac{1}{2}(1 - x_1^4)\sqrt{x_2}\,\cos(x_2)\, u^2.$$

The optimal solution can be computed explicitly, and has the following form:

$$u^*(x_1, x_2) = \begin{cases} -(1 - x_1^4)\sqrt{x_2}, & \text{if } L\big(x, -(1 - x_1^4)\sqrt{x_2}\big) < L\big(x, \cos(x_2)\big) \\ \cos(x_2), & \text{otherwise.} \end{cases}$$

The function u* is discontinuous, despite L being smooth. In Fig. 1 we illustrate the shape of the optimal solution, together with a Voronoi optimizing tree trained through Algorithm 1 with only 4 cells. For comparison, a standard tree with the same number of nodes and cuts parallel to the axes has also been trained as described above. From the figure we can notice that the VOT, even with a very limited number of nodes, can adapt reasonably well to the discontinuous front of u*, while the standard tree provides only a very rough approximation. In fact, as a quantitative evaluation, the mean absolute error between the true optimal solution and the VOT one is equal to 0.09, while the standard tree yields a value of 0.14.

Fig. 1 True optimal solution of the 2-dimensional example (left) and approximate solution through optimizing trees with only 4 cells: Voronoi (center) and standard (right) split scheme. All trees trained with N = 2000 points and K = 200 random candidate splits
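For reference, the 2-dimensional test functional and its closed-form minimizer can be coded in a few lines of numpy; a sketch of ours, not the authors' test code:

```python
import numpy as np

def L2d(x1, x2, u):
    """2-dimensional test functional; its u-derivative factors as u(u + a)(u - b)."""
    a = (1.0 - x1**4) * np.sqrt(x2)
    b = np.cos(x2)
    return 0.25 * u**4 + (a - b) * u**3 / 3.0 - 0.5 * a * b * u**2

def u_star(x1, x2):
    """Closed-form optimal solution: the cheaper of the two nonzero stationary points."""
    a = (1.0 - x1**4) * np.sqrt(x2)
    b = np.cos(x2)
    return np.where(L2d(x1, x2, -a) < L2d(x1, x2, b), -a, b)
```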


Fig. 2 Results for the 6-dimensional test problem—error with respect to the optimal functional value

For the next test we consider the following 6-dimensional parametric programming problem:

$$L(x, y) = -\frac{x_1}{x_3\sqrt{2\pi}} \exp\left(-\frac{(y - \sin x_2)^2}{2 x_3^2}\right) - \frac{x_4}{x_6\sqrt{2\pi}} \exp\left(-\frac{(y - 5 - \tan x_5)^2}{2 x_6^2}\right),$$

with $x_1, x_4 \in [0, 2]$, $x_2, x_5 \in [-1, 1]$, $x_3, x_6 \in [0.5, 1.5]$, $y \in [-5, 10]$.

Here, the (once again, discontinuous) optimal solution can be approximated very closely² by the following expression:

$$u^*(x) \simeq \begin{cases} \sin x_2, & \text{if } L(x, \sin x_2) < L(x, 5 + \tan x_5) \\ 5 + \tan x_5, & \text{otherwise.} \end{cases}$$

The aim of the tests is to evaluate both the performance of Voronoi optimizing trees and the advantages of optimizing trees used in ensemble fashion. To this purpose, 100 different Voronoi optimizing trees and 100 optimizing trees with standard splits have been trained to approximate u*, for increasing values of the maximum allowed number P* of nodes in the tree. For these tests we employed, for both kinds of splitting scheme, N = 1000 sampling points and K = 10 random split candidates. For the Voronoi trees we used σ = 0.001 for the noise in Algorithm 1. Then, we also considered 100 different ensembles of Q = 5 optimizing trees with aggregation as in (2), for both splitting schemes, trained with the same parameters.

In Fig. 2 we show the boxplots of the mean absolute error (MAE) between the model output and the optimal value of L, computed over 1000 randomly sampled points in the input domain, for the 100 different optimizing trees and ensembles. In the figure, 'single std' and 'single Vor' denote the single trees with standard and Voronoi splitting scheme, respectively, while 'ens std' and 'ens Vor' denote the ensembles. Figure 3 reports the same kind of results for the MAE with respect to the optimal minimizer u*. From the figures we can clearly see that the performance of even a small ensemble (Q = 5) of weakly optimized trees (with K equal to only 10) is significantly better than the one provided by a single tree. Furthermore, we can see that the Voronoi structure consistently yields more accurate results than standard splits, both regarding the actual cost values and the estimation of the optimal functional solution u*.

² In our tests, the maximum difference over X between the solution computed by a nonlinear solver and u* turned out to be of the order of 10⁻⁷.


Fig. 3 Results for the 6-dimensional test problem—error with respect to the optimal solution

To give an idea of the computational times, training a VOT for this 6-dimensional test functional with N = 1000 sampling points, P* = 64 maximum cells and K = 100 candidate splits takes about 31 s in our implementation, while a tree with the standard split scheme and the same parameters takes about 26 s.

As a further example, we consider an optimal control problem, namely the control of a sliding trolley with a mass attached by means of a cable, which has to be moved and stopped; its dynamics are at the basis of many real-world tasks such as, e.g., crane control. The mass M of the sliding trolley is equal to 100 kg, while the cable has length l = 2 m. The state vector is 5-dimensional: x₁ is the position of the trolley, x₂ the velocity, x₃ the angle of the cable, x₄ the angular velocity and x₅ the mass of the attached object to move. The control u is the horizontal force applied to the trolley (see [12] for a more detailed description of the system). The aim of the control is to bring the trolley to the 0 position and stop it; therefore the cost functional has the form L(x, u) = c_x · (x₊²)ᵀ + c_u · |u|, where x₊ = f(x, u) is the new state after running the system state equation recursively for 50 stages with sampling time equal to 0.01 s, during which the control u is kept constant, and x₊² denotes the componentwise square. The vector c_x in the cost is equal to [1000, 1, 2, 1, 0], while c_u = 5 · 10⁻³. For the tests, the bounds for the 5-dimensional input space have been taken equal to ([−1, −2, −.5, −1.5, 200], [1, 2, .5, 1.5, 300]), while u is constrained to the [−500, 500] range.

We trained 100 different instances of ensembles of optimizing trees with aggregation over the minimum, for an increasing number Q of ensemble elements, both with Voronoi and standard split schemes. Each single tree element was trained with N = 2000 training points, P* = 16 maximum cells, K = 20 random split candidates, and σ = 0.0001 for the Voronoi trees. A randomly sampled set of 100 test points has been drawn in the input domain, and for each point the optimal control u* has been computed numerically by applying the minimize_scalar optimization routine from the scipy Python package. Figure 4 reports the boxplots of the MAE between the optimal cost obtained by applying the forest output and the one obtained by u* (left), and the MAE between the forest output and the optimal u* (center), for the 100 different ensemble instances. Once again, it turns out that the Voronoi structure outperforms the standard split scheme. For further reference, the figure (right) also reports an example of the trajectory, starting from a random state within the input ranges, of the sliding mass position


(first component of x), controlled at each stage t by u*(x_t) and by the approximation û(x_t) provided by a Voronoi model with Q = 32, showing that the approximate myopic solution is actually able to provide a satisfactory control action.

Fig. 4 Results for the myopic control test problem. Left: error with respect to the optimal cost. Center: error with respect to the optimal control. Right: example of behavior of a VOT ensemble solution (trolley position x1)
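The myopic optimum used for comparison can be computed, as in the tests, with scipy's minimize_scalar routine. A hedged sketch follows, in which simulate is a placeholder for the 50-stage trolley dynamics of [12] (not reproduced here) and all names are illustrative:

```python
import numpy as np
from scipy.optimize import minimize_scalar

c_x = np.array([1000.0, 1.0, 2.0, 1.0, 0.0])
c_u = 5e-3

def myopic_cost(x, u, simulate):
    """L(x, u) = c_x . (x_+)^2 + c_u |u|, with x_+ the state after 50 stages."""
    x_plus = simulate(x, u)          # placeholder for the system dynamics
    return float(c_x @ x_plus**2) + c_u * abs(u)

def optimal_control(x, simulate):
    """Numerical myopic optimum over the admissible control range."""
    res = minimize_scalar(lambda u: myopic_cost(x, u, simulate),
                          bounds=(-500.0, 500.0), method="bounded")
    return res.x
```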

References

1. Bertsekas, D.: Dynamic Programming and Optimal Control, vol. I. Athena Scientific, Belmont (2000)
2. Breiman, L., Friedman, J.H., Olshen, R.A., Stone, C.J.: Classification and Regression Trees. Wadsworth, Monterey, CA (1984)
3. Cervellera, C., Macciò, D.: Learning with kernel smoothing models and low-discrepancy sampling. IEEE Trans. Neural Netw. Learn. Syst. 24(3), 504–509 (2013)
4. Cervellera, C., Macciò, D.: Low-discrepancy points for deterministic assignment of hidden weights in extreme learning machines. IEEE Trans. Neural Netw. Learn. Syst. 27(4), 891–896 (2016)
5. Cervellera, C., Macciò, D.: Gradient boosting with extreme learning machines for the optimization of nonlinear functionals. In: Advances in Optimization and Decision Science for Society, Services and Enterprises, pp. 69–79. Springer International Publishing (2019)
6. Cervellera, C., Macciò, D., Parisini, T.: Learning robustly stabilizing explicit model predictive controllers: a non-regular sampling approach. IEEE Control Syst. Lett. 4(3), 737–742 (2020)
7. Cervellera, C., Macciò, D.: Voronoi tree models for distribution-preserving sampling and generation. Pattern Recognit. 97, 107002 (2020)
8. Chollet, F.: Deep Learning with Python. Manning (2017)
9. Gnecco, G., Sanguineti, M.: On a variational norm tailored to variable-basis approximation schemes. IEEE Trans. Inf. Theory 57(1), 549–558 (2011)
10. Gnecco, G., Sanguineti, M.: Neural approximations in discounted infinite-horizon stochastic optimal control problems. Eng. Appl. Artif. Intell. 74, 294–302 (2018)
11. Hastie, T., Tibshirani, R., Friedman, J.: The Elements of Statistical Learning: Data Mining, Inference and Prediction. Springer, New York (2009)
12. Macciò, D., Cervellera, C.: Local kernel learning for data-driven control of complex systems. Expert Syst. Appl. 39(18), 13399–13408 (2012)
13. Sobol', I.M.: The distribution of points in a cube and the approximate evaluation of integrals. Zh. Vychisl. Mat. i Mat. Fiz. 7, 784–802 (1967)
14. Wasserman, L.: All of Statistics: A Concise Course in Statistical Inference. Springer, New York (2010)
15. Zhou, Z.H.: Ensemble Methods: Foundations and Algorithms. Chapman & Hall/CRC (2012)
16. Zoppoli, R., Sanguineti, M., Gnecco, G., Parisini, T.: Neural Approximations for Optimal Control and Decision. Springer, Cham (2020)

Mathematical Programming and Machine Learning for a Task Allocation Game

Alberto Ceselli and Elia Togni

Abstract We consider a combinatorial game proposed in the literature, motivated by the rising paradigm of crowdworking. It consists in assigning tasks to worker agents whose behaviour details are unknown. The game proceeds in rounds; at the end of each of them, the player is given feedback on the effect of the choices that have been made, including an overall score and a reputation value obtained by each worker agent. The game is meant to be solved by pure player intuition. We analyze instead the effect of using quantitative models. We evaluate the effectiveness of regression models, trained round by round, in predicting worker agent performance. We also experiment with suggesting the player's choices through optimization models, in which the regressors are re-encoded as parts of a mathematical program and merged into a generalized assignment formulation. We present a computational evaluation on a dataset collected from a real online gaming system.

1 Introduction

Crowdworking is an emerging paradigm, which is challenging the labor market [1]. In its essence, employers ask for the completion of specific tasks by publishing their description on an open platform. Freelance experts can then apply, proposing themselves for the completion of these tasks at a price. Employers can then assign each task to one expert (or even to more than one, to minimize risks), possibly through a negotiation phase about job effort and price. Online services have therefore flourished, to meet demand and offer on a massive global scale.

To reduce the complexity of task assignment in such a scenario, these online services implement automatic matching. Pre-defined policies are normally


used, like "first offer" (immediately assign the task once an offer is made), "cheapest offer" (assign each task to the expert asking the lowest price), "best quality" (assign each task to the expert having the highest rank among applicants), or suitable combinations of them. These policies, which work one task at a time, do not provide specific guarantees, neither in terms of global cost minimization, nor quality maximization, nor effort balancing over experts. Unfortunately, when more than a single task is considered at once, a combinatorial problem arises which is further complicated by its nondeterministic nature: the performance of each expert on each specific task is a priori unknown.

In fact, crowdworking games are used even in procedures like personnel selection, to assess applicants' skills in problem solving. It is the case of Agile Manager [2], an online platform proposing the following game. A set of tasks is given, each having a value, a difficulty level, a required effort and a boolean flag 'deadline'; these data are made available to the player. A set of worker agents is given, each having a certain high quality output probability, a maximum productivity limit, a stamina level and a backlog of tasks. These data (or part of them) might however be unknown to the player. Only an aggregated performance indicator is always given, which intuitively encodes a reputation of the worker agent. The logic used by the worker agents to schedule their assigned tasks is also unknown to the player.

The game proceeds in rounds: the player is given the set of tasks with all their data, and part of the data of each worker agent including reputation, and is asked to assign tasks to the worker agents. Then a simulation is carried out: each worker agent might complete all or a subset of its assigned tasks, including those in backlog, with either high or low quality. In fact, the worker agent's maximum productivity or stamina might not be enough to complete all tasks within the time limit of one round. Uncompleted tasks flagged as 'deadline' are lost; the others are added to the agent backlog. The simulation logic is kept unknown to the player. After each round the player gets a score, depending on which tasks have been completed and which have been completed with high quality output. The player also gets new reputation values for each worker agent, depending on how the agent has performed during the round. The final player score is the sum of the scores over all the rounds of the match. We report that the online platform also outputs a set of indices concerning user experience, which are however not relevant for our study.

In this paper we carry out the following computational evaluation. First, we analyze how regression models perform in forecasting the outcome of allocating a task subset to a specific agent. Second, we encode these models as terms of a mathematical programming model, which aims at maximizing the player score in each round. We analyze the behaviour of our models exploiting a dataset of logs of real matches, made available in [3]. Our interest in the problem is twofold: first, it is a benchmark for the integration of data-driven (regression) models in a mathematical programming framework, which is an emerging trend in optimization [4] and a line of research we are pursuing [5]; second, it gives an insight into the possible application of optimization models for automatic matching in massive online crowdworking services.


In Sect. 2 we formalize the problem and we introduce our models, in Sect. 3 we report and discuss our experiments, and in Sect. 4 we collect some conclusions.

2 Models

Let I be a set of tasks. Each task i ∈ I is described by a value v_i, a difficulty d_i, an effort e_i and a deadline flag u_i. Let J be a set of Worker Agents (WA). Each WA j ∈ J is described by a maximum productivity level m_j, a high quality output probability p_j and a reputation r_j. An allocation of a subset Ī ⊆ I of tasks to a WA j produces a player score σ(Ī, j) and an agent reputation update ρ(Ī, j). The player objective is to find a partitioning of the tasks into one subset Ī_j for each WA j, in such a way that the total score $\sum_{j \in J} \sigma(\bar{I}_j, j)$ is maximized.

We have considered different modeling options, trying to balance the following four aspects: (a) the complexity of a model forecasting the player score; (b) the set of features it requires; (c) the complexity of its representation in terms of mathematical programming; (d) the complexity of the overall optimization model. In the following we describe only the modeling line which proved to be the most promising in our computational evaluation.

2.1 Score Forecasting Models

As discussed in the introduction, the rules producing a score and a reputation update from the task and WA details are unknown to the player. Additionally, the disaggregated score is not given to the player after the round either, but only the reputation update. Finally, the backlog of tasks of each WA also remains unknown. Nevertheless, we assume that (a) the score is linearly correlated with the reputation update, allowing us to use the latter as a proxy for the former, and (b) the reputation update is a function of task and worker agent data. In detail, we assume

$$\sigma(\bar{I}, j) \approx \alpha \cdot \rho(\bar{I}, j) \approx f(\{(v_i, d_i, e_i) : i \in \bar{I}\},\, m_j, p_j, r_j).$$

In such a way, we are able to estimate f() with a supervised learning approach. That is, we assume to be given a set of player matches T. For each t ∈ T the subset of allocated tasks Iᵗ ⊆ I and the corresponding WA jᵗ are given, together with the reputation update ρᵗ which was produced. We estimate f() by a multivariate regression approach, as a function mapping each predictor tuple

$$(v_1^t, \ldots, v_{|I|}^t,\, d_1^t, \ldots, d_{|I|}^t,\, e_1^t, \ldots, e_{|I|}^t,\, m_{j^t}, p_{j^t}, r_{j^t})$$

(where $v_i^t = v_i$ if $i \in I^t$ and 0 otherwise, $d_i^t$ and $e_i^t$ being defined similarly) to the response variable ρᵗ. Note that the deadline flags are not chosen to be part of the predictor variables.
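To fix ideas, a small sketch of how such predictor tuples can be assembled in Python; the containers tasks, wa and allocated are our own illustrative assumptions:

```python
import numpy as np

def predictor_tuple(I, tasks, wa, allocated):
    """Build (v_1..v_|I|, d_1..d_|I|, e_1..e_|I|, m_j, p_j, r_j) for one attempt:
    features of tasks not allocated in this round are set to zero.
    tasks[i] = (v_i, d_i, e_i); wa = (m_j, p_j, r_j)."""
    v = np.array([tasks[i][0] if i in allocated else 0.0 for i in I])
    d = np.array([tasks[i][1] if i in allocated else 0.0 for i in I])
    e = np.array([tasks[i][2] if i in allocated else 0.0 for i in I])
    return np.concatenate([v, d, e, np.asarray(wa, dtype=float)])
```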


In Sect. 3 we compare a few regression modeling techniques. One of them is linear regression, with a different regression model f_j() created for each WA. That is, we assume

$$\rho^t \approx \sum_{i \in I} (\alpha_{ij} v_i + \beta_{ij} d_i + \gamma_{ij} e_i) + \mu_j m_j + \pi_j p_j + \nu_j r_j + \epsilon_j,$$

and we find the parameters $\alpha_{ij}$, $\beta_{ij}$, $\gamma_{ij}$, $\mu_j$, $\pi_j$, $\nu_j$ and $\epsilon_j$ by supervised learning methods, that is, by training |J| regression models, each on the set of attempts which refer to WA j. This is, intuitively, a model with a simple structure but a large set of features.
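With scikit-learn, the library used in Sect. 3, this per-WA scheme reduces to a few lines; a sketch in which the attempts container is an illustrative assumption:

```python
from sklearn.linear_model import LinearRegression

def fit_wa_models(attempts):
    """Train one linear regression f_j per worker agent: attempts maps each
    WA j to (X_j, rho_j), the predictor tuples and observed reputation updates."""
    return {j: LinearRegression().fit(X_j, rho_j)
            for j, (X_j, rho_j) in attempts.items()}
```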

2.2 Optimization Models

Once each mapping f_j() from task and WA data to the score is determined, as described in the previous subsection, an optimization model can be formulated as follows:

$$\max \sum_{j \in J} \bigg[\sum_{i \in I} \big(\alpha_{ij} v_i + \beta_{ij} d_i + \gamma_{ij} e_i\big)\, x_{ij} + \mu_j m_j + \pi_j p_j + \nu_j r_j + \epsilon_j\bigg] \qquad (1)$$

$$\text{s.t.} \qquad \sum_{j \in J} x_{ij} = 1 \qquad \forall i \in I \qquad (2)$$

$$\sum_{i \in I : u_i = 1} e_i\, x_{ij} \le m_j \qquad \forall j \in J \qquad (3)$$

$$x_{ij} \in \{0, 1\} \qquad \forall i \in I,\ \forall j \in J \qquad (4)$$

It is a Generalized Assignment Problem (GAP) [6]. Variables x_{ij} take value 1 if task i is allocated to WA j, and 0 otherwise. Constraints (2) impose that each task is assigned to exactly one WA. Constraints (3) impose that the total effort of 'deadline' tasks assigned to the same WA does not exceed the WA maximum productivity. Terms (4) are integrality conditions. The objective function encodes the linear regression models: when x_{ij} = 0, the corresponding terms in the first double summation are 0, thus matching the definition of the predictor tuple terms. The second summation is a constant, which does not affect the optimization process. Model (1)–(4) therefore maximizes the overall reputation update, which we assume to imply the maximization of the score. We finally notice that constraints (3) tend to be automatically respected through an implicit violation penalty, which is learnt by the regression models: players who violate them lose tasks, thereby lowering the final WA reputation. A sketch of this model in python-mip follows.
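The sketch below (ours, not the authors' implementation) builds model (1)–(4) with python-mip and CBC; the coefficient containers alpha, beta, gamma, const and the data containers are illustrative assumptions:

```python
from mip import Model, BINARY, maximize, xsum

def build_round_mip(I, J, tasks, alpha, beta, gamma, e, m, u, const):
    """Model (1)-(4) with the learnt per-WA coefficients plugged in.
    tasks[i] = (v_i, d_i, e_i); const[j] collects the x-independent terms."""
    mdl = Model()
    x = {(i, j): mdl.add_var(var_type=BINARY) for i in I for j in J}
    # objective (1): predicted total reputation update
    mdl.objective = maximize(
        xsum((alpha[i][j] * tasks[i][0] + beta[i][j] * tasks[i][1]
              + gamma[i][j] * tasks[i][2]) * x[i, j] for i in I for j in J)
        + sum(const[j] for j in J))
    # (2): each task is assigned to exactly one WA
    for i in I:
        mdl += xsum(x[i, j] for j in J) == 1
    # (3): 'deadline' effort within the WA maximum productivity
    for j in J:
        mdl += xsum(e[i] * x[i, j] for i in I if u[i] == 1) <= m[j]
    mdl.optimize()
    return {(i, j): x[i, j].x for i in I for j in J}
```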


3 Computational Evaluation

In [3] a large set of logs has been collected, analyzed and shared for public research. They come from real players of the Agile Manager [2] online platform. More specifically, the set contains details of about 50000 play rounds in matches of either 5 or 10 rounds, performed by 1141 players. Matches are of various types: a selection of a few parameters is given to the player before the match starts. The most important is a difficulty setting, which can simply be high or low. In the low difficulty setting, the high quality output probability and the maximum productivity of WAs are directly proportional; in the high difficulty setting they are inversely proportional. We remark that no randomization is performed on matches: given the same parameter and level selection, the tasks required to be assigned in a specific round remain the same. The combinatorics of the game, however, makes it difficult for a human player to perform optimal choices, even by repeatedly playing the same level.

Data were not always consistent, as casual players probably tend to interrupt matches before their end. We found the data about the set of players with 30 or more active matches to be reliable, and we have therefore restricted our dataset to their matches in our analyses. Levels 5 and 6 (the highest ones) were the most representative, since levels 1–4 are likely played by casual players. Level 5 is set to low difficulty, level 6 to high difficulty. The final dataset consists of about 2000 rounds of matches at levels 5 and 6, comprising more than 200 matches.

Data processing was performed by scripts in Python 3.9. Regression models exploit the Python scikit-learn library, and optimization model implementations are based on the python-mip library, using CBC as a mathematical programming solver. Tests were run on a notebook equipped with a Ryzen 2400U 3.6 GHz CPU and 16 GB of RAM. On this setting, each regression training took only a few seconds (except SVR, whose training required a few minutes), and each MIP optimization only fractions of a second.

3.1 Quality of Regression Models

In the first experiment we trained and compared different regression models, namely linear regression (Linear), Ridge Regression (Ridge), Stochastic Gradient Descent (SGD) and Support Vector Regression (SVR). In Table 1 we report their average coefficient of determination R², when considered to model the reputation update over the same round of all matches. Unfortunately, the reputation update after round 10 is not included in the source data; therefore we could not estimate the quality of the models at round 10.

The outcome is clear, and similar for all regression models: they provide poor results in round 1, improve in round 2 and get very accurate from round 3 onwards. This is most probably due to the contribution given by better starting reputations to rely on.


Table 1 Score of different regression models

Round | Linear   | Ridge    | SGD      | SVR
------|----------|----------|----------|---------
1     | 0.239429 | 0.239429 | 0.218955 | –
2     | 0.796824 | 0.796824 | 0.792330 | 0.653779
3     | 0.886585 | 0.886585 | 0.883128 | 0.775409
4     | 0.927480 | 0.927479 | 0.925204 | 0.790667
5     | 0.886406 | 0.886403 | 0.884115 | 0.764091
6     | 0.914572 | 0.914569 | 0.913163 | 0.780832
7     | 0.931019 | 0.931016 | 0.928841 | 0.776808
8     | 0.946479 | 0.946476 | 0.945075 | 0.776849
9     | 0.953715 | 0.953712 | 0.952456 | 0.788343

(The SVR value for round 1 is missing, as its training did not converge.)

These very high scores reflect an important characteristic of our game: WAs are actually algorithms, whose logic is unknown but probably simple. As such, their behaviour tends to be very predictable by merely looking at the data; however, data dimensionality and the combinatorics of the game tend to make it hard to foresee for a human player. We finally report that SVR provides lower scores than the other models, and has convergence problems in the training of round 1. Since Linear, Ridge and SGD provide very similar scores, we have restricted our efforts to Linear, which is also the simplest to embed efficiently in a MIP.

We conjectured the starting reputation to be a key feature for prediction. Therefore we also repeated these experiments by either excluding it or keeping only that feature. In both cases we obtained much inferior performance. That is, starting reputation is indeed a key feature, but is not enough to be used alone for predictions. A similar experiment on the maximum WA workload, instead, showed it to contribute, but not to be central, to predictions.

3.2 Integration of Regression Models in Mixed Integer Programs

We then started to use the MIP models (1)–(4), integrating the linear regression models trained as explained above, to play matches. As already reported, we did not have direct access to the gaming software, but only to the historical logs produced by players. Therefore, we could not perform a full comparison between human play and autopilot MIP play over all rounds: the internal status of WAs after the rounds of a match can only be tracked on those match traces fully contained in the logs. That is, MIPs would produce solutions whose effect on the internal WA status is unknown. We could however check the behavior of MIPs in playing one round k, after rounds 1 to k − 1 have been played by a human. Hence, in a second experiment, we compared the performance of our models in improving the reputation of WAs (and thus the overall score), benchmarking them against those of human players.


Table 2 Average reputation, and variance, of WAs after each round at level 5, played by both humans and MIPs (predicted)

Round | Human mean | Human variance | MIP mean | MIP variance
------|------------|----------------|----------|-------------
1     | 6.593      | 0.181          | 1.781    | 0.000
2     | 6.780      | 0.166          | 5.201    | 0.106
3     | 6.638      | 0.252          | 6.038    | 0.120
4     | 6.471      | 0.337          | 6.347    | 0.205
5     | 6.343      | 0.429          | 5.998    | 0.251
6     | 6.243      | 0.523          | 6.083    | 0.343
7     | 6.161      | 0.580          | 6.040    | 0.430
8     | 6.107      | 0.609          | 6.030    | 0.514
9     | 6.083      | 0.679          | 5.973    | 0.542

Table 3 Average reputation, and variance, of WAs after each round at level 6, played by both humans and MIPs (predicted)

Round | Human mean | Human variance | MIP mean | MIP variance
------|------------|----------------|----------|-------------
1     | 6.439      | 0.205          | 1.513    | 0.000
2     | 6.468      | 0.209          | 5.471    | 0.117
3     | 6.218      | 0.245          | 5.764    | 0.148
4     | 5.961      | 0.310          | 5.780    | 0.198
5     | 5.793      | 0.364          | 5.221    | 0.223
6     | 5.683      | 0.429          | 5.315    | 0.284
7     | 5.576      | 0.474          | 5.308    | 0.352
8     | 5.490      | 0.531          | 5.285    | 0.406
9     | 5.413      | 0.538          | 5.262    | 0.468

In Tables 2 and 3 we report, for both humans (left columns) and MIP solutions (right columns), the average reputation obtained after each round and the corresponding variance. Table 2 refers to matches at level 5, while Table 3 refers to those at level 6. We remark that the reputation values reported for MIP solutions are values predicted by our regression model. That is, the main target of the experiment is to check whether, once different regression models are integrated into a single MIP for optimization, their predictions remain consistent. In this regard our experiments confirm our expectations. Level 6 is by design more difficult than level 5, required skill and effort being directly correlated; in fact, both human players and MIPs produce slightly lower reputations. Round 1 is troublesome for MIPs, producing poor reputation updates w.r.t. human players. The gap is filled from round 3 in matches of level 5 and from round 4 in those of level 6,


showing that the regression models need some additional data in the latter setting to start being fully performant. From that point on, MIP reputations are just a few percentage points lower than those of human players.

3.3 Comparing Optimization Models with Allocation Policies

Finally, in an effort to assess the potential of our methods in automatic decision support tools, we have compared the performance of our MIPs in terms of the score produced, benchmarking against the policies employed by human players. In detail, the Agile Manager platform itself asks the user which of the following policies was more similar to their play: random (no specific pattern of assignment of tasks to WAs), load balance (evenly distribute tasks to WAs by looking at task effort values), reputation based (try to distribute highly valued tasks first to WAs with high reputation). Such an indication is therefore purely qualitative and retrieved only as feedback from the player (e.g., by random it is not meant that tasks are allocated with uniform probabilities). We also remark that each round of each level is the same in terms of given tasks: while human players may non-deterministically play differently in different matches, our MIPs always deterministically play in the same way, namely the one producing the best predicted reputation score. To be as challenging as possible in our benchmarking, we have therefore considered, for both level 5 and level 6, the match of the best human player declaring to use a specific policy, that is, those yielding the highest scores in the logs. We report results for round 9 (resp. round 2) as a case in which the regression models have very high (resp. barely fair) accuracy. We simulated by manual computations the full solution structure obtained by both the best policy players and our MIP models, following the logic of the WAs described in [2, 3]. This produces a faithful representation of the outcome in terms of the value of tasks successfully completed. Our results are reported in Table 4, which includes in turn the match level, the reference assignment policy, and the corresponding value of successful tasks in both Round 2 and Round 9.


Table 4 Value of tasks successfully completed in MIP and human solutions

Level | Policy           | Round 2 | Round 9
------|------------------|---------|--------
5     | MIP              | 53/89   | 31/89
5     | Load balance     | 40/89   | 40/89
5     | Random           | 30/89   | 30/89
5     | Reputation based | 35/89   | 71/89
6     | MIP              | 49/89   | 25/89
6     | Load balance     | 40/89   | 40/89
6     | Random           | 30/89   | 30/89
6     | Reputation based | 44/89   | 24/89

As expected, reputation based policies tend to work best on level 5, while load balance ones work best on level 6. The reason is the following: when effort and difficulty are directly proportional (level 5), only a few skilled WAs can accomplish complex tasks, and these are at the same time those having more stamina. When task complexity and effort are inversely proportional (level 6), good allocations would need to assign only complex tasks to skilled WAs. However, if these tend to be also those having high reputation, reputation based policies tend to overload them, thereby fully consuming their stamina too early; in this way, complex tasks appearing in the last rounds cannot be completed.

At round 9, MIPs are not competitive with the best players, despite the accuracy of the inner regression models. This is particularly evident at level 5, where the structure of the specific instance makes a simple greedy assignment, in order of complexity and reputation, the best choice: players managing to reach high reputation values from earlier rounds find it easy to successfully complete the match. A simple reason making MIPs not competitive could also be a mismatch between score and reputation, which we are not able to assess from the data. At round 2, instead, the MIP produced better solutions than human players at both levels. A full explanation of this phenomenon certainly requires more investigation. We conjecture that, at round 9, players in successful matches had room to learn how to apply good heuristic rules, while at round 2 flatter reputation values and less experience in play make it harder to apply good choices. Data-driven MIP models, instead, can rely on the earlier learning from a larger set of players exploring different policies.

We also report a qualitative, but insightful, observation. MIP models tend to allocate tasks to the weakest WA that can carry them out; this helps in preserving the stamina of the more skilled WAs. For instance, at round 2 of level 5, the MIP allocates tasks 20, 22 and 26, having difficulty 0.2, to WA number 2, which is the one of minimum skill that can accomplish them. This ability to detect feasibility actually matches previous experiments on other combinatorial optimization problems [5].

4 Conclusions

We have introduced data-driven and mathematical programming models, in light of supporting decision making both in this specific game and in more general crowdworking applications. Our modeling choices show that such an approach of embedding regression models in a MIP, predicting a suitable performance proxy given instance data, is indeed promising. As expected, key steps in our work were (a) identifying such a proxy, (b) choosing a suitable feature space, and (c) keeping the MIP model simple. We found it useful to keep the regression model as simple as possible, too: linear ones produced good results in our experiments, while being at the same time easy to formulate for embedding in a MIP.


Concerning model testing, we have found our combination of regression and MIP to remain predictive even if one regression model is trained for each worker agent independently and all of them are subsequently combined in a single MIP: predicted reputations were just a few percentage points away from the real ones. Concerning the comparison of MIP solutions to those of skilled human players, we have found the latter to outperform the former in the latest rounds. Such good behaviour of skilled human players at the latest rounds confirms that the combinatorics of this specific game still allows good heuristics to be developed, which we believe to be a prerequisite also for our quantitative MIP approach to work well. However, the right heuristic to apply is highly instance dependent: this is where our data-driven regression models prove useful. At an early round of both levels we have analyzed, instead, we report MIP solutions allowing more completed tasks than all solutions produced by human players, showing our models to be promising in decision support for these types of problems.

References

1. Jäger, G., Zilian, L.S., Hofer, C., Füllsack, M.: Crowdworking: working with or against the crowd? J. Econ. Interact. Coord. 14, 761–788 (2019)
2. Yu, H., Yu, X., Lim, S.F., Lin, J., Shen, Z., Miao, C.: A multi-agent game for studying human decision-making. In: Proceedings of the 2014 International Conference on Autonomous Agents and Multi-agent Systems (2014)
3. Yu, H., Shen, Z., Miao, C., Leung, C., Chen, Y., Fauvel, S., Lin, J., Cui, L., Pan, Z., Yang, Q.: A dataset of human decision-making in teamwork management. Sci. Data 4 (2017)
4. Bengio, Y., Lodi, A., Prouvost, A.: Machine learning for combinatorial optimization: a methodological tour d'horizon. Eur. J. Oper. Res. 290(2) (2021)
5. Casazza, M., Ceselli, A.: Heuristic data-driven feasibility on integrated planning and scheduling. In: Proceedings of ODS – Advances in Optimization and Decision Science for Society, Services and Enterprises (2019)
6. Öncan, T.: A survey of the generalized assignment problem and its applications. INFOR 45 (2007)

Global Optimization (Continuous, Non-linear and Multiobjective Optimization)

Random Projections for Semidefinite Programming

Leo Liberti, Benedetto Manca, Antoine Oustry, and Pierre-Louis Poirion

Abstract Random projections can reduce the dimensionality of point sets while keeping approximate congruence. Applying random projections to optimization problems raises many theoretical and computational issues. Most of the theoretical issues in the application of random projections to conic programming were addressed in Liberti et al. (Linear Algebr. Appl. 626:204–220, 2021) [1]. This paper focuses on semidefinite programming.

1 Introduction

Semidefinite Programming (SDP) problems are linear optimization problems with a matrix variable X over the cone X ⪰ 0 of positive semidefinite (PSD) matrices:

$$\min\{\langle C, X\rangle \mid \forall i \le m\ \langle A_i, X\rangle = b_i \wedge X \succeq 0\}, \qquad (1)$$

where C and the A_i (for i ≤ m) are given n × n symmetric matrices, b is a given vector in Rᵐ, and $\langle M, N\rangle = \mathrm{tr}(M^\top N)$ is the Frobenius inner product.


A Random Projection (RP) is a k × m matrix T, with k ≤ m and density σ = |nonzeros|/(km), every component of which is sampled from a normal distribution Normal(0, 1/√(σk)) [2, Sect. 5.1]. RPs are interesting because of the Johnson–Lindenstrauss Lemma (JLL).

Lemma 1 ([3]) Given a finite set of vectors X = {x₁, …, x_n} ⊂ Rᵐ and an ε ∈ (0, 1), there exist k = O(ε⁻² ln n) ∈ N, an affine function φ(k), and constants C₁, C₂ > 0 such that:

$$\mathbb{P}\Big[\forall i < j \le n \quad (1-\varepsilon) \le \frac{\|T x_i - T x_j\|_2}{\|x_i - x_j\|_2} \le (1+\varepsilon)\Big] \ge 1 - C_2\, e^{-C_1 \varphi(k)}. \qquad (2)$$

Instead of applying RPs to point sets, in this paper we apply them to Mathematical Programming (MP) formulations (see also [1, 2, 4–7]). More precisely, when RPs are applied to the input data, they yield different types of projected MP formulations. For each such type, one must prove that the JLL implies approximate feasibility and approximate optimality, which is nontrivial: for example, decision variables in formulations may well represent uncountable sets of vectors, while the JLL only applies to finite sets. Moreover, the projected solution (i.e. a solution of the projected MP) is generally infeasible in the original MP. It was shown in [5] that Linear Programming (LP) min{cx | Ax = b ∧ x ≥ 0} yields a projected formulation min{cx | TAx = Tb ∧ x ≥ 0} with feasibility and optimality guarantees in probability. The results about LP were extended to other Conic Programming (CP) problems in [1], which also proposes a solution retrieval method for constructing an almost feasible original problem solution from the projected solution.

The present paper focuses on SDP. Since SDPs are CPs, all of the results in [1] apply. The new contributions of this paper build on the theoretical results of [1], and contribute a new computational study: we improve some error bounds from [1] (Proposition 1 and its corollaries); we propose a new solution retrieval algorithm; and we obtain computational results on random as well as structured instances.

The rest of this paper is organized as follows. Section 2 discusses some of the most interesting properties of RPs. Section 3 motivates and introduces the projected SDP formulation. In Sect. 4 we recall the theory underlying the application of RPs to SDPs [1]. Section 5 presents a computational study on random instances, SDP relaxations of the Distance Geometry Problem (DGP), and SDP relaxations of the Alternating Current Optimal Power Flow (ACOPF) problem. We find that RPs yield useful SDP approximations.
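The distance preservation stated by the JLL is easy to check empirically. A minimal numpy experiment of ours follows, assuming the O(·) coefficient equals 1 and reading the second parameter of Normal(0, 1/√(σk)) as a standard deviation, with a dense RP (density σ = 1):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, eps = 10_000, 100, 0.2
k = int(np.ceil(eps**-2 * np.log(n)))            # k = O(eps^-2 ln n)

X = rng.standard_normal((n, m))                  # n points in R^m
T = rng.normal(0.0, 1.0 / np.sqrt(k), (k, m))    # dense Gaussian RP (density 1)
TX = X @ T.T                                     # projected points T x_i

i, j = np.triu_indices(n, 1)                     # all pairs i < j
ratio = (np.linalg.norm(TX[i] - TX[j], axis=1)
         / np.linalg.norm(X[i] - X[j], axis=1))
print(ratio.min(), ratio.max())                  # typically within [1-eps, 1+eps]
```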

2 Understanding Random Projections

The JLL shows that the RP T guarantees approximate congruence between X and TX with arbitrarily high probability (wahp). This is surprising, because we think of congruence as a dimension-preserving property (e.g. rotations, translations).


Allowing an ε error, however, affords approximate congruence. Even so, this RP feature appears mysterious because it is only realized in high-dimensional spaces, and hence it is impossible to "visualize". This is a consequence of the fact that k = O(ε⁻² ln n): if we want ε to represent a low enough error, say 10%, this requires k ≥ ε⁻² = 100 (assuming the O(·) coefficient to be ≥ 1 [10]), meaning that the original space dimension m must be larger than 100. Note that the target dimension k is independent of the original dimension m, and only depends on n logarithmically. In other words, up to an ε error, the "natural" choice of the data dimensionality m (e.g. the number of pixels in an image) is immaterial: the "correct" dimension only depends on the number of image vectors being projected. This challenges our comprehension: after all, the reason why the original dimension is termed "natural" is that there is a natural, real-life interpretation of m as the most appropriate dimension choice. Another surprising consequence of the JLL (obtained by re-casting the JLL in terms of angles instead of distances) is that Rᵐ can host exponentially many "almost orthogonal vectors" [8, Sect. 7.3.1(7)].

A different result, that of distance instability [9], says that if Z, X₁, …, X_n are random vectors in Rᵐ sampled independently from any "reasonable" and "well-behaved" distribution (see [9] for details), then for any ε > 0 we have

$$\max_{j \le n} \|Z - X_j\|_2 \le (1 + \varepsilon)\, \min_{j \le n} \|Z - X_j\|_2$$

as m → ∞: in short, closest neighbors to Z are indistinguishable (up to ε) from any other point. Intuitively, this suggests that all Euclidean distances are more or less the same in high dimensions, and so the ε-approximate congruence guaranteed by the JLL would not be all that surprising. But the proof settings of the JLL and of the distance instability theorem are very different: the JLL applies to a given finite set of vectors, while distance instability applies to distributions. Numerous computational tests on clustering images with the k-means algorithm (carried out by one of the authors of this paper) show that the clusterings obtained on original and projected image vectors are similar. The tests exhibited in many JLL-related papers tell a similar story (see e.g. [10]). So, distance instability notwithstanding, and despite its apparent weirdness, the JLL definitely offers computational value, which we exploit in this paper by applying it to SDP formulations.

3 Projected SDP: Motivation and Formulation

SDPs are important because they are routinely used as convex relaxations of Quadratic Programs (QP) [2], Quadratically Constrained Programs (QCP) [8, Sect. 6.1.3], and Quadratically Constrained Quadratic Programs (QCQP) [11, Sect. 4.4]. While convex programming is not necessarily tractable (e.g. max clique, through the Motzkin–Straus formulation, reduces to linear optimization over the completely positive cone [12]), we can solve SDPs to any desired accuracy in polynomial time


using the interior point method [13]. Because SDP relaxations are generally tighter than LP relaxations obtained using standard techniques such as McCormick inequalities [14], there is an expectation that their use within spatial Branch-and-Bound (sBB) algorithms (e.g. [15]) is beneficial. Unfortunately, however, SDP solvers are much slower than their LP counterparts, so SDP use in sBB is not mainstream. Because of their remarkable size decrease, resulting in proportional CPU time savings during the solution phase, projected SDP formulations might partly address this issue.

In order to define a projected SDP, we proceed as follows. We note that, for n × n matrices M, N, $\langle M, N\rangle = \mathrm{vec}(M)^\top \mathrm{vec}(N)$, where, for any M, vec(M) is the vector in $\mathbb{R}^{n^2}$ constructed by stacking the columns of M. We can therefore rewrite $\langle A_i, X\rangle = b_i$ as $\mathrm{vec}(A_i)^\top \mathrm{vec}(X) = b_i$ for each i ≤ m. Let A be the m × n² matrix obtained by stacking the m rows $\mathrm{vec}(A_i)^\top$. We define A ⊙ X to be the vector $(\mathrm{vec}(A_i)^\top \mathrm{vec}(X) \mid i \le m) \in \mathbb{R}^m$. Next, we can reformulate Eq. (1) as follows:

$$\min\{\langle C, X\rangle \mid A \odot X = b \wedge X \succeq 0\}. \qquad (P) \qquad (3)$$

A projected SDP formulation based on a k × m RP T, where k = O(ε⁻² ln n), can finally be derived as follows:

$$\min\{\langle C, X\rangle \mid TA \odot X = Tb \wedge X \succeq 0\}. \qquad (TP) \qquad (4)$$
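Constructing the data of TP from that of P amounts to a vectorization, a stacking and two matrix products. A numpy sketch of ours, in which the Gaussian sampling with σ = 1 is an assumption:

```python
import numpy as np

def project_sdp_data(As, b, k, rng=np.random.default_rng(0)):
    """Build (TA, Tb) from the SDP data: stack the rows vec(A_i)^T into the
    m x n^2 matrix A, then aggregate the m constraints into k via the RP T."""
    A = np.stack([Ai.flatten(order="F") for Ai in As])   # vec = column stacking
    T = rng.normal(0.0, 1.0 / np.sqrt(k), (k, len(As)))
    return T @ A, T @ b          # each row of TA defines one constraint of TP
```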

4 Theory of Projected SDP

The solution X̄ of TP has some interesting approximation properties w.r.t. the original SDP P, under a boundedness condition ⟨1, X⟩ ≤ θ for some appropriate θ > 0. We note that the dual SDP to P is

$$\max\{y b \mid y A \preceq C\}, \qquad (D) \qquad (5)$$

where $yA = \sum_{i \le m} y_i A_i$, and yA ⪯ C means that C − yA is PSD.

Theorem 1 (Approximate feasibility) Let y* be the optimal solution of D, and $\|A\|_\dagger = \sup_{\|z\|_1 = 1} \|z A\|_F$. Then there are constants C₁, C₂ > 0 such that, for $0 < \varepsilon < \|y^*\|_2 (\|b\|_2 + \|A\|_\dagger)^{-1}$ and k = O(ε⁻² ln n), we have

$$\mathbb{P}\big[P \text{ is feasible} \Leftrightarrow TP \text{ is feasible}\big] \ge 1 - 2(m+1)\, C_2\, e^{-C_1 \varepsilon^2 k}.$$

Proving that TP is feasible whenever P is feasible simply follows by linearity of T. The probabilistic statement only refers to proving that TP is infeasible wahp whenever P is infeasible [1, Thm. 3.2].

Theorem 2 (Approximate optimality) With the above notation, and assuming strong SDP duality holds,


$$\mathbb{P}\big[\mathrm{val}(TP) \le \mathrm{val}(P) \le \mathrm{val}(TP) + \varepsilon \|y^*\|_2 (\|b\|_2 + \|A\|_\dagger\, \theta)\big] \ge 1 - (2m+1)\, C_2\, e^{-C_1 \varepsilon^2 k}.$$

Again, the proof has an easy part (proving that TP is a relaxation of P, which simply follows because aggregating equality constraints always produces a relaxation), and a difficult part [1, Thm. 3.5].

Theorems 1 and 2 are unsatisfying insofar as the appropriate choice of ε, necessary to define k and therefore to formulate TP, depends on the norm of the optimal solution of D. Of course, if one were to solve D, solving TP to speed up the solution of P would become a moot point (unfortunately, this is a feature of many theoretical results on RPs applied to MP). What these results really say is that solving projected formulations is likely to provide good approximations, although the choice of parameters requires considerable guesswork.

The projected solution X̄ of TP, which satisfies TA ⊙ X = Tb, has probability zero of being feasible in A ⊙ X = b, because the rank of (TA, Tb) is k, which is smaller than the rank m of (A, b). On the other hand, [1, Prop. 4.2] gives an upper bound on the infeasibility error: for any u > 0 we have

$$\mathbb{P}\big[\|A \odot \bar{X} - b\|_2 \le \varepsilon\theta \|A\|_2\, (C_3\, w(B) + u\, \Delta(B))/\ln(n)\big] \ge 1 - 2e^{-u^2}, \qquad (6)$$

where C3 is a universal constant, B = {X  0 | tr(X ) ≤ 1}, w(B) is the Gaussian width, and (B) is the diameter of B. We now estimate the Gaussian width and diameter of B in terms of n in order to improve the bound in Eq. (6) specifically for SDPs. √ Proposition 1 (i) w(B) ≤ 2n; (ii) (B) ≤ 2. Proof About (i), by [16, Thm. 1], we have tr(AB) ≤ λmax (A)tr(B). Thus, w(B) = EG (sup G, X ) by definition of Gaussian width X ∈B

≤ EG (λmax (G) sup tr(X )) since G does not depend on X X ∈B

= EG (λmax (G)) because sup tr(X ) = 1. X ∈B

Now, by [17, Lemma 2.8], for a symmetric n × n random matrix variable√G dis√ tributed like Normal(0, 1)n(n+1)/2 , we have EG (λmax (G)) ≤ 2n ⇒ w ≤ 2n as claimed. As for (ii), by definition of B we have: (B) = sup X − Y F ≤ 2 sup X F ≤ 2 sup X 1 = 2 sup tr(X ) = 2. X,Y ∈B

X ∈B

X ∈B

X ∈B

 Corollary 1 For A, b as defined above, and X¯ the solution of T P, we have:   √ 2 ∀u > 0 P A  X¯ − b 2 ≤ εθ A 2 (C3 2n + 2u)/ ln(n) ≥ 1 − 2e−u . (7)


Proof By application of Proposition 1 to Eq. (6). □

The solution retrieval method proposed in [1] consists in constructing a solution X̃ which is a projection of X̄ onto A ∘ X = b:

  X̃ = X̄ + Aᵀ(AAᵀ)⁻¹(b − A ∘ X̄),  (8)

with the implicit identification of X and vec(X). This may easily yield an X̃ that is not PSD, but [1, Thm. 4.4] and Proposition 1 show that this “negativity error” is bounded.

Corollary 2 With A, b, X̄, X̃ as above and κ(A) the condition number of A, we have:

  ∀u > 0  P(λmin(X̃) ≥ λmin(X̄) − εθ√κ(A)(C₃√(2n) + 2u)/ln n) ≥ 1 − 2e^{−u²}.

4.1 A New Solution Retrieval Method

The issue with the solution retrieval method in Eq. (8) is that it yields a solution X̃ that has some (hopefully small) eigenvalue negativity, which stems from a projection of X̄ onto A ∘ X = b. We note that this eigenvalue negativity can be “projected away” by zeroing all of the negative eigenvalues of X̃, a technique known as classic Multidimensional Scaling (MDS) [18], which essentially yields a projection of X̃ onto the PSD cone X ⪰ 0. Every time we project onto A ∘ X = b we may leave the PSD cone, and every time we project back into the PSD cone we may leave the subspace A ∘ X = b. This suggests using the well-known Alternating Projection Method (APM) [19, Thm. 13.10] based on the two convex sets A ∘ X = b and X ⪰ 0. For two closed convex sets S₁, S₂ with non-empty intersection, the APM converges (possibly in infinite time) to a point in the intersection S₁ ∩ S₂. The new retrieval method is presented in Algorithm 1. The loop in Algorithm 1 is executed at most a given number MaxIterations of times. It alternates between achieving feasibility w.r.t. A ∘ X = b and w.r.t. X ⪰ 0. Algorithm 1 returns a solution X̂ ← APM(A, b, X̃, δ) which is closer to the feasible set of P than the initial matrix X̃ was.

5 Computational Results

The Mosek 9.1.5 [20] SDP solver was used in all experiments, carried out on a 2.1 GHz Intel Xeon E5-2620 with 32 8-core CPUs and 64 GB RAM running Linux. Most code was written in Python 3. The number of iterations for the APM Algorithm 1 was set to 50 for DGP instances (Sect. 5.2) and to 20 otherwise. A computational validation of projected SDPs was carried out in [1, Sect. 5]. RPs were sampled as in [21].


Algorithm 1 APM(A, b, X̃, δ)
1: X̂ ← X̃
2: for i ≤ MaxIterations do
3:   X̂ ← X̂ + Aᵀ(AAᵀ)⁻¹(b − A ∘ X̂)
4:   if X̂ ⪰ 0 then
5:     return X̂   // X̂ feasible
6:   end if
7:   X̂ ← MDS(X̂)   // perform Multidimensional Scaling on X̂
8:   if ‖A ∘ X̂ − b‖₂ ≤ δ then
9:     return X̂   // X̂ almost feasible
10:  end if
11: end for
12: return X̂   // X̂ is closer than X̃ to being feasible
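A compact Python rendering of the retrieval step (8) and of Algorithm 1 may help; this is a sketch under our reading of the pseudocode (numpy only; the MDS step is implemented as the spectral projection that zeroes negative eigenvalues).

```python
# Sketch of Eq. (8) and Algorithm 1 (APM); function names are illustrative.
import numpy as np

def project_onto_equalities(A, b, X):
    # X <- X + A^T (A A^T)^{-1} (b - A . X), reshaped back to an n x n matrix
    n = X.shape[0]
    r = b - A @ X.flatten(order="F")
    x = X.flatten(order="F") + A.T @ np.linalg.solve(A @ A.T, r)
    return x.reshape((n, n), order="F")

def mds_psd_projection(X):
    # classic MDS step: spectral decomposition, negative eigenvalues set to zero
    lam, P = np.linalg.eigh((X + X.T) / 2)
    return (P * np.maximum(lam, 0.0)) @ P.T

def apm(A, b, X, delta=1e-6, max_iterations=50):
    for _ in range(max_iterations):
        X = project_onto_equalities(A, b, X)
        if np.linalg.eigvalsh((X + X.T) / 2).min() >= 0:
            return X   # feasible: PSD and on A . X = b
        X = mds_psd_projection(X)
        if np.linalg.norm(A @ X.flatten(order="F") - b) <= delta:
            return X   # almost feasible
    return X           # closer to feasible than the input
```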

Forty random SDPs were generated, ten of which infeasible and thirty feasible. In all cases, C = Iₙ and A ∼ Uniform(0, 1)^{m×d}, where d = n(n + 1)/2 is the number of degrees of freedom of the linear system. In the infeasible cases, b ∼ Uniform(0, 1)ᵐ (all candidate infeasible instances were verified to be infeasible). In the feasible cases, X⁰ was sampled from Uniform(−1, 1)^{n×n} and made diagonally dominant (DD), hence PSD, and b = A ∘ X⁰. The actual SDP formulation tested in the infeasible cases was min{tr(X) | A ∘ X = b ∧ X⁰ − θ1 ≤ X ≤ X⁰ + θ1 ∧ X ⪰ 0}, to make sure instances were not unbounded and to take the θ bound into account explicitly. Infeasible tests were all successful with ε = 0.13 (every infeasible P mapped to an infeasible TP), and all unsuccessful with ε = 0.2. Feasible tests yielded very poor values of val(TP). The retrieved solutions had excellent quality, although we later discovered a bias in the random generation process.

5.1 New Tests on Random Instances

The differences of the new benchmark w.r.t. [1] are as follows. (a) We only test feasible instances. (b) We use k × m sparse RPs T with given density σ, sampled componentwise from Normal(0, 1/√(σk)). (c) Instead of taking C = Iₙ, we sample C ∼ Uniform(0, 1)^{n²}. (d) We do not artificially change the instance with X⁰ and θ as in [1], since neither is actually available in practice. Instead, we impose a doubly non-negative (DNN) constraint on X, yielding

  min{⟨C, X⟩ | A ∘ X = b ∧ X ≥ 0 ∧ X ⪰ 0}.  (9)
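For concreteness, the sparse RP described in (b) can be sampled as follows (a sketch assuming numpy, and reading Normal(0, 1/√(σk)) as the standard deviation of the nonzero entries).

```python
# Sketch: sparse RP with density sigma; each entry is zero with probability
# 1 - sigma and otherwise drawn from Normal(0, 1/sqrt(sigma * k)).
import numpy as np

def sparse_rp(k, m, sigma, seed=0):
    rng = np.random.default_rng(seed)
    mask = rng.random((k, m)) < sigma
    G = rng.normal(0.0, 1.0 / np.sqrt(sigma * k), size=(k, m))
    return mask * G
```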

We test SDPs with different densities σ ∈ {0.05, 0.1, 0.4} in the constraint matrix A (sampled as in [1]), and sample the RP T with the same density. We chose instance sizes (m, n) ∈ {(1000, 50), (1500, 57), (2000, 65), (2500, 72), (3000, 80)}, yielding degrees of freedom d ∈ {1275, 1653, 2145, 2628, 3240}; we fixed k = 0.001 m for the whole benchmark. We tested 6 random instances per size and density. Average results are given in Table 1. RPs appear to provide the best results on sparse instances.


Table 1 Computational results on random SDPs. F*, F̄, F̃ are the objective function values of the original, projected, and retrieved (X̃) solutions; t̃ is the CPU time taken to sample T, compute TA, Tb, construct and solve TP, and retrieve X̃; t* is the time taken to solve P. Best objective ratios should approach 1; best CPU time ratios should approach 0

σ     m     k   n   d     F̄/F*    F̃/F*    ‖A∘X̃−b‖₁/m  Σᵢⱼ X̃ᵢⱼ⁻/n²  λmin(X̃)  t̃/t*
0.05  1000  10  50  1275  0.0614  0.8131  0.000       0.640        0.668    0.73
0.05  1500  15  57  1653  0.0621  0.8748  0.001       0.446        0.002    0.70
0.05  2000  20  65  2145  0.0882  0.9356  0.001       0.397        0.000    0.70
0.05  2500  25  72  2628  0.0837  0.9458  0.001       0.391        0.000    0.79
0.05  3000  30  80  3240  0.0784  0.9396  0.001       0.529        0.000    0.71
0.10  1000  10  50  1275  0.0520  0.7676  0.000       0.618        0.570    0.69
0.10  1500  15  57  1653  0.0740  0.9425  0.001       0.475        0.001    0.79
0.10  2000  20  65  2145  0.0795  0.9278  0.001       0.457        0.000    0.83
0.10  2500  25  72  2628  0.0721  0.9381  0.001       0.361        0.000    0.85
0.10  3000  30  80  3240  0.0879  0.9217  0.001       0.521        0.000    0.85
0.40  1000  10  50  1275  0.0542  0.7702  0.001       0.591        0.385    0.92
0.40  1500  15  57  1653  0.0736  0.9155  0.002       0.427        0.000    1.01
0.40  2000  20  65  2145  0.0707  0.9341  0.002       0.417        0.000    1.12
0.40  2500  25  72  2628  0.0866  0.9549  0.003       0.367        0.000    1.06
0.40  3000  30  80  3240  0.0963  0.8953  0.003       0.491        0.000    0.95

Given that projection on A ∘ X = b very often preserved X ⪰ 0, the solution retrieval method was adapted to alternate only between A ∘ X = b and X ≥ 0, terminating on the former, and was limited to 20 iterations to prevent excessive CPU time usage; clearly, more iterations would have further decreased the error w.r.t. X ≥ 0. The objective values follow a trend similar to [1]: val(TP) provides a poor relaxation, but ⟨C, X̃⟩ is a good approximation of val(P).

5.2 Tests on SDP Relaxations of the DGP

The DGP is the following decision problem: given an integer K > 0 and a simple edge-weighted graph G = (V, E, d), decide whether there is a realization x : V → R^K such that ‖xᵢ − xⱼ‖₂² = d²ᵢⱼ for all {i, j} ∈ E. This is a pure-feasibility, NP-hard nonconvex QCP [22]. Its SDP relaxation is based on the identity ‖xᵢ − xⱼ‖₂² = ⟨xᵢ, xᵢ⟩ + ⟨xⱼ, xⱼ⟩ − 2⟨xᵢ, xⱼ⟩. By replacing the ⟨xᵢ, xⱼ⟩ with new variables Xᵢⱼ, we obtain the SDP

  min{tr(X) | ∀{i, j} ∈ E (Xᵢᵢ + Xⱼⱼ − 2Xᵢⱼ = d²ᵢⱼ) ∧ X ⪰ 0}.  (†)

The objective function aims at (heuristically) reducing the rank of X since, by the spectral decomposition X = PΛPᵀ, we have tr(X) = tr(PΛPᵀ) = tr(ΛPᵀP) = tr(Λ) = Σ_{v∈V} λᵥ. By minimizing the sum of (non-negative) eigenvalues of X, we hope that at least some of them will be set to zero.
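A minimal sketch of how the relaxation (†) can be assembled follows (assuming cvxpy; the edge container and function name are illustrative assumptions).

```python
# Sketch of the DGP SDP relaxation (†); edges given as {(i, j): d_ij}.
import cvxpy as cp

def dgp_sdp(n_vertices, edge_dist):
    X = cp.Variable((n_vertices, n_vertices), PSD=True)
    constraints = [X[i, i] + X[j, j] - 2 * X[i, j] == d ** 2
                   for (i, j), d in edge_dist.items()]
    # tr(X) = sum of eigenvalues: a heuristic surrogate for rank minimization
    prob = cp.Problem(cp.Minimize(cp.trace(X)), constraints)
    prob.solve()
    return X.value
```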


Table 2 Computational results on SDPs derived from DGP instances

Name    σ        m     k   n    d       F̄/F*    F̃/F*    ‖A∘X̃−b‖₁/m  λmin(X̃)  t̃/t*
tiny    0.00540  335   58  38   741     0.0190  0.4999  0.000       0.730    0.62
names   0.00104  849   67  87   3828    0.0122  0.2182  0.000       1.007    0.45
1guu    0.00004  955   69  427  91378   0.0000  0.1290  0.000       0.203    0.23
1guu-1  0.00035  959   69  150  11325   0.0025  0.1152  0.000       0.574    0.42
2kxa    0.00007  2711  79  333  55611   0.0020  0.1718  0.000       0.765    0.22
100d    0.00003  5741  87  491  120786  0.0013  0.0692  0.000       1.217    0.26

The tested instances were taken from the Protein Data Bank (PDB): each realization was transformed into a distance-weighted ball graph with radius 5.5 Å, as discussed in [23, Sect. 3.1]. The resulting linear system A ∘ X = b in (†) is very sparse and loosely constrained. We chose k = 0.01 m for this benchmark. The results in Table 2 show a good trend in the CPU times, at the expense of sizable approximation errors (optimality and feasibility w.r.t. the PSD cone). Despite the errors, further tests (not included in this paper) show that good quality solutions of the DGP can nonetheless be obtained by using X̃, after dimensionality reduction (which loses the negative eigenvalues), as a starting point for a local descent using a nonlinear optimization solver on a standard primal DGP formulation.

5.3 Tests on SDP Relaxations of the ACOPF

The ACOPF is a QCQP with parameters and decision variables over ℂ [11], which can be reformulated to a QCQP over ℝ of four times the size. It relies on Ohm's and Kirchhoff's laws adapted to AC, plus some technical constraints. We test an ACOPF variant of the form min{⟨C, X⟩ | A ∘ X = b ∧ L ≤ X ≤ U ∧ X ⪰ 0} that minimizes ‖X − X⁰‖∞ with fixed generating power levels, where X⁰ is a target solution. See [24] about the tested instances. We chose k = 0.001 m for this benchmark. The results in Table 3 show good trends in optimality and feasibility (especially in the ieee instances), as well as in CPU time (across the benchmark).

Table 3 Tests on SDPs derived from ACOPF instances; rng is the average range error

Name           σ        m      k   n    d      F̄/F*    F̃/F*    ‖A∘X̃−b‖₁/m  rng    λmin(X̃)  t̃/t*
case57_ieee    0.00366  3363   3   114  6555   0.0002  0.9152  0.000       0.007  0.092    0.74
case73_ieee    0.00205  5475   5   146  10731  0.0028  0.8228  0.000       0.018  0.094    0.58
case89_pegase  0.00377  8099   8   178  15931  0.0001  0.0147  0.000       0.281  0.022    0.14
case118_ieee   0.00136  14160  14  236  27966  0.0006  0.8456  0.000       0.010  0.092    0.35
case162_ieee   0.00072  26568  27  324  52650  0.0015  0.8868  0.000       0.012  0.099    0.32
case179_goc    0.00059  32399  32  358  64261  0.0010  0.0632  0.000       0.261  0.017    0.28

5.4 Closing Remarks

Although X̃ is supposed to be approximately feasible, the ratios F̃/F* in the results tables are ≤ 1 because X̃ is obtained by applying a heuristic (Sect. 4.1) to the solution X̄ of an SDP relaxation. That the ratios t̃/t* are sometimes ≥ 1 does not mean that RPs are useless: the point of RPs is to solve instances so large they cannot be solved


any other way. Here we are interested in comparing performances, so the instances are small enough that we can also solve them exactly. But the trend implied by k = O(ln n) is that t̃/t* decreases as n increases.

References

1. Liberti, L., Poirion, P.-L., Vu, K.: Random projections for conic programs. Linear Algebr. Appl. 626, 204–220 (2021)
2. D'Ambrosio, C., Liberti, L., Poirion, P.-L., Vu, K.: Random projections for quadratic programs. Math. Program. B 183, 619–647 (2020)
3. Johnson, W., Lindenstrauss, J.: Extensions of Lipschitz mappings into a Hilbert space. In: Hedlund, G. (ed.) Conference in Modern Analysis and Probability. Contemporary Mathematics, vol. 26, pp. 189–206. AMS, Providence, RI (1984)
4. Pilanci, M., Wainwright, M.: Randomized sketches of convex programs with sharp guarantees. IEEE Trans. Inf. Theory 61(9), 5096–5115 (2015)
5. Vu, K., Poirion, P.-L., Liberti, L.: Random projections for linear programming. Math. Oper. Res. 43(4), 1051–1071 (2018)
6. Liberti, L., Manca, B.: Side-constrained minimum sum-of-squares clustering: mathematical programming and random projections. J. Glob. Optim., accepted
7. Cartis, C., Massart, E., Otemissov, A.: Global optimization using random embeddings. Technical Report. arXiv:2107.12102 (2021)
8. Liberti, L.: Distance geometry and data science. TOP 28, 271–339 (2020)
9. Beyer, K., Goldstein, J., Ramakrishnan, R., Shaft, U.: When is "nearest neighbor" meaningful? In: Beeri, C., Buneman, P. (eds.) Proceedings of ICDT. LNCS, vol. 1540, pp. 217–235. Springer, Heidelberg (1998)
10. Venkatasubramanian, S., Wang, Q.: The Johnson-Lindenstrauss transform: an empirical study. In: Algorithm Engineering and Experiments, ALENEX, vol. 13, pp. 164–173. SIAM, Providence, RI (2011)
11. Bienstock, D., Escobar, M., Gentile, C., Liberti, L.: Mathematical programming formulations for the alternating current optimal power flow problem. 4OR 18(3), 249–292 (2020)
12. Bomze, I., Dür, M., De Klerk, E., Roos, C., Quist, A., Terlaky, T.: On copositive programming and standard quadratic optimization problems. J. Glob. Optim. 18, 301–320 (2000)
13. Helmberg, C., Rendl, F., Vanderbei, R., Wolkowicz, H.: An interior-point method for semidefinite programming. SIAM J. Optim. 6(2), 342–361 (1996)
14. Anstreicher, K.: Semidefinite programming versus the reformulation-linearization technique for nonconvex quadratically constrained quadratic programming. J. Glob. Optim. 43, 471–484 (2009)
15. Belotti, P., Lee, J., Liberti, L., Margot, F., Wächter, A.: Branching and bounds tightening techniques for non-convex MINLP. Optim. Methods Softw. 24(4), 597–634 (2009)
16. Fang, Y., Loparo, K., Feng, X.: Inequalities for the trace of matrix product. IEEE Trans. Autom. Control 39(12), 2489–2490 (1994)
17. Song, D., Parrilo, P.: On approximations of the PSD cone by a polynomial number of smaller-sized PSD cones. Technical Report. arXiv:2105.02080v1 (2021)
18. Cox, T., Cox, M.: Multidimensional Scaling. Chapman & Hall, Boca Raton (2001)
19. von Neumann, J.: Functional Operators. Volume II: The Geometry of Orthogonal Spaces. Number 22 in Annals of Mathematics Studies. Princeton University Press, Princeton, NJ (1950)
20. Mosek ApS: The Mosek manual, Version 9 (2019)
21. Kane, D., Nelson, J.: Sparser Johnson-Lindenstrauss transforms. J. ACM 61(1), 4 (2014)
22. Saxe, J.: Embeddability of weighted graphs in k-space is strongly NP-hard. In: Proceedings of 17th Allerton Conference in Communications, Control and Computing, pp. 480–489 (1979)


23. Liberti, L., Lavor, C., Maculan, N., Mucherino, A.: Euclidean distance geometry and applications. SIAM Rev. 56(1), 3–69 (2014)
24. IEEE PES PGLib-OPF Task Force: The power grid library for benchmarking AC optimal power flow algorithms. Technical Report. arXiv:1908.02788 (2019)

Approaches to ESG—Integration in Portfolio Optimization Using MOEAs Ana Garcia-Bernabeu, Adolfo Hilario-Caballero, José Vicente Salcedo, and Francisco Salas-Molina

Abstract Against a backdrop of growing collective awareness of sustainability, the financial sector plays a key role in aligning financial flows with a path towards sustainable development. In this research, we focus on the process of integrating environmental, social, and governance (ESG) criteria, which will lead to a better reorientation of investments towards socially and environmentally useful projects. With this aim, we approach the ESG-Integration problem in portfolio selection using a recently proposed evolutionary multiobjective optimization procedure called ev-MOGA. This new approach significantly improves over exact methods and allows a better assessment of financial and ESG criteria tradeoffs. As an application example, we compute the mean-variance-ESG non-dominated surface for ESG motivated investors who want to assess the tradeoffs among the three criteria.

1 Introduction

The ecological transition is a strategic opportunity for the financial sector. The current climate emergency and the recent COVID-19 pandemic have highlighted the need for a change in habits, attitudes and processes towards an economy based on sustainability criteria [27]. Sustainable and responsible investment (SRI) is an investment that incorporates environmental, social and governance (ESG) criteria in the process of research, analysis and selection of securities in an investment


portfolio [13]. Sustainable investments play a key role in reorienting capital flows towards responsible and sustainable criteria by connecting investors to socially or environmentally useful projects. Since the 1990s, sustainable investment has increased its presence significantly, but it is only since the early 2000s that it has begun to be considered in academia [3]. At present, studies on the integration of sustainability in investment decisions, both in the economic-financial area and in operational research, constitute an incipient but active field of research with a promising outlook. Pioneering research in this field includes the work of [1, 20], which constitute the first operational research approaches to socially responsible investment (SRI) based on multi-attribute functions and classical utility theory. Subsequent research has mainly focused on the behaviour of the socially responsible investor, the comparison of SRI performance versus conventional investment, and the optimisation of SRI portfolios [2]. At the international level, the Paris Agreement on Climate Change, the Sustainable Development Goals (SDGs) established by the United Nations 2030 Agenda, and the European Green Deal have marked a turning point in the commitment to sustainable development. In short, there has been growth in both the volume of sustainable investment and institutional support for reorienting the financial sector in line with environmental and social challenges. To incorporate ESG criteria into decision-making processes, existing analytical tools need to be rethought and new models developed that can help investors, especially retail investors, who aim to integrate ESG criteria into their investment decisions. Concerning investment management, one of the most studied problems in the financial sector is portfolio optimisation. The basic formulation of the problem is the Mean-Variance (M-V) quadratic programming model proposed by [28], a multi-objective programming model that yields a set of optimal solutions, or efficient frontier, in a two-dimensional plane, with expected return as the measure of profitability and variance as the measure of risk. ESG factors can be integrated into all fixed income and equity portfolios, and especially mutual funds. So far, the methodology most commonly applied to select portfolios of assets with sustainability criteria (shares/investment funds) uses the technique of exclusion or ethical pre-selection, which consists of establishing a kind of "red line" to rule out companies in sectors that are not aligned with the Principles for Responsible Investment (PRI) of the United Nations Global Compact, such as arms companies, projects that harm the environment, and companies that use child labour or exclude minorities. From this selection, investment decisions are made on the basis of profitability and risk, but without measuring the level of sustainability in any case. This reinforces the idea that sustainable investment is less profitable, since it restricts the investment universe to fewer assets. This work takes the perspective of ESG motivated investors who are strongly driven by ESG considerations, and whose commitment to ESG is such that they are willing to compromise profitability for ESG. In this research, we propose a tri-criterion evolutionary multiobjective optimisation model for portfolio selection with the aim of integrating ESG criteria into investment decisions. This model is


performed by determining the optimal composition of a portfolio with ESG criteria as an additional objective. In Sect. 2 we review recent approaches to extending M-V multiobjective portfolio optimization with sustainability considerations. In Sect. 3 we formulate the proposed approach for ESG-Integration in portfolio optimization. In Sect. 4 we present a real-world application to mutual funds that generate an intentional and measurable impact on sustainability. Finally, the last section provides some conclusions and future lines of research.

2 Literature Review

Within the multiobjective approaches to extending the M-V model, most contributions include sustainability criteria as an additional constraint. An early attempt to consider sustainability-related constraints in the classical M-V portfolio optimization problem was proposed by [22] to assess the cost of sustainability. The authors proposed a new measure, called the price of sustainability, defined as the loss in the Sharpe ratio. A mathematical framework for modelling sustainability returns is suggested in [11] by introducing stochastic sustainability returns into safety first models for portfolio choice. In [14] the impact of the reduction of investors' investment opportunities when considering socially responsible screening to construct the mean-variance efficient frontier was analysed. A three-stage approach for portfolio selection based upon financial and ethical criteria is provided in [19]. For the first time sustainability appears as a third criterion in [33], who computed a variance-expected return-sustainability efficient frontier. The so-called tri-criterion non-dominated surface was derived by a new Quadratic Constrained Linear Program (QCLP) approach. A fuzzy multi-criteria model for portfolio selection including a new non-financial criterion is computed in [8] by means of fuzzy set tools. In [26] a two-step portfolio selection model based on the classical Markowitz mean-variance model is proposed, by means of the Induced Ordered Weighted Averaging (IOWA) methodology combined with the M-V multiobjective optimization problem, to determine socially responsible efficient portfolios. A model to derive the dependency of the optimal portfolio choice on the available investment volume from the investor's appreciation of sustainable and responsible objectives is implemented by [10]. In [18] Markowitz's traditional mean-variance portfolio optimization problem is extended to include a social responsibility measure of risky assets as a third objective. Goal Programming (GP) [9] is one of the most widely used techniques in portfolio selection. Several authors have implemented GP techniques for incorporating sustainability criteria. For example, [32] used an integrated model for selecting SRI stocks based on the Analytic Network Process (ANP) and zero-one GP techniques. In [12] the classical mean-variance model is enriched with socially responsible indicators for evaluating the performance of sovereign bonds in developed countries by


applying GP with flexible targets and constraints. A new financial bi-criteria model is suggested by [1], in which the first goal includes the purely financial side of the portfolio optimization problem and the second goal reflects the SRI side. This is the first time that risk aversion coefficients and targets were defined depending on the investor's ethical preferences. Reference [4] introduces a Fuzzy-GP optimization model by employing a new SRI-attractiveness index to summarize the ESG performance of funds for a particular investor. Reference [16] proposes a GP approach to design efficient portfolios considering classic financial criteria and environmental criteria. Also in [5–7] fuzzy techniques are combined with a GP approach to select investments with a certain level of social responsibility concerns. The increasing complexity of financial decision making problems has led researchers to apply heuristic procedures inspired by biological processes, such as Multi-objective Evolutionary Algorithms (MOEAs). Suggested at the beginning of the 90s, MOEAs have been applied in several fields including finance, and in particular to solve the portfolio selection problem [29]. These techniques provide satisfactory approximations of the efficient frontier even when the problem involves nonconvexity, discontinuity or integer variables. In [15] a multi-objective approach for portfolio selection is presented. In the proposed model the ESG performance is included as a constraint in the investment decision-making process. The authors solve the constrained portfolio optimization problems using the Non-dominated Sorting Genetic Algorithm II (NSGA-II). The mean-variance-sustainability non-dominated surface is computed in [17] using a MOEA called ev-MOGA. In the proposed algorithm, sustainability is integrated as a third criterion to reflect the individual preferences of ethical or green investors. In [23] a tri-criterion portfolio selection model is proposed to extend the classical Markowitz mean-variance approach by including the investor's preferences on the portfolio carbon risk exposure as an additional criterion. A recent proposal of [25] uses MOEAs (NSGA-II, SPEA2 and IBEA) to incorporate a screening procedure into the portfolio optimization problem and to eliminate stocks from the set of investments that do not respect the ESG constraints.

3 Approximating the M-V-ESG Surface with ev-MOGA

ev-MOGA is an evolutionary multiobjective optimization algorithm [21] based on the concept of ε-dominance [24] to approximate the Pareto front. This algorithm adjusts the limits of the approximate Pareto front dynamically and provides a uniform distribution of solutions. In this section, we illustrate how ev-MOGA can be used for ESG-Integration in portfolio optimization and to derive the mean-variance-ESG (M-V-ESG) efficient surface. Therefore, three objectives are considered:
• Objective 1: Maximize the expected return of the portfolio
• Objective 2: Minimize the risk of the portfolio
• Objective 3: Minimize the ESG risk of the portfolio

Table 1 ESG risk scores and categories

ESG score range   ESG risk category
0–9.99            Negligible ESG risk
10–19.99          Low ESG risk
20–29.99          Medium ESG risk
30–39.99          High ESG risk
>40               Severe ESG risk

Source: Morningstar/Sustainalytics

  max_x f₁(x) = μᵀx  (1)

  min_x f₂(x) = xᵀΣx  (2)

  min_x f₃(x) = ρᵀx  (3)

  s.t.: x ∈ S = {x ∈ Rⁿ | 1ᵀx = 1, xᵢ = qω₀, q = 0, 1, 2, . . . , q_max}  (4)

where xᵢ denotes the proportion of asset i in the portfolio. The first two objectives are return and risk, where μ is the vector of expected returns and Σ is the covariance matrix. For the third objective, ρ denotes the vector of ESG risk scores, measured according to the Historical Corporate Sustainability Score provided by Sustainalytics. In practice, most scores range from 0 to 50 and are assigned to five risk categories as shown in Table 1. Additionally, we include the discretization of portfolio weights, so that the analyst has to specify: (i) the value of ω₀ to set the discrete step between two investment shares, and (ii) the maximum asset weight ω_max = q_max ω₀, to indicate the maximum amount of investment in an asset. Thus, the portfolio is defined by a set of weights that are zero or multiples of ω₀, up to a maximum level of ω_max. The feasible region in objective space is defined as follows:

  Z = {z ∈ R³ | z₁ = μᵀx, z₂ = xᵀΣx, z₃ = ρᵀx, x ∈ S}.

In the feasible region, a vector z ∈ Z is said to be a member of the approximate Pareto front if there exists no z′ ∈ Z such that z′₁ ≥ z₁, z′₂ ≤ z₂, z′₃ ≤ z₃ and z′ ≠ z. The adapted ev-MOGA algorithm is an extension of the algorithm proposed by [21], whose code is available at Matlab Central (ev-MOGA). Although the ev-MOGA algorithm has been used in our recent publications [17, 23] to integrate the sustainability objective, this proposal differs from the previous ones in the following aspects. First, in [17] sustainability is considered as an objective to be maximized. In contrast, in the current proposal, the valuation of ESG criteria is considered a risk to be minimized, in line with the new Sustainalytics ESG Risk Ratings methodology adopted by Morningstar in 2021. On the other hand, in [23] the third objective focused on the minimization of carbon risk and did not take into account the overall assessment of ESG criteria.
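The following sketch (Python with numpy, although the authors' implementation is in Matlab; function names are illustrative assumptions) shows how a candidate portfolio is evaluated on the three objectives (1)-(3) and checked for membership in the discretized feasible set S of (4).

```python
# Sketch: evaluating a discretized portfolio; mu, Sigma, rho as in (1)-(3).
import numpy as np

def evaluate_portfolio(x, mu, Sigma, rho):
    f1 = mu @ x          # expected return (maximized)
    f2 = x @ Sigma @ x   # variance (minimized)
    f3 = rho @ x         # ESG risk score (minimized)
    return f1, f2, f3

def is_in_S(x, omega0, q_max, tol=1e-9):
    # weights must sum to one and be integer multiples q*omega0, q <= q_max
    q = x / omega0
    return (abs(x.sum() - 1.0) <= tol
            and np.all(np.abs(q - np.round(q)) <= tol)
            and np.all(np.round(q) <= q_max)
            and np.all(x >= -tol))
```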


Thus, the consideration of return, risk and ESG risk leads to a 3-dimensional graph representing the M-V-ESG efficient surface, and to the investor's task of identifying the best risk-return-ESG tradeoff. To implement the adapted ev-MOGA detailed in Algorithm 1, we first need to specify the size of the main population Pk, which is N_indP. We also define an archive population Ak, and an auxiliary population GAk formed by the new individuals obtained by crossover and mutation from individuals belonging to the Pk and Ak populations; its size is denoted by N_indGA. It is also necessary to define the maximum number of algorithm iterations k_max, the probability of crossover/mutation Pc/m, and the number of boxes n_box.

Algorithm 1 Adapted ev-MOGA for computing the M-V-ESG surface
1: Set k = 0. Generate the population of candidate solutions P0 and set A0 = ∅
2: Conduct the M-V-ESG multiobjective optimization from P0
3: Store the ε-nondominated [24] portfolios in the archive population A0
4: while k ≤ k_max do
5:   From the main population Pk and the archive population Ak, generate the auxiliary population GAk by crossing and mutating portfolios (using an extended linear recombination and a random mutation with Gaussian distribution, respectively)
6:   Evaluate population GAk using the M-V-ESG multiobjective portfolio model (1)–(4)
7:   Check which portfolios in GAk must be included in Ak+1 on the basis of their location in the objective space
8:   Update population Pk+1 with portfolios from GAk
9:   k ← k + 1
10: end while
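As a complement to step 7 of Algorithm 1, the sketch below illustrates the ε-dominance idea of [24] in the max-min-min sense used here (Python; this is a simplification of ev-MOGA's actual box-based archive management, which also adapts the box grid to the current front limits).

```python
# Sketch: dominance test and box indexing for an epsilon-dominance archive.
import numpy as np

def dominates(z, w):
    # z dominates w when z1 is no worse (max) and z2, z3 are no worse (min),
    # with strict improvement in at least one objective
    return (z[0] >= w[0] and z[1] <= w[1] and z[2] <= w[2]
            and (z[0] > w[0] or z[1] < w[1] or z[2] < w[2]))

def box_index(z, z_min, eps):
    # epsilon-dominance keeps at most one archive member per box of side eps
    return tuple(np.floor((np.asarray(z) - z_min) / eps).astype(int))
```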

4 Illustrative Example

In the experimental part, we use a set of monthly returns for ten funds considered as impact investments. These funds fall under Article 9 of the European Sustainable Finance Disclosure Regulation (SFDR); investment products in this category have a concrete and measurable sustainability objective. Our analysis period spans from January 2019 to February 2022. All the numerical information on returns and sustainability risk scores used on this opportunity set comes from the Morningstar© database. To implement the adapted ev-MOGA, we specify the parameter setting as follows: the size of the main population is N_indP = 5 × 10⁴, while the size of the auxiliary population GAk is N_indGA = 500. For the probability of crossover/mutation we select Pc/m = 0.5. Finally, the space of each objective function has been divided into 300 boxes.


Fig. 1 Efficient M-V-ESG Pareto front. z_opt is the optimum portfolio, as its normalized distance to the ideal z* is minimum. The points nearest to the optimum are plotted in green. The red points represent M-V-ESG tradeoffs for an increase in return of 0.25%

The procedure was coded and compiled in Matlab© R2021b, and executed on an Intel i7 3.20 GHz with 20 GB of RAM. While there are no applications of the algorithm to large-scale portfolio optimization to date, the works [30, 31] have tested its feasibility considering 18 and 30 targets, respectively. Figure 1 illustrates how the M-V-ESG surface computed using Algorithm 1 for this dataset can be represented in 3D. The ideal point z* collects the optimum of each objective once the joint optimization of the multiobjective problem has been performed. We find the optimum portfolio z_opt as the one whose normalized distance to the ideal z* is minimum. Consider now the case of an ESG motivated investor who wants to assess the impact on risk and sustainability of increasing the return on investment by 0.25%. The line coloured in red highlights the feasible set of these solutions. In Fig. 2 we display the 2-dimensional projections of the M-V-ESG approximate Pareto front, where an investor committed to sustainability can gauge the effect on variance and ESG risk of increasing returns. Notice that the less the ESG risk, the more the variance.
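A sketch of the z_opt selection rule just described follows (Python with numpy; normalizing by the objective ranges is our reading of "normalized distance").

```python
# Sketch: pick z_opt as the front point nearest to the ideal point z*.
import numpy as np

def select_z_opt(front):
    # front: array of shape (N, 3) with columns (z1, z2, z3)
    ideal = np.array([front[:, 0].max(), front[:, 1].min(), front[:, 2].min()])
    span = front.max(axis=0) - front.min(axis=0)
    span[span == 0] = 1.0                      # guard against flat objectives
    dist = np.linalg.norm((front - ideal) / span, axis=1)
    return front[np.argmin(dist)]
```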


Fig. 2 Pareto front projections. When an investor wants to obtain more return, he has to choose between more ESG risk or more variance, as shown by the points colored in red in the z₂–z₃ projection

Table 2 displays in the first row the objective function values attained by the optimum portfolio, as well as the portfolio composition. When analysing the optimum solution, we should draw attention to funds 2, 3, 4, 5, 6 and 10. For an increase in return of around 0.25% (z₁), the following rows present the tradeoff for variance (z₂) and sustainability (z₃). It can be seen that the increase in financial risk ranges between 26.8% and 18.8%. Notice that, as financial risk decreases, sustainability risk tends to increase, between 4.1% and 7.2%, in accordance with the z₂–z₃ projection in Fig. 2. Besides, note that over the entire interval considered, funds 3, 4 and 5 do not change their composition and remain at the maximum limit, whereas funds 1, 8 and 10 move towards greater sustainability and funds 2 and 7 move towards improved financial risk.

5 Conclusion

Today's standard procedures for integrating ESG concerns into investment decisions involve a preliminary step based on the screening of unsustainable assets. However, more and more ESG motivated investors want the opportunity to assess the tradeoffs of investing in sustainability. To shed some light on the problem of integrating ESG criteria in the construction of efficient portfolios, we have first reviewed the main contributions of the literature in this field. For an ESG motivated investor, the problem of portfolio construction is designed from the outset as a three-criteria multiobjective optimization problem. Using the ev-MOGA algorithm, we are able to obtain approximate solutions of the Pareto front, which are efficiently distributed with limited memory resources. In the illustrative example, the algorithm has been applied to a set of funds considered as impact investments. After approximating the frontier, the tradeoffs have been analysed for an investor who wants to evaluate the impact on risk and sustainability of an increase in returns. Although in our proposal we have focused on ESG risk as a third criterion, the proposed approach could be applied to more specific objectives, for example ethics and diversity-inclusion criteria for investors who want to increase social inclusion and equality while seeking to generate attractive investment returns.

Table 2 Objective value functions, tradeoffs and portfolio composition for impact investments. The sample of feasible portfolios z1–z20 presents the tradeoffs in variance and ESG risk for an increase in profitability of about 0.25%

      z1      z2     z3     z1(%)  z2(%)  z3(%)  x1    x2    x3    x4    x5    x6    x7    x8    x9    x10
zopt  1.5706  34.52  19.65  0.000  0.0    0.0    0.00  0.06  0.20  0.20  0.20  0.14  0.00  0.00  0.00  0.20
z1    1.8204  43.77  20.47  0.250  26.8   4.2    0.14  0.00  0.20  0.20  0.20  0.00  0.00  0.16  0.00  0.10
z2    1.8156  43.59  20.45  0.245  26.3   4.1    0.14  0.00  0.20  0.20  0.20  0.13  0.00  0.05  0.00  0.08
z3    1.8157  43.44  20.47  0.245  25.8   4.2    0.08  0.06  0.20  0.20  0.20  0.00  0.00  0.16  0.00  0.10
z4    1.8202  43.38  20.48  0.250  25.7   4.3    0.15  0.00  0.20  0.20  0.20  0.05  0.00  0.01  0.00  0.19
z5    1.8205  43.23  20.51  0.250  25.2   4.4    0.15  0.00  0.20  0.20  0.20  0.04  0.01  0.00  0.00  0.20
z6    1.8220  42.97  20.57  0.251  24.5   4.7    0.00  0.15  0.20  0.20  0.20  0.09  0.02  0.00  0.00  0.14
z7    1.8157  42.80  20.53  0.245  24.0   4.5    0.00  0.15  0.20  0.20  0.20  0.08  0.01  0.00  0.00  0.16
z8    1.8212  42.75  20.60  0.251  23.8   4.8    0.00  0.15  0.20  0.20  0.20  0.06  0.03  0.00  0.00  0.16
z9    1.8205  42.53  20.62  0.250  23.2   5.0    0.00  0.15  0.20  0.20  0.20  0.03  0.04  0.00  0.00  0.18
z10   1.8157  42.44  20.59  0.245  22.9   4.8    0.00  0.15  0.20  0.20  0.20  0.00  0.03  0.02  0.00  0.20
z11   1.8205  42.37  20.65  0.250  22.8   5.1    0.01  0.14  0.20  0.20  0.20  0.00  0.05  0.00  0.00  0.20
z12   1.8164  42.26  20.67  0.246  22.4   5.2    0.14  0.00  0.20  0.20  0.20  0.00  0.08  0.00  0.00  0.18
z13   1.8214  42.21  20.73  0.251  22.3   5.5    0.05  0.09  0.20  0.20  0.20  0.00  0.09  0.03  0.00  0.14
z14   1.8214  42.02  20.77  0.251  21.7   5.7    0.00  0.14  0.20  0.20  0.20  0.04  0.10  0.00  0.00  0.12
z15   1.8207  41.81  20.79  0.250  21.1   5.8    0.00  0.14  0.20  0.20  0.20  0.01  0.11  0.00  0.00  0.14
z16   1.8215  41.61  20.88  0.251  20.5   6.3    0.10  0.03  0.20  0.20  0.20  0.00  0.16  0.00  0.00  0.11
z17   1.8202  41.43  20.90  0.250  20.0   6.4    0.02  0.11  0.20  0.20  0.20  0.00  0.16  0.02  0.00  0.09
z18   1.8206  41.27  20.93  0.250  19.6   6.5    0.00  0.13  0.20  0.20  0.20  0.00  0.17  0.01  0.00  0.09
z19   1.8160  41.22  20.88  0.245  19.4   6.3    0.03  0.10  0.20  0.20  0.20  0.00  0.16  0.00  0.00  0.11
z20   1.8228  41.00  21.05  0.252  18.8   7.2    0.00  0.13  0.20  0.20  0.20  0.00  0.20  0.00  0.01  0.06


In future work, we plan to test the adapted algorithm’s computational cost for ESG integration in portfolio optimization compared to exact methods and other algorithms.

References

1. Ballestero, E., Bravo, M., Perez-Gladish, B., Arenas-Parra, M., Pla-Santamaria, D.: Socially responsible investment: a multicriteria approach to portfolio selection combining ethical and financial objectives. Eur. J. Oper. Res. 216(2), 487–494 (2012). https://doi.org/10.1016/j.ejor.2011.07.011
2. Barroso, J.S.S., Araújo, E.A.: Socially responsible investments (SRIs)–mapping the research field. Soc. Responsib. J. (2020)
3. Bauer, R., Koedijk, K., Otten, R.: International evidence on ethical mutual fund performance and investment style. J. Bank. Financ. 29(7), 1751–1767 (2005)
4. Bilbao-Terol, A., Arenas-Parra, M., Cañal-Fernández, V.: A fuzzy multi-objective approach for sustainable investments. Expert Syst. Appl. 39(12), 10904–10915 (2012)
5. Bilbao-Terol, A., Arenas-Parra, M., Cañal-Fernández, V., Bilbao-Terol, C.: Multi-criteria decision making for choosing socially responsible investment within a behavioral portfolio theory framework: a new way of investing into a crisis environment. Ann. Oper. Res. 247(2), 549–580 (2016)
6. Bilbao-Terol, A., Arenas-Parra, M., Cañal-Fernández, V., Obam-Eyang, P.N.: Multi-criteria analysis of the GRI sustainability reports: an application to socially responsible investment. J. Oper. Res. Soc. 1–26 (2017)
7. Bilbao-Terol, A., Jiménez-López, M., Arenas-Parra, M., Rodríguez-Uría, M.V.: Fuzzy multi-criteria support for sustainable and social responsible investments: the case of investors with loss aversion. In: The Mathematics of the Uncertain, pp. 555–564. Springer (2018)
8. Calvo, C., Ivorra, C., Liern, V.: Fuzzy portfolio selection with non-financial goals: exploring the efficient frontier. Ann. Oper. Res. 245(1–2), 31–46 (2016). https://doi.org/10.1007/s10479-014-1561-2
9. Charnes, A., Cooper, W.W.: Goal programming and multiple objective optimizations: part 1. Eur. J. Oper. Res. 1(1), 39–54 (1977)
10. Dorfleitner, G., Nguyen, M.: A new approach for optimizing responsible investments dependently on the initial wealth. J. Asset Manag. 18(2), 81–98 (2017). https://doi.org/10.1057/s41260-016-0011-x
11. Dorfleitner, G., Utz, S.: Safety first portfolio choice based on financial and sustainability returns. Eur. J. Oper. Res. 221(1), 155–164 (2012). https://doi.org/10.1016/j.ejor.2012.02.034
12. Drut, B.: Sovereign bonds and socially responsible investment. J. Bus. Ethics 92(1), 131–145 (2010). https://doi.org/10.1007/s10551-010-0638-3
13. European Commission: Communication from the Commission to the European Parliament, the European Council, the Council, the European Central Bank, the European Economic and Social Committee and the Committee of the Regions. Action Plan: Financing Sustainable Growth, COM (2018) 97 final edn. European Commission (2018)
14. Fabretti, A., Herzel, S.: Delegated portfolio management with socially responsible investment constraints. Eur. J. Financ. 18(3–4, SI), 293–309 (2012). https://doi.org/10.1080/1351847X.2011.579746
15. Garcia, F., Gonzalez-Bueno, J., Oliver, J., Riley, N.: Selecting socially responsible portfolios: a fuzzy multicriteria approach. Sustainability 11(9) (2019). https://doi.org/10.3390/su11092496
16. Garcia-Bernabeu, A., Pla-Santamaria, D., Bravo, M., Perez-Gladish, B.: The environmental protection as a selection criterion in socially responsible investments: a multicriteria approach. Econ. Agrar. y Recur. Nat. 15(1), 101–112 (2015). https://doi.org/10.7201/earn.2015.01.06


17. Garcia-Bernabeu, A., Salcedo, V.J., Hilario, A., Pla-Santamaria, D., Herrero, J.M.: Computing the mean-variance-sustainability nondominated surface by ev-MOGA. Complexity 2019 (2019). https://doi.org/10.1155/2019/6095712
18. Gasser, S.M., Rammerstorfer, M., Weinmayer, K.: Markowitz revisited: social portfolio engineering. Eur. J. Oper. Res. 258(3), 1181–1190 (2017). https://doi.org/10.1016/j.ejor.2016.10.043
19. Gupta, P., Mehlawat, M.K., Inuiguchi, M., Chandra, S.: Ethicality considerations in multi-criteria fuzzy portfolio optimization. In: Fuzzy Portfolio Optimization: Advances in Hybrid Multi-Criteria Methodologies, Studies in Fuzziness and Soft Computing, vol. 316, pp. 255–281 (2014). https://doi.org/10.1007/978-3-642-54652-5_9
20. Hallerbach, W., Ning, H., Soppe, A., Spronk, J.: A framework for managing a portfolio of socially responsible investments. Eur. J. Oper. Res. 153(2), 517–529 (2004)
21. Herrero, J.M.: Non-linear robust identification using evolutionary algorithms. Ph.D. thesis, Polytechnic University of Valencia (2006)
22. Herzel, S., Nicolosi, M., Starica, C.: The cost of sustainability in optimal portfolio decisions. Eur. J. Financ. 18(3–4, SI), 333–349 (2012). https://doi.org/10.1080/1351847X.2011.587521
23. Hilario-Caballero, A., Garcia-Bernabeu, A., Salcedo, J.V., Vercher, M.: Tri-criterion model for constructing low-carbon mutual fund portfolios: a preference-based multi-objective genetic algorithm approach. Int. J. Environ. Res. Public Health 17(17), 6324 (2020)
24. Laumanns, M., Thiele, L., Deb, K., Zitzler, E.: Combining convergence and diversity in evolutionary multiobjective optimization. Evol. Comput. 10(3), 263–282 (2002)
25. Liagkouras, K., Metaxiotis, K., Tsihrintzis, G.: Incorporating environmental and social considerations into the portfolio optimization process. Ann. Oper. Res. https://doi.org/10.1007/s10479-020-03554-3
26. Liern, V., Perez-Gladish, B., Mendez-Rodriguez, P.: Measuring social responsibility: a multicriteria approach. In: Zopounidis, C., Doumpos, M. (eds.) Multiple Criteria Decision Making: Applications in Management and Engineering, pp. 31–46 (2017). https://doi.org/10.1007/978-3-319-39292-9_2
27. Markard, J., Rosenbloom, D.: A tale of two crises: Covid-19 and climate. Sustain.: Sci., Pract. Policy 16(1), 53–60 (2020)
28. Markowitz, H.: Portfolio selection. J. Financ. 7(1), 77–91 (1952)
29. Metaxiotis, K., Liagkouras, K.: Multiobjective evolutionary algorithms for portfolio management: a comprehensive literature review. Expert Syst. Appl. 39(14), 11685–11698 (2012)
30. Pajares, A., Blasco, F.X., Herrero, J.M., Salcedo, J.V.: Analyzing the nearly optimal solutions in a multi-objective optimization approach for the multivariable nonlinear identification of a PEM fuel cell cooling system. IEEE Access 8, 114361–114377 (2020)
31. Susperregui, A., Herrero, J.M., Martinez, M.I., Tapia-Otaegui, G., Blasco, X.: Multi-objective optimisation-based tuning of two second-order sliding-mode controller variants for DFIGs connected to non-ideal grid voltage. Energies 12(19), 3782 (2019)
32. Tsai, W.H., Chou, W.C., Hsu, W.: The sustainability balanced scorecard as a framework for selecting socially responsible investment: an effective MCDM model. J. Oper. Res. Soc. 60(10), 1396–1410 (2009). https://doi.org/10.1057/jors.2008.91
33. Utz, S., Wimmer, M., Hirschberger, M., Steuer, R.E.: Tri-criterion inverse portfolio optimization with application to socially responsible mutual funds. Eur. J. Oper. Res. 234(2), 491–498 (2014). https://doi.org/10.1016/j.ejor.2013.07.024

Optimization Under Uncertainty

Complexity Issues in Interval Linear Programming Milan Hladík

Abstract Interval linear programming studies linear programming problems with interval coefficients. Herein, the intervals represent a range of possible values the coefficients may attain, independently of each other. They usually originate from a certain uncertainty in obtaining the data, but they can also be used in a type of sensitivity analysis. The goal of interval linear programming is to provide tools for analysing the effects of data variations on the optimal value, optimal solutions and other characteristics. This paper is a contribution to computational complexity theory. Some problems in interval linear programming are known to be polynomially solvable, but some were proved to be NP-hard. We help to improve this classification by stating several novel complexity results. In particular, we show NP-hardness of the following problems: checking whether a particular value is attained as an optimal value; testing connectedness and convexity of the optimal solution set; and checking whether a given solution is robustly optimal for each realization of the interval values.

1 Introduction Interval linear programming [7, 18] was developed to study the effects of variations of input data on diverse characteristics of linear programming (LP) problems, in particular in the optimal value and optimal solutions. The key feature is that only lower and upper bounds on the variations are given, and the coefficients may vary simultaneously and independently of each other. Thus, in contrast to fuzzy or stochastic programming, no information about the distribution of uncertain coefficients is needed. This generality has some drawbacks, though. Many problems appearing in interval LP are intractable. This paper is a contribution to the complexity analysis in interval linear programming, and we prove NP-hardness results for several problems.



1.1 Linear Programming

Consider an LP problem

  f(A, b, c) := min cᵀx subject to x ∈ M(A, b),

where f(A, b, c) is the optimal value and M(A, b) is the feasible set. We have f(A, b, c) = −∞ if the problem is unbounded and f(A, b, c) = ∞ if the feasible set is empty. Next, we denote by S(A, b, c) the set of optimal solutions. Usually, we assume that the feasible set M(A, b) is described by a system in one of the following canonical forms:

  M(A, b) = {x ∈ Rⁿ; Ax = b, x ≥ 0},  (1)
  M(A, b) = {x ∈ Rⁿ; Ax ≤ b},  (2)
  M(A, b) = {x ∈ Rⁿ; Ax ≤ b, x ≥ 0}.  (3)

We do not specify the structure of the constraints because the type of constraints influences the computational complexity in the interval LP case; it can happen that a problem is easy to solve for one form of constraints but NP-hard for another one. The standard transformations of constraints are no longer equivalent [6].

1.2 Interval Analysis

The key object in interval analysis is an interval matrix. It is defined as

  A = [A̲, Ā] = {A ∈ Rᵐˣⁿ; A̲ᵢⱼ ≤ Aᵢⱼ ≤ Āᵢⱼ ∀i, j},

where A̲, Ā ∈ Rᵐˣⁿ are given lower and upper bound matrices. The set of interval matrices of size m × n is denoted by IRᵐˣⁿ. Interval vectors and scalar intervals are considered as interval matrices of a specific size. All interval quantities are typed in bold. Let A ∈ IRᵐˣⁿ and b ∈ IRᵐ be given. An interval system of linear equations is a family of systems Ax = b, where A ∈ A, b ∈ b. In short, we write just Ax = b. The corresponding solution set is defined as the union of all solutions of all realizations of the system, that is, {x ∈ Rⁿ; ∃A ∈ A ∃b ∈ b : Ax = b}. We use analogous notation and notions for other interval linear systems, e.g., interval linear inequalities Ax ≤ b, or mixed equations and inequalities.


1.3 Interval Linear Programming

Let A ∈ IRᵐˣⁿ, b ∈ IRᵐ and c ∈ IRⁿ. By an interval LP problem we mean a family of LP problems

  f(A, b, c) := min cᵀx subject to x ∈ M(A, b),  (4)

where A ∈ A, b ∈ b and c ∈ c. We often write the problem shortly as min cᵀx subject to x ∈ M(A, b). There are many questions related to this problem. Most importantly, we wish to know what is the effect of variations of A, b and c within the interval matrices and vectors A, b and c on the optimal value [2, 8, 18] and the optimal solutions [4, 10]. To address the latter, we recall the definition of the optimal solution set

  S = ⋃_{A∈A, b∈b, c∈c} S(A, b, c),

which comprises all optimal solutions of all realizations of the interval LP problem.

1.4 Notation

We use e for the vector of ones (of appropriate dimension), and diag(v) for the diagonal matrix with entries v₁, . . . , vₙ. The sign of a real number r is defined as sgn(r) = 1 if r > 0 and sgn(r) = −1 otherwise. For vectors, the sign function and the absolute value are understood entrywise.

2 Image of Optimal Value

The best and worst case optimal values of the interval LP problem (4) are defined, respectively, as

  f̲ := min f(A, b, c) subject to A ∈ A, b ∈ b, c ∈ c,
  f̄ := max f(A, b, c) subject to A ∈ A, b ∈ b, c ∈ c.

It is known that computing these extremal optimal values is an NP-hard problem [3, 17, 18], even for specific subclasses with a real constraint matrix. In particular, for the form (1), it is NP-hard to determine the worst case optimal value f̄, while the


best case is polynomially computable. For constraints in the form (2), the situation is the opposite: it is NP-hard to compute f̲ and easy to compute f̄. We show that it is also intractable to check whether some a priori given value is attained as an optimal value. We restrict ourselves to the form (3) and prove the result even for the case with A and b real, so intervals are situated in the objective vector c only.

Theorem 1 It is NP-hard to decide whether there is c ∈ c such that −1 = min cᵀx subject to Ax ≤ b, x ≥ 0.

Proof By Rohn [19], it is NP-hard to check if the system

  |Ax| ≤ e, eᵀ|x| ≥ 1  (5)

is solvable, where A is a nonnegative positive definite rational matrix. We show that this system is solvable if and only if the linear program

  max c₁ᵀu + c₂ᵀv subject to |A(u − v)| ≤ e, u, v ≥ 0  (6)

has the optimal value 1 for some c₁, c₂ ∈ [−e, e].

(i) Let x be a solution of (5), and put c₁ := sgn(x), c₂ := −c₁. Now, (6) reads

  max sgn(x)ᵀ(u − v) subject to |A(u − v)| ≤ e, u, v ≥ 0,

which is equivalent to

  max eᵀw subject to |A diag(sgn(x)) w| ≤ e.

Since w := |x| is a feasible solution, the optimal value is at least 1. Denote by γ ≥ 1 the optimal value; it is finite since A is positive definite. Then (6) has the optimal value 1 when we put c₁ := sgn(x)/γ, c₂ := −c₁.

(ii) Suppose that (6) has the optimal value 1 for some c₁, c₂ ∈ [−e, e]. Then c₁ + c₂ ≤ 0, otherwise the problem is unbounded. If c₁ = −c₂, then the problem reads

  max c₁ᵀ(u − v) subject to |A(u − v)| ≤ e, u, v ≥ 0,

or equivalently max c₁ᵀw subject to |Aw| ≤ e. Any optimal solution w of this problem satisfies (5) since

  eᵀ|w| ≥ |c₁|ᵀ|w| ≥ |c₁ᵀw| = 1.  (7)


If c₁ ≠ −c₂, then we replace c₁ by −c₂. Since the components of c₁ were not decreased, the optimal value of (6) was not decreased either. Now, the argument of the previous case applies; the possible increase of the optimal value does not matter and the inequalities in (7) remain valid. Eventually, we change the maximization problem into a minimization one and rewrite (6) as

  − min −c₁ᵀu − c₂ᵀv subject to Au − Av ≤ e, −Au + Av ≤ e, u, v ≥ 0. □

This complexity result might seem surprising in view of the fact that the best and worst case optimal values are effectively computable in our case; we simply have f̲ = f(A̲, b̄, c̲) and f̄ = f(Ā, b̲, c̄). However, there can be gaps in the interval [f̲, f̄], that is, not every value in this interval is attained as the optimal value f(A, b, c) for certain c ∈ c; see [1, 13]. Notice also that the interval [f̲, f̄] is unbounded for the instances discussed in the proof.
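These endpoint formulas are directly computable with two LPs; a minimal sketch follows (Python with scipy; it ignores unbounded/infeasible diagnostics, and the function name is illustrative).

```python
# Sketch: best and worst case optimal values for constraints of form (3),
# using f_low = f(A_low, b_up, c_low) and f_up = f(A_up, b_low, c_up).
from scipy.optimize import linprog

def endpoint_values(A_low, A_up, b_low, b_up, c_low, c_up):
    best = linprog(c_low, A_ub=A_low, b_ub=b_up, bounds=(0, None))
    worst = linprog(c_up, A_ub=A_up, b_ub=b_low, bounds=(0, None))
    return best.fun, worst.fun
```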

3 Connectedness

Neither the solution set nor the optimal solution set S need be connected. This causes trouble in many computational aspects, for instance when determining the best or worst case optimal values [8]. Even worse, it turns out that it is intractable to check if the solution set or the set S is connected.

Theorem 2 It is co-NP-hard to check for connectedness of the solution set to Ax ≤ b, even on a class of problems with nonempty solution sets.

Proof By Rohn and Kreslová [20], it is NP-hard to check if there is a solution x* of an interval system Mx = d such that for the first component of x* we have x*₁ ≥ 1. Moreover, we can assume two properties. First, M contains only nonsingular matrices, which implies that the solution set is nonempty, compact and connected. Second, 0 ∈ d, which means that the origin is a solution. Now, consider the system

  Mx = d, |x₁ − 0.5| ≥ 0.5.  (8)

Its solution set is nonempty, contains the origin, and is convex in each orthant. Therefore, it is disconnected if and only if there is a solution x* of Mx = d such that x*₁ ≥ 1. Hence, checking connectedness of (8) is co-NP-hard. By [11, 16] and using the substitution x₀ ≡ x₁ − 0.5, we can transform it to an equivalent interval system


  Mx ≤ d, −Mx ≤ −d, x₀ − x₁ ≤ −0.5, −x₀ + x₁ ≤ 0.5, [−1, 1]x₀ ≤ −0.5,

which has the required form. □

Considering the interval LP problem min 0ᵀx subject to Ax ≤ b, we immediately see that connectedness is also hard to check for the optimal solution set.

Corollary 1 It is co-NP-hard to check for connectedness of the optimal solution set of interval LP problems.

3.1 Sufficient Condition

In view of the above complexity result, it is convenient to have some strong but cheap sufficient conditions for connectedness. Now, consider an interval LP problem in the form min cᵀx subject to Ax = b, x ≥ 0. (The nonnegativity of the variables is essential here for deriving the condition.) From LP duality theory we have that x ∈ S(A, b, c) for a particular realization (A, b, c) of the interval values if and only if there is a dual variable y such that

  Ax = b, x ≥ 0, Aᵀy ≤ c, cᵀx ≤ bᵀy.

This motivates us to consider the interval system

  Ax = b, x ≥ 0, Aᵀy ≤ c, cᵀx ≤ bᵀy.  (9)

It does not describe the optimal solution set S exactly, due to the multiple occurrences of the interval quantities A, b and c (the standard way is to consider them as independent objects). Therefore, this system describes an outer approximation of S. Since the optimal solution set is hard to determine, this approximation is often used. We show a simple sufficient condition for the connectedness of this outer approximation.

Theorem 3 If the system

  Āᵀy₁ − A̲ᵀy₂ ≤ c̲, y₁, y₂ ≥ 0

is solvable, then the solution set to (9) is connected.


Proof By Machost [12], the best case optimal value can be expressed as

$$\underline{f} = \min \underline{c}^T x \quad \text{subject to} \quad \underline{A}x \le \overline{b},\ -\overline{A}x \le -\underline{b},\ x \ge 0.$$

Let $x^*$ be its optimal solution and let $y^{*1}, y^{*2}$ be an optimal solution of the dual problem

$$\max\ \underline{b}^T y^1 - \overline{b}^T y^2 \quad \text{subject to} \quad \overline{A}^T y^1 - \underline{A}^T y^2 \le \underline{c},\ y^1, y^2 \ge 0.$$

By the Oettli–Prager theorem [15] (cf. [19]), the solution set to the first part of (9) is the convex polyhedron described by $\underline{A}x \le \overline{b},\ \overline{A}x \ge \underline{b},\ x \ge 0$. By [19, 21], the point $y^* := y^{*1} - y^{*2}$ is a strong solution to $\mathbf{A}^T y \le \mathbf{c}$, that is, it solves every instance of the system. Thus, the point $(x^*, y^*)$ is a strong solution to

$$\underline{A}x \le \overline{b},\ \overline{A}x \ge \underline{b},\ x \ge 0,\ \mathbf{A}^T y \le \mathbf{c},\ \underline{c}^T x \le \mathbf{b}^T y, \qquad (10)$$

because for each $b \in \mathbf{b}$ we have

$$\underline{c}^T x^* = \underline{b}^T y^{*1} - \overline{b}^T y^{*2} \le b^T y^{*1} - b^T y^{*2} = b^T y^*.$$

The solution set to (9) is equal to the solution set to (10), and each instance of (10) is a convex polyhedral set containing the point $(x^*, y^*)$. Therefore the solution set to (9) is connected via the point $(x^*, y^*)$. $\square$
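Since the condition of Theorem 3 is an ordinary LP feasibility problem in $(y^1, y^2) \ge 0$, it is cheap to test. A minimal sketch follows, with illustrative interval bounds and assuming the bar placement reconstructed above; a zero objective turns the solver into a pure feasibility check.

```python
# Sketch: testing A_hi^T y1 - A_lo^T y2 <= c_lo, y1, y2 >= 0 as an LP
# feasibility problem; A_lo, A_hi, c_lo are illustrative interval bounds.
import numpy as np
from scipy.optimize import linprog

A_lo = np.array([[1.0, 0.0], [0.0, 1.0]])
A_hi = A_lo + 0.1                       # elementwise upper bound of the interval matrix
c_lo = np.array([1.0, 1.0])

A_ub = np.hstack([A_hi.T, -A_lo.T])     # stack (y1, y2) into one variable vector
res = linprog(np.zeros(4), A_ub=A_ub, b_ub=c_lo, bounds=[(0, None)] * 4)
print("sufficient condition holds:", res.status == 0)   # feasible => (9) connected
```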

4 Convexity

Similarly to the previous section, neither the solution set nor the optimal solution set need be convex. Again, it is computationally hard to check for convexity.

Theorem 4 It is co-NP-hard to check for convexity of the solution set to $\mathbf{A}x \le \mathbf{b}$ even on a class of problems with b real and nonempty solution sets.

Proof By Rohn [19], checking solvability of a system $|Ax| \le e,\ e^T|x| \ge 1$ is an NP-complete problem on the set of nonnegative positive definite rational matrices. Its solvability is equivalent to solvability of

$$|Ax| \le e|y|, \quad e^T|x| \ge |y|, \quad y \ne 0. \qquad (11)$$


Consider now the system

$$|Ax| \le e|y|, \quad e^T|x| \ge |y|, \quad |y - 1| \ge 1. \qquad (12)$$

Obviously, it is solvable since $(x, y) = (0, 0)$ solves the system. Now, if (11) is unsolvable, then (12) cannot have a solution with $y \ne 0$. Thus $y = 0$, and $|Ax| \le 0$ implies $x = 0$ in view of the nonsingularity of A. Hence (12) has only the trivial solution $(x, y) = (0, 0)$, and the solution set is therefore convex. Conversely, if $(x^*, y^*)$ solves (11), then both vectors $(0, 0)$ and $2(\frac{1}{y^*}x^*, 1)$ solve (12), but their midpoint $(\frac{1}{y^*}x^*, 1)$ is not a solution. Therefore, the solution set is not convex.

Eventually, we construct an interval system whose solution set is described by (12). First, we rewrite it to an equivalent form by using an additional variable z,

$$|Ax| \le e|y|, \quad e^T|x| \ge |y|, \quad |z| \ge 1, \quad z = y - 1.$$

In view of the Oettli–Prager theorem [15] (cf. [19]), this system describes the solution set of

$$\begin{pmatrix} A & [-e,e] & 0 \\ [-e,e]^T & 1 & 0 \\ 0 & 0 & [-1,1] \\ 0 & 1 & -1 \end{pmatrix} \begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ -1 \\ 1 \end{pmatrix},$$

which we denote by $\tilde{\mathbf{A}}\tilde{x} = \tilde{b}$. In view of [11, 16], its solution set is equal to the solution set of the interval system of linear inequalities $\tilde{\mathbf{A}}\tilde{x} \le \tilde{b},\ -\tilde{\mathbf{A}}\tilde{x} \le -\tilde{b}$, in which we consider the interval coefficients as independent quantities. $\square$

Analogously to the case of connectedness, considering the interval LP problem $\min 0^T x$ subject to $\mathbf{A}x \le \mathbf{b}$ we find that convexity is intractable for the optimal solution set, too.

Corollary 2 It is co-NP-hard to check for convexity of the optimal solution set of interval LP problems.
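The midpoint argument in the proof above can be checked numerically. A small sketch follows with an illustrative $1 \times 1$ positive definite matrix A and a hand-picked solution of (11); none of these values come from the paper.

```python
# Sketch: the two endpoints solve (12) but their midpoint does not, so the
# solution set of (12) is not convex whenever (11) is solvable.
import numpy as np

A = np.array([[0.2]])          # nonnegative positive definite, 1x1
e = np.ones(1)

def solves_12(x, y, tol=1e-12):
    x = np.atleast_1d(x)
    return (np.abs(A @ x) <= e * abs(y) + tol).all() \
        and e @ np.abs(x) >= abs(y) - tol \
        and abs(y - 1) >= 1 - tol

xs, ys = np.array([2.0]), 1.0                  # (x*, y*) solves (11)
print(solves_12(0.0, 0.0))                     # True  (one endpoint)
print(solves_12(2 * xs / ys, 2.0))             # True  (other endpoint)
print(solves_12(xs / ys, 1.0))                 # False (midpoint fails |y-1| >= 1)
```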

5 Strong Optimality

Definition 1 A vector is a strongly optimal solution of an ILP problem if it is an optimal solution for each realization.

Strongly optimal solutions are thus robust with respect to data perturbation. Of course, the existence of such solutions is rare, provided that all coefficients are subject


to perturbation. On the other hand, very often the objective function coefficients are the only uncertain values, and strongly optimal solutions may exist as long as the uncertainty is mild (i.e., the interval radii are small). Strongly optimal solutions were characterized in [9]. It was also shown there that checking this property is co-NP-hard. Herein, we prove that the complexity result remains valid even when we restrict ourselves to the above-mentioned case with real constraint coefficients.

Theorem 5 Checking whether a given solution is strongly optimal is a co-NP-hard problem, even on the class of problems $\min \mathbf{c}^T x$ subject to $Ax = b,\ x \ge 0$ with real constraints.

Proof By [5], checking the solvability of the interval system $Bx \le 0,\ \mathbf{c}^T x < 0$ is an NP-hard problem. Let $x^* := e$, $y^* := 0$ and put $b := Be$. Consider the ILP problem

$$\min \mathbf{c}^T x \quad \text{subject to} \quad Bx + y = b,\ x, y \ge 0.$$

Obviously, $(x^*, y^*)$ is a strongly feasible solution. Now, according to linear programming theory, $(x^*, y^*)$ is strongly optimal if and only if there is no realization of c such that $(x^*, y^*)$ is dominated by another solution. In other words, the interval system $Bx + y = 0,\ y \ge 0,\ \mathbf{c}^T x < 0$ is not solvable. $\square$

Theorem 6 Checking whether a given solution is strongly optimal is a co-NP-hard problem, even on the class of problems $\min \mathbf{c}^T x$ subject to $Ax \le b,\ x \ge 0$ with real constraints.

Proof By [5], checking solvability of the interval system $Ax \le 0,\ \mathbf{c}^T x < 0$ is an NP-hard problem. Let $x^* := e$ and put $b := Ae$. Then the active set is $I^* = \{1, \dots, m\}$, where m is the number of rows of A. Consider the ILP problem

$$\min \mathbf{c}^T x \quad \text{subject to} \quad Ax \le b,\ x \ge 0.$$

Obviously, $x^*$ is strongly feasible and the active constraints are all those in $Ax \le b$. According to linear programming theory, $x^*$ is strongly optimal if and only if the interval system $Ax \le 0,\ \mathbf{c}^T x < 0$ is not solvable. $\square$


6 Conclusion

We proved NP-hardness of several basic problems in interval linear programming. However, there are still some open questions regarding the computational complexity, for instance, the following:

• The complexity of testing if the duality gap is zero (cf. [14]) for every realization of the interval LP problem in the form $\min \mathbf{c}^T x$ subject to $\mathbf{A}x \le \mathbf{b},\ x \ge 0$.
• The complexity of testing if there is a realization of the interval LP problem in the form $\min \mathbf{c}^T x$ subject to $\mathbf{A}x = \mathbf{b},\ x \ge 0$ such that it is unbounded (the optimal value is $-\infty$); cf. [5].

Acknowledgements The author was supported by the Czech Science Foundation Grant P403-2211117S.

References

1. Beeck, H.: Linear programming with inexact data. Technical report TUM-ISU-7830, Technical University of Munich, Munich (1978)
2. Chaiyakan, S., Thipwiwatpotjana, P.: Bounds on mean absolute deviation portfolios under interval-valued expected future asset returns. Comput. Manage. Sci. 18(2), 195–212 (2021)
3. Gabrel, V., Murat, C., Remli, N.: Linear programming with interval right hand sides. Int. Trans. Oper. Res. 17(3), 397–408 (2010)
4. Garajová, E., Hladík, M.: On the optimal solution set in interval linear programming. Comput. Optim. Appl. 72(1), 269–292 (2019)
5. Garajová, E., Hladík, M., Rada, M.: On the properties of interval linear programs with a fixed coefficient matrix. In: Sforza, A., Sterle, C. (eds.) Optimization and Decision Science: Methodologies and Applications, Springer Proceedings in Mathematics & Statistics, vol. 217, pp. 393–401. Springer, Cham (2017)
6. Garajová, E., Hladík, M., Rada, M.: Interval linear programming under transformations: optimal solutions and optimal value range. Cent. Eur. J. Oper. Res. 27(3), 601–614 (2019)
7. Hladík, M.: Interval linear programming: a survey. In: Mann, Z.A. (ed.) Linear Programming - New Frontiers in Theory and Applications, Chap. 2, pp. 85–120. Nova Science Publishers, New York (2012)
8. Hladík, M.: On approximation of the best case optimal value in interval linear programming. Optim. Lett. 8(7), 1985–1997 (2014)
9. Hladík, M.: On strong optimality of interval linear programming. Optim. Lett. 11(7), 1459–1468 (2017)
10. Hladík, M.: Two approaches to inner estimations of the optimal solution set in interval linear programming. In: Deb, S. (ed.) Proceedings of the 2020 4th International Conference on Intelligent Systems, Metaheuristics & Swarm Intelligence, ISMSI 2020, pp. 99–104. Association for Computing Machinery, New York, USA (2020)


11. Li, W.: A note on dependency between interval linear systems. Optim. Lett. 9(4), 795–797 (2015)
12. Machost, B.: Numerische Behandlung des Simplexverfahrens mit intervallanalytischen Methoden. Technical report 30, Berichte der Gesellschaft für Mathematik und Datenverarbeitung, 54 pages, Bonn (1970). In German
13. Mostafaee, A., Hladík, M., Černý, M.: Inverse linear programming with interval coefficients. J. Comput. Appl. Math. 292, 591–608 (2016)
14. Novotná, J., Hladík, M., Masařík, T.: Duality gap in interval linear programming. J. Optim. Theory Appl. 184(2), 565–580 (2020)
15. Oettli, W., Prager, W.: Compatibility of approximate solution of linear equations with given error bounds for coefficients and right-hand sides. Numer. Math. 6, 405–409 (1964)
16. Rohn, J.: Miscellaneous results on linear interval systems. Freiburger Intervall-Berichte 85/9, Albert-Ludwigs-Universität, Freiburg (1985)
17. Rohn, J.: Complexity of some linear problems with interval data. Reliab. Comput. 3(3), 315–323 (1997)
18. Rohn, J.: Interval linear programming. In: Fiedler, M. et al. (eds.) Linear Optimization Problems with Inexact Data, Chap. 3, pp. 79–100. Springer, New York (2006)
19. Rohn, J.: Solvability of systems of interval linear equations and inequalities. In: Fiedler, M. et al. (eds.) Linear Optimization Problems with Inexact Data, Chap. 2, pp. 35–77. Springer, New York (2006)
20. Rohn, J., Kreinovich, V.: Computing exact componentwise bounds on solutions of linear systems with interval data is NP-hard. SIAM J. Matrix Anal. Appl. 16(2), 415–420 (1995)
21. Rohn, J., Kreslová, J.: Linear interval inequalities. Linear Multilinear Algebra 38(1–2), 79–82 (1994)

Dynamic Pricing in the Electricity Retail Market: A Stochastic Bi-Level Approach Patrizia Beraldi and Sara Khodaparasti

Abstract This paper investigates the dynamic pricing problem in the retail market and proposes a bi-level approach to define time-differentiated electricity rates that are announced at short notice. The pricing decisions are affected by the procurement plan that the retailer defines, also exploiting the local energy system composed of photovoltaic panels and a battery storage device. In the proposed bi-level formulation, the retailer acts as leader, deciding first about the electricity rates, whereas the consumer acts as follower, reacting to the announced tariff by changing the consumption pattern. To account for the inherent uncertainty affecting the main parameters entering the decision process, a stochastic formulation is proposed. By exploiting the structure of the problem, a single-level reformulation is provided. Preliminary numerical results, carried out on a real case study, are presented and discussed.

1 Introduction

In the last decades, the electricity industry has been experiencing ground-breaking reforms. Monopolies have been gradually replaced by competitive markets, where private entities compete with each other to gain increasing market shares. In this new landscape, electricity retailers represent important stakeholders. Operating as intermediaries between the energy wholesalers and end-consumers, they face the fundamental problem of defining competitive electricity rates so as to expand the customer base and maximize the profit. The latter is defined as the difference between the revenue deriving from the electricity selling, and, thus, related to the offered tariffs and required amounts, and the procurement costs, which, being related to the volatile


wholesale market prices, are clearly uncertain. The retailer business can only be profitable if the electricity prices charged to the consumers compensate the procurement costs. The price surcharge accounts for the risk that the retailer takes when entering into a contract with consumers, since he purchases energy in advance at a stochastic price and sells it to the consumers at a regulated price. We assume that the retailer is willing to define dynamic pricing schemes. Differently from static programs, which are contracted for long periods (e.g., one year), dynamic ones are announced at short notice and include different time-variant options. In this paper, we consider Real-Time (RT) pricing, where the electricity rates may vary over the day following the fluctuations of the market prices. Time-based rates are expected to become common practice in smart grids, since they can be seen as a tool to incentivize demand-response programs. Energy consumers can be motivated to change their habitual consumption patterns in response to economic signals. This dynamic reaction is made possible by the deployment of smart meters and ICT technologies that allow controlling the scheduling of the flexible loads. Many modern appliances are amenable to control, and they can be scheduled during lower-cost hours with the aim of reducing the electricity bills. Retailer and consumers are clearly tied by a hierarchical relation: the retailer acts as a leader, deciding first on the electricity rates to offer. Then, the consumer, as follower, reacts to the communicated prices by defining a new management strategy that, fitting the tariff, aims at minimizing the costs. The follower's reaction is fed back to the retailer and should be considered since it affects the actual profit. We model the pricing problem by the Bi-Level (BL) framework [6], where the Upper Level (UL) refers to the retailer and the Lower Level (LL) to the consumer. We assume that the retailer owns some local resources that may guarantee a partial coverage of the consumer's demand. In particular, he is equipped with an integrated system composed of Photovoltaic (PV) panels and a Battery Electric Storage (BES) device. This energy solution is nowadays very popular since it allows mitigating the inconveniences related to the intermittent and uncertain nature of solar production. The BES allows decoupling energy production from consumption, and/or it can also be used to store energy purchased during low-cost hours which is then sold in more expensive time periods. Pricing decisions are, thus, influenced not only by the market prices, but also by the management strategy that the retailer decides to implement. The pricing problem is made more challenging by the inherent uncertainty affecting the main parameters involved in the decision process. Electricity prices are not known when tariffs are communicated, and weather-related variables affect the procurement plan. To deal with this more involved problem we propose a stochastic BL approach based on the Two-Stage Stochastic Programming paradigm. This choice is suggested by the structure of the main electricity markets, such as the Italian one taken as a reference. At the UL, the retailer's first-stage decisions refer to the amount of electricity to purchase from the Day-Ahead Electricity Market (DAEM) and to the offered electricity rates. Such decisions are taken in the face of uncertainty and should be optimal regardless of the realization of the uncertain parameters. Corrective actions are then devised to accommodate possible imbalances by the electricity exchange


with the Balancing Market (BM) and the management of the BES. At the LL, the consumer reacts to the offered tariffs by optimizing the scheduling of the flexible loads with the aim of reducing the electricity bill.

1.1 Literature Review and Contribution

During the last decade, increasing attention has been devoted to the development of pricing strategies for retailers operating in the electricity market, and this topic has become even more relevant due to the steady increase of electricity prices caused by geopolitical tensions. BL represents the preferred framework used to model the retailer-consumer interaction, and more and more challenging formulations have been proposed to account for several different restrictions. Among others, we cite the recent paper [7], where the authors present and test different pricing structures and conclude that the RT scheme can yield the highest additional profit for the retailer. In [9] the authors present a deterministic BL model for the definition of Time-of-Use (ToU) tariffs, where the LL problem refers to a consumer with flexible loads. The proposed formulation is further extended and fully tested in [10]. In [3] the authors analyze the interaction between a leader equipped with local energy resources and a prosumager who reacts to the pricing signal by scheduling the flexible loads. In [1], the authors address a day-ahead pricing and load balancing problem within an environment involving self-scheduled users whose utilities are optimized via a smart grid. All the above contributions share the common assumption that all the input data are known. The limits of such an assumption are evident, undermining the validity of the recommendations provided by the problem solutions when implemented in a real setting. If the weather conditions and/or the electricity market prices deviate from the forecasts, the retailer may incur substantial financial losses. The analysis of the scientific literature reveals that the number of contributions addressing the more involved stochastic problem is very low when compared to the deterministic case. One of the first contributions is due to Zugno et al. [11], who proposed a stochastic three-stage BL formulation. The dynamic tariffs charged to the end-users are scenario-dependent and the retailer's expected profit is maximized. The LL is represented by a consumer with flexible loads related to heating requests. A stochastic BL model for ToU price setting considering a medium-term planning horizon has been proposed in [8]. The retailer is assumed to procure energy by forward contracts and/or from the DAEM, whereas the consumer is assumed to be equipped with a storage device. Uncertainty affects the electricity market prices and the consumer's demand, and the preference of the retailer is reflected by employing a fractile criterion to maximize. In [4] the authors propose a stochastic BL model accounting for uncertainty related to varying market prices, weather-related variables and the consumer's load. The model also includes in the UL problem a safety measure to account for the retailer's risk aversion.


Our contribution adds to the literature on stochastic BL models and extends the last above-referenced paper by considering a retailer owning local resources whose optimal management impacts the offered tariffs. More importantly, the consumer is equipped with smart devices that should be optimally re-scheduled in response to the electricity tariffs with the aim of minimizing the total electricity bill. This latter aspect further complicates the LL problem, since scheduling decisions require the introduction of binary decision variables, thus preventing the application of traditional approaches based on the derivation of a single-level reformulation. However, as in [1], we show that by exploiting the specific problem structure a single-level reformulation can still be derived. The rest of the paper is organized as follows. Section 2 introduces the problem and presents the BL formulation. Computational experiments, carried out on a real case study, are presented and discussed in Sect. 3. Finally, some conclusions are drawn in Sect. 4.

2 Problem Definition and Model Formulation

We consider the problem faced by a retailer who wants to define RT electricity tariffs to offer to potential affiliated end-users. For each time period t of a given time horizon T, a different rate can be considered, thus encompassing the ToU price structure as a special case. We focus on a daily time horizon and we consider a dynamic pricing scheme, in that the rates may change from day to day to reflect the variable market prices. The tariffs are communicated the day before the delivery and are valid over the whole next day. From a mathematical standpoint, the dynamic approach entails the solution of different instances of the same problem using, for each day, updated input data. It is easy to recognize that electricity pricing is a decision-making problem under uncertainty. Indeed, the retailer's procurement plan, defined in terms of the amount of electricity purchased from the market, depends on weather-related variables that affect the local production from the PV panels. In addition, the reaction of the follower to the announced prices should also be taken into account. Following the stochastic programming framework, we model uncertain parameters as random variables defined on a given probability space that we assume to be discrete. We denote by S the scenario set and we use the index s to indicate the s-th realization of the uncertain parameters, occurring with probability $\pi_s$. In the UL problem, at the first stage, the retailer takes scenario-invariant decisions referring to the rates and the amount of electricity to purchase from the DAEM. Both decisions should be taken "here and now", before knowing which realization of the uncertain parameters will occur. In particular, for every time period t, we denote by $r_t$ the offered rate. To account for the retail market competitiveness, bounding conditions are imposed, as expressed by the following constraints:

$$r_t \le u_t \quad \forall t \in T \qquad (1)$$
$$\frac{1}{|T|}\sum_{t \in T} r_t \le \bar{r} \qquad (2)$$

where the upper bounds $u_t$ and the daily average value $\bar{r}$ are assumed to be agreed in advance by contract. The procurement plan is defined in terms of the amount of electricity purchased from the DAEM and the BM. These amounts are in turn influenced by the uncertain PV production, which is partially mitigated by the proper management of the BES device. For each time period t, we denote by $x_t$ the first-stage decision referring to the amount of electricity purchased from the DAEM, whereas we introduce the variables $y_t^{s+}$ and $y_t^{s-}$ to account for the imbalance at time period t under scenario s (here "+" and "−" denote purchase and sale, respectively). We further introduce, as scenario-dependent, the variables related to the management of the storage device. In particular, we denote by $soc_t^s$, $in_t^s$ and $out_t^s$ the state of charge and the power charged in and discharged from the BES at time t under scenario s. The management of the BES requires the introduction of the following scenario-based constraints:

$$soc_t^s = soc_{t-1}^s + \eta_c\, in_t^s - \frac{1}{\eta_d}\, out_t^s \quad \forall t \in T\ \forall s \in S \qquad (3)$$
$$in_t^s \le \phi_c C \quad \forall t \in T\ \forall s \in S \qquad (4)$$
$$out_t^s \le \phi_d C \quad \forall t \in T\ \forall s \in S \qquad (5)$$
$$\tau_m C \le soc_t^s \le \tau_M C \quad \forall t \in T\ \forall s \in S \qquad (6)$$

Constraints (3) represent the balance condition used to relate the state of charge over two consecutive periods to the amount of energy charged to and discharged from the device. The parameters $\eta_c$ and $\eta_d$ account for energy losses. We underline that for t = 1 the state of charge at the previous period is equal to a known value $soc_0$, and by continuity we impose that the same value should be in the BES at the end of the planning horizon ($soc_{|T|}^s = soc_0$). Constraints (4) and (5) bound the power that can be charged in and discharged from the BES, expressed as a function of the nominal capacity C. Similarly, constraints (6) limit the state of charge of the BES for each period t and under each scenario s. The procurement plan should guarantee the coverage of the consumer's demand, which is under the follower's control. We denote by $\xi_t^s$ the production from the PV system during time period t and under scenario s. The balance constraints can be expressed as follows:

$$x_t + y_t^{s+} + \xi_t^s + out_t^s - in_t^s - y_t^{s-} = d_t^s \quad \forall t \in T\ \forall s \in S \qquad (7)$$
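To make constraints (3)–(6) concrete, the following is a minimal sketch that verifies whether a candidate charge/discharge schedule is feasible for one scenario; the function name and all parameter values are illustrative placeholders, not the paper's data.

```python
# Sketch: feasibility check of a BES schedule against (3)-(6) plus the
# end-of-horizon continuity condition soc_|T| = soc_0.
import numpy as np

def bes_feasible(soc0, ch, dis, C, eta_c=0.98, eta_d=0.98,
                 tau_m=0.05, tau_M=0.99, phi_c=0.95, phi_d=0.95, tol=1e-9):
    soc = soc0
    for t in range(len(ch)):
        soc = soc + eta_c * ch[t] - dis[t] / eta_d           # balance (3)
        if not (tau_m * C - tol <= soc <= tau_M * C + tol):  # bounds (6)
            return False
        if ch[t] > phi_c * C + tol or dis[t] > phi_d * C + tol:  # (4)-(5)
            return False
    return abs(soc - soc0) <= tol                            # continuity

ch = np.array([50.0, 0.0, 0.0])
dis = np.array([0.0, 0.0, 0.98 * 0.98 * 50.0])   # returns soc exactly to soc0
print(bes_feasible(soc0=150.0, ch=ch, dis=dis, C=300.0))     # True
```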

The aim of the retailer is to maximize the expected profit, defined as the difference between the revenue deriving from the electricity selling and the procurement costs:

$$\max z_R = \sum_{s \in S} \pi_s \Big[ \sum_{t \in T} \big( r_t d_t^s + BMP_t^{s-} y_t^{s-} \big) - \sum_{t \in T} \big( DAP_t^s x_t + BMP_t^{s+} y_t^{s+} \big) \Big] - \sum_{s \in S} \pi_s \sum_{t \in T} c_s \big( in_t^s + out_t^s \big) \qquad (8)$$

where, for each time period t and scenario s, $DAP_t^s$, $BMP_t^{s+}$ and $BMP_t^{s-}$ represent the DAEM price and the BM purchase and selling prices, respectively. The last term in the formula accounts for the storage management operations and prevents the simultaneous charging and discharging of the storage device, where $c_s$ represents the unitary cost of such operations.

2.1 The LL Problem

At the lower level, the consumer reacts to the offered rates by properly scheduling the flexible loads. We assume here that, in addition to the base load, which is assumed to be uncertain (denoted by $b_t^s$ in the model), the consumer owns smart appliances that can be controlled to some extent. Such loads are labelled as flexible and are further classified into shiftable and interruptible. We denote by $\mathcal{K}$ the set of appliances in the first group (indexed by k) and by $\mathcal{J}$ those belonging to the second group (indexed by j). Shiftable appliances, as for example dishwashers and washing machines, have an operation cycle that, once initiated, cannot be interrupted, whereas for the interruptible devices a temporary interruption is allowed, provided that a given amount of energy is supplied during a specified time slot; electric vehicles can be considered as an example of this type of load. For each flexible load, a comfort time window $[l_\cdot, u_\cdot]$ reflecting the consumer's preferences is specified. For the shiftable loads, additional data refer to the duration, expressed in terms of the number of operating time periods $n_k$, and to the required energy $e_k$. As for the interruptible devices, the energy requirement $e_j$ and the power limit $E_j$ are also specified. The scheduling of the shiftable appliances makes the problem more challenging since it requires the introduction of binary decision variables. In particular, for each $k \in \mathcal{K}$, we introduce the variables $\delta_{kt}$ taking the value 1 if appliance k starts operating at time t and 0 otherwise. Outside the time window, the decision variables are set to 0. Trivially, each appliance k can start operating just once within the user-defined time window, reduced by the duration $n_k$, as expressed by the constraint:

$$\sum_{t=l_k}^{u_k - n_k + 1} \delta_{kt} = 1 \quad \forall k \in \mathcal{K} \qquad (9)$$

We assume that once started, the shiftable device keeps operating for the required time. Thus, considering the offered rates $r_t$, the shiftable appliances contribute to the follower's cost as follows:

$$C(\mathcal{K}) = \sum_{k \in \mathcal{K}} \sum_{t=l_k}^{u_k - n_k + 1} e_k \Big( \sum_{h=t}^{t + n_k - 1} r_h \Big)\, \delta_{kt} \qquad (10)$$
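As an illustration of (10) for a single appliance, its cost contribution can be enumerated over the feasible start times; a small sketch with illustrative rates follows (the consumer effectively picks the cheapest start).

```python
# Sketch: cost of one shiftable appliance for each feasible start time t,
# i.e. e * sum of the rates over the n consecutive operating periods.
import numpy as np

def shiftable_costs(r, l, u, n, e):
    # 1-based periods mapped to 0-based indices; starts t = l .. u - n + 1
    return {t: e * r[t - 1 : t - 1 + n].sum() for t in range(l, u - n + 2)}

r = np.array([0.30, 0.25, 0.18, 0.20, 0.35, 0.40])   # illustrative rates
print(shiftable_costs(r, l=2, u=5, n=2, e=1.8))
# cheapest start here is t = 3 (periods 3-4, rates 0.18 + 0.20)
```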


As for the interruptible loads, we introduce the decision variables $g_{jt}$ to denote the power level of device j at time t. The following constraints model the functioning of the interruptible devices:

$$\sum_{t=l_j}^{u_j} g_{jt} = e_j \quad \forall j \in \mathcal{J} \qquad (11)$$
$$g_{jt} \le E_j \quad \forall j \in \mathcal{J}\ \forall t \in [l_j, u_j] \qquad (12)$$

The cost related to these loads can be defined as:

$$C(\mathcal{J}) = \sum_{j \in \mathcal{J}} \sum_{t=l_j}^{u_j} r_t\, g_{jt} \qquad (13)$$

To summarize, the overall objective function for the consumer can be expressed as:

$$\min z_F = \sum_{s \in S} \pi_s \sum_{t \in T} b_t^s\, r_t + C(\mathcal{K}) + C(\mathcal{J}) \qquad (14)$$

3 Problem Solution and Computational Experiments

The proposed model belongs to the class of BL problems involving, at the LL, binary variables in addition to the continuous ones. Moreover, the presence of bilinear terms in the objective function further increases the complexity of the resulting model. Even in the deterministic setting, problems in this class are considered among the most challenging, a fortiori in the stochastic case, where some variables and constraints are replicated across scenarios. In this section, we briefly introduce the main idea used to solve the problem and then we present and discuss the results obtained on a real case study. The problem has been implemented in GAMS 24.4.6 with CPLEX as solver. All the experiments have been performed on an Intel® Core i7 2.6 GHz machine with 16 GB of RAM.

3.1 The Single-Level Reformulation

The presence of binary decision variables in the LL problem prevents solving the model by applying the traditional approach based on the derivation of a single-level reformulation. However, looking at the problem's structure, we may notice that it enjoys the property of total unimodularity. Following the same approach described in [1], we may derive a single-level reformulation by simply relaxing the binary condition on the $\delta_{kt}$ variables in the LL problem and moving it into the UL. In


this way, we may deal with the LL problem applying duality theory. In particular, we substitute the LL problem with the following set of constraints:

$$\alpha_k - \beta_{kt} \le e_k \sum_{h=t}^{t + n_k - 1} r_h \quad \forall k \in \mathcal{K}\ \forall t \in [l_k, u_k - n_k + 1] \qquad (15)$$
$$\gamma_j - \sigma_{jt} \le r_t \quad \forall j \in \mathcal{J}\ \forall t \in [l_j, u_j] \qquad (16)$$
$$\sum_{k \in \mathcal{K}} \sum_{t=l_k}^{u_k - n_k + 1} e_k \Big( \sum_{h=t}^{t + n_k - 1} r_h \Big) \delta_{kt} + \sum_{j \in \mathcal{J}} \sum_{t=l_j}^{u_j} r_t\, g_{jt} = \sum_{k \in \mathcal{K}} \alpha_k - \sum_{k \in \mathcal{K}} \sum_{t=l_k}^{u_k - n_k + 1} \beta_{kt} + \sum_{j \in \mathcal{J}} e_j \gamma_j - \sum_{j \in \mathcal{J}} \sum_{t=l_j}^{u_j} E_j \sigma_{jt} \qquad (17)$$

where $\alpha_k$, $\gamma_j$, $\sigma_{jt}$ are the dual variables corresponding to the sets of constraints (9), (11), (12), respectively, whereas $\beta_{kt}$ are the dual variables associated with the constraints $\delta_{kt} \le 1$. Here, constraints (15)–(16) correspond to the dual feasibility conditions, whereas (17) imposes the equality between the primal and dual objective functions. We may observe in (17) the presence of bilinear terms related to the product of a binary and a continuous variable ($r_t \delta_{kt}$) and of two continuous variables ($r_t g_{jt}$). While we deal with the former product by applying the classical big-M approach, for the latter we consider a dual reformulation [12].
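For completeness, the classical big-M linearization referred to here (a textbook device; the paper does not spell it out) replaces each product with a new variable $w_{kt} = r_t \delta_{kt}$ and, since (1) already provides the bound $0 \le r_t \le u_t$, imposes

$$w_{kt} \le u_t\, \delta_{kt}, \qquad w_{kt} \le r_t, \qquad w_{kt} \ge r_t - u_t (1 - \delta_{kt}), \qquad w_{kt} \ge 0.$$

When $\delta_{kt} = 1$ these constraints force $w_{kt} = r_t$, and when $\delta_{kt} = 0$ they force $w_{kt} = 0$.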

3.2 The Case Study

In the considered case study, the leader is a retailer operating in the Italian electricity market, equipped with a system composed of 300 PV panels and a BES device with a nominal capacity of 300 kWh. The values of $\eta_c$, $\eta_d$, $\tau_m$, $\tau_M$, $\phi_c$, $\phi_d$ used in the BES management constraints have been set to 0.98, 0.98, 0.05, 0.99, 0.95, 0.95, respectively, whereas the cost related to charging and discharging operations has been set to 0.01. The electricity rates are assumed to be announced the day before their application, and the retailer is called to solve each day a different instance of the proposed BL problem, instantiated with the most recent data. In particular, scenarios referring to PV production and market prices are generated each time by using updated weather forecasts and cleared market electricity prices (see [2, 5] for details). As for the base load, we have used the average base demand with a random variation. In the experiments we have generated scenarios for the DAEM prices $DAP_t^s$, and we have assumed that those for the BM are 20% higher for the purchasing prices $BMP_t^{s+}$ and 20% lower for the selling prices $BMP_t^{s-}$, respectively. As for the follower, it is representative of a group of 50 residential consumers, living in a smart building, who behave in a similar way. In addition to the base load (e.g., lighting, refrigerator)


that cannot be controlled and that amounts on average to 12 kWh per day, we have considered 5 flexible appliances. In particular, the first 4 appliances are labeled as shiftable, i.e., Laundry Machine, Clothes Dryer, Dishwasher and Vacuum Cleaner, while the last one is interruptible, i.e., Electric Vehicle. For each flexible load, a comfort time window is specified by the end-user based on his specific needs, which may change from day to day. For the considered devices, we used the time windows [9, 14], [14, 19], [20, 24], [10, 17], [1, 8], whereas the numbers of required time periods and the energy amounts are equal to 2, 2, 2, 1 and 1, 1.8, 1.6, 1, respectively.

3.3 Results and Discussion

The results discussed from now on refer to a typical working day in winter and have been collected by considering a set of 1000 scenarios. The solution time in this case is quite low, in the order of a few minutes. Figure 1 reports, for the considered day, the tariffs offered by the retailer and the consumption pattern of the follower under a given scenario. As we may observe, within the fixed time windows, the consumer schedules the flexible loads during lower-priced time periods so as to minimize the daily electricity bill. We recall that the pricing decisions are influenced by the retailer's procurement plan. Figure 2 shows how, under a given scenario, the consumer's demand is satisfied. We may notice that, during some time periods, as for example at 4 am and 2 pm, the retailer purchases from the DAEM an amount in excess of the demand.

Fig. 1 Offered rates and consumer’s consumption pattern


Fig. 2 Retailer’s procurement plan under a given scenario

During such periods, the market prices are lower and the retailer finds it convenient to charge the BES. The stored amounts are used to satisfy the demand in successive time periods, when the market prices are higher. Overall, the results confirm the economic advantage that the PV-BES solution can provide.

4 Conclusions

In this paper we have investigated the dynamic pricing problem in the retail market by proposing a BL approach that may support the decision maker in defining RT tariffs announced at short notice. The inherent uncertainty affecting the main parameters and influencing the procurement strategy is dealt with by formulating the UL problem as a stochastic problem with recourse. The consumer reacts to the pricing signal by defining new scheduling strategies for the flexible loads with the aim of minimizing the electricity bill. The BL problem is reformulated as a single-level model by exploiting the special structure of the LL problem. Preliminary results carried out on a real case study are presented and discussed.


References

1. Afşar, S., Brotcorne, L., Marcotte, P., Savard, G.: Revenue optimization in energy networks involving self-scheduled demand and a smart grid. Comput. Oper. Res. 134, 105366 (2021)
2. Algieri, A., Beraldi, P., Pagnotta, G., Spadafora, I.: The optimal design, synthesis and operation of polygeneration energy systems: balancing life cycle environmental and economic priorities. Energy Convers. Manag. 243, 114354 (2021)
3. Beraldi, P., Khodaparasti, S.: A bi-level model for the design of dynamic electricity tariffs with demand-side flexibility. Soft Comput. 1–18 (2022)
4. Beraldi, P., Khodaparasti, S.: Designing electricity tariffs in the retail market: a stochastic bi-level approach. Int. J. Prod. Econ. 257, 108759 (2023)
5. Beraldi, P., Violi, A., Scordino, N., Sorrentino, N.: Short-term electricity procurement: a rolling horizon stochastic programming approach. Appl. Math. Model. 35(8), 3980–3990 (2011)
6. Colson, B., Marcotte, P., Savard, G.: An overview of bilevel optimization. Ann. Oper. Res. 153(1), 235–256 (2007)
7. Grimm, V., Orlinskaya, G., Schewe, L., Schmidt, M., Zöttl, G.: Optimal design of retailer-prosumer electricity tariffs using bilevel optimization. Omega 102, 102327 (2021)
8. Sekizaki, S., Nishizaki, I.: Decision making of electricity retailer with multiple channels of purchase based on fractile criterion with rational responses of consumers. Int. J. Electr. Power Energy Syst. 105, 877–893 (2019)
9. Soares, I., Alves, M.J., Antunes, C.H.: Designing time-of-use tariffs in electricity retail markets using a bi-level model - estimating bounds when the lower level problem cannot be exactly solved. Omega 93, 102027 (2020)
10. Soares, I., Alves, M.J., Antunes, C.H.: A deterministic bounding procedure for the global optimization of a bi-level mixed-integer problem. Eur. J. Oper. Res. 291(1), 52–66 (2021)
11. Zugno, M., Morales, J.M., Pinson, P., Madsen, H.: A bilevel model for electricity retailers' participation in a demand response market environment. Energy Econ. 36, 182–197 (2013)
12. Costa, A., Ng, T.S., Foo, L.X.: Complete mixed integer linear programming formulations for modularity density based clustering. Discrete Optim. 25, 141–158 (2017)

Risk-averse Approaches for a Two-Stage Assembly-to-Order Problem Edoardo Fadda, Daniele Giovanni Gioia, and Paolo Brandimarte

Abstract Assembly-to-order is a production strategy where components are manufactured under demand uncertainty and end items are assembled only after demand is realized. Risk-neutral approaches aim to maximize the expected profit. However, they may fail if heavy-tailed or multi-modal distributions are likely to generate significant disruptions, or if the shrinking life of products is considered. Conversely, risk-averse models may tackle these problems. In this paper, we deal with an assembly-to-order problem, modeled as a two-stage stochastic linear programming problem, considering the introduction of a classical risk measure from finance: the conditional value-at-risk. We examine the characteristics and the performance of the model by means of a large number of out-of-sample scenarios.

1 Introduction and Paper Positioning

Demand uncertainty is one of the main difficulties in production planning problems. However, a wide array of buffering tools have been devised to ease the difficulty of demand forecasting. One example is delayed product differentiation, which aims at exploiting risk pooling by postponing product differentiation as late as possible along the supply chain (a well-known success story for this approach is the HP DeskJet case). This strategy is exploited in Assembly-to-Order (ATO) manufacturing environments, and it can be naturally cast as a two-stage stochastic linear program with recourse [3], possibly generalized to multiple stages. The standard models usually maximize expected profits. Nevertheless, it may not be enough to


get a robust solution, as there could be high variability in profits across the different scenarios. This is a risk-neutral approach, whereas most decision-makers are risk-averse. In principle, risk aversion could be modeled by a concave utility function, but eliciting it is not very practical. So, one possibility is to modify the problem formulation in order to make it somewhat more robust. In this paper, the choice that we explore is the optimization of the Conditional Value at Risk (CVaR). CVaR is defined as the conditional expected loss under the condition that it exceeds the Value-at-Risk (i.e., a given quantile) and has several desirable properties [1]. Moreover, it has been shown that the solution with (in-sample) optimal CVaR at a given level can be found by solving a linear programming model [5]. While in finance there is a huge number of applications considering this risk measure, concerning the ATO problem the authors are not aware of any risk-averse application [2]. Thus, the aim of this paper is to fill this gap and start exploring the ATO problem by considering risk measures. Specifically, since considering the CVaR instead of the expected value increases the computational burden, we investigate the problem in terms of the number of scenarios needed to achieve stability. Furthermore, we test the solutions in order to assess their performance by means of several out-of-sample scenarios. The paper is organized as follows. In Sect. 2 we describe the mathematical models for the ATO problem. In Sect. 3 we present the instance generation and in Sect. 4 we show the results of the computational experiments. Finally, in Sect. 5 we draw the conclusions of our work and suggest possible future lines of research.

2 Mathematical Models

In this section, we present the mathematical models that we consider in the following. Let $I = \{1, \dots, I\}$ be the set of components, $J = \{1, \dots, J\}$ the set of end items, $M = \{1, \dots, M\}$ the set of production resources (e.g., machines) and $S = \{1, \dots, S\}$ the set of scenarios used to discretize the demand of the end items. The probability of each scenario is $\pi_s$. Moreover, let us define the following parameters:

• $C_i$: cost of component $i \in I$; $P_j$: price of end item $j \in J$.
• $L_m$: production availability of machine $m \in M$.
• $T_{im}$: time required to produce $i \in I$ on $m \in M$.
• $G_{ij}$: amount of component $i \in I$ required for assembling end item $j \in J$, commonly known as gozinto factors.
• $d_j^s$: demand for end item $j \in J$ in scenario $s \in S$, sampled from a distribution $D_j$.

The decision variables of the model are $x_i$ and $y_j^s$, the amount of component $i \in I$ produced and the amount of end item $j \in J$ assembled, respectively. The resulting sample average approximation model formulation is:

Risk-averse Approaches for a Two-Stage Assembly-to-Order Problem

max

x∈R I ,y∈R J



 i∈I



s.t.

Ci xi +

⎛ πs ⎝

s∈S



⎞ P j y sj ⎠

d sj

G i j y sj

(1)

j∈J

Tim xi ≤ L m

i∈I y sj ≤





149

≤ xi

∀m ∈ M

(2)

∀ j ∈ J ∀s ∈ S

(3)

∀i ∈ I ∀s ∈ S

(4)

∀i ∈ I ∀ j ∈ J ∀s ∈ S

(5)

j∈J

y sj , xi ≥ 0

The objective function of the problem is the expected net profit, expressed in (1) as the expected revenue at the second stage minus the cost at the first stage. Constraints (2) limit the production resources; constraints (3) state that it is not possible to sell more than the demand, and constraints (4) preclude assembling items for which we lack the necessary components, thereby linking the two decision stages. Constraints (5) are non-negativity conditions since, as in [3], we experiment with a continuous linear program and not an integer one. We do so to ease the computational burden, without affecting the conclusions significantly, since we consider settings characterized by high numbers of produced components and high end-item demands. In the following, we call model (1)–(5) the recourse problem (RP). Model (1)–(5) considers a risk-neutral decision-maker who aims at maximizing the expected profit but does not care about the negative tails of the profit in some scenarios. Instead, these are taken into account if risk measures are considered. In the following, we consider the CVaR, which we optimize by minimizing an appropriate auxiliary function [5]. Specifically, it can be proved that the α percent CVaR of the negative profit can be expressed as:

$$CVaR_\alpha \Big[ \sum_{i \in I} C_i x_i - \sum_{j \in J} P_j y_j^s \Big] = \min_{\zeta}\ \zeta + \frac{1}{1-\alpha} \sum_{s=1}^{S} \pi_s \Big[ \sum_{i \in I} C_i x_i - \sum_{j \in J} P_j y_j^s - \zeta \Big]^+ \qquad (6)$$

where $[\cdot]^+ = \max[\cdot, 0]$. The usage of the CVaR in the model leads to two alternative formulations: minimizing the CVaR while requiring a minimum expected profit, or maximizing the expected profit while bounding the CVaR. We do not focus on the second formulation, since it can be proved to be equivalent to the first one in terms of the efficient frontier. Specifically, both models can be obtained by applying the $\varepsilon$-constraint method to the multi-objective model considering the maximization of the expected profit and the minimization of the CVaR [4]. Thus, the considered model is:

$$\min_{x \in \mathbb{R}^I,\, y \in \mathbb{R}^J} \ \zeta + \frac{1}{1-\alpha} \sum_{s=1}^{S} \pi_s z_s \qquad (7)$$
$$\text{s.t.} \quad -\sum_{i \in I} C_i x_i + \sum_{s \in S} \pi_s \Big( \sum_{j \in J} P_j y_j^s \Big) \ge \varepsilon \qquad (8)$$
$$z_s \ge 0 \quad \forall s \in S \qquad (9)$$
$$z_s \ge \sum_{i \in I} C_i x_i - \sum_{j \in J} P_j y_j^s - \zeta \quad \forall s \in S \qquad (10)$$
$$(2), (3), (4), (5)$$
$$\zeta \in \mathbb{R}$$

where Eq. (7) minimizes the CVaR and constraint (8) enforces a minimum expected profit denoted by $\varepsilon$. Furthermore, constraints (9) and (10) are employed to linearize the CVaR expression in Eq. (6) by means of the $z_s$ variables.
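A self-contained sketch of (7)–(10) in Python/PuLP follows; the instance size, the random data and the value of $\varepsilon$ are illustrative placeholders, not the paper's generator or setting ($\varepsilon = 0$ just keeps the toy instance feasible).

```python
# Sketch: the CVaR-minimizing ATO model (7)-(10) with small random data.
import pulp
import numpy as np

rng = np.random.default_rng(0)
nI, nJ, nM, nS = 6, 4, 2, 50
alpha, eps = 0.95, 0.0
C = rng.uniform(1, 3, nI).tolist()               # component costs
P = rng.uniform(10, 20, nJ).tolist()             # end-item prices
T = rng.uniform(0.5, 1.5, (nI, nM)).tolist()     # processing times
L = [400.0] * nM                                 # machine capacities
G = rng.integers(0, 3, (nI, nJ)).tolist()        # gozinto factors
d = rng.uniform(5, 30, (nS, nJ)).tolist()        # demand scenarios
pi = 1.0 / nS                                    # equiprobable scenarios

mdl = pulp.LpProblem("ATO_CVaR", pulp.LpMinimize)
x = [pulp.LpVariable(f"x_{i}", lowBound=0) for i in range(nI)]
y = [[pulp.LpVariable(f"y_{s}_{j}", lowBound=0) for j in range(nJ)]
     for s in range(nS)]
z = [pulp.LpVariable(f"z_{s}", lowBound=0) for s in range(nS)]   # (9)
zeta = pulp.LpVariable("zeta")                   # free auxiliary variable

cost = pulp.lpSum(C[i] * x[i] for i in range(nI))
mdl += zeta + pulp.lpSum(pi * z[s] for s in range(nS)) / (1 - alpha)       # (7)
mdl += pulp.lpSum(pi * P[j] * y[s][j]
                  for s in range(nS) for j in range(nJ)) - cost >= eps     # (8)
for s in range(nS):                              # (10): z_s >= loss_s - zeta
    mdl += z[s] >= cost - pulp.lpSum(P[j] * y[s][j] for j in range(nJ)) - zeta
for m in range(nM):                              # (2)
    mdl += pulp.lpSum(T[i][m] * x[i] for i in range(nI)) <= L[m]
for s in range(nS):
    for j in range(nJ):                          # (3)
        mdl += y[s][j] <= d[s][j]
    for i in range(nI):                          # (4)
        mdl += pulp.lpSum(G[i][j] * y[s][j] for j in range(nJ)) <= x[i]

mdl.solve(pulp.PULP_CBC_CMD(msg=False))
print("in-sample CVaR:", pulp.value(mdl.objective))
```

On a finite equiprobable sample, the minimizing $\zeta$ in (6) is an $\alpha$-quantile of the loss, which is why constraints (9)–(10) linearize the $[\cdot]^+$ terms exactly.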

3 Instance Generation

In this section, we describe the generation procedure of the instances considered in the computational experiments. Due to space limitations, we focus on the gozinto matrix and on the demand generation procedure, while we refer to [3] for the generation of the other parameters. The gozinto matrix is randomly generated by setting a number of families and, for each of them, a number of common and specific components. In Fig. 1 we illustrate the two-dimensional heat-map representation of the structure of the gozinto matrix that will be used in the experiments. Each color is associated with the number of components for each end item. We consider 35 end items and 60 components. The matrix looks block-diagonal, where blocks correspond to families. We call K the number of families, $J_k$ the set of items belonging to family k, and $n_k$ the cardinality of $J_k$. The first columns of each block are the common components, while the others are the specific ones. At the bottom of the matrix, there are a few items that we call outcast items. Each one of them defines a family. The only risk factor considered in the models is the items' demand. It is generated with a process composed of two nested steps. First, we sample the demand $F_k$ for each family $k \in \{1, \dots, K\}$. We assume a bimodal normal distribution with expected values $n_k \mu_1$, $n_k \mu_2$, standard deviations $\sqrt{n_k}\sigma_1$, $\sqrt{n_k}\sigma_2$ and mixing parameter p. Namely,

$$F_k \sim \mathcal{N}\big(n_k \mu_1,\ n_k \mu_2,\ \sqrt{n_k}\sigma_1,\ \sqrt{n_k}\sigma_2,\ p\big), \quad \forall k \in \{1, \dots, K\}. \qquad (11)$$


Fig. 1 Structure of gozinto matrix

We assume a multimodal demand distribution inspired by several real-world situations. Consider, for example, the demand faced by a company with a few large customers placing irregular bulk orders, or the demand for new products, which can become either shelf warmers or top sellers (e.g., fashion clothes). In the second step, the overall demand of each family is divided into the demands of the single items. In the case of families composed of one item (as for the outcast items), the demand of the unique end item is equal to the demand of the family. Instead, if more items are present, the total demand is split and $d_j$ is defined as:

$$d_j \sim w_{j,k} F_k, \quad \forall j \in J_k,\ \forall k \in \{1, \dots, K\}. \qquad (12)$$

The weights $w_{j,k}$ are randomly sampled from a Dirichlet distribution with a different parameter $\zeta_k$ for each family k, i.e.,

$$w_{1,k}, \dots, w_{n_k,k} \sim \mathrm{Dirch}(\zeta_k), \quad \forall k \in \{1, \dots, K\}. \qquad (13)$$

This choice ensures that $\sum_{j \in J_k} w_{j,k} = 1$ and $w_{j,k} > 0$ for all $j \in J_k$, $k \in \{1, \dots, K\}$. A graphical representation of the demand generation is illustrated in Fig. 2.
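A minimal sketch of this two-step sampler in Python follows; the function names and all parameter values are illustrative, not taken from the paper (a clip at zero is added here to keep demands nonnegative, an assumption the paper does not state).

```python
# Sketch: family demand from a bimodal normal mixture (11), then split
# across items via Dirichlet weights (12)-(13).
import numpy as np

rng = np.random.default_rng(42)

def sample_family_demand(n_k, mu1, mu2, sd1, sd2, p):
    # mixture of two normals; means scale with n_k, std devs with sqrt(n_k)
    if rng.random() < p:
        return rng.normal(n_k * mu1, np.sqrt(n_k) * sd1)
    return rng.normal(n_k * mu2, np.sqrt(n_k) * sd2)

def sample_item_demands(n_k, zeta_k, **mix):
    F_k = max(sample_family_demand(n_k, **mix), 0.0)   # assumed clip at 0
    w = rng.dirichlet(np.full(n_k, zeta_k))            # sums to 1, all > 0
    return w * F_k                                     # per-item demands d_j

print(sample_item_demands(4, zeta_k=2.0, mu1=20, mu2=60, sd1=4, sd2=8, p=0.7))
```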


Fig. 2 Demand sampling schema. The demands of the first two families are split among their items; the last demand corresponds to a degenerate (single-item) family

4 Computational Experiments

In this section, we study the properties of model (7)–(10). First we consider in-sample and out-of-sample stability, then we study the properties of its solutions. While stability has already been studied for model (1)–(5) [3], here we consider in-sample and out-of-sample stability for model (7)–(10). Due to space limitations, we cannot describe a full-fledged experimental design. Thus, for the experiments studying stability, we fix the $\varepsilon$ parameter to 80% of the in-sample expected profit obtained by solving model (1)–(5) with 2048 scenarios (this choice ensures the stability of the RP). Concerning in-sample stability, in Fig. 3 we report, for S = 16, 32, 64, 128, 256, 512, 1024, and 2048, the average objective function values over 50 runs. The upper and lower points of the error bars represent the 0.95 and 0.05 quantiles of the collected values, respectively. To increase readability, we report on the x-axis the logarithm of the number of scenarios. The size of the error bars decreases as more scenarios are considered, and it increases as α increases, since estimating the expected value of a smaller portion of the distribution requires more scenarios than estimating a larger one. For example, with S = 100 and α = 0.99 just one scenario enters the CVaR, while for α = 0.90 ten scenarios do. As the reader can notice, the average objective function value decreases from left to right. This is due to the fact that the larger S is, the more conservative the solution, since more bad scenarios are included in the instance. Furthermore, while for α = 0.90 and α = 0.95 the optimal values are always greater than zero, for α = 0.99 they sometimes take negative values, meaning that in some cases, in order to maintain the given expected profit, the minimization of the in-sample CVaR for α = 0.99 results in a loss. The CPU time used to solve these problem instances on an Intel(R) Core(TM) i7-5500U CPU @ 2.40 GHz computer with 16 GB of RAM, running Ubuntu v20.04


Fig. 3 Objective functions for different values of α

and using Gurobi v9.1.1 as solver, ranges from a few seconds for S = 32 up to around 10 minutes for S = 2048, and a few runs with S = 4096 required up to 40 minutes. Luckily, stability is achieved before that number of scenarios. Thus, the computational burden required to solve model (7)–(10) does not justify the development of ad-hoc heuristics for the considered setting. Nevertheless, we have considered small cardinalities of the sets I and J (i.e., 60 components and 35 items, respectively); hence, ad-hoc heuristics may be required for instances of larger size. Concerning out-of-sample stability, we consider the ratio:

$$\rho_S = \frac{CVaR_{out} - CVaR_{in}^S}{|CVaR_{in}^S|} \qquad (14)$$

where $CVaR_{in}^S$ is the value of the objective function of model (7)–(10) computed with S scenarios (i.e., the in-sample CVaR), and $CVaR_{out}$ is the CVaR of the optimal solution computed out-of-sample. Since $CVaR_{out}$ is computed with more scenarios (S = 10000) than $CVaR_{in}^S$, we expect it to be greater (the objective function (7) minimizes a loss). Thus, in Eq. (14) we take the absolute value only in the denominator, since in some instances the lowest possible CVaR can be negative (i.e., it can be a profit). The average results over 50 runs are shown in Table 1. The value $\rho_S$ increases as α increases, while it decreases as the number of considered scenarios increases. If we require an out-of-sample instability of less than 5%, we need to consider 512 scenarios for α = 0.90, 2048 scenarios for α = 0.95 and more than 2048 scenarios for α = 0.99. Considering both the results of Table 1 and Fig. 3, in the following we set α = 0.95 and S = 2048. We choose α = 0.95 since it is a good trade-off for the production setting, which does not require accounting for extreme


conditions as in finance. In fact, in the ATO problem common components hedge against uncertainty. Moreover, for α = 0.95, S = 2048 ensures both in-sample and out-of-sample stability.

By recalling that model (7)–(10) derives from a multi-objective problem, we can generate the in-sample efficient frontier by considering different minimum expected profits $\varepsilon$. In Fig. 4 we show the results. On the y-axis we represent the in-sample tail expected profit (i.e., the opposite of the optimal objective function), while on the x-axis we report the minimum in-sample expected profit. Unfortunately, since the instance generation procedure does not use real data, the values on the axes are purely indicative. Nevertheless, they provide us with some interesting insights. First, for values of $\varepsilon$ greater than the optimal value of the RP, model (7)–(10) becomes infeasible. Moreover, on the left-hand side of the graph, all the fronts are horizontal (defining a plateau), since the optimal solution is the same for different values of $\varepsilon$; for these values of $\varepsilon$, constraint (8) is no longer active. It is also interesting to notice that the length of the plateau increases as α decreases. In fact, for α = 0.90 the plateau ends at an expected profit of nearly 275000, while for α = 0.99 the plateau ends at an expected profit lower than 250000. Finally, the efficient front decreases for greater α, since more extreme quantiles are considered.

Table 1 Average and standard deviation (in brackets) of ρS for different values of α and S

           ρ16       ρ32       ρ64      ρ128     ρ256     ρ512     ρ1024    ρ2048
α = 0.90   98 (50)   54 (20)   32 (12)  19 (7)   10 (5)   5 (3)    3 (2)    2 (1)
α = 0.95   118 (38)  79 (24)   49 (13)  31 (11)  18 (9)   10 (6)   5 (4)    4 (3)
α = 0.99   169 (51)  107 (31)  99 (24)  82 (12)  62 (15)  44 (14)  26 (11)  18 (10)

Fig. 4 In-sample efficient frontier for different values of α


For the sake of completeness, we study the out-of-sample performance of the model. As noticed above, the instances have been randomly generated, so the optimal value of the objective function does not have any real meaning by itself. Hence, we consider the following performance indicator:

$$EVPI_\% = \frac{P_{WS} - P_{CVaR}}{P_{WS}} \qquad (15)$$

where $P_{WS}$ and $P_{CVaR}$ are the out-of-sample profits obtained by the wait-and-see problem and by model (7)–(10), respectively. Thus, $EVPI_\%$ is a measure of the expected value of perfect information in a given scenario. It is important to notice that if $EVPI_\% \ge 1$, then the profit is negative. The boxplots of $EVPI_\%$ for several values of $\varepsilon$ are shown in Fig. 5. Each boxplot is computed on 10000 out-of-sample scenarios. As the reader can notice, the minimum value is around 0.5, meaning that even in the best case, had we known the real demand, the profit would have been double. The leftmost boxplot represents the performance of the RP, and in several scenarios the profits are really low. Rather unsurprisingly, the smaller $\varepsilon$, the lower the CVaR. Nevertheless, the expected profit also gets lower, since as $\varepsilon$ decreases the solution tends to be more conservative. Comparing the solutions of the problems for different $\varepsilon$, we can notice that the total quantity of components decreases, and the common components are preferred, since they can be used to hedge against risk. While this strategy is effective in a two-stage setting (as in the fashion field), it may

Fig. 5 Boxplot representing the EVPI% on 10000 out-of-sample scenarios. Below the horizontal dashed line the profits are positive, above they are negative


be less effective in the multistage one, since the overproduction in one stage can be used in the next ones. Thus, accurate tests with multiple stages will be considered in future work. As noticed above, there exists a value of $\varepsilon$ such that the optimal solution is not influenced by constraint (8). For the considered setting (α = 0.95, S = 2000) this value is $\varepsilon = 0.60 P_{RP}$. It is worth noting that the worst-case $EVPI_\%$ of the solutions obtained for $\varepsilon \le 0.65 P_{RP}$ is lower than one; therefore, in all the 10000 out-of-sample scenarios these solutions lead to a profit. Nevertheless, the expected profit from implementing them is around 60% less than the one achieved by the RP solutions. Thus, a reasonable choice for $\varepsilon$ in practice may range between $0.9 P_{RP}$ and $0.8 P_{RP}$.

5 Conclusions and Future Work

In this paper, we have discussed a simple model considering the minimization of the CVaR for two-stage production planning in an ATO environment. Clearly, results from synthetic instances must be taken with great care, but it is clear that risk-neutral models can lead to huge losses in several scenarios. Future lines of research will follow two directions. First, we will take into account that knowledge of the exact distribution of the uncertain parameters affecting a stochastic optimization problem cannot be taken for granted; thus, we will apply ad-hoc methodologies considering distribution ambiguity [6]. Second, we will analyse the multistage problem with both risk-neutral and risk-averse models. In such a way, it will be possible to better quantify the profitability of the two approaches in a wider and more realistic set of applications.

References

1. Artzner, P., Delbaen, F., Eber, J.M., Heath, D.: Coherent measures of risk. Math. Financ. 9(3), 203–228 (1999)
2. Atan, Z., Ahmadi, T., Stegehuis, C., de Kok, T., Adan, I.: Assemble-to-order systems: a review. Eur. J. Oper. Res. 261(3), 866–879 (2017)
3. Brandimarte, P., Fadda, E., Gennaro, A.: The value of the stochastic solution in a two-stage assembly-to-order problem. In: AIRO Springer Series, pp. 105–116. Springer International Publishing (2021). https://doi.org/10.1007/978-3-030-86841-3_9
4. Krokhmal, P., Palmquist, J., Uryasev, S.: Portfolio optimization with conditional value-at-risk objective and constraints. J. Risk 4 (2003). https://doi.org/10.21314/JOR.2002.057
5. Rockafellar, R.T., Uryasev, S.: Optimization of conditional value-at-risk. J. Risk 2(3), 21–41 (2000). https://doi.org/10.21314/jor.2000.038
6. Zhu, S., Fukushima, M.: Worst-case conditional value-at-risk with application to robust portfolio management. Oper. Res. 57(5), 1155–1168 (2009)

Combinatorial Optimization

Bi-dimensional Assignment in 5G Periodic Scheduling Giulia Ansuini, Antonio Frangioni, Laura Galli, Giovanni Nardini, and Giovanni Stea

Abstract We consider a scheduling application in 5G cellular networks, where base stations serve periodic tasks by allocating conflict-free portions of the available spectrum, in order to meet their traffic demand. The problem has a combinatorial structure featuring bi-dimensional periodic allocations of resources. We consider four variants of the problem, characterized by different degrees of freedom. Two types of formulations are presented and tested on realistic data, using a general-purpose solver.

Knapsack Problems invited session.

1 Introduction

5G cellular networks provide ubiquitous wireless access on licensed spectrum, with very low latencies and high reliability, thus being a viable solution for real-time applications, such as vehicular communications. The Cellular Vehicular-to-everything (C-V2X) standard for New Radio 5G networks allows vehicles to request exclusive access to some spectrum resources, which they can use for inter-vehicle communications. Resource allocation is done centrally by the base station, which is in charge


of a (possibly large) coverage area, and needs to fulfill several such requests simultaneously. The base station allocates spectrum resources in both time and frequency. On every Transmission Time Interval (TTI), in the order of 1 millisecond or less, the base station can allocate an airframe, i.e., a vector of several tens of resource blocks (RBs). Each RB can be allocated to only one requesting entity at a time. The transmission overhead associated with serving a single request is non-negligible, and—quite often—the communication needs of a vehicle are long-term (imagine a vehicle transmitting to another the live video acquired from its camera). For this reason, the C-V2X standard allows vehicles to request periodic allocation of resources. This forces the base station to run complex algorithms to compose requests coming from different vehicles. In such 5G Periodic Scheduling, tasks have a period π, expressed in number of TTIs, and a demand w, expressed as a number of RBs. A base station serves tasks requested by vehicles by allocating w contiguous RBs in a TTI, and on every π-th subsequent TTI thereafter. Once a sub-vector is assigned to a task, it will be used again by the same task in each TTI in which the task is aired (depending on its period). In other words, the assignment of a sub-vector of RBs to a task periodically reserves the same portion of bandwidth for the same task. We recall that an RB can only be assigned to one task in each TTI—the assignment of an RB to more than one task in the same TTI is called a "conflict". The basic combinatorial problem consists of finding a conflict-free RB assignment (aka RB schedule) for a given set of "new" tasks to be served, taking also into account the resources previously assigned to a set of "old" tasks that are already in place. We consider four variants of the basic problem by combining two degrees of freedom: (i) old tasks are either fixed or movable; (ii) new tasks are either forced to be scheduled or their scheduling is optional. These variants are solved in sequence, starting from the most rigid one and loosening it up if the current model turns out to be infeasible. In particular, in the first attempt the old tasks are fixed and all the new ones are forced to be scheduled. If this has no solution, we then consider the old tasks to be movable in order to place all the new tasks. Finally, the last two variants are always feasible, because the scheduling of new tasks is optional (while the old ones are either fixed or movable). We propose two types of formulations, called "Conflict-Based" and "Matrix-Based", that are solved (for each variant) using the general-purpose MILP solver CPLEX. The formulations are tested on several instances corresponding to realistic 5G settings. Our results show that the sequential approach is generally more efficient than immediately solving the most flexible variant (i.e., Movable-Optional), and even large instances can often be solved in fairly short computing times.


2 System Model

We represent the airframe as a vector of RBs of size M. Let N = {1, …, n} be the set of new periodic tasks to be scheduled. Each task i ∈ N corresponds to an ordered pair (πi, wi), where πi is the task period and wi is the number of RBs it demands. The RBs assigned to a task must be contiguous elements of the airframe, i.e., a sub-vector of size wi. The assignment of a sub-vector to a task means reserving the same sub-vector to the task in each TTI it is aired (according to its period). Given a task set N, the hyperperiod H is defined as the least common multiple of the task periods: H = lcm_{i∈N}(πi). The hyperperiod corresponds to the minimum number of TTIs after which the schedule is repeated. The aim of Periodic Scheduling is to find an RB schedule for the task set N in the corresponding hyperperiod. An RB schedule S consists of n ordered pairs (hi, ti), one for each task i ∈ N:

• hi ∈ {1, …, M − wi + 1} is the position of the first RB in the sub-vector {hi, …, hi + wi − 1} of size wi assigned to task i;
• ti ∈ {0, …, πi − 1} is the first TTI in which task i is aired within the hyperperiod.

Note that, given a schedule S, all the (periodic) RB allocations within the hyperperiod are uniquely defined for all i ∈ N: ti, ti + πi, ti + 2πi, …, ti + (H/πi − 1)πi. Each RB can be assigned to only one task in any TTI, hence the challenge is to avoid simultaneous overlap in time and space, i.e., a conflict. Two tasks i, j ∈ N that are first aired in TTIs t, t′, respectively, will overlap in time if and only if there exist ni, nj ∈ ℕ such that t + ni πi = t′ + nj πj. This is a Diophantine equation, where ni, nj are integer unknowns. The equation has a solution if and only if t′ − t ≡ 0 (mod gcd(πi, πj)). Let h, h′ be the first RB positions in the airframe for tasks i and j, respectively. To check space (i.e., frequency) overlap one can just consider the intersection between the corresponding sub-vectors: the tasks overlap in space if and only if [h, h + wi − 1] ∩ [h′, h′ + wj − 1] ≠ ∅. Feasibility requires that any two tasks overlap in at most one dimension. Indeed, if the overlap is both in time and space, then there exists (at least) one RB that is assigned to different tasks at the same time, generating a conflict. A schedule S is feasible for a set N of tasks if all demands are satisfied without conflicts. A set of tasks N is said to be schedulable if there exists a feasible schedule S for it. In the following, we also assume that a set F of old tasks is already scheduled. We denote by A = N ∪ F the overall set of tasks (old and new). We consider different scheduling strategies: old tasks can either be fixed in their previous position, or movable to facilitate the placement of new tasks; new tasks N can either be forced to be scheduled, or their scheduling can be optional. These strategies generate four variants of our problem with different levels of "flexibility". To our knowledge, the variants with optional scheduling correspond to a new multiple-period version of the knapsack problem that has not appeared in the literature so far [1, 2]. In the problem domain, no works that we know of have addressed the problem dealt with in this paper. Some works (e.g., [3]) deal with adjusting


the offsets of tasks to minimize the delay between the activation of a periodic task instance and its scheduling in the hyperperiod. Others (e.g., [4]) try to predict which tasks should be scheduled periodically and which should not. For the sake of brevity, we only present the formulations corresponding to the two variants with movable old tasks.

3 Conflict-Based (CB) Formulations

This formulation uses binary variables to represent overlaps in time and space between pairs of tasks:

ct_{i,j} = 1 if tasks i and j overlap in time, and 0 otherwise,   i, j ∈ A;
ch_{i,j} = 1 if tasks i and j overlap in space, and 0 otherwise,   i, j ∈ A.

To identify time overlaps we define the set T(i, t, j) that contains all TTIs t′ for which task j would generate a time overlap in the hyperperiod with task i aired in TTI t. The set is defined for all task pairs i, j ∈ A, and for all possible TTIs t ∈ {0, …, πi − 1} of task i:

T(i, t, j) = { t′ ∈ {0, …, πj − 1} | t′ − t ≡ 0 (mod gcd(πi, πj)) }.

Note that we only need to identify the initial TTIs (i.e., in the first period of a task) because these uniquely define all the subsequent periodic allocation times. Similarly, the set H(i, h, j) contains all the initial RB positions h′ for which task j would generate a space overlap with task i whose first RB position is h:

H(i, h, j) = { h′ ∈ {1, …, M − wj + 1} | [h, h + wi − 1] ∩ [h′, h′ + wj − 1] ≠ ∅ }.
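For concreteness, a small sketch of how T(i, t, j) and H(i, h, j) can be enumerated directly from their definitions; the task data and airframe size below are hypothetical, and the time condition is cross-checked against a brute-force scan of the hyperperiod.

```python
# Enumerating T(i,t,j) and H(i,h,j); tasks map an id to (period pi, demand wi).
from math import gcd, lcm

M = 10
tasks = {1: (2, 3), 2: (4, 2)}   # hypothetical task set

def T_set(i, t, j):
    g = gcd(tasks[i][0], tasks[j][0])
    return [t2 for t2 in range(tasks[j][0]) if (t2 - t) % g == 0]

def H_set(i, h, j):
    wi, wj = tasks[i][1], tasks[j][1]
    return [h2 for h2 in range(1, M - wj + 2)
            if max(h, h2) <= min(h + wi - 1, h2 + wj - 1)]   # intervals intersect

# Brute-force check of the gcd condition over the hyperperiod:
def overlap_bruteforce(t, pi, t2, pj):
    H = lcm(pi, pj)
    return bool(set(range(t, H, pi)) & set(range(t2, H, pj)))

i, j = 1, 2
for t in range(tasks[i][0]):
    assert T_set(i, t, j) == [t2 for t2 in range(tasks[j][0])
                              if overlap_bruteforce(t, tasks[i][0], t2, tasks[j][0])]
print(T_set(1, 0, 2), H_set(1, 1, 2))
```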

Next, we use binary variables to represent the assignment to a task of the initial TTI and of the initial RB:

x_{i,t} = 1 if task i is first aired in TTI t, and 0 otherwise,   i ∈ A, t ∈ {0, …, πi − 1};
y_{i,h} = 1 if task i has first RB position h, and 0 otherwise,   i ∈ A, h ∈ {1, …, M − wi + 1}.

The model for the Movable-Forced (M-F) variant looks as follows:

(CB-M-F)   min Σ_{i∈F} z_i                                                            (1)

s.t.   Σ_{t=0}^{πi−1} x_{i,t} = 1                              i ∈ A                   (2)
       Σ_{h=1}^{M−wi+1} y_{i,h} = 1                            i ∈ A                   (3)
       ct_{i,j} ≥ x_{i,t} + Σ_{t′∈T(i,t,j)} x_{j,t′} − 1       i, j ∈ A, t ∈ {0, …, πi − 1}       (4)
       ch_{i,j} ≥ y_{i,h} + Σ_{h′∈H(i,h,j)} y_{j,h′} − 1       i, j ∈ A, h ∈ {1, …, M − wi + 1}   (5)
       ct_{i,j} + ch_{i,j} ≤ 1                                 i, j ∈ A                (6)
       z_i ≥ 1 − x_{i,ti}/2 − y_{i,hi}/2                       i ∈ F                   (7)
       x_{i,t} ∈ {0, 1}                                        i ∈ A, t ∈ {0, …, πi − 1}         (8)
       y_{i,h} ∈ {0, 1}                                        i ∈ A, h ∈ {1, …, M − wi + 1}     (9)
       ct_{i,j} ∈ {0, 1}                                       i, j ∈ A                (10)
       ch_{i,j} ∈ {0, 1}                                       i, j ∈ A                (11)
       z_i ∈ {0, 1}                                            i ∈ F                   (12)

This variant of the problem forces all new tasks to be scheduled, but we are allowed to move old ones. In constraints (7) we use variables x_{i,ti} and y_{i,hi} to represent the "original" position in time and space, respectively, of an old task i ∈ F. If either of the two variables is set to zero, the old task is moved (in time and/or in space). The objective (1) is to minimize the number of old tasks moved, so we define binary variables z_i, ∀i ∈ F, to keep track of the old tasks that are moved:

z_i = 1 if task i is moved, and 0 otherwise,   i ∈ F.

Note that Eqs. (2)–(3), expressing the assignment of an initial TTI and RB to the tasks, refer to both new and old tasks (i ∈ A), since old tasks can be assigned a different schedule (if they are moved). Constraints (4)–(5) keep track of time and space overlaps, while inequalities (6) forbid simultaneous overlaps in both dimensions, hence conflicts.


A more flexible variant, called Movable-Optional (M-O), is obtained by making the scheduling of new tasks optional. The model is:

(CB-M-O)   max Σ_{i∈N} Σ_{t=0}^{πi−1} x_{i,t} − (1/(|F| + 1)) Σ_{i∈F} z_i             (13)

s.t.   Σ_{t=0}^{πi−1} x_{i,t} = 1                              i ∈ F                   (14)
       Σ_{h=1}^{M−wi+1} y_{i,h} = 1                            i ∈ F                   (15)
       Σ_{t=0}^{πi−1} x_{i,t} ≤ 1                              i ∈ N                   (16)
       Σ_{h=1}^{M−wi+1} y_{i,h} ≤ 1                            i ∈ N                   (17)
       Σ_{t=0}^{πi−1} x_{i,t} − Σ_{h=1}^{M−wi+1} y_{i,h} = 0   i ∈ N                   (18)
       (4), (5), (6), (7), (8), (9), (10), (11), (12)

In this case the objective function (13) consists of two terms: one maximizes the number of new tasks that are scheduled, while the other minimizes the number of old tasks moved. The weight of the latter is < 1; thus, between the two policies, the former prevails. Note that, with respect to the previous variant, the assignment constraints are slightly different, as one needs to distinguish between old and new tasks. Equations (14)–(15) guarantee that old tasks are always scheduled, as they either keep their old schedule or they are moved. The scheduling of new tasks, instead, is optional, as expressed by inequalities (16)–(17); constraints (18) make sure that if a new task receives a position in one dimension, it also receives a position in the other dimension.
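As an illustration, here is a minimal sketch of (CB-M-F) for a toy instance, written with the open-source PuLP modeling layer rather than the CPLEX Callable Library used in the paper; all task data are hypothetical, and the overlap variables are created only for pairs i < j, since overlapping is symmetric and one direction of constraints (4)–(5) suffices to detect it.

```python
# A sketch of (CB-M-F) on hypothetical data; T_set/H_set are as in the earlier sketch.
from math import gcd
import pulp

M = 10                                       # airframe size
tasks = {1: (2, 3), 2: (4, 2), 3: (4, 4)}    # id -> (period pi, demand wi)
F = {1: (1, 0)}                              # old task 1 originally at RB 1, TTI 0
A = list(tasks)

def T_set(i, t, j):
    g = gcd(tasks[i][0], tasks[j][0])
    return [t2 for t2 in range(tasks[j][0]) if (t2 - t) % g == 0]

def H_set(i, h, j):
    wi, wj = tasks[i][1], tasks[j][1]
    return [h2 for h2 in range(1, M - wj + 2)
            if max(h, h2) <= min(h + wi - 1, h2 + wj - 1)]

mdl = pulp.LpProblem("CB_M_F", pulp.LpMinimize)
x = {(i, t): pulp.LpVariable(f"x_{i}_{t}", cat="Binary")
     for i in A for t in range(tasks[i][0])}
y = {(i, h): pulp.LpVariable(f"y_{i}_{h}", cat="Binary")
     for i in A for h in range(1, M - tasks[i][1] + 2)}
pairs = [(i, j) for i in A for j in A if i < j]
ct = {p: pulp.LpVariable(f"ct_{p[0]}_{p[1]}", cat="Binary") for p in pairs}
ch = {p: pulp.LpVariable(f"ch_{p[0]}_{p[1]}", cat="Binary") for p in pairs}
z = {i: pulp.LpVariable(f"z_{i}", cat="Binary") for i in F}

mdl += pulp.lpSum(z.values())                                               # (1)
for i in A:
    mdl += pulp.lpSum(x[i, t] for t in range(tasks[i][0])) == 1             # (2)
    mdl += pulp.lpSum(y[i, h] for h in range(1, M - tasks[i][1] + 2)) == 1  # (3)
for i, j in pairs:
    for t in range(tasks[i][0]):                                            # (4)
        mdl += ct[i, j] >= x[i, t] + pulp.lpSum(x[j, t2] for t2 in T_set(i, t, j)) - 1
    for h in range(1, M - tasks[i][1] + 2):                                 # (5)
        mdl += ch[i, j] >= y[i, h] + pulp.lpSum(y[j, h2] for h2 in H_set(i, h, j)) - 1
    mdl += ct[i, j] + ch[i, j] <= 1                                         # (6)
for i, (hi, ti) in F.items():
    mdl += z[i] >= 1 - 0.5 * x[i, ti] - 0.5 * y[i, hi]                      # (7)

mdl.solve(pulp.PULP_CBC_CMD(msg=False))
sched = {i: (next(h for h in range(1, M - tasks[i][1] + 2) if y[i, h].value() > 0.5),
             next(t for t in range(tasks[i][0]) if x[i, t].value() > 0.5)) for i in A}
print(sched)   # task -> (first RB, first TTI)
```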

4 Matrix-Based (MB) Formulations

We use three-index binary variables representing the assignment in both dimensions:

x_{i,h,t} = 1 if task i has first RB position h and first TTI t, and 0 otherwise,
for all i ∈ A, h ∈ {1, …, M − wi + 1}, t ∈ {0, …, πi − 1}.

To identify conflicts we define, for each space-time position (h, t), h ∈ {1, …, M}, t ∈ {0, …, H − 1}, a set containing all the schedules that use RB h at TTI t:

C(h, t) = { (i, h′, t′) : i ∈ A, h′ ∈ {1, …, M − wi + 1}, t′ ∈ {0, …, πi − 1} | if task i is scheduled in (h′, t′), then RB h at TTI t is also assigned to i }.

We denote, as before, by hi and ti the original space and time positions, respectively, of an old task i ∈ F. To keep track of old tasks that are moved, we define the following parameter:

α_{i,h,t} = 1 if h ≠ hi or t ≠ ti, and 0 otherwise,   i ∈ F, h ∈ {1, …, M − wi + 1}, t ∈ {0, …, πi − 1}.

The model for the Movable-Forced (M-F) variant is:

(MB-M-F)   min Σ_{i∈F} Σ_{h=1}^{M−wi+1} Σ_{t=0}^{πi−1} α_{i,h,t} x_{i,h,t}            (19)

s.t.   Σ_{h=1}^{M−wi+1} Σ_{t=0}^{πi−1} x_{i,h,t} = 1           i ∈ A                   (20)
       Σ_{(i,h′,t′)∈C(h,t)} x_{i,h′,t′} ≤ 1                    h ∈ {1, …, M}, t ∈ {0, …, H − 1}   (21)
       x_{i,h,t} ∈ {0, 1}          i ∈ A, h ∈ {1, …, M − wi + 1}, t ∈ {0, …, πi − 1}   (22)

Note that the MB formulation does not need additional variables to represent old tasks that are moved, since this condition is captured by the parameter α_{i,h,t} associated with the assignment variables in the objective function. Assignment in time and space (for all tasks) is obtained via constraints (20). Conflicts are forbidden by inequalities (21), which, for each space-time position (h, t), make sure that at most one schedule in C(h, t) (i.e., a schedule using (h, t)) is selected. Next, we give the formulation for the Movable-Optional (M-O) variant, for which, as already observed, the scheduling of new tasks is optional:

(MB-M-O)   max Σ_{i∈N} Σ_{h=1}^{M−wi+1} Σ_{t=0}^{πi−1} x_{i,h,t} − (1/(|F| + 1)) Σ_{i∈F} Σ_{h=1}^{M−wi+1} Σ_{t=0}^{πi−1} α_{i,h,t} x_{i,h,t}       (23)

s.t.   Σ_{h=1}^{M−wi+1} Σ_{t=0}^{πi−1} x_{i,h,t} = 1           i ∈ F                   (24)
       Σ_{h=1}^{M−wi+1} Σ_{t=0}^{πi−1} x_{i,h,t} ≤ 1           i ∈ N                   (25)
       (21), (22)

The objective function (23), as already observed for the CB formulation, maximizes the number of new tasks scheduled and minimizes the number of old tasks moved. Assignment constraints (24) guarantee a schedule for all the old tasks i ∈ F, while for new tasks i ∈ N the scheduling is optional (25).
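To make the conflict sets concrete, a small sketch of how C(h, t) can be enumerated (the task data are hypothetical): a schedule (i, h′, t′) uses RB h at TTI t exactly when h′ ≤ h ≤ h′ + wi − 1 and t ≡ t′ (mod πi).

```python
# Enumerating C(h, t) for constraints (21); tasks map an id to (period pi, demand wi).
from math import lcm

M = 6
tasks = {1: (2, 2), 2: (3, 3)}                 # hypothetical task set
H = lcm(*(pi for pi, _ in tasks.values()))     # hyperperiod

def C(h, t):
    out = []
    for i, (pi, wi) in tasks.items():
        for h2 in range(1, M - wi + 2):
            if h2 <= h <= h2 + wi - 1:         # sub-vector covers RB h
                for t2 in range(pi):
                    if (t - t2) % pi == 0:     # task is aired at TTI t
                        out.append((i, h2, t2))
    return out

print(C(1, 0))   # schedules that would occupy RB 1 at TTI 0
```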

5 Computational Results

The models are implemented and solved using the CPLEX Callable Library. We generated realistic instances taking into account technical aspects of the application. An instance of our problem is characterized by the following elements:

• the size M of the airframe;
• the RB demand and period of the new tasks (wi, πi), i ∈ N = {1, …, n};
• the set of fixed tasks F.

The idea underlying the construction of the four variants of the problem is to solve them in sequence. Namely, if one variant turns out to be infeasible, we move on to solve the next one:

1. **-F-F: this is a pure feasibility problem, which checks if all new tasks can be scheduled without moving the old ones;
2. **-M-F: in this variant we are free to move old tasks if this allows all new tasks to be scheduled; the objective function minimizes the number of old tasks moved;
3. **-F-O: in this variant old tasks are fixed, but the scheduling of new tasks is optional, which guarantees that a feasible solution is found;
4. **-M-O: this is the most flexible and complex "knapsack-like" variant, having both degrees of freedom (i.e., old tasks can be moved and the scheduling of new tasks is optional).

In our experiments we compare the sequential approach with the direct solution of the last variant **-M-O. The comparison is performed for both types of formulations, so "**" is either CB (Conflict-Based) or MB (Matrix-Based). We use a time limit of 3600 seconds. In the sequential approach the time limit is split among the different variants:

• 600 seconds for **-F-F;
• 1200 seconds for **-M-F;
• 1800 seconds for **-F-O.


Table 1 Solution times (seconds) of the sequential approach versus the M-O variant for the Conflict-Based formulation

         M = 10             M = 25             M = 50             M = 100
|N |   CB-seq  CB-M-O     CB-seq  CB-M-O     CB-seq  CB-M-O     CB-seq  CB-M-O
20         14      32         29      94          6      37          9      70
30        165     141         79     166        108     119        132     108
50        526     866        951    2752        709    1310       1605    1725
60       1803    2925       1450    2376        518    1307       1146    1752
90       3256    3014       3109    2839       3251    2849       3236    2841

Table 2 Solution times (seconds) of the sequential approach versus the M-O variant for the Matrix-Based formulation

         M = 10             M = 25             M = 50             M = 100
|N |   MB-seq  MB-M-O     MB-seq  MB-M-O     MB-seq  MB-M-O     MB-seq  MB-M-O
20         69     902       1229    1814          4    2242        914       –
30        101    1203        545    1850        334    2291       1336       –
50        305    1939       1753    2798       1545    2936         22       –
60        975    2444       1031    2576       1452    3314       1222       –
90       2472    2903       2401    2913       1806    3600         44       –

In Tables 1 and 2 we report, for formulations CB and MB respectively, the average solution times (in seconds) over all the task periods considered, for each value of M and |N|. These preliminary results show that the sequential approach consistently performs better than the **-M-O variant for both formulations, except for the largest instances (|N| = 90), for which CB-M-O has a better performance. Indeed, 43% of the tested instances can be solved by the first two variants (**-F-F and **-M-F) during the sequential approach. The instances in the last column of Table 2 ("–") could not be solved within the time limit. From an application point of view, a future research direction is to use our method as a benchmark to assess the quality of the very fast and straightforward heuristics that are currently employed in similar contexts and can be adjusted to this setting.

Acknowledgements The authors gratefully acknowledge partial financial support from the University of Pisa, under the project "Analisi di reti complesse: dalla teoria alle applicazioni" (grant PRA_2020_61), and from the Italian Ministry of University and Research, under the CrossLab grant (Departments of Excellence).


References

1. Cacchiani, V., Iori, M., Locatelli, A., Martello, S.: Knapsack problems—an overview of recent advances. Comput. Oper. Res. (2022). https://doi.org/10.1016/j.cor.2021.105692
2. Kellerer, H., Pferschy, U., Pisinger, D.: Knapsack Problems. Springer, Berlin (2004)
3. Jiang, N., Aijaz, A., Jin, Y.: Recursive periodicity shifting for semi-persistent scheduling of time-sensitive communication in 5G. In: GLOBECOM 2021, pp. 1–6 (2021)
4. He, Q., Dán, G., Koudouridis, G.P.: Semi-persistent scheduling for 5G downlink based on short-term traffic prediction. In: GLOBECOM 2020, pp. 1–6 (2021)

Capacitated Disassembly Lot-Sizing Problem with Disposal Decisions for Multiple Product Types with Parts Commonality Meisam Pour-Massahian-Tafti, Matthieu Godichaud, and Lionel Amodeo

Abstract This paper addresses the capacitated disassembly lot-sizing problem for multi-product structures with parts commonality. Disposal decisions are considered in order to manage the accumulation of unnecessary items obtained after disassembly operations during the planning horizon. A new mixed-integer programming (MIP) model is given to formulate the problem. An exact method using the CPLEX solver can be applied to obtain optimal solutions for small-sized problems. A fix-and-optimize heuristic is proposed for larger sizes of the problem; it successively solves a series of sub-problems. In each sub-problem, a subset of the variables is fixed while the remaining ones are optimized using an exact method. Computational tests are performed on a number of randomly generated instances and the results demonstrate the performance of the proposed model and methods.

1 Introduction

Recently, legislative pressures have been forcing companies to take into account laws regulating the treatment of used or End-Of-Life (EOL) products [13]. For example, the End-of-Life Vehicles directive (Directive 2000/53/EC) pushes companies to consider reuse, recycling, and other energy recovery methods to obtain a recovery rate of no less than 95%. This is because environmental problems around the world are becoming increasingly serious [9]. Disassembly represents an efficient way to recover EOL products, from both the environmental and the economic standpoint. Various decision problems in disassembly emerge, such as disassembly sequencing, disassembly line balancing, disassembly scheduling, etc. [3, 5]. The present paper focuses on


the disassembly lot-sizing problem, which can be defined as the problem of determining the quantity and timing of disassembling EOL products to fulfill the demand for their parts or components over a given planning horizon. Specific characteristics of disassembly systems can create an unnecessary surplus inventory that accumulates during the planning horizon. Managing this surplus inventory is an important challenge for companies because it has a significant impact on inventory holding costs [6, 10]. Previous research on disassembly lot sizing can be classified with respect to product structure, capacity, decisions on the management of surplus inventory (for more details about these decisions see [11]), models, and solution approaches. First, Gupta and Taleb [1] propose a reverse version of Material Requirements Planning for the case with one product type without parts commonality, and they then extend the study to multi-product cases with parts commonality [15]. No costs are considered in these studies. Different costs can be included in disassembly systems [6]. In [7], Lee et al. propose a heuristic approach based on the Gupta and Taleb procedure with costs related to disassembly operations. In [8], Lee et al. suggest three integer-programming models for the single/multi-product cases with/without parts commonality, which can be used to solve small-sized problems using the CPLEX solver. For more complex and large-sized problems, Kim et al. [5] suggest a two-phase heuristic for multiple product types with parts commonality. For the problem under capacity constraints, Kim et al. [3] propose a Lagrangian heuristic which shows good performance, especially for large-sized problems. Recently, in [9], Liu and Zhang consider disassembly scheduling under stochastic yield and demand, and propose a mixed-integer nonlinear program (MINLP); they suggest an outer approximation algorithm to solve it. These studies do not consider the management of the surplus inventories inherent in disassembly systems [6]. In [14], Tafti et al. consider disposal decisions to manage surplus inventory for the single-product case, and they propose two heuristics that can solve large-sized problems efficiently. They also suggest and compare different formulations of, and methods for, the problem in terms of solution quality and time [10, 11, 13]. To the best of the authors' knowledge, there is no work which considers surplus inventory decisions such as disposal for the capacitated problem with multiple products and parts commonality. In this paper, we consider the multi-product, multi-level capacitated disassembly lot-sizing problem with parts commonality and decisions on surplus inventory. The contribution of this research is twofold. (1) We propose a new MIP model for the more complex capacitated disassembly lot-sizing problem: multi-product, multi-level, with parts commonality. The model considers disposal decisions to handle surplus inventory and provides potential cost savings. (2) A Fix-and-Optimize (FO) heuristic is suggested to solve large-sized problems efficiently. It consists of solving a series of simple sub-problems in an iterative algorithm, each of which can be solved in very short computational times [2]. It has been used to solve large-sized capacitated lot-sizing problems, but there is no work using this heuristic to solve complex disassembly lot-sizing problems.


The present paper is organized as follows. In Sect. 2 the problem considered here is presented with a MIP model. Section 3 proposes the Fix-and-Optimize heuristic for the large-sized problem. Computational results performed on a new benchmark are reported in Sect. 4. Finally, Sect. 5 presents conclusions and future work.

2 Problem Statement and Formulation

The Capacitated Disassembly Lot-Sizing Problem with Disposal is based on the structure of the products to be disassembled. The root items are the EOL products to be disassembled, and leaf items are the demanded parts, which cannot be disassembled further. A child item is obtained by disassembling a parent item. The disassembly of a parent item generates one or more child items. An example of a structure with three levels is given in Fig. 1. The example has three root items (EOL products), i.e., items 1–3. The number in parentheses is the yield of the item when one unit of its parent (or parents) is disassembled. The third (last) level contains the leaf items (8–12). Parts commonality is allowed here, which implies that a given item may have more than one parent. In the example, the shaded boxes represent common items (items 4, 6, 9 and 10). The problem is to determine the quantity and timing of disassembling all parent items so as to satisfy the demand of leaf items over the planning horizon while respecting capacity constraints; the unnecessary obtained items are disposed of after the disassembly operation. We use the same underlying assumptions as in [6, 11, 15]. Some industrial application examples can be found in these references. The assumptions made in this paper are summarized as follows: (a) used products can be obtained whenever they are ordered and there is no holding cost for them; (b) backlogging and lost sales are not allowed, and hence demands should be satisfied on time; (c) demands of leaf items are given and deterministic; (d) the disassembled items are considered of equal

Fig. 1 Example of multi-level and multi-product structure with parts commonality


quality; (e) we assume, without loss of generality, that the stock of the root and leaf items at the beginning of the planning horizon is zero; (f) there is no disposal cost for the unnecessary items; (g) the excess quantity of obtained items is disposed of as soon as possible after the disassembly operation. In the proposed models, without loss of generality, all items (N) are numbered with integers 1, 2, …, ir, …, il, …, N, where indices ≤ ir correspond to root items and indices ≥ il to leaf items. The following notation is used in this paper:

Indices
i    index for items (1, 2, …, ir, ir+1, …, il−1, il, …, N)
t    index for periods (1, 2, …, T)

Parameters
Mit    arbitrary big number considered for parent item i in period t
sit    setup cost of disassembling root item i in period t
pit    disassembly operation cost of root item i in period t
aij    number of units of item j obtained by disassembly of one unit of item i
hit    inventory holding cost of item i in period t
dit    demand of leaf item i in period t
git    operation time needed to disassemble one unit of item i in period t
Gt     available capacity, in time, in period t
Φ(i)   parents of item i

Decision variables
Yit    1 if there is a setup in period t for item i, and 0 otherwise
Xit    disassembly quantity of item i in period t
Eit    disposed quantity of item i in period t
Iit    inventory level of item i at the end of period t

(P)   Min Σ_{i=1}^{il−1} Σ_{t=1}^{T} sit · Yit + Σ_{i=1}^{il−1} Σ_{t=1}^{T} pit · Xit + Σ_{i=ir+1}^{N} Σ_{t=1}^{T} hit · Iit        (1)

subject to

Iit = Ii,t−1 + Σ_{k∈Φ(i)} aki · Xkt − Eit − Xit        ∀ i = ir+1, …, il−1; t = 1, …, T        (2)
Iit = Ii,t−1 + Σ_{k∈Φ(i)} aki · Xkt − Eit − dit        ∀ i = il, …, N; t = 1, …, T             (3)
Σ_{i=1}^{il−1} git · Xit ≤ Gt                           ∀ t = 1, …, T                          (4)
Xit ≤ Mit · Yit                                         ∀ i = 1, …, il−1; t = 1, …, T          (5)
Xit ≥ 0 and integer                                     ∀ i = 1, …, il−1; t = 1, …, T          (6)
Iit, Eit ≥ 0                                            ∀ i = ir+1, …, N; t = 1, …, T          (7)
Yit ∈ {0, 1}                                            ∀ i = 1, …, il−1; t = 1, …, T          (8)

Objective function (1) is the sum of setup, disassembly operation, and inventory holding costs over the whole T-period horizon. Constraints (2) and (3) are the inventory balance equations for the parent items and leaf items, respectively (note that Ii0 = 0 for all items). Constraints (4) enforce the capacity limits on available time in each period. Constraints (5) guarantee that a setup cost for item i is incurred in period t if any disassembly operation of item i is done in that period. Constraints (6)–(8) impose the non-negativity and binary restrictions on the variables. Note that in model P the variables Iit and Eit can be set as real. Model P can be used directly to obtain an optimal solution with the CPLEX solver, but for large-sized problems the CPU running time increases significantly. Decomposition heuristics are then attractive, since they can solve complex problems in short computational times. In the following, a Fix-and-Optimize heuristic is proposed to solve the large-sized and more complex problem efficiently.
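Before turning to the heuristic, here is a minimal sketch of model (P) on a toy instance with one root item (1) and two leaf items (2, 3), written with the open-source PuLP layer rather than the CPLEX solver used in the paper; all costs, demands, yields, and capacities are hypothetical, and costs are taken as time-invariant for brevity.

```python
# Model (P) on a toy single-root instance: item 1 is the root (il = 2), items 2
# and 3 are leaves; constraint (2) is vacuous here (no intermediate items).
import pulp

T = range(1, 5)                          # periods 1..4
a = {2: 2, 3: 1}                         # yield a_{1i} of leaf i per unit of item 1
d = {2: 40, 3: 20}                       # per-period leaf demands
s, p_cost, h_cost, g, G, Mbig = 100, 2, 0.5, 1, 60, 10_000

m = pulp.LpProblem("P", pulp.LpMinimize)
X = {t: pulp.LpVariable(f"X1_{t}", lowBound=0, cat="Integer") for t in T}
Y = {t: pulp.LpVariable(f"Y1_{t}", cat="Binary") for t in T}
I = {(i, t): pulp.LpVariable(f"I{i}_{t}", lowBound=0) for i in a for t in T}
E = {(i, t): pulp.LpVariable(f"E{i}_{t}", lowBound=0) for i in a for t in T}

m += (pulp.lpSum(s * Y[t] + p_cost * X[t] for t in T)
      + pulp.lpSum(h_cost * I[i, t] for i in a for t in T))           # objective (1)
for i in a:
    for t in T:
        prev = I[i, t - 1] if t > 1 else 0                            # I_{i,0} = 0
        m += I[i, t] == prev + a[i] * X[t] - E[i, t] - d[i]           # balance (3)
for t in T:
    m += g * X[t] <= G                                                # capacity (4)
    m += X[t] <= Mbig * Y[t]                                          # setup link (5)

m.solve(pulp.PULP_CBC_CMD(msg=False))
print({t: X[t].value() for t in T})                                   # e.g. 20 units/period
```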

3 Fix-and-Optimize Heuristic

The Fix-and-Optimize (FO) heuristic has been widely used in the literature to solve lot-sizing problems. Sahling et al. [12] first propose an FO to solve the multi-level lot-sizing problem with capacity constraints. They present three principles for determining sub-problems: product-, resource-, and process-oriented. In [2], the authors reapply the same FO approach to the multi-level case and obtain good solution quality in reasonable CPU time by solving a series of sub-problems, each with a set of fixed binary variables. FO is a decomposition heuristic which consists of successively solving sub-problems using exact integer linear programming techniques. In this paper, variables are partitioned according to periods. A sub-problem is defined by separating the set of variables of the general problem (multi-product, multi-level capacitated disassembly lot sizing with parts commonality and disposal) into two subsets. The first subset consists of the variables to be optimized (subject to the objective function and the constraints of the general problem), and the second subset consists of the other variables, whose values are fixed by adding constraints to the general problem. The sub-problem is thus modeled by a MIP identical to the one modeling the general problem, with additional constraints fixing the variables of the second subset. The heuristic proceeds by iterations, modifying at each iteration the subset of fixed variables and integrating the values of the variables obtained at the previous iteration. It is therefore necessary to generate an initial solution to launch the heuristic. In this paper, an initial solution is generated by setting the disassembly operation to be performed in every period, and we consider a period-oriented problem decomposition strategy. A pseudo-code of the proposed heuristic is presented in the following. Let E = {Xit, Yit, Iit, Eit} be the solution set of model P, and let TW_{s,e} = {s, …, e} be a time window between two given periods s and e (1 ≤ s ≤ e ≤ T). We define SP_{s,e} as the solution of the sub-problem with window TW_{s,e}: all variables for all i and all t < s or t > e are fixed to the corresponding values of the current best solution E of model P, and the remaining variables are optimized. For more details on the Fix-and-Optimize heuristic see [2, 12].


consists of the other variables whose values are fixed by adding constraints in the general problem. The sub-problem is thus modeled by a MIP, identical to the one modeling the general problem with the additional constraints allowing to fix the variables of the second sub-assembly. The heuristic works by iteration by modifying at each iteration the sub-set of the fixed variables and by integrating the values of the variables obtained at the previous iteration. It is therefore necessary to generate an initial solution to launch the heuristic. In this paper, an initial solution is generated by setting the disassembly operation to be performed in every period and we consider a period-oriented problem decomposition strategy. A pseudo-code of the proposed heuristic is presented in the following. Let E={X it , Yit , Iit , and E it } be the set of solution of model P, T Wes ={s, e} be a time window between two given periods s and e (1 ≤ s ≤ e ≤ T ). We define S Pes as the solution of a sub-problem with T Wes : all variables for all i, t < s, and t > e are fixed to the related values of current best solution of model P (E) and the remaining variables are optimized. For more details on the Fix-and-Optimize heuristic please see [2, 12]. Algorithm 1 Fix-and-Optimize (FO) period-oriented algorithm Input: initial solution (E 0 ={X it0 , Yit0 , Iit0 , E it0 }) obtained by solving the model P when setting Yit = 1 for all i and t, period decomposition started from 1 to T. Output: final solution. Set the initial solution as the current best solution of the model P (E=E 0 ) s ←1&e←1 while s & e ≤ T do Solve sub-problem S Pes : All variables for all i and t ∈ {s, e} are optimized, Other variables are fixed to their value in E. if the solution of S Pes is better than E then Set E ← S Pes s ←e+1&e ←s else e ←e+1 end if end while

4 Computational Experiments

Computational tests are performed on randomly generated instances. The parameter setting of the benchmark of Tafti et al. [13] is adapted to the problem in order to obtain a cycle Time Between Orders equal to 2, based on the average values of the data. For the tests, we generate 225 problem instances, i.e., 25 problem instances for each combination of three levels of the number of items and three levels of the number of periods. Five different disassembly product structures are generated for each level of the number of items, using randomly generated numbers of root items, leaf items, and common leaf items (for more details see [4]).

Table 1 Parameter values

Parameter                          Value
Items (N)                          (10, 20, 30)
Periods (T)                        (10, 20, 30)
Holding cost (hit)                 N = 10 ⇒ DU(0.3, 0.5); N = 20 ⇒ DU(0.1, 0.3); N = 30 ⇒ DU(0.1, 0.16)
Demand (dit)                       DU(50, 250)
Yield (aij)                        DU(1, 4)
Disassembly cost (pit)             DU(38, 62)
Setup cost (sit)                   DU(2500, 3500)
Disassembly operation time (git)   DU(1, 4)

For each disassembly structure, five problems with different data are generated for each level of the number of periods. To generate different disassembly structures, the numbers of root items (ir) are generated from DU(2, 2), DU(2, 4), and DU(2, 6) for the problems with 10, 20, and 30 items, respectively. Also, the numbers of leaf items for each root item (parent) are generated from DU(2, 5), DU(5, 10), and DU(10, 15) for the problems with 10, 20, and 30 items, respectively. Finally, the numbers of common leaf items are generated from DU(1, 3), DU(1, 6), and DU(1, 10) for the problems with 10, 20, and 30 items, respectively. Table 1 summarizes the generation of the parameter values. Here, DU(a, b) denotes the discrete uniform distribution over the range [a, b]. We generate the disassembly capacity per period by adapting the procedure proposed by Kim et al. [3] as follows:

Step 1: The initial available capacity per period is generated as 400, 480, or 540 with probabilities 0.2, 0.5, and 0.3, respectively.
Step 2: The uncapacitated problem is solved by performing the disassembly operation in every period; the disassembly quantity variables (Xit) are thereby obtained.
Step 3: The total available capacity (TC = Σ_{t=1}^{T} Gt) and the overall used capacity (UC = Σ_{i=1}^{il−1} Σ_{t=1}^{T} git · Xit) are calculated for the solution obtained in Step 2. Note that Gt is the initial capacity generated in Step 1.
Step 4: The initial capacity is modified using Gt′ = α · UC/TC · Gt. Note that α is initially set to 1.25, but it is increased if feasibility is not obtained.
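A small sketch of this capacity-generation procedure is given below; the Step 2 MIP solve is replaced by hypothetical per-period used capacities.

```python
# Steps 1-4 of the capacity generation; `used` stands in for the per-period
# capacity consumption of the uncapacitated Step 2 solution.
import numpy as np

rng = np.random.default_rng(0)
T = 10
G = rng.choice([400, 480, 540], size=T, p=[0.2, 0.5, 0.3])   # Step 1
used = rng.integers(300, 700, size=T)                         # stand-in for Step 2
TC, UC = G.sum(), used.sum()                                  # Step 3
alpha = 1.25                                                  # increased if infeasible
G_mod = alpha * UC / TC * G                                   # Step 4
print(G_mod.round(1))
```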

The model and methods are implemented in Eclipse using the Java programming language. All tests are run on a system with an Intel Core i7-7700T at 2.9 GHz and 16 GB of RAM, under Windows 10. We use CPLEX solver 12.8 to solve the problems. Note that CPLEX is terminated when the CPU time reaches 3600 seconds, and we use the default parameter settings of CPLEX, i.e., dynamic search and cut generation are enabled


automatically. The percentage deviations from the optimal solution (or from the lower bound, for the cases in which CPLEX cannot obtain the optimal solution within the 3600-second limit) and the CPU seconds are used to evaluate the performance of the model and methods. The test results are summarized in Table 2. The exact method for the original problem using the CPLEX solver is not efficient for large-sized problems because of increasing CPU times. For example, for the instances with 30 periods, the average CPU time increases sharply with the number of items: 120.33, 2716.01, and 3331.01 seconds for 10, 20, and 30 items, respectively. The FO heuristic can solve the problem instances in very short CPU times: the average CPU time for the problem instances with 30 items and 10, 20, and 30 periods is only 0.29, 0.73, and 0.85 seconds, respectively. The FO heuristic is also effective in improving the initial solution: the maximum improvement of the gap (%) is 13.89, 9.28, and 6.63 for the problem instances with 10, 20, and 30 items, respectively. The Minimum, Average, and Maximum rows in Table 2 report, for each level of items, the results of each method (MIP, IS, and FO) over all instances with different periods. We note that N = 30 is considered large in [4], on which our instance generation is based. The time needed to provide a solution is important to the decision maker: in a decision support process, decision makers want to be able to analyse solutions in order to adjust the parameters, and this is not possible if the time needed to obtain a solution is too long. Furthermore, the heuristic does not require a commercial solver, which can be an obstacle for industrial applications. The impact of considering the disposal decision on the total cost is analyzed for the problem instances with 30 items and 10 periods. We solve these instances by adapting the model without disposal proposed by Kim et al. [3]. The cost reduction obtained by considering disposal can reach 11.71%; there is thus an important opportunity to reduce costs and make disassembly systems more profitable.

Table 2 Test results

          N = 10, ir = DU(2, 2)               N = 20, ir = DU(2, 4)                N = 30, ir = DU(2, 6)
          MIP          IS          FO         MIP            IS          FO         MIP            IS          FO
T = 10    0.00/0.49*   12.32/0.23  3.32/0.17  0.00/9.72      15.48/15.45 9.36/0.32  0.00/47.60     10.74/10.50 7.01/0.29
T = 20    0.00/10.86   15.29/5.65  9.44/0.43  0.24/996.65    15.50/5.02  10.33/0.40 0.93/2948.71   10.11/12.64 7.38/0.73
T = 30    0.00/120.33  16.38/9.69  11.40/0.54 1.20/2716.01   17.18/6.27  13.20/0.77 2.76/3331.01   12.88/2.58  10.20/0.85
Minimum   0.00/0.12    2.73/0.03   1.66/0.08  0.00/0.33      4.80/0.08   2.25/0.10  0.00/2.03      4.35/0.05   2.35/0.12
Average   0.00/46.99   14.66/5.19  9.44/0.38  0.48/1240.79   16.06/8.91  11.13/0.50 1.23/2109.10   11.24/8.57  8.20/0.62
Maximum   0.00/635.37  29.53/60.06 21.85/1.24 5.86/3618.42   33.27/60.04 23.99/1.39 11.13/3646.25  21.87/60.08 17.60/3.01

MIP: Mixed-Integer Programming; IS: Initial Solution; FO: Fix-and-Optimize; (*): Gap(%) / CPU(s)


5 Conclusion

In the capacitated disassembly lot-sizing problem with multiple products, multiple levels, and parts commonality, the disposal decision used to handle surplus inventory can yield a significant cost reduction, of around 12% for the tested instances. A new MIP model is presented to formulate this problem. The exact method using the CPLEX solver on the original problem is not efficient for large-sized instances, with the disadvantage that the CPU time increases significantly. A Fix-and-Optimize heuristic is therefore proposed, which can solve the problems (especially the large-sized ones) in shorter computational times with good solution quality. Further research could improve the performance of the proposed heuristic by considering the interrelatedness between the binary setup variables, or by using a round-down heuristic based on a linear programming relaxation to obtain the initial solution. Another extension of this work is to take demand balancing into account, by using a pricing problem to manage the surplus inventory.

Acknowledgements We appreciate the Departmental Council of Aube (CD10), along with the European Regional Development Fund (FEDER), for supporting the project.

References

1. Gupta, S., Taleb, K.: Scheduling disassembly. Int. J. Product. Res. 32(8), 1857–1866 (1994)
2. Helber, S., Sahling, F.: A fix-and-optimize approach for the multi-level capacitated lot sizing problem. Int. J. Product. Econ. 123(2), 247–256 (2010)
3. Kim, H.-J., Xirouchakis, P.: Capacitated disassembly scheduling with random demand. Int. J. Product. Res. 48(23), 7177–7194 (2010)
4. Kim, H.-J., Lee, D.-H., Xirouchakis, P.: A Lagrangean relaxation approach for capacitated disassembly scheduling. In: International Conference on Computational Science and Its Applications, pp. 722–732. Springer (2005)
5. Kim, H.-J., Lee, D.-H., Xirouchakis, P.: Two-phase heuristic for disassembly scheduling with multiple product types and parts commonality. Int. J. Product. Res. 44(1), 195–212 (2006)
6. Kim, H.-J., Lee, D.-H., Xirouchakis, P.: Disassembly scheduling: literature review and future research directions. Int. J. Product. Res. 45(18–19), 4465–4484 (2007)
7. Lee, D.-H., Xirouchakis, P.: A two-stage heuristic for disassembly scheduling with assembly product structure. J. Oper. Res. Soc. 55(3), 287–297 (2004)
8. Lee, D.-H., Kim, H., Choi, G., Xirouchakis, P.: Disassembly scheduling: integer programming models. Proc. Inst. Mech. Eng. Part B: J. Eng. Manuf. 218(10), 1357–1372 (2004)
9. Liu, K., Zhang, Z.-H.: Capacitated disassembly scheduling under stochastic yield and demand. European J. Oper. Res. 269(1), 244–257 (2018)
10. Pour-Massahian-Tafti, M., Godichaud, M., Amodeo, L.: Models for disassembly lot sizing problem with decisions on surplus inventory. In: Advances in Optimization and Decision Science for Society, Services and Enterprises, pp. 423–432. Springer (2019)
11. Pour-Massahian-Tafti, M., Godichaud, M., Amodeo, L.: New models and efficient methods for single-product disassembly lot-sizing problem with surplus inventory decisions. Int. J. Product. Res. 1–21 (2020)
12. Sahling, F., Buschkühl, L., Tempelmeier, H., Helber, S.: Solving a multi-level capacitated lot sizing problem with multi-period setup carry-over via a fix-and-optimize heuristic. Comput. Oper. Res. 36(9), 2546–2553 (2009)
13. Tafti, M.P.M., Godichaud, M., Amodeo, L.: Models for the single product disassembly lot sizing problem with disposal. IFAC-PapersOnLine 52(13), 547–552 (2019)
14. Tafti, M.P.M., Godichaud, M., Amodeo, L.: Single product disassembly lot sizing problem with disposal. In: 2019 IEEE 6th International Conference on Industrial Engineering and Applications (ICIEA), pp. 135–140. IEEE (2019)
15. Taleb, K.N., Gupta, S.M.: Disassembly of multiple product structures. Comput. Ind. Eng. 32(4), 949–961 (1997)

The Crop Plant Scheduling Problem Nikola Obrenović, Selin Ataç, Stefano Bortolomiol, Sanja Brdar, Oskar Marko, and Vladimir Crnojević

Abstract With the increase in world population, the efficient production of food becomes an ever more important goal. One particular task associated with this goal is to find an optimal crop planting time, considering the allowed planting time windows and the following objectives: (1) the weekly harvest must not exceed the available storage capacity and produce waste, and (2) the labor force should be utilized in an efficient manner. To tackle this problem, we define the crop plant scheduling problem (CPSP) as a mixed-integer linear program and solve it with a commercial mathematical programming solver. To estimate the harvest time, we also predict the time needed for the accumulation of a sufficient amount of growing degree units (GDUs), using an ARIMA model. In this paper, we present the developed GDU forecasting and CPSP models, and the results obtained for two selected problem instances.


1 Introduction

Due to the global population increase and the expansion of urban and industrial areas, the arable land area per inhabitant of our planet is decreasing [6]. Therefore, we can expect food to become a very scarce and valuable resource, and the optimization of its production process a very important topic. One aspect of food production is to satisfy the demand without creating a surplus which cannot be consumed and represents waste. Many crop types are important elements in the production of various food and industrial products, which sets the goal for producers of harvesting as many crops as possible. However, the produced crops need to be stored appropriately until they are forwarded further along the process chain. Therefore, it is important not to overflow the available storage capacities, and so to avoid wasting the produced crop. This has led to the definition of the crop plant scheduling problem (CPSP), whose main objective is to produce the required amount of crop, distributed over the time horizon, so as to avoid exceeding the available storage capacity whenever possible. Additionally, the engagement of the labor force should be taken into account and managed properly. This is reflected by setting two additional goals for our problem, namely:

1. to harvest all crops in the minimum number of weeks, and
2. to avoid gaps in the harvest, which represent weeks when the employees are paid although no work is required.

From the moment of planting, crop seeds start growing thanks to the accumulation of heat, which can be measured in growing degree units (GDUs). Then, when a predetermined amount of GDUs is reached, the plants are harvested. We denote as a crop population each group of seeds located at the same plantation site and planted and harvested at the same time. The accumulation of GDUs depends on the planting site and varies throughout the year. Therefore, given a time window of possible planting dates for each population, it is first necessary to estimate the harvest date of each crop population for each feasible planting date. For this purpose, we estimate the dates at which a sufficient amount of GDUs is acquired per population by creating an ARIMA model [4] to forecast the daily GDUs based on historical data. The presented optimization problem, in a simpler form, was first introduced in the Syngenta Crop Challenge in Analytics 2021 [13]. Since the CPSP still represents an open research challenge, we have continued to work on its solution after the Syngenta competition. The remainder of the paper is organized in the following manner. The related literature is presented in Sect. 2. The GDU forecasting model, data preprocessing, and CPSP definition are described in Sect. 3. The selected problem instances and their solutions are presented in Sect. 4. Conclusions and future research paths are given in Sect. 5.


2 Related Literature

In the literature, we have found several works tackling similar or related problems; the most notable are [5, 12]. The crop growth planning problem in vertical farming is studied in [12]. The modeling approach is based on a time-expanded network, and the model structure resembles multicommodity flow models [3], as stated by the authors themselves. Consequently, their model is considerably different from ours. On the other hand, the authors propose a linear programming solution approach, as we do in this work. The authors in [5] optimize the allocation of crops to parcels with the objective of maximizing the annual profit under limited irrigation water availability. That problem can be considered complementary to ours and represents a potential future extension of our problem. The optimization of fruit harvest with management zone delineation is studied in [1]. However, given that the problem considers grapes, the authors do not analyze the planting phase, while the used solution methodology, i.e., complete enumeration, is not applicable to our problem due to its size. The works [10, 11] deal with forest harvest optimization with the specific goals of preventing forest fragmentation and reducing transportation costs, respectively. The former goal is not applicable to our case, whilst the latter is not yet in the focus of our work but may represent a future extension of our model. To conclude, we have not found an optimization problem in the literature which possesses the needed characteristics of the CPSP. Hence, the development of a new optimization model is justified.

3 Methodology

At the highest level, the selected methodological approach can be divided into three main components: (1) forecasting of the GDUs, (2) data preparation for the optimization model, and (3) optimization of the planting schedule based on the forecasted GDUs. The first component is based on an ARIMA model and is performed independently from the planting schedule optimization. GDU forecasting is described in Sect. 3.1. Data preprocessing and the planting schedule optimization model are presented in Sects. 3.2 and 3.3, respectively.

3.1 Growing Degree Units Forecasting

As input for GDU forecasting, we use the historical data on daily accumulated GDUs for the previous 10 years. With that, we predict the GDUs obtained in the following 80 weeks, which represent the planning horizon of the subsequent optimization problem.


Fig. 1 ETS decomposition of historical GDU data

To analyze the input data, we apply an Error, Trend, and Seasonality (ETS) decomposition of the time series (Fig. 1). From the figure, we observe that the time series has a seasonality component; as a result, the data are not stationary. Because of the seasonality, forecasting methods such as simple or double exponential smoothing do not work [7]. Therefore, we focus on an autoregressive integrated moving average (ARIMA) model [4]. To determine the parameters of the ARIMA(p, d, q) model, where p, d, and q denote the autoregressive, differencing, and moving average orders, respectively, we refer to the Box-Jenkins method [4]. First, we compute the auto-correlation function (ACF) and partial auto-correlation function (PACF) of the time series (Fig. 2). From Fig. 2, we observe the cut-off lag in the PACF, which provides the maximum value p = 4 for the autoregressive order. Similarly, the cut-off lag in the ACF suggests that the moving average order q should be set to at most 3. From the ACF plot, we also determine the differencing order d = 1, as the lag after which the autocorrelations drop significantly. Further, we take advantage of Akaike's Information Criterion (AIC, [8]) in order to find the best parameters among the candidates. Since we are interested in the simplest model, we compare the AIC values of the ARIMA(4, 1, 3) model and of all simpler models, such as ARIMA(1, 1, 1) and ARIMA(1, 1, 2). Finally, we find that the minimum AIC value is achieved for ARIMA(2, 1, 2) and use this model to forecast the accumulated GDUs per day for a time period of 80 weeks.
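A minimal sketch of this selection procedure is given below, using statsmodels on synthetic stand-in data (the real 10-year GDU history is not available here): candidate ARIMA(p, 1, q) models with p ≤ 4 and q ≤ 3 are compared by AIC, and the best one produces the 80-week daily forecast.

```python
# AIC-based ARIMA(p,1,q) selection on a hypothetical seasonal daily GDU series.
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
days = pd.date_range("2011-01-01", periods=10 * 365, freq="D")
gdu = pd.Series(10 + 8 * np.sin(2 * np.pi * days.dayofyear / 365)
                + rng.normal(0, 1, len(days)), index=days)

best = min(
    ((p, q, ARIMA(gdu, order=(p, 1, q)).fit().aic)
     for p in range(1, 5) for q in range(1, 4)),
    key=lambda r: r[2],
)
p, q, _ = best
forecast = ARIMA(gdu, order=(p, 1, q)).fit().forecast(steps=80 * 7)  # 80 weeks, daily
print(p, q, forecast.head())
```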


Fig. 2 ACF and PACF plots

3.2 Data Preprocessing

Each crop population is described by its estimated harvest quantity, required GDUs, initially planned planting date, and planting time window, defined by the earliest and latest possible planting dates. The output of the ARIMA model provides the daily GDU forecast for the planning horizon. For each population and each of its possible planting days, we calculate the estimated harvest week of the population, based on the forecasted GDUs. With that, we determine the feasible set of combinations of population p and harvest week w, denoted as F. Once the harvest week for a population is determined, we can backtrack the corresponding possible planting days and select one of them. This preprocessing allows us to define the optimization model in terms of harvest weeks, as presented in the following subsection. With that, we avoid explicitly modeling planting days or weeks, and harvest days, too. This model simplification is without loss of generality and is acceptable since the storage capacity is defined at the weekly level. Furthermore, this approach prevents the accumulation of many more GDUs than required.
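As an illustration, a small sketch of this step under simplifying assumptions (a flat daily GDU forecast and hypothetical population data): for each feasible planting day, the first day on which the cumulative GDUs reach the requirement is found and mapped to its week.

```python
# Mapping each feasible plant day of a population to its estimated harvest week.
import numpy as np

daily_gdu = np.full(560, 12.0)                  # 80 weeks of forecast daily GDUs
required_gdu, earliest, latest = 600.0, 10, 30  # hypothetical population data

feasible_weeks = set()
for plant_day in range(earliest, latest + 1):
    cum = np.cumsum(daily_gdu[plant_day:])
    harvest_day = plant_day + int(np.searchsorted(cum, required_gdu))
    feasible_weeks.add(harvest_day // 7)        # (population, week) pairs go into F
print(sorted(feasible_weeks))
```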


3.3 Mathematical Model

The CPSP represents a multi-objective problem. The elements of the constructed mathematical model are presented hereafter.

Sets
P    the set of corn populations
W    the set of weeks of the planning horizon
F = {(p, w)}    the set of feasible combinations of population p and harvest week w

Data
Qp    the expected yield of crop population p
C     the weekly storage capacity

Indices
p ∈ {1, …, |P|}    the population id
w ∈ {1, …, |W|}    the week id

Decision variables
h_pw    binary variable determining if population p is harvested in week w
u_w     binary variable determining if week w is used for harvesting

Objectives

min f1 = Σ_{w∈W} | Σ_{p∈P} h_pw Qp − C |          (1)
min f2 = Σ_{w∈W} u_w                              (2)
min f3 = Σ_{w∈{1,…,|W|−1}} | u_{w+1} − u_w |      (3)

Objective (1) minimizes the total deviation from the capacity for all weeks. Objective (2) minimizes the number of weeks used for harvesting. This objective serves the aim of shortening the time horizon needed to harvest all populations. Lastly, objective (3) strives to consolidate harvest weeks into consecutive weeks to minimize the workforce fluctuation.

In the final model, the objectives are combined by using the weighted sum:

min z = f1 + c · f2 + c · f3.          (4)

The weight c for objectives (2) and (3) is set equal to the storage capacity value.

Constraints

Σ_{w:(p,w)∈F} h_pw = 1            ∀p ∈ P          (5)
Σ_{w∈W} h_pw = 1                  ∀p ∈ P          (6)
Σ_{p:(p,w)∈F} h_pw ≤ u_w · M1     ∀w ∈ W          (7)
h_pw ∈ {0, 1}                     ∀p ∈ P, w ∈ W   (8)
u_w ∈ {0, 1}                      ∀w ∈ W          (9)

The constraint set (5) ensures that each population is harvested in one of the feasible weeks, whilst constraints (6) ensure that a population is harvested in no more than one week. Constraints (7) determine which weeks are used for harvesting. Here, M1 is a big-M value that is set to the maximum possible number of populations harvested in any of the weeks of planning horizon. The rest of the constraints, i.e., (8) and (9), are domain constraints. The expressions of absolute value of differences in objectives (1) and (3) are linearized with the technique explained in [2], which leads to the introduction of the following variable sets: • eyw+ —the yield deviation above the capacity in the harvesting week w, • eyw− —the yield deviation below the capacity in the harvesting week w, • eww+ —the indicator of change from non-harvesting w to harvesting week w + 1, and • eww− —the indicator of change from harvesting w to non-harvesting week w + 1. Additionally, we apply another coefficient, k > 1, at the surplus over the storage capacity in order to emphasize that positive deviation from the capacity is less favorable than the negative one. Finally, we obtain the following linear model: min z

$\min\; z = \sum_{w \in W} \bigl( k \cdot ey_w^+ + ey_w^- \bigr) + c \sum_{w \in W} u_w$   (10)
$\qquad\qquad +\; c \sum_{w \in \{1, \ldots, |W|-1\}} \bigl( ew_w^+ + ew_w^- \bigr)$   (11)

subject to

$\sum_{w : (p,w) \in F} h_{pw} = 1 \quad \forall p \in P$   (12)

$\sum_{w \in W} h_{pw} = 1 \quad \forall p \in P$   (13)

$\sum_{p : (p,w) \in F} h_{pw} \le u_w \cdot M_1 \quad \forall w \in W$   (14)

$\sum_{p \in P} Q_p h_{pw} - C = ey_w^+ - ey_w^- \quad \forall w \in W$   (15)

$u_{w+1} - u_w = ew_w^+ - ew_w^- \quad \forall w \in \{1, \ldots, |W|-1\}$   (16)

$h_{pw} \in \{0, 1\} \quad \forall p \in P,\; w \in W$   (17)

$u_w \in \{0, 1\} \quad \forall w \in W$   (18)

$ey_w^+ \ge 0 \quad \forall w \in W$   (19)

$ey_w^- \ge 0 \quad \forall w \in W$   (20)

$ew_w^+ \in \{0, 1\} \quad \forall w \in W$   (21)

$ew_w^- \in \{0, 1\} \quad \forall w \in W$   (22)
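As an illustration of how the linearized model (10)-(22) could be implemented, the following is a minimal sketch in Python with the PuLP modeling library; the data containers are illustrative toy inputs, and the sketch is not the authors' implementation (the paper solves the model with CPLEX directly).

import pulp

def build_cpsp_model(P, W, F, Q, C, c, k, M1):
    # P, W: lists of population and week ids (W in chronological order);
    # F: set of feasible (p, w) pairs; Q: dict of expected yields.
    model = pulp.LpProblem("CPSP", pulp.LpMinimize)
    h = pulp.LpVariable.dicts("h", list(F), cat="Binary")
    u = pulp.LpVariable.dicts("u", W, cat="Binary")
    ey_pos = pulp.LpVariable.dicts("ey_pos", W, lowBound=0)
    ey_neg = pulp.LpVariable.dicts("ey_neg", W, lowBound=0)
    ew_pos = pulp.LpVariable.dicts("ew_pos", W[:-1], cat="Binary")
    ew_neg = pulp.LpVariable.dicts("ew_neg", W[:-1], cat="Binary")
    # Objective (10)-(11): weighted surplus/deficit, used weeks, transitions.
    model += (pulp.lpSum(k * ey_pos[w] + ey_neg[w] for w in W)
              + c * pulp.lpSum(u[w] for w in W)
              + c * pulp.lpSum(ew_pos[w] + ew_neg[w] for w in W[:-1]))
    for p in P:
        # (12)-(13): exactly one feasible harvest week per population.
        model += pulp.lpSum(h[(p, w)] for w in W if (p, w) in F) == 1
    for w in W:
        keys = [(p, w) for p in P if (p, w) in F]
        # (14): a week must be opened before harvesting anything in it.
        model += pulp.lpSum(h[key] for key in keys) <= M1 * u[w]
        # (15): signed deviation of the weekly yield from the capacity.
        model += (pulp.lpSum(Q[p] * h[(p, w)] for (p, _) in keys) - C
                  == ey_pos[w] - ey_neg[w])
    for w, w_next in zip(W, W[1:]):
        # (16): harvesting on/off transitions between consecutive weeks.
        model += u[w_next] - u[w] == ew_pos[w] - ew_neg[w]
    return model

Note that constraints (12) and (13) collapse into a single equality in this sketch, because h is only instantiated for the pairs in F.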

4 Results

The model has been tested on two problem instances, which have been created based on the instances provided at the Syngenta Crop Analytics Challenge 2021 [13]:
1. Instance 1 contains 1375 different corn populations which need to be planted in the period of one year. The harvest horizon ends within 80 weeks from the year start. The storage capacity is set to 7000 units.
2. Instance 2 contains 1194 corn populations, while the harvest horizon ends within 71 weeks from the year start. The storage capacity is 6000 units.

The problem instances are solved with IBM ILOG CPLEX ver. 12.9 on an Intel(R) Core(TM) i5-9400 processor at 2.90 GHz, with 16 GB RAM. For Instance 1, the optimal solution is reached in 28.67 s. Figure 3 shows the harvest schedules and quantities of the experience-based and resulting solutions, with blue and orange bars, respectively. The experience-based solution is determined by the planting days defined by an expert for each crop population. The storage capacity is represented by the dashed black line. From Fig. 3, we observe that the storage capacity is satisfied in week 17 and from week 40 onwards, while the harvest weeks are mainly condensed into two chunks, from week 17 to week 45 and from week 56 to week 69, with the exception of weeks 50 and 51. For weeks 18 to 39, we tried to reduce the created surplus by further constraining the harvest amount. However, in this case the problem becomes infeasible, which leads us to conclude that the surplus cannot be reduced further for this particular data instance.


Fig. 3 Expert-defined and optimal solution schedules for data instance 1

Fig. 4 Expert-defined and optimal solution schedules for data instance 2

Instance 2 is solved optimally in 324.28 s. Similarly, Fig. 4 shows the expert-defined harvest schedule and the schedule of the optimal solution. The latter schedules all harvests in consecutive weeks 17 to 61, without wasted surplus in any of the weeks. This solution clearly demonstrates the potential of our proposed model.


5 Conclusions

In this paper, we present the crop plant scheduling problem (CPSP), whose goal is to reduce the wasted production surplus and to utilize the workforce efficiently. The problem is modeled as a mixed integer linear program and solved for two instances inspired by the Syngenta Crop Analytics Challenge 2021 [13] by using a commercial mathematical programming solver. To estimate crop population harvest times, we also develop and present an ARIMA model of the GDUs accumulated during the planning horizon. The obtained results show the strength and practical potential of the defined model.

In our future work, we intend to create more realistic test data sets, including large instances, in cooperation with industry partners, for the purpose of evaluating the proposed model thoroughly. We also plan to analyze the sensitivity of our model to the coefficients $c$ and $k$ introduced in Sect. 3.3. The CPSP possesses certain similarities with the well-known job shop scheduling problem [9]. The analogy lies in the fact that each crop population requires a certain time to grow, which is analogous to job execution time, while the number of simultaneously cultivated populations is limited by the storage capacity, corresponding to the machines' capacities. Hence, larger problem instances might eventually require the development of heuristic or metaheuristic solution approaches, e.g., adaptive large neighborhood search, genetic, or biogeography-based algorithms. The formal assessment of the problem complexity will also be part of our future work.

In this work, the forecast of accumulated GDUs is produced with a single model, while in the future we plan to test additional forecasting techniques. We therefore also intend to develop a robust optimization approach, which will account for the estimated error of the GDU forecast and for multiple forecasting models.

Acknowledgements This research is part of the ANTARES project that has received funding from the European Union's Horizon 2020 research and innovation programme (SGA-CSA No. 739570 under FPA No. 664387, https://doi.org/10.3030/739570).

References

1. Albornoz, V.M., Araneda, L.C., Ortega, R.: Planning and scheduling of selective harvest with management zones delineation. Ann. Oper. Res. (2021). https://doi.org/10.1007/s10479-021-04112-1
2. Asghari, M., Fathollahi-Fard, A.M., Mirzapour Al-e-hashem, S.M.J., Dulebenets, M.A.: Transformation and linearization techniques in optimization: a state-of-the-art survey. Mathematics 10(2) (2022). https://doi.org/10.3390/math10020283
3. Barnhart, C., Krishnan, N., Vance, P.H.: Multicommodity Flow Problems, pp. 2354–2362. Springer, Boston, MA, USA (2009). https://doi.org/10.1007/978-0-387-74759-0_407
4. Box, G.E.P., Jenkins, G.M.: Time Series Analysis: Forecasting and Control. Holden-Day, San Francisco, USA (1976)


5. Cervantes-Gaxiola, M.E., Sosa-Niebla, E.F., Hernández-Calderón, O.M., Ponce-Ortega, J.M., del Castillo, J.R.O., Rubio-Castro, E.: Optimal crop allocation including market trends and water availability. Eur. J. Oper. Res. 285(2), 728–739 (2020). https://doi.org/10.1016/j.ejor.2020.02.012
6. Fedoroff, N.V.: Food in a future of 10 billion. Agric. Food Secur. 4 (2015). https://doi.org/10.1186/s40066-015-0031-7
7. Gardner, E.S., Jr.: Exponential smoothing: the state of the art. J. Forecast. 4(1), 1–28 (1985)
8. Hyndman, R.J., Athanasopoulos, G.: Forecasting: Principles and Practice, 2nd edn. OTexts, Melbourne, Australia (2018). OTexts.com/fpp2. Accessed 18 Jan. 2021
9. van Laarhoven, P.J.M., Aarts, E.H.L., Lenstra, J.K.: Job shop scheduling by simulated annealing. Oper. Res. 40(1), 113–125 (1992). http://www.jstor.org/stable/171189
10. Naderializadeh, N., Crowe, K.A., Rouhafza, M.: Solving the integrated forest harvest scheduling model using metaheuristic algorithms. Oper. Res. (2020). https://doi.org/10.1007/s12351-020-00612-3
11. Neto, T., Constantino, M., Martins, I., Pedroso, J.P.: A multi-objective monte carlo tree search for forest harvest scheduling. Eur. J. Oper. Res. 282(3), 1115–1126 (2020). https://doi.org/10.1016/j.ejor.2019.09.034
12. Santini, A., Bartolini, E., Schneider, M., Greco de Lemos, V.: The crop growth planning problem in vertical farming. Eur. J. Oper. Res. 294(1), 377–390 (2021). https://doi.org/10.1016/j.ejor.2021.01.034
13. Syngenta: Syngenta crop challenge in analytics 2021 (2021). https://www.ideaconnection.com/syngenta-crop-challenge/challenge.php. Accessed 04 Mar. 2022

A MILP Formulation and a Metaheuristic Approach for the Scheduling of Drone Landings and Payload Changes on an Automatic Platform

Elena Ausonio, Patrizia Bagnerini, and Mauro Gaggero

Abstract We present a mixed-integer linear programming formulation and a metaheuristic approach based on direct search to schedule landings and payload changes of a set of unmanned aerial vehicles that cooperate to achieve given mission objectives. In more detail, such vehicles require landing on an automatic platform able to rapidly substitute batteries and switch the payload they are currently carrying with another one, if required by the mission at hand. Preliminary numerical results are presented to show the effectiveness of the metaheuristic algorithm as a compromise between accuracy of suboptimal solutions and computational effort.

1 Introduction

In the last decades, unmanned aerial vehicles (UAVs), also called drones, have become extremely popular owing to their ability to perform various kinds of missions consisting of different tasks. Examples of application are environmental monitoring, geographic mapping, search and rescue, shipping and delivery, precision agriculture, inspection and surveillance, and fire suppression [6, 12, 15, 16]. In fact, UAVs are able to carry a variety of equipment, generally called payload, such as thermal and multi-spectral cameras, LiDAR, and other sensors. Despite the technological evolution of drones, there are still two major limitations preventing their systematic use: the reduced operating autonomy of the batteries, which affects the area UAVs can cover [4], and the difficulty of performing complex missions requiring the coordination of multiple devices [23]. To overcome these limitations, besides the enhancement of


cooperation algorithms (not considered in this work), a possibility is to automate drone management through a landing platform able to automatically change payloads or batteries and recharge exhausted ones [13, 26]. Several companies around the world are working towards the development of such automatic platforms (see, e.g., [1, 3]). The paradigm shift is evident: from being a tool requiring continuous human intervention, UAVs can be employed to perform activities even 24/7.

In this paper, we focus on a set of drones flying in a given area and performing different tasks by means of suitable payloads. After concluding the assigned task, or due to the need to replace exhausted batteries, UAVs have to land on the aforementioned automatic platforms, which perform a change of battery with a newly-charged one and switch the payload carried before landing. In more detail, the payloads that the drones were carrying before landing are left on the platforms, and new ones are assigned. After such operations, drones take off and continue their mission. The payloads left on the platforms can then be mounted on the next drones landing there. If UAVs land on platforms where the requested payload is unavailable, we assume that an alternative payload can be assigned (for instance, a less performing sensor with respect to the required one, or another one with similar characteristics) in order to avoid aborting the mission.

The full functionality and efficiency of this scenario requires the optimal assignment of the shared resources (platforms and payloads) to drones. Hence, it is crucial to sequence landings and take-offs, as well as to assign a payload to each drone after landing according to requests and availability on the chosen landing platform. From now on, we refer to such a problem as the drone scheduling problem (DSP).

First, we formulate the DSP in the case of a single automatic platform as a mixed-integer linear programming (MILP) problem. However, finding a solution may be complex in the case of a large number of drones. The DSP was first introduced in [5], together with a simple heuristic based on a greedy choice of landing times and payload assignments, which was not satisfactory for problems with a large number of drones. Thus, here we propose a metaheuristic approach able to find suboptimal solutions that are close to those computed by solving the MILP formulation with an appropriate solver, also for large-dimensional problem instances, but with significant savings in computational time. Such an approach is based on direct-search optimization, a family of methods originally developed for unconstrained problems that do not rely on derivatives of the cost function to minimize [18]. In particular, we focus on the generalized pattern search (GPS) algorithm, which consists in iteratively updating the current solution by sampling the objective function along suitable search directions through the construction of a grid, with the purpose of reducing the cost [8, 9]. The approach proved to be very effective in solving the DSP, and was adopted also for its simplicity of implementation. Numerical results obtained in a test case are reported and discussed to evaluate the effectiveness of the proposed approaches.

Although a large scientific literature exists for the management of military aircraft missions or the scheduling of landings at an airport [7, 11, 17], the application to UAVs has not received the same attention yet. In most cases, the considered setting belongs


to the family of the so-called aircraft landing problem, which consists in minimizing the difference between actual and target aircraft landing times under suitable constraints. A simple, recent approach to the solution of this problem, based on the decomposition into a chain of smaller, easier-to-solve cases, can be found in [21]. Several heuristic or metaheuristic algorithms have been proposed as well to reduce the computational burden of finding suboptimal solutions (see, e.g., [14, 22, 25]). The main difference between the DSP and the aircraft landing problem is the need to consider, in the former, the availability of the desired payload on the landing platform together with the deviation from the target landing time.

The general problem of scheduling unmanned vehicles, instead, has received a lot of attention from the research community. Several works consider autonomous vehicles to deliver packages through logistics networks. Many companies like Amazon, DHL, and UPS are currently exploring the practical use of drones for parcel delivery (see, e.g., [2]). In this context, one of the first works that employs drones for last-mile delivery in logistics operations is [20]. It is based on a variant of the traveling salesman problem, where UAVs operate along with traditional delivery trucks to distribute packages. MILP formulations for two delivery-by-drone problems are presented, together with two heuristic approaches for solving large-scale problems. The scheduling of UAVs that take off from a delivery truck traveling on a route with a predetermined sequence of stops is investigated in [10]. The goal is to minimize the total duration of the delivery round, making sure that customers receive their deliveries. Finally, [24] addresses the delivery problem in relation to the battery charge level, and focuses on the impact of battery consumption on fleet scheduling.

The remainder of this paper is organized as follows. Section 2 reports the MILP formulation of the problem. Section 3 presents the metaheuristic approach to find suboptimal solutions. Section 4 discusses numerical results. Section 5 contains concluding remarks.

2 Problem Formulation

Let us focus on a DSP instance composed of $N$ drones to be scheduled on a single automatic platform within the time interval $[0, T]$, where $T$ is a given time horizon. We perform a time discretization of $[0, T]$, i.e., we focus on the discrete instants $t = 0, \Delta t, 2\Delta t, \ldots, T_{fin}\Delta t$, where $T_{fin}$ is the total number of time steps and $\Delta t := T / T_{fin}$ is the sampling time. With a little abuse of notation, from now on we refer to the discrete time steps $t = 0, 1, \ldots, T_{fin}$. Without loss of generality, we consider missions that require a unique landing, payload switch, and take-off per drone, and we assume that UAVs can carry only one payload at a time. A total number of different payload types equal to $K$ is considered, and the number of payloads of each type is equal to $P_k$, $k = 1, \ldots, K$. The type of payload carried by drones before landing is accounted for by the 0-1 parameter $B_{i,k}$, $i = 1, \ldots, N$, $k = 1, \ldots, K$, which is equal to 1 if drone $i$ carries payload type $k$ before landing, and 0 otherwise. After landing, each drone has a "desired" payload type $D_i$, $i = 1, \ldots, N$, that has


to be assigned to it in order to continue the mission after take-off. Each drone is characterized by a target landing time $T_i$, $i = 1, \ldots, N$, and by a time interval $[E_i, L_i]$ for landing, where $E_i$ is the earliest landing time and $L_i$ is the latest landing time. To allow a correct setup of the automatic platform for payload switches, we assume that two consecutive landings must be separated by at least $S$ time steps. Summarizing, we define an instance of the DSP as the set $I$ of all the previous input parameters, i.e., $I := \{N, T_{fin}, K, P_k, B_{i,k}, D_i, T_i, E_i, L_i, S,\; i = 1, \ldots, N,\; k = 1, \ldots, K\}$.

Two goals can be identified for the DSP: (i) schedule landings so that drones land as close as possible to target times, and (ii) after landing, assign a payload type to drones as close as possible to the desired one. Goals (i) and (ii) are pursued by formulating the DSP as an optimization problem. To this end, let us consider the following decision variables:

– $x_{i,t}$, $i = 1, \ldots, N$, $t = 0, 1, \ldots, T_{fin}$, is a binary variable equal to 1 if drone $i$ lands at time $t$, and 0 otherwise;
– $a_i$, $i = 1, \ldots, N$, is an integer variable representing the earliness of the landing of drone $i$ with respect to the target time $T_i$;
– $b_i$, $i = 1, \ldots, N$, is an integer variable accounting for the lateness of the landing of drone $i$ with respect to the target time $T_i$;
– $w_{i,k,t}$, $i = 1, \ldots, N$, $k = 1, \ldots, K$, $t = 0, 1, \ldots, T_{fin}$, is a binary variable equal to 1 if drone $i$ carries a payload of type $k$ at time $t$, and 0 otherwise;
– $z_{i,k,t}$, $i = 1, \ldots, N$, $k = 1, \ldots, K$, $t = 0, 1, \ldots, T_{fin}$, is an integer variable equal to the difference between the desired payload type and the actually-assigned one after drone landing; it is useful to linearize the cost and avoid absolute values;
– $y_{i,k,t}$, $i = 1, \ldots, N$, $k = 1, \ldots, K$, $t = 0, 1, \ldots, T_{fin}$, is a binary variable that allows writing the constraints on the payload types involving an absolute value in a linear way (see later for details).

We formulate the DSP as the following MILP problem:

$\min\; C_1 \sum_{i=1}^{N} (a_i + b_i) + C_2 \sum_{i=1}^{N} \sum_{k=1}^{K} \sum_{t=0}^{T_{fin}-1} z_{i,k,t}$   (1)

subject to

$\sum_{t=0}^{T_{fin}} x_{i,t} = 1, \quad i = 1, \ldots, N,$   (2)

$\sum_{i=1}^{N} x_{i,t} \le 1, \quad t = 0, 1, \ldots, T_{fin},$   (3)

$E_i \le \sum_{t=0}^{T_{fin}} t\, x_{i,t} \le L_i, \quad i = 1, \ldots, N,$   (4)

$\sum_{i=1}^{N} w_{i,k,t} \le P_k, \quad k = 1, \ldots, K,\; t = 0, 1, \ldots, T_{fin},$   (5)

$\sum_{k=1}^{K} w_{i,k,t} = 1, \quad i = 1, \ldots, N,\; t = 0, 1, \ldots, T_{fin},$   (6)

$\sum_{i=1}^{N} \sum_{l=t}^{t+S-1} x_{i,l} \le 1, \quad t = 0, 1, \ldots, T_{fin} - S,$   (7)

$\sum_{t=0}^{T_{fin}} x_{i,t}(t - T_i) = b_i - a_i, \quad i = 1, \ldots, N,$   (8)

$w_{i,k,0} = B_{i,k}, \quad i = 1, \ldots, N,\; k = 1, \ldots, K,$   (9)

$w_{i,k,t+1} \ge w_{i,k,t} - M x_{i,t}, \quad i = 1, \ldots, N,\; k = 1, \ldots, K,\; t = 0, 1, \ldots, T_{fin}-1,$   (10)

$w_{i,k,t+1} \le w_{i,k,t} + M x_{i,t}, \quad i = 1, \ldots, N,\; k = 1, \ldots, K,\; t = 0, 1, \ldots, T_{fin}-1,$   (11)

$z_{i,k,t} \le M x_{i,t}, \quad i = 1, \ldots, N,\; k = 1, \ldots, K,\; t = 0, 1, \ldots, T_{fin},$   (12)

$w_{i,k,t+1}(k - D_i) + M y_{i,k,t} \ge z_{i,k,t} - M(1 - x_{i,t}), \quad i = 1, \ldots, N,\; k = 1, \ldots, K,\; t = 0, 1, \ldots, T_{fin}-1,$   (13)

$-w_{i,k,t+1}(k - D_i) + M(1 - y_{i,k,t}) \ge z_{i,k,t} - M(1 - x_{i,t}), \quad i = 1, \ldots, N,\; k = 1, \ldots, K,\; t = 0, 1, \ldots, T_{fin}-1,$   (14)

$w_{i,k,t+1}(k - D_i) \le z_{i,k,t} + M(1 - x_{i,t}), \quad i = 1, \ldots, N,\; k = 1, \ldots, K,\; t = 0, 1, \ldots, T_{fin}-1,$   (15)

$-w_{i,k,t+1}(k - D_i) \le z_{i,k,t} + M(1 - x_{i,t}), \quad i = 1, \ldots, N,\; k = 1, \ldots, K,\; t = 0, 1, \ldots, T_{fin}-1,$   (16)

$x_{i,t} \in \{0, 1\}, \quad i = 1, \ldots, N,\; t = 0, 1, \ldots, T_{fin},$   (17)

$w_{i,k,t} \in \{0, 1\}, \quad i = 1, \ldots, N,\; k = 1, \ldots, K,\; t = 0, 1, \ldots, T_{fin},$   (18)

$y_{i,k,t} \in \{0, 1\}, \quad i = 1, \ldots, N,\; k = 1, \ldots, K,\; t = 0, 1, \ldots, T_{fin},$   (19)

$z_{i,k,t} \ge 0, \quad i = 1, \ldots, N,\; k = 1, \ldots, K,\; t = 0, 1, \ldots, T_{fin},$   (20)

$a_i \ge 0, \quad i = 1, \ldots, N,$   (21)

$b_i \ge 0, \quad i = 1, \ldots, N,$   (22)

where $M$ is a very large positive constant and $C_1$, $C_2$ are positive coefficients weighting the two terms of the cost, which reflect the previously-introduced objectives (i) and (ii). The first term penalizes the landing of drones before or after the corresponding target times. Together with constraint (8), it is a linearized version of the absolute value of the difference between the actual landing times of the drones and the corresponding target times, i.e., $\sum_{i=1}^{N} \bigl| \sum_{t=0}^{T_{fin}} t\, x_{i,t} - T_i \bigr|$. The second term penalizes the assignment to UAVs of payload types different from the desired ones after landing


(see also constraints (12)–(16)). Roughly speaking, the cost to pay for a payload type different from the desired one is the difference between the numerical indexes associated with the involved payload types. This is a simple yet effective way of devising a penalization for assignments different from the desired payload types, and it does not entail a loss of generality. Clearly, it would be possible to set up more complex relationships for the penalization term related to payload assignments, but this could entail possible nonlinearities with doubtful advantages in terms of effectiveness of the formulation.

Constraint (2) guarantees that each UAV lands only once, whereas (3) ensures that, for each time $t$, at most one drone can land. Equation (4) imposes that the landing time of the UAVs lies in the interval between the corresponding earliest and latest times. Constraint (5) enforces that, at each time step, the number of assigned payloads of each type does not exceed the overall number of payloads of the same type, while (6) guarantees that each drone carries one and only one payload. Formula (7) separates two consecutive landings by at least $S$ time instants, whereas (8) connects the decision variables $x_{i,t}$ to $a_i$ and $b_i$, for all $i = 1, \ldots, N$ and $t = 0, 1, \ldots, T_{fin}$. Equation (9) initializes the decision variable $w_{i,k,0}$ at $t = 0$ with the information on the carried payload type at the beginning of the mission, contained in the input parameter $B_{i,k}$. Constraints (10) and (11) are equivalent to the following, and establish a dynamics for the type of payloads carried by the various UAVs:

$w_{i,k,t+1} = \begin{cases} w_{i,k,t} & \text{if } x_{i,t} = 0, \\ \text{unconstrained} & \text{if } x_{i,t} = 1, \end{cases} \qquad i = 1, \ldots, N,\; k = 1, \ldots, K,\; t = 0, 1, \ldots, T_{fin}-1.$

More specifically, the payload of a drone $i$ not landing at time $t$ (i.e., such that $x_{i,t} = 0$) does not change from time $t$ to $t+1$. Otherwise, the constraints are trivially satisfied owing to the presence of the large positive constant $M$, i.e., $w_{i,k,t+1}$ is unconstrained. Similarly, constraints (12)–(16) are equivalent to the following:

$z_{i,k,t} = \begin{cases} 0 & \text{if } x_{i,t} = 0, \\ |w_{i,k,t+1}(k - D_i)| & \text{if } x_{i,t} = 1, \end{cases} \qquad i = 1, \ldots, N,\; k = 1, \ldots, K,\; t = 0, 1, \ldots, T_{fin}-1.$

In particular, a penalty is paid in the second term of the cost function (1) if a payload type different than the desired one is assigned to a drone landing at time t. The absolute value is then removed by using the previously-introduced binary variables yi,k,t and standard arguments, from which (12)–(16) are derived. More specifically, we have yi,k,t = 0 if wi,k,t (k − Di ) ≥ 0, and yi,k,t = 1 otherwise. Lastly, (17)–(22) define the decision variables.
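As a quick sanity check of the big-M linearization, the following toy snippet enumerates the bounds that constraints (12)–(16), together with $z_{i,k,t} \ge 0$, impose on a single $z_{i,k,t}$, minimizing over the auxiliary binary $y$; it is an illustrative verification added here, not part of the authors' method.

def min_feasible_z(w, k, D, x, M=1000):
    # Smallest z compatible with (12)-(16) and z >= 0 for one (i, k, t);
    # w plays the role of w_{i,k,t+1} and x that of x_{i,t}.
    lo = max(0, w * (k - D) - M * (1 - x), -w * (k - D) - M * (1 - x))
    feasible = []
    for y in (0, 1):
        hi = min(M * x,
                 w * (k - D) + M * y + M * (1 - x),
                 -w * (k - D) + M * (1 - y) + M * (1 - x))
        if lo <= hi:
            feasible.append(lo)
    return min(feasible) if feasible else None

assert min_feasible_z(w=1, k=4, D=2, x=1) == 2  # assigned type 4, desired 2
assert min_feasible_z(w=1, k=1, D=2, x=1) == 1  # assigned type 1, desired 2
assert min_feasible_z(w=1, k=2, D=2, x=0) == 0  # no landing, no penalty

In all cases the minimal feasible value coincides with $|w_{i,k,t+1}(k - D_i)|$ when $x_{i,t} = 1$ and with 0 when $x_{i,t} = 0$, as stated above.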


3 A Metaheuristic Algorithm Based on Direct Search

The goal of the metaheuristic approach is to find the optimal values of the drone landing times $\tau_i := \sum_{t=0}^{T_{fin}} t\, x_{i,t}$ and of the payload type assignments after landing $\pi_i := \sum_{k=1}^{K} \sum_{t=0}^{T_{fin}-1} k\, w_{i,k,t+1} x_{i,t}$, for all $i = 1, \ldots, N$. To this end, let $\rho_i := \sum_{k=1}^{K} k\, B_{i,k}$, $i = 1, \ldots, N$, be the payload type carried by drone $i$ before landing, and let $G_k := P_k - \sum_{i=1}^{N} B_{i,k}$, $k = 1, \ldots, K$, be the number of payloads of type $k$ that are available on the landing platform at $t = 0$ and that are not carried by drones. The latter quantity has to be updated after each landing to track the number of payloads of each type that are actually carried by drones, as detailed in the following.

Optimization is done by using the GPS algorithm, which performs a local search on a given grid constructed by sampling the objective function around the current solution to reduce its value at each iteration [9]. In more detail, we focus on the minimization of the following cost, which is an equivalent version of (1) written in terms of the variables $\tau_i$ and $\pi_i$, with the introduction of the function $f : \mathbb{N} \to \mathbb{N}$ in the second term to account for the constraints on the availability of payloads, as detailed later on:

$J(X) := C_1 \sum_{i=1}^{N} |\tau_i - T_i| + C_2 \sum_{i=1}^{N} |f(\pi_i) - D_i|,$   (23)

where $X := (\tau_i, \pi_i,\; i = 1, \ldots, N) \in \mathbb{R}^{2N}$, $C_1$ and $C_2$ are the same weighting coefficients used in (1), and

$f(\pi_i) := \begin{cases} \pi_i & \text{if } G_{\pi_i} > 0, \\ \min_{\pi \in \{1,\ldots,K\}} \bigl( |\pi - \pi_i| : G_\pi > 0 \bigr) & \text{otherwise.} \end{cases}$

As in (1), the first term of (23) penalizes deviations from target landing times, while the second one penalizes the assignment of payloads different from the desired ones.

In the following, we briefly describe the idea of the GPS algorithm. Let $X_k$ be the solution at iteration $k$, obtained with a grid size $\Delta_k > 0$. The function $J(X)$ is evaluated at the points $X_k \pm \Delta_k e_j$, $j = 1, \ldots, 2N$, around the current solution, discarding unfeasible ones (the constraints are described in the next paragraph). Since landing times and payload type assignments after landing are both integer numbers, a proper rounding has to be performed depending on the grid size $\Delta_k$. The set of generated points is called the pattern, the generation of the pattern is called polling, and the $e_j$ are the versors of the coordinate axes. After the construction of the pattern, we look for the pattern point $X_k^\circ$ of minimum cost such that $J(X_k^\circ) < J(X_k)$. The polling is successful if $X_k^\circ$ exists. In this case, we generate a new solution by setting $X_{k+1} := X_k^\circ$, and we increase the grid size to look for possible better points far from the current solution, i.e., we let $\Delta_{k+1} := 2\Delta_k$. Otherwise, the polling is unsuccessful: we set $X_{k+1} := X_k$ and reduce the grid size to allow a finer search around the current best solution, i.e., we let $\Delta_{k+1} := \Delta_k / 2$. The procedure is


iterated until the grid size $\Delta_k$ is smaller than a certain tolerance value, or until a maximum number of iterations or cost-function evaluations is reached [19].

Constraints (5), (6), and (9)–(16) on the availability and dynamics of the various payload types are taken into account by means of the function $f$ and the update of $G_k$ at each landing, as follows. To fix ideas, let us focus on a generic drone $i$. First, the quantity $G_{\rho_i}$ is increased by one unit to account for the payload type $\rho_i$ that drone $i$ was carrying before landing. Then, it is checked whether the payload type $\pi_i$ selected by the GPS algorithm is available on the platform, i.e., whether $G_{\pi_i} > 0$. If the payload type $\pi_i$ is available, then it is actually assigned to drone $i$, i.e., $f(\pi_i) = \pi_i$, and the quantity $G_{\pi_i}$ is reduced by one unit to model the pick-up of the payload from the platform. Otherwise, an alternative payload type has to be assigned to drone $i$, since the choice of $\pi_i$ does not satisfy the availability constraints. The payload type actually assigned to drone $i$ is chosen as the one closest to $\pi_i$ among those available on the platform, i.e., $f(\pi_i) = \min_{\pi \in \{1,\ldots,K\}} (|\pi - \pi_i| : G_\pi > 0)$. Constraints (2)–(4) and (7) related to landing times are taken into account by imposing proper bounds in the generation of the pattern points, while (8) is implicitly considered through the definition of the cost (23).
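A compact sketch of the GPS polling loop just described is reported below. It assumes that the bound handling and the payload-availability mechanism (the function $f$ and the updates of $G_k$) are embedded in the cost function J; it is an illustrative reconstruction rather than the Matlab patternsearch implementation used by the authors.

import numpy as np

def pattern_search(J, x0, delta0=4.0, tol=1e-6, max_iter=1000):
    # x0 collects landing times and payload picks (length 2N); names
    # and default parameters are illustrative.
    x = np.array(x0, dtype=float)
    delta, best = delta0, J(x)
    for _ in range(max_iter):
        if delta < tol:
            break
        improved = False
        # Poll the 2N coordinate directions +/- e_j around the incumbent,
        # rounding because the decision variables are integers.
        for j in range(len(x)):
            for sign in (+1.0, -1.0):
                trial = x.copy()
                trial[j] = round(trial[j] + sign * delta)
                value = J(trial)
                if value < best:      # successful poll
                    x, best = trial, value
                    improved = True
        # Expand the grid after a successful poll, contract it otherwise.
        delta = 2.0 * delta if improved else delta / 2.0
    return x, best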

4 Numerical Results

We report in this section preliminary numerical results to verify the effectiveness of the MILP formulation and of the metaheuristic approach in finding a solution to the DSP. Tests were performed using a personal computer equipped with an Intel Core i9 processor with a clock frequency of 3.6 GHz and 64 GB of RAM.

We considered 15 instances of the DSP, each one corresponding to a fixed number of drones, from 10 to 150. In particular, the $l$-th instance $I^{(l)} := \{N^{(l)}, T_{fin}^{(l)}, K^{(l)}, P_k^{(l)}, B_{i,k}^{(l)}, D_i^{(l)}, T_i^{(l)}, E_i^{(l)}, L_i^{(l)}, S^{(l)},\; i = 1, \ldots, N^{(l)},\; k = 1, \ldots, K^{(l)}\}$ had a number of drones $N^{(l)} := 10\,l$, where $l = 1, \ldots, 15$. All the instances were characterized by $K^{(l)} := 5$ different payload types, and by a total number of payloads $\sum_{k=1}^{K^{(l)}} P_k^{(l)}$ equal to $N^{(l)} + 3$, where $P_k^{(l)} := \sum_{i=1}^{N^{(l)}} B_{i,k}^{(l)} + \eta_k^{(l)}$, $k = 1, \ldots, K^{(l)}$. The term $\eta_k^{(l)}$ is a nonnegative integer random number drawn from a discrete uniform distribution in the range $[0, 3]$, such that $\sum_{k=1}^{K^{(l)}} \eta_k^{(l)} = 3$. In other words, the three exceeding payloads with respect to the number of drones $N^{(l)}$ were assigned to the various payload types via random extractions. The parameter $S^{(l)}$ was fixed to 2 for all $l = 1, \ldots, 15$. For each instance $l$, the payload types $B_{i,k}^{(l)}$ carried by drones before landing, and the desired ones after landing $D_i^{(l)}$, were randomly extracted from discrete uniform distributions in the ranges $[0, 1]$ and $[1, K^{(l)}]$, respectively. The earliest landing times $E_i^{(l)}$ were again randomly extracted from discrete uniform distributions in the range $[E_{i-1}^{(l)}, 5(N^{(l)} - 1)]$, with $E_0^{(l)} := 0$ for all $l$. Instead, the latest landing times $L_i^{(l)}$ were randomly drawn from discrete uniform distributions in the range $[E_i^{(l)} + 5, 5(N^{(l)} - 1)]$, with $L_i^{(l)} := 5(N^{(l)} - 1)$ if $E_i^{(l)} + 5 > 5(N^{(l)} - 1)$. Lastly,


the target landing times $T_i^{(l)}$ were randomly drawn from discrete uniform distributions in the range $[E_i^{(l)}, L_i^{(l)}]$. The total number of time steps $T_{fin}^{(l)}$ was set to $\max_i \{L_i^{(l)}\}$ for each instance $l = 1, \ldots, 15$, with a sampling time $\Delta t$ equal to 1 min. In this way, the largest mission duration, attained for the instances with 150 drones, was equal to about 12 h. The coefficients $C_1$ and $C_2$ in (1), (23) were both fixed to 1.

We solved the DSP using the CPLEX solver applied to the MILP formulation (1)–(22), and the metaheuristic approach described in Sect. 3. Concerning CPLEX, we set a maximum time limit equal to 1 h and a maximum relative optimality gap equal to $10^{-4}$. The large positive constant $M$ was chosen equal to 1000 via a trial-and-error procedure. As regards the metaheuristic approach, we relied on the Matlab function patternsearch, which provides an implementation of the GPS algorithm described in Sect. 3, with tolerance values for the stopping criteria equal to $10^{-6}$ (including the grid size), and a maximum number of iterations equal to $200\,N^{(l)}$ for $l = 1, \ldots, 15$. All the parameters of the problem instances were tuned so as to have, on the one hand, a scarcity of resources that makes the problem not too easy to solve and, on the other hand, problem instances solvable by CPLEX within the imposed time limit of 1 h, to allow a fair comparison with the metaheuristic approach.

Since all instances of the DSP involve the generation of random numbers, we repeated the random extractions 10 times to ensure statistical significance of the results. Thus, each instance $I^{(l)}$ was run a total of 10 times with different values for the random quantities. Performance was evaluated by computing the averages of the following indicators over the aforementioned 10 runs:

– the total cost (TC), given by the cost function in (1); it measures the overall trade-off between objectives (i) and (ii);
– the total time cost (TTC), given by the first term of the cost function in (1); it accounts for the deviation of drone landing times from target times;
– the total payload cost (TPC), given by the second term of the cost function in (1); it quantifies the difference between the payload type desired by drones after landing and the actually-assigned one;
– the CPU time (CPU), which measures the time required to find a solution.

Tables 1 and 2 report the values of the averages and of the standard deviations of the performance indicators, respectively, computed over the considered 10 simulation runs for each instance $l = 1, \ldots, 15$. In the tables, "MILP" denotes the solution of the MILP formulation found by CPLEX, and "META" the one provided by the metaheuristic method. The column "gap" indicates the percentage relative difference between the results provided by the metaheuristic and by the MILP-based approach.

We observe that the larger the number of drones, the higher the average values of the performance indicators. Thus, the difficulty in finding a solution to the DSP increases with $N$, as expected, and the CPU time required to find a solution also grows with $N$. An almost linear increase of the average TC, TTC, and TPC can be observed for both the MILP-based approach and the metaheuristic one. Instead, the computational requirements of the MILP-based method grow exponentially with the number of drones, whereas the growth is much more limited and linear for the metaheuristic. The accuracy of the metaheuristic method is satisfactory when compared with the MILP-based approach.

Table 1 Averages of the performance indicators over 10 simulation runs. Columns: N (from 10 to 150, plus a final Avg row); TC, TTC, TPC, and CPU (s), each reported as MILP, META, and gap (%). (Tabular data omitted.)

Table 2 Standard deviations of the performance indicators over 10 simulation runs, with the same layout as Table 1. (Tabular data omitted.)

In fact, the average relative gap in terms of the TC is equal to 5.74% (see the last line, fourth column of Table 1), and it is almost independent of the number of drones characterizing the problem instance. Similar values for the relative gap are experienced for the TPC, while an increase up to a maximum of 15.52% can be measured for the TTC. For 60 and 70 UAVs, the metaheuristic provides a solution with drone landing times closer to targets as compared with the MILP-based method. However, such an advantage in the TTC is compensated by greater values of the TPC. Concerning computational time, the metaheuristic approach requires, on average, about 13 s to find a solution in the largest instance of the DSP, corresponding to 150 drones, as compared with about $2.5 \cdot 10^3$ s for the MILP-based method. Thus, the proposed metaheuristic method represents a good compromise between accuracy of suboptimal solutions and required computational effort. On average, a gap of 5.74% on the total cost and a saving of two orders of magnitude in the computational time are observed. As regards standard deviations, the results in Table 2 reveal that the two methods guarantee, on average, almost the same behavior in terms of dispersion around the average values (the maximum average gap is equal to 12.40% for the TPC), thus confirming the effectiveness of the proposed metaheuristic algorithm.

5 Conclusions

We have presented a MILP formulation of the problem of scheduling the landings on automatic platforms and the payload switches of a set of UAVs performing given missions. Since the optimization problem may be difficult to solve for large numbers of drones, we have devised a metaheuristic approach based on a direct-search method, which has proved effective for both low- and high-dimensional problem instances. Future efforts will first be devoted to additional tests of the proposed metaheuristic approach on different, more complex problem instances. Then, we will focus on the development of ad-hoc heuristic algorithms and perform proper comparisons. Moreover, we plan to adopt multi-objective optimization methods, as well as to extend the approach to the case of multiple landing platforms and to the possibility for drones to carry more than one payload and land multiple times.

References

1. Airobotics Company. https://www.airoboticsdrones.com. Accessed 23 May 2022
2. Amazon Prime Air. https://www.amazon.com/Amazon-Prime-Air/. Accessed 23 May 2022
3. Inspire Company. https://www.inspire.flights. Accessed 23 May 2022
4. Apeland, J., Pavlou, D., Hemmingsen, T.: Suitability analysis of implementing a fuel cell on a multirotor drone. J. Aerosp. Technol. Manag. 12, 1–14 (2020)


5. Ausonio, E., Bagnerini, P., Gaggero, M.: Scheduling landing and payload switch of unmanned aerial vehicles on a single automatic platform. In: Proceedings of the IEEE International Conference on Automation Science and Engineering, pp. 499–504 (2022)
6. Ausonio, E., Bagnerini, P., Ghio, M.: Drone swarms in fire suppression activities: a conceptual framework. Drones 5, 1–22, Art. no. 17 (2021)
7. Beasley, J.E., Krishnamoorthy, M., Sharaiha, Y.M., Abramson, D.: Scheduling aircraft landings—the static case. Transp. Sci. 34, 180–197 (2000)
8. Bogani, C., Gasparo, M.G., Papini, A.: Pattern search method for discrete L1-approximation. J. Optim. Theory Appl. 134, 47–59 (2007)
9. Bogani, C., Gasparo, M.G., Papini, A.: Generalized pattern search methods for a class of nonsmooth optimization problems with structure. J. Comput. Appl. Math. 229, 283–293 (2009)
10. Boysen, N., Briskorn, D., Fedtke, S., Schwerdfeger, S.: Drone delivery from trucks: drone scheduling for given truck routes. Networks 72, 506–527 (2018)
11. Faye, A.: Solving the aircraft landing problem with time discretization approach. Eur. J. Oper. Res. 242, 1028–1038 (2015)
12. Floreano, D., Wood, R.J.: Science, technology and the future of small autonomous drones. Nature 521, 460–466 (2015)
13. Ghio, M.: Methods and apparatus for the employment of drones in firefighting activities. US Patent US11104436B2 (2017)
14. Girish, B.: An efficient hybrid particle swarm optimization algorithm in a rolling horizon framework for the aircraft landing problem. Appl. Soft Comput. 44, 200–221 (2016)
15. Gonzalez-Jorge, H., Martinez-Sanchez, J., Bueno, M., Arias, P.: Unmanned aerial systems for civil applications: a review. Drones 1, 1–19, Art. no. 2 (2017)
16. Hassanalian, M., Abdelkefi, A.: Classifications, applications, and design challenges of drones: a review. Progress Aerosp. Sci. 91, 99–131 (2017)
17. Ikli, S., Mancel, C., Mongeau, M., Olive, X., Rachelson, E.: The aircraft runway scheduling problem: a survey. Comput. Oper. Res. 132, 1–20, Art. no. 105336 (2021)
18. Kolda, T.G., Lewis, R.M., Torczon, V.: Optimization by direct search: new perspectives on some classical and modern methods. SIAM Rev. 45, 385–482 (2003)
19. Lewis, R.M., Torczon, V.: Pattern search methods for linearly constrained minimization. SIAM J. Optim. 10, 917–941 (2000)
20. Murray, C.C., Chu, A.G.: The flying sidekick traveling salesman problem: optimization of drone-assisted parcel delivery. Transp. Res. Part C: Emerging Technol. 54, 86–109 (2015)
21. Salehipour, A.: An algorithm for single- and multiple-runway aircraft landing problem. Math. Comput. Simul. 175, 179–191 (2020)
22. Salehipour, A., Modarres, M., Naeni, L.M.: An efficient hybrid meta-heuristic for aircraft landing problem. Comput. Oper. Res. 40, 207–213 (2013)
23. Sargolzaei, A., Abbaspour, A., Crane, C.: Control of Cooperative Unmanned Aerial Vehicles: Review of Applications, Challenges, and Algorithms, pp. 229–255. Springer, Cham (2020)
24. Torabbeigi, M., Lim, G.J., Kim, S.J.: Drone delivery scheduling optimization considering payload-induced battery consumption rates. J. Intell. Robot. Syst. 97, 471–487 (2020)
25. Vadlamani, S., Hosseini, S.: A novel heuristic approach for solving aircraft landing problem with single runway. J. Air Transp. Manag. 40, 144–148 (2014)
26. Wang, M., Chen, X.: System and method for managing unmanned aerial vehicles. US Patent US20170190260A1 (2017)

A Flexible Job Shop Scheduling Model for Sustainable Manufacturing

Rosita Guido, Gabriele Zangara, Giuseppina Ambrogio, and Domenico Conforti

Abstract This research work aims at optimizing the energy costs in manufacturing industries. We pay particular attention to the use of energy in production processes and to the costs deriving from its consumption, considering flexibility in machine selection, job sequencing, machine idling, and machine switch-off/on operations. We formulate a flexible job shop scheduling model that minimizes the costs due to energy consumption and considers a planning period over which to schedule jobs. To test the optimization model, we consider a case study of a multinational corporation that operates in the manufacturing sector. The collected real data are related to process activities for heat exchanger products. The results show that a plan and schedule of operations over the planning period is found in a few seconds, and that the costs due to the energy required for production are about 10% lower than in the current production.

Keywords Flexible job shop · Scheduling · Energy · Sustainable manufacturing

1 Introduction

The scenario in which the following work is placed has become of primary importance for companies, which are called to put into practice the principles of Sustainable Manufacturing (SM). It is defined as "the creation of manufactured products that use processes that minimize negative environmental impacts, conserve energy and


natural resources, are safe for employees, communities, consumers, and are economically healthy" [1]. Clearly, putting such a profound change into practice represents a big challenge, certainly with obstacles, as evidenced in the work on the barriers to sustainable production in manufacturing organizations [2]. One strategy to address the problem is based on the concept of the Smart Factory, "a manufacturing solution that provides such flexible and adaptive manufacturing processes that will solve the problems that arise on a manufacturing facility with dynamic and rapidly changing boundary conditions in a world of increasing complexity" [3]. As can easily be realised, the advantages brought to a manufacturing organization would be of strategic dimensions, both regarding the optimization of the decision-making process along the entire value chain and regarding greater efficiency in resource management. Companies play a fundamental and irreplaceable role in the economic and social system of a territory and, therefore, can offer a significant contribution to the good of communities and the environment. For this reason, the integration between Industry 4.0 technologies and sustainability must be implemented in companies, bringing benefits for all.

An energy-aware Flexible Job Shop Scheduling (FJSS) model has been developed to support companies in making programming, sequencing, and scheduling decisions taking multiple factors into account. It aims to schedule the production orders in a given planning period by selecting the machines on which to carry out certain processes and by sequencing them in an optimal way, so as to reduce the overall processing time, i.e., the makespan, and the cost of the needed energy.

1.1 Literature Review

Sustainable production acts on all relevant aspects of an enterprise: products, processes, and systems. Jawahir and Dillon Jr in 2007 introduced the 6R concept to facilitate the approach to sustainable production; the six Rs refer to reduce, reuse, recycle, redesign, recover, regenerate [4]. Industry 4.0, as we find in the manuscript [5], implies the continuous automation of traditional production practices using intelligent technology. The current structure of IoT applications sees its greatest application in smart cities, as can be seen from [6]. Moreover, as stated by De Crescenzio [7], regarding augmented reality applied to the oil and gas sector: "the sensors integrated in the viewers can help identify abnormal or critical conditions that the human eye would not be able to perceive autonomously". Finally, the use of new technologies could support the integration of renewable energy sources. When we talk about renewable energy, we refer to resources that can reproduce at a speed greater than or equal to their rate of use. Solar energy is certainly the best-known form of renewable energy, and there are three technologies to accumulate this type of energy: solar photovoltaic, solar thermal, and solar thermodynamic. Dealing with renewable energy is a must for the companies that are aware of it. In fact, as reported by the data of the Global Trends in Renewable Energy Investment, countries are making numerous investments in this regard [8]. The industry


is certainly the largest consumer of energy and, consequently, in recent years it has been working with systems that can reduce this cost item as much as possible. The most important energy-saving actions can take place at different levels: managerial, technological, and redesign of the plant layout. The first is based on a careful study of the entire plant, highlighting any waste or excessively energy-consuming practices. The second type of intervention is at the technological level: the economic commitment on the part of the company is much greater, since entire plants or parts of them are replaced, so as to introduce latest-generation equipment that is far more efficient. When the redesign of the plant layout is considered, we refer to the methods offered by Lean Manufacturing to support energy saving. Hence the concept of Lean and Green, that is, using the same tools as Lean Manufacturing to positively affect the company's environmental impact and the related costs.

Nowadays, sustainability is not just a topic of the strategic plan but is also reflected in operational objectives, so much so that, in research activities, scheduling problems assume an alternative approach to improve sustainable production. Energy savings in no way compromise the productivity of the company, as described in [9]. Indeed, one of the operational objectives is precisely the study of accurate scheduling and the effective use of natural resources in order to achieve greater savings. Job Shop Scheduling (JSS) models fall under the programming problems of industrial production and are among the most difficult problems of combinatorial optimization [10, 11]. Traditionally, research activity has focused mainly on temporal aspects, such as, for example, the minimization of the makespan [12, 13]. Numerous articles in the literature have addressed several problems considering various objective functions, as summarized in the recent review [14]. Nowadays, however, companies pay greater attention to energy consumption. A multi-objective model to minimize overall process costs is proposed in [15], while workload balancing and total energy consumption are considered in [16, 17]. A similar problem has been addressed in the literature by a few researchers. The authors of [16] formulated several optimization models whose goal is to minimize the overall cost of energy.

The optimization model that we propose is based on the FJSS model of [16]. It belongs to the class of optimization problems known as FJSS problems, since some operations can be carried out by multiple machines. In addition, an FJSS problem has the assignment and scheduling problems as two subproblems. Its fundamental characteristics are the processing times of the operations and the matrix of sub-machines that defines the feasible job-machine assignments. The implementation by companies of optimization models such as those proposed in this work to support decisions has, in general, the advantage of leading to considerable economic savings, making processes more competitive and greener.


2 Optimization Model for Efficient Production Management

The problem addressed concerns the planning and scheduling of jobs within a defined planning horizon. Following the need to reduce both energy consumption and the related costs, a production planning and scheduling model has been developed with the goal of minimizing energy costs while respecting all the constraints existing in the production process. We assume that:

• all the data of the problem are deterministic and known with certainty;
• the planning period consists of a defined number of work shifts;
• energy costs vary across work shifts and are constant throughout a shift;
• a job consists of one or more operations, which have to be performed respecting a precise sequence;
• each machine can only perform one operation at a time;
• no operation can be interrupted once started, so preemption is not allowed;
• all jobs and machines are available at the beginning of the planning period;
• an operation cannot be performed by multiple machines at the same time;
• material transport and machine setup times are considered negligible;
• turning off a machine does not consume energy.

Each working day is divided into working shifts with known start and end times. For each available machine, the operations it can perform and the required times are known. The objective function is the weighted sum of three terms. In fact, the total energy is not only attributable to the consumption of the machines used in the different processes of the jobs: a part depends on the energy consumed when the machines are on stand-by, and another part on that necessary to power the auxiliary systems. Electricity also has a different cost depending on the shift considered, since the unit costs of energy vary according to it. The scheduling of the operations consists of determining a starting time for each of them, while taking into account precedence constraints and machine availability; each operation is assigned to only one machine. This is also known as routing [18]. The model chooses which operations are performed in each shift, selects the machines on which to carry out the different processes, schedules production on the selected machines, and sequences the operations. Decision makers are supported by the results of the optimization model in deciding which jobs to perform in which work shifts.
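To make the cost structure concrete, the following is a minimal sketch, under simplifying assumptions (machine idle time measured against the shift makespan, a single energy price per shift), of how the three energy terms of a fixed schedule could be evaluated; all container names are illustrative and this is not the authors' implementation.

def schedule_energy_cost(shifts, assignments, PP, pt, IP, P_aux, ec):
    # assignments[s][m]: ordered list of (job, operation) pairs run by
    # machine m in shift s; PP/pt: processing power/time per (j, i, m);
    # IP: idle power per machine; P_aux: auxiliary power per shift;
    # ec: energy cost per shift.
    total_cost = 0.0
    for s in shifts:
        ops = assignments[s]
        busy = {m: sum(pt[j, i, m] for (j, i) in ops[m]) for m in ops}
        makespan = max(busy.values(), default=0.0)
        processing = sum(PP[j, i, m] * pt[j, i, m]
                         for m in ops for (j, i) in ops[m])
        idle = sum(IP[m] * (makespan - busy[m]) for m in busy)
        auxiliary = P_aux[s] * makespan
        total_cost += ec[s] * (processing + idle + auxiliary)
    return total_cost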

2.1 Optimization Model Formulation

The sets, indices, parameters, and decision and auxiliary variables used are reported in Tables 1 and 2. The set $O_j$ is an ordered sequence of operations defined for each job $j \in J$.


Table 1 Sets and parameters

Sets
• $J$: set of $n$ jobs, indexed by $j = 1, \ldots, n$
• $M$: set of $k$ machines, indexed by $m = 1, \ldots, k$
• $S$: set of work shifts, indexed by $s = 1, \ldots, |S|$
• $S_j$: set of operations of job $j \in J$, indexed by $i = 1, \ldots, |S_j|$
• $O_j = \{o_{ij}\}$: sequence of operations of job $j \in J$
• $M_{ij} \subseteq M$: set of machines that can process operation $o_{ij}$
• $Pos_m$: set of the positions $1, \ldots, P_m$ on machine $m$

Parameters
• $TM_{sm}$: available time of machine $m$ during working shift $s$
• $PP_{ijm}$: processing power of operation $o_{ij}$ on machine $m \in M_{ij}$
• $pt_{ijm}$: processing time of operation $o_{ij}$ on machine $m \in M_{ij}$
• $SW_{ms}$: maximum number of times that machine $m$ can be switched on/off in shift $s$
• $P_m$: maximum sequence length on machine $m$, given as $P_m = \sum_{j \in J} \sum_{i \in O_j : m \in M_{ij}} o_{ij}$
• $P'_m$: $P'_m = P_m - 1$
• $IP_m$: idle power of machine $m$
• $TS_m$: time consumed by machine $m$ for one turning-off/on operation
• $ES_m$: energy consumed to switch machine $m$ off/on
• $TB_{ms}$: break-even time of machine $m$, defined as $\max\{TS_m, ES_m / IP_m\}$
• $P_s$: power needed for auxiliary systems (e.g., light, air conditioning, ventilation, heating) in period $s$
• $T_s$: duration of shift $s$
• $ec_s$: energy cost during shift $s$
• $K \gg 0$: a large positive number

Table 2 Decision and auxiliary variables
• $x_{ijmsp} = 1$ if operation $i \in O_j$ is performed by machine $m$ in shift $s$ and position $p$; 0 otherwise
• $z_{mps} = 1$ if the on/off strategy on machine $m$ is used in shift $s$, between positions $p$ and $p+1$; 0 otherwise
• $So_{ijs} \ge 0$: start time of operation $o_{ij}$ in shift $s$
• $Eo_{ijs} \ge 0$: end time of operation $o_{ij}$ in shift $s$
• $Sm_{mps} \ge 0$: start time instant on machine $m$ of the operation processed in position $p$ in shift $s$
• $Fm_{mps} \ge 0$: final time instant on machine $m$ of the operation processed in position $p$ in shift $s$
• $C_s^{max} > 0$: makespan, i.e., maximum completion time of all operations performed in shift $s$
• $En_{mps} \ge 0$: energy consumption of machine $m$ between the operation performed in position $p$ and the one performed in position $p+1$ in shift $s$


The use of indices, if not specified, follows the meaning reported in the tables. The objective function (1) minimizes the overall energy cost in the planning horizon, given as the sum of three terms: the first term is the total idle energy consumption, including shutdowns; the second term is the total processing energy consumption; the third term is the overall energy needed for auxiliary systems:

$\min \sum_{s \in S} \Bigl( \sum_{m \in M} \sum_{p \in Pos_m} En_{mps} + \sum_{j \in J} \sum_{i \in O_j} \sum_{m \in M_{ij}} \sum_{p \in Pos_m} PP_{ijm}\, pt_{ijm}\, x_{ijmsp} + P_s\, C_s^{max} \Bigr)$   (1)

$\sum_{m \in M_{ij}} \sum_{p \in Pos_m} \sum_{s \in S} x_{ijmsp} = 1 \quad \forall j \in J,\; i \in O_j$   (2)

$\sum_{j \in J} \sum_{i \in O_j} x_{ijmsp} \le 1 \quad \forall m \in M,\; p \in Pos_m,\; s \in S$   (3)



xi jmsp ≥

j∈J i∈O j

 j∈J i∈O j

  

xi, j,m s¯ p ≥

m∈Mi j p∈Pm s¯ ∈S|¯s sT 2 ;   while for each route Rd = s1 , s2 , . . . , sn d we denote with • Rdstart , Rdend : the time the driver is supposed to leave/return to the depot respectively; • Rdduration : the route total duration given by Rdend− Rdstart ; s + m(sn d , d); • Rddistance : the route total travel distance given by i∈{1,2,...,n d } i distance  • Rdcapacity : the route capacity buffer given by dC − i∈{1,2,...,n d } sC ;   • Rddelay : the route total delay given by i∈{1,2,...,n d } si delay + max 0, dT 1 −     Rdstart + max 0, Rdend − dT 2 + max 0, Rdduration − d D . A route Rd is considered feasible if 

         Rdduration ≤ d D ∧ Rdstart ≥ dT 1 ∧ Rdend ≤ dT 2 ∧ Rddelay = 0 ∧ Rdcapacity ≥ 0

A solution is defined as a set of disjoint routes R̄ = {Rd}_{d∈D}. We do not require all the stops to be assigned to a route, and we define the pool as the set of unassigned stops, i.e. R̄_unassigned = S \ (∪_{d∈D} Rd). The cost of a single route d is given by

Rd_cost = dM · Rd_distance + dT · Rd_duration + α1 · Rd_delay − α2 · max(0, Rd_capacity)   (1)

while the cost of the solution is given by

\sum_{d \in D} Rd_cost + \sum_{s \in R̄_unassigned} β_{s_P},   (2)

where α1, α2 and {β_i}_{i∈{p_1, p_2, ..., p_k}}, with β_i ≥ β_j if i ≥ j, are custom penalty coefficients for delay and for unserved stops, respectively.
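The two cost functions translate directly into code. The following is a minimal Python sketch of (1) and (2); the field and parameter names (Route dataclass, ALPHA_1, ALPHA_2) are illustrative assumptions, not the product's actual data model.

from dataclasses import dataclass

@dataclass
class Route:
    distance: float   # Rd_distance
    duration: float   # Rd_duration = Rd_end - Rd_start
    delay: float      # Rd_delay, total accumulated delay
    capacity: float   # Rd_capacity, remaining capacity buffer
    d_M: float        # driver cost per unit distance
    d_T: float        # driver cost per unit time

ALPHA_1, ALPHA_2 = 100.0, 10.0   # example penalty coefficients

def route_cost(r: Route) -> float:
    """Cost (1): distance and duration costs, a penalty on delay, and a
    term rewarding a positive remaining capacity buffer."""
    return (r.d_M * r.distance + r.d_T * r.duration
            + ALPHA_1 * r.delay - ALPHA_2 * max(0.0, r.capacity))

def solution_cost(routes, unassigned_priorities, beta) -> float:
    """Cost (2): sum of route costs plus the penalty beta[p] for every
    unassigned stop of priority p."""
    return (sum(route_cost(r) for r in routes)
            + sum(beta[p] for p in unassigned_priorities))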

2.1 Route Evaluation

To evaluate all the entities previously described for a route Rd, we slightly modified the well-known algorithm based on earliest execution time, also known as forward time slack [10, 11], to handle delays on services and driver rest breaks. We do not allow any route to start earlier than the driver's earliest time. To reduce the algorithm complexity to O(|Rd|), we use a heuristic that chooses each stop's time window by selecting the earliest possible one, depending on the arrival time s_arrival. We do not allow a delay in any time window apart from the last, meaning that if the vehicle is late for the first time window, we wait until the second one. While iterating over all the route stops, we also check whether the rest break should be scheduled. The strategy is to add the time needed for the rest to the stop wait time whenever needed. A sketch of the rest break heuristic is presented in Algorithm 1. The delay on stop service is evaluated right after this heuristic.

Algorithm 1 Rest break algorithm
Require: s_arrival, s_wait                       ▷ The incumbent stop
Require: dBT1, dBT2, dBD                         ▷ The rest break
1:  s_service ← s_arrival + s_wait
2:  s_departure ← s_service + sD
3:  if s_service ≥ dBT1 then
4:    if s_arrival ≤ dBT1 then                   ▷ Take rest during wait time at dBT1
5:      if dBD > s_service − dBT1 then           ▷ Add a buffer in wait time if needed
6:        s_wait ← dBD + dBT1 − s_arrival
7:      end if
8:    else if dBT1 < s_arrival ≤ dBT2 then       ▷ Rest starts at s_arrival
9:      s_wait ← max(dBD, s_wait)                ▷ Add a buffer in wait time if needed
10:   else if s_arrival > dBT2 then              ▷ Add rest to travel time
11:     s_arrival ← s_arrival + dBD
12:     s_wait ← max(0, s_wait − dBD)
13:   end if
14: else
15:   if s_service + sD ≥ dBT2 then              ▷ Stop service covers the entire break window
16:     s_wait ← dBT1 + dBD − s_arrival          ▷ Wait till break can start
17:   end if                                     ▷ Otherwise, do not handle the rest yet
18: end if
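A direct Python transcription of Algorithm 1 follows; it is a sketch that mirrors the pseudocode one-to-one, with parameter names (bt1, bt2, bd for the driver break window and duration) chosen here for readability.

def apply_rest_break(s_arrival, s_wait, s_duration, bt1, bt2, bd):
    """Rest-break heuristic of Algorithm 1: returns the updated arrival
    and wait times for the incumbent stop."""
    s_service = s_arrival + s_wait
    if s_service >= bt1:
        if s_arrival <= bt1:
            # take the rest during the wait time at bt1
            if bd > s_service - bt1:
                s_wait = bd + bt1 - s_arrival      # enlarge wait if needed
        elif bt1 < s_arrival <= bt2:
            s_wait = max(bd, s_wait)               # rest starts at arrival
        else:  # s_arrival > bt2
            s_arrival += bd                        # add rest to travel time
            s_wait = max(0, s_wait - bd)
    else:
        if s_service + s_duration >= bt2:
            # stop service would cover the entire break window:
            # wait until the break can start
            s_wait = bt1 + bd - s_arrival
        # otherwise: do not handle the rest yet
    return s_arrival, s_wait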

2.2 The Optimization Algorithm

The algorithm starts from a solution in which all stops are unassigned, namely R̄_unassigned = S. We set the penalties {β_i}_{i∈{p_1, p_2, ..., p_k}} in (2) so that

β_{p_k} > 2 \max_{d \in D} dM \cdot \max_{x,y \in S \cup D} m(x, y) + 2 \max_{d \in D} dT \cdot \max_{x,y \in S \cup D} t(x, y)

in order to obtain a decrease in the cost function whenever an additional stop is scheduled. We then run a best insertion heuristic, which adds stops to the solution until the solution becomes infeasible. The compatibility between stops and drivers is handled as a hard constraint. We then execute a tabu search [6] similar to the one described in [2], where the neighborhoods exploit the following set of moves:

• insert stop: a stop within the pool is scheduled within a route;
• remove stop: a stop is removed from a route and added to the pool;
• move stop: a stop is removed from a route and rescheduled onto another;
• swap stops: two stops are swapped within the same route;
• invert route section: a 2-Opt [7] is performed on a route;
• move route section: a sequence of consecutive stops is removed from a route and scheduled onto another;
• swap route sections: two sequences of consecutive stops are removed from two routes, and each is added onto the other route.

All the constraints on stops, apart from driver/stop compatibility, are relaxed. The same happens for driver constraints, except for the driver's earliest start time and the rest break, which are always implicitly satisfied by the route evaluation algorithm. We thus allow solutions to be infeasible, but we keep track of the best solution among the feasible ones only. We periodically destroy the solution by removing stops from the routes and adding them to the pool. This perturbation is typically followed by a repair phase based again on the best insertion heuristic, with a tabu list which prevents a stop from being inserted in the same route it was removed from.
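The overall search loop can be sketched as follows. This is purely illustrative: the solution and neighborhood objects, and methods such as best_non_tabu_move, remove_random_stops and repair_best_insertion, are assumed interfaces rather than the actual product API.

import random

def tabu_search(solution, neighborhoods, n_iters=10000, perturb_every=500, seed=0):
    """Skeleton of the search: explore all neighborhoods, apply the best
    non-tabu move, track the best feasible solution, and periodically
    destroy and repair part of the solution."""
    rng = random.Random(seed)
    best = solution.copy() if solution.is_feasible() else None
    for it in range(n_iters):
        # pick the cheapest non-tabu move over all neighborhoods
        move = min((n.best_non_tabu_move(solution) for n in neighborhoods),
                   key=lambda m: m.cost_delta)
        move.apply(solution)            # relaxed constraints: may be infeasible
        if solution.is_feasible() and (best is None
                                       or solution.cost() < best.cost()):
            best = solution.copy()      # keep best among feasible only
        if (it + 1) % perturb_every == 0:
            removed = solution.remove_random_stops(rng)             # destroy
            solution.repair_best_insertion(removed, tabu=removed)   # repair
    return best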

3 Algorithmic Improvements

While our basic algorithm as described in Sect. 2 was an effective starting point, we soon realized that to scale a VRP solution in a practical setting every CPU cycle counts, both because wasted computation forces our customers to wait longer for a good solution, and because it creates larger infrastructure costs. Thus, we focused very early on finding ways to improve our algorithm so that it could provide solutions that were as good as possible under tight resource constraints. We present in this Section a few of the main algorithmic changes that improved our optimization algorithm, both in terms of solution quality and of performance.

3.1 Set Covering

During the tabu search execution, a very high number of solutions is generated. Some of those will be efficient, while others will not. It can also happen that the algorithm finds solutions that are very efficient for a subset of routes, but very inefficient for a different subset; such a solution will likely be discarded as a whole, even though part of it would be worth keeping. Because of this, combinations of previously computed routes can generate solutions which are better than the best one found so far. We can exploit this by solving a variation of a set covering problem [3].

First, we analyze whether there are groups of drivers which are identical (i.e. drivers which can be safely exchanged without changing the solution cost or feasibility, as they have the same depot, working hours, and so on). Let us call D̂ the set of driver groups. We define b_{r,g} to be 1 if the driver for route r belongs to group g, and 0 otherwise.

While the algorithm runs, we keep track of "good" solutions in a set R̂. This bucket keeps track of the best feasible route found so far, for every driver group and for every set of stops covered (regardless of the sequence in which the stops are visited).

Note that we must make sure that the solution generated by combining the routes in R̂ serves at least as many stops as the best solution found so far, for every priority class. It is possible to trade one lower-priority stop for a higher-priority one, but not the other way around. Let us denote by S* the set of stops served in the best solution found so far. With all of this, we can define the following integer linear programming problem:

\min_{x,y} \sum_{r \in R̂} r_{cost} \, x_r

\sum_{r \in R̂: s \in r} x_r \le 1   \quad \forall s \in S

y_s \le \sum_{r \in R̂: s \in r} x_r   \quad \forall s \in S

\sum_{r \in R̂} b_{r,g} \, x_r \le |g|   \quad \forall g \in D̂

\sum_{s \in S: s_P \le p} y_s \ge |\{s \in S^* : s_P \le p\}|   \quad \forall p \in P

x_r \in \{0, 1\} \ \forall r \in R̂, \qquad y_s \in \{0, 1\} \ \forall s \in S

We do not need to model additional constraints like capacities, time windows, and so on: since we only add feasible routes to R̂, we are guaranteed that those constraints are satisfied. We use COIN-OR CBC [5] to solve this problem. We deliberately do not let the solver run to optimality, setting a strict time limit of 1-10 s depending on the problem size. Even with this limited computational time, the solver can usually improve on the current solution. We invoke this procedure every 2000 tabu search iterations, and one last time at the end of the tabu search. This change lowered the median cost of the routes in our internal dataset by 2.5%, without any measurable increase in computational times.
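For illustration, the recombination ILP above can be written with PuLP, whose bundled solver is also CBC. The attribute names used here (.cost, .stops, .group, .priority, .uid) and the input shapes are assumptions made for the sketch, not the authors' actual data structures.

import pulp

def recombine_routes(routes, stops, group_sizes, best_counts, time_limit=10):
    """Sketch of the route-recombination set covering model; returns the
    subset of cached routes selected by the solver."""
    prob = pulp.LpProblem("route_recombination", pulp.LpMinimize)
    x = {r: pulp.LpVariable(f"x_{i}", cat="Binary") for i, r in enumerate(routes)}
    y = {s: pulp.LpVariable(f"y_{s.uid}", cat="Binary") for s in stops}
    prob += pulp.lpSum(r.cost * x[r] for r in routes)        # objective
    for s in stops:
        covering = pulp.lpSum(x[r] for r in routes if s in r.stops)
        prob += covering <= 1          # each stop served by at most one route
        prob += y[s] <= covering       # y_s can be 1 only if s is served
    for g, size in group_sizes.items():   # no more routes than drivers per group
        prob += pulp.lpSum(x[r] for r in routes if r.group == g) <= size
    for p, count in best_counts.items():  # do not lose priority coverage
        prob += pulp.lpSum(y[s] for s in stops if s.priority <= p) >= count
    prob.solve(pulp.PULP_CBC_CMD(timeLimit=time_limit, msg=False))
    return [r for r in routes if x[r].value() and x[r].value() > 0.5]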


3.2 Caching

Evaluating the moves while exploring the neighborhoods is the most computationally expensive operation. However, most of the routes evaluated at every iteration are the same as in the previous one, which is an obvious waste. To prevent this, we gave every neighborhood a cache. Each neighborhood groups its cache data into "lines". Every line is identified by the set of vehicles affected by the move it holds, and by whether the move depends on the pool of unassigned stops. For example, the swap neighborhood only changes one vehicle and is not affected by the pool; its cache lines are therefore tagged with only the id of the affected vehicle. As another example, the move route section neighborhood affects two vehicles, so every cache line within that neighborhood is tagged by a pair of driver ids. The insert stop and remove stop neighborhoods affect the pool, so their cache lines are tagged with the pool and with the affected vehicle. The rules that govern the cache are:

• every time the neighborhood evaluates a move, the route is saved into the relevant cache line, identified by the associated vehicles, along with the associated route cost;
• every time a move is applied, every neighborhood checks which routes are affected by the move (and whether the pool is), and deletes every cache line which shares either the pool or at least one vehicle with the applied move;
• every time the algorithm asks the neighborhood for the best solution, the neighborhood first restores all the missing cache lines, and then returns the solution with the lowest cost which is not tabu.

Because of the caching, the neighborhood is not forced to compute the costs for the whole neighborhood, but only for the missing cache lines that were deleted by the execution of the previous move. Cache lines are usually sorted by solution cost, to speed up this step. Further refinements can be made to the neighborhoods affected by the pool: instead of completely clearing the cache every time a stop is added to or removed from the pool, we can update the affected cache lines by adding or removing a single element, reflecting the change in the pool. This is especially useful in the initial heuristic, where the only available neighborhood is insert stop.

Before introducing the cache, our algorithm did not fully explore the neighborhoods, but only considered a subset of moves in order to save computational time. At every iteration, a random set of moves was generated anew, so that eventually all possible moves would likely have been visited. Using a cache forced us to fully explore the neighborhood at every iteration, canceling part of the speedup brought by the cache itself. On the other hand, fully exploring the neighborhood also allowed us to improve the objective function quality. We tested the change on an internal dataset derived from our customer data, with the results shown in Table 1.


Table 1 Algorithm execution times (seconds) and final solution cost; lower is better

Percentile | Without cache, partial exploration | With cache, full exploration
           | Time  | Cost                       | Time  | Cost
25         | 67.4  | 12.3                       | 33.4  | 10.8
50         | 68.1  | 12.7                       | 34.1  | 11.1
75         | 71.3  | 13.3                       | 36.6  | 11.5
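The cache-line rules described above fit in a small data structure. The following Python sketch assumes cache lines keyed by frozensets of vehicle ids (plus a marker for the pool); the class and method names are hypothetical, and line restoration after invalidation is left to the calling neighborhood.

class NeighborhoodCache:
    """Per-neighborhood cache of evaluated moves, keyed by the vehicles
    (and possibly the pool) that each move touches."""
    POOL = "pool"

    def __init__(self):
        self.lines = {}   # frozenset key -> list of (cost, move)

    def store(self, key, cost, move):
        self.lines.setdefault(frozenset(key), []).append((cost, move))

    def invalidate(self, touched):
        """Drop every line sharing a vehicle (or the pool) with an
        applied move; these lines are recomputed lazily later."""
        touched = frozenset(touched)
        self.lines = {k: v for k, v in self.lines.items() if not (k & touched)}

    def best(self, is_tabu):
        """Return the cheapest cached, non-tabu (cost, move) pair."""
        candidates = (cm for line in self.lines.values() for cm in line)
        valid = [cm for cm in candidates if not is_tabu(cm[1])]
        return min(valid, key=lambda cm: cm[0], default=None)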

3.3 Nearest Drivers

A simple way to further reduce the computational effort is to avoid evaluating moves which are unlikely to yield a good solution. One such example is moving a stop to a vehicle whose current route is far away from the stop itself. While in theory other constraints could require a far-away stop to be assigned to that vehicle, in our experience with our customers this does not happen in practice. As a heuristic, when exploring the move stop neighborhood, we only consider moves from a vehicle to one of the N vehicles closest to it, and ignore the rest. The same strategy can be applied to all of the neighborhoods which affect more than one vehicle. In our experiments, we found that a value N = 10 produces a substantial decrease in computational time for large plans, without reducing the solution quality.
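The filter amounts to a ranking by distance. A minimal, self-contained sketch, assuming each vehicle is summarized by a representative point (e.g. its route centroid), could look like this:

import math

def euclidean(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def nearest_vehicles(vehicle_positions, point, n=10):
    """Return the ids of the n vehicles whose representative points are
    closest to `point`; moves toward all other vehicles are skipped.
    `vehicle_positions` is a list of (vehicle_id, (x, y)) pairs."""
    ranked = sorted(vehicle_positions, key=lambda vp: euclidean(vp[1], point))
    return [vid for vid, _ in ranked[:n]]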

3.4 Equivalent Stops

Real-world scenarios sometimes involve long route sections containing stops whose ordering is equivalent for the current solution. We consider a group of stops to be equivalent if any solution obtained by permuting that subset of stops is provably either as good as the original solution, or worse. A simple heuristic to verify this property is to check that all of the stops in the subset:

• are sequenced one after another in the current solution;
• share the same location;
• are all pickups or all deliveries (in case such a feature exists);
• do not require waiting time in the current solution.

These conditions are quickly verified while exploring the neighborhood. Should such a group be found, the swap and invert neighborhoods can safely skip any operation involving only stops in this subset. Note that those stops might still differ in other parameters, like capacity and time windows.

Large and flat local minima with equivalent solutions are usually handled badly by tabu-search-like algorithms, as they tend to revisit the same solutions again and again. Tabu lists struggle to prevent this, since the number of possible permutations is usually higher than the tenure of the tabu list itself. Depending on the problem data, this strategy can give substantial improvements to the solution quality, and some improvement to the CPU time.
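The equivalence test is a cheap predicate over a consecutive section of stops. A sketch follows; the stop attributes (location, is_pickup) and the solution's wait_time helper are illustrative assumptions.

def are_equivalent(section, solution):
    """Check the sufficient conditions above on a section of consecutive
    stops (the 'sequenced one after another' condition is implied by
    passing a consecutive slice of the route)."""
    first = section[0]
    same_place = all(s.location == first.location for s in section)
    same_kind = (all(s.is_pickup for s in section)
                 or all(not s.is_pickup for s in section))
    no_wait = all(solution.wait_time(s) == 0 for s in section)
    return same_place and same_kind and no_wait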

4 Making the Customers Happier

The improvements described in the previous Section made the optimization more efficient and effective. Still, when developing a product, the quality of the algorithm is not as important as satisfying the needs of our customers and users, most of whom are fleet managers of small to medium sized businesses. In this Section we list some of the challenges we encountered and the features our customers were most interested in.

4.1 Route Editing

Most customers want a way to manually edit or fine-tune the routes produced by the optimization algorithm. This is crucial for several reasons. First of all, there might be specific or complex constraints that cannot be easily modeled, such as a precedence of one stop over another under certain conditions. Often, users need to change routes due to an unexpected event, such as a stop being canceled or rescheduled. This also led us to investigate the use of robust or stochastic formulations, so that the algorithm would provide solutions that are easy to fix when necessary; however, the formulations in the literature typically do not generalize well to the many different cases that arise with our very diverse set of customers. Finally, and quite surprisingly, the customers' perception of "good" routes is mostly related to the way they appear on a map: they tend to prefer disentangled and "petal-shaped" routes, even if this means slightly higher operational costs or the violation of some constraints, while they perceive nested routes as sub-optimal.

4.2 Balancing the Routes

Another factor highly valued by our customers is the equity of the assignments, or route balancing. The algorithm needs to make sure that all the drivers receive a fair share of the work, even if this means producing a solution whose overall cost is higher. In a standard CVRP, when there are not enough stops to completely fill all of the drivers' working hours, the best solutions commonly have some drivers working a full day while others work half a day, or not at all.

This means that the algorithm objective function (1) should take into account not only the route costs, but also an inequality function related to the fairness of the workload assignment [8]. As shown in the literature, routes are guaranteed to be TSP-optimal only if the inequality function is weakly monotonic; however, most of the commonly used functions are not. Furthermore, none of the inequality functions proposed in [8] can be computed independently for every route, preventing us from using the cache mechanism described in Sect. 3.2. Because of this, we designed the following inequality function:

I(R̄) = b \sum_{d \in D} Rd_duration^2

This penalty function can be computed independently for every route, but nevertheless does its job of producing more balanced routes: a quadratic cost penalizes solutions in which most of the work is performed by the same driver, compared to solutions in which some of that work can be efficiently performed by a different driver. This change was a breakthrough in our balancing feature, as it is strong enough to substantially balance the solution, plays well with the cache, and still guarantees TSP-optimal solutions.
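A tiny worked example, with b = 1 for simplicity, shows why the quadratic term steers the search toward balanced plans: an unbalanced plan (8 h, 0 h) scores 64, while a balanced plan (4 h, 4 h) covering the same total work scores only 32.

def imbalance(durations, b=1.0):
    """Quadratic balancing term I = b * sum of squared route durations."""
    return b * sum(d * d for d in durations)

assert imbalance([8, 0]) > imbalance([4, 4])   # 64 > 32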

4.3 Variants of Interest

When we first went live with our routing product, we did not have any kind of pickup-and-delivery feature, and the problem we were addressing was basically a Capacitated Vehicle Routing Problem with Time Windows (CVRPTW). It soon turned out that this was not enough to cover the diverse needs of our customers. We first had to introduce multiple capacities to model different dimensions of the goods, such as number of items or weight, and then to start modeling both pickups and deliveries (VRPPD). The Vehicle Routing Problem with Backhaul (VRPB) [9] then becomes a natural extension of our problem: it requires all deliveries to be performed before the pickups, to avoid rearranging the goods on the truck. We handled it by taking pickups and deliveries into account in our route evaluation routine and adding a penalty to the cost function.

It is worth highlighting that several other variants studied in the literature are not considered interesting by our SMB customers; one such example is the many-to-many VRPPD. We also discovered that our customers are not interested in having our software find the optimal day on which to schedule a stop, as happens for example in Multi-period Vehicle Routing Problems (MVRPD) [1], nor in having any kind of recurrence automatically handled [4]. They prefer to be given the option of deciding on which day each stop should be scheduled. We noticed that this is one of the biggest differences between SMB customers and enterprise customers, which typically have several thousands of vehicles and want to automate their process as much as possible.

5 Conclusions

Over the years we significantly improved the optimization engine in various respects, but most of our key findings were about customer expectations: customers really want an active role in the decision process, rather than delegating control to a fully automated algorithm. Our users tend to trust their domain knowledge and experience more than anything else, so it is fundamental to walk them through the optimization process, clearly and visually showing that the routes optimized by the system are significantly more efficient than the ones planned manually. Ease of use turned out to be a major key factor for the success of a VRP product. Several users, especially in smaller companies, are owners and drivers of their fleets. They generally manage multiple aspects of the business: relationships with customers, route planning and dispatching, job execution, and complaint handling. Their workday starts early and ends late: they have no time to invest in learning how the software works or in configuring it. The product needs to learn the customer's needs, and not vice versa.

Acknowledgements The authors would like to thank Prof. Fabio Schoen, our research North Star, always available for any scientific advice over the years, for having helped them grow professionally and personally. The authors are also grateful to Peter Mitchell, General Manager of VZC, for seeing the potential in a handful of geek scientists and turning them into professional technologists.

References

1. Archetti, C., Jabali, O., Speranza, M.G.: Multi-period vehicle routing problem with due dates. Comput. Oper. Res. 61, 122-134 (2015)
2. Bianconcini, T., Di Lorenzo, D., Lori, A., Schoen, F., Taccari, L.: Exploiting sets of independent moves in VRP. EURO J. Transp. Logist. 7(2), 93-120 (2018)
3. Cacchiani, V., Hemmelmayr, V.C., Tricoire, F.: A set-covering based heuristic algorithm for the periodic vehicle routing problem. Discret. Appl. Math. 163, 53-64 (2014)
4. Campbell, A.M., Wilson, J.H.: Forty years of periodic vehicle routing. Networks 63(1), 2-15 (2014)
5. Forrest, J., Lougee-Heimer, R.: CBC user guide. In: Emerging Theory, Methods, and Applications, pp. 257-277. INFORMS (2005)
6. Glover, F., Laguna, M.: Tabu search. In: Handbook of Combinatorial Optimization, pp. 2093-2229. Springer, Berlin (1998)
7. Lin, S., Kernighan, B.W.: An effective heuristic algorithm for the traveling-salesman problem. Oper. Res. 21(2), 498-516 (1973)
8. Matl, P., Hartl, R.F., Vidal, T.: Workload equity in vehicle routing problems: a survey and analysis. Transp. Sci. 52(2), 239-260 (2018)
9. Ropke, S., Pisinger, D.: A unified heuristic for a large class of vehicle routing problems with backhauls. Eur. J. Oper. Res. 171(3), 750-775 (2006)
10. Savelsbergh, M.W.: The vehicle routing problem with time windows: minimizing route duration. ORSA J. Comput. 4(2), 146-154 (1992)
11. Vidal, T., Crainic, T., Gendreau, M., Prins, C.: A unifying view on timing problems and algorithms. CIRRELT (2012)

Assisting Passengers on Rerouted Train Service Using Vehicle Sharing System Maksim Lalić, Nikola Obrenović, Sanja Brdar, Ivan Luković, and Michel Bierlaire

Abstract Unexpected blockages of railway sections force trains that usually pass through them to change their route, affecting passengers whose boarding and alighting stations are skipped. In such cases, shared vehicles, whose stations are positioned around railway stations, could be used as an emergency transport option to mitigate the consequences of train rerouting. This paper provides an integer linear program for the simultaneous routing of the affected passengers and of the shared vehicles used. Our model assumes that the new train route is fixed, and provides a solution that minimizes the number of dropped passengers and the total passenger travel time, while keeping the engagement of additional vehicles as low as possible. Computational experiments are performed on realistic case studies over a part of the Swiss railway network. Behavioral characteristics of the analyzed transportation system under low and high passenger demand are identified and presented.

M. Lalić (B) · N. Obrenović · S. Brdar
Institute BioSense, University of Novi Sad, Novi Sad, Serbia
e-mail: [email protected]
N. Obrenović e-mail: [email protected]
S. Brdar e-mail: [email protected]
I. Luković
Faculty of Organizational Sciences, University of Belgrade, Belgrade, Serbia
e-mail: [email protected]
M. Bierlaire
Transport and Mobility Laboratory (TRANSP-OR), École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
P. Cappanera et al. (eds.), Optimization and Decision Science: Operations Research, Inclusion and Equity, AIRO Springer Series 9, https://doi.org/10.1007/978-3-031-28863-0_23


1 Introduction

Various unexpected and unwanted events can temporarily block sections of the railway network, preventing trains from completing their planned trips. In the case of larger disruptions, some train services are canceled or rerouted, which significantly affects passengers whose boarding and alighting stations are skipped. Due to the reduction of passable railway paths and the diversity in the origin and destination stations of the affected passengers, even with additional railway services, some passengers may still be left without any appropriate transport option. In such cases, transportation can be performed by shared vehicles stationed close to or at railway stations. The advantages of integrating the railway and a vehicle sharing system (VSS) would be manifold. Shared vehicles can separately transport passengers in various directions through the road network. Since they do not exploit the railway network, their usage does not induce further propagation of timetable modifications. Passengers can drive themselves, so no additional working duties are required. Shared vehicles are constantly available, which releases railway operators from the necessity of providing alternative services in cases of lower passenger demand.

In this paper, we analyze the usability of shared vehicles as an emergency transport option in the case of a railway disruption, which we consider our main conceptual contribution. To do so, we develop an integer linear program (ILP) formulation aimed at assisting passengers affected by the rerouting of a single train. Given the new train route in advance, the solutions generated by our model simultaneously define routing options for the affected passengers, using a fleet of uniform vehicles distributed at railway stations. While passengers can utilize the train and the vehicles for their movements through the network, each vehicle movement requires a passenger with a driving license who can serve as the driver on that particular ride. The routing of each passenger and vehicle is determined individually. The optimization process is designed to provide a transportation option for as many passengers as possible, keeping the total passenger delay and the cost of the provided services as low as possible.

This paper is organized as follows. Section 2 presents an overview of the related literature. The mathematical formulation of the model is given in Sect. 3. Descriptions of the case studies, the results and their interpretation are provided in Sect. 4. The paper conclusion and directions for further research are given in Sect. 5.

2 Related Work

According to the literature review on railway recovery models in [1], our model belongs to the group of disruption models with a mesoscopic level of abstraction. Since it primarily aims at improving the passenger experience rather than the cost of the provided actions, it can further be classified into the group of models with a passenger-centric approach. As concluded in [1, 2], the literature on such models is scarce in comparison to other groups. Furthermore, only a few recent papers consider models which provide recovery plans with additional services and resources offered to passengers. In [3], the authors propose a model for the reassignment of passengers from a broken train to the following trains; they formulate a mixed integer linear program (MILP) designed to minimize the number of lost passengers while minimizing the total delay of the following trains, for which additional stops can be introduced. The model proposed in [4] adjusts train services over a broader railway network: aside from additional stops, it considers short-turning of trains as a substitution for the canceled train service in the opposite direction, and it minimizes the additional passenger delay by estimating the effect of each decision on the passenger flow separately. The emergency train as an additional service was first introduced in [5]; that work presents an iterative framework which adjusts the recovery plan and the rolling stock allocation to the estimated passenger demand, and, aside from the operator costs of the recovery actions, the objective function of its MILP formulation incorporates the number of denied passengers. Emergency trains, additional stops and explicit passenger rerouting are all part of the comprehensive framework given in [2], where the authors formulate a three-objective ILP which primarily minimizes the total passenger inconvenience, expressed as the sum of generalized travel times computed for passenger groups. The same authors improved the performance of this model and extended it with the possibility of engaging emergency buses [6]; in that work, the adjusted timetable is built with an ALNS metaheuristic and the solution quality is estimated from the simulated passenger flow for a given timetable, with computational times acceptable for real-time usage.

We are not aware of previous attempts to include shared vehicles in the railway system as an emergency transport option. Yet, similar research directions can be expected, since the potential benefits of integrating VSSs and railway systems are officially recognized by the International Transport Forum [7] and the European Commission [8]. The primary mode of VSS usage significantly differs from the one presented in this paper, and current research topics in the VSS management literature, as classified in [9], cannot yet be related to the disruption management field. Further integration of these transport systems would probably open new research questions in the field of vehicle rebalancing strategies, aimed at quickly compensating for the sudden and intense demand caused by a disruption.

3 Mathematical Model

Our model considers a single train that was supposed to run along its planned base route. Because of a disruption, it is rerouted to the alt route, which excludes some stations of the planned base route but can include other stations. The train routes are known in advance and are not subject to the optimization process, so the cost of train utilization is not considered. We assume that the train has enough capacity to accept all boarding passengers.


Passengers' planned trips are along the base route, and their origin and/or destination stations may be skipped due to the rerouting. To complete their trips, passengers additionally utilize shared vehicles, which are initially placed at, and can independently travel between, stations on both the base and alt routes. A model that manages such situations can be defined using the parameters given in Sect. 3.1. The emergency plan determines routes for each passenger and each vehicle independently. Those routes are composed of elementary movements, given as arcs of the space-time graph presented in Sect. 3.2. Finally, the desired and mandatory characteristics of the emergency routes are defined, respectively, through the objectives and constraints of the ILP described in Sect. 3.3.

3.1 Problem Data

The railway network is represented as a set of stations S and a set of tracks R ⊆ S × S. The set of all platforms is denoted by J, and the set of platforms at station s ∈ S by J_s. With C we denote the set of all vehicles in the system. All vehicles are uniform, with the number of seats given by the parameter Q. The time required for a ride between two stations s and s′ is given as t(s, s′), and it is also used as an indicator of the ride cost. The planning horizon, the period considered by an emergency plan, spans the interval between the moments T_beg and T_end. It is discretized with intervals of length τ, producing a set of time steps T = {t_0, t_1, ..., t_n}. Vehicle routes are defined for the whole planning horizon, while passenger routes have to end by the end of the planning horizon. If the train visits station s on its base route, the times of its arrival at and departure from s are denoted by b_s^+ and b_s^-, respectively. Similarly, a_s^+ and a_s^- denote the times of departure from and arrival at station s according to the schedule on the alt route. Each passenger p in the set of considered passengers P is represented individually as a quadruplet (o_p, d_p, t_p, l_p), where o_p ∈ S and d_p ∈ S denote the origin and destination stations of passenger p, t_p ∈ T his/her departure time, and l_p ∈ {0, 1} a driving license possession indicator, with value 1 if passenger p is allowed to drive vehicles. A passenger's departure time corresponds to the moment of the train departure at his/her origin station on the base route, so t_p = b_{o_p}^+.

3.2 Space-Time Graph

Emergency plans consider 3 different types of events, used as passenger and vehicle presence indicators, and 8 different types of movements between them. They are modeled as nodes and arcs of the space-time graph G(V, A), inspired by [10]. The different event types divide the set V = N ∪ N^o ∪ N^d into 3 parts. The set N comprises time-expanded nodes (s, j, t) ∈ N, each indicating the presence of a vehicle, the train or a passenger at station s ∈ S, on its platform j ∈ J_s, at the moment t ∈ T.


While passengers and vehicles can be present at VSS platforms at any moment, at railway platforms passengers are expected only at the moments of train arrival and departure. The time-invariant nodes are given by the sets of origin and destination nodes, N^o and N^d. They correspond to the events of passenger entrance into the system at their origin stations and exit out of the system at their destination stations. The different movement types divide the set of arcs into the following 8 subsets. The set A_Dri contains arcs modeling movements of the train between consecutive stations on its alt route. Similarly, the set A_Wai contains arcs that model the waiting of the train between its arrivals and departures along its alt route. Transfer arcs, given by the set A_Trans, model passenger movements between a railway and a VSS platform at the same station, which happen when a passenger changes transport mode. All transfer arcs have the same duration, determined by the parameter μ. Movements from railway platforms are available immediately after the train arrival at those platforms, while movements to railway platforms end at the moment of the train departure from them. Possible rides between VSS platforms are modeled by arcs in the set A_VSS; all movements between two stations s and s′ are represented by arcs with duration t(s, s′). Parking arcs, given by the set A_Park, model potential waiting at VSS platforms during a single time interval; waiting longer than a single time interval can be achieved by chaining several parking arcs. For a given passenger p, we introduce the set of access arcs A_Acc^p that model the passenger's entrance into the system from the origin node at station o_p and the movement to a platform j ∈ J_{o_p} at the moment of his/her departure t_p. Similarly, the egress arcs contained in the set A_Egr^p model the passenger's last presence on some platform j ∈ J_{d_p} and his/her egress out of the system at the destination node at d_p. The set A_Pen^p contains a single element, a penalty arc connecting the origin and destination nodes of passenger p, used if there is no other routing option. We introduce several auxiliary sets. The set of all possible vehicle movements is given as A_Car = A_VSS ∪ A_Park. The set of all movements available to passenger p is given as A^p = A_Dri ∪ A_Wai ∪ A_Trans^p ∪ A_VSS ∪ A_Park ∪ A_Acc^p ∪ A_Egr^p ∪ A_Pen^p. For each node n, the set A^{n+} defines its outgoing arcs, while the set A^{n-} defines its incoming arcs. The weight t_a of arc a is determined differently depending on the arc type: the weights of driving, waiting, transfer, VSS and parking arcs are proportional to their duration; access and egress arcs have no weight; all penalty arcs have the same weight, determined by the parameter π.
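The construction of the time-expanded node set and of the parking arcs lends itself to a direct implementation. The following is a minimal Python sketch under assumed input shapes (a station list, per-station platform lists, integer time steps); it illustrates the structure only and is not the authors' code.

def build_time_expanded_nodes(stations, platforms, t_beg, t_end, tau):
    """Node set N: one node (s, j, t) per station s, platform j at s,
    and discrete time step t in [t_beg, t_end]."""
    steps = range(t_beg, t_end + 1, tau)
    return [(s, j, t) for s in stations for j in platforms[s] for t in steps]

def build_parking_arcs(nodes, vss_platforms, tau):
    """Parking arcs connect consecutive time steps on the same VSS
    platform; chaining them models longer waits."""
    node_set = set(nodes)
    return [((s, j, t), (s, j, t + tau))
            for (s, j, t) in nodes
            if j in vss_platforms and (s, j, t + tau) in node_set]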

3.3 ILP Formulation

Vehicle movements are determined by the decision variables r_a^c, with value 1 if vehicle c uses arc a ∈ A_Car and 0 otherwise. Similarly, passenger movements are determined by the decision variables w_a^p, with value 1 if passenger p uses arc a ∈ A^p and 0 otherwise.


The two objective functions of the model are given in (1) and (2). Quality of service (QoS) is expressed as the total travel time of the transported passengers, increased by a penalty for each dropped passenger. Cost of service (CoS) is measured as the total time vehicles spend on rides.

QoS = \sum_{a \in A} \sum_{p \in P} t_a \, w_a^p   (1)

CoS = \sum_{a \in A_{VSS}} \sum_{c \in C} t_a \, r_a^c   (2)

Constraints of the optimization process are given in (3)-(11). Constraints (3) and (4) ensure that each passenger either uses his/her penalty arc, or exactly one access arc to enter the system and one egress arc to leave it. Flow conservation of passengers through the system is enforced by constraints (5). Using the indicator i(c, s), with value 1 if station s is the origin station of vehicle c and 0 otherwise, constraints (6) ensure that at the beginning of the planning horizon the route of each vehicle starts from its origin platform. Constraints (7) are the flow conservation constraints for vehicles. With the capacity constraints (8), we guarantee that all passengers on a particular ride can be transported by the vehicles assigned to that ride. Constraints (9) ensure that, for each ride, there are enough passengers with driving licenses to transfer the assigned vehicles. Domain constraints are given by (10) and (11).

\sum_{a \in A_{Acc}^p \cup A_{Pen}^p} w_a^p = 1   \quad \forall p \in P   (3)

\sum_{a \in A_{Egr}^p \cup A_{Pen}^p} w_a^p = 1   \quad \forall p \in P   (4)

\sum_{a \in A^p \cap A^{n+}} w_a^p = \sum_{a \in A^p \cap A^{n-}} w_a^p   \quad \forall p \in P, \ \forall n \in N   (5)

\sum_{a \in A_{Car} \cap A^{n+}} r_a^c = i(c, s)   \quad \forall (s, j, t) = n \in N, \ t = T_{beg}   (6)

\sum_{a \in A_{Car} \cap A^{n+}} r_a^c = \sum_{a \in A_{Car} \cap A^{n-}} r_a^c   \quad \forall (s, j, t) = n \in N, \ T_{beg} < t < T_{end}   (7)

\sum_{p \in P} w_a^p \le Q \sum_{c \in C} r_a^c   \quad \forall a \in A_{VSS}   (8)

\sum_{c \in C} r_a^c \le \sum_{p \in P: \, l_p = 1} w_a^p   \quad \forall a \in A_{VSS}   (9)

r_a^c \in \{0, 1\}   \quad \forall a \in A_{Car}, \ \forall c \in C   (10)

w_a^p \in \{0, 1\}   \quad \forall a \in A^p, \ \forall p \in P   (11)

The final solution provided by the model is determined through a two-step hierarchical optimization procedure. The procedure first minimizes QoS, ensuring that constraints (3)-(11) are satisfied. In the second step, it minimizes CoS, again satisfying constraints (3)-(11), together with the additional constraint QoS ≤ QoS*, where QoS* is the value of the objective QoS obtained in the first step.
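This lexicographic scheme can be reproduced with any MIP solver by re-solving the same model with a changed objective. A minimal Python sketch with PuLP is given below (the paper itself uses Gurobi); the gapRel option assumes a recent PuLP release, and `prob` is assumed to already contain constraints (3)-(11) with qos_expr and cos_expr built from the same variables.

import pulp

def solve_hierarchically(prob, qos_expr, cos_expr, gap=0.03):
    """Two-step procedure: minimize QoS, then fix QoS <= QoS* and
    minimize CoS over the same constraint set."""
    solver = pulp.PULP_CBC_CMD(gapRel=gap, msg=False)   # stand-in solver
    prob.setObjective(qos_expr)
    prob.solve(solver)
    qos_star = pulp.value(qos_expr)
    prob += qos_expr <= qos_star, "qos_bound"           # QoS <= QoS*
    prob.setObjective(cos_expr)
    prob.solve(solver)
    return qos_star, pulp.value(cos_expr)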

4 Case Study

The previously presented methodology is examined on realistic case studies over a part of the Swiss railway network. Experiments are performed with Gurobi, on a PC with a 3.00 GHz Intel i7 processor and 32 GB of RAM. Both steps of the optimization procedure terminate when the optimality gap of the current solution falls below 3%.

4.1 Case Description

The disrupted area, with the train schedules on the base and alt routes, is presented in Fig. 1. Stations are represented by rectangles. Each rectangle contains the code assigned to the station and its official name. Optional information about the numbers of passengers boarding and alighting at the station, and about the number of vehicles initially placed there, is given at the bottom of the rectangle. Arrows denote the planned and realized train movements on the base and alt routes, respectively. The departure and arrival moments of the train are written at the tails and heads of the arrows. The train leaves its original path after the disjunction station (coded D), omitting the base stations (codes starting with B) but stopping at several additional alt stations (codes starting with A), until it returns to its original path at the conjunction station (coded C), which is the last station of the disrupted area. The exit station (coded X) is the first station after the disrupted area. It is considered the destination of all passengers who remain on the train after the disrupted area, and it is therefore reachable only by train. Since passengers going from station D to X are assumed to stay on the train through the whole disrupted area, their routing is excluded from the optimization process.

Fig. 1 Disrupted area of case study

The train movement durations are based on the schedules of the regional train lines S1, S2 and S3 over the given disrupted area. The number of boarding passengers at the D and B stations is calculated as the ratio between the estimated number of daily served passengers and the number of train departures per day at that station. All these data are acquired through the portal of the Swiss Federal Railways [11]. Each B station is the destination of 10% of all passengers, the C station of another 20%, and the remaining passengers stay on the train after the disrupted area. We perform a sensitivity analysis by scaling the passenger demand with a factor f ∈ {1, 2, 3, 4, 5, 10, 15}. The planning horizon is 62 min, which is the time required for the train to travel from station D to X. Given the train movement and waiting durations, the planning horizon is discretized with 1 min intervals. The travel time matrix, defining the time required for vehicle rides between stations, is obtained using Google Maps. The number of train lines passing through a station is used as a measure of station significance, and it equals the number of vehicles initially placed at that station. The time required for a passenger to transfer from one platform to another is set to μ = 5 min, while the weight of the penalty arcs is π = 620 min, i.e., ten times the duration of the planning horizon.

4.2 Results

Table 1 contains the overall computational results. The first and second columns display the scaling factor of the initial passenger demand and the total passenger demand. The third and fourth columns give the total numbers of dropped and served passengers. The fifth column reports the total time that the transported passengers spent in the transportation system, while the sixth column reports the cost of the additional services, expressed as the total time of vehicle engagement. Computation times are given in the last column. The first few instances have their demand completely satisfied. When demand exceeds a certain threshold, the system becomes saturated and the number of dropped passengers starts to grow. With a further increase in passenger demand, there is only a slow increase in the number of transported passengers, indicating a gradual change in the character of the system response. Tables 2 and 3 highlight these differences for the instance with the original passenger demand and for the one with demand scaled by f = 15. Both tables display the distribution of vehicle rides by origin and destination station.


Table 1 Computational results

Scaling factor | #Passengers | #Dropped | #Served | Total travel time [min] | Cost of service [min] | Computation time [s]
1              | 190         | 0        | 190     | 5488                    | 775                   | 11.69
2              | 385         | 0        | 385     | 12437                   | 1525                  | 70.55
3              | 578         | 0        | 578     | 22460                   | 2295                  | 44.79
4              | 758         | 79       | 679     | 26586                   | 2444                  | 138.38
5              | 965         | 213      | 754     | 29350                   | 2395                  | 131.57
10             | 1927        | 999      | 928     | 36578                   | 2153                  | 134.94
15             | 2893        | 1761     | 1132    | 44565                   | 2066                  | 183.96

Table 2 Distribution of rides for the instance with original demand

Origin \ Destination | B1 | B2 | C
B2                   | 3  | -  | 11
B1                   | -  | 7  | 16
D                    | 5  | -  | 7

A3 A2 A1 C B2 B1 D

16 1 2 25

21

30 5

81 6

In the case with lower demand, the system has enough resources to satisfy it completely. Passengers are transported directly to their destinations, and vehicles from one station can be directed differently. With an increase in passenger demand, the system tries to satisfy it as much as possible with more exhaustive exploitation of the fleet. Passengers unable to take a vehicle at D station will proceed trip by train and utilize available vehicles from stations A1, A2 and A3. All passengers going from D to C will be transported using the train, leaving more space for passengers in other directions. Performing several consecutive rides, each starting from the platform where the previous one is finished, some vehicles serve different passenger groups. For example, all vehicles entering B1 and B2 will be further routed to other stations down the train base route. Similarly,

286

M. Lali´c et al.

part of the vehicles entering C station will be routed to B2, which is opposite the train base route, and later they will be routed again toward C station. Even with the high utilization of the shared vehicles, part of passengers will still be dropped. To reduce the cost of additional service, vehicles will be primarily assigned on shorter and cheaper rides. Also, by performing shorter rides, it is possible to perform more rides during the planning horizon, and to serve more passengers. In Table 2, all rides from A1, A2, A3 and C stations are in a single direction, and rides from B1 and D stations are very unevenly distributed. Consequently, that will introduce a prioritization among passengers, giving more chances for transportation to those with shorter and thus cheaper trips.

5 Conclusion In this work, we propose an ILP formulation for simultaneous routing of passengers affected by the unexpected train rerouting and shared vehicles as passengers’ emergency transport option. The optimization process searches for a solution that minimizes the number of dropped passengers and total travel time that passengers spend in transportation, in the least costly way. To the best of our knowledge, this is the first work in the railway disruption management research area that integrates shared vehicles as an emergency transport option. For a given disrupted area, it is possible to estimate the border level of passenger demand that can be satisfied solely by a given vehicle fleet, which can be useful when determining its size. Computational experiments are performed on a part of the Swiss railway network. With lower demand, all passengers are served and transported directly to their destinations. As the number of passengers increases, our approach performs prioritization over possible vehicle rides and passenger trips fulfilling cheaper and shorter ones. In the future work, we intend to gradually extend the proposed model. By scaling arc weights that contribute to it, objective function QoS can express individual or general passenger routing preferences. There can be more trains in both directions on the alt route. Departure and arrival times for those trains will also be adjusted as part of the optimization process. Finally, alternative route selection over a broader railway network will be subjected to the optimization process. Such extended models may become too complex for the usage of a general-purpose solver. In that case, further research will be directed toward the development of an appropriate heuristic method. Acknowledgements This work was supported by Ministry of Education, Science and Technological Development (No. 451-03-68/2022-14/200358).

Assisting Passengers on Rerouted Train Service Using Vehicle Sharing System

287

References 1. Cacchiani, V., Huisman, D., Kidd, M., Kroon, L., Toth, P., Veelenturf, L., Wagenaar, J.: An overview of recovery models and algorithms for real-time railway rescheduling. Transp. Res. Part B 63, 15–37 (2014) 2. Binder, S., Maknoon, Y., Bierlaire, M.: The multi-objective railway timetable rescheduling problem. Transp. Res. Part C 78, 78–94 (2017) 3. Hao, W., Meng, L., Veelenturf, L., Long, S., Corman, F., Niu, X.: Optimal reassignement of passengers to trains following a broken train. In: International Conference on Intelligent Rail Transportation (ICIRT), pp. 1–5 (2018) 4. Zhu, Y., Goverde, R.M.P.: Railway timetable rescheduling with flexible stopping and flexible short-turning during disruptions. Transp. Res. Part B 123, 149–181 (2019) 5. Cadarso, L., Marin, A., Maroti, G.: Recovery of disruptions in rapid transit network. Transp. Res. Part E 53, 15–33 (2013) 6. Binder, S., Maknoon, Y., Bierlaire, M.: Efficient exploration of the multiple objectives of the railway timetable rescheduling problem. In: 17th Swiss Transport Research Conference. Monte Verita/Ascona (2017) 7. International Transport Forum ITF, ITF work on Shared Mobility. Accessed on: Mar 2022. https://www.itf-oecd.org/itf-work-shared-mobility 8. European Commission EC, Transport and Research and Innovation Monitoring and Information System TRIMIS, Network and traffic management systems (NTM), Accessed on: Mar 2022. https://trimis.ec.europa.eu/roadmaps/network-and-traffic-management-systems-ntm 9. Ataç, S., Obrenovi´c, N., Berliere, M.: Vehicle sharing systems: a review and a holistic management framework. EURO J. Transp. Logist. 10 (2019) 10. Nguyen, S., Pallottino, S., Malucelli, F.: A modeling framework for passenger assignement on a transport network with timetables. Transp. Sci. 35(3), 238–249 (2001) 11. Swiss Federal Railways, Network map passenger Traffic. Accessed on: April 2022. https:// maps.trafimage.ch/ch.sbb.netzkarte?lang=en

Health Care Management

Optimization for Surgery Department Management: An Application to a Hospital in Naples Maurizio Boccia, Andrea Mancuso, Adriano Masone, Francesco Messina, Antonio Sforza, and Claudio Sterle

Abstract Surgery department with its operating rooms represents the financial backbone of modern hospitals accounting the main part of a hospital cost and revenue. Therefore, maximizing its efficiency is of vital importance since it can have important implications on cost saving and patient satisfaction. In this context, Operations Research methodologies can play a relevant role supporting hospital executives in operating room management and surgery scheduling issues. In particular, great relevance has been given in literature to the Surgery Scheduling Problem (SPP). In its general form, it consists in determining a day, an operating room and a starting time of a set of surgeries. In this work, we address the SPP faced by a local hospital of Naples. The aim of the hospital is to determine a surgery schedule capable of handling unforeseeable events while maximizing the number of performed surgeries, according to some medical guidelines. This problem has been modelled by an M. Boccia · A. Mancuso (B) · A. Masone · C. Sterle Department of Electrical Engineering and Information Technology, University of Naples Federico II, Via Claudio 21, 80125 Naples, Italy e-mail: [email protected] M. Boccia e-mail: [email protected] A. Masone e-mail: [email protected] C. Sterle e-mail: [email protected] M. Boccia · A. Mancuso · A. Masone · A. Sforza · C. Sterle Optimization and Problem Solving Laboratory, Via Claudio 21, 80125 Naples, Italy e-mail: [email protected] C. Sterle Institute for System Analysis and Computer Science Antonio Ruberti (IASI), National Research Council of Italy, via dei Taurini 19, 00185 Rome, Italy F. Messina Betania Evangelical Hospital, Via Argine 604, 80147 Naples, Italy e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 P. Cappanera et al. (eds.), Optimization and Decision Science: Operations Research, Inclusion and Equity, AIRO Springer Series 9, https://doi.org/10.1007/978-3-031-28863-0_24

291

292

M. Boccia et al.

original integer linear programming formulation that has been tested and validated on several instances derived from real data provided by the hospital. Finally, the proposed formulation can be used to simulate different surgery operating scenarios. The results of this simulation can be used to provide useful managerial insights for an efficient schedule of the hospital surgeries.

1 Introduction Surgery department with its operating rooms (ORs) represents the financial backbone of modern hospitals accounting for up to 40% and 60–70% of a hospital cost and revenue, respectively [18]. Therefore, maximizing the efficiency of the ORs is of vital importance in the modern landscape, as it can have important implications on cost saving and patient satisfaction. In this context, Operations Research can play a relevant role supporting hospital executives in OR management and surgery scheduling by data-driven and mathematical programming based approaches [8]. The issues arising from the OR management and surgery scheduling can be classified into three decision levels: strategic, tactical, and operational depending on the decisionmaking time horizon (long-term, medium-term, short-term) [21]. At a strategic level of decision, the problems involve medium and long-term demand forecasting and the distribution of the OR time among the surgical specialties and/or surgeons (Capacity Planning Problem, Capacity Allocation Problem); at a tactical level, the problem is to define cyclic OR schedules, i.e., assign OR time blocks to surgical specialties, usually with homogeneous lengths of half-day or full day (Master Surgical Scheduling Problem); at the operational level, the problem is to determine the date, the OR, the starting time, and the medical resources allocated to the surgeries (Surgical Scheduling Problem). The Surgical Scheduling Problem (SSP) deals with elective patients, i.e., a non-emergency surgical procedure that can be planned in advance. Usually, the elective patients are inserted into a waiting list and are scheduled through an ex-ante planning, according to a performance criteria, typically one week before the surgery. From a mathematical point of view, the SSP can be conceived as a combinatorial optimization problem considering multiple objectives, such as: maximizing the utilization of ORs, minimizing the number of postponed surgeries, minimizing the overtime work of the ORs and of the surgeons, maximizing patient satisfaction, etc. This problem, as widely known, is NP-Hard and it can lead to some severe inefficiencies if it is not solved appropriately [15]. In this work, we study the SSP faced by a local hospital of Naples. In a nutshell, the hospital considers the following scheduling assumptions: a set of surgeries have to be performed; an OR, a day and a time have to be reserved to perform a surgery; the planning horizon is fixed and each OR has a limited working time per day; only one surgery at a time can be performed in an OR. The problem consists in determining the schedule that maximizes the number of performed surgeries according to some medical guidelines. The hospital aims at planning an elective patient schedule able to respond to any emergency in an adequate time, i.e., to schedule an incoming

Optimization for Surgery Department Management: An Application …

293

emergency within a pre-determined time limit. We point out that, as the majority of the hospitals [17], the local hospital takes a pragmatic approach considering a deterministic surgery duration. Indeed, the SSP schedule determines a first planning that is repeatedly modified on the basis of unforeseen events (duration longer than expected, cancellations, arrivals of emergency). On this basis, we analyzed the deterministic approaches proposed in literature for the SSP. Moreover, since the interest of the hospital is towards a weekly schedule of a fixed patient list, we do not refer to surgery group based approaches, which have shown to be more effective over longer planning horizons [3]. In this context, the literature on SSP is very rich as witnessed by the survey works reported in [5, 15, 21]. In particular, the SSP is often considered to be two separate decision problems: the assignment of patients to specific surgical blocks (i.e., OR and day) and the surgery sequencing within each block. These two subproblems are known in literature as Advance Scheduling Problem (ADP) and Allocation Scheduling Problem (ALP), respectively [20]. On this basis, it is possible to classify the literature on the SSP in four groups: contributions on ADPs [6]; contributions on ALPs [7, 14]; contributions on SSPs with a sequential solution of ADP and ALP [1, 2, 9, 10, 22]; contributions on SSPs with a simultaneous solution of ADP and ALP [11, 13, 16, 19]. We point out that the SSP generally arises from real applications. Therefore, the literature approaches are designed to tackle very specific variants of the problem, so preventing their usage except for a significant rethinking of the method. Moreover, we highlight that decomposition and sequential approaches for the SSP have a clear drawback, as also noted in [5]. Indeed, the quality of the surgery sequence in the ALP is highly dependent on the assignment to days and ORs that was made in the ADP. On the basis of these considerations, the SSP under study requires the development of an ad-hoc solution method exploiting its own features. Therefore, we propose an original integer linear programming (ILP) formulation extending the one presented in [16] to take into account the guidelines of the local hospital. We point out that, to the best of our knowledge, this is the first work addressing simultaneously the ALP and the ADP while taking into account emergency considerations. The paper is organized as follows: Sect. 2 provides a detailed description of the main features of our SSP. In Sect. 3 is presented the mathematical formulation of the problem. Section 4 reports the results of the computational experiments on the instances provided by the hospital. Finally, in Sect. 5 some conclusions and future research directions are outlined.

2 Problem Definition

In this work, we tackle the SSP faced by a local hospital in Naples, which prepares weekly schedules for elective surgeries. The considered SSP simultaneously addresses the "advance scheduling" issues, i.e., the problem of assigning surgeries to ORs over a certain time horizon (usually one or two weeks), and the "allocation scheduling" issues, i.e., the problem of sequencing the surgeries for each OR on each day.


The hospital focuses on the scheduling of non-urgent elective patients, because these are the patients whose surgeries will be performed over the foreseeable future and can be planned in advance. In particular, the surgery of an elective patient is characterized by a clinical specialty, a duration and a clinical severity. The clinical specialty indicates the equipment and the skills required for the surgery. The duration is an estimate of the required operating time, generated on the basis of the average duration of surgeries of the same type; this information is used to find an available date and an OR with sufficient available time left to perform the surgery, and it is therefore treated as deterministic. Finally, the clinical severity indicates the surgery complexity and the patient conditions.

The hospital uses a discrete time model, splitting each day into a fixed number of time periods in which a surgery may be performed. The surgeries can be assigned to the time periods according to a block scheduling scheme of the ORs, in which each period of each OR is reserved to a surgical specialty (e.g., cardiology or senology). Each specialty requires specific equipment and medical skills; hence, it is possible to schedule a surgery only in an OR reserved to the corresponding specialty. Moreover, the hospital adopts a shared operating room policy, which allows elective and non-elective surgeries to be performed in the same operating theatre (i.e., no operating room is dedicated exclusively to emergencies). Therefore, the schedule has to take into account the possible arrival of emergency patients. Once an emergency arrives, the patient is stabilized in the emergency department with initial treatments before being transferred to an OR. This transfer must be done within a given amount of time. According to this setting, emergency patients who arrive during operating hours compete with elective surgeries for OR capacity.

The problem considered in this paper is to assign a day, an OR and a starting period to a set of surgeries in an efficient and robust way while taking into account the hospital guidelines. By efficient and robust we mean that the schedule wastes as little OR time as possible while also being able to handle sudden changes such as emergency surgeries. The hospital guidelines can be summarized as follows:

1. the maximum number of surgeries has to be performed;
2. the surgeries of the same specialty should be scheduled during the day in decreasing order of a clinical score;
3. the clinical score of a surgery is a function of the clinical severity and the surgery duration;
4. only one surgery at a time can be performed in any OR;
5. a surgery cannot be interrupted;
6. it is possible to schedule a surgery only in an OR reserved to the corresponding specialty;
7. an OR has to be cleaned after each surgery;
8. the cleaning time is fixed and does not depend on the surgery;
9. an occurring emergency has to be handled within a fixed time threshold by at least one OR in each period;


Fig. 1 Emergency management in two schedule examples

10. an emergency can be managed either if a surgery is ending, or if an OR is empty, or if it is being cleaned.

For the sake of clarity, we point out that the first two guidelines represent the objectives that the hospital aims at pursuing, while the other guidelines represent operating theatre settings that have to be guaranteed. To better explain the emergency management threshold (δ_e), Fig. 1 reports two schedules of three surgeries in three ORs on a single day with four periods. Figure 1a shows a schedule able to handle an emergency arrival within one period during the entire time horizon. On the other hand, in Fig. 1b, if an emergency occurs in the first period, it has to wait two time periods before it can be handled. In this case, if we consider δ_e equal to 1, the second schedule is not feasible for the considered SSP.

3 Mathematical Model

To formulate the SSP, we introduce a notation based on the previous problem description. For the sake of readability, the notation is reported in Table 1. Consistently with this notation, the SSP can be formulated as follows:

\max \sum_{i \in I} \sum_{p \in P} \sum_{d \in D} \sum_{r \in R} Z_{ip} \, x_{ipdr} \; - \; \Delta \sum_{i \in I} \Big( 1 - \sum_{p \in P} \sum_{d \in D} \sum_{r \in R} x_{ipdr} \Big)   (1)

s.t.

\sum_{p \in P} \sum_{d \in D} \sum_{r \in R} x_{ipdr} \le 1   ∀ i ∈ I   (2)

\sum_{i \in I} y_{ipdr} \le 1   ∀ p ∈ P, d ∈ D, r ∈ R   (3)


Table 1 Notation used for the SSP formulation

Sets
  R: set of ORs
  I: set of surgeries
  S: set of surgical specialties
  D: set of days of the time horizon
  P: set of periods

Parameters
  Z_{ip}: clinical score of scheduling surgery i in period p
  Δ: penalty for not scheduling a surgery
  δ_i: duration of surgery i
  δ_c: cleaning time required after a surgery
  Δ_i: total time required by surgery i (Δ_i = δ_i + δ_c)
  δ_e: emergency time threshold
  σ_{is}: equal to 1 if surgery i belongs to specialty s, 0 otherwise
  M_{rpds}: equal to 1 if OR r is reserved to specialty s in period p of day d, 0 otherwise

Binary variables
  x_{ipdr}: equal to 1 if surgery i starts in period p of day d in OR r, 0 otherwise
  y_{ipdr}: equal to 1 if surgery i occupies OR r in period p of day d, 0 otherwise
  O_{pdr}: equal to 1 if an emergency cannot be handled in OR r in period p of day d, 0 otherwise

y_{ipdr} = \sum_{p' = \max(p - \Delta_i + 1, \, 1)}^{p} x_{ip'dr}   ∀ i ∈ I, p ∈ P, d ∈ D, r ∈ R   (4)

\sum_{p \in P} \sum_{s \in S} y_{ipdr} \, \sigma_{is} \, M_{rpds} = \Delta_i \sum_{p \in P} x_{ipdr}   ∀ i ∈ I, r ∈ R, d ∈ D   (5)

O_{pdr} \ge \sum_{i \in I} y_{ipdr} - \sum_{i \in I : \, \Delta_i \le p} \; \sum_{k = p - \Delta_i + 1}^{p - \delta_i + 1} x_{ikdr}   ∀ p ∈ P, d ∈ D, r ∈ R   (6)

|R| \, \delta_e - \sum_{r \in R} \sum_{k = p}^{p + \delta_e - 1} O_{kdr} \ge 1   ∀ p ∈ P, d ∈ D   (7)

x_{ipdr}, y_{ipdr} \in \{0, 1\}   ∀ i ∈ I, p ∈ P, d ∈ D, r ∈ R   (8)

O_{pdr} \in \{0, 1\}   ∀ p ∈ P, d ∈ D, r ∈ R   (9)

The objective function (1) is composed of two contributions. The first one aims at scheduling the surgeries during the day in decreasing order of clinical score. The second one aims at maximizing the number of performed surgeries.


Constraints (2) ensure that a surgery is scheduled at most once. Constraints (3) guarantee that an OR can host at most one surgery in each period of each day. Constraints (4) ensure the consistency between the x- and y-variables and guarantee the contiguity of the y-variables. Constraints (5) guarantee that the surgery scheduling complies with the block strategy; in other words, a surgery is scheduled only in OR periods dedicated to its specialty. Constraints (6) ensure that if an OR is occupied, then it is not able to handle an emergency unless the ongoing surgery is ending or the OR is being cleaned. Constraints (7) guarantee that at least one OR is able to manage an emergency within the emergency time threshold. Finally, constraints (8)–(9) define the nature of the decision variables.
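To make the structure of the formulation concrete, the following minimal sketch states its core, namely (1)–(4), with the open-source PuLP modeler. It is not the implementation used in the paper (the model is solved there with FICO Xpress-MP), all data are illustrative placeholders, and the specialty-block and emergency constraints (5)–(7), as well as the end-of-day boundary on start periods, are omitted for brevity.

```python
import pulp

# Toy index sets and data (illustrative; the real instances come from the hospital).
I, P, D, R = range(3), range(6), range(2), range(2)   # surgeries, periods, days, ORs
Z = {(i, p): 10.0 / (p + 1) for i in I for p in P}    # decreasing clinical scores Z_ip
DELTA = 100.0                                          # penalty for unscheduled surgeries
DUR = {0: 2, 1: 3, 2: 1}                               # total times Delta_i = delta_i + delta_c

m = pulp.LpProblem("SSP", pulp.LpMaximize)
x = pulp.LpVariable.dicts("x", (I, P, D, R), cat="Binary")  # surgery i starts at (p, d, r)
y = pulp.LpVariable.dicts("y", (I, P, D, R), cat="Binary")  # surgery i occupies (p, d, r)

# Objective (1): reward high-score (early) starts, penalize unscheduled surgeries.
m += (pulp.lpSum(Z[i, p] * x[i][p][d][r]
                 for i in I for p in P for d in D for r in R)
      - DELTA * pulp.lpSum(1 - pulp.lpSum(x[i][p][d][r]
                                          for p in P for d in D for r in R)
                           for i in I))

for i in I:  # (2): each surgery is scheduled at most once
    m += pulp.lpSum(x[i][p][d][r] for p in P for d in D for r in R) <= 1
for p in P:
    for d in D:
        for r in R:  # (3): at most one surgery occupies an OR in each period
            m += pulp.lpSum(y[i][p][d][r] for i in I) <= 1
            for i in I:  # (4): y covers the DUR[i] periods following a start
                m += y[i][p][d][r] == pulp.lpSum(
                    x[i][q][d][r] for q in range(max(p - DUR[i] + 1, 0), p + 1))

m.solve(pulp.PULP_CBC_CMD(msg=False))
print(pulp.LpStatus[m.status], pulp.value(m.objective))
```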

4 Experimental Results

In this section, we present and discuss the computational results of the experimentation performed to validate the ILP formulation proposed for the SSP under study and to evaluate the effect of the emergency management constraints (i.e., constraints (6)–(7)) on the surgery schedule. To this aim, we also solved a modified version of the proposed SSP formulation (SSPr) obtained by relaxing constraints (6)–(7). The two formulations have been solved using an ILP solver (FICO Xpress-MP 8.9) on an AMD Ryzen 3 PRO 4350G at 3.80 GHz with 32 GB of RAM and a time limit of 3600 s.

The experimentation has been conducted on twelve instances built from real data coming from the local hospital. All instances consider the same operating theatre setting but differ in the list of surgeries to be scheduled. In particular, the operating theatre consists of 4 ORs managed according to the block scheduling scheme reported in Fig. 2, which indicates the opening and closing time of each OR on each day and the surgical specialty that can be performed in each OR in each time block. The ORs are open from 8 a.m. to 8 p.m. from Monday to Friday and from 8 a.m. to 2 p.m. on Saturday. We highlight that OR2 is reserved for emergency surgeries only on Saturday. Moreover, it is not possible to schedule an elective surgery in overtime in any OR at the planning stage; in practice, however, overtime can be used to manage unexpected events (e.g., emergency surgeries and surgical complications). Following the hospital's indications, we discretize each day into 36 periods of 20 min each (18 periods on Saturday). The emergency time threshold and the cleaning duration are both set equal to 1 period.

A different weekly waiting list, defined by the surgeons, is considered in each instance. For each surgery in the list, we know its average duration (expressed in number of periods). Starting from this data and considering a normal distribution, we define for each surgery a short, a medium and a long duration value, chosen within the left side, center, and right side of the normal distribution, respectively. On this basis, we built three groups of instances: short (S-instances: INS_1–INS_4); medium (M-instances: INS_5–INS_8); long (L-instances: INS_9–INS_12). This artificial choice is made to simulate the functioning of the system in best-, average- and worst-case scenarios. We highlight that, for all the instances, we set the value of Δ high enough to totally dominate the other component of the objective function; this ensures that the first priority of the SSP formulation is to maximize the number of performed surgeries.


Fig. 2 Surgical scheduling block used in the case study

According to the hospital guidelines, the clinical score of each surgery i in each period p is set equal to Z_{ip} = \sum_{k=p}^{p+\delta_i-1} w_i^2 / k, where w_i is the clinical severity.

The results of the experimentation are shown in Table 2. The first two columns indicate the instance Id and the number of elective patients in the waiting list. Then, for each formulation, we report:

• %SgS – percentage of scheduled surgeries (%SgS = number of scheduled surgeries / number of patients in the waiting list · 100);
• %OUt – percentage of OR theatre utilization (%OUt = number of periods in which the ORs are occupied / number of available periods · 100);
• %Gap – optimality gap (%Gap = 100 · (1 − lower bound / upper bound));
• Time – running time (in seconds).

Moreover, we report two indicators, denoted by Δp and Mp, to highlight the schedule responsiveness to incoming emergencies. These values are computed by simulating the arrival of an emergency in each period of the time horizon. In particular, Δp is the number of periods in which it is not possible to handle an emergency within the time threshold, whereas Mp is the number of periods that an emergency has to wait before it can be scheduled in the worst-case scenario. We report the values of Δp and Mp only for the SSPr schedule, since for the SSP schedule they are always equal to 0 and 1, respectively.

We can observe that, on average, the two formulations show similar results in terms of %SgS and %OUt. These results prove that, through an effective management of the ORs, it is possible to define a surgery schedule able to handle emergencies without a significant decrease of %SgS and %OUt. On the other hand, considering the Δp and Mp indicators of the SSPr formulation, it can be observed that neglecting emergency management could seriously compromise the ability of the operating theatre to deal with emergencies.
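The computation of Δp and Mp can be sketched as follows. The schedule encoding (a per-period availability flag derived from guideline 10) and the per-day scan are our own illustrative choices, not the authors' code, and a same-period handling is counted as a one-period wait, consistent with Mp = 1 for SSP schedules.

```python
def responsiveness(free, delta_e):
    """Return (delta_p, m_p). free[d][p][r] is True when OR r could take an
    emergency in period p of day d (empty, being cleaned, or a surgery
    ending, cf. guideline 10)."""
    delta_p, m_p = 0, 0
    for day in free:
        n = len(day)
        for p in range(n):
            # offset of the first period (from p on) with an available OR
            k = next((k for k in range(n - p) if any(day[p + k])), n - p)
            if k + 1 > delta_e:          # not handled within the threshold
                delta_p += 1
            m_p = max(m_p, k + 1)
    return delta_p, m_p

# Toy horizon: one day, four periods, three ORs (cf. Fig. 1); True = available.
free = [[[False, False, True], [False, False, False],
         [True, False, False], [True, True, True]]]
print(responsiveness(free, delta_e=1))   # -> (1, 2) on this toy schedule
```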


Table 2 Comparison of SSP and SSPr solutions

                      SSP                               SSPr
Id       |I|   %SgS   %OUt   %Gap  Time      %SgS   %OUt   Δp     Mp    %Gap  Time
INST1    148   92.57  50.28  0.17  3600      93.84  49.86   0      1    0.17  3600
INST2    155   94.84  53.33  0     5         95.45  53.19   0      1    0     6
INST3    144   96.53  50.42  0     4         95.21  50.14   0      1    0     4
INST4    138   97.83  49.58  0     3         97.83  49.44   0      1    0     3
INST5    145   83.45  82.50  1.46  3600      82.88  82.50  12      4    0.65  3600
INST6    155   85.16  90.14  2.41  3600      85.71  90.42  10      4    1.21  3600
INST7    145   88.97  83.61  0.92  3600      88.36  83.61   8      3    0.78  3600
INST8    137   79.56  72.22  5.19  3600      78.99  72.36  13      5    0.59  3600
INST9    146   50.68  88.89  0     3096      53.42  97.50  55      8    0     15
INST10   151   49.67  88.75  0     3487      52.32  97.36  68      8    0     719
INST11   148   48.65  87.08  0     2676      50.00  92.64  60      8    0     3
INST12   140   50.71  87.64  0     1023      51.43  92.22  51      9    0     36
Average  146   76.55  73.70  0.85  2357.58   77.17  75.94  23.08  4.41  0.28  1565.52

Indeed, the average value of Δp is around 23 (out of 198 available periods). Therefore, if an emergency occurs, in 10% of the cases on average the OR theatre is not able to deal with it within the emergency threshold. This critical issue is further supported by the average Mp, which is almost equal to 5 (i.e., around 100 min). We highlight that a wait of around an hour and a half could have severe consequences on the conditions of an emergency patient.

Moreover, if we analyze the results on the basis of the three groups of instances, we can observe that the duration of the surgeries, as expected, greatly affects the percentage of scheduled surgeries and the OR utilization. Indeed, the S-instances show the highest percentage of scheduled surgeries with the lowest OR occupation. We point out that, even if the ORs are generally free 50% of the time, it is not possible to schedule all the surgeries due to the considered scheduling blocks. Moreover, when the average surgery duration is short, it is more likely that the schedule allows emergencies to be managed effectively. Considering the M-instances, the two formulations obtain similar results, but the emergencies cannot always be managed effectively. Finally, considering the L-instances, the SSPr schedule obtains the highest %OUt (around 95% on average). However, this OR utilization significantly affects the ability to handle an emergency in a suitable time, as shown by the average Δp and Mp (around 60 and 8, respectively). On the other hand, the SSP schedule is able to effectively manage incoming emergencies while presenting a high %OUt at the same time.

In conclusion, in terms of solution quality and running times, the two formulations show comparable results. Indeed, both formulations optimally solve 7 out of 12 instances within the time limit, with an average percentage gap below 1%.


5 Conclusions

In this work, we presented the SSP arising in a local hospital in Naples. It consists in determining the surgery schedule that maximizes the efficiency of the OR theatre while respecting the medical guidelines. In particular, unlike other contributions on the same topic, the SSP under study incorporates in the surgery scheduling the possibility of managing unforeseen events (e.g., emergency surgeries). An original ILP formulation has been proposed to tackle the problem. The experimentation performed on instances with up to 155 surgeries, 6 days and 4 ORs, derived from real data, confirms the applicability and the effectiveness of the proposed approach. Future work concerns the extension of the proposed model to take into account other critical resources, including surgeons, anaesthetists, mobile equipment and recovery beds. Moreover, we will investigate the possibility of considering surgery groups based on similarity patterns [4] in the proposed approach. Finally, it would be interesting to study the impact of the emergency time threshold when considering the uncertainty related to surgery durations and emergency occurrences.

References

1. Agnetis, A., Coppi, A., Corsini, M., Dellino, G., Meloni, C., Pranzo, M.: A decomposition approach for the combined master surgical schedule and surgical case assignment problems. Health Care Management Science 17(1), 49–59 (2014)
2. Ballestín, F., Pérez, Á., Quintanilla, S.: Scheduling and rescheduling elective patients in operating rooms to minimise the percentage of tardy patients. J. of Scheduling 22(1), 107–118 (2019)
3. Banditori, C., Cappanera, P., Visintin, F.: A combined optimization-simulation approach to the master surgical scheduling problem. IMA J. of Management Mathematics 24(2), 155–187 (2013)
4. Boccia, M., Masone, A., Sforza, A., Sterle, C.: A partitioning based heuristic for a variant of the simple pattern minimality problem. In: International Conference on Optimization and Decision Science, pp. 93–102. Springer (2017)
5. Cardoen, B., Demeulemeester, E., Beliën, J.: Operating room planning and scheduling: A literature review. EJOR 201(3), 921–932 (2010)
6. Cardoen, B., Demeulemeester, E., Beliën, J.: Optimizing a multiple objective surgical case sequencing problem. International Journal of Production Economics 119(2), 354–366 (2009)
7. Durán, G., Rey, P.A., Wolff, P.: Solving the operating room scheduling problem with prioritized lists of patients. Annals of Operations Research 258(2), 395–414 (2017)
8. Fairley, M., Scheinker, D., Brandeau, M.L.: Improving the efficiency of the operating room environment with an optimization and machine learning model. Health Care Management Science 22(4), 756–767 (2019)
9. Jung, K.S., Pinedo, M., Sriskandarajah, C., Tiwari, V.: Scheduling elective surgeries with emergency patients at shared operating rooms. Production and Operations Management 28(6), 1407–1430 (2019)
10. Luo, Y.Y., Wang, B.: A new method of block allocation used in two-stage operating rooms scheduling. IEEE Access 7, 102820–102831 (2019)
11. Marques, I., Captivo, M.E., Vaz Pato, M.: An integer programming approach to elective surgery scheduling. OR Spectrum 34(2), 407–427 (2012)


12. Marques, I., Captivo, M.E.: Bicriteria elective surgery scheduling using an evolutionary algorithm. Operations Research for Health Care 7, 14–26 (2015)
13. Marques, I., Captivo, M.E., Vaz Pato, M.: A bicriteria heuristic for an elective surgery scheduling problem. Health Care Management Science 18(3), 251–266 (2015)
14. Mateus, C., Marques, I., Captivo, M.E.: Local search heuristics for a surgical case assignment problem. Operations Research for Health Care 17, 71–81 (2018)
15. Rahimi, I., Gandomi, A.H.: A comprehensive review and analysis of operating room and surgery scheduling. Archives of Computational Methods in Engineering 28(3), 1667–1688 (2021)
16. Riise, A., Burke, E.K.: Local search for the surgery admission planning problem. J. of Heuristics 17(4), 389–414 (2011)
17. Riise, A., Mannino, C., Burke, E.K.: Modelling and solving generalised operational surgery scheduling problems. Computers & Operations Research 66, 1–11 (2016)
18. Rothstein, D.H., Raval, M.V.: Operating room efficiency. Seminars in Pediatric Surgery 27(2), 79–85 (2018)
19. Vijayakumar, B., Parikh, P.J., Scott, R., Barnes, A., Gallimore, J.: A dual bin-packing approach to scheduling surgical cases at a publicly-funded hospital. EJOR 224(3), 583–591 (2013)
20. Zhang, J., Dridi, M., El Moudni, A.: Column-generation-based heuristic approaches to stochastic surgery scheduling with downstream capacity constraints. International Journal of Production Economics 229, 107764 (2020)
21. Zhu, S., Fan, W., Yang, S., Pei, J., Pardalos, P.M.: Operating room planning and surgical case scheduling: a review of literature. J. of Combinatorial Optimization 37(3), 757–805 (2019)
22. Zhu, S., Fan, W., Liu, T., Yang, S., Pardalos, P.M.: Dynamic three-stage operating room scheduling considering patient waiting time and surgical overtime costs. J. of Combinatorial Optimization 39(1), 185–215 (2020)

The Ambulance Diversion Phenomenon in an Emergency Department Network: A Case Study

Christian Piermarini and Massimo Roma

Abstract Most of the studies dealing with the increasing and well-known problem of Emergency Department (ED) overcrowding focus on modeling the patient flow within a single ED, without considering the possibilities offered by cooperation among EDs. In this work, we analyze the overcrowding phenomenon considering an ED network, focusing on the so-called Ambulance Diversion problem. A Discrete Event Simulation (DES) model is used to represent the ED network, and the Simulation-Based Optimization (SBO) approach is adopted to study a first-aid network under different conditions. The aim is to optimize the performance of the entire network, in order to provide the best service to patients without incurring unsustainable resource costs. A real case study consisting of six large EDs in the Lazio region of Italy has been considered. The achieved results show which diversion policies are best, both in terms of patient waiting time and of costs for the service providers. Our implementation of the SBO procedure is based on a novel approach for the communication between the DES model and a Derivative-Free Optimization algorithm.

C. Piermarini (B) · M. Roma
Department of Computer, Control and Management Engineering, Rome, Italy
e-mail: [email protected]
M. Roma
e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
P. Cappanera et al. (eds.), Optimization and Decision Science: Operations Research, Inclusion and Equity, AIRO Springer Series 9, https://doi.org/10.1007/978-3-031-28863-0_25

1 Introduction

While dealing with the optimization of emergency medical services, many mathematical methodologies are applied to tackle the ever-present Emergency Department overcrowding problem. Here we focus on a phenomenon strongly related to ED overcrowding: Ambulance Diversion (AD). AD is a procedure adopted by ED managers during an emergency situation, when patients cannot be accepted and treated anymore, and incoming ambulances are diverted to neighboring EDs.


The relevance that AD has obtained over the years is witnessed by many research activities dealing with the analysis of this phenomenon by means of various mathematical models (see, e.g., [1–11] and the references reported therein), and also through the Simulation-Based Optimization (SBO) approach [12]. The SBO approach has been successfully applied to perform analyses of management and operative problems arising in systems represented by means of simulation models (see, e.g., the papers [13, 14], where systematic reviews of SBO methods applied to analyze hospital EDs' performances are reported, and the papers [15–20]). The procedure aims at optimizing specific Key Performance Indicators (KPIs) and is applicable to a wide range of phenomena and situations (see, e.g., [21, 22]). Of course, the SBO approach deals with Black-Box Optimization problems, since the objective function and the constraints of the optimization problem at hand are not available in analytic form [23]. As a consequence, algorithms belonging to the class of Derivative-Free Optimization (DFO) methods must be applied (see, e.g., [24, 25]).

In this paper, we present a case study involving a network of six large EDs located in the Lazio region of Italy, which we tackle through the construction of a Discrete Event Simulation (DES) model of the network, combined with an appropriate optimization algorithm. The availability of real data allowed us to build a reliable queueing network model. We then formulated the AD problem as an SBO problem, namely a Black-Box Integer Optimization problem with bound constraints on the decision variables and with simulation constraints. We solved this problem by applying the DFO algorithm for black-box optimization problems with integer variables recently proposed in [26]. In this way, the parameters of the ED network can be tuned to provide the best possible performance, taking into consideration the different AD policies applied.

The paper is organized as follows: in Sect. 2, some generalities on AD are recalled, along with a brief description of the case study we consider. Section 3 reports the DES model used. The statement of the SBO problem we formulate is included in Sect. 4. The implementation of the SBO procedure and the results of the experimentation are reported in Sect. 5.

2 Generalities on Ambulance Diversion and the Case Study

AD is a strategy adopted by hospital ED managers when an emergency situation occurs and patients cannot be accepted and treated anymore. Two possibilities can be considered when an ED is overcrowded, namely when some overcrowding index is exceeded:

• the patients are accepted anyway but, due to the unavailability of ED resources, they wait (in a possibly long queue) for visits and treatments; they are usually placed in an ED hall or inside the ambulance in the parking lot during the entire waiting time;


• the ED goes on AD status, and the patients are redirected (transferred) to another available nearby ED, according to some AD policy.

Another important concept related to the AD phenomenon is that of diversion policies, i.e., the set of organizational choices that defines: (1) the conditions that establish when an ED must be considered on AD status, i.e., when an ED has to redirect patients; (2) which patients can be diverted, according to their severity level; (3) the destinations of the redirected patients.

In this paper, we consider a network composed of six EDs located in the Lazio region of Italy, for which data for the whole year 2012 have been collected. In these EDs, the standard four-color tag scale was used in the triage, and AD only concerned severely injured or life-threatening patients. Therefore, only yellow and red-tagged patients are considered, and the adopted policies are the following:

P1: Ambulance stoppage, i.e., no redirection inside the network is allowed; hence patients are queued.
P2: Redirection towards the nearest ED (complete diversion): when all the ED resources are occupied, patients are redirected to the nearest ED, if the latter is not on diversion.
P3: Redirection towards the nearest ED, with priority to the red-tagged patients (partial diversion): identical to the previous policy, but only yellow-tagged patients are diverted.
P4: Redirection to the least occupied ED of the network, regardless of the distance between the "sending" and the "receiving" ED.

3 The Discrete Event Simulation Model

In order to study the effect of different AD policies on an ED network, a conceptual input–throughput–output simulation model of each single ED is usually adopted (see, e.g., [4, 10]). This model is based on theoretical probability distributions which rule the patient flow through the ED, and it uses the concepts of the total number of medical resources available at each ED and of the number of medical resources required for patient treatment on the basis of the severity level. Here, to build a DES model for assessing different AD policies on the considered ED network, we introduce a similar concept, which we call sanitary resource. In particular, we define a sanitary resource as an integer-valued parameter which symbolically represents the total amount of human (physicians, nurses) and physical (ED rooms, beds, stretchers) resources available at the ED. We then model each single ED as a queueing system in which the number of servers reflects the value of the sanitary resource of the ED. We determine the value of the sanitary resource associated with each ED in the network by means of a calibration procedure based on real data, so that each queueing model can well represent the operation of the associated ED. More specifically, we determine this value so that the KPI of major interest obtained from the simulation output accurately approximates the one derived from real data.


Note that calibration procedures are often used in studying patient flow through an ED (see, e.g., [27]) due to the complexity and great variability of the processes involved. The adoption of this approach allows us to overcome the drawbacks usually encountered by approaches which relate the total amount of resources available at an ED only to its size, without even considering personnel shifts. Another drawback not present in our approach is the need for an empirical scale derived from observations for determining the number of resources required for patient treatment (see, e.g., [10]). We anticipate that the sanitary resources will actually serve as control variables during the optimization phase of the approach we propose.

Moreover, we acquire the probability distributions governing the patient flow in the input–throughput–output model of each ED by a best-fitting procedure on the available real data. To this aim, for each ED, we collect from the data:

• patient interarrival times for each color tag;
• doctor-to-discharge times, i.e., the times between the first visit of a patient by a physician and the discharge.

More in detail, as regards the stochastic process of patient arrivals at each ED, according to the standard assumption in the literature (see, e.g., [28–34]), we model patient arrivals by a Nonhomogeneous Poisson Process (NHPP) and, following the usual procedure, we approximate the arrival rate by a piecewise constant function. To this aim, we consider 8-h time slots for each day (from 00:00 a.m. to 08:00 a.m., from 08:00 a.m. to 04:00 p.m., and from 04:00 p.m. to 12:00 a.m.). The choice of 8-h time slots is motivated by the fact that the ED staff is scheduled on 8-h shifts, hence an analysis based on these time slots is realistic. Moreover, red-tagged patient arrivals in each time slot (and in particular during the night slot) are few, so that reducing the time slot width would imply an even smaller number of patient arrivals, making any statistical analysis meaningless. As regards the doctor-to-discharge times, we consider the same 8-h time slots and, by performing an analysis of the collected data, we obtain a suitable probability distribution and its parameters for each time slot.

After defining a proper stochastic model for the patient interarrival times and for the doctor-to-discharge times, in order to calibrate the model, we focus on the major KPI of interest, namely the time difference between the patient arrival and the start of the visit. This time difference is usually called door-to-doctor time (see, e.g., [14]) and represents the patient waiting time before the first consultation with a physician. Since the value of this time, especially for critical patients, plays a crucial role in deciding whether an ED must go on diversion status, we use it as the KPI for calibrating and validating our DES model of each ED. In this way, for each ED of the network, we build a simple queueing model which generates accurate average values of the door-to-doctor times (for each color tag and for each time slot), reflecting the whole ED care process in the "as-is" status, i.e., without considering any AD.
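As an illustration of this arrival model, the following sketch samples arrivals from an NHPP with a piecewise-constant rate over the three 8-h slots; the rates are placeholders, not the estimated case-study values. Since the rate is constant within a slot, interarrival times are exponential there, and the exponential clock can be restarted at each slot boundary thanks to memorylessness.

```python
import random

SLOT_MINUTES = 8 * 60   # three slots per day: 00-08, 08-16, 16-24

def sample_arrivals(rates_per_min, days, rng=random.Random(42)):
    """rates_per_min[j]: constant arrival rate (patients/min) in slot j = 0, 1, 2.
    Within each slot the process is homogeneous Poisson; the overshoot past a
    slot boundary is discarded and the clock restarts at the boundary."""
    arrivals = []
    for day in range(days):
        for j, lam in enumerate(rates_per_min):
            t0, t1 = j * SLOT_MINUTES, (j + 1) * SLOT_MINUTES
            t = t0
            while True:
                t += rng.expovariate(lam)
                if t >= t1:
                    break
                arrivals.append(day * 24 * 60 + t)  # minutes since simulation start
    return arrivals

# Illustrative yellow-tag rates for the night, morning and evening slots.
print(len(sample_arrivals([0.02, 0.08, 0.06], days=7)))
```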


Fig. 1 ARENA model of a single ED used in the calibration procedure

For each ED, we determine the representative value of the sanitary resource by minimizing the difference between the door-to-doctor times obtained from the simulation output and those derived from real data. Formally, we denote by n_{ij} the value of the sanitary resource representative of the i-th ED, i = 1, …, 6, during the j-th time slot, j = 1, …, 3, and for each ED i we solve the problem

\min_{n_{ij}} \sum_{j=1}^{3} \sum_{k=1}^{2} \left| W_{j,k}^{sim} - W_{j,k}^{real} \right|,   (1)

where k = 1, 2 represents the color tag (red and yellow) and, for each j = 1, …, 3 and k = 1, 2, W_{j,k}^{sim} and W_{j,k}^{real} denote the average door-to-doctor times obtained from the simulation output and from the data, respectively. Therefore, for each ED of the network, we construct a DES model to represent the input–throughput–output queueing system of the ED and solve problem (1) to calibrate it, namely to determine the sanitary resources to be used for generating a good approximation of the actual door-to-doctor times for yellow and red-tagged patients. We implemented this model using the ARENA 16.1 simulation software [35], a widely used general-purpose DES package. A snapshot of the model is reported in Fig. 1. The model entities (the patients) are created according to the suited NHPP and flow through the ED. An attribute (delay_code) is assigned to each entity, depending on the color tag and on the time slot in which the patient entered the ED. Then, the entities join a priority queue, and they are "taken in charge" by one available sanitary resource, which is seized to this aim. The patients are delayed for a time that depends on the doctor-to-discharge probability distribution associated with the specific delay_code. At the end, the entity releases the seized resource, and record modules collect the door-to-doctor time for each patient, distinguishing each color tag and each time slot.

Once the value of the sanitary resource n_{ij} has been determined for each j-th time slot of each i-th ED in the network, we can build a DES model of the entire network. It consists of a weighted graph where each node corresponds to an ED and each edge represents a route between EDs with an associated weight (the travel time). The model of the entire network has also been implemented using the ARENA 16.1 simulation software.
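The calibration of one ED can be sketched as follows, with an exhaustive search over the three slot values n_{ij} (the bounds 2–10 anticipate those of the optimization problem in Sect. 4); the simulate function below is a toy stand-in for an ARENA run, and all numbers are illustrative, not the case-study data.

```python
import itertools

# Illustrative "real" average door-to-doctor times (minutes) by (slot j, tag k),
# with k = 1 red and k = 2 yellow; these are placeholders, not case-study data.
W_REAL = {(1, 1): 6.0, (1, 2): 16.0, (2, 1): 11.0,
          (2, 2): 26.0, (3, 1): 7.0, (3, 2): 13.0}

def simulate(n):
    """Toy stand-in for an ARENA run: waiting shrinks as the resources n_ij
    grow; red-tagged patients (k = 1) wait less due to their priority."""
    load = {1: 60.0, 2: 110.0, 3: 70.0}   # illustrative slot workloads
    return {(j, k): load[j] / (n[j - 1] * (3.0 if k == 1 else 1.2))
            for j in (1, 2, 3) for k in (1, 2)}

def calibrate(w_real, n_range=range(2, 11)):
    """Exhaustive search for the slot values (n_i1, n_i2, n_i3) minimizing (1)."""
    best, best_gap = None, float("inf")
    for n in itertools.product(n_range, repeat=3):
        w_sim = simulate(n)
        gap = sum(abs(w_sim[j, k] - w_real[j, k])
                  for j in (1, 2, 3) for k in (1, 2))
        if gap < best_gap:
            best, best_gap = n, gap
    return best, best_gap

print(calibrate(W_REAL))
```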


We implemented the four AD policies (P1, P2, P3, P4) described in Sect. 2. The first one (P1) does not consider any redirection, so that each ED acts as a single first-aid point. In the second one (P2), to implement complete redirection to the nearest ED, some decision modules are added to each ED model in order to represent the diversion condition and the actual redirection of the patients when all the sanitary resources are seized and a red/yellow-tagged patient arrives. Policy P3 considers partial diversion; hence only the redirection of yellow-tagged patients has been implemented, depending on a threshold value of the available sanitary resources. Finally, policy P4 involves an additional central decision module for the redirection towards the ED of the network with the least number of seized sanitary resources. For the sake of brevity, we do not describe the details of these implementations; a sketch of the underlying redirection logic is given below.
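The following minimal sketch illustrates that logic for P2 and P4; the data structures are hypothetical simplifications of the ARENA decision modules, using the number of seized sanitary resources as the occupancy measure.

```python
def divert(origin, seized, capacity, travel_time, policy):
    """Destination ED for a diverted patient, or None to queue locally.
    seized/capacity: dicts ED -> int; travel_time: dict (ED, ED) -> minutes."""
    if seized[origin] < capacity[origin]:
        return None                    # the origin ED can treat the patient
    candidates = [e for e in capacity
                  if e != origin and seized[e] < capacity[e]]
    if not candidates:
        return None                    # every ED is on diversion: queue locally
    if policy == "P2":                 # redirect to the nearest available ED
        return min(candidates, key=lambda e: travel_time[origin, e])
    if policy == "P4":                 # least occupied ED, distance ignored
        return min(candidates, key=lambda e: seized[e])
    return None                        # P1: never redirect

# Example with hypothetical data:
seized = {"ED1": 4, "ED2": 2, "ED3": 5}
capacity = {"ED1": 4, "ED2": 5, "ED3": 5}
travel = {("ED1", "ED2"): 25, ("ED1", "ED3"): 12}
print(divert("ED1", seized, capacity, travel, "P2"))  # ED3 is full, so ED2
```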

4 Statement of the Simulation-Based Optimization Problem

In this section, we report the formulation of the SBO problem we propose in order to determine the best value of the sanitary resources of each ED of the network, so that some suited KPIs are optimized. The purpose is to obtain the best overall performance of the ED network, taking into account the different AD policies applied. To this aim, we consider the sanitary resources n_{ij} as the control variables of the SBO problem. Moreover, we focus on the following KPI: the average patient Non-Value-Added (NVA) time associated with each ED, for red and yellow-tagged patients, corresponding to waiting and transfer times. We denote by t_i^{NVA/R} and t_i^{NVA/Y} the average NVA time of the i-th ED for red and yellow-tagged patients, respectively. It is important to note that such times are actually functions of the variables n_{ij}, and they can only be computed from the output of the simulation runs. In order to keep costs low, the overall number of sanitary resources should be minimized too. Of course, this objective conflicts with the minimization of the NVA times defined above. Therefore, we actually have a black-box multiobjective problem, which we reduce to a single-objective problem by means of the standard scalarization procedure (see, e.g., [36]). More in detail, we consider the following objective function

w_1 \, 480 \sum_{i=1}^{6} \sum_{j=1}^{3} n_{ij} \; + \; w_2 \sum_{i=1}^{6} t_i^{NVA/Y} \; + \; w_3 \sum_{i=1}^{6} t_i^{NVA/R},   (2)

where w_1, w_2 and w_3 are suited weights. The last two terms represent the overall NVA times (in minutes) for yellow and red-tagged patients, respectively; the first term measures the overall time (in minutes) for which the sanitary resources are scheduled in the 8-h (480 min) time slots, hence representing the total cost to be sustained in order to maintain a certain number of sanitary resources available in each time slot of each ED.


As regards the constraints, we have bound constraints on the variables n_{ij} and simulation constraints which prevent the NVA times from exceeding specific threshold values for each color tag. The complete formulation of the SBO problem we consider is the following:

\min \; w_1 \, 480 \sum_{i=1}^{6} \sum_{j=1}^{3} n_{ij} + w_2 \sum_{i=1}^{6} t_i^{NVA/Y} + w_3 \sum_{i=1}^{6} t_i^{NVA/R}

s.t.  t_i^{NVA/Y} \le 40,   i = 1, …, 6,
      t_i^{NVA/R} \le 20,   i = 1, …, 6,                        (3)
      2 \le n_{ij} \le 10, \; n_{ij} integer,   i = 1, …, 6; j = 1, …, 3,

which is a Black-Box Integer Optimization (BBIO) problem with 18 variables, the corresponding bound constraints, and 12 simulation constraints.
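The following sketch shows how the black-box objective (2) and the 12 simulation constraints of problem (3) can be wrapped as Python functions for a DFO solver; run_network_simulation is a hypothetical stand-in for the ARENA network model, and its formulas are toy placeholders, not the actual simulator.

```python
W1, W2, W3 = 1.0, 300.0, 600.0   # the weights used in the experimentation

def run_network_simulation(n):
    """Hypothetical stand-in for the ARENA network model. n[i][j]: sanitary
    resources of ED i in slot j. Returns per-ED average NVA times (minutes)
    for yellow and red tags; the formulas are toy placeholders."""
    t_yellow = [500.0 / sum(ed) for ed in n]
    t_red = [250.0 / sum(ed) for ed in n]
    return t_yellow, t_red

def objective(n):
    """Objective (2): resource-minutes per day plus weighted NVA times."""
    t_yellow, t_red = run_network_simulation(n)
    cost = W1 * 480 * sum(sum(ed) for ed in n)
    return cost + W2 * sum(t_yellow) + W3 * sum(t_red)

def constraints(n):
    """The 12 simulation constraints of (3), in the g(n) <= 0 convention."""
    t_yellow, t_red = run_network_simulation(n)
    return [t - 40 for t in t_yellow] + [t - 20 for t in t_red]

n0 = [[4, 4, 4] for _ in range(6)]   # an "as-is"-like start; 2 <= n_ij <= 10
print(objective(n0), max(constraints(n0)))
```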

5 SBO Implementation and Experimental Results

To solve the BBIO problem (3), we use an algorithm belonging to the class of Derivative-Free Optimization methods recently proposed in [26], named DFLINT.¹ It is an algorithm for black-box inequality and box-constrained integer nonlinear programming problems, hence well suited for solving problem (3). Since the DFLINT code is available in Python, we decided to code problem (3) in Python as well. Moreover, we also created a specific interface between the optimization algorithm and the ARENA simulation package using Python, without relying on VBA. This allowed us to significantly improve the overall performance of the SBO approach applied to the problem under study. An in-depth description of this procedure is available in [37].

We now report the results obtained by means of the proposed SBO approach, applied to the ED network under study. The aim is to determine which AD policy (among those described in Sect. 2) and which ED settings (in terms of sanitary resources to be allocated) perform best, in the sense that the objective function (2) is minimized. In our experimentation, the weights w_1, w_2 and w_3 in (2) are chosen so that the order of magnitude of the three terms of the objective function (2) is the same, namely w_1 = 1, w_2 = 300 and w_3 = 600. Of course, different values could be experimented with, leading to a different weighing of the three terms of the objective function, i.e., enabling higher weights to be assigned to the terms considered most relevant, for instance to the term which refers to the most critical (red-tagged) patients. As regards the parameters of the DFLINT code, we use the default ones; the starting point of the algorithm is the one corresponding to the "as-is" status, i.e., the current ED settings.

¹ Publicly available in the DFL Library at the URL www.iasi.cnr.it/~liuzzi/DFL.


Table 1 Values of the objective function corresponding to the "as-is" status (F^0) and to the optimal function value (F^*) for each AD policy

AD policy    F^0          F^*
P1           127454.63    51975.77
P2           92956.13     43506.65
P3           93758.85     46180.20
P4           47926.04     42037.83

Table 2 Values of the sanitary resources n_{ij} corresponding to the "as-is" status and to the optimal point for each AD policy, for each ED and for each time slot

ED    Time slot      As-is   P1   P2   P3   P4
ED1   00:00–08:00      4      7    4    6    5
      08:00–16:00      4      5    4    3    5
      16:00–24:00      4      4    3    5    5
ED2   00:00–08:00      4      6    5    5    5
      08:00–16:00      5      5    5    4    5
      16:00–24:00      4      7    4    4    4
ED3   00:00–08:00      4      4    3    4    4
      08:00–16:00      4      6    4    5    4
      16:00–24:00      4      3    4    3    4
ED4   00:00–08:00      4      7    5    6    4
      08:00–16:00      5      4    6    6    3
      16:00–24:00      2      6    4    4    4
ED5   00:00–08:00      4      7    7    6    5
      08:00–16:00      5      4    6    4    3
      16:00–24:00      2      6    4    5    5
ED6   00:00–08:00      3      4    4    4    3
      08:00–16:00      3      4    4    3    3
      16:00–24:00      2      4    3    3    2

In Table 1 we report, for each AD policy, the value of the objective function (2) corresponding to the current "as-is" status (denoted by F^0), along with the optimal function value (denoted by F^*) obtained by solving problem (3). Moreover, in Table 2 the corresponding values of the sanitary resources n_{ij} are detailed. Table 1 clearly shows the improvement achieved for all the policies considered; in most cases it is very significant. Moreover, the best diversion policy in terms of the objective function (2) can be easily determined: it is policy P4. Actually, it provides good operative performance even before the optimization, i.e., at the starting point. Furthermore, the worst diversion policy is P1, for which the best improvement of the objective function is observed. The main issue associated with the ambulance-stoppage diversion policy P1 is the inability to redirect patients toward other EDs, which directly results in the need to allocate a higher number of sanitary resources to maintain good operative performance.

Table 3 NVA times t_i^{NVA/Y} and t_i^{NVA/R}, i = 1, …, 6, corresponding to the starting point

       Color tag   P1            P2            P3            P4
ED1    Yellow      15.89 ± 0.96  8.55 ± 0.45   12.78 ± 0.55  2.98 ± 0.09
       Red         6.78 ± 0.44   4.86 ± 0.42   4.19 ± 0.36   2.70 ± 0.18
ED2    Yellow      25.77 ± 4.15  2.05 ± 0.39   6.89 ± 0.48   2.65 ± 0.17
       Red         11.18 ± 1.62  1.72 ± 0.22   0.74 ± 0.12   2.43 ± 0.30
ED3    Yellow      5.96 ± 0.97   2.10 ± 0.03   5.06 ± 0.87   1.32 ± 0.17
       Red         3.94 ± 0.74   1.53 ± 0.33   1.98 ± 0.42   1.21 ± 0.28
ED4    Yellow      50.28 ± 2.75  34.13 ± 1.65  43.43 ± 1.54  5.10 ± 0.27
       Red         24.40 ± 1.08  20.19 ± 1.08  15.83 ± 0.75  4.46 ± 0.34
ED5    Yellow      54.63 ± 2.98  37.20 ± 1.57  46.48 ± 1.60  5.27 ± 0.32
       Red         23.70 ± 1.43  17.85 ± 1.01  14.22 ± 0.81  4.37 ± 0.31
ED6    Yellow      12.51 ± 1.26  12.98 ± 1.25  9.05 ± 0.93   2.33 ± 0.19
       Red         7.10 ± 1.22   7.47 ± 0.96   4.66 ± 0.70   2.08 ± 0.41

The results show that similar performances can be obtained by applying other diversion policies with a reduced number of sanitary resources, thanks to the inter-hospital transportation of patients. In other words, without adopting redirection policies, the same performance can be obtained only at higher costs, due to the increased number of sanitary resources needed. From Table 2, it can be easily observed that an increase in the number of sanitary resources (with respect to the starting point) is required in many cases to ensure the viability of the corresponding AD policy. This is mainly due to the satisfaction of the simulation constraints in problem (3), which impose that the NVA times be lower than specific values. In the Appendix we report detailed results concerning such NVA times, corresponding to the starting point (Table 3) and to the optimal point obtained for each policy (Table 4). From these tables, the significant reduction obtained for such times by adopting the ED settings corresponding to the optimal points can be observed. The fact that, in some cases, the times are similar across the different diversion policies is due to the queue discipline adopted for boarding patients, which of course gives the highest priority to red-tagged patients. It can also be noted that the average NVA times corresponding to the optimal points (Table 4) do not exceed 5 min, which can be considered a really good operative result. A concluding remark on the analyzed case study, arising from our experimentation, is that a good trade-off should be reached between the reduction of the average patient door-to-doctor times and the sanitary resources allocated at each ED in each time slot.

Table 4 NVA times t_i^{NVA/Y} and t_i^{NVA/R}, i = 1, …, 6, corresponding to the optimal point

       Color tag   P1           P2           P3           P4
ED1    Yellow      1.29 ± 0.29  2.39 ± 0.14  3.23 ± 0.15  1.18 ± 0.05
       Red         0.87 ± 0.14  1.97 ± 0.16  0.40 ± 0.12  1.16 ± 0.10
ED2    Yellow      2.62 ± 0.43  1.25 ± 0.17  4.88 ± 0.16  1.18 ± 0.11
       Red         1.54 ± 0.33  1.07 ± 0.22  0.33 ± 0.05  1.09 ± 0.16
ED3    Yellow      0.99 ± 0.23  1.22 ± 0.20  1.78 ± 0.07  0.84 ± 0.07
       Red         0.65 ± 0.20  1.08 ± 0.23  0.11 ± 0.05  0.80 ± 0.11
ED4    Yellow      2.00 ± 0.25  0.99 ± 0.14  2.61 ± 0.24  2.17 ± 0.09
       Red         1.45 ± 0.25  0.75 ± 0.10  0.51 ± 0.10  2.05 ± 0.12
ED5    Yellow      2.20 ± 0.26  0.86 ± 0.14  3.79 ± 0.33  1.69 ± 0.16
       Red         1.30 ± 0.20  0.60 ± 0.11  0.66 ± 0.13  1.56 ± 0.14
ED6    Yellow      1.08 ± 0.12  1.14 ± 0.18  2.67 ± 0.14  1.76 ± 0.13
       Red         0.53 ± 0.18  0.70 ± 0.24  0.67 ± 0.27  1.39 ± 0.19

6 Conclusions

In this paper, we dealt with the AD phenomenon by means of the SBO approach. In particular, we formulated the problem as a Black-Box Integer Optimization problem, which we solved by means of an effective DFO algorithm recently proposed in the literature. In order to test the proposed approach, we considered a network composed of six EDs located in the Lazio region of Italy. The data collected for these EDs (related to one year) have been used to construct a DES model implemented using the ARENA simulation software. The results obtained on the considered case study show that the best diversion policy in terms of operative performance is the redirection towards the least occupied ED. Moreover, our experimentation clearly highlights that AD and the inter-hospital transportation of patients are definitely procedures worth considering, especially during critical emergency situations which can cause sudden spikes in the number of patient arrivals (see, e.g., [38]). The main limitation of our study is that a good calibration of each ED model is required. In other words, the estimate of the sanitary resource value associated with the current status of the EDs must be as accurate as possible, to avoid unreliable conclusions.

Appendix

In this appendix we include the detailed results concerning the NVA times t_i^{NVA/Y} and t_i^{NVA/R}, i = 1, …, 6, for each AD policy. The tables report the average values and the half-width of the 95% confidence interval. In particular, in Table 3 we report the values of t_i^{NVA/Y} and t_i^{NVA/R} corresponding to the starting point. In Table 4 we report the values of t_i^{NVA/Y} and t_i^{NVA/R} corresponding to the optimal point. All times are expressed in minutes.

References 1. Glushak, C., Delbridge, T.R., Garrison, H.G.: Ambulance diversion. Prehosp. Emerg. Care 1(2), 100–103 (1997) 2. Hagtvedt, R., Ferguson, M., Griffin, P., Jones, G.T., Keskinocak, P.: Cooperative strategies to reduce ambulance diversion. In: Proceedings of the 2009 Winter Simulation Conference, pp. 1861–1874 . IEEE (2009) 3. Gundlach, J.: The problem of ambulance diversion, and some potential solutions. Legis. Public Policy 13, 175–217 (2010) 4. Lin, C.-H., Kao, C.-Y., Huang, C.-Y.: Managing emergency department overcrowding via ambulance diversion: a discrete event simulation model. J. Formos. Med. Assoc. 114, 64–71 (2015) 5. Epstein, S.K., Tian, L.: Development of an emergency department work score to predict ambulance diversion. Acad. Emerg. Med. 13(4), 421–426 (2006) 6. Pham, J.C., Patel, R., Millin, M.G., Kirsch, T.D., Chanmugam, A.: The effects of ambulance diversion: a comprehensive review. Acad. Emerg. Med. 13(11), 1220–1227 (2006) 7. Ramirez-Nafarrate, A., Hafizoglu, A.B., Gel, E.S., Fowler, J.W.: Optimal control policies for ambulance diversion. Eur. J. Oper. Res. 236, 298–312 (2014) 8. Delgado, M.K., Meng, L.J., Mercer, M.P., Pines, J.M., Owens, D.K., Zaric, G.S.: Reducing ambulance diversion at hospital and regional levels: systemic review of insights from simulation models, Western. J. Emerg. Med. 14, 489–498 (2014) 9. Li, M., Vanberkel, P., Carter, A.J.: A review on ambulance offload delay literature. Health Care Manag. Sci. 22, 658–675 (2019) 10. Kao, C.-Y., Yang, J.-C., Lin, C.-H.: The impact of ambulance and patient diversion on crowdedness of multiple emergency departments in a region. PLoS ONE 10(12), e0144227 (2015) 11. Deo, S., Gurvich, I.: Centralized vs. decentralized ambulance diversion: a network perspective. Manag. Sci. 57(7), 1300–1319 (2011) 12. Ramirez-Nafarrate, A., Fowler, J.W., Wu, T.: Design of centralized ambulance diversion policies using simulation-optimization. In: Proceedings of the 2011 Winter Simulation Conference, pp. 1251–1262. IEEE (2011) 13. Yousefi, M., Yousefi, M., Fogliatto, F.S.: Simulation-based optimization methods applied in hospital emergency departments: a systematic review. Simulation 96(10), 791–806 (2020) 14. Vanbrabant, L., Braekers, K., Ramaekers, K., Van Nieuwenhuyse, I.: Simulation of emergency department operations: a comprehensive review of KPIs and operational improvements. Comput. Ind. Eng. 131, 356–381 (2019) 15. Uriarte, A.G., Zúñiga, E.R., Moris, M.U., Ng, A.H.: System design and improvement of an emergency department using simulation-based multi-objective optimization. Journal of Physics: Conference Series, vol. 616, p. 012015. IOP Publishing (2015) 16. Uriarte, A.G., Zúñiga, E.R., Moris, M.U., Ng, A.H.: How can decision makers be supported in the improvement of an emergency department? A simulation, optimization and data mining approach. Oper. Res. Health Care 15, 102–122 (2017) 17. Chen, T., Wang, C.: Multi-objective simulation optimization for medical capacity allocation in emergency department. J. Simul. 10(1), 50–68 (2016) 18. Feng, Y.-Y., Wu, I.-C., Chen, T.-L.: Stochastic resource allocation in emergency departments with a multi-objective simulation optimization algorithm. Health Care Manag. Sci. 20(1), 55– 75 (2017)


19. Lucidi, S., Maurici, M., Paulon, L., Rinaldi, F., Roma, M.: A derivative-free approach for a simulation-based optimization problem in healthcare. Optim. Lett. 10, 219–235 (2016) 20. Lucidi, S., Maurici, M., Paulon, L., Rinaldi, F., Roma, M.: A simulation-based multiobjective optimization approach for health care service management. IEEE Trans. Autom. Sci. Eng. 13, 1480–1491 (2016) 21. Gosavi, A.: Simulation–Based Optimization. Operations Research/Computer Science Interfaces Series, vol. 55. Springer (2015) 22. Fu, M. (Ed.): Handbook of Simulation Optimization. Series in Operations Research and Management Science, vol. 216. Springer (2015) 23. Audet, C., Hare, W.: Derivative-Free and Blackbox Optimization. Springer Series in Operations Research and Financial Engineering. Springer (2017) 24. Conn, A., Scheinberg, K., Vicente, L.N.: Derivative-Free Optimization. SIAM (2009) 25. Amaran, S., Sahinidis, N.V., Sharda, B., Bury, S.J.: Simulation optimization: a review of algorithms and applications. Ann. Oper. Res. 240(1), 351–380 (2016) 26. Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Math. Program. Comput. 12(4), 673–702 (2020) 27. De Santis, A., Giovannelli, T., Lucidi, S., Messedaglia, M., Roma, M.: A simulation-based optimization approach for the calibration of a discrete event simulation model of an emergency department. Ann. Oper. Res. (2022). https://doi.org/10.1007/s10479-021-04382-9. 28. Whitt, W., Zhang, X.: A data-driven model of an emergency department. Oper. Res. Health Care 12, 1–15 (2017) 29. Kuo, Y.-H., Rado, O., Lupia, B., Leung, J.M.Y., Graham, C.A.: Improving the efficiency of a hospital emergency department: a simulation study with indirectly imputed service-time distributions. Flex. Serv. Manuf. J. 28(1), 120–147 (2016) 30. Zeinali, F., Mahootchi, M., Sepehri, M.: Resource planning in the emergency departments: a simulation-based metamodeling approach. Simul. Model. Pract. Theory 53, 123–138 (2015) 31. Ahmed, M.A., Alkhamis, T.M.: Simulation optimization for an emergency department healthcare unit in Kuwait. Eur. J. Oper. Res. 198(3), 936–942 (2009) 32. Ahalt, V., Argon, N., Strickler, J., Mehrotra, A.: Comparison of emergency department crowding scores: a discrete-event simulation approach. Health Care Manag. Sci. 21, 144–155 (2018) 33. Guo, H., Gao, S., Tsui, K., Niu, T.: Simulation optimization for medical staff configuration at emergency department in Hong Kong. IEEE Trans. Autom. Sci. Eng. 14(4), 1655–1665 (2017) 34. De Santis, A., Giovannelli, T., Lucidi, S., Messedaglia, M., Roma, M.: Determining the optimal piecewise constant approximation for the nonhomogeneous poisson process rate of emergency department patient arrivals. Flex. Ser. Manuf. J. (2021). https://doi.org/10.1007/s10696-02109408-9 35. Kelton, W., Sadowsky, R., Zupick, N.: Simulation with ARENA, 6th edn. McGraw Hill Education (2015) 36. Miettinen, K.: Nonlinear Multiobjective Optimization. Kluwer Academic Publishers (1999) 37. Piermarini, C., Roma, M.: A simulation-based optimization approach for analyzing the ambulance diversion phenomenon in an emergency department network (2021). arXiv: 2108.04162 38. Fava, G., Giovannelli, T., Messedaglia, M., Roma, M.: Effect of different patient peak arrivals on an emergency department via discrete event simulation. Simulation 98, 161–181 (2021)

Applications

Reducing the Supply-Chain Nervosity Thanks to Flexible Planning

Nicolas Zufferey, Marie-Sklaerder Vié, and Leandro Coelho

Abstract This study is motivated by a major fast-moving consumer goods company. It considers a 3-echelon supply chain made of one plant, one distribution center and several shops. A flexible planning (FLEXP) approach is proposed to reduce the supply-chain nervosity (e.g., bullwhip effect, peaks of production, peaks of inventories, shortages). FLEXP tries to take the best out of both the well-known PULL and PUSH approaches. Indeed, PULL is characterized by low shortages but high production variations, whereas PUSH is characterized by low production variations but high shortages. FLEXP relies on assigning penalties only if the daily production and the daily inventories fall outside predetermined ideal intervals. More precisely, the following objective functions are minimized in a lexicographic fashion: the shortage at the shop level, the out-of-range production, and the out-of-range inventories (in the distribution center and the shops). As the demand is non-deterministic, a simulation-optimization algorithm is designed. The experiments performed on 20 instances demonstrate the benefit of FLEXP when compared to both PULL and PUSH.

N. Zufferey (B) · M.-S. Vié GSEM, University of Geneva - Uni Mail, Bd du Pont-d’Arve 40, 1211 Geneva, Switzerland e-mail: [email protected] M.-S. Vié e-mail: [email protected] N. Zufferey · L. Coelho Centre Interuniversitaire de Recherche sur les Réseaux d’Entreprise, la Logistique et le Transport, CIRRELT, Québec, Canada e-mail: [email protected] L. Coelho Canada Research Chair in Integrated Logistics, Université Laval, Québec, Canada © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 P. Cappanera et al. (eds.), Optimization and Decision Science: Operations Research, Inclusion and Equity, AIRO Springer Series 9, https://doi.org/10.1007/978-3-031-28863-0_26


1 Introduction

The supply chain considered in this study is made of one plant, one distribution center and several shops. This project was proposed by a fast-moving consumer goods company based in Switzerland and considers a flagship product commonly used all over the world. The company cannot be named because of a non-disclosure agreement. A reorder-point inventory management policy is currently used in the company: every echelon (e.g., a shop) places an order to the upstream echelon (e.g., the distribution center) anytime its available inventory is smaller than or equal to the reorder point. In such a PULL context, the plant organizes its production in order to minimize the shortages along the supply chain. As is common in practice, economic batch quantities (EBQs) are employed. They are determined with respect to the transportation means and contents: typically, a decision maker will round up the ordered quantities to perfectly fill a truck, a pallet, a layer, a case or a box. In such a decentralized inventory-management context, characterized by reorder points and EBQ constraints, a significant variability of the orders (at every level of the supply chain) and of the production workload is likely to occur. This results in the bullwhip effect, for which many studies can be found in the literature [11, 19]. More generally, the reader interested in recent studies and overviews is referred to the following works: inventory management [2, 14, 20], production-distribution [6], production-scheduling [12].

Based on [1, 4, 10], the following ingredients can contribute to the bullwhip effect (and more generally to the nervosity of the supply chain): uncertainty in the lead times along the supply chain; too-large demand forecasts; the demand pattern; and the rounding up of ordered quantities (from a downstream echelon to an upstream echelon) to EBQs, so that, as illustrated in the sketch below, a small demand at the shop level can lead, in some time periods, to a large production order at the plant level. Based on the detailed review of the practices favoring the bullwhip effect presented in [7], one can observe that the following research streams are often encountered: the design of mathematical models explaining the effect and its contributing factors; the development of empirical studies analyzing historical data; and the design of management games. However, there is almost no paper involving simulation-optimization methods for tackling real-world supply-chain problems. Moreover, various studies show that integrated optimization outperforms sequential optimization [5, 16]. In such a context, we propose an original, flexible and integrated simulation-optimization approach relying on assigning penalties only if the daily production and the daily inventories fall outside predetermined ideal intervals.

In order to avoid the bullwhip effect, the well-known PUSH approach consists in producing the same quantity every day. This quantity is typically the average forecasted demand computed over a sufficiently large horizon. The produced units are then pushed downstream along the supply chain to finally reach the shop level. The advantage of the PUSH approach is a smooth production; in return, inventory costs are expected and shortages are likely to occur at the shops.
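The following small example (with hypothetical batch sizes) makes the EBQ rounding mechanism concrete and shows how it can amplify a modest shop demand into a much larger upstream order.

```python
import math

N_ITEMS_PER_CASE = 12    # illustrative values, not the company's data
N_CASES_PER_LAYER = 8

def round_up_to_cases(items_needed):
    return math.ceil(items_needed / N_ITEMS_PER_CASE)    # cases

def round_up_to_layers(cases_needed):
    return math.ceil(cases_needed / N_CASES_PER_LAYER)   # layers

# A shop needing 50 items orders ceil(50/12) = 5 cases (60 items); if the DC
# must in turn order those 5 cases from the plant, it receives
# ceil(5/8) = 1 layer (8 cases, 96 items): a small downstream demand
# inflates the upstream order almost twofold.
print(round_up_to_cases(50), round_up_to_layers(round_up_to_cases(50)))
```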


Considering a real-world problem, this work proposes a flexible planning (FLEXP) method which aims at finding an appropriate balance between the PUSH and PULL approaches. The considered problem and FLEXP are presented in Sect. 2. Because the demand is not deterministic, a simulation procedure is required. Based on experiments conducted on 20 realistic instances, PULL, PUSH and FLEXP are compared in Sect. 3. Beyond outperforming PULL and PUSH, FLEXP can be adapted to other supply chain networks for reducing shortages, inventories and supply chain variations (e.g., the bullwhip effect). Conclusions are drawn in Sect. 4.

2 Considered Problem and Proposed Approach

We consider a supply chain made of three echelons: a single plant P, a single distribution center DC, and tens of shops. For each day t, the following EBQ constraints have to be satisfied. First, the shipments s_{P,DC}^t from the plant P to the DC have to be an integer number of layers. Second, the shipments s_{DC,x}^t from the DC to each shop x have to be an integer number of cases. Storage is forbidden at the plant level. In contrast, it is possible to hold inventory both at the DC level and at the shop level.

The main idea of the proposed flexible planning (FLEXP) approach is to deal with ideal intervals for the daily production and inventories instead of single targeted values. Based on that, the out-of-range production and the out-of-range inventories are penalized. Such an approach is likely to mitigate the bullwhip effect and the supply-chain nervosity (i.e., variations). Let p^t be the number of layers produced during day t. The goal is to organize the production such that p^t belongs to the ideal interval [Pmin, Pmax], and a penalty is incurred for out-of-range production. The following assumptions characterize this study, where D is the expected average daily demand over the planning horizon. These assumptions were validated by the involved company.

• The planning horizon is 100 days.
• The daily demand follows a normal distribution.
• Pmax is 20% above D, whereas Pmin is 20% below D.

The employed lead times (in days) are presented below.

• L(P) = 1 between the production and the availability (at the plant level) for shipment to the DC.
• L(P, DC) = 2 from the plant to the DC.
• L(DC) = 1 for cross-docking through the DC.
• L(DC, x) = 1 from the DC to any shop x.

The following notation is introduced for modeling the considered problem.

• n_l: number of cases per layer.
• n_c: number of items per case.

• d_x^t: expected demand of shop x for day t.
• l_x^t: expected lost sales of shop x for day t.
• i_DC^t: current inventory in the DC for day t.
• i_x^t: current inventory in shop x for day t.

The supply-chain material-flow conservation constraints are presented in Eq. (1) for the plant, in Eq. (2) for the DC, and in Eq. (3) for the shops. These constraints are denoted by (C) for short.

s_{P,DC}^t = p^{t+L(P)}    (1)

i_DC^t = i_DC^{t-1} + n_l · s_{P,DC}^{t+L(P,DC)} − Σ_x s_{DC,x}^{t+L(DC)+L(DC,x)}    (2)

i_x^t = i_x^{t-1} + n_c · s_{DC,x}^t − d_x^t + l_x^t    (3)
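As a side note, the bookkeeping behind Eqs. (1)-(3) can be sketched in a few lines. The sketch below tracks each shipment by its physical departure day and applies the lead times explicitly, which is one possible reading of the superscript conventions above; the dictionary-based state and all names are illustrative, not the authors' implementation.

```python
L_P, L_P_DC, L_DC, L_DC_X = 1, 2, 1, 1  # lead times (in days) from Sect. 2

def roll_one_day(t, p, s_p_dc, s_dc_x, i_dc, i_x, d_x, lost_x, n_l, n_c):
    """Advance the DC and shop inventories from day t-1 to day t."""
    # Eq. (1): the production of day t becomes shippable L(P) days later.
    s_p_dc[t + L_P] = p.get(t, 0)
    # Eq. (2): the DC receives what left the plant L(P,DC) days ago and
    # subtracts what it dispatches to the shops on day t.
    inflow = n_l * s_p_dc.get(t - L_P_DC, 0)
    outflow = sum(s_dc_x[x].get(t, 0) for x in s_dc_x)
    i_dc[t] = i_dc.get(t - 1, 0) + inflow - outflow
    # Eq. (3): each shop receives after cross-docking plus transport,
    # serves its demand, and books a lost sale for any unmet units.
    for x in s_dc_x:
        recv = n_c * s_dc_x[x].get(t - L_DC - L_DC_X, 0)
        stock = i_x[x].get(t - 1, 0) + recv
        lost_x[x][t] = max(d_x[x].get(t, 0) - stock, 0)
        i_x[x][t] = stock - d_x[x].get(t, 0) + lost_x[x][t]
```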

In order to favor flexibility for managing the considered flagship product, instead of assigning a single reorder point to each shop x, we propose to associate a daily interval I(x) = [Imin(x), Imax(x)] with it. On the one hand, Imin(x) is the inventory that x needs to satisfy its daily demand. On the other hand, Imax(x) is the largest desired daily inventory of the product in shop x. More precisely, Imax(x) represents the sum of the assigned shelf capacity and the backroom capacity. The shelf inventory is used first to cover the consumer demand, whereas the backroom inventory refills the shelf. Note that the inventory can exceed Imax(x): each shop x handles other products as well (i.e., not only the considered flagship product), so it is always possible to find room in the backroom if there is too much inventory of the considered product. In that sense, satisfying Imax(x) is not a hard constraint. We propose to manage the production and distribution as follows. Figure 1 illustrates the situation with five shops.

Fig. 1 Network with five shops and intervals [Pmin, Pmax] and [Imin, Imax]


• Design a production plan based on the aggregation of the ideal inventory intervals I(x), while satisfying all the EBQ constraints. The production plan aims to meet the ideal production interval [Pmin, Pmax] every day.
• Push the production downstream to the DC. At this stage, the product can be temporarily stored (but a storage penalty is incurred).
• Push the production downstream to the shops, where out-of-range inventories are penalized.

The following benefits are expected from the proposed flexible planning (FLEXP) approach. On the one hand, the DC should have a more stable response (with respect to the shops) while keeping a relatively low inventory. On the other hand, the plant should have fewer production peaks. Therefore, the bullwhip effect and the overall supply-chain nervosity are likely to be reduced. The originality of this approach is twofold. First, from the plant standpoint, the production variability is not penalized as long as production stays within the interval [Pmin, Pmax]. Second, from the shop point of view, and in contrast with the standard unit storage costs, the stock is not penalized as long as it stays within its associated interval I(x).

For each day t, the considered company aims at minimizing the following objective functions: (f1) the shortage at the shop level (i.e., whenever the quantity is below Imin(x)); (f2) the out-of-range production (i.e., whenever the produced quantity is below Pmin or above Pmax); (f3) the excess of inventory in the shops and at the DC (i.e., whenever the quantity is above Imax(x) in the shops, and whenever there is stock in the DC). Formally, these three functions are presented below.

• f1(t) = Σ_x l_x^t
• f2(t) = max{Pmin − p^t, 0} + max{p^t − Pmax, 0}
• f3(t) = i_DC^t + Σ_x max{i_x^t − Imax(x), 0}

The company has imposed the following natural priorities: f1 > f2 > f3. Moreover, these objectives have to be considered in a lexicographic fashion: no improvement can be implemented for a lower-level objective if it decreases the quality of a higher-level objective. It is worth noticing that lexicographic optimization is increasingly used in practice [8, 15, 17, 18, 22].

As the demand is assumed to be non-deterministic, and in line with [3, 9, 21], a simulation procedure is required. It relies on demand forecasts and a rolling planning window W. The demand of any day t is only revealed on day t, just before making decisions for the following days (for which only demand forecasts are available). As mentioned above, the planning horizon covers 100 days. In line with the recommendations of the company, we fix the size of W so as to allow two production decisions (i.e., involving day 1 and day 2 of W) to reach the shop level before the end of W. Formally, we set |W| = L(P) + L(P, DC) + L(DC) + max_x L(DC, x) + 1 = 6. For each shop x, during the simulation, only the demand of today (i.e., d_x^{t0}) and the demands of the previous days are known. As suggested by the company, we set the forecast d_x^t for the next days as the average daily demand (which is stable, as there is neither trend nor seasonality for the considered product) plus its standard deviation. The resulting FLEXP approach is summarized in Algorithm 1.
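The three objectives translate directly into one-line penalty computations. The sketch below (illustrative names, not the authors' code) evaluates them for a single day, given that day's lost sales, inventories and production:

```python
def f1(lost_sales):
    """Shortage penalty: total lost sales over the shops."""
    return sum(lost_sales.values())

def f2(p, p_min, p_max):
    """Out-of-range production penalty for the produced quantity p."""
    return max(p_min - p, 0) + max(p - p_max, 0)

def f3(i_dc, inv_shops, i_max):
    """Inventory penalty: any DC stock, plus shop stock above Imax(x)."""
    return i_dc + sum(max(i - i_max[x], 0) for x, i in inv_shops.items())

# Example with 3 shops and a production interval of [80, 120] layers.
lost = {"x1": 0, "x2": 2, "x3": 0}
inv = {"x1": 5, "x2": 7, "x3": 4}
imax = {"x1": 6, "x2": 6, "x3": 6}
print(f1(lost), f2(130, 80, 120), f3(0, inv, imax))  # -> 2 10 1
```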


In Algorithm 1, W = [t0, t0 + |W|] denotes the planning window. For each day of the planning window, a 3-step optimization is performed (one step per objective function), and each step is solved with the CPLEX solver (with a time limit of one minute per objective, which meets the computing-time requirements of the considered company).

Algorithm 1 Flexible planning optimization-simulation algorithm

Set t0 ← 1 (initialize the first day)
While (t0 + |W| ≤ 100), do:
• For each shop x, set d_x^{t0} to the just-revealed demand (instead of the forecast)
  1. Minimize F1 = Σ_{t∈W} f1(t) subject to constraints {(C)}; let F1* be the obtained minimum.
  2. Minimize F2 = Σ_{t∈W} f2(t) subject to constraints {(C); F1 ≤ F1*}; let F2* be the obtained minimum.
  3. Minimize F3 = Σ_{t∈W} f3(t) subject to constraints {(C); F1 ≤ F1*; F2 ≤ F2*}.
• For day t0, freeze the shipments and the production
• Set t0 ← t0 + 1 (roll the planning window W to the next day of the horizon)
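The rolling-window loop of Algorithm 1 can be paraphrased as follows, assuming a generic solve(...) wrapper around the MILP solver that returns the optimal value of the requested objective under the given caps; the wrapper, the caps bookkeeping and the two callbacks are illustrative, not the paper's implementation.

```python
HORIZON, WINDOW = 100, 6   # |W| = L(P) + L(P,DC) + L(DC) + max_x L(DC,x) + 1

def flexp(reveal_demand, solve, freeze_day):
    """Rolling-window lexicographic loop of Algorithm 1 (schematic)."""
    t0 = 1
    while t0 + WINDOW <= HORIZON:
        reveal_demand(t0)           # swap the day-t0 forecast for the real demand
        caps = []                   # (objective k, optimum Fk*) pairs, i.e. Fk <= Fk*
        for k in (1, 2, 3):         # F1 shortages > F2 production > F3 inventories
            best = solve(objective=k, window_start=t0, caps=caps)
            caps.append((k, best))  # lock this optimum before the next level
        freeze_day(t0)              # day-t0 production and shipments become final
        t0 += 1
```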

3 Results

The three methods presented above, summarized in Table 1, are compared on 20 realistic instances generated with the involved company. The precise adaptations of PUSH and PULL to the considered problem are described below; recall that the planning horizon is 100 days.

PUSH works as follows. First, the ideal (theoretical) production rate p̂ is computed as the average daily forecasted demand over the entire planning horizon. This value must however be adjusted to an integer number of layers in order to satisfy the EBQ constraints. For this reason, for each day t of the planning horizon, we produce p = ⌈p̂ · t⌉ − Q layers, where Q is the number of layers produced up to day t. This way, the daily production quantity p stays close to p̂. The following downstream deliveries are then performed as soon as possible: the daily produced quantity is shipped from the plant to the DC, and the DC ships to the shops without exceeding their desired inventories (i.e., Imax).

PULL relies on reorder points for the DC and the shops. An order is placed to the upstream level whenever the available inventory falls below the involved reorder point. The plant cannot store any item; for this reason, whenever an order is placed from the DC to the plant, a production batch is launched in the plant. The employed reorder point is computed as (Ddown + σ) · L, based on the following notation (a short numerical sketch of both rules is given after the description of the instances below).

• Ddown: average daily demand from the downstream echelon.
• σ: daily standard deviation of Ddown.
• L: total lead time from the upstream echelon (more precisely, L = L(P) + L(P, DC) for the DC, and L = L(DC) + L(DC, x) for each shop x).

Table 1 Considered methods for the numerical experiments

Method | Priority | Characteristics | Expected advantages | Expected drawbacks
PUSH | Plant | Constant production; downstream decisions | Low production variation; low inventory in the DC | High shortages; inventory in the shops
PULL | Shops | Reorder points; upstream orders | Low shortages; low inventory in the shops | High production variation; inventory in the DC
FLEXP | Network | Ideal production interval; ideal inventory intervals; lexicographic optimization | Low shortages; low inventories (DC, shops); low production variation | None

Table 2 presents the 20 instances (labeled Inst1 to Inst20), generated with the help of the involved company in order to capture various possible real situations. For each instance, we indicate sequentially: its number of shops (N ∈ {20, 50}); the interval to which its average daily demand (computed over the shops) belongs, along with the associated standard deviation σ; the desired inventories at the shop level; the number of items per case; and the number of cases per layer. The aggregated results (over the planning horizon of 100 days) obtained by PUSH, PULL and FLEXP are presented in Table 3. For each instance and each method, the following values are given.

• F1: shortage percentage with respect to the average daily demand over all the shops.
• F2: out-of-range production percentage.
• F3: percentage of excess stock with respect to the total storage capacity that is not penalized. The latter is the sum of all the desired inventories in the shops (knowing that the desired inventory in the DC is always zero).

The following computational environment characterizes the experiments: each algorithm was implemented in C++ under Linux and run on a 3.4 GHz Intel Quad-core i7 processor with 8 GB of DDR3 RAM. For each planning window W, each method provides its solution within seconds (and CPLEX always finds an optimal solution within the allowed computing time of one minute per objective).
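To make the two baselines concrete, here is a minimal numerical sketch of the PUSH rounding rule and the PULL reorder point defined above; the demand and lead-time figures are illustrative, not taken from the instances.

```python
import math

def push_quantity(t, p_hat, produced_so_far):
    """PUSH: keep cumulative production close to p_hat * t (in layers)."""
    return math.ceil(p_hat * t) - produced_so_far

def reorder_point(d_down, sigma, lead_time):
    """PULL: reorder point (Ddown + sigma) * L, used for the DC and the shops."""
    return (d_down + sigma) * lead_time

# Example: an average demand of 7.3 layers/day yields integer batches 8, 7, 7, 8.
q = 0
for t in range(1, 5):
    batch = push_quantity(t, 7.3, q)
    q += batch
    print(t, batch)
print(reorder_point(d_down=40, sigma=12, lead_time=3))  # -> 156
```

The running correction through the cumulative quantity guarantees that total production never drifts more than one layer away from p̂ · t, which is exactly what keeps PUSH production smooth.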


Table 2 Presentation of the 20 instances

Instances | Shops | Daily average demand per shop | Std. dev. σ | Desired shop inventory
Inst1-Inst5 | 20 | [6, 12] cases | [50, 100]% | 6 cases (2 for shelf; 4 for backroom)
Inst6-Inst10 | 20 | [6, 12] cases | [100, 150]% | 6 cases (2 for shelf; 4 for backroom)
Inst11-Inst15 | 50 | [1, 4] cases | [50, 100]% | 3 cases (1 for shelf; 2 for backroom)
Inst16-Inst20 | 50 | [1, 4] cases | [100, 150]% | 3 cases (1 for shelf; 2 for backroom)

Number of items per case / number of cases per layer, per instance (the pattern repeats every five instances): Inst1, Inst6, Inst11, Inst16: 6 / 14; Inst2, Inst7, Inst12, Inst17: 8 / 10; Inst3, Inst8, Inst13, Inst18: 12 / 14; Inst4, Inst9, Inst14, Inst19: 16 / 10; Inst5, Inst10, Inst15, Inst20: 20 / 12.

Table 3 Results with PULL, PUSH and FLEXP (all values in %)

Instance | PULL F1 | PULL F2 | PULL F3 | PUSH F1 | PUSH F2 | PUSH F3 | FLEXP F1 | FLEXP F2 | FLEXP F3
Inst1 | 0.44 | 46.67 | 68.08 | 22.81 | 0 | 46.13 | 0.07 | 0 | 0
Inst2 | 0.73 | 24 | 14.72 | 1.46 | 0 | 0.09 | 0.11 | 0 | 0
Inst3 | 0.63 | 59 | 30.9 | 19.19 | 0 | 17.12 | 0.17 | 0 | 0
Inst4 | 0.55 | 54.5 | 18.79 | 16.28 | 0 | 10.37 | 0.07 | 0.01 | 0.17
Inst5 | 0.55 | 75 | 18.12 | 18.82 | 0 | 11.5 | 0.09 | 0.01 | 0.5
Inst6 | 1.53 | 77 | 89.22 | 28.71 | 0 | 53.61 | 0.74 | 0.04 | 0.82
Inst7 | 0.88 | 8 | 25.76 | 4.66 | 0 | 3.11 | 0.02 | 0 | 0
Inst8 | 1.03 | 70 | 43.71 | 27.97 | 0 | 28.69 | 0.2 | 0.02 | 0.82
Inst9 | 1.24 | 42.33 | 36.05 | 30.69 | 0 | 23.92 | 0.64 | 0 | 0.17
Inst10 | 1.28 | 70.5 | 26.81 | 27.92 | 0 | 15.95 | 0.51 | 0.01 | 0.5
Inst11 | 0.43 | 105 | 14.45 | 4.67 | 0 | 0 | 0.28 | 0.11 | 0.63
Inst12 | 0.34 | 15 | 4.83 | 6.85 | 0 | 0 | 0 | 0 | 0
Inst13 | 0.35 | 110 | 7.5 | 4.8 | 0 | 0 | 0.23 | 0.04 | 0.42
Inst14 | 0.42 | 113 | 5.51 | 5.14 | 0 | 0 | 0.31 | 0.03 | 0.25
Inst15 | 0.59 | 49.5 | 5.27 | 4.17 | 0 | 0 | 0.43 | 0.01 | 0.3
Inst16 | 1.18 | 47.5 | 20.94 | 5.94 | 0 | 0 | 0.81 | 0.02 | 0.21
Inst17 | 0.46 | 22 | 6.13 | 5.22 | 0 | 0 | 0.03 | 0 | 0
Inst18 | 0.8 | 126 | 10.03 | 4.75 | 0 | 0 | 0.67 | 0.02 | 0.21
Inst19 | 0.99 | 48 | 8.04 | 4.75 | 0 | 0 | 0.48 | 0 | 0
Inst20 | 0.79 | 57.5 | 6.56 | 4.2 | 0 | 0 | 0.56 | 0 | 0.12
Average | 0.76 | 61.03 | 23.07 | 12.45 | 0 | 10.52 | 0.32 | 0.02 | 0.26

The following average observations can be made; they are in line with the expectations summarized in Table 1. Note that, unsurprisingly, the shortage penalty F1 increases with σ.

• Performance of PULL. The shortage penalty is small (F1 = 0.76% of the demand). The out-of-range production penalty is high (F2 = 61.03%). The inventory penalty is significant (F3 = 23.07%, mainly incurred at the DC).
• Performance of PUSH. It does not tune its production with respect to the demand pattern and thus has the largest shortage penalty (F1 = 12.45% of the demand). It has no out-of-range production (F2 = 0). The inventory penalty is important (F3 = 10.52%, mostly incurred in the shops). Interestingly, F1 decreases from approximately 25% with N = 20 shops to 5% with N = 50 shops. A possible explanation is that the more shops there are, the better the production can be dispatched among them (as there are more dispatching possibilities).
• Performance of FLEXP. All the penalties are small: the shortage penalty is F1 = 0.32% of the demand, the out-of-range production is F2 = 0.02%, and the costly inventory is F3 = 0.26% of the free-of-charge capacity. Moreover, observing each


instance individually, the bullwhip effect is removed (no F2 penalty is over 0.11%) while almost always avoiding shortage/inventory penalties (no F1 and F3 penalty exceeds 0.82%).

4 Conclusion

The production and distribution of a company's flagship product are optimized in this work. The following network is considered: one plant, one distribution center (DC), and tens of shops. Three types of penalties have to be minimized lexicographically: shortage (at the shop level), out-of-range production (at the plant level), and out-of-range inventory (at the shop and DC levels). The well-known PULL and PUSH approaches are compared with the proposed flexible planning method, denoted FLEXP. The latter is a compromise between PULL and PUSH, based on ideal intervals for the production (at the plant) and for the inventory (at the DC and at the shops). The obtained results show the superiority of FLEXP over PUSH and PULL, as its penalties are always close to zero. It is an appropriate method for controlling the bullwhip effect and reducing supply chain variations. Future works include the consideration of more complex networks (e.g., with several DCs) and of different demand patterns (e.g., peaks of demand resulting from promotions, trend, seasonality).

References 1. Agrawal, S., Sengupta, R., Shanker, K.: Impact of information sharing and lead time on bullwhip effect and on-hand inventory. Eur. J. Oper. Res. 192, 576–593 (2009) 2. Axsäter, S.: Inventory Control. Springer (2015) 3. Bredstrom, D., Lundgren, J., Ronnqvist, M.: Supply chain optimization in the pulp mill industry - IP models, column generation and novel constraint branches. Eur. J. Oper. Res. 156, 2–22 (2004) 4. Chen, F., Drezner, Z., Ryan, J.K., Simchi-Levi, D.: Quantifying the bullwhip effect in a simple supply chain: the impact of forecasting, lead times, and information. Manage. Sci. 3, 436–443 (2000) 5. Darvish, M., Coelho, L.C.: Sequential versus integrated optimization: production, location, inventory control, and distribution. Eur. J. Oper. Res. 268, 203–214 (2018) 6. Fahimnia, B., Farahani, R., Marian, R., Luong, L.: A review and critique on integrated production-distribution planning models and techniques. J. Manuf. Syst. 32, 1–19 (2013) 7. Geary, S., Disney, S.M., Towill, D.R.: On bullwhip in supply chains - Historical review, present practice and expected future impact. Int. J. Prod. Econ. 101, 2–18 (2006) 8. Hertz, A., Schindl, D., Zufferey, N.: Lower bounding and tabu search procedures for the frequency assignment problem with polarization constraints. 4OR 3(2), 139–161 (2005) 9. Kostin, A., Guillen-Gosalbez, G., Mele, F., Bagajewicz, M., Jimenez, L.: A novel rolling horizon strategy for the strategic planning of supply chains. Application to the sugar cane industry of Argentina. Comput. Chem. Eng. 35, 2540–2563 (2011) 10. Jaksic, M., Rusjan, B.: The effect of replenishment policies on the bullwhip effect: a transfer function approach. Eur. J. Oper. Res. 184, 946–961 (2008)


11. Lee, H.L., Padmanabhan, V., Whang, S.: The bullwhip effect in supply chains. Sloan Manag. Rev. 93–102 (1997) 12. Pinedo, M.: Scheduling: Theory, Algorithms, and Systems. Springer (2016) 13. Respen, J., Zufferey, N., Wieser, P.: Three-level inventory deployment for a luxury watch company facing various perturbations. J. Oper. Res. Soc. 68(10), 1195–1210 (2017) 14. Toomey, J.: Inventory Management: Principles, Concepts and Techniques. Springer Science & Business Media (2000) 15. Thevenin, S., Zufferey, N.: Learning Variable Neighborhood Search for a scheduling problem with time windows and rejections. Discret. Appl. Math. 261, 344–353 (2019) 16. Thevenin, S., Zufferey, N., Glardon, R.: Model and metaheuristics for a scheduling problem integrating procurement, sale and distribution. Ann. Oper. Res. 259(1), 437–460 (2017) 17. Thevenin, S., Zufferey, N., Potvin, J.-Y.: Makespan minimisation for a parallel machine scheduling problem with preemption and job incompatibility. Int. J. Prod. Res. 55(6), 1588–1606 (2017) 18. Thevenin, S., Zufferey, N., Potvin, J.-Y.: Graph multi-coloring for a job scheduling application. Discret. Appl. Math. 234, 218–235 (2018) 19. Wang, X., Disney, S.M.: The bullwhip effect: progress, trends and directions. Eur. J. Oper. Res. 250, 691–701 (2016) 20. Wild, T.: Best Practice in Inventory Management, 3rd edn. Routledge (2017) 21. You, F., Wassick, J., Grossmann, I.: Risk management for a global supply chain planning under uncertainty: models and algorithms. AIChE J. 55, 931–946 (2009) 22. Zufferey, N.: Tabu Search Approaches for Two Car sequencing problems with smoothing constraints. Metaheuristics for Production Systems, pp. 167–190. Springer (Cham) (2016)

Design Forward and Reverse Closed-Loop Supply Chain to Improve Economic and Environmental Performances E. P. Mezatio, M. M. Aghelinejad, L. Amodeo, and I. Ferreira

Abstract This paper focuses on modelling and solving a four-level closed-loop supply chain (CLSC) management problem, from the supplier to the retailers and the recycling center. A new mixed-integer linear programming model is developed. This model can be used as a decision-support tool for any industry wishing to improve its economic and environmental performance. In this study, we specifically address the case of the textile industry. The results show that, by recycling and reusing recycled resources, the textile industry can reduce its CO2 emissions by up to 42.5%, for an additional investment of 34.19%, with a carbon tax of 86 € per ton of emissions. Without recycling and for the same carbon tax, industries would invest 68.73% more for an emissions reduction of only 12.74%.

1 Introduction Supply chain management is a set of processes consisting of the purchase of raw materials, the production of finished products and their distribution to consumers. This management becomes more important with a strong increase in the number of actors in the supply chain. Similarly, current environmental concerns, related to climate change, have led many countries to take action to reduce carbon emissions by adopting more stringent measures or setting emission targets. Nowadays, the eco-responsible image of a company represents a new standard for customers to E. P. Mezatio (B) · M. M. Aghelinejad · L. Amodeo University of Technology of Troyes (UTT), Troyes, France e-mail: [email protected] M. M. Aghelinejad e-mail: [email protected] L. Amodeo e-mail: [email protected] E. P. Mezatio · I. Ferreira Institut français du textile et de l’habillement (IFTH), Troyes, France e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 P. Cappanera et al. (eds.), Optimization and Decision Science: Operations Research, Inclusion and Equity, AIRO Springer Series 9, https://doi.org/10.1007/978-3-031-28863-0_27


purchase a product [1]. Indeed, achieving a better balance between economic and environmental efficiency, requires a better design of the logistics network [2]. In this study, a mathematical model is presented for the management of a closedloop supply chain integrating all the actors of the forward and reverse logistic, with resource saving and reuse of the recycled materials. The contributions of this paper are: (1) presentation of a more complete forward and reverse supply chain with material reuse; (2) new mixed integer linear programming model (MILP) integrating both economic and environmental performance indicators, base on carbon emissions and a carbon tax mechanism; (3) integration of recycling with material reuses; (4) the design of a new benchmark for the textile and clothing industry based on several data from the existing literature. This paper is organise as follows: Sect. 2 will present a state of the art on closedloop supply chain management, Sect. 3 will describe the problem addressed and the mathematical optimization model. Section 4 present the performance of the proposed model on numerical experiments in a real case study of the textile industry. Finally, Sect. 5 presents conclusions and future work directions.

2 Literature Review

To address the closed-loop supply chain management problem, a mixed-integer nonlinear programming approach was proposed by [3], with multiple facilities and stochastic product-return quality. They analysed the impact of expenses, returns, quality uncertainty, and demand, with the aim of minimizing the transportation and refurbishment costs of products. Moreover, most of the literature treats closed-loop supply chains (CLSC) only partially, focusing either on a manufacturer-retailer or on a supplier-manufacturer configuration; very few papers study closed-loop supply chains with more than two tiers. With the objective of maximizing profit, responsiveness, and customer quality, a multi-objective model for the design of a multi-tier forward/reverse logistics network was proposed by [4]. In transportation, to solve the vehicle routing problem for urban freight distribution, [5] developed an optimization model to analyse the challenges of vehicle routing. In the textile industry, [17] developed a mathematical model to optimize the configuration of the entire logistics network by optimizing the reverse logistics route of the product in terms of transportation cost. Other works couple the triple-bottom-line aspect with the management of the reverse logistics network through the formulation of a mathematical model [18].

The work presented above has limitations related to the insufficient integration of environmental aspects in the design of the closed-loop supply chain. To address this, [6] proposed a model for sustainable supply chain design integrating carbon emissions and carbon-trading issues. [12] proposed a mathematical model and a particle swarm optimization (PSO) algorithm for route planning in a forward and reverse logistics network for fresh-product transportation, minimizing the carbon footprint. Their method is efficient on the economic aspects, but it does not take into account all the actors in the supply chain and does not consider


the carbon life cycle. In the textile and apparel industry, [8] presents this industry as one of the most polluting. The drive of the fashion industry to produce a wide range of products in a short time and at low cost, with a short product life cycle, is responsible for a large amount of clothing waste. Interest in recycling has been growing, and customer perception of recycled clothing is also evolving positively. It is then necessary to prepare for the uncertainty surrounding recycled products by developing a closed-loop supply chain plan that takes into account the uncertainty of reverse logistics. To address this, [16] proposes a mixed-integer mathematical model to evaluate the economic, social, and environmental contribution of reverse logistics (recycling) to textile waste management, but without integrating the forward logistics aspect and carbon emissions.

Most articles in the literature use mathematical modelling to address the issue of forward and reverse supply chain management. In addition, all the above-mentioned works have partially and separately addressed the economic and environmental aspects of the forward and reverse supply chain in different fields such as transportation, electronics, food and textile. Indeed, each article deals separately with carbon emissions, recycling, and the joint consideration of forward and reverse logistics. Integrating all levels and actors in the supply chain could improve the economic and environmental performance of the entire chain. Given the scarcity of works in the literature dealing with forward and reverse logistics, and to the best of our knowledge, there are no articles in the field of the textile and clothing industry that jointly address economic and environmental issues. The next section presents the modelling of the approach discussed in this article.

3 Problem Statement and Modelling

The proposed supply chain network includes a set of suppliers, factories, warehouses, customers, a collecting, sorting and recycling point, and a second-hand market. The model is single-objective, multi-product, multi-period and takes subcontracting into account. For the forward logistics, raw materials supply the manufacturing centers. The products are then produced in the internal plants or subcontracted, transported to the warehouses, and finally delivered to the customers to satisfy their demands. For the reverse logistics, prefabricated products from the production centers and unsold products are sent to the collection areas. Their waste then goes through the sorting centers, after which the outcomes can take three different directions: landfill, second-hand markets and recycling centers (see Fig. 1).

Assumptions The mathematical model takes into account the following assumptions: (1) all demands are deterministic, variable in each period and satisfied by the end of the planning horizon; (2) the carbon tax is defined under several scenarios; (3) the model decides on the quantities of raw materials to be ordered, depending on the carbon footprint of the suppliers; (4) the different costs are assumed to be non-variable throughout the


Fig. 1 Closed-loop supply chain

Table 1 Sets and indices

S: set of suppliers of raw materials, s ∈ S; R: set of raw materials, r ∈ R; V: set of internal producers, v ∈ V; U: set of subcontractor producers, u ∈ U; N: set of customers/retailers, n ∈ N; P: set of products, p ∈ P; W: set of warehouses, w ∈ W; T: set of periods, t ∈ T; L: set of recycling points, l ∈ L; J: set of second-hand retailers, j ∈ J

planning horizon; (5) the sorting, recycling and storage centers are merged at the same location.

Mathematical modeling The description of the model presented in this paper is defined as follows (Table 1):

Parameters

• Cp_r^s / Cp_p^v, Cp_p^u / Cp_p^w: supplier capacity (in tons) of raw material r for supplier s; production capacity of product p for producer v and for subcontractor u; storage capacity of product p in warehouse w.
• Pc_r^s, Pc_r^u, Pc_r^l: variable selling price per unit of raw material r at supplier s, at subcontractor u and at recycling center l (in €).
• Sc_p^v, Sc_p^u, s_p^v: variable production cost of product p at producer v resp. subcontractor u, and setup cost per product p at producer v.
• Sc_p^l, h_p^w: selling price of second-hand product p coming from recycling center l, and holding cost of product p in warehouse w, respectively.
• Rcs_p^v, Rcs_p^n / Rcr_p^v, Rcr_p^n / Rfc_p^n: sorting cost for the waste of product p from producer v resp. customer n; recycling cost, after sorting, of the waste of product p from producer v resp. customer n; refurbishment cost, after sorting, of the waste of product p from customer n.


• d_sv, d_vw, d_uw, d_wn: distances (in km) from supplier s to plant v, from plant v resp. subcontractor u to warehouse w, and from warehouse w to retailer n.
• d_nl, d_vl, d_lj: distances from customer n to recycling center l, from plant v to recycling center l, and from recycling center l to second-hand market j.
• Tc_r^{s,v}, Tc_p^{v,w}, Tc_p^{u,w}, Tc_p^{w,n}: variable transportation costs (in € per unit of raw material r resp. product p) from supplier s to producer v, from producer v (resp. subcontractor u) to warehouse w, and from warehouse w to customer n.
• Tc_p^{n,l}, Tc_p^{l,j}, Tc_p^{l,v}: variable transportation costs (in € per kg of waste of product p) from customer n to recycling center l, from recycling center l to second-hand market j, and from recycling center l to plant v.
• e_r^s, e_r^u, e_p^v, e_p^u / e^tr: life-cycle CO2 emissions, in kg per unit of raw material (resp. for supplier and subcontractor), per unit of product (resp. for producer and subcontractor), and per transportation mode tr.
• Oc_w^t, Oc_l^t: opening cost of warehouse w resp. recycling center l.
• α_p^v, β_p^n: waste rate of product p for producer v resp. customer n.
• cp: carbon tax.
• cr_rp: conversion rate of raw material r to product p.
• Dm_p^{n,t}: demand of product p by customer n at period t (in units of product).

Decision variables

– Q_r^{s,v,t}, Q_r^{u,t}: amounts of raw material r delivered in period t by supplier s to producer v, resp. by subcontractor u.
– Q_p^{v,t}, Q_p^{u,t}: amounts of product p produced by producer v, resp. subcontractor u, to satisfy the demands of period t.
– TS_r^{s,v,t}, TF_p^{v,w,t}, TF_p^{u,w,t}, TW_p^{w,n,t}: amounts of raw material r transported from supplier s to producer v; amounts of product p transported from producer v, resp. subcontractor u, to warehouse w; and amounts of products transported from warehouse w to retailer n, at period t.
– TDm'_p^{n,T}, TF'_p^{v,l,T}: amounts of waste of product p transported over the planning horizon (resp. from customers n and from producers v) to the recycling center l.
– TF'_p^{j,T+1}, TS'_r^{v,T+1}: amounts of recycled products p and raw materials r transported, over the horizon, to market j and to producers v for horizon T + 1.
– Q'_p^{v,l,T}, Q'_p^{n,l,T}: amounts of waste of product p delivered over the planning horizon to recycling center l (resp. from producers v and from customers n).
– Q'_r^{v,T+1}, Q'_r^{n,T+1}, Q'_p^{j,T+1}: amounts of recycled raw material r, resp. product p, made available over the planning horizon for the producers, resp. for market j, in horizon T + 1.
– SI_s^t, SF_v^t, SS_u^t, SW_w^t: binary variables equal to 1 if supplier s, producer v, subcontractor u or warehouse w is selected in period t, 0 otherwise.
– Y_p^{v,t}, Y_p^{u,t}: binary variables equal to 1 if producer v resp. subcontractor u is selected to produce in period t, 0 otherwise.

Model formulation In this paper, the objective function finds the optimal solution that minimizes the overall cost presented in expression (1).


Table 2 The capacity constraints (∀p ∈ P, ∀v ∈ V, ∀u ∈ U, ∀w ∈ W, ∀s ∈ S, ∀r ∈ R, ∀t ∈ T)

Σ_{v∈V} Q_r^{s,v,t} ≤ Cp_r^s × SI_s^t;
Q_p^{v,t} ≤ Cp_p^v × SF_v^t;
Q_p^{u,t} ≤ Cp_p^u × SS_u^t;
Σ_{v∈V} TF_p^{v,w,t} + Σ_{u∈U} TF_p^{u,w,t} ≤ Cp_p^w × SW_w^t;
Q_p^{v,t} ≤ M × Y_p^{v,t};  Y_p^{v,t} ≤ Q_p^{v,t};
Q_p^{u,t} ≤ M × Y_p^{u,t};  Y_p^{u,t} ≤ Q_p^{u,t}

min z = PC + TC + MC + WC + RC    (1)

where PC, represented by (2), covers the raw material supply costs; TC, represented by (3), covers the transportation costs, based on the distances travelled; MC, represented by Eq. (4), covers the manufacturing costs of the internal and subcontracted production processes; WC, represented by Eq. (5), covers the holding costs; and RC, represented by Eq. (6), covers the overall costs generated by the sorting and recycling processes and the selling of used products and recycled raw materials. All these costs include the cost of carbon emissions at each level of the closed-loop supply chain.

PC = Σ_{r,s,v,t} Q_r^{s,v,t} (Pc_r^s + cp × e_r^s) + Σ_{r,u,t} Q_r^{u,t} (Pc_r^u + cp × e_r^u)    (2)

TC = Σ_{r,s,v,t} d_sv × TS_r^{s,v,t} (Tc_r^{s,v} + cp × e^tr) + Σ_{p,v,w,t} d_vw × TF_p^{v,w,t} (Tc_p^{v,w} + cp × e^tr) + Σ_{p,u,w,t} d_uw × TF_p^{u,w,t} (Tc_p^{u,w} + cp × e^tr) + Σ_{p,w,n,t} d_wn × TW_p^{w,n,t} (Tc_p^{w,n} + cp × e^tr) + Σ_{p,n} d_nl × TDm'_p^{n,l,T} (Tc_p^{n,l} + cp × e^tr) + Σ_{p,v} d_vl × TF'_p^{v,l,T} (Tc_p^{v,l} + cp × e^tr) + Σ_{p,j} d_lj × TF'_p^{j,T+1} (Tc_p^{l,j} + cp × e^tr) + Σ_{r,v} d_lv × TS'_r^{v,T+1} (Tc_p^{l,v} + cp × e^tr)    (3)

MC = Σ_{p,v,t} Q_p^{v,t} (Sc_p^v + cp × e_p^v) + Σ_{p,u,t} Q_p^{u,t} (Sc_p^u + cp × e_p^u) + Σ_{p,v,t} s_p^v × SF_v^t    (4)

WC = Σ_{p,u,v,w,t} h_p^w (Q_p^{v,t} + Q_p^{u,t}) + Σ_{w,t} Oc_w^t × SW_w^t    (5)

RC = Σ_{p,v,l,t} Rcs_p^v × Q'_p^{v,l,T} + Σ_{p,v,l,t} α_p^v × Q'_p^{v,l,T} (Rcr_p^v + cp × e_p^v) + Σ_{p,n,l,t} Rcs_p^n × Q'_p^{n,l,T} + Σ_{p,n,l,t} β_p^n × Q'_p^{n,l,T} (Rcr_p^n + cp × e_p^n) + Σ_{p,n,l,t} (1 − β_p^n) Q'_p^{n,l,T} × Rfc_p^n − Σ_{r,l,t} Pc_r^l × (Q'_r^{v,T+1} + Q'_r^{n,T+1}) + Σ_{p,t} Sc_p^l × Q'_p^{j,T+1} + Oc_l^t    (6)

(∀p ∈ P, ∀v ∈ V, ∀u ∈ U, ∀j ∈ J, ∀l ∈ L, ∀t ∈ T, ∀n ∈ N, ∀r ∈ R, ∀s ∈ S, ∀w ∈ W)
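Every cost term above follows the same pattern: quantity × (unit cost + carbon tax × unit emissions). The short sketch below evaluates a PC-style procurement term under the €86/ton scenario; the quantities, prices and emission factors are invented for illustration.

```python
def procurement_cost(orders, price, emissions, carbon_tax):
    """PC-style term: sum of Q * (Pc + cp * e) over the ordered raw materials.

    `orders` maps (raw material, supplier) pairs to ordered quantities;
    `price` and `emissions` give the unit price (in euros) and the unit
    life-cycle CO2 emissions (in kg) per pair; `carbon_tax` is cp in euros/kg.
    """
    return sum(q * (price[k] + carbon_tax * emissions[k])
               for k, q in orders.items())

orders = {("cotton", "s1"): 1000, ("cotton", "s2"): 400}
price = {("cotton", "s1"): 2.1, ("cotton", "s2"): 1.9}
emis = {("cotton", "s1"): 0.8, ("cotton", "s2"): 1.2}
print(procurement_cost(orders, price, emis, carbon_tax=0.086))  # 86 €/ton = 0.086 €/kg
```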

Constraints The model has 23 constraints, divided into three categories: capacity, flow equilibrium and non-negativity (Tables 2, 3 and 4); a small solver sketch after Table 4 illustrates how such constraints are encoded.

4 Numerical Analysis

In order to evaluate the effect of different parameters, such as the carbon-tax price per ton, the variable customer demands and the contribution of recycling to the next planning horizon, a reference problem generator is created.

Table 3 The flow equilibrium constraints (∀p ∈ P, ∀v ∈ V, ∀u ∈ U, ∀w ∈ W, ∀n ∈ N, ∀r ∈ R, ∀s ∈ S, ∀l ∈ L, ∀j ∈ J, ∀t ∈ T)

Σ_{p∈P} Q_p^{v,t} × cr_rp = Σ_{s∈S} Q_r^{s,v,t} + Q'_r^{v,T+1} + Q'_r^{n,T+1};
Σ_{p∈P} (Q_p^{u,t} × cr_rp) = Q_r^{u,t};
Q_p^{v,t} = Σ_{w∈W} TF_p^{v,w,t};
Σ_{v∈V} TF_p^{v,w,t} + Σ_{u∈U} TF_p^{u,w,t} = Σ_{n∈N} TW_p^{w,n,t};
Σ_{v∈V} Q_p^{v,t} + Σ_{u∈U} Q_p^{u,t} = Σ_{n∈N} Dm_p^{n,t};
Q'_r^{v,T+1} = Σ_{v,p} α_p^v × Q'_p^{v,l,T} × cr_rp;
Q'_r^{n,T+1} = Σ_{n,p} β_p^n × Q'_p^{n,l,T} × cr_rp;
Q_r^{s,v,t} = TS_r^{s,v,t};
Σ_{w∈W} TW_p^{w,n,t} = Dm_p^{n,t};
Q_p^{u,t} = Σ_{w∈W} TF_p^{u,w,t};
TF'_p^{v,l,T} = Q'_p^{v,l,T};
TDm'_p^{n,l,T} = Q'_p^{n,l,T};
Q'_p^{j,T+1} = Σ_{n,p} (1 − β_p^n) × Q'_p^{n,l,T}

Table 4 The non-negativity and integrality restrictions on the decision variables

0 ≤ α_p^v ≤ 1;  0 ≤ β_p^n ≤ 1;  SI_s^t ∈ {0, 1};  SS_u^t ∈ {0, 1};  SF_v^t ∈ {0, 1};  SW_w^t ∈ {0, 1};  Y_p^{u,t} ∈ {0, 1};  Y_p^{v,t} ∈ {0, 1}
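To illustrate how the tabled constraints translate into a solver model, the toy sketch below encodes one capacity-plus-selection constraint (Table 2 pattern) and one demand-balance constraint (Table 3 pattern) with the PuLP library; the two-supplier instance is invented, and the fragment covers only a tiny part of the actual 23-constraint MILP.

```python
from pulp import LpBinary, LpMinimize, LpProblem, LpVariable, lpSum

# Toy data: two suppliers of one raw material, one period, demand of 120 units.
suppliers = ["s1", "s2"]
cap = {"s1": 100, "s2": 80}            # Cp-style capacities
unit = {"s1": 2.1688, "s2": 2.0032}    # Pc + cp * e, as in Eq. (2)
opening = 50                           # SI-style selection cost

model = LpProblem("toy_clsc_fragment", LpMinimize)
q = {s: LpVariable(f"q_{s}", lowBound=0) for s in suppliers}
si = {s: LpVariable(f"si_{s}", cat=LpBinary) for s in suppliers}

model += lpSum(unit[s] * q[s] + opening * si[s] for s in suppliers)  # objective
for s in suppliers:
    model += q[s] <= cap[s] * si[s]    # capacity + selection (Table 2 pattern)
model += lpSum(q.values()) == 120      # demand balance (Table 3 pattern)

model.solve()
print({s: q[s].value() for s in suppliers})  # expected: {'s1': 40.0, 's2': 80.0}
```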

This generator is based on data collected from [9–11, 14, 15]. The carbon-tax values (46, 66, 79, 86, 100, 150 €/ton of CO2) represent the prices on the carbon market for the years 2018, 2019, 2020, 2021 and 2050, plus a case where the carbon tax becomes very large (150 €/ton of CO2). The closed-loop supply chain considered for the experiments covers a planning horizon of 12 periods (T = 12), with 20 raw material suppliers (S = 20), 6 producers (V = 6), 4 warehouses (W = 4), 10 customers (N = 10), 1 collection, sorting and recycling center (L = 1), 9 raw materials (R = 9), 17 different products (P = 17), 15 subcontracted producers (U = 15), and 1 second-hand shop (J = 1). The market demands are taken from 2019 market data for France [15]. The mathematical models were solved with the Cplex 20.1.0 solver on a 2.3 GHz Core i5 PC.

In Table 5, the costs without recycling are denoted PC, MC, WC, TC and RC, and the costs with recycling are denoted r_PC, r_MC, r_TC and r_RC. Figure 2 shows the emissions without recycling, denoted Procurement_Emiss, Manufac_Emiss and Trans_Emiss, and the emissions with recycling, denoted r_Procu_Emiss, r_Manuf_Emiss and r_Trans_Emiss.

Impact of carbon tax and recycling on each activity of the supply chain Table 5 shows that the procurement costs (PC) are the most important, followed by the production (MC) and transport (TC) costs, for each carbon-tax scenario. It shows that the integration of recycling and the carbon tax reduces the costs of the different activities in the supply chain by up to 38%, thanks to the reuse of raw materials and recycled products and the limitation of travel distances. The recycling costs represent on average less than 3% of the overall supply chain cost.


Table 5 Cost contribution per activity in the overall supply chain cost at various carbon taxes (CP): case without vs. with recycling

CP (€/ton CO2) | 0 | 46 | 66 | 79 | 86 | 100 | 150
PC (M€) | 46.35 | 115.32 | 145.92 | 164.35 | 173.97 | 192.94 | 258.55
r_PC (M€) | 42.59 | 57.89 | 63.43 | 66.39 | 67.6 | 70.04 | 76.06
MC+WC (M€) | 28.89 | 36.81 | 40.51 | 42.81 | 44.11 | 46.6 | 55.46
r_MC+WC (M€) | 27.85 | 30.0 | 30.97 | 31.56 | 31.89 | 32.53 | 34.91
TC (M€) | 6.59 | 14.24 | 17.56 | 19.58 | 20.72 | 23.08 | 30.56
r_TC (M€) | 4.66 | 10.09 | 12.5 | 13.94 | 14.86 | 16.43 | 21.75
RC (M€) | 0.72 | 0.79 | 0.83 | 0.85 | 0.86 | 0.88 | 0.96

Fig. 2 CO2 emissions per activity in the supply chain (without and with recycling)

Table 6 Variation in overall costs without recycling (Z_cost) and with recycling (r_Z_cost) versus variation in overall emissions without recycling (Z_emiss) and with recycling (r_Z_emiss)

CP (€/ton CO2) (%) | 0 | 46 | 66 | 79 | 86 | 100 | 150
Z_cost | – | +50.81 | +59.88 | +63.91 | +65.73 | +68.84 | +76.25
r_Z_cost | – | +23.23 | +29.61 | +32.75 | +34.19 | +36.75 | +43.28
Z_emiss | – | −6.57 | −10.09 | −10.41 | −12.74 | −13.0 | −18.54
r_Z_emiss | – | −21.6 | −32.44 | −34.39 | −42.5 | −42.88 | −60.21

Figure 2 shows that, for each carbon-tax scenario, the procurement emissions (Procurement_Emiss) are the most important in the supply chain, followed by the production (Manufac_Emiss) and transport (Trans_Emiss) emissions. It shows that the integration of recycling can reduce the procurement emissions by up to 25%, thanks to the reuse of recycled raw materials, which is less polluting and helps save virgin resources.

Overall costs and emissions analysis without recycling vs. with recycling Table 6 shows the overall cost and emissions of the supply chain. This table shows that an additional investment (in the recycling case) of less than 34.19% would improve


Fig. 3 Overall cost and CO2 emissions of the supply chain (without and with recycling)

emissions by up to 42.5%, whereas the case without recycling, for a cost increase of 65.73%, would result in a reduction of only 12.74% in overall emissions, considering the €86 carbon-tax scenario, which is the expected carbon-tax price for the year 2022 (see Fig. 3 and Table 6).

5 Conclusions

This paper addressed the closed-loop supply chain problem, taking into account both economic and environmental issues. It presented a closed-loop supply chain structure, and a new mixed-integer linear programming model integrating economic and environmental performance indicators was developed. Finally, a use case with a new benchmark for the textile industry was designed to analyse the model. The objective of this study is to show that taking environmental aspects and the recycling of textile waste into account in the forward and reverse supply chain would strongly influence the behaviour of supply chain actors. This acts on two levels: the first is the configuration of the supply chain, and the second is the economic impact. Recycling reduces the amount of raw material to be ordered, as well as the consumption of virgin materials and the environmental impacts. Future works will focus on more advanced multi-objective optimization strategies, in order to better understand the correlation between the economic and environmental objectives, and will propose new resolution methods.


References 1. Niinimäki, K., Hassi, L.: Emerging design strategies in sustainable production and consumption of textiles and clothing. J. Clean. Prod. 19(16), 1876–1883 (2011) 2. Wang, Y., Lu, T., Zhang, C.: Integrated logistics network design in hybrid manufacturing/remanufacturing system under low-carbon restriction. In: LISS 2012, pp. 111–121. Springer, Berlin, Heidelberg (2013) 3. Radhi, M., Zhang, G.: Optimal configuration of remanufacturing supply network with return quality decision. Int. J. Prod. Res. 54(5), 1487–1502 (2016) 4. Ramezani, M., Bashiri, M., Tavakkoli-Moghaddam, R.: A new multi-objective stochastic model for a forward/reverse logistic network design with responsiveness and quality level. Appl. Math. Model. 37(1–2), 328–344 (2013) 5. Cattaruzza, D., Absi, N., Feillet, D., González-Feliu, J.: Vehicle routing problems for city logistics. EURO J. Transp. Logist. 6(1), 51–79 (2017) 6. Shaw, K., Irfan, M., Shankar, R., Yadav, S.S.: Low carbon chance constrained supply chain network design problem: a Benders decomposition based approach. Comput. Ind. Eng. 98, 483–497 (2016) 7. Yang, J., Guo, J., Ma, S.: Low-carbon city logistics distribution network design with resource deployment. J. Clean. Prod. 119, 223–228 (2016) 8. Yin, J., Zheng, M., Li, X.: Interregional transfer of polluting industries: a consumption responsibility perspective. J. Clean. Prod. 112, 4318–4328 (2016) 9. Safra, I., Jebali, A., Jemai, Z., Bouchriha, H., Ghaffari, A.: Capacity planning in textile and apparel supply chains. IMA J. Manag. Math. 30(2), 209–233 (2019) 10. Ren, H., Zhou, W., Makowski, M., Yan, H., Yu, Y., Ma, T.: Incorporation of life cycle emissions and carbon price uncertainty into the supply chain network management of PVC production. Ann. Oper. Res. 300(2), 601–620 (2021) 11. Paydar, M.M., Olfati, M., Triki, C.: Designing a clothing supply chain network considering pricing and demand sensitivity to discounts and advertisement. RAIRO–Operations Research 55 (2021) 12. Guo, J., Wang, X., Fan, S., Gen, M.: Forward and reverse logistics network and route planning under the environment of low-carbon emissions: a case study of Shanghai fresh food E-commerce enterprises. Comput. Ind. Eng. 106, 351–360 (2017) 13. Fu, R., et al.: Closed-loop supply chain network with interaction of forward and reverse logistics. Sustain. Prod. Consum. 27, 737–752 (2021) 14. Payet, J.: Assessment of carbon footprint for the textile sector in France. Sustainability 13(5), 2422 (2021) 15. ADEME, Lhotellier, J., Less, E., Bossanne, E., Pesnel, S.: Modélisation et évaluation du poids carbone de produits de consommation et biens d’équipements—Rapport, 217 p (2017) 16. Cuc, S., Vidovic, M.: Environmental sustainability through clothing recycling. Oper. Supply Chain. Manag.: Int. J. 4(2), 108–115 (2014) 17. Singh, R.K., Acharya, P., Taneja, D.: Impact of reverse logistics on apparel industry. Int. J. Serv. Oper. Manag. 25(1), 80–98 (2016) 18. Safdar, N., Khalid, R., Ahmed, W., Imran, M.: Reverse logistics network design of e-waste management under the triple bottom line approach. J. Clean. Prod. 272, 122662 (2020)

Supply Chain Design and Cost Allocation in a Collaborative Three-Echelon Supply Network: A Literature Review Tatiana Grimard, Nadia Lehoux, and Luc Lebel

Abstract Supply chains have been the focus of many studies. When warehouses or distribution centers must be located and allocated to customers and suppliers, a few key parameters can greatly affect the optimisation of the supply chain, for example its flexibility, the inclusion of inventory management and the consideration of uncertainty. We first provide an overview of the impacts of these parameters on the network design process. The project that motivated this review concerns a supply chain that must be designed for use by a coalition. Therefore, the second part of our review covers the different cost-allocation methods used in collaborative networks. We then discuss the advantages of designing a supply chain with a future collaboration in mind and how this consideration can affect the optimisation.

1 Introduction

Reforestation in eastern North America faces many challenges. Some of these challenges come from diseases and pests such as the forest tent caterpillar [47], the spruce budworm [7, 36, 54] or the scleroderris canker [34]. Other challenges concern climate change. A warming climate has caused the tree line of the forest-tundra to slowly move north over the last century [23, 24]. Climate change can also be observed in the tree nurseries, where frost events and late fall freezing affect the roots and the growth of the seedlings [8, 43]. These conditions make the distribution of seedlings a challenge, especially when we consider other restrictions such as the soil type best suited to each species [17] (Henned et al. 2020). One approach to reduce costs, avoid

T. Grimard (B) · N. Lehoux · L. Lebel Université Laval, 2325, rue de l'Université, Québec, Canada e-mail: [email protected] N. Lehoux e-mail: [email protected] L. Lebel e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 P. Cappanera et al. (eds.), Optimization and Decision Science: Operations Research, Inclusion and Equity, AIRO Springer Series 9, https://doi.org/10.1007/978-3-031-28863-0_28


spreading diseases, and protect seedlings is to package them (Mercier and Ersson 2016). The province of Quebec intends to implement this technique, but the supply chain for this project has yet to be developed. Designing such a supply chain involves determining the location of the cold storage facilities (i.e., warehouses or distribution centers) within a set of potential locations, and allocating the tree nurseries (i.e., suppliers) and reforestation zones (i.e., clients) to assign to each of these facilities. The objective of this review is therefore to highlight the different models that could be used to design an efficient and sustainable three-echelon supply network. To achieve this goal, a series of articles were selected based on specific criteria and then analysed so as to identify the models used by the authors to design a supply chain, manage its inventories and control its uncertainty. The review also aims to point out the cost-sharing mechanisms that can be used to operate such a collaborative supply chain. The analysis led to an understanding of how a supply chain can be designed to prepare for future collaboration.

2 Methodology

In order to investigate supply chain design models and cost-allocation methods, a systematic review was conducted based on four research questions:

• Question 1. What are the main models proposed for the three-echelon distribution supply chain structures?
• Question 2. What are the inventory management policies considered in the models?
• Question 3. How has uncertainty been incorporated in the models?
• Question 4. In a collaborative supply chain, what are the cost-allocation methods used depending on the type of resource shared?

Two different databases were exploited, Engineering Village and Web of Science Core Collection. These were selected because of the significant number of available scientific articles related to supply chain management and because their filters allow one to specify the subjects of interest. The searches were completed between November 2021 and February 2022 and included only results in French and in English. The criteria, search equations and results for each subject are detailed below [22]. The number of articles found varied greatly between the two search tools because of the "NEAR" function: since it was used with truncated words, the Engineering Village search did not enforce this proximity constraint, unlike the Web of Science search system. Questions 1 through 3 were searched with a generic equation aiming to find supply chain design problems including location decisions for warehouses or distribution centers, i.e., ((warehous* NEAR/1 location*) OR ("distribution center" NEAR/1 location*) OR (DCs NEAR/1 location*)) AND "network design". The Inspec and Compendex databases in Engineering Village produced 322 results and the Web of Science Core Collection led to 89 results. Out of those 411 records, 127 were duplicates and thus removed. The remaining 284 were screened based on the following criteria: the supply chain analysed had to be a three-echelon supply network, to better reflect the


case study considered; supply chains including outsourced carriers were excluded, as they cannot be used in the context of our case study; and designs of supply chains for emergency events were ignored. A total of 71 articles satisfied these criteria. Of those, 26 could not be retrieved. The remaining 45 were assessed for eligibility, and 31 of them were thoroughly analysed, as the topics described were directly linked to the research questions. For Question 4, we first searched by including the concepts of fairness and stability, i.e., distribution AND network AND (collabora* OR coordina* OR coalition*) AND ((shar* NEAR/1 cost*) OR (cost* NEAR/1 allocation*)) AND (fair OR stable). We then searched for cost-allocation methods based on game theory, i.e., (coordinat* OR cooperat* OR collaborat* OR coalition* OR "pooled warehouse") AND "game theory" AND (distribut* OR inventor*) AND ((shar* NEAR/1 cost*) OR (shar* NEAR/1 benefit*) OR (shar* NEAR/1 profit*)). The Engineering Village tool provided respectively 228 and 941 results, and the Web of Science tool provided 11 and 93 results. Out of those 1273 articles, 518 were duplicates, so they were removed. The remaining 755 were screened. Articles focussing on the creation of a collaborative supply chain rather than on the cost-allocation aspect were not considered. Cases where the shared resource was in the electronics or energy field were also excluded, as these supply chains can share virtual space or inventory that is not easily quantifiable. A total of 82 articles satisfied these criteria. Of those, 34 could not be retrieved. The remaining 48 were assessed for eligibility, and 19 articles addressing cases of revenue-sharing were excluded in favor of those considering cost-sharing or benefit-sharing. Hence, 29 articles relating to this topic were included in the review. The most frequent regions from which the articles originated are Europe, Eastern Asia, North America and Western Asia. A majority of the articles, i.e., 83%, were published after 2010.

3 Literature Review

The selected articles allowed us to answer the research questions as described in the following paragraphs.

Question 1. What are the main models proposed for the three-echelon distribution supply chain structures? Three types of structure are presented in the selected articles. The first type is a supply chain with one supplier or plant, multiple customers or retailers, and a list of potential warehouses or distribution centers. An example of such a supply chain is presented in Saif et al. [55], with the development of a standard location-allocation resolution approach. Another instance of such a supply chain is presented in Ozsen et al. [51]; their model includes a weight factor associated with transportation costs. Ozsen et al. [50] presented a similar supply chain. Their model determined the capacity needed to fulfill the demands and the flows when they are at their peak, rather than the average capacity requirement throughout the supply chain.


The second type is a supply chain with multiple suppliers or plants, multiple customers or retailers and a list of potential warehouses. Two articles presented this generic supply chain combination with similar models, i.e., Lashine et al. [35] and Gan et al. [25]. Azad and Davoudpour [6] presented a similar supply chain, focusing on Value-at-Risk in the objective function. Boujelben et al. [9] solved a location-allocation problem by first applying a clustering mechanism to determine which distribution centers to open and then solving the simple allocation problem with the clusters. Tancrez et al. [59] included multiple sourcing for retailers and warehouses while adding the option to assign flow directly between the plants and the retailers. Since this model becomes more difficult to solve, an initial solution was generated and then improved to obtain an efficient solution. Askin et al. [4] included multiple flows in one model by creating a set of states in which the inventory could be located; warehouse capacity was also determined from a set of possible sizes, i.e., small, medium, or large. Ambrosino and Scutellá (2005) used regional and central warehouses with regular and high-volume customers to give the model more flexibility, and Shahabi et al. [57] used hubs through which the products could transit before heading to the customers. These added layers make the models harder to solve, but they can be applied in many industry scenarios.

The third type presented is a supply chain with multiple customers or retailers, a list of potential plants or suppliers and a list of potential warehouses or distribution centers. Ashtab et al. [3] evaluated the Amiri algorithm to show the possible errors in its optimal solutions. Tanonkou et al. [60] presented an optimisation model where the potential warehouses are located at the retailers. Other articles presented this type of supply chain without a fixed capacity parameter: Jayaraman [30] solved this problem by optimising the transportation based on a parameter representing the space required by each type of product, whereas Salem and Haouari [56] presented a model that optimises the capacity of each facility by adding a penalty for the unused capacity. Finally, Chatzikontidou et al. [14] presented an approach to increase supply chain flexibility where the potential plants and distribution centers are generalized as potential nodes; the model includes parameters to track resource availability and usage.

Question 2. What are the inventory management policies considered in the models? Many models included an inventory management aspect in their supply chain design. This made it possible to determine the cycle time of the product and to optimise the reorder levels, the safety stocks, and the order quantities [20, 53, 61] (Diabat and Richard 2011; Li et al. 2021). Another aspect that can be added is the inventory check: Vahdani et al. [64] included intervals of time between inventory checks in the model, and, similarly, Araya-Sassi et al. [2] solved the inventory management model twice to compare the effects of continuous versus periodic inventory reviews. Finally, Darmawan et al. [18] presented a model that coordinates inventory management throughout the supply chain.


Question 3. How has uncertainty been incorporated in the models? Two main approaches are presented to include uncertainty in the models. The first is to incorporate average and variance parameters for the demand [58], or for both the demand and the supply lead time, either only for the retailers [11] or for both the retailers and the warehouses [18]. Vahdani et al. [64] built a robust meta-heuristic model that included the correlation between the demands of different customers. Diabat et al. [19] instead optimised an adapted stochastic model with uncertain supply and demand by balancing the inbound and outbound quantities with a queuing concept. The second, widely used approach is to include scenarios in a stochastic model, with an additional parameter indicating the probability of each demand scenario occurring [12, 13]. Li et al. [40] presented scenarios with minimum and maximum bounds, and Izadi and Kimiagari [29] focused on scenarios generated with Monte Carlo simulation in their robust model. Table 1 presents a summary of the selected articles and indicates their structure type, based on the descriptions in Question 1, and whether they include inventory management and uncertainty considerations.

Question 4. In a collaborative supply chain, what are the cost-allocation methods used depending on the type of resource shared? The main cost-allocation methods presented were the Shapley value, the Nucleolus method, Nash bargaining, and proportional allocation. Table 2 presents the number of applications of each method. The most common method used is the Shapley value. Asrol et al. [5] presented a fuzzy adaptation of the value to apply when the individual profit before joining the coalition is unknown. Wang et al. [67] used it in a monotonic path selection to find the optimal sequence. Mahjoub and Hennet [42] modified the Shapley value to make it work in the non-convex case. Other articles added factors to the Shapley value to improve its accuracy: Teng et al. [62] included risks in the evaluation, and Fiestras-Janeiro et al. [22] incorporated risk taking, market competitiveness, and investment amounts in their application. Li et al. [37] also extended the Shapley value with new criteria regarding operations, customer satisfaction, environmental sustainability, and IT; the weights of the added criteria were obtained with the Analytic Hierarchy Process. Other cost-allocation methods included the Alternative Cost Avoided Method [66], Activity-Based Costing [65], the Leading-idealism Cost Allocation Model [41], and the Average and Serial Cost Pricing methods that Mavronicolas et al. [44] compared with a custom method called the Fair Pricing Model. Cabo and Tidball [10] presented a method to evaluate cost allocation in a shared infrastructure investment based on the expected future benefits to each agent involved. Chen and Xie [16] explained a method based on the worst case to analyse a risk-sharing problem with distributional ambiguity. Grabisch and Xie [26] introduced a cost-allocation method for hierarchical structures. Mohebbi and Li [46] presented a Shared Capacity Index tool for supply chains sharing replenishment orders. Olgun et al. [48] used the principles of grey games to allocate the costs in a coordinated orders problem.
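Since the Shapley value recurs throughout the selected articles, a small worked example may help. The sketch below computes it for a hypothetical three-company cost game by averaging each company's marginal cost over all joining orders; all coalition costs are invented for illustration.

from itertools import permutations

cost = {frozenset(): 0,
        frozenset("A"): 60, frozenset("B"): 50, frozenset("C"): 40,
        frozenset("AB"): 90, frozenset("AC"): 80, frozenset("BC"): 70,
        frozenset("ABC"): 105}
players = ["A", "B", "C"]
shapley = {p: 0.0 for p in players}

# Average each player's marginal cost over all 3! joining orders
for order in permutations(players):
    coalition = frozenset()
    for p in order:
        shapley[p] += cost[coalition | {p}] - cost[coalition]
        coalition = coalition | {p}
shapley = {p: v / 6 for p, v in shapley.items()}

print(shapley)  # {'A': 45.0, 'B': 35.0, 'C': 25.0}; sums to cost(ABC) = 105

By construction, the allocations sum to the grand-coalition cost, one of the properties that makes the method a common default.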


Table 1 Supply chain types, inventory, and uncertainty management in selected articles (for each of the selected design articles, the table reports the structure type, per Question 1, and whether inventory management and uncertainty considerations are included)

Table 2 Cost-allocation methods used in selected articles, where "Shap" is Shapley, "Nucl" is Nucleolus, "ECM" is Equal Cost Method, "EPM" is Equal Profit Method, "Egal" is Egalitarian, and "Prop" is Proportional; for each article, the table reports the context of collaboration and the methods applied. Totals by method: Shapley 18, Nucleolus 5, Nash 7, ECM 2, EPM 2, Core 2, Egalitarian 2, Proportional 10, Other 15.


Yilmaz et al. [70] presented a sharing method, in a context of shared capacity to fulfill orders, based on the capacity lent to other companies. Jin et al. [31] introduced cooperative and coopetitive space-sharing methods that allocate the costs and the inventory space simultaneously in shared container storage at a port. Karsten et al. [33] presented a concavicated increasing and average marginal rule for a shared spare-parts inventory problem. These articles showed that, even if specific methods can be implemented for unique situations, the Shapley value is often a viable option to solve a cost-allocation problem.

4 Discussion
Two main subjects were covered in this review: supply chain design, and cost-allocation mechanisms in collaborative supply chains. The design of a three-echelon supply network was analysed along three axes, i.e., the structure of the supply chain, the incorporation of inventory management, and the ways to include uncertainty. The structure of the supply chain can be separated into three types. The first type considers only one general supplier and fixed customers with potential warehouse locations. The second type adds the flexibility to allocate each warehouse to a specific plant, when there is more than one, instead of simply allocating customers to each warehouse. The third type adds the flexibility to locate the plants as well as the warehouses in the supply chain. Further flexibility can be added when the capacity of the warehouses can be determined along with their location and allocation [14, 30, 56].

Each method can be used at a different moment in the supply chain design process. When a supply chain is already partially built, i.e., plants and customers are already fixed, the location and allocation of the warehouses are the only variables of the optimisation problem. When customers are fixed but the rest of the supply chain is not yet decided, the added flexibility in the location of the plants allows for a total cost that is lower than it would be with fixed facilities. A collaborative supply chain can be viewed the same way when all the collaborators are willing to reallocate customers, warehouse space, etc. The difficulty is finding the balance between creating a problem with the maximum flexibility, to allow greater savings, and having enough information about the supply chain for the model to be accurate, which often comes once the chain is already partially fixed. A way to deal with this lack of information is to include uncertainty parameters in the model, either through demand and supply averages and variances or by determining scenarios and weighing their probabilities.

If inventory management is added at the beginning of the supply chain design process, especially when it is a coordinated management policy as in Darmawan et al. [18], the supply chain is configured in a way that will make collaboration easier between the different agents. Their systems will already be integrated, which avoids issues during a transition from a group of single-entity companies to a coalition. Figure 1 suggests including these three elements when a new supply chain has to be designed with the possibility of it involving a coalition. Creating a model that allows flexibility,


Fig. 1 Framework for a new collaborative supply chain

Fig. 2 Application of the framework in our case study

uncertainty, and inventory coordination is the ideal combination to create a truly efficient coalition supply chain. In our case study, involving the design of a supply chain for the distribution of packaged seedlings, these three elements will be addressed in the optimisation as summarised in Fig. 2. The structure is flexible, as we can select the tree nurseries to involve in the supply chain, and we can determine the location and the capacity of the cold storage facilities as well as the allocation of reforestation zones. Coordinated inventory management will be valuable in this case to optimise the defrosting point so that inventory is available for deliveries in the allocated zones. Finally, uncertainty


must be included for both the supply and the demand. On the supply side, production can be affected by climate or diseases; the uncertainty of the demand comes from the availability of the employees planting the seedlings. In the same way, taking into account the possibilities for cost and risk sharing during the design of the supply chain reveals the parameters, other than cost, that will influence the collaboration, such as environmental sustainability or market competitiveness. Using these parameters to create a custom cost-allocation method, or to adapt a versatile method like the Shapley value, could ensure a fair allocation and facilitate the collaboration negotiation process. When a supply chain is designed while the collaboration contract is already agreed upon, many problems that can be encountered during the transition from a single company to a coalition are avoided. These issues include migrating to another inventory management system or losing customer satisfaction because of a necessary re-allocation of warehouses.

5 Conclusion
Through a systematic review of the relevant literature, this review has covered the different tools used to create an efficient and flexible supply chain and to allocate the costs of a collaborative supply chain. A natural next step for further research would be to analyse the effects of modelling the coalition within the design of the supply chain itself; a process could be developed to seamlessly integrate both aspects of the model.

References
1. Ambrosino, D., Scutellá, M.G.: Distribution network design: new problems and related models. Eur. J. Oper. Res. 165, 610–624 (2005). https://doi.org/10.1016/j.ejor.2003.04.009
2. Araya-Sassi, C., Paredes-Belmar, G., Gutiérrez-Jarpa, G.: Multi-commodity inventory-location problem with two different review inventory control policies and modular stochastic capacity constraints. Comput. Ind. Eng. 143, 106410 (2020). https://doi.org/10.1016/j.cie.2020.106410
3. Ashtab, S., Caron, R.J., Selvarajah, E.: A characterization of alternate optimal solutions for a supply chain network design model. INFOR: Inf. Syst. Oper. Res. 53(2), 90–93 (2015). https://doi.org/10.3138/infor.53.2.90
4. Askin, R.G., Baffo, I., Xia, M.: Multi-commodity warehouse location and distribution planning with inventory consideration. Int. J. Prod. Res. 52(7), 1897–1910 (2014). https://doi.org/10.1080/00207543.2013.787171
5. Asrol, M., Marimin, Machfud, Yani, M., Taira, E.: Supply chain fair profit allocation based on risk and value added for sugarcane agro-industry. Oper. Supply Chain Manag. 13(2), 150–165 (2020)
6. Azad, N., Davoudpour, H.: A two echelon location-routing model with considering Value-at-Risk measure. Int. J. Manag. Sci. Eng. Manag. 5(3), 235–240 (2010). https://doi.org/10.1080/17509653.2010.10671113


7. Berthiaume, R., Hébert, C., Dupont, A., Charest, M., Bauce, E.: The spruce budworm, a potential threat for Norway spruce in Eastern Canada? Forestry Chron. 96(1), 71–76 (2020)
8. Bigras, F.J., Margolis, H.A.: Shoot and root sensitivity of containerized black spruce, white spruce and jack pine seedlings to late fall freezing. New Forests 13, 23–49 (1996)
9. Boujelben, M.K., Gicquel, C., Minoux, M.: A MILP model and heuristic approach for facility location under multiple operational constraints. Comput. Ind. Eng. 98, 446–461 (2016). https://doi.org/10.1016/j.cie.2016.06.022
10. Cabo, F., Tidball, M.: Promotion of cooperation when benefits come in the future: a water transfer case. Res. Energy Econ. 47, 56–71 (2017). https://doi.org/10.1016/j.reseneeco.2016.12.001
11. Cabrera, G., Miranda, P.A., Cabrera, E., Soto, R., Crawford, B., Rubio, J.M., Paredes, F.: Solving a novel inventory location model with stochastic constraints and (R, s, S) inventory control policy. Math. Probl. Eng. 2013, 670528 (2013). https://doi.org/10.1155/2013/670528
12. Cardona-Valdés, Y., Alvarez, A., Ozdemir, D.: A bi-objective supply chain design problem with uncertainty. Transp. Res. Part C 19, 821–832 (2011). https://doi.org/10.1016/j.trc.2010.04.003
13. Cardona-Valdés, Y., Alvarez, A., Pacheco, J.: Metaheuristic procedure for a bi-objective supply chain design problem with uncertainty. Transp. Res. Part B 60, 66–84 (2014). https://doi.org/10.1016/j.trb.2013.11.010
14. Chatzikontidou, A., Longinidis, P., Tsiakis, P., Georgiadis, M.C.: Flexible supply chain network design under uncertainty. Chem. Eng. Res. Des. 128, 290–305 (2017). https://doi.org/10.1016/j.cherd.2017.10.013
15. Chau, C.K., Elbassioni, K.: Quantifying inefficiency of fair cost-sharing mechanisms for sharing economy. IEEE Trans. Control Netw. Syst. 5(4), 1809–1818 (2018). https://doi.org/10.1109/TCNS.2017.2763747
16. Chen, Z., Xie, W.: Sharing the value-at-risk under distributional ambiguity. Math. Financ. 31, 531–559 (2020). https://doi.org/10.1111/mafi.12296
17. Cogliastro, A., Gagnon, D., Bouchard, A.: Experimental determination of soil characteristics optimal for the growth of ten hardwoods planted on abandoned farmland. For. Ecol. Manag. 96, 49–63 (1997)
18. Darmawan, A., Wong, H., Thorstenson, A.: Supply chain network design with coordinated inventory control. Transp. Res. Part E 145, 102168 (2021). https://doi.org/10.1016/j.tre.2020.102168
19. Diabat, A., Dehghani, E., Jabbarzadeh, A.: Incorporating location and inventory decisions into a supply chain design problem with uncertain demands and lead times. J. Manuf. Syst. 43, 139–149 (2017). https://doi.org/10.1016/j.jmsy.2017.02.010
20. Diabat, A., Deskoores, R.: A hybrid genetic algorithm based heuristic for an integrated supply chain problem. J. Manuf. Syst. 38, 172–180 (2016). https://doi.org/10.1016/j.jmsy.2015.04.011
21. Diabat, A., Richard, J.P., Codrington, C.W.: A Lagrangian relaxation approach to simultaneous strategic and tactical planning in supply chain design. Ann. Oper. Res. 203, 55–80 (2013). https://doi.org/10.1007/s10479-011-0915-2
22. Fiestras-Janeiro, M.G., Garcia-Jurado, I., Meca, A., Mosquera, M.A.: Cooperation on capacitated inventory situations with fixed holding costs. Eur. J. Oper. Res. 241, 719–726 (2015). https://doi.org/10.1016/j.ejor.2014.09.016
23. Gamache, I., Payette, S.: Height growth response of tree line black spruce to recent climate warming across the forest-tundra of eastern Canada. J. Ecol. 92, 835–845 (2004)
24. Gamache, I., Payette, S.: Latitudinal response of subarctic tree lines to recent climate change in eastern Canada. J. Biogeogr. 32, 849–862 (2005). https://doi.org/10.1111/j.1365-2699.2004.01182.x
25. Gan, M., Li, Z., Chen, S.: On the transformation mechanism for formulating a multiproduct two-layer supply chain network design problem as a network flow model. Math. Probl. Eng. 2014, 480127 (2014). https://doi.org/10.1155/2014/480127


26. Grabisch, M., Xie, L.: The restricted core of games on distributive lattices: how to share benefits in a hierarchy. Math. Meth. Oper. Res. 73, 189–208 (2011). https://doi.org/10.1007/s00186-010-0341-2
27. Henneb, M., Thiffault, N., Valeria, O.: Regional climate, edaphic conditions and establishment substrates interact to influence initial growth of black spruce and jack pine planted in the boreal forest. Forests 11, 139 (2020). https://doi.org/10.3390/f11020139
28. Hua, Z., Li, S.: Impacts of demand uncertainty on retailer's dominance and manufacturer-retailer supply chain cooperation. Omega 36, 697–714 (2008). https://doi.org/10.1016/j.omega.2006.02.005
29. Izadi, A., Kimiagari, A.M.: Distribution network design under demand uncertainty using genetic algorithm and Monte Carlo simulation approach: a case study in pharmaceutical industry. J. Ind. Eng. Int. 10, 50 (2014). https://doi.org/10.1007/s40092-014-0050-1
30. Jayaraman, V.: Transportation, facility location and inventory issues in distribution network design. Int. J. Oper. Prod. Manag. 18(5), 471–494 (1998)
31. Jin, X., Park, K.T., Kim, K.H.: Storage space sharing among container handling companies. Transp. Res. Part E 127, 111–131 (2019). https://doi.org/10.1016/j.tre.2019.05.001
32. Jouida, S.B., Guajardo, M., Klibi, W., Krichen, S.: Profit maximizing coalitions with shared capacities in distribution networks. Eur. J. Oper. Res. 288, 480–495 (2021). https://doi.org/10.1016/j.ejor.2020.06.005
33. Karsten, F., Slikker, M., Borm, P.: Cost allocation rules for elastic single-attribute situations. Nav. Res. Logist. (2017). https://doi.org/10.1002/nav.21749
34. Laflamme, G., Hopkin, A.A., Harrison, K.J.: Status of the European race of Scleroderris canker in Canada. For. Chron. 74(4), 561–566 (1998)
35. Lashine, S.H., Fattouh, M., Issa, A.: Location/allocation and routing decisions in supply chain network design. J. Model. Manag. 1(2), 173–183 (2006). https://doi.org/10.1108/17465660610703495
36. Lavoie, J., Montoro Girona, M., Morin, H.: Vulnerability of conifer regeneration to spruce budworm outbreaks in the Eastern Canadian boreal forest. Forests 10, 850 (2019). https://doi.org/10.3390/f10100850
37. Li, L., Wang, X., Lin, Y.: Cooperative game-based profit allocation for joint distribution alliance under online shopping environment. Asia Pac. J. Mark. Log. 31(2), 302–326 (2018). https://doi.org/10.1108/APJML-02-2018-0050
38. Li, S., Yu, Z., Dong, M.: Construct the stable vendor managed inventory partnership through a profit-sharing approach. Int. J. Syst. Sci. 46(2), 271–283 (2015). https://doi.org/10.1080/00207721.2013.777982
39. Li, Y., Lin, Y., Shu, J.: Location and two-echelon inventory network design with economies and diseconomies of scale in facility operating costs. Comput. Oper. Res. 133, 105347 (2021). https://doi.org/10.1016/j.cor.2021.105347
40. Li, Y., Shu, J., Song, M., Zhang, J., Zheng, H.: Multisourcing supply network design: two-stage chance-constrained model, tractable approximations, and computational results. INFORMS J. Comput. 29(2), 287–300 (2017). https://doi.org/10.1287/ijoc.2016.0730
41. Liu, N., Cheng, Y.: Allocating cost to freight carriers in horizontal logistic collaborative transportation planning on leading company perspective. Math. Probl. Eng. 2020, 4504086 (2020). https://doi.org/10.1155/2020/4504086
42. Mahjoub, S., Hennet, J.C.: Manufacturers' coalition under a price elastic market—a quadratic production game approach. Int. J. Prod. Res. 52(12), 3568–3582 (2014). https://doi.org/10.1080/00207543.2013.874605
43. Marquis, B., Duval, P., Bergeron, Y., Simard, M., Thiffault, N., Tremblay, F.: Height growth stagnation of planted spruce in boreal mixedwoods: importance of landscape, microsite, and growing-season frosts. For. Ecol. Manag. 479, 118533 (2021). https://doi.org/10.1016/j.foreco.2020.118533
44. Mavronicolas, M., Panagopoulou, P.N., Spirakis, P.G.: Cost sharing mechanisms for fair pricing of resource usage. Algorithmica 52, 19–43 (2008). https://doi.org/10.1007/s00453-007-9108-4


45. Meca, A., Timmer, J., Garcia-Jurado, I., Borm, P.: Inventory games. Eur. J. Oper. Res. 156, 127–139 (2004). https://doi.org/10.1016/S0377-2217(02)00913-X
46. Mohebbi, S., Li, X.: Coalitional game theory approach to modeling suppliers' collaboration in supply networks. Int. J. Prod. Econ. 169, 333–342 (2015). https://doi.org/10.1016/j.ijpe.2015.08.022
47. Moulinier, J., Lorenzetti, F., Bergeron, Y.: Gap dynamics in aspen stands of the Clay Belt of northwestern Quebec following a forest tent caterpillar outbreak. Can. J. For. Res. 41, 1606–1617 (2011). https://doi.org/10.1139/X11-075
48. Olgun, M.O., Alparslan Gök, S.Z., Özdemir, G.: Cooperative grey games and an application on economic order quantity model. Kybernetes 45(5), 828–838 (2016). https://doi.org/10.1108/K-06-2015-0160
49. Ouhader, H., El Kyal, M.: Sustainable horizontal collaboration: a case study in Moroccan dry foods distribution. IFIP AICT 629, 768–777 (2021). https://doi.org/10.1007/978-3-030-85969-5_73
50. Ozsen, L., Coullard, C.R., Daskin, M.S.: Capacitated warehouse location model with risk pooling. Nav. Res. Logist. (2008). https://doi.org/10.1002/nav.20282
51. Ozsen, L., Daskin, M.S., Coullard, C.R.: Facility location modeling and inventory management with multisourcing. Transp. Sci. 43(4), 455–472 (2009). https://doi.org/10.1287/trsc.1090.0268
52. Page, M.J., McKenzie, J.E., Bossuyt, P.M., Boutron, I., Hoffmann, T.C., Mulrow, C.D., et al.: The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. Syst. Rev. 10, 89 (2021)
53. Park, S., Lee, T.E., Sung, C.S.: A three-level supply chain network design model with risk-pooling and lead times. Transp. Res. Part E 46, 563–581 (2010). https://doi.org/10.1016/j.tre.2009.12.004
54. Rossi, S., Morin, H.: Demography and spatial dynamics in balsam fir stands after a spruce budworm outbreak. Can. J. For. Res. 41, 1112–1120 (2011). https://doi.org/10.1139/X11-037
55. Saif, A., Keyvandarian, A., Elhedhli, S.: A new Lagrangian-Benders approach for a concave cost supply chain network design problem. INFOR: Inf. Syst. Oper. Res. 59(3), 495–516 (2021). https://doi.org/10.1080/03155986.2021.1923978
56. Salem, R.W., Haouari, M.: A simulation-optimisation approach for supply chain network design under supply and demand uncertainties. Int. J. Prod. Res. 55(7), 1845–1861 (2017). https://doi.org/10.1080/00207543.2016.1174788
57. Shahabi, M., Akbarinasaji, S., Unnikrishnan, A., James, R.: Integrated inventory control and facility location decisions in a multi-echelon supply chain network with hubs. Netw. Spat. Econ. 13, 497–514 (2013). https://doi.org/10.1007/s11067-013-9196-4
58. Shu, J., Wu, T., Zhang, K.: Warehouse location and two-echelon inventory management with concave operating cost. Int. J. Prod. Res. 53(9), 2718–2729 (2015). https://doi.org/10.1080/00207543.2014.977456
59. Tancrez, J.S., Lange, J.C., Semal, P.: A location-inventory model for large three-level supply chains. Transp. Res. Part E 48, 485–502 (2012). https://doi.org/10.1016/j.tre.2011.10.005
60. Tanonkou, G., Benyoucef, L.: Integrated facility location and supplier selection decisions in a distribution network design. In: Proceedings of SOLI 2006, pp. 399–404 (2006). https://doi.org/10.1109/SOLI.2006.236425
61. Tapia-Ubeda, F.J., Miranda, P.A., Macchi, M.: A Generalized Benders Decomposition based algorithm for an inventory location problem with stochastic inventory capacity constraints. Eur. J. Oper. Res. 267, 806–817 (2018). https://doi.org/10.1016/j.ejor.2017.12.017
62. Teng, Y., Li, X., Wu, P., Wang, X.: Using cooperative game theory to determine profit distribution in IPD projects. Int. J. Constr. Manag. 19(1), 32–45 (2019). https://doi.org/10.1080/15623599.2017.1358075
63. Timmer, J., Chessa, M., Boucherie, R.J.: Cooperation and game-theoretic cost allocation in stochastic inventory models with continuous review. Eur. J. Oper. Res. 231, 567–576 (2013). https://doi.org/10.1016/j.ejor.2013.05.051


64. Vahdani, B., Soltani, M., Yazdani, M., Meysam Mousavi, S.: A three level joint location-inventory problem with correlated demand, shortages and periodic review system: robust meta-heuristics. Comput. Ind. Eng. 109, 113–129 (2017). https://doi.org/10.1016/j.cie.2017.04.041
65. Vanovermeire, C., Sörensen, K.: Measuring and rewarding flexibility in collaborative distribution, including two-partner coalitions. Eur. J. Oper. Res. 239, 157–165 (2014). https://doi.org/10.1016/j.ejor.2014.04.015
66. Verdonck, L., Beullens, P., Caris, A., Ramaekers, K., Janssens, G.K.: Analysis of collaborative savings and cost allocation techniques for the cooperative carrier facility location problem. J. Oper. Res. Soc. 67(6), 853–871 (2016). https://doi.org/10.1057/jors.2015.106
67. Wang, Y., Ma, X., Liu, M., Gong, K., Liu, Y., Xu, M., Wang, Y.: Cooperation and profit allocation in two-echelon logistics joint distribution network optimization. Appl. Soft Comput. 56, 143–157 (2017). https://doi.org/10.1016/j.asoc.2017.02.025
68. Wen, M., Larsen, R., Ropke, S., Petersen, H.L., Madsen, O.B.G.: Centralised horizontal cooperation and profit sharing in a shipping pool. J. Oper. Res. Soc. 70(5), 737–750 (2019). https://doi.org/10.1080/01605682.2018.1457481
69. Yan, R.: Managing channel coordination in a multi-channel manufacturer-retailer supply chain. Ind. Market. Manag. 40, 636–642 (2011). https://doi.org/10.1016/j.indmarman.2010.12.019
70. Yilmaz, I., Yoon, S.W., Seok, H.: A framework and algorithm for fair demand and capacity sharing in collaborative networks. Int. J. Prod. Econ. 193, 137–147 (2017). https://doi.org/10.1016/j.ijpe.2017.06.027
71. Zhai, Y., Fu, Y., Xu, G., Huang, G.: Multi-period hedging and coordination in a prefabricated construction supply chain. Int. J. Prod. Res. 57(7), 1949–1971 (2019). https://doi.org/10.1080/00207543.2018.1512765
72. Zhang, C.T., Liu, L.P.: Research on coordination mechanism in three-level green supply chain under non-cooperative game. Appl. Math. Model. 37, 3369–3379 (2013). https://doi.org/10.1016/j.apm.2012.08.006

A Mathematical Model to Locate Services of General Economic Interest S. Baldassarre, G. Bruno, M. Cavola, and E. Pipicelli

Abstract Services of General Economic Interest (SGEI) are services of public interest whose provision is ruled by public authorities to guarantee accessibility levels for users and fair competition among service providers. Therefore, locating SGEI in a given territory implies the definition of adequate criteria driving the decision-making process. In this context, Facility Location Models (FLMs) can play a crucial role as a decision support tool to simulate regulatory frameworks and identify practical solutions. In this work, we propose a mathematical model for the location of such services, including constraints about users' spatial accessibility while considering a dispersion criterion to ensure fair market competition. We apply the model to a real problem concerning the locations of pharmacies in the city of Pamplona (Navarre, Spain). The obtained results show the capability of the model to produce meaningful scenarios that can support the definition of appropriate regulatory frameworks.

1 Introduction
“Services of general economic interest (SGEI) are economic activities which deliver outcomes in the overall public good that would not be supplied (or would be supplied under different conditions in terms of quality, safety, affordability, equal treatment or universal access) by the market without public intervention” [7]. Schools, hospitals,


pharmacies, emergency sites, and post offices are some examples of SGEI. Public authorities regulate their geographical presence, defining criteria to assure appropriate service levels to potential users. However, achieving this objective may suggest activating an unsustainable number of territorial facilities; limiting the number of facilities would instead permit containing fixed and variable costs. Besides, if service providers are private (e.g., pharmacies' owners), it is fundamental to guarantee profitable market shares to each player. Facility Location Models (FLMs) can be effectively used to describe these problems [6, 8]. In particular, the conflicting interests of the actors involved (users and service providers) can be included in the models as constraints, distinguishing between users-oriented and providers-oriented criteria. The former generally focus on geographical aspects, such as the distance between users and providers; the latter deal with conditions to be respected in terms of mutual distance among the providers. In the facility location literature, different examples of these constraints were used to model SGEI location problems [2–4, 9]. However, we note that most of the existing studies generally privilege one of these two points of view.

In this paper, we propose a mathematical model including both types of constraints. In particular, users' accessibility is described in terms of the coverage that users get from active service providers; on the other hand, a dispersion criterion is considered, so that distances between pairs of facilities have to be above a minimum required threshold. The model is well suited to describing a problem related to the regulation of pharmacies' locations in a given territorial context. This aspect represents the original contribution to the existing literature, together with the peculiarity of the analyzed case study concerning the city of Pamplona (Navarre, Spain). Indeed, the case study is particularly interesting as, since the early 2000s, the Navarre region promoted a deregulation policy, relaxing constraints on the activation of new pharmacies in order to increase their presence and, consequently, improve users' accessibility. This circumstance caused a significant increase in the number of located pharmacies; however, the new market scenario determined by the providers' location choices did not appear as beneficial as expected. For this reason, the availability of a model able to produce scenarios corresponding to different regulatory frameworks seems necessary to help decision-makers identify adequate regulations coherent with the envisaged goals.

The remainder of the paper is organised as follows. In Sect. 2, the formal description of the problem and the mathematical model are given. In Sect. 3, we present the case study, while in Sect. 4, we outline the main findings from our computational experiments. A summary of the work done finally follows in Sect. 5.

2 Mathematical Model
As stated, our goal is to propose a mathematical model to support public authorities in the location of an SGEI. Without loss of generality, hereafter we will explicitly refer to pharmacies, which are the focus of the present work.


With reference to a given region, we consider a discrete set $I$ of user nodes and a set $J$ of potential pharmacy locations. We denote by $p_i$ the population associated with each node $i \in I$, and by $d_{ij}$ the users-to-pharmacies distances. Additionally, let $c_{jk}$ be the distance between pairs of pharmacies $j, k \in J$. We assume the decision-maker is interested in driving the location process by fixing geographical criteria related to users' accessibility and pharmacies' competition. We use a covering paradigm to cast the former: a user having the closest pharmacy within a given distance $r$ is covered within $r$ itself. Precisely, we consider the presence of a set of “quality/service levels” $K$, each associated with a specific covering radius $r_k$ ($r_1 \le r_2 \le \dots \le r_{|K|}$). We consider that the decision-maker is willing to guarantee that given percentages of users, say $\alpha_k$, are covered within each radius $r_k$. Besides, in terms of competition, we consider the introduction of a dispersion criterion, ensuring a minimum distance $\bar{c}$ between any two located pharmacies. Finally, it is worth noting that, since pharmacies are free to enter the market compliant with the above restrictions, their actual future number cannot be predicted or prescribed. Therefore, it may be insightful for the decision-maker to evaluate solutions corresponding to two extreme cases: the minimum and the maximum market saturation. For this reason, the objective function of the model consists of both minimizing and maximizing the number of located pharmacies.

To formulate the model, in addition to the notation already introduced, we use the following decision variables:

$y_j$: binary variables equal to 1 if a pharmacy is located in $j \in J$;
$x_{ik}$: binary variables equal to 1 if users $i \in I$ are covered within service level $k$.

The mathematical model (M1) is as follows:

$$\text{maximize [minimize]} \quad \sum_{j \in J} y_j \qquad (1)$$

subject to

$$\sum_{i \in I} p_i x_{ik} \ge \alpha_k \sum_{i \in I} p_i \qquad \forall k \in K \qquad (2)$$

$$\sum_{j \in J : d_{ij} \le r_k} y_j \ge x_{ik} \qquad \forall i \in I,\ k \in K \qquad (3)$$

$$\sum_{t \in M_j^{\bar{c}}} y_t \le \left(|M_j^{\bar{c}}| - 1\right)(1 - y_j) \qquad \forall j \in J \qquad (4)$$

$$y_j,\ x_{ik} \in \{0, 1\} \qquad \forall i \in I,\ j \in J,\ k \in K \qquad (5)$$

Objective function (1) maximizes (or minimizes) the number of located facilities. Constraints (2) are the users' accessibility conditions, stating that at least an $\alpha_k$ percentage of users is covered within radius $r_k$. Constraints (3) ensure that users in $i$ are covered within service level $k$ if at least one pharmacy is located within distance $r_k$ from node $i$ itself. Constraints (4) assure that the minimum distance between two located pharmacies is greater than $\bar{c}$ ($M_j^{\bar{c}}$ denotes the set of potential locations within distance $\bar{c}$ from $j$). Finally, Constraints (5) define the domain of the introduced decision variables.

We emphasize that model (M1) may render infeasible solutions according to the specific settings of its calibration parameters, namely the pairs $(\alpha_k, r_k)$, $k \in K$, and $\bar{c}$. In practice, it may not be possible in some cases to reach the desired coverage level $\alpha_k$ within radius $r_k$ under the imposed dispersion $\bar{c}$. In order to overcome this limitation, a simple idea is to introduce non-negative auxiliary decision variables $\phi_k$, quantifying the uncovered population for each service level $k$ w.r.t. the envisaged goal $\alpha_k$. The new model (M2) is as follows:

$$\text{minimize} \quad \sum_{k \in K} \omega_k \phi_k \qquad (6)$$

$$\text{maximize [minimize]} \quad (1)$$

subject to (3)–(5) and

$$\sum_{i \in I} p_i x_{ik} + \phi_k \ge \alpha_k \sum_{i \in I} p_i \qquad \forall k \in K \qquad (7)$$

$$\phi_k \ge 0 \qquad \forall k \in K \qquad (8)$$

Objective function (6) minimizes the total uncovered population; note that, in practice, (6) maximizes coverage within each service level. Parameters $\omega_k$ express the relative weight/importance of each service level; their setting is discussed in Sect. 4. Constraints (7) are adapted from inequalities (2) to include the variables $\phi_k$, whose domain is defined by Constraints (8). We underline that, given the above conditions, $\phi_k = \max\{0,\ \alpha_k \sum_{i \in I} p_i - \sum_{i \in I} p_i x_{ik}\}$, $k \in K$. Hence, ideally, $\phi_k = 0$ means the achievement of the desired coverage for service level $k$. The problem is NP-hard, since it is an extension of the maximal covering location problem [5] with multiple covering radii and additional dispersion constraints.
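For illustration, model (M1) can be prototyped in a few lines with an open-source solver. The sketch below uses Python with PuLP and small invented data (the experiments in Sect. 4 use CPLEX instead); the variables mirror $y_j$ and $x_{ik}$, and the set $M_j^{\bar{c}}$ here excludes $j$ itself, so the right-hand side of (4) uses $|M_j^{\bar{c}}|$ rather than $|M_j^{\bar{c}}| - 1$, which is equivalent.

import pulp

I = range(4)                        # user nodes
J = range(3)                        # candidate pharmacy sites
K = range(2)                        # service levels
p = [100, 80, 120, 60]              # population of each node
d = [[0.2, 0.6, 1.1], [0.4, 0.3, 0.9],
     [0.8, 0.2, 0.4], [1.0, 0.7, 0.3]]                   # d_ij (km)
c = [[0.0, 0.4, 0.9], [0.4, 0.0, 0.6], [0.9, 0.6, 0.0]]  # c_jk (km)
r = [0.5, 1.0]                      # covering radii r_k
alpha = [0.75, 0.90]                # required covered fractions alpha_k
c_bar = 0.45                        # minimum dispersion distance

m = pulp.LpProblem("M1", pulp.LpMaximize)  # LpMinimize for the other bound
y = pulp.LpVariable.dicts("y", J, cat="Binary")
x = pulp.LpVariable.dicts("x", [(i, k) for i in I for k in K], cat="Binary")

m += pulp.lpSum(y[j] for j in J)                                    # (1)
for k in K:                                                         # (2)
    m += pulp.lpSum(p[i] * x[i, k] for i in I) >= alpha[k] * sum(p)
for i in I:                                                         # (3)
    for k in K:
        m += pulp.lpSum(y[j] for j in J if d[i][j] <= r[k]) >= x[i, k]
for j in J:                                                         # (4)
    M_j = [t for t in J if t != j and c[j][t] < c_bar]
    if M_j:
        m += pulp.lpSum(y[t] for t in M_j) <= len(M_j) * (1 - y[j])

m.solve(pulp.PULP_CBC_CMD(msg=False))
print([int(y[j].value()) for j in J])   # which sites are opened

Adding the $\phi_k$ variables of (M2) amounts to relaxing constraint (2) with a non-negative slack penalized in a first-stage objective.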

3 Case Study
We apply our model to the case of facility relocations that occurred after the introduction of deregulation policies in the retail pharmaceutical sector in the Navarre Region (Spain). In 1996, the Decree-Law 11/1996 introduced geographic and demographic criteria for opening new pharmacies in Spain (i.e., a minimum distance of 250 m among pharmacies and one pharmacy per 2800 inhabitants). The criteria were valid at the national level, but the right to modify such rules was left to the Autonomous Communities to better take into account specific local conditions. In particular, the Navarre Region is the most representative case because it reduced the minimum distance among pharmacies to 150 m, fixing a maximum of one pharmacy


Fig. 1 Pamplona: manzanas (grey polygons) and pharmacies (red dots)

per 700 inhabitants (Ley Foral 12/2000). This rule induced a dramatic entry process, which doubled the overall number of facilities. We focus on the city of Pamplona, where the number of pharmacies increased from 103 in 2000 to 209 in 2020. Even if this increase has improved accessibility conditions, it has also led to a substantial loss of market share for the existing pharmacies' owners. For these reasons, the proposed model, which safeguards both users and owners, fits such a case study well, allowing an alternative and more efficient regulatory scheme to be proposed. All the necessary data for our analysis were sourced from the official online website (https://idena.navarra.es/) and elaborated through the software QGIS. To represent the demand distribution, we referred to the geographical division into sub-municipal territorial units called manzanas (smaller than census tracts). We considered the total population (201,716) of the year 2020 concentrated in the centroids of the 779 manzanas ($|I| = 779$). We assumed set $J$ to correspond to the actual pharmacies' locations in 2020 ($|J| = 209$). We obtained the users-to-pharmacies distances $d_{ij}$ and the reciprocal distances among pharmacies $c_{jk}$ as shortest paths on the road network. Figure 1 provides a graphical representation of the study area. Based on these data, we performed a spatial analysis to assess users' accessibility in the 2020 scenario (AS-IS). Following [1, 3], we measured users' accessibility by the distance to the closest active pharmacy. Accordingly, we calculated the following indicators: (i) the percentage of users covered within some given distance $d$ ($\alpha(d)$); (ii) the average distance users should travel to reach the closest pharmacy ($d_{avg}$); (iii) the Gini coefficient, as a measure of inequality in the market shares'


Table 1 Results from the spatial analysis (AS-IS scenario, Pamplona). No. of pharmacies: 209. Coverage $\alpha(d)$: 8.0% within 0.1 km, 40.8% within 0.2 km, 62.8% within 0.3 km, 75.2% within 0.5 km, 86.3% within 1.0 km, 93.6% within 1.5 km, 100.0% within 2.0 km. Average distance $d_{avg}$ = 0.43 km. Gini coefficient = 0.54.

distribution ($G$). Note that market shares were calculated as the number of users patronizing a pharmacy under a closest-assignment principle (consistently with the considered accessibility indicator). Table 1 reports the above-mentioned indicators. In general, we can observe that the high pharmacy concentration in the city produced two main effects. On the one hand, users benefited from favorable accessibility conditions: more than 75% of the population has at least one pharmacy within 0.5 km and, on average, users access the closest service by traveling only 0.43 km. On the other hand, the market share distribution appears unequal, as testified by the relatively high value of the Gini coefficient (0.54). The above information and indicators will be used to set the model's parameters and evaluate the solutions produced in our computational tests, as we discuss next.
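For reference, all three indicators can be computed directly from the distance matrix under the closest-assignment principle. The following is a minimal sketch with invented data; the actual input would be the 779 × 209 shortest-path distance matrix described above.

import numpy as np

pop = np.array([100, 80, 120, 60])             # users per demand node
dist = np.array([[0.2, 0.6], [0.4, 0.3],
                 [0.8, 0.2], [1.0, 0.7]])      # node-to-pharmacy distances (km)

closest = dist.argmin(axis=1)                  # closest-assignment principle
d_min = dist.min(axis=1)

alpha = lambda d: pop[d_min <= d].sum() / pop.sum()   # (i) coverage within d
d_avg = (pop * d_min).sum() / pop.sum()               # (ii) weighted avg distance
shares = np.bincount(closest, weights=pop, minlength=dist.shape[1])
s = np.sort(shares)                                   # (iii) Gini of market shares
n = len(s)
gini = (2 * np.arange(1, n + 1) - n - 1) @ s / (n * s.sum())

print(alpha(0.5), d_avg, gini)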

4 Computational Experiments
The application aims at finding alternative regulatory schemes for the location of pharmacies in Pamplona, guaranteeing good accessibility conditions to users, possibly replicating those in the AS-IS scenario (see Table 1), under different settings of the dispersion criterion. This way, we expect to observe lower values of the Gini coefficient in the obtained solutions, that is, fairer competition in terms of market share distribution. In order to test model (M2), we consider three service levels ($|K| = 3$) and two different sets of covering radii $r_k$: (i) $r_k \in \{0.1, 0.5, 1.0\}$ km (Experiment A); (ii) $r_k \in \{0.3, 0.5, 1.0\}$ km (Experiment B). We only change the first radius to better highlight its impact on the results. The corresponding percentages $\alpha_k$ were set, for each radius, as in Table 1. Note that lower values of the first covering radius correspond to more users-oriented policies, since accessibility is maximized within smaller distances. Besides, we set the weighting factors in objective function (6) as $\omega_k = \{3, 2, 1\}$; this setting prioritizes covering within smaller radii, again reflecting user-oriented policies. Different solutions are produced by varying the minimum dispersion distance $\bar{c}$. In particular, we consider $\bar{c} \in \{0.00, 0.15, 0.20, 0.25, 0.30, 0.35, 0.40, 0.45, 0.50\}$ km. The experiments are performed using IBM ILOG CPLEX v12.1 on an 11th Gen Intel(R) Core(TM)


i7-1165G7 CPU at 2.80 GHz, with 16 GiB of RAM, running the Windows 10 64-bit operating system. We recall that model (M2) is, in fact, a bi-objective model. We solved it using the staticLex function available in CPLEX, that is, performing lexicographic optimization with the two criteria ordered as follows: minimizing (6) first, and then optimizing (1).
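The staticLex function is specific to CPLEX. With other solvers, the same lexicographic scheme can be emulated in two stages: optimize (6), freeze its optimal value as a constraint, and then optimize (1). A self-contained toy sketch of this mechanism (the two coupling constraints merely stand in for (3)-(5) and (7)):

import pulp

m = pulp.LpProblem("lex_stage1", pulp.LpMinimize)
phi = pulp.LpVariable.dicts("phi", range(2), lowBound=0)
y = pulp.LpVariable.dicts("y", range(3), cat="Binary")
omega = [3, 2]

# Toy coupling constraints standing in for (3)-(5) and (7)
m += phi[0] >= 10 - 8 * (y[0] + y[1])
m += phi[1] >= 10 - 8 * (y[1] + y[2])

# Stage 1: minimize the weighted uncovered population, objective (6)
m += pulp.lpSum(omega[k] * phi[k] for k in range(2))
m.solve(pulp.PULP_CBC_CMD(msg=False))
best = pulp.value(m.objective)

# Stage 2: keep (6) at its optimum, then optimize the facility count (1)
m += pulp.lpSum(omega[k] * phi[k] for k in range(2)) <= best
m.sense = pulp.LpMaximize              # or LpMinimize for the lower bound
m.setObjective(pulp.lpSum(y[j] for j in range(3)))
m.solve(pulp.PULP_CBC_CMD(msg=False))
print(int(pulp.value(m.objective)))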

4.1 Results
The obtained results are summarized in Figs. 2, 3, 4 and 5, which report the variation of four different indicators with the imposed dispersion criterion $\bar{c}$. In particular: (i) Fig. 2 displays the value of objective function (6) for each service level $k \in K$ ($\phi_k$), thus giving an indication of the ability of the model to replicate the current users' accessibility conditions; (ii) Fig. 3 reports the number of located pharmacies; (iii) Fig. 4 depicts the average users' accessibility; (iv) Fig. 5 finally shows the value of the Gini coefficient calculated in the produced solutions. In each figure, results are displayed separately for the two considered triplets of covering radii. For clarity, three lines are presented in Fig. 2, corresponding to the values of $\phi_k$ for each service level. In the other figures, the values of the reported indicators are displayed for three different cases: minimization (Min) and maximization (Max) of objective function (1), and the current scenario (AS-IS). Focusing on Fig. 2, it is possible to notice that the model effectively reproduces the users' accessibility conditions of the current scenario. Indeed, the values of the uncovered population are relatively low, being approximately equal to 5% and 20% for Experiments A and B, respectively, for the highest value of the dispersion criterion $\bar{c}$. Moreover, within each experiment, the percentage of uncovered population is higher for the first service level, i.e., $k = 1$. This testifies that it is critical to achieve high

Fig. 2 Uncovered population w.r.t. the current scenario by service level (i.e., values of $\phi_k$ in objective function (6))


Fig. 3 Number of located pharmacies (i.e., objective function (1))

Fig. 4 Average users' accessibility distance ($d_{avg}$ [km])

Fig. 5 Gini coefficient


demand coverage within smaller distances as the dispersion criterion $\bar{c}$ increases, due to the lower number of located pharmacies. In fact, as shown in Fig. 3, the need to ensure higher territorial dispersion reduces the number of active services. The difference between the upper and lower bounds on their number, respectively obtained by maximizing and minimizing objective function (1), is relevant for lower values of $\bar{c}$ but narrows as the dispersion increases, reaching the same value for $\bar{c} = 0.5$ km. In practice, the range of available feasible solutions meeting the imposed geographical criteria shrinks significantly. In general, this figure highlights that similar results in population coverage can be obtained with a lower number of (more dispersed) pharmacies compared to the AS-IS scenario. This evidence is confirmed by the average users' accessibility distances displayed in Fig. 4. Indeed, the worsening w.r.t. the AS-IS scenario is acceptable, as the average distance increases from 0.43 to 0.52 km in the worst case (Experiment A, $\bar{c} = 0.5$ km). Of course, the curve yielded by the maximization of objective function (1) “dominates” the other, as the number of located pharmacies is higher; nevertheless, these differences are not particularly relevant when looking at the values involved. The Gini coefficient provides another perspective in the analysis. Figure 5 indicates that, as $\bar{c}$ increases, the Gini coefficient decreases, reducing from 0.53 in the AS-IS scenario to values in the range 0.34–0.36 for both sets of experiments. This result confirms that the dispersion criterion contributes to reaching fairer competition even in highly saturated markets (i.e., when maximizing objective function (1)). The results confirm that the proposed model is an appropriate tool to support authorities in defining a regulatory scheme that guarantees accessibility levels for users and fair competition among service providers. In particular, they suggest that, with high values of the minimum distance between pairs of facilities, it is possible to ensure more equal market shares while guaranteeing good accessibility conditions. On a final note, we underline that the model was solved in very short computing times, equal on average to 1.39 s.

5 Conclusions
In this work, we propose a general mathematical model to support authorities in regulating the location of facilities providing services of general economic interest. The model aims at finding the number and the positions of such facilities by considering users-oriented and providers-oriented spatial criteria simultaneously. For the former, we refer to a spatial accessibility criterion that guarantees population coverage within different radii/service levels; for the latter, we consider a dispersion criterion imposing a minimum distance between pairs of active facilities. We test the model by proposing a different regulatory scheme for the network of retail pharmacies in the city of Pamplona (Spain). The results from our computational experiments demonstrate the relevance of considering the above-mentioned criteria simultaneously. Indeed, the model produces solutions that replicate the current (and favorable) users' accessibility conditions while ensuring fairer competition in the market.

References
1. Barbarisi, I., Bruno, G., Diglio, A., Elizalde, J., Piccolo, C.: A spatial analysis to evaluate the impact of deregulation policies in the pharmacy sector: evidence from the case of Navarre. Health Policy 123(11), 1108–1115 (2019)
2. Batta, R., Lejeune, M., Prasad, S.: Public facility location using dispersion, population, and equity criteria. Eur. J. Oper. Res. 234(3), 819–829 (2014)
3. Bruno, G., Cavola, M., Diglio, A., Elizalde, J., Piccolo, C.: A locational analysis of deregulation policies in the Spanish retail pharmaceutical sector. Socio-Economic Planning Sciences, 101233 (2022)
4. Bruno, G., Cavola, M., Diglio, A., Piccolo, C., Pipicelli, E.: Strategies to reduce postal network access points: from demographic to spatial distribution criteria. Utilities Policy 69, 101189 (2021)
5. Church, R., ReVelle, C.: The maximal covering location problem. In: Papers of the Regional Science Association, vol. 32, pp. 101–118. Springer (1974)
6. Eiselt, H.A., Marianov, V.: Foundations of Location Analysis (2011)
7. European Commission: A Quality Framework for Services of General Interest in Europe (2011). https://www.europarl.europa.eu/meetdocs/2009_2014/documents/com/com_com(2011)0900_/com_com(2011)0900_en.pdf (Accessed on February 8, 2022)
8. Laporte, G., Nickel, S., da Gama, F.S.: Location Science. Springer Nature (2019)
9. Miliotis, P., Dimopoulou, M., Giannikos, I.: A hierarchical location model for locating bank branches in a competitive environment. Int. Trans. Oper. Res. 9(5), 549–565 (2002)

Author Index

A
Aghelinejad, M. M., 329
Ambrogio, Giuseppina, 205
Amodeo, Lionel, 169, 329
Ansuini, Giulia, 159
Ataç, Selin, 179
Ausonio, Elena, 191

B
Bagnerini, Patrizia, 191
Baldassarre, S., 355
Barbagallo, Annamaria, 15
Belcore, Orlando Marco, 241
Beraldi, Patrizia, 135
Bianconcini, Tommaso, 265
Bierlaire, Michel, 277
Boccia, Maurizio, 291
Bortolomiol, Stefano, 179
Brandimarte, Paolo, 147
Bravi, Luca, 61
Brdar, Sanja, 179, 277
Bruno, G., 355

C
Caraballo, M. A., 27
Cavola, M., 355
Cervellera, Cristiano, 73, 253
Ceselli, Alberto, 85
Coelho, Leandro, 317
Conforti, Domenico, 205
Crnojević, Vladimir, 179

D
De Francesco, Carla, 227
De Giovanni, Luigi, 227
De Maio, Annarita, 217
Di Gangi, Massimo, 241

F
Fadda, Edoardo, 147
Fargetta, Georgia, 3
Ferreira, I., 329
Frangioni, Antonio, 159

G
Gaggero, Mauro, 191
Galli, Laura, 159
Garcia-Bernabeu, Ana, 109
Gioia, Daniele Giovanni, 147
Godichaud, Matthieu, 169
Grimard, Tatiana, 339
Gualtieri, Marco, 265
Guarino Lo Bianco, Serena, 15
Guido, Rosita, 205

H
Harbourne-Thomas, Andrew, 61
Hilario-Caballero, Adolfo, 109
Hladík, Milan, 123

K
Khodaparasti, Sara, 135

L
Lalić, Maksim, 277
Lampariello, Lorenzo, 37
Lebel, Luc, 339
Lehoux, Nadia, 339
Liberti, Leo, 97
Lorenzo, David Di, 265
Lori, Alessandro, 61, 265
Luković, Ivan, 277

M
Mármol, A. M., 27
Macciò, Danilo, 73, 253
Manca, Benedetto, 97
Mancuso, Andrea, 291
Marko, Oskar, 179
Masone, Adriano, 291
Messina, Francesco, 291
Mezatio, E. P., 329
Mitchell, Peter, 61
Monroy, L., 27
Musmanno, Roberto, 217

N
Nardini, Giovanni, 159

O
Obrenović, Nikola, 179, 277
Oustry, Antoine, 97

P
Passacantando, Mauro, 47
Piermarini, Christian, 303
Pipicelli, E., 355
Poirion, Pierre-Louis, 97
Polimeni, Antonio, 241
Pour-Massahian-Tafti, Meisam, 169

R
Raciti, Fabio, 47
Raiconi, Paolo, 265
Rebora, Francesco, 73, 253
Roma, Massimo, 303

S
Sagratella, Simone, 37
Salas-Molina, Francisco, 109
Salcedo, José Vicente, 109
Salti, Samuele, 61
Sambo, Francesco, 61
Sasso, Valerio Giuseppe, 37
Scrimali, Laura, 3
Sforza, Antonio, 291
Skrame, Aurora, 217
Stea, Giovanni, 159
Sterle, Claudio, 291

T
Taccari, Leonardo, 61, 265
Togni, Elia, 85

V
Vié, Marie-Sklaerder, 317
Vocaturo, Francesca, 217

Z
Zangara, Gabriele, 205
Zapata, A., 27
Zufferey, Nicolas, 317