Modelling, Computation and Optimization in Information Systems and Management Sciences (Lecture Notes in Networks and Systems)
ISBN 3030926656, 9783030926656

The proceedings consist of 34 papers which have been submitted to the 4th International Conference on Modelling, Computation and Optimization in Information Systems and Management Sciences (MCO 2021).


English · Pages: 416 [413] · Year: 2021


Table of contents :
Preface
Organization
Conference Chairs
Program Chairs
Honorary Chair
Organizing Chairs
Publicity Chair
Sponsoring Institutions
Contents
Optimization of Complex Systems - Models and Methods
An Interior Proximal Method with Proximal Distances for Quasimonotone Equilibrium Problems
1 Introduction
2 Preliminaries
3 Proximal Method
4 Rate of Convergence
References
Beyond Pointwise Submodularity: Non-monotone Adaptive Submodular Maximization Subject to Knapsack and k-System Constraints
1 Introduction
2 Preliminaries
2.1 Items and States
2.2 Policies and Problem Formulation
2.3 Adaptive Submodularity and Pointwise Submodularity
3 Knapsack Constraint
3.1 Algorithm Design
3.2 Performance Analysis
4 k-System Constraint
4.1 Algorithm Design
4.2 Performance Analysis
References
Optimizing a Binary Integer Program by Identifying Its Optimal Core Problem - A New Optimization Concept Applied to the Multidimensional Knapsack Problem
1 Introduction
2 The Core Problem
2.1 Background
2.2 Creating Core Problems in LS-CORE-LP Heuristic
3 Neighbouring Core Problems
3.1 Neighbouring Solutions and Neighbouring CPs
3.2 Defining Neighbouring Core Problems
4 The LS-CORE-LP Heuristic Algorithm
4.1 A Generic LS-CORE-LP Algorithm
4.2 Defining Neighbouring CPs
4.3 LS-CORE-LP for MKP
5 Experiments
6 Conclusion and Future Work
References
A Comparison Between Optimization Tools to Solve Sectorization Problem
1 Introduction
2 Models Description
2.1 SC: SO Model to Minimize Compactness
2.2 SE: SO Model to Maximize Equilibrium
2.3 MCE: MO Model to Minimize Compactness and to Maximize Equilibrium
3 Solution Approaches
4 Experimental Results
5 Conclusion and Future Work
References
Exploiting Demand Prediction to Reduce Idling Travel Distance for Online Taxi Scheduling Problem
1 Introduction
2 Problem Description and Formulation
2.1 Problem Description
2.2 Problem Formulation
3 Online Scheduling Algorithm Based on Predictive Demand Information for Solving O-TSPP Problem
3.1 Prediction-Based Idling Taxi Direction
4 Experiments
4.1 Instances and Settings
4.2 Taxi Demand Prediction
4.3 The Efficiency of the Direction Based on the Taxi Demand Prediction
5 Conclusions
References
Algorithms for Flow Shop with Job–Dependent Buffer Requirements
1 Introduction and Description of the Problem
2 Algorithms
2.1 Barrier Heuristic
2.2 Bin-Packing Approach
3 Integer Formulation
4 Computational Experiments
5 Conclusion
References
Traveling Salesman Problem with Truck and Drones: A Case Study of Parcel Delivery in Hanoi
1 Introduction
2 Parallel Drone Scheduling Traveling Salesman Problem
2.1 Problem Definition
2.2 Time-Dependent Speed Model
3 Heuristic Algorithm
3.1 Main Algorithm
3.2 Customers Partitioning
3.3 Subtours Improvement
4 Experimental Results
4.1 Instances and Parameters Setting
4.2 Results and Discussions
5 Conclusions
References
A New Mathematical Model for Hybrid Flow Shop Under Time-Varying Resource and Exact Time-Lag Constraints
1 Introduction
2 Representation Comparison
2.1 Discrete-Time (DT) Representation
2.2 Discrete Continuous (DC) Representation
3 Mathematical Models
3.1 Problem Description
3.2 DT Model
3.3 DC Model
4 Numerical Experimentation
4.1 Instance Generation
4.2 Test Protocol
4.3 Result
5 Conclusion
References
Maximizing Achievable Rate for Incremental OFDM-Based Cooperative Communication Systems with Out-of-Band Energy Harvesting Technique
1 Introduction
2 Problem Formulation and Solution
3 Simulation Results
4 Conclusions
References
Estimation and Compensation of Doppler Frequency Offset in Millimeter Wave Mobile Communication Systems
1 Introduction
2 MIMO-OFDM Model Description and CFO Estimation in the Frequency Domain
2.1 OFDM System Model
2.2 MIMO System Model
2.3 CFO Estimation Technique Using Pilot Tones
3 Simulation Results
4 Conclusion
References
Bayesian Optimization Based on Simulation Conditionally to Subvariety
1 Introduction
2 Simulation Conditionally to Subvariety
2.1 Elements on Interval Analysis
2.2 Formal Description of the Simulation Problem
2.3 A Naive Dichotomous Approach for Sampling
2.4 Generic Method for Conditional Sampling
3 Black-Box Optimization
4 Examples
4.1 Tests of the Sampler
4.2 Black-Box Function Optimization
5 Conclusion
References
Optimal Operation Model of Heat Pump for Multiple Residences
1 Introduction
2 Previous Studies
2.1 Introduction of Previous Studies
2.2 Unit Commitment Problem
3 Problem Description
3.1 Notations
3.2 Overview of the Model
3.3 Formulation
4 Numerical Experiments
4.1 Setting Conditions
4.2 Results
4.3 Consideration
5 Conclusion
References
Revenue Management Problem via Stochastic Programming in the Aviation Industry
1 Introduction
2 Problem Description
2.1 Itinerary and Flight Leg
2.2 Reservation
2.3 Overbooking
3 Previous Study
4 Formulation of the New Model
5 Numerical Experiments
5.1 Littlewood Model Considering Cancellation
5.2 Deterministic Model
5.3 Data Setting
6 Conclusion
References
Stochastic Programming Model for Lateral Transshipment Considering Rentals and Returns
1 Introduction
2 Multi-period Inventory Transshipment Problem
2.1 Problem Description
2.2 Notations
2.3 Formulation
3 Moment Matching Method
3.1 Overview of the Moment Matching Method
3.2 Notation
3.3 Formulation
4 Numerical Experiments
5 Conclusion
References
Multi-objective Sustainable Process Plan Generation for RMS: NSGA-III vs New NSGA-III
1 Introduction
2 Problem Description and Mathematical Formulation
2.1 Problem Description
2.2 Mathematical Formulation
3 Proposed Evolutionary Approaches
3.1 Non-dominated Sorting Genetic Algorithm III (NSGA-III)
3.2 New Non-dominated Sorting Genetic Algorithm III (New NSGA-III)
4 Experimental Results and Analyses
5 Conclusions and Future Work Directions
References
Clarke Subdifferential, Pareto-Clarke Critical Points and Descent Directions to Multiobjective Optimization on Hadamard Manifolds
1 Introduction
2 Preliminaries
3 Fréchet and Clarke Subdifferential
4 Pareto Efficient Solutions, Pareto-Clarke Critical Points and Descent Directions for Multiobjective Optimization
References
Carbon Abatement in the Petroleum Sector: A Supply Chain Optimization-Based Approach
1 Introduction
2 Literature Review
3 Problem Statement and Assumptions
3.1 Problem Definition
3.2 Assumptions
4 Mathematical Model
4.1 Model Elements
4.2 Model Formulation
4.3 Solution Method
5 Experimentation and Results
5.1 Baseline Scenario
5.2 Carbon Capture and Storage
6 Conclusion
Appendix
References
Bi-objective Model for the Distribution of COVID-19 Vaccines
1 Introduction
2 Problem Description
3 Mathematical Formulation
4 Results and Discussion
5 Conclusion
References
Machine Learning - Algorithms and Applications
DCA for Gaussian Kernel Support Vector Machines with Feature Selection
1 Introduction
2 Solution Method by DC Programming and DCA
2.1 Introduction to DC Programming and DCA
2.2 DC Reformulation of the Binary Formulation (4)
2.3 Standard DCA for Solving the DC Program (7)
3 Numerical Experiments
4 Conclusions
References
Training Support Vector Machines for Dealing with the ImageNet Challenging Problem
1 Introduction
2 Support Vector Machines
3 Parallel Multi-class Support Vector Machines for the Large Number of Classes
4 Experimental Results
4.1 ILSVRC 2010 Dataset
4.2 Tuning Parameter
4.3 Classification Results
5 Conclusion and Future Works
References
The Effect of Machine Learning Demand Forecasting on Supply Chain Performance - The Case Study of Coffee in Vietnam
1 Introduction
2 Literature Review
2.1 Demand Forecasting Models
2.2 Performance Metrics
3 Methodology
3.1 Data
3.2 Supply Chain Performance Metrics
4 Results and Discussion
4.1 Comparison of Operational Measures Between Traditional and Machine Learning Forecasting Methods
4.2 Comparison of Financial Measures Between Traditional and Machine Learning Forecasting Methods
5 Conclusion
References
Measuring Semantic Similarity of Vietnamese Sentences Based on Lexical and Distribution Similarity
1 Introduction
2 Related Work
2.1 Word Embedding Models
2.2 PhoBERT Model
2.3 Sentence Similarity Measure Based on Deep-Learning
3 Proposed Method
3.1 Lexical-Based Measures
3.2 LCS Algorithm
3.3 Sentence Similarity Measure Based on LCS
3.4 Sentence Similarity Measure Based on PhoBERT
4 Construct a Vietnamese Dataset
5 Experiments
6 Conclusion and Future Work
References
ILSA Data Analysis with R Packages
1 Introduction
2 Data Download and Preparation
3 Statistical Data Analysis
3.1 Descriptive Statistics
3.2 Correlation
3.3 Regression
3.4 Multilevel Linear Modeling
4 Conclusions
References
An Ensemble Learning Approach for Credit Scoring Problem: A Case Study of Taiwan Default Credit Card Dataset
1 Introduction
2 Solution Methods
2.1 Existing Methods
2.2 Proposed Method
3 Experimental Settings
3.1 Dataset
3.2 Preprocessing Data
3.3 Processing Pipeline
4 Results
5 Conclusions
References
A New Approach to the Improvement of the Federated Deep Learning Model in a Distributed Environment
1 Introduction
2 Background Knowledge
2.1 Convolutional Neural Network
2.2 Transfer Learning
2.3 Federated Learning
3 The Improved Method to Aggregate the Set of Weights
3.1 Idea
3.2 Mathematical Model
4 Experiment
4.1 Experimental Model
4.2 Experimental Data, Program and Process
5 Discussion and Evaluation
5.1 Experiment Result Evaluation
5.2 Discussion
6 Conclusion
References
Optimal Control in Learning Neural Network
1 Introduction
1.1 Our Approach
1.2 Meaning of Our Approach and Contributions
1.3 Related Works
2 Formulation of Optimal Control Problem
3 Approximate Dynamic Programming
4 Numerical Algorithm
4.1 Numerical Example
References
Deep Networks for Monitoring Waterway Traffic in the Mekong Delta
1 Introduction
2 Training Deep Networks for Monitoring Waterway Traffic Means in the Mekong Delta
2.1 Data Collection of Waterway Traffic Means
2.2 Deep Networks for Monitoring Waterway Traffic Means
3 Experimental Results
3.1 Performance Measurements
3.2 Results
4 Conclusion and Future Works
References
Training Deep Network Models for Fingerprint Image Classification
1 Introduction
2 Training Deep Network Models for Classifying Fingerprint Images
2.1 Dataset Collection
2.2 Deep Networks
2.3 Training Deep Network Models for Classifying Fingerprint Images
3 Experimental Results
3.1 Classification Results
4 Conclusions and Future Work
References
An Assessment of the Weight of the Experimental Component in Physics and Chemistry Classes
1 Introduction
2 Theoretical Framework
3 Methods
4 Case Study
4.1 An Entropic Approach to Data Attainment and Processing
4.2 Artificial Neural Network Training and Testing Procedures
5 Conclusions and Future Work
References
The Multi-objective Optimization of the Convolutional Neural Network for the Problem of IoT System Attack Detection
1 Introduction
2 Related Works
3 Method Development
3.1 Ideas
3.2 Pareto Multi-objective Optimization
3.3 IoT System Attack Detection Using CNN
3.4 Improvement of CNN Network Structure According to Multi-objective Method
4 Experiment
4.1 Experimental Model
4.2 Experimental Program and Data
4.3 Experimental Results and Evaluation
5 Conclusion
References
What to Forecast When Forecasting New Covid-19 Cases? Jordan and the United Arab Emirates as Case Studies
1 Introduction
2 Related Work
3 LSTM
4 Experiment
4.1 Data
4.2 Exploratory Data Analysis
4.3 LSTM Configuration
5 Comparative Study
6 Conclusion and Future Research
References
Cryptography
Solving a Centralized Dynamic Group Key Management Problem by an Optimization Approach
1 Introduction
2 Centralized Group Key Management
2.1 Terms and Definitions
2.2 Logical Key Hierarchy
3 The 2-Stage Optimization Approach to the Problem of Updating Group Key in the LKH Structure
4 Conclusion and Future Works
References
4 × 4 Recursive MDS Matrices Effective for Implementation from Reed-Solomon Code over GF(q) Field
1 Introduction
2 Preliminaries and Related Works
2.1 RS Codes
3 4 × 4 Recursive MDS Matrices Effective for Implementation from Reed-Solomon Code over a General Field GF(q)
4 Conclusion
References
Implementation of XTS - GOST 28147-89 with Pipeline Structure on FPGA
1 Introduction
2 The GOST 28147-89 Algorithm
3 The Proposed Pipelined Implementation
4 The Proposed XTS Block Cipher Mode for GOST 28147-89
5 FPGA Implementation Results
6 Conclusion
References
Author Index

Lecture Notes in Networks and Systems 363

Hoai An Le Thi · Tao Pham Dinh · Hoai Minh Le (Editors)

Modelling, Computation and Optimization in Information Systems and Management Sciences
Proceedings of the 4th International Conference on Modelling, Computation and Optimization in Information Systems and Management Sciences, MCO 2021

Lecture Notes in Networks and Systems Volume 363

Series Editor Janusz Kacprzyk, Systems Research Institute, Polish Academy of Sciences, Warsaw, Poland Advisory Editors Fernando Gomide, Department of Computer Engineering and Automation—DCA, School of Electrical and Computer Engineering—FEEC, University of Campinas— UNICAMP, São Paulo, Brazil Okyay Kaynak, Department of Electrical and Electronic Engineering, Bogazici University, Istanbul, Turkey Derong Liu, Department of Electrical and Computer Engineering, University of Illinois at Chicago, Chicago, USA; Institute of Automation, Chinese Academy of Sciences, Beijing, China Witold Pedrycz, Department of Electrical and Computer Engineering, University of Alberta, Alberta, Canada; Systems Research Institute, Polish Academy of Sciences, Warsaw, Poland Marios M. Polycarpou, Department of Electrical and Computer Engineering, KIOS Research Center for Intelligent Systems and Networks, University of Cyprus, Nicosia, Cyprus Imre J. Rudas, Óbuda University, Budapest, Hungary Jun Wang, Department of Computer Science, City University of Hong Kong, Kowloon, Hong Kong

The series “Lecture Notes in Networks and Systems” publishes the latest developments in Networks and Systems—quickly, informally and with high quality. Original research reported in proceedings and post-proceedings represents the core of LNNS. Volumes published in LNNS embrace all aspects and subfields of, as well as new challenges in, Networks and Systems. The series contains proceedings and edited volumes in systems and networks, spanning the areas of Cyber-Physical Systems, Autonomous Systems, Sensor Networks, Control Systems, Energy Systems, Automotive Systems, Biological Systems, Vehicular Networking and Connected Vehicles, Aerospace Systems, Automation, Manufacturing, Smart Grids, Nonlinear Systems, Power Systems, Robotics, Social Systems, Economic Systems and other. Of particular value to both the contributors and the readership are the short publication timeframe and the world-wide distribution and exposure which enable both a wide and rapid dissemination of research output. The series covers the theory, applications, and perspectives on the state of the art and future developments relevant to systems and networks, decision making, control, complex processes and related areas, as embedded in the fields of interdisciplinary and applied sciences, engineering, computer science, physics, economics, social, and life sciences, as well as the paradigms and methodologies behind them. Indexed by SCOPUS, INSPEC, WTI Frankfurt eG, zbMATH, SCImago. All books published in the series are submitted for consideration in Web of Science.

More information about this series at https://link.springer.com/bookseries/15179

Hoai An Le Thi · Tao Pham Dinh · Hoai Minh Le (Editors)

Modelling, Computation and Optimization in Information Systems and Management Sciences
Proceedings of the 4th International Conference on Modelling, Computation and Optimization in Information Systems and Management Sciences - MCO 2021

Editors

Hoai An Le Thi
Computer Science and Applications Department, LGIPM, University of Lorraine, Metz Cedex, France
Institut Universitaire de France (IUF), Paris, France

Tao Pham Dinh
Laboratory of Mathematics, National Institute for Applied Sciences - Rouen, Saint-Etienne-du-Rouvray Cedex, France

Hoai Minh Le
Computer Science and Applications Department, LGIPM, University of Lorraine, Metz Cedex, France

ISSN 2367-3370 ISSN 2367-3389 (electronic) Lecture Notes in Networks and Systems ISBN 978-3-030-92665-6 ISBN 978-3-030-92666-3 (eBook) https://doi.org/10.1007/978-3-030-92666-3 © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2022 This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. This Springer imprint is published by the registered company Springer Nature Switzerland AG The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland

Preface

This volume contains 34 selected full papers (from 70 submissions) presented at the MCO 2021 conference, held on December 11–13, 2021, in Hanoi, Vietnam. MCO 2021 was the fourth event in the series of conferences on Modelling, Computation and Optimization in Information Systems and Management Sciences, traditionally organized by LITA, the Laboratory of Theoretical and Applied Computer Science (LITA has now become the Computer Science and Applications Department of LGIPM), University of Lorraine, in Metz, France. Exceptionally, MCO 2021 was co-organized by the Computer Science and Applications Department, LGIPM, University of Lorraine, France, and the Academy of Cryptography Techniques, Vietnam, in collaboration with the Data Science and Optimization of Complex Systems (DataOpt) Laboratory, International School, Vietnam National University, Hanoi, Vietnam.

The first conference, MCO 2004, brought together 100 scientists from 21 countries. It included 8 invited plenary speakers and 70 papers presented and published in the proceedings, "Modelling, Computation and Optimization in Information Systems and Management Sciences," edited by Le Thi Hoai An and Pham Dinh Tao, Hermes Sciences Publishing, June 2004, 668 pages. Two special issues including 22 papers were published in the European Journal of Operational Research and in the Journal of Global Optimization.

The second event, MCO 2008, gathered 6 invited plenary speakers and more than 120 scientists from 27 countries. The scientific program consisted of 6 plenary lectures and the oral presentations of 68 selected full papers as well as 34 selected abstracts covering all main topic areas. Its proceedings were edited by Le Thi Hoai An, Pascal Bouvry and Pham Dinh Tao in Communications in Computer and Information Science, Springer (618 pages). Two special issues were published in Computational Optimization and Applications and in Advances in Data Analysis and Classification.

The third edition, MCO 2015, was attended by more than 130 scientists from 35 countries. The scientific program included 5 plenary lectures and the oral presentation of 86 selected full papers and several selected abstracts. The proceedings were edited by Le Thi Hoai An, Pham Dinh Tao and Nguyen Ngoc Thanh in Advances in Intelligent Systems and Computing, Springer (2 volumes for a total of 1000 pages).


MCO 2015, the biggest MCO edition, was marked by the celebration of the 30th birthday of DC programming and DCA, an efficient approach in the nonconvex programming framework. One special issue in Mathematical Programming Series B was dedicated to DC programming and DCA, and a second special issue was published in Computers and Operations Research.

MCO 2021 covers, traditionally, several fields of management science and information systems: computer science, information technology, mathematical programming, optimization and operations research and related areas. It allows researchers and practitioners to clarify the recent developments in models and solutions for decision making in engineering and information systems, and to interact and discuss how to reinforce the role of these fields in potential applications of great impact. The conference program included 3 plenary lectures by world-class speakers and the oral presentation of 34 selected papers as well as several selected abstracts.

This book covers theoretical and algorithmic aspects as well as practical issues connected with modelling, computation and optimization in information systems and management science. Each paper was peer-reviewed by at least two members of the International Program Committee and the International Reviewer Board. The book is composed of 3 parts: optimization of complex systems - models and methods; machine learning - algorithms and applications; and cryptography. We hope that researchers and practitioners can find here many inspiring ideas as well as useful tools and techniques for their work.

We would like to thank all those who contributed to the success of the conference and to this book of proceedings. In particular, we would like to express our gratitude to the members of the International Program Committee as well as the reviewers for their hard work in the review process, which helped us to guarantee the highest quality of the selected papers for the conference. Thanks are also due to the plenary lecturers for their interesting and informative talks of a world-class standard. We wish to especially thank all members of the Organizing Committee for their excellent work to make the conference a success. Our special thanks go to all the authors for their valuable contributions and to the other participants who enriched the success of the conference. Finally, we cordially thank Springer, especially Prof. Janusz Kacprzyk and Dr. Thomas Ditzinger, for their support in publishing this book.

October 2021

Hoai An Le Thi
Tao Pham Dinh
Hoai Minh Le

Organization

MCO 2021 is co-organized by the Computer Science and Applications Department, LGIPM, University of Lorraine, France, and Academy of Cryptography Techniques, Vietnam, in collaboration with the Data Science and Optimization of Complex Systems (DataOpt) Laboratory, International School, Vietnam National University, Hanoi, Vietnam.

Conference Chairs

Hoai An Le Thi, University of Lorraine, France
Huu Hung Nguyen, Academy of Cryptography Techniques, Vietnam

Program Chairs

Hoai An Le Thi, University of Lorraine, France
Tao Pham Dinh, National Institute of Applied Sciences of Rouen, France

Honorary Chair

Nam Hai Nguyen, Vietnam Government Information Security Commission, Vietnam

Organizing Chairs

Hoai Minh Le, University of Lorraine, France
Thi Dao Vu, Academy of Cryptography Techniques, Vietnam

Publicity Chair

Vinh Thanh Ho, University of Lorraine, France


Sponsoring Institutions

DCA Solutions (Vietnam)
Springer

Contents

Optimization of Complex Systems - Models and Methods

An Interior Proximal Method with Proximal Distances for Quasimonotone Equilibrium Problems . . . 3
Erik Alex Papa Quiroz

Beyond Pointwise Submodularity: Non-monotone Adaptive Submodular Maximization Subject to Knapsack and k-System Constraints . . . 16
Shaojie Tang

Optimizing a Binary Integer Program by Identifying Its Optimal Core Problem - A New Optimization Concept Applied to the Multidimensional Knapsack Problem . . . 28
Sameh Al-Shihabi

A Comparison Between Optimization Tools to Solve Sectorization Problem . . . 40
Aydin Teymourifar, Ana Maria Rodrigues, José Soeiro Ferreira, and Cristina Lopes

Exploiting Demand Prediction to Reduce Idling Travel Distance for Online Taxi Scheduling Problem . . . 51
Van Son Nguyen, Quang Dung Pham, and Van Hieu Nguyen

Algorithms for Flow Shop with Job-Dependent Buffer Requirements . . . 63
Alexander Kononov, Julia Memar, and Yakov Zinder

Traveling Salesman Problem with Truck and Drones: A Case Study of Parcel Delivery in Hanoi . . . 75
Quang Huy Vuong, Giang Thi-Huong Dang, Trung Do Quang, and Minh-Trien Pham

A New Mathematical Model for Hybrid Flow Shop Under Time-Varying Resource and Exact Time-Lag Constraints . . . 87
Quoc Nhat Han Tran, Nhan Quy Nguyen, Hicham Chehade, Farouk Yalaoui, and Frédéric Dugardin

Maximizing Achievable Rate for Incremental OFDM-Based Cooperative Communication Systems with Out-of-Band Energy Harvesting Technique . . . 100
You-Xing Lin, Tzu-Hao Wang, Chun-Wei Wu, and Jyh-Horng Wen

Estimation and Compensation of Doppler Frequency Offset in Millimeter Wave Mobile Communication Systems . . . 112
Van Linh Dinh and Van Yem Vu

Bayesian Optimization Based on Simulation Conditionally to Subvariety . . . 120
Frédéric Dambreville

Optimal Operation Model of Heat Pump for Multiple Residences . . . 133
Yusuke Kusunoki, Tetsuya Sato, and Takayuki Shiina

Revenue Management Problem via Stochastic Programming in the Aviation Industry . . . 145
Mio Imai, Tetsuya Sato, and Takayuki Shiina

Stochastic Programming Model for Lateral Transshipment Considering Rentals and Returns . . . 158
Keiya Kadota, Tetsuya Sato, and Takayuki Shiina

Multi-objective Sustainable Process Plan Generation for RMS: NSGA-III vs New NSGA-III . . . 170
Imen Khettabi, Lyes Benyoucef, and Mohamed Amine Boutiche

Clarke Subdifferential, Pareto-Clarke Critical Points and Descent Directions to Multiobjective Optimization on Hadamard Manifolds . . . 182
Erik Alex Papa Quiroz, Nancy Baygorrea, and Nelson Maculan

Carbon Abatement in the Petroleum Sector: A Supply Chain Optimization-Based Approach . . . 193
Otman Abdussalam, Nuri Fello, and Amin Chaabane

Bi-objective Model for the Distribution of COVID-19 Vaccines . . . 208
Mohammad Amin Yazdani, Daniel Roy, and Sophie Hennequin

Machine Learning - Algorithms and Applications

DCA for Gaussian Kernel Support Vector Machines with Feature Selection . . . 223
Hoai An Le Thi and Vinh Thanh Ho

Training Support Vector Machines for Dealing with the ImageNet Challenging Problem . . . 235
Thanh-Nghi Do and Hoai An Le Thi

The Effect of Machine Learning Demand Forecasting on Supply Chain Performance - The Case Study of Coffee in Vietnam . . . 247
Thi Thuy Hanh Nguyen, Abdelghani Bekrar, Thi Muoi Le, and Mourad Abed

Measuring Semantic Similarity of Vietnamese Sentences Based on Lexical and Distribution Similarity . . . 259
Van-Tan Bui and Phuong-Thai Nguyen

ILSA Data Analysis with R Packages . . . 271
Laura Ringienė, Julius Žilinskas, and Audronė Jakaitienė

An Ensemble Learning Approach for Credit Scoring Problem: A Case Study of Taiwan Default Credit Card Dataset . . . 283
Duc Quynh Tran, Doan Dong Nguyen, Huu Hai Nguyen, and Quang Thuan Nguyen

A New Approach to the Improvement of the Federated Deep Learning Model in a Distributed Environment . . . 293
Duc Thuan Le, Van Huong Pham, Van Hiep Hoang, and Kim Khanh Nguyen

Optimal Control in Learning Neural Network . . . 304
Marta Lipnicka and Andrzej Nowakowski

Deep Networks for Monitoring Waterway Traffic in the Mekong Delta . . . 315
Thanh-Nghi Do, Minh-Thu Tran-Nguyen, Thanh-Tri Trang, and Tri-Thuc Vo

Training Deep Network Models for Fingerprint Image Classification . . . 327
Thanh-Nghi Do and Minh-Thu Tran-Nguyen

An Assessment of the Weight of the Experimental Component in Physics and Chemistry Classes . . . 338
Margarida Figueiredo, M. Lurdes Esteves, Humberto Chaves, José Neves, and Henrique Vicente

The Multi-objective Optimization of the Convolutional Neural Network for the Problem of IoT System Attack Detection . . . 350
Hong Van Le Thi, Van Huong Pham, and Hieu Minh Nguyen

What to Forecast When Forecasting New Covid-19 Cases? Jordan and the United Arab Emirates as Case Studies . . . 361
Sameh Al-Shihabi and Dana I. Abu-Abdoun

Cryptography

Solving a Centralized Dynamic Group Key Management Problem by an Optimization Approach . . . 375
Thi Tuyet Trinh Nguyen, Hoang Phuc Hau Luu, and Hoai An Le Thi

4 × 4 Recursive MDS Matrices Effective for Implementation from Reed-Solomon Code over GF(q) Field . . . 386
Thi Luong Tran, Ngoc Cuong Nguyen, and Duc Trinh Bui

Implementation of XTS - GOST 28147-89 with Pipeline Structure on FPGA . . . 392
Binh-Nhung Tran, Ngoc-Quynh Nguyen, Ba-Anh Dao, and Chung-Tien Nguyen

Author Index . . . 403

Optimization of Complex Systems - Models and Methods

An Interior Proximal Method with Proximal Distances for Quasimonotone Equilibrium Problems

Erik Alex Papa Quiroz

1 Universidad Nacional Mayor de San Marcos, Lima, Peru
[email protected], [email protected]
2 Universidad Privada del Norte, Lima, Peru

Abstract. We introduce an interior proximal point algorithm with proximal distances to solve quasimonotone equilibrium problems defined on convex sets. Under adequate assumptions, we prove that the sequence generated by the algorithm converges to a solution of the problem, and for a broad class of proximal distances the rate of convergence of the sequence is linear or superlinear.

Keywords: Proximal algorithms · Proximal distances · Quasimonotone bifunctions

1 Introduction

In this paper we consider the well-known Equilibrium Problem (EP): find x̄ ∈ C such that

    f(x̄, y) ≥ 0,  ∀y ∈ C,    (1)

where f : C × C → ℝ is a bifunction, C is a nonempty open convex set in the Euclidean space ℝⁿ and C̄ denotes the closure of C. The equilibrium problem is a general mathematical model which includes as particular cases minimization problems, variational inequality problems, monotone inclusion problems, saddle point problems, complementarity problems, vector minimization problems and Nash equilibrium problems with noncooperative games; see for example Blum and Oettli [3], Iusem and Sosa [4,5] and references therein.

There are several methods for solving (EP), for example, splitting proximal methods [8], hybrid extragradient methods [1], extragradient methods [9], double projection-type methods [15] and proximal point algorithms [6], among many others. In previous works the standard condition to guarantee the convergence of the algorithms to solve problem (1) is the monotonicity or the pseudomonotonicity of the bifunction f(·,·). However, to broaden the field of applications of the model, in 2017, Mallma et al. [7] introduced the quasimonotone condition in the assumptions of the proposed algorithm. In that paper, for k = 1, 2, ..., given x^{k−1} ∈ C, the authors considered the following main step: find x^k ∈ C and g^k ∈ ∂₂f(x^k, x^k) such that

    g^k + λ_k ∇₁d(x^k, x^{k−1}) = e^k,    (2)

where the error criteria satisfies Σ_{k=1}^{+∞} |⟨e^k, x^k⟩|/λ_k < +∞ and Σ_{k=1}^{+∞} ‖e^k‖/λ_k < +∞. They proved that the sequence generated by the method converges weakly to a solution of the (EP). Furthermore, if there exists an accumulation point of the sequence generated by the method which belongs to a certain subset of the solution set of (EP), then the sequence converges to a point of that solution subset. However, they did not report any rate of convergence results for the algorithm. This is the motivation of the present paper.

Because the above error criteria is not appropriate to obtain results about the rate of convergence of the algorithm, we consider in the present paper another error criteria, that is, we consider the following conditions:

    ‖e^k‖/λ_k ≤ η_k √(H(x^k, x^{k−1}))    (3)

    Σ_{k=1}^{+∞} η_k < +∞    (4)

where H is an induced proximal distance, see Definition 3 of Sect. 2. Observe that (3) and (4) are new in proximal point methods with proximal distances, but they are motivated by the work of Papa Quiroz and Cruzado [13]. We prove, under adequate assumptions on the (EP), the same convergence results found in the paper of Mallma et al. [7], and we add the linear or superlinear rate of convergence of the proposed algorithm; we consider this fact the principal contribution of the paper.

The organization of the paper is the following: In Sect. 2 we present the preliminaries, the concept of proximal and induced proximal distances, and we introduce the definition of H-linear and H-superlinear convergence. In Sect. 3, we present the interior proximal algorithm, the assumptions on the problem and the convergence result. In Sect. 4, we present the rate of convergence of the algorithm.

Supported by Universidad Privada del Norte, Peru.

2 Preliminaries

Throughout this paper ℝⁿ is the Euclidean space endowed with the inner product ⟨x, y⟩ = Σ_{i=1}^{n} x_i y_i, where x = (x₁, x₂, ..., x_n) and y = (y₁, y₂, ..., y_n). The norm of x is given by ||x|| := ⟨x, x⟩^{1/2}, and bd(C) and C̄ denote the boundary and the closure of the subset C ⊂ ℝⁿ, respectively. A bifunction f : C × C → ℝ is said to be quasimonotone on C if

    f(x, y) > 0  ⇒  f(y, x) ≤ 0,  ∀x, y ∈ C.
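To make the quasimonotonicity implication above concrete, the following small Python check (an illustration added here, not part of the paper) samples random point pairs for the toy bifunction f(x, y) = ⟨x, y − x⟩, which is monotone and hence quasimonotone, and verifies that f(x, y) > 0 always forces f(y, x) ≤ 0.

```python
import numpy as np

def f(x, y):
    """Toy bifunction f(x, y) = <x, y - x>; monotone, hence quasimonotone."""
    return float(x @ (y - x))

rng = np.random.default_rng(0)
violations = 0
for _ in range(10_000):
    x, y = rng.normal(size=3), rng.normal(size=3)
    if f(x, y) > 0 and f(y, x) > 0:   # would contradict quasimonotonicity
        violations += 1
print("violations found:", violations)   # expected output: 0
```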


Definition 1. Let f : C × C → ℝ be a bifunction. For each fixed z ∈ C, the diagonal subdifferential of f(z, ·) at x ∈ C is defined and denoted by

    ∂₂f(z, x) = {g ∈ ℝⁿ : f(z, y) ≥ f(z, x) + ⟨g, y − x⟩, ∀y ∈ C}.

Furthermore, if f(x, x) = 0, then ∂₂f(x, x) = {g ∈ ℝⁿ : f(x, y) ≥ ⟨g, y − x⟩, ∀y ∈ C}.

We present the definitions of proximal and induced proximal distances, introduced by Auslender and Teboulle [2]. For applications of these proximal distances to optimization, variational inequality problems and equilibrium problems see for example the references [10–12,17].

Definition 2. A function d : ℝⁿ × ℝⁿ → ℝ₊ ∪ {+∞} is called a proximal distance with respect to an open nonempty convex set C if for each y ∈ C it satisfies the following properties:
i. d(·, y) is proper, lower semicontinuous, strictly convex and continuously differentiable on C;
ii. dom(d(·, y)) ⊂ C and dom(∂₁d(·, y)) = C, where ∂₁d(·, y) denotes the classical subgradient map of the function d(·, y) with respect to the first variable;
iii. d(·, y) is coercive on ℝⁿ (i.e., lim_{‖u‖→∞} d(u, y) = +∞);
iv. d(y, y) = 0.
We denote by D(C) the family of functions satisfying the above definition.

Definition 3. Given d ∈ D(C), a function H : ℝⁿ × ℝⁿ → ℝ₊ ∪ {+∞} is called the induced proximal distance to d if there exists γ ∈ (0, 1] with H a finite-valued function on C × C and for each a, b ∈ C we have:
(Ii) H(a, a) = 0;
(Iii) ⟨c − b, ∇₁d(b, a)⟩ ≤ H(c, a) − H(c, b) − γH(b, a), ∀c ∈ C;
where the notation ∇₁d(·, ·) means the gradient of d with respect to the first variable.
We denote by (d, H) ∈ F(C) a pair satisfying the conditions of Definition 3. We also denote (d, H) ∈ F(C) if there exists H such that:
(Iiii) H is finite-valued on C × C satisfying (Ii) and (Iii) for each c ∈ C;
(Iiv) for each c ∈ C, H(c, ·) has level-bounded sets on C.
Finally, we denote (d, H) ∈ F₊(C) if:
(Iv) (d, H) ∈ F(C);
(Ivi) for all y ∈ C and all bounded {y^k} ⊂ C with lim_{k→+∞} H(y, y^k) = 0, we have lim_{k→+∞} y^k = y;
(Ivii) for all y ∈ C and all {y^k} ⊂ C such that lim_{k→+∞} y^k = y, we have lim_{k→+∞} H(y, y^k) = 0.

Remark 1. Examples of proximal distances which satisfy the above definitions may be seen in Auslender and Teboulle [2], Sect. 3.

Definition 4. Let (d, H) ∈ F(C) and let {x^k} ⊂ ℝⁿ be a sequence such that {x^k} converges to a point x ∈ ℝⁿ. Then the convergence is said to be:
1. H-linear, if there exist a constant 0 < θ < 1 and n₀ ∈ ℕ such that

    H(x^k, x) ≤ θ H(x^{k−1}, x),  ∀k ≥ n₀;    (5)

2. H-superlinear, if there exist a sequence {β_k} converging to zero and n ∈ ℕ such that

    H(x^k, x) ≤ β_k H(x^{k−1}, x),  ∀k ≥ n.    (6)

In the particular case when the induced proximal distance H is given by H(x, y) = η̄‖x − y‖², for some η̄ > 0, we obtain the usual definition of rate of convergence.

Lemma 1 [14, Lemma 2, pp. 44]. Let {v_k}, {γ_k} and {β_k} be nonnegative sequences of real numbers satisfying v_{k+1} ≤ (1 + γ_k)v_k + β_k and such that Σ_{k=1}^{∞} β_k < ∞, Σ_{k=1}^{∞} γ_k < ∞. Then the sequence {v_k} converges.
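As a quick illustration of Definition 4 (added here, not from the paper), take the usual choice H(x, y) = ‖x − y‖²; the sketch below checks numerically that a geometrically convergent sequence x^k = x* + c q^k has ratios H(x^k, x*)/H(x^{k−1}, x*) equal to q², so it converges H-linearly with θ = q².

```python
import numpy as np

def H(a, b):
    return float(np.dot(a - b, a - b))    # the usual choice H(x, y) = ||x - y||^2

x_star = np.array([1.0, -2.0])
c, q = np.array([0.5, 0.5]), 0.6
xs = [x_star + c * q**k for k in range(8)]            # x^k -> x* geometrically
ratios = [H(xs[k], x_star) / H(xs[k - 1], x_star) for k in range(1, 8)]
print(np.round(ratios, 4))    # every ratio equals q**2 = 0.36
```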

3 Proximal Method

Let C be a nonempty open convex set and f : C × C → ℝ an equilibrium bifunction, i.e., satisfying f(x, x) = 0 for every x ∈ C. The equilibrium problem, EP(f, C) in short, consists in finding a point x ∈ C such that

    EP(f, C):  f(x, y) ≥ 0,  ∀y ∈ C.    (7)

The solution set of EP(f, C) is denoted by S(f, C). Next, we give the following assumptions on the equilibrium bifunction.

(H1) f(·, y) : C → ℝ is upper semicontinuous for all y ∈ C.
(H2) f(x, ·) : C → ℝ is convex for all x ∈ C.
(H3) f(·, ·) is quasimonotone.

Remark 2. Observe that assumptions (H1) and (H2) are standard for the study of equilibrium problems; see Remark 3.1 of [7] for a justification of each assumption. We impose the assumption (H3) because the monotonicity assumption on f(·, ·) turned out to be too restrictive for many applied problems, especially in Economics and Operations Research.

Inexact Algorithm
Initialization: Let {λ_k} be a sequence of positive parameters and a starting point

    x⁰ ∈ C.    (8)

Main Steps: For k = 1, 2, ..., and x^{k−1} ∈ C, find x^k ∈ C and g^k ∈ ∂₂f(x^k, x^k) such that

    g^k + λ_k ∇₁d(x^k, x^{k−1}) = e^k,    (9)

where d is a proximal distance such that (d, H) ∈ F₊(C) and e^k is an approximation error which satisfies the following conditions:

    ‖e^k‖/λ_k ≤ η_k √(H(x^k, x^{k−1}))    (10)

    Σ_{k=1}^{+∞} η_k < +∞    (11)

Stop Criterion: If x^k = x^{k−1} or e^k ∈ ∂₂f(x^k, x^k), then finish. Otherwise, do k − 1 ← k and return to Main Steps.

Observe that the error e^k is not prescribed before the finding of x^k. We impose the following additional assumption:

(H4) For each k ∈ ℕ, there exist x^k and g^k satisfying (9).

We are interested in analyzing the iterations when x^k ≠ x^{k−1} for each k = 1, 2, ..., because otherwise we obtain g^k = e^k ∈ ∂₂f(x^k, x^k) and therefore the algorithm finishes. Now we define the following particular solution set of S(f, C), which has been introduced in Mallma et al. [7]:

    S*(f, C) = {x ∈ S(f, C) : f(x, w) > 0, ∀w ∈ C}.    (12)
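The following Python fragment is only a schematic sketch of the Main Steps (9)-(11) above; the subproblem oracle `solve_subproblem`, the gradient `grad1_d` and the induced distance `H` are hypothetical user-supplied callables, since the paper does not prescribe how the inexact subproblem is solved.

```python
import numpy as np

def inexact_proximal_method(solve_subproblem, grad1_d, H, x0,
                            lambdas, etas, max_iter=100, tol=1e-10):
    """Schematic loop: iterate (9) subject to the error criteria (10)-(11).

    solve_subproblem(x_prev, lam) -> (x_k, g_k): hypothetical oracle returning
        an approximate iterate x_k and g_k in the diagonal subdifferential
        of f(x_k, .) at x_k; it is responsible for meeting criterion (10).
    grad1_d(x, y): gradient of the proximal distance d in its first argument.
    H(x, y): induced proximal distance.
    lambdas(k), etas(k): the parameter sequences {lambda_k} and {eta_k}.
    """
    x_prev = np.asarray(x0, dtype=float)
    for k in range(1, max_iter + 1):
        lam = lambdas(k)
        x_k, g_k = solve_subproblem(x_prev, lam)
        e_k = g_k + lam * grad1_d(x_k, x_prev)        # residual of Eq. (9)
        # criterion (10): ||e_k|| <= lam * eta_k * sqrt(H(x_k, x_prev))
        bound = lam * etas(k) * np.sqrt(H(x_k, x_prev))
        if np.linalg.norm(e_k) > bound:
            raise ValueError("subproblem solution violates criterion (10)")
        if np.linalg.norm(x_k - x_prev) < tol:        # stop criterion
            return x_k
        x_prev = x_k
    return x_prev
```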

We will use the following assumption.

(H5) S*(f, C) ≠ ∅.

Proposition 1. Under the assumptions (H2), (H3), (H4), (H5) and (d, H) ∈ F(C), we have

    H(x, x^k) ≤ H(x, x^{k−1}) − γH(x^k, x^{k−1}) − (1/λ_k)⟨e^k, x − x^k⟩,  ∀x ∈ S*(f, C).    (13)

Proof. Given that x ∈ S*(f, C), then f(x, w) > 0 for all w ∈ C, and as x^k ∈ C (by assumption (H4)), we obtain f(x, x^k) > 0. Then, as f is quasimonotone, f(x^k, x) ≤ 0. Due to g^k ∈ ∂₂f(x^k, x^k), from (H2) and from Definition 1, we have

    ⟨g^k, x − x^k⟩ ≤ f(x^k, x) ≤ 0.    (14)

Replacing (9) in the previous expression and making use of Definition 3 (Iii) we obtain the result. □

We introduce the following extra condition on the induced proximal distance:

(Iviii) There exists θ > 0 such that ‖x − y‖² ≤ θH(x, y), for all x ∈ C and for all y ∈ C.

Remark 3. Some examples of proximal distances which satisfy the above condition are the following:

1. d(x, y) := Σ_{j=1}^{n} [ x_j − y_j − y_j ln(x_j/y_j) ] + (σ/2)‖x − y‖², with θ = 2/σ, and
   H(x, y) = Σ_{j=1}^{n} [ x_j ln(x_j/y_j) + y_j − x_j ] + (σ/2)||x − y||².

2. Let ϕ : ℝ → ℝ ∪ {+∞} be a closed proper convex function such that dom ϕ ⊂ ℝ₊ and dom ∂ϕ = ℝ₊₊. We suppose in addition that ϕ is C²(ℝ₊₊), strictly convex, and nonnegative on ℝ₊₊ with ϕ(1) = ϕ′(1) = 0. We denote by Φ the class of such kernels and by

   Φ̄ = { ϕ ∈ Φ : ϕ′(1)(1 − 1/t) ≤ ϕ′(t) ≤ ϕ′(1)(t − 1), ∀t > 0 }

   the subclass of these kernels. Let ϕ(t) = μp(t) + (ν/2)(t − 1)² with ν ≥ μp″(1) > 0, p ∈ Φ̄, and let the associated proximal distance be defined by

   d_ϕ(x, y) = Σ_{j=1}^{n} y_j² ϕ(x_j/y_j).

   The use of ϕ-divergence proximal distances is particularly suitable for handling polyhedral constraints. Let C = {x ∈ ℝⁿ : Ax < b}, where A is an (m, n) matrix of full rank m (m ≥ n). Particularly important cases include C = ℝⁿ₊₊ or C = {x ∈ ℝⁿ₊₊ : a_i < x_i < b_i, ∀i = 1, ..., n}, with a_i, b_i ∈ ℝ. In [16], example (c) of the appendix section, it was shown that for H(x, y) = η̄‖x − y‖² with η̄ = 2^{−1}(ν + μp″(1)), we have (d_ϕ, H) ∈ F₊(ℝⁿ₊).
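The kernel of Remark 3, item 1, can be coded directly; the sketch below (an illustration under the stated formulas, not the authors' code) evaluates d and H on the positive orthant and spot-checks condition (Iviii) with θ = 2/σ.

```python
import numpy as np

SIGMA = 2.0

def d(x, y, sigma=SIGMA):
    """d(x, y) = sum_j [x_j - y_j - y_j ln(x_j / y_j)] + (sigma/2) ||x - y||^2."""
    return float(np.sum(x - y - y * np.log(x / y)) + 0.5 * sigma * np.sum((x - y) ** 2))

def H(x, y, sigma=SIGMA):
    """H(x, y) = sum_j [x_j ln(x_j / y_j) + y_j - x_j] + (sigma/2) ||x - y||^2."""
    return float(np.sum(x * np.log(x / y) + y - x) + 0.5 * sigma * np.sum((x - y) ** 2))

rng = np.random.default_rng(1)
for _ in range(1000):
    x = rng.uniform(0.1, 5.0, size=4)
    y = rng.uniform(0.1, 5.0, size=4)
    assert d(x, y) >= 0.0 and H(x, y) >= 0.0
    # condition (Iviii) with theta = 2 / sigma
    assert np.sum((x - y) ** 2) <= (2.0 / SIGMA) * H(x, y) + 1e-9
print("condition (Iviii) holds on all samples with theta = 2/sigma")
```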

 1 ¯ Φ = ϕ ∈ Φ : ϕ (1) 1 − ≤ ϕ (t) ≤ ϕ (1)(t − 1) ∀t > 0 t the subclass of these kernels. Let ϕ(t) = μp(t)+ ν2 (t−1)2 with ν ≥ μp (1) > 0, p ∈ Φ¯ and let the associated proximal distance be defined by   n  xj 2 dϕ (x, y) = yj ϕ . yj j=1 The use of ϕ-divergence proximal distances is particularly suitable for handling polyhedral constraints. Let C = {x ∈ IRn : Ax < b}, where A is an (m, n) matrix of full rank m (m ≥ n). Particularly important cases include C = IRn++ or C = {x ∈ IRn++ : ai < xi < bi ∀i = 1, . . . , n}, with ai , bi ∈ IR. In [16], example (c) of the appendix section, was showed that for 2 H(x, y) = η¯ x − y with η¯ = 2−1 (ν + μp (1)), we have (dϕ , H) ∈ F+ (IRn+ ). Proposition 2. Let (d, H) ∈ F(C) and suppose that the assumptions (H2) − (H5) are satisfied. If the proximal distance H(., .) satisfies the additional condition (Iviii), then i) there exists an integer k0 ∈ IN such that for all k ≥ k0 and for all x ∈ S ∗ (f, C), we have   η θηk k k − γ H(xk , xk−1 ); H(x, x ) ≤ 1 + (15) H(x, xk−1 ) + 1 − θηk 4 ii) {H(x, xk )} converges for all x ∈ S ∗ (f, C);


iii) {x^k} is bounded;
iv) lim_{k→+∞} H(x^k, x^{k−1}) = 0.

Proof. i) Let x ∈ S*(f, C). Then

    0 ≤ ‖ e^k/(2λ_k√η_k) + √η_k (x − x^k) ‖² = ‖e^k‖²/(4λ_k²η_k) + η_k‖x − x^k‖² + (1/λ_k)⟨e^k, x − x^k⟩,

thus

    −(1/λ_k)⟨e^k, x − x^k⟩ ≤ ‖e^k‖²/(4λ_k²η_k) + η_k‖x − x^k‖².

Replacing the previous expression in (13) we have

    H(x, x^k) ≤ H(x, x^{k−1}) − γH(x^k, x^{k−1}) + ‖e^k‖²/(4λ_k²η_k) + η_k‖x − x^k‖².

Taking into account the hypothesis (10) and the condition (Iviii), we have

    H(x, x^k) ≤ H(x, x^{k−1}) − γH(x^k, x^{k−1}) + (η_k/4)H(x^k, x^{k−1}) + θη_k H(x, x^k),

thus

    (1 − θη_k)H(x, x^k) ≤ H(x, x^{k−1}) + (η_k/4 − γ) H(x^k, x^{k−1}).

As η_k → 0⁺ (this is true from (11)) and θ > 0, there exists k₀ ≥ 0 such that 0 < 1 − θη_k ≤ 1 and η_k/4 − γ < 0, for all k ≥ k₀. So applying this fact in the previous expression we have

    H(x, x^k) ≤ (1/(1 − θη_k)) H(x, x^{k−1}) + (1/(1 − θη_k)) (η_k/4 − γ) H(x^k, x^{k−1}).

As 1 − θη_k ≤ 1, the previous expression becomes

    H(x, x^k) ≤ (1 + θη_k/(1 − θη_k)) H(x, x^{k−1}) + (η_k/4 − γ) H(x^k, x^{k−1}).

ii) From (15), it is clear that

    H(x, x^k) ≤ (1 + θη_k/(1 − θη_k)) H(x, x^{k−1}),  ∀k ≥ k₀.    (16)

∀ k ≥ k0 .

Applying summations, and taking into account (11), we have +∞  k=1

θηk < +∞. 1 − θηk

(17)

10

E. A. Papa Quiroz

θηk Finally, taking vk+1 = H(x, xk ), vk = H(x, xk−1 ), γk = 1−θη and βk = 0 in k +∞ Lemma 1 and considering that k=1 γk < +∞ we obtain that the sequence {H(x, xk } converges. iii) It is immediate from (ii) and Definition 3-(Iiv). iv) It is immediate from (i) and (ii).  

Theorem 1. Let (d, H) ∈ F+ (C) and suppose that the assumptions (H1) − (H5), (Iviii) are satisfied and 0 < λk < λ, then i) {xk } converges weakly to an element of S(f, C), that is, Acc(xk ) = ∅ and every element of Acc(xk ) is a point of S(f, C). ii) If an accumulation point x ¯ belongs to S ∗ (f, C) then all the sequence {xk } converges to x ¯. Proof. (i). From Propositions 2, we have that {xk } is bounded, so there exist a subsequence {xkj } ⊆ {xk } and a point x∗ such that xkj → x∗ . Define L := {k1 , k2 , ..., kj , ...}, then {xl }l∈L → x∗ . We will prove that x∗ ∈ S(f, C). From (9) we have that ∀ l ∈ L and ∀ x ∈ C :       f (xl , x) ≥ g l , x − xl = el , x − xl − λl ∇1 d(xl , xl−1 ), x − xl . Using Definition 3-(Iii) in the above equality, we obtain  l      g , x − xl ≥ el , x − xl + λl H(x, xl ) − H(x, xl−1 ) + γH(xl , xl−1 ) .

(18)

Observe that, from (10) and due that {λk } and {xk } are bounded, {ηk } and {H(xk , xk−1 )} converge to zero, then   (19) lim el , x − xl = 0. l→∞

Fix x ∈ C, we analyze two cases: a) If {H(x, xl )} converges, then from Proposition 2-(iv), and the fact  that {λl } is bounded, we have λl H(x, xl ) − H(x, xl−1 ) + γH(xl , xl−1 ) → 0. Applying this result and (19) in (18) and from assumption (H1) we obtain   f (x∗ , x) ≥ lim sup f (xl , x) ≥ lim sup ul , x − xl ≥ 0. j→∞

l→∞

b) If {H(x, xl )} is not convergent, then the sequence is not monotonically decreasing and so there are infinite l ∈ L such that H(x, xl ) ≥ H(x, xl−1 ). Let {lj } ⊂ L, for all j ∈ IN, such that H(x, xlj ) ≥ H(x, xlj −1 ), then H(x, xlj ) − H(x, xlj −1 ) + γH(xlj , xlj −1 ) ≥ γH(xlj , xlj −1 ). Taking into account this last result, Proposition 2-(iv) and (19), in (18) we have:

    lim sup g lj , x − xlj ≥ lim sup λlj H(x, xlj ) − H(x, xlj −1 ) + γH(xlj , xlj −1 ) ≥ 0, j→∞

j→∞

and from assumption (H1),

  f (x∗ , x) ≥ lim sup f (xl , x) ≥ lim sup ulj , x − xlj ≥ 0. j→∞

j→∞

Interior Proximal Method for Quasimonotone Equilibrium

11

(ii). Let x ¯ such that xkl → x and x ¯ ∈ S ∗ (f, C). Then, from Definition 3 (Ivii), H(x, xkl ) → 0. Remember that from Proposition 2 (ii) we have that {H(x, xk )} is convergent and as H(x, xkl ) → 0, we obtain that H(x, xkj ) → 0; so applying the Definition 3 (Ivi) we obtain that xkj → x, and due to the uniqueness of the   limit we have x∗ = x. Thus {xk } converges to x∗ .

4

Rate of Convergence

In this section we prove the linear or superlinear rate of convergence of the inexact algorithm. For that, we consider the following additional assumption: (H6) For x ∈ S ∗ (f, C) such that xk → x, there exist δ = δ(x) > 0 and τk = τk (x) > 0, such that for all w ∈ B(0, δ) ⊂ IRn and for all xk with w ∈ ∂2 f (xk , xk ), we have (20) H(x, xk ) ≤ τk w2 . Another assumption that we also assume for the proximal distance (d, H) ∈ F+ (C) is the following: (H7) For all u ∈ C, the function 1 d(., u) satisfies the following condition: there exists L > 0 such that  1 d(x, u) − 1 d(y, u) ≤ Lx − y,

∀x, y ∈ C.

Lemma 2. Let (d, H) ∈ F+ (C) and suppose that assumptions (H1)–(H7) and condition (Iviii) are satisfied and 0 < λk < λ. Then i) there exists  k ∈ IN such that g k  < δ,

∀k ≥ k,

(21)

where g k is given by (9); ii) it holds that √ H(x, xk ) ≤ τk λ2k (ηk + L θ)2 H(xk , xk−1 ),

∀k ≥ k.

(22)

Proof. i) Let x = limk→+∞ xk , such that x ∈ S ∗ (f, C), and thus from assumption (H7), there exists L > 0 such that  1 d(x, xk−1 ) − 1 d(y, xk−1 ) ≤ Lx − y,

∀x, y ∈ C.

From the above inequality we have  1 d(xk , xk−1 ) =  1 d(xk , xk−1 ) − 1 d(xk−1 , xk−1 ) ≤ Lxk − xk−1 . (23) From (9) we obtain g k  = ek − λk 1 d(xk , xk−1 ) ≤ ek  + λk  1 d(xk , xk−1 ),

(24)

12

E. A. Papa Quiroz

so, taking into account (10), (23), the condition (Iviii), and the fact that λk ≤ λ, we have that the inequality (24) implies  √  g k  ≤ λk ηk H(xk , xk−1 ) + λk L θ H(xk , xk−1 ) √  (25) = λk (ηk + L θ) H(xk , xk−1 )  √ (26) ≤ λ(ηk + L θ) H(xk , xk−1 ). Since ηk → 0 and H(xk , xk−1 ) → 0 (see Proposition 2-(iv)), taking δ > 0, there k. exists  k ∈ IN such that g k  < δ for all k ≥  k, we have ii) In (20) taking w = g k for all k ≥  H(x, xk ) ≤ τk g k 2 .

(27)

Therefore, the relation (22) follows from the last inequality combined with (25).   Theorem 2. Let (d, H) ∈ F+ (C) and suppose that assumptions (H1)-(H7) and condition (Iviii) are satisfied and 0 < λk < λ. Then, H(x, xk ) ≤ rk H(x, xk−1 ),

(28)

for k sufficiently large, where   √  1 4τk (ηk + L θ)2 √ rk = . k 1 − θηk 4τk (ηk + L θ)2 + 4γ−η 2 λ

1. If τk = τ > 0 then, {xk } converges H-linearly to x ∈ SOL(T, C). 2. If {τk } converges to zero then, {xk } converges H-superlinearly to x ∈ SOL(T, C). 3. If τk = τ > 0 and λk  0 then, {xk } converges H-superlinearly to x ∈ SOL(T, C). Proof. Let x ∈ S ∗ (f, C) be the limit point of the sequence {xk } and g k ∈ ∂2 f (xk , xk ) given by (9). Due to the relationship (21) we have to g k  < δ for k. all k ≥  k. So g k ∈ B(0, δ), for all k ≥  k}, it follows that Considering the inequality (22) in (15) for all k ≥ max{k0 ,   H(x, x ) ≤ 1 + k

θηk 1 − θηk



k−1

H(x, x

 ηk )− γ− 4



1 √ τk λ2k (ηk + L θ)2

H(x, xk ),

Thus, we obtain for all k ≥ max{k0 ,  k} :     1 4γ − ηk k √ H(x, x ) ≤ 1+ H(x, xk−1 ). 1 − θηk 4τk λ2k (ηk + L θ)2

Interior Proximal Method for Quasimonotone Equilibrium

13

As τk > 0 and also (4γ − ηk ) > 0, then for all k ≥ max{k0 ,  k} we have H(x, xk ) ≤ βk H(x, xk−1 ), where

⎞ √ 2   1 (η + L θ) 4τ k k ⎠ ⎝ √ βk = . k 1 − θηk 4τk (ηk + L θ)2 + 4γ−η λ2

(29)



(30)

k

Since that λk ≤ λ for all k ∈ IN, we obtain βk ≤ rk , 

where rk =

(31)

 √  1 4τk (ηk + L θ)2 √ . k 1 − θηk 4τk (ηk + L θ)2 + 4γ−η 2 λ

Thus we obtain (28). 1. Let τk = τ > 0, then taking into account that ηk → 0, then   4τ L2 θ . rk → 4τ L2 θ + 4γ2 λ

Thus, there exists a positive number k1 ∈ IN with k ≥ k1 , such that   4τ L2 θ 1 1+ < 1 ∀ k ≥ k1 . βk ≤ rk < 2 4τ L2 θ + 4γ2 λ

Then, in (29) we have for all k ≥ max{k0 ,  k, k1 } : ¯ H(x, xk ) ≤ θH(x, xk−1 ), 

where θ¯ =

4τ L2 θ 4τ L2 θ + 4γ2

 .

λ

Thus, the sequence {xk } converges H−linearly to x. 2. If {τk } converges to zero then from (28) we have that {rk } converges to zero and thus we obtain that the sequence {xk } converges H−superlinearly to x. 3. Let τk = τ > 0, we have from (29) and (30) H(x, xk ) ≤ βk H(x, xk−1 ), where

⎞ √ 2   1 4τ (ηk + L θ) ⎠ √ βk = ⎝ . k 1 − θηk 4τ (ηk + L θ)2 + 4γ−η λ2

(32)



(33)

k

As λk  0 and ηk → 0, then sequence {xk } converges H−superlinearly   to x.



Corollary 1. Under the same assumptions of the previous theorem, suppose that the condition

    H(x, y) = θ‖x − y‖²,  for some θ > 0,    (34)

is satisfied. Then
1. If τ_k = τ > 0, then {x^k} converges linearly to x ∈ S*(f, C).
2. If {τ_k} converges to zero, then {x^k} converges superlinearly to x ∈ S*(f, C).
3. If τ_k = τ > 0 and λ_k ↓ 0, then {x^k} converges superlinearly to x ∈ S*(f, C).

Remark 4. A class of proximal distances which satisfies the above condition (34) is the class of proximal distances with second-order homogeneous kernels; see Remark 3.

References

1. Anh, P.N.: A hybrid extragradient method extended to fixed point problems and equilibrium problems. Optimization 62, 271–283 (2013)
2. Auslender, A., Teboulle, M.: Interior gradient and proximal methods for convex and conic optimization. SIAM J. Optim. 16, 697–725 (2006)
3. Blum, E., Oettli, W.: From optimization and variational inequalities to equilibrium problems. Math. Student 63, 123–145 (1994)
4. Iusem, A.N., Sosa, W.: New existence results for equilibrium problems. Nonlinear Anal. 52, 621–635 (2003)
5. Iusem, A.N., Sosa, W.: On the proximal point method for equilibrium problems in Hilbert spaces. Optimization 59, 1259–1274 (2010)
6. Khatibzadeh, H., Mohebbi, V., Ranjbar, S.: Convergence analysis of the proximal point algorithm for pseudo-monotone equilibrium problems. Optim. Methods Softw. 30, 1146–1163 (2015)
7. Mallma Ramirez, L., Papa Quiroz, E.A., Oliveira, P.R.: An inexact proximal method with proximal distances for quasimonotone equilibrium problems. J. Oper. Res. Soc. China 246(3), 721–729 (2017)
8. Moudafi, A.: On the convergence of splitting proximal methods for equilibrium problems in Hilbert spaces. J. Math. Anal. Appl. 359, 508–513 (2009)
9. Nguyen, T.T.V., Strodiot, J.J., Nguyen, V.H.: The interior proximal extragradient method for solving equilibrium problems. J. Glob. Optim. 44, 175–192 (2009)
10. Papa Quiroz, E.A., Oliveira, P.R.: An extension of proximal methods for quasiconvex minimization on the nonnegative orthant. Eur. J. Oper. Res. 216, 26–32 (2012)
11. Papa Quiroz, E.A., Mallma Ramirez, L., Oliveira, P.R.: An inexact proximal method for quasiconvex minimization. Eur. J. Oper. Res. 246, 721–729 (2015)
12. Papa Quiroz, E.A., Mallma Ramirez, L., Oliveira, P.R.: An inexact algorithm with proximal distances for variational inequalities. RAIRO Oper. Res. 52(1), 159–176 (2018)
13. Papa Quiroz, E.A., Cruzado, S.: An inexact scalarization proximal point method for multiobjective quasiconvex minimization. Ann. Oper. Res., 1–26 (2020). https://doi.org/10.1007/s10479-020-03622-8
14. Pedregal, P.: Optimal control. In: Introduction to Optimization. TAM, vol. 46, pp. 195–236. Springer, New York (2004). https://doi.org/10.1007/0-387-21680-4_6
15. Quoc, T.D., Muu, L.D.: Iterative methods for solving monotone equilibrium problems via dual gap functions. Comput. Optim. Appl. 51, 709–728 (2012)

16. Sarmiento, O., Papa Quiroz, E.A., Oliveira, P.R.: A proximal multiplier method for separable convex minimization. Optimization 65(2), 501–537 (2016)
17. Villacorta, K.D., Oliveira, P.R.: An interior proximal method in vector optimization. Eur. J. Oper. Res. 214, 485–492 (2011)

Beyond Pointwise Submodularity: Non-monotone Adaptive Submodular Maximization Subject to Knapsack and k-System Constraints

Shaojie Tang

Naveen Jindal School of Management, University of Texas at Dallas, Richardson, USA
[email protected]

Abstract. Although knapsack-constrained and k-system-constrained non-monotone adaptive submodular maximization have been well studied in the literature, they have only been settled given the additional assumption of pointwise submodularity. In this paper, we remove the common assumption on pointwise submodularity and propose the first approximation solutions for both knapsack- and k-system-constrained adaptive submodular maximization problems. Inspired by two recent studies on non-monotone adaptive submodular maximization, we develop a sampling-based randomized algorithm that achieves a 1/10 approximation ratio for the case of a knapsack constraint and a 1/(2k+4) approximation ratio for the case of a k-system constraint.

Keywords: Adaptive submodularity · Approximation algorithms · Non-monotonicity

1 Introduction

In [5], they extend the study of submodular maximization from the non-adaptive setting [8] to the adaptive setting. They introduce the notions of adaptive monotonicity and adaptive submodularity, and show that a simple adaptive greedy policy achieves a 1 − 1/e approximation ratio if the utility function is adaptive submodular and adaptive monotone. Although there have been numerous research studies on adaptive submodular maximization under different settings [2,4,9,11,13], most of them assume adaptive monotonicity. For the case of maximizing a non-monotone adaptive submodular function subject to a cardinality constraint, [10] develops the first constant approximation solution. For the case of maximizing a non-monotone adaptive submodular and pointwise submodular function, [1,3] develop effective solutions for the cases of a knapsack and a k-system constraint, respectively. Note that adaptive submodularity does not imply pointwise submodularity and vice versa [5,7], and this raises the following question: Does there exist an approximation solution for maximizing a knapsack-constrained or a k-system-constrained non-monotone adaptive submodular function without resorting to pointwise submodularity?


Table 1. Approximation for non-monotone adaptive submodular function maximization

Source    | Ratio            | Constraint             | Require pointwise submodularity?
[6]       | 1/e              | Cardinality constraint | Yes
[1]       | 1/9              | Knapsack constraint    | Yes
[3]       | 1/(k+2√(k+1)+2)  | k-system constraint    | Yes
[10]      | 1/e              | Cardinality constraint | No
This work | 1/10             | Knapsack constraint    | No
This work | 1/(2k+4)         | k-system constraint    | No

In this paper, we answer the above question affirmatively by proposing the first approximation solutions for both knapsack and k-system constraints. Note that many practical constraints, including cardinality, matroid, intersection of k matroids, k-matchoid and k-extendible constraints, all belong to the family of k-system constraints. In particular, we develop a 1/10 approximate solution for maximizing a knapsack-constrained non-monotone adaptive submodular function. Technically speaking, our design is an extension of the classic modified density greedy algorithm [1,12]. In particular, their design is required to maintain two candidate policies, i.e., one chooses a best singleton and the other chooses items in a density-greedy manner, while our design maintains three candidate policies in order to drop the common assumption about pointwise submodularity. For the case of a k-system constraint, we are inspired by the sampling-based policy proposed in [3] and develop a similar policy that achieves a 1/(2k+4) approximation ratio without resorting to pointwise submodularity. We list the performance bounds of the closely related studies in Table 1.

2 Preliminaries

We first introduce some important notations. In the rest of this paper, we use [m] to denote the set {0, 1, 2, · · · , m}, and we use |S| to denote the cardinality of a set S.

2.1 Items and States

We consider a set E of n items, where each item e ∈ E is in a particular state from O. We use φ : E → O to denote a realization, where φ(e) represents the state of e ∈ E. Let Φ = {Φ(e) | e ∈ E} denote a random realization, where Φ(e) ∈ O is a random realization of the state of e ∈ E. The state of each item is unknown initially; one must pick an item e ∈ E before observing the value of Φ(e). We assume there is a known prior probability distribution p(φ) = {Pr[Φ = φ] : φ ∈ U } over realizations U . For any subset of items S ⊆ E, we use ψ : S → O to denote a partial realization, and dom(ψ) = S is called the domain of ψ. Consider any realization φ and any partial realization ψ; we say that ψ is consistent with φ, i.e., ψ ≺ φ, if they are equal everywhere in the domain of ψ. We say that ψ is a subrealization of ψ′, i.e., ψ ⊆ ψ′, if dom(ψ) ⊆ dom(ψ′) and they are equal everywhere in dom(ψ). Moreover, we use p(φ | ψ) to denote the conditional distribution over realizations conditioned on a partial realization ψ: p(φ | ψ) = Pr[Φ = φ | ψ ≺ Φ]. There is a non-negative utility function f that is defined over items and their states: f : 2E × OE → R≥0.

2.2 Policies and Problem Formulation

A typical adaptive policy works as follows: select the first item and observe its state, then continue to select the next item based on the observations collected so far, and so on. After each selection, we observe some partial realization ψ of the states of some subset of E, for example, we are able to observe the partial realization of the states of those items which have been selected. Formally, any adaptive policy can be represented as a function π that maps a set of observations to a distribution P(E) of E: π : 2E × OE → P(E), specifying which item to pick next based on the current observation. Definition 1 (Policy Concatenation). Given two policies π and π  , let π@π  denote a policy that runs π first, and then runs π  , ignoring the observation obtained from running π. Let the random variable E(π, φ) denote the subset of items selected by π under a realization φ. The expected utility favg (π) of a policy π is favg (π) = EΦ∼p(φ),Π f (E(π, Φ), Φ)

(1)

where the expectation is taken over Φ with respect to p(φ) and the random output of π. For ease of presentation, let f (e) = EΦ∼p(φ) f ({e}, Φ).

Definition 2 (Independence System). Given a ground set E and a collection of sets I ⊆ 2E , the pair (E, I) is an independence system if 1. ∅ ∈ I; 2. I, whose members are called the independent sets, is downward-closed, that is, A ∈ I and B ⊆ A implies that B ∈ I. A set B ∈ I is called a base if A ∈ I and B ⊆ A imply that B = A. A set B ∈ I is called a base of R if B ⊆ R and B is a base of the independence system (R, 2R ∩ I).

Definition 3 (k-System). An independence system (E, I) is a k-system for an integer k ≥ 1 if for every set R ⊆ E, the ratio between the sizes of the largest and smallest bases of R is upper bounded by k.

Let Ω denote the set of feasible policies and let U + = {φ ∈ U | p(φ) > 0}. For the case of a knapsack constraint, define Ω = {π | ∀φ ∈ U +, Σe∈E(π,φ) ce ≤ b}, where ce is the cost of e, which is fixed and known in advance, and b is the budget. For the case of a k-system constraint, define Ω = {π | ∀φ ∈ U +, E(π, φ) ∈ I}, where (E, I) is a k-system. Our goal is to find a feasible policy π opt that maximizes the expected utility, i.e., π opt ∈ arg maxπ∈Ω favg (π).
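To make the expected utility in (1) concrete, the following is a minimal Python sketch of a Monte Carlo estimator of favg (π). The helpers sample_realization, run_policy and utility are hypothetical stand-ins for problem-specific components (drawing φ ∼ p(φ), executing a policy and returning E(π, φ), and evaluating f); they are not part of the paper.

    def estimate_favg(policy, sample_realization, run_policy, utility, num_samples=1000):
        """Monte Carlo estimate of f_avg(policy) = E[f(E(policy, Phi), Phi)]."""
        total = 0.0
        for _ in range(num_samples):
            phi = sample_realization()          # draw a realization Phi ~ p(phi)
            selected = run_policy(policy, phi)  # items E(policy, phi) picked under phi
            total += utility(selected, phi)     # f(E(policy, phi), phi)
        return total / num_samples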

2.3 Adaptive Submodularity and Pointwise Submodularity

We start by introducing the conditional expected marginal utility of an item.

Definition 4 (Conditional Expected Marginal Utility of an Item). For any partial realization ψ and any item e ∈ E, the conditional expected marginal utility Δ(e | ψ) of e conditioned on ψ is Δ(e | ψ) = EΦ [f (dom(ψ) ∪ {e}, Φ) − f (dom(ψ), Φ) | ψ ≺ Φ], where the expectation is taken over Φ with respect to p(φ | ψ) = Pr(Φ = φ | ψ ≺ Φ).

We next introduce the concept of adaptive submodularity.

Definition 5 (Adaptive Submodularity [5]). A function f : 2E × OE → R≥0 is adaptive submodular with respect to a prior distribution p(φ) if, for any two partial realizations ψ and ψ′ such that ψ ⊆ ψ′, and any item e ∈ E such that e ∉ dom(ψ′), the following holds: Δ(e | ψ) ≥ Δ(e | ψ′).

For comparison purposes, we further introduce pointwise submodularity.

Definition 6 (Pointwise Submodularity [5]). A function f : 2E × OE → R≥0 is pointwise submodular if f (S, φ) is submodular in terms of S ⊆ E for all φ ∈ U +. That is, for any φ ∈ U , any two sets E1 ⊆ E and E2 ⊆ E such that E1 ⊆ E2 , and any item e ∉ E2 , we have f (E1 ∪ {e}, φ) − f (E1 , φ) ≥ f (E2 ∪ {e}, φ) − f (E2 , φ). The above property is referred to as state-wise submodularity in [1]. Note that adaptive submodularity does not imply pointwise submodularity and vice versa.

3 Knapsack Constraint

3.1 Algorithm Design

We first present the design of our Sampling-based Adaptive Density-Greedy Policy π sad subject to a knapsack constraint. A detailed description of π sad is listed in Algorithm 1. Our policy is composed of three candidate policies: π 1 , π 2 , and π 3 . The first candidate policy π 1 selects a singleton with the maximum expected utility. The other two candidates π 2 and π 3 follow a simple densitygreedy rule to select items from two random sets respectively. Our final policy π sad randomly picks one solution from the above three candidates such that π 1 is selected with probability δ1 , π 2 is selected with probability δ2 , and π 3 is selected with probability 1 − δ1 − δ2 . All parameters will be decided later. Although the framework of π sad is similar to the modified density greedy algorithm [1], where they only maintain two candidate policies (one is to choose a high value item and the other one is to choose items in a greedy manner), its performance


Algorithm 1. Sampling-based Adaptive Density-Greedy Policy π sad

  S1 = ∅, S2 = ∅, e∗ = arg maxe∈E f (e), t = 1, ψ0 = ∅, C = b.
  Sample a number r0 uniformly at random from [0, 1].
  for e ∈ E do
      let re ∼ Bernoulli(δ0 )
      if re = 1 then S1 = S1 ∪ {e} else S2 = S2 ∪ {e}
  if r0 ∈ [0, δ1 ) then                         {Adopting the first candidate policy π 1 }
      pick e∗
  else if r0 ∈ [δ1 , δ1 + δ2 ) then             {Adopting the second candidate policy π 2 }
      F1 = {e | e ∈ S1 , f (e) > 0}
      while F1 ≠ ∅ do
          et ← arg maxe∈F1 Δ(e | ψt−1 )/ce
          select et and observe Φ(et )
          ψt = ψt−1 ∪ {(et , Φ(et ))}
          C = C − cet
          S1 = S1 \ {et }, F1 = {e ∈ S1 | C ≥ ce and Δ(e | ψt−1 ) > 0}, t ← t + 1
  else                                          {Adopting the third candidate policy π 3 }
      F2 = {e | e ∈ S2 , f (e) > 0}
      while F2 ≠ ∅ do
          et ← arg maxe∈F2 Δ(e | ψt−1 )/ce
          select et and observe Φ(et )
          ψt = ψt−1 ∪ {(et , Φ(et ))}
          C = C − cet
          S2 = S2 \ {et }, F2 = {e ∈ S2 | C ≥ ce and Δ(e | ψt−1 ) > 0}, t ← t + 1

analysis is very different, as their results hold only if the utility function is both adaptive submodular and pointwise submodular. Later we show that π sad achieves a constant approximation ratio without resorting to the property of pointwise submodularity. We next describe π 1 , π 2 , and π 3 in detail.

Design of π 1 . Selecting a singleton e∗ with the maximum expected utility, i.e., e∗ = arg maxe∈E f (e).

Design of π 2 . Partition E into two disjoint subsets S1 and S2 (or E \ S1 ) such that S1 contains each item independently with probability δ0 . π 2 selects items only from S1 in a density-greedy manner as follows: In each round t, π 2 selects an item et with the largest "benefit-to-cost" ratio from F1 conditioned on the current observation ψt−1 :

et ← arg maxe∈F1 Δ(e | ψt−1 )/ce

where F1 = {e ∈ S1 |C ≥ ce and Δ(e | ψt−1 ) > 0}. Here we use C to denote the remaining budget before entering round t. After observing the state Φ(et ) of et , we update the partial realization using ψt = ψt−1 ∪ {(et , Φ(et ))}. This process iterates until F1 becomes an empty set.
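The density-greedy rule described above can be sketched in a few lines of Python. This is an illustrative sketch only, assuming oracle access to the conditional expected marginal utility Δ(· | ψ) and to the item states; the functions marginal and observe are hypothetical stand-ins and are not part of the paper.

    def density_greedy(S, costs, budget, marginal, observe):
        """Density-greedy selection used by pi^2 and pi^3 in Algorithm 1.

        S        -- candidate item set (e.g., the random subset S1)
        costs    -- dict mapping each item to its fixed cost c_e
        budget   -- available budget b (the remaining budget C starts here)
        marginal -- hypothetical oracle: marginal(e, psi) returns Delta(e | psi)
        observe  -- hypothetical oracle: observe(e) returns the realized state Phi(e)
        """
        psi = {}                 # current partial realization: item -> observed state
        C = budget
        remaining = set(S)
        while True:
            # F1: affordable items with positive expected marginal utility
            F = [e for e in remaining if C >= costs[e] and marginal(e, psi) > 0]
            if not F:
                break
            e_t = max(F, key=lambda e: marginal(e, psi) / costs[e])  # best benefit-to-cost ratio
            psi[e_t] = observe(e_t)   # select e_t and observe its state
            C -= costs[e_t]
            remaining.discard(e_t)
        return psi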


Design of π 3 . Partition E into two disjoint subsets S1 and S2 (or E \ S1 ) such that S1 contains each item independently with probability δ0 . π 3 selects items only from S2 in the same density-greedy manner as used in the design of π 2 .

3.2 Performance Analysis

Note that all existing results on non-monotone adaptive submodular maximization [1,3] require Lemma 4 of [6], whose proof relied on the assumption of the pointwise submodularity of the utility function. To relax this assumption, we first provide two technical lemmas, whose proofs do not require pointwise submodularity. We use range(π) to denote the set containing all items that π selects for some φ ∈ U + , i.e., range(π) = {e|e ∈ ∪φ∈U + E(π, φ)}. Lemma 1. If f : 2E × OE → R≥0 is adaptive submodular with respect to p(φ), then for any three policies π a , π b , and π c such that range(π b ) ∩ range(π c ) = ∅, we have favg (π a @π b ) + favg (π a @π c ) ≥ favg (π a )

(2)

Proof: For each r ∈ {a, b, c}, let ψ⃗ r = {ψ0r , ψ1r , ψ2r , · · · , ψr|ψ⃗ r|−1 } denote a fixed run of π r , where for each t ∈ [|ψ⃗ r | − 1], ψtr is the partial realization of the first t selected items. For ease of presentation, let ψ r denote the final observation ψr|ψ⃗ r|−1 of ψ⃗ r for short. For each e ∈ range(π c ) and t ∈ [n − 1], let I(π c , e, t + 1) be the indicator variable that e is selected as the (t+1)-th item by π c . Let Ψ⃗ a , Ψ⃗ b , Ψ⃗ c denote random realizations of ψ⃗ a , ψ⃗ b , ψ⃗ c , respectively. Then we have

favg (π a @π c ) − favg (π a )
  = E(Ψ⃗ a ,Ψ⃗ c ) [ Σe∈range(π c ),t∈[|Ψ⃗ c |−2] E[I(π c , e, t + 1) | Ψtc ] Δ(e | Ψ a ∪ Ψtc ) ]
  = E(Ψ⃗ a ,Ψ⃗ b ,Ψ⃗ c ) [ Σe∈range(π c ),t∈[|Ψ⃗ c |−2] E[I(π c , e, t + 1) | Ψtc ] Δ(e | Ψ a ∪ Ψtc ) ]   (3)

where the first expectation is taken over the prior joint distribution of ψ⃗ a and ψ⃗ c , and the second expectation is taken over the prior joint distribution of ψ⃗ a , ψ⃗ b and ψ⃗ c .

favg (π a @π b @π c ) − favg (π a @π b )
  = E(Ψ⃗ a ,Ψ⃗ b ,Ψ⃗ c ) [ Σe∈range(π c ),t∈[|Ψ⃗ c |−2] E[I(π c , e, t + 1) | Ψtc ] Δ(e | Ψ a ∪ Ψ b ∪ Ψtc ) ]   (4)

where the expectation is taken over the prior joint distribution of ψ⃗ a , ψ⃗ b and ψ⃗ c .


Because f : 2E × OE → R≥0 is adaptive submodular with respect to p(φ) and range(π b ) ∩ range(π c ) = ∅, for any given realizations (ψ⃗ a , ψ⃗ b , ψ⃗ c ) after running π a @π b @π c , any t ∈ [|ψ⃗ c | − 2], and any item e ∈ range(π c ), we have Δ(e | ψ a ∪ ψtc ) ≥ Δ(e | ψ a ∪ ψ b ∪ ψtc ) due to e ∉ dom(ψ b ). This together with (3) and (4) implies that favg (π a @π c ) − favg (π a ) ≥ favg (π a @π b @π c ) − favg (π a @π b ). Hence, favg (π a @π b @π c ) = favg (π a ) + (favg (π a @π b ) − favg (π a )) + (favg (π a @π b @π c ) − favg (π a @π b )) ≤ favg (π a ) + (favg (π a @π b ) − favg (π a )) + (favg (π a @π c ) − favg (π a ))

(5)

Because favg (π a @π b @π c ) ≥ 0, we have favg (π a ) + (favg (π a @π b ) − favg (π a )) + (favg (π a @π c ) − favg (π a )) ≥ 0

(6)

It follows that favg (π a @π b ) + favg (π a @π c ) = favg (π a ) + (favg (π a @π b ) − favg (π a )) + favg (π a ) + (favg (π a @π c ) − favg (π a )) ≥ favg (π a ) The inequality is due to (6). We next present the second technical lemma.



Lemma 2. Let π ∈ Ω denote a policy that selects items from S in the same density-greedy manner as used in the design of π 2 and π 3 , where S is a random set that is obtained by independently picking each item with probability σ. If f : 2E × OE → R≥0 is adaptive submodular with respect to p(φ), then

(2 + 1/σ) favg (π) + f (e∗ ) ≥ favg (π opt @π)

(7)

Proof: In the proof of Theorem 4 of [1], they show that (7) holds if f : 2E × OE → R≥0 is adaptive submodular and pointwise submodular with respect to p(φ). In fact, we can prove a more general result by relaxing the assumption about the property of pointwise submodularity, i.e., we next show that (7) holds if f : 2E × OE → R≥0 is adaptive submodular with respect to p(φ). By inspecting the proof of (7) of [1], it is easy to find that the only part that requires the property of pointwise submodularity is the proof of Lemma 5. We next show that Lemma 5 holds without resorting to pointwise submodularity.

Before restating Lemma 5 in [1], we introduce some notations. Let ψ⃗ = {ψ0 , ψ1 , ψ2 , · · · , ψ|ψ⃗|−1 } denote a fixed run of π, where for any t ∈ [|ψ⃗| − 1], ψt represents the partial realization of the first t selected items. For notational simplicity, for every ψ⃗, let ψ′ denote the final observation ψ|ψ⃗|−1 for short. Note that in [1], they defer the "sampling" phase of π without affecting the distributions of the output, that is, they toss a coin of success σ to decide whether or not to add an item to the solution each time after an item is being considered. Let M(ψ⃗) denote those items which are considered but not chosen by π and which have positive expected marginal contribution to ψ′ under ψ⃗. Lemma 5 in [1] states that favg (π) ≥ σ × Σψ⃗ Pr[ψ⃗] Σe∈M(ψ⃗) Δ(e | ψ′), where Pr[ψ⃗] is the probability that ψ⃗ occurs. For any item e ∈ E, let Λe denote the set of all possible partial realizations ψ such that ψ is observed right before e is being considered, i.e., Λe = {ψ | ψ is the last observation before e is being considered}. Let De denote the prior distribution over all partial realizations in Λe , i.e., for each ψ ∈ Λe , De (ψ) is the probability that e has been considered and ψ is the last observation before e is being considered. It follows that

favg (π) = Σe∈E EΨ∼De [σ × Δ(e | Ψ)]
         ≥ Σe∈E EΨ∼De [σ × Σψ⃗ Pr[ψ⃗ | Ψ, e] Δ(e | ψ′)]
         = σ × Σe∈E EΨ∼De [Σψ⃗ Pr[ψ⃗ | Ψ, e] Δ(e | ψ′)]
         = σ × Σψ⃗ Pr[ψ⃗] (Σe∈M(ψ⃗) Δ(e | ψ′))

where Pr[ψ⃗ | Ψ, e] is the probability that ψ⃗ occurs conditioned on the event that e is being considered and Ψ is the last observation before e is being considered. The first equality is due to the assumption that each item is sampled with probability σ, and the inequality is due to the observations that e ∉ dom(ψ′), Ψ ⊆ ψ′ and f : 2E × OE → R≥0 is adaptive submodular.

Lemma 2, together with the design of π 2 and π 3 , implies the following two corollaries.

Corollary 1. If f : 2E × OE → R≥0 is adaptive submodular with respect to p(φ), then (2 + 1/δ0 )favg (π 2 ) + f (e∗ ) ≥ favg (π opt @π 2 ).

Corollary 2. If f : 2E × OE → R≥0 is adaptive submodular with respect to p(φ), then (2 + 1/(1 − δ0 ))favg (π 3 ) + f (e∗ ) ≥ favg (π opt @π 3 ).

Now we are ready to present the main theorem of this section.

Theorem 1. If f : 2E × OE → R≥0 is adaptive submodular with respect to p(φ), then for δ1 = 1/5, δ2 = 2/5, and δ0 = 1/2, we have favg (π sad ) ≥ (1/10) favg (π opt ).

Proof: Recall that π 2 and π 3 start with a random partition of E into two disjoint subsets according to the same distribution. It is safe to assume that π 2 and π 3 share a common phase of generating such a partition, as this assumption does not affect the expected utility of either π 2 or π 3 . Thus, given a fixed partition (S1 , S2 ), π 2 and π 3 are running on two disjoint subsets because π 2 selects items only from S1 and π 3 selects items only from S2 . It follows that range(π 2 ) ∩ range(π 3 ) = ∅ conditional on any fixed pair of (S1 , S2 ). Letting E[favg (π opt @π 2 ) + favg (π opt @π 3 ) | (S1 , S2 )] denote the conditional expected value of favg (π opt @π 2 ) + favg (π opt @π 3 ) conditioned on (S1 , S2 ), Lemma 1 implies that for any fixed pair of (S1 , S2 ),

E[favg (π opt @π 2 ) + favg (π opt @π 3 ) | (S1 , S2 )] ≥ E[favg (π opt ) | (S1 , S2 )] = favg (π opt )

(8)

The equality is due to the observation that the expected utility of the optimal solution π opt is independent of the realizations of S1 and S2 . Taking the expectation of E[favg (π opt @π 2 ) + favg (π opt @π 3 )|(S1 , S2 )] over (S1 , S2 ), (8) implies that favg (π opt @π 2 ) + favg (π opt @π 3 ) ≥ favg (π opt )

(9)

Hence,

(2 + 1/δ0 )favg (π 2 ) + f (e∗ ) + (2 + 1/(1 − δ0 ))favg (π 3 ) + f (e∗ )
    ≥ favg (π opt @π 2 ) + favg (π opt @π 3 )
    ≥ favg (π opt )                                                     (10)

The first inequality is due to Corollary 1 and Corollary 2. The second inequality is due to (9). Because f (e∗ ) = favg (π 1 ), (10) implies that

(2 + 1/δ0 )favg (π 2 ) + (2 + 1/(1 − δ0 ))favg (π 3 ) + 2favg (π 1 ) ≥ favg (π opt )

Recall that π sad randomly picks one solution from {π 1 , π 2 , π 3 } such that π 1 is picked with probability δ1 , π 2 is picked with probability δ2 , and π 3 is picked with probability 1 − δ1 − δ2 . If we set δ1 = 2 / ((2 + 1/δ0 ) + (2 + 1/(1 − δ0 )) + 2) and δ2 = (2 + 1/δ0 ) / ((2 + 1/δ0 ) + (2 + 1/(1 − δ0 )) + 2), then we have

favg (π sad ) = [(2 + 1/δ0 )favg (π 2 ) + (2 + 1/(1 − δ0 ))favg (π 3 ) + 2favg (π 1 )] / ((2 + 1/δ0 ) + (2 + 1/(1 − δ0 )) + 2)
             ≥ favg (π opt ) / (6 + 1/δ0 + 1/(1 − δ0 ))

If we set δ0 = 1/2, then favg (π sad ) ≥ (1/10) favg (π opt ).

Remark: Recall that under the optimal setting, δ0 = 1/2, which indicates that π 2 is identical to π 3 . Thus, we can simplify the design of π sad to maintain only two candidate policies π 1 and π 2 . In particular, given that δ1 = 1/5 and δ2 = 2/5 under the optimal setting, π sad randomly picks a policy from π 1 and π 2 such that π 1 is picked with probability 1/5 and π 2 is picked with probability 4/5. It is easy to verify that this simplified version of π sad and its original version have identical output distributions.
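For illustration, the overall randomized policy π sad can be sketched as follows in Python, reusing the density_greedy helper from the earlier sketch. The oracles f_single, marginal and observe are hypothetical stand-ins, and the default parameters correspond to the setting δ0 = 1/2, δ1 = 1/5, δ2 = 2/5 analyzed in Theorem 1.

    import random

    def pi_sad(E, costs, budget, f_single, marginal, observe,
               delta0=0.5, delta1=0.2, delta2=0.4):
        """Top-level sketch of the sampling-based policy pi^sad (Algorithm 1).

        f_single(e) is assumed to return the expected singleton utility f(e);
        marginal and observe are the same hypothetical oracles as before.
        """
        # random partition of E: each item joins S1 independently with prob. delta0
        S1 = {e for e in E if random.random() < delta0}
        S2 = set(E) - S1

        r0 = random.random()
        if r0 < delta1:
            # candidate policy pi^1: pick the best singleton in expectation
            e_star = max(E, key=f_single)
            return {e_star: observe(e_star)}
        elif r0 < delta1 + delta2:
            # candidate policy pi^2: density-greedy on S1
            return density_greedy(S1, costs, budget, marginal, observe)
        else:
            # candidate policy pi^3: density-greedy on S2
            return density_greedy(S2, costs, budget, marginal, observe)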


Algorithm 2. Sampling-based Adaptive Greedy Policy π sag

  S1 = ∅, S2 = ∅, ψ0 = ∅, V = ∅.
  Sample a number r0 uniformly at random from [0, 1].
  for e ∈ E do
      let re ∼ Bernoulli(δ0 )
      if re = 1 then S1 = S1 ∪ {e} else S2 = S2 ∪ {e}
  if r0 ∈ [0, δ1 ) then                         {Adopting the first candidate policy π 1 }
      F1 = {e | e ∈ S1 , f (e) > 0}
      while F1 ≠ ∅ do
          et ← arg maxe∈F1 Δ(e | ψt−1 )
          select et and observe Φ(et )
          ψt = ψt−1 ∪ {(et , Φ(et ))}
          V ← V ∪ {et }
          S1 = S1 \ {et }, F1 = {e ∈ S1 | V ∪ {e} ∈ I and Δ(e | ψt−1 ) > 0}, t ← t + 1
  else                                          {Adopting the second candidate policy π 2 }
      F2 = {e | e ∈ S2 , f (e) > 0}
      while F2 ≠ ∅ do
          et ← arg maxe∈F2 Δ(e | ψt−1 )
          select et and observe Φ(et )
          ψt = ψt−1 ∪ {(et , Φ(et ))}
          V ← V ∪ {et }
          S2 = S2 \ {et }, F2 = {e ∈ S2 | V ∪ {e} ∈ I and Δ(e | ψt−1 ) > 0}, t ← t + 1

4 k-System Constraint

4.1 Algorithm Design

We next present a Sampling-based Adaptive Greedy Policy π sag subject to a k-system constraint. A detailed description of π sag is listed in Algorithm 2. π sag randomly picks one solution from two candidate policies, π 1 and π 2 , such that π 1 is selected with probability δ1 and π 2 is selected with probability 1 − δ1 . Both π 1 and π 2 follow a simple greedy rule to select items from two random sets, respectively. We next describe π 1 and π 2 in detail. All parameters will be optimized later.

Design of π 1 . Partition E into two disjoint subsets S1 and S2 (or E \ S1 ) such that S1 contains each item independently with probability δ0 . π 1 selects items only from S1 in a greedy manner as follows: In each round t, π 1 selects an item et with the largest marginal value from F1 conditioned on the current observation ψt−1 :

et ← arg maxe∈F1 Δ(e | ψt−1 )

where F1 = {e ∈ S1 |V ∪ {e} ∈ I and Δ(e | ψt−1 ) > 0}. Here V denotes the first t − 1 items selected by π 1 . After observing the state Φ(et ) of et , we update the partial realization using ψt = ψt−1 ∪ {(et , Φ(et ))}. This process iterates until F1 becomes an empty set.
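A minimal Python sketch of this greedy rule under a k-system constraint is given below; is_independent is a hypothetical membership oracle for the independence system (E, I), and marginal and observe are the same hypothetical oracles as in the earlier sketches.

    def greedy_ksystem(S, is_independent, marginal, observe):
        """Greedy selection used by pi^1 and pi^2 of Algorithm 2.

        is_independent(V) -- hypothetical oracle: True iff V is independent in (E, I)
        """
        psi, V = {}, set()
        remaining = set(S)
        while True:
            F = [e for e in remaining
                 if is_independent(V | {e}) and marginal(e, psi) > 0]
            if not F:
                break
            e_t = max(F, key=lambda e: marginal(e, psi))  # largest marginal value
            psi[e_t] = observe(e_t)
            V.add(e_t)
            remaining.discard(e_t)
        return psi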


Design of π 2 . Partition E into two disjoint subsets S1 and S2 (or E \ S1 ) such that S1 contains each item independently with probability δ0 . π 2 selects items only from S2 in the same greedy manner as used in the design of π 1 .

4.2 Performance Analysis

Before presenting the main theorem, we first provide a technical lemma from [3].

Lemma 3 [3]. Let π ∈ Ω denote a feasible k-system constrained policy that chooses items from S in the same greedy manner as used in the design of π 1 and π 2 , where S is a random set that is obtained by independently picking each item with probability σ. If f : 2E × OE → R≥0 is adaptive submodular with respect to p(φ), then (k + 1/σ)favg (π) ≥ favg (π opt @π).

Lemma 3, together with the design of π 1 and π 2 , implies the following two corollaries.

Corollary 3. If f : 2E × OE → R≥0 is adaptive submodular with respect to p(φ), then (k + 1/δ0 )favg (π 1 ) ≥ favg (π opt @π 1 ).

Corollary 4. If f : 2E × OE → R≥0 is adaptive submodular with respect to p(φ), then (k + 1/(1 − δ0 ))favg (π 2 ) ≥ favg (π opt @π 2 ).

Now we are ready to present the main theorem of this section.

Theorem 2. If f : 2E × OE → R≥0 is adaptive submodular with respect to p(φ), then for δ1 = 1/2 and δ0 = 1/2, we have favg (π sag ) ≥ (1/(2k + 4)) favg (π opt ).

Proof: Following the same proof as for (9), we have favg (π opt @π 1 ) + favg (π opt @π 2 ) ≥ favg (π opt )

(11)

Hence,

(k + 1/δ0 )favg (π 1 ) + (k + 1/(1 − δ0 ))favg (π 2 )
    ≥ favg (π opt @π 1 ) + favg (π opt @π 2 )
    ≥ favg (π opt )                                                     (12)

The first inequality is due to Corollary 3 and Corollary 4. The second inequality is due to (11). Recall that π sag randomly picks one solution from {π 1 , π 2 } such that π 1 is picked with probability δ1 and π 2 is picked with probability 1 − δ1 . If we set δ1 = (k + 1/δ0 ) / ((k + 1/δ0 ) + (k + 1/(1 − δ0 ))), then

favg (π sag ) = [(k + 1/δ0 )favg (π 1 ) + (k + 1/(1 − δ0 ))favg (π 2 )] / ((k + 1/δ0 ) + (k + 1/(1 − δ0 )))
             ≥ favg (π opt ) / ((k + 1/δ0 ) + (k + 1/(1 − δ0 )))

If we set δ0 = 1/2, then favg (π sag ) ≥ (1/(2k + 4)) favg (π opt ).

Remark: Recall that under the optimal setting, δ0 = 1/2, which indicates that π 1 is identical to π 2 . Thus, we can simplify the design of π sag such that it maintains only one policy π 1 . It is easy to verify that this simplified version of π sag and its original version have identical output distributions.

References
1. Amanatidis, G., Fusco, F., Lazos, P., Leonardi, S., Reiffenhäuser, R.: Fast adaptive non-monotone submodular maximization subject to a knapsack constraint. In: Advances in Neural Information Processing Systems (2020)
2. Chen, Y., Krause, A.: Near-optimal batch mode active learning and adaptive submodular optimization. In: ICML, vol. 28, no. 1, pp. 160–168 (2013)
3. Cui, S., Han, K., Zhu, T., Tang, J., Wu, B., Huang, H.: Randomized algorithms for submodular function maximization with a k-system constraint. In: International Conference on Machine Learning, pp. 2222–2232. PMLR (2021)
4. Fujii, K., Sakaue, S.: Beyond adaptive submodularity: approximation guarantees of greedy policy with adaptive submodularity ratio. In: International Conference on Machine Learning, pp. 2042–2051 (2019)
5. Golovin, D., Krause, A.: Adaptive submodularity: theory and applications in active learning and stochastic optimization. J. Artif. Intell. Res. 42, 427–486 (2011)
6. Gotovos, A., Karbasi, A., Krause, A.: Non-monotone adaptive submodular maximization. In: Twenty-Fourth International Joint Conference on Artificial Intelligence (2015)
7. Guillory, A., Bilmes, J.: Interactive submodular set cover. In: Proceedings of the 27th International Conference on Machine Learning, pp. 415–422 (2010)
8. Nemhauser, G.L., Wolsey, L.A., Fisher, M.L.: An analysis of approximations for maximizing submodular set functions-I. Math. Program. 14(1), 265–294 (1978)
9. Tang, S.: Price of dependence: stochastic submodular maximization with dependent items. J. Comb. Optim. 39(2), 305–314 (2019). https://doi.org/10.1007/s10878-019-00470-6
10. Tang, S.: Beyond pointwise submodularity: non-monotone adaptive submodular maximization in linear time. Theoret. Comput. Sci. 850, 249–261 (2021)
11. Tang, S., Yuan, J.: Influence maximization with partial feedback. Oper. Res. Lett. 48(1), 24–28 (2020)
12. Wolsey, L.A.: Maximising real-valued submodular functions: primal and dual heuristics for location problems. Math. Oper. Res. 7(3), 410–425 (1982)
13. Yuan, J., Tang, S.J.: Adaptive discount allocation in social networks. In: Proceedings of the 18th ACM International Symposium on Mobile Ad Hoc Networking and Computing, pp. 1–10 (2017)

Optimizing a Binary Integer Program by Identifying Its Optimal Core Problem - A New Optimization Concept Applied to the Multidimensional Knapsack Problem

Sameh Al-Shihabi1,2(B)

1 Industrial Engineering and Engineering Management Department, University of Sharjah, PO Box 27272, Sharjah, United Arab Emirates
[email protected]
2 Industrial Engineering Department, The University of Jordan, Amman, Jordan
[email protected]

Abstract. The core concept for solving a binary integer program (BIP) is about dividing the BIP's variables into core and adjunct ones. We fix the adjunct variables to either 0 or 1; consequently, we reduce the problem to the core variables only, forming a core problem (CP). An optimal solution to a CP is not optimal to the original BIP unless the adjunct variables are fixed to their optimal values. Consequently, an optimal CP is a CP whose associated adjunct variables are fixed to their optimal values. This paper presents a new optimization concept that solves a BIP by searching for its optimal CP. We use a hybrid algorithm of local search and linear programming to move from a CP to a better one until we find the optimal CP. We use our algorithm to solve 180 multidimensional knapsack (MKP) instances to validate this new optimization concept. Results show that it is a promising approach to investigate because we were able to find the optimal solutions of 149 instances, of which some had 500 variables, by solving several CPs having 30 variables only.

Keywords: Binary integer program · Core problem · Multidimensional knapsack · Local search

1 Introduction



The purpose of the core concept to solve a binary integer program (BIP), as stated in [14], is to reduce the original problem by only considering a core of items for which it is hard to decide whether they belong to the optimal solution or not, whereas the variables for all items outside the core are fixed to certain values, i.e., 0 or 1 for a BIP. Thus, instead of considering all the BIP variables, we optimize a reduced version of the problem after replacing the adjunct variables, i.e., variables not belonging to the set of core variables, with their most probable values in the optimal solution. Researchers have applied the core concept to solve


several well-studied problems like the knapsack problem (KP) (e.g., [4,12]), the multidimensional knapsack problem (MKP) (e.g., [9,11,14–16]), the set covering problem [2], and general BIPs [8]. Equation 1 shows a maximization BIP, where f, x, X, and S represent a real-valued objective function, a feasible solution, the feasible set, and the solution space, respectively. S consists of feasible and infeasible solutions, as defined in Eq. 2, where n is the number of variables. Maximize{f (x) | x ∈ X ∈ S = Bn }.

(1)

S = {(x1 , x2 , ..., xn ) | xi ∈ {0, 1}, 1 ≤ i ≤ n}.

(2)

When using the core concept, the solution space, S, is divided into two comprehensive and mutually exclusive subspaces. The first subspace is formed from the adjunct variables, S adj = Bd , and the second subspace is S core = Bc that consists of the core variables, where d and c represent the number of adjunct and core variables, respectively, and their sum is equal to n, i.e., n = d + c. We, therefore, represent the BIP’s solution xBIP as xBIP = (xadj , xcore ), and the objective function in Eq. 1 is f (xadj , xcore ). We also define sets N core and N adj to include the core and adjunct variables, respectively. As stated earlier, partial solution xadj is already known since all variables in N adj are fixed to either 1 or 0. We, however, search for the best values for variables in N core by solving the resulting core problem (CP), i.e. the problem whose decision variables are those in N core . Finding an optimal solution for CP, xopt core , does not mean finding the , unless the adjunct variables are fixed to their optimal solution of the BIP, xopt BIP opt opt opt opt optimal values in xBIP since xBIP = (xcore , xadj ). We call a CP an optimal CP if the associated adjunct variables are fixed to opt opt their optimal values in xopt BIP ; consequently, finding xcore means finding xBIP for this CP. Thus, the objective of this paper is to find an optimal CP such that its solution results in finding xopt BIP . Not only are we interested in finding an optimal CP, but we are also interested in finding a CP that has a small size. We try to identify an optimal CP of a small number of variables by solving a series of CPs. The CPs are defined based on LP-relaxation solutions, whereas we use a simple local search algorithm to move from a CP to another one. We, therefore, call our heuristic algorithm LS-CORE-LP. We cannot identify an optimal CP unless we find the optimal solution to the BIP itself. The sum of the CP solution and the partial solution resulting from fixing the adjunct variables need to equal the BIP’s optimal solution, f (xopt BIP ) = opt f (xopt core ) + f (xadj ), to call the CP an optimal CP. Thus, a by-product of the optimal core identification problem is finding the optimal solution to the BIP. Moreover, solution quality is expected to improve if the size of the problem is decreased, i.e., the probability that a heuristic or meta-heuristic find an optimal solution to the CP is higher than the BIP itself due to problem reduction. Lastly, the LS-CORE-LP heuristic algorithm is highly parallel; consequently, we can benefit from recent parallel computation capabilities to solve the different CPs in parallel to reduce computation time. In summary, we hope the LS-CORE-LP

30

S. Al-Shihabi

heuristic algorithm can be used to solve BIPs by identifying optimal CPs, as we demonstrate by solving 180 MKP instances. Similar to previous researchers who tested the core concept using MKP instances, we choose MKP as well. Given a set N = {1, 2, ..., n} of n items, where each item j ∈ N has profit pj , and set M = {1, 2, ..., m} of m resources, where the capacity of resource i ∈ M is bi , and each item j ∈ N consumes wi,j units of resource i ∈ M , MKP is about selecting a subset of items from N that maximizes Z, where Z is the sum of profits of the selected items, as shown in Eq. 3, without exceeding the available capacity of each resource i ∈ M , as shown in Eq. 4. We use a binary decision variable xj , ∀j ∈ N , as defined in Eq. 5, such that if xj = 1, then item j ∈ N is a member of the selected subset of N ; otherwise, it is not selected. MKP is an extension to KP that has one resource only, i.e., m = 1.  pj xj , (3) maxZ = j∈N

subject to, 

wi,j xj ≤ bi , ∀i ∈ M,

(4)

j∈N

xj ∈ {0, 1}, ∀j ∈ N.

(5)

As discussed earlier, we divide variables in set N into two mutually and exclusive sets N core and N adj . Fixing the variables in N adj to either 0 or 1 to generate xadj , we can define a CP as shown in Eqs. 6–8, where the × notation indicate numbers multiplication, as in the second term of Eq. 6.   cj xj + cj × xj (6) M aximize Z = j∈N core

s.t.



aij xj ≤ bi −

j∈N core



j∈N adj

aij × xj , ∀i ∈ M

(7)

j∈N adj

xj ∈ {0, 1}, ∀j ∈ N core

(8)

In the next section, we briefly review previous algorithms that used the core concept and show how we define a CP in the LS-CORE-LP heuristic. Section 3 shows how we find neighbouring CPs, whereas in Sect. 4, we introduce the LSCORE-LP heuristic. Experimental studies are presented in Sect. 5, followed by conclusions and future research in Sect. 6. The objective of this paper is not to introduce a new state-of-the-art algorithm to solve MKP; rather, it is introducing a new concept and prove it.

2

The Core Problem

In this section, we first review the literature related to the core concept. We then discuss how we create a CP in the LS-CORE-LP heuristic.

Experimenting with the Core Concept

2.1

31

Background

To define CPs we need to define an efficiency measure τj , ∀j ∈ N , that can distinguish core from adjunct variables. We also need to choose the size of the problem, i.e., how many variables to include in the CP and how many variables to fix. The efficiency measure that was used in [4] to define the CP of a KP relied on c c c the benefit/cost ratio, τj = ajj , whereas [11] replaced τj = ajj by τj = m ajij ×ri i=1 to identify the CP of a MKP, where ri is the surrogate multiplier of constraint i ∈ M . The surrogate multipliers were substituted by the dual values of the LP-relaxation solution in [14], which makes the efficiency measure dependant on the variables’ reduced costs. Thus, in this work, we use variables’ reduced costs to define the CP in agreement with previous research work (e.g., [2,7,9,14]). After defining the efficiency measure, we need to choose the N core size. Researchers have use a fixed core size that is not related to the problem size as in [1,4,13], a percentage of the problem size as in [10,14], or they consider the characteristics of the LP-relaxation solution as in [7]. In this paper, we choose a fixed number of variables to be included in N core after studying the effect of the core size of computation time. In this paper, we solve several CPs of fixed size to find the optimal CP in LS-CORE-LP heuristic. The algorithm suggested in this work is different from other core-based algorithms that allow core size or core variables changes such as the kernel search algorithms [3]), the relax-and-fix and fix-and-optimize algorithms [5] and algorithms implementing the local branching constraints that were suggested in [6]. The algorithm suggested in this paper adopts an essential idea from the CORe ALgorithm (CORAL) that was proposed in [9]. In CORAL, if the fixation of an adjunct variable to its complimentary value, i.e., adjunct variable xj is fixed to 1 − xj , leads to an improvement to the incumbent solution value, then this adjunct variable is added to the set of core variables. Thus, the size of the CP in CORAL keeps increasing. Instead of increasing the size of the CP in CORAL, the LS-CORE-LP heuristic defines neighbouring cores by fixing one of the variables to its complementary value, modifying the BIP, and then creating a new CP. 2.2

Creating Core Problems in LS-CORE-LP Heuristic

Algorithm 1 shows the steps needed to create a CP for an MKP instance in the proposed LS-CORE-LP heuristic. Algorithm 1 starts by solving the LPrelaxation of the BIP. The first CP is found by solving the starting BIP that include all variables; however, neighbouring CPs are found after modifying the starting BIP, as discussed in the next section. After solving the LP-relaxation of the BIP, we rank variables in a non-decreasing order based on their absolute reduced cost values. The CP is formed from the first r variables, while the rest of the variables are included in N adj . We fix variables in N adj to either 0 or 1 based on their reduced costs. Variables having negative and positive reduced costs are fixed to 0 and 1, respectively.

32

S. Al-Shihabi

Algorithm 1: Defining S core and xadj

1 2 3 4 5 6 7 8 9

Data: BIP and r Result: N core and xadj begin Solve the LP-relaxation problem of the BIP ; Rank variables based on |πj | in an ascending order; ; N core ← first r ranked variables ; N adj ← N/N core ; for j ∈ N adj do if πj < 0 then xadj = {x1 , x2 , ..., xj = 0 ..., , xd=n−r } ;

10

else xadj = {x1 , x2 , ..., xj = 1, ..., xd=n−r }

11

End ;

12

3

End ;

Neighbouring Core Problems

We start this section by reviewing the concept of neighbouring solutions and how it can be extended to define neighbouring CPs. We then present the algorithm that we use in the LS-CORE-LP heuristic. 3.1

Neighbouring Solutions and Neighbouring CPs

In a typical local search algorithm, we define the neighbourhood of solution x, N (x), as shown in Eq. 9. Generally, a neighbourhood N (x) is defined relative to a given metric (or quasi-metric) function δ(x, y) introduced in the solution space S, and a limit to this metric α. For example, the incumbent solution of , is a binary vector of size n. To find a neighbouring solution, a BIP, xincumb BIP y ∈ N (xincumb ), we might define δ(xincumb , y) in terms of the distance between and y, where distance is defined in terms of the number of solutions xincumb BIP and y. If we use α = 1, then variables that have different values in xincumb BIP y ∈ N (xincumb ) is a neighbouring solution that has the same values of variables in , except only for one variable. For example, if n = 3, α = 1 and xincumb = xincumb BIP BIP {1, 1, 0}, then N (xincumb ) = {{ 0, 1, 0}, {1, 0, 0}, {1, 1, 1}}. If, however, α = 2, then N (xincumb ) = {{ 0, 0, 0}, {0, 1, 1}, {1, 0, 1}}. N (x) = {y ∈ X|δ(x, y) ≤ α}

(9)

We define neighbouring CPs, CPneighb ∈ N (CP ), similar to neighbouring adj is the set solutions y ∈ N (x), as shown in Eqs. 10 and 11, where NCP neighb CP

of adjunct variables associated with CPneighb , and xj neighb is the values to adj which adjunct variable j ∈ NCP is fixed in CPneighb ∈ N (CP ). For xj , neighb adj it is the value of variable j ∈ NCP in the solution associated with solving neighb

Experimenting with the Core Concept

33

the current CP. Note that for variable j, it needs to be an adjunct variable in CPneighb ∈ N (CP ); however, it can be a core or adjunct variable in the solution associated with solving the current CP. Thus, the δ metric used to define neighbouring CPs depends on the variables that are fixed to their complementary values in CPneighb compared to their values in the solution associated with the current CP. Henceforth, we call these variables catalyst variables and define set C to include these catalyst variables. Consequently, the distance measure δ is simply the cardinality of set C, |C|. N (CP ) = {CPneighb |δ(CP, CPneighb ) ≤ α}  CP |xj − xj neighb | δ(CP, CPneighb ) = adj j∈{NCP

neighb

(10) (11)

}

Illustrative Example. Assume a four-variable MKP such that the current CP only includes the last two variables, i.e., N adj = {x1 , x2 } and N core = {x3 , x4}. Assume also that the solution associated with this CP is x = (1, 0, 1, 1). The we can have four neighbouring CPs such that each CP fixes one of the four variables to its complementary value. Consequently, if we associate the first neighbouring adj and x1 = 0. CP, CP1 , with variable x1 , then we should have x1 ∈ NCP 1 3.2

Defining Neighbouring Core Problems

Algorithm 2 shows the steps needed to create a neighbouring CP at a distance |C| from the current CP. The inputs to Algorithm 2 include the best solution found by solving the current CP, X best , the catalyst variable set, C, and the core problem size r. We first create a partial solution xC where we fix the variables’ values in C to their complementary values in xbest and remove these variables from the set of decision variables. We create a modified problem, BIP mod, by enforcing partial solution xC to the BIP. We then find the CP of BIP mod of size r by using Algorithm 1. Variables in C are then added to the adjunct mod , where they are fixed to variables found by Algorithm 1 for BIP mod, xBIP adj their complementary values.

4

The LS-CORE-LP Heuristic Algorithm

This section describes first a generic model for the LS-CORE-LP algorithm without specifying a solution algorithm to solve the CPs. We then discuss how neighbouring CPs can be defined for a BIP like MKP. Finally, we show how we use the default B&B algorithm of Cplex12.12 to solve the MKP instances. 4.1

A Generic LS-CORE-LP Algorithm

Algorithm 3 explains the main steps of the LS-CORE-LP algorithm. The algorithm’s inputs are the BIP, neighbourhood definition N , and the core size r,

34

S. Al-Shihabi

Algorithm 2: Finding a neighbouring core to a CP

1 2 3 4 5 6 7

Data: xbest , α and r Result: N core and xadj begin xC ← create partial solution by fixing variables in C to their complementary values in xbest ; N ← N\ C ; BIPmod← fix variables in C to their values in xC ; core BIP mod NBIP ← algorithm 1 with inputs BIPmod and r; mod and xadj BIP mod xadj = (xadj , xC ) ; End ;

whereas the output of Algorithm 3 is the best BIP solution. We use the firstascent strategy to move to a better CP in Algorithm 3; however, users can use the steepest-ascent approach as well. Lines 2–4 of Algorithm 3 show the initialization steps of the LS-CORE-LP heuristic. Algorithm 3 starts by defining the first CP, which is also CP incumb , relying on Algorithm 1. After defining and solving the initial CP, we get the . We then start the main outer loop of the first incumbent solution, xincumb BIP algorithm, as shown in lines 5–21. Time or the number of solved CPs could be good choices for a termination criterion for this loop. We then initialize counter to 1 before starting the inner loop. The inner loop of the algorithm, lines 8–18, checks solutions associated with neighbouring CPs of CP incmb , as shown in lines , then we update 10–12. If one of the solutions is better than the current xincumb BIP and xincumb BIP 4.2

Defining Neighbouring CPs

We can easily use classical neighbouring solutions operators to define neighbouring CPs. For example, we use the two neighbourhood defining operations in Eqs. 12 and 13 to solve the MKP instances. In both neighbourhoods, we define that δ(CP incumb , CPneighb ) in terms of variables having a value of 1 in xincumb BIP are flipped to 0 in CPneighb . = 1)} N1 (CP incumb ) = {xincumb ⊕ F lip(q|xincumb q

(12)

= 1 and xincumb = 1)} (13) N2 (xincumb ) = {xincumb ⊕ F lip(q and p|xincumb q p 4.3

LS-CORE-LP for MKP

In our LS-CORE-LP implementation to solve MKP, we include all variables whose value is 1 in solution xincumb in set O. We also rank variables in O in an BIP ascending order based on their reduced costs. Intuitively, variables having low . reduced costs, approaching 0, might be fixed to the wrong values in xincumb BIP This step expedites the algorithm execution, mainly that we use a first-ascent

Experimenting with the Core Concept

35

Algorithm 3: LS-CORE-LP algorithm

1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23

Data: BIP , N , and r Result: xincumb BIP begin CP incumb ←N core , xadj ← Algorithm 1(BIP,r) ; xincumb ← optimize CP incumb ; core incumb xBIP ← (xincumb , xadj ) ; core while termination criterion not met ; do counter=1; ; while counter ≤ |N (CP incumb )| ; do N

Ccounter ←− CP incumb ; CPcounter ←N core , xadj ← Algorithm 2(xincumb ,Ccounter ,r) ; BIP xbest ← optimize CP ; counter counter incumb if xbest ; counter > xBIP then CP incumb = CPcounter ; xincumb =xbest counter ; BIP break ; end end Check termination criterion ; end End ; end

move strategy. To further accelerate the algorithm’s execution, we do not change the values of all variables; instead, we choose a percentage, Q% of O, i.e., the number of solved CPs is Q × |O|. Thus, the termination criterion depends on Q%, the number of solved CPs. To solve CPs, we use the default B$B solver of CPLEX12.12. Before solving any CP, if the LP-relaxation value of the BIPmod is less than f (xincumb ), then we do not solve this CP. Moreover, if the UB of the core problem, as found by the branch and bound algorithm of CPLEX12.12, becomes less than f (xincumb ), then we stop the IP solver since we would not get a better solution than f (xincumb ). We use an exact solver to solve the CPs to block the effects of the algorithm used to solve the CPs.

5

Experiments

We use six sets of MKP classical benchmark instances that we download from http://people.brunel.ac.uk/mastjjb/jeb/orlib/mknapinfo.html. The six sets are mknapcb1 – mknapcb6 whose characteristics are summarized in Table 1, and each set has 30 instances. Thus, we solve 180 MKP instances. Column 2 of Table 1

36

S. Al-Shihabi

shows the number of columns and rows, respectively, of any instance in any of the sets shown in column 1. In columns 3–6, we report some descriptive statistics about the minimum core sizes, i.e., N core only includes variables having zero reduced costs. It is interesting to note that for set mknapcb3, the core elements are only five variables out of 500. Table 1. Characteristics of the benchmark instances Benchmark Size

Core size statistics

Set

n×m

Min Max Avg.

mknapcb1

100 × 5

5

5

5

mknapcb2

250 × 5

5

5

5

mknapcb3

500 × 5

5

5

5

mknapcb4

100 × 10

9

10

9.8

mknapcb5

250 × 10 10

10

10

mknapcb6

500 × 10 10

10

10

We use set mknapcb5 to understand the effect of the core size, r, on both computation time and solution quality. Thus, we conduct a simple experiment to check different core sizes, r, and only solve the first CP. Figure 1 summarizes the results of this experiment. The number of optimal solutions found increases when we increase r, as shown in Fig. 1a. For r = 10, which only includes columns having π = 0 based on Table 1, the LS-CORE-LP heuristic identifies one optimal solution by solving the first CP. Increasing the core size to 45 variables, we obtain 16 optimal solutions. This increase in core size, however, caused an increase in the computation time, as shown in Fig. 1b. Computation time for some of the instances exceeded 10,000 s for r = 50. The processor used to conduct all the experiments reported in this paper is 2.5 GHz Core (TM) i5-3210M.

Fig. 1. Results of solving the first CPs of set mknapcb5: (a) number of optimal solutions as a function of r; (b) computation time needed to solve the first CP.

Based on the above results, we choose a core size of 30 variables to optimize all the instances because the computation time needed to solve a core problem of 30 variables is 2.3 s. Moreover, since we terminate the core optimization problems if their UB(s), which we continuously check, are inferior to f incumb , the average expected solution time per new core is expected to be less than 2.3 s.

Experimenting with the Core Concept

37

Using r = 30 variables, we try next to find the best percentage of neighbouring CPs to solve, which we denoted by Q%, for N1 only. Figure 2 shows the effect of Q% on both computation time and the number of optimal solutions found. As seen in Fig. 2a, the number of optimal solutions has increased until Q = 30%; however, the number of optimal solutions did not increase for Q > 30%. Computation time, as shown in Fig. 2b, has linearly increased with the increase in Q%. Thus, we choose Q = 40% when solving benchmarking instances.

Fig. 2. Effect of CP on computation time and number of found optimal solutions: (a) effect of CP on the number of found optimal solutions; (b) effect of CP on computation time.

Table 2 summarizes the optimization results of each set, while the detailed results can be downloaded from http://www.github.com/samehShihabi/ Knapsack-detailed-results. Column 2 of Table 2 reports the number of optimal solutions that were found using N (x) = N1 (x) only. The number of optimal solutions found increases with the decrease in the number of variables and constraints. Consider, for example, sets mknapcb2 and mknapcb5 each having 250 variables, the number of optimal solutions that the algorithm finds decreased from 21 to 14 with the increase in the number of constraints from 5 to 10. Moreover, comparing the sets with the same number of constraints, the number of optimal solutions decreases with the increase in the number of variables. Table 2. Characteristics of benchmark instances Benchmark N1 (x) Set Optimal number

N1 (x) and N2 (x) Time(s) Updates Optimal changes number

Deviation %

mknapcb1

28

0.08

17.9

0.4

30

mknapcb2

21

0.03

124.9

1.1

28

mknapcb3

11

0.01

297.9

1.7

21

mknapcb4

28

0.1

74.2

1.8

30

mknapcb5

14

0.03

403.1

2.9

22

mknapcb6

0

0.03

705.9

3.4

18

38

S. Al-Shihabi

Column 3 of Table 2 shows the average deviations from the optimal solutions. This average only considers instances for which the algorithm failed to obtain the optimal solution. The average deviations did not exceed 0.1%. We also report in Column 4 average computation times. Intuitively, the time increases with the number of variables and constraints, especially when we use an exact solver. Moreover, as a termination criterion, we need to solve Q% × |O| optimization problems, and |O| also increases with the number of variables and constraints as well. Note that the computation times would drastically drop if we use parallel processors to solve the optimization problems associated with the different CPs since the result of one would not depend on any other result, except for the initial CP. Column 5 of Table 2 shows the average number of times that the incumbent solutions were updated. This number is needed to understand the effect of the local search. For set mknapcb1, the average changes are 0.4, which means that reasonable solutions were obtained at the start of the algorithm, and the local search did not have a significant impact in improving the initial solutions. However, for set mknapcb6, the average number of core changes was 3.4, which means that the local search had a significant impact on the solution quality. The last column of Table 2 shows the number of optimal solutions that were obtained by sequentially searching two neighbourhoods, N (x) = N1 (x) ∪ N2 (x). As a termination criterion, we stop the algorithm if we find the optimal solution or after 1,000 s. It is clear that alternating between the two neighbourhoods has improved the results; however, computation time has significantly increased. Suppose the termination criterion was changed so that we check both neighbourhoods for any improvement. In that case, we need to solve 2 CP × |O| + CP × (CP − 1) × |O| optimization problems. It is better to use a meta-heuristic or heuristic to reduce computation time for a large number of CPs, especially that we do not use parallel processors.

6

Conclusion and Future Work

This paper introduces a new optimization concept that solves BIPs by searching for their optimal CPs. An optimal CP is a reduction of the problem such that adjunct variables, i.e., variables not included in the core variables’ set, are fixed to their values in the optimal solution of the BIP itself. Consequently, finding the CP optimal solution means finding the BIP’s optimal solution. To search for an optimal CP, we introduce a heuristic algorithm where CPs are defined based on LP-relaxation solutions, and local search concepts are used to move from one CP to the other to find the optimal CP. Thus, we name the suggested heuristic LS-CORE-LP. We test this new optimization concept by solving 180 MKP instances. We use an exact solver to solve the different, small CPs to block the effects of the used optimization algorithm. The LS-CORE-LP heuristic could find the optimal solutions of 149 instances without solving the whole problem. The different CPs can be solved in parallel to reduce computation time, where CPs having 30 variables can be solved in almost two seconds using an exact commercial solver.

Experimenting with the Core Concept

39

This algorithm can be used to solve other BIPs where the two main components of the algorithm are LS and LP-relaxation. However, researchers and practitioners are free in using other algorithms in solving core problems, defining neighbourhoods, defining cores, and implementing LS strategies. The core size is an important algorithm component that requires further investigation. Moreover, future research should target other problems and other implementation alternatives, including the use of parallel processors.

References 1. Al-Shihabi, S.: Backtracking ant system for the traveling salesman problem. In: Dorigo, M., Birattari, M., Blum, C., Gambardella, L.M., Mondada, F., St¨ utzle, T. (eds.) ANTS 2004. LNCS, vol. 3172, pp. 318–325. Springer, Heidelberg (2004). https://doi.org/10.1007/978-3-540-28646-2 30 2. Al-Shihabi, S.: A hybrid of max-min ant system and linear programming for the k-covering problem. Comput. Oper. Res. 76, 1–11 (2016) 3. Angelelli, E., Mansini, R., Speranza, M.G.: Kernel search: a general heuristic for the multi-dimensional knapsack problem. Comput. Oper. Res. 37(11), 2017–2026 (2010) 4. Balas, E., Zemel, E.: An algorithm for large zero-one knapsack problems. Oper. Res. 28(5), 1130–1154 (1980) 5. Federgruen, A., Meissner, J., Tzur, M.: Progressive interval heuristics for multiitem capacitated lot-sizing problems. Oper. Res. 55(3), 490–502 (2007) 6. Fischetti, M., Lodi, A.: Local branching. Math. Program. 98(1), 23–47 (2003) 7. Hill, R.R., Cho, Y.K., Moore, J.T.: Problem reduction heuristic for the 0–1 multidimensional knapsack problem. Comput. Oper. Res. 39(1), 19–26 (2012) 8. Huston, S., Puchinger, J., Stuckey, P.: The core concept for 0/1 integer programming. In: Proceedings of the Fourteenth Symposium on Computing: the Australasian Theory-Volume 77, pp. 39–47. Australian Computer Society, Inc. (2008) 9. Mansini, R., Speranza, M.G.: CORAL: an exact algorithm for the multidimensional knapsack problem. INFORMS J. Comput. 24(3), 399–415 (2012) 10. Martello, S., Toth, P.: A new algorithm for the 0–1 knapsack problem. Manage. Sci. 34(5), 633–644 (1988) 11. Pirkul, H.: A heuristic solution procedure for the multiconstraint zero-one knapsack problem. Naval Res. Logistics (NRL) 34(2), 161–172 (1987) 12. Pisinger, D.: An expanding-core algorithm for the exact 0–1 knapsack problem. Eur. J. Oper. Res. 87(1), 175–187 (1995) 13. Pisinger, D.: Core problems in knapsack algorithms. Oper. Res. 47(4), 570–575 (1999) 14. Puchinger, J., Raidl, G.R., Pferschy, U.: The core concept for the multidimensional knapsack problem. In: Gottlieb, J., Raidl, G.R. (eds.) EvoCOP 2006. LNCS, vol. 3906, pp. 195–208. Springer, Heidelberg (2006). https://doi.org/10. 1007/11730095 17 15. Puchinger, J., Raidl, G.R., Pferschy, U.: The multidimensional knapsack problem: structure and algorithms. INFORMS J. Comput. 22(2), 250–265 (2010) 16. Vimont, Y., Boussier, S., Vasquez, M.: Reduced costs propagation in an efficient implicit enumeration for the 01 multidimensional knapsack problem. J. Comb. Optim. 15(2), 165–178 (2008)

A Comparison Between Optimization Tools to Solve Sectorization Problem

Aydin Teymourifar1,2(B), Ana Maria Rodrigues2,3, José Soeiro Ferreira2,4, and Cristina Lopes3,5

1 CEGE - Centro de Estudos em Gestão e Economia, Católica Porto Business School, Porto, Portugal
2 INESC TEC - Institute for Systems and Computer Engineering Technology and Science, Porto, Portugal
[email protected], {aydin.teymourifar,ana.m.rodrigues,jsf}@inesctec.pt
3 CEOS.PP - Center for Organizational and Social Studies of Porto Polytechnic, Porto, Portugal
[email protected]
4 FEUP - Faculty of Engineering, University of Porto, Porto, Portugal
5 ISCAP - Institute for Accounting and Administration of Porto, Porto, Portugal

Abstract. In sectorization problems, a large district is split into small ones, usually meeting certain criteria. In this study, at first, two single-objective integer programming models for sectorization are presented. Models contain sector centers and customers, which are known beforehand. Sectors are established by assigning a subset of customers to each center, regarding objective functions like equilibrium and compactness. Pulp and Pyomo libraries available in Python are utilised to solve related benchmarks. The problems are then solved using a genetic algorithm available in Pymoo, which is a library in Python that contains evolutionary algorithms. Furthermore, the multi-objective versions of the models are solved with NSGA-II and RNSGA-II from Pymoo. A comparison is made among solution approaches. Between solvers, Gurobi performs better, while in the case of setting proper parameters and operators the evolutionary algorithm in Pymoo is better in terms of solution time, particularly for larger benchmarks.

Keywords: Sectorization · Optimization · Pulp · Pyomo · Gurobi · Pymoo

1 Introduction

The main objective of sectorization problems (SPs) is to divide a large territory into smaller sectors according to criteria like equilibrium and compactness [1–6]. They have many applications in designing territories for airspace [7], commerce [8], electrical power [9], forest [10], healthcare [11,12], sales [13,14], school [15], social service [16], politic [17,18]. Real-life applications of SPs are usually large, and models are generally in the form of linear, quadratic, and non-linear optimization. As in many other c The Author(s), under exclusive license to Springer Nature Switzerland AG 2022  H. A. Le Thi et al. (Eds.): MCO 2021, LNNS 363, pp. 40–50, 2022. https://doi.org/10.1007/978-3-030-92666-3_4

A Comparison Between Optimization Tools to Solve Sectorization Problem

41

fields of operational research, the efficiency of methods and solution times are important matters. In this study, single-objective (SO) and multi-objective (MO) models are presented for a basic SP, in which, centers are known beforehand. Then different solvers and metaheuristics are used for their solution. The main differences of this study from the previous ones can be summarized as follows: Pulp, Pyomo, intlinprog, Gurobi, and a genetic algorithm (GA) from Pymoo are used to solve these models and the results are compared. MO model is discussed and the nondominated sorting genetic algorithm (NSGA-II) and the reference point based non-dominated sorting genetic algorithm (RNSGA-II) are used to solve it. It is shown that the RNSGA-II, which has not been used to solve sectorization problems before, has not a good performance as much as NSGA-II.

2

Models Description

This section describes the models of the SP. In a region, there are n points to be assigned to k sectors. The coordinates of points and center of sectors are known beforehand. The related decision variable is defined as follows:  1, if point i is assigned to sector j xij = i = 1, ..., n, j = 1, ..., k. (1) 0, otherwise. Some of the used notations are summarized in Table 1. Table 1. Used notations

Notation Description n

Total number of points

k

Total number of sectors

xij

Decision variable about assignment of point i to sector j

cj

Center of sector j

dicj

Euclidean distance between each two points i and the center of sector j

Q

Target number of points per sector

qj

Number of points in sector j



Average number of points in sectors

Dj

Total distance of the points in sector j from the center

τequ

Tolerance for the equilibrium criteria

τcom

Tolerance for the compactness criteria

f1

Single-objective function related to compactness

f2

Single-objective function related to equilibrium

f

Multi-objective function related to compactness and equilibrium

ST

Solution time

42

2.1

A. Teymourifar et al.

SC: SO Model to Minimize Compactness

In this model, as defined in Eq. 2, compactness is the objective function, while equilibrium is satisfied with a constraint. Compactness is defined based on measuring the total distance of the points to the center of the assigned sector [19,20]. f1 = M inimize

k  n 

dicj xij

(2)

j=1 i=1

There are also some constraints, which are defined as in Eqs. 3, 4, 5 and 6. Each point should be assigned to only one sector, which is ensured with Constraint 3. k 

xij = 1,

∀i = 1, ..., n

(3)

j=1

Constraint 4 is used to assign at least one point to each sector. n 

xij ≥ 1,

∀j = 1, ..., k

(4)

i=1

With Constraint 5, lower and upper limits are determined for the number of points that can be assigned to each sector, and in this way, the equilibrium is provided within an interval. Q(1 − τequ ) ≤

n 

xij ≤ Q(1 + τequ ),

∀j = 1, ..., k

(5)

i=1

where 0 ≤ τequ ≤ 1 and Q =  nk  is the target number of points per sector. The domain of the decision variable is defined in Constraint 6. xij ∈ {0, 1}, 2.2

∀i = 1, ..., n,

j = 1, ..., k.

(6)

SE: SO Model to Maximize Equilibrium

In this model, we aim to maximize equilibrium. So, the objective function, as defined in Eq. 7, is to minimize the variance of the number of points in each sector. Compactness is provided by Constraint 8. f2 = M inimize where qj =

n i=1

xij and q¯ =

n  i=1

k  (qj − q¯)2 k−1 j=1

(7)

k

qj j=1 k .

dicj xij ≤ Dj (1 − τcom ),

∀j = 1, ..., k

(8)

A Comparison Between Optimization Tools to Solve Sectorization Problem

43

n where Dj = i=1 dicj , ∀j = 1, ..., k and 0 ≤ τcom ≤ 1. Also, Constraints 3, 4, 5, and 6 are valid. 2.3

MCE: MO Model to Minimize Compactness and to Maximize Equilibrium

In this model, the objective function as defined in Eq. 9, considers both compactness and equilibrium, based on the Pareto optimality concept [20]. f = M inimize (f1 , f2 )

(9)

Constraints 3, 4, 5, 6, and 8 are valid.

3

Solution Approaches

We use Pulp, Pyomo, intlinprog, Gurobi, and Pymoo to solve the models. Pulp is an open-source package in Python that can solve linear models. Pyomo is an open-source package in Python that can solve linear, quadratic and non-linear models. Intlinprog is a function in the optimization toolbox of MATLAB to solve linear models. Pymoo is an open-source library in Python that contains single-objective evolutionary algorithms (SOEAs) and multi-objective evolutionary algorithms (MOEAs) and many more features related to optimization such as visualization and decision making [21]. From SOEAs, GA and from MOEAs, NSGA-II and RNSGA-II are used. GA is one of the most famous evolutionary algorithms that starts with an initial population generation, and then selection, crossover, and mutation operators are used to derive generations and this process continues until the termination condition is reached. In NSGA-II, the populations of parent and offspring are merged and sorted, then using the non-dominated sorting, Pareto fronts are determined. At the next step, the individuals of the new population are selected. The crowing distance is the concept used for selection. This process continues until the condition of termination is reached [22,23]. The stages of RNSGA-II are similar to NSGA-II, but it uses a modified survival selection, which is frontwise. The front is split since all individuals cannot survive. The selection of individuals on the splitting front is done based on rank, which is calculated as the euclidean distance to each reference point. The rank of the closest solution to a reference point is equal to 1. The ranking of each solution is the best rank assigned to it [24,25]. All codes are available via the corresponding author’s email address.

4

Experimental Results

We use a system with Intel Core i7 processor, 1.8 GHz with 16 GB of RAM. The results obtained for the SO and MO models are presented in Table 2 and 5,

44

A. Teymourifar et al.

respectively. Python 3.5, a trial version of MATLAB R2020b, and Gurobi 9.0.2 with an academic license are used to obtain the results. It is possible to use different solvers in Pyomo. The presented results in Table 2 are for ipopt. Similar results occur with other solvers that can solve quadratic models. Benchmarks are shown as n×m, where n and m are the numbers of points and sectors respectively. Benchmarks are generated as 200× 10, 500× 31, 1000× 76, and 2000× 200. The coordinates of both points and sectors centers are generated according to the Normal (50, 10) distribution. In GA, the half uniform crossover and a random mutation with a rate equal to 0.1 are used. Population size and the number of offsprings are equal to 100 and 10, respectively. The number of iterations is 200 but according to the settings, the search ends when the algorithm reaches sufficient convergence in fewer iterations. For benchmarks 200× 10, 500× 31, 1000× 76, and 2000× 200 the value of Q is equal to 20, 16, 13 and 10, respectively. For these benchmarks, feasible results are obtained when τequ is in intervals (0.2, 1), (0.27, 1), (0.45, 1) and (0.61, 1), while these intervals are (0, 0.89), (0, 0.95), (0, 0.91), and (0, 0.99) for τcom . The values of τequ are chosen to be 0.2, 0.27, 0.45 and 0.61, when for τcom they are 0.89, 0.95, 0.91 and 0.99. The reason for this choice is to create the tightest ranges. In the tables, the results that are found by the solvers as an optimal solution are made bold. If the solvers could not find a feasible solution within 15 min, the relevant place in the table is indicated by “-”. Since there are more parameters, variables, and constraints in real-life SPs, the models and benchmarks in this study can actually be considered as small ones. Therefore, they are expected to be solved in under 15 min. In the tables, solution times (ST ) are in seconds. For Model SC, intlinprog can provide the optimal solution for two benchmarks. In terms of achieving optimal results, with used operators, Pymoo does not outperform solvers. It doesn’t even find a feasible solution for two benchmarks. However, when the values of population size and the number of offsprings change, feasible solutions are obtained, which are shown in Tables 3 and 4. For Model SC, Gurobi has the best performance in terms of both results and computation time. In the model SE, which is quadratic, except for benchmark 200 × 10, within the determined time, Pyomo and Gurobi are not able to find any result. But for this model, the GA in Pymoo achieved results for all benchmarks at a reasonable time.

A Comparison Between Optimization Tools to Solve Sectorization Problem

45

Table 2. Comparison between the results for the SO models, obtained using Pulp, intlinprog, Pyomo, Gurobi, and Pymoo

200×10

500×31

Model f1 Pulp 1588.07 intlinprog 1542.71 Gurobi 1542.71 Pymoo (GA) 3361

SC ST 0 the considered problem remains NP-hard even if the following condition holds: 4 1+δ maxi∈N {ai , bi } ≤ Ω. On the other hand, [11] presents a polynomial-time algorithm for the case when   3.5 max ai + max 0.5 max ai , max bi ≤ Ω, (2) i∈N

i∈N

i∈N

which is an improvement of (1). This paper continues the research initiated in [10] where the two heuristics were introduced: a barrier heuristic that utilises the polynomial-time algorithm described in [10] and another heuristic (hereafter referred to as bin-packing heuristic) based on one of the methods of bin-packing [3]. Both heuristics have shown promising results. Below, we describe a new heuristic that is based on the polynomial-time algorithm presented in [11] and another variant of the bin-packing heuristic and compare the heuristics computationally with the heuristics presented in [10]. The paper is organised as follows. Section 2 describes the heuristics. Section 3 provides integer programming formulation for the problem. The results of computational experiments are presented in Sect. 4. Section 5 concludes the paper.

Algorithms for Flow Shop with Job-Dependent Buffer Requirements

2

65

Algorithms

In all algorithms, presented in this paper, we will use the WAIT algorithm which was described in [7]. In what follows, the WAIT algorithm will be used to construct a “permutation” schedule, i.e. the algorithm schedules jobs in the same given order (permutation) π on both machines.  The notion of a critical position was introduced in [7] for each job i such that 1≤u≤π−1 (i) aπ(u) > Ω. The critical position 1 ≤ ki ≤ n is defined as the smallest index v such that   aπ(u) − aπ(u) ≤ Ω. 1≤u≤π −1 (i)

1≤u≤v

Let π be the given permutation, and |π| signify the number of elements in π. Let t1 and t2 be the current starting  time on M1 and M2 , correspondingly; it is assumed that ki = −1 if for job i, 1≤u≤π−1 (i) aπ(u) ≤ Ω. The WAIT algorithm can be summarised as follows: Algorithm WAIT 1: set t1 = 0, t2 = aπ(1) , set j = |π|. 2: while j > 0 do 3: set i = π(1). 4: if ki = −1 then 5: set Si1 = t1 . 6: else 2 + bπ(ki ) }. 7: set Si1 = max{t1 , Sπ(k i) 8: end if 9: set t1 = Si1 + ai . 10: set Si2 = max{t2 , Si1 + ai }; t2 = Si2 + bi ; delete i from π; j = j − 1. 11: end while

2.1

Barrier Heuristic

The polynomial-time algorithm (hereafter referred to as P olyN ew algorithm), introduced in [11], constructs an optimal schedule, if (2) is satisfied. To construct a schedule for the general case, the proposed in this paper heuristic (hereafter referred to as BarrierN ew heuristic) utilises P olyN ew to obtain a permutation of jobs induced by the resulting schedule in P olyN ew. The polynomial-time algorithm, introduced in [10] (hereafter referred to as P olyOld algorithm), was used in the heuristic for the general case (hereafter referred to as BarrierOld heuristic). Below, the barrier heuristics are described, the details of the P olyN ew and P olyOld are available in [11] and [10], correspondingly. First, a schedule is constructed according to Johnson’s rule ignoring the buffer constraint: the set N is partitioned into two sets - L1 = {i ∈ N : ai < bi } and L2 = {i ∈ N : ai ≥ bi }; then all jobs from L1 are scheduled in a non-decreasing order of ai , followed by all jobs from L2 scheduled in a non-increasing order of bi ; J the resultant schedule provides an optimal value of the makespan [8]. Let Cmax be the maximum completion time provided by this schedule.

66

A. Kononov et al.

Define the following: – amax = max ai and bmax = max bi . i∈N

i∈N

J ] on machines – Idle1 and Idle2 - the total idle time in theinterval [0, Cmax  M1 and n J J − n M2 , correspondingly, i.e. Idle1 = Cmax − i=1 ai and Idle2 = Cmax i=1 bi .   Idle2 Idle2 – X and Y - sets of auxiliary jobs: |X| = bmax , ai = 0, bi = for any i ∈ X; |X|   Idle1 1 , ai = |Y | = aIdle , bi = 0 for any i ∈ Y . max |Y |

Let N  = N ∪ X ∪ Y and n = n + |X| + |Y |. Construct a permutation schedule σ  on this set as follows. First schedule all jobs from X, sequenced in arbitrary order; then all jobs from N , scheduled according to Johnson’s rule; lastly all jobs from Y in arbitrary order. Let π J be the permutation of all jobs in N  induced by σ  and π0 , π1 and π2 be the permutations of the jobs in     1 1 – L0 = X ∪ i ∈ L1 | ai ≤ amax , L1 = i ∈ L1 | ai > amax - for BarrierN ew; 2 2 – L1 = X ∪ {i ∈ L1 } - for BarrierOld; – L2 = L2 ∪ Y - both barrier heuristics;

respectively, induced by π J . The P olyN ew algorithm uses the permutations π0 , π1 and π2 and the P olyOld algorithm uses the permutations π1 and π2 interchangeably, to construct the permutation π  in a contrast to usual single-list method. It has been shown in [11] for P olyN ew algorithm and in [10] for the P olyOld algorithm, that if (2) is satisfied for P olyN ew and (1) is satisfied for the P olyOld, the J , thus implying makespan of the resulting permutation schedule equals to Cmax the optimality of the schedule. Let π  be the permutation produced by P olyN ew or P olyOld algorithm. Assume that N ew = T RU E if the BarrierN ew heuristic is run, and that N ew = F ALSE, if the BarrierOld heuristic is run. Let W AIT (π) signify the value of the makespan provided by the feasible schedule constructed by WAIT algorithm with the order π on both machines. Barrier heuristic 1: set π  = ∅, set π = ∅. 2: define sets X and Y . 3: if N ew then 4: define π0 , π1 , π2 . 5: π  =P olyN ew(π0 , π1 , π2 ). 6: else 7: define π1 , π2 . 8: π  = P olyOld(π1 , π2 ). 9: end if 10: set n = n + |X| + |Y |, j = 1. 11: for i = 1 to n do / X) AND (π  (i) ∈ / Y ) then 12: if (π  (i) ∈ 13: π(j) = π  (i), j = j + 1. 14: end if 15: end for 16: return W AIT (π).

Algorithms for Flow Shop with Job-Dependent Buffer Requirements

2.2

67

Bin-Packing Approach

As the name suggests, the bin-packing heuristic utilises the idea of “packing” jobs into a buffer in the way allowing to process all the jobs as fast as possible. The bin-packing heuristic (hereafter referred to as BinsU p heuristic) was introduced in [10]. It can be summarised as follows: – Each job is allocated to a bin of size Ω: if it does not “fit” into any of the existing bins, a new bin is created; – Within each bin, jobs are sorted according to Johnson’s rule, which provides an optimal makespan for the set of jobs within each bin [8]; – The bins are ordered in a non-decreasing order of their total buffer requirement; – The resulting permutation of all jobs is used by WAIT algorithm with this permutation on both machines.

Obviously, we could use a non-increasing order of a total bin’s buffer requirement to obtain the permutation - which gives a rise to another variant of the bin-packing heuristic - denote the bin-packing heuristic with the non-increasing order of a total bin’s buffer requirement as BinsDown. Let U p = T RU E, if we run BinsU p bin-packing heuristic, and U p = F ALSE, if BinsDown heuristic is run; let Sort(P riority, P erm) be the procedure that sorts a set of elements in a non-increasing order of elements’ priorities P riority; the resultant permutation of the elements is recorded in the permutation P erm; denote by JohnsonSort(Set, P erm) the procedure that sorts a set of jobs Set ⊆ N according to Johnson’s rule and the resultant permutation is recorded in P erm and let W AIT (π) signify the value of the objective function provided by the feasible schedule constructed by WAIT algorithm with the order π on both machines; denote by Bin[i] ⊆ N a subset of jobs i and let load1i and load2i signify the sum of aj and bj , correspondingly, of all jobs j ∈ Bin[i]. Let Nb be the number of “bins”, and initially Nb = 0. Bin-packing heuristic 1: set Nb = 0; 2: for all i ∈ N do 3: set priority[i] = ai 4: end for 5: Sort(priority, π); 6: for i = 1 to N do 7: allocate = T RU E; 8: if aπ(i) > Ω2 then 9: Bin[Nb ] = π(i); load1Nb = aπ(i) ; load2Nb = bπ(i) , Nb = Nb + 1; 10: else 11: while allocate do 12: set j = 0; 13: while j < Nb do 14: if load1j + aπ(i) ≤ Ω and load2j + bπ(i) ≤ Ω then 15: Bin[j] = Bin[j] ∪ π(i); load1j = load1j + aπ(i) ; load2j = load2j + bπ(i) , 16: j = Nb ; allocate = F ALSE; 17: else 18: j = j + 1;

68 19: 20: 21: 22: 23: 24: 25: 26: 27: 28: 29: 30: 31: 32: 33: 34: 35: 36: 37: 38: 39: 40: 41: 42: 43: 44:

3

A. Kononov et al. end if end while if allocate then Bin[Nb ] = π(i); load1Nb = aπ(i) ; load2Nb = bπ(i) Nb = Nb + 1; allocate = F ALSE; end if end while end if end for if U p then for all 0 ≤ i < Nb do 1 set priority[i] = load 1; i end for else for all 0 ≤ i < Nb do set priority[i] = load1i ; end for end if Sort(priority, π bins ); set π = ∅, k = 1; for i = 0 to Nb − 1 do JohnstonSort(Bin[π bins (i)], ππJbins (i) ); for j = 1 to |Bin[π bins (i)]| do π(k) = ππJbins (i) (j); k = k + 1; end for end for W AIT (π).

Integer Formulation

This integer formulation is an adaptation of the model, discussed in [7], for the objective function of the maximum completion time. This formulation is used to run computational experiments on CPLEX. Denote by T the planning horizon, i.e. T is a non-negative number such that  for an optimal schedule σ the value of makespan Cmax (σ) ≤ T., and let T = i∈N (ai + bi ). Define xm it , i ∈ N , 0 ≤ t < T , m ∈ {1, 2}, as  1, if Sim = t; = xm it 0, otherwise. T −1 2 Denote by Cmax = maxi∈N t=1 txit + bi . The considered scheduling problem can be formulated as: min Cmax (3)

Algorithms for Flow Shop with Job-Dependent Buffer Requirements

69

subject to T −1 

xm it = 1,

for 1 ≤ i ≤ n and m ∈ {1, 2}

(4)

t=0 n 

t 

x1iτ ≤ 1,

for 0 ≤ t < T, m ∈ {1, 2}

(5)

x2iτ ≤ 1,

for 0 ≤ t < T, m ∈ {1, 2}

(6)

i=1 τ =max{0,t−ai +1} n 

t 

i=1 τ =max{0,t−bi +1} T −1 

tx2it −

t=1 n 

 ai

tx1it ≥ ai ,

t=1 t 

x1iτ

τ =0

i=1 T −1 

T −1 



t−bi



{0, 1},

(7)

≤ Ω,

(8)

x2iτ

for 0 ≤ t < T

τ =0

tx2it + bi ≤ Cmax ,

t=1 xm it ∈

for 1 ≤ i ≤ n

for 1 ≤ i ≤ n

for 1 ≤ i ≤ n, 0 ≤ t < T, m ∈ {1, 2};

(9) Cmax ≥ 0

(10)

The constraints (4) signify that each job can start only once on each machine, (5) and (6) imply that only one operation is processed on each machine at a time, (7) ensure the correct order of operations for each job, (8) enforce that the overall buffer capacity is not exceeded at any time, (9) define the value of Cmax and (10) are non-negativity constraints.

4

Computational Experiments

The computational experiments aimed to compare the barrier and bin-packing heuristics. The experiments were conducted by the second author on a personal computer with Intel Core i5 processor CP U @1.70 Ghz, using Ubuntu 14.04 LTS, with base memory 4096 MB. All algorithms were implemented in C programming language. The test instances were generated randomly with processing times chosen from the interval [1, 20]. There were 50 instances in each tested set. The experiments were run for sets of instances with 50, 100 and 200 and for buffer sizes Ωk = k × pmax , where pmax = max{amax , bmax } is the maximum processing time of an instance, and k ∈ {1.0, 1.25, 1.5, 1.75, 2.0, 2.5, 4.5}. There was a 15 minute time limit per instance for all heuristics. The lower bound on the optimal value of the makespan was calculated as described in [10]. This lower bound takes the buffer capacity explicitly into account, and for smaller buffer sizes it outperforms the lower bound provided by the schedules constructed according to Johnson’s rule without taking into account the buffer constraint. First, all jobs i ∈ N are numbered in a nonincreasing order of ai . Let LargeJobs = {1, 2, ..., k} be the set of the jobs i with buffer requirement ai > Ω2 . For each a subset Bl = {1, 2, ..., l} ⊆ LargeJobs,

70

A. Kononov et al.

1 ≤ l ≤ k, define Sl = {i > l : ai + al > Ω}. Note that N = S0 when LargeJobs = ∅. Let C(Sl )J be the makespan in a permutation schedule where the jobs from Sl are scheduled according to Johnson’s rule and the buffer restriction is ignored, denote by LBJohnson = C(N )J . The lower bound LBbuf f er can be calculated as    l  1  2 J max max Σi=1 pi + pi + C(Sl ) , LBJohnson (11) 1≤l≤k

Let Cmax be the value of makespan obtained by a heuristic, and let LB be a lower bound. Then a relative error RE, in %, is computed as RE =

Cmax − 1. LB

(12)

The box-plot charts on Fig. 1 represent the relative error for instances with 50, 100 and 200 jobs and each of the seven buffer sizes. For each instance and each heuristic, the relative error RE was calculated according to (12), with LB set to the value of (11). Then the box-plot charts are built using the resulting values of the relative errors.

Fig. 1. Relative error for 50, 100 and 200 jobs instances

Across all sets with different number of jobs the similar pattern can be observed: the interquartile range of the box-plots increases as the buffer size increases from Ω1.0 to Ω1.25 for all heuristics; then, as the buffer size further increases from Ω1.5 to Ω4.5 , the interquartile range gradually decreases to zero for the barrier heuristics, but for the bin-packing heuristics the interquartile range does not change significantly. In the similar pattern, the values of relative errors increase as the buffer size increases from Ω1.0 to Ω1.25 for all heuristics; as the buffer size increases from Ω1.5 to Ω4.5 the values decrease to zero for the

Algorithms for Flow Shop with Job-Dependent Buffer Requirements

71

Fig. 2. Comparison of two variants of a heuristic

barrier heuristics, but for the bin-packing heuristics the values decrease insignificantly. The barrier heuristics solved all instances with buffer Ω4.5 to optimality. The charts on the Fig. 2 compare two variants for each - the barrier and the bin-packing heuristics. For each instance, the values provided by BarrierN ew are compared with the values provided by BarrierOld; the values provided by BinsU p are compared with the values provided by BinsDown. For each pair of the heuristics, each column in the charts represents the proportion, in %, of instances in the corresponding set, where one heuristic provided smaller, greater or equal value of the objective function than the other heuristic. For each number of jobs, the buffer size increases from Ω1.0 to Ω4.5 . For buffer sizes Ω1.0 − Ω2.5 the BarrierN ew provided smaller values than BarrierOld, for 50 − 62% of 50 jobs instances and for 36 − 62% and 40 − 54% of instances, for 100 and 200 jobs instances, correspondingly, with buffer sizes Ω1.0 − Ω2.5 . For all instances with buffer size Ω4.5 both BarrierN ew and BarrierOld have attained equal values of the objective function. BinsU p clearly outperformed BinsDown heuristic and provided smaller values for absolute majority of instances. All heuristics required only 0.000068 − 0.048 seconds per instance. In comparison, when ten instances with 25 jobs and buffer size Ω4.5 were tested by running the integer program (3)–(10) on CPLEX software, for some instances 30 minutes was not sufficient to determine whether an integer solution exists, for the instances for which CPLEX obtained the best “exit” value of the objective function, it was greater than the corresponding values obtained by the heuristics. The next group of the experiments aimed to evaluate the strength of the four heuristics by running the integer program (3)–(10) on CPLEX software for 25 small instances of 10 jobs and buffer sizes Ω2.0 and Ω4.5 and comparing the results with the results provided by the heuristics. Time limit was set to 30 min per instance. For each heuristic and the integer program, the relative error RE was calculated according to (12), with LB set to the value of (11). The box-plot charts on the Fig. 3 are constructed using these relative errors. For the smaller buffer

72

A. Kononov et al.

size Ω2.0 the four heuristics have shown 5% − 14% median relative errors, while CPLEX delivered optimal solution for two-thirds of instances. For larger buffer size Ω4.5 , both barrier heuristics and CPLEX solved all instances to optimality, and BinU p heuristic provided near optimal solutions (with RE within 2%) for absolute majority of instances, and BinsDown had larger variability of relative errors up to 30%. All heuristics provided the solutions in much shorter time than CPLEX. To investigate the strength of the heuristics even further, all four heuristics and LBbuf f er have been tested on the “benchamark” instances. It has been shown in [6] that if an instance of the considered problem is defined in a certain way, then the optimal value of the makespan is known and its value is Cmax ∗ = η(19α + 2),

(13)

where η and α are some positive integers. According to [6], each instance has 5η jobs and is defined as follows: there are 2η jobs with ai = 2α and bi = 5α, there are 2η jobs with ai = 5α and bi = 1, finally there are η jobs with ai = 5α and bi = 4α, Ω = 9α. These instances are referred to as to “benchmark” instances. The instances of 50, 100 and 200 jobs were obtained by setting α = 4 and η ∈ {10, 20, 40}. For each heuristic and LBbuf f er , the relative error RE was calculated according to (12), with LB set to the value of (13), with results depicted on Fig. 3. BinsU p and BinsDown attained the smaller relative errors around 18% for all benchmark instances, and BarrierOld and BarrierN ew shown slightly larger relative errors of 23% − 24%. The lower bound LBbuf f er attained the optimal values of the objective function for all benchmark instances, as in each case LBbuf f er is equal to the total duration of all large jobs, which coincides with (13). In the final group of experiments, the heuristics were run on the sets of instances with 50, 100 and 200 with a very large buffer thus efficiently eliminating buffer constraint, and the results were compared with the optimal values of the objective function provided by the schedule constructed according to the Johnson’s rule. For each instance, a schedule in non-increasing order of the total processing time of jobs was constructed by WAIT algorithm, and the buffer capacity was set to the value of the makespan of this schedule. For each heuristic, the relative error RE is calculated according to (12), with LB set to the value of (11). Remarkably, all heuristics have obtained optimal values for all instances with the relative error RE = 0%.

Algorithms for Flow Shop with Job-Dependent Buffer Requirements

73

Fig. 3. Relative errors

5

Conclusion

In this paper, we discussed algorithms for the two-machine flow shop with a limited buffer and the objective function of the maximum completion time. The buffer requirement equals to the processing time of a job on the first stage and it varies from job to job; the job occupies the buffer for its entire processing, and the buffer capacity cannot be violated at any point of time. We presented a barrier heuristic based on the polynomial-time algorithm from [11] and compared it with the barrier heuristic introduced in [10]. In addition, another variant of the bin-packing heuristic from [10] was described. The four heuristics were tested computationally, and the results provided useful insights for further research. The buffer requirement makes the considered problem much harder when the buffer requirement is very large, all heuristics have provided optimal values of the objective function for all instances. The heuristics performed reasonably well when tested on the benchmark instances, with bin-packing heuristics attaining smaller relative errors then the barrier heuristics. The strength of the heuristics was also demonstrated by running CPLEX and the heuristics for smaller instances, with BinsU p and both barrier heuristics providing optimal/near optimal solutions for all instances with larger buffer size. The barrier and bin-packing heuristics performed differently for various buffer sizes: for the smaller buffer sizes, the bin-packing heuristics performed better than the barrier heuristics, and for the larger buffer sizes the barrier heuristics outperformed the bin-packing heuristics. For the bin-packing heuristic the non-decreasing order of total buffer requirement of bins is more significant, as the heuristic with nondecreasing order provided better results (in terms of the value of the objective function). The BarrierN ew heuristic presented in this paper appears to be more efficient (in terms of the value of the objective function) than the BarrierOld heuristic, discussed in [10]. Hence the stronger theoretical result (2) allowed to improve the barrier heuristic.

74

A. Kononov et al.

Acknowledgement. The research of the first author was supported by the program of fundamental scientific researches of the SB RAS No I.5.1., project No 0314-2019-0014, and by the Russian Foundation for Basic Research, project 20-07-00458.

References 1. Brucker, P., Heitmann, S., Hurink, J.: Flow-shop problems with intermediate buffers. OR Spectr. 25(4), 549–574 (2003) 2. Brucker, P., Knust, S.: Complex Scheduling. Springer, Heidelberg (2012) 3. Coffman, E.G., Jr., Garey, M.R., Johnson, D.S.: An application of bin-packing to multiprocessor scheduling. SIAM J. Comput. 7(1), 1–17 (1978) 4. Emmons, H., Vairaktarakis, G.: Flow Shop Scheduling. Springer, Heidelberg (2013) 5. Fung, J., Singh, G., Zinder, Y.: Capacity planning in supply chains of mineral resources. Inf. Sci. 316, 397–418 (2015) 6. Fung, J., Zinder, Y.: Permutation schedules for a two-machine flow shop with storage. Oper. Res. Lett. 44(2), 153–157 (2016) 7. Gu, H., Kononov, A., Memar, J., Zinder, Y.: Efficient lagrangian heuristics for the two-stage flow shop with job dependent buffer requirements. J. Discrete Alg. 52–53, 143–155 (2018) 8. Johnson, S.M.: Optimal two-and three-stage production schedules with setup times included. Naval Res. Logistics Q. 1(1), 61–68 (1954) 9. Kononov, A., Hong, J.S., Kononova, P., Lin, F.C.: Quantity-based bufferconstrained two-machine flowshop problem: active and passive prefetch models for multimedia applications. J. Sched. 15(4), 487–497 (2012) 10. Kononov, A., Memar, J., Zinder, Y.: Flow shop with job–dependent buffer requirements—a polynomial–time algorithm and efficient heuristics. In: Khachay, M., Kochetov, Y., Pardalos, P. (eds.) MOTOR 2019. LNCS, vol. 11548, pp. 342– 357. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-22629-9 24 11. Kononov, A., Memar, J., Zinder, Y.: On a borderline between the NP-hard and polynomial-time solvable cases of the flow shop with job-dependent storage requirements. J. Glob. Optim. 1–12 (2021). https://doi.org/10.1007/s10898-021-01097-w 12. Kononova, P., Kochetov, Y.A.: The variable neighborhood search for the two machine flow shop problem with a passive prefetch. J. Appl. Ind. Math. 7(1), 54–67 (2013) 13. Lin, F.C., Hong, J.S., Lin, B.M.: A two-machine flowshop problem with processing time-dependent buffer constraints-an application in multimedia presentations. Comput. Oper. Res. 36(4), 1158–1175 (2009) 14. Lin, F.C., Hong, J.S., Lin, B.M.: Sequence optimisation for media objects with due date constraints in multimedia presentations from digital libraries. Inf. Syst. 38(1), 82–96 (2013) 15. Lin, F.C., Lai, C.Y., Hong, J.S.: Minimize presentation lag by sequencing media objects for auto-assembled presentations from digital libraries. Data Knowl. Eng. 66(3), 382–401 (2008) 16. Pinedo, M.L.: Scheduling: Theory, Algorithms, and Systems. Springer (2016) 17. Witt, A., Voß, S.: Simple heuristics for scheduling with limited intermediate storage. Comput. Oper. Res. 34(8), 2293–2309 (2007)

Traveling Salesman Problem with Truck and Drones: A Case Study of Parcel Delivery in Hanoi Quang Huy Vuong1 , Giang Thi-Huong Dang2 , Trung Do Quang3 , and Minh-Trien Pham1(B) 1

2

VNU University of Engineering and Technology, 144 Xuan Thuy, Cau Giay, Hanoi, Vietnam [email protected] University of Economic and Technical Industries, 456 Minh Khai, Vinh Tuy, Hai Ba Trung, Ha Noi, Vietnam 3 Academy of Cryptography Techniques, 141 Chien Thang, Tan Trieu, Thanh Tri, Ha Noi, Vietnam

Abstract. Unmanned Aerial Vehicles (UAVs), commonly known to the public as drones, have recently been utilized for military and many agriculture applications. In the near future, drones are likely to become a potential way of delivering parcels in urban areas. In this paper, we apply a heuristic solution for the parallel drone scheduling salesman problem (PDSTSP) for real-world optimization problems, where a set of customers requiring a delivery is split between a truck and a fleet of drones, with the aim of minimizing the completion time (or the makespan) required to service all of the customers. The study is based on the analysis of numerical results obtained by systematically applying the algorithm to the delivery problem in Hanoi. The results demonstrate that the utilization of drones might reduce the makespan significantly, and our approaches effectively deal with the delivery problem in Hanoi.

Keywords: Parallel drone scheduling algorithm

1

· Drone delivery · Heuristic

Introduction

Recently, drones have received more attention as a new distribution method for transporting parcels. Several companies have put considerable efforts into drone delivery research. A remarkable event occurred in December 2013, a delivery service using drones called Prime Air [2] was first publicly introduced by Jeff Bezos, the CEO and founder of Amazon - the largest online retailer. Then in 2016, it said it had made its first successful drone delivery to a customer in Cambridge, England. In 2014, Google began testing its drone delivery service This work was supported by the Domestic Master/ PhD Scholarship Programme of Vingroup Innovation Foundation. c The Author(s), under exclusive license to Springer Nature Switzerland AG 2022  H. A. Le Thi et al. (Eds.): MCO 2021, LNNS 363, pp. 75–86, 2022. https://doi.org/10.1007/978-3-030-92666-3_7

76

Q. H. Vuong et al.

called Google’s Wing in Australia. Now Wing’s drones are being used to deliver essentials such as medicine, food to residents in lockdown in Virginia, USA during coronavirus pandemic [9]. DHL also launched its drones called parcelcopter in 2016, which could deliver parcels to customers in remote areas such as in the Alps [3]. In 2017, a Silicon Valley start-up Matternet developed their drone delivery system for medical applications in Switzerland [5]. Other similar systems have been launched by many companies such as Alibaba [1], JD.com [4]. Drones can provide significant advantages over traditional delivery systems handled only by trucks. The comparison of trucks and drones is summarized in Table 1 [14]. Table 1. Comparison of truck and drones. Vehicle Delivery space

Speed Parcel weight

Parcel capacity

Delivery range

Drone Truck

Fast Slow

One Many

Short Long

Air Ground

Light Heavy

Alongside the attention in the industry, in the last few years, several publications in the literature on truck-drone collaboration have been proposed. Khoufi et al. [13] provided a comprehensive survey on this field. Using both truck and drones in the delivery system gives rise to new variants of travelling salesman problems (TSP). In 2015, Murray and Chu [16] was first proposed the problem, named the PDSTSP. In the PDSTSP, a truck and a fleet of drones perform independent tasks. The PDSTSP aims to minimize the makespan (the distance in time that elapses from the start of the delivery process to the end after serving all customers). The authors proposed simple greedy heuristics to obtain solutions but only for small-size instances due to the NP-hard of the problem. Mbiadou Saleu et al. [15] presented an improved algorithm for PDSTSP called two-step heuristic. A dynamic programming algorithm with bounding mechanisms is used to decompose the customer sequence into a trip for the truck and multiple trips for drones. Kim and Moon [14] proposed an extension of PDSTSP named The traveling salesman problem with a drone station(TSP-DS), in which drones launched from a drone station, not the distribution center. In this paper, we also consider the parallel drone scheduling problem, based on the dynamic programming-based algorithm introduced by Saleu et al. [15] with some modifications. First, we consider the real-world problem with the timedependent based speed model for the truck. The speed of the truck is affected by traffic conditions. Second, a constructive heuristic approach was applied to solve the TSP for the truck tour, while a parallel machine scheduling algorithm still handles multiple drone tours. Finally, the algorithm is tested with real-world instances in Hanoi with different problem parameters to evaluate the performance and the potential of the algorithms for applying in real-world problems.

TSP with Truck and Drones: A Case Study of Delivery in Hanoi

77

The paper is organized as follows. Section 2 provides the problem description of PDSTSP. The heuristic algorithm is described in Sect. 3. Section 4 shows experimental results and discussions of the results. Finally, Sect. 5 concludes the paper.

2

Parallel Drone Scheduling Traveling Salesman Problem

In this paper, we investigate the PDSTSP presented in [16], in which a truck and drones depart and return dependently with no synchronization. There are reasons we decided to investigate this model. First, the PDSTSP with parallel utilization of a truck and drones in a non-synchronized way is more suitable for real-world problems, where the synchronized collaboration between truck and drones (truck carries drones) is challenging to deploy in practical delivery problems. Moreover, we consider the case that the depot locates in a convenient position for drone delivery.

Fig. 1. An illustration of PDSTSP.

2.1

Problem Definition

The PDSTSP can be represented as follows. Consider a set of nodes N = {0,...,n} represents the set of customers and the depot (index 0). A truck and a fleet of homogeneous drones are available to deliver parcels to the customers from the depot. The truck starts from the depot, services a subset of customers along a TSP route, and returns back to the depot. Drones service customers directly from the depot, then return the depot while servicing a single customer per trip. Not all customers can be served by a drone because of practical constraints like the limited capacity or the limited flight range of a drone. We define D as a set of customers which can be served by a drone. These customers are referred to as drone-serviceable customers in the rest of the paper. Assume that the truck and the drones start from the depot at time 0. The objective of the PDSTSP

78

Q. H. Vuong et al.

is to minimize the time required for a truck and drones to return to the depot after servicing all customers (a customer must be serviced exactly once by either the truck or a drone). Since truck and drones work in parallel, the objective is also to find an optimal TSP route for a truck and optimal customer orders for drones. An illustration of PDSTSP is shown in Fig. 1. 2.2

Time-Dependent Speed Model

In this section, a model of time-dependent is constructed to capture the congestion in a traffic network. In real-life problems, the travel time between customers strongly depends on the traffic condition of the road network. It means the speed of the truck could vary depending on the time of the day. For example, the truck’s speed during rush hour would be multiple times slower than the speed at night. The time dependency is modeled as follows. The time of day is partitioned into L intervals [Tl , Tl+1 ], l = 0, ..., L − 1. The average value of the truck speed is known. It should be noticed that the truck’s speed may differ among arcs when the truck crosses the boundaries of an interval. The time-dependent travel speed of truck can be represented as: vijl = vtruck Rand ∈ [FL , FU ]

(1)

where vtruck represents the average speed of the truck, vijl is the speed of truck among arc (i, j) during the interval l, FL and FU are respectively the lower bound and upper bound value of congestion factors. The lower value of Rand, the more congestion there will be. During the peak hours, the value of Rand should be close to FL . The value of Rand should be close to FU during off-peak hours. To calculate the travel time between two cities (i, j), the distance between these nodes for the truck dij and the departure time t are needed. The time-dependent travel time value on arc (i, j) if departing from vertex i at time t is computed following Algorithm 1 as proposed by Ichoua et al. [12]. Algorithm 1. Computing the travel time of arc (i, j) at departure time t0 1: 2: 3: 4: 5: 6: 7: 8: 9: 10: 11:

t ← t0 l ← l0 : Tl0 ≤ t0 ≤ Tl0 +1 d ← dij t ← t + (d/vijl ) while t > Tl+1 do d ← d − vijl × (Tl+1 − t) t ← Tl+1 t ← t + (d/vijl+1 ) l ←l+1 end while return t − t0

TSP with Truck and Drones: A Case Study of Delivery in Hanoi

3 3.1

79

Heuristic Algorithm Main Algorithm

Based on the definition of the PDSTSP from [16], we follow the general scheme of Saleu et al. [15] to solve this problem. The PDSTSP is considered as a twostage problem: – Partitioning stage: splitting customers into two sets, a set for the truck and a set for the fleet of drones. – Optimizing stage: solving a TSP for the truck and a parallel machine scheduling (PMS) problem for drones. Our heuristic approach iterates over these stages. After the optimizing stage, the algorithm is repeated until the termination condition is met. Our heuristic method is described as follows with pseudocode: 1. Given a TSP tour T visiting the depot and all of the customers, a truck and a fleet of M drones start from the depot, the tour T is initialized with a greedy algorithm. 2. Update current solution: assign all the customers to the truck in order of sequence T , that means no customer is assigned to the drones. 3. BestSolution = Solution BestCost = Cost(BestSolution) 4. Initial tour T is split into two complementary subtours: Ttruck for the truck and Tdrones for the fleet of drones. 5. Tours improvement: – Truck tour Ttruck is then repoptimized using TSP algorithm and improved using an improvement heuristic. – A set of tours for drones Tdrone1 , .., TdroneM is obtained by a PMS algorithm from Tdrones . 6. Update new solution with optimized tours for truck and drones. 7. If Cost(Solution) < BestCost then BestCost = Cost(Solution) and BestSolution = Solution 8. Drones tour Tdrone1 , .., TdroneM are inserted into Ttruck with Nearest Insertion algorithm to form a new tour T . 9. Check the termination criterion: the process is terminated if the exit criterion is met (typically computation time is reached), otherwise comeback to step 4. Algorithm 2 illustrates the pseudo-code of the introduced heuristics. A solution is presented as a set consisting a tour for the truck in order of Ttruck and M tours for fleet of M drones in order of Tdrone1 , .., TdroneM . The cost of the truck is denoted by T ruckCost that indicates the completion time for the truck after servicing all customers in order Ttruck . DronesCost denotes the completion time for the fleet of drones. Therefore, the cost of a solution Cost(Solution) is equal to max(T ruckCost, DronesCost), which indicates the completion time of the final vehicle returning to the depot after servicing all

80

Q. H. Vuong et al.

Algorithm 2. Main algorithm 1: T ← InitializeT SP () 2: BestSolution = Solution BestCost = Cost(BestSolution) 3: while isT erminate = F alse do 4: Split(T ) 5: Ttruck ← ImprovementHeuristic() Tdrone1 , .., TdroneM ← P M Salgorithm() 6: if Cost(Solution) < BestCost then 7: BestCost = Cost(Solution) BestSolution = Solution T ← reInsertion(Ttruck , Tdrone1 , ..., TdroneM ) 8: end if 9: end while

customers. A solution is considered to be better than others if its cost is smaller. If two solutions have the same cost value, the solution that has the smaller sum of T ruckCost and DronesCost is the better solution. 3.2

Customers Partitioning

The tour T is separated into two complementary parts Ttruck and Tdrones . We adapted an effective split procedure from Saleu et al. [15] with a remarkable change of cost calculation. It should be noticed that the cost of the truck now is affected by the traffic congestion model described in Sect. 2.2, since the result of the split procedure is also affected. Given a set of customers N = {0, ..., n} and an additional node n + 1 represents the copy of depot 0. As mentioned in the previous section, a truck can serve all customers, but drones can only serve customers in a drone-serviceable subset D ⊆ N . The objective of partitioning phase is to find a partition between Ttruck and Tdrones that minimizes the max(T ruckCost, DronesCost). The details of the split procedure can be found in [15]. We briefly describe the algorithm with some modifications as follows: 1. The algorithm checks every node from depot 0 to destination node n + 1 in order of tour T . At any node i, a list of (T ruckCost, DronesCost) is induced by adding arc cost for every solution from node 0 to node i (for node j, two different solutions can occur when we decide whether node j is assigned to the drone or not). 2. With every arc (i, j), a cost vector (c1ij , c2ij ) is generated. The component c1ij represents the cost incurred for the truck if it travels directly from i to j: c1ij = dij . The corresponding cost induced for the drone c2ij . If the truck travels directly from i to j, all drone-serviceable customers k in-between i and j are  assigned to the drone: c2ij = dˆk . 3. To reduce the number of solutions, before adding a solution to the list of (T ruckCost, DronesCost), all existing solutions are checked with the new solution to decide which solution should be removed. The best decomposition

TSP with Truck and Drones: A Case Study of Delivery in Hanoi

81

(Ttruck and Tdrones ) is retrieved from the best solution found in the list of cost at destination node n + 1. 4. Before running the procedure, the departure time is set. The cost for the truck is calculated based on the given truck cost distance matrix, and the speed of the truck is affected by the time-dependent traffic congestion model. 3.3

Subtours Improvement

Truck Tour Improvement. Truck tour Ttruck retrieved from partitioning procedure is then reoptimized using Christofides algorithm [10]. After that, 2-opt heuristic [11] is used to improve it. For the TSP improvement phase, the congestion index is relaxed because it does not affect the result. Christofides Heuristic. Christofides’ algorithm [10] is a well-known tour construction heuristic for travelling salesman problem . For the given inputs, it starts with finding the minimum spanning tree T and then a minimum matching M on the odd-degree vertices. A new graph H is formed by adding M to T . Every vertex now has even degree. Therefore, H is Eulerian. Finally, a TSP tour is obtained by skipping visited nodes (using shortcuts). The algorithm is described as follows: 1. Find a minimum spanning tree MST (T ) 2. Find vertexes in T with odd degree (O) and find minimum weight matching (M ) edges to T 3. Form an Eulerian graph using the edges of M and T 4. Obtain a Hamiltonian path by skipping repeated vertexes (using shortcuts) Two-Opt Local Search. The 2-opt algorithm was first proposed by Croes in 1958 [11]. The 2-opt algorithm examines all possible pairs of edges in the tour, removes and reconnects them to form a new tour. This transition is called a 2-opt move. If the new tour is longer or equal to the original one, we undo the swap. Otherwise, the move resulted in a shorter tour. In the 2-opt, when removing two edges, there is only one alternative feasible solution. The swap continues until it no longer improves the path. Drone Scheduling. The customers sequence Tdrones is decomposed to subtours for M drones. The objective of this phase is to find the minimum makespan of the schedule. The simple Longest Processing Time (LPT) algorithm is used. LPT assigns the customers with the longest cost to the drone with the earliest end time so far.

4

Experimental Results

The heuristic approach has been tested with different benchmark sets. The first set is introduced by Saleu et al. [15] that generated from TSPLIB library with

82

Q. H. Vuong et al.

some parameters for sensitivity analysis. The second real-life instance set is generated using geodata from OpenSreetMap [7]. The details of instances generation are later described in Sect. 4.1. All computational works were conducted on an Intel(R) Core(TM) i7-8750H CPU @ 2.20 GHz, and the algorithm is implemented in C++. 4.1

Instances and Parameters Setting

In this section, two data sets are used to evaluate the performance of the algorithm. The first set of instances used was adapted from [15]. The instances were generated from the classic TSPLIB library [8]. The customers are represented by coordinates x and y. The original TSP instances were modified for the PDSTSP problems: – The percentage of drone-serviceable customers ranging from 0% to 100%. – The Manhattan distances dij are calculated for the truck if it travels from i to j and Euclidean distances dˆk for the drones if a drone serves customer k. The second instances were generated from OpenStreetMap project [7]. A customer node is represented by coordinates at a geographic coordinate system with latitude and longitude units. The real travel distances are generated from Openrouteservices API [6]. The parameters that are inserted for the experiments are shown as follows: – Fixed distance cost matrices of truck and drones. – For the instances generated from TSPLIB, we indicate the speed factor sp = vdrone /vtruck . The cost of the drone is divided by the speed factor sp, and the speed of the drone is set to be one unit of distance per unit of time. – The depot is located near the center of all customers. – The number of drones M = 2. – The planning horizon is divided into five intervals L = 5, the value of FL and FU are set to 0.5 and 1, respectively. – The termination criterion (computing time limit) is set to 5 min. 4.2

Results and Discussions

Results obtained from first data set of TSPLIB are shown in Tables 2 - 4, where completion time indicates the cost value of the solution. The completion time is then compared to the optimal cost of the traditional TSP tour provided by gap% value. Columns labeled with %ds indicate the percentage of drone-serviceable customers. In general, the results show that the completion times are reduced with the introduction of drones working in parallel. In addition, completion times are also improved when the percentage of drone-serviceable customers increases. It can be easily explained that when the percentage of drone-serviceable increases, assigning customers to the drones can reduce the completion time for the truck.

TSP with Truck and Drones: A Case Study of Delivery in Hanoi Table 2. Results of the algorithms on the TSPLIB instance att48 Instance CT Name %ds

Gap %

att48

0 –14.9 –26.3 –30.3 –35.9 –36.2

0 20 40 60 80 100

47170 40121 34761 32867 30221 30085

Table 3. Results of the algorithms on the TSPLIB instance berlin52 Instance CT Name %ds

Gap %

berlin52

0 –10.3 –18.4 –29.2 –38.1 –41.0

0 20 40 60 80 100

11175 10013 9120 7914 6914 6592

Table 4. Results of the algorithms on the TSPLIB instance eil101 Instance CT Gap % Name %ds eil101

0 20 40 60 80 100

988 838 721 651 623 596

0 –15.2 –27.0 –34.1 –36.9 –39.7

Table 5. Impact of the speed factor (80% drone-serviceable customers) Instance Without sp factor With sp factor CT DC CT DC att48 berlin52 eil101

34324 10

30221 18

7760 11

6914 18

690 15

623 28

83

84

Q. H. Vuong et al.

However, when the completion time of the truck and drones are nearly the same, increasing drone-serviceable customers does not affect the completion time too much. We conduct a sensitivity analysis as shown in Table 5 to investigate the impact of the speed factor. Columns labeled with DC indicate the number of customers assigned to the drones. When the speed factor affects the completion time of the truck, the drone is able to serve more customers. The completion time of all three instances is also improved. Actually, the completion time also depends on the percentage of drone-serviceable customers. The drone is not able to visit more customers if the percentage of drone-serviceable customers is low. Table 6. Comparison of results on TSPLIB instances (80% drone-serviceable customers) Instance Gap% Proposed algorithm Greedy approach att48

–35.9

–22.6

berlin52

–38.1

–24.1

eil101

–36.9

–23.5

In Table 6, the results obtained are compared with a baseline algorithm with a greedy strategy. The algorithm tried with all different combinations of droneserviceable customers to be assigned to the truck route. Then, the TSP is solved by Path Cheapest Arc (PCA) algorithm, and the PMS is based on Shortest Job First (SJF). In Table 6, it is shown that the proposed algorithm shows significantly more efficiency, demonstrating the reliability of the algorithm for applying to practical problems. The baseline greedy algorithm can be easily trapped in local optimums in the complex search space of this problem.

Fig. 2. An illustration of the result for real instance with 20 customers.

TSP with Truck and Drones: A Case Study of Delivery in Hanoi

85

Table 7. Results for real instances (50% drone-serviceable customers) Instance NC ND

%gap

DC

20

1 2 3

–24.2 –45.1 –48.2

6 9 10

40

1 2 3

–19.1 –36.5 –36.5

14 18 18

80

1 2 3

–18.5 –32.1 –38.5

22 28 31

For the real delivery problem, we analyzed instances in which the number of nodes ranging from 20 to 100. The default truck speed vtruck is set to 40 km/h and is affected by the time-dependent traffic congestion model in Sect. 2.2. In addition, we vary the number of drones. An illustration of the results for real instances is shown in Fig. 2. The results obtained from real instances are shown in Table 7. Columns labeled with NC and ND indicate the number of customers and the number of drones, respectively. The results show that the completion time of the real delivery problem is reduced when increasing the number of drones and the drones are able to visit more customers. However, in some cases, increasing the number of drones does not improve the solution, or not too much improvement. Therefore, the number of drones needs to be selected reasonably to save the operating cost. Finally, compared to the traditional delivery service (only truck), the effect of traffic congestion is reduced by the utilization of drones. The customers tend to be assigned to the drone to balance the completion time of both the truck and the drones.

5

Conclusions

In this paper, we applied a heuristic solution for the traveling salesman problems with a truck and a fleet of drones for real-world delivery service. A model of traffic congestion is used to explore the advantages of using a truck and drones in parallel. In addition, when applying to real-world applications, the introduction of drones would bring substantial improvements in the logistic operations of lastmile delivery in Hanoi. The results indicated that we could overcome congestion situations and significantly reduce the delivery completion time compared to traditional delivery services.



A New Mathematical Model for Hybrid Flow Shop Under Time-Varying Resource and Exact Time-Lag Constraints

Quoc Nhat Han Tran1,2(B), Nhan Quy Nguyen2,3, Hicham Chehade1, Farouk Yalaoui2, and Frédéric Dugardin2

1 Opta LP S.A.S., 2 rue Gustave Eiffel, 10430 Rosières-Près-Troyes, France
[email protected]
2 Laboratoire Informatique et société numérique (List3n), Université de Technologie de Troyes, 12 rue Marie Curie, CS 42060, 10004 Troyes, France
3 Chaire Connected Innovation, Université de Technologie de Troyes, 12 rue Marie Curie, 10004 Troyes, France

Abstract. This paper proposes a new mathematical formulation for the Hybrid Flow Shop problem under time-varying resources and chaining exact time-lag constraints. This formulation is named the Discrete Continuous (DC) formulation to distinguish it from the state-of-the-art Discrete-Time (DT) formulation in the literature. In the DC formulation, the starting time of a job is modeled by a continuous variable, and its execution state is modeled by a binary one. The two formulations are benchmarked: the DC formulation always assures a feasible solution for any instance.

Keywords: MILP Optimization · Project Scheduling · Hybrid Flow Shop

1 Introduction

Project scheduling refers to problems where multiple projects (tasks, jobs) must be scheduled/planned under resource constraints [8]. Due to their essential applications in the real world, these problems have received more and more research effort over the decades [8]. In terms of modeling the problem with mathematical linear programming, early efforts date back to the 1960s [9]. One well-known approach is the Discrete-Time (DT) model, suggested by Pritsker et al. [16]. By discretizing and indexing the planning horizon, the formulation uses an ordered set of binary variables to indicate whether a job terminates at a certain period. Based on this work, Christofides et al. [4] proposed an extended version in which a Disaggregated Precedence cut (DDT) is introduced, yielding a stronger LP-relaxation. As far as we know, a considerable amount of recent work exploits this approach with various constraints, depending on the nature of particular problems [6,10,11,18,21]. More detailed surveys can be found in [5,7,8,12,24].


In the case of constant resource availability, more compact Linear Programming (LP) formulations exist, in which the decision variables are independent of the time discretization. According to [20], the major formulations that have received research attention are the event-based formulation of Koné et al. [10] and the flow-based formulation of Artigues et al. [2]. While [10] focuses on the starting order of jobs based on their relative precedence, [2] investigates the flow of resources to determine the passing order of jobs. Interested readers can find more details about these formulations in [1,19]. However, it is worth noting that in the general case where the resource capacity varies over time, the compact models are difficult to adapt. Although different techniques exist to extend them, their decision variables do not capture the resource variation during the jobs' execution. Hence, discretization is still viable. We present a new way to represent jobs, different from the classic DT model, and demonstrate its performance by applying it to the Hybrid Flow Shop problem under time-varying resources and chaining exact time-lag constraints. The rest of the paper is organized as follows: Sect. 2 provides a detailed comparison of the ideas behind the two approaches, then Sect. 3 shows the application of these formulations. The test protocol and the numerical results comparing their performance are presented in Sect. 4. Finally, Sect. 5 concludes with our remarks and future directions.

2 Representation Comparison

Pritsker et al. [16] stated that "an efficient formulation, however, will depend upon a judicious choice of definition for the variables." Patterson and Huber [14] indicate that having fewer variables does not necessarily produce an easily exploitable result structure. Thus, the jobs' representation has a significant influence on the tractability of the MILP resolution. In this paper, the scheduling horizon is discretized into multiple equal time intervals of unit length, indexed by t from 0 to T.

2.1 Discrete-Time (DT) Representation

We now present the idea of Patterson and Roth [15], which is an equivalent formulation of [16]. Given a non-preemptive job $j$ with processing time $p_j$ to be scheduled on a discretized horizon $\{0 \dots T\}$, let us denote the binary variables $\sigma_{jt}$ such that

$$\sigma_{jt} = \begin{cases} 1 & \text{if job } j \text{ starts at } t \\ 0 & \text{otherwise} \end{cases} \qquad \forall t \in \{0 \dots T\}$$

Due to the non-preemption, job $j$ is only allowed to start once on the horizon:

$$\sum_{t=0}^{T} \sigma_{jt} = 1 \qquad (1)$$


The starting time $s_j$ and completion time $c_j$ of job $j$ are calculated by:

$$s_j = \sum_{t=0}^{T} t\,\sigma_{jt} \qquad (2)$$

$$c_j = s_j + p_j \qquad (3)$$

Furthermore, the execution state of job $j$ at time $t$, which can be used to determine the resource usage, is deduced by the formulation:

$$x_{jt} = \sum_{t - p_j + 1 \le \tau \le t} \sigma_{j\tau} \qquad (4)$$
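To make the DT representation concrete, here is a minimal sketch of Eqs. (1)-(4) for a single job, written with the PuLP modeling library in Python. PuLP, the horizon length and the processing time are assumptions of this sketch (the paper's experiments use CPLEX), not the authors' implementation.

```python
import pulp

T, p_j = 10, 3                     # illustrative horizon length and processing time

prob = pulp.LpProblem("dt_single_job", pulp.LpMinimize)

# Binary start indicators sigma_jt, one per period t = 0..T
sigma = {t: pulp.LpVariable(f"sigma_{t}", cat="Binary") for t in range(T + 1)}

# (1) the non-preemptive job starts exactly once on the horizon
prob += pulp.lpSum(sigma.values()) == 1

# (2) starting time and (3) completion time as linear expressions of sigma
s_j = pulp.lpSum(t * sigma[t] for t in range(T + 1))
c_j = s_j + p_j

# (4) execution state at each period t, usable in time-indexed resource constraints
x = {t: pulp.lpSum(sigma[tau] for tau in range(max(0, t - p_j + 1), t + 1))
     for t in range(T + 1)}

prob.setObjective(s_j)             # e.g., start the job as early as possible
prob.solve(pulp.PULP_CBC_CMD(msg=False))
```

Here s_j, c_j and x[t] are affine expressions of the start indicators, which is exactly how the DT model couples a job to the time-indexed resource constraints.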

2.2 Discrete Continuous (DC) Representation

The DT formulation is based on the starting point of a job to deduce the corresponding starting time and the execution state. The new DC model, on the other hand, establishes a direct linear relationship between the starting time and the execution state (both are separate variables), resulting in an essential difference in the solution structure. Given a non-preemptive job $j$ of positive duration $p_j$ to be executed on a discretized horizon $\{0 \dots T\}$, the job $j$ must start and end at an integer time. Let us denote the starting time $s_j \in \mathbb{R}_+$ and the execution states $x_{jt}$ such that

$$x_{jt} = \begin{cases} 1 & \text{if job } j \text{ is executing at } t \\ 0 & \text{otherwise} \end{cases} \qquad \forall t \in \{0 \dots T\}$$

Proposition 1. The system (5) allows representing the job $j$ with non-preemptive processing time $p_j$ on the horizon.

$$s_j \le t\,x_{jt} + T(1 - x_{jt}) \qquad \forall t \in \{0 \dots T\} \qquad (5a)$$
$$s_j + p_j \ge (t + 1)\,x_{jt} \qquad \forall t \in \{0 \dots T\} \qquad (5b)$$
$$\sum_{t=0}^{T} x_{jt} = p_j \qquad (5c)$$

Fig. 1. Example of a discrete-continuous representation of starting time variable sj and execution state variables xjt


Equation (5c) imposes the integrity of execution: a job $j$ must be processed for exactly its requirement $p_j$. Equation (5a) bounds the starting time $s_j$ by the first moment $t$ at which the job is in the execution state ($x_{jt} = 1$). Conversely, Eq. (5b) ensures that no execution takes place after the completion time $c_j$ ($= s_j + p_j$). This direct connection between the continuous variable $s_j$ and the discrete variables $x_{jt}$ is what gives the Discrete Continuous approach its name (Fig. 1).
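The same single-job example can be written with the DC variables of system (5). As above, this is only a sketch under the assumption of PuLP and illustrative data; it mirrors constraints (5a)-(5c) directly.

```python
import pulp

T, p_j = 10, 3                     # illustrative horizon length and processing time

prob = pulp.LpProblem("dc_single_job", pulp.LpMinimize)

# Continuous starting time and binary execution states of system (5)
s_j = pulp.LpVariable("s_j", lowBound=0)
x = {t: pulp.LpVariable(f"x_{t}", cat="Binary") for t in range(T + 1)}

for t in range(T + 1):
    prob += s_j <= t * x[t] + T * (1 - x[t])    # (5a): s_j bounded by first executing period
    prob += s_j + p_j >= (t + 1) * x[t]         # (5b): no execution after completion

prob += pulp.lpSum(x.values()) == p_j           # (5c): process exactly p_j periods

prob.setObjective(s_j)
prob.solve(pulp.PULP_CBC_CMD(msg=False))
```

Note how the continuous variable s_j is linked to the binary execution states directly, instead of being derived from start indicators as in the DT sketch above.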

3 Mathematical Models

With these modeling techniques introduced, we tackle a particular Hybrid Flow Shop problem in which jobs are grouped into chains (each job has at most one predecessor and one successor), an exact waiting time separates consecutive jobs of a chain, and each job requires additional resources whose availability is not constant. This problem has typical applications in domains with recurring work, such as chemotherapy treatment at hospitals [22] (Tables 1 and 2).

3.1 Problem Description

Given are A jobs arriving as multiple chains Q_n (n = 1 … N) at the beginning of the horizon. Each chain Q_n is a sequence of a_n jobs J_ni (i = 1 … a_n) with an exact time-lag l_ni between two consecutive jobs J_ni and J_n(i+1). Each job J_ni consists of m operations. Each operation o of job J_ni has a fixed non-preemptive processing time p^o_ni. For each resource v, each operation o needs a certain amount u^v_o per unit of time in order to process. The capacity of any resource v at any time t is denoted by R^v_t. The criterion is minimizing the total completion time of all jobs.

Table 1. Nomenclature of essential sets

Name        Index   Symbol   Range
Horizon     t       -        0 … T
Chain       n       Q_n      1 … N
Job         i       J_ni     1 … a_n
Operation   o       -        1 … m
Resource    v       -        1 … V

New Mathematical Model for Hybrid Flow Shop

91

Table 2. Nomenclature of problem parameters Notation

Description

Rtv uvo

Total amount of resource v at time t Resource consumption per time unit of resource v of operation o

N 

A=

n=1

poni

an Total number of jobs

lni

Processing time of operation o of job Jni Exact time-lag between the end of job Jni and the start of the next job Jn(i+1)

o ESni o LSni

Earliest starting time of operation o of job Jni Latest starting time of operation o of job Jni

3.2

DT Model

We begin with the classical approach of DT formulation: Table 3. Nomenclature of decision variables and auxiliary variables for Discrete-Time model Notation

Description

σno i t

Decision variables. 1 if operation o of job Jni starts at time t. 0 otherwise

xonit = soni =

  t−po ni +1≤t ≤t

T  t=0

o σnit  The execution state of operation o of job Jni at the time t, defined by Eq. 4

o tσnit

The starting time of operation o of job Jni , defined by Eq. 2 The completion time of operation o of job Jni , defined by Eq. 3 The completion time of job Jni

coni = soni + poni cni = cm ni

Minimize

an N  

sm ni

(6a)

n=1 i=1

S.t: T 

o σnit =1

∀n ∀i ∀o

(6b)

∀n ∀i ∀o

(6c)

∀n ∀i ∀o

(6d)

t=0

soni =

T 

o tσnit

t=0 o soni ≥ ESni

92

Q. N. H. Tran et al. o soni ≤ LSni

soni sm ni

+ +

poni pm ni

xonit =



so+1 ni

+ lni = s1n(i+1)  o σnit 

∀n ∀i ∀o

(6e)

∀n ∀i ∀o ≤ m − 1

(6f)

∀n ∀i ≤ an − 1

(6g)

∀n ∀i ∀o ∀t

(6h)

∀v ∀t

(6i)

t−p+1≤t ≤t an  N  m 

uvo xonit ≤ Rtv

n=1 i=1 o=1

The objective is to minimize the total completion time: z=

an N  

cni =

n=1 i=1

The part

an N   n=1 i=1

an N   n=1 i=1

(sm ni

+

pm ni )

=

an N   n=1 i=1

sm ni

+

an N  

pm ni

n=1 i=1

pm ni , which expresses the total processing time of the last

operation of all jobs, is fixed, so minimizing the total completion time is equivalent to minimizing the total starting time of the last operations (6a). (6b) ensures that every job’s operation starts precisely once on the horizon, according to (1). (6c) represents the starting time of a job in terms of starting point, according to o o , LSni ]. (2). Each operation of a job must start in a specific range: soni ∈ [ESni Proven mathematical properties can pre-process these bounds. One basic bound is that an operation o of the job Jni must not start before the total processing time plus the time-lag required for all the operations and jobs preceding it. Conversely, it must reserve sufficient time for all its successors. The operation flow must be preserved (6f). The exact time-lag between the end of the job Jni and the start of the next job Jn(i+1) must be honored (6g). The resource consumption at any time should not exceed the availability of resources at that moment (6i). This inequality is obtained by the fact that the consummation of an operation o of job Jni is determined by its execution state xonit at the moment (6h), which should be equal to 1 if the job Jni has started in the last poni time intervals, including the current moment. 3.3

DC Model

Table 4. Nomenclature of decision variables for Discrete Continuous model Notation Description son i

Starting time of operation o of job Jni . soni ∈ R+

xon i t

1 if operation o of job Jni is executing at time t, otherwise 0

New Mathematical Model for Hybrid Flow Shop

93

o o We reuse the notations coni , cni , ESni , LSni introduced earlier in Table 3.

Minimise

an N  

sm ni

(7a)

n=1 k=1

S.t: o soni ≥ ESni

soni soni sm ni soni soni

o ≤ LSni + poni ≤ so+1 ni 1 + pm + l ni = sn(i+1) ni ≤ txonit + T (1 − xonit ) + poni ≥ (t + 1) xonit

T 

∀n ∀i ∀o

(7b)

∀n ∀i ∀o

(7c)

∀n ∀i ∀o ≤ m − 1

(7d)

∀n ∀i ≤ an − 1

(7e)

∀n ∀i ∀o ∀t

(7f)

∀n ∀i ∀o ∀t

(7g)

∀n ∀i ∀o

(7h)

∀v ∀t

(7i)

xonit = poni

t=0 an  N  m 

uvo xonit ≤ Rtv

n=1 i=1 o=1

One can notice a degree of similarity between the DC formulation and the DT formulation. Constraints (7a), (7b), (7c), (7d), (7e), (7i) are respectively (6a), (6d), (6e), (6f), (6g), (6i). However, the three equations (7f), (7g), (7h) define a different solution structure. This change not only divides the formulation (7) into a LP problem (7b)–(7e) and a MILP problem (7h)–(7i), but also makes a direct bridge (7f)–(7g) between continuous variables (starting time soni ) and the discrete variables (the execution state xonit ) (Table 4).

4

Numerical Experimentation

Although numerous papers have used [17] as the benchmark for RCPSP problems, we find this data set inadequate for our problem for two reasons. First, the precedence constraint in our problem is entirely parallel. Second, the time lag is not present in the benchmark. Therefore, we generate new instances based on several custom parameters defined in the following: – Resource Variation (RV): [13] For any resource v, we define the parameter measuring the variation of its capacity over time: RV v =

v v − Rmin Rmax v Rmax

(8)

v v With Rmax , Rmin respectively the maximum and minimum resource capacity of v over the time horizon. RV v = 0 means constant capacity, while RV v = 1 suggest the unavailability of resources at some periods over the time horizon.

94

Q. N. H. Tran et al.

– Inactivity (I): For any instance, we define the indicator of obligatory waiting time (inactivity) to estimate the potential waste of resources: I=

l p+l

(9)

With p, l respectively the mean processing time of job and the mean waiting time between jobs. Because p ≥ 0 and l ≥ 0, we have 0 ≤ I ≤ 1. When I = 0, the problem has no waiting time (no-wait). I < 12 in general means the waiting time is shorter than the processing time, thus it is expected to be harder to interlace jobs’ chains to reduce waiting time. – Resource Consumption rate (RCo): [13] For any resource v, we define the resource consumption rate: RCov =

Uv Rv

(10)

Where U v the total demand and Rv the total capacity of resource v over an  N  m  horizon. In this particular problem: U v = poni uvo . n=1 i=1 o=1
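The three indicators can be computed directly from instance data. The following small Python helper follows Eqs. (8)-(10); it is a hedged sketch using the illustrative instance container introduced earlier, not code from the paper.

```python
def instance_indicators(inst):
    """Compute RV^v (8), I (9) and RCo^v (10) for an illustrative instance container."""
    m = inst.num_operations
    jobs = [job for chain in inst.chains for job in chain]

    rv, rco = [], []
    for v, caps in enumerate(inst.resource_capacity):
        r_max, r_min = max(caps), min(caps)
        rv.append((r_max - r_min) / r_max)                             # (8)
        total_demand = sum(job.processing_times[o] * inst.resource_usage[v][o]
                           for job in jobs for o in range(m))          # U^v
        rco.append(total_demand / sum(caps))                           # (10): U^v / R^v

    # (9): mean time-lag over mean job length plus mean time-lag
    p_mean = sum(sum(job.processing_times) for job in jobs) / len(jobs)
    l_mean = sum(job.time_lag_to_next for job in jobs) / len(jobs)
    inactivity = l_mean / (p_mean + l_mean)
    return rv, inactivity, rco
```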

4.1

Instance Generation

To test the performance of the formulations, we are going to set these parameters above to establish the structure of the data. We denote z ∼ U [a; b] as z is generated from uniform distribution on [a; b]. In order to keep the numerical experimentation manageable, we set the number of resources V = 1, the number of operations m = 4 and the resource unit usage uvo = 1. This setting is basic and practical, where one operator is necessary for the execution at a time. The scheduling horizon T is fixed as 100. The number of jobs A is selected from {10, 20, 50, 100, . . . , 500}. The resource variation RV v = 0.3, and the resource consumption rate RCov = 0.6, as [13] shows numerically that stronger variation and higher consumption of resource may cause infeasibility. The inactivity I is set to 0.25 to enable partially interlacing the jobs’ chains. The generation process is as follows: 1. We select the chain length an ∼ U [1; 10] so that

N  n=1

an = A. The number of

chains N does not necessarily stay the same between instances. 2. The operations’ length poni ∼ U [0; 3]. 3. After calculating the average value of job processing time by the formula an  N  m  pI 1 p= A poni , we deduce the mean of time-lag (based on (9)): l = 1−I . n=1 i=1 o=1

The time-lags is then lni ∼ U [0; 2l]. 4. In order to generate the resource capacity, we first choose the value of resource unit usage for each resource and each operation uvo .

New Mathematical Model for Hybrid Flow Shop

95

Uv

T 5. We then deduce the mean resource capacity by: Rv = RC v . From which the value domain of resource capacity is calculated (8):    1 v v v 2R ; umax (11) Rmax = max 2 − RV v    1 − RV v v = max 2Rv ; uvmax (12) Rmin 2 − RV v

Where uvmax designates the maximal unit usage of resource v by operations v v ; Rmax ]. (uvmax = max uvo ). The resource capacity is finally Rtv ∼ U [Rmin o∈O
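Steps 1-5 of the generation process translate into a short generator. The following is a hedged sketch (uniform draws as stated above, helper names and integer capacity rounding are assumptions made here), not the authors' generator for TNCY-I.

```python
import random


def generate_instance(A, T=100, m=4, V=1, RV=0.3, RCo=0.6, inactivity=0.25):
    """Illustrative sketch of generation steps 1-5."""
    # 1. chain lengths a_n ~ U[1;10] until they sum to A
    chains, remaining = [], A
    while remaining > 0:
        a_n = min(random.randint(1, 10), remaining)
        chains.append(a_n)
        remaining -= a_n
    # 2. operation lengths p^o_ni ~ U[0;3]
    proc = [[[random.randint(0, 3) for _ in range(m)] for _ in range(a_n)] for a_n in chains]
    # 3. mean time-lag from the inactivity target, then l_ni ~ U[0; 2*l_mean]
    p_mean = sum(sum(sum(job) for job in chain) for chain in proc) / A
    l_mean = p_mean * inactivity / (1 - inactivity)
    lags = [[random.uniform(0, 2 * l_mean) for _ in range(a_n)] for a_n in chains]
    # 4. unit resource usage (fixed to 1 in the paper's setting)
    usage = [[1 for _ in range(m)] for _ in range(V)]
    # 5. mean capacity from RCo, then R^v_t drawn between R_min and R_max from (11)-(12)
    caps = []
    for v in range(V):
        demand = sum(proc[n][i][o] * usage[v][o]
                     for n in range(len(chains)) for i in range(chains[n]) for o in range(m))
        r_mean = demand / (RCo * T)
        u_max = max(usage[v])
        r_max = max(2 * r_mean / (2 - RV), u_max)
        r_min = max(2 * r_mean * (1 - RV) / (2 - RV), u_max)
        caps.append([random.randint(round(r_min), round(r_max)) for _ in range(T + 1)])
    return chains, proc, lags, usage, caps
```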

4.2

Test Protocol

For each problem size (A), we generate 10 feasible instances. This dataset, which is denoted by TNCY-I, has 120 instances in total [23]. The resolution time of the solver is limited respectively by 5 min (fast), 30 min (medium), and 60 min (long). We measure for each model m the KPI presented in Table 5. The formulations are implemented in CPLEX 20.1, and run on the server with CPU Intel(R) Xeon(R) Gold 5120 @ 2.20 GHz. Each resolution process uses one thread and 2 GB of RAM. Table 5. KPI to measure per each formulation each problem size, each resolution time KPI

Description

% Feasibility

Percentage of total instances by each problem size that the solver finds feasible solution with the corresponding formulation

LB-GAP

MLB-GAP [25]

4.3

OBJm −LBm , LBm

where OBJm , LBm are respectively the best objective value and the best lower bound found by the formulation m. Only when the formulation has 100% feasibility OBJm −LB ∗ , LB ∗

where OBJm is the best objective value found by the formulation m, and LB ∗ is the best lower bound found by all formulations. Only when the formulation has 100% feasibility

Result

As we can see in Fig. 2, the DT model (6) begins to have difficulty finding a feasible solution as the number of jobs A grows to 50. And the more complex the instances, the harder the DT model finds an answer. Even after giving more time, the DT model still struggles and ultimately gives up after A ≥ 250. On the other hand, the DC model (7) has no such difficulty, even with 500 jobs. This observation hints that the structural change made by (5) potentially creates a more easily exploitable solution space for the solver. However, more experiments and studies should be conducted to examine the benefit of the Discrete Continuous representation.

96

Q. N. H. Tran et al.

Regarding the solution’s quality (Fig. 3), when the instances are small (A ≤ 20), while the DT model manages to prove the optimality or reduce the GAP to under 5%, the DC model provides a relatively large GAP around 20%. However, the MLB-GAP (Table 8), which compares the solutions against the best proven LB of all formulations, demonstrates that the answers of the DC model are not that far from the optimality. Thus, the DC model might need a suitable cut to strengthen its LB.

Fig. 2. The rate of feasibility over problem sizes of two formulations at different resolution time (Table 6)

Table 6. The rate of the feasibility of two formulations (Fig. 2) Problem size (A) DC-fast DC-medium DC-long DT-fast DT-medium DT-long 10

100

100

100

100

100

100

20

100

100

100

100

100

100

50

100

100

100

80

90

90

100

100

100

100

30

50

60

150

100

100

100

20

40

50

200

100

100

100

0

10

20

250

100

100

100

0

0

0

300

100

100

100

0

0

0

350

100

100

100

0

0

0

400

100

100

100

0

0

0

450

100

100

100

0

0

0

500

100

100

100

0

0

0

New Mathematical Model for Hybrid Flow Shop

97

Fig. 3. The average LB-GAP of two formulations (Table 7) Table 7. Mean LB-GAP (%) (only when 100% feasibility, or else −) (Fig. 3) Problem size (A) DC-fast DC-medium DC-long DT-fast DT-medium DT-long 10

24.12

17.9

15.01

1.32

0.5

20

29.29

23.31

23.77

3.48

1.33

0.24 1.25

50

33.84

17.48

17.17







100

39.41

29.24

20.67







150

40.6

33.42

25.11







200

43.31

31.54

29.29







250

49.78

40.49

35.96







300

46.03

35.15

31.68







350

48.05

38.19

36.02







400

48.17

43.82

39.16







450

47.66

40.29

35.14







500

48.92

46.18

44.45







Table 8. Average MLB-GAP (%) (only where both formulations yields 100 % feasibility) Problem size (A) DC-fast DC-medium DC-long DT-fast DT-medium DT-long

5

10

0.03

0.01

0.01

0

0

0

20

0.08

0.03

0.04

0

0

0

Conclusion

This paper presents a new approach to construct variables to model the jobs for Project Scheduling Problems. We also demonstrate this approach in the Hybrid Flow Shop problem with chaining jobs with exact time-lag and time-varying resources constraints. A specific benchmark, albeit limited, was shown to compare the performance of our formulation to the state-of-the-art DT model. The results demonstrate that our formulation is quicker to find a feasible solution, even in complex instances where the DT model failed to reach any solution under a long time limit. Despite unimpressive LB-GAP, this speedy resolution of the

98

Q. N. H. Tran et al.

DC model suggests a particular potential in rapidly assuring the feasibility of generated instances, for example. As to the future perspective, we plan to develop valid inequalities for better solution quality. The formulation we present does not necessarily restrict to Project Scheduling, as the inequalities represent different thinking about dynamically connecting the aspects of a job. Although the benchmark is created explicitly for the defined problem, one can accommodate other constraints or reduce the complexity depending on the scenario.

References 1. Artigues, C.: On the strength of time-indexed formulations for the resourceconstrained project scheduling problem. Oper. Res. Lett. 45(2), 154–159 (2017). https://doi.org/10.1016/j.orl.2017.02.001 2. Artigues, C., Michelon, P., Reusser, S.: Insertion techniques for static and dynamic resource-constrained project scheduling. Eur. J. Oper. Res. 149(2), 249–267 (2003). https://doi.org/10.1016/S0377-2217(02)00758-0 3. Chen, B., Zhang, X.: Scheduling coupled tasks with exact delays for minimum total job completion time. J. Sched. 24(2), 209–221 (2020). https://doi.org/10. 1007/s10951-020-00668-1 4. Christofides, N., Alvarez-Valdes, R., Tamarit, J.M.: Project scheduling with resource constraints: a branch and bound approach. Eur. J. Oper. Res. 29(3), 262–273 (1987). https://doi.org/10.1016/0377-2217(87)90240-2 5. Demeulemeester, E.L., Herroelen, W.S.: Project Scheduling: A Research Handbook. Springer, Heidelberg (2006). https://doi.org/10.1007/b101924 6. Gn¨ agi, M., Zimmermann, A., Trautmann, N.: A continuous-time unit-based MILP formulation for the resource-constrained project scheduling problem. In: 2018 IEEE International Conference on Industrial Engineering and Engineering Management (IEEM) (2018). https://doi.org/10.1109/IEEM.2018.8607337 7. Hartmann, S., Briskorn, D.: A survey of variants and extensions of the resourceconstrained project scheduling problem. Eur. J. Oper. Res. 207(1), 1–14 (2010). https://doi.org/10.1016/j.ejor.2009.11.005 8. Hartmann, S., Briskorn, D.: An updated survey of variants and extensions of the resource-constrained project scheduling problem. Eur. J. Oper. Res. (2021). https://doi.org/10.1016/j.ejor.2021.05.004 9. Herroelen, W.S.: Resource-constrained project scheduling—the state of the art. J. Oper. Res. Soc. 23(3), 261–275 (1972). https://doi.org/10.1057/jors.1972.48 10. Kon´e, O., Artigues, C., Lopez, P., Mongeau, M.: Event-based MILP models for resource-constrained project scheduling problems. Comput. Oper. Res. 38(1), 3– 13 (2011). https://doi.org/10.1016/j.cor.2009.12.011 11. Kopanos, G., Kyriakidis, T.S., Georgiadis, M.: New continuous-time and discretetime mathematical formulations for resource-constrained project scheduling problems. Comput. Chem. Eng. (2014). https://doi.org/10.1016/j.compchemeng.2014. 05.009 12. Mika, M., Walig´ ora, G., W¸eglarz, J.: Overview and state of the art. In: Schwindt, C., Zimmermann, J. (eds.) Handbook on Project Management and Scheduling Vol.1. IHIS, pp. 445–490. Springer, Cham (2015). https://doi.org/10.1007/978-3319-05443-8 21

New Mathematical Model for Hybrid Flow Shop

99

13. Nguyen, N.Q., Yalaoui, F., Amodeo, L., Chehade, H., Toggenburger, P.: Total completion time minimization for machine scheduling problem under time windows constraints with jobs’ linear processing rate function. Comput. Oper. Res. 90, 110– 124 (2018). https://doi.org/10.1016/j.cor.2017.09.015 14. Patterson, J.H., Huber, W.D.: A horizon-varying, zero-one approach to project scheduling. Manag. Sci. 20(6), 990–998 (1974). https://doi.org/10.1287/mnsc.20. 6.990 15. Patterson, J.H., Roth, G.W.: Scheduling a project under multiple resource constraints: a zero-one programming approach. AIIE Trans. 8(4), 449–455 (1976). https://doi.org/10.1080/05695557608975107 16. Pritsker, A.A.B., Watters, L.J., Wolfe, P.M.: Multiproject scheduling with limited resources: a zero-one programming approach. Manag. Sci. 16(1), 93–108 (1969) 17. Rainer, K., Sprecher, A.: The Library PSBLIB (March 2005). http://www.om-db. wi.tum.de/psplib/library.html 18. Rihm, T.: Applications of Mathematical Programming in Personnel Scheduling. Dissertation, Universit¨ at Bern (2017) 19. Tesch, A.: Compact MIP models for the resource-constrained project scheduling problem. Master thesis, Technische Universitat Berlin (2015) 20. Tesch, A.: Improved compact models for the resource-constrained project scheduling problem. In: Fink, A., F¨ ugenschuh, A., Geiger, M.J. (eds.) Operations Research Proceedings 2016. Operations Research Proceedings, pp. 25–30. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-55702-1 21. Tesch, A.: A polyhedral study of event-based models for the resource-constrained project scheduling problem. J. Sched. (2020). https://doi.org/10.1007/S10951-02000647-6 22. Tran, Q.N.H., et al.: Optimization of chemotherapy planning. In: International Conference on Optimization and Learning (OLA2020), Cadiz, Spain (February 2020) 23. Tran, Q.N.H., Nguyen, N.Q., Chehade, H.: TNCY-I (September 2021). https:// github.com/qnhant5010/TNCY-I 24. Weglarz, J. (ed.): Project Scheduling: Recent Models, Algorithms and Applications. International Series in Operations Research & Management Science. Springer, US (1999). https://doi.org/10.1007/978-1-4615-5533-9 25. Yalaoui, F., Nguyen, N.Q.: Identical machine scheduling problem with sequencedependent setup times: MILP formulations computational study. Am. J. Oper. Res. 11(01), 15 (2021). https://doi.org/10.4236/ajor.2021.111002

Maximizing Achievable Rate for Incremental OFDM-Based Cooperative Communication Systems with Out-of-Band Energy Harvesting Technique

You-Xing Lin, Tzu-Hao Wang, Chun-Wei Wu, and Jyh-Horng Wen(B)

Tunghai University, Taichung, Taiwan
{g07360014,jhwen}@thu.edu.com

Abstract. Cooperative communication combined with simultaneous wireless information and power transfer (SWIPT) is a very promising technology. It can make the nodes in a relaying network more energy-efficient and even improve system performance. In this paper, OFDM-based cooperative communication with out-of-band energy harvesting is discussed. To maximize the average achievable rate, relay selection and subcarrier allocation are jointly optimized. For practical applicability, we assume that the power allocation strategy uses equal power allocation (EPA). To reduce the computational complexity of the original optimization problem, we propose a sub-optimal solution with excellent performance. Numerical results verify the benefits of our solution.

Keywords: Cooperative communication · Amplify-and-forward · Achievable rate · OFDM · Out-of-band energy harvesting

1 Introduction

Cooperative communication is a promising communication technology. It effectively utilizes communication resources through the cooperation of nodes in the network and can provide diversity gains similar to those of multiple-input multiple-output (MIMO) systems [1]. The relay nodes in a cooperative communication system play the role of forwarding signals. Common forwarding strategies include amplify-and-forward (AF) and decode-and-forward (DF). With AF, as its name implies, the relay node amplifies and forwards the signal after receiving it from the source node. With DF, the relay decodes the received signal, re-encodes it and forwards it to the destination node. Comparing the two methods, AF has the advantage of lower hardware complexity, but the disadvantage that the received noise is also amplified. DF does not amplify and forward the received noise, but its hardware complexity is relatively high, and the signal must be successfully decoded before forwarding [2].


Orthogonal frequency-division multiplexing (OFDM) is a physical layer transmission scheme. The bandwidth used by the user is divided into multiple orthogonal subcarriers, and data are parallel transmitted by carried on these orthogonal sub-carriers. This technology effectively combats frequency selective fading, reduces inter-symbol interference, and achieves high spectrum efficiency. Because of these advantages, OFDM has become a key technology in many wireless communication standards, such as WIFI, fourth-generation and fifth-generation mobile communications [3], and many scholars have been studying cooperative communications based on OFDM [4, 5]. Because the promotion of green energy, energy harvesting (EH) is a technique that has emerged in recent years. It can solve the energy problem in energy-limited wireless devices, such as wireless cellular networks and wireless sensor networks [6, 7]. More specifically, in a wireless cooperation scenario, a relay node equipped with energy harvesting technology can harvest RF energy from the signal transmitted by the base station [8]. In fact, SWIPT was first proposed in [9] in 2008. Two SWIPT architectures based on power splitting are proposed in [10], namely power splitting (PS) and time switching (TS). These two energy harvesting architectures are also more suitable for devices with only a single antenna. When TS is adopted, the relay node switches between information processing mode and EH mode. When using PS, the relay divides the received signal power into two parts proportionally, one part is sent to the information processing circuit, and the other part is sent to the EH circuit. Generally speaking, the SWIPT-enabled relay method can effectively promote the lifetime of energy-limited device and significantly enhance system performance. It is well acknowledged that two-hop cooperative communication will have prelog loss of 1/2 due to the half-duplex characteristics of the relay. In [4, 5, 11–13], an incremental cooperative protocol is adopted to reduce aforementioned pre-log loss. Specifically, the cooperation mode will only be carried out when the relaying link is better than direct link. It is conceivable that the combination of SWIPT, cooperative communication, and OFDM technology can effectively improve the quality of communication and have the characteristics of green energy. In [4, 5], the communication system model of incremental AF-OFDM with multi-relay was investigated. In [8, 10, 13–17], cooperative communication with energy harvesting technique was studied. Among them, the communication system in the literature [13–16] is based on OFDM, but only in-band energy has been harvested by relay. We believe that the incremental AF-OFDM communication system is very suitable for out-of-band energy harvesting. Because the relay may not forward all sub-carriers after receiving the signal from source, and the energy of the subcarriers that are not forwarded can be fully collected. This energy can be provided for other subcarriers that need to be forwarded to improve signal quality. With the above ideas, our goal is to maximize the achievable rate of the incremental AF-OFDM system subject to total sum-power constraint.

102

Y.-X. Lin et al.

2 Problem Formulation and Solution Consider a system model similar to reference [4], a two-hop OFDM-based AF relaying network with one source S, one destination D, and the help of potential relays Rl , l = 1, . . . , L. Need to mention that system model in [4] didn’t adopt SWIPT technique, while we consider a similar system but with PS-SWIPT technique in relays. The system completes a transmission through two phases, and the lengths of the two phases are always equal. It is assumed that each channel is a slow fading channel, that is, the channel maintains the same fading characteristics in two phases. In the first phase, the signal on each subcarrier n of S is broadcast with power Ps,n , where n = 1, . . . , N . At this time, Rl l = 1, . . . , L and D receive the signal. Let hs,l,n , hl,d ,n and hs,d ,n represent the complex Gaussian channel coefficients on subcarrier n of links S → Rl , Rl → D and S → D, respectively. Suppose that all noise samples associated with the transmission links denoted as ns,l,n , nl,d ,n and ns,d ,n are independently and identically distributed (i.i.d.) additive white Gaussian noise (AWGN) with zero-mean and variance σ 2 . The signal received by the nth subcarrier of the l th relay and the destination can be represented by Eqs. (1) and (2), respectively,  ys,l,n = Ps,n hs,l,n xn + ns,l,n , (1) yd1 ,n =

 Ps,n hs,d ,n xn + ns,d ,n ,

(2)

where xn is the unit power information symbol transmitted on the nth subcarrier of S. In the second phase, binary parameters ηc,n , n = 1, . . . , N , are employed to indicate the incremental policy for subcarrier n. That is to say, each subcarrier can only select one transmission mode, i.e., direct mode or cooperative mode. Moreover, binary parameter ql,n , indicate the relay selection strategy, where ql,n ∈ {0, 1}. These two parameters should satisfy the following condition: L    1 − ηc,n + ηc,n ql,n = 1, ∀n

(3)

l=1

For the subcarrier with ηc,n = 1, it’s means that this subcarrier is doing cooperative mode, and a chosen relay will forward the scaled information to the destination with power Pl,n . For the subcarrier with ηc,n = 0, all relays will not forward the signal but S will broadcast a new information symbol with the same power used in phase one. Since PS-SWIPT technique is adopted by all relay, the received power of each subcarrier with ηc,n = 1 on the chosen relay is split into two parts. One with (1 − ρ) percentage is used for signal processing, and the remainder with ρ percentage is harvested by the relay in

Maximizing Achievable Rate for Incremental OFDM

103

order to forward the information in the second phase. Thus, the expression for signal processing at the relay is as follows  y˜ s,l,n = (1 − ρ)Ps,n hs,l,n xn + n˜ s,l,n (4)   For simplicity, we assume that n˜ s,l,n ∼ CN 0, σ 2 [13]. The received power for the subcarriers which will not be forwarded by the relay will be totally harvested and equally allocated to other subcarriers that need to be forwarded. And the energy harvested from noise is generally too small, so it’s not considered. Hence, the signals received by the destination in the second phase can be represented by Eqs. (5),  P h x + nl,d ,n , ηc,n = 1, ql,n = 1 2 , (5) yd ,n =  l,n l,d ,n l,n Ps,n hs,d ,n xn + ns,d ,n , ηc,n = 0 where xl,n =





1 y˜ s,l,n 2 (1−ρ)Ps,n |hs,l,n | +σ 2

is the scaled signal copy transmitted by the

relay Rl with ql,n = 1. The equivalent received signal-to-noise ratio (SNR) at the destination through AF relaying for subcarrier n can be derived as

L

ζn =

l=1 ql,n (1 − ρ)Ps,n αl,n ·

L 1 + l=1 ql,n (1 − ρ)Ps,n αl,n

L

l=1 ql,n Pl,n βl,n

+

L

l=1 ql,n Pl,n βl,n

,

(6)

|h |2 |hl,d ,n |2 and β = [2]. By assuming ideal maximum ratio comwhere αl,n = s,l,n l,n 2 σ σ2 bining (MRC) is used at destination, the mutual information on the subcarrier n within the cooperative mode can be written as Ic,n =

  1 log2 1 + Ps,n γn + ζn , 2

(7)

|h ,n |2 , and the factor 21 is because the information signal is actually where γn = s,d σ2 transmitted in two slots. Therefore, the spectrum efficiency drops by one half. For the subcarrier n which is in direct mode, the mutual information can be derived as   (8) Id ,n = log2 1 + Ps,n γn So far, the incremental AF-OFDM cooperative system achievable rate can be defined by R=

 1 N  1 − ηc,n Id ,n + ηc,n Ic,n n=1 N

(9)

104

Y.-X. Lin et al.

The power PT is wholly devoted to the transmission of S, and transmitting power of Rl comes from their own harvested energy in the first phase. In order to simplify the problem and increase the feasibility of practical applications, we assume that the power allocation uses EPA. Specifically, Ps,n can be described as PT Ps,n = N    n=1 ηc,n + 2 1 − ηc,n

(10)

According to the previous description about incremental policy, we can set ql,n = 0, ∀l, for the case of ηc,n = 0. This constraint can be mathematically derived as  L  1 − ηc,n

l=1

ql,n = 0, ∀n

(11)

And the energy harvested by the Rl on the subcarrier n in the first phase can be derived as 

2 

2 T  (12) El,n = ηc,n ql,n ερPs,n hs,l,n + 1 − ql,n εPs,n hs,l,n 2 where ε ∈ [0, 1] is the energy conversion efficiency which depends on the EH circuit. T denotes the total transmission time, which is divided into two equal time slots for cooperative relaying system. Because each subcarrier can operate in either cooperative mode or direct mode, the energy harvested by the Rl has two cases. For the first case, ηc,n = 1 and ql,n = 1. It means the subcarrier n of Rl participates in cooperative transmission. For the second case, ηc,n = 1 and ql,n = 0 or ηc,n = 0 and ql,n = 0. It means the subcarrier n of Rl does not participate in cooperative transmission. We integrate these two cases into one equation, as Eq. (12). Note that Eq. (12) is subject to (3) and (11). So, the transmitting power of subcarrier n on Rl can be derived as ⎧

2 N 1−q εP |h |2 ⎨ ερPs,n hs,l,n + n=1 ( N l,n ) s,n s,l,n , ηc,n = 1, ql,n = 1 (13) Pl,n = n=1 ηc,n ql,n ⎩ 0, otherwise As we can see, the transmitting power of the Rl on the subcarrier n in the second phase is composed of energy on the subcarrier n in the first phase and the other subcarriers energy in the first phase. Be aware that the EPA is adopted on out-of-band power allocation. Thus, the problem is formulated as max R

ηc,n ,ql,n

sub.to (3), (10), (11), (13)

(14)

Maximizing Achievable Rate for Incremental OFDM

105

To find the optimal solution of ηc,n and ql,n , exhaustive search is an intuitive method, but it will lead to a very high computational complexity. This prompted us to look for a way to simplify the optimal problem (14). Because the use of out-of-band energy harvesting, we can assume plenty of energy has been harvested for the Rl to use. Therefore, the received SNR ζn on the subcarrier n can be approximated by the following equation: lim ζn =

Pl,n →∞

L l=1

ql,n (1 − ρ)Ps,n αl,n

Then (7) can be rewritten as:   L 1 upper ql,n (1 − ρ)Ps,n αl,n Ic,n = log2 1 + Ps,n γn + l=1 2

(15)

(16)

On the other hand, (16) is the upper bound capacity of relaying link regarding to Pl,n . According to the incremental cooperative protocol concept, we design a method to PT into (8) and (16), and let determine ηcn independently. That is, substitute Ps,n = 2N upper ηc,n = 1 when Ic,n > Id ,n , otherwise, let ηc,n = 0. Obviously, this simplified method makes sense, but not necessarily the optimal solution. As the subcarrier allocation indicator ηc,n is determined, the remaining issue is relay selection strategy. We will investigate two strategy, global relay selection (GRS) and independent relay selection (IRS) [4]. The GRS, where only one relay is selected to forward signal. Unlike GRS, subcarrier within cooperative mode distributed on different relays for strategy IRS. In short, GRS can be considered as choosing the best relay and IRS is to choose the best link for the subcarrier among all relays. Following tables show the algorithm of two relay selection strategy with complete network channel state information (CSI) available in destination. Need to be mentioned, these two algorithms are centralized approaches. It is worth mentioning that in algorithm of IRS, ql,n is determinated by αl,n . It is because of the previous assumption about (15) and (16). In (16),

L upper l=1 ql,n (1 − ρ)Ps,n αl,n determines the performance regarding to Ic,n . Therefore, upper choose the best αl,n lead to the best performance of Ic,n . Furthermore, the bigger αl,n is, the bigger Pl,n is, according to (12) and (13). Then the assumption about Pl,n → ∞ will more reasonable.

106

Y.-X. Lin et al.

Maximizing Achievable Rate for Incremental OFDM

107

3 Simulation Results This section illustrates the simulation result of the proposed algorithms. Consider an AF-OFDM relaying system with 4 relays, where the channel coefficients hs,l,n , hl,d ,n and hs,d ,n are generated as complex zero-mean Gaussian random variables with variance a, b and c, respectively. The number of OFDM subcarriers is 64. The noise power σ 2 is chosen to be unitary. And we define SNR = PσT2 . These simulation parameters are set to be the same as [4]. The system without energy harvesting (EH) technique in the simulation is based on the algorithms proposed in [4]. In addition, PS ratio ρ is set to be 0.5, and energy conversion efficiency ε is also set to be 0.5 [18]. Figure 1 illustrates the comparative results in terms of achievable rate in bps/Hz in three scenarios of GRS algorithm, where instantaneous CSI can be fully accessed. Three scenarios are considered: (i) a = b = c = 10; (ii) a = 100, b = c = 10; (iii) a = c = 10, b = 100. It can be observed that GRS with EH always performs better than GRS without EH. As we can see, the improvement in scenario (ii) is better than the others. It is because scenario (ii) can harvest more energy than other scenarios.

Fig. 1. Achievable rate comparison for GRS without EH [4], proposed GRS with EH

Figure 2 illustrates IRS algorithm achievable rate with the same scenarios in Fig. 1. Apparently, IRS with EH has significant performance improvement. To mention that performance of IRS without EH in scenarios (i) and (ii) is worse than GRS without EH. This counterintuitive phenomenon has been discussed in [4]. Comparing both GRS with EH and IRS with EH, we can find that IRS with EH has the better performance. This

108

Y.-X. Lin et al.

is because IRS with EH can allocate more power to the subcarriers selected for signal forwarding in the relay. The simulations were done on Windows operating system with software MATLAB. The CPU is AMD Ryzen 5 1600. The simulation time of each point in Fig. 1 and Fig. 2 takes about 4 s, which include 105 OFDM symbols.

Fig. 2. Achievable rate comparison for IRS without EH [4], proposed IRS with EH

Figures 3 and 4 can be used to assist our assumptions about (15) and (16), which can rationalize our proposed subcarrier allocation policy. The simulations in Figs. 3 and 4 were performed at SNR = 20 dB. The solid line with x = 1 represents the system performance where all relays didn’t harvest out-of-band energy but only harvest in-band energy for transmitting power. As the x rise, the Pl,n in solid line can be described

2 as Pl,n = x · ερPs,n hs,l,n , ηc,n = 1, ql,n = 1. According to the chart, when x is large enough, the system performance can be regarded as a system performance with Pl,n → ∞. Dashed line represents the actual achievable rate when using in-band energy and out-of-band energy at the same time, as described in (13). We can find that solid line will approach its own asymptotic line, respectively, as Pl,n → ∞. The asymptotes represent their respective upper bound of achievable rate. It is noticed that dashed line in scenarios (i) and (iii) are very close to the upper bound of achievable rate, it means that our assumptions make sense in scenarios (i) and (iii). Now we compare the relationship between the solid line and the dashed line of color cyan in Figs. 3 and 4, it can be found that the achievable rate of actual IRS+EH is closer to its upper bound. So we can conclude that IRS with EH algorithm is more suitable for our assumptions.

Maximizing Achievable Rate for Incremental OFDM

Fig. 3. The upper bound of achievable rate when Pl,n → ∞ in GRS with EH

Fig. 4. The upper bound of achievable rate when Pl,n → ∞ in IRS with EH

109

110

Y.-X. Lin et al.

4 Conclusions An incremental AF-OFDM cooperative with out-of-band energy harvesting scheme is introduced. We also investigated the subcarrier allocation policy and relay selection strategy to maximize the achievable rate of the proposed system. Since to obtain the optimal solution is computationally demanding, we proposed a suboptimal solution which has the outstanding performance, especially for the system using IRS strategy. From the results, we can conclude that out-of-band energy harvesting in cooperative communication system has a great impact on system performance. In the future, optimal power allocation (OPA) issue would be worth investigating for the proposed system. Acknowledgements. This work is currently supported by the Ministry of Science and Technology (MOST) of Taiwan, under Grant MOST 109-2221-E-028-021.

References 1. Nosratinia, A., Hunter, T.E., Hedayat, A.: Cooperative communication in wireless networks. IEEE Commun. Mag. 42(10), 74–80 (2004) 2. Hong, Y.W.P., Huang, W.J., Kuo, C.C.J.: Cooperative Communications and Networking: Technologies and System Design. Springer, Heidelberg (2010). https://doi.org/10.1007/9781-4419-7194-4 3. Arrano, H.F., Azurdia-Meza, C.A.: OFDM: today and in the future of next generation wireless communications. In: 2016 IEEE Central America and Panama Student Conference (CONESCAPAN), pp. 1–6 (2016) 4. Zhang, Y., Pang, L., Liang, X., Li, J., Wang, Q., Liu, X.: Maximizing achievable rate for improved AF-OFDM cooperative transmission. In: 2015 IEEE Wireless Communications and Networking Conference (WCNC), pp. 510–515 (2015) 5. Zhang, Y., et al.: Spectrum and energy efficient relaying algorithms for selective AF-OFDM systems. In: 2016 IEEE 83rd Vehicular Technology Conference (VTC Spring), pp. 1–5 (2016) 6. Liu, L., Zhang, R., Chua, K.: Wireless information transfer with opportunistic energy harvesting. IEEE Trans. Wirel. Commun. 12(1), 288–300 (2013) 7. Velkov, Z., Zlatanov, N., Duong, T., Schober, R.: Rate maximization of decode-and-forward relaying systems with RF energy harvesting. IEEE Commun. Lett. 19(12), 2290–2293 (2015) 8. Di, X., Xiong, K., Fan, P., Yang, H.: Simultaneous wireless information and power transfer in cooperative relay networks with rateless codes. IEEE Trans. Veh. Technol. 66(4), 2981–2996 (2017) 9. Varshney, L.: Transporting information and energy simultaneously. In: IEEE International Symposium on Information Theory, Toronto, Canada, pp. 1612–1616 (2008) 10. Atapattu, S., Evans, J.: Optimal energy harvesting protocols for wireless relay networks. IEEE Trans. Wirel. Commun. 15(8), 5789–5803 (2016) 11. Al-Tous, H., Barhumi, I.: Resource allocation for multiuser improved AF cooperative communication scheme. IEEE Trans. Wirel. Commun. 14(7), 3655–3672 (2015) 12. Pang, L., et al.: Energy aware resource allocation for incremental AF-OFDM relaying. IEEE Commun. Lett. 19(10), 1766–1769 (2015) 13. Zhang, Y., et al.: Multi-dimensional resource optimization for incremental AF-OFDM systems with RF energy harvesting relay. IEEE Trans. Veh. Technol. 68(1), 613–627 (2019)

Maximizing Achievable Rate for Incremental OFDM

111

14. Huang, G.: Joint resource allocation optimization in OFDM relay networks with SWIET. In: 2016 IEEE 4th International Conference on Future Internet of Things and Cloud Workshops (FiCloudW), pp. 304–309 (2016) 15. Huang, G., Tu, W.: Wireless information and energy transfer in nonregenerative OFDM AF relay systems. Wirel. Pers. Commun. 94, 3131–3146 (2017) 16. Liu, Y., Wang, X.: Information and energy cooperation in OFDM relaying: protocols and optimization. IEEE Trans. Veh. Technol. 65(7), 5088–5098 (2016) 17. Wang, Z.S.: Performance Analyses and Researches of Cooperative Communications with Energy Harvesting. 2018 Peking University Master’s Thesis (2018) 18. Jameel, F., Haider, M.A.A., Butt, A.A.: A technical review of simultaneous wireless information and power transfer (SWIPT). In: 2017 International Symposium on Recent Advances in Electrical Engineering (RAEE), pp. 1–6 (2017)

Estimation and Compensation of Doppler Frequency Offset in Millimeter Wave Mobile Communication Systems

Van Linh Dinh1,2(B) and Van Yem Vu2

1 Academy of Cryptography Techniques, No. 141 Chien Thang Road, Hanoi, Vietnam
[email protected]
2 Hanoi University of Science and Technology, No. 1 Dai Co Viet Road, Hanoi, Vietnam
[email protected]

Abstract. In millimeter Wave (mmWave) Multiple-Input Multiple-Output Orthogonal Frequency Division Multiplexing (MIMO-OFDM) systems that applied to high mobility applications, Carrier Frequency Offset (CFO) is a primary factor in reducing the performance of OFDM transmissions due to the destruction of the subcarrier component’s orthogonality. This paper presents the bit error rate (BER) performance for various modulation techniques of Cyclic Prefix (CP) OFDM and different MIMO systems at 28 GHz and 38 GHz by applying the frequency domain estimation technique for CFO. The simulations are carried out in channels affected by Rayleigh fading. The simulation results indicate that for BPSK modulation technique, it provides better BER performance at 28 GHz when compared to other modulation techniques (QPSK, 8PSK) and 38 GHz. Moreover, the performance of mmWave MIMO-OFDM systems can be effectively improved as the number of receive antennas increases. Keywords: MIMO-OFDM · CP-OFDM · BPSK · Doppler effect · Carrier Frequency Offset (CFO) · Pilot tones

1 Introduction Millimeter Wave (mmWave) bands attract large research interest as they can potentially lead to the data rate of almost 10 Gbits/s and huge available bandwidth whereas the microwave frequencies are limited to 1 Gbits/s [1]. In 5G system, radio interface, especially radio frequency channels have had many challenges that need to be solved. The mmWave band promises a massive amount of unlicensed spectrum at 28 GHz and 38 GHz and these frequency bands are potential for 5G cellular systems [2]. Multiple-Input Multiple-Output Orthogonal Frequency Division Multiplexing (MIMO-OFDM) system is the dominant air interface for 5G broadband wireless communications. MIMO systems and their functions are the benefits for environment providing numerous antennas on both transmitter and receiver side that raises throughput, data rate, spectral efficiency, and reliability, by reducing the transmission power requirement, is giving users a better environment in the field of wireless communication [3]. 5G system © The Author(s), under exclusive license to Springer Nature Switzerland AG 2022 H. A. Le Thi et al. (Eds.): MCO 2021, LNNS 363, pp. 112–119, 2022. https://doi.org/10.1007/978-3-030-92666-3_10

Estimation and Compensation of Doppler Frequency Offset

113

uses many modulations based on OFDM modulation techniques such as Cyclic Prefix OFDM (CP-OFDM), Filtered OFDM (F-OFDM), Filter Bank MultiCarrier (FBMC), Universal Filtered Multicarrier (UFMC), Generalized Frequency Division Multiplexing (GFDM) [4]. OFDM is defined as an efficient technique in which orthogonal subcarriers are transmitted using parallel data transmission by achieving high data rates. Instead of serially transmitting the data streams, OFDM uses a parallel data scheme for transmission, which uses serial-to-parallel block for the conversion of serial data into parallel data symbols for transmission. The fast Fourier transform (FFT) technique makes it computationally efficient. The insertion of guard intervals in OFDM systems leads to the improvement in inter-symbol/channel interference [5, 6]. OFDM is very sensitive to the frequency offset and the time synchronization. The synchronization problem consists of two major parts: carrier frequency offset (CFO) and symbol time offset (STO). This is due to the Doppler shift and a mismatch between the local oscillator at the transmitter and receiver [7]. In STO, time-domain δ sample and phase shift offset are affected in the frequency domain. Frequency synchronization error destroys the orthogonality among the subcarriers which causes inter-carrier interference (ICI) [8, 9]. Therefore, CFO synchronization is essential to the OFDM system. The normalized CFO can be divided into two parts which are integral CFO (IFO) ξi and fractional CFO (FFO) ξf . IFO produces a cyclic shift by ξi at the receiver side to corresponding subcarriers, it does not destroy orthogonality among the subcarrier frequency components and the FFO destroys the orthogonality between the subcarriers [3]. The CFO estimation has been extensively investigated for single-input single-output (SISO) and MIMO-OFDM systems. For CFO estimation in the time domain, a cyclic prefix (CP) and training sequence are used. The CP-based estimation has been analyzed assuming negligible channel effect. CFO can be found from the phase angle of the product of CP and the corresponding part of an OFDM symbol, the average has taken over the CP intervals and in training sequence estimation using the training symbol that is repetitive with shorter periods. In CFO estimation using frequency domain, this technique involves the comparison of the phase of each subcarrier to successive symbol, the phase shift in symbol due to the carrier frequency offset. Two different estimation modes for CFO estimation in the pilot-based estimation method are acquisition and tracking mode. In the acquisition mode, a large range of CFO estimation is done and in tracking mode, only the fine CFO is estimated. In OFDM systems, the frequency domain estimation technique for CFO by using pilot tones provides better mean square error performance than the time domain estimation techniques for CFO [7]. In this paper, the performance of the mmWave MIMO-OFDM system affected by CFO is presented by using the metric of bit error rate (BER) versus energy per bit to noise power (Eb/N0). The CFO is estimated from pilot tones in the frequency domain, then the signal is compensated with the estimated CFO in the time domain. The performance of the mmWave MIMO-OFDM system is analyzed by using diverse modulation techniques and varying the number of receive antennas at 28 GHz, 38 GHz. The paper is organized as follows, Sect. 2 provides briefly MIMO-OFDM model description and CFO estimation in the frequency domain. 
Section 3 presents the results obtained for the mmWave MIMO-OFDM systems; all simulations are implemented in MATLAB and the results are presented for each situation. Finally, conclusions are given in Sect. 4.

2 MIMO-OFDM Model Description and CFO Estimation in the Frequency Domain

2.1 OFDM System Model

In this paper, we consider the OFDM system model for uplink transmission (Fig. 1). In the OFDM transmission scheme, a wide-band channel is divided into N orthogonal narrow-band sub-channels. N-point IFFT and FFT are used to implement OFDM modulation and demodulation. The transmitter maps the message bits into a sequence of phase-shift keying (PSK) or quadrature amplitude modulation (QAM) symbols, which are subsequently converted into N parallel streams. Each of the N symbols from the serial-to-parallel (S/P) conversion is modulated on a different subcarrier.

Fig. 1. OFDM system model

Let $X_l[k]$ denote the $l$th transmit symbol on the $k$th subcarrier, with $l = 0, 1, 2, \ldots$ and $k = 0, 1, \ldots, N-1$, and let $T_{sym} = N T_s$ be the OFDM symbol length. The OFDM signal on the $k$th subcarrier is [10]:
$$\psi_{l,k}(t) = \begin{cases} e^{j2\pi f_k (t - l T_{sym})}, & 0 < t \le T_{sym} \\ 0, & \text{elsewhere} \end{cases} \qquad (1)$$
The passband and baseband OFDM signals in the continuous-time domain are
$$x_l(t) = \mathrm{Re}\left\{ \frac{1}{T_{sym}} \sum_{l=0}^{\infty} \sum_{k=0}^{N-1} X_l[k]\, \psi_{l,k}(t) \right\} \qquad (2)$$
The continuous-time baseband OFDM signal is sampled at $t = l T_{sym} + n T_s$, with $T_s = T_{sym}/N$ and $f_k = k/T_{sym}$, to obtain the corresponding discrete-time OFDM signal
$$x_l[n] = \sum_{k=0}^{N-1} X_l[k]\, e^{j2\pi k n / N} \qquad (3)$$

For $n = 0, 1, \ldots, N-1$, the received baseband symbol, considering the effect of the channel and noise at the receiver, is obtained from the sample values $\{y_l[n]\}_{n=0}^{N-1}$ of the received OFDM symbol $y_l(t)$ at $t = l T_{sym} + n T_s$ as
$$y_l[k] = \sum_{n=0}^{N-1} H_l[n]\, y_l[n]\, e^{-j2\pi k n / N} + W_l[n] \qquad (4)$$

The received baseband samples in the presence of a CFO $\xi$ and an STO $\delta$ are
$$y_l[n] = \frac{1}{N} \sum_{k=0}^{N-1} H_l[k]\, X_l[k]\, e^{j2\pi (k+\xi)(n+\delta)/N} + w_l[n] \qquad (5)$$
where $\xi$ is the normalized frequency offset (the ratio of the actual frequency offset to the inter-carrier spacing $\Delta f$) and $w_l[n]$ is the complex envelope of the additive white Gaussian noise (AWGN). The $k$th element of the discrete Fourier transform sequence consists of three components [11]:
$$y_k = (X_k H_k)\, \frac{\sin \pi\xi}{N \sin(\pi\xi/N)}\, e^{j\pi\xi(N-1)/N} + I_k + W_k \qquad (6)$$
Here the first component is the modulation term and the second component is the ICI caused by the frequency offset:
$$I_k = \sum_{l=0,\, l \neq k}^{N-1} (X_l H_l)\, \frac{\sin \pi\xi}{N \sin\big(\pi (l - k + \xi)/N\big)}\, e^{j\phi} \qquad (7)$$
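As a numerical illustration of Eqs. (3) and (5)–(7), the following Python/NumPy sketch (not part of the original paper; the subcarrier count and CFO value are arbitrary choices, and the channel, STO and noise are omitted) modulates random QPSK symbols onto N subcarriers, applies a fractional CFO in the time domain, and measures the resulting ICI:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 128                       # number of subcarriers (assumed value)
xi = 0.15                     # normalized CFO (fraction of the subcarrier spacing)

# QPSK symbols on all subcarriers
X = (rng.choice([-1, 1], N) + 1j * rng.choice([-1, 1], N)) / np.sqrt(2)

# Discrete-time OFDM symbol, Eq. (3) (the IFFT implements the sum over k)
x = np.fft.ifft(X) * N

# Apply the CFO in the time domain (ideal channel, no noise, delta = 0), Eq. (5)
n = np.arange(N)
y = x * np.exp(1j * 2 * np.pi * xi * n / N) / N

# Demodulate: without CFO, Y would equal X exactly
Y = np.fft.fft(y)

# Common attenuation/rotation factor of Eq. (6); the residual Y - X*alpha is the ICI term I_k
alpha = np.mean(np.exp(1j * 2 * np.pi * xi * n / N))
ici_power = np.mean(np.abs(Y - X * alpha) ** 2)
print("mean ICI power per subcarrier: %.4f" % ici_power)
```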

In order to evaluate the statistical properties of the ICI, some further assumptions are necessary. Specifically, it will be assumed that $E[I_k] = 0$ and $E[X_k X_l^*] = |X|^2 \delta_{lk}$, i.e., the modulation values have zero mean and are uncorrelated.

2.2 MIMO System Model

MIMO is a wireless technology that, in combination with OFDM, yields MIMO-OFDM, a widely used interface solution for next-generation wireless networks. Multiple transmit and receive antennas are present on both sides of the link in the MIMO system model [12]. Antenna diversity is especially effective at mitigating multipath fading (Fig. 2). The main drawback of time- and frequency-domain diversity schemes is bandwidth wastage; spatial diversity instead minimizes the fading effect at transmission and reception by simultaneously using multiple physically separated antennas, ideally separated by one-half or more of their wavelength.


Fig. 2. Spatial diversity using the MIMO system

2.3 CFO Estimation Technique Using Pilot Tones

Pilot tones can be inserted in the frequency domain and transmitted in every OFDM symbol for CFO tracking [8]. The technique proceeds in the following steps:

Step 1: The two received OFDM symbols, $y_l[n]$ and $y_{l+D}[n]$, are saved in memory after synchronization ($D$ is the number of repetitive patterns).
Step 2: The signals are transformed into $\{Y_l[k]\}_{k=0}^{N-1}$ and $\{Y_{l+D}[k]\}_{k=0}^{N-1}$ via the FFT, from which the pilot tones are extracted.
Step 3: After estimating the CFO from the pilot tones in the frequency domain, the signal is compensated with the estimated CFO in the time domain.

In this process, two different estimation modes are implemented: acquisition and tracking. In the acquisition mode, a large range of CFOs, including an integer CFO, is estimated. In the tracking mode, only the fine CFO is estimated. The integer CFO is estimated by the equation below:

$$\hat{\xi}_{acq} = \frac{1}{2\pi T_{sub}} \max_{\xi} \left\{ \left| \sum_{j=0}^{L-1} Y_{l+D}[p[j], \xi]\; Y_l^{*}[p[j], \xi]\; X_{l+D}^{*}[p[j]]\; X_l[p[j]] \right| \right\} \qquad (8)$$
where $L$, $p[j]$, and $X_l[p[j]]$ denote the number of pilot tones, the location of the $j$th pilot tone, and the pilot tone located at $p[j]$ in the frequency domain during the $l$th symbol period, respectively. Meanwhile, the fine CFO is estimated by
$$\hat{\xi}_f = \frac{1}{2\pi T_{sub} D} \arg \left\{ \sum_{j=1}^{L-1} Y_{l+D}[p[j], \hat{\xi}_{acq}]\; Y_l^{*}[p[j], \hat{\xi}_{acq}]\; X_{l+D}^{*}[p[j]]\; X_l[p[j]] \right\} \qquad (9)$$
In the acquisition mode, $\hat{\xi}_{acq}$ and $\hat{\xi}_f$ are estimated and the CFO is compensated by their sum. In the tracking mode, only $\hat{\xi}_f$ is estimated and compensated.
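As an illustration of the two-mode estimator of Eqs. (8)–(9), the following Python/NumPy sketch (not taken from the paper; the FFT size, pilot pattern, symbol distance D and CFO value are assumptions, and the CP, channel and noise are omitted) performs a coarse search over integer CFO candidates and then extracts the fine CFO from the phase of the pilot correlation:

```python
import numpy as np

rng = np.random.default_rng(1)
N, D = 128, 1                 # FFT size and symbol distance (assumed values)
pilots = np.arange(0, N, 8)   # pilot locations p[j] (assumed pattern)
xi_true = 3.27                # true normalized CFO: integer part 3, fractional 0.27

# Known pilot symbols, identical in symbols l and l+D for simplicity
Xp = np.exp(1j * 2 * np.pi * rng.integers(0, 4, pilots.size) / 4)
X = np.zeros(N, complex); X[pilots] = Xp

def tx_symbol(l):
    """Time-domain samples of OFDM symbol l with a continuous CFO rotation."""
    x = np.fft.ifft(X)
    n = np.arange(N) + l * N                # global sample index
    return x * np.exp(1j * 2 * np.pi * xi_true * n / N)

y_l, y_lD = tx_symbol(0), tx_symbol(D)

def metric(y0, y1, trial):
    """Pilot correlation of Eqs. (8)/(9) after trial compensation by `trial`."""
    n = np.arange(N)
    c = np.exp(-1j * 2 * np.pi * trial * n / N)
    Y0 = np.fft.fft(y0 * c)[pilots]
    Y1 = np.fft.fft(y1 * c * np.exp(-1j * 2 * np.pi * trial * D))[pilots]
    return np.sum(Y1 * np.conj(Y0) * np.conj(Xp) * Xp)

# Acquisition: coarse search over integer CFO candidates, cf. Eq. (8)
cands = np.arange(-8, 9)
xi_acq = cands[np.argmax([abs(metric(y_l, y_lD, c)) for c in cands])]

# Tracking: fine CFO from the phase of the correlation, cf. Eq. (9)
xi_fine = np.angle(metric(y_l, y_lD, xi_acq)) / (2 * np.pi * D)

print("estimated CFO:", xi_acq + xi_fine, " true CFO:", xi_true)
```

Here the estimate is normalized to the subcarrier spacing; multiplying by $1/T_{sub}$ would give the offset in Hz as in Eqs. (8)–(9).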

3 Simulation Results

In this section, we use the pilot tones method to estimate and compensate for the CFO in mmWave MIMO-OFDM systems at 28 GHz and 38 GHz. The obtained results illustrate the BER performance for various modulation techniques, namely BPSK, QPSK, and 8PSK, with the Eb/N0 at the receiver ranging from 0 to 20 dB. The simulated MIMO-OFDM systems are configured with 2Tx-2Rx and 2Tx-4Rx, while the speed of the user equipment is 25 m/s (90 km/h). All simulations are carried out in MATLAB. The common parameters of the OFDM system are shown in Table 1.

Table 1. OFDM parameters

Parameters                  Specification
FFT size                    128
Channel                     Rayleigh fading
Modulation                  CP-OFDM
Guard interval percentage   1/4
Subcarriers                 128
Pilot space                 1
Subcarrier spacing          30 kHz

First, we simulate the MIMO-OFDM system with 2Tx-2Rx. Figure 3 shows the BER versus Eb/N0 results for the BPSK, QPSK, and 8PSK modulation techniques at 28 GHz and 38 GHz, respectively. It is observed that the system performance degrades as the modulation order increases. Figure 3a shows that the system reaches a BER of $10^{-2}$ at Eb/N0 = 15 dB for BPSK and about 19 dB for QPSK. Figure 3b shows that a BER of $10^{-2}$ is reached at Eb/N0 = 16 dB for BPSK and about 20 dB for QPSK. In both cases, the system cannot reach a BER of $10^{-2}$ for 8PSK. Furthermore, comparing Fig. 3a and Fig. 3b, it can be seen that the BER at 38 GHz is slightly higher than that obtained at 28 GHz.

Next, we simulate the 2 × 4 MIMO-OFDM system with the same modulation techniques. Comparing the results in Fig. 3 and Fig. 4, we can see that the BER performance of the 2 × 4 MIMO-OFDM system is significantly improved. In Fig. 4, the BER at both frequencies decreases quickly with Eb/N0. Again, BPSK provides better performance than the other modulation techniques, and the BER performance at 28 GHz is better than at 38 GHz. The 28 GHz system reaches a BER of $10^{-5}$ at Eb/N0 = 16 dB for BPSK and 20 dB for QPSK. The BER at 38 GHz drops to $10^{-4}$ at Eb/N0 = 12 dB for BPSK and 18 dB for QPSK. For 8PSK, the system reaches a BER of $10^{-2}$ at Eb/N0 = 12 dB for 28 GHz and 16 dB for 38 GHz.


Fig. 3. BER performance for the MIMO-OFDM 2 × 2 system with different frequencies. a) 28 GHz; b) 38 GHz

Fig. 4. BER performance for the MIMO-OFDM 2 × 4 system with different frequencies. a) 28 GHz; b) 38 GHz

4 Conclusion

In this paper, we present the performance of mmWave MIMO-OFDM systems after applying the frequency-domain CFO estimation technique, considering different modulation techniques with CP-OFDM. We observe that, under the Doppler effect, BPSK modulation is less sensitive to CFO than the other modulation techniques. The CFO depends on the carrier frequency, so the BER performance worsens as the CFO increases. Moreover, mmWave MIMO-OFDM systems achieve the best BER performance when the number of receive antennas is greater than the number of transmit antennas.


Acknowledgment. Dinh Van Linh was funded by Vingroup JSC and supported by the Master, Ph.D. Scholarship Programme of Vingroup Innovation Foundation (VINIF), Institute of Big Data, code VINIF.2021.TS.144.

References
1. Chittimoju, G., Yalavarthi, U.D.: A comprehensive review on millimeter waves applications and antennas. In: International Conference of Modern Applications on Information and Communication Technology (ICMAICT), October 2020
2. Qamar, F., Siddiqui, M.H.S., Dimyati, K., Noordin, K.A.B., Majed, M.B.: Channel characterization of 28 and 38 GHz MM-wave frequency band spectrum for the future 5G network. In: 2017 IEEE 15th Student Conference on Research and Development (SCOReD), Malaysia (2017)
3. Gaba, G.S., Kansal, L., Tubbal, F.E.M.M., Abulgasem, S.: Performance analysis of OFDM system augmented with SC diversity combining technique in presence of CFO. In: 12th International Conference on Telecommunication Systems, Services and Applications (TSSA), October 2018
4. Cai, Y., Qin, Z., Cui, F., Li, G.Y., McCann, J.A.: Modulation and multiple access for 5G networks. IEEE Commun. Surv. Tutor. 20(1), 629–646 (2018)
5. Chang, G.: Kalman filter with both adaptivity and robustness. J. Process Control 24(3), 81–87 (2014)
6. Van Linh, D., Halunga, S.V., Vulpe, A.: Evaluation of coded OFDM systems using a different type of codes. In: 23rd Telecommunication Forum TELFOR (2015)
7. Nishad, P.K., Singh, P.: Carrier frequency offset estimation in OFDM systems. In: IEEE Conference on Information and Communication Technology, India, pp. 885–889 (2014)
8. Patel, M., Patel, N., Paliwal, A.: Performance analysis of different diversity combining techniques with MIMO systems. Int. J. Adv. Res. Electr. Electron. Instrum. Eng. 4(12), 9411–9418 (2015)
9. Kumar, N., Singh, A.P.: Various diversity combining techniques with LDPC codes in MIMO-OFDM. Int. J. Sci. Res. 3(5), 773–776 (2014)
10. Cho, Y.S., Kim, J., Yang, W.Y., Kang, C.G.: MIMO-OFDM Wireless Communications with MATLAB, 1st edn. Wiley, Hoboken (2010)
11. Moose, P.H.: A technique for orthogonal frequency division multiplexing frequency offset correction. IEEE Trans. Commun. 42(10), 2908–2914 (1994)
12. Shrivastava, N.: Diversity schemes for wireless communication – a short review. J. Theor. Appl. Inf. Technol. 15(2), 134–143 (2010)

Bayesian Optimization Based on Simulation Conditionally to Subvariety

Frédéric Dambreville(B)

ONERA/DTIS, Université Paris Saclay, 91123 Palaiseau Cedex, France
[email protected]

Abstract. In a first step, the paper presents an accurate method for sampling a random vector conditionally to a subvariety within a box. In a second step, it presents how to use such simulations in order to address black-box optimizations of some kind: the criterion function depends on a control parameter to be optimized and on a model parameter which is unknown but is priorly characterized as a random vector. The approach is versatile in the choice of the function to be optimized and in the choice of the control parameter, its nature and its constraints.

Keywords: Rare event simulation · Bayesian optimization · Subvariety · Interval analysis · Black-box optimization · Cross-entropy method

1 Introduction

Black-box optimization concerns a type of optimization problem where, on the one hand, the function to be optimized is not completely formalized and, on the other hand, a request to an evaluator of this function is generally particularly expensive. Accordingly, an inherent goal of such optimization is to save as much as possible on the number of candidate solutions to be evaluated. In the literature, a main and seminal contribution has been made by Jones, Schonlau and Welch, who proposed in [7] the efficient global optimization method (EGO) for addressing such problems. The idea is to use a surrogate model in the form of a Gaussian process with correlation depending typically on spatial distance (kriging). The general idea of the method is to alternate phases of re-estimation of the model with phases of optimization and evaluation. A fundamental ingredient is to re-estimate the model law conditionally to the past evaluations. This task is made easy by the use of a Gaussian process, but at the cost of some implementation constraints, in particular in terms of the dimensionality of the function. Work has been done on the efficiency of the surrogate models in order to improve performance [5]. EGO is actually related to the domain of Bayesian optimization [9,10,13], which is more general in the sense that it does not require the Gaussian hypothesis on the model randomness. In this paper, we address black-box optimization problems in the general context of Bayesian optimization.


More precisely, our paper deals with a family of problems where a known (nonlinear) function to be maximized depends on a model noise (known as a random vector), which is to be estimated and whose realization is unknown, and on a decision parameter to be optimized. The objective is first of all to optimize the decision parameter and secondarily (as an instrumental purpose only) to sufficiently estimate the noise parameter. To this end, a costly evaluator of the function, which has knowledge of the actual value of the model noise, can be requested. We have already worked on this kind of problem [4]. However, we faced great difficulty in computing the conditional laws necessary to the method. The issue here is to sample a random vector conditionally to a subvariety. This is a special case of rare event simulation. In the survey [11], Morio, Balesdent et al. have evaluated the advantages and drawbacks of various rare event sampling methods. Many methods work well when the rare event is unimodal or moderately multimodal. In the cross-entropy method (CE) [12], for example, multimodality has to be managed by the sampling law family. It is possible [4] to sample a moderately multimodal rare event by a combination of EM algorithms and cross-entropy. But sampling conditionally to a subvariety is a strongly multimodal problem. Parametric methods (like CE) are not good candidates. Nonparametric approaches [2,11] need an important simulation budget, and additional studies are needed in order to take the subvariety geometry into account. This article promotes in Sect. 2 an alternative approach based on a dichotomous exploration in order to sample a random vector (with moderate dimension) conditionally to a subvariety characterized by observation constraints. By design, the approach is accurate, produces independent samples and avoids the phenomenon of sample impoverishment. In Sect. 3, this simulation approach is implemented within a Bayesian optimization algorithm in order to handle black-box optimization. Section 4 presents some sampling cases and some black-box optimization tests. Section 5 concludes.

2 Simulation Conditionally to Subvariety

The section presents a dichotomous approach for sampling a random vector conditionally to a subvariety. Some tools and formalism of interval analysis [1,6] are instrumental here, and we start by a short presentation of these concepts. It is not the purpose here to perform a detailed introduction on the topic.

2.1 Elements on Interval Analysis

Notations. The following notations are used in the paper:
– $[x] \triangleq [x^-, x^+]$ is an interval; $[\mathbf{x}] \triangleq \prod_{k=1}^{n}[x_k]$ is a box; $[\mathbf{x}[\ \triangleq \prod_{k=1}^{n}[x_k^-, x_k^+[$ is a half-open box.
– $[\mathbb{R}^n] \triangleq \{\prod_{k=1}^{n}[x_k] : \forall k,\ [x_k] \in [\mathbb{R}]\}$ is the set of box subsets of $\mathbb{R}^n$.
– $\rho([x]) = x^+ - x^-$ and $\rho([\mathbf{x}]) = \max_{1 \le k \le n} \rho([x_k])$ are the lengths of $[x]$ and $[\mathbf{x}]$.
– $g : \mathbb{R} \to \mathbb{R}$, $g_j : \mathbb{R}^k \to \mathbb{R}$, $h : \mathbb{R}^k \to \mathbb{R}^j$ are (multivariate) real functions,


– $[g] : [\mathbb{R}] \to [\mathbb{R}]$, $[g_j] : [\mathbb{R}^k] \to [\mathbb{R}]$, $[h] : [\mathbb{R}^k] \to [\mathbb{R}^j]$ are interval functions. One should not confuse the notations $[g]([x])$ and $g([x])$: $g([x]) = \{g(x) : x \in [x]\}$ is not $[g]([x])$ and is not even necessarily a box.

Operators and Functions Applied on Intervals. We assume in this paper that $g$ and $[g]$ are related by the following properties:
– $g([x]) \subset [g]([x])$ for any $[x] \in [\mathbb{R}^n]$. In particular, $[g]$ is minimal when $[g]([x])$ is the minimal box containing $g([x])$ as a subset for all $[x] \in [\mathbb{R}^n]$.
– $[g]([x]) \subset [g]([y])$ when $[x] \subset [y]$.
– If $\rho([x])$ vanishes, then $\rho([g]([x]))$ vanishes:
$$\rho\big(g([x])\big) \xrightarrow[\rho([x]) \to 0]{} 0 \qquad (1)$$

These properties express that $[g]$ implies a bound on the error propagated by $g$, and this bound has a good convergence behavior with regard to the error. These properties are typically achieved by constructing the interval functions from reference functions and operators. It is quite easy to define interval functions and operators from primitive functions and operators:

1. Reference functions, $g \in \{\ln, \exp, \sin, \cos, \cdot^n, \ldots\}$, are continuous on $\mathbb{R}$, so that $g([x]) \in [\mathbb{R}]$ for all $[x] \in [\mathbb{R}]$. Then, it is optimal to set $[g]([x]) = g([x])$ for all $[x] \in [\mathbb{R}]$. For example:
$$[\ln]([x]) \triangleq [\ln(x^-), \ln(x^+)] \quad \text{and} \quad [x]^{[n]} \triangleq [x]^n = \begin{cases} [(x^-)^n, (x^+)^n] & \text{for } n > 0, \\ [(x^+)^n, (x^-)^n] & \text{for } n < 0. \end{cases}$$
2. Reference operators $+,\ \cdot,\ -,\ /$ are defined in the same way. For example:
$$[x] + [y] \triangleq [x^- + y^-,\ x^+ + y^+] \quad \text{and} \quad [x] - [y] \triangleq [x^- - y^+,\ x^+ - y^-].$$

Now, consider the example of the function $g : \theta \mapsto \cos^2\theta + \sin^2\theta$. By using the functions $\cos$, $\sin$, $\cdot^2$ and $+$, one may define $[g]([\theta]) = ([\cos]([\theta]))^2 + ([\sin]([\theta]))^2$. By this definition, $[g]$ is not minimal, since $[g] \neq [1,1]$ while $\cos^2 + \sin^2 = 1$. In particular, we compute $[g]\big([0, \frac{\pi}{2}]\big) = [0, 2]$, which is a bad error bound on the theoretical value 1. On the other hand, we compute $[g]\big([-\frac{1}{10}, \frac{1}{10}]\big) \simeq [0.99, 1.01]$, which is a tight error bound on 1. This example confirms that $[g]([\theta])$ has a good behavior for small boxes $[\theta]$. Although the definition of $[g]$ by means of reference functions and operators is not minimal, this approach is preferred since it is generic, systematic, and based on already implemented reference functions. It provides a way to construct $[g]$ automatically without any specific knowledge.
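As a rough illustration of such inclusion functions (not from the paper; a minimal sketch that represents intervals as (lower, upper) pairs and assumes the enclosures below are only valid on the stated ranges), the following Python code builds $[g]$ for $g(\theta) = \cos^2\theta + \sin^2\theta$ from interval versions of cos, sin, squaring and addition, and reproduces the loose bound on $[0, \pi/2]$ and the tight bound on $[-0.1, 0.1]$:

```python
import math

def i_add(a, b):                     # [x] + [y]
    return (a[0] + b[0], a[1] + b[1])

def i_sqr(a):                        # [x]^2, valid for intervals possibly containing 0
    lo, hi = a
    if lo >= 0:
        return (lo * lo, hi * hi)
    if hi <= 0:
        return (hi * hi, lo * lo)
    return (0.0, max(lo * lo, hi * hi))

def i_sin(a):                        # enclosure of sin, assuming [x] within [-pi/2, pi/2]
    return (math.sin(a[0]), math.sin(a[1]))

def i_cos(a):                        # enclosure of cos, assuming [x] within [-pi, pi]
    lo, hi = a
    upper = 1.0 if lo <= 0.0 <= hi else math.cos(min(abs(lo), abs(hi)))
    return (math.cos(max(abs(lo), abs(hi))), upper)

def g_box(theta):                    # [g]([theta]) = ([cos]([theta]))^2 + ([sin]([theta]))^2
    return i_add(i_sqr(i_cos(theta)), i_sqr(i_sin(theta)))

print(g_box((0.0, math.pi / 2)))     # roughly (0, 2): loose bound on the true value 1
print(g_box((-0.1, 0.1)))            # roughly (0.99, 1.01): tight bound for a small box
```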

2.2 Formal Description of the Simulation Problem

It is given a random vector $X$ defined on $\mathbb{R}^n$ and characterized by a bounded density $f_X$. The cumulative distribution function $F_X$ of $X$, defined by $F_X(x) = P(X \le x) = \int_{y \le x} f_X(y)\, \mu(dy)$ for all $x \in \mathbb{R}^n$, is assumed to be easily computable ($\mu$ is the Borel measure). Also defined are a box $[b] \in [\mathbb{R}^n]$, a small box $[\epsilon] \in [\mathbb{R}^m]$ such that $\rho([\epsilon]) \ll 1$, a function $g : [b] \to \mathbb{R}^m$ defined by composing reference functions and operators, and $[g]$ derived from $g$ by composing the related interval functions and operators of reference. The section describes an algorithm for sampling the conditional vector:
$$[X \mid g(X) \in [\epsilon]] \triangleq [X \mid X \in [b]\ \&\ g(X) \in [\epsilon]] \qquad (2)$$

When the space dimension $n$ and the constraint dimension $m$ increase, the event $[g(X) \in [\epsilon]]$ has very low probability. We are dealing here with a particular case of rare event simulation. As $[g(X) \in [\epsilon]]$ approximates a subvariety, it is foreseeable that the conditional random vector $[X \mid g(X) \in [\epsilon]]$ is strongly multimodal. For example, too many Gaussian laws are necessary in order to approximate a law conditionally to a thin spherical layer as a mixture. Common methods [11] for rare event simulation are not efficient on strongly multimodal laws. An alternative dichotomous approach is now presented.

2.3 A Naive Dichotomous Approach for Sampling

We point out that it is easy to compute $P(X \in [y])$ or to sample $[X \mid X \in [y]]$ when $F_X$ is available. Thus, these features are taken for granted in this paper. Indeed, it is well known that $P(X \in [y])$ is literally computable from $F_X$:
$$P(X \in [y]) = \sum_{\sigma \in \{-1,+1\}^n} F_X\!\left( \left( y_k^{\mathrm{sgn}(\sigma_k)} \right)_{1 \le k \le n} \right) \prod_{k=1}^{n} \sigma_k, \qquad (3)$$

where $\mathrm{sgn}(1) \triangleq +$ and $\mathrm{sgn}(-1) \triangleq -$. When the components of $X$ are jointly independent, the computation is even easier. Sampling $[X \mid X \in [y]]$ is also easily done by the following algorithm. The principle is to iteratively apply the inverse transform sampling method to $F_i$, the marginal in $x_i$ of the cumulative function conditionally to the already sampled components $x_1, \cdots, x_{i-1}$:

1: for $i \leftarrow 1$ to $n$ do
2:   $F_i \leftarrow F_X(x_1, \cdots, x_{i-1}, \cdot, \infty, \cdots, \infty)$
3:   $\theta \leftarrow \mathrm{RandUniform}([F_i(y_i^-), F_i(y_i^+)])$
4:   $x_i \leftarrow F_i^{-1}(\theta)$
5: end
6: return $x_{1:n}$

Now, we present a dichotomous approach, which will sample $[X \mid g(X) \in [\epsilon]]$ by iteratively cutting boxes (starting from $[b]$) and randomly selecting branches until the condition is fulfilled or becomes irrelevant. A hierarchy of cuts is generated by the dichotomous processes of the following algorithms.

Definition 1. Let $[x] \in [\mathbb{R}^n]$ be a box. Then a pair $([l], [r]) \in [\mathbb{R}^n]^2$ is a cut of $[x]$ if $[l[\ \cap\ [r[\ = \emptyset$ and $[l[\ \cup\ [r[\ = [x[$.
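A minimal Python sketch of this conditional box sampler (not from the paper), assuming the components of X are independent standard normals so that each marginal $F_i$ is simply the one-dimensional normal CDF and Eq. (3) reduces to a product:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def sample_in_box(lo, hi, size=1):
    """Sample X ~ N(0, I) conditionally to X in the box [lo, hi]
    (independent components, so inverse transform applies coordinate-wise)."""
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    u = rng.uniform(norm.cdf(lo), norm.cdf(hi), size=(size, lo.size))
    return norm.ppf(u)            # F_i^{-1}(theta) for each coordinate

def prob_in_box(lo, hi):
    """P(X in [lo, hi]) for independent components: product of 1-D probabilities."""
    return np.prod(norm.cdf(hi) - norm.cdf(lo))

box_lo, box_hi = np.array([0.5, -1.0]), np.array([1.5, 0.0])
print(prob_in_box(box_lo, box_hi))          # probability of the box
print(sample_in_box(box_lo, box_hi, 3))     # three samples inside the box
```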


Algorithm 1 is introduced in [3] and is intended to approximate $[X \mid g(X) \in [\epsilon]]$ by means of a particle cloud $(y_k, w_k)_{1:N}$. The weight $\omega_{[x]}$ is a fundamental parameter, which should verify $\omega_{[x]} \approx P(X \in [x]\ \&\ g(X) \in [\epsilon])$. The algorithm starts by setting $[x_0] = [b]$ and then iterates (for loop) the same sampling process until $[x_j]$ is sufficiently small, i.e. $\rho([x_j]) \le r$, or checks $[g]([x_j]) \subset [\epsilon]$ (this constraint is stronger than $g([x_j]) \subset [\epsilon]$ but is much easier to compute).

Algorithm 1: Based on a predictive weight
1: Function Sampling $[X \mid g(X) \in [\epsilon]]$
   input: $r$, $g$, $\omega$, $N$;  output: $(y_k, w_k)_{1:N}$
2: for $k \leftarrow 1$ to $N$ do
3:   $([x_0], \pi_k, j) \leftarrow ([b], 1, 0)$
4:   while $\rho([x_j]) > r$ and $[g]([x_j]) \not\subset [\epsilon]$ do
5:     $([l_{j+1}], [r_{j+1}]) \leftarrow \mathrm{Cut}([x_j])$
6:     $j \leftarrow j + 1$
7:     $[x_j] \leftarrow \mathrm{Bern}(([l_j], \omega_{[l_j]}), ([r_j], \omega_{[r_j]}))$
8:     $\pi_k \leftarrow \frac{\omega_{[x_j]}}{\omega_{[l_j]} + \omega_{[r_j]}}\, \pi_k$
9:   end
10:  $w_k \leftarrow P(X \in [x_j])\, \delta_{[g]([x_j]) \subset [\epsilon]}\, /\, \pi_k$
11:  Build $y_k$ by sampling $[X \mid X \in [x_j]]$
12: end
13: end

The sampling process consists of the following successive steps:

– Cut $[x_j]$ by means of $\mathrm{Cut}([x_j])$. The cutting function $\mathrm{Cut}$ is designed so as to ensure that $\rho([x_j])$ vanishes along the iterations,
– Select randomly one box of the cut $([l_j], [r_j])$ in proportion to their predictive weights, by means of the Bernoulli process $\mathrm{Bern}(([l_j], \omega_{[l_j]}), ([r_j], \omega_{[r_j]}))$,
– Increment $j$ and set $[x_j]$ equal to the selected box,
– Update $\pi_k$, the processed probability of $[x_j]$ with regard to the Bernoulli sequence.

After the last iteration $j = J$, reject the cases such that $[g]([x_J]) \not\subset [\epsilon]$; otherwise set $w_k = P(X \in [x_J])/\pi_k$. The box particle $([x_J], w_k)$ is then an unbiased estimator of $P([x_J])$. The particle $(y_k, w_k)$ is obtained by sampling $y_k$ from $[X \mid X \in [x_J]]$. In [3], it has been shown that the hypothesis $\omega_{[x]} = P(X \in [x]\ \&\ g(X) \in [\epsilon])$ implies $w_k = P(g(X) \in [\epsilon])$, the probability of the rare event. Then $w_k$ is constant, which ensures that the particle cloud $(y_k, w_k)_{1:N}$ is a good approximation of $[X \mid g(X) \in [\epsilon]]$. Actually, this is the best possible configuration. However, $P(g(X) \in [\epsilon])$ and a fortiori $P(X \in [x]\ \&\ g(X) \in [\epsilon])$ are unknown. So it is generally impossible to ensure that $\omega_{[x]} = P(X \in [x]\ \&\ g(X) \in [\epsilon])$. But then the accumulated errors in the computation of $\pi_k$ explode with the dimension, which implies dramatically uneven particle weights $w_k$. The resulting weighted particle cloud $(y_k, w_k)$ is then useless.
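For concreteness, here is a small Python sketch of the naive dichotomous sampler of Algorithm 1 (a simplified illustration written for this text, not the author's Rust code), for X uniform on [−2, 2]² and g(x) = ‖x‖² conditioned to the thin shell [ε] = [0.95, 1.05], using the initial predictive weight μ([g]([x]) ∩ [ε])/μ([g]([x])) · P(X ∈ [x]) mentioned later in the paper; the weight-degeneracy issue discussed next is not addressed here:

```python
import numpy as np

rng = np.random.default_rng(0)
EPS = (0.95, 1.05)                    # observation box [eps] (thin circular shell)
R_MIN = 1e-3                          # minimal box size r

def g_interval(box):
    """Interval extension [g] of g(x) = x1^2 + x2^2 over box = [(lo1,hi1),(lo2,hi2)]."""
    lo = sum(0.0 if a <= 0.0 <= b else min(a*a, b*b) for a, b in box)
    hi = sum(max(a*a, b*b) for a, b in box)
    return lo, hi

def prob(box):
    """P(X in box) for X uniform on [-2, 2]^2."""
    return np.prod([(b - a) / 4.0 for a, b in box])

def weight(box):
    """Predictive weight: overlap of [g](box) with [eps], times P(X in box)."""
    glo, ghi = g_interval(box)
    overlap = max(0.0, min(ghi, EPS[1]) - max(glo, EPS[0]))
    return 0.0 if ghi <= glo else overlap / (ghi - glo) * prob(box)

def cut(box):
    """Split the box into two halves along its longest edge."""
    k = max(range(len(box)), key=lambda i: box[i][1] - box[i][0])
    a, b = box[k]; m = 0.5 * (a + b)
    left, right = list(box), list(box)
    left[k], right[k] = (a, m), (m, b)
    return left, right

def sample_one():
    """One particle: descend the cut tree with weighted Bernoulli choices."""
    box, pi = [(-2.0, 2.0), (-2.0, 2.0)], 1.0
    while max(b - a for a, b in box) > R_MIN:
        glo, ghi = g_interval(box)
        if EPS[0] <= glo and ghi <= EPS[1]:       # [g](box) already inside [eps]
            break
        left, right = cut(box)
        wl, wr = weight(left), weight(right)
        if wl + wr == 0.0:                        # branch incompatible with [eps]
            return None, 0.0
        box, w = (left, wl) if rng.random() < wl / (wl + wr) else (right, wr)
        pi *= w / (wl + wr)
    glo, ghi = g_interval(box)
    if not (EPS[0] <= glo and ghi <= EPS[1]):
        return None, 0.0                          # rejected particle
    y = [rng.uniform(a, b) for a, b in box]       # sample X | X in final box
    return y, prob(box) / pi                      # weight w_k = P(X in box) / pi

results = [sample_one() for _ in range(2000)]
pts = np.array([y for y, w in results if y is not None])
wts = np.array([w for y, w in results if y is not None])
radius = np.sqrt((pts ** 2).sum(axis=1))
print(len(pts), "accepted; weighted mean radius =", np.average(radius, weights=wts))
```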

2.4 Generic Method for Conditional Sampling

From now on, the probabilistic predictive weight satisfies the following property:
$$0 \le \omega_{[x]} \le P(X \in [x]), \quad \text{and} \quad \omega_{[x]} = \begin{cases} 0 & \text{if } [g]([x]) \cap [\epsilon] = \emptyset, \\ P(X \in [x]) & \text{if } [g]([x]) \subset [\epsilon]. \end{cases} \qquad (4)$$


Algorithm 2 is introduced in detail in [3] and is an evolution of Algorithm 1. In addition, it builds a history of cuts, stored in the map cuts, and dynamically computes from this history an improved weighting function, stored in the map omg. In the original presentation, the lines of this algorithm are colored in black and blue: the black lines are inherited from Algorithm 1, while the blue lines are new additions to the previous algorithm. The variable cuts is a dictionary used to register the history of computed cuts. The variable omg is a dictionary which records the predictive weight and its possible updates, when needed. The variable omg($[r_j]$) is set to $\omega_{[r_j]}$ if it has not been initialized yet (line 11, with ifundef); the same is done for omg($[l_j]$) at line 12. When the cuts sequence is done (second while loop), the weighting function is updated by probabilistic additivity (for loop). Together with property (4), this ensures that omg($[x]$) gets closer to $P(X \in [x]\ \&\ g(X) \in [\epsilon])$ as the cuts tree rooted at $[x]$ gets more refined. Algorithm 2 also implements code (line 7) for testing the degeneracy of $\pi_k$: it compares the logarithmic divergence between the weight and the processed sampling probability, and if this divergence exceeds $\sigma$, the sampling loop (second while loop) restarts. As a result, the process adaptively balances depth and breadth exploration.

Algorithm 2: Based on cuts history
1: Function Sampling $[X \mid g(X) \in [\epsilon]]$
   input: $\sigma$, $r$, $g$, $\omega$, $N$;  output: $(y_k, w_k)_{1:N}$
2:   (cuts, omg, $k$) $\leftarrow (\emptyset, \emptyset, 0)$
3:   omg($[b]$) $\leftarrow \omega_{[b]}$
4:   while $k < N$ do
5:     $([x_0], \pi_k, j) \leftarrow ([b], 1, 0)$
6:     while $\rho([x_j]) > r$ and $[g]([x_j]) \not\subset [\epsilon]$ do
7:       if $\left| \log_2 \frac{\mathrm{omg}([b])\, \pi_k}{\mathrm{omg}([x_j])} \right| > \sigma$ goto 20
8:       ifundef cuts($[x_j]$) $\leftarrow \mathrm{Cut}([x_j])$
9:       $([l_{j+1}], [r_{j+1}]) \leftarrow$ cuts($[x_j]$)
10:      $j \leftarrow j + 1$
11:      ifundef omg($[r_j]$) $\leftarrow \omega_{[r_j]}$
12:      ifundef omg($[l_j]$) $\leftarrow \omega_{[l_j]}$
13:      $(\nu_{[l_j]}, \nu_{[r_j]}) \leftarrow$ (omg($[l_j]$), omg($[r_j]$))
14:      $[x_j] \leftarrow \mathrm{Bern}(([l_j], \nu_{[l_j]}), ([r_j], \nu_{[r_j]}))$
15:      $\pi_k \leftarrow \frac{\nu_{[x_j]}}{\nu_{[l_j]} + \nu_{[r_j]}}\, \pi_k$
16:    end
17:    $w_k \leftarrow P(X \in [x_j])\, \delta_{[g]([x_j]) \subset [\epsilon]}\, /\, \pi_k$
18:    Build $y_k$ by sampling $[X \mid X \in [x_j]]$
19:    $k \leftarrow k + 1$
20:    for $i \leftarrow j$ to $1$ do
21:      omg($[x_{i-1}]$) $\leftarrow$ omg($[l_i]$) + omg($[r_i]$)
22:    end
23:  end
24: end

We used
$$\omega_{[x]} = \frac{\mu\big([g]([x]) \cap [\epsilon]\big)}{\mu\big([g]([x])\big)}\, P(X \in [x]),$$
where $\mu$ is the Borel measure, as the initial predictive weight. Compared to [3], we significantly improved the multithreaded implementation of Algorithm 2. In order to achieve that, we used an array of reference-counting pointers to store both the cuts and omg data. The tree relational structure within cuts and omg is encoded using weak references to avoid permanent thread locking. Threads are used for both the sampling process and data cleansing. When a rare data asynchrony is detected, the thread simply restarts.

3 Black-Box Optimization

We proposed in [4] an approach for the black-box optimization of a nonlinear function depending on a model noise; the realization of the noise was the unknown information of the problem. The work was not complete, since we were not able to build a good conditional sampler at that time. Now, we have this sampler.

Formalization. The function $\gamma \mapsto g(\gamma, x^o)$ is to be minimized. The parameter $x^o$ is unknown, but $x^o$ is a realization of a random vector $X$, whose law $F_X$ is known. In order to optimize $\gamma$, we are allowed to request an evaluator for computing $g(\gamma, x^o)$, but each call to this evaluator is costly. The objective is then to solve $\gamma^o \in \arg\min_\gamma g(\gamma, x^o)$ and to save on evaluation requests jointly.

Efficient Global Optimization. EGO has been proposed in [7] for solving such problems. It is a form of Bayesian optimization [9,10,13]. The main idea consists in approximating $g(\gamma, X)$ by a Gaussian process (GP) $\gamma \mapsto G(\gamma)$. Given the measures $g_j = G(\gamma_j)$, the conditional variable $[G(\gamma) \mid \forall j, G(\gamma_j) = g_j]$ is Gaussian and mathematically computable. The expected improvement on $\gamma$ is defined by $EI(\gamma) = E_{G(\gamma) \mid \forall j, G(\gamma_j) = g_j} \max\{m_\gamma - G(\gamma), 0\}$, where $m_\gamma = \min_{1 \le j \le k} g_j$, and is computed mathematically. $EI(\gamma)$ indicates where evaluations are promising and is maximized in order to choose the next best evaluation request.

Algorithm. We took inspiration from EGO, but the random function $g(\cdot, X)$ is used directly instead of a GP approximation. The parameters $\gamma_k$ and the evaluations $g_k$ are optimized by iterating Algorithm 3. As a crucial ingredient, $[X \mid \forall j, g(\gamma_j, X) = g_j]$ is sampled by Algorithm 2. This random vector, conditional to a subvariety, incorporates all the constraints related to the past evaluations.

Algorithm 3: Sampler-based Bayesian optimizer
1: Function Process next measure
   input: $\gamma_j$ and $g_j \triangleq g(\gamma_j, x^o)$ for $1 \le j \le k$;  output: $\gamma_{k+1}$ and $g_{k+1} \triangleq g(\gamma_{k+1}, x^o)$
2:   Make samples of $[X \mid \forall j, g(\gamma_j, X) = g_j]$
3:   Compute $m_\gamma = \min_{1 \le j \le k} g_j$ and $EI(\gamma) \triangleq E_{X \mid \forall j, g(\gamma_j, X) = g_j} \min\{g(\gamma, X), m_\gamma\}$
4:   Compute $\gamma_{k+1} \in \arg\min_\gamma EI(\gamma)$
5:   Compute $g_{k+1} \triangleq g(\gamma_{k+1}, x^o)$
6: end

While EI is defined slightly differently, the overall principle is similar: 1. Generate samples of the posterior random vector $[X \mid \forall j, g(\gamma_j, X) = g_j]$; 2. Build a Monte Carlo approximation of the expected improvement $EI(\gamma)$; 3. Choose the parameter optimizing $EI(\gamma)$ and evaluate it. The implementation of step 4 is not detailed here. There are optimizers that combine steps 4 and 3, based in particular on the stochastic gradient, or even on DCA (Difference of Convex functions Algorithm) [8].
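A schematic Python version of this loop is sketched below (this is not the author's Rust implementation; the toy barycenter problem, the tolerance-based rejection sampler standing in for Algorithm 2, and the grid search standing in for the EI optimizer are all simplifying assumptions made for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
x_true = rng.uniform(-5, 5, size=2)          # hidden model noise realization x^o

def g(gamma, x):                             # criterion known in closed form
    return np.linalg.norm(gamma - x)

def sample_posterior(evals, n=500, tol=0.05):
    """Stand-in for Algorithm 2: draw X | g(gamma_j, X) ~ g_j for all past
    evaluations, here by naive rejection with a small tolerance `tol`."""
    out = []
    while len(out) < n:
        cand = rng.uniform(-5, 5, size=(20000, 2))
        ok = np.ones(len(cand), bool)
        for gam, gj in evals:
            ok &= np.abs(np.linalg.norm(cand - gam, axis=1) - gj) < tol
        out.extend(cand[ok])
    return np.array(out[:n])

def next_gamma(evals):
    """One step of Algorithm 3: minimize the Monte Carlo expected improvement."""
    X = sample_posterior(evals)
    m = min(gj for _, gj in evals)
    grid = np.stack(np.meshgrid(np.linspace(-5, 5, 61),
                                np.linspace(-5, 5, 61)), -1).reshape(-1, 2)
    # EI(gamma) = E[min(g(gamma, X), m)] over the posterior samples
    ei = np.array([np.minimum(np.linalg.norm(X - gam, axis=1), m).mean()
                   for gam in grid])
    return grid[ei.argmin()]

evals = [(np.zeros(2), g(np.zeros(2), x_true))]      # first evaluation at gamma = 0
for _ in range(4):
    gam = next_gamma(evals)
    evals.append((gam, g(gam, x_true)))              # costly evaluation request
    print(gam, evals[-1][1])
```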

4 Examples

4.1 Tests of the Sampler

Simulation Tests. The tests presented here are performed for the sampling Algorithm 2. The algorithm is implemented in the Rust language (www.rust-lang.org) and is processed on 7 threads. The examples have been chosen mathematically simple on purpose, to facilitate analysis. The tests have been achieved with parameters $r = 0.001$ (minimal box size), $M = 5000$ (initial discarded samples), $N = 50000$ (sampled particles) and $\sigma = 10$ (depth and breadth balance). It is assumed that $X$ follows the uniform law on $[b] = [-2, 2]^n$ with $n \in \{2, 3, \cdots, 11\}$.

Case (a): It is defined $g_a(x) = \|x\|^2 = \sum_{j=1}^{n} x_j^2$ and $[\epsilon_a] = [0.95, 1.05]$. Then $g_a^{-1}([\epsilon_a])$ is a hyper-spherical shell.

Case (b): For $n = 11$, it is defined $g_b(x) = [\|x\|^2, x_3, \cdots, x_{11}, \min(|x_1|, |x_2|)]$, and $[\epsilon_b] = [\epsilon_a] \times [z]^9 \times [\alpha]$ with $[z] = [-0.05, 0.05]$ and $[\alpha] = [0, 0.5]$. Then $g_b^{-1}([\epsilon_b])$ is a shell around the circle subsegments $A_1, \ldots, A_4$:
$$A_j = \left\{ \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} \in C \;\middle|\; \arg\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} \in j\frac{\pi}{2} + \left[ -\frac{\pi}{6}, \frac{\pi}{6} \right] \bmod 2\pi \right\}. \qquad (5)$$
Case (a) is used to evaluate the performance of the sampler both in accuracy and in efficiency for different dimensions. Case (b) is used to evaluate the accuracy of the sampler in the case of complex constraints which introduce disjoint components.

Case (a). This case, as well as (b), is mathematically predictable. Histograms derived from the data sampled by Algorithm 2 are confronted to the theoretical values.

Process Statistics. Figure 1 shows some behavioral measurements of the algorithm according to the number of generated samples (0 to 55000). The cases $n = 3, 7, 11$ are considered with the respective colors red, blue and green. The left chart shows how the value $-\log_2(\mathrm{omg}([b]))$ evolves and approximates $-\log_2(P(g_a(X) \in [\epsilon]))$, the information content of the event $[g_a(X) \in [\epsilon]]$. Each curve increases toward this value (line of the same color), but the performance decreases with the dimension. The right chart shows the evolution of the cpu-time per sample and per thread consumed by the process (expressed in seconds). Except for discontinuities caused by intermittent memory allocations, the sampling efficiency increases with the number of generated samples.

Fig. 1. Statistics of case (a) for n = 3, n = 7, n = 11.
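The target values approximated in the left chart of Fig. 1 can be computed in closed form for case (a). The following Python snippet (an illustration written for this text, assuming X uniform on [−2, 2]^n as in the test setting) evaluates the information content of the shell event via the n-ball volume formula:

```python
import math

def shell_info_content(n, lo=0.95, hi=1.05, half_width=2.0):
    """-log2 P(||X||^2 in [lo, hi]) for X uniform on [-half_width, half_width]^n."""
    ball = lambda r: math.pi ** (n / 2) * r ** n / math.gamma(n / 2 + 1)
    p = (ball(math.sqrt(hi)) - ball(math.sqrt(lo))) / (2 * half_width) ** n
    return -math.log2(p)

for n in (3, 7, 11):
    print(n, round(shell_info_content(n), 2))   # information content grows with n
```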


Histograms. For each subcase $n \in \{3, 11\}$, we have computed the radius of all samples and built the associated histograms (upper charts of Fig. 2). For each $n \in \{3, 11\}$ and for all $1 \le i < j \le n$, we have computed the angle of all samples $(x_i, x_j)$ and built the associated histograms. From these $\frac{n(n-1)}{2}$ histograms of each subcase, we have computed the minimal, mean and maximal histograms. The results are shown in blue, green and red, respectively, and provide a hint on the error of the estimation (lower charts of Fig. 2). The histograms conform to the theoretical laws: the angular laws are uniform while the radius laws are increasing (especially for high dimension). The errors should be of the order of 0.02. In comparison, the errors appearing in the angular histograms are quite acceptable, even for the highest dimension.

Fig. 2. Histograms of case (a) (20 divisions).

Fig. 3. Histograms of case (b).

Case (b). On this test, we computed the radial histogram and the angular histogram (there is only one) for the samples. These histograms are presented in Fig. 3. The quality of the histograms is comparable, although slightly weaker than what was observed in case (a). Due to the constraint configuration, the theoretical radius law is uniform, and the theoretical angular law is uniform around each subsegment $A_1, \ldots, A_4$, with a gradual decrease at the borders. The generated histograms actually comply with these properties.

4.2 Black-Box Function Optimization

A Simple Geometric Example. We intend to find the isobarycenter $\gamma = (a, b)$ of 4 unknown points $M_i = (x_i^o, y_i^o) \in [-5, 5]^2$, $i \in \{1, \ldots, 4\}$. The only approach that is possible for us is to test some solutions by requesting a costly measurement; this measurement evaluates the function:
$$g(\gamma, x^o) = \|\gamma - h(x^o)\|_2, \quad \text{where } h(x^o) = \frac{1}{4}\sum_{i=1}^{4} M_i = \frac{1}{4}\sum_{i=1}^{4} (x_i^o, y_i^o). \qquad (6)$$

Our purpose is to optimize $(a, b)$ by minimally requesting evaluations of $g(\gamma, x^o)$.

Geometric Solution. Each measure restricts the solution to a circle. After 2 measures, we usually have to choose between two points, and the solution is found equiprobably at step 3 or 4.

Tests and Results. The points $M_1, \ldots, M_4$ are $(2, -1)$, $(3, 2)$, $(-\frac{3}{2}, 4)$, $(\frac{1}{2}, 3)$. Their barycenter is $(1, 2)$. We used a sampler with $M = 5000$, $N = 10000$ and $[\epsilon] = [-\frac{1}{100}, \frac{1}{100}]^k$. The variable $X$ is considered uniform on $[-5, 5]^8$. The following table presents a typical optimized sequence for the parameter $\gamma_k = (x_k, y_k)$. The optimization of EI is done by the cross-entropy method [12] and is thus near-optimal:

k    1     2      3     4     5     6     7     8     9      10    11    12     13
xk   0.01  −0.41  0.84  0.97  0.99  1.00  0.98  1.02  −3.79  0.98  0.98  −0.39  1.03
yk   0.02  −1.94  2.05  1.99  2.02  1.98  2.02  1.99  6.49   2.00  1.99  5.40   2.00
gk   2.21  4.19   0.16  0.03  0.02  0.02  0.03  0.02  6.57   0.02  0.03  3.67   0.03

In the previous example, the best value for $g_k$ is 0.02. Best values are generally found around 0.02: this is a consequence of the error interval $[\epsilon]$, which is not of zero size.

ko           1    2    3     4     5     6    7    8    9    10   11   12   13
gko ≤ 0.02   0%   0%   26%   40%   10%   9%   2%   3%   3%   3%   1%   1%   1%
gko ≤ 0.03   0%   0%   34%   50%   10%   4%   1%   1%   0%   0%   0%   0%   0%
gko ≤ 0.04   0%   0%   36%   57%   6%    1%   0%   0%   0%   0%   0%   0%   0%
gko ≤ 0.05   0%   0%   38%   58%   4%    0%   0%   0%   0%   0%   0%   0%   0%
gko ≤ 0.07   0%   0%   39%   58%   3%    0%   0%   0%   0%   0%   0%   0%   0%
gko ≤ 0.09   0%   0%   40%   57%   3%    0%   0%   0%   0%   0%   0%   0%   0%


The previous table presents the results for 100 runs. For these runs, the mean cpu time is 2627 s. Various stop conditions, from $g_{k^o} \le 0.02$ to $g_{k^o} \le 0.09$, are considered for these runs, and a percentage is given for each final step value $k^o$. In all cases, we notice that most optimal values are found at steps 2 or 3. The result tends to be balanced between 2 and 3 when the stopping criterion is relaxed.

EGO Method. For the sake of comparison, we apply EGO to our problem. A GP $G$ with exponential covariance $\mathrm{cov}(G(\gamma), G(\gamma')) = \frac{1}{4}\exp\left(-\frac{1}{2}\|\gamma - \gamma'\|_2\right)$ and mean 0 is used. The following table summarizes the results of 100 tests.

g                         0.5   0.2   0.1   0.05  0.02  0.01  0.005  0.002  0.001
% : ∃k, gk ≤ g            91%   74%   69%   64%   55%   46%   42%    29%    14%
worse : min{k : gk ≤ g}   55    78    96    97    97    84    93     93     99
mean : min{k : gk ≤ g}    11.8  19.4  23.1  25.5  30    31.3  37.5   56.3   60.5
best : min{k : gk ≤ g}    1     1     8     8     8     15    15     19     19

Each test implements 100 successive evaluations. For each ceiling value $g$, the table indicates the percentage of tests which succeeded in reaching this ceiling, and then the worst, mean and best minimal indices that achieve it. EGO is outperformed here, but it is able to achieve more refined results if the evaluation budget is relaxed.

Constraints on the Decision Parameters. While our approach requires some regularity of $g$ with regard to the noise $X$, there is no particular condition put on $\gamma$. Actually, it is easy to handle constraints on $\gamma$, and it is even possible to deal with discrete values. In order to enhance the previous test performance, we introduce the additional constraint $\gamma \in \frac{\mathbb{Z}}{100} \times \frac{\mathbb{Z}}{100}$. The process is stopped at step $k^o$ when the optimum is found, i.e. $g_{k^o} = 0$ and $\gamma_{k^o} = (1, 2)$. The following table summarizes the results of 100 runs; cpu gives the average computation time.

ko        3     4     5
%         46    53    1
cpu (s)   1670  2758  2939

Except for the outlier, the optimum is found almost equiprobably after 3 or 4 tries. This is similar to the geometric method. And yet our algorithm does not have any geometric knowledge of the problem and deals with an eight-dimensional model noise.

Decision Space of Higher Dimension. The variable $X$ is uniform on $[-5, 5]^8$ with realization $x^o = (2, -1, 3, 2, -\frac{3}{2}, 4, \frac{1}{2}, 3)$. Points $M_{i,j} = (m_{i,j,k})_{1 \le k \le 28}$ with $1 \le j < i \le 8$ are built from $X$ by setting $m_{i,j,k} = X_i X_j$ if $k = j + \frac{(i-2)(i-1)}{2}$ and $m_{i,j,k} = 0$ otherwise. The function $g(\gamma, X)$ is defined by $g(\gamma, X) = \|\gamma - h(X)\|_2$ where $h(X) = \frac{1}{28}\sum_{i=2}^{8}\sum_{j=1}^{i-1} M_{i,j}$. The problem is thus of much higher dimension. We had some theoretical issues on this example: the Bayesian estimation was unable to resolve some remaining punctual cases for zero-dimensional subvarieties. This difficulty has been overcome by adding exclusion constraints around the already tested cases. The following table gives an example of a run.

k    1     2     3     4     5     6     7     8     9     10    11    12    13    14    15    16
gk   1.02  1.04  1.18  0.77  0.68  0.68  0.61  1.00  0.33  0.12  0.18  0.21  0.09  0.09  0.09  0.09

k    17    18    19    20    21    22    23    24    25    26    27    28    29    30    31    32
gk   0.05  0.10  0.09  0.12  0.12  0.11  0.15  0.05  0.12  0.08  0.11  0.11  0.15  0.08  0.08  0.05

5 Conclusion

In this paper we address Bayesian optimization problems, and especially investigate the conditional simulation issues related to these optimizations. We propose an original dichotomous method for sampling a random vector conditionally to a subvariety. We have shown how it could be applied efficiently to the optimization of an expensive black-box function. The work is promising from an applicative point of view and offers several improvement perspectives. We will particularly investigate some relaxation techniques applied to the subvariety in order to enhance the efficiency of the approach with regard to higher dimensions. This article was not intended to present applications. In future work, we will nevertheless consider applying the method in order to jointly solve the optimal control of a sensor network and the estimation of sensor biases.

References
1. Alefeld, G., Mayer, G.: Interval analysis: theory and applications. J. Comput. Appl. Math. 121(1), 421–464 (2000)
2. Cérou, F., Del Moral, P., Furon, T., Guyader, A.: Sequential Monte Carlo for rare event estimation. Stat. Comput. 22(3), 795–908 (2012)
3. Dambreville, F.: Simulating a random vector conditionally to a subvariety: a generic dichotomous approach. In: SIMULTECH 2021, 11th International Conference on Simulation and Modeling Methodologies, Technologies and Applications, pp. 109–120, July 2021
4. Dambreville, F.: Optimizing a sensor deployment with network constraints computable by costly requests. In: Le Thi, H.A., Pham Dinh, T., Nguyen, N.T. (eds.) Modelling, Computation and Optimization in Information Systems and Management Sciences. AISC, vol. 360, pp. 247–259. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-18167-7_22
5. Hebbal, A., Brevault, L., Balesdent, M., Taibi, E.G., Melab, N.: Efficient global optimization using deep Gaussian processes. In: 2018 IEEE Congress on Evolutionary Computation (CEC), pp. 1–8 (2018)
6. Jaulin, L., Kieffer, M., Didrit, O., Walter, E.: Applied Interval Analysis with Examples in Parameter and State Estimation, Robust Control and Robotics. Springer, London (2001)
7. Jones, D., Schonlau, M., Welch, W.: Efficient global optimization of expensive black-box functions. J. Glob. Optim. 13, 455–492 (1998)
8. Le Thi, H.A., Pham Dinh, T.: DC programming and DCA: thirty years of developments. Math. Program. 169, 5–68 (2018)
9. Mockus, A., Mockus, J., Mockus, L.: Bayesian approach adapting stochastic and heuristic methods of global and discrete optimization. Informatica (Lith. Acad. Sci.) 5, 123–166 (1994)
10. Močkus, J.: On Bayesian methods for seeking the extremum. In: Marchuk, G.I. (ed.) Optimization Techniques 1974. LNCS, vol. 27, pp. 400–404. Springer, Heidelberg (1975). https://doi.org/10.1007/3-540-07165-2_55
11. Morio, J., Balesdent, M., Jacquemart, D., Vergé, C.: A survey of rare event simulation methods for static input-output models. Simul. Model. Pract. Theory 49, 287–304 (2014)
12. Rubinstein, R.Y., Kroese, D.P.: The Cross Entropy Method: A Unified Approach to Combinatorial Optimization, Monte-Carlo Simulation (Information Science and Statistics). Springer-Verlag, New York (2004)
13. Zilinskas, A., Zhigljavsky, A.: Stochastic global optimization: a review on the occasion of 25 years of Informatica. Informatica 27, 229–256 (2016)

Optimal Operation Model of Heat Pump for Multiple Residences

Yusuke Kusunoki, Tetsuya Sato, and Takayuki Shiina(B)

Waseda University, Tokyo, Japan
[email protected], tetsuya [email protected], [email protected]

Abstract. In a world in which global warming is progressing and environmental problems are becoming increasingly severe, the use of energy with a low environmental load has become important. One way to solve this problem is the use of water heaters driven by a heat pump, a solution that has been widely implemented in residential homes. In this study, we endeavored to develop an optimal operation plan for the energy storage equipment in a residence. The proposed model is based on the idea of a smart community and aims to optimize electricity usage across multiple residences. We formulate the problem using a stochastic programming method with integer conditions to model the operation of complex equipment. Multiple scenarios were used to consider the uncertainty in the demand for both power and heat for a residence. The solution that is obtained corresponds to the operation of multiple models. The purpose of this study was to minimize the electricity charge and level the load in terms of the power supply. We aimed to reduce the maximum power consumption and average power demand relative to the separate electricity charges of individual homes and the demand at different times of the day.

Keywords: Heat-pump water heater · Load leveling · Smart community · Stochastic programming

1 Introduction

Introduction

In recent years, global warming has become a serious problem and has increased the awareness of environmental issues. In 2015, the United Nations set sustainable development goals (SDGs) [1] and emphasized the need for a method that would lower the environmental load of energy consumption. In addition, the liberalization of electricity [2] in 2016 has made consumers think about the electricity charges. One approach to reduce the cost of electricity is to use a heat pump to heat water. A water heater driven by a heat pump is a piece of equipment that uses electricity to draw heat from the outside environment and uses that heat to boil water. In this study, we aimed to develop the optimal operation plan for energy storage equipment that supplies power to multiple residences. The model, which is based on an actual hot water supply model, maximizes the amount of c The Author(s), under exclusive license to Springer Nature Switzerland AG 2022  H. A. Le Thi et al. (Eds.): MCO 2021, LNNS 363, pp. 133–144, 2022. https://doi.org/10.1007/978-3-030-92666-3_12

134

Y. Kusunoki et al.

heat stored at 6 a.m to minimize the combined electricity charge across multiple residences. The model to devise the optimal operation plan for the water heater based on a heat pump was formulated using a stochastic planning method. The demand side aims to minimize the electricity charge, and the supply side aims to minimize the peak electricity demand. The use of load leveling among multiple residences lowers the electricity charge per residence compared with that obtained with the existing operation plan for each residence.

2 Previous Studies

2.1 Introduction of Previous Studies

A smart community is a social system that comprehensively manages and controls goods and services such as residences, buildings, and public infrastructure by connecting them with an information network based on a next-generation power grid known as a smart grid. Tokoro and Fukuyama [3] developed a model that can be evaluated quantitatively by considering the mutual influence of the seven fields that constitute the smart community. The use of energy is optimized with the use of renewable energy and by managing the energy consumption. The seven fields that form the smart community are electricity, gas, water, rail, industry, business, and residence. The model of the home takes into account storage batteries, heat-pump water heaters, fuel cells, and combustion water heaters to calculate the amounts of electricity and gas used in the home. In this study, we focus on this model and refer to the relationship between the heat-pump water heater and the combustion water heater in the home model. Tokoro et al. [4] introduced a tool to determine the optimal equipment configuration and optimal operation pattern of a hybrid hot water supply system that minimizes the hot water supply cost with respect to the demand for hot water supply. The system consists of a combination of a heat-pump water heater, combustion water heater, and hot-water storage tank and is economically highly efficient and environmentally friendly. The model in this study includes only the heat-pump water heater and the hot water storage tank and does not make provision for a combustion water heater. The relationship and characteristics of the two devices are referred to, and a heat pump is used to heat the water. The model uses the performance coefficient and efficiency formulas related to the on and off conditions of the water heater. Zhu et al. [5] proposed an optimization model for housing load management on a smart grid using integer linear programming (ILP). In this study, we consider a combined operation plan for the energy storage equipment in multiple residences. To consider the relationship between energy storage devices, we referred to the relationship between smart grids that connect multiple smart meters. Kimata et al. [6] developed an optimization model for the operation plan of energy storage equipment in a residence using the unit commitment problem and demonstrated its usefulness by comparing it with the conventional model. Here, we considered the amount of heat stored at the end of each 24-h period

and the continuity of the operation off state of the heat-pump water heater. We also aimed to achieve further optimization by leveling the load of the power demand in each residence. As a result, the electricity charge and maximum power consumption of each residence are lowered.

2.2 Unit Commitment Problem

We explain the proposed model by referring to the unit commitment problem. This problem is a scheduling problem to obtain the on and off schedules and the amount of power generated by each generator to satisfy the power demand given for each time zone. Shiina [7] presented a stochastic programming model that considers the uncertainty in the power demand by approximating the operation of an actual system, as opposed to the conventional model that provides the power demand as a deterministic value. In this study, the formulation was based on the research of Kimata et al. [6]. The basis of this model is the unit commitment problem. In formulating the model, we refer to stochastic programming, which considers uncertainty, generator on constraints, and off constraints.

3 Problem Description

The proposed model determines the outage state of the heat-pump water heater and minimizes the sum of the electricity charges and the difference between the maximum and minimum demands. We considered the power demand of electrical appliances, excluding residential heat-pump water heaters.

3.1 Notations

The definitions of the symbols in the all-residences model and the individual-residence model described later are as follows.

Variables
u_{ijt}    Status of heat-pump water heater i of residence j at time t
g_{min}    Lower bound on heat storage
θ_j        Maximum power consumption at residence j
B          Difference between maximum and minimum power demand
I^s_{jt}   Initial heat storage amount of residence j at time t in scenario s
G^s_{jt}   Final heat storage amount of residence j at time t in scenario s
Dmin_j     Minimum power demand at residence j
Dmax_j     Maximum power demand at residence j

Sets
I          Set of heat-pump water heaters
T          Set of time periods
S          Set of uncertain scenarios
J          Set of residences

Parameters
f          Basic charge for maximum power consumption
cost_t     Electricity charge at time t
cut_{ij}   Reduction rate of heat storage amount of heat-pump water heater i at residence j
K_{min}    Minimum heat storage rate of heat storage tank
K_{max}    Maximum heat storage rate of heat storage tank
V          Tank capacity
L_{ij}     Minimum running time of heat-pump water heater i at residence j
l_{ij}     Minimum stoppage time of heat-pump water heater i at residence j
C_{min}    Lower limit of heat storage
C_{max}    Upper limit of heat storage
M_{kcal}   Conversion factor
M_{kWh}    Conversion factor
d^s_{jt}   Electricity demand for appliances except the heat-pump water heater at residence j at time t in scenario s
h^s_{jt}   Heat demand at residence j at time t in scenario s
x_{ij}     Power consumption of heat-pump water heater i of residence j
e          Coefficient of performance
AT         Outside air temperature
WT         Water input temperature to tank
BT         Boiling temperature

3.2 Overview of the Model

In the proposed model, the heat-pump water heater is determined after considering the electric power demand of electric appliances, excluding the heat-pump water heater of the residence. The model then minimizes the sum of the electricity charges and the difference between the maximum and minimum demands. The proposed model consists of a residence, a heat-pump water heater, and a hot water storage tank. The relationships between each of these components are shown in Fig. 1.

Fig. 1. Water heater system based on a heat pump


In a residence, there is a demand for electricity and heat. By operating a heat-pump water heater, heat is generated and stored in a hot water storage tank. This tank supplies heat to the residence to meet the heat demand of the residence. At times when the heat-pump water heater is not operating, the heat stored in the hot water storage tank is supplied to the residence. The process described above is performed under the condition that the power consumption and the movement of heat occur within a unit time; the generated heat can be consumed at the time of generation. In addition, in this model, the basic electricity charge is affected by the maximum power consumption. The maximum power consumption is the maximum value of the power demand in a residence, and the relationship between the power demand and the maximum power consumption θ is shown in Fig. 2.

Fig. 2. Relationship between power demand and maximum power consumption θ

3.3 Formulation

We formulated the operation of the all-residences model. In the proposed model, the electricity charges were minimized considering load leveling in the seven residences. In this model, the heat-pump water heater is operated at times when the total power consumption is low; thus, the peak power consumption is not increased.

All Residences Model (The Model Includes All Residences)

$$\min \quad f \sum_{j \in J} \theta_j + \frac{1}{|S|} \sum_{s \in S} \sum_{t \in T} \sum_{j \in J} cost_t \left( d^s_{jt} + \frac{\sum_{i \in I} x_{ij} u_{ijt}}{e} \right) + B \qquad (1)$$

subject to

$$I^s_{jt} + \frac{M_{kWh} \sum_{i \in I} x_{ij} u_{ijt}}{e} = h^s_{jt} + G^s_{jt}, \quad \forall t \in T, \forall s \in S, \forall j \in J \qquad (2)$$
$$\left( 1 - \frac{cut_{ij}}{100} \right) G^s_{jt} = I^s_{j,t+1}, \quad \forall t \in T, \forall i \in I, \forall s \in S, \forall j \in J \qquad (3)$$
$$C_{min} \le I^s_{jt} \le C_{max}, \quad \forall t \in T, \forall s \in S, \forall j \in J \qquad (4)$$
$$C_{min} \le G^s_{jt} \le C_{max}, \quad \forall t \in T, \forall s \in S, \forall j \in J \qquad (5)$$
$$0 \le I^s_{jt} + \frac{M_{kWh} \sum_{i \in I} x_{ij} u_{ijt}}{e} \le M_{kcal} V \big( K_{max} BT + (1 - K_{max}) WT \big), \quad \forall t \in T, \forall s \in S, \forall j \in J \qquad (6)$$
$$d^s_{jt} + \frac{\sum_{i \in I} x_{ij} u_{ijt}}{e} \le \theta_j, \quad \forall t \in T, \forall s \in S, \forall j \in J \qquad (7)$$
$$u_{ijt} - u_{i,j,t-1} \le u_{ij\tau}, \quad t = \min(T - L_{ij} + 1, T), \ldots, T, \ \tau = 0, \ldots, \max(L_{ij} - T + t - 1, 0), \ \forall i \in I, \forall j \in J \qquad (8)$$
$$u_{i,j,t-1} - u_{ijt} \le 1 - u_{ij\tau}, \quad t = \min(T - l_{ij} + 1, T), \ldots, T, \ \tau = 0, \ldots, \max(l_{ij} - T + t - 1, 0), \ \forall i \in I, \forall j \in J \qquad (9)$$
$$C_{min} = M_{kcal} V \big( K_{min} BT + (1 - K_{min}) WT \big) \qquad (10)$$
$$C_{max} = M_{kcal} V \big( K_{max} BT + (1 - K_{max}) WT \big) \qquad (11)$$
$$e = a_1 AT - a_2 WT - a_3 BT + a_4 \qquad (12)$$
$$B = \sum_{j \in J} (Dmax_j - Dmin_j) \qquad (13)$$
$$Dmax_j \ge d^s_{jt} + \frac{\sum_{i \in I} x_{ij} u_{ijt}}{e}, \quad \forall t \in T, \forall s \in S, \forall j \in J \qquad (14)$$
$$Dmin_j \le d^s_{jt} + \frac{\sum_{i \in I} x_{ij} u_{ijt}}{e}, \quad \forall t \in T, \forall s \in S, \forall j \in J \qquad (15)$$
$$Dmax_j, Dmin_j \ge 0, \quad \forall j \in J \qquad (16)$$
$$I^s_{jt}, G^s_{jt} \ge 0 \qquad (17)$$
$$\theta_j \ge 0 \qquad (18)$$
$$u_{ijt} \in \{0, 1\} \qquad (19)$$
$$G^s_{j,24} \ge I^{s+1}_{j,1}, \quad \forall s \in S, \forall j \in J \qquad (20)$$

Objective function (1) is the sum of the electricity charge and the difference between the maximum and minimum demands. Constraint (2) corresponds to the conservation of the amount of heat storage at the end of the period, constraint (3) is the decrease in the amount of heat storage over time, constraint (4) is the capacity constraint of the amount of heat storage at the beginning of the period, constraint (5) is the capacity constraint of the amount of heat storage at the end of the period, and constraint (6) is the heat generation. This is a capacity constraint of the amount of heat storage at that time. Constraint (7) is the definition of θ, and constraints (8), (9) are the operation constraints and stop constraints of the heat-pump water heater. Constraints (10), (11) define the heat storage capacity. Constraint (12) is the definition of the coefficient of performance e, constraint (13) is the definition of B, constraint (14) is the definition of Dmaxj , constraint (15) is the definition of Dminj , and constraint (16) is the non-negativity of maximum and minimum power demand. Constraint (17) is the non-negativity of heat storage at the beginning and end of the period, constraint (18) is the non-negativity of maximum power consumption, constraint (19) is the binary constraint of determinants, and constraint (20) is the continuity of heat storage after 24 h.


The hot water storage tank has capacity restrictions and heat dissipation restrictions. Regarding the capacity constraint, the upper limit $C_{max}$ and the lower limit $C_{min}$ of the amount of heat stored in the hot water storage tank were given. Regarding the heat dissipation constraints, it is necessary to consider the decrease in the amount of heat stored over time. Because the daily heat reduction rate of the hot water storage tank installed in the residence was 10%, the hourly heat reduction rate was approximately 1.1%. Therefore, the amount of heat stored at the end of period $t$, $G^s_t$, and the initial amount of heat stored in period $t+1$, $I^s_{t+1}$, can be expressed by Eq. (3). Tokoro et al. [4] introduced a method to model each water heater required for a water heater system based on a heat pump. The efficiency of the heat-pump water heater changes depending on the outside air temperature $AT$, the temperature $WT$ of the water entering the tank, and the boiling temperature $BT$. Assuming that the time is $t$ and the amount of heat of the hot water is $HW$ [kWh], the power consumption $CH$ of the heat-pump water heater is given by Eq. (21):
$$CH = \frac{HW_t}{COP_H(AT_t, WT_t, BT)} \qquad (21)$$

The term $COP_H$ in Eq. (21) is the coefficient of performance $e$, which represents the efficiency of the heat-pump water heater. The coefficient of performance $e$ is expressed by Eq. (22). The coefficients $a_1$, $a_2$, $a_3$, and $a_4$ are listed in Table 1, as obtained from [4].
$$COP_H = e = a_1 AT - a_2 WT - a_3 BT + a_4 \qquad (22)$$

Table 1. Values of the coefficients a1, a2, a3, a4

a1      a2      a3      a4
0.0598  0.0254  0.0327  5.44
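As a small numerical check of Eqs. (21)–(22) (illustration only; the temperature values below are examples, not taken from the paper), the COP and the electric power needed to deliver a given amount of hot-water heat can be computed as:

```python
def cop(at, wt, bt, a=(0.0598, 0.0254, 0.0327, 5.44)):
    """Coefficient of performance e, Eq. (22), with the Table 1 coefficients."""
    a1, a2, a3, a4 = a
    return a1 * at - a2 * wt - a3 * bt + a4

def heater_power(hw, at, wt, bt):
    """Power consumption CH of the heat pump for heat demand hw [kWh], Eq. (21)."""
    return hw / cop(at, wt, bt)

# Example: 10 C outside air, 15 C inlet water, 65 C boiling temperature
print(cop(10, 15, 65), heater_power(3.0, 10, 15, 65))
```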

In the alternative model, the operation plan is designed to maximize the amount of heat stored at 6 a.m, which closely approximates that of the actual hot water supply system. Load leveling is not employed, and the heat-pump water heaters can generate heat throughout the day. In the formulation of this model, each residence is optimized individually; thus, a distinction is not made among residences, and variables and parameters are not distinguished by the subscript j. The formulation of the individual residence model is as follows:


Individual Residences Model (The Model Considers Individual Residences)

$$\max \quad \frac{g_{min}}{M_{kcal} V\, BT} \qquad (23)$$
$$\text{s.t.} \quad G^s_6 \ge g_{min}, \quad \forall s \in S \qquad (24)$$
$$g_{min} \ge 0 \qquad (25)$$

Objective function (23) is the maximization of the amount of heat stored at the end of the period at 6 a.m. The constraint equations of the proposed model described above are (24), (25), (2), (3), (6)–(12), and (17)–(20). Constraint (24) is the heat storage at 6 a.m in all scenarios, and constraint (25) is a nonnegative constraint.
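To make the structure of these formulations concrete, the following is a deliberately simplified single-residence, single-heater sketch in Python with PuLP (an assumption; the paper's experiments use AMPL with Gurobi). It keeps only the on/off variable, a heat balance in the spirit of constraint (2), tank losses as in constraint (3), and the 24-h continuity of constraint (20); all data values and the sign conventions are invented for illustration:

```python
import pulp

# Toy single-residence instance (all data are invented for illustration)
T = range(24)
cost = [20 if 7 <= t <= 22 else 10 for t in T]   # time-of-use price [yen/kWh]
heat_demand = [0.5 if 6 <= t <= 8 or 19 <= t <= 22 else 0.1 for t in T]  # [kWh_th]
x = 1.0        # electric power drawn by the heater when on [kW]
e = 4.0        # coefficient of performance (Eq. (22) would give this from AT, WT, BT)
cut = 1.1      # hourly heat loss of the tank [%]
C_min, C_max = 0.0, 8.0                          # storage limits [kWh_th]

m = pulp.LpProblem("single_residence_heat_pump", pulp.LpMinimize)
u = pulp.LpVariable.dicts("u", T, cat="Binary")          # heater on/off
I = pulp.LpVariable.dicts("I", T, C_min, C_max)          # storage at start of t
G = pulp.LpVariable.dicts("G", T, C_min, C_max)          # storage at end of t

# Electricity cost of running the heater (other appliance demand is a constant offset)
m += pulp.lpSum(cost[t] * x * u[t] for t in T)

for t in T:
    # Heat balance, cf. constraint (2): storage + generated heat = demand + storage
    m += I[t] + e * x * u[t] == heat_demand[t] + G[t]
    # Tank heat loss, cf. constraint (3), and 24-h continuity, cf. constraint (20)
    m += (1 - cut / 100) * G[t] == I[(t + 1) % 24]

m.solve(pulp.PULP_CBC_CMD(msg=False))
print("schedule:", [int(u[t].value()) for t in T])
print("cost:", pulp.value(m.objective))
```

The multi-residence model additionally couples the residences through the load-leveling terms (θ_j, B) of the objective and constraints (7), (13)–(16).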

4 Numerical Experiments

4.1 Setting Conditions

The experimental environment specifications were as follows: OS—Windows 10 Pro 64 bit, CPU—Core i7-7500U (2.70 GHz), mounted RAM—8.00 GB, modeling language—AMPL, and solver—Gurobi 9.0.3. The experimental data are described next. The scenario representing the uncertainty in the demand for electricity and heat uses data captured from April 2003 to March 2004 for seven houses in Hokuriku (detached residences No. 01, No. 03-07, and No. 09) obtained from the Architectural Institute of Japan [8]. The number of scenarios is the same as the number of days per month, and the probability of a scenario is unweighted. Data from the Architectural Institute of Japan [8] were also used for the electricity demand, heat demand, and temperature of water entering the tank.

4.2 Results

The experimental results are explained in three ways. First, we compare the electricity charges. Fig. 3 shows the difference in the electricity charges between the model that includes all residences and the model that considers individual residences between April 2003 and March 2004 for the seven residences. As shown in Fig. 3, the electricity charges improved for all months and residences. The difference ranges from approximately 100 yen to approximately 900 yen. Considering that this result is achieved per day, the improvement can be considered to be significant. Next, the maximum power consumption was compared. Table 2 lists the difference in the maximum power consumption between the all residences model and individual residence model. The negative values indicate an improvement in the maximum power consumption compared with the individual residence model.


Fig. 3. Difference in the electricity charges between the model that includes all residences and the model that considers individual residences

Table 2. Difference in maximum power consumption [kWh]

       Residence
       1     2     3     4     5     6     7
Apr.  −1.4   0.0   0.0   0.0   0.0   0.0  −0.2
May    0.0   0.0   0.0   0.0   0.0   0.0  −1.2
Jun.   0.0   0.0   0.0   0.0   0.0   0.0   0.0
Jul.  −1.3   0.0   0.0   0.0   0.0   0.0   0.0
Aug.  −0.4  −0.5   0.0   0.0   0.0   0.0  −0.2
Sep.   0.0   0.0   0.0   0.0   0.0   0.0  −0.7
Oct.   0.0   0.0   0.0   0.0   0.0   0.0   0.0
Nov.  −1.4   0.0   0.0   0.0   0.0   0.0   0.0
Dec.  −1.5   0.0   0.0   0.0  −0.1   0.0   0.0
Jan.  −1.6   0.0  −0.7   0.0   0.0   0.0   0.0
Feb.  −1.6   0.0   0.0   0.0   0.0   0.0   0.0
Mar.  −1.6   0.0   0.0   0.0   0.0   0.0   0.0

Finally, we compared the average power demand, i.e., the sum of the power demand for each scenario at a certain time divided by the number of scenarios. Because this demand is included in the objective function, it is directly linked to the minimization of electricity charges. Fig. 4 shows the changes in the average electricity demand in April 2003. Similarly, in all months, the comparison showed that the maximum value of the average electricity demand decreased. In the graph, the hours in which the values change as a result of the model are from 1 a.m to 6 a.m and from 8 p.m to 12 p.m.


Fig. 4. Average electricity demand in April

4.3 Consideration

The experimental results showed that the electricity charges were lowered. There are two possible reasons for this finding. The first is the overall decline in the average electricity demand. This is because heat-pump water heaters are continuously operating to meet the heat demand of residences. Fig. 5 shows the hourly difference in the average electricity demand between the all residences model and the individual residence model. The positive values represent improvements.

Fig. 5. Differences in average power demand between the model that includes all residences and the model that considers individual residences

In Fig. 5, a large change can be seen, especially during the hours from 1 a.m. to 6 a.m. and from 8 p.m. to 12 a.m. The decrease in the average electricity demand during the hours from 8 p.m. to 12 a.m. occurs because the heat-pump water heater is planned to operate when the cost of electricity is low by minimizing the electricity charge in the objective function. In the hours from 1 a.m. to 6 a.m., the all-residences model operates the heaters only as needed; thus, at certain times, the heat-pump water heater does not have to be operated, which is one of the reasons for the decrease in the average electricity demand.


The second reason is the decrease in the maximum power consumption θ. From constraint (7), the maximum power consumption is equivalent to the maximum value of the power demand of the residence in all scenarios. These results indicate that the operation of the heat-pump water heater has been optimized; as a result, the maximum value of electricity demand in the residence has changed, and the maximum power consumption θ has decreased. However, the maximum power consumption changed only partially. This finding could be explained in two possible ways. In case 1, the operating time of the heat-pump water heater did not change, and the maximum value of the power demand did not change in response to the heat demand even after optimization. This situation is illustrated in Fig. 6.

Fig. 6. Case in which the maximum power demand does not change

In case 2, the power demand for electrical appliances other than heat-pump water heaters is the maximum value of the power demand. This situation is illustrated in Fig. 7.

Fig. 7. Case in which d is the maximum power demand

As indicated by the positions of the green circles in Figs. 6 and 7, the maximum power consumption does not change in either case. Thus, the months in which the difference in power consumption is 0, as listed in Table 2, are considered to correspond to these cases. The objective function has a basic charge and a demand-based charge. The basic charge is determined by the maximum power consumption, and the charge according to demand is determined by the average power demand. Both the maximum power consumption and the average power demand decreased, which led to an improvement in the electricity charges. Because the maximum power consumption was reduced, the difference between the upper and lower levels of power demand became smaller, which can be considered to have the effect of load leveling.

5 Conclusion

In this study, we developed a model to optimize the operation plan that considers load leveling in multiple residences. The proposed model formulates the operation of complicated equipment by using the stochastic programming method with integer conditions, which is a mathematical optimization technique. The solution corresponding to the operation was derived from multiple models. Compared with those in the conventional model (individual residence model), which imitates a real water heater system, the electricity charge is improved and minimized. As a result of load leveling among multiple residences, the maximum load could be reduced when the power demand changed. This suggests that the operation plan based on the model developed in this study has a certain advantage. Future tasks include introducing seasonal variations in the temperature data to more closely approximate reality, expanding the restrictions for load leveling by considering various scenarios, and expanding the scale by increasing the number of residences.

References

1. United Nations Information Center. https://www.unic.or.jp/activities/economic social development/sustainable development/2030agenda/. Accessed 9 Jul 2021
2. Agency for Natural Resources and Energy of Japan, Electricity retail liberalization. https://www.enecho.meti.go.jp/category/electricity and gas/electric/electricity liberalization/. Accessed 9 Jul 2021
3. Tokoro, K., Fukuyama, Y.: Development and extension of smart community model for energy efficiency (in Japanese). Commun. Oper. Res. Soc. J. 62, 44–49 (2017)
4. Tokoro, K., Wakamatsu, Y., Hashimoto, K., Sugaya, Y., Oda, S.: Optimization of system configuration and operational parameters of a hybrid water heater system. IEEJ Trans. Electron. Inf. Syst. 134, 1365–1372 (2014)
5. Zhu, Z., Tang, J., Lambotharan, S., Chin, W.H., Fan, Z.: An integer linear programming-based optimization for home demand-side management in smart grid. In: 2012 IEEE PES Innovative Smart Grid Technologies (ISGT), vol. 1, pp. 1–5 (2012)
6. Kimata, S., Shiina, T., Sato, T., Tokoro, K.: Operation planning for heat pumps in a residential building. J. Adv. Mech. Des. Syst. Manuf. 14 (2020)
7. Shiina, T.: Stochastic Programming (in Japanese). Asakura Publishing, Tokyo (2015)
8. Architectural Institute of Japan. Energy consumption database in residential building. https://tkkankyo.eng.niigatau.ac.jp/HP/HP/database/index.htm. Accessed 9 Jul 2021

Revenue Management Problem via Stochastic Programming in the Aviation Industry

Mio Imai, Tetsuya Sato, and Takayuki Shiina(B)

Waseda University, Tokyo, Japan
[email protected], tetsuya [email protected], [email protected]

Abstract. This paper presents an optimization model using stochastic programming to secure seats optimally and maximize the profit of the airline in consideration of overbooking. Airline seat inventory control involves selling the right seats to the right people at the right time. If an airline sells tickets on a first-come, first-serve basis, the seats are likely to be occupied by leisure travelers and late bookers. Business travelers willing to pay a higher fare will subsequently find no seats left, and revenue from such sales will be lost. The airline allows overbooking, accepting more reservations than seats, to minimize losses. While there are various needs that depend on the type of passenger, we propose an optimization model using stochastic programming as a method of maximizing the profit of the airline company by securing seats appropriately and employing the concept of overbooking.

Keywords: Revenue management · Stochastic programming · Overbooking

1 Introduction

Since the late 1970s, following deregulation in the United States, revenue management of airlines began, and each company is permitted to set its own fares. Tickets for the same destination are classified into various classes and subsequently sold. Airlines adjust the price and number of seats in each class to reduce the number of vacant seats at takeoff. When selling tickets on a first-come, first-serve basis, there is a tendency to sell tickets from the low-priced class to ensure that all seats are booked; such tickets are predominantly bought by discounted passengers (mostly leisure travelers) who prefer to reserve low-priced seats at early stages. The company thus struggles to achieve higher-priced sales that it could have made to business passengers. To prevent such losses, it is necessary to secure seats appropriately by employing the concept of overbooking, which refers to the acceptance of reservation numbers that are greater than the number of available seats on the airplane (Takagi [1], Sato, Sawaki [2]). Boer et al. [5] stated that airline seat inventory control involves selling the right seats to the right people at the right time, and they categorized revenue

management models as leg-based or network-based models. In addition, they analyzed network-based mathematical programming models and identified the need to include reservation limits and price-resetting methods in the stochastic programming model as the best approach to deal with uncertainty within the framework of revenue management. Walczak et al. [6] demonstrated that overbooking can balance losses due to vacancy and boarding refusals; overbooking is determined by the expected income, probability that demand will exceed capacity, and expected number of boarding refusals. Moreover, overbooking gives airlines the benefit of not only reducing overall costs by improving operational efficiency, but also providing additional seats for passengers. In Japan, the “flex traveler system” has been introduced as a response to situations when passengers are denied boarding due to overbooking. In the unlikely event of a shortage of seats, the airline will invite passengers who can accommodate changes in flights at the airport on the day of the flight. The system requires the airline to compensate passengers who accept to such changes. In this study, we propose an optimization model using stochastic programming to maximize the profit of an airline company by securing seats appropriately and employing the concept of overbooking.

2 Problem Description

2.1 Itinerary and Flight Leg

Airlines classify tickets that have identical departures and destinations into multiple classes and sell them at different rates. In this study, a combination of take off and landing (flight section) is called a flight leg. In addition, a combination of the departure and the destination that does not include the transit airport is called the itinerary, and a combination of the itinerary and the fare class is called the origin, destination, fare class (ODF).

Fig. 1. Example of itinerary and flight leg

2.2 Reservation

During the sale of tickets for various classes with identical departures and destinations, airlines adjust the number of reserved seats for each class. The number of reserved seats is the number of reservations accepted for each itinerary. If the actual demand is greater than the number of reserved seats, lost opportunities will occur, and if demand is less than the number of seats, losses due to vacancy will occur. Reservations tend to fill up beginning from the low-priced class; therefore, if the airline sells all the seats at the same price and the seats are sold out early, they will be unable to sell airline tickets to passengers willing to pay higher prices, which is a lost opportunity to boost revenue. In addition, because the fuel and maintenance costs required to operate the airplane do not change depending on the number of passengers, vacant seats at takeoff will result in losses; therefore, it is necessary to consider these losses.

2.3 Overbooking

Overbooking is permitted as a measure to minimize the number of vacant seats during takeoff caused by cancellations before boarding or no-shows. Overbooking means that the sum of the number of reserved seats in each class is allowed to be greater than or equal to the number of airplane seats. A cancellation is a reservation withdrawn with notice before boarding. Conversely, a no-show is a cancellation without notice, which refers to an instance wherein the passenger does not appear at the boarding gate by the scheduled time. Airlines typically refund passengers who cancel their reservations with an amount equal to a part of the fare paid. In addition, if the airline company performs overbooking at the time of reservation and the actual number of cancellations is less than expected, boarding for a few passengers is refused. The company refunds the fare and compensates passengers who have no alternative but to be refused boarding.

3 Previous Study

In the Littlewood model [3], the company sells tickets in two phases. In the first half of the sales, the tickets are sold at a discounted fare to price-sensitive, discount-seeking passengers (leisure travelers). As the purpose of such passengers is to travel for leisure, there is a possibility of relatively early planning. However, passengers who buy tickets at a discounted fare cannot change the reservation details; additionally, restrictions such as a high cancellation fare are levied. In the latter half of the sales, the reservation timing is often set prior to boarding, and the tickets are sold at regular fares for business passengers who dislike restrictions such as inability to change reservation details. We assume that the demand of business passengers D is uncertain and that the flight can be fully booked by providing discounted fares. In addition, we assume that discount-seeking passengers make reservations before business passengers. If the airline accepts reservations on a first-come, first-serve basis until


all seats are sold out, the discount-seeking passengers will occupy all the seats, and the airline's income will be low. To prevent this, Littlewood adopted the idea of a protection level y, which represents the number of seats reserved for high-paying customers. Seats apart from those reserved for the protection level y will be sold at a discounted fare. The remaining number of seats is called the booking limit. Littlewood's formula is given as equality (1), where F(·), r1, and r2 denote the distribution function of D, the fare for leisure travelers, and the fare for business travelers, respectively.

F(y) = 1 − r2/r1    (1)
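As a hedged illustration of Eq. (1) (not the authors' code), the sketch below computes a protection level when the business demand D is assumed normal; the fares and demand moments are invented example values.

```python
# Sketch of Littlewood's rule (Eq. 1): protect y seats for the high fare so that
# P(D > y) = r2/r1, i.e. F(y) = 1 - r2/r1.  D is assumed normal here; all numbers
# below are illustrative, not taken from the paper's data.
from scipy.stats import norm

def protection_level(r1: float, r2: float, mu: float, sigma: float) -> float:
    """Protection level y with F(y) = 1 - r2/r1 for D ~ N(mu, sigma^2)."""
    return norm.ppf(1.0 - r2 / r1, loc=mu, scale=sigma)

if __name__ == "__main__":
    y = protection_level(r1=38310, r2=9510, mu=72, sigma=5)
    print(f"protect about {y:.1f} seats for the full fare")   # about 75.4
```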

However, the Littlewood model is limited to two classes; additionally, even if the demand for high-priced classes is below the protection level, airlines do not sell tickets at discounted fares. Therefore, there is a possibility of take off with vacant seats. Williamson [4] dealt with network-based models and optimized reservation management for the entire network using either a probabilistic mathematical programming problem (PMP) or a deterministic mathematical programming problem (DMP), both of which offer significant benefits. Because PMP uses stochastic demand, it needs to be solved using stochastic programming. Conversely, DMP simplifies the problem by replacing the uncertain demand with its expected value.

Parameters
ODF    Subscript for itinerary (origin, destination, fare class)
l      Subscript for flight leg
N      Set of flight legs on the network
S_l    Set of ODFs available in flight leg l
x_ODF  Number of reserved seats in the ODF
C_l    Number of seats of flight leg l
D_ODF  Probabilistic aggregated demand for the ODF
f_ODF  Fare of the ODF

The stochastic programming problem is formulated as follows. The term min {xODF , DODF } in Eq. (2) indicates that the objective function represents the number of reservations at takeoff. In PMP, the product of the fare and the actual number of reservations is assumed as the total revenue, and the expected value is maximized. In addition, inequality (3) is a capacity constraint to ensure that the total number of reserved seats is less than or equal to the number of airplane seats. The problem (PMP) can be expressed as

(PMP)  max  E[ Σ_ODF f_ODF min{x_ODF, D_ODF} ]              (2)
       s.t. Σ_{ODF∈S_l} x_ODF ≤ C_l,  ∀l ∈ N                 (3)
            x_ODF ≥ 0,  x_ODF ∈ Z,  ∀ODF ∈ S_l               (4)
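The short sketch below (my own illustration, not from the paper) estimates the PMP objective (2) by Monte Carlo sampling for a fixed seat allocation on a single flight leg and checks the capacity constraint (3); every number in it is invented.

```python
# Estimate E[sum_ODF f_ODF * min(x_ODF, D_ODF)] by sampling normal demand.
import numpy as np

rng = np.random.default_rng(0)

fares   = {"A-H/1": 18590, "A-H/2": 12790}   # f_ODF (illustrative)
x       = {"A-H/1": 20,    "A-H/2": 140}     # reserved seats per ODF
mean_d  = {"A-H/1": 18,    "A-H/2": 131}     # demand means (normal, sd = 10)
leg_cap = 165                                 # C_l for the single leg

def expected_revenue(n_samples: int = 10_000) -> float:
    total = 0.0
    for _ in range(n_samples):
        for odf, f in fares.items():
            d = max(0.0, rng.normal(mean_d[odf], 10.0))  # sampled demand D_ODF
            total += f * min(x[odf], d)                  # seats sold cannot exceed x or D
    return total / n_samples

assert sum(x.values()) <= leg_cap, "capacity constraint (3) violated"
print(f"estimated expected revenue: {expected_revenue():,.0f} yen")
```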

However, these two models maximize profits at the time of booking and do not consider cancellations subsequent to booking. Therefore, the disadvantage of these models is that they are not effective in conditions of uncertainty in the number of future reservations.

4 Formulation of the New Model

Based on a study of the aforementioned models, we introduce factors such as lost opportunities and vacant seat loss at the time of reservation, cancellation loss before boarding, and boarding refusal loss, and we propose a stochastic programming model that takes these into consideration. The objective function of this model is obtained by subtracting these losses from the revenue given by the product of the fares and the number of reserved seats; the number of reserved seats is optimized to maximize the expected value of the airline's total revenue, allowing overbooking in anticipation of cancellations.

Sets
OD  Set of itineraries (departure point, arrival point)
F   Set of classes (fares)
L   Set of flight legs

Parameters
C_lj  Number of airplane seats on flight leg l, class j
f_ij  Fare in itinerary i, class j
p_ij  Loss on cancellation in itinerary i, class j (fare − cancellation fee)
q_ij  Compensation for boarding denials in itinerary i, class j

Random variables
ξ̃_ij  Demand at the time of booking, following a normal distribution, in itinerary i, class j. Let Ξ_ij be the set of realized values ξ_ij
ζ̃_ij  Number of cancellations after booking, following a Poisson distribution, in itinerary i, class j. Let Z_ij be the set of realized values ζ_ij

Variables
x_ij              Number of reserved seats in itinerary i, class j
y⁺_ij(ξ_ij)       Number of insufficient seats at the time of booking in itinerary i, class j
y⁻_ij(ξ_ij)       Number of surplus seats at the time of booking in itinerary i, class j
w_ij(ξ_ij, ζ_ij)  Number of boarding refusals in itinerary i, class j

Equation (5) is the objective function of the proposed model. We consider the revenue from the number of reserved seats minus the opportunity, vacant seat, cancellation, and boarding refusal losses presented in Eq. (5) as the total revenue and maximize it. Equality (6) is the demand constraint in each itinerary and indicates that the number of reserved seats plus the number of insufficient seats minus the number of surplus seats is equal to the demand. In inequality (7), the number of reserved seats minus the number of surplus seats and the number of cancellations represents the number of passengers at the time of boarding. Inequality (7) is a capacity constraint in each flight leg and indicates that the number of passengers at the time of boarding is less than the combined sum of the airplane capacity and the number of boarding refusals.

max  Σ_{i∈OD(l)} Σ_{j∈F} f_ij x_ij − E_ξ̃[ Σ_{i∈OD(l)} Σ_{j∈F} f_ij y⁺_ij(ξ̃_ij) ] − E_ξ̃[ Σ_{i∈OD(l)} Σ_{j∈F} f_ij y⁻_ij(ξ̃_ij) ]
     − E_ζ̃[ Σ_{i∈OD(l)} Σ_{j∈F} p_ij ζ̃_ij ] − E_ξ̃ E_ζ̃[ Σ_{i∈OD(l)} Σ_{j∈F} q_ij w_ij(ξ̃_ij, ζ̃_ij) ]                    (5)

s.t.  x_ij + y⁺_ij(ξ_ij) − y⁻_ij(ξ_ij) = ξ_ij,  ∀i ∈ OD, ∀j ∈ F, ξ_ij ∈ Ξ_ij                                       (6)
      Σ_{i∈OD(l)} (x_ij − y⁻_ij(ξ_ij) − ζ_ij) ≤ C_lj + Σ_{i∈OD(l)} w_ij(ξ_ij, ζ_ij),  ∀l ∈ L, ξ_ij ∈ Ξ_ij, ζ_ij ∈ Z_ij   (7)
      x_ij ≥ 0, x_ij ∈ Z,  ∀i ∈ OD, ∀j ∈ F                                                                          (8)
      y⁺_ij(ξ_ij), y⁻_ij(ξ_ij) ≥ 0,  y⁺_ij(ξ_ij), y⁻_ij(ξ_ij) ∈ Z,  ∀i ∈ OD, ∀j ∈ F, ξ_ij ∈ Ξ_ij                    (9)
      w_ij(ξ_ij, ζ_ij) ≥ 0, w_ij(ξ_ij, ζ_ij) ∈ Z,  ∀i ∈ OD, ∀j ∈ F, ξ_ij ∈ Ξ_ij, ζ_ij ∈ Z_ij                       (10)

Setting of random variables

We assume that demand ξ̃_ij follows a normal distribution N(μ_ij, σ_j²) and is represented by the following probability density function:

f(ξ_ij) = (1/√(2πσ_j²)) exp[ −(ξ_ij − μ_ij)² / (2σ_j²) ]

The expected value of this probability density function is given by E(ξ_ij) = μ_ij. The number of cancellations ζ̃_ij follows the Poisson distribution Po(λ_ij) and is represented by the following probability:

P(ζ_ij) = e^(−λ_ij) λ_ij^(ζ_ij) / ζ_ij!,    λ_ij = c_j x_ij

The symbol c_j represents the probability of cancellation per customer. The expected value of this distribution is given by E(ζ_ij) = λ_ij. For the probability distributions used in this study, upper and lower limits (a ≤ x ≤ b) were set, and the truncated distribution expressed by the equations given subsequently was applied. Functions g(x) and F(x) indicate the density and cumulative distribution functions of the random variable, respectively.

Density function:  g(x) / (F(b) − F(a))
Cumulative distribution function:  (∫_a^x g(t) dt) / (F(b) − F(a)) = (F(x) − F(a)) / (F(b) − F(a))
Expected value:  (∫_a^b x g(x) dx) / (F(b) − F(a))
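As a small hedged check of the truncation formulas above (my own illustration, not the authors' code), the sketch below evaluates the truncated-normal mean both directly from the formula and with SciPy's truncated normal; mu, sigma and the bounds are arbitrary example values.

```python
# Truncated-normal mean: (int_a^b x g(x) dx) / (F(b) - F(a)).
from scipy.stats import norm, truncnorm
from scipy.integrate import quad

mu, sigma, a, b = 131.0, 30.0, 0.0, 263.0   # demand bounded by 0 and a capacity-like upper limit

# direct evaluation of the expected-value formula for the truncated density
num, _ = quad(lambda x: x * norm.pdf(x, mu, sigma), a, b)
den = norm.cdf(b, mu, sigma) - norm.cdf(a, mu, sigma)
print("formula   :", num / den)

# same quantity via scipy's truncated normal (bounds are given in standardized units)
tn = truncnorm((a - mu) / sigma, (b - mu) / sigma, loc=mu, scale=sigma)
print("truncnorm :", tn.mean())
```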

5 Numerical Experiments

The experimental environment is as follows: OS—Windows 10 Pro 64 bit, CPU—Core i7-7500U (2.70 GHz), RAM—8.00 GB, modeling language—AMPL. The following two models were used for comparison with the proposed stochastic programming model.

5.1 Littlewood Model Considering Cancellation

The optimal number of reserved seats was calculated using Littlewood’s [3] concept of optimal protection level. The original Littlewood’s model can be extended by introducing cancellations.

Variables
x_ij          Number of reserved seats in itinerary i, class j
y′⁺_ij(ξ_ij)  Number of insufficient seats at the time of booking in itinerary i, class j
y′⁻_ij(ξ_ij)  Number of surplus seats at the time of booking in itinerary i, class j

Considering the formulas based on Littlewood's formula (1),

F(x_i1) = 1 − f_i2/f_i1                    (11)

and

F(x_i1) = Φ((x_i1 − μ_i1)/σ),              (12)

we obtain

1 − f_i2/f_i1 = F(x_i1) = Φ((x_i1 − μ_i1)/σ).

Furthermore, using z_i (which is defined as a value such that Φ(z_i) = 1 − f_i2/f_i1), we determine the decision variables as follows:

x_i1 = μ_i1 + z_i σ
x_i2 = μ_i1 + μ_i2 − x_i1
y′⁺_ij(ξ_ij) = max(0, μ_ij − x_ij)
y′⁻_ij(ξ_ij) = max(0, x_ij − μ_ij)

We substitute these into the formula for the objective function to obtain the maximum profit:

Σ_{i∈OD(l)} Σ_{j∈F} f_ij x_ij − Σ_{i∈OD(l)} Σ_{j∈F} f_ij y′⁺_ij(ξ̄_ij) − Σ_{i∈OD(l)} Σ_{j∈F} f_ij y′⁻_ij(ξ̄_ij) − Σ_{i∈OD(l)} Σ_{j∈F} p_ij ζ_ij      (13)

5.2 Deterministic Model

In the deterministic model, the profit is maximized by the optimum number of reserved seats xd obtained using the expected value of the normal distribution for the demand and the expected value of the Poisson distribution for the number of cancellations without considering the fluctuation.


Equation (5) is the objective function of the original problem. We consider the revenue from the number of reserved seats minus the opportunity, vacant seat, cancellation, and boarding refusal losses presented in Eq. (14) as the total revenue and maximize it. Equality (15) is the demand constraint in each itinerary and indicates that the number of reserved seats plus the number of insufficient seats minus the number of surplus seats is equal to the demand. In inequality (16), the number of reserved seats minus the number of surplus seats and the number of cancellations represents the number of customers at the time of boarding. Inequality (16) is a capacity constraint in each flight leg and indicates that the number of customers at the time of boarding is less than the sum of the airplane capacity and the number of boarding refusals.

Parameters
x^d_ij           Number of reserved seats at the time of booking in itinerary i, class j
y^{d+}_ij(ξ_ij)  Number of insufficient seats at the time of booking in itinerary i, class j
y^{d−}_ij(ξ_ij)  Number of surplus seats at the time of booking in itinerary i, class j

Formulation

max  Σ_{i∈OD(l)} Σ_{j∈F} f_ij x^d_ij − Σ_{i∈OD(l)} Σ_{j∈F} f_ij y^{d+}_ij − Σ_{i∈OD(l)} Σ_{j∈F} f_ij y^{d−}_ij
     − Σ_{i∈OD(l)} Σ_{j∈F} p_ij E(ζ_ij) − Σ_{i∈OD(l)} Σ_{j∈F} q_ij w^d_ij                                        (14)

s.t.  x^d_ij + y^{d+}_ij − y^{d−}_ij = E(ξ_ij),  ∀i ∈ OD, ∀j ∈ F, ξ_ij ∈ Ξ_ij                                    (15)
      Σ_{i∈OD(l)} (x^d_ij − y^{d−}_ij − E(ζ_ij)) ≤ C_lj + Σ_{i∈OD(l)} w^d_ij,  ∀l ∈ L, ξ_ij ∈ Ξ_ij, ζ_ij ∈ Z_ij   (16)
      x^d_ij ≥ 0, x^d_ij ∈ Z,  ∀i ∈ OD, ∀j ∈ F                                                                    (17)
      y^{d+}_ij, y^{d−}_ij ≥ 0,  y^{d+}_ij, y^{d−}_ij ∈ Z,  ∀i ∈ OD, ∀j ∈ F, ξ_ij ∈ Ξ_ij                           (18)
      w^d_ij ≥ 0, w^d_ij ∈ Z,  ∀i ∈ OD, ∀j ∈ F, ξ_ij ∈ Ξ_ij, ζ_ij ∈ Z_ij                                          (19)

5.3 Data Setting

In this study, the network shown in Fig. (2) was used. We set seven itineraries by connecting four flight legs and connecting five airports, A, B, C, D, and H.


Each itinerary includes one or two flight legs. To travel from airports A, B, and C to airport D, there is no direct flight; therefore, it is necessary to go through airport H.

Fig. 2. Example of network

We limited our study to two classes of flights. The number of seats in each flight leg is listed in Table 1. In addition, the expected value of the fare and demand for each itinerary are listed in Table 2.

Table 1. Number of seats on each flight leg

Leg number  Leg   Number of seats
                  Class1  Class2
1           A-H   20      145
2           B-H   15      80
3           C-H   20      145
4           H-D   94      263

Table 2. Fare and expected value of the demand on each flight leg

OD Number  OD     Fare (yen)        Expected value
                  Class1  Class2    Class1  Class2
1          A-H    18590   12790     18      131
2          B-H    16590   12490     14      72
3          C-H    20190   14690     18      131
4          H-D    38310   9510      72      385
5          A-H-D  50200   19900     2       15
6          B-H-D  31360   19600     2       8
7          C-H-D  49900   20000     2       15


The loss on cancellation and the boarding refusal fee can be set as follows. In addition, the fluctuation of demand and the cancellation probability of each class are shown in Table 3.

Table 3. The value of each parameter for each class

Class  p_ij (yen)           q_ij (yen)      σ_j²  c_j (%)
1      (fare − 440) × 0.6   Fare + 10,000   5     15
2      (fare × 50%) × 0.8   Fare + 10,000   30    5

Table 4 lists the total profits of the three models; it is apparent that the total profit obtained by the stochastic programming model is the maximum. Table 5 shows the number of seats of each ODF, which is determined by each model. In the Littlewood model, the number of reserved seats is determined only by the difference in fare between Class 1 and Class 2; therefore, it presents the disadvantage of securing an excessive number of Class 1 seats with high fares, resulting in a large loss of opportunities, vacant seat losses, and losses on cancellation. In the stochastic model, depending on the itinerary, more seats than are present on the airplane are secured; however, it is possible to minimize the total loss. Table 6 shows the revenue for each itinerary for each model. OD 4 in the Littlewood model resulted in a loss of 959 × 10³ yen. If the fare difference between the two classes is large and the demand for the itinerary is high, the profit may be negative. In addition, the number of cancellations differs depending on whether fluctuations are taken into consideration; therefore, it is apparent that the stochastic programming model is considerably more realistic than the deterministic model.

Table 4. Total revenue for each model

Model          Total revenue (yen)
Littlewood     423,270
Deterministic  11,391,600
Stochastic     11,758,300

Table 5. Number of reserved seats for each model

OD   Littlewood       Deterministic    Stochastic
No.  Class1  Class2   Class1  Class2   Class1  Class2
1    65      84       18      131      17      145
2    48      38       14      72       12      59
3    68      81       18      131      17      145
4    398     0        89      227      114     289
5    8       8        2       14       1       6
6    1       8        1       8        1       3
7    8       8        2       14       1       6

Table 6. Profit for each model (10³ yen)

OD   Littlewood        Deterministic     Stochastic
No.  Class1  Class2    Class1  Class2    Class1  Class2
1    248     405       305     1,642     3,302   1,809
2    115     2         212     881       185     679
3    22      241       331     1,886     282     2,005
4    1,200   −2,159    2,106   2,116     3,542   2,548
5    74      31        91      273       50      119
6    31      145       29      154       0       59
7    46      24        91      274       50      120

6 Conclusion

In this study, we proved that it is possible to maximize the expected value of total revenue by securing seats and using an optimization model based on a stochastic programming model. Furthermore, we compared the results obtained using the Littlewood model (which secures seats based on the Littlewood formula) and the deterministic model (which does not consider fluctuations). The Littlewood model presents the disadvantage that the fare difference between the two classes strongly affects the number of seats reserved. Therefore, airlines will reserve an excessive number of high-priced class seats and increase lost opportunities and vacant seat losses. The stochastic programming model proposed in this study is a practical model that enables realistic predictions by examining multiple scenarios, unlike the deterministic model that does not consider fluctuations and examines only one scenario. In the future, sales methods are expected to increase various aspects; moreover, it is expected that additional classes of airline tickets will be sold. In addition, it is now possible to easily book airline tickets using the Internet; hence, it is necessary to design a revenue management method capable of responding flexibly.


Future research areas include application to networks with larger itineraries, examination of scenarios other than those considered in this study, namely cancellation and boarding refusal losses, and the application of more realistic implementation methods.

References

1. Takagi, H.: Service science beginning (in Japanese). University of Tsukuba Publishing, pp. 211–248 (2014)
2. Sato, K., Sawaki, K.: Revenue Management from the Basics of Revenue Management to Dynamic Pricing (in Japanese), pp. 51–110. Kyoritsu Publishing Co., Ltd, Tokyo (2020)
3. Littlewood, K.: Forecasting and control of passenger bookings. AGIFORS Symp. Proc. 12, 95–117 (1972)
4. Williamson, E.L.: Airline network seat inventory control: methodologies and revenue impacts. Ph.D. Thesis, Massachusetts Institute of Technology, Cambridge, MA (1992)
5. De Boer, S.V., Freling, R., Piersma, N.: Mathematical programming for network revenue management revisited. Eur. J. Oper. Res. 137, 72–92 (2002)
6. Walczak, D., Boyd, E.A., Cramer, R.: Revenue management. In: Barnhart, C., Smith, B. (eds.) Quantitative Problem-Solving Methods in the Airline Industry, pp. 101–161. Springer (2012). https://doi.org/10.1057/9780230294776
7. Shiina, T.: Stochastic Programming (in Japanese). Asakura Publishing, Tokyo (2015)

Stochastic Programming Model for Lateral Transshipment Considering Rentals and Returns

Keiya Kadota, Tetsuya Sato, and Takayuki Shiina(B)

Waseda University, Tokyo, Japan
[email protected], tetsuya [email protected], [email protected]

Abstract. Supply chain management involves large-scale planning under uncertainty. It is important to build an efficient supply chain under uncertain circumstances. Numerous traditional lateral transshipment models target only demand and do not include rentals and returns. This study represents the uncertainty in rentals and returns by scenarios, using a multi-period stochastic programming model of the lateral transshipment problem. The moment matching method was used to reduce the number of scenarios and the computation time, and a comparative experiment demonstrated the utility of this model.

Keywords: Stochastic programming · Inventory transshipment · Supply chain

1 Introduction

In recent years, it has become difficult to minimize inventory, ordering, and shipping costs while satisfying the various needs of each customer. Numerous studies exist on facility location and inventory planning problems that use stochastic programming to build an efficient supply chain under uncertain conditions. In addition, supply chain management over multiple periods requires a long calculation time. Therefore, numerous studies using the K-means and moment matching methods, which reduce the number of scenarios, have been conducted to keep computation times practical. Paterson et al. [4] conducted a literature review and categorised previous studies of lateral transshipment. Lateral transshipment models fall into two categories: those that occur at a predetermined time before all demand is known, and those that can be done at any time to accommodate a potential shortage. These two types are called proactive transshipments and reactive transshipments. Shiina et al. [6] proposed the inventory transshipment problem. This aimed to enhance the service level and minimize costs by reducing the number of orders to factories through inventory transshipment between the bases, considering demand fluctuation. In this study, stochastic programming was used to consider the penalty for causing inventory shortages. However, this is a single-period model, and it can be extended to a multi-period model.


The Schildbach and Morari [5] minimized production, shipping, and inventory costs in multi-period supply chain management while considering demand fluctuation. Their research meets demand invariably without considering inventory shortages. Aragane et al. [1] proposed the inventory transshipment problem to minimize the total cost using a scenario-tree to provide demand fluctuation in multiple periods considering rentals and returns. In their research, setting up a new factory is assumed when it is difficult to meet demand with the present condition. The number of scenarios is reduced using the K-means method, which aggregates scenario fans into a scenario tree.

2 Multi-period Inventory Transshipment Problem

2.1 Problem Description

This study aims to minimize the total cost of a multi-period inventory transshipment problem, and considers businesses that engage in rental products, such as large container houses. Aragane et al. [1] divided the network into three echelons: the factory, base, and customer. This model assumes that customer demand and the rental period follow a probability distribution. In a typical supply chain, considerable fixed order costs are incurred when each base places an order with a factory every period. Thus, transshipment between bases, which manages the risk of inventory shortage between bases, improves the service level while reducing the number of orders from a factory. This study uses inventory transshipment without distinction between before and after observing demand. Each base performs inventory transshipment from a base that is overstocked or has cheap order costs to another one that has an inventory shortage or expensive order costs, concurrently with orders from the factory. The preconditions for this study are as follows.

• Each base may rent products only once, and the rental product will be returned to the same base in some later period.
• The rental is made before the final period, and the return is made in a period after the rental.
• Each base holds products obtained by ordering from the factory and by inventory transshipment.
• Rentals and returns are made at the same base.
• Fixed order costs are incurred when placing orders with the factory.
• Penalties are incurred for inventory shortages caused by uncertain demand.

Let the period be T and the number of bases be I. The number of rental and return patterns for each base can be expressed as T(T − 1)/2. The number of rental and return patterns at each base is determined by the length of the period; therefore, the total number of scenarios depends on the number of bases and the length of the period. The total number of scenarios is obtained by raising the number of rental and return patterns to the power of the number of bases. Figure 1 shows the 6 = (4 × 3)/2 rental and return patterns for a total period of 4. The blue circle represents the period when the rental occurs, whereas the red circle represents the period when the product is returned. The white circle represents a period when neither occurs.

Fig. 1. Pattern of the model

In addition, we consider the probability of each rental and return pattern in this study. By defining the probabilities for rentals and returns, we set probabilities for all rental and return patterns. The rental is assumed to occur in period n = 1, · · · , T − 1 with equal probability 1/(T − 1). The return is then assumed to occur with equal probability 1/(T − n) in each of the remaining periods. The probabilities of the rental and return patterns for the total period 4 are calculated as follows.

• Probability of scenarios 1, 2, 3: 1/((4 − 1)(4 − 1)) = 1/9
• Probability of scenarios 4, 5: 1/((4 − 1)(4 − 2)) = 1/6
• Probability of scenario 6: 1/((4 − 1)(4 − 3)) = 1/3

Because each scenario is determined by the period when the rental occurred, the probability of the scenario can be calculated as 1/((T − 1)(T − n)) if the rental period is n. The probabilities of the rental and return patterns for the total period 4 are shown in Fig. 2.
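As a hedged illustration of the pattern counting and probabilities above (my own sketch, not the authors' code), the snippet below enumerates the T(T − 1)/2 rental/return patterns of one base and verifies that their probabilities sum to one.

```python
# Enumerate rental/return patterns and their probabilities 1/((T-1)(T-n)).
from fractions import Fraction

def patterns(T: int):
    """Yield (rental period n, return period m, probability) for one base."""
    for n in range(1, T):                 # rental in periods 1 .. T-1
        for m in range(n + 1, T + 1):     # return strictly after the rental
            yield n, m, Fraction(1, (T - 1) * (T - n))

pats = list(patterns(4))
assert len(pats) == 4 * 3 // 2 == 6
assert sum(p for _, _, p in pats) == 1
for n, m, p in pats:
    print(f"rental in t={n}, return in t={m}: probability {p}")
print("scenarios for 5 bases:", len(pats) ** 5)   # 6^5 = 7776, as reported later in Table 3
```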


Fig. 2. Probability of model

2.2 Notations

Sets
I  The set of bases
S  The set of scenarios
T  The set of periods

Parameters
ξ^s_it   Rental quantity to base i under scenario s in period t
ζ^s_it   Return quantity from base i under scenario s in period t
H^s_it   Inventory cost of base i under scenario s in period t
P^s_it   Stockout cost of base i in scenario s of period t
R^s_it   Variable order cost of base i under scenario s in period t
L^s_ijt  Lateral transshipment cost from base i to base j under scenario s in period t
W_it     Fixed order cost of base i in period t
β        Reusable rate of returned products
CAP      Factory capacity
p^s      Occurrence probability of scenario s

Variables
u_it     0–1 decision variable, 1 if an order is placed at base i in period t; otherwise 0
l^s_it   Inventory quantity of base i under scenario s in period t
z^s_it   Stockout quantity of base i under scenario s in period t
o^s_it   Order quantity of base i to the factory under scenario s in period t
x^s_ijt  Lateral transshipment quantity from base i to base j under scenario s in period t

The 0–1 variables related to the order decision were defined as scenario-independent decision variables, and the other decision variables were defined for each scenario.

2.3 Formulation

The proposed problem is formulated as a multi-period stochastic problem with scenarios, as follows:

min  Σ_{t∈T} Σ_{i∈I} W_it u_it + Σ_{t∈T} Σ_{s∈S} p^s ( Σ_{i∈I} Σ_{j≠i} L^s_ijt x^s_ijt + Σ_{i∈I} R^s_it o^s_it + Σ_{i∈I} H^s_it l^s_it + Σ_{i∈I} P^s_it z^s_it )        (1)

s.t.  Σ_{i∈I} o^s_it ≤ CAP,  ∀t ∈ T, ∀s ∈ S                                                                        (2)
      l^s_it − z^s_it = l^s_{i,t−1} + o^s_it + Σ_{j∈I} x^s_jit − Σ_{j∈I} x^s_ijt − ξ^s_it + βζ^s_it,  ∀i ∈ I, ∀t ∈ T, ∀s ∈ S   (3)
      Σ_{j∈I} x^s_ijt ≤ l^s_{i,t−1} + o^s_it,  ∀i ∈ I, ∀t ∈ T, ∀s ∈ S                                               (4)
      o^s_it ≤ M u_it,  ∀i ∈ I, ∀t ∈ T, ∀s ∈ S                                                                      (5)
      u_it ∈ {0, 1},  l^s_it, z^s_it, o^s_it, x^s_ijt ≥ 0,  ∀i ∈ I, ∀t ∈ T, ∀s ∈ S                                   (6)

Objective function (1) of the main problem represents the minimization of the total cost of fixed ordering, inventory flexibility, ordering, inventory, and inventory shortage. Inequality (2) is the constraint that the number of orders placed with the factory is less than or equal to the capacity of the factory, and inequality (3) is the constraint representing the conservation of the amount of inventory at the base. The left-hand side of (3) represents the amount of inventory in period t of scenario s, and the right-hand side represents the sum of the amount of inventory in period t − 1 of scenario s and the transition in products at base i in period t. The transition in the number of products refers to the number of orders, decrease owing to rental patterns, increase owing to return patterns, and amount of inventory transshipment between bases. Inequality (4) is a constraint that the amount of inventory transshipment must be less than or equal to the amount of inventory at the end of the previous period, and inequality (5) is a constraint that represents the upper bound on the number of orders.
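As a small hedged sketch of the balance in constraint (3) (my own illustration under the paper's notation, not the authors' code), the function below splits the net inventory position at one base into on-hand inventory and stockout; all numbers are invented.

```python
# Inventory balance of constraint (3) at one base in one period of one scenario.
def inventory_balance(l_prev, order, ship_in, ship_out, rental, ret, beta=0.25):
    """Return (l, z) with l - z = l_prev + order + ship_in - ship_out - rental + beta*ret."""
    net = l_prev + order + ship_in - ship_out - rental + beta * ret
    l = max(net, 0.0)       # on-hand inventory l
    z = max(-net, 0.0)      # stockout quantity z, penalised in objective (1)
    return l, z

print(inventory_balance(l_prev=120, order=100, ship_in=0, ship_out=30, rental=150, ret=80))
# -> (60.0, 0.0): 120 + 100 + 0 - 30 - 150 + 0.25*80
```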

3 Moment Matching Method

3.1 Overview of the Moment Matching Method

The total number of scenarios in previous studies increased with the number of bases and periods. When the total number of scenarios is large, the computation time also becomes very long. In this study, we attempt to reduce the number of scenarios and the computation time using the moment matching method. Kaut and Wallace [3] presented an overview of the most basic scenario generation methods, and classified them into scenario trees considering independence of random variable and scenario fans assuming the joint distribution. If the uncertainty is represented by a multivariate continuous distribution or discrete distribution with many outcomes, there is a need to reduce the number of possible scenarios. Aragane et al. [1] used the K-means method to solve the inventory transshipment problem. This method aggregates the scenario fans that have a large number of scenarios and large scale of the problem into a scenario tree with a smaller number of nodes by bundling the nodes of each period into clusters. Høyland and Wallace [2] tackled the moment matching method that reduces scenarios by approximate solutions without drastically changing the nature of the distribution by assigning the probabilities again. If the scenario is given in the form of a tree, this method sets the probability of the scenario for each stage again, retaining the statistical properties, such as the expected value, variance, skewness, and kurtosis in the original probability distribution. If the probability is assigned once again, the probability of the node corresponding to each stage scenario may become zero. Then that of all the child nodes will also become zero. Therefore, it is possible to reduce the number of scenarios. 3.2

Notation

Parameters
N              Number of bases where probability fluctuation occurs
b_i            The number of fluctuations at base i
B_i            Total number of nodes in the scenario tree corresponding to base i
M_i, Q_i       Expected value of the period when rental or return occurs at base i
Σ_i, C_i, V_i  Variance, skewness, and kurtosis of the period when rental occurs at base i
σ_i, c_i, v_i  Variance, skewness, and kurtosis of the period when return occurs at base i
X_i^k, Y_i^k   k-th realization in the scenario tree for the period when rental or return occurs at base i
λ_i            Weight factor at base i
ω_i^1, ω_i^2, ω_i^3; μ_i^1, μ_i^2, μ_i^3  Weight factors for the variance, skewness, and kurtosis of the period when rental or return occurs at base i

Variables
p_i^k         Probability of occurrence of the k-th realization in the scenario tree at base i
Σ_i^+, Σ_i^−  Difference from the theoretical value of the variance of the period when rental occurs at base i
σ_i^+, σ_i^−  Difference from the theoretical value of the variance of the period when return occurs at base i
C_i^+, C_i^−  Difference from the theoretical value of the skewness of the period when rental occurs at base i
c_i^+, c_i^−  Difference from the theoretical value of the skewness of the period when return occurs at base i
V_i^+, V_i^−  Difference from the theoretical value of the kurtosis of the period when rental occurs at base i
v_i^+, v_i^−  Difference from the theoretical value of the kurtosis of the period when return occurs at base i

3.3 Formulation

The moment matching method is formulated as follows.

min  Σ_{i=1}^{N} λ_i ( ω_i^1(Σ_i^+ + Σ_i^−) + ω_i^2(C_i^+ + C_i^−) + ω_i^3(V_i^+ + V_i^−)
                      + μ_i^1(σ_i^+ + σ_i^−) + μ_i^2(c_i^+ + c_i^−) + μ_i^3(v_i^+ + v_i^−) )        (7)

s.t.  Σ_{k=1}^{B_i} X_i^k p_i^k = M_i,  i = 1..N                                                    (8)
      Σ_{k=1}^{B_i} Y_i^k p_i^k = Q_i,  i = 1..N                                                    (9)
      Σ_{k=(j−1)b_i+1}^{j·b_i} p_i^k = p_{i−1}^j,  i = 1..N, j = 1..B_{i−1}                          (10)
      Σ_{k=1}^{B_i} (X_i^k − M_i)² p_i^k − Σ_i^+ + Σ_i^− = Σ_i,  i = 1..N                           (11)
      Σ_{k=1}^{B_i} (Y_i^k − Q_i)² p_i^k − σ_i^+ + σ_i^− = σ_i,  i = 1..N                           (12)
      Σ_{k=1}^{B_i} (X_i^k − M_i)³ p_i^k − C_i^+ + C_i^− = C_i,  i = 1..N                           (13)
      Σ_{k=1}^{B_i} (Y_i^k − Q_i)³ p_i^k − c_i^+ + c_i^− = c_i,  i = 1..N                           (14)
      Σ_{k=1}^{B_i} (X_i^k − M_i)⁴ p_i^k − V_i^+ + V_i^− = V_i,  i = 1..N                           (15)
      Σ_{k=1}^{B_i} (Y_i^k − Q_i)⁴ p_i^k − v_i^+ + v_i^− = v_i,  i = 1..N                           (16)
      Σ_i^+, Σ_i^−, σ_i^+, σ_i^−, C_i^+, C_i^−, c_i^+, c_i^−, V_i^+, V_i^−, v_i^+, v_i^− ≥ 0          (17)

Objective function (7) minimizes the difference between the theoretical and approximated values of the variance, skewness, and kurtosis. Equations (8) and (9) are constraints that are the product of the realization, and the probability is equal to the theoretical value of the expectation. Equation (10) is the constraint that the sum of the descendant probabilities in the scenario tree is equal to the probability of the parent node. Equations (11) and (12) yield the variance, Equations (13) and (14) yield the skewness, and Equations (15) and (16) yield the difference from the theoretical value of kurtosis. The problem becomes a linear programming problem because the coefficients of the random variable pki for Equations (11) to (16) are constant, as shown in Equations (8) and (9).
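The hedged sketch below (not the authors' implementation) simply computes the central moments that constraints (8)–(16) try to preserve, for a small discrete scenario set under given probabilities; the realizations and probabilities are example values consistent with the T = 4 patterns.

```python
# Central moments preserved by the moment matching LP (7)-(17).
import numpy as np

X = np.array([1, 1, 1, 2, 2, 3])                    # rental periods of the six patterns (T = 4)
p = np.array([1/9, 1/9, 1/9, 1/6, 1/6, 1/3])        # their probabilities

mean = float(np.dot(p, X))                          # matched exactly by (8)
var  = float(np.dot(p, (X - mean) ** 2))            # target of constraint (11)
skew = float(np.dot(p, (X - mean) ** 3))            # target of constraint (13)
kurt = float(np.dot(p, (X - mean) ** 4))            # target of constraint (15)
print(mean, var, skew, kurt)
```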

4 Numerical Experiments

We demonstrated the effectiveness of the proposed model through numerical experiments. The computer used for this experiment was a Core i5-4300 with 32 GB memory, using AMPL-CPLEX version 12.6.2.0 on Windows 10 Pro. The locations of the factory and bases were randomly generated on the [0,100] × [0,100] grid. The order cost R^s_it is defined as (1 + 0.1 × the distance between the factory and the base). The lateral transshipment cost L^s_ijt is defined as (0.5 × the distance between the bases). The fluctuation of the rentals and returns was represented by a scenario tree. The rental quantity follows a normal distribution with a standard deviation of 10. The means were 100, 150, 200, 250, and 300 for bases 1, 2, 3, 4, and 5, respectively. The number of scenarios varies depending on the period and number of bases, and is represented by a scenario tree. The values of the major parameters are described as follows.

Table 1. Values of major parameters

Notation                                    Value
I       The number of bases                 5
H^s_it  Inventory cost                      1
P^s_it  Stockout cost                       10
W_it    Order fixed cost                    50
β       Reusable rate of returned products  0.25
CAP     Capacity of the factory             2000
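As a hedged sketch of this experimental setting (my own reading of the text above, not the authors' generator), the snippet below draws the factory and base locations on the grid and derives the distance-based order and transshipment costs.

```python
# Random instance generation: costs from distances on a [0,100] x [0,100] grid.
import math
import random

random.seed(1)
factory = (random.uniform(0, 100), random.uniform(0, 100))
bases = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(5)]

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

order_cost = [1 + 0.1 * dist(b, factory) for b in bases]                 # R_it^s
transship_cost = [[0.5 * dist(bi, bj) for bj in bases] for bi in bases]  # L_ijt^s
print([round(c, 2) for c in order_cost])
```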


The numerical experiments demonstrated the effectiveness of our model and the reduction of the number of scenarios and computation time using the moment matching method. First, we demonstrate the usefulness of this model by comparing it to a deterministic problem. Next, we compare the computation time of the problem after applying the moment matching method with that of the original problem. The value of the stochastic solution (VSS) was used to evaluate the solution of the stochastic programming method. The deterministic model is formulated by replacing the random variables with the mean over all scenarios. Let EEV be the objective value of the stochastic programming problem evaluated at the optimal solution of the deterministic problem, and RP the optimal objective value of the stochastic programming problem. VSS is defined using EEV and RP as follows.

VSS = EEV − RP        (18)

Table 2 shows the results of an experiment comparing RP and EEV. The improvement rate of the cost by the stochastic model is defined as follows.

improvement rate [%] = (VSS / EEV) × 100        (19)
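A minimal sketch of Eqs. (18)–(19), checked against the 5-base row of Table 2 below:

```python
# VSS and improvement rate.
def vss(eev: float, rp: float) -> float:
    return eev - rp                      # Eq. (18)

def improvement_rate(eev: float, rp: float) -> float:
    return vss(eev, rp) / eev * 100.0    # Eq. (19)

print(vss(5897.4, 5437.0))                           # 460.4
print(round(improvement_rate(5897.4, 5437.0), 2))    # about 7.81
```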

In all cases, RP provides a better solution than EEV. The improvement rate was highest when the number of bases was 4. For any number of bases, the number of orders placed was lower for EEV; however, the quantity ordered at a time was larger than that for RP. The effectiveness of the stochastic programming model is shown in Table 2.

Table 2. Evaluation of solution for stochastic programming model

The number of bases  RP      EEV     VSS    Improvement rate [%]
4                    3807.2  4256.3  449.1  10.6
5                    5437.0  5897.4  460.4  7.81
6                    7729.6  8610.9  881.3  10.2

The results of the computation time and the number of scenarios in Tables 3 and 4 show the effectiveness of the moment matching method. In this experiment, the period was set to four, and experiments were conducted for the cases of five and six bases. The probabilities of each scenario were recomputed from the newly obtained probabilities by applying the moment matching method. Using these probabilities, we solved the original problem. In both cases, we significantly reduced the number of scenarios and computation time. The total number of scenarios was reduced more when the number of bases was six than when the number of bases was five. This shows that the number of scenarios can be reduced as the number of bases increases.


For the original problem, the value of the objective function after applying the moment matching method is maintained within 2% error, indicating that this method works effectively. Figures 3 and 4 show the scenario tree after applying the moment matching method for the cases of five and six bases.

Table 3. In the case of 5 bases

                   Total cost  Computation times  Number of scenarios  Errors [%]
Original problem   5437.0      413.1              7776                 –
After application  5546.8      1.4                19                   −2.021

Table 4. In the case of 6 bases

                   Total cost  Computation times  Number of scenarios  Errors [%]
Original problem   7729.6      8768.2             46656                –
After application  7649.3      12.4               25                   1.039

Fig. 3. Result of moment matching when the number of bases is 5


Fig. 4. Result of moment matching when the number of bases is 6

5 Conclusion

In this study, the inventory transshipment problem is extended to a multi-period stochastic programming problem considering rentals and returns. The results of the numerical experiments show the effectiveness of this study. The computation time and number of scenarios were reduced using the moment matching method. Although the reliability of the error is not high because the moment matching method is an approximate solution method, the error can be suppressed to a small value within 2%. A future issue is a multi-period plan that considers the ordering lead time. This study shows the results by only changing the number of bases; however, the consideration of length of the period is necessary for the application of this model.

References

1. Aragane, K., Shina, T., Fukuba, T.: Multi-period stochastic lateral transshipment problem for rental products. Asian J. Manag. Sci. Appl. 6(1), 32–48 (2021)
2. Høyland, K., Wallace, S.W.: Generating scenario trees for multistage decision problems. Manag. Sci. 47, 295–307 (2001)
3. Kaut, M., Wallace, S.W.: Evaluation of scenario generation methods for stochastic programming. Pacific J. Optim. 3, 257–271 (2007)
4. Paterson, C., Kiesmüller, G., Teunter, R., Glazebrook, K.: Inventory models with lateral transshipments: a review. Eur. J. Oper. Res. 210, 125–136 (2011)
5. Schildbach, G., Morari, M.: Scenario-based model predictive control for multi-echelon supply chain management. Eur. J. Oper. Res. 252, 540–549 (2016)
6. Shiina, T., Umeda, M., Imaizumi, J., Morito, S., Xu, C.: Inventory distribution problem via stochastic programming. Asian J. Manag. Sci. Appl. 1, 261–277 (2014)
7. Xu, D., Chen, Z., Yang, L.: Scenario tree generation approaches using K-means and LP moment matching methods. J. Comput. Appl. Math. 236, 4561–4579 (2012)
8. Kitamura, T., Shina, T.: Solution algorithm for time/cost trade-off stochastic project scheduling problem. In: Operations Research Proceedings 2018, pp. 467–473. Springer (2019)
9. Shiina, T.: Stochastic Programming (in Japanese). Asakura Publishing, Tokyo (2015)

Multi-objective Sustainable Process Plan Generation for RMS: NSGA-III vs New NSGA-III

Imen Khettabi1, Lyes Benyoucef2(B), and Mohamed Amine Boutiche1

1 DGRSDT, LaROMaD Laboratory, USTHB University, Algiers, Algeria
{khettabi.imen,mboutiche}@usthb.dz
2 Aix Marseille University, University of Toulon, CNRS, LIS, Marseille, France
[email protected]

Abstract. Nowadays, to be relevant, the manufacturing system of a company has to be simultaneously cost and time-efficient and environmentally harmless. The RMS paradigm is proposed to cope with these new challenges. This paper addresses a multi-objective sustainable process plan generation problem in a reconfigurable manufacturing context. A non-linear multi-objective integer program (NL-MOIP) is proposed, where four objectives are minimized: the amount of greenhouse gas emitted by machines, the hazardous liquid wastes, the classical total production cost, and the total production time. To solve the problem, adapted versions of the well-known non-dominated sorting genetic algorithm (NSGA) approach, namely NSGA-III and New NSGA-III, are developed. Finally, the evaluation of the efficiency of the two approaches is performed through the use of four metrics: cardinality of the Pareto front (CPF), cardinality of the mixed Pareto fronts (CMPF), inverted generational distance (IGD), and diversity metric (DM).

Keywords: Reconfigurable manufacturing system · Sustainability · Process plan generation · Multi-objective optimization · New NSGA-III · Similarity coefficient

1 Introduction

Reconfigurable manufacturing system (RMS) is one of the latest manufacturing paradigms. It considers many aspects, such as unstable periodic market changes, economic globalization, mass customization, rapid technological advances, and social and environmental changes, in designing responsive manufacturing enterprises. It is a logical evolution of the two manufacturing systems already used in industry, respectively DMS (dedicated manufacturing system) and FMS (flexible manufacturing system). It is designed to combine the high flexibility of FMS with the high production ratio of DMS, thanks to its flexible structure and outline focus [5]. Furthermore, the RMS is designed with six main characteristics: modularity, integrability, convertibility, scalability, diagnosability, and customization. They overcome cases where both

productivity and responsiveness of the system to uncertainties or unpredictable cases such as machine failure, demand change, etc., are of critical importance [5]. For reconfigurable manufacturing process plans generation, [11] investigated the use of variants of the simulated annealing (SA)-based algorithms. [7] used an adapted version of the NSGA-II to demonstrate the concept of reconfigurability and energy-efficiency integration in the manufacturing environment. [10] studied a reconfigurable system compounded from a rotary table and a set of machining modules. The authors distinguish two main phases: system design and system processing. In the former phase, they focus mainly on productivity (i.e., cost and takt time), but in the later phase, they integrate energy costs. [2] developed three hybrid heuristics, namely: repetitive single-unit process plan heuristic (RSUPP), iterated local search on single-unit process plans heuristic (LSSUPP), and archive-based iterated local search heuristic (ABILS) to address the multi-unit process plan generation problem in RMS. Three objectives are minimized: the total production cost, total completion time, and maximum machines exploitation time. [9] proposed a non-linear mathematical model to reduce the total amount of energy consumption of the system. In this paper, two evolutionary approaches, respectively NSGA-III and New NSGA-III are proposed to solve an environmental-oriented multi-objective single-unit process plan generation problem in reconfigurable environment. Some experimental results are presented and analyzed using the computational times and the cardinalities of the mixed Pareto fronts to show the efficiency of the two approaches. The rest of the paper is organized as follows: Sect. 2 presents the problem under consideration and its mathematical formulation. Section 3 describes more in detail the proposed two approaches. Section 4 discusses the experimental results and analyzes them. Section 5 concludes the paper with some future work directions.

2 Problem Description and Mathematical Formulation

2.1 Problem Description

The problem under consideration concerns producing a specific product in a reconfigurable environment by selecting reconfigurable candidate machines that can perform the required operations. An operation OPi is characterized by three primary data: the precedence constraints, as shown in Fig. 1, the required tool approach directions (TADs) (i.e., x±, y±, z±), and the candidate tools. In our case, a reconfigurable candidate machine is represented by its potential sets of configurations and the required tools. Each candidate machine must be able to perform at least one operation from the set of operations, and the set of selected machines must satisfy the needs of all required operations and the precedence constraints. For each operation OPi, we define a set of triplets TOi given by the available machine-configuration-tool combinations (M, C, T). Table 1a and Table 1b show, respectively, the TADs and tools required by the operations of


our example, and the TADs, configurations, and tools that each machine can offer.

Fig. 1. An illustrative product schema and operation precedence graph

Table 1. Operations and machines requirements: (a) the TADs and tools required by operations Op1-Op8; (b) the configurations, TADs, and tools offered by machines M1-M5 (the individual table entries could not be recovered from the extracted text).

As a result, in our case, the process plan generation problem amounts to sequencing the operations to be performed by the selected reconfigurable machines (under machine configuration, tool, and precedence graph constraints) and selecting the triplet used to perform each operation in the sequence. Table 2 illustrates a simple generated process plan for our example.

Table 2. Illustrative structure of a process plan

Operation      OP1  OP2  OP6  OP3  OP8  OP4  OP5  OP7
Machine        M3   M5   M5   M5   M5   M1   M2   M4
Configuration  C1   C2   C2   C2   C2   C1   C1   C4
Tool           T4   T1   T2   T2   T1   T4   T2   T3


The process plan generation problem in a reconfigurable environment is known to be NP-hard [2]. In the following, the generated process plans should satisfy the manufacturing requirements with respect to both manufacturing and sustainability objectives. In our case, four objectives are minimized, respectively:

- The total production cost.
- The total production time.
- The hazardous liquid waste.
- The greenhouse gas (GHG) emissions during the total production process.

2.2 Mathematical Formulation

Parameters

n: Number of operations
OP: Set of operations
i: Index of operations
PR_i: Set of predecessors of operation OP_i
m: Number of machines
M: Set of machines
j, j': Index of machines
G: Set of greenhouse gases
g: Index of greenhouse gases
l_{i,t}: Required liquid for operation OP_i when using triplet t, per time unit
EP_{i,t}: Estimated hazardous liquid waste for operation OP_i when using triplet t
f_{ef}: Emission factor for electricity consumption
f_{i,g}: Greenhouse gas of type g emitted by operation OP_i, per time unit
t, t': Index of triplets
TO_i: Set of available triplets for operation OP_i
TM_j: Set of available triplets using machine M_j
T: Set of triplets, where T = TO_i ∪ TM_j
c, c': Index of configurations
tl, tl': Index of tools
p, p': Index of positions in the sequence
GWP_g: Global warming potential of emitted greenhouse gas type g

Cost Parameters

CCM_{j,j'}: Machine changeover cost per time unit
CCC_{c,c'}: Configuration changeover cost per time unit
CCT_{tl,tl'}: Tool changeover cost per time unit
Pc_{i,t}: Processing cost of operation OP_i when using triplet t, per time unit
DC_{GHG}: Disposal cost of the emitted greenhouse gases
DC_{LHW}: Disposal cost of the hazardous liquid waste


Time Parameters

TCM_{j,j'}: Machine changeover time
TCC_{c,c'}: Configuration changeover time
TCT_{tl,tl'}: Tool changeover time
Pt_{i,t}: Processing time of operation OP_i when using triplet t

Energy Parameters

ECM_{j,j'}: Machine changeover energy per time unit
ECC_{c,c'}: Configuration changeover energy per time unit
ECT_{tl,tl'}: Tool changeover energy
Pe_{i,t}: Processing energy of operation OP_i when using triplet t, per time unit
IEC_j: Initial energy consumption of machine M_j

Decision Variables
The following decision variables are used:

x^t_{i,p} = 1 if operation OP_i uses triplet t at the p-th position, 0 otherwise.
y^j_{p,t} = 1 if machine M_j uses triplet t at the p-th position, 0 otherwise.
MC_p^{p-1}(j, j') = 1 if there has been a change from machine M_j to machine M_{j'} between positions p-1 and p, 0 otherwise.
TC_p^{j,p-1}(t, t') = 1 if there has been a change from triplet t to triplet t' of machine M_j between positions p-1 and p, 0 otherwise.

Objective Functions
Our problem can be formulated as a non-linear multi-objective integer program (NL-MOIP), where four objectives are optimized, respectively f_c, f_t, f_{LHW} and f_{GHG}:

1. The total production cost f_c: Eq. (1) represents the total production cost to be minimized. It includes the following costs: machine changeover cost, configuration changeover cost, tool changeover cost, processing cost, emitted greenhouse gases cost and disposal cost of the emitted hazardous waste during the production.

f_c = \sum_{p=1}^{n} \sum_{i=1}^{n} \sum_{t \in TO_i} x^t_{i,p} \, Pc_{i,t} \, Pt_{i,t}
    + \sum_{p=2}^{n} \sum_{j=1}^{m} \sum_{j'=1}^{m} MC_p^{p-1}(j,j') \, CCM_{j,j'} \, TCM_{j,j'}
    + \sum_{p=2}^{n} \sum_{j=1}^{m} \sum_{t \in TM_j} \sum_{t' \in TM_j} TC_p^{j,p-1}(t,t') \, (CCT_{tl,tl'} \, TCT_{tl,tl'} + CCC_{c,c'} \, TCC_{c,c'})
    + (DC_{GHG} \, f_{GHG} + DC_{LHW} \, f_{LHW})            (1)


2. The total production time f_t: Eq. (2) calculates the total production time to be minimized. It includes the following times: machine changeover time, configuration changeover time, tool changeover time and processing time.

f_t = \sum_{p=1}^{n} \sum_{i=1}^{n} \sum_{t \in TO_i} x^t_{i,p} \, Pt_{i,t}
    + \sum_{p=2}^{n} \sum_{j=1}^{m} \sum_{j'=1}^{m} MC_p^{p-1}(j,j') \, TCM_{j,j'}
    + \sum_{p=2}^{n} \sum_{j=1}^{m} \sum_{t \in TM_j} \sum_{t' \in TM_j} TC_p^{j,p-1}(t,t') \, (TCC_{c,c'} + TCT_{tl,tl'})            (2)
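To make Eq. (2) concrete, the short Python sketch below evaluates the total production time of a decoded process plan. It is our illustration, not the authors' implementation: the dictionaries Pt, TCM, TCC and TCT are assumed placeholder inputs, and the treatment of the changeover terms reflects one plausible reading of the equation (machine change, otherwise configuration/tool change on the same machine).

def total_production_time(plan, Pt, TCM, TCC, TCT):
    """plan: ordered list of (operation, machine, configuration, tool) tuples."""
    # processing time of every position in the sequence
    time = sum(Pt[(op, m, c, tl)] for (op, m, c, tl) in plan)
    # changeover times between consecutive positions
    for (_, m1, c1, t1), (_, m2, c2, t2) in zip(plan, plan[1:]):
        if m1 != m2:
            time += TCM[(m1, m2)]          # machine changeover
        else:
            if c1 != c2:
                time += TCC[(c1, c2)]      # configuration changeover
            if t1 != t2:
                time += TCT[(t1, t2)]      # tool changeover
    return time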

3. The amount of hazardous liquid waste f_{LHW}: Eq. (3) defines the amount of hazardous liquid waste to be minimized. It comprises the hazardous liquid waste generated during the processing of the operations, including waste oils/water, hydrocarbons/water mixtures and emulsions; wastes from the production, formulation, and use of resins, latex, plasticizers, glues/adhesives; wastes resulting from surface treatment of metals and plastics; and residues arising from industrial waste disposal operations.

f_{LHW} = \sum_{p=1}^{n} \sum_{i=1}^{n} \sum_{t \in TO_i} x^t_{i,p} \, l_{i,t} \, Pt_{i,t} \, EP_{i,t}            (3)

4. The amount of greenhouse gases emitted f_{GHG}: Eq. (4) defines the amount of greenhouse gases emitted during the manufacturing process, to be minimized. It is composed of two parts. The first considers the energy consumption, taking into account the emission factor for consumed electricity. The second considers the emitted gases, taking into account the global warming potential (GWP) factor. In this research work, the GWP factor converts emissions of the other greenhouse gases into CO2 equivalents.

f_{GHG} = f_{ef} \, f_{EC} + \sum_{p=1}^{n} \sum_{i=1}^{n} \sum_{t \in TO_i} \sum_{g \in G} x^t_{i,p} \, Pt_{i,t} \, f_{i,g} \, GWP_g            (4)

Equation (5) describes in more detail how the total energy consumption during production, f_{EC}, is computed.

f_{EC} = \sum_{p=1}^{n} \sum_{i=1}^{n} \sum_{j=1}^{m} \sum_{t \in TO_i} y^j_{p,t} \, x^t_{i,p} \, IEC_j
       + \sum_{p=1}^{n} \sum_{i=1}^{n} \sum_{t \in TO_i} x^t_{i,p} \, Pe_{i,t} \, Pt_{i,t}
       + \sum_{p=2}^{n} \sum_{j=1}^{m} \sum_{j'=1}^{m} MC_p^{p-1}(j,j') \, ECM_{j,j'} \, TCM_{j,j'}
       + \sum_{p=2}^{n} \sum_{j=1}^{m} \sum_{t \in TM_j} \sum_{t' \in TM_j} TC_p^{j,p-1}(t,t') \, (TCT_{tl,tl'} \, ECT_{tl,tl'} + TCC_{c,c'} \, ECC_{c,c'})            (5)


As can be noticed, f_{EC} is a non-linear function. It can be converted to a linear form by introducing an auxiliary binary variable z = y^j_{p,t} \, x^t_{i,p} together with the constraints:

z \le x^t_{i,p}, \quad z \le y^j_{p,t}, \quad z \ge y^j_{p,t} + x^t_{i,p} - 1, \quad z \in \{0, 1\}.
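As a quick sanity check of this standard linearization (our illustration, not part of the paper's model), the following Python snippet enumerates all binary assignments and confirms that the three inequalities force z to equal the product of the two binaries.

from itertools import product

# For every binary pair (x, y), the only feasible z is exactly x*y
for x, y in product((0, 1), repeat=2):
    feasible_z = [z for z in (0, 1) if z <= x and z <= y and z >= x + y - 1]
    assert feasible_z == [x * y], (x, y, feasible_z)
print("linearization is exact on binaries")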

A complete description of the nine constraints associated with our problem is depicted in [1].

3 Proposed Evolutionary Approaches

In this section, we describe the two adapted evolutionary approaches, namely NSGA-III and New NSGA-III. For a detailed description of the considered coded process plan as well as of the two genetic operators, namely crossover and mutation, refer to [6].

3.1 Non-dominated Sorting Genetic Algorithm III (NSGA-III)

Non-dominated sorting genetic algorithm III (NSGA-III) is an evolutionary multi-objective optimization algorithm developed by [3] as an extension of NSGA-II. Contrary to NSGA-II, NSGA-III uses a niche selection technique based on a set of reference points, defined beforehand, in order to maintain population diversity.
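The reference points used by NSGA-III are usually the structured points of Das and Dennis on the unit simplex. The sketch below is our illustration of how such points can be generated (the function name and the choice of six divisions are ours, not from the paper).

def reference_points(n_obj, divisions):
    """Structured reference points on the unit simplex (Das-Dennis scheme)."""
    points = []

    def recurse(prefix, left, depth):
        if depth == n_obj - 1:
            points.append(prefix + [left / divisions])
            return
        for i in range(left + 1):
            recurse(prefix + [i / divisions], left - i, depth + 1)

    recurse([], divisions, 0)
    return points

# Example: 4 objectives (as in this paper) with 6 divisions gives C(9, 3) = 84 points
print(len(reference_points(4, 6)))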

3.2 New Non-dominated Sorting Genetic Algorithm III (New NSGA-III)

Based on the similarity coefficient (SC), we developed a new version of NSGA-III called New NSGA-III. A study by [4] concludes that applying mutation is preferable only when the obtained average similarity coefficient (ASC) exceeds a given threshold. The SC between each pair of chromosomes is computed with Eq. (6):

SC_{ab} = \frac{\sum_{i=1}^{n} \partial(X_{ia}, X_{ib})}{2n}            (6)

We represent a chromosome as a matrix of two rows and n columns. The first row represents a coded version of the indices of operations, and the second row represents the indices of triplets.

- X_a and X_b are the vectors associated with chromosomes a and b, where, for each chromosome, the first and second rows are linked to form one row.
- \partial(\alpha, \beta) is the similarity between the decimal part of every gene of vectors X_a and X_b. It is expressed using Eq. (7):

\partial(\alpha, \beta) = 1 if \alpha = \beta, and 0 otherwise.            (7)


The ASC of the population is calculated as follows:

\overline{SC} = \frac{\sum_{a=1}^{Npop-1} \sum_{b=a+1}^{Npop} SC_{ab}}{\binom{Npop}{2}}            (8)

where Npop is the population size. Finally, considering a predefined threshold similarity coefficient θ and the obtained ASC, the mutation and crossover operators are automatically incorporated into the New NSGA-III loop as follows: if the ASC is greater than θ, apply 10% crossover and 90% mutation; otherwise, apply 100% crossover and 0% mutation. To better understand how the SC is computed, we can use the example of Fig. 2.

Fig. 2. An illustrative example of the similarity coefficient computation

In this example, the chromosomes a and b have an SC equal to:

SC_{ab} = \frac{1+0+0+0+0+0+0+1+0+0+0+0+0+0+0+1}{16} = \frac{3}{16}

Using the ASC in this way, New NSGA-III aims to improve convergence and diversity in many-objective optimization scenarios; at each generation the crossover and mutation rates are switched according to the threshold rule stated above.
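The Python sketch below makes the similarity computation and the adaptive switch concrete. It is our illustration: it follows the worked example of Fig. 2 (the two rows of each chromosome are concatenated and the number of matching genes is divided by 2n), and the function names are ours.

from itertools import combinations

def similarity(chrom_a, chrom_b):
    """Chromosomes are 2 x n matrices; rows are concatenated as in Fig. 2."""
    xa = list(chrom_a[0]) + list(chrom_a[1])
    xb = list(chrom_b[0]) + list(chrom_b[1])
    matches = sum(1 for ga, gb in zip(xa, xb) if ga == gb)
    return matches / len(xa)              # divide by 2n

def average_similarity(population):
    pairs = list(combinations(population, 2))
    return sum(similarity(a, b) for a, b in pairs) / len(pairs)

def operator_rates(asc, theta=0.1):
    """Adaptive rule of New NSGA-III: returns (crossover rate, mutation rate)."""
    return (0.10, 0.90) if asc > theta else (1.00, 0.00)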


The following algorithm presents the main steps of New NSGA-III (in the original version, the blue part highlights the difference with NSGA-II and the red part the difference with NSGA-III).

New NSGA-III Algorithm
1: input data
2: initialize Population Size, Number of iterations, Threshold Similarity Coefficient θ
3: randomize parentPopulation
4: for iter = 1 : Number of iterations do
5:   generate childPopulation from parentPopulation using:
6:   if SC > θ then
7:     apply 10% crossover and 90% mutation
8:   otherwise
9:     apply 100% crossover and 0% mutation
10:  population = parentPopulation ∪ childPopulation
11:  for l = 1 : size(F) do
12:    if size(newPopulation) + size(F_l) < populationSize then
13:      newPopulation += F_l
14:    else
15:      normalize objectives and create reference set Z^r
16:      associate each individual s with the reference set: [π(s), d(s)] = Associate(F_l, Z^r)  % π(s) = closest reference point, d(s) = distance between s and π(s)
17:      apply niche count of reference points: for j ∈ Z^r, ρ_j = Σ_{s∈F_l} ((π(s) = j) ? 1 : 0)
18:      select individuals one at a time from F_l: Niching(K, ρ_j, π, d, Z^r, F_l)
19:      for k = 1 : size(F_l) do
20:        if size(newPopulation) < populationSize then
21:          newPopulation += F_lk
22:        else
23:          break
24:        end if
25:      end for
26:    end if
27:  end for
28:  parentPopulation = newPopulation
29: end for
30: return parentPopulation
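Lines 16 and 17 of the listing can be illustrated with the following numpy sketch (ours, not the authors' code), which uses the usual NSGA-III convention of associating each normalized objective vector with the reference direction of smallest perpendicular distance and then counting the members attached to each reference point.

import numpy as np

def associate(F, Z):
    """F: (N, M) normalized objectives; Z: (R, M) reference points.
    Returns the closest reference index pi(s) and its distance d(s) for each individual."""
    F = np.asarray(F, dtype=float)
    Z = np.asarray(Z, dtype=float)
    Zn = Z / np.linalg.norm(Z, axis=1, keepdims=True)        # unit reference directions
    proj = F @ Zn.T                                          # scalar projections, shape (N, R)
    # perpendicular distance of each point to each reference line
    dist = np.sqrt(np.maximum(np.sum(F**2, axis=1, keepdims=True) - proj**2, 0.0))
    pi = dist.argmin(axis=1)
    d = dist[np.arange(len(F)), pi]
    return pi, d

def niche_count(pi, n_refs):
    rho = np.zeros(n_refs, dtype=int)
    for j in pi:
        rho[j] += 1
    return rho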

4 Experimental Results and Analyses

The experiments were performed in a Java environment on an Intel Core i7, 4.0 GHz, with 16 GB RAM. In the single-unit scenario, an instance is defined by the number of required operations and the number of reconfigurable candidate machines (nbOperations-nbMachines). To analyze the Pareto solutions, four metrics are used: the cardinality of the Pareto front (CPF), which counts the number of non-dominated solutions generated by the algorithm; the cardinality of the mixed Pareto fronts (CMPF), which evaluates the contribution of each algorithm to the mixed front obtained when several Pareto fronts (generated by different MOEAs) are merged into one front; the diversity metric (DM); and the inverted generational distance (IGD) [8].
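IGD is commonly computed as the average Euclidean distance from each point of a reference front to its nearest point in the obtained front. The numpy sketch below shows one common variant of this metric; it is our illustration and not necessarily the exact formula used by the authors.

import numpy as np

def igd(reference_front, obtained_front):
    """Average distance from each reference point to the closest obtained solution."""
    R = np.asarray(reference_front, dtype=float)
    A = np.asarray(obtained_front, dtype=float)
    # pairwise Euclidean distances, shape (len(R), len(A))
    d = np.linalg.norm(R[:, None, :] - A[None, :, :], axis=2)
    return d.min(axis=1).mean()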


A list of the basic parameters used by NSGA-III and New NSGA-III is given in Table 3. Figure 3 shows the CPU calculation time (in seconds) of NSGA-III vs New NSGA-III.

Table 3. The basic parameter settings

Parameters                 NSGA-III   New NSGA-III
Population size            40         40
Number of iterations       1000       1000
Pcrossover                 10%
Pmutation                  90%
Perturbation ratio         30%
Similarity coefficient θ   -          0.1

Fig. 3. CPU time of NSGA-III vs New NSGA-III

Table 4 presents the obtained cardinality of the Pareto front for each instance, and Table 5 shows a comparison of the Pareto fronts obtained when the two fronts of NSGA-III and New NSGA-III are mixed into one new Pareto front. Moreover, Table 6 presents the obtained DM values and the IGD metric values.

Table 4. Cardinality of the Pareto fronts of NSGA-III vs New NSGA-III

Instance    CPF NSGA-III   CPF New NSGA-III
7-5         4              7
12-4        13             14
13-6        9              10
35-15       21             22
50-20       13             16
100-20      13             18
100-20bis   37             16

Table 5. Comparisons of the performances of NSGA-III vs New NSGA-III (combination of the Pareto fronts of NSGA-III and New NSGA-III)

Instance    CMPF   # Pareto front of NSGA-III   # Pareto front of New NSGA-III   # Pareto front in common
7-5         8      1                            4                                3
12-4        17     8                            9                                0
13-6        13     4                            5                                4
35-15       31     10                           21                               0
50-20       18     2                            16                               0
100-20      18     0                            18                               0
100-20bis   29     13                           16                               0

Table 6. IGD and Diversity Metric of NSGA-III and New NSGA-III

Instance    IGD NSGA-III   IGD New NSGA-III   DM NSGA-III   DM New NSGA-III
7-5         83.29          132.25             0.89          0.99
12-4        114.42         98.06              0.79          1.13
13-6        36.05          45.01              1.16          1.32
35-15       761            700.03             0.76          0.80
50-20       230.36         319.06             0.54          0.78
100-20      594.32         474.01             0.89          0.91
100-20bis   549.28         616.89             0.95          0.59

Four main observations can be drawn from the above numerical results:

• Observation 1: From Fig. 3, we observe that NSGA-III has a better computational time.
• Observation 2: From Table 4, we can see that New NSGA-III obtains more Pareto solutions.
• Observation 3: From Table 5, we observe that New NSGA-III completely dominates NSGA-III.
• Observation 4: From Table 6, we can conclude that New NSGA-III has a great advantage in promoting diversity for the larger instances.

5 Conclusions and Future Work Directions

In this paper, we addressed the problem of multi-objective process plan generation within a reconfigurable manufacturing environment. We adapted two evolutionary approaches, respectively, NSGA-III and New NSGA-III. To show the


efficiency of both approaches, several experiments were carried out, and the obtained results were analyzed using three metrics, respectively: the diversity metric, the inverted generational distance, and the cardinality of the mixed Pareto fronts. In the near future, further sensitivity analyses of New NSGA-III will be performed to determine how the similarity coefficient (SC) influences its convergence. Furthermore, in addition to reducing the traditional total production cost and completion time, minimizing the maximum machine exploitation time can be considered as a novel optimization objective for high-quality products. Finally, other evolutionary-based approaches, such as AMOSA, MOPSO, etc., can be adapted and compared.

References

1. Khezri, A., Haddou Benderbal, H., Benyoucef, L.: Towards a sustainable reconfigurable manufacturing system (SRMS): multi-objective based approaches for process plan generation problem. Int. J. Prod. Res. 59(15), 4533–4558 (2021)
2. Touzout, F.A., Benyoucef, L.: Multi-objective multi-unit process plan generation in a reconfigurable manufacturing environment: a comparative study of three hybrid metaheuristics. Int. J. Prod. Res. 57(24), 7520–7535 (2019)
3. Deb, K., Jain, H.: An evolutionary many-objective optimization algorithm using reference-point-based non-dominated sorting approach, part I: solving problems with box constraints. IEEE Trans. Evol. Comput. 18, 577–601 (2014)
4. Smullen, D., Gillett, J., Heron, J., Rahnamayan, S.: Genetic algorithm with self-adaptive mutation controlled by chromosome similarity. In: IEEE Congress on Evolutionary Computation (CEC), pp. 504–511 (2014)
5. Koren, Y.: The Global Manufacturing Revolution: Product-Process-Business Integration and Reconfigurable Systems, vol. 80. Wiley, Hoboken (2010)
6. Khettabi, I., Benyoucef, L., Boutiche, M.A.: Sustainable reconfigurable manufacturing system design using adapted multi-objective evolutionary-based approaches. Int. J. Adv. Manuf. Technol. 115, 1–19 (2021)
7. Zhang, H., Zhao, F., Sutherland, J.W.: Energy-efficient scheduling of multiple manufacturing factories under real-time electricity pricing. CIRP Ann. 64(1), 41–44 (2015)
8. Sierra, M.R., Coello, C.A.C.: Improving PSO-based multi-objective optimization using crowding, mutation and ε-dominance. In: International Conference on Evolutionary Multi-Criterion Optimization, pp. 505–519 (2005)
9. Massimi, E., Khezri, A., Haddou Benderbal, H., Benyoucef, L.: A heuristic-based non-linear mixed-integer approach for optimizing modularity and integrability in a sustainable reconfigurable manufacturing environment. Int. J. Adv. Manuf. Technol. 108, 1997–2020 (2020)
10. Liu, M., An, L., Zhang, J., Chu, F., Chu, C.: Energy-oriented bi-objective optimisation for a multi-module reconfigurable manufacturing system. Int. J. Prod. Res. 57(19), 5974–5995 (2019)
11. Musharavati, F., Hamouda, A.: Enhanced simulated-annealing-based algorithms and their applications to process planning in reconfigurable manufacturing systems. Adv. Eng. Softw. 45(1), 80–90 (2012)

Clarke Subdifferential, Pareto-Clarke Critical Points and Descent Directions to Multiobjective Optimization on Hadamard Manifolds

Erik Alex Papa Quiroz1,2(B), Nancy Baygorrea3,4, and Nelson Maculan4

1 Universidad Nacional Mayor de San Marcos, 15081 Lima, Peru
2 Universidad Privada del Norte, Trujillo, Peru
[email protected], [email protected]
3 Center of Mineral Technology, Rio de Janeiro 21941-908, Brazil
[email protected]
4 Federal University of Rio de Janeiro, Rio de Janeiro 21941-901, Brazil
[email protected]

Abstract. In this paper, we aim to complement our work reported in [20] by showing some further properties and results on Clarke subdifferential, Pareto-Clarke critical points and descent directions on Hadamard manifolds. These tools and results can be applied to introduce new algorithms for solving nonsmooth nonconvex multiobjective minimization problems on Hadamard manifolds. Keywords: Pareto-Clarke critical point · Descent direction · Hadamard manifolds · Multiobjective optimization · Pareto optimality

1 Introduction

Tools of convex analysis are very important in optimization, from both a theoretical and a practical point of view, because these elements give the theoretical support to obtain properties and prove the convergence of iterative algorithms. There exists a broad literature on convex analysis in linear vectorial (Euclidean, Hilbert, Banach) spaces, see for example Rockafellar [23], Rockafellar and Wets [24], Boyd and Vandenberghe [7], Bertsekas et al. [6], Hiriart-Urruty and Lemaréchal [14,15], Pardalos [21] and Van Tiel [29]. On the other hand, extensions of results and properties of convex analysis from linear spaces to Riemannian manifolds are natural and have advantages in cases where a linear approach can hardly be used. In fact, there are reasons why convex analysis on Riemannian manifolds has been intensively studied in recent years. More precisely, by choosing an appropriate Riemannian metric on the manifold, it is possible to transform nonconvex minimization problems (non-monotone operators) in linear vectorial spaces into convex minimization problems (monotone vector fields) on Riemannian manifolds, as well as to convert nonconvex sets into


convex ones. See, for instance, Udriste [28], Smith [27], Absil et al. [1,2], Roy [25] and Gabay [11]. In this paper, motivated by the introduction of algorithms to solve multiobjective minimization problems on Riemannian manifolds, in particular proximal point algorithms, we study some extensions from Euclidean spaces to Hadamard manifolds of three concepts of convex analysis which are very important to construct optimization algorithms on these manifolds: the Clarke subdifferential, Pareto-Clarke critical points and descent directions. Observe that Hadamard manifolds are complete simply connected Riemannian manifolds with non-positive sectional curvature, and these manifolds are natural spaces in which to develop proximal point methods. The Clarke subdifferential on Riemannian manifolds was introduced by Motreanu and Pavel [17] using the concept of charts. Then, this subdifferential was used in smooth analysis and Hamilton-Jacobi equations by Azagra et al. [3]; these authors also introduced another, equivalent definition of the Clarke subdifferential. After that, several properties of the Clarke subdifferential were obtained by Hosseini and Pouryayevali, see [12,13]. In the context of proximal point methods for solving optimization problems on Hadamard manifolds, Bento et al. [5] and Papa Quiroz and Oliveira [18,19] used another version of the Clarke subdifferential, introduced by [5], to prove the convergence of their algorithms, and in [4] this subdifferential was studied for solving quasiconvex minimization problems. In this context, a natural question arises: are the three Clarke subdifferential definitions equivalent? On the other hand, in linear vectorial spaces the concepts of Pareto-Clarke critical point and descent direction are closely related for locally Lipschitz functions. In fact, it can be proved that if a point is not a Pareto-Clarke critical point of a multivalued function then there exists a descent direction. Another result is that if a point is a Pareto-Clarke critical point of a multivalued convex function, then this point is a weak Pareto solution of that function. Are these results true on Hadamard manifolds? In this paper we answer the above questions. In fact, we prove that the three versions of the concept of Clarke subdifferential, introduced by Motreanu and Pavel [17], Azagra et al. [3] and Bento et al. [5], are equivalent on Hadamard manifolds. Also, we introduce the concepts of Pareto efficient solution and weak Pareto efficient solution of a multivalued function in the Riemannian context, establishing an inclusion relation. Then, we introduce the concepts of Pareto-Clarke critical point and descent direction on Hadamard manifolds, obtaining that if a point is not a Pareto-Clarke critical point of a multivalued locally Lipschitz function then there exists a descent direction in the tangent space of the manifold. Finally, we prove that if a multivalued function is convex, then the concept of Pareto-Clarke critical point coincides with that of weak Pareto efficient solution. The paper is organized as follows: Sect. 2 presents the preliminaries on Riemannian manifolds. Section 3 presents the concepts of Fréchet and Clarke subdifferential, and we prove that the three versions of the concept of Clarke subdifferential are equivalent. Section 4 introduces the concepts of Pareto efficient solutions and weak Pareto efficient solutions; then we introduce the concepts of Pareto-Clarke critical point for a multivalued function and descent direction on Hadamard


manifolds, and we prove that if a point is not a Pareto-Clarke critical point of a multivalued locally Lipschitz function then there exists a descent direction and, moreover, that if the multivalued function is convex then all Pareto-Clarke critical points are weak Pareto efficient solutions of the multiobjective minimization problem.

2 Preliminaries

In this section we recall some fundamental properties and notations on Riemannian manifolds, see, e.g. do Carmo [10], Sakai [26], Udriste [28], Rapcs´ak [22] and references therein. Let M be a differential manifold with finite dimension n. We denote by Tx M the tangent space of M at x which is defined by Tx M = {v | ∃ γ : (−ε, ε) → M, γ(0) = x, γ  (0) = v}. Because we restrict ourselves to real manifolds, Tx M is isomorphic to IRn . For each x ∈ M there also exists the dual space Tx∗ M of Tx M , called the cotangent space at x ∈ M . A Riemannian metric g on M is a smooth family of inner products on the tangent spaces of M . Namely, g associates to each x ∈ M a positive definite symmetric bilinear form on Tp M , gx : Tx M × Tx M → IR, and the smoothness condition on g refers to the fact that the function x ∈ M → gx (X, Y ) ∈ IR must be smooth for every locally defined smooth vector fields X, Y in M . The manifold M together with the Riemannian metric g is called a Riemannian manifold. Let (M, g) be a Riemannian manifold. For each x ∈ M , the Riemannian metric induces an isomorphism (in the present paper, any isomorphism will be denoted by ∼ =) between Tx M and Tx∗ M, by w = gx (v, ·) and w, ux = gx (v, u) for all u, v ∈ Tx M and w ∈ Tx∗ M . Then we define norm on Tx M by vx = gx (v, v). Later on, we will often simplify the notation and refer to M as a Riemannian manifold where the Riemannian metric is implicit. Metrics can be used to define the length of a piecewise smooth curve α : [t0 , t1 ] → M t joining α(t0 ) = p to α(t1 ) = p through L(α) = t01 α (t)α(t) dt. Minimizing this length functional over the set of all curves we obtain a Riemannian distance d(p , p) which induces the original topology on M . Given two vector fields X and Y on M , the covariant derivative of Y in the direction X is denoted by ∇X Y . In this paper ∇ is the Levi-Civita connection associated to (M, g). This connection defines an unique covariant derivative D/dt, where, for each vector field X, along a smooth curve α : [t0 , t1 ] → M , another vector field is obtained, denoted by DX/dt. The parallel transport along α from α(t0 ) to α(t1 ), denoted by Pα,t0 ,t1 , is an application Pα,t0 ,t1 : Tα(t0 ) M → Tα(t1 ) M defined by Pα,t0 ,t1 (v) = X(t1 ) where X is the unique vector field along α so that DX/dt = 0 and X(t0 ) = v. Since ∇ is a Riemannian connection, Pα,t0 ,t1 −1 is a linear isometry, furthermore Pα,t = Pα,t1 ,t0 and Pα,t0 ,t1 = Pα,t,t1 Pα,t0 ,t , 0 ,t1 for all t ∈ [t0 , t1 ]. A curve γ : I → M, where I is a certain open interval in IR, is called a geodesic if Dγ  /dt = 0. A Riemannian manifold is complete if its geodesics are defined for any value of t ∈ IR. Let x ∈ M , the exponential map expx : Tx M → M is defined expx (v) = γ(1), where γ is the geodesic starting at x and having velocity v ∈ Tx M for each x ∈ M . If M is complete, then expx is


defined for all v ∈ Tx M. Besides, there is a minimal geodesic (its length is equal to the distance between the extremes). The set Ω ⊂ M is said to be convex if any geodesic segment with end points in Ω is contained in Ω. Let Ω ⊂ M be an open convex set. A function f : M → IR is said to be convex on Ω if for any geodesic segment γ : [a, b] → Ω, the composition f ◦γ : [a, b] → R is convex. It is called quasiconvex if for all x, y ∈ Ω and t ∈ [0, 1], it holds that f (γ(t)) ≤ max{f (x), f (y)}, for the geodesic γ : [0, 1] → IR, so that γ(0) = x and γ(1) = y. A function f : M −→ IR ∪ {+∞} is said to be lower ¯, semicontinuous at a point x ¯ ∈ M if, for any sequence {xk } convergent to x k x). The function f is lower semicontinuous on we obtain lim inf k→∞ f (x ) ≥ f (¯ M if it is lower semicontinuous at each point in M . A function f defined on Riemannian manifold M is a Lipschitz function on a given subset S of M with constant L > 0, if |f (x) − f (y)| ≤ Ld(x, y) for every x, y ∈ S. A function f is said to be locally Lipschitz on M if f is locally Lipschitz at x, for every x ∈ M .

3 Fréchet and Clarke Subdifferential

In this section we give some definitions of nonsmooth analysis on Hadamard manifolds; see Hosseini and Pouryayevali [13], Azagra et al. [3], Ledyaev and Zhu [16].

Definition 1. Let M be a Hadamard manifold and f : M → IR ∪ {+∞} be a lower semicontinuous function with dom(f) ≠ ∅. The Fréchet subdifferential of f at a point x ∈ dom(f), denoted by ∂^F f(x), is defined as the set of all s ∈ T_x M with the property that

\liminf_{u \to x,\, u \neq x} \frac{1}{d(x,u)} \left[ f(u) - f(x) - \langle s, \exp_x^{-1} u \rangle_x \right] \ge 0.            (1)

If x ∈ / dom(f ) then we define ∂ F f (x) = ∅. Clearly, s ∈ ∂ F f (x) iff for each η > 0, there is  > 0 such that s, exp−1 x ux ≤ f (u) − f (x) + ηd(x, u),

for all u ∈ B(x, ).

Observe that the above definition is equivalent to the D− f (·) subdiferential given in Definition 4.1 of Azagra et al. [3]. In fact, in that definition we have   D− f (x) = dφx : φ ∈ C 1 (M, IR), f − φ has a local minimum at x , and the authors proved in [3, Theorem 4.3] that this definition is equivalent to the following: For each chart h : U ⊂ M → IRn with x ∈ U, if ζ = s ◦ d(h−1 )h(x) the following inequality is satisfied: lim inf v→0

 1  (f oh−1 )(h(x) + v) − f (x) − ζ, v ≥ 0. ||v||


Now, considering the chart h(·) = exp_x^{-1}(·) and using the fact that d(exp_x)_0 is the identity, in the above inequality we obtain

\liminf_{v \to 0} \frac{1}{\|v\|} \left[ f(\exp_x v) - f(x) - \langle s, v \rangle_x \right] \ge 0.            (2)

Finally, doing a change of variables v = exp−1 x y we have that ||v|| → 0 iff y → x, then we obtain (1). Observe also that in view of relation (2) and the identification between Tx M and T0 Tx M , we have that ∂ F f (x) = ∂ F (f ◦ expx )(0x ). Lemma 1. Let f, h : M → IR ∪ {+∞} be proper functions such that f is lower semicontinuous at x ¯ ∈ M and h is locally Lipschitz at x ¯ ∈ M . Then ∂ F (f + h)(¯ x) = ∂ F f (¯ x) + ∂ F h(¯ x). Proof. It is trivial because ∂ F f (x) = ∂ F (f ◦ expx )(0x ) and using the result in linear space.   Next, we present definitions of Clarke’s generalized directional derivative and subdifferential introduced by Bento et al. [5]. Definition 2. Let f : M → IR ∪ {+∞} be a locally Lipschitz proper function at x ∈ dom(f ) and d ∈ Tx M . The Clarke’s generalized directional derivate of f at x ∈ M in the direction d ∈ Tx M and denoted by f ◦ (x, d), is defined as f ◦ (x, d) := lim sup

f (expu t(Dexpx )exp−1 d) − f (u) x u

u→x t 0

t

,

where (Dexpx )exp−1 : Texp−1 (Tx M ) ∼ = Tx M → Tu M is the differential of x u x u −1 exponential mapping at expx u. The Clarke’s generalized subdifferential of f at x ∈ M , is the subset ∂ ◦ f (x) of Tx∗ M ∼ = Tx M defined by ∂ ◦ f (x) := {s ∈ Tx M | s, dx ≤ f ◦ (x, d), ∀ d ∈ Tx M }. Proposition 1. Let f : M → IR ∪ {+∞} be a locally Lipschitz proper function at x ∈ dom(f ), then f ◦ (x, v) = (f ◦ expx )◦ (0, v), for all v ∈ Tx M and thus ∂ ◦ f (x) = ∂ ◦ (f ◦ expx )(0x ). d, then Proof. Let w = (Dexpx )exp−1 x u f ◦ (x, v) = lim sup u→x t0

−1 −1 −1 (f ◦ expx )(exp−1 x u + expx (expu (tw)) − expx u) − (f ◦ expx )(expx u) t

−1 −1 Denoting p = exp−1 x u and λp (t) = expx (expu (tw)) − expx u we obtain

f ◦ (x, v) = lim sup p→0 t 0

(f ◦ expx )(p + λp (t)) − (f ◦ expx )(p) t


Now, given  > 0, for all p and t > 0 such that ||p|| + t < , we have    (f ◦ expx )(p + λp (t)) − (f ◦ expx )(p) (f ◦ expx )(p + tv) − (f ◦ expx )(p)    −   t t    (f ◦ expx )(p + λp (t)) − (f ◦ expx )(p + tv)    ≤ K λp (t) − tv , =  t t for some K > 0, where the last inequality is due that the function f ◦ expx is locally Lipschitz. From the first order Taylor expression we have that λp (t) = λp (0) + λp (0)t + o(t), where limt−→0 o(t) t = 0. As λ(0) = 0, and using the Gauss’s lemma (see [10, Lemma 3.5]) which states that expx is a radial isometry, we obtain λp (0) = −1 (v) = v. Therefore D(exp−1 x )u (w) = D(expx ◦ expx )exp−1 x u λp (t) − tv ||o(t)|| = t t Thus, for all p and t > 0 such that ||p|| + t < , we have    (f ◦ expx )(p + λp (t)) − (f ◦ expx )(p) (f ◦ expx )(p + tv) − (f ◦ expx )(p)    −   t t ||o(t)|| ≤K . t Taking supremum at (p, t) such that ||p|| + t <  we obtain sup ||p||+t 1 t



(15)

r

i

g

(14)

j

Crude oil production constraints

\sum_{g} VC^C_{pwgt} = \sum_{i} XC^C_{pwit} + \sum_{r} XRC^C_{pwrt} \quad \forall p \in P, \forall w \in W, \forall t \in T            (13)

\sum_{g} BWG^C_{wgt} = 1 \quad \forall w \in W, \forall t \in T            (16), (17)

Refinery Sector Constraints

Refinery demand satisfaction

DRR_{est} = \sum_{r} XRM^R_{rest} \quad \forall e \in E, \forall s \in S, \forall t \in T            (18)

Inbound flow of crude to produce refinery products

\sum_{x} VRR^R_{erxt} = \gamma_{rpe} \cdot \sum_{w} XRR_{pwrt} \quad \forall r \in R, \forall p \in P, \forall e \in E, \forall t \in T            (19)

Outbound flow constraints from the refinery

\sum_{x} VRR^R_{erxt} \ge \sum_{s} XRM^R_{erst} + \sum_{h} XRH^R_{erht} \quad \forall e \in E, \forall r \in R, \forall t \in T            (20)



Max and Min capacity in the refinery

\sum_{e,s} VRR^R_{est} \le Cap^R_{rt} \cdot BRR_r \quad \forall r \in R, \forall t \in T            (21)

\sum_{e} VRR^R_{est} \ge Cap^{R-Min}_{rt} \cdot BRR_r \quad \forall r \in R, \forall t \in T            (22)

Logical constraints for the refinery

BRG^R_{rxt} = BRG^R_{rxt-1} \quad \forall r \in R, \forall x \in X, \forall t > 1            (23)

\sum_{x} BRG^R_{rxt} = 1 \quad \forall r \in R, \forall t \in T            (24)

Petrochemical Sector Constraints

Petrochemical demand satisfaction

\sum_{k} XHM^H_{nkzt} = DH^H_{nzt} \quad \forall n \in N, \forall z \in Z, \forall t \in T            (25)

Flow of petrochemicals

\sum_{q} VH^H_{nhqt} = \sum_{k} XH^H_{nhkt} \quad \forall n \in N, \forall h \in H, \forall t \in T            (26)

Outbound flow from petrochemical plants

SH^h_{nk1} = \sum_{h} XH^H_{nhk1} - \sum_{z} XHM^H_{nkz1} \quad \forall n \in N, \forall k \in K            (27)

Inventory balance for petrochemicals at storage tanks

SH^h_{nkt} = SH^h_{nkt-1} + \sum_{h} XH^H_{nhkt} - \sum_{z} XHM^H_{nkzt} \quad \forall n \in N, \forall k \in K, \forall t > 1            (28)

Inventory balance of petrochemicals at tanks for period 1

SH^h_{nk1} = \sum_{h} XH^H_{nhk1} - \sum_{z} XHM^H_{nkz1} \quad \forall n \in N, \forall k \in K            (29)

Max capacity of petrochemical plants

\sum_{n,q} VH^H_{nhzt} \le Cap^h_{ht} \quad \forall h \in H, \forall t \in T            (30)

Min capacity of petrochemical plants

Cap^{H-Min}_{ht} \le \sum_{n,q} VH^h_{nhzt} \quad \forall h \in H, \forall t \in T            (31)

Petrochemicals production constraints

\sum_{q} VH^H_{nhqt} \ge \sum_{k} XH^H_{nhkt} \quad \forall n \in N, \forall h \in H, \forall t \in T            (32)

Logical constraints for petrochemicals

\sum_{q,t} BHG^H_{hqt} \ge BHG^H_{hqt-1} \quad \forall h \in H, \forall q \in Q, \forall t > 1            (33)

\sum_{q} BHG^H_{hqt} = 1 \quad \forall h \in H, \forall t \in T            (34)






4.3 Solution Method The ε-constraint method is considered a solution procedure in this paper because the decision-maker does not need to articulate a prior preference for the objective. Thus, one objective is selected for optimization. The remaining objectives are reformulated as constraints [11]. The objective, Z, is selected for the optimization to solve the previously formulated model (Subsect. 3.4) using the ε-constraint method. The objective function E is reformulated as a constraint. By progressively changing the constraint values, ε, the Pareto-frontier curve is obtained. By calculating the Pareto-frontier’s extremes, the values and the range of objective E are selected accordingly.
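The ε-constraint loop described here can be illustrated on a deliberately simple two-objective linear program. The sketch below is our illustration, written with the open-source PuLP package rather than the LINGO model used in the paper, and all data are placeholders: cost is kept as the optimized objective, while the emission objective becomes a constraint whose bound ε is progressively tightened to trace the Pareto frontier.

from pulp import LpProblem, LpVariable, LpMinimize, value, PULP_CBC_CMD

def solve_for_epsilon(eps):
    # toy bi-objective model: x1, x2 are production levels of two technologies
    prob = LpProblem("eps_constraint", LpMinimize)
    x1 = LpVariable("x1", lowBound=0)
    x2 = LpVariable("x2", lowBound=0)
    cost = 4 * x1 + 7 * x2           # objective kept for optimization
    emissions = 6 * x1 + 2 * x2      # objective reformulated as a constraint
    prob += cost                     # set the objective
    prob += x1 + x2 >= 10            # demand satisfaction
    prob += emissions <= eps         # epsilon constraint
    prob.solve(PULP_CBC_CMD(msg=0))
    return value(cost), value(emissions)

# sweep epsilon between the extremes of the emission objective
for eps in (60, 50, 40, 30, 25):
    print(eps, solve_for_epsilon(eps))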

5 Experimentation and Results This section describes the solution procedure and numerical results for the case study. The model is solved using the LINGO 19.0 from LINDO systems. The proposed model’s efficiency has been tested using the Libyan supply chain covering all petroleum sectors (upstream, midstream, and downstream). 5.1 Baseline Scenario Most of the data has been collected with the Libyan National Oil Corporation (NOC) collaboration at different levels. Other data were estimated using official websites, published reports, and some previous studies. In this study, we consider only the direct emissions from certain activities in each sector. To experiment with the proposed model, two scenarios have been developed. The first scenario (Baseline) is when we optimize the petroleum supply chain without considering the CO2 reduction objective. We assume that only CCS technology is activated to achieve the CO2 reduction targets in the second scenario. The baseline scenario’s objective is to meet each product’s demand requirement and identify the CO2 emission contribution of each level on the petroleum sector and the cost related to that. The results are shown in Table 1 and compare costs and the CO2 emissions contributions of the different oil sectors. The crude sector accounts for a total cost of 69,539 M$ (54%) and 135,096 KT of CO2 (73% of the total emissions). Production of crude oil in upstream operations



Table 1. Baseline scenario of different sectors

Sector            Total cost (M$)   Total cost (%)   CO2 emissions (KT CO2)   Total CO2 emissions (%)
Crude oil         69,539            54               135,096                  73
Refinery          33,638            26               34,506                   19
Petrochemical     1,986             2                2,885                    2
Transportation    22,509            18               11,927                   6
Total (20 years)  127,672           100              184,415                  100
Average           6,384 M$/year                      9,222 KT CO2/year




accounts for the highest emissions because of the energy-intensive production methods to extract crude oil, especially in offshore platforms. Also, the refinery sector is massively emitting CO2 because of the complex process systems that synthesize many products while utilizing large amounts of energy and hydrogen for hydrotreatment processes. 5.2 Carbon Capture and Storage Table 2 summarizes the total CO2 emissions and the total costs for all sectors for different CO2 reduction target scenarios. We can observe that the supply chain cost increased for all scenarios compared to the Baseline. For instance, with a 32% CO2 emissions reduction objective (scenario CCS-7), the total cost increases by 10.70% and brings the total cost to141,328 M$. Table 2. Capture scenario of different CO2 reduction versus Baseline Scenario

Scenario   CO2 reduction (%)   Cost increase (%)   CO2 decrease (%)   Crude ($/bbl)   Refinery ($/bbl)   Petrochemical ($/bbl)
Baseline   0                   0                   0                  8.59            14.31              16.20
CCS-1      5                   0.06                7.75%              8.59            14.31              16.20
CCS-2      10                  0.28                11.13%             8.62            14.31              16.20
CCS-4      20                  0.64                25.07%             8.88            14.31              16.21
CCS-5      30                  1.22                27.46%             9.17            14.31              16.22
CCS-6      31                  5.25                30.92%             9.18            14.48              16.28
CCS-7      32                  10.70               32.56%             9.18            17.22              18.94
CCS-8      33                  Unfeasible

Using the results obtained from Table 2, the Pareto frontier presented in Fig. 2 (a) demonstrates the economic and environmental objectives conflict. Figure 2 (b) shows



the marginal abatement cost (MAC), which measures the cost of reducing one tone of CO2 for each scenario. However, activating CCS projects only for the crude sector can reduce up to 30% CO2 (CCS-5). In this scenario, the total cost increases by only 1.22%, and the MAC will be around $30. These results can be explained by the fact that CCS projects implementation at refinery and petrochemical plants cannot be economically viable options given the low production level in these plants [14]. Figure 3 compares cost and CO2 perspectives between the Baseline and CCS-7 scenarios for each sector. We observe significant investment in the crude (upstream) and refinery sectors (midstream).

Fig. 2. Total and abatement costs for CCS scenarios

Simultaneously, we observe a reduction in the extraction and transformation costs resulting from more efficiency in the extraction of crude and the refinery transformation. By far, using CCS, we will not reduce more than 32% [6, 7, 9]. Therefore, it is crucial to propose appropriate mitigation strategies depending on the stringency of environmental regulations. There is a trade-off relationship between the emission reduction and the cost.

Fig. 3. Comparison between Baseline & CCS-7 scenarios

6 Conclusion

This research investigates the impacts of sustainability implementation on the petroleum supply chain from a sector perspective to address the sustainability issue. Also, this study



mainly explores carbon abatement through CCS in the petroleum supply chain at the country level. Optimization results were obtained for different reduction targets. For instance, if only a 5% reduction is needed, we can implement CCS only for well five, located at W5. At a 30% reduction, we need to activate CCS in all wells. For 31%, we should add the implementation at the refinery located at R3. CCS is recommended for the next twenty years if the reduction objective is less than 32%. Regarding limitations, much more research needs to be done in future studies to consider other options for CO2 reduction. Finally, research efforts should be extended to study different methodologies to deal with air, land, and sea pollutants and with other sources of uncertainty, such as product prices, resource availability, and disruption events.

Appendix Crude oil Parameters LCwC = Setup cost (Fixed cost) of location well w ∈ W($) C = capacity in the well (bbl/y) Capwt C − Min = Minimum Capacity in the well (bbl/y) Capwt C = Variable extraction cost of p ∈ P at wells w ∈ W by using technology EXCpwgt g ∈ G during period t ∈ T ($/bbl) C = Transportation cost of p ∈ P transported from well w ∈ W to storage PRCpwit tanks i ∈ I at period t ∈ T ($/bbl) C : Transportation cost of p ∈ P transported from storage tanks i ∈ I to market PMCpijt j ∈ J at period t ∈ T ($/bbl) PRRC pwrt : Transportation cost of p ∈ P transported from well w ∈ W to refinery r ∈ R at period t ∈ T ($/bbl) C = Inventory cost of p ∈ P at storage tanks i ∈ I during period t ∈ T ($/bbl) CSCpit C = Selling price of crude oil p ∈ P to market j ∈ J at period t ∈ T ($/bbl) FCCpjt C = Demand of crude oil product p ∈ P by crude market j ∈ J at period t ∈ T DCpjt (bbl/y) max = Overall storage capacity for product p ∈ P at storage tanks i ∈ I (bbl/y) SCpi VCtmax,min = Maximum and Minimum production level of crude production at period t ∈ T (bbl/y) C = Cost of technology g ∈ G at Wells w ∈ W at the period t ∈ T ($) CGWwgt C EFCpwg = Emission factor associated with extracting p ∈ P with technology g ∈ G at wells w ∈ W (kg CO2 /bbl) EFLC1C = Emission factor using pipeline transportation crude products to storage tanks and refinery (Kg CO2 /bbl·km) EFSC1C = Emission factor using ship transportation for crude products from storage tanks to market and petrochemical products to market (Kg CO2 /bbl·km) Refinery Parameters LRRr = Setup cost (fixed cost) of refinery location r ∈ R ($) R = capacity in the refinery (bbl/y) Caprt R γrep = Yield of refinery product produced from processing crude product



FRRRest = Selling price of e ∈ E to market s ∈ S at the period t ∈ T ($/bbl) R = Transportation cost e ∈ E transported from r ∈ R to market s ∈ S at PRMerst t ∈ T , ($/bbl) R = Transportation cost of e ∈ E from refinery r ∈ R to h ∈ H at period PRHerht t ∈ T ($/bbl) DRRest = Demand of refinery product e ∈ E by the market s ∈ S at the period t ∈ T (bbl/y) VTRRerxt = Variable transformation cost e ∈ E at r ∈ R using technology x ∈ X at t ∈ T ($/bbl) CGRRrxt = Variable transformation cost at refinery r ∈ R using technology x ∈ X t ∈ T ($/bbl) EFRRerx = Emission factor for transformation e ∈ E at r ∈ R technology x ∈ X (kg CO2 /bbl) EFLRR1 = Emission factor pipeline from refinery to petrochemical plants (kg CO2 /bbl. km) EFTRR1 = Emission factor truck form refinery to the local market (Kg CO2 /bbl·km) Petrochemical Parameters LHhH = Setup cost of petrochemical plant location h ∈ H ($) H = capacity in the petrochemical (bbl/y) Capht H − Min = Minimum capacity in the petrochemical (bbl/y) Capht H = Yield of petrochemical products produced from processing refinery products γhen H = Unit inventory cost of n ∈ N at storage tank k ∈ K during the period CSHnkt t ∈ T ($/bbl) H = Demand of petrochemical product n ∈ N by market z ∈ Z at period t ∈ T DHnzt (bbl/y) H = Selling price of the product n ∈ N to market z ∈ Z at period t ∈ T ($/bbl) FHHnzt max = Overall storage capacity for storage tank k ∈ K (bbl/y) SH max , VH min = Maximum & Minimum production level of petrochemical product VHnht nht n ∈ N at petrochemical plants h ∈ H at the period t ∈ T (bbl/y) H = Transportation cost of n ∈ N transported from petrochemical h ∈ H to PHKnhkt storage tank k ∈ K at the period t ∈ T ($/bbl) H = Transportation cost of n ∈ N transported from storage tank k ∈ K to PKZnkzt market z ∈ Z at the period t ∈ T ($/bbl) H = Variable transformation cost of the product n ∈ N at the petrochemical VTHnhqt plant h ∈ H using technology q ∈ Q during the time t ∈ T ($/bbl) H = Emission factor associated with transformation petrochemical products EFHnhq n ∈ N with technology q ∈ Q at petrochemical (kg CO2 /bbl) EFLH1H = Emission factor using pipeline transportation petrochemical products to storage tanks (Kg CO2 /bbl·km) EFSH1H = Emission factor using ship transportation for petrochemical products from storage tanks to market (Kg CO2 /bbl·km)



References 1. Dudley, B.: BP statistical review of world energy. BP Statistical Review, London, UK (2018). Accessed 6 Aug 2018 2. Abdussalam, O., Trochu, J., Fello, N., Chaabane, A.: Recent advances and opportunities in planning green petroleum supply chains: a model-oriented review. Int. J. Sustain. Dev. World Ecol. 28, 1–16 (2021) 3. Benjaafar, S., Li, Y., Daskin, M.: Carbon footprint and the management of supply chains: insights from simple models. IEEE Trans. Autom. Sci. Eng. 10, 99–116 (2012) 4. Alshbili, I., Elamer, A.A., Moustafa, M.W.: Social and environmental reporting, sustainable development and institutional voids: evidence from a developing country. Corp. Soc. Responsib. Environ. Manag. 28, 881–895 (2021) 5. Al Dhaheri, N., Diabat, A.: A mathematical programming approach to reducing carbon dioxide emissions in the petroleum refining industry. In: 2010 Second International Conference on Engineering System Management and Applications, pp. 1–5. IEEE (2010) 6. Alhajri, I., Saif, Y., Elkamel, A., Almansoori, A.: Overall integration of the management of H2 and CO2 within refinery planning using rigorous process models. Chem. Eng. Commun. 200, 139–161 (2013) 7. Nguyen, T.-V., Tock, L., Breuhaus, P., Maréchal, F., Elmegaard, B.: CO2 -mitigation options for the offshore oil and gas sector. Appl. Energy 161, 673–694 (2016) 8. Attia, A.M., Ghaithan, A.M., Duffuaa, S.O.: A multi-objective optimization model for tactical planning of upstream oil & gas supply chains. Comput. Chem. Eng. 128, 216–227 (2019) 9. Kangas, I., Nikolopoulou, C., Attiya, M.: Modeling & optimization of the FCC unit to maximize gasoline production and reduce carbon dioxide emissions in the presence of CO2 emissions trading scheme. In: Proceedings of 2013 International Conference on Industrial Engineering and Systems Management (IESM), pp. 1–5 (2013) 10. Grösser, S.N.: Complexity management and system dynamics thinking. In: Grösser, S.N., Reyes-Lecuona, A., Granholm, G. (eds.) Dynamics of Long-Life Assets, pp. 69–92. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-45438-2_5 11. Lahri, V., Shaw, K., Ishizaka, A.: Sustainable supply chain network design problem: using the integrated BWM, TOPSIS, possibilistic programming, and ε-constrained methods. Expert Syst. Appl. 168, 114373 (2021) 12. Sahebi, H., Nickel, S., Ashayeri, J.: Environmentally conscious design of upstream crude oil supply chain. Ind. Eng. Chem. Res. 53, 11501–11511 (2014) 13. Mojarad, A.A.S., Atashbari, V., Tantau, A.: Challenges for sustainable development strategies in oil and gas industries. In: Proceedings of the International Conference on Business Excellence, pp. 626–638. Sciendo (2018) 14. Chan, W., Walter, A., Sugiyama, M., Borges, G.: Assessment of CO2 emission mitigation for a Brazilian oil refinery. Braz. J. Chem. Eng. 33, 835–850 (2016)

Bi-objective Model for the Distribution of COVID-19 Vaccines

Mohammad Amin Yazdani(B), Daniel Roy, and Sophie Hennequin

University of Lorraine, LGIPM, 57000 Metz, France
[email protected]

Abstract. It is important to define optimal supply chain strategies that can respond to real vaccination needs in different disasters, especially in the event of a pandemic. The distribution of medicines and vaccines is all the more critical when they can decay and must arrive at their final destination as fast as possible. In this paper, to overcome these problems and respond to the needs of the COVID-19 pandemic, we introduce a bi-objective model for the distribution of COVID-19 vaccines. The objectives are to minimize a cost function and to minimize the maximum traveling time of the vaccines to treat targeted populations in different time phases. The bi-objective model is solved with the well-known multi-objective augmented epsilon-constraint method. Besides, we present numerical results and an application of our proposed model. By solving the proposed model, we can find the optimal network for the vaccines and open the needed facilities in several locations. Finally, we give the decision-maker several possible solutions to choose from according to their preferences.

Keywords: Pharmaceutical supply chains · Distribution of COVID-19 vaccines · Bi-objective model · Allocation problem · Epsilon-constraint method

1 Introduction

Recent years have seen numerous events causing losses of human life and damage to countries' societies and economies. One of the most important was COVID-19, which first appeared in December 2019 according to the World Health Organization. Fortunately, vaccines against the virus were developed within several months [1]. Vaccines must then be distributed rapidly, but this can only be done over different periods because they are not immediately available for all people. The distribution of vaccines can be complex, since each country's strategy is different and the different vaccines have various constraints. These problems brought the idea of proposing a Vaccine Supply Chain (VSC) that can help decision-makers distribute vaccines in a better way. From the Supply Chain (SC) perspective, a flexible and trustworthy SC is necessary in case of disaster. The main barriers are inter-agency coordination and cooperation between actors (suppliers and regional actors), an uncertain environment, unpredictable demands, and financial planning in the VSC [2]. Considering the multi-period nature of the humanitarian SC and the uncertain demands makes decision-making more challenging for governments [3]. Another challenge is to determine the number and location of


needed facilities and the best strategies to assign them to different demand zones [4]. Besides, it is vital to consider the vulnerability factor of the different groups of society, which affects the priority of each group [5]. Therefore, the decision is not easy to make because it depends on many evolving criteria associated with strong constraints. In multi-objective optimization of humanitarian SCs, many kinds of methods have been developed for different purposes. We can cite, for example, Afshar and Haghani [6], who used CPLEX in disaster relief operations to minimize the total amount of weighted unsatisfied demand. Abounacer et al. [7] solved, with the epsilon-constraint method, a location-transportation model for disaster response; the paper aimed to minimize the total transportation duration, the number of first-aiders, and the non-covered demand. Hongzhong et al. [8] proposed a facility location model of medical supplies for large-scale emergencies and solved it with algorithms based on Lagrangian relaxation and a genetic algorithm. According to the gap found in previous research, a bi-objective model for the distribution of vaccines is addressed: minimizing the VSC's total cost and minimizing the vaccines' maximum traveling time, by integrating constraints related to the vaccines. The proposed model is solved with an adapted version of the augmented epsilon-constraint method. The rest of the paper is organized as follows: Sect. 2 describes the problem. Section 3 proposes the mathematical formulation. Section 4 discusses the experimental results and the proposed algorithm. Finally, Sect. 5 gives the conclusion and perspectives.
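For reference, the augmented epsilon-constraint method mentioned above is commonly written, for two minimized objectives, by adding the scaled slack of the constrained objective to the optimized one, so that only efficient points are returned. This is the generic textbook form, not necessarily the exact formulation adopted later in this paper; here f_1 can be read as the total cost and f_2 as the maximum traveling time.

\min \; f_1(x) - \delta \, \frac{s_2}{r_2}
\text{s.t.}\quad f_2(x) + s_2 = \varepsilon_2, \quad s_2 \ge 0, \quad x \in S,

where \delta is a small positive constant (e.g. 10^{-3}), r_2 is the range of f_2 over the Pareto set, and \varepsilon_2 is swept over that range to generate the Pareto-optimal solutions.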

2 Problem Description

This paper considers a COVID-19 VSC with five main actors: suppliers, vehicles, warehouses, distribution centers (DCs), and demand zones (DZs) (see Fig. 1). During the COVID-19 pandemic, the lack of space and the crucial demands force decision-makers to open DCs in various and different places to cover the entire targeted population to be vaccinated. The suppliers provide the vaccines for the whole VSC, which is considered apart from manufacturing aspects. The vaccines have an optimal use-by date and, depending on the type of vaccine, they may have specific inventory and transport constraints. The quantity of vaccines that suppliers can send to the rest of the VSC is known. It is defined for the whole time horizon (the finite time horizon is decomposed into periods), which depends on the countries' strategies for vaccinating their communities. Suppliers send vaccines to the warehouses, and then the vaccines flow from the warehouses to the DCs and, respectively, to the DZs. The second way is to send vaccines directly to the DCs and then to the DZs. Besides, the main difference between warehouses and DCs is that warehouses receive the vaccines from the suppliers and send them to the opened DCs, whereas DCs store vaccines and answer several demands of the DZs, which warehouses cannot do (there are no direct flows from warehouses to the DZs). Vehicles deliver the vaccines to the warehouses and to the DCs while respecting several conditions that depend on the type of vaccine, such as temperature and transportation time. Then, DCs have to answer the demand of the DZs so as to satisfy all the demands in a given period. DCs and vehicles should ensure the capacity of immunization services to deliver vaccines. Therefore, the deployment of the immunization strategy should account for different characteristics and different storage and transport requirements, including the cold chain and refrigerated transport and storage capacities.


Fig. 1. Considered VSC.

The location of available warehouses is known, but DCs’ location depends on the strategies chosen by governments and type of vaccine’s constraints. Hence, as a first step, we try to find the best location of the DCs to open them between candidate locations that exist in the problem for each period. The focuses of opening a DC for a country between the candidate locations are to minimize the cost and cover all demands in each period according to a country’s strategy. This strategy can evolve. For example, in France: at first, elderly people in dedicated institutions were vaccinated, then doctors were able to vaccinate and now different vaccination centers are open (high school, sports hall, etc.). However, suppliers need to be able to modify their procurement strategy because DCs may need to alter their vaccine rollout in response to contingencies that may arise (production delays, new demands, changes in demands, new government strategies, etc.). For instance, they have to be ready for the probability of sudden increases in demand at different periods. Indeed, priority groups are defined by the countries (different strategies) following the specific populations: health professionals and people working in long-term care facilities, older people, etc. After finding the locations to open DCs, the packages should deliver to the DCs from the warehouses, as discussed before. It is important to note that each period contains several time-windows. A time window is a decomposition of a period that allows decision makers to increase/decrease the number of targeted vaccinated people as the epidemic evolves. It has to be noticed that in each period, the sum (for all time windows of this period) of targeted people is known. Indeed, the increase/decrease has a small impact on the total sum for a period because the epidemic severely impacts a small percentage of the total population and therefore of the population to be vaccinated. However, over (shorter) time windows this percentage can change the decision maker’s vaccination strategy. The main objectives are given below: I) identify all possible DCs locations to cover all the demands and populations in each period and each time-window. Then, allocate the best amount of vaccines flow in the total VSC (from suppliers to the warehouses and DCs and from DCs to the DZs) to answer the demands of the DZs by minimize the total cost; II) minimize the maximum traveling time because of the lifetime of vaccines.
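A standard way to handle the "minimize the maximum traveling time" objective (not necessarily the exact constraints of this model) is to minimize an auxiliary variable that bounds every individual trip time; tt and y below are illustrative symbols for the travel-time parameter and the DC-to-DZ assignment variable:

\min \; T_{\max}
\text{s.t.}\quad T_{\max} \ge tt_{lik} \, y_{lik} \quad \forall l, i, k,

so that the optimal T_{\max} equals the longest traveling time actually used in the solution.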


The proposed model is constructed according to the following assumptions:
1. There are multiple vaccine types with different characteristics. Vaccines may decay during transportation by vehicles and in the inventory of the DCs. We do not consider decay in the DZs in this paper: within the DZs, vaccination is carried out without any storage activity. During transportation, the travelling time can increase due to events on the road such as traffic jams or accidents; to model this, we define a vulnerability factor for each road.
2. Under the special conditions of transportation and inventory, the possible decay rates of the vaccines are known and their lifetime is finite. We do not consider the capacity of vehicles in this paper, but some vehicles are better adapted to a given type of vaccine.
3. Each period includes several time-windows, and the number of time-windows in each period is known. At the beginning of each period, according to the targeted group, we establish the DCs. Each DC opened at the beginning of a period stays open until the end of that period; that is, the locations of the DCs are the same in all time-windows of a period. From one period to the next, the total targeted population for vaccination can change, so we establish the DCs according to each period's data separately.
4. The total demand of each period equals the targeted population that should be vaccinated, which is deterministic. The demands of the DZs are answered from the inventory of the DCs; when the inventory is lower than the demand of the DZs, the supplier supplies the DCs directly to answer the demand. Direct purchasing is costly for the SC, but there is no other choice when vaccines have decayed and are not available at the needed moment.
5. An expected maximum percentage of decayed vaccines is considered for the vehicles and the DCs. It is set according to the situation and is a limitation for the current epidemic; therefore, all the actors in the SC should respect it.
We give the notations and present our mathematical model in what follows.

3 Mathematical Formulation
The notations, parameters and variables are given below.

Notations and parameters
i — set of demand zones, i ∈ {1…I}
l — set of candidate locations, l ∈ {1…L}
p — set of vaccines, p ∈ {1…P}
k — set of time-windows, k ∈ {1…kf}
f — set of periods, f ∈ {1…F}
s — set of suppliers, s ∈ {1…S}
w — set of warehouses, w ∈ {1…W}
v — set of vehicles, v ∈ {1…V}


mdc_f — maximum number of DCs that can be opened in the candidate locations at period f
ttb — total budget for establishing DCs
cl_f — cost of establishing a DC at candidate location l in period f
pop_f — total targeted population to be covered in period f
cap_l — capacity of the opened DC at candidate location l
μ^cc_p — expected maximum percentage of vaccine p that can decay in the equipped vehicles
λ_pk — expected maximum percentage of vaccine p that can decay during inventory with special conditions at a DC at time-window k
dem_pik — demand of vaccine p at DZ i at time-window k
tdc_lik — travelling time from the DC at candidate location l to DZ i at time-window k
tw_wlk — travelling time from warehouse w to the DC at candidate location l at time-window k
lt^cc_p — lifetime of vaccine p in transportation with special conditions
h_l — holding cost of inventory at the DC at candidate location l
vf_li — vulnerability factor of the road between the DC at candidate location l and DZ i
vf_wl — vulnerability factor of the road between warehouse w and the DC at candidate location l
ATT_v — maximum travelling time of vehicle type v
BM — a very big number
cv_v — cost of using vehicle type v
dos_p — cost of direct ordering of vaccine p from supplier s
oc_sp — ordering cost of vaccine p from supplier s
θ_f — chance of an increase in the population of affected people in period f

Variables:
x_lf = 1 if a DC is opened at candidate location l in period f, 0 otherwise
y_lk = 1 if a DC is opened at candidate location l in time-window k, 0 otherwise
adc_lik = 1 if vaccines flow from the DC at candidate location l to DZ i in time-window k, 0 otherwise
aw_wlk = 1 if vaccines flow from warehouse w to the DC at candidate location l in time-window k, 0 otherwise
qdc_plikv — quantity of vaccine p transferred from the DC at candidate location l to DZ i at time-window k by vehicle type v
qw_pwlkv — quantity of vaccine p transferred from warehouse w to the DC at candidate location l at time-window k by vehicle type v
idc_plk — inventory level of vaccine p at the DC at candidate location l at the beginning of time-window k
nv_v — number of needed vehicles of type v
Mtdc_max — maximum travelling time from DCs to DZs
Mtw_max — maximum travelling time from warehouses to DCs
Z1 — total cost objective function
Z2 — maximum travelling time of vaccines

The main objectives over a finite horizon of study are given as follows:

min Z1 = \sum_v cv_v nv_v + \sum_s \sum_p \sum_w \sum_l \sum_k \sum_v oc_sp qw_pwlkv + \sum_s \sum_p \sum_l \sum_i \sum_k \sum_v dos_p (λ_pk idc_plk + μ^cc_p qdc_plikv) + \sum_l \sum_f x_lf cl_f + \sum_p \sum_l \sum_k h_l idc_plk   (1)

min Z2 = Mtdc_max + Mtw_max   (2)

The objective function (1) minimizes the total costs of the VSC. The first part is the cost of the vehicles (transportation costs depending on the choice of vehicles). The second part includes the ordering costs. The third part is the cost of ordering directly from the supplier to answer the demand. The fourth part contains the costs of establishing the DCs in the different periods, and the fifth part contains the inventory costs. It is important to note that the direct order is calculated according to the amount of decayed material in the system, which has to be sent again by the supplier; that is why it is written as λ_pk idc_plk + μ^cc_p qdc_plikv. The objective function (2) minimizes the maximum traveling time from warehouses to the DCs and from DCs to the DZs, with:

\sum_l \sum_v qdc_plikv = dem_pik   ∀p ∈ {1…P}, ∀i ∈ {1…I}, ∀k ∈ {1…kf}   (3)

\sum_w \sum_l \sum_v (1 − μ^cc_p) qw_pwlkv = \sum_i dem_pik   ∀p ∈ {1…P}, ∀k ∈ {1…kf}   (4)

nv_v = (\sum_l \sum_i \sum_k adc_lik (1 + vf_li) tdc_lik + \sum_w \sum_l \sum_k aw_wlk (1 + vf_wl) tw_wlk) / ATT_v   ∀v ∈ {1…V}   (5)

idc_plk = (1 − λ_pk) idc_pl,k−1 + \sum_w \sum_v (1 − μ^cc_p) qw_pwlkv − \sum_i \sum_v (1 − μ^cc_p) qdc_plikv   ∀p ∈ {1…P}, ∀l ∈ {1…L}, ∀k ∈ {2…kf}   (6)

Equation (3) states that the demands of the DZs must be completely satisfied in each time-window. Equation (4) states that the total amount of material ordered from the warehouses and sent to the DCs is determined by the total demands of the DZs in each time-window. Equation (5) gives the number of needed vehicles in our VSC according to the time limitations. Equation (6) is the inventory balance: the inventory level of each time-window consists of the previous time-window's inventory, plus the input coming from the warehouses, minus the output going to the DZs. The model constraints are:

\sum_l \sum_f x_lf cl_f ≤ ttb   (7)

pop_f (1 + θ_f) ≤ \sum_l x_lf cap_l   ∀f ∈ {1…F}   (8)

\sum_l x_lf ≤ mdc_f   ∀f ∈ {1…F}   (9)

y_lk = x_lf   ∀l ∈ {1…L}, ∀f ∈ {1…F}, ∀k belonging to period f   (10)

Constraint (7) limits the total budget for establishing DCs among the candidate locations over the periods. Constraint (8) imposes that the DCs opened among the candidate locations cover all of the targeted population in each period, even if the population evolves. Constraint (9) states that the total number of DCs opened among the candidate locations in each period cannot exceed a maximum number. Constraint (10) ensures that, in each time-window belonging to a particular period, a DC is open according to that period's decision. By minimizing Z1 subject to constraints (7)–(10), we can find the optimal places for opening the DCs in each period. The remaining constraints are as follows:

adc_lik (1 + vf_li) tdc_lik + aw_wlk (1 + vf_wl) tw_wlk ≤ lt^cc_p   ∀p ∈ {1…P}, ∀w ∈ {1…W}, ∀l ∈ {1…L}, ∀i ∈ {1…I}, ∀v ∈ {1…V}, ∀k ∈ {1…kf}   (11)

\sum_p idc_plk ≤ BM y_lk   ∀l ∈ {1…L}, ∀k ∈ {1…kf}   (12)

\sum_p \sum_i \sum_v qdc_plikv ≤ BM y_lk   ∀l ∈ {1…L}, ∀k ∈ {1…kf}   (13)

\sum_p \sum_w \sum_v qw_pwlkv ≤ BM y_lk   ∀l ∈ {1…L}, ∀k ∈ {1…kf}   (14)

adc_lik (1 + vf_li) tdc_lik ≤ Mtdc_max   ∀l ∈ {1…L}, ∀i ∈ {1…I}, ∀k ∈ {1…kf}   (15)

aw_wlk (1 + vf_wl) tw_wlk ≤ Mtw_max   ∀w ∈ {1…W}, ∀l ∈ {1…L}, ∀k ∈ {1…kf}   (16)

y_lk ≤ \sum_i adc_lik   ∀l ∈ {1…L}, ∀k ∈ {1…kf}   (17)

adc_lik ≤ \sum_p \sum_v qdc_plikv   ∀l ∈ {1…L}, ∀i ∈ {1…I}, ∀k ∈ {1…kf}   (18)

y_lk ≤ \sum_w aw_wlk   ∀l ∈ {1…L}, ∀k ∈ {1…kf}   (19)

aw_wlk ≤ \sum_p \sum_v qw_pwlkv   ∀w ∈ {1…W}, ∀l ∈ {1…L}, ∀k ∈ {1…kf}   (20)

qdc_plikv, qw_pwlkv, qs_psk, idc_plk, nv_v, Mtdc_max, Mtw_max ≥ 0;   x_lf, y_lk, adc_lik, aw_wlk ∈ {0, 1}   (21)

Constraint (11) limits the transportation time of the vaccines according to their lifetime on the roads between the warehouses, the DCs, and the DZs. Constraints (12), (13), and (14) ensure that material flows and material stocks can exist only at opened candidate locations: from the warehouses, material can flow only toward opened DCs; incoming vaccines from warehouses can be stocked only in opened DCs; and the demands of the DZs are answered only from opened DCs. Constraints (15) and (16) provide an upper bound on the traveling times toward the DZs and the DCs. Constraints (17) and (18) ensure that the assignment of DCs to DZs respects two conditions: a DZ can be allocated to a location only if a DC is opened there, and a DC and a DZ are considered allocated only if material flows between them. Constraints (19) and (20) ensure the assignment of warehouses to DCs under the same two kinds of conditions: the DC must be opened, and the warehouse must actually send vaccines toward the opened location. Constraint (21) determines the types of the variables.
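To make the location step concrete, the following is a minimal Python sketch (using the PuLP library) of the DC-location subproblem, i.e. the establishment-cost part of Z1 under constraints (7)–(9); the data arrays cl, cap, pop, theta, mdc and ttb are hypothetical placeholders, and constraint (10) as well as the flow part of the model are omitted. This is an illustration under stated assumptions, not the authors' GAMS implementation.

import pulp

# Hypothetical dimensions and data (placeholders, not the paper's instance)
L, F = 6, 3
cl = [[2.5] * F for _ in range(L)]   # establishment cost cl_lf
cap = [15000] * L                    # capacity cap_l
pop = [12000, 14000, 16000]          # targeted population pop_f
theta = [0.05] * F                   # increase chance theta_f
mdc = [4, 3, 3]                      # max number of opened DCs per period
ttb = 40.0                           # total budget

model = pulp.LpProblem("DC_location", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", (range(L), range(F)), cat="Binary")

# Objective: establishment costs (the x_lf * cl_lf part of Z1)
model += pulp.lpSum(x[l][f] * cl[l][f] for l in range(L) for f in range(F))

# (7) total budget
model += pulp.lpSum(x[l][f] * cl[l][f] for l in range(L) for f in range(F)) <= ttb
for f in range(F):
    # (8) cover the targeted population, (9) maximum number of opened DCs
    model += pop[f] * (1 + theta[f]) <= pulp.lpSum(x[l][f] * cap[l] for l in range(L))
    model += pulp.lpSum(x[l][f] for l in range(L)) <= mdc[f]

model.solve(pulp.PULP_CBC_CMD(msg=False))
opened = [(l, f) for l in range(L) for f in range(F) if x[l][f].value() == 1]
print("opened DCs (location, period):", opened)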

4 Results and Discussion
In this section, the application is presented by solving a test problem. The test problem uses uniformly generated data for all the parameters; the values of some parameters are given in Table 1. Note that in our input data the cost of opening a new DC is not high, because in a pandemic situation other facilities of society can play the role of a DC. The problem has twelve DZs (I = 12), six candidate locations (L = 6), three different types of vaccines (P = 3), nine time-windows in three distinct periods (kf = 9, F = 3), three suppliers (S = 3), three warehouses (W = 3) and two different types of


vehicles (V = 2). The example is implemented in GAMS 25.1 on a PC with the following configuration: (i) Core i7, 2.20 GHz processor, (ii) 16 GB RAM. The CPLEX solver is used to solve the resulting mixed-integer linear problem. The optimal value of the first objective function is 4,960,441.804 $, and the optimal value of the second objective function is 151.249 h. The optimal network of the proposed VSC for the considered data is given in Table 2.

Table 1. Data parameters.
pop_f: Uniform-integer (10000, 20000)    cap_l: Uniform-integer (10000, 20000)
tdc_lik: Uniform (20, 80)                tw_wlk: Uniform (20, 60)
lt^cc_p: Uniform (200, 300)              h_l: Uniform (5, 7)
cv_v: Uniform (10, 20)                   dos_p: Uniform (10, 20)
oc_sp: Uniform (5, 8)                    cl_f: Uniform (2, 3)

Note that the results are shown for the three periods of the input data, where each period contains three time-windows. In each time-window, we were able to completely cover the dynamic demand of the DZs, as stated in the assumptions. In each period, the opened DCs stay fixed (i.e. they do not change across the time-windows of the same period). For example, in period 1, which contains three time-windows, locations 1, 3, 5 and 6 have opened DCs. The number of time-windows could differ from one period to another, but we set it to three in each period in the proposed example. The number of opened DCs can also differ between periods: in period 1 four DCs are opened, whereas in the other periods only three are. Furthermore, we obtain the origin and destination of each vaccine flow. For example, in time-window 1 there are four opened DCs; from the one at location 1, vaccines are sent to DZs 2, 3, 7 and 8, and this DC is supplied by warehouse 2. Finally, it can be seen that more than one DC can send vaccines to a DZ, and more than one warehouse can supply a DC. Of course, all these results depend on the choice of the parameters and on the government's strategy. The parameters that most affect the proposed model are the demands of the DZs and the traveling times of the vaccines: the larger the demand in a DZ, the more likely a closer DC is assigned to it, and the same holds between warehouses and DCs. This is rational because we focus on the decay of the vaccines on the roads, and this sensitivity supports the validity of our model. Finding the best locations and assignment matrix, together with the validity of the proposed model, makes it suitable for applying case-study data to other types of perishable vaccines and products. In multi-objective optimization, several well-known procedures are used to obtain Pareto solutions, such as goal programming, Lp-metrics, the weighted sum, the lexicographic method, and the epsilon-constraint method [9]. In this paper, since we have a


Table 2. Optimal network.
Period | Time-window | Opened DC location (x_lf, y_lk = 1) | Allocated DZs (adc_lik = 1) | Warehouses that send vaccine (aw_wlk = 1)
1 | 1 | 1 | 2,3,7,8 | 2
1 | 1 | 3 | 1,5,6,9,10 | 3
1 | 1 | 5 | 4,6,11,12 | 2
1 | 1 | 6 | 5 | 1
1 | 2 | 1 | 1,5,6,7,8,12 | 3
1 | 2 | 3 | 2,9,10 | 1
1 | 2 | 5 | 11 | 3
1 | 2 | 6 | 3,4 | 3
1 | 3 | 1 | 1,2,3,4,5,9,10,11,12 | 1
1 | 3 | 3 | 8 | 2
1 | 3 | 5 | 6,7,12 | 2
1 | 3 | 6 | 3,10 | 2
2 | 4 | 2 | 1,2,4,6,7,11,12 | 2
2 | 4 | 5 | 3,5,9,10 | 1
2 | 4 | 6 | 6,8 | 6
2 | 5 | 2 | 3,4,5,6 | 3
2 | 5 | 5 | 11 | 1
2 | 5 | 6 | 1,2,4,7,8,9,10,11,12 | 3
2 | 6 | 2 | 3,10 | 1
2 | 6 | 5 | 1,2,4,7,8,12 | 3
2 | 6 | 6 | 5,6,9,11 | 1
3 | 7 | 2 | 2,5 | 1
3 | 7 | 3 | 3,4,6,7 | 1
3 | 7 | 4 | 1,6,8,9,10,11,12 | 1
3 | 8 | 2 | 2,7,12 | 2
3 | 8 | 3 | 1,3,6,11 | 3
3 | 8 | 4 | 4,5,8,9,10 | 2
3 | 9 | 2 | 1,8,9,10 | 2
3 | 9 | 3 | 4,12 | 2
3 | 9 | 4 | 2,3,5,6,7,11 | 1,2


bi-objective model, we implement an improved version of the epsilon-constraint method known as the augmented epsilon-constraint method. Several reasons make this method suitable for our proposed VSC. First, for linear problems the augmented epsilon-constraint method [10] can obtain non-extreme efficient solutions, while methods based on weights produce only extreme efficient solutions. Furthermore, with the proposed method the scaling of the objective functions is not an issue, whereas in weighting methods it is necessary. In addition, finding the best weights is time-consuming, which makes goal programming harder to use. The general form of the epsilon-constraint method solves problems of the form (22):

min f1(x)
s.t. f2(x) ≤ ε2, f3(x) ≤ ε3, …, fn(x) ≤ εn   (22)

It is important to note that the values of ε are computed from the payoff matrix. Figure 2 shows the procedure of the proposed algorithm to obtain the Pareto solutions, where ni denotes the grid points. By using the augmented epsilon-constraint method, we can eliminate dominated solutions from the final Pareto set [11].
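As an illustration of this procedure, the following is a minimal Python sketch of an epsilon-constraint loop over a bi-objective model. Here solve_single_objective is a hypothetical helper that minimizes one objective subject to the model constraints plus an optional bound on the other objective, and n_grid is an assumed grid size; the augmentation term of the full augmented method and the payoff-matrix construction are only indicated, so this is a sketch rather than the authors' procedure.

def epsilon_constraint_front(solve_single_objective, n_grid=20):
    # Payoff matrix: range of Z2 over the efficient set
    z1_best = solve_single_objective(minimize="Z1")
    z2_best = solve_single_objective(minimize="Z2")
    z2_max, z2_min = z1_best["Z2"], z2_best["Z2"]

    pareto = []
    for g in range(n_grid + 1):
        # Bound Z2 at a grid point eps and minimize Z1 (a full AUGMECON
        # implementation also adds a small augmentation term on the slack)
        eps = z2_max - g * (z2_max - z2_min) / n_grid
        sol = solve_single_objective(minimize="Z1", z2_upper_bound=eps)
        if sol is not None:
            pareto.append((sol["Z1"], sol["Z2"]))

    # Keep only non-dominated points
    front = [p for p in pareto
             if not any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in pareto)]
    return sorted(set(front))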

Fig. 2. Procedure of the proposed augmented epsilon-constraint.

The goal of the proposed model is to find efficient solutions and give decision-makers the freedom to choose between the objective values according to the situation and their preferences. For instance, in some countries the total cost of the VSC is not a major concern, while it is crucial in less wealthy countries. Other countries focus mainly on the traveling time to ensure that they can vaccinate their


targeted population faster. By solving the model with the augmented epsilon-constraint method, we obtained a small number of efficient solutions; Fig. 3 depicts the Pareto solutions.

Fig. 3. Pareto solutions of the problem obtained by the augmented epsilon-constraint method (total cost objective function vs. maximum traveling time of products).

In total, 21 Pareto solutions were obtained. The best value of the first objective function is 4,938,556.46 $, for which the second objective function equals 214.73 h; conversely, the best value of the second objective function is 101.39 h, for which the first one equals 4,943,446.84 $. We were therefore able to build the payoff table of the problem. The main advantages of our work compared to the works mentioned in the literature are that we address, with a more comprehensive mathematical model, the distribution problem raised by this new disease, and that we provide the decision-maker with several possible solutions to choose from according to their preferences and limitations.

5 Conclusion
In this paper, we considered a VSC that is applicable to several kinds of vaccines subject to decay, such as COVID-19 vaccines. A bi-objective model was introduced to find the best material flows and the best places to open facilities so as to cover all predicted demands. The two objective functions are the minimization of the total cost and the minimization of the maximum traveling times. We first solved the problem with the GAMS software and found the optimal assignments and material flows of the proposed model. Finally, the augmented epsilon-constraint method was applied to compute the Pareto solutions and help the decision-maker choose more widely. For future research, we will propose a dynamic mathematical model to be closer to reality and collect real data to improve our model. We will then conduct a sensitivity analysis on the different parameters to study the behaviour of the model. Once the model is completed and verified, we can consider other objectives and define a dynamic optimization that can respond to changes in demand and integrate other sources of uncertainty. The final objective of our work is to integrate this model into a decision-maker tool with blockchain technology for data traceability.


References 1. Belete, T.M.: A review on Promising vaccine development progress for COVID-19 disease. Vacunas (2020) 2. Agarwal, S., Kant, R., Shankar, R.: Evaluating solutions to overcome humanitarian supply chain management barriers: a hybrid fuzzy SWARA–Fuzzy WASPAS approach. Int. J. Disaster Risk Reduction 51, 101838 (2020) 3. Cao, C., Liu, Y., Tang, O., Gao, X.: A fuzzy bi-level optimization model for multi-period post-disaster relief distribution in sustainable humanitarian supply chains. Int. J. Prod. Econ. 235, 108081 (2021). ISSN 0925-5273 4. Habibi-Kouchaksaraei, M., Paydar, M.M., Asadi-Gangraj, E.: Designing a bi-objective multiechelon robust blood supply chain in a disaster. Appl. Math. Model. 55, 583–599 (2018) 5. Alem, D., Bonilla-Londono, H.F., Barbosa-Povoa, A.P., Relvas, S., Ferreira, D., Moreno, A.: Building disaster preparedness and response capacity in humanitarian supply chains using the Social Vulnerability Index. Eur. J. Oper. Res. 292(1), 250–275 (2021) 6. Afshar, A., Haghani, A.: Modeling integrated supply chain logistics in real-time large-scale disaster relief operations. Socioecon. Planning Sci. 46(4), 327–338 (2012) 7. Abounacer, R., Rekik, M., Renaud, J.: An exact solution approach for multi-objective locationtransportation problem for disaster response. Comput. Oper. Res. 41, 83–93 (2014) 8. Jia, H., Ordonez, F., Dessouky, M.M.: Solution approaches for facility location of medical supplies for large-scale emergencies. Comput. Ind. Eng. 52(2), 257–276 (2007) 9. Abazari, S.R., Aghsami, A., Rabbani, M.: Prepositioning and distributing relief items in humanitarian logistics with uncertain parameters. Socio-Econ. Planning Sci. 74, 100933 (2020) 10. Mavrotas, G., Florios, K.: An improved version of the augmented epsilon-constraint method (AUGMECON2) for finding the exact pareto set in multi-objective integer programming problems. Appl. Math. Comput. 219(18), 9652–9669 (2013) 11. Amin, S.H., Zhang, G.: A multi-objective facility location model for closed-loop supply chain network under uncertain demand and return. Appl. Math. Model. 37(6), 4165–4176 (2013)

Machine Learning - Algorithms and Applications

DCA for Gaussian Kernel Support Vector Machines with Feature Selection

Hoai An Le Thi1,2 and Vinh Thanh Ho1(B)

1 LGIPM, Département IA, Université de Lorraine, 57000 Metz, France
{hoai-an.le-thi,vinh-thanh.ho}@univ-lorraine.fr
2 Institut Universitaire de France (IUF), Paris, France

Abstract. We consider the support vector machines problem with feature selection using the Gaussian kernel function. This problem takes the form of a nonconvex minimization problem with binary variables. We investigate an exact penalty technique to deal with the binary variables. The resulting optimization problem can be expressed as a DC (Difference of Convex functions) program on which DCA (DC Algorithm) is applied. Numerical experiments on four benchmark real datasets show the efficiency of the proposed algorithm in terms of both feature selection and classification when compared with the existing algorithm.

Keywords: DC programming · DCA · Feature selection · Gaussian kernel · Support vector machines

1 Introduction

Support Vector Machines (SVMs) [2] are a powerful supervised learning method and have been successfully applied to many real-world problems in bioinformatics, face detection, image classification, etc. (see, e.g., [3,14,30]). SVMs aim to separate points in two given sets by a decision surface. In practice, it is difficult to separate the data points linearly. The role of kernel functions is to map these points into a higher (possibly infinite) dimensional space where building an SVM model is easier. Examples of popular kernel functions used in SVMs are the Gaussian kernel and the polynomial kernel (see, e.g., [23]). Feature selection is an important task in SVMs. It attempts to eliminate irrelevant and redundant features, save storage space, reduce prediction time, and avoid the curse of dimensionality while maintaining or improving the prediction quality (see, e.g., [6,24]). There are three main approaches for feature selection in SVMs: the filter, wrapper, and embedded approaches (see more details in, e.g., [23]). Each approach differs from the others in the way feature selection and classifier training are used separately (for "filter": selection before classification) or alternately (for "embedded" and "wrapper"). The wrapper approach measures the importance of features based on classification accuracy in order to select or remove features. The embedded approach considers feature selection as an integral part of the learning process.


This paper focuses on the embedded approach for solving the Gaussian kernel SVMs problem with feature selection. In [23], the authors proposed an embedded formulation of this problem. More precisely, given a training dataset {x^(i), y_i}_{i=1,…,n} where each training example x^(i) ∈ R^m (with m features) is labeled by its class y_i ∈ {−1, 1}, the authors considered the modified Gaussian kernel K̃ defined as follows:

K̃(x^(k), x^(l), v) = exp(−‖v ∘ x^(k) − v ∘ x^(l)‖²₂ / 2),   v = (v_i)_{i=1,…,m}, v_i ≥ 0.   (1)

Here ∘ denotes the element-wise product operator. The variable v ∈ R^m in (1) plays two roles: a feature selection variable (i.e., for i = 1,…,m, if v_i = 0 then the i-th feature is not selected) and a width variable of the Gaussian kernel function. The idea in [23] is to penalize the ℓ0-norm ‖v‖₀ (i.e. the number of non-zero elements of v) in the dual formulation of SVMs. The ℓ0-norm formulation of SVMs with the Gaussian kernel function (1) and feature selection is expressed as follows (see [23]):

min_{α,v}  (1/2) \sum_{i,j=1}^{n} α_i α_j y_i y_j K̃(x^(i), x^(j), v) − \sum_{i=1}^{n} α_i + C₂ ‖v‖₀
s.t.  \sum_{i=1}^{n} α_i y_i = 0;  0 ≤ α_i ≤ C₁, i = 1,…,n;  v_i ≥ 0, i = 1,…,m,   (2)

where C₁, C₂ are positive control parameters. The difficulty of the problem (2) lies in the nonconvexity of the first term and of the ℓ0-norm term in the objective function. In [23], the authors approximated the ℓ0-norm term by an exponential concave approximation (see more approximations in the seminal work [18]) and then developed the algorithm KP-SVM to solve the resulting approximation problem alternately in the two variables α and v. For fixed v, this problem becomes the convex quadratic SVM problem with the kernel function (1). For fixed α, the authors applied the gradient method to update v and eliminated the features whose v_i's are close to zero (below a given threshold). The authors showed the efficiency of KP-SVM on four real datasets in comparison with other filter, wrapper, and embedded methods. In this paper, we consider a modified version of (2) in which we use the binary variable v ∈ {0, 1}^m for feature selection and fix the width parameter σ of the Gaussian kernel function K, that is

K(x^(k), x^(l), v) = exp(−‖v ∘ x^(k) − v ∘ x^(l)‖²₂ / (2σ²)),   v ∈ {0, 1}^m, σ > 0.   (3)

Hence, our binary formulation of SVMs with feature selection using the Gaussian kernel function (3) takes the form

min_{α,v}  (1/2) \sum_{i,j=1}^{n} α_i α_j y_i y_j K(x^(i), x^(j), v) − \sum_{i=1}^{n} α_i + C₂ \sum_i v_i
s.t.  \sum_{i=1}^{n} α_i y_i = 0;  0 ≤ α_i ≤ C₁, i = 1,…,n;  v_i ∈ {0, 1}, i = 1,…,m.   (4)

In (4), the linear term \sum_i v_i replaces the nonconvex term ‖v‖₀ of (2). However, the problem (4) is still doubly difficult, due to the nonconvexity of the objective function and to the binary variable. We develop a DC (Difference of Convex functions) programming and DCA (DC Algorithm) based approach for directly solving the problem (4) in both variables, via two main steps that handle the binary variable and the nonconvex objective function. First, we exploit an exact penalty technique in DC programming, which has been widely studied in, e.g., [7,19,25], to treat the binary variable. The combinatorial optimization problem (4) is equivalently reformulated as a continuous problem, which is further formulated as a DC program. Then we design a DCA-based algorithm, named DCA-KSVM-Bi, for this DC program. Thanks to the special structure of the considered problem, our algorithm has several advantages. At each iteration, both variables α and v are explicitly computed in a very inexpensive way. We prove that if a feature is not selected at an iteration k, then it will not be selected at any later iteration l > k, hence we can remove it definitively at iteration k. As a result, the total number of removed features is non-decreasing over the iterations, and the computational time of the algorithm is greatly reduced. Finally, our preliminary experiments on several benchmark real-world classification datasets demonstrate the efficiency of our approach in comparison with the existing algorithm KP-SVM. The rest of the paper is organized as follows. A brief introduction to DC programming and DCA is given in Sect. 2.1. We show how to reformulate the problem (4) as a DC program in Sect. 2.2, and then present a DCA-based algorithm for the resulting optimization reformulation in Sect. 2.3. Section 3 reports the numerical results on several real datasets, followed by some conclusions in Sect. 4.
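To fix ideas, the following is a minimal NumPy sketch (not the authors' code) of the binary-gated Gaussian kernel (3); the data x1, x2, the selection vector v and the width sigma are hypothetical values.

import numpy as np

def gated_gaussian_kernel(xk, xl, v, sigma):
    # K(x^(k), x^(l), v) = exp(-||v o x^(k) - v o x^(l)||^2 / (2 sigma^2)),
    # where v in {0,1}^m switches individual features on or off.
    diff = v * xk - v * xl
    return np.exp(-np.dot(diff, diff) / (2.0 * sigma ** 2))

# Hypothetical example: only the first two of four features are selected
x1 = np.array([1.0, 2.0, -1.0, 0.5])
x2 = np.array([0.0, 2.5, 3.0, -2.0])
v = np.array([1.0, 1.0, 0.0, 0.0])
print(gated_gaussian_kernel(x1, x2, v, sigma=1.0))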

2 Solution Method by DC Programming and DCA

2.1 Introduction to DC Programming and DCA

DC programming and DCA were introduced by Pham Dinh Tao in a preliminary form in 1985 and have been extensively developed by Le Thi Hoai An and Pham Dinh Tao since 1994. DCA is well-known as a powerful approach in the nonconvex programming framework (see, e.g., [15,17,25,26]). A standard DC program takes the form

(Pdc)  α = inf{f(x) := g(x) − h(x) : x ∈ R^p},


where the functions g, h ∈ Γ₀(R^p) are convex. Here Γ₀(R^p) denotes the set of proper lower-semicontinuous convex functions from R^p to R ∪ {+∞}. Such a function f is called a DC function, g − h is called a DC decomposition of f, while g and h are DC components of f. A convex constraint C can be incorporated into the objective function of (Pdc) by using the indicator function χ_C of C (defined by χ_C(x) := 0 if x ∈ C, +∞ otherwise): inf{f(x) := g(x) − h(x) : x ∈ C} = inf{[g(x) + χ_C(x)] − h(x) : x ∈ R^p}. A point x* is called a critical point of g − h if it satisfies the generalized Kuhn-Tucker condition ∂g(x*) ∩ ∂h(x*) ≠ ∅. The standard DCA scheme is described below.

Standard DCA scheme
Initialization: Let x⁰ ∈ R^p be a best guess. Set k = 0.
repeat
1. Calculate x̄^k ∈ ∂h(x^k).
2. Calculate x^{k+1} ∈ argmin{g(x) − ⟨x, x̄^k⟩ : x ∈ R^p}.
3. k = k + 1.
until convergence of {x^k}.

Convergence properties of DCA and its theoretical basis are described in [15,26,27]. For instance, it is worth mentioning the following properties:
• DCA is a descent method without linesearch (the sequence {g(x^k) − h(x^k)} is decreasing) but with global convergence (i.e. it converges from an arbitrary starting point).
• If g(x^{k+1}) − h(x^{k+1}) = g(x^k) − h(x^k), then x^k is a critical point of g − h. In this case, DCA terminates at the k-th iteration.
• If the optimal value α of (Pdc) is finite and the infinite sequence {x^k} is bounded, then every limit point x* of this sequence is a critical point of g − h.

In recent years, numerous DCA-based algorithms have been developed to successfully solve large-scale nonsmooth/nonconvex programs in many application areas (see, e.g., [8–13,16,18,20,21,28,29] and the list of references in [17]). For a comprehensive survey on thirty years of development of DCA, the reader is referred to the recent paper [17].
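A minimal Python sketch of this generic scheme is given below for illustration; it assumes the user supplies grad_h (a subgradient of h) and argmin_g_linear (a solver for the convex subproblem of step 2), which are placeholders rather than part of the paper, and it is shown on a toy DC decomposition.

import numpy as np

def dca(x0, grad_h, argmin_g_linear, f, tol=1e-6, max_iter=500):
    # Standard DCA for min f(x) = g(x) - h(x):
    #   grad_h(x)          -> a subgradient of h at x
    #   argmin_g_linear(y) -> argmin_x { g(x) - <x, y> }
    #   f(x)               -> objective value, used only for the stopping test
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        y = grad_h(x)                  # step 1: y^k in the subdifferential of h
        x_new = argmin_g_linear(y)     # step 2: convex subproblem
        if abs(f(x_new) - f(x)) <= tol * (1.0 + abs(f(x))):
            return x_new
        x = x_new
    return x

# Toy example: f(x) = ||x||^2/2 - ||x||_1 with g = ||x||^2/2 and h = ||x||_1
grad_h = lambda x: np.sign(x)          # subgradient of the l1 norm
argmin_g = lambda y: y                 # argmin_x { ||x||^2/2 - <x, y> } = y
f = lambda x: 0.5 * x @ x - np.abs(x).sum()
print(dca(np.array([0.3, -2.0, 1.5]), grad_h, argmin_g, f))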

2.2 DC Reformulation of the Binary Formulation (4)

We use the exact penalty technique [7,19,25] to treat the binary variable v. Let p : [0, 1]^m → R be the penalty function defined by

p(v) := \sum_{i=1}^{m} v_i (1 − v_i).


The problem (4) can be rewritten as follows:

min_{α,v}  (1/2) αᵀ Q(v) α − 1ᵀ α + C₂ 1ᵀ v
s.t.  αᵀ y = 0;  0 ≤ α ≤ C₁ 1;  0 ≤ v ≤ 1;  p(v) ≤ 0,   (5)

where 1 (resp. 0) denotes a vector of all ones (resp. all zeros) of appropriate size; the matrix of functions in v, denoted by Q(v), is defined by Q(v) := (Q_ij(v))_{i,j=1,…,n}, Q_ij(v) = y_i y_j K(x^(i), x^(j), v), and the Gaussian kernel function K is defined as in (3). This leads to the corresponding penalized problem (C₃ being the positive penalty parameter)

min{ f(α, v) := (1/2) αᵀ Q(v) α − 1ᵀ α + (C₂ + C₃) 1ᵀ v − C₃ ‖v‖²₂ : (α, v) ∈ K },   (6)

where K := {(α, v) ∈ R^n × R^m : αᵀ y = 0; 0 ≤ α ≤ C₁ 1; 0 ≤ v ≤ 1}. Obviously, the set K is a nonempty bounded polyhedral convex set in R^n × R^m and f is a Lipschitz function with bounded gradient on K. It follows from Theorem 7 in [19] that there exists C̃₃ > 0 such that for all C₃ > C̃₃ the two problems (5)–(6) are equivalent, in the sense that they have the same optimal value and the same solution set. Now, we reformulate the problem (6) as a DC program on which DCA can be applied. To obtain a DC decomposition, we use the following proposition. Let us define the function

ψ(α, v) := (ρ/2) (‖v‖²₂ + ‖α‖²₂) − (1/2) αᵀ Q(v) α.

Proposition 1. There exists ρ > 0 such that the function ψ is convex.

Proof. Clearly, ψ is convex when we take ρ greater than or equal to the spectral radius of the Hessian matrix of Λ(α, v) = (1/2) αᵀ Q(v) α, i.e. ρ ≥ ρ(∇²Λ(α, v)). We have

∇²Λ(α, v) = [ ∂²Λ/∂α²(α, v)   ∂²Λ/∂α∂v(α, v) ; ∂²Λ/∂v∂α(α, v)   ∂²Λ/∂v²(α, v) ],

∂Λ/∂α(α, v) = Q(v)α;   ∂Λ/∂v(α, v) = − \sum_{i,j=1}^{n} α_i α_j Q_ij(v) ν^(ij);   ∂²Λ/∂α²(α, v) = Q(v),

∂²Λ/∂v∂α_i(α, v) = −2 \sum_{j=1}^{n} α_j Q_ij(v) ν^(ij);   ∂²Λ/∂α_i∂v(α, v) = −2 \sum_{j=1}^{n} α_j ν^(ij) Q_ij(v),

∂²Λ/∂v²(α, v) = \sum_{i,j=1}^{n} α_i α_j Q_ij(v) [ 2 ν^(ij) (ν^(ij))ᵀ − diag((x^(i) − x^(j))²) ].

Here we denote by z² the element-wise square of a vector z, defined as z² := z ∘ z, and by diag(z) a square diagonal matrix with the elements of the vector z on the main diagonal; and ν^(ij) := (v/(√2 σ)) ∘ (x^(i) − x^(j))². From [5], we have

ρ(∇²Λ(α, v)) ≤ ‖∇²Λ(α, v)‖_F ≤ ‖∂²Λ/∂α²(α, v)‖_F + 2 \sum_i ‖∂²Λ/∂v∂α_i(α, v)‖_F + ‖∂²Λ/∂v²(α, v)‖_F.

We observe that Q_ij, α_i, and v_i are bounded by 1, C₁, and 1, respectively. Therefore, we can take ρ ≥ ρ̃ with

ρ̃ := n + (2√2 C₁/√σ + C₁²) \sum_{i,j=1}^{n} ‖x^(i) − x^(j)‖²₄ + (C₁²/σ) \sum_{i,j=1}^{n} ‖x^(i) − x^(j)‖⁴₄.

The proof is complete. ∎

By the following DC decomposition of (1/2) αᵀ Q(v) α:

(1/2) αᵀ Q(v) α = (ρ/2) (‖v‖²₂ + ‖α‖²₂) − ψ(α, v),

the problem (6) can be rewritten as a DC program

min{ f(α, v) := g(α, v) − h(α, v) : (α, v) ∈ R^n × R^m },   (7)

where

g(α, v) := (ρ/2) (‖v‖²₂ + ‖α‖²₂) − 1ᵀ α + (C₂ + C₃) 1ᵀ v + χ_K(α, v),
h(α, v) := ψ(α, v) + C₃ ‖v‖²₂.   (8)

Note that the DC components g and h are differentiable in R^n × R^m.

2.3 Standard DCA for Solving the DC Program (7)

According to the standard DCA scheme in Sect. 2.1, we need to construct two sequences {(α^k, v^k)} and {(β^k, u^k)} such that β^k = ∇_α h(α^k, v^k), u^k = ∇_v h(α^k, v^k), and (α^{k+1}, v^{k+1}) is an optimal solution to the following convex program:

min{ g(α, v) − ⟨(β^k, u^k), (α, v)⟩ : (α, v) ∈ R^n × R^m }.   (9)

Computing the gradient of h: From the definition (8), we have

∇_α h(α, v) = ∇_α ψ(α, v) = ρα − Q(v)α;   ∇_v h(α, v) = ∇_v ψ(α, v) + 2C₃ v.

We have ∇_v ψ(α, v) = (∇_{v_p} ψ(α, v))_{p=1,…,m}, and

∇_{v_p} ψ(α, v) = ρ v_p + (1/(2σ²)) \sum_{i,j=1}^{n} α_i α_j Q_ij(v) v_p (x_p^(i) − x_p^(j))².

Thus, we can take

β^k = ρα^k − Q(v^k)α^k;   u^k = ∇_v ψ(α^k, v^k) + 2C₃ v^k.   (10)
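A minimal NumPy sketch of these gradient computations is given below for illustration; it assumes the kernel of (3) and hypothetical data X, y, alpha, v and parameters sigma, rho, C3, and it is not the authors' MATLAB implementation.

import numpy as np

def Q_matrix(X, y, v, sigma):
    # Q_ij(v) = y_i y_j exp(-||v o x_i - v o x_j||^2 / (2 sigma^2))
    Z = X * v                                   # gate the features
    sq = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(axis=2)
    return np.outer(y, y) * np.exp(-sq / (2.0 * sigma ** 2))

def grad_h(X, y, alpha, v, sigma, rho, C3):
    Q = Q_matrix(X, y, v, sigma)
    grad_alpha = rho * alpha - Q @ alpha        # nabla_alpha h = rho*alpha - Q(v)*alpha
    D2 = (X[:, None, :] - X[None, :, :]) ** 2   # (x_p^i - x_p^j)^2, shape (n, n, m)
    # nabla_{v_p} psi = rho*v_p + (1/(2 sigma^2)) sum_ij a_i a_j Q_ij v_p D2_ijp
    w = np.outer(alpha, alpha) * Q
    grad_v = rho * v + v * np.einsum('ij,ijp->p', w, D2) / (2.0 * sigma ** 2) + 2.0 * C3 * v
    return grad_alpha, grad_v

# Hypothetical toy data
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3)); y = np.array([1, -1, 1, -1, 1])
alpha = np.full(5, 0.1); v = np.array([1.0, 0.0, 1.0])
print(grad_h(X, y, alpha, v, sigma=1.0, rho=10.0, C3=100.0))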

Solving the convex subproblem (9): Since the function g is separable in the variables α and v, we can compute the two vectors α^{k+1} and v^{k+1} separately. In particular, we compute α^{k+1} by solving the following convex quadratic program:

min_α  (ρ/2) ‖α‖²₂ − ⟨β^k + 1, α⟩
s.t.  αᵀ y = 0;  0 ≤ α ≤ C₁ 1.   (11)

It is worth noting that the optimal solution to the subproblem (11) is exactly the projection of the point (β^k + 1)/ρ onto the intersection of the hyperplane {α ∈ R^n : yᵀα = 0} and the box {α ∈ R^n : 0 ≤ α ≤ C₁ 1}. According to, e.g., [1,22], the projection point can be explicitly computed as

α^{k+1} := min( max( (β^k + 1)/ρ − μ̂ y, 0 ), C₁ 1 ),   (12)

where μ̂ ∈ R is a solution of the equation

φ(μ) = yᵀ min( max( (β^k + 1)/ρ − μ y, 0 ), C₁ 1 ) = 0,   (13)

and the min (resp. max) of two vectors is defined element-wise. The single-variable equation (13) can easily be solved by several existing algorithms: e.g., an O(n) median-finding algorithm in [22], a bisection method in [1], or a combination of bisection, secant, and inverse quadratic interpolation methods in [4]. Next, we find v^{k+1} by solving the following convex subproblem:

min_v  (ρ/2) ‖v‖²₂ − ⟨u^k − (C₂ + C₃) 1, v⟩
s.t.  0 ≤ v ≤ 1.   (14)

Similarly to α, v^{k+1} is the projection of the point (u^k − (C₂ + C₃) 1)/ρ onto the box {v ∈ R^m : 0 ≤ v ≤ 1}. Thus, an explicit solution to the subproblem (14) is

v^{k+1} := min( max( (u^k − (C₂ + C₃) 1)/ρ, 0 ), 1 ).   (15)
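The two explicit updates (12)–(15) can be sketched in a few lines of Python, for example as below; brentq from SciPy plays the role of the single-variable root finder for (13), the bracketing bounds are a simple assumption rather than part of the paper, and both classes are assumed to be present in y.

import numpy as np
from scipy.optimize import brentq

def update_alpha(beta, y, rho, C1):
    # Projection of (beta + 1)/rho onto {alpha : y^T alpha = 0, 0 <= alpha <= C1},
    # equations (12)-(13); assumes y contains both +1 and -1 labels.
    point = (beta + 1.0) / rho
    phi = lambda mu: y @ np.clip(point - mu * y, 0.0, C1)     # equation (13)
    bound = np.abs(point).max() + C1                          # crude bracketing interval
    mu_hat = brentq(phi, -bound, bound)
    return np.clip(point - mu_hat * y, 0.0, C1)               # equation (12)

def update_v(u, rho, C2, C3):
    # Projection of (u - (C2 + C3) 1)/rho onto the box [0, 1]^m, equation (15)
    return np.clip((u - (C2 + C3)) / rho, 0.0, 1.0)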


Algorithm 1. DCA-KSVM-Bi: Standard DCA for solving the problem (7)
Initialization: Choose (α⁰, v⁰) ∈ K, ε > 0, σ > 0, C₁ > 0, C₂ > 0, C₃ > 0, ρ > 0. Set k = 0.
repeat
1. Compute (β^k, u^k) ∈ ∂h(α^k, v^k) using (10).
2. Compute μ̂ ∈ R by solving the single-variable equation (13).
3. Compute α^{k+1} ∈ R^n using (12).
4. Compute v^{k+1} ∈ R^m using (15).
5. Set k = k + 1.
until ‖(α^k, v^k) − (α^{k−1}, v^{k−1})‖₂ ≤ ε(1 + ‖(α^{k−1}, v^{k−1})‖₂) or |f(α^k, v^k) − f(α^{k−1}, v^{k−1})| ≤ ε(1 + |f(α^{k−1}, v^{k−1})|).

Hence, DCA applied to (7) can be summarized in Algorithm 1 (DCA-KSVM-Bi). According to the convergence properties of the generic DCA scheme, we deduce the following interesting convergence properties of DCA-KSVM-Bi.

Theorem 1.
i) DCA-KSVM-Bi generates a sequence {(α^k, v^k)} such that the sequence {f(α^k, v^k)} is decreasing.
ii) The sequence {(α^k, v^k)} converges to a critical point (α*, v*) of (7).
iii) For each i = 1,…,m, if at an iteration q we have v_i^q = 0, then v_i^l = 0 for all l > q.

Proof. The properties i) and ii) are direct consequences of the convergence properties of standard DCA. As for property iii), if v_i^q = 0 then we deduce from (10) that u_i^q = 0. From (15), we then have v_i^{q+1} = min( max( −(C₂ + C₃)/ρ, 0 ), 1 ) = 0. Similarly, we derive that v_i^l = 0 for all l > q. This completes the proof. ∎

Remark 1. From the property iii) of Theorem 1, we see that if the selection variable of a feature is zero at an iteration k, then we can remove this feature at all next iterations l > k. Therefore, the total number of eliminated features is non-decreasing at each iteration. Moreover, the computational time of the algorithm will be improved.
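In code, this pruning consequence of Remark 1 can be sketched as follows; it is an illustrative fragment under the assumption that X stores the training examples with one column per feature and v is the current selection vector, not the authors' implementation.

import numpy as np

def prune_unselected_features(X, v, keep_idx):
    # Once v_i = 0 at some iteration, feature i stays at 0 (Theorem 1 iii)),
    # so its column can be removed from X for all remaining iterations.
    mask = v > 0
    return X[:, mask], v[mask], keep_idx[mask]

# Hypothetical state after some DCA iteration
X = np.arange(12.0).reshape(3, 4)
v = np.array([1.0, 0.0, 0.7, 0.0])
keep_idx = np.arange(4)
X, v, keep_idx = prune_unselected_features(X, v, keep_idx)
print(X.shape, keep_idx)   # (3, 2) [0 2]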

3 Numerical Experiments

In our experiments, we compare the proposed algorithm DCA-KSVM-Bi with the existing algorithm KP-SVM [23] as described in Sect. 1. The two comparative algorithms DCA-KSVM-Bi and KP-SVM are implemented in MATLAB R2019b and run on a PC with an Intel(R) Core(TM) i5-3470 CPU @ 3.20 GHz and 8 GB RAM. We run on the four benchmark classification datasets summarized in Table 1. More details of these datasets are given on the LIBSVM website (https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/), the UCI Machine Learning Repository (http://www.ics.uci.edu/~mlearn/MLRepository.html), and the references therein.


Table 1. Datasets used in our experiments.
Dataset | Name             | # Instances (n) | # Features (m)
D1      | Ionosphere       | 351             | 34
D2      | Diabetes         | 768             | 8
D3      | Wisconsin-breast | 569             | 30
D4      | Gastroenterology | 152             | 698

Knowing from Proposition 1 that the condition ρ ≥ ρ̃ is only a sufficient condition for the convexity of the function ψ, in our experiments we can choose ρ = ρ̃ at each iteration. However, ψ may still be convex for some ρ < ρ̃, and from our experiments, the smaller the ρ that ensures the convexity of ψ, the more efficient our algorithm can be. Thus, in these experiments, we update ρ as follows. We set the initial value ρ = ρ̃. At each iteration, if the objective function value decreases, we divide ρ by 2; otherwise, we stop decreasing, multiply ρ by 2, and run our algorithm with this ρ until its convergence. We choose the best parameters of each algorithm by using 10-fold cross validation on the whole dataset. As described in [23], the parameters C₁ and σ are tuned by using a standard SVM model selection procedure for the Gaussian kernel function without feature selection. To train the SVM classifiers for binary classification in the tuning procedure and in the algorithm KP-SVM, we use the MATLAB function fitcsvm (https://fr.mathworks.com/help/stats/fitcsvm.html). The ranges of the parameters C₁ and σ are {0.1, 0.5, 1, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 200, 300, 400, 500, 1000} and {0.1, 0.5, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 20, 100}, respectively. The best tuned values of C₁ and σ are fixed and used in the learning procedure of both algorithms. The parameters of KP-SVM are the ℓ0-penalty parameter C₂, the ℓ0-approximation parameter β, the feature selection threshold, and the step-size parameter γ. We tune C₂ ∈ {0.1, 0.2, 0.5, 1, 5, 10, 20, 40} and γ ∈ {10^−3, 10^−2, 10^−1, 0.25, 0.5}, and we set the feature selection threshold (resp. β) to 10^−5 (resp. 5) as mentioned in [23]. As for the initial parameters of our algorithm, the ranges to tune are C₂ ∈ {0.1, 0.5, 1}, C₃ ∈ {10, 100, 500}, and two initial points v⁰ (v⁰ = 1 or v⁰ with elements randomized in [0.01, 1]). After obtaining the best parameters of the algorithms, we learn and evaluate the SVM models by using 5-fold cross validation on the whole dataset. The default tolerance in the stopping criteria of both algorithms is ε = 10^−4. The feasible point α⁰ of our algorithm is set to 0. To solve the single-variable Eq. (13), we use the MATLAB function fzero (https://fr.mathworks.com/help/matlab/ref/fzero.html; see [4]) because of its efficiency in terms of quality and rapidity compared with the algorithms in [1,22]. We are interested in the following criteria to evaluate the effectiveness of the proposed algorithm: the Percentage of Well Classified Objects (PWCO, in %), the number and percentage (in %) of selected features (SF), and the training CPU

H. A. Le Thi and V. T. Ho

Table 2. Comparative results of DCA-KSVM-Bi and KP-SVM on 4 real datasets. Bold values indicate the best result. Dataset n

m

DCA-KSVM-Bi KP-SVM

D1

351

34 PWCO1 PWCO2 SF CPU

97.07 ± 0.81 93.72 ± 1.93 23 (68%) 0.55 ± 0.05

87.24 ± 1.56 84.88 ± 4.15 24 (70%) 0.35 ± 0.01

D2

768

8 PWCO1 PWCO2 SF CPU

73.95 ± 5.73 73.30 ± 2.17 5.2 (65%) 0.50 ± 0.10

65.1 ± 0.05 65.1 ± 0.23 5.8 (72%) 2.77 ± 0.15

D3

569

30 PWCO1 PWCO2 SF CPU

95.16 ± 3.58 92.44 ± 1.7 26.8 (89%) 7.35 ± 2.15

93.27 ± 1.14 92.28 ± 2.36 27 (90%) 4.98 ± 0.61

D4

152 699 PWCO1 PWCO2 SF CPU

90.84 ± 2.97 84.25 ± 2.36 352 (50%) 0.12 ± 0.01

50.32 ± 0.44 48.66 ± 1.82 359 (51%) 0.13 ± 0.00

time (in seconds). PWCO1 (resp. PWCO2 ) denotes the PWCO on training set (resp. test set). After performing 5-folds cross-validation, we report the average and the standard deviation of each evaluation criterion in Table 2. Bold values in Table 2 are the best results. Comments on Numerical Results: we observe from numerical experiments that DCA-KSVM-Bi is better than KP-SVM on all four datasets for both feature selection and classification. In terms of PWCO, our algorithm is always better on both training and test sets of all four datasets. The gain varies from 8.2% to 9.83% on two datasets D1 and D2, especially up to 40.5% on the training set of D4. Concerning the number of selected features, DCA-KSVM-Bi selects from 50% to 89% of features. Our algorithm is slightly better than KP-SVM with the gain in percentage of selected features from 1% to 7%. As for CPU time, our algorithm runs very fast on three out of four datasets (less than 0.6 s). On the dataset D3, DCA-KSVM-Bi runs slower 1.4 times than KP-SVM but faster 5.54 times on the dataset D1. Moreover, DCA-KSVM-Bi always furnishes the binary solutions, which is an advantage of using DCA for solving the continuous problem.

4

Conclusions

We have developed a DC programming and DCA based approach for solving the Gaussian Kernel Support Vector Machines problem with binary variables

DCA for Gaussian Kernel Support Vector Machines with Feature Selection

233

for feature selection. Exploiting an exact penalty technique in DC programming to deal with the binary variable, we have obtained an equivalent continuous formulation, namely, both combinatorial and continuous formulations have the same optimal value and the same solution set. We have applied DC programming and DCA for this continuous formulation to get an efficient DCA algorithm having some interesting properties: The computations are explicit, inexpensive; and we can completely remove a feature at the first iteration where it is not selected. Therefore, we improve the computational time. The numerical results on four real-world datasets show that our approach is effective in comparison with the existing algorithm in terms of both the feature selection and classification.

References 1. Beck, A.: First-Order Methods in Optimization. Society for Industrial and Applied Mathematics, Philadelphia (2017) 2. Boser, B.E., Guyon, I.M., Vapnik, V.N.: A training algorithm for optimal margin classifiers. In: Proceedings of the Fifth Annual Workshop on Computational Learning Theory, COLT 1992, pp. 144–152. Association for Computing Machinery (1992) 3. da Costa, D.M.M., Peres, S.M., Lima, C.A.M., Mustaro, P.: Face recognition using support vector machine and multiscale directional image representation methods: a comparative study. In: 2015 International Joint Conference on Neural Networks (IJCNN), pp. 1–8 (2015) 4. Dekker, T.J.: Finding a zero by means of successive linear interpolation. In: Constructive aspects of the fundamental theorem of algebra, pp. 37–51. WileyInterscience (1969) 5. Derzko, N.A., Pfeffer, A.M.: Bounds for the spectral radius of a matrix. Math. Comput. 19(89), 62–67 (1965) 6. Guyon, I., Elisseeff, A.: An introduction to variable and feature selection. J. Mach. Learn. Res. 3, 1157–1182 (2003) 7. Le Thi, H.A., Pham Dinh, T., Le Dung, M.: Exact penalty in DC programming. Vietnam J. Math. 27(2), 169–178 (1999) 8. Le Thi, H.A.: DC programming and DCA for supply chain and production management: state-of-the-art models and methods. Int. J. Prod. Res. 58(20), 6078–6114 (2020) 9. Le Thi, H.A., Ho, V.T.: Online learning based on online DCA and application to online classification. Neural Comput. 32(4), 759–793 (2020) 10. Le Thi, H.A., Ho, V.T., Pham Dinh, T.: A unified DC programming framework and efficient DCA based approaches for large scale batch reinforcement learning. J. Glob. Optim. 73(2), 279–310 (2018). https://doi.org/10.1007/s10898-018-0698-y 11. Le Thi, H.A., Le, H.M., Pham Dinh, T.: New and efficient DCA based algorithms for minimum sum-of-squares clustering. Pattern Recognit. 47(1), 388–401 (2014) 12. Le Thi, H.A., Le, H.M., Phan, D.N., Tran, B.: Novel DCA based algorithms for a special class of nonconvex problems with application in machine learning. Appl. Math. Comput. 409, 1–22 (2020). https://doi.org/10.1016/j.amc.2020.125904 13. Le Thi, H.A., Nguyen, M.C., Pham Dinh, T.: A DC programming approach for finding communities in networks. Neural Comput. 26(12), 2827–2854 (2014)

234

H. A. Le Thi and V. T. Ho

14. Le Thi, H.A., Nguyen, V.V., Ouchani, S.: Gene selection for cancer classification using DCA. In: Tang, C., Ling, C.X., Zhou, X., Cercone, N.J., Li, X. (eds.) ADMA 2008. LNCS (LNAI), vol. 5139, pp. 62–72. Springer, Heidelberg (2008). https:// doi.org/10.1007/978-3-540-88192-6 8 15. Le Thi, H.A., Pham Dinh, T.: The DC (difference of convex functions) programming and DCA revisited with DC models of real world nonconvex optimization problems. Ann. Oper. Res. 133(1–4), 23–46 (2005) 16. Le Thi, H.A., Pham Dinh, T.: Difference of convex functions algorithms (DCA) for image restoration via a Markov random field model. Optim. Eng. 18(4), 873–906 (2017). https://doi.org/10.1007/s11081-017-9359-0 17. Le Thi, H.A., Pham Dinh, T.: DC programming and DCA: thirty years of developments. Math. Program. 169(1), 5–68 (2018). https://doi.org/10.1007/s10107-0181235-y 18. Le Thi, H.A., Pham Dinh, T., Le, H.M., Vo, X.T.: DC approximation approaches for sparse optimization. Eur. J. Oper. Res. 244(1), 26–46 (2015) 19. Le Thi, H.A., Pham Dinh, T., Ngai, H.V.: Exact penalty and error bounds in DC programming. J. Glob. Optim. 52(3), 509–535 (2012) 20. Le Thi, H.A., Phan, D.N.: DC programming and DCA for sparse optimal scoring problem. Neurocomputing 186, 170–181 (2016) 21. Le Thi, H.A., Vo, X.T., Pham Dinh, T.: Efficient nonnegative matrix factorization by DC programming and DCA. Neural Comput. 28(6), 1163–1216 (2016) 22. Maculan, N., Santiago, C.P., Macambira, E.M., Jardim, M.H.C.: An o(n) algorithm for projecting a vector on the intersection of a hyperplane and a box in RN. J. Optim. Theory Appl. 117(3), 553–574 (2003) 23. Maldonado, S., Weber, R., Basak, J.: Simultaneous feature selection and classification using kernel-penalized support vector machines. Inf. Sci. 181(1), 115–128 (2011) 24. Neumann, J., Schn¨ orr, C., Steidl, G.: Combined SVM-based feature selection and classification. Mach. Learn. 61(1), 129–150 (2005) 25. Pham Dinh, T., Le Thi, H.A.: Recent advances in DC programming and DCA. In: Nguyen, N.-T., Le-Thi, H.A. (eds.) Transactions on Computational Intelligence XIII. LNCS, vol. 8342, pp. 1–37. Springer, Heidelberg (2014). https://doi.org/10. 1007/978-3-642-54455-2 1 26. Pham Dinh, T., Le Thi, H.A.: Convex analysis approach to DC programming: theory, algorithms and applications. Acta Mathematica Vietnamica 22(1), 289– 355 (1997) 27. Pham Dinh, T., Le Thi, H.A.: DC optimization algorithms for solving the trust region subproblem. SIAM J. Optim. 8(2), 476–505 (1998) 28. Phan, D.N., Le, H.M., Le Thi, H.A.: Accelerated difference of convex functions algorithm and its application to sparse binary logistic regression. In: Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, IJCAI-2018, pp. 1369–1375. International Joint Conferences on Artificial Intelligence Organization (2018) 29. Phan, D.N., Le Thi, H.A.: Group variable selection via p,0 regularization and application to optimal scoring. Neural Netw. 118, 220–234 (2019) 30. Sun, X., Liu, L., Wang, H., Song, W., Lu, J.: Image classification via support vector machine. In: 2015 4th International Conference on Computer Science and Network Technology (ICCSNT), vol. 1, pp. 485–489 (2015)

Training Support Vector Machines for Dealing with the ImageNet Challenging Problem Thanh-Nghi Do1,2(B) and Hoai An Le Thi3 1 2

College of Information Technology, Can Tho University, 92000 Cantho, Vietnam [email protected] UMI UMMISCO 209 (IRD/UPMC), Sorbonne University, Pierre and Marie Curie University, Paris 6, France 3 IA - LGIPM, University of Lorraine, Nancy, France [email protected]

Abstract. We propose the parallel multi-class support vector machines (Para-SVM) algorithm to efficiently perform the classification task of the ImageNet challenging problem with very large number of images and a thousand classes. Our Para-SVM learns in the parallel way to create ensemble binary SVM classifiers used in the One-Versus-All multiclass strategy. The stochastic gradient descent (SGD) algorithm rapidly trains the binary SVM classifier from mini-batches being created by under-sampling training dataset. The numerical test results on ImageNet challenging dataset show that the Para-SVM algorithm is faster and more accurate than the state-of-the-art SVM algorithms. Our ParaSVM achieves an accuracy of 74.89% obtained in the classification of ImageNet-1000 dataset having 1,261,405 images in 2048 deep features into 1,000 classes in 53.29 min using a PC Intel(R) Core i7-4790 CPU, 3.6 GHz, 4 cores.

Keywords: Large scale image classification machines · Multi-class

1

· Support vector

Introduction

The classification of images is one of the most important topics in communities of computer vision and machine learning. The aim of image classification tasks is to automatically categorize the image into one of predefined classes. ImageNet dataset [7,8] with more than 14 million images for 21,841 classes raises more challenges in training classifiers, due to the large scale number of images and classes. Many researches [9,10,12,13,23,33] proposed to use popular handcrafted features such as the scale-invariant feature transform (SIFT [20,21]), the bag-of-words model (BoW [1,18,27]) and support vector machines (SVM [29]) for dealing with large scale ImageNet dataset. Recent deep learning networks, c The Author(s), under exclusive license to Springer Nature Switzerland AG 2022  H. A. Le Thi et al. (Eds.): MCO 2021, LNNS 363, pp. 235–246, 2022. https://doi.org/10.1007/978-3-030-92666-3_20

236

T.-N. Do and H. A. Le Thi

including VGG19 [26], ResNet50 [15], Inception v3 [28], Xception [6] are proposed to efficiently classify ImageNet dataset with the prediction correctness more over 70%. In this paper, we propose the new parallel multi-class support vector machines (Para-SVM) algorithm to efficiently handle the classification task of ImageNet challenging dataset. The Para-SVM algorithm bases on the One-Versus-All (OVA [29]) strategy to deal with the very large number of classes. Ensemble binary SVM classifiers in the multi-class model are trained parallel on multi-core computer. In which, the stochastic gradient descent (SGD) algorithm rapidly trains the binary SVM classifier from mini-batches being produced by under-sampling training dataset. And then, the Para-SVM algorithm is faster and more accurate than the state-of-the-art SVM algorithms such as LIBLINEAR [14] and kSVM [11]. The Para-SVM achieves an accuracy of 74.89% obtained in the classification of ImageNet-1000 dataset having 1,261,405 images in 2048 deep features into 1,000 classes in 53.29 min using a PC Intel(R) Core i7-4790 CPU, 3.6 GHz, 4 cores. The remainder of this paper is organized as follows. Section 2 briefly introduces the SVM classification using the SGD algorithm. Section 3 presents our Para-SVM for handling the very large number of classes. Section 4 shows the experimental results. We then conclude in Sect. 5.

2

Support Vector Machines

Let us consider a binary classification problem with the dataset D = [X, Y ] consisting of m datapoints X = {x1 , x2 , . . . , xm } in the n-dimensional input space Rn , having corresponding labels Y = {y1 , y2 , . . . , ym } being ±1. The SVM algorithm [29] tries to find the best separating plane (denoted by the normal vector w ∈ Rn ), i.e. furthest from both class +1 and class −1. It is accomplished through the maximization of the margin (or the distance) between the supporting planes for each class, i.e. 2/w. Any point xi falling on the wrong side of its supporting plane is considered to be an error, denoted by zi = 1 − yi (w.xi ) ≥ 0. The error zi is rewritten by L(w, [xi , yi ]) = max{0, 1 − yi (w.xi )}. And then, SVM has to simultaneously maximize the margin and minimize the error. This is accomplished through the unconstrained problem (1). m

min Ψ (w, [X, Y ]) =

λ 1  w2 + L(w, [xi , yi ]) 2 m i=1

(1)

where λ is a positive constant used to tune the trade-off between the margin size and the error. And then, [2,25] proposed the stochastic gradient descent (SGD) method to solve the unconstrained problem (1). The SGD for SVM (denoted by SVM-SGD) updates w on T epochs with a learning rate η. For each epoch t1 , the SVM-SGD

1

We use subscript t to refer to the epoch t.

Training SVM for the ImageNet Challenging Problem

237

uses a single randomly received datapoint (xi , yi ) to compute the sub-gradient ∇t Ψ (w, [xi , yi ]) and update wt+1 as follows: wt+1 = wt − ηt ∇t Ψ (wt , [xt , yt ]) = wt − ηt (λwt + ∇t L(wt , [xt , yt ]))

∇t L(w, [xt , yt ]) = ∇t max{0, 1 − yt (wt .xt )}  −yt xt if yt (wt .xt ) < 1 = 0 otherwise

(2)

(3)

The SVM-SGD using the update rule (2) is described in Algorithm 1.

Algorithm 1: SVM-SGD(D, λ, T ) for binary classification input :

output: 1 2 3 4 5 6 7 8 9 10 11 12 13

training dataset D positive constant λ > 0 number of epochs T hyperplane w

begin init w1 s.t. w1  ≤ √1λ for t ← 1 to T do randomly pick a datapoint [xt , yt ] from training dataset D 1 set ηt = λt if (yi (wt .xi ) < 1) then wt+1 = wt − ηt (λwt − yi xi ) else wt+1 = wt − ηt λwt end end return wt+1 end

As mentioned in [2,25], the SVM-SGD algorithm quickly converges to the optimal solution due to the fact that the unconstrained problem (1) is convex optimization problems on very large datasets. The algorithmic complexity of SVM-SGD is linear with the number of datapoints. An example of its effectiveness is given with the binary classification of 780,000 datapoints in 47,000dimensional input space in 2 s on a PC and the test accuracy is similar to standard SVM, e.g. LIBLINEAR [14].

238

T.-N. Do and H. A. Le Thi

The binary SVM solver can be extended for handling multi-class problems (c classes, c ≥ 3). Practical approaches for multi-class SVMs are to train a series of binary SVMs, including One-Versus-All (OVA [29]), One-Versus-One (OVO [17]). The OVA strategy shown in Fig. 1 builds c binary SVM models where the ith one separates the ith class from the rest. The OVO strategy illustrated in Fig. 2 constructs c(c − 1)/2 binary SVM models for all binary pairwise combinations among c classes. The class of a new datapoint is then predicted with the largest distance vote. In practice, the OVA strategy is implemented in LIBLINEAR [14] and the OVO technique is also used in LibSVM [4].

Fig. 1. Multi-class SVM (One-Versus-All)

Fig. 2. Multi-class SVM (One-Versus-One)

Training SVM for the ImageNet Challenging Problem

3

239

Parallel Multi-class Support Vector Machines for the Large Number of Classes

When dealing with a very large number of classes, e.g. c = 1,000 classes, the One-Versus-One strategy needs to train 499,500 binary classifiers and to use all of them in classification. This is too expensive compared with the 1,000 binary models learned by the One-Versus-All strategy, so the One-Versus-All approach is better suited to such a very large number of classes. Nevertheless, the multi-class SVM algorithm using One-Versus-All leads to two main issues:

1. the multi-class SVM algorithm has to deal with imbalanced datasets when building the binary classifiers,
2. the multi-class SVM algorithm also takes a very long time to train the very large number of binary classifiers in sequential mode.

To overcome these problems, we propose to train ensemble binary classifiers with an under-sampling strategy and then to parallelize the training of all binary classifiers on multi-core machines.

Training Binary SVM Classifiers with the Under-Sampling Strategy. In the multi-class SVM algorithm using the One-Versus-All approach, the learning task of a binary SVM-SGD classifier tries to separate the class c_i (positive class) from the c − 1 other classes (negative class). For a very large number of classes, this leads to an extreme imbalance between the positive and the negative class. The problem for binary SVM-SGD comes from line 4 of Algorithm 1. Given a classification problem with 1,000 classes, the probability of sampling a positive datapoint is very small (about 0.001) compared with the large chance of sampling a negative datapoint (about 0.999). The binary SVM-SGD classifier therefore focuses mostly on the negative datapoints and has difficulty separating the positive class from the negative class; this is the well-known class imbalance problem.

One of the most popular solutions for dealing with imbalanced data [16,31,32] is to change the data distribution, either by over-sampling the minority class [5] or by under-sampling the majority class [19,24]. We propose to under-sample the majority class, because over-sampling the minority class is very expensive for large datasets with millions of datapoints. Let the training dataset D consist of the positive class D+ (|D+| is the cardinality of the positive class c_i) and the negative class D− (|D−| is the cardinality of the negative class). Our algorithm trains κ binary SVM-SGD classifiers {w_1, w_2, ..., w_κ} from mini-batches built by under-sampling the majority (negative) class in order to separate the positive class c_i from the negative class. The original bagging [3] uses bootstrap sampling from the training dataset without regard to the class distribution; our algorithm follows the idea of bagging with a strategy more appropriate for class imbalance, and is therefore called Bagged-SVM-SGD. At the i-th iteration, the mini-batch mB_i includes n_p datapoints sampled randomly without replacement


from the positive class D+ and n_p × |D−|/|D+| datapoints sampled without replacement from the negative class D−; the learning Algorithm 1 then learns w_i from mB_i. Such a mini-batch improves the chance of sampling a positive datapoint in Algorithm 1. Bagged-SVM-SGD averages all classifiers {w_1, w_2, ..., w_κ} to create the final model w separating the class c_i from the other ones. Bagged-SVM-SGD is described in Algorithm 2.

Algorithm 2: Bagged-SVM-SGD(c_i, D, λ, T, κ) for training the binary classifier (class c_i versus all)
input:  positive class c_i versus the other classes
        training dataset D
        positive constant λ > 0
        number of epochs T
        number of SVM-SGD models κ
output: hyperplane w
begin
    split training dataset D into the positive data D+ (class c_i) and the negative data D−
    for i ← 1 to κ do
        create a mini-batch mB_i by sampling without replacement n_p datapoints from D+ and D′_− datapoints from D− (with |D′_−| = n_p |D−|/|D+|)
        w_i = SVM-SGD(mB_i, λ, T)
    end
    return w = (1/κ) Σ_{i=1}^{κ} w_i
end
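A compact way to prototype this bagging-with-under-sampling scheme is sketched below, using scikit-learn's SGDClassifier with hinge loss as a stand-in for the SVM-SGD solver of Algorithm 1; the sampling sizes follow the mini-batch definition above, and the function and parameter names are illustrative only.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

def bagged_svm_sgd(X, y, ci, lam=1e-4, T=5, kappa=20, n_p=None, seed=0):
    """Train kappa under-sampled binary classifiers (class ci vs. rest) and average them."""
    rng = np.random.default_rng(seed)
    pos = np.where(y == ci)[0]
    neg = np.where(y != ci)[0]
    n_p = n_p or len(pos)
    n_n = min(len(neg), int(n_p * len(neg) / len(pos)))   # |D'_-| = n_p |D-| / |D+|
    ws = []
    for _ in range(kappa):
        idx = np.concatenate([rng.choice(pos, n_p, replace=False),
                              rng.choice(neg, n_n, replace=False)])
        yb = np.where(y[idx] == ci, 1, -1)                 # positive class versus all
        clf = SGDClassifier(loss="hinge", alpha=lam, max_iter=T, tol=None).fit(X[idx], yb)
        ws.append(clf.coef_.ravel())
    return np.mean(ws, axis=0)                             # w = (1/kappa) * sum(w_i)
```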

Parallel Training of Binary SVM Classifiers. The Bagged-SVM-SGD algorithm independently trains c binary classifiers for the c classes of the multi-class SVM. This is a nice property for learning the c binary classifiers in parallel in order to speed up training on large-scale multi-class datasets. We propose the parallel multi-class SVM (called Para-SVM, described in Algorithm 3), which launches the binary Bagged-SVM-SGD training tasks in parallel for the very large number of classes, using the shared-memory multiprocessing programming model OpenMP on multi-core computers. The #pragma omp parallel for schedule(dynamic) directive in Algorithm 3 explicitly instructs the compiler to parallelize the for loop.


Algorithm 3: Para-SVM for parallel training of ensemble binary SVM classifiers for the very large number of classes
input:  training dataset D with c classes
        positive constant λ > 0
        number of epochs T
        number of SVM-SGD models κ
output: hyperplanes {w_1, w_2, ..., w_c}
begin
    #pragma omp parallel for schedule(dynamic)
    for c_i ← 1 to c do                        /* class c_i */
        w_{c_i} = Bagged-SVM-SGD(c_i, D, λ, T, κ)
    end
end
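The paper's Para-SVM is implemented in C/C++ with OpenMP. Purely as an illustration of the same per-class parallelism, the following Python sketch distributes the Bagged-SVM-SGD calls over worker processes with the standard multiprocessing module; bagged_svm_sgd is the function sketched earlier (it must be importable by the workers), and the 4-worker pool mirrors the 4-core machine used in the experiments.

```python
from functools import partial
from multiprocessing import Pool

def train_one_class(ci, X, y, lam, T, kappa):
    # One binary One-Versus-All task: class ci against the rest.
    return ci, bagged_svm_sgd(X, y, ci, lam=lam, T=T, kappa=kappa)

def para_svm(X, y, classes, lam=1e-4, T=5, kappa=20, n_workers=4):
    worker = partial(train_one_class, X=X, y=y, lam=lam, T=T, kappa=kappa)
    with Pool(n_workers) as pool:
        results = pool.map(worker, classes)    # distribute the classes over the worker pool
    return dict(results)                       # {class label: hyperplane w}

# if __name__ == "__main__":
#     models = para_svm(X_train, y_train, classes=range(1000))
```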

4 Experimental Results

In order to evaluate the performance (accuracy and training time) of the Para-SVM algorithm for classifying a large amount of data with a very large number of classes, we have implemented Para-SVM in C/C++ with OpenMP [22]. We are interested in the best state-of-the-art linear SVM algorithm, LIBLINEAR (a library for large linear classification [14], in its parallel version on multi-core computers), and in kSVM (parallel learning of local SVM algorithms for classifying large datasets [11]). We therefore report the comparison of the classification performance obtained by Para-SVM, LIBLINEAR and kSVM. All experiments are run on a PC with Linux Fedora 32, an Intel(R) Core i7-4790 CPU at 3.6 GHz with 4 cores, and 32 GB of main memory. Since we are interested in the experimental evaluation of Para-SVM on large-scale multi-class datasets, we assess the performance in terms of training time and classification correctness.

4.1 ILSVRC 2010 Dataset

The Para-SVM algorithm is designed for dealing with the ImageNet challenging problems [7,8]. The full ImageNet dataset has more than 14 million images in 21,841 classes, which raises many challenges for training classifiers due to the large number of images and classes. A subset of the ImageNet dataset, called the ILSVRC 2010 dataset, is the most popular visual classification benchmark [6–10,12,13,15,26,28,33]. The ILSVRC 2010 dataset contains 1,261,405 images with 1,000 classes.


Feature extraction from the ILSVRC 2010 dataset is performed by a pre-trained deep learning network, Inception v3 [28], which learns multi-level features for image classification. The pre-trained Inception v3 model extracts 2,048 deep features from the images. The ILSVRC 2010 dataset is randomly divided into a training set with 1,009,124 rows and a testing set with 252,281 rows (random guessing gives 0.1% accuracy due to the 1,000 classes).
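The paper does not state which framework was used for the feature-extraction step; as one possible realisation, the sketch below uses the Keras Inception v3 weights with global average pooling, which yields exactly one 2,048-dimensional vector per image.

```python
import numpy as np
import tensorflow as tf

# Inception v3 without its classification head; "avg" pooling returns a single
# 2,048-dimensional deep-feature vector per image.
extractor = tf.keras.applications.InceptionV3(
    weights="imagenet", include_top=False, pooling="avg")

def deep_features(images):
    # images: array of shape (n, 299, 299, 3) with pixel values in [0, 255]
    x = tf.keras.applications.inception_v3.preprocess_input(
        np.asarray(images, dtype="float32"))
    return extractor.predict(x)          # shape (n, 2048)
```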

4.2 Tuning Parameter

For training the linear SVM models (LIBLINEAR), the positive constant C, which keeps a trade-off between the margin size and the errors, needs to be tuned. We use a cross-validation (hold-out) protocol to find the best value, C = 100,000. For the parameter k of kSVM (the number of clusters, i.e. of local models), we set k = 1,000 so that each cluster has about 1,000 individuals; this gives a trade-off between the generalization capacity [30] and the computational cost. Furthermore, the Bagged-SVM-SGD in our Para-SVM algorithm learns κ = 20 binary SVM classifiers to separate the class c_i from the other ones. Due to the PC used in the experimental setup (Intel(R) Core i7-4790 CPU, 4 cores), the number of OpenMP threads is set to 4 for all training tasks.

4.3 Classification Results

We report the classification results of Para-SVM, LIBLINEAR and kSVM in Table 1, Fig. 3 and Fig. 4. The fastest training algorithm is shown in bold face and the second fastest in italic; the highest accuracy is also in bold face and the second highest in italic. kSVM classifies the ILSVRC 2010 dataset in 48.60 min with 73.32% accuracy. LIBLINEAR learns to classify the ILSVRC 2010 dataset in 9,813.58 min with 73.66% accuracy. Our Para-SVM achieves 74.89% accuracy with a training time of 53.29 min. Comparing the training times of the multi-class SVM algorithms, kSVM is the fastest training algorithm but has the lowest accuracy, while LIBLINEAR takes the longest training time: LIBLINEAR is 184.17 and 201.94 times slower than our Para-SVM and kSVM, respectively, and kSVM is 1.10 times faster than Para-SVM. In terms of overall classification accuracy, Para-SVM achieves the highest accuracy: the improvement of Para-SVM over LIBLINEAR corresponds to 1.23%, and Para-SVM improves accuracy by 1.57% compared to kSVM. These classification results suggest that our proposed Para-SVM algorithm is efficient for handling such very large-scale multi-class datasets.

Table 1. Overall classification accuracy for ILSVRC 2010 dataset

No  Algorithm   Time (min)  Accuracy (%)
1   Para-SVM    53.29       74.89
2   LIBLINEAR   9,813.58    73.66
3   kSVM        48.60       73.32

Fig. 3. Training time (min) for ILSVRC 2010 dataset

Fig. 4. Overall classification accuracy for ILSVRC 2010 dataset

5 Conclusion and Future Works

We have presented a parallel multi-class support vector machine (Para-SVM) for multi-core computers that achieves high performance when dealing with a large amount of data in a very large number of classes. The main idea is to learn, in parallel, from mini-batches produced by under-sampling the training dataset, creating ensemble binary SVM classifiers used in the One-Versus-All multi-class strategy. The numerical test results on the ImageNet challenging dataset show that the Para-SVM algorithm is faster and more accurate than state-of-the-art SVM algorithms such as LIBLINEAR and kSVM. Our Para-SVM achieves an accuracy of 74.89% in the classification of the ImageNet-1000 dataset (1,261,405 images, 1,000 classes) in 53.29 min on a PC with an Intel(R) Core i7-4790 CPU (3.6 GHz, 4 cores). In the near future, we intend to develop a distributed implementation for large-scale processing on an in-memory cluster-computing platform.

Acknowledgments. This work has received support from the College of Information Technology, Can Tho University. The authors would like to thank very much the Big Data and Mobile Computing Laboratory.

References
1. Bosch, A., Zisserman, A., Muñoz, X.: Scene classification via pLSA. In: Leonardis, A., Bischof, H., Pinz, A. (eds.) ECCV 2006, Part IV. LNCS, vol. 3954, pp. 517–530. Springer, Heidelberg (2006). https://doi.org/10.1007/11744085_40
2. Bottou, L., Bousquet, O.: The tradeoffs of large scale learning. In: Platt, J., Koller, D., Singer, Y., Roweis, S. (eds.) Advances in Neural Information Processing Systems, vol. 20, pp. 161–168. NIPS Foundation (2008). http://books.nips.cc
3. Breiman, L.: Bagging predictors. Mach. Learn. 24(2), 123–140 (1996)
4. Chang, C.C., Lin, C.J.: LIBSVM: a library for support vector machines. ACM Trans. Intell. Syst. Technol. 2(27), 1–27 (2011)
5. Chawla, N.V., Lazarevic, A., Hall, L.O., Bowyer, K.W.: SMOTEBoost: improving prediction of the minority class in boosting. In: Lavrač, N., Gamberger, D., Todorovski, L., Blockeel, H. (eds.) PKDD 2003. LNCS (LNAI), vol. 2838, pp. 107–119. Springer, Heidelberg (2003). https://doi.org/10.1007/978-3-540-39804-2_12
6. Chollet, F.: Xception: deep learning with depthwise separable convolutions. CoRR abs/1610.02357 (2016)
7. Deng, J., Berg, A.C., Li, K., Fei-Fei, L.: What does classifying more than 10,000 image categories tell us? In: Daniilidis, K., Maragos, P., Paragios, N. (eds.) ECCV 2010, Part V. LNCS, vol. 6315, pp. 71–84. Springer, Heidelberg (2010). https://doi.org/10.1007/978-3-642-15555-0_6
8. Deng, J., Dong, W., Socher, R., Li, L.J., Li, K., Li, F.F.: ImageNet: a large-scale hierarchical image database. In: IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009)
9. Do, T.-N.: Parallel multiclass stochastic gradient descent algorithms for classifying million images with very-high-dimensional signatures into thousands classes. Vietnam J. Comput. Sci. 1(2), 107–115 (2014). https://doi.org/10.1007/s40595-013-0013-2


10. Do, T.-N., Poulet, F.: Parallel multiclass logistic regression for classifying large scale image datasets. In: Le Thi, H.A., Nguyen, N.T., Do, T.V. (eds.) Advanced Computational Methods for Knowledge Engineering. AISC, vol. 358, pp. 255–266. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-17996-4_23
11. Do, T., Poulet, F.: Parallel learning of local SVM algorithms for classifying large datasets. Trans. Large Scale Data Knowl. Centered Syst. 31, 67–93 (2017)
12. Do, T.-N., Tran-Nguyen, M.-T.: Incremental parallel support vector machines for classifying large-scale multi-class image datasets. In: Dang, T.K., Wagner, R., Küng, J., Thoai, N., Takizawa, M., Neuhold, E. (eds.) FDSE 2016. LNCS, vol. 10018, pp. 20–39. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-48057-2_2
13. Doan, T.-N., Do, T.-N., Poulet, F.: Large scale classifiers for visual classification tasks. Multimed. Tools Appl. 74(4), 1199–1224 (2014). https://doi.org/10.1007/s11042-014-2049-4
14. Fan, R.E., Chang, K.W., Hsieh, C.J., Wang, X.R., Lin, C.J.: LIBLINEAR: a library for large linear classification. J. Mach. Learn. Res. 9(4), 1871–1874 (2008)
15. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. CoRR abs/1512.03385 (2015)
16. Japkowicz, N. (ed.): AAAI Workshop on Learning from Imbalanced Data Sets. No. WS-00-05 in AAAI Tech Report (2000)
17. Kreßel, U.H.G.: Pairwise classification and support vector machines. In: Schölkopf, B., Burges, C.J.C., Smola, A.J. (eds.) Advances in Kernel Methods, pp. 255–268. MIT Press, Cambridge (1999)
18. Li, F., Perona, P.: A Bayesian hierarchical model for learning natural scene categories. In: 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2005), 20–26 June 2005, San Diego, CA, USA, pp. 524–531 (2005)
19. Liu, X.Y., Wu, J., Zhou, Z.H.: Exploratory undersampling for class-imbalance learning. IEEE Trans. Syst. Man Cybern. Part B 39(2), 539–550 (2009)
20. Lowe, D.: Object recognition from local scale invariant features. In: Proceedings of the 7th International Conference on Computer Vision, pp. 1150–1157 (1999)
21. Lowe, D.: Distinctive image features from scale invariant keypoints. Int. J. Comput. Vis. 60, 91–110 (2004)
22. OpenMP Architecture Review Board: OpenMP application program interface version 3.0 (2008). http://www.openmp.org/mp-documents/spec30.pdf
23. Perronnin, F., Sánchez, J., Liu, Y.: Large-scale image categorization with explicit data embedding. In: IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 2297–2304 (2010)
24. Ricamato, M.T., Marrocco, C., Tortorella, F.: MCS-based balancing techniques for skewed classes: an empirical comparison. In: ICPR, pp. 1–4 (2008)
25. Shalev-Shwartz, S., Singer, Y., Srebro, N.: Pegasos: primal estimated sub-gradient solver for SVM. In: Proceedings of the Twenty-Fourth International Conference on Machine Learning, pp. 807–814. ACM (2007)
26. Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. CoRR abs/1409.1556 (2014)
27. Sivic, J., Zisserman, A.: Video Google: a text retrieval approach to object matching in videos. In: 9th IEEE International Conference on Computer Vision (ICCV 2003), 14–17 October 2003, Nice, France, pp. 1470–1477 (2003)
28. Szegedy, C., Vanhoucke, V., Ioffe, S., Shlens, J., Wojna, Z.: Rethinking the inception architecture for computer vision. CoRR abs/1512.00567 (2015)


29. Vapnik, V.: The Nature of Statistical Learning Theory. Springer, Heidelberg (1995). https://doi.org/10.1007/978-1-4757-3264-1
30. Vapnik, V., Bottou, L.: Local algorithms for pattern recognition and dependencies estimation. Neural Comput. 5(6), 893–909 (1993)
31. Visa, S., Ralescu, A.: Issues in mining imbalanced data sets - a review paper. In: Midwest Artificial Intelligence and Cognitive Science Conference, Dayton, USA, pp. 67–73 (2005)
32. Weiss, G.M., Provost, F.: Learning when training data are costly: the effect of class distribution on tree induction. J. Artif. Intell. Res. 19, 315–354 (2003)
33. Wu, J.: Power mean SVM for large scale visual classification. In: IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 2344–2351 (2012)

The Effect of Machine Learning Demand Forecasting on Supply Chain Performance - The Case Study of Coffee in Vietnam

Thi Thuy Hanh Nguyen, Abdelghani Bekrar, Thi Muoi Le, and Mourad Abed

LAMIH, Université Polytechnique Hauts-de-France, Valenciennes, France
{Thithuyhanh.Nguyen,Abdelghani.Bekrar,Mourad.Abed}@uphf.fr
CRISS, Université Polytechnique Hauts-de-France, Valenciennes, France
[email protected]

Abstract. Demand forecasting methods are one of the variables that have a considerable influence on supply chain performance. However, there is a lack of empirical proof of the magnitude of savings as observable supply chain performance results. In the literature, most scholars have paid more attention to non-financial performance while ignoring financial performance. This study compared the effect of two well-known forecasting models on the operational and financial performance of the supply chain: ARIMAX (Auto-Regressive Integrated Moving Average with exogenous factors, as the traditional model) and LSTM (Long Short-Term Memory, as the machine learning model). These two models were tested on Vietnamese coffee demand data. The results demonstrated that traditional and machine learning forecasting methods have different impacts on supply chain performance, and that the machine learning forecasting method outperformed the traditional method regarding both operational and financial metrics. Three relevant operational metrics, the bullwhip effect (BWE), net stock amplification (NSAmp) and transportation cost (TC), and one financial metric, inventory turn (IT), are selected.

Keywords: Demand forecasting · Machine learning · Supply chain performance

1 Introduction

Demand predictions are the backbone of supply chains [1]. Forecasting enables predicting and satisfying future customer needs and desires [2]. Regardless of the industry and the type of company, efficient demand forecasting helps define market opportunities, enhance channel relationships, increase customer satisfaction, reduce inventory spending, eliminate product obsolescence, improve distribution operations, and schedule more effective production. It also predicts future financial and capital requirements [3]. As a result, forecasting is critical to optimizing a company's efficiency [4]. Supply chain managers rely on precise and trustworthy demand prediction to aid planning and judgment [5]. According to previous research, forecasting methods are defined as one of the essential factors influencing supply chain success [3, 6–8].


Several studies have found significant improvements in supply chain performance when forecasting methods are used [9–11]. For example, some authors stated that good forecasting will reduce transportation and holding costs [11], while improper demand forecasting increases total supply chain costs, including shortage and backorder costs [9]. Traditional forecasting (TF) and machine learning (ML) are the two main types of demand forecasting methods [12]. Each forecasting method has its strengths and weaknesses [13, 14]: the TF method is more suited to working with linear data, while the ML method works better with non-linear data. Moreover, ML can handle enormous volumes of data and features, but it also requires a longer training period than TF [11, 12]. Many scholars have adopted and compared the performance of forecasting methods on different data and situations, such as finance [15], Indian Robusta coffee [16], housing [14], and agricultural products [11]. However, the majority of them concentrated on the methods' prediction accuracy, employing measures such as Mean Absolute Error (MAE), Mean Squared Error (MSE), Root Mean Square Error (RMSE), and Mean Absolute Percentage Error (MAPE). Consequently, there is a lack of empirical evidence on the effect of demand forecasting methods on other performance measures. Therefore, this study aims to investigate the forecasting method's impact on supply chain performance.

Besides, various Key Performance Indicators (KPIs) or metrics have been used to evaluate the performance of the supply chain. Forecast accuracy is one of the operational measures [10]. In the literature, most forecasters use accuracy as the primary criterion for evaluating sales forecasting effectiveness [3, 17, 18]. However, according to [19], a supply chain measurement system based solely on financial or operational measures is inadequate. As a result, this study employs both financial and operational metrics to measure supply chain performance.

The remainder of this paper is structured as follows: Sect. 2 reviews the literature on demand forecasting methods and supply chain performance; Sect. 3 presents the methodology of our study; the obtained results are discussed in Sect. 4; finally, Sect. 5 presents the conclusion.

2 Literature Review

This section presents and reviews the selected demand forecasting models and performance indicators used in this article.

2.1 Demand Forecasting Models

The TF method forecasts future demand using past time series data. There are several well-known TF models, such as Autoregressive (AR), Moving Average (MA), Simple Exponential Smoothing (SES), and Autoregressive Integrated Moving Average (ARIMA) with its extensions [20]. ARIMA models are the most well-known TF methods used to identify the best fit for a time series's historical values [12, 21]. ARIMA models consider that the time series's future values are linearly related to the present and previous values


[15, 22]. Compared to other traditional approaches, ARIMA models produce a higher performance in predicting the subsequent lags of a time series [12]. Moreover, ARIMA can model the data's non-stationary characteristics [20]. Therefore, in this study, ARIMA with exogenous factors (ARIMAX) is selected to represent the TF method.

Among ML methods, deep learning (DL) methods have received greater attention in recent years. DL uses deep neural network architectures to address complicated issues, and thus produces better prediction outputs than traditional ML algorithms in many applications [12]. Recurrent Neural Network (RNN) and Long Short-Term Memory (LSTM) are the most well-known models among DL methods [20]. In this study, LSTM is chosen for the following reasons. LSTM is the highest performing RNN model, based on a memory cell concept [11]. LSTM allows appropriate memories from previous time frames to be collected and transferred into the next ones [23]. Besides, LSTM can preserve and train the dataset's features for an extended period [20]. Finally, to the best of our knowledge, there is a lack of empirical support in using LSTM to investigate its impact and to compare it to TF methods such as ARIMAX, particularly in the case of coffee demand.

2.2 Performance Metrics

Various KPIs or metrics can be used to measure supply chain performance. According to [19], supply chain performance consists of financial and operational performance. Financial performance focuses on outcome-based financial indicators, for example sales growth, profitability, or earnings per share; financial metrics represent how a company is evaluated by variables external to its border. In contrast, operational metrics are used to assess the efficiency and effectiveness of a company's internal activities. These performance categories indicate supply chain abilities in certain aspects, such as cost, delivery speed, reliability, quality, and flexibility. Thus, our study follows this definition to categorize the financial and operational indicators.

Only a few authors have studied how the forecasting model affects supply chain performance, and they focused on operational metrics rather than financial ones [9, 10]. For instance, a recent study by [10] selected three performance metrics, but two of them are non-financial (forecast accuracy, inventory turns) and only one is financial (cash-conversion cycle). The results show that ML-based forecasting methods improve supply chain performance better than the traditional forecasting method in steel manufacturing companies. Although our purpose is similar to this study, there are differences in the modeling approach, performance metrics, and dataset. Due to the limited data, our study selects relevant performance metrics comprising financial and operational metrics. Four well-known metrics are adopted to measure supply chain performance, three operational and one financial: the bullwhip effect, net stock amplification, transportation cost, and inventory turn, respectively. Prior researchers have used different operational performance measures in the literature, such as cost and delivery [7], BWE and NSAmp [9], and holding and transportation costs [11]. The scholars in [9] used BWE and NSAmp to evaluate supply chain costs and customer service levels, respectively. According to the Vietnam Logistics Report 2020, transportation cost is the largest logistics cost, accounting for 59% of total logistics cost. In coffee production in Vietnam, the logistics cost accounts for


9.5% GDP of the country [24]. Thus, our study selects transportation cost as one of the operational metrics. In terms of financial metrics, this study selects inventory turn (IT). Previous authors indicated that forecasters have rarely used financial metrics [3, 17, 18]. One study by [10] assesses the impact of forecasting techniques on financial performance by using a cash conversion cycle. However, the data of this metric is not available. The findings also demonstrated that better demand prediction increases inventory performance and working capital quality, resulting in a higher return on assets and profitability. However, this scholar considered inventory turn as a non-financial metric. In contrast, based on the classification of previous study [19], inventory turn (IT) is treated as financial performance. Similarly, another study [26] investigated the relationship between inventory turn metric and other financial metrics in various segments of Korean manufacturing enterprises. This author indicated that inventory turnover was a critical financial ratio to show how quickly the goods leave the plant. The results revealed a positive link between inventory turn and turnover ratio of total liabilities and net worth in most segments, demonstrating how successfully a firm utilizes its capital. In addition, this research discovered a positive linkage between inventory turn and operating profit per share in several segments such as textile and clothing, rubber, and plastics. Therefore, inventory turn is an essential metric that correlates well with other financial metrics.

3 Methodology

3.1 Data

The data are collected from the International Coffee Organization (ICO) [25], the leading intergovernmental organization for coffee. However, no daily export data are available, so Vietnam's monthly export data and daily price data from January 2000 to December 2020 are obtained. This study therefore uses the Dirichlet distribution, a multivariate probability distribution [11], to randomly create daily demand from the monthly export quantity. The generated daily demands sum to the monthly amount, relying on an equal expected probability for each day; within each month, the overall probability mass over the days must equal one. This study used 80% of the data set for training and 20% for testing. Data are normalized using the min-max method to reduce duplicates and improve data integrity [22]. Before calculating the final output, the scaled data are transformed back to raw predicted daily demand using the inverse_transform method of the Min-Max Scaler function. Based on the output of each method, we calculate and compare the performance metrics between them.

Figure 1 presents the time series plot of the daily demand of Vietnamese coffee from January 2000 to December 2020. The graph shows a positive trend and seasonality in the time series data. The series starts with a downtrend followed by an uptrend, and this pattern keeps repeating over the rest of the series; these abrupt changes in pattern make predicting challenging. All models are run on Google Colab. The parameters of ARIMA are fitted and predicted with the 'auto.arima' function. The Akaike Information Criterion (AIC) is an estimate of lost information [10], and the best-fitted models are those with the lowest AICc [5].
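As an illustration of the daily-demand construction described above, the following sketch draws daily shares from a symmetric Dirichlet distribution (concentration 1, i.e. an equal expected share per day), so that the generated daily values always sum back to the monthly export quantity; the monthly figure used here is purely hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)

def monthly_to_daily(monthly_total, n_days):
    # Dirichlet shares are non-negative and sum to 1, so the daily demands
    # always add up exactly to the monthly export quantity.
    shares = rng.dirichlet(np.ones(n_days))
    return monthly_total * shares

daily_demand = monthly_to_daily(monthly_total=120_000.0, n_days=31)  # hypothetical month
assert abs(daily_demand.sum() - 120_000.0) < 1e-6
```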


Fig. 1. Coffee daily demand of Vietnam from 2000 to 2020

All constructed LSTM networks use the Adam algorithm as the optimizer and the mean squared error (MSE) as the loss. After many random searches, the best configuration of the LSTM is obtained as: lag 10, number of hidden layers 2, number of neurons in each layer 64, epochs 50, batch size 10, dropout rate 0.1, and learning rate 0.001.
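A minimal Keras sketch of an LSTM with the reported configuration is given below; the paper does not spell out the exact layer arrangement or where the dropout is applied, so this stacking is an assumption, and X_train/y_train stand for windows of the previous 10 scaled daily demands and the next value.

```python
import tensorflow as tf

LAG = 10   # each input sample is the previous 10 (scaled) daily demands

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(64, return_sequences=True, input_shape=(LAG, 1)),
    tf.keras.layers.Dropout(0.1),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dropout(0.1),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001), loss="mse")
# model.fit(X_train, y_train, epochs=50, batch_size=10)
```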

3.2 Supply Chain Performance Metrics

Operational Metrics

Bullwhip Effect. The bullwhip effect, one of the problems caused by high demand variability, is the distortion of actual end-customer demand resulting from a lack of coordinated and shared information in the supply chain [13]. According to previous studies [9, 27, 28], BWE is the ratio of the order variance to the demand variance; hence, BWE is defined by Eq. 1:

BWE = \frac{\operatorname{Var}(\text{order})}{\operatorname{Var}(\text{demand})}   (1)

where q_t is the order quantity (or production quantity) according to a base-stock policy in which the lead time (L) and the review period (R) are taken as one period. According to [27, 28], the base-stock policy is a simplified version of the order-up-to inventory policy, in which an order is made at the start of each period to raise the inventory level to a pre-determined level:

q_t = \hat{D}_t^{L} - \hat{D}_{t-1}^{L} + D_{t-1}   (2)

where \hat{D}_t^{L} is the forecasted demand during the lead time L at period t, \hat{D}_{t-1}^{L} is the forecasted demand during the lead time L at period t − 1, and D_{t-1} is the actual demand at period t − 1. \hat{D}_t^{L} and \hat{D}_{t-1}^{L} can be calculated as follows [27, 28]:

\hat{D}_t^{L} = \hat{D}_t + \hat{D}_{t+1} + \dots + \hat{D}_{t+L-1}

\hat{D}_{t-1}^{L} = \hat{D}_{t-1} + \hat{D}_t + \dots + \hat{D}_{t-1+L-1}
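The sketch below computes the order quantities of Eq. 2 and the bullwhip ratio of Eq. 1 from a demand series and its lead-time forecasts; how the very first order is initialised is not specified in the paper, so treating it as equal to the first demand is an assumption.

```python
import numpy as np

def order_quantities(demand, forecast_lead):
    # forecast_lead[t] is the forecasted demand over the lead time at period t
    q = np.empty(len(demand))
    q[0] = demand[0]                                                 # assumption for the first period
    q[1:] = forecast_lead[1:] - forecast_lead[:-1] + demand[:-1]     # Eq. 2
    return q

def bullwhip_effect(demand, forecast_lead):
    q = order_quantities(demand, forecast_lead)
    return np.var(q) / np.var(demand)                                # Eq. 1
```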

A higher BWE implies wildly fluctuating orders, meaning that the production level changes frequently, resulting in higher average production switching costs per period [29]. Therefore, reducing BWE results in decreased supply chain costs [9].

Net Stock Amplification (NSAmp). Net stock amplification (NSAmp) is the variation in net stock with respect to demand, so it has an enormous influence on customer service: the higher the net stock variance, the more safety stock is required. Moreover, a high NSAmp results in high holding and backlog costs [29]. NSAmp is estimated using Eq. 3 [9]:

NSAmp = \frac{\operatorname{Var}(\text{net stock (inventory level)})}{\operatorname{Var}(\text{demand})}   (3)

Following [29], where the order quantity equals the production quantity, the net stock can be calculated by Eq. 4 in this study:

\text{Net stock}_t = \text{Remaining stock}_t + \text{Order}_t - \text{Real demand}_t = q_{t-1} - D_{t-1} + q_t - D_t   (4)

where q_t (the order at period t) can be calculated by Eq. 2, D_t is the real demand at period t, and the remaining stock at period t is assumed to be (q_{t-1} − D_{t-1}).

The Transportation Cost (TC). The transportation cost is inspired by the authors of [11] as follows:

\text{Daily transportation cost} = \text{predicted daily demand} \times \text{unit transportation cost}   (5)

Based on interviews with Vietnamese coffee companies, the average unit transportation cost is assumed to equal 1,500 VND/kg, or approximately 65.38 USD/tonne.

Financial Metric. The inventory turn ratio is used to measure inventory performance, which is essential in manufacturing processes [10, 18, 26]. A higher inventory turn is better, because less cash is tied up in products that are consumed slowly or remain unsold [26]. The inventory turn is measured using Eq. 6 [10]:

IT = \frac{\text{Cost of Goods Sold}}{\text{Average Inventory}}   (6)

where Cost of Goods Sold = Real demand × Production cost per unit and Average Inventory = (Beginning Inventory + Ending Inventory)/2.

Based on a study [30], the total cost of coffee producers is 29.3 cents/lb, or 645.95 USD/tonne.

Difference. The difference is computed to compare the metrics given by LSTM and ARIMAX, where the metric is BWE, NSAmp, TC, or IT, respectively. Equation 7 is used to calculate the difference:

\text{Difference} = \frac{|\text{Metric}_{LSTM} - \text{Metric}_{ARIMAX}| \times 100}{|\text{Metric}_{ARIMAX}|}   (7)
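For completeness, a small sketch of the remaining metrics (Eqs. 3-7) is shown below; the unit transportation cost and production cost are the figures quoted above, the net stock of the first period is an assumption, and the function names are illustrative.

```python
import numpy as np

UNIT_TRANSPORT_COST = 65.38   # USD per tonne (assumption quoted for Eq. 5)
PRODUCTION_COST = 645.95      # USD per tonne [30]

def ns_amp(q, demand):
    # Eq. 4: NS_t = q_{t-1} - D_{t-1} + q_t - D_t, then Eq. 3
    net_stock = np.empty(len(demand))
    net_stock[0] = q[0] - demand[0]                     # assumption for the first period
    net_stock[1:] = q[:-1] - demand[:-1] + q[1:] - demand[1:]
    return np.var(net_stock) / np.var(demand)

def daily_transportation_cost(predicted_daily_demand):
    return predicted_daily_demand * UNIT_TRANSPORT_COST   # Eq. 5

def inventory_turn(real_demand, beginning_inventory, ending_inventory):
    cogs = np.sum(real_demand) * PRODUCTION_COST          # cost of goods sold
    average_inventory = (beginning_inventory + ending_inventory) / 2.0
    return cogs / average_inventory                       # Eq. 6

def difference(metric_lstm, metric_arimax):
    return abs(metric_lstm - metric_arimax) * 100.0 / abs(metric_arimax)   # Eq. 7
```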

4 Results and Discussion

4.1 Comparison of Operational Measures Between Traditional and Machine Learning Forecasting Methods

Figure 2 presents the monthly BWE generated from the LSTM and ARIMAX models on the test dataset. The BWE from LSTM has a lower value than that from ARIMAX. As shown in Table 1, the overall BWE of the LSTM and ARIMAX models in 2020 is 2.1 and 5.7, respectively, so the BWE using the LSTM model is lower than with the ARIMAX model. The findings imply that LSTM helps in the management of order variance; as a result, the LSTM model reduces overall supply chain costs more effectively than ARIMAX.


Fig. 2. BWE value generated from LSTM and ARIMAX model


Besides, the monthly values of NSAmp for LSTM and ARIMAX are shown in Fig. 3. Similarly, LSTM produces a smaller NSAmp than ARIMAX: the average NSAmp values for LSTM and ARIMAX are 3.7 and 5.0, respectively (see Table 2), which indicates a reduction of 27.3% in NSAmp. As a result, the LSTM model decreases holding and backlog costs, which leads to improvements in customer service levels. Prior studies indicated that lowering BWE and NSAmp enhances supply chain performance [9, 29].


Fig. 3. NSAmp value generated from LSTM and ARIMAX model

In addition, Fig. 4 depicts the transportation costs of the forecasting models. Because of the large values involved, the differences between the two models are not visible in the graph; however, Table 3 provides the estimated monthly transportation costs for each model. The overall TC of the LSTM and ARIMAX models is 379,403.4 USD and 381,662.5 USD, respectively. As a result, the average transportation cost decreased by 1.42% by adopting the LSTM model.


Fig. 4. Transportation costs (TC) value generated from LSTM and ARIMAX model

All operational metrics, including BWE, NSAmp, and TC, are lowered using the LSTM forecasting model, as shown in Table 1, Table 2, and Table 3.

Table 1. BWE of forecasting models for the year 2020

Month    LSTM  ARIMAX  Difference
1        4.7   10.9    56.4
2        3.0   5.6     45.9
3        1.6   5.2     69.6
4        1.7   4.2     59.1
5        1.9   7.0     72.5
6        1.6   4.6     63.7
7        1.6   5.5     69.9
8        1.6   4.1     61.1
9        1.9   5.2     62.5
10       1.4   3.6     61.1
11       1.0   3.8     71.8
12       3.1   8.1     61.9
Overall  2.1   5.7     62.5

Table 2. NSAmp of forecasting models for the year 2020

Month    LSTM  ARIMAX  Difference
1        7.1   9.4     24.2
2        6.4   7.4     13.4
3        3.7   5.4     31.6
4        3.6   4.8     24.6
5        1.9   2.9     34.3
6        3.5   5.0     30.0
7        2.7   4.9     44.4
8        4.1   5.1     20.1
9        3.7   5.3     29.5
10       2.3   2.7     14.2
11       1.8   2.9     36.3
12       3.1   4.9     35.4
Overall  3.7   5.0     27.3

Table 3. Transportation costs (TC) values of LSTM and ARIMAX in 2020

Month    LSTM      ARIMAX    Difference
1        424535.7  434765.2  2.35
2        554086.4  555533.7  0.26
3        505818.9  497879.4  1.59
4        483240.4  483565.1  0.07
5        392470    394701.9  0.57
6        375099.8  375393.4  0.08
7        317770.8  310880.7  2.22
8        305766.1  312141.8  2.04
9        297459.9  297396.2  0.02
10       266914.8  271945.3  1.85
11       260680    267877.8  2.69
12       368998.3  381662.5  3.32
Overall  379403.4  381978.6  1.42


4.2 Comparison of Financial Measures Between Traditional and Machine Learning Forecasting Methods


Fig. 5. Inventory turn value generated from LSTM and ARIMAX model

Table 4. Inventory turn (IT) values of LSTM and ARIMAX in 2020

Month    LSTM      ARIMAX    Difference
1        184.51    179.66    2.70
2        −341.90   −258.41   32.31
3        629.35    401.08    56.91
4        −247.58   −175.56   41.02
5        263.70    310.90    15.18
6        5265.77   4913.48   7.17
7        3506.60   −376.00   1032.59
8        −1824.56  1364.08   233.76
9        505.70    −527.28   195.91
10       −720.01   −244.72   194.21
11       3395.85   −586.83   678.67
12       771.89    −5837.68  113.22
Overall  949.11    −69.78    216.97


As a representative of the financial measures, Fig. 5 displays the monthly inventory turn of LSTM and ARIMAX. It is apparent that LSTM produces a more stable inventory turn than ARIMAX. As seen in Table 4, the overall inventory turn of LSTM and ARIMAX is 949.11 and −69.78, respectively. It is worth noting that the estimated IT values using the LSTM forecasting model are considerably higher than with the ARIMAX model, with an average difference of 216.97%. Therefore, the LSTM model enhances inventory performance and working capital efficiency, and adopting LSTM improves the financial performance of the coffee supply chain.


5 Conclusion

This study analyzed the impact of forecasting methods on supply chain performance. Two models, ARIMAX and LSTM, were selected to estimate the demand time series, and a case study of coffee demand in Vietnam was examined to assess the impact of each model. Three operational metrics and one financial metric were employed to compare the performance of ARIMAX and LSTM. The findings show that LSTM improves supply chain performance better than ARIMAX: LSTM outperformed ARIMAX in operational performance by lowering BWE, NSAmp, and TC, and LSTM also produced a higher IT than ARIMAX, which shows that it boosted financial performance. Therefore, adopting machine learning forecasting methods was beneficial, resulting in lower supply chain costs and higher inventory performance in the case study of Vietnamese coffee.

References
1. Perera, H.N., Hurley, J., Fahimnia, B., Reisi, M.: The human factor in supply chain forecasting: a systematic review. Eur. J. Oper. Res. 274(2), 574–600 (2019)
2. Benkachcha, S., Benhra, J., El Hassani, H.: Demand forecasting in supply chain: comparing multiple linear regression and artificial neural networks approaches. Int. Rev. Model. Simul. 7(2), 279–286 (2014)
3. Moon, M.A., Mentzer, J.T., Smith, C.D.: Conducting a sales forecasting audit. Int. J. Forecast. 19(1), 5–25 (2003)
4. Zotteri, G., Kalchschmidt, M.: Forecasting practices: empirical evidence and a framework for research. Int. J. Prod. Econ. 108(12), 84–99 (2007)
5. Abolghasemi, M., Beh, E., Tarr, G., Gerlach, R.: Demand forecasting in supply chain: the impact of demand volatility in the presence of promotion. Comput. Ind. Eng. 142, 1–12 (2020)
6. Moon, M.A., Mentzer, J.T., Smith, C.D., Garver, M.S.: Seven keys to better forecasting. Bus. Horiz. 41(5), 44–52 (1998)
7. Danese, P., Kalchschmidt, M.: The role of the forecasting process in improving forecast accuracy and operational performance. Int. J. Prod. Econ. 131(1), 204–214 (2011)
8. George, J., Madhusudanan Pillai, V.: A study of factors affecting supply chain performance. J. Phys: Conf. Ser. 1355(1), 1–8 (2019)
9. Jaipuria, S., Mahapatra, S.S.: An improved demand forecasting method to reduce bullwhip effect in supply chains. Expert Syst. Appl. 41(5), 2395–2408 (2014)
10. Feizabadi, J.: Machine learning demand forecasting and supply chain performance. International Journal of Logistics Research and Applications, pp. 1–24 (2020)
11. Kantasa-ard, A., Nouiri, M., Bekrar, A., Ait el cadi, A., Sallez, Y.: Machine learning for demand forecasting in the physical internet: a case study of agricultural products in Thailand. International Journal of Production Research, pp. 1–25 (2020)
12. Kilimci, Z.H., et al.: An improved demand forecasting model using deep learning approach and proposed decision integration strategy for supply chain. Complexity, pp. 1–16 (2019)
13. Aburto, L., Weber, R.: Improved supply chain management based on hybrid demand forecasts. Appl. Soft Comput. 7(1), 136–144 (2007)
14. Soy Temür, A., Akgün, M., Temür, G.: Predicting housing sales in Turkey using ARIMA, LSTM and hybrid models. J. Bus. Econ. Manag. 20(5), 920–938 (2019)
15. Khashei, M., Bijari, M.: A novel hybridization of artificial neural networks and ARIMA models for time series forecasting. Appl. Soft Comput. J. 11(2), 2664–2675 (2011)


16. Naveena, K., Singh, S., Rathod, S., Singh, A.: Hybrid ARIMA-ANN modelling for forecasting the price of robusta coffee in India. Int. J. Curr. Microbiol. App. Sci. 6(7), 1721–1726 (2017)
17. Mentzer, J.T., Kahn, K.B.: Forecasting technique familiarity, satisfaction, usage, and application. J. Forecast. 14(5), 465–476 (1995)
18. Mccarthy, T.M., Davis, D.F., Golicic, S.L., Mentzer, J.T.: The evolution of sales forecasting management: a 20-year longitudinal study of forecasting practices. J. Forecast. 25(5), 303–324 (2006)
19. Chen, I.J., Paulraj, A.: Understanding supply chain management: critical research and a theoretical framework. Int. J. Prod. Res. 42(1), 131–163 (2004)
20. Siami-Namini, S., Tavakoli, N., Siami Namin, A.: A comparison of ARIMA and LSTM in forecasting time series. In: Proceedings - 17th IEEE International Conference on Machine Learning and Applications, pp. 1394–1401. IEEE (2019)
21. Choi, H.K.: Stock price correlation coefficient prediction with ARIMA-LSTM hybrid model. Korea University, Seoul, Korea (2018). https://arxiv.org/pdf/1808.01560v5.pdf. Accessed 03 Mar 2021
22. Abbasimehr, H., Shabani, M., Yousefi, M.: An optimized model using LSTM network for demand forecasting. Comput. Ind. Eng. 143, 1–13 (2020)
23. Dave, E., Leonardo, A., Jeanice, M., Hanafiah, N.: Forecasting Indonesia exports using a hybrid model ARIMA-LSTM. Procedia Comput. Sci. 179, 480–487 (2021)
24. Vietnam Logistics Report 2020. https://hdgroup.vn/wp-content/uploads/2020/12/Ba%CC%81o-ca%CC%81o-Logistics-Vie%CC%A3%CC%82t-Nam-2020.pdf. Accessed 02 Feb 2021
25. Coffee Market Report. https://ico.org/news/cmr-1020-e.pdf. Accessed 02 Jan 2021
26. Kwak, J.K.: Analysis of inventory turnover as a performance measure in manufacturing industry. Processes 7(10), 1–11 (2019)
27. Luong, H.T.: Measure of bullwhip effect in supply chains with autoregressive demand process. Eur. J. Oper. Res. 180(3), 1086–1097 (2007)
28. Luong, H.T., Phien, N.H.: Measure of bullwhip effect in supply chains: the case of high order autoregressive demand process. Eur. J. Oper. Res. 183(1), 197–209 (2007)
29. Boute, R.N., Lambrecht, M.R.: Exploring the bullwhip effect by means of spreadsheet simulation. INFORMS Trans. Educ. 10(1), 1–9 (2009)
30. Luong, Q.V., Tauer, L.W.: A real options analysis of coffee planting in Vietnam. Agric. Econ. 35(1), 49–57 (2006)

Measuring Semantic Similarity of Vietnamese Sentences Based on Lexical and Distribution Similarity

Van-Tan Bui and Phuong-Thai Nguyen

University of Economic and Technical Industries, Hanoi, Vietnam, [email protected]
University of Engineering and Technology, Vietnam National University, Hanoi, Vietnam, [email protected]

Abstract. Measuring the semantic similarity of sentence pairs is an important natural language processing (NLP) problem with applications in many NLP systems. Sentence similarity is used to improve the performance of systems such as machine translation, speech recognition, automatic question answering, and text summarization. However, accurately evaluating the semantic similarity between sentences is still a challenge. Up to now, no sentence similarity methods that exploit Vietnamese-specific characteristics have been proposed, and no sentence similarity datasets for Vietnamese have been published. In this paper, we propose a new method to measure the semantic similarity of Vietnamese sentence pairs by combining the lexical similarity score and the distributional semantic similarity score of the two sentences. The experimental results show that our proposed model has high performance on the Vietnamese semantic similarity problem.

Keywords: Sentence similarity · Word embeddings · Semantic similarity

1 Introduction

Measuring the semantic similarity between two sentences (sentence similarity) is an important Natural Language Processing (NLP) problem. It is widely applied in NLP systems such as search [5], Natural Language Understanding [11], Machine Translation [20], Speech Recognition [13], Question Answering [2], and Text Summarization [1]. Sentence similarity has been studied for a long time; it was mentioned as early as 1957 in the publication of Luhn [10]. Several different approaches to this problem have been proposed, based for instance on sentence structure, sentence embeddings, and deep learning. Following the sentence structure-based approach, Wang et al. [19] proposed a method of separating and reorganizing lexical semantics. Lee et al. [9] proposed a method to measure the similarity of two sentences based on


information mining from categories and semantic networks. A method of measuring the similarity of two sentences based on the grammatical structure of sentences was proposed by Lee et al. [8]. In another study, Ferreira et al. [6] proposed a sentence similarity method based on information about word order and sentence structure. In this study, we propose a similarity measure method for Vietnamese sentences by combining lexical correlation measure and distribution semantic similarity measure. To measure the similarity of sentence pairs according to the distribution semantic, we use phoBERT model. To measure the lexical correlation of sentence pairs, we use the Longest Common Subsequence algorithm. Experimental results show that our proposed method achieves high performance on Vietnamese dataset. This paper is structured as follows. Section 2 deals with several related studies. Section 3 proposes a new method of sentence similarity following the combined approach of lexical correlation and word similarity by word embedding vector. Section 4 presents the content of building a sentence similarity dataset for the Vietnamese language. Section 5 presents the experiments that have been carried out. Section 6 provides conclusions and directions for further research.

2 Related Work

2.1 Word Embedding Models

NLP method according to distribution semantic approaches based on an intuition that words appearing in similar contexts tend to have similar meanings. Distributional natural language processing methods often aim to learn vector representations for words using large corpora. Each word is represented by a multidimensional vector, the vocabulary set forms a semantic vector space (vector space model). Some methods of learning vector representation for words (word embedding) are based on statistics of their occurrence in the corpus such as Latent Semantic Analysis [3]. These methods usually learn word representation vectors by building a Word-Context co-occurrence matrix (Word - Context) and performing a matrix analysis algorithm (Singular Value Decomposition (SVD) to achieve representation vectors with lower dimensions. Recently, learning vector representation methods for words based on artificial neural networks, also known as word embedding models, has achieved breakthrough results for many NLP problems. Word embeddings techniques are inspired by neural language models, which are trained based on predicting the contextual words of a center word (target word), or vice versa. Neural network models that learn word embeddings begin by initializing vectors representing words at random, then repeatedly training the network to make the target word embedding vector close to the vector representing neighboring words, and different representation vectors of words that do not appear in the neighborhood. The most prominent of these methods is Word2Vec proposed by Mikolov et al. [12]. According to another approach, instead of language models, the GloVe model proposed in [18] is based on global matrix factorization.


Similar to neural language models, the Word2Vec model learns word embeddings by training a neural network to predict neighboring words, with two architectures: Skip-gram and Continuous Bag of Words (CBOW). The Skip-gram architecture predicts the neighboring words in a context window by maximizing the average log conditional probability (Eq. 1):

\frac{1}{T}\sum_{t=1}^{T}\sum_{i=-c}^{c} \log p(w_{t+i} \mid w_t)   (1)

where w_i, i ∈ T, ranges over the whole training set, w_t is the central word and the w_{t+i} are the words in a context window of size c. The conditional probability is defined by the softmax function as in Eq. 2:

p(w_O \mid w_I) = \frac{\exp\left({v'_{w_O}}^{\top} v_{w_I}\right)}{\sum_{j=1}^{V} \exp\left({v'_{w_j}}^{\top} v_{w_I}\right)}   (2)

Here v_w and v'_w are two representations of the word w: v_w is a row of the weight matrix W between the input layer and the hidden layer, and v'_w comes from the columns of the weight matrix W' between the hidden layer and the output layer. We call v_w the input vector and v'_w the output vector of the word w. The outstanding advantage of the Word2Vec technique is that it only requires raw text data to train models. When a large corpus is used, the vocabulary is quite complete and it is possible to compute the similarity of any pair of words. Besides, the word representation vectors produced by training can not only measure semantic similarity but can also be used in many other language processing tasks. The disadvantage of this technique is that it does not clearly distinguish the similarity and the relatedness of word pairs.

2.2 PhoBERT Model

BERT (Bidirectional Encoder Representations from Transformers) is a multi-layered structure of bidirectional Transformer encoder layers, based on the architecture of the Transformer [4]. This model is a neural language model that can generate vector representations of words according to their context. Unlike previous word embedding models, BERT is a deeply bidirectional, unsupervised language representation, pre-trained using only a plain text corpus. PhoBERT (https://github.com/VinAIResearch/PhoBERT) is a pre-trained Vietnamese language model published by Dat Quoc Nguyen and Anh Tuan Nguyen [15]. PhoBERT has been exploited in many Vietnamese NLP tasks and has yielded breakthrough results, for example in part-of-speech tagging, dependency parsing, named-entity recognition, and natural language inference. PhoBERT consists of two pre-trained models, PhoBERT-base and PhoBERT-large, with 135M and 370M parameters, respectively. In this study, we use the PhoBERT-base model to generate vector representations for Vietnamese sentences.

2.3 Sentence Similarity Measure Based on Deep-Learning

Recently, sentence similarity methods following the deep learning approach, with neural network models such as LSTM, GRU, CNN, and BERT, have proven superior to other approaches. Within this approach, Mueller and Thyagarajan [14] proposed a method to extract attributes containing the full information of a sentence using a Long Short-Term Memory (LSTM) network. Recently, Heo et al. [7] proposed a method that measures the similarity of sentences by combining global sentence features extracted by a bidirectional LSTM model (Bi-LSTM) with local sentence features extracted through a Capsule Network. Devlin et al. [4] evaluated the similarity of two sentences using the BERT model, a high-performance language representation model exploited in various NLP problems. Although sentence similarity models based on deep learning have demonstrated superior performance on several evaluation datasets, accurately measuring the similarity of sentence pairs is still a challenge. Exploiting more features of sentence structure and of distributional semantics can improve the accuracy of sentence similarity models. Therefore, in this study, we propose a sentence similarity model that exploits both the lexical correlation of sentence pairs and distributional semantic sentence features.

3 Proposed Method

To measure the semantic similarity between two sentences, we exploit two different characteristics of sentences: first, the similarity in structure and vocabulary; second, the similarity between sentence representation vectors extracted from the PhoBERT model. The proposed model, named Vietnamese Sentence Similarity (ViSentSim), is depicted in Fig. 1.

3.1 Lexical-Based Measures

Jaccard Similarity: This similarity measure, also known as the Jaccard index, is used to determine the similarity between two sets and was proposed by Paul Jaccard in 1901. The Jaccard degree is also used to measure the similarity of two texts such as sentences or documents; it is the ratio of the number of common words to the total number of distinct words appearing in the two texts. Equation 3 presents the Jaccard similarity measure between two sentences S_1 and S_2:

Sim_{Jaccard}(S_1, S_2) = \frac{|S_1 \cap S_2|}{|S_1 \cup S_2|}   (3)

The Jaccard measure reveals a disadvantage when the two compared sets have different sizes. For example, consider two sets A and B, each containing 100 elements. Assuming that 50 of them are common to the two sets, we have a Sim_{Jaccard} of 0.33. If we increase set A by 10 elements and decrease set


Fig. 1. Proposed model.

B by the same amount, while maintaining 50 elements in common, the Jaccard similarity score does not change. In other words, Jaccard has no sensitivity to the size of the sets.

Szymkiewicz-Simpson (Overlap): The Szymkiewicz-Simpson similarity measure is also known as the Overlap Coefficient. It is defined as the ratio between the number of elements of the intersection of A and B and the number of elements of the smaller of the two sets. The Szymkiewicz-Simpson similarity measure is calculated as Eq. 4:

Sim_{Overlap}(S_1, S_2) = \frac{|S_1 \cap S_2|}{\min(|S_1|, |S_2|)}   (4)

Using the smaller set size as the denominator causes the Szymkiewicz-Simpson measure to characterize the degree to which the smaller set is contained within the larger set, i.e. the extent of the inclusion of the smaller set in the large one. In other words, this measure provides information about whether a set is a subset of a larger set. With S_1, S_2 being the two sets in the above example, Sim_{Overlap}(S_1, S_2) = 0.50. Since both Jaccard and Overlap are similarity measures applied to two sets, their disadvantage when used to measure sentence similarity is that they do not exploit the order information of the words in the sentence: these measures treat a sentence as a bag of words. Consider the following sentences:
–
–
–
–




The sentences above all form pairs with a similarity of 1.0 under the Jaccard and Overlap measures even though they have completely different meanings. It can be seen that, besides information about the semantic relations between words, information about word order or sentence structure is also an important attribute for accurately evaluating the similarity of sentence pairs. To overcome the disadvantages of Jaccard and Overlap, we propose a new similarity measure using the Longest Common Subsequence (LCS) algorithm.

3.2 LCS Algorithm

Given two sentences, where S1 has n words and S2 has m words, the Longest Common Subsequence (LCS) algorithm determines the longest common subsequence of the two sentences S1 and S2.

Algorithm 1: Longest Common Subsequence Algorithm.
def LCS(S1, S2):
    Input: two sentences S1, S2
    Output: length of the longest common subsequence of S1 and S2
    n = len(S1)                     // the number of words in sentence S1
    m = len(S2)                     // the number of words in sentence S2
    int L[n + 1][m + 1]             // weight matrix
    for (int i = 0; i <= n; i++) do
        for (int j = 0; j <= m; j++) do
            if (i == 0) || (j == 0) then
                L[i][j] = 0
            else if S1[i - 1] == S2[j - 1] then
                L[i][j] = L[i - 1][j - 1] + 1
            else
                L[i][j] = Max(L[i - 1][j], L[i][j - 1])
            end
        end
    end
    return L[n][m]

To increase the performance of the LCS algorithm, we use a set of synonymous word pairs. This set includes 156,847 synonymous word pairs, which were extracted from the Vietnamese WordNet [16] and the VCL dictionary [17]. Thereby, synonyms appearing in the two sentences are considered as common words. The LCS algorithm finds the longest common subsequence of two sentences with time complexity Θ(n × m), where n, m are the lengths of sentences S1 and S2, respectively.
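A minimal Python sketch of this synonym-aware LCS follows (ours, not the paper's code); the synonym resource is assumed to be available as a set of frozenset word pairs, and the actual 156,847 pairs from Vietnamese WordNet and the VCL dictionary are not reproduced here.

def words_match(w1, w2, synonyms):
    # Two words count as common if they are identical or listed as synonyms.
    return w1 == w2 or frozenset((w1, w2)) in synonyms

def lcs_length(s1_words, s2_words, synonyms):
    n, m = len(s1_words), len(s2_words)
    L = [[0] * (m + 1) for _ in range(n + 1)]   # weight matrix, as in Algorithm 1
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if words_match(s1_words[i - 1], s2_words[j - 1], synonyms):
                L[i][j] = L[i - 1][j - 1] + 1
            else:
                L[i][j] = max(L[i - 1][j], L[i][j - 1])
    return L[n][m]   # runs in Theta(n x m) time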


3.3 Sentence Similarity Measure Based on LCS

Using the LCS algorithm, we propose a lexical similarity measure for a sentence pair as follows:

SimLCS(S1, S2) = 2 × |LCS(S1, S2)| / (|S1| + |S2|)    (5)

3.4 Sentence Similarity Measure Based on PhoBERT

To measure the semantic similarity of two sentences based on their sentence representation vectors, we use the phoBERT model. For each sentence S, we take the vectors of the top 4 hidden layers of the phoBERT model and concatenate them (Eq. 6), where ⊕ is the vector concatenation operator:

vBERT(S) = h1 ⊕ h2 ⊕ h3 ⊕ h4    (6)

The semantic similarity of two sentences S1, S2 is measured by the cosine similarity of their two representation vectors:

SimBERT(S1, S2) = (vBERT(S1) · vBERT(S2)) / (||vBERT(S1)|| × ||vBERT(S2)||)    (7)

Combining the lexical similarity measure (Eq. 5) and the distributional similarity measure (Eq. 7), we propose a new similarity measure for two sentences as follows:

SimComb(S1, S2) = α × SimLCS(S1, S2) + (1 − α) × SimBERT(S1, S2)    (8)
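The sketch below (ours, under stated assumptions) shows one way to realise Eqs. (6)-(8) with the HuggingFace vinai/phobert-base checkpoint: the sentence vector is taken from the first (<s>) token of the top four hidden layers, a pooling detail the paper does not spell out, and the input is assumed to be already word-segmented as phoBERT expects; lcs_length is the synonym-aware LCS sketched in Sect. 3.2.

import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("vinai/phobert-base")
model = AutoModel.from_pretrained("vinai/phobert-base", output_hidden_states=True)
model.eval()

def v_bert(sentence):
    # Eq. (6): concatenate the <s>-token vectors of the top 4 hidden layers.
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden_states = model(**inputs).hidden_states
    return torch.cat([hidden_states[-i][0, 0, :] for i in range(1, 5)])

def sim_bert(s1, s2):
    # Eq. (7): cosine similarity of the two representation vectors.
    return torch.nn.functional.cosine_similarity(v_bert(s1), v_bert(s2), dim=0).item()

def sim_comb(s1, s2, s1_words, s2_words, synonyms, alpha=0.4):
    # Eq. (5) and Eq. (8); alpha = 0.4 is the best setting reported in Sect. 5.
    sim_lcs = 2 * lcs_length(s1_words, s2_words, synonyms) / (len(s1_words) + len(s2_words))
    return alpha * sim_lcs + (1 - alpha) * sim_bert(s1, s2)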

4 Construct a Vietnamese Dataset

Sentence similarity is an important problem, but according to our search of natural language processing studies up to the present time, no Vietnamese datasets have been published for this problem. Therefore, in this study, we aim to build a large and reliable dataset, ViSentSim-1000 (Vietnamese Sentence Similarity), for the Vietnamese sentence similarity problem. To build the Vietnamese dataset, we use a Vietnamese corpus consisting of 28 million sentences with about 560 million words (Vcorpus) extracted from online newspapers. From this corpus, we extract 1000 pairs of sentences according to the following criteria:
– The similarity calculated according to SimJaccard of the sentence pairs is evenly distributed in the range from 0 to 1, with 100 pairs in each interval of width 0.1.
– Sentences must be more than 10 words and less than 50 words in length.

2 https://github.com/BuiTan/ViSentSim-1000.

Table 1. Annotation guidelines provided to annotators.

Title | Scale | Description
Very similar | 4 | Two sentences are completely similar in meaning. The two sentences refer to the same object or concept, using semantically similar words or synonyms to describe it. The length of the two sentences is equivalent
Somewhat similar | 3 | Two sentences with slight similarities in meaning, referring to the same object or concept. The length of the two sentences may vary slightly
Somewhat related but not similar | 2 | Two sentences that are related in meaning, each referring to different objects or concepts that are nevertheless related. The length of the two sentences may vary slightly
Slightly related | 1 | Two sentences that are different in meaning but have a slight semantic relation and may share the same topic. The length of the two sentences can vary greatly
Unrelated | 0 | The two sentences are completely different in meaning; their content is not related to each other. The length of the two sentences can vary greatly

– The chosen sentence pairs belong to different domains such as music, food, sports, economy, education, science, tourism, and others.
The selected sentence pairs were randomly divided into ten sets, each containing 100 sentence pairs. Each set was evaluated for similarity by 12 information technology students (annotators) according to the five levels shown in Table 1. Before assessing the similarity of the sentence pairs, the annotators were instructed on the similarity levels, including totally different, different, slightly similar, similar, and very similar, as well as on how to estimate the similarity of sentence pairs. The annotators evaluated the similarity of the sentence pairs independently (Table 2).
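A rough sketch of the pair-selection step described above (our reading, not the authors' code): candidate pairs from the corpus are filtered by length, bucketed by their Jaccard score into ten 0.1-wide intervals, and 100 pairs are drawn per interval; candidate_pairs is a hypothetical iterable of word-segmented sentence pairs, and jaccard is the function from Sect. 3.1.

import random
from collections import defaultdict

def select_pairs(candidate_pairs, pairs_per_bin=100, seed=0):
    bins = defaultdict(list)
    for s1_words, s2_words in candidate_pairs:
        # Length criterion: between 10 and 50 words.
        if not (10 < len(s1_words) < 50 and 10 < len(s2_words) < 50):
            continue
        score = jaccard(s1_words, s2_words)
        bins[min(int(score * 10), 9)].append((s1_words, s2_words))
    random.seed(seed)
    selected = []
    for b in range(10):   # one bucket per 0.1 interval of SimJaccard
        selected.extend(random.sample(bins[b], min(pairs_per_bin, len(bins[b]))))
    return selected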

5 Experiments

In this study, we conduct experiments with three similarity measurement models for Vietnamese sentence pairs: firstly, SimLCS, a sentence similarity model based on the lexical correlation of the sentence pairs; secondly, SimBERT, a sentence similarity model based on phoBERT; and thirdly, SimComb, a combination of the lexical correlation SimLCS and the distributional similarity SimBERT. The models are evaluated on the ViSentSim-1000 dataset, which consists of 1000 pairs of Vietnamese sentences evaluated and labeled by language experts. We experiment with α values from 0 to 1 with a step of 0.1. When α = 0 the role of SimLCS is disabled, while with α = 1 SimBERT is not exploited. Since the SimComb model achieved the best performance with α = 0.4, the test results presented in this section were obtained with this value of α. The results of the models were evaluated with the Pearson and Spearman correlation scores. The experimental results presented in Table 3 show that the SimComb model achieved higher results than the sentence similarity models based only on the lexical correlation or only on the distributional semantics of the sentences.
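The α sweep can be carried out as in the following sketch (ours): precomputed SimLCS and SimBERT scores for the 1000 pairs are combined per Eq. (8) for each α in {0.0, 0.1, ..., 1.0} and compared to the human labels with Pearson and Spearman correlation; lcs_scores, bert_scores and human_scores are assumed to be equal-length lists.

import numpy as np
from scipy.stats import pearsonr, spearmanr

def sweep_alpha(lcs_scores, bert_scores, human_scores):
    lcs_scores, bert_scores = np.asarray(lcs_scores), np.asarray(bert_scores)
    results = {}
    for alpha in np.round(np.arange(0.0, 1.01, 0.1), 1):
        comb = alpha * lcs_scores + (1 - alpha) * bert_scores   # Eq. (8)
        results[alpha] = (pearsonr(comb, human_scores)[0],
                          spearmanr(comb, human_scores)[0])
    return results   # the paper reports alpha = 0.4 as the best setting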

Table 2. Several sentence pairs of the ViSentSim-1000 dataset.

Table 3. Experimental results on the ViSentSim-1000 dataset.

         | SimLCS | SimBERT | SimComb
Pearson  | 0.54   | 0.52    | 0.63
Spearman | 0.51   | 0.49    | 0.61

The sentence similarity model in this study was used for extracting bilingual sentence pairs for Vietnamese-Laotian, Vietnamese-Khmer, and Vietnamese-Chinese, which is an important task of the KC-4.0-12/19-25 project. Specifically, Vietnamese-Laotian, Vietnamese-Khmer, and Vietnamese-Chinese bilingual documents are automatically crawled from the internet; these documents are then automatically aligned with a document alignment tool. For each aligned document pair, we choose bilingual sentence pairs whose semantic similarity between the Vietnamese sentence and the Vietnamese translation of the target sentence (obtained with the Google translation API) is greater than a threshold θ; these sentences are considered raw aligned sentences. To improve the quality of the parallel corpora, the raw aligned sentences are reviewed by linguists who are fluent in both languages of the pair. With θ = 0.8, the proportions of sentence pairs that are exactly aligned without correction, exactly aligned but needing correction, and not exactly aligned are 71%, 23%, and 6%, respectively. In addition, our model is also used to remove "soft" duplicates, aligned sentences that do not differ much from each other in terms of semantics and vocabulary, in order to reduce the effort of the language experts.

6 Conclusion and Future Work

In this paper, we introduced a combined approach to the sentence similarity problem. The proposed model measures the semantic similarity of Vietnamese sentence pairs by exploiting both the lexical similarity between two sentences and the similarity of their sentence representation vectors. Experimental results have shown that our proposed model performs well on the problem of sentence similarity in Vietnamese. In the future, we intend to exploit more characteristics of Vietnamese to improve the performance of the sentence similarity model.
Acknowledgments. This paper is part of project number KC-4.0-12/19-25, which is led by Doctor Nguyen Van Vinh and funded by the Science and Technology Program KC 4.0.

References
1. Aliguliyev, R.M.: A new sentence similarity measure and sentence based extractive technique for automatic text summarization. Expert Syst. Appl. 36(4), 7764–7772 (2009). http://dblp.uni-trier.de/db/journals/eswa/eswa36.html#Aliguliyev09

2. Burke, R., Hammond, K., Kulyukin, V., Tomuro, S.: Question answering from frequently asked question files. AI Mag. 18(2), 57–66 (1997)
3. Deerwester, S.C., Dumais, S.T., Landauer, T.K., Furnas, G.W., Harshman, R.A.: Indexing by latent semantic analysis. J. Am. Soc. Inf. Sci. 41(6), 391–407 (1990)
4. Devlin, J., Chang, M.W., Lee, K., Toutanova, K.: BERT: pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, vol. 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota, June 2019. https://doi.org/10.18653/v1/N19-1423, https://www.aclweb.org/anthology/N19-1423
5. Farouk, M., Ishizuka, M., Bollegala, D.: Graph matching based semantic search engine. In: Garoufallou, E., Sartori, F., Siatri, R., Zervas, M. (eds.) MTSR 2018. CCIS, vol. 846, pp. 89–100. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-14401-2_8, http://dblp.uni-trier.de/db/conf/mtsr/mtsr2018.html#FaroukIB18
6. Ferreira, R., Lins, R.D., Simske, S.J., Freitas, F., Riss, M.: Assessing sentence similarity through lexical, syntactic and semantic analysis. Comput. Speech Lang. 39, 1–28 (2016). http://dblp.uni-trier.de/db/journals/csl/csl39.html#FerreiraLSFR16
7. Heo, T.S., Kim, J.D., Park, C.Y., Kim, Y.S.: Global and local information adjustment for semantic similarity evaluation. Appl. Sci. 11(5), 2161 (2021). https://doi.org/10.3390/app11052161, https://www.mdpi.com/2076-3417/11/5/2161
8. Lee, M.C., Chang, J.W., Hsieh, T.C.: A grammar-based semantic similarity algorithm for natural language sentences. Sci. World J. 2014, 17 (2014). https://www.hindawi.com/journals/tswj/2014/437162/
9. Lee, M.C., Zhang, J.W., Lee, W.X., Ye, H.Y.: Sentence similarity computation based on PoS and semantic nets. In: Kim, J., et al. (eds.) NCM, pp. 907–912. IEEE Computer Society (2009). http://dblp.uni-trier.de/db/conf/ncm/ncm2009.html#LeeZLY09
10. Luhn, H.P.: A statistical approach to mechanized encoding and searching of literary information. IBM J. Res. Dev. 1, 309–317 (1957)
11. Manning, C.D., MacCartney, B.: Natural language inference (2009)
12. Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space (2013). arXiv:1301.3781
13. Morris, A.C., Maier, V., Green, P.D.: From WER and RIL to MER and WIL: improved evaluation measures for connected speech recognition. In: INTERSPEECH. ISCA (2004). http://dblp.uni-trier.de/db/conf/interspeech/interspeech2004.html#MorrisMG04
14. Mueller, J., Thyagarajan, A.: Siamese recurrent architectures for learning sentence similarity. In: Schuurmans, D., Wellman, M.P. (eds.) AAAI, pp. 2786–2792. AAAI Press (2016). http://dblp.uni-trier.de/db/conf/aaai/aaai2016.html#MuellerT16
15. Nguyen, D.Q., Nguyen, A.T.: PhoBERT: pre-trained language models for Vietnamese. In: Cohn, T., He, Y., Liu, Y. (eds.) EMNLP (Findings), pp. 1037–1042. Association for Computational Linguistics (2020). http://dblp.uni-trier.de/db/conf/emnlp/emnlp2020f.html#NguyenN20
16. Nguyen, P.T., Pham, V.L., Nguyen, H.A., Vu, H.H., Tran, N.A., Truong, T.T.H.: A two-phase approach for building Vietnamese WordNet. In: The 8th Global Wordnet Conference, pp. 259–264 (2015)
17. Nguyen, T.M.H., Romary, L., Rossignol, M., Vu, X.L.: A lexicon for Vietnamese language processing. Lang. Resour. Eval. 40(3–4), 291–309 (2006)


18. Pennington, J., Socher, R., Manning, C.D.: GloVe: global vectors for word representation. In: EMNLP, vol. 14, pp. 1532–1543 (2014)
19. Wang, Z., Mi, H., Ittycheriah, A.: Sentence similarity learning by lexical decomposition and composition. In: Calzolari, N., Matsumoto, Y., Prasad, R. (eds.) COLING, pp. 1340–1349. ACL (2016). http://dblp.uni-trier.de/db/conf/coling/coling2016.html#WangMI16
20. Yang, M., et al.: Sentence-level agreement for neural machine translation. In: Korhonen, A., Traum, D.R., Màrquez, L. (eds.) ACL (1), pp. 3076–3082. Association for Computational Linguistics (2019). http://dblp.uni-trier.de/db/conf/acl/acl2019-1.html#YangWCUSZZ19

ILSA Data Analysis with R Packages

Laura Ringienė(B), Julius Žilinskas, and Audronė Jakaitienė

Institute of Data Science and Digital Technologies, Vilnius University, Akademijos Street 4, 08412 Vilnius, Lithuania
[email protected]

Abstract. High-volume and specially structured International Large-Scale Assessment data such as PISA (Programme for International Student Assessment), TIMSS (Trends in International Mathematics and Science Study), and others are of interest to social scientists around the world. Such data can be analysed using commercial software such as SPSS, SAS, Mplus, etc. However, the use of the open-source R software for statistical calculations has recently increased in popularity. To encourage the social sciences to use open-source R software, we overview the possibilities of five packages for statistical analysis of International Large-Scale Assessment data: BIFIEsurvey, EdSurvey, intsvy, RALSA, and svyPVpack. We test and compare the packages using PISA and TIMSS data. We conclude that each package has its advantages and disadvantages. To conduct a comprehensive data analysis of International Large-Scale Assessment surveys one might need to use more than one package.

Keywords: ILSA data · R packages · Statistical analysis

1 Introduction

The International Association for the Assessment of Educational Achievement (IEA) and the Organization for Economic Co-operation and Development (OECD) collect data at various levels of education programs that support the analysis and evaluation of education around the world. These data are called International Large-Scale Assessment (ILSA) data. In this work, samples are taken from TIMSS 2019 (Trends in International Mathematics and Science Study, https://timssandpirls.bc.edu/) and PISA 2018 (Programme for International Student Assessment, https://www.oecd.org/pisa/) ILSA data. ILSA data are of high volume and unique in their composition. The data feature a hierarchical structure. At the lowest level are students, who belong to classes, classes to schools, and schools to countries. The country is the highest level in the hierarchical data. Of course, the number of students in classes, schools, and countries is not the same, so it is necessary to use weights to make comparisons with each other.

This project has received funding from the European Social Fund (project No. DOTSUT-39 (09.3.3-LMT-K-712-01-0018)/LSS-250000-57) under a grant agreement with the Research Council of Lithuania (LMTLT).

For a more detailed analysis, data are collected not only from students, but also from homes, parents, teachers, schools, or other contexts. The achievements of students in various fields are collected for the evaluation of the quality of education. Each student takes only a part of the test, so his/her achievements are denoted in the data by plausible values (from 5 to 10, depending on the study). More accurate results for statistical analysis are obtained when all plausible values are used, but calculations can also be performed with one plausible value. The PISA manual [6] specifies that statistical calculations are performed with each plausible value separately, and then the results are averaged. Thus, ILSA data are hierarchical, have several plausible values, and weights must be applied in the calculations due to the different numbers of students. These characteristics make statistical analysis of the data complicated. It is worth mentioning that ILSA data analysis is very important for social science research, and the statistical/mathematical/informatics analysis of ILSA data should be as simple as possible.
Statistical analysis or mathematical modelling of ILSA data can be performed with various commercial programs such as SPSS, SAS, Mplus, etc. In SPSS and SAS, it is possible to prepare data for analysis (combine data tables, remove unnecessary records, select variables); in Mplus, it is possible to analyse only prepared data. However, Mplus is adapted for ILSA data analysis. Meanwhile, SPSS and SAS can perform calculations with only one plausible value without additional programming. The IEA has developed the IDB Analyzer operating on SPSS and SAS (Windows environment only), and the OECD has developed macros for SPSS and SAS adapted for statistical analysis of ILSA data. Based on the capabilities of the IDB Analyzer, the investigator can compute key descriptive statistics, correlations, and linear and logistic regressions. SPSS and SAS macros are developed for specific purposes, e.g., multilevel linear regression, so one should run multiple macro commands for a detailed statistical analysis.
Commercial applications are not available to everyone, so there is an opportunity to use the free open-source statistical environment R to perform statistical analysis. There are five packages in R capable of performing ILSA statistical analysis: BIFIEsurvey [5], EdSurvey [1], intsvy [2], RALSA [3], and svyPVpack [4]. More technical information about the packages is given in Table 1, and the package descriptions can be found at https://cran.r-project.org/web/packages/. In the R environment, a researcher can perform all statistical analyses within one program using existing packages. It is worth mentioning that in R the results of statistical analysis are obtained by calling functions on the command line and are displayed on the screen. However, the RALSA package differs from the other R packages presented in this paper: it has a graphical user interface and all results are presented in MS Excel.
This study provides an overview of the possibilities of five R packages (BIFIEsurvey, EdSurvey, intsvy, RALSA, and svyPVpack) and of how easy they are to use from the perspectives of informatics, mathematics, and statistics for social science researchers.

Table 1. R packages for ILSA analysis

Package     | Version | Last published | Available ILSA data
EdSurvey    | 2.6.9   | 2021-03-22     | PISA, TIMSS and others*
BIFIEsurvey | 3.3-12  | 2019-06-12     | PISA, TIMSS, PIRLS
intsvy      | 2.5     | 2021-01-24     | PISA, TIMSS, PIRLS, ICILS, PIAAC
RALSA       | 1.0.1   | 2021-05-28     | PISA, TIMSS and others**
svyPVpack   | 0.1-1   | 2014-03-06     | PISA, PIAAC and others***

Possible PISA cycles: 2000, 2003, 2006, 2009, 2012, 2015, 2018; possible TIMSS cycles: 1995, 1999, 2003, 2007, 2011, 2015, 2019 (for svyPVpack the supported cycles are not mentioned).
* TIMSS Advanced, ICCS, ICILS, CivEd, PIAAC, TALIS, NEAP and ECLS
** CivED, ICCS, ICILS, RLII, PIRLS, TiPi, TIMSS Advanced, SITES, TEDS-M, TALIS, TALIS 3S
*** Not mentioned in the package manual

The difficulties encountered in testing the packages have led to the question of whether social scientists will use the R environment for statistical calculations when additional IT knowledge is required. Therefore, the purpose of this study is to test, analyse and compare the statistical analysis performed by the BIFIEsurvey, EdSurvey, intsvy, RALSA, and svyPVpack R packages and their ease of use in analysing TIMSS and PISA data. To our knowledge, this is the first review article to analyse the capabilities of five R packages suitable for analysing ILSA data.

2 Data Download and Preparation

The five R packages presented in this paper differ in the structure of their functions and in the presentation of results. Each package statistically processes data only in its own typical data format. The developers of each package solve the problems of data download, merging, and fast data reuse in their own way. Information about data download and preparation in the packages is presented in Table 2. The PISA and TIMSS data are provided in several data files on the official sites. PISA has student, school, and teacher questionnaire data files, among others. TIMSS has country-by-country data files for students, schools, teachers, and others. For this reason, it is not enough to simply read the data files; they also need to be merged properly. For social scientists without experience, merging data files may be a challenge. As shown in Table 2, only the EdSurvey package contains a function that downloads ILSA data from the original web pages. For the other packages, the user has to download the data himself. The EdSurvey, intsvy, and RALSA packages have the ability to merge data tables.

Table 2. Information about data download and preparation in R packages for ILSA analysis

Package     | Data download | Merging data | Data format         | First data preparation* | Data reuse
BIFIEsurvey | No            | No           | BIFIEdata           | 1 h                     | 1 h
EdSurvey    | Yes           | Yes          | edsurvey.data.frame | 1–5 h                   | 5–30 s
intsvy      | No            | Yes          | data.frame          | 1 h                     | 1 h
RALSA       | No            | Yes          | .Rdata              | 15 min–6 h              | 1–3 min
svyPVpack   | No            | No           | svydesign           | –                       | –
* The time is approximate as it depends on the data and the computer settings

Statistical Data Analysis

Properly prepared data tables of ILSA data in R with the packages BIFIEsurvey, EdSurvey, intsvy, and RALSA can be used for statistical analysis. The statistical analysis tools were divided into four groups: descriptive statistics, correlation, regression and multilevel linear modeling. Table 3 was created after summarizing the results of the review and testing of the R packages BIFIEsurvey, EdSurvey, intsvy, and RALSA. Table 3 shows what analysis can/cannot be performed with the ILSA data. The word “Yes” means that statistical analysis is possible in this package and the word “No” means opposite.

ILSA Data Analysis with R Packages

275

Table 3. Statistical data analysis with R packages BIFIEsurvey EdSurvey intsvy RALSA Descriptive statistics Number of records

Yes

Yes

Yes

Yes

Percentage

Yes

Yes

Yes

Yes

Number of NA

Yes

Yes

No

Yes

Minimum value

Yes

Yes

No

No

Maximum value

Yes

Yes

No

No

Mean

Yes

Yes

Yes

Yes

Standard deviation Yes

Yes

Yes

Yes

Percentile

No

Yes

Yes

Yes

Graphics

Yes

No

Yes

No

Yes

Yes

Yes

Yes

Linear

Yes

Yes

Yes

Yes

Logistic

Yes

No

Yes

Yes

Yes

Yes

No

No

Correlation Regression

Multilevel linear modeling

Table 3 shows that descriptive statistics can mostly be calculated with BIFIEsurvey or EdSurvey packages. The intsvy and RALSA packages provide only basic descriptive statistics: number of records, percentage, mean, and standard deviation. The correlation coefficient can be calculated with any package. Linear regression can also be calculated with any package, but logistic regression analysis can be performed with the BIFIEsurvey, intsvy, or RALSA packages. Multilevel linear modeling dominates in the study for the ILSA data. This method is implemented in the BIFIEsurvey and EdSurvey packages. 3.1

Descriptive Statistics

No matter which package we use to compute the descriptive statistics, the results will all be presented the same. However, an appeal to the functions and presentation of the results on the screen is different (see Fig. 1). The BIFIEsurvey package can compute descriptive statistics for multiple variables at a time. There is an option to group variables. General information about the used data provided at the beginning of the results: the number of records, the number of plausible values and the number of replicate weights. Information on the calculated descriptive statistics is provided in the tables. It is inconvenient that the values of variables are given in codes (see Fig. 1a). There is an option to display variables in a histogram. The EdSurvey package can calculate descriptive statistics for only one categorical variable or for several interval variables at a time. The descriptive statistics must be calculated separately for the categorical and interval variables. There is an option to group variables. The results are presented in the tables. The advantage is that the full names of the variables are given (see Fig. 1b).

276

L. Ringien˙e et al.

(a) BIFIEsurvey

(b) EdSurvey

(c) intsvy

(d) RALSA

Fig. 1. Results for R packages (Parents’ highest education level (ASDHEDUP) from TIMSS 2019, Lithuania)

ILSA Data Analysis with R Packages

277

The intsvy package computes descriptive statistics for only one variable at a time. The results are presented in the tables. All numbers are rounded to two decimal places (see Fig. 1c). The intsvy package is the only one that provides detailed information about NA values. For example, what is the average math achievement for children with no parental education. Other packages simply remove such records from the calculations. There is an option to display variables in a histogram and several histograms according to the grouping of variables. There is also the ability to visualize averages over a variable range. The RALSA package stands out from other packages because it has a graphical user interface and all results are presented in MS Excel. The result file consists of three pages: descriptive statistics results, general information about the data and calling syntax. The RALSA package can compute descriptive statistics for multiple variables at a time. The results are presented for each country separately and the overall average of all analyzed countries (see Fig. 1d). 3.2

Correlation

The same value of correlation coefficient are obtained using all packages. As shown in Table 4, the responses are presented in different rounded form (from 2 decimal places in the RALSA package, to 7 decimal places in the EdSurvey package). The calculation time is approximate as it varies by 5 s for all packages. The RALSA package is the fastest to calculate the correlation coefficient. Table 4. Pearson correlation comparison between R packages and ILSA data TIMSS 2019 Correlation∗ Time (seconds) BIFIEsurvey −.4282

PISA 2018 Correlation∗∗ Time (seconds)

25

.3749

42

EdSurvey

−.4282541

28

.3749383

35

intsvy

–∗∗∗



.375

13

RALSA −.43 14 .37 12∗∗∗∗ ∗ Correlation between Math achievement and Parents’ highest education level (Lithuania) ∗∗ Correlation between Math achievement and Index of economic, social and cultural status (Lithuania) ∗∗∗ The correlation coefficient is calculated only between interval variables ∗∗∗∗ The calculations are carried out for 80 countries and take 16 min. Next to each country there is a time, which is shown in the table.

The Pearson correlation coefficient is calculated with all packages, and the Spearman correlation coefficient is calculated only with the EdSurvey and RALSA packages.

278

L. Ringien˙e et al.

The BIFIEsurvey package calculates the correlation coefficient for any variable. A correlation between more than two variables can be calculated at a time. The results provide general information on the data, statistical inference for correlations and correlation matrices. There is an option to calculate correlation for grouped variables. The results then present a separate correlation matrix for each group. The EdSurvey package calculates Pearson and Spearman correlation coefficients for any variable. Correlation coefficients are calculated only between two variables. The resulting output depends on the variables. The result gives the correlation coefficient name (Pearson or Spearman), the number of records in the data, the number of records used to calculate the correlation coefficient, the correlation coefficient, the standard error and the confidence interval if the correlation is calculated between interval variables. The result gives the same information plus the correlation levels for categorical variables if one of the variable is categorical. It is not possible to calculate the correlation coefficient when the variables are grouped. The intsvy package calculates the correlation coefficient only between interval variables. Correlation coefficients are calculated only between two variables. The result is a correlation matrix with standard errors. The inconvenience is that after calculating the correlation between all plausible values and some other interval variable, the correlation matrix indicates that the correlation was calculated with the first plausible value, when in fact all plausible values were used. This seems a small mistake can be very misleading. The RALSA package calculates Pearson and Spearman correlation coefficients for any variable. A correlation between more than two variables can be calculated at a time. The results are presented in MS Excel as in the case of descriptive statistics on three pages. The result table provides descriptive statistics information and a correlation matrix with errors. The results are presented for each country separately and the overall average of all analyzed countries. 3.3

Regression

Properly presented data provide the same regression coefficients and R2 values for all packages. As shown in Table 5, the BIFIEsurvey and RALSA packages have the option of providing standardised regression coefficients. The estimates of the standardised regression coefficients differ by one hundredth between the packages (BIFIEsurvey −.43 and RALSA −.44). A comparison of the computation times of the packages shows that the intsvy package takes the longest to compute the regression coefficients, but less than a minute. The BIFIEsurvey package can calculate linear and logistic regression. The variables for a function can be presented in two ways: by specifying dependent and independent variables or by writing a formula: dependent ∼ independent1 + independent2

(1)

The results provide general information on the data and statistical inference for linear or logistic regression table. There are unstandardized and standardized

ILSA Data Analysis with R Packages

279

Table 5. Linear regression comparison between R packages (depended variable Math achievement, independed – Parents’ highest education level (ASDHEDUP) from TIMSS 2019, Lithuania) BIFIEsurvey EdSurvey intsvy (intercept)

609.3244

asdhedup

−40.0808

standardized (intercept) 0

609.3244

RALSA

609.32 609.32

−40.0808 −40.08 −40.08 –



0

standardized asdhedup

−.4283





.44

R2

.1834

.1834

.18

.18

Time (seconds)

19

17

50

19

values of the coefficients, the standard deviation, and R2 values. There is an option to calculate regression for grouped variables. The BIFIEsurvey package does not include empty variable values in the calculations. However, the values of variables such as “Valid Skip”, “Not Applicable”, “Invalid”, “No Response”, “Omitted or invalid”, and “Sysmis” are included in the calculations because they are numerical values. Variables need to be re-coded before constructing a regression model. The EdSurvey package can calculate only the linear regression. The variables for the function are given by writing the formula as given in (1). The results can be presented in two ways: only estimates of regression coefficients or more model estimates. The screen provides general information about the data, a table of coefficients with the significance of the variable for the regression formula, and the value of R2 when more model results are selected. The EdSurvey package has the ability to compute regression coefficients for two dependent variables simultaneously when the independent variables are identical. The result then presents two tables of coefficients, for each dependent variable separately. It is not possible to calculate the correlation coefficient when the variables are grouped. Data from one or more countries with all possible variables can be used for calculations in the EdSurvey package. However, such a data takes up a lot of memory. It is possible to choose only the variables needed for statistical analysis. In this case, the values of the categorical variables are coded in full name and regression analysis with categorical variables is not possible. Categorical variables need to be re-coded before performing regression analysis. The intsvy package can calculate linear and logistic regression. The variables for a function are given by specifying dependent and independent variables. The result is only a table of regression coefficient estimates with errors and t values. There is R2 value in the last row of the table. There is an option to calculate regression for grouped variables. The result provides separate tables of regression coefficients for each group. The values of the variables must be numeric, as in the EdSurvey package. If the data for intsvy package is prepared with R functions, it does not need to be re-coded. However, if the data is prepared with the EdSurvey package, it must be re-coded before performing the calculations.

280

L. Ringien˙e et al.

The RALSA package can calculate linear and logistic regression. The user must specify the dependent and independent variables and whether to provide standardized coefficients in the result using the graphical interface. The results are presented in MS Excel on four pages: regression model coefficients, model statistic, general information about the data and calling syntax. There are regression coefficient value, standard error, the values of the variables belonging to the regression equation in the regression model coefficients page. The model statistics page contains model fit estimates. There is an option to calculate the regression for grouped variables. 3.4

Multilevel Linear Modeling

Multilevel regression analysis is very important for the ILSA data because of hierarchical structures. Models can be in two or three levels. Only the BIFIEsurvey and EdSurvey packages have the ability to create a multilevel linear models. The results obtained do not match between the packages. Models for packages can only be provided as formula. Too few examples of formula writing are provided in the packages manuals. Writing the formula for a more complex model can be very difficult for the social scientist, who finds the diagrams easier to read and understand. Model fit and comparability indices such as AIC, BIC, RMSEA, CFI, TLI, and others are not provided in both packages. The BIFIEsurvey package can only form a two-level model. The result provides general information about the data and a table of model estimates. The last rows of the result table show the R2 value and intraclass correlation coefficient. The EdSurvey package can form two and three level models. The result provides model formula, number of plausible values, table of levels records number, table of variance terms with standard error and standard deviation, table of fixed effects values with error and t value, and intraclass correlation value. However, the values of the significance of each variable are not provided. As the results led by the packages do not match and it is difficult to estimate which package results are more accurate, an additional comparison with the Mplus program model was made. NULL models were compared. A dependent variable is only needed to an empty model. TIMSS 2019 data of Lithuanian grade 4 were used to construct the models. The achievements of mathematics are taken as a dependent variable. The obtained results are shown in Table 6. Table 6. BIFIEsurvey, EdSurvey, and Mplus two-level NULL model comparison Estimates

Mplus

BIFIEsurvey EdSurvey

Differences in achievement between students 4249.997 4148.691

4076.217

Differences in achievement between school

2628.193

2670.408 2976.270

Mean of achievement

523.302

545.786

523.514

Intraclass correlation

.365

.418

.392

ILSA Data Analysis with R Packages

281

The data in Table 6 show that the results of all three NULL models constructed for mathematical achievements are different. However, the model created with the EdSurvey package function is closer to the model created by the Mplus software, because the difference between the estimates is smaller.

4

Conclusions

When testing and analysing the capabilities of five R software packages from a statistical and informatics perspective for ILSA data analysis, we encountered various problems: 1. The documentation of packages do not provide a complete, sufficient description of the functions. 2. Packages read ILSA data from different data tables, which are hardly compatible with each other. 3. Packages are developed by independent teams of researchers, so the outputs of the functions are presented in different forms and are difficult to combine. 4. More sophisticated secondary analysis methods are not properly implemented and do not provide the required results. The problems listed above illustrate the complexity of analysing the ILSA data with the R packages not only because of the specific data itself but also because of the heterogeneity of content of the analysed packages. For a researcher who has no knowledge in computer science, analysing the ILSA data with the R package may require too much time, effort, and new knowledge. The developers of the RALSA package have already taken a step towards such researchers who has no knowledge in computer science, as the package has a graphical user interface, but does not yet perform important secondary analysis. More detailed documentation of packages and better compatibility between the packages would facilitate statistical analysis of the ILSA data. All packages have their advantages and disadvantages. The best strategy for comprehensive ILSA data analysis might require the use of more than one package.

References 1. Bailey, P., et al.: EdSurvey: Analysis of NCES Education Survey and Assessment Data. R package version 2.6.9 (2021). https://CRAN.R-project.org/ package=EdSurvey 2. Caro, D.H., Biecek, P.: intsvy: an R package for analyzing international large-scale assessment data. J. Stat. Softw. 81(7), 1–44 (2017). https://CRAN.R-project.org/ package=intsvy 3. Mirazchiyski, P.V., INERI: RALSA: R Analyzer for Large-Scale Assessments. R package version 0.90.3 (2021). https://CRAN.R-project.org/package=RALSA 4. Reif, M., Peterbauer, J.: svyPVpack: a package for complex surveys including plausible values. R package version 0.1-1 (2014). https://github.com/manuelreif/ svyPVpack

282

L. Ringien˙e et al.

5. Robitzsch, A., Oberwimmer, K.: BIFIEsurvey: tools for survey statistics in educational assessment. R package version 3.3-12 (2019). https://CRAN.R-project.org/ package=BIFIEsurvey 6. Watanabe, R.: PISA Data Analysis Manual SPSS, 2nd edn. OECD, Paris (2009)

An Ensemble Learning Approach for Credit Scoring Problem: A Case Study of Taiwan Default Credit Card Dataset Duc Quynh Tran1(B) , Doan Dong Nguyen1 , Huu Hai Nguyen2 , and Quang Thuan Nguyen1 1

International School, Vietnam National University Hanoi, Hanoi, Vietnam {ducquynh,dongnd,nguyenquangthuan}@vnu.edu.vn 2 Vietnam National University of Agriculture, Hanoi, Vietnam [email protected] Abstract. Credit scoring is very important for financial institutions. With the advent of machine learning, credit scoring problems can be considered as classification problems. In recent years, credit scoring problems have been attracted to researchers. They explored machine learning and data preprocessing methods for specific datasets. The difficulties of the credit scoring problem reside in the imbalance of datasets and the categorical features. In this paper, we consider a Taiwan credit dataset which is shared publicly. The small number of studies on this dataset motivates us to carry out the investigation. We first proposed methods to transform and balance the dataset and then explore the performance of classical classification models. Finally, we use ensemble learning, namely Voting which combines the results of some classifiers to improve the performance. The experimental results show that our approach is better than the recent publishes and the Voting approach is very promising. Keywords: Credit scoring prediction learning · Voting

1

· Ensemble learning · Machine

Introduction

Credit scoring is very important in finance management. It helps lenders estimate the creditworthiness of a person or a company. Lenders and financial institutions often use the creditworthiness of a company to decide whether to extend or deny credit. If a financial institution has a good credit scoring method then it can decrease the proportion of bad loans. Hence, it can decrease the risk level. In the literature, there are some existing methods for credit scoring based on expert knowledge. Nowadays, with the development of machine learning, the credit scoring can be considered as a classification problem. Financial institutions collect data of clients and assign the label to the clients based on the history or the opinion of experts. The clients can be separated into two groups. The first group contains the low-risk clients (label 0) and the second group contains the high-risk clients (label 1). The data is used to train classification models and c The Author(s), under exclusive license to Springer Nature Switzerland AG 2022  H. A. Le Thi et al. (Eds.): MCO 2021, LNNS 363, pp. 283–292, 2022. https://doi.org/10.1007/978-3-030-92666-3_24

284

D. Q. Tran et al.

then the model is applied to predict the risk level of a new client. The machine learning method overcomes some disadvantages of the method based on expert knowledge such as it is objective and it provides the estimation immediately. The work in [7] has used different models to classify the default of credit card problems. The experimental results have shown that the Neural Network yielded better results compared to other models. The ensemble models such as boosting, random forest, and bagging have been investigated in [5] and the results have proved that Boosting performed better in terms of accuracy. Combining Bagging and Stacking approaches were applied in [9] and compared to other models by using the metrics such as Area Under the Curve (AUC), AUC-H measure, etc. The author in [6] also conducted experiments on various default credit card datasets to investigate combinations of Bagging, Voting, etc. Although there are some existing models for classification problems, it is very hard to use directly the existing models because the performance depends on data. For some datasets, the performance of the existing methods is low. How to improve their performance is still challenging. The difficulties may come from missing values/categorical features and the imbalance of the data set. Besides, improving the accuracy may come from the feature engineering step, in which the steps to transform the data or to create more useful attributes also play a very important role. Our paper investigated the performance of some existing models and proposed suitable methods for categorical features and the imbalance of the dataset. Besides, we proposed a method1 to exploit the reciprocal effect of features to improve the quality of the data. Finally, we combine multiple classifiers with weighted voting to form a robust and reliable prediction model. The techniques we used are Random Forest, Logistic Regression, and Gradient Boosting with weighted voting. The results confirmed our proposed method of better accuracy in prediction. The paper is organized as follows. Section 2 presents the methodology which is used in the paper. Section 3 introduces the dataset and experimental settings. Subsequently, the results are discussed. Finally, Sect. 5 provides conclusions.

2

Solution Methods

2.1

Existing Methods

In this research, we use some existing methods including Support vector machine (SVM), Random forest, Logistic regression, and Gradient boosting. The idea of SVM is to find a hyperplane that separates two classes [8] while Random forest classifier is an ensemble learning method which bases on a decision tree classifier, bootstrap, and voting. Bagging is similar to Random forest but we can use other base classifiers instead of decision tree classifiers. The idea is to use an iterative procedure to change the distribution of the training sample for learning base classifiers so that they increasingly focus on instances that are hard to classify. 1

https://github.com/doandongnguyen/TaiwanCreditScoring.

An Ensemble Learning Approach For Solving Credit Scoring Problem

285

The basic idea of logistic regression is to use an exponential of a linear function for approximating the odds of instances. Support Vector Machine Support vector machine (SVM) is a method in supervised learning for solving a classification problem or a regression problem [2]. In binary classification problems, SVM tries to find a hyperplane that well separates two classes. We suppose that the training data consists of n records (xi , yi ). It means that the label of xi is yi ∈ {−1, 1}. The equation of a separating hyperplane in space Rn has the form wT x + b = 0.The values of parameter w and b (for linear soft-margin SVM) can be obtained by solving the following optimization problem [8]: n  w2 +C ξi , ∀i = 1, 2, ..., n w,b,ξi 2 i=1 T subject to yi (w xi + b) ≥ 1 − ξi , ξi ≥ 0

min

In the case where linear models do not fit well the data, we may use a suitable kernel function, φ(.), to transform any data instance x to φ(x). Thus, the separating hyperplane is written in the transformed space as wT φ(x)+b = 0. To get the optimal separating hyperplane, we solve the following optimization problem: n  w2 +C ξi w,b,ξi 2 i=1 subject to yi (wT φ(xi ) + b) ≥ 1 − ξi , ξi ≥ 0

min

Random Forest Random forest classifier is an ensemble learning method which bases on decision tree classifier, bootstrap, and voting. Firstly, we select a random sample with replacement from the original sample. We then build a decision tree classifier from the bootstrap sample. The procedure is repeated k time to obtain k classifier. The predicted label is obtained from k-predicted labels of k classifier by voting. Random forest classifier can be described as follows [3]: 1. Chose a random bootstrap sample of size n (randomly select n instances with replacement from the training set). 2. Train a decision tree by using the selected bootstrap sample. At each node: – Randomly chose d features from the set of features. – Split the node using the feature that provides the best split. 3. Repeat the steps 1 to 2 k times. 4. Combine the prediction by majority vote to obtain the class label. Logistic Regression To make the prediction in binary classification problems, we may calculate the conditional probability of a label y given observation x, P (y|x), which is called

286

D. Q. Tran et al.

P (y = 1|x) , is P (y = 0|x) called the odds of a data instance x. If this ratio is smaller than 1, then x is assigned to label y = 0. Otherwise, it is assigned to label y = 1. The basic idea of the logistic regression is to use an exponential of a linear function for approximating the odds of x as follows: the posterior probability. The ratio of the posterior probability,

T P (y = 1|x) = ew x+b = ez P (y = 0|x)

Since P (y = 1|x) + P (y = 0|x) = 1, we can obtain P (y = 1|x) =

1 = σ(z) 1 + e−z

1 1 + ez where function σ(.) is the logistic function or sigmoid function. We suppose that the training data consists of n training instances, where every training instance xi is associated with a binary label yi ∈ {0, 1}. The likelihood ob observing yi given xi , w and b can be expressed as: P (y = 0|x) = 1 − σ(z) =

P (yi |xi , w, b) = (σ(wT + b))yi .(1 − σ(wT + b))1−yi . The parameters of the logistic regression, (w, b), can be found by maximizing the n  P (yi |xi , w, b). By consequence, likelihood of all training instances L(w, b) = i=1

parameters (w, b) can be found by solve the following optimization problem: min {−

(w,b)

n 

yi log(σ(wT xi + b)) −

i=1

n 

(1 − yi ) log(1 − σ(wT xi + b))}

i=1

Gradient Boosting Boosting is an ensemble learning method. The idea is to use an iterative procedure to change the distribution of the training sample for learning base classifiers so that they increasingly focus on instances that are hard to classify. In each round of training, we build a weak classifier and the predictions five by the classifier are compared to the actual outcome. The error of the model is calculated by the gap between the predicted values and the actual values. These errors will help us compute the gradient. Basically, the partial derivative of the loss function describes the steepness of the error function. Thus, the error can be used to find the parameter for the model in the next round by using the decent gradient method. There are some specific gradient boosting methods, but, in this research, we use XGBoost - scalable tree boosting system. 2.2

Proposed Method

In order to improve the performance, we propose a technique to combine weak classifiers to get a stronger classifier. The idea is similar to the idea of bagging.

An Ensemble Learning Approach For Solving Credit Scoring Problem

287

The difference is that several classification models are used as base classifiers and all classifiers are trained on the same training data. A fair voting strategy or weighted voting may be used to make the final prediction. The proposed algorithm can be described as follows: Voting Step 1: Take k classification model Ci , i = 1, 2, .., k. Step 2: Train k base classifiers Ci on the training data. k  Step 3: Select weight (w1 , w2 , ..., wk ) such that wi = 1 and wi > 0, ∀i = i=1

1, 2, ..., k. Step 4: Use the selected weight w to combine Ci , i = 1, 2, .., k and obtain the final classifier.

3 3.1

Experimental Settings Dataset

The used dataset is the default of credit card clients in Taiwan [10]. There are 25 attributes in the dataset as described as in Table 1. The outcome is the default payment attribute. There is no missing values in the datasets; however, the data set is imbalanced (the rating label 1–22% and label 0–78% in the outcome) as in Fig. 1. Table 1. Dataset description Attributes

Data types Descriptions

Default payment Categorical The response values (Yes = 1, No = 0) X1

Continuous Amount of the given credit (NT dollar)

X2

Categorical Gender (1 = male; 2 = female)

X3

Categorical Education 1=graduate school;2=university;3=high school;4 = others

X4

Categorical Marital status (1 = married; 2 = single; 3 = others)

X5

Continuous Age (year)

X6–X11

Category

X12–X17

Continuous Amount of bill statement (NT dollar) in September-April, 2015

X18–X23

Continuous Amount of previous payment (NT dollar) in September-April, 2015

X24

Continuous Limit Balance - Amount of given credit in NT dollars

3.2

History of past payment from April to September, 2005;

Preprocessing Data

In this section, we will describe the pipepline to process the dataset to obtained a cleaned data for ingesting the models.

288

D. Q. Tran et al.

Fig. 1. Default payment

Fig. 2. The distribution of the age values in the dataset

In the datasets, we found that there are more values in Education and Marriage than in the descriptions, so we changed the values not in the description into correct values. For instance, the values of 5 (unknown) and 6 (unknown) in the Education (X3) attribute are set to be 4 (Others). In the X1 to X11 attributes, there are values to indicate that the customers paid earlier (values of −2 and −1); therefore, we decided to set those values to be Zero (0 - pay duly). By doing those, the number of unwanted values will not affect the learning process of models. For the Age attribute, we create the bins to group customers’ ages called AgeBin. For example, the age from 20 to 30 to the Group 1, from 30–40 to Group 2 and so on. The histogram for customers’ age as in Fig. 2. By doing this, we can generalize the customers’ age information instead of being too specific using the age attribute. As stated previously, in order to investigate the correlation between variables, we create more informative features by multiplying the categorical attributes to each other. For instance, with the gender value of 1 (Men) and the bin of age value of (2 - the age from 20 to 30) will create a value of 2 (the men with age in 30s). We also do the same process to the other categorical attributes. In this work, we conduct two experimental scenarios, the first one using the dataset without multiplying categorical attributes and with multiplying categorical attributes. To prepare data for models to learn, we use a Min-Max scaler to transform the continuous attributes only (not applying for categorical attributes).

An Ensemble Learning Approach For Solving Credit Scoring Problem

289

Table 2. Hyper-parameter ranges for the tuning models Models

Hyper-parameters

Logistic Reg.

C: Inverse regularization strength [−100, 100]

SVM XGBoost

Penalty: the penalization

l2 or None

Solver: optimization algorithms

lbfgs, sag, saga, newton-cg

C: Regularization parameter

Exponential distribution

Gama: kernel coefficient

Exponential distribution

Max depth

[2, 15]

Learning rate

[0.001, .2]

Gamma

[0.01, 0.5]

Regularization lambda

[10, 100]

Gradient boosting No. estimators Loss

Random forest

3.3

Value range

[50, 1000] [Deviance, exponential]

Min sample split

[2, 5]

Max depth

[2, 10]

Max features

[0.3, 1.0] of total features

Number of estimators

[10, 2000]

Max features

Square root, .5, .9, 1. of total features

Processing Pipeline

In this section, we describe our proposed approach pipeline for model selection and evaluation. The pipeline is illustrated as in Fig. 3. The dataset is split into two sets: the training set (70%) and testing set (30%). The training set is used for model selection. There are 2 steps in the model selection phase: training and tuning model. Due to the imbalance in the dataset as in Fig. 1, we re-sample the training set by using over resampling method (SMOTE - Synthetic Minority Oversampling Technique) [4]. Then, the resampled dataset is used as the input for the models in the training and tuning step. To obtain the optimal sets of hyper-parameters for the models, we utilize the random search techniques [1] with 5-Fold cross-validation on the training set. The random search technique is proved that it is not only efficient but also faster than the grid search one. The description of hyper-parameters for the tuning step is in the Table 2. In the Voting model, we combine the RandomForest, XGBoost and Logistic Regression models. We set the hyper-parameters for these models based on tuning individual models. Also, by analyzing the errors of these models on the training sets, we found that the performance of the XGBoost model is better than the others; therefore, we set the values of weights to 2, 1, 1 for voting of XGBoost, RandomForest and Logistic Regression, respectively. After obtained the optimal models, the performance of those will be evaluated on the testing set. The results are reported in Sect. 4.


Fig. 3. Processing pipeline

4 Results

Our proposed method is compared with other models using performance metrics such as Precision, Recall, F1 score, weighted F1 score and Accuracy (see [8] for the definitions of these metrics). Because of the imbalance in the dataset, we focus on the F1 score to evaluate how well the models recognize the minority class (Label 1).

Table 3. Experimental results without multiplying categorical attributes

Models              | Class | Precision | Recall | F1 score | ACC
Logistic regression | 0     | 0.88      | 0.79   | 0.83     | 0.75
                    | 1     | 0.48      | 0.61   | 0.53     |
SVM                 | 0     | 0.88      | 0.76   | 0.82     | 0.77
                    | 1     | 0.43      | 0.65   | 0.52     |
Random forest       | 0     | 0.86      | 0.84   | 0.85     | 0.77
                    | 1     | 0.56      | 0.48   | 0.52     |
Gradient boosting   | 0     | 0.85      | 0.87   | 0.86     | 0.79
                    | 1     | 0.50      | 0.45   | 0.48     |
XGBoost             | 0     | 0.86      | 0.85   | 0.86     | 0.78
                    | 1     | 0.49      | 0.52   | 0.51     |
Voting              | 0     | 0.87      | 0.85   | 0.86     | 0.79
                    | 1     | 0.51      | 0.56   | 0.54     |

Table 3 and Table 4 describe the experimental results without and with multiplying categorical attributes, respectively. Although the F1 scores of the two scenarios are quite similar for the minority class (Label 1), the F1 scores for the majority class (Label 0) in the scenario with more attributes are higher.


Table 4. Experimental results with multiplying categorical attributes

Models              | Class | Precision | Recall | F1 score | F1 weighted | ACC  | F1 score (from [6])
Logistic regression | 0     | 0.88      | 0.81   | 0.84     | 0.77        | 0.77 | 0.47
                    | 1     | 0.48      | 0.61   | 0.53     |             |      |
SVM                 | 0     | 0.87      | 0.83   | 0.85     | 0.77        | 0.77 | 0.45
                    | 1     | 0.48      | 0.57   | 0.52     |             |      |
Random forest       | 0     | 0.86      | 0.89   | 0.88     | 0.80        | 0.80 | 0.47
                    | 1     | 0.56      | 0.48   | 0.52     |             |      |
Gradient boosting   | 0     | 0.85      | 0.91   | 0.88     | 0.79        | 0.80 | 0.47
                    | 1     | 0.57      | 0.43   | 0.49     |             |      |
XGBoost             | 0     | 0.85      | 0.91   | 0.88     | 0.80        | 0.80 | 0.47
                    | 1     | 0.59      | 0.44   | 0.51     |             |      |
Voting              | 0     | 0.86      | 0.89   | 0.88     | 0.80        | 0.81 | –
                    | 1     | 0.57      | 0.51   | 0.54     |             |      |

For example, the F1 scores for the majority class for Random Forest, Gradient Boosting, XGBoost and Voting in Table 4 are 88%, higher than those of the same models in Table 3. Therefore, it is clear that with more features created, the overall results are much better. In Table 4, we can see that the Voting model yields the best result with respect to the weighted F1 score and the accuracy score, with values of 80% and 81%, respectively. As shown in the result tables, the ensemble methods have better results than the SVM and Logistic Regression models in terms of recognizing the majority class (Label 0). The Logistic Regression model has an outstanding result of 53% for the F1 score on the minority class (Label 1), while Gradient Boosting performs this task poorly, with a 49% F1 score. XGBoost and Random Forest perform really well at predicting the majority class but considerably worse on the minority class. The Voting model outperforms the others thanks to the combination: its F1 score for the minority class and its weighted F1 score reach 54% and 80%, respectively. Besides the results from our experiments, we also compare with the results from the work of [6], whose F1 score on the minority class is shown in the last column. Overall, it is clear that the F1 scores in [6] on the minority class are quite low compared to our approach. Taking SVM and Random Forest as examples, the scores of our method are 52%, compared with just 47% in [6]. From the results, we can see that the accuracy of credit scoring is improved by combining multiple classifiers with weighted voting and by creating more useful features. The proposed model also recognizes the minority class better than the other models.

5 Conclusions

Estimating the creditworthiness of a person or a company is a constant challenge. The problem can be addressed by applying machine learning techniques. In this research, we proposed a suitable method to preprocess the Taiwan credit dataset, then investigated the performance of several classifiers and combined them to improve the accuracy. The numerical results show that the proposed method is efficient for the considered dataset and provides an alternative approach for other data. This investigation also contributes to the pool of methods for the Taiwan credit dataset. In future work, we may combine the proposed data preprocessing methods with deep learning and extend our approach to other datasets.

References
1. Bergstra, J., Bengio, Y.: Random search for hyper-parameter optimization. J. Mach. Learn. Res. 13(2), 281–305 (2012)
2. Boser, B.E., Guyon, I.M., Vapnik, V.N.: A training algorithm for optimal margin classifiers. In: Proceedings of the Fifth Annual Workshop on Computational Learning Theory, pp. 144–152 (1992)
3. Breiman, L.: Random forests. Mach. Learn. 45(1), 5–32 (2001)
4. Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: SMOTE: synthetic minority over-sampling technique. J. Artif. Intell. Res. 16, 321–357 (2002)
5. Hamori, S., Kawai, M., Kume, T., Murakami, Y., Watanabe, C.: Ensemble learning or deep learning? Application to default risk analysis. J. Risk Fin. Manage. 11(1), 12 (2018). https://doi.org/10.3390/jrfm11010012, https://www.mdpi.com/1911-8074/11/1/12
6. He, H., Zhang, W., Zhang, S.: A novel ensemble method for credit scoring: adaption of different imbalance ratios. Expert Syst. Appl. 98, 105–117 (2018)
7. Leong, O.J., Jayabalan, M.: A comparative study on credit card default risk predictive model. J. Comput. Theor. Nanosci. 16(8), 3591–3595 (2019)
8. Tan, P.N., Steinbach, M., Kumar, V.: Introduction to Data Mining. Pearson Education, Noida (2016)
9. Xia, Y., Liu, C., Da, B., Xie, F.: A novel heterogeneous ensemble credit scoring model based on bstacking approach. Expert Syst. Appl. 93, 182–199 (2018)
10. Yeh, I.C., Lien, C.H.: The comparisons of data mining techniques for the predictive accuracy of probability of default of credit card clients. Expert Syst. Appl. 36(2), 2473–2480 (2009)

A New Approach to the Improvement of the Federated Deep Learning Model in a Distributed Environment

Duc Thuan Le1,2(B), Van Huong Pham2, Van Hiep Hoang1, and Kim Khanh Nguyen1

1 Hanoi University of Science and Technology, Hanoi, Vietnam

{hiephv,khanhnk}@soict.hust.edu.vn

2 Academy of Cryptography Techniques, Hanoi, Vietnam

{thuanld,huongpv}@actvn.edu.vn

Abstract. The federated deep learning model has been successfully studied and applied in a distributed environment. The method aggregates the weight set on the server by averaging the component weight sets. The limitation of this method is that, although the numbers of training samples on the clients are different, the weights are simply averaged, so the importance of the component weight sets is not reflected. Therefore, this paper proposes a new method to synthesize the weight set for the distributed federated deep learning model based on the importance of the component weight sets. The importance is proportional to the number of training data samples; that is, the larger the dataset size, the more important the weight set. The proposed method is tested with the MNIST dataset using the K-fold method. The accuracy is improved by 2.54% compared to the old method.

Keywords: Deep learning · Convolutional neural network · Federated learning · Transfer learning

1 Introduction

Today, data science is a key development trend of information and communication technology. Machine learning and deep learning are topics of data science that have been successfully researched and applied in many fields. Data has become very important to the accuracy and practical significance of deep learning models. Currently, most data is stored and managed in a distributed way in the cloud computing model, so centralized machine learning models are not suitable. Moreover, centralized machine learning on one machine is also constrained by the processing capacity of the hardware. Therefore, a machine learning and deep learning model suited to this setting is needed. Some examples of large, distributed data sources used in machine learning are as follows. In 2017 [1], Google provided 65,000 recordings of 30 different short words spoken by thousands of people with different voices. According to [2], by 2020 Mozilla provided its Common Voice dataset with a capacity of 56 GB, covering 66,173 voices of different ages and languages. For images, there are many large datasets such as CIFAR [3], ImageNet [4] and Open Images [5], with the number of images


from 60,000 to nearly 10 million images. For malware on Android, there is the AMD [6] dataset with more than 24 thousand .apk files and a capacity of 77 GB. For recommender systems, training data ranges from a few hundred GiB to thousands of TiB. In some areas, computers must be trained to make predictions; Facebook and Google investigate user behavior on the internet to suggest videos and items on their websites. TikTok, an application that has challenged Facebook's position, also uses an algorithm to suggest videos based on the behaviors collected while users watched previous videos. From past data, systems effectively predict the weather, forecast the rise and fall of stocks, and so on. This shows the importance of training in machine learning. Well-trained, widely used machine learning models give highly accurate predictions or classifications. Traditional machine learning models such as KNN, DT, RF and SVM are commonly used. Over roughly the last ten years, machine learning models based on neural networks have been used more and more, typically deep learning models such as CNN, DBN, LSTM, AE, etc. After training, machine learning and deep learning models give very high classification and prediction results, most of them over 90%. However, a highly configurable computer is needed to train these models with big data, and even then the training time can be very long, lasting from many hours to many days. Furthermore, most data sources are now stored in a distributed manner, so a learning model better suited to this environment is needed. To address this challenge, distributed learning models are attracting more and more interest and are being applied more widely. One of the most used distributed learning models is federated learning. However, a common feature of distributed learning models is that the training results are often lower than those obtained by training on a single machine. In [7], H. B. McMahan et al. used a federated deep learning model in a distributed environment with a weighted-average aggregation method called Federated Averaging. In the experiments, the authors used CNN and LSTM models on the MNIST, Speech Commands and CIFAR-10 datasets, customizing the settings to obtain the highest accuracy. The data used on the clients is IID (independent and identically distributed). Similarly, to improve training efficiency, Yue Zhao et al. [8] extend [7] to non-IID data by sharing a small subset of common data. In addition, research on federated learning also aims at reducing the cost of transferring data, such as weights, from the clients to the server. Bonawitz et al. [9] developed an efficient secure aggregation protocol for federated learning, allowing a server to perform computation on high-dimensional data from mobile devices. Konecny et al. [10] proposed structured updates and sketched updates to reduce communication costs by two orders of magnitude. Lin et al. [11] proposed Deep Gradient Compression (DGC) to reduce the communication bandwidth by two orders of magnitude while training high-quality models. In [12], Xu et al. still use Federated Averaging but add a method, called T-FedAvg, to reduce energy consumption at the client during inference and communication.


In the typical studies above, updating the training model from distributed machines mainly uses average weighting or weighted summation. This is a limitation, because it does not evaluate the importance, contribution and influence of the component weights on the composite weights. Training on distributed machines gives different weights depending on the data and the number of training files at each computer. Therefore, averaging the weights does not fully reflect the importance of each weight set in the distributed training model. To overcome this limitation, in this paper we propose an improved method that aggregates the weight set on the server from the component weight sets with different ratios depending on their importance. Both the weights and the number of training samples of each machine are transferred to the server, where each weight set is scaled according to the proportion of files on which it was trained. This means that weights trained on many files get a large coefficient, indicating that they are more important. The rest of the paper is organized as follows. In Sect. 2, we summarize the background knowledge for developing this method. In Sect. 3, we present the improved method of aggregating weights. Section 4 presents the experiment. In Sect. 5 we provide discussion and evaluation. Section 6 is the conclusion.

2 Background Knowledge

2.1 Convolutional Neural Network

A convolutional neural network (CNN) is a class of artificial neural networks that has become dominant in a variety of domains. A CNN is a type of deep learning model for processing data that has a grid pattern, such as images; it is inspired by the organization of the animal visual cortex and designed to automatically and adaptively learn spatial hierarchies of features, from low- to high-level patterns. A CNN is a mathematical construct that is typically composed of three types of layers (or building blocks): convolution, pooling, and fully connected layers [14]. A CNN is built by stacking several of these building blocks: convolution layers, pooling layers (e.g., max pooling), and fully connected (FC) layers, as shown in Fig. 1 [14]. The model's performance under particular kernels and weights is calculated with a loss function through forward propagation on a training dataset, and the learnable parameters, i.e., kernels and weights, are updated according to the loss value through backpropagation with a gradient descent optimization algorithm.

2.2 Transfer Learning

Transfer learning is the application of skills learned from one problem (source domain D_s) with its source task T_s to another, related problem (target domain D_t) with target task T_t. Transfer learning aims to improve the learning of the predictive function f_T(·) for the task T_t on the domain D_t. Thus, transfer learning helps to take advantage of the knowledge learned from previous datasets or applications and apply it to new, similar datasets without having to retrain from scratch.


Fig. 1. An overview of a CNN architecture and the training process

According to [15], we have two definitions of transfer learning as follows.

Transfer Learning. Given a learning task T_t based on D_t, we can get help from D_s for the learning task T_s. Transfer learning aims to improve the performance of the predictive function f_T(·) for the learning task T_t by discovering and transferring latent knowledge from D_s and T_s, where D_s ≠ D_t and/or T_s ≠ T_t. In addition, in most cases the size of D_s is much larger than the size of D_t, N_s ≫ N_t.

Deep Transfer Learning. Given a transfer learning task defined by ⟨D_s, T_s, D_t, T_t, f_T(·)⟩, it is a deep transfer learning task when f_T(·) is a non-linear function realized by a deep neural network.

2.3 Federated Learning

Federated learning was first proposed in 2016 by H. B. McMahan et al. [9, 19, 20]. Since then, it has received great attention from research groups. Based on deep learning and transfer learning models, the federated learning model allows the training process to be performed on many different machines without having to share data between them. Federated learning has several advantages:

• It significantly reduces training time, since each machine trains only on its local data.
• It increases data security: the data does not need to be transmitted to one machine for training, and such transmission poses a risk of leaking application information or extracted features.


• Federated learning can still help improve the model based on updates of the model at each workstation. Therefore, it is easier to expand the model, extend the training set, and scale the training.

In the federated learning model, a server is used to aggregate the weights from the clients. After computing the aggregate weight set, the server sends it back to the member machines for use. Assuming C_i, i = 1, ..., N, is a set of N workstations, D_i is the dataset trained at machine C_i, and W_i is the weight set obtained when training at machine C_i, the training process and weight aggregation of the federated learning model in a distributed environment follow these steps:

Step 1: Train each client and the server independently, each C_i with its corresponding training dataset D_i. After training is complete, each machine produces a set of weights W_i of the model.
Step 2: C_i sends W_i to S.
Step 3: Server S aggregates the weight sets and calculates the new weights according to Eq. (1):

W = \frac{1}{N} \sum_{i=1}^{N} W_i    (1)

Step 4: Server S sends the aggregated set of weights back to the C_i.
Step 5: C_i updates its weights and continues training.
Step 6: For each C_i, if there are enough (k) new files, continue training and repeat from Step 2.
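A minimal sketch (not the authors' code) of the plain averaging in Eq. (1): every client weight set counts equally, regardless of how much data it was trained on; the toy two-layer weight shapes are illustrative.

```python
import numpy as np

def federated_average(client_weights):
    """client_weights: list of per-client weight lists (one numpy array per layer)."""
    n = len(client_weights)
    return [sum(layers) / n for layers in zip(*client_weights)]

rng = np.random.default_rng(0)
clients = [[rng.normal(size=(4, 3)), rng.normal(size=3)] for _ in range(3)]
global_weights = federated_average(clients)
print([w.shape for w in global_weights])   # [(4, 3), (3,)]
```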

3 The Improved Method to Aggregate the Set of Weights

3.1 Idea

The main idea of our paper is to synthesize the weight set on the server based on the component weight sets and the quality of the training samples. This quality depends on factors such as the number of training samples and the distribution of samples and labels. Within the scope of this paper, we focus only on the first factor to synthesize the weight set on the server. In deep learning models, the larger the dataset, the better the training results and the higher the quality of the weights. However, in current federated learning models, the composite weights are calculated as the average of the component weights, so the level of contribution of the component weight sets cannot be expressed. That is, a set of weights trained on a large dataset is treated as being as important as a set of weights trained on a small dataset. This affects the quality of the composite weight set. Therefore, the paper adds a scaling factor expressing the importance of each component weight set in the aggregation. This idea is described by the overall model in Fig. 2 and is developed in detail in the next section.


3.2 Mathematical Model

Definition 1. The set of composite weights. The set of composite weights is the set containing the weights calculated on the server based on the component weight sets; it is sent back to all clients in the system for use.

Definition 2. The component weight set. The component weight set is the weight set trained on each client with its individual dataset by the CNN model. This weight set is sent to the server to be composed.

Definition 3. The component dataset. The component dataset is the individual dataset used for training on each client. This dataset is updated and trained by the transfer learning model to improve the set of weights.

Fig. 2. The proposed model of the federated deep learning

Definition 4. The importance of the component set of weights. The importance of a component set of weights is a value that evaluates the influence of this set on the composite weights, denoted a_i. In deep learning, the larger the size of the training dataset, the more the network is trained and the more valuable the weights are. Therefore, we define the importance of the component weights to depend on the dataset size. The importance is defined according to Eq. (2):

a_i = \frac{D_i}{\sum_{j=1}^{N} D_j}    (2)

where:
• N is the number of clients,
• D_i is the size of the individual dataset on the i-th client,
• a_i is the importance of the weight set W_i.

In the proposed federated learning model, each client sends its component weight set together with the size of its dataset. Based on the component weight sets and the dataset sizes sent, the composite weight set is calculated according to Eq. (3):

W' = \sum_{i=1}^{N} a_i W_i    (3)

where:
• N is the number of clients,
• W_i is the weight set of the i-th client,
• a_i is the importance of the weight set W_i.
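A minimal sketch (not the authors' code) of the proposed aggregation in Eqs. (2)-(3): each client's weights are scaled by its share a_i of the total number of training samples; the weight shapes and sample counts are illustrative.

```python
import numpy as np

def weighted_federated_average(client_weights, dataset_sizes):
    """client_weights: list of per-client weight lists; dataset_sizes: samples per client."""
    total = float(sum(dataset_sizes))
    importance = [d / total for d in dataset_sizes]          # a_i of Eq. (2)
    return [sum(a * w for a, w in zip(importance, layers))   # Eq. (3)
            for layers in zip(*client_weights)]

rng = np.random.default_rng(0)
clients = [[rng.normal(size=(4, 3)), rng.normal(size=3)] for _ in range(3)]
sizes = [12600, 9240, 1260]                                  # samples seen per client
aggregated = weighted_federated_average(clients, sizes)
```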

4 Experiment

4.1 Experimental Model

To evaluate the proposed method, we carried out the experiment shown in Fig. 3. In this experiment, we use three clients; each client has an individual dataset trained with the same CNN structure, giving three sets of weights. The three sets of weights are sent to the server and aggregated according to the traditional method, producing a set of weights W. This set of weights W is sent back to the clients for classification and accuracy statistics. In addition, the three sets of weights and the dataset sizes are sent to the server and aggregated according to our proposed method, producing a set of weights W'. The weight set W' is also sent back to the clients for classification and accuracy evaluation.

4.2 Experimental Data, Program and Process

Experimental Data. In this experiment, we use the MNIST dataset [22], consisting of 60,000 training images and 10,000 test images. We use 10-fold splitting of the data. The training set of 60,000 files is


Fig. 3. The experimental model

split into 8 subsets (train = 12,600; train1 = 8,775; train2 = 9,240; train3 = 12,600; train4 = 5,040; train5 = 2,925; train6 = 7,560; train7 = 1,260). The test set is split into two parts: 5,000 files for validation and 5,000 files for testing.

Experimental Programs. To carry out this experiment, we implemented programs for the clients and the server. These programs have been put on GitHub [23] and have the following main features:
• clients and server use the same data format,
• clients and server use the same CNN model,
• the aggregation of weights is done on the server and sent back to the clients.

Experimental Process.
Step 1: Train and test individually on the computers (put train, train1, train2 and train3 on the server (S), client1 (CL1), client2 (CL2) and client3 (CL3), respectively).
Step 2: The server calculates the average of the weights from the component computers and sends it back to the clients.
Step 3: Put train4 on CL1 and train5 on CL2 to train and update the set of weights.
Step 4: From Step 3, update the set of weights on the server and send it back to the clients.
Step 5: Repeat Steps 3 and 4 (put train6 on CL1 and train7 on CL2).
Step 6: Train on all the data on a single computer for comparison.
(A schematic of this training loop in code is sketched below.)
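A schematic (not the authors' code) of the rounds in Steps 3-5: each listed client trains on its next data chunk, sends its weights and chunk size to the server, and receives the dataset-size-weighted average back; the toy local_train stand-in and weight shapes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_train(weights, n_samples):
    # Stand-in for CNN training on n_samples MNIST images.
    return [w + 0.01 * rng.normal(size=w.shape) for w in weights]

def aggregate(weight_sets, sizes):
    total = float(sum(sizes))
    return [sum((d / total) * w for d, w in zip(sizes, layers))
            for layers in zip(*weight_sets)]

global_w = [np.zeros((4, 3)), np.zeros(3)]                          # toy "CNN" weights
rounds = [{"CL1": 5040, "CL2": 2925}, {"CL1": 7560, "CL2": 1260}]   # Steps 3 and 5
for chunk_sizes in rounds:
    updates, sizes = [], []
    for client, n in chunk_sizes.items():
        updates.append(local_train(global_w, n))                    # client-side training
        sizes.append(n)
    global_w = aggregate(updates, sizes)                            # Step 4 on the server
```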


Table 1. The experimental results under the old method (%)

         | Server | Client_1 | Client_2 | Client_3
Step 1   | 95.26  | 94.04    | 94.36    | 94.98
Step 2   | 71.1   |          |          |
Step 3   | 71.1   | 91.7     | 92.5     | 71.1
Step 4   | 91.7   |          |          |
Step 5_3 | 91.7   | 95.38    | 91.56    | 91.7
Step 5_4 | 93.54  |          |          |
Step 6   | 97.86  |          |          |

Table 2. The experimental results under our method (%)

         | Server | Client_1 | Client_2 | Client_3
Step 1   | 96.56  | 95.54    | 96.44    | 96.5
Step 2   | 36.1   |          |          |
Step 3   | 36.1   | 94.3     | 93.22    | 36.1
Step 4   | 94.12  |          |          |
Step 5_3 | 94.12  | 96.2     | 93.94    | 95.74
Step 5_4 | 96.08  |          |          |
Step 6   | 97.92  |          |          |

5 Discussion and Evaluation

5.1 Experiment Result Evaluation

Performing the experiment under the model in Fig. 3, with the old method of aggregating the weights on the server we obtain the results in Table 1, and with the proposed method we obtain the results in Table 2. The federated learning and transfer learning models have advantages that make the training and testing process faster and more convenient:
• Each machine can independently train and test the dataset on its own machine.
• Training time is shorter, because the dataset on each machine is much smaller than the aggregate dataset trained on one machine.
• When there is no internet connection, a machine can still work; when there is a connection, the machine is updated with the latest set of weights. This reduces the training load on the server.
• There is no need to re-train from scratch, which reduces training time throughout the system.


• Each machine's set of weights is an array of numeric values, so transmission between machines is fast.

5.2 Discussion

Discussion 1: Comparison with training results on one machine. Putting all the training data on the same machine for training gives high results, up to 97.92%. Meanwhile, after training with the averaged weights of the old method, the result is 93.54%. We found that the federated learning model gives good results, close to those of training on a single computer. In general, using a distributed learning model, specifically federated learning, is feasible for detection and classification problems.

Discussion 2: In the distributed learning model, the weights calculated based on the number of training files, as we propose, give a result of 96.08%, which is 2.54% higher than using the average weights. From this, we see that sending the number of training files to the server, so that it can weight the component sets accordingly, gives good results.

6 Conclusion

The main contribution of our paper is to propose and develop a new method to synthesize the set of weights from the corresponding sets of component weights and dataset sizes. The weight aggregation model we propose gives a result 2.54% higher than the simple weight average. This is a positive result, since in the transfer process we only need to pass the number of training files as an additional parameter. On the other hand, compared to the training results on a traditional single computer, the accuracy is reduced by only 1.96%, while the distributed learning model has outstanding advantages compared to the traditional learning model. A limitation of the paper is that, when the set of weights is transmitted between the client and the server, it can be stolen or tampered with by attackers. If the approach is developed into a system for use on the internet, a mechanism is needed to ensure safety during transmission.

References
1. Pete, W.: Software Engineer, Google Brain Team: Launching the Speech Commands Dataset (2017). https://ai.googleblog.com/2017/08/launching-speech-commands-dataset.html. Accessed 15 Jul 2021
2. Mozilla's Common Voice Dataset. https://commonvoice.mozilla.org/en/datasets. Accessed 15 Jul 2021
3. Doon, R., Kumar Rawat, T., Gautam, S.: Cifar-10 classification using deep convolutional neural network. In: IEEE Punecon 2018, pp. 1–5 (2018). https://doi.org/10.1109/PUNECON.2018.8745428
4. Deng, J., Dong, W., Socher, R., Li, L., Li, K., Fei, L.: ImageNet: a large-scale hierarchical image database. In: IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). https://doi.org/10.1109/cvpr.2009.5206848


5. Kuznetsova, A., et al.: The Open Images Dataset V4: unified image classification, object detection, and visual relationship detection at scale. CoRR abs/1811.00982 (2018). https://doi.org/10.1007/s11263-020-01316-z
6. Wei, F., Li, Y., Roy, S., Ou, X., Zhou, W.: Deep ground truth analysis of current android malware. In: Polychronakis, M., Meier, M. (eds.) DIMVA 2017. LNCS, vol. 10327, pp. 252–276. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-60876-1_12
7. McMahan, H.B., Moore, E., Ramage, D., Hampson, S., Arcas, B.A.Y.: Communication-efficient learning of deep networks from decentralized data. In: International Conference on Artificial Intelligence and Statistics (2017)
8. Zhao, Y., Li, M., Lai, L., Suda, N., Civin, D., Chandra, V.: Federated learning with non-IID data (2018). CoRR abs/1806.00582. http://arxiv.org/abs/1806.00582
9. McMahan, H.B., Moore, E., Ramage, D., Arcas, B.A.Y.: Federated learning of deep networks using model averaging (2016). CoRR abs/1602.05629. http://arxiv.org/abs/1602.05629
10. Konečný, J., McMahan, H.B., Yu, F.X., Richtárik, P., Suresh, A.T., Bacon, D.: Federated learning: strategies for improving communication efficiency (2016). CoRR abs/1610.05492. http://arxiv.org/abs/1610.05492
11. Lin, Y., Han, S., Mao, H., Wang, Y., Dally, J.W.: Deep gradient compression: reducing the communication bandwidth for distributed training (2017). CoRR abs/1712.01887. http://arxiv.org/abs/1712.01887
12. Xu, J., Du, W., Jin, Y., He, W., Cheng, R.: Ternary compression for communication-efficient federated learning (2020). CoRR abs/2003.03564. https://arxiv.org/abs/2003.03564
13. Xu, J., Jin, Y., Du, W., Gu, S.: A federated data-driven evolutionary algorithm (2021). CoRR abs/2102.08288
14. Yamashita, R., Nishio, M., Do, R.K.G., Togashi, K.: Convolutional neural networks: an overview and application in radiology. Insights Imaging 9(4), 611–629 (2018). https://doi.org/10.1007/s13244-018-0639-9
15. Tan, C., Sun, F., Kong, T., Zhang, W., Yang, C., Liu, C.: A survey on deep transfer learning (2018). CoRR abs/1808.01974. http://arxiv.org/abs/1808.01974
16. Xu, R., Baracaldo, N., Zhou, Y., Anwar, A., Ludwig, H.: HybridAlpha: an efficient approach for privacy-preserving federated learning. In: Proceedings of the 12th ACM Workshop on Artificial Intelligence and Security, pp. 13–23 (2019)
17. Zhu, H., Jin, Y.: Real-time federated evolutionary neural architecture search (2020). CoRR abs/2003.02793. https://arxiv.org/abs/2003.02793
18. Zhu, H., Zhang, H., Jin, Y.: From federated learning to federated neural architecture search: a survey. Complex Intell. Syst. 7, 639–657 (2021)
19. Bonawitz, K., et al.: Practical secure aggregation for privacy-preserving machine learning. In: Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, pp. 1175–1191. ACM (2017)
20. Konečný, J., McMahan, H.B., Yu, F.X., Richtárik, P., Suresh, A.T., Bacon, D.: Federated learning: strategies for improving communication efficiency (2016). CoRR abs/1610.05492. http://arxiv.org/abs/1610.05492
21. Caldas, S., Konečný, J., McMahan, H.B., Talwalkar, A.: Expanding the reach of federated learning by reducing client resource requirements (2018). CoRR abs/1812.07210. http://arxiv.org/abs/1812.07210
22. LeCun, Y., Cortes, C.: The MNIST database of handwritten digits. http://yann.lecun.com/exdb/mnist/. Accessed 24 Jul 2021
23. Homepage. https://github.com/lethuan255/distributed_learning. Accessed 24 Jul 2021

Optimal Control in Learning Neural Network

Marta Lipnicka and Andrzej Nowakowski(B)

Faculty of Math and Computer Sciences, University of Łódź, Banacha 22, 90-238 Łódź, Poland
{marta.lipnicka,andrzej.nowakowski}@wmii.uni.lodz.pl

Abstract. We present an optimal control approach to improving a neural network learned on given empirical data (a set of observations). Artificial neural networks are usually described as black boxes, and it is difficult to say more about their properties than the very general results obtained from the learning data. For many applications, e.g. medicine or embedded systems controlling autonomous vehicles, it is essential to say not only that on training data we get some error, but that we will make an error not greater than some ε for every input to our system. To derive the required theory, we apply optimal control theory to a certain family of neural networks, considered as ordinary differential equations, defined by a set of controls and a suitably constructed functional. Very often we have additional information or knowledge about the problem the data represent. Our approach allows us to include this information and knowledge in the construction of the model. We apply a modification of classical dynamic programming ideas to formulate a new optimization problem. We use it to state and prove sufficient approximate optimality conditions for finding an approximate neural network which should work correctly, for a given ε with respect to the built functional, on data different from the set of observations.

Keywords: Neural networks · Optimal control · Approximate sufficient optimality conditions · Computational algorithm

1 Introduction

Artificial neural networks are often used to model a given phenomenon from empirical data. They focus mainly on models learned solely from observed data (supervised learning). However, in practice, for example in economics or medicine applications, explicit, parametric models are required, e.g. modeling known process constraints or operations constraints (see e.g. [2–4,13]). It is well known that training of a neural network on finite data is usually done by back propagation methods. We have to be aware of some weaknesses of that type of training of neural networks. The learned neural network is a function, and we have learned and checked it only on a finite number of data. Thus, we are only sure that it is correct on the training data. But are we able to say something about the correctness of the behavior of that network for data different from the training data, i.e. for all points of the domain of


the definition for that function (neural network)? A good example is training data generated by the function sin(1/x), of the form (1/kπ, 0), k = 1, 2, ..., with 1/kπ the input and 0 the output of the network. A network trained on these data will learn a function which is near zero on the interval (0, 0.5]. It is obvious that there are infinitely many functions passing through these points, not only sin(1/x). Which one, then, is the model that we are looking for given these data? We should admit that in a large number of cases, as long as the networks can do the work, e.g. classification, prediction, and so on, people are satisfied. They do not care whether it is the one they want, or whether one even exists, as long as it behaves as expected. But in applications such as medicine or embedded systems for autonomous vehicles or space shuttles, it is not enough to say that the method usually works or that it works on all test cases. We have to estimate with what accuracy the method works regardless of the data input, considering all possible inputs. It seems crucial to be able to say not only that on test data we get some error, but rather that we will make an error not greater than some ε for every data point we can input to our system. In this note we propose a solution to the above stated problem. First, note that a common representation of a general artificial neural network is an ordinary differential equation of the type:

\frac{dx(t)}{dt} = f(t, x(t)), \quad t \in [0, T],    (1)

x(0) = x_0^i, \quad x(T) = x_T^i, \quad x_0^i, x_T^i \in R^N, \quad i = 1, \ldots, m.    (2)

The pairs (x_0^i, x_T^i), i = 1, ..., m, are the training data (x_0^i the inputs, x_T^i the outputs). We do not require that for a given f there exist solutions to (1) satisfying (2) (see e.g. [12]). We simply assume that (1) represents the network learned on the observed data, i.e. we assume that for the boundary conditions (2) there exists a differential equation (1) with a solution satisfying (2). It is obvious that there exist infinitely many neural networks with representations of type (1) satisfying the boundary conditions (2). Usually the function f is specified more concretely, but that is not essential for the approach presented here. The essential point is that we do not know the form of f for the neural network learned on the given data. Thus, we do not know whether the function f is the one we are looking for. However, very often, for the investigated problem, we have additional information which is not contained in the observable data. Even more, we expect, or we believe, what kind of function (neural network) the observable data could generate in approximate evaluation. All of this suggests that, having a trained neural network f, we can fit the function f with some parameter (control) function u(t), t ∈ [0, T], which describes the possible forms of the suspected neural networks generated by the same observable data, i.e. now f has the form f(t, x(t), u(t)) and (1), (2) become:

\frac{dx(t)}{dt} = f(t, x(t), u(t)), \quad t \in [0, T], \qquad x(0) = x_0^i, \; x(T) = x_T^i, \quad x_0^i, x_T^i \in R^N, \quad i = 1, \ldots, m.    (3)


We suppose that the controls u(·) belong to some known set of measurable functions. Usually the additional information on the problem is implemented in the cost function L(t, x(t), u(t)), which is the integrand of a certain functional, and in functions l_0: R^N → R and l_T: R^N → R, which should measure the quality of learning on the observable set with respect to the neural network (function) expected during the training process. In the simplest case, if we expect that the neural network could be a function of type g(x), then l_0(x(0)) = (x(0) − g(x(0)))^2 and l_T(x(T)) = (x(T) − g(x(T)))^2. Therefore, for the cost functional J we assume

J(x, u) = \int_0^T L(t, x(t), u(t))\, dt + l_0(x(0)) + l_T(x(T)).    (4)

Thus, the investigation of whether the neural network obtained by training on the given observable data is the one we search for reduces to searching for the best neural network among the admissible ones (with respect to some knowledge, here described by controls) according to the information on the problem implemented in the cost functional. That means we should optimize the functional J(x, u) with respect to a set of pairs of functions (x, u), i.e. we have to formulate an optimal control problem.

Thus, investigation whether the obtained neural network, by training on the given observable data, is that we search, reduce to search the best neural network among admissible (with respect to some knowledge - here described by controls) according to the information on the problem implemented into the cost functional. That means we should optimize the functional J(x, u) with respect to a set of pairs of functions (x, u), i.e. we have to formulate optimal control problem. 1.1

Our Approach

We plan to provide solid mathematical theory allowing to estimate neural network quality. At this point it should be emphasized that always we can think and treat any artificial neural network, regardless of their type, as a function. Having neural network, we have also a corresponding function. In our work, through ordinary differential equation, we define an uncountable family of functions (neural networks) having certain properties. To this family we apply optimal control approach to find approximate model realizing observable data and satisfying some verification conditions in order to ensure correctness of approximation. As a result we get that a neural network, represented as a function being a solution of the differential equation (3) satisfying the sufficient optimality conditions, approximates the model in the best possible way, with respect to the knowledge (information) implemented to the functional and given observable data, with error less than ε. Our goal is not to solve any problem in a sense of training neural network to get satisfactory results or to improve any existing solution. Instead, we propose a general method which can be used to investigate a quality of a neural network obtained with any other existing method. If we are able, for a given neural network N n (which is always a function), to find a family of function enclosing this neural network N n, then the optimal solution xop approximates not function N n but rather the best neural network possible to obtain among all neural networks defined by parametrizing f . Saying that a function corresponding to given neural network is unknown, we mean that after learning process our neural network realize some function, not necessary the function it should realize which is clearly explained with sin(1/x) example given above: not enough good learning patterns, leads to be close to constant function which is far away from (unknown) sin(1/x).

Optimal Control in Learning Neural Network

307

For such a general optimal control problem we have tools in optimal control theory, which provide conditions, in a form of verification type theorem, allowing to find approximate solutions. Therefore, the aim of this note is to formulate in the rigorous way the optimal control problem and to prove sufficient optimality conditions for an approximate minimum of the functional (4) over a family of pairs of functions (x(t), u(t)). 1.2

Meaning of Our Approach and Contributions

What does the presented approach mean for learning? In order to understand that, let us recall what the neural network (deep learning) is. In general, the neural network consists with many layers, of which each, consists of many neurons. Each neuron has an activation function (in general nonlinear function). The neural network (the function!) forms then a combination of linear and nonlinear composition of the activation functions. By known theorem each continuous function may be arbitrarily well approximate by a polynomial of separating functions. Of course, we can use as activation functions, in the neural network, suitable set of separating functions. However, after the training the neural network with the observation set, we do not now the shape of the function we received. Even worse: we know nothing about a behavior of the function out of the observation set. This might be crucial for medical implementations, where it is not possible to cover all possible cases, with some empirical data. The problem is, that we do not know, what a function we are looking for. But we believe, that the neural network learned, from the observation set, the function we have a poke around. However, our faith may be wrong (recall the example given at the beginning of this article) if it has not any true theoretical reason. Thus, after learning process (using the observation set), are we able to tell something more about the function which the neural network define? If we apply to learning process the optimization algorithm, then using Karush-Kuhn-Tucker conditions (necessary optimality conditions in mathematical programming), we can calculate the optimal weights (linear coefficients) staying by activation functions, at least theoretically. However, we still do not know the function, as well, we do not know how far it is from a function which the observations set represent. Hence, those optimization tools, in spite that, they look very promising are not sufficiently good to tell something about a proper answer of the neural network for different than empirical data. This is why we add to the process of learning the set of function obtained by parametrization of initial neural network by controls. They build a set of functions, which we know, and among which probably is a function (neural network) we are looking for, or at least the one which approximates it sufficiently well. Of course, we still do not know whether the function, we suspect, that approximates (on the empirical data!) sufficiently well, is the one that represent the empirical data. But, we know that function and we can study its properties and thus also to improve it. However our approach has one underestimated advantage – it gives us an opportunity to implement some kind of information different than pure learning data, even maybe of informal type. In general, for each problem, which is represented by empirical data, very often we

308

M. Lipnicka and A. Nowakowski

have more information or knowledge that is not embedded in the data itself. We can implement those information in the functional, as well, in the construction of parametrization of f . This way we create an optimization problem, but the functional is now defined on the set of functions and not in the subset of Rn+m of weights only as in the above problem. Therefore, we can search in optimal control theory for tools to solve that optimization problem. We apply sufficient optimality conditions to find the best function which minimizes the functional (4). As a result we get a function – a neural network which represent in the best way the observable data with respect to the information and the knowledge we have on the problem the data represent and which are implemented in the functional (4) and in parametrization of f . Of course, someone can assert that those knowledge we can implement in the deep neural network and after deep learning to get a better neural network – a function. That is true. But, mentioned the above weakness of such the function is still true too, i.e. we know nothing (almost) on this new function in contrast to the described case where we know the best function. The contributions of this note is as follows. We formulate, in rigorous form, an optimal control problem. We describe and prove sufficient optimality conditions (a verification theorem), for a function (neural network), minimizing the functional over the possible neural networks. Thus, the obtained neural network is optimal for the empirical data, with respect to the information and the knowledge we have on the problem. An algorithm is constructed for calculation of an approximate optimal neural network. The verification theorem allows to check whether calculated function is really an approximate optimal neural network. We propose a general method which can be used to investigate a quality of a neural network obtained with any other existing method. If we are able, for a given neural network N n (which is always a function), to find a family of function enclosing this neural network N n, then the optimal solution xop approximates not function N n, but rather, the best neural network possible to obtain, among all admissible neural networks. 1.3

Related Works

The problem of training (deep) feed-forward neural networks as a nonlinear programming problem was presented in literature [1,8] where optimizations tools are also used but only with respect to parameters (weights) of the neural network. In our case we consider a different problem: if resulting neural network is the network we really want to get. This seems to be very important in case of deep neural networks for which special methods have to be developed to get satisfactory results (see for example [7] and related papers). Recently there were also attempts to apply optimization theory to deep learning resulting in formulating necessary conditions for optimality [11]. In our case we formulate sufficient conditions for optimality, which is much stronger result. We also introduce two, according to our knowledge (see for example [5]), unknown types of information, which significantly enrich the model we can make.

Optimal Control in Learning Neural Network

309

In [9], the authors present an effective deep prediction framework based on robust recurrent neural networks to predict the likely therapeutic classes of medications a patient is taking, with a sequence of diagnostic billing codes that are contaminated by missing values and multiple errors. The described approach is designed to predict the complete set of medications, a patient is actively taking at a given moment, from a sequence of diagnostic billing coeds, in the context of non-trivial billing record noise. The authors conduct extensive experiments on health care data sets to demonstrate the superiority of their method over state-of-the-art in neural networks. In [6], a novel multi-modal machine learning based approach is proposed to integrate EEG engineered features for automatic classification of brain states. EEGs are acquired from neurological patients with mild cognitive impairment or Alzheimer’s disease and the aim was to discriminate healthy control subjects from patients. A set of features estimated from both the Bis-pectrum and the time-frequency representation extracted via the Continuous Wavelet Transform (CWT) are used to automatically classify EEGs. The authors extract CWT and BiS features and vectorize them and next they are used as multi-modal input to a machine learning system to discriminate EEG epochs. To classify the signals’ epochs, four different standard machine learning classifiers are used. We see that in both cited (newest) papers deep neural networks are refined to get better results than in existing literature for important medical problems. However, in both case, we do not know how good in general the prediction is. We only know that for given empirical data it is better than existing diagnosis.

2

Formulation of Optimal Control Problem

Accordingly to the suggestions of the former section we formulate in rigorous way the optimal control problem corresponding to the learning process of neural networks. Let [0, T ] denote the time interval on which we define controls u : [0, T ] → U , U ⊂ Rk - compact. The function of controls is to make distinctions between possible, different, learned neural networks. The networks are described by absolutely continuous functions x : [0, T ] → RN . Let f : [0, T ]×RN ×U → RN m and the pairs (x, u) for given set of observations (x10 , x1T ), ..., (xm 0 , xT ) satisfy dx(t) = f (t, x(t), u(t)), t ∈ [0, T ], dt i i x(0) = x0 , x(T ) = xT , xi0 , xiT ∈ RN , i = 1, ..., m.

(5) (6)

We assume that f is sufficiently smooth function. Let L : [0, T ] × RN × U → R and l0 : RN → R, lT : RN → R. We assume L, l0 , lT to be smooth functions. Then optimal control problem R for u(t) ∈ U reads minimize J(x, u)

(7)

subject to (5) and (6). The set of all functions x(·) satisfying (5) and (6) for all corresponding to them controls u(·) ∈ U = {u(t), t ∈ [0, T ] : u(·) −

310

M. Lipnicka and A. Nowakowski

measurable in [0, T ]}, we denote by Ad and the set of pairs (x, u) by Adu. In fact, we are interested in approximate solutions to problem R this is why we denote by J=

inf

 (

(x,u)∈Adu

T

L(t, x(t), u(t))dt + l0 (x(0)) + lT (x(T )).

(8)

0

We name J the optimal value. An ε-optimal value for the problem R we call each value Jε such that the following inequality is satisfied: J ≤ Jε ≤ J + T ε.

3

(9)

Approximate Dynamic Programming

We describe first an intuition of a classical dynamic approach to optimal control problems R. Let us recall what does it mean dynamic programming in classical one dimensional setting with fixed initial condition? We have a functional, ordinary differential equation for a state x(t) with control u(t), t ∈ [0, T ] and an initial condition (t0 , x0 (t0 )). Assume we have for such an optimal problem a solution (¯ x, u ¯). Then by necessary optimality conditions (see e.g. [11]) there exists a function p(t) = (y 0 , y(t)) on (0, T ) - conjugate function, being solution to the corresponding adjoint system. That p = (y 0 , y) plays a role of multipliers from the classical Lagrange problem with constraints (with multiplier y 0 staying by functional and y corresponding to the constraints). If we perturb (t0 , x0 ) then assuming that optimal solution for each perturbed problem exists we also have corresponding to it conjugate function. Therefore making perturbations of our initial conditions we obtain two sets of functions: optimal trajectories x ¯ and corresponding to them conjugate functions p. The graph of the sets of functions x ¯ cover some sets in the state space (t, x), say set X. We assume that the set X is open. In the classical dynamic programming approach, we explore the state space (t, x) i.e. the set X. The value function which is defined as optimal value for initial condition optimal problem satisfies (if it is smooth) Hamilton-Jacobi equation in X (first order partial differential equation, compared to [10]). In the case when the initial condition is not fixed then the notion of a value function is not clear. This is why we do not define in this case a value function and we search only for ε-value. From now on we assume that X is an open set covered by graphs of all x(·) ∈ Ad. Assume that for given ε > 0, there exists a C 1 ([0, T ) × RN ) function V : [0, T ) × RN → R satisfying inequality −ε ≤ Vt (t, x) + inf {Vx (t, x)f (t, x, u) + L(t, x, u)} u∈U

(10)

with boundary conditions V (0, x) = −l0 (x), V (T, x) = lT (x).

(11)

Optimal Control in Learning Neural Network

311

Notice that by our assumption on f, L, U we can expect an existence of V with the mentioned regularity satisfying (10). Below we formulate and prove the verification theorem, which gives sufficient ε-optimality conditions for the existence of an approximate optimal value Jε , as well as for an approximate optimal pair. Theorem 1. Assume that there exists a C 1 ([0, T ) × RN ) function V satisfying (10) in X with(11). Let x ¯(·) with the corresponding u ¯(·), satisfy (5), (6) and let ¯(t)) + Vx (t, x ¯(t))f (t, x ¯(t), u ¯(t)) + L(t, x ¯(t), u ¯(t)). 0 ≥ Vt (t, x

(12)

Then (¯ x, u ¯) is the ε-optimal pair, i.e. J(¯ x, u ¯) ≤ J + T ε.

(13)

Proof. Let us take any (x(·), u(·)) ∈ Adu. Then (10) gives us Vt (t, x(t)) + Vx (t, x(t))f (t, x(t), u(t)) + L(t, x(t), u(t)) ≥ −ε.

(14)

Integrating (14) over [0, T ] and taking into account the boundary conditions (11) we get  T (15) −εT ≤ L(t, x(t), u(t))dt + l0 (x(0)) + lT (x(T )). 0

Proceeding similarly, for the pair (¯ x, u ¯), but now using (12) we get  T L(t, x ¯(t), u ¯(t))dt + l0 (¯ x(0)) + lT (¯ x(T )) ≤ 0.

(16)

0

From (15) and (16) we infer J(¯ x, u ¯) ≤ J(x, u) + T ε,

(17)

i.e. the assertion of the theorem.

Let us notice that the minimum of (4) in the above verification theorem is considered over the set Adu. It is interesting that we can choose the set Adu to have only a finite number of elements and the above theorem still works. This can be done by choosing a suitable set of controls u(·) to define the trajectories by (5) and (6). In order to get an approximate optimal neural network x̄, one should follow the steps of the numerical algorithm below. Theorem 1 asserts that x̄ is an ε-approximation of the neural network learned on the empirical set (x_0^1, x_T^1), ..., (x_0^m, x_T^m) with respect to the additional knowledge on the considered problem implemented in the functional (4) and the system (5) and (6). Any theorem of verification type has one undeniable advantage: there is no need to consider convergence analysis. If we have a candidate for a solution of a problem, we can simply verify it with the theorem. The way we obtain this candidate is not relevant – it can even be generated with some random method. If all conditions are fulfilled, then this candidate is a solution.

4 Numerical Algorithm

The aim of our calculations is to solve the optimal control problem by using the sufficient optimality conditions for an approximate minimum of the functional (4) over a family of pairs of functions (x(t), u(t)). We have a set of learning patterns. We do not know how well this set describes the problem under consideration. We use it to run the learning procedure for some defined neural network. Next, using the set of controls, we define the ODE whose right-hand side depends on the control. Using the additional information about the problem we define the cost function, and for the obtained pairs (x(t), u(t)) we calculate the value of the functional. For the best pair (x̄, ū) we find some function V. At the end we check whether the sufficient optimality condition is fulfilled (with some ε > 0). If it is, we obtain an ε-approximation of the unknown minimum value of the functional J.

The computational algorithm consists of the following steps (a sketch of the core loop in code is given after the list):

1. Define N ∈ N, T > 0 and ε > 0.
2. Collect learning patterns (x_0^i, x_T^i), i = 1, ..., m, x_0^i, x_T^i ∈ R^N, where x_0^i is an input vector of the neural network and x_T^i an output vector of the neural network.
3. Define a neural network and learn it using the learning patterns from the previous point. Because an ODE is a common representation of a general artificial neural network, the created neural network is described by (1).
4. Define the objects needed during the computations:
(a) Choose a compact set U ⊂ R^k, k ∈ N, and an empty set U.
(b) Define a finite number of controls and add all of them to the set U. In theory, U is uncountable; in practice we can generate only a finite number of controls, so the set of controls is finite. Denote the number of controls by m_u ∈ N.
(c) Choose the function f(t, x(t), u(t)), t ∈ [0, T], according to the additional information about the problem.
5. For all controls u from U solve the problem (5), (6) and define the set of m_u pairs (x, u), denoted by Adu.
6. Using the additional information on the problem, define the cost function L(t, x(t), u(t)), which is the integrand of the functional, and the functions l_0: R^N → R and l_T: R^N → R, which should measure the quality of learning on the observable set with respect to the neural network (function) expected during the training process.
7. Define the cost functional J(x, u) as in (4).
8. For all pairs from the set Adu calculate the value of the cost functional.
9. Denote by (x̄, ū) the best pair from the set Adu with respect to the value of the cost functional.
10. For the given ε find V ∈ C^1([0, T) × R^N) satisfying (10) with the boundary conditions (11).
11. Check whether (5), (6) and (12) are fulfilled for (x̄, ū) with the given ε.


(a) If true, the pair (x̄, ū) is called the ε-optimal pair. This pair allows us to approximate the unknown minimum value of the functional J over the set of controls u, and x̄ approximates the unknown neural network.
(b) If false, repeat all the steps, selecting some of the objects used in the computations differently.

4.1 Numerical Example

We consider a very simple example to illustrate how this algorithm works. We take the sin(t) function for t ∈ [0, 2π] and create a model for it. The algorithm works as follows:
1. N = 1, T = 2π and ε = 0.01.
2. m = 11. We have the following learning patterns {(0, 0), (0.6283, 0.5878), (1.2566, 0.9511), (1.8850, 0.9511), (2.5133, 0.5878), (3.1416, 0), (3.7699, −0.5878), (4.3982, −0.9511), (5.0265, −0.9511), (5.6549, −0.5878), (6.2832, 0)}.
3. Our neural network is a multilayer neural network with the following parameters: one input, one hidden layer with 3 neurons, one output, and a sigmoidal activation function. Train this network using the learning patterns generated in the previous step. Denote the output of the learned network by net(x(T)).
4. Define the objects needed during the computations: (a) k = 1, mu = 3, U = {u(t) = cos(t), u(t) = 1, u(t) = t, t ∈ [0, 2π]}; (b) f(t, x(t), u(t)) = u(t), t ∈ [0, 2π].
5. For all controls u from U solve the problem (3) and define the set of mu = 3 pairs (x, u), denoted by Adu.
6. L(t, x(t), u(t)) = 0, l0(x(0)) = −0.01 x(0) and lT(x(T)) = 0.01 (x(T) − T).
7. Our cost functional is defined as J(x, u) = −l0(x(0)) + lT(x(T)) = 0.01 (−x(0) + x(T) − T).
8. The best pair from the set Adu with respect to the value of the cost functional is the pair (x̄, ū) = (sin(t), cos(t)), with J(x̄, ū) = −0.0628.
9. A function V satisfying (10) with boundary conditions (11) is V(t, x) = 0.01 (x − t).
10. (5), (6) and (12) are fulfilled (see Fig. 1) for (x̄, ū) with the given ε. The pair (x̄, ū) is called the ε-optimal pair. This pair allows us to approximate the unknown minimum value of the functional J over the set of controls u, and x̄ approximates the unknown neural network.
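To make the steps above concrete for this example, the following is a minimal Python sketch (ours, not the authors' code). It assumes that the trajectory problem reduces here to x′(t) = u(t) with x(0) = 0 and hard-codes the function V from step 9; the exact form of conditions (10)-(12) is left to the paper, so only the quantity plotted in Fig. 1 is evaluated.

import numpy as np
from scipy.integrate import solve_ivp

T = 2 * np.pi
controls = {"cos": np.cos, "one": lambda t: np.ones_like(t), "id": lambda t: t}

def solve_trajectory(u):
    # x'(t) = u(t), x(0) = 0  -- the trajectory problem for this example
    return solve_ivp(lambda t, x: u(np.atleast_1d(t)), (0.0, T), [0.0], rtol=1e-8)

def cost(sol):
    x0, xT = sol.y[0][0], sol.y[0][-1]
    return 0.01 * (-x0 + xT - T)          # J(x, u) = -l0(x(0)) + lT(x(T))

# Steps 5-9: evaluate J for every admissible pair and keep the best one.
best_name, best_sol = min(((name, solve_trajectory(u)) for name, u in controls.items()),
                          key=lambda item: cost(item[1]))
print(best_name, round(cost(best_sol), 4))      # expected: "cos", about -0.0628

# Steps 10-11: with V(t, x) = 0.01 * (x - t), V_t = -0.01 and V_x = 0.01, so the
# quantity shown in Fig. 1 along (x_bar, u_bar) is -0.01 + 0.01 * cos(t).
t_grid = np.linspace(0.0, T, 200)
residual = -0.01 + 0.01 * np.cos(t_grid)
print(residual.min(), residual.max())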


Fig. 1. The value of Vt(t, x̄(t)) + Vx(t, x̄(t)) f(t, x̄(t), ū(t)) + L(t, x̄(t), ū(t)).


Deep Networks for Monitoring Waterway Traffic in the Mekong Delta

Thanh-Nghi Do1,2(B), Minh-Thu Tran-Nguyen1, Thanh-Tri Trang1, and Tri-Thuc Vo1

1 College of Information Technology, Can Tho University, Cantho 92000, Vietnam
2 UMI UMMISCO 209 (IRD/UPMC), Sorbonne University, Pierre and Marie Curie University, Paris 6, France
[email protected]

Abstract. Our investigation aims at training deep networks for monitoring waterway traffic means on the rivers in the Mekong Delta. We collected real videos of the waterway traffic and then tagged the five most popular means in frames extracted from the videos, making an image dataset. We propose to train recent deep network models such as YOLO v4 (You Only Look Once), RetinaNet and EfficientDet on this image dataset to detect the five most popular means in the videos. The numerical test results show that YOLO v4 gives higher accuracy than the two other methods, RetinaNet and EfficientDet. YOLO v4 achieves performances on the testset of a precision of 91%, a recall of 98%, an F1-score of 94% and a mean average precision (mAP@0.5) of 97.51%.

Keywords: Waterway traffic means · Deep network · YOLO v4 · RetinaNet · EfficientDet

1 Introduction

The Mekong Delta in southern Vietnam is a vast system of rivers and swamps, home to floating markets and villages surrounded by rice paddies. Waterway traffic means are the main transportation of the region. Therefore, monitoring waterway means is a promising research direction to avoid traffic jams. Recently, computer vision is a field that has received more attention due to its applications. One of the most common problems related to computer vision is object detection, which concerns the ability of computer systems to locate objects in an image and identify them. Along with the deep neural network AlexNet proposed by [13], this has promoted the development of deep learning networks to extract information from images and videos based on Convolutional Neural Networks (CNNs [14]). In addition, CNNs have proven to be one of the best approaches for image classification and image recognition [10]. That is why we propose to train deep neural networks for monitoring waterway traffic means on the rivers in the Mekong Delta.


In order to build the system for monitoring waterway traffic means, we first collect real videos of the waterway traffic; the frames extracted from the videos are then manually annotated with bounding boxes around the five most popular means, including Canoe, Composite Canoe, Boat, Ship and Barge. We obtain an image dataset. After that, we propose to train three recent deep network models, You Only Look Once (YOLO v4 [2]), RetinaNet [16] and EfficientDet [28], to detect the five most popular means in the videos. The empirical test results on our real image dataset show that YOLO v4 achieves higher accuracy and faster training time than the two other methods, RetinaNet and EfficientDet. YOLO v4 gives performances on the testset of a precision of 91%, a recall of 98%, an F1-score of 94% and a mean average precision (mAP@0.5) of 97.51%.

The remainder of this paper is structured as follows. Section 2 briefly presents our proposal for monitoring waterway traffic means with deep networks, including YOLO v4, RetinaNet and EfficientDet. Section 3 shows the experimental results. The conclusion is presented in Sect. 4.

2 Training Deep Networks for Monitoring Waterway Traffic Means in the Mekong Delta

Our system for monitoring waterway traffic means follows the usual framework of image classification. Building this system involves three main tasks: 1. collecting the dataset of images, 2. extracting visual features from images and representing them, and 3. training classifiers.

2.1 Data Collection of Waterway Traffic Means

Therefore, we start with the collection of the waterway traffic images. We used the camera of a Samsung Galaxy A10 (13 MP, f/1.9, 28 mm wide, AF, 1080p@30 fps) to record the waterway traffic on rivers in the Mekong Delta region, close to floating markets. From these videos, we extracted 4480 images. The images were then manually annotated with bounding boxes around the five most popular means, including Canoe, Composite Canoe, Boat, Ship and Barge. Figure 1 presents a sample of the five waterway traffic means.
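A minimal OpenCV sketch of this frame-extraction step is shown below. It is not the authors' tagging tool; the file name and the sampling step are placeholder assumptions.

import cv2

def extract_frames(video_path, out_pattern="frame_{:05d}.jpg", step=30):
    # Read the video and keep one frame every `step` frames.
    cap = cv2.VideoCapture(video_path)
    saved, idx = 0, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:
            cv2.imwrite(out_pattern.format(saved), frame)
            saved += 1
        idx += 1
    cap.release()
    return saved

# extract_frames("mekong_waterway.mp4")  # hypothetical file name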

Fig. 1. Categories of waterway traffic means


We divided the image dataset into two sub-sets: a training set (to train the model) and a test set (to evaluate the model), with 4027 and 453 images, respectively. Table 1 shows the description of the dataset with the number of instances of the five popular means in the training set and the test set.

Table 1. Description of dataset

No  Class            Trainset  Testset
1   Canoe            497       49
2   Composite Canoe  8,977     982
3   Boat             3,095     367
4   Ship             11,092    1,370
5   Barge            2,001     209
6   Total            25,662    2,977

2.2 Deep Networks for Monitoring Waterway Traffic Means

The classical visual approaches perform the classification task of images via two key steps. The first one is to extract visual features from images and represent them, using handcrafted features including the scale-invariant feature transform (SIFT [19,20]) and the bag-of-words model (BoW [3,15,26]), the histogram of oriented gradients (HOG [5]), and GIST [21]. Following that, the second one is to train support vector machines (SVM [29]) to classify images. More recent approaches aim to train deep convolutional neural networks (CNNs [14]) to benefit from the ability to learn visual features (low-level, mid-level, high-level) from images and the softmax classifier in a unified framework. These approaches are widely used for the object detection task due to their performance.

Object detection techniques are categorized into two genres: two-stage detectors and one-stage detectors. Two-stage detectors propose potential bounding box candidates, and then a classifier only processes the region candidates. There are outstanding studies on object detection based on a two-stage approach, such as Regions with CNN features (RCNN [8,9]), Spatial Pyramid Pooling Networks (SPPNet [11]), Fast RCNN [7], Faster RCNN [25], and Feature Pyramid Networks [16]. One-stage detectors locate objects in images using a single deep neural network. The first one-stage detector in deep learning is considered to be You Only Look Once (YOLO), proposed by [22–24]. Liu et al. [18] proposed the Single Shot MultiBox Detector (SSD) in 2016. Lin et al. [16] proposed the focal loss through RetinaNet to solve the foreground-background class imbalance. EfficientDet, proposed by Tan et al. [28], is a one-stage detector that combines EfficientNet backbones, BiFPN and compound scaling. We propose to train the three most recent deep network models, YOLO v4 [2], RetinaNet [16] and EfficientDet [28], for monitoring waterway traffic means, due to their performance in object detection.


YOLO v4: Alexey Bochkovskiy proposed YOLO v4 as an improvement of YOLO v3 [22]. The architecture of YOLO v4 consists of CSPDarknet53 [30] playing the role of backbone, a neck including SPP [11] and PAN [17], and YOLO v3 [22] (anchor based) as head. CSPDarknet53 is a CNN for object detection that is based on DarkNet-53 (53 convolutional layers) and a CSPNet strategy (Cross Stage Partial Network). In YOLO v4, the SPP strategy (spatial pyramid pooling) is added to CSPDarknet53 because it significantly increases the receptive field. PANet (Path Aggregation Network) is employed in YOLO v4 as the parameter aggregation method from different backbone levels for different detector levels, compared with YOLO v3, which used an FPN (Feature Pyramid Network [16]). The overall YOLO v4 architecture is presented in Fig. 2.

Fig. 2. YOLO v4 architecture

EfficientDet: Tan and his colleagues [28] designed the one-stage detector paradigm called EfficientDet by linking EfficientNet backbones to a weighted bi-directional feature pyramid network (BiFPN) and combining them with a customized compound scaling method [28]. Firstly, the authors observe that EfficientNets [27] achieve better efficiency than previously commonly used backbones. Secondly, BiFPN enables easy and fast multiscale feature fusion. Thirdly, a compound scaling method is proposed for object detectors that jointly scales up the resolution, depth, and width of the backbone, feature network, and box/class prediction networks. Based on these key optimizations, EfficientDet consistently achieves both greater accuracy and better efficiency than previous object detectors. Figure 3 shows the overall architecture of EfficientDet.

Fig. 3. EfficientDet architecture

RetinaNet: Tsung-Yi Lin and his colleagues proposed RetinaNet [16] to deal with the extreme foreground-background class imbalance that is the primary cause of accuracy loss during the training of one-stage object detectors. The main idea is to use the focal loss as illustrated in Eq. (1), where γ > 0 reduces the relative loss for well-classified examples (qi) and puts more focus on hard examples. Therefore, the training speed is comparable to previous one-stage detectors while achieving higher accuracy, surpassing the accuracy of some existing state-of-the-art two-stage detectors like ResNet-101-C4 and Inception-ResNet-v2. The network architecture ResNet50-FPN (U-Net) (Fig. 4) is used to extract image features for object detection.

FL(qi) = −αi (1 − qi)^γ log(qi)     (1)

where γ is the focusing parameter and αi is the weighting factor of the α-balanced variant of the focal loss.

Fig. 4. RetinaNet architecture
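As a small illustration of Eq. (1), the following NumPy sketch evaluates the focal loss; the default values α = 0.25 and γ = 2 are the common choices from [16], not necessarily those used in this paper.

import numpy as np

def focal_loss(q, alpha=0.25, gamma=2.0):
    # q: predicted probability of the true class, as in Eq. (1).
    q = np.clip(q, 1e-7, 1.0)
    return -alpha * (1.0 - q) ** gamma * np.log(q)

# A well-classified example contributes far less than a hard one:
print(focal_loss(np.array([0.9, 0.5, 0.1])))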

3 Experimental Results

In this section, we present experimental results of different deep networks for detecting waterway traffic means in the collected videos. For manually tagging the five waterway traffic means in images extracted from the videos, we implemented our own tagging tool in Python using the OpenCV library [12]. We downloaded the source of YOLO v4 from [2] and compiled it with a custom configuration. We use the Keras implementation of RetinaNet object detection [6] and the EfficientDet (Scalable and Efficient Object Detection [28]) implementation in Keras [4] and Tensorflow [1]. All experiments are conducted on a machine running Linux Ubuntu 18.04, with an Intel(R) Core i7-4790 CPU, 3.6 GHz, 4 cores and 16 GB main memory, and a Gigabyte GeForce RTX 2080Ti with 11 GB GDDR6 and 4352 CUDA cores.


As described in Sect. 2.1, we have the image dataset extracted from the waterway traffic videos on the rivers in the Mekong Delta. After manually annotating the images with bounding boxes around the five most popular means, including Canoe, Composite Canoe, Boat, Ship and Barge, we randomly split the image dataset into the trainset (4027 images) and the testset (453 images). The description of the dataset is presented in Table 1. We use the trainset to train the deep network models. Then, results are reported on the testset using the resulting deep network models.

3.1 Performance Measurements

In the comparison of detection results obtained by the deep network models, we use the most popular performance measurements, including precision, recall, F1, Intersection over Union (IoU), average precision (AP), and mean average precision (mAP).

– TP (True positive rate) measures the proportion of positives that are correctly identified.
– FN (False negative rate) is the proportion of positives that are mis-classified into negative by the model.
– FP (False positive rate) is the proportion of negatives that are mis-classified into positive by the model.

Precision = TP / (TP + FP)     (2)

Recall = TP / (TP + FN)     (3)

F1 = 2 × Precision × Recall / (Precision + Recall)     (4)

The IoU metric measures the number of pixels common between the target (the ground-truth bounding box) and the prediction (the predicted bounding box) divided by the total number of pixels present across both,

IoU = area(Bp ∩ Bgt) / area(Bp ∪ Bgt)     (5)

where Bp is the predicted bounding box and Bgt is the ground-truth bounding box.

pinterp(r) = max_{r′ ≥ r} p(r′),   AP = Σ_{i=1}^{n−1} (r_{i+1} − r_i) pinterp(r_{i+1})     (6)

where (r1, r2, ..., rn) are the recall levels [0, 0.1, ..., 1].

mAP = (mAP0.50 + mAP0.55 + ··· + mAP0.95) / 10     (7)
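For illustration, a minimal Python sketch (not the authors' evaluation code) of the IoU and precision/recall/F1 measures in Eqs. (2)-(5) is given below. Boxes are assumed to be (x_min, y_min, x_max, y_max) tuples, and the example counts are made up to land near the 91%/98%/94% regime reported later.

def iou(box_p, box_gt):
    # Intersection over union of a predicted and a ground-truth box, Eq. (5).
    xa, ya = max(box_p[0], box_gt[0]), max(box_p[1], box_gt[1])
    xb, yb = min(box_p[2], box_gt[2]), min(box_p[3], box_gt[3])
    inter = max(0.0, xb - xa) * max(0.0, yb - ya)
    area_p = (box_p[2] - box_p[0]) * (box_p[3] - box_p[1])
    area_gt = (box_gt[2] - box_gt[0]) * (box_gt[3] - box_gt[1])
    return inter / (area_p + area_gt - inter)

def precision_recall_f1(tp, fp, fn):
    # Eqs. (2)-(4); a detection counts as a TP when IoU >= 0.5 in this paper.
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return precision, recall, 2 * precision * recall / (precision + recall)

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))        # about 0.143
print(precision_recall_f1(tp=98, fp=10, fn=2))    # roughly 0.91, 0.98, 0.94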

3.2 Results

The classification results obtained by the deep network models are presented in Table 2 and Figs. 5, 6, 7, 8, 9 and 10. The highest accuracy is bold-faced and the second one is in italic. In the comparison among the deep network models RetinaNet, YOLO v4 and EfficientDet, we can see that the RetinaNet and YOLO v4 models give the highest average precision (AP) results (greater than 89%) with threshold IoU ≥ 0.5. YOLO v4 achieves the best results for 4/5 classes against RetinaNet. The EfficientDet-0 (φ = 0 is a hyper-parameter for scaling BiFPN width and depth in the EfficientDet network) and EfficientDet-1 (φ = 1) models have lower AP than RetinaNet and YOLO v4. The EfficientDet models give an AP of over 79% (except for EfficientDet-0 on the Canoe class, at 50.91%). Regarding the evaluation on mAP (COCO), the YOLO v4 model gives the highest accuracy of 97.51%, followed by the RetinaNet model and EfficientDet-1 with accuracies of 96.28% and 93.39%, respectively.

Table 2. Classification results in terms of average precision for 5 waterway traffic means

Class            RetinaNet  YOLO v4  EfficientDet-0  EfficientDet-1
Canoe            89.00      94.10    50.91           85.79
Composite Canoe  95.91      95.94    79.06           88.29
Boat             98.36      99.23    94.48           94.17
Ship             99.48      99.21    97.11           99.45
Barge            98.67      99.05    98.70           99.23
mAP              96.28      97.51    84.05           93.39

Looking at the results of the YOLO v4 model in more detail, with threshold IoU 0.50 it achieves an AP of over 94% for all waterway traffic means, and 3 out of 5 classes have more than 99% AP, corresponding to Boat (99.23%), Ship (99.21%) and Barge (99.05%). The mean average precision mAP@0.50 reaches 97.51%. The average IoU is 79.46%. YOLO v4 gives a precision of 91%, a recall of 98%, and an F1-score of 94%. Figure 11 shows a recognition result of YOLO v4.

Fig. 5. Average precision for Canoe

Fig. 6. Average precision for Composite Canoe

Fig. 7. Average precision for Boat

Fig. 8. Average precision for Ship

Fig. 9. Average precision for Barge

Fig. 10. Mean average precision


Fig. 11. A recognition result of waterway traffic means

4 Conclusion and Future Works

We have presented a proposal to monitor waterway traffic means on the rivers in the Mekong Delta through deep learning models. For this aim, we collected real videos of the waterway traffic and manually tagged the five most popular means, including Canoe, Composite Canoe, Boat, Ship and Barge. After that, we trained three recent deep learning networks, YOLO v4, RetinaNet and EfficientDet. The empirical test results show that the YOLO v4 model achieves the highest accuracy of 97.51% at the IoU threshold (0.5:0.95) compared to the accuracies of 96.28% and 93.39% obtained by the RetinaNet model and EfficientDet, respectively. YOLO v4 gives performances on the testset of a precision of 91%, a recall of 98%, an F1-score of 94% and a mean average precision (mAP@0.5) of 97.51%. In the near future, we intend to provide more empirical tests and compare with other algorithms. A promising future research direction is to apply the YOLO v4 model to build a water traffic monitoring system on rivers as well as floating markets in the Mekong Delta.

Acknowledgments. This work has received support from the College of Information Technology, Can Tho University. The authors would like to thank very much the Big Data and Mobile Computing Laboratory.

References

1. Abadi, M., et al.: TensorFlow: large-scale machine learning on heterogeneous systems (2015). https://www.tensorflow.org/. Software available from tensorflow.org


2. Bochkovskiy, A., Wang, C.Y., Liao, H.: YOLOv4: optimal speed and accuracy of object detection. arXiv:2004.10934 (2020) 3. Bosch, A., Zisserman, A., Mu˜ noz, X.: Scene classification via pLSA. In: Leonardis, A., Bischof, H., Pinz, A. (eds.) ECCV 2006. LNCS, vol. 3954, pp. 517–530. Springer, Heidelberg (2006). https://doi.org/10.1007/11744085 40 4. Chollet, F., et al.: Keras (2015). https://keras.io 5. Dalal, N., Triggs, B.: Histograms of oriented gradients for human detection. In: Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2005) - Volume 1, pp. 886–893. IEEE Computer Society (2005) 6. Gaiser, H., et al.: fizyr/keras-retinanet 0.5.1, June 2019. https://doi.org/10.5281/ zenodo.3250670 7. Girshick, R.: Fast R-CNN. In: 2015 IEEE International Conference on Computer Vision (ICCV), pp. 1440–1448 (2015). https://doi.org/10.1109/ICCV.2015.169 8. Girshick, R., Donahue, J., Darrell, T., Malik, J.: Rich feature hierarchies for accurate object detection and semantic segmentation. In: 2014 IEEE Conference on Computer Vision and Pattern Recognition, pp. 580–587 (2014). https://doi.org/ 10.1109/CVPR.2014.81 9. Girshick, R., Donahue, J., Darrell, T., Malik, J.: Region-based convolutional networks for accurate object detection and segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 38(1), 142–158 (2016). https://doi.org/10.1109/TPAMI.2015. 2437384 10. Goodfellow, I., Bengio, Y., Courville, A.: Deep Learning. MIT Press, Cambridge (2016). http://www.deeplearningbook.org 11. He, K., Zhang, X., Ren, S., Sun, J.: Spatial pyramid pooling in deep convolutional networks for visual recognition. In: Fleet, D., Pajdla, T., Schiele, B., Tuytelaars, T. (eds.) ECCV 2014. LNCS, vol. 8691, pp. 346–361. Springer, Cham (2014). https:// doi.org/10.1007/978-3-319-10578-9 23 12. Itseez: Open source computer vision library (2015). https://github.com/itseez/ opencv 13. Krizhevsky, A., Sutskever, I., Hinton, G.E.: ImageNet classification with deep convolutional neural networks. Commun. ACM 60(6), 84–90 (2017). https://doi.org/ 10.1145/3065386 14. LeCun, Y., Bottou, L., Bengio, Y., Haffner, P.: Gradient-based learning applied to document recognition. Proc. IEEE 86, 2278–2324 (1998) 15. Li, F., Perona, P.: A Bayesian hierarchical model for learning natural scene categories. In: 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2005), San Diego, CA, USA, 20–26 June 2005, pp. 524–531 (2005) 16. Lin, T.Y., Doll´ ar, P., Girshick, R., He, K., Hariharan, B., Belongie, S.: Feature pyramid networks for object detection. In: 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 936–944 (2017). https://doi.org/10. 1109/CVPR.2017.106 17. Liu, S., Qi, L., Qin, H., Shi, J., Jia, J.: Path aggregation network for instance segmentation. In: 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 8759–8768 (2018). https://doi.org/10.1109/CVPR.2018.00913 18. Liu, W., et al.: SSD: single shot multibox detector. In: Leibe, Bastian, Matas, Jiri, Sebe, Nicu, Welling, Max (eds.) ECCV 2016. LNCS, vol. 9905, pp. 21–37. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-46448-0 2 19. Lowe, D.: Object recognition from local scale invariant features. In: Proceedings of the 7th International Conference on Computer Vision, pp. 1150–1157 (1999)


20. Lowe, D.: Distinctive image features from scale invariant keypoints. Int. J. Comput. Vis. 60, 91–110 (2004). https://doi.org/10.1023/B:VISI.0000029664.99615.94 21. Oliva, A., Torralba, A.: Modeling the shape of the scene: a holistic representation of the spatial envelope. Int. J. Comput. Vis. 42, 145–175 (2001) 22. Redmon, J., Divvala, S., Girshick, R., Farhadi, A.: You only look once: unified, realtime object detection. In: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 779–788 (2016). https://doi.org/10.1109/CVPR.2016.91 23. Redmon, J., Farhadi, A.: YOLO9000: better, faster, stronger. In: 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 6517–6525 (2017). https://doi.org/10.1109/CVPR.2017.690 24. Redmon, J., Farhadi, A.: YOLOv3: an incremental improvement. arXiv:1804.02767 (2018) 25. Ren, S., He, K., Girshick, R., Sun, J.: Faster R-CNN: towards real-time object detection with region proposal networks. In: Proceedings of the 28th International Conference on Neural Information Processing Systems - Volume 1, NIPS 2015, pp. 91–99. MIT Press, Cambridge (2015) 26. Sivic, J., Zisserman, A.: Video google: a text retrieval approach to object matching in videos. In: 9th IEEE International Conference on Computer Vision (ICCV 2003), Nice, France, 14–17 October 2003, pp. 1470–1477 (2003) 27. Tan, M., Le, Q.V.: EfficientNet: rethinking model scaling for convolutional neural networks. arXiv:1905.11946 (2019) 28. Tan, M., Pang, R., Le, Q.V.: EfficientDet: scalable and efficient object detection. In: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10778–10787 (2020). https://doi.org/10.1109/CVPR42600.2020. 01079 29. Vapnik, V.: The Nature of Statistical Learning Theory, 2nd edn. Springer, New York (2000). https://doi.org/10.1007/978-1-4757-3264-1 30. Wang, C.Y., Mark Liao, H.Y., Wu, Y.H., Chen, P.Y., Hsieh, J.W., Yeh, I.H.: CSPNet: a new backbone that can enhance learning capability of CNN. In: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp. 1571–1580 (2020). https://doi.org/10.1109/CVPRW50498.2020. 00203

Training Deep Network Models for Fingerprint Image Classification

Thanh-Nghi Do1,2(B) and Minh-Thu Tran-Nguyen1

1 College of Information Technology, Can Tho University, 92000 Cantho, Vietnam
{dtnghi,tnmthu}@cit.ctu.edu.vn
2 UMI UMMISCO 209 (IRD/UPMC), Sorbonne University, Pierre and Marie Curie University - Paris 6, Paris, France

Abstract. Our investigation aims to answer the research question: is it possible to train deep network models that can be re-used to classify a new coming dataset of fingerprint images without re-training a new deep network model? For this purpose, we collect real datasets of fingerprint images from students at the Can Tho University. After that, we propose to train recent deep networks, such as VGG, ResNet50, Inception-v3 and Xception, on the training dataset with 9,236 fingerprint images of 441 students, to create deep network models. We then re-use these resulting deep network models for feature extraction and only fine-tune the last layer of the deep network models for the new fingerprint image datasets. The empirical test results on three real fingerprint image datasets (FP-235, FP-389, FP-559) show that the deep network models achieve at least an accuracy of 96.72% on the testsets. Typically, the ResNet50 models give classification accuracies of 99.00%, 98.33% and 98.05% on FP-235, FP-389 and FP-559, respectively.

Keywords: Fingerprint image classification · Deep network · Transfer learning

1 Introduction

The recognition of fingerprint images is one of the most popular and useful methods for identifying individuals due to its uniqueness and durability over time. The recognition of fingerprint images is successfully applied in applications like suspect and victim identification, the recovery of partial fingerprints from a crime scene in forensic science, border control, employment background checks, and secure facility entrance [17,18,24]. In this paper, we are interested in the research problem of whether it is possible to train deep network models for classifying a new coming dataset of fingerprint images without retraining the new deep network from scratch.


For this aim, we study how to train deep network models for the classification of any new coming dataset of fingerprint images. We start with the collection of real fingerprint image datasets from the students at Can Tho University. After that, we propose to train recent deep networks, including VGG [33], ResNet50 [15], Inception-v3 [35] and Xception [7], on the training dataset with 9,236 fingerprint images of 441 students, to create deep network models. These resulting deep network models can then be re-used for feature extraction and only fine-tuned by training the last layer of the deep network models for the new fingerprint image datasets. The empirical test results on three real fingerprint image datasets (FP-235, FP-389, FP-559 [9]) show that the deep network models achieve at least an accuracy of 96.72% on the testsets without re-training the deep network models from scratch. An example of the effectiveness obtained by the ResNet50 models is illustrated by the classification accuracies of 99.00%, 98.33% and 98.05% on FP-235, FP-389 and FP-559, respectively. The remainder of this paper is organized as follows. Section 2 illustrates our proposal to train deep network models for fingerprint image classification. Section 3 shows the experimental results before conclusions and future work are presented in Sect. 4.

2 Training Deep Network Models for Classifying Fingerprint Images

Building a system for fingerprint image classification is based on the usual framework for the classification of images. It consists of three main tasks: 1. collecting the dataset of fingerprint images, 2. extracting visual features from fingerprint images and representing them, and 3. training classifiers. As illustrated in [17,18,24], fingerprint recognition approaches fall into two categories. The first, classical approaches commonly use minutiae (e.g. ridge ending, ridge bifurcation, etc.) and a matching method between minutiae of fingerprint images [5,17,18,24,25]. Some classical approaches [10,13,21] also use handcrafted features, like the scale-invariant feature transform (SIFT [22,23]) and the bag-of-words model (BoW [2,20,34]), and train support vector machines (SVM [36]). Recent approaches [3,4,11,26–29,32] aim to train deep convolutional neural networks (CNN [19]) or to fine-tune pre-trained deep neural networks (VGG16, VGG19 [33], ResNet50 [15], Inception-v3 [35], Xception [6]) to classify fingerprint images. These CNN-based approaches, illustrated in Fig. 1, benefit from the ability to learn visual features (low-level, mid-level, high-level) from images and the softmax classifier in a unified framework. We are interested in training deep network models to classify a new coming dataset of fingerprint images without retraining the new deep network from scratch.


Fig. 1. Convolutional neural network (CNN) for classifying fingerprint images

2.1 Dataset Collection

We start with the collection of fingerprint image datasets. With support from students and colleagues at the College of Information Technology, Can Tho University, we captured a dataset (named trainbase) having 9,236 fingerprint images of 441 individuals with a Microsoft Fingerprint Reader (optical fingerprint scanner, resolution: 512 DPI, image size: 355 × 390, colors: 256-level grayscale). This fingerprint image dataset is used to train deep network models. We then use three real fingerprint datasets named FP-235, FP-389 and FP-559 (collected in our previous works [9,11,12], including fingerprint images of 235, 389 and 559 individuals, respectively) to evaluate the resulting deep network models.

Deep Networks

In last years, the researchers in computer vision and machine learning communities proposed deep neural networks including VGG [33], ResNet50 [15], Inception-v3 [35], Xception [7] that achieve the most accuracy for the classification of ImageNet dataset [8]. We also propose to use these deep network architectures to train models for fingerprint images. VGG16, VGG19: The VGG network architecture proposed by [33] includes two parts. The first part stacks several VGG block modules (convolutional and pooling layers as shown in Fig. 2) to learn visual features from images. The second part consists of fully-connected layers to train classifiers. The VGG architecture tries to capture the spatial information from images. VGG-16, VGG-19 consist of 16 and 19 VGG block modules for the large scale image classification of ImageNet dataset. Inception-V3: The Inception-v3 network proposed by [35] stacks 11 inception modules (in Fig. 3) and global average pooling to learn multi-level, invariant features and the spatial information from images for the classification.

330

T.-N. Do and M.-T. Tran-Nguyen

Fig. 2. VGG block module

Fig. 3. Inception module

ResNet50: The well-known failure of deep networks is the vanishing gradient problem during the learning process of deep network. To overcome this issue, He and his colleagues [15] proposed the ResNet50 network architecture, using the residual block (in Fig. 4) to develop extremely deep networks for classifying ImageNet dataset. Xception: The Xception network proposed by [6] is an extension of the Inception architecture. The Xception module (Fig. 5) replaces the standard depthwise

Training Deep Network Models for Fingerprint Image Classification

331

Fig. 4. Residual block

separable convolution (the depthwise convolution followed by a pointwise convolution) in Inception modules with depthwise separable convolutions, making the classification improvement. 2.3

Training Deep Network Models for Classifying Fingerprint Images

We propose to train deep neural networks, VGG, ResNet50, Inception-v3, Xception on trainbase dataset with 9,236 fingerprint images of 441 individuals. We obtain deep network models for this fingerprint image dataset. Although the deep network models give most accurate classification results but the training task requires big-data, vast computational cost and time resources to learn a new model for the new coming fingerprint image dataset. In last years, the deep learning community pays attention to the ability to reuse pre-trained models from the source learner in the target task. This strategy is well-known as transfer learning [14]. The main idea is to re-use a pre-trained model on a problem similar to the target problem as the starting point for learning a new model on the target problem [31,37]. Our main idea is to re-use these deep network models being pre-trained on trainbase dataset as feature extractors from images. The training task for a new coming fingerprint image dataset is only to update the weights of the last layer while keeping unchanged the first other layers in networks. Therefore, it reduces the training resources and complexity for handling a new coming dataset due to the leverage of pre-trained VGG, ResNet50, Inception-v3, Xception on trainbase dataset.

332

T.-N. Do and M.-T. Tran-Nguyen

Fig. 5. Xception module

3

Experimental Results

In this section, we present experimental results of deep network models for classifying fingerprint images. We implement the training program in Python using library Keras [6] with backend Tensorflow [1], library Scikit-learn [30] and library OpenCV [16]. All experiments are conducted on a machine Linux Fedora 34, Intel(R) Core i7-4790 CPU, 3.6 GHz, 4 cores and 16 GB main memory and the Gigabyte GeForce RTX 2080Ti 11 GB GDDR6, 4352 CUDA cores. As described in Sect. 2, the starting models are pre-trained VGG, ResNet50, Inception-v3, Xception on trainbase dataset with learning rate = 0.001 and number of epochs = 200. The training task for a new fingerprint image dataset is only to fine-tune the last layer in these resulting VGG, ResNet50, Inception-v3, Xception with learning rate = 0.001 and number of epochs = 100. We propose to use three real fingerprint image datasets (in Table 1) to evaluate the classification performance of deep networks. Datasets are randomly split into the trainset (80% fingerprint images) and the testset (20% fingerprint images). We use the trainset to fine-tune the last layer in deep network models. Then, results are reported on the testset using these resulting models. Table 1. Description of fingerprint image datasets ID Dataset # Datapoints # Classes 1

FP-235

3485

235

2

FP-389

6306

389

3

FP-559 10270

559

Training Deep Network Models for Fingerprint Image Classification

3.1

333

Classification Results

We obtain the classification accuracy of deep network models in Tables 2, 3, 4 and Figs. 6, 7, 8. In tables, the highest accuracy is bold-faced and the second one is in italic. The VGG16, VGG19, ResNet50, Inception-v3 and Xception give the average classification accuracy of 97.68%, 97.80%, 98.46%, 96.95%, 98.20%, respectively. In the comparison among deep network models, we can see that the ResNet50 and Xception models achieve highest classification accuracy, followed by VGG19 and VGG16. Maybe trainbase dataset is not large enough is the reason why inception-v3 has not achieved as high accuracy as other deep network models. Although the Inception-v3 gives lowest correctness, it is very competitive with the average accuracy of 96.95%. The classification results are very closed to our previous work in [11], in which we fine-tuned many last layers of deep networks (denoted by FTm-VGG16, FTmVGG19, FTm-ResNet50, FTm-Inception-v3, FTm-Xception), on the training datasets FP-235, FP-389, FP-559 and then using the resulting models to classify the testset. Nevertheless, the previous approach requires to re-train many last layers in deep network models for a new coming fingerprint image dataset. We believe that it is possible to train deep network models on such trainbase dataset that can be re-used to classify a new coming dataset of fingerprint images without re-training the new deep network model. Table 2. Overall classification accuracy for FP-235 No Deep network model Accuracy (%) 1

VGG16

97.70

2

VGG19

98.13

3

ResNet50

99.00

4

Inception-v3

97.27

5

Xception

98.71

Table 3. Overall classification accuracy for FP-389 No Deep network model Accuracy (%) 1

VGG16

2

VGG19

97.22 97.65

3

ResNet50

98.33

4

Inception-v3

96.72

5

Xception

98.64

334

T.-N. Do and M.-T. Tran-Nguyen Table 4. Overall classification accuracy for FP-559 No Deep network model Accuracy (%) 1

VGG16

98.13

2

VGG19

97.63

3

ResNet50

98.05

4

Inception-v3

96.85

5

Xception

97.26

Fig. 6. Overall classification accuracy for FP-235

Fig. 7. Overall classification accuracy for FP-389

Training Deep Network Models for Fingerprint Image Classification

335

Fig. 8. Overall classification accuracy for FP-559

4

Conclusions and Future Work

We have presented the new proposal to train deep network models for classifying fingerprint images. We collected a real fingerprint image dataset, called trainbase from the students at the Can Tho University. After that, we propose to train recent deep networks, including VGG, ResNet50, Inception-v3, Xception, on trainbase dataset. For dealing with a new coming fingerprint image dataset, these resulting deep network models can be re-used as the feature extraction and then the train task only performs fine-tuning the last layer in deep network models without retraining a new deep network model from scratch. The empirical test results on three real fingerprint image datasets (FP-235, FP-389, FP-559) show that deep network models achieve at least the accuracy of 96.72% on the testsets. An example of effectiveness is that the ResNet50 models give classification accuracy of 99.00%, 98.33%, 98.05% on FP-235, FP-389 and FP-559, respectively. In the future, we will enlarge trainbase dataset to improve the pre-trained deep network models. We intend to provide more empirical test on large benchmarks. Acknowledgments. This work has received support from the College of Information Technology, Can Tho University. The author would like to thank very much the Big Data and Mobile Computing Laboratory.

References 1. Abadi, M., et al.: TensorFlow: large-scale machine learning on heterogeneous systems (2015). https://www.tensorflow.org/. Software available from tensorflow.org 2. Bosch, A., Zisserman, A., Mu˜ noz, X.: Scene classification via pLSA. In: Leonardis, A., Bischof, H., Pinz, A. (eds.) ECCV 2006. LNCS, vol. 3954, pp. 517–530. Springer, Heidelberg (2006). https://doi.org/10.1007/11744085 40

336

T.-N. Do and M.-T. Tran-Nguyen

3. Cao, K., Jain, A.K.: Fingerprint indexing and matching: an integrated approach. In: 2017 IEEE International Joint Conference on Biometrics (IJCB), pp. 437–445 (2017). https://doi.org/10.1109/BTAS.2017.8272728 4. Cao, K., Nguyen, D.L., Tymoszek, C., Jain, A.K.: End-to-end latent fingerprint search. IEEE Trans. Inf. Forensics Secur. 15, 880–894 (2020). https://doi.org/10. 1109/TIFS.2019.2930487 5. Cappelli, R., Ferrara, M., Maltoni, D.: Large-scale fingerprint identification on GPU. Inf. Sci. 306, 1–20 (2015). https://doi.org/10.1016/j.ins.2015.02.016, https://www.sciencedirect.com/science/article/pii/S0020025515001097 6. Chollet, F., et al.: Keras (2015). https://keras.io 7. Chollet, F.: Xception: deep learning with depthwise separable convolutions. CoRR arXiv:1610.02357 (2016) 8. Deng, J., Berg, A.C., Li, K., Fei-Fei, L.: What does classifying more than 10,000 image categories tell us? In: Daniilidis, K., Maragos, P., Paragios, N. (eds.) ECCV 2010. LNCS, vol. 6315, pp. 71–84. Springer, Heidelberg (2010). https://doi.org/10. 1007/978-3-642-15555-0 6 9. Do, T.: Training neural networks on top of support vector machine models for classifying fingerprint images. SN Comput. Sci. 2(5), 1–12 (2021). https://doi.org/ 10.1007/s42979-021-00743-0 10. Do, T.-N., Lenca, P., Lallich, S.: Classifying many-class high-dimensional fingerprint datasets using random forest of oblique decision trees. Vietnam J. Comput. Sci. 2(1), 3–12 (2014). https://doi.org/10.1007/s40595-014-0024-7 11. Do, T., Pham, T., Tran-Nguyen, M.: Fine-tuning deep network models for classifying fingerprint images. In: 12th International Conference on Knowledge and Systems Engineering, KSE 2020, Can Tho City, Vietnam, 12–14 November 2020, pp. 79–84. IEEE (2020) 12. Do, T., Poulet, F.: Latent-LSVM classification of very high-dimensional and largescale multi-class datasets. Concurr. Comput. Pract. Exp. 31(2), e4224 (2019). https://doi.org/10.1002/cpe.4224 13. El-Abed, M., Giot, R., Hemery, B., Charrier, C., Rosenberger, C.: A SVM-based model for the evaluation of biometric sample quality. In: 2011 IEEE Workshop on Computational Intelligence in Biometrics and Identity Management (CIBIM), pp. 115–122 (2011). https://doi.org/10.1109/CIBIM.2011.5949212 14. Goodfellow, I.J., Bengio, Y., Courville, A.C.: Deep Learning. Adaptive Computation and Machine Learning. MIT Press (2016) 15. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. CoRR arXiv:1512.03385 (2015) 16. Itseez: Open source computer vision library (2015). https://github.com/itseez/ opencv 17. Jain, A.K., Feng, J., Nandakumar, K.: Fingerprint matching. IEEE Comput. 43(2), 36–44 (2010) 18. Jain, A.K., Nandakumar, K., Ross, A.: 50 years of biometric research: accomplishments, challenges, and opportunities. Pattern Recogn. Lett. 79, 80–105 (2016) 19. LeCun, Y., Bottou, L., Bengio, Y., Haffner, P.: Gradient-based learning applied to document recognition. Proc. IEEE 86, 2278–2324 (1998) 20. Li, F., Perona, P.: A Bayesian hierarchical model for learning natural scene categories. In: 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2005), San Diego, CA, USA, 20–26 June 2005, pp. 524–531 (2005)

Training Deep Network Models for Fingerprint Image Classification

337

21. Li, X., Cheng, W., Yuan, C., Gu, W., Yang, B., Cui, Q.: Fingerprint liveness detection based on fine-grained feature fusion for intelligent devices. Mathematics 8(4), 517 (2020). https://doi.org/10.3390/math8040517, https://www.mdpi.com/ 2227-7390/8/4/517 22. Lowe, D.: Object recognition from local scale invariant features. In: Proceedings of the 7th International Conference on Computer Vision, pp. 1150–1157 (1999) 23. Lowe, D.: Distinctive image features from scale invariant keypoints. Int. J. Comput. Vis. 60, 91–110 (2004). https://doi.org/10.1023/B:VISI.0000029664.99615.94 24. Maltoni, D.: Fingerprint Recognition, Overview, pp. 664–668. Springer, Boston (2015). https://doi.org/10.1007/978-0-387-73003-5 47 25. Maltoni, D., Maio, D., Jain, A.K., Prabhakar, S.: Handbook of Fingerprint Recognition, 2nd edn. Springer, London (2009). https://doi.org/10.1007/978-1-84882254-2 26. Militello, C., Rundo, L., Vitabile, S., Conti, V.: Fingerprint classification based on deep learning approaches: experimental findings and comparisons. Symmetry 13(5), 750 (2021). https://doi.org/10.3390/sym13050750 27. Minaee, S., Abdolrashidi, A., Su, H., Bennamoun, M., Zhang, D.: Biometrics recognition using deep learning: a survey. arXiv e-prints arXiv:1912.00271, November 2019 28. Minaee, S., Azimi, E., Abdolrashidi, A.: FingerNet: pushing the limits of fingerprint recognition using convolutional neural network. CoRR arXiv:1907.12956 (2019) 29. Pandya, B., Cosma, G., Alani, A.A., Taherkhani, A., Bharadi, V., McGinnity, T.: Fingerprint classification using a deep convolutional neural network. In: 2018 4th International Conference on Information Management (ICIM), pp. 86–91 (2018). https://doi.org/10.1109/INFOMAN.2018.8392815 30. Pedregosa, F., et al.: Scikit-learn: machine learning in Python. J. Mach. Learn. Res. 12, 2825–2830 (2011) 31. Razavian, A.S., Azizpour, H., Sullivan, J., Carlsson, S.: CNN features off-the-shelf: an astounding baseline for recognition. In: IEEE Conference on Computer Vision and Pattern Recognition, CVPR Workshops 2014, Columbus, OH, USA, 23–28 June 2014, pp. 512–519. IEEE Computer Society (2014) 32. Shrein, J.M.: Fingerprint classification using convolutional neural networks and ridge orientation images. In: 2017 IEEE Symposium Series on Computational Intelligence (SSCI), pp. 1–8, November 2017 33. Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. CoRR arXiv:1409.1556 (2014) 34. Sivic, J., Zisserman, A.: Video google: a text retrieval approach to object matching in videos. In: 9th IEEE International Conference on Computer Vision (ICCV 2003), Nice, France, 14–17 October 2003, pp. 1470–1477 (2003) 35. Szegedy, C., Vanhoucke, V., Ioffe, S., Shlens, J., Wojna, Z.: Rethinking the inception architecture for computer vision. CoRR arXiv:1512.00567 (2015) 36. Vapnik, V.: The Nature of Statistical Learning Theory. Springer, New York (1995). https://doi.org/10.1007/978-1-4757-3264-1 37. Yosinski, J., Clune, J., Bengio, Y., Lipson, H.: How transferable are features in deep neural networks? In: Ghahramani, Z., Welling, M., Cortes, C., Lawrence, N.D., Weinberger, K.Q. (eds.) Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems 2014, Montreal, Quebec, Canada, 8–13 December 2014, pp. 3320–3328 (2014)

An Assessment of the Weight of the Experimental Component in Physics and Chemistry Classes Margarida Figueiredo1 , M. Lurdes Esteves2 , Humberto Chaves3 José Neves4,5 , and Henrique Vicente4,6(B)

,

1 Departamento de Química, Escola de Ciências e Tecnologia, Centro de Investigação em

Educação e Psicologia, Universidade de Évora, Évora, Portugal [email protected] 2 Agrupamento de Escolas D. José I, Vila Real de Santo António, Portugal [email protected] 3 Escola Superior Agrária de Beja, Instituto Politécnico de Beja, Beja, Portugal [email protected] 4 Centro Algoritmi, Universidade do Minho, Braga, Portugal [email protected] 5 Instituto Politécnico de Saúde Do Norte, Famalicão, Portugal 6 Departamento de Química, Escola de Ciências e Tecnologia, REQUIMTE/LAQV, Universidade de Évora, Évora, Portugal [email protected]

Abstract. Experimental work plays a central role in Physics and Chemistry teaching. However, the use of experimental work depends on the perception that teacher has about the gains in terms of the students’ motivation and learning. Thus, this study aims to evaluate the weight of the experimental component in the chemistry teaching focusing on four topics, i.e., material resources, teaching methodologies, learning achievements, and teacher engagement. For this purpose, a questionnaire was developed and applied to a cohort comprising 129 Physics and Chemistry teachers of both genders, aged between 26 and 60 years old. The questionnaire consists of two sections, the first of which contains general questions, whereas the second contains information on the topics mentioned above. Mathematical-logical programs are presented, considering the teachers’ opinions in terms of Best and Worst-case Scenarios, complemented with a computer approach based on artificial neural networks. The model was trained and tested with real data exhibiting an overall accuracy of 91.5%. Keywords: Experimental work · Chemistry teaching · Thermodynamics · Entropy · Knowledge Representation and Reasoning · Logic Programming · Artificial neural networks · Decision Support System

1 Introduction Experimental work plays a central role in Physics and Chemistry Teaching (PCT ) and has been officially anchored in science curricula since the 19th century. In fact, the relevance © The Author(s), under exclusive license to Springer Nature Switzerland AG 2022 H. A. Le Thi et al. (Eds.): MCO 2021, LNNS 363, pp. 338–349, 2022. https://doi.org/10.1007/978-3-030-92666-3_29

An Assessment of the Weight of the Experimental Component

339

of experimental work has been investigated by various authors [1, 2]. Moreover, based on experimental work, PCT is more motivating for students and offers a wide range of learning opportunities. Experimental work can be carried out according to different strategies, which require different levels of participation from students and teachers. The former involves the use of demonstrations conducted by the teacher. In this case, the role of the students is limited as they are only asked to observe the observations and, ultimately, to interpret them. As a result, students do not have the opportunity to develop the skills commonly cited as advantages of experimental work. Despite these limitations, the demonstrations are still used to illustrate certain topics or to overcome the lack of material resources [3]. Another strategy involves the student’s work following a recipe step by step. Students focus their attention on finishing one step at a time and may fail to develop a better understanding of the experiments on several occasions. For many students, experimental work only means working and handling laboratory equipment, but not an understanding of scientific thinking. Finally, another common strategy requires students to conduct experiments themselves by designing the activities, exploring, and discussing hypotheses. Experimental work based on this strategy plays a major role in increasing the learning success and the positive attitude of the students towards PCT [4]. From this point of view, it does not make sense to give the students detailed instructions on every experiment. In fact, following a recipe means eliminating basic steps in the teaching/learning process like gathering information or doing design. The use of experimental work in the PCT is highly dependent on the teacher, i.e., from the perception of the teacher to the motivation and learning gain of the students. In this respect, knowing the teachers’ opinions on these issues is of the utmost importance. Therefore, the main aim of this research is to evaluate the weight of the experimental component in the PCT. The article develops along the lines, i.e., following the Introduction, the principles adopted in the article are defined, namely the use of Logic Programming (LP) for Knowledge Representation and Reasoning (KRR) [5, 6], the scenarios of which are understood as a process energy devaluation [7]. Next, a case study on the weight of the experimental component in Physics and Chemistry lessons, which takes the teachers’ opinion into account and how it can be described by logic programs or mathematical logic theories, supplemented by a computer framework based on Artificial Neural Networks (ANNs) [8, 9]. Conclusions are then drawn and future work is outlined.

2 Theoretical Framework Aiming to demonstrate the basic rules of the proposed methodology, the First and Second Law of Thermodynamic are attended, considering that one’s system shifts from state to state over time. The former one, also known as the Energy Saving Law, claims that the total energy of an isolated system remains constant. The latter deals with Entropy, a property that quantifies the orderly state of a system and its evolution. These attributes match the intended concept of KRR practices, as this must be understood as a process of energy degradation. Thus, it is assumed that a process is in an entropic state, the energy of which can be broken down and used in sense of degradation, but never used in the sense of destruction, viz.

340

M. Figueiredo et al.

• EXergy, that stands for the entropic state of the universe of discourse; • VAgueness, that corresponds to the energy values that may or may not have been relocated and spent; and • ANergy, that represents an energy potential that has not yet been reassigned and depleted [6–8]. This relationship is introduced as the entropic potential of the transmitted energy in an evolutionary environment. It takes into account the entire energy transfer in the initial and final states, i.e., based on pure exergy, e.g., if it is a primary energy, and ends as pure anergy when it has become part of the internal energy of the structure. Indeed, with this concept, an energy degradation number with a reasonable physical background can be defined as the above-mentioned entropic potential, which in this work stands for the assessment of the Weight of the Experimental Component in Physics and Chemistry Classes (WECPCT ); such references serve as parameters to put the devaluation process in perspective. They stand for the Degree of Sustainability (DoS) and the Quality-of-Information (QoI), computed using the expressions [6, 7], viz.  DoS =

1 − (Exergy + Vagueness)2 ;

QoI = 1 − (exergy + vagueness)/Interval length(= 1). On the other hand, several approaches to KRR are described in the literature using the Logic Programming (LP) epitome, specifically in the context of Model Theory and Proof Theory. In this article, the Proof Theoretical approach for problem solving was embraced and stated as an extension of the LP language. Under this setting a LP will be grounded on a finite set of clauses in the form, viz.

where p, pn and pm are classical ground literals that stand for a set of predicates. A predicate (“PRED-i-cat”) is the part of a sentence that contains the verb and tells something about the subject. The clause (1) refers to the predicate’s closure, “,” stands for the logical and, whereas “?” is a domain atom symbolizing falsity. The classical negation sign “¬” denotes a strong declaration, whereas not expresses negation-by-failure (a collapse in demonstrating a certain statement since it was not declared in an explicit way). A set of exceptions to the extensions of the predicates that make the program are given by clause (4), that represents data, information or knowledge that cannot be ruled out. On the other hand, clause (3) put across invariants that make the context under which the universe of discourse have to be understood [5, 10].



3 Methods

The research was conducted at Portuguese secondary schools situated in the north (municipalities of Bragança and Oporto), centre (municipalities of Castelo Branco and Lisbon), and south (municipalities of Beja, Évora and Faro) of the country. The municipalities of Beja, Bragança, Castelo Branco and Évora are in the inner region of Portugal, whereas the remaining ones are situated on the coastline. A total of 129 teachers who taught Physics and Chemistry in public and private schools were enrolled in this study. The ages of the participants ranged from 26 to 60 years (average age 44 years), with 82% women and 18% men. The data collection was carried out using the questionnaire survey technique. The questionnaire consists of two sections, the first of which covers biographic data on age, gender, academic qualifications, professional category, and school location. The second contains statements on material resources, teaching methodologies, learning achievements, and teacher engagement. In the first part of the questionnaire the answers are descriptive, whereas in the second section a four-level Likert scale was used (Strongly Disagree (1), Disagree (2), Agree (3), and Strongly Agree (4)). The ANNs were implemented using the WEKA software, maintaining the standard software parameters [8, 9, 11]. Thirty tests were conducted in which the database was randomly split into training and test sets.

4 Case Study

Seeking to collect information about teachers' opinions on the weight of the experimental component in the PCT, the academics were invited to select the option(s) that correspond to their opinion concerning each statement. If an academic chooses more than one option, he/she is also requested to indicate the evolution trend of his/her answer, i.e., a growing tendency (Strongly Disagree (1) → Strongly Agree (4)) or the inverse (Strongly Agree (4) → Strongly Disagree (1)). Since the participants were requested to indicate the answer's tendency, the answer options were given on an expanded Likert scale, viz. Strongly Agree (4), Agree (3), Disagree (2), Strongly Disagree (1), Disagree (2), Agree (3), Strongly Agree (4). The statements under analysis were systematized into four groups, namely Material Resources Statements – Four Items (MRS – 4), Teaching Methodologies Statements – Four Items (TMS – 4), Learning Achievements Statements – Five Items (LAS – 5), and Teacher Engagement Statements – Three Items (TES – 3). The first one contains the statements, viz. S1 – The school has adequate rooms for Physics and Chemistry experimental teaching; S2 – The school has the necessary material for Physics and Chemistry experimental teaching; S3 – The school has the necessary reagents for Physics and Chemistry experimental teaching; and S4 – The rooms allocated to the Physics and Chemistry classes allow experimental teaching to be carried out.



The second group covers the statements, viz. S5 – The experimental work should be performed by the students; S6 – In experimental work, the students should be organized in small groups (maximum of 3 elements); S7 – The experimental work should be based on experimental guidelines; and S8 – The experimental work should be based on experimental problems from which students develop experimental guidelines. The third one embraces the statements, viz. S9 – Experimental work in PCT provides important learning achievements; S10 – Experimental work in PCT allows students to obtain better results; S11 – The elaboration of post-class written reports allows students to obtain better results; S12 – Experimental work in PCT contributes to increasing students' motivation; and S13 – Experimental work in PCT contributes to increasing students' literacy. Finally, the fourth group comprises the statements, viz. S14 – Experimental work in PCT makes the teacher's work more interesting; S15 – Experimental work in Physics and Chemistry facilitates the teacher's work; and S16 – The students' results compensate for the preparation time required by experimental work. Table 1 shows a teacher's answers to the second part of the questionnaire. For example, the answer to S2 was Strongly Agree (4) → Agree (3), corresponding to an increase in entropy, since there is a decreasing trend in his/her opinion. Conversely, the answer to S4 was Strongly Disagree (1) → Disagree (2), corresponding to a decrease in entropy, as the teacher ticked an increasing answer tendency. For S1 the answer was Agree (3), a fact that speaks for itself, while for S5 no options were marked, corresponding to a vague situation. In this case, although the values of the different forms of energy (i.e., exergy, vagueness, and anergy) are unknown, it is known that the bandwidth is the interval [0, 1].

Table 1. A teacher's answers to the questionnaire (MRS – 4, TMS – 4, LAS – 5 and TES – 3 groups)

For each statement S1–S16, grouped as MRS – 4, TMS – 4, LAS – 5 and TES – 3, the table records the option(s) ticked on the expanded scale (4) (3) (2) (1) (2) (3) (4), with a separate Vagueness column for statements left unanswered (e.g., S5).

4.1 An Entropic Approach to Data Attainment and Processing

Aiming to transpose the qualitative information into a quantitative form, Fig. 1 presents the graphical representation of the teacher's answers to the MRS – 4 group in terms of the different forms of energy, i.e., exergy, vagueness and anergy, for the Best and Worst-case scenarios. The markers on the axis correspond to any of the possible scale options, which may be read from bottom (4) → top (1), indicating that the performance of the system decreases with increasing entropy, or from top (1) → bottom (4), indicating that the performance of the system increases with decreasing entropy. The calculation of the various forms of energy with regard to the MRS – 4 group is shown in Table 2 for the Worst-case scenario, where a quadratic formulation is used to estimate the values of the various forms of energy rather than a linear or logarithmic one as suggested by Shannon [12, 13], since in the present case the data for the decision-making processes cannot always be measured exactly, i.e., we are dealing with other types of data, namely interval or incomplete data. Regarding the remaining groups, the values of exergy, vagueness, and anergy were evaluated in a similar way, considering the teacher's answers presented in Table 1. The quantitative data in Table 2 (derived from the teacher's answers in Table 1) may now be structured in terms of the extent of the predicates material resources statements – four items (mrs – 4), teaching methodologies statements – four items (tms – 4), learning achievements statements – five items (las – 5), and teacher engagement statements – three items (tes – 3), whose extent and formal description are presented in Table 3 and Program 2 for the Worst-case scenario.
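To make the quadratic (area-based) reading of the scale concrete, the short sketch below computes band areas under the construction suggested by the worked value for S1 in Table 2: the k-th scale mark is assumed to sit at radius (k/4)/√π, so the full quarter circle has area 1/4 and the energy associated with the region between two marks is a quarter-circle ring area. The marks passed to the function are hypothetical examples, not a transcription of the teacher's answers.

```python
import math

def band_area(k_inner: int, k_outer: int) -> float:
    """Quarter-circle ring area between scale marks k_inner and k_outer,
    assuming the k-th mark is at radius r_k = (k/4)/sqrt(pi), so the full
    quarter circle (k = 4) has area 1/4."""
    r = lambda k: (k / 4) / math.sqrt(math.pi)
    return 0.25 * math.pi * (r(k_outer) ** 2 - r(k_inner) ** 2)

# Hypothetical single-tick answer at mark 2: the exergy is the area up to
# that mark, which matches the 0.06 printed for S1 in Table 2.
print(round(band_area(0, 2), 2))   # 0.06
```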




Fig. 1. Estimating entropic states for a teacher's answers to the MRS – 4 group for the Best and Worst-case scenarios. The dark, gray, and white areas stand for exergy, vagueness and anergy, respectively.

Table 2. Assessment of the entropic state of a teacher for the Worst-case scenario when answering the MRS – 4 group.

For each statement S1–S4, and for both reading directions of the scale ((4) (3) (2) (1) and (1) (2) (3) (4)), the table gives the exergy, vagueness and anergy as quarter-circle areas of the form ¼πr², evaluated between the scale marks selected by the teacher, with the radius of the k-th mark set to (k/4)(1/√π) so that a full quarter circle has area ¼ (e.g., exergy_S1 = ¼π((2/4)²(1/π) − 0) = 0.06).

Table 3. The mrs – 4, tms – 4, las – 5, and tes – 3 predicates' extent according to a teacher's answers to the MRS – 4, TMS – 4, LAS – 5 and TES – 3 groups for the Worst-case scenario.

Scale (4) (3) (2) (1):
  mrs – 4 (4–1):  EX 0.14  VA 0.04  AN 0.57  DoS 0.98  QoI 0.82
  tms – 4 (4–1):  EX 0.08  VA 0.33  AN 0.34  DoS 0.91  QoI 0.59
  las – 5 (4–1):  EX 0.11  VA 0.04  AN 0.45  DoS 0.99  QoI 0.85
  tes – 3 (4–1):  EX 0.04  VA 0     AN 0.63  DoS 1.0   QoI 0.96

Scale (1) (2) (3) (4):
  mrs – 4 (1–4):  EX 0.14  VA 0.11  AN 0     DoS 0.97  QoI 0.75
  tms – 4 (1–4):  EX 0.02  VA 0.04  AN 0.19  DoS 1.0   QoI 0.94
  las – 5 (1–4):  EX 0.06  VA 0.10  AN 0.24  DoS 0.99  QoI 0.84
  tes – 3 (1–4):  EX 0.08  VA 0.10  AN 0.15  DoS 0.98  QoI 0.82

The Degree of Sustainability (DoS) for the different predicates present in Table 3 was computed using DoS = √(1 − (Exergy + Vagueness)²), while the Quality-of-Information (QoI) was evaluated using QoI = 1 − (Exergy + Vagueness)/Interval length (= 1) [6, 7].
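A minimal Python sketch of how the DoS and QoI values of Table 3 follow from the exergy and vagueness of each predicate (4–1 direction, Worst-case scenario); only the two expressions above are assumed.

```python
import math

def dos(exergy: float, vagueness: float) -> float:
    """Degree of Sustainability: sqrt(1 - (exergy + vagueness)^2)."""
    return math.sqrt(1 - (exergy + vagueness) ** 2)

def qoi(exergy: float, vagueness: float, interval_length: float = 1.0) -> float:
    """Quality-of-Information: 1 - (exergy + vagueness) / interval length."""
    return 1 - (exergy + vagueness) / interval_length

# Exergy and vagueness of the four predicates (4-1 direction, Table 3).
answers = {"mrs-4": (0.14, 0.04), "tms-4": (0.08, 0.33),
           "las-5": (0.11, 0.04), "tes-3": (0.04, 0.00)}

for name, (ex, va) in answers.items():
    print(name, round(dos(ex, va), 2), round(qoi(ex, va), 2))
# mrs-4 0.98 0.82, tms-4 0.91 0.59, las-5 0.99 0.85, tes-3 1.0 0.96
```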



Predicates are used to talk about the properties of objects, by defining the set of all objects that have a common property. A predicate is the part of a sentence that contains the verb and tells one something about the subject. For example, if P is a predicate on X, one may say that P is a property of X. A predicate asks a question whose answer is true or false, i.e., yes or no. In Computer Science and Mathematics, this question comes in the form of a function, and the data type of the answer is referred to as Boolean in both fields. Regarding the Best-case scenario, the extents of the predicates mrs – 4, tms – 4, las – 5, and tes – 3 are set in a similar way and the formal description is equivalent.



4.2 Artificial Neural Network Training and Testing Procedures

It is now possible to generate the data sets to train and test an ANN [9–11] (Fig. 2). The input variables are the extents of the mrs – 4, tms – 4, las – 5, and tes – 3 predicates, whereas the output is given in terms of an evaluation of the Weight of the Experimental Component in Physics and Chemistry Teaching (WECPCT) and a measure of its Sustainability. In this research a cohort of 129 teachers was enrolled, and the training and test sets were obtained as a side effect (through a process of unification [14]) of proving the theorems, viz.

∀(EX1, VA1, AN1, DoS1, QoI1, …, EX8, VA8, AN8, DoS8, QoI8), (mrs – 4(4–1)(EX1, VA1, AN1, DoS1, QoI1), …, tes – 3(4–1)(EX8, VA8, AN8, DoS8, QoI8))

and,

∀(EX1, VA1, AN1, DoS1, QoI1, …, EX8, VA8, AN8, DoS8, QoI8), (mrs – 4(1–4)(EX1, VA1, AN1, DoS1, QoI1), …, tes – 3(1–4)(EX8, VA8, AN8, DoS8, QoI8))

i.e., the triples (Exergy, Vagueness, Anergy), the DoSs and the QoIs [6, 7], which here are based on a teacher's answers to the MRS – 4, TMS – 4, LAS – 5 and TES – 3 groups for reasons of exposure. An example of the computing process for the pairs (WECPCT, Sustainability) may now be given in the form, viz.

WECPCT = (DoS_mrs–4(4–1) + … + DoS_tes–3(4–1))/4 = (0.98 + … + 1.0)/4 = 0.97
Sustainability = (QoI_mrs–4(4–1) + … + QoI_tes–3(4–1))/4 = (0.82 + … + 0.96)/4 = 0.81

and,

WECPCT = (DoS_mrs–4(1–4) + … + DoS_tes–3(1–4))/4 = (0.97 + … + 0.98)/4 = 0.99
Sustainability = (QoI_mrs–4(1–4) + … + QoI_tes–3(1–4))/4 = (0.75 + … + 0.82)/4 = 0.84

leading to an ANN with an 8-5-2 topology, i.e., eight nodes in the input layer, a hidden layer with five nodes, and an output layer with two nodes (Fig. 2) [9, 10]. In the pre-processing layer the linear activation function was used, whereas the sigmoid one was used in the other layers [8, 9]. The model accuracy was 93.1% (i.e., 81 correctly classified out of 87) for the training set (a random partition with 2/3 of the data) and 88.1% (i.e., 37 properly labeled out of 42) for the test set (a partition with the rest of the cases). In the classification process, high indicates a WECPCT higher than 0.80, medium denotes a WECPCT in the interval 0.5…0.80, and low designates a WECPCT lower than 0.5.


Fig. 2. An abstract view of the topology of the ANN for assessing the WECPCT and a measure of its Sustainability in its training and test phases.
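The network itself was trained in WEKA with its default parameters; the sketch below is only a rough Python/scikit-learn stand-in for the described 8-5-2 setting, with random placeholder data instead of the questionnaire-derived extents, and the 2/3 vs 1/3 split and the high/medium/low thresholds taken from the text above.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

# Eight inputs (one per predicate extent and direction), two outputs
# (WECPCT and Sustainability); the data below is random placeholder data.
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(129, 8))
y = rng.uniform(0, 1, size=(129, 2))

# 2/3 training, 1/3 test, mirroring the split reported in the paper.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=2/3, random_state=0)

net = MLPRegressor(hidden_layer_sizes=(5,), activation="logistic", max_iter=2000)
net.fit(X_tr, y_tr)

wecpct_pred = net.predict(X_te)[:, 0]
# Thresholds from the paper: high > 0.80, medium 0.50-0.80, low < 0.50.
labels = np.where(wecpct_pred > 0.80, "high",
                  np.where(wecpct_pred >= 0.50, "medium", "low"))
print(labels[:5])
```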

5 Conclusions and Future Work

In order to achieve the goal presented at the beginning of this work, An Assessment of the Weight of the Experimental Component in Physics and Chemistry Classes, experimental work plays a fundamental role and must be evaluated based on teachers' opinions on aspects of the problem such as material resources, teaching methods, learning outcomes and teacher engagement. The focus was on the data processing pipeline, i.e., the collection of data through the statement groups. Mathematical-logical programs are presented that take the teachers' opinions into account. In addition, this paper also presents the archetype of a Decision Support System for assessing the WECPCT level based on the ANN computing paradigm. The system was trained and tested with real data and showed satisfactory effectiveness, with an overall accuracy of 91.5%. Future work will consider expanding to a larger sample and including other dimensions of analysis that may help understand what determines the use of experimental work in the PCT.

Acknowledgments. This work has been supported by FCT – Fundação para a Ciência e Tecnologia within the R&D Units Project Scope: UIDB/00319/2020.

References

1. Hofstein, A., Lunetta, V.N.: The laboratory in science education: foundations for the twenty-first century. Sci. Educ. 88, 28–54 (2004)
2. Abrahams, I., Reiss, M.J.: Practical work: its effectiveness in primary and secondary schools in England. J. Res. Sci. Teach. 49, 1035–1055 (2012)
3. Logar, A., Savec, V.F.: Students' hands-on experimental work vs. lecture demonstration in teaching elementary school chemistry. Acta Chim. Slov. 58, 866–875 (2011)



4. Tarhana, L., Sesen, B.A.: Investigation the effectiveness of laboratory works related to "acids and bases" on learning achievements and attitudes toward laboratory. Procedia – Soc. Behav. Sci. 2, 2631–2636 (2010)
5. Neves, J.: A logic interpreter to handle time and negation in logic databases. In: Muller, R., Pottmyer, J. (eds.) Proceedings of the 1984 Annual Conference of the ACM on the 5th Generation Challenge, pp. 50–54. ACM, New York (1984)
6. Figueiredo, M., Fernandes, A., Ribeiro, J., Neves, J., Dias, A., Vicente, H.: An assessment of students' satisfaction in higher education. In: Vittorini, P., Di Mascio, T., Tarantino, L., Temperini, M., Gennari, R., De la Prieta, F. (eds.) MIS4TEL 2020. AISC, vol. 1241, pp. 147–161. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-52538-5_16
7. Wenterodt, T., Herwig, H.: The entropic potential concept: a new way to look at energy transfer operations. Entropy 16, 2071–2084 (2014)
8. Cortez, P., Rocha, M., Neves, J.: Evolving time series forecasting ARMA models. J. Heuristics 10, 415–429 (2004)
9. Fernández-Delgado, M., Cernadas, E., Barro, S., Ribeiro, J., Neves, J.: Direct Kernel Perceptron (DKP): ultra-fast kernel ELM-based classification with non-iterative closed-form weight calculation. J. Neural Netw. 50, 60–71 (2014)
10. Kakas, A., Kowalski, R., Toni, F.: The role of abduction in logic programming. In: Gabbay, D., Hogger, C., Robinson, I. (eds.) Handbook of Logic in Artificial Intelligence and Logic Programming, vol. 5, pp. 235–324. Oxford University Press, Oxford (1998)
11. Hall, M., Frank, E., Holmes, G., Pfahringer, B., Reutemann, P., Witten, I.H.: The WEKA data mining software: an update. SIGKDD Explor. 11(1), 10–18 (2009)
12. Rioul, O.: This is IT: a primer on Shannon's entropy and information. In: L'Information, Séminaire Poincaré, vol. XXIII, pp. 43–77 (2018)
13. Fortune, T., Sang, H.: Shannon entropy estimation for linear processes. J. Risk Fin. Manage. 13, 1–13 (2020)
14. Baader, F., Snyder, W.: Unification theory. In: Handbook of Automated Reasoning, vol. I, pp. 447–533. Elsevier Science Publishers (2001)

The Multi-objective Optimization of the Convolutional Neural Network for the Problem of IoT System Attack Detection

Hong Van Le Thi1(B), Van Huong Pham1, and Hieu Minh Nguyen2

1 Academy of Cryptography Techniques, Hanoi, Vietnam
{lthvan,huongpv}@actvn.edu.vn
2 Institute of Cryptographic Science and Technology, Hanoi, Vietnam

Abstract. This paper proposes a new approach, applying multi-objective optimization to improve the convolutional neural network structure for the IoT system attack detection problem. The goal of the paper is to develop a global optimization method that balances detection speed and accuracy when using a CNN. The accuracy objective function, the speed objective function and the global objective function are constructed to evaluate each network topology, and the value of the global objective function is used to choose the best network structure according to the Pareto multi-objective optimization method. The proposed method is evaluated experimentally with the K-fold method and gives positive results. The most balanced CNN structure has an accuracy of 99.94% and a classification time of 253.86 s. Keywords: Convolutional neural network (CNN) · Multi-objective optimization · Pareto optimization · IoT attack detection

1 Introduction

The IoT trend is developing rapidly and becoming more and more popular, especially in the fourth industrial revolution. According to the market research firm Statista, the world will have 75 billion Internet-connected devices by 2025. IoT devices include sensors, actuators and various other devices. The majority of IoT devices tested are insecure, use default passwords or unpatched vulnerabilities, and are easily compromised by malware like Mirai and Hajime to conduct DoS and DDoS attacks. The goals of IoT attacks include eavesdropping and taking over access control, data and device management. Therefore, along with the strong development of IoT systems comes a fierce and comprehensive battle between cybercriminals creating new attack technologies and cyber security forces. The first and increasingly difficult task of the cybersecurity force is to rapidly develop methods to detect known and emerging forms of IoT attacks. Detection methods based on signatures, on network traffic, or on group network analysis have been consecutively researched and applied to detect IoT attacks, especially Botnet attacks [6, 7]. However, they still have not kept up with the changing speed of attack types and attack-generating technologies.



Since the 2010s, machine learning and deep learning have become the most effective methods for detecting network attacks and IoT attacks. Deep learning models effectively applied in IoT security include RNN, AE, RBM, DBN, GAN, EDLN and CNN. Among them, CNN shows outstanding potential. CNN has been successfully applied to many different tasks related to machine learning, namely object detection, recognition, classification, regression, segmentation, etc. In IoT security, one study [5] suggested a CNN-based Android malware detection method. With CNN, important features related to malware detection are automatically learned from the raw data, thus eliminating the need for manual feature engineering. The disadvantage of CNN is its high computational cost; therefore, implementing CNNs on resource-constrained (IoT) devices to support on-board security systems is a challenge. However, there have been many positive research results using CNN in IoT attack detection, such as malware detection [1, 5], intrusion detection in IoT systems with over 98% accuracy [3], and IoT Botnet detection [4]. Furthermore, CNN can automatically learn the features of raw security data; therefore, it is possible to build an end-to-end security model for IoT systems [5]. However, the choice of hyperparameters greatly affects the performance of the CNN model: a slight change in the hyperparameter values can affect the overall performance of the CNN. That is why careful selection of hyperparameters is a major design problem that needs to be solved through suitable optimization strategies. Therefore, improving the CNN network structure needs to be considered as a multi-objective optimization problem. The more complex the network structure is, the higher the accuracy gets, but the higher the performance requirements placed on the operating system, and this requirement is not easily met, especially for resource-constrained systems like IoT systems. The rest of our article is organized as follows: Sect. 2 – survey, analysis and synthesis of related research; Sect. 3 – presentation of the basic idea, process and content of the method's development; Sect. 4 – experiments to test the method; Sect. 5 – conclusion and directions for development.

2 Related Works

CNN is one of the most successful (best-performing) neural network models, applied in many fields, such as image classification and segmentation, object detection, video processing, Natural Language Processing (NLP), speech recognition, and IoT security [8]. CNN uses multiple stages of feature extraction that can automatically learn representations from raw data, which accounts for its superior learning ability. Today, with the strong development of hardware technology and the availability of large data sets, the development and improvement of CNN structures is accelerating more than ever. Many research directions have brought great progress to CNN, such as using different activation functions and loss functions, parameter optimization, and innovation and regularization of the structure. Among them, structural innovation studies show a huge improvement in CNN's capabilities. CNN structures have been improved according to the following approaches: channel and spatial information exploitation (1995–2015), depth and width of architecture (2016–2017), multi-path information processing (2015–2017), or using a block of layers as a structural unit [8], etc.



However, along with the development of CNN network structures, meeting high efficiency in training requires powerful hardware such as GPUs. Embedded systems and IoT systems have limited resources, yet it is still necessary to apply CNN to specific problems. Therefore, the problem of multi-objective optimization of the CNN model in the IoT system attack detection problem has both practical and scientific significance. The multi-objective optimization (MOO) problem was introduced by Vilfredo Pareto [12]. Optimization problems look for a maximum or minimum value using one objective or multiple objectives; the optimal value or best solution is found through the optimization process. This type of problem appears in everyday life, for example in mathematics, engineering, social studies, economics, agriculture, aviation, and the automotive industry [9]. The two main MOO methods are Pareto and scalarization. In the Pareto method, dominated and non-dominated solutions are obtained by a continuously updated algorithm; the performance indicators are kept as separate components, a compromise solution is produced, and the result can be displayed in the form of a Pareto optimal front. Meanwhile, the scalarization method turns the multiple objectives into a single solution by combining the performance-indicator components, using weights, into a scalar fitness function [9]. In the development of neural network structures, the MOO problem has also been posed and solved relatively effectively. Research [10] proposes the DeepMaker framework, which automatically generates a robust DNN for network accuracy and network size and then maps the generated network to an embedded device. DeepMaker uses an MOO method to solve the neural network structure search problem by finding a set of Pareto optimal surfaces based on an advanced structure, DenseNet. The results of the optimization problem using GA in [11] help the decision maker to choose a neural network model among many options, with the required accuracy and available computational resources, reducing the computational complexity of the obtained artificial neural network (ANN) structures. Research [17] proposes ensemble deep learning with multi-objective optimization (EDL-MO), using a deep belief network (DBN) to build ensemble deep learning models; the parameters of each DBN are determined by simultaneously optimizing two conflicting objectives, namely accuracy and diversity. EDL-MO is more efficient than current algorithms in predicting the remaining useful life (RUL) of rotating machinery. The multi-objective optimization method using Pareto has been effectively applied in [13] for efficient and accurate discovery of CNN hyperparameter configurations. This method has been used in hyperparameter optimization of CNNs [14] and neural architecture search (NAS) [15, 16], etc. This paper proposes a method to improve the CNN network structure for the IoT attack detection problem using the MOO approach with the Pareto method. The method development process and experimental results are presented in the next sections.



3 Method Development

3.1 Ideas

The idea of the paper is to optimize the CNN network structure to achieve the best balance between the accuracy and speed goals. The Pareto multi-objective optimization method is applied to implement this idea. In machine learning in general, and deep learning in particular, the more complex the network structure and the higher the number of hidden layers, the higher the accuracy gets. This is because, with more hidden layers and a smaller difference in the number of neurons between consecutive layers, less information is lost, so the accuracy is higher. However, in many cases, when the complexity increases a lot, the improved accuracy is small while the demands on system performance grow. Therefore, the paper approaches the optimal balance between accuracy and performance. This is a new approach to the problem of IoT attack detection based on deep learning with CNN.

3.2 Pareto Multi-objective Optimization

In practice, each system has many optimization goals, and these goals often contradict each other. For example, improving performance can increase memory size; improving accuracy can degrade performance, etc. Multi-objective optimization aims to solve this problem. Pareto multi-objective optimization is the method that seeks the most balanced solution among the optimization objectives.

Definition 1. Pareto Optimal Solution. If x* is the solution to be found, then x* must have the following properties:

• x* must belong to the feasible set D, i.e., satisfy all the constraints of the problem.
• Every alternative x ≠ x* in D that is better on some objective (fi(x) ≥ fi(x*)) must also be worse on at least one other objective (fj(x) < fj(x*)), with j different from i.

This solution x* is also called an efficient solution: on the whole, there is not a single x that can outperform x*.

3.3 IoT System Attack Detection Using CNN

In deep learning, the CNN is the most common network. CNNs have been used to detect IoT system attacks with high accuracy, as shown in [1, 3–5]. The CNN is very different from other machine learning algorithms in that it combines both feature extraction and classification. Figure 1 shows an example of a basic CNN consisting of five different layers: an input layer, a convolution layer, a pooling layer, a fully-connected layer, and an output layer. They are divided into two parts: feature extraction and classification. Feature extraction consists of the input layer, the convolution layer, and the pooling layer, while classification consists of the fully-connected layer and the output layer.



The input layer specifies a fixed size for the input matrix. The matrix is then convolved with multiple learned kernels using shared weights in the convolution layer. Next, the pooling layer reduces the matrix size while trying to maintain the contained information. The outputs of the feature extraction stage are known as feature maps. The classification stage combines the extracted features in the fully connected layers. Finally, there is one output neuron for each class of IoT attacks in the output layer [18].

Fig. 1. Simple structure of a CNN
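A minimal Keras sketch of such a one-pair CNN for an 11-class IoT attack detection task of the kind used later in the paper; the 115-feature input length, the dense-layer width and the compilation settings are illustrative assumptions, not the authors' exact configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_FEATURES = 115   # assumed length of a flow-feature vector (N-BaIoT-style data)
NUM_CLASSES = 11     # normal traffic plus ten attack labels

model = models.Sequential([
    layers.Input(shape=(NUM_FEATURES, 1)),
    # One (convolution, pooling) layer pair: 32 filters, window size 3, pool size 2.
    layers.Conv1D(32, kernel_size=3, activation="relu"),
    layers.MaxPooling1D(pool_size=2),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),               # fully-connected layer
    layers.Dense(NUM_CLASSES, activation="softmax"),   # one output neuron per class
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```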

3.4 Improvement of CNN Network Structure According to Multi-objective Method

Each CNN model consists of an input layer, one or more pairs (convolutional layer and pooling layer), one or more fully-connected layers, and an output layer. Accordingly, the structure of each CNN includes the number of layers, the size of each layer, the number of dimensions, the number of sliding windows, the number of pooling windows and the corresponding sizes. Let s be the structure of a CNN and S be the set of structures; the problem of improving the CNN structure by the Pareto multi-objective method is described as in Eq. (1).

max( f(s) = {f1(s), f2(s)} )    (1)

where f is the global objective function, f1 is the accuracy objective function, and f2 is the speed objective function. These functions are defined and constructed below.

Definition 2. Accuracy Objective Function. The accuracy objective function is a quantity proportional to the accuracy, where the accuracy is calculated as the quotient of the number of correctly detected samples over the total number of samples. In order to cancel the dimension (measurement unit), we construct the accuracy objective function as in Formula (2).

f1 = a / max_{s∈S}(a)    (2)



where,
• a is the accuracy corresponding to the structure s;
• s is a CNN structure;
• S is the set of CNN structures.

Definition 3. Speed Objective Function. The speed objective function is a quantity inversely proportional to the detection time. In machine learning, the training time is often ignored and only the classification time is considered: since the goal of training is to obtain an array of weights, the training time is not important. In order to cancel the dimension (measurement unit), we construct the speed objective function as in Formula (3).

f2 = max_{s∈S}(t) / t    (3)

where,
• t is the detection time;
• s is a CNN structure;
• S is the set of CNN structures.

Definition 4. Global Objective Function. The global objective function is a function that evaluates the balance between the component optimization goals. The Pareto multi-objective optimization problem for the CNN structure in IoT attack detection is the problem of finding the maximum of the function f. The global objective function is built as shown in Expression (4).

f = w1 × f1 + w2 × f2    (4)

where,
• w1 is the weight of the accuracy objective function;
• w2 is the weight of the speed objective function;
• w1 + w2 = 1.

The weights characterize the importance of the component objective functions. In the multi-objective optimization problem, depending on the importance of each component objective function, or on which component we wish to improve more, we can adjust the weight values accordingly.
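A short sketch of how Expressions (2)–(4) pick the most balanced structure from measured (accuracy, classification time) pairs; the structure identifiers and all numbers in the example dictionary are placeholders for illustration, and the weights are the (0.8, 0.2) pair used later in the experiments.

```python
def best_structure(results, w1=0.8, w2=0.2):
    """results: {structure_id: (accuracy, detection_time)}.
    Returns the structure maximizing f = w1*f1 + w2*f2 with
    f1 = a / max(a) and f2 = max(t) / t."""
    a_max = max(a for a, _ in results.values())
    t_max = max(t for _, t in results.values())
    scores = {s: w1 * (a / a_max) + w2 * (t_max / t)
              for s, (a, t) in results.items()}
    best = max(scores, key=scores.get)
    return best, scores

# Placeholder measurements: (accuracy, classification time in seconds).
measured = {"A": (0.999, 300.0), "B": (0.985, 200.0), "C": (0.970, 180.0)}
best, scores = best_structure(measured)
print(best, round(scores[best], 4))
```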



4 Experiment

4.1 Experimental Model

To evaluate the proposed method, we conduct experiments according to the model shown in Fig. 2. The input is a set of N different CNN structures tested on the same dataset. Each structure is tested with the K-fold cross-validation method to obtain its accuracy and execution (classification) time. Based on these values and the set of weights corresponding to the objective functions, the global objective function value is calculated from the two component objective functions f1 and f2. The best CNN structure is the one with the maximum value of f.

Fig. 2. Experimental model
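A sketch of the per-structure evaluation step with K-fold cross-validation; build_cnn is a hypothetical factory that returns a compiled Keras model for a given structure, and the epoch count is an arbitrary placeholder.

```python
import time
import numpy as np
from sklearn.model_selection import KFold

def evaluate_structure(build_cnn, structure, X, y, k=5):
    """Return the mean accuracy and mean classification time of one CNN structure."""
    accs, times = [], []
    for train_idx, test_idx in KFold(n_splits=k, shuffle=True, random_state=0).split(X):
        model = build_cnn(structure)                   # hypothetical model factory
        model.fit(X[train_idx], y[train_idx], epochs=5, verbose=0)
        start = time.perf_counter()
        _, acc = model.evaluate(X[test_idx], y[test_idx], verbose=0)
        times.append(time.perf_counter() - start)      # classification time only
        accs.append(acc)
    return float(np.mean(accs)), float(np.mean(times))
```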

4.2 Experimental Program and Data

Experimental Data Set. The experimental dataset used in this paper is randomly taken from the N-BaIoT dataset [2] and is summarized in Table 1.



Table 1. The experimental data set

Labels   Description       Number of samples
0        Normal            3000
1        Gafgyt-Combo      3000
2        Gafgyt-Junk       3000
3        Gafgyt-Scan       3000
4        Gafgyt-TCP        3000
5        Gafgyt-UDP        3000
6        Mirai-Ack         3000
7        Mirai-Scan        3000
8        Mirai-Syn         3000
9        Mirai-UDP         3000
10       Mirai-UDPplain    3000
Total of samples           33000

To measure the execution speed, we execute the CNN structures on the same hardware configuration, described in Table 2.

Table 2. Hardware configuration of the experimental environment

Component     Value
Processor     Intel(R) Core™ i7-9700 CPU @ 3.00 GHz; Sockets: 1; Cores: 8; Processes: 258; Threads: 2769
RAM           16 GB (15.9 GB usable)
System type   Windows 10 64-bit
Cache         L1: 512 KB; L2: 2 MB; L3: 12 MB
GPU           No

The set S of CNN structures, comprising 108 structures, is partially illustrated in Table 3, where (Xi, Yi, Zi) are the filter window size, the sliding step and the corresponding pooling matrix size of the i-th layer pair (a convolutional layer and a pooling layer), respectively.


Table 3. Illustration of some CNN structures

No.   Structure
3     2 layer pairs: (32, 3, 3); (32, 3, 2)
22    2 layer pairs: (24, 1, 2); (24, 1, 3)
26    2 layer pairs: (32, 3, 2); (24, 3, 3)
27    2 layer pairs: (32, 3, 3); (24, 3, 2)
37    2 layer pairs: (32, 3, 6); (24, 3, 2)
56    3 layer pairs: (32, 3, 2); (32, 3, 3); (32, 3, 3)
61    3 layer pairs: (32, 2, 3); (32, 2, 2); (32, 2, 2)
64    3 layer pairs: (32, 1, 2); (32, 1, 3); (32, 1, 3)
71    3 layer pairs: (24, 2, 2); (24, 2, 2); (24, 2, 2)
95    3 layer pairs: (32, 3, 6); (24, 1, 2); (24, 1, 2)

4.3 Experimental Results and Evaluation

From the above data set, we conduct experiments according to the K-fold cross-validation method to determine the accuracy and the classification time. The experimental results for the 108 structures and the calculated values of the objective functions are shown in Table 4. In this experiment, we have chosen the weight set (w1, w2) = (0.8, 0.2). The graphs of the functions f1, f2 and f over the set of N CNN structures are shown in Fig. 3.

Table 4. A part of the experimental results

Structure   f1       f2       f
3           0.9994   1.5156   1.10264
22          0.8255   1.5798   0.97636
26          0.9991   1.558    1.11088
27          0.9994   1.5783   1.11518
37          0.9888   1.5563   1.1023
56          0.9948   1.0886   1.01356
61          0.9970   1.0009   0.99778
64          0.8452   1.0434   0.88484
71          0.9979   1.0518   1.00868
95          0.9767   1.5628   1.09392

According to the results in Table 4, the 27th structure has the maximum value of the function f; it is the structure with the best balance between accuracy and detection speed. With this CNN structure, the accuracy is 99.94% and the detection time is 253.86 s.

The Multi-objective Optimization of the Convolutional Neural Network

359

Compared to the studies [3–5], which report average accuracies from 98.0% to 99.95%, the accuracy of our method is not the best, but it is quite high. Moreover, in this paper we focus on the global optimal solution: the best structure is the one with the best balance between accuracy and performance. This is the new point of our research, and it has not been proposed in previous studies.

Fig. 3. Chart of experimental results with 108 CNN architectures

5 Conclusion

The main contribution of the paper is to propose and develop a method to improve the CNN network structure for the problem of detecting IoT system attacks with a multi-objective approach. This is a new approach towards global optimization, balancing detection time and accuracy. The article builds the accuracy objective function, the speed objective function and the global objective function, and applies the Pareto multi-objective optimization method to find the best structure. The proposed method was evaluated experimentally and achieved positive results. Besides the positive results, the paper still has some limitations: it experiments on only one data set, and the number of samples and the number of labels are quite small. In future research, we will apply the method to different data sets, combined with genetic algorithms for optimization.

References

1. Thuan, L.Đ., Huong, P.V., Van, L.T.H., Cuong, H.Q., Hiep, H.V., Khanh, N.K.: Android malware detection based on deep learning using convolutional neural network. Tạp chí Nghiên cứu Khoa học và Công nghệ Quân sự (Journal of Military Science and Technology) (2019). ISSN 1859-1043



2. Dataset: N-BaIoT dataset to detect IoT Botnet attacks. https://www.kaggle.com/mkashifn/nbaiot-dataset/data?select=features.csv. Accessed 28 Sept 2021
3. Thuan, L.Đ., Huong, P.V., Van, L.T.H., Hung, Đ.V.: Intrusion detection in IoT systems based on deep learning using convolutional neural network. In: 6th NAFOSTED Conference on Information and Computer Science - NICS (2019)
4. Pour, M.S., et al.: Data-driven curation, learning and analysis for inferring evolving IoT Botnets in the wild. In: Conference Paper (2019)
5. McLaughlin, N.: Deep android malware detection. In: Proceedings of the Seventh ACM on Conference on Data and Application Security and Privacy, pp. 301–308. ACM (2017)
6. Goodman, N.: A Survey of Advances in Botnet Technologies (2017). arXiv:1702.01132v1
7. Zhao, D., et al.: Botnet detection based on traffic behavior analysis and flow intervals. Comput. Secur. J. 39, 2–16 (2013)
8. Khan, A., Sohail, A., Zahoora, U., Qureshi, A.S.: A survey of the recent architectures of deep convolutional neural networks. Artif. Intell. Rev. 53, 5455–5516 (2020)
9. Gunantara, N.: A review of multi-objective optimization methods and its applications. Cogent Eng. 5(1), 1502242 (2018). ISSN 2331-1916
10. Loni, M., Sinaei, S., Zoljodi, A., Daneshtalab, M., Sjodin, M.: DeepMaker: a multi-objective optimization framework for deep neural networks in embedded systems. Microprocess. Microsyst. 73, 102989 (2020). https://doi.org/10.1016/j.micpro.2020.102989
11. Tynchenko, V.S., Tynchenko, V.V., Bukhtoyarov, V.V., Tynchenko, S.V., Petrovskyi, E.A.: The multi-objective optimization of complex objects neural network models. Indian J. Sci. Technol. 9(29), 1–11 (2016). ISSN (Print): 0974-6846
12. Ehrgott, M.: Vilfredo Pareto and multi-objective optimization. Mathematics Subject Classification (2010)
13. Yin, Z., Gross, W., Meyer, B.H.: Probabilistic sequential multi-objective optimization of convolutional neural networks. IEEE Xplore (2020)
14. Smithson, S.C., Yang, G., Gross, W.J., Meyer, B.H.: Neural networks designing neural networks: multi-objective hyper-parameter optimization. In: ICCAD (2016)
15. Liu, C., et al.: Progressive neural architecture search. In: Ferrari, V., Hebert, M., Sminchisescu, C., Weiss, Y. (eds.) ECCV 2018. LNCS, vol. 11205, pp. 19–35. Springer, Cham (2018). https://doi.org/10.1007/978-3-030-01246-5_2
16. Dong, J.-D., Cheng, A.-C., Juan, D.-C., Wei, W., Sun, M.: DPP-Net: device-aware progressive search for Pareto-optimal neural architectures. In: Ferrari, V., Hebert, M., Sminchisescu, C., Weiss, Y. (eds.) ECCV 2018. LNCS, vol. 11215, pp. 540–555. Springer, Cham (2018). https://doi.org/10.1007/978-3-030-01252-6_32
17. Ma, M., Sun, C., Mao, Z., Chen, X.: Ensemble deep learning with multi-objective optimization for prognosis of rotating machinery. ISA Trans. 113, 166–174 (2020)
18. Si, L., Xiong, X., Wang, Z., Tan, C.: A deep convolutional neural network model for intelligent discrimination between coal and rocks in coal mining face. Bogdan Smolka (2020)

What to Forecast When Forecasting New Covid-19 Cases? Jordan and the United Arab Emirates as Case Studies

Sameh Al-Shihabi1,2(B) and Dana I. Abu-Abdoun1

1 Industrial Engineering and Engineering Management Department, University of Sharjah, PO Box 27272, Sharjah, United Arab Emirates
[email protected], [email protected]
2 Industrial Engineering Department, The University of Jordan, Amman, Jordan

Abstract. Covid-19 has exerted tremendous pressure on countries' resources, especially the health sector. Thus, it was important for governments to predict the number of new covid-19 cases to face this sudden epidemic. Deep learning techniques have shown success in predicting new covid-19 cases. Researchers have used long short-term memory (LSTM) networks that consider the previous covid-19 numbers to predict new ones. In this work, we use LSTM networks to predict new covid-19 cases in Jordan and the United Arab Emirates (UAE) over six months. The populations of both countries are almost the same; however, they made different arrangements to deal with the epidemic. The UAE was a world leader in terms of the number of covid-19 tests per capita. Thus, we try to find out whether incorporating covid-19 tests in the LSTM networks' predictions would improve the prediction accuracy. Building bi-variate LSTM models that consider the number of tests did not improve on uni-variate LSTM models that only consider previous covid-19 cases. However, using a uni-variate LSTM model to predict the ratio of covid-19 cases to the number of covid-19 tests has shown superior results in the case of Jordan. This ratio can be used to forecast the number of new covid-19 cases by multiplying it by the number of conducted tests.

Keywords: Forecasting Covid-19 · Long short term memory neural network · PCR tests

1 Introduction

The first detected cases of the Corona Virus Disease (Covid-19), which is caused by the Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2), appeared in the Chinese city of Wuhan in December 2019 [15]. SARS-CoV-2 can cause Acute Respiratory Distress Syndrome (ARDS) or multiple organ dysfunction, which may lead to physiological deterioration and death of an infected individual [9]. In addition to its symptoms, which might lead to death in some cases, SARS-CoV-2 is highly contagious [21] and [17]. Due to its health consequences



and ease of transmission, the World Health Organization (WHO) declared Covid-19 a pandemic on 11 March 2020. One important problem during the current pandemic is to predict the evolution of the epidemic. Building reliable forecasting models allows governments to develop proper plans to cope with Covid-19 cases. Researchers have mainly used two techniques to study how SARS-CoV-2 spreads within a given population. The first technique is mathematical epidemiology (ME) [3]. The susceptible-infectious-recovered (SIR) model suggested in [13] is a classical ME model that has been used to study several epidemics, such as flu [16] and SARS [20]. Researchers have also used the SIR model and its variants to study Covid-19 (e.g., [6,9,19]). The second technique is artificial intelligence (AI) [5]. AI-based learning techniques have shown superior capabilities compared to classical statistical approaches in modeling the intricacies contained in the data [23]. Artificial neural networks (ANNs) [12] are deep learning techniques that have been used by several researchers to forecast the number of covid-19 cases (e.g., [7,8,26]). Among the different ANNs, researchers have relied on a recurrent neural network (RNN) type called the long short-term memory (LSTM) network due to its ability to use long- and short-term memories in forecasting [14]. The polymerase chain reaction (PCR) test is the main tool used to detect covid-19 cases. Serological antibody assay testing is another technique that can detect covid-19 patients; however, it is expensive and does not show the timing of the infection [11]. Due to the sudden high demand for PCR test kits, the world witnessed a supply chain shortage of these kits [18]. For countries that had a limited ability to conduct PCR tests, forecasting the number of cases without considering the number of tests might be erroneous. In this paper, we try to answer the following question: would incorporating the conducted PCR tests improve prediction accuracy? To answer this question, we consider the data of two countries, Jordan and the United Arab Emirates (UAE), with similar populations but different capabilities of conducting PCR tests. Figures 1 and 2 show the number of tests and cases in Jordan and the UAE, respectively. Figure 1 shows a considerable correlation between the number of tests and the number of covid-19 cases in Jordan: more tests meant more cases. On the other hand, since a larger number of tests was conducted in the UAE, Fig. 2 shows no graphical correlation between the number of tests and the number of cases. Similar to several other researchers, we use LSTM to forecast the number of new covid-19 cases. We test three forecasting models for the two countries:

1. A uni-variate LSTM to forecast covid-19 cases.
2. A bi-variate LSTM to forecast covid-19 cases, where the inputs are previous numbers of cases and previous numbers of tests.
3. A uni-variate LSTM to forecast the percentage of people who test positive. To get the number of covid-19 cases, we multiply this percentage forecast by the number of tests.



Fig. 1. Number of Covid-19 tests and cases in Jordan from 1 November 2020 to 1 June 2021

Henceforth, we denote these three LSTM models as uni-LSTM, bi-LSTM, and uni-LSTM × Tests, respectively. We use the three LSTM models to predict the cases of Jordan and the UAE. Results show that using the uni-LSTM model, which does not consider the number of tests, is the best way to predict covid-19 cases in the UAE. For Jordan, it is better to use the uni-LSTM × Tests model, because the accuracy in predicting the proportion of individuals who test positive among all individuals taking the test is higher than that of forecasting the number of covid-19 cases directly. This paper is organized as follows: Sections two and three review related work and artificial neural network (ANN) models, respectively. Section four describes the experiment setup, whereas section five compares the different LSTM models. Finally, conclusions and future research are discussed in section six.

2 Related Work

This summary does not cover all the work related to the use of LSTM models in predicting covid-19 spread; we only review some of the research related to this paper. An LSTM model was suggested in [8] to predict the number of total confirmed cases for one week in Saudi Arabia, as well as in five other countries: Brazil, India, South Africa, Spain, and the United States. The LSTM model with 100 hidden units achieved 99% accuracy. The effect of quality of life



Fig. 2. Number of Covid-19 tests and cases in UAE from 1 November 2020 to 1 June 2021

of covid-19 citizens awareness was studied in [10] in the gulf cooperation council (GDD) countries. Again, LSTM models were used to predict the number of covid-19 cases. Other comparative studies that compare countries and deep learning algorithms to forecast the number of covid-19 cases include [27] and [1] that compared European countries and Indian states, respectively. Moreover, in [25], LSTM models were used to predict the spread of covid-19 cases in Russia, Peru, and Iran. Not only LSTM models were used to forecast the number of new cases, they were also used to predict other covid-related numbers. For example, LSTM-based models were also used to predict deaths, as well as the number of new covid-19 cases, in [7]. LSTM-based models were also used in [2] to predict confirmed cases, negative cases, released, and deceased cases of covid-19. Researchers have also tried several LSTM variants in predicting new covid-19 cases. For example, in [4], bidirectional LSTM and encoder-decoder LSTM, in addition to a unidirectional LSTM were used to predict the covid-19 cases in India. A k-means LSTM model was used in [24] to forecast covid-19 spread in Louisiana state USA.

3 LSTM

Standard neural networks, feed-forward ones, do not have memories because connections between nodes do not form cycles as shown in Fig. 3. Consequently, they cannot be used to forecast time series where data is arranged according



to a time index, and previous readings can affect future forecasts. Recurrent neural networks (RNNs) have cycles, and the last internal state of an RNN is fed to the current one, as shown in Fig. 4. In Fig. 4, the internal state of the RNN is represented by ht, while xt and yt represent the inputs and output of the RNN. Note that the inputs, outputs, and internal states are time-indexed, and the current internal state is passed to the next one. Researchers have used RNNs in applications related to speech recognition, where the sequence of words is important. RNNs suffer from the vanishing gradients problem, which biases the RNN weights to capture short-term dependencies and neglect long-term dependencies. One way to alleviate the vanishing gradients problem is to replace the intermediate hidden layers of the RNN with LSTM gated cells, as explained in [22]. An LSTM's gated cell has three gates: an input gate it to add new information, a forget gate ft to delete information, and an output gate ot to update the next internal state. In addition to the internal state ht, the gated cells also maintain a cell state, ct, responsible for the long-term memory. A logistic sigmoid activation function, as shown in Eq. (1), is used to evaluate the information importance (Fig. 5).

σ(z) = 1 / (1 + e^(−z))    (1)

it = σ(wi xt + ui ht−1 + bi)    (2)

ft = σ(wf xt + uf ht−1 + bf)    (3)

ot = σ(wo xt + uo ht−1 + bo)    (4)

ht = ot × tanh(it × tanh(wg xt + ug ht−1 + bg) + ft × Ct−1)    (5)
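A direct NumPy transcription of Eqs. (1)–(5) for a single time step; the weight shapes and the random initialization are illustrative only, and the cell follows the simplified update written above rather than a full production LSTM implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))                              # Eq. (1)

def lstm_step(x_t, h_prev, c_prev, p):
    """One step of the gated cell, written as in Eqs. (2)-(5)."""
    i_t = sigmoid(p["wi"] @ x_t + p["ui"] @ h_prev + p["bi"])    # input gate, Eq. (2)
    f_t = sigmoid(p["wf"] @ x_t + p["uf"] @ h_prev + p["bf"])    # forget gate, Eq. (3)
    o_t = sigmoid(p["wo"] @ x_t + p["uo"] @ h_prev + p["bo"])    # output gate, Eq. (4)
    g_t = np.tanh(p["wg"] @ x_t + p["ug"] @ h_prev + p["bg"])    # candidate content
    c_t = i_t * g_t + f_t * c_prev                               # updated cell state
    h_t = o_t * np.tanh(c_t)                                     # Eq. (5)
    return h_t, c_t

# Illustrative shapes: one input feature, four hidden units, random weights.
rng = np.random.default_rng(0)
p = {}
for g in ("i", "f", "o", "g"):
    p["w" + g] = rng.normal(size=(4, 1))
    p["u" + g] = rng.normal(size=(4, 4))
    p["b" + g] = np.zeros(4)

h, c = lstm_step(np.array([0.5]), np.zeros(4), np.zeros(4), p)
print(h)
```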

4 Experiment

In this section, we show the data collection and exploratory data analysis (EDA) steps. We then show how we configured our LSTM network.

4.1 Data

The data used in the following analysis was obtained from the official sources in Jordan1 and UAE (See footnote 1). To have a fair comparison, we select six months from each country’s data to conduct our experiment, from 1 November 2020 to 30 April 2021.




Fig. 3. Example of a feedforward ANN

Fig. 4. Example of an RNN



Fig. 5. Example of a LSTM

4.2 Exploratory Data Analysis

Before delving into developing forecasting models, we conduct a simple EDA to understand the data. We start by calculating the cross-correlation coefficients as shown in Eq. (6), where Xt−d is the reading of variable X at time t − d and Yt is the reading of variable Y at time t. In our case, X represents the number of tests, Y represents the number of new covid-19 cases, and d is the lag between the test date and the case-recording date. Table 1 shows a high correlation between the number of tests and the number of cases in Jordan, compared to a low correlation in the UAE. In Jordan, the correlation is 0.8 when the lag is 0. For the UAE, the correlations did not reach 0.4.

ρd(X, Y) = Σ_{t=d}^{N} (Xt−d − X̂)(Yt − Ŷ) / √( Σ_{t=d}^{N} (Xt−d − X̂)² · Σ_{t=1}^{N} (Yt − Ŷ)² )    (6)
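A sketch of the lagged cross-correlation of Eq. (6) using pandas; the column names and the file name are assumptions about how the daily series might be stored.

```python
import pandas as pd

def lagged_corr(df: pd.DataFrame, max_lag: int = 5) -> pd.Series:
    """Correlation between tests shifted by d days and same-day new cases,
    mirroring Eq. (6): rho_d(tests, cases) for d = 0..max_lag."""
    return pd.Series(
        {d: df["tests"].shift(d).corr(df["cases"]) for d in range(max_lag + 1)}
    )

# Assumed layout: one row per day with 'tests' and 'cases' columns.
# df = pd.read_csv("jordan_daily.csv", parse_dates=["date"])
# print(lagged_corr(df).round(2))
```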

Figure 6 compares Jordan and the UAE in terms of the number of covid-19 cases and tests. The box-plot in Fig. 6a graphically summarizes the situations in Jordan and the UAE with respect to covid-19 cases. It is clear that Jordan witnessed more cases and showed higher deviations than the UAE. The UAE, however, conducted more PCR tests to detect covid-19 cases, as shown in Fig. 6b.

4.3 LSTM Configuration

We rely on previous research papers and trial and error methodology to configure our LSTM model. Most previous papers have used Adam as an optimization technique, accuracy as an objective, and mean absolute error as a loss function when training the LSTM models. We use the root mean square error, as shown in Eq. 7, to compare the different LSTM models. Python is used as an application



Table 1. Cross-correlation coefficients between number of tests and number of cases for different day lags in Jordan and the UAE.

Lag   Jordan   UAE
0     0.80     0.35
1     0.67     0.35
2     0.59     0.34
3     0.58     0.34
4     0.60     0.36
5     0.61     0.37

(a) Number of covid-19 cases in Jordan and the UAE from 1 November 2020 to 30 April 2021

(b) Number of covid-19 tests in Jordan and the UAE from 1 November 2020 to 30 April 2021

Fig. 6. Comparison between Jordan and the UAE regarding covid-19 cases and tests

program interface (API) to work with deep learning packages. Pandas, Numpy, and Keras are among the open source libraries used in the study. In all the studies, we divide the data such that 70% of the data is used for training while the other 30% is used for testing.

RMSE = √( (1/n) Σ_{i=1}^{n} (ŷi − yi)² )    (7)

From the trial and error experiments, we found that predicting Jordan's cases had a higher RMSE than predicting the UAE's cases. Thus, we use Jordan's cases to tune our three LSTM models. Table 2 shows the results of the different tested models. Column 2 of Table 2 shows the number of previous periods considered when predicting the next day's number of covid-19 cases. For example, model 10, which has period = 10, uses the cases of the current and past nine days to predict the cases of the next day. Columns 3 and 4 of Table 2 show the number of hidden layers and neurons per layer, respectively. The last three columns report the RMSE values for the three LSTM models. The best RMSE values are written in italics in Table 2. It is clear from the results that models 19 and 16 are the best models for the uni-LSTM and the bi-LSTM,



respectively. For the Uni-LSTM × Tests model, it is clear that any model that uses one previous reading outperforms the other models; however, the deviations of the models relying on five and seven days are not significant for this LSTM model.

Table 2. LSTM configuration experiment using Jordan's uni-variate LSTM model

Model number   Periods   Hidden layers   Neurons number   Uni-LSTM RMSE   Bi-LSTM RMSE   Uni-LSTM × Tests RMSE
1              1         1               10               1,309.7         1,329.4        0.019
2              1         2               10               1,285.6         1,329.4        0.019
3              1         1               20               1,286.6         1,331.2        0.019
4              1         2               20               1,271.2         1,244.5        0.019
5              5         1               10               1,341.3         1,328.2        0.02
6              5         2               10               1,326.4         1,331.3        0.02
7              5         1               20               1,289.5         1,310.4        0.02
8              5         2               20               1,307.0         1,359.2        0.02
9              7         1               10               1,106.3         1,248.2        0.03
10             7         2               10               1,089.2         1,238.5        0.03
11             7         1               20               1,108.8         1,248.3        0.026
12             7         2               20               1,143.2         1,325.3        0.025
13             10        1               10               939.2           1,247.3        0.14
14             10        2               10               1,045.2         1,306.9        0.13
15             10        1               20               928.4           1,273.4        0.13
16             10        2               20               947.2           1,230.3        0.16
17             14        1               10               1,074.7         1,341.3        0.13
18             14        2               10               948.8           1,328.4        0.13
19             14        1               20               863.8           1,303.4        0.14
20             14        2               20               991.3           1,400.1        0.14

5 Comparative Study

In this section, we try to find the best deep-learning model that Jordan and the UAE could use to forecast the expected number of new covid-19 cases. As shown in the previous section, Jordan and the UAE had significant differences in the number of PCR tests conducted in each country. Moreover, Jordan had a high correlation between the number of new covid-19 cases and the number of conducted PCR tests. Thus, we compare the three forecasting techniques using the LSTM models configured in the last section, employing the RMSE error measure as well as the coefficient of determination, as shown in Eq. (8). For the Uni-LSTM × Tests model, we calculate the accuracy measures by multiplying the predicted proportions by the number of PCR tests



conducted on the forecast day. Then we use the RMSE formula shown in Eq. 7.

R^2 = 1 - \sum_{i=1}^{n} \frac{\hat{y}_i - y_i}{\hat{y}_i^2 - y_i^2}    (8)
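A small helper for the two accuracy measures, written exactly as Eqs. (7) and (8) are stated above, is sketched below; the function names are illustrative.

```python
import numpy as np

def rmse(y_hat, y):
    """Root mean squared error, Eq. (7)."""
    y_hat, y = np.asarray(y_hat, float), np.asarray(y, float)
    return float(np.sqrt(np.mean((y_hat - y) ** 2)))

def r2(y_hat, y):
    """R^2 as defined in Eq. (8) of the paper (not the usual coefficient of determination)."""
    y_hat, y = np.asarray(y_hat, float), np.asarray(y, float)
    return float(1.0 - np.sum((y_hat - y) / (y_hat ** 2 - y ** 2)))

print(rmse([110, 95], [100, 100]), r2([110, 95], [100, 100]))
```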

Table 3 compares the different models' RMSEs and R² values for the two countries. For Jordan, it is clear that predicting the fraction of covid-19 cases out of covid-19 tests is the best model for forecasting future covid-19 cases. For the UAE, on the other hand, the best model is the uni-LSTM model, where future covid-19 cases depend only on previous covid-19 cases. Using the number of PCR tests as an input to the bi-LSTM model was inferior to the other two models.

Table 3. Comparison of different LSTM-based forecasting techniques to predict the number of new covid-19 cases in Jordan and the UAE

Model              Jordan RMSE  Jordan R²  UAE RMSE  UAE R²
Uni-LSTM           863.8        0.85       244.0     0.92
Bi-LSTM            1,230.3      0.61       279       0.85
Uni-LSTM × Tests   612.4        0.92       261.5     0.90

Intuitively, the number of PCR tests relative to the number of covid-19 cases is much higher in the UAE than in Jordan. Thus, in the UAE this number did not control the number of covid-19 cases. In Jordan, since the number of PCR tests is low compared to the number of covid-19 cases, the number of tests affected the number of discovered cases. Thus, the reported covid-19 cases in the UAE approach the actual number of covid-19 cases, whereas, in Jordan, the number of reported cases was affected by the number of tests. Table 3 also shows that the number of tests impacts the forecasting models. However, using the tests as direct inputs in the bi-LSTM model did not lead to improved results. It is better to create a new feature, covid-19 cases / PCR tests, and forecast this new feature. For Jordan, we used the Uni-LSTM × Tests model for periods up to 10 days, and the obtained forecasting results were better than those of the best Uni-LSTM model.

6 Conclusion and Future Research

Several researchers have used uni-variate LSTM models to predict the number of new covid-19 cases based on the previous numbers of cases. The researchers who developed these models ignored the effect of the number of PCR tests conducted to discover covid-19 patients. Thus, in this work, we study the effects of PCR tests on the prediction models. To fulfil this study objective, we use the


official covid-19 tests and positive cases in Jordan and the UAE for a six-month period. The two countries had significant differences in the number of conducted tests. We test three LSTM models: a uni-variate LSTM model similar to previous studies, a second, bi-variate LSTM model that considers the previous numbers of tests and cases, and a third uni-variate LSTM model that predicts the covid-19 cases / PCR tests ratio. For the UAE, incorporating the number of tests into the LSTM models did not improve the prediction models. However, for Jordan, the uni-variate model that forecasts the covid-19 cases / PCR tests ratio was superior to the other two models. The reason for these results can be attributed to the number of tests. A small number of tests means that not all covid-19 cases are discovered; however, if the number of tests is huge, then the tests are expected to find the accurate number of covid-19 patients. For future work, it is important to validate our findings by testing other countries' prediction models. By applying the current study to other countries, it would be possible to find the critical covid-19 cases / PCR tests ratio below which incorporating the number of tests is important. Lastly, countries like Jordan had limited testing capabilities compared to the UAE; knowing the actual number of covid-19 cases remains a problem. Extending this work to relate actual covid-19 cases to those discovered by testing and the number of tests is crucial to estimate the actual spread of the virus in any society or country.


Cryptography

Solving a Centralized Dynamic Group Key Management Problem by an Optimization Approach Thi Tuyet Trinh Nguyen1(B) , Hoang Phuc Hau Luu1 , and Hoai An Le Thi1,2 1

Université de Lorraine, LGIPM, Département IA, 57000 Metz, France {thi-tuyet-trinh.nguyen,hoang-phuc-hau.luu,hoai-an.le-thi}@univ-lorraine.fr 2 Institut Universitaire de France (IUF), Paris, France

Abstract. In centralized key management schemes, a single trusted entity called a Key Server is employed to manage the group key and other supporting keys of the entire group. This management mechanism usually employs a binary tree based structure. In dynamic multicast communication, members may join/leave the group at any time, which requires a certain cost to update the binary key tree. This paper addresses an important problem in centralized dynamic group key management. It consists in finding a set of leaf nodes in a binary key tree to insert new members while minimizing the insertion cost. Since the inserting cost is proportional to the distance from the root to the selected leaf node, the balance of the tree plays an important role in dynamic group key management. Therefore, our proposed approach also considers the balance of the tree after insertion. The two mentioned important objectives are combined into a unified optimization framework.

Keywords: Centralized group key management · DC programming · DCA · Combinatorial optimization

1 Introduction

Many secure group communication systems are based on a group key that is secretly shared by group members. In order to provide security for such communications, the existing systems encrypt the data using this group key and send the corresponding ciphertext to all members. Therefore, securing group communications (i.e., providing confidentiality, authenticity, and integrity of messages delivered between group members) has become an important Internet design issue. Apart from social networks, more secured environments like military networks, where sensitive data is exchanged, require an even higher level of confidentiality and security for data transmission, membership management, and key management. Since group membership is dynamic, it becomes necessary to change the group key in an efficient and secure fashion when members join or depart from the group. Commonly, the key server is responsible for updating the group key when


there is a change in the group membership. Due to its centrality nature, such scheme is called centralized group key management (CGKM). Among various structures of this type of management, the widely-used hierarchical key-tree approach proposed by Wallner et al. [20] and independently studied by Wong et al. [21] is an efficient way to reduce the rekeying cost. The rekeying cost denotes the number of messages that needs to be distributed to the members so that they obtain the new group key. In the binary tree, only leaf nodes can be removed and new nodes will be only appended below given leaf nodes. The number of messages is the one of updated keys in the path from the root to the selected leaf node. Consequently, if the binary tree architecture is balanced, the key server needs to perform 2 log2 (N ) encryptions for a joining and 2(log2 (N )−1) for a leaving, where N is the number of group members and log2 (N ) represents the height of the binary key tree. By definition, a key tree is considered balanced if the distance from the root to any two leaf nodes differs by not more than one. In the literature, there are several works that study the optimization aspect of CGKM with symmetric keys (keys that are shared secretly) [3,4,11]. Broadly, the optimization objectives include reducing the rekeying cost, the selection of the parameters in group communication systems, the structure of key tree, the time interval for rekeying, etc. However, most of these objectives have not been described systematically as a rigorous mathematical optimization model. Also, their algorithmic designs are mainly based on logical/heuristic arguments. In this work we propose a novel approach to an important problem in centralized dynamic group key management that consists in finding a set of leaf nodes in a binary key tree to insert new members while minimizing the insertion cost. Moreover, as stated in Moyer et al. [12], maintaining balanced trees is desirable in practice because membership updates can be performed with logarithmic rekeying costs provided that the tree is balanced. Hence, the balance of the tree is an important property in the Logical Key Hierarchy (LKH) structure. However, the key server cannot control the positions of departing members, the tree may become unbalanced afterward. It remains unbalanced until either insertions/deletions bring the tree back to a balanced state or some actions are taken to rebalance the tree. To overcome this problem, we need to control the shape of the key tree in the LKH. This objective should be carried out based on the insertion process since rebalancing the whole tree is a very expensive procedure. To our knowledge, most approaches in the literature are heuristic based and the two mentioned important objectives - the rekeying cost and the balance of the tree, have not been fully addressed in a unified optimization framework. Furthermore, there are two types of insertion that are commonly used in practice: individual insertion and batch insertion. The former performs the insert operation immediately when there is a new member joining a group. It has the following drawback: key server and group members are assigned to perform more rekeying operations rather than making use of the service. This limitation can be alleviated by batch insertion [10,13,14,19,22]. 
That is, when a member joins/departs from the group, the key server does not perform rekeying immediately; instead, it aggregates the total number of joining and leaving members during a time interval and then updates the related keys. Therefore, batch rekeying techniques increase


efficiency in the number of required messages and it takes advantage of the possible overlap of new keys for multiple rekeying requests, and then reduces the possibility of generating redundant new keys. Lam-Gouda et al. [10] presented a marking algorithm for the key server to procedure a batch of join and leave requests to minimize the rekeying cost. Hock Desmond Ng et al. [13] devised two merging algorithms that are suitable for batch joining events to keep the tree balanced after insertion. Vijayakumar et al. [19] presented rotation-based key tree algorithms to make the tree balanced even when batch leave requests are more than batch joins operations. Existing algorithms do not simultaneously consider both the balancing of the key tree and rekeying costs and therefore lead to either an unbalanced key tree or high rekeying costs. Apart from the symmetric key cryptography, there is another recent research direction that focuses on changing encryption algorithms. This line of works combines the LKH with asymmetric key cryptography (public key cryptography) [2,5], which can reduce the cost of updating group key, but increase the computational complexity. Our Contributions. We develop an optimization approach to the problem of updating group key in the LKH structure. We opt for the batch inserting technique due to its advantages over the individual inserting as discussed above. As the new nodes to be inserted should be leaf nodes, the first step of our algorithm consists in finding the set of all leaf nodes of the given tree. In the second stage, based on the found leaf nodes, we consider an optimization problem which minimizes the insertion cost of new members while keeping the tree as balanced as possible. Clearly, the first stage reduces considerably the complexity of the optimization model given in the second stage which works only on the leaf nodes. Overall, our proposed model provides a good compromise compared to existing works, producing a more balanced key tree with a low insertion cost. To our knowledge, this is the first work introducing an optimization model that takes into account simultaneously both objectives: the rekeying cost and the balance of the tree. The proposed optimization model is a combinatorial problem with nonconvex objective function. Consequently, it is very challenging to handle such kinds of programs by standard methods where the source of difficulty comes from the nonconvexity of the objective and the binary nature of the solutions. Fortunately, it is known that by using exact penalty techniques, this problem can be reformulated as a DC (Difference-of-convex-functions) program, where the DCA (DC Algorithm) is at our disposal as an efficient algorithm in DC programming. DC programming and DCA, which constitute the backbone of nonconvex programming and global optimization, were introduced by Pham Dinh Tao in 1985 and have been extensively developed since 1994 by Le Thi Hoai An and Pham Dinh Tao. This theoretical and algorithmic framework has been applied successfully to various areas, namely, transport logistics, finance, data mining and machine learning, computational chemistry, computational biology, robotics and computer vision, combinatorial optimization, cryptology, inverse problems and ill-posed problems, etc., see, e.g., [7,8,15,16].


The rest of the paper is organized as follows. Section 2 gives an overall description of hierarchical key-tree approaches in CGKM. The 2-stage approach to the problem of updating group key in the LKH structure is developed in Sect. 3, while Sect. 4 concludes the paper.

2 Centralized Group Key Management

LKH [20,21] is a basic and widely-used approach in centralized group key management. The LKH manages the group key with a hierarchical structure called the key tree. A full binary tree, in which every node has 0 or 2 children, is one of the most efficient structures.

2.1 Terms and Definitions

Before proceeding further, we introduce some notations and definitions [1] used in this paper.
– Key: sequence of symbols that controls the operations of a cryptographic transformation.
– Individual key: key shared between the key server and each member of the group.
– Key encryption key: cryptographic key that is used for the encryption or decryption of other keys.
– Shared secret key: key which is shared with all the active entities via a key establishment mechanism for multiple entities. It is also called the group key.
– Rekeying: process of updating and redistributing the shared secret key and, optionally, key encryption keys (KEKs). This process is executed by the key server.
– Individual rekeying: rekeying method in which the shared secret key and, optionally, key encryption keys are updated when an entity joins or leaves.
– Batch rekeying: rekeying method in which the shared secret key and, optionally, key encryption keys are updated at every rekeying interval T.
– eK(M): result of encrypting data M with a symmetric encryption algorithm using the secret key K.
– X||Y: result of concatenating data items X and Y in that order.

2.2 Logical Key Hierarchy

The LKH [20,21], as illustrated in Fig. 1, uses a single trusted key server to maintain the tree of keys and to update and distribute keys. A shared secret key is assigned to the root node of the tree. The leaf nodes of the key tree correspond to the group members, and each leaf node is assigned an individual key. Additionally, key encryption keys are assigned to the internal nodes (middle-level nodes). A key encryption key is shared by the members whose individual keys are assigned to the descendants of the node to which the key encryption key is assigned.


Each member in the group has to maintain all the keys assigned to the nodes on the path from the root node to the leaf node, to which the individual key of the member is assigned. Thus, the number of keys an entity has is proportional to the logarithm of the total number of entities. When a member joins or leaves, all the keys on the member’s key path have to be changed to maintain forward and backward secrecy.
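As a concrete illustration of this rule, the sketch below lists the key nodes that must be refreshed for one member in an array-indexed full binary tree (node t has children 2t and 2t + 1, the indexing used later in Sect. 3). The indexing choice and the function name are illustrative assumptions, not part of the paper.

```python
def keys_to_update(leaf_index):
    """Return the indices of all key nodes on the path from a member's leaf to the root.

    In the LKH, every key on this path must be renewed when the member joins or
    leaves, which is why a balanced tree gives a logarithmic rekeying cost.
    """
    path = []
    t = leaf_index
    while t >= 1:
        path.append(t)
        t //= 2          # parent of node t in the array-indexed binary tree
    return path          # the leaf's individual key, the KEKs above it, and the root SSK

# A member sitting at node 13 of a height-3 tree shares keys 13, 6, 3 and 1 (the root).
print(keys_to_update(13))   # -> [13, 6, 3, 1]
```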

Fig. 1. A full binary tree structure.

Figure 1 shows an example of an LKH; it can be taken as a full binary tree architecture. Each node in a key tree represents a symmetric key. The key server chooses and distributes the keys for the eight members U1, U2, ..., U8 as shown in Fig. 1. The root key SSK represents the group key shared by all group members. The leaf node keys IK1, IK2, ..., IK8 are assigned to the group members U1, U2, ..., U8, respectively, and IKi is shared by both the key server and the individual member Ui. The internal node keys KEK1, KEK2, ..., KEK6 are known only by the members that are in the subtree rooted at the corresponding node. Therefore, each member holds the keys from its leaf node key up to the root node key. For instance, U1 holds four keys {SSK, KEK1, KEK3, IK1} along the path from U1 to the root. Moyer et al. [12] defined a set of operations and techniques for efficiently updating the tree when members join and leave the group.

Deletion of an Existing Member. When deleting a member from the group, the key server follows this sequence of steps:
1. If the tree has only one leaf remaining, then the member that is being deleted is the last remaining member of the group. Simply delete this leaf (this last leaf should also be the root). There are no keys to update since the group has no more members.
2. If the tree has more than one leaf, first locate the member node for the member to be deleted. Call it C. Let P be the interior node that is the parent of C, and let S be the sibling of C, i.e., the other child of P. Delete P and C, and move S up into the position formerly occupied by P. Figure 2 illustrates this operation. The key server has to generate new keys for all nodes on the path from the deleted node to the root.


Fig. 2. The key tree after the member has been deleted [12].

Suppose that member U8 leaves the group as shown in Fig. 1 (from (a) to (b)). The updated keys (UpdatedKey1-7), broadcast by the key server to U1, U2, U3, U4, U5, U6 and U7, are:

UpdatedKey1-7 = e_{KEK1}(SSK') || e_{KEK2'}(SSK') || e_{KEK5}(KEK2') || e_{IK7}(KEK2').

Addition of a New Member. When a new member joins the group, the key server inserts it into the tree based on the following rule:
– Find the shallowest leaf of the tree by following the shallowest-leaf link from the root. Call this leaf LS. Then, create a new interior node NI, insert it at the location of LS, and make LS a child of NI. Finally, create a new member node C and insert it as the other child of NI.

By inserting the new member at the shallowest place in the tree, we help keep the tree balanced and minimize its height. This minimizes the computational cost and bandwidth required during key updates. Figure 3 illustrates this operation. The key server has to generate new keys for all nodes on the path from the new node to the root.

Fig. 3. Adding a new member at the shallowest position [12].


Suppose that a member U8 joins the group as shown in Fig. 1 (from (b) to (a)). After being authorized by the key server, the new member U8 is assigned a joining node and gets a secret key IK8. Three updated keys (UpdatedKey1-7), broadcast by the key server to U1, U2, U3, U4, U5, U6 and U7, are:

UpdatedKey1-7 = e_{SSK}(SSK') || e_{KEK2}(KEK2') || e_{IK7}(KEK6).

Three updated keys (UpdatedKey8), sent by the key server to U8, are:

UpdatedKey8 = e_{IK8}(SSK' || KEK2' || KEK6).

In order to further reduce the rekeying cost, other approaches have been proposed to improve the efficiency of key tree-based schemes. The optimization of the hierarchical binary tree called OFT (one-way function tree) was proposed by Sherman and McGrew [17]. Their scheme reduces the rekeying cost from 2 log2(N) to only log2(N). However, once members join or leave a group, the tree is rarely balanced. For a balanced binary key tree with N leaf nodes, the height from the root to any leaf node is log2 N. If the tree is unbalanced, some members might need to perform N − 1 decryptions in order to get the group key. Furthermore, in an unbalanced key tree, some members might need to store N keys, whereas the remaining members might need to store only a few keys. Hence, keeping the key tree balanced is a vital problem to solve.

3 The 2-Stage Optimization Approach to the Problem of Updating Group Key in the LKH Structure

In this section, we propose a new approach to the problem of updating the group key in the LKH structure. As discussed earlier, the key server can place a new node at any chosen position, but it cannot control the positions where deletions occur. Therefore, our main goal here is to minimize the insertion cost (the number of updated keys) when new members join the group while taking the balance of the tree into account. Given a full binary tree that can be represented as an ordered set T = {2^a + b : there exists a node at the b-th horizontal position of level a of the tree, 0 ≤ a ≤ h, 0 ≤ b ≤ 2^a − 1, a, b ∈ N}, where h is the height of the tree, we seek the set of leaf nodes L ⊆ T defined as L = {t ∈ T : 2t ∉ T, 2t + 1 ∉ T}. For convenience, we sort T and L in ascending order. The distance from the root to the leaf node L[i] is given by d_i = ⌊log_2 L[i]⌋. Let the set of joining members be M = {1, 2, . . . , m}. As the problem is defined on the set of leaf nodes instead of on the entire set of tree nodes, we first find all leaf nodes of the full binary tree. A node t ∈ T is a leaf node if and only if the nodes 2t and 2t + 1 are not in the tree. To check whether a node x is in the set T, we perform a binary search. The algorithm for finding the leaf nodes of a full binary tree is described as follows.


Algorithm: Find leaf nodes of a given tree T.
Initialization: k = 1, L = [].
for a = 0 to h do
  for b = 0 to 2^a − 1 do
    if (2^a + b ∈ T) and (2^(a+1) + 2b ∉ T) and (2^(a+1) + 2b + 1 ∉ T) then
      L[k] = 2^a + b; k = k + 1;
    end if
  end for
end for

Remark 1. i) After inserting m members into the tree, each leaf node will either be assigned a certain number of new members or be left unchanged (illustrated in Fig. 4). If a leaf node L[i] is assigned some new members, it becomes an interior key node; otherwise, L[i] remains a leaf node. ii) A valid insertion procedure will create a full binary subtree below each assigned leaf node. According to the Handshaking lemma [18] and the properties of a full binary tree, we have the following assertion: the cost of appending new nodes at the position of a leaf node of the full binary tree depends only on the number of new nodes inserted at that leaf node; it does not depend on the configuration of the full binary subtree.
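A direct transcription of the leaf-finding step is sketched below, assuming the tree is given as the set T of node indices 2^a + b described above. The function name and the use of a Python set instead of binary search over the sorted list T are illustrative choices, not the authors'.

```python
def find_leaf_nodes(T, h):
    """Return the leaf nodes of a full binary tree given as a set of indices 2**a + b.

    A node t is a leaf when t is in T but neither child 2*t nor 2*t + 1 is.
    A Python set gives O(1) membership tests; the paper uses binary search instead.
    """
    T = set(T)
    L = []
    for a in range(h + 1):
        for b in range(2 ** a):
            t = 2 ** a + b
            if t in T and 2 * t not in T and 2 * t + 1 not in T:
                L.append(t)
    return sorted(L)

# Example: the 7-node complete tree of height 2 (root 1, interior nodes 2-3, leaves 4-7).
print(find_leaf_nodes({1, 2, 3, 4, 5, 6, 7}, h=2))   # -> [4, 5, 6, 7]
```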

Fig. 4. The tree structure after inserting new nodes.

Now we propose the optimization model for minimizing the key updating cost. Let x_{ij} be binary variables defined by x_{ij} = 1 if a new member j ∈ M is inserted into the subtree below the leaf node L[i], and x_{ij} = 0 otherwise. Since every new member is appended below exactly one leaf node of the original tree, \sum_{i=1}^{l} x_{ij} = 1, ∀j = 1, ..., m, where l = |L|. For a given value i ∈ {1, 2, ..., l}, the number of new nodes inserted at the leaf node L[i] is m_i = \sum_{j=1}^{m} x_{ij}. Obviously, \sum_{i=1}^{l} m_i = m. It means that the subtree at the leaf node L[i] includes m_i + 1 leaf nodes (the m_i new nodes and the old leaf node L[i]), as illustrated in Fig. 4. For a leaf node L[i] that is chosen to receive some new members (equivalently, \sum_{j=1}^{m} x_{ij} > 0), the cost to build the corresponding subtree is 2\sum_{j=1}^{m} x_{ij} − 1. Besides, the cost to update the keys from the root to L[i] is d_i. More precisely, we have

\text{cost at } L[i] =
\begin{cases}
d_i + 2\sum_{j=1}^{m} x_{ij} - 1, & \text{if } \sum_{j=1}^{m} x_{ij} > 0,\\
0, & \text{if } \sum_{j=1}^{m} x_{ij} = 0,
\end{cases}
\;=\; d_i \, 1_{\{\sum_{j=1}^{m} x_{ij}>0\}} + 2\sum_{j=1}^{m} x_{ij} \, 1_{\{\sum_{j=1}^{m} x_{ij}>0\}} - 1_{\{\sum_{j=1}^{m} x_{ij}>0\}}.

The total cost over all leaf nodes is given by

F(x) = \sum_{i=1}^{l} \Big( d_i \, 1_{\{\sum_{j=1}^{m} x_{ij}>0\}} + 2\sum_{j=1}^{m} x_{ij} \, 1_{\{\sum_{j=1}^{m} x_{ij}>0\}} - 1_{\{\sum_{j=1}^{m} x_{ij}>0\}} \Big)
     = \sum_{i=1}^{l} d_i \, 1_{\{\sum_{j=1}^{m} x_{ij}>0\}} + 2\sum_{i=1}^{l}\sum_{j=1}^{m} x_{ij} - \sum_{i=1}^{l} 1_{\{\sum_{j=1}^{m} x_{ij}>0\}}
     = \sum_{i=1}^{l} (d_i - 1) \, 1_{\{\sum_{j=1}^{m} x_{ij}>0\}} + 2m
     = \sum_{i=1}^{l} (d_i - 1) \, \Big|\sum_{j=1}^{m} x_{ij}\Big|_0 + 2m,    (1)

where |·|_0 denotes the step function defined by |s|_0 = 1 if s ≠ 0, and |s|_0 = 0 otherwise. At the same time, we append new nodes in such a way that the balance of the tree is considered, based on the given tree structure. Formally, the balance condition can be expressed as

\max_{i=1,\dots,l} \Big\lfloor \log_2\Big(L[i] \times \big(\sum_{j=1}^{m} x_{ij} + 1\big)\Big)\Big\rfloor - \min_{i=1,\dots,l} \Big\lfloor \log_2\Big(L[i] \times \big(\sum_{j=1}^{m} x_{ij} + 1\big)\Big)\Big\rfloor \le 1.    (2)

It is evident that, for an arbitrary tree structure and number of new members, the balance condition (2) cannot always be satisfied. Therefore, we do not impose this condition as a constraint; rather, we find a tree as balanced as possible by putting the balance term in the objective function.


Finally, our optimization problem takes the following form:

\min \; \sum_{i=1}^{l} (d_i - 1) \Big|\sum_{j=1}^{m} x_{ij}\Big|_0 + \lambda \Big[ \max_{i=1,\dots,l} \Big\lfloor\log_2\Big(L[i]\big(\sum_{j=1}^{m} x_{ij}+1\big)\Big)\Big\rfloor - \min_{i=1,\dots,l} \Big\lfloor\log_2\Big(L[i]\big(\sum_{j=1}^{m} x_{ij}+1\big)\Big)\Big\rfloor \Big]    (3)

subject to \sum_{i=1}^{l} x_{ij} = 1, \; \forall j = 1,\dots,m; \quad x_{ij} \in \{0,1\}, \; \forall i = 1,\dots,l, \; \forall j = 1,\dots,m,

where λ is a positive parameter controlling the trade-off between the cost of inserting new members and the balance coefficient of the tree after insertion. The optimization problem (3) is a very difficult one, with a nonsmooth, nonconvex objective function and binary variables. We can, however, reformulate this problem as a DC program via an exact penalty technique [6,9]. Therefore, the reformulated program can be handled by DCA, which is an efficient algorithm for DC programming.
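To make the model concrete, the sketch below evaluates the objective of problem (3) for a given binary assignment x. It is only an evaluation routine under the formulas above, not the DCA solver the authors propose, and the function and variable names are illustrative.

```python
import math

def objective(x, L, lam):
    """Evaluate the objective of problem (3) for a 0/1 assignment x[i][j].

    L   : sorted list of leaf-node indices (2**a + b), as in Sect. 3
    x   : l x m matrix, x[i][j] = 1 iff new member j goes below leaf L[i]
    lam : trade-off parameter lambda
    """
    d = [int(math.log2(t)) for t in L]       # leaf depths, d_i = floor(log2 L[i])
    loads = [sum(row) for row in x]          # m_i = sum_j x_ij
    m = sum(loads)

    # insertion cost F(x) = sum_i (d_i - 1) * |m_i|_0 + 2m, Eq. (1)
    cost = sum(d[i] - 1 for i in range(len(L)) if loads[i] > 0) + 2 * m

    # balance term of Eq. (2): spread of leaf depths after insertion
    depths = [int(math.log2(L[i] * (loads[i] + 1))) for i in range(len(L))]
    balance = max(depths) - min(depths)

    return cost + lam * balance

# Toy check on the leaves 4..7 of a complete height-2 tree with two joining members.
L = [4, 5, 6, 7]
clustered = [[1, 1], [0, 0], [0, 0], [0, 0]]   # both members under leaf 4
spread    = [[1, 0], [0, 1], [0, 0], [0, 0]]   # one member under leaf 4, one under leaf 5
print(objective(clustered, L, lam=1.0), objective(spread, L, lam=1.0))
```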

4 Conclusion and Future Works

In this paper, we have proposed an optimization approach to the problem of updating group key in LKH structure with batch insertion. In the first stage, an algorithm is used to search all leaf nodes, while in the second stage, an optimization model is proposed. This is the first optimization model that considers simultaneously the updating key cost and the balance of the resulting key tree. By using recent results on exact penalty techniques in DC programming, the proposed optimization problem can be reformulated as a continuous DC optimization program. In future works, we are going to design an efficient DCA for solving this problem and conduct numerical experiments to justify the merits of our proposed model as well as the corresponding DCA.

References 1. ISO/IEC 11770-5:2011, Information technology - Security techniques - Key management - Part 5: Group key management (2011) 2. Elhoseny, M., Elminir, H., Riad, A., Yuan, X.: A secure data routing schema for WSN using elliptic curve cryptography and homomorphic encryption. J. King Saud Univ.-Comput. Inf. Sci. 28(3), 262–275 (2016) 3. Fukushima, K., Kiyomoto, S., Tanaka, T., Sakurai, K.: Optimization of group key management structure with a client join-leave mechanism. J. Inf. Process. 16, 130– 141 (2008) 4. Je, D.H., Kim, H.S., Choi, Y.H., Seo, S.W.: Dynamic configuration of batch rekeying interval for secure multicast service. In: 2014 International Conference on Computing, Networking and Communications (ICNC), pp. 26–30. IEEE (2014)


5. Kumar, V., Kumar, R., Pandey, S.K.: A computationally efficient centralized group key distribution protocol for secure multicast communications based upon RSA public key cryptosystem. J. King Saud Univ.-Comput. Inf. Sci. 32(9), 1081–1094 (2020) 6. Le Thi, H.A., Le, H.M., Pham Dinh, T.: Feature selection in machine learning: an exact penalty approach using a difference of convex function algorithm. Mach. Learn. 101(1), 163–186 (2015) 7. Le Thi, H.A., Pham Dinh, T.: The DC (difference of convex functions) programming and DCA revisited with DC models of real world nonconvex optimization problems. Ann. Oper. Res. 133(1–4), 23–46 (2005) 8. Le Thi, H.A., Pham Dinh, T.: DC programming and DCA: thirty years of developments. Math. Program. 169(1), 5–68 (2018). Special Issue dedicated to: DC Programming - Theory, Algorithms and Applications 9. Le Thi, H.A., Pham Dinh, T., Le, H.M., Vo, X.T.: Dc approximation approaches for sparse optimization. Eur. J. Oper. Res. 244(1), 26–46 (2015) 10. Li, X.S., Yang, Y.R., Gouda, M.G., Lam, S.S.: Batch rekeying for secure group communications. In: Proceedings of the 10th International Conference on World Wide Web, pp. 525–534 (2001) 11. Morales, L., Sudborough, I.H., Eltoweissy, M., Heydari, M.H.: Combinatorial optimization of multicast key management. In: 36th Annual Hawaii International Conference on System Sciences, 2003. Proceedings of the, pp. 9–pp. IEEE (2003) 12. Moyer, M.J.: Maintaining balanced key trees for secure multicast. IRTF Internet Draft (1999) 13. Ng, W.H.D., Howarth, M., Sun, Z., Cruickshank, H.: Dynamic balanced key tree management for secure multicast communications. IEEE Trans. Comput. 56(5), 590–605 (2007) 14. Pegueroles, J., Rico-Novella, F.: Balanced batch LKH: new proposal, implementation and performance evaluation. In: Proceedings of the Eighth IEEE Symposium on Computers and Communications, ISCC 2003, pp. 815–820. IEEE (2003) 15. Pham Dinh, T., Le Thi, H.A.: Convex analysis approach to DC programming: theory, algorithms and applications. Acta Math. Vietnam. 22(1), 289–355 (1997) 16. Pham Dinh, T., Le Thi, H.A.: A DC optimization algorithm for solving the trustregion subproblem. SIAM J. Optim. 8(2), 476–505 (1998) 17. Sherman, A.T., McGrew, D.A.: Key establishment in large dynamic groups using one-way function trees. IEEE Trans. Softw. Eng. 29(5), 444–458 (2003) 18. Vasudev, C.: Graph Theory with Applications. New Age International (2006) 19. Vijayakumar, P., Bose, S., Kannan, A.: Rotation based secure multicast key management for batch rekeying operations. Network. Sci. 1(1–4), 39–47 (2012) 20. Wallner, D., Harder, E., Agee, R., et al.: Key management for multicast: issues and architectures. Technical report, RFC 2627 (1999) 21. Wong, C.K., Gouda, M., Lam, S.S.: Secure group communications using key graphs. IEEE/ACM Trans. Network. 8(1), 16–30 (2000) 22. Zhang, X.B., Lam, S.S., Lee, D.Y., Yang, Y.R.: Protocol design for scalable and reliable group rekeying. IEEE/ACM Trans. Network. 11(6), 908–922 (2003)

4 × 4 Recursive MDS Matrices Effective for Implementation from Reed-Solomon Code over GF(q) Field Thi Luong Tran(B) , Ngoc Cuong Nguyen, and Duc Trinh Bui Academy of Cryptography Techniques, No. 141 Chien Thang road, Hanoi, Vietnam

Abstract. Maximum Distance Separable (MDS) matrices have applications not only in code theory but also in the design of block ciphers and hash functions. However, MDS matrices often cause large overhead in hardware/software implementations. Recursive MDS matrices allow this problem to be solved because they can be powers of a very sparse Serial matrix, and thus can be suitable even for limited, constrained environments. In this paper, 4 × 4 recursive MDS matrices effective for implementation from Reed-Solomon code over GF(q) field will be shown. These matrices are very effective for implementation and can be applied in lightweight cryptography. Keywords: MDS matrix · Recursive MDS matrices · RS codes

1 Introduction

MDS matrices play a very important role in block cipher design, especially for Substitution-Permutation Network (SPN) block ciphers. They are often used in the diffusion layer of block ciphers to provide high diffusion. MDS matrices have been used as a diffusion component in many block ciphers such as AES, SHARK, Square, Twofish, Manta, Hierocrypt, and Camellia, as well as in the MUGI stream cipher and the WHIRLPOOL cryptographic hash function. The application of MDS matrices in block ciphers was first introduced by Serge Vaudenay at FSE'95 [1] as a linear case of multi-permutations. Recursive MDS matrices (powers of a Serial matrix) [2] have been studied by many authors in the literature because of their important application in lightweight cryptography, for example in [3–8]. However, according to these studies, searching for such recursive MDS matrices requires either performing an exhaustive search over the family of Serial matrices (which limits the size of the MDS matrices found) [5] or using other, rather complicated methods such as building recursive MDS matrices from BCH codes [7, 8]. In [9], we showed an efficient and simple method for building recursive MDS matrices from Reed-Solomon (RS) codes, but we did not address the search with this method for recursive MDS matrices that are effective for implementation (symmetric MDS matrices). In [10], we showed a method to build recursive MDS matrices effective for implementation from RS codes by finding symmetric recursive MDS matrices of size 4, 8, and


16 over the specific fields GF(2^4) and GF(2^8). However, we did not show such matrices over a general field GF(p^r). In this paper, 4 × 4 recursive MDS matrices effective for implementation from Reed-Solomon codes over a general field GF(q), for q = p^r with p a prime number, are shown. Such matrices are very meaningful for implementation, especially hardware implementation, and have the potential to be applied in lightweight cryptography. The paper is organized as follows. In Sect. 2, preliminaries and related works are introduced. Section 3 shows 4 × 4 recursive MDS matrices effective for implementation from Reed-Solomon codes over a general field GF(q). Conclusions of the paper are given in Sect. 4.

2 Preliminaries and Related Works

2.1 RS Codes

A general recursive MDS matrix and a recursive MDS matrix that is a power of a Serial matrix can be defined as follows:

Definition 1. Let A = (a_{i,j})_{m×m}, a_{i,j} ∈ GF(p^r), be an MDS matrix. A is called a recursive MDS matrix if there exist a matrix S of size m over GF(p^r) and a non-negative integer k (k ≥ 2) such that A = S^k.

Definition 2. Let A = (a_{i,j})_{m×m}, a_{i,j} ∈ GF(p^r), be an MDS matrix. A is called a recursive MDS matrix as a power of a Serial matrix if there exists a Serial matrix S of size m, m ≥ 2, over GF(p^r) such that A = S^m, where the Serial matrix S associated with a polynomial c(x) = z_0 + z_1 x + z_2 x^2 + ... + z_{d−1} x^{d−1} + x^d has the following form:

S = Serial(z_0, ..., z_{m−1}) =
\begin{pmatrix}
0 & 1 & 0 & \cdots & 0\\
0 & 0 & 1 & \cdots & 0\\
\vdots & \vdots & \ddots & \ddots & \vdots\\
0 & 0 & \cdots & 0 & 1\\
z_0 & z_1 & z_2 & \cdots & z_{m-1}
\end{pmatrix}

In [10], we showed that recursive MDS matrices have a symmetric form when the coefficients of the polynomial c(x) are symmetric (i.e., pairwise symmetric coefficients) and c(x) has constant term equal to 1. For a symmetric recursive MDS matrix, the encryption and decryption processes in the diffusion layer can use nearly identical circuits, thereby saving hardware resources and implementation costs. In [10], we showed the following symmetric recursive MDS matrices from RS codes over the specific fields GF(2^4) and GF(2^8):

Proposition 1 ([10]). On constructing 4 × 4 recursive MDS matrices over GF(2^4) or GF(2^8) from RS codes, the generator polynomial g(x) of the form (1) is a symmetric polynomial having constant term equal to 1 if and only if b = 6 or b = 126, respectively.


Proposition 2 ([10]). On constructing 8 × 8, 16 × 16 or 32 × 32 recursive MDS matrices over GF(2^8) from RS codes, the generator polynomial g(x) of the form (1) is a symmetric polynomial having constant term equal to 1 if and only if b = 124, b = 120 or b = 112, respectively.

3 4 × 4 Recursive MDS Matrices Effective for Implementation from Reed-Solomon Code over a General Field GF(q)

In this section, symmetric 4 × 4 recursive MDS matrices from Reed-Solomon codes over the general field GF(q), for q = p^r where p is a prime number and r ≥ 2, will be shown. To construct a 4 × 4 MDS matrix from RS codes over GF(q), we build an RS code whose generator polynomial g(x) has degree 4 and the following form:

g(x) = (x + α^b)(x + α^{b+1})(x + α^{b+2})(x + α^{b+3}),    (1)

where 1 ≤ b ≤ q − 1, b ∈ N, and α is a primitive element of the field. Expanding g(x), we have:

g(x) = x^4 + α^b (1 + α + α^2 + α^3) x^3 + α^{2b+1}(α^4 + α^3 + α + 1) x^2 + α^b α^{2b+3}(1 + α + α^2 + α^3) x + α^{4b+6}.    (2)

g(x) is symmetric and has constant term equal to 1 if and only if:

α^b α^{2b+3}(1 + α + α^2 + α^3) = α^b (1 + α + α^2 + α^3)  and  α^{4b+6} = 1.    (3)
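Before stating the main result, a small numerical check of conditions (2)–(3) is sketched below for GF(2^4) with b = 6, the value given by Proposition 1. The reduction polynomial x^4 + x + 1 and the choice of alpha = x as primitive element are assumptions, since the paper does not fix a representation of the field.

```python
def gf_mul(a, b, mod=0b10011, deg=4):
    """Multiply two elements of GF(2^4) modulo the reduction polynomial x^4 + x + 1."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << deg):
            a ^= mod
    return r

def poly_mul(p, q):
    """Multiply two polynomials with coefficients in GF(2^4), lowest degree first."""
    out = [0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            out[i + j] ^= gf_mul(pi, qj)   # addition in GF(2^r) is XOR
    return out

def alpha_pow(e, alpha=0b0010):
    r = 1
    for _ in range(e):
        r = gf_mul(r, alpha)
    return r

def g_poly(b):
    """g(x) = (x + a^b)(x + a^(b+1))(x + a^(b+2))(x + a^(b+3)), Eq. (1)."""
    g = [1]
    for t in range(4):
        g = poly_mul(g, [alpha_pow(b + t), 1])   # factor (x + a^(b+t))
    return g   # coefficients z0..z4, with z4 = 1

if __name__ == "__main__":
    g = g_poly(6)                       # b = 6, as predicted for GF(2^4)
    print(g)                            # constant term g[0] should be 1, and g[1] == g[3]
    assert g[0] == 1 and g[1] == g[3]   # i.e., conditions (3) hold
```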

We have the following proposition:

Proposition 3. On constructing 4 × 4 recursive MDS matrices from RS codes over GF(q), for q = p^r where p is a prime number and r ≥ 2, the generator polynomial g(x) of the form (1) is a symmetric polynomial having constant term equal to 1 if and only if p = 2 and either (r = 2 and b = 3) or (r ≥ 3 and b = 2^{r−1} − 2).

Proof. By (3), if 1 + α + α^2 + α^3 ≠ 0, then

(3) ⇔ { α^{2b+3} = 1,  α^{4b+6} = 1 }.    (4)

Since α is a primitive element of the field,

(4) ⇔ (q − 1) | (2b + 3),    (5)

where 1 ≤ b ≤ q − 1, b ∈ N.


By (5), there exists k ∈ N such that

2b + 3 = (q − 1)k.    (6)

By (6), the right-hand side of (6) must be odd, which means that q − 1 and k are both odd. That is, q must be even; this means q = 2^r, i.e., p = 2. We then have

(6) ⇔ { b = ((q − 1)k − 3)/2,  5/(q − 1) ≤ k ≤ 2 + 3/(q − 1),  1 ≤ b ≤ q − 1, k ≥ 1, b, k ∈ N }.    (7)

By (7), k ≤ 5; combined with the fact that k is odd, k can only take the values 1, 3, 5, so the following cases can be considered.

Case 1: If k = 1 then b = q/2 − 2 = 2^{r−1} − 2. Since 1 ≤ b ⇔ r ≥ 3, for r ≥ 3 the value b = 2^{r−1} − 2 satisfies the condition

1 ≤ b ≤ q − 1, b ∈ N.    (8)

Case 2: If k = 3 then, by (7),

b = (3(q − 1) − 3)/2 = 3(q − 2)/2.    (9)

Since q = 2^r, b = 3(2^{r−1} − 1). So 1 ≤ b ⇔ r ≥ 2 and b ≤ q − 1 ⇔ r ≤ 2.

So in this case r = 2, and hence b = 3.    (10)

Case 3: If k = 5 then, by (7),

b = (5(q − 1) − 3)/2.    (11)

Then, the condition 1 ≤ b ⇔ q ≥ 2 and b ≤ q − 1 ⇔ q ≤ 2. Thus, in this case q = 2 ⇔ r = 1 < 2, which contradicts the assumption of the proposition, so this case is unsatisfactory.

From (8), (9), (10) we conclude that, if 1 + α + α^2 + α^3 ≠ 0, the generator polynomial g(x) in (2) of the RS code over GF(p^r) is a symmetric polynomial having constant term equal to 1 if and only if p = 2 and either (r = 2 and b = 3) or (r ≥ 3 and b = 2^{r−1} − 2).

Now we prove that 1 + α + α^2 + α^3 ≠ 0 over GF(q) for q = p^r, r ≥ 2. The degree-3 polynomial f(x) factors as f(x) = 1 + x + x^2 + x^3 = (x + 1)(x^2 + 1), and f(α) = 1 + α + α^2 + α^3.

• If p = 2 then, since 1 = −1, f(x) has exactly the triple root x = 1. Obviously, x = 1 cannot be a generator element of the field GF(q), where q = 2^r and r ≥ 2. So f(α) ≠ 0.


• If p > 2: if f(α) = 0 then α + 1 = 0 or α^2 + 1 = 0. If α + 1 = 0 then α = p − 1 and α^2 = (p − 1)^2 = 1; that is, α has order 2. If α^2 + 1 = 0 then α^2 = p − 1 and α^4 = (p − 1)^2 = 1; that is, α has order 4. On the other hand, q − 1 = p^r − 1 ≥ 8 > 4 > 2. That means, if f(α) = 0 then α cannot generate GF(p^r), which contradicts the assumption that α is a generator element of the field GF(p^r). So f(α) ≠ 0. □

Example: For q = 2^3, b = 2^{r−1} − 2 = 2; for q = 2^4, b = 6; for q = 2^5, b = 14; for q = 2^8, b = 126. In [10], we also found some symmetric generator polynomials having constant term equal to 1 for the two fields GF(2^4) and GF(2^8), as shown in Table 1.

Table 1. List of some symmetric polynomials having the constant term equal to 1 for 4 × 4 recursive MDS matrices from RS code


4 Conclusion

Recursive MDS matrices are of recent interest to researchers because they can be powers of very sparse Serial matrices, so they can be suitable for limited, constrained environments. In this paper, symmetric 4 × 4 recursive MDS matrices effective for implementation from Reed-Solomon codes over the general field GF(q), for q = p^r with p a prime number, are shown. In particular, we have shown that such 4 × 4 recursive MDS matrices from RS codes can only be built over GF(p^r) for p = 2, and not for any odd prime p. Some specific cases of symmetric recursive MDS matrices over GF(2^r) are also given. This result is much more general than the ones in [10]. These matrices are very effective for implementation and can be applied in lightweight cryptography.

References 1. Vaudenay, S.: On the need for multipermutations: cryptanalysis of MD4 and SAFER. In: Preneel, B. (ed.) FSE 1994. LNCS, vol. 1008, pp. 286–297. Springer, Heidelberg (1995). https://doi.org/10.1007/3-540-60590-8_22 2. Gupta, K.C., Ray, I.G.: On constructions of MDS matrices from companion matrices for lightweight cryptography, applied statistics unit, Indian Statistical Institute, Kolkata, India (2013) 3. Sajadieh, M., Dakhilalian, M., Mala, H., Sepehrdad, P.: Recursive diffusion layers for block ciphers and hash functions. In: Canteaut, A. (ed.) Fast Software Encryption, pp. 385–401. Springer, Heidelberg (2012). https://doi.org/10.1007/978-3-642-34047-5_22 4. Wu, S., Wang, M., Wu, W.: Recursive diffusion layers for (lightweight) block ciphers and hash functions. In: Knudsen, L.R., Wu, H. (eds.) Selected Areas in Cryptography, pp. 43–60. Springer, Heidelberg (2013). https://doi.org/10.1007/978-3-642-35999-6_23 5. Augot, D., Finiasz, M.: Exhaustive search for small dimension recursive MDS diffusion layers for block ciphers and hash functions. In: 2013 IEEE International Symposium on Information Theory Proceedings (ISIT), pp.1551–1555. IEEE (2013) 6. Kolay, S., Mukhopadhyay, D.: Lightweight diffusion layer from the kth root of the mds matrix. IACR Cryptology ePrint Archive, vol. 498 (2014) 7. Augot, D., Finiasz, M.: Direct construction of recursive MDS diffusion layers using shortened bch codes. In: Cid, C., Rechberger, C. (eds.) 21st International Workshop on Fast Software Encryption, FSE 2014. pp. 3–17. Springer, Heidelberg (2014). https://doi.org/10.1007/9783-662-46706-0_1 8. Gupta, K.C., Pandey, S.K., Venkateswarlu, A.: On the direct construction of recursive MDS matrices. Des. Codes Crypt. 82(1–2), 77–94 (2016). https://doi.org/10.1007/s10623-0160233-4 9. Luong, T.T.: Constructing effectively MDS and recursive MDS matrices by reed-Solomon codes. J. Sci. Technol. Inf. Secur. Viet Nam Government Inf. Secur. Comm. 3(2), 10–16 (2016) 10. Luong, T.T., Cuong, N.N., Tho, H.D.: Constructing recursive MDS matrices effective for implementation from reed-solomon codes and preserving the recursive property of MDS matrix of scalar multiplication. J. Inf. Math. Sci. 11(2), 155–177 (2019)

Implementation of XTS - GOST 28147-89 with Pipeline Structure on FPGA Binh-Nhung Tran, Ngoc-Quynh Nguyen(B) , Ba-Anh Dao, and Chung-Tien Nguyen Academy of Cryptography Techniques, 141 Chien Thang, Tan Trieu, Thanh Tri, Ha Noi, Viet Nam {nhungtb,quynhnn,daobaanh,chungtien.nguyen}@actvn.edu.vn

Abstract. On the disk drive protected with storage encryption, data must be capable of being randomly accessed or written at any location. Hence, the data encryption/decryption process must be done independently and arbitrarily at the Sector-level, while the size of the data remains unchanged. Furthermore, to ensure the drive’s read/write speed, the cryptographic implementation is required to meet strict timing requirements such as low latency, high computation speed, and real-time operations. Therefore, the structure of the cryptographic implementation plays a decisive role. In this paper, we proposed a pipelined implementation of XTS-GOST 28147-89 on FPGA to allow real-time data storage encryption/decryption on time-critical systems. Keywords: Cryptographic algorithms · GOST 28147-89 · XTS mode · Pipeline · FPGA · Real time applications

1 Introduction

Hard-disk drives are devices used to store the user's recorded data. Due to the rapid development of technology, hard disk drives are getting smaller and smaller, while their storage capacity is increasing and their data access is getting faster. Consequently, implementing cryptographic algorithms that match these improved read/write speeds is very challenging. Moreover, a computer runs many different processes and tasks; the files' contents are constantly changing and shuffling. In other words, data reading/writing happens all the time. Besides, data are not stored contiguously on the hard disk drive but scattered across physical locations. Computers also perform multitasked execution, so they need to access/write data at different locations simultaneously. Therefore, the mechanism of reading and writing data on hard disk drives is not sequential. This is the outstanding feature of hard disk drives compared to other forms of data storage; however, it is also the critical problem for data storage encryption. In order to solve the problem of data storage encryption, we propose an implementation of XTS-GOST 28147-89 with a pipelined architecture on the FPGA platform. The GOST 28147-89 is a lightweight block cipher. Compared to other block ciphers such as AES, the GOST 28147-89 algorithm has advantages in computation speed. Therefore,


it is suitable for applications that require a lower level of security. The pipelined architecture is utilized to guarantee that the proposed implementation has a high computation speed and meets real-time constraints. However, in order to deploy the pipelined architecture, the GOST 28147-89 implementation also needs to operate in parallel. Thus, we aimed to use the XTS block cipher mode. Furthermore, XTS is also a well-suited option for data storage encryption, since the data are encrypted/decrypted independently at the Sector-level when they are accessed arbitrarily, and the encryption/decryption processes do not alter the data's size. We implement the proposed pipelined architecture of XTS-GOST 28147-89 on an FPGA (Field Programmable Gate Array). An FPGA is an integrated circuit that contains an array of programmable logic blocks. Flexible reconfiguration to implement different algorithms and parallel computing capabilities give FPGAs outstanding advantages in solving complex problems requiring high computational power and real-time processing. These special features of FPGAs bring efficiency to implementing cryptographic algorithms, including the GOST 28147-89 block cipher algorithm. With the idea of replacing the cryptographic algorithm's iterations with a pipelined architecture to utilize the parallel computing power of the FPGA, we expect that our proposed solution will provide the ability to process large volumes of data in real time and reduce the latency of the cryptographic implementation in data storage encryption. The rest of the paper is organized as follows. In Sect. 2, we present the basic knowledge about the GOST 28147-89 block cipher algorithm and the compatibility of this algorithm with the FPGA platform. Section 3 describes in detail the proposed pipelined architecture of the GOST 28147-89 implementation for real-time operation. Next, Sect. 4 proposes to use the XTS block cipher mode, which is optimized for the data storage encryption problem. The implementation results on FPGA and their evaluation are illustrated in Sect. 5. Lastly, the paper is concluded in Sect. 6.

2 The GOST 28147-89 Algorithm

GOST 28147-89 is a block cipher algorithm that has a structure similar to DES with a reduced key schedule. The data block size is 64 bits while the secret key size is 256 bits. The structure of the GOST 28147-89 algorithm uses a network of 32 rounds, which is twice the number of rounds in the DES algorithm. Each round is executed as described in Fig. 1. The two secret elements used in this algorithm are the 256-bit encryption/decryption key and the substitution values of the S-boxes. The 64-bit plaintext data block is divided into two halves Li and Ri, and the round function F is performed on each half [7]. In each encryption round, the right half Ri is transformed by three cryptographic operations: adding a subkey Ki (i = 0, ..., 7) to Ri modulo 2^32, substituting the result with the 4-bit S-box values, and a bitwise left rotation by 11 bits. The result of the round function is added modulo 2 (Exclusive-Or, XOR) with the left half Li. After that, the two halves are swapped and used as input for the next encryption round, except for the last round.
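A minimal software sketch of one encryption round as just described is given below. It is for illustration only (the paper targets an FPGA implementation), and the identity S-boxes and the subkey value in the demo call are placeholders, not the cipher's actual parameters.

```python
MASK32 = 0xFFFFFFFF

def gost_round(left, right, subkey, sboxes):
    """One GOST 28147-89 encryption round on the 32-bit halves (left, right).

    Steps, as described above: add the subkey to the right half modulo 2**32,
    substitute the eight 4-bit groups through the S-boxes, rotate the 32-bit
    result left by 11 bits, XOR it into the left half, then swap the halves.
    """
    t = (right + subkey) & MASK32                     # addition modulo 2^32
    s = 0
    for i in range(8):                                # S8 takes the least significant 4 bits
        nibble = (t >> (4 * i)) & 0xF
        s |= sboxes[7 - i][nibble] << (4 * i)
    s = ((s << 11) | (s >> (32 - 11))) & MASK32       # left rotation by 11 bits
    return right, left ^ s                            # new (left, right) after the swap

# Placeholder demo: identity S-boxes and an arbitrary subkey.
identity_sboxes = [list(range(16))] * 8
print(gost_round(0x01234567, 0x89ABCDEF, subkey=0xDEADBEEF, sboxes=identity_sboxes))
```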


Fig. 1. Datapath of each encryption/decryption round

The secret key K has a length of 256 bits. It is divided into eight subkeys of 32 bits each (K0, ..., K7). Since there are 32 encryption rounds and only eight subkeys in total, each subkey is used in four rounds. The key schedule is presented in Table 1.

Table 1. The subkey schedule used in 32 encryption rounds [6]

Round   1   2   3   4   5   6   7   8
Subkey  1   2   3   4   5   6   7   8
Round   9   10  11  12  13  14  15  16
Subkey  1   2   3   4   5   6   7   8
Round   17  18  19  20  21  22  23  24
Subkey  1   2   3   4   5   6   7   8
Round   25  26  27  28  29  30  31  32
Subkey  8   7   6   5   4   3   2   1
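The schedule in Table 1 can be captured by a small index function, sketched below under the assumption that rounds and subkeys are numbered from 1 as in the table.

```python
def subkey_number(round_number):
    """Return the subkey index (1..8) used in a given round (1..32), per Table 1.

    Rounds 1-24 cycle through the eight subkeys in order; rounds 25-32 use them in reverse.
    """
    if round_number <= 24:
        return (round_number - 1) % 8 + 1
    return 8 - (round_number - 25) % 8

assert [subkey_number(r) for r in range(1, 9)] == [1, 2, 3, 4, 5, 6, 7, 8]
assert [subkey_number(r) for r in range(25, 33)] == [8, 7, 6, 5, 4, 3, 2, 1]
```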

After the addition modulo 2^32 with the subkey Ki, the result is divided into eight 4-bit blocks. These 4-bit blocks are substituted using the S-box values: the most significant bits are the input of S1 and the least significant bits are the input of S8. The S-box is the only non-linear element in this algorithm, and it determines the security of the cryptographic algorithm. Each of the eight S-boxes is a permutation of the integers [0, 1, ..., 14, 15]. An example S-box set is illustrated in Table 2. The outputs of these eight S-boxes are concatenated to form a 32-bit data block. This 32-bit data block is rotated 11 bits to the left (toward the most significant bit) in order to create the amplification properties of the cryptographic algorithm. Lastly, the rotated 32-bit data block is bitwise XOR-ed with the Li data block. The other encryption rounds are conducted in the same manner. The structure of GOST 28147-89's 32 encryption rounds is described in Fig. 2.


Table 2. S-Box [6]

S-box1  4   10  9   2   13  8   0   14  6   11  1   12  7   15  5   3
S-box2  14  11  4   12  6   13  15  10  2   3   8   1   0   7   5   9
S-box3  5   8   1   13  10  3   4   2   14  15  12  7   6   0   9   11
S-box4  7   13  10  1   0   8   9   15  14  4   6   12  11  2   5   3
S-box5  6   12  7   1   5   15  13  8   4   10  9   14  0   3   11  2
S-box6  4   11  10  0   7   2   1   13  3   6   8   5   9   12  15  14
S-box7  13  11  4   1   3   15  5   9   0   10  14  7   6   8   2   12
S-box8  1   15  13  0   5   7   10  4   9   2   3   14  6   11  8   12

Fig. 2. Datapath of 32 encryption/decryption rounds

The decryption has the same process as the encryption. The 64-bit ciphertext data block is also divided into two 32-bit halves. The first round of decryption can be calculated using Eqs. (1) and (2):

L1 = R32 = L31 ⊕ f(R31, K1),    (1)
R1 = L32 = R31.    (2)

In the decryption process, the subkeys are used in a different order. The order of subkeys used in decryption is listed as follows: K0 , . . . , K7 , K7 , . . . , K0 , K7 , . . . , K0 , K7 , . . . , K0 . The decryption is also finished in 32 rounds. After the last round, the original plaintext is derived.


The iterative structure of the GOST 28147-89 block cipher algorithm is suitable for implementation on FPGA [2]. The hardware architecture described in Fig. 3 can be used to implement an encryption round on FPGA. Since the output of each encryption round is the input of the next encryption round, we can modify the hardware architecture described in Fig. 3 so that it can perform all 32 rounds of encryption. The modified architecture is demonstrated in Fig. 4. The hardware architecture described in Fig. 4 needs to repeat its operation 32 times to finish the encryption of a 64-bit plaintext input. Each encryption round requires 4 clocks to execute. Thus, the overall GOST encryption requires 128 clocks. If the implementation operates with a 100 MHz clock signal, then the execution time to encrypt a 64-bit plaintext input is 128 * 10 ns = 1280 ns [3]. This computational speed is relatively fast for executing a cryptographic algorithm. However, it is not enough when applied to systems that require real-time performance or need to encrypt large amounts of data at once.

Fig. 3. Implementation of an encryption round on FPGA [3]

Fig. 4. Implementation of 32 encryption rounds on FPGA [3]

3 The Proposed Pipelined Implementation

We propose to improve the computational speed of the GOST 28147-89 implementation on FPGA by using a pipelined architecture. Instead of reusing the same hardware to perform the 32 encryption rounds in 32 iterations, more FPGA resources are used to execute the 32 encryption rounds in parallel. Users therefore do not need to wait for all 32 rounds of one 64-bit plaintext block to finish before encrypting the next 64-bit block: encryptions of different plaintext blocks are executed in parallel. This improvement helps the implementation meet real-time constraints. However, data synchronization in a pipelined implementation is challenging. Several previous works proposed a data-synchronization solution that uses buffer registers, as illustrated in Fig. 5. This method allows real-time encryption, although the latency for encrypting the first 64-bit plaintext block is still 128 clock cycles [3]. Moreover, seven additional registers are inserted between every two consecutive encryption rounds for data synchronization; with 32 encryption rounds, 7 × 31 = 217 extra registers are required, which consumes more FPGA resources.


Fig. 5. Pipelined architecture with registers for data synchronization [3]

In order to reduce the computational time and apply the GOST 28147-89 algorithm to data-storage encryption, we propose a pipelined architecture with clock-controlled data synchronization. This method uses 32 hardware modules, each executing one encryption round in half a clock cycle. The first module is triggered on the rising edge of the clock signal, the next module on the falling edge, and so on. Each hardware module executes all four operations: modulo-2^32 addition, S-box substitution, 11-bit left rotation, and the XOR operation. The clock-based data synchronization is described in Fig. 6.

Fig. 6. Pipelined architecture with data synchronization in 16 clocks

Each hardware module starts executing at one clock edge and finishes all related operations before the next edge. Therefore, all 32 rounds of encryption require only 16 clock cycles. When encrypting multiple plaintext blocks, the first ciphertext is produced after 16 clock cycles, and every following ciphertext is produced after an additional half clock cycle until all corresponding ciphertexts have been generated.
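Under the figures stated above (16 clock cycles of pipeline fill, one new ciphertext every half clock cycle, versus 128 clock cycles per block for the iterative design), the total encryption time for a stream of blocks can be estimated with the following back-of-the-envelope model of ours; it is not a measurement from the actual hardware.

```python
def iterative_clocks(n_blocks):
    """Iterative architecture: 32 rounds x 4 clocks = 128 clocks per block."""
    return 128.0 * n_blocks

def pipelined_clocks(n_blocks):
    """Dual-edge pipeline: 16 clocks for the first block, then one new
    ciphertext every half clock cycle."""
    return 16.0 + 0.5 * (n_blocks - 1)

# Example: a batch of 32 consecutive 64-bit blocks.
blocks = 32
print(iterative_clocks(blocks))   # 4096.0 clock cycles
print(pipelined_clocks(blocks))   # 31.5 clock cycles
```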


The proposed pipelined architecture offers lower latency than the architecture that uses registers for data synchronization. It also reduces hardware resource utilization, since no additional synchronization registers are needed. The proposed pipelined architecture is therefore suitable for systems that require real-time data encryption with low latency.

4 The Proposed XTS Block Cipher Mode for GOST 28147-89

In order to exploit the pipelined architecture, the block cipher mode must allow parallel computation. Therefore, several secure block cipher modes cannot be used, such as Cipher Block Chaining (CBC), Cipher Feedback (CFB), or Output Feedback (OFB) [8, 9]. The Electronic Codebook (ECB) mode can operate in parallel, but it lacks the amplification property: the same plaintext always corresponds to the same ciphertext when the same encryption key is used. For this reason, ECB is considered to have a weak level of security and is not recommended for practical applications [8, 9]. A more powerful block cipher mode, the XTS mode (XOR-Encrypt-XOR-based Tweaked-codebook mode with ciphertext Stealing), was proposed by NIST in 2010. It is designed for applications that require parallel encryption. In XTS mode, two secret keys are used: one encrypts the data blocks, while the other encrypts the "Tweak" values [10]. The block diagram of the XTS block cipher mode is shown in Fig. 7.

Fig. 7. Block diagram of the XTS block cipher mode

The Tweak value is encrypted with key Key2, adjusted by Galois-field multiplication with α^j (where j is the index of the block within the sector), and then XOR-ed with both the plaintext and the ciphertext of each block encryption. This process guarantees the amplification property of the cryptographic implementation.
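The XEX construction used by XTS can be sketched as follows. This is our own schematic illustration, continuing the earlier GOST sketches (encrypt_block and decrypt_block are defined above): the arithmetic is done at the 64-bit GOST block size and the Galois-field reduction constant 0x1B (x^64 + x^4 + x^3 + x + 1, a common choice for 64-bit blocks) is an assumption of ours; the paper itself describes a 128-bit tweak and does not give these parameters, so treat the constants below as placeholders rather than the authors' design.

```python
BLOCK_BITS = 64
BLOCK_MASK = (1 << BLOCK_BITS) - 1
GF_REDUCTION = 0x1B          # assumed reduction constant for GF(2^64)

def gf_mul_alpha(t):
    """Multiply the encrypted tweak by alpha (i.e. by x) in GF(2^BLOCK_BITS)."""
    t <<= 1
    if t >> BLOCK_BITS:                     # carry out of the field: reduce
        t = (t & BLOCK_MASK) ^ GF_REDUCTION
    return t

def xts_encrypt_block(p, j, tweak, K1, K2):
    """C_j = E_K1(P_j XOR T_j) XOR T_j, with T_j = E_K2(tweak) * alpha^j."""
    t = encrypt_block(tweak, K2)            # encrypt the tweak with Key2
    for _ in range(j):                      # adjust for block position j in the sector
        t = gf_mul_alpha(t)
    return encrypt_block(p ^ t, K1) ^ t

def xts_decrypt_block(c, j, tweak, K1, K2):
    t = encrypt_block(tweak, K2)            # the tweak is always *encrypted*
    for _ in range(j):
        t = gf_mul_alpha(t)
    return decrypt_block(c ^ t, K1) ^ t
```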


The XTS block cipher mode is suitable for data-storage encryption, in which data are encrypted and decrypted at the sector level. The encryption of each data block does not depend on the encryption of any other data block, and the XTS mode does not change the data block size. In hard-disk-drive encryption, the Tweak value is a 128-bit non-negative integer derived from the sector ID and the index j, which indicates the location of the input data block within that sector (j = 0, 1, …, 31). The plaintext's location on the disk therefore also affects the encryption result: encrypting the same plaintext at different locations on the hard disk drive produces different ciphertexts. This is the major advantage of using the XTS block cipher mode in data encryption.
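As a usage illustration of the sketch above, the snippet below encrypts the same plaintext block at the same position j in two different sectors; the key and sector values are purely hypothetical.

```python
# Hypothetical subkey values, purely for illustration.
K1 = [0x01234567 + i for i in range(8)]     # data-encryption subkeys
K2 = [0x89ABCDEF - i for i in range(8)]     # tweak-encryption subkeys
plaintext = 0x0011223344556677

c_sector5 = xts_encrypt_block(plaintext, j=3, tweak=5, K1=K1, K2=K2)
c_sector9 = xts_encrypt_block(plaintext, j=3, tweak=9, K1=K1, K2=K2)
assert c_sector5 != c_sector9               # same plaintext, different sectors
assert xts_decrypt_block(c_sector5, 3, 5, K1, K2) == plaintext
```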

Fig. 8. Timing diagram of internal data signals

The timing diagram of the internal data signals of our pipelined XTS-GOST 28147-89 implementation on FPGA is illustrated in Fig. 8. The GOST-Enc1 and GOST-Enc2 encryption modules are implemented with the pipelined architecture with data synchronization using edges of 16 clocks. The GOST-Enc2 module encrypts the Tweak with Key2 and is marked in green, while the GOST-Enc1 module encrypts the plaintext data with Key1 and is marked in purple. The latency until the first ciphertext is generated equals the latency of the GOST-Enc1 module, which is 16 clock cycles.

5 FPGA Implementation Results

We use ISE Project Navigator 14.7 and Vivado 2017.2 to synthesize and simulate the proposed pipelined XTS-GOST 28147-89 implementation with data synchronization using edges of 16 clocks. The target device is a Kintex-7 xc7k160t-2-fbg484. The synthesis and simulation results show that the proposed implementation performs the GOST 28147-89 algorithm in real time with low latency while maintaining the correctness of the encryption/decryption. Figure 9 presents the simulation results of the proposed implementation. The first 64-bit plaintext block is encrypted in 18 clock cycles: 1 cycle for retrieving the plaintext, 16 cycles for encryption, and 1 cycle for exporting the computed ciphertext. After the first 18 clock cycles, each following ciphertext is produced after an additional half clock cycle.

400

B.-N. Tran et al.

Fig. 9. Simulation results of the proposed pipelined XTS-GOST 28147-89 implementation with data synchronization using edges of 16 clocks

Fig. 10. Comparing encryption/decryption speed of pipelined versus non-pipelined implementations.

Figure 10 shows the encryption/decryption speed comparison between the pipelined and non-pipelined implementations, and it clearly demonstrates the effectiveness of the proposed pipelined XTS-GOST 28147-89 implementation with data synchronization using edges of 16 clocks. We also report the hardware utilization of the proposed implementation, based on the synthesis results, in Table 3. The implementation's timing results and its maximum operating frequency are presented in Table 4.

Table 3. Hardware utilization of the proposed implementation

Slice logic utilization      Used of available      Utilization
Number of slice registers    5216 out of 202800     2.57%
Number of slice LUTs         4593 out of 101400     4.53%
LUT used as logic            4591 out of 101400     4.53%
LUT used as memory           2 out of 35000