
*English*
*Pages XIV, 331 [345]*
*Year 2020*

Table of contents :

Front Matter ....Pages i-xiv

Optimization of the Values of the Right-Hand Sides of Boundary Conditions with Point and Integral Terms for the ODE System (Kamil Aida-zade, Vagif Abdullayev)....Pages 1-16

Saddle-Point Method in Terminal Control with Sections in Phase Constraints (Anatoly Antipin, Elena Khoroshilova)....Pages 17-26

Pricing in Dynamic Marketing: The Cases of Piece-Wise Constant Sale and Retail Discounts (Igor Bykadorov)....Pages 27-39

The Generalized Algorithms of Global Parametric Optimization and Stochastization for Dynamical Models of Interconnected Populations (Anastasia Demidova, Olga Druzhinina, Milojica Jacimovic, Olga Masina, Nevena Mijajlovic, Nicholas Olenev et al.)....Pages 40-54

On Solving a Generalized Constrained Longest Common Subsequence Problem (Marko Djukanovic, Christoph Berger, Günther R. Raidl, Christian Blum)....Pages 55-70

Optimization Problems in Tracking Control Design for an Underactuated Ship with Feedback Delay, State and Control Constraints (Olga Druzhinina, Natalya Sedova)....Pages 71-85

Theorems of Alternative and Optimization (Yuri Evtushenko, Alexander Golikov)....Pages 86-96

P-Regularity Theory: Applications to Optimization (Yuri Evtushenko, Vlasta Malkova, Alexey Tret’yakov)....Pages 97-109

A Given Diameter MST on a Random Graph (Edward K. Gimadi, Aleksandr S. Shevyakov, Alexandr A. Shtepa)....Pages 110-121

Method of Parametric Correction in Data Transformation and Approximation Problems (Victor Gorelik, Tatiana Zolotova)....Pages 122-133

On the Use of Decision Diagrams for Finding Repetition-Free Longest Common Subsequences (Matthias Horn, Marko Djukanovic, Christian Blum, Günther R. Raidl)....Pages 134-149

Distributing Battery Swapping Stations for Electric Scooters in an Urban Area (Thomas Jatschka, Fabio F. Oberweger, Tobias Rodemann, Gunther R. Raidl)....Pages 150-165

Optimal Combination of Tensor Optimization Methods (Dmitry Kamzolov, Alexander Gasnikov, Pavel Dvurechensky)....Pages 166-183

Nonlinear Least Squares Solver for Evaluating Canonical Tensor Decomposition (Igor Kaporin)....Pages 184-195

PCGLNS: A Heuristic Solver for the Precedence Constrained Generalized Traveling Salesman Problem (Michael Khachay, Andrei Kudriavtsev, Alexander Petunin)....Pages 196-208

Simultaneous Detection and Discrimination of Subsequences Which Are Nonlinearly Extended Elements of the Given Sequences Alphabet in a Quasiperiodic Sequence (Liudmila Mikhailova, Sergey Khamdullin)....Pages 209-223

Lurie Systems Stability Approach for Attraction Domain Estimation in the Wheeled Robot Control Problem (Lev Rapoport, Alexey Generalov)....Pages 224-238

Penalty-Based Method for Decentralized Optimization over Time-Varying Graphs (Alexander Rogozin, Alexander Gasnikov)....Pages 239-256

The Adaptation of Interior Point Method for Solving the Quadratic Programming Problems Arising in the Assembly of Deformable Structures (Maria Stefanova, Sergey Lupuleac)....Pages 257-271

On Optimizing Electricity Markets Performance (Alexander Vasin, Olesya Grigoryeva)....Pages 272-286

Adaptive Extraproximal Algorithm for the Equilibrium Problem in Hadamard Spaces (Yana Vedel, Vladimir Semenov)....Pages 287-300

The Dual Simplex-Type Method for Linear Second-Order Cone Programming Problem (Vitaly Zhadan)....Pages 301-316

About Difference Schemes for Solving Inverse Coefficient Problems (Vladimir Zubov, Alla Albu)....Pages 317-330

Back Matter ....Pages 331-331



LNCS 12422

Nicholas Olenev Yuri Evtushenko Michael Khachay Vlasta Malkova (Eds.)

Optimization and Applications 11th International Conference, OPTIMA 2020 Moscow, Russia, September 28 – October 2, 2020 Proceedings

Lecture Notes in Computer Science

Founding Editors
Gerhard Goos, Karlsruhe Institute of Technology, Karlsruhe, Germany
Juris Hartmanis, Cornell University, Ithaca, NY, USA

Editorial Board Members
Elisa Bertino, Purdue University, West Lafayette, IN, USA
Wen Gao, Peking University, Beijing, China
Bernhard Steffen, TU Dortmund University, Dortmund, Germany
Gerhard Woeginger, RWTH Aachen, Aachen, Germany
Moti Yung, Columbia University, New York, NY, USA


More information about this series at http://www.springer.com/series/7407

Nicholas Olenev • Yuri Evtushenko • Michael Khachay • Vlasta Malkova (Eds.)

Optimization and Applications 11th International Conference, OPTIMA 2020 Moscow, Russia, September 28 – October 2, 2020 Proceedings


Editors Nicholas Olenev Dorodnicyn Computing Centre FRC CSC RAS Moscow, Russia

Yuri Evtushenko Dorodnicyn Computing Centre FRC CSC RAS Moscow, Russia

Michael Khachay Krasovsky Institute of Mathematics and Mechanics Ekaterinburg, Russia

Vlasta Malkova Dorodnicyn Computing Centre FRC CSC RAS Moscow, Russia

ISSN 0302-9743 · ISSN 1611-3349 (electronic)
Lecture Notes in Computer Science
ISBN 978-3-030-62866-6 · ISBN 978-3-030-62867-3 (eBook)
https://doi.org/10.1007/978-3-030-62867-3
LNCS Sublibrary: SL1 – Theoretical Computer Science and General Issues

© Springer Nature Switzerland AG 2020

This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG. The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland.

Preface

This book contains the first volume of the refereed proceedings of the 11th International Conference on Optimization and Applications (OPTIMA 2020).¹ The goal of the conference is to bring together researchers and practitioners working in the field of optimization theory, methods, software, and related areas. Organized annually since 2009, the conference has attracted a significant number of researchers, academics, and specialists in many fields of optimization, operations research, optimal control, game theory, and their numerous applications in practical problems of operations research, data analysis, and software development. The broad scope of OPTIMA has made it an event where researchers involved in different domains of optimization theory and numerical methods, investigating continuous and discrete extremal problems, designing heuristics and algorithms with theoretical bounds, developing optimization software, and applying optimization techniques to highly relevant practical problems can meet together and discuss their approaches and results. We strongly believe that this facilitates cross-fertilization of ideas between scientists elaborating on modern optimization theory and methods and employing them in valuable practical problems. This year, the conference was held online, due to the COVID-19 pandemic, during September 28 – October 2, 2020. By tradition, the main organizers of the conference were the Montenegrin Academy of Sciences and Arts, Montenegro, the Dorodnicyn Computing Centre FRC CSC RAS, Russia, the Moscow Institute of Physics and Technology, Russia, and the University of Évora, Portugal. This year, the key topics of OPTIMA were grouped into six tracks:

(i) Mathematical programming
(ii) Combinatorial and discrete optimization
(iii) Optimal control
(iv) Optimization in economics, finance, and social sciences
(v) Global optimization
(vi) Applications

In the framework of the conference, a welcome session was held dedicated to the anniversary of Academician of the Montenegrin Academy of Sciences and Arts Milojica Jacimovic, a world-famous scientist in the field of computational mathematics and one of the founders of the conference. The Program Committee (PC) of the conference brought together over a hundred well-known experts in continuous and discrete optimization, optimal control and game theory, data analysis, mathematical economy, and related areas from leading institutions of 25 countries, including Argentina, Australia, Austria, Belarus, Belgium, Croatia, Finland, France, Germany, Greece, India, Israel, Italy, Kazakhstan, Montenegro, The Netherlands, Poland, Portugal, Russia, Serbia, Sweden, Taiwan, Ukraine, the UK, and the USA.

In response to the call for papers, OPTIMA 2020 received 108 submissions. Out of the 60 full papers considered for reviewing (46 abstracts and short communications were excluded for formal reasons), only 23 papers were selected by the PC for publication; thus, the acceptance rate for this volume is about 38%. Each submission was reviewed by at least three PC members or invited reviewers, experts in their fields, in order to supply detailed and helpful comments. In addition, the PC recommended including 18 papers in the supplementary volume after their presentation and discussion during the conference and subsequent revision with respect to the reviewers' comments.

The conference featured five invited lecturers as well as several plenary and keynote talks. The invited lectures included:

– Prof. Andrei Dmitruk (CEMI RAS and MSU, Russia), "Lagrange Multipliers Rule for a General Extremum Problem with an Infinite Number of Constraints"
– Prof. Nikolai Osmolovskii (Systems Research Institute, Poland), "Quadratic Optimality Conditions for Broken Extremals and Discontinuous Controls"
– Prof. Boris T. Polyak and Ilyas Fatkhullin (Institute of Control Sciences, Russia), "Static Feedback in Linear Control Systems as an Optimization Problem"
– Prof. Alexey Tret'yakov (Siedlce University of Natural Sciences and Humanities, Poland), "P-Regularity Theory: Applications to Optimization"

We would like to thank all the authors for submitting their papers and the members of the PC and the reviewers for their efforts in providing exhaustive reviews. We would also like to express special gratitude to all the invited lecturers and plenary speakers.

¹ http://www.agora.guru.ru/optima-2020

October 2020

Nicholas Olenev Yuri Evtushenko Michael Khachay Vlasta Malkova

Organization

Program Committee Chairs

Milojica Jaćimović, Montenegrin Academy of Sciences and Arts, Montenegro
Yuri G. Evtushenko, Dorodnicyn Computing Centre, FRC CSC RAS, Russia
Igor G. Pospelov, Dorodnicyn Computing Centre, FRC CSC RAS, Russia
Maksat Kalimoldayev, Institute of Information and Computational Technologies, Kazakhstan

Program Committee

Majid Abbasov, St. Petersburg State University, Russia
Samir Adly, University of Limoges, France
Kamil Aida-Zade, Institute of Control Systems of ANAS, Azerbaijan
Alla Albu, Dorodnicyn Computing Centre, FRC CSC RAS, Russia
Alexander P. Afanasiev, Institute for Information Transmission Problems, RAS, Russia
Yedilkhan Amirgaliyev, Süleyman Demirel University, Kazakhstan
Anatoly S. Antipin, Dorodnicyn Computing Centre, FRC CSC RAS, Russia
Sergey Astrakov, Institute of Computational Technologies, Siberian Branch, RAS, Russia
Adil Bagirov, Federation University, Australia
Evripidis Bampis, LIP6 UPMC, France
Oleg Burdakov, Linköping University, Sweden
Olga Battaïa, ISAE-SUPAERO, France
Armen Beklaryan, National Research University Higher School of Economics, Russia
Vladimir Beresnev, Sobolev Institute of Mathematics, Russia
René Van Bevern, Novosibirsk State University, Russia
Sergiy Butenko, Texas A&M University, USA
Vladimir Bushenkov, University of Évora, Portugal
Igor A. Bykadorov, Sobolev Institute of Mathematics, Russia
Alexey Chernov, Moscow Institute of Physics and Technology, Russia
Duc-Cuong Dang, INESC TEC, Portugal
Tatjana Davidovic, Mathematical Institute of Serbian Academy of Sciences and Arts, Serbia
Stephan Dempe, TU Bergakademie Freiberg, Germany


Alexandre Dolgui, IMT Atlantique, LS2N, CNRS, France
Olga Druzhinina, FRC CSC RAS, Russia
Anton Eremeev, Omsk Division of Sobolev Institute of Mathematics, Siberian Branch, RAS, Russia
Adil Erzin, Novosibirsk State University, Russia
Francisco Facchinei, Sapienza University of Rome, Italy
Vladimir Garanzha, Dorodnicyn Computing Centre, FRC CSC RAS, Russia
Alexander V. Gasnikov, Moscow Institute of Physics and Technology, Russia
Manlio Gaudioso, Università della Calabria, Italy
Alexander I. Golikov, Dorodnicyn Computing Centre, FRC CSC RAS, Russia
Alexander Yu. Gornov, Institute for System Dynamics and Control Theory, Siberian Branch, RAS, Russia
Edward Kh. Gimadi, Sobolev Institute of Mathematics, Siberian Branch, RAS, Russia
Andrei Gorchakov, Dorodnicyn Computing Centre, FRC CSC RAS, Russia
Alexander Grigoriev, Maastricht University, The Netherlands
Mikhail Gusev, N.N. Krasovskii Institute of Mathematics and Mechanics, Russia
Vladimir Jaćimović, University of Montenegro, Montenegro
Vyacheslav Kalashnikov, ITESM, Campus Monterrey, Mexico
Valeriy Kalyagin, Higher School of Economics, Russia
Igor E. Kaporin, Dorodnicyn Computing Centre, FRC CSC RAS, Russia
Alexander Kazakov, Matrosov Institute for System Dynamics and Control Theory, Siberian Branch, RAS, Russia
Mikhail Yu. Khachay, Krasovsky Institute of Mathematics and Mechanics, Russia
Oleg V. Khamisov, L. A. Melentiev Energy Systems Institute, Russia
Andrey Kibzun, Moscow Aviation Institute, Russia
Donghyun Kim, Kennesaw State University, USA
Roman Kolpakov, Moscow State University, Russia
Alexander Kononov, Sobolev Institute of Mathematics, Russia
Igor Konnov, Kazan Federal University, Russia
Vladimir Kotov, Belarus State University, Belarus
Vera Kovacevic-Vujcic, University of Belgrade, Serbia
Yury A. Kochetov, Sobolev Institute of Mathematics, Russia
Pavlo A. Krokhmal, University of Arizona, USA
Ilya Kurochkin, Institute for Information Transmission Problems, RAS, Russia
Dmitri E. Kvasov, University of Calabria, Italy
Alexander A. Lazarev, V.A. Trapeznikov Institute of Control Sciences, Russia
Vadim Levit, Ariel University, Israel
Bertrand M. T. Lin, National Chiao Tung University, Taiwan


Alexander V. Lotov, Dorodnicyn Computing Centre, FRC CSC RAS, Russia
Nikolay Lukoyanov, N.N. Krasovskii Institute of Mathematics and Mechanics, Russia
Vittorio Maniezzo, University of Bologna, Italy
Olga Masina, Yelets State University, Russia
Vladimir Mazalov, Institute of Applied Mathematical Research, Karelian Research Center, Russia
Nevena Mijajlović, University of Montenegro, Montenegro
Nenad Mladenovic, Mathematical Institute, Serbian Academy of Sciences and Arts, Serbia
Angelia Nedich, University of Illinois at Urbana-Champaign, USA
Yuri Nesterov, CORE, Université Catholique de Louvain, Belgium
Yuri Nikulin, University of Turku, Finland
Evgeni Nurminski, Far Eastern Federal University, Russia
Nicholas N. Olenev, CEDIMES-Russie, Dorodnicyn Computing Centre, FRC CSC RAS, Russia
Panos Pardalos, University of Florida, USA
Alexander V. Pesterev, V.A. Trapeznikov Institute of Control Sciences, Russia
Alexander Petunin, Ural Federal University, Russia
Stefan Pickl, Bundeswehr University Munich, Germany
Boris T. Polyak, V.A. Trapeznikov Institute of Control Sciences, Russia
Yury S. Popkov, Institute for Systems Analysis, FRC CSC RAS, Russia
Leonid Popov, IMM UB RAS, Russia
Mikhail A. Posypkin, Dorodnicyn Computing Centre, FRC CSC RAS, Russia
Oleg Prokopyev, University of Pittsburgh, USA
Artem Pyatkin, Novosibirsk State University, Sobolev Institute of Mathematics, Russia
Ioan Bot Radu, University of Vienna, Austria
Soumyendu Raha, Indian Institute of Science, India
Andrei Raigorodskii, Moscow State University, Russia
Larisa Rybak, Belgorod State Technological University, Russia
Leonidas Sakalauskas, Institute of Mathematics and Informatics, Lithuania
Eugene Semenkin, Siberian State Aerospace University, Russia
Yaroslav D. Sergeyev, University of Calabria, Italy
Natalia Shakhlevich, University of Leeds, UK
Alexander A. Shananin, Moscow Institute of Physics and Technology, Russia
Angelo Sifaleras, University of Macedonia, Greece
Mathias Staudigl, Maastricht University, The Netherlands
Petro Stetsyuk, V.M. Glushkov Institute of Cybernetics, Ukraine
Alexander Strekalovskiy, Matrosov Institute for System Dynamics and Control Theory, Siberian Branch, RAS, Russia
Vitaly Strusevich, University of Greenwich, UK
Michel Thera, University of Limoges, France
Tatiana Tchemisova, University of Aveiro, Portugal


Anna Tatarczak, Maria Curie-Skłodowska University, Poland
Alexey A. Tretyakov, Dorodnicyn Computing Centre, FRC CSC RAS, Russia
Stan Uryasev, University of Florida, USA
Vladimir Voloshinov, Institute for Information Transmission Problems RAS, Russia
Frank Werner, Otto von Guericke University Magdeburg, Germany
Oleg Zaikin, Matrosov Institute for System Dynamics and Control Theory, Siberian Branch, RAS, Russia
Vitaly G. Zhadan, Dorodnicyn Computing Centre, FRC CSC RAS, Russia
Anatoly A. Zhigljavsky, Cardiff University, UK
Julius Žilinskas, Vilnius University, Lithuania
Yakov Zinder, University of Technology, Australia
Tatiana V. Zolotova, Financial University under the Government of the Russian Federation, Russia
Vladimir I. Zubov, Dorodnicyn Computing Centre, FRC CSC RAS, Russia
Anna V. Zykina, Omsk State Technical University, Russia

Organizing Committee Chairs

Milojica Jaćimović, Montenegrin Academy of Sciences and Arts, Montenegro
Yuri G. Evtushenko, Dorodnicyn Computing Centre, FRC CSC RAS, Russia
Nicholas N. Olenev, Dorodnicyn Computing Centre, FRC CSC RAS, Russia

Organizing Committee

Gulshat Amirkhanova, Institute of Information and Computational Technologies, Kazakhstan
Natalia Burova, Dorodnicyn Computing Centre, FRC CSC RAS, Russia
Alexander Golikov, Dorodnicyn Computing Centre, FRC CSC RAS, Russia
Alexander Gornov, Institute of System Dynamics and Control Theory, Siberian Branch, RAS, Russia
Vesna Dragović, Montenegrin Academy of Sciences and Arts, Montenegro
Vladimir Jaćimović, University of Montenegro, Montenegro
Mikhail Khachay, Krasovsky Institute of Mathematics and Mechanics, Russia
Yury Kochetov, Sobolev Institute of Mathematics, Russia


Elena A. Koroleva, Dorodnicyn Computing Centre, FRC CSC RAS, Russia
Vlasta Malkova, Dorodnicyn Computing Centre, FRC CSC RAS, Russia
Nevena Mijajlović, University of Montenegro, Montenegro
Oleg Obradovic, University of Montenegro, Montenegro
Mikhail A. Posypkin, Dorodnicyn Computing Centre, FRC CSC RAS, Russia
Kirill B. Teymurazov, Dorodnicyn Computing Centre, FRC CSC RAS, Russia
Yulia Trusova, Dorodnicyn Computing Centre, FRC CSC RAS, Russia
Svetlana Vladimirova, Dorodnicyn Computing Centre, FRC CSC RAS, Russia
Victor Zakharov, FRC CSC RAS, Russia
Elena S. Zasukhina, Dorodnicyn Computing Centre, FRC CSC RAS, Russia
Ivetta Zonn, Dorodnicyn Computing Centre, FRC CSC RAS, Russia
Vladimir Zubov, Dorodnicyn Computing Centre, FRC CSC RAS, Russia

Contents

Optimization of the Values of the Right-Hand Sides of Boundary Conditions with Point and Integral Terms for the ODE System . . . . 1
Kamil Aida-zade and Vagif Abdullayev

Saddle-Point Method in Terminal Control with Sections in Phase Constraints . . . . 17
Anatoly Antipin and Elena Khoroshilova

Pricing in Dynamic Marketing: The Cases of Piece-Wise Constant Sale and Retail Discounts . . . . 27
Igor Bykadorov

The Generalized Algorithms of Global Parametric Optimization and Stochastization for Dynamical Models of Interconnected Populations . . . . 40
Anastasia Demidova, Olga Druzhinina, Milojica Jacimovic, Olga Masina, Nevena Mijajlovic, Nicholas Olenev, and Alexey Petrov

On Solving a Generalized Constrained Longest Common Subsequence Problem . . . . 55
Marko Djukanovic, Christoph Berger, Günther R. Raidl, and Christian Blum

Optimization Problems in Tracking Control Design for an Underactuated Ship with Feedback Delay, State and Control Constraints . . . . 71
Olga Druzhinina and Natalya Sedova

Theorems of Alternative and Optimization . . . . 86
Yuri Evtushenko and Alexander Golikov

P-Regularity Theory: Applications to Optimization . . . . 97
Yuri Evtushenko, Vlasta Malkova, and Alexey Tret'yakov

A Given Diameter MST on a Random Graph . . . . 110
Edward K. Gimadi, Aleksandr S. Shevyakov, and Aleksandr A. Shtepa

Method of Parametric Correction in Data Transformation and Approximation Problems . . . . 122
Victor Gorelik and Tatiana Zolotova

On the Use of Decision Diagrams for Finding Repetition-Free Longest Common Subsequences . . . . 134
Matthias Horn, Marko Djukanovic, Christian Blum, and Günther R. Raidl

Distributing Battery Swapping Stations for Electric Scooters in an Urban Area . . . . 150
Thomas Jatschka, Fabio F. Oberweger, Tobias Rodemann, and Gunther R. Raidl

Optimal Combination of Tensor Optimization Methods . . . . 166
Dmitry Kamzolov, Alexander Gasnikov, and Pavel Dvurechensky

Nonlinear Least Squares Solver for Evaluating Canonical Tensor Decomposition . . . . 184
Igor Kaporin

PCGLNS: A Heuristic Solver for the Precedence Constrained Generalized Traveling Salesman Problem . . . . 196
Michael Khachay, Andrei Kudriavtsev, and Alexander Petunin

Simultaneous Detection and Discrimination of Subsequences Which Are Nonlinearly Extended Elements of the Given Sequences Alphabet in a Quasiperiodic Sequence . . . . 209
Liudmila Mikhailova and Sergey Khamdullin

Lurie Systems Stability Approach for Attraction Domain Estimation in the Wheeled Robot Control Problem . . . . 224
Lev Rapoport and Alexey Generalov

Penalty-Based Method for Decentralized Optimization over Time-Varying Graphs . . . . 239
Alexander Rogozin and Alexander Gasnikov

The Adaptation of Interior Point Method for Solving the Quadratic Programming Problems Arising in the Assembly of Deformable Structures . . . . 257
Maria Stefanova and Sergey Lupuleac

On Optimizing Electricity Markets Performance . . . . 272
Alexander Vasin and Olesya Grigoryeva

Adaptive Extraproximal Algorithm for the Equilibrium Problem in Hadamard Spaces . . . . 287
Yana Vedel and Vladimir Semenov

The Dual Simplex-Type Method for Linear Second-Order Cone Programming Problem . . . . 301
Vitaly Zhadan

About Difference Schemes for Solving Inverse Coefficient Problems . . . . 317
Vladimir Zubov and Alla Albu

Author Index . . . . 331

Optimization of the Values of the Right-Hand Sides of Boundary Conditions with Point and Integral Terms for the ODE System

Kamil Aida-zade¹,²(B) and Vagif Abdullayev¹,³

¹ Institute of Control Systems of Azerbaijan National Academy of Sciences, Str. B. Vahabzade 9, Baku AZ1141, Azerbaijan
{kamil aydazade,vaqif ab}@rambler.ru
² Institute of Mathematics and Mechanics of Azerbaijan National Academy of Sciences, Str. B. Vahabzade 9, Baku AZ1141, Azerbaijan
³ Azerbaijan State Oil and Industry University, Azadlig ave. 20, Baku AZ1010, Azerbaijan

Abstract. In this paper, we investigate an optimal control problem for a linear system of ordinary differential equations with linear boundary conditions. The boundary conditions include, as terms, the values of the phase variable both at separate intermediate points and its integral values over individual intervals of the independent variable. The optimized quantities are the values of the right-hand sides of the unseparated boundary conditions. We obtain conditions for the existence and uniqueness of the solution to the boundary value problem, the convexity of the objective functional, and necessary optimality conditions for the optimized parameters; the optimality conditions contain constructive formulas for the components of the gradient of the functional. The numerical solution of an illustrative problem is presented.

Keywords: Functional gradient · Convexity of functional · Nonlocal conditions · Optimality conditions · Multipoint conditions

1 Introduction

The paper deals with the problem of optimal control of a system of ordinary linear differential equations with nonlocal conditions, containing summands with point and integral values of the phase function. The study of nonlocal boundary value problems was started in [1–3] and continued in the works of many other researchers [4–13]. The practical importance of investigating these problems and the related control problems is due to the fact that the measured information about the state of the object at any point of the object or at any time instant actually covers the neighborhood of the point or the instant of measurement.

© Springer Nature Switzerland AG 2020
N. Olenev et al. (Eds.): OPTIMA 2020, LNCS 12422, pp. 1–16, 2020. https://doi.org/10.1007/978-3-030-62867-3_1


It should be noted that various statements of control problems with multipoint and intermediate conditions are considered in many studies [14–16]. Optimality conditions in various forms were obtained for them [17–19], and numerical schemes for solving these problems were proposed; cases of nonlocal (loaded) differential equations with multipoint boundary conditions were investigated in [12,20]. This research differs from previous studies primarily in that we optimize the values of the right-hand sides of nonlocal boundary conditions. In this paper, we obtain conditions for the existence and uniqueness of the solution to a nonlocal boundary value problem, the convexity of the objective functional of the problem, and the necessary optimality conditions for the optimal control and optimization problem under investigation. Depending on the rank of the matrix of coefficients at the point values of the phase vector function in the nonlocal conditions, two techniques are proposed to obtain formulas for the components of the gradient of the functional. One technique uses the idea of the conditional gradient method [21,22], and the other uses the Lagrange method [22]. The use of the Lagrange method leads to an increase in both the number of conditions and the dimension of the vector of parameters to be determined; therefore, depending on the rank of the matrix of conditions, preference is given to the first approach. The paper gives the results of numerical experiments, using one illustrative problem as an example. The formulas obtained here for the components of the gradient of the objective functional are used for its numerical solution by first-order optimization methods.

2 Formulation of the Problem

We consider an object whose dynamics is described by a linear system of differential equations:

$$\dot{x}(t) = A_1(t)\,x(t) + A_2(t), \qquad t \in [t^1, t^f], \tag{1}$$

with nonlocal (unseparated multipoint and integral) conditions and with optimized values of the right-hand sides:

$$\sum_{i=1}^{l_1} \alpha_i\, x(\tilde{t}^i) + \sum_{j=1}^{l_2} \int_{\hat{t}^{2j-1}}^{\hat{t}^{2j}} \beta_j(t)\,x(t)\,dt = \vartheta. \tag{2}$$

Here $x(t)\in R^n$ is the phase variable. Given are: the matrix functions $A_1(t)\not\equiv \mathrm{const}$ and $\beta_j(t)$, $j=1,2,\dots,l_2$, of dimension $(n\times n)$, continuous on $t\in[t^1,t^f]$; the $n$-dimensional vector function $A_2(t)$; the constant matrices $\alpha_i$, $i=1,2,\dots,l_1$, of dimension $(n\times n)$; and the time instants $\tilde t^i\in[t^1,t^f]$, $i=1,2,\dots,l_1$, with $\tilde t^1<\dots<\tilde t^{l_1}$ and $\hat t^1<\dots<\hat t^{2l_2}$, where $\tilde t^i\notin[\hat t^{2j-1},\hat t^{2j}]$ for $i=1,2,\dots,l_1$, $j=1,2,\dots,l_2$, and $t^1=\tilde t^1<\hat t^1$, $\hat t^{2l_2}<\tilde t^{l_1}=t^f$; $l_1$, $l_2$ are integers. The optimized parameter of the problem is the vector $\vartheta\in V\subset R^n$, which is normally determined by the impacts of external sources. The set $V$ of admissible values of the optimized parameter vector $\vartheta$ is compact and convex.
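As a purely illustrative aside (not part of the paper's method), the left-hand side of a condition of form (2) is straightforward to evaluate for a sampled trajectory. In the sketch below, all data and the helper name `nonlocal_lhs` are hypothetical, and composite trapezoidal quadrature stands in for the integral terms.

```python
import numpy as np

# Hypothetical illustration of condition (2): evaluate
#   sum_i alpha_i x(t_i) + sum_j int_{a_j}^{b_j} beta_j(t) x(t) dt
# for a trajectory sampled on a uniform grid.
def nonlocal_lhs(ts, xs, point_terms, integral_terms):
    """ts: (N,) grid; xs: (N, n) trajectory samples.
    point_terms: list of (alpha, t_i) with alpha an (n, n) matrix;
    integral_terms: list of (beta, a, b) with beta a callable t -> (n, n) matrix."""
    val = np.zeros(xs.shape[1])
    for alpha, tp in point_terms:
        k = int(np.argmin(np.abs(ts - tp)))   # nearest grid node to the instant t_i
        val += alpha @ xs[k]
    for beta, a, b in integral_terms:
        mask = (ts >= a) & (ts <= b)
        f = np.array([beta(t) @ x for t, x in zip(ts[mask], xs[mask])])
        h = np.diff(ts[mask])
        val += ((f[:-1] + f[1:]) * 0.5 * h[:, None]).sum(axis=0)  # trapezoid rule
    return val

# Example: x(t) = (t, 1) on [0, 1], one point term at t = 0.5, one integral term.
ts = np.linspace(0.0, 1.0, 1001)
xs = np.column_stack([ts, np.ones_like(ts)])
lhs = nonlocal_lhs(ts, xs, [(np.eye(2), 0.5)], [(lambda t: np.eye(2), 0.0, 1.0)])
print(lhs)   # x(0.5) + int_0^1 x dt = (0.5, 1) + (0.5, 1) ≈ [1.0, 2.0]
```

For a fixed trajectory, this value is what the parameter vector $\vartheta$ must match; in the optimization below, $\vartheta$ is instead a free decision variable.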

Optimization of the Values of the Right-Hand Sides of Boundary Conditions


The objective functional for finding the vector of parameters $\vartheta$ is

$$J(\vartheta) = \int_{t^1}^{t^f} f^0(x(t),\vartheta,t)\,dt + \Phi(\bar x,\vartheta) \to \min_{\vartheta\in V}. \qquad (3)$$

Here $f^0(x,\vartheta,t)$ and $\Phi(\bar x,\vartheta)$ are given functions continuously differentiable in $x$, $\vartheta$, $\bar x$, and the following notation is used:

$$\tilde t = (\tilde t^1,\tilde t^2,\dots,\tilde t^{l_1}),\qquad \hat t = (\hat t^1,\hat t^2,\dots,\hat t^{2l_2}),\qquad \bar t = (\tilde t^1,\dots,\tilde t^{l_1},\hat t^1,\dots,\hat t^{2l_2}),$$
$$\tilde x = (\tilde x^1,\tilde x^2,\dots,\tilde x^{l_1})^T = (x(\tilde t^1),x(\tilde t^2),\dots,x(\tilde t^{l_1}))^T \in R^{l_1 n},$$
$$\hat x = (\hat x^1,\hat x^2,\dots,\hat x^{2l_2})^T = (x(\hat t^1),x(\hat t^2),\dots,x(\hat t^{2l_2}))^T \in R^{2l_2 n},$$
$$x(\bar t) = \bar x = (\bar x^1,\bar x^2,\dots,\bar x^{l_1+2l_2})^T = (x(\tilde t^1),\dots,x(\tilde t^{l_1}),x(\hat t^1),\dots,x(\hat t^{2l_2}))^T,$$

where "$T$" denotes transposition.

Assume that there are $n$ linearly independent conditions in (2). If the number of linearly independent conditions (or of conditions altogether) is smaller, say $n_1 < n$, then accordingly $\vartheta\in R^{n_1}$. This means that the problem has $(n-n_1)$ free (non-fixed) initial conditions. In that case, optimized values of the initial conditions for any $(n-n_1)$ components of the phase vector $x(t)$ can be added to conditions (2), thereby expanding the vector $\vartheta\in R^{n_1}$ to $\vartheta\in R^n$.

3 Obtaining Optimality Conditions

Further we assume that, for all parameter vectors $\vartheta$, problem (1), (2) has a solution, and a unique one at that. This requires that the condition given in the following theorem be satisfied. In the theorem, $F(t,\tau)$ is a fundamental matrix of solutions to system (1), i.e. a solution to the matrix Cauchy problem

$$\dot F(t,\tau) = A_1(t)F(t,\tau),\quad t,\tau\in[t^1,t^f],\qquad F(t^1,t^1)=E,$$

where $E$ is the $n$-dimensional identity matrix.

Theorem 1. Problem (1), (2) has a solution, and a unique one at that, for an arbitrary vector of parameters $\vartheta$ if the function $A_1(t)$ is continuous, the functions $A_2(t)$, $\beta_j(t)$, $j=1,2,\dots,l_2$, are integrable, and

$$\operatorname{rank}\Bigl[\sum_{i=1}^{l_1}\alpha_i F(\tilde t^i,t^1) + \sum_{j=1}^{l_2}\int_{\hat t^{2j-1}}^{\hat t^{2j}}\beta_j(t)F(t,t^1)\,dt\Bigr] = n. \qquad (4)$$
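Condition (4) can be checked numerically once the fundamental matrix is available. The sketch below (all data hypothetical: a constant $A_1$, point terms at $t=0$ and $t=1$, one integral term on $[0.2,0.4]$) integrates the matrix Cauchy problem with classical RK4 and evaluates the rank of the bracketed matrix.

```python
import numpy as np

def fundamental_matrix(A1, ts):
    """Integrate F' = A1(t) F, F(t1, t1) = E with classical RK4; returns F(t_k, t1)."""
    n = A1(ts[0]).shape[0]
    F = np.eye(n)
    Fs = [F.copy()]
    for t0, t1 in zip(ts[:-1], ts[1:]):
        h = t1 - t0
        k1 = A1(t0) @ F
        k2 = A1(t0 + h / 2) @ (F + h / 2 * k1)
        k3 = A1(t0 + h / 2) @ (F + h / 2 * k2)
        k4 = A1(t1) @ (F + h * k3)
        F = F + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        Fs.append(F.copy())
    return np.array(Fs)

# Toy data: rotation-type A1; two point terms alpha_1 = alpha_2 = E;
# one integral term with a small constant weight beta on [0.2, 0.4].
A1 = lambda t: np.array([[0.0, 1.0], [-1.0, 0.0]])
ts = np.linspace(0.0, 1.0, 2001)
Fs = fundamental_matrix(A1, ts)

M = np.eye(2) @ Fs[0] + np.eye(2) @ Fs[-1]           # point terms of the bracket in (4)
beta = lambda t: 0.1 * np.eye(2)
mask = (ts >= 0.2) & (ts <= 0.4)
vals = np.array([beta(t) @ F for t, F in zip(ts[mask], Fs[mask])])
h = np.diff(ts[mask])
M = M + ((vals[:-1] + vals[1:]) * 0.5 * h[:, None, None]).sum(axis=0)

rank = np.linalg.matrix_rank(M)
print(rank)   # condition (4) holds iff this equals n = 2
```

For this $A_1$, the exact fundamental matrix is the rotation matrix, so the RK4 result can be cross-checked against $\cos$/$\sin$ values.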


K. Aida-zade and V. Abdullayev

Proof. The proof of the theorem follows from direct substitution of the Cauchy formula for system (1),

$$x(t) = F(t,t^1)x^1 + \int_{t^1}^{t} F(t,\tau)A_2(\tau)\,d\tau, \qquad (5)$$

into conditions (2). After simple transformations and grouping, we obtain an algebraic system with respect to $x^1 = x(t^1)$:

$$Lx^1 = D, \qquad (6)$$

$$L = \sum_{i=1}^{l_1}\alpha_i F(\tilde t^i,t^1) + \sum_{j=1}^{l_2}\int_{\hat t^{2j-1}}^{\hat t^{2j}}\beta_j(t)F(t,t^1)\,dt,$$

$$D = \vartheta - \sum_{i=1}^{l_1}\alpha_i\int_{t^1}^{\tilde t^i}F(\tilde t^i,\tau)A_2(\tau)\,d\tau - \sum_{j=1}^{l_2}\int_{\hat t^{2j-1}}^{\hat t^{2j}}\beta_j(t)\Bigl[\int_{t^1}^{t}F(t,\tau)A_2(\tau)\,d\tau\Bigr]dt.$$
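The construction in the proof is directly computable. The following toy sketch (all data hypothetical: constant $A_1$, $A_2$, two point terms $\alpha_1 x(t^1)+\alpha_2 x(t^f)=\vartheta$, no integral terms) assembles $L$ and $D$ from a numerically integrated fundamentalal matrix plus particular solution, and recovers the initial state $x^1$ from system (6).

```python
import numpy as np

A1 = np.array([[0.0, 1.0], [-1.0, 0.0]])
A2 = np.array([1.0, 0.0])
theta = np.array([2.0, 1.0])
alpha1, alpha2 = np.eye(2), np.eye(2)
ts = np.linspace(0.0, 1.0, 2001)

# Fundamental matrix F(t, 0): RK4 on F' = A1 F, F(0, 0) = E.
F = np.eye(2); Fs = [F.copy()]
for t0, t1 in zip(ts[:-1], ts[1:]):
    h = t1 - t0
    k1 = A1 @ F; k2 = A1 @ (F + h/2*k1); k3 = A1 @ (F + h/2*k2); k4 = A1 @ (F + h*k3)
    F = F + h/6*(k1 + 2*k2 + 2*k3 + k4); Fs.append(F.copy())
Fs = np.array(Fs)

# Particular term p(t) = int_0^t F(t, tau) A2 dtau, via p' = A1 p + A2, p(0) = 0.
p = np.zeros(2); ps = [p.copy()]
for t0, t1 in zip(ts[:-1], ts[1:]):
    h = t1 - t0
    f = lambda y: A1 @ y + A2
    k1 = f(p); k2 = f(p + h/2*k1); k3 = f(p + h/2*k2); k4 = f(p + h*k3)
    p = p + h/6*(k1 + 2*k2 + 2*k3 + k4); ps.append(p.copy())
ps = np.array(ps)

# Condition alpha1 x(0) + alpha2 x(1) = theta gives L x1 = D, as in (6).
L = alpha1 @ Fs[0] + alpha2 @ Fs[-1]
D = theta - alpha2 @ ps[-1]
x1 = np.linalg.solve(L, D)

# Check: x(t) = F(t,0) x1 + p(t) satisfies the nonlocal condition.
x = Fs @ x1 + ps
residual = alpha1 @ x[0] + alpha2 @ x[-1] - theta
print(np.linalg.norm(residual))   # ≈ 0
```

Here $L$ is invertible because $\det(E+F(1,0))>0$ for this rotation-type $A_1$, which is exactly the rank condition (4) for this toy case.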

It is known that system (6) has a solution, and a unique one at that, if the matrix $L$ is invertible, i.e. when condition (4) is satisfied. Clearly, $\operatorname{rank} L$ does not depend on the values of the vector $D$, and therefore does not depend on the vector $\vartheta$. By the uniqueness of representation (5) for the solution of the Cauchy problem for system (1), problem (1), (2) also has a unique solution whenever condition (4) is satisfied.

The following theorem holds.

Theorem 2. Suppose all the conditions imposed on the functions and parameters in problem (1)–(3) are satisfied, and the functions $f^0(x,\vartheta,t)$ and $\Phi(\bar x,\vartheta)$ are convex in $x$, $\bar x$, $\vartheta$. Then the functional $J(\vartheta)$ is convex. If, in addition, one of the functions $f^0(x,\vartheta,t)$ and $\Phi(\bar x,\vartheta)$ is strongly convex, then the functional of the problem is also strongly convex.

The proof follows from a direct check of the convexity condition for the integral functional under the convexity assumptions on $f^0(x,\vartheta,t)$ and $\Phi(\bar x,\vartheta)$.

Let us investigate the differentiability of functional (3) and obtain formulas for the components of its gradient with respect to the optimized parameters $\vartheta\in R^n$. The derivatives $\partial f^0/\partial x$, $\partial f^0/\partial\vartheta$, $\partial\Phi/\partial\bar x^i$ are understood as rows of the corresponding dimensions. For an arbitrary function $f(t)$, we use the notation

$$f(t^\pm) = f(t\pm 0) = \lim_{\varepsilon\to+0} f(t\pm\varepsilon),\qquad \Delta f(t) = f(t^+) - f(t^-),$$

and we assume that $f(t^{f+}) = f(t^{1-}) = 0$.

$\chi_{[\hat t^{2j-1},\hat t^{2j}]}(t)$, $j=1,2,\dots,l_2$, is the characteristic function

$$\chi_{[\hat t^{2j-1},\hat t^{2j}]}(t) = \begin{cases} 0, & t\notin[\hat t^{2j-1},\hat t^{2j}],\\ 1, & t\in[\hat t^{2j-1},\hat t^{2j}],\end{cases}\qquad j=1,2,\dots,l_2.$$

Suppose the rank of the augmented matrix $\alpha = [\alpha_1,\alpha_2,\dots,\alpha_{l_1}]$ of dimension $n\times l_1 n$ is $\bar n$. Then

$$\operatorname{rank}\alpha = \bar n \le n. \qquad (7)$$

In the case $\bar n < n$, conditions (2) can be reduced, by taking linear combinations, to a form in which the last $(n-\bar n)$ rows of the matrix $\alpha=[\alpha_1,\alpha_2,\dots,\alpha_{l_1}]$ are zero; the integral terms in conditions (2) undergo the same linear combinations. It is important that these transformations do not violate condition (4) for the existence and uniqueness of the solution of boundary value problem (1), (2) for arbitrary $\vartheta$. To avoid introducing new notation, suppose the matrices $\alpha_i$, $i=1,2,\dots,l_1$, have dimension $\bar n\times n$ and the rank of their augmented matrix is $\bar n$. Then we divide constraints (2) into two parts, writing the first $\bar n$ constraints in the form

$$\sum_{i=1}^{l_1}\alpha_i x(\tilde t^i) + \sum_{j=1}^{l_2}\int_{\hat t^{2j-1}}^{\hat t^{2j}}\beta_j^1(t)x(t)\,dt = \vartheta^{(1)}, \qquad (8)$$

while the last $n-\bar n$ constraints contain no point values of the function $x(t)$:

$$\sum_{j=1}^{l_2}\int_{\hat t^{2j-1}}^{\hat t^{2j}}\beta_j^2(t)x(t)\,dt = \vartheta^{(2)}. \qquad (9)$$

Here the matrices $\alpha_i$, $\beta_j^1(t)$, $i=1,2,\dots,l_1$, $j=1,2,\dots,l_2$, have dimension $\bar n\times n$; the matrices $\beta_j^2(t)$, $j=1,2,\dots,l_2$, have dimension $(n-\bar n)\times n$; and the vectors $\vartheta^{(1)}\in R^{\bar n}$, $\vartheta^{(2)}\in R^{n-\bar n}$, $\vartheta = (\vartheta^{(1)},\vartheta^{(2)})\in R^n$.

Therefore, it is possible to extract from the augmented matrix $\alpha$ a matrix (minor) $\underline\alpha$ of rank $\bar n$ formed by $\bar n$ columns of $\alpha$.

Suppose $k_i$ is the number of the column of the matrix $\alpha_{s_i}$, $1\le s_i\le l_1$, $i=1,2,\dots,\bar n$, included as the $i$-th column in the matrix $\underline\alpha$; i.e., the $i$-th column of $\underline\alpha$ is the $k_i$-th column of $\alpha_{s_i}$.

Suppose $\underline x = (\underline x^1,\underline x^2,\dots,\underline x^{\bar n})^T = (x_{k_1}(\tilde t^{s_1}),x_{k_2}(\tilde t^{s_2}),\dots,x_{k_{\bar n}}(\tilde t^{s_{\bar n}}))^T = (\tilde x^{s_1}_{k_1},\tilde x^{s_2}_{k_2},\dots,\tilde x^{s_{\bar n}}_{k_{\bar n}})^T$ is the $\bar n$-dimensional vector consisting of the values of the $k_i$-th coordinates of the $n$-dimensional vector $x(t)$, corresponding to the matrix $\underline\alpha$, at the time instants $\tilde t^{s_i}$, $1\le s_i\le l_1$, $i=1,2,\dots,\bar n$.

Suppose $\check\alpha$ and $\check x$ are the residual $\bar n\times(l_1 n-\bar n)$-dimensional matrix and residual $(l_1 n-\bar n)$-dimensional vector obtained by removing, from the matrix $\alpha$, the $\bar n$ columns included in $\underline\alpha$ and, from the vector $\tilde x$, the $\bar n$ components included in $\underline x$, respectively.


Suppose the $i$-th column of the residual matrix $\check\alpha$ is the $g_i$-th column of the matrix $\alpha_{q_i}$, $1\le g_i\le n$, $1\le q_i\le l_1$, $i=1,2,\dots,(l_1 n-\bar n)$, and

$$\check x = \bigl(x_{g_1}(\tilde t^{q_1}),x_{g_2}(\tilde t^{q_2}),\dots,x_{g_{l_1 n-\bar n}}(\tilde t^{q_{l_1 n-\bar n}})\bigr)^T = \bigl(\tilde x^{q_1}_{g_1},\tilde x^{q_2}_{g_2},\dots,\tilde x^{q_{l_1 n-\bar n}}_{g_{l_1 n-\bar n}}\bigr)^T.$$

Obviously, $(g_i,q_i)\ne(s_j,k_j)$, $j=1,2,\dots,\bar n$, $i=1,2,\dots,(l_1 n-\bar n)$.

For simplicity of formula notation, the $\bar n$-dimensional square matrix $\underline\alpha^{-1}$ is denoted by $C$ with elements $c_{ij}$, and the $\bar n\times(l_1 n-\bar n)$ matrix $(-\underline\alpha^{-1}\check\alpha)$ is denoted by $B$ with elements $b_{ij}$.

Next, we consider separately the cases $\bar n = n$ and $\bar n < n$. In the case $\bar n = n$, the following theorem holds.

Theorem 3. Under the conditions imposed on the functions and parameters involved in problem (1)–(3), functional (3) is differentiable with respect to the parameters $\vartheta$ of the right-hand sides of the nonlocal boundary conditions. The gradient of the functional of the problem for $\operatorname{rank}\alpha = \operatorname{rank}\underline\alpha = n$ is determined by the formulas

$$\frac{\partial J}{\partial\vartheta_k} = \sum_{i=1}^{n}\Bigl[\frac{\partial\Phi(\bar x,\vartheta)}{\partial\tilde x^{s_i}_{k_i}} + \Delta\psi_{k_i}(\tilde t^{s_i})\Bigr]c_{ik} + \frac{\partial\Phi(\bar x,\vartheta)}{\partial\vartheta_k} + \int_{t^1}^{t^f}\frac{\partial f^0(x,\vartheta,t)}{\partial\vartheta_k}\,dt, \qquad (10)$$

where $k=1,2,\dots,n$, and the vector function $\psi(t)$, continuously differentiable on the interval $[t^1,t^f]$ except at the points $\tilde t^i$, $\hat t^j$, $i=2,3,\dots,l_1-1$, $j=1,2,\dots,2l_2$, is the solution to the adjoint problem:

$$\dot\psi(t) = -A_1^T(t)\psi(t) + \Bigl[\frac{\partial f^0(x,\vartheta,t)}{\partial x}\Bigr]^T + \sum_{j=1}^{l_2}\bigl[\chi(\hat t^{2j})-\chi(\hat t^{2j-1})\bigr]\beta_j^T(t)(\underline\alpha^{-1})^T$$
$$\times\sum_{i=1}^{n}\Bigl[\frac{\partial\Phi(\bar x,\vartheta)}{\partial\tilde x^{s_i}_{k_i}} + \psi_{k_i}(\tilde t^{s_i}_-) - \psi_{k_i}(\tilde t^{s_i}_+)\Bigr], \qquad (11)$$

$$\psi_{g_\nu}(\tilde t^{q_\nu}_+) = \psi_{g_\nu}(\tilde t^{q_\nu}_-) + \sum_{i=1}^{n}b_{i\nu}\Bigl[\frac{\partial\Phi(\bar x,\vartheta)}{\partial\tilde x^{s_i}_{k_i}} + \psi_{k_i}(\tilde t^{s_i}_-) - \psi_{k_i}(\tilde t^{s_i}_+)\Bigr] + \frac{\partial\Phi(\bar x,\vartheta)}{\partial\tilde x^{q_\nu}_{g_\nu}}, \qquad (12)$$

$$\psi_i(\hat t^j_+) = \psi_i(\hat t^j_-) + \frac{\partial\Phi(\bar x,\vartheta)}{\partial\hat x^j_i},\quad i=1,2,\dots,n,\ j=1,2,\dots,2l_2, \qquad (13)$$

where $\nu = 1,2,\dots,l_1 n$.

Proof. Suppose $x(t)\in R^n$ is the solution to boundary value problem (1), (2) for an admissible vector of parameters $\vartheta\in V$, and $x^1(t) = x(t) + \Delta x(t)$ is the solution to problem (1), (2) corresponding to the incremented admissible vector $\vartheta^1 = \vartheta + \Delta\vartheta\in V$:

$$\dot x^1(t) = A_1(t)x^1(t) + A_2(t),\quad t\in[t^1,t^f], \qquad (14)$$

$$\sum_{i=1}^{l_1}\alpha_i x^1(\tilde t^i) + \sum_{j=1}^{l_2}\int_{\hat t^{2j-1}}^{\hat t^{2j}}\beta_j(t)x^1(t)\,dt = \vartheta^1. \qquad (15)$$

It follows from (1), (2) and (14), (15) that

$$\Delta\dot x(t) = A_1(t)\Delta x(t),\quad t\in[t^1,t^f], \qquad (16)$$

$$\sum_{i=1}^{l_1}\alpha_i\Delta x(\tilde t^i) + \sum_{j=1}^{l_2}\int_{\hat t^{2j-1}}^{\hat t^{2j}}\beta_j(t)\Delta x(t)\,dt = \Delta\vartheta. \qquad (17)$$

Then for the increment of functional (3) we have

$$\Delta J(\vartheta) = J(\vartheta^1) - J(\vartheta) = \int_{t^1}^{t^f}\Bigl[\frac{\partial f^0(x,\vartheta,t)}{\partial x}\Delta x(t) + \frac{\partial f^0(x,\vartheta,t)}{\partial\vartheta}\Delta\vartheta\Bigr]dt$$
$$+ \sum_{i=1}^{l_1}\frac{\partial\Phi(\bar x,\vartheta)}{\partial\tilde x^i}\Delta x(\tilde t^i) + \sum_{j=1}^{2l_2}\frac{\partial\Phi(\bar x,\vartheta)}{\partial\hat x^j}\Delta x(\hat t^j) + \frac{\partial\Phi(\bar x,\vartheta)}{\partial\vartheta}\Delta\vartheta + R. \qquad (18)$$

Here $R = o\bigl(\|\Delta x(t)\|_{C^{1,n}[t^1,t^f]},\,\|\Delta\vartheta\|_{R^n}\bigr)$ is a residual term. Under the assumptions made on the data of problem (1), (2), using the known technique [22], one can obtain an estimate of the form $\|\Delta x(t)\|_{C^{1,n}[t^1,t^f]} \le c_1\|\Delta\vartheta\|_{R^n}$, where $c_1>0$ does not depend on $x(t)$. From here, taking into account formula (18), we obtain the differentiability of the functional $J(\vartheta)$ with respect to $\vartheta$. Let us transpose the terms on the right-hand side of (16) to the left and multiply both sides of the resulting equality scalarwise by an as yet arbitrary $n$-dimensional vector function $\psi(t)$, continuously differentiable on the intervals $(\bar t^i,\bar t^{i+1})$, $i=1,2,\dots,(l_1+2l_2-1)$. Integrating the obtained equality by parts and adding the result to (18), after simple transformations we will have:

$$\Delta J(\vartheta) = \sum_{i=1}^{l_1+2l_2-1}\int_{\bar t^i}^{\bar t^{i+1}}\Bigl[-\dot\psi^T(t)-\psi^T(t)A_1(t)+\frac{\partial f^0(x,\vartheta,t)}{\partial x}\Bigr]\Delta x(t)\,dt$$
$$+ \Bigl[\int_{t^1}^{t^f}\frac{\partial f^0(x,\vartheta,t)}{\partial\vartheta}\,dt + \frac{\partial\Phi(\bar x,\vartheta)}{\partial\vartheta}\Bigr]\Delta\vartheta + \Bigl\{\sum_{i=1}^{l_1}\frac{\partial\Phi(\bar x,\vartheta)}{\partial\tilde x^i}\Delta x(\tilde t^i) + \sum_{j=1}^{2l_2}\frac{\partial\Phi(\bar x,\vartheta)}{\partial\hat x^j}\Delta x(\hat t^j)$$
$$+ \psi^T(t^f)\Delta x(t^f) - \psi^T(t^1)\Delta x(t^1) + \sum_{i=2}^{l_1-1}\bigl[\psi(\tilde t^i_-)-\psi(\tilde t^i_+)\bigr]^T\Delta x(\tilde t^i) + \sum_{j=1}^{2l_2}\bigl[\psi(\hat t^j_-)-\psi(\hat t^j_+)\bigr]^T\Delta x(\hat t^j)\Bigr\} + R. \qquad (19)$$


Let us deal with the summands inside the curly brackets. In (19), we use conditions (17) to express any $n$ components of the $(nl_1)$-dimensional vector

$$\Delta x(\tilde t) = \Delta\tilde x = \bigl(\Delta x_1(\tilde t^1),\Delta x_2(\tilde t^1),\dots,\Delta x_n(\tilde t^1),\dots,\Delta x_i(\tilde t^j),\dots,\Delta x_n(\tilde t^{l_1})\bigr)$$

through the other $n(l_1-1)$ components. Relation (17) can then be written as

$$\underline\alpha\,\Delta\underline x + \check\alpha\,\Delta\check x + \sum_{j=1}^{l_2}\int_{\hat t^{2j-1}}^{\hat t^{2j}}\beta_j(t)\Delta x(t)\,dt = \Delta\vartheta.$$

Hence, taking into account (7), we have

$$\Delta\underline x = \underline\alpha^{-1}\Delta\vartheta - \underline\alpha^{-1}\check\alpha\,\Delta\check x - \sum_{j=1}^{l_2}\int_{\hat t^{2j-1}}^{\hat t^{2j}}\underline\alpha^{-1}\beta_j(t)\Delta x(t)\,dt. \qquad (20)$$

Further, for simplicity of presentation of the technical details, alongside matrix operations we will use component-wise notation. With the agreed notation $C = \underline\alpha^{-1}$, $B = -\underline\alpha^{-1}\check\alpha$, relation (20) in component-wise form looks as follows:

$$\Delta\underline x^i = \Delta x_{k_i}(\tilde t^{s_i}) = \sum_{k=1}^{n}c_{ik}\Delta\vartheta_k + \sum_{\nu=1}^{l_1 n}b_{i\nu}\Delta x_{g_\nu}(\tilde t^{q_\nu}) - \sum_{j=1}^{l_2}\sum_{k=1}^{n}\int_{\hat t^{2j-1}}^{\hat t^{2j}}\bigl(\underline\alpha^{-1}\beta_j(t)\bigr)_{ik}\Delta x_k(t)\,dt,$$
$$i=1,2,\dots,n,\quad 1\le g_\nu\le n. \qquad (21)$$

The boundary summands in (19) are written component-wise as follows:

$$\psi^T(t^f)\Delta x(t^f) = \sum_{j=1}^{n}\psi_j(t^f)\Delta x_j(t^f),\qquad \psi^T(t^1)\Delta x(t^1) = \sum_{j=1}^{n}\psi_j(t^1)\Delta x_j(t^1). \qquad (22)$$

From (19), taking into account (21)–(22), we can obtain:

$$\Delta J(\vartheta) = \sum_{i=1}^{l_1+2l_2-1}\int_{\bar t^i}^{\bar t^{i+1}}\Bigl[-\dot\psi^T(t)-\psi^T(t)A_1(t)+\frac{\partial f^0(x,\vartheta,t)}{\partial x}$$
$$- \sum_{i=1}^{n}\Bigl(\frac{\partial\Phi(\bar x,\vartheta)}{\partial\tilde x^{s_i}_{k_i}}+\Delta\psi_{k_i}(\tilde t^{s_i})\Bigr)\sum_{j=1}^{l_2}\bigl[\chi(\hat t^{2j})-\chi(\hat t^{2j-1})\bigr]\underline\alpha^{-1}\beta_j(t)\Bigr]\Delta x(t)\,dt$$
$$+ \sum_{k=1}^{n}\Bigl\{\sum_{i=1}^{n}\Bigl(\frac{\partial\Phi(\bar x,\vartheta)}{\partial\tilde x^{s_i}_{k_i}}+\Delta\psi_{k_i}(\tilde t^{s_i})\Bigr)c_{ik} + \frac{\partial\Phi(\bar x,\vartheta)}{\partial\vartheta_k} + \int_{t^1}^{t^f}\frac{\partial f^0(x,\vartheta,t)}{\partial\vartheta_k}\,dt\Bigr\}\Delta\vartheta_k$$
$$+ \sum_{\nu=1}^{l_1 n}\Bigl[\sum_{i=1}^{n}b_{i\nu}\Bigl(\frac{\partial\Phi(\bar x,\vartheta)}{\partial\tilde x^{s_i}_{k_i}}+\Delta\psi_{k_i}(\tilde t^{s_i})\Bigr) + \frac{\partial\Phi(\bar x,\vartheta)}{\partial\tilde x^{q_\nu}_{g_\nu}} + \Delta\psi_{g_\nu}(\tilde t^{q_\nu})\Bigr]\Delta x_{g_\nu}(\tilde t^{q_\nu})$$
$$+ \sum_{j=1}^{2l_2}\sum_{i=1}^{n}\Bigl[\frac{\partial\Phi(\bar x,\vartheta)}{\partial\hat x^j_i}+\Delta\psi_i(\hat t^j)\Bigr]\Delta x_i(\hat t^j) + R. \qquad (23)$$

Since the vector function $\psi(t)$ is arbitrary, let us require that the expressions in the first square bracket and in the last two square brackets of (23) be zero. From the first requirement we obtain the adjoint system of differential equations (11), and from the other two requirements we obtain the following expressions:

$$\sum_{i=1}^{n}b_{i\nu}\Bigl[\frac{\partial\Phi(\bar x,\vartheta)}{\partial\tilde x^{s_i}_{k_i}}+\Delta\psi_{k_i}(\tilde t^{s_i})\Bigr] + \frac{\partial\Phi(\bar x,\vartheta)}{\partial\tilde x^{q_\nu}_{g_\nu}} + \Delta\psi_{g_\nu}(\tilde t^{q_\nu}) = 0,\quad \nu=1,2,\dots,l_1 n,$$

$$\frac{\partial\Phi(\bar x,\vartheta)}{\partial\hat x^j_i} + \Delta\psi_i(\hat t^j) = 0,\quad i=1,2,\dots,n,\ j=1,2,\dots,2l_2.$$

Hence we obtain conditions (12), (13). The desired components of the gradient of the functional with respect to $\vartheta$ are then determined from (23), as the linear part of the increment of the functional in $\Delta\vartheta$, by formulas (10).

From Theorem 3 we can obtain formulas for simpler special cases, when the rank of any one of the matrices $\alpha_i$, $i=1,2,\dots,l_1$, equals $n$, so that it can be taken as the matrix $\underline\alpha$. For instance, when $\operatorname{rank}\alpha_1 = n$, and therefore $\alpha_1^{-1}$ exists, for the components of the gradient of the functional with respect to $\vartheta$ we can obtain the formula

$$\nabla J(\vartheta) = -(\alpha_1^{-1})^T\Bigl[\psi(t^1) - \frac{\partial\Phi^T(\bar x,\vartheta)}{\partial\tilde x^1}\Bigr] + \frac{\partial\Phi^T(\bar x,\vartheta)}{\partial\vartheta} + \int_{t^1}^{t^f}\Bigl[\frac{\partial f^0(x,\vartheta,t)}{\partial\vartheta}\Bigr]^T dt, \qquad (24)$$

where the adjoint boundary value problem has the form

$$\dot\psi(t) = -A_1^T(t)\psi(t) + \Bigl[\frac{\partial f^0(x,\vartheta,t)}{\partial x}\Bigr]^T + \sum_{j=1}^{l_2}\bigl[\chi(\hat t^{2j})-\chi(\hat t^{2j-1})\bigr]\beta_j^T(t)(\alpha_1^{-1})^T\Bigl[\psi(t^1) - \frac{\partial\Phi^T(\bar x,\vartheta)}{\partial\tilde x^1}\Bigr], \qquad (25)$$

$$\psi(t^f) = -\alpha_{l_1}^T(\alpha_1^{-1})^T\psi(t^1) - \frac{\partial\Phi^T(\bar x,\vartheta)}{\partial\tilde x^{l_1}} + \alpha_{l_1}^T(\alpha_1^{-1})^T\frac{\partial\Phi^T(\bar x,\vartheta)}{\partial\tilde x^1}, \qquad (26)$$

$$\psi(\tilde t^i_+) = \psi(\tilde t^i_-) + \alpha_i^T(\alpha_1^{-1})^T\psi(t^1) + \frac{\partial\Phi^T(\bar x,\vartheta)}{\partial\tilde x^i} - \alpha_i^T(\alpha_1^{-1})^T\frac{\partial\Phi^T(\bar x,\vartheta)}{\partial\tilde x^1},\quad i=2,3,\dots,l_1-1, \qquad (27)$$

$$\psi(\hat t^j_+) = \psi(\hat t^j_-) + \frac{\partial\Phi^T(\bar x,\vartheta)}{\partial\hat x^j},\quad j=1,2,\dots,2l_2. \qquad (28)$$

Let us now consider the case $\operatorname{rank}\alpha = \bar n < n$.

Theorem 4. Suppose all the conditions imposed on the data of the problem are satisfied, and $\operatorname{rank}\alpha = \bar n < n$. Then integral functional (3) is differentiable with respect to the parameters $\vartheta = (\vartheta^{(1)},\vartheta^{(2)})$, and the components of its gradient are determined by the following formulas:

$$\frac{\partial J}{\partial\vartheta^{(1)}_k} = \sum_{i=1}^{\bar n}\Bigl[\frac{\partial\Phi(\bar x,\vartheta)}{\partial\tilde x^{s_i}_{k_i}}+\Delta\psi_{k_i}(\tilde t^{s_i})\Bigr]c_{ik} + \frac{\partial\Phi(\bar x,\vartheta)}{\partial\vartheta^{(1)}_k} + \int_{t^1}^{t^f}\frac{\partial f^0(x,\vartheta,t)}{\partial\vartheta^{(1)}_k}\,dt,\quad k=1,2,\dots,\bar n, \qquad (29)$$

$$\frac{\partial J}{\partial\vartheta^{(2)}_k} = -\lambda_k + \frac{\partial\Phi(\bar x,\vartheta)}{\partial\vartheta^{(2)}_k} + \int_{t^1}^{t^f}\frac{\partial f^0(x,\vartheta,t)}{\partial\vartheta^{(2)}_k}\,dt,\quad k=1,2,\dots,(n-\bar n), \qquad (30)$$

where $\psi(t)\in R^n$ and $\lambda\in R^{n-\bar n}$ satisfy the conditions of the adjoint problem:

$$\dot\psi(t) = -A_1^T(t)\psi(t) + \Bigl[\frac{\partial f^0(x,\vartheta,t)}{\partial x}\Bigr]^T + \sum_{j=1}^{l_2}\bigl[\chi(\hat t^{2j})-\chi(\hat t^{2j-1})\bigr](\beta_j^1(t))^T(\underline\alpha^{-1})^T\sum_{i=1}^{n}\Bigl[\frac{\partial\Phi(\bar x,\vartheta)}{\partial\tilde x^{s_i}_{k_i}} + \psi_{k_i}(\tilde t^{s_i}_-) - \psi_{k_i}(\tilde t^{s_i}_+)\Bigr]$$
$$- \sum_{j=1}^{l_2}\bigl[\chi(\hat t^{2j})-\chi(\hat t^{2j-1})\bigr](\beta_j^2(t))^T\lambda, \qquad (31)$$

$$\psi_{g_\nu}(\tilde t^{q_\nu}_+) = \psi_{g_\nu}(\tilde t^{q_\nu}_-) + \sum_{i=1}^{\bar n}b_{i\nu}\Bigl[\frac{\partial\Phi(\bar x,\vartheta)}{\partial\tilde x^{s_i}_{k_i}} + \psi_{k_i}(\tilde t^{s_i}_-) - \psi_{k_i}(\tilde t^{s_i}_+)\Bigr] + \frac{\partial\Phi(\bar x,\vartheta)}{\partial\tilde x^{q_\nu}_{g_\nu}},\quad \nu=1,\dots,(l_1 n-\bar n), \qquad (32)$$

$$\psi_i(\hat t^j_+) = \psi_i(\hat t^j_-) + \frac{\partial\Phi(\bar x,\vartheta)}{\partial\hat x^j_i},\quad i=1,2,\dots,n,\ j=1,2,\dots,2l_2. \qquad (33)$$

Proof. To take into account conditions (9), we use the method of Lagrange multipliers. Then, for the increment of the functional, we have:

$$\Delta J(\vartheta) = \int_{t^1}^{t^f}\psi^T(t)\bigl[\Delta\dot x(t) - A_1(t)\Delta x(t)\bigr]dt + \lambda^T\Bigl[\sum_{j=1}^{l_2}\int_{\hat t^{2j-1}}^{\hat t^{2j}}\beta_j^2(t)\Delta x(t)\,dt - \Delta\vartheta^{(2)}\Bigr], \qquad (34)$$

where $\lambda$ is an as yet arbitrary $(n-\bar n)$-dimensional vector of Lagrange multipliers, and $\psi(t)$ are conjugate variables. To take into account conditions (8), we select an invertible $\bar n$-dimensional square matrix $\underline\alpha$ from the augmented $(\bar n\times l_1 n)$ matrix $\alpha = (\alpha_1,\alpha_2,\dots,\alpha_{l_1})$. This allows expressing, from the analogue of relations (17), any $\bar n$ components of the $(nl_1)$-dimensional vector $\Delta\tilde x$ through the remaining $(nl_1-\bar n)$ components, and obtaining a formula similar to (20). Further, the course of the proof is similar to that of Theorem 3, with the only differences being the dimensions of the matrices $\underline\alpha$, $\check\alpha$ and of the vectors $\Delta\underline x$, $\Delta\check x$, and, of course, the added second term in (34).

Note the special case $\bar n = 0$. Obtaining formulas for the components of the gradient with respect to $\vartheta$ is then simplified, because the Lagrange method is used for all conditions (2). This, of course, increases the dimension of the problem: the dimension of the vector $\lambda$ grows to $n$, i.e. $\lambda\in R^n$. Using the calculations presented in the proof of Theorem 4 with respect to conditions (9), the components of the gradient of the functional can be obtained in the following form:

$$\nabla J(\vartheta) = \int_{t^1}^{t^f}\frac{\partial f^0(x,\vartheta,t)}{\partial\vartheta}\,dt + \frac{\partial\Phi(\bar x,\vartheta)}{\partial\vartheta} - \lambda. \qquad (35)$$

$\psi(t)$ and $\lambda$ are the solution of the following adjoint boundary value problem with $2n$ conditions:

$$\dot\psi(t) = -A_1^T(t)\psi(t) + \sum_{j=1}^{l_2}\bigl[\chi(\hat t^{2j})-\chi(\hat t^{2j-1})\bigr]\beta_j^T(t)\lambda + \Bigl[\frac{\partial f^0(x,\vartheta,t)}{\partial x}\Bigr]^T, \qquad (36)$$

$$\psi(t^1) = \frac{\partial\Phi^T(\bar x,\vartheta)}{\partial\tilde x^1} + \alpha_1^T\lambda, \qquad (37)$$

$$\psi(t^f) = -\frac{\partial\Phi(\bar x,\vartheta)}{\partial\tilde x^{l_1}} - \alpha_{l_1}^T\lambda, \qquad (38)$$

and jump conditions:

$$\psi(\tilde t^i_+) = \psi(\tilde t^i_-) + \frac{\partial\Phi(\bar x,\vartheta)}{\partial\tilde x^i} + \alpha_i^T\lambda,\quad i=2,3,\dots,l_1-1, \qquad (39)$$

$$\psi(\hat t^j_+) = \psi(\hat t^j_-) + \frac{\partial\Phi(\bar x,\vartheta)}{\partial\hat x^j},\quad j=1,2,\dots,2l_2. \qquad (40)$$

Formulas (36)–(38) differ from the formulas given in Theorems 3 and 4 in the dimension of the parameters and the number of conditions in the adjoint problem. In the formulas given in Theorem 3 for the case $\operatorname{rank}\alpha = \bar n = n$,


the number of boundary conditions for the conjugate variable was $n$, and there were no unknown parameters. In the case $\bar n < n$, formulas (30), (31) and (35)–(38) include the $(n-\bar n)$-dimensional vector of Lagrange multipliers $\lambda$, for whose determination there are an additional $(n-\bar n)$ boundary conditions, totaling $(2n-\bar n)$.

Theorem 5. In order for the pair $(\vartheta^*,x^*(t))$ to be a solution to problem (1)–(3), it is necessary and sufficient that, for an arbitrary admissible vector of parameters $\vartheta\in V$,

$$(\nabla J(\vartheta^*),\,\vartheta-\vartheta^*) \ge 0,$$

where $\nabla J(\vartheta^*)$ is determined, depending on the rank of the augmented matrix $\alpha = [\alpha_1,\alpha_2,\dots,\alpha_{l_1}]$, by one of formulas (10), (24), (29), (30) or (35).

The proof of the theorem follows from the convexity of the admissible domain $V$ and the convexity and differentiability of the objective functional $J(\vartheta)$ [21,22].
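Theorem 5 is the standard variational-inequality characterization of a minimum over a convex set, which is exactly what the gradient projection method (used in Sect. 4) enforces in the limit. A minimal illustration, on a hypothetical quadratic functional and a box-shaped set $V$ (so the projection is a component-wise clip):

```python
import numpy as np

def project_box(v, lo, hi):
    """Euclidean projection onto the box V = [lo, hi] (component-wise clip)."""
    return np.minimum(np.maximum(v, lo), hi)

def gradient_projection(grad, v0, lo, hi, gamma=0.1, eps=1e-10, max_iter=10000):
    """Iterate v <- P_V(v - gamma * grad J(v)) until the step is negligible."""
    v = v0.copy()
    for _ in range(max_iter):
        v_new = project_box(v - gamma * grad(v), lo, hi)
        if np.linalg.norm(v_new - v) < eps:
            return v_new
        v = v_new
    return v

# Toy J(theta) = |theta - c|^2 with c outside the box: the solution is P_V(c).
c = np.array([4.0, -3.0])
grad = lambda th: 2.0 * (th - c)
lo, hi = np.array([-3.0, -2.0]), np.array([3.0, 2.0])
theta_star = gradient_projection(grad, np.zeros(2), lo, hi)
print(theta_star)   # ≈ [3. -2.]
```

At the computed point, $(\nabla J(\vartheta^*),\vartheta-\vartheta^*)\ge 0$ holds for every $\vartheta\in V$, in agreement with Theorem 5; the fixed point of the projected iteration is precisely a point satisfying that inequality.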

4 The Scheme of the Numerical Solution of the Problem and the Results of Computer Experiments

For the numerical solution of the system of differential equations with multipoint conditions, both for the direct problem (1), (2) and for the adjoint problem with a given vector of parameters $\vartheta$, we use the method of condition shift proposed by the authors in [8,9,23].

Problem 1. We present the results of numerical experiments obtained by solving the following problem, described for $t\in[0,1]$ by the system of differential equations

$$\dot x_1(t) = 2x_1(t) + x_2(t) - 6\cos(8t) - 22\sin(8t) - 3t^2 + 4t + 6,$$
$$\dot x_2(t) = tx_1(t) + x_2(t) - (24-2t)\cos(8t) - 3\sin(8t) - 2t^3 + t^2 - 1, \qquad (41)$$

with the nonlocal conditions

$$x_1(0) + x_1(0.5) + x_2(1) + \int_{0.6}^{0.8}\bigl(x_1(t)+2x_2(t)\bigr)dt = \vartheta_1,$$
$$\int_{0.2}^{0.4}\bigl(x_1(t)-x_2(t)\bigr)dt = \vartheta_2. \qquad (42)$$

The constraints on the optimized parameters are:

$$-3\le\vartheta_1\le 3,\qquad -2\le\vartheta_2\le 2,\qquad -5\le u(t)\le 5.$$

The objective functional has the form:

$$J(\vartheta) = \int_0^1\bigl(x_1(t)+x_2(t)-t^2-2\bigr)^2 dt + \delta\cdot\bigl[(\vartheta_1+0.24)^2 + (\vartheta_2+1.17)^2$$
$$+ (x_2(0.5)+1.52)^2 + (x_1(1)+0.29)^2 + (x_2(1)-2.97)^2\bigr] \to \min. \qquad (43)$$

Adjoint problem (31) has the form:

$$\begin{cases}\dot\psi_1(t) = -2\psi_1(t) - t\psi_2(t) + \lambda\,[\chi(0.4)-\chi(0.2)] + \psi_1(0)\,[\chi(0.8)-\chi(0.6)] + 2(x_1(t)+x_2(t)-t^2-2),\\[2pt] \dot\psi_2(t) = -\psi_1(t) - \psi_2(t) - \lambda\,[\chi(0.4)-\chi(0.2)] + 2\psi_1(0)\,[\chi(0.8)-\chi(0.6)] + 2(x_1(t)+x_2(t)-t^2-2).\end{cases} \qquad (44)$$

The augmented matrix

$$\alpha = \begin{pmatrix}1&0&1&0&0&1\\0&0&0&0&0&0\end{pmatrix}$$

has rank 1. This corresponds to the case of Theorem 4 considered above. We build the matrix $\underline\alpha$ from the first column and the first row of the matrix $\alpha_1$: $\underline\alpha = (1)$, $\check\alpha = (0\;\,1\;\,0\;\,0\;\,1)$. Then, for the matrices $B$ and $C$, we have:

$$B = -\underline\alpha^{-1}\check\alpha = (0\;\,{-1}\;\,0\;\,0\;\,{-1}),\qquad C = \underline\alpha^{-1} = (1).$$

For the elements of the vector $\partial\Phi(\bar x,\vartheta)/\partial\tilde x^i$, $i=1,2,3$, we have:

$$\frac{\partial\Phi(\bar x,\vartheta)}{\partial\tilde x^{s_1}_{k_1}} = \frac{\partial\Phi(\bar x,\vartheta)}{\partial\tilde x^1_1} = 0,\qquad \frac{\partial\Phi(\bar x,\vartheta)}{\partial\tilde x^{q_1}_{g_1}} = \frac{\partial\Phi(\bar x,\vartheta)}{\partial\tilde x^1_2} = 0,\qquad \frac{\partial\Phi(\bar x,\vartheta)}{\partial\tilde x^{q_2}_{g_2}} = \frac{\partial\Phi(\bar x,\vartheta)}{\partial\tilde x^2_1} = 0,$$

$$\frac{\partial\Phi(\bar x,\vartheta)}{\partial\tilde x^{q_3}_{g_3}} = \frac{\partial\Phi(\bar x,\vartheta)}{\partial\tilde x^2_2} = 2\bigl(x_2(0.5)+1.52\bigr),\qquad \frac{\partial\Phi(\bar x,\vartheta)}{\partial\tilde x^{q_4}_{g_4}} = \frac{\partial\Phi(\bar x,\vartheta)}{\partial\tilde x^3_1} = 2\bigl(x_1(1)+0.29\bigr),$$

$$\frac{\partial\Phi(\bar x,\vartheta)}{\partial\tilde x^{q_5}_{g_5}} = \frac{\partial\Phi(\bar x,\vartheta)}{\partial\tilde x^3_2} = 2\bigl(x_2(1)-2.97\bigr),\qquad \frac{\partial\Phi(\bar x,\vartheta)}{\partial\hat x^j_i} = 0,\quad i=1,2,\ j=1,2,3,4.$$

Then conditions (32) and (33) take the form:

$$\begin{cases}\psi_1(0) + \psi_2(1) = -2\bigl(x_2(1)-2.97\bigr),\\ \psi_2(0) = 0,\qquad \psi_1(1) = -2\bigl(x_1(1)+0.29\bigr),\\ \psi_1(0.5_+) = \psi_1(0.5_-) + \psi_1(0),\\ \psi_2(0.5_+) = \psi_2(0.5_-) + 2\bigl(x_2(0.5)+1.52\bigr),\\ \psi_i(\hat t^j_+) = \psi_i(\hat t^j_-),\quad i=1,2,\ j=1,2,3,4.\end{cases} \qquad (45)$$

The components of the gradient of the functional with respect to the vector $\vartheta$, according to formulas (29), (30), are determined as follows:

$$\frac{\partial J(\vartheta)}{\partial\vartheta_1} = -\psi_1(0) + 2(\vartheta_1+0.24),\qquad \frac{\partial J(\vartheta)}{\partial\vartheta_2} = -\lambda + 2(\vartheta_2+1.17). \qquad (46)$$

The iterative procedure of the gradient projection method [21] was carried out, with accuracy $\varepsilon = 10^{-5}$ with respect to the functional, from different starting points $\vartheta^{(0)}$. The auxiliary Cauchy problems used by the method of condition shift [8] to solve both direct problem (41)–(42) and adjoint problem (44)–(45) were solved by the fourth-order Runge–Kutta method with different steps $h = 0.01, 0.02, 0.05$.

Figure 1 shows the graphs of the solutions to the direct (a) and adjoint (b) boundary value problems for $N = 200$ time-interval partitions, obtained at the twentieth iteration of the optimization. The value of the functional at the starting point $\vartheta_1^{(0)} = -0.125$, $\vartheta_2^{(0)} = 1.5$ was $J(\vartheta^{(0)}) = 397.30552$, with $\lambda = -240.1465$. At the twentieth iteration, we obtained the functional value $J(\vartheta^{(20)}) = 1.4\cdot10^{-8}$, with the parameter values $\vartheta_1^{(20)} = -0.2403$, $\vartheta_2^{(20)} = -1.1719$, $\lambda^{(20)} = 0.0002$.
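For reference, a classical fourth-order Runge–Kutta sweep of the kind used for the auxiliary Cauchy problems can be sketched as follows (the condition-shift bookkeeping of [8] is omitted); the accuracy check shows the $O(h^4)$ behavior that makes steps $h = 0.01\dots0.05$ adequate here.

```python
import numpy as np

# Generic classical RK4 integrator for y' = f(t, y) on the grid ts (illustrative).
def rk4(f, y0, ts):
    y = np.asarray(y0, dtype=float)
    out = [y.copy()]
    for t0, t1 in zip(ts[:-1], ts[1:]):
        h = t1 - t0
        k1 = f(t0, y)
        k2 = f(t0 + h / 2, y + h / 2 * k1)
        k3 = f(t0 + h / 2, y + h / 2 * k2)
        k4 = f(t1, y + h * k3)
        y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        out.append(y.copy())
    return np.array(out)

# Accuracy check on y' = y, y(0) = 1: the error at t = 1 shrinks like h^4.
errs = {}
for N in (20, 100):
    ts = np.linspace(0.0, 1.0, N + 1)
    ys = rk4(lambda t, y: y, np.array([1.0]), ts)
    errs[N] = abs(ys[-1, 0] - np.e)
print(errs)
```

Halving the step should reduce the terminal error by roughly a factor of 16, which is consistent with the agreement the authors report between runs with different $h$.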

Fig. 1. Graphs of the obtained solutions of the direct (a) and adjoint (b) boundary value problems.

5 Conclusion

In this paper, we investigated a linear optimal control problem for a dynamic object in which the values of the right-hand sides of linear nonlocal boundary conditions are optimized. The boundary conditions include, as terms, the values of the phase variable at intermediate points and the integral values of the phase variable over several intervals. The conditions for the existence and uniqueness of the solution to the boundary value problem with unseparated boundary conditions, and the convexity of the objective functional of the problem, are investigated. The necessary optimality conditions are formulated using the technique of the Lagrange method and the definition of the conditional gradient. The obtained results can be used to investigate control problems described by nonlinear systems of differential equations with linear unseparated boundary conditions, including the point and integral values of phase variables and optimized right-hand sides.

References

1. Nicoletti, O.: Sulle condizioni iniziali che determinano gli integrali delle equazioni differenziali ordinarie. Atti della R. Acc. Sc., Torino (1897)
2. Tamarkin, Y.D.: On some general problems in the theory of ordinary differential equations and on series expansions of arbitrary functions. Petrograd (1917)


3. De la Vallée-Poussin, Ch.J.: Sur l'équation différentielle linéaire du second ordre. Détermination d'une intégrale par deux valeurs assignées. Extension aux équations d'ordre n. J. Math. Pures Appl. 8(9) (1929)
4. Kiguradze, I.T.: Boundary value problems for systems of ordinary differential equations. Itogi Nauki Tekh. Sovrem. Probl. Mat. Nov. Dostizheniya 30, 3–103 (1987)
5. Nakhushev, A.M.: Loaded Equations and Applications. Nauka, Moscow (2012)
6. Dzhumabaev, D.S., Imanchiev, A.E.: Boundary value problems for systems of ordinary differential equations. Mat. J. 5(15), 30–38 (2005)
7. Assanova, A.T., Imanchiyev, A.E., Kadirbayeva, Zh.M.: Solvability of nonlocal problems for systems of Sobolev-type differential equations with a multipoint condition. Izv. Vyssh. Uchebn. Zaved. Mat. 12, 3–15 (2019)
8. Aida-zade, K.R., Abdullaev, V.M.: On the solution of boundary value problems with nonseparated multipoint and integral conditions. Differ. Equ. 49(9), 1114–1125 (2013). https://doi.org/10.1134/S0012266113090061
9. Abdullaev, V.M., Aida-Zade, K.R.: Numerical method of solution to loaded nonlocal boundary value problems for ordinary differential equations. Comput. Math. Math. Phys. 54(7), 1096–1109 (2014). https://doi.org/10.1134/S0965542514070021
10. Assanova, A.T.: Solvability of a nonlocal problem for a hyperbolic equation with integral conditions. Electron. J. Differ. Equ. 170, 1–12 (2017)
11. Aida-zade, K.R., Abdullayev, V.M.: Optimizing placement of the control points at synthesis of the heating process control. Autom. Remote Control 78(9), 1585–1599 (2017). https://doi.org/10.1134/S0005117917090041
12. Abdullayev, V.M., Aida-zade, K.R.: Numerical solution of the problem of determining the number and locations of state observation points in feedback control of a heating process. Comput. Math. Math. Phys. 58(1), 78–89 (2018). https://doi.org/10.1134/S0965542518010025
13. Aida-zade, K.R., Hashimov, V.A.: Optimization of measurement points positioning in a border control synthesis problem for the process of heating a rod. Autom. Remote Control 79(9), 1643–1660 (2018). https://doi.org/10.1134/S0005117918090096
14. Abdullayev, V.M.: Numerical solution to optimal control problems with multipoint and integral conditions. Proc. Inst. Math. Mech. 44(2), 171–186 (2018)
15. Devadze, D., Beridze, V.: Optimality conditions and solution algorithms of optimal control problems for nonlocal boundary-value problems. J. Math. Sci. 218(6), 731–736 (2016). https://doi.org/10.1007/s10958-016-3057-x
16. Zubova, S.P., Raetskaya, E.V.: Algorithm to solve linear multipoint problems of control by the method of cascade decomposition. Autom. Remote Control 78(7), 1189–1202 (2017). https://doi.org/10.1134/S0005117917070025
17. Abdullayev, V.M., Aida-zade, K.R.: Optimization of loading places and load response functions for stationary systems. Comput. Math. Math. Phys. 57(4), 634–644 (2017). https://doi.org/10.1134/S0965542517040029
18. Aschepkov, L.T.: Optimal control of systems with intermediate conditions. J. Appl. Math. Mech. 45(2), 215–222 (1981)
19. Vasil'eva, O.O., Mizukami, K.: Dynamical processes described by a boundary problem: necessary optimality conditions and methods of solution. J. Comput. Syst. Sci. Int. 1, 95–100 (2000)
20. Abdullayev, V.M., Aida-zade, K.R.: Approach to the numerical solution of optimal control problems for loaded differential equations with nonlocal conditions. Comput. Math. Math. Phys. 59(5), 696–707 (2019). https://doi.org/10.1134/S0965542519050026


21. Polyak, B.T.: Introduction to Optimization. Lenand, Moscow (2019)
22. Vasil'ev, F.P.: Methods of Optimization. Faktorial Press, Moscow (2002)
23. Aida-Zade, K., Abdullayev, V.: Numerical method for solving the parametric identification problem for loaded differential equations. Bull. Iran. Math. Soc. 45(6), 1725–1742 (2019). https://doi.org/10.1007/s41980-019-00225-3
24. Moszynski, K.: A method of solving the boundary value problem for a system of linear ordinary differential equations. Algorytmy, Varshava 11(3), 25–43 (1964)
25. Abramov, A.A.: A variation of the 'dispersion' method. USSR Comput. Math. Math. Phys. 1(3), 368–371 (1961)

Saddle-Point Method in Terminal Control with Sections in Phase Constraints

Anatoly Antipin¹ and Elena Khoroshilova²

¹ Dorodnicyn Computing Centre, FRC CSC RAS, Vavilov Street 40, 119333 Moscow, Russia
[email protected]
² CMC Faculty, Lomonosov MSU, Leninskiye Gory, 119991 Moscow, Russia
[email protected]

Abstract. A new approach to solving terminal control problems with phase constraints, based on saddle-point sufficient optimality conditions, is considered. The basis of the approach is the Lagrangian formalism and duality theory. We study linear controlled dynamics in the presence of phase constraints. Taking cross sections of the phase constraints at certain points in time leads to new intermediate finite-dimensional convex programming problems. In effect, the optimal control problem, defined over the entire time interval, is split into a number of independent intermediate subproblems, each of which is defined on its own subsegment. Combining the solutions of these subproblems, we can obtain a solution to the original problem on the entire time interval. To this end, a gradient flow is launched to solve all the intermediate problems at the same time. The convergence of the computational technique to the solution of the optimal control problem in all variables is proved.

Keywords: Optimal control · Lagrange function · Duality · Saddle point · Iterative solution methods · Convergence

1 Introduction

Dynamic problems of terminal control under state constraints are among the most complex in optimal control theory. For quite a long time, from the moment of their appearance and the first attempts at practical implementation in technical fields, these problems have been studied by experts from different angles. Much attention is traditionally paid to the development of computational methods for solving this class of problems in a wide area of applications [9,13]. At the same time, directions are being developed related to further generalizations and development of the Pontryagin maximum principle [7,10], as well as to the extension of the classes of problem statements [8,12,14]. Questions of the existence, stability, and optimality of solutions are studied in [11] and elsewhere.

Supported by the Russian Science Foundation (Research Project 18-01-00312).

© Springer Nature Switzerland AG 2020. N. Olenev et al. (Eds.): OPTIMA 2020, LNCS 12422, pp. 17–26, 2020. https://doi.org/10.1007/978-3-030-62867-3_2


In our opinion, one of the most important areas of the theory of solving optimal control problems is the study of various approaches to the development of evidence-based methods for solving terminal control problems. The theory of evidence-based methods is currently an important tool in various application areas of mathematical modeling tools. In this theory, emphasis is placed on the ideas of proof, validity, and guarantee of the result. The latter assumes that the developed computing technology (computing process) generates a sequence of iterations that has a limit point on a bounded set, and this point is guaranteed to be a solution to the original problem with a given accuracy. In this paper, we consider the problem of terminal control with phase constraints and their cross sections at discretization points. Intermediate spaces are associated with sampling points, the dimension of which is equal to the dimension of the phase trajectory vector. The sections of the phase trajectory in the spaces of sections form a polyhedral set. On this set, we pose the problem of minimizing a convex objective function. At each sampling point, we obtain some ﬁnite-dimensional problem. To iteratively proceed to the next phase trajectory, it is enough to take a gradient-type step in the section space for each intermediate problem. These steps together on all intermediate problems form a saddle-point gradient ﬂow. This computational ﬂow with an increase in the number of iterations leads us to the solution of the problem.

2 Statement of Terminal Control Problem with Continuous Phase Constraints

We consider a linear dynamic controlled system defined on a given time interval $[t_0, t_f]$, with a fixed left end and a moving right end, under phase constraints on the trajectory. The dynamics of the controlled phase trajectory $x(t)$ is described by a linear system of differential equations with an implicit condition at the right end of the time interval. The terminal condition is defined as the solution of a linear programming problem that is not known in advance. In this case, it is necessary to choose a control so that the phase trajectory satisfies the phase constraints and its right end coincides with the solution of the boundary value problem. The control problem is considered in a Hilbert function space. Formally, in the case of continuous phase constraints, all of the above can be stated as a problem: find the optimal control $u(t) = u^*(t)$ and the corresponding trajectory $x(t) = x^*(t)$, $t \in [t_0, t_f]$, that satisfy the system

$$
\begin{gathered}
\frac{d}{dt}x(t) = D(t)x(t) + B(t)u(t), \quad t_0 \le t \le t_f, \quad x(t_0) = x_0, \quad x(t_f) = x_f^*,\\
G(t)x(t) \le g(t), \quad x(t) \in \mathbb{R}^n \ \forall\, t \in [t_0, t_f],\\
u(t) \in U = \{u(t) \in L_2^r[t_0, t_f] \mid u(t) \in [u^-, u^+] \ \forall\, t \in [t_0, t_f]\},\\
x_f^* \in \operatorname{Argmin}\{\langle \varphi_f, x_f\rangle \mid G_f x_f \le g_f,\ x_f \in \mathbb{R}^n\},
\end{gathered}
\tag{1}
$$

where $D(t)$, $B(t)$, $G(t)$ are continuous matrices of size $n \times n$, $n \times r$, $m \times n$, respectively; $g(t)$ is a given continuous vector function; $G_f = G(t_f)$, $g_f = g(t_f)$, $x_f = x(t_f)$ are the values at the right-hand end of the time interval; $\varphi_f$ is a given vector (the normal to the linear objective functional); $x(t_0) = x_0$ is the given initial condition. The inclusion $x(t) \in \mathbb{R}^n$ means that the vector $x(t)$ belongs to the finite-dimensional space $\mathbb{R}^n$ for each $t$. The controls $u(t)$ for each $t \in [t_0, t_f]$ belong to the set $U$, which is a convex compact set in $\mathbb{R}^r$.

Problem (1) is considered as an analogue of the linear programming problem, formulated in a functional Hilbert space. To solve the differential system in (1), one uses the initial condition $x_0$ and some control $u(t) \in U$. For each admissible $u(t)$, within the framework of the classical existence and uniqueness theorems, we obtain a unique phase trajectory $x(t)$. The right end of the optimal trajectory must coincide with the finite-dimensional solution of the boundary value problem, i.e., $x^*(t_f) = x_f^*$. An asterisk means that $x^*(t)$ is the optimal solution; in particular, $x_f^*$ is a solution of the boundary value optimization problem. The control must be selected so that the phase constraints are additionally fulfilled. The left end $x_0$ of the trajectory is fixed and is not an object of optimization.

From the point of view of developing evidence-based computational methods, the formulated problem with phase constraints is a difficult one. Traditionally, optimal control problems (without a boundary value problem) are studied within the Hamiltonian formalism, whose peak is the maximum principle. This principle is a necessary optimality condition and the dominant tool for the study of dynamic controlled problems. However, the maximum principle does not allow one to construct methods that are guaranteed to give solutions with a predetermined accuracy. In the case of convex problems of type (1), it seems more reasonable to conduct the study within the Lagrangian formalism.
Moreover, the class of convex problems in optimal control is wide enough, and almost any smooth problem can be approximated by a convex, quadratic, or linear one. Problem (1) without phase constraints was investigated by the authors in [1–5]. In the linear-convex case, relying on the saddle-point inequalities for the Lagrange function, the authors proved the convergence of extragradient and extraproximal methods to a solution of the terminal control problem in all solution components: weak convergence in controls; strong convergence in phase and conjugate trajectories, as well as in the terminal variables of the intermediate (boundary value) problems. This turned out to be possible because the saddle-point inequalities for the Lagrange function represent, in the case under consideration, sufficient optimality conditions. These conditions, in contrast to the necessary conditions of the maximum principle, allow us to develop an evidence-based (heuristic-free) theory of methods for solving optimal control problems, which was demonstrated in [1–5].
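As a small numerical illustration (not part of the original paper), the fact that each admissible control generates a unique phase trajectory can be seen by integrating the state equation directly. The sketch below does this for a scalar instance of the dynamics in (1), with assumed sample data $D(t) \equiv -1$, $B(t) \equiv 1$ and a constant control $u \equiv 0.5$, and checks the result against the known closed-form solution.

```python
import math

def simulate(x0, u, D, B, t0, tf, dt=1e-4):
    # Forward-Euler integration of dx/dt = D*x + B*u(t),
    # a scalar sketch of the linear system in problem (1).
    x, t = x0, t0
    while t < tf - dt / 2:
        x += dt * (D * x + B * u(t))
        t += dt
    return x

# Constant control u(t) = 0.5 with D = -1, B = 1, x(0) = 0:
# the exact trajectory is x(t) = 0.5 * (1 - exp(-t)).
x_end = simulate(0.0, lambda t: 0.5, -1.0, 1.0, 0.0, 1.0)
x_exact = 0.5 * (1.0 - math.exp(-1.0))
```

The Euler step size controls the accuracy; any standard ODE integrator would serve equally well here.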

3 Phase Constraint Sections and Finite-Dimensional Intermediate Problems Generated by Them

In statement (1), we presented the phase constraints $G(t)x(t) \le g(t)$, $t \in [t_0, t_f]$, of continuous type. Below we describe an approach where, instead of continuous phase constraints, their sections $G_s x_s \le g_s$ are considered at certain instants of time $t_s$ on a discrete grid $\Gamma = \{t_0, t_1, \ldots, t_{s-1}, t_s, t_{s+1}, \ldots, t_f\}$. At these moments of time, finite-dimensional cross-section problems are formed, and between these moments (on the discretization segments $[t_{s-1}, t_s]$, $s = \overline{1, f}$) intermediate terminal control problems arise. Thus, the original problem, formulated on the entire segment $[t_0, t_f]$, is decomposed into a set of independent problems, each defined on its own sub-segment. The resulting intermediate problems no longer have phase constraints, since the phase constraints on the sub-segments have passed into the boundary value problems at the ends of these sub-segments.

This approach does not require a functional Slater condition. In fact, the existence of finite-dimensional saddle points in the intermediate spaces $\mathbb{R}^n$ (generated by the sections of the phase constraints at the given moments $t_s$) is sufficient. Each section has its own boundary value problem, and then the desired phase trajectory is drawn through all these solutions (like a thread through the eye of a needle) over the entire time interval. In finite-dimensional section spaces, the Slater condition for convex problems is always satisfied by definition. Except for the discrete phase constraints, the rest of problem (1) remains continuous. Combining the trajectories and other components of the problem over all time sub-segments yields the solution of the original problem over the entire time interval $[t_0, t_f]$. The approach based on sections can be interpreted as a method of decomposing a complex problem into a number of simple ones.

Thus, on each of the segments $[t_{s-1}, t_s]$, a specific segment $x_s(t)$ of the phase trajectory of differential equation (1) is defined. At the common point of the adjacent segments $[t_{s-1}, t_s]$ and $[t_s, t_{s+1}]$, the values $x_s(t_s)$ and $x_{s+1}(t_s)$ coincide by construction: $x_s(t_s) = x_{s+1}(t_s)$, i.e., on each segment of the partition, the right end of the trajectory coincides with the starting point of the trajectory on the next segment. As a result of discretization, the following multi-problem statement is obtained on the basis of (1):

$$
\begin{gathered}
\frac{d}{dt}x_s(t) = D(t)x_s(t) + B(t)u_s(t), \quad t \in [t_{s-1}, t_s],\\
x_s(t_{s-1}) = x_{s-1}^*, \quad x_s(t_s) = x_s^*, \quad u_s(t) \in U,\\
x_1^* \in \operatorname{Argmin}\{\langle \varphi_1, x_1\rangle \mid G_1 x_1 \le g_1,\ x_1 \in \mathbb{R}^n\}, \quad x_1^* \in X_1,\\
x_2^* \in \operatorname{Argmin}\{\langle \varphi_2, x_2\rangle \mid G_2 x_2 \le g_2,\ x_2 \in \mathbb{R}^n\}, \quad x_2^* \in X_2,\\
\ldots\\
x_f^* \in \operatorname{Argmin}\{\langle \varphi_f, x_f\rangle \mid G_f x_f \le g_f,\ x_f \in \mathbb{R}^n\}, \quad x_f^* \in X_f.
\end{gathered}
\tag{2}
$$

Here $x_s(t_s)$ is the value of the function $x_s(t)$ at the right end of the segment $[t_{s-1}, t_s]$; $x_s^*$ is the solution of the $s$th intermediate linear programming problem; $\varphi_s$ is the normal to the objective function; $X_s$ is an intermediate reachability set; $G_s = G(t_s)$, $g_s = g(t_s)$, $s = \overline{1, f}$. If we combine all parts of the trajectories $x_s(t)$, then we get the full trajectory $x(t)$, $t \in [t_0, t_f]$, on the entire segment. In other words, we have "broken" the original problem (1) into $f$ independent problems of the same kind.

For greater clarity, let us present system (2) in expanded form. The discretization $\Gamma$ generates time intervals $[t_{s-1}, t_s]$ on which the functions $x_s(t)$ are defined for all $s = \overline{1, f}$. Each of these functions is the restriction of the phase trajectory $x(t)$ to the segment $[t_{s-1}, t_s]$. In this model, for each $s$th time interval $[t_{s-1}, t_s]$, the $s$th controlled trajectory $x_s(t)$ and the $s$th intermediate problem are defined:

$$
\begin{gathered}
\frac{d}{dt}x_1(t) = D(t)x_1(t) + B(t)u_1(t), \quad t \in [t_0, t_1],\\
x_1(t_0) = x_0, \quad x_1(t_1) = x_1^*, \quad u_1(t) \in U,\\
x_1^* \in \operatorname{Argmin}\{\langle \varphi_1, x_1\rangle \mid G_1 x_1 \le g_1,\ x_1 \in \mathbb{R}^n\}, \quad x_1^* \in X_1, \quad x_1(t_1) = x_1^*,\\
\ldots\\
\frac{d}{dt}x_s(t) = D(t)x_s(t) + B(t)u_s(t), \quad t \in [t_{s-1}, t_s],\\
x_s(t_{s-1}) = x_{s-1}^*, \quad x_s(t_s) = x_s^*, \quad u_s(t) \in U,\\
x_s^* \in \operatorname{Argmin}\{\langle \varphi_s, x_s\rangle \mid G_s x_s \le g_s,\ x_s \in \mathbb{R}^n\}, \quad x_s^* \in X_s, \quad x_s(t_s) = x_s^*,\\
\ldots\\
\frac{d}{dt}x_f(t) = D(t)x_f(t) + B(t)u_f(t), \quad t \in [t_{f-1}, t_f],\\
x_f(t_{f-1}) = x_{f-1}^*, \quad x_f(t_f) = x_f^*, \quad u_f(t) \in U,\\
x_f^* \in \operatorname{Argmin}\{\langle \varphi_f, x_f\rangle \mid G_f x_f \le g_f,\ x_f \in \mathbb{R}^n\}, \quad x_f^* \in X_f, \quad x_f(t_f) = x_f^*.
\end{gathered}
\tag{3}
$$

So, within the framework of the proposed approach, the initial problem with phase constraints (1) is split into a finite set of independent intermediate terminal control problems without phase constraints. Each of these problems can be solved independently, starting with the first one. Then, drawing a phase trajectory through the solutions of the intermediate problems, we find a solution of the terminal control problem over the entire segment $[t_0, t_f]$. Methods for solving any of the subproblems of system (3) were developed by the authors in [1,2].
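The continuity condition $x_s(t_s) = x_{s+1}(t_s)$ that stitches the sub-segment trajectories together can be checked numerically. The sketch below (an illustration with assumed scalar data, not taken from the paper) integrates a trajectory over the whole interval and, separately, over two sub-segments with the right end of the first sub-segment passed as the initial state of the second; the two results agree.

```python
def integrate(x0, t0, tf, dt=1e-4):
    # Euler integration of dx/dt = -x + u with a constant control u = 1.
    x, t = x0, t0
    while t < tf - dt / 2:
        x += dt * (-x + 1.0)
        t += dt
    return x

full = integrate(0.0, 0.0, 1.0)      # solve on [t0, tf] in one pass
x_mid = integrate(0.0, 0.0, 0.5)     # solve on the first sub-segment
glued = integrate(x_mid, 0.5, 1.0)   # restart from x_1(t_1) = x_2(t_1)
```

Because the update rule is the same on both passes, the glued trajectory reproduces the single-pass one, mirroring the decomposition of problem (1) into the subproblems of system (3).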

4 Problem Statement in Vector-Matrix Form

For greater clarity, we present system (2) (or (3)) in a more compact vector-matrix form: the dynamics

$$
\frac{d}{dt}
\begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_f \end{pmatrix}
=
\begin{pmatrix} D_1 & 0 & \cdots & 0 \\ 0 & D_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & D_f \end{pmatrix}
\begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_f \end{pmatrix}
+
\begin{pmatrix} B_1 & 0 & \cdots & 0 \\ 0 & B_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & B_f \end{pmatrix}
\begin{pmatrix} u_1 \\ u_2 \\ \vdots \\ u_f \end{pmatrix},
$$

where $x(t_0) = x_0$, $x_s(t_s) = x_s^*$, $x_f(t_f) = x_f^*$, $u_s(t) \in U$,


and the intermediate problems

$$
\begin{pmatrix} x_1^* \\ x_2^* \\ \vdots \\ x_f^* \end{pmatrix}
\in \operatorname{Argmin}\left\{
\begin{pmatrix} \varphi_1 & \varphi_2 & \cdots & \varphi_f \end{pmatrix}
\begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_f \end{pmatrix}
\,\middle|\,
\begin{pmatrix} G_1 & 0 & \cdots & 0 \\ 0 & G_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & G_f \end{pmatrix}
\begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_f \end{pmatrix}
\le
\begin{pmatrix} g_1 \\ g_2 \\ \vdots \\ g_f \end{pmatrix}
\right\}.
\tag{4}
$$

Recall once again that each function $x(t)$ generates a vector with components $(x(t_1), \ldots, x(t_s), \ldots, x(t_f))$, the number of components being equal to the number of sampling points of the segment $[t_0, t_f]$. Each component of this vector is, in turn, a vector of size $n$. Thus, we have a space of dimension $\mathbb{R}^{n \times f}$. In this space, the block-diagonal matrix of $G(t_s)$, $s = \overline{1, f}$, is defined, each block of which is the submatrix $G_s = G(t_s)$ from (4), of dimension $m \times n$. We have described the matrix functional constraint of inequality type, with the right-hand side given by the vector $g = (g_1, g_2, \ldots, g_f)$. The linear objective function that completes the formulation of the finite-dimensional linear programming problem in (4) is the scalar product of the vectors $\varphi$ and $x$. Thus, in macro format, we can represent problem (4) in the form

$$
\begin{gathered}
\frac{d}{dt}x(t) = D(t)x(t) + B(t)u(t), \quad t_0 \le t \le t_f, \quad x(t_0) = x_0, \quad x(t_f) = x_f^*,\\
x^* \in \operatorname{Argmin}\{\langle \varphi, x\rangle \mid Gx \le g,\ x \in \mathbb{R}^n\}, \quad u(t) \in U.
\end{gathered}
\tag{5}
$$

Note that the macro system (5), obtained as a result of the scalarization of the intermediate problems (3) (or (4)), almost completely coincides with the terminal control problem with a boundary value problem at the right-hand end, suggested and explored in [1,2]. Therefore, the method for solving problem (5) and the proof of its convergence as a whole repeat the logic of the reasoning there. By a solution of differential system (5) we mean any pair $(x(t), u(t)) \in L_2^n[t_0, t_f] \times U$ that satisfies the condition

$$
x(t) = x(t_0) + \int_{t_0}^{t} \bigl(D(\tau)x(\tau) + B(\tau)u(\tau)\bigr)\,d\tau, \quad t_0 \le t \le t_f.
\tag{6}
$$

The trajectory $x(t)$ in (6) is an absolutely continuous function. The class of absolutely continuous functions is a linear variety everywhere dense in $L_2^n[t_0, t_f]$; in what follows, this class is denoted by $\mathrm{AC}^n[t_0, t_f] \subset L_2^n[t_0, t_f]$. For any pair of functions $(x(t), u(t)) \in \mathrm{AC}^n[t_0, t_f] \times U$, the Newton–Leibniz formula and, accordingly, the integration-by-parts formula hold.
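The block-diagonal matrices $\operatorname{diag}(D_1,\ldots,D_f)$ and $\operatorname{diag}(B_1,\ldots,B_f)$ appearing in the vector-matrix form above are easy to assemble programmatically. The following sketch (illustrative, with made-up sample blocks) builds such a matrix with NumPy.

```python
import numpy as np

def block_diag(blocks):
    # Assemble diag(D_1, ..., D_f) from a list of (possibly rectangular)
    # blocks, as in the vector-matrix dynamics of Sect. 4.
    rows = sum(b.shape[0] for b in blocks)
    cols = sum(b.shape[1] for b in blocks)
    out = np.zeros((rows, cols))
    r = c = 0
    for b in blocks:
        out[r:r + b.shape[0], c:c + b.shape[1]] = b
        r += b.shape[0]
        c += b.shape[1]
    return out

# Two 2x2 sample blocks: D_1 = 2*I and D_2 = matrix of ones.
D = block_diag([2.0 * np.eye(2), np.ones((2, 2))])
```

The same helper works for the constraint matrix $G = \operatorname{diag}(G_1,\ldots,G_f)$ of problem (4), whose blocks are rectangular.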

5 Saddle-Point Sufficient Optimality Conditions. Dual Approach

Drawing the corresponding analogies with the theory of linear programming, we write out the primal and dual Lagrange functions for problem (5). To do this, we scalarize system (5) and introduce a linear convolution known as the Lagrange function:

$$
L(p, \psi(t); x, x(t), u(t)) = \langle \varphi, x\rangle + \langle p, Gx - g\rangle
+ \int_{t_0}^{t_f} \Bigl\langle \psi(t),\, D(t)x(t) + B(t)u(t) - \frac{d}{dt}x(t) \Bigr\rangle\,dt,
$$

defined for all $p \in \mathbb{R}^m_+$, $\psi(t) \in \Psi_2^n[t_0, t_f]$, $x \in \mathbb{R}^n$, $(x(t), u(t)) \in \mathrm{AC}^n[t_0, t_f] \times U$. Here $x$ is a finite-dimensional vector composed of the values of the trajectory $x(t)$ at the sampling points; $\Psi_2^n[t_0, t_f]$ is a linear manifold of absolutely continuous functions from the adjoint space. The variety $\Psi_2^n[t_0, t_f]$ is everywhere dense in $L_2^n[t_0, t_f]$.

The saddle point $(p^*, \psi^*(t); x^*, x^*(t), u^*(t))$ of the Lagrange function is formed by the primal $(x^*, x^*(t), u^*(t))$ and dual $(p^*, \psi^*(t))$ solutions of problem (5) and, by definition, satisfies the system of inequalities

$$
\begin{aligned}
&\langle \varphi, x^*\rangle + \langle p, Gx^* - g\rangle + \int_{t_0}^{t_f} \Bigl\langle \psi(t),\, D(t)x^*(t) + B(t)u^*(t) - \frac{d}{dt}x^*(t) \Bigr\rangle\,dt\\
&\quad \le \langle \varphi, x^*\rangle + \langle p^*, Gx^* - g\rangle + \int_{t_0}^{t_f} \Bigl\langle \psi^*(t),\, D(t)x^*(t) + B(t)u^*(t) - \frac{d}{dt}x^*(t) \Bigr\rangle\,dt\\
&\quad \le \langle \varphi, x\rangle + \langle p^*, Gx - g\rangle + \int_{t_0}^{t_f} \Bigl\langle \psi^*(t),\, D(t)x(t) + B(t)u(t) - \frac{d}{dt}x(t) \Bigr\rangle\,dt
\end{aligned}
$$

for all $p \in \mathbb{R}^m_+$, $\psi(t) \in \Psi_2^n[t_0, t_f]$, $x \in \mathbb{R}^n$, $(x(t), u(t)) \in \mathrm{AC}^n[t_0, t_f] \times U$. If the original problem (5) has a primal and a dual solution, then this pair is a saddle point of the Lagrange function. Here, as in the finite-dimensional case, the dual solution is formed by the coordinates of the normal to the supporting plane at the minimum point. The converse is also true: the saddle point of the Lagrange function consists of the primal and dual solutions of the original problem (5). Using the formulas for passing to conjugate linear operators

$$
\langle \psi, Dx\rangle = \langle D^T\psi, x\rangle, \qquad \langle \psi, Bu\rangle = \langle B^T\psi, u\rangle,
$$

and the integration-by-parts formula on the segment $[t_0, t_f]$,

$$
\langle \psi(t_f), x(t_f)\rangle - \langle \psi(t_0), x(t_0)\rangle
= \int_{t_0}^{t_f} \Bigl\langle \frac{d}{dt}\psi(t), x(t) \Bigr\rangle\,dt
+ \int_{t_0}^{t_f} \Bigl\langle \psi(t), \frac{d}{dt}x(t) \Bigr\rangle\,dt,
$$

we write out the dual Lagrange function and the saddle-point system in conjugate form:

$$
L^T(p, \psi(t); x, x(t), u(t)) = \langle \varphi + G^Tp - \psi_f, x\rangle - \langle g, p\rangle + \langle \psi_0, x_0\rangle
+ \int_{t_0}^{t_f} \Bigl\langle D^T(t)\psi(t) + \frac{d}{dt}\psi(t), x(t) \Bigr\rangle\,dt
+ \int_{t_0}^{t_f} \langle B^T(t)\psi(t), u(t)\rangle\,dt
$$


for all $p \in \mathbb{R}^m_+$, $\psi(t) \in \Psi_2^n[t_0, t_f]$, $x \in \mathbb{R}^n$, $(x(t), u(t)) \in \mathrm{AC}^n[t_0, t_f] \times U$, where $x_0 = x(t_0)$, $\psi_0 = \psi(t_0)$, $\psi_f = \psi(t_f)$. The dual saddle-point system has the form

$$
\begin{aligned}
&\langle \varphi + G^Tp - \psi_f, x^*\rangle + \langle -g, p\rangle + \langle \psi_0, x_0^*\rangle
+ \int_{t_0}^{t_f} \Bigl\langle D^T(t)\psi(t) + \frac{d}{dt}\psi(t), x^*(t) \Bigr\rangle\,dt
+ \int_{t_0}^{t_f} \langle B^T(t)\psi(t), u^*(t)\rangle\,dt\\
&\quad \le \langle \varphi + G^Tp^* - \psi_f^*, x^*\rangle + \langle -g, p^*\rangle + \langle \psi_0^*, x_0^*\rangle
+ \int_{t_0}^{t_f} \Bigl\langle D^T(t)\psi^*(t) + \frac{d}{dt}\psi^*(t), x^*(t) \Bigr\rangle\,dt
+ \int_{t_0}^{t_f} \langle B^T(t)\psi^*(t), u^*(t)\rangle\,dt\\
&\quad \le \langle \varphi + G^Tp^* - \psi_f^*, x\rangle + \langle -g, p^*\rangle + \langle \psi_0^*, x_0\rangle
+ \int_{t_0}^{t_f} \Bigl\langle D^T(t)\psi^*(t) + \frac{d}{dt}\psi^*(t), x(t) \Bigr\rangle\,dt
+ \int_{t_0}^{t_f} \langle B^T(t)\psi^*(t), u(t)\rangle\,dt
\end{aligned}
$$

for all $p \in \mathbb{R}^m_+$, $\psi(t) \in \Psi_2^n[t_0, t_f]$, $x \in \mathbb{R}^n$, $(x(t), u(t)) \in \mathrm{AC}^n[t_0, t_f] \times U$. Both Lagrangians (primal and dual) have the same saddle point $(p^*, \psi^*(t); x^*, x^*(t), u^*(t))$, which satisfies the conjugate saddle-point system. From the analysis of the saddle-point inequalities, we can write out the mutually dual problems:

the primal problem:

$$
x^* \in \operatorname{Argmin}\Bigl\{\langle \varphi, x\rangle \,\Bigm|\, Gx \le g,\ x \in \mathbb{R}^n,\ \frac{d}{dt}x(t) = D(t)x(t) + B(t)u(t),\ x(t_0) = x_0,\ u(t) \in U\Bigr\};
$$

the dual problem:

$$
\begin{gathered}
(p^*, \psi^*(t)) \in \operatorname{Argmax}\Bigl\{-\langle g, p\rangle + \langle \psi_0, x_0^*\rangle + \int_{t_0}^{t_f} \langle \psi(t), B(t)u^*(t)\rangle\,dt \,\Bigm|\\
D^T(t)\psi(t) + \frac{d}{dt}\psi(t) = 0,\ \psi_f = \varphi + G^Tp,\ p \in \mathbb{R}^m_+,\ \psi(t) \in \Psi_2^n[t_0, t_f]\Bigr\},\\
\int_{t_0}^{t_f} \langle B^T(t)\psi^*(t), u^*(t) - u(t)\rangle\,dt \le 0, \quad u(t) \in U.
\end{gathered}
$$

6 Method for Solving. Convergence Technique

Replacing the variational inequalities in the above system with the corresponding equations involving the projection operator, we can write the differential system in operator form. Then, based on this system, we write out a saddle-point method of extragradient type for computing the saddle point of the Lagrange function. The two components of the saddle point are the primal and dual solutions of problem (5).
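As a minimal finite-dimensional illustration of such a projection-based extragradient scheme (a toy sketch, not the method of this paper itself), consider the problem $\min\{x^2 \mid x \ge 1\}$ with the Lagrangian $L(x, p) = x^2 + p(1 - x)$, $p \ge 0$, whose saddle point is $(x^*, p^*) = (1, 2)$. Each iteration performs a predictive half-step and a basic half-step, projecting the dual variable onto the nonnegative half-line.

```python
x, p = 0.0, 0.0   # primal and dual iterates
alpha = 0.1       # step length

for _ in range(5000):
    # Predictive half-step: gradient moves with projection pi_+ for p.
    x_bar = x - alpha * (2.0 * x - p)
    p_bar = max(0.0, p + alpha * (1.0 - x))
    # Basic half-step: gradients evaluated at the predicted point.
    x, p = (x - alpha * (2.0 * x_bar - p_bar),
            max(0.0, p + alpha * (1.0 - x_bar)))
```

The predictive point stabilizes the iteration; a plain gradient ascent-descent on the same Lagrangian can circle around the saddle point instead of converging to it.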


The formulas of this iterative method are as follows:

1) predictive half-step:

$$
\begin{gathered}
\frac{d}{dt}x^k(t) = D(t)x^k(t) + B(t)u^k(t), \quad x^k(t_0) = x_0, \qquad
\bar{p}^k = \pi_+\bigl(p^k + \alpha(Gx^k - g)\bigr),\\
\frac{d}{dt}\psi^k(t) + D^T(t)\psi^k(t) = 0, \quad \psi_f^k = \varphi + G^Tp^k, \qquad
\bar{u}^k(t) = \pi_U\bigl(u^k(t) - \alpha B^T(t)\psi^k(t)\bigr);
\end{gathered}
$$

2) basic half-step:

$$
\begin{gathered}
\frac{d}{dt}\bar{x}^k(t) = D(t)\bar{x}^k(t) + B(t)\bar{u}^k(t), \quad \bar{x}^k(t_0) = x_0, \qquad
p^{k+1} = \pi_+\bigl(p^k + \alpha(G\bar{x}^k - g)\bigr),\\
\frac{d}{dt}\bar{\psi}^k(t) + D^T(t)\bar{\psi}^k(t) = 0, \quad \bar{\psi}_f^k = \varphi + G^T\bar{p}^k, \qquad
u^{k+1}(t) = \pi_U\bigl(u^k(t) - \alpha B^T(t)\bar{\psi}^k(t)\bigr), \quad k = 0, 1, 2, \ldots
\end{gathered}
$$

Here, at each half-step, two differential equations are solved and an iterative step in the controls is carried out. A theorem on the convergence of the method to a solution is formulated below.

Theorem 1. If the set of solutions $(p^*, \psi^*(t); x^*, x^*(t), u^*(t))$ of problem (5) is not empty, then the sequence $\{(p^k, \psi^k(t); x^k, x^k(t), u^k(t))\}$ generated by the method with step length $\alpha \le \alpha_0$ contains a subsequence $\{(p^{k_i}, \psi^{k_i}(t); x^{k_i}, x^{k_i}(t), u^{k_i}(t))\}$ that converges to a solution of the problem, including: weak convergence in controls and strong convergence in trajectories, conjugate trajectories, and terminal variables.

The proof of the theorem is carried out in the same way as in [6]. The computational process presented in this paper implements the idea of evidence-based computing: it allows us to obtain guaranteed solutions of the problem with a given accuracy, consistent with the accuracy of the initial information.

Conclusions. In this paper, we study a terminal control problem with a finite-dimensional boundary value problem at the right-hand end of the time interval and phase constraints distributed over a finite given number of points of this interval. The problem has a convex structure, which makes it possible, within duality theory, using the saddle-point properties of the Lagrangian, to develop a theory of saddle-point methods for solving terminal control problems. The approach proposed here makes it possible to deal with the complex case of intermediate phase constraints on the controlled phase trajectories. The convergence of the computational process in all components of the solution has been proved: namely, weak convergence in controls and strong convergence in phase and dual trajectories and in terminal variables.


References

1. Antipin, A.S., Khoroshilova, E.V.: Linear programming and dynamics. Ural Math. J. 1(1), 3–19 (2015)
2. Antipin, A.S., Khoroshilova, E.V.: Saddle-point approach to solving problem of optimal control with fixed ends. J. Global Optim. 65(1), 3–17 (2016)
3. Antipin, A.S., Khoroshilova, E.V.: On methods of terminal control with boundary value problems: Lagrange approach. In: Goldengorin, B. (ed.) Optimization and Applications in Control and Data Sciences, pp. 17–49. Springer, New York (2016)
4. Antipin, A.S., Khoroshilova, E.V.: Lagrangian as a tool for solving linear optimal control problems with state constraints. In: Proceedings of the International Conference on Optimal Control and Differential Games Dedicated to L.S. Pontryagin on the Occasion of His 110th Birthday, pp. 23–26 (2018)
5. Antipin, A., Khoroshilova, E.: Controlled dynamic model with boundary-value problem of minimizing a sensitivity function. Optim. Lett. 13(3), 451–473 (2017). https://doi.org/10.1007/s11590-017-1216-8
6. Antipin, A.S., Khoroshilova, E.V.: Dynamics, phase constraints, and linear programming. Comput. Math. Math. Phys. 60(2), 184–202 (2020)
7. Dmitruk, A.V.: Maximum principle for the general optimal control problem with phase and regular mixed constraints. Comput. Math. Model. 4, 364–377 (1993)
8. Dykhta, V., Samsonyuk, O.: Some applications of Hamilton-Jacobi inequalities for classical and impulsive optimal control problems. Eur. J. Control 17(1), 55–69 (2011)
9. Gornov, A.Y., Tyatyushkin, A.I., Finkelstein, E.A.: Numerical methods for solving terminal optimal control problems. Comput. Math. Math. Phys. 56(2), 221–234 (2016)
10. Hartl, R.F., Sethi, S.P., Vickson, R.G.: A survey of the maximum principles for optimal control problems with state constraints. SIAM Rev. 37(2), 181–218 (1995)
11. Mayne, D.Q., Rawlings, J.B., Rao, C.V., Scokaert, P.O.M.: Constrained model predictive control: stability and optimality. Automatica (J. IFAC) 36(6), 789–814 (2000)
12. Pales, Z., Zeidan, V.: Optimal control problems with set-valued control and state constraints. SIAM J. Optim. 14(2), 334–358 (2003)
13. Pytlak, R.: Numerical Methods for Optimal Control Problems with State Constraints. Lecture Notes in Mathematics, vol. 1707. Springer, Berlin, Heidelberg (1999)
14. Rao, A.V.: A survey of numerical methods for optimal control. Advances in the Astronautical Sciences, Preprint AAS 09-334 (2009)

Pricing in Dynamic Marketing: The Cases of Piece-Wise Constant Sale and Retail Discounts

Igor Bykadorov1,2,3(B)

1 Sobolev Institute of Mathematics, 4 Koptyug Ave., 630090 Novosibirsk, Russia
[email protected]
2 Novosibirsk State University, 1 Pirogova St., 630090 Novosibirsk, Russia
3 Novosibirsk State University of Economics and Management, Kamenskaja street 56, 630099 Novosibirsk, Russia

Abstract. We consider a stylized distribution channel where a manufacturer sells a single kind of good to a retailer. In the classical setting, the profit of the manufacturer is quadratic w.r.t. the wholesale discount, while the profit of the retailer is quadratic w.r.t. the retail discount (pass-through). Thus, the wholesale prices and the retail prices are continuous. These results are elegant mathematically but not adequate economically. Therefore, we assume that the wholesale discount and the retail discount are piece-wise constant. We show the strict concavity of the retailer's profit w.r.t. the retail discount levels. As for the manufacturer's profit w.r.t. the wholesale discount levels, we show that strict concavity can be guaranteed only in the case when the retail discount is constant and sufficiently large.

Keywords: Retailer · Piece-wise constant prices · Wholesale discount · Retail discount · Concavity

1 Introduction

Typically, economic agents stimulate production and sales through communications (advertising, promotion, etc.), as well as through various types of influence on pricing. Moreover, in the "producer – retailer – consumer" structure, various types of discounts are often used. One of the first works in this direction is the paper [1], see also [2]. Among the many works on this subject, let us note [3–8].

In the present paper, we study a dynamic marketing model, cf. [9–12]. At every moment $t \in [t_1, t_2]$, the manufacturer stimulates the retailer by a wholesale discount as the manufacturer's control $\alpha(t)$, while the retailer stimulates sales by a retail discount as the retailer's control $\beta(t)$. We assume that the controls $\alpha(t)$ and $\beta(t)$ are piece-wise constant; moreover, the time switches of the discount levels are fixed and known.¹ This way, the optimal control problems reduce to mathematical programming problems in which the profit of the manufacturer is quadratic with respect to the wholesale discount level(s), while the profit of the retailer is quadratic with respect to the pass-through level(s).

¹ It seems realistic, cf. [12].

© Springer Nature Switzerland AG 2020
N. Olenev et al. (Eds.): OPTIMA 2020, LNCS 12422, pp. 27–39, 2020. https://doi.org/10.1007/978-3-030-62867-3_3

The first question is the concavity property of the profits; this allows one to obtain the optimal behavior strategies of the manufacturer and the retailer. The main result of [12] is that the retailer's profit is strictly concave w.r.t. the pass-through levels. Since the proof relies very strongly on the structure of the Hessian matrix of the retailer's profit, it turns out to be inapplicable to the study of the concavity of the manufacturer's profit.

The present paper is devoted to the development of [12]. First, we show that the Hessian matrix of the retailer's profit is a strictly diagonally dominant matrix. This allows us not only to obtain a simpler proof of the main result of [12], but also to prove the strict concavity of the manufacturer's profit in the case when the retail discount is constant and sufficiently large. As for the case when the retail discount is piece-wise constant, we obtain a sufficient condition (à la "retail discount levels should not differ much from each other") guaranteeing strict concavity of the manufacturer's profit. Finally, examples are given in which the manufacturer's profit is not concave, which indicates the "unimprovability" of the result.

The paper is organized as follows. In Sect. 2.1 we set up the basic model as in [9–12]. In Sect. 2.2 we consider explicitly the case when the wholesale discount and the pass-through are piece-wise constant. Here we repeat the results of [12] about the form of cumulative sales (Proposition 1) and about the strict concavity of the retailer's profit (Proposition 2). Moreover, we show that the Hessian matrix of the retailer's profit is a strictly diagonally dominant matrix (Lemma 1) and that the manufacturer's profit is strictly concave for constant pass-through (Proposition 3).
Finally, we obtain a sufficient condition for strict concavity of the manufacturer's profit for piece-wise constant pass-through (Lemma 2), as well as examples in which the manufacturer's profit is not concave. Section 3 contains the proofs of the Lemmas and Propositions and the discussion of the examples. Section 4 concludes.

2 Model

Let us recall the model; see [12].

2.1 Basic Model

Let us consider a vertical distribution channel. On the market, there are a manufacturer ("firm"), a retailer, and a consumer. The firm produces and sells a single product during the time period $[t_1, t_2]$. Let $p$ be the unit price in a situation where the firm sells the product directly to the consumer, bypassing the retailer, $p > 0$. To increase its profit, the firm uses the services of a retailer. To encourage the retailer to sell the commodity, the firm provides it with a wholesale discount $\alpha(t) \in [A_1, A_2] \subset [0, 1]$.

Retailing Under Piece-Wise Constant Discounts


Thus, the wholesale price of the goods is $p_w(t) = (1 - \alpha(t))p$. In turn, the retailer directs a retail discount ("pass-through"), i.e., a part $\beta(t) \in [B_1, B_2] \subset [0, 1]$ of the discount $\alpha(t)$, to reduce the market price of the commodity. Therefore, the retail price of the commodity equals $(1 - \beta(t)\alpha(t))p$. Then the retailer's profit per unit from the sale is the difference between the retail price and the wholesale price, i.e., $\alpha(t)(1 - \beta(t))p$.

Let $x(t)$ be the accumulated sales during the period $[t_1, t]$, and let $c_0$ be the unit production cost. At the end of the selling period, the total profit of the firm is

$$
\Pi_m = \int_{t_1}^{t_2} (p_w(t) - c_0)\,\dot{x}(t)\,dt = \int_{t_1}^{t_2} (q - \alpha(t)p)\,\dot{x}(t)\,dt,
$$

where $q = p - c_0$. The total profit of the retailer is

$$
\Pi_r = p \int_{t_1}^{t_2} \dot{x}(t)\,\alpha(t)(1 - \beta(t))\,dt.
$$

We assume that the accumulated sales $x(t)$ and the motivation of the retailer $M(t)$ satisfy the differential equations

$$
\dot{M}(t) = \gamma\dot{x}(t) + \varepsilon(\alpha(t) - \overline{\alpha}), \qquad
\dot{x}(t) = -\theta x(t) + \delta M(t) + \eta\alpha(t)\beta(t),
$$

where $\gamma > 0$, $\varepsilon > 0$, $\theta > 0$, $\delta > 0$, $\eta > 0$; see [12] for details.² Let $\overline{M} > 0$ be the initial motivation of the retailer. Thus, the Manufacturer–Retailer Problem is

$$
\begin{gathered}
\Pi_m \longrightarrow \max_\alpha, \qquad \Pi_r \longrightarrow \max_\beta,\\
\dot{x}(t) = -\theta x(t) + \delta M(t) + \eta\alpha(t)\beta(t), \qquad
\dot{M}(t) = \gamma\dot{x}(t) + \varepsilon(\alpha(t) - \overline{\alpha}),\\
x(t_1) = 0, \quad M(t_1) = \overline{M}, \quad \alpha(t) \in [A_1, A_2] \subset [0, 1], \quad \beta(t) \in [B_1, B_2] \subset [0, 1].
\end{gathered}
$$

² The parameter $\overline{\alpha} \in [A_1, A_2]$ takes into account the fact that the retailer has some expectations about the wholesale discount: the motivation is reduced if the retailer is dissatisfied with the wholesale discount, i.e., if $\alpha(t) < \overline{\alpha}$; on the contrary, the motivation increases if $\alpha(t) > \overline{\alpha}$.
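The sales–motivation dynamics above are easy to explore numerically; the sketch below (with assumed sample parameter values satisfying $a = \theta - \gamma\delta > 0$, not taken from the paper) integrates the pair $(x, M)$ by the Euler method for constant controls $\alpha$, $\beta$, using the fact that $\dot{M}$ is expressed through the already computed $\dot{x}$.

```python
# Assumed sample parameters (not from the paper): theta, delta, eps, gamma, eta > 0.
theta, delta, eps, gamma, eta = 1.0, 0.2, 0.1, 0.6, 2.0
alpha, beta, alpha_bar = 0.7, 0.6, 0.5   # constant controls and expectation level
x, M = 0.0, 1.0                          # x(t1) = 0, M(t1) = M_bar = 1
dt = 1e-3

for _ in range(1000):                    # integrate over [t1, t2] = [0, 1]
    xdot = -theta * x + delta * M + eta * alpha * beta
    Mdot = gamma * xdot + eps * (alpha - alpha_bar)   # motivation follows sales
    x += dt * xdot
    M += dt * Mdot
```

With $\alpha > \overline{\alpha}$ and positive sales growth, the motivation $M$ increases over the selling period, in line with the interpretation in the footnote above.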


2.2 The Case: Wholesale Discount and Pass-Through Are Piece-Wise Constant

Let, for some $t_1 = \tau_0 < \tau_1 < \ldots < \tau_n < \tau_{n+1} = t_2$,

$$
\alpha(t) = \alpha_i, \qquad \beta(t) = \beta_i, \qquad t \in (\tau_{i-1}, \tau_i), \quad i \in \{1, \ldots, n+1\}.
$$

Then, due to the continuity of the state variables,

$$
x(t) = x_i(t), \qquad M(t) = M_i(t), \qquad t \in [\tau_{i-1}, \tau_i], \quad i \in \{1, \ldots, n+1\},
$$

where $x_i(t)$ and $M_i(t)$ are the solutions of the systems³

$$
\begin{gathered}
\dot{x}_i(t) = -\theta x_i(t) + \delta M_i(t) + \eta\alpha_i\beta_i, \qquad
\dot{M}_i(t) = \gamma\dot{x}_i(t) + \varepsilon(\alpha_i - \overline{\alpha}),\\
x_i(\tau_{i-1}) = x_{i-1}(\tau_{i-1}), \qquad M_i(\tau_{i-1}) = M_{i-1}(\tau_{i-1}),\\
t \in [\tau_{i-1}, \tau_i], \quad i \in \{1, \ldots, n+1\}.
\end{gathered}
$$

We get

$$
\Pi_r = p \cdot \sum_{i=1}^{n+1} (1 - \beta_i)\,\alpha_i\,\bigl(x_i(\tau_i) - x_i(\tau_{i-1})\bigr),
\tag{1}
$$

$$
\Pi_m = p \cdot \sum_{i=1}^{n} (\alpha_{i+1} - \alpha_i)\,x_i(\tau_i) + (q - \alpha_{n+1}p)\,x(t_2).
\tag{2}
$$

Therefore, we need the expressions for $x_i(\tau_i)$. Let $a = \theta - \gamma\delta$. Under the rather natural condition of concavity of cumulative sales for a constant wholesale price (see details in [9,11]), we assume

$$
a > 0.
\tag{3}
$$

Let⁴

$$
\begin{gathered}
K(t) = \frac{\delta\overline{M}}{a}\bigl(1 - e^{a(t_1-t)}\bigr) + \frac{\overline{\alpha}\delta\varepsilon}{a^2}\bigl(1 - e^{a(t_1-t)} + a(t_1-t)\bigr),\\
H_i(t) = \frac{\eta}{a}\bigl(1 - e^{a(\tau_{i-1}-t)}\bigr), \qquad t \ge \tau_{i-1}, \quad i \in \{1, \ldots, n+1\},\\
L_i(t) = -\frac{\delta\varepsilon}{a^2}\bigl(1 - e^{a(\tau_{i-1}-t)} + a(\tau_{i-1}-t)\bigr), \qquad t \ge \tau_{i-1}, \quad i \in \{1, \ldots, n+1\}.
\end{gathered}
$$

³ Note that $x_1(\tau_0) = 0$ while $M_1(\tau_0) = \overline{M}$.
⁴ Due to (3), these formulas are well defined.
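The role of these functions can be checked numerically on the first interval: for constant controls, the combination $K(t) + (H_1(t)\beta_1 + L_1(t))\alpha_1$ reproduces the first-interval trajectory $x_1(t)$. The sketch below (with assumed sample parameter values) compares this closed-form value against direct Euler integration of the system for $i = 1$.

```python
import math

# Assumed sample parameters with a = theta - gamma*delta > 0, as required by (3).
theta, delta, eps, gamma, eta = 1.0, 0.2, 0.1, 0.6, 2.0
alpha1, beta1, alpha_bar, M_bar = 0.7, 0.6, 0.5, 1.0
a = theta - gamma * delta
t1, t = 0.0, 1.0

# Closed-form value x_1(t) = K(t) + (H_1(t)*beta_1 + L_1(t))*alpha_1 (tau_0 = t1).
e = math.exp(a * (t1 - t))
K = delta * M_bar / a * (1.0 - e) \
    + alpha_bar * delta * eps / a**2 * (1.0 - e + a * (t1 - t))
H1 = eta / a * (1.0 - e)
L1 = -delta * eps / a**2 * (1.0 - e + a * (t1 - t))
x_closed = K + (H1 * beta1 + L1) * alpha1

# Euler integration of the same first-interval system.
x, M, dt = 0.0, M_bar, 1e-4
for _ in range(10000):
    xdot = -theta * x + delta * M + eta * alpha1 * beta1
    M += dt * (gamma * xdot + eps * (alpha1 - alpha_bar))
    x += dt * xdot
```

The two values agree up to the Euler discretization error, which supports the closed-form expressions for $K$, $H_i$, $L_i$ above.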


Proposition 1 (see [12]). For $t \in [\tau_{i-1}, \tau_i]$, $i \in \{1, \ldots, n+1\}$,

$$
x_i(t) = K(t) + \bigl(H_i(t)\beta_i + L_i(t)\bigr)\alpha_i
+ \sum_{j=1}^{i-1} \Bigl(\bigl(H_j(t) - H_{j+1}(t)\bigr)\beta_j + L_j(t) - L_{j+1}(t)\Bigr)\alpha_j.
$$

To optimize the profits, we first need to study their concavity. The main result of [12] is

Proposition 2 (see [12]). The retailer's profit $\Pi_r$ is strictly concave with respect to the pass-through levels $\beta_i$, $i \in \{1, \ldots, n+1\}$.

The proof in [12] relies very strongly on the structure of the Hessian matrix of the retailer's profit and turns out to be inapplicable to the study of the concavity of the manufacturer's profit. The present paper is devoted to the development of [12]. First, we show

Lemma 1. The Hessian matrix of the retailer's profit is a strictly diagonally dominant matrix.

Proof. See Sect. 3.1.

This lemma allows us not only to obtain a simpler proof of Proposition 2, but also to prove the strict concavity of the manufacturer's profit in the case when the retail discount is constant. More precisely, the following Proposition holds.

Proposition 3. Let $\beta_i = \beta$, $i \in \{1, \ldots, n+1\}$, and

$$
\beta \ge \frac{\delta\varepsilon}{a\eta}.
\tag{4}
$$

Then the manufacturer's profit $\Pi_m$ is strictly concave with respect to the wholesale discount levels $\alpha_i$, $i \in \{1, \ldots, n+1\}$.

Proof. See Sect. 3.2.

As for the case when the retail discount is piece-wise constant, we can give a sufficient condition (à la "retail discount levels should not differ much from each other") guaranteeing strict concavity of the manufacturer's profit. For simplicity, let us consider the case $n = 1$.

Lemma 2. Let $n = 1$ and

$$
\beta_1 \in \Bigl[\frac{\delta\varepsilon}{a\eta},\ 2\beta_2 - \frac{\delta\varepsilon}{a\eta}\Bigr].
$$

Then the manufacturer's profit $\Pi_m$ is strictly concave with respect to the wholesale discount levels $\alpha_1$, $\alpha_2$.

Proof. See Sect. 3.3.


Let us present examples in which the manufacturer's profit is not concave, which indicates the "unimprovability" of the result. (See Sect. 3.4 for details.)

Example 1. Let $n = 1$, $t_1 = 0$, $\tau_1 = 7$, $t_2 = 10$, $\beta_1 = 0.88$, $\beta_2 = 0.1$, $\delta = 0.2$, $\varepsilon = 0.1$, $\eta = 2$, $\theta = 1$, $\gamma = 0.6$. Then $a = \theta - \gamma\delta = 0.88$ and $\det \Pi''_m < 0$.

Example 2. Let $n = 1$, $t_1 = 0$, $\tau_1 = 3$, $t_2 = 6$, $\beta_1 = 0.9$, $\beta_2 = 0.1$, $\delta = 0.2$, $\varepsilon = 0.1$, $\eta = 2$, $\theta = 1$, $\gamma = 0.6$. Then $a = \theta - \gamma\delta = 0.88$ and $\det \Pi''_m < 0$.

Example 2 shows that even if $\tau_1$ is the middle of $[t_1, t_2]$, the profit $\Pi_m$ can be non-concave.

3 Proofs. Examples

3.1 Proof of Lemma 1

Due to (1), we get

$$\frac{\partial^2 \Pi_r}{\partial\beta_i\,\partial\beta_j}=\begin{cases}-2p\alpha_i\cdot\left(\dfrac{\partial x_i(\tau_i)}{\partial\beta_i}-\dfrac{\partial x_i(\tau_{i-1})}{\partial\beta_i}\right),& i=j\in\{1,\dots,n+1\},\\[2mm]-p\alpha_j\cdot\left(\dfrac{\partial x_j(\tau_j)}{\partial\beta_i}-\dfrac{\partial x_j(\tau_{j-1})}{\partial\beta_i}\right),& i\in\{1,\dots,n\},\ j\in\{i+1,\dots,n+1\},\\[2mm]-p\alpha_i\cdot\left(\dfrac{\partial x_i(\tau_i)}{\partial\beta_j}-\dfrac{\partial x_i(\tau_{i-1})}{\partial\beta_j}\right),& j\in\{1,\dots,n\},\ i\in\{j+1,\dots,n+1\},\end{cases}$$

i.e.,

$$\frac{\partial^2 \Pi_r}{\partial\beta_i\,\partial\beta_j}=\begin{cases}-2(\alpha_i)^2\cdot\dfrac{p\eta}{a}\cdot s_i,& i=j\in\{1,\dots,n+1\},\\[2mm]\alpha_i\alpha_j\cdot\dfrac{p\eta}{a}\cdot s_is_j\cdot\displaystyle\prod_{k=i+1}^{j-1}(1-s_k),& i\in\{1,\dots,n\},\ j\in\{i+1,\dots,n+1\},\\[2mm]\alpha_i\alpha_j\cdot\dfrac{p\eta}{a}\cdot s_is_j\cdot\displaystyle\prod_{k=j+1}^{i-1}(1-s_k),& j\in\{1,\dots,n\},\ i\in\{j+1,\dots,n+1\},\end{cases}$$

where

$$s_i=1-e^{T_i}=1-e^{a(\tau_{i-1}-\tau_i)}\in(0,1),\qquad T_i=a(\tau_{i-1}-\tau_i)<0,\qquad i\in\{1,\dots,n+1\}.$$

We get

$$\det\Pi_r''=\left(\frac{p\eta}{a}\right)^{n+1}\cdot\prod_{i=1}^{n+1}(\alpha_i)^2\cdot\det A,$$

Retailing Under Piece-Wise Constant Discounts

33

where the elements of the matrix A are

$$A_{ij}=\begin{cases}-2s_i,& i=j\in\{1,\dots,n+1\},\\[1mm]s_is_j\cdot\displaystyle\prod_{k=i+1}^{j-1}(1-s_k),& i\in\{1,\dots,n\},\ j\in\{i+1,\dots,n+1\},\\[1mm]s_is_j\cdot\displaystyle\prod_{k=j+1}^{i-1}(1-s_k),& j\in\{1,\dots,n\},\ i\in\{j+1,\dots,n+1\}.\end{cases}$$

Further,

$$\sum_{j=1}^{n+1}A_{ij}=\sum_{j=1}^{i-1}A_{ij}+A_{ii}+\sum_{j=i+1}^{n+1}A_{ij}=s_i\cdot\left(\sum_{j=1}^{i-1}\left(s_j\cdot\prod_{k=j+1}^{i-1}(1-s_k)\right)-2+\sum_{j=i+1}^{n+1}\left(s_j\cdot\prod_{k=i+1}^{j-1}(1-s_k)\right)\right)$$
$$=s_i\cdot\left(\sum_{j=1}^{i-1}\left((1-r_j)\cdot\prod_{k=j+1}^{i-1}r_k\right)-2+\sum_{j=i+1}^{n+1}\left((1-r_j)\cdot\prod_{k=i+1}^{j-1}r_k\right)\right),$$

where r_i = 1 − s_i ∈ (0, 1), i ∈ {1, …, n + 1}.

Let us show that

$$\sum_{j=1}^{i-1}\left((1-r_j)\cdot\prod_{k=j+1}^{i-1}r_k\right)=1-\prod_{k=1}^{i-1}r_k,\qquad i\in\{2,\dots,n+1\},\tag{5}$$

and

$$\sum_{j=i+1}^{n+1}\left((1-r_j)\cdot\prod_{k=i+1}^{j-1}r_k\right)=1-\prod_{k=i+1}^{n+1}r_k,\qquad i\in\{1,\dots,n\}.\tag{6}$$

First we show (5) by induction on i (empty products are equal to 1). For i = 2,

$$\sum_{j=1}^{1}\left((1-r_j)\cdot\prod_{k=j+1}^{1}r_k\right)=(1-r_1)\cdot\prod_{k=2}^{1}r_k=1-r_1.$$

For i = 3,

$$\sum_{j=1}^{2}\left((1-r_j)\cdot\prod_{k=j+1}^{2}r_k\right)=(1-r_1)\cdot r_2+(1-r_2)\cdot\prod_{k=3}^{2}r_k=(1-r_1)\cdot r_2+1-r_2=1-r_1r_2.$$

34

I. Bykadorov

Let (5) hold for i = f, i.e., let

$$\sum_{j=1}^{f-1}\left((1-r_j)\cdot\prod_{k=j+1}^{f-1}r_k\right)=1-\prod_{k=1}^{f-1}r_k.$$

Then for i = f + 1 we get

$$\sum_{j=1}^{f}\left((1-r_j)\cdot\prod_{k=j+1}^{f}r_k\right)=\sum_{j=1}^{f-1}\left((1-r_j)\cdot\prod_{k=j+1}^{f}r_k\right)+(1-r_f)\cdot\prod_{k=f+1}^{f}r_k$$
$$=\sum_{j=1}^{f-1}\left((1-r_j)\cdot\prod_{k=j+1}^{f-1}r_k\right)\cdot r_f+1-r_f=\left(1-\prod_{k=1}^{f-1}r_k\right)\cdot r_f+1-r_f=1-\prod_{k=1}^{f}r_k.$$

So (5) holds. As to (6), let us apply induction on n. For n = 1, we have i = 1 and

$$\sum_{j=2}^{2}\left((1-r_j)\cdot\prod_{k=2}^{j-1}r_k\right)=(1-r_2)\cdot\prod_{k=2}^{1}r_k=1-r_2.$$

For n = 2,

$$i=1:\quad \sum_{j=2}^{3}\left((1-r_j)\cdot\prod_{k=2}^{j-1}r_k\right)=(1-r_2)\cdot\prod_{k=2}^{1}r_k+(1-r_3)\cdot\prod_{k=2}^{2}r_k=1-r_2+(1-r_3)\cdot r_2=1-r_2r_3,$$
$$i=2:\quad \sum_{j=3}^{3}\left((1-r_j)\cdot\prod_{k=3}^{j-1}r_k\right)=(1-r_3)\cdot\prod_{k=3}^{2}r_k=1-r_3.$$

Let (6) hold for n = F − 1, i.e., let

$$\sum_{j=i+1}^{F}\left((1-r_j)\cdot\prod_{k=i+1}^{j-1}r_k\right)=1-\prod_{k=i+1}^{F}r_k,\qquad i\in\{1,\dots,F-1\}.$$


Then for n = F we get

$$\sum_{j=i+1}^{F+1}\left((1-r_j)\cdot\prod_{k=i+1}^{j-1}r_k\right)=\sum_{j=i+1}^{F}\left((1-r_j)\cdot\prod_{k=i+1}^{j-1}r_k\right)+(1-r_{F+1})\cdot\prod_{k=i+1}^{F}r_k$$
$$=1-\prod_{k=i+1}^{F}r_k+(1-r_{F+1})\cdot\prod_{k=i+1}^{F}r_k=1-\prod_{k=i+1}^{F+1}r_k,\qquad i\in\{1,\dots,F\}.$$

So (6) holds. Due to (5) and (6),

$$\sum_{j=1}^{n+1}A_{ij}=s_i\cdot\left(\sum_{j=1}^{i-1}\left((1-r_j)\cdot\prod_{k=j+1}^{i-1}r_k\right)-2+\sum_{j=i+1}^{n+1}\left((1-r_j)\cdot\prod_{k=i+1}^{j-1}r_k\right)\right)$$
$$=s_i\cdot\left(1-\prod_{k=1}^{i-1}r_k-2+1-\prod_{k=i+1}^{n+1}r_k\right)=-s_i\cdot\left(\prod_{k=1}^{i-1}r_k+\prod_{k=i+1}^{n+1}r_k\right)<0.$$

Therefore, A is strictly diagonally dominant. □
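The identities (5), (6) and the sign of the row sums of A are easy to verify numerically. The following sketch (an illustration added here, not part of the original proof) checks them for random values s_i ∈ (0, 1):

```python
import numpy as np

def check_row_sums(n, rng):
    # Random s_i in (0, 1) and r_i = 1 - s_i, as in the proof of Lemma 1.
    s = rng.uniform(0.05, 0.95, n + 1)
    r = 1.0 - s

    # The matrix A defined above (0-based indices: i, j = 0..n).
    A = np.empty((n + 1, n + 1))
    for i in range(n + 1):
        for j in range(n + 1):
            if i == j:
                A[i, j] = -2.0 * s[i]
            else:
                lo, hi = min(i, j), max(i, j)
                A[i, j] = s[i] * s[j] * np.prod(r[lo + 1:hi])

    # Identity (5): sum_{j<i} (1 - r_j) prod_{k=j+1}^{i-1} r_k = 1 - prod_{k<i} r_k.
    for i in range(1, n + 1):
        lhs = sum((1 - r[j]) * np.prod(r[j + 1:i]) for j in range(i))
        assert abs(lhs - (1 - np.prod(r[:i]))) < 1e-12
    # Identity (6): sum_{j>i} (1 - r_j) prod_{k=i+1}^{j-1} r_k = 1 - prod_{k>i} r_k.
    for i in range(n):
        lhs = sum((1 - r[j]) * np.prod(r[i + 1:j]) for j in range(i + 1, n + 1))
        assert abs(lhs - (1 - np.prod(r[i + 1:]))) < 1e-12
    # Row sums: closed form and strict negativity (diagonal dominance).
    for i, row_sum in enumerate(A.sum(axis=1)):
        closed = -s[i] * (np.prod(r[:i]) + np.prod(r[i + 1:]))
        assert abs(row_sum - closed) < 1e-12
        assert row_sum < 0

rng = np.random.default_rng(0)
for n in range(1, 6):
    check_row_sums(n, rng)
verified = True
```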

3.2  Proof of Proposition 3

Due to (2), we get

$$\frac{\partial^2\Pi_m}{\partial\alpha_i\,\partial\alpha_j}=\begin{cases}\dfrac{2p}{a^2}\cdot(\delta\varepsilon\cdot T_i-b_is_i),& i=j\in\{1,\dots,n+1\},\\[2mm]\dfrac{pb_i}{a^2}\cdot s_is_j\cdot\displaystyle\prod_{k=i+1}^{j-1}(1-s_k),& i\in\{1,\dots,n\},\ j\in\{i+1,\dots,n+1\},\\[2mm]\dfrac{pb_j}{a^2}\cdot s_is_j\cdot\displaystyle\prod_{k=j+1}^{i-1}(1-s_k),& j\in\{1,\dots,n\},\ i\in\{j+1,\dots,n+1\},\end{cases}$$

where

$$b_i=a\eta\beta_i-\delta\varepsilon,\qquad i\in\{1,\dots,n+1\}.$$

Let β_i = β, i ∈ {1, …, n + 1}. Then, due to (4),

$$b_i=b=a\eta\beta-\delta\varepsilon\ge 0,\qquad i\in\{1,\dots,n+1\},$$


and

$$\frac{\partial^2\Pi_m}{\partial\alpha_i\,\partial\alpha_j}=\begin{cases}\dfrac{2p}{a^2}\cdot(\delta\varepsilon\cdot T_i-bs_i),& i=j\in\{1,\dots,n+1\},\\[2mm]\dfrac{pb}{a^2}\cdot s_is_j\cdot\displaystyle\prod_{k=i+1}^{j-1}(1-s_k),& i\in\{1,\dots,n\},\ j\in\{i+1,\dots,n+1\},\\[2mm]\dfrac{pb}{a^2}\cdot s_is_j\cdot\displaystyle\prod_{k=j+1}^{i-1}(1-s_k),& j\in\{1,\dots,n\},\ i\in\{j+1,\dots,n+1\}.\end{cases}$$

We get

$$\det\Pi_m''=\left(\frac{p}{a^2}\right)^{n+1}\cdot\det B,$$

where the elements of the matrix B are

$$B_{ij}=\begin{cases}2\cdot(\delta\varepsilon\cdot T_i-bs_i),& i=j\in\{1,\dots,n+1\},\\[1mm]b\cdot s_is_j\cdot\displaystyle\prod_{k=i+1}^{j-1}(1-s_k),& i\in\{1,\dots,n\},\ j\in\{i+1,\dots,n+1\},\\[1mm]b\cdot s_is_j\cdot\displaystyle\prod_{k=j+1}^{i-1}(1-s_k),& j\in\{1,\dots,n\},\ i\in\{j+1,\dots,n+1\}.\end{cases}$$

We get

$$\sum_{j=1}^{n+1}B_{ij}=\sum_{j=1}^{i-1}B_{ij}+B_{ii}+\sum_{j=i+1}^{n+1}B_{ij}$$
$$=2\cdot\delta\varepsilon\cdot T_i+b\cdot s_i\cdot\left(\sum_{j=1}^{i-1}\left(s_j\cdot\prod_{k=j+1}^{i-1}(1-s_k)\right)-2+\sum_{j=i+1}^{n+1}\left(s_j\cdot\prod_{k=i+1}^{j-1}(1-s_k)\right)\right)$$
$$=2\cdot\delta\varepsilon\cdot T_i+b\cdot s_i\cdot\left(\sum_{j=1}^{i-1}\left((1-r_j)\cdot\prod_{k=j+1}^{i-1}r_k\right)-2+\sum_{j=i+1}^{n+1}\left((1-r_j)\cdot\prod_{k=i+1}^{j-1}r_k\right)\right)$$

(due to (5) and (6))

$$=2\cdot\delta\varepsilon\cdot T_i-b\cdot s_i\cdot\left(\prod_{k=1}^{i-1}r_k+\prod_{k=i+1}^{n+1}r_k\right)<0.$$

Therefore, the matrix B is strictly diagonally dominant. □

3.3  Proof of Lemma 2

Let n = 1. Then

$$\frac{\partial^2\Pi_m}{\partial\alpha_1^2}=-\frac{2p}{a^2}\cdot(b_1s_1-\delta\varepsilon\cdot T_1)<0,\qquad \frac{\partial^2\Pi_m}{\partial\alpha_2^2}=-\frac{2p}{a^2}\cdot(b_2s_2-\delta\varepsilon\cdot T_2)<0,\qquad \frac{\partial^2\Pi_m}{\partial\alpha_1\,\partial\alpha_2}=\frac{pb_1}{a^2}\cdot s_1s_2.$$

We get

$$\frac{\partial^2\Pi_m}{\partial\alpha_1^2}+\frac{\partial^2\Pi_m}{\partial\alpha_1\,\partial\alpha_2}=\frac{p}{a^2}\cdot\big((-2+s_2)\cdot b_1s_1+2\cdot\delta\varepsilon\cdot T_1\big)<0\quad\forall\, b_1>0,$$
$$\frac{\partial^2\Pi_m}{\partial\alpha_2^2}+\frac{\partial^2\Pi_m}{\partial\alpha_1\,\partial\alpha_2}=\frac{p}{a^2}\cdot\big((-2\cdot b_2+b_1\cdot s_1)\cdot s_2+2\cdot\delta\varepsilon\cdot T_2\big)<0\quad\forall\, 2\cdot b_2\ge b_1\ge 0.$$

Hence, if 2b_2 ≥ b_1 ≥ 0, then the matrix Π″_m is strictly diagonally dominant. □
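The two inequalities above can also be checked numerically. The sketch below (an added illustration; the parameter ranges are arbitrary) samples random admissible parameters with T_i < 0 and 0 ≤ b_1 ≤ 2b_2, and confirms strict diagonal dominance of the 2 × 2 Hessian (the common positive factor p/a² is dropped):

```python
import numpy as np

rng = np.random.default_rng(1)
for _ in range(1000):
    s1, s2 = rng.uniform(0.05, 0.95, 2)
    T1, T2 = -rng.uniform(0.1, 5.0, 2)      # T_i < 0
    de = rng.uniform(0.01, 0.5)             # delta * epsilon > 0
    b2 = rng.uniform(0.01, 2.0)
    b1 = rng.uniform(0.0, 2.0 * b2)         # the assumption 0 <= b1 <= 2 b2
    d1 = -2.0 * (b1 * s1 - de * T1)         # diagonal entries of the Hessian
    d2 = -2.0 * (b2 * s2 - de * T2)         # (up to the factor p / a^2)
    off = b1 * s1 * s2                      # off-diagonal entry
    assert abs(d1) > abs(off) and abs(d2) > abs(off)
checked = True
```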

3.4  Examples

Example 1. Let n = 1, t_1 = 0, τ_1 = 7, t_2 = 10, β_1 = 0.88, β_2 = 0.1, δ = 0.2, ε = 0.1, η = 2, θ = 1, γ = 0.6. Then

a = θ − γδ = 0.88, b_1 = 1.5288, b_2 = 0.156, T_1 = −6.16, T_2 = −2.64, s_1 ≈ 0.997, s_2 ≈ 0.928,

and

$$4\cdot(b_1s_1-\delta\varepsilon T_1)\cdot(b_2s_2-\delta\varepsilon T_2)-(b_1s_1s_2)^2\approx-0.625<0.$$

Hence det Π″_m < 0.

Example 2. Let n = 1, t_1 = 0, τ_1 = 3, t_2 = 6, β_1 = 0.9, β_2 = 0.1, δ = 0.2, ε = 0.1, η = 2, θ = 1, γ = 0.6. Then

a = θ − γδ = 0.88, b_1 = 1.564, b_2 = 0.156, T_1 = T_2 = T = −2.64, s_1 = s_2 = s ≈ 0.928,

and

$$4\cdot(b_2s-\delta\varepsilon T)\cdot(b_1s-\delta\varepsilon T)-\left(b_1s^2\right)^2\approx-0.6988<0.$$

Hence det Π″_m < 0.
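Both examples are easy to reproduce. In the sketch below (an added illustration), T_1 = a(τ_0 − τ_1) and T_2 = a(τ_1 − τ_2) with τ_0 = t_1 and τ_2 = t_2, matching the definition of T_i in Sect. 3.1; since the stated approximate values depend on rounding, the code checks only the sign of the determinant expression:

```python
import math

def det_expr(t1, tau1, t2, beta1, beta2, delta, eps, eta, theta, gamma):
    a = theta - gamma * delta
    de = delta * eps
    b1 = a * eta * beta1 - de
    b2 = a * eta * beta2 - de
    T1 = a * (t1 - tau1)            # tau_0 = t_1
    T2 = a * (tau1 - t2)            # tau_2 = t_2
    s1 = 1.0 - math.exp(T1)
    s2 = 1.0 - math.exp(T2)
    return 4.0 * (b1 * s1 - de * T1) * (b2 * s2 - de * T2) - (b1 * s1 * s2) ** 2

ex1 = det_expr(0, 7, 10, 0.88, 0.1, 0.2, 0.1, 2, 1, 0.6)   # Example 1
ex2 = det_expr(0, 3, 6, 0.9, 0.1, 0.2, 0.1, 2, 1, 0.6)     # Example 2
assert ex1 < 0 and ex2 < 0   # the Hessian determinant is negative in both cases
```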


4  Conclusion

In this paper, we study a stylized vertical control distribution channel in the structure "manufacturer-retailer-consumer" and develop the results of [9–12]. More precisely, we consider the situation when the wholesale discount and the pass-through are piece-wise constant; the switching times are assumed to be known and fixed. This case seems economically adequate. The arising optimization problems contain quadratic objective functions (the manufacturer's profit and the retailer's profit) with respect to the wholesale discount and pass-through levels. We obtain an exhaustive answer to the question of the concavity of the manufacturer's profit with respect to the wholesale discount levels and of the retailer's profit with respect to the pass-through levels. As for further research, we plan to study the equilibria (Nash and Stackelberg) in the structure "manufacturer-retailer-consumer". Moreover, it seems interesting to study the interaction of several manufacturers and several retailers.

Acknowledgments. The study was carried out within the framework of the state contract of the Sobolev Institute of Mathematics (project no. 0314-2019-0018) and within the framework of the Laboratory of Empirical Industrial Organization, Economic Department of Novosibirsk State University. The work was supported in part by the Russian Foundation for Basic Research, projects 18-010-00728 and 19-010-00910, and by the Russian Ministry of Science and Education under the 5–100 Excellence Programme.

References

1. Nerlove, M., Arrow, K.J.: Optimal advertising policy under dynamic conditions. Economica 29(144), 129–142 (1962)
2. Vidale, M.L., Wolfe, H.B.: An operations-research study of sales response to advertising. In: Mathematical Models in Marketing. Lecture Notes in Economics and Mathematical Systems (Operations Research), vol. 132, pp. 223–225. Springer, Berlin, Heidelberg (1976). https://doi.org/10.1007/978-3-642-51565-1_72
3. Bykadorov, I.A., Ellero, A., Moretti, E.: Minimization of communication expenditure for seasonal products. RAIRO Oper. Res. 36(2), 109–127 (2002)
4. Mosca, S., Viscolani, B.: Optimal goodwill path to introduce a new product. J. Optim. Theory Appl. 123(1), 149–162 (2004)
5. Giri, B.C., Bardhan, S.: Coordinating a two-echelon supply chain with price and inventory level dependent demand, time dependent holding cost, and partial backlogging. Int. J. Math. Oper. Res. 8(4), 406–423 (2016)
6. Printezis, A., Burnetas, A.: The effect of discounts on optimal pricing under limited capacity. Int. J. Oper. Res. 10(2), 160–179 (2011)
7. Lu, L., Gou, Q., Tang, W., Zhang, J.: Joint pricing and advertising strategy with reference price effect. Int. J. Prod. Res. 54(17), 5250–5270 (2016)
8. Zhang, S., Zhang, J., Shen, J., Tang, W.: A joint dynamic pricing and production model with asymmetric reference price effect. J. Ind. Manage. Optim. 15(2), 667–688 (2019)


9. Bykadorov, I., Ellero, A., Moretti, E., Vianello, S.: The role of retailer's performance in optimal wholesale price discount policies. Eur. J. Oper. Res. 194(2), 538–550 (2009)
10. Bykadorov, I.A., Ellero, A., Moretti, E.: Trade discount policies in the differential games framework. Int. J. Biomed. Soft Comput. Hum. Sci. 18(1), 15–20 (2013)
11. Bykadorov, I.: Dynamic marketing model: optimization of retailer's role. Commun. Comput. Inf. Sci. 974, 399–414 (2019)
12. Bykadorov, I.: Dynamic marketing model: the case of piece-wise constant pricing. Commun. Comput. Inf. Sci. 1145, 150–163 (2020)

The Generalized Algorithms of Global Parametric Optimization and Stochastization for Dynamical Models of Interconnected Populations

Anastasia Demidova 1, Olga Druzhinina 2,3, Milojica Jacimovic 4, Olga Masina 5, Nevena Mijajlovic 6, Nicholas Olenev 2, and Alexey Petrov 5

1 Peoples' Friendship University of Russia (RUDN University), Moscow, Russia
[email protected]
2 Federal Research Center "Computer Science and Control" of RAS, Moscow, Russia
{ovdruzh,nolenev}@mail.ru
3 V.A. Trapeznikov Institute of Control Sciences of RAS, Moscow, Russia
[email protected]
4 Montenegrin Academy of Sciences and Arts, Podgorica, Montenegro
[email protected]
5 Bunin Yelets State University, Yelets, Russia
[email protected], [email protected]
6 University of Montenegro, Podgorica, Montenegro
[email protected]

Abstract. We consider the synthesis and analysis of a multidimensional controlled model with predator-prey interaction and migration flows, and propose new formulations of the corresponding optimal control problems. For the search for optimal trajectories, we develop a generalized algorithm of global parametric optimization, based on an algorithm for generating the control function and on modifications of classical numerical methods for solving differential equations. We present the results of the search for optimal trajectories and of the generation of control functions. Additionally, we propose an algorithm for the transition to stochastic controlled models based on the development of a method for constructing self-consistent stochastic models. Tool software has been developed as part of a software package for modeling controlled dynamical systems, and a computer study of the constructed models has been carried out. The results can be used in problems of modeling dynamic processes, taking into account the requirements of control and optimization.

Keywords: Multidimensional nonlinear models · Dynamics of interconnected populations · Global parametric optimization methods · Differential equations · Stochastic models · Optimal control · Software package

© Springer Nature Switzerland AG 2020. N. Olenev et al. (Eds.): OPTIMA 2020, LNCS 12422, pp. 40–54, 2020. https://doi.org/10.1007/978-3-030-62867-3_4

1  Introduction

When solving problems of the dynamics of interconnected communities, one important direction is the construction and study of new multidimensional mathematical models which take into account various types of intraspecific and interspecific interactions as well as control actions. In this framework, problems of optimal control in models of population dynamics are of theoretical and practical interest. Problems of constructing multidimensional population models are considered, for example, in [1–4], while models of the dynamics of interconnected communities taking into account competition and migration flows were studied in [5–12], among others.

Migration and population models are widely used in analyzing and predicting the dynamics of real populations and communities, as well as in solving problems of their optimal exploitation. These problems include, in particular, studying the dynamics of population models on an unlimited trophic resource located in connected habitats, studying the dynamics of the population, and the optimization of trade. The results obtained in the course of solving these problems can be used in modeling environmental, demographic, and socio-economic systems. The effects of migration flows in population models are considered in [8–12] and in other papers, with a number of papers considering both deterministic and stochastic dynamic models. For the stochastic modeling of various types of dynamic systems, a method for constructing self-consistent one-step models has been proposed [13] and a specialized software package has been developed [14,15]. This software package makes it possible to perform computer research of models based on the implementation of algorithms for the numerical solution of stochastic differential equations, as well as algorithms for generating trajectories of multidimensional Wiener processes and multipoint distributions.
However, until now this software package has not been applied to problems associated with the modeling of controlled stochastic systems. It should be noted that, when studying models of the dynamics of interconnected communities, the use of applied mathematical packages and general-purpose programming languages is relevant [16–18]. The analysis of models, even in the three- and four-dimensional cases, is complicated by the cumbersomeness of intermediate calculations, in particular when searching for stationary states. Computer research allows one not only to obtain the results of numerical experiments within the framework of stationary-state analysis, trajectory search, and estimation of model parameters, but also to identify qualitatively new effects caused by the model structure and external influences. An important role is also played by the ability to perform, during computational experiments, a comparative analysis of models that differ in the types of relationships between the phase variables.

Classes of three-dimensional and four-dimensional uncontrolled models with competition and migration flows are considered in [10], which provides a comparative analysis of the qualitative properties of four-dimensional models under changes in the migration rates as well as in the coefficients of intraspecific and interspecific interaction. In [11], four-dimensional nonlinear models of the population dynamics of interconnected communities are proposed and studied, taking into account migration and competition, as well as migration, competition, and mutualism. In [12], it is proposed to construct multidimensional models taking into account competition and mutualism, as well as migration flows.

The type of community interaction called the "predator-prey" interaction is discussed in numerous papers (see, for example, [2–4,19,20]). A significant part of the results has been obtained for the two-dimensional and three-dimensional cases, while for higher-dimensional models there are fewer of them. The two-dimensional predator-prey model taking into account intraspecific competition of the prey is considered in [2]; for this model, conditions for the existence of a global attractor are obtained, as well as conditions for the asymptotic stability of a positive equilibrium state corresponding to the equilibrium coexistence of predators and prey. The two-dimensional predator-prey model taking into account intraspecific competition of the predators is considered in [3], and the model taking into account intraspecific competition of both predators and prey is studied in [4]. A three-dimensional model with one prey and two predators, taking into account the competition of the prey and the saturation of the predators, is studied in [19]. It is shown that at certain parameter values, the saturation of the predators can make their coexistence possible when consuming one type of prey, but only in an oscillatory mode. The three-dimensional model "predator – two prey populations", taking into account the interspecific competition of the prey, is considered in [3]; it is shown that for certain parameter values, the presence of a predator in a community can ensure the coexistence of competing populations, which is impossible in the absence of the predator. The control problems for models of population dynamics are set and studied in [21,22] and in the papers of other researchers.
Some aspects of the optimal control of distributed population models are studied in [21]. The optimality criterion for autoreproduction systems within the analysis of evolutionarily stable behavior is presented in [22]. In [11,12], optimal control problems are proposed for certain classes of population-migration models with competition under phase and mixed constraints. Due to the complex structure of controlled population models with migration flows and various types of interspecific interactions, the creation of algorithms and the design of programs for global parametric optimization is a pressing problem. Features of global parametric optimization problems include the high dimensionality of the search space, the complex landscape, and the high computational complexity of the target functions. Nature-inspired algorithms are quite effective for solving these problems [23,24]. Algorithms for the single-criterion global optimization of trajectories of dynamic systems with switches, for a software package for modeling switched systems, are proposed in [25]. Algorithms for searching for optimal parameters for some types of switched models, taking into account the action of non-stationary forces, are considered in [26], which provides a comparative analysis of single-criterion global optimization methods and discusses their application to finding the coefficients of parametric control functions.


In this paper, we consider nonlinear models of the population dynamics of interconnected communities that take into account the predator-prey interaction and migration flows, and propose optimal control problems for these models. The problem of optimal control under phase constraints is considered for a three-dimensional population model, taking into account the features of the trophic chain and migration flows. Numerical optimization methods and generalized symbolic calculation algorithms are used to solve the optimal control problem, and a computer study of the trajectories of the controlled model is performed. A controlled stochastic model with predator-prey interaction and migration flows is constructed; to construct this model, we use the method of constructing self-consistent stochastic models and a generalized stochastization algorithm that takes the controlled case into account. The properties of the models in the deterministic and stochastic cases are characterized. Specialized software systems are used as tools for studying the models and solving the optimal control problems. The software packages are designed for conducting numerical experiments based on the implementation of algorithms for constructing motion trajectories, algorithms for parametric optimization and for generating control functions, as well as for the numerical solution of systems of differential equations using modified Runge-Kutta methods.

In Sect. 2, we formulate the optimal control problems for the three-dimensional controlled model with predator-prey interaction and migration flows. In Sect. 3, we develop a generalized algorithm of global parametric optimization and present the results of the search for optimal trajectories and the generation of control functions. In Sect. 4, we propose an algorithm for the transition to stochastic controlled models based on the development of a method for constructing self-consistent stochastic models; we use a software package for modeling controlled dynamic systems.

2  The Optimal Control Problems for Deterministic Models of Interacting Communities with Trophic Chains and Migration Flows

A multidimensional model of m communities which takes into account trophic chains and migration flows is given by a system of ordinary differential equations of the following type:

$$\begin{aligned}\dot x_i &= x_i(a_i-p_ix_i)+\beta_{i+1}x_{i+1}-\gamma_ix_i-q_ix_ix_{i+2}, && i=3k-2,\\ \dot x_i &= x_i(a_i-p_ix_i)-\beta_ix_i+\gamma_{i-1}x_{i-1}, && i=3k-1,\\ \dot x_i &= x_i(a_i-p_ix_i)+r_ix_ix_{i-2}, && i=3k.\end{aligned}\tag{1}$$

Here k is the number of the three-element set (prey, prey with shelter, predator), k = 1, …, m, and m ≥ 1 is the number of such sets, so the total number of equations in (1) is 3m. The model assumes that the number of predator populations equals the number of prey populations and that each prey population has the ability to use a refuge. The following assumptions and notation are adopted in (1). The equations for the prey populations of densities x_i take into account the interaction of prey with predators (i = 1, 4, …) and, in the presence of a shelter, the dynamics outside of this interaction (i = 2, 5, …). For i = 3, 6, …, 3m, the equations describe the dynamics of the predator populations. Further, a_i are the natural growth coefficients, p_i are the intraspecific competition coefficients, β_i and γ_i are the migration coefficients, and q_i and r_i are the interspecific interaction coefficients.

A special case of model (1) is the model described by a system of the form

$$\begin{aligned}\dot x_1 &= a_1x_1-p_1x_1^2-qx_1x_3+\beta x_2-\gamma x_1,\\ \dot x_2 &= a_2x_2-p_2x_2^2+\gamma x_1-\beta x_2,\\ \dot x_3 &= a_3x_3-p_3x_3^2+rx_1x_3,\end{aligned}\tag{2}$$

where x_1 and x_3 are the prey and predator population densities in the first habitat, x_2 is the prey population density in the second habitat (which serves as a refuge), p_i (i = 1, 2, 3) are the intraspecific competition coefficients, q and r are the predator-prey interaction coefficients, a_i (i = 1, 2, 3) are the natural growth coefficients, and β, γ are the migration coefficients of the species between the two habitats. In the absence of migration, model (2) corresponds to the classical predator-prey model [27,28].

When considering multidimensional models, difficulties arise in symbolic computations with parameters, in particular when finding equilibrium states and constructing phase portraits. It is therefore advisable to perform a series of computer experiments over the most representative sets of numerical parameter values. Computer research allows us to compare the properties of the models in the deterministic and stochastic cases; computer research that takes control actions into account is of particular interest.

Next, we consider optimal control problems in models of the dynamics of interacting communities with predator-prey interactions in the presence of migration flows. We formulate the optimal control problems for the three-dimensional predator-prey model with migration flows. The dynamics of the controlled model is determined by the system of differential equations

$$\begin{aligned}\dot x_1 &= a_1x_1-p_1x_1^2-qx_1x_3+\beta x_2-\gamma x_1-u_1x_1,\\ \dot x_2 &= a_2x_2-p_2x_2^2+\gamma x_1-\beta x_2-u_2x_2,\\ \dot x_3 &= a_3x_3-p_3x_3^2+rx_1x_3-u_3x_3,\end{aligned}\tag{3}$$

where u_i = u_i(t) are control functions. We set the constraints for model (3) in the form

$$x_1(0)=x_{10},\ x_2(0)=x_{20},\ x_3(0)=x_{30},\ x_1(T)=x_{11},\ x_2(T)=x_{21},\ x_3(T)=x_{31},\ t\in[0,T],\tag{4}$$

$$0\le u_1\le u_{11},\quad 0\le u_2\le u_{21},\quad 0\le u_3\le u_{31},\quad t\in[0,T].\tag{5}$$
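Model (3) is straightforward to integrate numerically. A minimal sketch follows; the parameter values, initial state, and constant admissible controls below are illustrative, not the ones used in the experiments of Sect. 3:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters of system (3): a_i, p_i, q, r, beta, gamma.
a = (1.0, 1.0, 1.0)
p = (1.0, 1.0, 1.0)
q = r = 1.0
beta, gamma = 0.3, 0.2

def rhs(t, x, u):
    """Right-hand side of the controlled system (3) for constant controls u."""
    x1, x2, x3 = x
    u1, u2, u3 = u
    return [a[0]*x1 - p[0]*x1**2 - q*x1*x3 + beta*x2 - gamma*x1 - u1*x1,
            a[1]*x2 - p[1]*x2**2 + gamma*x1 - beta*x2 - u2*x2,
            a[2]*x3 - p[2]*x3**2 + r*x1*x3 - u3*x3]

# Integrate on [0, T] from x(0) = (1, 0.5, 1) with u = (0.1, 0.1, 0.1).
sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.5, 1.0],
                args=((0.1, 0.1, 0.1),), max_step=0.1)
```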


In relation to problem (3)–(5), the functional to be maximized is written in the form

$$J(u)=\int_0^T\sum_{i=1}^{3}(l_ix_i-c_i)\,u_i(t)\,dt.\tag{6}$$

The quality control criterion (6) corresponds to the maximum profit from the use of the populations, where l_i is the cost of the i-th population and c_i is the cost of the technical equipment corresponding to the i-th population. The optimal control problem C_1 for model (3) can be formulated as follows.

(C_1) Find the maximum of functional (6) under conditions (4), (5).

It is also of interest to study the following type of restrictions imposed on u_i(t):

$$0\le u_1(t)+u_2(t)+u_3(t)\le M,\quad u_i(t)\ge 0,\ i=1,2,3,\ t\in[0,T].\tag{7}$$

The optimal control problem C_2 for model (3) is formulated as follows.

(C_2) Find the maximum of functional (6) under conditions (4), (7).

In problems of the dynamics of interacting communities, nonnegativity conditions on the phase variables are used. Moreover, restrictions on the growth of the i-th species can be imposed, which leads to mixed constraints. Given these features, along with problems C_1 and C_2, optimal control problems with phase and mixed constraints are of interest. However, due to the difficulties of the analytical study of multidimensional dynamic models and the peculiarities of the control quality criterion, methods of numerical optimization are often used. Next, we consider the application of numerical optimization methods to optimal control problems in predator-prey models with migration flows.

3  The Results of Computer Experiments

We consider an algorithm for solving problem C_1 using global parametric optimization algorithms [23,24]. We approximate the control functions u(t) by polynomials of degree n; this approach to numerical optimization is based on the spline approximation method, whose use is discussed, for example, in [29,30]. The solution of this problem can be represented as algorithm I, consisting of the following steps.

Step 1. The construction of model trajectories.
Step 2. The evaluation of the control quality criterion.
Step 3. The adjustment of the polynomial coefficients using the optimization algorithm. If the stopping condition is not reached, return to step 1.

Algorithm I is a generalized global parametric optimization algorithm for generating control functions while modeling the dynamics of interacting communities. Within this generalized algorithm, the control function is constructed in symbolic form, which opens wide possibilities for using methods such as the symbol tree method and artificial neural networks.
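The three steps of algorithm I can be sketched as follows. This is a compact illustration, assuming polynomial controls of the form (8); the model parameters, bounds, and optimizer settings are illustrative, and the vector criterion (δ, e^{−J}) is scalarized here simply as δ + e^{−J}:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import differential_evolution

n_deg = 1                                    # degree of the control polynomials
T, x0, x_target = 10.0, [1.0, 0.5, 1.0], [0.2, 0.2, 0.2]
l, c = (10.0, 10.0, 10.0), (1.0, 0.5, 1.0)

def controls(R, t):
    # Evaluate u_i(t) = r_i0 + r_i1 t + ... (clipped at 0 to keep u_i >= 0).
    return [np.clip(np.polyval(R[i][::-1], t), 0.0, None) for i in range(3)]

def objective(theta):
    R = theta.reshape(3, n_deg + 1)
    def rhs(t, x):
        u = controls(R, t)
        x1, x2, x3 = x
        return [x1 - x1**2 - x1*x3 + 0.3*x2 - 0.2*x1 - u[0]*x1,
                x2 - x2**2 + 0.2*x1 - 0.3*x2 - u[1]*x2,
                x3 - x3**2 + x1*x3 - u[2]*x3]
    # Step 1: construct the model trajectories.
    sol = solve_ivp(rhs, (0.0, T), x0, t_eval=np.linspace(0.0, T, 101))
    if not sol.success:
        return 1e6
    # Step 2: evaluate the control quality criterion (delta, e^{-J}) -> min.
    delta = float(np.abs(sol.y[:, -1] - np.array(x_target)).sum())
    u = controls(R, sol.t)
    integrand = sum((l[i] * sol.y[i] - c[i]) * u[i] for i in range(3))
    J = float(np.sum((integrand[1:] + integrand[:-1]) * np.diff(sol.t)) / 2.0)
    return delta + np.exp(-J)

# Step 3: adjust the polynomial coefficients by differential evolution.
bounds = [(-2.0, 2.0)] * (3 * (n_deg + 1))
res = differential_evolution(objective, bounds, maxiter=10, popsize=8,
                             seed=0, polish=False)
```

The clipping of the controls at zero is an added device to respect the nonnegativity constraints in (5) and (7); form (8) itself does not impose it.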


To solve problem C_1, we developed a program in the Python programming language using the differential evolution algorithm from the SciPy library. The choice of the differential evolution algorithm for this problem is motivated by its sufficiently good error and computation time indicators in comparison with other algorithms, in particular the Powell algorithm. A detailed comparative analysis of the differential evolution algorithm against other optimization algorithms is given in [31]. The maximization of functional (6) can be reduced to the minimization of the following control quality criterion: (δ, e^{−J}) → min, where δ is the absolute deviation of the trajectories from x_{11}, x_{21}, x_{31} taking into account the constraints of the optimal control problem for model (3), and e^{−J} denotes the inverse exponent corresponding to functional (6). In our numerical experiments, we consider the optimal control problem on the class of control functions of the form

$$u_i(t)=R_iS,\quad R_i=(r_{i0},r_{i1},\dots,r_{in}),\quad S=(t^0,t^1,\dots,t^n)^T,\tag{8}$$

where R_iS = r_{i0}t^0 + r_{i1}t^1 + ⋯ + r_{in}t^n are polynomials of degree n, R_i are the parametric coefficients, and ‖·‖ is the Euclidean norm of the corresponding vector.

For model (3), in the framework of solving C_1, a series of computer experiments was carried out with x_1(0) = 1, x_2(0) = 0.5, x_3(0) = 1, x_{i1} = 0.2, l_i = 10, c_1 = 1, c_2 = 0.5, c_3 = 1, p_1 = p_2 = p_3 = q = r = 1. The experimental results and a comparison of the approximating dependences are presented in Table 1.

Table 1. The values of functional (6) for various parameters R.

n = 0: R_1 = 1.113 × 10^{−5}, R_2 = −1.338, R_3 = −1.027.
  Error: 0.0201; value of functional (6): 56.332; calculation time: 3.8 s.
n = 1: R_1 = (0.1115, −0.031), R_2 = (−1.034, 0.2816), R_3 = (0.7892, 0.0451).
  Error: 0.0091; value of functional (6): 67.781; calculation time: 23.4 s.
n = 2: R_1 = (−0.2052, 0.0909, −0.0092), R_2 = (2.3093, −1.2374, 0.1250), R_3 = (2.895, −1.558, 0.1532).
  Error: 0.0061; value of functional (6): 73.458; calculation time: 402.1 s.
n = 3 (increased integration step): R_1 = (−2.7781, 1.1816, −0.1576, 0.0068), R_2 = (−2.3412, 2.1955, −0.4520, 0.0279), R_3 = (−1.6646, 2.2734, −0.5220, 0.0338).
  Error: 0.0170; value of functional (6): 58.779; calculation time: 5210 s.


We observe the following effects arising from the implementation of the algorithm for the optimal control problem in the predator-prey model with migration flows. As n increases from 0 to 2, the value of criterion (6) increases and the calculation error decreases. However, for n > 2, the value of the criterion begins to decrease; this fact may be associated with the increase in the integration step, which is necessary in this case to reduce the calculation time. Figure 1 shows the trajectories of system (3) for n = 0, which corresponds to the case u_i = const. Here and below, the abscissa axis shows time and the ordinate axis shows the population densities for system (3).

Fig. 1. The trajectories of the system (3) for n = 0, R1 = 1.11338543 × 10−5 , R2 = −1.33895442, R3 = −1.02759506.

It can be noted that the trajectories have a character close to monotonically decreasing. We trace an insigniﬁcant dependence of the predator population density on the preys population density. Figure 2 shows the trajectories of the model (3) for n = 1. The use of linear control functions signiﬁcantly increases the value of the criterion (6), while the error decreases (see Table 1). However, in this case, such an eﬃciency indicator of the algorithm as calculation time worsens. This indicator increases several times. Figure 3 presents the results of constructing the trajectories of the model (3) for n = 2. For this case, we reveal the following eﬀect: the density of the predator population ﬂuctuates and depends on the population density of the preys, and we note a signiﬁcant increase of the value of the functional (6). In connection with these, it should be noted that there is a direct dependence of the value of the motion quality criterion on the degree of the controlling polynomial. Figure 4 presents the results of constructing the trajectories of system (3) for n = 3. According to the results, the indicators of the criterion (6) and absolute error worsened (see Table 1). The negative eﬀect can be explained by an increase in the integration step when performing calculations (for n from 0 to 2 we use 100 steps, for n = 3 we have to limit ourselves to the number of steps equal to 22). Moreover,

48

A. Demidova et al.

Fig. 2. The trajectories of the system (3) for n = 1, R1 = (0.11147736, −0.0305515), R2 = (−1.03398361, 0.28162707), R3 = (0.78872214, 0.04509503).

Fig. 3. The trajectories of the system (3) for n = 2, R1 = (−0.205182, 0.090929, −0.009207), R2 = (2.309313, −1.237407, 0.125034), R3 = (2.894907, −1.557567, 0.153203).

Fig. 4. The trajectories of the system (3) for n = 3, R1 = (−2.778136, 1.181603, −0.157622, 0.006812), R2 = (−2.341297, 2.195542, −0.452087, 0.027912), R3 = (−1.664612, 2.273436, −0.522019, 0.033816).


despite the increase in the integration step by several times, the calculation time for one experiment exceeds 1.5 h. In the future, we plan to conduct a series of computational experiments with a smaller integration step. Based on the results shown in Table 1, we can conclude that the effectiveness of the control functions increases with the degree of the polynomial. However, it should be noted that increasing n significantly increases the computational complexity of algorithm I. The largest values of the functional (6) for the model (3) correspond to oscillating interdependent "predator-prey" trajectories. The presented results, based on the application of the differential evolution method, belong to step 3 of the generalized global parametric optimization algorithm (algorithm I). The obtained results can be used to search for the control functions using symbolic regression and artificial neural networks. In particular, they can be used in constructing generalized models of population dynamics with switching control.
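The control functions discussed above are polynomials of degree n whose coefficient tuples R_i are reported in the captions of Figs. 1-4. The following minimal sketch illustrates this representation; the coefficient ordering R = (r_0, ..., r_n) with u(t) = r_0 + r_1 t + ... + r_n t^n, and the use of time t as the polynomial argument, are our assumptions for illustration only.

```python
import numpy as np

# Polynomial control functions u_i of degree n.  The coefficient tuples
# below are the R_2 values reported in the captions of Figs. 2 and 3
# (n = 1 and n = 2).  The ordering R = (r_0, ..., r_n) and the argument t
# are assumptions made here; the excerpt does not fix either convention.

def make_control(R):
    """Return the polynomial control u(t) = sum_k R[k] * t**k."""
    R = np.asarray(R, dtype=float)
    return lambda t: sum(r * t**k for k, r in enumerate(R))

u2_linear = make_control((-1.03398361, 0.28162707))           # n = 1 (Fig. 2)
u2_quadratic = make_control((2.309313, -1.237407, 0.125034))  # n = 2 (Fig. 3)

t = np.linspace(0.0, 10.0, 5)
print(np.round(u2_linear(t), 4))
print(np.round(u2_quadratic(t), 4))
```

Higher n gives the differential evolution search more degrees of freedom per control, which matches the observed growth of criterion (6) up to n = 2 and the growth of computation time beyond it.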

4 The Construction of a Stochastic Predator-Prey Model with Migration Flows

In this section we stochastize the model (3) using the method of constructing self-consistent stochastic models [8]. To perform the transition from the deterministic to the stochastic case and to study the character of random effects, a software package for the stochastization of one-step processes has been developed in a computer algebra system [9,10,14,16]. In this paper, we modified this software package so that it takes the control actions in the model into account. For the uncontrolled case, the software package has been used in research on the dynamics models of interacting communities, taking into account competition and mutualism [16,25,26]. The software package is implemented on top of SymPy [11], a powerful symbolic computation library for the Python language. In addition, the output data obtained using the SymPy library can be passed on for numerical calculations using both the NumPy [12] and SciPy [13] libraries. For the transition from the model (3) to the corresponding stochastic model, we write the interaction scheme, which has the following form:

\[
\begin{aligned}
& X_i \xrightarrow{a_i} 2X_i, && i = 1, 2, 3;\\
& X_i + X_i \xrightarrow{p_i} X_i, && i = 1, 2, 3;\\
& X_1 + X_3 \xrightarrow{q} X_3,\\
& X_1 + X_3 \xrightarrow{r} 2X_3,\\
& X_1 \xrightarrow{\gamma} X_2, \quad X_2 \xrightarrow{\beta} X_1,\\
& X_i \xrightarrow{u_i} 0, && i = 1, 2, 3.
\end{aligned} \tag{9}
\]

In this interaction scheme (9), the first line corresponds to the natural reproduction of species in the absence of other factors, the 2-nd line symbolizes intraspecific competition, the 3-rd and 4-th lines describe the "predator-prey" relationships between populations x1 and x3, the fifth line describes the migration of a species from one habitat area to another, and the last line is responsible for control. Further, for the obtained interaction scheme, we derived expressions for the coefficients of the Fokker–Planck equation using the developed software package. The Fokker–Planck equation is a partial differential equation describing the time evolution of the population probability density function. In the three-dimensional case, we write this equation as follows:

\[
\partial_t P(x, t) = -\sum_{i=1}^{3} \partial_{x_i} \left[ A_i(x) P(x, t) \right] + \frac{1}{2} \sum_{i,j=1}^{3} \partial_{x_i} \partial_{x_j} \left[ B_{ij}(x) P(x, t) \right], \tag{10}
\]

where, for the controlled predator-prey model with migration flows, the drift vector and the diffusion matrix, respectively, have the form

\[
A(x) = \begin{pmatrix}
a_1 x_1 - p_1 x_1^2 - q x_1 x_3 + \beta x_2 - \gamma x_1 - u_1 x_1 \\
a_2 x_2 - p_2 x_2^2 - \beta x_2 + \gamma x_1 - u_2 x_2 \\
a_3 x_3 - p_3 x_3^2 - r x_1 x_3 - u_3 x_3
\end{pmatrix},
\]

\[
B(x) = \begin{pmatrix}
B_{11} & -\beta x_2 - \gamma x_1 & 0 \\
-\beta x_2 - \gamma x_1 & B_{22} & 0 \\
0 & 0 & B_{33}
\end{pmatrix},
\]

where x = (x_1, x_2, x_3) is the system phase vector and

B_{11} = a_1 x_1 + p_1 x_1^2 + q x_1 x_3 + \beta x_2 + \gamma x_1 + u_1 x_1,
B_{22} = a_2 x_2 + p_2 x_2^2 + \beta x_2 + \gamma x_1 + u_2 x_2,
B_{33} = a_3 x_3 + p_3 x_3^2 + r x_1 x_3 + u_3 x_3.

The generated set of coefficients is passed to another module of the software package for the stochastization of one-step processes in order to find a numerical solution of the resulting stochastic differential equation. For the numerical experiment with the obtained stochastic model, the same parameters are chosen as for the numerical analysis of the deterministic model (3). The results of the numerical solution of the stochastic differential equation are presented in Figs. 5, 6, 7 and 8.

Fig. 5. Comparison of trajectories in deterministic and stochastic cases (n = 0).


Fig. 6. Comparison of trajectories in deterministic and stochastic cases (n = 1).

Fig. 7. Comparison of trajectories in deterministic and stochastic cases (n = 2).

Fig. 8. Comparison of trajectories in deterministic and stochastic cases (n = 3).

A comparative analysis of the deterministic and stochastic behavior of the system described by the equations (3) showed that in the first case, namely, for n = 0, which corresponds to ui = const, the introduction of stochastics weakly affects the behavior of the system. The solutions remain close to the boundary conditions x1i = 0.2 specified for the model (3). In the second, third and fourth cases, namely, for n = 1, n = 2 and n = 3, the transition to the stochastic case significantly changes the behavior of the system. In contrast to the deterministic systems, the solution trajectories of the stochastic models for the given set of parameters have a monotonic character and approach the stationary mode. Thus, in this case, alternative methods must be used to obtain


optimal solutions to the stochastic model. In particular, it is advisable to use feedback controls ui(t, x). The consideration and implementation of methods for obtaining optimal solutions of stochastic differential equations describing the predator-prey interaction with regard to migration flows is a promising problem for future research.

5 Conclusions

In this paper, we have proposed a new approach to the synthesis and analysis of controlled models of the dynamics of interacting communities, taking into account the features of trophic chains and migration flows. As a part of this approach, we have developed a generalized algorithm for global parametric optimization using control generation methods. Optimal control problems for models of predator-prey interactions in the presence of migration flows are considered. Computer research of these models allowed us to obtain the results of numerical experiments on the search for trajectories and generating control functions. The case in which the control functions are representable as positive polynomials is studied. Peculiarities of applying the differential evolution algorithm to optimal control search in systems with a trophic chain and migration are revealed. To solve optimal control problems for population models, it is proposed to use numerical optimization methods together with methods of symbolic computation.

The implementation of the generalized stochastization algorithm and the analysis of the stochastic model taking into account the predator-prey interaction in the presence of migration flows demonstrated the effectiveness of the method of constructing self-consistent stochastic models for the controlled case. For a number of parameter sets, it is possible to conduct a series of computer experiments to construct stochastic model trajectories taking into account control actions. A comparative analysis of the studied deterministic and stochastic models is carried out. The choice of control functions for the stochastic model is determined by the structure of the constructed Fokker–Planck equation and is related to the results of numerical experiments conducted with the deterministic model on the basis of global parametric optimization methods.
The study of the stochastic model, as opposed to the deterministic one, allows us to take into account the probabilistic nature of birth and death processes. In addition, the transition to stochastization makes it possible to evaluate the effects of the external environment that can cause random fluctuations in the model parameters. For the models studied in this paper, it is shown that in the deterministic case the proposed global optimization algorithm demonstrates adequate results when constructing optimal trajectories, while the stochastic case requires additional research. The tool software created in the framework of this work can serve as the basis for new control and numerical optimization modules both for the software package for analyzing self-consistent stochastic models and for the software package for modeling dynamic systems with switching. The use of the developed


tool software, symbolic computation, and generalized classical and non-classical numerical methods has demonstrated sufficient efficiency for the computer research of multidimensional nonlinear models with a trophic chain and migration. The prospects for further research include the synthesis and computer study of partially controlled migration models and the expansion of the range of numerical methods for global parametric optimization in the study of models with migration flows. In addition, one of the promising areas is the extension of the previously obtained results for multidimensional uncontrolled models to the controlled case, as well as the development of methods for finding optimal controls using artificial neural networks. It should be noted that the development of methods and tools for analyzing multidimensional models, taking into account various features of trophic chains in the presence of migration flows, as well as the interplay of competition and mutualism, is of interest for further research.

References

1. Murray, J.D.: Mathematical Biology: I. An Introduction. Springer-Verlag, New York (2002). https://doi.org/10.1007/b98868
2. Svirezhev, Y.M., Logofet, D.O.: Stability of Biological Communities. Nauka, Moscow (1978)
3. Bazykin, A.D.: Nonlinear Dynamics of Interacting Populations. Institute of Computer Research, Moscow-Izhevsk (2003)
4. Bratus, A.S., Novozhilov, A.S., Platonov, A.P.: Dynamical Systems and Models of Biology. Draft, Moscow (2011)
5. Lu, Z., Takeuchi, Y.: Global asymptotic behavior in single-species discrete diffusion systems. J. Math. Biol. 32(1), 67–77 (1993)
6. Zhang, X.-A., Chen, L.: The linear and nonlinear diffusion of the competitive Lotka–Volterra model. Nonlinear Anal. 66, 2767–2776 (2007)
7. Chen, X., Daus, E.S., Jüngel, A.: Global existence analysis of cross-diffusion population systems for multiple species. Arch. Ration. Mech. Anal. 227(2), 715–747 (2018)
8. Sinitsyn, I.N., Druzhinina, O.V., Masina, O.N.: Analytical modeling and stability analysis of nonlinear broadband migration flow. Nonlinear World 16(3), 3–16 (2018)
9. Demidova, A.V., Druzhinina, O., Jacimovic, M., Masina, O.: Construction and analysis of nondeterministic models of population dynamics. In: Vishnevskiy, V.M., Samouylov, K.E., Kozyrev, D.V. (eds.) DCCN 2016. CCIS, vol. 678, pp. 498–510. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-51917-3_43
10. Demidova, A.V., Druzhinina, O.V., Masina, O.N., Tarova, E.D.: Computer research of nonlinear stochastic models with migration flows. CEUR Workshop Proceedings, vol. 2407, pp. 26–37 (2019)
11. Druzhinina, O.V., Masina, O.N., Tarova, E.D.: Analysis and synthesis of nonlinear dynamic models taking into account migration flows and control actions. Nonlinear World 17(4), 24–37 (2019)
12. Demidova, A., Druzhinina, O., Jaćimović, M., Masina, O., Mijajlovic, N.: Problems of synthesis, analysis and optimization of parameters for multidimensional mathematical models of interconnected populations dynamics. In: Jaćimović, M., Khachay, M., Malkova, V., Posypkin, M. (eds.) OPTIMA 2019. CCIS, vol. 1145, pp. 56–71. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-38603-0_5
13. Demidova, A.V., Gevorkyan, M.N., Egorov, A.D., Kulyabov, D.S., Korolkova, A.V., Sevastyanov, L.A.: Influence of stochastization on one-step models. RUDN J. Math. Inf. Sci. Phys. (1), 71–85 (2014)
14. Gevorkyan, M.N., Velieva, T.R., Korolkova, A.V., Kulyabov, D.S., Sevastyanov, L.A.: Stochastic Runge–Kutta software package for stochastic differential equations. In: Zamojski, W., Mazurkiewicz, J., Sugier, J., Walkowiak, T., Kacprzyk, J. (eds.) Dependability Engineering and Complex Systems. AISC, vol. 470, pp. 169–179. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-39639-2_15
15. Gevorkyan, M.N., Demidova, A.V., Velieva, T.R., Korol'kova, A.V., Kulyabov, D.S., Sevast'yanov, L.A.: Implementing a method for stochastization of one-step processes in a computer algebra system. Program. Comput. Softw. 44, 86–93 (2018)
16. Oliphant, T.E.: Python for scientific computing. Comput. Sci. Eng. 9, 10–20 (2007)
17. Lamy, R.: Instant SymPy Starter. Packt Publishing, Birmingham (2013)
18. Oliphant, T.E.: Guide to NumPy, 2nd edn. CreateSpace Independent Publishing Platform, Scotts Valley (2015)
19. Kirlinger, G.: Permanence of some ecological systems with several predator and one prey species. J. Math. Biol. 26, 217–232 (1988)
20. Kirlinger, G.: Two predators feeding on two prey species: a result on permanence. Math. Biosci. 96(1), 1–32 (1989)
21. Moskalenko, A.I.: Methods of Nonlinear Mappings in Optimal Control. Theory and Applications to Models of Natural Systems. Nauka, Novosibirsk (1983)
22. Kuzenkov, O.A., Kuzenkova, G.V.: Optimal control of self-reproduction systems. J. Comput. Syst. Sci. Int. 51, 500–511 (2012)
23. Karpenko, A.P.: Modern Search Engine Optimization Algorithms. Algorithms Inspired by Nature, 2nd edn. N.E. Bauman MSTU, Moscow (2016)
24. Sakharov, M., Karpenko, A.: Meta-optimization of mind evolutionary computation algorithm using design of experiments. In: Abraham, A., Kovalev, S., Tarassov, V., Snasel, V., Sukhanov, A. (eds.) IITI'18 2018. AISC, vol. 874, pp. 473–482. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-01818-4_47
25. Petrov, A.A.: The structure of the software package for modeling technical systems under conditions of switching operating modes. Electromagn. Waves Electron. Syst. 23(4), 61–64 (2018)
26. Druzhinina, O., Masina, O., Petrov, A.: The synthesis of the switching systems optimal parameters search algorithms. In: Evtushenko, Y., Jaćimović, M., Khachay, M., Kochetov, Y., Malkova, V., Posypkin, M. (eds.) OPTIMA 2018. CCIS, vol. 974, pp. 306–320. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-10934-9_22
27. Lotka, A.: Elements of Physical Biology. Williams and Wilkins, Baltimore (1925)
28. Volterra, V.: Mathematical Theory of the Struggle for Existence [Russian Translation]. Nauka, Moscow (1976)
29. Dimitrienko, Y.I., Drogolyub, A.N., Gubareva, E.A.: Spline approximation-based optimization of multi-component disperse reinforced composites. Sci. Educ. Bauman MSTU 2, 216–233 (2015)
30. Laube, P., Franz, M., Umlauf, G.: Deep learning parametrization for B-spline curve approximation. In: 2018 International Conference on 3D Vision (3DV), pp. 691–699 (2018)
31. Padhye, N., Mittal, P., Deb, K.: Differential evolution: performances and analyses. In: 2013 IEEE Congress on Evolutionary Computation (CEC), pp. 1960–1967 (2013)

On Solving a Generalized Constrained Longest Common Subsequence Problem

Marko Djukanovic¹, Christoph Berger¹,², Günther R. Raidl¹, and Christian Blum²

¹ Institute of Logic and Computation, TU Wien, Vienna, Austria
{djukanovic,raidl}@ac.tuwien.ac.at, [email protected]
² Artificial Intelligence Research Institute (IIIA-CSIC), Campus UAB, Bellaterra, Spain
[email protected]

Abstract. Given a set of two input strings and a pattern string, the constrained longest common subsequence problem deals with finding a longest string that is a subsequence of both input strings and that contains the given pattern string as a subsequence. This problem has various applications, especially in computational biology. In this work we consider the NP-hard case of the problem in which more than two input strings are given. First, we adapt an existing A* search from two input strings to an arbitrary number m of input strings (m ≥ 2). With the aim of tackling large problem instances approximately, we additionally propose a greedy heuristic and a beam search. All three algorithms are compared to an existing approximation algorithm from the literature. Beam search turns out to be the best heuristic approach, matching almost all optimal solutions obtained by A* search on rather small instances.

Keywords: Longest common subsequences · Constrained subsequences · Beam search · A* search

1 Introduction

© Springer Nature Switzerland AG 2020. N. Olenev et al. (Eds.): OPTIMA 2020, LNCS 12422, pp. 55–70, 2020. https://doi.org/10.1007/978-3-030-62867-3_5

Strings are commonly used to represent DNA and RNA in computational biology, and it is often necessary to obtain a measure of similarity for two or more input strings. One of the most well-known measures is calculated by the so-called longest common subsequence (LCS) problem. Given a number of input strings, this problem asks to find a longest string that is a subsequence of all input strings. Hereby, a subsequence t of a string s is obtained by deleting zero or more characters from s. Apart from computational biology, the LCS problem also finds application in video segmentation [3] and text processing [13], just to name a few areas. During the last three decades, several variants of the LCS problem have arisen from practice. One of these variants is the constrained longest common subsequence (CLCS) problem [14], which can be stated as follows. Given m input


strings and a pattern string P, we seek a longest common subsequence of the input strings that has P as a subsequence. This problem provides a useful measure of similarity when additional information concerning the common structure of the input strings is known beforehand. The most studied CLCS variant is the one with only two input strings (2–CLCS); see, for example, [2,5,14]. In addition to these works from the literature, we recently proposed an A* search for the 2–CLCS problem [6] and showed that this algorithm is approximately one order of magnitude faster than other exact approaches. In the following we consider the general variant of the CLCS problem with m ≥ 2 input strings S = {s1, ..., sm}, henceforth denoted by m–CLCS. Note that the m–CLCS problem is NP-hard [1]. An application of this general variant arises in computational biology when it is necessary to find the commonality of not just two but an arbitrary number of DNA molecules under the consideration of a specific known structure. To the best of our knowledge, the approximation algorithm by Gotthilf et al. [9] is so far the only existing algorithm for solving the general m–CLCS problem. We first extend the general search framework and the A* search from [6] to solve the more general m–CLCS problem. For the application to large-scale instances we additionally propose two heuristic techniques: (i) a greedy heuristic that is efficient in producing reasonably good solutions within a short runtime, and (ii) a beam search (BS) which produces high-quality solutions at the cost of more time. The experimental evaluation shows that the BS is the new state-of-the-art algorithm, especially for large problem instances.

The rest of the paper is organized as follows. Section 2 describes a greedy heuristic for the m–CLCS problem. In Sect. 3 the general search framework for the m–CLCS problem is presented. Section 4 describes the A* search, and in Sect. 5 the beam search is proposed. In Sect. 6, our computational experiments are presented. Section 7 concludes this work and outlines directions for future research.
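The problem statement can be made concrete with a tiny brute-force sketch. The instance and the optimal length below are the example used later in the paper (Fig. 1); the brute-force procedure itself is ours and is feasible only for very small instances.

```python
from itertools import combinations

def is_subsequence(t, s):
    """True iff t can be obtained from s by deleting zero or more characters."""
    it = iter(s)
    return all(ch in it for ch in t)

def clcs_brute_force(strings, pattern):
    """Enumerate subsequences of the shortest string, longest first."""
    base = min(strings, key=len)
    for k in range(len(base), 0, -1):
        for idx in combinations(range(len(base)), k):
            cand = "".join(base[i] for i in idx)
            if (is_subsequence(pattern, cand)
                    and all(is_subsequence(cand, s) for s in strings)):
                return cand          # first hit at length k is a longest one
    return ""

S = ["bcaacbdba", "cbccadcbbd", "bbccabcdbba"]
print(clcs_brute_force(S, "cbb"))    # length 6, the optimum for this instance
```

For this instance one optimal solution is bcacbb: it is a subsequence of all three strings and contains the pattern cbb, and no constrained common subsequence of length 7 exists.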

2 A Fast Heuristic for the m–CLCS Problem

Henceforth we denote the length of a string s over a finite alphabet Σ by |s|, and the length of the longest string in the set of input strings {s1, ..., sm} by n, i.e., n := max{|s1|, ..., |sm|}. The j-th letter of a string s is denoted by s[j], j = 1, ..., |s|, and for j > |s| we define s[j] = ε, where ε denotes the empty string. Moreover, we denote the contiguous subsequence (that is, the substring) of s starting at position j and ending at position j′ by s[j, j′], j = 1, ..., |s|, j′ = j, ..., |s|. If j > j′, then s[j, j′] = ε. The concatenation of a string s and a letter c ∈ Σ is written as s · c. Finally, let |s|c be the number of occurrences of letter c ∈ Σ in s. We make use of two data structures created during preprocessing to allow an efficient search:

– For each i = 1, ..., m, j = 1, ..., |si|, and c ∈ Σ, Succ[i, j, c] stores the minimal position index x such that x ≥ j ∧ si[x] = c, or −1 if c does not occur in si from position j onward. This structure is built in O(m · n · |Σ|) time.

On Solving a Generalized CLCS Problem

57

– For each i = 1, ..., m and u = 1, ..., |P|, Embed[i, u] stores the right-most position x of si such that P[u, |P|] is a subsequence of si[x, |si|]. If no such position exists, Embed[i, u] := −1. This table is built in O(|P| · m) time.

In the following we present Greedy, a heuristic for the m–CLCS problem inspired by the well-known Best–Next heuristic [11] for the LCS problem. Greedy is pseudo-coded in Algorithm 1. The basic principle is straightforward. The algorithm starts with an empty solution string s := ε and proceeds by appending, at each construction step, exactly one letter to s. The choice of the letter to append is made by means of a greedy function, and the procedure stops once no more letters can be added. The basic data structure of the algorithm is a position vector p^s = (p^s_1, ..., p^s_m) ∈ N^m, which is initialized to p^s := (1, ..., 1) at the beginning. The superscript indicates that this position vector depends on the current (partial) solution s. Given p^s, the strings si[p^s_i, |si|], i = 1, ..., m, are the substrings from which letters can still be chosen for extending the current partial solution s. Moreover, the algorithm starts with a pattern position index u := 1. The meaning of u is that P[u, |P|] is the substring of P that remains to be included as a subsequence in s. At each construction step, first, a subset Σfeas ⊆ Σ of letters is determined that can feasibly extend the current partial solution s, ensuring that the final outcome contains pattern P as a subsequence. More specifically, Σfeas contains a letter c ∈ Σ iff (i) c appears in all strings si[p^s_i, |si|] and (ii) s · c can be extended towards a solution that includes pattern P. Condition (ii) is fulfilled if u = |P| + 1, P[u] = c, or Succ[i, p^s_i, c] < Embed[i, u] for all i = 1, ..., m (assuming that there is at least one feasible solution).
These three cases are checked in the given order, and with the first case that evaluates to true, condition (ii) evaluates to true; otherwise, condition (ii) evaluates to false. Next, dominated letters are removed from Σfeas. For two letters c, c′ ∈ Σfeas, we say that c dominates c′ iff Succ[i, p^s_i, c] ≤ Succ[i, p^s_i, c′] for all i = 1, ..., m. Afterwards, the remaining letters in Σfeas are evaluated by the greedy function explained below, and a letter c* that has the best greedy value is chosen and appended to s. Further, the position vector p^s is updated w.r.t. letter c* by p^s_i := Succ[i, p^s_i, c*] + 1, i = 1, ..., m. Moreover, u is increased by one if c* = P[u]. These steps are repeated until Σfeas = ∅, and the greedy solution s is returned. The greedy function used to evaluate each letter c ∈ Σfeas is

\[
g(p^s, u, c) = \frac{1}{l_{\min}(p^s, c) + \mathbb{1}_{P[u]=c}} + \sum_{i=1}^{m} \frac{Succ[i, p^s_i, c] - p^s_i + 1}{|s_i| - p^s_i + 1}, \tag{1}
\]

where l_min(p^s, c) is the length of the shortest remaining part of any of the input strings when letter c is appended to the solution string and thus consumed, i.e., l_min(p^s, c) := min{|si| − Succ[i, p^s_i, c] | i = 1, ..., m}, and 1_{P[u]=c} evaluates to one if P[u] = c and to zero otherwise. Greedy chooses at each construction step a letter that minimizes g(·). The first term of g(·) penalizes letters for which l_min is decreased more and which do not match the next pattern letter P[u]. The second term in Eq. (1) represents the sum of the ratios of characters that are


skipped (in relation to the remaining part of each input string) when extending the current solution s with letter c.

Algorithm 1. Greedy: a heuristic for the m–CLCS problem
1: Input: problem instance (S, P, Σ)
2: Output: heuristic solution s
3: s ← ε
4: p^s_i ← 1, i = 1, ..., m
5: u ← 1
6: Σfeas ← set of feasible and non-dominated letters for extending s
7: while Σfeas ≠ ∅ do
8:   c* ← arg min{g(p^s, u, c) | c ∈ Σfeas}
9:   s ← s · c*
10:  for i ← 1 to m do
11:    p^s_i ← Succ[i, p^s_i, c*] + 1
12:  end for
13:  if P[u] = c* then
14:    u ← u + 1    // consider next letter in P
15:  end if
16:  Σfeas ← set of feasible and non-dominated letters for extending s
17: end while
18: return s
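The heuristic described above can be sketched in Python as follows. This is a compact 0-based reimplementation for illustration, not the authors' code: the function and variable names are ours, and the guard against a zero denominator in the first term of g() is our own addition.

```python
import math

def build_succ(s, alphabet):
    """succ[j][c] = smallest x >= j with s[x] == c, or -1 (table Succ)."""
    succ = [dict.fromkeys(alphabet, -1) for _ in range(len(s) + 1)]
    for j in range(len(s) - 1, -1, -1):
        succ[j] = dict(succ[j + 1])
        succ[j][s[j]] = j
    return succ

def build_embed(s, P):
    """embed[u] = right-most x with P[u:] a subsequence of s[x:] (table Embed)."""
    embed = [-1] * (len(P) + 1)
    x = embed[len(P)] = len(s)           # the empty remainder embeds anywhere
    for u in range(len(P) - 1, -1, -1):
        x -= 1
        while x >= 0 and s[x] != P[u]:
            x -= 1
        embed[u] = x
        if x < 0:
            break                        # P[u:] does not embed at all
    return embed

def greedy_clcs(S, P, alphabet):
    m = len(S)
    succ = [build_succ(s, alphabet) for s in S]
    embed = [build_embed(s, P) for s in S]
    pos, u, sol = [0] * m, 0, ""
    while True:
        # condition (i): c occurs in every remaining suffix;
        # condition (ii): sol + c can still be completed to contain P
        cand = {}
        for c in alphabet:
            nxt = [succ[i][pos[i]][c] for i in range(m)]
            if min(nxt) < 0:
                continue
            if u < len(P) and c != P[u] and any(
                    nxt[i] >= embed[i][u] for i in range(m)):
                continue
            cand[c] = nxt
        # remove dominated letters (c dominates c' iff Succ <= componentwise)
        feas = [c for c in cand
                if not any(d != c and all(cand[d][i] <= cand[c][i]
                                          for i in range(m)) for d in cand)]
        if not feas:
            return sol

        def g(c):
            nxt = cand[c]
            lmin = min(len(S[i]) - 1 - nxt[i] for i in range(m))
            den = lmin + (1 if u < len(P) and P[u] == c else 0)
            first = 1.0 / den if den > 0 else math.inf   # our guard
            return first + sum((nxt[i] - pos[i] + 1) / (len(S[i]) - pos[i])
                               for i in range(m))

        c_star = min(feas, key=g)
        sol += c_star
        if u < len(P) and P[u] == c_star:
            u += 1
        pos = [cand[c_star][i] + 1 for i in range(m)]

print(greedy_clcs(["bcaacbdba", "cbccadcbbd", "bbccabcdbba"], "cbb", "abcd"))
```

Condition (ii) guarantees that the pattern remains embeddable after every extension, so the returned string always contains P as a subsequence on feasible instances.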

3 State Graph for the m–CLCS Problem

This section describes the state graph for the m–CLCS problem, in which paths from a dedicated root node to inner nodes correspond to (meaningful) partial solutions, paths from the root to sink nodes correspond to complete solutions, and directed arcs represent (meaningful) extensions of partial solutions. Note that the state graph for the m–CLCS problem is an extension of the state graph of the 2–CLCS problem [6]. Given an m–CLCS problem instance I = (S, P, Σ), let s be any string over Σ that is a common subsequence of all input strings S. Such a (partial) solution s induces a position vector p^s in a well-defined way by assigning a value to each p^s_i, i = 1, ..., m, such that si[1, p^s_i − 1] is the smallest string among all strings in {si[1, k] | k = 1, ..., |si|} that contains s as a subsequence. Note that these position vectors are the same as those already defined in the context of Greedy. In other words, s induces a subproblem I[p^s] := {s1[p^s_1, |s1|], ..., sm[p^s_m, |sm|]} of the original problem instance. This is because s can only be extended by adding letters that appear in all strings si[p^s_i, |si|], i = 1, ..., m. In this context, let the substring P[1, k′] of pattern string P be the maximal string among all strings P[1, k], k = 1, ..., |P|, such that P[1, k′] is a subsequence of s. We then say that s is a valid (partial) solution iff P[k′ + 1, |P|] is a subsequence of the strings in subproblem I[p^s], that is, a subsequence of si[p^s_i, |si|] for all i = 1, ..., m.

On Solving a Generalized CLCS Problem

59


Fig. 1. State graph for the instance ({s1 = bcaacbdba, s2 = cbccadcbbd, s3 = bbccabcdbba}, P = cbb, Σ = {a, b, c, d}). There are five non-extensible sink nodes (shown in gray). The longest path corresponds to the optimal solution s = bcacbb with length six and leads to node v = (p^v = (9, 10, 11), l^v = 6, u^v = 4) (shown in blue). (Color figure online)

The state graph G = (V, A) of our A* search is a directed acyclic graph where each node v ∈ V stores a triple (p^v, l^v, u^v), with p^v being a position vector that induces subproblem I[p^v], l^v being the length of (any) valid partial solution (i.e., path from the root to node v) that induces p^v, and u^v being the length of the longest prefix string of pattern P that is contained as a subsequence in any of the partial solutions that induce node v. Moreover, there is an arc a = (v, v′) ∈ A labeled with letter c(a) ∈ Σ between two nodes v = (p^v, l^v, u^v) and v′ = (p^v′, l^v′, u^v′) iff (i) l^v′ = l^v + 1 and (ii) subproblem I[p^v′] is induced by the partial solution that is obtained by appending letter c(a) to the end of a partial solution that induces v. As mentioned above, we are only interested in meaningful partial solutions, and thus, for feasibly extending a node v, only the letters from Σfeas can be chosen (see Sect. 2 for the definition of Σfeas). An extension v′ = (p^v′, l^v′, u^v′) is therefore generated for each c ∈ Σfeas in the following way: p^v′_i = Succ[i, p^v_i, c] + 1 for i = 1, ..., m, l^v′ = l^v + 1, and u^v′ = u^v + 1 in case c = P[u^v], respectively u^v′ = u^v otherwise. The root node of the state graph is defined by r = (p^r = (1, ..., 1), l^r = 0, u^r = 1) and thus represents the original problem instance. Sink nodes correspond to non-extensible states. A longest path from the root node to some sink node represents an optimal solution to the m–CLCS problem. Figure 1 shows as an example the full state graph for the problem instance ({s1 = bcaacbdba, s2 = cbccadcbbd, s3 = bbccabcdbba}, P = cbb, Σ = {a, b, c, d}). The root node, for example, can only be extended by letters b and c, because letters a and d


are dominated by the other two letters. Moreover, note that node ((6, 5, 5), 3, 2) (induced by partial solution bcc) can only be extended by letter b. Even though letter d is not dominated by letter b, adding letter d cannot lead to any feasible solution, because no solution starting with bccd has P = cbb as a subsequence.

3.1 Upper Bounds

As any upper bound for the general LCS problem is also valid for the m–CLCS problem [8], we adopt the following ones from existing work on the LCS problem. Given a subproblem represented by a node v of the state graph, the upper bound proposed by Blum et al. [4] determines for each letter a limit on the number of its occurrences in any solution that can be built starting from a partial solution represented by v. The upper bound is obtained by summing these values over all letters from Σ:

\[
UB_1(v) = \sum_{c \in \Sigma} \min_{i=1,\ldots,m} \left\{ \left| s_i[p^v_i, |s_i|] \right|_c \right\}, \tag{2}
\]

where |si[p^v_i, |si|]|_c is the number of occurrences of letter c in si[p^v_i, |si|]. This bound is efficiently calculated in O(m · |Σ|) time by making use of appropriate data structures created during preprocessing; see [7] for more details. A dynamic programming (DP) based upper bound was introduced by Wang et al. [15]. It makes use of the DP recursion for the classical LCS problem for all pairs of input strings {si, si+1}, i = 1, ..., m − 1. In more detail, for each pair Si = {si, si+1}, a scoring matrix Mi is recursively derived, where entry Mi[x, y], x = 1, ..., |si| + 1, y = 1, ..., |si+1| + 1, stores the length of the longest common subsequence of si[x, |si|] and si+1[y, |si+1|]. We then get the upper bound

\[
UB_2(v) = \min_{i=1,\ldots,m-1} M_i[p^v_i, p^v_{i+1}]. \tag{3}
\]

Neglecting the preprocessing step, this bound can be calculated efficiently in O(m) time. By combining the two bounds we obtain UB(v) := min{UB1(v), UB2(v)}. This bound is admissible for the A* search, which means that its values never underestimate the optimal value of the subproblem corresponding to a node v. Moreover, the bound is monotonic, that is, the priority value l^v′ + UB(v′) of a child node v′ never exceeds the priority value l^v + UB(v) of its parent v. Monotonicity is an important property in A* search, because it implies that no re-expansion of already expanded nodes [6] may occur.
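The two bounds can be sketched as follows. This is a 0-based illustrative reimplementation (function names are ours); for simplicity, UB1 recounts letter occurrences directly instead of using the precomputed counting structures mentioned above.

```python
# UB(v) = min(UB1(v), UB2(v)) for a node with position vector pos (0-based).

def ub1(S, alphabet, pos):
    """Sum over letters of the minimum remaining occurrence count (Eq. 2)."""
    return sum(min(s[p:].count(c) for s, p in zip(S, pos)) for c in alphabet)

def lcs_suffix_matrix(s, t):
    """M[x][y] = length of the LCS of s[x:] and t[y:] (classical DP recursion)."""
    M = [[0] * (len(t) + 1) for _ in range(len(s) + 1)]
    for x in range(len(s) - 1, -1, -1):
        for y in range(len(t) - 1, -1, -1):
            M[x][y] = (M[x + 1][y + 1] + 1 if s[x] == t[y]
                       else max(M[x + 1][y], M[x][y + 1]))
    return M

def ub2(matrices, pos):
    """Pairwise LCS bound (Eq. 3); matrices[i] compares s_i with s_{i+1}."""
    return min(matrices[i][pos[i]][pos[i + 1]] for i in range(len(matrices)))

S = ["bcaacbdba", "cbccadcbbd", "bbccabcdbba"]
mats = [lcs_suffix_matrix(S[i], S[i + 1]) for i in range(len(S) - 1)]
root = [0, 0, 0]
print(min(ub1(S, "abcd", root), ub2(mats, root)))   # -> 6 at the root
```

For the Fig. 1 instance, UB1 = 7 and UB2 = 6 at the root, so the combined bound equals the optimal solution length 6, which illustrates why tight dual bounds keep the A* search small.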

4 A* Search for the m–CLCS Problem

A∗ search [10] is a well-known technique in the ﬁeld of artiﬁcial intelligence. More speciﬁcally, it is a search algorithm based on the best-ﬁrst principle, explicitly suited for path-ﬁnding in large possibly weighted graphs. Moreover, it is an informed search, that is, the nodes to be further pursued are prioritized according to a function that includes a heuristic guidance component. This function is

On Solving a Generalized CLCS Problem


expressed as f(v) = g(v) + h(v) for all nodes v ∈ V to which a path has already been found. In our context the graph to be searched is the acyclic state graph G = (V, A) introduced in the previous section. The components of the priority function f(v) are:

– the length g(v) of a best-so-far path from the root r to node v, and
– an estimated length h(v) of a best path from node v to a sink node (known as dual bound).

The performance of A∗ search usually depends on the tightness of the dual bound, that is, on the size of the gap between the estimate and the real cost. In our A∗ search, g(v) = l^v and h(v) = UB(v), and the search utilizes the following two data structures:

1. The set N of all nodes created (reached) so far: This set is realized as a nested data structure of sorted lists within a hash map. That is, the position vectors p^v act as keys of the hash map, each one mapping to a list that stores the pairs (l^v, u^v) of all nodes (p^v, l^v, u^v) that induce subproblem I[p^v]. This structure was chosen to efficiently check whether a specific node was already generated during the search and to keep the memory footprint comparably small.
2. The open list Q: This is a priority queue that stores references to all not-yet-expanded nodes, sorted according to non-increasing values f(v). The structure is used to efficiently retrieve the most promising non-expanded node at any moment.

The search starts by adding root node r = ((1, . . . , 1), 0, 1) to N and Q. Then, at each iteration, the node with highest priority, i.e., the top node of Q, is expanded in all possible ways (see Sect. 3), and any newly created node v′ is stored in N and Q. If some node v′ is reached via the expanded node in a better way, its f-value is updated accordingly. Moreover, it is checked whether v′ is dominated by some other node from N[v′] ⊆ N, where N[v′] is the set of all nodes from N representing the same subproblem I[p^{v′}]. If this is the case, v′ is discarded.
In this context, given v̂, v′ ∈ N[v′], we say that v̂ dominates v′ iff l^{v̂} ≥ l^{v′} ∧ u^{v̂} ≥ u^{v′}. In the opposite case, that is, if some node v ∈ N[v′] is dominated by v′, node v is removed from N[v′] ⊆ N and from Q. The node expansion iterations are repeated until the top node of Q is a sink node, in which case the path from the root node to this sink node corresponds to a proven optimal solution. This path is retrieved by following each node's stored predecessor from the sink node back to the root node and reversing the result. Moreover, our A∗ search terminates without a meaningful solution when a specified time or memory limit is exceeded.
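The search loop just described could be sketched as follows. This is an illustrative Python sketch, not the authors' implementation: expand, ub, and is_sink stand for the state-graph expansion, the bound UB, and the sink test, improved paths are handled lazily by pushing fresh queue entries, and stale entries of dominated nodes may remain in Q.

```python
import heapq

# Sketch of the described A* loop. Nodes are triples (p, l, u) with p a tuple
# of positions; expand, ub, and is_sink are problem-specific callbacks.
def a_star(root, expand, ub, is_sink):
    p0, l0, u0 = root
    N = {p0: [(l0, u0)]}            # subproblem key -> non-dominated (l, u) pairs
    Q = [(-(l0 + ub(root)), root)]  # max-priority queue via negated f = g + h
    while Q:
        _, v = heapq.heappop(Q)
        if is_sink(v):
            return v                # top node is a sink: proven optimal
        for w in expand(v):
            pw, lw, uw = w
            pairs = N.setdefault(pw, [])
            if any(l >= lw and u >= uw for (l, u) in pairs):
                continue            # w is dominated: discard it
            # Drop pairs that w dominates, then store and enqueue w.
            pairs[:] = [(l, u) for (l, u) in pairs if not (lw >= l and uw >= u)]
            pairs.append((lw, uw))
            heapq.heappush(Q, (-(lw + ub(w)), w))
    return None                     # state graph exhausted
```

The negated priority is a common way to obtain a max-heap from Python's min-heap; a full implementation would additionally remove dominated entries from Q, as described above.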

5 Beam Search for the m–CLCS Problem

It is well known from research on other LCS variants that beam search (BS) is often able to produce high-quality approximate solutions in this domain [8]. For those cases in which our A∗ approach is not able to deliver an optimal solution in a reasonable computation time, we therefore propose the following BS approach.


M. Djukanovic et al.

Before the start of the BS procedure, Greedy is performed to obtain an initial solution s_bsf. This solution can be used in BS for pruning partial solutions (nodes) that provably cannot be extended towards a solution better than s_bsf. Beam search maintains a set of nodes B, called the beam, which is initialized with the root node r at the start of the algorithm. Remember that this root node represents the empty partial solution. A single major iteration of BS consists of the following steps:

– Each node v ∈ B is expanded in all possible ways (see the definition of the state graph) and the extensions are kept in a set V_ext. If any node v ∈ V_ext is a complete node for which l^v is larger than |s_bsf|, the best-so-far solution s_bsf is updated accordingly.
– Application of function Prune(V_ext, UB_prune) (optional): All nodes from V_ext whose upper bound value is no more than |s_bsf| are removed. UB_prune refers to the utilized upper bound function (see Sect. 3.1 for the options).
– Application of function Filter(V_ext, k_best) (optional): This function examines the nodes from V_ext and removes dominated ones. Given v, v′ ∈ V_ext, we say in this context that v dominates v′ iff p_i^v ≤ p_i^{v′} for all i = 1, . . . , m, and u^v ≥ u^{v′}. Note that this is a generalization of the domination relation introduced in [4] for the LCS problem. Since it is time-demanding to examine the possible domination of each pair of nodes from V_ext if |V_ext| is not small, the domination of each node v ∈ V_ext is only checked against the best k_best nodes from V_ext w.r.t. a heuristic guidance function h(v), where k_best is a strategy parameter. We will consider several options for h(v) in the next section.
– Application of function Reduce(V_ext, β): The best at most β nodes are selected from V_ext to form the new beam B for the next major iteration; the beam width β is another strategy parameter.

These four steps are repeated until B becomes empty.
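A single major iteration could be sketched like this. This is an illustrative Python sketch under our own naming; expand, ub, and h are placeholders for the state-graph expansion, the bound, and the guidance function, and the update of s_bsf for complete nodes is omitted for brevity.

```python
# Sketch of one BS major iteration: expand, prune, filter, reduce.
# Nodes are triples (p, l, u) with p a tuple of positions.
def dominates(w, v):
    # w dominates v iff all positions of w are <= those of v and u^w >= u^v.
    return all(a <= b for a, b in zip(w[0], v[0])) and w[2] >= v[2]

def bs_iteration(beam, expand, ub, h, beta, k_best, best_len):
    v_ext = [w for v in beam for w in expand(v)]
    # Prune: drop nodes whose upper bound cannot beat the best-so-far length.
    v_ext = [v for v in v_ext if ub(v) > best_len]
    # Filter: check domination only against the k_best best nodes w.r.t. h.
    top = sorted(v_ext, key=h, reverse=True)[:k_best]
    v_ext = [v for v in v_ext
             if not any(w is not v and dominates(w, v) for w in top)]
    # Reduce: the best at most beta nodes form the beam of the next iteration.
    return sorted(v_ext, key=h, reverse=True)[:beta]
```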
Beam search is thus a kind of incomplete breadth-first search.

5.1 Options for the Heuristic Guidance of BS
Different functions can be used as heuristic guidance of the BS, that is, for the function h that evaluates the heuristic goodness of any node v = (p^{L,v}, l^v, u^v) ∈ V. An obvious choice is, of course, the upper bound UB from Sect. 3.1. Additionally, we consider the following three options.

5.1.1 Probability Based Heuristic

For a probability based heuristic guidance, we make use of a DP recursion from [12] for calculating the probability Pr(p, q) that any string of length p is a subsequence of a random string of length q. These probabilities are computed in a preprocessing step for all p, q = 0, . . . , n. Remember in this context that n is the length of the longest input string. Assuming independence among the input strings, the probability Pr(s ≺ S) that a random string s of length p is a


common subsequence of all input strings from S is $\Pr(s \prec S) = \prod_{i=1}^{m} \Pr(p, |s_i|)$. Given V_ext in some construction step of BS, the question is now how to choose the value p, common for all nodes v ∈ V_ext, in order to take profit from the above formula in a sensible heuristic manner. For this purpose, we first calculate

$$p_{\min} = \min_{v \in V_{\mathrm{ext}}} \left( |P| - u^v + 1 \right), \qquad (4)$$

where P is the pattern string of the tackled m–CLCS instance. Note that the string P[p_min, |P|] must appear as a subsequence in all possible completions of all nodes from V_ext, because pattern P must be a subsequence of any feasible solution. Based on p_min, the value of p for all v ∈ V_ext is then heuristically chosen as

$$p = p_{\min} + \min_{v \in V_{\mathrm{ext}}} \frac{\min_{i=1,\dots,m} \{|s_i| - p_i^v + 1\} - p_{\min}}{|\Sigma|}. \qquad (5)$$

The intention here is, first, to let the characters from P[p_min, |P|] fully count, because they will, as mentioned above, appear for sure in any possible extension. This explains the first term (p_min) in Eq. (5). The second term is justified by the fact that an optimal m–CLCS solution becomes shorter as the alphabet size becomes larger. Moreover, the solution tends to be longer for nodes v whose shortest remaining string from I[p^v] is longer than that of other nodes. We emphasize that this is a heuristic choice which might be improvable. If p turns out to be zero, we set it to one in order to break ties. The final probability-based heuristic for evaluating a node v ∈ V_ext is then

$$H(v) = \prod_{i=1}^{m} \Pr(p, |s_i| - p_i^v + 1), \qquad (6)$$

and those nodes with a larger H-value are preferred.

5.1.2 Expected Length Based Heuristic

In [8] we derived an approximate formula for the expected length of a longest common subsequence of a set of uniform random strings. Before we extend this result to the m–CLCS problem, we state those aspects of the results from [8] that are needed for this purpose. For more information we refer the interested reader to the original article. In particular, from [8] we know that

$$E[Y] = \sum_{k=1}^{l_{\min}} E[Y_k], \qquad (7)$$

where lmin := min{|si | | i = 1, . . . , m}, Y is a random variable for the length of an LCS, and Yk is, for any k = 1, . . . , lmin , a binary random variable indicating whether or not there is an LCS with a length of at least k. E[·] denotes the expected value of some random variable.
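Both the probability based heuristic (Eqs. (4)–(6)) and the expected-length computation below rely on the tabulated probabilities Pr(p, q). The following is a minimal sketch of such a table, assuming the DP recursion of [12] takes the standard form for uniform random strings over an alphabet of size σ; the names are ours.

```python
# Tabulate Pr(p, q): the probability that a uniform random string of length p
# is a subsequence of an independent uniform random string of length q over an
# alphabet of size sigma. The recursion conditions on whether the first
# remaining character matches the next scanned character (assumed form of the
# recursion from [12], shown for illustration).
def subsequence_probs(n, sigma):
    pr = [[0.0] * (n + 1) for _ in range(n + 1)]
    for q in range(n + 1):
        pr[0][q] = 1.0                      # the empty string always matches
    for p in range(1, n + 1):
        for q in range(p, n + 1):           # pr[p][q] = 0 whenever p > q
            pr[p][q] = (pr[p - 1][q - 1] + (sigma - 1) * pr[p][q - 1]) / sigma
    return pr
```

For example, with σ = 2 this yields Pr(1, 1) = 1/2 and Pr(1, 2) = 3/4, matching the direct calculations 1/σ and 1 − ((σ − 1)/σ)².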


In the context of the m–CLCS problem, a similar formula with the following re-definition of the binary variables is used. Y is now a random variable for the length of an LCS that has pattern string P as a subsequence, and the Y_k are binary random variables indicating whether or not there is an LCS with a length of at least k having P as a subsequence. If we assume the existence of at least one feasible solution, we get

$$E[Y] = |P| + \sum_{k=|P|+1}^{l_{\min}} E[Y_k].$$

For k = |P|, . . . , l_min, let T_k be the set of all possible strings of length k over alphabet Σ. Clearly, there are |Σ|^k such strings. For each s ∈ T_k we define the event Ev_s that s is a subsequence of all input strings from S having P as a subsequence. For simplicity, we assume independence among the events Ev_{s′} and Ev_{s′′}, for any s′, s′′ ∈ T_k with s′ ≠ s′′. With this assumption, the probability that string s ∈ T_k is a subsequence of all input strings from S is equal to $\prod_{i=1}^{m} \Pr(|s|, |s_i|)$. Further, under the assumptions that (i) s is a uniform random string and (ii) the probabilities Pr(s ≺ s_i) that s is a subsequence of s_i, for i = 1, . . . , m, and the probability Pr(P ≺ s) that P is a subsequence of s are independent, it follows that the probability P^CLCS(s, S, P) that s is a common subsequence of all strings from S having pattern P as a subsequence is equal to $\Pr(|P|, k) \cdot \prod_{i=1}^{m} \Pr(k, |s_i|)$. Moreover, note that, under our assumptions, it holds that Pr(P ≺ s′) = Pr(P ≺ s′′) = Pr(|P|, k), for any pair of sampled strings s′, s′′ ∈ T_k. Therefore, it follows that

$$E[Y_k] = 1 - \prod_{s \in T_k} \left( 1 - P^{\mathrm{CLCS}}(s, S, P) \right) = 1 - \left( 1 - \prod_{i=1}^{m} \Pr(k, |s_i|) \right)^{|\Sigma|^k \cdot \Pr(|P|, k)}. \qquad (8)$$

Using this result, the expected length of a final m–CLCS solution that includes a string inducing node v ∈ V as a prefix can be approximated by the following (heuristic) expression:

$$\begin{aligned} \mathrm{EX}^{\mathrm{CLCS}}(v) &= |P| - u^v + \left( l_{\min}^v - (|P| - u^v + 1) + 1 \right) - \sum_{k=|P|-u^v+1}^{l_{\min}^v} \left( 1 - \prod_{i=1}^{m} \Pr(k, |s_i| - p_i^{L,v} + 1) \right)^{|\Sigma|^k \cdot \Pr(|P| - u^v, k)} \\ &= l_{\min}^v - \sum_{k=|P|-u^v+1}^{l_{\min}^v} \left( 1 - \prod_{i=1}^{m} \Pr(k, |s_i| - p_i^{L,v} + 1) \right)^{|\Sigma|^k \cdot \Pr(|P| - u^v, k)}, \qquad (9) \end{aligned}$$

where $l_{\min}^v = \min\{|s_i| - p_i^{L,v} + 1 \mid i = 1, \dots, m\}$. To calculate this value in practice, one has to take care of numerical issues, in particular the large power value |Σ|^k. We resolve this in the same way as in [8], by applying a Taylor series.
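The Taylor-series treatment itself is described in [8]; as an alternative illustration, one summand of Eq. (9) can also be evaluated safely in log space. The following sketch is ours, and the threshold constants are ad hoc.

```python
import math

# Numerically safe evaluation of one summand (1 - q)^N of Eq. (9), where
# q = prod_i Pr(k, |s_i| - p_i^{L,v} + 1) and N = |Sigma|^k * Pr(|P| - u^v, k).
# Direct evaluation of |Sigma|**k overflows for large k, so N is handled in
# log space (an illustrative alternative to the Taylor series used in [8]).
def stable_term(log_q, k, sigma, log_pr_pattern):
    log_n = k * math.log(sigma) + log_pr_pattern   # log N
    q = math.exp(log_q)
    if q < 1e-12:
        # (1 - q)^N ~ exp(-N q); compute N q as exp(log N + log q).
        exponent = -math.exp(min(log_n + log_q, 700.0))
    else:
        if log_n > 700.0:
            return 0.0                             # (1 - q)^N underflows to 0
        exponent = math.exp(log_n) * math.log1p(-q)
    return math.exp(exponent)
```

Here log1p avoids cancellation when q is small but not negligible, while the first branch handles the regime in which N is astronomically large and q is tiny.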


5.1.3 Pattern Ratio Heuristic

So far we have introduced three options for the heuristic function in beam search: the upper bound (Sect. 3.1), the probability based heuristic (Sect. 5.1.1), and the expected length based heuristic (Sect. 5.1.2). With the intention to test, in comparison, a much simpler measure, we introduce in the following the pattern ratio heuristic, which only depends on the length of the shortest string in S[p^v] and the length of the remaining part of the pattern string to be covered (|P| − u^v + 1). In fact, we might directly use the following function for estimating the goodness of any v ∈ V:

$$R(v) := \frac{\min_{i=1,\dots,m} \left( |s_i| - p_i^v + 1 \right)}{|P| - u^v + 1}. \qquad (10)$$

In general, the larger R(v), the more preferable v should be. However, note that the direct use of (10) generates numerous ties. In order to avoid a large number of ties, instead of R(v) we use the well-known k-norm

$$\|v\|_k^k = \sum_{i=1}^{m} \left( \frac{|s_i| - p_i^v + 1}{|P| - u^v + 1} \right)^k, \quad \text{with some } k > 0.$$

Again, nodes v ∈ V with a larger ‖·‖_k-value are preferable. In our experiments, we set k = 2 (Euclidean norm).
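For illustration, the tie-breaking norm could be computed as follows; this is a sketch with argument names of our own choosing.

```python
# The k-norm used to break ties of the pattern ratio heuristic; k = 2 gives
# the Euclidean variant used in the experiments. Positions are 1-based.
def pattern_ratio_norm(lengths, pos, pattern_len, u, k=2):
    denom = pattern_len - u + 1            # |P| - u^v + 1, remaining pattern
    return sum(((n - p + 1) / denom) ** k for n, p in zip(lengths, pos))
```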

6 Experimental Evaluation

All algorithms were implemented in C++ using GCC 7.4, and the experiments were conducted in single-threaded mode on a machine with an Intel Xeon E5-2640 processor at 2.40 GHz and a memory limit of 32 GB. The maximal CPU time allowed for each run was set to 15 min, i.e., 900 s.

We generated the following set of problem instances for the experimental evaluation. For each combination of the number of input strings m ∈ {10, 50, 100}, the length of the input strings n ∈ {100, 500, 1000}, the alphabet size |Σ| ∈ {4, 20}, and the ratio p = |P|/n ∈ {1/50, 1/20, 1/10, 1/4, 1/2}, ten instances were created, each one as follows. First, P is generated uniformly at random. Then, each string s_i ∈ S is generated as follows. First, P is copied, that is, s_i := P. Then, s_i is augmented in n − |P| steps by single random characters. The position for each new character is selected randomly: between any two consecutive characters of s_i, at the beginning of s_i, or at its end. This procedure ensures that at least one feasible solution exists for each instance. The benchmarks are available at https://www.ac.tuwien.ac.at/files/resources/instances/m-clcs.zip. Overall, we thus created and use 900 benchmark instances.

We include the following six algorithms (resp. algorithm variants) in our comparison: (i) the approximation algorithm from [9] (Approx), (ii) Greedy from Sect. 2, and (iii) the four beam search configurations differing only in the heuristic guidance function. These BS versions are denoted as follows: Bs-Ub refers to BS using the upper bound, Bs-Prob to the use of the probability based heuristic, Bs-Ex to the use of the expected length based heuristic, and Bs-Pat to the use of the pattern ratio heuristic. Moreover, we include the information of how


many instances of each type were solved to optimality by the exact A∗ search. Concerning the beam search, the parameters β (the maximum number of nodes kept for the next iteration) and k_best (the extent of filtering) are crucial for obtaining good results. After tuning, we selected β = 2000 and k_best = 100. Moreover, the tuning procedure indicated that upper-bound-based pruning is indeed beneficial.

Table 1. Results for instances with p = (value truncated in source). Column layout: |Σ|, m, n, then |s| and t[s] for Approx and for Greedy; the first data row begins |Σ| = 4, m = 10, n = 100, Approx |s| = 21.4. (Table truncated in source.)