Variational Methods in Partially Ordered Spaces (CMS/CAIMS Books in Mathematics, 7) 303136533X, 9783031365331


English · Pages: 581 [576] · Year: 2023


Table of contents:
Preface to the First Edition
Mathematical Background
Purpose of This Book
Organization
Preface to the Second Edition
Acknowledgments
Contents
List of Symbols and Abbreviations
Abbreviations
Spaces
Sets
Cones, Relations, Measures, Nets
Sets of Optimal Elements
Functions and Operators
Set-Valued Mappings
List of Figures
1 Examples
1.1 Cones in Vector Spaces
1.2 Equilibrium Problems
1.3 Location Problems in Town Planning
1.4 Multiobjective Control Problems
1.5 Stochastic Dominance
1.6 Uncertainty
2 Functional Analysis over Cones
2.1 Order Structures
2.1.1 Binary relations
2.1.2 Cone order structures on linear spaces
2.2 Functional Analysis and Convexity
2.2.1 Locally convex spaces
2.2.2 Examples and properties of locally convex spaces
2.2.3 Asplund spaces
2.2.4 Special results in the theory of locally convex spaces
2.3 Generalized Set Less Relations
2.4 Separation Theorems for Not Necessarily Convex Sets
2.4.1 Algebraic and Topological Properties
2.4.2 Continuity and Lipschitz Continuity
2.4.3 Separation Properties
2.5 Characterization of set relations by means of nonlinear functionals
2.6 Convexity Notions for Sets and Multifunctions
2.7 Continuity Notions for Multifunctions
2.8 Continuity Notions for Extended Vector-valued Functions
2.9 Extended Multifunctions
2.10 Continuity Properties of Multifunctions Under Convexity Assumptions
2.11 Tangent Cones and Differentiability of Multifunctions
2.12 Radial Epi-Differentiability of Extended Vector-Valued Functions
3 Optimization in Partially Ordered Spaces
3.1 Solution Concepts in Vector Optimization
3.1.1 Approximate Minimality
3.1.2 A General Scalarization Method
3.2 Solution Concepts in Set-Valued Optimization
3.2.1 Solution Concepts Based on Vector Approach
3.2.2 Solution Concepts Based on Set Approach
3.3 Existence Results for Efficient Points
3.3.1 Preliminary Notions and Results Concerning Transitive Relations
3.3.2 Existence of Maximal Elements with Respect to Transitive Relations
3.3.3 Existence of Efficient Points with Respect to Cones
3.3.4 Types of Convex Cones and Compactness with Respect to Cones
3.3.5 Classification of Existence Results for Efficient Points
3.3.6 Some Density and Connectedness Results
3.4 Continuity Properties with Respect to a Scalarization Parameter
3.5 Well-Posedness of Vector Optimization Problems
3.6 Continuity Properties
3.6.1 Continuity Properties of Optimal-Value Multifunctions
3.6.2 Continuity Properties for the Optimal Multifunction in the Case of Moving Cones
3.6.3 Continuity Properties for the Solution Multifunction
3.7 Sensitivity of Vector Optimization Problems
3.8 Duality
3.8.1 Duality Without Scalarization
3.8.2 Duality by Scalarization
3.8.3 Duality for Approximation Problems
3.9 Vector Equilibrium Problems and Related Topics
3.9.1 Vector Equilibrium Problems
3.9.2 General Vector Monotonicity
3.9.3 Generalized KKM Lemma
3.9.4 Existence of Vector Equilibria by Use of the Generalized KKM Lemma
3.9.5 Existence by Scalarization of Vector Equilibrium Problems
3.9.6 Some Knowledge About the Assumptions
3.9.7 Some Particular Cases
3.9.8 Mixed Vector Equilibrium Problems
3.10 Vector Variational Inequalities
3.10.1 Vector Variational-Like Inequalities
3.10.2 Perturbed Vector Variational Inequalities
3.10.3 Hemivariational Inequality Systems
3.10.4 Vector Complementarity Problems
3.10.5 Vector Optimization Problems
3.10.6 Minimax Theorem for Vector-Valued Mappings
3.11 Minimal-Point Theorems in Product Spaces and Corresponding Variational Principles
3.11.1 Not Authentic Minimal-Point Theorems
3.11.2 Authentic Minimal-Point Theorems
3.11.3 Minimal-Point Theorems and Gauge Techniques
3.11.4 Minimal-Point Theorems and Cone-Valued Metrics
3.11.5 Fixed Point Theorems of Kirk–Caristi Type
3.12 Saddle Point Theory
3.12.1 Lagrange Multipliers and Saddle Point Assertions
3.12.2 ε-Saddle Point Assertions
4 Generalized Differentiation and Optimality Conditions
4.1 Mordukhovich/limiting generalized differentiation
4.2 General Concept of Set Extremality
4.2.1 Introduction to Set Extremality
4.2.2 Characterizations of Asplund spaces
4.2.3 Properties and Applications of Extremal systems
4.2.4 Extremality and Optimality
4.3 Subdifferentials of Scalarization Functionals
4.4 Application to Optimization Problems with Set-valued Objectives
5 Applications
5.1 Approximation Problems
5.1.1 General Approximation Problems
5.1.2 Finite-dimensional Approximation Problems
5.1.3 Lp-Approximation Problems
5.1.4 Example: The Inverse Stefan Problem
5.2 Solution Procedures
5.2.1 A Proximal-Point Algorithm for Real-Valued Control Approximation Problems
5.2.2 An Interactive Algorithm for the Vector Control Approximation Problem
5.2.3 Proximal Algorithms for Vector Equilibrium Problems
5.2.4 Relaxation and Penalization for Vector Equilibrium Problems
5.3 Location Problems
5.3.1 Formulation of the Problem
5.3.2 An Algorithm for the Multiobjective Location Problem
5.4 Multiobjective Control Problems
5.4.1 The Formulation of the Problem
5.4.2 An ε-Minimum Principle for Multiobjective Optimal Control Problems
5.4.3 A Multiobjective Stochastic Control Problem
5.5 To attain Pareto-equilibria by Asymptotic Behavior of First-Order Dynamical Systems
5.5.1 Pareto Equilibria and Associate Critical Points
5.5.2 Existence of Solutions for (WVEP) and (LSEP)λ
5.5.3 Existence of Solutions for Continuous First-Order Equilibrium Dynamical Systems
5.5.4 Asymptotic Behavior of Solutions when t → +∞
5.5.5 Asymptotic Behavior for Multiobjective Optimization Problem
5.5.6 Asymptotic Behavior for Multiobjective Saddle-point Problem
5.5.7 Numerical examples
6 Scalar Optimization under Uncertainty
6.1 Robustness and Stochastic Programming
6.2 Scalar Optimization under Uncertainty
6.2.1 Formulation of Optimization Problems under Uncertainty
6.2.2 Strict Robustness
6.2.3 Optimistic Robustness
6.2.4 Regret Robustness
6.2.5 Reliability
6.2.6 Adjustable Robustness
6.2.7 Minimizing the Expectation
6.2.8 Stochastic Dominance
6.2.9 Two-Stage Stochastic Programming
6.3 Vector Optimization as Unifying Concept
6.3.1 Vector Approach for Strict Robustness
6.3.2 Vector Approach for Optimistic Robustness
6.3.3 Vector Approach for Regret Robustness
6.3.4 Vector Approach for Reliability
6.3.5 Vector Approach for Adjustable Robustness
6.3.6 Vector Approach for Minimizing the Expectation
6.3.7 Vector Approach for Stochastic Dominance
6.3.8 Vector Approach for Two-stage Stochastic Programming
6.3.9 Proper Robustness
6.3.10 An Overview on Concepts of Robustness Based on Vector Approach
6.4 Set Relations as Unifying Concept
6.4.1 Set Approach for Strict Robustness
6.4.2 Set Approach for Optimistic Robustness
6.4.3 Set Approach for Regret Robustness
6.4.4 Set Approach for Reliability
6.4.5 Set Approach for Adjustable Robustness
6.4.6 Certain Robustness
6.4.7 An Overview on Concepts of Robustness Based on Set Relations
6.5 Translation Invariant Functions as Unifying Concept
6.5.1 Nonlinear Scalarizing Functional for Strict Robustness
6.5.2 Nonlinear Scalarizing Functional for Optimistic Robustness
6.5.3 Nonlinear Scalarizing Functional for Regret Robustness
6.5.4 Nonlinear Scalarizing Functional for Reliability
6.5.5 Nonlinear Scalarizing Functional for Adjustable Robustness
6.5.6 Nonlinear Scalarizing Functional for Minimizing the Expectation
6.5.7 Nonlinear Scalarizing Functional for Two-Stage Stochastic Programming
6.5.8 Nonlinear Scalarization Approach for ε-Constraint Robustness
6.5.9 An Overview on Concepts of Robustness Based on Translation Invariant Functionals
References
Index

CMS/CAIMS Books in Mathematics

Alfred Göpfert · Hassan Riahi · Christiane Tammer · Constantin Zălinescu

Canadian Mathematical Society Société mathématique du Canada

Variational Methods in Partially Ordered Spaces Second Edition

CMS/CAIMS Books in Mathematics Volume 7

Series Editors Karl Dilcher, Department of Mathematics and Statistics, Dalhousie University, Halifax, NS, Canada Frithjof Lutscher, Department of Mathematics, University of Ottawa, Ottawa, ON, Canada Nilima Nigam, Department of Mathematics, Simon Fraser University, Burnaby, BC, Canada Keith Taylor, Department of Mathematics and Statistics, Dalhousie University, Halifax, NS, Canada Associate Editors Ben Adcock, Department of Mathematics, Simon Fraser University, Burnaby, BC, Canada Martin Barlow, University of British Columbia, Vancouver, BC, Canada Heinz H. Bauschke, University of British Columbia, Kelowna, BC, Canada Matt Davison, Department of Statistical and Actuarial Science, Western University, London, ON, Canada Leah Keshet, Department of Mathematics, University of British Columbia, Vancouver, BC, Canada Niky Kamran, Department of Mathematics and Statistics, McGill University, Montreal, QC, Canada Mikhail Kotchetov, Memorial University of Newfoundland, St. John’s, Canada Raymond J. Spiteri, Department of Computer Science, University of Saskatchewan, Saskatoon, SK, Canada

CMS/CAIMS Books in Mathematics is a collection of monographs and graduate-level textbooks published jointly with the Canadian Mathematical Society–Société mathématique du Canada and the Canadian Applied and Industrial Mathematics Society–Société Canadienne de Mathématiques Appliquées et Industrielles. This series offers authors the joint advantage of publishing with two major mathematical societies and with a leading academic publishing company. The series is edited by Karl Dilcher, Frithjof Lutscher, Nilima Nigam, and Keith Taylor. The series publishes high-impact works across the breadth of mathematics and its applications. Books in this series will appeal to all mathematicians, students and established researchers. The series replaces the CMS Books in Mathematics series that successfully published over 45 volumes in 20 years.

Alfred Göpfert · Hassan Riahi · Christiane Tammer · Constantin Zălinescu

Variational Methods in Partially Ordered Spaces Second Edition

Alfred Göpfert, Institute of Mathematics, Martin-Luther-University Halle-Wittenberg, Halle, Germany

Hassan Riahi, Department of Mathematics, Cadi Ayyad University, Marrakech, Morocco

Christiane Tammer, Institute of Mathematics, Martin-Luther-University Halle-Wittenberg, Halle, Germany

Constantin Zălinescu, Octav Mayer Institute of Mathematics, Iași Branch of Romanian Academy, Iași, Romania

ISSN 2730-650X   ISSN 2730-6518 (electronic)
CMS/CAIMS Books in Mathematics
ISBN 978-3-031-36533-1   ISBN 978-3-031-36534-8 (eBook)
https://doi.org/10.1007/978-3-031-36534-8
Mathematics Subject Classification: 90-02, 46N10, 65K10, 90C29, 90C48

1st edition: © 2003 Springer-Verlag New York, Inc.
2nd edition: © Springer Nature Switzerland AG 2023

This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG. The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland.

Preface to the First Edition

The title of the book reflects its aim: to present research results on optimization in general spaces in the context of the interplay between variational methods and general preferences, together with access to these results. Variational methods comprise variational principles, extremal principles, multifunctions, generalized differentiation, normal cones, variational inequalities, variations, and perturbations; preferences are described by variously formed order structures, often by means of a cone, which can also be movable. In the mathematical modeling of processes occurring in industrial systems, logistics, management science, operations research, networks, and control theory, one often encounters optimization problems involving more than one objective function, so that multiobjective optimization (or vector optimization, initiated by V. Pareto) has received new impetus. The growing interest in multiobjective problems, both from the theoretical point of view and as concerns applications to real problems, asks for a general scheme that embraces several existing developments and stimulates new ones. With this book, we intend to give direct access to new results and new applications of this quickly growing field.

Mathematical Background In particular, we discuss basic tools of partially ordered spaces and apply them to variational methods in nonlinear analysis and to optimization problems; i.e., we present the functional analysis relevant to our presentation, especially separation theorems for not necessarily convex sets, which are important for the characterization of solutions and for the proof of existence results and optimality conditions in multicriteria optimization. We use the optimality conditions to derive numerical algorithms for special classes of vector optimization problems.
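One such tool, central to the scalarization results mentioned here, is the nonlinear functional ϕD,k defined later in (2.42) (see Section 2.4 and the List of Symbols); as a brief preview, its defining formula and its characteristic translation property along the direction k read:

```latex
% The nonlinear scalarizing functional (2.42); the assumptions on the set D
% and the direction k under which it is finite-valued and separates sets are
% developed in Section 2.4.
\varphi_{D,k}(y) \;:=\; \inf\{\, t \in \mathbb{R} \mid y \in tk - D \,\},
\qquad
\varphi_{D,k}(y + sk) \;=\; \varphi_{D,k}(y) + s \quad (s \in \mathbb{R}).
```

The translation property follows directly from the definition by the substitution t = u + s; it is the reason these functionals appear again in Section 6.5 under the heading of translation invariant functions.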


Purpose of This Book We believe that our book will be of interest to graduate students in mathematics, economics, and engineering as well as researchers in pure and applied mathematics, economics, engineering, geography, and town planning. A sound knowledge of linear algebra and introductory real analysis should provide readers with sufficient background for this book. On the one hand, the book has the character of a monograph, because the authors use many of their own results and applications; on the other hand, it is a textbook, because we would like to present in a sense a state of the art of the field in an understandable, useful, and teachable way.

Organization Firstly, we shall give some simple examples to show which kinds of problems can be handled with the methods of the book. Then the three main chapters follow. In the first of them we deal with connections between order structures and topological structures of sets, give a new nonconvex separation theorem, which is very useful for scalarization, and study different properties of multifunctions. The second of them contains our results concerning the theory of multicriteria optimization and equilibrium problems directly. Approximate efficiency, scalarization, new existence results with respect to different order relations, well-posedness, sensitivity, duality, and optimality conditions with respect to general vector optimization problems and a big section on minimal point theorems belong to this chapter as well as new results of vector equilibrium problems and their applications to vector-valued variational inequalities. Those new theoretical results are applied in the last chapter of the book in order to construct numerical algorithms, especially proximal point algorithms and geometrical algorithms based on duality assertions. It is possible to use the special structure of several classes of multicriteria optimization problems (location problems, approximation problems, fractional programming problems, multicriteria control problems) for deriving optimality conditions and corresponding algorithms. We discuss concrete applications (approximation problems, location problems in town planning, multicriteria equilibrium problems, and fractional programming) with solution procedures and in some cases with corresponding software. Here and in the whole book, there are examples to illustrate the results or to check stated conditions. The chapters are followed by a list of references, a list of symbols, and a big enough index.

Preface to the Second Edition

Since vector variational methods have been a growing field of research over the past few years, our main goal when preparing the second edition of Variational Methods in Partially Ordered Spaces was to expand on it without changing the structure of the initial book. This new edition, as suggested by the publisher, has benefited from the comments of the authors and many individuals, which have resulted in the addition of some new sections and the reorganization of all others. Moreover, recently published articles and books related to vector variational methods have been added to the references. The main changes are:

• In Chapter 1, an additional example is added in Section 1.2 in order to solve a vector convex minimization problem by the asymptotic behavior of solutions of a continuous dynamical system. Section 1.6 on uncertainty is new and briefly explains in two examples how the solution of a vector problem reacts to disturbances in the data.

• Chapter 2 contains several new notions and results on Functional Analysis over Cones. In particular, it improves the presentation of Sections 2.1.2 and 2.2. Sections 2.3, 2.5, 2.9, and 2.12 are entirely new and add notions necessary for reading the new sections in the chapters that follow.

• Chapter 3 contains an additional Section 3.2 on Solution Concepts in Set-Valued Optimization.

• Chapter 4 is new. It offers a brief overview on Generalized Differentiation and Optimality Conditions, and presents in Section 4.1 recent concepts of Mordukhovich's generalized differentiation in order to express optimality conditions for optimization problems with set-valued objectives.

• Chapter 5 contains an additional Section 5.5 on the asymptotic behavior of multiobjective Pareto-equilibrium problems, in particular with applications to the asymptotic behavior of multiobjective optimization and saddle-point problems.


• Chapter 6 is new. Dealing with scalar problems under uncertainty, this chapter introduces three general approaches (vector approach, set approach, and nonlinear scalarization) which permit a unified treatment of a large variety of models from robust optimization and stochastic programming.

The book was written by four authors; we wrote it together, and it was at all times stimulating and profitable to consider the problems from different sides. A. Göpfert, Chr. Tammer, and C. Zălinescu contributed to Sections 1.1, 2.1, 2.2, 2.4, 3.1, and 3.11; C. Zălinescu wrote Sections 2.6, 2.7, 2.8, 2.10, 2.11 and 3.3–3.7; H. Riahi wrote Sections 2.9, 2.12, 3.9, 3.10, 5.2.3, 5.2.4, 5.5; and A. Göpfert and Chr. Tammer wrote Sections 1.2–1.6, 2.3, 2.5, 3.2, 3.8, 3.12, 4.1–4.4, 5.1, 5.2.1, 5.2.2, 5.3, 5.4, and 6.1–6.5. In this second edition, we adapted and updated Chapters 1, 2, 3 and 5; Chapters 4 and 6 are new. In particular, C. Zălinescu revised and extended Sections 2.1, 2.2, 2.4, 2.6, 2.7, 2.8, 2.10, 2.11, 3.1, 3.3–3.7, 3.11; H. Riahi added an example in Section 1.2, revised and extended Sections 3.9, 3.10, and added the new Sections 5.2.4, 5.5; and A. Göpfert and Chr. Tammer added the new Sections 1.6, 2.3, 2.5, 3.2, 4.1–4.4 and 6.1–6.5.

Acknowledgments

We are grateful to Christian Günther, Elisabeth Köbis, and Markus Köbis for their assistance in creating some of the figures in this book. Each author is grateful to her/his co-authors for their wonderful typing of the manuscript. Moreover, we wish to express our gratitude to Rosalind Elster, Johannes Jahn, and Akhtar Khan for the many years of intensive support and the fruitful, constructive discussions. Hassan Riahi would like to acknowledge with gratitude the support and love of his family: his mother, Aicha, and Batoul. Finally, the authors would like to thank Donna Chernyk and Casey Russel for their help and support. We are happy to publish this monograph in the series Canadian Mathematical Society Series of Monographs and Advanced Texts by Springer. The contribution of C. Zălinescu was partially supported by a grant of the Ministry of Research, Innovation and Digitization, CNCS–UEFISCDI, project number PN-III-P4-PCE-2021-0690, within PNCDI III.

Halle, Germany          Alfred Göpfert
Halle, Germany          Christiane Tammer
Marrakech, Morocco      Hassan Riahi
Iași, Romania           Constantin Zălinescu

March 2023


With great dismay, we had to learn that our co-author and very good friend, Alfred Göpfert, passed away on January 22, 2023. We were privileged to complete the substantive work on the second edition of our book together with him. We have lost an excellent mathematician and a warmhearted friend. With his charisma, Alfred Göpfert inspired generations of mathematicians for his science. We have learned a lot from him. We feel the need to express our sincere gratitude to Alfred and our condolences to his family and loved ones.

Hassan Riahi, Christiane Tammer and Constantin Zălinescu

Contents

Preface to the First Edition ... v
Preface to the Second Edition ... vii
List of Symbols and Abbreviations ... xvii
List of Figures ... xxvii
1 Examples ... 1
1.1 Cones in Vector Spaces ... 1
1.2 Equilibrium Problems ... 4
1.3 Location Problems in Town Planning ... 8
1.4 Multiobjective Control Problems ... 10
1.5 Stochastic Dominance ... 13
1.6 Uncertainty ... 13
2 Functional Analysis over Cones ... 17
2.1 Order Structures ... 17
2.1.1 Binary relations ... 17
2.1.2 Cone order structures on linear spaces ... 21
2.2 Functional Analysis and Convexity ... 50
2.2.1 Locally convex spaces ... 50
2.2.2 Examples and properties of locally convex spaces ... 51
2.2.3 Asplund spaces ... 54
2.2.4 Special results in the theory of locally convex spaces ... 55
2.3 Generalized Set Less Relations ... 75
2.4 Separation Theorems for Not Necessarily Convex Sets ... 78
2.4.1 Algebraic and Topological Properties ... 79
2.4.2 Continuity and Lipschitz Continuity ... 83
2.4.3 Separation Properties ... 90
2.5 Characterization of set relations by means of nonlinear functionals ... 92
2.6 Convexity Notions for Sets and Multifunctions ... 94
2.7 Continuity Notions for Multifunctions ... 100
2.8 Continuity Notions for Extended Vector-valued Functions ... 116
2.9 Extended Multifunctions ... 121
2.10 Continuity Properties of Multifunctions Under Convexity Assumptions ... 126
2.11 Tangent Cones and Differentiability of Multifunctions ... 130
2.12 Radial Epi-Differentiability of Extended Vector-Valued Functions ... 136
3 Optimization in Partially Ordered Spaces ... 141
3.1 Solution Concepts in Vector Optimization ... 141
3.1.1 Approximate Minimality ... 141
3.1.2 A General Scalarization Method ... 144
3.2 Solution Concepts in Set-Valued Optimization ... 147
3.2.1 Solution Concepts Based on Vector Approach ... 148
3.2.2 Solution Concepts Based on Set Approach ... 149
3.3 Existence Results for Efficient Points ... 150
3.3.1 Preliminary Notions and Results Concerning Transitive Relations ... 150
3.3.2 Existence of Maximal Elements with Respect to Transitive Relations ... 153
3.3.3 Existence of Efficient Points with Respect to Cones ... 158
3.3.4 Types of Convex Cones and Compactness with Respect to Cones ... 164
3.3.5 Classification of Existence Results for Efficient Points ... 166
3.3.6 Some Density and Connectedness Results ... 171
3.4 Continuity Properties with Respect to a Scalarization Parameter ... 183
3.5 Well-Posedness of Vector Optimization Problems ... 186
3.6 Continuity Properties ... 189
3.6.1 Continuity Properties of Optimal-Value Multifunctions ... 189
3.6.2 Continuity Properties for the Optimal Multifunction in the Case of Moving Cones ... 202
3.6.3 Continuity Properties for the Solution Multifunction ... 205
3.7 Sensitivity of Vector Optimization Problems ... 208
3.8 Duality ... 221
3.8.1 Duality Without Scalarization ... 224
3.8.2 Duality by Scalarization ... 228
3.8.3 Duality for Approximation Problems ... 232
3.9 Vector Equilibrium Problems and Related Topics ... 239
3.9.1 Vector Equilibrium Problems ... 241
3.9.2 General Vector Monotonicity ... 242
3.9.3 Generalized KKM Lemma ... 243
3.9.4 Existence of Vector Equilibria by Use of the Generalized KKM Lemma ... 248
3.9.5 Existence by Scalarization of Vector Equilibrium Problems ... 249
3.9.6 Some Knowledge About the Assumptions ... 252
3.9.7 Some Particular Cases ... 255
3.9.8 Mixed Vector Equilibrium Problems ... 258
3.10 Vector Variational Inequalities ... 261
3.10.1 Vector Variational-Like Inequalities ... 261
3.10.2 Perturbed Vector Variational Inequalities ... 263
3.10.3 Hemivariational Inequality Systems ... 264
3.10.4 Vector Complementarity Problems ... 267
3.10.5 Vector Optimization Problems ... 269
3.10.6 Minimax Theorem for Vector-Valued Mappings ... 271
3.11 Minimal-Point Theorems in Product Spaces and Corresponding Variational Principles ... 272
3.11.1 Not Authentic Minimal-Point Theorems ... 274
3.11.2 Authentic Minimal-Point Theorems ... 277
3.11.3 Minimal-Point Theorems and Gauge Techniques ... 280
3.11.4 Minimal-Point Theorems and Cone-Valued Metrics ... 284
3.11.5 Fixed Point Theorems of Kirk–Caristi Type ... 287
3.12 Saddle Point Theory ... 288
3.12.1 Lagrange Multipliers and Saddle Point Assertions ... 288
3.12.2 ε-Saddle Point Assertions ... 293
4 Generalized Differentiation and Optimality Conditions ... 305
4.1 Mordukhovich/limiting generalized differentiation ... 305
4.2 General Concept of Set Extremality ... 310
4.2.1 Introduction to Set Extremality ... 311
4.2.2 Characterizations of Asplund spaces ... 313
4.2.3 Properties and Applications of Extremal systems ... 313
4.2.4 Extremality and Optimality ... 317
4.3 Subdifferentials of Scalarization Functionals ... 325
4.4 Application to Optimization Problems with Set-valued Objectives ... 332
5 Applications ... 345
5.1 Approximation Problems ... 345
5.1.1 General Approximation Problems ... 345
5.1.2 Finite-dimensional Approximation Problems ... 352
5.1.3 Lp-Approximation Problems ... 357
5.1.4 Example: The Inverse Stefan Problem ... 359
5.2 Solution Procedures ... 361
5.2.1 A Proximal-Point Algorithm for Real-Valued Control Approximation Problems ... 361
5.2.2 An Interactive Algorithm for the Vector Control Approximation Problem ... 376
5.2.3 Proximal Algorithms for Vector Equilibrium Problems ... 381
5.2.4 Relaxation and Penalization for Vector Equilibrium Problems ... 391
5.3 Location Problems ... 397
5.3.1 Formulation of the Problem ... 397
5.3.2 An Algorithm for the Multiobjective Location Problem ... 399
5.4 Multiobjective Control Problems ... 400
5.4.1 The Formulation of the Problem ... 400
5.4.2 An ε-Minimum Principle for Multiobjective Optimal Control Problems ... 402
5.4.3 A Multiobjective Stochastic Control Problem ... 407
5.5 To attain Pareto-equilibria by Asymptotic Behavior of First-Order Dynamical Systems ... 414
5.5.1 Pareto Equilibria and Associate Critical Points ... 416
5.5.2 Existence of Solutions for (WVEP) and (LSEP)λ ... 419
5.5.3 Existence of Solutions for Continuous First-Order Equilibrium Dynamical Systems ... 424
5.5.4 Asymptotic Behavior of Solutions when t → +∞ ... 437
5.5.5 Asymptotic Behavior for Multiobjective Optimization Problem ... 447
5.5.6 Asymptotic Behavior for Multiobjective Saddle-point Problem ... 450
5.5.7 Numerical examples ... 456
6 Scalar Optimization under Uncertainty ... 463
6.1 Robustness and Stochastic Programming ... 463
6.2 Scalar Optimization under Uncertainty ... 466
6.2.1 Formulation of Optimization Problems under Uncertainty ... 466
6.2.2 Strict Robustness ... 469
6.2.3 Optimistic Robustness ... 469
6.2.4 Regret Robustness ... 470
6.2.5 Reliability ... 470
6.2.6 Adjustable Robustness ... 471
6.2.7 Minimizing the Expectation ... 472
6.2.8 Stochastic Dominance ... 473
6.2.9 Two-Stage Stochastic Programming ... 475
6.3 Vector Optimization as Unifying Concept ... 476
6.3.1 Vector Approach for Strict Robustness ... 478
6.3.2 Vector Approach for Optimistic Robustness ... 480
6.3.3 Vector Approach for Regret Robustness ... 482
6.3.4 Vector Approach for Reliability ... 485
6.3.5 Vector Approach for Adjustable Robustness ... 486
6.3.6 Vector Approach for Minimizing the Expectation ... 487
6.3.7 Vector Approach for Stochastic Dominance ... 489
6.3.8 Vector Approach for Two-stage Stochastic Programming ... 491
6.3.9 Proper Robustness ... 492
6.3.10 An Overview on Concepts of Robustness Based on Vector Approach ... 495
6.4 Set Relations as Unifying Concept ... 496
6.4.1 Set Approach for Strict Robustness ... 498
6.4.2 Set Approach for Optimistic Robustness ... 500
6.4.3 Set Approach for Regret Robustness ... 501
6.4.4 Set Approach for Reliability ... 502
6.4.5 Set Approach for Adjustable Robustness ... 503
6.4.6 Certain Robustness ... 503
6.4.7 An Overview on Concepts of Robustness Based on Set Relations ... 505
6.5 Translation Invariant Functions as Unifying Concept ... 505
6.5.1 Nonlinear Scalarizing Functional for Strict Robustness ... 507
6.5.2 Nonlinear Scalarizing Functional for Optimistic Robustness ... 508
6.5.3 Nonlinear Scalarizing Functional for Regret Robustness ... 510
6.5.4 Nonlinear Scalarizing Functional for Reliability ... 511
6.5.5 Nonlinear Scalarizing Functional for Adjustable Robustness ... 512
6.5.6 Nonlinear Scalarizing Functional for Minimizing the Expectation ... 512
6.5.7 Nonlinear Scalarizing Functional for Two-Stage Stochastic Programming ... 514
6.5.8 Nonlinear Scalarization Approach for ε-Constraint Robustness ... 514
6.5.9 An Overview on Concepts of Robustness Based on Translation Invariant Functionals ... 516
References ... 517
Index ... 547

List of Symbols and Abbreviations

Abbreviations

a.c. – Asymptotically compact
a.s.P – Almost surely w.r.t. probability P
B-l.s.c. – Berge-lower semicontinuous
B-u.s.c. – Berge-upper semicontinuous
C-l.c. – C-lower continuous
C-l.s.c. – C-lower semicontinuous
(CP) – Containment property
ComP or (ComP) – Complementarity problem
dim – Dimension
dom f – Domain of f
(DP) – Domination property
∅ – Empty set
∞ – Element adjoined to Y to get Y•
EP – Equilibrium problem
EVP – Ekeland's variational principle
GVEP – Generalized vector equilibrium problem
GVVI – Generalized vector variational inequality
H-C-u.c. – Hausdorff C-upper continuous (w.r.t. cone C)
H-l.c. – Hausdorff lower continuous
H.l.c.s. – Hausdorff locally convex space
H.t.v.s. – Hausdorff topological vector space
H-u.c. – Hausdorff upper continuous
HVIS – Hemivariational inequalities system, see (3.108)
KKM-lemma – Knaster, Kuratowski, and Mazurkiewicz lemma
l.c. – Lower continuous
l.c.s. – Locally convex topological vector space
l.s.c. – Lower semicontinuous
n.v.s. – Real normed vector space
PSNC – Partially sequentially normally compact set or mapping
SEP – Scalar equilibrium problem
SVEP – Strong vector equilibrium problem
SGVEP – Strong generalized vector equilibrium problem
SNC – Sequentially normally compact set or mapping
SNEC – Sequential normal epi-compactness of functionals
s.t. – Subject to
SVCP – Strong vector complementarity problem
t.l.s. – Topological linear space
t.v.s. – Topological vector space
u.c. – Upper continuous
u.s.c. – Upper semicontinuous
VOP – Vector optimization problem
VSP – Vector saddle-point problem
VVI – Vector variational inequality
WGVEP – Weak generalized vector equilibrium problem
w.l.o.g. – Without loss of generality
w-normal – Weakly normal
w.r.t. – With respect to
WVOP – Weak vector optimization problem
WVCP – Weak vector complementarity problem
WVEP – Weak vector equilibrium problem
WVSP – Weak vector saddle-point problem

Spaces

(a | b) – Inner product in a Hilbert space
a^T – Transposed vector to a ∈ Rⁿ
C[a, b] – Space of continuous functions on the real interval [a, b]
C(T) – Space of continuous functions on the compact set T
C¹[0, T] – See inverse Stefan problem
C[0, 1] – See Example 2.1.12
{cₙ} → c – Sequence {cₙ} converging to c
c₀ – Space of all real sequences in ℓ∞ converging to zero
e^j – Unit vector in a Euclidean space with 1 in the j-th component
ℓp – Space of all real sequences with p-norm ‖·‖p, 1 ≤ p ≤ +∞
Lp – See Example 2.2.2
Lp(Ω) – Lebesgue space on a Lebesgue measurable set Ω ⊆ Rⁿ (1 ≤ p ≤ +∞)
Lp[a, b] – Lebesgue space on the real interval [a, b] (1 ≤ p ≤ +∞)
N – The set of nonnegative integers
N> – The set of positive integers
Nτ(x), N(x) – The set of neighborhoods of x ∈ X in (X, τ)
Q – The set of rational numbers
R – The set of real numbers
R+ – The set of nonnegative real numbers
R> – The set of positive real numbers
R̄ – := R ∪ {−∞, +∞}
∞ – := +∞
Rⁿ – Space of n-dimensional vectors of real numbers
Rⁿ₊, Rⁿ₋ – Nonnegative and nonpositive orthant of Rⁿ
lim inf, lim sup – Limit inferior, limit superior
B‖·‖(x₀, ε) – Closed ball w.r.t. the norm ‖·‖ with center x₀ and radius ε
B̊‖·‖(x₀, ε) – Open ball w.r.t. the norm ‖·‖ with center x₀ and radius ε
B‖·‖, B̊‖·‖ – Closed and open unit ball w.r.t. the norm ‖·‖
(X, ‖·‖) – Normed space X
Y′ – Algebraic dual space of the vector space Y
Y* – (Topological) dual space of the topological vector space Y
‖·‖* – Norm in the topological dual space = dual norm to ‖·‖
|·| – Vector-valued norm
(X, ρ) – Metric space with metric ρ
(X, τ) – Topological space X equipped with topology τ
R• – R ∪ {+∞}
xᵢ →τ x – Convergence (w.r.t. topology τ)
τ₁ × τ₂ – Product topology
τ-a.c. set – Asymptotically compact set w.r.t. topology τ
w := σ(X, X*) – Weak topology of X
w* := σ(X*, X) – Weak* topology of X*
P – Family of seminorms
(X, P) – l.c.s. with a family P of seminorms
Y• – Y ∪ {+∞}
(Y, C_Y) – Linear space Y ordered by a cone C_Y
Y = C(U, R) – See Chapter 6
Z – The set of integers

Sets

[a, b), [a, b[; (a, b], ]a, b]; (a, b), ]a, b[ – (Half) open intervals if a, b ∈ R
[a, b] – Closed interval if a, b ∈ R
{·} – Notation for sets
Ax – {y ∈ Y | (x, y) ∈ A} (for A ⊂ X × Y nonempty)
A⊥ – Orthogonal complement or annihilator to set A in Hilbert spaces
A + B – Addition of sets
A ⊥ B – Orthogonal closed linear subspaces
[A]C – See full set
Ac – Y \ A, where A ⊂ Y
aff A – Affine hull of the set A
cor A = aint A = Ai – Algebraic interior (or core) of the set A
Ar – See Theorem 3.11.12
bd A = bdτ A, r-bd A – Boundary and relative boundary of the set A
bdw* A, r-bdw* A – Boundary and relative boundary of the set A ⊆ Y* in the weak*-topology
B(x) or B – Neighborhood base of x
B₀ or UX – Closed unit ball
B(x, r) or Bρ(x, r) (with metric ρ) – (Closed) ball of center x and radius r; see closed ball
∂B – B \ int B
C-seq-b-regular – See Section 3.11.4
cl A = clτ A = Ā – Closure of the set A ⊆ (X, τ)
clw* A – Closure of the set A ⊆ Y* in the weak*-topology
conv A – Convex hull of the set A
dir(D) – Set of all scalarization directions k ∈ Y of set D ⊂ Y
e(A, B) – Excess of A over B
Ft – σ-algebra
F(M) – The family of all nonempty finite subsets of M
F(M, x) – The family of all finite subsets of M containing x
A ⊕ B – Direct sum of two linear subspaces of a linear space
icr A = iA = raint A – Relative algebraic interior (or intrinsic core) of the set A
int A = intτ A, r-int A – Interior and relative interior of the set A
intw* A, r-intw* A – Interior and relative interior of the set A ⊆ Y* in the weak*-topology
ker A – Kernel of an operator A
lin A – Linear hull of a nonempty set A
A × B – Set of ordered pairs
N̂ε(x; Ω) – Set of ε-normals to Ω at x ∈ Ω
Nτ(x) – The class of all neighborhoods of x w.r.t. a topology τ
NX – The class of all balanced neighborhoods of 0 in the t.v.s. X
P(Y) – Power set (class of nonempty subsets) of Y
2^Y – Class of all subsets of Y
|P| – Cardinality of P
U° – Polar set of the set U
Yt+(x), Yt−(x) – Upper (lower) section of Y ⊂ X with respect to t and x ∈ X

Cones, Relations, Measures, Nets

A∞ – Asymptotic cone of the nonempty set A
0+A – Recession cone of the nonempty set A
A+ – (Continuous positive) dual cone of the set A
C : X ⇒ Y – Multifunction whose values are pointed convex cones
C# – Strictly positive dual cone of the convex cone C (see also Proposition 2.2.25)
Cε – Henig dilating cone
S ◦ R – Composition of two relations R and S
cone A – Cone generated by the set A
(ℓp)+ – The common ordering cone in the Banach space ℓp (1 ≤ p ≤ ∞)
(M, R) – Set M with order structure R
meas A – Measure of a set A
N(x; Ω) – Normal cone to Ω at x
N̂(x; Ω) – Regular normal, prenormal, or Fréchet normal cone at x ∈ Ω
N• – Both the regular normal cone and the limiting normal cone
NA(x; Ω) – Approximate (Ioffe) normal cone to Ω at x
NC(x; Ω) – Clarke normal cone to Ω at x
ND(x₀) – Normal cone to D at x₀ ∈ X
NM(x; Ω) – Basic, limiting or Mordukhovich normal cone to Ω at x ∈ Ω
‖·‖C,k – Norm generated by an ordering cone C and k ∈ cor C
Pu – Probability measure
qi A – Quasi-interior of the nonempty convex subset A of an H.l.c.s.
r : X × X → P – A P-valued metric, P a cone
TB(A; a) – Bouligand tangent cone (or contingent cone) of A at a
TC(A; a) – Clarke tangent cone of A at a
TU(A; a) – Ursescu tangent cone (or adjacent cone) of A at a
U1 ≽ U2 :⇔ U1 ⊇ U2 – See directed sets or set less relation
U3 ≽ U1 – See directed sets
xᵢ ⇀ x – Weakly convergent net
(xᵢ), {xᵢ} (for xᵢ ∈ X, i ∈ I) – Sequences in X if I = N; nets if I is (upward) directed

Sets of Optimal Elements

BEff(D, C) – Set of properly efficient elements of D w.r.t. C according to Benson
BEff(D; C) – The set of Benson-efficient points of D
EffMin(P, C) – See Remark 2.1.3
EffMax(D, C) – See Remark 2.1.3
EffMin(F, B, e) – (B, e)-minimal elements; see Section 3.12.2
EffMax(F, B, e) – The set of all (B, e)-maximal elements of F
Eff(M, Bεk₀) – The set of approximately efficient points
Eff(M, Bk₀) – The set of approximately efficient points
ε-Effϕ(M, C) – See Definition 3.1.2
GEff(F) – Set of properly minimal points of F according to Geoffrion
HEff(D, C) – Set of properly minimal elements of D w.r.t. C according to Henig
HMax(Y; C) – See Section 3.3.6
infC A – Infimum of A w.r.t. C
Min(M₀, R) – The class of minimal elements of M₀ w.r.t. R
Max(M₀, R) – The class of maximal elements of M₀ w.r.t. R
Min(F, D) – Set of minimal elements of F w.r.t. a set D
Max(F, D) – Set of maximal elements of F w.r.t. a set D
Min(F, y*, e) – The set of all (y*, e)-minimal elements of F
Max(F, y*, e) – The set of all (y*, e)-maximal elements of F
PrEff(Y; C) – The set of properly efficient points of Y w.r.t. C
PrMax(Y; C) – See Section 3.3.6
(Q(ξ)) – Optimization problem with an uncertain parameter ξ (called a scenario), see Section 6.2
SPEff(A; C) – Class of strong proper efficient points of set A w.r.t. C
SP-Θ – Set optimization problem, see Definition 4.3.1
Θ – Domination set of space Y
wEff(A; C) – Set of weakly efficient points of A w.r.t. C
p-Eff(F, Z) – The set of properly efficient points w.r.t. a family Z
X(ξ) – Set of feasible solutions of (Q(ξ)), see Section 6.2

Functions and Operators

A* – Adjoint operator to operator A in l.c.s.
A^T – Adjoint operator in Hilbert spaces
(A^T)⁻¹ – Inverse operator of the adjoint operator A^T to A
f(A) – f(A) := {f(a) | a ∈ A} for f : X → Y and A ⊆ X
α(y) – Inf-support function
σA – Support function of the set A
B-function – See Bregman function
β(y) – Sup-support function
f⁻¹(B) – f⁻¹(B) := {x ∈ X | f(x) ∈ B} for f : X → Y and B ⊆ Y
f*, f** – (Fenchel) conjugate and biconjugate of f
∂f(x) – (Fenchel) subdifferential of f at x
∂̂f(x) – Fréchet (regular) subdifferential of f at x
∂A f(x̄) – Approximate subdifferential of f at x̄
∂C f(x) – Clarke generalized subdifferential of f at x
∂M f(x) – Basic, limiting or Mordukhovich subdifferential of f at x
∂≤ f(x₀) and ∂≤C f(x₀) – Subdifferential of f at x₀ ∈ dom f; Definition 4.1.7
D* f(x) – (Basic, normal, Mordukhovich) coderivative of f at x
∇x f(x, u)(v) – Fréchet derivative of f w.r.t. x at (x, u) with v ∈ X
D̄Γ(x, y)(u) – Dini upper derivative of Γ at (x, y) in the direction u
D̲Γ(x, y)(u) – Dini lower derivative of Γ at (x, y) in the direction u
SΓ(x, y)(u) – Derivative in the sense of Shi in the direction u
f′(x)(v) – Directional derivative of f at x in the direction v
D̂* f(x) – Fréchet coderivative of f at x
epi f and gr f – Epigraph of f and graph of f
hypo f – Hypograph or subgraph of f
Im f – Im f := {f(x) | x ∈ X} for f : X → Y
ιA – Indicator function of the set A
(K, L)-monotonicity – See Definition 3.9.5
L(X, Y) or 𝓛(X, Y) – Set of linear continuous mappings from X to Y
L1(x), L2(x) – See Section 5.1.2
Φ(x, z*) – Lagrangian (see Remark 3.12.5)
M(PC(Y), R) – Set of all functions from PC(Y) to R
Λα – For α ∈ ]0, 1[, see under Definition 2.6.1
E – Expectation operator
Eu – Expectation, constructed from control u, see Theorem 6.3.27
(Exp) – Minimizing of the expectation; see Theorem 6.3.27
pA – Minkowski functional of the set A
PX, PrX – Projection of X × Y onto X
PD(e) – Projection of e onto a set D
ΠM(x) – Metric projection; see Theorem 5.2.34
R(PA) – Range of the operator PA
P-pseudomonotonicity – See Lemma 3.9.36
TA – Partial inverse of T with respect to A
argminM ϕ – Set of minimizers of the functional ϕ on M
ϕD,k – ϕD,k(y) := inf{t ∈ R | y ∈ tk − D}
d(x, A) or dist(x, A) – inf{d(x, a) | a ∈ A}, where A ⊂ X and d a metric on X
IH(C, D) – Hausdorff metric, IH(C, D) := max{e(C, D), e(D, C)}, with e(C, D) := sup{dist(x, D) | x ∈ C}
Hλ(k) – Hλ(k) := {y* ∈ Y* | y*(k) = λ}, see Theorem 4.3.2
levϕ,R(t) – levϕ,R(t) := {x ∈ X | ϕ(x) R t}, sublevel sets of ϕ of height t
x*(x) = ⟨x, x*⟩ – Value of the continuous linear functional x* at x

Set-Valued Mappings

Δ ◦ Γ – Composition of multifunctions Δ and Γ
dom Γ – The domain of a set-valued mapping Γ
epi Γ – Epigraph of Γ : X ⇒ Y
ΓC – ΓC(x) := Γ(x) + C for x ∈ X
Γf,C – Γf,C(x) := f(x) + C
Γ : X ⇒ Y – Set-valued mapping or multifunction, i.e., Γ : X → P(Y) \ {∅}
Γ⁻¹(B) – Γ⁻¹(B) := {x ∈ X | Γ(x) ∩ B ≠ ∅} for a multifunction Γ and B ⊂ Γ(X)
Γ⁺¹(B) – Γ⁺¹(B) := {x ∈ X | Γ(x) ⊂ B} for a multifunction Γ and B ⊂ Γ(X)
D*Γ(x̄, ȳ) – (Basic, normal, Mordukhovich) coderivative of Γ at (x̄, ȳ) ∈ gr Γ
D*C Γ(x̄, ȳ) – Clarke coderivative of Γ at (x̄, ȳ)
D*A Γ(x̄, ȳ) – Approximate (Ioffe) coderivative of Γ at (x̄, ȳ)
D̂*Γ(x̄, ȳ) – Fréchet coderivative of Γ at (x̄, ȳ)
∂Γ(x̄, ȳ)(y*) – (Limiting, basic, normal) subdifferential of Γ at (x̄, ȳ) in direction y* ∈ Y*
∂̂Γ(x̄, ȳ)(y*) – Fréchet subdifferential of Γ at (x̄, ȳ) in direction y* ∈ Y*
EΓ : X ⇒ Y – Epigraphical multifunction of Γ
gr Γ – Graph of a multifunction Γ
lim inf_{x→x₀} Γ(x) – Limit inferior of Γ at x₀
lim sup_{x→x₀} Γ(x) – Limit superior (Kuratowski–Painlevé upper limit) of Γ at x₀
levΓ(y) – Sublevel set; see Section 2.6
lev<Γ(y) – Strict sublevel set; see Section 2.6
Im Γ – Image (of a multifunction Γ)
⪯lC – Lower set less relation (w.r.t. C)
⪯uC – Upper set less relation (w.r.t. C)
⪯sC – Set less relation (w.r.t. C)
⪯cC – Certainly less relation (w.r.t. C)
⪯pC – Possibly less relation (w.r.t. C)
⪯mC – Minmax less relation (w.r.t. C)
⪯mcC – Minmax certainly less relation (w.r.t. C)
⪯mnC – Minmax certainly nondominated relation (w.r.t. C)
≤C,k₀ – See Section 3.3.3
⪯ – A preorder
[f ρ γ] – The set {x ∈ E | f(x) ρ γ} for f : E → R, γ ∈ R, and a relation ρ
C : X ⇒ Y – Multifunction whose values are pointed convex cones
R⁻¹ – Inverse of the relation (or multifunction) R
Γ(Ω) – The image set of a set-valued mapping Γ
D*N Γ(x̄, ȳ) – Normal Mordukhovich/limiting coderivative of Γ : X ⇒ Y
D*M Γ(x̄, ȳ) – Mixed Mordukhovich/limiting coderivative of Γ : X ⇒ Y
D̂*Γ(x̄, ȳ) – Regular coderivative of Γ at (x̄, ȳ)
∂̂Γ(x̄, ȳ) – Regular subdifferential of Γ at (x̄, ȳ)
∂Γ(x̄, ȳ) – Basic subdifferential of Γ at (x̄, ȳ)
∂∞Γ(x̄, ȳ) – Singular subdifferential of Γ at (x̄, ȳ)
ΓΩ – Restricted mapping of Γ over Ω
Γ̃ : X ⇒ Y – For local considerations of Γ
∂•ϕD,k – Regular and limiting subdifferentials of ϕD,k

List of Figures

Fig. 1.2.1  Top: For a fixed parameter α = 1/2, starting from any initial point u(0) in R², we attain the same limit point x̄ = (0, 0) in the solution set S_h with a faster convergence rate. Bottom: The limit point depends on the parameter α although u(0) = (0, −3) is fixed. (p. 6)
Fig. 1.3.2  Software FLO for solving multiobjective location problems (https://project-flo.de). (p. 10)
Fig. 1.3.3  The set of efficient elements of the multiobjective location problem (P) with the maximum norm. (p. 10)
Fig. 1.3.4  The set of efficient points of the multiobjective location problem (P) with the Lebesgue norm (instead of the maximum norm). (p. 11)
Fig. 2.3.1  A ⪯uD B (A ⊆ B − D), where D := K is the natural ordering cone in R². (p. 76)
Fig. 2.3.2  A ⪯lD B (B ⊆ A + D), where D := K is the natural ordering cone in R². (p. 77)
Fig. 2.4.3  The functional ϕD,k given by (2.42) at y¹ and y² with ϕD,k(y¹) = ϕD,k(y²) = t. (p. 82)
Fig. 3.1.1  The set of approximately efficient elements of M w.r.t. B = R²₊ (where the distance between it and the set of efficient elements is unbounded). (p. 142)
Fig. 3.1.2  The set of approximately efficient elements of M w.r.t. a "bigger" cone C with R²₊ ⊂ C. (p. 143)
Fig. 5.2.1  Solutions x_0^1, x_0^2, and x_0^3 of the location problem generated by the proximal-point algorithm choosing different weights αi (i = 1, …, 9) in Example 5.2.6. (p. 377)
Fig. 5.3.2  The solution set of the multiobjective location problem (P) with the maximum norm (in red) as well as with the Lebesgue norm (in blue), generated using the software FLO (https://project-flo.de). (p. 400)
Fig. 5.5.3  Graphical view of the paths u(t) = (u1(t), u2(t)) and ‖u(t) − x̄_{u0}‖ for different starting points u(0) = u0; the results of simulation for exponential convergence in [−1, 1] × R. (p. 458)
Fig. 5.5.4  In this case, the results of simulation for exponential convergence are also valid as long as the path u(t) crosses the closed Euclidean ball B((1/2, 0), 1/2). (p. 459)
Fig. 5.5.5  Here, we only suppose h1, h2 convex. The convergence rate obtained in this non-strictly convex case is O(1/t³). (p. 460)
Fig. 5.5.6  Here, for the strongly convex case, we fix the parameter α = 1/2. Then, starting from any initial point u(0) in R², we reach the same limit point x̄ = (0, 0) in S_h. (p. 461)
Fig. 5.5.7  For the mid-strongly convex case and α = 1/2 in (5.163), starting from any initial point u(0) in R² we reach more rapidly the same limit point x̄ = (−1, 0) in S_h. (p. 462)
Fig. 5.5.8  Even if the components of h are only convex, we manage to reach a unique solution (here x̄ = (0, 0)) when the parameter α = 99/100 is close to 1 and the system (5.163) is penalized by (5.164). (p. 462)
Fig. 6.3.1  Functions F, G with F ≤sup G. (p. 478)
Fig. 6.3.2  Regret robustness with G ≤regret F. (p. 483)
Fig. 6.3.3  Functions F, G with F ≤exp G. (p. 487)
Fig. 6.4.4  Visualization of A ⪯u B for closed sets A, B ∈ Z. (p. 499)
Fig. 6.4.5  Visualization of A ⪯l B for closed sets A, B ∈ Z. (p. 500)
Fig. 6.4.6  Visualization of A ⪯c B for closed sets A, B ∈ Z. (p. 504)

1 Examples

A cone in a vector space is an algebraic notion, but its use in theory and applications of optimization demands that one considers cones in topological vector spaces and studies, besides algebraic properties such as pointedness and convexity, also analytical ones such as those of being closed, normal, Daniell, nuclear, or having a base with special properties. A short overview of such properties is given in this chapter. The examples of cones show interesting results on cones in infinite-dimensional vector spaces, which are important in vector optimization, set-valued optimization, control theory, equilibrium problems, related variational analysis, risk theory and optimization under uncertainty. In comparison with the first edition, hints are added concerning preferences that are point-dependent (variable domination structure) or not describable by cones, concerning uncertainty and robustness, and in the overview of cones. Furthermore, we use the software FLO (Facility Location Optimizer) for solving multiobjective location problems.
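For quick reference, the basic algebraic notions behind these properties can be stated in a few lines; this is a standard formulation, and the precise definitions and variants used in the book appear in Definition 2.1.11 and Subsection 2.1.2:

```latex
% Standard notions; see Definition 2.1.11 and Subsection 2.1.2 for the
% precise variants used in the book.
C \subseteq Y \text{ is a cone if } \lambda c \in C \text{ for all } c \in C,\ \lambda \ge 0;
\quad C \text{ is convex iff } C + C \subseteq C;
\quad C \text{ is pointed if } C \cap (-C) = \{0\}.
\qquad
y_1 \le_C y_2 \ :\Longleftrightarrow\ y_2 - y_1 \in C.
```

For a convex cone C the relation ≤C is reflexive and transitive (a preorder); it is antisymmetric, and hence a partial order, exactly when C is pointed.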

1.1 Cones in Vector Spaces Optimization, especially vector optimization and set optimization and the corresponding variational analysis require, among other things, that one studies properties of cones (Definition 2.1.11), think of • tangent cones (Section 2.11), • based cones, well-based cones, nuclear cones (Definitions 2.1.42, 2.2.31), • normal cones and their properties (Definition 2.1.30 and the following lines in Subsection 2.1.2), • normal cones for sets (also of epigraphical type) in Asplund spaces and their applications to define coderivatives (Definition 4.1.2) or subdifferentials (Definition 4.1.7). Think also of application of these definitions for Fermat’s rules (Theorem 4.2.19 or [465, pp. 288–291]) or advantage of © Springer Nature Switzerland AG 2023 A. G¨ opfert et al., Variational Methods in Partially Ordered Spaces, CMS/CAIMS Books in Mathematics 7, https://doi.org/10.1007/978-3-031-36534-8 1

1

2





• •

• •

1 Examples

the mixed coderivative to the point-based characterization of Lipschitzlike behavior of set-valued mappings (Definition 4.1.3 and the following lines) and relations to metric regularity and linear openness of multivalued mappings and to SNC-properties (Definition 4.1.5), subdifferentials (Definition 2.6.7), think of the subdifferential of indicator functions, which coincide with the normal cone, used f.i. in order to state optimality conditions for optimization problems. See around Theorem 5.1.13 and Example 4.2.1, see also [41, ch.5] or [414, ch.9]), order relations in linear spaces induced by cones; think also of the consideration of smaller or larger (Figure 3.1.2) or even variable cones (compare variable domination structure in Section 1.6) and the behavior of corresponding efficiency sets, cones in order to represent inequalities; think of side restrictions, cone constraints, cone-valued metrics (see Subsection 3.11.4) and vector variational inequalities, cones in connection with some theoretical backgrounds; think of Phelps’s cone related to Ekeland’s variational principle in normed spaces ([475, p. 43]; this book: Proposition 3.11.2) or minimal points with respect to cone orderings for not pointed cones C with C \ (−C) = ∅, compare Chapter 4, cones C in connection with certain generalized topics as f.i. C-convex sets (Section 2.6) or C-nondominated elements (Definition 3.2.2), duality assertions (Section 3.8), dual cones (Definition 2.2.4), continuous dual space (see end of Subsection 2.2.1), polar sets (Subsection 2.2.4), bipolar theorem ((g) in Lemma 2.2.11 or [581, p. 7]).

A cone in a vector space is an algebraic notion, but its use in theory and applications of optimization demands that one considers cones in topological vector spaces and studies, besides algebraic properties such as pointedness and convexity, also analytical ones such as those of being closed, normal, Daniell, nuclear, or having a base with special properties. A short overview of such qualities can be found at the end of this section; the corresponding proofs are given by hints to references or to later sections. The examples of cones given below show interesting results on cones in infinite-dimensional vector spaces, which are important in vector optimization, set-valued optimization, control theory and variational analysis. In this book, one can find quite a lot of results in connection with order structures, which can not be represented by cones or where instead of cones C more general sets (domination sets, set less relation) are used. See Section 2.3, below Definition 4.3.1, compare also Definition 2.3.1, Subsection 6.4.6 and Section 4.1. See also Subsection 6.5, where the nonlinear scalarizing functional (2.42) is used in order to define a preference. Of course, replacing a cone C by more general sets delivers really more general theorems and is useful for applications. Example 1.1.1. Let X be a normed vector space with dim X = ∞, and let x : X → R be a linear but not continuous functional. Then the cone

1.1 Cones in Vector Spaces

3

C = {0} ∪ {x | x (x) > 0} has a base B = {x ∈ C | x (x) = 1}, but 0 ∈ cl B, since B is dense in C. For bases of cones see Definition 2.1.42. Example 1.1.2. Let X be a Hausdorff locally convex space and C a convex cone in X. Then C # = ∅ ⇒ cl C ∩ (−C) = {0} ⇒ C is pointed, where

  C # := x∗ ∈ X ∗ | ∀ x ∈ C \ {0} : x∗ (x) > 0 .

The first implication becomes an equivalence if dim X < ∞.  The converse of the second implication is not true, even if X = R2 ; C = (0, ∞) × R) ∪ ({0} × [0, ∞) is a counterexample. To prove the first implication consider u∗ ∈ C # . Assume that there exists c ∈ cl C ∩(−C), c = 0. Then there exists a net (ci )i∈I ⊂ C such that (ci ) → c. Because u∗ (ci ) ≥ 0, we have that u∗ (c) = lim u∗ (ci ) ≥ 0, which contradicts the fact that u∗ ∈ C # and c ∈ −C \ {0}. Assume dim X < ∞; let us prove the converse of the first implication. Let C = {0} (if C = {0} then C # = X ∗ ) and suppose that cl C ∩ (−C) = {0}. Then (1.1) 0∈ / icr C + ⊂ C # . Indeed, if 0 ∈ icr C + , then C + is a linear subspace, and consequently = cl C is a linear subspace. This implies that cl C ∩ (−C) = −C = {0}, C / C # ; then there exists a contradiction. Now let x∗ ∈ icr C + . Assume that x∗ ∈ ∗ ∗ + x ∈ C \ {0} such that x (x) = 0. Let x ∈ C . Then there is λ > 0 such that (1 + λ)x∗ − λx∗ ∈ C + . So ((1 + λ)x∗ − λx∗ )(x) = −λx∗ (x) ≥ 0, whence x∗ (x) ≤ 0; it follows that −x ∈ C ++ = cl C. It follows that cl C ∩(−C) = {0}, a contradiction. Taking into account dim X < ∞, (1.1) gives C # = ∅, since icr C + is nonempty (recall that every nonempty finite-dimensional convex set has a nonempty relative algebraic interior; see [279, p. 9]). ++

Example 1.1.3. The convex cone C in a Banach space X has the angle property if for some ε ∈ (0, 1] and x∗ ∈ X ∗ \ {0} we have C ⊂ {x ∈ X | x∗ (x) ≥ ε x∗  · x}. It follows that x∗ ∈ C # . Since for X = Rn = X ∗ the last inequality means cos(x∗ , x) ≥ ε, it is clear where the name “angle property” comes from. The class of convex cones with the angle property is very large (for all ε ∈ (0, 1) and x∗ ∈ X ∗ \ {0} the set {x ∈ X | x∗ (x) ≥ ε x∗  · x} is a closed convex cone with the angle property and nonempty interior). In fact, in normed spaces, a convex cone has the angle property iff it is well-based. So, the cone Rn+ in Rn has the angle property (with x∗ = (1, 1, . . . , 1) ∈ Rn and ε = n−1/2 ), but the ordinary order cone (2 )+ ⊂ 2 does not have it. Indeed, if for some x∗ ∈ 2 \{0} and some ε > 0, (2 )+ ⊂ {x ∈ 2 | x∗ (x) ≥ εx∗  · x}, then x∗n ≥ εx∗  (because en = (0, . . . , 1, . . .) ∈ (2 )+ ), whence the contradiction εx∗  ≤ 0 (since (x∗n ) → 0 for n → ∞).

4

1 Examples

Overview of Several Properties of Cones Let X be a Banach space, C and K proper (i.e., {0} = C, K = X) convex cones in X, K + the continuous dual cone of K (Definition 2.2.4), and K # := {x∗ ∈ K + | ∀ x ∈ K \ {0} : x∗ (x) > 0}. K # is the quasi-interior of K + if K is closed and X ∗ is endowed by its weak-star topology ((ii) in Proposition 2.2.25). The relation (cl K)# ⊆ K# in Proposition 2.2.25 can be strict, see Example 2.2.26. Furthermore the relationships in the overview below hold. For the last “⇐⇒” in the overview see theorem 2.2.19. For more relationships among different kinds of cones and spaces look at Sections 2.1, 2.2, for cones with base see Definition 2.1.42. K has compact base =⇒     K=K   X = Rn K has angle property

⇐⇒

∃ C : K \ {0} ⊆ int C

⇐⇒

K is well-based      

K well-based    K is normal

K=K, X=Rn

⇐⇒

K is based ⇐⇒   K=K  X separable

K is Daniell

int K + = ∅

K # = ∅

⇐=

K pointed   

⇐= K + − K + = X ∗   

⇐⇒

K is normal

⇐⇒ K is w-normal.

1.2 Equilibrium Problems Let us consider a common scalar optimization problem h(x) → min s.t. x ∈ B,

(1.2)

where B is a given nonempty set in a space X and h : B → R a given function. Let x ∈ B be a solution of (1.2); that is, ∀ y ∈ B : h(x) ≤ h(y).

(1.3)

Then, setting f (x, y) := h(x) − h(y) for x, y ∈ B, x solves also the problem find x ∈ B such that ∀ y ∈ B : f (x, y) ≤ 0.

(1.4)

For given B and f : B × B → R, a problem of the kind (1.4) is called an equilibrium problem and B its feasible set. A large number of quite

1.2 Equilibrium Problems

5

different problems can be subsumed under the class of equilibrium problems as, e.g., saddle point problems, Nash equilibria in noncooperative games, complementarity problems, variational inequalities, and fixed point problems. To sketch the last one, consider X a Hilbert space with its scalar product (· | ·) and T : B → B a given mapping. With f (x, y) := (x − T x | x − y) we have that x ∈ B is a fixed point of T (i.e., T x = x) if and only if x satisfies f (x, y) ≤ 0 for all y ∈ B. Indeed, if x is an equilibrium point, then taking y := T x, we have 0 ≥ (x − T x | x − T x) = x − T x2 , and so x = T x as claimed. The other direction of the assertion is obvious. There are powerful results that ensure the existence of a solution of the equilibrium problem (1.4); one of the most famous is Fan’s theorem; see [189]: Theorem 1.2.1. Let B be a compact convex subset of the Hausdorff locally convex space (H.l.c.s.) X and let f : B × B → R be a function satisfying ∀ y ∈ B : x → f (x, y) is lower semicontinuous, ∀ x ∈ B : y → f (x, y) is concave, ∀ y ∈ B : f (y, y) ≤ 0. Then there is x ∈ B with f (x, y) ≤ 0 for every y ∈ B. Since we often deal with vector optimization, we now take B ⊂ X as above and h : B → Y , where Y is the Euclidean space Rn (n > 1) or any locally convex space (l.c.s.). In Y we consider a convex pointed cone C, for instance C = Rn+ if Y = Rn . Then, as usual and in accordance with (1.3), x ∈ B solves the vector optimization problem h(x) → min, x ∈ B, if ∀ y ∈ B : h(y) ∈ / h(x) − (C \ {0}).

(1.5)

Sometimes, if the cone C has a nonempty interior (e.g., C = Rn+ when X = Rn ), one looks for so-called weak solutions of vector optimization problems replacing (1.5) by (1.6) ∀ y ∈ B : h(y) ∈ / h(x) − int C. As above, we introduce f (x, y) := h(x) − h(y) for x, y ∈ B, and so in accordance with (1.4) we get the following vector equilibrium problem: find x ∈ B (sometimes called vector equilibrium) with the property ∀ y ∈ B : f (x, y) ∈ / C \ {0},

(1.7)

or, considering weak solutions of vector optimization problems, find x ∈ B (sometimes called weak vector equilibrium) with the property ∀ y ∈ B : f (x, y) ∈ / int C.

(1.8)

6

1 Examples

Looking for existence theorems for solutions of (1.7) we mention Section 3.9, which is devoted to vector-valued equilibrium problems. Looking for methods to reach a solution of (1.8), we adopt in Subsection 5.5 different dynamic approaches (see Theorem 5.5.34). In the following, a simple   dynamic approach is applied, for h(x) = 12 (x1 + 1)2 +x22 , (x1 −1)2 + x22 , to the following vector optimization problem: find x ∈ R2 such that h(y) ∈ / h(x) − R2> , ∀y ∈ R2 .

(WVOP)

To attain a point in the corresponding set of solutions Sh = (−∞, 0] × {0}, we propose the following dynamic system: 1 1 u(t) ˙ + ∇h1 (u(t)) + ∇h2 (u(t)) = 0. 2 2

(1.9)

For the system (1.9), the results are depicted in Figure 1.2.1. In (Top left and Top right) of Figure 1.2.1, starting from any initial point, we reach more rapidly a unique solution in the solution set Sh when setting a fix parameter α (here α = 12 ). For the graphs in the bottom, we display the evolution of the distance of the solution u(t) from a fixed initial point u(0) = (0, −3) to the set of minimizers Sh , for different values of the parameter α. (1.9) exhibits faster convergence for all values of the parameter α in [0, 1].

Figure 1.2.1. Top: For a fix parameter α = 12 , starting from any initial point u(0) ¯ = (0, 0) in the solution set Sh with a faster in R2 , we attain the same limit point x convergence rate. Bottom: The limit point depends on the parameter α although u(0) = (0, −3) is fixed.

Also, vector equilibria play an important role in mathematical economics, e.g., if one deals with traffic control. As an example we describe an

1.2 Equilibrium Problems

7

extension of Wardrop’s principle for weak traffic equilibria to the case in which the route flows have to satisfy the travel demands between origindestination pairs and route capacity restrictions, and the travel cost function is a mapping. Let us consider a traffic (or transportation) network (N, L, P ), where N denotes a finite set of nodes, L a finite set of links, P ⊆ N × N a set of origin-destination pairs, and |P | the cardinality of P . We denote by • d ∈ R|P | the travel demand vector with dp > 0 for all p ∈ P ; • R a finite set of routes (paths) between the origin–destination pairs such that for each p ∈ P , the set R(p) of routes in R connecting p is nonempty; • Q the pair–route incidence matrix (Qpr = 1 if r ∈ R(p), Qpr = 0 otherwise); i.e., Q is a matrix of type (|P |, |R|). We suppose that the demand vector is given on R|P | . Then we introduce the set K of traffic flow vectors (path flow vectors), where K := {v ∈ R|R| | 0 ≤ vr ∀ r ∈ R, Qv = d}. Note that,  in more detailed form, the condition Qv = d states that for all p ∈ P , r∈R(p) vr = dp . The set K is the feasible set for the desired vector equilibrium problem. In order to formulate the vector equilibrium problem, a valuation of the traffic flow vectors is necessary. Therefore, let Y be a topological vector space partially ordered by a suitable convex, pointed cone C with int C = ∅ and F a travel cost function, which assigns to each route flow vector v ∈ R|R| a vector of marginal travel costs F (v) ∈ L(R|R| , Y ), where L(R|R| , Y ) is the set of all linear operators from R|R| into Y . The weak traffic equilibrium problem we describe consists in finding u ∈ K such that (in accordance to (1.4)) φ(u, v) := −F (u)(v − u) ∈ / int C ∀ v ∈ K,

(1.10)

1

recalling that in the scalar case Y = R , the product F (u)v is a price for v. Any route flow u ∈ K that satisfies (1.10) is called a weak equilibrium flow. It has been shown that u is a weak equilibrium flow if and only if u satisfies a generalized vector form of Wardrop’s principle, namely, for all origin–destination p ∈ P and all routes s, t ∈ R connecting p, i.e., s, t ∈ R(p), F (u)s − F (u)t ∈ − int C ⇒ us = 0.

(1.11)

To explain F (u)s consider u ∈ R|R| ; then u = (u1 , . . . , us , . . . , u|R| ), and for v = (0, . . . , 0, us , 0, . . . 0) it is F (u)s = F (u)v. Condition (1.11), due to its decomposed (user-oriented) form, is often more practical than the original definition (1.10), since it deals with pairs p and paths s, t ∈ R(p) directly: When the traffic flow is in vector equilibrium, users choose only (weak) Pareto-optimal or (weak) vector minimum paths to travel on. In the case Y = Rm , C = Rm + it is easily seen that (1.11) follows from (1.10). Let u ∈ K satisfy (1.10); take p ∈ P and s, t ∈ R(p). Choose a flow v such that

8

1 Examples

⎧ ⎪ ⎨uλ vλ = 0 ⎪ ⎩ ut + us

if λ = t, λ = s, if λ = s, (λ = 1, 2, . . . , |R|). if λ = t,

  Then v λ ∈ K, since 1≤λ≤|R| v λ = 1≤λ≤|R| uλ = dp . Consequently, from (1.10), writing F (u) as a matrix (Fμλ ) ∈ Rm×|R| , we have ∀ μ = 1, . . . , m, Fμλ (v λ − uλ ) = Fμt (us ) + Fμs (−us ) = (Fμt − Fμs )(us ) ≮ 0. 1≤λ≤|R|

But from the first condition in (1.11) it is Fμs − Fμt < 0 ∀ μ, so us = 0. For references see [147, 149, 218, 364, 401, 558, 572] or for other applications of multiobjective decision-making to economic problems [39].

1.3 Location Problems in Town Planning Urban development is connected with conflicting requirements of areas for dwelling, traffic, disposal of waste, recovery, trade, and others. Using methods of location theory may be one way of supporting urban planning to determine the best location for a special new layout or for arrangements. For references see e.g. [169, 170, 252, 261, 286, 288–290, 357, 479, 480, 556, 557, 559]. Location problems appear in many variants and with different constraints depending on the application in practice. In most location models the criteria for finding an optimal location for a new facility have economic issues. Travel time or travel costs to the given facilities are to be minimized. Especially, in economics, engineering, town planning and landscape development, methods of locational analysis are very useful for decision makers. Location models and corresponding algorithms are well-studied in the Operations Research literature and have been discussed by many authors from the theoretical as well as the computational point of view. Multiobjective location problems including corresponding algorithms are studied in [14] [214], [240]. In location models in town planning and landscape development the goal is to establish a new facility taking into account a certain set of existing facilities. Such facilities could be living areas, hospitals, warehouses, service centers or fire stations. Whereas, for instance, a hospital is considered as a desirable facility, there are other facilities, like airports, railway stations, dump sites, power plants or wireless stations, that are useful and necessary for the community, but are also a source of negative effects such as noise, smoke or electromagnetic energy. Those facilities are called semiobnoxious (push-pull or semi-desirable) facilities. Existing facilities that shall be close to the new facility are called attraction points, whereas those facilities that are asked to be far away are called repulsion points. Location problems involving attraction points and repulsion points are studied in [102, 109, 320, 437, 551].

1.3 Location Problems in Town Planning

9

It would be possible to formulate our problem as a real-valued location problem (Fermat–Weber problem), which is the problem to determine a location x of a new facility such that the weighted sum of distances between n given facilities ai (i = 1, . . . , n) and x is minimal. Using this approach it is very difficult to say how the weights λi (i = 1, . . . , n) are to be chosen. Another difficulty may arise if the solution of the corresponding optimal location is not practically useful. Then we need new weights, and again we don’t know how to choose the weights. So the following approach is of interest: We formulate the problem as a vector-valued (or synonymously vector or multiobjective) location problem ⎛ ⎞ x − a1 max ⎜ x − a2 max ⎟ ⎟, (P) ≤C − min2 ⎜ ⎠ ··· x∈R ⎝ x − an max where x, ai ∈ R2 (i = 1, . . . , n), xmax = max{| x1 |, | x2 |}, and “≤C − minx∈R2 ” means that we study the problem of determining the set of efficient points of an objective function f : X −→ Rn with respect to a proper, pointed, closed, convex cone C ⊂ Rn : Eff(f [X], C) := {f (x) | x ∈ X,

f [X] ∩ (f (x) − (C \ {0})) = ∅}.

Remark 1.3.1. For applications in town planning it is important that we can choose different norms in the formulation of (P). The decision which of the norms will be used depends on the course of the roads in the city or in the district or on other influences coming from the practical background of the planning problem. We study the problem (P) with C = Rn+ , where Rn+ denotes the usual ordering cone in n-dimensional Euclidean space. In Section 5.3 we consider a location problem in town planning, formulate a multiobjective location problem, derive optimality conditions, and present several algorithms for solving multiobjective location problems. It is well known that the set of solutions in vector optimization (set of efficient elements) may be large, and so a comparison of alternatives by using a graphical representation is useful. For solving the multiobjective location problem (P), we can use the software FLO (Facility Location Optimizer, see https://project-flo.de), see Figure 1.3.2. If the decision maker prefers the maximum norm, we get the solution set of the multiobjective location problem (P) as shown in Figure 1.3.3. But if the decision maker prefers the dual norm to the maximum norm (this norm is called the Lebesgue norm), the solution set has the form shown in Figure 1.3.4.

10

1 Examples

Figure 1.3.2. Software FLO for solving multiobjective location problems (https:// project-flo.de).

Figure 1.3.3. The set of efficient elements of the multiobjective location problem (P) with the maximum norm.

1.4 Multiobjective Control Problems In control theory often one has the problem to minimize more than one objective function, for instance, a cost functional as well as the distance between the final state and a given point. To realize this task usually one takes as objective function a weighted sum of the different objectives. However, the more natural way would be to study the set of efficient points of a vector optimization problem with the given objective functions. It is well known that the weighted sum is only a special surrogate problem to find efficient points, which has the disadvantage that in the nonconvex case one cannot find all efficient elements in this way. In order to formulate a multiobjective control problem we introduce a system of differential equations that describe the motion of the controlled system. Let x(t) be an n-dimensional phase vector that character-

1.4 Multiobjective Control Problems

11

Figure 1.3.4. The set of efficient points of the multiobjective location problem (P) with the Lebesgue norm (instead of the maximum norm).

izes the state of the controlled system at time t. Furthermore, let u(t) be an m-dimensional vector that characterizes the controlling action realized at time t. In particular, consider in a time interval 0 ≤ t ≤ T the system of ordinary differential equations with initial condition ⎫ dx (t) = ϕ(t, x(t), u(t)), ⎬ dt (1.12) ⎭ x(0) = x ∈ Rn . 0

It is assumed that the controlling actions u(t) satisfy the restrictions u(t) ∈ U ⊂ Rm . The vector x is often called the state, and u is called the control of (1.12); a pair (x, u) satisfying (1.12) and the control restriction is called an admissible solution (or process) of (1.12). Under additional assumptions it is possible to ensure the existence of a solution (x, u) of (1.12) on the whole time interval [0, T ] or at least almost everywhere on [0, T ] (compare Section 5.4). Introducing the criteria or performance space, the objective function f : (X × U ) −→ (Y, CY ), where Y is a linear space, and CY ⊂ Y is a proper convex cone, we have as multiobjective optimal control problem (also called vector-valued or vector optimal control problem) (P): Find some control u ¯ such that the corresponding trajectory x ¯ satisfies

12

1 Examples

f (x, u) ∈ / f (¯ x, u ¯) − (CY \ {0}) for all solution pairs (x, u) of (1.12). The pair (¯ x, u ¯) is then called an optimal process of (P ). If we study the problem to minimize the distance f1 of the final state x(T ) of the system (1.12) and a given point as well as a cost functional T Φ(t, x(t), u(t))dt by a control u, we have a special case of (P ) with 0  f (x, u) =

T 0

f1 (x(T )) Φ(t, x(t), u(t))dt

 .

In the following we explain that cooperative differential games are special cases of the multiobjective control problem (P ). We consider a game with n ≥ 2 players and define Y := Y1 × Y2 × · · · × Yn , the product of the criteria spaces of each of the n players, CY := CY1 × CY2 × · · · × CYn , the product of n proper convex cones, f (x, u) := (f1 (x, u), . . . , fn (x, u)), the vector of the loss functions of the n players, U := U1 × U2 × · · · × Un , the product of the control sets of the n players, and define u := (u1 , . . . , un ). The player j tries to minimize his cost function fj (the utility or profit function is −fj ) with respect to the partial order induced by the cone CYj influencing the system (1.12) by means of the function uj ∈ Uj . And “cooperative game” means that a state is considered optimal if no player can reduce his costs without increasing the costs of another player. Such a situation is possible because each cost function fj depends on the same control tuple u. The optimal process gives the Pareto minimum of the cost function f (x, u). It is well known that it is difficult to show the existence of optimal (or efficient) controls of (P ), whereas suboptimal controls exist under very weak conditions. So it is important to derive some assertions for suboptimal controls. This is done in Section 5.4. There an application of a variational principle for vector optimization problems yields an ε-minimum principle for (P), which is (for ε = 0) closely related to Pontryagin’s minimum principle.

1.6 Uncertainty

13

1.5 Stochastic Dominance Uncertainty is the key ingredient in many decision problems (see Chapter 6). Financial planning, cancer screening, and airline scheduling are just examples of areas in which ignoring uncertainty may lead to inferior or simply wrong decisions. There are many ways to model uncertainty; one that has proved particularly fruitful is to use probabilistic models. Two methods are frequently used for modeling choice among uncertainty prospects: stochastic dominance (Ogryczak and Ruszczynski [451], [452], Whitmore and Findlay ([563]); Levy ([374])) and mean-risk analysis (Markowitz ([398])). The stochastic dominance is based on an axiomatic model of riskaverse preferences: It leads to conclusions that are consistent with the axioms. Unfortunately, the stochastic dominance approach does not provide us with a simple computational recipe; it is, in fact, a multiple objective model with a continuum of criteria. The mean-risk approach quantifies the problem in a lucid form of only two criteria: • The mean, representing the expected outcome; • The risk, a scalar measure of the variability of outcomes. The mean-risk model is appealing to decision makers and allows a simple trade-off analysis, analytical or geometrical. On the other hand, mean-risk approaches are not capable of modeling the gamut of risk-averse preferences. Moreover, for typical dispersion statistics used as risk measures, the meanrisk approach may lead to inferior conclusions. The seminal portfolio optimization model of Markowitz ([396]) uses the variance as the risk measure in the mean-risk analysis, which results in a formulation of a quadratic programming model. Since then, many authors have pointed out that the mean-variance model is, in general, not consistent with stochastic dominance rules. The use of the semivariance rather than variance as the risk measure was already suggested by Markowitz ([397]) himself.

1.6 Uncertainty If one prepares a problem of the real world for an optimization process, one (almost) always has to deal with uncertain or disturbed data, compare Chapter 6. The following two examples ([316, 317, 343, 390, 391]) provide an insight. The current status of the associated optimization methods is presented in Chapter 6. However, also other problems can be subsumed under uncertainty as f. i. disturbances of data or parameters or side conditions in abstract optimization problems or variational inequalities, in order to arrive at answers to questions about stability as for instance “how the solution of

14

1 Examples

a problem or a Kuhn-Tucker point react to data disturbances” or “are there continuous dependencies with respect to perturbations” (stability analysis). This is discussed in Chapter 4, compare also [329]. Example 1.6.1 (Robust solutions). In [343], K¨ obis and Tammer get new robustness results for vector optimization problems under uncertainty: min f (x, ξ), x∈B

(P (ξ))

where ξ is a fixed uncertain parameter. The treatment of such problems leads to set-valued optimization problems (see Chapter 4) with a variable domination structure (see Eichfelder [175]). The authors prove the results in general spaces in [343], but give a motivating example on a selection problem with two objective functions (with respect to the mentioned variable domination structure). For Y = R2 , X a linear space, ∅ = U ⊆ RN an uncertainty set (with parameters ξ ∈ U ) and a feasibility set ∅ = B ⊆ X, consider a function f : B × U → Y that is to be minimized. But now, considering for x ∈ B the set fU (x) = {f (x, ξ)|ξ ∈ U }, that is the image set of f at x under U , it is clear, that one should compare sets fU (·) ⊂ Y in order to find a useful minimal set. Therefore, the common partial order on R2 should be changed, possibly in dependence of the points x ∈ B. That means, the decision maker looks for cones C(x) ⊂ Y in dependence of x ∈ B in order to find a nondominated set fU (x0 ): There does not exist x ∈ B, x = x0 , fU (x) ⊆ fU (x0 ) − C(x0 ). x0 is then called a robust solution. The paper [343] contains conditions showing that a full optimization theory for that selection of cones holds. Example 1.6.2 (Interval-valued functions). The field of optimization with interval-valued functions can also be subsumed to uncertainty. Here it is interesting, that on the one hand, one can use results from set-valued optimization (see Section 4), to work with such problems, on the other hand, having in mind the rules of interval arithmetic or interval analysis ([11], [410]), one can use them to study interval-valued optimization problems. Following [410], we consider an optimization problem with X a Banach space, gi , hk : X → R; i = 1, . . . , n; k = 1, . . . , m. Furthermore, we suppose gi (x) ≤ 0, and hk (x) = 0 and consider a goal function f acting between X and the set J of all closed and bounded intervals [a, b] in R. So, one has a set-valued optimization problem taking the following partial order in J: [a1 , b1 ] ≤J [a2 , b2 ] ⇐⇒ a1 ≤ a2 , b1 ≤ b2 , [a1 , b1 ] 0, or ... x1 = · · · = xn−1 = 0, xn > 0, or x = 0}; C satisfies all the conditions of Definition 2.1.11. With the help of cones, we can characterize compatibility between linear and order structures: Theorem 2.1.13. Let X be a linear space and let C be a cone in X. Then the relation RC := {(x1 , x2 ) ∈ X × X | x2 − x1 ∈ C} (2.8)

2.1 Order Structures

23

is reflexive and satisfies (2.3) and (2.4). Moreover, C is convex if and only if RC is transitive, and, respectively, C is pointed if and only if RC is antisymmetric. Conversely, if R is a reflexive relation on X satisfying (2.3) and (2.4), then C := {x ∈ X | 0Rx} is a cone and R = RC . Proof. Let us denote the fact that x1 RC x2 by x1 ≤ x2 . (i) RC is reflexive because 0 ∈ C. (ii) RC satisfies (2.3); indeed, if x1 , x2 ∈ X and λ ∈ R+ then x1 ≤ x2 =⇒ x2 − x1 ∈ C (2.8)

=⇒

Def. 2.1.9(e)

=⇒

Def. 2.1.11(a)

λ(x2 − x1 ) ∈ C

λx2 − λx1 ∈ C =⇒ λx1 ≤ λx2 . (2.8)

(iii) RC satisfies (2.4); indeed, if x1 , x2 , x ∈ X, then x1 ≤ x2 =⇒ x2 −x1 ∈ C (2.8)

=⇒

Def. 2.1.9

(x2 +x)−(x1 +x) ∈ C =⇒ x1 +x ≤ x2 +x. (2.8)

(iv) If C is convex, then RC is transitive: x1 ≤ x2 , x2 ≤ x3 =⇒ x2 − x1 ∈ C, x3 − x2 ∈ C (2.8)

=⇒

Def. 2.1.11(b)

x3 − x1 ∈ C

=⇒ x1 ≤ x3 .

(2.8)

Conversely, assume that RC is transitive. Then x1 , x2 ∈ C =⇒ 0 ≤ x1 , x1 ≤ x1 + x2 (2.8)

=⇒

Def. 2.1.1(b)

0 ≤ x1 + x2 =⇒ x1 + x2 ∈ C. (2.8)

(v) If C is pointed, then RC is antisymmetric: x1 ≤ x2 , x2 ≤ x1 =⇒ x2 − x1 ∈ C ∩ (−C) = {0} =⇒ x2 = x1 . (2.8)

Conversely, assume that RC is antisymmetric. Then x ∈ C ∩ (−C) =⇒ 0 ≤ x, x ≤ 0 (2.8)

=⇒

Def. 2.1.1(c)

x = 0.

(vi) Let R be a reflexive compatible order structure in X and consider C := {x ∈ X | 0Rx}.

(2.9)

C is really a cone. Indeed, let λ ∈ R+ and x ∈ C; then 0Rx. Since R fulfills (2.3), we obtain that λ0Rλx, and so λx ∈ C. Moreover, x1 Rx2 ⇐⇒ (x1 − x1 )R(x2 − x1 ) ⇐⇒ x2 − x1 ∈ C ⇐⇒ x1 RC x2 . (2.4)

Therefore, R = RC .

(2.9)

(2.8)



24

2 Functional Analysis over Cones

The preceding theorem shows that when ∅ = C ⊆ X, the relation RC defined by (2.8) is a preorder if and only if C is a convex cone, and RC is a partial order if and only if C is a pointed convex cone. Also note that RRn+ = Rn (defined in Example 2.1.5 (4)), while the relation RC with C ⊆ Rn defined in Example 2.1.12 (3) is a linear order, called the lexicographic order on Rn . ⊆ Rn (defined in (2.5)) has an interesting property: Taking The cone Rn+ n n B := {x ∈ R+ | i=1 xi = 1}, for every x ∈ Rn+ \ {0}, there exist unique elements b ∈ B and λ > 0 such that x = λb. Indeed, just take λ := x1 + · · · +xn (> 0) and b := λ−1 x. Taking into account this example, we introduce the following definition. Definition 2.1.14. Let X be a linear space and C a nontrivial convex cone in X. A nonempty convex subset B of C is called an algebraic base for C if each nonzero element x ∈ C has a unique representation of the form x = λb with λ > 0 and b ∈ B. Note that if B is an algebraic base of the nontrivial convex cone C, then 0∈ / B. Indeed, on the contrary case, taking b ∈ B \ {0} (⊆ C \ {0}), we have that b = 1 · b = 2 · ( 12 b) with b, 12 b ∈ B, contradicting the uniqueness of the representation of b ∈ C \ {0}. Theorem 2.1.15. Let C be a nontrivial convex cone in the linear space X and let B ⊆ X be a convex set. The following assertions are equivalent: (i) B is an algebraic base of C; (ii) C = R+ B and 0 ∈ / aff B; (iii) there exists a linear functional ϕ : X → R such that ϕ(x) > 0 for every x ∈ C \ {0} and B = {x ∈ C | ϕ(x) = 1}. Proof. (i) ⇒ (ii) Let B be an algebraic base for C; then, by Definition 2.1.14, C = R+ B. Since B is convex, aff B = {μb + (1 − μ)b | b, b ∈ B, μ ∈ R}. Assume that 0 ∈ aff B, and so 0 = μb + (1 − μ)b for some b, b ∈ B, μ ∈ R; since 0 ∈ / B, μ ∈ / [0, 1]. Therefore, there exist μ0 > 1, b0 , b0 ∈ B such that μ0 b0 = (μ0 − 1)b0 ∈ C, contradicting the definition of the algebraic base. Hence, 0 ∈ / aff B. / aff B. Consider b0 ∈ B and (ii) ⇒ (iii) Assume that C = R+ B and 0 ∈ / X0 . Let L0 ⊆ X0 X0 := aff B −b0 . Then X0 is a linear subspace of X and b0 ∈ be an algebraic base of X0 . Then L0 ∪ {b0 } is linearly independent; let us complete it to a base L of X. There exists a unique linear function ϕ : X → R such that ϕ(x) = 0 for any x ∈ L \ {b0 } and ϕ(b0 ) = 1. Since aff B = b0 + X0 , we have that ϕ(x) = 1 for every x ∈ aff B, and so B ⊆ {x ∈ C | ϕ(x) = 1}. Conversely, let x ∈ C be such that ϕ(x) = 1. Then x = tb for some t > 0 and b ∈ B. It follows that 1 = ϕ(x) = tϕ(b) = t, and so x ∈ B. (iii)⇒(i)Assumethatϕ : X → Rislinear,ϕ(x) > 0foreveryx ∈ C\{0},and B = {x ∈ C | ϕ(x) = 1}. Consider x ∈ C \ {0} and take t := ϕ(x) > 0 and

2.1 Order Structures

25

b := t−1 x; of course, x = tb. Because b ∈ C and ϕ(b) = 1, we have that b ∈ B. Assume that x = t b for some t > 0 and b ∈ B. Then t = ϕ(x) = t ϕ(b ) = t , whence b = b . Hence, every nonnull element x of C has a unique representation tb with t > 0 and b ∈ B, i.e., B is an algebraic base of C.  The preceding theorem shows that any convex cone having an algebraic base is pointed; the lexicographic cone in Rn with n ≥ 2 is a pointed convex cone without an algebraic base. Often it is useful to have also a topological structure on the set under consideration. Definition 2.1.16. Let X be a nonempty set. A topology τ on X is a family of subsets of X satisfying the following conditions: (a) every union of sets of τ belongs to τ , (b) every finite intersection of sets of τ belongs to τ , (c) the empty set ∅ and the whole set X belong to τ . The elements of τ are called open sets, and X equipped with τ is called a topological space and is denoted by (X, τ ). Having Y a nonempty subset of the topological space (X, τ ), the family τY := {D ∩ Y | D ∈ τ } is a topology on Y, called the induced (or trace) topology on Y. The dual notion for open set is that of closed set. So, the subset A ⊆ (X, τ ) is closed if X \ A is open; it follows easily that every intersection of closed sets is closed and every finite union of closed sets is closed. The interior and the closure of the subset A of the topological space (X, τ ) are defined, respectively, by  int A := {D ⊆ X | D ⊆ A, D open},  cl A := A := {F ⊆ X | A ⊆ F, F closed}. It is obvious that int A is open and cl A is closed. Two important classes of subsets of a topological space are those of compact sets and of connected sets. One says  that A ⊆ (X, τ ) is compact if, for any family (Oi )i∈I ⊆ τ such that A ⊆ i∈I Oi , there exists a finite set I0 ⊆ I such that A ⊆ i∈I0 Oi , that is any open cover of A has a finite subcover. One says that A ⊆ (X, τ ) is connected if A ∩ O1 ∩ O2 = ∅ whenever O1 , O2 ∈ τ are such that A ∩ O1 = ∅, A ∩ O2 = ∅ and A ⊆ O1 ∪ O2 ; notice that A ⊆ X is connected if and only if A ∩ F1 ∩ F2 = ∅ whenever F1 , F2 ⊆ X are closed sets such that A ∩ F1 = ∅, A ∩ F2 = ∅ and A ⊆ F1 ∪ F2 . Important examples of topological spaces are furnished by metric spaces. Recall that a metric on a nonempty set X is a mapping ρ : X × X → R+ having the properties

26

2 Functional Analysis over Cones

∀ x, y ∈ X : ρ(x, y) = 0 ⇔ x = y, ∀ x, y ∈ X : ρ(x, y) = ρ(y, x), ∀ x, y, z ∈ X : ρ(x, z) ≤ ρ(x, y) + ρ(y, z). The set X equipped with the metric ρ is called a metric space and is denoted by (X, ρ). Having the metric space (X, ρ), x ∈ X and r > 0, consider the (closed) ball of center x and radius r : B(x, r) := Bρ (x, r) := {y ∈ X | ρ(x, y) ≤ r}. Define τρ := {∅} ∪ {D ⊆ X | ∀ x ∈ D, ∃ r > 0 : B(x, r) ⊆ D}. It is easy to verify that τρ is a topology on X, and so X becomes a topological space; a metric space (X, ρ) will always be endowed with the topology τρ if not stated explicitly otherwise. Note that B(x, r) is a closed set, while the open ball {y ∈ X | ρ(x, y) < r} is an open set. For ∅ = Y ⊆ (X, ρ), the function ρY := ρ|Y ×Y is a metric on Y , and τρY = (τρ )Y . Taking X = Rn and ρ(x, y) the usual Euclidean distance between x ∈ Rn and y ∈ Rn , ρ is a metric on Rn , and the open sets w.r.t. ρ are the usual open sets of Rn . In the sequel, Rn with n ∈ N∗ is endowed with its usual topology if not stated explicitly otherwise; the same is valid also for the nonempty subsets of Rn . Notice that ρ : R×R → R defined by ρ(x, y) := |ϕ(x) − ϕ(y)| for x, y ∈ R, where ϕ(x) := x/(1 + |x|) for x ∈ R, ϕ(±∞) := ±1, is a metric on R and the induced topology of τρ on R is nothing else than the usual topology τ0 of R; in the sequel, R is endowed with the topology τρ defined above if not mentioned explicitly otherwise. As in Rn , in an arbitrary topological space (X, τ ) we define other topological notions, that of neighborhood being very useful. Definition 2.1.17. Let (X, τ ) be a topological space and x ∈ X. The subset U of X is a neighborhood of x (relative to τ ) if there exists an open set O ∈ τ such that x ∈ O ⊆ U . The class of all neighborhoods of x will be denoted by Nτ (x), or simply N (x). A subset B(x) of Nτ (x) is called a base of neighborhoods (filter base) of x relative to τ if, for every U ∈ Nτ (x), there exists V ∈ B(x) such that V ⊆ U . One says that (X, τ ) is first-countable if every element x ∈ X has an (at most) countable neighborhood base. For the topological space (X, τ ) and a neighborhood base B(x) of x for every x ∈ X, the family {B(x) | x ∈ X} has the following properties: (NB1) ∀ x ∈ X, ∀ U ∈ B(x) : x ∈ U , (NB2) ∀ x ∈ X, ∀ U1 , U2 ∈ B(x), ∃ U3 ∈ B(x) : U3 ⊆ U1 ∩ U2 ,

2.1 Order Structures

27

(NB3) ∀ x ∈ X, ∀ U ∈ B(x), ∃ V ∈ B(x), ∀ y ∈ V, ∃ W ∈ B(y) : W ⊆ U . Conversely, if, for every x ∈ X, one has a nonempty family B(x) ⊆ P(X) such that {B(x) | x ∈ X} satisfies the conditions (NB1)–(NB3), then there exists a unique topology τ on X such that B(x) is a neighborhood base for x, for every x ∈ X; moreover, ∀ x ∈ X : Nτ (x) = {U ⊆ X | ∃ V ∈ B(x) : V ⊆ U }. In fact, the topology (X, ρ) of a metric space is constructed in this manner. The family {B0 (x) | x ∈ X} with B0 (x) := {Bρ (x, 1/n) | n ∈ N> } satisfies the conditions (NB1)–(NB3) above; the topology defined by {B0 (x) | x ∈ X} is nothing else but the topology τρ defined above. Since B0 (x) is countable for every x ∈ X, we obtain that every metric space is first-countable. As seen above (property (NB2)), taking B(x) a neighborhood base of x ∈ (X, τ ) and considering the relation  on B(x) defined by U1  U2 if and only if U1 ⊇ U2 , then, for all U1 , U2 ∈ B(x), there exists U3 ∈ B(x) such that U3  U1 and U3  U2 . This is the prototype of a directed set. So, we say that the nonempty set I is (upward) directed by  if  is a preorder on I and ∀ i1 , i2 ∈ I, ∃ i3 ∈ I : i3  i1 and i3  i2 . Note that if (I  ,  ) and (I  ,  ) are directed sets, then I  × I  is directed by the relation  defined by (i , i )  (j  , j  ) if and only if i  j  and i  j  . Another important example of directed set is N endowed with its usual order ≤. A subset J of a directed set (I, ) is called cofinal if, for every i ∈ I, there exists j ∈ J such that j  i; it follows immediately that (J, ) is directed if J is a cofinal subset of I. Having two directed sets (J, ) and (I, ), the mapping ϕ : (J, ) → (I, ) is called filtering if, for every i ∈ I, there exists ji ∈ J such that ϕ(j)  i for every j ∈ J with j  ji ; clearly, ϕ(J) is a cofinal subset of I if ϕ is filtering. The above discussion shows that (B(x), ) is directed if B(x) is a neighborhood base for x ∈ (X, τ ); in the sequel, a neighborhood base B(x) of x ∈ (X, τ ) is always endowed with this preorder if not mentioned explicitly otherwise. This consideration is the base for the following definition. Definition 2.1.18. A mapping ϕ : I → X, where (I, ) is a directed set, is called a net or a generalized sequence of X; generally, the net ϕ is denoted by (xi )i∈I , where xi := ϕ(i). When X is equipped with a topology τ , we say that the net (xi )i∈I ⊆ X converges to x ∈ X if ∀ V ∈ Nτ (x), ∃ iV ∈ I, ∀ i  iV : xi ∈ V. τ

We denote this fact by (xi ) → x or simply (xi ) → x, or even xi → x, and x is called a limit of (xi )i∈I . Remark 2.1.19. Using the above definitions, if B is a neighborhood base of x ∈ (X, τ ) and xV ∈ V for every V ∈ B, then (xV )V ∈B → x.

28

2 Functional Analysis over Cones

Lemma 2.1.20. Let (X, τ ) be a topological space. Then every convergent net of X has a unique limit if and only if ∀ x, y ∈ X, x = y, ∃ U ∈ Nτ (x), V ∈ Nτ (y) : U ∩ V = ∅,

(2.10)

i.e., (X, τ ) is Hausdorff. Proof. Suppose that (2.10) does not hold. Then there exist x, y ∈ X, x = y, such that ∀ U ∈ Nτ (x), ∀ V ∈ Nτ (y), ∃ xU,V ∈ U ∩ V. As seen above, I := Nτ (x) × Nτ (y) is directed, and so (xU,V )(U,V )∈I is a net τ τ in X. It is obvious that (xU,V ) → x and (xU,V ) → y, which shows that the limit of nets is not unique in this case. Suppose now that (2.10) holds and assume that the net (xi )i∈I ⊆ X converges to x and y with x = y. Take U ∈ Nτ (x) and V ∈ Nτ (y) such that U ∩ V = ∅. Then there exist iU , iV ∈ I such that xi ∈ U for i  iU and xi ∈ V for i  iV . Because I is directed, there exists i0 ∈ I such that i0  iU and i0  iV . We thus obtain the contradiction that xi0 ∈ U ∩ V . The proof is complete.  A useful notion is that of subnet; we say that the net (yj )j∈J is a subnet of the net (xi )i∈I if there exists a filtering map ϕ : (J, ) → (I, ) such that yj = xϕ(j) for every j ∈ J; for easy writing, when there is no danger of confusion, the net (xϕ(j) )j∈J is denoted simply by (xj )j∈J , but we have in view that J is a directed set and there is a filtering mapping ϕ : J → I such that xj is xϕ(j) . It is obvious that the net (zk )k∈K is a subnet of (xi )i∈I whenever (zk )k∈K is a subnet of (yj )j∈J and (yj )j∈J is a subnet of (xi )i∈I . Clearly, if J is a cofinal subset of the directed set (I, ), then (xi )i∈J is a subnet of the net (xi )i∈I . With the help of nets and subnets, it is possible to characterize several topological notions as the closure of a set, the cluster points, the compact subsets, and the continuity of mappings between topological spaces in the same manner as these notions are characterized in Rk by using sequences and subsequences. For example, a subset A of the topological space (X, τ ) is closed if and only if x ∈ A whenever the net (xi )i∈I ⊆ A converges to x ∈ X, but generally, taking sequences is not sufficient; we say that A is sequentially closed if x ∈ A whenever the sequence (xn )n∈N ⊆ A converges to x ∈ X. Moreover, the subset A of the topological space (X, τ ) is compact if and only if every net (xi )i∈I ⊆ A has a subnet (yj )j∈J converging to some x ∈ A. Recall that the function f : (X, τ ) → (Y, σ) is continuous at x ∈ X if ∀ V ∈ Nσ (f (x)), ∃ U ∈ Nτ (x), ∀ x ∈ U : f (x ) ∈ V ; f is continuous if f is continuous at every x ∈ X. Having (X, τ ) a topological space and f : X → R, f is lower semicontinuous (l.s.c. for short) at x0 ∈ X if, for every λ ∈ R with λ < f (x0 ),

2.1 Order Structures

29

there exists U ∈ Nτ (x0 ) such that λ < f (x) for every x ∈ U ; f is lower semicontinuous if f is l.s.c. at every x0 ∈ X. Moreover, f : X → R is upper semicontinuous (u.s.c.) at x0 ∈ X if −f is l.s.c. at x0 ; f is u.s.c. if −f is l.s.c. When X is a linear space and τ is a topology on X, it is important to have compatibility between the linear and topological structures of X. Definition 2.1.21. Let X be a linear space endowed with a topology τ . We say that (X, τ ) is a topological linear space or topological vector space (t.l.s. or t.v.s. for short) if both operations on X (the addition and the multiplication by scalars) are continuous; in this case τ is called a linear topology on X. Since these operations are defined on product spaces, we recall that for two topological spaces (X1 , τ1 ) and (X2 , τ2 ), there exists a unique topology on X1 × X2 , denoted by τ1 × τ2 , with the property that B(x1 , x2 ) := {U1 × U2 | U1 ∈ Nτ1 (x1 ), U2 ∈ Nτ2 (x2 )} is a neighborhood base of (x1 , x2 ) w.r.t. τ1 × τ2 for every (x1 , x2 ) ∈ X1 × X2 ; τ1 × τ2 is called the product topology on X1 × X2 . Of course, in Definition 2.1.21 the topology on X×X is τ ×τ , and the topology on R×X is τ0 ×τ , where τ0 is the usual topology of R. It is easy to see that when (X, τ ) is a topological linear space, a ∈ X and λ ∈ R \ {0}, the mappings Ta , Hλ : X → X defined by Ta (x) = a + x, Hλ (x) := λx, are bijective and continuous with continuous inverses (i.e., they are homeomorphisms). It follows, that V ∈ Nτ (0) if and only if a + V ∈ Nτ (a) for every a ∈ X and every neighborhood of 0 is absorbing (A ⊆ X is absorbing if, for every x ∈ X, there exists δ > 0 such that [−δ, δ] · x ⊆ A, or equivalently, for every x ∈ X there exists δ > 0 such that [0, δ] · x ⊆ A). When (X, τ ) is a t.v.s., a neighborhood base B of 0 has the following properties: (LT0) ∀ U1 , U2 ∈ B, ∃U3 ∈ B : U3 ⊆ U1 ∩ U2 , (LT1) ∀ U ∈ B : U is absorbing, (LT2) ∀ V ∈ B, ∃ U ∈ B : [−1, 1]U ⊆ V , (LT3) ∀ V ∈ B, ∃ U ∈ B : U + U ⊆ V . Conversely, if B is a family of subsets of X verifying conditions (LT0)– (LT3), then there exists a unique linear topology τ on X such that B is a neighborhood base for 0 ∈ X. When X is a t.v.s., the class of all balanced neighborhoods of 0 ∈ X will be denoted by NX (∅ = A ⊆ X is balanced if [−1, 1] · A = A); it is well known that NX is a neighborhood base of 0 ∈ X. Having the t.v.s. (X, τ ) and a neighborhood base B of 0, it is well known that  cl A = {A + V | V ∈ B}. (2.11)

30

2 Functional Analysis over Cones

Using this formula, one obtains that the origin 0 of X has a neighborhood base formed by open (resp. closed), balanced and absorbing sets. Moreover, having in view that {D ∈ τ | x ∈D} is a neighborhood base for x ∈ X, from (2.11) one obtains that cl{x} = {D ∈ τ | x ∈ D}, and so y ∈ cl{x} ⇐⇒ Nτ (x) = Nτ (y) ∀x, y ∈ X.

(2.12)

If A ⊆ X, it is obvious that 0 ∈ cl A ⇒ cl{0} ⊆ cl A ⇒ cl{0} ∩ cl A = ∅; assuming that there exists x ∈ cl{0} ∩ cl A, then Nτ (x) = Nτ (0) by (2.12), and so V ∩A = ∅ for every V ∈ Nτ (0) because x ∈ cl A. Hence, cl{0}∩ cl A = ∅ ⇒ 0 ∈ cl A. Consequently, 0 ∈ cl A ⇐⇒ cl{0} ⊆ cl A ⇐⇒ cl{0} ∩ cl A = ∅.

(2.13)

Furthermore, (X, τ ) is Hausdorff if and only if, for every x ∈ X \{0}, there / V or, equivalently (see (2.11)), the set {0} is exists V ∈ NX such that x ∈ closed. This shows that each closed cone C ⊆ X is not pointed whenever (X, τ ) is not Hausdorff. For further use, we recall that the subset B of the t.v.s. (X, τ ) is bounded if, for every neighborhood V of 0 ∈ X, there exists λ > 0 such that λB ⊆ V ; of course, in the preceding definition one may take V in a neighborhood base of 0. Having (X, τ ) a t.v.s., one says that the net (xi )i∈I ⊆ X is Cauchy if, for every neighborhood V of 0 ∈ X, there exists iV ∈ I such that xj − xk ∈ V for all j, k ∈ I such that j, k  iV ; of course, as for boundedness of sets, one may take V in a neighborhood base of 0. It is easy to show that any convergent net (xi )i∈I ⊆ X is Cauchy, but the converse generally is not true; however, if a Cauchy net has a convergent subnet, then the net is convergent. For certain topological vector spaces, it is possible to have convergent nets (xi )i∈I with {xi | i  i0 } unbounded for every i0 ∈ I. However, the following result holds. Lemma 2.1.22. Let X be a t.v.s., let (ti )i∈I ⊆ R be a Cauchy net and (xi )i∈I ⊆ X be a bounded Cauchy net. Then (ti xi )i∈I is a Cauchy net. Proof. Because (ti )i∈I is Cauchy, there exists i0 ∈ I such that |ti − ti0 | ≤ 1 for all i  i0 , and so we may (and do) assume that {ti | i ∈ I} ⊆ [−μ, μ] for some μ > 0. Moreover, by hypothesis, the set B := {xi | i ∈ I} is bounded in X. Consider V ∈ NX ; then there exists U ∈ NX such that U + U ⊆ V . Because (xi )i∈I ⊆ X is Cauchy and U  := μ−1 U ∈ NX , there exists i1 ∈ I such that xj − xk ∈ U  for all j, k ∈ I such that j, k  i1 . Because B is bounded, there exists ε > 0 such that εB ⊆ U , while because (ti )i∈I is Cauchy, there exists i2 ∈ I such that |tj − tk | ≤ ε for j, k ∈ I such that j, k  i2 . There exists iV ∈ I such that iV  i1 , i2 . Then tj xj − tk xk = tj (xj − xk ) + (tj − tk )xk ∈ [−μ, μ]U  + [−ε, ε]B ⊆ U + U ⊆ V for all j, k ∈ I with j, k  iV . Therefore, (ti xi )i∈I is Cauchy.



2.1 Order Structures

31

Related to Cauchy nets in topological vector spaces, one has also the following result; the proof of the first assertion uses an idea from [432]. Lemma 2.1.23. Let X be a t.v.s. and (xi )i∈I be a net in X. (i) Assume that (xi )i∈I is not Cauchy. Then there exists an increasing sequence (in )n≥1 ⊆ I such that the sequence (xin )n≥1 is not Cauchy. (ii) Assume that (xi )i∈I is Cauchy and (Vn )n≥1 is a neighborhood base of 0 ∈ X. Then there exists an increasing sequence (in )n≥1 ⊆ I such that ∀n ≥ 1, ∀j, k ∈ I : [j, k  in ⇒ xj − xk ∈ Vn ];

(2.14)

in particular, (xin )n≥1 is a Cauchy sequence. Moreover, if xin → x ∈ X, then xi → x. Proof. (i) Because (xi )i∈I is not Cauchy, there exists a neighborhood V of 0 ∈ X such that, for every i ∈ I, there exist j, k ∈ I such that j, k  i / V . There exists a balanced neighborhood U of 0 such that and xj − xk ∈ U +U ⊆ V . For j, k found above, there exists l ∈ I such that l  j, k. We have / U or xk − xl ∈ / U (otherwise, xj − xk = (xj − xl ) + (xl − xk ) ∈ that xj − xl ∈ U + U ⊆ V , a contradiction). Taking this into account, we have that / U. ∀ i ∈ I, ∃ j, k ∈ I : i  j  k, xj − xk ∈

(2.15)

Let i0 ∈ I be fixed. Then taking i = i0 in (2.15), there exist i1 , i2 ∈ I / U . Taking i = i2 in (2.15), there exist such that i0  i1  i2 and xi1 − xi2 ∈ / U . Continuing in this way, i3 , i4 ∈ I such that i2  i3  i4 and xi3 − xi4 ∈ / U for we obtain an increasing sequence (in )n≥1 ⊆ I such that xi2n−1 − xi2n ∈ every n ≥ 1. Hence, the sequence (xin )n≥1 is not Cauchy. (ii) Without loss of generality we suppose that Vn ⊇ Vn+1 + Vn+1 for n ≥ 1. Because (xi ) is Cauchy, for n ≥ 1, there exists in ∈ I such that xj − xk ∈ Vn for all j, k ∈ I such that j, k  in . Set i1 := i1 . Because I is directed, there exists i2 ∈ I such that i2  i1 and i2  i2 , then there exists i3 ∈ I such that i3  i2 and i3  i3 ; continuing in this way, we find the increasing sequence (in )n≥1 ⊆ I such that in  in for n ≥ 1. Clearly, (in )n≥1 verifies (2.14), and so (xin )n≥1 is Cauchy because im , ip  in for m, p ≥ n. Assume that xin → x ∈ X. For p ≥ 1, there exists np ≥ p + 1 such that xin − x ∈ Vp+1 for every n ≥ np ; then, for i ∈ I with i  inp ( ip+1 ), we have that xi − xinp ∈ Vp+1 . It follows that xi − x = xi − xinp + xinp − x ∈ Vp+1 + Vp+1 ⊆ Vp and so xi → x.

∀i  inp , 

One says that the t.v.s. (X, τ ) is (sequentially complete) complete if any Cauchy (sequence) net from X is convergent; the nonempty subset Y of X is said to be (sequentially complete) complete if any Cauchy (sequence) net from Y is convergent to an element of Y .

32

2 Functional Analysis over Cones

Corollary 2.1.24. Let (X, τ ) be a first-countable topological vector space and Y a nonempty subset of X. Then Y is complete if and only if Y is sequentially complete. Proof. The implication ⇒ is obvious. So, assume that Y is sequentially complete. Because X is first-countable, there exists a base V := (Vn )n≥1 of balanced neighborhoods of 0 ∈ X. Take (yi )i∈I ⊆ Y a Cauchy sequence. Using Lemma 2.1.23 (ii), there exists an increasing sequence (in )n≥1 ⊆ I verifying (2.14). Because (yin )n≥1 (⊆ Y ) is Cauchy, there exists y ∈ Y such that yin → y by our working hypothesis. Using again Lemma 2.1.23 (ii), we  obtain that yi → y. Remark 2.1.25. Notice that having a first-countable t.v.s. (X, τ ), starting with a countable base of neighborhoods of 0 ∈ X and using iteratively the property (LT3) on page 29, one gets a base of open (resp. closed) balanced neighborhoods (Un )n≥1 of 0 ∈ X such that (Uk+1 ⊆) Uk+1 + Uk+1 ⊆ Uk for k ≥ 1; moreover, if there exists a bounded neighborhood of 0, one may take U1 (and so any Un ) to be bounded. Also, notice that with this choice of (Un )n≥1 , for all x ∈ X and all n, k ∈ N∗ , one has x ∈ Un+k ⇒ 2n x ∈ Uk

and

Un+1 + Un+2 + · · · + Un+k ⊆ Un .

(2.16)

Having two (real) topological vector spaces (X, τ ) and (Y, σ), the space of continuous linear operators T : X → Y is denoted by L(X, Y ); endowing L(X, Y ) with the usual addition of linear operators and with the usual multiplication with real scalars, it becomes a linear space. When Y := R, the space L(X, R) is denoted by X ∗ and is called the topological dual of X. The proof of the following result is easy. Proposition 2.1.26. Let (X, τ ), (Y, σ) be topological vector spaces and T : X → Y , ϕ : X → R be linear. Then the following assertions hold: (i) T is continuous if and only if T is continuous at some x0 ∈ X. (ii) ϕ is continuous if and only if int{x ∈ X | ϕ(x) ≤ α} = ∅ for some α ∈ R if and only if {x ∈ X | ϕ(x) = α} is closed for some α ∈ R. Recall that the algebraic interior, or core, and the relative algebraic interior, or intrinsic core, of the nonempty subset A of the linear space X are the sets cor A := {a ∈ A | ∀x ∈ X, ∃δ > 0, ∀λ ∈ [−δ, δ] : a + λx ∈ A}, icr A := {a ∈ A | ∀x ∈ aff A, ∃δ > 0, ∀λ ∈ [−δ, δ] : (1 − λ)a + λx ∈ A}, respectively; cor A is also denoted by Ai or aint A, while icr A is also denoted by i A or raint A. It follows that cor A = icr A if aff A = X and cor A = ∅ if aff A = X. Moreover, the algebraic closure of A is the set acl  A := {x ∈  X | ∃a ∈ A : [a, x[ ⊆ A}, where [a, x[ := (1 − λ)a + λx | λ ∈ [0, 1[ . One has icr A ⊆ A ⊆ acl A; moreover, int A ⊆ cor A and acl A ⊆ cl A if X is a t.v.s.

2.1 Order Structures

33

The convex subsets of topological vector spaces have important topological properties as the next result shows. Proposition 2.1.27. Let X be a t.v.s. and C ⊆ X be convex. Then (i) (ii) (iii) (iv) (v)

cl C is convex; if a ∈ int C and x ∈ cl C, then [a, x[ ⊆ int C; int C is convex; if int C = ∅ then cl(int C) = cl C and int(cl C) = int C; if int C = ∅ then cor C = int C and acl C = cl C.

Proof. (i) Take x, y ∈ cl C and λ ∈ [0, 1]. Then there exist the nets (xi )i∈I , (yi )i∈I ⊆ converging to x, y, respectively. It follows that zi := (1 − λ)x + λy ∈ C for i ∈ I and (zi ) → z, and so z ∈ cl C. (ii) Take a ∈ int C, x ∈ cl C and λ ∈ ]0, 1[. Because a ∈ int C, there exists V ∈ NX such that a + V + V ⊆ C. Then U := δV ∈ NX , where δ := min{λ, (1 − λ)/λ}, and x ∈ C + U by (2.11). Then (1 − λ)a + λx + U ⊆ (1 − λ)(a + V ) + λ(C + U ) ⊆ (1 − λ)(a + V ) + λC + (1 − λ)V = (1 − λ)(a + V + V ) + λC ⊆ C, and so (1 − λ)a + λx ∈ int C. (iii) The assertion is an immediate consequence of (ii). (iv) Fix a ∈ int C. The inclusions cl(int C) ⊆ cl C and int(cl C) ⊇ int C are obvious. Take x ∈ cl C; by (ii), we have that (1 − λ)a + λx ∈ int C for λ ∈ ]0, 1[. Letting λ → 1, we get x ∈ cl(int C). Take now x ∈ int(cl C). Using the continuity of the mapping λ → (1 − λ)x + λa at 0, there exists δ > 0 such that y := (1 + δ)x − δa ∈ cl C, and so x = (1 − λ)a + λy ∈ int C by (ii), 1 ∈ ]0, 1[. where λ := 1+δ (v) The inclusions int C ⊆ cor C and acl C ⊆ cl C were already mentioned for arbitrary sets. For the reverse inclusions, fix a ∈ int C. Consider first x ∈ cl C; by (ii), one has [a, x[ ⊆ int C ⊆ C, and so x ∈ acl C. Consider now x ∈ cor C. Then, because C − x is absorbing, there exists δ > 0 such 1 ∈ ]0, 1[ and a ∈ int C we get that y := (1 + δ)x − δa ∈ C; since λ := 1+δ x = (1 − λ)a + λy ∈ int C by (ii).  Several classes of topological linear spaces are given in the next section. For further use, we introduce several types of functions defined on a (real) linear space X. Having p : X → R, we say that: • • • •

p p p p

is is is is

subadditive if ∀ x, y ∈ X : p(x + y) ≤ p(x) + p(y); positively homogeneous if ∀ x ∈ X, ∀ λ ∈ R> : p(λx) = λp(x); symmetric if ∀ x ∈ X : p(−x) = p(x); sublinear if p is subadditive and positively homogeneous;


• p is a seminorm if p is sublinear and symmetric; note that in this case p(x) ≥ 0 for every x ∈ X: indeed, 0 = p(0) ≤ p(x) + p(−x) = 2p(x);
• p is a norm if p is a seminorm and p(x) = 0 ⇔ x = 0.

Moreover, consider an absorbing subset A of the linear space X; the Minkowski functional associated with A is the mapping
pA : X → R, pA(x) := inf{t ∈ R> | x ∈ tA};
it is easily seen that pA = p_{[0,1]A}. When working with the Minkowski functional associated to A ⊆ X, it is convenient to assume that A = [0, 1]A, that is, A is star-shaped (at 0). For example, if A is star-shaped and I^A_x := {t ∈ R> | x ∈ tA}, then, as easily seen, one has ]pA(x), ∞[ ⊆ I^A_x ⊆ [pA(x), ∞[ for any x ∈ X.
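For instance, in X := R² consider A := [−1, 1] × [−2, 2], an absorbing, symmetric, convex (hence star-shaped) set. Directly from the definition,
pA(x1, x2) = inf{t ∈ R> | |x1| ≤ t and |x2| ≤ 2t} = max{|x1|, |x2|/2},
a norm on R² for which {x ∈ X | pA(x) < 1} = int A and {x ∈ X | pA(x) ≤ 1} = A; this illustrates Proposition 2.1.28 below.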

In the next result, we collect several properties of the Minkowski functional.

Proposition 2.1.28. Let X be a linear space, let A, B ⊆ X be star-shaped absorbing sets, and λ ∈ R>. The following assertions hold:
(i) pA is positively homogeneous, p_{λA} = λ^{−1} pA, pA is symmetric if A is symmetric, and p_{A∩B} = max{pA, pB}; in particular, pB ≤ pA if A ⊆ B. Moreover,
cor A ⊆ {x ∈ X | pA(x) < 1} ⊆ A ⊆ {x ∈ X | pA(x) ≤ 1} ⊆ acl A. (2.17)
(ii) Assume that X is a t.v.s. Then
int A ⊆ {x ∈ X | pA(x) < 1} ⊆ A ⊆ {x ∈ X | pA(x) ≤ 1} ⊆ cl A, (2.18)
with equality in the first or last inclusion if A is open or closed, respectively.
(iii) If C ⊆ X is a convex cone and A = [A]C := (A + C) ∩ (A − C), then pA is monotone w.r.t. C (or C-monotone), that is,
0 ≤C x ≤C y ⇒ pA(x) ≤ pA(y). (2.19)
(iv) Assume that A is convex; then pA is sublinear and
cor A = {x ∈ X | pA(x) < 1} ⊆ A ⊆ {x ∈ X | pA(x) ≤ 1} = acl A; (2.20)
moreover, if X is a t.v.s. and int A ≠ ∅, then
int A = {x ∈ X | pA(x) < 1}, cl A = {x ∈ X | pA(x) ≤ 1}. (2.21)

Proof. (i) Using the definition of I^A_x, for x ∈ X and α, λ ∈ R>, one has I^A_{αx} = αI^A_x, I^{λA}_x = I^A_{λ^{−1}x}, I^A_x = I^{−A}_{−x} and I^{A∩B}_x = I^A_x ∩ I^B_x, whence pA(αx) = αpA(x), p_{λA}(x) = λ^{−1}pA(x), pA(x) = p_{−A}(−x) and p_{A∩B}(x) = max{pA(x), pB(x)}, respectively.


If x ∈ cor A, there exists δ ∈ R> such that tx ∈ A − x for t ∈ [−δ, δ], whence x ∈ (1 + δ)^{−1}A, and so pA(x) ≤ (1 + δ)^{−1} < 1. The second and third inclusions in (2.17) are immediate from the definition of pA. Assume that pA(x) ≤ 1; then ]1, ∞[ ⊆ I^A_x or, equivalently, ]0, 1[ · x ⊆ A. Because 0 ∈ A, we have that (1 − λ) · 0 + λx ∈ A for λ ∈ [0, 1[, and so x ∈ acl A.
(ii) Assume that X is a t.v.s.; then (2.18) is a consequence of (2.17) because int A ⊆ cor A and acl A ⊆ cl A. The corresponding equalities follow immediately from (2.18) because A = int A when A is open and A = cl A when A is closed.
(iii) To prove (2.19), consider x, y ∈ X such that 0 ≤C x ≤C y and take t ∈ R> such that y ∈ tA, i.e., y = ta with a ∈ A. Then x = y − (y − x) = t[a − t^{−1}(y − x)] ∈ t(A − C) and x = t(0 + t^{−1}x) ∈ t(A + C). So x ∈ t[A]C = tA. Hence, pA(x) ≤ pA(y).
(iv) Assume that A is convex and take x, y ∈ X. For t, s ∈ R> such that pA(x) < t and pA(y) < s, there exist t′ ∈ ]0, t] and s′ ∈ ]0, s] such that x ∈ t′A and y ∈ s′A. It follows that x + y ∈ t′A + s′A = (t′ + s′)A, and so pA(x + y) ≤ t′ + s′ ≤ t + s. Letting t → pA(x) and s → pA(y), we obtain that pA(x + y) ≤ pA(x) + pA(y). Therefore, pA is sublinear.
Having in view (2.17), for getting (2.20) it remains to prove the reverse of the first and last inclusions in (2.17). So take a ∈ X such that pA(a) < 1 and consider x ∈ X. Since pA(a + tx) ≤ pA(a) + t pA(x) for t ≥ 0, there exists δ > 0 such that pA(a + tx) < 1 for t ∈ [0, δ]. From (2.17), we obtain that a + tx ∈ A for t ∈ [0, δ]; since this holds for −x too, a ∈ cor A. Hence, cor A = {x ∈ X | pA(x) < 1}. Take now x ∈ acl A; then there exists a ∈ A such that [a, x[ ⊆ A. So, for λ ∈ [0, 1[, one has
λpA(x) = pA(λx) ≤ pA((1 − λ)a + λx) + pA((λ − 1)a) ≤ 1 + (1 − λ)pA(−a).
Taking the limit for λ → 1, we get pA(x) ≤ 1. Hence, (2.20) holds. Moreover, assume that X is a t.v.s. and int A ≠ ∅. Then the equalities in (2.21) hold because cor A = int A and acl A = cl A by Proposition 2.1.27(v). □

Corollary 2.1.29. Let X be a t.v.s. and A ⊆ X be a star-shaped neighborhood of 0 ∈ X. Then the following assertions hold: (a) if A is open, then pA is upper semicontinuous; (b) if A is closed, then pA is lower semicontinuous; (c) if A is convex, then pA is continuous; (d) if the net (xi)i∈I ⊆ X converges to x and lim sup_{i∈I} pA(xi) > 0, then x ≠ 0.

Proof. Having a function f : E → R, γ ∈ R, and ρ ∈ {<, ≤, =, ≥, >}, one sets
[f ρ γ] := {x ∈ E | f(x) ρ γ}. (2.22)


Because pA is positively homogeneous, we have that [pA ≤ γ] = γ[pA ≤ 1] and [pA < γ] = γ[pA < 1] for all γ ∈ R>; hence, [pA < γ] = γ int A is open for γ ∈ R> in cases (a) and (c), and [pA ≤ γ] = γ cl A is closed for γ ∈ R> in cases (b) and (c). Taking into account that [pA < γ] = ∅ = [pA ≤ γ′] for γ ∈ R− and γ′ ∈ (−R>), as well as the fact that [pA ≤ 0] = ∩_{γ∈R>} [pA ≤ γ], the conclusions follow from the usual characterizations of upper and lower semicontinuity of an extended real-valued functional and Proposition 2.1.28.
(d) The set B := int A is an open, star-shaped neighborhood of 0 and B ⊆ A; hence, pB is upper semicontinuous and pA ≤ pB. It follows that 0 < lim sup_{i∈I} pA(xi) ≤ lim sup_{i∈I} pB(xi) ≤ pB(x), and so x ≠ 0. □

We discuss now the connections between topology and order. Contrary to the definition of an ordered linear space (i.e., a linear space endowed with a compatible preorder), the definition of an ordered topological linear space does not require any direct relation between the involved order and topology. However, because a compatible preorder on a linear space is defined by a convex cone, one generally asks that the cone defining the order be closed, have nonempty interior, or be normal. Before giving the definition of a normal cone, let us say that the nonempty set A of the linear space X is full, or order-convex, with respect to the convex cone C ⊆ X if
A = [A]C := (A + C) ∩ (A − C) = ∪{[x, y]C | x, y ∈ A},
where [x, y]C := {z ∈ X | x ≤C z ≤C y} = (x + C) ∩ (y − C); [x, y]C is called a C-order interval. For x, y ∈ X, A, B ⊆ X and λ ∈ R \ {0}, one has:
– [x, y]C = x + [0, y − x]C and [x, y]C ≠ ∅ ⇔ x ≤C y;
– A ⊆ B ⇒ [A]C ⊆ [B]C, A ⊆ [A]C, [[A]C]C = [A]C, [λA]C = λ[A]C;
– [A]C is balanced, open or convex if A is so, respectively.

Definition 2.1.30. Let (X, τ) be a t.v.s. and let C ⊆ X be a convex cone. One says that C is normal (relative to τ) if the origin 0 ∈ X has a neighborhood base formed by full sets w.r.t. C.
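For instance, in X := R^n the cone C := R^n_+ is normal: taking U_ε := {x ∈ R^n | max_{1≤i≤n} |x_i| < ε} for ε ∈ R>, one has U_ε + C = {x | x_i > −ε ∀i} and U_ε − C = {x | x_i < ε ∀i}, and so [U_ε]C = U_ε; hence {U_ε | ε ∈ R>} is a neighborhood base of 0 formed by full sets w.r.t. C.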


In the next result, we give several characterizations of normal cones in a topological vector space.

Theorem 2.1.31. Let (X, τ) be a t.v.s., let C ⊆ X be a convex cone, and let B be a neighborhood base of 0 ∈ X. Consider the following assertions:
(i) C is normal;
(i′) ∀U ∈ B, ∃V ∈ B : [V]C ⊆ U (⇔ {[U]C | U ∈ B} is a neighborhood base of 0);
(ii) ∀U ∈ B, ∃V ∈ B, ∀y ∈ V ∩ C : [0, y]C ⊆ U;
(iii) for all nets (xi)i∈I, (yi)i∈I ⊆ X such that 0 ≤C xi ≤C yi for every i ∈ I, one has yi → 0 ⇒ xi → 0;
(iii′) for all sequences (xn)n≥1, (yn)n≥1 ⊆ X such that 0 ≤C xn ≤C yn for every n ≥ 1, one has yn → 0 ⇒ xn → 0;
(iv) cl C is normal;
(v) [B]C is τ-bounded for every τ-bounded subset B ⊆ X.
Then (i) ⇔ (i′) ⇔ (ii) ⇔ (iii) ⇔ (iv), (i) ⇒ (iii′) and (i) ⇒ (v); moreover, if τ is first-countable, then (i) ⇔ (iii′) ⇔ (v).

Proof. The implications (i) ⇔ (i′), (iii) ⇒ (iii′), and (i) ⇒ (v) are obvious.
(i′) ⇒ (ii) Let U ∈ B; by (i′), there exists V ∈ B such that [V]C ⊆ U. Then, for y ∈ V ∩ C, one has [0, y]C ⊆ [V]C ⊆ U because 0 ∈ V.
(ii) ⇒ (iii) Consider nets (xi)i∈I, (yi)i∈I ⊆ X such that 0 ≤C xi ≤C yi for every i ∈ I and yi → 0. Let U ∈ B; then there exists V ∈ B such that [0, y]C ⊆ U for every y ∈ V ∩ C. Because (yi) → 0, there exists iV ∈ I such that yi ∈ V, and so xi ∈ [0, yi]C ⊆ U, for i ≽ iV. Therefore, xi → 0.
(iii) ⇒ (i′) Assume, by contradiction, that (i′) does not hold. Then there exists U0 ∈ B such that [V]C ⊄ U0 for every V ∈ B. Hence, for every U ∈ NX, there exist yU, y′U ∈ U and xU, x′U ∈ C such that zU := yU + xU = y′U − x′U ∉ U0. Of course, yU, y′U → 0, and so y′U − yU → 0, too. Since 0 ≤C xU ≤C y′U − yU, by hypothesis we obtain that xU → 0, and so zU = yU + xU → 0. This contradicts the fact that zU ∉ U0 for every U ∈ NX.
(iv) ⇒ (i) This is obvious because A ⊆ [A]C ⊆ [A]_{cl C} for ∅ ≠ A ⊆ X.
(i) ⇒ (iv) Let V ∈ NX; then there exists U ∈ NX such that [U]C ⊆ V. By the property (LT3) of NX (see page 29), there exists U′ ∈ NX such that U′ + U′ ⊆ U. Using (2.11), we obtain that
[U′]_{cl C} = (U′ + cl C) ∩ (U′ − cl C) ⊆ (U′ + C + U′) ∩ (U′ − (C − U′)) ⊆ (U + C) ∩ (U − C) = [U]C ⊆ V;
hence, cl C is normal.
Assume, moreover, that τ is first-countable.
(iii′) ⇒ (i′) Using a countable base B of neighborhoods of 0 in the proof of the implication (iii) ⇒ (i′), one gets the conclusion.
(v) ⇒ (iii′) Having in view Remark 2.1.25, there exists a base B := {Un | n ≥ 1} of balanced neighborhoods of 0 such that U_{k+1} + U_{k+1} ⊆ Uk for k ≥ 1. Suppose that (iii′) does not hold; then there exist sequences (xn)n≥1, (yn)n≥1 ⊆ X such that 0 ≤C xn ≤C yn for n ≥ 1 with yn → 0 and xn ↛ 0. Because xn ↛ 0, there exist n0 ∈ N∗ and a strictly increasing sequence (nk)k≥1 ⊆ N∗ such that x_{n_k} ∉ U_{n_0} for k ∈ N∗. Because y_{n_k} → 0, passing to a subsequence if necessary, we may (and do) assume that y_{n_p} ∈ Uk for p ≥ k ≥ 1. Replacing the sequences (xn) and (yn) by (x_{n_{k+k′}})_{k≥1} and (y_{n_{k+k′}})_{k≥1}, where k′ := n0 + n1, discarding the neighborhoods Un with n < n_{1+k′} and then renumbering, we have that
xn ∉ U := U1, 0 ≤C xn ≤C yn, yn ∈ Um ∀n, m ∈ N∗, n ≥ m. (2.23)


Setting y′n := 2^n y_{2n} and x′n := 2^n x_{2n}, we have that 0 ≤C x′n ≤C y′n for n ≥ 1. From the implication in (2.16), one obtains that y′n ∈ Un for n ≥ 1, and so y′n → 0. It follows that B := {0} ∪ {y′n | n ≥ 1} is bounded, and so [B]C is also bounded by our hypothesis; hence, there exists α ∈ R> such that [B]C ⊆ αU. Because x′n ∈ [0, y′n]C ⊆ [B]C, we get 2^n x_{2n} = x′n ∈ αU for n ≥ 1. Taking some n ≥ 1 such that 2^n > α, we get the contradiction x_{2n} = 2^{−n}x′n ∈ 2^{−n}αU ⊆ U (because U is balanced). Therefore, (iii′) holds. The proof is complete. □

Corollary 2.1.32. Let (X, τ) be an H.t.v.s. and let C ⊆ X be a convex cone. If C is normal, then cl C is pointed, and so C is pointed, too.

Proof. Indeed, if x ∈ cl C ∩ (− cl C), then
x ∈ ({0} + cl C) ∩ ({0} − cl C) ⊆ (U + cl C) ∩ (U − cl C) = [U]_{cl C} ∀U ∈ NX.
Since cl C is normal (by Theorem 2.1.31), the family {[U]_{cl C} | U ∈ NX} is a neighborhood base of 0, and so x = 0 because X is Hausdorff. □

The case in which the ordering cone has nonempty interior is special, as noticed by Peressini (see [470, p. 183]).

Proposition 2.1.33. Let (X, τ) be a t.v.s. and let C ⊆ X be a convex cone having u0 in its interior. Then the following assertions are equivalent: (a) C is normal; (b) B := [−u0, u0]C is τ-bounded; (c) {(1/n)B | n ∈ N>} is a neighborhood base of 0 ∈ X; (d) any C-order interval is τ-bounded.
Consequently, if C is normal, then (X, τ) is first-countable; moreover, if (X, τ) is Hausdorff, then the Minkowski functional pB associated to B is a norm, and so (X, τ) is normable.

Proof. First, observe that B is a symmetric convex neighborhood of 0 ∈ X as the intersection of the convex neighborhoods u0 − C and C − u0 of 0. Moreover, by assertions (i) and (iv) of Proposition 2.1.28, the Minkowski functional pB of B is a seminorm such that int B = [pB < 1] (see (2.22)). The implication (a) ⇒ (d) holds by Theorem 2.1.31, (c) ⇒ (a) is true by the definition of a normal cone, while (d) ⇒ (b) is obvious (B being a C-order interval).
(b) ⇒ (c) Consider V ∈ NX; because B is τ-bounded, there exists n ∈ N∗ such that (1/n)B ⊆ V. Therefore, B := {(1/n)B | n ∈ N>} is a base of full (because B = [{−u0, u0}]C) neighborhoods of 0.
Assume that C is normal; then τ is first-countable because B is a countable base of neighborhoods of 0. Moreover, assume that τ is Hausdorff and take x ∈ X \ {0}. Because B is a neighborhood base of 0, there exists n ∈ N> such that x ∉ (1/n)B, and so pB(x) ≥ 1/n > 0. Hence, pB is a norm. Because the


unit open ball w.r.t. pB is int B, it follows that τ coincides with the topology determined by the metric ρ defined by ρ(x, x′) := pB(x − x′), and so (X, τ) is normable. □

We provide below an example of a pointed closed convex cone with nonempty interior which is not normal.

Example 2.1.34. Take X := ℓ2 × R, q ∈ ]2, ∞[ and C := epi ‖·‖q := {(x, t) ∈ X | ‖x‖q ≤ t}; consider on X the norm defined by ‖(x, t)‖ := ‖x‖2 + |t|. It is clear that C is a pointed convex cone. Because ‖x‖q ≤ ‖x‖2 for all x ∈ ℓ2, it follows that ‖·‖q : ℓ2 → R is continuous, and so C is closed; moreover, int C = {(x, t) ∈ ℓ2 × R | ‖x‖q < t}, and so u0 := (0, 2α) ∈ int C, where α := (Σ_{k=1}^∞ k^{−q/2})^{1/q} ∈ R>. Setting xn := (1, 2^{−1/2}, . . . , n^{−1/2}, 0, 0, . . .), we have that αn := ‖xn‖q = (Σ_{k=1}^n k^{−q/2})^{1/q} < α, whence (xn, αn) ∈ [−u0, u0]C for n ≥ 1. Because ‖(xn, αn)‖ ≥ ‖xn‖2 = (Σ_{k=1}^n k^{−1})^{1/2} → ∞, the cone C is not normal.

Before providing some relations between the normality of the ordering cone C and other properties of C, we need to introduce some notions. Let (X, τ) be a t.v.s. preordered by the proper convex cone C; we say that the net (xi)i∈I ⊆ X is increasing w.r.t. C (or C-, or ≤C-increasing) if
∀ i, j ∈ I : i ≼ j =⇒ xi ≤C xj; (2.24)

of course, the net (xi)i∈I ⊆ X is C-decreasing if (−xi)i∈I is C-increasing. Taking I := N∗, we get the definitions of C-increasing and C-decreasing sequences.
Let ∅ ≠ A ⊆ X; we say that A is C-lower (C-upper) bounded if A is lower (upper) bounded with respect to RC (see Definition 2.1.7); A is C-bounded if A is C-lower and C-upper bounded, that is, A is a subset of a C-order interval. Similarly, a ∈ X is a C-lower (C-upper) bound (infimum, supremum) of A if a is so for RC. Hence, a ∈ X is a C-lower bound of A if a ≤C x for every x ∈ A; a is an infimum of A w.r.t. C if a is a C-lower bound and, for any C-lower bound a′ of A, we have that a′ ≤C a. The set of infimum (supremum) points of A w.r.t. C will be denoted by infC A (supC A). When C is pointed, infC A and supC A have at most one element; if infC A (supC A) is a singleton, its unique element will also be denoted by infC A (supC A).
Having the t.v.s. (X, τ), we say that the proper convex cone C ⊆ X is regular (fully regular) if any C-increasing and C-bounded (τ-bounded) net from C is convergent; moreover, we say that C is Cauchy regular (Cauchy fully regular) if any C-increasing and C-bounded (τ-bounded) net from C is Cauchy. Furthermore, we say that Y ⊆ X is C-complete if any Cauchy C-increasing net from Y has a limit in Y. In the case in which we replace the word “net” by “sequence” in the preceding definitions, we add the attribute “sequentially” to the previous notions.


For example, C is sequentially regular if any C-increasing and C-bounded sequence from C is convergent.

Remark 2.1.35. Note that, in the preceding definitions of the regularity notions, we may take the nets (sequences) from X. This is based on the following construction. Consider the net (xi)i∈I ⊆ X, fix some i0 ∈ I, set I0 := {i ∈ I | i ≽ i0} and x′i := xi − x_{i0} for i ∈ I0. Then: (x′i)i∈I0 is convergent (Cauchy) if and only if (xi)i∈I is convergent (Cauchy); (x′i)i∈I0 is C-increasing, τ-bounded or C-bounded if (xi)i∈I is so, respectively; (x′i)i∈I0 ⊆ C if (xi)i∈I is C-increasing. To get the “sequential” version, replace “net” by “sequence” in the construction above.
Asking the cone C to be pointed and the limits of the corresponding nets (sequences) to be in C, the notions of (fully and/or sequentially) regular cones in topological vector spaces were introduced by McArthur in [402], as extensions of the notions of (fully) regular cones introduced by Krasnosel'skiĭ in Banach spaces for pointed closed convex cones (see [352, 353]); in the same context as in [353], Aliprantis & Tourky say (in [13]) that C satisfies the (strong) Levi property if C is (fully) sequentially regular. The same terminology as in [402] is used by Nemeth [432]; Isac [293] says that C is completely regular when C is fully regular. The notion of a sequentially C-complete set was introduced by Ng & Zheng in [435].

Remark 2.1.36. Notice that the cone C is necessarily pointed if (X, τ) is Hausdorff and C has one of the regularity properties mentioned above. Indeed, assuming that C is not pointed, there exists c ∈ C ∩ (−C) \ {0}. Set xn := (−1)^n c ∈ C for n ∈ N∗; clearly, (xn)n≥1 is C-increasing, C-bounded, and τ-bounded, but (xn)n≥1 is not Cauchy, and so it is not convergent.

In the next proposition, we mention several relations among the regularity notions introduced above.

Proposition 2.1.37. Let (X, τ) be a t.v.s., and let C ⊆ X be a proper convex cone. The following assertions hold:
(i) C is Cauchy [fully] regular if and only if C is sequentially Cauchy [fully] regular.
(ii) Assume that C is (sequentially) C-complete; then C is (sequentially) Cauchy [fully] regular if and only if C is (sequentially) [fully] regular.
(iii) Assume that either (a) C is C-complete, or (b) τ is first-countable; then C is [fully] regular if and only if C is sequentially [fully] regular.
(iv) Assume that C is normal; then C is (sequentially) [Cauchy] regular whenever C is (sequentially) [Cauchy] fully regular.

Proof. It is clear that C is sequentially (Cauchy) [fully] regular if C is (Cauchy) [fully] regular. So, when needed, we shall prove only the reverse implications.


(i) Assume that C is sequentially Cauchy [fully] regular. Let (xi)i∈I ⊆ C be a C-increasing and [τ-bounded] C-bounded net. By contradiction, assume that (xi)i∈I is not Cauchy. Using Lemma 2.1.23(i), there exists an increasing sequence (in)n≥1 ⊆ I such that (x_{i_n})n≥1 is not Cauchy. Because the sequence (x_{i_n})n≥1 is C-increasing and [τ-bounded] C-bounded, we get the contradiction that (x_{i_n})n≥1 is Cauchy. Hence, (xi)i∈I is Cauchy, and so C is Cauchy [fully] regular.
(ii) This assertion is obvious; indeed, because C is (sequentially) C-complete, a C-increasing net (xi)i∈I (sequence (xn)n≥1) from C is convergent if and only if it is Cauchy.
(iii) Assume that C is sequentially [fully] regular, and consider a C-increasing and [τ-bounded] C-bounded net (xi)i∈I ⊆ C. Because C is sequentially Cauchy [fully] regular, C is Cauchy [fully] regular by (i), and so (xi)i∈I is Cauchy. In case (a), there exists x ∈ C such that xi → x because (xi)i∈I (⊆ C) is Cauchy and C-increasing. In case (b), by Lemma 2.1.23(ii), there exists an increasing sequence (in)n≥1 ⊆ I verifying (2.14). Because (in)n≥1 is increasing, (x_{i_n})n≥1 is C-increasing and, of course, [τ-bounded] C-bounded; hence, x_{i_n} → x for some x ∈ X by our working hypothesis. Using again Lemma 2.1.23(ii), one has xi → x. Therefore, C is [fully] regular.
(iv) Because any C-order interval is τ-bounded by Theorem 2.1.31, the mentioned implications are obvious. □

Recall that the t.v.s. (X, τ) is (sequentially) quasi-complete if every (sequentially) closed and bounded subset of X is (sequentially) complete; so, in Proposition 2.1.37(ii), in the case of full regularity, one could replace the hypothesis on C by the requirement that X be (sequentially) quasi-complete. Note that the equivalence of regularity and sequential regularity from Proposition 2.1.37(iii) was obtained in [432, (3.1)] for C complete and X an H.l.c.s.

Theorem 2.1.38. Let (X, τ) be a t.v.s., and let C ⊆ X be a proper convex cone. Consider the following assertions:
(i) C is sequentially fully regular;
(i′) C is sequentially Cauchy fully regular;
(ii) C is sequentially regular;
(ii′) C is sequentially Cauchy regular;
(iii) C is normal;
(iv) any C-order interval is τ-bounded (⇔ [0, x]C is τ-bounded for every x ∈ C).
Then the following assertions hold:
(a) (i) ⇒ (i′), (ii) ⇒ (ii′) ⇒ (iv) and (iii) ⇒ (iv); moreover, if 0 ∈ X has a bounded neighborhood, then (i′) ⇒ (ii′).


(b) Assume that C is sequentially C-complete, and τ is Hausdorff and first-countable. Then (i) ⇔ (i′), (ii) ⇔ (ii′) ⇒ (iii) ⇔ (iv); if, moreover, 0 ∈ X has a bounded neighborhood, then (i) ⇒ (ii).

Proof. (a) The implications (i) ⇒ (i′), (ii) ⇒ (ii′) and (iii) ⇒ (iv) are obvious.
(ii′) ⇒ (iv) Assume that (iv) does not hold; then there exists u ∈ C such that A := [0, u]C is not bounded. Consequently, there exists a balanced open neighborhood U of 0 ∈ X such that λA ⊄ U for all λ ∈ R>; in particular, for n ≥ 1, there exists un ∈ A such that (C ∋) xn := 2^{−n}un ∉ U, and so p(xn) ≥ 1 because U = [p < 1] by Proposition 2.1.28(ii), where p := pU is the Minkowski functional associated to U. Replacing if necessary xn by [p(xn)]^{−1}xn (equivalently, replacing un by [p(xn)]^{−1}un ∈ A), we may (and do) assume that p(xn) = 1 for n ≥ 1. Set zn := x1 + · · · + xn (∈ C); clearly, (zn)n≥1 is C-increasing. Because un ≤C u, we have that xn ≤C yn := 2^{−n}u, and so zn ≤C y1 + · · · + yn ≤C u for n ≥ 1; hence, (zn)n≥1 (⊆ A) is also C-order bounded. Moreover, because z_{n+1} − zn = x_{n+1} ∉ U for n ≥ 1, the sequence (zn) is not Cauchy, and so (ii′) does not hold. Hence, (ii′) ⇒ (iv).
For the next two implications, assume that 0 has a bounded neighborhood.
(i′) ⇒ (iv) As above, assume that (iv) does not hold; take u ∈ C such that [0, u]C is not bounded. We use the construction and the notation from the proof of the implication (ii′) ⇒ (iv), in which we may (and do) assume that U is bounded; in particular, because (xn)n≥1 ⊆ [p = 1] ⊆ [p < 2] = 2U, the sequence (xn)n≥1 is bounded. Let us set z′n := y1 + · · · + yn; then 0 ≤C xn ≤C zn ≤C z′n for n ≥ 1. Setting v_{2m−1} := z′_{2m−1} and v_{2m} := z′_{2m−1} + x_{2m} for m ≥ 1, clearly (vn)n≥1 ⊆ C and
v_{2m−1} ≤C v_{2m} ≤C v_{2m−1} + y_{2m} ≤C v_{2m−1} + y_{2m} + y_{2m+1} = v_{2m+1} ∀m ≥ 1,
which shows that (vn)n≥1 is C-increasing. On the other hand, because z′n = (1 − 2^{−n})u → u, (z′n)n≥1 is τ-bounded, and so (vn)n≥1 is τ-bounded because (v_{2m−1})m≥1 [= (z′_{2m−1})m≥1] and (v_{2m})m≥1 [= (z′_{2m−1})m≥1 + (x_{2m})m≥1] are so. Because v_{2m} − v_{2m−1} = x_{2m} ∉ U for m ≥ 1, (vn)n≥1 is not Cauchy; consequently, (i′) does not hold. Hence, (i′) ⇒ (iv).
(i′) ⇒ (ii′) Assume that (i′) holds and consider a C-order bounded C-increasing sequence (xn)n≥1 ⊆ X. Because (i′) ⇒ (iv), the sequence (xn)n≥1 is τ-bounded; since (xn)n≥1 is also C-increasing, it follows that (xn)n≥1 is Cauchy. Hence, (ii′) holds.
(b) We have that (i) ⇔ (i′) and (ii) ⇔ (ii′) by Proposition 2.1.37(ii). Moreover, note that xn ≤C x if (xn)n≥1 ⊆ X is C-increasing and xn → x. Indeed, for fixed m ≥ 1, 0 ≤C x′n := x_{n+m} − xm ≤C x′_{n+1} → x − xm =: x′, and so x′ ∈ C because C is sequentially C-complete. Therefore, xm = x − x′ ≤C x for m ≥ 1.
(iv) ⇒ (iii) Having in view the equivalence (i) ⇔ (iii′) from Theorem 2.1.31, we have to prove that xn → 0 whenever (xn)n≥1, (yn)n≥1 ⊆ X are such that yn → 0 and 0 ≤C xn ≤C yn for n ≥ 1. Assume that this assertion


is not true. Proceeding like in the proof of the implication (v) ⇒ (iii′) of Theorem 2.1.31, there exist a base (Un)n≥1 of open balanced neighborhoods of 0 such that U_{k+1} + U_{k+1} ⊆ Uk for k ≥ 1, and sequences (xn)n≥1, (yn)n≥1 ⊆ X such that
0 ≤C xn ≤C yn, yn ∈ Un, 2^{−n}xn ∉ U1 ∀n ≥ 1.
Setting z′n := y1 + · · · + yn, one has z′n ≤C z′_{n+p} for n, p ≥ 1. Using (2.16), for n, p ≥ 1, one has
z′_{n+p} − z′n = y_{n+1} + y_{n+2} + · · · + y_{n+p} ∈ U_{n+1} + U_{n+2} + · · · + U_{n+p} ⊆ Un,

showing that (z′n)n≥1 ⊆ C is a Cauchy sequence. Because C is sequentially C-complete, there exists z′ ∈ C such that (z′n) → z′; as seen above, z′n ≤C z′ for n ≥ 1, and so (xn)n≥1 ⊆ [0, z′]C. Because [0, z′]C is τ-bounded by our working hypothesis, there exists α > 0 such that [0, z′]C ⊆ αU1, and so 2^{−n}xn ∈ 2^{−n}αU1 for n ≥ 1. Taking n0 ∈ N∗ such that 2^{−n_0}α < 1, we get a contradiction because U1 is balanced and 2^{−n}xn ∉ U1 for n ≥ 1. Hence, (iv) ⇒ (iii), and so (iii) ⇔ (iv). The proof is complete. □

Notice that the sequential C-completeness of C is essential for getting the implication (iv) ⇒ (iii) in Theorem 2.1.38 when int C = ∅, as the next example shows.

Example 2.1.39. Let X := c0 := {(xn)n≥1 ⊆ R | xn → 0} be endowed with the norm ‖·‖∞ and take C := {(xn)n≥1 ∈ c00 | Σ_{k=1}^m xk ≥ 0 ∀m ≥ 1}, where c00 := {(xn)n≥1 ⊆ R | ∃n0 ≥ 1, ∀n > n0 : xn = 0} (⊂ c0). As seen in Example 2.2.21(i), C is a pointed convex cone such that all order intervals are bounded, but C is not normal.

McArthur [402], for (X, τ) a complete metrizable t.v.s. and C ⊂ X a convex cone, stated the equivalence (iii) ⇔ (iv) in [402, Th. 3(i)] and the implication (ii) ⇒ (iii) for C closed in [402, Th. 3(ii)]. As seen in Example 2.1.39, the implication (iv) ⇒ (iii) is false if C is not closed (or, more precisely, if C is not sequentially C-complete).
Having (X, τ) a t.v.s. and C ⊆ X a normal convex cone with nonempty interior, the τ-bounded and the C-bounded subsets of X coincide, and so the (Cauchy and/or sequential) regularity notions and the (Cauchy and/or sequential) full regularity notions coincide, respectively. The next example provides a normal convex cone with nonempty interior which is not regular.

Example 2.1.40. Consider the linear space ℓ∞ := {(xn)n≥1 ⊆ R | (xn) is bounded} endowed with the norm ‖·‖∞ and take C := (ℓ∞)+ := {(xn)n≥1 ∈ ℓ∞ | xn ≥ 0 ∀n ≥ 1}; C is a normal pointed closed convex cone with int C = {(xn)n≥1 ∈ ℓ∞ | inf_{n≥1} xn > 0}. Consider also the sequence (ξn)n≥1 ⊆ ℓ∞, where ξn := Σ_{k=1}^n ek, in which ek := (δ_{i,k})_{i≥1} with δ_{i,k} := 1 for i = k and δ_{i,k} := 0 for i ≠ k; (ξn)n≥1 is C-increasing and supC{ξn | n ≥ 1} = e with e := (1, 1, 1, . . .) ∈ ℓ∞, but ‖ξn − ξm‖∞ = 1 for n > m ≥ 1. Therefore, (ξn)n≥1 is not Cauchy, and so C is not regular (of any type).
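The normality of (ℓ∞)+ asserted in Example 2.1.40 can be checked directly via Theorem 2.1.31(iii): if 0 ≤C x ≤C y in ℓ∞, then 0 ≤ xk ≤ yk for every k ≥ 1, whence ‖x‖∞ ≤ ‖y‖∞; thus, for nets, yi → 0 forces xi → 0.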


Related to monotone nets and sequences, one also has the next result.

Proposition 2.1.41. Let (X, τ) be a t.v.s. preordered by the (sequentially) closed convex cone C, and let (xi)i∈I ⊆ X ((xn)n≥1 ⊆ X) be an increasing net (sequence) having a subnet (subsequence) convergent to x ∈ X. Then A := {xi | i ∈ I} (A := {xn | n ≥ 1}) is C-upper bounded and x ∈ supC A. Moreover, if C is normal, then xi → x (xn → x).

Proof. Let x_{ϕ(j)} → x, where ϕ : (J, ≤) → (I, ≼), with (J, ≤) directed, is filtering. Fix i0 ∈ I and take j0 ∈ J such that ϕ(j) ≽ i0 for j ≥ j0. Then x_{ϕ(j)} ∈ x_{i0} + C for j ≥ j0, whence x ∈ cl(x_{i0} + C) = x_{i0} + C. Therefore, x ≥C x_{i0} for every i0 ∈ I; since i0 ∈ I is arbitrary, x is a C-upper bound of A. Let x′ ∈ X be an arbitrary C-upper bound of A. Then xi ∈ x′ − C for every i ∈ I, and so x_{ϕ(j)} ∈ x′ − C for every j ∈ J; hence, x ∈ cl(x′ − C) = x′ − C, that is, x ≤C x′. Therefore, x ∈ supC A.
Assume now that C is normal and take V ∈ NX such that V = [V]C. Since xi ≤C x for i ∈ I, we have that xi − x ∈ (−C) ⊆ V − C for i ∈ I. On the other hand, because x_{ϕ(j)} → x, there exists jV ∈ J such that x_{ϕ(j)} − x ∈ V for j ≥ jV. Then, for i ≽ iV := ϕ(jV), we have that xi − x ≥C x_{ϕ(jV)} − x ∈ V, whence xi − x ∈ V + C. Therefore, xi − x ∈ (V − C) ∩ (V + C) = [V]C = V for i ≽ iV. Hence, xi → x. The “sequential” version is obtained from the previous one by replacing “(sub)net” by “(sub)sequence”. □

The case of well-based ordering cones is also special, because these cones have nice properties, as seen in Theorem 2.1.45 below.

Definition 2.1.42. Let (X, τ) be a t.v.s. and let C ⊆ X be a proper convex cone. We say that:
(i) C is based if C has a base, that is, a convex set B ⊆ X such that C = R+B and 0 ∉ cl B;
(ii) C is well-based if C has a bounded base.

Notice that Jameson says (in [315, p. 120]) that the convex cone C ⊆ (X, τ) is well-based if C has an algebraic base B which is (τ-)bounded and 0 ∉ cl B. Of course, if C is well-based in the sense of Jameson, then C is well-based in the sense of Definition 2.1.42(ii); these definitions are equivalent if τ is locally convex. It is clear that, for the proper convex cone C, one has
C is well-based =⇒ C is based =⇒ C is pointed.
Moreover, C is based whenever there exists x∗ ∈ X∗ such that x∗(x) > 0 for all x ∈ C \ {0}; indeed, for such an x∗ ∈ X∗, the set B := {x ∈ C | ⟨x, x∗⟩ = 1} is a nonempty convex set, C = R+B and cl B ⊆ [x∗ = 1] ∌ 0. This representation becomes necessary when τ is locally convex, as seen in Proposition 2.2.23 from the next section.
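For instance, C := R^n_+ is well-based in R^n: the unit simplex B := {x ∈ R^n_+ | x1 + · · · + xn = 1} is a bounded (even compact) convex set with 0 ∉ cl B and R+B = C; here one may take x∗(x) := x1 + · · · + xn in the representation above.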


The next auxiliary result will be used in the proof of Theorem 2.1.45 and elsewhere.

Lemma 2.1.43. Let (X, τ) be a t.v.s. and B ⊆ X a nonempty bounded set such that 0 ∉ cl B.
(i) Assume that (sj)j∈J ⊆ R+, (vj)j∈J ⊆ B, and {sj vj | j ∈ J} is bounded; then {sj | j ∈ J} is bounded. Consequently, if B′ ⊆ X is a bounded set such that R+B′ = R+B and 0 ∉ cl B′, then B′ ⊆ [γ, δ]B for some γ, δ ∈ R> with γ ≤ δ.
(ii) Assume that the nets (ti)i∈I ⊆ R+, (ui)i∈I ⊆ B and (xi)i∈I ⊆ X are such that xi = ti ui for i ∈ I, ti → t ∈ [0, ∞] and xi → x ∈ X. Then (a) t < ∞; (b) t = 0 ⇔ x ∈ cl{0}; (c) t ∈ R> ⇔ x ∈ R> · cl B.
(iii) One has cl(R+B) = (R> · cl B) ∪ cl{0}. Moreover, R+ · cl B = cl(R+B) if and only if X is Hausdorff; in particular, R+B is closed if B is closed and X is Hausdorff.
(iv) If B is (sequentially) complete, then R+B is (sequentially) complete.

Proof. (i) Assume that the set {sj | j ∈ J} is not bounded. Hence, for every n ∈ N∗, there exists jn ∈ J such that s_{j_n} ≥ n. Then (B ∋) v_{j_n} = (s_{j_n})^{−1}(s_{j_n}v_{j_n}) → 0 because (s_{j_n})^{−1} → 0 and (s_{j_n}v_{j_n})n≥1 is bounded, whence the contradiction 0 ∈ cl B. Hence, {sj | j ∈ J} is bounded.
Assume that B′ ⊆ X is bounded, 0 ∉ cl B′ and R+B′ = R+B. Then, for every v ∈ B′, there exist sv ∈ R> and uv ∈ B such that v = sv uv; hence, {sv uv | v ∈ B′} (= B′) and {s_v^{−1}v | v ∈ B′} (⊆ B) are bounded, and so δ := sup{sv | v ∈ B′} ∈ R> and ε := sup{s_v^{−1} | v ∈ B′} ∈ R>. Therefore, B′ ⊆ [γ, δ]B, where γ := ε^{−1}.
(ii) (a) By contradiction, assume that t = ∞. Then ui = t_i^{−1}xi for large i, whence the contradiction B ∋ ui → 0 · x = 0, that is, 0 ∈ cl B. Hence, t < ∞.
(b) & (c) If t = 0, then xi → 0 because (ui)i∈I is bounded and ti → 0, and so 0 = xi − xi → x − 0 = x; it follows that x ∈ cl{0}. If t ∈ R>, then B ∋ ui = t_i^{−1}xi → t^{−1}x for large i, whence u := t^{−1}x ∈ cl B, and so x = tu ∈ R> · cl B. Hence, the direct implications in (b) and (c) hold. Because cl{0} ∩ cl B = ∅ by (2.13) and cl{0} is a linear space, it follows that cl{0} ∩ (R> · cl B) = ∅; therefore, the reverse implications in (b) and (c) also hold.
(iii) Because B ∪ {0} ⊆ R+B, we have that cl B ∪ cl{0} ⊆ cl(R+B), and so (R> · cl B) ∪ cl{0} ⊆ cl(R+B) because cl(R+B) is clearly a cone. Take x ∈ cl(R+B); then there exist nets (ti)i∈I ⊆ R+ and (ui)i∈I ⊆ B such that xi := ti ui → x; passing to a subnet if necessary, we may assume that ti → t ∈ [0, ∞]. From (ii)(a), one has t < ∞. If t ∈ R>, one has x ∈ R> · cl B by (ii)(c), while for t = 0, one has x ∈ cl{0} by (ii)(b). Therefore, cl(R+B) = (R> · cl B) ∪ cl{0}. Because R+ · cl B = (R> · cl B) ∪ {0} and cl{0} ∩ (R> · cl B) = ∅ (as observed above), one has
cl(R+B) = R+ · cl B ⇐⇒ cl{0} = {0} ⇐⇒ X is Hausdorff.


(iv) Assume that B is complete and consider a Cauchy net (xi)i∈I ⊆ R+B; then there exist nets (ti)i∈I ⊆ R+ and (ui)i∈I ⊆ B such that xi = ti ui for i ∈ I. Of course, (ti)i∈I has a subnet (tj)j∈J converging to some t ∈ [0, ∞]. By assertion (ii)(a), t < ∞. If t = 0, then xj = tj uj → 0 because B is bounded, and so xi → 0 because (xi)i∈I is Cauchy. Let t ∈ R>; then tj ∈ R> for large j, and so we may assume that tj ∈ R> for j ∈ J. Then sj := t_j^{−1} → t^{−1}, and so (sj)j∈J is Cauchy. Since uj = sj xj for j ∈ J, (uj)j∈J (⊆ B) is Cauchy by Lemma 2.1.22 because (sj)j∈J and (xj)j∈J are Cauchy. Since B is complete, (uj)j∈J converges to some u ∈ B, and so xj = tj uj → tu ∈ R>B ⊆ R+B. Therefore, R+B is complete. The “sequential” version is obtained from the previous one by replacing “(sub)net” by “(sub)sequence”. □

Corollary 2.1.44. Assume that (X, τ) is an H.t.v.s., B ⊆ X is a bounded nonempty convex set such that 0 ∉ cl B, and C := R+B.
(i) If B is closed, then C is closed; if B is (sequentially) complete, then C is (sequentially) complete.
(ii) If C is closed, then cl B is a bounded closed base of C; moreover, if C is (sequentially) complete, then cl B is (sequentially) complete.
(iii) If B is (sequentially) compact, then [0, x]C is (sequentially) compact for every x ∈ C.

Proof. Assertion (i) is an immediate consequence of Lemma 2.1.43(iii) [and (iv)].
(ii) Assume that C is closed; because B ⊆ C, one has cl B ⊆ cl C = C, and so R+ · cl B ⊆ C = R+B ⊆ R+ · cl B. Hence, C = R+ · cl B, and so cl B is a bounded closed base of C. Assume that C is (sequentially) complete; then cl B is (sequentially) complete as a closed subset of a (sequentially) complete set.
(iii) Assume that B is compact. Then B is closed and bounded, and so C is closed by (i); hence, [0, x]C is closed for every x ∈ C. Fix x ∈ C \ {0} and take a net (xi)i∈I ⊆ [0, x]C. Then x = tb and xi = ti bi with t ∈ R>, ti ∈ R+, b, bi ∈ B for i ∈ I; moreover, because x − xi ∈ C, one has x − xi = t′i b′i with t′i ∈ R+, b′i ∈ B for i ∈ I. Hence, (0 ≠) tb = xi + t′i b′i = ti bi + t′i b′i = (ti + t′i)b″i with b″i ∈ B for i ∈ I because B is convex. Using Lemma 2.1.43(i), one gets that (ti + t′i)i∈I is bounded, and so (ti)i∈I is also bounded; consequently, (ti)i∈I has a subnet (tj)j∈J with tj → t̄ ∈ R+. Because B is compact, the net (bj)j∈J has a subnet (bk)k∈K with bk → b̄ ∈ B. Hence, ([0, x]C ∋) xk = tk bk → t̄b̄ ∈ [0, x]C because [0, x]C is closed. Therefore, [0, x]C is compact. If B is sequentially compact, replace I by N> and “(sub)net” by “(sub)sequence” in the above arguments. □

Note that Corollary 2.1.44(i) is assertion 3.8.3 from [315]. Moreover, from Corollary 2.1.44(ii), one obtains [315, Cor. 3.8.5] because, when (X, τ) is locally convex, there exists x∗ ∈ X∗ such that 1 ≤ x∗(x) for x ∈ B, and so


C = R+B′, where B′ := {x ∈ C | x∗(x) = 1}; when C is closed and B is bounded, B′ is closed (as the intersection of two closed sets) and bounded (because B′ ⊆ ]0, 1]B).

Theorem 2.1.45. Let (X, τ) be a t.v.s., let B ⊆ X be a nonempty bounded convex set such that 0 ∉ cl B, and let C := R+B. Then the following assertions hold:
(i) The cone C is normal, Cauchy regular and Cauchy fully regular.
(ii) Assume, moreover, that B is (sequentially) complete; then C is (sequentially) complete, (sequentially) regular, and (sequentially) fully regular. In particular, C is regular and fully regular whenever B is compact.

Proof. Because B is convex and 0 ∉ B, C is a pointed proper convex cone; moreover, by Lemma 2.1.43(iv), C is (sequentially) complete when B is so.
(i) In order to prove that C is normal, consider nets (xi)i∈I and (yi)i∈I from X such that 0 ≤C xi ≤C yi for i ∈ I and yi → 0. It follows that there exist nets (si)i∈I, (ti)i∈I, (s′i)i∈I ⊆ R+ and (ui)i∈I, (vi)i∈I, (u′i)i∈I ⊆ B such that
xi = si ui, yi = ti vi, yi − xi = s′i u′i ∀i ∈ I.
Because B is convex, one has yi = ti vi = si ui + s′i u′i = (si + s′i)u″i for some u″i ∈ B and every i ∈ I. Using Lemma 2.1.43(ii)(b), we obtain that (si + s′i) → 0, whence si → 0. Using again Lemma 2.1.43(ii)(b), we get xi → 0. Applying now Theorem 2.1.31, we obtain that C is normal.
In order to prove that C is Cauchy (fully) regular, consider a C-increasing and C-bounded (τ-bounded) sequence (xn)n≥1 ⊆ C. Then there exist sequences (tn)n≥1 ⊆ R+ and (un)n≥1 ⊆ B such that xn − x_{n−1} = tn un for n ≥ 1, where x0 := 0. It follows that
xn = Σ_{k=1}^n (xk − x_{k−1}) = Σ_{k=1}^n tk uk = Sn u′n ∀n ≥ 1, (2.25)
where Sn := Σ_{k=1}^n tk (≥ 0) and u′n ∈ B for n ≥ 1; clearly, (Sn)n≥1 is increasing. Assume that (xn)n≥1 is C-upper bounded by x ∈ X. Because xn ≤C x, there exist sn ∈ R+ and vn ∈ B such that x = Sn u′n + sn vn = (Sn + sn)v′n for some v′n ∈ B. Because {(Sn + sn)v′n | n ∈ N∗} (= {x}) is bounded, the sequence (Sn + sn)n≥1 is bounded by Lemma 2.1.43(i), and so (Sn)n≥1 and (sn)n≥1 are bounded because 0 ≤ sn, Sn ≤ Sn + sn for n ≥ 1. Since (Sn)n≥1 is also increasing, (Sn)n≥1 is convergent, and so (Sn)n≥1 is Cauchy. Take V ∈ NX; because B is bounded, there exists εV ∈ R> such that εV B ⊆ V. Because (Sn)n≥1 is Cauchy, there exists nV ∈ N∗ such that 0 ≤ S_{n+p} − Sn = Σ_{k=n+1}^{n+p} tk ≤ εV for all n, p ∈ N∗ with n ≥ nV. Because

x_{n+p} − xn = Σ_{k=n+1}^{n+p} tk uk ∈ (Σ_{k=n+1}^{n+p} tk) · B ⊆ [0, εV] · B ⊆ V


for n ≥ nV and p ≥ 1, the sequence (xn)n≥1 is Cauchy. Therefore, C is sequentially Cauchy regular.
Assume now that (xn)n≥1 is τ-bounded; taking into account (2.25), the set {Sn u′n | n ≥ 1} is bounded, and so the sequence (Sn)n≥1 is bounded by Lemma 2.1.43(i). Continuing the proof as in the preceding case, one obtains that (xn)n≥1 is Cauchy, and so C is sequentially Cauchy fully regular. Applying now Proposition 2.1.37(i), it follows that C is also Cauchy regular and Cauchy fully regular.
(ii) Assume first that B is sequentially complete. Consider, as above, a C-increasing sequence (xn)n≥1 ⊆ C which is either C-bounded or τ-bounded. Let (tn)n≥1 ⊆ R+ and (un)n≥1 ⊆ B be such that xn − x_{n−1} = tn un for n ≥ 1, where x0 := 0. As seen above, there exists S := Σ_{n=1}^∞ tn = lim Sn ∈ R+ and the sequence (xn)n≥1 = (Σ_{k=1}^n tk uk)n≥1 is Cauchy. If S = 0, then xn = 0 for n ≥ 1, and so (xn) is convergent. So, assume that S > 0. Replacing if necessary xn by x′n := S^{−1}xn for n ≥ 1, we may (and do) assume that S = 1. Fix x̄ ∈ B and set yn := xn + (1 − Sn)x̄ for n ≥ 1; clearly, (yn)n≥1 (⊆ B) is Cauchy because (xn)n≥1 is so and (1 − Sn)x̄ → 0. Because B is sequentially complete, there exists y ∈ B such that yn → y, whence xn = yn − (1 − Sn)x̄ → y − 0 = y. Therefore, C is sequentially regular and sequentially fully regular.
Assume now that B is complete. By the preceding case, C is sequentially regular and sequentially fully regular. It follows that C is regular and fully regular by Proposition 2.1.37(iii) because C is complete. If B is compact, then B is (bounded and) complete, and so the conclusion follows from the preceding assertion. □

Notice that B is complete by Corollary 2.1.24 when τ is first-countable and B is sequentially complete. Notice also that the normality and the Cauchy full regularity of well-based cones (in the sense of Jameson) are stated in [315, 3.8.2] and [315, Th. 3.8.7], respectively.
It is well known that an increasing and upper bounded sequence (xn)n≥1 ⊆ R converges to sup{xn | n ≥ 1}. In ordered topological vector spaces this assertion is generally not true, as Example 2.1.40 shows; indeed, the sequence (ξn)n≥1 from Example 2.1.40 is (ℓ∞)+-increasing, sup_{n≥1} ξn = e, but ‖ξn − e‖∞ = 1 for n ≥ 1.

Definition 2.1.46. Let (X, τ) be a t.v.s. and C ⊆ X be a proper convex cone; C is called (sequentially) Daniell if every C-increasing and C-upper bounded net (sequence) from X has a supremum to which it converges.

Remark 2.1.47. Observe that one may take the net (sequence) to be contained in C in Definition 2.1.46 (see also Remark 2.1.36). Also note that C is (sequentially) regular when C is (sequentially) Daniell; conversely, if C is (sequentially) regular and (sequentially) closed, then C is (sequentially) Daniell. Consequently, (ℓ∞)+ is not Daniell (see Example 2.1.40).
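On the other hand, C := R^n_+ is a Daniell cone in R^n: if (xi)i∈I is C-increasing and C-upper bounded, then each coordinate net (x_i^k)i∈I ⊆ R is increasing and bounded above, hence converges to sk := sup_{i∈I} x_i^k; consequently, xi → s := (s1, . . . , sn) and s = supC{xi | i ∈ I}.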


The definition of a Daniell cone (or of the Daniell property) can be found in [81], [387, p. 47], [306].

Proposition 2.1.48. Let X be an H.l.c.s. and C ⊆ X be a proper convex cone.
(i) Assume that C is a Daniell cone and ∅ ≠ D ⊆ X. If D is directed w.r.t. ≤C and C-upper bounded, then supC D exists in X and supC D ∈ cl D.
(ii) Assume that C is closed. If, for any nonempty subset D of X which is directed w.r.t. ≤C and C-upper bounded, supC D exists and supC D ∈ cl D, then C is a Daniell cone.

Proof. (i) Assume that D is directed w.r.t. ≤C and C-upper bounded. Consider the net (xd)d∈D, where xd := d; it is clear that (xd)d∈D is C-increasing and C-upper bounded. Because C is Daniell, there exists x := sup{xd | d ∈ D} = supC D and xd → x. Since (D ∋) xd → x = supC D, we have that supC D ∈ cl D.
(ii) Assume that (xi)i∈I ⊆ X is C-increasing and C-upper bounded. Clearly, D := {xi | i ∈ I} is C-upper bounded. Moreover, D is ≤C-directed. Indeed, take u, v ∈ D; then u = xi, v = xj for some i, j ∈ I, and so there exists k ∈ I such that i, j ≼ k, whence xi, xj ≤C xk (∈ D) because (xi)i∈I is C-increasing. Hence, D is directed w.r.t. ≤C. By the working hypothesis, there exists x := supC D ∈ X with x ∈ cl D.
Assume that xi ↛ x; then there exist V0 ∈ Nx and a cofinal subset J of I such that xj ∉ V0 for all j ∈ J. Because D′ := {xi | i ∈ J} is obviously directed w.r.t. ≤C and C-upper bounded, there exists x′ := supC D′ ∈ cl D′. Because (xi)i∈J is a subnet of (xi)i∈I, one has that x′ = supC D (= x) by Proposition 2.1.41; but then x = x′ ∈ cl D′, which is impossible because xj ∉ V0 ∈ Nx for every j ∈ J. This contradiction proves that xi → x, and so C is a Daniell cone. □

Notice that the partially ordered real topological vector spaces in which directed sets admit a supremum in their closure are investigated in [7]. An immediate consequence of Proposition 2.1.41 is the next result.

Corollary 2.1.49. Let X be a t.v.s. and C ⊆ X be a (sequentially) closed, normal proper convex cone. If the C-order intervals are (sequentially) compact, then C is (sequentially) Daniell.

Proof. Consider a C-increasing and C-upper bounded net (xi)i∈I ⊆ C. Then there exists u ∈ C such that (xi)i∈I ⊆ [0, u]C. Because [0, u]C is compact, (xi)i∈I has a subnet (xj)j∈J such that xj → x (∈ [0, u]C). Using Proposition 2.1.41, one obtains that x ∈ supC{xi | i ∈ I} and xi → x. Therefore, C is Daniell. For the sequential case, replace “(sub)net” by “(sub)sequence”. □

Corollary 2.1.50. Let (X, τ) be an H.t.v.s. and let C ⊆ X be a closed well-based convex cone. Then C is (sequentially) Daniell whenever C is (sequentially) complete. Consequently, C is (sequentially) Daniell if it has a (sequentially) compact base.


Proof. Because C is closed and well-based, by Corollary 2.1.44(ii), there exists a bounded closed convex set B ⊆ X such that C = R+B; moreover, B is (sequentially) complete if C is so. Then C is (sequentially) regular by Theorem 2.1.45(ii), and so the conclusion follows by Remark 2.1.47 because C is also closed. For the last assertion, either use Corollaries 2.1.49 and 2.1.44(iii), or observe that the cone C is well-based and (sequentially) complete because a base is bounded and (sequentially) complete (being compact), whence C is (sequentially) Daniell by the first part. □

The fact that C is Daniell when (X, τ) is a complete H.t.v.s. and C is a closed and well-based cone in the sense of Jameson is stated in [315, Cor. 3.8.8].

2.2 Functional Analysis and Convexity

2.2.1 Locally convex spaces

An important class of topological vector spaces is that of locally convex spaces. Usually, this class of topological vector spaces is introduced in two equivalent manners. The simplest one is that the t.v.s. (X, τ) is a locally convex space (l.c.s. for short) if the origin 0 ∈ X has a neighborhood base formed by convex sets. It follows that the origin has a neighborhood base formed by balanced closed convex sets, denoted by N^c_X in the sequel.
Consider X a linear space and P a nonempty family of seminorms on X. For every nonempty finite subset P ⊆ P, x ∈ X, and ε > 0, let
V(x; P, ε) := {y ∈ X | p(y − x) < ε ∀ p ∈ P},

V (P, ε) := V (0; P, ε).

It is obvious that V(x; P, ε) is a convex set and V(x; P, ε) = x + V(P, ε); moreover, V(P, ε) is also symmetric (and so is balanced) and absorbing. Setting
B(x) := {V(x; P, ε) | ε > 0, ∅ ≠ P ⊆ P, P finite},

B := B(0),

B satisfies conditions (LT0)–(LT3) (see Section 2.1). Therefore, there exists a unique linear topology τP on X such that B is a neighborhood base of 0 w.r.t. τP; of course, B(x) is a neighborhood base of x w.r.t. τP for every x ∈ X. Because every V ∈ B is convex, (X, τP) is a locally convex space, denoted in the sequel by (X, P). Replacing, if necessary, P by
Q := {max{p1, ..., pk} | k ∈ N∗, p1, ..., pk ∈ P},
we may assume that P is directed, that is, for every p1, p2 ∈ P, there exists p3 ∈ P such that p1 ≤ p3 and p2 ≤ p3. In such a case, B := {V(p, ε) | p ∈ P, ε > 0} is a neighborhood base of 0 for the topology τP, where V(p, ε) := V({p}, ε).


The next theorem shows that, by the second procedure, one obtains all the locally convex spaces; for proving it, we use the Minkowski functional defined on page 34.

Theorem 2.2.1. Let (X, τ) be a locally convex space. Then there exists a nonempty family P of seminorms on X such that τ = τP.

Proof. Let B be a neighborhood base of 0 ∈ X formed by balanced convex open subsets of X. Consider P := {pU | U ∈ B}. By assertions (i) and (iv) of Proposition 2.1.28, every element of P is a seminorm and U = V(pU, 1) for every U ∈ B. It follows that
B ⊆ {V(p, ε) | p ∈ P, ε > 0} = {εU | U ∈ B, ε > 0} ⊆ Nτ(0).
Therefore, τ = τP. □

Having in view the preceding theorem, if the topology τ of the l.c.s. X is generated by the family P of seminorms, then the set B ⊆ X is bounded if and only if p(B) is bounded in R for every p ∈ P. As seen in Corollary 2.1.29, the Minkowski functional pU associated with a convex neighborhood U of the origin of the t.v.s. X is continuous, and int U = [pU < 1], cl U = [pU ≤ 1]. It follows that every seminorm p ∈ P is τP-continuous on the l.c.s. (X, P).
The proof of the preceding theorem also shows that the locally convex space (X, τ) is first-countable if and only if the topology τ is defined by an (at most) countable family of seminorms. One can show that a Hausdorff and first-countable l.c.s. is metrizable, i.e., there exists a metric ρ (even invariant under translations) such that τ = τρ.
It is easy to show that the l.c.s. (X, P) is Hausdorff if and only if, for every x ∈ X \ {0}, there exists p ∈ P such that p(x) > 0. Using this characterization and the Hahn–Banach theorem below, one obtains a remarkable property of locally convex spaces: the continuous dual X∗ of a nontrivial Hausdorff l.c.s. (X, P) is rich; more precisely, for every x ∈ X \ {0}, there exists x∗ ∈ X∗ such that ⟨x, x∗⟩ := x∗(x) > 0. Recall that the linear functional ϕ : (X, P) → R is continuous if and only if there exist M > 0 and p1, . . . , pk ∈ P such that
ϕ(x) ≤ M max{p1(x), . . . , pk(x)} ∀ x ∈ X.
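For instance, on the linear space C(R) of all continuous functions x : R → R, the countable directed family P := {pn | n ∈ N∗} with pn(x) := sup_{t∈[−n,n]} |x(t)| generates a Hausdorff first-countable (hence metrizable) locally convex topology, namely the topology of uniform convergence on the compact subsets of R; this l.c.s. is not normable.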

2.2.2 Examples and properties of locally convex spaces

We give now several examples of locally convex spaces and mention some of their properties; see (1)–(6) below.

Example 2.2.2. (1) Let ‖·‖ : X → R be a norm on the linear space X. Taking P := {‖·‖}, the space (X, ‖·‖) := (X, P) is called a normed space. The normed space (X, ‖·‖) is also a metric space, where the metric ρ is defined by ρ(x, y) := ‖x − y‖, and so is a Hausdorff first-countable topological space. Important examples of normed spaces are:


(a) ℓ∞ := {(xn)n≥1 ⊆ R | (xn)n≥1 is bounded} with the norm ‖(xn)n≥1‖∞ := sup{|xn| | n ≥ 1}, and its subspaces c := {(xn)n≥1 ⊆ R | (xn)n≥1 is convergent} and c0 := {(xn)n≥1 ⊆ R | (xn) → 0} with the same norm;
(b) ℓp := {(xn)n≥1 ⊆ R | Σ_{n≥1} |xn|^p < ∞}, for p ∈ [1, ∞[, with the norm ‖(xn)n≥1‖p := (Σ_{n≥1} |xn|^p)^{1/p};
(c) c00 := {(xn)n≥1 ⊆ R | ∃n0 ≥ 1, ∀n ≥ n0 : xn = 0} (⊆ ℓp for p ∈ [1, ∞]), endowed with one of the norms ‖·‖p, p ∈ [1, ∞];
(d) C[0, 1] := {f : [0, 1] → R | f is continuous} with the Chebyshev norm ‖x‖ := ‖x‖∞ := sup_{t∈[0,1]} |x(t)|;
(e) Lp(Ω) := {x : Ω → R | x is measurable and ∫_Ω |x(t)|^p dt < ∞} with the norm ‖x‖p := (∫_Ω |x(t)|^p dt)^{1/p}, where p ∈ [1, ∞[ and Ω ⊆ R^n is an open set endowed with the Lebesgue measure (in fact, one identifies two functions if they coincide a.e., that is, almost everywhere);
(f) L∞(Ω) := {x : Ω → R | x is measurable and there exists M ∈ R+ such that |x| ≤ M a.e. on Ω} with the norm ‖x‖∞ := inf{M ∈ R+ | |x| ≤ M a.e.}, where Ω ⊆ R^n is an open set endowed with the Lebesgue measure (identifying the functions which coincide a.e.).
When X is one of the real linear spaces mentioned above, we denote by X+ the set of its positive elements; more precisely,
X+ := {(xn)n≥1 ∈ X | xn ≥ 0 ∀n ≥ 1} if X ∈ {c00, c0, c} ∪ {ℓp | p ∈ [1, ∞]},
X+ := {x ∈ X | x(t) ≥ 0 ∀t ∈ [0, 1]} if X := C[0, 1],
X+ := {x ∈ X | x ≥ 0 a.e.} if X ∈ {Lp(Ω) | p ∈ [1, ∞]};
X+ is a pointed normal closed convex cone when X is endowed with the topology defined by the corresponding norm.
The normed space (X, ‖·‖) is a Banach space if any Cauchy sequence (xn)n≥1 ⊆ X is convergent [(xn) is Cauchy if ∀ε > 0, ∃nε ∈ N, ∀n, m ≥ nε : ‖xn − xm‖ < ε]; the normed spaces mentioned in (a), (b), (d)–(f) are Banach spaces, while c00 is dense in (c0, ‖·‖∞) and in (ℓp, ‖·‖p) for p ∈ [1, ∞[.
The Banach space (X, ‖·‖) is a Hilbert space when there exists an inner product (· | ·) on X such that ‖x‖ = (x | x)^{1/2} for x ∈ X or, equivalently, when the norm ‖·‖ satisfies the parallelogram law: ‖x + y‖² + ‖x − y‖² = 2(‖x‖² + ‖y‖²) for all x, y ∈ X.
A Banach space X is an Asplund space if every continuous convex function f on an open convex subset S ⊆ X is Fréchet differentiable at the points of a dense subset M ⊆ S. Asplund himself named these spaces in his paper [24] strong differentiability spaces. The name Asplund space was given by Namioka and Phelps [428]. For more information, compare Subsection 2.2.3 and Chapter 4.
(2) Topology of a product space. Let (X, P) be a locally convex space and let X0 be a linear subspace of X. Then p|X0 is a seminorm on X0 for every p ∈ P. Taking P0 := {p|X0 | p ∈ P}, the space (X0, P0) becomes a


locally convex space; the topology τP0 is the trace of the topology τP on X0. If (X1, P1) and (X2, P2) are locally convex spaces, then (X1 × X2, P) is an l.c.s., where P := {p1 × p2 | p1 ∈ P1, p2 ∈ P2} and p1 × p2 : X1 × X2 → R, (p1 × p2)(x1, x2) := max{p1(x1), p2(x2)}; the topology τP is exactly τP1 × τP2.
(3) Dual pairs. Let X, Y be two linear spaces and let Φ : X × Y → R be a separating bilinear form, i.e., Φ(x, ·) and Φ(·, y) are linear on Y and X for all x ∈ X and y ∈ Y, respectively, ∀x ∈ X \ {0}, ∃y ∈ Y such that Φ(x, y) ≠ 0, and ∀y ∈ Y \ {0}, ∃x ∈ X such that Φ(x, y) ≠ 0. It is obvious that py : X → R, py(x) := |Φ(x, y)|, and px : Y → R, px(y) := |Φ(x, y)|, are seminorms for all y ∈ Y and x ∈ X. Taking PX := {py | y ∈ Y}, the space (X, PX) is a Hausdorff l.c.s.; similarly, taking PY := {px | x ∈ X}, (Y, PY) is a Hausdorff l.c.s. One says that (X, Y, Φ) is a dual pair; the topology τPX is denoted by σ(X, Y), while the topology τPY is denoted by σ(Y, X).
(4) Weak topologies. Let (X, τ) be a Hausdorff locally convex space and X∗ its topological dual. Then Φ : X × X∗ → R defined by Φ(x, x∗) := x∗(x) = ⟨x, x∗⟩ is a separating bilinear form (see the discussion about X∗ before the statement of this example). The topology w := σ(X, X∗) is called the weak topology of X, while the topology w∗ := σ(X∗, X) is called the weak∗ topology of X∗; the name weak topology for w is motivated by the fact that w ⊆ τ, i.e., w is weaker than τ (this means that there are fewer open sets w.r.t. w than w.r.t. τ). It is useful to mention that the topological dual of (X, w) is X∗, while the topological dual of (X∗, w∗) is X (in the sense that, for every ϕ ∈ (X∗, w∗)∗, there exists a unique xϕ ∈ X such that ϕ(x∗) = ⟨xϕ, x∗⟩ for all x∗ ∈ X∗; moreover, the mapping (X∗, w∗)∗ ∋ ϕ ↦ xϕ ∈ X is an isomorphism of linear spaces). Recall that two locally convex topologies τ1 and τ2 on the linear space X are called compatible if (X, τ1) and (X, τ2) have the same topological dual; the finest topology on the locally convex space (X, τ) that is compatible with τ is called the Mackey topology of X (such a topology always exists!). Moreover, by the Mackey theorem, the bounded and the weakly bounded subsets of a Hausdorff locally convex space coincide (see [349, (7), p. 254]).
(5) Reflexivity. Let (X, ‖·‖) be a normed space. Then X∗ is also a normed space, its norm being defined (always) by ‖x∗‖ := sup{⟨x, x∗⟩ | ‖x‖ ≤ 1}; (X∗, ‖·‖) is even a Banach space. As in (3), on X one also has the weak topology w, while on X∗ one has the topology w∗; one has w = τ‖·‖ if and only if X is finite-dimensional. Since (X∗, ‖·‖) is a normed space, on X∗ one also has the topology σ(X∗, X∗∗), where X∗∗ := (X∗)∗; note that w∗ ⊆ σ(X∗, X∗∗) ⊆ τ‖·‖. Taking x ∈ X, the mapping φx : (X∗, ‖·‖) → R, φx(x∗) := x∗(x), is linear and continuous; moreover, ‖φx‖ = ‖x‖. The mapping JX : (X, ‖·‖) → (X∗∗, ‖·‖) defined by JX(x) := φx is a linear operator having the property that ‖JX(x)‖ = ‖x‖ for every x ∈ X. One says that (X, ‖·‖) is reflexive if JX is onto. Because (X∗∗, ‖·‖) is a Banach space, every reflexive normed space is a Banach space. One gets that w∗ = σ(X∗, X∗∗) if and only if (X, ‖·‖) is a reflexive Banach space. Among the usual Banach


spaces mentioned in (1), those that are reflexive are ℓp and Lp(Ω) for p ∈ ]1, ∞[. If (X, τ) is a Hilbert space with the scalar product (· | ·) and x∗ is a linear continuous functional on X, then there is a unique element y ∈ X such that ⟨x, x∗⟩ = (x | y) for all x ∈ X and ‖y‖ = ‖x∗‖ (:= sup{|⟨x, x∗⟩| | x ∈ X, ‖x‖ = 1}) (see Schechter [498]). It follows that any Hilbert space is reflexive.
(6) Convex core topology. Let X be a nontrivial linear space and let P be the family of all seminorms p : X → R. The topology τP is called the convex core topology of X and is denoted by τc. Several algebraic properties of subsets of linear spaces without any topology, as well as several results established in this context, can be obtained from the corresponding topological ones using the convex core topology τc; see, e.g., [279, Ex. 2.10] and [329, Sect. 6.3].

2.2.3 Asplund spaces

Definition 2.2.3. A Banach space X is an Asplund space if every continuous convex function f on an open convex subset S ⊆ X is Fréchet differentiable at the points of a dense Gδ-subset M ⊆ S.

One can replace “dense Gδ-subset” by “dense subset”, since a dense set in S where f is Fréchet differentiable is automatically a Gδ-set ([475, Prop. 1.25] or [413, p. 196]). A topological space (and so a Banach space) is separable if it contains a countable dense subset (a Hilbert space is separable if and only if it admits a countable orthonormal base). ℓ∞ and L∞ are not separable Banach spaces. An infinite-dimensional separable Banach space is homeomorphic to the Hilbert space ℓ2 (Kadec (1967); compare [221] and [64]). Banach spaces with separable duals are important for a characterization of Asplund spaces: a Banach space is an Asplund space if and only if every separable subspace has a separable dual space ([475, p. 23]). This implies that every reflexive Banach space is an Asplund space.
From the point of view of optimization in general spaces and of variational analysis, Asplund spaces are of particular importance among the special classes of Banach spaces (see, for instance, the extremal principles below). This is mainly due to the fact that Asplund spaces can be characterized
• by (generic) Fréchet differentiability of convex functions,
• by extremal principles,
• by using separability.
Asplund spaces indeed have, in a sense, a good behavior, because Corollary 2.35 in [475] delivers: if a Banach space X is not an Asplund space, then there exists an equivalent norm on X which is nowhere Fréchet differentiable. For a characterization of Banach spaces without the Asplund property, compare [413,


Prop. 2.18]. In an early paper of Ekeland and Lebourg ([182]), it is shown that if a Banach space X has an equivalent norm that is Fréchet differentiable away from the origin, then X is an Asplund space. This sufficient condition provides a bridge to the mentioned characterization of Asplund spaces by extremal principles since, for instance, Kruger ([354]), dealing with a version of the extremal principle, speaks of “smooth” Banach spaces and uses for their definition exactly the Fréchet differentiability away from zero. By the way ([413, p. 196]), Asplund spaces may even fail to have an equivalent norm which is Gâteaux differentiable off the origin.
Examples of Asplund spaces (compare, for instance, [413, p. 196]): all reflexive Banach spaces (in particular, all finite-dimensional Banach spaces) are Asplund spaces ([475, Cor. 2.15]); therefore, ℓp and Lp[0, 1] for 1 < p < +∞ are Asplund spaces, but ℓ1 and ℓ∞ are not. The space c0 of all real sequences converging to zero (with the supremum norm) is an Asplund space. More generally, for any set Γ, every separable subspace of c0(Γ) has a separable dual, so c0(Γ) is an Asplund space ([475, Th. 2.12 and Exer. 2.16]).

2.2.4 Special results in the theory of locally convex spaces

An important result in the theory of locally convex spaces is the Alaoglu–Bourbaki theorem: if U is a neighborhood of the origin 0 of the H.l.c.s. (X, τ), then the polar set U⁰ of U is w∗-compact (i.e., every net (x∗i)i∈I ⊆ U⁰ contains a subnet (x∗_{ψ(k)})k∈K converging to some x∗ ∈ U⁰), where the polar set of ∅ ≠ A ⊆ X is A⁰ := {x∗ ∈ X∗ | ⟨x, x∗⟩ ≥ −1 ∀ x ∈ A}; A⁰ is, obviously, w∗-closed, convex, and contains 0 ∈ X∗. If (X, ‖·‖) is a normed space and UX := {x ∈ X | ‖x‖ ≤ 1} is the closed unit ball of X, then (UX)⁰ = UX∗. Therefore, UX∗ is w∗-compact. It follows that the normed space (X, ‖·‖) is reflexive if and only if UX is w-compact. Another famous characterization of reflexive Banach spaces is due to R.C. James: the Banach space (X, ‖·‖) is reflexive if and only if any x∗ ∈ X∗ attains its supremum on UX.
The dual cone and the strictly positive dual cone are defined analogously to polar sets.

Definition 2.2.4. Let C be a cone in the topological vector space X. Then the (positive) dual cone of C is
C⁺ := {x∗ ∈ X∗ | ⟨x, x∗⟩ ≥ 0 ∀ x ∈ C},
while the strictly positive dual cone of C is
C# := {x∗ ∈ C⁺ | ⟨x, x∗⟩ > 0 ∀ x ∈ C \ {0}}.

Obviously, C⁺ is a w∗-closed convex cone as the intersection of a family of closed half-spaces; moreover, C⁺ = (conv C)⁺ = (cl C)⁺. Observe that (cl C)# ⊆ C# = (conv C)#, and the inclusion can be strict; indeed, for x∗ ∈ X∗ \ {0} and C := {0} ∪ {x ∈ X | ⟨x, x∗⟩ > 0}, we have that C# = R> · x∗, while (cl C)# = ∅.
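For instance, for C := R²_+ in X := R² (so that X∗ = R², with ⟨x, x∗⟩ the usual scalar product), one gets C⁺ = R²_+ and C# = {x∗ ∈ R² | x∗1 > 0, x∗2 > 0} = int C⁺ ≠ ∅.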


As mentioned above, a fundamental tool in functional analysis is the (algebraic) Hahn–Banach theorem.

Theorem 2.2.5. Let X be a linear space, X₀ ⊆ X a linear subspace, p : X → R a sublinear functional, and ϕ₀ : X₀ → R a linear functional such that ϕ₀(x) ≤ p(x) for every x ∈ X₀. Then there exists a linear functional ϕ : X → R such that ϕ|X₀ = ϕ₀ and ϕ(x) ≤ p(x) for every x ∈ X.

The proof of this result can be found in any book on functional analysis, so it is omitted; just note that the proof uses the Zorn axiom mentioned in the first section. If (X, ‖·‖) is a normed space, X₀ is a linear subspace of X, and ϕ₀ ∈ (X₀, ‖·‖)*, then p := ‖ϕ₀‖·‖·‖ is sublinear and ϕ₀(x) ≤ p(x) for every x ∈ X₀. By the Hahn–Banach theorem, there exists a linear functional ϕ : X → R such that ϕ|X₀ = ϕ₀ and ϕ(x) ≤ p(x) for every x ∈ X. It follows that ϕ is continuous, and so ϕ ∈ X* and ‖ϕ‖ = ‖ϕ₀‖. If (X, P) is an l.c.s., a similar procedure is possible.

Theorem 2.2.6. Let (X, P) be a locally convex space, X₀ ⊆ X a linear subspace, and ϕ₀ : X₀ → R a continuous linear functional. Then there exists ϕ ∈ X* such that ϕ|X₀ = ϕ₀.

Proof. Because ϕ₀ is a continuous linear functional, there exist M > 0 and p₁, ..., p_k ∈ P such that ϕ₀(x) ≤ M · max{p₁(x), ..., p_k(x)} for every x ∈ X₀. Since p := M · max{p₁, ..., p_k} is a seminorm, there exists a linear functional ϕ : X → R such that ϕ|X₀ = ϕ₀ and ϕ(x) ≤ p(x) for every x ∈ X. The last inequality shows that ϕ is continuous, and so ϕ ∈ X*. □

Another important application of the Hahn–Banach theorem is to the separation of convex sets. The following result is met in the literature as the geometric form of the Hahn–Banach theorem.

Theorem 2.2.7. Let (X, τ) be a t.v.s., A ⊆ X a convex set with nonempty interior, and x₀ ∈ X \ int A. Then there exists x* ∈ X* \ {0} such that

⟨x, x*⟩ ≤ ⟨x₀, x*⟩  ∀x ∈ A,  (2.26)

the inequality being strict for x ∈ int A.

Proof. Replacing, if necessary, A by A − a with a ∈ int A, we may (and do) assume that 0 ∈ int A. Thus A is an absorbing convex set, and so, by Proposition 2.1.28, the Minkowski functional p_A is sublinear and int A = [p_A < 1] (⊆ A ⊆ [p_A ≤ 1] = cl A), whence p_A(x₀) ≥ 1 because x₀ ∉ int A. Let X₀ := lin{x₀} = Rx₀ and let ϕ₀ : X₀ → R be defined by ϕ₀(tx₀) := t·p_A(x₀) for every t ∈ R. Since ϕ₀(−x₀) = −p_A(x₀) ≤ p_A(−x₀), we obtain that ϕ₀(x) ≤ p_A(x) for every x ∈ X₀. Using the Hahn–Banach theorem, there exists a linear functional ϕ : X → R such that ϕ|X₀ = ϕ₀ and ϕ(x) ≤ p_A(x) for every x ∈ X. Consequently,

ϕ(x) ≤ p_A(x) ≤ 1 ≤ p_A(x₀) = ϕ₀(x₀) = ϕ(x₀)  ∀x ∈ A;  (2.27)

moreover, ϕ(x) ≤ p_A(x) < 1 for x ∈ int A. Since A ⊆ [ϕ ≤ 1], ϕ is continuous by Proposition 2.1.26 (ii). Denoting ϕ by x*, the conclusions hold. □

The next result provides a sufficient condition for the strict separation of a point and a convex set; recall that the set {x ∈ X | ⟨x, x*⟩ ≤ α} with x* ∈ X* \ {0} and α ∈ R is called a closed half-space of X; replacing ≤ by < we get an open half-space. Clearly, any closed (open) half-space is w-closed (w-open).

Theorem 2.2.8. Let (X, τ) be a t.v.s., A ⊆ X a closed convex set with nonempty interior, and x₀ ∈ X \ A. Then there exist x* ∈ X* \ {0} and α ∈ R such that

⟨x, x*⟩ ≤ α < ⟨x₀, x*⟩  ∀x ∈ A.  (2.28)

Consequently, A is the intersection of closed half-spaces containing it; in particular, A is w-closed.

Proof. The proof of the existence of x* ∈ X* \ {0} and α ∈ R such that (2.28) holds is practically the same as that of the preceding theorem. The difference consists in the fact that this time A = cl A = [p_A ≤ 1], and so p_A(x₀) > 1 (in (2.27), too). □

Another separation theorem is given in the next result.

Theorem 2.2.9. Let (X, τ) be a t.v.s., and let A, B ⊆ X be nonempty convex sets such that int B ≠ ∅ and A ∩ int B = ∅. Then there exist x* ∈ X* \ {0} and α ∈ R such that

⟨x, x*⟩ ≤ α ≤ ⟨y, x*⟩  ∀x ∈ A, y ∈ B,  (2.29)

the second inequality being strict for y ∈ int B.

Proof. The set C := A − B is nonempty and convex. We claim that int C = A − int B. Because the inclusion ⊇ is obvious, let us prove the reverse inclusion. Fix a₀ ∈ A, b₀ ∈ int B, and take x ∈ int C. Setting x₀ := a₀ − b₀ ∈ C, there exists λ > 0 such that x′ := (1 + λ)x − λx₀ ∈ C. Therefore, there exist a′ ∈ A, b′ ∈ B such that x′ = a′ − b′, and so x = μx₀ + (1 − μ)x′ = [μa₀ + (1 − μ)a′] − [μb₀ + (1 − μ)b′] ∈ A − int B, where μ := λ/(1 + λ) ∈ ]0, 1[, because μb₀ + (1 − μ)b′ ∈ int B by Proposition 2.1.27 (ii). Hence, our claim is true. Because 0 ∉ int C (≠ ∅), using Theorem 2.2.7 we get x* ∈ X* \ {0} such that ⟨x − y, x*⟩ ≤ 0 for all x ∈ A and y ∈ B, and so (2.29) holds with α := sup{⟨x, x*⟩ | x ∈ A}. Assume that α = ⟨y₀, x*⟩ for some y₀ ∈ int B; then ⟨v, x*⟩ ≥ 0 for v := y − y₀ ∈ int B − y₀ = int(B − y₀) with y ∈ int B. Because B − y₀ is a neighborhood of 0, one gets the contradiction x* = 0. Hence, α < ⟨y, x*⟩ for y ∈ int B. □

Another frequently used separation theorem is the following one.


Theorem 2.2.10. Let (X, τ) be an l.c.s., A ⊆ X a nonempty closed convex set, and x₀ ∈ X \ A. Then there exist x* ∈ X* and α ∈ R such that (2.28) holds. Consequently, A is the intersection of closed half-spaces containing it; in particular, A is w-closed.

Proof. Since x₀ ∉ A = cl A, there exists an open convex neighborhood U of 0 such that (x₀ + U) ∩ A = ∅. Using Theorem 2.2.9, there exists x* ∈ X* \ {0} such that

α := sup{⟨x, x*⟩ | x ∈ A} ≤ inf{⟨x₀ + u, x*⟩ | u ∈ U} < ⟨x₀, x*⟩,

the second inequality being true because inf{⟨u, x*⟩ | u ∈ U} < 0 (otherwise x* = 0, since U is absorbing), and so (2.28) holds. □

An important consequence of the preceding result is that cl A = cl_w A for every convex subset A of the l.c.s. (X, τ). In the next result we mention several properties of the polar operator. For ∅ ≠ E ⊆ X* we set E⁰ := {x ∈ X | ⟨x, x*⟩ ≥ −1 ∀x* ∈ E}; hence, E⁰ is a closed convex set containing 0, and may be viewed as the polar of E as a subset of (X*, w*) when (X, τ) is an H.l.c.s.

Lemma 2.2.11. Let (X, τ) be an l.c.s. and ∅ ≠ A, B, C_i ⊆ X (i ∈ I ≠ ∅). Then the following assertions hold:
(a) A ⊆ B ⇒ B⁰ ⊆ A⁰;
(b) (λA)⁰ = λ⁻¹A⁰ for every λ ∈ R \ {0};
(c) (∪_{i∈I} C_i)⁰ = ∩_{i∈I} (C_i)⁰;
(d) A ⊆ B ⊆ conv(A ∪ {0}) ⇒ A⁰ = B⁰;
(e) A ⊆ A⁰⁰ := (A⁰)⁰;
(f) A⁰⁰ = A if and only if A is convex, closed, and 0 ∈ A;
(g) A⁰⁰ = cl conv(A ∪ {0});
(h) if B is a cone, then (A + B)⁰ = A⁰ ∩ B⁺, B⁰ = B⁺, and B⁺⁺ := (B⁺)⁺ = cl conv B.

Proof. Recall that cl conv E is the closed convex hull of the subset E of the topological vector space Y. The assertions (a)–(e) follow easily using the definition of the polar of a set, while (g) is an immediate consequence of (d) and (f).
(f) The implication ⇒ is obvious. Assume that A is convex, closed, and 0 ∈ A. Take x₀ ∈ X \ A. Using Theorem 2.2.10, there exist x* ∈ X* and α ∈ R such that ⟨x₀, x*⟩ < −α < ⟨x, x*⟩ for all x ∈ A. Because 0 ∈ A, we have that α > 0. It follows that u* := α⁻¹x* ∈ A⁰. Because ⟨x₀, u*⟩ < −1, x₀ ∉ (A⁰)⁰ = A⁰⁰. This proves that A⁰⁰ ⊆ A, and so A⁰⁰ = A by (e).
(h) Taking A := {0} in the first equality, one gets B⁰ = B⁺ because {0}⁰ = X*. Because B⁺ is a cone, from the previous equality and (f), one obtains B⁺⁺ = (B⁺)⁰ = B⁰⁰ = cl conv B. Clearly,

x* ∈ (A + B)⁰ ⟺ ⟨x + ty, x*⟩ ≥ −1 ∀x ∈ A, ∀y ∈ B, ∀t ∈ R₊
⟺ [⟨x, x*⟩ ≥ −1 ∀x ∈ A] ∧ [⟨y, x*⟩ ≥ 0 ∀y ∈ B] ⟺ x* ∈ A⁰ ∩ B⁺,

and so the first equality also holds. □
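The following elementary computation, included here only as an illustration of assertions (f) and (h) of Lemma 2.2.11, can be carried out in X = R²: for the closed convex cone B := R²₊ (which contains 0),

\[
B^{0}=\{x^{*}\mid \langle x,x^{*}\rangle\ge -1\ \forall x\in B\}
=\{x^{*}\mid \langle x,x^{*}\rangle\ge 0\ \forall x\in B\}=B^{+}=\mathbb{R}^2_{+},
\qquad B^{00}=\mathbb{R}^2_{+}=B .
\]

The middle equality is the cone effect: if ⟨x̄, x*⟩ = −δ < 0 for some x̄ ∈ B, then ⟨tx̄, x*⟩ = −tδ < −1 for t large, so the constraint "≥ −1" automatically upgrades to "≥ 0" on cones, which is exactly the equality B⁰ = B⁺ in (h).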

Proposition 2.2.12. Let (X, τ) be an H.l.c.s. and A, B ⊆ X be convex sets containing 0 ∈ X. Let X* be endowed with the weak* topology.
(i) Assume that A and B are closed. Then

(A ∩ B)⁰ = cl conv(A⁰ ∪ B⁰) = cl ⋃_{λ∈]0,1[} (λA⁰ + (1 − λ)B⁰);  (2.30)

moreover, if B is also a cone, then (A ∩ B)⁰ = cl(A⁰ + B⁺), while if A and B are also cones, then (A ∩ B)⁺ = cl(A⁺ + B⁺).
(ii) Assume that A and B are neighborhoods of 0. Then

(A ∩ B)⁰ = conv(A⁰ ∪ B⁰) ⊆ A⁰ + B⁰ ⊆ 2(A ∩ B)⁰.  (2.31)

Proof. (i) Applying successively assertions (c) and (g) of Lemma 2.2.11 on (X*, w*) for A⁰ and B⁰, we get (A⁰ ∪ B⁰)⁰ = A⁰⁰ ∩ B⁰⁰ = A ∩ B, and so (A ∩ B)⁰ = (A⁰ ∪ B⁰)⁰⁰ = cl conv(A⁰ ∪ B⁰ ∪ {0}) = cl conv(A⁰ ∪ B⁰); that is, the first equality of (2.30) holds. For the second one, observe that conv(A⁰ ∪ B⁰) = A⁰ ∪ B⁰ ∪ E, and so cl conv(A⁰ ∪ B⁰) = A⁰ ∪ B⁰ ∪ cl E, where E := ⋃_{λ∈]0,1[} (λA⁰ + (1 − λ)B⁰). Since 0 ∈ A⁰ ∩ B⁰, one has ]0, 1[·A⁰ ⊆ E, and so A⁰ ⊆ cl E and, similarly, B⁰ ⊆ cl E. Therefore, cl conv(A⁰ ∪ B⁰) = cl E; that is, the second equality in (2.30) holds. Assume, moreover, that B is also a cone; then B⁰ = B⁺ by Lemma 2.2.11 (h). Taking into account that λB⁺ = B⁺ for λ ∈ R_>, it follows that E = ]0, 1[·A⁰ + B⁺, whence cl E = cl(A⁰ + B⁺). If A is also a cone, then (A ∩ B)⁰ = (A ∩ B)⁺ and E = A⁺ + B⁺, and so (A ∩ B)⁺ = cl(A⁺ + B⁺).
(ii) We have that cl(A ∩ B) = cl A ∩ cl B. Indeed, the inclusion ⊆ is obvious. Take x ∈ cl A ∩ cl B; because 0 ∈ int A ∩ int B and A, B are convex, using Proposition 2.1.27 (ii) we get {tx | 0 ≤ t < 1} ⊆ int A ∩ int B (⊆ A ∩ B), and so x ∈ cl(A ∩ B). Since (cl E′)⁰ = (E′)⁰ for E′ ⊆ X, applying (2.30) for cl A and cl B we get (A ∩ B)⁰ = cl conv(A⁰ ∪ B⁰). Since A, B ∈ N_X, A⁰ and B⁰ are w*-compact by the Alaoglu–Bourbaki theorem. Using this fact one gets rapidly (using nets) that conv(A⁰ ∪ B⁰) is closed (even compact), and so the first equality in (2.31) holds. Moreover, because 0 ∈ A⁰ ∩ B⁰ we have that A⁰ ∪ B⁰ ⊆ A⁰ + B⁰, whence conv(A⁰ ∪ B⁰) ⊆ A⁰ + B⁰. Taking x* ∈ A⁰ and y* ∈ B⁰, we have that x* + y* = 2(½x* + ½y*) ∈ 2 conv(A⁰ ∪ B⁰) = 2(A ∩ B)⁰, and so A⁰ + B⁰ ⊆ 2(A ∩ B)⁰. □

The next theorem and its corollary are established by Jameson in [315, Th. 1.7.1] and [315, Cor. 3.1.11], respectively.


Theorem 2.2.13. Let (X, τ) be an l.c.s., C ⊆ X be a proper convex cone, and U ∈ N_X be convex. Suppose that x* ∈ X* and α ∈ R₊ are such that ⟨u, x*⟩ ≤ α for all (u, x) ∈ X × X with x ∈ C ∩ U and −x ≤_C u ≤_C x. Then there exists u* ∈ X* such that ⟨u, u*⟩ ≤ α for all u ∈ U and |⟨x, x*⟩| ≤ ⟨x, u*⟩ for all x ∈ C.

Proof. If x*|_C = 0, then u* := 0 ∈ X* is the desired element. Assume that x*|_C ≠ 0; hence, α > 0. It is clear that

[x, x′]_C + [y, y′]_C ⊆ [x + y, x′ + y′]_C  ∀x, y, x′, y′ ∈ X.  (2.32)

Because U is absorbing,

|⟨x, x*⟩| ≤ q(x) := sup{⟨u, x*⟩ | u ∈ [−x, x]_C} < ∞  ∀x ∈ C,  (2.33)

q(x) ≤ α for all x ∈ C ∩ U, and q(x₀) > 0 for some x₀ ∈ C. It is clear that q(λx) = λq(x) for λ ∈ R₊ and x ∈ C; moreover, using (2.32), we obtain that q(x + x′) ≥ q(x) + q(x′) for x, x′ ∈ C. It follows that the set A := {x ∈ C | q(x) > α} is nonempty and convex, and A ∩ U = ∅. Using Theorem 2.2.9, there exists u* ∈ X* \ {0} such that

γ := sup{⟨u, u*⟩ | u ∈ U} ≤ inf{⟨x, u*⟩ | x ∈ A}.  (2.34)

Because U is a neighborhood of 0 and u* ≠ 0, we have γ > 0. Replacing u* by αγ⁻¹u* if necessary, we may (and do) assume that γ = α. Let x ∈ C be such that q(x) > 0. Then βq(x) = q(βx) > α, and so βx ∈ A, for β > α/q(x). Using (2.34) we obtain that β⟨x, u*⟩ ≥ α for β > α/q(x), and so ⟨x, u*⟩ ≥ q(x). If x ∈ C is such that q(x) = 0, then x₀ + λx ∈ C and q(x₀ + λx) ≥ q(x₀) + λq(x) > 0 for λ > 0, and so ⟨x₀, u*⟩ + λ⟨x, u*⟩ ≥ 0 for λ > 0. For λ → ∞ we get ⟨x, u*⟩ ≥ 0 = q(x). Therefore, ⟨x, u*⟩ ≥ q(x) for all x ∈ C. The conclusion follows using now the first inequality in (2.33). □

Corollary 2.2.14. Let (X, τ) be an l.c.s., C ⊆ X be a proper convex cone, and x* ∈ X*. Then x* ∈ C⁺ − C⁺ if and only if there exist U ∈ N_X and α ∈ R such that ⟨x, x*⟩ ≤ α for all x ∈ C ∩ (U − C).

Proof. Assume that x* ∈ C⁺ − C⁺. Take x₁*, x₂* ∈ C⁺ such that x* = x₁* − x₂*, and set U := {x ∈ X | ⟨x, x₁*⟩ ≤ 1}; then U ∈ N_X. For x ∈ C ∩ (U − C) = C ∩ U we have that ⟨x, x*⟩ = ⟨x, x₁*⟩ − ⟨x, x₂*⟩ ≤ 1 − 0 = 1.
Assume now that ⟨x, x*⟩ ≤ α for all x ∈ C ∩ (U − C) (⊇ C ∩ U), where U ∈ N_X; we may (and do) assume that U is convex. Take V := {x ∈ U | |⟨x, x*⟩| ≤ α}. Consider x ∈ C ∩ V and take u ∈ [−x, x]_C. Then 0 ≤_C ½(x + u) ≤_C x, and so ½(x + u) ∈ C ∩ (V − C), whence ⟨x + u, x*⟩ ≤ 2α. It follows that ⟨u, x*⟩ ≤ 3α. Using Theorem 2.2.13 we get u* ∈ X* such that |⟨x, x*⟩| ≤ ⟨x, u*⟩ for all x ∈ C. Setting x₁* := ½(u* + x*) and x₂* := ½(u* − x*), we have that x* = x₁* − x₂* and x₁*, x₂* ∈ C⁺. □

Assertion (iii) of Proposition 2.1.28 turns out to be useful in providing new characterizations of normal cones in locally convex spaces.


Theorem 2.2.15. Let (X, τ) be an H.l.c.s. and C ⊆ X be a nontrivial convex cone. Then the following assertions are equivalent:
(i) C is normal;
(ii) for every U ∈ N_X, there exists V ∈ N_X such that U⁰ ⊆ C⁺ ∩ V⁰ − C⁺ ∩ V⁰;
(iii) there exists a family P of C-monotone seminorms such that τ = τ_P.

Proof. (i) ⇔ (ii) Observe that, in the definition of normality of C and in assertion (ii), we may work with the neighborhood base N_X^c of 0 ∈ X. Take V ∈ N_X^c and set A := V + C and B := V − C = −(V + C). Easy computations give A⁰ = V⁰ ∩ C⁺ and B⁰ = −(V⁰ ∩ C⁺). Using Proposition 2.2.12 (ii) for these A and B we get

([V]_C)⁰ ⊆ V⁰ ∩ C⁺ − V⁰ ∩ C⁺ ⊆ 2([V]_C)⁰ = ([½V]_C)⁰.  (2.35)

Assume that C is normal and consider U ∈ N_X^c; then there exists V ∈ N_X^c such that [V]_C ⊆ U. Using (2.35), we have that U⁰ ⊆ ([V]_C)⁰ ⊆ V⁰ ∩ C⁺ − V⁰ ∩ C⁺, and so (ii) holds.
Conversely, assume that (ii) holds and consider U ∈ N_X^c. Then there exists V ∈ N_X^c such that U⁰ ⊆ V⁰ ∩ C⁺ − V⁰ ∩ C⁺. Using again (2.35) we get U⁰ ⊆ ([½V]_C)⁰, whence [½V]_C ⊆ ([½V]_C)⁰⁰ ⊆ U⁰⁰ = U. Therefore, C is normal because ½V ∈ N_X^c.
(i) ⇔ (iii) Consider

B := {U ∈ N_X | U = −U = conv U = int U},  B′ := {[U]_C | U ∈ B};

clearly, B is a neighborhood base of 0 ∈ X. By the equivalence (i) ⇔ (i′) of Theorem 2.1.31, C is normal iff B′ is a neighborhood base of 0; clearly, B′ ⊆ B. By (the proof of) Theorem 2.2.1, having a neighborhood base V ⊆ B of 0, one has τ = τ_P, where P := {p_V | V ∈ V}. Consequently, C is normal ⇔ B′ is a neighborhood base of 0 ⇔ τ = τ_Q, where Q := {p_V | V ∈ B′}. Having in view that (p_V =) p_{[U]_C} is a C-monotone and continuous seminorm for every U ∈ B by Proposition 2.1.28 and Corollary 2.1.29, the equivalence (i) ⇔ (iii) is true. □

The equivalence of (i) and (ii) of the preceding theorem can be found in Jameson [315, Th. 3.4.7].

Corollary 2.2.16. Let (X, τ) be an H.l.c.s. and C ⊆ X be a proper convex cone.
(i) If C is normal, then C⁺ is reproducing; that is, C⁺ − C⁺ = X*.
(ii) C is w-normal if and only if C⁺ is reproducing.
(iii) Moreover, assume that τ is first-countable. Then C is normal if and only if C is w-normal.


Proof. (i) Take x* ∈ X* and consider U := {x ∈ X | |⟨x, x*⟩| ≤ 1}; then U ∈ N_X. Because C is normal, by the equivalence (i) ⇔ (ii) of Theorem 2.2.15, there exists V ∈ N_X such that U⁰ ⊆ V⁰ ∩ C⁺ − V⁰ ∩ C⁺. Then x* ∈ C⁺ − C⁺ because clearly x* ∈ U⁰.
(ii) Applying (i) for the weak topology we get the direct implication. Suppose now that C⁺ is reproducing and that the nets (x_i)_{i∈I}, (y_i)_{i∈I} ⊆ X are such that 0 ≤_C x_i ≤_C y_i for all i ∈ I and (y_i)_{i∈I} →_w 0. Let x* ∈ X*. Since C⁺ is reproducing, x* = x₁* − x₂* with x₁*, x₂* ∈ C⁺. It follows that 0 ≤ ⟨x_i, x_k*⟩ ≤ ⟨y_i, x_k*⟩ for every i ∈ I and k ∈ {1, 2}, and so (⟨x_i, x_k*⟩) → 0 for k ∈ {1, 2}, and so (⟨x_i, x*⟩) → 0. Therefore, (x_i)_{i∈I} →_w 0. By Theorem 2.1.31, we obtain that C is w-normal.
(iii) If C is normal, then C⁺ − C⁺ = X* by (i), and so C is w-normal by (ii). Assume that C is w-normal and consider a τ-bounded set B ⊆ X. Then B is w-bounded, and so [B]_C is w-bounded by the implication (i) ⇒ (v) of Theorem 2.1.31. Then [B]_C is τ-bounded by the Mackey theorem (see Example 2.2.2 (4)). Because B ⊆ X is an arbitrary τ-bounded set and τ is first-countable, we obtain that C is normal using the implication (v) ⇒ (i) of Theorem 2.1.31. □

Assertions (i) and (ii) of Corollary 2.2.16 can be found in Peressini [470, Prop. 2.1.21], while assertion (iii) is established by Jameson in [315, Th. 3.4.8]. The next result is [315, Cor. 3.2.10].

Proposition 2.2.17. Let (X, τ) be an H.l.c.s., C ⊆ X be a normal cone, and (x_i)_{i∈I} ⊆ C be a C-decreasing net. Then x_i → 0 ⇔ x_i →_w 0.

Proof. The implication ⇒ being obvious, assume that x_i →_w 0. Consider A := conv{x_i | i ∈ I} and take V ∈ N_X. Because 0 ∈ cl_w A = cl_τ A, there exists u ∈ V ∩ A. Hence, u = ∑_{k=1}^n λ_k x_{i_k} for some i₁, ..., i_n ∈ I and λ₁, ..., λ_n ∈ R₊ with ∑_{k=1}^n λ_k = 1. Because (I, ≼) is directed, there exists i₀ ∈ I such that i₀ ≽ i_k (hence, x_{i₀} ≤_C x_{i_k}) for k ∈ {1, ..., n}, and so x_{i₀} ≤_C u. Therefore, 0 ≤_C x_i ≤_C u for i ≽ i₀, with 0, u ∈ V, whence x_i ∈ [V]_C for i ≽ i₀. The cone C being normal, ([V]_C)_{V∈N_X} is a τ-neighborhood base of 0 ∈ X, and so x_i →_τ 0. □

Using Proposition 2.2.17 and Corollary 2.1.49 one gets the following result, which is related to [7, Th. 3.8].

Corollary 2.2.18. Assume that X is an H.l.c.s. and C ⊆ X is a normal closed convex cone such that every C-order interval is weakly compact. Then C is a Daniell cone. Consequently, if A ⊆ X is a nonempty upward directed and C-upper bounded set, then sup_C A exists in X and sup_C A ∈ cl A.

Proof. Notice first that C is w-normal by Corollary 2.2.16 (i)&(ii). Consider a C-increasing and C-upper bounded net (x_i)_{i∈I} ⊂ X and set E := {x_i | i ∈ I}.


Using Corollary 2.1.49 for the weak topology, one obtains that x̄ := sup_C E exists in X and x_i →_w x̄. Setting y_i := x̄ − x_i for i ∈ I, (y_i)_{i∈I} (⊂ C) is a C-decreasing net weakly converging to 0. Applying Proposition 2.2.17, one gets y_i → 0, and so x_i → x̄. Hence, C is a Daniell cone. The last assertion follows now from Proposition 2.1.48 (i). □

In the case of normed vector spaces, some further characterizations of normal cones are known.

Theorem 2.2.19. Let (X, ‖·‖) be a normed vector space and C ⊆ X a convex cone. The following statements are equivalent:
(i) C is normal;
(ii) [U_X]_C is bounded, where U_X := {x ∈ X | ‖x‖ ≤ 1};
(iii) there exists an equivalent monotone (w.r.t. C) norm ‖·‖₀;
(iv) there exists α > 0 such that ‖y‖ ≥ α‖x‖ whenever x, y ∈ X with 0 ≤_C x ≤_C y;
(v) there exists β > 0 such that ‖x + y‖ ≥ β whenever x, y ∈ C with ‖x‖ = ‖y‖ = 1;
(vi) C is w-normal.

Proof. (i) ⇒ (ii) Let V := U_X ∈ N_‖·‖(0). Using Theorem 2.1.31, there exists r > 0 such that [rU_X]_C = r[U_X]_C ⊆ U_X. Therefore, [U_X]_C is bounded.
(ii) ⇒ (iii) There exists r > 0 such that rU_X ⊆ [rU_X]_C =: U ⊆ U_X. It follows that ‖·‖ = p_{U_X} ≤ p_U ≤ p_{rU_X} = r⁻¹p_{U_X} = r⁻¹‖·‖. Since U is a convex, symmetric, and full neighborhood, from the above inequalities and Proposition 2.1.28 we get that ‖·‖₀ := p_U is a monotone norm, equivalent to ‖·‖.
(iii) ⇒ (iv) There exist r₁, r₂ > 0 such that r₁‖x‖ ≤ ‖x‖₀ ≤ r₂‖x‖ for all x ∈ X. Let 0 ≤_C x ≤_C y. Then r₁‖x‖ ≤ ‖x‖₀ ≤ ‖y‖₀ ≤ r₂‖y‖, whence ‖y‖ ≥ α‖x‖, where α := r₁/r₂ > 0.
(iv) ⇒ (v) Let x, y ∈ C with ‖x‖ = ‖y‖ = 1. Then 0 ≤_C x ≤_C x + y, whence ‖x + y‖ ≥ α‖x‖ = α.
(v) ⇒ (i) Let the sequences (x_n), (y_n) ⊆ X be such that 0 ≤_C x_n ≤_C y_n for every n and (y_n) → 0. We claim that (x_n) → 0; if the claim is true, using the implication (iii′) ⇒ (i) from Theorem 2.1.31, the conclusion follows. Assume that our claim is not true. Passing to a subsequence if necessary, there exists α > 0 such that ‖x_n‖ ≥ α > ‖y_n‖ for every n; it follows that


0 ≤_C x′_n := ‖x_n‖⁻¹ x_n ≤_C ‖x_n‖⁻¹ y_n ≤_C α⁻¹ y_n =: y′_n  ∀n ∈ N.

Hence, 1 = ‖x′_n‖ > ‖y′_n‖ → 0. Setting u_n := ‖y′_n − x′_n‖⁻¹(y′_n − x′_n), we have that x′_n, u_n ∈ C, ‖x′_n‖ = ‖u_n‖ = 1, and 1 − ‖y′_n‖ ≤ ‖x′_n − y′_n‖ ≤ 1 + ‖y′_n‖ for every n, whence (‖x′_n − y′_n‖) → 1. From our hypothesis we have that ‖x′_n + u_n‖ ≥ β for every n. Since

x′_n + u_n = x′_n + (y′_n − x′_n)/‖y′_n − x′_n‖ = (1 − 1/‖y′_n − x′_n‖) x′_n + y′_n/‖y′_n − x′_n‖,  (2.36)

‖x′_n‖ = 1 for every n, (y′_n) → 0 and (‖x′_n − y′_n‖) → 1, we obtain the contradiction (x′_n + u_n) → 0. Therefore, our claim is true, and so (i) holds.
(i) ⇔ (vi) This equivalence is established in Corollary 2.2.16 (iii) for (X, τ) a first-countable H.l.c.s. □

To the characterizations of the normality of the cone C from the preceding theorem, we add some sufficient conditions already obtained in Theorem 2.1.38 for X a complete first-countable H.t.v.s.

Theorem 2.2.20. Let (X, ‖·‖) be a Banach space and C ⊆ X be a closed convex cone. Consider the following assertions:
(i) C is fully sequentially regular;
(ii) C is sequentially regular;
(iii) C is normal;
(iv) any C-order interval is bounded.

Then (i) ⇒ (ii) ⇒ (iii) ⇔ (iv). Moreover, if X is reflexive, then (iii) ⇒ (i).

Proof. The assertions (i), (ii), (iii), and (iv) are, respectively, the assertions (i), (ii), (iii), and (iv) from Theorem 2.1.38. Because X is a Banach space, the mentioned implications follow by applying Theorem 2.1.38 (b).
(iii) ⇒ (i) Assume that X is reflexive and C is normal; hence, C is w-normal (by Theorem 2.2.19). Consider a bounded C-increasing sequence (x_n)_{n≥1} ⊆ X. Because (x_n) is bounded and X is reflexive, (x_n) has a subsequence converging weakly to some x̄ ∈ X. Because C is w-closed and w-normal, using Proposition 2.1.41, we obtain that x_n →_w x̄ and x̄ = sup{x_n | n ≥ 1}. Then C ∋ x′_n := x̄ − x_n →_w 0 and (x′_n)_{n≥1} is C-decreasing, and so x′_n → 0 by Proposition 2.2.17. Therefore, x_n → x̄, and so (i) holds. □

The next example shows that the completeness of X is essential for the implication (iv) ⇒ (iii) in Theorem 2.2.20, while the reflexivity of X is essential for its implications (ii) ⇒ (i) and (iii) ⇒ (ii).

Example 2.2.21. The spaces c₀₀, c₀ and c mentioned below are endowed with their norms ‖·‖∞; see Example 2.2.2 for details.
(i) Let C := {(x_n)_{n≥1} ∈ c₀₀ | ∑_{k=1}^m x_k ≥ 0 ∀m ≥ 1}. Then C is a pointed closed convex cone such that all order intervals are bounded, but C is not normal.


(ii) Let C := {(x_n)_{n≥1} ∈ c₀ | x_n ≥ 0 ∀n ≥ 1}. Then C is a sequentially regular (hence, normal) closed convex cone which is not fully sequentially regular.
(iii) Let C := {(x_n)_{n≥1} ∈ c | x_n ≥ 0 ∀n ≥ 1}. Then C is a normal closed convex cone which is not sequentially regular.

Proof. (i) The fact that C is a convex cone is immediate. The closedness of C follows from the estimate |∑_{k=1}^m x_k − ∑_{k=1}^m y_k| ≤ m‖x − y‖∞ for all m ≥ 1 and x, y ∈ c₀₀; assuming that x, −x ∈ C, one gets ∑_{k=1}^m x_k = 0 for m ≥ 1, whence x_m = 0 for m ≥ 1.
Fix x := (x_n)_{n≥1} ∈ C; then there exists n₀ ∈ N* such that x_n = 0 for n > n₀, and so ∑_{k=1}^m x_k ≤ ∑_{k=1}^m |x_k| ≤ n₀‖x‖∞ =: γ for all m ≥ 1. Take y := (y_n)_{n≥1} ∈ [0, x]_C; hence, 0 ≤ ∑_{k=1}^m y_k ≤ ∑_{k=1}^m x_k for m ≥ 1. Setting ∑_{k=1}^0 a_k := 0, we get

−γ ≤ −∑_{k=1}^{m−1} x_k ≤ −∑_{k=1}^{m−1} y_k ≤ y_m ≤ ∑_{k=1}^{m−1} y_k + y_m ≤ ∑_{k=1}^m x_k ≤ γ

for all m ≥ 1, and so ‖y‖∞ ≤ γ. Hence, [u, v]_C is bounded for all u, v ∈ X because [u, v]_C = u + [0, v − u]_C and [0, x]_C = ∅ for x ∈ X \ C. Set e_n := (0, ..., 0, 1, 0, ...), where 1 is on the nth place, and consider v_n := (1/n)(e₁ + · · · + e_n) and u_n := v_n − e_{n+1} for n ≥ 1; clearly, 0 ≤_C u_n ≤_C v_n, ‖u_n‖∞ = 1 and ‖v_n‖∞ = 1/n for n ≥ 1. Therefore, v_n → 0 while u_n ↛ 0, and so C is not normal by the equivalence (i) ⇔ (iii′) of Theorem 2.1.31.
(ii) The fact that C is a pointed closed convex cone is immediate, as well as the fact that the norm ‖·‖∞ is monotone w.r.t. C; hence, C is normal. Let us prove that C is sequentially regular. So, let (u_n)_{n≥1} ⊆ c₀ be C-increasing and C-upper bounded by u ∈ c₀. Then (v_n)_{n≥1} := (u − u_n)_{n≥1} ⊆ C is C-decreasing. Set v̄_k := inf{v_k^n | n ≥ 1} for k ≥ 1, where v_n := (v_k^n)_{k≥1} for n ≥ 1, and v̄ := (v̄_k)_{k≥1}; clearly, 0 ≤ lim_{m→∞} v_k^m = v̄_k ≤ v_k^n for any k, n ≥ 1, and so 0 ≤_C v̄ ≤_C v_n for n ≥ 1. Fix ε > 0; because v₁ ∈ c₀, there exists k_ε ≥ 1 such that v_k^1 = |v_k^1| ≤ ε for k ≥ k_ε. Moreover, because lim_{m→∞} v_k^m = v̄_k for 1 ≤ k ≤ k_ε, there exists m_ε ≥ 1 such that (0 ≤ v_k^m − v̄_k =) |v_k^m − v̄_k| ≤ ε for all m ≥ m_ε and 1 ≤ k ≤ k_ε. Because 0 ≤ v_k^m − v̄_k ≤ v_k^m ≤ v_k^1 ≤ ε for m ≥ 1 and k ≥ k_ε, one has that |v_k^m − v̄_k| ≤ ε for all k ≥ 1 and m ≥ m_ε, and so ‖v_m − v̄‖∞ ≤ ε for m ≥ m_ε. Hence, (v_n)_{n≥1} is convergent, and so (u_n) is convergent, too.
The sequence (∑_{k=1}^n e_k)_{n≥1} ⊆ c₀ is clearly C-increasing and bounded, but it is not convergent; hence, C is not fully sequentially regular.
(iii) As in (ii), C is a normal closed convex cone. Let x_n := ∑_{k=1}^n e_k ∈ c for n ≥ 1; (x_n)_{n≥1} is clearly C-increasing and sup_{n≥1} x_n = e := (1, 1, ···) ∈ c, but ‖x_n − e‖∞ = 1 ↛ 0. Hence, C is not sequentially regular. □

The examples in assertions (i)–(iii) of Example 2.2.21 are taken from [328, p. 86], [13, p. 90], and [471, Sect. 2], respectively. In finite-dimensional spaces the normal cones are recognized easily.


Corollary 2.2.22. Let (X, τ) be a finite-dimensional H.t.v.s. and C ⊆ X a proper convex cone. Then C is normal if and only if cl C is pointed.

Proof. As seen in Corollary 2.1.32, the necessity is valid without assuming that dim X < ∞. Suppose that cl C is pointed. It is well known that all the Hausdorff linear topologies on a finite-dimensional vector space coincide. Therefore, we may suppose that X is Rⁿ (n = dim X ≥ 1) endowed with its Euclidean norm ‖(x₁, ..., x_n)‖ := (x₁² + · · · + x_n²)^{1/2}. Since the mapping X × X ∋ (x, y) ⟼ x + y ∈ X = Rⁿ is continuous and S := {(x, y) ∈ cl C × cl C | ‖x‖ = ‖y‖ = 1} is compact, there exists (x̄, ȳ) ∈ S such that β := ‖x̄ + ȳ‖ ≤ ‖x + y‖ for all (x, y) ∈ S. Because cl C is pointed, β > 0. Therefore, condition (v) of Theorem 2.2.19 is verified. It follows that C is normal. □

Let X be an l.c.s. and C ⊆ X a nontrivial convex cone having a base B (see Definition 2.1.42). Because B is convex and 0 ∉ cl B, Theorem 2.2.10 gives an element x* ∈ X* such that ⟨b, x*⟩ ≥ 1 for every b ∈ cl B. Since every x ∈ C \ {0} has a representation x = λb with λ > 0 and b ∈ B, we get ⟨x, x*⟩ = λ⟨b, x*⟩ ≥ λ > 0, and so x* ∈ C^#; moreover, B′ := {x ∈ C | ⟨x, x*⟩ = 1} (⊆ ]0, 1]·B) is an algebraic base of C such that 0 ∉ cl B′ (and B′ is bounded if B is so). Conversely, if x* ∈ C^#, the set {x ∈ C | ⟨x, x*⟩ = 1} is a base of C (since x* is continuous). So we have proved the next result (see [153, Rem. 2.2]).

Proposition 2.2.23. Let X be an l.c.s. and let C ⊆ X be a proper convex cone. Then C is based if and only if C^# ≠ ∅.

A classical result referring to C^# is the Krein–Rutman theorem: If C is a nontrivial pointed closed convex cone of a separable normed space X, then C^# ≠ ∅. So each such cone has a base. Note that the separability of the normed vector space X is essential for the conclusion of the Krein–Rutman theorem, as the following example shows.

Example 2.2.24. Let the set Γ be uncountable, let X := ℓ²(Γ) be the Hilbert space of square summable real-valued functions x : Γ → R, and let C := {x ∈ X | x(γ) ≥ 0 ∀γ ∈ Γ}. It is clear that C is a reproducing pointed closed convex cone such that C⁺ = C. Let y ∈ C⁺; because (|y(γ)|²)_{γ∈Γ} is summable, the set Γ_y := {γ ∈ Γ | y(γ) ≠ 0} is at most countable. Because Γ is uncountable, there exists γ₀ ∈ Γ \ Γ_y. Then x₀ : Γ → R defined by x₀(γ₀) := 1 and x₀(γ) := 0 for γ ∈ Γ \ {γ₀} belongs to C \ {0} and ⟨x₀, y⟩ = ∑_{γ∈Γ} x₀(γ)·y(γ) = y(γ₀) = 0. Therefore, C^# = ∅.

Given a nonempty convex subset A of the H.l.c.s. (X, τ), the quasi-interior and the quasi-relative interior of A are, respectively, the sets

qi A := {a ∈ A | cl(R_>(A − a)) = X},
qri A := {a ∈ A | cl(R_>(A − a)) is a linear space};


these notions are introduced in [78, p. 2544] and [83, Def. 2.3], respectively. Note that qi A = qri A if cl(aff A) = X, and qi A = ∅ otherwise. Moreover, icr A ⊆ qri A (whence int A ⊆ cor A ⊆ qi A, with equality if int A ≠ ∅), and λA + (1 − λ) qri A ⊆ qri A for λ ∈ ]0, 1[; hence, qri A is convex, and cl A = cl(qri A) when qri A ≠ ∅. In the sequel, throughout this section, qi C⁺ is considered with respect to the topology w* on X* if not mentioned explicitly otherwise.

Proposition 2.2.25. Let X be an H.l.c.s., let σ be a linear topology on X*, and let C ⊆ X be a proper convex cone. The following assertions hold:
(i) One has

int_σ C⁺ ⊆ cor C⁺ ⊆ qi C⁺ = (cl C)^# ⊆ C^#.  (2.37)

(ii) Assume that C is closed; then C^# = qi C⁺. Moreover, C^# = int_σ C⁺ whenever σ is a locally convex topology on X*, compatible with the duality (X, X*), such that int_σ C⁺ ≠ ∅; in particular, if X is a reflexive Banach space and int_{‖·‖} C⁺ ≠ ∅, then C^# = int_{‖·‖} C⁺.

Proof. (i) The first two inclusions, as well as the last one, are (almost) obvious. Assume that there exists x* ∈ qi C⁺ \ (cl C)^# (⊆ C⁺). Then there exists x ∈ (cl C) \ {0} such that ⟨x, x*⟩ = 0, and so 0 ≤ ⟨x, u* − x*⟩ for every u* ∈ C⁺. It follows that ⟨x, v*⟩ ≥ 0 for every v* ∈ cl(R₊(C⁺ − x*)), and so cl(R₊(C⁺ − x*)) ≠ X*, contradicting the fact that x* ∈ qi C⁺. Therefore, qi C⁺ ⊆ (cl C)^#.
Assume now that there exists x* ∈ (cl C)^# \ qi C⁺ (⊆ (cl C)⁺ = C⁺). Then cl(R₊(C⁺ − x*)) ≠ X*; take x̃* ∈ X* \ cl(R₊(C⁺ − x*)). Then there exist x ∈ X and α ∈ R such that ⟨x, x̃*⟩ < α ≤ ⟨x, t(u* − x*)⟩ for all u* ∈ C⁺ and t ∈ R₊, whence x ≠ 0 and ⟨x, x*⟩ ≤ ⟨x, u*⟩ for all u* ∈ C⁺. It follows that ⟨x, x*⟩ ≤ 0 and x ∈ C⁺⁺ = cl C, whence the contradiction x* ∉ (cl C)^#. Hence, qi C⁺ = (cl C)^#.
(ii) Because C is closed, the equality in (2.37) becomes C^# = qi C⁺. Moreover, assume that int_σ C⁺ ≠ ∅ for the compatible locally convex linear topology σ on X*. From (i) we have that int_σ C⁺ ⊆ C^#. Let x* ∈ X* \ int_σ C⁺. Taking into account the fact that (X*, σ)* = X, using Theorem 2.2.7, we get x ∈ X \ {0} such that ⟨x, x*⟩ ≤ ⟨x, u*⟩ for all u* ∈ C⁺. Because C⁺ is a cone, we obtain that x ∈ (C⁺)⁺ = cl C = C and ⟨x, x*⟩ ≤ 0. This shows that x* ∉ C^#. Therefore, int_σ C⁺ = C^#. □

The equality C^# = qi C⁺ for C closed in assertion (ii) of Proposition 2.2.25 is established in [74, Prop. 2.1.1]. Note that the inclusion cor C⁺ ⊆ C^# might be strict in Proposition 2.2.25 (i) when C is closed and cor C⁺ ≠ ∅, even for X a normed vector space (when cor C⁺ = int_{‖·‖} C⁺), as the next example shows.

Example 2.2.26. (Borwein–Lewis, [83, Ex. 3.11 (ii)]) Let (X, ‖·‖) := (ℓ¹, ‖·‖₁) and C := (ℓ¹)₊. Then X* = (ℓ∞, ‖·‖∞) and C⁺ = (ℓ∞)₊; moreover,


cor C⁺ = int_{‖·‖∞} C⁺ = {(y_n)_{n≥1} ∈ ℓ∞ | inf_{n≥1} y_n > 0},
C^# = qi C⁺ = {(y_n)_{n≥1} ∈ ℓ∞ | y_n > 0 ∀n ≥ 1}.

Proposition 2.2.23 can be refined, as shown in Theorem 2.2.28 below. We provide first a useful auxiliary result.

Lemma 2.2.27. Let X be an H.l.c.s., C ⊆ X a proper convex cone, and x* ∈ X*. Then the following assertions are equivalent:
(i) the set {x ∈ C | ⟨x, x*⟩ ≤ 1} is bounded (resp. relatively compact);
(ii) the set {x ∈ cl C | ⟨x, x*⟩ ≤ 1} is bounded (resp. compact);
(iii) x* ∈ C^# and the set {x ∈ C | ⟨x, x*⟩ = 1} is bounded (resp. relatively compact);
(iv) x* ∈ C^# and the set {x ∈ cl C | ⟨x, x*⟩ = 1} is bounded (resp. compact).

Proof. Let us set C₁ := {x ∈ C | ⟨x, x*⟩ ≤ 1} and C₁⁼ := {x ∈ C | ⟨x, x*⟩ = 1}. Using the (almost) obvious inclusions {x ∈ cl C | ⟨x, x*⟩ < 1} ⊆ cl C₁ ⊆ {x ∈ cl C | ⟨x, x*⟩ ≤ 1}, we obtain that cl C₁ = (cl C)₁. It follows that (i) ⇔ (ii).
(i) ⇒ (iii) Assume that C₁ is bounded. Since C₁⁼ ⊆ C₁, C₁⁼ is bounded, too. Let x ∈ C be such that ⟨x, x*⟩ ≤ 0. Then clearly R₊x ⊆ C₁. Because C₁ is bounded and X is separated, we have that x = 0. This shows that x* ∈ C^#. The same argument works in the case in which C₁ is relatively compact.
(iii) ⇒ (i) Assume that x* ∈ C^#. It follows that C₁ = [0, 1]·C₁⁼. Indeed, the inclusion ⊇ is obvious; for the converse take x ∈ C₁ \ {0}. Then α := ⟨x, x*⟩ ∈ ]0, 1] because x* ∈ C^#, and so x′ := α⁻¹x ∈ C₁⁼, whence x = αx′ ∈ [0, 1]·C₁⁼. If C₁⁼ is bounded, then clearly C₁ = [0, 1]·C₁⁼ is bounded. If C₁⁼ is relatively compact, then so is C₁ because C₁ = ϕ([0, 1] × C₁⁼), where ϕ : R × X → X is defined by ϕ(t, x) := tx; clearly, ϕ is continuous and [0, 1] × C₁⁼ is relatively compact.
The equivalence of (ii) and (iv) follows from that of (i) and (iii). □

Theorem 2.2.28. Let (X, τ) be an H.l.c.s., C ⊆ X a proper convex cone, and let Γ ≠ ∅ be a family of nonempty bounded subsets of X such that {A⁰ | A ∈ Γ} is a neighborhood base of 0 ∈ X* for a separated linear topology τ_Γ on X*.
(i) Let x* ∈ X*. Then x* ∈ int_{τ_Γ} C⁺ if and only if x* ∈ C^# and there exists A ∈ Γ such that {x ∈ C | ⟨x, x*⟩ = 1} ⊆ cl conv(A ∪ {0});
(ii) int_{τ_Γ} C⁺ ≠ ∅ if and only if there exist A ∈ Γ and a convex set B ⊆ X such that 0 ∉ cl B, C = R₊B, and B ⊆ cl conv(A ∪ {0}).

Proof. (i) Assume that x* ∈ int_{τ_Γ} C⁺. Then there exists A ∈ Γ such that A⁰ ⊆ C⁺ − x*, whence (C⁺ − x*)⁰ ⊆ A⁰⁰ = cl conv(A ∪ {0}). But


x ∈ (C⁺ − x*)⁰ ⟺ [⟨x, u* − x*⟩ ≥ −1 ∀u* ∈ C⁺] ⟺ [x ∈ C⁺⁺ ∧ ⟨x, x*⟩ ≤ 1],

and so {x ∈ C | ⟨x, x*⟩ ≤ 1} ⊆ cl conv(A ∪ {0}). Since the set A is bounded, so is cl conv(A ∪ {0}). Using the equivalence (i) ⇔ (iii) of Lemma 2.2.27 for (X, w), we obtain that x* ∈ C^#, and so the implication ⇒ holds.
Assume now that x* ∈ C^# and A ∈ Γ are such that B := {x ∈ C | ⟨x, x*⟩ = 1} ⊆ cl conv(A ∪ {0}), and so A⁰ = (cl conv(A ∪ {0}))⁰ ⊆ B⁰. Consider u* ∈ A⁰ (⊆ B⁰). Then for x ∈ B we have that ⟨x, x* + u*⟩ = 1 + ⟨x, u*⟩ ≥ 0, whence x* + u* ∈ C⁺ because C = R₊B. Therefore, x* + A⁰ ⊆ C⁺, proving that x* ∈ int_{τ_Γ} C⁺.
(ii) Assuming that int_{τ_Γ} C⁺ ≠ ∅, the conclusion follows from (i) applied to some x* ∈ int_{τ_Γ} C⁺ and taking B := {x ∈ C | ⟨x, x*⟩ = 1}.
Conversely, assume that A ∈ Γ and the convex set B ⊆ X are such that 0 ∉ cl B, C = R₊B, and B ⊆ cl conv(A ∪ {0}). Because 0 ∉ cl B and C = R₊B, as in the proof of Proposition 2.2.23, there exists x* ∈ C^# such that B′ := {x ∈ C | ⟨x, x*⟩ = 1} ⊆ [0, 1]·B. It follows that B′ ⊆ [0, 1]·cl conv(A ∪ {0}) = cl conv(A ∪ {0}), and so the conclusion follows using (i). □

Remark 2.2.29. For (X, τ) an H.l.c.s. and ∅ ≠ A, A_i ⊆ X (i ∈ I ≠ ∅), one has: A⁰ is absorbing if and only if A is bounded; ∩_{i∈I}(A_i)⁰ = {0} if and only if cl conv(∪_{i∈I} A_i) = X. So, having a family Γ ≠ ∅ of nonempty subsets of X, in order that {A⁰ | A ∈ Γ} be a neighborhood base of 0 ∈ X* for a separated linear topology on X*, it is sufficient that: (a) A is bounded for every A ∈ Γ; (b) ∪_{A∈Γ} A = X; (c) tA ∈ Γ for all t ∈ R, A ∈ Γ; and (d) for all A, B ∈ Γ there exists C ∈ Γ such that A ∪ B ⊆ cl conv(C ∪ {0}).

Having the H.l.c.s. (X, τ), there are many useful choices for the family Γ in the preceding theorem. The classical ones are: the family of nonempty finite sets, in which case τ_Γ is the weak-star topology w* = σ(X*, X) of X*; the family of nonempty convex weakly compact subsets of X, in which case τ_Γ is the Mackey topology τ(X*, X) of X*; and the family of nonempty bounded subsets of X, in which case τ_Γ is the strong topology β(X*, X) of X*. Theorem 2.2.28 (ii) is established by Han in [260, Th. 2.1]; it extends Theorem 3.8.4 from [315], stated for the strong topology on X*. Using Proposition 2.2.25, the first part of the proof of Theorem 2.2.28, and the obvious inclusions w* := σ(X*, X) ⊆ μ := τ(X*, X) ⊆ β := β(X*, X), we get the following result.

Corollary 2.2.30. Let (X, τ) be an H.l.c.s. and C ⊆ X a proper convex cone. Then

int_{w*} C⁺ ⊆ int_μ C⁺ ⊆ int_β C⁺ ⊆ cor C⁺ ⊆ qi C⁺ = (cl C)^# ⊆ C^#.  (2.38)

Moreover, if C is closed and int_μ C⁺ ≠ ∅, then C^# = int_μ C⁺.
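In finite dimensions the chain (2.38) collapses; the following quick check, added here only as an illustration, treats C = Rⁿ₊ (so X = X* = Rⁿ and all the listed interiors coincide with the usual one):

\[
C^{+}=\mathbb{R}^n_{+},\qquad
\operatorname{int}C^{+}=\operatorname{cor}C^{+}=\operatorname{qi}C^{+}=C^{\#}
=\{y\in\mathbb{R}^n\mid y_i>0,\ i=1,\dots,n\},
\]

since ⟨x, y⟩ = ∑_i x_i y_i > 0 for every x ∈ C \ {0} precisely when all y_i > 0. By contrast, Example 2.2.26 shows that in ℓ¹ the inclusion cor C⁺ ⊆ C^# is strict.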


As seen in Example 2.2.26, the inclusion cor C⁺ ⊆ C^# can be strict even if X is a Banach space and C is a closed convex cone such that int_β C⁺ (= int_{‖·‖} C⁺ = cor C⁺) is nonempty. Besides based and well-based convex cones, defined in Definition 2.1.42, we introduce two other types of cones in the following definition.

Definition 2.2.31. Let (X, τ) be an l.c.s. whose topology is defined by the family P of seminorms, and let C ⊆ X be a proper convex cone. We say that
(i) C is supernormal or nuclear if, for every p ∈ P, there exists x* ∈ X* such that p(x) ≤ ⟨x, x*⟩ for all x ∈ C (in which case x* ∈ C⁺);
(ii) C has the property (π) (or is a (π)-cone) if there exists x* ∈ X* such that the set {x ∈ C | ⟨x, x*⟩ ≤ 1} is relatively weakly compact.

The notion of nuclear cone in Definition 2.2.31 (i) was introduced by Isac in [293]. Notice that the notion of nuclear cone does not depend on the family P of seminorms that induces the topology on X. Isac [293–295] and Postolică [477, 478] gave several examples of supernormal cones. From the definition of supernormal cones it is obvious that C is supernormal if and only if cl C is supernormal. The notion of (π)-cone was introduced by Sterna-Karwat in [517, Def. 2.1] as in Definition 2.2.31 (ii), extending so the corresponding notion introduced by Cesari and Suryanarayana in Banach spaces (see [111, Def. 4.1]). In [111] it is also said that the convex cone C ⊆ X is acute if there exists x* ∈ X* such that cl C ⊆ {0} ∪ {x ∈ X | ⟨x, x*⟩ > 0}; of course, C is acute if and only if (cl C)^# ≠ ∅. In the following result we provide several characterizations of well-based cones met in the literature.

Proposition 2.2.32. Let X be an H.l.c.s. whose topology is defined by the family P of semi-norms, and C ⊆ X a proper convex cone. Then the following assertions are equivalent:
(a) C is well-based;
(b) cl C is well-based;
(c) there exists x* ∈ C^# such that the set {x ∈ C | ⟨x, x*⟩ = 1} is bounded;
(d) int_{β(X*,X)} C⁺ ≠ ∅, where β(X*, X) is the strong topology on X*;
(e) C has the weak property (π), that is, there exists x* ∈ X* such that the set {x ∈ C | ⟨x, x*⟩ ≤ 1} is bounded;
(f) C has the angle property, that is, there exists x* ∈ X* such that, for every p ∈ P, there exists α_p > 0 such that α_p p(x) ≤ ⟨x, x*⟩ for all x ∈ C.

Proof. For x* ∈ X* we set B_{x*} := {x ∈ C | ⟨x, x*⟩ = 1}.
(a) ⇔ (d) This equivalence follows from Theorem 2.2.28 (ii), taking for Γ the family of nonempty bounded subsets of X.
(a) ⇔ (c) It is obvious that (c) ⇒ (a). Conversely, assume that C is well-based. Then there exists a bounded convex set B ⊆ X such that C = R₊B


and 0 ∉ cl B. As in the proof of Proposition 2.2.23, there exists x* ∈ C^# such that B_{x*} ⊆ [0, 1]·B. Hence, B_{x*} is bounded, and so (c) holds.
(b) ⇔ (c) Using the equivalence of the "bounded" versions of assertions (iii), (iv) from Lemma 2.2.27 and the equivalence (a) ⇔ (c) above, the conclusion follows.
(c) ⇔ (e) This is the "bounded" version of the equivalence (i) ⇔ (iii) from Lemma 2.2.27.
(c) ⇒ (f) Consider x* provided by (c), and take p ∈ P. Since B_{x*} is bounded, β := 1 + sup p(B_{x*}) < ∞, and so α_p := β⁻¹ > 0. For x ∈ C \ {0}, b := ⟨x, x*⟩⁻¹·x ∈ B_{x*}, and so α_p p(x) = β⁻¹⟨x, x*⟩ p(b) ≤ ⟨x, x*⟩.
(f) ⇒ (a) Take x* and α_p (p ∈ P) provided by the hypothesis; clearly, x* ∈ C⁺. If x ∈ C is such that ⟨x, x*⟩ = 0, then p(x) = 0 for every p ∈ P, and so x = 0 because X is Hausdorff. Hence, x* ∈ C^#, and so B_{x*} is a base for C. Moreover, for p ∈ P and x ∈ B_{x*} we have that p(x) ≤ α_p⁻¹⟨x, x*⟩ = α_p⁻¹, which implies that B_{x*} is bounded. □

Observe that Example 2.2.26 provides a well-based closed convex cone C in a Banach space X having also unbounded bases. The notion of a cone having the angle property as in Proposition 2.2.32 (f) is defined by Qiu in [481, Def. 2.1], extending so the corresponding notion introduced by Cesari and Suryanarayana in Banach spaces (see [111, Def. 4.2]); the notion of a cone having the weak property (π) is introduced by Han in [259, Def. 2.1(ii)] for X a Banach space. The equivalence of assertions (a), (e), and (f) from Proposition 2.2.32 for C closed is established in [481, Th. 2.1], extending so the corresponding results from [259, Th. 2.1]. The equivalence of assertions (a), (d) is established in [315, Th. 3.8.4]. In the next result we provide other characterizations of well-based cones in normed vector spaces.

Proposition 2.2.33. Let X be a normed vector space and C ⊆ X a proper convex cone. The following assertions are equivalent:
(i) C is well-based;
(ii) C is supernormal;
(iii) there exists x* ∈ C⁺ strongly exposing C at 0, that is,

∀(x_n)_{n≥1} ⊆ C : ⟨x_n, x*⟩ → 0 ⇒ x_n → 0;  (2.39)

(iv) 0 ∉ cl conv(C ∩ S_X), where S_X := {x ∈ X | ‖x‖ = 1};
(v) 0 ∉ cl conv(C \ B_X), where B_X := {x ∈ X | ‖x‖ < 1}, that is, 0 is a denting point of C;
(vi) there exist c ∈ C and x* ∈ C⁺ such that ⟨c, x*⟩ > 0 and C ∩ S_X ⊆ c + {x ∈ X | ⟨x, x*⟩ ≥ 0}.

Proof. For x* ∈ X* we set B_{x*} := {x ∈ C | ⟨x, x*⟩ = 1}.
(ii) ⇒ (i) ∧ (iii) ∧ (vi) By hypothesis, there exists x* ∈ X* such that

‖x‖ ≤ ⟨x, x*⟩  ∀x ∈ C.  (2.40)

Clearly, x* ∈ C^# and B_{x*} (⊆ U_X := {x ∈ X | ‖x‖ ≤ 1}) is convex and bounded, 0 ∉ cl B_{x*}, and C = R₊B_{x*}; hence, C is well-based. Taking (x_n)_{n≥1} ⊆ C with ⟨x_n, x*⟩ → 0, we get ‖x_n‖ → 0 by (2.40), and so (iii) holds. Moreover, using again (2.40), we get C ∩ S_X ⊆ C \ B_X ⊆ {x ∈ X | ⟨x, x*⟩ ≥ 1} = c + {x ∈ X | ⟨x, x*⟩ ≥ 0} for every c ∈ B_{x*}. Therefore, (vi) holds because B_{x*} ≠ ∅.
(i) ⇒ (ii) Because C is well-based, using Proposition 2.2.32, there exists u* ∈ C^# (⊆ C⁺) such that B_{u*} is bounded; therefore, there exists β > 0 such that ‖b‖ ≤ β for all b ∈ B_{u*}. Taking x* := βu*, for x ∈ C \ {0} one has x = ⟨x, u*⟩·b with b := ⟨x, u*⟩⁻¹·x (∈ B_{u*}), and so ‖x‖ = ⟨x, u*⟩·‖b‖ ≤ β⟨x, u*⟩ = ⟨x, βu*⟩ = ⟨x, x*⟩. Hence, (2.40) holds, and so C is supernormal.
(iii) ⇒ (ii) Let x* ∈ C⁺ be such that (2.39) is verified. We claim that there exists α > 0 such that (2.40) holds with x* replaced by αx*, in which case C is supernormal. In the contrary case, for every n ∈ N*, there exists x_n ∈ C such that ‖x_n‖ > n⟨x_n, x*⟩ ≥ 0. Then x′_n := ‖x_n‖⁻¹·x_n ∈ C ∩ S_X and 0 ≤ ⟨x′_n, x*⟩ < 1/n. Using (2.39), we get the contradiction 1 = ‖x′_n‖ → 0. Therefore, our claim is true.
(vi) ⇒ (v) Setting α := ⟨c, x*⟩ > 0, we have that C ∩ S_X ⊆ {x ∈ X | ⟨x, x*⟩ ≥ α} =: H_α. Taking x ∈ C \ B_X, we have ‖x‖ ≥ 1 and x′ := ‖x‖⁻¹·x ∈ C ∩ S_X, whence ⟨x, x*⟩ = ‖x‖·⟨x′, x*⟩ ≥ α. Therefore, C \ B_X ⊆ H_α. It follows that 0 ∉ H_α ⊇ cl conv(C \ B_X). Hence, (v) holds.
(v) ⇒ (iv) The conclusion holds because C ∩ S_X ⊆ C \ B_X.
(iv) ⇒ (i) Setting B := conv(C ∩ S_X) (⊆ C ∩ U_X), it is clear that B is convex and bounded, 0 ∉ cl B, and C = R₊B. Hence, C is well-based. □

Note that (i) ⇔ (ii) is established in [293, Cor. 1] (see also [309, Lem. 2.2]), (i) ⇔ (v) in [225, Prop. 3.1], (i) ⇔ (vi) in [230, Prop. 3], and (i) ⇔ (iv) in [132, Th. 3.6], while Daniilidis [150] attributes (i) ⇔ (iii) ⇔ (v) to [315]. Several characterizations of (π)-cones are provided in the next result.

Proposition 2.2.34. Let X be an H.l.c.s. and C ⊆ X a proper convex cone. Then the following assertions are equivalent:
(a) C is a (π)-cone;
(b) cl C is a (π)-cone;
(c) there exists x* ∈ C^# such that the set {x ∈ C | ⟨x, x*⟩ = 1} is relatively w-compact;
(d) there exists x* ∈ (cl C)^# such that the set {x ∈ cl C | ⟨x, x*⟩ = 1} is relatively w-compact;
(e) int_{τ(X*,X)} C⁺ ≠ ∅, where τ(X*, X) is the Mackey topology on X*;
(f) C has the weak property (π) and every bounded subset of C is relatively w-compact.


Proof. The equivalence of assertions (a)–(d) is given by Lemma 2.2.27. The equivalence (b) ⇔ (e) follows from Theorem 2.2.28 (ii), taking for Γ the family of nonempty weakly compact subsets of X.
(a) ⇒ (f) By hypothesis, there exists x* ∈ X* such that C₁ := {x ∈ C | ⟨x, x*⟩ ≤ 1} is relatively w-compact. Let ∅ ≠ B ⊆ C be bounded. Then γ := sup x*(B) ∈ R₊, and so B ⊆ {x ∈ C | ⟨x, x*⟩ ≤ γ′} = γ′·C₁, where γ′ := γ + 1. Since C₁ is relatively w-compact, so are γ′·C₁ and its subset B.
(f) ⇒ (a) By hypothesis, there exists x* ∈ X* such that the set C₁ (⊆ C) is bounded, and so C₁ is relatively w-compact. Therefore, C is a (π)-cone. □

Note that the equivalences (a) ⇔ (b) ⇔ (d) are established in [517, Lem. 2.1, Prop. 2.1], while (c) ⇔ (e) is mentioned explicitly in [481, Th. 2.3]; (a) ⇔ (f) is established in [259, Thm. 2.3] for X a Banach space and C closed.

Corollary 2.2.35. Let X be an H.l.c.s., X* its topological dual, and let C be a proper convex cone in X and x̄ ∈ int C. Then the set B = {x* ∈ C⁺ | ⟨x̄, x*⟩ = 1} is a weak*-compact base for C⁺; in particular, C⁺ is a (π)-cone w.r.t. any locally convex topology σ on X* such that (X*, σ)* = X.

Proof. Because int C is nonempty, we have that x̄ ∈ int C = int(cl C) = int_μ(cl C) = int_μ(C⁺)⁺, where μ := τ(X, X*) is the Mackey topology on X. Applying Theorem 2.2.28 (i) for (X, τ) replaced by (X*, w*) and C replaced by C⁺, we obtain that x̄ ∈ (C⁺)^# and there exists a nonempty convex w*-compact set A ⊆ X* such that B ⊆ cl_{w*} conv(A ∪ {0}) = [0, 1]·A, and so B is a weak*-compact base for C⁺. Moreover, using Proposition 2.2.34, we obtain that C⁺ is a (π)-cone when (X*, σ) is an H.l.c.s. with (X*, σ)* = X. □

The first part of Corollary 2.2.35 is proved in [576, Prop. 5].

Proposition 2.2.36. Let (X, P) be an H.l.c.s. and let C ⊆ X be a proper convex cone. Then

C is a (π)-cone ⟹ C is well-based ⟹ C is supernormal ⟹ C is normal.

If X is a normed vector space, then [C is supernormal] ⇒ [C is well-based]; moreover, if X is reflexive, then [C is well-based] ⇒ [C is a (π)-cone].

Proof. For x* ∈ X* we set B_{x*} := {x ∈ C | ⟨x, x*⟩ = 1}. Assume that C is a (π)-cone. Using the implication (i) ⇒ (iii) of Lemma 2.2.27, C has a relatively w-compact base B. Clearly, B is w-bounded, and so B is bounded by the Mackey theorem (see Example 2.2.2 (4)). Hence, C is well-based.


Suppose that C is well-based; then there exists x* ∈ C^# such that B_{x*} is bounded, and so μ_p := sup p(B_{x*}) ∈ R₊ for any p ∈ P. It follows that p(x) ≤ ⟨x, μ_p x*⟩ for every x ∈ C, and so C is supernormal.
Suppose now that C is supernormal and consider nets (x_i)_{i∈I}, (y_i)_{i∈I} ⊆ X such that 0 ≤_C x_i ≤_C y_i for all i ∈ I and (y_i)_{i∈I} → 0. Let p ∈ P. By hypothesis, there exists x* ∈ C⁺ such that p(x) ≤ ⟨x, x*⟩ for all x ∈ C. It follows that p(x_i) ≤ ⟨x_i, x*⟩ ≤ ⟨y_i, x*⟩ for every i ∈ I. As (y_i)_{i∈I} → 0, also (y_i)_{i∈I} →_w 0, and so (⟨y_i, x*⟩)_{i∈I} → 0. It follows that (p(x_i))_{i∈I} → 0, and so (x_i)_{i∈I} → 0. Using Theorem 2.1.31, we obtain that C is normal.
Assume that X is a normed space and C is supernormal. Then there exists x* ∈ C⁺ such that ‖x‖ ≤ ⟨x, x*⟩ for all x ∈ C; hence, x* ∈ C^# and B_{x*} is bounded, and so C is well-based.
Assume now that C is well-based and X is a reflexive Banach space. Using the implication (a) ⇒ (c) from Proposition 2.2.32, we get x* ∈ C^# such that B_{x*} is bounded. Using the implication (iii) ⇒ (i) of Lemma 2.2.27, we obtain that the set E := {x ∈ C | ⟨x, x*⟩ ≤ 1} is bounded. Because X is reflexive, E is a relatively w-compact set, and so C is a (π)-cone. □

Excepting the implications concerning (π)-cones, the other implications from the preceding proposition are established by Isac in [293]. The proof above shows that C satisfies a stronger condition than normality when C is supernormal. More exactly, if C is supernormal and (x_i)_{i∈I}, (y_i)_{i∈I} are nets from X such that 0 ≤_C x_i ≤_C y_i for all i ∈ I and (y_i)_{i∈I} →_w 0, then (x_i)_{i∈I} → 0; in particular, for every net (x_i)_{i∈I} ⊆ C, one has (x_i)_{i∈I} →_w 0 ⇒ (x_i)_{i∈I} → 0; that is, 0 is a continuity point of C. Also note (see Isac [294]) that

C is w-supernormal ⟺ C is w-normal ⟺ C⁺ − C⁺ = X*.

The second equivalence is stated in Corollary 2.2.16 (ii). The implication ⇒ of the first equivalence follows from the preceding proposition. Assume that C is w-normal. Let y* ∈ X*; since X* = C⁺ − C⁺, there exist y₁*, y₂* ∈ C⁺ with y* = y₁* − y₂*. Consider x* := y₁* + y₂* ∈ C⁺. It follows that |⟨x, y*⟩| ≤ |⟨x, y₁*⟩| + |⟨x, y₂*⟩| = ⟨x, y₁*⟩ + ⟨x, y₂*⟩ = ⟨x, x*⟩ for all x ∈ C, and so C is w-supernormal.
As seen in (the proof of) Proposition 2.2.36, if the proper convex cone C from the H.l.c.s. (X, τ) has a bounded base B, then C is supernormal; furthermore, if B is compact, then C is Daniell by the last assertion of Corollary 2.1.50. It follows that C is Daniell and complete w.r.t. the weak topology of X if C is a closed (π)-cone.

Proposition 2.2.37. Assume that (X, τ) is an l.c.s. and C is a supernormal convex cone; then C is Cauchy regular.

Proof. Because C is supernormal, C is normal by (the proof of) Proposition 2.2.36. Consider a C-increasing and C-bounded net (x_i)_{i∈I} ⊆ C, as well as


a continuous seminorm p : X → R. Because C is supernormal, there exists x* ∈ C⁺ such that p(x) ≤ ⟨x, x*⟩ for every x ∈ C. It follows that (⟨x_i, x*⟩)_{i∈I} is an increasing bounded net in R, and so it is Cauchy. Hence, for any ε > 0, there exists i_ε ∈ I such that |⟨x_i, x*⟩ − ⟨x_{i_ε}, x*⟩| ≤ ε/2 for i ≽ i_ε, whence p(x_i − x_{i_ε}) ≤ ⟨x_i − x_{i_ε}, x*⟩ ≤ ε/2, and so p(x_i − x_j) ≤ ε for i, j ≽ i_ε. Because the seminorm p is arbitrary, it follows that (x_i)_{i∈I} is Cauchy, and so C is Cauchy regular. □

Remark 2.2.38. (i) In any nontrivial normed space (X, ‖·‖) there exist well-based convex cones with nonempty interior; take C := R₊(x̄ + rU_X), where x̄ ∈ X \ {0} and 0 < r < ‖x̄‖.
(ii) Among the classical Banach spaces (mentioned in Example 2.2.2), the usual positive cones are well-based only in ℓ¹ and L¹(Ω).
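A short verification, added here only for illustration, backs up Remark 2.2.38 (ii) for ℓ¹: with e := (1, 1, ...) ∈ ℓ∞ = (ℓ¹)* one has

\[
\|x\|_1=\sum_{n\ge 1} x_n=\langle x,e\rangle\qquad \text{for } x\in C:=(\ell^1)_+ ,
\]

so (2.40) holds with x* := e; hence C is supernormal and, by Proposition 2.2.33, well-based, a bounded base being {x ∈ C | ⟨x, e⟩ = 1}.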

2.3 Generalized Set Less Relations

In set optimization as well as in optimization under uncertainty (see Section 3.2, Chapter 4, Section 6.4), it is essential to compare sets by means of set relations, which are binary relations among sets. A diversity of set relations based on convex cones is studied in the literature (compare Kuroiwa [361, 362] and [329, Chapter 2.6.2] for an overview). This section is concerned with the study of more general set relations (see Köbis, Köbis, Yao [339] and Köbis, Kuroiwa, and Tammer [340]), where the involved set depicting the domination structure is not supposed to be convex or a cone. The assumptions concerning the domination structure do not depend on any convexity, and hence these notions generalize those in Kuroiwa [361, 362]. Throughout this section, we consider a linear topological space Y. The power set of Y without the empty set is denoted by P(Y) := {A ⊆ Y | A is nonempty}. We introduce generalized set relations w.r.t. a set D ∈ P(Y) in the following definitions, launched in [339] and [340] (see Figures 2.3.1 and 2.3.2).

Definition 2.3.1 (Generalized Upper Set Less Relation). Consider D ∈ P(Y). The generalized upper set less relation ⪯ᵘ_D is given for two sets A, B ∈ P(Y) by

A ⪯ᵘ_D B :⟺ A ⊆ B − D,

which is equivalent to

∀a ∈ A, ∃b ∈ B : a ∈ b − D.



Figure 2.3.1. A ⪯ᵘ_D B (A ⊆ B − D), where D := K is the natural ordering cone in R².

Remark 2.3.2. We note that ⪯ᵘ_D is transitive if D + D ⊆ D. Moreover, if D is a cone, then D + D ⊆ D yields that D is convex. However, if D = R²₊ \ {0}, then D + D ⊆ D is satisfied, but D is not a cone. Notice that ⪯ᵘ_D is reflexive if 0 ∈ D. Hence, ⪯ᵘ_D is a preorder if D + D ⊆ D and 0 ∈ D. Furthermore, we introduce the following extension of the lower set less relation by Kuroiwa [361, 362]; see [340, Definition 3.5].

Definition 2.3.3 (Generalized Lower Set Less Relation). Consider D ∈ P(Y). The generalized lower set less relation ⪯ˡ_D is given for two sets A, B ∈ P(Y) by

A ⪯ˡ_D B :⟺ B ⊆ A + D,

which is equivalent to

∀b ∈ B, ∃a ∈ A : b ∈ a + D.

Remark 2.3.4. We note that ⪯ˡ_D is transitive if D + D ⊆ D, and it is reflexive if 0 ∈ D.
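A concrete instance, added here for illustration and using the natural ordering cone: in Y = R² with D := R²₊, take A := [0, 1] × [0, 1] and B := [1, 2] × [1, 2]. Then

\[
A\subseteq B-D=(-\infty,2]^2 \quad\text{and}\quad B\subseteq A+D=[0,+\infty)^2 ,
\]

so both A ⪯ᵘ_D B and A ⪯ˡ_D B hold, and hence also A ⪯ˢ_D B in the sense of Definition 2.3.5 below.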



Figure 2.3.2. A ⪯ˡ_D B (B ⊆ A + D), where D := K is the natural ordering cone in R².

If D in Definition 2.3.3 is a nontrivial convex cone C ⊂ Y, then this notion matches the definition of the lower set less relation launched by Kuroiwa [361, 362], and B ⊆ A + C is equivalent to

∀b ∈ B, ∃a ∈ A : a ≤_C b,

where ≤_C denotes the relation induced by the nontrivial convex cone C, i.e., a ≤_C b means that a ∈ b − C. In the following definition (see [340, Definition 3.10]), the notion of the set less relation introduced by Young [573] and Nishnianidze [442] is extended.

Definition 2.3.5 (Generalized Set Less Relation). Consider D ∈ P(Y). The generalized set less relation ⪯ˢ_D is given for two sets A, B ∈ P(Y) by

A ⪯ˢ_D B :⟺ A ⪯ᵘ_D B and A ⪯ˡ_D B.

The following definition, introduced in [340, Definition 3.12], is an extension of the certainly less relation (see Jahn, Ha [313], Eichfelder, Jahn [176]).

Definition 2.3.6 (Generalized Certainly Less Relation). Consider D ∈ P(Y). The generalized certainly less relation ⪯ᶜ_D is given for two sets A, B ∈ P(Y) by

A ⪯ᶜ_D B :⟺ (A = B) or (∀a ∈ A, ∀b ∈ B : a ∈ b − D).
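To contrast ⪯ᶜ_D with ⪯ˢ_D, here is an illustration of ours in Y = R² with D := R²₊:

\[
A=[0,2]^2,\ B=[1,3]^2:\qquad A\preceq^{s}_{D}B,\qquad A\not\preceq^{c}_{D}B\ \ \text{since }(2,2)\notin (1,1)-\mathbb{R}^2_{+}.
\]

Indeed, A ⊆ B − D = (−∞, 3]² and B ⊆ A + D = [0, +∞)², so the set less relation holds, whereas the certainly less relation requires every element of A to dominate-compare with every element of B and is thus considerably more demanding.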


The relation in the next definition (introduced in [340, Definition 3.15]) is an extension of the possibly less relation (see [133, 313]).

Definition 2.3.7 (Generalized Possibly Less Relation). Consider D ∈ P(Y). The generalized possibly less relation ⪯ᵖ_D is given for two sets A, B ∈ P(Y) by

A ⪯ᵖ_D B :⟺ ∃a ∈ A, ∃b ∈ B : a ∈ b − D.

Relationships between set relations and fixed point theory are given by Li and Tammer in [375]. In particular, set relations are applied there to prove fixed point theorems on ordered t.v.s., which in turn are used to prove the existence of extended Nash equilibria for set-valued mappings and the existence of vector Nash equilibria for single-valued mappings. We will use the set less relations introduced in this section for defining solution concepts in set optimization (see Section 3.2 and Definition 4.4.2) and to formulate robust counterpart problems to scalar optimization problems under uncertainty in Section 6.4.

2.4 Separation Theorems for Not Necessarily Convex Sets

Throughout this section, Y is a t.v.s. We shall use some usual notions and notation from convex analysis. So, for a function f : Y → R̄, its domain and epigraph are defined, respectively, by

dom f := {y ∈ Y | f(y) < +∞},  epi f := {(y, t) ∈ Y × R | f(y) ≤ t};

f is said to be convex if epi f is a convex set, and f is said to be proper if dom f ≠ ∅ and f does not take the value −∞. Of course, f is lower semicontinuous if and only if epi f is closed. The aim of this section is to find a suitable functional ϕ : Y → R̄ and conditions such that two given nonempty sets A and D can be separated by ϕ. Provided that D contains the rays generated by k ∈ Y \ {0}, i.e.,

D + [0, +∞)·k ⊆ D,  (2.41)

we move −D along this ray and consider the set

D′ := {(y, t) ∈ Y × R | y ∈ tk − D}.

The assumption on D shows that D′ is of epigraph type; i.e., if (y, t) ∈ D′ and t′ ≥ t, then (y, t′) ∈ D′. Indeed, if y ∈ tk − D and t′ ≥ t, since tk − D = t′k − [D + (t′ − t)k] ⊆ t′k − D, we obtain that (y, t′) ∈ D′. Also observe that D′ = T⁻¹(D), where T : Y × R → Y is the continuous linear


operator defined by T(y, t) := tk − y. So, if D is closed (convex, a cone), then D′ is closed (convex, a cone). Since D′ is of epigraph type, we associate with D and k the function ϕ := ϕ_{D,k} : Y → R̄ defined by

ϕ_{D,k}(y) := inf{t ∈ R | (y, t) ∈ D′} = inf{t ∈ R | y ∈ tk − D}.  (2.42)
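For orientation, a standard special case, stated here only as an illustration: for Y = R², D := R²₊, and k := (1, 1), the condition y ∈ tk − D means y₁ ≤ t and y₂ ≤ t, so

\[
\varphi_{D,k}(y)=\inf\{t\in\mathbb{R}\mid y\in tk-D\}=\max\{y_1,y_2\},
\]

a finite, continuous, sublinear functional, for which the translation property (2.45) below, ϕ(y + λk) = ϕ(y) + λ, is immediate.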

An illustration of the functional ϕ is furnished by Figure 2.4.3. The functional ϕ_{D,k} in (2.42) is used for deriving results concerning the nonlinear separation of nonconvex sets, initiated by Gerstewitz [210], [213] and Gerstewitz and Iwanow [212]. This research was motivated by the development of new scalarization techniques in vector optimization; see [209] and [414, p. 100]. For assertions in the context of operator theory where a functional of the type (2.42) is used, see Krasnosel'skiĭ [353] and Rubinov [493]. Obviously, the domain of ϕ is the set Rk − D and D′ ⊆ epi ϕ ⊆ cl D′, from which it follows that if D′ is closed, we have that D′ = epi ϕ, and so ϕ is a lower semicontinuous (l.s.c.) function. In this section we are dealing with functionals ϕ : Y → R̄, so we bring to mind the rules for calculating with −∞, +∞ in R̄:

∀y ∈ R : −∞ < y < +∞;  μ·(+∞) = +∞ and (−μ)·(+∞) = −∞ (μ > 0);
+∞ + (+∞) = +∞;  μ ± ∞ = ±∞ (μ ∈ R).  (2.43)

Furthermore, we are using the conventions +∞ + (−∞) := +∞ and 0·(±∞) := 0. A subset D ⊂ Y is called proper if D ≠ ∅ and D ≠ Y.

2.4.1 Algebraic and Topological Properties

In the next results, we collect several useful properties of ϕ = ϕ_{D,k}.

Theorem 2.4.1. Let D ⊂ Y be a closed proper set and k ∈ Y \ {0} be such that (2.41) holds. Then ϕ is l.s.c., dom ϕ = Rk − D,

∀λ ∈ R : {y ∈ Y | ϕ(y) ≤ λ} = λk − D,  (2.44)

and ϕ is translation invariant along k, i.e.,

∀y ∈ Y, ∀λ ∈ R : ϕ(y + λk) = ϕ(y) + λ.  (2.45)

Moreover,
(a) ϕ is convex if and only if D is convex; ϕ(λy) = λϕ(y) for all λ > 0 and y ∈ Y if and only if D is a cone.
(b) ϕ is proper if and only if D does not contain lines parallel to k, i.e.,

∀y ∈ Y, ∃t ∈ R : y + tk ∉ D.  (2.46)


(c) ϕ is finite-valued if and only if D does not contain lines parallel to k and

Rk − D = Y.  (2.47)

(d) Let B ⊂ Y; ϕ is B-monotone (i.e., y₂ − y₁ ∈ B ⇒ ϕ(y₁) ≤ ϕ(y₂)) if and only if D + B ⊆ D.
(e) ϕ is subadditive if and only if D + D ⊆ D.
Suppose, furthermore, that

D + (0, +∞)·k ⊆ int D.  (2.48)

Then,
(f) ϕ is continuous and

∀λ ∈ R : {y ∈ Y | ϕ(y) < λ} = λk − int D,  (2.49)
∀λ ∈ R : {y ∈ Y | ϕ(y) = λ} = λk − bd D.  (2.50)

(g) If ϕ is proper, then

ϕ is B-monotone ⇔ D + B ⊆ D ⇔ bd D + B ⊆ D.

Moreover, if ϕ is finite-valued, then

ϕ is strictly B-monotone (i.e., y₂ − y₁ ∈ B \ {0} ⇒ ϕ(y₁) < ϕ(y₂)) ⇔ D + (B \ {0}) ⊆ int D ⇔ bd D + (B \ {0}) ⊆ int D.

(h) Assume that ϕ is proper; then

ϕ is subadditive ⇔ D + D ⊆ D ⇔ bd D + bd D ⊆ D.

Proof. We have already observed that dom ϕ = Rk − D and that ϕ is l.s.c. when D is closed. From the definition of ϕ the inclusion ⊇ in (2.44) is obvious, while the converse inclusion is immediate, taking into account the closedness of D. Formula (2.45) follows easily from (2.44).
(a) Since the operator T defined above is onto and epi ϕ = T⁻¹(D), we have that epi ϕ is convex (a cone) if and only if D = T(epi ϕ) is so. The conclusion follows.
(b) We have ϕ(y) = −∞ ⇔ y ∈ tk − D for every t ∈ R ⇔ −y + Rk ⊆ D. The conclusion follows.
(c) The conclusion follows from (b) and the fact that dom ϕ = Rk − D.
(d) Suppose first that D + B ⊆ D and take y₁, y₂ ∈ Y with y₂ − y₁ ∈ B. Let t ∈ R be such that y₂ ∈ tk − D. Then y₁ ∈ y₂ − B ⊆ tk − (D + B) ⊆ tk − D, and so ϕ(y₁) ≤ t. Hence, ϕ(y₁) ≤ ϕ(y₂). Assume now that ϕ is B-monotone and take y ∈ D and b ∈ B. From (2.44) we have that ϕ(−y) ≤ 0. Since (−y) − (−y − b) ∈ B, we obtain that ϕ(−y − b) ≤ ϕ(−y) ≤ 0, and so, using again (2.44), we obtain that −y − b ∈ −D, i.e., y + b ∈ D.
(e) Suppose first that D + D ⊆ D and take y₁, y₂ ∈ Y. Let t_i ∈ R be such that y_i ∈ t_i k − D for i ∈ {1, 2}. Then y₁ + y₂ ∈ (t₁ + t₂)k − (D + D) ⊆ (t₁ + t₂)k − D, and so ϕ(y₁ + y₂) ≤ t₁ + t₂. It follows that ϕ(y₁ + y₂) ≤ ϕ(y₁) + ϕ(y₂). Assume now that ϕ is subadditive and take y₁, y₂ ∈ D. From (2.44) we have that ϕ(−y₁), ϕ(−y₂) ≤ 0. Since ϕ is subadditive, we obtain

2.4 Separation Theorems for Not Necessarily Convex Sets

81

that ϕ(−y1 − y2 ) ≤ ϕ(−y1 ) + ϕ(−y2 ) ≤ 0, and so, using again (2.44), we obtain that −y1 − y2 ∈ −D, i.e., y1 + y2 ∈ D. Suppose now that (2.48) holds. (f) Let λ ∈ R and take y ∈ λk − int D. Since λk − y ∈ int D, there exists ε > 0 such that λk − y − εk ∈ int D ⊂ D. Therefore, ϕ(y) ≤ λ − ε < λ, which shows that the inclusion ⊇ always holds in (2.49) when int D = ∅. Let λ ∈ R and y ∈ Y be such that ϕ(y) < λ. There exists t ∈ R, t < λ, such that y ∈ tk − D. It follows that y ∈ λk − (D + (λ − t)k) ⊆ λk − int D. Therefore, (2.49) holds, and so ϕ is upper semicontinuous. Because ϕ is also lower semicontinuous, we have that ϕ is continuous. From (2.44) and (2.49) we obtain immediately that (2.50) holds. (g) Let us prove the second part, the first one being similar to that of (and partially proved in) (d). So, let ϕ be finite-valued. Assume that ϕ is strictly B-monotone and take y ∈ D and b ∈ B \ {0}. From (2.44) we have that ϕ(−y) ≤ 0, and so, by hypothesis, ϕ(−y − b) < 0. Using (2.49), we obtain that y+b ∈ int D. Assume now that bd D+(B\{0}) ⊆ int D. Consider y1 , y2 ∈ Y with y2 − y1 ∈ B \ {0}. From (2.50) we have that y2 ∈ ϕ(y2 )k −bd D, and so y1 ∈ ϕ(y2 )k −(bd D +(B \{0})) ⊆ ϕ(y2 )k −int D. From (2.49), we obtain that ϕ(y1 ) < ϕ(y2 ). The remaining implication is obvious. (h) Let ϕ be proper. One has to prove bd D + bd D ⊆ D ⇒ ϕ is subadditive. Consider y1 , y2 ∈ Y . If {y1 , y2 } ⊂ dom ϕ, there is nothing to prove; hence, let y1 , y2 ∈ dom ϕ. Then, by (2.50), yi ∈ ϕ(yi )k − bd D for i ∈ {1, 2}, and so y1 + y2 ∈ (ϕ(y1 ) + ϕ(y2 ))k − (bd D + bd D) ⊆ (ϕ(y1 ) + ϕ(y2 ))k − D.  Therefore, ϕ(y1 + y2 ) ≤ ϕ(y1 ) + ϕ(y2 ). Remark 2.4.2. The conditions (2.41), (2.46), (2.47), and (2.48) are invariant under translations of D, but the conditions D +D ⊆ D and bd D +bd D ⊆ D are not. Note also that R ⊆ Im ϕD,k if ϕD,k is finite somewhere. Related to condition (2.48) we have the following remark. Remark 2.4.3. If, for D ⊂ Y and k ∈ Y , one has that if cl D + (0, +∞)k ⊆ int D then cl(int D) = cl D and int(cl D) = int D. Indeed, if y ∈ D, then y + n−1 k ∈ int D for any n ∈ N> , and so y ∈ cl(int D); this proves the first equality. Now let y ∈ int(cl D); then there exists t > 0 such that y − tk ∈ cl D. Using the hypothesis, we obtain that y ∈ int D. Therefore, the second equality holds, too. It is obvious that (2.48) ⇒ (2.41). Other relations among several conditions used in Theorem 2.4.1 are established in the next result. Proposition 2.4.4. Let D ⊂ Y be a closed proper set and k ∈ Y . (i) If there exists a cone C ⊂ Y such that k ∈ int C and D + int C ⊆ D, then (2.46), (2.47), and (2.48) hold.

82

2 Functional Analysis over Cones

y1 tk − D

k

tk

y2

D Figure 2.4.3. The functional ϕD,k given by (2.42) at y 1 and y 2 with ϕD,k (y 1 ) = ϕD,k (y 2 ) = t.

(ii) If D is convex, int D = ∅, and (2.41), (2.47) are satisfied, then (2.46) and (2.48) hold. In particular, if the hypotheses of (i) or (ii) hold, then ϕD,k is finite-valued and continuous, being also convex in case (ii). Proof. (i) Let y ∈ Y . Since k ∈ int C, int C−k is a neighborhood of 0, and so there exists t > 0 such that ty ∈ int C −k. It follows that y ∈ int C −(0, +∞)k. Therefore, C + Rk = C − (0, +∞)k = int C + Rk = int C − (0, +∞)k = Y. Taking y0 ∈ D, from the inclusion D + int C ⊆ D, we have that D + Rk ⊇ y0 + int C + Rk = y0 + Y = Y ; i.e., (2.47) holds. Suppose that the line Rk + y is contained in D; then Y = y + Rk + int C ⊆ D + int C ⊆ D, contradicting the properness of D. Since D + (0, +∞)k ⊆ D + int C ⊆ D, it is obvious that (2.48) holds, too. (ii) Let us show that (2.48) holds in our hypotheses. In the contrary case, / int D. Since D is there exist y0 ∈ D and t0 ∈ (0, +∞) such that y0 + t0 k ∈ convex, by a classic separation theorem, there exists y ∗ ∈ Y ∗ \ {0} such that

2.4 Separation Theorems for Not Necessarily Convex Sets

∀y ∈ D :

83

y0 + t0 k, y ∗  ≤ y, y ∗  .

From (2.41), we obtain that y0 + t0 k, y ∗  ≤ y0 + tk, y ∗  for every t ≥ 0. Since t0 > 0, it follows that k, y ∗  = 0, and so ∀ y ∈ D, ∀ t ∈ R :

y0 , y ∗  ≤ y + tk, y ∗  .

From (2.47), we obtain that y0 , y ∗  ≤ y, y ∗  for all y ∈ Y , which shows that y ∗ = 0. This contradiction proves that (2.48) holds. Assume now that y + Rk ⊆ D for some y ∈ Y . Let d ∈ D and t ∈ R. Since 1 D is convex, for every n ∈ N> we have that n−1 n d + n (y + tnk) ∈ D. Taking the limit we obtain that d + tk ∈ cl D = D. Therefore, using also (2.47), we get the contradiction Y = D + Rk ⊆ D. Because in both cases conditions (2.46), (2.47), and (2.48) hold, from Theorem 2.4.1(c,f) we have that ϕ is finite-valued and continuous; moreover, ϕ is convex in case (ii), D being so.  Using the preceding result, we obtain the following important particular case of Theorem 2.4.1. Corollary 2.4.5. Let C ⊂ Y be a proper closed convex cone and k ∈ int C. Then ϕ : Y → R, ϕ(y) := inf{t ∈ R | y ∈ tk − C} is a well-defined continuous sublinear functional such that for every λ ∈ R, {y ∈ Y | ϕ(y) ≤ λ} = λk − C,

{y ∈ Y | ϕ(y) < λ} = λk − int C.

Moreover, ϕ is strictly int C-monotone. Proof. Just take D = C in Theorem 2.4.1 and use Proposition 2.4.4(ii). For the last part note that C + int C = int C.  2.4.2 Continuity and Lipschitz Continuity The primary goal of this section is to study local Lipschitz properties of the functional ϕD,k in (2.42) under the weakest assumptions concerning the subset D ⊂ Y and k ∈ Y . Furthermore, we suppose that C ⊂ Y is a proper, closed, convex cone. The results concerning continuity and Lipschitz continuity of the funcalinescu in [533]. tional ϕD,k in this section are shown by Tammer and Z˘ In Theorem 2.4.1(a), we have seen that ϕD,k is convex if D is a convex set. In such a situation, from the continuity of ϕD,k at a point in the interior of its domain, one obtains the local Lipschitz continuity of ϕD,k on the interior of its domain (if the functional is proper). Moreover, when D = C and k ∈ int C then (it is well known that) ϕD,k is a continuous, sublinear function, and so ϕD,k is Lipschitz continuous.

84

2 Functional Analysis over Cones

In the case Y = Rm and for C = Rm + , Bonnisseau–Crettez established in [76] the Lipschitz continuity of ϕD,k around a point y ∈ − bd D when −k belongs to the interior of Clarke’s tangent cone of −D at y. As we will see in the sequel, the (global) Lipschitz continuity of ϕD,k can be related to a corresponding result by Gorokhovik–Gorokhovik in [234] derived in normed spaces. In this section, we need the following assumptions: Assumption (A1). D is closed, satisfies the free-disposal assumption D + C = D and D = Y. Furthermore, the following (stronger) condition is of interest: Assumption (A2). D is closed, satisfies the strong free-disposal assumption D + (C \ {0}) = int D and D = Y. Remark 2.4.6. Sets satisfying the free-disposal assumption (A1) are very important in mathematical economics, especially in production theory, and in game theory, see the book by Debreu [155] and the paper by Jofr´e, Jourani [318]. We will derive a characterization of the Lipschitz continuity of ϕD,k assuming the free-disposal assumption (A1). Remark 2.4.7. The condition D + (C \ {0}) = int D is equivalent to D + (C \ {0}) ⊆ int D. Moreover, (A2) ⇒ (A1) since D + C = D ∪ (D + (C \ {0})). We consider the recession cone 0+ D of the nonempty set D ⊂ Y defined by 0+ D := {u ∈ Y | ∀y ∈ D, ∀t ∈ R+ :

y + tu ∈ D }.

The free-disposal condition D = D + C in (A1) shows that C ⊆ 0+ D. The set 0+ D is a convex cone, and 0+ D is also closed when D is closed. Therefore, 0+ D is the largest closed convex cone C verifying the free-disposal assumption D = D + C. In the proof of the following theorem (see [533, Theorem 3.1]), we will use the Minkowski functional (see page 34) for a closed convex neighborhood of 0∈Y. Theorem 2.4.8. Assume that C ⊂ Y is a proper, closed, convex cone, k ∈ C \ (−C) and D ⊂ Y is a nonempty set satisfying condition (A1). (i) One has ∀y, y  ∈ Y :

ϕD,k (y) ≤ ϕD,k (y  ) + ϕC,k (y − y  ).

(ii) If k ∈ int C, then ϕD,k is finite-valued and Lipschitz on Y .

(2.51)

2.4 Separation Theorems for Not Necessarily Convex Sets

85

Proof. (i) By Theorem 2.4.1 (applied for D and D := C, respectively), we obtain that ϕD,k and ϕC,k are lower semicontinuous functions and, furthermore, ϕC,k is sublinear and proper. Consider y, y  ∈ Y . If ϕD,k (y  ) = +∞ or ϕC (y − y  ) = +∞ it is nothing to prove. In the contrary case, let t, s ∈ R be such that y − y  ∈ tk − C and y  ∈ sk − D. Then, we obtain y ∈ tk − C + sk − D = (t + s)k + (−D − C) = (t + s)k − D taking into account assumption (A1). This yields ϕD,k (y) ≤ t + s. Passing to the infimum with respect to t and s satisfying the preceding relations, we conclude that (2.51) holds. (ii) We suppose k ∈ int C. Let V ⊂ Y be a symmetric, closed and convex neighborhood of 0 such that k + V ⊂ C. Let pV : Y → R be the Minkowski functional associated to V . Then, pV is a continuous seminorm and V = {y ∈ Y | pV (y) ≤ 1}. Consider y ∈ Y and t > 0 such that y ∈ tV . This yields t−1 y ∈ V ⊂ k − C and y ∈ tk − C. So, we conclude ϕC,k (y) ≤ t. Hence, ∀y ∈ Y :

ϕC,k (y) ≤ pV (y).

(2.52)

This inequality verifies (Rk − C =) dom ϕC,k = Y. Furthermore, since ϕC,k is sublinear, we obtain ϕC,k (y) ≤ ϕC,k (y  )+pV (y−y  ) and so (2.53) ∀y, y  ∈ Y : |ϕC,k (y) − ϕC,k (y  )| ≤ pV (y − y  ), i.e., ϕC,k is Lipschitz on Y . Moreover, ϕD,k does not take the value −∞. Indeed, in the contrary case (ϕD,k (y0 ) = −∞ for some y0 ∈ Y ) it holds that y0 + Rk ⊆ D and therefore D = D + C ⊇ y0 + Rk + C = y0 + Y = Y, a contradiction. Taking into account (A1) and dom ϕD,k = Rk −D, we obtain dom ϕD,k = Rk − D = Rk − C − D = Y − D = Y. This yields that ϕD,k is finite valued. From (2.51) and (2.52), we get ∀y, y  ∈ Y :

ϕD,k (y) − ϕD,k (y  ) ≤ ϕC,k (y − y  ) ≤ pV (y − y  ),

whence (by interchanging y and y  ) ∀y, y  ∈ Y : i.e., ϕD,k is Lipschitz on Y .

|ϕD,k (y) − ϕD,k (y  )| ≤ pV (y − y  ),

(2.54) 

It is important to mention that the condition D + (C \ {0}) ⊆ int D does not imply that ϕD,k is proper (see [533, Example 3.2]).

86

2 Functional Analysis over Cones −1

Example 2.4.9. Take D := −{(x, y) ∈ R2 | x = 0, y ≥ − |x| } and C := R+ k with k := (0, −1). Then D + (C \ {0}) = int D and ϕD,k (0, 1) = −∞. Furthermore, with our notation, the assertion in [76, Prop. 7] says that ϕD,k is finite and locally Lipschitz provided Y = Rn , C = Rn+ and k ∈ int C, which is much less general than the conclusion of Theorem 2.4.8(ii). Obviously, in the conditions of Theorem 2.4.8(ii) we have that k ∈ int 0+ D because C ⊆ 0+ D. Moreover, we also obtain a converse of Theorem 2.4.8(ii) (see [533, Proposition 3.3]). Proposition 2.4.10. Assume that C ⊂ Y is a proper, closed, convex cone, k ∈ C \ (−C) and D ⊂ Y is a nonempty set satisfying condition (A1). If ϕD,k is finite-valued and Lipschitz, then k ∈ int 0+ D. Proof. Under our assumptions, there exists a closed, convex and symmetric neighborhood V of 0 such that (2.54) holds. Taking into account (2.44) in Theorem 2.4.1, it holds that −D = {y ∈ Y | ϕD,k (y) ≤ 0}. Consider y ∈ D, v ∈ V and α ≥ 0. From (2.45) and because V = {y ∈ Y | pV (y) ≤ 1}, we get ϕD,k (y + α(v − k)) ≤ ϕD,k (y + αv) − α ≤ ϕD,k (y) + αpV (v) − α ≤ 0. Therefore, V − k ⊂ −0+ D, that shows k ∈ int 0+ D.



From Proposition 2.4.10 and Theorem 2.4.8(ii), we obtain the following necessary and sufficient condition (see [533, Corollary 3.4]). Corollary 2.4.11. Under the assumptions of Proposition 2.4.10, the functional ϕD,k is finite-valued and Lipschitz if and only if k ∈ int 0+ D. Proof. Let ϕD,k be finite-valued and Lipschitz. Then, k ∈ int 0+ D follows from Proposition 2.4.10. Now, suppose that k ∈ int 0+ D. Taking C := 0+ D and using Theorem 2.4.8(ii), we obtain that ϕD,k is finite-valued and Lipschitz.  In the case that int C = ∅ and k ∈ / int C, the functional ϕC,k is not finitevalued, and so it is not Lipschitz. We could ask whether the restriction of ϕC,k at its domain is Lipschitz. The next examples (see [533, Examples 3.5 and 3.6]) explain that both situations are possible. Example 2.4.12. Consider C = R2+ and k = (1, 0). It holds that ϕC,k (y1 , y2 ) = y1 for y2 ≤ 0, and ϕC,k (y1 , y2 ) = +∞ for y2 > 0, and so ϕC,k |dom ϕC,k is Lipschitz.

2.4 Separation Theorems for Not Necessarily Convex Sets

87

  Example 2.4.13. Consider C := (u, v, w) ∈ R3 | v, w ≥ 0, u2 ≤ vw and k := (0, 0, 1). Then, ⎧ if y > 0 or [y = 0 and x = 0], ⎨ +∞ if x = y = 0, ϕC,k (x, y, z) = z ⎩ z − x2 /y if y < 0. Of course, the restriction of ϕC,k at its domain is not continuous at (0, 0, 0) ∈ dom ϕC,k and the restriction of ϕC,k at the interior of its domain is not Lipschitz. Nevertheless, ϕC,k is locally Lipschitz on the interior of its domain. The local Lipschitz property mentioned in Example 2.4.13 is a general one for ϕD,k if D is convex (see [533, Proposition 3.7]). Proposition 2.4.14. Let D be a proper closed subset of Y and k ∈ Y \ {0} be such that D + R+ k = D. If D is convex, has a nonempty interior, and does not contain any line parallel with k (or equivalently k ∈ / −0+ D), then ϕD,k is locally Lipschitz on int(dom ϕD,k ) = Rk − int D. Proof. Since D does not contain any line parallel with k, ϕD,k is proper (see Theorem 2.4.1 taking into account assumption (A1)). Furthermore, we know from Theorem 2.4.1 that dom ϕD,k = Rk − D, and so int(dom ϕD,k ) = int(Rk − D) = Rk − int D (see, e.g., [581, Exer. 1.4]). Furthermore, it holds that −D ⊆ {y ∈ Y | ϕD,k (y) ≤ 0}. Because int D = ∅, we get that ϕD,k is bounded above on a neighborhood of a point, and so ϕD,k is locally Lipschitz  on int(dom ϕD,k ) = Rk − int D (see e.g., [581, Cor. 2.2.13]). In Theorem 2.4.8(ii), we supposed k ∈ int C in order to show Lipschitz continuity of ϕD,k . We will now investigate Lipschitz continuity without this assumption even for nonconvex sets D. Notice that for D not convex and y ∈ int(dom ϕD,k ), there are situations in which ϕD,k is not continuous at y or ϕD,k is continuous but not Lipschitz around y (see [533, Example 3.8]). Example 2.4.15. Take C := R2+ , k := (1, 0) and −D1 := ((−∞, 0] × (−∞, 1]) ∪ ([0, 1] × (−∞, 0]) −D2 := {(a, b) | a ∈ (0, +∞), b ≤ −a2 } ∪ ((−∞, 0] × (−∞, 1]) . Then, ⎧ ⎨ +∞ if v > 1, if 0 < v ≤ 1, ϕD1 ,k (u, v) = u ⎩ u − 1 if v ≤ 0,

⎧ if v > 1, ⎨ +∞ if 0 < v ≤ 1, ϕD2 ,k (u, v) = u √ ⎩ u − −v if v ≤ 0.

Obviously, (0, 0) ∈ int(dom ϕD1 ), but ϕD1 is not continuous at (0, 0), and (0, 0) ∈ int(dom ϕD2 ), ϕD2 is continuous at (0, 0) but ϕD2 is not Lipschitz at (0, 0).

88

2 Functional Analysis over Cones

The Lipschitz continuity of ϕD,k around a point y ∈ dom ϕD,k in finitedimensional spaces can be obtained using the notion of epi-Lipschitzianity of a set as introduced by Rockafellar [484] (see also [485]). We now enlarge this notion in our context. The set D ⊂ Y is said to be epi-Lipschitz at y ∈ D in the direction v ∈ Y \ {0} if there exist ε > 0 and a (closed, convex, symmetric) neighborhood V0 of 0 in Y such that ∀y ∈ (y + V0 ) ∩ (−D), ∀w ∈ v + V0 , ∀λ ∈ [0, ε] : y + λw ∈ −D.

(2.55)

Notice that (2.55) holds for v = 0 if and only if y ∈ − int D. Furthermore, if y ∈ − int D then D is epi-Lipschitz at y ∈ −D in any direction. The next assertion is derived in [533, Theorem 3.9]. A characterization of the Lipschitz behavior of the functional ϕD,k via the SNC property of the set D is given in Theorem 4.3.7. Theorem 2.4.16. Let D be a proper closed subset of Y and k ∈ Y \ {0} be such that D + R+ k = D. Assume that y0 ∈ Y is such that ϕD,k (y0 ) ∈ R. Then, ϕD,k is finite and Lipschitz on a neighborhood of y0 if and only if D is epi-Lipschitz at y := y0 − ϕD,k (y0 )k in the direction −k. Proof. Since ϕD,k is translation invariant along k (see (2.45)), we obtain ϕD,k (y) = 0. Because of (2.44), it holds that −D = {y ∈ Y | ϕD,k (y) ≤ 0}. Furthermore, the finite values of ϕD,k are attained (because D is closed). Suppose that there exist a closed, convex, symmetric neighborhood V of 0 in Y and p : Y → R a continuous seminorm such that ϕD,k is finite on y0 + V and |ϕD,k (y) − ϕD,k (y  )| ≤ p(y −y  ) for all y, y  ∈ y0 +V . Taking into account (2.45), we obtain that ϕD,k is finite on y + V and ∀y, y  ∈ y + V :

|ϕD,k (y) − ϕD,k (y  )| ≤ p(y − y  ).

Consider V0 := {y ∈ 13 V | p(y) ≤ 1} and ε ∈ ]0, 1] such that εk ∈ V0 . We will show that (2.55) holds with v replaced by −k. Indeed, take y ∈ (y + V0 ) ∩ (−D), w ∈ −k + V0 and λ ∈ [0, ε]. Then y − λk − y ∈ V0 + V0 ⊂ V and y + λw − y = y − λk − y + λ(w + k) ∈ V0 + V0 + V0 ⊂ V , and so ϕD,k (y + λw) ≤ ϕD,k (y − λk) + p(λ(w + k)) = ϕD,k (y) − λ + λp(w + k) ≤ λ(p(w + k) − 1) ≤ 0. Therefore, y + λw ∈ −D. This shows that D ⊂ Y is epi-Lipschitz at y = y0 − ϕD,k (y0 )k in the direction −k. Suppose now that D ⊂ Y is epi-Lipschitz at y = y0 − ϕD,k (y0 )k in the direction −k, i.e., (2.55) holds with v replaced by −k. Consider r ∈ (0, ε] be such that 2r(1 + p(k)) < 1, where p := pV0 . Obviously, {y | p(y) ≤ λ} = λV0 for every λ > 0 and if p(y) = 0 then y ∈ λV0 for every λ > 0. Put M := {y ∈ y + rV0 | |ϕD,k (y)| ≤ p(y − y)}.

2.4 Separation Theorems for Not Necessarily Convex Sets

89

It is clear that y ∈ M . We claim that M = y + rV0 . Let y ∈ M , w ∈ V0 and λ ∈ [0, r]. Putting y  := y − ϕD,k (y)k ∈ −D, we obtain ϕD,k (y  ) = 0 and p(y  − y) ≤ p(y − y) + |ϕD (y)| · p(k) ≤ r (1 + p(k))
0, and ϕD,k (y  + v) = ϕD,k (y  + λ(λ−1 v)) ≤ λ for every λ ∈ ]0, r], whence ϕD,k (y  + v) ≤ 0 = p(v). Hence, we get ϕD,k (y  + v) ≤ p(v). On the other hand, suppose that ϕD,k (y  + v) < −p(v). Since 2r(1 + p(k)) < 1, there exists t > 0 such that r+(t+r)p(k) ≤ 1/2 and ϕD,k (y  +v) < −p(v) − t =: t < 0. This yields y  + v − t k ∈ −D. Furthermore, taking into account (2.56), p(y  +v −t k −y) ≤ p(y  −y)+p(v)+(t+p(v))p(k) ≤ 1/2+r +(t+r)p(k) ≤ 1, and so y  + v − t k ∈ (y + V0 ) ∩ −D. Employing (2.55), if p(v) > 0 then   1 v ∈ (−D), y  + tk = y  − (t + p(v)) k = y  + v − t k + p(v) − k − p(v) while if p(v) = 0 then

  y  + (1 − γ)tk = y  + v − t k + γt −k − (γt)−1 v ∈ (−D)

for γ := min{ 12 , εt−1 }. We obtain the contradiction 0 = ϕD,k (y  ) ≤ −t < 0 in the first case and 0 = ϕD,k (y  ) ≤ −t(1 − γ) < 0 in the second case. Therefore, ϕD,k (y  + v) ∈ R and |ϕD,k (y  + v) − ϕD,k (y  )| ≤ p(v) for every v ∈ rV0 , or equivalently, ϕD,k (y + v) ∈ R,

|ϕD,k (y + v) − ϕD,k (y)| ≤ p(v)

for all v ∈ rV0 . (2.57)

Provided y := y ∈ M , we obtain y+rV0 ⊂ M from (2.57), and so M = y+rV0 as assert. Furthermore, if y, y  ∈ y + 12 rV0 , then y ∈ M and y  = y +v for some v ∈ rV0 . Employing again (2.57), we obtain |ϕD,k (y  ) − ϕD,k (y)| ≤ p(y  − y), i.e., ϕD,k is Lipschitz on a neighborhood of y0 and the conclusion follows.  We get the following conclusions (see [533, Corollaries 3.10 and 3.11]). Corollary 2.4.17. Let D be a proper, closed subset of Y and k ∈ Y \ {0} be such that D + R+ k = D. Consider y ∈ − bd D. If D is epi-Lipschitz at y in the direction −k, then ϕD,k (y) = 0. Proof. Consider ε ∈ ]0, 1[ and V0 provided by (2.55) with v := −k. Suppose that ϕD,k (y) = 0. Then, there exists t > 0 such that tpV0 (k) ≤ ε and y := y + tk ∈ −D. For λ := t in (2.55), we get y + t(−k + V0 ) = y + tV0 ⊂ −D, a contradiction to y ∈ − bd D. 

90

2 Functional Analysis over Cones

Corollary 2.4.18. Let D be a proper, closed, subset of Y and k ∈ Y \ {0} be such that D + R+ k = D. Assume that dim Y < +∞ and y ∈ − bd D. Then, ϕD,k is finite and Lipschitz on a neighborhood of y if and only if −k ∈ int TC (−D; y), where TC (−D; y) is the Clarke’s tangent cone of −D at y. Proof. Taking into account [485, Th. 2], −k ∈ int TC (−D; y) if and only if D is epi-Lipschitz at y in the direction −k. The assertion follows from Theorem 2.4.16 and Corollary 2.4.17.  In [76, Prop. 6], the fact that ϕD,k is Lipschitz on a neighborhood of y under the condition −k ∈ int TC (−D; y) is derived in the case Y = Rm (and C = Rm + ). 2.4.3 Separation Properties Now all preliminaries are done, and we can prove the following nonconvex separation theorem. Theorem 2.4.19. Nonconvex Separation Theorem. Let D ⊂ Y be a closed proper set with nonempty interior, A ⊂ Y a nonempty set such that A∩(− int D) = ∅ and k ∈ Y . Assume that one of the following two conditions holds: (i) there exists a cone C ⊂ Y such that k ∈ int C and D + int C ⊂ D; (ii) D is convex, (2.41) and (2.47) are satisfied. Then, ϕD,k is a finite-valued continuous function such that ∀ x ∈ A, ∀ y ∈ int D : ϕD,k (x) ≥ 0 > ϕD,k (−y);

(2.58)

moreover, ϕD,k (x) > 0 for every x ∈ int A. Proof. By Proposition 2.4.4, ϕD,k is a finite-valued continuous function. By Theorem 2.4.1(f) we have that − int D = {y ∈ Y | ϕD,k (y) < 0}, and so (2.58) obviously holds.  It is evident  that in our conditions int A ∩ (− int D) = ∅, whence int A ∩ − cl(int D) = ∅. From Remark 2.4.3, we obtain that int A ∩ (−D) = ∅. Using now (2.44) we obtain that ϕD,k (x) > 0 for every x ∈ int A.  Of course, if we impose additional conditions on D, we have additional properties of the separating functional ϕD,k (see Theorem 2.4.1). As we observed in Proposition 2.4.4, when condition (i) of the preceding theorem holds, condition (2.47) holds, too. When D is a pointed convex cone the converse implication is valid if one replaces the interior with the algebraic interior. To be more precise, we have the following result. Proposition 2.4.20. Let D ⊂ Y be a convex cone such that D = −D (i.e., D is not a linear subspace) and k ∈ Y . Then D + Rk = Y if and only if {k, −k} ∩ Di = ∅.

2.4 Separation Theorems for Not Necessarily Convex Sets

91

Proof. Assume first that k ∈ Di (the proof for −k ∈ Di being the same). Let y ∈ Y . Because D − k is absorbing, there exists t > 0 such that ty ∈ D − k, and so y ∈ D − t−1 k ⊂ D + Rk. Hence, Y = D + Rk. Assume now that Y = D+Rk. Of course, k = 0 and Y0 := D−D is a linear subspace. Moreover, Y = Y0 + Rk. Suppose that Y0 = Y . Let y ∈ Y0 ⊂ Y . Then y = v + λk for / Y0 some v ∈ D and λ ∈ R, whence λk = y − v ∈ Y0 − D ⊂ Y0 . Since k ∈ (otherwise, Y0 + Rk = Y0 ), we obtain that λ = 0, and so Y0 ⊂ D ⊂ Y0 . This contradicts the fact that D is not a linear subspace. Hence, Y0 = Y . Let us show now that Di = ∅. First, because k ∈ D −D, k = v1 −v2 with v1 , v2 ∈ D. Let v := v1 + v2 ∈ D. Consider λ, μ ≥ 0. Then D + λk = D + 2λv1 − λv ⊂ D − R+ v,

D − μk = D + 2μv2 − μv ⊂ D − R+ v.

Therefore, Y = D − R+ v. Consider y ∈ Y ; then y + v ∈ Y = D − R+ v, and so there exists λ ≥ 0 such that (1 + λ)v + y ∈ D, whence (1 + λ)−1 y + v ∈ D. Since D is convex, this implies that v ∈ Di . Hence, Di = ∅. Suppose by contradiction that {k, −k} ∩ Di = ∅. By the algebraic separation theorem, there exist ϕ, ψ ∈ X  \ {0} (linear functionals on X) such that ∀ y ∈ D : ϕ(k) ≤ 0 ≤ ϕ(y),

ψ(−k) ≤ 0 ≤ ψ(y).

(2.59)

If ϕ(k) = 0, then ϕ(y + λk) ≥ 0 for all y ∈ D and λ ∈ R, and so we get the contradiction ϕ = 0 because D + Rk = Y . Hence, ϕ(k) < 0. Similarly, ψ(−k) < 0. Therefore, there exists α > 0 such that (αϕ + ψ)(k) = 0. From (2.59) we obtain that (αϕ + ψ)(y) ≥ 0 for every y ∈ D. As above, it follows that αϕ + ψ = 0. Hence, ϕ(y − y  ) = ϕ(y) + α−1 ψ(y  ) ≥ 0 for all y, y  ∈ D. Since D−D = Y , we obtain the contradiction ϕ = 0. Hence, {k, −k}∩Di = ∅.  Remark 2.4.21. By a similar proof, if D ⊂ Y is a convex cone and k ∈ Y , then D − R+ k = Y ⇔ k ∈ Di . The functional ϕD,k defined by (2.42) was introduced by Gerstewitz [210] and Gerstewitz and Iwanow [212] in order to show separation theorems for nonconvex sets. The most part of the properties of this functional established in Theorem 2.4.1 and its corollaries were stated by Z˘ alinescu [578], Gerth and Weidner [215], Tammer [524] and G¨ opfert, Tammer, and Z˘ alinescu [231]. Theorem 2.4.19 is stated by Gerth and Weidner [215]; here one can find versions of nonconvex separation theorems without interiority conditions. Proposition 2.4.20 can be found in Z˘ alinescu [578]. A comparison of ϕD,k with other scalarization functionals (by HiriartUrruty [273, 274] and by Gra˜ na Drummond and Svaiter [235]) is given by Bouza, Quintana and Tammer in [89], compare also Guti´errez, Jim´enez, Miglierina and Molho [244] and references therein. Properties (in an algebraic setting) of a functional of type (2.42) defined on linear spaces are derived by Guti´errez, Novo, R´odenas-Pedregosa and Tanaka in [246], see also references therein.

92

2 Functional Analysis over Cones

For an overview on recent developments concerning extensions and properties of the functional (2.42) as well as applications, see the books [329], [532] as well as [254, 255] and references therein. Remark 2.4.22. Jahn [312] studied an interesting unified approach via functionals representing a negative convex cone as the set of solutions to an inequality. Scalarizing functionals well known from the literature as well as Bishop-Phelps functionals are included in the general class of representing functionals ϕ : Y → R with: −C = {y ∈ Y | ϕ(y) ≤ 0},

(2.60)

where Y is a real linear space and C ⊂ Y a convex cone. In Theorem 2.4.1, especially in (2.44), we have shown that the functional ϕD,k given by (2.42) with Y a t.v.s., D = C a closed convex cone, k ∈ Y \ {0} such that (2.41) holds, satisfies the condition −C = {y ∈ Y | ϕC,k (y) ≤ 0}, i.e., the condition (2.60) for −C-representing functionals is fulfilled.

2.5 Characterization of set relations by means of nonlinear functionals In Section 2.3, we introduced generalized set less relations. Now, employing the results from Section 2.4 concerning the functional ϕD,k in (2.42), we are able to give a complete characterization of set less relations. These characterizations are important for deriving algorithms to generate solutions of set-valued optimization problems, see [340] and [338]. For solution concepts in set-valued optimization, see Section 3.2. The characterizations of set less relations in this section are derived in [338], [339], [340] and [342]. The result in the next theorem, shown in [339], provides an equivalent representation for A uD B by means of the functional ϕ := ϕD,k : Y → R introduced in (2.42). Theorem 2.5.1. Assume that D ∈ P(Y ) is a closed proper set in Y , and k ∈ Y \ {0} such that D + [0, +∞)k ⊆ D (see (2.41)). For two sets A, B ∈ P(Y ), the following implication holds true: A⊆B−D

=⇒

sup inf ϕD,k (a − b) ≤ 0 .

a∈A b∈B

(2.61)

Furthermore, if we suppose that there exists k 0 ∈ Y \ {0} fulfilling D + [0, +∞)k 0 ⊆ D such that inf b∈B ϕD,k0 (a − b) is attained for all a ∈ A, then the converse implication is also true, i.e., sup inf ϕD,k0 (a − b) ≤ 0

a∈A b∈B

=⇒

A ⊆ B −D.

(2.62)

2.5 Characterization of set relations by means of nonlinear functionals

93

The next assertion (see [340, Theorem 3.7]) allows a first insight into the relationship between the generalized lower set less relation and its characterization via the functional ϕD,k . Theorem 2.5.2. Assume that D ∈ P(Y ) is a closed proper set in Y , k ∈ Y \ {0} such that D + [0, +∞)k ⊆ D (see (2.41)) is satisfied. Furthermore,  ⊆ Y such that D + D  ⊆ D, and A, B ∈ P(Y ). Then, it holds that suppose D  =⇒ inf ϕD,k (a) ≤ inf ϕD,k (b). B ⊆A+D a∈A

b∈B

Proof. We choose an arbitrary element k ∈ Y \{0} such that D+[0, +∞)k ⊆  Then, we obtain D is fulfilled, and let B ⊆ A + D. . ∀ b ∈ B, ∃ a ∈ A : b ∈ a + D Taking into account the monotonicity of the functional ϕD,k (see Theorem 2.4.1 (d)), we get ∀ b ∈ B, ∃ a ∈ A : ϕD,k (a) ≤ ϕD,k (b) . Hence, the stated inequality holds and the proof is completed.



As a consequence of Theorem 2.5.2, we obtain the following corollary, which was proven in [338, Theorem 3.15]. Corollary 2.5.3. Suppose that D ∈ P(Y ) is a proper, closed, convex cone in Y , k ∈ Y \ {0} and A, B ∈ P(Y ). Then, the following implication holds true B ⊆ A + D =⇒ inf ϕD,k (a) ≤ inf ϕD,k (b). a∈A

b∈B

Concerning the generalized lower set less relation, the next result (corresponding to Theorem 2.5.1) is derived in [340, Theorem 3.9]. Theorem 2.5.4. Suppose that D ∈ P(Y ) is a proper closed set in Y and k ∈ Y \{0} such that D+[0, +∞)k ⊆ D is fulfilled. For two sets A, B ∈ P(Y ), the following implication holds true: B ⊆A+D

=⇒

sup inf ϕD,k (a − b) ≤ 0 . b∈B

a∈A

Furthermore, suppose that there exists k 0 ∈ Y \{0} satisfying D+[0, +∞)k 0 ⊆ D such that inf a∈A ϕD,k0 (a − b) is attained for all b ∈ B, then sup inf ϕD,k0 (a − b) ≤ 0 b∈B

a∈A

=⇒

B ⊆ A+D.

Proof. Consider B ⊆ A + D. That means ∀ b ∈ B, ∃ a ∈ A : b ∈ a + D

=⇒

∀ b ∈ B, ∃ a ∈ A : a − b ∈ −D .

94

2 Functional Analysis over Cones

Taking into account (2.44) in Theorem 2.4.1 with λ = 0 and y = a − b, we obtain ∀ b ∈ B, ∃ a ∈ A : ϕD,k (a − b) ≤ 0 , and hence, sup inf ϕD,k (a − b) ≤ 0 . b∈B

a∈A

Conversely, consider k ∈ Y \ {0} such that for all b ∈ B the infimum inf a∈A ϕD,k0 (a − b) is attained. Suppose 0

sup inf ϕD,k0 (a − b) ≤ 0 .

b∈B a∈A

(2.63)

That means ∀ b ∈ B : inf ϕD,k0 (a − b) ≤ 0 . a∈A

Since for all b ∈ B the infimum inf a∈A ϕD,k0 (a − b) is attained, we get ∀ b ∈ B ∃ a ∈ A : ϕD,k0 (a − b) = inf ϕD,k0 (a − b) ≤ 0 . a∈A

Taking into account (2.44) in Theorem 2.4.1 with λ = 0 and y = a − b, we obtain ∀ b ∈ B ∃ a ∈ A : a − b ∈ −D, therefore B ⊆ A + D.



Further results concerning a characterization of set less relations via nonlinear scalarization are given in [256] and [271]. Recently, characterizations of set less relations by a functional ϕ : Y → R given by ϕ(y) ≤ 0 ⇐⇒ y ∈ −C (where C is a proper convex cone in a real linear space Y ) and applications for deriving necessary conditions for set inequalities using directional derivatives have been shown by Jahn in [311].

2.6 Convexity Notions for Sets and Multifunctions Let X be a real topological vector space. Definition 2.6.1. Let A ⊆ X be a nonempty set. We say that A is αconvex, where α ∈ ]0, 1[, if αx + (1 − α)y ∈ A for all x, y ∈ A. The set A is mid-convex if A is 12 -convex. The set A is nearly convex if A is α-convex for some α ∈ ]0, 1[. The empty set is α-convex for all α ∈ ]0, 1[ (and so nearly convex). Of course, A is convex if andonly if A is α-convex for every α ∈ ]0, 1[. Let α ∈ ]0, 1[ and set Λα := n∈N Λα n , where Λα 0 := {0, 1},

α Λα n+1 := {αt + (1 − α)s | t, s ∈ Λn }, n ∈ N.

2.6 Convexity Notions for Sets and Multifunctions

95

α α It is obvious that Λα n ⊆ Λn+1 for every n ∈ N and Λ is α-convex. Moreover, if A is α-convex, then λx + (1 − λ)y ∈ A for all x, y ∈ A and λ ∈ Λα . Indeed, fixing x, y ∈ A and taking Λx,y := {λ ∈ [0, 1] | λx + (1 − λ)y ∈ A}, one obtains easily, by induction, that Λα n ⊆ Λx,y for every n ∈ N.

Lemma 2.6.2. Let Λ ⊆ [0, 1] be such that 0, 1 ∈ Λ and ∃ δ ∈ ]0, 1/2], ∀ t, s ∈ Λ, ∃ λ ∈ [δ, 1 − δ] : λt + (1 − λ)s ∈ Λ. Then cl Λ = [0, 1]. In particular, cl Λα = [0, 1]. Proof. Suppose that cl Λ = [0, 1]. Then there exists t ∈ [0, 1] \ cl Λ = ]0, 1[ \ cl Λ. Since the last set is open in R, it follows that α := sup{t ∈ [0, t] | t ∈ cl Λ} < t,

β := inf{t ∈ [t, 1] | t ∈ cl Λ} > t

and α, β ∈ cl Λ. Let δ := δ(β − α) > 0. From the definitions of α and β, there exist α, β ∈ Λ such that α − δ < α ≤ α, β ≤ β < β + δ. Since α, β ∈ Λ, there ¯ 1 − δ] ¯ with λα + (1 − λ)β ∈ Λ. But exists λ ∈ [δ, λα + (1 − λ)β ≤ λα + (1 − λ)(β + δ) = β + δ − λ(β − α + δ) ≤ β + δ − δ(β − α + δ) = β − δδ < β, λα + (1 − λ)β ≥ λ(α − δ) + (1 − λ)β = α − δ + (1 − λ)(β − α + δ) ≥ α − δ + δ(β − α + δ) = α + δδ > α, contradicting the fact that Λ∩ ]α, β[ = ∅. Therefore, cl Λ = [0, 1]. It is obvious that Λ = Λα satisfies the hypothesis with δ = min{α, 1 − α}.  Proposition 2.6.3. Let X be a topological vector space and A ⊆ X a nonempty nearly convex set. Then (i) cl A is convex. (ii) If x ∈ cor A and y ∈ A, then [x, y] ⊆ A. Moreover, if x ∈ int A and y ∈ cl A, then [x, y[ ⊆ int A. (iii) If int A = ∅, then int A is convex and cor A = int A. Proof. By hypothesis, there exists α ∈ ]0, 1[ such that A is α-convex. Note first that for all γ ∈ Λα and x, y ∈ A we have that γx + (1 − γ)y ∈ A. (i) Consider x, y ∈ cl A and λ ∈ ]0, 1[. Since λ ∈ cl Λα , there exist the nets (xi ), (yi ) ⊆ A, and (λi ) ⊆ Λα converging to x, y, and λ, respectively. Since λi xi + (1 − λi )yi ∈ A for every i, we obtain that λx + (1 − λ)y ∈ cl A. Therefore, cl A is convex. (ii) Let x ∈ cor A and y ∈ A. Suppose that there exists λ0 ∈ ]0, 1[ such / A; of course, x = y. Since x ∈ cor A, there exists that λ0 x + (1 − λ0 )y ∈ δ ∈ ]0, 1[ such that (1 − λ)x + λy ∈ A for every λ ∈ [−δ, δ]. Let

96

2 Functional Analysis over Cones

γ := sup{γ ∈ ]0, 1[ | (1 − λ)x + λy ∈ A ∀ λ ∈ [0, γ[}. Of course, δ ≤ γ ≤ λ0 and (1 − λ)x + λy ∈ A for every λ ∈ [0, γ[. Let γ ∈ Λα ∩ [0, γ[. It follows that for all λ ∈ [0, γ[ we have (1 − γ) ((1 − λ)x + λy) + γy = (1 − (λ + γ − λγ)) x + (λ + γ − λγ)y ∈ A. But γ ∈ [γ, γ + γ(1 − γ)[ = {λ + γ − λγ | λ ∈ [0, γ[}. From the above relation it follows that (1 − λ)x + λy ∈ A for every λ ∈ [0, γ + γ(1 − γ)[, contradicting the choice of γ. Let now x ∈ int A, y ∈ cl A, and λ ∈ ]0, 1[, and set z := λx + (1 − λ)y. There exists an open V ∈ NX such that x + (V + V ) ⊆ int A ⊆ cor A; set λ V (∈ NX ). Since y ∈ cl A, there exists y  ∈ (y + U ) ∩ A, and so U := 1−λ  y = y − u with u ∈ U. Then z + λV = λ(x + V ) + (1 − λ)y = λ[x + λ−1 (1 − λ)u + V ] + (1 − λ)y  ⊆ λ(x + V + V ) + (1 − λ)y  ⊆ λ cor A + (1 − λ)y  ⊆ A, where the last inclusion is provided by the first part; hence, z ∈ int A. (iii) Suppose that int A = ∅ and fix x0 ∈ int A. Consider x ∈ cor A. There exist y ∈ A and λ ∈ ]0, 1[ such that x = (1 − λ)x0 + λy. From the second part of (ii) we get x ∈ int A.  Aleman ([12]) established the convexity of cl A and int A, as well as that [x, y[ ⊆ int A for x ∈ int A and y ∈ A, when A is nearly convex; the proof of Proposition 2.6.3 (i) is that of [580, Prop. 2.4]. An immediate consequence of the preceding proposition is the next corollary. Corollary 2.6.4. Let X be a topological vector space and A ⊆ X a nonempty nearly convex set. If A is open or closed, then A is convex. Remark 2.6.5. If the nonempty subset A of the t.v.s. X satisfies the condition ∃ δ ∈ ]0, 1/2], ∀ x, y ∈ A, ∃ λ ∈ [δ, 1 − δ] : λx + (1 − λ)y ∈ A, then cl A is convex, but int A is not necessarily convex. Indeed, the proof of the fact that cl A is convex is similar to that of Proposition 2.6.3, taking into account Lemma 2.6.2. The set A = R \ {0} satisfies the above condition, is open, but is not convex. Note that the intersection of an arbitrary family of α-convex sets is also α-convex, but the intersection of two nearly convex sets may not be nearly 1 1 convex; indeed, Λ 2 ∩ Λ 3 = {0, 1} is not nearly convex. If X, Y are real vector spaces, T : X → Y is a linear operator and A ⊆ X, B ⊆ Y are α-convex, then T (A) and T −1 (B) are α-convex, too. Let X, Y be arbitrary nonempty sets. A function Γ : X → 2Y is called a multifunction, and is denoted by Γ : X ⇒ Y . So, if Γ : X ⇒ Y is a

2.6 Convexity Notions for Sets and Multifunctions

97

multifunction, the image of Γ at x ∈ X is a (possibly empty) subset Γ (x) of Y . The domain of Γ is dom Γ := {x ∈ X | Γ (x) = ∅}, while its image is Im Γ := {y ∈ Y | ∃ x ∈ X : y ∈ Γ (x)}. The multifunction Γ : X ⇒ Y is usually identified with its graph, gr Γ := {(x, y) ∈ X × Y | y ∈ Γ (x)}. In this way, with each multifunction one associates a relation and vice versa. It is obvious that dom Γ = projX (grΓ ) and Im Γ = projY (gr Γ ). The image by Γ of the set A ⊆ X is Γ (A) := x∈A Γ (x); so, Im Γ = Γ (X). The inverse image by Γ of the set B ⊆ Y is Γ −1 (B) := {x ∈ X | Γ (x) ∩ B = ∅}. In fact, Γ −1 (B) is the image of B by the inverse multifunction Γ −1 : Y ⇒ X defined by Γ −1 (y) := {x ∈ X | y ∈ Γ (x)}; so Γ −1 (y) = Γ −1 ({y}). Of course, dom Γ −1 = Im Γ and Im Γ −1 = dom Γ . We shall use in the sequel also another type of inverse image: Γ +1 (B) := {x ∈ X | Γ (x) ⊆ B}. When Δ : Y ⇒ Z is another multifunction, the composition of Δ and Γ is the multifunction Δ ◦ Γ : X ⇒ Z defined by Δ ◦ Γ (x) := {z ∈ Z | ∃ y ∈ Γ (x) with z ∈ Δ(y)}; note that gr(Δ ◦ Γ ) = PrX×Z gr Γ × Z ∩ X × gr Δ . When Y is a linear space and Γ, Γ1 , Γ2 : X ⇒ Y , we define the sum Γ1 + Γ2 and multiplication by a scalar γΓ as the multifunctions Γ1 + Γ2 , γΓ : X ⇒ Y defined by (Γ1 + Γ2 )(x) := Γ1 (x) + Γ2 (x), (γΓ )(x) := γ · Γ (x) with the usual conventions that A + ∅ := ∅ + A := ∅, γ · ∅ = ∅ for A ⊆ Y . Suppose now that X, Y are real vector spaces and Γ : X ⇒ Y . We say that Γ is α-convex (mid-convex, nearly convex, convex) if gr Γ is α-convex (mid-convex, nearly convex, convex). It is obvious that if Γ is α-convex (mid-convex, nearly convex, convex), so are dom Γ and Im Γ . It is easy to see that Γ is α-convex if and only if αΓ (x) + (1 − α)Γ (y) ⊆ Γ (αx + (1 − α)y)

∀ x, y ∈ dom Γ.

Let C ⊆ Y be a convex cone. We say that Γ is C-α-convex (C-midconvex, C-nearly convex, C-convex) if the multifunction ΓC : X ⇒ Y,

ΓC (x) := Γ (x) + C,

is α-convex (mid-convex, nearly convex, convex). Of course, Γ is C-α-convex if and only if αΓ (x) + (1 − α)Γ (y) ⊆ Γ (αx + (1 − α)y) + C

∀ x, y ∈ dom Γ.

In a similar way we get concavity notions for multifunctions; so Γ is αconcave or C-α-concave if αΓ (x) + (1 − α)Γ (y) ⊇ Γ (αx + (1 − α)y)

∀ x, y ∈ dom Γ

or αΓ (x) + (1 − α)Γ (y) + C ⊇ Γ (αx + (1 − α)y) respectively.

∀ x, y ∈ dom Γ,

98

2 Functional Analysis over Cones

Observe that for C = {0} the preceding notions reduce to those in which C is missing, while for C = Y , one has ΓC (x) = Y for x ∈ dom Γ and ΓC (x) = ∅ elsewhere, and so these notions are uninteresting because the corresponding results are trivial. Note that sometimes gr ΓC is denoted by epiC Γ , or simply epi Γ , and is called the epigraph of Γ (w.r.t. C). The sublevel set of Γ of height y (w.r.t. C) is the set levΓ (y) := {x ∈ X | Γ (x) ∩ (y − C) = ∅}; when Y is a topological vector space and int C = ∅ we also consider the strict sublevel set of Γ of height y (w.r.t. C) defined by lev< Γ (y) := {x ∈ X | Γ (x) ∩ (y − int C) = ∅}. In this way we get the sublevel and strict sublevel multifunctions levΓ , lev< Γ : Y ⇒ X. We say that Γ is C-α-quasiconvex (C-mid-quasiconvex, C-nearly quasiconvex, C-quasiconvex) if, for every z ∈ Y , the sublevel set levΓ (z) is α-convex (mid-convex, nearly convex, convex). An equivalent definition of C-α-quasiconvexity is that (Γ (x) + C) ∩ (Γ (y) + C) ⊆ Γ (αx + (1 − α)y) + C

∀ x, y ∈ dom Γ.

Notice that Γ is C-α-quasiconvex if Γ (x) ⊆ Γ (αx + (1 − α)y) + C or Γ (y) ⊆ Γ (αx + (1 − α)y) + C for all x, y ∈ dom Γ . Note also that Γ is C-α-quasiconvex (C-mid-quasiconvex, C-nearly quasiconvex, C-quasiconvex) whenever Γ is C-α-convex (Cmid-convex, C-nearly convex, C-convex). In order to characterize the C-α-quasiconvexity of Γ we use the scalarization functional ϕ introduced in relation (2.42) (for D replaced by C). Proposition 2.6.6. ([123, Lemma 2.3], [389, Prop. 2.3]) Let X, Y be topological vector spaces, Γ : X ⇒ Y have nonempty domain, C ⊆ Y be a proper convex cone, and k 0 ∈ int C be fixed. Then Γ is C-α-quasiconvex if and only if ϕz ◦Γ : X ⇒ R is R+ -α-quasiconvex for every z ∈ Y , where ϕz (y) := ϕ(y−z) and (ϕz ◦ Γ )(x) := {ϕz (y) | y ∈ Γ (x)}. Proof. Endowing R withe the order induced by R+ , for t ∈ R and z ∈ Y we have that levϕz ◦Γ (t) = {x ∈ X | ϕz ◦ Γ (x) ∩ (t − R+ ) = ∅} = {x ∈ X | ∃ y ∈ Γ (x) : ϕz (y) ≤ t} = {x ∈ X | ∃ y ∈ Γ (x) : ϕ(y − z) ≤ t} = {x ∈ X | ∃ y ∈ Γ (x) : y − z ∈ tk 0 − C} (see Corollary 2.4.5) = {x ∈ X | Γ (x) ∩ (z + tk 0 − C) = ∅} = levΓ (z + tk 0 ).

2.6 Convexity Notions for Sets and Multifunctions

The conclusion follows.

99



We introduce some useful notions and notation related to vector-valued functions. To Y we adjoin a greatest element ∞ (∈ / Y ), thereby obtaining Y • := Y ∪{∞} and C • := C ∪{∞}. We consider that y +∞ = ∞, λ·∞ = ∞, and y ≤C ∞ for all y ∈ Y • and λ ∈ R+ . Of course, if f : X → Y • , the domain of f is dom f := {x ∈ X | f (x) ∈ Y }, the sublevel and strict sublevel sets of f of height y are levf,C (y) := {x ∈ X | f (x) ≤C y} and lev< f,C (y) := {x ∈ X | y − f (x) ∈ int C}, and the epigraph of f is epiC f := {(x, y) ∈ X × Y | f (x) ≤C y}; f is proper if dom f = ∅. With such an f we associate the multifunction Γf,C : X ⇒ Y whose graph is epiC f . So, Γf,C (x) = f (x) + C for every x ∈ dom Γf,C = dom f , < epiC f = epi Γf,C , levf,C (y) = levΓf,C (y), and lev< f,C (y) = levΓf,C (y) for every y ∈ Y ; in particular, Γf,C = (Γf,C )C . We say that f : X → Y • is C-α-convex (C-mid-convex, C-nearly convex, C-convex, C-α-quasiconvex, C-mid-quasiconvex, C-nearly quasiconvex, C-quasiconvex) if the multifunction Γf,C is C-α-convex (C-mid-convex, C-nearly convex, C-convex, C-α-quasiconvex, C-mid-quasiconvex, C-nearly quasiconvex, C-quasiconvex); in particular, f is C-convex if and only if f (αx1 + (1 − α)x2 ) ≤C αf (x1 ) + (1 − α)f (x2 ) ∀ x1 , x2 ∈ X, ∀ α ∈ [0, 1]. If f is C-α-convex (C-mid-convex, C-nearly convex, C-convex), then dom f is so, and f is C-α-quasiconvex (C-mid-quasiconvex, C-nearly quasiconvex, C-quasiconvex). We also remark that f is C-α-quasiconvex, provided that   {f (x), f (y)} ∩ f (αx + (1 − α)y) + C = ∅ ∀ x, y ∈ dom f. For the converse we need the condition C ∪ (−C) = Y . Indeed, let x, y ∈ dom f ; suppose that f (x) ∈ / f (αx + (1 − α)y) + C, that is, αx + (1 − α)y ∈ / / levf,C (f (x)), levf,C (f (x)). Since levf,C (f (x)) is α-convex, we deduce that y ∈ which means that f (x) − f (y) ∈ / C. This fact, together with C ∪ (−C) = Y , implies that f (y) − f (x) ∈ C, that is, x ∈ levf,C (f (y)). Since levf,C (f (y)) is α-convex, we obtain that f (y) ∈ f (αx + (1 − α)y) + C, which completes the proof. This remark shows that R+ -(α-)quasiconvexity of real-valued functions reduces to usual (α-)quasiconvexity. So, in Proposition 2.6.6, we may replace the R+ -α-quasiconvexity of ϕz ◦ f with the usual α-quasiconvexity. Let X, Y be topological vector spaces with Y ordered by the proper convex cone C, and f : X → Y • . Definition 2.6.7. We call the subdifferential of f at x0 ∈ dom f the set ∂ ≤C f (x0 ) := {T ∈ L(X, Y ) | T (x − x0 ) ≤C f (x) − f (x0 ) ∀ x ∈ X}, where L(X, Y ) denotes the set of continuous linear operators from X into Y .

100

2 Functional Analysis over Cones

It is obvious that ∂ ≤C f (x0 ) is a convex subset of L(X, Y ). When f : X → Y is a sublinear operator; i.e., f (x1 + x2 ) ≤C f (x1 ) + f (x2 ), f (0) = 0, and f (αx) = αf (x) for all x, x1 , x2 ∈ X and α ∈ (0, ∞), and C is pointed, the formula •

∂ ≤C f (x0 ) = {T ∈ ∂ ≤C f (0) | T (x0 ) = f (x0 )} holds for all x0 ∈ dom f . The following important result was stated by Valadier [548]. Theorem 2.6.8. Let (X, ·) and (Y, ·) be real reflexive Banach spaces and C ⊆ Y a proper convex cone with a weakly compact base. If f : X → Y • is a proper C-convex operator, continuous at some point of its domain, then  ∗  y ◦ ∂ ≤C f (x) = ∂(y ∗ ◦ f )(x) ∀ x ∈ int(dom f ), ∀ y ∗ ∈ C + , where we use the convention that y ∗ (∞) = ∞ for y ∗ ∈ C + .

2.7 Continuity Notions for Multifunctions In this section X and Y are separated (in the sense of Hausdorff) topological spaces and Γ : X ⇒ Y is a multifunction. When mentioned explicitly, Y is a Hausdorff topological vector space. Definition 2.7.1. Let x0 ∈ X. We say that (a) Γ is upper continuous (u.c.) at x0 if ∀ D ⊆ Y, D open, Γ (x0 ) ⊆ D, ∃ U ∈ NX (x0 ) ∀ x ∈ U : Γ (x) ⊆ D, (2.64) i.e., Γ +1 (D) is a neighborhood of x0 for each open set D ⊆ Y such that Γ (x0 ) ⊆ D; (b) Γ is lower continuous (l.c.) at x0 if ∀ D ⊆ Y, D open, Γ (x0 )∩D = ∅, ∃ U ∈ NX (x0 ), ∀ x ∈ U : Γ (x)∩D = ∅, (2.65) i.e., Γ −1 (D) is a neighborhood of x0 for each open set D ⊆ Y such that Γ (x0 ) ∩ D = ∅. (c) Γ is continuous at x0 if Γ is u.c. and l.c. at x0 . (d) Γ is upper continuous (lower continuous, continuous) if Γ is so at every x ∈ X; (e) Γ is lower continuous at (x0 , y0 ) ∈ X × Y if ∀ V ∈ NY (y0 ), ∃ U ∈ NX (x0 ), ∀ x ∈ U : Γ (x) ∩ V = ∅.

2.7 Continuity Notions for Multifunctions

101

  It follows from the definition that x0 ∈ int(dom Γ ) and y0 ∈ cl Γ (x0 ) if Γ is l.c. at (x0 , y0 ) and Γ is l.c. at x0 ∈ dom Γ if and only if Γ is l.c. at every (x0 , y) with y ∈ Γ (x0 ); moreover, Γ is l.c. at every x0 ∈ X \ dom Γ . If x0 ∈ X \ dom Γ , then Γ is u.c. at x0 if and only if x0 ∈ int(X \ dom Γ ). So, if Γ is u.c., then dom Γ is closed, while if Γ is l.c., then dom Γ is open. The next result follows immediately from the definitions. Proposition 2.7.2. The following assertions hold: (i) Γ D (ii) Γ D

is upper continuous if and only if Γ +1 (D) is open for every open set ⊆ Y if and only if Γ −1 (F ) is closed for every closed set F ⊆ Y ; is lower continuous if and only if Γ −1 (D) is open for every open set ⊆ Y if and only if Γ +1 (F ) is closed for every closed set F ⊆ Y .

The limit inferior of Γ at x0 ∈ X is defined by lim inf Γ (x) := {y ∈ Y | ∀ V ∈ NY (y), ∃ U ∈ NX (x0 ), x→x0 ∀ x ∈ U : Γ (x) ∩ V = ∅}, while the limit superior of Γ at x0 ∈ X is defined by  lim sup Γ (x) := y ∈ Y | ∀ V ∈ NY (y), ∀ U ∈ NX (x0 ), x→x0

 ∃ x ∈ U : Γ (x) ∩ V = ∅     = cl Γ (U ) | U ∈ NX (x0 ) .

It is clear from the definitions that Γ is l.c. at x0 ∈ X if and only if Γ (x0 ) ⊆ lim inf x→x0 Γ (x). Notice that x0 ∈ int (dom Γ ) whenever lim inf x→x0 Γ (x) = ∅, and (2.66) lim inf Γ (x) ⊆ cl Γ (x0 ) ⊆ lim sup Γ (x), x→x0

x→x0

lim inf x→x0 Γ (x) and lim supx→x0 Γ (x) being closed sets; moreover cl Γ (x0 ) = lim supx→x0 Γ (x) whenever x0 is an isolated point of X (i.e., {x0 } ∈ NX (x0 )). Sometimes in the definitions of lim inf x→x0 Γ (x) and lim supx→x0 Γ (x) one takes x ∈ dom Γ . Note that this situation reduces to the preceding one by considering the restriction of Γ at dom Γ . When (X, d) is a metric space and A, B ⊆ X, the excess of A over B is e(A, B) := sup dist(x, B) if A, B = ∅, x∈A

e(∅, B) = 0,

e(A, ∅) = ∞ if A = ∅,

where dist(x, A) := inf a∈A d(x, a); in particular, dist(x, ∅) = ∞. It is easy to show that for a nonempty and compact set A ⊆ X and an open set D ⊆ X, if A ⊆ D, then there exists ε > 0 such that Aε := {x ∈ X | dist(x, A) < ε} = ◦



{x | A ∩ B(x, ε) = ∅} ⊆ D, where B(x, ε) := {x ∈ X | d(x, x ) < ε}. In particular cases, for X or/and Y , one has useful characterizations for the elements of lim inf x→x0 Γ (x) and lim supx→x0 Γ (x).

102

2 Functional Analysis over Cones

Proposition 2.7.3. Let x0 ∈ X and y ∈ Y . (i) y ∈ lim inf x→x0 Γ (x) if and only if ∀ X ⊇ (xi )i∈I → x0 , ∃ (xϕ(j) )j∈J , ∃ Y ⊇ (yj )j∈J → y, ∀ j ∈ J : yj ∈ Γ (xϕ(j) ),

(2.67)

and y ∈ lim supx→x0 Γ (x) if and only if ∃ X ⊇ (xi )i∈I → x0 , ∃ Y ⊇ (yi )i ∈I → y, ∀ i ∈ I : yi ∈ Γ (xi ). (2.68) (ii) If X and Y are first-countable, then y ∈ lim inf x→x0 Γ (x) if and only if ∀ X ⊇ (xn ) → x0 , ∃ Y ⊇ (yn ) → y, ∃ n0 ∈ N, ∀ n ≥ n0 : yn ∈ Γ (xn ), (2.69) and y ∈ lim supx→x0 Γ (x) if and only if ∃ X ⊇ (xn ) → x0 , ∃ Y ⊇ (yn ) → y, ∀ n ∈ N : yn ∈ Γ (xn ).

(2.70)

(iii) If (Y, ρ) is a metric space, then   y ∈ lim inf Γ (x) ⇔ lim dist y, Γ (x) = 0, x→x0 x→x0   y ∈ lim sup Γ (x) ⇔ lim inf dist y, Γ (x) = 0. x→x0

x→x0

(2.71) (2.72)

Proof. (i) Suppose that y ∈ lim inf x→x0 Γ (x) and take X ⊇ (xi )i∈I → x0 . Then for every V ∈ NY (y), there exists iV ∈ I such that Γ (xi ) ∩ V = ∅ for i  iV . Consider J := I × NY (y) ordered by (i, V )  (i , V  ) if and only if i  i and V ⊆ V  . Consider ϕ : J → I such that ϕ(i, V )  i, iV and take yj ∈ Γ (xϕ(j) ) ∩ V for j = (i, V ). It is obvious that (xϕ(j) )j∈J is a subnet of (xi )i∈I . Moreover, (yj )j∈J → y. Therefore, (2.67) holds. Suppose now that (2.67) holds, but y ∈ / lim inf x→x0 Γ (x). Then there exists V0 ∈ NY (y) such that, for every U ∈ NX (x0 ), there exists xU ∈ U with Γ (xU ) ∩ V = ∅. Of course, (xU ) → x0 . Therefore, there exists a subnet (xϕ(j) )j∈J and a net (yj )j∈J → y such that yj ∈ Γ (xϕ(j) ) for j ∈ J. Since V0 ∈ NY (y), there exists j0 ∈ J with yj ∈ V0 for j  j0 . It follows that yj0 ∈ Γ (xϕ(j0 ) ) ∩ V0 = ∅, a contradiction. Suppose now that y ∈ lim supx→x0 Γ (x). Then for every (U, V ) ∈ NX (x0 )×NY (y) =: I there exist xU,V ∈ U and yU,V ∈ Γ (xU,V )∩V . Defining, as usual, (U, V )  (U  , V  ) if and only if U ⊆ U  and V ⊆ V  , it is obvious that (2.68) holds. The converse implication is obtained easily by contradiction. (ii) Consider (Un )n∈N and (Vn )n∈N decreasing bases of neighborhoods for x0 and y, respectively. Suppose first that y ∈ lim inf x→x0 Γ (x). Then for k ∈ N, there exists mk ∈ N such that Γ (x) ∩ Vk = ∅ for x ∈ Umk ; without loss of generality, we may suppose that (mk ) is increasing. Take X ⊇ (xn ) → x0 . For every

2.7 Continuity Notions for Multifunctions

103

k ∈ N, there exists nk ∈ N such that xn ∈ Umk for n ≥ nk ; once again, we may suppose that nk+1 > nk for every k. For k ∈ N and nk ≤ n < nk+1 take yn ∈ Γ (xn ) ∩ Vk . Of course, (yn ) → y and yn ∈ Γ (xn ) for n ≥ n0 . Conversely, suppose that the right-hand side of (2.69) holds, but y ∈ / lim inf x→x0 Γ (x). Then there exists W0 ∈ NY (y) such that, for every U ∈ NX (x0 ), there exists xU ∈ U with Γ (xU ) ∩ W0 = ∅. Therefore, for every n ∈ N, there exists xn ∈ Un such that Γ (xn ) ∩ W0 = ∅. By hypothesis, there exists Y ⊇ (yn ) → y and n0 ∈ N such that yn ∈ Γ (xn ) for n ≥ n0 . Since (yn ) → y, yn ∈ W0 for every n ≥ n1 . Taking n = max{n0 , n1 }, we obtain that Γ (xn ) ∩ W0 = ∅, a contradiction. The proof of (2.70) is similar. (iii) Let Y be a metric space. Recall that for f : X → R and x0 ∈ X, lim inf f (x) := x→x0

sup

inf f (x).

U ∈NX (x0 ) x∈U

So, ◦

y ∈ lim sup Γ (x) ⇔ ∀ ε > 0, ∀ U ∈ NX (x0 ), ∃ x ∈ U : Γ (x) ∩ B(y, ε) = ∅ x→x0   ⇔ ∀ ε > 0, ∀ U ∈ NX (x0 ), ∃ x ∈ U : dist y, Γ (x) < ε   ⇔ ∀ U ∈ NX (x0 ), ∀ ε > 0 : inf dist y, Γ (x) < ε x∈U   ⇔ lim inf dist y, Γ (x) = 0, x→x0

i.e., (2.72) holds. The proof of (2.71) is similar.



We have the following characterization of upper continuity in a special case. Proposition 2.7.4. Suppose that (Y, ρ) is a metric space and Γ (x0 ) is compact. Then Γ is u.c. at x0 if and only if limx→x0 e(Γ (x), Γ (x0 )) = 0. Proof. The (easy) proof is left to the reader.



In the next results we give (other) characterizations for upper and lower continuity at a point. Proposition 2.7.5. Let X, Y be first-countable (in particular, let X, Y be metric spaces) and x ∈ dom Γ . The following statements are equivalent: (i) the multifunction Γ is u.c. at x; (ii) for every closed set F ⊆ Y with Γ (x) ∩ F = ∅, there exists U ∈ NX (x) such that Γ (U ) ∩ F = ∅; (iii) for every closed set F ⊆ Y and every sequence X ⊇ (xn ) → x with Γ (xn ) ∩ F = ∅ for n ∈ N, we have Γ (x) ∩ F = ∅; (iv) for every open set D ⊆ Y with Γ (x) ⊆ D, and every sequence X ⊇ (xn ) → x, there exists nD ∈ N such that Γ (xn ) ⊆ D for n ≥ nD ; (v) for all sequences (xn ) ⊆ X, (yn ) ⊆ Y with (xn ) → x and yn ∈ Γ (xn ) \ Γ (x) for n ∈ N, there exists a subsequence (ynk ) → y ∈ Γ (x).

104

2 Functional Analysis over Cones

If X, Y are not first-countable, the conditions (i)–(iv) remain equivalent by replacing sequences by nets. Proof. The equivalence of conditions (i)–(iv) is quite simple (and known), so it is left to the reader. (iii) ⇒ (v) Consider (Vn )n∈N a countable basis of neighborhoods for y ∈ Y ; we suppose (without loss of generality) that Vn+1 ⊆ Vn for every n ∈ N. Let (xn ) ⊆ X, (yn ) ⊆ Y with (xn ) → x and yn ∈ Γ (xn ) \ Γ (x) for n ∈ N, and consider F := cl{yn | n ∈ N}. Since F is closed and Γ (xn ) ∩ F = ∅ for every n ∈ N, there exists y ∈ Γ (x) ∩ F . Of course, yn = y for every n ∈ N. Since y ∈ F , there exists yn1 ∈ V1. Since Y is separated, there exists m1 ∈ N such that Vm1 ∩ {yk | 0 ≤ k ≤ n1 } = ∅. There exists n2 ∈ N such that yn2 ∈ Vm1 . Similarly, there exists m2 ∈ N such that Vm2 ∩ {yk | 0 ≤ k ≤ n2 } = ∅. There exists n3 ∈ N such that yn3 ∈ Vm2 . The choice of m2 shows that m2 > m1 and n3 > n2 . Continuing in this way we find increasing sequences (mk ), (nk ) ⊆ N such that ynk ∈ Vmk for every k ∈ N. It is obvious that (ynk ) → y. (v) ⇒ (iv) In the contrary case, there exists an open set D ⊆ Y with Γ (x) ⊆ D and a sequence X ⊇ (xn ) → x such that P := {n ∈ N | Γ (xn ) ⊆ D} is infinite. For each n ∈ P take yn ∈ Γ (xn ) \ D ⊆ Γ (xn ) \ Γ (x). Letting P = {n0 , n1 , . . . , nk , . . .} with n0 < n1 < · · · < nk < · · · , by (v), there exists  the subsequence (ynkp ) → y ∈ Γ (x) \ D, a contradiction. Proposition 2.7.6. Let (x0 , y0 ) ∈ X × Y . The following statements are equivalent: (i) the multifunction Γ is l.c. at (x0 , y0 ); (ii) for every net (xi )i∈I → x0 there exist a subnet (xϕ(j) )j∈J of (xi ) and a net (yj )j∈J → y0 such that yj ∈ Γ (xϕ(j) ) for j ∈ J. (iii) y0 ∈ lim inf x→x0 Γ (x). Suppose now that X, Y are first-countable. Then (i) is equivalent to (iv) for every sequence X ⊇ (xn ) → x0 , there exist a sequence Y ⊇ (yn ) → y0 and n0 ∈ N such that yn ∈ Γ (xn ) for every n ≥ n0 . Proof. The proofs of the equivalences (i) ⇔ (ii) and (i) ⇔ (iv) are, mainly, the same as those of the first parts of Proposition 2.7.3 (i) and (ii), respectively. The equivalence of (i) and (iii) is immediate.  From the preceding result and (2.66), we obtain that Γ is l.c. at x0 ⇐⇒Γ (x0 )⊆ lim inf Γ (x) ⇐⇒ cl Γ (x0 )= lim inf Γ (x). x→x0

x→x0

(2.73)

Definition 2.7.7. (i) Γ is closed if gr Γ is aclosed subset of X × Y ;  (ii) Γ is closed at x ∈ X if, for every net (xi , yi ) i∈I ⊆ gr Γ converging to (x, y), we have that y ∈ Γ (x); (iii) Γ is closed-valued if Γ (x) is closed for every x ∈ X;

2.7 Continuity Notions for Multifunctions

105

  (iv) Γ is compact at x ∈ X if, for every net (xi , yi ) i∈I ⊆ gr Γ with (xi ) → x, there exists a subnet (yϕ(j) )j∈J converging to some y ∈ Γ (x). Of course, Γ is closed if and only if Γ is closed at every x ∈ X; moreover, if Γ is closed, then Γ is closed-valued. If Γ is compact at x, then Γ is closed at x; moreover, if Γ is closed (compact) at x, then Γ (x) is closed (compact). Note also that Γ is compact at x ∈ X \dom Γ if and only if x ∈ int(X \dom Γ ), but Γ may be closed at some x ∈ cl(dom Γ ) \ dom Γ . Indeed, consider Γ : R ⇒ R with dom Γ = ]0, 1[ and Γ (x) = {x−1 } for x ∈ dom Γ ; Γ is closed at 0 (but not compact at 0). Proposition 2.7.8. Let x ∈ X. The following assertions hold: (i) Γ is compact at x if and only if Γ (x) is compact and Γ is u.c. at x. (ii) If (Y, ρ) is a metric space, then Γ is   compact at x if and only if Γ (x) is compact and limx →x e Γ (x ), Γ (x) = 0. (iii) Let Y be first-countable. If Γ is compact at x, then for every sequence (xn , yn ) n∈N ⊆ gr Γ with (xn ) → x, there exists a subsequence (ynk ) of (yn ) converging to y ∈ Γ (x). If X is first-countable and Y is a metric space, the converse is true. Proof. If x ∈ / dom Γ , the hypotheses of every assertion of the proposition imply that x ∈ int(X \ dom Γ ); so the conclusion is obvious. So we suppose, during the proof, that x ∈ dom Γ . (i) We noted already that Γ (x) is compact. Suppose that Γ is not u.c. at x. Then there exists an open set D ⊆ Y such that for every U ∈ NX (x) there exists xU ∈ U and yU ∈ Γ (xU ) \ D. Since Γ is compact at x, there exists a subnet (yϕ(j) )j∈J converging to y ∈ Γ (x). Since (yϕ(j) )j∈J ⊆ Y \ D and D is open, it follows that  y ∈ Y \ D, contradicting Γ (x) ⊆ D. Conversely, let (xi , yi ) i∈I ⊆ gr Γ with (xi ) → x. For every j ∈ I consider the set Fj := cl{yi | i  j}. Suppose that for some j0 ∈ I, Fj0 ∩ Γ (x) = ∅. Then Γ (x) ⊆ Y \Fj0 . Since Y \Fj0 is open and Γ is u.c. at x, there exists i0 ∈ I such that Γ (xi ) ⊆ Y \ Fj0 for every i  i0 . Taking i ∈ I such that i  i0 and i  j0 we get the contradiction yi ∈ Fj0 ∩(Y \Fj0 ). Therefore, Fi ∩Γ (x) = ∅ for every i ∈ I. Since Γ (x) is compact and the the finite  family (Fi ∩ Γ (x))i∈I has intersection property, there exists y ∈ i∈I (Fi ∩ Γ (x)) = Γ (x) ∩ i∈I Fi . It follows that for every i ∈ I and every V ∈ NY (y), there exists ϕ(i, V ) ∈ I such that yϕ(i,V ) ∈ V and ϕ(i, V )  i. Let J := I ×NY (y) in which (i, V )  (i , V  ) if and only if i  i and V ⊆ V  . Then (yϕ(j) )j∈J is a subnet of (yi )i∈I converging to y. The conclusion follows. (ii) Suppose that Γ is compact at x, but limx →x e(Γ (x ), Γ (x)) does not exist or is different from 0. Then there exists ε0 > 0 such that, for every U ∈ NX (x), there exist xU ∈ U and yU ∈ Γ (xU ) such that d(yU , Γ (x)) := inf y∈Γ (x) ρ(yU , y) >  ε0 . By  hypothesis, there exists a subnet (yϕ(j) )j∈J → y ∈ Γ (x). Since d ·, Γ (x) is continuous, we obtain the contradiction 0 = d y, Γ (x) ≥ ε0 .

106

2 Functional Analysis over Cones

Conversely, suppose that  (Y, ρ) is a metric space  and Γ (x) is compact and limx →x e Γ (x ), Γ (x) = 0, and take the net (xi , yi ) i∈I ⊆ gr Γ with   (xi ) → x. It follows that limi∈I d yi , Γ (x) = 0. Therefore, for every ε > 0, there exists iε ∈ I such that d(yi , Γ (x)) < ε for i  iε . Let J := I× ]0, ∞[ ordered by (i, ε)  (i , ε ) if and only if i  i and ε ≤ ε . For every j = (i, ε) ∈ J consider ϕ(j) ∈ I such that ϕ(j)  i, iε . Consider also y ϕ(j) ∈ Γ (x)   such that ρ(yϕ(j) , y ϕ(j) ) < ε. Since Γ (x) is compact, y ϕ(j) contains a subnet     y ϕ◦ψ(k) k∈K converging to y ∈ Γ (x). It follows that yϕ◦ψ(k) k∈K → y.  Y be first-countable and Γ compact at x. Consider the sequence  (iii) Let (xn , yn ) n∈N ⊆ gr Γ such that (xn ) → x. By hypothesis, there exists a subnet (yϕ(j) )j∈J → y ∈ Γ (x). There exists a base (Vk )k∈N of neighborhoods of y. For every k ∈ N, there exists jk ∈ J such that yϕ(j) ∈ Vk for j  jk . Of course, taking n0 = ϕ(j0 ), for every k ∈ N, there exists nk+1 ∈ N such that nk+1 ≥ max{ϕ(jk+1 ), nk + 1}. It is obvious that (y   nk ) → y. Conversely, assume that for every sequence (xn , yn ) n∈N ⊆ gr Γ with (xn ) → x, there exists a subsequence (ynk ) → y ∈ Γ (x). Taking (yn ) ⊆ Γ (x) and xn :=x for n∈N, there exists a subsequence (ynk ) → y ∈Γ (x); hence,Γ (x) is compact. To apply (ii) it remains to show that limx →x e Γ (x ), Γ (x) = 0. If this is not true, since X  is first-countable, there exist ε0 > 0 and (xn ) → x such that e Γ (xn ), Γ (x) > ε0 for every n ∈ N. Proceeding as in the proof of the first part of (ii), we get a contradiction.  The next result extends well known properties of continuous functions. Proposition 2.7.9. Assume that A ⊆ dom Γ is a nonempty set. Then the following assertions hold: (i) Assume that A is compact, Γ (x) is compact for every x ∈ A and Γ is u.c. on A; then Γ (A) is compact. (ii) Assume that A is connected and Γ (x) is connected for every x ∈ A; if Γ is either u.c. on A or l.c. on A, then Γ (A) is connected. Proof. Replacing Γ by ΓA : A ⇒ Y defined by ΓA (x) := Γ (x) for x ∈ A, we may (and do) assume that A = X = dom Γ . (i) Consider the net (yi )i∈I ⊆Γ (X); then for each i ∈ I, there exists xi ∈ X such that yi ∈ Γ (xi ). Because X is compact, the net (xi )i∈I has a subnet (xψ(j) )j∈J convergent to x ∈ X. Because Γ is u.c. at x and Γ (x) is compact, Γ is compact at x by Proposition 2.7.8(i), and so the net (yψ(j) )j∈J has a subnet converging to some y ∈ Γ (x) ⊆ Γ (X). Therefore, Γ (X) is compact. (ii) Assume first that Γ is l.c. (on X = A). Consider the open sets D1 , D2 ⊆ Y such that Γ (X) ∩ D1 = ∅, Γ (X) ∩ D2 = ∅ and Γ (X) ⊆ D1 ∪ D2 . We have to show that Γ (X) ∩ D1 ∩ D2 = ∅. Setting Ei := Γ −1 (Di ) for i ∈ {1, 2}, they are open by Proposition 2.7.2(ii), nonempty because Γ (X) ∩ Di = ∅, and X ⊆ E1 ∪ E2 . Because X (= A) is connected, one has E1 ∩ E2 = ∅; hence, there exists x ∈ E1 ∩ E2 , and so Γ (x) ∩ Di = ∅ for i ∈ {1, 2}. Since Γ (x) ⊆ Γ (X) ⊆ D1 ∩ D2 and Γ (x) is connected, it follows

2.7 Continuity Notions for Multifunctions

107

that Γ (x) ∩ D1 ∩ D2 = ∅, and so Γ (X) ∩ D1 ∩ D2 = ∅. Therefore, Γ (X) is connected. Consider now the case in which Γ is u.c. Using the characterization of connectedness with closed sets instead of open sets (see page 25), one gets the conclusion just replacing “open” by “closed” in the proof above and using Proposition 2.7.2(i) instead of Proposition 2.7.2(ii).  Using Proposition 2.7.11 (below), one obtains the next result applying Proposition 2.7.9 for Γ replaced by Γgr . Corollary 2.7.10. Let A ⊆ dom Γ be nonempty and set G := {(x, y) ∈ A × Y | y ∈ Γ (x)}. Then the following assertions hold: (i) Assume that A is compact, Γ (x) is compact for every x ∈ A and Γ is u.c. on A; then G is compact. (ii) Assume that A is connected and Γ (x) is connected for every x ∈ A; if Γ is either l.c. on A, or Γ is u.c. on A and Γ (x) is compact for every x ∈ A, then G is connected. Proposition 2.7.11. Let the multifunction Γgr : X ⇒ X × Y be defined by Γgr (x) := {x} × Γ (x), and let x ∈ X be fixed. Then the following assertions hold: (i) dom Γgr = dom Γ , Γgr (X) = gr Γ , and Γgr (x) is closed, compact or connected if and only if Γ (x) is closed, compact or connected, respectively. (ii) Γgr is closed, compact or l.c. at x if and only if Γ is closed, compact or l.c. at x, respectively. (iii) If Γgr is u.c. at x, then Γ is u.c. at x. Conversely, if Γ is u.c. at x and Γ (x) is compact, then Γgr is u.c. at x. Proof. The assertions in (i) and (ii) are almost obvious, and so their proofs are omitted. (iii) If x ∈ int(X\dom Γ ), Γ and Γgr are u.c. at x, while for x ∈ cl(dom Γ )\ dom Γ , Γ and Γgr are not u.c. at x. Take x ∈ dom Γ and assume that Γgr is u.c. at x. Consider D ⊆ Y an open set such that Γ (x) ⊆ D. Because Γgr is u.c. at x, X ×D is an open subset of X ×Y and Γgr (x) = {x}×Γ (x) ⊆ X ×D, there exists U ∈ N (x) such that Γgr (x) = {x} × Γ (x) ⊆ X × D for every x ∈ U , and so Γ (x) ⊆ D for x ∈ U . Hence, Γ is u.c. at x. Conversely, assume that Γ (x) is compact and Γ is u.c. at x. Using Proposition 2.7.8(i)  and assertion (ii) above Γ is compact, and so u.c., at x. Related to the closedness of multifunctions we have the following result. Proposition 2.7.12. Let x ∈ X. The following assertions hold: (i) Γ is closed at x if and only if, for every y ∈ Y \ Γ (x), there exist U ∈ NX (x) and V ∈ NY (y) such that Γ (U ) ∩ V = ∅. (ii) Γ is closed at x if and only if lim supx →x Γ (x ) ⊆ Γ (x) = cl Γ (x).

108

2 Functional Analysis over Cones

(iii) Suppose that X, Y are first-countable.  Then Γ is closed at x if and only  if, for every sequence gr Γ ⊇ (xn , yn ) n∈N → (x, y), one has y ∈ Γ (x). (iv) If Γ is u.c. at x and Γ (x) is compact, then Γ is closed at x. (v) If Y is regular (in particular, if Y is a metric space or a topological vector space), Γ is u.c. at x, and Γ (x) is closed, then Γ is closed at x. Proof. The proof, which is not difficult, is left to the reader.



The next result gives sufficient conditions for the upper continuity of the intersection of two multifunctions. Proposition 2.7.13. Consider Γ1 , Γ2 : X ⇒ Y and Γ (x) := Γ1 (x) ∩ Γ2 (x) for every x ∈ X. Then Γ is u.c. at x0 ∈ X if one of the following conditions holds: (i) Γ1 is closed at x0 , Γ2 is u.c. at x0 , and Γ2 (x0 ) is compact; (ii) Y is normal (in particular, Y is a metric space), Γ1 , Γ2 are u.c. at x0 , and Γ1 (x0 ), Γ2 (x0 ) are closed. Proof. (i) Let D ⊆ Y be an open set such that Γ (x0 ) ⊆ D. Since Γ1 is closed at x0 , for every y ∈ Γ2 (x0 )\Γ1 (x0 ), there exist Uy ∈ N X (x0 ) and Vy ∈ NY (y) such that Γ1 (Uy )∩Vy = ∅. It follows that Γ2 (x0 ) ⊆ D∪ y∈Γ2 (x0 )\Γ1 (x0 ) int Vy . Since Γ2 (x0 ) iscompact, there exist y1 , . . . , yn ∈ Γ2 (x0 ) \ Γ1 (x0 ) such that n Vyi . Since Γ2 is u.c. at x0 , there Γ2 (x0 ) ⊆ D ∪ i=1 int  nexists U0 ∈ NX (x0 ) n such that Γ2 (U0 ) ⊆ D∪ i=1 int Vyi . Let x ∈ U := U0 ∩ i=1 Uyi and y ∈ Γ (x). / Vyi for every i, 1 ≤ i ≤ n. It follows that, necessarily, Since x ∈ Γ1 (Uyi ), y ∈ y ∈ D. Therefore, Γ is u.c. at x0 . (ii) Let D ⊆ Y be an open set such that Γ (x0 ) ⊆ D. Then (Γ1 (x0 ) \ D) ∩ (Γ2 (x) \ D) = ∅. Since the sets Γ1 (x0 ) \ D and Γ2 (x0 ) \ D are closed and Y is normal, there exist the disjoint open sets D1 , D2 such that Γi (x0 )\D ⊆ D1 for i = 1, 2. Since Γi is u.c. at x0 and Γi (x0 ) ⊆ D ∪ Di , there exists Ui ∈ NX (x0 ) such that Γi (Ui ) ⊆ D ∪ Di for i = 1, 2. Let x ∈ U1 ∩ U2 and y ∈ Γ (x). It  follows that y ∈ (D ∪ D1 ) ∩ (D ∪ D2 ) = D. Therefore, Γ is u.c. at x0 . Recall that NY denotes the class of balanced neighborhoods of the origin of the topological vector space Y . Definition 2.7.14. Let Y be a topological vector space and x0 ∈ X. We say that (a) Γ is Hausdorff upper continuous (H-u.c.) at x0 if ∀ V ∈ NY , ∃ U ∈ NX (x0 ), ∀ x ∈ U : Γ (x) ⊆ Γ (x0 ) + V.

(2.74)

(b) Γ is Hausdorff lower continuous (H-l.c.) at x0 if ∀ V ∈ NY , ∃ U ∈ NX (x0 ), ∀ x ∈ U : Γ (x0 ) ⊆ Γ (x) + V.

(2.75)

(c) Γ is Hausdorff continuous at x0 if Γ is H-u.c. and H-l.c. at x0 .

2.7 Continuity Notions for Multifunctions

109

(d) Γ is Hausdorff upper continuous (Hausdorff lower continuous, Hausdorff continuous) if Γ is so at every x ∈ X. The above definition can be given when Y is a metric space, too; just replace V ∈ NY by ε > 0 and Γ (x0 ) + V by Γ (x0 ) ε . Of course, if Y is a metric space, then Γ is H-u.c. at x0 if and only if limx→x0 e(Γ (x), Γ (x0 )) = 0, and Γ is H-l.c. at x0 if and only if limx→x0 e Γ (x0 ), Γ (x) = 0. Concerning the continuity of the sum of multifunctions and of the multiplication by scalars, we have the following result. Proposition 2.7.15. Let Y be a topological vector space, Γ, Γ1 , Γ2 : X ⇒ Y , x0 ∈ X, and α ∈ R. (i) If Γ is u.c. (resp. l.c., H-u.c., H-l.c.) at x0 , then αΓ is u.c. (resp. l.c., H-u.c., H-l.c.) at x0 . (ii) If Γ1 and Γ2 are l.c. (resp. H-u.c., H-l.c.) at x0 , then Γ1 + Γ2 is l.c. (resp. H-u.c., H-l.c.) at x0 . Proof. The proof of (i) is immediate. (ii) Assume that Γ1 and Γ2 are l.c. at x0 and let D ⊆ Y be an open set  such that Γ1 (x0 ) + Γ2 (x0 ) ∩ D = ∅. Let y1 ∈ Γ1 (x0 ) and y2 ∈ Γ2 (x0 ) be such that y1 + y2 ∈ D. Let V ∈ NY be such that y1 + y2 + V ⊆ D. There exists V0 ∈ NY with V0 + V0 ⊆ V . Because Γ1 and Γ2 are l.c. at x0 , there exist U1 , U2 ∈ NX (x0 ) such that Γi (x) ∩ (y  i + V0 ) = ∅ for i ∈ {1, 2} and all x ∈ Ui . It follows that Γ1 (x) + Γ2 (x) ∩ (y1 + V0 + y2 + V0 ) = ∅ for x ∈ U := U1 ∩ U2 ∈ NX (x0 ). Hence, (Γ1 + Γ2 )(x) ∩ D = ∅ for x ∈ U , and so Γ1 + Γ2 is l.c. at x0 . Assume now that Γ1 and Γ2 are H-u.c. at x0 and take V ∈ NY . Consider V0 ∈ NY with V0 + V0 ⊆ V . Then there exist U1 , U2 ∈ NX (x0 ) such that Γi (x) ⊆ Γi (x0 ) + V0 for i ∈ {1, 2} and x ∈ Ui . It follows that (Γ1 + Γ2 )(x) ⊆ Γ1 (x0 )+V0 +Γ2 (x0 )+V0 ⊆ (Γ1 +Γ2 )(x0 )+V for x ∈ U := U1 ∩U2 ∈ NX (x0 ), and so Γ1 + Γ2 is H-u.c. at x0 . The proof for H-lower continuity is similar.  Note that we have not a similar result to Proposition 2.7.15(ii) for upper continuity. To see this, consider the multifunctions Γ1 , Γ2 : R ⇒ R2 defined by Γ1 (x) := R × {0} and Γ2 (x) := {(0, x)}. It is obvious that Γ1 and Γ2 are u.c. at every x ∈ R, but Γ1 + Γ2 is not u.c. at every x ∈ R. Indeed, (Γ1 + Γ2 )(0) = R×{0} ⊆ D := {(u, v) ∈ R2 | |v| < exp(−u)}, but (Γ1 +Γ2 )(x) = R×{x} ⊆ D for every x ∈ R \ {0}. Note also that Γ is H-u.c. at x0 if Γ is u.c. at x0 ; the converse implication is true when Γ (x0 ) is compact. On the other hand, if Γ is H-l.c. at x0 , then Γ is l.c. at x0 , the converse being true if Γ (x0 ) is compact. We can characterize Hausdorff upper and lower continuities by using nets, and even sequences when X and Y are first-countable. Proposition 2.7.16. Suppose that Y is a topological vector space and x ∈ X. Then:

110

2 Functional Analysis over Cones

(i) Γ is H-l.c. at x if and only if, for all nets (xi )i∈I ⊆ X with (xi ) → x and (y i )i∈I ⊆ Γ (x), there exist a subnet (xϕ(j) ) and a net (yj )j∈J such that yj − y ϕ(j) → 0 and yj ∈ Γ (xϕ(j) ) for all j ∈ J;   (ii) Γ is H-u.c. at x if and only if, for every net (xi , yi ) i∈I ⊆ gr Γ with (xi ) → x, there exists a subnet (yϕ(j) )j∈J and a net (y j )j∈J ⊆ Γ (x) such that yϕ(j) − y j → 0. Suppose now that X and Y are first-countable. (iii) Γ is H-l.c. at x if and only if, for every sequence (xn )n∈N ⊆ X with (xn ) → x and every sequence (y n ) ⊆ Γ (x), there exists a sequence (yn ) such that yn − y n → 0 and yn ∈ Γ (xn ) for all n ≥ n0 ;  (iv) Γ is H-u.c. at x if and only if, for every sequence (xn , yn ) n∈N ⊆ gr Γ with (xn ) → x, there exists a sequence (y n ) ⊆ Γ (x) such that yn −y n → 0. Proof. We prove (i) and (iv), the proofs of (ii) and (iii) being similar. (i) Suppose that Γ is H-l.c. at x and consider the nets (xi ) → x and (y i )i∈I ⊆ Γ (x). For every V ∈ NY , there exists iV ∈ I such that Γ (x) ⊆ Γ (xi ) + V for all i  iV . Ordering J := I × NY in the usual way, consider ϕ : J → I with ϕ(i, V )  i, iV and take yj ∈ Γ (xϕ(j) ) ∩ (y ϕ(j) + V ) for j = (i, V ). It is obvious that (yj )j∈J satisfies the needed conditions. We prove by contradiction the converse implication. So, assume that Γ is not H-l.c. at (x, y). Then there exists V0 ∈ NY such that, for every U ∈ / Γ (xU ) + V0 . By NX (x), there exist xU ∈ U and y U ∈ Γ (x) with y U ∈ hypothesis, there exist a subnet (xϕ(j) ) and a net (yj )j∈J such that yj − y ϕ(j) → 0 and yj ∈ Γ (xϕ(j) ) for all j ∈ J. Since V0 ∈ NY , there exists j0 ∈ J such that yj − y ϕ(j) ∈ V0 for j  j0 . It follows that y ϕ(j0 ) ∈ yj0 + V0 ⊆ Γ (xϕ(j0 ) ) + V0 , a contradiction.   (iv) Suppose that Γ is H-u.c. at x and consider (xn , yn ) n∈N ⊆ gr Γ with (xn )→x. Let (Vk )k∈N be a base of neighborhoods of 0 ∈ Y . For every k ∈ N, there exists Uk ∈ NX (x0 ) such that Γ (x ) ⊆ Γ (x) + Vk for x ∈ Uk . Since (xn ) → x, there exists nk ∈ N such that xn ∈ Uk for every n ≥ nk . Without loss of generality, we may suppose that (nk ) is increasing. For every n such that nk ≤ n < nk+1 we take y n ∈ Γ (x) such that yn −y n ∈ Vk . The conclusion follows. The converse part follows immediately by contradiction.  Proposition 2.7.17. Suppose that Γ is upper continuous and closed-valued. Then gr Γ is closed. The same conclusion holds if Y is a topological vector space and Γ is H-u.c. instead of being u.c. Proof. We give the proof for the second case. So, let Γ be H-u.c. and closedvalued. Consider (x, y) ∈ X × Y \ gr Γ . If x ∈ / dom Γ , since Γ is H-u.c., there exists U ∈ NX (x) such that U ∩ dom Γ = ∅. Hence, U × Y ∩ gr Γ = ∅. It follows that (x, y) ∈ / cl(gr Γ ). Suppose now that x ∈ dom Γ , and so y ∈ / Γ (x). Since Γ (x) is closed, there exists V ∈ NY such that (y + V ) ∩ Γ (x) = ∅. Let W ∈ NY with W + W ⊆ V . Since Γ is H-u.c. at x, there exists U ∈ NX (x)

2.7 Continuity Notions for Multifunctions

111

  such that Γ (U ) ⊆ Γ (x) + W . It follows that U × (y + W ) ∩ gr Γ = ∅, and so (x, y) ∈ / cl(gr Γ ). Therefore, gr Γ is closed in X × Y .  Note that requiring the upper continuity of Γ only on dom Γ is not sufficient for the conclusion of the preceding result. Taking for example X = Y = R and Γ (x) = R+ for x ∈ ]0, 1[ and Γ (x) = ∅ otherwise, we have that Γ is u.c. (and so H-u.c.) at every x ∈ dom Γ , is closed-valued, but gr Γ = ]0, 1[ ×R+ is not closed in X × Y . Let (An )n∈N ⊆ P(Y ). Recall that lim inf An := {y ∈ Y | ∃ (yn ) → y such that yn ∈ An for n ≥ n0 }, n→∞

lim sup An := {y ∈ Y | ∃ (ynk ) → y such that ynk ∈ Ank for k ∈ N}; n→∞

of course, lim inf n→∞ An ⊆ lim supn→∞ An . We say that (An ) converges in the sense of Kuratowski–Painlev´ e to A ⊆ Y if lim supn→∞ An ⊆ A ⊆ lim inf n→∞ An . Having A, An ⊆ Y , n ∈ N, and taking X := N ∪ {∞} ⊆ R (X endowed with the topology induced by that of R) we may consider the multifunction Γ : X ⇒ Y defined by Γ (n) = An for n ∈ N and Γ (∞) = A. When Y is first-countable, lim inf n→∞ An is exactly lim inf n→∞ Γ (n) for A := Y and lim supn→∞ An is lim supn→∞ Γ (n) for A := ∅. Definition 2.7.18. Suppose that Y is a separated topological vector space and C ⊆ Y is a convex cone. We say that Γ is C-u.c., C-l.c., H-C-u.c., or H-C-l.c. at x0 ∈ X if relation (2.64), (2.65), (2.74) or (2.75) holds with Γ (x) ⊆ D + C, Γ (x) ∩ (D − C) = ∅, Γ (x) ⊆ Γ (x0 ) + V + C, Γ (x0 ) ⊆ Γ (x) + V + C instead of Γ (x) ⊆ D, Γ (x) ∩ D = ∅, Γ (x) ⊆ Γ (x0 ) + V , Γ (x0 ) ⊆ Γ (x) + V , respectively; similarly for Γ C-l.c. at (x0 , y0 ). Γ is Ccontinuous (C-Hausdorff continuous) at x0 if Γ is C-u.c. and C-l.c. (H-C-u.c. and H-C-l.c.) at x0 . Remark 2.7.19. As in Proposition 2.7.15 one can prove that having the multifunctions Γ, Γ1 , Γ2 : X ⇒ Y that are C-l.c. (resp. C-H-u.c., C-H-l.c.) at x0 ∈ X, Y being a t.v.s., C ⊆ Y a convex cone, and α ∈ R+ , then Γ1 + Γ2 and αΓ are C-l.c. (resp. C-H-u.c., C-H-l.c.) at x0 . When P ⊆ C is another convex cone, if Γ is P -l.c. (P -u.c., H-P -l.c., or H-P -u.c.) at x0 , then Γ is C-l.c. (C-u.c., H-C-l.c., or H-C-u.c.) at x0 . Note also that Γ is C-l.c., H-C-u.c., or H-C-l.c. if and only if ΓC is l.c., H-u.c., or H-l.c., respectively, but such an equivalence is not true for upper continuity. In fact, we have that Γ is C-u.c. at x0 if ΓC is u.c. at x0 , but the converse is not true even if Γ = ΓC , as shown by the following example: Γ : [0, ∞[ ⇒ R2 , Γ (x) = [0, ∞[ ×[0, x], and C = [0, ∞[ ×{0}. Definition 2.7.20. When Y is a topological vector space, we say that Γ is uniformly C-l.c. at x0 on A if A ⊆ Γ (x0 ) and ∀ W ∈ NY , ∃ U ∈ NX (x0 ), ∀ x ∈ U : A ⊆ Γ (x) + W + C.

112

2 Functional Analysis over Cones

Remark 2.7.21. If Γ is C-l.c. at (x0 , y) for every y ∈ A ⊆ Γ (x0 ) and A is compact, then Γ is uniformly C-l.c. at x0 on A. Indeed, let W ∈ NY and consider W1 ∈ NY open with W1 + W1 ⊆ W . such that For y ∈ A, since Γ is C-l.c. at (x0 , y), there exists Uy ∈ NX (x0 )  y ∈ Γ (x) + W1 + C for every x ∈ Uy . Since A is compact and A ⊆ y∈A (y + n W1 ), there exist y1 , . . . , yn ∈ A such that A ⊆ i=1 (yi + W1 ). Let x ∈ U :=  n i=1 Uyi and y ∈ A. There exists i, 1 ≤ i ≤ n, such that y ∈ yi + W1 . Since x ∈ Uyi , yi ∈ Γ (x) + W1 + C, whence y ∈ Γ (x) + W + C. Therefore, A ⊆ Γ (x) + W + C for every x ∈ U . With any multifunction  Γ : X ⇒ Y we associate its closure Γ : X ⇒ Y defined by Γ (x) := cl Γ (x) . Of course, dom Γ = dom Γ . Continuity properties of Γ and Γ are deeply related. Proposition 2.7.22. Let x ∈ X and y ∈ Y . The following assertions hold: (i) Γ is l.c. at (x, y) if and only if Γ is l.c. at (x, y). (ii) Γ is l.c. at x if and only if Γ is l.c. at x. (iii) If Y is a topological vector space and C ⊆ Y is a convex cone, then Γ is H-C-u.c. (H-C-l.c.) at x if and only if Γ is H-C-u.c. (H-C-l.c.) at x. (iv) If Γ is u.c. at x and either Y is normal or Y is regular and cl Γ (x) is compact, then Γ is u.c. at x. (v) If Γ is u.c. at x and either Y is regular or cl Γ (x) is compact, then Γ is closed at x. (vi) Γ is closed at x if and only if Γ (x) is closed and Γ is closed at x. (vii) If Y is a metric space, then Γ is compact at x if and only if Γ (x) is closed and Γ is compact at x. Proof. The proofs of assertions (i)–(iii) are not complicated. One uses that for A, D ⊆ Y with D open, D ∩cl A = ∅ ⇒ D ∩ A = ∅, and, when Y is a topological vector space, cl A = W ∈NY (A + W ). For (vi) take into consideration Proposition 2.7.12 (i), while for (vii) take into account Proposition 2.7.8 (ii). (iv) Let D ⊆ Y be an open set such that Γ (x) ⊆ D. If Y is normal, there exists D0 ⊆ Y open such that cl Γ (x) ⊆ D0 ⊆ cl D0 ⊆ D. If Y is regular and cl Γ (x) is compact, for every y ∈ cl Γ (x), there exist disjoint open sets Vy , Dy ⊆ Y such that y ∈ Vy and Y \ D ⊆ Dy . Since {Vy | y ∈ cl Γ (x)} is an open cover of cl Γ (x), there exist y1 , . . . , yn ∈ cl Γ (x) such that cl Γ (x) ⊆ D0 := Vy1 ∪ · · · ∪ Vyn . Of course, D0 and D1 := Dy1 ∩ · · · ∩ Dyn (⊇ Y \ D) are disjoint open sets. Therefore, D0 ⊆ cl D0 ⊆ Y \ D1 ⊆ D. Since Γ is u.c. at x, there exists U ∈ NX (x) such that Γ (x ) ⊆ D0 , whence Γ (x ) ⊆ cl D0 ⊆ D, for every x ∈ U . (v) Let y ∈ Y \Γ (x). If Y is regular, there exist V, D⊆Y disjoint open sets such that y ∈ V and Γ (x)⊆D. When Γ (x) is compact, for every z ∈ Γ (x), there exist disjoint open sets Vz , Dz ⊆ Y such that y ∈ Vz and z ∈ Dz . Since Γ (x) is compact, there exist z1 , . . . , zn ∈ Γ (x) such that D := Dz1 ∪ · · · ∪Dzn .

2.7 Continuity Notions for Multifunctions

113

Taking V := Vz1 ∩ · · · ∩ Vzn , we have again that V, D are open and disjoint, y ∈ V , and Γ (x) ⊆ D. Since Γ is u.c. at x and Γ (x) ⊆ D, there exists U ∈ NX (x) with Γ (U ) ⊆ D. Because V is open, we have even that Γ (U ) ∩ V = ∅.  Therefore, Γ is closed at x. Note that if Γ is u.c. at x, it does not follow that Γ is u.c. at x (but the implication is true if Γ (x) is closed). Take Γ : R ⇒ R, Γ (x) = ]x, x + 1[; Γ is not u.c. at 0, but Γ is. From now on (in this section), Y is a separated topological vector space and C ⊆ Y is a convex cone. Recall that the convex cone C determines a preorder ≤C defined by y1 ≤C y2 ⇔ y2 − y1 ∈ C. Definition 2.7.23. Consider the following multifunctions: levΓ : Y ⇒ X, lev< Γ

: Y ⇒ X,

levΓ (y) := {x ∈ X | Γ (x) ∩ (y − C) = ∅}, lev< Γ (y) := {x ∈ X | Γ (x) ∩ (y − int C) = ∅}.

We say that (i) Γ is C-lower semicontinuous (C-l.s.c. for short) if levΓ (y) is closed for every y ∈ Y ; (ii) Γ is C-upper semicontinuous (C-u.s.c. for short) if lev< Γ (y) is open in X for every y ∈ Y . −1 (y) for every Note that levΓ (y) = (ΓC )−1 (y) and lev< Γ (y) = (Γint C ) −1 y ∈ Y , so that Γ is C-l.s.c. if and only if (ΓC ) is closed-valued, and Γ is C-u.s.c. if and only if (Γint C )−1 is open-valued. Moreover, if y1 ≤C y2 , then < levΓ (y1 ) ⊆ levΓ (y2 ) and lev< Γ (y1 ) ⊆ levΓ (y2 ).  When int C = ∅ we have that int dom(levΓ ) = dom(lev< Γ ). First observe < ) is open. Indeed, let y ∈ dom(lev ). Then there exist x ∈ X that dom(lev< Γ Γ and z ∈ Γ (x) ∩ (y − int C). Hence, z = y − k for some k ∈ int C. Taking v ∈ V := int C −k, we have that z = y −k = (y +v)−(v +k) ⊆ (y +v)−int C, which shows that Γ (x) ∩ (y − int C) = ∅. Therefore, y + V ⊆ dom(lev< Γ ), and < ) is open. From the obvious inclusion dom(lev ) ⊆ dom(lev so dom(lev< Γ) Γ Γ   < ) ⊆ int dom(lev ) . For the converse inclusion we obtain that dom(lev< Γ Γ consider y ∈ int (dom(levΓ )). Let k ∈ int C; there exists t > 0 such that y − tk ∈ dom(levΓ ). Therefore, there exists x ∈ X such that Γ (x) ∩ (y − tk − C) = ∅. Since tk + C ⊆ int C, we have that y ∈ dom(lev< Γ ). The following result holds.

Proposition 2.7.24. The following assertions hold: (i) If epi Γ is closed, then Γ is C-lower semicontinuous. (ii) Suppose that Y is a locally convex space, C is closed, int C = ∅, Γ is C-lower semicontinuous, and either Γ (x) is weakly compact or Γ (x) + C is closed for every x ∈ X. Then epi Γ is closed. Moreover, if y0 , y1 ∈ Y are such that y1 − y0 ∈ int C and levΓ (y1 ) is compact, then levΓ (y0 ) is compact and levΓ is upper continuous at y0 .

114

2 Functional Analysis over Cones

Proof. (i) Suppose that epi Γ = gr ΓC is closed. Then gr(ΓC )−1 is also closed, whence (ΓC )−1 is closed-valued. Hence, Γ is C-l.s.c. (ii) Suppose now that Γ is C-l.s.c., Y is a locally convex space, C is closed, int C = ∅ and either Γ (x) is weakly or Γ (x) + C is closed   compact for every x ∈ X. Let us consider the net (xi , yi ) i∈I ⊆ epi Γ converging to (x, y) ∈ X × Y . Let k ∈ int C be fixed. Since k − C ∈ NY (0) and (yi ) → y, there exists ik ∈ I such that yi −y ∈ k−C for i  ik . It follows that (xi )i ik ⊆ levΓ (y + k), whence x ∈ levΓ (y + k) for every k ∈ int C. Fixing k0 ∈ int C we obtain that x ∈ levΓ (y + n−1 k0 ) for every n ∈ N> . Therefore, for every n there exists zn ∈ Γ (x) ∩ (y + n−1 k0 − C); in particular, y + n−1 k0 ∈ Γ (x) + C. If Γ (x) + C is closed, we get y ∈ Γ (x) + C, and so (x, y) ∈ epi Γ . If Γ (x) is weakly compact, the sequence (zn )n∈N> contains a subnet converging weakly to some z ∈ Γ (x). Since C is (weakly) closed, we obtain that z ∈ y − C, which shows that (x, y) ∈ epi Γ in this case, too. Now let y0 , y1 ∈ Y be such that y1 −y0 ∈ int C and levΓ (y1 ) is compact. Of course, levΓ (y0 ) ⊆ levΓ (y1 ). Since levΓ (y0 ) is closed and levΓ (y1 ) is compact, we have that levΓ (y0 ) is compact. Suppose that levΓ is not u.c. at y0 . Then there exist an open set D ⊆ X and the nets (yi )i∈I ⊆ Y , (xi )i∈I ⊆ X such that levΓ (y0 ) ⊆ D, (yi )i∈I converges to y0 , and xi ∈ levΓ (yi ) \ D for every i ∈ I. Since y0 ∈ y1 − int C, there exists i1 ∈ I such that yi ∈ y1 − C; i.e., yi ≤C y1 , for every i  i1 . It follows that xi ∈ levΓ (y1 ) for every i  i1 . Since levΓ (y1 ) is compact, there exists a subnet (xϕ(j) )j∈J of (xi ) converging to x. Of course, x ∈ / D. Since (xi , yi ) ∈ epi Γ for every i ∈ I and epi Γ is closed in our conditions (as seen above), we obtain that (x, y0 ) ∈ epi Γ , whence / D.  x ∈ levΓ (y0 ) ⊆ D. This is a contradiction because x ∈ Proposition 2.7.25. The following assertions hold: (i) Suppose that C is closed and P ⊆ C is another convex cone. If Γ is P -upper continuous, then Γ is C-lower semicontinuous. (ii) Suppose that int C = ∅; then Γ is C-lower continuous if and only if Γ is C-upper semicontinuous. Proof. (i) Let y ∈ Y and x ∈ / levΓ (y). It follows that Γ (x) ⊆ Y \ (y − C). Since Γ is P -upper continuous at x, there exists U ∈ NX (x) such that Γ (x ) ⊆    Y \ (y − C) + P , for every x ∈ U . Since (Y \ (y − C)) + P ⊆ Y \ (y − C), we obtain that Γ (x ) ∩ (y − C) = ∅ for each x ∈ U . Therefore, U ∩ levΓ (y) = ∅, which shows that x ∈ / cl (levΓ (y)). Hence, levΓ (y) is closed for every y ∈ Y . (ii) Suppose first that Γ is C-upper semicontinuous and fix x0 ∈ X. Let D be an open set in Y such that Γ (x0 )∩D =∅; then D−Γ  (x0 ) is a neighborhood ) ∩ int C = ∅. Let y0 ∈ D of 0 in Y . Since 0 ∈ cl(int C), we deduce D − Γ (x 0   be such that y0 − Γ (x0 ) ∩ int C = ∅; then x0 ∈ U := lev< Γ (y0 ), and U is open because Γ is C-u.s.c.; hence, U ∈ NX (x0 ) and ∅ = Γ (x)∩(y0 − int C) ⊆ Γ (x) ∩ (D − C) for each x ∈ U . It follows that Γ is C-l.c. at x0 . Suppose now that Γ is C-l.c. and fix y ∈ Y . Let x0 ∈ lev< Γ (y); this means that Γ (x0 ) ∩ D = ∅, where D := y − int C is an open set by our hypothesis.

2.7 Continuity Notions for Multifunctions

115

Because Γ is C-l.c. at x0 , there exists U ∈ NX (x0 ) such that ∅ = Γ (x) ∩ (D − C) = Γ (x) ∩ (y − int C − C) = Γ (x) ∩ (y − int C) < for all x ∈ U . It follows that U ⊆ lev< Γ (y). Therefore, levΓ (y) is open, and so Γ is C-u.s.c. The proof is complete. 

From the preceding result, we obtain that Γ is C-l.s.c. and −C-l.s.c. when Γ is upper continuous (and C is closed). The following interesting result holds. Proposition 2.7.26. Suppose that int C = ∅ and y ∈ dom(levΓ ). If levΓ is lower continuous at y, then levΓ (y) ⊆ cl lev< Γ (y) . Conversely, if X is a topological vector space (or a metric space), lev Γ (y) is totally bounded,   (y) , then lev is Hausdorff lower continuous at y. and levΓ (y) ⊆ cl lev< Γ Γ Moreover, if levΓ (y) is compact, then levΓ is lower continuous at y. Proof. Suppose first that levΓ is lower continuous at y and consider x ∈ levΓ (y). Fix k ∈ int C and take yn := y − n−1 k for n ∈ N∗ . By Proposition 2.7.6, there exist a subnet (yϕ(j) )j∈J of (yn )n∈N∗ and a net (xj )j∈J convergent C, we to x such that xj ∈ levΓ (yϕ(j) ) for every j ∈ J. Since yϕ(j) −C ⊆ y −int  < obtain that xj ∈ lev< Γ (y) for every j ∈ J. Therefore, x ∈ cl levΓ (y) . Suppose  now that X is a t.v.s., levΓ (y) is totally bounded, and levΓ (y) ⊆ cl lev< Γ (y) . We show first that ∀ U ∈ NX , ∃ k ∈ int C : levΓ (y) ⊆ levΓ (y − k) + U.

(2.76)

So, let U ∈ NX (0); there exists a balanced neighborhood W of 0 ∈ X such that W + W ⊆ U . Since levΓ (y) is totally bounded, there exist x1 , . . . , xp ∈ levΓ (y) such that p (2.77) levΓ (y) ⊆ l=1 (xl + W ).  <  Since levΓ (y) ⊆ cl levΓ (y) , for every l ∈ {1, . . . , p}, there exists xl ∈ (xl +  W ) ∩ lev< Γ (y). It follows that, for every l, there exists kl ∈ int C such that    yl := y − kl ∈ Γ (xl ). Let t1 := 1 and k1 := t1 k1 . Since k1 − tk2 → k1 ∈ int C for t → 0, there exists t2 ∈ ]0, 1] such that k1 − t2 k2 ∈ int C. Let k2 := t2 k2 . Similarly, there exists t3 ∈ ]0, 1] such that k2 − t3 k2 ∈ int C. Let k3 := t3 k3 . Continuing in this way, for every l ∈ N∗ , we find tl ∈ ]0, 1] and kl := tl kl such that kl −kl+1 ∈ int C for 1 ≤ l ≤ p−1. Since yl = y−kl −(1−tl )kl ⊆ y−kl −C, we have that xl ∈ levΓ (y − kl ) for every l. Moreover, levΓ (y − k1 ) ⊆ levΓ (y − k2 ) ⊆ . . . ⊆ levΓ (y − kp ). Now let x ∈ levΓ (y). From (2.77), there exists 1 ≤ l ≤ p such that x ∈ xl +W . Since xl ∈ xl + W , we obtain that x ∈ xl + W − W ⊆ xl + U ⊆ levΓ (y − kl ) + U ⊆ levΓ (y − kp ) + U.

116

2 Functional Analysis over Cones

Therefore, levΓ (y) ⊆ levΓ (y − k) + U with k := kp ∈ int C; i.e., (2.76) holds. Taking V = y + C − k, V is a neighborhood of y and levΓ (y) ⊆ levΓ (y − k) + U ⊆ levΓ (y  ) + U

∀ y  ∈ V.

Therefore, levΓ is H-l.c. at y. When X is a metric space instead of U one ◦

takes ε > 0, and instead of xl + W one takes B(xl , ε/2). When levΓ (y) is compact the conclusion is obvious. 

2.8 Continuity Notions for Extended Vector-valued Functions It is quite interesting that continuity properties for vector-valued functions and those of multifunctions are related. We give first some continuity notions for extended vector-valued functions; see page 99 for notations related to vector functions. Definition 2.8.1. Let f : X → Y • be proper and x0 ∈ X. (i) f is C-lower continuous (C-l.c.) at x0 if ∀ y ∈ Y, ∀ V ∈ NY , ∃ U ∈ NX (x0 ), ∀ x ∈ U : f (x) ∈ y + V + C • ; f is C-lower continuous (on A ⊆ X) if f is C-l.c. at any x ∈ X (x ∈ A). (ii) f is C-upper continuous (C-u.c.) at x0 ∈ dom f if ∀ V ∈ NY , ∃ U ∈ NX (x0 ), ∀ x ∈ U : f (x) ∈ f (x0 ) + V − C; f is C-upper continuous (on A ⊆ dom f ) if C-u.c. at any x ∈ X (x ∈ A). (iii) f is C-lower semicontinuous if levf,C (y) is closed for every y ∈ Y ; (iv) f is C-upper semicontinuous if lev< f,C (y) is open for every y ∈ Y . Clearly, if x0 ∈ dom f then f is C-l.c. at x0 if and only if ∀ V ∈ NY , ∃ U ∈ NX (x0 ), ∀ x ∈ U : f (x) ∈ f (x0 ) + V + C • . From Proposition 2.7.25(ii), in the case int C = ∅, we obtain that the function f : X → Y is C-upper semicontinuous on X if and only if Γf,C is C-lower continuous. Consider x0 ∈ dom f . If f is C-u.c. at x0 , then x0 ∈ int(dom f ). On the other hand, if Γf,C is closed at x0 , then C is closed; if C is closed and Γf,C is u.c. at x0 , then Γf,C is closed at x0 . Moreover, for x0 ∈ dom f , [f is {0}-u.c. at x0 ] ⇔ [f is continuous at x0 ] ⇔ [x0 ∈ int(dom f ) and f is {0}-l.c. at x0 ].

2.8 Continuity Notions for Extended Vector-valued Functions

117

Proposition 2.8.2. Let f : X → Y • and x0 ∈ dom f . Then: (i) f is C-l.c. at x0 ⇔ Γf,C is C-u.c. at x0 ⇔ Γf,C is H-u.c. at x0 . (ii) f is C-u.c. at x0 ⇔ Γf,C is l.c. at x0 ⇔ Γf,C is H-l.c. at x0 . Proof. (i) Suppose that f is C-l.c. at x0 and consider D ⊆ Y an open set such that Γf,C (x0 ) = f (x0 ) + C ⊆ D. Since f (x0 ) ∈ D, there exists U ∈ NX (x0 ) such that f (x) ∈ D + C • for every x ∈ U . Hence, Γf,C (x) ⊆ D + C for every x ∈ U , and so Γf,C is C-u.c. at x0 . Suppose now that Γf,C is H-u.c. at x0 and consider V ∈ NY . By definition, there exists U ∈ NX (x0 ) such that Γf,C (x) ⊆ Γf,C (x0 ) + V = f (x0 ) + V + C for all x ∈ U . Therefore, f (x) ∈ f (x0 ) + V + C • for every x ∈ U ; i.e., f is C-l.c. at x0 . Since the other implication is immediate, the proof of (i) is complete. (ii) Suppose that f is C-u.c. at x0 and consider V ∈ NY ; there exists U ∈ NX (x0 ) such that f (x) ∈ f (x0 ) + V − C for every x ∈ U . It follows that f (x0 ) ∈ Γf,C (x) + V , whence Γf,C (x0 ) ⊆ Γf,C (x) + V for every x ∈ U . Therefore, Γf,C is H-l.c. at x0 . Suppose now that Γf,C is l.c. at x0 and consider V ∈ NY . Since f (x0 ) ∈ (f (x0 ) + int V ) ∩ Γ (x0 ), there exists U ∈ NX (x0 ) such that (f (x0 ) + int V ) ∩ Γ (x) = ∅ for all x ∈ U . Therefore, f (x) ∈ f (x0 ) + V − C for every x ∈ U ; i.e., f is C-u.c. at x0 . Since Hausdorff lower continuity implies lower continuity, the proof is complete.  When Y = R and C = R+ we have more refined statements; we set Γf := Γf,R+ in this case. Proposition 2.8.3. Let f : X → R := R ∪ {−∞, ∞} with the associated multifunction Γf : X ⇒ R whose graph is epi f . (i) Suppose that x0 ∈ dom f (= dom Γf ). Then f is l.c. at x0 ⇔ Γf is u.c. at x0 ⇔ Γf is H-u.c. at x0 . (ii) Suppose that x0 ∈ X. Then Γf is H-l.c. at x0 ⇒ f is u.c. at x0 ⇔ Γf is l.c. at x0 ; moreover, if either f (x0 ) > −∞ or X is a topological vector space and f is convex, then f is u.c. at x0 ⇔ Γf is H-l.c. at x0 . Proof. When f (x0 ) ∈ R the conclusion follows from the preceding proposition; one must only mention that in this case upper continuity and C = R+ upper continuity for Γf coincide. If f (x0 ) = ∞, it is obvious that f is u.c. at x0 and Γf is l.c. and H-l.c. at x0 (since Γf (x0 ) = ∅); therefore, (ii) holds in this case. Suppose now that f (x0 ) = −∞; in this case Γf (x0 ) = R. Obviously, f is l.c. at x0 and Γf is u.c. and H-u.c. at x0 . Therefore, (i) holds. Assume that Γf is H-l.c. at x0 . Since Γf (x0 ) = R, taking V = [−1, 1] ∈ NR (0), there exists U ∈ NX (x0 ) such that Γf (x) = R for every x ∈ U . Therefore, f (x) = −∞ for every x ∈ U , whence f is u.c. at x0 .

118

2 Functional Analysis over Cones

Assume that f is u.c. at x0 and consider D ⊆ R an open set; take y0 ∈ Γf (x0 ) ∩ D = D. There exists ε > 0 such that ]y0 − ε, y0 + ε[ ⊆ D. Since f is u.c. at x0 , there exists U ∈ NX (x0 ) such that f (x) < y0 for every x ∈ U . It follows that y0 ∈ Γf (x) ∩ D for every x ∈ U . Therefore, Γf is l.c. at x0 . Assume that Γf is l.c. at x0 and take λ ∈ R. Since Γf (x0 )∩ ] − ∞, λ[ = ∅, there exists U ∈ NX (x0 ) such that Γf (x)∩ ] − ∞, λ[ = ∅ for every x ∈ U . It follows that f (x) < λ for every x ∈ U . Therefore, f is u.c. at x0 . Of course, if f takes the value −∞ on a neighborhood of x0 , then Γf is constant on that neighborhood, and so it is H-l.c. at x0 . This is the case in which f is convex, f (x0 )= − ∞, and f is u.c. at x0 . The proof is complete.  Proposition 2.8.4. The following assertions hold: (i) Let f : X → Y • be a proper C-l.c. function and C be a proper closed convex cone; then epi f is closed. (ii) Let fi : Xi → Y • be proper and C-l.c. at xi ∈ Xi ,, where (Xi , τi ) are topological spaces for i ∈ {1, 2}, and f : X1 × X2 → Y • be defined by f (x1 , x2 ) := f1 (x1 ) + f2 (x2 ). Then f is C-l.c. (iii) Let f1 , f2 : X → Y • be proper and C-l.c. at x ∈ X; then f := f1 + f2 is C-l.c. at x. (iv) Let g : (Z, σ) → X, τ be continuous at z ∈ Z, and f : X → Y • be proper and C-l.c. at x := g(z). Then f ◦ g is C-l.c. at z. (v) Let Z be a topological vector space ordered by the convex cone P ⊆ Z, f : X → Y • be proper and C-l.c. at x ∈ X, and h : Y → Z • be proper and increasing; set h(∞) := ∞ and y := f (x). Assume that either y ∈ Y and h is P -l.c. at y, or y = ∞, and for every z ∈ Z, there exists y ∈ Y with h(y) ≥ z. Then h ◦ f is P -l.c. at x. Proof. For the proofs of assertions (i), (ii), (iv) and (v) see the proof of [329, Prop. 3.1.31]. (iii) Let y ∈ Y such that y ≤C f (x) and V ∈ NY . There exist y1 , y2 ∈ Y such that y = y1 + y2 and yi ≤ fi (x). Indeed, if f1 (x) ∈ Y one takes y1 := f1 (x), and so Y  y2 := y − y1 ≤C f2 (x); similarly if f2 (x) ∈ Y . If f1 (x) = f2 (x) = ∞ one takes y1 := y2 := 12 y ∈ Y , and so yi ≤C fi (x) for i ∈ {1, 2}. Consider V  ∈ NY such that V  + V  ⊆ V . Because for i ∈ {1, 2} fi is C-l.c. at x, there exist Ui ∈ NX (x) such that fi (x) ∈ yi + V  + C • for x ∈ Ui , whence, for x ∈ U := U1 ∩ U2 , one has f (x) = f1 (x) + f2 (x) ∈  y1 + V  + C • + y2 + V  + C • ⊆ y + V + C • . Hence, f is C-l.c. at x. Note that for C = Y , any function f : X → Y • is C-l.c., while epiC f (= dom f × Y ) is closed if and only if dom f is closed. This shows that the properness of C in assertion (i) of the preceding result is essential. Moreover, the reverse implication of assertion (i) in Proposition 2.8.4 is not true as the following example from [469] shows: Let X = R, Y = R2 , C = R2+ and  −1  f : R → R2 defined by f (x) := (0, 0) if x = 0 and f (x) := |x| , −1 otherwise; then epiC f is closed in X × Y (= R3 ), while f is not C-l.c. at 0 as easily seen.

2.8 Continuity Notions for Extended Vector-valued Functions

119

We are now interested in continuity properties of the composition of two multifunctions or of a function with a multifunction. So consider another topological space U, the functions f : X × U → Y , g : X → Y , and the multifunctions Λ : U ⇒ X, F : X ×U ⇒ Y and G : X ⇒ Y ; we associate the multifunctions f Λ, gΛ, F Λ, GΛ : U ⇒ Y defined by (f Λ)(u) := f (Λ(u) × {u}), (gΛ)(u) := g (Λ(u)), (F Λ)(u) = F (Λ(u) × {u}), and (GΛ)(u) := G (Λ(u)). Proposition 2.8.5. Let f, g and Λ, F, G be as above and u0 ∈ dom Λ. (i) If Λ is u.c. at u0 and g is C-l.c. on Λ(u0 ), then gΛ is C-u.c. at u0 . Moreover, if Λ(u0 ) is compact, then gΛ is H-C-u.c. at u0 . (ii) If Λ is u.c. at u0 , F is C-u.c. at (x, u0 ) for all x ∈ Λ(u0 ), and Λ(u0 ) is compact, then F Λ is C-u.c. at u0 . If f is C-l.c. on Λ(u0 ), then f Λ is H-C-u.c. at u0 . (iii) If Λ is l.c. at (x0 , u0 ) ∈ gr Λ and F is C-l.c. at (x0 , u0 ; y0 ), then F Λ is C-l.c. at (u0 , y0 ). In particular, if f is C-u.c. at (x0 , u0 ), then f Λ is C-l.c. at (u0 , y0 ), where y0 = f (u0 , x0 ). (iv) If Λ is compact at u0 and F is closed at (x, u0 ) for all x ∈ Λ(u0 ), then F Λ is closed at u0 . (v) If Λ is compact at every u ∈ U and G is C-l.s.c., then GΛ is C-l.s.c., too. Suppose now that X is a topological vector space, too. (vi) If Λ is H-u.c. at u0 and f is equi-C-l.c. on Λ(u0 ) × {u0 }, then f Λ is H-C-u.c. at u0 . (vii) If Λ is H-l.c. at u0 and f is equi-C-u.c. on Λ(u0 ) × {u0 }, then f Λ is H-C-l.c. at u0 .   Proof. (i) Let D ⊆ Y be an open set such that g Λ(u0) ⊆ D. For every x ∈ Λ(u0 ), since g is C-l.c. at x and D ∈ NY g(x) , there exists an openneighborhood Vx of x in X such that g(Vx ) ⊆ D + C. The set D := x∈Λ(u0 ) Vx is open and contains Λ(u0 ). Therefore, there exists ⊆ D for every u ∈ U0 . It follows that U0 ∈ NU (u0) such  that Λ(u)  (gΛ)(u) = g Λ(u) ⊆ g(D ) ⊆ D + C for u ∈ U0 . Hence, gΛ is C-u.c. at u0 . If Λ(u0 ) is compact, then (gΛ)(u0 ) is also compact, and so gΛ is H-C-u.c. at u0 . (ii) Let D ⊆ Y be an open set such that F (Λ(u0 ) × {u0 }) ⊆ D. Then for every x ∈ Λ(u0 ), F (x, u0 ) ⊆ D. Since F is C-u.c. at (x, u0 ), there exists an open neighborhood Vx of x in X and Ux ∈ NU (u0 ) such that F (Vx × Ux ) ⊆ D + C. Since {Vx | x ∈ Λ(u0 )} is an open cover for the compact set Λ(u0 ), there exist x1 , . . . , xn ∈ Λ(u0 ) such that Λ(u0 ) ⊆ Vx1 ∪ · · · ∪ Vxn =: V ; of course, V is an open set. Since Λ is u.c. at u0 , there exists U0 ∈ NU (u0 ) such that Λ(U0 ) ⊆ V . Let U := U0 ∩ Ux1 ∩ · · · ∩ Uxn , u ∈ U , and x ∈ Λ(u) (⊆ V ). It follows that x ∈ Vxi for some 1 ≤ i ≤ n. Since u ∈ Uxi , F (x, u) ⊆ D + C. Hence, F Λ(u) ⊆ D +C. Therefore, F Λ is C-u.c. If f is C-l.c. on Λ(u0 )×{u0 }, by applying what precedes for the multifunction F := gr f , one obtains that

120

2 Functional Analysis over Cones

f Λ is C-u.c. at u0 . To obtain the stronger result note that the multifunction  Λ : U ⇒ X × U defined by Λ(u) := Λ(u) × {u} is l.c. at u0 ; by applying then (i), the conclusion follows. The fact that Λ is l.c. at u0 is obtained by using a similar argument to that used above, so that the proof is left to the reader. (iii) Let W ∈ NY (y0 ). Since F is C-u.c. at (x0 , u0 ; y0 ), there exist V ∈ NX (x0 ) and U1 ∈ NU (u0 ) such that F (x, u) ∩ (W − C) = ∅ for every (x, u) ∈ V × U1 . Since Λ is l.c. at (u0 , x0 ), there exists U2 ∈ NU (u0 ) such that Λ(u) ∩ V = ∅ for all u ∈ U2 . Let u ∈ U1 ∩ U2 and take x ∈ Λ(u) ∩ V . It follows that there exists y ∈ F (x, u) ∩ (W − C). Therefore, y ∈ F Λ(u) ∩ (W − C). Hence, F Λ is C-l.c. at (u0 , y0 ). When f is C-u.c. at (x0 , u0 ), by applying what precedes to gr f , we get the desired conclusion. (iv) Let gr F Λ ⊇ ((ui , yi ))i∈I → (u0 , y0 ). There exists X ⊇ (xi )i∈I such that ((ui , xi ))i∈I ⊆ gr Λ and ((xi , ui ; yi ))i∈I ⊆ gr F . Since Λ is compact at u0 , there exists a subnet (xϕ(j) )j∈J → x0 ∈ Λ(u0 ). Since F is closed at (x0 , u0 ), (x0 , u0 ; y0 ) ∈ gr F . It follows that (u0 , y0 ) ∈ gr F Λ. Therefore, F Λ is closed at u0 . (v) Let y ∈ Y and let (ui )i∈I be a net in levGΛ (y) converging to u ∈ U. There exists (xi )i∈I ⊆ levG (y) such that xi ∈ Λ(ui ) for every i ∈ I. Since Λ is compact at u, there exists the subnet (xϕ(j) )j∈J of (xi )i∈I convergent to x ∈ Λ(u). Since G is C-l.s.c., x ∈ levG (y), whence x ∈ levGΛ (y). Consider now that X is a topological vector space. (vi) Let W ∈ NY . Since f is equi-C-l.c. on Λ(u0 ) × {u0 }, there exist V ∈ NX and U1 ∈ NU (u0 ) such that f ((x + V ) × U1 ) ⊆ f (x, u0 ) + W + C for every x ∈ Λ(u0 ). Since Λ is H-u.c. at u0 , there exists U2 ∈ NU (u0 ) such that Λ(U2 ) ⊆ Λ(u0 ) + V . Consider u ∈ U1 ∩ U2 and y ∈ (f Λ)(u). Then y = f (x, u) for some x ∈ Λ(u). It follows that x = x0 + v with x0 ∈ Λ(u0 ) and v ∈ V . So y = f (x0 + v, u) ∈ f (x0 , u0 ) + W + C ⊆ (f Λ)(u0 ) + W + C, whence (f Λ)(U1 ∩ U2 ) ⊆ (f Λ)(u0 ) + W + C. Thus f Λ is H-C-u.c. at u0 . (vii) Let W ∈ NY . Since f is equi-C-u.c. on Λ(u0 ) × {u0 }, there exist V ∈ NX and U1 ∈ NU (u0 ) such that f (x , u) ∈ f (x, u0 ) + W − C for every x ∈ Λ(u0 ), x ∈ x+V , and u ∈ U1 , or equivalently, f (x, u0 ) ∈ f (x , u)+W +C for every x ∈ Λ(u0 ), x ∈ x + V , and u ∈ U1 . Since Λ is H-l.c. at u0 , there exists a neighborhood U2 of u0 such that Λ(u0 ) ⊆ Λ(u) + V for every u ∈ U2 . Let u ∈ U1 ∩ U2 and y ∈ (f Λ)(u0 ). Then y = f (x, u0 ) for some x ∈ Λ(u0 ) ⊆ Λ(u) + V . Hence, there exists some x ∈ Λ(u) ∩ (x + V ). It follows that y ∈ f (x , u)+W +C ⊆ f Λ(u) +W +C, whence (f Λ)(u0 ) ⊆ (f Λ)(u)+W +C  for every u ∈ U1 ∩ U2 . The proof is complete. The implication “⇐” of assertion (i) in Proposition 2.7.8 is established by Ferro [193, Lemma 2.2], while from the implication “⇒” one obtains Smithson’s [508, Th. 3.1]. Proposition 2.7.9 and Corollary 2.7.10 can be found in [275]; notice that Penot and Th´era in [469] say that Γ is graphically upper

2.9 Extended Multifunctions

121

semicontinuous at x when Γgr is upper continuous at x (see also [462]). The assertion (iv) of Proposition 2.7.12 is stated in Ferro [193, Lemma 2.1], while from assertion (v), one obtains Smithson’s [508, Th. 3.3]. The assertion (i) of Proposition 2.7.13 can be found in [61], the assertion (ii) in [360], while assertions (iv) and (v) of Proposition 2.7.22 in [466], where one can also find the statement (i) of Proposition 2.7.12. Propositions 2.7.24 and 2.7.25 are partially stated in Ferro [192, Prop. 2.3] and [193, Prop. 3.1]. The continuity notions for vector-valued functions are in accordance with those of Penot and Th´era in [468] and [469]. The assertions (ii)–(iv) of Proposition 2.8.5 can be found in [387] for C = {0}. Note that the statements (ii), (iii), (vi) and (vii), for g instead of f , of Proposition 2.8.5 can be found in [50], [52] and [53]. Other results on continuity of multifunctions can be found in the books by Aubin and Frankowska [36] and Cˆ arj˘ a [103] among others.

2.9 Extended Multifunctions Given an extended vector-valued function f : X → Y • and x0 ∈ dom f , we define the multifunctions Γf+ , Γf− : X ⇒ Y by Γf− (x0 ) = lim inf Γf,−C (x) and Γf+ (x0 ) = lim inf Γf,C (x) x→x0

x→x0

Equivalent definitions as introduced in [6, 395] are as follows: Proposition 2.9.1. If f : X → Y • and x0 ∈ dom f , then Γf− (x0 ) = {y ∈ Y | ∀ V ∈ NY (y), ∃ U ∈ NX (x0 ), f (U ) ⊂ V + C} and Γf+ (x0 ) = {y ∈ Y | ∀ V ∈ NY (y), ∃ U ∈ NX (x0 ), f (U ) ⊂ V − C}. Suppose now that X, Y are first-countable, then Γf− (x0 ) = {y ∈ Y | ∀xn → x0 , ∃yn →y : yn ≤C f (xn ) ∀n} and Γf+ (x0 ) = {y ∈ Y | ∀xn → x0 , ∃yn →y : f (xn ) ≤C yn ∀n}. Proof. The first relation follows from the following chain of equivalences: y ∈ Γf− (x0 ) ⇔ ∀ V ∈ NY (y), ∃ U ∈ NX (x0 ), ∀x ∈ U : Γf,−C (x) ∩ V = ∅ ⇔ ∀ V ∈ NY (y), ∃ U ∈ NX (x0 ), ∀x ∈ U, ∃v ∈ V : f (x) ∈ v + C ⇔ ∀ V ∈ NY (y), ∃ U ∈ NX (x0 ) : f (U ) ⊂ V + C. The proof of the second equality above is analogous to the proof of the first one. The sequential assertions follow from Proposition 2.7.3(ii).  − + Let us now list some properties of Γf and Γf :

122

2 Functional Analysis over Cones

Proposition 2.9.2. If f : X → Y • and x0 ∈ dom f , then (i) Γf− (x0 ) ⊂ Γf,−C (x0 ) and Γf+ (x0 ) ⊂ Γf,C (x0 ); (ii) Γf− (x0 ) (Γf+ (x0 )) is C-upper (C-lower) bounded by f (x0 ); (iii) for C is convex, Γf− (x0 ) (Γf+ (x0 )) is downward (upward), i.e., Γf− (x0 ) = Γf− (x0 ) − C (Γf+ (x0 ) = Γf+ (x0 ) + C); (iv) for C is convex, Γf− (x0 ) and Γf+ (x0 ) are closed and convex subsets of Y ; (v) f is C-l.c. at x0 and Γf− (x0 ) = ∅ if and only if f (x0 ) ∈ Γf− (x0 ), and f is C-u.c. at x0 if and only if f (x0 ) ∈ Γf+ (x0 ); (vi) for C is convex and int C = ∅, we have Γf− (x0 )(Γf+ (x0 )) = ∅ if and only if f is locally C-lower (C-upper) bounded around x0 . Proof. (i) and (ii) are obvious from the definition of Γf− (x0 ) and Γf+ (x0 ). (iii) The inclusion Γf− (x0 ) ⊂ Γf− (x0 ) − C is obvious. For the converse, let y ∈ Γf− (x0 ) and c ∈ C. If V is a neighborhood of y − c, then V + c is a neighborhood of y. Hence, there is U ∈ NX (x0 ) such that f (U ) ⊂ V +c+C ⊂ V + C, and then y − c ∈ Γf− (x0 ). The second equality can be established by inverting the order. (iv) First, we remark that Γf− (x0 ) and Γf+ (x0 ) are closed, since the lower limit of a given family of sets is always closed. To prove convexity of Γf− (x0 ), take y, z ∈ Γf− (x0 ) and λ ∈ [0, 1]. Fix V a neighborhood of λy + (1 − λ)z in Y . Since the function (u, v) ∈ Y × Y → λu + (1 − λ)v is continuous, there exist V1 ∈ NY (y) and V2 ∈ NY (z) such that λV1 + (1 − λ)V2 ⊂ V . Return to y, z ∈ Γf− (x0 ), one can find U1 , U2 ∈ NX (x0 ) such that for U = U1 ∩ U2 f (U ) ⊂ f (U1 ) ⊂ V1 + C and f (U ) ⊂ f (U2 ) ⊂ V2 + C; we conclude f (U ) ⊂ λ(V1 + C) + (1 − λ)(V2 + C) ⊂ λV1 + (1 − λ)V2 + C ⊂ V + C. Thus λy + (1 − λ)z ∈ Γf− (x0 ), which implies Γf− (x0 ) is convex. (v) We only prove the first implication ”⇒”; the others are obvious. Suppose that f is C-l.c. at x0 and consider V ∈ NY , then there exists U1 ∈ NX (x0 ) such that f (x) ∈ f (x0 ) + V + C • for every x ∈ U1 . Since Γf− (x0 ) is nonempty, there are y0 ∈ Y and U2 ∈ NX (x0 ) such that f (x) ∈ y0 +V +C ⊂ Y for every x ∈ U2 . Set U = U1 ∩ U2 , then f (x) ∈ (f (x0 ) + V + C • ) ∩ Y = f (x0 ) + V + C for every x ∈ U . Hence, f (x0 ) ∈ Γf− (x0 ). (vi) Suppose f is locally C-lower bounded around x0 , then exists U ∈ NX (x0 ) such that f (U ) ⊂ y0 + C ⊂ V + C and again y0 ∈ Γf− (x0 ). Suppose now that a0 ∈ int C = ∅, we can find C ⊃ V0 ∈ NY (a0 ), so that one can use y0 ∈ Γf− (x0 ) to obtain from y0 − a0 + V0 ∈ NY (y0 ) the existence of U ∈ NX (x0 ) such that f (U ) ⊂ y0 − a0 + V0 + C ⊂ y0 − a0 + C + C ⊂ y0 − a0 + C. Hence, f is C-lower bounded on U by y0 − a0 .  Proposition 2.9.3. If the cone C ⊂ Y is convex, f1 , f2 : X → Y • and x0 ∈ dom f1 ∩ dom f2 , then

2.9 Extended Multifunctions

123

(i) Γf−1 (x0 ) + Γf−2 (x0 ) ⊂ Γf−1 +f2 (x0 ) and Γf+1 (x0 ) + Γf+2 (x0 ) ⊂ Γf+1 +f2 (x0 ); (ii) equality holds if moreover f1 is both C-l.c. and C-u.c., i.e., C-continuous. Proof. (i) We only prove the first inclusion, the proof of the second is similar. Let y = y1 + y2 , where yi ∈ Γf−i (x0 ) (i = 1, 2), and V ∈ NY (y), then there exist Vi ∈ NY (yi ) and U ∈ NX (x0 ) such that V1 + V2 ⊂ V and fi (U ) ⊂ Vi + C, i = 1, 2. We conclude from convexity of C that f1 (U ) + f2 (U ) ⊂ V1 + V2 + C + C ⊂ V + C and then the assertion is proven. (ii) For the converse inclusion, let y ∈ Γf−1 +f2 (x0 ) and write y = y1 + y2 where y1 = f1 (x0 ) and y2 = y − f1 (x0 ). Since f1 is C-l.c. and C-u.c., we get from Proposition 2.9.2(v) y1 = f1 (x0 ) ∈ Γf−1 (x0 ). It suffices to give the proof for y2 ∈ Γf−2 (x0 ). Suppose that V ∈ NY (y2 ), we choose V0 ∈ NY , V1 ∈ NY (y2 ) (i.e., f1 (x0 ) + V1 ∈ NY (y)) such that V0 + V1 ⊂ V . Now we use f1 is C-u.c. and y ∈ Γf−1 +f2 (x0 ) to find U ∈ NX (x0 ) such that for every x ∈ U f1 (x0 ) − f1 (x) ⊂ V0 + C and f1 (x) + f2 (x) ⊂ f1 (x0 ) + V1 + C. Combining these relations we get for every x ∈ U , f2 (x) ⊂ V0 + V1 + C ⊂  V + C; therefore y2 ∈ Γf−2 (x0 ). Proposition 2.9.4. Let f : X → Y • . If f is positively homogenous, then so are Γf− and Γf+ . Proof. Let us prove Γf− is positively homogenous. We first remark that for every λ > 0, x0 ∈ / dom Γf− is equivalent to λx0 ∈ / dom Γf− . That is, Γf− is positively homogenous on X \ dom Γf− . Let x0 ∈ dom Γf− and λ > 0, then since f is positively homogenous y ∈ Γf− (λx0 ) ⇔ ∀ V ∈ NY (y), ∃ U ∈ NX (λx0 ) : f (U ) ⊂ V + C ⇔ ∀ V ∈ NY (y), ∃ Uλ = λ1 U ∈ NX (x0 ) : f (λUλ ) ⊂ V + C ⇔ ∀ V ∈ NY (y), ∃ Uλ ∈NX (x0 ) : f (Uλ )⊂ λ1 (V + C) = λ1 V + C ⇔ ∀ Vλ ∈ NY (y/λ), ∃ Uλ ∈ NX (x0 ) : f (Uλ ) ⊂ Vλ + C y ⇔ ∈ Γf− (x0 ) ⇔ y ∈ λΓf− (x0 ). λ This proves that Γf− is positively homogenous. Similar arguments show that Γf+ is positively homogenous.  Proposition 2.9.5. Let f : X → Y • and x0 ∈ dom f . If int C = ∅, then Γf− and Γf+ are l.c. at x0 . Proof.

124

2 Functional Analysis over Cones

We only prove lower continuity of Γf− , since for Γf. it suffices to invert the order. So, according to Proposition 2.7.6 and (2.73), we have only to prove Γf− (x0 ) ⊂ lim inf Γf− (x). x→x0

It is easily seen if Γf− (x0 ) = ∅. For Γf− (x0 ) = ∅, we’ll prove that for some a0 ∈ int C and for every ε > 0, Γf− (x0 ) − εa0 ⊂ lim inf Γf− (x). x→x0

Suppose on the contrary that there exist ε > 0 and y0 ∈ Γf− (x0 ) such that yε := y0 − εa0 ∈ / Γf− (x0 ). Using Proposition 2.7.3(i), we deduce that ∃ X \ / {x0 } ⊃ (xi )i∈I → x0 such that, for each i ∈ I, ∃ϕ(i) ∈ I for which yε ∈ Γf− (xϕ(i) ). Moreover, since Γf− (xϕ(i) ) = lim inf u→xϕ(i) Γf,−C (u), Proposition 2.7.3(i) implies ∃ X \ {xϕ(i) } ⊃ (uij )j∈Ji → xϕ(i) such that ∀j ∈ Ji , ∃ψ(j):



/ Γf,−C uiψ(j) , i.e., f uiψ(j) − yε ∈ / C. yε ∈

(2.78)

One defines the system K = (Ji ))i∈I of directed sets Ji indexed by a directed set I. Let Δ := {(i, (jk )k i ) : i ∈ I, jk ∈ Jk , ∀k  i} be the diagonal of K, which is equipped with the relation:  i  i   (i, (jk )k i )  (i , (jk )k i ) ⇐⇒ jk  jk ∀k  i. Furthermore, for each δi = (i, (jk )k i ) ∈ Δ, we define zi := uiψ(ji ) , we conclude that the net (zi )λi ∈Δ is convergent to x0 , since (xi )i∈I → x0 and (uij )j∈Ji → xϕ(i) for each i ∈ I. The fact that y0 ∈ Γf− (x0 ) = lim inf x→x0 Γf,−C (x) gives the existence of some (zα(i) ) and (yi ) → y0 such that yi ∈ Γf,−C (zα(i) ) for each i, meaning of course that yi ≤C f (zα(i) ) for each i. By conditions a0 ∈ int C and ε > 0 it follows from (yi ) → y0 that for each i  i0 , yi −

yε = yi − y0 + εa0 ∈ C, i.e., yε ≤C yi ; so yε ≤C f (zα(i) ) = f uiψ(jα(i) ) for each i  i0 , which clearly contradicts (2.78).

 Proposition 2.9.6. If f : X → Y • and x0 ∈ dom f , for which Γf− (x0 ) is nonempty and admits a supremum. Then f is C-l.c. at x0 ⇐⇒ supC Γf− (x0 ) = f (x0 ). Proof. (⇒) follows from Proposition 2.9.2 (ii) and (v). (⇐) Assume supC Γf− (x0 ) = f (x0 ), then f (x0 ) = supC Γf− (x0 ) ∈ Γf− (x0 ) since the last one is closed. By Proposition 2.9.2(iii) we deduce

2.9 Extended Multifunctions

125

Γf,−C (x0 ) = f (x0 ) − C ⊂ Γf− (x0 ) − C = Γf− (x0 ). Using once more Proposition 2.9.6 and Proposition 2.9.2(i), we obtain that Γf,−C (x0 ) = Γf− (x0 ) ⊂ lim inf Γf− (x) ⊂ lim inf Γf,−C (x). x→x0

x→x0

It follows that Γf,−C is l.c. at x0 . Return to Proposition 2.8.2, we conclude  that f is C-l.c. at x0 . Given a mapping f : X → Y • and x0 ∈ X. Our object now is to define the C-l.c. regularization of f at x0 . The following C-l.c. regularization of the vector-valued mapping f  sup inf f (x) if f locally bounded around x0 , ¯ f (x0 ) = U ∈NX (x0 ) x∈U (2.79) −∞ otherwise. has been initiated by Th´era [543] for maps with values in complete (lattice) Daniell spaces. This vector C-l.c. regularization concept is an adaptation of the classical real lower semicontinuous regularization f¯(x0 ) = lim inf f (x). x→x0

Since then several authors continue to use that vector lower continuity or some variant, see for instance [6, 8–10, 85, 139, 208, 219, 280, 387, 395, 469]. Inspired by the ideas in [469] and motivated by [6, 8, 139, 395], we use the concept of lower level-set Γf− (x0 ) to define the C-l.c. regularization of the vector-valued mapping f at x0 ∈ X by ⎧ − − ⎨ supC Γf (x0 ) if x0 ∈ dom Γf , (2.80) f (x0 ) := −∞ if x0 ∈ dom f \ dom Γf− , ⎩ / dom f. +∞ if x0 ∈ First observe that the function f- : X → Y¯ is well defined as an extended single vector-valued function when the supremum exists and reduced to a singleton, which is insured when the cone C is pointed. The domain of the vector-valued function f- is dom f- = {x ∈ X | f- (x) ∈ Y ∪ {−∞}} = {x ∈ X | Γf− (x) = ∅ and supC Γf− (x) exists}. Similarly, one can define the concept of C-u.c. regularization of the vectorvalued mapping f at x0 ∈ X by  inf C Γf+ (x0 ) if x0 ∈ dom Γf+ , +  f (x0 ) := (2.81) / dom Γf+ , +∞ if x0 ∈ and

dom f+ = {x ∈ X | Γf+ (x) = ∅ and inf C Γf+ (x) exists}.

126

2 Functional Analysis over Cones

Proposition 2.9.7. Let (Y, C) be an order complete t.v.s., with C pointed and int C = ∅. Let f : X → Y • and x0 ∈ dom f . We have (i) x0 ∈ dom f- if and only if f is locally C-lower bounded around x0 ; (ii) f- is C-l.c. at every point of its domain; (iii) f is C-l.c. at x0 ∈ dom f if and only if f- (x0 ) = f (x0 ); (iv) f- is the greatest C-l.c. vector function minorizing f ; (v) if moreover f is positively homogenous, then so is f- . Proof. (i) Applying Proposition 2.9.2(vi) and using that Y is order complete, we obtain: x ∈ dom f- ⇔ Γ − (x) = ∅ and sup Γ − (x) exists 0

C

f

f

⇔ f is locally C-lower bounded around x0

(ii) Suppose x0 ∈ dom f . As in the proof of Proposition 2.9.5, by using Proposition 2.9.2, we obtain Γf− (x) ⊂ Γf- ,−C (x) for every x ∈ dom f and Γf− (x0 ) = Γf- ,−C (x0 ). According to Proposition 2.9.5 Γf- ,−C (x0 ) = Γf− (x0 ) ⊂ lim inf Γf− (x) ⊂ lim inf Γf- ,−C (x); x→x0

x→x0

which yields Γf- ,−C is l.c. at x0 , and applying Proposition 2.8.2, we get f- is C-l.c. at x0 . The conclusion of (iii) follows from Proposition 2.9.5. (iv) Let h : X → Y • be a C-l.c. vector function that minorizes f on its domaine, then by Proposition 2.9.2(v), for each x ∈ dom f , h(x) ∈ Γh− (x) ⊂ Γf− (x) and therefore h(x) ≤C f- (x). (v) By Proposition 2.9.4, Γf− is positively homogenous, so the same for f- .  If we invert the order, we obtain similar properties for the upper level set as follows: Proposition 2.9.8. Under conditions of Proposition 2.9.7, we have (i) x0 ∈ dom f+ if and only if f is locally C-upper bounded around x0 ; (ii) f+ is C-u.c. at every point of its domain; (iii) f is C-u.c. at x0 ∈ dom f if and only if f+ (x0 ) = f (x0 ); (iv) f+ is the greatest C-u.c. vector function maximizing f ; (v) if moreover f is positively homogenous, then so is f+ .

2.10 Continuity Properties of Multifunctions Under Convexity Assumptions Throughout this section X and Y are topological vector spaces, and C ⊂ Y is a convex cone and Γ : X ⇒ Y is a multifunction. The next result is the analogue of a known one for convex functions.

2.10 Continuity Properties of Multifunctions Under Convexity Assumptions

127

Theorem 2.10.1. Let Γ be C-nearly convex. Suppose that Γ is C-l.c. at (x0 , y0 ) ∈ gr Γ . Then Γ is C-l.c. at every x ∈ int(dom Γ ) = aint(dom Γ ). Moreover, Γ is H-C-l.c. at every x ∈ int(dom Γ ) for which there exists a bounded set B ⊂ Y such that Γ (x) ⊂ B + C. Proof. Let α ∈ ]0, 1[ be such that Γ is C-α-convex. Consider V ∈ NY ; there exists a balanced W ∈ NY with W + W ⊂ V . By hypothesis, there exists U0 ∈ NX such that Γ (x0 + u) ∩ (y0 + W − C) = ∅, or, equivalently, y0 ∈ Γ (x0 + u) + W + C

∀ u ∈ U0 .

(2.82)

Let (x, y) ∈ gr Γ with x ∈ int(dom Γ ). Of course, dom Γ is α-convex, whence, by Proposition 2.6.3 (iii), int(dom Γ ) is a convex set. Then there exist λ ∈ Λα \ {0, 1} and (x1 , y1 ) ∈ gr Γ , with x1 ∈ int(dom Γ ), such that x = λx0 + (1 − λ)x1 ; take y := λy0 + (1 − λ)y1 . It follows that y ∈ λ (Γ (x0 + u) + W + C) + (1 − λ)Γ (x1 ) ⊆ Γ (x + λu) + λW + C ∀ u ∈ U0 .

(2.83)

Consider μ ∈ Λα \ {0, 1} such that μ(y − y) ∈ W . Then, by (2.83), the C-α-convexity of Γ and the choice of μ, y ∈ (1 − μ)Γ (x) + μy + μ(y − y) ⊆ (1 − μ)Γ (x) + μΓ (x + λu) + λμW + C + W ⊆ Γ (x + λμu) + V + C ∀ u ∈ U0 .

(2.84)

Taking U := λμU0 , we see that Γ is C-l.c. at (x, y). Consider now x ∈ int(dom Γ ) and a bounded set B ⊆ Y such that Γ (x) ⊆ B + C. For V ∈ NY , with the same construction as above, we obtain W and U0 for which (2.82) holds. Then we take (x1 , y1 ), λ and y as above; so relation (2.83) holds, too. Taking now μ ∈ Λα \ {0, 1} such that μ(B − y) ⊆ W , we find that (2.84) holds for every y ∈ Γ (x). Taking again U := λμU0 , we obtain that Γ is H-C-l.c. at x.  The next result is related to the boundedness condition of the preceding theorem. Proposition 2.10.2. Let Γ be C-nearly convex. Suppose that x0 ∈ int(dom Γ ) and Γ (x0 ) ⊆ B + C for some bounded set B ⊆ Y . Then for every x ∈ dom Γ , there exists a bounded set Bx ⊆ Y such that Γ (x) ⊆ Bx + C. Proof. By hypothesis, there exists α ∈ ]0, 1[ such that Γ is C-α-convex. Let x ∈ dom Γ ; there exist x1 ∈ dom Γ and λ ∈ Λα \ {0, 1} such that x0 = λx + (1 − λ)x1 . Since Γ is α-convex, λΓ (x) + (1 − λ)Γ (x1 ) ⊆ Γ (x0 ) + C ⊆ B + C.

128

2 Functional Analysis over Cones

Taking y1 ∈ Γ (x1 ), we obtain that Γ (x) ⊆ λ−1 (B − (1 − λ)y1 )+C = Bx +C,  where Bx := λ−1 (B − (1 − λ)y1 ). As an application of the preceding result, we obtain that if f : X → R is nearly convex and finite-valued at x0 ∈ int(dom f ), then f is finite-valued on dom f . Proposition 2.10.3. Assume that Γ is C-l.c. at (x0 , y0 ) ∈ gr Γ and C-nearly convex. If Γ (x0 ) ⊆ B + C for some bounded set B ⊆ Y , then Γ is H-C-u.c. at x0 . Proof. Let α ∈ ]0, 1[ be such that Γ is α-convex. Without loss of generality, we may suppose that x0 = 0. Let y0 ∈ Γ (0) be fixed. Consider V ∈ NY and take W ∈ NY such that W + W + W ⊆ V . Since Γ is H-C-l.c. at 0, there exists U ∈ NX such that y0 ∈ Γ (u) + W + C for all u ∈ U . Since B is bounded, there exists μ ∈ ]1/2, 1[ ∩Λα with (μ−1 − 1)(B ∪ {y0 }) ⊆ W . Let u ∈ (1 − μ−1 )U ∈ NX . There exists u ∈ U such that 0 = μu + (1 − μ)u . It follows that y0 = y  + w + k with y  ∈ Γ (u ), w ∈ W and k ∈ C. Therefore, μΓ (u) + (1 − μ)y  ⊆ Γ (0), whence μΓ (u) ⊆ Γ (0) + (1 − μ)(w + k − y0 ). Hence, Γ (u) ⊆ Γ (0) + (μ−1 − 1)Γ (0) + (μ−1 − 1)W + C − (μ−1 − 1)y0 ⊆ Γ (0) + (μ−1 − 1)B + C + W + C + W ⊆ Γ (0) + W + W + W + C ⊆ Γ (0) + V + C. Therefore, Γ is H-C-u.c. at x0 .



The preceding results lead to the following important theorem. Theorem 2.10.4. Let Γ be a C-nearly convex. Suppose that Γ is C-l.c. at (x0 , y0 ) ∈ gr Γ and Γ (x0 ) ⊆ B + C for some bounded set B ⊆ Y . Then Γ is C-Hausdorff continuous on int(dom Γ ). Proof. Of course, x0 ∈ int(dom Γ ) in our conditions. From Proposition 2.10.2, Γ (x) ⊆ Bx + C, with Bx bounded, for every x ∈ dom Γ . Using Theorem 2.10.1 we obtain that Γ is H-C-l.c. at every x ∈ int(dom Γ ). Then using Proposition 2.10.3, we obtain that Γ is H-C-u.c. at every x ∈ int(dom Γ ). Therefore, Γ is C-Hausdorff continuous on int(dom Γ ).  From the preceding results we have the following characterizations of the continuity of nearly convex functions. Corollary 2.10.5. Let f : X → Y • be a nearly convex function and x0 ∈ int(dom f ). Suppose that C is normal. Then the following assertions are equivalent: (i) f is C-u.c. at x0 , (ii) Γf,C is l.c. at x0 , (iii) Γf,C is H-continuous at every x ∈ int(dom f ), (iv) f is continuous on int(dom f ), (v) f is continuous at x0 .

2.10 Continuity Properties of Multifunctions Under Convexity Assumptions

129

Proof. The implications (iv) ⇒ (v) ⇒ (i) are obvious; (i) ⇒ (ii) is true by Proposition 2.8.2 (ii), (ii) ⇒ (iii) is true by Theorem 2.10.1 and Proposition 2.10.3, while (iii) ⇒ (iv) is true by the normality of C.  The next result corresponds to another well known result of convex analysis. Theorem 2.10.6. Let Γ be a C-nearly convex. Suppose that there exist x0 ∈ int(dom Γ ), B ⊆ Y a bounded set, and U ∈ NX such that Γ (x0 ) ⊆ B + C and Γ (x0 + u) ∩ (B − C) = ∅ for every u ∈ U . Then Γ is C-H-continuous on int(dom Γ ). Proof. Let α ∈ ]0, 1[ be such that Γ is α-convex. Taking into account Theorem 2.10.1 and Propositions 2.10.2, 2.10.3, it is sufficient to show that Γ is H-C-l.c. at x0 . Consider V ∈ NY and take W ∈ NY such that W + W ⊆ V . There exists λ ∈ Λα \ {0, 1} such that λB ⊆ W . It follows that Γ (x0 ) ⊆ λΓ (x0 ) + (1 − λ)Γ (x0 ) ⊆ λ(B + C) + (1 − λ)Γ (x0 ) ⊆ (1 − λ)Γ (x0 ) + W + C. Let u ∈ U and y ∈ Γ (x0 + u) ∩ (B − C); hence, y = b − k with b ∈ B and k ∈ C. Then λ(b − k) + (1 − λ)Γ (x0 ) ⊆ λΓ (x0 + u) + (1 − λ)Γ (x0 ) ⊆ Γ (x0 + λu) + C, whence (1 − λ)Γ (x0 ) ⊆ Γ (x0 + λu) + C − λb + λk ⊆ Γ (x0 + λu) + W + C. Therefore, Γ (x0 ) ⊆ Γ (x0 + λu) + W + C + W + C ⊆ Γ (x0 + λu) + V + C. Hence, Γ (x0 ) ⊆ Γ (x) + V + C for every x ∈ x0 + λU ∈ NX (x0 ). It follows  that Γ is H-C-l.c. at x0 . From Theorem 2.10.6, also taking into account Proposition 2.8.3, we obtain that if f : X → R is nearly convex, f (x0 ) ∈ R, and f is bounded above on a neighborhood of x0 , then f is (finite-valued and) continuous on int(dom f ). In fact, semicontinuous nearly convex multifunctions are very close to convex multifunctions. Proposition 2.10.7. Suppose that Γ : X ⇒ Y is closed-valued and nearly convex. If dom Γ is convex (for example, if dom Γ is closed or open) and Γ is H-u.c. at every point of dom Γ , then Γ is convex. Proof. Since Γ is nearly convex, so is dom Γ ; so, if dom Γ is closed or open, from Corollary 2.6.4, we obtain that dom Γ is convex. Let α ∈ ]0, 1[ be such

130

2 Functional Analysis over Cones

that Γ is α-convex. Consider x, y ∈ dom Γ and λ ∈ ]0, 1[. There exists a sequence (λn )n∈N ⊆ Λα converging to λ. Of course, λn Γ (x) + (1 − λn )Γ (y) ⊆ Γ (λn x + (1 − λn )y)

∀ n ∈ N.

Let u ∈ Γ (x) and v ∈ Γ (y) be fixed. Consider V ∈ NY , and take W ∈ NY such that W + W ⊆ V . Since Γ is H-u.c. at xλ := λx + (1 − λ)y ∈ dom Γ , there exists nV ∈ N such that Γ (λn x + (1 − λn )y) ⊆ Γ (xλ ) + W for all n ≥ nV . It follows that λn u + (1 − λn )v ∈ Γ (xλ ) + W for all n ≥ nV , whence λu + (1 − λ)v ∈cl (Γ (xλ ) + W ) ⊆ Γ (xλ ) + W + W ⊆ Γ (xλ ) + V . Hence, λu + (1 − λ)v ∈ V ∈NY (Γ (xλ ) + V ) = cl Γ (xλ ) = Γ (xλ ). Therefore, λΓ (x) + (1 − λ)Γ (y) ⊆ Γ (λx + (1 − λ)y), which shows that Γ is convex.  The first part of Theorem 2.10.1 was obtained by Borwein [79] for Γ convex and C = {0}, while the second part was obtained by Nikodem [440] for Γ mid-convex, bounded-valued, and H-C-l.c. at x0 ; also, Nikodem obtained Proposition 2.10.3 and Theorem 2.10.6 for Γ mid-convex and bounded-valued in [440], and Proposition 2.10.7 for Γ mid-convex with bounded values and Hcontinuous at each point of its open convex domain in [441]. The equivalence “f is continuous at x0 ⇔ Γf is l.c. at x0 ” of Corollary 2.10.5 is stated by Borwein [80] for f convex.

2.11 Tangent Cones and Differentiability of Multifunctions Throughout this section X, Y are normed vector spaces. Consider A ⊆ X a nonempty set and a ∈ X. The Bouligand tangent cone or contingent cone of A at a is defined as the set TB (A; a) = lim sup t−1 (A − a) := lim sup RA,a (t), t↓0

t→0

where RA,a : R+ ⇒ X, RA,a (t) = t−1 (A − a) for t ∈ R> and RA,a (0) = ∅, where R> := ]0, ∞[. Similarly, the Ursescu tangent cone or adjacent cone of A at a is defined by TU (A; a) = lim inf t−1 (A − a) := lim inf RA,a (t). t↓0

t→0

Of course, TU (A; a) ⊆ TB (A; a) and TB (A; a) = ∅ if a ∈ / cl A. Taking this into account, in the sequel we consider only the case a ∈ cl A when considering tangent cones. It is known (and easy to see) that TB (A; a) = TU (A; a) = cl (R> · (A − a)) if A is a convex set and a ∈ cl A. Using Proposition 2.7.3 we have the following result; the notation (tn ) → 0+ means that (tn )n∈N ⊆ R> and (tn ) → 0.

2.11 Tangent Cones and Differentiability of Multifunctions

131

Proposition 2.11.1. Let A ⊆ X and a ∈ cl A. Then     TB (A; a) = u ∈ X | lim inf dist u, t−1 (A − a) = 0 t↓0

= {u ∈ X | ∃ (tn ) → 0+ , ∃ (un ) → u, ∀ n ∈ N : a + tn un ∈ A} , and     TU (A; a) = u ∈ X | lim dist u, t−1 (A − a) = 0 t↓0

= {u ∈ X | ∀ (tn ) → 0+ , ∃ (un ) → u, ∀ n ∈ N : a + tn un ∈ A} . Moreover, TB (A; a) and TU (A; a) are nonempty closed cones (in particular, they contain 0). Proof. The formulae above follow from Proposition 2.7.3. The sets TB (A; a) and limit inferior of and TU (A; a) are closed, since they are limit superior   multifunctions, respectively. Since a ∈ cl A, dist 0, t−1 (A − a) = 0 for every t > 0. So 0 ∈ TU (A; a). The fact that they are cones follows immediately from the sequential characterizations of their elements.  Let Γ : X ⇒ Y and (x, y) ∈ X × Y . Consider the multifunction  −1 t (Γ (x + tu) − y) if t ∈ R> , Γx,y : R × X ⇒ Y, Γx,y (t, u) = ∅ if t ≤ R− , where R− := −R+ = ] − ∞, 0]. Definition 2.11.2. Let Γ : X ⇒ Y , (x, y) ∈ gr Γ and u ∈ X. The Dini upper derivative of Γ at (x, y) in the direction u is the set DΓ (x, y)(u) := DΓ (x, y)(u) := lim sup (Γx,y )(t, u ), (t,u )→(0,u)

while the Dini lower derivativeof Γ at (x, y) in the direction u is the set DΓ (x, y)(u) := lim inf (Γx,y )(t, u ).  (t,u )→(0,u)

Using again Proposition 2.7.3, one has that v ∈ DΓ (x, y)(u) if and only if ∃ (tn , un , vn ) → (0+ , u, v), ∀ n ∈ N : (x, y) + tn (un , vn ) ∈ gr Γ, and v ∈ DΓ (x, y)(u) if and only if ∀ (tn , un ) → (0+ , u), ∃ (vn ) → v, ∃ n0 , ∀ n ≥ n0 : (x, y) + tn (un , vn ) ∈ gr Γ. Definition 2.11.3. Let Γ : X ⇒ Y and (x, y) ∈ gr Γ . We say that Γ is semidifferentiable at (x, y) in the direction u if DΓ (x, y)(u) = DΓ (x, y)(u), and Γ is semidifferentiable at (x, y) if Γ is semidifferentiable at every u ∈ X.

132

2 Functional Analysis over Cones

The following theorem establishes sufficient conditions for semidifferentiability of convex multifunctions. Recall that icr A means the intrinsic core of A (see page 32). For the proof of the theorem we need the following two lemmas. Lemma 2.11.4. Let ∅ = A ⊆ X be a convex set and let T : X → Y be a linear operator. (i) If int A = ∅ and T is open, then int(R> A) = R> · int A and T (int A) = int T (A). (ii) If icr A = ∅ (e.g., if X is finite-dimensional), then icr(R> A) = R> · icr A and T (icr A) = icr T (A). Proof. (i) The inclusions R> · int A ⊆ int(R> A) and T (int A) ⊆ int T (A) are obvious. Let a ∈ int A be fixed. Consider x ∈ int(R> A). Then there exist x ∈ R> A and λ ∈ ]0, 1[ such that x = (1 − λ)x + λa. Therefore, there exist a ∈ A and t > 0 such that x = t a . Since a ∈ int A and A is convex, 1 (1 − λ)t λ x= a + a ∈ int A,  (1 − λ)t + λ (1 − λ)t + λ (1 − λ)t + λ whence x ∈ R> · int A. Consider now y ∈ int T (A). Then there exist y  ∈ T (A) and λ ∈ ]0, 1[ such that y = (1 − λ)y  + λT (a). Take x ∈ A such that y  = T (x ) and consider x := (1 − λ)x + λa. Since a ∈ int A and A is convex, x ∈ int A. Therefore, y = T (x) ∈ T (int A). (ii) The proof is similar; just note that a ∈ icr A if and only if, for every x ∈ A, there exists λ > 0 such that (1 + λ)a − λx ∈ A and if a ∈ icr A, x ∈ A and λ ∈ ]0, 1[, then (1 − λ)a + λx ∈ icr A. Lemma 2.11.5. Let A ⊆ X × Y be a nonempty convex set. Suppose that either (x, y) ∈ int A or (x, y) ∈ icr A and dim X < ∞. Then for every sequence (xn ) ⊆ PrX (A) with (xn ) → x, there exists (yn ) → y such that (xn , yn ) ∈ A for every n ∈ N. Proof. Without loss of generality, we may assume that (x, y) = (0, 0). Suppose first that (0, 0) ∈ int A. Since (xn , 0) → (0, 0) ∈ int A, (xn , 0) ∈ A for n ≥ n0 . Taking yn = 0 for n ≥ n0 and yn arbitrary such that (xn , yn ) ∈ A for n < n0 , the conclusion follows. Suppose now that dim X < ∞ and (0, 0) ∈ icr A. From the preceding lemma with T = PrX , we have that 0 ∈ icr(PrX (A)). Let X0 := span(PrX (A)). Since dim X < ∞ and PrX (A) is convex, there exists r > 0 such that X0 ∩ {x ∈ X | x ≤ r} ⊆ PrX (A). For every n ∈ N∗ consider yn ∈ Y such that (xn , yn ) ∈ A and yn  ≤ inf{y | (xn , y) ∈ A} + n−1 . Suppose that (yn ) → 0. Taking eventually subsequences, we may suppose that yn  ≥ r1 > 0 for every n ∈ N and (xn / xn ) → x. There exists a base {u1 , u2 , . . . , up } of X0 such that ui  = r for 1 ≤ i ≤ p and x ∈ int C, where

2.11 Tangent Cones and Differentiability of Multifunctions

133

p C = { i=1 λi ui | λi ≥ 0}. (For example, one takes a base {u1 , u2 , . . . , up } of X0 , a bijective linear operator T : X0 → X0 such that x = p1 T (u1 + · · · + up ) and ui = T ui for 1 ≤ i ≤ p.) Let (vi )pi=1 ⊆ Y be such that (ui , vi ) ∈ A for every i. Since (xn / xn ) → x ∈ int C, there exists n0 ∈ N such that there exist the sequences (λni ) ⊆ [0, ∞[, xn ∈ C for every n ≥ n0 . Therefore, p ui for n ≥ n0 . Since (xn ) → 0, (λni ) → 0 1 ≤ i ≤ p, such that xn = i=1 λni p for every i. We may suppose that i=1 λni ≤ 1 for n ≥ n0 . Since (0, 0) ∈ A, it follows that

p p xn , λni vi = λni (ui , vi ) ∈ A, i=1

i=1

p p whence yn  ≤  i=1 λni vi  + n−1 ≤ i=1 λni vi  + n−1 → 0, a contradiction.  Theorem 2.11.6. Let Γ : X ⇒ Y be a convex multifunction. Suppose that one of the following conditions holds: (i) int(gr Γ ) = ∅, x ∈ int(dom Γ ) and y ∈ Γ (x); (ii) X and Y are finite-dimensional, x ∈ icr(dom Γ ), and y ∈ Γ (x). Then Γ is semidifferentiable at (x, y). Proof. We give the proof of (ii), that of (i) being similar (in this case one takes into account that PrX is an open linear operator). We already know that DΓ (x, y)(u) ⊆ DΓ (x, y)(u) = {v ∈ Y | (u, v) ∈ TB (gr Γ, (x, y))}

∀ u ∈ X.

Let us show that v ∈ DΓ (x, y)(u) if (u, v) ∈ icr TB (gr Γ, (x, y)), even without asking that x ∈ icr(dom Γ ). Without loss of generality, we consider that (x, y) = (0, 0). Take (u, v) ∈ icr TB (gr Γ, (0, 0)) = icr(R> · gr Γ ) = R> · icr(gr Γ ); we used the relation icr A = icr(cl A), valid if A is a convex subset of a finitedimensional space, for the first equality and Lemma 2.11.4 for the last one. Then there exist λ > 0 and (u , v  ) ∈ icr(gr Γ ) such that (u, v) = λ (u , v  ). Let (tn ) → 0+ and (un ) → u such that tn un = x + tn un ∈ dom Γ for every n ∈ N. It follows that (un ) ⊆ aff(dom Γ ) = span(dom Γ ) and (un /λ ) → u ∈ icr(dom Γ ). So, there exists n0 ∈ N such that un /λ ∈ dom Γ for n ≥ n0 . So, using Lemma 2.11.5, there exists a sequence (vn ) ⊆ Y such that (vn ) → v  and (un /λ , vn ) ∈ gr Γ for n ≥ n0 . Let vn := λ vn for n ≥ n0 and arbitrary for n < n0 . So (vn ) → v. Since (tn ) → 0+ , there exists n1 ≥ n0 such that tn ≤ 1/λ for n ≥ n1 . Since (0, 0) ∈ gr Γ , we obtain that (x, y) + tn (un , vn ) = tn λ (un /λ , vn ) ∈ gr Γ for n ≥ n1 . Therefore, v ∈ DΓ (x, y)(u). Consider now u ∈ icr TB (dom Γ ; 0) = R> ·icr(dom Γ ) and v ∈ DΓ (x, y)(u). Therefore, u ∈ icr (R> · dom Γ ) and (u, v) ∈ cl (R> · gr Γ ). By Lemma 2.11.4, there exists v0 ∈ Y such that (u, v0 ) ∈ icr (R> · gr Γ ). It follows that for every λ ∈ ]0, 1[,

134

2 Functional Analysis over Cones

(u, (1 − λ)v + λv0 ) = (1 − λ)(u, v) + λ(u, v0 ) ∈ icr (R> · gr Γ ) . By what precedes, we have that (1 − λ)v + λv0 ∈ DΓ (x, y)(u) for every λ ∈ ]0, 1[. Taking into account that DΓ (x, y)(u) is a closed set, we obtain that v ∈ DΓ (x, y)(u) for λ → 0. Therefore, for every u ∈ icr TB (dom Γ, x), DΓ (x, y)(u) = {v ∈ Y | (u, v) ∈ TB (gr Γ, (x, y))} = DΓ (x, y)(u); i.e., Γ is semidifferentiable at every u ∈ icr TB (dom Γ, x). If our initial condition x ∈ icr(dom Γ ) holds, then TB (dom Γ ; x) = span(dom Γ − x) =  icr TB (dom Γ, x), and so Γ is semidifferentiable. Remark 2.11.7. The notion of semidifferentiable multifunction is introduced by Penot in [463]. Theorem 2.11.6 (i) is established in [463, Prop. 3.5]. Consider now another normed vector space U, the multifunction Λ : U ⇒ X, and the function f : X × U → Y . With these, we associate the multifunction f Λ : U ⇒ Y defined by f Λ(u) := f (Λ(u) × {u}). We are interested in evaluating Df Λ(u, y) for some (u, y) ∈ gr f Λ. Recall that a multifunction S : X ⇒ Y is upper Lipschitz at x ∈ dom S if there exist L, ρ > 0 such that S(x) ⊆ S(x) + L x − x BY for every x ∈ B(x, ρ). Theorem 2.11.8. Suppose that x ∈ Λ(u), y = f (x, u) and that f is Fr´echet differentiable at (x, u). Then ∇x f (x, u) (DΛ(u, x)(u)) + ∇u f (x, u)(u) ⊆ Df Λ(u, y)(u)

∀u ∈ U

(2.85)

and ∇x f (x, u) (DΛ(u, x)(u)) + ∇u f (x, u)(u) ⊆ Df Λ(u, y)(u)

∀ u ∈ U. (2.86)

Consider  y) := {x ∈ Λ(u) | f (x, u) = y}. Λ : U × Y ⇒ X, Λ(u, Moreover, suppose that dim X < ∞ and either (a) Λ(u) = {x} and Λ is  y) = {x} and Λ is upper Lipschitz at (u, y). upper Lipschitz at u or (b) Λ(u, Then ∇x f (x, u) (DΛ(u, x)(u)) + ∇u f (x, u)(u) = Df Λ(u, y)(u)

∀ u ∈ U. (2.87)

Proof. Let x ∈ DΛ(u, x)(u) and take y = ∇x f (x, u)(x) + ∇u f (x, u)(u). Then there exist (tn ) → 0+ , ((un , xn )) → (u, x) such that x + tn xn ∈ Λ(u + tn un ) for every n ∈ N. Since f is differentiable at (x, u), we have that for some Y ⊇ (vn ) → 0, f (x + tn xn , u + tn un ) = y + tn (∇x f (x, u)(xn ) + ∇u f (x, u)(un ) + vn ) for every n ∈ N. Taking yn := ∇x f (x, u)(xn ) + ∇u f (x, u)(un ) + vn , since ∇f (x, u) is continuous, we have that (yn ) → y. Since y +tn yn ∈ f Λ(u +tn un ) for every n ∈ N, y ∈ Df Λ(u, y)(u). Hence, (2.85) holds.

2.11 Tangent Cones and Differentiability of Multifunctions

135

Consider x ∈ DΛ(u, x)(u) and y as above. Let ((tn , un )) → (0+ , u) with u + tn un ∈ dom f Λ (= dom Λ) for every n. Then there exists (xn ) → x such that x+tn xn ∈ Λ(u+tn un ) for n ≥ n0 . Continuing as above, we get (yn ) → y with y+tn yn ∈ f Λ(u+tn un ) for n ≥ n0 . Therefore, y ∈ Df Λ(u, y)(u), whence (2.86). Suppose now that dim X < ∞ and (a) or (b) holds. Consider y ∈ Df Λ(u, y)(u); then there exist (tn ) → 0+ , ((un , yn )) → (u, y) such that y + tn yn ∈ f Λ(u + tn un ) for every n ∈ N. Let x n ∈ Λ(u + tn un ) be such that  + tn un , y + tn yn ), for every n. Since y + tn yn = f ( xn , u + tn un ); i.e., x n ∈ Λ(u Λ is upper Lipschitz at u and Λ(u) = {x} in case (a) and Λ is upper Lip y) = {x} in case (b), there exists a bounded sequence schitz at (u, y) and Λ(u, n = x + tn xn for every n ∈ N. Since dim X < ∞, there (xn ) ⊆ X such that x exists a subsequence (xnp )p∈N converging to some x ∈ X. It follows that x ∈ DΛ(u, x)(u), and so y ∈ ∇x f (x, u) (DΛ(u, x)(u)) + ∇u f (x, u)(u). Taking into account (2.85), we obtain that (2.87) holds.  In the following example, one of the conditions in (a) and (b) of Theorem 2.11.8 is not satisfied, and (2.87) does not hold. Example 2.11.9. ([538]) Let U = X = Y = R, Λ, Λ : U ⇒ X and f : X ×U → Y be defined by Λ(u) := [0, max(1, 1 + u)],

Λ (u) := [0, max(1, 1 + u)[,

f (x, u) := x2 − x,

u = 0, x = 0, y = f (0, 0) = 0. Then Λ, Λ are upper Lipschitz at u, but Λ(u) = {x} = Λ (u) (so (a) does not hold for Λ and Λ ) and  1  1 [− 4 , 0] [− 4 , 0] if u ≤ 0, if u ≤ 0,  f Λ(u) = f Λ(u) = [− 14 , u(1 + u)] if u > 0, [− 14 , u(1 + u)[ if u > 0. Let {xy1 , xy2 } = {x ∈ R | x2 − x = y}, with xy1 ≤ xy2 , for y ≥ − 41 . Then ⎧ y y 1 ⎨ {x1 , x2 } if u ∈ R, y ∈ [− 4 , 0], y  y) = {x } Λ(u, if u > 0, y ∈ ]0, u(1 + u)], ⎩ 2 ∅ otherwise, ⎧ y y {x1 , x2 } if u ∈ R, y ∈ [− 14 , 0[, ⎪ ⎪ ⎨ {0} if u ∈ R, y = 0, Λ (u, y) = y ⎪ if u > 0, y ∈ ]0, u(1 + u)[, ⎪ ⎩ {x2 } ∅ otherwise.  y) = {0, 1} = {x}, while Λ is Now, Λ is upper Lipschitz at (u, y) but Λ(u, not upper Lipschitz at (u, y) (so (b) does not hold for Λ and Λ ). One obtains that DΛ(u, x)(u) = DΛ (u, x)(u) = R × R+ , 

∀ u ∈ R,

Df Λ(u, y)(u) = Df Λ (u, y)(u) = {(u, y) ∈ R2 | y ≤ max(0, u)},

∀ u ∈ R.

136

2 Functional Analysis over Cones

Since ∇x f (x, u) = −1 and ∇u f (x, u) = 0, ∇x f (x, u) (DΛ(u, x)(u))+∇u f (x, u)(u) = ]−∞, 0] = ]−∞, u] = Df Λ(u, x)(u) for every u > 0, and similarly for Λ . For other types of tangent cones, one can consult Aubin and Frankowska’s book [36]. Formulae (2.85) and (2.87) are stated by Tanino [538] under the supplementary conditions that U, X, and Y are finite-dimensional and f is of class C 1 and by Klose [336, Th. 4.1].

2.12 Radial Epi-Differentiability of Extended Vector-Valued Functions Let X, Y be two normed vector spaces. To define the radial epiderivatives for extended vector-valued functions, let us consider f : X → Y • and the following difference quotient of f at a point x0 ∈ dom f in the direction h ∈ Y as follows for t > 0 fx0 (t, u) := t−1 (f (x0 + tu) − f (x0 )).

(2.88)

The domain of fx0 is the set Df,x0 = {(t, u) | t > 0, u ∈ X, x0 + tu ∈ dom f }. According to the previous subsection, we have for any x0 ∈ dom f , dom fx0 (t, ·) = Rdom f,x0 (t),

(2.89)

lim sup dom fx0 (t, ·) = lim sup Rdom f,x0 (t) = TB (dom f, x0 ),

(2.90)

lim inf dom fx0 (t, ·) = lim inf Rdom f,x0 (t) = TU (dom f, x0 ),

(2.91)

thus t↓0

t↓0

and t↓0

t↓0

where TB ( resp. TU ) is the Bouligand (resp. Ursescu) tangent cone. In particular, if f is convex and dom f is closed, then u ∈ dom fx0 (t, ·) for every t > 0 ⇐⇒ u ∈ (dom f )∞ , where (dom f )∞ is the asymptotic cone of dom f. In this case we see that  dom fx0 (t, ·) = (dom f )∞ . t>0

The directional derivatives of the function f : X → Y • at x0 ∈ dom f in the direction u ∈ X are defined and denoted by

2.12 Radial Epi-Differentiability of Extended Vector-Valued Functions

137

 ϕx0 (u) := f− (x0 ; u) = inf fx0 (t, u) t>0

and  ψx0 (u) := f+ (x0 ; u) = sup fx0 (t, u). t>0

Here, the C-infimum and C-supremum are considered in Y . According to the   , we always have f+ (x0 ; u) = +∞, whenever there is t > 0 definition of f+   such that u ∈ / dom fx0 (t, ·); whence, dom f+ (x0 ; ·) ⊂ dom fx0 (t, ·). t>0

 In particular, if f is C-convex and dom f is closed, then dom(f+ (x0 ; ·)) ⊂ dom f∞ . If in addition, f is C-bounded and Y is order complete, then  (x0 ; )˙ = dom f∞ . dom f+  (x0 ; u) ∈ Y if and only if the We may continue similarly to show that f− family {fx0 (t, u) | t > 0} has a C-upper bound in Y .  (x0 ; u) and If f is C-convex, then lim fx0 (t, u) exists implies existence of f− t↓0

equality holds. The converse is true if Y is a Daniell space, see [319]. Suppose Y = R and f : X → R ∪ {+∞} is an extended convex real-function,  (x0 ; u) = lim fx0 (t, u) which may take the extended values ±∞. then f− t↓0

 Whenever f is Gˆateaux differentiable at x0 , f− (x0 ; ·) is a continuous linear ∗  form, there exists ∇f (x0 ) ∈ X such that f− (x0 ; u) = ∇f (x0 ), u.

Proposition 2.12.1. Let f : X −→ Y • . Then, for any x0 ∈ dom f, the map  (x0 ; ·) is positively homogenous on its domain. f− Proof. It is a simple matter to check that, for each t > 0, fx0 (t, ·) is posi  (x0 ; ·); then, their supremum f− (x0 ; ·) is also tively homogeneous on dom f− positively homogenous on its domain.  Definition 2.12.2. For x0 ∈ dom f we define the lower radial epiderivative of f at x0 in the direction u ∈ X by Df (x0 ; u) := ϕ -x0 (u) ∈ Y = Y ∪ {−∞, +∞} where

⎧ − ⎨ supC Γϕx0 (u) if u ∈ dom Γϕx0 ϕ x- 0 (u) = −∞ if u ∈ dom ϕx0 \ dom Γϕx0 ⎩ +∞ if u ∈ / dom ϕx0 .

The upper radial epiderivative of f at x0 in the direction u ∈ X is defined by Df (x0 ; u) := ψx+0 (u) ∈ Y • = Y ∪ {+∞} where ψx+0 (u) =



inf C Γψ+x (u) if u ∈ dom Γψx0 0 +∞ if u ∈ / dom Γψx0 .

138

2 Functional Analysis over Cones

Theorem 2.12.3. Let f : X −→ Y • . Assume that the cone C is normal with C = ∅ and (Y, C) is a lattice space. Let us consider C • = C ∪ {+∞}. Then, for any x0 ∈ domf , we have (a) Df (x0 ; ·) (Df (x0 ; ·)) is C-l.c (C-u.c) on its domain, and for all u ∈   (x0 ; u) = −∞ or Df (x0 ; u) ≤C • f− (x0 ; u) X, either Df (x0 ; u) = f−  • (Df (x0 ; u) ≤C f+ (x0 ; u));     (x0 ; ·) (u ∈ dom f+ (x0 ; ·)), f− (x0 ; ·) (f+ (x0 ; ·)) is C(b) for u ∈ dom f−  (x0 ; u) (Df (x0 ; u) = l.c (C-u.c) at u if, and only if, Df (x0 ; u) = f−  (x0 ; u)); f+    (x0 ; ·) (u ∈ / dom f+ (x0 ; ·)), then Df (x0 ; u) = f− (x0 ; u) = (c) If u ∈ / dom f−  ±∞ (Df (x0 ; u) = f+ (x0 ; u) = +∞). Proof. The conclusions (a) and (b) of this theorem follow from Propositions 2.9.7 and 2.9.8. Statements in (c) are obvious.  The C-l.c. (C-u.c.) in [197, Theorem 3.13] is a sufficient condition, while in Theorem 2.12.3 it is proved to be also necessary, and also we avoid ordercompleteness condition. Theorem 2.12.4. Under conditions of Theorem 2.12.3, for x0 ∈ dom(f ) we have   (a) u ∈ dom f− (x0 ; ·) and f− (x0 ; ·) C-l.b. around u imply Df (x0 ; u) ∈ Y and

f (x0 + tu) − f (x0 ) ∈ tDf (x0 ; u) + C, ∀t > 0;  (b) 2 f− (x0 ; u) = +∞ implies f (x0 + tu) = Df (x0 ; u) = +∞, ∀t > 0;   (c) u ∈ dom f+ (x0 ; ·) and f+ (x0 ; ·) C-u.b. around u imply Df (x0 ; u) ∈ Y and f (x0 + tu) − f (x0 ) ∈ tDf (x0 ; u) − C, ∀t > 0.   Proof. Let u ∈ dom f− (x0 ; ·) satisfying f− (x0 ; ·) is C-l.b. around u. Clearly, u ∈ dom Df (x0 ; ·) and then Df (x0 ; u) ∈ Y . Since Df (x0 ; u)) is C-upper  (x0 ; u), we conclude bounded by f−  Df (x0 ; u) ≤C f− (x0 ; u) ≤C fx0 (t, u) = t−1 [f (x0 + tu) − f (x0 ], ∀t > 0,  and accordingly, we prove (a). Part (b) is trivial since, for f− (x0 ; u) = +∞ we have Df (x0 ; u) = +∞ and thenf (x0 + tu) = +∞. The proof of (c) is similar. 

Theorem 2.12.5. Under conditions of Theorem 2.12.3, for x0 ∈ dom(f ), we have (a) Df (x0 ; ·) and Df (x0 ; ·) are positively homogenous; 2

 This means: u ∈ / dom f− (x0 ; ·) implies f (x0 +tu)−f (x0 ) ∈ tDf (x0 ; u)+C • , ∀t > 0.

2.12 Radial Epi-Differentiability of Extended Vector-Valued Functions

139

(b) Df (x0 ; 0)(Df (x0 ; 0)) = 0 ⇐⇒ Df (x0 ; u) = −∞(Df (x0 ; u) ∈ Y )∀u ∈ X. Proof. We only prove the statements for the lower epiderivative, since the upper epiderivative’s ones are treated similarly by inverting the order.   (x0 ; u) ∈ Y . By Lemma 2.12.1, f− (x0 ; ·) is positively (a) Suppose that f−  homogenous in dom(f− (x0 ; ·)), and using Proposition 2.9.4, it results that for all λ > 0,

Df (x0 ; λu) = sup Γf− (x0 ;·) (λu) = λ sup Γf− (x0 ;·) (u) = λDf (x0 , u). C



C



 (x0 ; u) = +∞, then Df (x0 ; u) = +∞. It follows that for Suppose that f− every λ > 0 and all t > 0, u ∈ / (λt)−1 (dom f − x0 ). Accordingly,

/ (dom f ) (i.e., f (x0 + tλu) = +∞), ∀λ > 0, ∀t > 0. x0 + tλu ∈  (x0 , λu) = +∞ for every λ > 0, and hence Df (x0 , λu) = Therefore, f− +∞. It results that for every λ > 0

Df (x0 , λu) = +∞ = λDf (x0 ; u).  (x0 ; u) = −∞, then, Df (x0 ; u) = −∞. Remarking Suppose now that f−  that f− (x0 ; u) = −∞ enforces the set {fx0 (tu)| t > 0} to have no C-upper bound in Y ; concluding that {fx0 (tλu)| t > 0} has, in consequence, no  (x0 , λu) = −∞, and then ∀λ > 0 C-upper bound in Y . Therefore, f−

Df (x0 , λu) = −∞ = λDf (x0 ; u). (b) Assume first that Df (x0 ; 0) = −∞, then from (a), we get Df (x0 ; 0) = λDf (x0 ; 0), ∀λ > 0.

(2.92)  f− (x0 ; 0)

= +∞ Since x0 ∈ dom f , we obtain Df (x0 ; 0) = +∞; otherwise and then f (x0 ) = +∞, a contradiction. Therefore, Df (x0 ; 0) ∈ Y , and passing to limit in (2.92), when λ 0, we obtain that Df (x0 , 0) = 0.  (x0 ; ·) is C-l.b. around 0 which Conversely, suppose Df (x0 ; 0) = 0, then f− − means that for any v close to 0, Γf  (x0 ;·) (v) is nonempty. −

Assume for contradiction that for some u ∈ X, Df (x0 ; u) = −∞, then 0 we have for a any sequence of nonnegative real numbers (λn )n Df (x0 ; λn u) = λn Df (x0 ; u) = −∞.

(2.93)

Set un = λn u, then un → 0, and using Proposition 2.9.5, we obtain that 0 = Df (x0 ; 0) = max Γf− (x0 ;·) (0) ∈ Γf− (x0 ;·) (0) ⊂ lim inf Γf− (x0 ;·) (un ). C





n→+∞



Accordingly, for such (un )n , there exits (vn )n converging to 0, such that vn ∈ Γf− (x0 ;·) (un ) for all n large enough; and hence, −

vn ≤C max Γf− (x0 ;·) (un ) = Df (x0 ; un ), C



(2.94)

which contradicts (2.93). Thus, for every u ∈ X, Df (x0 ; u) = −∞.



140

2 Functional Analysis over Cones

Remark 2.12.6. Theorem 2.12.5 extends [197, Proposition 3.7] without involving the radial and interiorly radial cones of the hypo/epigraph of f , and presents moreover alternatives if the space does not satisfy the ordercompleteness property. The positive homogeneity and lower continuity of the hypo/epigraphical profile mappings (see Lemmas 2.12.1, 2.9.5) are crucial in the proof.

3 Optimization in Partially Ordered Spaces

In this chapter, we introduce solution concepts for vector- as well as setvalued optimization problems and derive characterizations of these solution concepts by nonlinear translation invariant functions given in (2.42). Furthermore, we show existence results for solutions of vector optimization problems, well-posedness of vector optimization problems, continuity properties, vector equilibrium problems, vector variational inequalities, duality assertions, minimal-point theorems, and variational principles for vector-valued functions of Ekeland’s type as well as saddle point assertions. Compared with the first edition, we added characterizations of solutions of set-valued optimization problems by nonlinear translation invariant functions given in (2.42).

3.1 Solution Concepts in Vector Optimization For a detailed description and discussion of solution concepts in vector optimization, we refer to the book by Khan, Tammer and Z˘ alinescu [329, Section 2.4]. Recently, an overview on solution concepts for vector optimization problems w.r.t. relatively solid convex cones in real linear spaces is given by G¨ unther, Khazayel, and Tammer in [241], see also references therein. 3.1.1 Approximate Minimality In the last years, several concepts for approximately efficient solutions of a vector optimization problem were published (see [157, 158], [187], [213], [228], [245], [243], [383], [433], [514], [524, 526], [550] and references therein). The reason for introducing approximately efficient solutions is the fact that numerical algorithms usually generate only approximate solutions anyhow, and moreover, the set of efficient points may be empty, whereas approximately efficient points always exist under very weak assumptions. © Springer Nature Switzerland AG 2023 A. G¨ opfert et al., Variational Methods in Partially Ordered Spaces, CMS/CAIMS Books in Mathematics 7, https://doi.org/10.1007/978-3-031-36534-8 3

141

142

3 Optimization in Partially Ordered Spaces

We begin with our concept given in [213]. Let M and B be nonempty subsets of the topological vector space Y with 0 ∈ bd B, let k 0 ∈ Y be such that cl B + (0, +∞)k 0 ⊂ int B, and let ε ≥ 0. Definition 3.1.1. (Approximate efficiency, εk 0 -efficiency) An element yε ∈ M is said to be εk 0 -efficient on M w.r.t. B if   M ∩ yε − εk 0 − (B \ {0}) = ∅. The set of εk 0 -efficient points of M w.r.t. B will be denoted by Eff(M, Bεk0). For the case ε = 0 the set Eff(M, Bεk0 ) coincides with the usual set Eff(M, B) of efficient points of M w.r.t. B (compare Remark 2.1.3 and Sections 3.4, 3.7). An illustration of the εk 0 -efficient points of M w.r.t. B is furnished by Figure 3.1.1; see also Figure 3.1.2.

y y − εk0 − B

M

k0 B

Figure 3.1.1. The set of approximately efficient elements of M w.r.t. B = R2+ (where the distance between it and the set of efficient elements is unbounded).

Because B + (0, ∞)k 0 ⊂ int B ⊂ B \ {0}, for 0 ≤ ε1 ≤ ε2 , we have that Eff(M, B) ⊆ Eff(M, Bε1 k0 ) ⊆ Eff(M, Bε2 k0 ). Approximate efficiency can also be defined by scalarization. Definition 3.1.2. Let C ⊂ Y be a nontrivial cone and ϕ : Y → R any Cmonotone functional. An element yε ∈ M is said to be ε-efficient w.r.t. ϕ if

3.1 Solution Concepts in Vector Optimization

y − εk0 − C

143

y M

k0

C

Figure 3.1.2. The set of approximately efficient elements of M w.r.t. a “bigger” cone C with R2+ ⊂ C.

∀ y ∈ M : y ∈ yε − C ⇒ ϕ(yε ) ≤ ϕ(y) + ε. The set of such ε-efficient points of M w.r.t. ϕ and C will be denoted by ε- Eff ϕ (M, C). Simple examples show that even if k 0 ∈ int C and cl B + int C ⊂ B, generally ε- Eff ϕ (M ) = Eff(M, Bεk0 ). So it seems to be very interesting to study relations between Definitions 3.1.1 and 3.1.2. We use the functional ϕcl B,k0 introduced in Section 2.4: ϕcl B,k0 (y) := inf{t ∈ R | y ∈ tk 0 − cl B}. Theorem 3.1.3. Let M, B ⊂ Y be nonempty sets such that 0 ∈ bd B, let C ⊂ Y be a cone, k 0 ∈ int C, and ε ≥ 0. Assume that cl B + int C ⊂ B. Consider yε ∈ Eff(M, Bεk0 ) and ϕ : Y → R, ϕ(y) := ϕcl B,k0 (y − yε ). Then ϕ is a finite-valued, continuous, and strictly (int C)-monotone (even strictly C-monotone if cl B + (C \ {0}) ⊂ int B) functional such that ∀ y ∈ M : ϕ(yε ) ≤ ϕ(y) + ε. In particular, yε ∈ ε- Eff ϕ (M, C). Proof. By Proposition 2.4.4, we have that ϕcl B,k0 is finite-valued and continuous, while from Theorem 2.4.1(g), we obtain that ϕcl B,k0 is strictly (int C)-monotone (strictly C-monotone if cl B + (C \ {0}) ⊂ int B). It is obvious that ϕ has the same properties.

144

3 Optimization in Partially Ordered Spaces

Assume that there exists y ∈ M such that ϕ(y)+ ε < ϕ(yε ) = 0. It follows that there exists t < −ε with y − yε ∈ tk 0 − cl B, and so   y ∈ yε − εk 0 − cl B + (−ε − t)k 0 ⊂ yε − εk 0 − int B ⊂ yε − εk 0 − (B \ {0}), a contradiction with yε ∈ Eff(M, Bεk0 ).



Theorem 3.1.4. Let M, B ⊂ Y be proper sets, C ⊂ Y a cone, k 0 ∈ int C, and ε ≥ 0. Assume that cl B + (C \ {0}) ⊂ int B. If yε ∈ M is such that ∀ y ∈ M : ϕcl B,k0 (yε ) ≤ ϕcl B,k0 (y) + ε,

(3.1)

then there is an open set D ⊂ Y with 0 ∈ bd D and cl D + (C \ {0}) ⊂ D such that yε ∈ Eff(M, Dεk0 ). Proof. First of all, note that cl B is proper (otherwise, B = Y ) and 0 ∈ / int C (otherwise, C = Y , and so B = Y ). Taking into account Remark 2.4.3 we have that cl B + int C ⊂ cl B. Using Proposition 2.4.4 we obtain that ϕ := ϕcl B,k0 is a finite-valued continuous function, while from Theorem 2.4.1 we have that ϕ is strictly C-monotone. Consider the set D := {y ∈ Y | ϕ(yε − y) < ϕ(yε )} = yε − tε k 0 + int B, where tε := ϕ(yε ), the last equality being a consequence of (2.49). Hence D is open. Taking again into account Remark 2.4.3, we obtain that cl D = yε −tε k 0 +cl B, bd D = yε −tε k 0 +bd B = {y ∈ Y | ϕ(yε −y) = ϕ(yε )}. Therefore 0 ∈ bd D. From the inclusion cl B +(C \{0}) ⊂ int B and the above formulae one obtains immediately that  ⊂ D. Using (2.49),  cl D +0 (C \ {0}) − ε)k − int B = ∅, or equivalently, condition (3.1) is equivalent to M ∩ (t ε    M ∩ yε − εk 0 − D = ∅, i.e., yε ∈ Eff(M, Dεk0 ). 3.1.2 A General Scalarization Method Scalarization of a given vector optimization problem (VOP) means converting that problem into an optimization problem (or a family of such problems) with a (common) real-valued objective function to be minimized. If the solutions of the latter problems (often called scalarized problems) are also solutions of the given VOP, then the obtained scalarization is advantageous in solving the VOP, because methods of usual “scalar” optimization (nonlinear programming) can be used. For the finite-dimensional case see, for example, [407]. The most advantageous scalarization is obtained by using suitable monotone functionals; the following propositions serve as theoretical background.

3.1 Solution Concepts in Vector Optimization

145

Proposition 3.1.5. Let Y be a linear space partially ordered by a nontrivial pointed convex cone C, let M be a nonempty subset of Y , and let ϕ : Y → R be a proper functional with M ∩ dom ϕ = ∅. Assume that y 0 ∈ M satisfies ∀y ∈ M :

ϕ(y) ≥ ϕ(y 0 ).

(3.2)

Then y 0 ∈ Eff(M, C) if one of the following conditions is satisfied: (i) ϕ is C-monotone on M and y 0 is uniquely determined, (ii) ϕ is strictly C-monotone on M ∩ dom ϕ. Proof. Let y 1 ∈ M ∩ (y 0 − C) and assume that y 1 = y 0 . In case (i), because ϕ is C-monotone, we obtain that ϕ(y 1 ) ≤ ϕ(y 0 ), contradicting the uniqueness of y 0 . In case (ii), because ϕ is strictly C-monotone, we obtain  that ϕ(y 1 ) < ϕ(y 0 ), contradicting (3.2). Therefore y 0 ∈ Eff(M, C). Note that condition (3.2) may be replaced by an apparently weaker one: ∀ y ∈ M ∩ (m − C) :

ϕ(y) ≥ ϕ(y 0 ),

where m ∈ M (or m ∈ Y with M ∩ (m − C) = ∅). Examples of C-monotone functionals are furnished by the elements of C + , while examples of strictly C-monotone functionals are given by the elements of C # . An easy application of the preceding result is the next corollary. Corollary 3.1.6. Let Y be a H.l.c.s., C ⊂ Y a convex cone with C # = ∅, and ∅ = M ⊂ Y . Then y 0 ∈ Eff(M, C) if and only if there exist m ∈ M and y ∗ ∈ C # such that y 0 is a solution of the following scalar minimization problem: y, y ∗  −→ min, s.t. y ∈ M, y ≤C m. Proof. If y 0 ∈ Eff(M, C), then M ∩ (y0 − C) = {y0 }, and so y 0 is a solution of the above problem for m = y 0 and arbitrary y ∗ ∈ C # . The converse implication follows immediately from case (ii) of the preceding proposition.  When M is C-convex, i.e., M + C is convex, the following result holds (compare Jahn [310, Theorem 5.4]). Proposition 3.1.7. Let Y be a H.l.c.s., C ⊂ Y a proper pointed convex cone, and M ⊂ Y a nonempty C-convex set. If y 0 ∈ Eff(M, C) and M + C has nonempty interior, then there exists y ∗ ∈ C + \ {0} such that  0 ∗ ∀y ∈ M : y , y ≤ y, y ∗  . Proof. Since y 0 ∈ Eff(M, C), (M + C) ∩ (y 0 − C) = {y 0 }, and so y 0 ∈ / int(M + C). By Theorem 2.2.7 there exists y ∗ ∈ Y ∗ \ {0} such that  0 ∗ ∀ c ∈ C, ∀ y ∈ M : y , y ≤ y + c, y ∗  .

146

3 Optimization in Partially Ordered Spaces

Replacing c ∈ C by tc with t ≥ 0 and c ∈ C, we obtain that y ∗ ∈ C + . Taking c = 0 in the above relation, we see that the conclusion holds.  An inspection of the proof of Proposition 3.1.5 shows that neither the convexity of C nor the fact that C is a cone was used. Also, additional properties of ϕ imply efficiency of y 0 w.r.t. larger sets having additional properties, as the next result shows. Theorem 3.1.8. (Gerth, Weidner [215]) Let B, M be nonempty subsets of the t.v.s. Y , and ϕ : Y → R a strictly B-monotone function. Assume that y 0 ∈ M is such that ϕ(y 0 ) ≤ ϕ(y) for all y ∈ M . Let H := {y ∈ Y | ϕ(y 0 − y) < ϕ(y 0 )}. Then B \ {0} ⊂ H and y 0 ∈ Eff(M, H); in particular, y 0 ∈ Eff(M, B). Moreover, (a) if ϕ is convex, then H is convex; (b) if ϕ is continuous, then H is open and cl H + (B \ {0}) ⊂ H; (c) if ϕ is linear, then H ∪ {0} is a convex cone; (d) if 0 ∈ cl(B \ {0}), then 0 ∈ bd H. Proof. From the choice of H it is clear that M ∩ (y 0 − H) = ∅, and so y 0 ∈ Eff(M, H). Because ϕ is strictly B-monotone, it is also obvious that B \ {0} ⊂ H. Statements (a) and (c) are obvious. (b) By the continuity of ϕ we have that H is open and cl H ⊂ {y ∈ Y | ϕ(y 0 − y) ≤ ϕ(y 0 )}. Let y ∈ cl H and d ∈ B \ {0}. Since ϕ is strictly B-monotone, ϕ(y 0 − y − d) < ϕ(y 0 − y) ≤ ϕ(y 0 ), and so y + d ∈ H. (d) Of course, 0 ∈ / H. Since B \ {0} ⊂ H, we have that 0 ∈ cl(B \ {0}) ⊂ cl H, and so 0 ∈ bd H.  Note that without asking 0 ∈ cl(B \ {0}) it is possible that 0 ∈ / cl H. Indeed, consider Y := R,

B := {0} ∪ [1, ∞),

M := [1, 4],

y 0 := 1,

and the strictly B-monotone continuous function   ϕ : R → R, ϕ(y) := min y + 12 , max{1, y} . Then H = ( 12 , ∞), and so 0 ∈ / cl H. Related to weakly efficient points (see Section 3.4) the following result holds; we make use of the function ϕD,k0 considered in Section 2.4. Theorem 3.1.9. Let Y be a H.l.c.s., B ⊂ Y a closed set with nonempty interior and 0 ∈ bd B, and M ⊂ Y a nonempty set. Assume that one of the following conditions holds: (i) there exists a cone C ⊂ Y with nonempty interior such that B + int C⊂ B, (ii) B is convex and there exists k 0 ∈ Y such that B + R+ k 0 ⊂ B and B + Rk 0 = Y . Then y 0 ∈ wEff(M, B) if and only if y 0 ∈ M and there exists a continuous function ϕ : Y → R such that

3.2 Solution Concepts in Set-Valued Optimization

∀ y ∈ int B, ∀ x ∈ M :

ϕ(y 0 − y) < 0 = ϕ(y 0 ) ≤ ϕ(x).

147

(3.3)

Proof. If condition (3.3) we have that M ∩ (y 0 − int B) = ∅, and so y 0 ∈ wEff(M, B). For the necessity, take k 0 ∈ int C in case (i). In both cases consider D := B − y 0 . The conclusion follows from the nonconvex separation  theorem (Theorem 2.4.19) taking ϕ := ϕD,k0 . Remark 3.1.10. Knowing additional properties of the set B in the preceding theorem we can ask additional properties for the function ϕ (see Theorem 2.4.1). For example, in case (ii) of Theorem 3.1.9 we can ask that ϕ be convex (and surjective in all the cases). Theorems 3.1.8 and 3.1.9 can be found in a more detailed form in Gerth and Weidner’s paper [215]. Special cases of those theorems had been proved earlier; for example, scalarization results for weakly efficient points in the case Y = Rn and B = y 0 − Rn+ , k 0 ∈ int Rn+ with the functional ϕ(y) = inf{t ∈ R | y ∈ y 0 − Rn+ + tk 0 }, had been obtained by Bernau [62] and for k 0 = (1, . . . , 1)T by Brosowski and Conci [97]. Furthermore, compare Jahn [303, 304, 308, 309]. In the nineties, results of such types were useful to get general variants of Ekeland’s variational principle in the case in which the performance space Y has dimension greater than 1.

3.2 Solution Concepts in Set-Valued Optimization Inspired by real-world problems in mathematical economics, financial mathematics, medicine, engineering, and optimization under uncertainties, the field of set-valued optimization has been strongly developed in recent years. Especially, in traffic optimization, medicine, uncertain weather conditions, and risk theory, the complex problems are contaminated with uncertain data such that the computed optimal solutions can be highly influenced by the uncertain parameters. When constructing robust counterpart problems in optimization under uncertainty, methods of set-valued optimization can be used (see Chapter 6). Set optimization is a rapidly developing area in applied mathematics, see [329] and [258] for a recent introduction to set optimization and its applications. Let Y be a linear topological space. First, we give the definition of nondominated elements of a family A of nonempty subsets of Y , where we are using set less relations  on P(Y ) introduced in Section 2.3; P(Y ) is the power set of Y without the empty set. Definition 3.2.1 (-Nondominated Elements of a Family of Sets). Consider a family A of nonempty subsets of Y and a set less relation  on P(Y ). A ∈ A is called a nondominated element of A w.r.t.  if

148

3 Optimization in Partially Ordered Spaces

A  A, A ∈ A

=⇒

A  A.

Now, we introduce a general set optimization problem with a geometric constraint: opt Γ (x) subject to (SP) x ∈ S, where Y is a linear topological space, partially ordered by a proper, pointed, convex, closed cone C, S ⊆ X, X is a linear space and the objective mapping Γ : S ⇒ Y is set-valued. Recall, we use the notations Γ (S) = ∪x∈S Γ (x) and dom Γ = {x ∈ S | Γ (x) = ∅}. There are different approaches to understand the optimization “opt” in (SP). In Sections 3.2.1 and 3.2.2, we introduce different solution concepts for the set optimization problem (SP). 3.2.1 Solution Concepts Based on Vector Approach In this section, we introduce a first solution concept where “opt” in (SP) is to be understood with respect to the partial order ≤C induced by the proper, pointed, convex, closed cone C ⊂ Y . We consider a set-valued objective mapping Γ : S ⇒ Y , such that for every x ∈ dom Γ there are many distinct values y ∈ Y such that y ∈ Γ (x) in contrast to single-valued functions. Therefore, in the vector approach, when studying nondominated elements of a set-valued mapping Γ : S ⇒ Y , we fix one element y ∈ Γ (x), and study the following solution concept based on the concept of efficient elements introduced in Definition 3.1.1 for ε = 0. Let us consider the set-valued (vector) optimization problem ≤C - opt Γ (x) subject to

(≤C -SP)

x ∈ S. Definition 3.2.2 (≤C -Nondominated elements of (≤C -SP)). Let x ∈ S and (x, y) ∈ gr Γ . The pair (x, y) ∈ gr Γ is called a ≤C -nondominated element of the problem (≤C -SP) if y ∈ Eff Γ (S), C . Furthermore, also the notion of weakly efficient elements of sets (see Sections 3.4, 3.5, 3.7) naturally induce corresponding notions of weakly ≤C nondominated elements to the corresponding set optimization problems. Definition 3.2.3 (Weakly ≤C -nondominated element of (≤C -SP)). ≤C Let x ∈ S and (x, y) ∈ gr Γ . The pair (x, y) ∈ gr Γ is called a weakly  nondominated element of the problem (≤C -SP) if y ∈ wEff Γ (S), C . A solution concept for set optimization problems based on vector approach in a more general sense (w.r.t. a general domination set) is given in Definition 4.4.1, compare also (4.34) and (4.35).

3.2 Solution Concepts in Set-Valued Optimization

149

3.2.2 Solution Concepts Based on Set Approach Although the concept of nondominated elements of the set optimization problem (SP) given in Definition 3.2.2 is of mathematical interest (in particular for deriving optimality conditions using tools from generalized differentiation, see Chapter 4, especially Section 4.4), it cannot be often employed in practice. It is important to mention that a ≤C -nondominated element (x, y) depends on only certain special element y of Γ (x) and other elements of Γ (x) are ignored. In other words, an element x ∈ S for that there exists at least one element y ∈ Γ (x) which is a nondominated element of the image set of Γ (compare Definition 3.1.1 for ε = 0 and Sections 3.4, 3.7) is a solution of the set-valued optimization problem (SP) even if there exist many bad elements in the image set Γ (x). For this reason, the solution concepts introduced in Section 3.2.1 are sometimes improper. To avoid this disadvantage, it is helpful and important to work with practically relevant set less relations (compare Chapter 6). This leads to solution concepts for set-valued optimization problems based on comparisons among values of the set-valued objective mapping Γ . Let Y be equipped with a binary relation C generated by a proper, pointed, convex, closed cone C (C denotes one of the set less relations introduced in Section 2.3 for the special case that D := C). Now, we consider a set optimization problem (SP) where “opt” is to understand with respect to C . Then, we can define optimal solutions with respect to the preorder C and the corresponding set-valued optimization problem is given by C - opt

Γ (x)

subject to x ∈ S,

(C -SP)

where we assume again (compare (SP)) that Y is a linear topological space, partially ordered by a proper, pointed, convex, closed cone C, S is a subset of X, X is a linear space. The following notion corresponds to Definition 3.2.1 with A = Γ (S). Definition 3.2.4 (C -Nondominated solutions of (C -SP) w.r.t. the preorder C ). An element x ∈ S is called a C -nondominated solution of problem (C -SP) w.r.t. the preorder C if Γ (x) C Γ (x)

for some

x ∈ S =⇒ Γ (x) C Γ (x).

A more general solution concept based on set approach with respect to general preorders where nonempty sets of P(Y ) are involved (see Section 2.3) is given in Definition 4.4.2. We will use these solution concepts in Chapter 4 for deriving optimality conditions in set-valued optimization and in Chapter 6 in optimization under uncertainty.

150

3 Optimization in Partially Ordered Spaces

3.3 Existence Results for Efficient Points In this section we establish several existence results for maximal points with respect to transitive relations; then we apply them in topological vector spaces for preorders generated by convex cones. A comparison of existence results for efficient points is made at the end of the section. 3.3.1 Preliminary Notions and Results Concerning Transitive Relations In the sequel X is a nonempty set and t ⊆ X × X, i.e., t is a relation on X. If ∅ = Y ⊆ X and t ⊆ X × X, the restriction of t to Y is denoted by tY , i.e., tY := t ∩ (Y × Y ). With the relation t on X, we associate the following relations: tR := t ∪ ΔX ,

tN := t \ t−1 = t \ (t ∩ t−1 ),

tN R = (tN )R .

Hazen and Morin [265] call tN the asymmetric part of t. Some properties of these relations are given in the following lemma; the first three properties mentioned in (ii) are stated by Dolecki and Malivert in [161]. Lemma 3.3.1. Let t be a transitive relation on X and ∅ = Y ⊆ X. (i) tR is a preorder and tN ∩ ΔX = ∅; (ii) t◦tN ⊆ tN , tN ◦t ⊆ tN , tN ◦tN ⊆ tN , (tR )N = tN , tR ◦tN = tN ◦tR = tN ; (tN )N = tN ; (iii) tN R is a partial order and (tN )N = (tN R )N = tN ;    Y  Y (iv) tY R = (tR ) , tY N = (tN ) . Proof. The proof is easy; one uses only the definitions.



Y Y Y Taking into  account  Y  the above  Y  lemma, we denote by tR , tN , and tN R the Y relations t R , t N , and t N R , respectively. The preceding lemma shows that with every transitive relation one can associate a preorder and a partial order. It is useful to know whether they determine the same maximal and minimal points. As noted in Section 2.1, Max(Y ; t) = Min(Y, t−1 ), where t ⊆ X × X is a (transitive) relation and ∅ = Y ⊆ X; so it is sufficient to study only the problems related to maximal points.

Corollary 3.3.2. Let ∅ = Y ⊆ X and let t be a transitive relation on X. Then Max(Y ; t) = Max(Y ; tR ) = Max(Y ; tN ) = Max(Y ; tN R ).

3.3 Existence results for efficient points

151

Proof. Let y¯ ∈ Y . Note first that y¯ ∈ Max(Y, t) iff (¯ y , y) ∈ / tN for every y ∈ Y . Using this remark we have immediately that Max(Y ; t) = Max(Y ; tN ). Applying the preceding lemma, we get tN = (tR )N = (tN )N = (tN R )N . The other equalities are now obvious.  The above corollary shows that the problem of existence (for example) of maximal points w.r.t. a transitive relation t reduces, theoretically, to the same problem for the partial order tN R . Another way to reduce this problem to one for a partial order is given by the following known result. −1

Proposition 3.3.3. Let t be a transitive relation on X and ρ = tR ∩ (tR ) . Then ρ is an equivalence relation, and  t = {( x, y) | (x, y) ∈ t} is a partial  order on X = X/ρ, where x  is the class of x ∈ X with respect to ρ. Moreover,   if x ∈ X, x ∈ Max(X; t) iff x  ∈ Max(X; t). In the sequel we shall also use the notation Yt+ (x) and Yt− (x) for the upper and lower sections of Y ⊆ X with respect to t and x ∈ X. So Yt+ (x) := {y ∈ Y | (x, y) ∈ tR },

Yt− (x) := {y ∈ Y | (y, x) ∈ tR },

respectively; the most common case is for x ∈ Y . Similarly, for Z ⊆ X, we consider Yt+ (Z) := {y ∈ Y | ∃ z ∈ Z : (z, y) ∈ tR } = z∈Z Yt+ (z), Yt− (Z) := {y ∈ Y | ∃ z ∈ Z : (y, z) ∈ tR } = z∈Z Yt− (z).   Note that Yt+ Yt+ (Z) = Yt+ (Z) (so one may suppose that Z ⊆ Y ) and Max(Yt+ (Z); t) = Yt+ (Z) ∩ Max(Y ; t), Min(Yt− (Z); t) = Yt− (Z) ∩ Min(Y ; t). (3.4) We say that ∅ = Y ⊆ X has the domination property (w.r.t. t) (DP) if Max(Yt+ (y); t) = ∅, for every y ∈ Y (i.e., every element of Y is dominated by a maximal element of Y ). A quite important problem is how to extend other notions related to partially ordered sets, like chain or increasing net, to sets endowed with transitive relations. Related to this problem we have the next result. Proposition 3.3.4. Let t be a transitive relation on X and ∅ = Y ⊆ X. (i) If tYR is a partial order on Y , then tYR ⊆ tN R . (ii) If μ is a total order on Y such that μ ⊆ tN R , then μ = tYN R . Proof. (i) Let μ = tYR be a partial order on Y and consider (x, y) ∈ μ. If x = y, we have that (x, y) ∈ tN R . Suppose now that x = y. It follows that / t \ t−1 , whence (x, y) ∈ t. Assuming that (x, y) ∈ / tN R , we have that (x, y) ∈ −1 (x, y) ∈ t . Therefore (y, x) ∈ t, and so (y, x) ∈ μ. Since μ is antisymmetric, it follows that x = y, a contradiction. Hence (x, y) ∈ tN R .

152

3 Optimization in Partially Ordered Spaces

(ii) Suppose that (Y, μ) is totally ordered and μ ⊆ tN R . Of course, μ ⊆ / μ. Of course, tYN R . Consider (x, y) ∈ tYN R (⊆ tR ) and suppose that (x, y) ∈ x = y. Since μ is a total order on Y , we have that (y, x) ∈ μ, and so (y, x) ∈ / t, a contradiction. Therefore μ = tYN R .  tN = t \ t−1 . It follows that (x, y) ∈ Note that in (ii) we cannot replace tN R by t or tR (take X = R and t = R×R). Let X be endowed with the transitive relation t and let (xi )i∈I ⊆ X be a net. We say that (xi ) is t-increasing [t-decreasing] if (xi , xj ) ∈ t [(xj , xi ) ∈ t] for all i, j ∈ I with i  j and i = j; (xi ) is strictly t-increasing [strictly t-decreasing] if (xi ) is tN -increasing [tN -decreasing]. In the sequel we say that ∅ = A ⊆ (X, t), with t a transitive relation on X, is a (t-) chain if tA R is a total order on A, while A is (t-) well-ordered if A A is well-ordered by tA R ; hence tR ⊆ tN R in both cases. The following interesting result is due to Gajek and Zagrodny (see [204, Lem. 3.2]). Proposition 3.3.5. Let t be a transitive relation on X. Then there exists a nonempty set W ⊆ X such that W is well-ordered by μ := tW R and for every x ∈ X \ W there exists w ∈ W with (x, w) ∈ t or (w, x) ∈ / t. Proof. Let L = {U ∈ 2X \ {∅} | U is well-ordered by tU R }. Then L is nonempty because {x} ∈ L for every x ∈ X. Consider the relation η = {(U, V ) ∈ L × L | U ⊆ V and ∀ (v, u) ∈ V × U : (v, u) ∈ t ⇒ v ∈ U }. It is easy to see that (L, η) is a partially ordered set. Let us show that (L, η) has maximal elements. For this let U ⊆ L be a (nonempty) chain, and show that U is upper bounded in L. Consider U = ∪U. We have that U ∈ L; to prove this, consider ∅ = A ⊆ U . There exists U0 ∈ U such that A0 := A ∩ U0 = ∅. Let a0 ∈ A0 be the least element of A0 as a subset of U0 . Let us show that a0 is the least element of A. Let a ∈ A; if a ∈ U0 then a ∈ A0 , and so (a0 , a) ∈ tN R . In the contrary case, there exists U ∈ U such that a ∈ U . Because U is a chain and a ∈ U \ U0 we have that (U0 , U ) ∈ η. Because a, a0 ∈ U , a and a0 are comparable w.r.t. tR . Suppose that (a, a0 ) ∈ t; from the definition of η, we obtain that a ∈ U0 , whence a ∈ A0 . It follows that (a0 , a) ∈ tR . Therefore a0 is the least element of A. Hence U ∈ L. It is easy to see that (U, U ) ∈ η for U ∈ U. Indeed, let u ∈ U and u ∈ U ∈ U such that (u, u) ∈ t. There exists U  ∈ U such that u ∈ U  . If (U  , U ) ∈ η, there is nothing to prove. If (U, U  ) ∈ η, by the very definition of η, we have that u ∈ U . Hence U is upper bounded in L. By Zorn’s lemma, L has a maximal element W . Let x ∈ X \ W . Suppose that the conclusion is not true. Then for every w ∈ W we have that (w, x) ∈ t and (x, w) ∈ / t, and so (w, x) ∈ tN . Let ∅ = A ⊆ W  ; if A = {x}, x is obviously the least element of A . In the contrary case we have that ∅ = A := A \ {x} ⊆ W . Taking a the least element of A, it is obvious that a is also the least element of A . Therefore W  ∈ L; this contradicts the maximality  of W because W η W  and W = W  .

3.3 Existence results for efficient points

153

3.3.2 Existence of Maximal Elements with Respect to Transitive Relations We begin with the following result. Proposition 3.3.6. Let t be a transitive relation on X and ∅ = Y ⊆ X. Suppose that one of the following conditions holds: (i) every nonempty set A ⊆ Y with A × A ⊆ t ∪ t−1 ∪ ΔX is upper bounded in Y ; (ii) every chain in Y is upper bounded in Y ; (iii) every well-ordered subset of Y is upper bounded in Y . Then Y has the domination property. In particular, Max(Y ; t) = ∅. Proof. Without loss of generality we may suppose that Y = X. It is obvious that (i) ⇒ (ii) ⇒ (iii). Assume that (iii) holds. By Gajek and Zagrodny’s lemma (Proposition 3.3.5) there exists a well-ordered nonempty set W ⊆ X such that for every x ∈ X \ W , there exists w ∈ W with (x, w) ∈ t or (w, x) ∈ / t. By hypothesis there exists x ∈ X such that (w, x) ∈ tR for every w ∈ W . Assume that for / W and (w, u) ∈ tR ◦ tN ⊆ tN some u ∈ X we have (x, u) ∈ tN . Of course, u ∈ for every w ∈ W . By the choice of W , there exists wu ∈ W such that (u, wu ) ∈ t. It follows that (u, u) ∈ t ◦ tN ⊆ tN , a contradiction. Therefore x ∈ Max(X; t). The fact that X has (DP) follows from the fact that if one of the conditions (i), (ii), (iii) holds for X, it holds also for Xt+ (x) for every x ∈ X.  Corollary 3.3.7. Let t be a transitive relation on X and ∅ = Y ⊆ X. Suppose that one of the following conditions holds: (i) every t-increasing net of Y is upper bounded in Y ; (ii) every strictly t-increasing net of Y is upper bounded in Y ; (iii) every t-increasing net (yi )i∈I ⊆ Y with I totally ordered is upper bounded in Y ; (iv) every strictly t-increasing net (yi )i∈I ⊆ Y with I totally ordered is upper bounded in Y ; (v) every t-increasing net (yi )i∈I ⊆ Y with I well-ordered is upper bounded in Y ; (vi) every strictly t-increasing net (yi )i∈I ⊆ Y with I well-ordered is upper bounded in Y . Then Y has the domination property. In particular, Max(Y ; t) = ∅.

154

3 Optimization in Partially Ordered Spaces

Proof. Of course, (i) ⇒ (ii), (iii) ⇒ (iv), (v) ⇒ (vi), (i) ⇒ (iii) ⇒ (v), and (ii) ⇒ (iv) ⇒ (vi). On the other hand, if (vi) holds, then condition (iii) of the preceding proposition is satisfied. Therefore the conclusions hold.  The following result is due, essentially, to Gajek and Zagrodny [204]. Proposition 3.3.8. Let t be a transitive relation on X and ∅ = Y ⊆ X. Suppose that the following two conditions hold: (i) every nonempty well-ordered subset W of Y is at most countable; (ii) every strictly t-increasing sequence of Y is upper bounded in Y . Then Y has the domination property. In particular, Max(Y ; t) = ∅. Proof. Without loss of generality we may suppose that Y = X. In order to apply Proposition 3.3.6 we show that every well-ordered subset W of X is upper bounded. So let W ⊆ X be a well-ordered set. By hypothesis W is at most countable. If W has a greatest element, this is also an upper bound for W . Suppose that W does not have a greatest element. It follows that W is not finite (otherwise, since W is also totally ordered, it has a greatest element). Therefore there exists p : N → W bijective.   Observe first that for every k ∈ N the set n ∈ N | n > k,  p(k), p(n) ∈ t is nonempty. In the contrary case we have that p(n), p(k) ∈ t for every n > k. Taking p(i) = max{p(0), . . . , p(k)},  p(i) is the  greatest element   of W , a contradiction. > n | p(n ), p(n) ∈ t , and so on; therefore Let n0 = 0, n1 = min n 0 0    nk+1 = min n > nk | p(nk ), p(n) ∈ t for every k ∈ N. Define xk = p(nk ) ∈ X. It is clear that (xk ) is a strictly t-increasing sequence.  Therefore  p(n), x ∈ tR there exists x such that (xk , x) ∈ tR for every k. Wehave that  for every n. In the / tR . It follows  contrary case for some n ∈ N, p(n),x  ∈  / tR , and so, by Proposition 3.3.4, p(nk ), p(n) ∈ tN , that p(n),p(nk ) ∈ for every k; in particular, n = nk for every k. Since n > 0 = n0 , it follows that n ≥ n1 , and so n > n1 . Continuing in this way we get that n ≥ nk for every k, which, of course, is not possible. It follows that x is an upper bound of W . Therefore condition (iii) of Proposition 3.3.6 is satisfied, and so the conclusions hold.  Note that condition (i) of Corollary 3.3.7 is equivalent to 

 + ∀ (yi )i∈I ⊆ Y, (yi ) t-increasing net : Y ∩ i∈I Xt (yi ) = ∅,

(3.5)

while its condition (ii) is equivalent to ∀ (yi )i∈I ⊆ Y strictly t-increasing net : Y ∩



i∈I

 Xt+N R (yi ) = ∅.

A sufficient condition for (3.5) is the following one: ∀A ⊆ Y : Y ⊆

      X \ Xt+ (a) ⇒ ∃ a1 , . . . , an ∈ A : Y ⊆ X \ Xt+ (ai ) .

a∈A

i∈1,n

(3.6)

3.3 Existence results for efficient points

155

If τ is a topology on X and the upper sections of X are closed (for example, if t is closed in X × X), then the sets X \ Xt+ (a) are open; in this situation condition (3.6) is a kind of compactness of Y . The following result is related to this kind of condition. Proposition 3.3.9. Let t be a transitive relation on X and ∅ = Y ⊆ X. Assume that there exists a relation s on X such that s ◦ t N ⊆ tR and ∀ (yi )i∈I ⊆ Y strictly t-increasing : Y ∩

(3.7) 

i∈I

 Xs+ (yi ) = ∅.

(3.8)

Then Y has the domination property. In particular, Max(Y ; t) = ∅. Proof. Once again, without loss of generality we may suppose that Y = X. In order to apply Proposition 3.3.6 we show that its condition (ii)

is verified. So let ∅ = C ⊆ X be a chain. By hypothesis there exists x ∈ c∈C {x ∈ X | (c, x) ∈ s}. If C has a greatest element, there is nothing to prove. Let the contrary case hold and take c ∈ C. Then there exists c ∈ C such that (c, c ) ∈ tN . Of course, (c , x) ∈ s; using (3.7) we obtain that (c, x) ∈ tR . Hence x is an upper bound for C. Using Proposition 3.3.6 the conclusions follow.  The above proof shows that the conclusions of Proposition 3.3.9 hold if in (3.8) I is totally ordered [even well-ordered, applying in this case Proposition 3.3.6(iii)]. When t−1 (=) is a preorder and Y verifies condition (ii) in Proposition 3.3.6 one says that Y has property (Z) in [435, p. 31] and Y is order-totallycomplete in [198, Def. 2.2(a)] (see also [198, Prop. 2.3]), respectively; when Y verifies condition (i) in Corollary 3.3.7 [or, equivalently (3.5)] one says that Y is order complete in [198, Def. 2.5(a)]; when Y verifies condition (3.6) one says that Y is order-semicompact in [435, Def. 2.1] and in [198, Def. (y) 2.4(a)]. Note that each of these conditions is obviously verified by Yt+ R when y ∈ Max(Y ; t) [= Max(Y ; tR )]. In the sequel, in this section, we suppose that (X, τ ) is a topological space. In such a case, the topological variants of the notions defined in [198, Defs. 2.2(b), 2.4(b) and 2.5(b)] (prefixed by τ -) are obtained replacing the sets of type Xt+ (a) with their closures. Applying the next result for t :=−1 one obtains [198, Th. 3.2]. Corollary 3.3.10. Let t be a transitive relation on the topological space (X, τ ) and ∅ = Y ⊆ X. Assume that (a) for all x, y, z ∈ X such that (x, y) ∈ tN and z ∈ cl Xt+ (y) one has z ∈ Xt+ (x), and (b) for every chain C ⊆ Y one has Y ∩ [∩x∈C cl Xt+ (x)] = ∅. Then Y has the domination property. In particular, Max(Y ; t) = ∅.

156

3 Optimization in Partially Ordered Spaces

Proof. Consider again that X = Y and set s := {(y, z) ∈ X × X | z ∈ cl X_t^+(y)}. Conditions (a) and (b) ensure that (3.7) and (3.8) are verified, respectively. Applying Proposition 3.3.9 one gets the conclusion. □

In the present situation (that is, (X, τ) is a topological space) one can formulate other conditions for the existence of maximal points. Having a net (x_i)_{i∈I}, by (x_{ϕ(j)})_{j∈J}, or simply (x_j)_{j∈J}, we denote a subnet of (x_i); this means that (J, ≼) is directed and ϕ : J → I is filtering (see page 28).

Corollary 3.3.11. Let t be a transitive relation on the topological space (X, τ) and ∅ ≠ Y ⊆ X. Assume that one of the following conditions holds:

∀ (y_i)_{i∈I} ⊆ Y t-increasing net, ∃ (y_{ϕ(j)})_{j∈J} → y ∈ Y, ∀ j ∈ J : (y_{ϕ(j)}, y) ∈ t_R,   (3.9)

∀ (y_i)_{i∈I} ⊆ Y strictly t-increasing net, ∃ (y_{ϕ(j)})_{j∈J} → y ∈ Y, ∀ j ∈ J : (y_{ϕ(j)}, y) ∈ t_R.   (3.10)

Then Y has the domination property. In particular, Max(Y; t) ≠ ∅.

Proof. If (3.9) or (3.10) holds, then condition (i) or (ii) of Corollary 3.3.7 is satisfied, respectively. Indeed, suppose that (3.9) holds and let (y_i)_{i∈I} ⊆ Y be t-increasing. There exists a subnet (y_{ϕ(j)})_{j∈J} → y ∈ Y such that (y_{ϕ(j)}, y) ∈ t_R for every j ∈ J. Fix k ∈ I. For every i ∈ I there is some j_i ∈ J with i ≼ ϕ(j) for all j ∈ J, j_i ≼ j. Since (y_i) is t-increasing, (y_k, y_{ϕ(j_k)}) ∈ t_R; since (y_{ϕ(j_k)}, y) ∈ t_R, we get (y_k, y) ∈ t_R. Therefore y is an upper bound for (y_i) in Y. The conclusion holds. □

When the upper or lower sections of X are closed, we may consider other conditions, too.

Proposition 3.3.12. Let t be a transitive relation on the topological space (X, τ) and ∅ ≠ Y ⊆ X. Consider the following conditions:

∀ (y_i)_{i∈I} ⊆ Y t-increasing net : ∃ (y_{ϕ(j)})_{j∈J} → y ∈ Y,   (3.11)

∀ (y_i)_{i∈I} ⊆ Y strictly t-increasing net : ∃ (y_{ϕ(j)})_{j∈J} → y ∈ Y,   (3.12)

∀ (y_i)_{i∈I} ⊆ Y t-increasing net, ∃ y ∈ Y : y_i → y,   (3.13)

and

∀ (y_i)_{i∈I} ⊆ Y strictly t-increasing net, ∃ y ∈ Y : y_i → y.   (3.14)

If the upper sections of X are closed, then (3.11) ⇔ (3.9) and (3.12) ⇔ (3.10), while if t is a partial order and the upper and lower sections of X (w.r.t. t) are closed, then (3.11) ⇔ (3.13) and (3.12) ⇔ (3.14).

Proof. The implications (3.9) ⇒ (3.11), (3.13) ⇒ (3.11), (3.10) ⇒ (3.12), and (3.14) ⇒ (3.12) are always true. Let us show that (3.11) ⇒ (3.9) and (3.11) ⇒ (3.13) under the mentioned supplementary conditions, the other two implications being proved similarly.


Suppose that the upper sections of X are closed and (3.11) holds. Consider (y_i)_{i∈I} ⊆ Y a t-increasing net. There exists a subnet (y_{ϕ(j)})_{j∈J} converging to y ∈ Y. With the notation from the proof of the preceding proposition, we have that (y_{ϕ(j)})_{j_i ≼ j} ⊆ X_t^+(y_i). Taking the limit, we get that lim y_{ϕ(j)} = y ∈ cl X_t^+(y_i) = X_t^+(y_i) for all i ∈ I. Therefore y is an upper bound for (y_i)_{i∈I} in Y, whence y is an upper bound for (y_{ϕ(j)}), too.

Suppose now that t is a partial order and the upper and lower sections of X are closed. Let us show that (3.11) ⇒ (3.13). So, suppose that (3.11) holds and (y_i)_{i∈I} ⊆ Y is t-increasing. By hypothesis there exists a subnet (y_{ϕ(j)})_{j∈J} converging to y ∈ Y. As in the proof of (3.11) ⇒ (3.9), we have that y_i ∈ X_t^−(y) for every i ∈ I. Suppose that (y_i) does not converge to y. It follows that there exists a neighborhood V_0 of y in X such that I_0 := {i ∈ I | y_i ∉ V_0} is cofinal in I. Of course, (y_i)_{i∈I_0} is a t-increasing net of Y. Therefore, by (3.11), there exists a subnet (y_{ϕ(k)})_{k∈K} of (y_i)_{i∈I_0} such that y_{ϕ(k)} → y_0 ∈ Y; of course, (y_{ϕ(k)})_{k∈K} is also a subnet of (y_i)_{i∈I}. The same argument as above gives y_i ∈ X_t^−(y_0) for every i ∈ I. From (y_{ϕ(j)})_{j∈J} ⊆ X_t^−(y_0) we get y ∈ X_t^−(y_0), while from (y_{ϕ(k)})_{k∈K} ⊆ X_t^−(y) we get y_0 ∈ X_t^−(y). Since t is antisymmetric, it follows that y = y_0, which contradicts the choice of I_0 (because V_0 is a neighborhood of y_0 in this case). □

Corollary 3.3.13. Let t be a transitive relation on the Hausdorff topological space (X, τ) and let Y ⊆ X be nonempty. Assume that the upper sections of X are closed and Y is compact. Then Y has the domination property. In particular, Max(Y; t) ≠ ∅.

Proof. In the hypotheses of the corollary condition (3.11) holds, and because the upper sections of X are closed, (3.9) holds, too. The conclusions follow from Corollary 3.3.11. □

Hazen and Morin proved the result of Corollary 3.3.13 in [265, Cor. 2.8]; note that Theorem 2.2 of [265] does not follow from any of the preceding results. One can ask naturally whether a kind of converse of Proposition 3.3.6 (Corollary 3.3.7) is true, in the sense that if (X; t) has the domination property, is it true that every chain (every t-increasing net) of X is upper bounded? The answer is negative, as the following example shows.

Example 3.3.14. Let X = {f_n | n ∈ N>} ∪ {g_n | n ∈ N>}, where f_n, g_n : [0, 1] → R, f_n(x) = x^n and

g_n(x) = x^n for x ∈ [0, 2^{−1/n}],  g_n(x) = 2^{1/n}(1 − x) / (2(2^{1/n} − 1)) for x ∈ ]2^{−1/n}, 1].

For f, g ∈ X we put f ≼ g iff g(x) ≤ f(x) for every x ∈ [0, 1]. Note that g_n is strictly increasing on [0, 2^{−1/n}] and strictly decreasing on [2^{−1/n}, 1], and so attains its greatest value, 1/2, for x = 2^{−1/n}. It follows that for n, m ∈ N>, n ≠ m, g_n and g_m are not comparable. Moreover, f_n ≼ f_{n+1} ≼ g_{n+1} for every n ∈ N>, but f_n ⋠ g_m for n, m ∈ N>, n > m. It follows that the set of maximal points of X is {g_n | n ∈ N>}. So (X, ≼) has the domination property, but the chain {f_n | n ∈ N>} is not upper bounded.

3.3.3 Existence of Efficient Points with Respect to Cones

To begin with, let X be a real vector space and C ⊆ X a convex cone. As usual, with C we associate (see Theorem 2.1.13) the reflexive and transitive relation

≤_C := t := {(x, y) ∈ X × X | y − x ∈ C}.   (3.15)

Taking L := l(C) := C ∩ (−C), the lineality space of C, the equivalence relation ρ := t ∩ t^{-1} is {(x, y) ∈ X × X | y − x ∈ L}. So we get

t_N = {(x, y) ∈ X × X | y − x ∈ C \ L}.   (3.16)

Using Lemma 3.3.1 we obtain that

C + (C \ L) = C \ L,  (C \ L) + (C \ L) ⊆ C \ L;

the above formulae were obtained by Luc in [386, 387]. So, t_{NR} = {(x, y) ∈ X × X | y − x ∈ (C \ L) ∪ {0}}. It follows that (C \ L) ∪ {0} is a pointed convex cone. Note that for x ∈ X and Y ⊆ X one has, for the upper and lower sections of Y with respect to C, Y_C^+(x) := Y_{≤_C}^+(x) = Y ∩ (x + C) and Y_C^−(x) = Y ∩ (x − C). Therefore the upper and lower sections of X (w.r.t. C) are closed iff C is closed. Similarly, for Z ⊆ X, Y_C^+(Z) := Y ∩ (Z + C) and Y_C^−(Z) := Y ∩ (Z − C). In accordance with the notions introduced in Section 3.3.1, the net (x_i)_{i∈I} ⊆ X is (strictly) C-increasing if x_j − x_i ∈ C (x_j − x_i ∈ C \ L) for all i, j ∈ I, i ≼ j, i ≠ j; (x_i)_{i∈I} is C-decreasing if (−x_i)_{i∈I} is C-increasing. Of course, the nonempty set Y ⊆ X is C-upper (lower) bounded if Y ⊆ x_0 − C (Y ⊆ x_0 + C) for some x_0 ∈ X. Moreover, the sets Max(Y; ≤_C) and Min(Y; ≤_C) will be denoted by Max(Y; C) and Min(Y; C), respectively. In the sequel, in this section, we consider only problems related to maximal elements of subsets of Y w.r.t. ≤_C; for getting the results corresponding to minimal elements just replace C by −C. An element y ∈ Max(Y; C) is called an efficient point of Y (w.r.t. C). Furthermore, Y has the domination property (w.r.t. C), denoted by (DP), if for every y ∈ Y there exists y′ ∈ Max(Y; C) such that y ≤_C y′ or, equivalently, Y ⊆ Max(Y; C) − C. It is obvious that for ∅ ≠ Y ⊆ X and C a linear subspace of X we have that Max(Y; C) = Y. Taking into account this fact, in the sequel we suppose that C is not a linear subspace; in particular, {0} ≠ C ≠ X. The next result corresponds to Proposition 3.3.6 and Corollary 3.3.7.
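Before stating it, a minimal numerical sketch, ours rather than the book's, of the notions just introduced; the helper names (is_max, in_C) and the sample set Y are assumptions of this example.

```python
import numpy as np

# Efficient points of a finite Y in R^2 w.r.t. the order cone C = R^2_+
# (here L = l(C) = {0}, so C \ L = C \ {0}).
def is_max(y, Y, in_cone):
    # y is in Max(Y;C) iff no y' in Y satisfies y' - y in C \ {0}
    return not any(in_cone(yp - y) and not np.allclose(yp, y) for yp in Y)

in_C = lambda d: bool(np.all(d >= -1e-12))      # membership test for R^2_+
Y = [np.array(p) for p in [(0, 0), (1, 0), (0, 1), (1, 2), (2, 1), (0.5, 0.5)]]
print([tuple(y) for y in Y if is_max(y, Y, in_C)])   # [(1.0, 2.0), (2.0, 1.0)]
# Every other point is dominated by one of these two, i.e., Y ⊆ Max(Y;C) - C,
# so this finite Y has the domination property (DP).
```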


Proposition 3.3.15. Let ∅ ≠ Y ⊆ X. Suppose that one of the following conditions holds:
(i) every nonempty set Z ⊆ Y such that Z − Z ⊆ C ∪ (−C) is C-upper bounded in Y;
(ii) every chain in Y w.r.t. ≤_C is C-upper bounded in Y;
(iii) every well-ordered subset of Y (w.r.t. ≤_C) is C-upper bounded in Y;
(iv) every C-increasing net of Y is C-upper bounded in Y;
(v) every strictly C-increasing net of Y is C-upper bounded in Y.
Then Y has the domination property. In particular, Max(Y; C) ≠ ∅.

In the following proposition we gather some properties of efficient sets, most of them appearing in the literature in particular cases.

Proposition 3.3.16. Let C, K ⊆ X be proper convex cones such that C ⊆ K, x ∈ X, and ∅ ≠ Y, Z ⊆ X.
(i) Max(Y_K^+(Z); C) = Y_K^+(Z) ∩ Max(Y; C). In particular, Max(Y_C^+(x); C) = Y_C^+(x) ∩ Max(Y; C).
(ii) If Y ⊆ Z ⊆ Y − K and

K ∩ (−C) ⊆ C,   (3.17)

then Max(Z; K) ⊆ Max(Y; C) + (K ∩ (−K)). In particular, if Max(Z; K) ≠ ∅, then Max(Y; C) ≠ ∅.
(iii) If (3.17) holds, then Max(Y; K) ⊆ Max(Y; C).
(iv) If Y ⊆ Z ⊆ Y − C, then Max(Y; K) ⊆ Max(Z; K); moreover, if K ∩ (−C) = {0}, then Max(Y; K) = Max(Z; K).
(v) Suppose that (3.17) and

K + (C \ L) ⊆ C   (3.18)

hold. If Y ⊆ Max(Y; K) − K, then Y ⊆ Max(Y; C) − C (i.e., if Y has (DP) w.r.t. K, then Y has (DP) with respect to C).
(vi) Suppose that K ∩ (−C) = {0} and Y ⊆ Max(Y; K) − C. Then Max(Y; C) = Max(Y; K).

Proof. (i) Let y ∈ Max(Y_K^+(Z); C). Then y ∈ Y_K^+(Z) and Y_K^+(Z) ∩ (y + C) ⊆ y − C. Let y′ ∈ Y ∩ (y + C). It follows that y′ ∈ y + C ⊆ z + K + C ⊆ z + K for some z ∈ Z, and so y′ ∈ Y_K^+(Z). Therefore y′ ∈ y − C, which shows that y ∈ Max(Y; C). Hence Max(Y_K^+(Z); C) ⊆ Y_K^+(Z) ∩ Max(Y; C); since the converse inclusion is immediate, we have the first relation. The other one follows immediately taking K = C and Z = {x}.


(ii) Let z ∈ Max(Z; K) ⊆ Z ⊆ Y − K. Then z = y − k with y ∈ Y (⊆ Z) and k ∈ K. Since y ∈ Z, we obtain that k = y − z ∈ (Z − z) ∩ K ⊆ (−K), and so k ∈ K ∩ (−K). Let us show that y ∈ Max(Y; C). For this let c ∈ (Y − y) ∩ C. It follows that y + c ∈ Y ⊆ Z, whence (Z − z) ∋ y + c − z = y + c − y + k = c + k ∈ C + K ⊆ K. Therefore c + k ∈ −K, and so −c ∈ K + k ⊆ K. Using (3.17) we get c ∈ −C. Hence y ∈ Max(Y; C).

(iii) Although (iii) is not a particular case of (ii) (considering Z = Y), the proof is the same taking k = 0.

(iv) Let y ∈ Max(Y; K) ⊆ Y ⊆ Z. If k ∈ (Z − y) ∩ K, then y + k ∈ Z, whence y + k = y′ − c with y′ ∈ Y and c ∈ C; so y′ − y = k + c ∈ (Y − y) ∩ K ⊆ −K. It follows that k ∈ −c − K ⊆ −K. Therefore y ∈ Max(Z; K). Suppose that K ∩ (−C) = {0} and consider z ∈ Max(Z; K). It follows that z = y − c for some y ∈ Y and c ∈ C. Hence y ∈ Z ∩ (z + K) ⊆ z − K. Therefore y = z − k for some k ∈ K. We obtain that c + k = 0, whence c = −k ∈ C ∩ (−K) = {0}. Thus z = y ∈ Y and Y ∩ (z + K) ⊆ Z ∩ (z + K) ⊆ z − K, which shows that z ∈ Max(Y; K). Therefore Max(Y; K) = Max(Z; K).

(v) Suppose that (3.17), (3.18) hold and Y ⊆ Max(Y; K) − K. Let y ∈ Y. If y ∈ Max(Y; C), there is nothing to prove. Assume that y ∉ Max(Y; C). It follows that y = y′ − c′ with y′ ∈ Y and c′ ∈ C \ L. Since Y has (DP) w.r.t. K, y′ = y″ − k with y″ ∈ Max(Y; K) and k ∈ K. By (iii) we have that y″ ∈ Max(Y; C). Moreover, y = y″ − k − c′ = y″ − (k + c′) ∈ Max(Y; C) − (K + (C \ L)) ⊆ Max(Y; C) − C. Therefore Y ⊆ Max(Y; C) − C.

(vi) Since K ∩ (−C) ⊆ C, by (iii), one has that Max(Y; K) ⊆ Max(Y; C). Since Max(Y; K) ⊆ Y ⊆ Max(Y; K) − C and C ∩ (−C) = {0}, by (iv) one has Max(Y; C) = Max(Max(Y; K); C) ⊆ Max(Y; K). The proof is complete. □

In the sequel we suppose that (X, τ) is a real Hausdorff topological vector space. Applying the results from Section 3.3.2 we obtain several existence theorems for efficient points w.r.t. cones. Before stating them, let us recall or introduce some notions or, more exactly, some possible properties of the proper convex cone C. As seen in Definition 2.1.46, C is (sequentially) Daniell if every C-upper bounded and C-increasing net (sequence) in X has a least upper bound (or supremum) and converges to it. Because we suppose that X is Hausdorff, every (sequentially) Daniell cone is pointed. Other similar conditions are:

(P1) Every C-increasing and C-bounded net in C converges to an element of C.


(P2) Every C-increasing and C-bounded net in C is Cauchy.
(P3) Every C-increasing and τ-bounded net in C converges to an element of C.
(P4) Every C-increasing and τ-bounded net in C is Cauchy.
(P5) Every C-increasing and τ-bounded net in X which is contained in a complete set is convergent.

The sequential variants of (P1)–(P4) are:

(SP1) Every C-increasing and C-upper bounded sequence in C converges to an element of C.
(SP2) Every C-increasing and C-bounded sequence in C is Cauchy.
(SP3) Every C-increasing and τ-bounded sequence in C converges to an element of C.
(SP4) Every C-increasing and τ-bounded sequence in C is Cauchy.

Note that the cone C verifies (P2) [(P4)] iff C is Cauchy regular [Cauchy fully regular]; moreover, when C is closed, C verifies (P1) [(P3)] iff C is regular [fully regular]. In a similar way, the conditions (SP1)–(SP4) correspond to the sequential regularity notions mentioned before (see page 39). Condition (P5) was introduced by Ha [248] under the name of property (**). If one of the above conditions holds, then, necessarily, C is pointed (see Remark 2.1.36). Having in view Remark 2.1.35, an equivalent formulation for (P1) [and similarly for (P3), (SP1), and (SP3)] is:

(P1′) Every C-increasing and C-upper bounded net in X converges to an element of X that is a C-upper bound for the net.

We have the following table of implications, where (D) means Daniell, while (sD) means sequentially Daniell (Table 3.1).

Table 3.1. Relations among properties (P*), (SP*), and (*D).

 (D) ⇒ (P1)  ⇒ (P2)      (P3)  ⇒ (P4) ⇒ (P5)
  ⇓     ⇓       ⇓          ⇓       ⇓
(sD) ⇒ (SP1) ⇒ (SP2)     (SP3) ⇒ (SP4)

The implications ⇒ and ⇓ in Table 3.1 are trivial. The table also records partial converses: some of them are valid always (by Proposition 2.1.37(i)); the upward implications, from the sequential properties back to the corresponding net properties, are valid when either C is complete or X is first-countable (by Proposition 2.1.37(iii)); one implication is valid when either C is complete, or C is closed and X is first-countable (see Remark 2.1.47); further implications are obviously valid if C is complete, and the remaining ones hold for C closed (see Remark 2.1.47). Other implications among the properties (P1)–(P4) and (SP1)–(SP4) are mentioned in (3.19), where, in each pair, the implication → (from the τ-bounded variant to the C-bounded one) holds if C is normal, while the implication ← holds if C is boundedly order complete (that is, every C-increasing and τ-bounded net of C has a supremum):

(P3) ⇄ (P1), (P4) ⇄ (P2), (SP3) ⇄ (SP1) and (SP4) ⇄ (SP2).   (3.19)

Remark 3.3.17. Note also that C verifies (P2) and (P4) when C is well-based (see Theorem 2.1.45), and C verifies (P2) when X is an l.c.s. and C is supernormal (see Proposition 2.2.37).

Proposition 3.3.18. Let Y ⊆ X be a nonempty set. Assume that one of the following conditions holds:
(i) C satisfies (P1), while Y is closed and C-upper bounded;
(ii) C is closed and satisfies (SP2), while Y is C-complete and C-upper bounded;
(iii) C satisfies (P3), while Y is closed and τ-bounded;
(iv) C is closed and satisfies (SP4), while Y is C-complete and τ-bounded.
Then Y has the domination property. In particular, Max(Y; C) ≠ ∅.

Proof. Let (y_i)_{i∈I} ⊆ Y be a C-increasing net. In case (i) or (iii), since C satisfies (P1) or (P3), and Y is C-upper bounded or τ-bounded, respectively, (y_i) is convergent to some y ∈ X which is a C-upper bound for (y_i). Since Y is closed, y ∈ Y. Therefore condition (3.9) holds. In cases (ii) and (iv), since C satisfies (SP2) or (SP4), C satisfies (P2) or (P4), respectively (see Table 3.1). Since Y is C-upper bounded or τ-bounded, (y_i) is Cauchy; hence y_i → y ∈ Y because Y is C-complete (see page 39 for the definition of C-completeness). Therefore in both cases condition (3.13) holds. Since C is closed, condition (3.9) also holds by Proposition 3.3.12. Applying Corollary 3.3.11, the conclusions follow. □

When every well-ordered subset W of X w.r.t. ≤_C (see page 152) is at most countable (i.e., (X, ≤_C) is countable orderable in the sense of Gajek and Zagrodny [204]), the closedness and completeness in the preceding result can be taken in the weaker sequential sense.

Proposition 3.3.19. Let Y ⊆ X be a nonempty set. Assume that every well-ordered subset W of X w.r.t. ≤_C is at most countable and one of the following conditions holds:
(i) C satisfies (SP1), while Y is sequentially closed and C-upper bounded;
(ii) C is closed and satisfies (SP2), while Y is sequentially C-complete and C-upper bounded;
(iii) C satisfies (SP3), while Y is sequentially closed and τ-bounded;
(iv) C is closed and satisfies (SP4), while Y is sequentially C-complete and τ-bounded.


Then Y has the domination property. In particular, Max(Y; C) ≠ ∅.

Proof. Let W ⊆ Y be well-ordered with respect to ≤_C. Therefore, by our hypothesis, W is at most countable. If W is finite, then W is bounded from above by its greatest element (which is in Y). In the contrary case W = {y_n | n ≥ 1}. Consider z_n := max{y_k | k ≤ n}. It is obvious that (z_n) ⊆ Y is a C-increasing sequence. As in the proof of the preceding proposition we obtain that (z_n) converges to some z ∈ Y with the property that z_n ≤_C z for every n. Of course, y_n ≤_C z for every n; consequently, W is C-upper bounded in Y. The conclusion follows by applying Proposition 3.3.6(iii). □

In the next result, we provide two sufficient conditions on C ensuring that every well-ordered (w.r.t. ≤_C) subset of X is at most countable.

Proposition 3.3.20. Assume that one of the following conditions is satisfied: (i) C has an algebraic base; (ii) X is an H.t.v.s. and C is based. Then every well-ordered subset of X w.r.t. ≤_C is at most countable.

Proof. Clearly, ≤_C is a partial order on X. Let W ⊆ X be well-ordered with respect to ≤_C. Replacing W by W + c_0 − w_0 with c_0 ∈ C_0 := C \ {0} and w_0 the least element of W, we may (and do) assume that W ⊆ C_0. Having in view Remark 2.1.6, it is sufficient to have a function ϕ : C_0 → R such that ϕ(x) < ϕ(x + x′) for x, x′ ∈ C_0; in such a case ϕ is injective on W and ϕ(W) is a well-ordered subset of R, hence W is at most countable.

(i) Assume that C has an algebraic base. By Theorem 2.1.15, there exists a linear functional ϕ : X → R such that ϕ(x) > 0 for x ∈ C_0. Hence ϕ(x) < ϕ(x + x′) for all x, x′ ∈ C_0, and so W is at most countable.

(ii) By Definition 2.1.42(i), there exists a convex set B such that 0 ∉ cl B and C = R₊B. Consider ν_B : C_0 → R defined by ν_B(x) := sup I_x, where I_x := {t > 0 | x ∈ tB}. Of course, ν_B(x) > 0 for every x ∈ C_0; moreover, ν_B(x) < ∞, because otherwise 0 ∈ cl B. Consider x, x′ ∈ C_0; for t ∈ I_x and t′ ∈ I_{x′}, that is, x ∈ tB and x′ ∈ t′B, one has x + x′ ∈ (t + t′)B, and so I_x + I_{x′} ⊆ I_{x+x′}; hence ν_B(x + x′) ≥ ν_B(x) + ν_B(x′) > ν_B(x). Hence W is at most countable. □

Using Proposition 3.3.16, in Propositions 3.3.18 and 3.3.19, one can ask only that the hypotheses on Y be satisfied by a nonempty upper section Y_C^+(x), or even a nonempty set of the form Y_C^+(Z), in order to have Max(Y; C) ≠ ∅. The next result is due to Ha [249].

Proposition 3.3.21. Assume that cl C satisfies (P5) and Y ⊆ X is nonempty, complete, and τ-bounded. Then Max(Y; C) ≠ ∅.

Proof. The proof is the same as that of assertion (iii) of Proposition 3.3.18 (but obtaining directly that (y_i) is convergent). □

Proposition 3.3.22. Assume that C is closed and Y ⊆ X is nonempty and compact. Then Y has the domination property. In particular, Max(Y; C) ≠ ∅.


Proof. The relation t := ≤_C and Y satisfy the conditions of Corollary 3.3.13. The conclusions follow. □

Corollary 3.3.23. Let ∅ ≠ Y ⊆ X and assume that

cl C ∩ (−C) ⊆ C.   (3.20)

If there exists Z ⊆ X such that Y_C^+(Z) or Y_{cl C}^+(Z) is nonempty and compact (even weakly compact if X is locally convex), then Max(Y; C) ≠ ∅.

Proof. Suppose first that Y_C^+(Z) is nonempty and compact (or weakly compact). By the preceding proposition, Max(Y_C^+(Z); cl C) ≠ ∅. Using Proposition 3.3.16(iii) and (i) for K = cl C, we have that Max(Y_C^+(Z); cl C) ⊆ Max(Y_C^+(Z); C) ⊆ Max(Y; C), whence Max(Y; C) ≠ ∅. Similarly, if Y_{cl C}^+(Z) is nonempty and compact (or weakly compact), then ∅ ≠ Max(Y_{cl C}^+(Z); cl C) ⊆ Max(Y; cl C) ⊆ Max(Y; C). Therefore the conclusion holds in both cases. □

Proposition 3.3.24. Let C, K ⊆ X be convex cones and ∅ ≠ Y ⊆ X. Assume that (3.18) and

∀ (y_i)_{i∈I} ⊆ Y strictly C-increasing net : Y ∩ [⋂_{i∈I}(y_i + K)] ≠ ∅   (3.21)

hold. Then Y has the domination property. In particular, Max(Y; C) ≠ ∅.

Proof. Consider t defined by (3.15) and s := {(x, y) ∈ X × X | y − x ∈ K}. Taking into account (3.15) and (3.16), (3.18) is equivalent to (3.7), while from (3.21) we obtain that (3.8) holds. The conclusions follow by applying Proposition 3.3.9. □

Note that in locally convex spaces, to every result stated above there corresponds a version in which the topology τ on X is replaced by the weak topology w.

3.3.4 Types of Convex Cones and Compactness with Respect to Cones

In Sections 2.1 and 2.2 we introduced several types of cones, some of them being useful for the existence of efficient points. Luc, in [386, Def. 2.3] and [387, Def. I.1.1], says that C is correct if (3.18) holds for K = cl C. Every domination cone in the sense of Henig [266, p. 112] is correct. Concerning cone compactness, recall the following notions.


Hartley, in [263, p. 214], says that Y is C-compact if Y_{cl C}^+(y) is compact for every y ∈ Y. Corley, in [140, Def. 2.5], says that Y ⊆ X is C-semicompact if (3.6) holds for t := ≤_{cl C}; this notion goes back to Wagner [552, p. 31]. As a generalization of this notion, Luc ([386, Def. 2.1] and [387, Def. II.3.2]) says that Y is C-complete if (3.21) holds for K = cl C. Postolică [477] and Isac [295] say that Y is C-bounded if there exists a τ-bounded set Y_0 ⊆ Y such that Y ⊆ Y_0 − C; Y is C-closed if Y − C is closed; Y is C-semicompact if Y is C-bounded and C-closed. In fact, these notions are also used in [293, Def. 3], but with Y_0 a singleton. Dedieu [156] says that Y is asymptotically compact (a.c. for short) if there exist γ > 0 and U a neighborhood of 0 ∈ X such that ([0, γ]Y) ∩ U is relatively compact; note that Y is a.c. iff cl Y is a.c. (see [579, Prop. 2.2(i)]). Of course, every subset of R^m is asymptotically compact. Several properties of a.c. sets can be found in [579, Prop. 2.2].

The asymptotic cone of the nonempty set A ⊆ X is

A_∞ := {x ∈ X | ∃ (t_i)_{i∈I} ⊆ (0, ∞), t_i → 0, ∃ (a_i)_{i∈I} ⊆ A : t_i a_i → x};

if X is a normed space, in particular for X = R^m, one can use sequences instead of nets. Note that A_∞ = (cl A)_∞. If A is closed and convex, then A_∞ is given by the known formula from convex analysis, A_∞ = ⋂_{t>0} t(A − a) for some fixed a ∈ A. The importance of this notion in our context is shown by the following result.

Proposition 3.3.25. Let X be an H.t.v.s., C ⊆ X be a closed convex cone, and Y ⊆ X be a nonempty closed set.
(i) If Y is asymptotically compact and Y_∞ ∩ C = {0}, then Y ∩ (x + C) is compact for every x ∈ X (hence Y is C-compact).
(ii) Suppose that there exists a compact set Q ⊆ X such that 0 ∉ Q, C = R₊·Q and Y_∞ ∩ C = {0}. Then Y ∩ (x + C) is compact for every x ∈ X. Moreover, if C is pointed, then Y − C is closed and (Y − C) ∩ (x + C) is compact for every x ∈ X.
Conversely, if Y is convex and Y ∩ (x + C) is nonempty and compact for some x ∈ X, then Y_∞ ∩ C = {0}.

Proof. Suppose that Y ∩ (x + C) ≠ ∅. Both in (i) and (ii) we have that {0} ⊆ [Y ∩ (x + C)]_∞ ⊆ Y_∞ ∩ (x + C)_∞ = Y_∞ ∩ cl C = Y_∞ ∩ C = {0}. If C = R₊·Q with Q compact and 0 ∉ Q, one obtains easily that C is locally compact. Using [579, Prop. 2.2(ii)] we get that C is an a.c. set. So, in both cases Y ∩ (x + C) is a.c. as a subset of such a set. Applying [579, Prop. 2.3] we obtain that Y ∩ (x + C) is relatively compact; since the set is closed, it is compact, too.

is given by the known formula from convex analysis A∞ = t>0 t(A − a) for some fixed a ∈ A. The importance of this notion in our context is shown by the following result. Proposition 3.3.25. Let X be an H.t.v.s., C ⊆ X be a closed convex cone, and Y ⊆ X be a nonempty closed set. (i) If Y is asymptotically compact and Y∞ ∩ C = {0}, then Y ∩ (x + C) is compact for every x ∈ X (hence Y is C-compact). (ii) Suppose that there exists a compact set Q ⊆ X such that 0 ∈ / Q, C = R+ · Q and Y∞ ∩ C = {0}. Then Y ∩ (x + C) is compact for every x ∈ X. Moreover, if C is pointed, then Y − C is closed and (Y − C) ∩ (x + C) is compact for every x ∈ X. Conversely, if Y is convex and Y ∩ (x + C) is nonempty and compact for some x ∈ X, then Y∞ ∩ C = {0}. Proof. Suppose that Y ∩ (x + C) = ∅. Both in (i) and (ii) we have that {0} ⊆ [Y ∩ (x + C)]∞ ⊆ Y∞ ∩ (x + C)∞ = Y∞ ∩ cl C = Y∞ ∩ C = {0}. If C = R+ · Q with Q compact and 0 ∈ / Q, one obtains easily that C is locally compact. Using [579, Prop. 2.2(ii)] we get that C is an a.c. set. So, in both cases Y ∩ (x + C) is a.c. as a subset of such a set. Applying [579, Prop. 2.3] we obtain that Y ∩ (x + C) is relatively compact; since the set is closed, it is compact, too.


Consider now case (ii) and suppose that C is pointed. Applying [579, Cor. 3.12] we have that Y − C is closed and (Y − C)_∞ = Y_∞ − C. Let z ∈ (Y − C)_∞ ∩ C. It follows that there are y ∈ Y_∞ and c ∈ C such that z = y − c. Since z ∈ C, we obtain that y = z + c ∈ C ∩ Y_∞ = {0}. Since C is pointed, we get z = 0. By what was obtained above we have that (Y − C) ∩ (x + C) is compact for every x ∈ X.

Now let Y be convex and x ∈ X be such that Y ∩ (x + C) is nonempty and compact; take y ∈ Y ∩ (x + C). Assume that 0 ≠ u ∈ Y_∞ ∩ C; because Y is closed, we have that y + R₊u ⊆ Y, and so y + R₊u ⊆ Y ∩ (y + C) [⊆ Y ∩ (x + C)], contradicting the compactness of Y ∩ (x + C). □

Corollary 3.3.26. Let X be an H.t.v.s., C ⊆ X be a closed convex cone and Y ⊆ X be a nonempty closed set. Assume that Y_∞ ∩ C = {0}. If Y is asymptotically compact or C = R₊·Q for some compact set Q ⊆ X with 0 ∉ Q, then Y has the domination property.

Proof. By Proposition 3.3.25, Y_C^+(y) is compact for every y ∈ Y; it follows that Max(Y_C^+(y); C) ≠ ∅ by Proposition 3.3.22. Therefore Y has the domination property. □

Notice that Y_∞ ∩ C = {0} if C is a pointed closed convex cone and there exists a bounded set B ⊆ Y such that Y ⊆ B − C, in particular if Y is C-upper bounded.

3.3.5 Classification of Existence Results for Efficient Points

In the sequel we mention several existence results for efficient points that can be found in the literature in equivalent formulations (some of them were stated for minimization problems, or using other terminology). In all the results from this section C ⊆ X is a convex cone and Y ⊆ X is a nonempty set.
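Before the classification, a brief numerical probe, ours and purely illustrative, of the recession-cone hypothesis Y_∞ ∩ C = {0} from Proposition 3.3.25 and Corollary 3.3.26; the set Y and all identifiers below are assumptions chosen for the sketch.

```python
import numpy as np

# Y = {(a, b) : b <= -a^2} is closed and convex with Y_inf = {0} x (-inf, 0],
# so Y_inf meets C = R^2_+ only at 0; Corollary 3.3.26 then yields (DP).
# We approximate recession directions via t * y with y in Y and t -> 0.
t = np.array([1e-1, 1e-3, 1e-5])
a = 1.0 / t                                   # boundary points (a, -a^2) escaping to infinity
v = np.stack([t * a, t * (-a ** 2)], axis=1)  # the scaled points t*(a, -a^2) = (1, -a)
print(v / np.linalg.norm(v, axis=1, keepdims=True))
# The normalized directions approach (0, -1): no nonzero direction of R^2_+
# survives in Y_inf, and indeed Max(Y; R^2_+) = {(a, -a^2) : a >= 0} dominates Y.
```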

Results Corresponding to Proposition 3.3.15 (The Nontopological Case)

Theorem 3.3.27. (Corley [140, Th. 3.1]) Let X be an H.t.v.s. and cl C be pointed. Suppose that Y is C-semicompact in Corley's sense. Then Max(Y; C) ≠ ∅.

Proof. By hypothesis, Y satisfies (3.6), and therefore (3.5), for t = ≤_{cl C}. It follows that (iv) of Proposition 3.3.15 holds for cl C and Y, and so Max(Y; cl C) ≠ ∅. Using Proposition 3.3.16(iii) for K = cl C, the conclusion follows. □

We put this result here because, applied to K = cl C, the result is not topological.


Theorem 3.3.28. (Chew [131]) Let X be a real vector space. Suppose that the intersection of every nonempty chain (w.r.t. inclusion) of C-upper sections of Y is nonempty. Then Max(Y; C) ≠ ∅.

Proof. For (y_i)_{i∈I} ⊆ Y, (Y_C^+(y_i))_{i∈I} is a chain iff (y_i)_{i∈I} is a chain in (Y, ≤_C). So, using Proposition 3.3.15(ii), the conclusion follows. □

Note that the above formulation covers Proposition 4.6 (where it is supposed that C is pointed), Corollary 4.4, and Proposition 4.7 from [131] (in Proposition 4.7 it is assumed that X is a Hilbert space and C ∩ (−C) is closed).

Results Corresponding to Propositions 3.3.18 and 3.3.19

Theorem 3.3.29. (Jameson [315, Cor. 3.8.10]) Let X be an H.t.v.s., and let C be well-based and closed. If Y is complete and bounded, then Y has the domination property.

Proof. By Theorem 2.1.45(i), C is Cauchy fully regular, that is, condition (P4) is satisfied; the conclusion follows applying Proposition 3.3.18(iv). □

Theorem 3.3.30. (Penot [461, Th. 3.3]) Let X be an H.t.v.s. and let C be Daniell and closed. If Y is C-upper bounded and closed, then Max(Y; C) ≠ ∅.

Proof. Apply Proposition 3.3.18(i). □

Theorem 3.3.31. (Cesari and Suryanarayana [111, Lemma 4.1]) Let X be a Banach space and C a closed (π)-cone. Suppose that Y is C-upper bounded. Then Max(cl_w Y; C) ≠ ∅.

Proof. Having in view Definition 2.2.31(ii) and Corollary 2.1.50, C is Daniell and complete w.r.t. the weak topology w. It follows that C and cl_w Y satisfy condition (i) of Proposition 3.3.18 for τ = w. □

The above result also follows from Corollary 3.3.26 because Y ⊆ y_0 − C implies Y_∞ ∩ C = {0}.

Theorem 3.3.32. (Borwein [82, Th. 1]) Let X be an H.t.v.s. and let C be Daniell and closed. Assume that one of the following conditions is satisfied: (i) there exists y ∈ Y such that Y_C^+(y) is C-upper bounded and closed; (ii) Y is τ-bounded and closed, and C is boundedly order complete. Then Max(Y; C) ≠ ∅.

Proof. In the first case apply Proposition 3.3.18(i) for Y_C^+(y) and then Proposition 3.3.16(i). In the second case (P3) holds [see also (3.19)], so that the conclusion follows by applying Proposition 3.3.18(iii). □

Theorem 3.3.33. (Isac [293, Th. 2]) Let X be an H.l.c.s. and let C be supernormal. Suppose that there exists ∅ ≠ Z ⊆ Y such that Y_C^+(Z) is complete and τ-bounded. Then Max(Y; C) ≠ ∅.


Proof. By Proposition 2.2.37, cl C satisfies (P4). Using also Proposition 3.3.16(iii) for K = cl C and (i), we have

∅ ≠ Max(Y_C^+(Z); cl C) ⊆ Max(Y_C^+(Z); C) ⊆ Max(Y; C). □

Note that Postolică [478, Th. 3.2] obtained a slightly more general variant: X, C, and Y being as in Isac's theorem, he supposes that there exist Z such that Y ⊆ Z ⊆ Y − C and ∅ ≠ Y_0 ⊆ Y such that Z_C^+(Y_0) is complete and τ-bounded; then Max(Y; C) ≠ ∅. The conclusion follows from Isac's theorem applied to Z and Proposition 3.3.16(iv).

Theorem 3.3.34. (Postolică [478, Cor. 3.2.1]) Let X be an H.l.c.s. and C have a complete bounded base. Suppose that Y is bounded and closed. Then Y has the domination property.

Proof. The cone C is fully regular by Theorem 2.1.45(iii); hence C satisfies (P3) because it is also closed. Applying Proposition 3.3.18(iii), the conclusion follows. □

Theorem 3.3.35. (Attouch & Riahi [32, Th. 2.5]) Let X be a Banach space and let C be a closed convex cone such that C ⊆ {x ∈ X | ‖x‖ ≤ ⟨x, x*⟩} for some x* ∈ X*. Suppose that Y is closed and x*(Y) is bounded above (in R). Then Y has the domination property.

Proof. Set γ := sup x*(Y) (∈ R) and take y_0 ∈ Y; then Y_0 := Y_C^+(y_0) is clearly closed, and so it is complete because X is so. For y ∈ Y_0 one has y − y_0 ∈ C, and so ‖y − y_0‖ ≤ ⟨y − y_0, x*⟩ ≤ γ − ⟨y_0, x*⟩; hence Y_0 is bounded. Applying Theorem 3.3.29 (or directly Proposition 3.3.18(iv)), one has Max(Y_0; C) ≠ ∅. Because y_0 ∈ Y is arbitrary, Y has the domination property. □

Remark 3.3.36. If either (a) C is normal and Y is C-upper bounded, or (b) there exists x* ∈ C^# such that B := {y ∈ C | ⟨y, x*⟩ = 1} is bounded and x*(Y) is upper bounded, then Y_C^+(y) is bounded for every y ∈ Y. Indeed, in case (a) one has Y_C^+(y) ⊆ [y, x_0]_C for all y ∈ Y, where x_0 ∈ X is a C-upper bound of Y. In case (b), if y′ ∈ Y_C^+(y), then y′ − y = λb for some λ ≥ 0 and b ∈ B; thus, 0 ≤ λ = ⟨y′ − y, x*⟩ ≤ sup x*(Y) − ⟨y, x*⟩ =: γ ∈ R. Hence Y_C^+(y) ⊆ y + [0, γ]·B.

Theorem 3.3.37. (Ha [248, 249]) Let X be a quasi-complete H.t.v.s. Assume that cl C satisfies (P5) and [A]_{cl C} is bounded for every bounded set A ⊆ X. If Y − cl C is closed and Y ⊆ Y_0 − C for some bounded set Y_0 ⊆ X, then Max(Y; C) ≠ ∅.


Proof. We claim that the cone cl C satisfies (P3). Indeed, take (y_i)_{i∈I} a cl C-increasing and τ-bounded net and set A := {y_i | i ∈ I}; then E := cl([A]_{cl C}) is a closed bounded subset of the quasi-complete space (X, τ), and so E is complete. Hence (y_i)_{i∈I} converges by (P5), and so (P3) holds. Let y_0 ∈ Y and consider Y_1 := (Y − cl C)^+_{cl C}(y_0); hence Y_1 is closed. Because Y_1 ⊆ [Y_0 ∪ {y_0}]_{cl C} and Y_0 is bounded, so is Y_1. Hence Max(Y_1; cl C) ≠ ∅ by Proposition 3.3.18(iii), and so Max(Y; C) ≠ ∅ by Proposition 3.3.16(i) and (ii). The proof is complete. □

Theorem 3.3.38. (Ng and Zheng [435, Th. 3.1]) Let (X, τ) be an H.t.v.s. and let C be well-based and closed. Assume that there exists y_0 ∈ Y such that Y_C^+(y_0) is sequentially C-complete and τ-bounded; then Max(Y; C) ≠ ∅.

Proof. By Theorem 2.1.45(i), C is Cauchy fully regular, and so (SP4) holds. The conclusion follows using Propositions 3.3.20 and 3.3.19(iv). □

Results Corresponding to Proposition 3.3.22 and Its Corollary

Theorem 3.3.39. (Bitran and Magnanti [71, Prop. 3.1]) Let X = R^m, let C be closed with C^# ≠ ∅, and let Y be closed and convex. Then Max(Y; C) ≠ ∅ iff Y_∞ ∩ C = {0}.

Proof. The necessity follows from the last part of Proposition 3.3.25, while the sufficiency follows from both parts of Corollary 3.3.26. □

Note that the convexity of Y is used only for the necessity part; a heuristic illustration of the dichotomy is sketched below.

Theorem 3.3.40. (Nieuwenhuis [438, Th. I-14]) Let X be a Banach space and C have nonempty interior. If Y is compact, then Max(Y; C_0) ≠ ∅, where C_0 := {0} ∪ int C.

Proof. By Proposition 3.3.22, Max(Y; cl C) ≠ ∅. The conclusion follows from Proposition 3.3.16(iii) because cl C ∩ (−C_0) = {0} ⊆ C_0. □

Theorem 3.3.41. (Henig [266, Th. 2.1]) Let X = R^m and let C^# be nonempty. Suppose that there exists a closed set Z such that Y ⊆ Z ⊆ Y − cl C and Z_∞ ∩ cl C = {0}. Then Max(Y; C) ∩ (y_0 + cl C) ≠ ∅ for every y_0 ∈ Y.

Proof. As in (or from) Bitran–Magnanti's theorem, Max(Z^+_{cl C}(y_0); cl C) ≠ ∅. The conclusion follows from Proposition 3.3.16(iii) taking K = cl C. □
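Returning to the dichotomy of Theorem 3.3.39, the following sketch, ours and merely heuristic (the probe only tests the two coordinate directions, which happens to suffice for these half-planes), contrasts the two cases in R² with C = R²₊; all identifiers are assumptions of the example.

```python
import numpy as np

# Bitran-Magnanti dichotomy, illustrated: for closed convex Y in R^2 and C = R^2_+,
# Max(Y;C) is nonempty iff Y_inf ∩ C = {0}.
# Y1 = {(a,b) : b <= -a}: (Y1)_inf ∩ C = {0}, and Max(Y1;C) is the line b = -a.
# Y2 = {(a,b) : b <= 0}:  (1,0) lies in (Y2)_inf ∩ C, and Max(Y2;C) is empty.
def dominated(y, in_Y, step=1.0):
    # heuristic probe: y is dominated if a step along a coordinate axis stays in Y
    return in_Y(y + np.array([step, 0.0])) or in_Y(y + np.array([0.0, step]))

in_Y1 = lambda p: p[1] <= -p[0] + 1e-12
in_Y2 = lambda p: p[1] <= 1e-12
y = np.array([2.0, -2.0])      # lies on the efficient line of Y1
print(dominated(y, in_Y1))     # False: y is efficient in Y1
print(dominated(y, in_Y2))     # True: in Y2 one can always move to the right
```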

Theorem 3.3.42. (Borwein [82, Th. 1]) Let X be an H.t.v.s. and let C be closed. If Y_C^+(y) is compact for some y ∈ Y, then Max(Y; C) ≠ ∅.

Proof. Apply Corollary 3.3.23 for Z = {y}. □



Theorem 3.3.43. (Penot and Sterna-Karwat [466, Rem. 3.3]) Let X be an H.t.v.s., Y a closed convex set, and C ⊆ X have a compact base. If Max(Y; C) ≠ ∅, then Y has the domination property.


Proof. Taking y ∈ Max(Y; C), Y_C^+(y) is nonempty and compact, being a singleton. Using Proposition 3.3.25, Y_∞ ∩ C = {0}, whence Y has the domination property by Corollary 3.3.26. □

Theorem 3.3.44. (Jahn [306, Th. 2.3(b), Th. 2.6(b)]) Let (X, ‖·‖) be a reflexive Banach space and Y ⊆ X have a nonempty C-upper section Y_C^+(y) which is weakly closed and C-upper bounded.
(i) Suppose that ‖x‖ < ‖x + y‖ for all x, y ∈ C \ {0}. Then Max(Y; C) ≠ ∅.
(ii) Suppose that aint C ≠ ∅ and ‖x‖ < ‖x + y‖ for all x ∈ C, y ∈ aint C. Then Max(Y; C) ≠ ∅.

Proof. Note first that cl A = cl(aint A) when A (a subset of an H.t.v.s.) is convex and aint A ≠ ∅. Indeed, let u ∈ aint A and x ∈ cl A. There exists (x_i)_{i∈I} ⊆ A with x_i → x. For every λ ∈ (0, 1) and i ∈ I we have that (1 − λ)u + λx_i ∈ aint A, and so (1 − λ)u + λx ∈ cl(aint A). Taking λ → 1, we get x ∈ cl(aint A). Therefore cl A = cl(aint A).

Let x, y ∈ cl C \ {0}. There exist sequences (x_n)_{n≥1}, (y_n)_{n≥1} from C \ {0} in case (i) and from aint C in case (ii) such that x_n → x, y_n → y. Since ‖x_n‖ < ‖x_n + y_n‖ in both cases, it follows that ‖x‖ ≤ ‖x + y‖. Therefore ‖x‖ ≤ ‖x + y‖ for all x, y ∈ cl C in both cases. In particular, C is an acute (normal) cone. From the normality of C (or directly from the monotonicity conditions), Y_C^+(y) is also bounded. It follows that Y_C^+(y) is w-compact. Since (3.20) is satisfied, C being acute, the conclusion follows by applying Corollary 3.3.23. □

Note that Jahn stated a similar result when X is the dual of a normed space and Y_C^+(y) is w*-closed.

Theorem 3.3.45. (Tanaka [534, Lemma 2.4]) Let X = R^m and let cl C be pointed. Suppose that Y is C-compact. Then Max(Y; C) ≠ ∅. Moreover, if C is correct, then Y has the domination property.

Proof. Note that by taking K = cl C, condition (3.17) is satisfied, while when C is correct, (3.18) holds, too. Since Y is C-compact, Y^+_{cl C}(y) is compact for every y ∈ Y. From Corollary 3.3.23 we have that Max(Y^+_{cl C}(y); cl C) ≠ ∅ for every y ∈ Y, and so Y has the domination property w.r.t. cl C. In particular, by Proposition 3.3.16(iii), Max(Y; C) ≠ ∅, while when C is correct we obtain that Y has (DP) using assertion (v) of the same proposition. □

Results Corresponding to Proposition 3.3.24

Theorem 3.3.46. (Luc [386, Th. 2.6, Cor. 2.12], [387, Th. II.3.3]) Assume that C is correct and Y satisfies (3.21) for K = cl C. Then Y has the domination property.


Proof. The conditions of Proposition 3.3.24 are satisfied in this case. □




Theorem 3.3.47. (Malivert [394, Th. 3.5]) Let C be a Daniell cone with int C ≠ ∅. Suppose that Y is closed and for every y ∈ Y \ Max(Y; C_0) there exists y_0 ∈ (y + int C) ∩ Y such that (y_0 + int C) ∩ Y is C-upper bounded, where C_0 := {0} ∪ int C. Then Y has the domination property w.r.t. C_0.

Proof. Let y ∈ Y \ Max(Y; C_0). By hypothesis, there exists y_0 ∈ (y + int C) ∩ Y such that Y_0 := (y_0 + int C) ∩ Y is C-upper bounded. If Y_0 = ∅, then y_0 ∈ Max(Y; C_0) and y is dominated by y_0. Assume that Y_0 ≠ ∅, and take (z_i)_{i∈I} ⊆ Y_0 a strictly C_0-increasing net. It follows that (z_i) is C-increasing and C-upper bounded. Since C is Daniell, z_i → z ∈ X and z_i ≤_C z for every i ∈ I. Hence z ∈ Y because Y is closed, and so z ∈ Y_0 because (z_i) ⊆ Y_0 is strictly C_0-increasing. So, we obtain that z ∈ Y_0 ∩ [⋂_{i∈I}(z_i + C)]. Therefore, the hypotheses of Proposition 3.3.24 are satisfied with C, K, Y replaced by C_0, C, Y_0, respectively, and so there exists y′ ∈ Max(Y_0; C_0) (⊆ Y_0 ⊆ y_0 + int C ⊆ y + int C). Assuming that y′ ∉ Max(Y; C_0), there exists y″ ∈ Y ∩ (y′ + int C) ⊆ Y ∩ (y_0 + int C) = Y_0, contradicting the fact that y′ ∈ Max(Y_0; C_0). Hence y ∈ Max(Y; C_0) − C_0. □

Note that Propositions 3.3.18–3.3.24 have different fields of application, although these fields are not disjoint. For example, even in finite-dimensional spaces there exist convex cones whose closure is pointed that are not correct; for such a cone and a compact set one can apply Corollary 3.3.23 but not Proposition 3.3.24 in Luc's version. Also note that there are other results concerning existence of efficient points w.r.t. cones that do not enter into our classification. To our knowledge, among these results are Theorem 4.1 of Hartley [263], Propositions 4.10 and 4.11 of Chew [131], and Theorem 2.2 of Sterna-Karwat [515] (see also the paper of Turinici [547] for a discussion related to the use of ordinals). The presentation of this section follows that in the paper by Sonntag and Zălinescu [511]; however, we modified slightly Proposition 3.3.18 and introduced Propositions 3.3.19 and 3.3.20 in order to cover Theorem 3.3.29 by Jameson [315] and Theorem 3.3.38 by Ng and Zheng [435].

3.3.6 Some Density and Connectedness Results

Throughout this section, (X, τ) is an H.l.c.s. if not mentioned explicitly otherwise; when no mention of the topology is made for a topological notion, it is considered w.r.t. τ. Besides τ, we consider also a locally convex topology σ on X such that w ⊆ σ ⊆ τ, w being the weak topology of X. Note that cl A ⊆ cl_σ A ⊆ cl_w A for any subset A of X, the inclusions becoming equalities if A is convex. Moreover, based on the Mackey theorem (see [349, (7), p. 254]), notice that the classes of bounded subsets of X w.r.t. τ, σ, and w coincide.

Let ∅ ≠ Y ⊆ X, and C ⊆ X a convex cone. We say that y_0 ∈ Y is properly maximal w.r.t. C if there exists x* ∈ C^# such that ⟨y_0, x*⟩ ≥ ⟨y, x*⟩
for every y ∈ Y; we denote the set of properly maximal points of Y w.r.t. C by PrMax(Y; C). Another notion of proper maximality is due to Henig [267]; namely, y_0 ∈ Y is Henig proper maximal w.r.t. C if there exists a proper convex cone K ⊆ X such that C \ {0} ⊆ int K and y_0 ∈ Max(Y; K). We denote by HMax(Y; C) the set of Henig proper maximal points of Y w.r.t. C. We have that

PrMax(Y; C) ⊆ HMax(Y; C) ⊆ Max(Y; C).   (3.22)

For the first inclusion in (3.22) take y_0 ∈ PrMax(Y; C) and K := {x ∈ X | ⟨x, x*⟩ ≥ 0}, where x* ∈ C^# is the element in the definition of proper maximality, while the second inclusion in (3.22) is obvious. Note that when one of the first two sets in (3.22) is nonempty, then C^# ≠ ∅, and so C is a pointed cone.

Lemma 3.3.48. Let (X, τ) be an H.l.c.s., C ⊆ X a based closed convex cone, σ a locally convex topology on X such that w ⊆ σ ⊆ τ, and ∅ ≠ Y ⊆ X. Then the following assertions hold:
(i) Assume that Y is w-compact; then PrMax(Y; C) ≠ ∅.
(ii) Assume that Y − C is convex; then PrMax(Y; C) = HMax(Y; C).

Proof. (i) Take x* ∈ C^#; because Y is w-compact and x* is (w-)continuous, there exists y_0 ∈ Y such that ⟨y_0, x*⟩ ≥ ⟨y, x*⟩ for every y ∈ Y, and so y_0 ∈ PrMax(Y; C).

(ii) Take y_0 ∈ HMax(Y; C); then there exists a proper convex cone K ⊆ X such that C \ {0} ⊆ int K and (Y − y_0) ∩ K ⊆ (−K). If z ∈ (Y − C − y_0) ∩ int K, then z = y − c − y_0 with y ∈ Y and c ∈ C, whence y − y_0 = c + z ∈ C + int K ⊆ K + int K = int K, and so y − y_0 ∈ (Y − y_0) ∩ int K ⊆ (−K) ∩ int K = ∅. Hence (Y − C − y_0) ∩ int K = ∅. Since Y − C − y_0 and int K are convex and nonempty, applying Theorem 2.2.9, we get x* ∈ X* \ {0} and γ ∈ R such that

⟨y − y_0 − c, x*⟩ ≤ γ ≤ ⟨x, x*⟩  ∀ y ∈ Y, ∀ c ∈ C, ∀ x ∈ K.

In particular, taking y := y_0, c := x := 0, we get γ = 0, whence ⟨y, x*⟩ ≤ ⟨y_0, x*⟩ for every y ∈ Y, and x* ∈ K^+ (⊆ C^+). Assuming that ⟨c, x*⟩ = 0 for some c ∈ C \ {0} (⊆ int K), we get the contradiction x* = 0; hence x* ∈ C^#, and so y_0 ∈ PrMax(Y; C). Therefore, HMax(Y; C) ⊆ PrMax(Y; C), and so the conclusion follows using (3.22). □

The elements of PrMax(Y; C) are obtained by linear scalarization (compare with Section 3.1.1), and so they can be computed (quite) easily. So, for practical purposes, it is important to have the possibility to approximate each element in Max(Y; C) by elements of PrMax(Y; C). The first result in this direction, abbreviated the ABB theorem, was obtained by Arrow, Barankin, and Blackwell [23] for X = R^n, C = R^n_+, and Y a compact convex set. This result was later extended in several directions. One of these extensions was realized
by Henig [267, Th. 5.1], who proved that Max(Y; C) ⊆ cl(HMax(Y; C)) when X = R^n, C is a closed pointed convex cone, and there exists A ⊆ C such that 0 ∈ A, Y − A is closed, and (Y − A)_∞ ∩ C = {0}. In the sequel we present some extensions of this result to the case in which (X, τ) is an H.l.c.s. The next auxiliary results will be used in the proofs of Theorems 3.3.51 and 3.3.56.

Lemma 3.3.49. Let τ, σ be Hausdorff linear topologies on X such that σ ⊆ τ, let B ⊆ X be a nonempty σ-closed set with 0 ∉ B, and set C := R₊B. Consider ∅ ≠ Y ⊆ X and the nets (t_j)_{j∈J} ⊆ R₊, (y_j)_{j∈J} ⊆ Y, (b_j)_{j∈J} ⊆ B, (u_j)_{j∈J}, (u′_j)_{j∈J} ⊆ X such that

t_j → t ∈ [0, ∞], u_j →_τ 0, u′_j →_τ 0, and y_j = t_j(b_j + u_j) + u′_j ∀ j ∈ J.   (3.23)

The following assertions hold:
(i) (i_1) If (y_j)_{j∈J} is σ-bounded, then t ∈ R₊. (i_2) If y_j →_σ y ∈ X, then t ∈ R₊ and y ∈ cl_σ C. (i_3) If y_j →_σ 0, then t = 0.
(ii) (ii_1) If b_j →_σ b and t ∈ R₊, then y_j →_σ tb ∈ C ∩ cl_σ Y. (ii_2) If (b_j)_{j∈J} is σ-bounded (resp. τ-bounded) and t = 0, then y_j →_σ 0 (resp. y_j →_τ 0).
(iii) Assume that Y ∩ C = {0}. (iii_1) If b_j →_σ b and Y is a σ-bounded, σ-closed set, then t = 0, and so y_j →_σ 0. (iii_2) If y_j →_σ y ∈ Y, then t = 0; moreover, if C is σ-closed, then y_j →_σ 0.
(iv) Additionally to (iii), assume that B is τ-bounded; then y_j →_τ 0.

Proof. Note that (u_j)_{j∈J} and (u′_j)_{j∈J} σ-converge to 0 because σ ⊆ τ. Moreover, when t_j → t ∈ ]0, ∞], we (may and do) assume that t_j > 0 for j ∈ J because t_j > 0 for large j.

(i_1) & (i_2) Assume that t = ∞. From (3.23), we have that

b_j = t_j^{-1} y_j − t_j^{-1} u′_j − u_j   (3.24)

for large j. Because t_j^{-1} → 0, in both cases we obtain that b_j →_σ 0, whence 0 ∈ cl_σ B = B, a contradiction. Hence t ∈ R₊. In the case (i_2), using again (3.23) and the fact that t ∈ R₊, from

C ∋ t_j b_j = y_j − t_j u_j − u′_j →_σ y,   (3.25)

we get y ∈ cl_σ C.
(i_3) Assume that y_j →_σ 0; if t ≠ 0 (and so t ∈ P by (i_2)), then b_j →_σ 0 by (3.24), contradicting the fact that 0 ∉ B. Hence t = 0.
(ii) The conclusions of both assertions are obtained immediately using the continuity of the operations in topological vector spaces.
(iii) By (i), in both cases t ∈ R₊ because (y_j)_{j∈J} is bounded or y_j →_σ y.


(iii_1) Assume that b_j →_σ b and Y is a σ-bounded, σ-closed set. By (i_1), t ∈ R₊; then, by (ii_1), y_j →_σ tb ∈ C ∩ cl_σ Y = C ∩ Y = {0}, whence t = 0, and so y_j →_σ 0.
(iii_2) Assume that y_j →_σ y ∈ Y. By (i_2), t ∈ R₊ and y ∈ Y ∩ cl_σ C (because y_j →_σ y ∈ Y). Assume that t ∈ P. Then, using (3.24), we get b_j →_σ t^{-1}y =: b (∈ B ⊆ C), whence the contradiction tb ∈ Y ∩ C = {0} with tb ≠ 0. Hence t = 0; in particular, y = 0 when C is σ-closed because then y ∈ Y ∩ C = {0}, and so y_j →_σ 0.
(iv) Because B is bounded and t_j → 0, one obtains that y_j →_τ 0 using the expression of y_j in (3.23). □

The following lemma for K = X is essentially [434, Lem. 2.2] (see [518, Th. 3.1], [86, Th. 1.1] and [224, Lem. 2.1] for X a n.v.s.).

Lemma 3.3.50. Let (X, τ) be an H.l.c.s., let C ⊆ X be a based convex cone with base B, and take V_0 a balanced closed convex neighborhood of 0 ∈ X such that B ∩ (2V_0) = ∅. Consider V a neighborhood base of 0 formed by balanced closed convex sets included in V_0, and K ⊆ X a convex cone such that C \ {0} ⊆ int K. Set B_V := cl[(B + V) ∩ int K] and C_V := R₊B_V for V ∈ V. Then:
(i) C_V is a pointed convex cone with C \ {0} ⊆ int C_V ⊆ K, and B_V is a base of C_V for every V ∈ V.
(ii) If U, V ∈ V and U ⊆ V, then C ⊆ C_U ⊆ C_V.
(iii) C ⊆ ∩{C_V | V ∈ V} ⊆ cl C, and so C = ∩{C_V | V ∈ V} when C is closed; in particular, C = ∩{C_V | V ∈ V} if B is closed and bounded.
(iv) Moreover, assume that V_0 is bounded and C is closed; then C_V is closed for every V ∈ V.

Proof. (i) Consider V ∈ V. Because B_V [⊆ cl(int K) = K] is a closed convex set and B_V ⊆ cl(B + V) ⊆ B + V + V_0 ⊆ B + 2V_0 = B − 2V_0 ∌ 0, C_V is clearly a convex cone having B_V as a base; moreover, C \ {0} = PB ⊆ int K ∩ [P(B + V)] = ∪_{t∈P} t[(B + V) ∩ int K] ⊆ int C_V.
(ii) This assertion is obvious.
(iii) The first inclusion is obvious. Take x ∈ ∩{C_V | V ∈ V} \ {0}; then for each V ∈ V there exist t_V ∈ P, b_V ∈ B and u_V, u′_V ∈ V such that b_V + u_V ∈ int K and y_V := x = t_V(b_V + u_V) + u′_V. Clearly, the net (t_V)_{V∈V} has a subnet (t_{ψ(j)})_{j∈J} converging to t ∈ [0, ∞]. Applying Lemma 3.3.49(i_2) for σ := τ, Y := {0, x} and the corresponding subnets of (y_V), (b_V), (u_V) and (u′_V), we get x ∈ cl C. Therefore, both inclusions hold. When B is closed and bounded, C (= R₊B) is closed by Corollary 2.1.44(i).


(iv) Because V_0 is bounded, (X, τ) is normable; more precisely, p_{V_0} is a norm on X and τ is the topology determined by p_{V_0}. Consider V ∈ V and 0 ≠ x ∈ cl C_V. Then there exists (x_n)_{n≥1} ⊂ C_V \ {0} such that x_n → x; hence, for each n ≥ 1 there exist t_n ∈ P, b_n ∈ B, v_n ∈ V (⊆ V_0) and u_n ∈ X such that b_n + v_n ∈ int K, u_n → 0 and x_n = t_n(b_n + v_n + u_n). Passing to a subsequence if necessary, there exists t ∈ [0, ∞] such that t_n → t. If t = 0, then C ∋ t_n b_n = x_n − t_n v_n − t_n u_n → x, and so x ∈ cl C = C ⊆ C_V; if t ∈ P, then (B + V) ∩ int K ∋ b_n + v_n = t_n^{-1} x_n − u_n → t^{-1} x, and so t^{-1} x ∈ cl[(B + V) ∩ int K] = B_V, whence x ∈ C_V; if t = ∞, then t_n^{-1} → 0, and so (B + V) ∩ int K ∋ b_n + v_n = t_n^{-1} x_n − u_n → 0, whence the contradiction 0 ∈ B_V. Hence C_V is closed. □

The cone C_V introduced in Lemma 3.3.50 is called a Henig dilating cone.

Theorem 3.3.51. Let (X, τ) be an H.l.c.s., let C ⊆ X be a based closed convex cone and let σ be a locally convex topology on X such that w ⊆ σ ⊆ τ, w being the weak topology on X. Consider the nonempty σ-closed and σ-asymptotically compact subset Y of X. Assume that there exists a proper closed convex cone K ⊆ X such that

Y_∞^σ ∩ K = {0} and C \ {0} ⊆ int K.   (3.26)

Then

Max(Y; C) ⊆ cl_σ(HMax(Y; C)).   (3.27)

Proof. Consider y ∈ Max(Y; C); we must show that y ∈ cl_σ(HMax(Y; C)). Replacing Y by Y − y if necessary, we assume that y = 0; hence Y ∩ C = {0}. Because w ⊆ σ ⊆ τ, the closures w.r.t. τ, w and σ of each convex subset of X coincide; in particular, K and C are σ-closed convex cones. So, because Y is σ-closed, σ-a.c. and Y_∞^σ ∩ K = {0}, we obtain that Y ∩ K is σ-compact using Proposition 3.3.25(i). Because C is closed and based, there exists a closed convex set B such that C = R₊B and 0 ∉ B.

Consider V_0, V, B_V and C_V as in the statement of Lemma 3.3.50; without loss of generality we assume that V_0 ∈ V. Let us set K_V := cl C_V (⊆ K) and Y_V := Y ∩ K_V (⊆ Y ∩ K) for V ∈ V; hence Y_V is σ-compact as a σ-closed subset of a σ-compact set. Using Proposition 3.3.22 we get the existence of y_V ∈ Max(Y_V; K_V); moreover, by Proposition 3.3.16(i), we have that y_V ∈ Max(Y; K_V), and so y_V ∈ HMax(Y; C) because C \ {0} ⊆ int K_V. Since cl(R₊ cl A) = cl(R₊A) for ∅ ≠ A ⊆ X and y_V ∈ K_V = cl C_V, it follows that y_V ∈ cl[R₊(B + V)] ⊆ [R₊(B + V)] + V, whence y_V = t_V b_V + t_V u_V + u′_V for some t_V ∈ R₊, b_V ∈ B and u_V, u′_V ∈ V. Since V ⊆ V_0 for any V ∈ V, (y_V)_{V∈V} ⊆ Y_0 := Y_{V_0} ⊆ Y.

Because Y_0 is σ-compact, the net (y_V)_{V∈V} has a subnet (y_{ψ(j)})_{j∈J} σ-converging to y ∈ Y_0; passing to a subnet if necessary, we (may) assume that (t_{ψ(j)})_{j∈J} converges to t ∈ [0, ∞]. Clearly, Y_0 ∩ C = {0}; having in view the (σ-)closedness of C, one gets (y =) 0 ∈ cl_σ(HMax(Y; C)) using Lemma 3.3.49(iii_2). □

Remark 3.3.52. Notice that, under the conditions on X, C and σ from Theorem 3.3.51, there exists a proper closed convex cone K ⊆ X verifying (3.26) whenever Y is nonempty and σ-compact. Indeed, because C is based, there exists x* ∈ C^#. Taking K := [x* ≥ 0], one has C \ {0} ⊆ [x* > 0] = int K and Y_∞^σ ∩ K = {0} ∩ K = {0}.

The version of Theorem 3.3.51 for normed vector spaces is the following result.

Corollary 3.3.53. Let (X, ‖·‖) be a normed vector space, C ⊆ X a based closed convex cone, and σ a locally convex topology on X such that w ⊆ σ ⊆ τ := τ_{‖·‖}. Assume that Y is a nonempty σ-closed and σ-asymptotically compact subset of X with Y_∞^σ ∩ C = {0}. Then (3.27) holds.

Proof. The conclusion follows immediately from Theorem 3.3.51 using the next result. □

Proposition 3.3.54. Under the hypothesis of Corollary 3.3.53, there exists a proper closed convex cone K ⊆ X such that (3.26) is verified.

Proof. Consider a closed convex set B such that C = R₊B and 0 ∉ B; this is possible because C is based and closed. Because B, C are closed and convex, they are also σ-closed, and so B_∞^σ = B_∞ ⊆ C; hence B_∞^σ ∩ Y_∞^σ = {0}. Because Y is σ-a.c., so is Y_∞^σ (by [579, Prop. 2.2(i)]). Using [579, Cor. 3.12], we obtain that B′ := B − Y_∞^σ is σ-closed, and so B′ is (τ-)closed. Since Y_∞^σ ∩ C = {0}, we have that Y_∞^σ ∩ B = ∅, whence 0 ∉ B′. It follows that there exists δ ∈ P such that B′ ∩ (2V_0) = ∅, where V_0 := {x ∈ X | ‖x‖ ≤ δ}. Consider V := {εV_0 | ε ∈ ]0, δ]}. Then (X, τ), C, B, V_0, V and K := X verify the hypotheses of Lemma 3.3.50, including the fact that V_0 is bounded and C is closed; applying assertions (i) and (iv) of Lemma 3.3.50 one obtains that C_V := R₊ · cl(B + V) is a proper closed convex cone and C \ {0} ⊆ int C_V for every V ∈ V. Moreover, for V ∈ V one has

∅ = Y_∞^σ ∩ (B + 2V_0) ⊇ Y_∞^σ ∩ (B + V + V_0) ⊇ Y_∞^σ ∩ cl(B + V),

whence Y_∞^σ ∩ C_V = {0}; hence K := C_{V_0} verifies condition (3.26). □
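A concrete finite-dimensional illustration of the density property (3.27) may help; the following sketch is ours, with the set Y, the scalarizing functionals and all identifiers being assumptions of the example, not the book's notation. For Y the closed unit disk in R² and C = R²₊, the efficient set is the north-east quarter circle; its endpoint (1, 0) is not properly maximal, yet it is a limit of properly maximal points obtained by maximizing strictly positive linear functionals.

```python
import numpy as np

# Linear scalarization over the unit disk: for x* with strictly positive
# coordinates (so x* in C# for C = R^2_+), argmax_{y in Y} <y, x*> equals
# x*/|x*|, a properly maximal point. Letting x* tend to (1, 0) drives the
# maximizer to the efficient endpoint (1, 0) in Max(Y;C) \ PrMax(Y;C).
for eps in [1e-1, 1e-2, 1e-4]:
    xstar = np.array([1.0, eps])          # element of C#
    y = xstar / np.linalg.norm(xstar)     # the unique maximizer on the disk
    print(eps, y, np.linalg.norm(y - np.array([1.0, 0.0])))
# The printed distances to (1,0) shrink: Max(Y;C) ⊆ cl(PrMax(Y;C)) here.
```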



Remark 3.3.55. Borwein and Zhuang [86] say that y_0 is a super efficient point of Y w.r.t. C, written y_0 ∈ SE(Y; C), if (U_X + C) ∩ R₊(Y − y_0) ⊆ μU_X for some μ > 0. Under the hypothesis that C has a bounded base B, in [86, Prop. 2.5] one proves that y_0 ∈ SE(Y; C) if and only if y_0 ∈ Max(Y; C_ε) for some 0 < ε < d(0, B), where C_ε is introduced in the proof of Proposition 3.3.54.

Observe that the net (y_{ψ(j)})_{j∈J} τ-converges to y (= 0) in (the proof of) Theorem 3.3.51 and Corollary 3.3.53 if the families of weak and τ neighborhoods of y in Y coincide (that is, y is a continuity point of Y); this happens, for example, if Y − y is contained in a supernormal cone. The next result provides some sufficient conditions for the density of the set of Henig efficient points of Y ⊆ X in the set of its efficient points with respect to the initial topology τ of X.

Theorem 3.3.56. Let (X, τ) be an H.l.c.s. and let C ⊆ X be a based closed convex cone. Consider σ a locally convex topology on X such that w ⊆ σ ⊆ τ, and Y a nonempty σ-closed subset of X. Then

Max(Y; C) ⊆ cl_τ(HMax(Y; C))   (3.28)

provided one of the following two conditions is satisfied:
(i) there exists a proper closed convex cone K ⊆ X such that (3.26) is verified and either a) any element of Max(Y; C) is a continuity point of Y, or b) C is well-based and Y is σ-asymptotically compact;
(ii) X is normable, C has a σ-compact base, and either a) any bounded σ-closed subset of Y is σ-complete and Y ⊆ A − C for some bounded set A ⊆ X, or b) (X, τ) is complete and Y_∞^σ ∩ C = {0}.

Proof. Consider a closed convex set B ⊆ X such that 0 ∉ B, C = R₊B and, moreover, B is bounded in case (i) b) and B is σ-compact in case (ii). We follow the lines of the proofs of Theorem 3.3.51 or Corollary 3.3.53; we point out only the respective differences. Let y ∈ Max(Y; C). As usual, we (may) assume that y = 0.

(i) a) As observed before the statement of the theorem, the net (y_{ψ(j)})_{j∈J} σ-converges to 0 in (the proof of) Theorem 3.3.51, and so 0 ∈ cl(HMax(Y; C)).
b) Because B is bounded, applying Lemma 3.3.49(iv) via (iii_2) in the last line of the proof of Theorem 3.3.51, we have that y_{ψ(j)} →_τ 0, and so 0 ∈ cl(HMax(Y; C)).

(ii) Let ‖·‖ be a norm on X inducing the topology τ. Because B is σ-compact, B is also (σ-)bounded; moreover, Y ∩ C = {0} because 0 ∈ Max(Y; C).
a) Because A is (σ-)bounded, Y_∞^σ ⊆ (A − C)_∞^σ = −C, and so B − Y_∞^σ ⊆ B + C = [1, ∞[·B. Since [1, ∞[·B is closed and does not contain 0, there exists δ > 0 such that ([1, ∞[·B) ∩ (2V_0) = ∅, where V_0 := δU_X; consequently, (B − Y_∞^σ) ∩ (2V_0) = ∅.

Set V := {εV_0 | ε ∈ ]0, 1]} and take V ∈ V. Because V is convex, bounded, and closed (whence V is also σ-closed), and B is σ-compact, B + V is convex, bounded and closed; as in the proof of Proposition 3.3.54, one obtains that C_V := R₊ · cl(B + V) = R₊(B + V) is a well-based closed convex cone such that C \ {0} ⊆ int C_V and Y_∞^σ ∩ C_V = {0}. Taking A_0 := A ∪ {0}, one has

Y_V := Y ∩ C_V ⊆ (A − C) ∩ C_V ⊆ (A_0 − C_V) ∩ (A_0 + C_V) = [A_0]_{C_V},


and so Y_V is bounded because A_0 is bounded and C_V is normal (being well-based; see Theorem 2.1.45). Hence Y_V (⊆ Y_0 := Y_{V_0} ⊆ Y) is σ-bounded and σ-closed, and so Y_V is σ-complete. Applying Remark 3.3.17 and Proposition 3.3.18(iv) for (X, σ), there exists y_V ∈ Max(Y_V; C_V) (⊆ Y_0 ∩ C_V ∩ HMax(Y; C)), and so y_V = t_V b_V + t_V u_V for some t_V ∈ R₊, b_V ∈ B and u_V ∈ V. Clearly, Y_0 ∩ C = {0}.

Because B is σ-compact, the net (b_V)_{V∈V} has a subnet (b_{ψ(j)})_{j∈J} σ-converging to b ∈ B; we (may) assume that t_{ψ(j)} → t ∈ [0, ∞]. Using successively assertions (i_1), (ii_1), (iii_1) and (iv) of Lemma 3.3.49 for (X, τ), σ, Y_0 and B, we get t ∈ R₊, y_{ψ(j)} →_σ tb ∈ C ∩ cl_σ Y_0 = C ∩ Y_0, t = 0 and y_{ψ(j)} →_τ 0, respectively. Therefore, 0 ∈ cl(HMax(Y; C)).

b) Because B is closed and 0 ∉ B, there exists δ ∈ P such that B ∩ (2V_0) = ∅, where V_0 := δU_X. Set V := {εV_0 | ε ∈ ]0, 1]} and, for V ∈ V, C_V := R₊(B + V) and Y_V := Y ∩ C_V. It follows that C_V is a well-based closed convex cone such that C \ {0} ⊆ int C_V and Y_V is σ-closed, and so Y_V is (τ-)closed; having in view that (X, τ) is complete and Y_V is (τ-)closed, Y_V is (τ-)complete. Consequently, if Y_V is also bounded, invoking the same results as above, one gets

∅ ≠ Max(Y_V; C_V) ⊆ Y_V ∩ HMax(Y; C).   (3.29)

We claim that for every U ∈ V there exists V ∈ V such that Y_V ⊆ U. Hence, if our claim is true, one has that (diam Y_V)_{V∈V} → 0, and so one gets 0 ∈ cl(HMax(Y; C)) by using (3.29).

Assume that the claim is not true. Then there exists ε_0 ∈ ]0, 1] such that, for U := ε_0 V_0 ∈ V, Y_V ⊄ U for every V ∈ V; hence, for V ∈ V there exists y_V ∈ Y ∩ C_V \ U, and so y_V = t_V b_V + t_V u_V for some t_V ∈ R₊, b_V ∈ B and u_V ∈ V. It follows that

ε_0 δ ≤ ‖y_V‖ ≤ t_V(‖b_V‖ + ‖u_V‖) ≤ t_V(μ + ε) ≤ t_V(μ + 1),

where V = εV_0 with ε ∈ ]0, 1] and μ := sup{‖b‖ | b ∈ B} ∈ P; hence t_V ≥ ε_0 δ/(μ + 1). Because B is σ-compact, the net (b_V) has a subnet (b_{ψ(j)})_{j∈J} σ-converging to b ∈ B; we (may) assume that t_{ψ(j)} → t ∈ ]0, ∞]. Assuming that t = ∞, we get t_{ψ(j)}^{-1} y_{ψ(j)} = b_{ψ(j)} + u_{ψ(j)} →_σ b, and so Y_∞^σ ∩ B ≠ ∅, contradicting the hypothesis Y_∞^σ ∩ C = {0}. Hence t ∈ P, and so y_{ψ(j)} →_σ tb ∈ Y ∩ C = {0}, which is impossible because tb ≠ 0. This contradiction proves that our claim is true. □
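The dilated cones C_V = R₊(B + V) do the work in the two preceding proofs; the following crude R² sketch, ours and purely illustrative (the membership test scans a grid and is only approximate), shows how C_V strictly enlarges C = R²₊ while keeping C \ {0} in its interior.

```python
import numpy as np

# Henig dilating cone in R^2: C = R^2_+ has the bounded closed base
# B = conv{(1,0),(0,1)}; for V = eps * closed unit ball, C_V = R_+ (B + V).
EPS = 0.2

def in_CV(x, eps=EPS, grid=400):
    # x in C_V iff x = t*(b + v) with t >= 0, b in B, v in V, i.e.
    # dist(x/t, B) <= eps for some t > 0; we scan t over a crude grid.
    x = np.asarray(x, float)
    if np.allclose(x, 0.0):
        return True                      # 0 belongs to every cone
    s = np.linspace(0.0, 1.0, grid)
    B = np.stack([s, 1.0 - s], axis=1)   # discretized base segment
    for t in np.linspace(1e-3, 10.0 * np.linalg.norm(x) + 1.0, grid):
        if np.min(np.linalg.norm(B - x / t, axis=1)) <= eps:
            return True
    return False

print(in_CV((1.0, 0.0)))    # True: a boundary ray of C is interior to C_V
print(in_CV((1.0, -0.1)))   # True: C_V strictly enlarges C
print(in_CV((-1.0, -1.0)))  # False: far from every dilation of the base
```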

σ hypothesis Y∞ ∩ C = {0}. Hence t ∈ P, and so yψ(j) → tb ∈ Y ∩ C = {0}. This contradiction proves that our claim is true. 

Corollary 3.3.57. Let (X, τ) be an H.l.c.s., let C ⊆ X be a based closed convex cone and let Y ⊆ X be a nonempty closed convex set. Consider σ a locally convex topology on X such that w ⊆ σ ⊆ τ. Then

Max(Y; C) ⊆ cl_τ(PrMax(Y; C))   (3.30)

provided one of the following conditions is satisfied:
(i) there exists a proper closed convex cone K ⊆ X such that Y_∞ ∩ K = {0} and C \ {0} ⊆ int K, and either a) Y is locally compact, or b) C is well-based and Y is locally σ-compact;
(ii) X is normable, C has a w-compact base, and either a) the bounded closed subsets of Y are σ-complete, or b) (X, τ) is complete.

Proof. Note that PrMax(Y; C) = HMax(Y; C) in the present case because Y is convex; see Lemma 3.3.48(ii). Moreover, because Y is closed and convex, Y is σ-closed and Y_∞^σ = Y_∞ [= ⋂_{t∈P} t(Y − y) for some (any) y ∈ Y]; furthermore, because Y is convex and σ-closed, Y is locally σ-compact if and only if Y is σ-a.c. by [579, Prop. 2.2(ii)].

(i) a) Because Y is a.c., the conclusion follows using Theorem 3.3.51 for σ := τ. b) Because Y is (also) σ-a.c., the conclusion follows using Theorem 3.3.56(i) b).

(ii) Let y ∈ Max(Y; C); as usual, we (may) take y = 0, and so Y ∩ C = {0}, whence {0} = (Y ∩ C)_∞ = Y_∞ ∩ C. We have to prove that 0 ∈ cl_τ(PrMax(Y; C)).
b) Because Y_∞^w ∩ C = Y_∞ ∩ C = {0}, one gets immediately that 0 ∈ cl_τ(PrMax(Y; C)) using Theorem 3.3.56(ii) b) for σ := w.
a) Let B be a w-compact base of C and take V_0 a bounded closed convex neighborhood of 0. Then Y_0 := Y ∩ V_0 is a bounded closed convex subset of Y, and so Y_0 is also σ-complete. Of course, Y_0 ⊆ Y_0 − C. We claim that 0 ∈ cl_τ(HMax(Y_0; C)). For proving the claim, we follow the same procedure as in the proof of Theorem 3.3.56(ii) a). So, for V_0 as above, we consider the same sets V, Y_V, C_V; consequently, Y_V is convex and closed for every V ∈ V; in particular, Y_V is σ-closed, and so Y_V is σ-complete. Hence there exists y_V ∈ Max(Y_V; C_V) [⊆ Y_0 ∩ C_V ∩ HMax(Y_0; C)]. Continuing exactly as in the proof of Theorem 3.3.56 with σ replaced by w, we obtain that the claim is true, that is, 0 ∈ cl_τ(HMax(Y_0; C)), whence 0 ∈ cl_τ(PrMax(Y_0; C)) because Y_0 is convex.

Since (X, τ) is normable, there exists (y_n)_{n≥1} ⊆ PrMax(Y_0; C) such that y_n →_τ 0. Then for n ≥ 1 there exists x*_n ∈ C^# such that y_n maximizes x*_n on Y_0. Fix n ≥ 1 and take an arbitrary y ∈ Y; then there exists t ∈ ]0, 1[ such that (1 − t)y_n + ty ∈ Y_0, and so ⟨(1 − t)y_n + ty, x*_n⟩ ≤ ⟨y_n, x*_n⟩, whence ⟨y, x*_n⟩ ≤ ⟨y_n, x*_n⟩. Hence y_n ∈ PrMax(Y; C) for every n ≥ 1, and so y = 0 ∈ cl_τ(PrMax(Y; C)). □

A stronger version of Corollary 3.3.57(i) b) is the following result.

Theorem 3.3.58. Let (X, τ) be an H.l.c.s., let C := R₊B with 0 ∉ B ⊆ X a nonempty bounded closed convex set, and let Y ⊆ X be a nonempty, convex, and locally σ-compact set, where σ is a locally convex topology on X such that w ⊆ σ ⊆ τ. Then Max(Y; C) ⊆ cl_τ(PbMax(Y; C)), where

PbMax(Y; C) := {y ∈ Y | ∃ x* ∈ C^# : inf x*(B) > 0, sup x*(Y) = ⟨y, x*⟩}.


Proof. Fix y ∈ Max(Y; C); doing a translation, we (may) assume that y = 0, and so Y ∩ C = {0}. Set N0 := {V ∈ NX^c | V = cl V} (the closed convex neighborhoods of 0) and fix U ∈ N0. Because Y is locally σ-compact, there exists a (σ-)closed convex balanced neighborhood W of 0 such that Y ∩ W is σ-compact. Moreover, because 0 ∉ B = cl B, there exists U′ ∈ N0 such that B ∩ U′ = ∅; then U0 := U ∩ U′ ∩ W ∈ N0 and Y0 := Y ∩ U0 is σ-compact. Because B is bounded, there exists β > 0 such that βB ⊆ ½U0. Because (X, σ)* = X*, Y0 ∩ (βB) = ∅, Y0 and B are convex, Y0 is σ-compact and B is (σ-)closed, using a separation theorem, there exist α ∈ R, u* ∈ X*, and ȳ ∈ Y0 such that

⟨y, u*⟩ ≤ ⟨ȳ, u*⟩ < α ≤ ⟨βb, u*⟩ ∀ y ∈ Y0, b ∈ B. (3.31)

Hence α > 0 because 0 ∈ Y0. We claim that (Y − ȳ) ∩ [P(βB + V)] = ∅ for some V ∈ N0 with V ⊆ ½U0. Otherwise, for every V ∈ N0 with V ⊆ ½U0 there exist tV > 0, bV ∈ B and vV ∈ V such that yV := ȳ + tV(βbV + vV) ∈ Y. Hence there exist nets (tj)j∈J ⊆ P, (bj)j∈J ⊆ B, and (vj)j∈J ⊆ ½U0 such that vj → 0 and yj := ȳ + tj(βbj + vj) ∈ Y for j ∈ J. It follows that

Y ∋ (1/(1 + tj)) yj = (1/(1 + tj)) ȳ + (tj/(1 + tj)) (βbj + vj) ∈ U0 ∀ j ∈ J.

Indeed, the first inclusion is true because Y is convex and 0 ∈ Y, while the second is true because U0 is convex, ȳ ∈ U0 and βbj + vj ∈ βB + ½U0 ⊆ ½U0 + ½U0 = U0; consequently, (1/(1 + tj)) yj ∈ Y0. Using (3.31), one gets

(1/(1 + tj)) ⟨ȳ, u*⟩ + (tj/(1 + tj)) (α + ⟨vj, u*⟩) ≤ ⟨(1/(1 + tj)) ȳ + (tj/(1 + tj)) (βbj + vj), u*⟩ ≤ ⟨ȳ, u*⟩,

whence α + ⟨vj, u*⟩ ≤ ⟨ȳ, u*⟩ for every j ∈ J. Taking the limit, one gets the contradiction α ≤ ⟨ȳ, u*⟩. Hence the claim is true, and so there exists V ∈ N0 such that (Y − ȳ) ∩ K = ∅, where K := P(βB + V). Because Y − ȳ and K are convex and int K ≠ ∅, there exist x* ∈ X* \ {0} and γ ∈ R such that

⟨y − ȳ, x*⟩ ≤ γ ≤ ⟨t(v + βb), x*⟩ ∀ y ∈ Y, b ∈ B, v ∈ V, t ∈ P.

Since ȳ ∈ Y0 ⊆ Y and t ∈ P, it follows that γ = 0, and so

sup{⟨y, x*⟩ | y ∈ Y} = ⟨ȳ, x*⟩ and 0 < sup{⟨v, x*⟩ | v ∈ V} ≤ β inf{⟨b, x*⟩ | b ∈ B};

consequently, yU := ȳ ∈ U ∩ PbMax(Y; C). Since N0 is a neighborhood base of 0 and U ∈ N0 is arbitrary, it follows that (y =) 0 ∈ clτ(PbMax(Y; C)). Therefore, the conclusion holds. □
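In finite dimensions the set PbMax(Y; C) of Theorem 3.3.58 is easy to explore numerically. The sketch below is an illustration only — the finite set Y, the cone C = R²₊ with base B = {b ∈ R²₊ | b1 + b2 = 1}, and the weight grid are our own assumptions, not data from the text. Every x* = (t, 1 − t) with 0 < t < 1 belongs to C# and satisfies inf x*(B) = min(t, 1 − t) > 0, so collecting the maximizers over such weights produces points of PbMax(Y; C).

import numpy as np

# Illustrative data (an assumption): a finite set Y in R^2, C = R^2_+.
Y = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7], [0.2, 0.2]])

def pb_max(Y, weights):
    """Collect points of Y maximizing <., x*> over the given functionals x*."""
    found = set()
    for w in weights:
        values = Y @ w
        found.update(map(tuple, Y[values >= values.max() - 1e-12]))
    return found

# x* = (t, 1-t), 0 < t < 1: strictly positive on C minus {0}, inf x*(B) > 0.
weights = [np.array([t, 1.0 - t]) for t in np.linspace(0.05, 0.95, 19)]
print(pb_max(Y, weights))  # three maximal points; (0.2, 0.2) never occurs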


Remark 3.3.59. In the statements of Corollary 3.3.53 and Theorem 3.3.56, besides Y we can consider another set Z such that Y ⊆ Z ⊆ Y − C and ask that Z satisfy the conditions imposed on Y, the conclusion remaining the same for the set Y.

Density results like those proved in Corollary 3.3.53 and Theorem 3.3.56(i) can be used to establish the connectedness of the set Max(Y; C). Connectedness of an efficient point set provides the possibility of moving continuously from one optimal solution to any other along optimal alternatives only.

Lemma 3.3.60. Let (X, τ) be an H.l.c.s., C ⊆ X a based closed convex cone and σ a locally convex topology on X such that w ⊆ σ ⊆ τ. Consider a nonempty σ-closed and σ-a.c. set Y such that Y − C is convex and Y∞^σ ⊆ (−C). Then, for every x* ∈ C#, the set

ΓY(x*) := {y ∈ Y | ⟨y, x*⟩ ≥ ⟨y′, x*⟩ ∀ y′ ∈ Y} (3.32)

is nonempty, convex, and σ-compact.

Proof. Consider x* ∈ C# and set f := −x* + ιY. Because x* is (σ-)continuous and Y is σ-closed, f is σ-l.s.c.; moreover, dom f = Y, and so dom f is σ-a.c. Fix some λ ≥ infY f such that [f ≤ λ] ≠ ∅; then [f ≤ λ] = {y ∈ Y | ⟨y, x*⟩ ≥ −λ}, whence [f ≤ λ] is σ-a.c. and σ-closed. Take v ∈ [f ≤ λ]∞^σ (⊆ Y∞^σ). Then there exist nets (ti)i∈I ⊆ P and (yi)i∈I ⊆ [f ≤ λ] (⊆ Y) such that ti → 0 and ti yi →σ v; hence v ∈ Y∞^σ ⊆ (−C). Moreover, from ⟨yi, x*⟩ ≥ −λ for i ∈ I we get −λti ≤ ⟨ti yi, x*⟩ → ⟨v, x*⟩, and so ⟨−v, x*⟩ ≤ 0. Since −v ∈ C and x* ∈ C#, we obtain that v = 0, and so [f ≤ λ]∞^σ = {0}. Using [579, Prop. 2.6(i)], there exists ȳ ∈ X such that f(ȳ) ≤ f(y) for all y ∈ X, and so ȳ ∈ Y and ⟨ȳ, x*⟩ ≥ ⟨y, x*⟩ for all y ∈ Y, that is, ȳ ∈ ΓY(x*). Because x* ∈ C# we have that γ := supY x* = supY−C x*, and so ΓY(x*) = [f ≤ −γ] ≠ ∅; hence ΓY(x*) is σ-compact by [579, Prop. 2.3] because [f ≤ −γ] is σ-closed, σ-a.c. and [f ≤ −γ]∞^σ = {0}. If y ∈ Y, c ∈ C and γ = ⟨y − c, x*⟩ (≤ ⟨y, x*⟩ ≤ γ), then ⟨y, x*⟩ = γ and c = 0, whence ΓY(x*) = {y′ ∈ Y − C | ⟨y′, x*⟩ = γ}. Because Y − C is convex, so is ΓY(x*). Hence the conclusion holds. □

Theorem 3.3.61. Let (X, τ) be an H.l.c.s., C ⊆ X a based closed convex cone and σ a locally convex topology on X such that w ⊆ σ ⊆ τ. Consider Y a nonempty σ-compact set such that Y − C is convex. Then PrMax(Y; C), HMax(Y; C) and Max(Y; C) are σ-connected. Moreover, if any element of Max(Y; C) is a continuity point of Y, then PrMax(Y; C), HMax(Y; C), and Max(Y; C) are τ-connected.

Proof. Because Y is σ-compact, Y∞^σ = {0} ⊆ (−C); moreover, as observed in Remark 3.3.52, there exists a proper closed convex cone K verifying (3.26). Using Lemma 3.3.48, relation (3.22) and Theorem 3.3.51, one obtains


∅ ≠ PrMax(Y; C) = HMax(Y; C) ⊆ Max(Y; C) ⊆ clσ(PrMax(Y; C)). (3.33)

Consider the multifunction ΓY : C# ⇒ X, where ΓY(x*) is defined in (3.32). Applying Lemma 3.3.60, we obtain that ΓY(x*) is a nonempty convex σ-compact set for any x* ∈ C#. In order to get the σ-connectedness of PrMax(Y; C) [= ΓY(C#)], having in view Propositions 2.7.9(ii) and 2.7.8(i), it is sufficient to prove that ΓY is β-σ-compact at any x* ∈ C#, where β := β(X*, X) is the strong topology on X*. So, consider a net ((xi*, xi))i∈I ⊆ gr ΓY such that xi* →β x* ∈ C#; then ⟨xi, xi* − x*⟩ → 0 because (xi*)i∈I converges uniformly to x* on bounded sets, Y is bounded (being σ-compact), and (xi)i∈I ⊆ ΓY(C#) ⊆ Y. Because Y is σ-compact, there exists a subnet (xj′)j∈J := (xψ(j))j∈J of (xi)i∈I σ-converging to x ∈ Y; set (xj*′)j∈J := (xψ(j)*)j∈J. Then ⟨xj′, xj*′⟩ → ⟨x, x*⟩ because

⟨xj′, xj*′⟩ − ⟨x, x*⟩ = ⟨xj′, xj*′ − x*⟩ + ⟨xj′ − x, x*⟩ → 0.

Since xj′ ∈ ΓY(xj*′) we have that ⟨xj′, xj*′⟩ ≥ ⟨y, xj*′⟩ for all j ∈ J and y ∈ Y; passing to the limit we get ⟨x, x*⟩ ≥ ⟨y, x*⟩ for y ∈ Y, and so x ∈ ΓY(x*). Therefore ΓY is β-σ-compact on C#, and so ΓY(C#) = PrMax(Y; C) is σ-connected. Using (3.33) and the fact that A ⊆ B ⊆ cl A with A connected implies B connected — a well-known result in topological spaces — one obtains that Max(Y; C) is σ-connected, too.

Moreover, assume that any element of Max(Y; C) is a continuity point of Y. Because Y is also w-compact, we obtain that (3.28) holds by Theorem 3.3.56(i) a), and so (3.33) holds with σ := τ. Moreover, having ((xi*, xi))i∈I ⊆ gr ΓY such that xi* →β x* ∈ C#, we have seen above that (xi)i∈I (⊆ Y) has a subnet (xj′)j∈J σ-converging to x ∈ ΓY(x*) [⊆ Max(Y; C)], and so (xj′)j∈J →w x because w ⊆ σ; therefore (xj′)j∈J →τ x because x is a continuity point of Y. Therefore ΓY is β-τ-compact at x*. Invoking again Propositions 2.7.9(ii) and 2.7.8(i) for X endowed with its initial topology τ, one obtains that PrMax(Y; C), HMax(Y; C) and Max(Y; C) are τ-connected. □

Assertions (i) and (ii) of Lemma 3.3.48 are the versions for locally convex spaces of Proposition 3.4 and Theorem 3.2 from [393]. Theorem 3.3.51, Proposition 3.3.54, Theorem 3.3.56(i) b), and Corollary 3.3.57(i) are established by Newhall & Goodrich in [434], extending the corresponding results from [232, Ths. 4.1, 4.2] stated in normed vector spaces. Notice that the first ABB type result in which a linear topology σ with w ⊆ σ ⊆ τ is involved was obtained by Daniilidis in [150, Th. 2] for X a Banach space and Y a convex and σ-compact set. Other density results, for X a locally convex space and Y a convex w- or τ-compact set, are those established by Gallagher & Saleh in [206, Ths. 2.6, 3.1], Makarov & Rachkovski in [393, Th. 4.3], and Cheng & Fu in [130, Th. 5.1]; in these results, for the density w.r.t. the topology τ, C is well-based or Y is τ-compact. The stronger conclusion from Theorem 3.3.58 is obtained by


Guerraggio & Luc in [239, Th. 3.2] for σ := w. Corollary 3.3.53, Theorem 3.3.56(ii), and Corollary 3.3.57(ii) subsume the density results obtained by Jahn [309, Th. 3.1], Petschke [473, Th. 4.1], Borwein & Zhuang [86, Th. 4.2] (see also Remark 3.3.55), Zhuang [585, Th. 3.3], Gong [225, Th. 3.1] and Ferro [194, Th. 3.1]. There are several density results established for topological vector spaces; we mention only the papers [206] by Gallagher & Saleh and [393, Th. 4.3] by Makarov & Rachkovski. Theorem 3.3.61 for X a normed vector space and σ the norm topology was essentially established by Gong [224, Th. 4.1], while for X a l.c.s., σ := w, and any element of HMax(Y; C) being a continuity point of Y, by Song in [509, Th. 3.1].

3.4 Continuity Properties with Respect to a Scalarization Parameter

Let Y be a topological vector space, C ⊆ Y a proper convex cone, and ∅ ≠ A ⊆ Y. Recall that Eff(A; C) := Min(A; C) := Max(A; −C) := {a ∈ A | A ∩ (a − C) ⊆ a + C}; of course, Eff(∅; C) = ∅. When int C ≠ ∅ we denote by wEff(A; C) the set Eff(A; {0} ∪ int C) of the weak efficient points of A w.r.t. C. Throughout this section we assume that k0 ∈ int C. For every p ∈ Y, consider the minimization problem

P(p)

minimize t

subject to t ∈ R, y ∈ A, p ∈ tk0 − y − C.

Consider also the marginal function m : Y → R̄ associated with the problems P(p), defined by

m(p) := inf{t ∈ R | y ∈ A, p ∈ tk0 − y − C} = inf{t ∈ R | p ∈ tk0 − (A + C)}.

We have the following relations between the set wEff(A; C) and the solutions of P(p). Let us denote the set B \ int B by ∂B; note that ∂B differs from the usual boundary bd B of B.

Proposition 3.4.1. (i) If (t, y) ∈ R × A is a solution of P(p), then y ∈ wEff(A; C) and tk0 − p ∈ ∂(A + C).
(ii) If y ∈ wEff(A; C), then (0, y) is a solution of P(−y).

Proof. (i) Let (t, y) ∈ R × A be a solution of P(p) and suppose that y ∉ wEff(A; C). Then there exists k ∈ int C such that ỹ := y − k ∈ A. There exists δ > 0 such that k − δk0 ∈ C. So p − (t − δ)k0 ∈ −y + δk0 − C = −ỹ − (k − δk0) − C ⊆ −ỹ − C, contradicting the optimality of (t, y). Therefore y ∈ wEff(A; C). If tk0 − p ∉ ∂(A + C), then tk0 − p ∈ int(A + C). There exists δ > 0 such that


(t − δ)k0 − p ∈ A + C, which shows that (t, y) is not a solution of P(p), a contradiction. Therefore tk0 − p ∈ ∂(A + C).
(ii) Let y ∈ wEff(A; C); of course, (0, y) is an admissible solution of P(−y). Suppose that (0, y) is not a solution of P(−y). Then there exist t < 0 and y′ ∈ A such that −y ∈ tk0 − y′ − C, whence y′ ∈ y − int C, a contradiction. □

We say that a segment [y, z] ⊆ Y with y ≠ z is parallel to C if y ≤C z or z ≤C y.

Lemma 3.4.2. Let ∅ ≠ A, V ⊆ Y be such that ∂(A + C) ∩ V contains no segments parallel to C. Then ∂(A + C) ∩ aint V ⊆ Eff(A; C).

Proof. Let y ∈ [(A + C) \ int(A + C)] ∩ aint V. Suppose that y − k ∈ A for some k ∈ C \ {0}. Since y ∈ aint V, there exists δ ∈ ]0, 1[ such that y + tk ∈ V for every t ∈ [−δ, δ]. Of course, for t ∈ [0, δ] we have that y − tk = y − k + (1 − t)k ∈ A + C. Suppose that y − tk ∈ int(A + C) for some t ∈ ]0, δ], i.e., y − tk + W ⊆ A + C for some neighborhood W of 0 in Y. Then y + W ⊆ A + C + tk ⊆ A + C, which shows that y ∈ int(A + C), a contradiction. Therefore y − tk ∈ ∂(A + C) ∩ V for every t ∈ [0, δ]. Since the segment [y, y − δk] is parallel to C and contained in ∂(A + C) ∩ V, we get a contradiction. It follows that A ∩ (y − C) ⊆ {y}. Since y ∈ A + C, we obtain that y ∈ A and, furthermore, y ∈ Eff(A; C). □

As an application of the preceding lemma we have the following result.

Proposition 3.4.3. Let p ∈ Y be such that the problem P(p) has at least one solution. If there exists an open set V ⊆ Y such that m(p)k0 − p ∈ V and ∂(A + C) ∩ V contains no segments parallel to C, then the point y ∈ A obtained by solving P(p) coincides with m(p)k0 − p and belongs to Eff(A; C).

Proof. The conclusion follows immediately by using Proposition 3.4.1(i) and Lemma 3.4.2. □

Concerning the continuity properties of the marginal function m we have the following results.

Proposition 3.4.4. The marginal function m is upper semicontinuous at every p ∈ dom m.

Proof. Indeed, let p0 ∈ dom m and λ ∈ R be such that m(p0) < λ. Then there exists t < λ such that p0 ∈ tk0 − (A + C). It follows that p0 ∈ λk0 − [(λ − t)k0 + A + C] ⊆ λk0 − (A + int C) ⊆ λk0 − int(A + C). Therefore U := λk0 − (A + C) is a neighborhood of p0. Of course, for p ∈ U we have that m(p) ≤ λ. Hence m is upper semicontinuous at p0. □

Let M := {p ∈ Y | m(p) ∈ R} and consider the function g : M → Y defined by g(p) := m(p)k0 − p.
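For concrete data the marginal function m and the function g are directly computable. The following is a minimal numerical sketch, under our own assumptions (not from the text): Y = R², C = R²₊, k0 = (1, 1) ∈ int C, and a finite illustrative set A, so that A + C is closed and m(p) = min over y ∈ A of max_i (p_i + y_i)/k0_i.

import numpy as np

A = np.array([[2.0, 0.0], [1.0, 1.0], [0.0, 3.0]])   # illustrative data
k0 = np.array([1.0, 1.0])                            # k0 in int C, C = R^2_+

def m(p):
    # p in t*k0 - y - C  <=>  t >= max_i (p_i + y_i)/k0_i; minimize over y in A
    return min(np.max((p + y) / k0) for y in A)

def g(p):
    return m(p) * k0 - p    # g(p) lies in the set ∂(A + C); cf. Prop. 3.4.1(i)

p = np.array([0.5, -0.5])
print(m(p), g(p))           # 1.5 and (1.0, 2.0) for these data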


Corollary 3.4.5. The function g is C-u.c. on M.

Proof. Let p0 ∈ M be fixed and V ∈ NY. Consider V1 ∈ NY such that V1 + V1 ⊆ V. There exists ε > 0 such that εk0 ∈ V1. Since m is upper semicontinuous at p0, there exists U1 ∈ NY such that m(p) < m(p0) + ε for every p ∈ p0 + U1. Consider U := U1 ∩ V1 and p ∈ M ∩ (p0 + U). Then m(p) − m(p0) = ε − γ with γ > 0, whence, for p ∈ M ∩ (p0 + U),

g(p) − g(p0) = (m(p) − m(p0)) k0 − (p − p0) ∈ (ε − γ)k0 + V1 ⊆ V1 + V1 − γk0 ⊆ V − C.

It follows that g(p) ∈ g(p0) + V − C, which shows that g is C-u.c. at p0. □

Proposition 3.4.6. Suppose that A + C is closed. Then m is lower semicontinuous on Y and continuous on dom m.

Proof. Taking T : Y × R → Y defined by T(p, t) := tk0 − p, T is a continuous linear operator. Therefore the set T⁻¹(A + C) is closed. It is also a set of epigraph type. It follows that epi m = T⁻¹(A + C), and so m is lower semicontinuous. The continuity of m on dom m follows now from the preceding proposition. □

Note that in the case in which A + C is closed, M is the set of those p ∈ Y for which problem P(p) has solutions; in this case, by Proposition 3.4.1(i), we have that g(p) ∈ wEff(A; C) for every p ∈ M.

Corollary 3.4.7. Suppose that A + C is closed. Then g is C-continuous on M. Moreover, if C is normal, then g is continuous on M.

Proof. The proof of the C-lower continuity of g is similar to that of Corollary 3.4.5, so we omit it. Therefore g is C-continuous on M. When C is normal, C-continuity and usual continuity are equivalent. □

Combining Proposition 3.4.3 and Corollary 3.4.7 we obtain the following result.

Corollary 3.4.8. Suppose that A + C is closed and that there exists an open set V ⊆ Y such that g(p0) ∈ V for some p0 ∈ M, and ∂(A + C) ∩ (V − C) contains no segments parallel to C. Then there exists a neighborhood U of p0 in M such that g is C-continuous on U with values in Eff(A; C).

Proof. By the preceding corollary we have that g(p) ∈ V − C for p in a neighborhood U of p0 in M. Applying now Proposition 3.4.3 to each g(p) with p ∈ U, we obtain that g(p) ∈ Eff(A; C). The C-continuity of g is stated in the preceding corollary. □

Consider now f : X → Y, where (X, τ) is a topological space, and consider A := f(X). We associate with f the multifunction Σ : Y ⇒ X defined by

Σ(p) := {x ∈ X | f(x) ∈ m(p)k0 − p − C} if p ∈ M, and Σ(p) := ∅ if p ∈ Y \ M.

Of course, Σ(p) is the set of weak efficient solutions, obtained by solving P(p), of the problem of minimizing f(x) w.r.t. C. The next result holds.


Proposition 3.4.9. Suppose that f is continuous, C is closed, and m is continuous at p0 ∈ M. Then Σ is closed at p0.

Proof. Let ((pi, xi))i∈I ⊆ gr Σ be a net converging to (p0, x). Then m(pi)k0 − f(xi) − pi ∈ C for every i ∈ I. Taking the limit, we obtain that m(p0)k0 − f(x) − p0 ∈ C, which shows that x ∈ Σ(p0). □

Of course, if Σ has some compactness properties at p0, then Σ is even u.c. at p0. Note that Pascoletti and Serafini [458] considered Y to be a finite-dimensional space and a supplementary parameter q in span C; also, Sterna-Karwat considered this supplementary parameter q in the case in which the intrinsic core of C is nonempty, Y being an arbitrary topological vector space. Our motivation for considering a fixed parameter q (= k0) is that with its help one can find all the weak efficient elements of the set A. The results stated above correspond to similar ones in [516].

3.5 Well-Posedness of Vector Optimization Problems

Throughout this section X is a Hausdorff topological space, Y is a Hausdorff topological vector space, and C ⊆ Y is a convex cone with C ≠ Y. Recall that, for A ⊆ Y, Eff(A; C) := Max(A; −C) and A has the domination property (DP) (w.r.t. C) if A ⊆ Eff(A; C) + C. Recall also that wEff(A; C) := Eff(A; {0} ∪ int C) when int C ≠ ∅.

Remark 3.5.1. If A ⊆ Y and int C ≠ ∅, then

A ∩ cl(Eff(A; C)) ⊆ wEff(A; C) = A ∩ cl(wEff(A; C)).

In particular, if A is closed, then wEff(A; C) is closed, too.
Indeed, let wEff(A; C) ⊇ (yi)i∈I → y ∈ A. Suppose that y ∉ wEff(A; C). Then there exists y′ ∈ A ∩ (y − int C). It follows that yi − y′ → y − y′ ∈ int C, whence yi − y′ ∈ int C for i ≽ i0, contradicting the fact that yi0 ∈ wEff(A; C). The other assertions are obvious.

Let f : X → Y be a function and A ⊆ X a nonempty set. The optimal value of the vector optimization problem

(P) C-minimize f(x) subject to x ∈ A

is Ω = Ω(f, A; C) := Eff(f(A); C), while its solution set is Σ(f, A; C) := A ∩ f⁻¹(Ω); an element of Σ(f, A; C) is called an optimal solution, or simply solution, of problem (P). We associate with (P) and η ∈ Ω the multifunction Π^η : Y ⇒ X defined by Π^η(ε) := {x ∈ A | f(x) ≤C η + ε} for ε ∈ C and Π^η(ε) := ∅ otherwise. We also consider the multifunction Π : Y ⇒ X defined by Π(ε) := ∪η∈Ω Π^η(ε). In the case X = Y and f = IdX we denote Π^η and Π by Π0^η and Π0, respectively. An element of Π(ε) is called an ε-optimal solution of (P).
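For a finite set A ⊆ R² and C = R²₊, the objects just introduced — Eff(A; C), the domination property, and the ε-optimal sets Π0(ε) — can be checked by brute force. The following sketch is illustrative only; the sample set A is our own assumption:

import numpy as np

A = np.array([[0.0, 2.0], [1.0, 1.0], [2.0, 0.0], [2.0, 2.0]])  # assumption

def dominates(b, a):
    """b <=_C a with b != a, for C = R^2_+."""
    return np.all(b <= a) and np.any(b < a)

def eff(A):
    """Eff(A; C): elements of A not dominated within A."""
    return [a for a in A if not any(dominates(b, a) for b in A)]

def has_dp(A):
    """(DP): every element of A is dominated by some efficient point."""
    return all(any(np.all(e <= a) for e in eff(A)) for a in A)

def pi0(A, eps):
    """Π0(ε): the y in A with y <=_C η + ε for some efficient η."""
    return [a for a in A if any(np.all(a <= e + eps) for e in eff(A))]

print(eff(A), has_dp(A), pi0(A, np.array([0.5, 0.5])))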


Definition 3.5.2. Let f : X → Y be a function and A ⊆ X a nonempty set. We say that
(i) (P) is η-well-posed if Ω ≠ ∅ and Π^η is u.c. at 0 for every η ∈ Ω;
(ii) (P) is well-posed if Ω ≠ ∅ and Π is u.c. at 0;
(iii) (P) is weakly well-posed if X is a topological vector space, Ω ≠ ∅, and Π is H-u.c. at 0.

Before stating the next result, let us introduce another notion. We say that the nonempty set A ⊆ Y has the containment property (CP) if

∀ W ∈ NY, ∃ V ∈ NY : [A \ (Eff(A; C) + W)] + V ⊆ Eff(A; C) + C.

If int C ≠ ∅, then (CP) is equivalent to

∀ W ∈ NY, ∃ V ∈ NY, ∀ y ∈ A \ (Eff(A; C) + W), ∃ ȳ ∈ Eff(A; C), ∃ k ∈ C : y = ȳ + k, k + V ⊆ C. (3.34)

It is obvious that (3.34) is sufficient for (CP) (and implies that int C ≠ ∅ if A ⊄ cl(Eff(A; C))). Suppose that (CP) holds and int C ≠ ∅. For W ∈ NY let V0 ∈ NY be such that [A \ (Eff(A; C) + W)] + V0 ⊆ Eff(A; C) + C. Consider k ∈ V0 ∩ int C; there exists V ∈ NY such that k + V ⊆ C. Let y ∈ A \ (Eff(A; C) + W). It follows that y − k = y0 + k′ for some y0 ∈ Eff(A; C) and k′ ∈ C. Hence y = y0 + k0 with k0 := k + k′. Since k0 + V = k + k′ + V ⊆ C + C = C, (3.34) holds.

Generally, the condition (DP) does not imply and is not implied by (CP). However, if (CP) holds and int C ≠ ∅, then wEff(A; C) = A ∩ cl(Eff(A; C)) and A ⊆ wEff(A; C) + C. Indeed, suppose that (CP) holds and y ∈ A \ cl(Eff(A; C)). Then there exists W ∈ NY such that y ∈ A \ (Eff(A; C) + W). By (3.34), y = ȳ + k with ȳ ∈ Eff(A; C) and k ∈ int C. Therefore y ∉ wEff(A; C). Hence wEff(A; C) = A ∩ cl(Eff(A; C)). The preceding argument also shows that A ⊆ wEff(A; C) + C. We have the following result.

Proposition 3.5.3. Let A ⊆ Y be nonempty and compact and let C be closed and with nonempty interior. If Eff(A; C) = wEff(A; C), then (CP) holds for A.

Proof. It is known that A has (DP) in this case. Since C ∩ (−C0) ⊆ C0 and C + (C0 \ {0}) ⊆ C0, where C0 := {0} ∪ int C, we have that (DP) holds w.r.t. C0, too. Therefore, taking into account that Eff(A; C) = wEff(A; C),

A ⊆ Eff(A; C0) + C0 = Eff(A; C) ∪ (Eff(A; C) + int C).

Let W ∈ NY. From the above inclusion we have that every y ∈ A \ Eff(A; C) has a representation y = ey + ky with ey ∈ Eff(A; C) and ky ∈ int C; let


Vy ∈ NY be such that ky + Vy ⊆ C. There exists an open V¹y ∈ NY such that V¹y + V¹y ⊆ Vy. It follows that the family {y + V¹y | y ∈ A \ (Eff(A; C) + int W)} is an open cover of the compact set A \ (Eff(A; C) + int W). Therefore there exist y1, ..., yn ∈ A \ (Eff(A; C) + int W) such that

A \ (Eff(A; C) + int W) ⊆ ∪_{i=1}^{n} (yi + V¹yi).

Let V := ∩_{i=1}^{n} V¹yi ∈ NY and consider y ∈ A \ (Eff(A; C) + int W). Then there exists 1 ≤ i ≤ n such that y ∈ yi + V¹yi. So, using the above notation, y = e_{yi} + k_{yi} + vi with vi ∈ V¹yi. Taking ky := k_{yi} + vi, we have that ky + V = k_{yi} + vi + V ⊆ k_{yi} + V¹yi + V¹yi ⊆ k_{yi} + Vyi ⊆ C. Therefore condition (3.34) holds, and so (CP) holds, too. □

Proposition 3.5.4. Let A ⊆ Y be a nonempty set. Then
(i) Π0 is (−C)-H-u.c. at 0.
(ii) If (DP) holds for A, then Π0 is C-u.c. at 0.
(iii) If (CP) holds for A, then Π0 is H-u.c. at 0.
(iv) If C is normal and closed, Eff(A; C) is compact, and (DP) holds for A, then Π0 is u.c. at 0.

Proof. (i) Let V ∈ NY. Consider ε ∈ V ∩ C and y ∈ Π0(ε). By definition, there exists η ∈ Π0(0) with y ≤C η + ε, whence y ∈ Π0(0) + V − C. Therefore Π0(V) ⊆ Π0(0) + V − C, which shows that Π0 is (−C)-H-u.c. at 0.
(ii) Since (DP) holds, Π0(ε) ⊆ A ⊆ A + C = Π0(0) + C ⊆ D + C for every ε ∈ C and every open set D ⊆ Y with Π0(0) ⊆ D. Therefore Π0 is C-u.c. at 0.
(iii) Let W ∈ NY. Since (CP) holds for A, there exists V ∈ NY such that [A \ (Π0(0) + W)] + V ⊆ Π0(0) + C. Consider ε ∈ C ∩ (V ∩ W) and y ∈ Π0(ε). Suppose that y ∉ Π0(0) + W; then y + V ⊆ Π0(0) + C. Since y ∈ Π0(ε), y = η + ε − k for some η ∈ Π0(0) and k ∈ C. But y − ε ∈ y + V, whence y − ε = η − k = η′ + k′ with η′ ∈ Π0(0) and k′ ∈ C. Therefore η = η′ + (k + k′). Since η, η′ ∈ Eff(A; C) and C is pointed, it follows that k = k′ = 0, and so y ∈ Π0(0) + W, a contradiction. Hence Π0(V ∩ W) ⊆ Π0(0) + W, which shows that Π0 is H-u.c. at 0.
(iv) Suppose that Π0 is not u.c. at 0; then there exist an open set D0 ⊆ Y with Π0(0) ⊆ D0 and a net (εi)i∈I ⊆ C with (εi) → 0 and Π0(εi) ⊄ D0 for every i ∈ I. Therefore for every i ∈ I there exists yi ∈ Π0(εi) \ D0. It follows that for every i there exist ηi ∈ Π0(0) and ki ∈ C such that yi = ηi + εi − ki. Since (DP) holds for A, yi = ηi′ + ki′ with ηi′ ∈ Π0(0) and ki′ ∈ C for every i. It follows that ηi + εi = ηi′ + (ki + ki′) for i ∈ I. Since Π0(0) is compact, passing to subnets if necessary, we may suppose that (ηi) and (ηi′) converge to η and η′ in Π0(0), respectively. We obtain that (ki + ki′) → η − η′ ∈ C. Since η, η′ ∈ Eff(A; C), we obtain that η = η′, and so (ki + ki′) → 0. Since C is normal, it follows that (ki′) → 0, and so (yi) → η ∈ Π0(0) ⊆ D0. We get


that yi ∈ D0 for i ≽ i0 (for some i0 ∈ I), a contradiction. Therefore Π0 is u.c. at 0. □

In the general case we have the following result concerning the relations between η-well-posedness and well-posedness.

Proposition 3.5.5. Let f : X → Y and A ⊆ X a nonempty set. Suppose that Eff(f(A); C) is compact, int C ≠ ∅, and (P) is η-well-posed. Then (P) is well-posed.

Proof. Consider an open set D ⊆ X such that Π(0) ⊆ D. Since Π^η is u.c. at 0 and Π^η(0) ⊆ Π(0) for every η ∈ Eff(f(A); C), for each η ∈ Eff(f(A); C) there exists Vη ∈ NY such that Π^η(Vη) = A ∩ f⁻¹(η + Vη ∩ C − C) ⊆ D. Of course, for every η ∈ Eff(f(A); C) there exists an open neighborhood V′η of 0 ∈ Y such that V′η + V′η ⊆ Vη. Since the family {η + (V′η ∩ int C) − (V′η ∩ int C) | η ∈ Eff(f(A); C)} is an open cover of the compact set Eff(f(A); C), there exist η1, ..., ηn in the set Eff(f(A); C) such that

Eff(f(A); C) ⊆ ∪_{i=1}^{n} (ηi + (V′ηi ∩ int C) − (V′ηi ∩ int C)). (3.35)

Consider V := ∩_{i=1}^{n} V′ηi ∈ NY. Let ε ∈ V ∩ C and x ∈ Π(ε). There exist η ∈ Eff(f(A); C) and k ∈ C such that f(x) = η + ε − k. From (3.35) we have that η = ηi + ki − ki′ for some ki, ki′ ∈ V′ηi ∩ int C. It follows that ε + ki ∈ Vηi ∩ C, whence f(x) ∈ ηi + Vηi − C, and so x ∈ Π^{ηi}(Vηi) ⊆ D. Therefore Π(V) ⊆ D. The proof is complete. □

The notion of well-posedness for vector optimization problems, as well as that of the containment property, was introduced by Bednarczuk (see [50], [52]) in the framework of topological vector spaces ordered by closed and pointed convex cones with nonempty interior. In this framework, assertions (ii) and (iii) of Proposition 3.5.4 are proved in [54], [52], while assertion (iv) is proved in [52] for dim Y < ∞. Proposition 3.5.5 is stated by Bednarczuk [52], too. Proposition 3.5.3 is proved in [53].

3.6 Continuity Properties

3.6.1 Continuity Properties of Optimal-Value Multifunctions

As in the preceding section, we consider X a Hausdorff topological space, Y a Hausdorff topological vector space, C ⊆ Y a convex cone with C ≠ Y, and a multifunction Γ : X ⇒ Y.


With Γ we associate the optimal-value multifunctions Ω, Ω̄ : X ⇒ Y defined by

Ω(x) := Eff(Γ(x); C), Ω̄(x) := Eff(Γ̄(x); C), where Γ̄(x) := cl Γ(x).

When int C ≠ ∅ we also consider the weak optimal-value multifunctions Ωw, Ω̄w obtained from Ω and Ω̄, respectively, by replacing C with {0} ∪ int C. We say that Γ has the domination property (around x0 ∈ X) if Γ(x) has the domination property at every x ∈ X (at every x ∈ U0, where U0 is some neighborhood of x0). Note that when Γ has the domination property, then ΓC = ΩC, and so Ω is C-(nearly) convex if X is a topological vector space and Γ is C-(nearly) convex. An important special case occurs when U is another topological (vector) space and Γ is replaced by fΛ, where f : X × U → Y and Λ : U ⇒ X, i.e., Γ(u) := f(Λ(u) × {u}) and Ω(u) := Eff(f(Λ(u) × {u}); C) (and similarly for Ωw(u)). In the first two results we give continuity properties of Ω under convexity assumptions on Γ.

Proposition 3.6.1. Let X be a topological vector space and let Γ be C-nearly convex and have the domination property. If Γ is C-l.c. at (x0, y0) ∈ gr Γ and Γ(x0) ⊆ B + C for some bounded set B ⊆ Y, then Ω is C-H-continuous on int(dom Γ).

Proof. Since Γ has the domination property, we have that ΓC = ΩC. Because Γ is C-l.c. at (x0, y0) ∈ gr Γ ⊆ gr ΓC = gr ΩC, ΩC is C-l.c. at (x0, y0). Of course, ΩC(x0) ⊆ B + C. Applying Theorem 2.10.4 we obtain that ΩC is C-H-continuous on int(dom ΩC), whence Ω is C-H-continuous on int(dom Γ) = int(dom ΩC). □

Proposition 3.6.2. Let X be a topological vector space and let Γ be C-nearly convex and have the domination property. Suppose that there exist x0 ∈ int(dom Γ), a bounded set B ⊆ Y, and U ∈ NX such that Γ(x0) ⊆ B + C and Γ(x0 + u) ∩ (B − C) ≠ ∅ for every u ∈ U. Then Ω is C-H-continuous on int(dom Γ).

Proof. Since Γ has the domination property, it follows that Ω(x0) ⊆ B + C and Ω(x0 + u) ∩ (B − C) ≠ ∅ for every u ∈ U. Since Ω is C-nearly convex, we obtain that Ω is C-H-continuous on int(dom Γ) = int(dom Ω), by applying Theorem 2.10.6. □

In the sequel we study the continuity properties of the optimal multifunction Ω without convexity assumptions. We begin with a result concerning the lower semicontinuity of the optimal multifunction.

Proposition 3.6.3. Suppose that Γ is C-l.s.c. and (DP) holds for Γ(x) for every x ∈ X. Then Ω is C-l.s.c., too.


Proof. Let y ∈ Y and (xi )i∈I ⊆ levΩ (y), (xi )i∈I → x ∈ X. Then (xi )i∈I ⊆ levΓ (y). Since Γ is C-l.s.c., x ∈ levΓ (y); therefore Γ (x) ∩ (y − C) = ∅. Since Γ (x) ⊆ Ω(x) + C, it follows that Ω(x) ∩ (y − C) = ∅, whence x ∈ levΩ (y).  Corollary 3.6.4. Let G : X ⇒ Y be C-l.s.c. and Λ : U ⇒ X be compact at every u ∈ U. Assume that C is closed and that, for every x ∈ X, any strictly C-decreasing net in G(x)+C is C-lower bounded in G(x)+C. Then Γ := GΛ has (DP) at every u ∈ U and Ω is C-l.s.c. Proof. From Proposition 2.8.5(v) we have that Γ is C-l.s.c.; in order to apply the preceding result we must show that Γ (u) has (DP) for every u ∈ U. In order to apply Proposition 3.3.15(v), let (yi )i∈I ⊆ Γ (u) be a strictly Cdecreasing net. For every i ∈ I there exists xi ∈ Λ(u) such that yi ∈ G(xi ). Since Λ(u) is compact, there exists the subnet (xϕ(j) )j∈J converging to x ∈ Λ(u). Let i ∈ I; there exists ji ∈ J such that ϕ(j)  i for every j  ji . It follows that yϕ(j) ≤C yi for all j  ji . Hence (xϕ(j) )j ji ⊆ levG (yi ). Since G is C-l.s.c., x ∈ levG (yi ), i.e., (yi )i∈I ⊆ G(x) + C. By hypothesis, there exist y0 ∈ G(x) and k0 ∈ C such that y0 + k0 ≤C yi for every i ∈ I. Therefore y0 ∈ Γ (u), and it is a minorant for (yi )i∈I . By the above-mentioned result we have that Γ (u) has (DP).  We continue with a closedness result for the weak optimal multifunction. Theorem 3.6.5. Suppose that int C = ∅ and Γ is C-l.c. at x0 . If Γ or Γ is closed at x0 , then Ωw or Ω w is closed at x0 , respectively. Proof. Suppose first that Γ is closed at x0 . Let y ∈ / Ωw (x0 ). If y ∈ / Γ (x0 ), since Γ is closed at x0 , by Proposition 2.7.12, there exist U ∈ NX (x0 ) and V ∈ NY (y) such that Γ (U ) ∩ V = ∅, whence Ωw (U ) ∩ V = ∅. Suppose now that y ∈ Γ (x0 ). Then there exists z ∈ Γ (x0 ) ∩ (y − int C). Consider W ∈ NY such that W + W ⊆ z − y + int C. Since Γ is C-l.c. at (x0 , z), there exists U ∈ NX (x0 ) such that Γ (x) ∩ (z + W − C) = ∅; i.e., z ∈ Γ (x) + W + C, for every x ∈ U . Let z  ∈ y + W . It follows that z  ∈ y + W − z + Γ (x) + W + C ⊆ Γ (x) + int C + C ⊆ Γ (x) + int C, which means that z  ∈ / Ωw (x). Therefore Ωw (U ) ∩ (y + W ) = ∅. Hence Ωw is closed at x. If Γ is closed at x0 , since in our conditions Γ is C-l.c. at x0 , by applying  the first part we get that Ωw is closed at x0 . Corollary 3.6.6. Suppose that int C = ∅, Λ : U ⇒ X is l.c. at u0 , and f : X × U → Y is C-u.c. on Λ(u0 ) × {u0 }. If f Λ is closed at u0 , then Ωw is closed at x0 . Proof. Using Proposition 2.8.5(iii) we have that f Λ is C-l.c. at x0 . The conclusion follows from the preceding theorem.  Using the preceding theorem we get also an upper continuity result for Ωw .


Corollary 3.6.7. Suppose that int C ≠ ∅, Γ is u.c. and C-l.c. at x0, and Γ(x0) or cl Γ(x0) is compact. Then Ωw or Ω̄w is u.c. at x0, respectively.

Proof. Suppose that Γ(x0) is compact. Using Proposition 2.7.12(iii) we have that Γ is closed at x0. Then using Theorem 3.6.5 we have that Ωw is closed at x0. Since Ωw(x) = Ωw(x) ∩ Γ(x) for every x ∈ X, using Proposition 2.7.13(i), we obtain that Ωw is u.c. at x0. Suppose now that cl Γ(x0) is compact. Using Proposition 2.7.22(iii) and (iv) we obtain that Γ̄ is C-l.c. and u.c. at x0. The conclusion follows from the first part. □

The following examples show that the hypotheses on Γ in Theorem 3.6.5 and Corollary 3.6.7 are essential.

Example 3.6.8. ([541]) Let X = ]−∞, 0[, Y = R², C = R²₊, and let the multifunction Γ be defined by

Γ(x) := {(y1, y2) ∈ R² | y2 ≥ xy1, y1 ≤ 1, y2 ≤ 1} if x ≠ −1, Γ(−1) := {(y1, y2) ∈ R² | y2 ≥ −y1 + 1, y1 ≤ 1, y2 ≤ 1}.

Then Γ is l.c. (in particular, C-l.c.), but neither closed nor u.c., at x0 = −1; moreover, Γ(x0) is compact. We have that

Ω(x) = Ωw(x) = {(y1, y2) ∈ R² | y2 = xy1, x⁻¹ ≤ y1 ≤ 1} if x ≠ −1, Ω(−1) = Ωw(−1) = {(y1, y2) ∈ R² | y2 = −y1 + 1, 0 ≤ y1 ≤ 1};

Ω is not closed at x0.

Example 3.6.9. ([541]) Let X = ]−∞, 0[, Y = R², C = R²₊, and let the multifunction Γ be defined by

Γ(x) := {(y1, y2) ∈ R² | y2 ≥ xy1, y1 ≤ 1, y2 ≤ 1} if x ≠ −1, Γ(−1) := {(y1, y2) ∈ R² | y2 ≥ −y1 − 1, y1 ≤ 1, y2 ≤ 1}.

Then Γ is closed and u.c., but not C-l.c., at x0 = −1; moreover, Γ(x0) is compact. We have that

Ω(x) = Ωw(x) = {(y1, y2) ∈ R² | y2 = xy1, x⁻¹ ≤ y1 ≤ 1} if x ≠ −1, Ω(−1) = Ωw(−1) = {(y1, y2) ∈ R² | y2 = −y1 − 1, −2 ≤ y1 ≤ 1};

Ω is not closed at x0.

Proposition 3.6.10. Suppose that Γ is C-u.c. at x0 and (DP) holds for Γ(x0). Then Ω is C-u.c. at x0.

Proof. Let D ⊆ Y be an open set such that Ω(x0) ⊆ D. Then Γ(x0) ⊆ Ω(x0) + C ⊆ D + C. Since Γ is C-u.c. at x0 and D + C is open, there exists U ∈ NX(x0) such that Γ(U) ⊆ D + C + C, whence Ω(U) ⊆ D + C. Therefore Ω is C-u.c. at x0. □

In Example 3.6.8, (DP) holds for Γ(x0), but Ω is not C-u.c. at x0, showing that the C-upper continuity of Γ at x0 is essential.
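The failure of closedness of Ω in Example 3.6.8 can be observed numerically by sampling Γ(x) on a finite grid and computing the efficient points of the sample. The discretization below is our own illustrative approximation, not part of the example:

import numpy as np

def gamma_sample(x, n=25):
    """Grid sample of Γ(x) from Example 3.6.8 (y1, y2 in [-3, 1])."""
    grid = np.linspace(-3.0, 1.0, n)
    low = (lambda a: -a + 1.0) if x == -1.0 else (lambda a: x * a)
    return np.array([(a, b) for a in grid for b in grid if b >= low(a)])

def eff_min(P):
    """Pareto-minimal points of the sample w.r.t. C = R^2_+."""
    return np.array([p for p in P
                     if not any(np.all(q <= p) and np.any(q < p) for q in P)])

for x in (-0.9, -1.0, -1.1):
    E = eff_min(gamma_sample(x))
    print(x, np.round(E.min(axis=0), 2), np.round(E.max(axis=0), 2))
# for x near -1 the efficient sample hugs the line y2 = x*y1, while at
# x = -1 it sits on y2 = -y1 + 1: the jump reflects that Ω is not closed there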


Corollary 3.6.11. Let Λ : U ⇒ X be u.c. at u0 ∈ U.
(i) If g : X → Y is C-l.c. on Λ(u0) and (DP) holds for gΛ(u0), then Ω is C-u.c. at u0.
(ii) Assume that f : X × U → Y is C-l.c. on Λ(u0) × {u0}, (DP) holds for fΛ(u0), and either the set Λ(u0) is compact or the multifunction Λ̃ : U ⇒ X × U, Λ̃(u) := Λ(u) × {u}, is u.c. at u0; then Ω is C-u.c. at u0.

Proof. (i) Using Proposition 2.8.5(i), we have that gΛ is C-u.c. at u0. The conclusion follows from the preceding proposition.
(ii) If Λ(u0) is compact, then, as seen in the proof of Proposition 2.8.5(ii), Λ̃ is u.c. at u0. So, in both cases the conclusion follows from (i) applied with g and Λ replaced by f and Λ̃, respectively. □

In order to obtain the P-H-upper continuity of Ω for a convex cone P ⊆ Y we introduce the multifunction Γᴾ : X ⇒ Y,

Γᴾ(x) := Γ(x0) if x = x0, Γᴾ(x) := Γ(x) \ (Ω(x0) + P) if x ≠ x0.

For P = {0} the next result establishes another sufficient condition for Ω to be H-u.c. at x0.

Theorem 3.6.12. Suppose that int C ≠ ∅ and (CP) holds for Γ(x0). If Γ is uniformly C-l.c. at x0 on Ω(x0) and Γᴾ is H-u.c. at x0, then Ω is H-P-u.c. at x0.

Proof. When x0 ∉ dom Γ, the conclusion is obvious. Let x0 ∈ dom Γ and suppose that Ω is not H-P-u.c. at x0. Then there exists W ∈ NY such that

∀ U ∈ NX(x0), ∃ xU ∈ U, ∃ yU ∈ Ω(xU) \ (Ω(x0) + W + P). (3.36)

Consider W1 ∈ NY such that W1 + W1 ⊆ W. Since (CP) holds for Γ(x0), there exists an open V ∈ NY such that

∀ y ∈ Γ(x0) \ (Ω(x0) + W1), ∃ ey ∈ Ω(x0), ky ∈ C : y = ey + ky, ky + V ⊆ C.

Let V1 ∈ NY be such that V1 + V1 ⊆ V. Since Γᴾ is H-u.c. at x0, there exists U1 ∈ NX(x0) such that

Γ(x) \ (Ω(x0) + P) ⊆ Γ(x0) + V1 ∩ W1 ∀ x ∈ U1. (3.37)

Since Γ is uniformly C-l.c. at x0 on Ω(x0), there exists U2 ∈ NX(x0) such that

Ω(x0) ⊆ Γ(x) + V1 + C ∀ x ∈ U2. (3.38)

From (3.36) we obtain x′ ∈ U1 ∩ U2 and y′ ∈ Ω(x′) \ (Ω(x0) + W + P) ⊆ Γ(x′) \ (Ω(x0) + P). By (3.37), y′ = y0 + w for some y0 ∈ Γ(x0) and w ∈


V1 ∩ W1. If y0 ∈ Ω(x0) + W1, then y′ ∈ Ω(x0) + W + P, a contradiction. Therefore y0 ∉ Ω(x0) + W1, whence y0 = e0 + k0 with e0 ∈ Ω(x0) and k0 + V ⊆ int C. Since x′ ∈ U2, by (3.38), there exist y″ ∈ Γ(x′), v′ ∈ V1, and k′ ∈ C such that e0 = y″ + v′ + k′. Hence y′ = y″ + v′ + k′ + k0 + w ∈ Γ(x′) + k0 + V + C ⊆ Γ(x′) + int C, contradicting the fact that y′ ∈ Ω(x′). The proof is complete. □

Note that for P = {0}, Γᴾ is H-u.c. at x0 if and only if Γ is H-u.c. at x0.

Corollary 3.6.13. Suppose that Γ is H-u.c. at x0 ∈ X, and C is proper, closed, with nonempty interior. If Γ(x0) is compact, Eff(Γ(x0); C) = wEff(Γ(x0); C), and Γ is C-l.c. at (x0, y) for all y ∈ Ω(x0), then Ω is u.c. at x0.

Proof. We may suppose that x0 ∈ dom Γ. Using Proposition 3.5.3, we have that (CP) holds for Γ(x0), while from Remark 2.7.21 we have that Γ is uniformly C-l.c. at x0 on Ω(x0). Then, from the preceding theorem for P = {0}, we obtain that Ω is H-u.c. at x0. Since Eff(Γ(x0); C) = wEff(Γ(x0); C), Ω(x0) is closed, and so compact. Hence Ω is actually u.c. at x0. □

In Examples 3.6.8 and 3.6.9 all the conditions in Theorem 3.6.12 (for P = {0}) and Corollary 3.6.13 are satisfied, except the fact that Γ is H-u.c. at x0 in Example 3.6.8 and Γ is uniformly C-l.c. at x0 on Ω(x0) in Example 3.6.9, showing that these two conditions are essential in those results. The next example shows that the condition (CP) for Γ(x0) is also essential in Theorem 3.6.12.

Example 3.6.14. Let X = [0, 1], Y = R², C = R²₊, and let Γ be defined by

Γ(0) := {(y1, y2) | y1 ≥ 0, y2 = min(y1², e^{−y1+1})}, Γ(x) := {(y1, y2) | x ≤ y1 ≤ 2 − 2 ln x, y2 = min(y1², e^{−y1+1})} for x ∈ ]0, 1].

Then Γ is H-u.c. and l.c. at 0, whence Γᴾ is H-u.c. at x0 := 0 for every convex cone P ⊆ Y, and Γ(0) has the domination property, but (CP) does not hold for Γ(0). Moreover,

Ω(0) = {(0, 0)}, Ω(x) = {(x, x²)} ∪ {(y, e^{1−y}) | 1 − 2 ln x < y ≤ 2 − 2 ln x} for x ∈ ]0, 1].

So Γ is uniformly C-l.c. at x0 on Ω(x0), but Ω is not H-P-u.c. at x0 for P ⊆ −C.

Note that asking Γ to be C-l.c. at x0 makes the preceding corollary a particular case of Corollary 3.6.7, because Ω(x0) = Ωw(x0) and Ω(x) ⊆ Ωw(x) for x ∈ X.

Corollary 3.6.15. Suppose that C is closed, proper, with nonempty interior, Λ : U ⇒ X is u.c. and l.c. at u0, Λ(u0) is compact, and f : X × U → Y is continuous on Λ(u0) × {u0}. If wEff(fΛ(u0); C) = Eff(fΛ(u0); C), then Ω is u.c. at u0.


Proof. Since Λ(u0) is compact and f is continuous on Λ(u0) × {u0}, using Proposition 2.8.5(ii) and (vi), fΛ is u.c. and l.c. at u0; moreover, fΛ(u0) is compact. Using the preceding corollary we obtain the conclusion. □

In the following results we establish sufficient conditions for the lower continuity of the optimal multifunction Ω.

Proposition 3.6.16. Suppose that Γ is C-l.c. at (x0, y0) with y0 ∈ Ω(x0). If (DP) holds for Γ around x0, then Ω is C-l.c. at (x0, y0).

Proof. Take U0 ∈ NX(x0) such that Γ(x) ⊆ Ω(x) + C for x ∈ U0. Let W ∈ NY. Since Γ is C-l.c. at (x0, y0), there exists U1 ∈ NX(x0) such that Γ(x) ∩ (y0 + W − C) ≠ ∅ for every x ∈ U1. Let x ∈ U0 ∩ U1. Then there exists y ∈ Γ(x) ∩ (y0 + W − C); it follows that y = y′ + k with y′ ∈ Ω(x) and k ∈ C. Therefore y′ ∈ y0 + W − C − C = y0 + W − C, whence Ω(x) ∩ (y0 + W − C) ≠ ∅. Hence Ω is C-l.c. at (x0, y0). □

In Example 3.6.9 (DP) holds for Γ(x) for every x ∈ X, but Ω is not C-l.c. at (x0, y) for any y ∈ Ω(x0); indeed, Γ is not C-l.c. at (x0, y) for any y ∈ Ω(x0). In the next example (DP) does not hold for Γ around x0.

Example 3.6.17. ([541]) Let X = [0, ∞[, Y = R², C = R²₊, and let Γ be defined by

Γ(x) := {(y1, y2) ∈ R² | y1² + y2² < x²} if x ≠ 1, Γ(1) := {(y1, y2) ∈ R² | y1² + y2² ≤ 1}.

Then Γ is l.c. (and so C-l.c.) at every x ∈ X and

Ω(x) = ∅ if x ≠ 1, Ω(1) = {(y1, y2) ∈ R² | y1² + y2² = 1, y1 ≤ 0, y2 ≤ 0}.

So (DP) does not hold for Γ around x0 = 1, and Ω is not C-l.c. at (x0, y) for every y ∈ Ω(x0).

Corollary 3.6.18. Suppose that Λ : U ⇒ X is l.c. at u0 ∈ U, and f : X × U → Y is C-u.c. on Λ(u0) × {u0}. If (DP) holds for fΛ around u0, then Ω is C-l.c. at u0.

Proof. Using Proposition 2.8.5(iii), fΛ is C-l.c. at u0. The conclusion follows from the preceding proposition. □

In order to obtain the next result we need a uniform (CP) condition. So, we say that (CP) holds for Γ uniformly around x0 ∈ X if there exists U ∈ NX(x0) such that

∀ W ∈ NY, ∃ V ∈ NY, ∀ x ∈ U : [Γ(x) \ (Ω(x) + W)] + V ⊆ Ω(x) + C.

When int C ≠ ∅, one can write a condition similar to (3.34).


Theorem 3.6.19. Suppose that int C ≠ ∅, (CP) holds for Γ uniformly around x0 ∈ X, and Γ is H-C-u.c. at x0. If Γ is l.c. at (x0, y0), where y0 ∈ Ω(x0), then Ω is l.c. at (x0, y0). Moreover, if Γ is uniformly {0}-l.c. at x0 on Ω(x0), then Ω is H-l.c. at x0.

Proof. Let U0 ∈ NX(x0) be such that (CP) holds uniformly for Γ(x) with x ∈ U0. Let us fix W ∈ NY and consider W1 ∈ NY such that W1 + W1 ⊆ W. There exists an open V ∈ NY such that

∀ x ∈ U0, ∀ y ∈ Γ(x) \ (Ω(x) + W1), ∃ e ∈ Ω(x), k ∈ C : y = e + k, k + V ⊆ C. (3.39)

There exists V1 ∈ NY such that V1 + V1 ⊆ V. Since Γ is l.c. at (x0, y0), there exists U1 ∈ NX(x0) such that

y0 ∈ Γ(x) + V1 ∩ W1 ∀ x ∈ U1. (3.40)

Now, since Γ is H-C-u.c. at x0, there exists U2 ∈ NX(x0) such that

Γ(x) ⊆ Γ(x0) + V1 + C ∀ x ∈ U2. (3.41)

Let U := U0 ∩ U1 ∩ U2 and take x ∈ U . From (3.40) we have that y0 = yx + wx with yx ∈ Γ (x) and wx ∈ V1 ∩W1 . Suppose that yx ∈ / Ω(x)+W1 . From (3.39) we get ex ∈ Ω(x) and kx ∈ C such that yx = ex + kx and kx + V ⊆ int C. From (3.41) we have that ex = yx0 + vx + kx with yx0 ∈ Γ (x0 ), vx ∈ V1 , and kx ∈ C. So, y0 = yx0 + vx + kx + kx + wx , whence y0 ∈ yx0 + kx + V1 + V1 + C ⊆ yx0 + int C + C ⊆ yx0 + int C, contradicting the fact that y0 ∈ Ω(x0 ). Therefore yx ∈ Ω(x) + W1 , whence y0 ∈ Ω(x) + W . Hence Ω is l.c. at (x0 , y0 ). If Γ is uniformly {0}-l.c. at x0 on Ω(x0 ), relation (3.40) holds for every  y0 ∈ Ω(x0 ), and so Ω(x0 ) ⊆ Ω(x) + W for all x ∈ U . In Examples 3.6.8, 3.6.9, and 3.6.17 all the hypotheses of the preceding theorem are satisfied at x0 and y0 ∈ Ω(x0 ), except, respectively, the fact that Γ is H-C-u.c. at x0 , Γ is l.c. at (x0 , y0 ), and (CP) holds for Γ , uniformly around x0 ; Ω is not l.c. at (x0 , y0 ). The assumption that (CP) holds uniformly for Γ around x0 is sufficiently strong. This condition may be relaxed if Γ (x0 ) or Ω(x0 ) is compact. Theorem 3.6.20. Suppose that C is closed and normal, Γ satisfies (DP) around x0 ∈ X, Γ is H-C-u.c. at x0 , and either Γ (x0 ) is closed and Ω(x0 ) is relatively compact (for example, if Γ (x0 ) is compact) or Ω(x0 ) is compact. Then, if Γ is l.c. at (x0 , y0 ) ∈ gr Ω, Ω is l.c. at (x0 , y0 ), too. Moreover, if Γ is l.c. at (x0 , y) for every y ∈ Ω(x0 ), then Ω is l.c. at x0 (even H-l.c. at x0 if Ω(x0 ) is compact).


Proof. We give the proof for X, Y first-countable spaces; in the general case one uses the equivalence of (i) and (ii) of Proposition 2.7.6 and assertion (ii) of Proposition 2.7.16 instead of the equivalence of (i) and (iii), and assertion (iv), respectively. Let us note first that C is necessarily pointed because C is normal and Y is separated. Let X ⊇ (xn) → x0. Since Γ is l.c. at (x0, y0), using Proposition 2.7.6, there exists Y ⊇ (yn) → y0 such that yn ∈ Γ(xn) for every n ≥ n0. We may suppose that (DP) holds for Γ(xn) for n ≥ n0. Hence yn = ȳn + kn with ȳn ∈ Ω(xn) ⊆ Γ(xn) and kn ∈ C for n ≥ n0. Since ΓC is H-u.c. at x0, using Proposition 2.7.16, there exist (yn⁰) ⊆ Γ(x0) and (kn⁰) ⊆ C such that wn := ȳn − (yn⁰ + kn⁰) → 0. Since Γ(x0) has (DP), we have that yn⁰ = ȳn⁰ + k̄n⁰ with ȳn⁰ ∈ Ω(x0) and k̄n⁰ ∈ C. So, for n ≥ n0,

yn = ȳn + kn = yn⁰ + wn + (kn⁰ + kn) = ȳn⁰ + wn + (k̄n⁰ + kn⁰ + kn),

where (yn⁰) ⊆ Γ(x0), (ȳn⁰) ⊆ Ω(x0), (k̄n⁰), (kn⁰), (kn) ⊆ C and (wn) → 0. Since Ω(x0) is relatively compact, there exists a subsequence (ȳnp⁰) → ȳ⁰; in both cases ȳ⁰ ∈ Γ(x0), and so (k̄np⁰ + knp⁰ + knp) → y0 − ȳ⁰ ∈ C. Since y0 ∈ Ω(x0) and C is pointed, we have that y0 = ȳ⁰, and so (k̄np⁰ + knp⁰ + knp) → 0. Since C is normal, we obtain that (knp) → 0, whence (ȳnp) → y0. Arguing by contradiction we obtain, in fact, that (ȳn) → y0. Using again Proposition 2.7.6 we obtain that Ω is l.c. at (x0, y0). The second part is an immediate consequence of the first one. □

In Examples 3.6.8, 3.6.9, and 3.6.17 all the hypotheses of the preceding theorem are satisfied at x0 and y0 ∈ Ω(x0), except, respectively, the fact that Γ is H-C-u.c. at x0, Γ is l.c. at (x0, y0), and (DP) holds for Γ around x0; Ω is not l.c. at (x0, y0). In the next result we suppose that Y is a normed vector space.

Theorem 3.6.21. Suppose that Y is a normed vector space, C is closed and normal, and Γ is l.c. and closed at x0. If there exists a neighborhood U0 of x0 such that Γ(x) satisfies (DP) for x ∈ U0• := U0 \ {x0}, Γ(U0•) is C-bounded from below (i.e., Γ(U0•) ⊆ B0 + C for some bounded set B0 ⊆ Y), and the bounded subsets of Ω(U0•) are relatively compact, then Ω is l.c. at x0.

Proof. Consider y0 ∈ Ω(x0) and X ⊇ (xi)i∈I → x0. We may suppose that (xi)i∈I ⊆ U0•. Since Γ is l.c. at x0, by Proposition 2.7.6, there exist a subnet (xφ(j))j∈J of (xi) and a net (yj)j∈J → y0 with yj ∈ Γ(xφ(j)) for every j ∈ J. Because Γ(xi) satisfies (DP) for i ∈ I, there exist ȳj ∈ Ω(xφ(j)) and kj ∈ C such that yj = ȳj + kj for all j ∈ J. Since (yj) → y0, there exists a bounded set B1 ⊆ Y such that (yj)j≽j1 ⊆ B1 for some j1 ∈ J. Taking B := B0 ∪ B1, we obtain that ȳj ∈ (B + C) ∩ (B − C) for j ≽ j1. Since C is normal, the set (B + C) ∩ (B − C) is bounded, whence (ȳj)j≽j1 ⊆ Ω(U0•) is bounded. By hypothesis, (ȳj)j≽j1 contains a subnet (ȳψ(p))p∈P converging to ȳ ∈ Y.

3 Optimization in Partially Ordered Spaces

Since Γ is closed at x0 , it follows that y ∈ Γ (x0 ). Since kj = yj − y j ∈ C and C is closed, we obtain that y0 − y ∈ C, whence y0 = y (C being pointed and y0 ∈ Ω(x0 )). Therefore (y ψ(p) )p∈P → y. Using again Proposition 2.7.6  we have that Ω is l.c. at x0 . Examples 3.6.8, 3.6.9, and 3.6.17 show that the condition Γ is closed at x0 , Γ is l.c. at x0 , and (DP) holds for Γ around x0 are essential in the preceding theorem. Also, the hypothesis that Γ (U0• ) is bounded from below for some U ∈ NX (x0 ) is important, as the next example shows. Example 3.6.22. Let X = [0, ∞[, Y = R2 , C = R2+ , and let Γ be defined by {(0, 0)}∪ ] − ∞, 1] × {1} if x = 0, Γ (x) := x 1 1 {(0, 0)} ∪ {(y1 , x+1 y1 + x+1 ) | y1 ∈ [− x , 1]} if x > 0. Then Γ is closed and l.c. at x0 = 0, Γ (x) has the domination property for every x = 0, but Γ (U • ) is not bounded from below for every U ∈ NX (x0 ). Moreover, {(0, 0)} if x = 0, Ω(x) = {(− x1 , 0)} if x > 0. Of course, Ω is not l.c. at x0 . Another version of the preceding theorem is given in the next result. Theorem 3.6.23. Suppose that Y is a normed vector space, C is closed and C ⊆ {y ∈ Y | y ≤ y, y ∗ } for some y ∗ ∈ Y ∗ , and Γ is l.c. and closed at x0 ∈ X. If there exists a neighborhood U0 of x0 such that Γ (x) is complete for x ∈ U0• , y ∗ (Γ (U0• )) is bounded from below, and the bounded subsets of Ω(U0• ) are relatively compact, then Ω is l.c. at x0 . Proof. Fix x ∈ U0• . Since Γ (x) is complete and y ∗ (Γ (x)) is bounded from below, using Theorem 3.3.35 of Attouch–Riahi (see also [32, Th. 2.5]; it is stated for Y a Banach space, but it is sufficient for the set to be complete), Γ (x) has the domination property. Using the samenotation as   in the proof of Theorem 3.6.21, we have that yj − y j  ≤ yj − y j , y ∗ ≤ yj , y ∗  − sup y ∗ (Γ (U0• )). It follows that (y j )j j1 is bounded. The conclusion follows as in the proof of Theorem 3.6.21.  The same examples, as for Theorem 3.6.21, show that all the conditions on Γ in the preceding theorem are essential. Corollary 3.6.24. Suppose that X is a topological vector space, C is closed and normal, Λ : U ⇒ X is H-u.c. and l.c. at u0 , and either f : X × U → Y is equicontinuous on Λ(u0 ) × {u0 } or f is continuous on Λ(u0 ) × {u0 } and  Λ : U ⇒ X × U, Λ(u) := Λ(u) × {u}, is u.c. at u0 . If (DP) holds for f Λ around u0 and either Λ(u0 ) is compact or Ω(u0 ) is compact, then Ω is l.c. at u0 .

3.6 Continuity properties

199

Proof. From Proposition 2.8.5 (iii) we have that f Λ is l.c. at u0 , while from (v) if f is equicontinuous on Λ(u0 ) × {u0 } or (i) if Λ is u.c. at u0 and f is continuous on Λ(u0 ) × {u0 } (of the same proposition), we have that f Λ is H-u.c., or u.c. at u0 , respectively. Moreover, f Λ(u0 ) is compact if Λ(u0 ) is so. The conclusion follows by applying the preceding theorem.  One can obtain the lower continuity of Ω at (x0 , y0 ) without using condition (CP) or compactness conditions if one knows that y0 is a strong proper efficient point of Γ (x0 ). One says that y is a strong proper efficient point of A ⊆ Y , the class of such points being denoted by SPEff(A; C), if y ∈ Eff(A; P ), where P is a proper convex cone such that the pair (C, P ) has the property ∀ W ∈ NY , ∃ V ∈ NY : (C \ W ) + V ⊆ P.

(3.42)

Proposition 3.6.25. Let P ⊆ Y be a convex cone such that (3.42) holds. Then (cl C, P ) satisfies (3.42), cl C \ {0} ⊆ int P , and ∀ W ∈ NY , ∃ V ∈ NY , ∀ y ∈ Y \ (W ∪ P ) : (y + V ) ∩ C = ∅.

(3.43)

If C has a bounded base B, then (C, P ) satisfies (3.42) if and only if there exists V ∈ NY such that B + V ⊆ P . So, if C has a bounded base and Y is a locally convex space, then there exists a proper convex cone P ⊆ Y such that (3.42) holds. The converse is true if Y is a normed vector space. Proof. Let (C, P ) satisfy (3.42) and take W ∈ NY . There exists W0 ∈ NY such that W0 + W0 ⊆ W . By (3.42) there exists V ∈ NY such that (C \ W0 ) + V ⊆ P . Consider V0 ∈ NY such that V0 + V0 ⊆ V . Then (cl C \ W ) + V0 ⊆ P . Indeed, let x = x + v with x ∈ cl C \ W and v ∈ V0 . Since x ∈ cl C ⊆ / W0 ; C + (V0 ∩ W0 ), x = k + w for some k ∈ C and w ∈ V0 ∩ W0 . Then k ∈ otherwise, x ∈ W0 + W0 ⊆ W . Moreover, w + v ∈ V0 + V0 ⊆ V , whence x ∈ (C \ W0 ) + V ⊆ P . / W . Since Consider now y ∈ cl C \ {0}; there exists W ∈ NY such that y ∈ (cl C, P ) satisfies (3.42), there exists V ∈ NY such that (cl C \ W ) + V ⊆ P , whence y + V ⊆ P . Therefore y ∈ int P . In order to prove (3.43), let W ∈ NY and take W1 ∈ NY with W1 + W1 ⊆ W . There exists V1 ∈ NY such that (C \W1 )+V1 ⊆ P . Consider V := V1 ∩W1 . Suppose that for some y ∈ Y \ (W ∪ P ) there exists y  ∈ (y + V ) ∩ C; then / W1 (otherwise, y ∈ W1 −V ⊆ W ). Hence y ∈ y  +V ⊆ P , a contradiction. y ∈ Suppose now that B is a (convex) base of C. Suppose first that (3.42) holds. Since 0 ∈ / cl B, there exists W ∈ NY such that B ∩ W = ∅. There exists V ∈ NY such that B + V ⊆ (C \ W ) + V ⊆ P . Suppose now that V0 ∈ NY and B + V0 ⊆ P . By contradiction, suppose that (3.42) does not hold. Then there exists W0 ∈ NY such that (C\W0 )+V ⊆ P for every V ∈ NY . Of course, we may suppose that B ∩ W0 = ∅. Since B is bounded, there exists λ > 0 such that B ⊆ λW0 . It follows that

200

3 Optimization in Partially Ordered Spaces

∀ V ∈ NY , ∃ tV > 0, ∃ bV ∈ B, ∃ yV ∈ V : tV bV ∈ / W0 , t V b V + y V ∈ / P. Therefore bV + t−1 / P , whence t−1 / V0 for every V ∈ NY . Since V yV ∈ V yV ∈ / W0 and B ⊆ λW0 we get t−1 < λ for every V ∈ NY . Since t V bV ∈ V y ) → 0, contradicting the fact that (yV )V ∈NY → 0, it follows that (t−1 V V y ∈ / V for every V ∈ N . Therefore (3.42) holds. t−1 V 0 Y V Now let Y be a locally convex space and C have a bounded base B. Then / B + V . Taking there exists a convex V ∈ NY such that B ∩ V = ∅. Then 0 ∈ P = [0, ∞[·(B + V ), P is a proper convex cone and B + V ⊆ P . Suppose now that Y is a normed vector space and (3.42) holds for some proper convex cone P ⊆ Y . Let B0 = {y ∈ C | y = 1}. From (3.42) for W = B(0, 1) = {y ∈ Y | y < 1}, there exists r > 0 such that B0 + B(0, r) ⊆ P . Then B = conv B0 is a bounded base for C. Indeed, it is obvious that C = [0, ∞[·B. But, B + B(0, r) = conv (B0 + B(0, r)) ⊆ P, and so cl B + B(0, r/2) ⊆ P . If 0 ∈ cl B, then B(0, r/2) ⊆ P , whence P = Y , a contradiction. It follows that B is a (bounded) base for C.  In the definition of a strong proper efficient point we may suppose that P is pointed (otherwise, replace P by {0} ∪ int P ). Theorem 3.6.26. Consider y0 ∈ SPEff(Γ (x0 ); C) and suppose that Γ is C0 H-u.c. at x0 and l.c. at (x0 , y0 ), where C0 = C if C is normal and C0 = {0} otherwise. Then Ω is l.c. at (x0 , y0 ) if for some U0 ∈ NX (x0 ) one of the following conditions holds: (i) Γ (x) has the domination property for every x ∈ U0 ; (ii) Y is locally compact, C is closed, and Γ (u) is closed for every x ∈ U0 . Proof. Let P be the pointed convex cone with the property (3.43) for which y0 ∈ Eff(Γ (x0 ); P ). It follows that Γ (x0 ) ⊆ (y0 − P )c ∪ {y0 },

(3.44)

where Ac denotes Y \ A. Consider W ∈ NY such that (W + C0 ) ∩ (W − C0 ) = W , and W compact if Y is locally compact. Let W1 ∈ NY be such that W1 + W1 ⊆ W . From (3.43) we get V ∈ NY such that (y + V ) ∩ C = ∅ for every y ∈ Y \ (W ∪ P ). It follows that [((y0 − P )c \ (y0 + W1 )) + V ] ∩ (y0 − C) = ∅. Taking V1 ∈ NY such that V1 + V1 ⊆ V , we obtain that [((y0 − P )c \ (y0 + W1 )) + V1 ] ∩ [(y0 + V1 ) − C] = ∅.

(3.45)

From (3.44), taking into account that (y0 − P )c + C = (y0 − P )c , we obtain Γ (x0 ) + V1 ∩ W1 + C0 ⊆ [((y0 − P )c \ (y0 + W1 )) + V1 ∩ W1 ] ∪ (y0 + W + C0 ). Since Γ is C0 -H-u.c. at x0 , there exists U1 ∈ NX (x0 ) such that

3.6 Continuity properties

Γ (x) ⊆ Γ (x0 ) + V1 ∩ W1 + C0

201

∀ x ∈ U1 .

Therefore Γ (x) ⊆ [((y0 − P )c \(y0 + W1 )) +V1 ∩ W1 ] ∪ (y0 + W + C0 )

∀ x ∈ U1 . (3.46)

Since Γ is l.c. at (x0 , y0 ), there exists U2 ∈ NX (x0 ) such that (y0 + V1 ∩ W1 ) ∩ Γ (x) = ∅ for every x ∈ U2 . For x ∈ U := U0 ∩ U1 ∩ U2 let yx ∈ (y0 + V1 ∩ W1 ) ∩ Γ (x). Then, by applying (3.45), we obtain that (yx − C) ∩ [((y0 − P )c \ (y0 + W1 )) + V1 ∩ W1 ] = ∅. Thus, from (3.46), we obtain that (yx − C) ∩ Γ (x) ⊆ (y0 + W + C0 ) ∩ (y0 + V1 ∩ W1 − C) ⊆ y0 + W

∀ x ∈ U.

For x ∈ U , if (i) holds, then there exists y x ∈ Ω(x) ∩ (yx − C), while if (ii) holds, (yx − C) ∩ Γ (x) is a compact set (as a closed subset of a compact set), and we obtain again y x ∈ Ω(x) ∩ (yx − C); in both cases we have that  Ω(x) ∩ (y0 + W ) = ∅ for all x ∈ U . The proof is complete. As a consequence of the preceding theorem we get the following result. Corollary 3.6.27. Let Y = Rm and let C be closed and pointed. Suppose that y0 is an element of SPEff(Γ (x0 ); C) and Γ is closed-valued, H-C-u.c. at x0 , and l.c. at (x0 , y0 ). Then Ω is l.c. at (x0 , y0 ). Proof. Note that C is normal in this case. Applying Theorem 3.6.26 (ii) we get the conclusion.  Examples 3.6.8, 3.6.9 (with y0 ∈ Ω(x0 )), and 3.6.17 (with y0 = (−2−1/2 , −2−1/2 )) show that the hypotheses Γ is H-C-u.c. at x0 , Γ is l.c. at (x0 , y0 ), and Γ is closed-valued, respectively, are essential for Theorem 3.6.26 and Corollary 3.6.27. Corollary 3.6.28. Suppose that Λ : U ⇒ X is l.c. and H-u.c. at u0 , f : X × U → Y is equicontinuous on Λ(u0 ) × {u0 }, and f Λ satisfies (DP) around u0 . Then Ω is l.c. at (u0 , y0 ) for all y0 ∈ SPEff (f Λ(u0 ); C). Proof. Applying again Proposition 2.8.5 (iii) and (vii), f Λ is l.c. and H-u.c. at u0 . The conclusion follows by applying the preceding theorem.  Note that Sterna-Karwat [519] established Proposition 3.6.1 for Γ with the domination property, C-mid-convex, bounded-valued, and H-C-l.c. at x0 ∈ dom Γ , and Proposition 3.6.2 for Γ with the domination property, Cmid-convex, bounded-valued, and C-upper bounded, i.e., Im Γ ⊆ B − C for some bounded set B ⊆ Y . Corollary 3.6.4 was established by Ferro [192, Th. 4.1] under the following additional assumptions: U is a locally convex space ordered by closed convex

202

3 Optimization in Partially Ordered Spaces

cone D with nonempty interior, G is a function, Λ(u1 ) ⊆ Λ(u2 ) if u2 − u1 ∈ int D. Theorem 3.6.5, Propositions 3.6.10, 3.6.16, and Corollaries 3.6.11, 3.6.7, 3.6.18 were obtained by Penot and Sterna-Karwat in [466] and [467] even for more general relations then ≤C . Theorem 3.6.12 for P = {0} is stated by Bednarczuk in [50], while Theorem 3.6.12 for P = −C and Corollary 3.6.13 are stated in [52]. Theorem 3.6.19 with the stronger conditions that Γ is H-u.c. and l.c. at x0 , Theorem 3.6.20 under somewhat different conditions Theorem 3.6.26 (except the case when C is normal) and its corollary are obtained in [52]. Also, Corollaries 3.6.15, 3.6.24, and 3.6.28 are from [52]. Theorem 3.6.23 was obtained by Attouch and Riahi [32] for X = N ∪ {∞}, x0 = ∞, and Y a Banach space. 3.6.2 Continuity Properties for the Optimal Multifunction in the Case of Moving Cones As in the preceding section, X is a Hausdorff topological space, Y is a Hausdorff topological vector space, and Γ : X ⇒ Y . In this section instead of a single convex cone C ⊆ Y we consider a multifunction C : X ⇒ Y whose values are pointed convex cones. We associate also the multifunction C c : X ⇒ Y defined by C c (x) := Y \ (C(x) \ {0}) = (Y \ C(x)) ∪ {0}. In this section Ω, Ωw : X ⇒ Y , Ω(x) := Eff(Γ (x); C(x)), and Ωw (x) := wEff(Γ (x); C(x)). Theorem 3.6.29. Suppose that Γ (x0 ) ⊆ lim inf (Γ (x) + C(x)) and lim sup C c (x) ⊆ C c (x0 ). x→x0

(3.47)

x→x0

(i) If lim supx→x0 Γ (x) ⊆ Γ (x0 ) + C(x0 ), then lim supx→x0 Ω(x) ⊆ Ω(x0 ). (ii) If Γ is closed at x0 and C(x0 ) \ {0} is open, then Ω is closed at x0 . (iii) If Γ is compact at x0 , then Ω is u.c. at x0 . Proof. (i) Let y ∈ lim supx→x0 Ω(x). By Proposition 2.7.3 (i), there exists a net ((xi , yi ))i∈I in gr Ω converging to (x0 , y) for i ∈ I. Since gr Ω ⊆ gr Γ , / from the hypothesis, we obtain that y ∈ Γ (x0 ) + C(x0 ). Suppose that y ∈ Ω(x0 ) = Eff (Γ (x0 ) + C(x0 ); C(x0 )); then there exists k ∈ C(x0 ) \ {0} such that y  := y − k ∈ Γ (x0 ). Since Γ (x0 ) ⊆ lim inf x→x0 (Γ (x) + C(x)), using again Proposition 2.7.3, there exist a subnet (xϕ(j) )j∈J of (xi ) and the nets (yj ), (kj ) with yj ∈ Γ (xϕ(j) ), kj ∈ C(xϕ(j) ) for j ∈ J and (yj + kj ) → y  . Since (yϕ(j) − yj − kj ) → y − y  = 0, we may suppose that yϕ(j) − yj − kj = 0 for all j ∈ J. Taking into account that yϕ(j) ∈ Ω(xϕ(j) ), we have that yϕ(j) − yj − kj ∈ C c (xϕ(j) ) for j ∈ J. Since lim supx→x0 C c (x) ⊆ C c (x0 ), we obtain that k ∈ C c (x0 ), a contradiction. Therefore y ∈ Ω(x0 ). (ii) If Γ is closed at x0 , then Γ (x0 ) is closed and lim supx→x0 Γ (x) ⊆ Γ (x0 ). From (i) we obtain that lim supx→x0 Ω(x) ⊆ Ω(x0 ). Since C(x0 ) \ {0} is open, Ω(x0 ) is closed. Using Proposition 2.7.12 we obtain that Ω is closed at x0 .

3.6 Continuity properties

203

(iii) Suppose that Ω is not u.c. at x0 . Then there exist an open set D0 ⊆ Y such that Ω(x0 ) ⊆ D0 and a net ((xi , yi ))i∈I ⊆ gr Ω (⊆ gr Γ ) with (xi ) → x0 and yi ∈ / D0 for every i ∈ I. Since Γ is compact at x0 , there exists a subnet / D0 , and so y ∈ / Ω(x0 ). Hence there (yϕ(j) )j∈J → y ∈ Γ (x0 ). Of course, y ∈ exists k ∈ C(x0 ) \ {0} such that y  = y − k ∈ Γ (x0 ). Proceeding as in (i) we  obtain a contradiction. Therefore Ω is u.c. at x0 . In the next example all the hypotheses of the preceding theorem are satisfied at x0 = −1, except Γ (x0 ) ⊆ lim inf x→x0 (Γ (x) + C(x)), and the conclusions of (i)–(iii) fail. Example 3.6.30. Let X = ] − ∞, 0[, Y = R2 , C(x) = {(0, 0)}∪ ]0, ∞[ × ]0, ∞[, and let Γ be defined by {(y1 , y2 ) ∈ R2 | y2 ≥ xy1 , y1 ≤ 1, y2 ≤ 1} if x = −1, Γ (x) = 2 {(y1 , y2 ) ∈ R | y2 ≥ −y1 − 1, y1 ≤ 1, y2 ≤ 1} if x = −1. Then

Ω(x) =

{(y1 , y2 ) ∈ R2 | y2 = xy1 , x−1 ≤ y1 ≤ 1} if x = −1, 2 {(y1 , y2 ) ∈ R | y2 = −y1 − 1, −2 ≤ y1 ≤ 1} if x = −1.

In the next example all the hypotheses of the preceding theorem are satisfied at x0 = 1, except lim supx→x0 C c (x) ⊆ C c (x0 ), and the conclusions of (i)–(iii) fail. Example 3.6.31. ([541], modified) Let X = ]0, ∞[, Y = R2 , and Γ and C be defined by Γ (x) = {(y1 , y2 ) ∈ R2 | y12 + y22 ≤ 1}, {(0, 0)} ∪ {(y1 , y2 ) ∈ R2 | y2 > 0, y1 + y2 > 0} if x = 1, C(x) = {(0, 0)}∪ ]0, ∞[ × ]0, ∞[ if x = 1. Then



Ω(x) =

{(y1 , y2 ) ∈ R2 | y12 + y22 = 1, y1 ≤ 0, y1 − y2 ≥ 0} if x = 1, {(y1 , y2 ) ∈ R2 | y12 + y22 = 1, y1 ≤ 0, y2 ≤ 0} if x = 1.

In the next example condition (3.47) holds, but the hypotheses of (i) and (ii) fail at x0 = 0, as well as their conclusions. Example 3.6.32. ([541], modified) Let X = ]0, ∞[, Y = R2 , and let Γ and C be defined by Γ (x) = {(y1 , y2 ) ∈ R2 | y12 + y22 ≤ 1} \ {(−1, 0)}, C(x) = {(0, 0)} ∪ {(y1 , y2 ) ∈ R2 | y2 > 0, y1 + xy2 > 0}. Then

Ω(x) =

{(y1 , y2 ) ∈ R2 | y12 + y22 = 1, y1 ≤ 0, y2 ≤ xy1 } if x = 0, {(y1 , y2 ) ∈ R2 | y12 + y22 = 1, y1 ≤ 0, y2 < 0} if x = 0.

204

3 Optimization in Partially Ordered Spaces

Theorem 3.6.33. Consider (x0 , y0 ) ∈ gr Ω. Suppose that Γ + C is l.c. at (x0 , y0 ) and Γ (x) has (DP) w.r.t. C(x) for x ∈ U • , where U ∈ NX (x0 ). Then (i) y0 ∈ lim inf x→x0 (Ω(x) + C(x)). (ii) Moreover, assume that lim supx→x0 Γ (x) ⊆ Γ (x0 ) (or, more generally, lim supx→x0 Ω(x) ⊆ Γ (x0 ) + C(x0 )) and lim supx→x0 C(x) ⊆ C(x0 ) and for all nets ((xi , yi ))i∈I ⊆ gr Γ , ((xi , zi ))i∈I ⊆ gr Ω with zi ≤C(xi ) yi for i ∈ I, (zi ) contains a convergent subnet. Then Ω is l.c. at (x0 , y0 ). Proof. Let X \ {x0 } ⊇ (xi ) → x0 . Since Γ + C is l.c. at (x0 , y0 ), there exist a subnet (xϕ(j) )j∈J and a net (yj + kj )j∈J → y0 with yj ∈ Γ (xϕ(j) ) and kj ∈ C(xϕ(j) ) for every j ∈ J. Since Γ (xϕ(j) ) has (DP) w.r.t. C(xϕ(j) ), there exist zj ∈ Ω(xϕ(j) ) and kj ∈ C(xϕ(j) ) such that yj = zj + kj . (i) It follows that yj + kj ∈ Ω(xϕ(j) ) + C(xϕ(j) ) for every j ∈ J. Therefore y0 is in lim inf x→x0 (Ω(x) + C(x)). (ii) By hypothesis, there exists a subnet (zψ(l) )l∈L convergent to some z ∈ Y . It follows that z = z0 + k0 with z0 ∈ Γ (x0 ), k0 ∈ C(x0 ), and (kψ(l) +  ) → y0 − z =: k ∈ C(x0 ). Therefore y0 = z + k = z0 + k0 + k. Since y0 ∈ kψ(l) Ω(x0 ) and C(x0 ) is pointed, we obtain that z = z0 = y0 . Using Proposition  2.7.6, we have that Ω is l.c. at y0 . Note that the last condition of the hypothesis of (ii) in the preceding theorem holds if Ω(U ) is relatively compact for some U ∈ NX (x0 ). Taking C(x) = C and y0 ∈ Ω(x0 ) in Examples 3.6.9, 3.6.17, 3.6.8, and 3.6.22, we observe that the conditions of Theorem 3.6.33, Γ + C is l.c. at (x0 , y0 ), Γ (x) has (DP) w.r.t. C(x) for x ∈ U , lim supx→x0 Γ (x) ⊆ Γ (x0 ), and the second hypothesis of (ii) are essential for obtaining (i) and the conclusion of (ii), respectively. In the following example all the hypotheses of Theorem 3.6.33 (ii) for x0 = 1 and y0 = (−1, 0) hold, except lim supx→x0 C(x) ⊆ C(x0 ); Ω is not l.c. at (x0 , y0 ). Example 3.6.34. Let X = R> , Y = R2 , and let the multifunctions Γ and C be defined by Γ (x) = {(y1 , y2 ) ∈ R2 | y12 + y22 ≤ 1}, {(y1 , y2 ) ∈ R2 | y2 ≥ 0, y1 + y2 ≥ 0} if x = 1, C(x) = R2+ if x = 1. Then Ω(x) =

{(y1 , y2 ) ∈ R2 | y12 + y22 = 1, y1 ≤ 0, y1 − y2 ≥ 0} if x = 1, {(y1 , y2 ) ∈ R2 | y12 + y22 = 1, y1 ≤ 0, y2 ≤ 0} if x = 1.

Before stating the next result we extend the notion of lower semicontinuous multifunction to the case of moving cones. So, if F : X × U ⇒ Y [G : U ⇒ Y ], we say that F [G] is C-lower semicontinuous (C-l.s.c. for

3.6 Continuity properties

205

short), where this time C : U ⇒ Y if the sets levF,C (y) := {(x, u) ∈ X × U | F (x, u) ∩ (y − C(u)) = ∅} [levG,C (y) := {u ∈ U | G(u) ∩ (y − C(u)) = ∅}] are closed for every y ∈ Y . We have the following extension of Corollary 3.6.4. Theorem 3.6.35. Let F : X × U ⇒ Y be C-l.s.c. and let Λ : U ⇒ X be compact at every u ∈ U. Assume that for every u ∈ U and every x ∈ Λ(u) one has that C(u) is closed and every strictly C(u)-decreasing net from F (x, u) + C(u) is C(u)-lower bounded in F (x, u) + C(u). Then Γ (u) := F (Λ(u) × {u}) has the domination property w.r.t. C(u) for every u ∈ U, and the optimal multifunction Ω is C-l.s.c. Proof. Let us note first that Γ (u) has (DP) w.r.t. C(u) for every u ∈ U. The proof is the same as that of the corresponding part in the proof of Corollary 3.6.4; just replace G(x) by F (x, u) and C by C(u). Let y ∈ Y , and suppose that the net (ui ) ⊆ levΩ,C (y) converge to u ∈ U. Therefore, for every i ∈ I there exist xi ∈ Λ(ui ) and yi ∈ F (xi , ui ) such that yi ∈ y − C(ui ). It follows that ((xi , ui ))i∈I ⊆ levF,C (y). Since Λ is compact at u, there exists the subnet (xϕ(j) )j∈J of (xi )i∈I converging to x ∈ Λ(u). Since F is C-l.s.c. we get (x, u) ∈ levF,C (y), whence u ∈ levΓ,C (y). Since Γ (u) has (DP) w.r.t. C(u), we  obtain that u ∈ levΓ,C (y). Therefore Ω is C-l.s.c. Theorem 3.6.29 (i), (iii) was essentially obtained by Dolecki and Malivert [161], while (ii) was obtained by Penot and Sterna-Karwat [466]. From this theorem one obtains easily Theorem 2.1 of [388] (established for Y a normed vector space, X = N ∪ {∞}, x0 = ∞, and and C(x) \ {0} open) and Theorem 2.2 of [388] (established for Y a reflexive Banach space endowed with the weak topology and X = N ∪ {∞}, x0 = ∞). Theorem 3.6.33 (i) extends a result of Luc [387], while (ii) is obtained, essentially, in [466] for C a constant multifunction, in [161] for Ω compact at x0 , and in [388] for X = N ∪ {∞}, x0 = ∞. From this theorem one obtains easily Theorems 3.3 and 3.4 from [388]. Under the condition lim supx→x0 Ω(x) ⊆ Γ (x0 )+C(x0 ) with C(x0 ) = C a pointed closed convex cone, X = N ∪ {∞}, and x0 = ∞, Theorem 3.6.33 (ii) is obtained in [385, Th. 2.2]; note that [385, Th. 2.3] can be obtained from Theorem 3.6.33 (ii), too. The notion of lower semicontinuity w.r.t. moving cones is given by Ferro [193]. Theorem 3.6.35 was established by Ferro [193, Th. 4.1], too; in [193, Th. 4.2] a slightly different result is also stated. 3.6.3 Continuity Properties for the Solution Multifunction Consider U, X two Hausdorff topological spaces and Y a Hausdorff topological vector space. As in the preceding section, consider a multifunction C : U ⇒ Y whose values are pointed convex cones. With C we associate the multifunction C c : U ⇒ Y defined by C c (u) := (Y \ C(u)) ∪ {0}. Take also Λ : U ⇒ X and f : X × U → Y . For a fixed u0 ∈ U we consider the problem (P) C(u0 )-minimize f (x, u0 ) subject to x ∈ Λ(u0 ),

206

3 Optimization in Partially Ordered Spaces

and its perturbed problems (Pu ) C(u)-minimize f (x) subject to x ∈ Λ(u). Thus the initial problem is just (Pu0 ). We associate the optimal-value multifunction Ω : U ⇒ Y and the solution multifunction Σ : U ⇒ X defined, respectively, by Ω(u) := Eff (f (Λ(u) × {u}); C(u)) ,

Σ(u) := {x ∈ Λ(u) | f (x, u) ∈ Ω(u)}.

It is obvious that dom Σ = dom Ω. In the preceding section we obtained continuity properties of Ω in Corollaries 3.6.15, 3.6.24, and 3.6.28. In this section we are interested in continuity properties for the solution multifunction Σ. We begin with a closedness condition for Σw . Theorem 3.6.36. Suppose that f is continuous on Λ(u0 ) × {u0 }, Λ(u0 ) ⊆ lim inf Λ(u), and lim sup C c (u) ⊆ C c (u0 ). u→u0

(3.48)

u→u0

(i) If lim supu→u0 Λ(u) ⊆ Λ(u0 ), then lim supu→u0 Σ(u) ⊆ Σ(u0 ). (ii) If C(u0 ) \ {0} is open and Λ is closed at u0 , then Σ is closed at u0 . (iii) If C(u0 ) \ {0} is open and Λ is compact at u0 (Λ(u0 ) is compact and Λ is u.c. at u0 ), then Σ is compact at u0 (Σ is u.c. at u0 ). Proof.  (i) Let x ∈ lim supu→u0 Σ(u). This means that there exists a net (ui , xi ) i∈I ⊆ gr Σ (⊆ gr Λ) converging to (u0 , x0 ). From the hypothesis we / Σ(u0 ); then there exists x ∈ Λ(u0 ) have that x0 ∈ Λ(u0 ). Suppose that x0 ∈ such that k := f (x0 , u0 ) − f (x, u0 ) ∈ C(u0 ) \ {0}. Since  x0 ∈ Λ(u0 ), from (3.48) there exist a subnet uϕ(j) j∈J of (ui ) and a net xϕ(j) j∈J converging     to x. Since f is continuous on Λ(u0 )×{u0 }, f xϕ(j) , uϕ(j) −f xϕ(j) , uϕ(j) →     k = 0, so we may suppose that f xϕ(j) , uϕ(j) − f xϕ(j) , uϕ(j) = 0 for every      j ∈ J. Since xϕ(j) ∈ Σ(uϕ(j) ), we have that f xϕ(j) , uϕ(j) −f xϕ(j) , uϕ(j) ∈ C c (uϕ(j) ) for j ∈ J. From (3.48) we obtain that k ∈ C c (u0 ), a contradiction. Therefore x0 ∈ Σ(u0 ). (ii) Using the fact that Λ(u0 ) is closed (Λ being closed at u0 ) and C(u0 ) \ {0} is open, one obtains easily that Σ(u0 ) is closed. The conclusion then follows from (i). (iii) If Λ is compact at u0 , the proof is similar to that of (ii). If Λ(u0 ) is compact and Λ is u.c. at u0 , then Λ is closed at u0 by Proposition 2.7.12 (iv). Noting that Σ(u) = Σ(u) ∩ Λ(u) for every u ∈ U, the upper continuity of Σ follows from (ii) and Proposition 2.7.13 (i).  Note that the preceding result can be stated for a multifunction F : X × U ⇒ Y instead of the function f ; the conclusions remain valid if we suppose that F is compact and l.c. at (x, u0 ) for every x ∈ Λ(u0 ). In the sequel in this section f (x, u) = g(x), where g : X → Y and C(u) = C for every u ∈ U.

3.6 Continuity properties

207

Theorem 3.6.37. Suppose that int C = ∅, g is continuous, Λ is u.c. at u0 , (P) is well-posed, and Ω is (−C)-H-u.c. at u0 . Then Σ is u.c. at u0 . Proof. If Σ(u0 ) = ∅, then Ω(u0 ) = ∅. Since Ω is (−C)-H-u.c. at u0 , it follows that u0 ∈ int (U \ dom Ω) = int (U \ dom Σ), and so Σ is u.c. at u0 . Let Σ(u0 ) = ∅. Consider an open set D ⊆ Y such that Σ(u0 ) ⊆ D. Since (P) is well-posed, there exists W ∈ NY such that

{x ∈ Λ(u0 ) | g(x) ≤C η + ε} ⊆ D ∀ ε ∈ W ∩ C. η∈Ω(u0 )

Let L(A) := Ω(u0 ) + A − C for A ⊆ Y . The above relation shows that Λ(u0 ) ⊆ D ∪ X \ g −1 (L(W ∩ C) . Fix ε ∈ W ∩ int C. There exists W1 ∈ NY with ε + W1 ⊆ C. It follows that W1 ⊆ ε − C ⊆ (W ∩ C) − C, and so Λ(u0 ) ⊆ D ∪ X \ g −1 (L(W1 )) . Let W2 ∈ NY be such that W2 + W2 ⊆ W1 . Since cl L(W2 ) ⊆ W2 + L(W2 ) ⊆ L(W1 ), we have that   Λ(u0 ) ⊆ D ∪ X \ g −1 (cl L(W2 )) =: D1 . Because g is continuous, g −1 (cl L(W2 )) is closed, and so D1 is open. Taking into account that Λ is u.c. at u0 , there exists U1 ∈ NU (u0 ) such that Λ(U1 ) ⊆ D1 . Since Ω is (−C)-H-u.c. at u0 , there exists U2 ∈ NU (u0 ) such that Ω(U2 ) ⊆ Ω(u0 ) + W2 − C = L(W2 ). Let u ∈ U1 ∩ U2 and x ∈ Σ(u) (⊆ Λ(u)). Then g(x) ∈ Ω(u) ⊆ L(W2 ), which means that x ∈ g −1 (L(W2 )) ⊆ g −1 (cl L(W2 )). It follows that x ∈ D. Therefore Σ(U1 ∩ U2 ) ⊆ D, which shows that Σ is u.c.  at u0 . Corollary 3.6.38. Suppose that int C = ∅, g is continuous, Λ is u.c. at u0 , Ω(u0 ) is compact, (P) is η-well-posed, and Ω is (−C)-H-u.c. at u0 . Then Σ is u.c. at u0 . Proof. By Proposition 3.5.5 the problem (P) is well-posed. The conclusion follows from the preceding theorem.  The next result establishes sufficient conditions for Hausdorff upper continuity of Σ. Theorem 3.6.39. Suppose that X is a topological vector space, int C = ∅, g is uniformly continuous on X, Λ is H-u.c. at u0 , (P) is weakly well-posed, and Ω is (−C)-H-u.c. at u0 . Then Σ is H-u.c. at u0 . Proof. Let V ∈ NX be fixed and consider V1 ∈ NX such that V1 + V1 ⊆ V . Since (P) is weakly well-posed, there exists W ∈ NY such that

{x ∈ Λ(u0 ) | g(x) ≤C η + ε} ⊆ Σ(u0 ) + V1 ∀ ε ∈ W ∩ C, η∈Ω(u0 )

i.e., with the notation from the proof of the preceding theorem, Λ(u0 ) ∩ g −1 (L(W ∩ C)) ⊆ Σ(u0 ) + V1 .

208

3 Optimization in Partially Ordered Spaces

Taking ε, W1 , and W2 as in the proof of the preceding theorem, we obtain that   (3.49) Λ(u0 ) ⊆ (Σ(u0 ) + V1 ) ∪ X \ g −1 (L(W1 )) . By the uniform continuity of g there exists V2 ∈ NX such that      g X \ g −1 (L(W1 )) + V2 ⊆ g X \ g −1 (L(W1 )) +W2 ⊆ (Y \ L(W1 ))+W2 . Since (Y \ L(W1 )) + W2 ⊆ Y \ L(W2 ), we obtain that   X \ g −1 (L(W1 )) + V2 ⊆ g −1 (Y \ L(W2 )) = X \ g −1 (L(W2 )) .

(3.50)

Since Λ is H-u.c. at u0 , there exists U1 ∈ NU (u0 ) such that Λ(U1 ) ⊆ Λ(u0 ) + V1 ∩ V2 . From (3.49) we have that    Λ(u0 ) + V1 ∩ V2 ⊆ (Σ(u0 ) + V1 + V1 ∩ V2 ) ∪ X \ g −1 (L(W1 )) + V1 ∩ V2 . Taking into account (3.50) and that V1 + V1 ⊆ V , we get   Λ(u0 ) + V1 ∩ V2 ⊆ (Σ(u0 ) + V ) ∪ X \ g −1 (L(W2 )) . Therefore

  Λ(U1 ) ⊆ (Σ(u0 ) + V ) ∪ X \ g −1 (L(W2 )) .

(3.51)

Since Ω is (−C)-H-u.c. at u0 , there exists U2 ∈ NU (u0 ) such that Ω(U2 ) ⊆ Ω(u0 ) + W2 − C = L(W2 ). So, for u ∈ U1 ∩ U2 and x ∈ Σ(u) we have that g(x) ∈ L(W2 ). From (3.51) we obtain that x ∈ Σ(u0 ) + V . This shows that  Σ(U1 ∩ U2 ) ⊆ Σ(u0 ) + V , and so Σ is u.c. at u0 . Theorem 3.6.36 was stated, essentially, by Penot and Sterna-Karwat in [466] and Dolecki and Malivert [161]. Bednarczuk stated Theorems 3.6.37 and 3.6.39 and Corollary 3.6.38 in [52].

3.7 Sensitivity of Vector Optimization Problems Throughout this section X, Y are normed vector spaces and {0} = C ⊆ Y is a pointed closed convex cone; when we consider weak minimal points of a set we assume also that int C = ∅. Let D ⊆ Y ; recall that the efficient set of D w.r.t. C is Eff(D; C) = {y ∈ D | D ∩ (y − C) ⊆ y + C}, the set of Henig proper efficient points of D w.r.t. C is

HEff(D; C) = {Eff(D; K) | Kconvex cone, C \ {0} ⊆ int K = Y }, the set of Benson proper efficient points of D w.r.t. C is   BEff(D; C) = y ∈ D | cone(D + C − y) ∩ (−C) = {0} ,

3.7 Sensitivity of vector optimization problems

209

and the set of weakly efficient points of D w.r.t. C is the set wEff(D; C) = Eff(D; {0} ∪ int C). It is obvious that SPEff(D; C) ⊆ HEff(D; C) ⊆ BEff(D; C) ⊆ Eff(D; C) ⊆ wEff(D; C), the last inclusion if int C = ∅. Note that if C has a compact base, then SPEff(D; C) = HEff(D; C) = BEff(D; C). The first equality follows from Proposition 3.6.25. Let us prove the second one. Suppose that B is a compact base for C (i.e., B is compact, convex, 0 ∈ / B, and C = cone B), and y ∈ BEff(D; C). It follows that cone(D − y) ∩ (−B) = ∅, and so r := d(−B, cone(D − y)) > 0. Otherwise, there are (yn ) ⊆ cone(D − y) such that bn + yn → 0. There exists bnk → b ∈ B since B is compact, and so ynk → −b, which shows that cone(D − y) ∩ (−B) = ∅. Taking K := cone(B + 2r BY ), we have that C \ {0} ⊆ int K and (D − y) ∩ (−K) ⊆ cone(D − y) ∩ (−K) = {0}. Therefore y ∈ HEff(D; C). The equality of HEff(D; C) = BEff(D; C) in the case of finite-dimensional spaces is proven by Henig [267]. Consider the multivalued mapping Γ : X ⇒ Y and the following associated multivalued mappings: Ω, HΩ, Ωw : X ⇒ Y , given by Ω(x) := Eff(Γ (x); C), HΩ(x) := HEff(Γ (x); C), Ωw (x) := wEff(F (u); C). In the sequel we shall also use the multivalued mapping ΓC defined by ΓC (u) := Γ (u) + C for u ∈ X. Of course, gr ΓC = gr Γ + {0} × C. Throughout this section we denote by DΓ (x, y)(u) the set DΓ (x, y)(u). Shi [503] introduced the following derivative of Γ at (x, y) ∈ gr Γ in the direction u: SΓ (x, y)(u) := {y ∈ Y | ∃ (tn ) ⊆ R> , ((xn , yn )) ⊆ gr Γ : xn → x, tn (xn − x, yn − y) → (u, y)}. It is obvious that for (x, y) ∈ gr Γ , gr DΓ (x, y) ⊆ gr SΓ (x, y) ⊆ cone (gr Γ − (x, y)) , with equality if gr Γ is a convex set, and SΓ (x, y)(u) = DΓ (x, y)(u)

∀ u ∈ X \ {0}.

In the sequel we shall use several times the following condition: SΓ (x, y)(0) ∩ (−C) = {0}.

(3.52)

Note that if (3.52) holds, then cone(Γ (x)− y) ∩ (−C) = {0}, while if C has a compact base, then y ∈ BEff (Γ (x); C), and therefore y ∈ HΩ(x). In the following theorem we give sufficient conditions for (3.52). Theorem 3.7.1. Let y ∈ Γ (x). Then each of the following conditions is sufficient for (3.52):

210

3 Optimization in Partially Ordered Spaces

(i) y ∈ BEff (Γ (x); C), Γ is upper Lipschitz at x, i.e., there exist L, ρ > 0 such that Γ (x) ⊆ Γ (x) + L x − x BY for all x ∈ B(x, ρ); (ii) y ∈ BEff (Γ (x); C), X, Y are finite-dimensional, x ∈ icr(dom Γ ), and Γ is C-convex; (iii) y ∈ HEff (Γ (x); C), X, Y are Banach spaces, x ∈ int(dom Γ ), and Γ is C-convex and C-closed (that is, gr ΓC is closed). Proof. (i) Let y ∈ SΓ (x, y)(0) ∩ (−C). Then there exist (tn ) ⊆ (0, ∞), ((xn , yn )) ⊆ gr Γ such that xn → 0 and tn ((xn , yn ) − (x, y)) → (0, y). We may suppose that xn ∈ B(x, ρ) for every n ∈ N. It follows that for every n there exist y n ∈ Γ (x) and vn ∈ BY such that yn = y n + L xn − x · vn . Therefore tn (y n − y) − tn (yn − y) ≤ Ltn xn − x for every n ∈ N, whence tn (y n − y) → y. So y ∈ cone(Γ (x) − y) ∩ (−C) = {0}, a contradiction. (ii) Taking (x, y) = (0, 0) (replacing Γ by Γ − (x, y)) and replacing X by span(dom Γ ) if necessary, we may suppose that x ∈ int(dom Γ ). Having in view Proposition 3.6.25, since dim Y < ∞ and y ∈ BEff (Γ (x); C), we have that y ∈ HEff (Γ (x); C); therefore there exists a convex cone K such that C \ {0} ⊆ int K = Y and Γ (x) ∩ (y − K) ⊆ y + K. Since gr ΓK = gr Γ + / icr(gr ΓK ); {0} × C + {0} × K, the set gr ΓK is convex. We have that (x, y) ∈ in the contrary case for c ∈ C \ {0} there exists some λ > 0 such that (x, y) − λ(0, c) = (x, y − λc) ∈ gr Γ + {0} × K, i.e., y − λc − k  ∈ Γ (x) for some k  ∈ K. This contradicts the hypothesis, since λc + k  ∈ C \ {0} + K ⊆ int K. Using a separation theorem, there exists (u∗ , y ∗ ) ∈ X ∗ × Y ∗ \ {(0, 0)} such that x, u∗  + y, y ∗  ≤ x, u∗  + y + k, y ∗ 

∀ (x, y) ∈ gr Γ, ∀ k ∈ K.

(3.53)

If y ∗ = 0, then x, u∗  ≤ x, u∗  for every x ∈ dom Γ , and so u∗ = 0. Therefore y ∗ = 0. From (3.53) we obtain k, y ∗  ≥ 0 for k ∈ K, whence k, y ∗  > 0 for every k ∈ int K ⊇ C \ {0}. Suppose that there exists y ∈ SΓ (x, y)(0) ∩ (−C \ {0}). Then there are (tn ) ⊆ R> and ((xn , yn )) ⊆ gr Γ such that xn → 0 and tn ((xn , yn ) − (x, y)) → (0, y). From (3.53) we obtain that tn (xn − x), u∗  + tn (yn − y), y ∗  ≥ 0 for every n, whence −y, y ∗  ≤ 0, a contradiction. (iii) Let K be a convex cone such that C \ {0} ⊆ int K = Y and y ∈ Eff(Γ (x); K). If we show that int(gr ΓK ) = ∅, arguments like those in the proof of (ii) show that (3.52) holds. For this aim consider the relation R := {(y, x) | (x, y) ∈ gr ΓK }; R is closed, convex, and x ∈ int(Im R). Applying the Robinson–Ursescu theorem (see [36, Th. 2.2.2]), we have that R(V ) is a neighborhood of x for every neighborhood V of y. Let k0 ∈ int K. There exists ρ > 0 such that B(k0 , ρ) ⊆ K. It follows that for some δ > 0 we have B(x, δ) ⊆ R (B(y, ρ/2)). Let us show that B(x, δ) × B(k0 + y, ρ/2) ⊆ gr ΓK . Indeed, let (u, y) belong to the first set. There exists y  ∈ R−1 (u) such that y  − y ≤ ρ/2. It follows that y = y  + (y − y  ) and k0 − (y − y  ) ≤ (k0 + y − y) + y  − y ≤ ρ/2 + ρ/2 = ρ,

3.7 Sensitivity of vector optimization problems

211

which shows that y − y  ∈ B(k0 , ρ) ⊆ K. Therefore (x, y) ∈ R−1 + {0} × K =  gr ΓK . The proof is complete. Remark 3.7.2. Part (i) of the preceding theorem is proven in [503, Prop. 3.2]. The condition that Γ is upper Lipschitz at x in Theorem 3.7.1(i) is important, as the next example shows. Example 3.7.3. ([538]) Let X = Y = R, C = R+ , and let Γ : R ⇒ R be defined by {0} if x ≤ 0, √ Γ (x) := {− x, 0} if x > 0. Then ΓC (x) =

[0, ∞[ if x ≤ 0, √ [− x, ∞[ if x > 0,

Ω(x) =

{0} if x ≤ 0, √ {− x} if x > 0.

Take x = 0 and y = 0. Then

{0} if u = 0, −C if u = 0, ⎧ ⎨ {0} if u < 0, C if u < 0, DΓC (x, y)(u) = DΩ(x, y)(u) = −C if u = 0, R if u ≥ 0, ⎩ ∅ if u > 0, {0} if u = 0, Eff (DΓ (x, y)(u); C) = ∅ if u = 0, {0} if u < 0, Eff (DΓC (x, y)(u); C) = ∅ if u ≥ 0. SΓ (x, y)(u) = DΓ (x, y)(u) =

The condition y ∈ BEff (Γ (x); C) in the preceding theorem cannot be replaced by y ∈ Eff (Γ (x); C). In the next example Γ is upper Lipschitz at x ∈ int(dom Γ ), C-convex, and C-closed. Example 3.7.4. Let X = R, Y = R2 , C = R2+ , and Γ : X ⇒ Y be defined by Γ (x) := {(y1 , y2 ) | y2 ≥ y12 }. Take x = 0 and y = 0; then y ∈ Eff(Γ (x); C) \ BEff(Γ (x); C). Moreover, SΓ (x, y)(u) = DΓ (x, y)(u) = R × R+ , and so SΓ (x, y)(u) ∩ (−C) = ] − ∞, 0] × {0} = {0}. If dim X = ∞ in Theorem 3.7.1(ii), the conclusion may be not true. Example 3.7.5. Consider X an infinite-dimensional normed vector space, Y := R, φ : X → R a noncontinuous linear functional, let Γ : X ⇒ Y be defined by Γ (x) := {φ(x)} and C := R+ . One obtains easily that SΓ (x, y)(u) = DΓ (x, y)(u) = R for all (x, y) ∈ gr Γ and u ∈ X. Theorem 3.7.6. Let (x, y) ∈ gr Γ . Then

212

3 Optimization in Partially Ordered Spaces

(i) For every u ∈ X we have DΓ (x, y)(u) + C ⊆ DΓC (x, y)(u). (ii) If the set {y ∈ C | y = 1} is compact, then for every u ∈ X, Eff (DΓC (x, y)(u); C) ⊆ Eff (DΓ (x, y)(u); C) ⊆ DΓ (x, y)(u).

(3.54)

Moreover, if Γ (x) is convex for x ∈ V , where V is some neighborhood of x, then Eff (DΓ (x, y)(u); C) = Eff (DΓC (x, y)(u); C)

∀ u ∈ X.

(3.55)

(iii) If the set {y ∈ C | y = 1} is compact and (3.52) holds, then DΓ (x, y)(u) + C = DΓC (x, y)(u)

∀ u ∈ X.

(3.56)

Proof. (i) Let y ∈ DΓ (x, y)(u) and c ∈ C. Then there exist (tn ) → 0+ and ((un , yn )) → (u, y) such that (x, y) + tn (un , yn ) ∈ gr Γ for every n ∈ N. It follows that (x, y) + tn (un , yn + c) ∈ gr ΓC and (un , yn + c) → (u, y + c). Therefore y + c ∈ DΓC (0, 0)(u). In the rest of the proof we suppose that Q := {y ∈ C | y = 1} is compact. Of course, C = cone Q. (ii) Let y ∈ Eff (DΓC (x, y)(u); C); there exists (tn ) → 0+ , (λn ) ⊆ [0, ∞), (qn ) ⊆ Q, ((un , yn )) ⊆ X × Y such that (un , yn + λn qn ) → (u, y) and (x, y) + tn (un , yn ) ∈ gr Γ for every n ∈ N. Since Q is compact, we may suppose that qn → q ∈ Q. Assume that there are a subsequence (λnk ) and γ > 0 such that λnk ≥ γ for every k ∈ N. Then (x, y) + tnk (unk , ynk + (λnk − γ)qnk ) ∈ gr ΓC and (unk , ynk + (λnk − γ)qnk ) → (u, y − λq). It follows that y − γq ∈ DΓC (x, y)(u) ∩ (y − C) \ {y}, a contradiction. Therefore λn → 0, and so (un , yn ) → (u, y). It follows that y ∈ DΓ (x, y)(u). Using also (i), we have that for every u ∈ X, Eff (DΓC (x, y)(u); C) ⊆ DΓ (x, y)(u) ⊆ DΓC (x, y)(u). From the above inclusions we obtain immediately that (3.54) holds. Assume, furthermore, that Γ (x) is convex near x and let y be an element / of Eff (DΓ (x, y)(u); C)(⊆ DΓ (x, y)(u) ⊆ DΓC (x, y)(u)). Suppose that y ∈ Eff (DΓC (x, y)(u); C). Then there exists k ∈ C \ {0} such that y − k ∈ DΓC (x, y)(u). It follows that there exists ((tn , un , yn ))n∈N → (0+ , u, y − k) such that y + tn yn ∈ Γ (x + tn un ) + C for every n ∈ N, i.e., y + tn (yn − λn qn ) ∈ Γ (x + tn un ) for every n, with (qn ) ⊆ Q, (λn ) ⊆ [0, ∞[. There exists n0 ∈ N such that Γ (x + tn un ) is convex for every n ≥ n0 . Suppose that lim sup λn > 0. Then there exist (np )p∈N ⊆ N an increasing sequence and α > 0 such that  λnp ≥ α for every p ∈ N. Since Q is compact, we may suppose that qnp → q ∈ Q. Since Γ (x + tnp unp ) is convex, we have that       y+tnp ynp − λαn qnp = λαn y + tnp (ynp −λnp qnp ) + 1 − λαn (y + tnp ynp ) p

p

∈ Γ (x + tnp unp ) + C ∀p ∈ N.

p

3.7 Sensitivity of vector optimization problems

213

  Since ynp − λαn qnp → y − k − α q with α ∈ [0, 1], we obtain that y − (k + p α q) ∈ DΓ (x, y)(u), with k + α q ∈ C \ {0}, contradicting the choice of y. Hence (λn ) → 0, whence the contradiction y − k ∈ DΓ (x, y)(u). Therefore y ∈ Eff (DΓC (x, y)(u); C). It follows that (3.55) holds. (iii) The inclusion ⊇ being proven in (i), let us prove the converse one. Of course, we assume that (3.52) holds. Let y ∈ DΓC (x, y)(u); there exist (tn ) → 0+ , (λn ) ⊆ [0, ∞), (qn ) ⊆ Q, ((un , yn )) ⊆ X × Y such that (un , yn + λn qn ) → (u, y) and (x, y) + tn (un , yn ) ∈ gr Γ for every n ∈ N. We may suppose that qn → q ∈ Q. Taking also a subsequence if necessary, we may assume that λn → λ ∈ [0, ∞]. Suppose that λ = ∞. Since yn + λn qn → y, we obtain that λ−1 n yn → −q. Of course, x + tn un → x and     −1 −1 (x + tn un , y + tn yn ) − (x, y) = λ−1 (λn tn ) n un , λn yn → (0, −q). It follows that −q ∈ SΓ (x, y)(0) ∩ (−C) \ {0}, a contradiction. Therefore λ < ∞, whence (un , yn ) → (u, y − λq). Hence y ∈ DΓ (x, y)(u) + C. The proof is complete.  Remark 3.7.7. Assertion (i) and the first part of (ii) of Theorem 3.7.6 are proved by Tanino in [538, Prop. 2.1, Th. 2.1] (see also [336, Th. 3.4]), while assertion (iii) is proved by Shi in [503, Prop. 3.1]. The next example shows that the hypothesis “{y ∈ C | y = 1} is compact” in the previous theorem is essential for having (3.54), (3.55), and (3.56). Example 3.7.8. Let X = R, Y = 2 , C = 2+ = {(xn )n∈N ∈ 2 | xn ≥ 0 ∀ n ∈ N}, and Γ : X ⇒ Y be defined by {0} if x ∈ ] − ∞, 0] ∪ [1, ∞[, Γ (x) := 1 1 1 ∗ n [−a − en , a] if x ∈ [ n+1 , n [, n ∈ N , where a ∈ 2+ \ {0} and (en )n∈N is the canonical base of 2 . Then C if x ∈ ] − ∞, 0] ∪ [1, ∞[, ΓC (x) := 1 − n1 (a + en ) + C if x ∈ [ n+1 , n1 [, n ∈ N∗ , {0} if x ∈ ] − ∞, 0] ∪ [1, ∞[, Ω(x) := 1 1 − n (a + en ) if x ∈ [ n+1 , n1 [, n ∈ N∗ . Take x = 0 and y = 0. Then



DΓ (x, y)(u) = DΓC (x, y)(u) = DΩ(x, y)(u) =

{0} if u ≤ 0, {ua} if u > 0, C if u ≤ 0, −ua + C if u > 0, {0} if u ≤ 0, ∅ if u > 0.

214

3 Optimization in Partially Ordered Spaces

So (3.54), (3.55), and (3.56) do not hold for u > 0. Let us prove the formulae for DΓ (x, y)(u) and DΓC (x, y)(u) for u ≥ 0, those for u < 0 being obvious. So, let u ≥ 0 and v ∈ DΓ (x, y)(u). Then there exist (tn ) ⊆ ]0, ∞[ and gr Γ ⊇ (xn , yn ) → (0, 0) such that (tn xn ) → u and (tn yn ) → v. If P := {n ∈ N | xn ≤ 0} is infinite, then yn = 0 for every n ∈ P , and so (u, v) = (0, 0). Suppose that P is finite (this is the case when u > 0). Since (xn ) → 0, there exists n0 ∈ N> such that xn ∈ ]0, 1[ for n ≥ n0 . For every n ≥ n0 there exists pn ∈ N> such that xn ∈ [ pn1+1 , p1n [; of course, (pn ) → ∞. Then for every n ≥ n0 there exists     λn ∈ [0, 1] such that yn = p1n (1 − 2λn )a − λn en . It follows that ptnn → u   and ptnn ((1 − 2λn )a − λn epn ) → v. If u = 0, it is obvious that v = 0. Suppose that u > 0. Taking eventually a subsequence, we may assume that (λn ) → λ ∈ [0, 1]. We obtain that (λn epn ) → (1 − 2λ)a − u−1 v. If λ = 0, we get the contradiction that (en ) contains a norm-convergent subsequence. Therefore (λn ) → 0, whence v = ua. It is easy to get that ua ∈ DΓ (x, y)(u) for u ≥ 0 (in fact, similar to the proof in the next paragraph). So the formula for DΓ (x, y)(u) holds. 1 , yn = Consider first u > 0 and v = −ua + k with k ∈ C. Taking xn = n+1 1 1 −1 −1 (−a + u k) = (−a − e + u k + e ) (∈ Γ (x )), and t = (n + 1)u >0 n C n  n  n n  n for n ∈ N> , we have that (xn , yn ) → (0, 0) and tn (xn , yn ) → (u, v); if u = 0 and v = k ∈ k just take xn = 0, yn = n1 k, and tn = n. Therefore v ∈ DΓC (x, y)(u). Let u ≥ 0 and  v ∈ DΓC (x, y)(u). Then there  exist (tn ) ⊆ ]0, ∞[ and gr ΓC ⊇ (xn , yn ) → (0, 0) such that tn (xn , yn ) → (u, v). As in the first part we may suppose that xn ∈ ]0, 1[ for n ≥ n0 . So there exists N> ⊇ (pn ) → ∞ with xn ∈ [ pn1+1 , p1n [ for n ≥ n0 . Of course, yn = − p1n (a + epn ) + kn with     kn ∈ C. It follows that ptnn → u and − ptnn (a + epn ) + tn kn → v. Since 0 = w-lim en , it follows that v + ua = w-lim(tn kn ). Because C is weakly closed, v + ua ∈ C, whence v ∈ −ua + C. The formula for DΩ(x, y)(u) is obtained similarly. In Example 3.7.3, (3.52) is not satisfied and Γ (x) is not convex for x > 0; (3.56) and (3.55) do not hold for u > 0. Corollary 3.7.9. Assume that the conditions of Theorem 3.7.6 (iii) hold. Then the formulae (3.55) and (3.57) below hold: HEff (DΓ (x, y)(u); C) = HEff (DΓC (x, y)(u); C)

∀ u ∈ X.

(3.57)

∀ u ∈ X,

(3.58)

∀ u ∈ X,

(3.59)

Moreover, if int C = ∅ (so dim Y < ∞), then wEff (DΓ (x, y)(u); C) ⊆ wEff (DΓC (x, y)(u); C)   wEff (DΓ (x, y)(u); C) = wEff DΓC (x, y)(u); C

 is a closed convex cone such that C  ⊆ {0} ∪ int C. where C

3.7 Sensitivity of vector optimization problems

215

Proof. Using the preceding theorem, equality (3.56) holds. The conclusion then follows by using Proposition 3.3.16 (iv) for the pair (C, K) (there) replaced by the pairs (C, C) and (C, K), respectively, where the last K is from the definition of Henson proper efficiency. Suppose now that int C = ∅. Then (3.60) follows immediately from (3.56).  be a closed convex cone with C  ⊆ {0} ∪ int C. It is obvious that Let C  satisfies the conditions of Theorem 3.7.6 (iii), and so (3.56) holds with C  Applying again Proposition 3.3.16 (iv), now for (C, K) C replaced by C.   replaced by (C, C0 ) (with C0 = {0} ∪ int C), we obtain (3.59), too. Remark 3.7.10. When X, Y are finite-dimensional, formulae (3.55), (3.57), (3.60), and (3.59) are established in [359, Th. 2.1] under the conditions of Corollary 3.7.9, while formulae (3.57) and (3.59) are established in [358, Th. 2.1] for x ∈ int(dom Γ ) and Γ locally C-convex near x, i.e., Γ |V is C-convex for some neighborhood V of x [see also Theorem 3.7.1 (ii)]. In Example 3.7.3, (3.52) is not satisfied, and (3.57), (3.58), (3.59) (for  C = C) do not hold for u > 0. Theorem 3.7.11. Let (x, y) ∈ gr Γ . Assume that {y ∈ C | y = 1} is compact and Γ is C-dominated by Ω near x, i.e., Γ (u) ⊆ Ω(u) + C for every u ∈ V for some neighborhood V of x. (i) If y ∈ Ω(x), then for every u ∈ X, Eff (DΓC (x, y)(u); C) ⊆ Eff (DΩ(x, y)(u); C) ⊆ DΩ(x, y)(u).

(3.60)

Moreover, if Γ (x) is convex for x ∈ V , with V a neighborhood of x, then Eff (DΓ (x, y)(u); C) ⊆ Eff (DΩ(x, y)(u); C)

∀ u ∈ X.

(3.61)

(ii) If (3.52) holds, then y ∈ Ω(x) and for every u ∈ X, Eff (DΓ (x, y)(u); C) = Eff (DΩ(x, y)(u); C) ⊆ DΩ(x, y)(u), (3.62) HEff (DΓ (x, y)(u); C) = HEff (DΩ(x, y)(u); C) ⊆ DΩ(x, y)(u). (3.63) Proof. By hypothesis we have that ΓC (u) = ΩC (u) for every u ∈ V , where V is a neighborhood of x. It follows that DΓC (x, y)(u) = DΩC (x, y)(u) for every u ∈ X. (i) Using Theorem 3.7.6 (ii), we obtain for every u ∈ X, Eff (DΓC (x, y)(u); C) = Eff (DΩC (x, y)(u); C) ⊆ Eff (DΩ(x, y)(u); C) ⊆ DΩ(x, y)(u). If Γ (x) is convex for x ∈ V , with V a neighborhood of x, by Theorem 3.7.6 (ii), (3.55) holds, whence the inclusion (3.61) follows immediately.

216

3 Optimization in Partially Ordered Spaces

(ii) Assume that (3.52) holds. We already remarked that y is an element of BEff(Γ (x); C), and so y ∈ Ω(x). Since (3.52) holds for Γ , it holds for Ω, too. It follows that equality (3.56) and the corresponding one for Ω hold. The conclusion is immediate from Proposition 3.3.16 (iv), taking K = C for (3.62) and (C, K) for (3.63), where K is the cone in the definition of Henig proper efficient point.  In Example 3.7.8 the set {y ∈ C | y = 1} is not compact, but all the conditions on Γ in Theorem 3.7.11 hold, although the relations (3.60), (3.61), (3.62), and (3.63) do not hold for u > 0. In Example 3.7.3, (3.52) is not satisfied and Γ (x) is not convex for x > 0 (but Γ (x) is C-dominated for every x); (3.61), (3.62), and (3.63) do not hold for u > 0. In the next example Γ (x) is not C-dominated for x < 0, the other conditions of Theorem 3.7.11 being satisfied; (3.61), (3.62), and (3.63) do not hold for u < 0. Example 3.7.12. ([358]) Let X = R, Y = R2 , C = R2+ , and Γ : R ⇒ R2 be defined by {(y1 , y2 ) | y1 ≥ 0, y2 ≥ y12 } if x ≥ 0, Γ (x) := {(y1 , y2 ) | y1 > 0, y2 ≥ y12 } if x < 0.

Then Ω(x) :=

{(0, 0)} if x ≥ 0, ∅ if x < 0.

Take x = 0, y = (0, 0). Then DΓ (x, y)(u) = DΓC (x, y)(u) = C,

DΩ(x, y)(u) = Ω(u) ∀ u ∈ X.

Corollary 3.7.13. Suppose that y ∈ BEff (Γ (x); C)and Γ is C-dominated by Ω near x. Then (3.62) and (3.63) hold if one of the following conditions is satisfied: (i) Γ is upper Lipschitz at x and {y ∈ C | y = 1} is compact; (ii) X, Y are finite-dimensional, x ∈ int(dom Γ ), and Γ is C-convex near x. Proof. Applying Theorem 3.7.1, condition (3.52) holds if (i) or (ii) is satisfied. The conclusion follows by applying now part (ii) of the preceding theorem.  Theorem 3.7.14. Let (x, y) ∈ gr Γ . Assume that (3.52) holds, dim Y < ∞,  int C = ∅, and Γ is C-dominated by Ωw near x for some closed convex cone   C with C ⊆ {0} ∪ int C. Then for every u ∈ X, wEff (DΓ (x, y)(u); C) = w Eff (DΩw (x, y)(u); C) ⊆ DΩw (x, y)(u). (3.64) Proof. Since (3.52) holds, we have that y ∈ Ω(x) ⊆ Ωw (x). Since the conditions of Corollary 3.7.9 hold, equality (3.59) holds, too. Since gr Ωw ⊆ gr Γ , condition (3.52) is satisfied with Γ replaced by Ωw , whence (3.59)

3.7 Sensitivity of vector optimization problems

217

 holds for the same substitution. Because Γ is C-dominated by Ωw near x we have that ΓC (u) = (Ωw )C (u) for every u in a neighborhood of x, whence DΓC (x, y) = D(Ωw )C (x, y). The conclusion is obtained immediately by using  C0 ).  again Proposition 3.3.16 (iv) for the pair of cones (C, In Examples 3.7.3 and 3.7.12 relation (3.64) does not hold; in Example  3.7.3, (3.52) is not satisfied, while in Example 3.7.12, Γ (x) is not C-dominated  with C  ⊆ {0} ∪ int C. by Ωw (x) for x < 0 for some closed convex cone C Corollary 3.7.15. Suppose that X, Y are finite-dimensional, x ∈ int(dom Γ ),  y ∈ BEff (Γ (x); C), and Γ is C-convex near x. If Γ is C-dominated by Ωw  with C  ⊆ {0}∪int C, then (3.64) holds. near x for some closed convex cone C Proof. Applying Theorem 3.7.1, condition (3.52) holds. The conclusion follows from the preceding theorem.  Besides Examples 3.7.3 and 3.7.12 mentioned above, we consider the following example where Γ is not C-convex (near x); relation (3.64) does not hold for every u ∈ X. Example 3.7.16. ([504]) Let X = Y = R, C = R+ , and let Γ : R ⇒ R be defined by  [− |x| , ∞[ ∪{− |x|} if |x| ≤ 1,  Γ (x) := [− |x|, ∞[ if |x| > 1. Then

 ΓC (x) = [− |x|, ∞[,

 Ω(x) = {− |x|} ∀ x ∈ X.

Take x = 0, y = 0. Then DΓ (x, y)(u) = [− |u| , ∞[, DΓC (x, y)(u) = X, ] − ∞, 0] if u = 0, DΩ(x, y)(u) = ∅ if u = 0.

∀ u ∈ X,

Corollary 3.7.17. Suppose that Y is finite-dimensional, y ∈ BEff (Γ (x); C) and Γ (u) is C-closed for all u in a neighborhood of x. Then Eff (DΓ (x, y)(u); C) = Eff (D(HΩ)(x, y)(u); C) ⊆ D(HΩ)(x, y)(u) ∀u ∈ X if one of the following two conditions is satisfied: (i) condition (3.52) holds and Γ (u) is C-bounded (i.e., (Γ (u))∞ ∩ (−C) = {0}) for all u in a neighborhood of x; (ii) X is finite-dimensional, x ∈ int(dom Γ ), and Γ is C-convex and Cdominated by Ω near x.

218

3 Optimization in Partially Ordered Spaces

Proof. We have that (ii) ⇒ (i). Indeed, condition (3.52) holds by Theorem 3.7.1. Since Γ is C-dominated by Ω near x ∈ int(dom Γ ), we have that Ω(u) = ∅ for all u in a neighborhood V of x; we may suppose that Γ (u) is also C-closed and C-convex for u ∈ V . By a known result it follows that Γ (u) is C-bounded for u ∈ V . Suppose that (i) holds. Let V be a neighborhood of x such that Γ (u) is C-dominated by Ω(u) and C-bounded for every u ∈ V . By [267, Th. 5.1] we have that HΩ(u) ⊆ Ω(u) ⊆ cl (HΩ(u)) ∀ u ∈ V. It follows that gr HΩ|V ⊆ gr Ω|V ⊆ cl (gr HΩ|V ), and so DΩ(x, y) =  D(HΩ)(x, y). The conclusion follows by applying Theorem 3.7.11 (ii). In Example 3.7.16, SΓ (x, y)(0) = X. So (3.52) does not hold and Γ is not C-convex near x; the conclusion of the preceding corollary does not hold. Remark 3.7.18. Formula (3.60) of Theorem 3.7.11 is proved by Tanino [538, Th. 3.1] for Y finite-dimensional and by Klose [336, Th. 3.4] for arbitrary Y ; for X and Y finite-dimensional spaces, formula (3.61) is proved by Shi [504, Th. 4.1], Theorem 3.7.11 (ii) is proved by Shi [503, Th. 4.1], and Theorem 3.7.14 is proved by Kuk, Tanino, and Tanaka in [359, Th. 3.1] in the same conditions. Corollary 3.7.13 (i) was obtained by Tanino [538, Th. 3.2] for Y finite-dimensional, Shi [503, Cor. 4.1], and Klose [336, Th. 3.5], while part (ii) by Shi [504, Th. 4.2]. Corollary 3.7.15 was obtained by Shi in [504, Th. 5.2]. Formula (3.63) was obtained in [358, Th. 3.1] under the conditions of Corollary 3.7.13 (ii). Corollary 3.7.17 (i) and (ii) reinforce [359, Th. 3.2] and [358, Cor. 3.1] of Kuk, Tanino, and Tanaka. Theorem 3.7.19. Suppose that int C = ∅ and consider y ∈ Ωw (x). If one of the following conditions holds, (i) Γ is semidifferentiable at (x, y); (ii) Γ is C-convex and x ∈ int(dom Γ ); (iii) X, Y are finite-dimensional, Γ is C-convex and x ∈ icr(dom Γ ), then DΩw (x, y)(u) ⊆ wEff (DΓ (x, y)(u); C)

∀ u ∈ X.

(3.65)

Suppose that either (a) (3.52) and (i) hold or (b) y ∈ BEff (Γ (x); C) and  (iii) hold. If {y ∈ C | y = 1} is compact and Γ is C-dominated by Ωw or  ⊆ {0} ∪ int C is a closed convex cone, then Ω near x, where C DΩw (x, y)(u) = wEff (DΓ (x, y)(u); C)

∀u ∈ X

(3.66)

DΩ(x, y)(u) = wEff (DΓ (x, y)(u); C)

∀ u ∈ X,

(3.67)

or respectively.

3.7 Sensitivity of vector optimization problems

219

Proof. Assume that (i) holds and consider y ∈ DΩw (x, y)(u). If y does not belong to wEff (DΓ (x, y)(u); C), there exists k ∈ int C such that y − k ∈ DΩw (x, y)(u). Therefore there are (tn ) → 0+ and ((un , yn )) → (u, y − k) such that (x, y) + tn (un , yn ) ∈ gr Ωw ⊆ gr Γ . Since Γ is semidifferentiable at (x, y) and (tn ) → 0+ , un → u, there exists (yn ) → y − k such that (x, y) + tn (un , yn ) ∈ gr Γ for every n ≥ n0 . Therefore y + tn yn ∈ Γ (x + tn un ) and y + tn yn ∈ Ωw (x + tn un ) ⊆ Γ (x + tn un ). Since   (y + tn yn ) − (y + tn yn ) /tn = yn − yn → y − k − y = −k ∈ − int C, there exists some n1 ≥ n0 such that (y + tn yn ) − (y + tn yn ) ∈ − int C for every n ≥ n1 , contradicting the fact that y + tn yn ∈ Ωw (x + tn un ). Assume now that (ii) holds; since int C = ∅, (x, y, k) ∈ int(gr ΓC ) for k ∈ int C. By Theorem 2.11.6 (i) ΓC is semidifferentiable at (x, y). Taking y ∈ DΩw (x, y)(u), with the same argument as in the proof of (i), we obtain the same sequences, the sole difference being that y + tn yn ∈ Γ (x + tn un ) + C for n ≥ n0 . The same contradiction is obtained. If (iii) holds, by Theorem 2.11.6 (ii), ΓC is semidifferentiable at (x, y). The proof is the same as for (ii). (b) ⇒ (a). Indeed, if (iii) holds, using Theorem 3.7.1 (ii) we get that (3.52) is satisfied.  Assume that (a) is satisfied and Γ (u) is C-dominated by Ωw (u) for all  = Ωw (u) + C  u ∈ V , for some neighborhood V of x. It follows that Γ (u) + C  for u ∈ V . So, DΓC (x, y) = D(Ωw )C (x, y). Applying Theorem 3.7.6 for (Γ, C)    and (Ωw , C) we obtain that DΓ (x, y)(u) + C = DΩw (x, y)(u) + C for every  C0 ) we get for u ∈ X. Applying now Proposition 3.3.16 (iv) for the pair (C, every u ∈ X, wEff (DΓ (x, y)(u); C) = wEff (DΩw (x, y)(u); C) ⊆ DΩw (x, y)(u). From (3.65) and the above relation (3.66) follows.  If Γ is C-dominated by Ω near x, all we said above is true if we replace Ωw by Ω. Therefore (3.67) holds, too. Note that y ∈ Ω(x) because (3.52) holds.  In Example 3.7.3 Γ is neither semidifferentiable at x, nor C-convex; relation (3.65) does not hold for u = 0. In Example 3.7.12 (a) and (b) are satisfied  but Γ is not C-dominated by Ωw or Ω near x, for some closed convex cone  C ⊆ {0} ∪ int C; relations (3.66) and (3.67) do not hold for u < 0. In the next example neither (a) nor (b) is satisfied; relations (3.66) and (3.67) do not hold for u > 0. Example 3.7.20. ([538]) Let X = R, Y = R2 , C = R2+ , and let Γ : R ⇒ R2 be defined by Γ (x) := {(y1 , −y1 ) | y1 ≤ x} ∪ {(y1 , −1 − y1 ) | y1 > 0}.

220

3 Optimization in Partially Ordered Spaces

Then Ω(x) = Ωw (x) = {(y1 , −y1 ) | y1 ≤ min(0, x)} ∪ {(y1 , −1 − y1 ) | y1 > 0}. Let x = 0 and y = (0, 0). Then DΓ (x, y)(u) = Eff (DΓ (x, y)(u); C) = {(v1 , −v1 ) | v1 ≤ u}, DΩ(x, y)(u) = Eff (DΓ (x, y)(u); C) = {(v1 , −v1 ) | v1 ≤ min(0, u)}. Remark 3.7.21. For Ωw replaced by Ω and X, Y finite-dimensional, (3.65) was obtained by Shi [504, Th. 5.1] under condition (iii) and by Kuk, Tanino, and Tanaka [359, Th. 3.3] under (i). Kuk, Tanino, and Tanaka obtained (3.65) under condition (iii) and (3.66) under condition (b) in [358, Th. 3.2] and (3.67) under condition (b) in [358, Th. 3.3]. Consider now another normed vector space U, the multifunction Λ : U ⇒ X, the function f : X × U → Y , and the multifunction Γ : U ⇒ Y defined by Γ (u) := f (Λ(u) × {u}). Theorem 3.7.22. Suppose that the following conditions hold: (i) Λ is upper Lipschitz at u ∈ dom Λ and Λ(u) is compact for u ∈ U , with U a neighborhood of u; (ii) x ∈ Λ(u) and y = f (x, u) ∈ BEff(Γ (u); C); (iii) X is finite-dimensional, C has a compact base, f is Lipschitz on bounded sets and Fr´echet differentiable at (x, u); (iv) Λ(u) = {x} or the multifunction Λ : U × Y ⇒ X,

 y) := {x ∈ Λ(u) | f (x, u) = y}, Λ(u,

 y) = {x}. is upper Lipschitz at (u, y) and Λ(u, Then for every u ∈ X, Eff (∇x f (x, u) (DΛ(u, x)(u)) + ∇u f (x, u)(u); C) = Eff (DΩ(u, y)(u); C) . In order to prove this result we need the following lemma. Lemma 3.7.23. Let Λ, f , Γ be as above and u ∈ dom Λ. If f is Lipschitz on bounded sets, Λ(u) is bounded, and Λ is upper Lipschitz at u, then Γ is upper Lipschitz at u. Proof. Consider the box norm on U × X. Since Λ is upper Lipschitz at u, there exist L, ρ > 0 such that (3.68) Λ(u) ⊆ Λ(u) + L u − u BX ∀ u ∈ B(u, ρ). It follows that the set A := u∈B(u,ρ) (Λ(u) × {u}) is bounded. From hypothesis, there exists L > 0 such that f is L -Lipschitz on A. Let u ∈ B(u, ρ)

3.8 Duality

221

and y ∈ Γ (u). Hence there exists x ∈ Λ(u) such that y = f (x, u). From (3.68), there exists x ∈ Λ(u) such that x − x  ≤ L u − u. Taking y  = f (x , u) ∈ Γ (u), we obtain that y − y   = f (x, u) − f (x , u) ≤ L (x − x , u − u) ≤ L max(L, 1) u − u . Taking L > L max(L, 1), we have that Γ (u) ⊆ Γ (u) + L u − u BY for  every u ∈ B(u, ρ). Therefore Γ is upper Lipschitz at u. Of course, a sufficient condition for f to be Lipschitz on bounded sets is that f be of class C 1 and U and X have finite dimension. Proof of Theorem 3.7.22. From (i) and (iii), using Lemma 3.7.23, we obtain that Γ is upper Lipschitz at u. Since y ∈ BEff(Γ (u); C), from Theorem 3.7.1(i) we obtain that (3.52) holds for Γ at (u, y). Since Λ(u) is compact for u ∈ U and f is continuous, Γ (u) is compact for u ∈ U . Therefore Γ (u) ⊆ Ω(u) + C for u ∈ U . Now using Theorem 3.7.11(ii), we have that Eff (DΓ (u, y)(u); C) = Eff (DΩ(u, y)(u); C)

∀ u ∈ U.

But from the second part of Theorem 2.11.8, we have that DΓ (u, y)(u) = ∇x f (x, u) (DΛ(u, x)(u)) + ∇u f (x, u)(u) The conclusion follows.

∀ u ∈ U. 

Note that the preceding theorem (with a slightly weaker conclusion) is obtained by Tanino in [538, Th. 4.1] for U, X, Y finite-dimensional and f of class C 1 and by Klose [336, Th. 4.4] (observe that the compactness of Λ(u) for u ∈ U was used for having that Γ (u) ⊆ Ω(u) + C, while the Lipschitz condition of f on bounded sets for having that Γ is upper Lipschitz at u, conditions written directly in [336]). Variants of Lemma 3.7.23 are stated and proved in [538, Lemma 4.1] and [336, Lemmas 4.2, 4.3].

3.8 Duality It is an old idea to try to complement a given optimization problem (f (x) → min with minimal value I) by a dual problem (g(y) → max with supremal value S, S ≤ I); remember the dual variational principles of Dirichlet and Thompson (see Zeidler [582]) or, e.g., the paper of K.O. Friedrichs [202]. There are at least three reasons to look for useful dual problems: • The dual problem has (under additional conditions) the same optimal value as the given “primal” optimization problem, but solving the dual problem could be done with other methods of analysis or numerical mathematics.

222

3 Optimization in Partially Ordered Spaces

• An approximate solution of the given minimization problem gives an estimation of the minimal value I from above, whereas an approximate solution of the dual problem is an estimation of I from below, so that one gets intervals that contain I. • Recalling the Lagrange method, saddle points, equilibrium points of twoperson games, shadow prices in economics, perturbation methods, or dual variational principles, it becomes clear that optimal dual variables often have a special meaning for the given problem. Of course, the advantages just listed require a skillfully chosen dual program. Nevertheless, the mentioned points are motivation enough, to look for dual problems in vector optimization too. There are several books and many papers dedicated to that aim, as well as survey papers (see Jahn [306, 310], Bot¸, Grad and Wanka [74], Bot¸ and Wanka [75], Bot¸ [87], Luc [387], Hern´ andez, L¨ ohne, Rodr´ıguez-Mar´ın and Tammer [270], L¨ ohne and Tammer [382] and L¨ ohne [380], [381] and references therein). In Sections 3.8.1 and 3.8.2, we shall concentrate firstly upon nonconvex vector problems and derive theoretical duality assertions. Ideas from these theoretical results are useful in order to derive duality assertions on the basis of the special structure of vector-valued approximation problems in Section 3.8.3. One can consider the following different approaches to duality: Conjugation: Bot¸, Grad and Wanka [74], Sch¨ onfeld [500], Breckner [91], Zowe [586, 587], Nehse [430], Rosinger [492], Tanino and Sawaragi [540], Brumelle [99], Kawasaki [325, 326], Gerstewitz and G¨opfert [211], L¨ ohne [380, 381], L¨ ohne and Tammer [382], Luc [387], Malivert [394], Sawaragi, Nakayama and Tanino [496], Tanino [538, 539], Z˘ alinescu [577]. Lagrangian: Bot¸, Grad and Wanka [74], Corley [141, 142], Bitran [70], Gerstewitz and Iwanow [212], G¨ opfert and Gerth [226], G¨ unther, Khazayel and Tammer [242], Jahn [302, 306, 307], Iwanow and Nehse [301], Nakayama [425, 426], Nehse [430], Sawaragi, Nakayama and Tanino [496], Luc [387], Dolecki and Malivert [161], Guti´errez, Huerga, Novo and Tammer[243]. Axiomatic Duality: Luc [387]. On the other hand, there is certainly no uniform approach to dualization in vector optimization in the relevant literature. One of the difficulties in the existing approaches is that the set of solutions to a vector optimization problem is in general not a singleton. For the development of the duality theory, the notions of infimum and supremum of a set provided with a partial order are essential. A remarkable discussion of these aspects can be found in [381] by L¨ ohne, in [455] by Pallaschke and Rolewicz as well as in [427] by Nakayama.

3.8 Duality

223

To overcome the problems that arise with the extension of the known duality theory in scalar optimization to the vector-valued case, there are at least three essential approaches in the literature (see L¨ohne and Tammer [382]): • A first approach to duality in vector optimization is a scalarization using linear or nonlinear functionals. This scalarization is used in the construction of the dual problem (see Bot¸ and Wanka [75], Bot¸, Grad and Wanka [74], Bot¸ [87], Breckner [90], Gerstewitz [209], G¨ unther, Khazayel and Tammer [242], Guti´errez, Huerga, Novo and Tammer[243], Jahn [302, 310], Sch¨ onfeld [500]). These scalarization concepts and corresponding duality assertions from scalar optimization are employed to show duality assertions or to solve the dual problem. In Section 3.8.2, we show duality assertions related to this first approach. • A second approach to duality in vector optimization is based on the observation that the dual vector optimization problem is naturally a setvalued one (see Khan, Tammer and Z˘ alinescu [329], Tanino, Sawaragi [540], Corley [141], Tanino [538, 539], L¨ ohne and Tammer [382], Luc [387], Nakayama [427], Tammer [523], Dolecki and Malivert [162], Pallaschke and Rolewicz [455], Song [509], L¨ ohne [380], [381]). In these papers and books, the duality statements for vector optimization problems are derived without a scalarization “from the beginning” so that the dual problem becomes set-valued. In his work [539], Tanino already established the set-valued structure of the dual problem to a primal vector optimization problem. He derived a weak duality statement by embedding the primal problem into a family (depending on perturbation parameters) of set-valued optimization problems and by applying an extension of Fenchel’s inequality. Moreover, Tanino proved a strong duality assertion using the relationship between a mapping and its biconjugate. Dolecki and Malivert [162] used a set-valued approach in combination with Lagrangian techniques and perturbations of marginal relations in order to prove duality results for general vector optimization problems where the solution concept is defined by a transitive, translation invariant relation. In Section 3.8.1, we derive duality statements corresponding to this second approach. • A third approach to the construction of dual problems to vector optimization problems is based on solution concepts with respect to the supremum and infimum in the sense of a vector lattice, see L¨ ohne [380], [381], [382]. Using corresponding notions of infimum and supremum in the sense of utopia minima (maxima), Pallaschke and Rolewicz derived duality statements for objective functions with values in vector lattices in [455]. Nevertheless, these infima and suprema may not be solutions in the sense of vector optimization since they may not belong to the set of objective function values.

224

3 Optimization in Partially Ordered Spaces

Of course, the assumption that the objective function has its values in a vector lattice is too restrictive for vector optimization. Therefore, Nieuwenhuis introduced in [438] solution concepts on the basis of infimal (supremal) sets. Employing these solution concepts, Nieuwenhuis [438] and Tanino [538, 539] derived corresponding duality assertions. These infimal sets are closely related to weakly efficient elements of vector optimization problems. Furthermore, in [162], Dolecki and Malivert extended these concepts to infimal sets being closely related to other kinds of efficiency too. In [382], the approach to duality in vector optimization is characterized by an embedding of the image space of the vector optimization problem into a complete lattice without linear structure and the lattice structure is consequently used. The aim of the following subsections is to show duality assertions for vector optimization problems without convexity or regularity assumptions using a generalized Lagrangian approach (without scalarization in Subsection 3.8.1 and based on a suitable scalarization in Subsection 3.8.2). 3.8.1 Duality Without Scalarization Let (Y, C) be a (with the pointed, convex cone C that induces the order relation ≤C ) partially ordered linear space, P, D nonempty subsets of Y , and let us consider the following vector optimization problems using the solution concepts introduced in Remark 2.1.3, Definition 3.1.1 for ε = 0 and Sections 3.4, 3.7: (P) Determine Eff Min (P, C), Determine

Eff Max (D, C).

(D)

We speak of a pair of weakly dual problems, if P ∩ (D − (C \ {0})) = ∅.

(3.69)

Since C is pointed, this is equivalent to (P + (C \ {0})) ∩ (D − (C \ {0})) = ∅. (P) and (D) are called strongly dual, if (3.69) holds together with 0 ∈ cl(P − D) or equivalently (P + O) ∩ (D + O) = ∅ for all open neighborhoods O of zero in Y . So strong duality means that P and D touch each other. Otherwise, we speak of a pair of dual problems with a duality gap (in the aforementioned scalar case at the beginning of this chapter that would mean I > S). Lemma 3.8.1. Assume (P), (D) to be weakly dual. If there are z 0 ∈ P, ζ 0 ∈ D such that z 0 = ζ 0 , then z 0 is minimal-efficient for (P), ζ 0 maximal-efficient for (D), and (P), (D) are strongly dual.

3.8 Duality

225

Proof. z 0 not minimal-efficient means that there is z 1 ∈ P such that 1 0 z ∈ z − (C \ {0}) = ζ 0 − (C \ {0}), which contradicts (3.69). For ζ 0 to be maximal-efficient follows similarly. From z 0 = ζ 0 it follows that (P) and (D) are strongly dual.  As usual, if one deals with vector optimization problems, the question is whether to apply scalarization methods. So at first we prove duality theorems completely without taking into account scalarization of the goal function. Instead of (P), we consider the following vector optimization problem with side conditions ≤C - min f (x) subject to (P1 ) x ∈ A, with A := {x ∈ M, g(x) ∈ CV }, where X is a real linear space, M a nonempty set in X, V a real linear space, CV ⊂ V a nonempty set, Y a real linear space, C ⊂ Y a proper, pointed convex cone, f : X → Y , and g : X → V . So a vector optimization problem is given as usual (with the solution concept (efficiency) introduced in Remark 2.1.3, Definition 3.1.1 for ε = 0 and Sections 3.4, 3.7), i.e., we consider the problem to determine Eff Min (f (A), C). To deal with duality in vector optimization, we suppose: Assumption (V1) For the proper, pointed, convex cone C in Y , we consider a convex, pointed cone B ⊂ Y such that B + (C \ {0}) ⊆ B \ {0}.

(3.70)

We introduce the following generalized Lagrangian L : M × V −→ Y ∪ {+∞Y }, and M 0∗ := {L : M × V → Y ∪ {+∞Y } | x ∈ M, v ∈ V, v = g(x), Eff Min ({L(x, v) | x ∈ M }, B) = ∅}. Now we are able to write down a set-valued problem (D1 ) that can be considered as a dual problem to (P1 ): ≤C - max F ∗ (L) subject to ∗

L∈M ,

(D1 )

226

3 Optimization in Partially Ordered Spaces

with the set-valued objective mapping F ∗ (L) := Eff Min ({L(x, v) | x ∈ M }, B) = ∅ ⊂ Y

(L ∈ M ∗ , v ∈ V )

and the feasible set M ∗ := {L ∈ M 0∗ | L(x, v) ∈ f (x) − B if v ∈ CV }, where B is given by Assumption (V1). We are looking for ≤C -nondominated (maximal) solutions of the setvalued problem (D1 ) (compare Definition 4.4.1) in the following sense: Definition 3.8.2 (≤C -nondominated (maximal) solution of problem (D1 )). Consider the set optimization problem (D1 ), L ∈ M ∗ and (L, z) ∈ gr F ∗ . The pair (L, z) ∈ gr F ∗ is called ≤C -nondominated (maximal) solution of problem (D1 ) if F ∗ (M ∗ ) ∩ (z + C) = {z}. The problem (D1 ) is a so-called Lagrange set-valued dual problem to (P1 ). Lemma 3.8.3 (Weak duality for (P1 ), (D1 )). Suppose that Assumption (V1) is fulfilled. The problems (P1 ) and (D1 ) are weakly dual, i.e., ∀x ∈ A, ∀(L, z) ∈ gr F ∗ with L ∈ M ∗ : f (x) ∈ / z − (C \ {0}).

(3.71)

Proof. Take L ∈ M ∗ , v ∈ V and (L, z) ∈ gr F ∗ with z ∈ Eff Min ({L(x, v) | x ∈ M }, B). This means that {L(x, v) | x ∈ M } ∩ (z − (B \ {0})) = ∅, and for A ⊆ M instead of M ,   {L(x, v) | x ∈ A)} ∩ z − (B \ {0}) = ∅. Taking into account the property (3.70) for B,   {L(x, v) | x ∈ A} ∩ z − (B + (C \ {0})) = ∅. We obtain ({L(x, v) | x ∈ A} + B) ∩ (z − (C \ {0})) = ∅.

(3.72)

Furthermore, by the definition of M ∗ and L ∈ M ∗ , we have for x ∈ A L(x, v) ∈ f (x) − B.

(3.73)

3.8 Duality

227

From (3.72), (3.73), since L ∈ M ∗ and x ∈ A are chosen arbitrarily, we conclude ∀x ∈ A, ∀(L, z) ∈ gr F ∗ with L ∈ M ∗ : f (x) ∈ / z − (C \ {0}), i.e., (3.71) holds.



A strong duality theorem holds as well. Theorem 3.8.4 (Strong (direct) duality for (P1 ), (D1 )). Suppose that Assumption (V1) is fulfilled. Consider the problems (P1 ) and (D1 ) as given above and assume f (x) ∈ Eff Min (f (A), B). Then, there is an element L ∈ M ∗ such that (L, f (x)) ∈ gr F ∗ is a ≤C -nondominated (maximal) solution of problem (D1 ), i.e., f (x) ∈ Eff Max (F ∗ (M ∗ ), C). Proof. This time we take into account that the Lagrangian L ∈ M 0∗ can be chosen as f (x) if v ∈ CV , L(x, v) := (+∞)Y if otherwise. Therefore, L ∈ M ∗ and Eff Min ({L(x, v) | x ∈ A}, B) = Eff Min (f (A), B). / CV , we obtain Eff Min ({L(x, v) | x ∈ M \ Since L(x, v) = (+∞)Y if v ∈ A}, B) = ∅. This yields Eff Min ({L(x, v) | x ∈ M }, B) = Eff Min (f (A), B),

(3.74)

since Eff Min ({L(x, v) | x ∈ M }, B) = Eff Min (Eff Min ({L(x, v) | x ∈ A}, B) ∪ Eff Min ({L(x, v) | x ∈ M \ A}, B), B). As assumed, f (x) ∈ Eff Min (f (A), B), so (3.74) gives f (x) ∈ Eff Min ({L(x, v) | x ∈ M }, B); that is, f (x) ∈ F ∗ (L) and (L, f (x)) ∈ gr F ∗ . The weak duality assertion in Lemma 3.8.3 implies ∀(L, z) ∈ gr F ∗ with L ∈ M ∗ : z ∈ / f (x) + (C \ {0}). This yields together with f (x) ∈ F ∗ (L) that f (x) is a ≤C -nondominated  (maximal) solution of problem (D1 ). The proof is complete. Sometimes results like Theorem 3.8.4 are called a strong direct duality theorem, because a primal optimal solution is shown to be dually optimal. The converse direction is also interesting. This leads to converse duality theorems. To get such a theorem for our pair of optimization problems, we state a condition (V2): Assumption (V2) Every solution of (D1 ) is to be dominated by a solution of (P1 ), which means that with a cone B as in (3.70), for all d˜ ∈ x) ∈ Eff Min (f (A), B) such that f (˜ x) ∈ Eff Max (F ∗ (M ∗ ), C) there is an f (˜ d˜ + C.

228

3 Optimization in Partially Ordered Spaces

Theorem 3.8.5 (Strong (converse) duality for (P1 ), (D1 )). Suppose ˜ ∈ gr F ∗ with L ˜ ∈ ˜ d) that Assumptions (V1) and (V2) are fulfilled and (L, ∗ M is a ≤C -nondominated (maximal) solution of problem (D1 ), i.e., d˜ ∈ Eff Max (F ∗ (M ∗ ), C). Then there is an element x ∈ A such that d˜ = f (x) ∈ Eff Min (f (A), C). Proof. For d˜ as assumed, (V2) leads to an f (˜ x) ∈ Eff Min (f (A), B) with d˜ ∈ f (˜ x) − C. Then d˜ ∈ f (A) + C. Indeed, if we suppose d˜ ∈ f (A) + C, then d˜ ∈ f (˜ x) − C can be satisfied only if d˜ ∈ f (˜ x) − (C \ {0}).

(3.75)

With f (˜ x) ∈ Eff Min (f (A), B) and (3.74), we obtain f (˜ x) ∈ F ∗ (M ∗ ). ∗ ∗ ˜ ˜ x) ∈ d + (C \ {0}), which Because of d ∈ Eff Max (F (M ), C) it follows that f (˜ contradicts (3.75). Since d˜ ∈ f (A) + C and taking into account weak duality (Lemma 3.8.3), we get d˜ ∈ f (A). Therefore, there is x ∈ A such that d˜ = f (x). Again from  Lemma 3.8.3, we obtain f (x) ∈ Eff Min (f (A), C). 3.8.2 Duality by Scalarization The duality in Section 3.8.1 is an abstract and not scalarized Lagrangian formalism for very general optimization problems. Now, we will prove duality theorems with the help of scalarization. Here we again use the Lagrangian form of duality. For a similar method, using the Fenchel form of duality, see Bot¸, Grad, Wanka [74]. To prove duality theorems, we come back to the primal constrained vector optimization problem (P1 ) under some stronger assumptions concerning the involved spaces and the cone C in the following form: ≤C - min f (x) subject to x ∈ A,

(P2 )

with A := {x ∈ M, g(x) ∈ CV } = ∅, f : M → Y , ∅ = M ⊂ X, g : M → V , CV a nonempty set in V and X, Y, V t.v.s., Y partially ordered by a proper, closed, convex, pointed cone C. This means, we consider the problem to determine Eff Min (f (A), C). Furthermore, we suppose: Assumption (V3) For C, we consider a closed, convex, pointed cone B ⊂ Y with B + (C \ {0}) ⊆ int B. (3.76)

3.8 Duality

229

If we use a set B with condition (3.76) and an element k ∈ B with Rk−B = Y in the definition of the scalarizing functional (2.42), then the functional (2.42) is strictly C-monotone (see Theorem 2.4.1, (g)). Such a property is important for the proof of duality assertions via scalarization. We suppose that the Assumption (V3) is satisfied. Then, the set S of strictly C-monotone (see Theorem 2.4.1(g)) functionals s : Y ∪ {+∞Y } → R ∪ {+∞} with s(y) ∈ R for all y ∈ Y and s(+∞Y ) := +∞ is nonempty. Again, we introduce a generalized Lagrangian L : M × V −→ Y ∪ {+∞Y } and consider for s ∈ S, Ms0∗ := {L : M × V → Y ∪ {+∞Y } | x ∈ M, v ∈ V, v = g(x), inf{s(L(x, v)) | x ∈ M } > −∞}. Using the solution concept (efficiency) introduced in Remark 2.1.3 (see also Definition 3.1.1 for ε = 0 and Sections 3.4, 3.7) and the functionals s ∈ S, we define a dual vector optimization problem to (P2 ) by: Determine

Eff Max (AD , C),

(D2 )

where   AD = h ∈ Y | ∃ s ∈ S, ∃ L ∈ Ms∗ : s(h) ≤ inf{s(L(x, v)) | x ∈ M } and

Ms∗ := {L ∈ Ms0∗ | s(L(x, v)) ≤ s(f (x)) if g(x) ∈ CV }.

For elements h ∈ AD , we call s (L, respectively) corresponding to h if the conditions in the definition of AD are satisfied for h, s (L, respectively). Lemma 3.8.6 (Weak duality for (P2 ), (D2 )). The pair (P2 ), (D2 ) satisfies: (i) For each h ∈ AD with the corresponding s ∈ S, it holds that s(h) ≤ s(f (x)) for every x ∈ A,

(3.77)

f (A) ∩ (AD − (C \ {0})) = ∅.

(3.78)

(ii) Proof. (i) Taking into account the definition of the dual feasible set AD , for all h ∈ AD with the corresponding s ∈ S and L ∈ Ms∗ , we obtain s(h) ≤ inf s(L(x, v)) ≤ s(L(x, v)) ≤ s(f (x)) for all x ∈ A. x∈M

(ii) Suppose that (3.78) does not hold. Then, there would be x0 ∈ A, h0 ∈ AD such that f (x0 ) ∈ h0 − (C \ {0}). With s0 ∈ S corresponding to h0 , we get

230

3 Optimization in Partially Ordered Spaces

s0 (f (x0 )) < s0 (h0 ) because s is strictly C-monotone. A contradiction to (3.77).



In the following strong (direct) duality assertion, we consider properly efficient elements z0 = f (x0 ), x0 ∈ A, of (P2 ) in the sense that there is an s0 ∈ S such that s0 (f (x0 )) = inf{s0 (f (x)) | x ∈ A} > −∞. Theorem 3.8.7 (Strong (direct) duality for (P2 ), (D2 )). If z0 = f (x0 ) with x0 ∈ A is properly efficient for (P2 ), then it is efficient for (D2 ), i.e., z0 ∈ Eff Max (AD , C). Proof. Let f (x0 ) be a properly efficient element of the primal problem (P2 ). With the corresponding element s0 ∈ S, we can choose the Lagrangian L0 ∈ Ms∗0 such that s0 (f (x)) if g(x) ∈ CV , s0 (L0 (x, v)) = +∞ otherwise. Using this Lagrangian we get s0 (f (x0 )) = inf s0 (f (x)) x∈A  = inf inf s0 (L0 (x, v)) , x∈A

inf

x∈M \A

 s0 (L0 (x, v))

= inf s0 (L0 (x, v)). x∈M

This yields f (x0 ) ∈ AD . From the weak duality assertion in Lemma 3.8.6(ii), we can conclude that f (x0 ) is an efficient element for (D2 ), i.e., f (x0 ) ∈  Eff Max (AD , C). The point of the last proof is that there are Lagrangians L0 and functionals s0 such that a possibly nonlinear Lagrangian works. So an essential task is to find sufficiently practicable s0 and L0 for special optimization problems such as (P3 ) or (P4 ) below. Next, we would like to complete the study of (P2 ) and (D2 ) with a converse duality theorem. At first we show that efficient elements h ∈ AD can be characterized by scalarization. Lemma 3.8.8. h0 ∈ Eff Max (AD , C) if and only if sh (h) ≤ sh (h0 ) for all h ∈ AD , where sh corresponds to h according to the definition of AD in (D2 ). Proof. (a) If h0 ∈ Eff Max (AD , C) and h ∈ h0 + C \ {0}), then h ∈ AD . Therefore, for all s ∈ S and for all L ∈ Ms∗ , s(h) > inf{s(L(x, v)) | x ∈ M }. Applying sh (corresponding to h ∈ AD ) to h as above, we get

(3.79)

3.8 Duality

231

sh (h0 ) = inf{sh (h) | h ∈ h0 + (C \ {0})} ≥ inf{sh (L(x, s)) | x ∈ M } ≥ sh (h), as claimed, where the first inequality follows from (3.79). (b) If h0 ∈ AD but h0 ∈ Eff Max (AD , C), then there is h ∈ AD such that h ∈ h0 + (C \ {0}). Therefore s(h) > s(h0 ) for all s ∈ S, contrary to our assumption.  Additionally, a strong converse duality statement holds. Theorem 3.8.9 (Strong (converse) duality for (P2 ), (D2 )). Under the conditions given for the pair (P2 ), (D2 ) and additionally, if P := f (A) + C is closed and if for all s ∈ S such that inf{s(f (x)) | x ∈ A} > −∞ there is an x0 ∈ A with inf{s(f (x)) | x ∈ A} = s(f (x0 )), then h ∈ Eff Max (AD , C), i.e., h is an efficient element of the dual problem (D2 ), implies that h is a properly efficient element of the primal problem (P2 ). Proof. Let h be an element of Eff Max (AD , C). If h did not belong to P , there would be an open convex neighborhood U of h with U ∩ P = ∅. Even (U − C) ∩ P = ∅; otherwise, there would exist u ∈ U, k ∈ C such that u − k ∈ P = f (A) + C, which means that u ∈ f (A) + C + k = P . Now we consider the set B := U −C. Obviously, B is open and convex and B −C = B, and so B − C ⊂ B − C ⊂ B. Theorems 2.4.1 and 2.4.19 deliver an s1 ∈ S such that s1 (p) ≥ 0 > s1 (u) for all p ∈ P and for all u ∈ B. We have h ∈ B, since 0 ∈ C, so there is a number γ with ∀p ∈ P :

s1 (h) < γ < 0 ≤ s1 (p).

From weak duality we get for s2 ∈ S, ∀p ∈ P :

s2 (h) ≤ s2 (p).

Taking λ ∈ (0, 1) we consider sλ := λs2 + (1 − λ)s1 . Then sλ is again in S, and sλ (h) = s1 (h) + λ(s2 (h) − s1 (h)), ∀ x ∈ A : sλ (f (x)) = λ(s2 (f (x))) + (1 − λ)(s1 (f (x))), ≥ λs2 (f (x)) ≥ λs2 (h), since f (x) ∈ P . Now we choose λ sufficiently small; then not only s1 (h) < γ but sλ (h) < γ, and from γ < 0 it follows that γ < λs2 (h), so sλ (h) < γ < λs2 (h) ≤ sλ (f (x)) for all x ∈ A. This means that inf{sλ (f (x)) | x ∈ A} ≥ γ, and the assumptions give an x0 ∈ A such that sλ (h) < γ ≤ inf{sλ (f (x)) | x ∈ A} = sλ (f (x0 )). But strong duality now demands f (x0 ) ∈ AD , and Lemma 3.8.8(i) requires sλ (f (x0 )) ≤ sλ (h), a contradiction. So h ∈ P = f (A) + C. Weak duality in Lemma 3.8.6 (ii) gives h ∈ f (A). Finally, Lemma 3.8.6 (i) yields that h is a  properly efficient element of the primal problem (P2 ).

232

3 Optimization in Partially Ordered Spaces

Remark 3.8.10. Recently, duality assertions related to approximate proper solutions of vector optimization problems are derived by Guti´errez, Huerga, Novo, and Tammer in [243]. 3.8.3 Duality for Approximation Problems As an example for useful duality theory and because of its practicability, we now give duality theorems for a general approximation problem and a corresponding dual problem. To this end, we consider (P1 ) in the following special form: Determine Eff Min (f (A), K)

(P3 )

with A := {x ∈ CX | B(x) − b ∈ CV }, where now X is a linear normed space, equipped with a proper, closed, convex cone CX , V , V is a reflexive Banach space, CV a cone in V that is convex and closed, B ∈ L(X, V ), Y = Rp , partially ordered by a closed convex cone K such that K + Rp+ ⊆ K and int K + = ∅, and f may have the form ⎞ ⎛ α1 A1 (x) − a1  ⎠ ... (3.80) f (x) = C(x) + ⎝ αp Ap (x) − ap  with C ∈ L(X, Rp ), αi ≥ 0 real (i = 1, . . . , p), ai ∈ U , U is a given real normed space, Ai ∈ L(X, U ), i = 1, . . . , p, and A∗i denotes the adjoint operator to Ai . For brevity and clarity we sometimes leave out the parenthesis in connection with A∗i . On the one hand, (P3 ), although a special case of (P1 ), itself contains many important special cases: (i) (P3 ) is a semi-infinite linear problem if αi = 0 for all i = 1, . . . , p. (ii) f can be interpreted as a Lipschitz perturbed linear problem. (iii) For the case C = 0, Ai (i = 1, . . . , p) are identical operators, (P3 ) is a vector-valued location problem. (iv) Consider C(x) → min, x ∈ A0 = {x ∈ A = ∅ | Ai (x) = ai , i = 1, . . . , p} = ∅. Then (3.80) is a parameterized surrogate problem. (v) The general vector-valued approximation problem or location problem dealt with in Sections 5.1 and 5.3. On the other hand, (P3 ) can be generalized considerably if we assume ai to be variable in a nonempty set Wi ⊂ Ui (i = 1, . . . , p), Ui instead of a fixed space U , so that we get an optimization problem (P4 ): Determine Eff Min (f (A), K), where

(P4 )

3.8 Duality

233

A = {(x, a) | x ∈ CX , a = (a1 , . . . , ap ), ai ∈ Wi , i = 1, . . . , p, B(x)−b ∈ CV }, ⎛

⎞ ... f (x, a) = C(x) + ⎝ αi Ai (x) − ai Ui ⎠ . ...

and

We get for this vector-valued optimization problem a useful dual problem (D4 ), which satisfies together with (P4 ) the above-defined conditions for weak and strong duality. Moreover, in Section 5.3 a practicable algorithmic procedure will be constructed and tested. We introduce a suitable Lagrangian Lλ0 to (P4 ) for a given λ0 ∈ int K + : ⎞ ⎛ ⎞⎞ ⎛⎛ ... ... Lλ0 (x, a, Y, u∗ ) = λ0 ⎝⎝αi Y i (ai − Ai (x))⎠ + ⎝C i (x)⎠⎠ + u∗ (b − B(x)), ... ... where (x, a) ∈ M = {(x, a) | x ∈ CX , ai ∈ Wi } and (Y, u∗ ) ∈ M 0∗ := {(Y, u∗ ) | Y = (Y 1 , . . . , Y p ), Y i ∈ L(Ui , R), u∗ ∈ L(V, R), u∗ ∈ CV+ , αi Y i ∗ ≤ αi , i = 1, . . . , p}. From this setting two results follow immediately: sup (Y,u∗ )∈M 0∗

Lλ0 (x, a, Y, u∗ )  =

αi ai − Ai (x) + λ0 C(x) if B(x) − b ∈ CV , (3.81) +∞ otherwise, 0 i λi

and inf

(x,a)∈M

Lλ0 (x, a, Y, u∗ )



=

0 i λi

−∞

inf ai ∈Wi (αi Y i (ai )) + u∗ (b)



+ if λ0i (−αi Ai∗ Y i + C i∗ ) − B ∗ (u∗ ) ∈ CX , otherwise.

(3.82) To obtain a dual problem to (P4 ), we recall (D2 ) and define Mλ∗0 to be    + Mλ∗0 := (Y, u∗ ) ∈ M 0∗ | . λ0i (−αi Ai∗ Y i + C i∗ ) − B ∗ (u∗ ) ∈ CX So we are able to give the following dual problem: Determine where

Eff Max (AD , K),

(D4 )

234

3 Optimization in Partially Ordered Spaces

 AD = d ∈ Y | ∃ λ ∈ int K + , ∃ (Y, u∗ ) ∈ Mλ∗ : λ(d) = It follows that λ(d) =



inf

(x,a)∈M

 Lλ (x, a, Y, u∗ ) .

λi iinf (αi Y i (ai )) + u∗ (b). a ∈Wi

i

In order to descalarize the last line we consider the set M ∗ and use Lemma 3.8.11, which is related to a result given by Jahn [306]: M ∗ = {(Y, Z) | Y = (Y 1 , . . . , Y p ), Y i ∈ L(Ui , R), i = 1, . . . , p, Z ∈ L(V, Rp ), ∃ λ∗ ∈ int K +

such that

p 

+ λ∗i (−αi Ai∗ Y i + C i∗ ) − (Z(B))∗ (λ∗ ) ∈ CX ,

i=1 ∗



Z (λ ) ∈

CV+ , αi Y i ∗

≤ αi , i = 1, . . . , p}.

Moreover, consider the sets D1 and D2 : • D ∈ Rp | ∃ λ∗ ∈ int K + and ∃ (Y, u∗ ) ∈ Mλ∗∗ such that λ∗ (d) = 1p := {d ∗ i i ∗ λ (α i Y (a )) + u (b)}, i=1 i ⎞ ⎛ ...   • D2 := d ∈ Rp | ∃ (Y, Z) ∈ M ∗ such that d = ⎝ αi Y i (ai ) ⎠ + Z(b) , ... where d in D2 satisfies a descalarized equation. Lemma 3.8.11. For the sets D1 , D2 as defined above, D2 ⊆ D1 . Moreover, if b = 0, even D2 = D1 . Proof. (a) Let d be in D2 . According to the definition of M ∗ there is Z ∗ (λ∗ ) ∈ CV+ , λ∗ ∈ int K + , Y i ∈ L(Ui , R) (i = 1, . . . , p), Z ∈⎛L(V, Rp ), ⎞ ... αi Y i ∗ ≤ αi (i = 1, . . . , p), such that t := d − ⎝ αi Y i (ai ) ⎠ = Z(b) and ... p + ∗ i∗ i i∗ ∗ ∗ CX . Then λ∗ (t) = u∗ (b) with i=1 λi (−αi A Y + C ) − (Z(B)) (λ ) ∈ ∗ ∗ ∗ ∗ ∗ u = Z (λ ) ∈ L(V, R), and so λ (d) = λi (αi Y i (ai )) + u∗ (b), which means that d ∈ D1 . (b) Let d be in D1 . Therefore there exist λ∗ ∈ int K + , Y i ∈ L(Ui , R) (i = 1, . . . , p), u∗ ∈ L(V, R), u∗ ∈ CV+ such that  + λ∗i (−αi Ai∗ Y i + C i∗ ) − B ∗ (u∗ ) ∈ CX (3.83) and λ∗ (d) =

p 

λ∗i (αi Y i (ai )) + u∗ (b).

(3.84)

i=1

Since b = 0, (3.84) leads to the existence of Z ∈ L(V, Rp ) with (see Jahn [306])

3.8 Duality



235



... d = ⎝ αi Y i (ai ) ⎠ + Z(b) ...

(3.85)

and Z ∗ (λ∗ ) = u∗ . Then from (3.83) and u∗ ∈ CV+ it is clear that d ∈ D2 . To understand where the existence of Z comes from, we consider once more λ∗ (t) = u∗ (b), which follows from (3.84). It is well known that if b = 0 and λ∗ = 0, then one can define an operator T ∈ L(V, Rp ) such that T ∗ (λ∗ ) = u∗ ∗ (·) and t = T (b): If u∗ (b) = 0, take T (·) = uu∗ (b) t over V ; if u∗ (b) = 0, then ∗ ∗ consider λ = 0. For λ there exists t such that λ∗ (t0 ) = 1. Since b = 0, there ˜∗ (b) = 1. Then take is u ˜∗ ∈ L(V, R) that separates b and 0, i.e., u T (·) = u∗ (·)t0 + u ˜∗ (·)t over V.



If b = 0, we obtain from (D4 ) , regarding Lemma 3.8.11, the following dual problem (D4 ) to (P4 ): Determine where

Eff Max (f ∗ (M ∗ ), K),

⎞ ... ⎜ inf αi Y i (ai ) ⎟ ∗ ⎠ + Z(b) =: f (Y, Z) ⎝ ai ∈Wi ...

(D4 )



for

(Y, Z) ∈ M ∗ .

For an example consider Section 5.3.2 or recall the dual problem for a common finite-dimensional linear vector optimization problem. Example 3.8.12. Consider instead of (P3 ) the following scalar optimization problem: p  c(x) + αi Ai (x) − ai  → min i=1

subject to

(P5 )

x ∈ CX , B(x) − b ∈ CV , where X, V, CX , CV , Ai , ai (i = 1, . . . , p) as above, ai fixed, c ∈ L(X, R1 ), and αi > 0 ∀ i = 1, . . . , p. Since the sum in (P5 ) is really a norm, (P5 ) is an example of (P3 ). Then, we obtain a dual problem (D5 ) as a special case of (D4 ):

236

3 Optimization in Partially Ordered Spaces p 

αi y i (ai ) + z(b) → max

i=1

subject to y = (y 1 , . . . , y p ) ∈ L(U, R)p , y ∗ ≤ 1 for all i, z ∈ L(V, R), z ∈ i

p 

(D5 ) CV+ ,

+ (−αi Ai∗ (y i ) + c) − B ∗ (z) ∈ CX .

i=1

The problem (D5 ) follows immediately from (D4 ), since y∗ ≤ 1 in (D4 ) means that maxi=1,...,p y i ∗ ≤ 1 and so y i ∗ ≤ 1 for all i, because the maximum norm is suitable as a dual norm to a sum of norms. Example 3.8.13. The ordinary scalar classic location problem (Fermat’s problem) is contained in (P5 ): X, α1 , . . . , αp , a1 , . . . , ap as above, a1 , . . . , ap fixed, p 

αi x − ai  → min

i=1

subject to x ∈ X, p 

(P6 )

αi y i (ai ) → max

i=1

subject to y i ∈ X ∗ , y i ∗ ≤ 1 for all i, p 

(D6 )

αi y i = 0.

1

Taking as A ⊂ X a closed linear subspace, we get x − a → min subject to x ∈ A, y(a) → max subject to

(P7 )

(D7 )



y ∈ A , y∗ ≤ 1, where A⊥ is the annihilator to A : A⊥ = {y ∈ X ∗ | ∀ x ∈ A : y(x) = 0}.

3.8 Duality

237

(P5 ) can be generalized in another way, too (see Wanka [554, 555]). Wanka considers the following scalar optimization problem: p 

αi x − ai βi → min

i=1

(P8 )

subject to x ∈ U,

where βi ≥ 1, αi > 0 for all i = 1, . . . , p, U a convex closed subset of a reflexive Banach space X, and a1 , . . . , ap given elements of X. Then he studies the dual problem (D8 ), which contains our problem (D6 ), using Fenchel–Rockafellar theory instead of Lagrange theory. He also considers the vector-valued variant of (P8 ) together with a linear mapping as in (P3 ). We come back to these general problems in Section 5.1. The dual problem to (P8 ) is: p 

αi (1 − βi )y  i

βi 1−βi

+

i=1 βi >1

p 

αi βi y (a ) − sup i

i=1

i

m 

x∈U i=1

αi βi y i (x) → max

subject to y i ∈ X ∗ for all i, y i ∗ ≤ 1 for i with βi = 1. (D8 ) Taking into account the fact that we constructed (D4 ) as in Section 3.8.2, weak, strong, and converse duality follow immediately. We use the chosen Lagrangian and the saddle point theorem of convex programming. For simplicity we assume ai ∈ W i = {ai }; therefore, we write x ∈ A instead of (x, a) ∈ A. So, we consider the dual problem to (P3 ) as Determine where

Eff Max (f ∗ (M ∗ ), K),

(D3 )



⎞ ... ⎝ αi Y i (ai ) ⎠ + Z(b) =: f ∗ (Y, Z) ...

for

(Y, Z) ∈ M ∗ .

Theorem 3.8.14 (Weak duality for (P3 ), (D3 )).   f (A) ∩ f ∗ (M ∗ ) − (K \ {0}) = ∅.

(3.86)

Now we derive a strong duality assertion for properly efficient elements f (x0 ) ∈ f (A) in the sense that there is λ0 ∈ int K + with λ0 (f (x0 )) = inf{λ0 (f (x)) : x ∈ A}.

238

3 Optimization in Partially Ordered Spaces

Theorem 3.8.15 (Strong (direct) duality for (P3 ), (D3 )). Let U be a reflexive Banach space and b = 0. Then, for every properly efficient f (x0 ) of (P4 ) there exists an efficient element f ∗ (Y 0 , Z 0 ) of (D4 ) such that f (x0 ) = f ∗ (Y 0 , Z 0 )

(3.87)

if the following condition is satisfied: There is a feasible element x of (P4 ) with B(x) − b ∈ int CV , and for every λ∗ ∈ int K + there is a feasible pair + -inclusion in (D4 ) is satisfied with respect to (Y , Z) of (D4 ) such that the CX + int K . Proof. Since f (x0 ) is properly efficient, there is λ0 ∈ int K + such that λ0 (f (x0 )) = inf{λ0 (f (x)) : x ∈ A}. Then (3.81) gives λ0 (f (x0 )) = inf λ0 (f (x)) = inf x∈A

sup

x∈M (Y,u∗ )∈M 0∗

Lλ0 (x, Y, u∗ ).

We use the saddle point theorem of convex programming (see [582]) and have that there is (Y 0 , u∗0 ) contained in Mλ∗0 such that for x ∈ M and (Y, u∗ ) ∈ Mλ∗0 ,

Lλ0 (x, Y 0 , u∗0 ) ≥ Lλ0 (x0 , Y 0 , u∗0 ) ≥ Lλ0 (x0 , Y, u∗ ). (3.88)  ∗ 0 ∗0 i0 i ∗0 Then (3.82) gives inf x∈CX Lλ0 (x, Y , u ) = λi (αi Y (a )) + u (b), so we get p  λ0 (f (x0 )) = λ0i (αi Y i0 (ai )) + u∗0 (b) (3.89) i=1

or using Lemma 3.8.11 its vector-valued variant (3.87). Considering (D4 ), we see that the proof is complete.  Our next aim is a converse duality theorem. To attempt this we use the scalarization Lemma 3.8.8. In our convex case it reads as follows: Lemma 3.8.16. d0 ∈ Eff Max (D1 , K) if and only if λd (d0 ) ≥ λd (d) for all d ∈ D1 and λd ∈ int K + such that for every (Y, u∗ ) ∈ Mλ∗d ,  λd,i (αi Y i (ai )) + u∗ (b). λd (d) = With the help of the last lemma a converse duality theorem follows. We again suppose W i = {ai }, i = 1, . . . , p. Theorem 3.8.17 (Strong (converse) duality for (P3 ), (D3 )).We assume b = 0, int K = ∅, f (A) + K is closed, ∃ x ∈ A with Bx − b ∈ int CV+ , for every λ∗ ∈ int K + with inf{λ∗ (f (x)) | x ∈ A} > −∞ there exists (Y , Z) ∈ M ∗ such that  i + λ∗i (−αi Ai∗ Y + C i∗ ) + (Zb)∗ (λ∗ ) ∈ int CX and additionally an xλ ∈ A with inf{λ∗ (f (x)) | x ∈ A} = λ∗ (f (xλ )). Then to every f ∗ (Y 0 , Z 0 ) that is efficient with respect of (D3 ) there is a properly efficient element f (x0 ) of (P3 ) such that f ∗ (Y 0 , Z 0 ) = f (x0 ).

(3.90)

3.9 Vector equilibrium problems and related topics

239

We would like to emphasize that both strong duality Theorems 3.8.15 and 3.8.17 state duality in a nonscalarized form. Proof. Let f ∗ (Y 0 , Z 0 ) be maximal with respect to M ∗ . Then it is also maximal w.r.t. the set D2 , and since b = 0, also with respect to D1 ; hence Lemma 3.8.16 yields (3.91) λd (f ∗ (Y 0 , Z 0 )) ≥ λd (d) for all d ∈ D1 with λd ∈ int K + belonging to d. From (3.91) we get d0 := f ∗ (Y 0 , Z 0 ) ∈ P := f (A) + K.

(3.92)

Otherwise, d0 ∈ f (A) + K. Now, separating P and d0 by a continuous linear functional λ∗1 ∈ Rp \ {0}, we proceed as in the proof of Theorem 3.8.9 and get d0 = f ∗ (Y 0 , Z 0 ) = f (x0 ) for a suitable x0 ∈ A. We observe that f (x0 ) is even properly minimal w.r.t.  (P4 ) with a λ∗ ∈ int K + that belongs to (Y 0 , Z 0 ). In Section 3.12 it is shown that the saddle point theory w.r.t. (P4 ) can be extended considerably: We use sublinear functionals instead of norms and approximate efficiency instead of efficiency.

3.9 Vector Equilibrium Problems and Related Topics It is well known that optimization and nonlinear analysis are two branches of modern mathematics much developed lately. The results obtained in these areas use certain kinds of differentiability (directional derivative, subgradient, generalized subgradients, etc.), certain generalizations of convexity, geometric methods (cone of feasible directions, normal and tangent cones, etc.), game theory, fixed point theory, topological degree, etc. The reader can find extensive expositions about these topics in a series of books of Zeidler [583] entitled Nonlinear Analysis and Its Applications, and those of Aubin and Ekeland [35], Aubin and Frankowska [36], and Rockafellar and Wets [490]. In the same way, equilibrium problems have come to play a central role in the development and unification of diverse areas of mathematics and physical sciences. Thus various problems of practical interest in optimization, variational inequalities, complementarity, economics, Nash equilibria, and engineering involve equilibrium in their description. The literature on equilibrium problems and treatment in these areas is vast; see [37], [73], [112], [191], [322], [323], [575] and so on. These sources provide extensive references and extended bibliographies for the reader wishing to explore the topic in depth. Our objectin this section is topresentadeveloped investigation in generalized vector equilibrium problems, which embody at least vector optimization problems and vector variational inequalities. The vector variational inequalities have

240

3 Optimization in Partially Ordered Spaces

been widely developed in recent years, and various solutions have been characterized and computed. These were first introduced by Giannessi [216] and further developed by many authors; see, for instance, [17], [119], [122], [129], [369], [505], [506], and precisely, [217] in different areas. Recent topics attracting considerable attention are equilibrium problems for vector-valued mappings, see [445], [446], [251]. Inspired by the scalar case, such problems have received different developments depending on the kind of order space where these have been considered. Some recent papers may be grouped in the following way: 1. Theory of vector optimization, vector equilibrium problems, and vector variational inequalities (Ansari [17, 18]; Ansari and Siddiqi [19]; Ansari, Siddiqi and Yao [16]; Bianchi, Hadjisvvas and Schaible [65]; Blum and Oettli [72, 73]; Chen and Craven [121, 122]; Fan [189]; Fu [201]; Kalmoun, Riahi and Tanaka [321–324]; Konnov and Yao [348]; Lee and Kum [371]; Lee, G.M., Kim and Lee, B.S. [365–368]; Lee, G.M., Kim and Lee, B.S., Cho [369]; Lee and Kum [372]; Li, Yang and Chen [376]; Lin, Yang, Yao [379]; Nagurney, N. and Zhao, L., [424], Oettli [444]; Samuelson [495], Siddiqi, Ansari, Ahmad [505]; Yang [569]; Yang and Chen [570]; Yang and Goh [571], Yu and Yao [574]). 2. Existence of solutions for generalized vector variational inequalities and complementarity problems (Chen [119]; Chen and Hou [124, 128]; Danilidis and Hadjisavvas [151]; Isac and Yuan [297]; Kazmi [327]; Lai and Yao [363]). 3. Vector variational inequalities and vector equilibrium problems with multifunctions (Ding and Tarafdar [159]; Kalmoun and Riahi [321, 322]; Siddiqi, Ansari, Khan [506]; Song [510]). 4. Vector variational inequalities, vector optimization, and scalarization (Chen, G.-Y., Chen, G.M. [120]; Chen and Craven [122]; Giannessi, Mastroeni and Pellegrini [218]; Goh and Yang [223]; Lee, G.M., Kim, Lee, B.S., Yen [370]). 5. Monotone vector variational inequalities (Chowdhury and Tan [134]; Hadjisavvas and Schaible [251], Ding and Tarafdar [160]). We propose two approaches to establish the existence of solutions of equilibrium problems in the vector case. The first one directly uses a generalization of the well-known lemma of Knaster, Kuratowski, and Mazurkiewicz (KKM lemma) as proposed by Br´ezis, Nirenberg, and Stampacchia [95]. The second one, as proposed by Oettli [444], leads in a straightforward way to existence results for vector equilibrium problems from the results about scalar case. A key tool for the study of such problems is an appropriate gauge function. We will see in subsequent sections that an overwhelming majority of vector problems of potential interest in nonlinear analysis can be cast in the form of vector equilibrium problems. In this section our attention is focused on the existence of a generalized vector equilibrium.

3.9 Vector equilibrium problems and related topics

241

3.9.1 Vector Equilibrium Problems Let us agree to define a standard scalar (= real) equilibrium problem as follows. Given a nonempty subset M of a real topological vector space X, and a real function f defined on M × M , (EP ) find at least one point x ∈ M satisfying f (x, y) ≤ 0 for all y ∈ M. Let us see whether this scalar equilibrium problem can be formulated for the case of vector-valued mappings. Given the following: • Y a real topological vector space; • M a nonempty subset of X, called the set of feasible decisions; • a multifunction P : M ⇒ Y that is used for an outcome y at a decision event x: for y1 , y2 ∈ Y , y1 dominates y2 at the decision event x ∈ M iff y1 − y2 ∈ P (x); • a multifunction f : M × M ⇒ Y , called the criterion mapping. We shall assume for the multifunctions P and f that dom P = M and dom f = M × M . Inspired by the scalar equilibrium problem, the generalized vector equilibrium problems (in short GVEP) we can consider can be generalized in several possible ways, for instance, to establish the existence of some feasible decision x ∈ M such that

1. f (x, y) P (x) = ∅ for all y ∈ M ; ⊂ P (x) for all y ∈ M ; 2. f (x, y)

3. f (x, y) − int P (x) = ∅ for all y ∈ M ;

⊂ − int P (x) for all y ∈ M ; 4. f (x, y)

5. f (x, y) − (P (x) \ {0}) = ∅ for all y ∈ M ; 6. f (x, y) ⊂ − (P (x) \ {0}) for all y ∈ M . Proposition 3.9.1. If we denote the solution set of the problem (GVEPi ) by S(f,P)i , for i = 1, . . . , 6, respectively, we have (a) S(f, P )2 ⊂ S(f, P )1 , S(f, P )3 ⊂ S(f, P )4 and S(f, P )5 ⊂ S(f, P )6 ; / int P (x); (b) S(f, P )5 ⊂ S(f, P )3 and S(f, P )6 ⊂ S(f, P )4 if 0 ∈ (c) S(f, P )2 ⊂ S(f, P )3 and S(f, P )1 ⊂ S(f, P )4 , if ∀ x ∈ M , P (x) is wpointed, i.e., P (x) ∩ − int P (x) = ∅; (d) S(f, P )2 ⊂ S(f, P )5 and S(f, P )1 ⊂ S(f, P )6 , if ∀ x ∈ M , P (x) is pointed, i.e., P (x) ∩ −P (x) = {0}; (e) S(f, P )4 ⊂ S(f, P )1 and S(f, P )3 ⊂ S(f, P )2 if ∀ x ∈ M , the complement of P (x) is included in − int P (x). The proof of this proposition, which can be argued directly from the definition of the problems (GVEPi ), for i = 1, . . . , 6, can be omitted.

242

3 Optimization in Partially Ordered Spaces

Remark 3.9.2. (1) Combining (c) and (e) we can state, under the condition that the sets P (x) are connected, i.e., P (x) ∩ − int P (x) = ∅ and P (x) ∪ − int P (x) = Y , that S(f, P )2 = S(f, P )3 and S(f, P )1 = S(f, P )4 . (2) Taking M = Y = R, P (x) = [0, ∞[ and f (x, y) = {−1, 1} for all x, y ∈ Y , one can see that S(f, P )1 = S(f, P )4 = S(f, P )6 = Y and S(f, P )2 = S(f, P )3 = S(f, P )5 = ∅. This example contradicts the converse inclusions in (a). (3) If we suppose that P (x) \ {0} is open in Y , then the converse inclusions in (b) are valid, i.e., S(f, P )5 = S(f, P )3 and S(f, P )6 = S(f, P )4 . Remark 3.9.3. When f is a single-valued function, these problems (GVEPi ) can be reduced to (GVEP1 , GVEP2 ) f (x, y) ∈ P (x)

for all y ∈ M ;

(GVEP3 , GVEP4 ) f (x, y) ∈ − int P (x)

for all y ∈ M ;

(GVEP5 , GVEP6 ) f (x, y) ∈ − (P (x) \ {0}) for all y ∈ M. Remark 3.9.4. If f is as in the last remark, P =] − ∞, 0], and Y = R, all the problems (GVEPi ), for i = 1, . . . , 6, are reduced to the scalar equilibrium problem (EP). In order to emphasize the relationship to vector optimization we decide to consider here the fourth and sixth generalized vector equilibrium problems (WGVEP) find x ∈ M such that f (x, y) ⊂ − int P (x) ∀ y ∈ M and (SGVEP) find x ∈ M such that f (x, y) ⊂ − (P (x) \ {0}) ∀ y ∈ M . We mean by (WGVEP) (resp. (SGVEP)) weak (resp. strong) generalized vector equilibrium problem. 3.9.2 General Vector Monotonicity Recently, much attention has been given to the development of different monotonicity notions. These monotonicity notions permit one to lighten the topological requirements needed to establish a solution of vector equilibrium problems, as compared with the nonmonotone case. Let us recall some of them. Definition 3.9.5. Let X, Y be two vector spaces, M a nonempty subset of X. Let f and g be two multifunctions from M × M into Y . Let K and L be two multifunctions from M into Y . (a) The pair (f, g) is said to be (K, L)-monotone if for all x, y ∈ M, f (x, y) ⊂ K(y) =⇒ g(x, y) ⊂ L(y).

3.9 Vector equilibrium problems and related topics

243

(b) f is said to be K-monotone if for all x, y ∈ M , one has f (x, y)+f (y, x) ⊂ K(x) ∩ K(y). Remark 3.9.6. If g(x, y) = f (y, x) and K(y) + K(y) ⊂ K(y) for every x, y ∈ M , then f is K-monotone implies that the pair (f, g) is (−K, K)-monotone. The definition of (K, L)-monotonicity of a pair (f, g) unifies several notions of monotonicity in the literature. Let us cite some of them: Given a closed convex cone P with nonempty interior int P ; (1) if g(x, y) = f (y, x), then (int P, − int P )-monotonicity of the pair (f, g) is just the P -pseudomonotonicity of f , as introduced in [95], [443]; (2) in case g(x, y) = f (y, x) is a single-valued function, the above definition of (int P, −P )-monotonicity of the pair (f, g) reduces to that of P -quasimonotonicity of f , as introduced in [65], [251]. If we take in the last definition Y to be R and P = R− , we obtain the definitions, which have been introduced by Bianchi and Schaible in [66], of pseudomonotonicity, i.e., f (x, y) ≥ 0 implies f (y, x) ≤ 0 for each x, y ∈ K, and quasimonotonicity, i.e., f (x, y) > 0 implies f (y, x) ≤ 0 for each x, y ∈ K. 3.9.3 Generalized KKM Lemma The classical Knaster–Kuratowski–Mazurkiewicz lemma (the KKM Lemma) is of fundamental importance in nonlinear analysis, game theory, economics, optimization theory, and variational inequalities. Many generalizations of the KKM lemma, beginning with Fan’s result [189] (Fan–KKM lemma), have been given (see [119], [545], and the quoted bibliography). Most of the results obtained are based on the assumption that the considered multifunctions have closed values. If M is a nonempty subset and x ∈ M , we shall denote by F(M ) (respectively, F(M, x)) the family of all nonempty finite subsets of M (respectively, of M containing x). If X and Y are two subsets of a vector space with X convex and S a multifunction defined from X into Y , then S is said to be a KKM mapping if for every A ∈ F(X), conv(A) ⊂ ∪x∈A S(x) = S(A). Note that if S is a KKM mapping, then x ∈ S(x) for all x ∈ X. If S and T are multifunctions defined from X into Y (i.e., dom S = X and dom T = X), then S is said to be a KKM mapping (resp. weak KKM mapping) w.r.t. T if for each A ∈ F(X) (resp. for each A ∈ F(X) and for each x ∈ conv(A)), T (conv(A)) ⊆ S(A) (resp. T (x) ∩ S(A) = ∅). Remark 3.9.7. (1) KKM mappings w.r.t. other multifunctions were first introduced by Park [456][S. Park, Generalizations of Ky Fan’s matching theorems and their applications, J. Math. Anal. Appl. 141 (1989) 164–176], and followed by some others [115][Chang, T.H., Yen, C.L.: KKM property and fixed point theorems. J. Math. Anal. Appl. 203, 224–235 (1996)].

244

3 Optimization in Partially Ordered Spaces

(2) Clearly each KKM mapping S w.r.t. T is a weak KKM mapping w.r.t. T . We will next deal with the Fan–KKM lemma. Lemma 3.9.8. (Fan–KKM Lemma, [189, Lemma 1]) Let X be a nonempty convex subset of a Hausdorff topological vector space E, and S a KKM mapping defined from X into E. Suppose that (i) (compactness condition) ∃x0 ∈ X such that S(x0 ) is compact in Y ; (ii) (closedness condition) ∀x ∈ X, S(x) is closed in Y .

Then x∈X S(x) = ∅. Here, by relaxing the closedness condition, we begin by stating the following generalized Fan’s theorem, which will play a crucial role in proving the main existence result. Lemma 3.9.9. Let X be an arbitrary nonempty convex subset of a Hausdorff topological vector space E, x0 ∈ X and T a KKM mapping defined from X into E. Suppose that for each A ∈ F(X, x0 ) and for FA = conv(A), one has: (i) T (x0 ) ∩ X is relatively compact in X; (ii) for every x ∈ FA , the intersection T (x) ∩ FA is closed in FA ; (iii) clX (X ∩ (∩x∈FA T (x))) ∩ FA = (∩x∈FA T (x)) ∩ FA . Then the intersection of all subsets T (x), for x ∈ X, is nonempty. Proof. For A ∈ F(X, x0 ) consider FA = conv(A) ⊂ X and SA a multifunction defined on FA by SA (x) := T (x) ∩ FA . Note that SA (x) is closed, and so SA is compact-valued. We claim that SA is a KKM mapping. Indeed, for ∅ = B ⊂ FA , B finite, we have that conv(B) ⊂ (∪x∈B T (x)) ∩ FA = ∪x∈B (T (x) ∩ FA ) = ∪x∈B SA (x). Using the Fan–KKM lemma (Lemma 3.9.8), there exists xA ∈ ∩x∈FA SA (x) ⊂ (∩x∈FA T (x)) ∩ X ⊂ T (x0 ) ∩ X. Consider UA := clX {x B | B ∈ F(X, x0 ), 

 A ⊂ B} ⊂ clX ( x∈FA T (x)) ∩ X ⊂ clX (T (x0 ) ∩ X). The family {UA | A ∈ F(X, x0 )} has the finite-intersection property. Since clX (T (x0 ) ∩ X) is compact, there exists some x ∈ ∩B∈F (X,x0 ) UB . Let x ∈ X and take A := {x, x, x0 } ∈ F(X, x0 ). Since x ∈ A ⊂ FA , then x ∈ UA ∩ FA ⊂ clX ((∩y∈FA T (y)) ∩ X) ∩ FA = (∩y∈FA T (y)) ∩ FA . Since x ∈ FA , we obtain x ∈ T (x). Therefore x ∈ ∩x∈X T (x).



3.9 Vector equilibrium problems and related topics

245

Remark 3.9.10. (1) Observe that in the case where T is closed-valued, this result becomes the Fan–KKM lemma. (2) This result is in effect a generalization of the Br´ezis–Nirenberg– Stampacchia’s theorem; see [95, p. 295]. Proposition 3.9.11. Let X be a convex subset of a Hausdorff topological vector space E, Y be a nonempty set and S, T : X ⇒ Y be two nonemptyvalued multifunctions satisfying the following conditions: (i) there exists x0 ∈ X such that T −1 (S(x0 )) is relatively compact in X; (ii) for each u ∈ X and each A ∈ F(X, x0 ), the set T −1 (S(u)) ∩ conv(A) is closed in conv(A); (iii) S is a weak KKM mapping w.r.t. T . Then, there exists an x ¯ ∈ X such that T (¯ x) ∩ S(u) = ∅, ∀u ∈ X. Proof. Consider the multifunction R defined on X by R(u) := {x ∈ X | T (x) ∩ S(u) = ∅}, for any u ∈ X. Observe that R(u) = T −1 (S(u)) for every u ∈ X, then we can transform the conditions of the proposition as follows: (i)r R(x0 ) is relatively compact in X; (ii)r for each A ∈ F(X, x0 ), the set R(u) ∩ conv(A) is closed in conv(A); (iii)r R is a KKM mapping; and then we have only to verify the condition (iii) of Lemma 3.9.9. To check it, we remark that for each u ∈ X and each A ∈ F(X, x0 ), we have R(u) = X ∩ R(u) is closed in conv(A). This implies ∩u∈conv(A)

R(u) is also closed in conv(A), which means (iii) of Lemma 3.9.9. Thus u∈X R(u) = ∅. We conclude that there exists x ¯ ∈ X such that for every u ∈ X, T (¯ x) ∩ S(u) = ∅.  Remark 3.9.12. When we suppose X to be convex compact, conditions of Proposition 3.9.11 are satisfied when S is a weak KKM mapping w.r.t. T and for each u ∈ X, the set T −1 (S(u)) is closed in X, and thus we find [38, Theorem 2] which is demonstrated differently when E is not a topological vector space, but only a generalized convex (G-convex) space (see, for example [457], for G-convex space). To prove the next associated Fan’s lemma we also need the following two lemmas (see [247, Lemma 2]). Lemma 3.9.13. Let K be an n-simplex and Y be a compact convex subset of a Hausdorff topological vector space. Suppose P : K ⇒ Y satisfies the following conditions: (i) P is a upper continuous on K;

246

3 Optimization in Partially Ordered Spaces

(ii) P (x) is nonempty, closed and convex in Y for each x ∈ K; (iii) p : Y → K is continuous. Then the composite P ◦ p has a fixed point. Proof. Let {σk | k ∈ In } be a barycentric subdivision of the n-simplex K (consists of finite family of n-simplices in K). For the k-th barycentric subdivision σk and ak ∈ σk a vertex, we choose a point bk ∈ P (ak ). Consider a continuous linear mapping ξk defined from K to Y such that bk := ξk (ak ). Since the composite map p ◦ ξk is continuous from the n-simplex K to itself, by Brouwer’s fixed point theorem, there exists xk ∈ K such that p ◦ ξk (xk ) = xk . Set yk = ξk (xk ), then xk = p(yk ). Since Y is compact, there exists a x) subsequence (yki ) which converges to some y¯ ∈ K. Let us prove that y¯ ∈ P (¯ where x ¯ = p(¯ y ). First, since p is continuous then limi xki = limi p(yki ) = p(¯ y) = x ¯. For each i, there exists a n-simplex {ai0 , · · · , ain } in σki that contains the point xki . Thus, for each i, we can write yki = σki (xki ) =

n 

γji σki (aij ) =

j=0

n 

γji bij ,

(3.93)

j=0

n i i i where γji ≥ 0, j=0 γj = 1, and bj := σki (aj ). But as each n-simplex is compact, by passing to a subsequence if necessary, we may assume that all the sequences (γji )i and (bij )i (for j = 0, · · · , n), respectively, converge to γ¯j and ¯bj (for j = 0, · · · , n). Passing to the limit in (3.93) (as i → +∞), we obtain y¯ =

n  j=0

γ¯j ¯bj with γ¯j ≥ 0,

n 

γ¯j = 1.

j=0

Since P is upper continuous, it follows from Proposition 2.7.12 (d) that P is closed and then y¯ ∈ P (¯ x). We conclude y¯ ∈ P ◦ p(¯ y ).  The next assertion is shown in [1, Lemma 3.1]. Lemma 3.9.14. Let X, Y be two nonempty convex subsets of a Hausdorff topological vector space E and P : X ⇒ Y . Suppose: (i) (ii) (iii) (iv)

Y is compact in E; P (x) is nonempty and convex for every x ∈ X; X \ P −1 (y) is convex for every y ∈ Y ; P is upper continuous on conv(A) for each A ∈ F(X).

Then x∈X P (x) = ∅.

Proof. We will use a proof by contradiction. Suppose that Y is covered by the family of open subsets O(x) := Y \ P (x), where x ∈ X. Since Y is assumed to be compact, let {O(xi ) | i = 0, . . . , n} be a finite subcover of Y .

3.9 Vector equilibrium problems and related topics

247

Let {pi | i = 0, . . . , n} be a continuous partition of unity subordinate to this finite covering. We introduce the function p : Y −→ K := conv({xi | i = 0, . . . , n}) ⊆ X defined by p(y) :=

n 

pi (y)xi .

i=1

This mapping is continuous on Y . Since P is upper continuous on conv({xi | i = 0, . . . , n}), then by using Lemma 3.9.13, there exists y¯ ∈ Y such that y¯ ∈ P ◦ p(¯ y ). y ) > 0; then Now, let I be the set of index i ∈ {0, . . . , n} for which pi (¯ y ) for all y¯ ∈ O(xi ) = Y \ P (xi ) for all i ∈ I, which means xi ∈ X \ P −1 (¯ −1 y ) we obtain p(¯ y ) ∈ X \ P (¯ y ), and i ∈ I. Thus by convexity of X \ P −1 (¯

 then y¯ ∈ / P (p(¯ y )); a contradiction. Therefore, x∈X P (x) = ∅. The next assertion is shown in [2, Theorem 3.1]. Theorem 3.9.15. Let X, Y be nonempty convex subsets of Hausdorff topological vector spaces and S, T : X ⇒ Y be two nonempty-valued multifunctions. Suppose S, T satisfy conditions (i)–(iii) of Proposition 3.9.11 and (iv) S is upper continuous on conv(A) for each A ∈ F(X, x0 ); (v) S(x) and X \ S −1 (y) are convex ∀(x, y) ∈ X × Y ; (vi) T has convex compact values in Y .

Then, there exists an x ¯ ∈ X such that T (¯ x) ∩ u∈X S(u) = ∅. Proof. Using Proposition 3.9.11, we have existence of x ¯ ∈ X such that the multifunction P defined from X into T (¯ x) by P (u) := T (¯ x) ∩ S(u) has nonempty convex values. For each y ∈ T (¯ x), we also have X \P −1 (y) = {u ∈ X | y ∈ / T (x0 )∩S(u)} = {u ∈ X | y ∈ / S(u)} = X \S −1 (y). By relying on (iv), we deduce that P is upper continuous on conv(A) for each A ∈ F(X) with P (x) and X \ P −1 (y) are convex for each x ∈ X, y ∈ Y . Return to Lemma 3.9.14, we conclude ! " ∅ =

P (u) = T (x0 ) u∈X

S(u) , u∈X

which means the conclusion. The assertion in the next theorem is shown in [2, Theorem 3.2].



Theorem 3.9.16. Under conditions (iii)–(vi) of Theorem 3.9.15, we suppose moreover ¯ of (i’) there exist a compact convex subset C¯ of X and a compact subset K ¯ ¯ X such that ∀x ∈ X \ K, ∀y ∈ T (x), ∃u ∈ C such that y ∈ / S(u); (ii’) T is upper continuous.

248

3 Optimization in Partially Ordered Spaces

Then, there exists x0 ∈ X such that T (x0 )



u∈X

 S(u) = ∅.

Proof. Consider the family of sets ¯ M := {C ⊆ X | C convex compact ⊇ C} and the multifunction Ψ : M ⇒ X defined for each C ∈ M by ! " # Ψ (C) :=

x ∈ X | T (x)

S(u)

= ∅ .

u∈C

From Theorem 3.9.15 and (ii’), the multifunction Ψ has nonempty and closed values. By contraposition, we also claim from (i’) that Ψ (C) is included in ¯ of X, for every C ∈ M. the compact subset K Now, for each finite subset {C1 , · · · , Cm } of M, by setting C0 = conv(C1 ∪ · · · ∪ Cm ), we have ∅ = Ψ (C0 ) ⊆ Ψ (Ci ). 1≤i≤m

Using this finite-intersection property, the Tychonoff’s theorem would mean ¯ x0 ∈

that for some x0 ∈ K, Ψ C∈M (C), i.e., for each C ∈ M, we have



S(u) =

∅. So, the multifunction Φ : M ⇒ X defined for T (x0 ) u∈C each C ∈ M by " ! Φ(C) := T (x0 )

S(u) u∈C

is well defined on M. 



By u∈X S(u) =

a similar reasoning to the above for Φ, we obtain T (x0 )  C∈M Φ(C) = ∅; the proof is complete. 3.9.4 Existence of Vector Equilibria by Use of the Generalized KKM Lemma We are now in position to provide a first existence result. Theorem 3.9.17. Let X and Y be two real Hausdorff topological vector spaces. Let M be a nonempty convex subset of X, and consider two multifunctions K and L from M to Y . Suppose that f and g are two multifunctions from M × M to 2Y such that (H1) the n pair (f, g) is (K, L)-monotone; (H2) i=1 g(xi , z) ⊂ L(z) ∀ (x1 , . . . ., xn ) ∈ M , ∀ z ∈ conv(x1 , . . . ., xn ); (H3) for every fixed x ∈ M and every A ∈ F(M ), the set {y ∈ conv(A) | f (x, y) ⊂ K(y)} is closed in conv(A); (H4) for every A ∈ F(M ), and every converging net (yα ) on M to y ∈ conv(A) f (z, yα ) ⊂ K(yα ) ∀ z ∈ conv(A) ⇒ f (z, y) ⊂ K(y) ∀ z ∈ conv(A); (3.94)

3.9 Vector equilibrium problems and related topics

249

(H5) there is a nonempty subset B of X and x0 ∈ M ∩ B such that M ∩ B is relatively compact in M , and f (x0 , y) ⊂ K(y) for all y ∈ M \ B. Then there exists x ∈ M ∩ B such that f (y, x) ⊂ K(x) for all y ∈ M . Proof. For each y ∈ M , let T (y) := {x ∈ M | f (y, x) ⊂ K(x)} . From (H5), T (x0 ) ⊂ M ∩ B is relatively compact in M , and therefore (i) of Lemma 3.9.9 is satisfied. (H1) and (H2) lead clearly to the fact that T is a KKM mapping, while property (ii) follows from (H3). It remains to check (iii) of Lemma 3.9.9. For this, let F be the convex hull of a finite subset A of M . We have to show that clM (∩z∈F T (z)) ∩ F = (∩z∈F T (z)) ∩ F. So let x ∈ clM (∩z∈F T (z)) ∩ F . Then there exists a net (xα ) on M that converges to x ∈ F such that f (z, xα ) ⊂ K(xα ) ∀ z ∈ F. From (H4) we get f (z, x) ⊂ K(x) ∀ z ∈ F, which implies that x ∈ (∩z∈F T (z)) ∩ F . Thus (iii) is satisfied. According to the above lemma we deduce that ∩y∈M T (y) = ∅, which is the conclusion of Theorem 3.9.17.  3.9.5 Existence by Scalarization of Vector Equilibrium Problems In this subsection, we derive the existence theorem of (GVEP) by way of solving an appropriate scalar one. Very recently, Oettli [444] proposed an approach to solving vector equilibrium problems by using results in the scalar case. A key tool for the study of such problems is an appropriate Minkowski’s gauge function. Scalar problems Before going into the study of vector equilibrium problems, let us consider the scalar case. Usually, we establish theorems in the theory of noncooperative games and in the theory of general equilibrium with the aid of the famous KKM lemma, which is equivalent to the Brouwer fixed point theorem. This method is more interesting; but since we have used this method in the first paragraph, we prefer for convenience to use partitions of unity and fixed point theorems to establish the existence of scalar equilibria. Theorem 3.9.18. Let X be a Hausdorff real topological vector space, M a nonempty closed convex subset. Consider two real functions ϕ and ψ defined on M × M such that:

250

3 Optimization in Partially Ordered Spaces

(i) φ(x, x) ≤ 0 for each x ∈ M ; (ii) for each x, y ∈ M , if φ(x, y) ≤ 0, then ϕ(x, y) ≤ 0; (iii) for each y ∈ M , {x ∈ M ∩ K | ϕ(x, y) ≤ 0} is closed for every compact subset K of M ; (iv) for each y ∈ M , {x ∈ M | φ(x, y) > 0} is convex. (v) (Compactness assumption) there exists a compact convex subset B of M such that for all y ∈ M \ B, there exists x ∈ B such that ϕ(x, y) > 0. Then, there exists an equilibrium point x ∈ B, i.e. ϕ(y, x) ≤ 0 for each y ∈ M. Proof. We proceed in two steps. 1. Case of M = B, i.e., M is assumed compact. Suppose that a solution does not exist. Then the subset M can be covered by the family of open subsets O(y) := {x ∈ M | ϕ(y, x) > 0}, where y ∈ M . Since M is assumed to be compact, let {O(yi )} be a finite subcover of M (we set Oi = O(yi )). Let {pi | i = 1, . . . , n} be a continuous partition of unity subordinate to this finite covering. We introduce the function p : M −→ M defined by p(u) :=

n 

pi (u)yi .

i=1

This mapping is continuous; we obtain, by using Tychonoff’s fixed point theorem, the existence of some u0 ∈ M such that p(u0 ) = u0 . Let I be the set of index i ∈ {1, . . . , n} for which pi (u0 ) > 0; then u0 ∈ Oi for all i ∈ I. From (ii), we obtain φ(yi , u0 ) > 0 for all i ∈ I. Therefore, by using assumption (iv), we obtain φ(u0 , u0 ) = φ(p(u0 ), u0 ) = φ

n $

% pi (u0 )yi , u0 > 0,

i=1

a contradiction with condition (i). Hence ∃ x ∈ M such that ϕ(y, x) ≤ 0 for each y ∈ M . 2. General case. For every y ∈ M , consider A(y) := {x ∈ B | ϕ(y, x) ≤ 0}. Let {y1 , . . . , ym } be a fixed finite subset of M , and let M0 be the convex hull of B and {y1 , . . . , ym }. Then M0 is compact, since B is convex and compact and X is a Hausdorff topological vector space. By the first step, there exists some x0 ∈ M0 such that ϕ(y, x0 ) ≤ 0 for each y ∈ M0 . Using assumption (v), we have x0 ∈ B. Thus x0 ∈ ∩1≤i≤m A(yi ). Hence every finite subfamily of the sets A(y) has nonempty intersection. Observe that (iii) justifies that A(y) is compact for each y ∈ M , since B is compact. Thus by virtue of the finite intersection property in a compact set, we deduce that there exists x ∈ B that belongs to all the sets A(y) for y ∈ M . This means that (EP ) has a solution in B. 

3.9 Vector equilibrium problems and related topics

251

Vector problems Now we prepare the treatment of the vector problem. First, let us recall the following propositions and notions on an ordered vector space. Let Y be a Hausdorff l.c.s., and let Y ∗ denote its topological dual space. Let P , P = Y , be a closed convex proper cone with nonempty interior, and P + := {y ∗ ∈ Y ∗ | y, y ∗  ≥ 0 ∀ y ∈ P } its dual cone. Since we suppose Y a Hausdorff locally convex space, int P = ∅ and P = Y , the dual cone P + has a convex w∗ -compact base B ∗ , i.e., 0 ∈ B ∗ and P + = ∪t≥0 tB ∗ . As has been mentioned in Corollary 2.2.35, we can choose, for instance, B ∗ = {y ∗ ∈ P + | b, y ∗  = 1} for some arbitrary b ∈ int P . We shall fix such a w∗ -compact base B ∗ in what follows. Let us consider the sup-support and inf-support functions α(y) := min{y, y ∗  | y ∗ ∈ B ∗ } and β(y) := max{y, y ∗  | y ∗ ∈ B ∗ }. Let us remark that β(y) = −α(−y), and that the infimum (resp. supremum) of linear and continuous functions α is superlinear and upper semicontinuous (resp. β is sublinear and lower semicontinuous) on the space Y . Note that in Y , see Section 2.2, the order relations between two elements x, y ∈ Y can be defined and characterized by x ≤P y ⇔ x ∈ y − P ⇔ x − y, y ∗  ≤ 0 ∀ y ∗ ∈ P + ⇔ x − y, y ∗  ≤ 0 ∀ y ∗ ∈ B ∗ ⇔ β(x − y) ≤ 0 ⇔ α(y − x) ≥ 0; and x

0. To produce the negation of each created relation for vector orders, it suffices to use negation of equivalent ones, i.e., x P y ⇔ x − y ∈ −P ⇔ β(x − y) > 0 ⇔ α(y − x) < 0 and x ≮P y ⇔ x − y ∈ − int P ⇔ β(x − y) ≥ 0 ⇔ α(y − x) ≤ 0. Now for a given vector-valued mapping f from M × M into Y , we define the real-valued functions F (x, y) := β(f (x, y)).

(3.95)

252

3 Optimization in Partially Ordered Spaces

Note that each inequality for F can be transformed into vector relations for f as follows: f (x, y) ≤P 0 ⇐⇒ F (x, y) ≤ 0 and f (x, y)

0 and f (x, y) ≮P 0 ⇐⇒ F (x, y) ≥ 0. A direct application of the above equivalences between vector orders and real inequalities to Theorem 3.9.18 gives the following theorem. Theorem 3.9.19. Under the same setting of Theorem 3.9.18, we suppose in addition that Y is a Hausdorff locally convex topological vector space and P a closed convex cone with int P = ∅ and P = Y . Consider f, g : M × M −→ Y satisfying (H1 ) g(x, x) ∈ − int P for all x ∈ M ; (H2 ) f (x, y) ∈ − int P implies g(x, y) ∈ − int P for every x, y ∈ M ; (H3 ) for every y ∈ M and every compact subset K of X, the subset {x ∈ M ∩ K | f (x, y) ∈ − int P } is closed in M ; (H4 ) for every y ∈ M , the subset {x ∈ M | g(x, y) ∈ − int P } is convex; (H5 ) there exists a compact subset B of M such that for every x ∈ M \ B, there exists some y ∈ B for which f (y, x) ∈ − int P . Then there exists x ∈ B such that f (y, x) ∈ − int P . 3.9.6 Some Knowledge About the Assumptions The conditions of Theorems 3.9.17 and 3.9.29 are not particularly restrictive. We will now show, in the following lemmas, how to use these assumptions. Lemma 3.9.20. Suppose that (i) g(x, x) ⊂ L(x) for all x ∈ M , (ii) for every fixed y ∈ M , the set {x ∈ M | g(x, y) ⊂ L(y)} is convex. Then the hypothesis (H2) is satisfied. Proof. Suppose by contradiction that , n there exist n for each i = 1, . . .  n xi ∈ M and λi ∈ [0, 1] such that λi = 1 and g(xi , j=1 λj xj ) ⊂ i=1 n L( j=1 λj xj ) for every i = 1, . . . , n. Assumption (ii) shows that if z = n j=1 λj xj , then g(z, z) ⊂ L(z). This contradicts (i), and the proposition is proved.  Lemma 3.9.21. Condition (ii) of Lemma 3.9.20, namely, convexity of the subset {x ∈ M | g(x, y) ⊂ L(y)}, is satisfied if one of the following conditions is satisfied: (i) L(y) is convex and g(·, y) is concave, i.e., for all u, v ∈ M and t ∈ [0, 1], g(tu + (1 − t)v, y) ⊂ tg(u, y) + (1 − t)g(v, y).

3.9 Vector equilibrium problems and related topics

253

(ii) L(y) is a convex open cone and g(·, y) is quasi L(y)-concave on M , i.e., for all u, v ∈ M and t ∈ [0, 1], g(tu + (1 − t)v, y) ⊂ conv(g(u, y) ∪ g(v, y)) + L(y). Proof. Under assumption (i), the condition is trivially satisfied. For the second one, it suffices to use the inclusion L(y) + L(y) ⊂ L(y).  Remark 3.9.22. One can confirm that g satisfies condition (ii) of Lemma 3.9.21 if one supposes that g satisfies one of the following conditions: (iia ) g(·, y) is L(y)-concave: for all u, v, y ∈ M and t ∈ [0, 1], g(tu+(1−t)v, y) ⊂ tg(u, y) + (1 − t)g(v, y) + L(y); (iib ) for all u, v, y ∈ M and t ∈ [0, 1], either g(tu + (1 − t)v, y) ⊂ g(u, y) + L(y) or g(tu + (1 − t)v, y) ⊂ g(v, y) + L(y). The proof of this assertion is trivial. Lemma 3.9.23. Suppose that the set {y ∈ M | f (x, y) ⊂ K(y)} is closed for every x ∈ M ; then assumptions (H3) and (H4) are satisfied. The proof of this lemma is immediate. Lemma 3.9.24. If K(y) = K is open and independent of y ∈ M , then the assumption (H3) is satisfied if f (x, ·) is K-upper continuous on the convex hull of every finite subset of M . Proof. Let F be the convex hull of a finite subset of M , and set U (x) = f (x, ·)+1 (K) ∩ F := {y ∈ F | f (x, y) ⊂ K}. Let us show that U (x) is open in F . Fix y0 ∈ U (x). Then K is an open set in Y that contains f (x, y0 ). Since f (x, ·) is K-upper continuous on F , see Definition 2.7.1, it follows that for some U0 ∈ NF (y0 ), ∀y ∈ U0 f (x, y) ⊂ K + K ⊂ K. Thus U0 ⊂ U (x), and the proof is complete.  When we suppose f (x, ·) to be upper continuous on the convex hull of every finite subset of M , then since K is an open subset of Y , one can immediately justify that f (x, ·)+1 (K) ∩ F is open in F . Lemma 3.9.25. The assumption (H3) is satisfied when we suppose that for every fixed x ∈ M , f (x, ·), with compact values, is upper continuous on the convex hull of every finite subset of M , and the multifunction K has an open graph in M × Y . Proof. Let us fix arbitrarily x ∈ M and F the convex hull of a finite subset A of M . We claim that M (x) := {u ∈ F | f (x, u) ⊂ K(u)} is closed in F . Indeed, consider a net {ui | i ∈ I} in M (x) that converges to some u ∈ F ; / K(ui ). then for every i ∈ I, there exists some zi ∈ f (x, ui ) such that zi ∈ Suppose first that the net {zi } converges, and let z be the limit. Since / K(u). We claim that K has an open graph in M × Y , we deduce that z ∈ z ∈ f (x, u); otherwise, from the Hausdorff property of Y there exists V ∈

254

3 Optimization in Partially Ordered Spaces

NX (f (x, u)) such that z ∈ / V . The multifunction f (x, ·) is assumed to be upper continuous and ui → u. Then there exists i0 ∈ I such that ∀i ∈ I, / V , which is a contradiction. Thus u ∈ M (x). i  i0 ⇒ f (x, ui ) ⊂ V , and zi ∈ Suppose now that the net {zi } doesn’t converge. Then for every z ∈ f (x, u) one can find an open set V (z) and I(z) ⊂ I such that z ∈ V (z) and / V (z). Since f (x, u) is compact and f (x, u) ⊂ ∪z∈f (x,u) V (z), ∀i ∈ I(z), zi ∈ there exist z1 , . . . , zn ∈ f (x, u) such that f (x, u) ⊂ V (z1 ) ∪ · · · ∪ V (zn ). If we take V0 = ∪1≤k≤n V (zi ) and I0 = ∩1≤k≤n I(zi ), then one has ∀i ∈ I0 , / V0 , which gives f (x, ui ) ⊂ V0 . This contradicts our assumption on upper zi ∈ continuity of f (x, ·) on F , and completes the proof.  Lemma 3.9.26. The assumption (H4) of Theorem 3.9.17 is equivalent to the following: (H8) For every x, y ∈ M and (yα ) ⊂ M converging to y, then f (tx + (1 − t)y, yα ) ⊂ K(yα ) ∀ t ∈ [0, 1] implies f (x, y) ⊂ K(y). Proof. Let us first suppose (H4). Consider x, y ∈ M and (yα ) a net on M that converges to y. By taking D = conv{x, y} and using (H4), we have f (z, y) ⊂ K(y) for every z ∈ D; in particular, z = x does the job. For the converse, let A ∈ F(M ) and (yα ) a net on M converging to y ∈ conv(A). Suppose that f (z, yα ) ⊂ K(yα ) for all α and all z ∈ conv(A). Fix z ∈ conv(A). Then f (tz + (1 − t)y, yα ) ⊂ K(yα ) for all α and all t ∈ [0, 1], since tz + (1 − t)y ∈ conv(A). Using (H8), we deduce that f (z, y) ⊂ K(y). This is true for each z ∈ conv(A), and so (H4) is satisfied.  Consider the following problems: (I) find x ∈ M such that f (y, x) ⊂ K(x) for all y ∈ M ; (II) find x ∈ M such that f (x, y) ⊂ L(x) for all y ∈ M . Lemma 3.9.27. (Generalized Minty’s Linearization) (a) If) x solves Problem (II), then x is a solution of Problem (I) whenever the pair (f, g) is (K, L)-monotone with g(x, y) = f (y, x) for every x, y ∈ M . (b) If x solves Problem (I), then x is a solution of Problem (II) whenever the following assumptions are satisfied: (i) f (x, x) ⊂ L(x) for all x ∈ M ; (ii) if x, y ∈ M, x = y, then u ∈]x, y[ and f (u, y) ⊂ L(x) implies f (u, x) ⊂ K(x); (iii) for all x, y ∈ M , the set {v ∈ [x, y] | f (v, y) ⊂ L(x)} is open in [x, y]. Proof. (a) The first assertion is derived from (K, L)-monotonicity of the pair (f, g); let us prove (b). Assume, for contradiction, that f (x, z) ⊂ L(x) for some z ∈ M . By assumption (i), we have z = x. According to assumption (iii), there exists some u ∈]x, z[ such that f (u, z) ⊂ L(x). This implies, see assumption (ii), f (u, x) ⊂ K(x). Since f (u, x) ⊂ K(x), we obtain a contradiction. 

3.9 Vector equilibrium problems and related topics

255

Remark 3.9.28. This lemma remains true if we suppose instead of (ii) the following: 1. if x = y ∈ M , then u ∈]x, y[, f (u, x) ⊂ K(x), and f (u, y) ⊂ L(x) imply that f (u, v) ⊂ K(x) for all v ∈]x, y]; 2. f (x, x) ⊂ K(y) for all x, y ∈ M . 3.9.7 Some Particular Cases Let us single out some particular cases of Theorem 3.9.17. First, Theorem 3.9.17 and Lemma 3.9.20 give the following result: Theorem 3.9.29. In addition to the assumptions (H1) and (H2) in Theorem 3.9.17, assume that M is closed and the following assumptions (H6) and (H7). Then the conclusion of Theorem 3.9.17 remains true. (H6) For every fixed x ∈ M and every convex compact subset K of X, the set {y ∈ K ∩ M | f (x, y) ⊂ K(y)} is closed in K ∩ M ; (H7) there is a convex compact subset B of X such that for all y ∈ M \ B, there exists x ∈ M ∩ B for which f (x, y) ⊂ K(y). Proof. Let us consider the multifunction S defined, for each y ∈ M , by S(y) := {x ∈ M ∩ B | f (y, x) ⊂ K(x)}. Let A ∈ F(M ) and consider M0 the convex hull of (B ∩ M ) ∪ A. Since B is compact and X is a Hausdorff topological vector space, M0 is compact. Note that the multifunction f from M0 ×M0 into Y satisfies all assumptions of Theorem 3.9.17. Then there exists x1 ∈ M0 such that f (y, x1 ) ⊂ K(x1 ) for every y ∈ M0 . Using compactness assumption (H7), we have x1 ∈ B, and therefore x1 ∈ ∩y∈A S(y). We conclude that every finite-intersection subfamily of the set {S(y) | y ∈ M } has nonempty intersection. Observe that (H6) justifies that S(y) is compact for each y ∈ M . Thus by virtue of the finite-intersection property in a compact set, we deduce that there exists x ∈ B that belongs to the set S(y) for all y ∈ M . This means  that x is a suitable solution in B. Remark 3.9.30. It is clear that the condition (H7) of Theorem 3.9.29 is weaker than the corresponding condition (H5) of Theorem 3.9.17. Nevertheless, we restrict ourselves to the more stringent compactness condition in (H6). Theorem 3.9.29 and Lemma 3.9.20 give the following result: Theorem 3.9.31. Under the same setting of Theorem 3.9.29, we suppose that (H1), (H7) are satisfied, and (i) g(x, x) ⊂ L(x) for all x ∈ M ; (ii) for every fixed y ∈ M , the set {x ∈ M | g(x, y) ⊂ L(y)} is convex; (iii) for every fixed x ∈ M , the set {y ∈ M | f (x, y) ⊂ K(y)} is closed.

256

3 Optimization in Partially Ordered Spaces

Then we get the same conclusion of Theorem 3.9.17. To consider another particular case, we assume that K(x) = L(x) and f (x, y) = g(x, y) for all x, y ∈ M . Then, from Theorem 3.9.17 and Lemma 3.9.20 we obtain the following theorem: Theorem 3.9.32. Assume that (i) (ii) (iii) (iv)

f (x, x) ⊂ K(x) for all x ∈ M ; for every fixed y ∈ M , the set {x ∈ M | f (x, y) ⊂ K(y)} is convex; for every fixed x ∈ M , the set {y ∈ M | f (x, y) ⊂ K(y)} is closed; there is a compact subset B of X and x0 ∈ M ∩ B such that f (x0 , y) ⊂ K(y) for all y ∈ M \ B.

Then we get the same conclusion of Theorem 3.9.17. Remark 3.9.33. This theorem contains as a special case, when f is a singlevalued function, Lemma 2.1 from [368]. Remark 3.9.34. When K(x) = R+ for all x ∈ M and f (x, y) = h(x, y) − supx∈M h(x, x), we can state a version of the celebrated 1972 Fan’s minimax inequality; see [189, Theorem 1] and [34, Theorem 8.5]. Theorem 3.9.35. Under the same setting of Theorem 3.9.17. Assume that the multifunction f : M × M → Y satisfies f (x, y) ⊂ − int P (x) implies f (y, x) ⊂ int P (x) for all x, y ∈ M ; f (x, x) ⊂ − int P (x) for all x ∈ M ; for every fixed x ∈ M , f (x, ·) satisfies condition (ii) of Lemma 3.9.21; for every x ∈ M and A ∈ F(M ), the mapping f (x, ·) is upper continuous on conv(A); (v) for every x ∈ M and every A ∈ F(M ) whenever y ∈ conv(A) and (yα ) ⊂ M converges to y, then f (tx + (1 − t)y, yα ) ⊂ int P (yα ) ∀ α, ∀ t ∈ [0, 1] implies f (x, y) ⊂ int P (y) ; (vi) there is a compact set B ⊂ X and x0 ∈ M ∩ B such that f (x0 , y) ⊂ int P (y) ∀y ∈ M \ B; (vii) if x = y ∈ M and u ∈]x, y[, then f (u, x) ⊂ int P (x) or f (u, y) ⊂ − int P (x); (viii) for all x, y ∈ M , the set {v ∈ [x, y] | f (v, y) ⊂ − int P (x)} is open in [x, y]. (i) (ii) (iii) (iv)

Then there exists x ∈ M such that f (x, y) ⊂ − int P (x) for all y ∈ M . Proof. All conditions of Theorem 3.9.17 are satisfied by assumptions (i)– (vi) if we set K(x) = −L(x) = int P (x) and g(x, y) = f (y, x). Indeed, (H1) and (H5) follow from (i) and (vi); combining Lemmas 3.9.20, 3.9.21 with assumptions (ii)–(iii) we get (H2); using Lemma 3.9.24 and assumption (3) we obtain (H3); and (H4) follows from Lemma 3.9.26 and assumption (v).

3.9 Vector equilibrium problems and related topics

257

Therefore there exists x ∈ M such that f (y, x) ⊂ K(x) ∀ y ∈ M . Then we apply Lemma 3.9.27 to say that assumptions (i), (vii), (viii) lead to f (x, y) ⊂  L(x) ∀ y ∈ M . A generalized Minty’s linearization lemma for single-valued functions can be formulated as follows. Lemma 3.9.36. Suppose that the single-valued function f is P -pseudomonotone, i.e., for all x, y ∈ M , f (x, y) ∈ − int P implies f (y, x) ∈ int P ; then f (x, y) ∈ − int P ∀ y ∈ M =⇒ f (y, x) ∈ int P ∀ y ∈ M. The converse is true whenever for every x ∈ M , f (x, ·) satisfies condition (ii) of Lemma 3.9.21, f (·, x) is P -lower continuous on every line segment in M , and f (x, x) ∈ P . Proof. As mentioned previously, the proof of the first statement is straightforward. Let us prove the second one. Fix y ∈ M and set yt := ty + (1 − t)x, which is in M for every t ∈ [0, 1]. Thus / int P, ∀ t ∈ [0, 1]. f (yt , x) ∈

(3.96)

Taking into account condition (ii) of Lemma 3.9.21 and Remark 3.9.22, we get for some s ∈ [0, 1[, −f (yt , yt ) + (sf (yt , x) + (1 − s)f (yt , y)) ∈ P.

(3.97)

Combining relations (3.96), (3.97), and f (yt , yt ) ∈ P , and using Y \int P −P ⊂ Y \ int P , we have / − int P ⇔ f (yt , y) ∈ / − int P. (1 − s)f (yt , y) ∈

(3.98)

/ − int P ; otherwise, since f (·, y) is P -lower continWe deduce that f (x, y) ∈ uous on [x, y[ and V = −f (x, y) − int P ∈ NY , one can find t near zero such that f (yt , y) ∈ f (x, y) + V − P = f (x, y) − (f (x, y) + int P ) − P ⊂ − int P, which contradicts (3.98), and the result follows.



Just after this lemma we shall give a theorem of existence of vector equilibria for vector single-valued functions. Let us set g(x, y) = f (y, x), K(x) = int P and L(x) = − int P for all x, y ∈ M , where P is a closed convex and w-pointed cone. Then by using Theorem 3.9.31; Remark 3.9.22; and Lemmas 3.9.20, 3.9.21, 3.9.24, and 3.9.36, we can state the following theorem: Theorem 3.9.37. If f : M × M → Y satisfies (i) f is P -pseudomonotone; i.e., f (x, y) ∈ − int P implies f (y, x) ∈ int P ;

258

3 Optimization in Partially Ordered Spaces

(ii) f (x, x) = 0 for all x ∈ M ; (iii) for every x ∈ M , f (x, ·) is P -upper continuous and quasi-P -concave on every convex compact subset of M ; (iv) for every y ∈ M , f (·, y) is P -lower continuous on every line segment in M; (v) there exists a convex compact subset B of X such that for all y ∈ M \ B, there is x ∈ M ∩ B for which f (x, y) ∈ int P . Then there exists x ∈ M such that f (x, y) ∈ − int P for all y ∈ M . 3.9.8 Mixed Vector Equilibrium Problems In this section, we establish results for vector equilibrium problems where the monotone criterion mappings are supposed to be perturbed by nonmonotone mappings. Definition 3.9.38. Let X, Y be two Hausdorff real topological vector spaces, f a mapping from M ×M to Y , P a convex closed cone of Y , and P + its dual cone. The mapping f is said to be P -pseudomonotone in the topological sense whenever (xi ) is a net on M converging to x ∈ M such that ∀V ∈ NY , ∃αx satisfying ∀i  αx , (f (xi , x) − V ) ⊂ f (x, x) − int P , then ∀y ∈ M and ∀W ∈ NY , ∃αxy such that ∀i  αxy , (f (xi , y) − W ) ∩ (f (x, y) − P ) = ∅. In the case where Y = R and P = R+ , the definition of P -pseudomonotonicity in the topological sense coincides with the classical Br´ezis– Browder pseudomonotonicity: M

xi → x and lim inf i f (xi , x) ≤ f (x, x) ⇒ lim supi f (xi , y) ≥ f (x, y) ∀y ∈ M . Observe that if f (·, y) is upper P -semicontinuous for all y ∈ M , then f is P -pseudomonotone in the topological sense. We give now the following result concerning a vector equilibrium problem with a nonmonotone perturbation of a P -monotone mapping. Theorem 3.9.39. Let X, Y be two real Hausdorff topological vector spaces, M a nonempty convex subset of X, and let P be a closed convex w-pointed cone in Y with a nonempty interior. Consider two mappings f and g from M × M to Y such that (i) g is −P -monotone; (ii) for every x ∈ M , g(x, x) ∈ P and f (x, x) = 0; (iii) for every x ∈ M , f (x, ·) and g(x, ·) are P -convex; (iv) for every x ∈ M , g(x, ·) is P -lower continuous; (v) for every y ∈ M , f (·, y) is P -upper continuous on the convex hull of every finite subset of M ; (vi) f is P -pseudomonotone in the topological sense; (vii) for every y ∈ M , g(·, y) is P -upper continuous on each line segment in M;

3.9 Vector equilibrium problems and related topics

259

(viii) there is a compact subset B of X and x0 ∈ M ∩ B such that ∀y ∈ M \ B, g(x0 , y) − f (y, x0 ) ∈ int P . Then there exists x ∈ M such that f (x, y) + g(x, y) ∈ − int P ∀y ∈ M , which is equivalent to f (x, y) − g(y, x) ∈ − int P ∀y ∈ M . Proof. Set ϕ(x, y) = g(x, y) − f (y, x), φ(x, y) = f (y, x) + g(y, x) and K(x) = int P and L(x) = − int P for all x ∈ M . We begin by proving that all assumptions of Theorem 3.9.17 are satisfied. (H1): Let x, y ∈ M with g(x, y) − f (y, x) ∈ int P ; then f (y, x) + g(y, x) = f (y, x) − g(x, y) + g(x, y) + g(y, x) ∈ g(x, y) + g(y, x) − int P. From assumption (i), we conclude that f (y, x) + g(y, x) ∈ −P − int P ⊂ − int P ; this is our claim. (H2): Let x1 , x2 , y ∈ M . Then by P -convexity of f (y, ·) and g(y, ·), for every t ∈]0, 1[ one has g(y, tx1 + (1 − t)x2 ) + f (y, tx1 + (1 − t)x2 ) ∈ t [g(y, x1 ) + f (y, x1 )] + (1 − t) [g(y, x2 ) + f (y, x2 )] − P. Since − int P is convex and −P − int P ⊂ − int P , we conclude that {x ∈ M : φ(x, y) ∈ − int P } is convex for all y ∈ M . (H3): Since g(x, ·) and −f (·, x) are P -lower continuous on the convex hull of every finite subset of M for all fixed x ∈ M , then so is −f (·, x) + g(x, ·). Proposition 2.7.25(ii) means that (H3) is satisfied. (H4): Let x, y ∈ M and {yi } ⊂ M , which converges to y. Suppose that for every i, g(tx + (1 − t)y, yi ) − f (yi , tx + (1 − t)y) ∈ / int P ∀t ∈ [0, 1].

(3.99)

Suppose, contrary to our claim, that g(x, y) − f (y, x) ∈ int P. Hence int P is a neighborhood of g(x, y) − f (y, x) in Y , and then we can find two neighborhoods V1 , V2 ∈ NY (0) such that (g(x, y) + V1 ) − (f (y, x) + V2 ) ⊂ int P . Since g(x, ·) is P -lower continuous, there exists an index αxy such that g(x, yi ) ∈ g(x, y) + V1 + P, ∀i  αxy .

(3.100)

For t = 0 in (3.99), one has g(y, yi ) − f (yi , y) ∈ / int P.

(3.101)

Let V ∈ NY . Since g(y, ·) is P -lower continuous, there exists αy such that g(y, yi ) ∈ g(y, y) + V + P for any i  αy . Combining this with assumption (ii), the relation (3.101) yields (f (yi , y) − V ) ⊂ − int P ∀i  αy .

260

3 Optimization in Partially Ordered Spaces

 By topological P -pseudomonotonicity of f , for V2 there exists αxy such that  f (yi , x) ∈ f (y, x) + V2 − P ∀i  αxy .  max(αxy , αy , αxy )

Setting α0 = we deduce that ∀i  α0 ,

(3.102)

and combining relations (3.100) and (3.102),

g(x, yi ) − f (yi , x) ∈ (g(x, y) + V1 ) − (f (y, x) + V2 ) + P ⊂ int P + P ⊂ int P, which contradicts (3.99) for t = 1. Thus (H4) is satisfied. All assumptions (H1)–(H5) of Theorem 3.9.17 are satisfied, so we deduce that there exists x ∈ M such that g(y, x) − f (x, y) ∈ / int P for all y ∈ M.

(3.103)

Let y ∈ M be a fixed point and set yt = ty + (1 − t)x for t ∈ (0, 1). Since g(yt , ·) is P -convex and g(yt , yt ) ∈ P , then tg(yt , y) + (1 − t)g(yt , x) ∈ P.

(3.104)

From relation (3.103), one has (1−t)g(yt , x)−(1−t)f (x, yt ) ∈ / int P . Combin/ − int P . Using the ing with (3.104), it follows that tg(yt , y) + (1 − t)f (x, yt ) ∈ / convexity of f (x, ·), we can write tg(yt , y) + t(1 − t)f (x, y) + (1 − t)2 f (x, x) ∈ / − int P because f (x, x) = 0. The − int P , or else g(yt , y) + (1 − t)f (x, y) ∈ P -upper continuity of g(·, y) on line segments allows us to write / − int P. g(x, y) + f (x, y) ∈ The converse follows immediately from −P -monotonicity of g, and the proof is complete.  Remark 3.9.40. The P -monotonicity of g can be replaced by the following (f, P )-pseudomonotonicity: For all x, y ∈ M g(x, y) + f (x, y) ∈ int P ⇒ g(y, x) − f (x, y) ∈ int P. In that case, Theorem 3.9.39 appears as a generalization of Theorem 5.1 in [65]. Remark 3.9.41. The conclusion of Theorem 3.9.39 is valid (see Theorem 3.9.29) when instead of assumptions (v), (vi), and (viii) we only suppose that (viii ) there is B ⊂ X convex and compact for which ∀y ∈ M \B, ∃x ∈ M ∩B such that g(x0 , y) − f (y, x0 ) ∈ int P ; in return, we must suppose that (v  ) for every y ∈ M , f (·, y) is P -lower continuous on every convex compact subset of M . In the next section we present some typical situations in which the vector equilibrium structure occurs quite naturally.

3.10 Vector variational inequalities

261

3.10 Vector Variational Inequalities As a first example of particular case of the (GVE) problem, let us consider the following generalized vector variational inequality: (GVVI) find x ∈ K s.t. ∀ y ∈ K, ∃ t ∈ T (x) : t(η(x, y)) ∈ / − int P (x). Here T is a multifunction from X into the space L(X, Y ) of all linear continuous mappings from X into Y , η : K × K → K is a mapping, and t(x) denotes the evaluation of the linear mapping t at x. Thus (GVVI) is a particular case of the (WGVEP) problem if we take f (x, y) = T (x)(η(x, y)). This problem is an extension of the single-valued (GVVI) problem introduced by Siddiqi, Ansari, and Khaliq [506]. Note that for η(x, y) = y − g(x), (GVVI) is equivalent to the vector variational problem introduced and studied by Konnov and Yao [348]: / − int P (x) ∀ y ∈ K. (VVI) find x ∈ K such that T (x)(y − g(x)) ∈ When P (x) = P for all x ∈ K, the (GVVI) problem becomes the vector variational inequality problem introduced by Chen and Yang [129]. If Y = R, X = Rn , L(X, Y ) = X ∗ , P (x) = R+ ∀ x ∈ K, and g is the identity mapping, the problem (GVVI) becomes the usual scalar variational inequality considered and studied by Hartman and Stampacchia [264]. The vector variational inequalities (VVI) were introduced by Giannessi [216] in a finite-dimensional Euclidean space in 1980. Chen and several authors (see [119], [120], [121], [122], [129], [369], [574]) have intensively studied the vector variational inequalities in abstract infinite-dimensional spaces. Recently, the equivalence between a (VVI) and vector optimization problems and the equivalence between (VVI) and vector complementarity problems have been studied; see, e.g., [129]. 3.10.1 Vector Variational-Like Inequalities We assume: • • • •

X and Y are two Hausdorff topological vector spaces; L(X, Y ) is the set of all continuous linear operators from X into Y ; M ⊆ X is a nonempty convex set; {P (x) | x ∈ M } is a family of closed convex cones in Y with int P (x) = ∅ for all x ∈ M ; • η : M × M → M is continuous and affine with η(x, x) = 0, for every x ∈ M. For Π ⊆ L(X, Y ) we write Π(x) := {π(x) | π ∈ Π}, for all x ∈ M . We have to assume that L(X, Y ) is topologized in such a way that the bilinear form (π, x) !→ π(x) is continuous from L(X, Y ) × M into Y ; see the next lemma for some examples.

262

3 Optimization in Partially Ordered Spaces

Lemma 3.10.1. Let X, Y be two Banach spaces, M a closed convex subset of X, and let {(αi , xi )} be a net in L(X, Y ) × M , and {(α, x)} in L(X, Y ) × M . w

(i) If αi − α → 0 in L(X, Y ) and xi  x in X, then {αi (xi )} converges weakly to α(x) in Y . (ii) If (αi − α)(u), y ∗  → 0 for every u ∈ X, y ∗ ∈ Y ∗ and xi − x → 0 in X, then {αi (xi )} converges weakly to α(x) in Y . (iii) If αi −α → 0 in L(X, Y ) and xi −x → 0 in X, then αi (xi )−α(x) → 0 in Y . Proof. (i) Fix y ∗ ∈ Y ∗ . We have |(αi − α)(xi ), y ∗ | ≤ y ∗  · αi − α · xi . w

Our assumption xi  x in X just says that x∗ (xi ) → x∗ (x) for all x∗ ∈ X ∗ . Then, according to [165, Theorem II.3.20], {xi } is bounded in X, and thus lim sup|αi (xi ) − α(x), y ∗ | i

≤ lim sup |αi (xi ) − α(xi ), y ∗ | + lim sup |α(xi ) − α(x), y ∗ | i ∗

i

≤ y  · sup xi  · lim sup αi − α i

i

+ lim sup |xi − x, α∗ y ∗ | (α∗ is the adjoint operator of α) i

≤ 0. This shows that {αi (xi )} converges weakly to α(x) in Y . (ii) Using [165, Corollary II.3.21] and supi αi (u), y ∗  < +∞ for every u ∈ X, y ∗ ∈ Y ∗ , we have, that {αi } is bounded in L(X, Y ), and lim sup|αi (xi ) − α(x), y ∗ | i

≤ lim sup |αi (xi ) − αi (x), y ∗ | + lim sup |(αi − α)(x), y ∗ | i ∗

i

≤ y  · sup αi  · lim sup xi − x + lim sup |(αi − α)(x), y ∗ | i

i

i

≤ 0. (iii) is obvious.



Definition 3.10.2. Let T be a multifunction from M to L(X, Y ). (a) T is said to be (P, η)-monotone if T y(η(x, y))−T x(η(x, y)) ⊂ P (x)∩P (y). (b) T : M → L(X, Y ) is said to be (P, η)-pseudomonotone if T x(η(x, y)) ⊂ − int P (y) implies T y(η(x, y)) ⊂ − int P (y). (c) T is said to be V -hemicontinuous if for any x, y ∈ M, T (tx + (1 − t)y) → T y as t → 0+ (i.e., ∀zt ∈ T (tx + (1 − t)y), ∃z ∈ T y such that ∀a ∈ M, zt (a) → z(a) as t → 0+ ).

3.10 Vector variational inequalities

263

Then, according to Theorem 3.9.39, we can formulate the following existence result concerning a vector variational-like inequality. Theorem 3.10.3. Let T be a multifunction from M to L(X, Y ). If one has (i) T is compact-valued, (P, η)-pseudomonotone, and V-hemicontinuous; (ii) for some convex compact subset B of X, we have for every y ∈ M \ B there exists x ∈ M ∩ B such that T x0 (η(x0 , y)) ⊂ − int P (y). Then there exists x ∈ B satisfying

T x(η(y, x)) ⊂ − int P (x) ∀y ∈ M .

Proof. Set f (x, y) = −T x(η(x, y)), g(x, y) = −T y(η(x, y)), and K(x) = int P (x) for x, y ∈ M . Then assumptions (H1), (i), (ii), and (H7) of Theorem 3.9.31 are easily satisfied. Let us show that (H6) of Theorem 3.9.31 is also satisfied. For this, let x ∈ M be fixed and let (yi ) be a net on M converging to y ∈ M such that T x(η(x, yi )) ⊂ − int P (yi ). Therefore there exists zi ∈ T x / − int P (yi ). satisfying zi (η(x, yi )) ∈ Since T x is compact, then, passing to a subnet if necessary, we may assume that zi converges to z ∈ T x. By the continuity of the mapping η and the bilinear form on L(X, Y ) × M , we get zi (η(x, yi )) → z(η(x, y)). Hence z(η(x, y)) ∈ / − int P (y) or else T x(η(x, y)) ⊂ − int P (y). Then all assumptions of Theorem 3.9.31 are satisfied; thus, there exists x ∈ M such that T y(η(y, x)) ⊂ − int P (x) for all y ∈ M. Let y ∈ M be fixed and set yt = ty+(1−t)x. Then T yt (η(yt , x)) ⊂ − int P (x). Hence T yt (η(y, x)) ⊂ − int P (x). By the V-hemicontinuity of T and the  closedness of Y \ (− int P (x)), it follows that T x(η(y, x)) ⊂ − int P (x). 3.10.2 Perturbed Vector Variational Inequalities Now, if we take P (x) = P for all x ∈ M , then we obtain from Theorem 3.9.39 the following proposition: Proposition 3.10.4. Let ϕ : M → Y be a P -lower continuous and P -convex function, and let S, T : M −→ L(X, Y ) satisfy the following assumptions: (i) S is continuous on M ; (ii) T is P -monotone and continuous on each line segment on M ; (iii) there is a compact subset B of X and x0 ∈ M ∩ B such that (Sy + T x0 )(y − x0 ) + ϕ(y) − ϕ(x0 ) ∈ int P for all y ∈ M \ B. Then there exists x ∈ M such that (S + T )x(y − x) + ϕ(y) − ϕ(x) ∈ / − int P for all y ∈ M.

264

3 Optimization in Partially Ordered Spaces

Proof. It suffices to set f (x, y) = Sx(y −x) and g(x, y) = T x(y −x)+ϕ(y)− ϕ(x) and to check that all assumptions of Theorem 3.9.39 are satisfied. We observe that assumptions (i)–(iii) and (viii) of Theorem 3.9.39 arise automatically from P -convexity of ϕ, P -monotonicity of T , and condition (iii). Using P -lower continuity of ϕ, (i)–(ii), and the condition on the bilinear form on L(X, Y ) × M , one deduces (iv)–(vii) of Theorem 3.9.39.  3.10.3 Hemivariational Inequality Systems In 1981, by using Clarke’s generalized gradient, Panagiotopoulos introduced the notion of nonconvex super-potential. In the absence of convexity, a new type of inequality was obtained, the so-called hemivariational inequalities. These look like a variational formulation of mechanic problems. Let us take a brief look at this concept of nonsmooth analysis. Let X be a Banach space, ϕ : X → R, and x ∈ X. We say that ϕ is locally Lipschitz at x if there is some ε > 0 and k > 0 such that u − x < ε and v − x < ε =⇒ |ϕ(u) − ϕ(v)| ≤ ku − v. The Clarke’s generalized directional derivative of ϕ at x in the direction h ∈ X is defined by ϕo (x; h) := lim sup y→x t 0

ϕ(y + th) − ϕ(y) , t

and the associated Clarke’s generalized subdifferential is defined at x by ∂C ϕ(x) := {x∗ ∈ X ∗ | ϕo (x; h) ≥ h, x∗  ∀h ∈ X}. Proposition 3.10.5. [137, Prop. 1.1 and Prop. 1.5] Let ϕ : X → R be locally Lipschitz of rank k at x. Then (a) the function h !−→ ϕo (x; h) is finite, positively homogeneous, subadditive on X, and for every h ∈ X, |ϕo (x; h)| ≤ kh; (b) the function (y, h) !−→ ϕo (y; h) is upper semicontinuous at (x, h) for every h ∈ X; (c) ∂C ϕ(x) is nonempty, convex, weak∗ -compact, and x∗  ≤ k for every x∗ ∈ ∂C ϕ(x); (d) for every h ∈ X we have ϕo (x; h) = max{h, x∗  | x∗ ∈ ∂C ϕ(x)}. Definition 3.10.6. We say that a multifunction R from X to X ∗ is (Browder– Hess) BH-pseudomonotone, (quasi-pseudomonotone), or (of class (S)+ ) if (xn) converges weakly to x and lim inf n→∞ supx∗n ∈Rxn x−xn , x∗n ≥ 0 imply lim supn→∞ supx∗n ∈Rxn y − xn , x∗n  ≤ supx∗ ∈Rx y − x, x∗  ∀y ∈ X, (limn→∞ supx∗n ∈Rxn x − xn , x∗n  = 0), or (xn → x and (Rxn ) is bounded for n sufficiently large).

3.10 Vector variational inequalities

265

Proposition 3.10.7. Consider two multifunctions R1 and R2 from X to X ∗ . Then R = R1 + R2 is BH-pseudomonotone if either R1 or R2 is BH-pseudomonotone or if one of the pair (R1 , R2 ) is the Clarke’s subdifferential ∂C ϕ of a locally Lipschitz function ϕ that is assumed quasi-pseudomonotone and the other one is of class (S)+ . Proof. Let us suppose that xn  x and lim inf n→∞ supx∗n ∈Rxn x−xn , x∗n  ≥ 0. Then " ! lim inf n→∞

sup x − xn , u∗n  +

u∗ n ∈R1 xn

sup x − xn , vn∗ , 

∗ ∈R x vn 2 n

≥ 0.

(3.105)

We claim that lim inf

sup x − xn , u∗n  ≥ 0 and lim inf

n→∞ u∗ ∈R1 xn n

sup x − xn , x∗n  ≥ 0. (3.106)

n→∞ v ∗ ∈R2 xn n

To the contrary, suppose, for instance, that the first is false. Then for a subsequence r := limp→∞ supu∗n ∈R1 xnp x − xnp , u∗np  < 0. Using (3.105), it p follows sup x − xnp , vn∗ p  ≥ −r > 0. (3.107) lim inf p→∞ v ∗ ∈R2 xn p np

• If R2 is BH-pseudomonotone, then limp→∞ supvn∗ ∈R2 xnp x−xnp , vn∗ p  = p 0, a contradiction. Thus (3.106) is satisfied. We conclude from BH-pseudomonotonicity of R1 and R2 that ∀y ∈ X, lim sup sup y − xn , x∗n  n→∞ x∗ n ∈Rxn

≤ lim sup

sup y − xn , u∗n  + lim sup

n→∞ u∗ n ∈R1 xn

sup y − xn , vn∗ 

∗ ∈R x n→∞ vn 2 n

≤ sup y − x, u∗  + sup y − x, v ∗  u∗ ∈R1 x

v ∗ ∈R2 x



= sup y − x, x . x∗ ∈Rx

This gives BH-pseudomonotonicity of R. • If R2 is of class (S)+ , then (3.107) gives xnp → x and (R2 xnp ) is bounded. Thus limp→∞ supvn∗ ∈R2 xnp x − xnp , vn∗ p  = 0, which contradicts p (3.107) and establishes (3.106). Using that R2 is of class (S)+ , then xn → x and (R2 xn ) is bounded. We conclude that ∀y ∈ X, lim supn→∞ supvn∗ ∈R2 xn y − xn , vn∗  ≤ sup y − x, v ∗ . v ∗ ∈R2 x

Using Proposition 3.10.5 (b) (d), we deduce

266

3 Optimization in Partially Ordered Spaces

lim sup sup y − xn , x∗n  n→∞ x∗ n ∈Rxn

≤ lim sup ϕo (xn , y − xn ) + lim sup

sup y − xn , vn∗ 

∗ ∈R x n→∞ vn 2 n

n→∞

≤ ϕo (x, y − x) + sup y − x, v ∗  v ∗ ∈R2 x ∗

= sup y − x, u  + sup y − x, v ∗  u∗ ∈R1 x

v ∗ ∈R2 x



= sup y − x, x . x∗ ∈Rx

• The same proof remains valid when we suppose that R2 = ∂ϕ is quasi pseudomonotone and R1 is of class (S)+ . As the maximal monotone operators for the variational inequalities, the generalized pseudomonotone operators are a mathematically important tool for the formulation of existence results concerning the hemivariational mappings. In this subsection, we are interested in some hemivariational inequality systems. Assume that we are given X a real reflexive Banach space, M a nonempty convex subset of X. Let Sk , Tk : M → X ∗ , for k = 1, . . . , m, and let J : X → Rm be locally Lipschitz near M . Let us introduce the&following mappings: S, & T : M −→ X ∗ × X ∗ × · · · × X ∗ m m o defined by Sx := k=1 Sk x and T x := k=1 Tk x, and set J (x; h) := o o (J1 (x; h), . . . , Jm (x; h)). We consider the following hemivariational inequality system (HVIS), which consists in finding x ∈ M such that ∀ y ∈ M , ∃ky ∈ {1, . . . , m} with y − x, Sky x + y − x, Tky x + Jkoy (x; y − x) ≥ 0.

(3.108)

By making use of Theorem 3.9.39 and Remark 3.9.41 we are able to give conditions that guarantee the existence of a solution to the inequality (HVIS). Theorem 3.10.8. Let us impose the following conditions: (i) for every k = 1, . . . , m, Sk is BH-pseudomonotone and locally bounded; (ii) for every k = 1, . . . , m, either ∂C Jk is BH-pseudomonotone, or ∂C Jk is quasi-pseudomonotone and S is of class (S)+ ; (iii) for every k = 1, . . . , m, Tk is monotone and continuous from each line segment of M to the weak topology on X ∗ ; (iv) there is a weak-compact subset B ⊂ X such that for all y ∈ M \ B there exists x ∈ M ∩ B such that ∀ k ∈ {1, . . . , m} Jko (y, x − y) < y − x, Sk y + T x. Then the problem (HVIS) has at least one solution. Proof. Let us first remark that setting, for each k ∈ {1, . . . , m} and x, y ∈ M , fk (x, y) := y − x, Sk x +

max

ξ∈∂C Jk (x)

y − x, ξ and gk (x, y) := y − x, Tk x,

3.10 Vector variational inequalities

267

the system (3.108) is equivalent to f (x, y) + g(x, y) ∈ / − int Rm + ∀ y ∈ M. Let Y = Rm and P = Rm + . We shall verify the conditions of Theorem 3.9.39 when X and X ∗ are, respectively, endowed with the weak and weak∗ topologies: (i), (ii), (iii), (iv), and (viii) are obviously satisfied; (vi) holds by using Propositions 3.10.7 and 3.10.5 (d), since J and S satisfy conditions (i) and (ii). (v) of Theorem 3.9.39: Fix y ∈ M and F a convex hull of a finite subset of M . We have to show that for every k = 1, . . . , m, f (·, y) is Rm + -lower continuous on F , which is equivalent to fk (·, y) is R+ -upper continuous on F , i.e., fk (·, y) is R+ -upper semicontinuous1 . Let {xn } be a sequence in F weakconverging to x0 ∈ F . Since F is a closed subset of a finite-dimensional space, we assert that {xn } converges in norm to x0 . By the upper semicontinuity of Jko on M × X, see Proposition 3.10.5 (b), and the local boundedness of Sk we obtain that f (·, y) is R+ -upper semicontinuous at x0 . (vii) of Theorem 3.9.39: holds by using a similar argument to that used in proving (v).  Remark 3.10.9. The hemivariational inequalities studied in Chapter 4 of [429] correspond to the case where k = 1 and for all x ∈ M, T x = l ∈ X ∗ . Remark 3.10.10. The study of variational–hemivariational inequalities is a particular case of the study of the inequalities system (3.108). It corresponds to the case in which k = 1 and y − x, T x = ϕ(y) − ϕ(x) − y − x, l for all x, y ∈ M, with l ∈ X ∗ and ϕ is a real lower semicontinuous convex function on M . Therefore, a solution of this inequality exists without recourse to a condition of quasi- or strong quasi-boundedness on ∂ϕ, as was made in [429]. 3.10.4 Vector Complementarity Problems To introduce a vector complementarity problem, let X, Y be two topological vector spaces, M ⊂ X, and let T be a function from X into L(X, Y ). Suppose Y is ordered by the family of cones {int P (x) : x ∈ M } ∪ {0}. We define the following weak and strong vector complementarity problems: (WVCP) find x ∈ M with T x(x) ∈ / int P (x), T x(y) ∈ / − int P (x) ∀y ∈ M. (SVCP) find x ∈ M with T x(x) ∈ / int P (x), T x(y) ∈ P (x) ∀y ∈ M . If Y = R, P (x) = R+ for each x ∈ M , the vector complementarity problems (WVCP) and (SVCP) coincide with the scalar complementarity problem: 1

Let us recall that vector R+ -upper (lower) semicontinuity of a real function coincides with the usual scalar upper (lower) semicontinuity.

268

3 Optimization in Partially Ordered Spaces

(ComP) find x ∈ M such that x, T x = 0 and T x ∈ M + , where M + := {x∗ ∈ X ∗ | x, x∗  ≥ 0 ∀x ∈ M }. If we restrict ourselves to X = Rn and M = Rn+ , then the space L(X, Y ) becomes equal to Rn , and (ComP) becomes, findx ≥ 0 such that x, T x = 0 and T x ≥ 0. In the following, conditions under which solutions of vector complementarity problems exist are presented. To derive these conditions, Lemma 3.10.11 will be used. Lemma 3.10.11. Let us consider the following vector variational inequality (VVI): / − int P (x) ∀ y ∈ M. find x ∈ M such that T x(y − x) ∈

(3.109)

(a) Suppose that the cone P (x) is w-pointed (i.e., int P (x) ∩ −P (x) = ∅) and T is a single-valued function; then (SVCP) =⇒ (VVI).2 (b) Suppose that M is a convex cone; then (VVI) =⇒ (WVCP). (c) Suppose that int P (x) ∪ −P (x) = X; then (WVCP) =⇒ (SVCP). Proof. (a): Since P (x) is w-pointed, (SVCP) =⇒ (VVI) follows from the / int P (x) and T x(y) ∈ P (x) ⇒ T x(y) − T x(x) ∈ / implication T x(x) ∈ − int P (x). (b): Let us take y = 0 in (3.109); we have T x(x) ∈ / int P (x). Together, fix z ∈ M and take y = z + x (∈ M , since M is a convex cone) in (3.109); then / − int P (x). T x(z) = T x(y − x) ∈  (c): It is immediate from condition int P (x) ∪ −P (x) = X. Theorem 3.10.12. Suppose that (i) for every x, y ∈ M, T x(y−x) ∈ / − int P (x) implies T y(y−x) ∈ / − int P (x); (ii) the graph of int P is open on M × Y ; (iii) T is continuous from each line segment of M to a topology on L(X, Y ) for which the bilinear form (π, x) !→ π(x) is continuous from L(X, Y ) × M into Y ; (iv) there is a convex weakly compact subset B of X such that for every y ∈ M \ B there exists x ∈ M ∩ B such that T x(y − x) ∈ int P (x). Then the problem (WVCP) has at least one solution. Proof. The assumptions (i)–(v), (vii), and (viii) of Theorem 3.9.35 hold when applied to f (x, y) = T x(y − x) for x, y ∈ M . To verify (vi), fix x, y ∈ M such that x = y, and suppose to the contrary that there exists some u ∈]x, y[ such that T u(x − u) ∈ / int P (x) and T u(y − u) ∈ − int P (x). According to Y \ int P (x) − int P (x) ⊂ Y \ P (x), we deduce that for every t ∈]0, 1[, T u(tx + (1 − t)y − u) = tT u(x − u) + (1 − t)T u(y − u) ∈ / P (x), which is impossible for t such that u = tx + (1 − t)y. Therefore, (VVI) has a solution, and invoking Lemma 3.10.11, we finish the proof of theorem.  2

This means: x is a solution of (SVCP) =⇒ x is a solution of (VVI).

3.10 Vector variational inequalities

269

3.10.5 Vector Optimization Problems Consider φ : M −→ Y and the following vector optimization problems (WVOP) find x ∈ M such that φ (y) ∈ / φ (x) − int P ∀ y ∈ M; and / φ (x) − (P \ {0}) ∀ y ∈ M. (VOP) find x ∈ M such that φ (y) ∈ General vector mappings We first obtain a direct existence result for the weak minimum of a vector optimization problem by setting f (x, y) = φ(x) − φ(y). It is trivial to check that x ∈ M is a solution of (GVEP) (and (WGVEP)) iff x is a solution of (VOP) (and (WVOP)). Theorem 3.10.13. Suppose that (i) φ is quasi P -convex and P -lower continuous; (ii) there exists B ⊆ M convex and compact such that ∀y ∈ M \ B, ∃x ∈ B such that φ(y) ∈ φ(x) + int P . Then (WVOP) admits at least one solution x ∈ B. Proof. We apply Theorem 3.9.32 with f (x, y) = φ(x) − φ(y) and K(x) = − int P .  Remark 3.10.14. In the particular case where Y = Rm and P = Rm − we have under assumptions (i) and (ii) of Theorem 3.10.13 the existence of a weak Pareto optimum, i.e., φ (M ) ∩ (φ (x) + int P ∪ {0}) = {φ (x)}. Remark 3.10.15. Theorem 3.10.13 is a generalization, in the case of singlevalued functions, of a result [387, Corollary 5-6 p. 59] on existence of solutions of vector optimization problems. Smooth vector mappings In this paragraph, by considering smooth vector mappings, we prove the existence of weak minimum for vector optimization problems by means of a vector variational-like inequality and preinvex mappings. For our analysis, the following concepts are necessary. Definition 3.10.16. We say that a Gˆ ateaux 3 differentiable function4 φ : M → Y is P -invex with respect to η : M × M → X if φ(x) − φ(y) − ∇φ(y)(η(x, y)) ∈ P ∀x, y ∈ M . Here ∇φ denotes the Fr´echet derivative of φ. 3

4

φ : X → Y is Gˆ ateaux-differentiable at x ∈ X if, for each d ∈ X, the limit φ (x; d) := limt→0 1t φ(x + td) − φ(x)) exists, and φ (x; ·) is a linear continuous functional on X. This functional, denoted ∇φ(x), is called the Gˆ ateaux derivative of f at x. φ must be defined in a neighborhood of M .

270

3 Optimization in Partially Ordered Spaces

Definition 3.10.17. We say that φ : M → Y is P -preinvex with respect to η : M × M → X if for all x, y ∈ M, t ∈ [0, 1], such that y + tη(x, y) ∈ M one has t(φ(x) − φ(y)) + φ(y) − φ(y + tη(x, y)) ∈ P. Remark 3.10.18. If we suppose that φ is Fr´echet differentiable, then P invexity of φ is satisfied whenever φ is P -preinvex with respect to the same η. Indeed, suppose that t(φ(x) − φ(y)) + φ(y) − φ(y + tη(x, y)) ∈ P . Dividing by t and letting t goes to zero, we obtain φ(x) − φ(y) − ∇φ(y)(η(x, y)) ∈ P ; that is, P -invexity of φ.  Remark 3.10.19. We remark that if we take R and R+ respectively in place of Y and P , we obtain the definitions of scalar invex and preinvex functions. Note that invex functions were first introduced by Hanson [262] and Craven [144]. To characterize the invexity, Craven and Glover [145] showed the equivalence between a global minimum of a scalar function and a stationary point. Now, we prove the following existence theorem of weak vector minimum points. Theorem 3.10.20. Let M be a nonempty closed, convex subset of a real normed space X and Y be a real normed space. Let φ : M → Y be a Gˆ ateaux-differentiable and P -preinvex mapping such that its derivative ∇φ is continuous from each line segment of M to L(X, Y ), i.e., ∀x, y ∈ M, ∀z ∈ X ∇φ(x + t(y − x))(z) → ∇φ(x)(z) in Y if t → 0+ . Let η : M × M → X be such that η(x, x) = 0 for every x ∈ M . Suppose that, for every x, y ∈ M , (i) ∇φ(x)(η(y, x)) ∈ / − int P =⇒ ∇φ(y)(η(y, x)) ∈ / − int P ; (ii) η is linear in the first argument and continuous in the second one; (iii) there exists a compact subset B of M , and x0 ∈ B such that ∇φ(x0 )(η(y, x0 )) ∈ − int P for all y ∈ M \ B. Then the vector optimization problem (WVOP) has a global minimum x ∈ B. Proof. By Theorem 3.9.31, with K(x) = L(x) = − int P, f (x, y) = ∇φ(x)(η(x, y)), and g(x, y) = ∇φ(y)(η(x, y)), there exists x ∈ B such that / − int P ∀ y ∈ M. ∇φ(y)(η(y, x)) ∈ / For y ∈ M fixed, set yt = ty+(1−t)x, where t ∈ [0, 1]; then ∇φ(yt )(η(yt , x)) ∈ − int P ∀ t ∈ [0, 1]. ¯) = tη(y, x ¯) + (1 − Using linearity in the first argument of η, we get η(yt , x t)η(¯ x, x ¯), and then from η(x, x) = 0, we obtain / − int P. ∇φ(yt )(η(y, x)) = 1/t∇φ(yt )(η(yt , x)) ∈ Since ∇φ is continuous from each line segment of M to L(X, Y ), it follows that the vector variational-like inequality

3.10 Vector variational inequalities

∇φ(x)(η(y, x)) ∈ / − int P for all y ∈ M

271

(3.110)

has a solution. Because of the P -preinvexity, which implies P -invexity, of φ we deduce φ(y) − φ(x) − ∇φ(x)(η(y, x)) ∈ P.

(3.111)

Combining (3.110) and (3.111) we obtain for every y ∈ M , / − int P, φ(y) − φ(x) = (φ(y) − φ(x) − ∇φ(x)(η(y, x))) + ∇φ(x)(η(y, x)) ∈ which completes the proof.



3.10.6 Minimax Theorem for Vector-Valued Mappings Let us first introduce the notions of cone saddle point of vector-valued mappings. Assume that we are given X = X1 × X2 a product of two real topological vector spaces, and U , V two nonempty subsets of X1 and X2 , respectively. Let F : M = U × V −→ Y be a vector-valued mapping. Definition 3.10.21. (1) A point (u0 , v0 ) ∈ M is said to be a weak P -saddle point of F with respect to M = U × V , a WVSP for short, if F (u0 , V ) ∩ (F (u0 , v0 ) + int P ∪ {0}) = {F (u0 , v0 )} F (U, v0 ) ∩ (F (u0 , v0 ) − int P ∪ {0}) = {F (u0 , v0 )}. (2) A point (u0 , v0 ) ∈ M is said to be a P -saddle point of F with respect to M = U × V , a VSP for short, if F (u0 , V ) ∩ (F (u0 , v0 ) + P ) = {F (u0 , v0 )} F (U, v0 ) ∩ (F (u0 , v0 ) − P ) = {F (u0 , v0 )}. It should be remarked that a VSP is also a WVSP. In the case where int P ∪ {0} = P , the two concepts are coincident. Lemma 3.10.22. Assume that we are given F a vector-valued mapping from M = U × V into Y , P a nonempty cone with int P = ∅, and f : M × M → Y defined, for each (u1 , v1 ), (u2 , v2 ) ∈ M , by f ((u1 , v1 ), (u2 , v2 )) := F (u1 , v2 ) − F (u2 , v1 ). / − int P (resp. −P \ {0}) ∀(u, v) ∈ M , If (u0 , v0 ) satisfies f ((u, v), (u0 , v0 )) ∈ then (u0 , v0 ) is a WVSP (resp. VSP) of F . Proof. First, we remark that F (u0 , v) − F (u0 , v0 ) ∈ / int P ∀ v ∈ V, (u0 , v0 ) is a WVSP ⇔ F (u, v0 ) − F (u0 , v0 ) ∈ / − int P ∀ u ∈ U,

272

3 Optimization in Partially Ordered Spaces

and

(u0 , v0 ) is a VSP ⇔

F (u0 , v) − F (u0 , v0 ) ∈ / P \ {0} ∀ v ∈ V, / −P \ {0} ∀ u ∈ U. F (u, v0 ) − F (u0 , v0 ) ∈

The end of the proof is immediate from these equivalences and the definition of f .  This lemma, combined with Theorem 3.9.32, Lemma 3.9.24, and Remark 3.9.22, leads to the following vector minimax theorem. Theorem 3.10.23. Suppose that U and V are compact and convex subsets of two Hausdorff topological vector spaces, P a closed convex cone with nonempty interior in a Hausdorff locally convex topological vector space. Let F : U × V → Y satisfy the following assumptions: (i) for every u ∈ U , F (u, ·) is P -concave and P -upper semicontinuous; (ii) for every u ∈ V , F (·, v) is P -convex and P -lower semicontinuous. Then F admits a weak P -saddle point in U × V . For further references see [190], [536], and [439].

3.11 Minimal-Point Theorems in Product Spaces and Corresponding Variational Principles The importance of the Ekeland variational principle in nonlinear analysis is well known. Below we recall a versatile variant. Proposition 3.11.1. Ekeland’s variational principle. Let (X, d) be a complete metric space and f : X → R• := R ∪ {∞} a proper, lower semicontinuous function bounded below. Consider ε > 0 and x0 ∈ X such that f (x0 ) ≤ inf f + ε. Then for every λ > 0 there exists x ∈ dom f such that f (x) + λ−1 εd(x, x0 ) ≤ f (x0 ), and

f (x) < f (x) + λ−1 εd(x, x)

d(x, x0 ) ≤ λ, ∀ x ∈ X \ {x}.

(3.112) (3.113)

This means that for λ, ε > 0 and x0 an ε-approximate solution of the minimization problem minimize f (x) s.t. x ∈ X,

(3.114)

there exists a new point x that is not worse than x0 and belongs to a λ-neighborhood of x0 , and especially, x satisfies the variational inequality (3.113). Relation (3.113) says, in fact, that x minimizes globally f + λ−1 εd(x, ·), which is nothing else than a Lipschitz perturbation of f (for

3.11 Minimal-Point Theorems in Product Spaces

273

√ “smooth” principles, see [84]). Note that λ = ε gives a useful compromise in Proposition 3.11.1. For applications see Section 5.1 and 5.4 and, e.g., [194, 195, 491, 525, 529]. There are several statements that are equivalent to Ekeland’s variational principle (EVP); see, e.g., [96, 125–127, 148, 205, 207, 227–231, 257, 464, 474, 475]. We mention explicitly a result of Attouch and Riahi [32], who showed that (in Banach spaces) EVP is equivalent to the existence of minimal points with respect to cones satisfying some additional conditions. Proposition 3.11.2. Phelps minimal-point theorem. Let X be a Banach space, C ⊆ X a closed convex cone such that C ⊆ Kx∗ := {x ∈ X | x ≤ x, x∗ } for some x∗ ∈ X ∗ , and A ⊆ X a nonempty closed set such that x∗ (A) is bounded from below. Then for every x0 ∈ A there exists x ∈ A ∩ (x0 − C) such that x is a minimal element of A with respect to the partial order induced by C. Obviously, Kx∗ is a pointed closed convex cone, and so C is pointed, too. Moreover, Kx∗ is well-based, a bounded base being B := {x ∈ Kx∗ | x, x∗  = 1}, and so C is well-based, too. Sometimes the cone Kx∗ is called a Phelps cone. Since in this book we are mainly interested in multiobjective problems, we look for multiobjective (or vector) variants of EVP and minimal-point theorems. Loridan [384], then Khanh [330], Nemeth [431], and Tammer [524] were the first to prove vector-valued EVP. We illustrate this kind of result with the following one, which is very close to a result stated by Tammer [524] (see also Corollary 3.11.19). Proposition 3.11.3. Vector EVP. Let M be a nonempty closed subset of the Banach space (X,  · ), Y a t.v.s., C ⊆ Y a proper closed convex cone, k 0 ∈ int C, and f : M → Y . Assume that {x ∈ M | f (x) ≤C rk 0 } is closed for ε > 0 and x0 ∈ dom f , we have that   every r ∈ R and for some f (M ) ∩ f (x0 ) − εk 0 − (C \ {0}) = ∅. Then for every λ > 0 there exists x ∈ dom f such that   f (M ) ∩ f (x) − λ−1 εk 0 − int C = ∅, x − x0  ≤ λ, (3.115) and

f (x) + λ−1 εk 0 x − x ≤C f (x) ⇒ x = x.

(3.116)

In particular, x ∈ Eff(fλ−1 εk0 ,x , C), where fk,x := f +  · −x · k. Note that for Y = R the conditions of the preceding proposition are very close to those of the Ekeland variational principle (Proposition 3.11.1). Before starting with more general minimal-point theorems let us have a look at (3.113). If we write it in the form f (x) − f (x) + ελ−1 d(x, x) ≤ 0 ⇒ x = x,

274

3 Optimization in Partially Ordered Spaces

then (x, f (x)) is a minimal point of epi f with respect to the binary relation defined by (3.117) (x1 , t1 )  (x2 , t2 ) ⇐⇒ t1 ≥ t2 + ελ−1 d(x1 , x2 ). If X is a normed space, this binary relation is determined by the cone C := {(x, t) ∈ X × R | t ≥ ελ−1 x}. The binary relations defined as in (3.117) will play a decisive role in this section. It is worth mentioning that a weaker result than a full (= authentic) minimal-point theorem gives an EVP, as shown in Section 3.11.1. 3.11.1 Not Authentic Minimal-Point Theorems Throughout the section (X, d) is a complete metric space, Y is a separated locally convex space, Y ∗ is its topological dual, C ⊆ Y is a convex cone; as usual, C + = {y ∗ ∈ Y ∗ | y ∗ (y) ≥ 0 ∀ y ∈ C} is the dual cone of C and C # = {y ∗ ∈ Y ∗ | y ∗ (y) > 0 ∀ y ∈ C \ {0}}. Consider also k 0 ∈ C \ (−C). We notice that there exists z ∗ ∈ C + such that z ∗ (k 0 ) = 1. Indeed, using a separation theorem for k 0 and −C we get z1∗ ∈ C + and a real α > 0 such that z1∗ (k 0 ) > α > z1∗ (y) for all y ∈ C, then take z ∗ := α−1 z1∗ . The cone C determines a preorder on Y denoted, as usual, by ≤C ; so, for y1 , y2 ∈ Y , y1 ≤C y2 iff y2 −y1 ∈ C. It is known (see Section 2.1) that ≤C is reflexive and transitive; ≤C is antisymmetric iff C is pointed, i.e., C ∩ −C = {0}. Using the element k 0 we introduce a preorder on X × Y , denoted by k0 , in the following manner: (x1 , y1 ) k0 (x2 , y2 ) ⇐⇒ y1 + k 0 d(x1 , x2 ) ≤C y2 .

(3.118)

Note that k0 is reflexive and transitive; if (x1 , y1) k0 (x2 , y2) and (x2 , y2 ) k0 (x1 , y1 ), then x1 = x2 . If C is pointed then k0 is antisymmetric, too. Note that if (X,  · ) is a normed vector space (n.v.s.), then k0 is determined by the convex cone Ck0 := {(x, y) ∈ X × Y | x · k 0 ≤C y}; Ck0 is pointed if C is. Consider a nonempty set A ⊆ X × Y . For (x, y) ∈ A we denote by A(x, y) the lower section of A with respect to k0 : A(x, y) = {(x , y  ) ∈ A | (x , y  ) k0 (x, y)}. In the sequel we shall use the following condition on A: (H1) for every k0 -decreasing sequence ((xn , yn )) ⊆ A with xn → x ∈ X there exists y ∈ Y such that (x, y) ∈ A and (x, y) k0 (xn , yn ) for every n ∈ N. A related condition is

  (H2) for every sequence (xn , yn ) ⊆ A with xn → x ∈ X and (yn ) ≤C decreasing there exists y ∈ Y such that (x, y) ∈ A and y ≤C yn for every n ∈ N.

3.11 Minimal-Point Theorems in Product Spaces

275

These two conditions are motivated by conditions used by Isac [296] and Nemeth [431], respectively, as we shall see below. Note that if A satisfies (H2) and C has closed lower sections with respect to R+ k 0 , i.e., C ∩ (y − R+ k 0 ) is closed for every y ∈ C, then (H1) is also satisfied.   Indeed, let (xn , yn ) ⊆ A be a k0 -decreasing sequence with xn → x. It is obvious that (yn ) is ≤C -decreasing. By (H2), there exists y ∈ Y such that (x, y) ∈ A and y ≤C yn for every n ∈ N. It follows that y + k 0 d(xn+p , xn ) ≤C yn+p + k 0 d(xn+p , xn ) ≤C yn

∀ n, p ∈ N.

Fixing n and letting p → ∞, by the closedness of C in the direction k 0 , one obtains that y + k 0 d(x, xn ) ≤C yn , i.e., (x, y) k0 (xn , yn ) for every n ∈ N. Note also that (H2) holds if A is closed, PrY (A) is ≤C -bounded from below (that is, there exists some y ∈ Y such that PrY (A) ⊆ y + C), and every ≤C decreasing sequence in C is convergent (i.e., C is a sequential Daniell cone). This is the case (even for nets) if Y is a Banach space and C has a closed (convex) and bounded base (see [81, Prop. 3.6] and Section 2.2 for bases of cones). We establish now our first minimal-point theorem. In the sequel PrX and PrY will denote the projections of X × Y onto X and Y , respectively; so PrX (x, y) = x for every (x, y) ∈ X × Y . Theorem 3.11.4. Let A ⊆ X × Y satisfy (H1) and suppose that there exists y˜ ∈ Y such that PrY (A) ⊆ y˜ + C. Then for every (x0 , y0 ) ∈ A there exists (¯ x, y¯) ∈ A such that (¯ x, y¯) k0 (x0 , y0 ) and if (x , y  ) ∈ A is such that x, y¯) then x = x ¯. (x , y  ) k0 (¯ Proof. First of all, note that for (x, y) ∈ A the set PrX (A(x, y)) is bounded. Indeed, as noticed at the beginning of this section, there exists z ∗ ∈ C + with z ∗ (k 0 ) = 1. Let x ∈ PrX (A(x, y)); there exists y  ∈ Y such that (x , y  ) ∈ A y ), and (x , y  ) k0 (x, y). Then d(x, x ) ≤ z ∗ (y) − z ∗ (y  ) ≤ z ∗ (y) − z ∗ (˜ which shows that PrX (A(x, y)) is bounded. Let us construct a sequence ((xn , yn ))n∈N ⊆ A in the following way: Having (xn , yn ) ∈ A, where n ∈ N, by the above remark, there exists (xn+1 , yn+1 ) ∈ A(xn , yn ) such that d(xn+1 , xn ) ≥

1 2

sup{d(x, xn ) | x ∈ Bn },

where Bn := PrX (A(xn , yn )). We obtain in this way the sequence ((xn , yn )) ⊆ A, which is k0 -decreasing. Since A(xn+1 , yn+1 ) ⊆ A(xn , yn ), we have that Bn+1 ⊆ Bn for every n ∈ N. Of course, xn ∈ Bn . Let us show that diam Bn → 0. In the contrary case there exists δ > 0 such that diam Bn ≥ δ for every n ∈ N. From 14 δk 0 ≤C k 0 d(xm , xm+1 ) ≤C ym − ym+1 one gets 14 δ ≤ z ∗ (ym ) − z ∗ (ym+1 ), and so, adding these relations for m from 0 to n − 1, we obtain 1 4 δn

≤ z ∗ (y0 ) − z ∗ (yn ) ≤ z ∗ (y0 ) − z ∗ (˜ y ),

276

3 Optimization in Partially Ordered Spaces

  which yields a contradiction for n → ∞. Thus we have that the sequence Bn is a decreasing sequence of nonempty closed subsets of the complete

metric space (X, d), whose diameters tend to 0. By Cantor’s theorem, n∈N Bn = ¯. Since ((xn , yn )) ⊆ A is a k0 {¯ x} for some x ¯ ∈ X. Of course, xn → x decreasing sequence, from (H1) we get an y¯ ∈ Y such that (¯ x, y¯) k0 (xn , yn ) for every n ∈ N; (¯ x, y¯) is the desired element. Indeed, (¯ x, y¯) k0 (x0 , y0 ); let (x , y  ) ∈ A(¯ x, y¯). It follows that (x , y  ) ∈ A(xn , yn ), and so x ∈ Bn ⊆ Bn for every n. Thus x = x ¯.  We want to apply the preceding results to obtain two vector-valued EVP. To envisage functions defined on subsets of X we add to Y an element ∞ not belonging to the space Y , obtaining thus the space Y • , that is, Y • = Y ∪ {∞}. We consider that y ≤C ∞ for all y ∈ Y . Consider now the function f : X → Y • . As usual, the domain of f is dom f = {x ∈ X | f (x) = ∞}; the epigraph of f is epi f = {(x, y) ∈ X × Y | f (x) ≤C y}; the graph of f is gr f = {(x, f (x)) | x ∈ dom f }. Of course, f is proper if dom f = ∅. Corollary 3.11.5. Let f : X → Y • be a proper function, bounded from below (i.e., there exists y˜ ∈ Y such that y˜ ≤C f (x) for every x ∈ X), satisfying the condition (H3) {x ∈ X | f (x ) + k 0 d(x , x) ≤C f (x)} is closed for every x ∈ X. ¯ ∈ X such that Then for every x0 ∈ dom f there exists x f (¯ x) + k 0 d(¯ x, x0 ) ≤C f (x0 )

(3.119)

∀ x ∈ X : f (x) + k 0 d(x, x ¯) ≤C f (¯ x) ⇒ x = x ¯.

(3.120)

and Proof. Consider A = gr f ⊆ X × Y . Let us show that (H1) holds. Indeed, let ((xn , yn )) ⊆ A be a k0 -decreasing sequence with xn → x ∈ X. Of course, since yn = f (xn ), for all n, p ∈ N we have that xn+p ∈ {x ∈ X | f (x) + k 0 d(x, xn ) ≤C f (xn )}. Since the last set is closed, it also contains the limit x of the sequence (xn+p )p∈N . Therefore (x, y) k0 (xn , yn ) for every n, where y = f (x). Of course, (x, y) ∈ A. Applying Theorem 3.11.4 we obtain x ¯ ∈ X such that (¯ x, f (¯ x)) ∈ gr f satisfies the conclusion of that theorem. This means that (3.119) and (3.120) hold.  Note that Isac [296, Th. 4] obtained the above result when dom f = X, C is a normal cone, and k 0 ∈ C \ {0} (by Theorem 2.1.31, C is pointed in this case, and so k 0 ∈ C \ (−C)). Corollary 3.11.6. Let f : X → Y • be a proper function, bounded from below and satisfying the condition (H4) for every sequence (xn ) ⊆ dom f , with xn → x and (f (xn )) ≤C – decreasing, f (x) ≤C f (xn ) for all n ∈ N.

3.11 Minimal-Point Theorems in Product Spaces

277

If C is closed in the direction k 0 , then the conclusion of Corollary 3.11.5 holds. Proof. Consider A = gr f ⊆ X × Y . Let us show that (H2) holds. Indeed, let ((xn , yn )) ⊆ A be such that (yn ) is ≤C -decreasing and xn → x ∈ X. Of course, since yn = f (xn ), by (H4) we have that y = f (x) ≤C yn for every n. By a previous discussion, under our conditions (H1) holds. Using the same proof as for Corollary 3.11.5, we obtain the conclusion.  Note that Nemeth [431, Prop. 1] obtained the above result when dom f = X, Y is a Banach space, C is a regular (supposed to be closed) cone, and k 0 ∈ C \ {0}; observe that a regular cone is pointed (so that, once again, k 0 ∈ C \ (−C)). Traditionally, in the statements of the EVP there appears an ε > 0 and an estimate of d(¯ x, x0 ) (see Propositions 3.11.1 and 3.11.3). For the first situation just replace k 0 by εk 0 or d by εd. For the second one, suppose that in the conditions  of Corollary 3.11.5 or Corollary 3.11.6, x, x0 ) ≤ λ. f (X) ∩ f (x0 ) − λk 0 − C \ {0} = ∅, where λ ∈ (0, ∞). Then d(¯ Indeed, in the contrary case, by (3.119), for some k ∈ C, we have x, x0 ) − k = f (x0 ) − λk 0 − (d(¯ x, x0 ) − λ)k 0 − k f (¯ x) = f (x0 ) − k 0 d(¯ ∈ f (x0 ) − λk 0 − C \ {0}, since (d(¯ x, x0 ) − λ)k 0 + k ∈ C \ {0}. Note that taking Y = R and C = R+ , from Corollary 3.11.6 one obtains the EVP for functions that are not necessarily lower semicontinuous. For example, the function f : R → R, f (x) = exp(−|x|) for x = 0 and f (0) = 2, satisfies the hypothesis of Corollary 3.11.6. Theorem 3.11.4 does not ensure, effectively, a minimal point. In the next section we shall derive an authentic minimal-point theorem. Note that from both Corollary 3.11.5 and Corollary 3.11.6 one obtains Loridan’s variant of EVP [384], taking X = Q a closed subset of a Banach space and Y = Rp ordered by C = Rp+ . 3.11.2 Authentic Minimal-Point Theorems We take X, Y, C, k0 as in Section 3.11.1. In addition to the element k 0 considered in the preceding section, let us take also an element z ∗ ∈ C + such that z ∗ (k 0 ) = 1. We have noticed that such an element exists in our conditions. We introduce the order relation k0 ,z∗ on X × Y by (x1 , y1 ) = (x2 , y2 ) or (x1 , y1 ) k0 ,z∗ (x2 , y2 ) ⇐⇒ (x1 , y1 ) k0 (x2 , y2 ) and z ∗ (y1 ) < z ∗ (y2 ). It is easy to verify that k0 ,z∗ is a partial order, that is, it is reflexive, transitive, and antisymmetric. The next theorem is the main result of Section 3.11.

278

3 Optimization in Partially Ordered Spaces

Theorem 3.11.7. Let A ⊆ X×Y satisfy (H1) and suppose that z ∗ is bounded x, y¯) ∈ A a from below on PrY (A). Then for every (x0 , y0 ) ∈ A there exists (¯ x, y¯) k0 ,z∗ (x0 , y0 ). minimal element of A with respect to k0 ,z∗ such that (¯ Proof. We construct a sequence ((xn , yn ))n≥0 ⊆ A as follows: Having (xn , yn ) ∈ A, we take (xn+1 , yn+1 ) ∈ A, (xn+1 , yn+1 ) k0 ,z∗ (xn , yn ), such that z ∗ (yn+1 ) ≤ inf{z ∗ (y) | (x, y) ∈ A, (x, y) k0 ,z∗ (xn , yn )} + 1/(n + 1). Of course, ((xn , yn )) is k0 ,z∗ -decreasing. It follows that ((xn , yn )) is k0 decreasing and (yn ) is ≤C -decreasing, whence (z ∗ (yn )) is decreasing, too. Suppose first that there exists n0 ∈ N such that z ∗ (yn0 ) = lim z ∗ (yn ). It follows that z ∗ (yn ) = z ∗ (yn0 ), and, for n ≥ n0 , since (xn , yn ) k0 ,z∗ (xn0 , yn0 ), (xn , yn ) = (xn0 , yn0 ) =: (¯ x, y¯). Then z ∗ (yn ) = z ∗ (¯ y ) ≤ inf{z ∗ (y) | (x, y) ∈ A, (x, y) k0 ,z∗ (¯ x, y¯)}+1/n

∀ n ≥ n0 ,

whence z ∗ (¯ y ) ≤ z ∗ (y) for every (x, y) ∈ A with (x, y) k0 ,z∗ (¯ x, y¯). Once again, by the definition of k0 ,z∗ , we obtain that {(x, y) ∈ A | (x, y) k0 ,z∗ (¯ x, y¯)} = {(¯ x, y¯)}, i.e., (¯ x, y¯) is a minimal point of A with respect to k0 ,z∗ , and of course, (¯ x, y¯) k0 ,z∗ (x0 , y0 ). Suppose now that limm→∞ z ∗ (ym ) < z ∗ (yn ) for every n ∈ N. Because (xn+p , yn+p ) k0 ,z∗ (xn , yn ) for n, p ∈ N, we obtain d(xn+p , xn ) ≤ z ∗ (yn ) − z ∗ (yn+p ) ≤ 1/n

∀ n, p ∈ N, n ≥ 1.

It follows that (xn ) is a Cauchy sequence in the complete metric space (X, d), ¯ ∈ X. Since ((xn , yn )) is k0 -decreasing, and so (xn ) is convergent to some x by (H1) there exists some y¯ ∈ Y such that (¯ x, y¯) ∈ A and (¯ x, y¯) k0 (xn , yn ) y ) ≤ lim z ∗ (yn ), and so z ∗ (¯ y ) < z ∗ (yn ) for for every n ∈ N. It follows that z ∗ (¯ every n ∈ N. Therefore (¯ x, y¯) k0 ,z∗ (xn , yn ) for every n ∈ N. Let (x , y  ) ∈ A   x, y¯). Since (¯ x, y¯) k0 ,z∗ (xn , yn ) for every n ∈ N, be such that (x , y ) k0 ,z∗ (¯ we have d(x , x ¯) ≤ z ∗ (¯ y ) − z ∗ (y  ) ≤ z ∗ (yn ) − z ∗ (y  ) ≤ 1/n

∀ n ≥ 1,

whence z ∗ (y  ) = z ∗ (¯ y ). Once again, by the definition of k0 ,z∗ , we obtain x, y¯).  that (x , y  ) = (¯ An immediate consequence of the preceding theorem is the following weaker result. Corollary 3.11.8. Let A ⊆ X × Y satisfy (H1) and suppose that z ∗ is bounded from below on PrY (A). Then for every (x0 , y0 ) ∈ A there exists (¯ x, y¯) ∈ A such that (¯ x, y¯) k0 (x0 , y0 ), and if (x , y  ) ∈ A and (x , y  ) k0  (¯ x, y¯), then x = x ¯ and z ∗ (y  ) = z ∗ (¯ y ).

3.11 Minimal-Point Theorems in Product Spaces

279

Note that a direct proof is possible. Just take in the proof of Theorem 3.11.7 (xn+1 , yn+1 ) ∈ A(xn , yn ) such that z ∗ (yn+1 ) ≤ inf{z ∗ (y) | (x, y) ∈ A(xn , yn )} + 1/(n + 1). But it is not possible to obtain Theorem 3.11.7 from Corollary 3.11.8, as we shall see below by an example. Of course, Theorem 3.11.4 is an immediate consequence of Corollary 3.11.8. Example 3.11.9. Consider X = {a, b} with d(a, b) = 1, Y = R3 , C = R3+ , k 0 = (1, 1, 1), z ∗ ∈ C + = C, z ∗ (u, v, w) = u, y0 = (2, 2, 3), y1 = (2, 2, 1), y2 = (2, 1, 0), y3 = (2, 0, 0), y4 = (1, 2, 0), and y5 = (1, 1, 1). Let us take A = {(a, y0 ), (a, y1 ), (a, y2 ), (a, y3 ), (a, y4 ), (b, y5 )}. We have that (b, y5 ) k0 ,z∗ (a, y0 ),

(a, y4 ) k0 ,z∗ (a, y1 ),

and (a, y3 ) k0 (a, y2 ) k0 (a, y1 ) k0 (a, y0 ), but k0 may not be replaced by k0 ,z∗ in the above listing. Taking (x0 , y0 ) = (a, y0 ), the conclusion of Theorem 3.11.7 is satisfied by (b, y5 ) and (a, y4 ), the conclusion of Corollary 3.11.8 is satisfied by (b, y5 ), (a, y4 ), (a, y2 ), and (a, y3 ), while the conclusion of Theorem 3.11.4 is satisfied by all the elements of A except (a, y0 ). We prefer to formulate and give a direct proof of Theorem 3.11.4 because it has the advantage of not containing any reference to an element z ∗ ∈ C + and the proof is interesting in itself. Note that if z ∗ ∈ C # , then the order relations k0 and k0 ,z∗ coincide. Indeed, let (x, y) k0 (x , y  ). Of course, d(x, x ) ≤ z ∗ (y  ) − z ∗ (y). If z ∗ (y  ) − z ∗ (y) > 0, then (x, y) k0 ,z∗ (x , y  ), while in the contrary case (x, y) = (x , y  ), since y  − y ∈ C. Remark 3.11.10. Taking a preorder  such that  ⊆ k0 or  ⊆ k0 ,z∗ , the conclusions of Theorem 3.11.4 and Theorem 3.11.7 remain valid for k0 and k0 ,z∗ replaced by , respectively, if k0 is replaced by  in (H1). Corollary 3.11.11. Suppose that C # is nonempty and let B ⊆ Y be a nonempty subset such that every ≤C -decreasing sequence in B is bounded below by an element of B. If there exists an element of C # that is bounded below on B, then B has the domination property, i.e., for every y0 ∈ B there exists a minimal element y¯ of B such that y¯ ≤C y0 . Proof. Let x0 ∈ X be fixed and consider A = {x0 } × B. The hypothesis shows that there exists z ∗ ∈ C # such that z ∗ is bounded below on B. Take ¯ It is obvious that A k 0 ∈ C such that z ∗ (k 0 ) = 1; of course, k 0 ∈ C \ (−C). satisfies (H1). Applying Theorem 3.11.7 we get the desired y¯ ∈ B. 

280

3 Optimization in Partially Ordered Spaces

3.11.3 Minimal-Point Theorems and Gauge Techniques We shall establish now a minimal-point theorem by a gauge technique. As in the previous section, (X, d) is a complete metric space, Y is a separated l.c.s., C ⊆ Y is a convex cone, and k 0 ∈ C \ (−C). Let A ⊆ X × Y be a nonempty set and consider for x ∈ X and y ∈ Y the sets Ax := {y ∈ Y | (x, y) ∈ A}, Ay := {x ∈ X | (x, y) ∈ A}. Of course, PrX (A) = {x ∈ X | Ax = ∅} and PrY (A) = {y ∈ Y | Ay = ∅}. Theorem 3.11.12. Let (x0 , y0 ) ∈ A be fixed. Assume that C is closed and the following conditions hold:   (i) there exists t˜ ∈ R such that PrY (A) ∩ y0 − t˜k 0 − (C \ {0}) = ∅; (ii) the set Ar := {x ∈ X | ∃ y ≤C y0 + rk 0 , (x, y) ∈ A} is closed for every r ∈ R; (iii) every C-increasing, proper, l.s.c., and sublinear function ψ : Y → R attains its infimum on Ax − y0 for every x ∈ PrX (A). Then there exists (¯ x, y¯) ∈ A such that y¯ + k 0 d(¯ x, x0 ) ≤C y0 and 



0





(x , y ) ∈ A, y d(x , x ¯) ≤C y¯ ⇒

(3.121)

x = x ¯ and   y  ∈ (¯ y − C) \ y¯ − (0, ∞) · k 0 − C . (3.122)

If k 0 ∈ int C, we may replace (iii) by (iv) every C-increasing, continuous, and sublinear function ψ : Y → R attains its infimum on Ax − y0 for every x ∈ PrX (A), and (3.122) by (x , y  ) ∈ A, y 0 d(x , x ¯) ≤C y¯ ⇒ x = x ¯, y  ∈ / y¯ − int C.

(3.123)

Proof. Consider the function inf{ϕ(y − y0 ) | y ∈ Ax } for x ∈ PrX (A), ξ : X → R, ξ(x) = +∞ for x ∈ / PrX (A), where ϕ = ϕC,k0 is defined in (2.42). Because ϕ is C-increasing (see Theorem 2.4.1(d)), from (i) we obtain that ξ is bounded from below. Indeed, suppose that there exists x ∈ X such that ξ(x) < −t˜; then there exists y ∈ Y such that (x, y) ∈ A and ϕ(y − y0 ) < −t˜, whence y − y0 ∈ −t˜k 0 − tk 0 − C for some t > 0, which contradicts (i). The function ξ is also lower semicontinuous. For this let r, r ∈ R be such that r < r ; then

3.11 Minimal-Point Theorems in Product Spaces

281



Ar ⊆ {x ∈ X | ξ(x) ≤ r} ⊆ Ar . If x ∈ Ar , there exists y ≤C y0 + rk 0 with (x, y) ∈ A, whence ξ(x) ≤ ϕ(y − y0 ) ≤ ϕ(rk 0 ) = r. Now let x ∈ X be such that ξ(x) ≤ r. Since r < r , from the definition of ξ, there exists y ∈ Ax such that ϕ(y − y0 ) ≤ r , whence, by the properties of ϕ, y ∈ y0 + r0 − C, i.e., y ≤C y0 + r0 . It follows that  x ∈ Ar . Therefore the inclusions mentioned above hold. So we get {x ∈ X | ξ(x) ≤ r} ⊆



r  >r

Ar ⊆

r  >r

{x ∈ X | ξ(x) ≤ r }

= {x ∈ X | ξ(x) ≤ r}. 

Since by (ii), Ar is closed for every r ∈ R, it follows that {x ∈ X | ξ(x) ≤ r} is closed for every r ∈ R; hence ξ is l.s.c. on X. Applying the EVP (for example, Corollary 3.11.5 or Corollary 3.11.6 for Y = R) we find x ¯ ∈ X such that (3.124) ξ(¯ x) + d(¯ x, x0 ) ≤ ξ(x0 ) ≤ ϕ(y0 − y0 ) = 0 and ξ(¯ x) < ξ(x) + d(x, x ¯) ∀ x ∈ X, x = x ¯.

(3.125)

By (iii) there exists y¯ ∈ Ax¯ such that ξ(¯ x) = ϕ(¯ y − y0 ). From (3.124), taking x, x0 ) ≤C 0, i.e., (3.121) into account (2.45), one obtains that y¯ − y0 + k 0 d(¯ ¯) ≤C y¯. Then holds. Now let (x , y  ) ∈ A be such that y 0 d(x , x ξ(x ) + d(x , x ¯) ≤ ϕ(y  − y0 ) + d(x , x ¯) = ϕ(y  − y0 + k 0 d(x , x ¯)) ≤ ϕ(¯ y − y0 ) = ξ(¯ x), whence, from (3.125), we obtain that x = x ¯. If y  ∈ y¯ − (0, ∞) · k 0 − C, then  0 y = y¯ − tk − k for some t > 0 and k ∈ C. It follows that ξ(¯ x) = ξ(x ) ≤ ϕ(y  − y0 ) = ϕ(¯ y − y0 − tk 0 − k) ≤ ϕ(¯ y − y0 − tk 0 ) = ϕ(¯ y − y0 ) − t < ξ(¯ x), a contradiction. Therefore (3.122) holds. The second part is obvious because when k 0 ∈ int C, then ϕ is finite,  continuous, and int C ⊆ (0, ∞)k 0 + C. Notice also that this result is not an authentic minimal-point theorem. Note also that if condition (ii) is replaced by (ii ) the set {x ∈ X | ∃ y ≤C rk 0 , (x, y) ∈ A} is closed for every r ∈ R, then (3.121) must be replaced (when int C = ∅) by the weaker condition y¯ + k 0 d(¯ x, x0 ) ∈ / y0 + int C. Remark 3.11.13. In the conditions of the preceding theorem we have also that d(¯ x, x0 ) ≤ t˜.


Remark 3.11.13. In the conditions of the preceding theorem we have also that $d(\bar x, x_0) \le \tilde t$. Indeed, since $\xi(\bar x) \ge -\tilde t$, from (3.124) we get $-\tilde t + d(\bar x, x_0) \le 0$, i.e., our assertion.

Applying Theorem 3.11.12 to the epigraph of an operator, we obtain the following vector-valued EVP.

Corollary 3.11.14. Let $f : X \to Y^\bullet$ be a proper function and let $x_0 \in \operatorname{dom} f$. Assume that $C$ is closed and for every $r \in \mathbb{R}$ the set $\{x \in X \mid f(x) \le_C f(x_0) + r k^0\}$ is closed. If for some $\varepsilon > 0$ we have $f(X) \cap \big(f(x_0) - \varepsilon k^0 - (C \setminus \{0\})\big) = \emptyset$, then for every $\lambda > 0$ there exists $\bar x \in \operatorname{dom} f$ such that
$$f(\bar x) + \lambda^{-1} \varepsilon k^0 d(\bar x, x_0) \le_C f(x_0), \qquad d(\bar x, x_0) \le \lambda, \tag{3.126}$$
and

$$f(x) + \lambda^{-1} \varepsilon k^0 d(x, \bar x) \le_C f(\bar x) \ \Longrightarrow\ x = \bar x. \tag{3.127}$$

Proof. Consider $A = \operatorname{epi} f$ and take $y_0 = f(x_0)$. It is obvious that the hypotheses of Theorem 3.11.12 are satisfied for $\tilde t = \varepsilon$. It follows that there exists $(\bar x, \bar y) \in A$ such that (3.121) and (3.122) hold with $\lambda^{-1} \varepsilon d$ instead of $d$. The first relation of the conclusion is immediate from (3.121) and Remark 3.11.13. Assuming that $d(\bar x, x_0) = \lambda + t$ with $t > 0$, we obtain that for some $c \in C$,
$$f(\bar x) = f(x_0) - \lambda^{-1} \varepsilon k^0 d(\bar x, x_0) - c = f(x_0) - \varepsilon k^0 - (\lambda^{-1} \varepsilon t k^0 + c) \in f(x_0) - \varepsilon k^0 - (C \setminus \{0\}).$$
This contradiction proves that $d(\bar x, x_0) \le \lambda$. Let $x \in X$ be such that $f(x) + \lambda^{-1} \varepsilon k^0 d(x, \bar x) \le_C f(\bar x)\ (\le_C \bar y)$; of course, $x \in \operatorname{dom} f$. Taking $y = f(x)$, from (3.122) one obtains that $x = \bar x$. □

Notice also in this case that if the condition "$\{x \in X \mid f(x) \le_C f(x_0) + r k^0\}$ is closed for every $r \in \mathbb{R}$" is replaced by "$\{x \in X \mid f(x) \le_C r k^0\}$ is closed for every $r \in \mathbb{R}$", then the first relation in (3.126) must be replaced by "$f(\bar x) + \lambda^{-1} \varepsilon k^0 d(\bar x, x_0) \notin f(x_0) + \operatorname{int} C$" (if $\operatorname{int} C \neq \emptyset$).

Embedding $C \setminus \{0\}$ in the interior of another proper cone, we obtain a variant of Theorem 3.11.7 in slightly different conditions by using the gauge technique.

Theorem 3.11.15. Assume that there exists a proper closed convex cone $B \subseteq Y$ such that $C \setminus \{0\} \subseteq \operatorname{int} B$. Assume also that the set $A \subseteq X \times Y$ satisfies condition (H1) in Section 3.11.1 and $\Pr_Y(A) \cap (\tilde y - \operatorname{int} B) = \emptyset$ for some $\tilde y \in Y$. Then for every $(x_0, y_0) \in A$ there exists $(\bar x, \bar y) \in A$, minimal with respect to $\le_{k^0}$, such that $(\bar x, \bar y) \le_{k^0} (x_0, y_0)$.

Proof. Let $\varphi := \varphi_{B, k^0}$ be defined by (2.42). By Theorem 2.4.1, $\varphi$ is a continuous sublinear function for which (2.44), (2.45), (2.49), and (2.50) hold with $D$ replaced by $B$; moreover, if $y_2 - y_1 \in C \setminus \{0\}$, then $\varphi(y_1) < \varphi(y_2)$. Observe that for $(x, y) \in A$ we have $\varphi(y - \tilde y) \ge 0$. Otherwise, for some


$(x, y) \in A$ we have $\varphi(y - \tilde y) < 0$. It follows that there exists $\lambda > 0$ such that $y - \tilde y \in -\lambda k^0 - B$. Hence $y \in \tilde y - (\lambda k^0 + B) \subseteq \tilde y - (B + \operatorname{int} B) \subseteq \tilde y - \operatorname{int} B$, a contradiction. Since $0 \le \varphi(y - \tilde y) \le \varphi(y) + \varphi(-\tilde y)$, it follows that $\varphi$ is bounded from below on $\Pr_Y(A)$.

Let us construct a sequence $((x_n, y_n))_{n \ge 0} \subseteq A$ as follows: having $(x_n, y_n) \in A$, we take $(x_{n+1}, y_{n+1}) \in A$ with $(x_{n+1}, y_{n+1}) \le_{k^0} (x_n, y_n)$ such that
$$\varphi(y_{n+1}) \le \inf\{\varphi(y) \mid (x, y) \in A,\ (x, y) \le_{k^0} (x_n, y_n)\} + 1/(n+1).$$
Of course, the sequence $((x_n, y_n))$ is $\le_{k^0}$-decreasing. It follows that
$$y_{n+p} + k^0 d(x_{n+p}, x_n) \le_C y_n \quad \forall\, n, p \in \mathbb{N}^*,$$
so that
$$d(x_{n+p}, x_n) \le \varphi(y_n) - \varphi(y_{n+p}) \le 1/n \quad \forall\, n, p \in \mathbb{N}^*.$$
It follows that $(x_n)$ is a Cauchy sequence in the complete metric space $(X, d)$, and so $(x_n)$ is convergent to some $\bar x \in X$. By (H1) there exists $\bar y \in Y$ such that $(\bar x, \bar y) \in A$ and $(\bar x, \bar y) \le_{k^0} (x_n, y_n)$ for every $n \in \mathbb{N}$. Let us show that $(\bar x, \bar y)$ is the desired element. Indeed, $(\bar x, \bar y) \le_{k^0} (x_0, y_0)$. Suppose that $(x', y') \in A$ is such that $(x', y') \le_{k^0} (\bar x, \bar y)$ (hence $(x', y') \le_{k^0} (x_n, y_n)$ for every $n \in \mathbb{N}$). Thus $\varphi(y') + d(x', \bar x) \le \varphi(\bar y)$, whence
$$d(x', \bar x) \le \varphi(\bar y) - \varphi(y') \le \varphi(y_n) - \varphi(y') \le 1/n \quad \forall\, n \ge 1.$$
It follows that $d(x', \bar x) = \varphi(\bar y) - \varphi(y') = 0$. Hence $x' = \bar x$. As $y' \le_C \bar y$, if $y' \neq \bar y$, then $\bar y - y' \in C \setminus \{0\}$, whence $\varphi(y') < \varphi(\bar y)$, a contradiction. Therefore $(x', y') = (\bar x, \bar y)$. □

Comparing with Theorem 3.11.7, note that the present condition on $C$ is stronger (because in this case $C^\# \neq \emptyset$), while the condition on $A$ is weaker ($A$ may be not contained in a half-space). Note that when $C$ and $k^0$ are as in Theorem 3.11.15, Corollaries 3.11.5 and 3.11.6 may be improved.

Corollary 3.11.16. Let $f : X \to Y^\bullet$. Assume that there exists a proper closed convex cone $B \subseteq Y$ such that $C \setminus \{0\} \subseteq \operatorname{int} B$ and $f(X) \cap (\tilde y - B) = \emptyset$ for some $\tilde y \in Y$. Also assume that either (H3) (in Section 3.11.1) holds or $C$ is closed in the direction $k^0$ and (H4) holds. Then the conclusion of Corollary 3.11.5 holds, too.

As above, $C$ is closed in the direction $k^0$ if $C \cap (y - \mathbb{R}_+ k^0)$ is closed for every $y \in C$. The proof of Corollary 3.11.16 is similar to those of Corollaries 3.11.5 and 3.11.6.
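The proof of Theorem 3.11.15 is constructive: it repeatedly passes to an almost $\varphi$-minimal element among those dominated by the current one. The following Python sketch mimics that iteration on hypothetical finite data (our assumptions: $X = \mathbb{R}$, the model cone $B = \mathbb{R}^2_+$, $k^0 = (1,1)$); on a finite set the $1/(n+1)$ slack of the proof can be replaced by an exact minimum.

```python
import numpy as np

def phi(y, k0):
    # Gauge functional for the model cone B = R^2_+.
    return np.max(np.asarray(y, float) / np.asarray(k0, float))

def leq_k0(p, q, k0):
    # (x1,y1) <=_{k0} (x2,y2)  iff  y1 + k0*d(x1,x2) <=_C y2; here C = R^2_+.
    (x1, y1), (x2, y2) = p, q
    return np.all(y1 + k0 * abs(x1 - x2) <= y2 + 1e-12)

def minimal_point(A, start, k0, steps=50):
    """Iteration from the proof: move to a phi-minimal element dominated by
    the current one until no strict improvement is possible."""
    cur = start
    for _ in range(steps):
        dominated = [p for p in A if leq_k0(p, cur, k0)]
        best = min(dominated, key=lambda p: phi(p[1], k0))
        if phi(best[1], k0) >= phi(cur[1], k0):   # no strict improvement
            return cur
        cur = best
    return cur

rng = np.random.default_rng(0)
A = [(float(x), rng.normal(size=2)) for x in rng.uniform(0, 3, 40)]
k0 = np.array([1.0, 1.0])
print(minimal_point(A, A[0], k0))
```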


3.11.4 Minimal-Point Theorems and Cone-Valued Metrics

Nemeth [432] obtained a vector-valued EVP using cone-valued metrics. The aim of this section is to obtain a minimal-point theorem in product spaces using such metrics. Applying the result to operators, we establish a vector-valued EVP slightly more general than Nemeth's result.

As in the previous sections, $Y$ is a separated locally convex space, while $X$ is a nonempty set that will be endowed with a cone-valued metric. In this sense consider $P$ a convex cone in $Y$ and let $\le_P$ be the preorder on $Y$ determined by $P$. We say that the mapping $r : X \times X \to P$ is a $P$-(valued) metric if $r$ satisfies the usual conditions, i.e., for all $x_1, x_2, x_3 \in X$ one has $r(x_1, x_2) = 0 \Leftrightarrow x_1 = x_2$, $r(x_1, x_2) = r(x_2, x_1)$, and $r(x_1, x_3) \le_P r(x_1, x_2) + r(x_2, x_3)$. The notions of convergent net and fundamental net are defined as usual; so the net $(x_i)_{i \in I} \subseteq X$ converges to $x \in X$ if $r(x_i, x) \to 0$ in $Y$, while $(x_i)_{i \in I}$ is fundamental (or Cauchy) if for every neighborhood $V$ of the origin in $Y$ there exists $i_V \in I$ such that $r(x_i, x_j) \in V$ for all $i, j \in I$ with $i, j \succeq i_V$. One sees easily that when $P$ is normal, the limit of a convergent net is unique and every convergent net is fundamental. Of course, $(X, r)$ is complete if every Cauchy net is convergent. For other details and comments on cone-valued metrics see [432].

An example of a cone-valued metric is furnished below. Consider $d$ a scalar metric on $X$ and $y_0$ a fixed element in $Y \setminus \{0\}$. Taking $P = \mathbb{R}_+ \cdot y_0$, the mapping $r : X \times X \to P$ defined by $r(x_1, x_2) = y_0\, d(x_1, x_2)$ is a $P$-metric. Note that $(x_i)_{i \in I} \subseteq X$ is $r$-convergent ($r$-fundamental) iff $(x_i)$ is $d$-convergent ($d$-fundamental); so $(X, r)$ is complete iff $(X, d)$ is complete.

Now let $C, C_0 \subseteq Y$ be convex cones with $C_0 \subseteq C$. We say that $C_0$ is sequentially $C$-bound regular (in short, $C$-seq-b-regular) if every $C_0$-increasing and $C$-bounded sequence in $C_0$ is Cauchy. Of course, if $C_0$ is $C$-seq-b-regular, then $C_0$ is pointed (even more: $C_0 \cap (-C) = \{0\}$). In the terminology of Nemeth [432], $C_0$ is sequentially $C$-bound regular if every $C_0$-increasing and $C$-bounded net in $C_0$ is convergent (so our requirement is weaker when $C_0$ is normal). Note that if $k^0 \in C \setminus (-C)$ and $C_0 = \mathbb{R}_+ \cdot k^0$, then $C_0$ is $C$-seq-b-regular (even in the sense of Nemeth). Indeed, let $(t_n)_{n \in \mathbb{N}} \subseteq \mathbb{R}_+$ be such that $(t_n k^0)$ is $C_0$-increasing and $C$-bounded. Then $(t_n)$ is increasing in $\mathbb{R}_+$. Assuming that $(t_n)$ is not bounded, $t_n \to \infty$ (so we can consider that $t_n > 0$ for every $n \in \mathbb{N}$). Since $t_n k^0 \le_C y$ for some $y \in Y$ and every $n \in \mathbb{N}$, it follows that $k^0 = \frac{1}{t_n} y - k_n$ with $(k_n) \subseteq C$. So we get that $k_n \to -k^0 \in C$, a contradiction.

Let now $C_0 \subseteq C$ be a convex cone and $r$ a $C_0$-metric on $X$. We consider the relation $\le_r$ on $X \times Y$ defined by $(x_1, y_1) \le_r (x_2, y_2)$ if $y_1 + r(x_1, x_2) \le_C y_2$. One obtains easily that $\le_r$ is a preorder (even a partial order if $C$ is pointed).
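The example $r(x_1, x_2) = y_0\, d(x_1, x_2)$ above is easy to test numerically. The sketch below is our own illustration, with the assumed model $Y = \mathbb{R}^2$, $X = \mathbb{R}$, and $y_0 = (2, 1)$; the metric axioms transfer from the scalar metric $d$, and the $\le_P$ triangle inequality can be checked coordinatewise since $y_0 \ge 0$ here.

```python
import numpy as np

# A P-valued metric on X = R built from a scalar metric, following the
# example in the text: P = R_+ * y0 in Y = R^2, r(x1, x2) = y0 * d(x1, x2).
y0 = np.array([2.0, 1.0])           # fixed direction, y0 in Y \ {0}

def d(x1, x2):                      # underlying scalar metric on X = R
    return abs(x1 - x2)

def r(x1, x2):                      # the P-valued metric
    return y0 * d(x1, x2)

x1, x2, x3 = 0.0, 1.5, 4.0
# Symmetry and the triangle inequality with respect to <=_P (the difference
# lies in P = R_+ * y0, hence the coordinatewise comparison suffices):
assert np.all(r(x1, x2) == r(x2, x1))
assert np.all(r(x1, x3) <= r(x1, x2) + r(x2, x3))
```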


In order to establish the main result of this section we need a condition similar to (H1):

(H5) For every $\le_r$-decreasing net $((x_i, y_i))_{i \in I} \subseteq A$ with $x_i \to x \in X$ there exists $y \in Y$ such that $(x, y) \in A$ and $(x, y) \le_r (x_i, y_i)$ for every $i \in I$.

We shall see during the proof of the next theorem that one can consider that $I$ is a totally ordered set in (H5). A related condition, which is independent of the metric $r$ and seems to be natural enough, is the following:

(H6) For every net $((x_i, y_i))_{i \in I} \subseteq A$ with $x_i \to x \in X$ and $(y_i)_{i \in I}$ $C$-decreasing, there exists $y \in Y$ such that $(x, y) \in A$ and $y \le_C y_i$ for every $i \in I$.

Note that if the lower sections of $C$ with respect to $C_0$ are closed, i.e., $C \cap (y - C_0)$ is closed for every $y \in C$, and $C_0$ is normal, then (H6) $\Rightarrow$ (H5). Indeed, let $((x_i, y_i))_{i \in I} \subseteq A$ be a $\le_r$-decreasing net with $x_i \to x$. Of course, $(y_i)_{i \in I}$ is $C$-decreasing, and by (H6), there exists $y \in Y$ such that $(x, y) \in A$ and $y \le_C y_i$ for every $i \in I$. Then
$$y + r(x_i, x_j) \le_C y_j + r(x_i, x_j) \le_C y_i \quad \forall\, i, j \in I,\ j \succeq i.$$
It follows that $y_i - y - r(x_i, x_j) \in C \cap (y_i - y - C_0)$, whence, by taking the limit with respect to $j \in I$, we obtain that $y_i - y - r(x_i, x) \in C$, i.e., $(x, y) \le_r (x_i, y_i)$ for every $i \in I$. The normality of $C_0$ was used to obtain that $r(x_i, x_j) \to r(x_i, x)$ when taking the limit with respect to $j \in I$.

Theorem 3.11.17. Let $C \subseteq Y$ be a pointed convex cone, $C_0 \subseteq C$ a normal convex cone that is $C$-seq-b-regular, $r$ a $C_0$-metric on $X$ such that $(X, r)$ is complete, and $A \subseteq X \times Y$ a nonempty set such that $\Pr_Y(A)$ is $C$-lower bounded; suppose that (H5) holds. Then for every $(x_0, y_0) \in A$ there exists a $\le_r$-minimal point $(\bar x, \bar y) \in A$ such that $(\bar x, \bar y) \le_r (x_0, y_0)$.

Proof. Let $B = \{(x, y) \in A \mid (x, y) \le_r (x_0, y_0)\}$ and let $\mathcal{C}$ be a maximal chain in $B$. Of course, $(x_0, y_0) \in \mathcal{C}$. Consider the directed set $(I, \succeq)$ defined by $I = \mathcal{C}$ and, for $i_1 = (x_1, y_1)$, $i_2 = (x_2, y_2) \in I$: $i_1 \succeq i_2$ if $(x_1, y_1) \le_r (x_2, y_2)$. So $\mathcal{C}$ becomes a net indexed on $I$: for $i = (x, y) \in I$, $(x_i, y_i) = (x, y)$. The net $(x_i)_{i \in I}$ is Cauchy with respect to $r$. In the contrary case there exists a neighborhood $V$ of the origin of $Y$ such that
$$\forall\, i \in I,\ \exists\, j, k \in I,\ j, k \succeq i : \ r(x_j, x_k) \notin V. \tag{3.128}$$
Taking into account that $(I, \succeq)$ is totally ordered, for $i_0 = (x_0, y_0)$ there exist $i_1, i_2 \in I$ such that $i_2 \succeq i_1 \succeq i_0$ and $r(x_{i_2}, x_{i_1}) \notin V$. Taking now $i = i_2$ in (3.128), there exist $i_3, i_4 \in I$ such that $i_4 \succeq i_3 \succeq i_2$ and $r(x_{i_4}, x_{i_3}) \notin V$. Continuing in this manner we obtain an increasing sequence $(i_n)_{n \in \mathbb{N}}$ in $I$ such that $r(x_{i_{2n+2}}, x_{i_{2n+1}}) \notin V$ for every $n \in \mathbb{N}$. Writing $(x_n, y_n)$ for the pair indexed by $i_n$, we obtain a $\le_r$-decreasing sequence $((x_n, y_n)) \subseteq \mathcal{C}$ with $r(x_{2n+2}, x_{2n+1}) \notin V$ for every $n$. Since $y_{n+1} + r(x_{n+1}, x_n) \le_C y_n$ for every $n$, we get that for all $m \in \mathbb{N}$,


$$s_m = \sum_{n=0}^{m} r(x_{n+1}, x_n) \le_C \sum_{n=0}^{m} (y_n - y_{n+1}) = y_0 - y_{m+1} \le_C y_0 - y,$$

where $y$ is a $C$-lower bound for $\Pr_Y(A)$. Therefore $(s_m)$ is a $C_0$-increasing and $C$-bounded sequence in $C_0$. Since $C_0$ is $C$-seq-b-regular, it follows that $(s_m)$ is a Cauchy sequence. In particular, for the neighborhood $V$ obtained above there is $m_0 \in \mathbb{N}$ such that $s_m - s_n \in V$ for all $m, n \in \mathbb{N}$ with $m, n \ge m_0$. In particular, $r(x_{2m_0+2}, x_{2m_0+1}) = s_{2m_0+1} - s_{2m_0} \in V$, a contradiction. Therefore $(x_i)_{i \in I}$ is a Cauchy net.

Because $(X, r)$ is complete, $(x_i)$ $r$-converges to some $\bar x \in X$. Since (H5) holds, there exists $\bar y \in Y$ such that $(\bar x, \bar y) \in A$ and $(\bar x, \bar y) \le_r (x_i, y_i)$ for all $i \in I$; that is, $(\bar x, \bar y) \le_r (x, y)$ for all $(x, y) \in \mathcal{C}$. It follows that $(\bar x, \bar y) \in B$, and since $\mathcal{C}$ is a maximal chain in $B$, $(\bar x, \bar y) \in \mathcal{C}$. Therefore $(\bar x, \bar y)$ is the least element of $\mathcal{C}$. The fact that $(\bar x, \bar y)$ is a minimal element of $A$ follows easily. □

Note that if $C$ is not pointed, then $(\bar x, \bar y)$ is such that $(x', y') \in A$, $(x', y') \le_r (\bar x, \bar y)$ imply $(\bar x, \bar y) \le_r (x', y')$.

Consider now $k^0 \in C \setminus \{0\}$ and $d$ a scalar metric on $X$, and define the order relation $\le_{k^0}$ by (3.118). As a consequence of Theorem 3.11.17 we obtain the following weaker version of Theorem 3.11.7.

Corollary 3.11.18. Let $C \subseteq Y$ be a pointed convex cone, $k^0 \in C \setminus \{0\}$, $d$ a (scalar) metric on $X$ such that $(X, d)$ is complete, and $A \subseteq X \times Y$ a nonempty set such that $\Pr_Y(A)$ is $C$-lower bounded and (H1), with "sequence" replaced by "net", holds. Then for every $(x_0, y_0) \in A$ there exists a $\le_{k^0}$-minimal point $(\bar x, \bar y) \in A$ such that $(\bar x, \bar y) \le_{k^0} (x_0, y_0)$.

Proof. Consider the convex cone $C_0 = \mathbb{R}_+ \cdot k^0$ and the $C_0$-metric $r$ on $X$ defined by $r(x_1, x_2) = k^0 d(x_1, x_2)$. The discussion from the beginning of this section shows that the hypotheses of Theorem 3.11.17 are satisfied. Since the relations $\le_r$ and $\le_{k^0}$ coincide, the conclusion follows. □

Another consequence of Theorem 3.11.17 is the following vector-valued EVP.

Corollary 3.11.19. Let $C \subseteq Y$ be a pointed convex cone, $C_0 \subseteq C$ a normal convex cone that is $C$-seq-b-regular, $r$ a $C_0$-metric on $X$ such that $(X, r)$ is complete, and $f : X \to Y^\bullet$. Suppose that all the lower sections of $C$ with respect to $C_0$ are closed, that $f$ is $C$-lower bounded, and that for every net $(x_i)_{i \in I} \subseteq X$ converging to $x \in X$ such that $(f(x_i))$ is $C$-decreasing one has $f(x) \le_C f(x_i)$ for every $i \in I$. Then for every $x_0 \in \operatorname{dom} f$ there exists $\bar x \in \operatorname{dom} f$ such that $f(\bar x) + r(\bar x, x_0) \le_C f(x_0)$ and
$$\forall\, x \in X : \ f(x) + r(x, \bar x) \le_C f(\bar x) \ \Longrightarrow\ x = \bar x.$$


Proof. Let us consider the set $A = \operatorname{gr} f$. It is obvious that $A$ satisfies (H6). Since the lower sections of $C$ with respect to $C_0$ are closed, it follows that $A$ satisfies (H5), too. Applying Theorem 3.11.17 for $(x_0, y_0) = (x_0, f(x_0)) \in A$, we get a minimal point $(\bar x, \bar y)$ of $A$ with respect to $\le_r$ such that $(\bar x, \bar y) \le_r (x_0, y_0)$. Of course, $\bar x \in \operatorname{dom} f$ and $\bar y = f(\bar x)$. It is obvious that $\bar x$ satisfies the conclusion of the corollary. □

Note that Nemeth [432, Th. 6.1] obtained the same conclusion as in Corollary 3.11.19 under the supplementary hypotheses that $C$ is closed and $C_0$ is complete. For further references to vector-valued variational principles, see, e.g., [127, 281, 296, 376].

3.11.5 Fixed Point Theorems of Kirk–Caristi Type

The following existence result is close to a vector-valued variant of a theorem by Takahashi [522] obtained by Tammer [527].

Theorem 3.11.20. Let $(X, d)$ be a complete metric space, $Y$ a t.v.s., $B \subseteq Y$ a closed convex cone, $C \subseteq Y$ a proper convex cone such that $C \setminus \{0\} \subseteq \operatorname{int} B$, and $k^0 \in C \setminus \{0\}$. Consider $f : X \to Y^\bullet$ a proper function for which $\{x \in X \mid f(x) \le_B r k^0\}$ is closed for every $r \in \mathbb{R}$. Assume that $f(X) \cap (t k^0 - B) = \emptyset$ for some $t \in \mathbb{R}$ and that for every $x \in \operatorname{dom} f \setminus \operatorname{Eff}(f, C)$ there exists $y \neq x$ such that $f(y) + k^0 d(y, x) \le_C f(x)$. Then $\operatorname{Eff}(f, C) \neq \emptyset$.

Proof. Note first that for every $x_0 \in \operatorname{dom} f$ there exists some $\varepsilon > 0$ such that $f(X) \cap \big(f(x_0) - \varepsilon k^0 - (B \setminus \{0\})\big) = \emptyset$; otherwise, for some $x_0 \in \operatorname{dom} f$ and every $n \in \mathbb{N}^*$ there exists $x_n \in \operatorname{dom} f$ such that $f(x_n) \in f(x_0) - n k^0 - B$. It follows that $\varphi_{B,k^0}(f(x_n) - f(x_0)) \le -n$, whence $\varphi_{B,k^0}(f(x_n)) \le \varphi_{B,k^0}(f(x_0)) - n \le t$ for $n$ sufficiently large. This yields the contradiction $f(x_n) \in t k^0 - B$.

Applying Corollary 3.11.14 for $x_0 \in \operatorname{dom} f$, $\varepsilon > 0$ such that $f(X) \cap \big(f(x_0) - \varepsilon k^0 - (B \setminus \{0\})\big) = \emptyset$, $\lambda := \varepsilon$, and $C$ replaced by $B$ (and taking into account the discussion after the proof of that result), there exists $\bar x \in \operatorname{dom} f$ such that $x = \bar x$ whenever $f(x) + k^0 d(x, \bar x) \le_B f(\bar x)$. Assume that $\bar x \notin \operatorname{Eff}(f, C)$. By hypothesis, there exists $x \neq \bar x$ such that $f(x) + k^0 d(x, \bar x) \in f(\bar x) - C \subseteq f(\bar x) - B$. So, we get the contradiction $x = \bar x$. □

The following result is close to a vector-valued variant of the Kirk–Caristi fixed point theorem obtained by Tammer [527].

Theorem 3.11.21. Let $(X, d)$ be a complete metric space, $T : X \to X$, $Y$ a t.v.s., $C \subseteq Y$ a proper closed convex cone, and $k^0 \in \operatorname{int} C$. Assume that there exists a proper function $f : X \to Y^\bullet$ for which $\{x \in X \mid f(x) \le_C r k^0\}$ is closed for every $r \in \mathbb{R}$ and $f(X) \cap (t k^0 - C) = \emptyset$ for some $t \in \mathbb{R}$. If $f(Tx) + k^0 d(Tx, x) \le_C f(x)$ for every $x \in \operatorname{dom} f$, then $T$ has at least one fixed point.


Proof. As in the proof of the preceding theorem (applied for $(C, C_0)$ instead of $(B, C)$, where $C_0 = \{0\} \cup \operatorname{int} C$), there exists $\bar x \in \operatorname{dom} f$ such that $x = \bar x$ whenever $f(x) + k^0 d(x, \bar x) \le_C f(\bar x)$. But $f(T\bar x) + k^0 d(T\bar x, \bar x) \le_C f(\bar x)$, and so $T\bar x = \bar x$. □

Recent results concerning existence principles of Takahashi's type for vector optimization problems in a more general framework concerning the involved spaces are derived by Khazayel and Farajzadeh in [331] and by Köbis and Tammer in [344].
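Theorem 3.11.21 has a simple computational picture: iterating $T$ makes the values of $f$ descend along $C$, and the Caristi condition forces the iterates to form a Cauchy sequence. The following sketch is our own toy instance (all data hypothetical: $X = \mathbb{R}$, $C = \mathbb{R}^2_+$, $k^0 = (1,1)$, $T(x) = x/2$, and $f(x) = (2|x|, 2|x|)$ chosen so that the Caristi condition holds exactly).

```python
import numpy as np

# Toy instance of the Kirk-Caristi setting: for this data
# f(Tx) + k0*d(Tx, x) = (1.5|x|, 1.5|x|) <=_C f(x) = (2|x|, 2|x|) for all x.
k0 = np.array([1.0, 1.0])
T = lambda x: x / 2.0
f = lambda x: np.array([2.0 * abs(x), 2.0 * abs(x)])

x = 5.0
for _ in range(100):
    # verify the Caristi descent condition along the orbit
    assert np.all(f(T(x)) + k0 * abs(T(x) - x) <= f(x) + 1e-12)
    x = T(x)

print(x, abs(T(x) - x))   # iterates approach the fixed point 0
```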

3.12 Saddle Point Theory

3.12.1 Lagrange Multipliers and Saddle Point Assertions

Consider a convex vector minimization problem:
$$\text{Compute the set } \operatorname{Eff}(f(\mathcal{A}), C_Y), \tag{P}$$
where $f : M \to Y$, $g : M \to Z$, $Y$ and $Z$ are normed spaces, $C_Z$ and $C_Y$ are closed convex pointed cones in $Z$ and $Y$, respectively, and $\mathcal{A} := \{x \in M \mid g(x) \in -C_Z\}$.

As in ordinary scalar optimization, Lagrange multipliers can be used for different purposes such as duality, saddle point theory, sensitivity, and numerical approaches (compare Amahroq and Taa [15], Clarke [136], El Abdouni and Thibault [183], Boţ, Grad and Wanka [74], Gutiérrez, Huerga, Novo and Tammer [243], Li and Wang [378], Miettinen [408], Minami [409], Tanaka [535–537], Thibault [544], Wang [553]). In the following we derive existence results for Lagrange multipliers. These results considerably extend well-known theorems on Lagrange multipliers in nonlinear programming (compare Kosmol [341]).

Lemma 3.12.1. Let $X$ be a linear space, $M$ a convex subset of $X$, $Y$ and $Z$ normed spaces, $C_Z$ and $C_Y$ closed convex pointed cones in $Z$ and $Y$, respectively, with $\operatorname{int} C_Y \neq \emptyset$. Assume that $f : M \to Y$ is a $C_Y$-convex mapping and $g : M \to Z$ is a $C_Z$-convex mapping for which the following regularity assumptions are satisfied:
(A.1) $\operatorname{int}\{(y, z) \in Y \times Z \mid \exists\, x \in M : y \in f(x) + C_Y \text{ and } z \in g(x) + C_Z\} \neq \emptyset$,
(A.2) $\exists\, y_0 \in \operatorname{cl} f(\mathcal{A})$ with $f(\mathcal{A}) \cap (y_0 - (C_Y \setminus \{0\})) = \emptyset$.
Then there exist $y_0^* \in C_Y^{+*}$ and $z_0^* \in C_Z^{+*}$ with $(y_0^*, z_0^*) \neq (0, 0)$ such that
$$y_0^*(y_0) = \inf\{y_0^*(f(x)) + z_0^*(g(x)) \mid x \in M\}.$$


Proof. Consider the following sets:
$$A := \{(y, z) \in Y \times Z \mid \exists\, x \in M : y \in f(x) + C_Y,\ z \in g(x) + C_Z\}$$
and
$$B := \{(y, z) \in Y \times Z \mid y \in y_0 - C_Y,\ z \in -C_Z\}.$$
In order to apply a separation theorem for convex sets (compare Theorem 2.2.7), we show that the assumptions of the separation theorem are satisfied. The set $A$ is convex: for $(y^1, z^1) \in A$, $(y^2, z^2) \in A$, $0 \le \lambda \le 1$, and corresponding elements $x^1, x^2 \in M$ we get
$$\lambda y^1 + (1 - \lambda) y^2 \in \lambda f(x^1) + (1 - \lambda) f(x^2) + C_Y \subseteq f(\lambda x^1 + (1 - \lambda) x^2) + C_Y + C_Y \subseteq f(\lambda x^1 + (1 - \lambda) x^2) + C_Y,$$
since $C_Y$ is a convex cone and $f$ a $C_Y$-convex mapping. Together with
$$\lambda z^1 + (1 - \lambda) z^2 \in \lambda g(x^1) + (1 - \lambda) g(x^2) + C_Z \subseteq g(\lambda x^1 + (1 - \lambda) x^2) + C_Z,$$
because $C_Z$ is a convex cone and $g$ a $C_Z$-convex mapping, we can conclude that $(\lambda y^1 + (1 - \lambda) y^2, \lambda z^1 + (1 - \lambda) z^2) \in A$. Moreover, $B$ is convex by the convexity of $C_Y$ and $C_Z$. Under the assumption (A.1), $\operatorname{int} A \neq \emptyset$.

In order to show that $\operatorname{int} A \cap B = \emptyset$, we suppose that there exists $(y, z) \in \operatorname{int} A \cap B$. This implies that there exists $x \in M$ with
$$g(x) \in z - C_Z \subseteq -C_Z \quad \text{and} \quad f(x) \in y - C_Y \subseteq y_0 - C_Y,$$
so that we get $y_0 = y = f(x)$ because of the definition of $y_0$ in (A.2) and since $C_Y$ is a pointed convex cone. Regarding $(y, z) \in \operatorname{int} A$, it follows that there are an $\varepsilon > 0$ and neighborhoods $U_\varepsilon(y) \subset Y$, $V_\varepsilon(z) \subset Z$ with $U_\varepsilon(y) \times V_\varepsilon(z) \subset A$; especially, for $k^0 \in C_Y \setminus \{0\}$ with $\|k^0\| = 1$ we consider $(y - \frac{\varepsilon}{2} k^0, z) \in A$, i.e., for some $x' \in M$,
$$y_0 - \frac{\varepsilon}{2} k^0 = y - \frac{\varepsilon}{2} k^0 \in f(x') + C_Y$$
and
$$g(x') \in z - C_Z \subseteq -C_Z.$$
This means that $x' \in \mathcal{A}$ and $f(x') \in y_0 - (C_Y \setminus \{0\})$, in contradiction to the definition of $y_0$ in (A.2).

Consider the set $A - B$. Under the given assumptions $A - B$ is convex and $\operatorname{int}(A - B) \neq \emptyset$. Taking into account $\operatorname{int} A \cap B = \emptyset$, we get $0 \notin \operatorname{int}(A - B)$. Now it is possible to apply a separation theorem for convex sets (Theorem 2.2.7). This separation theorem implies the existence of $(y_0^*, z_0^*) \in (Y^* \times Z^*) \setminus \{0\}$ such that for every $(y^1, z^1) \in A$ and every $(y^2, z^2) \in B$:

290

3 Optimization in Partially Ordered Spaces

$$z_0^*(z^1) + y_0^*(y^1) \ge z_0^*(z^2) + y_0^*(y^2). \tag{3.129}$$

In the following we show that $y_0^* \in C_Y^{+*}$ and $z_0^* \in C_Z^{+*}$. If we suppose $y_0^* \notin C_Y^{+*}$, i.e., $y_0^*(\bar y) < 0$ for an element $\bar y \in C_Y$, we get for $y := -\bar y \in -C_Y$, regarding that $C_Y$ is a cone,
$$\sup\{y_0^*(n y) \mid n \in \mathbb{N}\} = \sup\{n\, y_0^*(y) \mid n \in \mathbb{N}\} = +\infty,$$
in contradiction to the separation property (3.129). Analogously, we can show that $z_0^* \in C_Z^{+*}$.

For all $x \in M$, $(f(x), g(x)) \in A$, and with $(y_0, 0) \in B$ we get
$$\inf\{z_0^*(g(x)) + y_0^*(f(x)) \mid x \in M\} \ge y_0^*(y_0).$$
Now consider a sequence $\{x_n\}_{n \in \mathbb{N}}$ in $\mathcal{A} = \{x \in M \mid g(x) \in -C_Z\}$ with $\lim_{n \to \infty} f(x_n) = y_0$. Then we get
$$\inf\{z_0^*(g(x)) + y_0^*(f(x)) \mid x \in M\} \le \inf\{z_0^*(g(x)) + y_0^*(f(x)) \mid x \in \mathcal{A}\} \le \inf\{y_0^*(f(x)) \mid x \in \mathcal{A}\} \le \lim_{n \to \infty} y_0^*(f(x_n)) = y_0^*(y_0),$$
so that the equation holds. □

Lemma 3.12.2. In addition to the assumptions of Lemma 3.12.1 we suppose:
(A.3) There exists an element $x^1 \in M$ such that $z^*(g(x^1)) < 0$ for all $z^* \in C_Z^{+*} \setminus \{0\}$.
(i) Then there exist elements $y_0^* \in C_Y^{+*} \setminus \{0\}$ and $z_0^* \in C_Z^{+*}$ with
$$y_0^*(y_0) = \inf\{y_0^*(f(x)) + z_0^*(g(x)) \mid x \in M\}.$$
(ii) If $x_0 \in \mathcal{A}$ and $f(x_0) \in \operatorname{Eff}(f(\mathcal{A}), C_Y)$, then there exist $y_0^* \in C_Y^{+*} \setminus \{0\}$ and $z_0^* \in C_Z^{+*}$ such that $x_0$ is also a minimal solution of $y_0^*(f(\cdot)) + z_0^*(g(\cdot))$ on $M$, and $z_0^*(g(x_0)) = 0$.

Proof. (i) From Lemma 3.12.1 we can conclude that there exist $y_0^* \in C_Y^{+*}$, $z_0^* \in C_Z^{+*}$ with $(y_0^*, z_0^*) \neq (0, 0)$ and
$$y_0^*(y_0) = \inf\{y_0^*(f(x)) + z_0^*(g(x)) \mid x \in M\}. \tag{3.130}$$
Under the assumption (A.3) we suppose $y_0^* = 0$. Then we get in (3.129), with $z^1 = g(x^1)$ and $z^2 = 0$,
$$z_0^*(g(x^1)) \ge z_0^*(0) = 0. \tag{3.131}$$


Regarding $(y_0^*, z_0^*) \neq (0, 0)$, we have $z_0^* \neq 0$, and now, together with the assumption (A.3), we obtain from (3.131) the contradiction $0 > z_0^*(g(x^1)) \ge 0$.
(ii) If $x_0 \in \mathcal{A}$ and $y_0 := f(x_0) \in \operatorname{Eff}(f(\mathcal{A}), C_Y)$, then (3.130) implies
$$y_0^*(y_0) \le y_0^*(f(x_0)) + z_0^*(g(x_0)) \le y_0^*(f(x_0)) = y_0^*(y_0),$$
so that $y_0^*(f(x_0)) + z_0^*(g(x_0)) = \inf\{y_0^*(f(x)) + z_0^*(g(x)) \mid x \in M\}$ and $z_0^*(g(x_0)) = 0$. □

Remark 3.12.3. Conversely, if $x_0 \in M$ is a minimal solution of the Lagrangian $y_0^*(f(\cdot)) + z_0^*(g(\cdot))$ with $g(x_0) \in -C_Z$ and $z_0^*(g(x_0)) = 0$, then $f(x_0) \in \operatorname{wEff}(f(\mathcal{A}), C_Y)$ follows without any regularity assumption:
$$y_0^*(f(x_0)) = y_0^*(f(x_0)) + z_0^*(g(x_0)) \le y_0^*(f(x)) + z_0^*(g(x)) \le y_0^*(f(x))$$
for all $x \in M$ with $g(x) \in -C_Z$, whence $f(x_0) \in \operatorname{wEff}(f(\mathcal{A}), C_Y)$, regarding $y_0^* \in C_Y^{+*} \setminus \{0\}$.

Theorem 3.12.4. Suppose that (A.1), (A.2), and (A.3) are satisfied, and let $x_0 \in M$. Then:
(i) If $f(x_0) \in \operatorname{Eff}(f(\mathcal{A}), C_Y)$, then there exist $y_0^* \in C_Y^{+*} \setminus \{0\}$ and $z_0^* \in C_Z^{+*}$ such that the following saddle point assertion is satisfied: for every $x \in M$ and every $z^* \in C_Z^{+*}$ it holds that
$$y_0^*(f(x_0)) + z^*(g(x_0)) \le y_0^*(f(x_0)) + z_0^*(g(x_0)) \le y_0^*(f(x)) + z_0^*(g(x)). \tag{3.132}$$
(ii) Conversely, if there are $y_0^* \in C_Y^{+*} \setminus \{0\}$ and $(x_0, z_0^*) \in M \times C_Z^{+*}$ such that the saddle point assertion (3.132) is satisfied for all $x \in M$ and $z^* \in C_Z^{+*}$, then $f(x_0) \in \operatorname{wEff}(f(\mathcal{A}), C_Y)$.

Proof. (i) Assume $f(x_0) \in \operatorname{Eff}(f(\mathcal{A}), C_Y)$. Using Lemma 3.12.2(ii), we get that there exist $y_0^* \in C_Y^{+*} \setminus \{0\}$ and $z_0^* \in C_Z^{+*}$ with
$$y_0^*(f(x_0)) + z_0^*(g(x_0)) \le y_0^*(f(x)) + z_0^*(g(x)) \quad \text{for every } x \in M.$$
Furthermore, regarding $-g(x_0) \in C_Z$, it follows, again with Lemma 3.12.2(ii), that $z^*(g(x_0)) \le 0 = z_0^*(g(x_0))$ for every $z^* \in C_Z^{+*}$. This yields


$$y_0^*(f(x_0)) + z^*(g(x_0)) \le y_0^*(f(x_0)) + z_0^*(g(x_0)) \quad \text{for every } z^* \in C_Z^{+*}.$$
Then both inequalities in (3.132) are satisfied.
(ii) Suppose $y_0^* \in C_Y^{+*} \setminus \{0\}$ and assume that the saddle point assertion is satisfied for $(x_0, z_0^*) \in M \times C_Z^{+*}$. Then the first inequality implies
$$z^*(g(x_0)) \le z_0^*(g(x_0)) \quad \text{for every } z^* \in C_Z^{+*},$$
so that we get, regarding that $C_Z^{+*}$ is a convex cone,
$$(z^* + z_0^*)(g(x_0)) \le z_0^*(g(x_0)), \quad \text{hence} \quad z^*(g(x_0)) \le 0 \quad \text{for every } z^* \in C_Z^{+*},$$
and therefore $g(x_0) \in -C_Z$. Using $z^* = 0 \in C_Z^{+*}$ in the first inequality, this implies $0 \ge z_0^*(g(x_0)) \ge 0$, and so $z_0^*(g(x_0)) = 0$. Consider now $x \in M$ with $g(x) \in -C_Z$. Then we conclude from the second inequality in the saddle point assertion that
$$y_0^*(f(x_0)) = y_0^*(f(x_0)) + z_0^*(g(x_0)) \le y_0^*(f(x)) + z_0^*(g(x)) \le y_0^*(f(x)),$$
i.e.,
$$y_0^*(f(x_0)) \le y_0^*(f(x)) \quad \text{for every } x \in \mathcal{A}.$$
This means that $f(x_0) \in \operatorname{wEff}(f(\mathcal{A}), C_Y)$. □

Remark 3.12.5. A point $(x_0, z_0^*) \in M \times C_Z^{+*}$ satisfying the property (3.132) for an element $y_0^* \in C_Y^{+*} \setminus \{0\}$ is called a $y_0^*$-saddle point of the Lagrangian
$$\Phi(x, z^*) := y_0^*(f(x)) + z^*(g(x)), \quad x \in M,\ z^* \in C_Z^{+*}.$$
In the next subsection we will derive necessary and sufficient conditions for $y^*$-saddle points of a generalized Lagrangian. The relation (3.132) can be described by
$$\Phi(x_0, z_0^*) \in \operatorname{Min}(\{\Phi(x, z_0^*) \mid x \in M\}, y_0^*), \qquad \Phi(x_0, z_0^*) \in \operatorname{Max}(\{\Phi(x_0, z^*) \mid z^* \in C_Z^{+*}\}, y_0^*)$$
(cf. Section 3.12.2).

Remark 3.12.6. Taking $M = \mathbb{R}^2_+ \subset X = \mathbb{R}^2$, $C_Y = C_Z = \mathbb{R}^2_+$, $Y = Z = \mathbb{R}^2$, $f = I$ (identity), and $g(x) = -x$ for every $x \in M$, we have $\mathcal{A} = \mathbb{R}^2_+$, and all assumptions of Theorem 3.12.4 are satisfied. Then $x_0 = (0, 1)^T$, $y_0^* = (1, 0)^T$, $z_0^* = (0, 0)^T$ is a $y_0^*$-saddle point of the Lagrangian $\Phi$ ($\Phi(x, z^*) = y_0^*(f(x)) + z^*(g(x))$, $x \in M$, $z^* \in C_Z^{+*}$), since $0 + z^*(-x_0) \le 0 \le (x)_1$. The element $x_0$ is only weakly efficient, as proved in the theorem. So we cannot expect a symmetrical assertion of the kind "saddle point iff efficiency".
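The data of Remark 3.12.6 are small enough that the saddle point inequalities (3.132) can be checked directly. The following Python sketch does so by random sampling; it is only a numerical confirmation of the remark's example, with the sampling ranges being our own assumptions.

```python
import numpy as np

# Numerical check of the y0*-saddle point from Remark 3.12.6:
# M = R^2_+, f = identity, g(x) = -x, y0* = (1,0), z0* = (0,0), x0 = (0,1).
y0s = np.array([1.0, 0.0])
z0s = np.array([0.0, 0.0])
x0 = np.array([0.0, 1.0])
f = lambda x: x
g = lambda x: -x
Phi = lambda x, zs: y0s @ f(x) + zs @ g(x)

rng = np.random.default_rng(1)
for _ in range(1000):
    x = rng.uniform(0, 10, 2)     # sample x in M = R^2_+
    zs = rng.uniform(0, 10, 2)    # sample z* in C_Z^{+*} = R^2_+
    assert Phi(x0, zs) <= Phi(x0, z0s) <= Phi(x, z0s)   # (3.132)

print(Phi(x0, z0s))   # 0.0; yet x0 is only weakly efficient
```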


3.12.2 ε-Saddle Point Assertions

In the following we derive ε-saddle point assertions for a special class of convex vector optimization problems (cf. [92], [93]). We consider in this section a general class of vector-valued approximation problems that contains many practically important special cases, and we apply the concept of approximately efficient elements introduced in Section 3.1.1 to this problem. Approximate solutions of optimization problems are of interest from the computational as well as the theoretical point of view. In particular, the solution set of the approximation problem may be empty in the general noncompact case, whereas approximate solutions exist under very weak assumptions.

Valyi [549, 550] has developed Hurwicz-type saddle point theorems for different types of approximately efficient solutions of convex vector optimization problems (cf. a survey in the paper of Dauer and Stadler [154]). The aim of this section is to derive approximate saddle point assertions for vector-valued location and approximation problems using a generalized Lagrangian. We introduce a generalized saddle function for the vector-valued approximation problem and use different concepts of approximate saddle points. Furthermore, we derive necessary and sufficient conditions for approximate saddle points, estimate the approximation error, and study the relations between the original problem and saddle point assertions under regularity assumptions.

All topological linear spaces that will occur are over the field $\mathbb{R}$ of real numbers. If $X$ and $U$ are topological linear spaces, then $L(X, U)$ denotes the set of all continuous linear mappings from $X$ into $U$. Let $X$ be a topological linear space. We recall that a function $f : X \to \mathbb{R}$ is said to be sublinear (cf. Section 2.2) if $f(x + y) \le f(x) + f(y)$ and $f(\alpha x) = \alpha f(x)$ whenever $\alpha \in \mathbb{R}_+$ and $x, y \in X$. If $f : X \to \mathbb{R}$ is a sublinear function, then the set
$$\partial f(0) := \{y^* \in L(X, \mathbb{R}) \mid y^*(x) \le f(x) \text{ for every } x \in X\}$$
is called the subdifferential of $f$ at the origin of $X$. It is well known (Hahn–Banach theorem) that for each continuous sublinear function $f : X \to \mathbb{R}$ the following formula holds:
$$f(x) = \max\{y^*(x) \mid y^* \in \partial f(0)\} \quad \text{for every } x \in X. \tag{3.133}$$
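Formula (3.133) is concrete for norms. As an illustration of our own (the choice $f(x) = \|x\|_1$ on $\mathbb{R}^n$, whose subdifferential at the origin is the box $[-1, 1]^n$, is an assumption, not data from the text), the maximum is attained at $y^* = \operatorname{sign}(x)$:

```python
import numpy as np

# Illustration of (3.133) for the sublinear function f(x) = ||x||_1 on R^n:
# partial f(0) = [-1, 1]^n, and max of y*(x) over the box equals ||x||_1.
rng = np.random.default_rng(2)
x = rng.normal(size=4)

f_x = np.sum(np.abs(x))            # f(x) = ||x||_1
ystar_max = np.sign(x)             # a maximizer in partial f(0)
assert np.isclose(ystar_max @ x, f_x)

# Random elements of the box never exceed f(x), confirming y*(x) <= f(x):
samples = rng.uniform(-1, 1, size=(2000, 4))
assert np.all(samples @ x <= f_x + 1e-12)
```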

Generalizing the concept of a sublinear function, we call a mapping f = (f1 , . . . , fp ) : X → Rp sublinear if its components f1 , . . . , fp are sublinear


functions. The subdifferential at the origin of $X$ of a sublinear mapping $f := (f_1, \ldots, f_p) : X \to \mathbb{R}^p$ is defined by $\partial f(0) := \partial f_1(0) \times \cdots \times \partial f_p(0)$. Taking into account formula (3.133), it follows for a continuous sublinear mapping $f : X \to \mathbb{R}^p$ that
$$f(x) \in \Lambda(x) + \mathbb{R}^p_+ \quad \text{for every } \Lambda \in \partial f(0),\ \text{for every } x \in X. \tag{3.134}$$

Let $F$ be a subset of $\mathbb{R}^p$, and let $y_0$ be a point in $\mathbb{R}^p$. Given a subset $B$ of $\mathbb{R}^p$ and an element $e \in \mathbb{R}^p$, in extension of the notation in Section 3.1.1 the point $y_0$ is called a $(B, e)$-minimal (resp. $(B, e)$-maximal) element of $F$ (Definition 2.1.2) if $y_0 \in F$ and $F \cap (y_0 - e - (B \setminus \{0\})) = \emptyset$ (resp. $F \cap (y_0 + e + (B \setminus \{0\})) = \emptyset$). The set consisting of all $(B, e)$-minimal (resp. $(B, e)$-maximal) elements of $F$ is denoted by $\operatorname{Eff}_{\operatorname{Min}}(F, B, e)$ (resp. $\operatorname{Eff}_{\operatorname{Max}}(F, B, e)$). If $e$ is the origin of $\mathbb{R}^p$, then the $(B, e)$-minimal (resp. $(B, e)$-maximal) elements of $F$ are simply called $B$-minimal (resp. $B$-maximal) elements of $F$, and their set is denoted by $\operatorname{Eff}_{\operatorname{Min}}(F, B)$ (resp. $\operatorname{Eff}_{\operatorname{Max}}(F, B)$).

Given a function $y^* \in L(\mathbb{R}^p, \mathbb{R})$ and an element $e \in \mathbb{R}^p$, the point $y_0$ is called a $(y^*, e)$-minimal (resp. $(y^*, e)$-maximal) point (or element) of $F$ if $y_0 \in F$ and
$$y^*(y_0) - y^*(e) \le y^*(y) \quad \text{for every } y \in F$$
(resp. $y^*(y) \le y^*(y_0) + y^*(e)$ for every $y \in F$). The set consisting of all $(y^*, e)$-minimal (resp. $(y^*, e)$-maximal) elements of $F$ is denoted by $\operatorname{Min}(F, y^*, e)$ (resp. $\operatorname{Max}(F, y^*, e)$). If $e$ is the origin of $\mathbb{R}^p$, then the $(y^*, e)$-minimal (resp. $(y^*, e)$-maximal) elements of $F$ are simply called $y^*$-minimal (resp. $y^*$-maximal) elements of $F$, and their set is denoted by $\operatorname{Min}(F, y^*)$ (resp. $\operatorname{Max}(F, y^*)$).

Let M and N be nonempty sets, let X be a topological vector space, and let Φ be a mapping from M × N to X. Given a subset B of X and an element e ∈ X, a point (x0 , y0 ) ∈ M × N is said to be a (B, e)-saddle point of Φ with respect to M × N if the following conditions are satisfied:


$$\Phi(x_0, y_0) \in \operatorname{Eff}_{\operatorname{Min}}(\{\Phi(x, y_0) \mid x \in M\}, B, e); \tag{3.135}$$
$$\Phi(x_0, y_0) \in \operatorname{Eff}_{\operatorname{Max}}(\{\Phi(x_0, y) \mid y \in N\}, B, e). \tag{3.136}$$
Given a function $y^* \in L(X, \mathbb{R})$ and an element $e \in X$, a point $(x_0, y_0) \in M \times N$ is said to be a $(y^*, e)$-saddle point of $\Phi$ with respect to $M \times N$ if the following conditions are satisfied:
$$\Phi(x_0, y_0) \in \operatorname{Min}(\{\Phi(x, y_0) \mid x \in M\}, y^*, e); \tag{3.137}$$
$$\Phi(x_0, y_0) \in \operatorname{Max}(\{\Phi(x_0, y) \mid y \in N\}, y^*, e). \tag{3.138}$$

In order to formulate this problem we suppose throughout this section that
• $X$, $U$, and $V$ are reflexive Banach spaces;
• $A : X \to U$, $B : X \to V$, $l : X \to \mathbb{R}^p$ are continuous linear mappings;
• $f : U \to \mathbb{R}^p$ is a continuous sublinear mapping;
• $b \in V \setminus \{0\}$;
• $\mathcal{A} \subseteq U$, $\mathcal{X} \subseteq X$, $C_V \subseteq V$, and $C \subset \mathbb{R}^p$ are closed, pointed, and convex cones;
• $C + \mathbb{R}^p_+ \subseteq C$.
Defining $F : \mathcal{A} \times \mathcal{X} \to \mathbb{R}^p$ by
$$F(a, x) := l(x) + f(a - A(x)),$$
and $S := \{(a, x) \in U \times X \mid a \in \mathcal{A},\ x \in \mathcal{X},\ B(x) - b \in C_V\}$, we consider the following vector optimization problem:
$$\text{Compute the set } \operatorname{Eff}_{\operatorname{Min}}(F(S), C). \tag{P(C)}$$

Remark 3.12.7. Special cases of the vector optimization problem (P(C)) are:
1. The vector approximation problem and the vector location problem, obtained by setting
$$f(a - A(x)) = \big(\alpha_1 \|a_1 - A_1(x)\|_1, \ldots, \alpha_p \|a_p - A_p(x)\|_p\big)^T$$
(cf. Jahn [306], Gerth (Tammer), Pöhler [214], Wanka [555]).
2. Linear vector optimization problems if we set $f = 0$.
3. Surrogate problems for linear vector optimization problems with an objective function $l(x)$ subject to $S^0 = \{x \in S \mid A(x) = a\}$, for which the feasible set $S^0$ is empty but $S$ is nonempty.

Proceeding as in the paper [524], we define the mapping $L : U \times X \times L(U, \mathbb{R}^p) \times L(V, \mathbb{R}^p) \to \mathbb{R}^p$ by
$$L(a, x, Y, Z) := l(x) + Y(a - A(x)) + Z(b - B(x)).$$
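A small numerical model makes the generalized Lagrangian $L$ tangible and confirms the bi-affine structure discussed next. The sketch below is our own assumption (finite-dimensional data $X = U = V = \mathbb{R}^2$, $p = 2$, with the linear maps given as matrices); it only evaluates $L$ and checks that the partial map $a \mapsto L(a, x, Y, Z)$ is affine.

```python
import numpy as np

# Minimal numerical model of L(a,x,Y,Z) = l(x) + Y(a - A(x)) + Z(b - B(x)).
l_mat = np.array([[1.0, 0.0], [0.0, 1.0]])   # l : X -> R^p
A_mat = np.array([[1.0, 1.0], [0.0, 1.0]])   # A : X -> U
B_mat = np.eye(2)                            # B : X -> V
b = np.array([1.0, 0.5])

def L(a, x, Y, Z):
    return l_mat @ x + Y @ (a - A_mat @ x) + Z @ (b - B_mat @ x)

# With three variables fixed, each partial mapping of L is affine; e.g.:
a1, a2 = np.array([1.0, 0.0]), np.array([0.0, 2.0])
x = np.array([0.5, 0.5]); Y = 0.3 * np.eye(2); Z = 0.1 * np.eye(2)
lam = 0.25
lhs = L(lam * a1 + (1 - lam) * a2, x, Y, Z)
rhs = lam * L(a1, x, Y, Z) + (1 - lam) * L(a2, x, Y, Z)
assert np.allclose(lhs, rhs)
```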


When three of the four variables $a \in U$, $x \in X$, $Y \in L(U, \mathbb{R}^p)$, and $Z \in L(V, \mathbb{R}^p)$ are fixed, the corresponding partial mappings $L(\cdot, x, Y, Z)$, $L(a, \cdot, Y, Z)$, $L(a, x, \cdot, Z)$, $L(a, x, Y, \cdot)$ are affine. This property distinguishes our mapping $L$ from the Lagrangian mapping usually associated with the problem (P(C)) (see, e.g., [550]). In what follows we consider $L$ as a function of the two variables $(a, x)$ and $(Y, Z)$, and investigate approximate saddle points of $L$ with respect to $(\mathcal{A} \times \mathcal{X}) \times (\mathcal{Y} \times \mathcal{Z})$, where
$$\mathcal{Y} := \partial f(0), \qquad \mathcal{Z} := \{Z \in L(V, \mathbb{R}^p) \mid Z[C_V] \subseteq C\}.$$
For short, we set $D := (\mathcal{A} \times \mathcal{X}) \times (\mathcal{Y} \times \mathcal{Z})$.

Theorem 3.12.8. Let $y^*$ be a functional in $C^+ \setminus \{0\}$, let $e$ be an element in $C$, and let $(a_0, x_0, Y_0, Z_0)$ be an element in $D$. Then $(a_0, x_0, Y_0, Z_0)$ is a $(y^*, e)$-saddle point of $L$ with respect to $D$ if and only if the following conditions are satisfied:
(i) $L(a_0, x_0, Y_0, Z_0) \in \operatorname{Min}(\{L(a, x, Y_0, Z_0) \mid (a, x) \in \mathcal{A} \times \mathcal{X}\}, y^*, e)$,
(ii) $B(x_0) - b \in C_V$,
(iii) $y^*(Y_0(a_0 - A(x_0))) + y^*(Z_0(b - B(x_0))) \ge y^*(f(a_0 - A(x_0))) - y^*(e)$.

Proof. Necessity. Condition (i) follows from (3.137) in the definition of a $(y^*, e)$-saddle point. In order to prove (ii), we suppose that $B(x_0) - b \notin C_V$. According to a strict separation theorem (see, e.g., Theorem 3.18 in [306]), there exists a functional $\mu \in C_V^{+*}$ such that $\mu(B(x_0) - b) < 0$. Let $k$ be a point chosen from $\operatorname{int} C$. With $\mu$ and $k$ we define the mapping $Z : V \to \mathbb{R}^p$ by
$$Z(v) := \frac{\mu(v)}{\mu(b - B(x_0))}\,(e + k) + Z_0(v). \tag{3.139}$$
Obviously, $Z$ belongs to $\mathcal{Z}$. Taking into account that $y^*(k) > 0$, we also see that
$$y^*((Z - Z_0)(b - B(x_0))) = y^*(e + k) > y^*(e).$$
This result implies that
$$y^*(L(a_0, x_0, Y_0, Z)) = y^*(L(a_0, x_0, Y_0, Z_0)) + y^*((Z - Z_0)(b - B(x_0))) > y^*(L(a_0, x_0, Y_0, Z_0)) + y^*(e),$$
which contradicts
$$L(a_0, x_0, Y_0, Z_0) \in \operatorname{Max}(\{L(a_0, x_0, Y, Z) \mid (Y, Z) \in \mathcal{Y} \times \mathcal{Z}\}, y^*, e). \tag{3.140}$$

Therefore, condition (ii) must be satisfied. Next we note that (3.140) implies
$$y^*(L(a_0, x_0, Y, Z)) \le y^*(L(a_0, x_0, Y_0, Z_0)) + y^*(e) \tag{3.141}$$

for every $(Y, Z) \in \mathcal{Y} \times \mathcal{Z}$. By specializing $(Y, Z)$ in (3.141), we obtain (iii). Indeed, from (3.141) it follows that for any mapping $Y \in \mathcal{Y}$ with the property $Y(a_0 - A(x_0)) = f(a_0 - A(x_0))$ and for $Z = 0$ the relation
$$y^*(f(a_0 - A(x_0))) \le y^*(Y_0(a_0 - A(x_0))) + y^*(Z_0(b - B(x_0))) + y^*(e)$$
holds. This means that (iii) is true.

Sufficiency. (i) is equivalent to (3.137) in the definition of a $(y^*, e)$-saddle point. We have to prove that (3.138) also holds. Let $(Y, Z)$ be an arbitrary pair in $\mathcal{Y} \times \mathcal{Z}$. Then we have from (3.134)
$$f(a_0 - A(x_0)) \in Y(a_0 - A(x_0)) + \mathbb{R}^p_+ \subseteq Y(a_0 - A(x_0)) + C,$$
and from (ii), $Z(B(x_0) - b) \in C$. Since $y^* \in C^+$, we conclude that $y^*(Y(a_0 - A(x_0))) \le y^*(f(a_0 - A(x_0)))$ and $y^*(Z(b - B(x_0))) \le 0$. These inequalities imply
$$y^*(Y(a_0 - A(x_0))) + y^*(Z(b - B(x_0))) \le y^*(f(a_0 - A(x_0))).$$
Taking (iii) into consideration, it follows that
$$y^*(Y(a_0 - A(x_0))) + y^*(Z(b - B(x_0))) \le y^*(Y_0(a_0 - A(x_0))) + y^*(Z_0(b - B(x_0))) + y^*(e).$$
Consequently, we have $y^*(L(a_0, x_0, Y, Z)) \le y^*(L(a_0, x_0, Y_0, Z_0)) + y^*(e)$. Since $(Y, Z)$ was arbitrarily chosen, the latter inequality holds for all $(Y, Z) \in \mathcal{Y} \times \mathcal{Z}$. So, (3.138) has been proved. □

Corollary 3.12.9. Let $y^*$ be a functional in $C^+ \setminus \{0\}$, let $e$ be an element in $C$, and let $(a_0, x_0, Y_0, Z_0) \in D$ be a $(y^*, e)$-saddle point of $L$ with respect to $D$. Then the following properties are true:
(j) $(a_0, x_0) \in S$,
(jj) $y^*(Y_0(a_0 - A(x_0))) \ge y^*(f(a_0 - A(x_0))) - y^*(e)$,
(jjj) $y^*(Z_0(b - B(x_0))) \ge -y^*(e)$.


Proof. Obviously, (j) results from (ii) in Theorem 3.12.8. In order to prove (jj) and (jjj), we note that (3.141) implies, for every $Y \in \mathcal{Y}$ and every $Z \in \mathcal{Z}$:
$$y^*(Y(a_0 - A(x_0))) + y^*(Z(b - B(x_0))) \le y^*(Y_0(a_0 - A(x_0))) + y^*(Z_0(b - B(x_0))) + y^*(e). \tag{3.142}$$
By setting in (3.142) a $Y \in \mathcal{Y}$ with the property $Y(a_0 - A(x_0)) = f(a_0 - A(x_0))$ and $Z = Z_0$, we obtain (jj), while by setting $Y = Y_0$ and $Z = 0$ in (3.142), we obtain (jjj). □

Remark 3.12.10. Item (jjj) in Corollary 3.12.9 can be interpreted as a condition of approximate complementary slackness for $Z_0$ and $b - B(x_0)$. Namely, we have
$$-y^*(e) \le y^*(Z_0(b - B(x_0))) \le 0.$$
Putting $e = 0$, this relation implies the well-known condition $y^*(Z_0(b - B(x_0))) = 0$.

Theorem 3.12.11. Let $y^*$ be a functional in $C^+ \setminus \{0\}$, let $e$ be an element in $C$, and let $(a_0, x_0, Y_0, Z_0) \in D$ be a $(y^*, e)$-saddle point of $L$ with respect to $D$. Then $(a_0, x_0)$ is a $(y^*, \bar e)$-minimal element of $F(S)$, where $\bar e := 2e$ is the approximation error.

Proof. According to property (j) in Corollary 3.12.9, we have $(a_0, x_0) \in S$. Let $(a, x)$ be an arbitrary point in $S$. From (i) in Theorem 3.12.8, we have
$$y^*(L(a_0, x_0, Y_0, Z_0)) - y^*(e) \le y^*(L(a, x, Y_0, Z_0)).$$
On the other hand, condition (iii) in Theorem 3.12.8 implies that
$$y^*(F(a_0, x_0)) - y^*(e) \le y^*(L(a_0, x_0, Y_0, Z_0)).$$
Therefore, the following inequality is true:
$$y^*(F(a_0, x_0)) - y^*(\bar e) \le y^*(L(a, x, Y_0, Z_0)), \tag{3.143}$$
with $\bar e := 2e$. Taking now into account that
$$f(a - A(x)) \in Y_0(a - A(x)) + \mathbb{R}^p_+ \subseteq Y_0(a - A(x)) + C$$
and that $Z_0(B(x) - b) \in C$, we conclude that


$$y^*(Y_0(a - A(x))) \le y^*(f(a - A(x))) \quad \text{and} \quad y^*(Z_0(b - B(x))) \le 0.$$

These inequalities imply $y^*(L(a, x, Y_0, Z_0)) \le y^*(F(a, x))$. From this and (3.143), we obtain
$$y^*(F(a_0, x_0)) - y^*(\bar e) \le y^*(F(a, x)).$$
Since $(a, x)$ was arbitrarily chosen, the latter inequality holds for all $(a, x) \in S$. This means that $(a_0, x_0)$ is a $(y^*, \bar e)$-minimal element of $F(S)$. □

Theorem 3.12.12. We assume the existence of a feasible point $(\bar a, \bar x) \in S$ with $B(\bar x) - b \in \operatorname{int} C_V$. Let $y^*$ be a functional in $C^+ \setminus (-C^+)$, let $e$ be an element in $C$, and let $(a_0, x_0) \in S$ be a $(y^*, e)$-minimal element of $F(S)$. Then there exist operators $Y_0 \in \mathcal{Y}$ and $Z_0 \in \mathcal{Z}$ such that $(a_0, x_0, Y_0, Z_0)$ is a $(y^*, e)$-saddle point of $L$ with respect to $D$.

Proof. We consider the scalarized Lagrangian defined by

$$L^{y^*}(a, x, Y, v^*) := y^*(l(x)) + y^*(Y(a - A(x))) + v^*(b - B(x))$$
over $D := (\mathcal{A} \times \mathcal{X}) \times (\mathcal{Y} \times C_V^{+*})$. We show that for this Lagrangian the assumptions of Theorem 49.A in [582] are satisfied. Obviously, the assumptions (H1) and (H2) of this theorem are true. To show (H3*), we consider a sequence $\{(Y^n, v_n^*)\} \subset \mathcal{Y} \times C_V^{+*}$ with $\|(Y^n, v_n^*)\| \to \infty$ as $n \to \infty$. Since $Y^n \in \partial f(0)$ for every $n$, there is a constant $\alpha > 0$ such that $\|Y^n\| \le \alpha$ for every $n$. Thus we have $\|v_n^*\| \to \infty$ as $n \to \infty$. From $B(\bar x) - b \in \operatorname{int} C_V$ follows the existence of a $\delta > 0$ such that
$$v^*(b - B(\bar x)) \le -\delta \quad \text{for every } v^* \in \{v^* \in C_V^{+*} \mid \|v^*\| = 1\}.$$
So, we have
$$L^{y^*}(\bar a, \bar x, Y^n, v_n^*) = y^*(l(\bar x)) + y^*(Y^n(\bar a - A(\bar x))) + v_n^*(b - B(\bar x)) \le y^*(l(\bar x)) + \alpha \|y^*\| \|\bar a - A(\bar x)\| - \delta \|v_n^*\| \xrightarrow{n \to \infty} -\infty,$$

which proves (H3*). Applying the mentioned theorem, there exist $Y_0 \in \mathcal{Y}$, $v_0^* \in C_V^{+*}$ satisfying
$$\inf_{(a,x) \in \mathcal{A} \times \mathcal{X}} L^{y^*}(a, x, Y_0, v_0^*) = \sup_{(Y, v^*) \in \mathcal{Y} \times C_V^{+*}} \ \inf_{(a,x) \in \mathcal{A} \times \mathcal{X}} L^{y^*}(a, x, Y, v^*),$$
and we have
$$\inf_{(a,x) \in \mathcal{A} \times \mathcal{X}} \ \sup_{(Y, v^*) \in \mathcal{Y} \times C_V^{+*}} L^{y^*}(a, x, Y, v^*) = \sup_{(Y, v^*) \in \mathcal{Y} \times C_V^{+*}} \ \inf_{(a,x) \in \mathcal{A} \times \mathcal{X}} L^{y^*}(a, x, Y, v^*).$$

Regarding that $\sup_{(Y, v^*) \in \mathcal{Y} \times C_V^{+*}} L^{y^*}(a, x, Y, v^*) = y^*(F(a, x))$ whenever $(a, x) \in S$, we get
$$\begin{aligned}
\sup_{(Y, v^*) \in \mathcal{Y} \times C_V^{+*}} L^{y^*}(a_0, x_0, Y, v^*) &= y^*(F(a_0, x_0)) \le \inf_{(a,x) \in S} y^*(F(a, x)) + y^*(e) \\
&= \inf_{(a,x) \in \mathcal{A} \times \mathcal{X}} \ \sup_{(Y, v^*) \in \mathcal{Y} \times C_V^{+*}} L^{y^*}(a, x, Y, v^*) + y^*(e) \\
&= \sup_{(Y, v^*) \in \mathcal{Y} \times C_V^{+*}} \ \inf_{(a,x) \in \mathcal{A} \times \mathcal{X}} L^{y^*}(a, x, Y, v^*) + y^*(e) \\
&= \inf_{(a,x) \in \mathcal{A} \times \mathcal{X}} L^{y^*}(a, x, Y_0, v_0^*) + y^*(e).
\end{aligned} \tag{3.144}$$

From (3.144) it follows on the one hand that
$$L^{y^*}(a_0, x_0, Y, v^*) - y^*(e) \le L^{y^*}(a_0, x_0, Y_0, v_0^*) \quad \text{for every } (Y, v^*) \in \mathcal{Y} \times C_V^{+*}, \tag{3.145}$$
and on the other hand that
$$L^{y^*}(a_0, x_0, Y_0, v_0^*) \le L^{y^*}(a, x, Y_0, v_0^*) + y^*(e) \quad \text{for every } (a, x) \in \mathcal{A} \times \mathcal{X}. \tag{3.146}$$
Finally, we have to show that for $v_0^* \in C_V^{+*}$ and $y^* \in C^+ \setminus (-C^+)$ there is a mapping $Z_0 \in \mathcal{Z}$ such that $v_0^*(v) = y^*(Z_0(v))$ for all $v \in V$. Let $k \in C$ be a point such that $y^*(k) > 0$. Define the mapping $Z_0 : V \to \mathbb{R}^p$ by
$$Z_0(v) := \frac{v_0^*(v)}{y^*(k)}\, k.$$
Then we have $Z_0 \in L(V, \mathbb{R}^p)$. Since $v_0^*(v) \ge 0$ for all $v \in C_V$, we conclude that $Z_0[C_V] \subseteq C$, i.e., $Z_0 \in \mathcal{Z}$. Furthermore, we get for all $v \in V$,
$$y^*(Z_0(v)) = \frac{v_0^*(v)}{y^*(k)}\, y^*(k) = v_0^*(v).$$
This yields, together with (3.145) and (3.146), the desired assertion. □

Theorem 3.12.13. Let $\tilde C$ be a pointed convex cone in $\mathbb{R}^p$ satisfying $\tilde C \supseteq C$, let $e$ be an element in $C$, and let $(a_0, x_0, Y_0, Z_0)$ be an element in $D$. Then $(a_0, x_0, Y_0, Z_0)$ is a $(\tilde C, e)$-saddle point of $L$ with respect to $D$ if and only if the following conditions are satisfied:


˜ e), (i) L(a0 , x0 , Y0 , Z0 ) ∈ Eff Min ({L(a, x, Y0 , Z0 ) | (a, x) ∈ A × X }, C, (ii) B(x0 ) − b ∈ CV , / f (a0 − A(x0 )) − e − (C˜ \ {0}). (iii) Y0 (a0 − A(x0 )) + Z0 (b − B(x0 )) ∈ Proof. Necessity. Condition (i) follows from (3.135) in the definition of a ˜ e)-saddle point. In order to prove (ii) and (iii), we argue similarly as in the (C, proof of necessity of Theorem 3.12.8. First, we suppose that B(x0 ) − b ∈ / CV . Then we apply the strict separation theorem and conclude that there is a functional μ ∈ CV+∗ such that μ(B(x0 ) − b) < 0. Let k be a point chosen from C \ {0}. By means of μ and k we define the mapping Z : V → Rp by (3.139). ˜ we also see that Obviously, Z belongs to Z. Taking in account that C ⊆ C, (Z − Z0 )(b − B(x0 )) = e + k ∈ e + (C \ {0}) ⊂ e + (C˜ \ {0}). This result implies that L(a0 , x0 , Y0 , Z) = L(a0 , x0 , Y0 , Z0 ) + (Z − Z0 )(b − B(x0 )) ⊂ L(a0 , x0 , Y0 , Z0 ) + e + (C˜ \ {0}), which contradicts ˜ e). (3.147) L(a0 , x0 , Y0 , Z0 ) ∈ Eff Max ({L(a0 , x0 , Y, Z) | (Y, Z) ∈ Y × Z}, C, Therefore, condition (ii) must be satisfied. Next we apply (3.147) again and conclude that L(a0 , x0 , Y, Z) ∈ / L(a0 , x0 , Y0 , Z0 ) + e + (C˜ \ {0})

(3.148)

for every (Y, Z) ∈ Y × Z. By specializing (Y, Z) in (3.148), we obtain (iii). Indeed, from (3.148) it follows that for any mapping Y ∈ Y with the property Y (a0 − A(x0 )) = f (a0 − A(x0 )) and for Z = 0 the relation f (a0 − A(x0 )) ∈ / Y0 (a0 − A(x0 )) + Z0 (b − B(x0 )) + e + (C˜ \ {0}) holds. This means that (iii) is true. ˜ e)-saddle Sufficiency. (i) is equivalent to (3.135) in the definition of a (C, point. We have to prove that (3.136) also holds. To this end we suppose that there is a pair (Y, Z) ∈ Y × Z such that L(a0 , x0 , Y, Z) ∈ L(a0 , x0 , Y0 , Z0 ) + e + (C˜ \ {0}). Then we have Y (a0 − A(x0 )) + Y (b − B(x0 )) ∈ Y0 (a0 − A(x0 )) + Z0 (b − B(x0 )) + e + (C˜ \ {0}), which implies that


$$Y(a_0 - A(x_0)) + Z(b - B(x_0)) + C \subseteq Y_0(a_0 - A(x_0)) + Z_0(b - B(x_0)) + e + C + (\tilde C \setminus \{0\}) \subseteq Y_0(a_0 - A(x_0)) + Z_0(b - B(x_0)) + e + (\tilde C \setminus \{0\}).$$
But on the other hand, from (3.134),
$$f(a_0 - A(x_0)) \in Y(a_0 - A(x_0)) + \mathbb{R}^p_+ \subseteq Y(a_0 - A(x_0)) + C,$$
and from (ii), $Z(B(x_0) - b) \in C$, it follows that
$$f(a_0 - A(x_0)) \in Y(a_0 - A(x_0)) + Z(b - B(x_0)) + Z(B(x_0) - b) + C \subseteq Y(a_0 - A(x_0)) + Z(b - B(x_0)) + C + C \subseteq Y(a_0 - A(x_0)) + Z(b - B(x_0)) + C.$$
Consequently, we have
$$f(a_0 - A(x_0)) \in Y_0(a_0 - A(x_0)) + Z_0(b - B(x_0)) + e + (\tilde C \setminus \{0\}),$$
which contradicts (iii). □

Corollary 3.12.14. Let $\tilde C$ be a pointed convex cone in $\mathbb{R}^p$ satisfying $\tilde C \supseteq C$, let $e$ be an element in $C$, and let $(a_0, x_0, Y_0, Z_0) \in D$ be a $(\tilde C, e)$-saddle point of $L$ with respect to $D$. Then the following properties are true:
(j) $(a_0, x_0) \in S$,
(jj) $Y_0(a_0 - A(x_0)) \notin f(a_0 - A(x_0)) - e - (\tilde C \setminus \{0\})$,
(jjj) $Z_0(b - B(x_0)) \notin -e - (\tilde C \setminus \{0\})$.

Proof. Obviously, (j) results from (ii) in Theorem 3.12.13. In order to prove (jj) and (jjj), we note that (3.148) implies, for every $Y \in \mathcal{Y}$ and every $Z \in \mathcal{Z}$:
$$Y(a_0 - A(x_0)) + Z(b - B(x_0)) \notin Y_0(a_0 - A(x_0)) + Z_0(b - B(x_0)) + e + (\tilde C \setminus \{0\}). \tag{3.149}$$
By setting in (3.149) a $Y \in \mathcal{Y}$ with the property $Y(a_0 - A(x_0)) = f(a_0 - A(x_0))$ and $Z = Z_0$, we obtain (jj), while by setting $Y = Y_0$ and $Z = 0$ in (3.149), we obtain (jjj). □

Remark 3.12.15. Item (jjj) in Corollary 3.12.14 can be interpreted as a condition of approximate complementary slackness for $Z_0$ and $b - B(x_0)$. Namely, we have
$$Z_0(b - B(x_0)) \in -C, \qquad Z_0(b - B(x_0)) \notin -e - (\tilde C \setminus \{0\}).$$


Hence, $Z_0(b - B(x_0))$ is contained in the set $-C \setminus [-e - (\tilde C \setminus \{0\})]$, which is a bounded set if we claim that $\tilde C \supset C$. Putting $e = 0$, this relation implies the well-known condition $Z_0(b - B(x_0)) = 0$.

Theorem 3.12.16. Let $\tilde C$ be a pointed convex cone in $\mathbb{R}^p$ satisfying $\tilde C \supseteq C$, let $e$ be an element in $C$, and let $(a_0, x_0, Y_0, Z_0) \in D$ be a $(\tilde C, e)$-saddle point of $L$ with respect to $D$. Then $(a_0, x_0)$ is a $(\tilde C, \bar e)$-minimal element of $F(S)$, where
$$\bar e := e + f(a_0 - A(x_0)) - Y_0(a_0 - A(x_0)) - Z_0(b - B(x_0))$$
is the approximation error.

Proof. According to property (j) in Corollary 3.12.14, we have $(a_0, x_0) \in S$. We suppose that there is an $(a, x) \in S$ such that
$$F(a, x) \in F(a_0, x_0) - \bar e - (\tilde C \setminus \{0\}).$$
This means that
$$F(a, x) \in L(a_0, x_0, Y_0, Z_0) - e - (\tilde C \setminus \{0\}).$$
Hence we have
$$F(a, x) - C \subseteq L(a_0, x_0, Y_0, Z_0) - e - (\tilde C \setminus \{0\}).$$
But, on the other hand, in view of
$$f(a - A(x)) - Y_0(a - A(x)) - Z_0(b - B(x)) \in \mathbb{R}^p_+ + C \subseteq C,$$
we have
$$L(a, x, Y_0, Z_0) = F(a, x) - [f(a - A(x)) - Y_0(a - A(x)) - Z_0(b - B(x))] \in F(a, x) - C.$$
Consequently,
$$L(a, x, Y_0, Z_0) \in L(a_0, x_0, Y_0, Z_0) - e - (\tilde C \setminus \{0\}),$$
which contradicts condition (i) in Theorem 3.12.13. □

Theorem 3.12.17. We assume the existence of a feasible point $(\bar a, \bar x) \in S$ with $B(\bar x) - b \in \operatorname{int} C_V$. Let $\tilde C$ be a pointed convex cone in $\mathbb{R}^p$ satisfying $\operatorname{int}(\tilde C) \cup \{0\} \supseteq \operatorname{cl} C$, let $e$ be an element in $C$, and let $(a_0, x_0) \in S$ be a $(\tilde C, e)$-minimal element of $F(S)$. Then there exist operators $Y_0 \in \mathcal{Y}$ and $Z_0 \in \mathcal{Z}$ such that $(a_0, x_0, Y_0, Z_0)$ is a $(C, e)$-saddle point of $L$ with respect to $D$.


Proof. Under the given assumptions there exists an element $y^* \in \operatorname{int} C^+$ (cf. Theorem 5.11 in [306]) such that $(a_0, x_0)$ belongs to $\operatorname{Min}(F(S), y^*, e)$. Theorem 3.12.12 implies the existence of a pair $(Y_0, Z_0) \in \mathcal{Y} \times \mathcal{Z}$ such that $(a_0, x_0, Y_0, Z_0)$ is a $(y^*, e)$-saddle point of $L$ with respect to $D$. From the strict $C$-monotonicity of $y^*$, we can conclude that $(a_0, x_0, Y_0, Z_0)$ is also a $(C, e)$-saddle point of $L$ with respect to $D$. □

Remark 3.12.18. Recent results concerning optimality conditions for approximate proper solutions of vector optimization problems are derived by Gutiérrez, Huerga, Novo, and Tammer in [243]; see also the references therein.

4 Generalized Differentiation and Optimality Conditions

We introduce fundamental tools from variational analysis and derive necessary optimality conditions for solutions of not necessarily convex vector as well as set-valued optimization problems using concepts of generalized differentiation and scalarization techniques by nonlinear translation invariant functionals given in (2.42). A detailed discussion of solution concepts in vector- and set-valued optimization is given in the book by Khan, Tammer, Z˘alinescu [329, Sections 2.4 and 2.6].

4.1 Mordukhovich/limiting generalized differentiation Throughout this chapter, we apply standard notations of variational analysis, see the books by Mordukhovich [413, 414], Mordukhovich and Nam [415], and suppose that all the spaces under consideration are Asplund spaces (see Section 2.2.3) unless otherwise requested. As in Definition 2.2.3 introduced, a Banach space is Asplund if every convex continuous functional ϕ : U → R defined on an open convex subset U of X is Fr´echet/regular differentiable on a dense subset of U . We recall the definitions and properties of the Mordukhovich/limiting differential constructions in the framework of Asplund spaces in this section. For details and useful modifications in general Banach spaces, see [291, 292, 413–415] and the bibliographies therein. As usual, for a given Asplund space X, we designate its norm by  ·  and consider the dual space X ∗ equipped with the weak∗ topology w∗ , where x∗ (x) denotes the canonical pairing between X ∗ and X. Definition 4.1.1. Consider x ∈ Ω. Ω is said to be closed around x ∈ Ω if there is a neighborhood U of x such that Ω ∩ cl U is a closed set. Again, we use Greek capitals for set-valued mappings, the lower case letters for single-valued functions, and the Greek lower case letters for extendedreal-valued functionals. We consider a set-valued mapping Γ : X ⇒ Y © Springer Nature Switzerland AG 2023 A. G¨ opfert et al., Variational Methods in Partially Ordered Spaces, CMS/CAIMS Books in Mathematics 7, https://doi.org/10.1007/978-3-031-36534-8 4


between two Asplund spaces. Recall the following notions:
$$\operatorname{dom} \Gamma := \{x \in X \mid \Gamma(x) \neq \emptyset\}, \quad \operatorname{gr} \Gamma := \{(x, y) \in X \times Y \mid y \in \Gamma(x)\}, \quad \Gamma(\Omega) := \bigcup\{\Gamma(x) \mid x \in \Omega\},$$
the domain, the graph, and the image set of $\Gamma$ over $\Omega$, respectively. A single-valued function $f : X \to Y$ can be considered as a special form of the following set-valued mapping $\Gamma$ with
$$\Gamma(x) := \begin{cases} \{f(x)\} & \text{if } x \in \operatorname{dom} f, \\ \emptyset & \text{if } x \notin \operatorname{dom} f. \end{cases}$$
For simplification, we write $\Gamma(x) = f(x)$. Furthermore, the sum of a set-valued mapping $\Gamma$ and a single-valued function $g$ is expressed by $(\Gamma + g)(x) = \Gamma(x) + g(x)$. The generalized composition of a set-valued mapping $\Gamma : X \rightrightarrows Y$ and a functional $\varphi : Y \to \mathbb{R}$ is a set-valued mapping $\varphi \circ \Gamma$ acting from $X$ into $\mathbb{R}$ with image sets $\varphi \circ \Gamma(x) := \bigcup\{\varphi(y) \in \mathbb{R} \mid y \in \Gamma(x)\}$.

In the next definition, we use the sequential Kuratowski–Painlevé upper limit with respect to the norm topology on $X$ and the weak*-topology on $X^*$ of a set-valued map $\Gamma : X \rightrightarrows X^*$ acting between a Banach space $X$ and its dual $X^*$ (compare Section 2.7):
$$\limsup_{x \to x_0} \Gamma(x) := \{x^* \in X^* \mid \exists \text{ sequences } x_k \to x_0 \text{ and } x_k^* \xrightarrow{w^*} x^* \text{ with } x_k^* \in \Gamma(x_k) \text{ for all } k = 1, 2, \ldots\},$$
where $\xrightarrow{w^*}$ denotes convergence in the weak*-topology of $X^*$.

Definition 4.1.2 (Normal cones). Consider a nonempty set $\Omega$ in an Asplund space $X$.
(i) The regular normal cone (also known as prenormal or Fréchet normal cone) to $\Omega$ at $x \in \Omega$ is given by
$$\hat N(x; \Omega) := \Big\{x^* \in X^* \ \Big|\ \limsup_{u \xrightarrow{\Omega} x} \frac{x^*(u - x)}{\|u - x\|} \le 0\Big\}. \tag{4.1}$$
(ii) Suppose that $\Omega$ is closed around $\bar x \in \Omega$. The limiting normal cone (also known as basic or Mordukhovich normal cone) to $\Omega$ at $\bar x$ is given by
$$N(\bar x; \Omega) := \limsup_{x \to \bar x} \hat N(x; \Omega) = \big\{x^* \in X^* \mid \exists\, x_k \to \bar x,\ x_k^* \xrightarrow{w^*} x^* \text{ with } x_k^* \in \hat N(x_k; \Omega)\big\}, \tag{4.2}$$
where $\limsup$ denotes the sequential Kuratowski–Painlevé upper limit of regular normal cones to $\Omega$ at $x$ as $x$ tends to $\bar x$.
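For convex sets, both cones (4.1) and (4.2) reduce to the normal cone of convex analysis, which is easy to probe numerically. The following sketch is our own illustrative assumption (the convex set $\Omega = \mathbb{R}^2_+$ in $\mathbb{R}^2$, tested at the boundary point $x = (0,1)$, where the normal cone is $\{(t, 0) \mid t \le 0\}$); it uses a brute-force sampling test of the defining inequality.

```python
import numpy as np

# For convex Omega: x* in N(x; Omega) iff x*(u - x) <= 0 for all u in Omega.
def in_normal_cone(xstar, x, samples):
    return bool(np.all((samples - x) @ xstar <= 1e-9))

rng = np.random.default_rng(3)
samples = rng.uniform(0, 10, size=(5000, 2))   # sample points of Omega = R^2_+
x = np.array([0.0, 1.0])

print(in_normal_cone(np.array([-1.0, 0.0]), x, samples))  # True
print(in_normal_cone(np.array([0.0, -1.0]), x, samples))  # False
print(in_normal_cone(np.array([0.5, 0.0]), x, samples))   # False
```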


As is known, the Mordukhovich normal cone (4.2) (being nonconvex in general) enjoys a comprehensive calculus in Asplund spaces in comparison with the convex regular normal cone (4.1). If $\Omega$ is convex, both cones (4.2) and (4.1) coincide with the normal cone of convex analysis.

Definition 4.1.3 (Coderivatives of set-valued mappings). Consider a set-valued mapping $\Gamma : X \rightrightarrows Y$ acting between the Asplund spaces $X$ and $Y$. Suppose that $\operatorname{gr} \Gamma$ is closed around $(\bar x, \bar y) \in \operatorname{gr} \Gamma$.
(i) The regular coderivative $\hat D^* \Gamma(\bar x, \bar y) : Y^* \rightrightarrows X^*$ of $\Gamma$ at $(\bar x, \bar y)$ is given via the regular normal cone to $\operatorname{gr} \Gamma$ at $(\bar x, \bar y)$ by
$$\hat D^* \Gamma(\bar x, \bar y)(y^*) := \{x^* \in X^* \mid (x^*, -y^*) \in \hat N((\bar x, \bar y); \operatorname{gr} \Gamma)\}.$$
(ii) The normal Mordukhovich/limiting coderivative $D_N^* \Gamma(\bar x, \bar y) : Y^* \rightrightarrows X^*$ of $\Gamma$ at $(\bar x, \bar y)$ is specified via the limiting normal cone to $\operatorname{gr} \Gamma$ at $(\bar x, \bar y)$ by
$$D_N^* \Gamma(\bar x, \bar y)(y^*) := \{x^* \in X^* \mid (x^*, -y^*) \in N((\bar x, \bar y); \operatorname{gr} \Gamma)\} = \big\{x^* \in X^* \mid \exists\, (x_k, y_k) \xrightarrow{\operatorname{gr} \Gamma} (\bar x, \bar y),\ (x_k^*, y_k^*) \xrightarrow{w^*} (x^*, y^*) \text{ with } (x_k^*, -y_k^*) \in \hat N((x_k, y_k); \operatorname{gr} \Gamma)\big\}.$$
(iii) The mixed Mordukhovich/limiting coderivative $D_M^* \Gamma(\bar x, \bar y) : Y^* \rightrightarrows X^*$ is specified by replacing the weak* convergence $y_k^* \xrightarrow{w^*} y^*$ in (ii) with the norm convergence $y_k^* \xrightarrow{\|\cdot\|} y^*$, i.e.,
$$D_M^* \Gamma(\bar x, \bar y)(y^*) := \big\{x^* \in X^* \mid \exists\, (x_k, y_k) \xrightarrow{\operatorname{gr} \Gamma} (\bar x, \bar y),\ x_k^* \xrightarrow{w^*} x^*,\ y_k^* \xrightarrow{\|\cdot\|} y^* \text{ with } (x_k^*, -y_k^*) \in \hat N((x_k, y_k); \operatorname{gr} \Gamma)\big\}. \tag{4.3}$$
If $\Gamma = f : X \to Y$ is single-valued, we always omit $\bar y = f(\bar x)$ in the notation of the coderivative. Of course, for every $y^* \in Y^*$, it holds that
$$D_M^* \Gamma(\bar x, \bar y)(y^*) \subseteq D_N^* \Gamma(\bar x, \bar y)(y^*). \tag{4.4}$$
Equality holds in (4.4) if $Y$ is finite-dimensional. In the case that $f$ is strictly differentiable at $\bar x$ (which is automatically fulfilled when it is $C^1$ around this point), (4.4) becomes
$$D_N^* f(\bar x)(y^*) = D_M^* f(\bar x)(y^*) = \nabla f(\bar x)^* y^*,$$
where $\nabla f(\bar x)$ is the Fréchet gradient of $f$ at $\bar x$. One essential advantage of the mixed coderivative is the point-based characterization of the Lipschitz-like behavior of set-valued mappings.
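For smooth maps the coderivative is thus just the adjoint Jacobian applied to $y^*$. The following sketch is our own finite-dimensional illustration (the map $f(x_1, x_2) = (x_1 x_2,\ x_1 + x_2^2)$ and the point $\bar x = (1, 2)$ are hypothetical choices); it approximates $\nabla f(\bar x)$ by finite differences and evaluates $\nabla f(\bar x)^* y^*$.

```python
import numpy as np

# For strictly differentiable f: D*f(x)(y*) = {∇f(x)^T y*}.
def f(x):
    return np.array([x[0] * x[1], x[0] + x[1] ** 2])

def jacobian(f, x, h=1e-6):
    # forward-difference Jacobian, adequate for this illustration
    n, m = len(x), len(f(x))
    J = np.zeros((m, n))
    for j in range(n):
        e = np.zeros(n); e[j] = h
        J[:, j] = (f(x + e) - f(x)) / h
    return J

xbar = np.array([1.0, 2.0])
ystar = np.array([1.0, -1.0])
J = jacobian(f, xbar)
print(J.T @ ystar)   # the single element of D*f(xbar)(y*), about (1, -3)
```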


Definition 4.1.4 (Lipschitz-like behavior). Consider a set-valued mapping $\Gamma : X \rightrightarrows Y$ between Asplund spaces. $\Gamma$ is called Lipschitz-like (also known as: $\Gamma$ enjoys Aubin's "pseudo-Lipschitzian" property) around a point $(\bar x, \bar y) \in \operatorname{gr} \Gamma$ if there are neighborhoods $U$ of $\bar x$ and $V$ of $\bar y$ and a nonnegative number $\ell \ge 0$ such that
$$\forall\, x, u \in U : \ \Gamma(x) \cap V \subseteq \Gamma(u) + \ell \|x - u\| B_{\|\cdot\|}.$$
It is well known that this robust Lipschitzian property of $\Gamma$ is equivalent to both the metric regularity and the linear openness properties of the inverse multifunction. If there are neighborhoods $U$ of $\bar x$ and $V$ of $\bar y$ such that
$$\Gamma(x) \cap V + \kappa r U_Y \subseteq \Gamma(x + r U_X) \quad \text{whenever } x + r U_X \subseteq U,\ r > 0,$$
then $\Gamma$ has the local covering property around $(\bar x, \bar y)$ with modulus $\kappa \ge 0$.

The validity of calculus rules and characterizations for objects of generalized differentiation demands specific additional "sequential normal compactness" properties of sets and mappings in infinite-dimensional spaces. These properties are automatically fulfilled in finite dimensions, whereas they are a crucial ingredient of variational analysis in infinite dimensions.

Definition 4.1.5 (SNC and PSNC properties). Consider a set $\Omega \subseteq X \times Y$ in the product of Asplund spaces, $\Omega = \operatorname{gr} \Gamma$, where $\Gamma : X \rightrightarrows Y$ is a set-valued mapping. Suppose that $\Omega$ is closed around $\bar v = (\bar x, \bar y) \in \Omega$.
(i) $\Omega$ is sequentially normally compact (SNC) at $\bar v = (\bar x, \bar y)$ if for any sequences $\{(v_k, x_k^*, y_k^*)\}$ with
$$v_k \xrightarrow{\Omega} \bar v, \quad (x_k^*, y_k^*) \in \hat N(v_k; \Omega) \quad (k \in \mathbb{N}), \tag{4.5}$$
it holds that
$$(x_k^*, y_k^*) \xrightarrow{w^*} 0 \ \Longrightarrow\ (x_k^*, y_k^*) \xrightarrow{\|\cdot\|} 0.$$
The product structure of the space in consideration plays no role in this property (one can set $Y = \{0\}$ without loss of generality), in contrast to its following partial modifications.
(ii) $\Omega$ is partially sequentially normally compact (PSNC) with respect to $X$ at $\bar v$ if for any sequences $(v_k, x_k^*, y_k^*)$ satisfying (4.5) it holds that
$$\big(x_k^* \xrightarrow{w^*} 0 \ \wedge\ y_k^* \xrightarrow{\|\cdot\|} 0\big) \ \Longrightarrow\ x_k^* \xrightarrow{\|\cdot\|} 0.$$
(iii) $\Gamma$ is SNC at $(\bar x, \bar y)$ if $\operatorname{gr} \Gamma$ is SNC at $(\bar x, \bar y)$.
(iv) $\Gamma$ is PSNC at $(\bar x, \bar y)$ if $\operatorname{gr} \Gamma$ is PSNC at $(\bar x, \bar y)$ with respect to $X$.

A condition for the SNC property for sets, established in [413, Theorem 1.21], is provided in the next lemma.


Lemma 4.1.6 (Conditions for the SNC property). A subset $\Omega$ of an Asplund space $X$ is SNC at $\bar x \in \Omega$ only if
$$\operatorname{codim} \operatorname{aff}(\Omega \cap U) < +\infty$$
for any neighborhood $U$ of $\bar x$, where $\operatorname{aff}(A)$ is the closed affine hull of $A$ and $\operatorname{codim} \operatorname{aff}(A)$ is defined as the dimension of the quotient space $X / (\operatorname{aff}(A) - x)$ with $x \in \operatorname{aff}(A)$. In particular, a singleton in $X$ is SNC if and only if $X$ is finite-dimensional. Furthermore, if $\Omega$ is convex and $\operatorname{ri}(\Omega) \neq \emptyset$, the sequential normal compactness of $\Omega$ at every $x \in \Omega$ is equivalent to the finite codimension condition $\operatorname{codim} \operatorname{aff}(\Omega \cap U) < +\infty$.

We now suppose in addition that the space $Y$ is equipped with a domination set $\Theta$ of $Y$. The epigraph of $\Gamma : X \rightrightarrows Y$ with respect to $\Theta$ is given by
$$\operatorname{epi} \Gamma := \{(x, y) \in X \times Y \mid y \in \Gamma(x) + \Theta\}.$$
For simplicity, we omit $\Theta$ in the notation of the epigraph. The set-valued mapping $E_\Gamma : X \rightrightarrows Y$ defined by
$$E_\Gamma(x) := \Gamma(x) + \Theta \tag{4.6}$$
is called the epigraphical multifunction associated with $\Gamma$ (and $\Theta$), due to the fact that $\operatorname{gr} E_\Gamma = \operatorname{epi} \Gamma$. Now we define subdifferential constructions for $\Gamma$ by applying coderivatives of set-valued mappings to epigraphical multifunctions.

Definition 4.1.7 (Subdifferentials of set-valued mappings). Consider a set-valued mapping $\Gamma : X \rightrightarrows Y$ and a domination set $\Theta$ of $Y$. Suppose that $\operatorname{epi} \Gamma$ is closed around $(\bar x, \bar y) \in \operatorname{epi} \Gamma$.
(i) The regular subdifferential $\hat\partial \Gamma(\bar x, \bar y) : Y^* \rightrightarrows X^*$ of $\Gamma$ at $(\bar x, \bar y)$ is given by
$$\hat\partial \Gamma(\bar x, \bar y)(y^*) := \hat D^* E_\Gamma(\bar x, \bar y)(y^*).$$
(ii) The basic subdifferential $\partial \Gamma(\bar x, \bar y) : Y^* \rightrightarrows X^*$ of $\Gamma$ at $(\bar x, \bar y)$ is given by
$$\partial \Gamma(\bar x, \bar y)(y^*) := D_N^* E_\Gamma(\bar x, \bar y)(y^*). \tag{4.7}$$
(iii) The singular subdifferential $\partial^\infty \Gamma(\bar x, \bar y)$ of $\Gamma$ at $(\bar x, \bar y)$ is defined by
$$\partial^\infty \Gamma(\bar x, \bar y) := D_M^* E_\Gamma(\bar x, \bar y)(0). \tag{4.8}$$
If $\Gamma = f : X \to Y$ is single-valued, we drop $\bar y = f(\bar x)$ in the subdifferential notation (4.7) and (4.8). Furthermore, if $\Gamma = \varphi : X \to \mathbb{R} \cup \{\pm\infty\}$ is an extended-real-valued functional that is finite at $\bar x$ and $\Theta = \mathbb{R}_+$ is the natural ordering cone on $\mathbb{R}$,


the epigraphical multifunction (4.6) reduces to the known mapping in scalar optimization, $E_\varphi(x) = \{\alpha \in \mathbb{R} \mid \alpha \ge \varphi(x)\}$, and the subdifferential (4.7) with $y^* = 1$ and $\bar y = \varphi(\bar x)$, as well as the singular subdifferential (4.8), collapse to the well-known prototypes
$$\partial \varphi(\bar x) := \{x^* \in X^* \mid (x^*, -1) \in N((\bar x, \varphi(\bar x)); \operatorname{epi} \varphi)\}$$
and
$$\partial^\infty \varphi(\bar x) := \{x^* \in X^* \mid (x^*, 0) \in N((\bar x, \varphi(\bar x)); \operatorname{epi} \varphi)\}.$$

Definition 4.1.8 (Sequential normal epi-compactness of functionals). Suppose that $\varphi : X \to \mathbb{R} \cup \{\pm\infty\}$ is finite at $\bar x$. $\varphi$ is called sequentially normally epi-compact (SNEC) at $\bar x$ if its epigraph is SNC at $(\bar x, \varphi(\bar x))$.

Moreover, we recall the point-based characterizations of the Lipschitz-like property from [413, Theorem 4.10].

Lemma 4.1.9 (Point-based characterizations of the Lipschitz-like property). Consider a set-valued mapping $\Gamma : X \rightrightarrows Y$ between Asplund spaces. Suppose that $\operatorname{gr} \Gamma$ is closed around $(\bar x, \bar y) \in \operatorname{gr} \Gamma$. Then the following properties are equivalent:
(a) $\Gamma$ is Lipschitz-like around $(\bar x, \bar y)$.
(b) $\Gamma$ is PSNC at $(\bar x, \bar y)$ and $D_M^* \Gamma(\bar x, \bar y)(0) = \{0\}$.
Consequently, a functional $\varphi : X \to \mathbb{R} \cup \{\pm\infty\}$ is locally Lipschitz at $\bar x$ if and only if $\partial^\infty \varphi(\bar x) = \{0\}$.

4.2 General Concept of Set Extremality

When dealing with (nonconvex) optimization and related problems, an important method is to formulate optimality notions as a kind of extremal behavior of a collection of certain sets Ωi in Banach spaces, mostly in Asplund spaces. Such sets can be related, for example, to the graph or epigraph of the objective function, or to the constraints or other parameters of a problem. Kruger and Mordukhovich [355] introduced such a concept, defining extremal points with respect to a finite collection of sets Ωi, and proved a first extremal principle. Many extensions have followed since, compare, for instance, [100, 413–417], also with respect to infinite systems of sets Ωi. Extremal principles and their extensions are theorems (mostly in Asplund spaces and their duals) which give conditions for generalized normals at special points of the sets Ωi or, in other words ([100]), give a dual characterization of local extremality in the form of generalized separation. These principles generalize convex separation to nonconvex cases (compare (4.19)). For nonconvex separation, see also Theorem 2.4.19.


4.2.1 Introduction to Set Extremality

In order to formulate conventional extremal principles, one first defines local extremal points and extremal systems ([413, Definition 2.1]). Unless otherwise stated, the involved sets are closed around the point in consideration.

Definition 4.2.1. Consider nonempty subsets Ω1, ..., Ωn (n ≥ 2) of a Banach space (or a linear topological space) X and let x̄ be a common point of these sets, that is, x̄ ∈ ⋂_{i=1}^{n} Ωi. The point x̄ is called a local extremal point of the set system {Ω1, ..., Ωn} if there are sequences {aik} ⊂ X, i = 1, ..., n, and a neighborhood U of x̄ such that aik → 0 as k → ∞ and, for all large k ∈ N,

⋂_{i=1}^{n} (Ωi − aik) ∩ U = ∅.    (4.9)

In this case, {Ω1, ..., Ωn, x̄} is called an extremal system in X, or simply locally extremal at x̄. If U = X, then x̄ is called an extremal point.

For examples of extremal points and extremal systems, see Example 4.2.16 as well as the applications to the scalar optimization problem (P) and to the set-valued optimization problem (≤C-SP) below. For extended versions of extremal principles, including the case ⋂_{i=1}^{n} Ωi = ∅, compare Remark 4.2.13. As mentioned, extremal principles mostly contain relations involving sets or cones of elements in X∗. Such cones are defined in Section 4.1 (the Fréchet normal cone N̂(x̄; Ω) (see (4.1)) and the limiting normal cone N(x̄; Ω) (see Definition 4.1.2 (ii))). For the set of ε-normals N̂ε(x̄; Ω) (compare [414, Section 1.1]), see the following definition; it is used in the formulation of the ε-extremal principle in Definition 4.2.3. In this chapter, the definitions of normals are given in the Asplund space setting, but they are the same in general Banach spaces (compare Remark 4.2.12 and Theorem 4.2.6).

Definition 4.2.2 (Set of ε-normals). Let Ω ⊂ X be a nonempty subset of a Banach space X. Consider x̄ ∈ Ω and ε ≥ 0. Then,

N̂ε(x̄; Ω) := { x∗ ∈ X∗ | limsup_{u →Ω x̄} ⟨x∗, u − x̄⟩ / ‖u − x̄‖ ≤ ε }    (4.10)

defines the set of ε-normals to Ω at x̄, where u →Ω x̄ means u → x̄ with u ∈ Ω.

To come to conventional extremal principles, we recall that Mordukhovich [413, Definition 2.5] introduced three basic versions of extremal principles in Banach spaces X (see the following definition) and explained that they can be considered as a kind of local separation of (nonconvex) sets.

Definition 4.2.3 (Formulation of the three basic extremal principles). Consider an extremal system {Ω1, ..., Ωn, x̄} in X. Then,


(a) {Ω1, ..., Ωn, x̄} fulfills the ε-extremal principle if for every ε > 0 there are xj ∈ Ωj ∩ (x̄ + εBX) and x∗j ∈ X∗ such that

x∗j ∈ N̂ε(xj; Ωj)   (j = 1, ..., n),    (4.11)
x∗1 + · · · + x∗n = 0,   ‖x∗1‖ + · · · + ‖x∗n‖ = 1.    (4.12)

(b) {Ω1, ..., Ωn, x̄} fulfills the approximate extremal principle if for every ε > 0 there are xj ∈ Ωj ∩ (x̄ + εBX) and x∗j ∈ X∗ with

x∗j ∈ N̂(xj; Ωj) + εBX∗   (j = 1, ..., n)    (4.13)

such that (4.12) holds.
(c) {Ω1, ..., Ωn, x̄} fulfills the exact extremal principle if there are basic normals

x∗j ∈ N(x̄; Ωj)   (j = 1, ..., n)    (4.14)

such that (4.12) holds.

An ε-extremal, approximate extremal, or exact extremal principle holds in the space X if it holds for every extremal system {Ω1, ..., Ωn, x̄} in X, where all the sets Ωj (j = 1, ..., n) are closed or closed around x̄.

Remark 4.2.4. Taking into account N̂(x̄; Ω) + εBX∗ ⊂ N̂ε(x̄; Ω), it is clear that the ε-extremal principle follows from the approximate extremal principle for any extremal system {Ω1, ..., Ωn, x̄} in a Banach space X.

A few extremal principles can be found in this section, with reference made to the proofs. Applications to the scalar optimization problem (P) and to the set-valued optimization problem (≤C-SP) are presented below, as well as extensions in Theorem 4.2.23. Here is an example of an ε-extremal principle ([355], [418], [584, Theorem 1]):

Theorem 4.2.5. Let x̄ be an extremal point of closed sets Ω1, Ω2, ..., Ωn in an Asplund space X. Then, for every ε > 0 there exist xi ∈ Ωi and x∗i ∈ N̂ε(xi; Ωi) such that

‖xi − x̄‖ < ε,   Σ_{i=1}^{n} ‖x∗i‖ = 1,   Σ_{i=1}^{n} x∗i = 0.    (4.15)

Sometimes applications require the approximate extremal principle for two sets Ω1, Ω2; see, for instance, the application to the set-valued optimization problem (≤C-SP) below. Thereby the conditions in (4.15) are often modified. An important version of such an extremal principle is Theorem 4.2.11.


4.2.2 Characterizations of Asplund Spaces

The assertion in the following theorem (shown in [413, Theorem 2.20]) explains that there are equivalences between Asplund spaces and the basic extremal principles (a) and (b) in Definition 4.2.3.

Theorem 4.2.6 (Extremal characterization of Asplund spaces). Suppose that X is a Banach space. Then, the following assertions are equivalent:
(a) X is Asplund;
(b) the approximate extremal principle holds in X;
(c) the ε-extremal principle holds in X.

So, a Banach space X is Asplund if and only if the approximate extremal principle or the ε-extremal principle holds in X. There are different versions of characterizations of Asplund spaces. The characterization that a Banach space X is an Asplund space if and only if every separable subspace has a separable dual is interesting for numerical purposes, but not only there: recall (compare [188]) the method of separable reduction used to prove extremal principles, and the phenomenon of separable determinacy, which, as an example, leads to a characterization of Asplund spaces via lower semicontinuous functions: a Banach space is Asplund if and only if every lower semicontinuous function is densely Fréchet ε-subdifferentiable for every ε > 0. For ε = 0 and the trustworthiness of the subdifferential on an Asplund space, see [188] and the references therein. There is another connection to separable reduction, which concerns the Mordukhovich normal cone in Asplund spaces ([414, p. 59]). This cone can be defined in two ways, namely with the Fréchet normal cone N̂(x; Ω) but also with the set N̂ε(x; Ω) of ε-normals:

N(x̄; Ω) = Limsup_{x→x̄} N̂(x; Ω) = Limsup_{x→x̄, ε↓0} N̂ε(x; Ω),    (4.16)

and these definitions coincide in Asplund spaces ([418]). The proof of passing from the second to the first definition in (4.16) is nontrivial; it uses in particular the Borwein–Preiss variational principle and the method of separable reduction ([414, p. 59]). There are further versions of extremal principles and characterizations of Asplund spaces ([413, pages 207, 209], [465, page 337], [475, Section 5]) and applications of Asplund spaces ([100], [101], [188], [329], [414]).

4.2.3 Properties and Applications of Extremal Systems

In the following, we take a look at the general concept of set extremality (Bui and Kruger [100, 101], Mordukhovich [411–414], Kruger and Mordukhovich [355], Kruger [354], Khan, Tammer and Zălinescu [329]), including relations to separation properties of nonconvex sets and optimality conditions in constrained scalar, vector-valued, and set-valued optimization. Local extremal points and extremal systems were explained in Definition 4.2.1. In particular, for two sets the following holds:


Remark 4.2.7. If we consider two closed sets Ω1 and Ω2 in a Banach space X, we say that a point x̄ ∈ Ω1 ∩ Ω2 is locally extremal for the set system {Ω1, Ω2} if there is a neighborhood U of x̄ such that for any ε > 0 we can find an element a ∈ εBX with

Ω1 ∩ (Ω2 + a) ∩ U = ∅.    (4.17)

The condition Ω1 ∩ Ω2 = {x̄} does not necessarily imply that x̄ is a local extremal point of {Ω1, Ω2}.

The formulas in (4.12), (4.13), and (4.14) suggest that extremal principles can produce (local) optimality conditions in the nonconvex case, similarly to separation theorems in the convex case (see Remark 4.2.8). Example 4.2.16, the applications to the scalar optimization problem (P) and to the set-valued optimization problem (≤C-SP), as well as Theorem 4.2.23 show how extremal principles work in order to derive first-order necessary optimality conditions for fully localized nondominated elements of (set-valued) optimization problems.

Remark 4.2.8. Furthermore, the extremal principles are related to local separation of nonconvex sets. In fact, if we consider the exact extremal principle for two sets Ω1 and Ω2, then (4.12) and (4.14) collapse to the assertion that there exists an element x∗ ∈ X∗ with

0 ≠ x∗ ∈ N(x̄; Ω1) ∩ (−N(x̄; Ω2)).    (4.18)

We also obtain such a condition by employing a separation theorem for nonconvex sets (Theorem 2.4.19), where the nonlinear translation invariant functional is given by (2.42). Applying Theorem 2.4.19 to an extremal system {Ω1, Ω2, x̄} and taking into account the properties of the subgradients of the functional (2.42), we derive a condition corresponding to (4.18) under certain restrictive assumptions (see [532, Theorem 12.6.6]).

Remark 4.2.9. In case Ω1 and Ω2 are convex, (4.18) is equivalent to

∀ x1 ∈ Ω1, ∀ x2 ∈ Ω2 :   ⟨x∗, x1⟩ ≤ ⟨x∗, x2⟩,    (4.19)

which is nothing other than the classical separation property for two convex sets.

Remark 4.2.10. Taking into account Remark 4.2.4, we get from Theorem 4.2.6 that the ε-extremal principle holds in an Asplund space X. An application of this remark for the case of two subsets Ω1 and Ω2 of an Asplund space X can be found in Sections 12.9 and 12.11 in [329].
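As a toy illustration of (4.18) and (4.19) (our own construction, not part of the text), take Ω1 = epi(x²) and Ω2 = hypo(−x²) in R², which touch at x̄ = (0, 0); the functional x∗ = (0, −1) is a common normal in the sense of (4.18) and separates the two sets in the sense of (4.19). The sampling-based Python check below is a heuristic sketch under these assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
xstar = np.array([0.0, -1.0])  # candidate with 0 != xstar in N(xbar; O1) ∩ (−N(xbar; O2))

x = rng.uniform(-2, 2, size=1000)
omega1 = np.stack([x, x**2 + rng.uniform(0, 3, size=1000)], axis=1)   # samples with y >= x^2
omega2 = np.stack([x, -x**2 - rng.uniform(0, 3, size=1000)], axis=1)  # samples with y <= -x^2
print((omega1 @ xstar).max() <= (omega2 @ xstar).min())  # True: <xstar,x1> <= <xstar,x2>
```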

In the case of a system {Ω1, Ω2}, a locally extremal point x̄ ∈ Ω1 ∩ Ω2 is given by (4.17). Then, we obtain the following assertion (see [41, Section 2]):

Theorem 4.2.11 (An approximate extremal principle). Let X be an Asplund space and x̄ be a local extremal point of the set system {Ω1, Ω2}, where both Ω1 and Ω2 are closed around x̄. Then for every ε > 0 there are

xj ∈ Ωj ∩ (x̄ + εBX)   and   x∗j ∈ N̂(xj; Ωj),   j = 1, 2,

satisfying the relations

1 − ε ≤ ‖x∗1‖ + ‖x∗2‖ ≤ 1 + ε   and   ‖x∗1 + x∗2‖ ≤ ε.

As explained in Remark 4.2.8, the extremal principle can be considered as a nonconvex variational counterpart of the classical separation principle for convex sets, similar to the separation theorems for nonconvex sets (see Theorem 2.4.19). It plays in fact a fundamental role in variational analysis, similar to that played by convex separation and the Bishop–Phelps theorem under convexity assumptions; see Subsection 4.2.4 and the books by Mordukhovich [413], [414] for more details and numerous applications.

Remark 4.2.12. As explained in [413, Chapter 2.5], the considered extremal problems and related results characterize Asplund spaces. To cover classes of Banach spaces other than Asplund spaces in the formulation of extremal principles, one needs to use different constructions of generalized normals. Compare also page 227 in [100] for historical comments and with respect to the non-extension of results beyond Asplund spaces. In order to derive necessary optimality conditions for nondominated elements of (set-valued) optimization problems, we need certain additional assumptions concerning compactness properties of sets and mappings, which are automatically fulfilled in finite-dimensional spaces but are indispensable in infinite-dimensional spaces due to the natural lack of compactness there. These properties are used in the framework of Asplund spaces, and so the given definitions (see Section 4.1) are specialized to this setting; compare [413] or [100, pp. 218, 224] for appropriate modifications in general Banach spaces. For examples of necessary optimality conditions, see Subsection 4.2.4.

Remark 4.2.13. Having extremal principles in mind, an important condition is that the intersection of the sets Ai := Ωi of the extremal system is nonempty. Interestingly, there are theorems quite similar to the (conventional) extremal principles treated above (also with respect to the main points of the proof, for instance Ekeland's variational principle) but without a common point of the considered sets Ωi. In the following, we give a short impression of this topic.

Clearly, Theorem 4.2.5 cannot be applied if ⋂ Ai = ∅. Defining a (p-weighted, 1 ≤ p ≤ ∞) non-intersect index γp of finitely many closed subsets A1, ..., An of a Banach space X by

γp(A1, ..., An) := inf { ( Σ_{i=1}^{n−1} ‖xi − xn‖^p )^{1/p} : xi ∈ Ai, i = 1, ..., n },    (4.20)

where for p = +∞ the norm in (4.20) is understood as the maximum norm, and noting that γ2(A1, A2) is the distance between A1 and A2, one finds ([584, Theorem 1.1]) the following exact separation result:

Theorem 4.2.14. Let X be an Asplund space and A, A1, ..., An be nonempty closed (not necessarily convex) subsets of X such that A is compact and A ∩ ⋂_{i=1}^{n} Ai = ∅. Let 1 ≤ p, q ≤ +∞ be such that 1/p + 1/q = 1. Then, for any ε ∈ (0, +∞) and ρ ∈ (0, 1) there exist a ∈ A, ai ∈ Ai and a∗i ∈ X∗ (i = 1, ..., n) such that the following statements hold:
(i) ( Σ_{i=1}^{n} ‖ai − a‖^p )^{1/p} < γp(A1, ..., An, A) + ε;
(ii) a∗i ∈ N̂(ai; Ai) (i = 1, ..., n), −Σ_{i=1}^{n} a∗i ∈ N̂(a; A), ( Σ_{i=1}^{n} ‖a∗i‖^q )^{1/q} = 1;
(iii) ρ ( Σ_{i=1}^{n} ‖ai − a‖^p )^{1/p} < Σ_{i=1}^{n} ⟨a∗i, a − ai⟩.

In the paper [584] it is shown that the compactness of A cannot be relaxed to boundedness, even when A, A1, ..., An are convex, and that the theorem leads to new optimality conditions in terms of the Fréchet subdifferential and the Fréchet normal cone.
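For finite point sets in R^n, the infimum in (4.20) can be computed by brute force. The following Python sketch (our own illustration; the function name gamma_p and the example sets are ours) confirms in particular that γ2 of two sets is their distance.

```python
import itertools
import numpy as np

def gamma_p(sets, p):
    # Brute-force infimum in (4.20) over all tuples (x_1, ..., x_n), x_i in A_i.
    best = np.inf
    for combo in itertools.product(*sets):
        x = np.asarray(combo, dtype=float)
        diffs = np.linalg.norm(x[:-1] - x[-1], axis=1)   # ||x_i - x_n||, i < n
        val = diffs.max() if np.isinf(p) else (diffs ** p).sum() ** (1.0 / p)
        best = min(best, val)
    return best

A1 = [(0.0, 0.0), (1.0, 0.0)]
A2 = [(3.0, 0.0), (4.0, 4.0)]
print(gamma_p([A1, A2], p=2))  # 2.0, the distance between A1 and A2
```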

We considered empty and nonempty intersections of the Ai in extremal systems, but these conditions are not so far apart:

(j) On the one hand, if {A1, ..., An, a} with a ∈ ⋂ Ai is a local extremal system, then, with the sequences {aik} → 0 (converging to zero for k → +∞), it holds that

⋂_{i=1}^{n} (Ai − aik) = ∅ for large k.    (4.21)

(jj) On the other hand, if for nonempty sets A1, A2 we have A1 ∩ A2 = ∅, then with a ∈ A1 and b ∈ A2 we get 0 ∈ (A1 − a) ∩ (A2 − b).

Both of these observations are useful in connection with extremal principles. Indeed, considering (4.21) together with p = 1, the calculation of γ1 gives

0 ≤ γ1(A1 − a1k, ..., An − ank) ≤ Σ_{i=1}^{n−1} ‖(a − aik) − (a − ank)‖ → 0.    (4.22)

Applying this result to Theorem 4.2.14, one gets a stronger version of Theorem 4.2.5. And now to (jj): looking at the paper [100] by Bui and Kruger, especially Section 4.2, the trick in (jj) is used to define a corresponding notion of extremality (compare [100, Definition 1(i) and Remark 5]) and extended extremal principles.


4.2.4 Extremality and Optimality

In this subsection, we give some examples representing optimality by extremal systems. In convex optimization and convex analysis, separation theorems (Hahn–Banach theorem) are often used to obtain optimality statements. Interestingly, in the case dim X < ∞ and Ωi, i = 1, ..., n ≥ 2, being an extremal system with U = X, it holds that (see [413, p. 174]) "extremality and separation of convex sets are unconditionally equivalent". So, new optimization results in finite-dimensional Banach spaces X based on extremal principles refer primarily to nonconvex optimization.

Remark 4.2.15. Recall that the conventional separation property for n sets Ωi ⊂ X, i = 1, ..., n ≥ 2, possibly nonconvex, means ([413, p. 173]) that there are vectors x∗i ∈ X∗, not all equal to zero, and numbers αi such that

⟨x∗i, x⟩ ≤ αi for all x ∈ Ωi, i = 1, ..., n,
x∗1 + · · · + x∗n = 0,   α1 + · · · + αn ≤ 0.

If the mentioned sets have a common point, then the last inequality must hold as an equality.

Example 4.2.16. As an example of an extremal point, we recall the definition of (Pareto) efficient points (see Section 3.7). Let Y be a Banach space, C ⊂ Y a proper, pointed, closed, convex cone (defining a partial order ≤C on Y), and M a nonempty set in Y. A point ȳ ∈ M is a (Pareto) efficient point of M w.r.t. C if (compare (4.34)) M ∩ (ȳ − C) = {ȳ}. Now, we can identify ȳ as an extremal point of an extremal system with two sets Ω1, Ω2, taking X = Y and a sequence {a2k} ⊂ C \ {0} converging to 0 ∈ Y. Since −a2k ∈ −C and ȳ is (Pareto) efficient, it follows that

M ∩ (ȳ − C) = {ȳ}   =⇒   M ∩ (ȳ − C − a2k) = ∅.    (4.23)

That means, if Ω1 = M and Ω2 = ȳ − C, it holds for all (large) k ∈ N that

Ω1 ∩ (Ω2 − a2k) = ∅,    (4.24)

and therefore ȳ is an extremal point of the set system {Ω1, Ω2}. Roughly speaking, the two sets Ω1 and Ω2 are locally separated (or, in Mordukhovich's words, "locally pushed apart by a small perturbation (translation)") by means of the elements a2k. The point ȳ can also be seen as an efficient point of a vector optimization problem with performance space Y (compare the application to the set-valued optimization problem (≤C-SP) below).
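The mechanism of Example 4.2.16 can be checked numerically in a toy setting (our own illustration, not from the book): Y = R², C = R²₊, and a finite set M in which ȳ = (0, 0) is Pareto efficient. Translating Ω2 = ȳ − C by a2k ∈ C \ {0} empties the intersection, as in (4.24).

```python
import numpy as np

M = np.array([[0.0, 0.0], [1.0, 2.0], [2.0, 1.0]])   # ybar = (0,0) is Pareto efficient
ybar = np.zeros(2)

def meets_translated_set(points, a):
    # Is M ∩ (ybar − C − a) nonempty for C = R^2_+, i.e. any point with all coords <= -a?
    return bool(np.any(np.all(points <= ybar - a, axis=1)))

for k in (1, 10, 100):
    a2k = np.array([1.0, 1.0]) / k          # a_2k in C \ {0}, a_2k -> 0
    print(k, meets_translated_set(M, a2k))  # always False: (4.24) holds
```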

Application to Scalar Constrained Optimization Problems

Direct applications of an extremal principle (see Definition 4.2.3) to obtain necessary optimality conditions are given in [414, Theorems 6.5 and 6.10]. Let Ω ⊆ Rn be a nonempty set of feasible points, locally closed around the point in question (see Definition 4.1.1), let ϕi : Ω → R, i = 0, ..., m + r, be functionals which are lower semicontinuous around a reference point, and let x̄ ∈ Ω be a local nondominated point (i.e., a local minimizer) of the following optimization problem (see [414, Section 6.1.2]):

ϕ0(x) → min subject to
ϕi(x) ≤ 0, i = 1, ..., m;    (P)
ϕi(x) = 0, i = m + 1, ..., m + r;
x ∈ Ω ⊆ Rn.

It is no loss of generality to assume ϕ0(x̄) = 0. To use an extremal principle means to capture the minimality character of (x̄, 0) or, in other words, to realize (4.9) or [414, (2.1) in Definition 2.1] with suitable variations (or perturbations). Thus we consider the point (x̄, 0) ∈ Ω × R and a neighborhood U ⊂ Rn × R of it and show that this point is locally extremal w.r.t. the following system of sets Ω0, ..., Ωm+r+1, each locally closed around (x̄, 0). One finds these sets by considering the objective function and the functions in the inequality constraints via their epigraphs, and the functions in the equality constraints via their graphs (see [414, Proof of Theorem 6.5]):

Ωi := {(x, α0, ..., αm+r) | αi ≥ ϕi(x)}, i = 0, ..., m,    (4.25)
Ωi := {(x, α0, ..., αm+r) | αi = ϕi(x)}, i = m + 1, ..., m + r,    (4.26)
Ωm+r+1 := Ω × {0}.    (4.27)

Ωm+r+1 is locally closed as assumed. The lower semicontinuity of ϕi gives the closedness of Ωi for i = 0, ..., m, and an additional requirement of continuity of ϕi around x̄ ensures the closedness of the remaining sets Ωi, i = m + 1, ..., m + r. Of course, (x̄, 0) is a common point of the sets Ωi, i = 0, ..., m + r + 1. Next one proves that (x̄, 0) is a locally extremal point of the sets Ωi, i = 0, ..., m + r + 1, in the space Rn × R^{m+r+1}. Indeed, having (4.9) in mind (emptiness of the intersection of U with the perturbed sets Ωi), one chooses for all i = 0, ..., m + r the shifts αi − 1/k, k = 1, 2, ...; then the emptiness of the intersection is fulfilled for all k large enough (it is necessary to remain in U). Now the exact extremal principle (see Definition 4.2.3 and [414, Theorem 2.3]) directly delivers the necessary conditions (4.12), (4.14). These conditions, first applied to the extremal system in this application, can then be transformed (if additionally all the ϕi are assumed to be Lipschitz continuous around x̄) into Lagrangian necessary optimality conditions (Fritz John form).


The conditions (4.12), (4.14) take the following form for the scalar optimization problem (P) (compare Theorem 6.5 in [414]): Let x̄ be a local minimizer of (P). Under the given assumptions there are vectors (vi, λi) ∈ R^{n+1}, i = 0, ..., m + r, not all equal to zero, and a vector v ∈ Rn such that (compare [414, Prop. 1.17]) λi ≥ 0 for i = 0, ..., m and

(v0, −λ0) ∈ N((x̄, ϕ0(x̄)); epi ϕ0),   v ∈ N(x̄; Ω),    (4.28)
(vi, −λi) ∈ N((x̄, 0); epi ϕi), i = 1, ..., m,    (4.29)
(vi, −λi) ∈ N((x̄, 0); gr ϕi), i = m + 1, ..., m + r,    (4.30)
v + Σ_{i=0}^{m+r} vi = 0.    (4.31)

For the translation into the mentioned Lagrange form of necessary optimality conditions, one uses [414, Theorem 1.22] (subdifferentials of locally Lipschitzian functions) and gets (for ϕi Lipschitz continuous around x̄ for all i = 0, ..., m + r): There are multipliers (λ0, ..., λm+r) ≠ 0 such that

0 ∈ Σ_{i=0}^{m} λi ∂ϕi(x̄) + Σ_{i=m+1}^{m+r} λi [∂ϕi(x̄) ∪ ∂(−ϕi)(x̄)] + N(x̄; Ω),    (4.32)

λi ≥ 0, i = 0, ..., m + r, and λi ϕi(x̄) = 0, i = 1, ..., m.    (4.33)
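For a smooth toy instance, the Fritz John conditions (4.32)–(4.33) can be checked by a direct search over multipliers. The following Python sketch is our own construction (problem data and names are ours): minimize ϕ0(x) = x subject to ϕ1(x) = −x ≤ 0 on Ω = R, with minimizer x̄ = 0, where the subdifferentials reduce to gradients.

```python
import numpy as np

# minimize phi0(x) = x subject to phi1(x) = -x <= 0 on Omega = R; xbar = 0.
grad_phi0, grad_phi1 = 1.0, -1.0   # smooth data: subdifferentials are gradients
xbar = 0.0

multipliers = [
    (l0, l1)
    for l0 in np.linspace(0.0, 1.0, 11)
    for l1 in np.linspace(0.0, 1.0, 11)
    if (l0, l1) != (0.0, 0.0)                          # not all multipliers zero
    and abs(l0 * grad_phi0 + l1 * grad_phi1) < 1e-12   # stationarity as in (4.32)
    and abs(l1 * (-xbar)) < 1e-12                      # complementary slackness (4.33)
]
print(multipliers[:3])  # any pair with l0 = l1 > 0 works, e.g. (0.1, 0.1)
```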

For remarks about complementary slackness, constraint qualifications, and smooth functions, compare [414, pp. 226–227].

Application to Unconstrained Set Optimization Problems

In the book by Mordukhovich [414, Chapter 9] and, for instance, in the paper by Bao and Mordukhovich [41, Section 5], necessary optimality conditions are derived for local solutions of a general set-valued (vector-)optimization problem (compare Section 3.2.1)

≤C-opt Γ(x) subject to x ∈ X,    (≤C-SP)

where X and Y are Asplund spaces, Y is preordered by a proper (C ≠ {0}, C ≠ Y), closed and convex cone C ⊂ Y, and Γ : X ⇒ Y is a set-valued mapping with dom Γ ≠ ∅. For an overview of results on necessary optimality conditions for solution concepts based on the vector approach in set optimization, see the book by Mordukhovich [414, Sections 9.4 and 9.6] and the references therein.


We show how to apply the approximate extremal principle (Theorem 4.2.11). Concerning the set-valued optimization problem (≤C-SP), it is interesting that several kinds of nondominated elements can be covered simultaneously. It should also be remarked that constraints are only implicitly covered (points x ∈ X with Γ(x) = ∅) and can be treated by applying calculus rules for the derivatives used; finally, C may be a non-pointed cone. Now, we introduce local solution concepts for set-valued optimization problems based on the vector approach (corresponding to Definitions 3.2.2 and 3.2.3). A pair (x̄, ȳ) ∈ gr Γ ⊂ X × Y is called a local ≤C-nondominated solution of the set optimization problem (≤C-SP) if there is a neighborhood U ⊂ X of x̄ such that

(ȳ − C) ∩ Γ(U) = {ȳ}.    (4.34)

If int C ≠ ∅, then (x̄, ȳ) ∈ gr Γ is a local weakly ≤C-nondominated solution of the set optimization problem (≤C-SP) if

(ȳ − int C) ∩ Γ(U) = ∅.    (4.35)

(4.34) can also be written with ∅ on the right-hand side, taking in (4.34) the set C̃ := C \ {0} instead of C. Some other concepts of nondomination weakening the condition int C ≠ ∅ (compare [41, p. 303], [332], [414, p. 454 or 478], where the concepts of local intrinsic relative nondominated elements, local quasi-relative nondominated elements, or local primary relative nondominated elements are discussed) can also be subordinated to the formulation (4.35), thus simplifying the proof of the following theorem.

Remark 4.2.17. A local ≤C-nondominated solution of the set optimization problem (≤C-SP) can also be defined by

(ȳ − C) ∩ Γ(U) ⊆ ȳ + C.    (4.36)

The notions (4.34) and (4.36) are equivalent when C is pointed. When the ordering cone C in Y is non-pointed, the subordination to the formulation (4.34) is possible, taking (4.34) with the pointed cone C̃ := (C ∩ (Y \ (−C))) ∪ {0}.

Remark 4.2.18. The aim of this discussion concerning the application of the extremal principle in Theorem 4.2.11 to the set-valued problem (≤C-SP) is to show that a local nondominated element of (≤C-SP) (of the kind (4.34), (4.35)) admits a representation as a local extremal point of a system of sets. This can be seen by following the ideas of the proof of the following theorem; look at the text from (4.42) up to (4.54). The assertion in the following theorem is shown in [414, Th. 9.20].

Theorem 4.2.19 (Fermat rules for local solutions to set-valued (vector-)optimization problems). Suppose that Γ : X ⇒ Y is a set-valued mapping acting between Asplund spaces such that its graph [epigraph, in the case of Remark 4.2.20] is closed around the reference point (x̄, ȳ) ∈ gr Γ. Assume that the image space Y is preordered by a proper, closed, convex cone C ⊂ Y. Then, the (Fermat type) coderivative condition

0 ∈ D∗Γ(x̄, ȳ)(y∗) with some −y∗ ∈ N(0; C), ‖y∗‖ = 1    (4.37)

is necessary for optimality of (x̄, ȳ) for Γ in both of the following two cases:

(x̄, ȳ) is a local weakly nondominated solution of (≤C-SP);    (4.38)
(x̄, ȳ) is a local nondominated solution of (≤C-SP),    (4.39)

provided that C \ (−C) ≠ ∅ and that

either C is SNC at 0 ∈ Y or Γ⁻¹ is PSNC at (ȳ, x̄)    (4.40)

[or the PSNC property holds for the inverse of the associated epigraphical multifunction at (ȳ, x̄) in the case of Remark 4.2.20]. In Theorem 9.20 in [414], also other types of local relative nondominated elements are included. The condition C \ (−C) ≠ ∅ means that C is not a linear subspace of Y.

Remark 4.2.20. Also the subdifferential condition

0 ∈ ∂Γ(x̄, ȳ)    (4.41)

is necessary for optimality of (x̄, ȳ) for Γ, for local nondominated solutions of (≤C-SP) and the weak ones as above, but under the changed conditions mentioned in brackets in Theorem 4.2.19.

Now, remembering Remark 4.2.18, our next aim is to confirm a local nondominated element (x̄, ȳ) ∈ gr Γ as a local extremal point of some system of sets in the Asplund product space X × Y. We define the sets

Ω1 := gr Γ,   Ω2 := X × (ȳ − C).    (4.42)

Both sets are closed around (x̄, ȳ) because of the closedness assumptions imposed on Γ and C. Clearly (x̄, ȳ) ∈ Ω1 ∩ Ω2. To show the local extremality of (x̄, ȳ) w.r.t. Ω1, Ω2, we look for a suitable sequence {ck} ⊂ Y converging to zero such that, with the sequence {(0, ck)} ⊂ X × Y, the extremality conditions (compare, for instance, (4.9))

Ω1 ∩ (Ω2 + (0, ck)) ∩ (U × Y) = ∅, k ∈ N,    (4.43)

hold. Here U is a neighborhood of x̄ coming from the local nondomination property. As the wanted sequence we take ck = c/k, k ∈ N, with c ∈ Y, c ≠ 0. If there were an element (x, y) ∈ U × Y for which (4.43) is not fulfilled, that is,

there is (x, y) ∈ Ω1 ∩ (Ω2 + (0, c/k)) ∩ (U × Y), k ∈ N,    (4.44)


we would get a contradiction to the nondominated element property of (x̄, ȳ). Indeed, (4.44) gives

there is x ∈ U, y ∈ Γ(x) with y ∈ ȳ − C + c/k, k ∈ N.    (4.45)

If (x̄, ȳ) is a local weakly nondominated solution of (≤C-SP), we choose c ∈ −int C, and (4.45) gives

y ∈ ȳ − C + c/k ⊂ ȳ − C − int C = ȳ − int C,    (4.46)

so that x ∈ U and y ∈ Γ(U) ∩ (ȳ − int C), a contradiction as claimed. This procedure also leads to a contradiction for the other local weakly nondominated solutions of (≤C-SP) (only the last equality in (4.46) is not quite obvious; compare Exercise 9.44 in [414]). If (x̄, ȳ) is a local nondominated solution of (≤C-SP), we choose c ∈ −(C \ (−C)), and (4.45) gives

there is x ∈ U, y ∈ Γ(x) with y ∈ ȳ − C + c/k ⊂ ȳ − C − (C \ (−C)) ⊂ ȳ − (C \ {0}),    (4.47)

and so also a contradiction as claimed. Now the approximate extremal principle (compare Theorem 4.2.11) can be applied in the Asplund space X × Y with the norm ‖(x, y)‖ := ‖x‖ + ‖y‖ and the maximum dual norm on X∗ × Y∗. With a sequence {εk} ↓ 0 one finds from that principle: There are sequences {(xik, yik)} ⊂ X × Y and {(x∗ik, y∗ik)} ⊂ X∗ × Y∗ (i = 1, 2) satisfying

(x1k, y1k) ∈ gr Γ, (x2k, y2k) ∈ X × (ȳ − C), ‖(xik, yik) − (x̄, ȳ)‖ ≤ εk,    (4.48)
(x∗1k, −y∗1k) ∈ N̂((x1k, y1k); gr Γ), 0 = x∗2k ∈ N̂(x2k; X), y∗2k ∈ N̂(ȳ − y2k; C),    (4.49)
max{‖x∗1k‖, ‖y∗1k + y∗2k‖} ≤ εk,    (4.50)
1 − εk ≤ max{‖x∗1k‖, ‖y∗1k‖} + ‖y∗2k‖ ≤ 1 + εk.    (4.51)

Now properties of Asplund spaces come into play. It follows from (4.51) that the sequences {(x∗ik, y∗ik)}, i = 1, 2, are bounded in the (dual of an Asplund) space X∗ × Y∗, and therefore they contain weak∗ convergent subsequences. From (4.50) it follows that

x∗1k →w∗ 0, y∗1k →w∗ y∗, y∗2k →w∗ −y∗ as k → +∞,    (4.52)

and passing to the limit as k → +∞ in (4.49) and (4.48) gives that the weak∗ limit y∗ ∈ Y∗ satisfies the inclusions

(0, −y∗) ∈ N((x̄, ȳ); gr Γ) and −y∗ ∈ N(0; C).    (4.53)

Next we show that y∗ ≠ 0. Otherwise (4.52) gives

y∗1k →w∗ 0, y∗2k →w∗ 0 as k → +∞.    (4.54)

Either C is SNC at 0; then the second limit process in (4.54) gives ‖y∗2k‖ → 0, and consequently, because of (4.50), also ‖y∗1k‖ → 0 and ‖x∗1k‖ → 0. These results together contradict (4.51). Or Γ⁻¹ is PSNC at (ȳ, x̄); then, by (4.52) and (4.50), it follows that ‖y∗1k‖ → 0 and hence ‖y∗2k‖ → 0, which again contradicts (4.51). So, after normalization, the coderivative necessary condition (4.37) is proved for local nondominated solutions of (≤C-SP). But it also holds for weakly nondominated solutions of (≤C-SP), since in this case C has a nonempty interior and is thus automatically SNC at 0 (see Exercise 2.29 in [414]).

The subdifferential conditions (see (4.41)), 0 ∈ ∂Γ(x̄, ȳ) for (x̄, ȳ) being a local nondominated solution or a local weakly nondominated solution of (≤C-SP), still need to be proven. For the case of a local weakly nondominated solution of (≤C-SP), see below. Since subdifferentials of set-valued mappings such as Γ are defined (see Definition 4.1.7) with the help of the epigraphical multifunction associated to Γ (see (4.6)), we have to work with

EΓ(x) := Γ(x) + C, where gr EΓ = epi Γ,    (4.55)

and we must introduce epi Γ into the Ω sets. So we change Ω1 = gr Γ to Ω̃1 := epi Γ and verify the extremality property (4.43), now with Ω̃1 and Ω2. Of course, (x̄, ȳ) ∈ Ω̃1 ∩ Ω2. Assuming that the extremality property does not hold, there is

(x, y) ∈ U × Y such that (x, y) ∈ Ω̃1 ∩ (Ω2 + (0, ck)) ∩ (U × Y), k ∈ N,    (4.56)

and so we have some (x, y, z) ∈ X × Y × C satisfying

x ∈ U, y ∈ Γ(x) + z, and y ∈ ȳ − C + ck, k ∈ N.    (4.57)

This means (with ck = c/k and c ∈ −(C \ (−C)))

x ∈ U, y − z ∈ Γ(U), y − z ∈ ȳ − C + ck − z ⊂ ȳ − (C \ {0}).    (4.58)

It follows that

y − z ∈ (ȳ − (C \ {0})) ∩ Γ(U),    (4.59)

contradicting the nondominated element property of (x̄, ȳ). Now we apply the extremal principle of Theorem 4.2.11 to the considered local nondominated solution (x̄, ȳ) of (≤C-SP), which we have confirmed as an extremal point of the set system Ω̃1 = epi Γ, Ω2. This is an interesting point of the proof: starting with (4.42) we had the same situation as now, but with the set system Ω1 = gr Γ and Ω2. Application of Theorem 4.2.11 there gave the necessary coderivative condition via D∗N Γ(x̄, ȳ)(y∗). Consequently, the same procedure, now with respect to Ω̃1 = epi Γ and Ω2, gives D∗N EΓ(x̄, ȳ)(y∗), and this is nothing else (see (4.7)) than the necessary subdifferential condition for Γ which we looked for: 0 ∈ ∂Γ(x̄, ȳ), i.e., (4.41) is fulfilled.


To prove the subdifferential condition (4.41) for a local weakly nondominated solution of (≤C-SP), we first show that its defining property (see (4.35)), (ȳ − int C) ∩ Γ(U) = ∅, implies

(ȳ − int C) ∩ (Γ(U) + C) = ∅,    (4.60)

or, in other words, that the considered nondominated element is also a weakly nondominated element minimizing EΓ(x), that is (see (4.55)), minimizing Γ(x) + C. Indeed, the negation of (4.60) means that there is y ∈ (ȳ − int C) ∩ (Γ(U) + C), and it follows that there are

u ∈ U, v ∈ Γ(u), w ∈ C with y = v + w ∈ ȳ − int C.    (4.61)

So we get

v = y − w ∈ ȳ − int C − w ⊂ ȳ − int C − C = ȳ − int C.    (4.62)

Then v ∈ (ȳ − int C) ∩ Γ(U) contradicts the weakly nondominated element property (4.35) of (x̄, ȳ). Having proved that the weakly nondominated element is a weakly nondominated element of Γ(x) + C too, we apply the (already proved) necessary coderivative condition (see (4.37)) w.r.t. Γ(x) + C, which reads 0 ∈ D∗N EΓ(x̄, ȳ)(y∗). Looking at the definition of the basic subdifferential in (4.7), we have 0 ∈ D∗N EΓ(x̄, ȳ)(y∗) = ∂Γ(x̄, ȳ)(y∗), as claimed in Remark 4.2.20.

The next assertions, derived in [413, Lemma 3.1] and [413, Theorem 2.28], will be employed in order to show necessary conditions for Θ-nondominated solutions of set-valued mappings in Theorem 4.4.4. In the proof of the following assertion in [413, Lemma 3.1], the extremal principle (see Theorem 4.2.11) is used.

Lemma 4.2.21 (A fuzzy intersection rule). Consider arbitrary subsets Ω1 and Ω2 of X that are closed around x̄ ∈ Ω1 ∩ Ω2, and x∗ ∈ N̂(x̄; Ω1 ∩ Ω2). Then, for every ε > 0, there are λ ≥ 0, xi ∈ Ωi ∩ (x̄ + εUX), and x∗i ∈ N̂(xi; Ωi) + εUX∗ for i = 1, 2 such that

λx∗ = x∗1 + x∗2 and max{λ, ‖x∗1‖} = 1.

Lemma 4.2.22 (Subdifferential variational principle). Suppose that X is an Asplund space and ϕ : X → R ∪ {+∞} is a proper, lower semicontinuous functional bounded from below. For every ε > 0, λ > 0, and x0 ∈ X with ϕ(x0) < inf_X ϕ + ε, there are x ∈ X and x∗ ∈ ∂̂ϕ(x) such that ‖x − x0‖ < λ, ϕ(x) < inf_X ϕ + ε, and ‖x∗‖ < ε/λ.
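As a minimal smooth illustration of Lemma 4.2.22 (our own construction; in the smooth case the regular subdifferential is the gradient), take ϕ(x) = (x − 1)², ε = 10⁻², λ = 1/2, and the ε-minimizer x0 = 0.95; a few gradient steps in Python produce a point x with ‖x − x0‖ < λ, ϕ(x) < inf ϕ + ε, and |ϕ′(x)| < ε/λ.

```python
f = lambda x: (x - 1.0) ** 2      # smooth, bounded below, inf f = 0
df = lambda x: 2.0 * (x - 1.0)

eps, lam = 1e-2, 0.5
x0 = 0.95                         # f(x0) = 0.0025 < inf f + eps: an eps-minimizer
x = x0
for _ in range(100):              # plain gradient descent finds the desired point
    x -= 0.1 * df(x)
print(abs(x - x0) < lam, f(x) < eps, abs(df(x)) < eps / lam)  # True True True
```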


An Extended Extremal Principle

The following formulation of an (extended) extremal principle is given in [41, Section 2] (compare [413, Lemma 5.58] and (4.18)). It is applied in [329, Section 12.11] in order to show first-order necessary conditions for localized nondominated elements in Theorem 12.11 in [329], as well as in [329, Section 15.3] for deriving an extended welfare theorem (see Theorem 15.3.5 in [329]).

Theorem 4.2.23 (An extended extremal principle). Let x̄ be a local extremal point of the set system {Ω1, Ω2}, where both Ω1 and Ω2 are closed around x̄ in the product ∏_{i=1}^{n} Xi of Asplund spaces Xi, i = 1, ..., n. Take two index sets I, J ⊂ {1, ..., n} with I ∪ J = {1, ..., n} and I ∩ J = ∅. Furthermore, suppose that one of the following PSNC conditions is satisfied for Ω1 and Ω2:
• Ω1 is PSNC at x̄ with respect to I and Ω2 is strongly PSNC at x̄ with respect to J;
• Ω1 is strongly PSNC at x̄ with respect to I and Ω2 is PSNC at x̄ with respect to J.
Then there exists an element x∗ ∈ X∗ such that

0 ≠ x∗ ∈ N(x̄; Ω1) ∩ (−N(x̄; Ω2)).

Proof. Consider the extremal system {Ω1, Ω2, x̄}. Under the given assumptions we get with Theorem 4.2.6 that the approximate extremal principle holds in (∏_{i∈I} Xi) × (∏_{j∈J} Xj) with the corresponding index sets I and J therein satisfying the condition I ∩ J = ∅. Following the line of the proof in [413, Lemma 5.58], we get the desired result. □

In this section, nondominated solutions of the set-valued optimization problem (≤C-SP) were transformed into a representation as local extremal points of a system of sets in order to obtain necessary optimality conditions by applying an extremal principle. In the following two sections, a scalarization by translation invariant functionals (2.42) is used to obtain such optimality conditions. Thereby the nonlinear scalarizing functional ϕ is used, which already served in Section 2.4, (2.42), for the separation of not necessarily convex sets. In Section 4.3, we discuss a suitable scalarization of set-valued optimization problems. For deriving necessary conditions for nondominated solutions of set-valued optimization problems in Section 4.4, we apply the fuzzy intersection rule given in Lemma 4.2.21 to the scalarized mapping.

4.3 Subdifferentials of Scalarization Functionals

For deriving optimality conditions in vector as well as set optimization, the application of a certain scalarization technique is an important tool in order


to use known results from scalar optimization. In the nonconvex case, it is important to use the nonlinear translation invariant functionals ϕD,k given by (2.42). These functionals are introduced and their properties discussed in Section 2.4. The results in this section were derived by Bao and Tammer in [43]. For our approach, we use the following notion of a scalarization direction given by Bao and Tammer [43, Definition 2.1]; compare Section 2.4, especially (2.41).

Definition 4.3.1 (Scalarization directions of sets). Consider a proper subset D of a linear space Y. A vector k ∈ Y \ {0} is called a scalarization direction of D if D does not contain lines parallel to k and the condition

∀ t ∈ R+ : D + tk ⊆ D    (4.63)

is fulfilled. By dir(D) we denote the set of all scalarization directions of D.

In this section, we focus on the nonconvex case and establish formulas for both basic and singular subdifferentials of the nonlinear translation invariant functional

ϕD,k(y) = inf{t ∈ R | y ∈ tk − D}

introduced in (2.42). Furthermore, we discuss conditions ensuring the SNEC property and the Lipschitz behavior of ϕD,k. We do not suppose that the set D is a cone, nor that D is convex and/or has a nonempty interior as in [167, Lemma 2.1] by Durea and Tammer. If D is not convex, the nonlinear translation invariant functional ϕD,k is not convex (see Theorem 2.4.1(a)). Furthermore, it is not Lipschitz if D has an empty interior. Hence, we employ limiting generalized differentiation to compute both basic and singular subdifferentials of the translation invariant functional ϕD,k. The assertions concerning the regular and limiting subdifferentials of ϕD,k given by (2.42) (where D is a closed subset of an Asplund space Y and k a scalarization direction of D in the sense of Definition 4.3.1) in the next theorem are derived in [43, Theorem 3.1].

Theorem 4.3.2 (Regular and limiting subdifferentials of scalarization functionals). Consider an Asplund space Y, a proper and closed set D ⊆ Y, k ∈ dir(D), and the nonlinear translation invariant functional ϕD,k given by (2.42). Let y ∈ dom ϕD,k be given. Then, for λ ∈ R, the regular and limiting subdifferentials of ϕD,k at (y, t) ∈ epi ϕD,k are given by

∂•ϕD,k(y)(λ) = Hλ(k) ∩ (−N•(tk − y; D)),    (4.64)

where Hλ(k) := {y∗ ∈ Y∗ | y∗(k) = λ}, and • stands for both the regular and the limiting subdifferential.


Proof. We define a set-valued mapping Γ : R ⇒ Y by

Γ(t) := tk − D.    (4.65)

We show that gr Γ⁻¹ = epi ϕD,k (compare Section 2.4). Let us first prove that epi ϕD,k ⊆ gr Γ⁻¹. For any (−u, t) ∈ epi ϕD,k, it holds that ϕD,k(−u) = ϕD,k(−u − tk + tk) ≤ t. We obtain ϕD,k(−u − tk) + t ≤ t and therefore ϕD,k(−u − tk) ≤ 0. The last inequality implies −u − tk ∈ −D. Put y = u + tk ∈ D. We get (−u, t) = (tk − y, t) ∈ gr Γ⁻¹. Now, we prove the reverse inclusion. Let us fix an arbitrary pair (t, tk − y) ∈ gr Γ, implying that y ∈ D and (tk − y, t) ∈ gr Γ⁻¹. We obtain ϕD,k(−y) ≤ 0 and ϕD,k(tk − y) = t + ϕD,k(−y) ≤ t, clearly verifying that (tk − y, t) ∈ epi ϕD,k. Since this pair was arbitrary, we have gr Γ⁻¹ ⊆ epi ϕD,k.

Furthermore, we introduce a constant set-valued mapping D : R ⇒ Y with D(t) :≡ −D and a strictly differentiable function κ : R → Y with κ(t) := tk, in order to describe the set-valued mapping Γ in (4.65) as a sum of two mappings, Γ = κ + D. We take a fixed arbitrary pair (t, y) ∈ gr Γ, i.e., (y, t) ∈ epi ϕD,k. Now, we employ the coderivative sum rule with equality from [413, Theorem 1.62] for κ and D. For every y∗ ∈ Y∗, we obtain

D∗Γ(t, y)(−y∗) = −y∗(k) + D∗D(t, y − tk)(−y∗),    (4.66)

where D∗Γ(t, y) stands for both the regular coderivative D̂∗Γ(t, y) and the limiting coderivative D∗N Γ(t, y) introduced in Definition 4.1.3. Since gr D = R × (−D), and taking into account that the normal cone to a Cartesian product is the product of the normal cones to the component sets, as stated in [413, Proposition 1.2], we obtain

N•((t, y − tk); gr D) = N•(t; R) × N•(y − tk; −D) = {0} × N•(y − tk; −D),

where N•((t, y − tk); gr D) stands for both the regular normal cone N̂((t, y − tk); gr D) and the limiting normal cone N((t, y − tk); gr D) introduced in Definition 4.1.2. From the definition of coderivatives in Definition 4.1.3, we conclude that D∗D(t, y − tk)(−y∗) = {0} whenever y∗ ∈ N•(y − tk; −D). From (4.66), we now obtain that y∗ ∈ N•(y − tk; −D) = −N•(tk − y; D) and D∗Γ(t, y)(−y∗) = −y∗(k). We now further employ the last equality and get


(−y∗(k), y∗) ∈ N•((t, y); gr Γ) ⟺ (y∗, −y∗(k)) ∈ N•((y, t); gr Γ⁻¹)
⟺ (y∗, −y∗(k)) ∈ N•((y, t); epi ϕD,k)
⟺ y∗ ∈ D∗EϕD,k(y, t)(y∗(k)).

Because of Definition 4.1.7, we obtain

∂•ϕD,k(y)(λ) = D∗EϕD,k(y, t)(λ) = Hλ(k) ∩ (−N•(tk − y; D)),    (4.67)

where Hλ(k) = {y∗ ∈ Y∗ | y∗(k) = λ}, which completes the proof. □

Concerning the basic and the singular subdifferentials of the scalarization functional ϕD,k given by (2.42), we get the following result derived in [43, Corollary 3.1].

Corollary 4.3.3 (Basic and singular subdifferentials of scalarization functionals). Consider Y, D, k, ϕD,k, (y, t) as in Theorem 4.3.2. Then,
(i) the basic subdifferential of ϕD,k at y is

∂ϕD,k(y) = H1(k) ∩ (−N(ϕD,k(y)k − y; D)),    (4.68)

where H1(k) := {y∗ ∈ Y∗ | y∗(k) = 1};
(ii) the singular subdifferential of ϕD,k at y is

∂∞ϕD,k(y) = H0(k) ∩ (−N(ϕD,k(y)k − y; D)),    (4.69)

where H0(k) := {y∗ ∈ Y∗ | y∗(k) = 0}.

Remark 4.3.4 (Discussion of the results and special cases).
(a) In [40], the formulas for the subdifferentials were established via bd(D) instead of D. Corollary 4.3.3 gives sharper results.
(b) Theorem 4.3.2 provides subdifferentials of the scalarization functionals at pairs in the epigraph. This is essential for us to study the equivalence between the SNC property of the set D and the SNEC property of the scalarization functional ϕD,k (see Theorem 4.3.6). Because we do not suppose that the set D has a nonempty interior, we have to compute the singular subdifferential ∂∞ϕD,k of ϕD,k in order to verify the fulfillment of the so-called qualification condition of calculus rules for generalized differentiation. This is important for the proof of the necessary optimality conditions in Theorem 4.4.4.
(c) If D is a convex set, both the regular and the limiting normal cone to D collapse to the normal cone of convex analysis. Hence, the basic subdifferential (4.68) of ϕD,k coincides with the known formulas in the convex case (with less restrictive assumptions imposed on D; see Durea and Tammer [167]). More precisely, we obtain

∂ϕD,k(y) = H1(k) ∩ (−N(ϕD,k(y)k − y; D)) = H1(k) ∩ {y∗ ∈ Y∗ | ∀d ∈ D : y∗(d + y) − ϕD,k(y) ≥ 0}.


Furthermore, if D = C and C is a convex cone, we obtain (see [167, Lemma 2.1])

∂ϕC,k(y) = H1(k) ∩ {y∗ ∈ Y∗ | y∗(y) = ϕC,k(y)}.

(d) Suppose that D = C and C is a nontrivial, convex, and closed cone with nonempty interior and k ∈ int C. Then ϕC,k is a finite-valued, continuous, sublinear functional with dom ϕC,k = Y (see Theorem 2.4.1 and Corollary 2.4.5), and the singular subdifferential ∂∞ϕC,k is trivial everywhere, i.e.,

∂∞ϕC,k(y) = H0(k) ∩ (−N(ϕC,k(y)k − y; C)) = {0}.

(e) If D is convex and k ∉ −D∞, then for all y ∈ Y one has (compare [533])

∂ϕD,k(y) = {y∗ ∈ bar D | y∗(k) = 1, ∀d ∈ D : y∗(d + y) − ϕD,k(y) ≥ 0}.    (4.70)

Moreover, if D + (C \ {0}) = int D holds, then ∂ϕD,k(y) ⊂ C^# for every y ∈ Y.

From Theorem 4.3.2, we conclude the following assertion about subdifferentials of scalarization functionals shifted along the scalarization direction, derived in [43, Corollary 3.2].

Corollary 4.3.5 (Subdifferentials along scalarization directions). Consider Y, D, k, ϕD,k, (y, t) as in Theorem 4.3.2. Then, for any y ∈ dom ϕD,k = Rk − D and for any t ∈ R, it holds that

∂ϕD,k(y + tk) = ∂ϕD,k(y) and ∂∞ϕD,k(y + tk) = ∂∞ϕD,k(y).

Proof. Taking into account Theorem 4.3.2 and that ϕD,k is translation invariant (see (2.45) in Theorem 2.4.1), it holds that

∂•ϕD,k(y + tk)(λ) = Hλ(k) ∩ (−N(ϕD,k(y + tk)k − y − tk; D))
= Hλ(k) ∩ (−N(ϕD,k(y)k + tk − y − tk; D))
= Hλ(k) ∩ (−N(ϕD,k(y)k − y; D)) = ∂•ϕD,k(y)(λ)

(with λ = 1 for the basic and λ = 0 for the singular subdifferential), and the assertions are shown. □
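The translation property ϕD,k(y + tk) = ϕD,k(y) + t underlying this proof can be observed directly in a finite-dimensional example. The following Python sketch is our own illustration with D = R²₊ and k = (1, 1), for which ϕD,k(y) = max(y₁, y₂) (an elementary computation under these assumptions, not a formula from the text).

```python
import numpy as np

# D = R^2_+ and k = (1,1): phi_{D,k}(y) = inf{t : t*k - y in D} = max(y_1, y_2).
phi = lambda y: float(np.max(y))

rng = np.random.default_rng(0)
for _ in range(3):
    y, t = rng.normal(size=2), float(rng.normal())
    assert np.isclose(phi(y + t * np.ones(2)), phi(y) + t)
print("phi(y + t*k) = phi(y) + t on random samples")
```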

The next theorems provide important assertions concerning the Lipschitz behavior and the SNEC property of the scalarization functional ϕD,k given by (2.42). In the next theorem ([43, Theorem 3.2]), the relationship between the SNEC property of the functional ϕD,k and the SNC property of the set D is established.

Theorem 4.3.6 (SNEC property of scalarization functionals). Consider Y, D, k, ϕD,k given by (2.42), and (y, t) as in Theorem 4.3.2. Then,

ϕD,k is SNEC at y ∈ dom ϕD,k ⟺ D is SNC at ϕD,k(y)k − y.

Proof. ϕD,k is SNEC at y ∈ dom ϕD,k if and only if epi ϕD,k is SNC at (y, ϕD,k(y)), taking into account Definitions 4.1.5 and 4.1.8 of the SNC and SNEC properties. We now show that epi ϕD,k is SNC at (y, ϕD,k(y)) if and only if D is SNC at ϕD,k(y)k − y. Indeed, fix an arbitrary sequence (y∗k, −λk) ∈ N̂((yk, αk); epi ϕD,k) such that (yk, αk) → (y, ϕD,k(y)), y∗k →w∗ 0, and λk → 0 as k → ∞. We obtain from (y∗k, −λk) ∈ N̂((yk, αk); epi ϕD,k) that y∗k ∈ ∂̂ϕD,k(yk, αk)(λk). Employing the description of the regular subdifferential in Theorem 4.3.2, we get

λk = y∗k(k) and −y∗k ∈ N̂(αk k − yk; D).

From Definition 4.1.5, we conclude that D is SNC at ϕD,k(y)k − y. This completes the proof. □

The next theorem characterizes the Lipschitz behavior of the functional ϕD,k given by (2.42) via the SNC property of the set D. This assertion is shown in [43, Theorem 3.3].

Theorem 4.3.7 (Characterization of the Lipschitz behavior of scalarization functionals). Consider Y, D, k, ϕD,k, (y, t) as in Theorem 4.3.2. Then, ϕD,k is locally Lipschitz continuous at y ∈ dom ϕD,k if and only if D is SNC at v := ϕD,k(y)k − y and

H0(k) ∩ (−N(v; D)) = {0},

where H0(k) = {y∗ ∈ Y∗ | y∗(k) = 0}.

Proof. From Corollary 4.3.3, we obtain ∂∞ϕD,k(y) = H0(k) ∩ (−N(v; D)). Taking into account Theorem 4.3.6, D is SNC at v if and only if ϕD,k is SNEC at y. Moreover, an extended-real-valued functional is locally Lipschitz continuous at a point y in its domain if and only if its singular subdifferential is trivial and it is SNEC at that point (compare Lemma 4.1.9). Hence, the proof is complete. □

Remark 4.3.8. The result concerning the Lipschitz behavior in Theorem 4.3.7 differs from Theorem 2.4.16 (see [533, Theorem 3.9]), which states that ϕD,k is finite and Lipschitz on a neighborhood of y if and only if D is epi-Lipschitz


at v := y − ϕD,k(y)k in the direction −k. Suppose in addition that D is a convex set; then the epi-Lipschitz property of D is equivalent to int D ≠ ∅. This implies the SNC property.

Example 4.3.9. We illustrate how to use Theorem 4.3.7 to examine whether the translation invariant functional ϕD,k given by (2.42) is locally Lipschitz or not. Let Y = R³, D = R+ × R+ × {0} and k := (1, 1, 0). It holds that dom ϕD,k = R² × {0}. The SNC property is automatically fulfilled in the finite-dimensional space R³, and H0(k) = {(a, b, c) ∈ R³ | a + b = 0}. We study four cases in order to check the validity of the condition H0(k) ∩ (−N(v; D)) = {0} in Theorem 4.3.7:

1. If v = (0, 0, 0), we obtain N(v; D) = R− × R− × R.
2. If v = (a, 0, 0) with a > 0, we get N(v; D) = {0} × R− × R.
3. If v = (0, b, 0) with b > 0, we obtain N(v; D) = R− × {0} × R.
4. If v = (a, b, 0) with a > 0 and b > 0, we obtain N(v; D) = {0} × {0} × R.

Taking into account that the condition H0(k) ∩ (−N(v; D)) = {0} in Theorem 4.3.7 fails in each case (for instance, (0, 0, 1) lies in this intersection in every case), the scalarization functional ϕD,k is not Lipschitz on dom ϕD,k = R² × {0}.

In the next result, we show that the translation invariant functional ϕD,k given by (2.42) is locally Lipschitz under the assumption that D = C ⊂ Y is a nontrivial, closed, convex cone with nonempty interior (see [43, Corollary 3.3]).

Corollary 4.3.10 (Lipschitz continuity of scalarization functionals). Consider an Asplund space Y, a nontrivial, closed, convex cone C ⊂ Y with nonempty interior, and a scalarization direction k ∈ int C of C. Then, the functional ϕC,k is locally Lipschitz.

Proof. Since k ∈ int C, it holds that dom ϕC,k = Rk − C = Y (see Theorem 2.4.1 and Corollary 2.4.5). Taking into account [413, Proposition 1.25 and Theorem 1.26], the convexity of C and int C ≠ ∅ imply the SNC property of C. From Theorem 4.3.6, we get that ϕC,k is SNEC at every point y ∈ Y. We next prove that ∂∞ϕC,k(y) = {0}. Taking into account Corollary 4.3.3 (ii), it is sufficient to prove that H0(k) ∩ (−N(ϕC,k(y)k − y; C)) = {0}. Because C is a convex cone, the element v := ϕC,k(y)k − y lies in C. From k ∈ int C, we obtain the existence of r > 0 such that v + k + UY(0, r) ⊆ C, where UY(0, r) is the ball centered at the origin with radius r in Y. For any y∗ ∈ H0(k) ∩ (−N(v; C)), we obtain y∗(k) = 0 and −y∗ ∈ N(v; C), i.e., for every y ∈ C we get y∗(y − v) ≥ 0. Then, for all u ∈ UY(0, r), we obtain y = v + k + u ∈ C and therefore y∗(u) ≥ 0. This implies y∗ = 0. Taking into account Theorem 4.3.7, ϕC,k is locally Lipschitz at every point y ∈ Y. □
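Returning to Example 4.3.9, the failure of Lipschitz continuity can also be seen directly: for the data of that example, ϕD,k has the closed form below (our own elementary computation under those assumptions), and points arbitrarily close to a domain point leave the domain, so no finite local Lipschitz bound exists. A minimal Python sketch:

```python
import numpy as np

def phi(y, tol=1e-12):
    # D = R_+ x R_+ x {0}, k = (1,1,0): y in t*k - D forces y_3 = 0, t >= max(y_1, y_2).
    y1, y2, y3 = y
    return max(y1, y2) if abs(y3) <= tol else np.inf   # +inf outside dom phi = R^2 x {0}

ybar = np.zeros(3)
for delta in (1e-1, 1e-3, 1e-6):
    y = ybar + np.array([0.0, 0.0, delta])   # points arbitrarily close to ybar
    print(delta, phi(y))                     # inf: no finite local Lipschitz bound
```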


4.4 Application to Optimization Problems with Set-valued Objectives

The purpose of this section is to establish necessary optimality conditions for set optimization problems (see problem (SP) in Section 3.2) using a scalarization by means of the translation invariant functional (2.42) studied in Section 2.4 and the results derived in Section 4.3. Necessary optimality conditions in set optimization for solution concepts based on the vector approach are discussed by Mordukhovich in [414, Sections 9.4 and 9.6]; for an overview of corresponding results, see [414, Section 9.6] and the references therein. The results in this section were derived by Bao and Tammer in [43]. Throughout this section, we suppose that Γ : X ⇒ Y is a set-valued mapping acting between two Asplund spaces, S is a nonempty subset of X, and Θ is a domination set in Y with dir(Θ) ≠ ∅ (see Definition 4.3.1). We consider the constrained set optimization problem (compare problem (≤C-SP) in Section 3.2.1 and also the unconstrained set optimization problem (≤C-SP) in Section 4.2.4 for ≤Θ := ≤C)

≤Θ-opt Γ(x) subject to x ∈ S,    (≤Θ-SP)

where the "≤Θ-optimality" is understood in a more general sense (w.r.t. a domination set Θ in Y) than in the definition of nondominated solutions in (4.34).

Definition 4.4.1 (≤Θ-nondominated solution of problem (≤Θ-SP)). Consider the set optimization problem (≤Θ-SP), x̄ ∈ S and (x̄, ȳ) ∈ gr Γ. The pair (x̄, ȳ) ∈ gr Γ is called a ≤Θ-nondominated solution of problem (≤Θ-SP) if

Γ(S) ∩ (ȳ − Θ) = {ȳ}.

If S = X, problem (≤Θ-SP) is an unconstrained set optimization problem

≤Θ-opt Γ(x) subject to x ∈ X,    (≤Θ-SPX)

and a ≤Θ-nondominated solution of problem (≤Θ-SP) with S = X is called a ≤Θ-nondominated solution of (≤Θ-SPX).

It is important to mention that we use the vector approach to define the concept of ≤Θ-nondominated solutions of set-valued mappings Γ, and therefore we have to fix an element ȳ ∈ Γ(x̄). This solution concept is very favorable for deriving necessary conditions in terms of generalized differentiation.


Furthermore, it is of increasing interest to consider solution concepts for set-valued mappings defined via the set less relations introduced in Section 2.3 (see Kuroiwa [361, 362]). In Section 2.3, we introduced the set relations ⪯lD, ⪯uD, ⪯sD, ⪯cD, ⪯pD, where D is a nonempty subset of Y. Under the assumption that ⪯D denotes one of these set relations, we introduce the following general solution concept for a set optimization problem based on the set approach. For D = Θ, we now introduce optimal solutions with respect to ⪯Θ of the following set optimization problem (compare problem (⪯C-SP) with ⪯Θ := ⪯C in Section 3.2.2):

⪯Θ-opt Γ(x) subject to x ∈ S,    (⪯Θ-SP)

where we suppose that S is a subset of X, X and Y are linear spaces, and Γ : X ⇒ Y.

Analogously to Definition 3.2.4, where we introduced ⪯C-nondominated solutions of (⪯C-SP) under the assumption that C ⊂ Y is a proper, pointed, convex, closed cone, we now study a corresponding concept using the domination set Θ ⊂ Y.

Definition 4.4.2 (Nondominated solutions of (⪯Θ-SP) w.r.t. the preorder ⪯Θ). We say that x̄ ∈ S is a ⪯Θ-nondominated solution of problem (⪯Θ-SP) w.r.t. ⪯Θ if

Γ(x) ⪯Θ Γ(x̄) for some x ∈ S   =⇒   Γ(x̄) ⪯Θ Γ(x).

It is important to mention that it is possible to derive necessary optimality conditions for solutions based on the set approach (see Section 3.2.2) using corresponding results for solutions based on the vector approach and the relationships between both approaches (see Eichfelder and Pilecka [178, 179] and Köbis, Tammer, Yao [345]). Therefore, it is possible to employ our necessary conditions for solutions defined by the vector approach to derive corresponding results for solutions based on the set approach. Note that the domination set Θ is a far-reaching extension of the (Pareto) ordering cone in vector optimization. Thus, we capture the concept of a utility functional ϕ : Y → R and the corresponding concept of so-called ϕ-minimal solutions (in the sense that ϕ(y) > ϕ(ȳ) for all y ∈ Γ(S) \ {ȳ}). This concept is often applied in economics. In this context, we have Θ := ȳ − {y ∈ Y | ϕ(y) ≤ ϕ(ȳ)}.
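The following Python sketch (our own toy construction, assuming the utility-induced domination set just described; the function names and the data are ours) checks that ϕ-minimality of ȳ over a finite image set Γ(S) is exactly ≤Θ-nondominatedness for this Θ.

```python
import numpy as np

u = lambda y: y[0] + y[1]                     # a hypothetical utility functional
ybar = np.array([1.0, -1.0])                  # u(ybar) = 0
# Theta := ybar - {y : u(y) <= u(ybar)}, so ybar - Theta = {y : u(y) <= u(ybar)}

images = [np.array(p) for p in [(1.0, -1.0), (2.0, 0.0), (0.0, 1.5)]]  # Gamma(S)
worse_or_equal = [y for y in images
                  if u(y) <= u(ybar) and not np.allclose(y, ybar)]
print(worse_or_equal == [])  # True: ybar is u-minimal, hence Theta-nondominated
```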

Scalarization methods are important and useful tools in vector optimization from the theoretical as well as the computational point of view (see Chapter 3 and Section 5.2).

Especially, scalarization techniques are very beneficial for deriving necessary conditions for (Pareto) efficient solutions of vector optimization problems. We now illustrate this approach in set optimization. Throughout this section, we assume that Γ : X ⇒ Y is a set-valued mapping acting between two Asplund spaces. For every scalarization direction k ∈ dir(Θ), it is easy to see that (x̄, 0) is a minimal solution of a scalarization of the set-valued mapping Γ by means of the functional (2.42) if (x̄, ȳ) is a ≤Θ-nondominated solution of problem (≤Θ-SPX). Using the generalized Fermat rule, we obtain

0 ∈ ∂(ϕΘ−ȳ,k ∘ Γ)(x̄, 0)(1).

To reformulate the right-hand side, we would have to employ a chain rule for coderivatives. Hence, we would need to suppose that the set-valued mapping S : X × R ⇒ Y defined by

S(x, α) := EΓ(x) ∩ ϕ⁻¹Θ−ȳ,k(α) = EΓ(x) ∩ (αk + ȳ − Θ)

is inner semicontinuous at (x̄, ᾱ, ȳ) in the sense that for every sequence (xn, αn) with lim_{n→∞}(xn, αn) = (x̄, ᾱ), there exists a sequence yn ∈ S(xn, αn) such that lim_{n→∞} yn = ȳ. Obviously, this is an additional condition; it is not automatically fulfilled if gr Γ is closed around (x̄, ȳ) and Θ is a closed domination set. In this section, we therefore modify the scalarization method for the set optimization problem by introducing a nonlinear scalarization functional acting from the product space X × Y into the extended space of real numbers; see Proposition 4.4.3. We define an extended real-valued functional s : X × Y → R̄ by

∀(x, y) ∈ gr Γ : s(x, y) := ϕ(Θ−ȳ),k(y),    (4.71)

where ϕ(Θ−ȳ),k is given by the translation invariant functional ϕD,k (see (2.42)) with D = Θ − ȳ. Proceeding this way, we avoid the usage of the chain rule for coderivatives to manipulate the subdifferential of the generalized composition ϕΘ−ȳ,k ∘ Γ. The following relationships between ≤Θ-nondominated solutions of the set optimization problem (≤Θ-SPX) and minima of a suitable scalarized optimization problem are derived in [43, Proposition 4.1].

Proposition 4.4.3 (Characterization of solutions to (≤Θ-SPX) via scalarization). Let (≤Θ-SPX) be a set optimization problem with dir(Θ) ≠ ∅. Consider (x̄, ȳ) ∈ gr Γ. Then:
(i) If (x̄, ȳ) is a ≤Θ-nondominated solution of problem (≤Θ-SPX), then for any k ∈ dir(Θ), the pair (x̄, ȳ) is a minimum of the scalar optimization problem

min_{(x,y)∈X×Y} s(x, y) + ιgr Γ((x, y)),    (4.72)


where s is given by (4.71) and ιgr Γ (·) : X×Y → R∪{+∞} is the indicator function of gr Γ . (ii) Suppose additionally that Θ is closed and pointed. Moreover, assume that ∀(x, y) ∈ gr Γ : ϕΘ−y,k (y) = 0 =⇒ ϕΘ−y,k (y) = 0.

(4.73)

If (x, y) is a minimum of the scalar optimization problem (4.72) for some k ∈ dir(Θ), then it is a ≤Θ -nondominated solution of problem (≤Θ -SPX ). Proof. (i) Suppose that (x, y) is a ≤Θ -nondominated solution of Γ , i.e., Γ (X) ∩ (y − Θ) = {y}. For every y ∈ Γ (X), it holds that ϕΘ−y,k (y) ≥ 0 (see Theorem 2.4.1). Hence, ∀(x, y) ∈ gr Γ : s(x, y) = ϕΘ−y,k (y) ≥ 0. Because Θ is a closed domination set, we obtain s(x, y) = ϕΘ−y,k (y) = 0. Then, the pair (x, y) is a minimal solution of the scalar optimization problem (4.72). (ii) Suppose now that (x, y) is a minimal solution of the scalar optimization problem (4.72), i.e., ∀x ∈ dom Γ, ∀y ∈ Γ (x) : ϕΘ−y,k (y) ≥ 0. By contradiction, we suppose that (x, y) is not a ≤Θ -nondominated solution of problem (≤Θ -SPX ). Then, there is an element x ∈ dom Γ and an element y ∈ Γ (x) such that y = y and y ∈ y − Θ. This implies that ϕΘ−y,k (y) ≤ 0 and thus ϕΘ−y,k (y) = 0 due to the minimality of (x, y). Taking into account (4.73), we get ϕΘ−y,k (y) = 0. We obtain y ∈ y − Θ since Θ is closed. From y ∈ y − Θ and y ∈ y − Θ, we obtain y = y since Θ is pointed. This is a contradiction to the choice of y (y = y) such that (x, y) is a ≤Θ -nondominated  solution to problem (≤Θ -SPX ). The following necessary condition for ≤Θ -nondominated solutions of the unconstrained problem (≤Θ -SPX ) is derived in [43, Theorem 4.1]. Theorem 4.4.4 (Necessary conditions for ≤Θ -nondominated solutions to (≤Θ -SPX )). We consider problem (≤Θ -SPX ) and a ≤Θ -nondominated solution (x, y) of (≤Θ -SPX ). Let k ∈ dir(Θ) be a scalarization direction of Θ, ϕΘ−y,k be the translation invariant functional (see (2.42)), and s : X × Y → R be the scalarization functional defined by (4.71). Suppose that (H1) (Closedness condition) The domination set Θ is closed around 0 ∈ Θ and gr Γ is closed around (x, y). (H2) (PSNC condition) Either Θ is SNC at 0 or Γ −1 is PSNC at (y, x). (H3) (Mixed qualification condition for {Γ, Θ})  ∗  ∗ y ∈ DM Γ −1 (y, x)(0) ∩ (−N (0; Θ)) and y ∗ (k) = 0 =⇒ y ∗ = 0. (4.74)

336

4 Generalized Differentiation and Optimality Conditions

Then, there exists y ∗ ∈ −N (0; Θ) with y ∗ (k) = 1 and ∗ − y ∗ ∈ DM Γ −1 (y, x)(0)

(4.75)

that implies 0 ∈ DF∗ (x, y)(y ∗ ). Proof. Taking into account Proposition 4.4.3, (x, y) is a minimum of the scalar problem (4.72). Because the limiting generalized differential objects and SNC are local properties, we suppose without loss generality that Θ is a closed set. For a closed set Θ in Y , the translation invariant functional ϕΘ−y,k is lower semicontinuous around y (see Theorem 2.4.1), and so is the scalarization functional s around (x, y). We fix a sequence {εj } of positive numbers with εj ↓ 0. Applying the subdifferential variational principle in Lemma 4.2.22 to the ε2j -minimal solution of g := s + ιgr Γ (·) with λ = ε, there exists a sequence {xj , yj , x∗j , yj∗ } with (xj , yj ) ∈ gr Γ, (xj , yj ) − (x, y) ≤ εj , ˆ j , yj ), (x∗ , y ∗ ) ≤ εj . (x∗j , yj∗ ) ∈ ∂g(x j j

(4.76)

Because epi g = Ω1 ∩ Ω2 , where Ω1 = epi s = X × epi ϕΘ−y,k and Ω2 = ˆ j , yj ) that gr Γ × R, we obtain from (x∗j , yj∗ ) ∈ ∂g(x ˆ ((xj , yj , αj ); Ω1 ∩ Ω2 ) with αj := g(xj , yj ) = ϕΘ−y,k (yj ). (x∗j , yj∗ , −1) ∈ N We are working in the framework of the Asplund space X × Y × R with the L1 -norm and its dual space X ∗ × Y ∗ × R with the L∞ -norm. Now, we employ Lemma 4.2.21 for each j ∈ N to choose sequences λj ≥ 0, (xij , yij , αij ) ∈ Ωi (i = 1, 2), ∗ ˆ ((x1j , y1j , α1j ); Ω1 ) = {0} × N ˆ ((y1j , α1j ); epi ϕΘ−y,k ), (0, y1j , t1j ) ∈ N ∗ ˆ ((x2j , y2j , α2j ); Ω2 ) = N ˆ ((x2j , y2j ); gr Γ ) × {0} , 0) ∈ N (x∗2j , y2j

fulfilling (xij , yij , αij ) − (xj , yj , αj ) ≤ εj , i ∈ {1, 2}, ∗ ∗ (0, y1j , t1j ) + (x∗2j , y2j , 0) − λj (x∗j , yj∗ , −1) ≤ 2εj ,

(4.77)

∗ max{λj , y1j , |t1j |}

(4.78)

1 − εj ≤

≤ 1 + εj .

x∗j 

→ 0 and yj∗  → 0 (see (4.76)), we obtain from ∗ ∗ }, {t1j }, {x∗2j }, {y2j }, and {λj } are sequences {y1j

Taking into account (4.77) and (4.78) that the bounded. Since X and Y are Asplund spaces, we suppose that w∗

∗ ∗ (y1j , x∗2j , y2j ) → (y ∗ , x∗ , −y ∗ ), λj → λ and t1j → λ.

4.4 Application to Optimization Problems with Set-valued Objectives

337

We show that λ = 0. Indeed, arguing by contradiction, we suppose that λ = 0. From x∗2j  → 0, we get (4.77). Taking into account the structure of the sets ∗ Γ −1 (y, x)(0), i.e., Ω1 and Ω2 , we obtain y ∗ ∈ ∂ ∞ ϕΘ−y,k (y) and −y ∗ ∈ DM ∗ Γ −1 (y, x)(0) ∩ (−∂ ∞ ϕΘ−y,k (y)). −y ∗ ∈ DM

The subdifferential ∂ ∞ ϕΘ−y,k (y) = H0 (k) ∩ (−N (0; Θ)) of the translation invariant functional ϕΘ−y,k in Theorem 4.3.2 and the supposed mixed qualification condition for {Γ, Θ} in (H3) ensure y ∗ = 0. According to the SNC assumptions in (H2), we study two cases: w∗

∗ → y ∗ and y ∗ = 0, Case 1: Suppose that Θ is SNC at 0. Then, from y1j ∗ we obtain y1j  → 0. ∗ →0 Case 2: Suppose that Γ −1 is PSNC at (y, x). Then, we obtain y2j w∗

∗ ∗ from x∗2j  → 0 and y2j → 0. From (4.77) and yj∗  → 0, we obtain y1j →0. ∗ Then, we get y1j  → 0 in both cases. This, together with λj → 0, t1j → λ = 0, contradicts condition (4.78). So, it holds that λ > 0. Now, we suppose that λj ≡ 1 for all j ∈ N without loss of generality. We obtain ∗ Γ −1 (y, x)(0). y ∗ ∈ ∂ϕΘ−y,k (y) and − y ∗ ∈ DM ∗ ∗ Γ −1 (y, x)(0) ⊆ DN Γ −1 (y, x)(0), we conclude Since DM ∗ ∗ Γ −1 (y, x)(0) ⇔ (0, −y ∗ ) ∈ N ((x, y), gr Γ ) ⇔ 0 ∈ DN F (x, y)(y ∗ ). −y ∗ ∈ DN

By the structure of the subdifferential of the translation invariant functional ϕΘ−y,k in Theorem 4.3.2, we obtain y ∗ ∈ −N (−y; Θ − y) = −N (0; Θ) and  y ∗ (k) = 1. This completes the proof. Remark 4.4.5 (A discussion of the assumptions in Theorem 4.4.4). (a) By analyzing the proof of Theorem 4.4.4, we recognize that the mixed qualification condition for {Γ, Θ} in (4.74) could be replaced by the qualification condition for {Γ, ϕΘ−y,k } ∗ Γ −1 (y, x)(0) ∩ (−∂ ∞ ϕΘ−y,k )(y)) = {0}, DM

and that y ∗ could be chosen from ∂ϕΘ−y,k )(y). (b) The mixed qualification condition (4.74) for {Γ, Θ} is fulfilled if one of the following conditions holds true: • Θ is a convex cone with int Θ = ∅; • Γ −1 is locally Lipschitz-like around (y, x). Certain fuzzy results based on the characteristic of the openness property at nondominated solutions of problem (≤C -SP) with respect to a convex ordering cone C are derived by Durea and Strugariu in [166]. If (x, y) ∈ gr Γ is a local nondominated solution of (≤C -SP) and C is not a linear subspace of Y , then the set-valued map Γ is not open at (x, y). Recall that Γ is referred

338

4 Generalized Differentiation and Optimality Conditions

as open at (x, y) ∈ gr Γ if the image through Γ of every neighborhood of x is a neighborhood of y. The next corollary (derived in [43, Corollary 4.1]) shows a connection between ≤Θ -nondominated solutions and linear openness (see [414, Definition 3.1(a)]). Corollary 4.4.6. (Lipschitz-like behavior at ≤Θ -nondominated solutions of set-valued mappings). Suppose that the assumptions of Theorem 4.4.4 are satisfied. If (x, y) is a ≤Θ -nondominated solution of (≤Θ -SPX ), then Γ −1 is not Lipschitz-like around (y, x), and therefore Γ is not linearly open at (x, y). Proof. Theorem 4.4.4 yields ∗ DM Γ −1 (y, x)(0) = {0}.

Hence, Γ −1 is not Lipschitz-like around (y, x) because of Lemma 4.1.9.



It is well established that the constrained set optimization problem (≤Θ SP) ≤Θ - opt Γ (x) subject to

(4.79)

x∈S can be reformulated as an equivalent unconstrained problem by employing the set-valued mapping ΓS : X ⇒ Y of Γ over S given by  Γ (x) if x ∈ S, ΓS (x) := ∅ if x ∈ S. Before we present necessary conditions for ≤Θ -nondominated solutions to the constrained optimization problem (≤Θ -SP), we discuss important properties for the set-valued mapping ΓS (see [43, Proposition 4.2]). Proposition 4.4.7 (Calculus rules for restricted mappings). Let X and Y be Asplund spaces. Consider a set-valued mapping Γ : X ⇒ Y , a nonempty set S ⊆ X and the set-valued mapping ΓS . Fix x ∈ S ∩ dom Γ and y ∈ Γ (x). Suppose that gr Γ and S are closed around (x, y) and x, respectively. Then, the following assertions hold. (i) (Coderivative rule for ΓS ) Suppose that the norm-convergence qualification condition for {Γ, S} is fulfilled: ∗ ) with For any sequence (x1j , x2j , y1j , x∗1j , x∗2j , y1j

4.4 Application to Optimization Problems with Set-valued Objectives

339





(x1j , y1j ) ∈ gr Γ, x2j ∈ S, ⎥ ⎢ x∗ ∈ D ˆ ∗ Γ (x1j , y1j )(y ∗ ), x∗ ∈ N ˆ (x2j ; S), 1j 2j ⎦ ⎣ 1j ∗ w ∗ ∗ ∗ ∗ (x1j , y1j ) → (x, y), x2j → x, (x1j , x2j ) −→ (x1 , x2 )

(4.80)

it holds that (x∗1j + x∗2j  → 0 ∧ yj∗  → 0) ⇒ x∗1j  + x∗2j  → 0.

(4.81)

Then, the coderivative of ΓS can be estimated by ∀y ∗ ∈ Y ∗ : D∗ ΓS (x, y)(y ∗ ) ⊆ D∗ Γ (x, y)(y ∗ ) + N (x; S), where D∗ stands for both the normal and mixed coderivatives defined in Definition 4.1.3. (ii) (PSNC rule for ΓS ) Suppose that the following conditions are satisfied: (a) (SNC condition) Either Γ is SNC at (x, y), or Γ −1 is PSNC at (y, x) and S is SNC at x. (b) (Mixed qualification condition for {Γ, S}) For any sequence (x1j , x2j , ∗ ) fulfilling condition (4.80), it holds that y1j , x∗1j , x∗2j , y1j w∗

(x∗1j + x∗2j  → 0 ∧ yj∗ → 0) =⇒ x∗1 = x∗2 = 0,

(4.82)

where x∗i is the weak∗ -limit of the sequence {x∗ij } for i = 1, 2. Then, the mapping ΓS−1 is PSNC at (y, x). Proof. For the proof of (i), see [413, Proposition 3.12]. Now, we will show the PSNC property for ΓS−1 in (ii). It holds that gr ΓS = S1 ∩ S2 , where S1 and S2 are subsets in the product space X × Y given by S1 := gr Γ and S2 := S × Y.

(4.83)

To verify the PSNC property of ΓS−1 , we prove that for any sequences ˆ ((xj , yj ); S1 ∩ S2 ), for all j ∈ N, if (xj , yj ) ∈ S1 ∩ S2 , (x∗j , yj∗ ) ∈ N w∗

(xj , yj ) → (x, y), x∗j  → 0, yj∗ → 0, then yj∗  → 0 as j → +∞. It is sufficient to prove that yj∗  → 0 along a subsequence because we are dealing with arbitrary sequences fulfilling the convergence properties given above. We apply [413, Lemma 3.1] (see Lemma 4.2.21) for each j ∈ N for a fixed sequence εj ↓ 0. So, we find λj ≥ 0, (xij , yij ) ∈ Si , (xij , yij ) − (xj , yj ) ≤ εj , (i = 1, 2), ˆ ((x1j , y1j ); S1 ) = N ˆ ((x1j , y1j ); gr Γ ) (x∗ , y ∗ ) ∈ N 1j

1j

ˆ ((x2j , y2j ; S2 ) = N ˆ (x2j ; S) × {0}, (x∗2j , 0) ∈ N fulfilling

340

4 Generalized Differentiation and Optimality Conditions ∗ (x∗1j , y1j ) + (x∗2j , 0) − λj (x∗j , yj∗ ) ≤ 2εj ,

(4.84)

∗ 1 − εj ≤ max{λj , x∗1j , y1j } ≤ 1 + εj .

(4.85)

w∗

Taking into account x∗j  → 0 and yj∗ → 0, the sequence (x∗j , yj∗ ) is bounded. ∗ , x∗2j )} and From (4.84) and (4.85), we obtain that the sequences {(x∗1j , y1j {λj } are bounded. Since X and Y are Asplund spaces, we may assume that the triple sequence weak∗ -converges to some (x∗1 , y1∗ , x∗2 ), and that λj → λ ≥ 0 as j → +∞. By (4.84) and by the convergence of (x∗j , yj∗ ), this yields that w∗

∗ → 0. x∗1j + x∗2j  → 0 and y1j

The supposed mixed qualification condition for {Γ, S} implies x∗1 = x∗2 = 0 and y1∗ = 0. Let us consider two cases according to the SNC conditions supposed in the theorem. Case 1: Suppose that Γ is SNC at (x, y), i.e., gr Γ is SNC at (x, y). Then, ∗  → 0 as j → +∞. we obtain x∗1j  → 0 and y1j Case 2: Suppose that S is SNC at x. It follows that x∗2j  → 0 and therefore x∗1j  → 0 as j → +∞. Then, the PSNC of Γ −1 guarantees that ∗  → 0 as j → +∞. y1j ∗  → 0 as j → +∞ in both cases. Hence, So, we get x∗1j  → 0 and y1j condition (4.85) yields λj → λ = 1 as j → +∞, and (4.84) implies yj∗  → 0 as j → +∞. This completes the proof of (ii).  Remark 4.4.8. When the pair {Γ, S} fulfills both SNC and qualification condition in (ii), then the norm-convergence qualification condition in (i) is satisfied, i.e., under the assumptions supposed in (ii), we could employ the coderivative rule for ΓS . The following necessary optimality condition for ≤Θ -nondominated solutions to the constrained problem (≤Θ -SP) is shown in [43, Theorem 4.2]. Theorem 4.4.9 (Necessary conditions for ≤Θ -nondominated solutions to (≤Θ -SP)). Let problem (≤Θ -SP) and a ≤Θ -nondominated solution (x, y) be given. Consider a scalarization direction k ∈ dir(Θ) of Θ, and let ϕ = ϕΘ−y,k be the translation invariant functional given by (2.42). Suppose that the following conditions hold: (H1) (Closedness condition) The domination set Θ is closed around the origin, gr Γ is closed around (x, y), and S is closed around x. (H2) (SNC conditions) One of the following conditions holds: (a) Θ is SNC at 0 and S is SNC at x; (b) Θ is SNC at 0 and Γ is PSNC at (x, y); (c) S is SNC at x and Γ −1 is PSNC at (y, x); (d) Γ is SNC at (x, y).

4.4 Application to Optimization Problems with Set-valued Objectives

341

(H3) (Qualification conditions) • Either the norm-convergence qualification condition for {Γ, S} is fulfilled for the SNC condition (a) or (b), or the mixed one is satisfied for the SNC condition (c) or (d). • The qualification condition for {Θ, Γ, S}: For any sequence ∗ )} {(x1j , x2j , y1j , x∗1j , x∗2j , y1j

fulfilling condition (4.80) it holds that # $ w∗ w∗ x∗1j → x∗1 , x∗2j → x∗2 , x∗1j + x∗2j  → 0, w∗

∗ → −y1∗ , y1∗ ∈ −N (0; Θ) ∩ H0 (k) y1j

% =⇒

& x∗1 = x∗2 = 0 . y1∗ = 0

Then, there exists y ∗ ∈ −N (0; Θ) with y ∗ (k) = 1 and 0 ∈ ∂Γ (x, z)(y ∗ ) + N (x; S).

(4.86)

Proof. Because (x, y) is a ≤Θ -nondominated solution to problem (≤Θ -SP), ΓS . Under our assumptions, it is a ≤Θ -nondominated solution to the mapping   we find some y ∗ ∈ ∂ϕΘ−y,k (y) = H1 (k) ∩ − N (ϕ(y)k − y; Θ) such that ∗ Γ −1 (y, x)(0) providing that the following SNC and qualification −y ∗ ∈ DM conditions hold: • (SNC condition) Either Θ is SNC at 0 or ΓS−1 is PSNC at (y, x). ∗ ΓS−1 (y, x)(0) ∩ −∂ ∞ ϕΘ−y,k (y) = {0}. • (qualification condition) DM ∗ ΓS−1 (y, x)(0) ⊆ D∗ ΓS−1 (y, x)(0). From −y ∗ ∈ D∗ Γ −1 Notice that DM (y, x)(0), we obtain that (0, −y ∗ ) ∈ N ((x, y); gr ΓS ) and therefore 0 ∈ ∗ ΓS (x, y)(y ∗ ). The mixed qualification condition for {Γ, S} together with DN the qualification condition (c) or (d) yields the norm-convergence qualification condition for {Γ, S}. Applying the coderivative rule for ΓS , we obtain ∗ 0 ∈ DN Γ (x, y)(y ∗ ) + N (x; S).

This completes the proof of the theorem provided that the aforementioned SNC and qualification conditions are fulfilled. Under the assumptions made in the theorem, the SNC condition is satisfied by Proposition 4.4.7. In order to verify the qualification condition, we consider an arbitrary element ∗ ΓS−1 (y, x)(0) ∩ (−∂ ∞ ϕΘ−y,k (y)). y ∗ ∈ DM Employing the definition of mixed coderivative, we find sequences {(xj , yj , ˆ ((xj , yj ); gr Γ ), (xj , yj ) → x∗j , yj∗ )} fulfilling (xj , yj ) ∈ gr Γ , (x∗j , yj∗ ) ∈ N (x, y), w∗

x∗j  → 0 and yj∗ → y ∗ .

(4.87)

342

4 Generalized Differentiation and Optimality Conditions

Consider gr ΓS = S1 ∩ S2 ⊆ X × Y with S1 = gr Γ and S2 = S × Y . For a fixed sequence εj ↓ 0, we apply the fuzzy intersection rule (see Lemma 4.2.21) for each j ∈ N. Then, we can choose sequences λj ≥ 0, (xij , yij ) ∈ Si , (xij , yij ) − (xj , yj ) ≤ εj , (i = 1, 2), ∗ ˆ ((x1j , y1j ); S1 ) = N ˆ ((x1j , y1j ); gr Γ ), )∈N (x∗1j , y1j ˆ ((x2j , y2j ; S2 ) = N ˆ (x2j ; S) × {0}, (x∗2j , 0) ∈ N fulfilling

∗ ) + (x∗2j , 0) − λj (x∗j , yj∗ ) ≤ 2εj , (x∗1j , y1j

(4.88)

∗ max{λj , x∗1j , y1j }

(4.89)

1 − εj ≤

≤ 1 + εj .

w∗

Because x∗j  → 0 and yj∗ → 0 (see 4.87), the sequences {x∗j } and {yj∗ } are ∗ bounded. From (4.88) and (4.89), we obtain that the sequences {(x∗1j , y1j , x∗2j )} and {λj } are bounded. Since X and Y are Asplund spaces, we can assume that the triple sequence weak∗ -converges to some (x∗ , y ∗ , −x∗ ), and that λj → λ ≥ 0 as j → +∞. We will prove that λ > 0. By contradiction, we suppose that λ = 0. Then, we obtain w∗

∗ y1j → 0 and x∗1j + x∗2j  → 0.

The mixed limiting qualification condition implies x∗ = 0 and y ∗ = 0. Each qualification condition in (a)-(d) yields x∗1j  → 0, x∗2j  → 0 and yj∗  → 0. This contradicts the nontriviality condition (4.89). Hence, λ > 0. Without loss of generality, we suppose that λj ≡ 1 for all j ∈ N in (4.88). Now, we have ∗ ∗ w → y ∗ and x∗1j + x∗2j  → 0. y1j The qualification condition for {ϕΘ−y,Γ,S } forces y ∗ = 0. This shows that the qualification condition is fulfilled.  Remark 4.4.10 (Some comments on local nondominated solutions). Suppose additionally that Θ + Θ ⊆ Θ. Then, we can formulate subdifferential versions of the necessary optimality conditions for ≤Θ -nondominated elements as in [41, 42]. This follows since, if (x, y) is a ≤Θ -nondominated solution of a set-valued mapping Γ , then it is also a ≤Θ -nondominated solution of the epigraphical multifunction of Γ with respect to Θ. The following subdifferential necessary condition for ≤Θ -nondominated solutions to (≤Θ -SP) is shown in [43, Corollary 4.2]. Corollary 4.4.11 (Subdifferential necessary condition for ≤Θ -nondominated solutions to (≤Θ -SP)). Let problem (≤Θ -SP) and a ≤Θ -nondominated solution (x, y) be given. We consider a scalarization direction k ∈ dir(Θ) of Θ and ϕ = ϕΘ−y,k is the translation invariant functional given by (2.42). Suppose that Θ + Θ ⊆ Θ and Θ ∩ (−Θ) = {0} and also that

4.4 Application to Optimization Problems with Set-valued Objectives

343

(H1) (Closedness condition) The domination set Θ is closed around the origin and epi Γ = gr EΓ is closed around (x, y). (H2) (PSNC condition) Either Θ is SNC at 0 or EΓ−1 is PSNC at (y, x). (H3) (Mixed qualification condition for {EΓ , Θ}) ∗ (y ∗ ∈ DM EΓ−1 (y, x)(0) ∩ (−N (0; Θ)) ∧ y ∗ (k) = 0) =⇒ y ∗ = 0.

Then, there exists y ∗ ∈ −N (0; Θ) with y ∗ (k) = 1 and 0 ∈ ∂Γ (x, y)(y ∗ ). Proof. Taking into account Theorem 4.4.4, it is sufficient to show that (x, y) is a ≤Θ -nondominated solution of EΓ . Arguing by contradiction, we suppose that it is not a ≤Θ -nondominated solution of EΓ . Then, there is (x, y) ∈ epi EΓ with y = y and y ≤Θ y, i.e., y ∈ y − Θ. By the definition of epigraph, there exists θ ∈ Θ such that y˜ = y − θ ∈ Γ (x). Since y ∈ y − Θ, y˜ = y − θ ∈ y − Θ − θ ⊆ y − Θ, i.e., y˜ ≤Θ y. Because (x, y) is a ≤Θ nondominated solution to (≤Θ -SP), we obtain y˜ = y − θ = y and thus y ∈ y + Θ. From the pointedness of Θ, we obtain y = y, in contradiction to  the choice of y (y = y). This completes the proof.

5 Applications

In this chapter, we study applications in approximation theory, primal-dual algorithms for solving approximation as well as locational problems and εminimum principles for deterministic and for stochastic multiobjective control problems. In comparison with the first edition, we added a section about the asymptotic behavior of first-order dynamical systems. Here it is elaborated for a vectorial equilibrium problem and then set up for a vector optimization problem.

5.1 Approximation Problems 5.1.1 General Approximation Problems Location and approximation problems play an important role in optimization theory, and many practical problems can be described as location or approximation problems. Besides problems with one objective function, several authors have even investigated vector-valued (synonymously vector or multiobjective) location and approximation problems. In this section, we will consider a general vector control approximation problem and derive necessary conditions for approximate solutions of this problem. Throughout this section, we assume (A1) (X,  · X ), (Y,  · Y ) and (Z,  · Z ) are real reflexive Banach spaces; (A2) C ⊂ Y is a pointed, closed, convex cone with k 0 ∈ C \ (−C). Moreover, we assume that C is a cone with int C = ∅ having the Daniell property, which means that every decreasing net (i.e., i ≤ j implies xj ≤ xi ) having a lower bound converges to its infimum (see Section 2.1). Further, we suppose that C has a weakly compact base (cf. Lemma 2.2.35).

© Springer Nature Switzerland AG 2023 A. G¨ opfert et al., Variational Methods in Partially Ordered Spaces, CMS/CAIMS Books in Mathematics 7, https://doi.org/10.1007/978-3-031-36534-8 5

345

346

5 Applications

In order to formulate our vector control approximation problem, we will introduce a vector-valued norm: ||| · ||| : Z −→ C is called a vector-valued norm if ∀ z, z1 , z2 ∈ Z, ∀ λ ∈ R, 1. |||z||| = 0 ⇐⇒ z = 0; 2. |||λ z ||| =| λ | |||z|||; 3. |||z1 + z2 ||| ∈ |||z1 ||| + |||z2 ||| − C. In the following, we assume that |||·||| is continuous. The set of linear continuous mappings from X to Y is denoted by L(X, Y ). Suppose that f1 ∈ L(X, Y ), Ai ∈ L(X, Z), and αi ≥ 0 (i = 1, . . . , n). Here A∗i denotes the adjoint operator to Ai . For brevity and clarity we sometimes omit parentheses in connection with y ∗ and A∗i . Then we consider for x ∈ X and ai ∈ Z (i = 1, . . . , n) the vector-valued function n  f (x) := f1 (x) + αi |||Ai (x) − ai |||. i=1

Now, we will introduce the following vector control approximation problem: Compute the set Eff(f (X), C). (P1) Remark 5.1.1. The problem (P1) contains the following practically important special cases: 1. Vector-valued optimal control problems of the form (cf. Section 5.4) Eff(F1 (U ), R2+ ), 

with F1 (u) :=

A(u) − a1 u2

 ,

u ∈ U ⊂ X,

where H1 and H2 are Hilbert spaces, A ∈ L(H1 , H2 ), a ∈ H2 , U ⊂ H1 is a nonempty closed convex set, and R2+ denotes the usual ordering cone in R2 . Here u denotes the so-called control variable; the image z = Au denotes the state variable. 2. Scalar location and approximation problems (Y = R, f1 ≡ 0): n  i=1

αi Ai (x) − ai  −→ inf , x∈X

where  ·  is a norm in Z (cf. Section 3.8 and 5.2). 3. Vector approximation and location problems (f1 ≡ 0, n = 1, cf. Bot¸, Grad and Wanka [74], Jahn [306], Gerth and P¨ ohler [214], Henkel and Tammer [268, 269], Jahn [314], Tammer [528, 529], Wanka [554], Oettli [443]). 4. Linear vector optimization problems (αi = 0 for all i = 1, . . . , n).

5.1 Approximation Problems

347

5. Surrogate problems for linear vector optimization problems with an objective function f (x) := f1 (x) subject to x ∈ X and A(x) = a, for which the feasible set is empty. 6. Perturbed linear vector optimization problems. 7. Tychonoff regularization for linear vector optimization problems. In the following theorem, we will derive necessary conditions for approximately efficient solutions of Eff(f (X), Cεk0 ) (cf. Section 3.1.1, Definition 3.1.1) using the concept of C-convexity introduced in Section 2.6 and the subdifferential of a C-convex function f : X → Y (see Definition 2.6.7). Taking into account the notation of this section, we consider the subdifferential of the C-convex function f : X −→ Y at x0 ∈ X: ∂ ≤ f (x0 ) = {M ∈ L(X, Y ) | ∀ x ∈ X : M (x − x0 ) ∈ f (x) − f (x0 ) − C}. The subdifferential of the vector-valued norm ||| · ||| : Z −→ Y has the following form (cf. Jahn [306]): ∂ ≤ ||| · |||(z0 ) = {M ∈ L(Z, Y ) | M (z0 ) = |||z0 |||, ∀ z ∈ Z : |||z||| − M (z) ∈ C}. (5.1) The following result will be used in the proof of Theorem 5.1.3. Lemma 5.1.2. (Jahn [306]) Let S be a nonempty subset of a partially ordered reflexive Banach space Y with a pointed nontrivial ordering cone C. If the set S+C is convex and has a nonempty topological interior, then for each efficient element y¯ ∈ S of the set S there exists a linear functional y ∗ ∈ C + \ {0} with the property y ) ≤ y ∗ (y) for all y ∈ S. y ∗ (¯ Now, we formulate the main result where we are using the concept of approximately efficient elements introduced in Definition 3.1.1 and corresponding notations. Theorem 5.1.3. Under the assumptions of this section, for any ε > 0 and any approximately efficient element f (x0 ) ∈ Eff(f (X), Cεk0 ) there exist an element xε , a functional y ∗ ∈ C + \ {0}, and linear continuous mappings Miε ∈ L(Z, Y ) with Miε (Ai (xε ) − ai ) = |||Ai (xε ) − ai |||, |||z||| − Miε (z) ∈ C for all z ∈ Z such that (α) (β) (γ)

(i = 1, . . . , n),

√ f (xε ) ∈ f (x0 ) − εx0 − xε X k 0 − C, f (xε ) ∈ Eff(f (X), Cεk0 ), √ xε − x0 X ≤ ε,  √ n y ∗ f1 + i=1 αi A∗i y ∗ Miε ∗ ≤ εy ∗ (k 0 ).

348

5 Applications

Proof. We assume f (x0 ) ∈ Eff(f (X), Cεk0 ). Under the given assumptions, Corollary 3.11.14 implies the existence of an element xε with √ (α ) f (xε ) ∈ f (x0 ) − εx0 − xε X k 0 − C, √ (β  ) x0 − xε X ≤ ε, (γ  ) fεk0 (xε ) ∈ Eff(fεk0 (X), C), √ where fεk0 (x) := f (x) + εx − xε X k 0 . Because of the C-convexity of f and by assertion (γ  ) it is possible to conclude from Lemma 5.1.2 that there exists a functional y ∗ ∈ C + \ {0} with   √ ∗ ∗ ∗ 0 y (f (xε )) ≤ inf y (f (x)) + εx − xε X y (k ) . x∈X

This means that xε minimizes the function √ x −→ y ∗ (f (x)) + ε x − xε X y ∗ (k 0 ). Then the subdifferential calculus of convex functionals (cf. Aubin and Ekeland [35], Corollary 4.3.6) implies that √ 0 ∈ ∂y ∗ (f (xε )) + εy ∗ (k 0 )B 0 , where B 0 denotes the (closed) unit ball in X ∗ . So it follows immediately that there is a linear continuous functional lε : X −→ R belonging to the subdifferential of the scalarized function y ∗ ◦ f at the point xε ∈ X with √ lε ∗ ≤ ε y ∗ (k 0 ). (5.2) Further, we have 





∂(y ◦ f )(xε ) = ∂ y (f1 (·)) +

n 

 αi y |||Ai (·) − a ||| (xε ), ∗

i

i=1 ∗

because y is a linear functional. The rule of sums for subdifferentials yields the relation 



∂ y (f1 (·)) +

n 

 αi y |||Ai (·) − a ||| (xε ) ∗

i

i=1

= ∂y ∗ (f1 (·))(xε ) +

n 

αi ∂y ∗ |||Ai (·) − ai |||(xε ).

i=1

Moreover, from Theorem 2.6.8 and Corollary 4.3.6 in Aubin and Ekeland [35] we get the following equation:

5.1 Approximation Problems

∂(y ∗ ◦ f )(x) = ∂y ∗ (f1 (·))(x) +

n 

349

αi ∂y ∗ |||Ai (·) − ai |||(x)

i=1

= y ∗ f1 +

n 

αi A∗i ∂(y ∗ ||| · |||)(Ai (x) − ai )

i=1

= y ∗ f1 +

n 

αi A∗i y ∗ ∂ ≤ ||| · |||(Ai (x) − ai ) = y ∗ ◦ ∂ ≤ f (x), (5.3)

i=1

where f1 ∈ ∂ ≤ f1 (x). Applying (5.1), statement (5.3) implies ∂(y ∗ ◦ f )(x) = y ∗ f1 +

n 

αi A∗i y ∗ ∂ ≤ ||| · |||(Ai (x) − ai ) = y ∗ f1 +

i=1

n  i=1

αi A∗i y ∗ Miε (5.4)

with Miε ∈ L(Z, Y ) , Miε (Ai (xε ) − ai ) = |||Ai (xε ) − ai ||| and |||z||| − Miε (z) ∈ C for all z ∈ Z

(i = 1, . . . , n).

From (5.2), (5.3), and (5.4) we get the desired inequality   n    √ ∗ 0  ∗  αi A∗i y ∗ Miε  ≤ ε y (k ). y f1 +   i=1





Remark 5.1.4. 1. Obviously, if we use a scalarization of the approximation problem (P1) with linear continuous functionals y ∗ ∈ C + \ {0} and Ekeland’s original result [180], then we can show in the same way as in Theorem 5.1.3 that for any ε > 0 and any approximate solution x0 ∈ X with y ∗ (f (x0 )) ≤ inf x∈X y ∗ (f (x)) + ε, y ∗ ∈ C + \ {0}, there exists an element xε and linear continuous mappings Miε ∈ L(Z, Y ) with Miε (Ai (xε ) − ai ) = |||Ai (xε ) − ai |||, |||z||| − Miε (z) ∈ C for all z ∈ Z

(i = 1, . . . , n),

such that √ y ∗ (f (xε )) ≤ y ∗ (f (x0 )) − εx0 − xε X , (α ) √ xε − x0 X ≤ ε, (β  ) n √ y ∗ f1 + i=1 αi A∗i y ∗ Miε ∗ ≤ ε. (γ  ) But the assertion (α) in Theorem 5.1.3 is a sharper result than (α ). 2. The assertion (γ) of Theorem 5.1.3 is a sharper result than the assertion in Theorem 2 of [269], because the mappings Miε (i = 1, . . . , n) in Theorem 5.1.3 do not depend on a direction v ∈ X.

350

5 Applications

As a direct consequence of Theorem 5.1.3, we get an extension of Theorem 5.4.3 of Aubin/Ekeland [35] for real-valued functions to vector-valued functions. Now, we want to apply this result to the vector-valued approximation problem (P1). Here we will use the following set: dom f := {x ∈ X | ∃ y ∗ ∈ C #

with

y ∗ (f (x)) < +∞}.

Corollary 5.1.5. Under the assumptions of this section, the set of points where f is subdifferentiable is dense in dom f ; i.e., for each x ∈ dom f there exists a sequence {xk }, k ∈ N, with (α) (β) (γ)

xk → x, y ∗ (f (xk )) → y ∗ (f (x)) for an element y ∗ ∈ C # , ∀ i = 1, . . . n, ∃ Mik ∈ L(Z, Y ) with Mik (Ai (xk ) − ai ) = |||Ai (xk ) − ai |||, ∀ z ∈ Z : |||z||| − Mik (z) ∈ C f1 +

n 

αi A∗i y ∗ Mik ∈ ∂ ≤ (f (xk )) = ∅

and

for all k.

i=1

Remark 5.1.6. The assumption in Theorem 5.1.3 and Corollary 5.1.5 that C is a convex cone with a weakly compact base may be replaced by the assumption that X is a Gˆateaux differentiability space (compare [77], [305]). In the following, we will study some practically important special cases of the general approximation problem (P1). Let us now assume, that the space Y is the space of real numbers R. Suppose that f1 ∈ L(X, R), Ai ∈ L(X, Z) and αi ≥ 0 (i = 1, . . . , n). Then we consider for x ∈ X and ai ∈ Z (i = 1, . . . , n) the following real-valued approximation problem: f˜2 (x) := f1 (x) +

n 

αi Ai (x) − ai Z −→ inf . x∈X

i=1

(P2)

If we put f1 = 0 and Ai = I for all i = 1, . . . , n, we get the special case of the real-valued location problem, i.e., fˆ3 (x) =

n  i=1

αi x − ai  −→ inf . x∈X

(P3)

In the following corollaries, we will see that for the special approximation problems (P2) and (P3) the assertions of Theorem 5.1.3 get an easier form.

5.1 Approximation Problems

351

Corollary 5.1.7. We consider the real-valued problem (P2), which is a special case of (P1) if we put k 0 = 1 and C = {x ∈ R | x ≥ 0}. Then, for any ε > 0 and any approximate solution x0 with f˜2 (x0 ) ≤ inf x∈X f˜2 (x) + ε there exist an element xε and linear continuous functionals liε ∈ Z ∗ with liε (Ai (xε ) − ai ) = Ai (xε ) − ai , and liε ∗ = 1 such that √ (α) f˜2 (xε ) ≤ f˜2 (x0 ) − εx0 − xε X , √ (β) xε − x0 X ≤ ε, n √ (γ) f1 + i=1 αi A∗i liε ∗ ≤ ε. Corollary 5.1.8. We consider the real-valued location problem (P3), which is a special case of (P1) if we additionally put k 0 = 1 and C = {x ∈ R | x ≥ 0}. Then, for any ε > 0 and any approximate solution x0 with fˆ3 (x0 ) ≤ inf x∈X fˆ3 (x) + ε there exist an element xε and linear continuous functionals liε ∈ Z ∗ with liε (xε − ai ) = xε − ai , and liε ∗ = 1 such that √ (α) fˆ3 (xε ) ≤ fˆ3 (x0 ) − εx0 − xε X , √ (β) xε − x0 X ≤ ε, n √ (γ)  i=1 αi liε ∗ ≤ ε. Corollary 5.1.9. Consider the scalar optimization problem (P2), which is a special case of the problem (P1) if we additionally put k 0 = 1 and C = {x ∈ R | x ≥ 0}. Then, the assertion (γ) of Corollary 5.1.5 takes the following form: (γ  ) ∃ lin ∈ Z ∗ with lin (Ai (xn ) − ai ) = Ai (xn ) − ai , lin ∗ = 1, (i = 1, . . . , n) n with f1 + i=1 αi A∗i lin ∈ ∂ ≤ f˜2 (xn ) = ∅. Corollary 5.1.10. Consider the scalar location problem (P3), which is a special case of the problem (P1) if we additionally put k 0 = 1 and C = {x ∈ R | x ≥ 0}. Then the assertion (γ) of Corollary 5.1.5 takes the following form: (γ  )

∃ lin ∈ Z ∗ with lin (xn − ai ) = xn − ai , lin ∗ = 1, (i = 1, . . . , n)

with  n i=1

αi lin ∈ ∂ ≤ fˆ3 (xn ) = ∅.

352

5 Applications

5.1.2 Finite-dimensional Approximation Problems We consider a class of vector-valued control approximation problems in which each objective function is a sum of two terms, a linear function and a power of a norm of a linear vector function. Necessary conditions for approximate solutions of these problems are derived using a vector-valued variational principle, the subdifferential calculus, and the directional derivative, respectively. In Section 5.1.1, we have derived necessary conditions for approximately efficient elements of a class of abstract approximation problems in which the objective function takes its values in a reflexive Banach space. However, these conditions are not easy to utilize, so that for more special problems it would be worthwhile to find necessary conditions that are easier to handle. The aim of this section is to derive necessary conditions for approximately efficient solutions for another class of approximation problems having a finitedimensional image space of the objective function, which is more useful for control problems in comparison with the approximation problem in Section 5.1.1. In our proofs, it is possible to use the special structure of the subdifferential of a power of the norm and a vector-valued variational principle. We will introduce a general vector-valued control approximation problem (P4). Using the variational principle and the subdifferential calculus, we derive necessary conditions for approximately efficient elements of the vector-valued control approximation problem (P4). Moreover, we show necessary conditions for approximately efficient elements of the vector-valued control approximation problem (P4) using the directional derivative of the objective function. For special cases of (P4), for instance, for the case of a real-valued control approximation problem, ε-variational inequalities will be presented. We suppose that (A) (X,  · X ) and (Yi ,  · i ) (i = 1, . . . , n) are real Banach spaces, x ∈ X, n ai ∈ Yi , αi ≥ 0 , βi ≥ 1, Ai ∈ L (X, Yi ), (i = 1, . . . , n), f1 ∈ L(X, mR ), (B) Dj ⊆ X ( j = 1, . . . , m) are closed and convex sets, and D = j=1 Dj is nonempty, (C) C ⊂ Rn is a pointed closed convex cone with nonempty interior and C + Rn+ ⊆ C , k 0 ∈ C \ (−C). Moreover,  · ∗ denotes the dual norm to  · X , and  · i∗ the dual norm to  · i . Let us recall that the dual norm  · ∗ to  · X is defined by p∗ :=

sup | p(x) | .

xX =1

Now we consider the following vector control approximation problem Compute the set Eff(f (D), C) where

(P4)

5.1 Approximation Problems

353

⎞ α1 A1 (x) − a1 β1 1 ⎠ f (x) := f1 (x) + ⎝ ··· n βn αn An (x) − a n ⎛

is the objective vector function. Additionally, we assume that (D) f : X −→ Rn is bounded from below on D, i.e., there exists some z ∈ Rn with f (D) ⊂ z + C. It is well known that the set Eff(f (D), C) may be empty in the general noncompact case (compare Section 3.3). We will derive necessary conditions for approximately efficient elements using a vector-valued variational principle and the subdifferential calculus. In order to apply the subdifferential calculus, we have to show that the objective function in (P4) is C-convex (compare Section 2.6). In order to prove our main result, we need the following assertion of Aubin and Ekeland [35] concerning the subdifferential of norm terms: Lemma 5.1.11. If X is a Banach space then we have {p ∈ L(X, R) | p(x) = x, p∗ = 1} if x = 0, ∂x = if x = 0, {p ∈ L(X, R) | p∗ ≤ 1} and for β > 1,   1 β  ·  (x) = {p ∈ L(X, R) | p∗ = xβ−1 , p(x) = xβ }. ∂ β In the next theorem, we will derive necessary conditions for approximately efficient solutions of Eff(f (X), Cεk0 ) using the subdifferential calculus. First, we state necessary conditions for approximately efficient elements of the vector-valued approximation problem (P4) with D = X. Theorem 5.1.12. Under the assumptions (A), (C), and (D) for any ε > 0 and any x0 ∈ X with f (x0 ) ∈ Eff(f (X), Cεk0 ), there exist an element xε ∈ X, a functional y ∗ ∈ C + \ {0}, and linear continuous mappings Miε ∈ L(Yi , R) with Miε (Ai (xε ) − ai ) = Ai (xε ) − ai βi i

and Miε i∗

≤1 if βi = 1 and Ai (xε ) = ai , i βi −1 otherwise, = Ai (xε ) − a i

for all i = 1, . . . , n, such that √ (α) f (xε ) ∈ f (x0 ) − εx0 − xε X k 0 − C, √ (β) xε − x0 X ≤ ε, n √ (γ) y ∗ f1 + i=1 αi βi A∗i yi∗ Miε ∗ ≤ εy ∗ (k 0 ).

354

5 Applications

Proof. We assume f (x0 ) ∈ Eff(f (X), Cεk0 ). Under the given assumptions, Corollary 3.11.14 implies the existence of an element xε ∈ X with √ (α ) f (xε ) ∈ f (x0 ) − εx0 − xε X k 0 − C, √ (β  ) x0 − xε X ≤ ε, (γ  ) fεk0 (xε ) ∈ Eff(fεk0 [X], C), √ where fεk0 (x) := f (x) + εx − xε X k 0 . Because of the C-convexity of f and by assertion (γ  ), we get that there exists a functional y ∗ ∈ C + \ {0} with

 √ (5.5) y ∗ (f (xε )) ≤ inf y ∗ (f (x)) + εx − xε X y ∗ (k 0 ) . x∈X

This means that xε minimizes the function √ x −→ y ∗ (f (x)) + εx − xε X y ∗ (k 0 ), and then from the subdifferential calculus of convex functionals (cf. Aubin and Ekeland [35], Corollary 4.3.6), we get √ 0 ∈ ∂(y ∗ ◦ f )(xε ) + εy ∗ (k 0 )B 0 , where B 0 is the closed unit ball in X ∗ . It follows immediately that there is a linear continuous functional lε : X −→ R belonging to the subdifferential of the scalarized function y ∗ ◦ f at the point xε ∈ X with √ lε ∗ ≤ ε y ∗ (k 0 ). Furthermore, we have 





∂(y ◦ f )(xε ) = ∂ y (f1 (·)) +

n 

αi yi∗ Ai (·)

 −a

i

||βi i

(xε ),

i=1

because y ∗ is a linear functional. Lemma 5.1.11 and the rule of sums for subdifferentials yield the relation   n  ∗ ∗ i βi ∂ y (f1 (·)) + αi yi Ai (·) − a i (xε ) i=1

= y ∗ ∂ ≤ (f1 (·))(xε ) +

n 

αi yi∗ ∂Ai (·) − ai βi i (xε ).

(5.6)

i=1

Applying Lemma 5.1.11, relation (5.6), implies ∂(y ∗ ◦ f )(xε ) = y ∗ f1 +

n 

αi A∗i yi∗ ∂(uβi i ) |u=Ai (xε )−ai

i=1

n   = y ∗ f1 + αi βi A∗i yi∗ Miε | Miε ∈ L(Yi , R), Miε (Ai (xε ) − ai ) = i=1

Ai (xε ) − ai βi i , Miε i∗ ≤ 1 if βi = 1 and Ai (xε ) = ai ,

 Miε i∗ = Ai (xε ) − ai βi i −1 otherwise (for all i, 1 ≤ i ≤ n) .

5.1 Approximation Problems

Then, we get the desired inequality   n √ ∗ 0  ∗  αi βi A∗i yi∗ Miε  ≤ ε y (k ). y f1 + i=1

355





In the next theorem, we derive necessary conditions for approximately efficient elements of the vector approximation problem (P4) with restrictions. In order to formulate necessary conditions, we need the indicator functions ιDj of Dj defined by ιDj (x) = 0 if x ∈ Dj , and ιDj (x) = +∞ otherwise. It is well known that the subdifferential of the indicator function ιDj at x0 ∈ X is the normal cone NDj (x0 ) to Dj at x0 ∈ X, defined by 0

NDj (x ) =



{p ∈ L(X, R) : ∅

p(x − x0 ) ≤ 0 for all x ∈ Dj } if x0 ∈ Dj , otherwise.

Theorem 5.1.13. Under the assumptions (A), (B), (C), and (D) for any ε > 0 and any x0 ∈ D with f (x0 ) ∈ Eff(f (D), Cεk0 ), there exist an element xε ∈ D, a functional y ∗ ∈ C + \ {0}, linear continuous mappings Miε ∈ L(Yi , R) with Miε (Ai (xε ) − ai ) = Ai (xε ) − ai βi i , ≤1 if βi = 1 and Ai (xε ) = ai , Miε i∗ i βi −1 otherwise, = Ai (xε ) − a i for all i = 1, . . . , n, and elements rj ∈ NDj (xε ) for all j = 1, . . . , m, such that √ (α) f (xε ) ∈ f (x0 ) − εx0 − xε X k 0 − C, √ (β) xε − x0 X ≤ ε, n m √ (γ) y ∗ f1 + i=1 αi βi A∗i yi∗ Miε + j=1 rj ∗ ≤ εy ∗ (k 0 ). Proof. We can follow the line of the proof of Theorem 5.1.12 taking into consideration that our problem has the feasible set D instead of X. Then, we get instead of inequality (5.5) the inequality

 √ y ∗ (f (xε )) ≤ inf y ∗ (f (x)) + εx − xε X y ∗ (k 0 ) . x∈D

This problem is equivalent to the following unconstrained minimization problem: F (x) := y ∗ (f (x)) +

m  √ εx − xε X y ∗ (k 0 ) + ιDj (x) −→ min . j=1

x∈X

Taking into account the fact that the subdifferential of the indicator function ιDj at xε is the normal cone to Dj at xε , we can conclude the statement of the theorem in the same way as in the proof of Theorem 5.1.12.  In the following, we will study special cases of (P4). Under the assumptions (A), (B), (C), and (D), we consider a special case of the vector-valued approximation problem (P4): αi = 1, βi = 1 for all i = 1, . . . , n, and C = Rn+ , namely,

356

5 Applications

Compute the set Eff(f (D), C) with

(P5)



⎞ A1 (x) − a1 1 ⎜ A2 (x) − a2 2 ⎟ ⎟. f (x) := f1 (x) + ⎜ ⎝ ⎠ ··· An (x) − an n

In order to derive necessary conditions for approximately efficient elements of (P5), we will use besides the subdifferential, the directional derivative. Definition 5.1.14. The directional derivative of the function f : X −→ Rn at x ∈ D in the direction v ∈ X is defined by f  (x)(v) :=

lim

t→+0

f (x + tv) − f (x) . t

Corollary 3.11.14 implies necessary conditions for approximately efficient elements. Such necessary conditions can be used in order to derive numerical algorithms. The following lemma is a direct consequence of Corollary 3.11.14 if we use the directional derivative of the norms Ai (x) − ai i , i = 1, . . . , n (see Jahn [306], Theorem 2.27). So we derive necessary conditions for approximately efficient solutions of the problem (P 5). In order to formulate the next lemma, we introduce a set of linear continuous mappings L1 (xε ) := {l = (l1 , . . . , ln ) | li ∈ L(Yi , R), li i∗ = 1, li (Ai (xε ) − ai ) = Ai (xε ) − ai i , i = 1, . . . , n}. Theorem 5.1.15. Under the assumptions given above, for any ε > 0 and any x0 ∈ D with f (x0 ) ∈ Eff(f (D), Cεk0 ), there exists some xε ∈ D with √ f (xε ) ∈ f (x0 ) − εxε − x0 X k 0 − C,

xε − x0 ||X ≤

√ ε,

such that for any feasible direction v at xε with respect to D having vX = 1 there is a linear continuous mapping lε ∈ L1 (xε ), with ⎞ ⎛ ⎞ ⎛ l1 A1 (v) lε1 A1 (v) ⎝ · · · ⎠ ∈ ⎝ · · · ⎠ + C for all l = (l1 , . . . , ln ) ∈ L1 (xε ) (5.7) lεn An (v) ln An (v) and



⎞ (A∗1 lε1 )(v) ⎜ (A∗2 lε2 )(v) ⎟ √ ⎟∈ / − ε k 0 − int C. f1 (v) + ⎜ ⎝ ⎠ ··· (A∗n lεn )(v)

(5.8)

5.1 Approximation Problems

357

Finally, we will study the special case of a real-valued approximation problem. We suppose that X and Y are real Banach spaces, n = 1, α1 = 1, k 0 = 1, A(= A1 ) ∈ L(X, Y ), f1 ∈ L(X, R), and a(= a1 ) ∈ Y . In this case we study the real-valued objective function f (x) = f1 (x) + A(x) − a, which is to be minimized over D. According to Theorem 5.1.15, we get for C = R+ , k 0 = 1 the following ε-Kolmogorov condition: max{f1 (v) + A∗ l(v) | l ∈ L(Y, R), l(A(xε ) − a) = A(xε ) − a, l∗ = 1} √ ≥ − ε. (5.9) Here the maximality property (5.7) in Theorem 5.1.15 implies that we have to maximize the left-hand side of the inequality (5.9) with respect to l ∈ L(Y, R) satisfying l(A(xε ) − a) = A(xε ) − a and

l∗ = 1,

and the ε-variational inequality (5.8) implies the right-hand side of the inequality (5.9). If we suppose that A is the identical operator and f1 = 0, then we derive from (5.7) and (5.8) in Theorem 5.1.15 the following ε-variational inequality: √ max{l(v) | l ∈ L(Y, R), l(xε − a) = xε − a, l∗ = 1} ≥ − ε. Furthermore, by putting ε = 0 in the last inequality, we get a well-known necessary condition for solutions of the real-valued approximation problem (see Jahn [306]): max{l(v) | l ∈ L(Y, R), l(xε − a) = xε − a, l∗ = 1} ≥ 0. But we recall that the assertion of Theorem 5.1.15 is true only for ε > 0, since for ε = 0 the existence of efficient solutions is not guaranteed. 5.1.3 Lp -Approximation Problems In approximation theory Lp -approximation problems play an important role. Now we will apply Theorem 5.1.15 in order to derive necessary conditions for approximately efficient elements of an Lp -approximation problem. Let us assume that S ⊂ Rn is a closed convex set, Y = Rm , C = Rm +, ε > 0 , k 0 ∈ C \ {0}. Suppose that Ω ⊂ Rq is compact. For elements fji ∈ Lpi (Ω), 1 ≤ pi ≤ ∞, [i = 1, . . . , m], [j = 0, . . . , n], we define the vector-valued function

358

5 Applications

n

⎞ xj fj1 − f01 p1 ⎠. ··· f (x) := ⎝  n m m  j=1 xj fj − f0 pm ⎛



j=1

In the following we study the vector optimization problem Compute the set Eff(f (S), C).

(P6)

As a consequence of Theorem 5.1.15 we get necessary conditions for approximately efficient elements of problem (P6). Corollary 5.1.16. Under the assumptions given above for any ε > 0 and any approximately efficient element f (x0 ) ∈ Eff(f (S), Cεk0 ), there exist elements xε ∈ Rn with f (xε ) ∈ Eff(f (S), Cεk0 ), √ x0 − xε Rn ≤ ε, such that for any feasible direction v at xε with respect to S having vX = 1, there is a linear continuous mapping lε ∈ L2 , where ∗ L2 := l = (l1 , . . . , lm ) | li ∈ Lpi , li pi ∗ = 1, for all i = 1, . . . , m,  Ω

   n   n   i i li (t) xεj fji (t) − f0i (t) dt =  x f − f , εj j 0  j=1

⎧ ⎨ pi /(pi − 1) : : with p∗i = ∞ ⎩ 1 :

pi

j=1

1 < pi < ∞ pi = 1 pi = ∞,

n n ⎞ ⎛ ⎞ l1 ( j=1 vj fj1 ) lε1 ( j=1 vj fj1 ) ⎠∈⎝ ⎠+C for all l = (l1 , . . . , lm ) ∈ L2 , ⎝ n· · · n· · · m m lεm ( j=1 vj fj ) lm ( j=1 vj fj ) ⎛

 ⎞ vj ( Ω fj1 (t)lεi (t)dt) √ ⎠∈ ⎝ · / − εk 0 − int C,  ··m n j=1 vj ( Ω fj (t)lεi (t)dt) ⎛ n

and

j=1

respectively m  i=1

yi∗

n  j=1

 vj Ω

 √ fji (t)lεi (t) dt ≥ − εy ∗ (k 0 )

for a

y ∗ ∈ C + \ {0}.

5.1 Approximation Problems

359

Moreover, in the case that S = Rn and f is Gˆ ateaux differentiable in a neighborhood of x0 , we have ⎞ ⎛ i  m f1 (t)lεi (t)dt   Ω √  ⎠ ··· yi∗ ⎝   ≤ εy ∗ (k 0 ).    i=1 f i (t)lεi (t)dt ∗ Ω n 5.1.4 Example: The Inverse Stefan Problem We discuss our method using the example of the inverse Stefan problem following Reemtsen [482] and Jahn [306]. We show necessary conditions for approximative solutions of the inverse Stefan problem, which are important for numerical algorithms. We consider the problem of melting ice, where the temperature distribution u(x, t) in the water at time t is described by the heat-flow equation uxx (x, t) − ut (x, t) = 0. We assume that the motion of the melting interface is known and some other boundary condition has to be determined; i.e., the ablating boundary δ(·) is a known function of t and the heat input g(t) along x = 0 is to be determined. Physically, the boundary condition has to be determined such that the melting interface moves in the prescribed way x = δ(t), t ≥ 0. Suppose that δ(t) ∈ C 1 [0, T ], T > 0, is a given function, 0 ≤ t ≤ T , 0 ≤ x ≤ δ(t), and δ(0) = 0. Put D(δ) := {(x, t) ∈ R2 | 0 < x < δ(t), 0 < t ≤ T } for δ ∈ C 1 [0, T ]. Now, consider the parabolic boundary value problem uxx (x, t) − ut (x, t) = 0, (x, t) ∈ D(δ), ux (0, t) = g(t), 0 < t ≤ T,

(5.10) (5.11)

where g ∈ C([0, T ]), g(0) < 0 is to be determined, ˙ = −ux (δ(t), t), δ(t)

u(δ(t), t) = 0,

0 < t ≤ T.

(5.12)

The inverse Stefan problem (5.10), (5.11), (5.12) was discussed by Crank [143]. In the following we apply the results of Section 5.1.3 for a characterization of approximate solutions of this problem in form of ε-Kolmogorov conditions using the settings u ¯(x, t, a) =

l 

ai wi (x, t),

l > 0 integer, fixed,

i=0 i

with

wi (x, t) =

[2]  k=0

i! xi−2k tk , (i − 2k)!k!

i = 0, . . . , l,

and as an ansatz g(t) = c0 + c1 t + c2 t2 , c0 ≤ 0, c1 ≤ 0, c2 ≤ 0.

360

5 Applications

So we get an objective function given by three error functions ϕ1 (t, a, c) := u ¯(δ(t), t, a) − 0, ϕ2 (t, a, c) := u ¯x (0, t, a) − g(t), ˙ ¯x (δ(t), t, a) − (−δ(t)), ϕ3 (t, a, c) := u ⎞ ⎛ ϕ1 (·, a, c)1 ϕ(a, c) := ⎝ ϕ2 (·, a, c)2 ⎠ . ϕ3 (·, a, c)3 Moreover, assume S ⊂ Rl × R3 and S := {s ∈ Rl × R3 | si ∈ R (i = 1, . . . , l + 3); si ≤ 0 (i = l + 1, . . . , l + 3)}. Now, we study the problem to determine the set Eff(ϕ(S), R3+ ) in order to compute approximate solutions of the inverse Stefan problem. This is a special case of problem (P5) with ⎞ ⎛ A1 (s) − a1 1 f (s) := ⎝ A2 (s) − a2 2 ⎠ , A3 (s) − a3 3 where Ai ∈ L(Rl × R3 , Yi ), Yi are reflexive Lq -spaces, especially A1 (t) = (w1 (δ(t), t), w2 (δ(t), t), . . . , wl (δ(t), t), 0, 0, 0), A2 (t) = (w1x (0, t), w2x (0, t), . . . , wlx (0, t), −1, −t, −t2 ), A3 (t) = (w1x (δ(t), t), w2x (δ(t), t), . . . , wlx (δ(t), t), 0, 0, 0), sT = (a1 , a2 , . . . , al , c0 , c1 , c2 ), a1 = (0, . . . , 0) ∈ Y1 , a2 = (0, . . . , 0) ∈ Y2 ,

a3 = −δ˙ ∈ Y3 = Lq [0, T ],

and  · i (i = 1, 2, 3) denotes a norm in a reflexive Lq -space Yi . Now it is possible to apply Theorem 5.1.15: Under the assumptions given above for any ε > 0 and any approximately efficient element ϕ(s0 ) ∈ εk 0 − Eff(ϕ(S), R3+ ), there exists √ an element sε ∈ S with ϕ(sε ) ∈ εk 0 − Eff(ϕ(S), R3+ ), sε − s0 Rl+3 ≤ ε, such that for any feasible direction v at sε with respect to S having v = 1 there is a linear continuous mapping lε ∈ L1 , where L1 := {l = (l1 , l2 , l3 ) | li ∈ L(Yi , R) : li i∗ = 1, li (Ai (sε ) − ai ) = Ai (sε ) − ai i for all i = 1, 2, 3}, with ⎞ ⎛ ⎞ ⎛ l1 A1 (v) lε1 A1 (v) ⎝ lε2 A2 (v) ⎠ ∈ ⎝ l2 A2 (v) ⎠ + R3+ for all l = (l1 , l2 , l3 ) ∈ L1 and lε3 A3 (v) l3 A3 (v) ⎞ ⎛ ∗ (A1 lε1 )(v) √ ⎝ (A∗2 lε2 )(v) ⎠ ∈ / − εk 0 − int R3+ , (A∗3 lε3 )(v) respectively 3  i=1

√ yi∗ ((A∗i lεi )(v)) ≥ − εy ∗ (k 0 )

for a

y ∗ ∈ R3+ \ {0}.

(5.13)

5.2 Solution Procedures

361

Remark 5.1.17. For ε = 0 the condition (5.13) coincides with the well-known Kolmogorov condition (cf. Jahn [306]), which means that the directional derivative at the optimal point is greater than or equal to zero. Moreover, necessary conditions for approximate solutions of the type (5.13) are important for numerical algorithms, especially for proximal-point algorithms (cf. Benker, Hamel, and Tammer [58]).

5.2 Solution Procedures 5.2.1 A Proximal-Point Algorithm for Real-Valued Control Approximation Problems We consider the real-valued optimization problem f (x) := c(x) +

n 

 βi αi Ai (x) − ai  →

i=1

min,

x∈D

which is an extension of a problem studied by Idrissi, Lefebvre, and Michelot [286, 287]. This class of problems contains many practically important special cases such as approximation, location, and optimal control problems, perturbed linear programming problems, and surrogate problems for linear programming. Necessary and sufficient optimality conditions are derived using the subdifferential calculus. A proximal-point algorithm is modified by the method of partial inverse (see Spingarn [513]) in order to solve the optimality conditions. For further references see [57, 58, 298]). Many authors have studied generalizations of the well-known Fermat Weber problem from the theoretical and computational points of view (see [104–108], [168], [169], [200], [286], [306], [356], [406], [560], [562], [561], [566]). The aim of this section is to extend the results of Michelot and Lefebvre [406] and Idrissi, Lefebvre, and Michelot [286] to the following general approximation problem (compare [57, 58]) f (x) = (c | x) +

n  i=1

 βi αi Ai (x) − ai  → min, x∈D

(5.14)

where c ∈ H , ai , and x ∈ H , αi ≥ 0 , βi ≥ 1, Ai ∈ L (H, H), (i = 1, . . . , n), (L(H, H) denotes the space of linear continuous operators from H to H), . . , m), are closed and convex sets, H is a Hilbert space , Dj ⊂ H ( j = 1, .  m D0 ⊆ H is a linear subspace, and D = j=0 Dj . Furthermore, B 0 denotes the unit ball associated with the dual norm of the norm ., PD (e) denotes the projection of e onto a set D. We assume that the inverse operator (AiT )−1 of the adjoint operator AiT to Ai exists for all i = 1, . . . , n. In order to derive a primal-dual

362

5 Applications

algorithm we assume in the following that any suitable constraint qualification (generalized Slater condition, stability, etc.) is satisfied (cf. [226]). Furthermore, NDj (x0 ) denotes the normal cone to Dj at x0 ∈ H defined by {x ∈ H | (x | x − x0 ) ≤ 0 for all x ∈ Dj } : x0 ∈ Dj , NDj (x0 ) = ∅ : otherwise. Remark 5.2.1. (1) If a set Dj is bounded (j = 1, . . . , m), then the existence of a solution of the problem (5.14) is ensured. (2) The problem (5.14) contains the following practically important special cases: • •

linear programming if αi = 0 for all i; surrogate problems for linear programming problems f (x) := (c | x) −→ minimum subject to x∈D

• • •

Ai (x) = ai

and

(i = 1, . . . , n),

for which the feasible set is empty but D is nonempty, perturbed linear programming problems, approximation and location problems if c = 0, optimal control problems (optimal regulator problems) of the form (c = 0, n = 2, βi = p) p

p

A(u) − r + α u → min . u∈U ⊂H

We apply duality assertions in order to obtain lower bounds for the approximation problem (5.14). Furthermore, we derive optimality conditions for problem (5.14) using the subdifferential calculus. These optimality conditions are useful for an application of Spingarn’s proximal-point algorithm (see [513], [286], and [406]) for solving problem (5.14). We present two proximalpoint algorithms with different subspaces in the formulation of the algorithm. Moreover, we study two special cases of (5.14). Duality theorems (compare Section 3.8) can be used in order to obtain lower bounds for the problem (5.14). For the special case f (x) =

n   i  A (x) − ai p + α xp → min, i=1

x∈H

(5.15)

where α > 0 and p = 1 or 2, we can apply the duality theorem of approximation theory (see [56]). Following [56] we obtain the following lower bounds for the problem (5.15).

5.2 Solution Procedures

363

Lemma 5.2.2. For p = 1 we have the estimation n    i  A (x) − ai  + α x ≥ min

x∈H

i=1

n    ai 

i=1 n ,  max 1, α1 Ai 

(5.16)

i=1

and for p = 2,  min

x∈H

n    i  A (x) − ai 2 + α x2

α

n  2  ai  i=1 n 



α+

i=1



Ai 

2

.

(5.17)

i=1

Proof. With A(x) := (A1 (x), . . . , An (x)) we define a linear continuous operator A ∈ L(H, H n ). Now, we can write the problem (5.15) in the form p

p

A(x) − aH n + α xH → min, x∈H

where

a := (a1 , . . . , an ), 2

aH n = (a | a)H n :=

n 

ai | ai

 H

=

i=1

and aH n :=

n   i 2 a  H

for p = 2,

i=1

n   i a  H

for p = 1.

i=1

For p = 2 we obtain

n   i 2 A  . A ≤ 2

i=1

In a similar way, it follows that 

 n  1  Ai  A ≤ max 1, α i=1

for p = 1.

Now, Lemma 1 from [56] yields the lower bounds (5.16) and (5.17).  Applying a general duality theorem, we can also calculate lower bounds for another special case of problem (5.14). Let us consider the problem f (x) = (c | x) +

n  i=1

αi Ai (x) − ai  −→

min

x∈CH ,B(x)−b∈CV

,

(P)

364

5 Applications

where additionally to the assumptions given above, V is a reflexive Banach space, B ∈ L(H, V ), b ∈ V , CH ⊂ H and CV ⊂ V are closed convex cones, + and CV+ the corresponding dual cones. and CH We have shown in Section 3.8.3, Example 3.8.12, that the following problem (D) can be considered as the dual problem to (P): g(y, z) =

n 

αi yi (ai ) + z(b) −→ max ,

(D)

(y,z)∈D

i=1

where D = (y, z) | y = (y1 , . . . , yn ), yi ∈ H, αi yi ∗ ≤ αi (i = 1, . . . , n), z ∈ CV+ , c−

n 

 + αi Ai∗ yi − B ∗ z ∈ CH .

i=1

Here  · ∗ denotes the dual norm to  · . Lemma 5.2.3. Consider the problems (P) and (D). 1. For any (y, z) ∈ D, we have  inf

x∈CH ,B(x)−b∈CV

(c | x) +

n 

 αi A (x) − a  i

i

i=1



n 

αi yi (ai ) + z(b).

i=1

x) − b ∈ int CV and 2. Assume that there exist an element ¯ ∈ CH with B(¯ n x + . Then, the an element (¯ y , z¯) ∈ D with c − i=1 αi AiT y¯i − B T z¯ ∈ int CH problems (P) and (D) have optimal solutions x0 and (y 0 , z 0 ), and (c | x0 ) +

n  i=1

αi Ai (x0 ) − ai  =

n 

αi yi0 (ai ) + z 0 (b).

i=1

Proof. Analogous to the proof of duality assertions in Section 3.8.3 it is possible to derive the assertions from a general saddle-point theorem (Theorem 47.1 in Zeidler [582]).  The following approach is motivated by papers of Michelot and Lefebvre [406] and Idrissi, Lefebvre, and Michelot [286] and developed in [57, 58], [531]. Using the indicator functions ιDj of Dj defined by ιDj (x) = 0, if x ∈ Dj and ιDj (x) = +∞ otherwise, the problem (5.14) is equivalent to the following unconstrained minimization problem: n m    i  i βi  A (x) − a + ιDj (x) → min, F (x) := (c | x) + i=1

j=0

x∈H

where we put αi = 1 (i = 1, . . . , n) without loss of generality.

(5.18)

5.2 Solution Procedures

365

The functional F is convex under the given assumptions. Therefore, the subdifferential condition (5.19) 0 ∈ ∂F (x0 ) is necessary and sufficient for the optimality of x0 . By calculating the subdifferential of the functional F from (5.18), the condition (5.19) is equivalent to the following optimality conditions (5.20), (5.21), (5.22): qi ∈ ∂( Ai (x0 ) − ai βi ),

i = 1, 2, . . . , n,

j = 1, 2, . . . , m, rj ∈ NDj (x0 ), n m   c+ qi + rj ∈ D0⊥ . i=1

(5.20) (5.21) (5.22)

j=0

In order to reformulate the optimality conditions (5.20)–(5.22) in a more practical way, we introduce a Hilbert space E, suitable subspaces A, B, and an operator T . This will be done in different ways. For the first algorithm we will use the following notation: We introduce the Hilbert space E := (H)n+m+1 with the inner product (u | v) :=

n+m+1 

(ui | vi )

i=1

for u = (u1 , . . . , un+m+1 ) and v = (v1 , . . . , vn+m+1 ) ∈ E and the subspaces A := {y ∈ E | y = (y1 , . . . , yn+m+1 ); yj = x, j = 1, . . . , n + m + 1; x ∈ D0 } ,   n+m+1  ⊥ pi ∈ D0 , B := p ∈ E | p = (p1 , . . . , pn+m+1 ) , i=1

and the operator T defined on E by T (y) :=

n+m+1 

T i (x) = (T 1 (x), . . . , T n+m+1 (x)),

i=1

where  βi T i (x) := ∂(Ai (x) − ai  ), T T

n+j

n+m+1

(x) := NDj (x), (x) := c.

i = 1, . . . , n,

j = 1, . . . , m,

(5.23)

366

5 Applications

With this notation, the optimality conditions (5.20)–(5.22) can be rewritten as: Find

(y 0 , p0 ) ∈ A × B

such that

p0 ∈ T (y 0 ).

(5.24)

Remark 5.2.4. (1) Clearly, A and B are closed linear subspaces of E with A⊥B. We prove additionally that the subspaces A and B are complementary; i.e., H = A ⊕ B: We define the operator   n+m+1 n+m+1   1 PD0 ( ei ), . . . , PD0 ( ei ) , PA (e) = n+m+1 i=1 i=1 where e = (e1 , . . . , en+m+1 ) ∈ E is arbitrary. Trivially, we have B = ker(PA ). Applying the necessary and sufficient condition for all a ∈ A (e − PA (e) | a) = 0 for an operator PA to be a projection onto A, we see that PA is the projection of E onto A. (2) It is easy to see that the operator T , defined in (5.23), realizes a maximal monotone multifunction, since T is composed of subdifferentials. For a numerical solution we need the condition (5.24) in the form 0 ∈ T (y 0 )−p0 , but the new multifunction T (y, p) := T (y) − p is not maximal monotone. Therefore, we use in the following an idea of Spingarn [513]. Applying the results of Spingarn [513], the problem (5.24) is equivalent to: Find (y 0 , p0 ) ∈ A × B such that 0 ∈ TA (y 0 + p0 ),

(5.25)

where TA is the partial inverse of T with respect to A. The operator TA is described by its graph in the following way: {(yA + pB , pA + yB ) | p ∈ T (y), p = pA + pB , y = yA + yB } , where vA and vB are the projections of v onto A and B , respectively. We remark that TA is maximal monotone if and only if T is maximal monotone. Setting z = zA + zB and zA = y,

zB = p,

then the problem (5.25) has the following form: Find

z0 ∈ E

such that

0 ∈ TA (z 0 ).

(5.26)

5.2 Solution Procedures

367

Now, we can apply the proximal-point algorithm for solving (5.26) (compare Michelot/Lefebvre [406] and Idrissi/Lefebvre/Michelot [286]). This algorithm has the form z k+1 = (I + ck TA )−1 (z k )

k = 1, 2, . . . ,

(5.27)

where the starting point z 1 is arbitrary, and ck is a sequence of positive real (we set ck = 1), then either If (ck ) is bounded away from zero   

numbers. z k converges weakly to a zero of TA , or z k  → ∞ and TA has no zeros (see [483]). The iteration (5.27) can be rewritten in the form y k+1 + pk+1 = (I + TA )−1 (y k + pk ), which is equivalent to y k − y k+1 + pk − pk+1 ∈ TA (y k+1 + pk+1 ).

(5.28)

Now, we will realize this iteration in terms of the multifunction T instead of TA . Setting pk = y k − y k+1 + pk+1 , y k = pk − pk+1 + y k+1 , which yields y k + pk = y k + p k ,

y k+1 = (y k )A ,

pk+1 = (pk )B ,

and applying the definition of the partial inverse TA , we obtain the iteration (5.28) as or pk ∈ T (y k + pk − pk ). (5.29) pk ∈ T (y k ), Further, considering the definition of the operator T , the iteration (5.29) is equivalent to  βi  i = 1, . . . , n, (5.30) pki ∈ ∂ Ai (xk + pki − pki ) − ai  pkn+j ∈ NDj (xk + pkn+j − pkn+j ),

j = 1, . . . , m,

pkn+m+1 = c.

(5.31) (5.32)

For an implementation of the iteration algorithm (5.30)–(5.32) it remains to calculate the subdifferential from (5.30) and the normal cone from (5.31). Setting zik := Ai (xk + pki − pki ) − ai , we get for the subdifferential from (5.30),  βi  βi −1  k  ∂( zik  ) = AiT βi zik  ∂( zi ),

368

5 Applications

and the iteration condition (5.30) now has the following form:  βi −1  k  ∂( zi ). (AiT )−1 βi−1 pki ∈ zik 

(5.33)

To apply this expression in practice, a computable form is needed. This is possible only for special cases. Special case 1: Let βi = 1 for i = 1, . . . , n. Then (5.33) has the form

AiT

−1

   " ! pki ∈ ∂( zik ) = zi∗ ∈ H |  zi∗  = 1 , ( zi∗ | zik ) =  zik  .    iT −1 k  pi  = 1  A

Hence and

 k

 zi  = ( AiT −1 pki | zik ).

Using the inequality  k zi  ≥ ( zi∗ | zik ) we get

for all zi∗

 zi∗  = 1

with

(⇔ zi∗ ∈ B 0 ),

−1 k k

( AiT pi | zi ) ≥ ( zi∗ | zik ).

It follows that −1 k k

( zi∗ − AiT pi | z i ) ≤ 0

for all zi∗ ∈ B 0 .

Substituting the definition for zik into the last inequality, we get −1 k i k

pi | A (x + pki − pki ) − ai ) ≤ 0 ( zi∗ − AiT

for all zi∗ ∈ B 0 , (5.34)

which is equivalent to −1 k

zik = Ai (xk + pki − pki ) − ai ∈ NB 0 ( AiT pi ). Now we can transform the inequality (5.34) in the form

−1 i iT ∗ a | A zi − pki ) ≥ 0 (pki − xk − pki + Ai

for all zi∗ ∈ B 0 ,

(5.35)

which is equivalent to

−1 i a ), pki ∈ PM (xk + pki − Ai where PM is the projection operator onto the set " ! M := AiT zi∗ | zi∗ ∈ B 0 .

(5.36)

5.2 Solution Procedures

369

Therefore, we have to solve the minimization problem 2

v − B(u) → min0 , u∈B

where

−1 i v := xk + pki − Ai a

B := AiT .

and

We find the solution of this quadratic minimization problem as a solution of the operator equation (B T B + λI)u = B T v, where λ is to be determined such that u = 1 . This completes the calculation of the iteration (5.30). It remains to solve the iteration (5.31), which is equivalent to k+1 k k k k − yn+j + pk+1 yn+j n+j ∈ NDj (x + pn+j − pn+j ),

(5.37)

where k = xk yn+j

and

k+1 yn+j = xk+1 .

The last relation (5.37) can be transformed into the form k k xk+1 + pkn+j − pk+1 n+j = PDj (x + pn+j ).

(5.38)

The algorithm for the special case 1 thus takes the following final form: Algorithm I for solving Problem (5.18) with βi = 1 (i = 1, . . . , n): • Choose the starting points x1 ∈ D0

and

p1 ∈ E

with

n+m+1 

p1i ∈ D0⊥ .

(5.39)

i=1

• Compute xk+1 and pk+1 from

−1 i xk − xk+1 + pk+1 = PM (xk + pki − Ai a ), i = 1, . . . , n, (5.40) i k k xk+1 + pkn+j − pk+1 n+j = PDj (x + pn+j ),

x −x k

with

k+1

+

pk+1 n+m+1

n+m+1 

j = 1, . . . , m,

= c,

pk+1 ∈ D0⊥ i

(5.41) (5.42)

and

xk+1 ∈ D0 .

(5.43)

i=1

This means that we must first calculate the projections on M and Dj in the right-hand side of the equations (5.40) and (5.41). Then we have to solve

370

5 Applications

the linear equations (5.40)–(5.42) such that the relations (5.43) are satisfied. The solution of the linear equations (5.40)–(5.42) is very simple, since we can eliminate the variables in the following way: Setting for 1 ≤ i ≤ n − 1 and 1 ≤ j ≤ m,

−1 i+1 

−1 1 

bki = PM xk + pk1 − Ai , a − PM xk + pki+1 − Ai+1 a

 



−1 bkn+j−1 = PM xk + pk1 − Ai a1 + PDj xk + pkn+j − xk + pkn+j ,



−1 1  a − c, bkn+m = PM xk + pk1 − Ai we obtain immediately the solutions pk+1 = pk+1 − bki−1 , 1 i

i = 2, . . . , n + m + 1,

(5.44)

where pk+1 is determined by the relations (5.43): 1 (n+m+1) pk+1 − 1

n+m 

bki ∈ D0⊥

and

(xk+1 =) xk −c+pk+1 −bkn+m ∈ D0 . 1

i=1

(5.45) from the relations (5.45) is posRemark 5.2.5. (1) The determination of pk+1 1 sible only if the subspace D0 is concretely given; for example, if D0 = H, then from (5.45) follows = pk+1 1

n+m  1 bk n + m + 1 i=1 i

and

xk+1 = xk − c + pk+1 − bkn+m . 1

(5.46) (2) Another possibility for the solution of the equations (5.40)–(5.42) is given in the following way: Using the definitions given above, pk = y k − y k+1 + pk+1 , y k = pk − pk+1 + y k+1 , which yields y k + pk = y k + p k ,

y k+1 = (y k )A ,

pk+1 = (pk )B ,

we obtain pk1 , . . . , pkn from (5.40), k k , . . . , yn+m+1 yn+1

from (5.41), (5.42), and the remaining components from

5.2 Solution Procedures

371

k pkn+j = xk + pkn+j − yn+j (1 ≤ j ≤ m) and yik = xk + pki − pki (1 ≤ i ≤ n).

Now, we obtain the solution vectors k+1 pk+1 = ( pk+1 , . . . , pk+1 = ( xk+1 , . . . , xk+1 ) 1 n+m+1 ) and y

by calculating the projections (pk )B and (y k )A from pk and y k onto the subspaces B and A, respectively. Special case 2: Let be βi = β > 1 and c = 0; i.e., we consider the problem f (x) =

n   i  A (x) − ai β →

min m

x∈D=

i=1



j=0

.

(5.47)

Dj

Defining an operator A ∈ L(H → H n ) by A(x) := (A1 (x), A2 (x), . . . , An (x)) and a norm on H n = H × H × · · · × H by uH n :=

n   i β 1/β u  ,

where

u = (u1 , u2 , . . . , un ),

i=1

and using the indicator functions ιDj of Dj , the problem (5.47) is equivalent to the unconstrained minimization problem F (x) = A(x) − aH n +

n  j=0

ιDj (x) → min, x∈H

(5.48)

where a = (a1 , a2 , . . . , an ) . Using the optimality condition 0 ∈ ∂F (x0 ) , we obtain analogously to problem (5.14) the following necessary and sufficient conditions for the problem (5.48):   (5.49) q ∈ ∂(A(x0 ) − aH n ), rj ∈ NDj (x0 ),

q+

m 

rj ∈ D0⊥ .

j = 1, . . . , m,

(5.50) (5.51)

j=1

It remains to verify condition (5.49), since the other conditions (5.50) and (5.51) are the same as in (5.21) and (5.22). So we introduce the following subspaces: A1 := {y ∈ E | y = (y1 , . . . , ym+1 ) ; yj = x , j = 1, . . . , m + 1 ; x ∈ D0 } ,

372

5 Applications

and  B1

:=

p ∈ E | p = (p1 , . . . , pm+1 ),

m+1 

 pi ∈

D0⊥

,

i=1

where E = H m+1 . Using these subspaces A1 and B1 , we can compute the subdifferential (5.49) in a similar manner as for the special case 1 and obtain pk1 = PM (xk + pk1 − A−1 a), where PM is the projection operator onto the set " ! M = AT z ∗ | z ∗ ∈ B 0 , where

B 0 = { z ∗ ∈ H n | z ∗ H n = 1}

and

AT z ∗ = (A1T z1∗ + . . . + AnT zn∗ ) ∈ H.

Now the given algorithm for the special case 1 can be applied. However, Algorithm I has some disadvantages. For example, it needs the operators Ai to be regular and invertible, and some operator equation has to be solved if Ai = I (I = identity). If we define the subspaces A and B in a different way as above, we get another specification of Spingarn’s algorithm under weaker assumptions. The new idea for deriving a proximalpoint algorithm is to put the operators Ai in the definition of the subspaces A and B. In the following we derive a proximal-point algorithm for control approximation problems (Algorithm II, cf. [531]): ⎫ n  ⎪ ⎪ αi Ai (x) − ai β(i)i → minx∈D , f (x) = c(x) + ⎪ ⎪ ⎪ i=1 ⎪ ⎪ ⎪ ⎪ ⎪ k ⎪  · (i) : norm in R , (i = 1,. . . , n) ⎬ (5.52) ⎪ x, c ∈ Rk , ai ∈ Rk , αi ≥ 0, βi ≥ 1, Ai ∈ L(Rk , Rk ), ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ m ⎪  ⎪ k ⎪ D = Dj , Dj ⊂ R closed and convex, ⎭ 1

where L(Rk , Rk ) denotes the set of linear continuous mappings from Rk to Rk . Furthermore, suppose int D = ∅.

5.2 Solution Procedures

373

In contrast to problem (5.14) at the beginning of Section 5.2.1, we consider now a control approximation problem (5.52) with different norms in the objective function, in finite-dimensional spaces and without assuming that the operators Ai are regular. The space E is defined by E := Rk1 × Rk2 × · · · × Rkn+m+1 ,

(5.53)

where k1 = · · · = kn+m+1 = k. Define as a shortcut the operator S : E → Rk by n m+1   S(e) := AiT ei + en+j , e = (e1 , . . . , en+m+1 ) ∈ E . i=1

j=1

Then the subspaces have the form A2 := {y ∈ E | y = (A1 x, A2 x, . . . , An x, x, . . . , x), x ∈ Rk }, () * '

(5.54)

n+m+1

B2 := {p ∈ E : S(p) = 0} .

(5.55)

It is easy to see that the subspace B2 is the orthogonal complement of A2 : With v ∈ E and a ∈ A2 , it yields (a | v) =

n 

(Ai x | vi ) +

i=1

m+1 

(x | vn+j ) = (x | S(v)) .

(5.56)

j=1

If v ∈ B2 , then the right-hand side is zero, and therefore (5.56) yields B2 ⊂ ⊥ A⊥ 2 . If v ∈ A2 , then (5.56) has to be zero for all a ∈ A2 and therefore for all k x ∈ R . This means that the right part in the inner product has to be zero, ⊥ ⊥ so we have v ∈ B2 and A⊥ 2 ⊂ B2 ⊂ A2 , A2 = B2 . With the closedness of A2 and B2 follows (5.57) E = A2 ⊕ B2 . The operator T : E → 2E has again the form p˜ ∈ T (˜ y ) ⇔ p˜i ∈ Ti (˜ yi ) Ti (yi ) := ∂(αi yi − Tn+j (x) := NDj (x) Tn+m+1 := c .

i = 1, . . . , n + m + 1,

ai β(i)i ),

p˜, y˜ ∈ E .

i = 1, . . . , n,

j = 1, . . . , m,

The problem (5.20)–(5.22) is equivalent to the problem (5.24) with this 0 choice of A2 , B2 and T with qi = AiT p0i , rj = p0n+j , and yn+m+1 = 1 0 n 0 0 0 (A x , . . . , A x , x , . . . , x ). This means that if one of the problems has a solution, then the other one also has a solution, and the solution of (5.24) can be transferred into a solution of (5.20)–(5.22).

374

5 Applications

The projection onto the subspaces A2 and B2 is an important step for Spingarn’s algorithm. Let v ∈ E be an arbitrary element, v = vA2 + vB2 , vA2 = (y1 , . . . , yn+m+1 ) ∈ A2 ,

vA2 = (A1 x, . . . , An x, x, . . . , x) ∈ A2 .

n m+1 Using the operator S (S(e) = i=1 AiT ei + j=1 en+j ) on both sides, and using S(p) = 0 for p ∈ B2 , one gets S(v) = S(vA2 ) , n  S(v) = AiT Ai x + (m + 1)x i=1 n  i=1

AiT vi +

n+m+1 

vj =

j=n+1

Setting u=

B=

n 

n 

AiT Ai x + (m + 1)x.

i=1

AiT vi +

i=1

and

n 

n+m+1 

vj

j=n+1

AiT Ai + (m + 1)I ,

i=1

x can be computed by solving the equation Bx = u . This B is regular for all finite-dimensional linear operators Ai : Rk → Rk , because all eigenvalues are greater than or equal to 1. So the inverse operator of B exists. The projection of v ∈ E onto A2 has the following form: ⎛ ⎞ n n+m+1   AiT vi + vj ⎠ , x := B −1 ⎝ i=1

j=n+1

i

yi := A x i = 1(1)n, j = 1(1) m + 1, yn+j := x vA2 = (y1 , . . . , yn+m+1 ) . Regarding E = A2 ⊕ B2 (cf. (5.57)), the projection onto B2 is vB2 = v − vA2 .

5.2 Solution Procedures

375

The points p˜k and y˜k are connected via p˜k + y˜k = pk + y k (y k ∈ A2 , pk ∈ B2 ). The projection onto a linear subspace is additive, so this can be used to calculate one projection from the other: y˜k = y k + pk − p˜k , y

PA2 (˜ yk ) k+1 k −y

= PA2 (y k ) + PA2 (pk ) − PA2 (˜ pk ), k = −PA2 (˜ p ).

Thus all steps of Spingarn’s method of the partial inverse are solved for this problem. So it is possible to give another formulation of the proximalpoint algorithm (with the sum pk + y k as one variable, and B0i = {p : pi∗ ≤ 1}). The following algorithm is a procedure for solving the control approximation problem (5.52): Algorithm II for solving the control approximation problem (5.52): 1. Initialization 1 = Choose x1 ∈ Rk and p1 ∈ B2 , set (p1 + y 1 )i = p1i + Ai x1 and p1n+j + yn+j 1 1 pn+j + x . −1

n Calculate B −1 := (m + 1)I + i=1 AiT Ai .

2. Proximal Step For i = 1(1)n: Set bi := yik + pki − ai . If βi = 1, then set bi if bi  < αi , k p˜i = αi PB0i (bi /αi ) otherwise. If βi > 1 and  · (i) the sum, maximum, or Euclidean norm, then a special construction of p˜ki is given in [531]. k k For j = 1(1)m: Set p˜kn+j = pkn+j + yn+j − PDj (pkn+j + yn+j ). Set p˜n+m+1 = c.

3. Projection Step n m+1 Calculate:    p¯k := B −1 AiT p˜ki + p˜n+j . i=1

'

()

=:o1

j=1

*

xk+1 := xk − p¯k . pk+1 + y k+1 := p˜k + (A1 (xk − 2¯ pk ), . . . , An (xk − 2¯ pk ), xk − 2¯ pk , . . . , xk − 2¯ pk ).

376

5 Applications

Stop if o1  + ¯ pk  = S(˜ p) + ¯ pk  < ε and (pk+1 + y k+1 ) − (pk + y k ) < ε for a given value ε > 0. Otherwise, set k = k + 1 and return to the Proximal Step (step 2). Special Case: For the special case where all operators satisfy Ai = I, no B −1 is needed, and the projection step simplifies to n+m+1   k p¯ := p˜ki /(n + m + 1) . i=1

xk+1 := xk − p¯k . pk . (pk+1 + y k+1 )i := p˜ki + xk − 2¯ Finally, we remark that the general convergence results for Spingarn’s algorithm (compare [513]) also hold for our application of the proximal-point algorithm. Example 5.2.6. Let us calculate an example for the application of the proximal-point algorithm using as a limit of exactness ε = 10−10 . We consider a scalarization of a vector-valued location problem given by (5.14) with c = 0, n = 9, Ai are identical operators, x, ai ∈ R2 (with a1 = (−1.5, 3.5), a2 = (1, 3), a3 = (1, 0), a4 = (−3, −2), a5 = (3.5, −1.5), a6 = (2, 2), a7 = (−2, 2), a8 = (4, 1), a9 = (−3, 2)), βi = 1 (for all i = 1, . . . , 9), D = R2 and · is the maximum norm. If we take the same weights αi = 1 for all i = 1, . . . , 9, we get as optimal solution x10 = (0.5, 1.5). Furthermore, let us take different weights (α1 , . . . , α9 )T = (1, 2, 1, 4, 3, 1, 2, 1, 1)T , then we get as solution x20 = (1.37944, 0.62056), and for (α1 , . . . , α9 )T = (1, 3, 3, 1, 6, 5, 1, 1, 1)T , we obtain the solution x30 = (0.289759, 0.710241). These locations are plotted in Figure 5.2.1 by small circles. The existing facilities ai (i = 1, . . . , 9) are plotted by big circles. Furthermore, Figure 5.2.1 shows the whole set of efficient solutions of the vector-valued location problem generated by a duality-based geometric algorithm, compare Section 5.3.2.

5.2.2 An Interactive Algorithm for the Vector Control Approximation Problem In order to formulate this problem we suppose that (A) ai , x ∈ X , αi ≥ 0 , βi ≥ 1, f1 ∈ L(X, Rn ), Ai ∈ L (X, Yi ) (i = 1, . . . , n); (L(X, Y ) denotes the space of linear continuous operators from X to Y); are closed and convex sets, D0 ⊆ X is a closed lin(B) Dj ⊂ X ( j = 1, . . . , m)  m ear subspace, and D = j=0 Dj is nonempty and bounded. Furthermore, we assume that a suitable constraint qualification is satisfied.

5.2 Solution Procedures

377

• •



Figure 5.2.1. Solutions x10 , x20 , and x30 of the location problem generated by the proximal-point algorithm choosing different weights αi (i = 1, . . . , 9) in Example 5.2.6.

(C) One of the following conditions (C1) or (C2) is satisfied: (C1) H is a Hilbert space and X = Yi = H for all i = 1, . . . , n; (C2) X = Rk and Yi = Rk for all i = 1, . . . , n. (D) C ⊂ Rn is a convex cone with cl C + (Rn+ \ {0}) ⊂ int C. Now we consider the following vector control approximation problem: Compute the set Eff(f (D), C), where

⎞ α1 A1 (x) − a1 β1 ⎠. ··· f (x) := f1 (x) + ⎝ n n βn αn A (x) − a 

(P)



(5.58)

We introduce a suitable scalarization of the vector control approximation (P), and we derive an interactive algorithm for the vector control approximation problem using a surrogate parametric optimization problem and taking into account stability results of this special parametric optimization problem. Under the given assumptions, it is easy to see that the vector-valued objective function f : X −→ Rn in (5.58) is (Rn+ )-convex; i.e., for all x1 , x2 ∈ X, μ ∈ [0, 1], we have μf (x1 )+(1−μ)f (x2 ) ∈ f (μx1 +(1−μ)x2 )+Rn+ (compare Section 2.6). Then, we can show (see Lemma 5.1.2) that for each element f (x0 ) ∈ Eff(f (D), C) there exists a parameter λ ∈ C + \ {0} such that x0 solves the real-valued optimization problem

378

5 Applications

f (x, λ) :=

n 

 λi (f1 )i (x) + αi Ai (x) − ai βi −→ min . x∈D

i=1

(P(λ))

In the following, for the case αi > 0 for all i = 1, . . . , n, without loss of generality we replace (P(λ)) by fˆ1 (x) +

n 

(Ps )

Ai (x) − ai βi −→ min,

i=1

x∈D



where fˆ1 ∈ X . Using the indicator functions ιDj of Dj defined by ιDj (x) = 0, if x ∈ Dj and ιDj (x) = +∞ otherwise, the problem (Ps ) is equivalent to the following unconstrained minimization problem: F (x) = fˆ1 (x) +

n m    i  A (x) − ai βi + ιDj (x) → min . i=1

j=0

x∈X

(Ps )

First, we consider the case of ((A),(B),(C1)). Obviously, under the given assumptions the functional F is convex. In order to solve (Ps ), it is possible to use Algorithm I. Remark 5.2.7. Under the assumptions ((A), (B), (C2)), it is possible to apply Algorithm II (see Section 5.2.1) in the same way. In this case the weak convergence of the sequence {z k } implies even the norm convergence. Now, we present an interactive algorithm in which we have to solve the special parametric optimization problem (P(λ)) with λ ∈ Λ ⊂ int C + under the assumptions ((A), (B), (C1)) or ((A), (B), (C2)). Stability results for parametric optimization problems are important for an effective interactive algorithm. In other words, we need various types of continuity of the optimal-value function ϕ(λ) := inf{f (x, λ) | x ∈ D}, of the optimal set mapping ψ(λ) := {x ∈ D | f (x, λ) = ϕ(λ)}, or of the ε-optimal set mappings ψε (λ) := {x ∈ D | f (x, λ) < ϕ(λ) + ε} and ¯ ε) := {x ∈ D | f (x, λ) ≤ ϕ(λ) + ε}. ψ(λ, ¯ we mean in particular By stability of the mappings ϕ, ψ, ψε , and ψ, certain continuity attributes of these mappings. We use the concept of upper and lower continuity introduced in Definition 2.7.1 (a) and (b) in the following formulation:

5.2 Solution Procedures

379

Definition 5.2.8. A multifunction Γ : Λ ⇒ 2X , where (X, dX ) and (Λ, dΛ ) are metric spaces, is called: 1. upper continuous at a point λ0 if for each open set Ω containing Γ (λ0 ) there exists a δ-neighborhood Vδ {λ0 } of λ0 such that Γ (λ) ⊂ Ω

for all

λ ∈ Vδ {λ0 };

2. lower continuous at a point λ0 if for each open set Ω satisfying Ω ∩ Γ (λ0 ) = ∅ there exists a δ-neighborhood Vδ {λ0 } of λ0 such that Γ (λ) ∩ Ω = ∅ for all λ ∈ Vδ {λ0 }. The existence of continuous selection functions is closely related to the condition that ψ is lower continuous and all optimal sets are nonempty and convex. If one considers ε-optimal solutions, then the strong condition of lower continuity of ψ may be avoided. Theorem 5.2.9. Suppose that the assumptions (A), (B), (C1) are satisfied. Then, for each ε > 0 the ε-optimal set mapping ψε is lower continuous in the sense of Definition 5.2.8. Proof. The assumptions of Theorem 4.2.4 in [238] are satisfied, since f is continuous on X × Λ, D is closed and does not depend on the parameter λ, and ϕ is continuous regarding the continuity of f on the set D. Theorem 4.2.4 in [238] yields that for each ε > 0 the ε-optimal set mapping ψε is lower continuous.  In the finite-dimensional case we can derive some additional results: Theorem 5.2.10. We consider the problem (P (λ)) subject to the assumptions (A), (B ) and (C2). Suppose that ψ(λ0 ) is nonempty and bounded. Then (i) the optimal-value function ϕ is continuous at λ0 ; (ii) the optimal set mapping ψ is upper continuous in the sense of Definition 5.2.8 at λ0 ; (iii) the ε-optimal set mapping ψε is lower continuous in the sense of Definition 5.2.8 at λ0 for each ε > 0; (iv) the mapping ψ¯ defined by ¯ ε) := {x ∈ M | f (x, λ) ≤ ϕ(λ) + ε}, λ ∈ Λ, ε ≥ 0, ψ(λ, is upper continuous in the sense of Definition 5.2.8 at (λ0 , 0). Proof. The results follow immediately from Theorem 9 in Hogan [278].  Remark 5.2.11. The last property is of interest if the problems corresponding to the parameters λt , t = 1, 2, . . ., λt −→ λ0 , constitute certain substitute problems with respect to the problem to be solved, (P(λ0 )), and the problems (P(λt )) are solved with increasing accuracy εt −→ 0. Then each sequence {xt } of εt -optimal solutions of (P(λt )) possesses an accumulation point, and each of its accumulation points is contained in the solution set ψ(λ0 ).

380

5 Applications

Moreover, under the assumptions of Theorem 5.2.9, a continuous selection theorem by Michael [403] can be used if we additionally assume the compactness of Λ. Theorem 5.2.12. Assume that (A), (B), (C) hold, and that Λ is a compact set. Then, there exists a function g ∈ C(Λ) such that ∀λ ∈ Λ :

g(λ) ∈ cl ψε (λ).

Proof. We can conclude from Theorems 5.2.9 and 5.2.10 that ψε is lower continuous in the sense of Definition 5.2.8. Moreover, the image sets ψε (λ) are nonempty and convex for all λ ∈ Λ. Then, we get the desired result from the continuous selection theorem by Michael [403]. Using these stability statements, we can derive the following interactive algorithm for the vector control approximation problem (P). At least under ((A), (B), (C2)) it is possible to seek elements of a neighborhood of the set of proper efficient points that corresponds to the individual interest of the decision-maker (compare [58]). In the following interactive procedure for solving the vector control approximation problem, we can use Algorithm I or Algorithm II from Section 5.2.1. Interactive procedure for solving the vector control approximation problem (P) ¯ ¯ ∈ Λ. Compute an approximate solution (x0 , p0 ) of (P(λ)) Step 1: Choose λ 0 0 with the primal-dual algorithm Algorithm I (or II). If (x , p ) is accepted by the decision-maker, then stop. Otherwise, go to Step 2. ˆ ∈ Λ, λ ˆ = λ. ¯ Go to Step 3. Step 2: Put k = 0, t0 = 0. Choose λ Step 3: Choose tk+1 with tk < tk+1 ≤ 1 and compute an approximate solution (xk+1 , pk+1 ) of

min

x∈D

n 

i=1

  ¯ i + tk+1 (λ ˆi − λ ¯ i ) (f1 )i (x) + αi Ai (x) − ai βi λ ¯ λ)) ˆ (P(tk+1 , λ,

with the Algorithm I (or II) and use (xk , pk ) as starting point. If ¯ λ) ˆ cannot be found for t > tk , then an approximate solution of P(t, λ, go to Step 1. Otherwise, go to Step 4. Step 4: The point (xk+1 , pk+1 ) is to be evaluated by the decision-maker. If it is accepted by the decision-maker, then stop. Otherwise, go to Step 5. Step 5: If tk+1 ≥ 1, then go to Step 1. Otherwise, set k = k + 1 and go to Step 3.

5.2 Solution Procedures

381

Remark 5.2.13. Under the assumptions (A), (B) and (C2) of Theorem 5.2.10, ¯ λ)) ˆ can be gena sufficiently good approximation of a solution of (P(tk+1 , λ, ¯ λ)) ˆ as starting point. erated if we use an approximate solution of (P(tk , λ, 5.2.3 Proximal Algorithms for Vector Equilibrium Problems The proximal-point method was introduced by Martinet (see [399] and [400]) as a regularization method in the context of convex optimization in Hilbert spaces. It has since been studied by several authors for monotone inclusion problems and variational inequalities (see [483] for a survey). The purpose of this section is to present generalized proximal-point methods with Bregman functions applied to vector equilibrium problems and discuss the convergence properties of these algorithms. Our method consists, in the first place, in scalarizing the vector-valued problem, then adapting the real proximal-point algorithm, as proposed by [423] and [196], to the weak vector equilibrium problem (WVEP). During this section, we restrict attention to a finite-dimensional space X = Rm , even if much of what will be said carries over to the reflexive Banach setting. Suppose Y is a Hausdorff topological vector space, and P ⊂ Y a pointed closed convex cone with nonempty interior int P . Let M ⊂ X and f : M × M −→ Y be a vector-valued function and consider the problem (WVEP)

find x ∈ M such that f (x, y) ∈ − int P for all y ∈ M .

Associated with this problem is the scalar equilibrium problem (SEP)

find x ∈ M such that F (x, y) ≥ 0 for all y ∈ M .

Here F : M × M −→ R is defined by F (x, y) = ϕ(f (x, y)), where ϕ is a suitable functional (see Section 2.4) defined on Y by ϕ(z) := inf{t ∈ R | z ∈ tu0 − P } and u0 is a fixed arbitrary element in int P . We will make use of the following scalarization properties: see Theorem 2.4.1 and Corollary 2.4.5: ϕ is continuous, sublinear, P -monotone (z2 − z1 ∈ P ⇒ ϕ(z1 ) ≤ ϕ(z2 ) and z2 − z1 ∈ int P ⇒ ϕ(z1 ) < ϕ(z2 )), and for every λ ∈ R the sublevel and strict sublevel sets of ϕ of height λ are given by levϕ (λ) = {z ∈ Y | ϕ(z) ≤ λ} = λu0 − P and

0 lev< ϕ (λ) = {z ∈ Y | ϕ(z) < λ} = λu − int P.

Moreover, for every λ ∈ R and z ∈ Y one has ϕ(z + λu0 ) = ϕ(z) + λ. Note that from the above characterizations of the sublevel sets of ϕ, one can also obtain

382

5 Applications

ϕ(z) > λ ⇔ z − λu0 ∈ / −P, / − int P, ϕ(z) ≥ λ ⇔ z − λu0 ∈ ϕ(z) = λ ⇔ z − λu0 ∈ − bd P. Thus the problems (WVEP) and (SEP) are equivalent in the sense that the solution set of each of the two problems coincides with each other. Bregman functions. For a given real function h defined on a nonempty closed convex subset S of X, with int S = ∅, we let Dh (x, y) := h(x) − h(y) − (x − y|∇h(y)), for each x ∈ S, y ∈ int S. The function Dh is called the Bregman distance. Let us notice that the term “distance” is misleading: Neither is Dh symmetric nor does it satisfy the triangle inequality. However, the function Dh has some good distance features if the function h is nice enough; see Lemmas 5.2.21 and 5.2.22. Definition 5.2.14. A function h : S −→ R is called a Bregman function with zone int S, for short B-function, if (B1) h is continuous and strictly convex on S; (B2) h is continuously differentiable on int S; (B3) for every t ∈ R, x ∈ S and y ∈ int S, the level set L(x, t) := {y ∈ int S | Dh (x, y) ≤ t} is bounded; (B4) if yn −→ y0 in S, then Dh (y0 , yn ) −→ 0; (B5) if {yn } ⊂ int S converges to y0 in S, {xn } is bounded in S, and Dh (xn , yn ) −→ 0, then xn −→ y0 . Note that letting h(x) = +∞ for x ∈ S, we have that h : X → R∪{+∞} is a proper convex lower semicontinuous function on X, with domain dom h=S. For the function Dh , the above conditions ensure that the natural domain of Dh is S × int S. Remark 5.2.15. When all {xn }, {yn }, and y0 are in int S, conditions (B4) and (B5) hold automatically, as a consequence of (B1)–(B3). So (B4) and (B5) need to be checked only at points on the boundary bd S := cl S \ int S of S. Remark 5.2.16. When S = Rm , a sufficient condition for a strictly convex and differentiable function h to be a B-function is a coercivity condition, i.e., −1 limx→∞ h(x)x = +∞. Let us cite some examples of B-functions; other examples can be found in [110], [118], [171], [196], [300], [299]. Example 5.2.17. Let h(x) = 12 x2 for which Dh (x, y) = 12 x − y2 for every x, y ∈ X. Then all the properties (B1)–(B5) are satisfied for S = X; i.e., h is a B-function.

5.2 Solution Procedures

383

Example 5.2.18. that S = Rm and for x = (x1 , . . . , xm ) ∈ S, let m Suppose 1 p h(x) = 2 i=1 |xi | . Then h is a B-function with zone Rm .  m On could also consider S = Rm + and the function h(x) = i=1 xi log xi m on R> := int S and h(x) = 0 if x ∈ bd S. In this case, Dh (x, y) =

m   i=1

xi log

 xi + yi − xi . yi

Example 5.2.19. that S = [−1, +1]m and for x = (x1 , . . . , xm ) ∈ S, + mSuppose let h(x) = − i=1 1 − x2i . Then h is a B-function with zone ] − 1, +1[m . Example 5.2.20. Suppose that S = Rm + and for x = (x1 , . . . , xm ) ∈ S, let m β α h(x) = i=1 (xi − xi ), with α ≥ 1 and 0 < β < 1. Then m √ √ • for α=2 and β = 12 , we get Dh (x, y) = x − y2 + i=1 2√1yi ( xi − yi )2 ;  √ √ m • and for α = 1 and β = 12 , we get Dh (x, y) = i=1 2√1yi ( xi − yi )2 . Lemma 5.2.21. For every x ∈ S and every y ∈ int S, we have Dh (x, y) ≥ 0, and Dh (x, y) = 0 if and only if x = y. Proof. This lemma follows from the definition of Dh and the property (B1).  Lemma 5.2.22. Three points Lemma. For every x, y, z ∈ int S, we have Dh (x, z) = Dh (x, y) + Dh (y, z) + (x − y|∇h(y) − ∇h(z)).

(5.59)

Proof. This follows from a straightforward substitution of the definition of Dh .  In the rest of this section, let us consider the following assumptions concerning the data of the studied vector equilibrium problem: (A0) M is a nonempty closed convex subset of X. (A1) f : M × M → Y satisfies ∀x, y ∈ M ∀λ ∈ R : f (x, y) ∈ / λu0 − int P 0 implies f (y, x) ∈ −λu − P , and f (x, x) = 0. (A2) In the second argument f is P -convex and P -lower semicontinuous on M. (A3) In the first argument f is P -upper semicontinuous on each line segment in M . ¯ is a B-function with a zone S whose interior contains M . (B6) h : X −→ R (B7) h is strongly convex on int S with modulus α > 0; i.e., for all x, y ∈ int S, t ∈ [0, 1], one has h(tx + (1 − t)y) ≤ th(x) + (1 − t)h(y) −

α t(1 − t)x − y2 . 2

384

5 Applications

(B8) The gradient ∇h of h is Lipschitz continuous on int S with modulus K; i.e., for all x, y ∈ int S one has ∇h(x) − ∇h(y) ≤ Kx − y. Equivalently, in terms of subdifferential operators, h is strongly convex iff ∀x, y ∈ int S, h(x) − h(y) ≥ (x − y|∇h(y)) +

α x − y2 ; 2

iff its subdifferential operator is strongly monotone, that is to say, ∀x, y ∈ int S (5.60) (y − x|∇h(y) − ∇h(x)) ≥ αx − y2 . One remarks also that if h is strongly convex, then h is strictly convex1 and has bounded level sets. The Lipschitz continuity of ∇h implies the following general condition: ∀x, y ∈ int S (5.61) (y − x|∇h(y) − ∇h(x)) ≤ Kx − y2 , The relations (5.60) and (5.61) lead also to α x − y2 ≤ Dh (x, y) ≤ Kx − y2 . 2 Remark 5.2.23. (1) Let us notice that (A1) is equivalent to ∀x, y ∈ M, ∀λ ∈ R, F (x, y) ≥ λ ⇒ F (y, x) ≤ −λ. This condition is also satisfied when we suppose that ∀x, y ∈ M, F (x, y)+ F (y, x) ≤ 0. (2) Condition (A2) implies that F is convex and lower semicontinuous in the second argument. Proof. To prove the first statement it suffices to use the sublinearity of ϕ and levϕ (0) := {z ∈ Y | ϕ(z) ≤ 0} = −P . For the second statement, we remark that for x ∈ M , levF (x,·) (λ) := {y ∈ M | F (x, y) ≤ λ} = {y ∈ M | f (x, y) ∈ λu0 − P } =: levf (x,·) (λu0 ). Thus, P -lower semicontinuity (resp. P -quasi-convexity) of f (x, ·) implies lower semicontinuity (resp. quasi-convexity) of F (x, ·). To prove convexity of F (x, ·), we use only the sublinearity and P -monotonicity of ϕ.  1

∀x, y ∈ int S, with x = y, ∀t ∈ (0, 1), one has h(tx+(1−t)y) < th(x)+(1−t)h(y).

5.2 Solution Procedures

385

One-phase proximal algorithms. We start with the following approximate iteration scheme (VPA1) (called the one-phase vector proximal algorithm) for solving (WVEP): Algorithm (VPA1). Consider sequences {αk } and {εk } of nonnegative real numbers. 1. Start at an arbitrary x0 ∈ M and ξ0 ∈ B ∗ . 2. If (xk , ξk ) is in the current iterate, the next iterate (xk+1 , ξk+1 ) ∈ M ×B ∗ is a solution of αk (f (xk+1 , x)|ξk+1 )+h(x)−h(xk+1 )−(x−xk+1 |∇h(xk ))+εk ≥ 0 ∀ x ∈ M. (5.62) The idea underlying the vector proximal algorithm for solving the problem (WVEP) for a vector-valued mapping is basically the same as the scalar one for solving the problem (SPE). More precisely, if xk is the current point, the next term xk+1 of the iteration (5.62) produced by (VPA1) is a solution of αk F (xk+1 , x) − Dh (xk+1 , xk ) + Dh (x, xk ) + εk ≥ 0 ∀ x ∈ M.

(5.63)

Lemma 5.2.24. Suppose that (A2) and (B6) are satisfied. Then for εk = 0 a solution of iteration (5.63) is a solution of (5.64), and conversely, αk F (xk+1 , x) + (xk+1 − x|∇h(xk ) − ∇h(xk+1 )) ≥ 0 ∀ x ∈ M.

(5.64)

Proof. Suppose that xk+1 is a solution of (5.63). By way of the subdifferential operator of the proper convex lower semicontinuous functions φ(x) := αk F (xk+1 , x) + Dh (x, xk ) + δM (x), the relation (5.63) is equivalent to (5.65) 0 ∈ ∂(αk F (xk+1 , ·) + Dh (·, xk ) + δM (·))(xk+1 ). Applying the subdifferential calculus to the sum of convex functions, and using the domain qualification rint dom(F (xk+1 , ·)) ∩ rint dom(Dh (·, xk )) ∩ rint M = rint M = ∅, we have2 0 ∈ ∂(αk F (xk+1 , ·) + δM )(xk+1 ) + ∂Dh (·, xk )(xk+1 ). Let y1 ∈ ∂(αk F (xk+1 , ·) + δM )(xk+1 ) and y2 ∈ ∂Dh (·, xk )(xk+1 ) be such that 0 = y1 + y2 ; then 2

rint M is the relative interior of M , i.e., the interior of M relative to aff M , the intersection of all linear manifolds containing M .

386

5 Applications

−y2 = ∇h(xk ) − ∇h(xk+1 ) ∈ ∂(αk F (xk+1 , ·) + δM )(xk+1 ). In other words, 0 ≤ αk F (xk+1 , xk+1 ) ≤ αk F (xk+1 , x)+(xk+1 −x|∇h(xk )−∇h(xk+1 )) ∀x ∈ M. We then obtain the relation (5.64). For the converse, it suffices to remark that ∇h(xk+1 ) is a subgradient of  the convex function h at xk+1 . Let us remark that for Lemma 5.2.24 we base our argument on the sum formula of subdifferentials that is now a part of the folklore of scalar convex analysis; see Rockafellar [483]. Theorem 5.2.25. (Convergence result of VPA1) Assume that M, f, h, and P satisfy assumptions (A1)–(A3) and (B1)–(B6). Suppose in addition that (5.66) εk = 0 and 0 < λ ≤ αk ≤ Λ < +∞ for each k ∈ N. Then the sequence {xk } generated by (VPA1) converges to a solution x of (WVEP), and for each solution x∗ one has f (x, x∗ ), f (x∗ , x) ∈ − bd P . Proof. By way of Lemma 5.2.24, we have 0 ≤ αk F (xk+1 , x) + (xk+1 − x|∇h(xk ) − ∇h(xk+1 )) ∀x ∈ M.

(5.67)

Choose x = x∗ in the solution set of (WVEP). Then combining (A1) and Remark 5.2.23 yields αk F (xk+1 , x∗ ) ≤ 0 for each k ∈ N, and thus (xk+1 − x∗ |∇h(xk ) − ∇h(xk+1 )) ≥ 0.

(5.68)

Applying Lemma 5.2.22, we obtain Dh (x∗ , xk ) ≥ Dh (x∗ , xk+1 ) + Dh (xk+1 , xk ).

(5.69)

By Lemma 5.2.21, we deduce that the sequence {Dh (x∗ , xk )} is nonnegative and decreasing. Thus {Dh (x∗ , xk )} is convergent. On account of boundedness of the level set L(x∗ , Dh (x∗ , x0 )) := {x ∈ int S | Dh (x∗ , x) ≤ Dh (x∗ , x0 )}, we can confirm that {xk } is bounded. Let x ∈ M be an accumulation point of {xk }. Then for some subsequence kj {x } we have limj→∞ xkj = x. Taking for j ∈ N, uj = xkj and vj = xkj +1 , we have limj→∞ Dh (vj , uj ) = 0. Using (B5), we deduce that {uj } and {vj } have the same limit x. Fix some x ∈ M ; we obtain from (5.67) and (B2) that lim inf F (xkj +1 , x) ≥ lim inf k→+∞

k→+∞

1 kj +1 (x −x|∇h(xkj +1 )−∇h(xkj )) = 0. (5.70) αk

By assumptions (A1), Remark 5.2.23, and (A2) we have

5.2 Solution Procedures

387

F (x, x) ≤ lim inf F (x, xkj +1 ) ≤ 0. j→∞

Thus x is a solution of F (x, x) ≤ 0 for all x ∈ M . The assertion of this theorem is not yet proved. For this, let y ∈ K and consider, for t ∈]0, 1[, xt = ty + (1 − t)x. Since M is convex, then for each t ∈]0, 1[, F (yt , x) ≤ 0. From (A2) (see Remark 5.2.23) it follows that for every t ∈]0, 1[, 0 = F (xt , xt ) ≤ tF (xt , y) + (1 − t)F (xt , x) ≤ tF (xt , y). Letting t  0, from the assumption (A3), upper semicontinuity on [x, y] for the first argument of F yields F (x, y) ≥ 0; it follows that x is a solution of (WVEP). Let us prove that the whole sequence {xk } converges to x. Since x is a solution of (WVEP), we obtain from (5.69) that {Dh (x, xk }) converges to some limit r ≥ 0. Using the Bregman assumption (B4), we have limj→∞ Dh (x, xkj ) = 0. Therefore, r = 0. Let us consider any convergent subsequence {vj = xkj } of {xk } and  x the corresponding limit point. Then, for uj = x ∀ j ∈ N, we have limj→∞ Dh (uj , vj ) = 0, and thus (B5) gives x = x . This implies that the whole sequence {xk } converges to x. Let x∗ be an arbitrary solution of (WVEP); one deduces that F (x, x∗ ) ≥ 0 and F (x∗ , x) ≥ 0. Using (A1), it follows that F (x, x∗ ) = F (x∗ , x) = 0; this completes the proof.  Remark 5.2.26. In the algorithm (VPA1) the iteration xk+1 is obtained through the optimal solution of the equilibrium problem (5.64), which is difficult or even impossible to solve exactly. In practice, we may expect to compute an approximation to the optimal solution xk+1 of Problem (5.64), i.e., εk > 0. Thus we need to take intoaccount that {εk } is a sequence of ∞ √ decreasing positive numbers such that k=0 εk < +∞. We shall leave it to the reader to verify this assertion; we refer to [196]. Remark 5.2.27. Replacing (A1) by ∀x, y ∈ M ∀λ ∈ R one has that x = y and f (x, y) ∈ / λu0 − int P imply f (y, x) ∈ −λu0 − int P ; we can assert that the solution set of (WVEP) is reduced to the unique solution x. Indeed, consider x∗ another solution of (WVEP). One deduces that f (x, x∗ ) and f (x∗ , x) are not in − int P . Using the proposed assumptions  with λ = 0, it follows that f (x, x∗ ) ∈ − int P , a contradiction. Remark 5.2.28. When we suppose that X is a real reflexive Banach space, we need assumptions (B7) and (B8) on the B-function h to realize weak convergence of a subsequence of {xk } to a solution of (WVEP). If f satisfies instead of (A1) the condition in Remark 5.2.27, we conclude that the whole sequence {xk } weakly converges to the unique solution of (WVEP).

388

5 Applications

Proof. As in the proof of the previous theorem, we could justify that {xk } is bounded and Dh (xk+1 , xk ) ≤ Dh (x∗ , xk )−Dh (x∗ , xk+1 ). Taking into account α (B7), we have xk −xk+1 2 ≤ Dh (xk+1 , xk ), and {Dh (x∗ , xk )} is decreasing 2 to some nonnegative limit; thus lim xk − xk+1  = 0.

k→+∞

Coming back to (5.70), and using (A2) and (B2), which ensures the weak lower semicontinuity of F (x, ·), we deduce from (B8) that for all x ∈ M , F (x, x) ≤ lim inf F (x, xkj +1 ) j→∞

1 ≤ lim supxkj +1 − x, ∇h(xkj ) − ∇h(xkj +1 ) λ j→∞ K ≤ lim sup xkj − xkj +1  · xkj +1 − x λ j→∞ ≤ 0. This being true for all x ∈ M , and following the proof of the previous theorem, we may conclude that x is a solution of (WVEP). Since the solution of (WVEP) is unique and equal to x, by considering any weakly converging subsequence of {xk }, the corresponding limit point must be equal to x. This implies that the whole sequence {xk } converges  to x. Remark 5.2.29. When we suppose that X is a real Banach space and, instead of the hypothesis (B7), that F is strongly monotone, i.e., ∃ δ > 0 such that F (x, y) + F (y, x) ≤ −δx − y2 ∀ x, y ∈ M , then the weak convergence of {xk } becomes strong convergence: xk − x −→ 0. If we suppose δ > K/λ, we obtain the rate of convergence  k K k+1 − x ≤ x1 − x0 . x λδ Proof. Setting x = x in (5.67), we have 1 k+1 x − x, ∇h(xk ) − ∇h(xk+1 ) αk 1 ≤ −F (x, xk+1 ) − δx − xk+1 2 + ∇h(xk ) − ∇h(xk+1 ) · xk+1 − x λ K ≤ −δx − xk+1 2 + xk − xk+1  · xk+1 − x. λ

0 ≤ F (xk+1 , x) +

Hence xk+1 − x ≤

K k λδ x

− xk+1 ; and since

lim xk − xk+1  = 0, we

k→+∞

deduce that the sequence {xk } strongly converges to x.

K k 0 x − x1 , and By induction we obtain the estimate xk+1 − x ≤ λδ K the convergence is ensured by λδ < 1.  For some similar results for the convergence of the one-phase proximal algorithm one can refer to [196] and [423].

5.2 Solution Procedures

389

Two-phase proximal algorithms In this paragraph, instead of taking at each iteration an equilibrium point, which can be interpreted as a fixed point of a solution set mapping of optimization problems (i.e., xk+1 ∈ Uk (xk+1 ) := argmin{αk F (xk+1 , x) + Dh (x, xk ) | x ∈ M } the solution set of αk F (xk+1 , ·) + Dh (·, xk ) on M ), we choose a simultaneous optimization method. Algorithm (VPA2). 1. Start at an arbitrary x0 . 2. If xk is the current iterate, the next iterate xk+1 is found from the following two-phase procedure: 1

phase 1: xk+ 2 ← xk is a solution of ∀ x ∈ M , 1

1

1

αk (F (xk , xk+ 2 ) − F (xk , x)) ≤ h(x) − h(xk+ 2 ) − (x − xk+ 2 |∇h(xk )) + εk ; 1

phase 2: xk+1 ← xk+ 2 is a solution of ∀ x ∈ M , 1

1

αk (F (xk+ 2 , xk+1 )−F (xk+ 2 , x)) ≤ h(x)−h(xk+1 )−(x−xk+1 |∇h(xk ))+εk . Remark 5.2.30. Note that the iterations in the two-phase algorithm(VPA2) are equivalent to the following proximal-point method with Bregman distances: 1

1: xk+ 2 ∈ εk − argmin{αk F (xk , x) + Dh (x, xk ) : x ∈ M }; 1 2: xk+1 ∈ εk − argmin{αk F (xk+ 2 , x) + Dh (x, xk ) : x ∈ M }. We now address an important question relating to algorithms (VPA1) and (VPA2): Given any start point x0 ∈ M , does the sequence {xk } generated by these algorithms exist? For the algorithm (VPA1) we need conditions that ensure the existence of vector equilibria, so that similar assumptions as in Theorem 3.9.37 are imposed. For the algorithm (VPA2) of this subsection, we need only conditions 1 that guarantee the existence of εk -optimal solutions xk+ 2 and xk+1 to the problems of minimizing the scalar functions φ1 (x) := αk F (xk , x) + Dh (x, xk ) 1 and φ2 (x) := αk F (xk+ 2 , x) + Dh (x, xk ) over the set-constraints M . 1 When εk > 0, these optimal solutions xk+ 2 and xk+1 always exist. 1 When εk = 0, the existence of xk+ 2 and xk+1 is guaranteed under conditions of existence of the minimum of convex lower semicontinuous functions, for instance when φi , i = 1, 2, are coercive, or M is compact. This two-phase proximal algorithm for the equilibrium problems was suggested by Antipin in a set of papers (see [20–22]) and mainly [196]. The following theorem establishes a global convergence of the twophase algorithm (VPA2).

390

5 Applications

Theorem 5.2.31. (Convergence result of (VPA2)) Assume, in addition to the hypotheses of Theorem 5.2.25, that M, f, h satisfy that there exists γ > 0 such that 0 ≤ Λγ < 1 and for each x1 , x2 , y1 , y2 ∈ M we have ϕ(f (x1 , y1 )) − ϕ(f (x1 , y2 ))−ϕ(f (x2 , y1 )) + ϕ(f (x2 , y2 )) ≥ −γ (Dh (x1 , x2 ) + Dh (y1 , y2 )) .

(5.71)

Then the conclusion of Theorem 5.2.25 remains true. Proof. By a similar argument as in the proof of Lemma 5.2.24, we have for each x in M ,   1 1 1 αk F (xk , xk+ 2 ) − F (xk , x) ≤ (xk+ 2 − x|∇h(xk ) − ∇h(xk+ 2 )) (5.72) and

  1 1 αk F (xk+ 2 , xk+1 ) − F (xk+ 2 , x) ≤ (xk+1 − x|∇h(xk ) − ∇h(xk+1 )). (5.73)

Setting x = xk+1 in (5.72) and x = x∗ a solution of the problem (WVEP) in (5.73), by adding these two last inequalities and using the condition (A1), we may obtain

 1 1 1 1 αk F (xk , xk+ 2 ) − F (xk , xk+1 ) − F (xk+ 2 , xk+ 2 ) + F (xk+ 2 , xk+1 )

 1 1 1 ≤ αk F (xk , xk+ 2 ) − F (xk , xk+1 ) − F (xk+ 2 , x∗ ) + F (xk+ 2 , xk+1 ) 1

1

1

≤ (xk+ 2 − x∗ |∇h(xk )) + (xk+1 − xk+ 2 , ∇h(xk+ 2 )) + x∗ − xk+1 , ∇h(xk+1 ) 1

1

= Dh (x∗ , xk ) − Dh (x∗ , xk+1 ) − Dh (xk+1 , xk+ 2 ) − Dh (xk+ 2 , xk ). (5.74) The last equality follows by expanding the definition of Dh and direct algebra. Using assumption (5.71), it follows that 1

1

1

1

F (xk , xk+ 2 ) − F (xk , xk+1 ) − F (xk+ 2 , xk+ 2 ) + F (xk+ 2 , xk+1 )

1 1  ≥ −αk γ Dh (xk+ 2 , xk ) + Dh (xk+1 , xk+ 2 ) . Now invoking the condition 0 ≤ Λγ < 1, we get 1

1

Dh (x∗ , xk ) ≥ Dh (x∗ , xk+1 ) + Dh (xk+1 , xk+ 2 ) + Dh (xk+ 2 , xk )

1 1  −αk γ Dh (xk+ 2 , xk ) + Dh (xk+1 , xk+ 2 ) ≥ Dh (x∗ , xk+1 ). We obtain that {Dh (x∗ , xk )} is a nonincreasing and nonnegative sequence; thus limk→∞ Dh (x∗ , xk ) exists and 1

1

lim Dh (xk+ 2 , xk ) = lim Dh (xk+1 , xk+ 2 ) = 0.

k→∞

k→∞

As a straight adaptation of the proof of Theorem 5.2.25, we can confirm 1 that the sequences {xk }, {xk+ 2 }, and {xk+1 } converge to some solution x of (WVEP). 

5.2 Solution Procedures

391

Remark 5.2.32. The condition (5.71) simplifies when h(x) = 12 x2 to ∀x1 , x2 , y1 , y2 ∈ M ,  γ F (x1 , y1 )−F (x1 , y2 )−F (x2 , y1 )+F (x2 , y2 ) ≥ − x1 − x2 2 + y1 − y2 2 , 2 which holds when f satisfies the following vector P -H¨older condition: γ y1 − y2 2 u0 ∈ P, 2 γ f (x1 , y1 ) − f (x2 , y1 ) + x1 − x2 2 u0 ∈ P. 2 f (x1 , y1 ) − f (x1 , y2 ) +

(5.75)

Proof. When x1 , x2 , y1 , y2 ∈ M , the P -H¨older condition (5.75) yields γ γ F (x1 , y1 )−F (x1 , y2 ) ≥ − y1 −y2 2 , F (x1 , y1 )−F (x2 , y1 ) ≥ − x1 −x2 2 , 2 2 so that F (x1 , y1 ) − F (x1 , y2 ) − F (x2 , y1 ) + F (x2 , y2 )

  = 12 F (x1 , y1 ) − F (x1 , y2 ) + 12 F (x1 , y1 ) − F (x2 , y1 )

  + 12 F (x2 , y2 ) − F (x2 , y1 ) + 12 F (x2 , y2 ) − F (x1 , y2 )  γ ≥ − Dh (y1 , y2 ) + Dh (x1 , x2 ) + Dh (y2 , y1 ) + Dh (x2 , x1 ) 2  γ = − x1 − x2 2 + y1 − y2 2 . 2 5.2.4 Relaxation and Penalization for Vector Equilibrium Problems In this section we are interested in analyzing the perturbation of the vector equilibrium problem. More precisely, we give conditions under which a relaxation of the domain of feasible decisions M and the penalization of the vector criterion mapping f do not change the set of solutions of the considered problems. Assume that we are given a closed convex cone P with nonempty interior in the space Y = Rm , a closed convex subset M of X = Rn , and f : X×X−→Y . Consider the vector equilibrium problem (P0 )

find x ∈ M such that f (x, y) ∈ − int P for all y ∈ M .

Associated with this problem, let us consider the family of equilibrium problems (Pλ )

find x ∈ D such that f (x, y) + λΦ(x, y) ∈ − int P for all y ∈ D.

Here the replacement of M by D represents a relaxation of the constraints domain, and f + λΦ represents the penalization of the objective function f .

392

5 Applications

We are interested in seeing under what conditions the problems (P0 ) and (Pλ ) are equivalent in the sense that the solution sets S(P0 ) and S(Pλ ) of the two problems coincide. Lemma 5.2.33. Let P and P0 be two cones in Y such that P0 is closed and ∅ = P0 \ {0} ⊂ int P . Then for each ρ > 0, there exists a real δ0 > 0 such that for every δ > δ0 , δP0 ∩ U + B(0, ρ) ⊂ int P, where U := {x ∈ Y : x = 1} and B(0, ρ) := {x ∈ Y : x ≤ ρ}. Proof. Suppose the assertion of the lemma is false. Then we can find some ρ0 > 0 and sequences {kn } (of positive integers) {un } and {vn } such that for each n ∈ N> , kn ≥ n, vn  ≤ ρ0 , un ∈ P0 ∩ U and kn un + vn ∈ / int P. By the compactness of P0 ∩ U in Y , one can find a convergent subsequence of {un }, also denoted by {un }, to some u ∈ P0 ∩ U . / int P , hence that We conclude from “int P is a cone” that un + k1n vn ∈ u∈ / int P when n → +∞, and finally that u ∈ P0 \ int P . This contradicts  our assumption “P0 \ {0} ⊂ int P ”. We are now ready to provide conditions under which a relaxation of the domain and a penalization of the objective vector mapping do not change the set of solutions of (VEP). Theorem 5.2.34. Let M and D be two nonempty compact subsets of X with M ⊂ D. We denote by PM (x) the metric projection of x on M . Suppose that f, Φ : D × D → Y satisfy (A1) there exist L > 0, α > 0 such that f (x, y) ≤ Lx − yα ∀ x ∈ D \ M, ∀ y ∈ PM (x); (A2) Φ is continuous on M × D; (A3) Φ(x, y) ∈ −P ∀ x, y ∈ M ; (A4) there exists a closed cone P0 such that ∅ = P0 \ {0} ⊂ − int P and Φ(x, y) ∈ P0 \ {0} ∀ x ∈ D \ M, ∀ y ∈ PM (x); (A5) for each z ∈ M , there exists a neighborhood V (z) and ε(z) > 0 such that Φ(x, y) ≥ ε(z)x − yα ∀ x ∈ V (z) ∩ (D \ M ), ∀ y ∈ PM (x). Then there exists some μ0 > 0 such that for each μ > μ0 , each solution of the problem (Pμ ) is a solution of (P0 ); i.e., S(Pμ ) ⊂ S(P0 ).

5.2 Solution Procedures

393

Proof. Taking into account (A3), it suffices to find some μ0 > 0 such that for every μ > μ0 , each solution of (Pμ ) is contained in M . Step 1. Since M is compact and covered by the family {V (z) : z ∈ M }, we conclude that there exists a finite subset {z1 , . . . , zk } of M such that M ⊂ ∪ki=1 V (zi ) =: V . Setting ρ := max1≤i≤k ε(zLi ) , which is positive, and using (A1) and (A5), then for every x ∈ V ∩ (D \ M ) and y ∈ PM (x) one has 1 f (x, y) ∈ B(0, ρ). Φ(x, y)

(5.76)

On the other hand, (A4) implies that for every x ∈ D \ M and y ∈ PM (x) one has 1 Φ(x, y) ∈ P0 ∩ U. (5.77) Φ(x, y) Using Lemma 5.2.33, we obtain the existence of some η0 > 0 such that ∀μ > η0 , ∀x ∈ V ∩ (D \ M ) and ∀y ∈ PM (x), 1 (f (x, y) + μΦ(x, y)) ∈ − int P. Φ(x, y)

(5.78)

Multiplying by Φ(x, y), since − int P is a cone, we conclude that for every μ > η0 the solution set of the problem (Pμ ) does not intersect the subset V ∩ (D \ M ). Step 2. Set D0 = D \ V , and consider the marginal function α defined on D0 by α(x) := inf y∈PM (x) Φ(y, x). Since D0 and M are compact sets, one can confirm, via the closedness of the graph of the multifunction PM , that PM is upper continuous on D0 . Also, Φ is continuous on M × D0 . Then using Berge’s maximum theorem (see [60, p. 123], [35, Chap. III, Corollary 9 and Proposition 21]), it follows that the function α is lower semicontinuous on D0 . Since D0 is compact, we deduce that α admits a minimum point x on D0 ; i.e., α(x) = minx∈D0 α(x). Set MΦ := inf x∈D0 ,y∈PM (x) Φ(x, y) = inf x∈D0 α(x). Then taking into account compactness of PM (x), (A2), and (A4), we have for some y ∈ PM (x), MΦ = α(x) =

min Φ(x, y) = Φ(x, y) > 0.

y∈PM (x)

Return now to (A1). We have for every x ∈ D0 , y ∈ PM (x), f (x, y) ≤ Ly − xα ≤

sup x∈D0 ,y∈M

Ly − xα =: Mf .

(5.79)

We have Mf < ∞, since M and D0 are bounded. M Taking ρ := MΦf y, which is positive, and using Lemma 5.2.33, we can find some η1 > 0 such that for every μ > η1 one has

394

5 Applications

1 (f (x, y) + μΦ(x, y)) ∈ − int P ∀x ∈ D0 , ∀y ∈ PM (x). Φ(x, y)

(5.80)

Hence, for every μ > η1 , each solution of (Pμ ) does not belong to D0 . Step 3. Choose μ0 := max(η0 , η1 ). We confirm that each solution of (Pμ ) does not belong to D \ M , and the proof is complete.  Remark 5.2.35. By making use of boundedness of f on {x} × D for every x ∈ M , it is possible to replace (A2) and (A4) by (A2 ) for every x ∈ M the mapping Φ(x, ·) is continuous on D; (A4 ) for every x ∈ M , there exists y ∈ D \ M such that Φ(y, x) ∈ P0 \ {0}. Proof. Indeed, let us follow lines of the proof of the above theorem and pass directly to the second step. Fix z0 ∈ M and use (A2 ); we obtain minx∈D0 Φ(z0 , x) =: MΦ > 0. The boundedness of f implies that Mf := supx∈D0 f (z0 , x) is finite. If we set ρ := Mf /MΦ , it follows from (A4 ) that ρ > 0. Then, following the proof of the above theorem, one can prove the result.  Remark 5.2.36. When the relaxation of the domain D is too large and we have doubts about the control in the assumption (A1), we can restrict our control to a neighborhood of M . But in this case we must suppose the boundedness of f on the whole relaxed domain D × M . So instead of (A1) one can suppose (A1 ) f is bounded on D × M and there exist L > 0, α > 0 and an open subset Ω of X such that M ⊂ Ω and f (x, y) ≤ Ly − xα

∀ x ∈ (D \ M ) ∩ Ω, ∀ y ∈ PM (x).

Then the conclusion of Theorem 5.2.34 remains true. To prove this conclusion it suffices to remark that Mf is finite in (5.79) and can be justified by using only boundedness of f on D0 × M , which is included in D × M .  The next result deals with the inverse inclusion of the solution sets: S(P0 ) ⊂ S(Pμ ). Theorem 5.2.37. Let M and D be two nonempty compact subsets of X with M ⊂ D. Suppose that in addition to (A2) and (A3), the following assumptions are satisfied: (A6) there exist L > 0 and α > 0 and an open subset Ω of X such that M ⊂ Ω and f (x, y)−f (x, z) ≤ Lz −yα ∀ x ∈ M, ∀ y ∈ (Ω ∪D)\M, ∀ z ∈ PM (y); (A7) ∀ y ∈ D, Φ(·, y) is constant on M ;

5.2 Solution Procedures

395

(A8) there exists a closed cone P1 such that ∅ = P1 \ {0} ⊂ int P and Φ(z, y) ∈ P1 \ {0} ∀ y ∈ D \ M, ∀ z ∈ PM (y); (A9) for each x ∈ M , there exists a neighborhood V (x) in Ω and ε(x) > 0 such that Φ(z, y) ≥ ε(x)z − yα ∀ y ∈ V (x) ∩ (D \ M ), ∀ z ∈ PM (y). Then there exists some μ1 > 0 such that for each μ > μ1 , S(P0 ) ⊂ S(Pμ). Proof. Let x ∈ M be a solution of (P0 ). Then from (A3) we immediately have ∀ μ ∈ R, / − int P f (x, y) + μΦ(x, y) ∈

∀ y ∈ M.

(5.81)

Let us verify that (5.81) holds for y ∈ D \ M . Following the lines of Step 1 in the proof of Theorem 5.2.34 and using (A6), (A8), (A9), we can find an open set V ⊂ Ω and η2 > 0 such that ∀ μ > η2 , f (x, y) − f (x, z) + μΦ(z, y) ∈ int P ∀ y ∈ V ∩ (D \ M ), ∀ z ∈ PM (y). (5.82) Let us fix some y ∈ D \ V , which is a compact set. By continuity of Φ on M × D, (A7), and (A8) we have MΦ :=

inf

x∈M,y∈D\V

Φ(x, y) =

inf

inf

y∈D\V z∈PM (y)

Φ(z, y) > 0.

Since D\V and M are compact and f satisfies (A6), we have ∀ y ∈ D\V, ∀ z ∈ PM (y), f (x, y) − f (x, z) ≤ Ly − zα ≤

sup z∈M, y∈D\V

Set ρ :=

Mf MΦ

y − zα := Mf < +∞.

> 0, we have

1 (f (x, y) − f (x, z)) ∈ B(0, ρ) Φ(z, y)

∀ y ∈ D \ V, ∀z ∈ PM (y).

Apply Lemma 5.2.33, (A7), and (A8). There exists η3 > 0 such that ∀ μ > η3 , f (x, y) − f (x, z) + μΦ(z, y) ∈ int P ∀ y ∈ D \ V, ∀z ∈ PM (y).

(5.83)

Using (A7), (5.82), and (5.83) we have ∀ μ > μ1 = max(η2 , η3 ), f (x, y) − f (x, z) + μΦ(x, y) ∈ int P ∀ y ∈ D \ M, ∀z ∈ PM (y).

(5.84)

Let us recall that x is a solution of (P0 ). Then since z ∈ PM (y) ⊂ M , / − int P. f (x, z) ∈

(5.85)

Combining (5.84) and (5.85), and using Y \ int P − int P ⊂ Y \ P ⊂ Y \ int P , we deduce that for μ > η2 , / − int P f (x, y) + μΦ(x, y) ∈

∀ y ∈ D \ M.

(5.86)

If we put μ1 = max(η2 , η3 ), we get (5.81) for every y ∈ D; therefore x is a solution of (Pμ) for every μ > μ1 . 

396

5 Applications

Discrete equilibrium problems. Since in Theorems 5.2.34 and 5.2.37, we suppose M to be only compact (without convexity), one can treat the discrete case. Consider X = Rn and Y = Rp endowed with the sum norms (which are equivalent to the usual Euclidean structure) and the ordering defined by P = Rp+ . The domain is supposed to be discrete, since M = An , where A = {0, 1}. Consider the relaxation of the domain M as ! " D = x = (x1 , . . . , xn ) ∈ X | 0 ≤ xi ≤ 1 ∀ i ∈ I = {1, . . . , n} . The penalty term we consider is defined for (x, y) ∈ D × D by ⎛ ⎛ ⎞ ⎞ (e − x, x) − (e − y, y) 1 ⎜ ⎜ .. ⎟ ⎟ .. p n Φ(x, y) := ⎝ , where e = ∈ R ⎝.⎠∈R . ⎠ . (e − x, x) − (e − y, y)

1

Theorem 5.2.38. Suppose that f : D × D → Y satisfies: (i) there exist L > 0 and an open subset Ω such that M ⊂ Ω and f (x, y) − f (x, z) ≤ Lz − y ∀ x ∈ D, ∀ y ∈ (Ω ∪ D) \ M, ∀ z ∈ PM (y); (ii) f (x, x) = 0 ∀ x ∈ Ω ∩ D. Then there exists μ2 > 0 such that for each μ > μ2 , the solution sets of the problems (P0 ) and (Pμ) coincide; i.e., S(P0 ) = S(Pμ). Proof. To prove this result we will have to verify all assumptions of Theorem 5.2.34 and Theorem 5.2.37. Firstly, we remark that for α = 1, the assumptions (A1) and (A6) are satisfied. Suppose we are given the cones P0 = {u ∈ Y : u1 = · · · = up ≤ 0} and P1 = −P0 . Then assumptions (A2)– (A4) and (A7)–(A8) are obviously satisfied. In order to prove (A5) and (A9), let us for every fixed z ∈ M take the neighborhood V (z) = B(z, 12 ) ∩ Ω, where B(z, 12 ) denotes the open ball around z with radius 12 . Let x ∈ U (z) := V (z) ∩ (D \ M ). Then the subset {y ∈ PM (x) : x ∈ U (z)} is reduced to {z}. Thus, for α = 1, (A5) and (A9) jointly become Φ(x, z) = Φ(z, x) ≥ ε(z)x − z ∀ x ∈ V (z) ∩ (D \ M ).

(5.87)

Take an arbitrary point x ∈ U (z). Then by setting I1 = {i ∈ I : zi = 0} and I2 = {i ∈ I : zi = 1}, we have I = I1 ∪ I2 and

5.3 Location Problems

,

397

-

n   1 1  Φ(x, z) − x − z = p xi (1 − xi ) − xi + (1 − xi ) 2 2 i=1 i∈I1

i∈I2

n 

1 1 xi (1 − xi ) − xi − (1 − xi ) ≥ 2 2 i=1 i∈I1 i∈I2     1 1 1 − xi − ≥ xi (1 − xi ) xi − > 0. 2 2 2 i∈I1

i∈I2

The last inequality is valid because, when i ∈ I1 , 0 ≤ xi ≤ i ∈ I2 , 12 ≤ xi ≤ 1.

1 2,

and when

We conclude that all assumptions (A1)–(A9) are satisfied. Thus for μ > μ2 = max(μ0 , μ1 ), the solution sets S(P0 ) and S(Pμ) coincide. 

5.3 Location Problems 5.3.1 Formulation of the Problem Urban development is connected with conflicting requirements of areas for dwelling, traffic, disposal of waste, recovery, trade, and others. Conflicting potential consists in the problem that on the one hand municipalities require an economical use of urban areas, and on the other hand, demand of urban areas is increasing, even if the population is constant or is decreasing. Up to now this problem has been solved in the following way: • to build more compactly (but this may be connected with a reduction of quality of life); • to build much more into the urban surroundings or into the natural areas (but this is problematic for ecological reasons). Consequently, it is necessary to use the available urban areas in an optimal sense. Sustainable oriented town planning has to solve not only the problem of which institutions or establishments are necessary, but also at which location they are needed, and in each case in dependence on the given supply and inventory. Using methods of location theory may be one way supporting urban planning to determine the best location for a special new construction, establishment, or for equipments. The area of a town can be thought of as a mosaic in which the whole is made of smaller units. Different kinds of units represent different kinds of use areas. In real towns, however, the elements of the mosaic continually change position and shape or disappear to be replaced by other or new elements. The job of the planner is to recognize the opportunities and constraints, to consider acute deficits as well as to present needs produced by this shifting mosaic, and he has to propose a development plan for the town.

398

5 Applications

The aim of this section is to formulate a multiobjective location problem and to present a duality-based algorithm for solving multiobjective location problems. Furthermore, we give a hint to corresponding software for solving multiobjective location problems. Location and approximation problems have been studied by many authors from the theoretical as well as the computational point of view (Chalmet, Francis and Kolen [114], Drezner [163], Drezner and Wesolowsky [164], G¨ unther [240], Kuhn [356], Idrissi, Lefebvre and Michelot [287–289], Idrissi, Loridan and Michelot [290], Gerth, and P¨ ohler [214], Pelegrin and Fern´ andez [460], Wendell, Hurter, and Lowe [562], Hamacher and Nickel [253], Wagner, Mart´ınez-Legaz and Tammer [551], Wanka [554] and many others). An interesting overview on algorithms for planar location problems is given in the book of Hamacher [252]. It would be possible to formulate the problem as a real-valued location problem (Fermat–Weber problem), which is the problem to determine a location x of a new facility such that the weighted sum of distances between p given facilities ai (i = 1, . . . , p) and x is minimal. In its first and simplest form such a problem was posed by the jurist and mathematician Fermat in 1629. He asked for the point realizing the minimal sum of distances from three given points. In 1909 this problem appeared, in a slightly generalized form, in the ¨ pioneering work “Uber den Standort der Industrien” of Weber [559]. Later, F¨ oppl [199] introduced the notation “Vial-Zentrum” for the optimal point. In the following decades, this location problem has influenced a great number of useful generalizations (cf. [476], [170], [135]) and applications (cf. [406]). A location problem in town planning above mentioned leads to a problem of determining the minimal weighted sum of distances between the given facilities and the unknown Vial x. Using this approach it is very difficult to say how the weights λi (i = 1, . . . , p) should be chosen. Another difficulty may arise if the solution of the corresponding location problem is practically not useful. Then we need new weights, and again we don’t know how to choose the weights. So, the following approach is of interest. Here we formulate the problem as a multiobjective location problem Compute the set where

Eff Min (f (R2 ), Rn+ ),



⎞ x − a1 max ⎜ x − a2 max ⎟ ⎟, f (x) := ⎜ ⎝ ⎠ ··· x − an max

x, ai ∈ R2 , (i = 1, . . . , n), xmax = max{| x1 |, | x2 |}.

(P)

5.3 Location Problems

399

Remark 5.3.1. For applications in town planning it is important that we can choose different norms in the formulation of (P). The decision which of the norms will be used depends on the course of the roads in the city or in the district. In Section 5.3.2, we derive a geometric algorithm based on duality assertions (cf. [214]). This algorithm generates the whole set of efficient elements. 5.3.2 An Algorithm for the Multiobjective Location Problem Using duality assertions, we present an algorithm for solving (P) introduced in Section 5.3.1(compare Chalmet, Francis, and Kolen [114], Gerth (Tammer), and P¨ ohler [214]). In Section 3.8.3, we derived the following dual problem for (P): Compute the set Eff Max (f ∗ (B), Rn+ ), (D) ⎞ Y 1 (a1 ) where f ∗ (Y ) := ⎝ · · · ⎠ and Y n (an ) ⎛

B = {Y = (Y 1 , . . . , Y n ), Y i ∈ L(R2 , R) | ∃ λ∗ ∈ int Rn+ with n 

λ∗i Y i = 0,

and

|| Y i ∗ ≤ 1 (i = 1, . . . , n)}.

i=1

Here  · ∗ denotes the Lebesgue nnorm. We can use the conditions i=1 λ∗i Y i = 0, and Y i ∗ ≤ 1 (i = 1, . . . , n) in order to derive an algorithm (cf. [214]). Consider the following sets with respect to the given facilities ai ∈ R2 (i = 1, . . . , n), which are related to the structure of the subdifferential of the maximum norm: s1 (ai ) = {x ∈ R2 | ai1 − x1 = ai2 − x2 ≥ 0}, s2 (ai ) = {x ∈ R2 | ai1 − x1 = ai2 − x2 ≤ 0}, s3 (ai ) = {x ∈ R2 | ai1 − x1 = x2 − ai2 ≥ 0}, s4 (ai ) = {x ∈ R2 | ai1 − x1 = x2 − ai2 ≤ 0},

s5 (ai ) = {x ∈ R2 | ai2 − x2 > |ai1 − x1 |}, s6 (ai ) = {x ∈ R2 | x2 − ai2 > |ai1 − x1 |}, s7 (ai ) = {x ∈ R2 | ai1 − x1 > |ai2 − x2 |}, s8 (ai ) = {x ∈ R2 | x1 − ai1 > |ai2 − x2 |}. Moreover, we introduce the sets Sr := {x ∈ N | ∃ i ∈ {1, . . . , n}

and x ∈ sr (ai )}

400

5 Applications

(r = 5, 6, 7, 8), where N denotes the smallest level set of the dual norm to the maximum norm (Lebesgue norm) containing the points ai (i = 1, . . . , n). Now, we are able to describe the following primal-dual algorithm for solving the multiobjective location problem (P) (see [214] and Figure 5.3.2): ! " XEff = (cl S5 ∩ cl S6 ) ∪ [(N \ S5 ) ∩ (N \ S6 )] ∩ ! " (cl S7 ∩ cl S8 ) ∪ [(N \ S7 ) ∩ (N \ S8 )] . This whole set of solutions XEff to the multiobjective location problem (P) in the decision space R2 is illustrated in Figure 5.3.2. Improvements and extensions of the primal-dual algorithm for solving (P) are given by Alzorba, G¨ unther, Popovici, and Tammer in [14]. Algorithms for solving scalar as well as multiobjective location problems are included in the software FLO (https://project-flo.de).

Figure 5.3.2. The solution set of the multiobjective location problem (P) with the maximum norm (in red color) as well as with the Lebesgue norm (in blue color) generated using the software FLO (https://project-flo.de).

5.4 Multiobjective Control Problems 5.4.1 The Formulation of the Problem In the present section, we study suboptimal controls of a class of multiobjective optimal control problems.

5.4 Multiobjective Control Problems

401

In control theory, one often has the problem to minimize more than one objective function, for instance, a cost functional as well as the distance between the final state and a given point. To realize this task, one usually takes as objective function a weighted sum of the different objectives. However, the more natural way would be to study the set of efficient points of a vector optimization problem with the given objective functions. It is well known that the weighted sum is only a special surrogate problem to that of finding efficient points, which has the disadvantage that in the nonconvex case one cannot find all efficient elements in this way. Necessary conditions for solutions of multiobjective dynamic programming or control problems have been derived by several authors; see Kl¨ otzler [337], Benker and Kossert [59], Breckner [92], Gorochowik and Kirillowa [233], Leugering and Schiel [373], Tammer [530], and Salukvadze [494]. It is difficult to show the existence of an optimal control (see Kl¨otzler [337]), whereas suboptimal controls exist under very weak assumptions. So it is important to derive some assertions for suboptimal controls. Ekeland-type variational principles (compare Section 3.11, [180], [181], [35]) are very useful results in optimization theory; they say that there exists an exact solution of a slightly perturbed optimization problem in a neighborhood of an approximate solution of the original problem. The aim of this section is to derive an ε-minimum principle for suboptimal controls of multiobjective control problems from the vector-valued variational principle. We formulate a multiobjective control problem with an objective function that takes its values in the m-dimensional Euclidean space Rm . Throughout this section, let us assume: (A1)

(V, d) is a complete metric space, C ⊂ Rm is a pointed, closed, convex cone with nonempty interior and k 0 ∈ C \ (−C). (A2) Let F : V −→ Rm and x0 ∈ V . The set {x ∈ V | F (x) ≤C F (x0 ) + rk 0 } is closed for every r ∈ R. For some ε > 0, it holds that F (V ) ∩ F (x0 ) − εk 0 − (C \ {0}) = ∅. The following corollary follows immediately from the variational principle in Corollary 3.11.14 regarding the fact that under the given assumptions there always exists an approximately efficient element v0 ∈ V with F (v0 ) ∈ Eff(F (V ), Cεk0 ). Corollary 5.4.1. Assume (A1) and (A2). Then, for every ε > 0 there exists some point vε ∈ V such that (i) F (V ) ∩ (F (vε ) − εk 0 − (C \ {0})) = ∅, (ii) Fεk0 (V ) ∩ (Fεk0 (vε ) − (C \ {0})) = ∅, where Fεk0 (v) := F (v) + d(v, vε )εk 0 .

402

5 Applications

Remark 5.4.2. Corollary 3.11.14 is slightly stronger than Corollary 5.4.1. The main difference concerns condition (3.126) in Corollary 3.11.14, which gives the whereabouts of point xε in V . Remark 5.4.3. The main result of Corollary 5.4.1 says that vε is an efficient solution of a slightly perturbed vector optimization problem. This statement can be used to derive necessary conditions for approximately efficient elements. In the next section, we will use Corollary 5.4.1 in order to derive an ε-minimum principle in the sense of Pontryagin for suboptimal solutions of multiobjective control problems. 5.4.2 An ε-Minimum Principle for Multiobjective Optimal Control Problems In this section, we will give an application of the multiobjective variational principle in Corollary 5.4.1 to control problems. Consider the system of differential equations dx dt (t)

x(0)

⎫ = ϕ(t, x(t), u(t)), ⎬ ⎭ = x ∈ Rn

(5.88)

0

and the control restriction u(t) ∈ U, which must hold almost everywhere on [0, T ] with T > 0. We assume that (C1) ϕ : [0, T ] × Rn × U −→ Rn is continuous and U is a compact set. The vector x(t) describes the state of the system, u(t) is the control at time t and belongs to the set U . Furthermore, suppose that ∂ϕ , i=1,. . . ,n, are continuous on [0, T ] × Rn × U , (C2) ∂x i (C3) (x, ϕ(t, x, u)) ≤ c(1 + x2 ) for some c > 0. Remark 5.4.4. Let u : [0, T ] −→ U be a measurable control. Condition (C2) and the continuity of ϕ ensure that there exists a unique solution x of the differential equation (5.88) on [0, τ ] for a sufficiently small τ > 0. By using Gronwall’s inequality, condition (C3) implies x(t)2 ≤ (x0 2 + 2cT )e2cT , and hence ensures the existence of the solution on the whole time interval [0, T ]. Moreover, the last inequality yields    dx(t)     ≤ max{ϕ(t, x, u) | (t, x, u) ∈ [0, T ] × B 0 × U },   dt 

5.4 Multiobjective Control Problems

403

1

where B 0 denotes the ball of radius (x0 2 + 2cT ) 2 ecT . Applying Ascoli’s theorem, we see that the family of all trajectories x of the control system (5.88) is equicontinuous and bounded, and hence relatively compact in the uniform topology (compare Ekeland [180]). In order to formulate the multiobjective control problem, we introduce the objective function f : Rn −→ Rm and suppose that (C4) f is a differentiable vector-valued function, (C5) C ⊂ Rm is a pointed, closed, convex cone with nonempty interior and k 0 ∈ int C. Let F : u −→ f (x(T )) and u0 ∈ V . The set {x ∈ V | F (u) ≤C F (u0 ) + rk 0 } is closed for every r ∈ R. For some ε > 0, it holds that F (V ) ∩ F (u0 ) − εk 0 − (C \ {0}) = ∅. Under the assumptions (C1)–(C5), we formulate the multiobjective optimal control problem: (P) Find some measurable control u ¯ such that the corresponding trajectory x ¯ satisfies f (x(T )) ∈ / f (¯ x(T )) − (C \ {0}) for all solutions x of (5.88). It is well known that it is difficult to show the existence of optimal (or efficient) controls of (P), whereas suboptimal controls exist under very weak conditions. So it is important to derive some assertions for suboptimal controls. An application of a variational principle for vector optimization problems yields an ε-minimum principle for (P), which is closely related to Pontryagin’s minimum principle (for ε = 0). Now we apply Corollary 5.4.1 in order to derive an ε-minimum principle for the multiobjective optimal control problem (P). We introduce the space V of controls, defined as the set of all measurable functions u : [0, T ] −→ U with the metric d(u1 , u2 ) = meas{t ∈ [0, T ] : u1 (t) = u2 (t)}. In order to prove our main result, we need the following lemmas: Lemma 5.4.5. (Ekeland [180]) (V, d) is a complete metric space. Lemma 5.4.6. (Ekeland [180]) The function F : u −→ f (x(T )) is continuous on V , where x(·) is the solution of (5.88) depending on u ∈ V . Theorem 5.4.7. Consider the multiobjective control problem (P) under the assumptions (C1)–(C5). Then for every ε > 0, there exists a measurable control uε with the corresponding admissible trajectory xε such that

404

5 Applications

1. f (x(T )) ∈ / f (xε (T )) − εk 0 − (C \ {0}) for all solutions x of (5.88), 2. ⎞ ⎛ ⎞ ⎛ (ϕ(t, xε (t), uε (t)) | p1ε (t)) (ϕ(t, xε (t), u(t)) | p1ε (t)) ⎠∈ ⎠−εk 0 −int C ⎝ ··· ··· /⎝ m m (ϕ(t, xε (t), u(t)) | pε (t)) (ϕ(t, xε (t), uε (t)) | pε (t)) for every u ∈ U and almost all t ∈ [0, T ], where pε (·) = (p1ε (·), . . . , pm ε (·)) is the solution of the linear differential system ⎫ n ∂ϕj dpsεi s for i = 1, . . . , n; ⎬ j=1 ∂xi (t, xε (t), uε (t))pεj dt (t) = − (5.89) ⎭ ps (T ) = f  (x (T )) for s = 1, . . . , m. ε

s

ε

Proof. Suppose that (C1)–(C5) are satisfied. We consider the vector-valued function F : u −→ f (x(T )) and the space V of measurable controls u : [0, T ] −→ U . From Lemma 5.4.5, we get that (V, d) is a complete metric space. Together with Lemma 5.4.6, we obtain that the assumptions of Corollary 5.4.1 are satisfied, and we can apply Corollary 5.4.1 to the vector-valued function F . This yields a measurable control uε ∈ V such that (i) F (uε ) ∈ Eff(F (V ), Cεk0 ), (ii) F (uε ) ∈ Eff(Fεk0 (V ), C), where

Fεk0 (u) := F (u) + εk 0 d(u, uε ).

The corresponding admissible trajectory xε to uε satisfies dxε (t) = ϕ(t, xε (t), uε (t)) dt

(5.90)

for almost all t ∈ [0, T ] and xε (0) = x0 . So we derive from (i) f (x(T )) ∈ / f (xε (T )) − εk 0 − (C \ {0}) for all solutions x of (5.88); i.e., statement 1 is satisfied. In order to prove statement 2, we take t0 ∈ (0, T ), where equality (5.90) holds, u0 ∈ U , and define vτ ∈ V for τ ≥ 0 by u0 : t ∈ [0, T ] ∩ (t0 − τ, t0 ), vτ (t) := / [0, T ] ∩ (t0 − τ, t0 ). uε (t) : t ∈ For a sufficiently small τ , d(uε , vτ ) = meas{t | vτ (t) = uε (t)} ≤ τ.

5.4 Multiobjective Control Problems

405

Furthermore, for the corresponding admissible trajectory xτ (compare Pallu de la Barri`ere [45]), d fs (xτ (T )).. = (ϕ(t0 , xε (t0 ), u0 ) − ϕ(t0 , xε (t0 ), uε (t0 )) | psε (t0 )), (5.91) τ =0 dτ s = 1, . . . , m, where pε = (p1ε , . . . , pm ε ) satisfies (5.89). Now we can conclude from (ii) for u = vτ that / F (uε ) − (C \ {0}) F (uτ ) + εk 0 d(uτ , uε ) ∈ and

f (xτ (T )) ∈ / f (xε (T )) − εk 0 d(uτ , uε ) − (C \ {0}).

For sufficiently small τ we get / f (xε (T )) − ετ k 0 − (C \ {0}), f (xτ (T )) ∈

τ > 0.

This implies f (xτ (T )) − f (xε (T )) ∈ / −εk 0 − (C \ {0}), τ and lim

τ →+0

f (xτ (T )) − f (xε (T )) ∈ / −εk 0 − (C \ {0}), τ

τ > 0,

τ > 0,

i.e., d f (xτ (T )).. ∈ / −εk 0 − int C. τ =0 dτ Finally, (5.91) yields ⎛ ⎞ (ϕ(t0 , xε (t0 ), u0 ) − (ϕ(t0 , xε (t0 ), uε (t0 )) | p1ε (t0 )) ⎝ ⎠∈ ··· / −εk 0 − int C m (ϕ(t0 , xε (t0 ), u0 ) − (ϕ(t0 , xε (t0 ), uε (t0 )) | pε (t0 )) for an arbitrary u0 ∈ U and almost all t0 ∈ [0, T ]. Hence, ⎞ ⎛ ⎞ ⎛ (ϕ(t, xε (t), uε (t)) | p1ε (t)) (ϕ(t, xε (t), u(t)) | p1ε (t)) ⎠∈ ⎠−εk 0 −int C ⎝ ··· ··· /⎝ m m (ϕ(t, xε (t), u(t)) | pε (t)) (ϕ(t, xε (t), uε (t)) | pε (t)) for an arbitrary u ∈ U and almost all t ∈ [0, T ].



Remark 5.4.8. If we put ε = 0, then Theorem 5.4.7 coincides with the following assertion: Whenever there is a measurable control uε and the corresponding admissible trajectory xε with f (x(T )) ∈ / f (xε (T )) − (C \ {0})

406

5 Applications

for all solutions x of (5.88), then ⎛

⎞ ⎛ ⎞ (ϕ(t, xε (t), u(t)) | p1ε (t)) (ϕ(t, xε (t), uε (t)) | p1ε (t)) ⎝ ⎠∈ ⎠ − int C ··· ··· /⎝ m m (ϕ(t, xε (t), u(t)) | pε (t)) (ϕ(t, xε (t), uε (t)) | pε (t)) for every u ∈ U and almost all t ∈ [0, T ], where pε (·) is the solution of the linear differential system (5.89). This means that uε satisfies a minimum principle for multiobjective control problems in the sense of Pontryagin (compare Gorochowik and Kirillowa [233]). Remark 5.4.9. Now let us study the special case Y = R1 and put ε = 0. Then Theorem 5.4.7 coincides with the following assertion: Whenever (i) f (xε (T )) ≤ inf f (x(T )) holds, then (ii) (ϕ(t, xε (t), uε (t)) | pε (t)) ≤ minu∈U (ϕ(t, xε (t), u(t)) | pε (t)) almost everywhere on [0,T], where pε (·) is the solution of (5.89). This is the statement of Pontryagin’s minimum principle. However, Theorem 5.4.7 holds even if optimal solutions do not exist. In Theorem 5.4.7, we have proved an ε-minimum principle without any scalarization of the multiobjective optimal control problem. Finally, we derive an ε-minimum principle using a suitable scalarization with a functional of type (2.42) with z := ϕC,k0 : Rm −→ R (under the assumptions C ⊂ Rm is a pointed, closed, convex cone with nonempty interior and k 0 ∈ int C, see (C5)) defined for y ∈ Rm by z(y) = ϕC,k0 (y) = inf{t ∈ R | y ∈ −C + tk 0 }.

(5.92)

In Section 2.4, Corollary 2.4.5, we have shown that the functional z in (5.92) is continuous, sublinear, and strictly (int C)-monotone. Theorem 5.4.10. Consider the multiobjective control problem (P) under the assumptions (C1)–(C5). Then for every ε > 0, there exists a measurable control uε with the corresponding admissible trajectory xε such that 1. f (x(T )) ∈ / f (xε (T )) − εk 0 − (C \ {0}) 2.

for all solutions x of (5.88), ⎛ ⎞  (ϕ(t, xε (t), u(t)) − ϕ(t, xε (t), uε (t)) | p1ε (t))  ⎠ ≥ −ε, ··· z ⎝ m (ϕ(t, xε (t), u(t)) − ϕ(t, xε (t), uε (t)) | pε (t))

5.4 Multiobjective Control Problems

407

for every u ∈ U and almost all t ∈ [0, T ], where z : Rm −→ R1 is the continuous, sublinear, strictly (int C)-monotone functional defined by (5.92), and pε = (p1ε , . . . , pm ε ) is the solution of the linear differential system (5.89). Proof. Under the given assumptions we can conclude from the second statement of Theorem 5.4.7 that ⎞ ⎛ (ϕ(t, xε (t), u(t)) − ϕ(t, xε (t), uε (t)) | p1ε (t)) ⎠∈ ⎝ ··· / −εk 0 − int C (5.93) m (ϕ(t, xε (t), u(t)) − ϕ(t, xε (t), uε (t)) | pε (t)) for every u ∈ U and almost all t ∈ [0, T ], where pε = (p1ε , . . . , pm ε ) is the solution of (5.89). The functional z in (5.92) has for y ∈ Rm and t ∈ R the property (compare Section 2.4, Theorem 2.4.1) z(y) ≥ t ⇐⇒ y ∈ / − int C + tk 0 , so we can conclude from (5.93) that ⎞ ⎛  (ϕ(t, xε (t), u(t)) − ϕ(t, xε (t), uε (t)) | p1ε (t))  ⎠ ≥ −ε, ··· z ⎝ m (ϕ(t, xε (t), u(t)) − ϕ(t, xε (t), uε (t)) | pε (t)) for every u ∈ U and almost all t ∈ [0, T ], where pε (·) is the solution of (5.89).  Remark 5.4.11. Vector-valued optimal control problems with PDE-constraints and characterizations of corresponding solutions via scalarization by means of the functional (2.42) are studied by Leugering and Schiel in [373], compare also [532]. 5.4.3 A Multiobjective Stochastic Control Problem In this section, we consider a multiobjective stochastic control problem and derive necessary conditions for approximate solutions of the control problem using a multiobjective variational principle of Ekeland’s type (see [236, 272]). The restrictions in the multiobjective stochastic control problem are formulated by dynamical equations. The solution of these dynamical equations can be obtained by applying the Girsanov measure transformation. Furthermore, the objective functions are terminal cost gi (x(1)), for which we consider the expected value of control u, i.e., Eu [gi (x(1))] = Fi (u) (i = 1, . . . , l), where Eu denotes the expectation constructed from control u. We introduce the following model of a partially observed stochastic control problem:

408

5 Applications

Let {Bt }t∈[0,1] be a Brownian motion on a probability space (Ω, A, μ) taking values in Rm and let C be the space of continuous functions from [0, 1] to Rm endowed with the usual filtering Ft = σ(x(s) : s ≤ t) of measurable states up to time t. The σ-algebra of measurable states depending on t is denoted by Ft = σ(xs | s ≤ t). In the following we assume (C1) σ is an m × m matrix-valued mapping σ = (σij ) defined on [0, 1] × C with C = C[0, 1]; σ(t, x) is nonsingular; for 1 ≤ i, j ≤ m, σij (t, x) is Ft -measurable in its second argument and Lebesgue measurable in its first; each σij satisfies a uniform Lipschitz condition in x, xs = sup |x(t)|; 0≤t≤s

there is a constant k0 < ∞ such that  1 2 σij dt ≤ k0

a.s.P.

0

(a.s.P. means: almost surely with respect to P). Consider the stochastic differential equation dx(t) x(0)

= f (t, x, u)dt + σ(t, x)dBt , 0 ≤ t ≤ 1,



= x0 ∈ Rm ,

(5.94)

where x(t) splits into an observed component y(t) ∈ Rn and an unobserved component z(t) ∈ Rm−n . Furthermore, consider the observation σ-algebra Yt generated by {y(s) : s ≤ t}. Definition 5.4.12. An admissible partially observable feedback control is defined to be a Yt -predictable mapping u : [0, 1] × C −→ U , where U is a Borel subset of R, such that E|u(τ, .)| < +∞. The set of such controls is denoted by V . We define on V , for u1 , u2 ∈ V , d(u1 , u2 ) = P˜ ({(t, x) | |u1 (t, x) − u2 (t, x)| > 0}),

(5.95)

where P˜ is the product measure of λ and P (λ is the Lebesgue measure on ¯ [0, 1] and P is the probability measure on (C, F1 ) induced by the solution x of  d¯ x(t) = σ(t, x ¯)dBt , (5.96) x ¯(0) = x0 ∈ Rm , i.e., P (A) = μ{ω ∈ Ω : x ¯(ω) ∈ A} for A ∈ F1 ). We will see that (V, d) is a complete metric space (cf. Elliott and Kohlmann [184]). Moreover, we suppose that

5.4 Multiobjective Control Problems

409

(C2) ϕ : [0, 1] × C × U −→ Rm is measurable, causal, and continuous in its third component. We recall that a mapping Φ : [0, 1] × C −→ Rm is called causal if Φ is optional and |σ −1 (t, x)Φ(t, x)| ≤ M (1 + xt ). Under these assumptions, we can apply Girsanov’s theorem in order to construct a probability measure Pu on (C, F1 ) that is absolutely continuous with respect to P and a Wiener process {wtu }t∈[0,1] on (C, F1 , Pu ) such that (C, F1 , Pu , {wtu }t∈[0,1] , ξ), where ξ is the canonical process on C (i.e., ξ(t, x) = x(t)), is a weak solution of (5.94) for each u ∈ V . In fact, {wt }t∈[0,1] defined by  t wt (x) = σ −1 (s, x)dx(s) 0

is a Wiener process on (C, F1 , P ) and by Girsanov’s theorem {wtu }t∈[0,1] defined by dwtu (x) = dwt (x) − σ −1 (t, x)f (t, x, u(t, x))dt = σ −1 (t, x) [dx(t) − f (t, x, u(t, x))dt] is a Wiener process on (C, F1 , Pu ) where Pu is the probability measure on (C, F1 ) defined by  1  Pu (A) = exp f T (t, x, u(t, x))(σ(t, x)σ T (t, x))−1 dx(t) A

1 − 2



1

0

T

T

−1

f (t, x, u(t, x))(σ(t, x)σ (t, x)) 0

=: A

 f (t, x, u(t, x))dt P (dx)

p10 (u)P (dx)

for A ∈ F1 and we have dx(t) = f (t, x, u(t, x))dt + σ(t, x)dwtu (x). In order to formulate the multiobjective stochastic control problem, we introduce the multiobjective function ⎞ ⎛ Eu [g1 (x(1))] ⎠, ··· J(u) := ⎝ Eu [gl (x(1))] where Eu denotes the expectation (constructed from the control u) with respect to Pu ; gi are bounded F1 -measurable functions. We suppose

410

5 Applications

(C3) C ⊂ Rl is a pointed, closed, convex cone with nonempty interior, k 0 ∈ C \ (−C). Let u0 ∈ V . The set {u ∈ V | J(u) ≤C J(u0 ) + rk 0 } is

closed for0 every r ∈ R. For some ε > 0, it holds that J(V ) ∩ J(u0 ) − εk − (C \ {0}) = ∅. Under the assumptions (C1)–(C3), we formulate the multiobjective stochastic control problem: ¯ such that (PC ) Compute a feasible control u J(u) ∈ / J(¯ u) − (C \ {0}) for all admissible controls u. In this way we study an extension of the stochastic control problem introduced by Elliott and Kohlmann [184], [185]. Consider the space V of all partially observable admissible controls and the distance d on V introduced by (5.95). Lemma 5.4.13. (cf. Elliott and Kohlmann [184]) (V, d) is a complete metric space. Furthermore, we introduce a vector-valued mapping F associated with the multiobjective control problem (PC ) for which the assumptions of the variational principle in Corollary 5.4.1 are satisfied. Lemma 5.4.14. (cf. Elliott and Kohlmann [184]) Suppose (C1)–(C3) hold. Then the mapping F : (V, d) −→ (Rl ,  · Rl ) defined by ⎞ ⎛ Eu [g1 (x(1))] ⎠ ··· F (u) := ⎝ Eu [gl (x(1))] is continuous. Remark 5.4.15. Lemmas 5.4.13 and 5.4.14 show together with assumption (C3) that the assumptions (A1), (A2) of Corollary 5.4.1 are satisfied for the multiobjective stochastic control problem (PC ). Theorem 5.4.16. Assume that (C1)–(C3) hold. Then for every ε > 0 there exists an Ft -predictable process γε with the admissible control uε ∈ V such that for every t ∈ [0, 1], every A ∈ Yt , and every admissible control u ∈ V the following statements are true: 1. For the control uε ∈ V , ⎞ ⎛ ⎞ ⎛ Euε [g1 (x(1))] Eu [g1 (x(1))] ⎠∈ ⎠ − εk 0 − (C \ {0}). ⎝ ··· ··· /⎝ Eu [gl (x(1))] Euε [gl (x(1))]

5.4 Multiobjective Control Problems

411

2. For τ > 0 we get the following assertion: ⎛  t+τ t ⎞ p (u )pt+τ (u)γ1ε σ −1 (fsu − fsuε )dP ds t A 0 ε t ⎝ ⎠∈ ··· / −εk 0 τ P (A)−(C\{0}),  t+τ t p (u )pt+τ (u)γlε σ −1 (fsu − fsuε )dP ds t A 0 ε t where fsu := f (s, x, u(s, x)). Proof. We consider the vector-valued function F : u −→ J(u) and the space V of admissible controls u : [0, T ] → U . From Lemma 5.4.13 we get that (V, d) is a complete metric space. Lemma 5.4.14 yields that F is lower semicontinuous. Furthermore, the boundedness condition is fulfilled. So we can conclude that the assumptions of Corollary 5.4.1 are satisfied, and we can apply Corollary 5.4.1 to the vector-valued function F . This yields an admissible control uε ∈ V such that (i) F (uε ) ∈ Eff(F (V ), Cεk0 ), (ii) Fεk0 (uε ) ∈ Eff(Fεk0 (V ), C), where Fεk0 (u) := F (u) + εk 0 d(u, uε ). So we derive from (i) J(u) ∈ / J(uε ) − εk 0 − (C \ {0}) for all feasible controls u; i.e., statement 1 is satisfied. Furthermore, the families of conditional expectations Gti = Euε [gi (x(1)) | Ft ],

i = 1, . . . , l,

are martingales and thus have the representation  t Gti = Fi (uε ) + γiε dwε , i = 1, . . . , l, 0

where wε is the process defined by dwε = σ −1 (dx − f uε dt), so that we can conclude from Girsanov’s theorem that wε is a Brownian motion under the measure Puε . In order to prove statement 2, we take t ∈ [0, 1], A ∈ Yt , and u ∈ V and define vτ ∈ V for τ > 0 by u(s, x) : (s, x) ∈ (t, t + τ ] × A, vτ (s, x) := uε (s, x) : (s, x) ∈ [0, t] × C ∪ (t, t + τ ] × A ∪ (t + τ, 1] × C, where A = Ω \ A.

412

5 Applications

The indicator function of B := (t, t + τ ] × A, denoted by ιB , is a Yt predictable map. Regarding that vτ can be written as ιB u + ιB  uε , it follows that vτ is predictable and an admissible control in V . Now, we apply the martingale representations given above for t = 1 and i = 1, . . . , l:  1

gi (x(1)) = Fi (uε ) +

0

γiε dwε .

So, we get /  Evτ [gi ] = Fi (vτ ) = Fi (uε ) + Evτ ιA

t+τ

0 γiε σ −1 (f u − f uε )ds .

t

Then, we may conclude from statement (ii) in Corollary 5.4.1, Fεk0 (uε ) ∈ Eff(Fεk0 (V ), C), such that

F (vτ ) + εk 0 d(uτ , uε ) ∈ / F (uε ) − (C \ {0}).

Regarding d(vτ , uε ) ≤ τ P (A), it follows that F (vτ ) + εk 0 τ P (A) ∈ / F (uε ) − (C \ {0}). Together with the definition of Puτ and the properties given above, we derive ⎛  t+τ  t ⎞ p (u )pt+τ (u)γ1ε σ −1 (fsu − fsuε )dP ds t A 0 ε t ⎝ ⎠∈ ··· / −εk 0 τ P (A)−(C\{0}).  t+τ  t t+τ p (u )p (u)γlε σ −1 (fsu − fsuε )dP ds t A 0 ε t  Using the martingale representation results given above, we derive the following stochastic ε−Pontryagin minimum principle. This is a condition of the following kind: uε must be an εk 0 -weakly efficient element of the conditional expectation of a certain Hamiltonian. Theorem 5.4.17. Consider the stochastic control problem (PC ) under the assumptions (C1)–(C3). Then, for each ε > 0, there exists an admissible control uε , such that 1.



⎞ ⎛ ⎞ Eu [g1 (x(1))] Euε [g1 (x(1))] ⎝ ⎠∈ ⎠ − εk 0 − (C \ {0}) ··· ··· /⎝ Eu [gl (x(1))] Euε [gl (x(1))] for all feasible controls u ∈ V ,

5.4 Multiobjective Control Problems

2.

413



⎞ ⎛ ⎞ E[p1ε σ −1 ftu | Yt ] E[p1ε σ −1 ftuε | Yt ] ⎝ ⎠∈ ⎠ − εk 0 − int C ··· ··· /⎝ −1 u −1 uε E[plε σ ft | Yt ] E[plε σ ft | Yt ] for all u ∈ V and the Ft -predictable process ⎞ ⎛ t p0 (uε )γ1ε ⎠. ··· pε = ⎝ pt0 (uε )γlε

Proof. We will differentiate the left-hand side in the vector-valued inequality of the last theorem. From the second condition in the assertions of Corollary 5.4.1, we derive for (τ > 0) ⎛  t+τ t ⎞ p (u )pt+τ (u)γ1ε σ −1 (fsu − fsuε )dP ds A 0 ε t 1⎝ t 1 ⎠∈ ··· / −εk 0 P (A)− (C\{0}).   τ τ t+τ pt (u )pt+τ (u)γlε σ −1 (fsu − fsuε )dP ds t A 0 ε t This yields, regarding that C is a cone, ⎛  t+τ  t ⎞ p (u )pt+τ (u)γ1ε σ −1 (fsu − fsuε )dP ds A 0 ε t 1⎝ t ⎠∈ ··· lim / −εk 0 P (A)− int C.  t+τ  t τ →0 τ t+τ −1 u uε p (u )p (u)γlε σ (fs − fs )dP ds t A 0 ε t Now, we compute the left-hand side of this variational inequality. We observe that Yt is countably generated for any rational number r, 0 ≤ r ≤ 1, by sets {Anr }, n = 1, 2, . . ., since the trajectories are continuous, almost surely. Furthermore, unr can be considered as an admissible control over the time interval (t, t + τ ] for t ≥ r, and we can consider a perturbation of uε by unr for t ≥ r and x ∈ A ∈ Yt , as in the above section. Under the given assumptions, the following limit exists and ⎛  t+τ  ⎞ t+τ −1 p (u )γ σ (f (s, x, u ) − f (s, x, u ))dP ds ε 1ε mr ε Anr 0 1⎜ t ⎟ ··· lim ⎝ ⎠   τ →0 τ t+τ t+τ −1 p (u )γ σ (f (s, x, u ) − f (s, x, u ))dP ds ε lε mr ε t Anr 0 ⎛

⎞ pt0 (uε )γ1ε σ −1 (f (s, x, umr ) − f (s, x, uε ))dP ⎠ ··· =⎝ t −1 p (u )γ σ (f (s, x, umr ) − f (s, x, uε ))dP Anr 0 ε lε Anr

for almost all t ∈ [0, 1]; i.e., there is a set T1 ⊂ [0, 1] of zero measure such that the equation given above is true for t ∈ / T1 and all n, m, r. Moreover, / T2 , there is a set T2 ⊂ [0, 1] of zero measure such that if t ∈ ⎛  t+τ ⎞ ⎞ ⎛ 2 2 Euε (γ1u )ds Euε (γ1ε )ds ε 1⎝ t ⎠. ⎠=⎝ ··· ··· lim  t+τ τ →0 τ 2 2 Euε (γluε )ds Euε (γlε )ds t

414

5 Applications

Then, we can conclude by applying Lemma 5.1 in Elliott and Kohlmann [184]: ⎛  t+τ  ⎞ t+τ t −1 p (u )p (u )γ σ (f (s, x, u ) − f (s, x, u ))dP ds ε mr 1ε mr ε t 0 t A nr 1⎜ ⎟ ··· lim ⎝ ⎠  t+τ  τ →0 τ t+τ t −1 p (u )p (u )γ σ (f (s, x, u ) − f (s, x, u ))dP ds ε mr lε mr ε t t Anr 0 ⎛

⎞ pt0 (uε )γ1ε σ −1 (f (s, x, umr ) − f (s, x, uε ))dP ⎠ ··· =⎝ t −1 p (u )γ σ (f (s, x, u ) − f (s, x, u ))dP ε lε mr ε Anr 0 Anr

for t ∈ / T1 ∪ T2 , all r ≤ t, and all n, m. Finally, this implies that ⎞ ⎛ ⎞ ⎛ E[p1ε σ −1 ftuε | Yt ] E[p1ε σ −1 ftu | Yt ] ⎠∈ ⎠ − εk 0 − int C ⎝ ··· ··· /⎝ −1 u −1 uε E[plε σ ft | Yt ] E[plε σ ft | Yt ] for all u ∈ V and a Ft -predictable process ⎞ ⎛ t p0 (uε )γ1ε ⎠. ··· pε = ⎝ t p0 (uε )γlε The proof is complete.



Remark 5.4.18. Using martingale representation results and a variational principle for multiobjective optimization problems we obtained a necessary condition that uε must satisfy. In fact, uε is an ε-weakly minimal solution a.s.P˜ . in the sense of multiobjective optimization for the conditional expectation of a certain Hamiltonian of the stochastic system. Here the expectation is taken with respect to the observed σ-field. Remark 5.4.19. The necessary condition presented in Theorem 5.4.17 is used for the development of a numerical algorithm by Heyde, Grecksch, Tammer in [272] and Grecksch, Heyde, Isac, Tammer in [236].

5.5 To attain Pareto-equilibria by Asymptotic Behavior of First-Order Dynamical Systems In this section, we study a continuous gradient-like dynamical system which enjoys remarkable properties with respect to constrained Pareto optimization. We come back to the scalar equilibrium problem: find x ¯ ∈ M such that f (¯ x, y) ≥ 0, ∀y ∈ M,

(EP )

where M is a nonempty closed convex subset of a Hilbert space H and f : M × M → R. To attain a solution of (EP ), a great interest has been

5.5 To attain Pareto-equilibria by Asymptotic Behavior . . .

415

brought to the study of the convergence of some splitting iterative methods (Gradient-descent (Forward) and Proximal (Backward) methods and others). To explain a link between these discrete aspects and the trajectories of a dynamical system, consider the (EP ) problem where M = H and f (x, ·) is convex and differentiable; then x satisfying (EP ) is equivalent to ∇2 f (x, x) := ∇f (x, ·)(x) = 0. Thus, to attain a solution x of (EP ), we may consider a limit, as t → +∞, of x(t) the solution of the dynamical system 0 = x(t) ˙ + ∇2 f (x(t), x(t)), or equivalently f (x(t), y) + (x(t) ˙ | y − x(t)) ≥ 0, ∀y ∈ H.

(5.97)

d Here and throughout this section, x(t) ˙ = dt x(t) = limh→0 1t (x(t + h) − x(t)). The standard approach to discretization of (EP ) employs the increasing elements {tk : 0 ≤ k < +∞} of [0, +∞[. So that, when we set xk = x(tk ) and rk = tk+1 − tk , we may consider the following iteration methods:  Forward (descent or Euler’s) method [68, 113, 346, 347, 377]:

0=

xk+1 −xk rk

+ ∇2 f (xk , xk ), i.e., xk+1 = xk − rk ∇2 f (xk , xk ).

 Backward (proximal) method [67, 436]: 0=

xk+1 −xk rk

+ ∇2 f (xk , xk+1 ), i.e., xk+1 = (I + rk ∇f (xk , ·))

−1

(xk ).

 E-Backward (Equilibrium proximal) method [421, 422] (see Section 5.2.3 for vector equilibrium problems): rk f (xk+1 , y) + (xk+1 − xk | y − xk+1 ) ≥ 0, ∀y ∈ H, i.e., −1 xk+1 = (I + rk ∇2 f (xk+1 , ·)) (xk ). Chbani and Riahi [117] introduced a class of demipositive bifunctions and use it to study the asymptotic behavior of solutions for the corresponding general dynamical equilibrium system: find x ∈ C 0 ([0, +∞]; K) such that f (x(t), y) + (x(t)|y ˙ − x(t)) ≥ 0, ∀y ∈ K, for a.e. t ≥ 0. More precisely, they obtained the weak convergence of the trajectories to a solution of (EP ). Almost any real-world problem asks for treating more than one objective function, and then deals with multiobjective optimization. Typically, multiobjective optimization problems are solved according to the Pareto principle of optimality where a solution is called efficient (or Pareto optimal) if no other feasible solution exists that is not worse in any objective function and better in at least one objective.

416

5 Applications

The main contribution of this subsection is to propose a dynamical system for solving the following vector equilibrium problem to find a Pareto 1p equilib/ −Rp> , ∀y ∈ M , where Rp> := i=1 (0, +∞) rium x ∈ M such that F (x, y) ∈ is the positive orthant cone. Our goal is first to show that each vector equilibrium may be translated to the associated critical point which is defined by the following vector inclusion 0 ∈ conv{∂2 Fi (x, x)} + NM (x) where conv means the closed convex hull and N is the normal cone. We achieve this by existence results for vector equilibria based on concepts of Fan Lemma in conjunction with the Minty linearization Lemma (see Sections 3.9.3, 3.9.6). After having linked equilibria of the vector equilibrium problem and associated critical point, we are interested to existence of solutions of associated first-order set-valued evolution system from a starting point u(0) = u0 0 ∈ u(t) ˙ + conv{∂2 Fi (u(t), u(t))} + NM (u(t)) for a.e. t ≥ 0. One of the natural ways to ensure existence of solutions is to use one of the main existence theorems on dynamical systems. For this we first reduce this set-valued system to the single-valued one: ⊕

0 = u(t) ˙ + [conv{∇2 Fi (u(t), u(t))} + NM (u(t))] , where C ⊕ denotes the metric projection of the origin onto the closed convex set C ⊆ H, i.e., C ⊕ := PC (0) = argminC  · . ⊕ Since the operator [conv{∇2 Fi (u(t), u(t))} + NM (u(t))] does not satisfy all conditions of Cauchy-Lipschitz theorem, our demonstration relies on Peano theorem. This compendium then constitutes the basis for the study of the asymptotic behavior of the solutions of the proposed systems. 5.5.1 Pareto Equilibria and Associate Critical Points Let K, M be nonempty closed convex subsets of a Hilbert space H with M ⊂ K, (· | ·) the scalar product and  ·  the norm. For each i = 1, . . . , p, let Fi : K × K → R be an equilibrium bifunction, i.e., Fi (x, x) = 0 for each x ∈ K. We denote by F : K × K → Rp the vector-valued function F (x, y) whose components are Fi (x, y) for every (x, y) ∈ K × K. In this section we consider the vector equilibrium problems find x ∈ M such that F (x, y) ∈ − int P ∀ y ∈ M

(WVEP)

and find x ∈ M such that F (x, y) ∈ − (P \ {0}) ∀ y ∈ M. (SVEP) 1 p Here P = Rp+ is the orthant cone, then int P = Rp> = i=1 (0, +∞). We say that x in (WVEP) and (SVEP) is a weak Pareto equilibrium point and

5.5 To attain Pareto-equilibria by Asymptotic Behavior . . .

417

a Pareto equilibrium point respectively. When p = 1, int P = P \ {0} = (0, +∞), and then (WVEP) and (SVEP) coincide. By Minty’s linearization Lemma, under suitable conditions of F we propose the following vector equilibrium problem: find x ∈ M such that F (y, x) ∈ / int P, for all y ∈ M.

(MVEP)

When we suppose the bifunctions fi to be subdifferentiables relatively to the second variable on M (i.e., the convex subdifferential

    ∂2 Fi (x, y) := ∂(Fi (x, ·))(y) := {ξ ∈ H : Fi (x, y ) − Fi (x, y) ≥ ξ | y − y ∀y ∈ H}

is nonempty for every x ∈ K and y ∈ M ), we also define the Pareto critical point of the multiobjective equilibrium problem (WVEP) as a point x ∈ M such that:  λi ∂2 Fi (x, x) + NM (x). (5.98) ∃λ ∈ Δp such that 0 ∈ 1≤i≤p

/ K, and NM Here we add the extended values F (x, y  ) = +∞ if y  ∈ denotes the normal cone to the set M , i.e., NM (x) := {y ∈ H : (y, u − x) ≥ 0, ∀u ∈ M }. Notice that NM = ∂ιM where ιM (x) = 0 if x ∈ M and ιM (x) = +∞ if x ∈ / M. Because the closed convex hull of the family {∂2 Fi (x, x) : i = 1, . . . , p} is equal to ⎧ ⎫ ⎨  ⎬ conv{∂2 Fi (x, x)} = λi ξi : ξi ∈ ∂2 Fi (x, x) and λ ∈ Δp , ⎩ ⎭ 1≤i≤p

then the problem (5.98) is translated by 0 ∈ conv{∂2 Fi (x, x)} + NM (x).

(CVEP)

Classically, multiobjective optimization problems are often solved using scalarization techniques. A simple means to scalarize the problem (WVEP) is to attach nonnegative weights to each objective function and then to minimize the weighted sum of objective functions (weighted sum method, see, e.g., [138]). Hence, the multiobjective equilibrium problem (WVEP) is reformulated as a Linear Scalarization Equilibrium Problem: Given λ ∈ Rp+ , for which at least one of the components is positive,  λi Fi (x, y) ≥ 0, for all y ∈ M. find x ∈ M such that (F (x, y) | λ) = 1≤i≤p

(LSEPλ ) In the next result we’ll show the relation between the solution set of the problems (WVEP), (SVEP), (MVEP), (LSEP), and (CVEP). The solution sets of these problems are respectively denoted by WSF (M ), SSF (M ), MSF (M ), SF (M, λ) and CSF (M ).

418

5 Applications

Proposition 5.5.1. Consider the vector bifunction F : K × K → Rp . (a) We have SSF (M ) ⊆ WSF (M ). (b) If F is P -pseudomonotone, i.e., for all x, y ∈ K, F (x, y) ∈ / − int P implies F (y, x) ∈ / int P , then WSF (M ) ⊆ MSF (M ). (c) Assume that, for every y ∈ M , F (·, y) is P -lower hemicontinuous, i.e., P -lower continuous on every line segment in K, and for every x ∈ M , F (x, ·) is P -convex, then MSF (M ) ⊆ WSF (M ). (d) Assume that, for every x ∈ K, F (x, ·) is P -convex on K, then CSF (M ) = WSF (M ). (e) Assume that for every x ∈ K, all the functions Fi (x, ·) are strictly convex on K, then SSF (M ) = WSF (M ) = CSF (M ). Proof. (a) This follows from the inclusion int P ⊂ P \ {0}. Assertions (b) and (c) result from Minty’s linearization Lemma 3.9.27. (d) To prove the inclusion CSF (M ) ⊆ WSF (M ), let x ∈ CSF (M ). We first use the qualification condition on K (the effective domain of the convex lower semicontinuous real functions Fi (x, ·) (i = 1, . . . , p, x ∈ K)) and ιM , to write ⎞ ⎛   λi ∂2 Fi (x, x) + NM (x) = ∂ ⎝ λi Fi (x, ·) + ιM ⎠ (x) 0 ∈ 1≤i≤p





1≤i≤p

λi (Fi (x, x) − Fi (x, y)) + ιM (x) − ιM (y) ≤ 0, ∀y ∈ H

1≤i≤p





λi Fi (x, y) ≥ 0, ∀y ∈ M.

(5.99)

1≤i≤p

/ WSF (M ), we should find y0 ∈ K and a0 ∈ int P satisfying Assume that x ∈ a0 + F (x, y0 ) = 0. In this case all Fi (x, y0 ) < 0, and then  λi Fi (x, y0 ) < 0 1≤i≤p

because 0 = λ ≥ 0 in R ; whence a contradiction. To prove the reversed inclusion, we consider x ∈ WSF (M ), then p

(−F (x, M ) − P ) ∩ int P = ∅. To show that D = −(F (x, M )+P ) is convex in Rp , we set d1 , d2 ∈ D and γ ∈ [0, 1]; then there exist y1 , y2 ∈ M and c1 , c2 ∈ P such that di = −ci − F (x, yi ) (i = 1, 2). By P -convexity of F (x, ·), we have γd1 + (1 − γ)d2 = −(γc1 + (1 − γ)c2 ) − (γF (x, y1 ) + (1 − γ)F (x, y2 )) ∈ −(γc1 + (1 − γ)c2 ) − (F (x, γy1 + (1 − γ)y2 ) + P ) ⊆ −P − P − F (x, γy1 + (1 − γ)y2 ) ⊆ −P − F (x, γy1 + (1 − γ)y2 ) ⊆ D.

5.5 To attain Pareto-equilibria by Asymptotic Behavior . . .

419

Thus −(F (x, M ) + P ) is convex. Next, we apply the geometric form of the Hahn–Banach Theorem to find e ∈ Rp \ {0} such that (e | d − c) < 0, ∀c ∈ int P, ∀d ∈ D;

(5.100)

hence by 0 ∈ D we get (e | c) > 0, ∀c ∈ int P , and then e ∈ P \ {0}. Return to (5.100) and use int P + P = int P , we obtain (e | c) + (e | F (x, y)) > 0, ∀c ∈ int P, ∀y ∈ M. This remains true for each c ∈ int P , it follows that (e | F (x, y)) ≥ 0 for every y ∈ M . Now the equivalence (5.99) above shows that x ∈ CSF (M ), which gives a proof of part (b). To prove part (e), we only need to justify CSF (M ) ⊆ SSF (M ). Suppose the inclusion was false and let x ∈ CSF (M ) \ SSF (M ). In view of strict convexity of the functions  Fi (x, ·), we conclude from (5.98) the existence of λ ∈ P \ {0} such that 1≤i≤p λi Fi (x, y) has x as a unique minimum point / SSF (M ) means existence of y0 = x such that y0 ∈ K and on K. Also x ∈ −F (x, y0 ) ∈ P , and then   λi Fi (x, x) < λi Fi (x, y0 ) ≤ 0, 0= 1≤i≤p

1≤i≤p

whence a contradiction.



5.5.2 Existence of Solutions for (WVEP) and (LSEP)λ To ensure existence of the solution u(·) for (5.134), we need moreover a strong P -pseudomonotone condition on F and use a general existence result for scalar equilibrium problems. Definition 5.5.2. Suppose there exists a continuous function γ : R+ → R+ γ(s) = +∞ and γ(s) → 0 implies for which γ(s) > 0 whenever s > 0, lim s→+∞ s s → 0. We say that F : M × M → Rp is γ-strongly P -pseudomonotone if there exists u0 ∈ int P such that for all u, v ∈ M , F (u, v) ∈ / − int P =⇒ F (v, u) ∈ / −γ (u − v) u0 + int P.

(5.101)

The assertion in the next lemma is shown in [112, Theorem 3.1]. Lemma 5.5.3. In a Hausdorff topological vector space, let M be a nonempty closed convex subset. Consider two real bifunctions ϕ and ψ defined on M ×M such that: (H1) For each x, y ∈ M , if ψ(x, y) ≤ 0, then ϕ(x, y) ≤ 0. (H2) For each x ∈ M , ϕ(x, ·) is lower semicontinuous on every compact subset of M .

420

5 Applications

(H3) For each finite subset A of M , one has supy∈conv(A) minx∈A ψ(x, y) ≤ 0. (H4) There exists C ⊂ M convex compact and x ∈ C such that, ∀y ∈ M \ C, ψ(x, y) > 0. Then, there exists x ¯ ∈ C such that ϕ(y, x ¯) ≤ 0 for each y ∈ M . First, we treat the assumption (H3) (see [112, Lemma 4.1]). Lemma 5.5.4. Suppose that: (i) ψ(y, y) ≤ 0, for each y ∈ M ; (ii) for each y ∈ M , {x ∈ M | ψ(x, y) > 0} is convex. Then, Assumption (H3) is satisfied. Proof. On the contrary, if (H3) does not hold, n we have the existence of x1 , . . . , xn ∈ M and λ1 , . . . , λn ≥ 0 such that i=1 λi = 1 and   n  ψ xj , λi xi > 0 for each j = 1, ·, n. i=1

By assumption (ii), if y = contradicts (i).

n j=1

λj xj , we deduce that ψ(y, y) > 0; but this 

Theorem 5.5.5. Let F : M × M → Rp . Suppose that (i) F (x, x) = 0 for every x ∈ M ; (ii) for each i = 1, . . . , p, the bifunction Fi : M × M → R is upper hemicontinuous relatively to the first variable and convex lower semicontinuous relatively to the second variable; (iii) F is γ-strongly Rp+ -pseudomonotone, with u0 ∈ Rp> and u0i > 1 for all i = 1, . . . , p. Then (WVEP) admits a unique solution x ¯, i.e., WSF (M ) = {¯ x}. Remark 5.5.6. Following lines of Theorem 2.4.1 and Corollary 2.4.5, if u0 ∈ Rp> , then ξ : Rp → R,

ξ(y) := inf{t ∈ R : y ∈ tu0 − Rp+ }

is a well-defined continuous sublinear function such that for every λ ∈ R, {y ∈ Rp : ξ(y) ≤ λ} = λu0 − Rp+ ,

{y ∈ Rp : ξ(y) < λ} = λu0 − Rp> .

Inequality (5.101) is equivalent to ξ(F (u, v)) ≥ 0 =⇒ ξ(−F (v, u)) ≥ γ (u − v) ,

(5.102)

and also is satisfied when, for all u, v ∈ K, ξ(F (u, v)) − ξ(−F (v, u)) ≤ −γ (u − v) .

(5.103)

If p = 1 and u0 ∈ R> = (0, +∞), (5.103) is exactly the strong α-monotonicity with α(u, v) = u0 γ (u − v) in the sense of [5, Definition 3.1].

5.5 To attain Pareto-equilibria by Asymptotic Behavior . . .

421

Remark 5.5.7. Let us first notice that every function γ : R+ → R+ which is increasing to +∞ and continuous at the origin with γ(0) = 0 must satisfy all conditions for γ in Theorem 5.5.5. For example, γ(s) = asδ with a > 0 and δ > 1 is suitable. Proof of Theorem 5.5.5 (i) For existence it is enough to apply Lemma 5.5.3 with X = H endowed with the weak topology, and Y = Rp . We choose ϕ(x, y) = −ξ(−F (x, y)) and ψ(x, y) = −ξ(F (y, x)) for every x, y ∈ M. (H1) is immediate, since F is pseudomonotone. Condition (H2) follows from lower semicontinuity of all the bifunctions Fi relatively to the second variable. Indeed, for x ∈ M , ϕ(x, ·) is lower semicontinuous at y0 on M if and only if, {ϕ(x, ·) > a} is a neighborhood of y0 in M for all a < ϕ(x, y0 ). As all the functions F (x, ·) are lower semicontinuous at y0 on M , and ϕ(x, y0 ) > a ⇔ F (x, y0 ) ∈ −au0 + Rp>

⇔ Fi (x, y0 ) > −au0i , ∀i = 1, . . . , p;  the level set {ϕ(x, ·) > a} = 1≤i≤p {Fi (x, ·) > −au0i } is a neighborhood of y0 in M , and hence ϕ(x, ·) is lower semicontinuous at y0 on M . Assumption (H3) derives from Lemma 5.5.4 since, for each y ∈ M , ψ(y, y) = −ξ(−F (y, y)) = 0 and quasi-convexity of Fi (y, ·) ensures 2 {Fi (y, ·) < 0} is convex. {ψ(·, y) > 0} = {F (y, ·) ∈ −Rp> } = 1≤i≤p

Suppose (H4) is false, then for each k ∈ N and x ∈ Bk := {u ∈ M : u ≤ k}, there exists yk ∈ M \ Bk such that ψ(x, yk ) = −ξ(F (yk , x) ≤ 0. Owing to γ-strong pseudomonotonicity condition, we get for k0 the first integer for which Bk0 ∩ M = ∅, ξ(−F (x, yk )) ≥ γ(x − yk ), ∀k ≥ k0 . Thus, for each k ≥ k0 , there exists ik ∈ {1, . . . , p} such that Fik (x0 , yk )) + γ(x0 − yk )u0ik ≤ 0.

(5.104)

Since {ik : k ≥ k0 } ⊂ {1, . . . , p}, we can extract a subsequence {iν(k) = p0 , ∀k ∈ N} for which p0 ∈ {1, . . . , p}; thus Fi0 (x0 , yν(k) )) + γ(x0 − yν(k) )u0i0 ≤ 0, ∀k ≥ 0. Using the existence of y0 ∈ M and y0∗ ∈ ∂ F¯i0 (x0 , ·)(y0 ), where F¯i0 (x0 , y) = Fi0 (x0 , y) if y ∈ M, F¯i0 (x0 , y) = +∞ if y ∈ / M,

422

5 Applications

we obtain Fi0 (x0 , y) − Fi0 (x0 , y0 ) ≥ (y0∗ | y − y0 ) , ∀y ∈ M. Since yν(k) ∈ M , we get from (5.104) for every r > r0 0 ≥ Fi0 x0 , yν(k) ) + γ(x0 − yν(k) )u0i0 

≥ Fi0 (x0 , y0 ) + y0∗ | yν(k) − y0 + γ(x0 − yν(k) )u0i0 ≥ Fi0 (x0 , y0 ) − y0∗ (yν(k) − x0  + x0 − y0 ) + γ(x0 − yν(k) )u0i0   γ(x0 − yν(k) ) 0 ui0 − y0∗  = Fi0 (x0 , y0 ) + yν(k) − x0  yν(k) − x0  −y0∗  · x0 − y0 ). Note that lim yν(k) − x0  ≥ lim yν(k)  − x0  = +∞

k→+∞

and

k→+∞

γ(x0 − yν(k) ) = +∞, k→+∞ yν(k) − x0  lim

we thus obtain a contradiction, and then Assumption (H4) is satisfied. This means by Lemma 5.5.3 that there exists x ¯ ∈ M that belongs to MSF (M ), i.e., (5.105) F (y, x ¯) ∈ / Rp> , ∀y ∈ M. To deduce that x ¯ ∈ WSF (M ), we consider y ∈ M and set yt := (1 − t)¯ x + ty ∈ M for each t ∈ [0, 1]. We have by (5.105) F (yt , x ¯) ∈ / Rp> , then for some it ∈ {1, . . . , p}, Fit (yt , x ¯) ≤ 0. Since Fit (yt , ·) is convex, we conclude 0 = Fit (yt , yt ) ≤ (1 − t)Fit (yt , x ¯) + tFit (yt , y) ≤ tFit (yt , y); and then for every t ∈ (0, 1], there exists it ∈ {1, . . . , p} such that Fit (yt , y) ≥ 0. As we have seen previously, we can find tk decreasing to 0 and iy = itk ∈ {1, . . . , p} such that Fiy (ytk , y) ≥ 0 for every k ∈ N. Using ytk = x ¯ + tk (y − x ¯) → 0 and Fiy (·, y) is upper hemicontinuous, we obtain that Fiy (¯ x, y) ≥ lim sup Fiy (ytk , y) ≥ 0. k→+∞

It follows that for every y ∈ M , there exists iy ∈ {1, . . . , p} such that x, y) ≥ 0, i.e., F (¯ x, y) ∈ / −Rp> for every y ∈ M . Hence x ¯ ∈ WSF (M ). Fiy (¯

5.5 To attain Pareto-equilibria by Asymptotic Behavior . . .

423

(ii) The uniqueness results from γ-strongly pseudomonotonicity of F . Indeed, fix z1 , z2 ∈ WSF (M ) with z1 = z2 , then ξ(F (z1 , z2 )) ≥ 0 and ξ(F (z2 , z1 ) ≥ 0. Combining the above inequalities and using the γ-strongly pseudomonotonicity, positivity of γ on (0, +∞) leads to the contradiction ξ(−F (z2 , z1 )) ≥ γ(z2 − z1 ) > 0. Return to definition of ξ, we get ξ(−F (z2 , z1 )) > 0 if, and only if, inf{t ∈ R : −F (z2 , z1 ) ∈ tu0 − Rp> } = sup{s ∈ R : F (z2 , z1 ) ∈ su0 + Rp> } > 0. Thus, for some s0 > 0, F (z2 , z1 ) ∈ s0 u0 + Rp> ⊂ Rp> , and so, using again pseudomonotonicity of F , we obtain that F (z1 , z2 ) ∈ −Rp> , a contradiction.  Therefore WSF (M ) is reduced to a singleton. The proof is complete. Theorem 5.5.8. Let F : M × M → Rp . Suppose that for every x, y ∈ M , ⎧ (i) F (x, x) = 0, Fi (x, ·) is convex lsc for all i = 1, . . . , p, ⎪ ⎪ ⎪ ⎪ ⎨ (ii) Fi is monotone for all i = 1, . . . , p, i.e., for every x, y ∈ M, max1≤i≤p (Fi (x, y) + Fi (y, x)) ≤ 0, (H ∗ ) ⎪ ⎪ (iii) γ−strong monotonicity: for γ defined in Definition 5.5.2 ⎪ ⎪ ⎩ min1≤i≤p (Fi (x, y) + Fi (y, x)) ≤ −γ(x − y), ∀x, y ∈ M. ¯. Then, for every λ ∈ Rp> , (LSEP)λ admits a unique solution x Remark 5.5.9. This γ-strong monotonicity condition is satisfied whenever all the bifunctions Fi are monotone and one of them is strongly monotone. Proof. To justify that SF (M, λ) is nonempty and reduced to a singleton for every λ ∈ Rp> , one can follow lines of the proof of Theorem 5.5.5. Indeed, when we set, for every (u, v) ∈ M × M , 

ϕ(u, v) := F (u, v) | λ0 and ψ(u, v) := − (F (v, u) | λ), then conditions (H1) − (H4) of Lemma 5.5.3 follow from (H ∗ ). In fact, (H1) − (H3) are obvious. If (H4) is false, then using γ-strong pseudomonotonicity condition, we get for some integer k0 , ψ(yk , x0 ) := − (F (x0 , yk ) | λ) ≥ γ(x0 − yk ), ∀k ≥ k0 . Using the existence of y0 ∈ M and y0∗ ∈ ∂g(y0 ), where g(y) = (F (x0 , y) | λ) if y ∈ M and g(y) = +∞ if y ∈ / M , we obtain (F (x0 , yk ) | λ) − (F (x0 , y0 ) | λ) ≥ (y0∗ | yk − y0 ) , ∀k; and then

424

5 Applications

0 ≥ (F (x0 , yk ) | λ) + γ(x0 − yk ) ≥ (F (x0 , y0 ) | λ) + (y0∗ , yk − y0 ) + γ(x0 − yk )   γ(x0 − yk ) ∗ = (F (x0 , y0 ) | λ) + yk − x0  − y0  − y0∗  · x0 − y0 ). yk − x0  Since lim yν(k) − x0  = +∞, we end to a contradiction, and thus (H4) is k→+∞



satisfied. 5.5.3 Existence of Solutions for Continuous First-Order Equilibrium Dynamical Systems

To attain a solution of (WVEP), we first consider the equivalent critical point problem (CVEP), and then it is parametrized by a dynamical system. Let us now describe the proposed dynamical system. Because the closed convex hull of the vectors {∇2 Fi (x, x) : i = 1, . . . , p} is equal to ⎧ ⎫ ⎨  ⎬  λi ∇2 Fi (x, x) : λ ∈ P and λi = 1 , conv{∇2 Fi (x, x)} = ⎩ ⎭ 1≤i≤p

1≤i≤p

then the problem (CVEP) is translated by 0 ∈ conv{∇2 Fi (x, x)} + NM (x).

(5.106)

As we already mentioned, the approach we propose to solve the problem (5.106) makes use of a first-order autonomous dynamical system ⊕

0 = u(t) ˙ + [conv{∇2 Fi (u(t), u(t))} + NM (u(t))] ,

(5.107)

where C ⊕ denotes the metric projection of the origin onto the closed convex set C ⊆ H, i.e., C ⊕ := PC (0) = argminC  · . In view of the applications, we prove the existence of strong global solutions of the more general evolution equation ⊕

0 = u(t) ˙ + [−F(u(t)) + ∂ψ(u(t))] .

(5.108)

Here F : H ⇒ H is a nonlinear set-valued operator and ψ : H → R ∪ {+∞} is proper convex lower semicontinuous function. Definition 5.5.10. We say that u(·) is a strong global solution of (5.108) if (1) u : [0, +∞) → H is continuous, and absolutely continuous on each interval [0, T ], for every 0 < T < +∞; (2) there exist v : [0, +∞) → H and w : [0, +∞) → H such that (i) for every T > 0, v, w ∈ L2 (0, T ; H); (ii) for a.e. (almost every) t > 0, v(t) ∈ ∂ψ(u(t)) and w(t) ∈ F(u(t));

5.5 To attain Pareto-equilibria by Asymptotic Behavior . . .

425



(iii) for a.e. t > 0, v(t) + w(t) = [F(u(t)) + ∂ψ(u(t))] ; (iv) for a.e. t > 0, u(t) ˙ + v(t) + w(t) = 0. Keeping the terminology and notations above, we shall now prove an existence theorem for (5.108) by assuming further that H is finite dimensional since our demonstration relies on Peano, and not on Cauchy-Lipschitz theorem. Theorem 5.5.11. Assume that the following conditions hold: (h1 ) H is a finite-dimensional space; (h2 ) ψ : H → R ∪ {+∞} is a proper convex lower semicontinuous function; (h3 ) F : H ⇒ H is a Hausdorff-continuous set-valued operator; (h4 ) there exists c > 0 such that F(x) ⊆ c(1 + x)B, where B is the closed unit ball in H. Then, the dynamical system (5.108) has at least one global strong solution. We mean by F is a Hausdorff-continuous set-valued operator, the continuity of F from H endowed with the norm  ·  to the family of nonempty closed subsets of H endowed with the Hausdorff-metric IH(C, D) := max {e(C, D), e(D, C)} where e(C, D) := sup dist(x, D) x∈C

is the excess of C over D if C, D ⊂ H are nonempty closed and dist(x, D) := inf y∈D x − y. In order to prove Theorem 5.5.11, we follow lines of the proof as in [31, Theorem 3.5]. We then use the Moreau envelope of ψ, which is defined for x ∈ H and λ > 0 by   1 2 y − x . (5.109) ψλ (x) = inf ψ(y) + y∈H 2λ and for all λ > 0, we consider the following approximating Cauchy problem: ⊕

0 = u˙ λ (t) + [−F(uλ (t)) + ∇ψλ (uλ (t))] .

(5.110)

We recall that ψλ was proposed in 1962 by Moreau, see [419]. Lemma 5.5.12. The Moreau envelope has the following properties: (a) For every x ∈ H and λ > 0, the minimization problem in (5.109) admits a unique solution, which is denoted by proxλψ x. (b) y = proxλψ x = (I + λ∂ψ)−1 x ⇐⇒ x − y ∈ ∂ψ(x).

426

5 Applications

(c) For every λ > 0, the operator proxλψ is everywhere defined and nonexpansive: for every x1 , x2 ∈ H, 

proxλψ x1 − proxλψ x2 | x1 − x2 ≥ proxλψ x1 − proxλψ x2 2 . (d) ψλ is a continuously differentiable function which gradient coincides with the Yosida approximation of the maximal monotone operator ∂ψ, i.e., ∂ψλ (x) = ∇ψλ (x) = (∂ψ)λ (x) =

 1 I − proxλψ (x), ∀x ∈ H, λ

moreover, the operator ∇ψλ is Lipschitz continuous on H with Lipschitz constant λ1 . Proof. The proof of these results can be found in [47].



When ψ is the indicator function on K, i.e., 0 if x ∈ K ψ(x) = ιK (x) = +∞ if x ∈ / K, the proximal operator of ψ reduces to orthogonal projection onto K, which for every x ∈ H and λ > 0, we denote by PK (x) := proxλψ x = argmin  · −x. K

We conclude from the nonexpansive property of proxλψ that (PK (x1 ) − PK (x2 ) | x1 − x2 ) ≥ PK (x1 ) − PK (x2 )2 , ∀x1 , x2 ∈ H, (5.111) and then PK is a Lipschitz continuous operator on H. We follow by recalling some results about metric projection mappings onto closed convex sets. Lemma 5.5.13. Let C be a nonempty closed convex set in H. (a) For x ∈ H and c ∈ C, we have c = PC (x) if, and only if, (x−c | y−c) ≤ 0, for all y ∈ C. (b) For every x ∈ H, Px−C (0) = x − PC (x). (c) Suppose that C, D are two nonempty closed convex set in H such that C − D is also closed. Then a = PC−D (0) if, and only if, there exist c ∈ C and d ∈ D such that c = PC (d), d = PD (c) and a = c − d. Proof. (a) Using the necessary and sufficient optimality conditions of convex minimization problem we have   1  · −x2 + ιC (c) = (c − x) + NC (c) c = PC (x) ⇐⇒ 0 ∈ ∂ 2 ⇐⇒ (x − c | y − c) ≤ 0, ∀y ∈ C. Property (b) follows from (a).

5.5 To attain Pareto-equilibria by Asymptotic Behavior . . .

427

For (c), suppose first that c = PC (d), d = PD (c) and a = c − d. Applying property (a), and adding the inequalities we get that, for all x ∈ C, y ∈ D, 0 ≤ (c − d | x − c) + (d − c | y − d) = (c − d | x − y − (c − d)).

(5.112)

This means, from (a), that c − d = PC−D (0). For the converse, it suffices to take separately x = c and y = d in (5.112).  Lemma 5.5.14. If C, D ⊂ H are nonempty bounded closed convex and x, y ∈ H, we have (5.113) PC (x) − PC (y)| ≤ x − y and

PC (y) − PD (y)| ≤ r(y)IH(C, D)1/2 ,

(5.114)

where r(y) := [dist(y, C) + dist(y, D)]1/2 . Proof. Since the first assertion follows from (5.111), let us fix y ∈ H to prove the second. Using Lemma 5.5.13 (a), we respectively characterize PC (y) and PD (y) by (5.115) (PC (y) − y | PC (y) − u) ≤ 0, ∀u ∈ C and (PD (y) − y | PD (y) − v) ≤ 0, ∀v ∈ D.

(5.116)

Consider u0 ∈ C and v0 ∈ D such that PD (y) − u0  = dist(PD (y), C) ≤ sup dist(v, C) = e(D, C) v∈D

and PC (y) − v0  = dist(PC (y), D) ≤ sup dist(u, D) = e(C, D); u∈C

then we infer max (PD (y) − u0 , PC (y) − v0 ) ≤ IH(C, D).

(5.117)

Now, setting u0 in (5.115), v0 in (5.116) and adding the two inequalities, we get 2

PC (y) − PD (y)| = (PC (y) − y | PC (y) − u0 ) + (PC (y) − y | u0 − PD (y)) + (PD (y) − y | PD (y) − v0 ) + (PD (y) − y | v0 − PC (y)) ≤ max (PD (y) − u0 , PC (y)−v0 ) (dist(y, C)+ dist(y, D)). Using (5.117), we obtain the result.



Remark 5.5.15. Note that the exponent 1/2 is optimal and it is related to the Hilbertian structure of H. We also note that this result is justified for unbounded convex closed sets with the same proof in [33, Proposition 5.1]. The authors used the bounded Hausdorff pseudo-distance.

428

5 Applications

Lemma 5.5.16 (Peano’s existence theorem). [459] Let U ⊂ Rn be an open set and f : U × [0, T ] → Rn be a continuous function. Then, for every x0 ∈ U there is a positive δ and a continuous function x : [t0 − δ, t0 + δ] → U solution to the Cauchy’s ordinary differential equation x(t) ˙ = f (t, x(t)) satisfying the initial condition x(t0 ) = x0 .

(5.118)

There are many proofs of the existence theorem, the most direct being the proof given by Giuseppe Peano in 1890. In 1886 Peano first stated and demonstrated the theorem in dimension n = 1. In (1890) he extended this theorem to systems of the first-order differential equation (5.118) by using a successive Euler’s approximations proof. The details of the proof require advanced calculus and are therefore omitted, we refer to [542] Section 2.7 for a recent proof using the classical Arzel`a-Ascoli Theorem. Contrary to what the CauchyLipschitz theorem allows to conclude under more restrictive Lipschitz condition, there is no uniqueness here. Peano himself exposed in [459] the following dx = 3x2/3 , x(0) = 0 where the solutions x(t) = t3 and x(t) = 0 examples dt 4xt3 dx = 2 are distinct, and , x(0) = 0 which also has five distinct solutions. dt x + t4 We also need the following nonlinear generalization of Gronwall’s inequality. Lemma 5.5.17 (Generalized Gronwall’s Lemma). Let t0 ∈ R, T > 0, α ∈ [0, 1) and v(t) be an absolutely continuous function that satisfies the differential inequality a.e. on [t0 , t0 + T ] (1 − α)v(t) ˙ ≤ a(t)v(t) + b(t)v(t)α ,

(5.119)

1

where a(t), b(t) ∈ L (t0 , t0 + T ; R) and b(t) ≥ 0 for a.e. t ∈ [t0 , t0 + T ]. Then, for a.e. t ∈ [t0 , t0 + T ],  t   t  t  1−α 1−α v(t) ≤ v(t0 ) exp a(s)ds + exp a(τ )dτ b(s)ds. (5.120) t0

t0

s

Proof. In differential form of Bernoulli’s equation, set x(t) := v(t)1−α for ˙ = (1 − α)v(t)−α v(t); ˙ every t ∈ [t0 , t0 + T ], then for a.e. t ∈ [t0 , t0 + T ], x(t) and then x(t) ˙ ≤ a(t)v(t)1−α + b(t) = a(t)x(t) + b(t). Integrating this linear first-order differential inequality, we get, for a.e. t ∈ [t0 , t0 + T ],  t   t  t  x(t) ≤ x(t0 ) exp a(s)ds + exp a(τ )dτ b(s)ds, t0

t0

s

and then we get the result.  The following lemma gives the existence of a global strong solution of the dynamical system (5.110).

5.5 To attain Pareto-equilibria by Asymptotic Behavior . . .

429

Lemma 5.5.18 (Existence of global solution to approximate systems). For every λ > 0 and every u0 ∈ dom ∂ψ, the system (5.110) admits a global strong solution uλ on [0, +∞[ with uλ (0) = u0 . Proof. We split the proof into the following steps: Step 1. We first remark that by Lemma 5.5.13 (b), uλ solves the dynamical system (5.110), with initial condition uλ (0) = u0 , is equivalent to the assertion that uλ solves 0 = u˙ λ (t) + ∇ψλ (uλ (t)) − PF(uλ (t)) (∇ψλ (uλ (t))) for a.e. t ∈ (0, +∞) uλ (0) = u0 . (5.121) Step 2. Using Peano Lemma 5.5.16, we shall prove the existence of a local solution to (5.121). To do so, we consider the function F(t, x) = PF(x) (∇ψλ (x)) − ∇ψλ (x).

(5.122)

Taking x, y ∈ H and s, t ≥ 0, we have F (s, x) − F (t, y) ≤ PF(x) (∇ψλ (x)) − PF(y) (∇ψλ (y)) + ∇ψλ (x) − ∇ψλ (y) ≤ PF(x) (∇ψλ (x)) − PF(x) (∇ψλ (y)) + ∇ψλ (x) − ∇ψλ (y) +PF(x) (∇ψλ (y)) − PF(y) (∇ψλ (y)),

and then from Lemma 5.5.12 (c), (d) and Lemma 5.5.14, it follows that F(s, x) − F(t, y) ≤

2 x − y + r(∇ψλ (y))IH(F(x), F(y))1/2 , λ

(5.123)

where r(∇ψλ (y)) = [dist(∇ψλ (y), F(x)) + dist(∇ψλ (y), F(y))]1/2 . Using Hausdorff-continuity of set-valued mapping F and continuity of ∇ψλ , then F is continuous on H × [0, +∞). Hence from Peano Lemma 5.5.16, we have the existence of a local solution to (5.121). As a consequence, we obtain from Zorn’s lemma the existence of maximal strong solution uλ on [0, Tm [. Step 3. Fix T ∈ (0, Tm ), and take the scalar product of (5.121) by uλ − u0 , then for a.e. t ∈ [0, T ], 0=

 1 d uλ (t) − u0 2 + (∇ψλ (uλ (t)) − ∇ψλ (u0 ) | uλ (t) − u0 ) 2 dt 

+ (∇ψλ (u0 ) | uλ (t) − u0 ) − PF(uλ (t)) (∇ψλ (uλ (t))) | uλ (t) − u0 .

Using that the operator ∇ψλ is monotone, then    ∇ψλ (u0 ) + PF(uλ (t)) (∇ψλ (uλ (t))) uλ (t) − u0    (∂ψ(u0 ))⊕  + c(1 + uλ (t)) uλ (t) − u0    ≤ (∂ψ(u0 ))⊕  + c(1 + u0) uλ (t) − u0  + cuλ (t) − u0 2 .

 1 d  uλ (t) − u0 2 ≤ 2 dt ≤

Set v(t) := uλ (t) − u0 2 , we have for a.e. t ∈ [0, T ],

430

5 Applications

 1 v(t) ˙ ≤ (∂ψ(u0 ))⊕  + c(1 + u0 ) v(t)1/2 + cv(t). 2 According to Gronwall’s Lemma 5.5.17 with t0 = 0 and α = 12 , we obtain for a.e. t ∈ [0, T ],  t   t

 1 exp cdτ (∂ψ(u0 ))⊕  + c(1 + u0 ) ds uλ (t) − u0  = v(t) 2 ≤ t0

s

 ect −1

, ≤ (∂ψ(u0 ))⊕  + c(1 + u0 ) c and hence for the norm  ·  on L∞ (0, T ; R), we conclude

 ecT −1 . sup uλ ∞ ≤ (∂ψ(u0 ))⊕  + c(1 + u0 ) c λ>0

(5.124)

Step 4. Return to (5.121) and take the scalar product by u˙ λ , then for a.e. t ∈ [0, T ], 2

u˙ λ (t) +

 d (ψλ (uλ (t))) = PF(uλ (t)) (∇ψλ (uλ (t))) | u˙ λ (t) dt   1  PF(u (t)) (∇ψλ (uλ (t)))2 + u˙ λ (t)2 , ≤ λ 2

so that from (h4 ), we find 1 d c2 2 2 u˙ λ (t) + (ψλ (uλ (t))) ≤ (1 + uλ (t)) . 2 dt 2 Therefore, integrating over (0, T ), we get for every T ∈ (0, Tm ),   1 T c2 T 2 2 u˙ λ (t) dt ≤ (ψλ (u0 ) − ψλ (uλ (T ))) + (1 + uλ (t)) dt. 2 0 2 0 (5.125) Step 5. We now claim that Tm = +∞. Suppose this is not true, then the inequality (5.125) leads to {uλ (t)} is bounded on [0, Tm ), and therefore from T 2 (5.124) and continuity of ψλ on H, we also have 0 u˙ λ (t) dt < +∞. Now, we claim that {uλ (t)} converges in H as t → TM . If not, then there exist two sequences sk → TM and tk → TM such that uλ (sk ) → x1 and uλ (tk ) → x2 for x1 = x2 . We note that boundedness of {uλ (t) : 0 ≤ t ≤ TM } T 2 and 0 u˙ λ (t) dt < +∞ imply that 2

2

x1 − x2  = lim uλ (tk ) − uλ (sk ) k→+∞   tk = lim u˙ λ (t)dt | uλ (tk ) − uλ (sk ) = 0, k→+∞

sk

which is a contradiction. Consequently, we can extend continuously the function uλ on [0, Tm ] by uλ (TM ) = x0 = lim uλ (t). k→TM

5.5 To attain Pareto-equilibria by Asymptotic Behavior . . .

431

Return to the system (5.121) with the initial condition uλ (TM ) = x0 . Using ones more Peano Lemma 5.5.16, we have the existence of a strong solution to (5.121) on a great interval which strictly contains [0, Tm ], which  contradicts the definition of TM , and the proof is complete. Proof of Theorem 5.5.11 The proof will be divided into several steps. Step 1. By virtue of Lemma 5.5.18, for every λ > 0 and u0 ∈ dom ∂ψ, (5.121) admits a global strong solution uλ on [0, +∞[ with uλ (0) = u0 and satisfying: for each T ∈ (0, +∞), supλ>0 uλ L∞ (0,T ;H) < +∞ and 1 2

 0

T

2

u˙ λ (t) dt ≤ (ψλ (u0 ) − ψλ (uλ (T ))) +

2 T c2 1 + uλ L∞ (0,T ;H) . 2

We note that, in view of Lemma 5.5.12, we have for yλ = proxλψ uλ (T ) and y0 ∈ ∂ψ(u0 ) such that u0 = proxλψ (u0 + λy0 ) ψλ (uλ (T )) ≥ ψ(yλ ) ≥ ψ(u0 ) + (y0 | yλ − u0 ) ≥ ψ(u0 ) − y0  · yλ − u0  ≥ ψ(u0 ) − y0  · uλ (T ) − (λy0 + u0 ). The last inequality follows from Lipschitz continuity of the mapping proxλψ . We conclude that  T 2 u˙ λ (t) dt < +∞. (5.126) sup u˙ λ 2L2 (0,T ;H) = sup λ>0

λ>0

0

By return to (5.121) and using condition (h4 ), we obtain   ∇ψλ (uλ (t)) ≤ PF(uλ (t)) (∇ψλ (uλ (t))) + u˙ λ (t) ≤ c (1 + uλ (t)) + u˙ λ (t) , and then

sup ∇ψλ (uλ )2L2 (0,T ;H) < +∞.

(5.127)

λ>0

Step 2. Since {uλ : λ > 0} is a uniformly bounded family of H-valued functions on [0, T ] such that H is a finite-dimensional real vectors space"and ! the families {u˙ λ : λ > 0}, {∇ψλ (uλ ) : λ > 0}, PF(uλ ) (∇ψλ (uλ )) : λ > 0 are bounded in L2 (0, T ; H), then there is a sequence {λi : i ∈ I} converging to 0 such that uλi converge uniformly to some continuous function u on [0, T ], i.e., converges strongly in C(0, T ; H) to u, and ∇ψλi (uλi ), PF(uλi ) (∇ψλ (uλi )) converge weakly in L2 (0, T ; H) to v and w respectively. By continuous injection of C(0, T ; H) into L2 (0, T ; H), we also conclude that uλi converge strongly to u in L2 (0, T ; H). Let’s go back to the dynamic system (5.121) and go to the limit when λi converge to 0, we then get

432

5 Applications

u˙ + v − w = 0 a.e. in (0, T ) and

in L2 (0, T ; H).

(5.128)

Step 3. We’ll show that v(t) ∈ ∂ψ(u(t)) and w(t) ∈ F (u(t)) for a.e. t ∈ [0, T ]. For this, let ˜ := {(ξ, η) ∈ L2 (0, T ; H)2 : η(t) ∈ ∂ψ(ξ(t)) for a.e. t ∈ [0, T ]}. ∂ψ ˜ is a maximal monotone operator (c.f. [94, Prop. 2.16]) in the Hilbert Then ∂ψ space L2 (0, T ; H)2 , and thus it is closed relatively to s × w-topology. Return to uλi − proxλi ψ uλi = λi ∇ψλi (uλi ), then from λi → 0, we conclude that proxλi ψ uλi converge strongly to u in 

L2 (0, T ; H). According to ∇ψλi (uλi ) ∈ ∂ψ proxλi ψ uλi , which proves that ˜ therefore v(t) ∈ ∂ψ(u(t)) for a.e. t ∈ [0, T ]. (u, v) ∈ ∂ψ, Similarly, the operator defined by F˜ := {(ξ, η) ∈ L2 (0, T ; H)2 : η(t) ∈ F(ξ(t)) for a.e. t ∈ [0, T ]}, is also s × w-closed in L2 (0, T ; H)2 , see [28, Prop. 3.4]. It follows that (u, w) ∈ F˜ , and then w(t) ∈ F(u(t)) for a.e. t ∈ [0, T ]. Step 4. Let us show that {u˙ λi : i ∈ I} strongly converges to u˙ in L2 (0, T ; H). Note that from Hilbert structure of L2 (0, T ; H), it suffices to ˙ L2 (0,T ;H) . justify that limi u˙ λi L2 (0,T ;H) = u Since w(t) ∈ F(u(t)) for a.e. t ∈ [0, T ], uλi converge strongly in C(0, T ; H) to u, and F is a Hausdorff-continuous set-valued mapping, we conclude from Lemma 5.5.14 that wi (·) := PF(uλi (·) (w(·)) strongly converge to w(·) = PF(u(·) (w(·)) in C(0, T ; H). Using the Lebesgue dominated convergence theorem, we see that (wi ) strongly converge to w in L2 (0, T ; H). Clearly, the definition of wi ensures that for a.e. t ∈ [0, T ], wi (t) ∈ F(uλi (t)), and using the projection property, we get   PF(uλi (t)) [∇ψλi (uλi (t))] −∇ψλi (uλi (t))|PF(uλi (t)) [∇ψλi (uλi (t))] − wi (t) ≤0, that is,



 u(t) ˙ | PF(uλi (t)) [∇ψλi (uλi (t))] − wi (t) ≤ 0.

(5.129)

Taking the scalar product of (5.121) by u(t) ˙ and integrating on [0, T ], we obtain  T 0= u˙ λi (t)2 dt + ψλi (uλi (T )) − ψλi (u0 ) 0





0

T



 PF(uλi (t)) (∇ψλi (uλi (t)) | u˙ λi (t) dt.

5.5 To attain Pareto-equilibria by Asymptotic Behavior . . .

433

These two inequalities and the convergence properties above entail that we have  T  T (i) u˙ λi (t)2 dt + ψλi (uλi (T )) − ψλi (u0 ) − (wi (t) | u˙ λi (t)) dt ≤ 0, 0



T

u˙ λi (t)2 dt ≥

(ii) lim inf i

0

0



T

2 u(t) ˙ dt since u˙ λi  u˙ in L2 (0, T ; H),

0

(iii) lim inf (ψλi (uλi (T )) − ψλi (u0 )) ≥ ψ(u(T )) − ψ(u0 ), i  T  T (wi (t) | u˙ λi (t)) dt = (w(t) | u(t)) ˙ dt, (iv) lim i



0

T

0

2 u(t) ˙ dt + ψ(u(T )) − ψ(u0 ) −

(v) 0



T

(w(t) | u(t)) ˙ dt = 0,

0

where we can use [94] to get the last inequality. Indeed, after the inner product of (5.128) with u(t) ˙ and integration on (0, T ) give  T

 2 u(t) ˙ 0= dt + (v(t) | u(t)) ˙ − (w(t), u(t)) ˙ dt. 0

Since v(t) ∈ ∂ψ(u(t)) for a.e. t ∈ [0, T ], then [94, Lemma 3.3] yields d (ψ(u(t)) for a.e. t ∈ (0, T ). Hence (v(t) | u(t)) ˙ = dt  T (w(t) | u(t)) ˙ dt = ψ(u(T )) − ψ(u0 ), 0

which immediately gives (v) above. Return to the use of these five relations (i)–(v), and assume that  T  T 2 2 u˙ λi (t) dt > u(t) ˙ dt, lim sup i

then



T

0

2 u(t) ˙ dt + ψ(u(T )) − ψ(u0 ) −

0= 0



T

< lim sup i



− lim i

≤ lim inf i

 −

0

0

T

 0

T

(w(t) | u(t)) ˙ dt

(by (v))

u˙ λi (t)2 dt + lim inf (ψλi (uλi (T )) − ψλi (u0 )) i

0 T

0  

(wi (t) | u˙ λi (t)) dt

0

T

(by (ii), (iii) and (iv))

u˙ λi (t)2 dt + ψλi (uλi (T )) − ψλi (u0 )

(wi (t) | u˙ λi (t)) dt

 ≤0

(by (i))

434

5 Applications

T T 2 which leads to contradiction. Thus lim supi 0 u˙ λi (t)2 dt ≤ 0 u(t) ˙ dt, 2 that is precisely u˙ λi converge strongly to u˙ in L (0, T ; H). Step 5. Now, we prove that for a.e. t ∈ [0, T ], v(t) = P∂ψ(u(t)) (w(t)). We have shown above that for some w ∈ L2 (0, T ; H), w(t) ∈ F (u(t)) for a.e. t ∈ [0, T ], u is a solution on [0, T ] of the following system w(t) ∈ u(t) ˙ + ∂ψ(u(t)). According to [94, Remark 3.9], the solution u satisfies 0 = u(t) ˙ + P∂ψ(u(t)) w(t) − w(t), for a.e. t ∈ [0, T ].

(5.130)

Consequently, by virtue of (5.128), we obtain, for a.e. t ∈ [0, T ], v(t) = P∂ψ(u(t)) (w(t)). Step 6. Next, we show that for a.e. t ∈ [0, T ], w(t) = PF(u(t)) (v(t)). Let u be the limit vector in C(0, T ; H), and consider the operator whose graph is G := {(ξ, η) ∈ L2 (0, T ; H)2 : η(t) = ξ(t) − PF(u(t)) (ξ(t)) for a.e. t ∈ [0, T ]}. We shall show that G is maximal monotone. For this, let (ξi , ηi ) ∈ G for i = 1, 2, then for a.e. t ∈ [0, T ] we have, ηi (t) = ξi (t) − PF(u(t)) (ξi (t)). We note from (5.111) that for a.e. t ∈ [0, T ]  (η2 (t) − η1 (t) | PF(u(t)) (ξ2 (t)) − PF(u(t)) (ξ1 (t))

 = ξ2 (t) − ξ1 (t) | PF(u(t)) (ξ2 (t)) − PF(u(t)) (ξ1 (t))   − PF(u(t)) (ξ2 (t)) − PF(u(t)) (ξ1 (t)) ≥ 0, which implies that η2 −

2 η1 L2 (0,T ;H)



T

= 0

 −

0

 ≤

T

0

T

(η2 (t) − η1 (t) | ξ2 (t) − ξ1 (t)) dt

 η2 (t) − η1 (t) | PF(u(t)) (ξ2 (t)) − PF(u(t)) (ξ1 (t)) dt

(η2 (t) − η1 (t) | ξ2 (t) − ξ1 (t)) dt.

Thus, (η2 − η1 | ξ2 − ξ1 )

2

L2 (0,T ;H)2

≥ η2 − η1 

L2 (0,T ;H)

so that the operator G is firmly nonexpansive, and then maximal monotone in L2 (0, T ; H)2 . We conclude that the graph of G is closed for the s × w topology in L2 (0, T ; H)2 . Return to the system (5.121), we have for each i ∈ I and for a.e. t ∈ [0, T ],

5.5 To attain Pareto-equilibria by Asymptotic Behavior . . .

435

0 = u˙ λi (t) + ∇ψλi (uλi (t)) − PF(uλi (t)) (∇ψλi (uλi (t))). Taking εi (t) := PF(uλi (t)) (∇ψλi (uλi (t)) − PF(u(t)) (∇ψλi (uλi (t))), we get 0 = u˙ λi (t) + ∇ψλi (uλi (t)) − PF(u(t)) (∇ψλi (uλi (t))) − εi (t). (5.131) The vector field εi converge strongly to 0 in L2 (0, T ; H). To see this, note that for a.e. t ∈ [0, T ],  

2 

εi (t)2 = PF(uλ (t)) (∇ψλi (uλi (t)) − PF(u(t)) (∇ψλi (uλi (t))) i

2

≤ r ∇ψλi (uλi (t) IH(F(uλi (t)), F(u(t))) (IH is the Hausdorff-metric) ≤ r0 IH(F(uλi (t)), F(u(t))), since F is bounded on bounded sets.

Since F is Hausdorff-continuous and uλi converge strongly in C(0, T ; H) to u, Lebesgue’s dominated convergence theorem provides εi converge strongly to u in the space L2 (0, T ; H). ˙ in L2 (0, T ; H) and ∇ψλi (uλi (·)), Finally, since u˙ λi (·) converge strongly to u(·) PF(u(·)) (∇ψλ (uλi (·))) converges weakly in L2 (0, T ; H) to v and w respectively, we conclude that, for ξi (t) = ∇ψλi (uλi ) and ηi = −u˙ λi − εi , we have ηi converges strongly to −u, ˙ ξi converges weakly to v and ηi ∈ G(ξi ) for each i ∈ I. Therefore, closedness of the graph of G relatively to s × w-topology implies that −u˙ ∈ G (v), so that w(t) = u(t) ˙ + v(t) = PF(u(t)) (v(t)) for a.e. t ∈ [0, T ]. Step 7. It remains to complete the proof of theorem. Let us recall from the previous results that: for a.e. t ∈ [0, T ], v(t) = P∂ψ(u(t)) (w(t)) and w(t) = PF (u(t)) (v(t)). Then, by Lemma 5.5.13 (c), v(t) − w(t) = P∂ψ(u(t))−F (u(t) (0). It follows from ⊕ (5.128) that 0 = u(t) ˙ + [∂ψ(u(t)) − F(u(t)] , for a.e. t ∈ [0, T ]. The proof is then complete.  Theorem 5.5.19. Assume that (i) H is a finite-dimensional space, (ii) M ⊂ H is closed and convex, (iii) the equilibrium bifunctions Fi are convex, differentiable in the second variable, and satisfy the Lipschitz condition (5.132). Then, the dynamical system (5.107) has at least one global strong solution on (0, +∞). Proof. If we set F(u) = −conv{∇2 Fi (u, u)} for u∈H and ψ(u) = ιM (u) = 1 if u ∈ M and = 0 otherwise, then (5.107) becomes a case of (5.108).

436

5 Applications

Obviously, ψ satisfies condition (h2 ) since M is closed and convex. To prove the conditions (h3 ), (h4 ), we need the following condition holds: there exists L > 0 such that ∇2 Fi (u, u) − ∇2 Fi (v, v) ≤ Lu − v), ∀i = 1, . . . , p, ∀u, v ∈ H. (5.132) To establish (h3 ), we fix u ∈ H and consider (v, ξ) ∈ F , then ξ=−

p 

λi ∇2 Fi (v, v), where



λi = 1 and λi ≥ 0 for 1 ≤ i ≤ p.

1≤i≤p

i=1

p Since − i=1 λi ∇2 Fi (u, u) ∈ F(u) and the gradients ∇2 Fi are L-Lipschitz continuous on H, we get   p      dist(ξ, F(u)) ≤ ξ − λi ∇2 Fi (u, u)    p i=1      = λi (∇2 Fi (v, v) − ∇2 Fi (u, u))   i=1

≤ ≤

p  i=1 p 

λi ∇2 Fi (v, v) − ∇2 Fi (u, u) λi L v − u = Lv − u.

i=1

Thus supξ∈F (v) dist(ξ, F (u)) ≤ Lv−u, and since u and v can be exchanged, we conclude that for every u, v ∈ H, IH(F(u), F(v)) ≤ Lu − v. It follows that F : H ⇒ H is Hausdorff-continuous. To establish (h4 ), we first set u ∈ H and v = 0, then F(u) ⊂ F(0) + LuB = −conv{∇2 Fi (u, u)} + LuB ⊂ r0 B + LuB ⊂ ρ0 (1 + u)B. where the second inclusion holds thanks to conv{∇2 Fi (u, u) ≤ max ∇2 Fi (u, u) := r0 1≤i≤p

and the last inclusion holds due to max ∇2 Fi (u, u) + Lu ≤ (r0 + L)(1 + u) := ρ0 (1 + u).

1≤i≤p

We conclude (h4 ) is satisfied. Then, Theorem 5.5.11 leads to a result.



5.5 To attain Pareto-equilibria by Asymptotic Behavior . . .

437

Theorem 5.5.20. Under conditions of Theorem 5.5.19, the dynamical system p  ⊕ 0 = u(t) ˙ + λ0i ∇2 Fi (u(t), u(t)) + [NM (u(t))] , (5.133) i=1 0

for λ ∈ Δp , has at least one global strong solution on (0, +∞). p Proof. Setting F (u) = {− i=1 λ0i ∇2 Fi (u, u)} for u ∈ H and ψ(u) = ιM (u), then following lines of the prove of Theorem 5.5.19, all conditions (h1 ) · · · (h4 ) of Theorem 5.5.11 are satisfied, since F is a single-valued operator.  5.5.4 Asymptotic Behavior of Solutions when t → +∞ In this section, we analyze the asymptotic behavior of a solution u(t) of the subdifferential dynamical system 0 ∈ u(t) ˙ +

p 

λ0i ∂Fi (u(t), ·)(u(t)) + NM (u(t)),

(5.134)

i=1

when t is assumed to tend to +∞. Here λ0 ∈ Δp . We need to obtain the weak (and strong) convergence of trajectories of (5.134) to a solution of (WVEP). Weak convergence Recall the following Lemma 5.5.21 (Opial Lemma). [454] Let B be a nonempty subset of a Hilbert space H and x : [t0 , +∞) → H. Suppose that (i) lim x(t) − y exists for each y ∈ B, t→+∞

(ii) each sequential weak cluster point of {x(t)} belongs to B. Then the weak limit of x(t), when t goes to +∞, exists and belongs to B. Proof. Using (i) and nonemptyness of B, the trajectory x(t) is bounded in the Hilbert space H. In order to obtain its weak convergence, we just need to prove that the trajectory has a unique weak cluster point. For this, let us consider two weak cluster points z1 and z2 , then there are two real sequences (t1n ) and (t2n ) converging to +∞ such that x(t1n )  z1 and x(t2n )  z2 . By (ii), it follows that z1 ∈ B and z2 ∈ B. Return to (i), we conclude that limt→+∞ x(t) − z1  and limt→+∞ x(t) − z2  exist. Thus, limt→+∞ (x(t) − z1 2 − x(t) − z2 2 ) exists. Developing and simplifying this last expression, we deduce that limt→+∞ x(t), z2 − z1  exists. Hence z1 , z2 − z1  = lim x(t1n ), z2 − z1  = lim x(t2n ), z2 − z1  = z2 , z2 − z1 . n→+∞

n→+∞

438

5 Applications

We deduce z1 − z2 2 = 0, which means z1 = z2 .



Using Opial Lemma 5.5.21, in order to prove weak convergence of {u(t)} to x ∈ S, when t → ∞, one of two main conditions is that each weak cluster point lies in S. The key tool to prove this condition is demipositivity which was first introduced in Bruck [98] for monotone operators and developed to bifunctions as follows (see [116, 117]). Definition 5.5.22. A bifunction ϕ : K × K → R is called demipositive on C ⊂ K if Sϕ (C) := {x ∈ C : ϕ(x, v) ≥ 0, ∀v ∈ C} = ∅ and  {un } ⊂ K converges weakly to u ∈ C, =⇒ u ∈ Sϕ (C). there exists z ∈ Sϕ (C) such that ϕ(un , z) → 0 The following proposition extends [117, Proposition 1] from monotone properties to pseudomonotone ones and gives examples for bifunctions with demipositive property. Proposition 5.5.23. Consider an equilibrium bifunction ϕ : K × K → R with Sϕ := Sϕ (K) = ∅. Then ϕ is demipositive on K if it satisfies each of the following conditions: (i) ϕ is strongly pseudomonotone; (ii) ϕ is upper hemicontinuous (lim supt 0 ϕ(tz+(1−t)x, y) ≤ ϕ(x, y) ∀x, y, z ∈ K), for each x ∈ K, ϕ(x, ·) is convex and lower semicontinuous, and one of the following conditions: (a) ϕ is 3-pseudomonotone: ∀x, y, z ∈ K, ϕ(x, y) ≥ 0 gives ϕ(y, z) + ϕ(z, x) ≤ 0; (b) ϕ is monotone and μ-pseudo-cocoercive (μ > 0): for each x, y ∈ K, ϕ(x, ·) is differentiable3 and ϕ(x, y) + ϕ(y, x) ≤ 0 and ϕ(x, y) ≥ 0 =⇒ ϕ(y, x) ≤ −μ∇2 ϕ(x, x) − ∇2 ϕ(y, y)2 ; (5.135) (c) int Sϕ = ∅. Proof. Let us first prove the assertion under condition (i). We consider a sequence un  u and take an element z ∈ Sϕ such that limn→∞ ϕ(un , z) = 0. Since ϕ is strongly pseudomonotone and ϕ(z, un ) ≥ 0 for every n, we get ϕ(un , z) ≤ −γ (un − z) , for each n ≥ 0. In view of the scalar aspect of Minty Lemma (Lemma 3.9.27 with K(x) = −L(x) = R∗+ := (0, +∞)), we obtain γ (un − z) ≤ −ϕ(un , z). Passing to the superior limit, we deduce that un → z and then u = z ∈ Sϕ . Assume (ii)(a). Consider un  u and take an element z ∈ Sϕ such that limn→∞ ϕ(un , z) = 0, then ϕ(z, un ) ≥ 0 for every n ≥ 0. Using the 3

Here we may suppose ϕ defined on K × U where U is an open set containing K.

5.5 To attain Pareto-equilibria by Asymptotic Behavior . . .

439

3-pseudomonotonicity of ϕ, we deduce that ϕ(y, un ) + ϕ(un , z) ≤ 0. Passing to the limit when n → ∞, and thanks to weak lower semicontinuity of ϕ(y, ·) we obtain ϕ(y, u) ≤ lim inf n→∞ ϕ(y, un ) ≤ 0. We conclude by the scalar aspect of Lemma 3.9.27. Assume (ii)(b). Consider un  u and take z ∈ Sϕ such that limn→∞ ϕ(un , z) = 0. Since z ∈ Sϕ , then ∇2 ϕ(z, z) = 0, so by setting vn = ∇2 ϕ(un , un ), (5.135) gives: ϕ(un , z) ≤ −μvn 2 . Using limn→∞ ϕ(un , z) = 0, we deduce that limn→∞ vn  = 0. Now since ϕ(un , ·) is convex, vn = ∇2 ϕ(un , un ) implies that ϕ(un , y) + (vn | un − y) ≥ 0 for all y ∈ K, which implies by monotonicity of ϕ, (vn | un − y) ≥ ϕ(y, un ). Going to the limit when n → ∞, we obtain ϕ(y, u) ≤ lim inf ϕ(y, un ) ≤ lim (vn | un − y) = 0, n→∞

n→∞

which assures the result by the scalar aspect of Lemma 3.9.27. Assume (ii)(c). Consider un  u and take now an element z ∈ int Sϕ such that limn→∞ ϕ(un , z) = 0. Let  > 0 such that B(z, ) ⊂ Sϕ . Since z ∈ int Sϕ ⊂ int K = int dom(ϕ(un , ·) + ιK ) = int dom∂(ϕ(un , ·) + ιK ) (it is a classical result that, for ψ ∈ Γ0 (H) one has int domψ ⊂ int dom∂ψ see Proposition 1.7 in [44]), one can find vn ∈ ∂(ϕ(un , ·) + ιK )(z), which means that for each y ∈ K ϕ(un , y) − ϕ(un , z) ≥ (vn | y − z) .

(5.136)

Without loss of generality, we suppose that vn = 0 for all n ∈ N and by considering yn = z +  vvnn  ∈ Sϕ in (5.136) we have ϕ(un , yn ) − ϕ(un , z) ≥ vn .

(5.137)

Using ϕ(un , yn ) ≤ 0 (since yn ∈ Sϕ ) and that limn→∞ ϕ(un , z) = 0, we obtain in (5.137) that limn→∞ vn  = 0. Coming back to (5.136) and thanks to weak lower semicontinuity of ϕ(y, ·) and monotonicity of ϕ we get ϕ(y, u) ≤ lim inf ϕ(y, un ) n→∞

≤ lim inf −ϕ(un , y) n→∞

≤ − lim (ϕ(un , z) + (vn | y − z)) = 0. n→∞

We conclude by the scalar aspect of Lemma 3.9.27.



440

5 Applications

Proposition 5.5.24. Suppose (i) A is a maximal monotone set-valued operator from H to H. (ii) A is demipositive, i.e., there exists x0 ∈ A−1 (0) such that, for every weakly convergent sequence (xk ) and every bounded sequence yk ∈ A(xn ), if (xn − x0 | yn ) −→ 0 then limn→+∞ xn ∈ A−1 (0). Then the bifunction ϕ(u, v) := supy∈A(u) (y | v − u) is also demipositive on H. Proof. First remark that since effective domain of A is the whole space H, then A is locally bounded on H, and then ϕ is well defined on H × H. Furthermore, Sϕ (H) = {x ∈ H : ϕ(x, v) = supy∈A(x) (y | v − x) ≥ 0, ∀v ∈ H} = A−1 (0) = ∅; the proof relies on the maximal monotonicity of the set-valued operator A. Suppose that un  u, z ∈ Sϕ (H) = A−1 (0) and limn→+∞ ϕ(un , z) = 0. By using local boundedness of A we derive existence of yn ∈ A(un ) such that (un − z | yn ) −→ 0. According to A demipositive,  we obtain u ∈ A−1 (0) = Sϕ (H), which concludes the proof. Now, as in Subsection 3.9.5, let us reformulate vector equilibrium problems by using the inf-support function α. On the closed convex orthant cone P = Rp+ , see Lemma 2.2.35, we consider the following convex compact base  Δp := {λ ∈ Rp+ : λi = 1}, 1≤i≤p

and then use the associate inf-support and sup-support functions α and β. In comparison with results in Subsection 3.9.5, when we suppose the equilibrium bifunction F : K ×K → Rp to be P -convex and lower semicontinuous relatively to the second variable, the following hold x ∈ WSF (M ) ⇐⇒ x ∈ M such that F (x, y) ∈ − int P ∀ y ∈ M ⇐⇒ x ∈ M and α(−F (x, y)) = min − (F (x, y) | λ) ≤ 0 ∀y ∈ M, λ∈Δp

⇐⇒ x ∈ M and

sup min − (F (x, y) | λ) ≤ 0.

y∈M λ∈Δp

In order to invert the variables y and λ in this last relation, we consider the real-valued bifunction G : Δp × K → R defined by G(λ, y) := − (F (x, y) | λ), and apply the following Sion’s minimax theorem (see [507]). Lemma 5.5.25. Let U , V be convex subsets of linear topological spaces, and G be a real-valued bifunction on U × V . Suppose that U is compact and, for every u ∈ U, v ∈ V , −G(u, ·) and G(·, v) are lower semicontinuous and quasi-convex on U and V respectively. Then min sup G(u, v) = sup min G(u, v). u∈U v∈V

v∈V u∈U

5.5 To attain Pareto-equilibria by Asymptotic Behavior . . .

441

We see easily that U = Δp is a nonempty compact subset of Rp and G is convex and continuous on Δp for every y ∈ M . In order for G(λ, ·) to be concave and upper semicontinuous, we may assume that F (x, ·) is P -convex and lower semicontinuous, which is equivalent to all the real functions Fi (x, ·) are convex and lower semicontinuous on K. Then according to Lemma 5.5.25, it follows that sup min − (F (x, y) | λ) = min sup − (F (x, y) | λ) .

y∈M λ∈Δp

λ∈Δp y∈M

These results are summarized in the following proposition. Proposition 5.5.26. If the equilibrium bifunction F : K × K → Rp is P convex and lower semicontinuous relatively to the second variable, then x is a solution of (WVEP) iff x ∈ M and there exists λ ∈ Δp such that  λi Fi (x, y) ≥ 0, ∀y ∈ M. (5.138) 1≤i≤p

We conclude that WSF (M ) =

3

SF (M, λ).

(5.139)

λ∈Δp

Proof. Since Δp is a nonempty compact subset of Rp , we can deduce the assertions from the following equivalences: x is a solution of (WVEP) ⇔ x ∈ M and sup min − (F (x, y) | λ) ≤ 0 y∈M λ∈Δp

⇔ x ∈ M and sup min − (F (x, y) | λ) ≤ 0 y∈M λ∈Δp

⇔ x ∈ M and min sup − (F (x, y) | λ) ≤ 0 λ∈Δp y∈M

⇔ x ∈ M and ∃λ ∈ Δp such that ∀y ∈ M, (F (x, y) | λ) ≥ 0 ⇔ ∃λ ∈ Δp such that x ∈ SF (M, λ) 3 SF (M, λ). ⇔x∈ λ∈Δp

 We conclude this section with the following main convergence results. Theorem 5.5.27. Suppose, for each i = 1, . . . , p, the equilibrium bifunction Fi : K × K → R is upper hemicontinuous relatively to the first variable (i.e., upper semicontinuous on every line segment in K) and convex lower semicontinuous relatively to the second variable. Consider u(·) a solution of (5.134) for a fixed λ0 ∈ Δp .

442

5 Applications



(a) Suppose SF (M, λ0 ) := {z ∈ M : F (z, y) | λ0 ≥ 0, ∀y ∈ M } = ∅, and F is λ0 -pseudomonotone∗ , i.e., 

z ∈ SF (M, λ0 ) =⇒ F (y, z) | λ0 ≤ 0 ∀y ∈ M, (5.140) then lim u(t) − z exists for every z ∈ SF (M, λ0 ). t→+∞ 

(b) If in addition, φ(u, v) := − F (v, u) | λ0 , for u, v ∈ M , is demipositive on M , then there is a solution u ¯ ∈ WSF (M ) such that the whole trajectory {u(t)} weakly converges to u ¯ as t → +∞. Proof. (a) Fix z ∈ SF (M, λ0 ) and set hz (t) = 12 u(t) − z2 , then for a.e. t ∈ [0, +∞) we have ˙ | u(t) − z) . h˙ z (t) = (u(t) We first observe that when u(.) is a solution of (5.134), then for a.e. t > 0  −u(t) ˙ ∈ λ0i ∂2 Fi (u(t), u(t)) + NM (u(t)). 1≤i≤p

Using the subdifferential inequality, we get ˙ | u(t) − z) h˙ z (t) = (u(t)  

λ0i Fi (u(t), z) = F (u(t), z) | λ0 . ≤ 1≤i≤p

Taking advantage of (5.140) and using u(t) ∈ M , we obtain for a.e. t > 0 

(5.141) h˙ z (t) ≤ F (u(t), z) | λ0 ≤ 0. Now, the fact that u is absolutely continuous implies that hz is absolutely continuous; then, we have, as t goes to +∞, hz (t) is decreasing and bounded below, so converges; and hence,  ∞  ∞ 1 ˙ h˙ z (t)dt ≤ hz (0) < +∞, |hz (t)|dt = − 2 0 0 which means h˙ z ∈ L1 (0, +∞). (b) To apply Opial’s Lemma 5.5.21, it remains to show that every weak 0 cluster point of {u(t)} SF (M, λ ). Consider tn → +∞ such that

belongs to 0 u(tn )  z, φ(u, v) = F (u, v) | λ , and fix a point z0 ∈ Sφ (M ) = SF (M, λ0 ) supposed nonempty. ˙ is bounded a.e. and u(·) is absolutely conSince h˙ z0 ∈ L1 (0, +∞), u(·) tinuous, then following lines of the proof of [98, Theorem 1], we construct another sequence {snk } such that both |snk − tnk |, u(snk ) − u(tnk ) and ˙ nk ) | u(snk ) − z0 ) converge to 0. h˙ z0 (snk ) = (u(s Return to u(tn )  z, it follows that {u(snk )} converges weakly to z.

5.5 To attain Pareto-equilibria by Asymptotic Behavior . . .

443

Taking into account (5.141) and z0 ∈ SF (M, λ0 ), then passing to the limit as k → +∞, we conclude that

 0 = lim h˙ z (sn ) ≤ lim inf F (u(sn ), z0 ) | λ0 k→+∞

0

k

k→+∞

≤ lim sup F (u(snk ), z0 ) | λ

0



k

≤ 0,

k→+∞

according to which we claim that φ(u(snk ), z0 ) → 0. Observe also that the bifunction φ, supposed to be demipositive on M , satisfies: φ(u(snk ), z0 ) → 0 and z0 ∈ S φ (M ), u(snk )  z, then z ∈ SF (M, λ0 ). We conclude from Lemma 5.5.21, the whole trajectory {u(t)} converges weakly to some x ¯ ∈ SF (M, λ0 ). Using Proposition 5.5.26,  we conclude x ¯ ∈ WSF (M ), which completes the proof. Theorem 5.5.28. Under the same data as in Theorem 5.5.27, consider a solution u(·) of 0 ∈ u(t) ˙ + conv{∂2 Fi (u(t), u(t))} + NM (u(t)).

(5.142)

(a) Suppose S := {x ∈ M : mini=1,...,p Fi (x, y) ≥ 0, ∀y ∈ M } = ∅ and assume ∀z ∈ S,

max (Fi (y, z) + Fi (z, y)) ≤ 0, ∀y ∈ M,

i=1,...,p

(5.143)

then lim u(t) − z exists ∀z ∈ S. t→+∞

(b) If in addition, ψ(u, v) := mini=1,...,p Fi (u, v) is demipositive on M , then there exists a solution u ¯ ∈ WSF (M ) such that {u(t)} weakly converges to u ¯ as t → +∞. Proof. (a) Since S = ∅, then there exists z ∈ M such that Fi (z, y) ≥ 0, ∀y ∈ M, ∀i = 1, . . . , p, we deduce that, for every λ0 ∈ Δp ,  

λ0i Fi (z, y) ≥ 0. ∀y ∈ M, F (u(t), z) | λ0 = 1≤i≤p

This ensures SF (M, λ0 ) is nonempty for every λ0 ∈ Δp . So using (5.143), we deduce 

F (y, z) | λ0 ≤ 0, ∀y ∈ M. In this case (5.141) is satisfied, and then lim u(t) − z exists for every t→+∞

z ∈ S. (b) Weak convergence of {u(t)} a solution of (5.142) to some point in S ⊆ WSF (M ) is ensured when in addition we assume that ψ is demipositive. 

444

5 Applications

Remark 5.5.29. Suppose all the real bifunctions Fi are monotone, then F satisfies (5.143) and also is λ0 -pseudomonotone∗ for every λ0 ∈ Δp . Indeed, fix z ∈ SF (M, λ0 ), then p 

λ0i Fi (z, y) ≥ 0, ∀y ∈ M, ∀i = 1, . . . , p.

i=1

Using monotonicity of Fi and λ0 ∈ Rp> , we get for every y ∈ M

F (y, z) | λ

0



=

p 

λ0i Fi (y, z)

≤−

i=1

p 

λ0i Fi (z, y) ≤ 0.

i=1

Strong convergence The following theorem shows the strong convergence of the unique strong global solution of the dynamical system (5.134) under γ-strong Rp+ pseudomonotonicity of the bifunction F . Theorem 5.5.30. Additionally to assumptions of Theorem 5.5.8, consider u(.) a solution of (5.134). Then for a.e. t > 0, (u(t) ˙ | u(t) − x ¯) ≤ −γ(¯ x− u(t)) and u(t) strongly converges to the unique solution x ¯ of (WVEP). Proof. Theorem 5.5.8 ensures that WSF (M ) is nonempty and reduces to ¯2 for each t > 0, where x ¯ is the unique a singleton. Set h(t) := 12 u(t) − x element of WSF (M ) and λ0 ∈ Δp . Following lines of the proof of Theorem 5.5.27 (a), we obtain from (5.141) that for a.e. t > 0 

˙ h(t) = (u(t) ˙ | u(t) − x ¯) ≤ F (u(t), x ¯ ) | λ0 . By γ-strong monotonicity of F , we deduce for each t > 0 max (Fi (u(t), x ¯) + Fi (¯ x, u(t)) ≤ 0

1≤i≤p

and ¯ ) + Fi x ¯, u(t)) ≤ −γ(¯ x − u(t)). min (Fi (u(t), x

1≤i≤p

According to the above analysis, we conclude that p  

˙ λ0i Fi (u(t), x ¯) h(t) ≤ F (u(t), x ¯) | λ0 = i=1

≤−

p 

λ0i Fi (¯ x, u(t)) − γ(¯ x − u(t))

i=1

≤ −γ(¯ x − u(t)). Therefore, for a.e. t > 0, we have

5.5 To attain Pareto-equilibria by Asymptotic Behavior . . .

˙ h(t) + γ(¯ x − u(t)) ≤ 0.

445

(5.144)

After integrating from 0 to +∞ the above inequality, one obtains  +∞ γ(¯ x − u(t))dt ≤ h(0) < +∞, 0

and since γ is continuous, we conclude lim inf γ(¯ x − u(t)) = 0. In view of t→+∞

the convergence of the hull family u(t) − x ¯ as t → +∞ (see Theorem 5.5.27 (a) and Remark 5.5.29) and conditions on γ (see Definition 5.5.2), we deduce x − u(t) = 0, hence {u(t)} converges strongly to the unique that lim ¯ t→+∞

solution x ¯, which concludes the proof.  Corollary 5.5.31. Under assumptions of Theorem 5.5.30, suppose γ(t) = ¯ the unique Ct2 on R+ and consider u(·) a solution of (5.134). Then for x solution of (WVEP), we have ¯ x − u(t) ≤ ¯ x − u(0)e−Ct .

(5.145)

˙ Proof. Taking into account (5.144) we get h(s) ≤ −C¯ x − u(s)2 = −2Ch(s) for s ≥ 0. Integrating from 0 to t, we obtain h(t) ≤ h(0)e−2Ct , which means (5.145).  To prove our result of strong asymptotic convergence when the bifunction F isn’t assumed to be γ-strongly Rp+ -pseudomonotone, we consider the regularized ε-Tikhonov-like bifunction F ε where (ε > 0) and for each i = 1, . . . , p Fiε (u, v) = Fi (u, v) + ε (u | v − u) , for u, v ∈ M.

(5.146)

For this, we need the following lemma. Lemma 5.5.32. For each ε > 0 and λ0 ∈ Δp , the solution set SF ε (M, λ0 ) is nonempty and reduced to {xε }, and if moreover SF (M, λ0 ) is also nonempty, then (xε ) converges strongly, as ε → 0, to the element of minimum norm of the closed convex set SF (M, λ0 ). Proof. The first assertion is justified by Theorem 5.5.8 since, for any fixed ε > 0, λ0 ∈ Δp and for each u, v ∈ M , monotonicity of Fi gives Fiε (u, v) + Fiε (v, u) = Fi (u, v) + Fi (v, u) − εu − v2 ≤ −εu − v2 . For the second assertion, set z ∈ SF (M, λ0 ) and consider xε the unique solution of SF ε (M, λ0 ), then monotonicity of F also gives

446

5 Applications p 

λ0i ε(xε 2

− z · xε ) ≤

i=1

p 

λ0i ε(xε 2 − (xε | z))

i=1

≤−

p 



λ0i ε (xε | z − xε ) − F (xε , z) | λ0

i=1

 = − F ε (xε , z) | λ0 ≤ 0, and then (xε ) remains bounded, since xε  ≤ z for every ε > 0. If we take x∞ a weak limit point of xε as ε → 0, then there exists a sequence εk → 0 such that xεk  x∞ . Since xεk is an equilibrium point of Fεk , then for any v ∈ M we have p   F (xεk , v) | λ0 + λ0i εk (xεk | v − xεk ) ≥ 0,

i=1

which implies by monotonicity of F , p

  0 F (v, xεk ) | λ ≤ λ0i εk (xεk | v − xεk ) . i=1

Going to the limit when k → +∞, we obtain

p 

  F (v, x∞ ) | λ0 ≤ lim inf F (v, xεk ) | λ0 ≤ λ0i lim εk (xεk |v − xεk ) = 0. k→+∞

i=1

k→+∞

By Minty’s Lemma, we conclude that x∞ belong to SF (M, λ0 ), and using the weak lower semicontinuity of the norm, the inequality xεk  ≤ z for every k gives x∞  ≤ z, and then x∞ = z. We conclude z is the unique weak cluster point of the bounded family {xε : ε > 0}, and then xε  z. Since the norm is weakly lower semicontinuous, we also have x∗  ≤ lim inf xε  ≤ lim sup xε  ≤ z. ε→0

ε→0

which implies that xε  → z, and then the (xε ) converges strongly to z.  Theorem 5.5.33. Under the assumptions of Theorem 5.5.8 except the strong monotonicity condition, we have for λ0 ∈ Δp ∩ Rp> and uε (t) the strong global solution of  λ0i ∂2 Fiε (u(t), u(t)) + NM (u(t)) (5.147) 0 ∈ u(t) ˙ + 1≤i≤p

and z the element of minimum norm of the closed convex set SF (M, λ0 ) lim lim uε (t) − z = 0.

ε→0 t→+∞

Proof. Combining Corollary 5.5.31 and Lemma 5.5.32, we get xε − uε (t) ≤ xε − uε (0)e−εt and lim xε − z = 0. ε→0

and then the desired property follows from the norm’s triangle inequality. 

5.5 To attain Pareto-equilibria by Asymptotic Behavior . . .

447

5.5.5 Asymptotic Behavior for Multiobjective Optimization Problem Let h : M → Rp and consider the vector optimization problem minRp+ ψ: find z ∈ M such that h(y) ∈ / h(z) − Rp> , ∀y ∈ M.

(WVOP)

The approach by an equilibrium problem is therefore operated on vector optimization problems by following two directions: the first one (which is suitable for the weak convergence) is to set F (x, y) = h(y) − h(x), and the second (for strong convergence) is to choose F (x, y) = h (x, y − x), where h (x; d) := limt→0+ 1t (h(x + td) − h(x)) is the directional derivative of h. Weak convergence Set F (x, y) = h(y) − h(x) for x, y ∈ M ⊂ H ∈ Rp , where hi : M → R for i = 1, . . . , p, and λ0 ∈ Δp ∩ Rp> ; then / −Rp> , ∀y ∈ M WSF (M ) = MSF (M ) = {z ∈ M  : h(y) − h(z)

∈  }, and 0 0 SF (M, λ ) = argmin h(·) | λ = {z ∈ M : h(y) − h(z) | λ0 ≥ 0, ∀y ∈ M }. Remark that z∈ / WSF (M ) ⇐⇒ ∃y ∈ M : h(y) − h(z) ∈ −Rp> =⇒ ∃y ∈ M : h(y) − h(z) | λ0 < 0 ⇐⇒ z ∈ / SF (M, λ0 ); this means that

SF (M, λ0 ) ⊂ WSF (M ).

Theorem 5.5.34. Assume that all the functions hi : M → R for i = 1, . . . , p are convex lower semicontinuous, and u(t) is a strong global solution of  0 ∈ u(t) ˙ + λ0i ∂hi (u(t)) + NM (u(t)). (5.148) 1≤i≤p

If SF (M, λ0 ) = ∅, then u(t) converges weakly in H, as t → +∞, to a solution of (WVOP). Proof. We remark that, for all (u, v) ∈ M × M, ∂F

i (u, ·)(v) = ∂hi(v). So, we only need to justify demipositivity of ϕ(x, y) = h(y) − h(x) | λ0 on M × M to use Theorem 5.5.27. For demipositivity of ϕ we use (ii) (a) of Proposition 5.5.23. Indeed, let u, v, w ∈ M , then  

ϕ(u, v) + ϕ(v, w) + ϕ(w, u) = h(v) − h(u) | λ0 + h(w) − h(v) | λ0 

+ h(u) − h(w) | λ0 = 0. We conclude ϕ is 3-monotone, and as a consequence ϕ satisfies (ii) (a) in Proposition 5.5.23. 

448

5 Applications

Remark 5.5.35. When we suppose S := {z ∈ M : hi (z) ≤ hi (y), ∀y ∈ M, ∀ i = 1, . . . , p} =

2 1≤i≤p

argmin(hi ) = ∅, M

then SF (M, λ0 ) is nonempty; in this case weak convergence of {u(t)} to a solution of (5.142) is ensured when in addition we assume that the bifunction ψ(u, v) = max Fi (u, v) = max (hi (v) − hi (u)) i=1,...,p

i=1,...,p

is demipositive on M . Remark 5.5.36. For weak convergence of the trajectories u(t) solution of  0 ∈ u(t) ˙ + λi (t)∂hi (u(t)) + NM (u(t)) , (5.149) 1≤i≤p

results have been approved in [31, Proposition 2.6], [29, Theorem 4.7] and [30, Theorem 2.2] where the authors have imposed the boundedness of u(t) and also (not indicated in the statement) S0 := {x ∈ H : hi (x) ≤ lim hi (u(t)), ∀ ∈ {1, . . . , q}} t→+∞

is nonempty. Strong convergence For the proof of the strong convergence results we need the following characterization for strongly convex real functions. Lemma 5.5.37. Let ϕ : H → R ∪ {+∞} be a proper lower semicontinuous function. Then ϕ is m-strongly convex, i.e., ∀x, y ∈ M, ∀t ∈ [0, 1] ϕ(tx + (1 − t)y) ≤ tϕ(x) + (1 − t)ϕ(y) −

m t(1 − t)x − y2 ; 2

(5.150)

if and only if ϕ (x; y − x) + ϕ (y; x − y) ≤ −mx − y2 for all x, y ∈ H;

(5.151)

if and only if ϕ (x; y − x) ≤ ϕ(y) − ϕ(x) −

m x − y2 for all x, y ∈ H. 2

(5.152)

Proof. Using [581, Theorem 2.1.16] and [276, Proposition 1.1.2], we know 2 that ϕ is m-strongly convex if and only if the function ψ := ϕ − m 2  ·  is convex if and only if

5.5 To attain Pareto-equilibria by Asymptotic Behavior . . .

ψ  (x; y − x) + ψ  (y; x − y) ≤ 0, ∀x, y ∈ H.

449

(5.153)

Of course, since ψ  (x; y − x) + ψ  (y; x − y) = (ϕ (x; y − x) − m (x | y − x)) +(ϕ (y; x − y) − m (y | x − y)), we know that ϕ is m-strongly convex ⇔ ϕ (x; y − x) + ϕ (y; x − y) ≤ −mx − y2 ; and then completing the proof of (5.150) ⇔ (5.151). The last equivalence is quickly justified by going first from (5.152) to (5.151) by summation, and then by following the implications (5.151) ⇒ (5.150) ⇒ (5.152).  Now, we suppose that M is included in the interior of the effective domain of h, and for x, y ∈ M and i = 1, . . . , p, we set Fi (x, y) := hi (x; y − x) =

sup ξ∈∂hi (x)

(ξ | y − x)

where, for all i = 1, . . . , p, ∂hi is the well-defined convex subdifferential of hi on M , see [420, p. 65]. Thus conclusions of Corollary 5.5.31 and Theorem 5.5.33 are valid, i.e., Theorem 5.5.38. (a) Under conditions of Theorem 5.5.34, we have for each strong global solution uε (t) of  λ0i (∂hi (u(t)) + εu(t)) + NM (u(t)) (5.154) 0 ∈ u(t) ˙ + 1≤i≤p

and for z the projection of the origin on argmin

 1≤i≤p

 λ0i hi ,

lim lim uε (t) − z = 0.

ε→0 t→+∞

(b) Suppose moreover that one of the functions hi is m-strongly convex and u(t) is a strong global solution of (5.148), then u(t) strongly converges in H, as t → +∞, to the unique solution z of (LSEP)λ (for λ = λ0 ) and then z ∈ WSF (M ). We also have the convergence rate u(t) − z ≤ u(0) − ze−mt for t large enough. Proof. From Lemma 5.5.37 it is clear that, for some i0 ∈ {1, . . . , p}, hi0 is strongly convex if and only if Fi0 is strongly monotone. So, by Corollary 5.5.31 and Theorem 5.5.33, the only thing we need to show is that ∂2 Fiε (u(t), u(t)) = ∂hi (u(t)) + εu(t) for each ε ≥ 0. We first remark by relying on the subdifferential of sums that ∂(Fiε (u(t), ·))(u(t)) = ∂(Fi (u(t), ·))(u(t)) + εu(t).

450

5 Applications

Consider x ∈ M , since M is included in the interior of the effective domain of h, then ξ ∈ ∂(Fi (x, ·))(x) ⇔ Fi (x, v) = hi (x; v − x) =

sup η∈∂hi (x)

(η | v − x)

≥ (ξ | v − x) , ∀v ∈ H ⇔ ∃ηx ∈∂hi (x) such that (ηx | v − x) ≥ (ξ | v − x) , ∀v∈H ⇔ ∃ηx ∈ ∂hi (x) such that ηx = ξ, i.e., ξ ∈ ∂hi (x). Hence ∂Fi (x, ·)(x) = ∂hi (x), for all x ∈ M , and then the result follows.



Remark 5.5.39. Among recent works on strong convergence for solutions of associated weak Pareto optima dynamical systems, we cite [31, Theorem 1.5. (iv)], where only if the trajectory u(t) is norm-relatively compact in H (or if one of the functions hi has norm-relatively compact sublevel sets, i.e., strong inf-compactness property) have been proved. To our knowledge, our result is the first showing strong convergence of trajectories under only strong convexity of one of the functions hi . More precisely, we derived a stability convergence result, characterizing the exponential convergence of the proposed method. Remark 5.5.40. We have shown (as a corollary of Theorem 5.5.33) a result of almost strong convergence of the trajectories in (a) above. It is an interesting (open) question to find other conditions on the data (hi and the positive functions εi (t)) which provide strong convergence of trajectories satisfying, either  0 ∈ u(t) ˙ + λ0i (∂hi (u(t)) + ε(t)u(t)) + NM (u(t)), (5.155) 1≤i≤p

or 0 ∈ u(t) ˙ +



λi (t) (∂hi (u(t)) + ε(t)u(t)) + NM (u(t)).

(5.156)

1≤i≤p

5.5.6 Asymptotic Behavior for Multiobjective Saddle-point Problem Assume that we are given H = H1 × H2 a product of two Hilbert spaces, and M1 , M2 two nonempty subsets of H1 and H2 , respectively. Let f : M = M1 × M2 −→ Rp be a vector-valued mapping. A point (z1 , z2 ) ∈ M is said to be a weak Rp+ -saddle point of f with respect to M if f (z1 , M2 ) ∩ (f (z1 , z2 ) + Rp> ∪ {0}) = {f (z1 , z2 )} (WVSP) f (M1 , z2 ) ∩ (f (z1 , z2 ) − Rp> ∪ {0}) = {f (z1 , z2 )}.

5.5 To attain Pareto-equilibria by Asymptotic Behavior . . .

451

Weak convergence Set F : M × M → Rp defined, for each u = (u1 , u2 ), v = (v1 , v2 ) ∈ M , by F (u, v) := f (v1 , u2 ) − f (u1 , v2 ). Then z = (z1 , z2 ) ∈ WSF (M ) = MSF (M ) iff z is a solution of (WVSP). We also have F (u, v) = −F (v, u), for every u, v ∈ M , and SF (M, λ0 ) ⊂ WSF (M ) for every λ0 ∈ Rp> . We point out that in all results above, the weak convergence of each trajectory of considered systems imposes the demipositivity condition. In fact, consider the following simple example. Example 5.5.41. Let f : R2 → R be defined by f (u1 , u2 ) = u1 u2 for each u ∈ R2 . Observe that the unique saddle point of f is z = (0, 0), and the associated bifunction F ((u1 , u2 ), (v1 , v2 )) = u2 v1 − u1 v2 satisfies all conditions except demipositive one (see [117]). Indeed, taking λ0 = 1 and un = (1, 1), we have that φ(un , z) = −F (z, un ) = 0 but (1, 1) does not belong to WSF (M ). The dynamical system (5.134) becomes u(t) ˙ = −v(t), v(t) ˙ = u(t). Its solution is u(t) = u(0) cos t − v(0) sin t and v(t) = u(0) sin t + v(0) cos t, then u2 (t) + v 2 (t) = u(0)2 + v(0)2 , so the trajectories describe concentric circles and can never converge. In the next proposition, we’ll give conditions ensuring demipositivity of bifunctions associated to saddle functions (see [117, Prop. 2 for p = 1]). Proposition 5.5.42. Suppose f : M → Rp is a closed convex-concave function that verifies the conditions: SF (M, λ0 ) = ∅ for some λ0 ∈ Rp> and  0 1 , u2 ) ∈ M and

λ ), u = (u  

z = (z1 , z2 ) ∈0 SF (M, =⇒ u ∈ SF (M, λ0 ). f (z1 , u2 ) | λ = f (z1 , z2 ) | λ0 = f (u1 , z2 ) | λ0 (Cλ0 ) 

Consider ϕ(u, v) := − F (v, u) | λ0 for (u, v) ∈ M × M . Then, (a) Sϕ (M ) := {z ∈ M : ϕ(z, v) ≥ 0, ∀v ∈ M } = SF (M, λ0 ); (b) the bifunction ϕ is demipositive on M , see Definition 5.5.22. Proof. The first assertion results from the antisymmetry relation for the bifunctions F and ϕ.

452

5 Applications

For the second, consider z = (z1 , z2 ) ∈ Sϕ (M ) and (un ) = (u1n , u2n ) that weakly converges to a bipoint (u) = (u1 , u2 ) and such that lim ϕ(un , z) = n→+∞

0. We use fi (·, z2 ), −fi (z1 , ·) convex, lower semicontinuous and λ0 ∈ Rp> to have

 0 = lim −ϕ(un , z) ≥ lim inf − F (z, un ) | λ0 n→+∞ n→+∞ 



≥ lim inf (− f (z1 , u2n ) | λ0 ) + lim inf f (u1n , z2 ) | λ0 n→+∞ n→+∞



 ≥ − f (z1 , u2 ) | λ0 ) + f (u1 , z2 ) | λ0 = −ϕ(u, z) = ϕ(z, u). But we already have that ϕ(z, u) ≥ 0 since

z ∈ Sϕ (M) we deduce that

ϕ(z, u) = f (u1 , z2 ) − f (z1 , u 2 ) | λ0 = 0,so f (u1 , z2 ) | λ0 = f (z1 , u2 ) | λ0 . Return to z ∈ Sϕ (M ), i.e., F (v, z) | λ0 ≤ 0, ∀v ∈ M , and choose respectively v = (u1 , z2 ), v = (z1 , u2 ), we get   

f (z1 , u2 ) | λ0 ≤ f (z1 , z2 ) | λ0 ≤ f (u1 , z2 ) | λ0 . Combining these last properties, we obtain

   f (z1 , u2 ) | λ0 = f (z1 , z2 ) | λ0 = f (u1 , z2 ) | λ0 . Using (Cλ0 ), we deduce that u ∈ SF (M, λ0 ) = Sϕ (M ), and then ϕ is demipositive.  As consequence of Theorem 5.5.27 and Proposition 5.5.42, we obtain Theorem 5.5.43. Under conditions of Proposition 5.5.42, let us consider λ0 ∈ Δp ∩ Rp> and u(t) = (u1 (t), u2 (t)) the strong global solution of the dynamical system:  p 0 ∈ u˙ 1 (t) + i=1 λ0i ∂1 fi (u1 (t), u2 (t)) + NM1 (u1 (t)) (5.157) p 0 ∈ u˙ 2 (t) + i=1 λ0i ∂2 [−fi (u1 (t), u2 (t))] + NM2 (u2 (t)). Then, as t → ∞, u(t) weakly converges to a saddle point of f . Proof. We first remark that all conditions of Theorem 5.5.27 are satisfied by the vector bifunction F , and using the subdifferential of functions with separate variables on a product space the dynamical system (5.134) may be written as p  λ0i ∂1 fi (u1 (t), u2 (t)) × ∂2 [−fi (u1 (t), u2 (t))] 0 ∈ (u˙ 1 (t), u˙ 2 (t)) + i=1

+NM1 (u1 (t)) × NM2 (u2 (t)),

(5.158)

which is equivalent to (5.157). Here ∂j is the subdifferential relatively to the jth variable (j = 1, 2). Using Theorem 5.5.27 and Proposition 5.5.42, we conclude that u(t) weakly converges to some element in SF (M, λ0 ) and then to a weak vector saddle point of f .  We shall look with interest at this explicit example of vector bifunction for which all conditions of the above theorem are satisfied.

5.5 To attain Pareto-equilibria by Asymptotic Behavior . . .

453

Example 5.5.44. We define a closed convex-concave vector function f on the closed convex set M = [−1, 0] × [−1, 0] by f (x1 , x2 ) = ((2 + x1 )x2 , x21 − x22 ). We set λ0 = (1, 2) ∈ R2> , then 

z ∈ SF (M, λ0 ) ⇔ F (z, y) | λ0 ≥ 0, ∀y ∈ M ⇔ f (y1 , z2 ) − f (z1 , y2 ) | λ0 ≥ 0, ∀y ∈ M 1 2 ⇔ min (y1 − z41 )2 + (y1 − 2+z 4 ) y∈M  1 (z1 + 2)2 + (z2 + 2)2 − 4 + z11 + z22 . ≥ 16 After calculating the minimum, we conclude  2  2 16 1 2 5 z ∈ SF (M, λ0 ) ⇐⇒ + z2 + ≤ 2. z1 + 17 4 17 17 We choose z = (z1 , z2 ) = (− 14 , 0), then z ∈ SF (M, λ0 ). We also easily verify that SF (M, λ0 ) ⊂ WSF (M ), since F (u, v) = −F (v, u), for every u, v ∈ M , and then z is a solution of (WVSP). To verify the condition (Cλ0 ), let us choose z = (− 14 , 0) ∈ SF (M, λ0 ), λ = (1, 2) ∈ R2> and consider u ∈ M that satisfy

   f (z1 , u2 ) | λ0 = f (z1 , z2 ) | λ0 = f (u1 , z2 ) | λ0 0

which is equivalent to u2 (7 − 16u2 ) = 0 and u21 −

1 = 0. 16

But u ∈ M = [−1, 0]2 , consequently u = z ∈ WSf (M ); from which we deduce  (Cλ0 ), and then from Proposition 5.5.42, F is demipositive. Strong convergence Now, we justify strong convergence of the trajectories. If f is convex-concave on M (supposed included in the interior of the effective domain of f ), then all the functions fi (i = 1, . . . , p) have partial directional derivatives  fi,1 (u1 , u2 ; d1 ) := lim

t→0+

and

1 (fi (u1 + td1 , u2 ) − fi (u1 , u2 )) t

1 (fi (u1 , u2 + td2 ) − fi (u1 , u2 )) . t  We denote by f1 (u1 , u2 ; d1 ) (resp. fi,2 (u1 , u2 ; d2 )), the vector function whose   components are fi,1 (u1 , u2 ; d1 ) (resp. fi,2 (u1 , u2 ; d2 )).  fi,2 (u1 , u2 ; d2 ) := lim+ t→0

454

5 Applications

Proposition 5.5.45. Under conditions above, a necessary and sufficient condition in order that z = (z1 , z2 ) ∈ M to be a saddle point of f on M is f1 (z1 , z2 ; v1 − z1 ) ∈ / −Rp> , ∀v1 ∈ M1 , and f2 (z1 , z2 ; v2 − z2 ) ∈ / Rp> , ∀v2 ∈ M2 . (5.159) Proof. Suppose z = (z1 , z2 ) ∈ M is a saddle point of f on M , then / −Rp> , ∀v1 ∈ M1 , f (v1 , z2 ) − f (z1 , z2 ) ∈ (5.160) / Rp> , ∀v2 ∈ M2 . f (z1 , v2 ) − f (z1 , z2 ) ∈ Let us fix v = (v1 , v2 ) ∈ M , then for each t ∈ (0, 1), since z1 + t(v1 − z1 ) = tv1 + (1 − t)z1 ∈ M1 and z2 + t(v2 − z2 ) = tv2 + (1 − t)z2 ∈ M2 , there exist it , jt ∈ {1, . . . , p} such that fit (z1 +t(v1 −z1 ), z2 )−fit (z1 , z2 ) ≥ 0 and fjt (z1 , z2 +t(v2 −z2 ))−fjt (z1 , z2 ) ≤ 0. One can then construct sequences tn → 0+ and sn → 0+ such that itn = iv and jsn = jv for every n ∈ N. Then passing to the limit as n → +∞, we conclude fiv ,1 (z1 , z2 ; v1 − z1 ) = lim

1

n→+∞ tn

(fiv (z1 + tn (v1 − z1 ), z2 ) − fiv (z1 , z2 )) ≥ 0

and fjv ,2 (z1 , z2 ; v2 − z2 ) = lim

n→+∞

1 (fjv (z1 , z2 + sn (v2 − z2 )) − fjv (z1 , z2 )) ≤ 0, sn

which mean (5.159). The converse is more subtle, and we show it by starting from (5.159), then for v = (v1 , v2 ) ∈ M , there are iv , jv ∈ {1, . . . , p} such that sup (ξ | v1 − z1 ) ≥ 0 fiv ,1 (z1 , z2 ; v1 − z1 ) = ξ∈∂fiv (·,z2 )(z1 )

and

fjv ,2 (z1 , z2 ; v2 − z2 ) =

sup η∈∂(−fjv (z1 ,·))(z2 )

(η | v2 − z2 ) ≤ 0.

We supposed that M is included in the interior of the effective domain of f , then there must exist some ξv ∈ ∂fiv (·, z2 )(z1 ) and ηv ∈ ∂(−fjv (z1 , ·))(z2 ) such that (ξv | v1 − z1 ) ≥ 0 and (ηv | v2 − z2 ) ≤ 0. By the definition of subdifferential, we conclude that fiv (v1 , z2 ) − fiv (z1 , z2 ) ≥ 0 and fjv (z1 , v2 ) − fjv (z1 , z2 ) ≤ 0, which give the result.



5.5 To attain Pareto-equilibria by Asymptotic Behavior . . .

455

By setting the bifunction F (u, v) := f1 (u1 , u2 ; v1 − u1 ) − f2 (u1 , u2 ; v2 − u2 ) for each u, v ∈ M, we deduce that the problems (WVEP), (WVSP), and (5.159) are equivalent in the sense that the solution sets are similar. Following lines of the proof of Theorem 5.5.38, we look at the nonlinear dynamical systems (5.134) and (5.147) respectively formulated as (5.157) and p 0 ∈ u˙ 1 (t) + i=1 λ0i ∂1 fi (u1 (t), u2 (t)) + εu1 (t) + NM1 (u1 (t)) (5.161) p 0 ∈ u˙ 2 (t) + i=1 λ0i ∂2 [−fi ] (u1 (t), u2 (t)) + εu2 (t) + NM2 (u2 (t)) and deduce from Corollary 5.5.31 and Theorem 5.5.33 that Theorem 5.5.46. (a) Under conditions of Theorem 5.5.43, we have for each strong global solution uε (t) of (5.161), there exists a solution z of (WVSP), such that lim lim uε (t) − z = 0. ε→0 t→+∞

(b) Assume moreover that one of the functions fi is m-strongly convexconcave and u(t) is a strong global solution of (5.157), then u(t) strongly converges in H, as t → +∞, to a solution z of (WVSP), and u(t)−z ≤ u(0) − ze−mt for t large enough. Proof. (a) We remark that all conditions of Theorem 5.5.33 are satisfied, then the conclusion is valid for z the projection of the origin on SF (M, λ0 ). Since 

z ∈ SF (M, λ0 ) ⇔ z ∈ M and f1 (z1 , z2 ; v1 −z1 ) − f2 (z1 , z2 ; v2 −z2 ) | λ0 ≥ 0, ∀v ∈ M

  0 ⇔ z ∈ M and f1 (z1 , z2 ; v1 − z1 ) | λ ≥ 0, ∀v1 ∈ M1 

and f2 (z1 , z2 ; v2 − z2 ) | λ0 ≤ 0, ∀v2 ∈ M2 . We conclude that for every v = (v1 , v2 ) ∈ M , there are iv , jv ∈ {1, . . . , p} such that fiv ,1 (z1 , z2 ; v1 − z1 ) ≥ 0 and fjv ,2 (z1 , z2 ; v2 − z2 ) ≤ 0, and then fiv (v1 , z2 ) − fiv (z1 , z2 ) ≥ 0 and fjv (z1 , v2 ) − fjv (z1 , z2 ) ≤ 0; which means that z is a solution of (WVSP). Thus, we have established the first statement. (b) If we suppose one function fi0 to be m-strongly convex-concave on M , i.e., ∀(x, y) ∈ M, fi0 (·, y) is to be m-strongly convex on M1 and fi0 (x, ·) is to be m-strongly concave on M2 , then, in view of Lemma 5.5.37, for each u, v ∈ M ,

456

5 Applications

Fi0 (u, v) + Fi0 (v, u) = fi0 ,1 (u1 , u2 ; v1 − u1 ) − fi0 ,2 (u1 , u2 ; v2 − u2 )

+fi0 ,1 (v1 , v2 ; u1 − v1 ) − fi0 ,2 (v1 , v2 ; u2 − v2 ) m ≤ (fi0 ,1 (v1 , u2 ) − fi0 ,1 (u1 , u2 ) − v1 − u1 2 ) 2 m −(fi0 ,1 (u1 , v2 ) − fi0 ,1 (u1 , u2 ) − v2 − u2 2 ) 2 m +(fi0 ,1 (u1 , v2 ) − fi0 ,1 (v1 , v2 ) v1 − u1 2 ) 2 m −(fi0 ,1 (v1 , u2 ) − fi0 ,1 (v1 , v2 ) − v2 − u2 2 ))  2

= −m v1 − u1 2 + v2 − u2 2 = −mv − u2 .

We deduce that Fi0 is m-strongly monotone, and thus conclusions of Theorem 5.5.8 and Corollary 5.5.31 are valid.  5.5.7 Numerical examples In order to compute solutions of an optimization problem find z ∈ M such that h(y) ≤ h(z) ∀y ∈ M,

(OP)

where M is a nonempty closed convex set, a dynamic approach is often exploited. In the subsection above, this method is elaborated for a vectorvalued equilibrium problem (WVEP) and then set up for a vector optimization problem (WVOP). Crucial in both cases are the derivation of a firstorder differential inclusion (5.108) and the establishment of suitable (as weak as possible) preconditions in order to match the calculus between (WVEP) and (5.108). In particular, the Fan lemma, the Minty linearization, the Peano theorem, the Moreau envelope, and monotonicity conditions were exploited for the analytical conclusions. In this subsection, we report the numerical experiments of the asymptotic behavior for multiobjective unconstrained optimization problems and compare their performances in three elementary examples: all components are strongly convex, one component is strongly convex, no component is strongly convex. Let us first recall an element x ∈ Rn is weak Pareto optimal for h : Rn → p R if there does not exist y ∈ Rn such that hi (y) < hi (x) for all i = 1, . . . , p, which is formulated as find z ∈ Rn such that h(y) ∈ / h(z) − Rp> , ∀y ∈ Rn .

(WVOP)

It is recalled that the mapping h is Rp+ -convex if and only if its components hi : Rn → R are all convex. To determine the set of weak optima of h, we use the following first-order optimality condition where, under Rp+ -convexity, these concepts of weak efficiency and stationarity are equivalent.

5.5 To attain Pareto-equilibria by Asymptotic Behavior . . .

457

Proposition 5.5.47. If z is a weak Pareto optimum for the differentiable function h, then it is a critical point, i.e., 0 ∈ conv{∇hi (z) | i = 1, . . . , p}. If, in addition, h is Rp+ -convex, this condition is also sufficient for weak Pareto optimality. Proof. We can justify this proposition by using Proposition 5.5.1 when F (x, y) = (∇h(x)|y − x).  To attain one of the weak Pareto optima, we come back to the descent dynamic systems (5.107), (5.148) and (5.155) to formulate the following corresponding systems: ⊕

˙ + Pconv{∇hi (u(t))} (0), 0 = u(t) ˙ + [conv{∇hi (u(t))}] = u(t) p 

(5.162)

λ0i ∇hi (u(t))

(5.163)

λ0i (∇hi (u(t)) + ε(t)u(t))

(5.164)

0 = u(t) ˙ +

i=1

and 0 = u(t) ˙ +

p  i=1

where each function hi is convex and differentiable, λ0 ∈ Δp ∩Rp> and ε(t) > 0 is a suitable function that converges to 0 as t goes to +∞. Example 5.5.48 (Strongly convex case). Consider the quadratic differentiable functions 1 1 1 1 h1 (x1 , x2 ) = (x1 + 1)2 + x22 and h2 (x1 , x2 ) = (x1 − 1)2 + x22 . 2 2 2 2 Then, these two functions are strongly convex4 , the corresponding set of weak Pareto optima 5 is Sh = [−1, +1] × {0} ⊂ R2 , and for each (x1 , x2 ) ∈ R2 , conv{∇h1 (x1 , x2 ), ∇h2 (x1 , x2 )} = [(x1 + 1, x2 ) , (x1 − 1, x2 )] . So, we can concretize the projection of the origin is to say ⎧ ⎨ (x1 − 1, x2 ) Pconv{∇hi (x1 ,x2 )} (0, 0) = (0, x2 ) ⎩ (x1 + 1, x2 ) The corresponding descent dynamic system ⎧ ⎨ (−u1 (t) + 1, −u2 (t)) (u˙ 1 (t), u˙ 2 (t)) = (0, −u2 (t)) ⎩ (−u1 (t) − 1, −u2 (t)) 4 5

on this family of sets, that if x1 > 1, if 1 ≤ x1 ≤ −1, if x1 < −1.

(5.162) becomes if u1 (t) > 1, if 1 ≤ u1 (t) ≤ −1, if u1 (t) < −1.

(5.165)

Since h1 , h2 are strictly convex, then Proposition 5.5.1 claims that weak and strong Pareto are the same. In view of Proposition 5.5.47, we determine all the weak unconstrained Pareto optima of h.

458

5 Applications

Figure 5.5.3. Graphical view of the paths u(t) = (u1 (t), u2 (t)) and u(t) − x ¯ u0  for different starting point u(0) = u0 . The results of simulation for exponential convergence in [−1, 1] × R.

In Figure 5.5.3 (left), depending on the initial data, we observe the convergence of the trajectories to different weak Pareto optima. From each starting point u(0) in $(-\infty,-1]\times\mathbb{R}$ (resp. $[1,+\infty)\times\mathbb{R}$), we end up at the weak Pareto optimum $(-1,0)$ (resp. $(1,0)$). The numerical examples illustrated in Figure 5.5.3 (right) are in agreement with the convergence rates predicted in Theorem 5.5.38 (b). We also notice that this rapid convergence is related to the starting domain of the system (5.165). Indeed, exponential convergence is preserved when we start from a point in $[-1,1]\times\mathbb{R}$, while starting in either of the other parts of $\mathbb{R}^2$, this convergence becomes linear whenever t is large enough.

Example 5.5.49 (Mid-strongly convex case). Consider the convex functions $h_1(x_1,x_2) = \tfrac12(x_1^2 + x_2^2)$ and $h_2(x_1,x_2) = x_1$. We can easily verify that $h_1$ is strongly convex, $h_2$ is non-strictly convex, and that the corresponding set of weak Pareto optima is $S_h = (-\infty, 0] \times \{0\}$. Likewise, for each $(x_1,x_2)\in\mathbb{R}^2$,

$$\operatorname{conv}\{\nabla h_1(x_1,x_2), \nabla h_2(x_1,x_2)\} = [(x_1,\, x_2),\ (1,\, 0)]$$

and

$$P_{\operatorname{conv}\{\nabla h_i(x_1,x_2)\}}(0,0) = \begin{cases} (1,\ 0) & \text{if } x_1 \ge 1,\\ (x_1,\ x_2) & \text{if } (2x_1-1)^2 + 4x_2^2 \le 1,\\ \dfrac{x_2}{(x_1-1)^2 + x_2^2}\,(x_2,\ 1 - x_1) & \text{elsewhere.} \end{cases}$$

The corresponding dynamic system (5.162) is

$$(\dot u_1(t), \dot u_2(t)) = \begin{cases} (-1,\ 0) & \text{if } u_1(t) \ge 1,\\ -(u_1(t),\ u_2(t)) & \text{if } (2u_1-1)^2 + 4u_2^2 \le 1,\\ -\dfrac{u_2(t)}{(u_1(t)-1)^2 + u_2(t)^2}\,(u_2(t),\ 1 - u_1(t)) & \text{elsewhere.} \end{cases}\tag{5.166}$$


Figure 5.5.4. In this case, the simulation results for exponential convergence are also valid as long as the path u(t) crosses the closed Euclidean ball $B\big((\tfrac12, 0), \tfrac12\big)$.

If u(t) (the solution of (5.166)) lies in the half-plane $[1,+\infty)\times\mathbb{R}$, its path is horizontal. Similarly, in the closed disc $D = B\big((\tfrac12, 0), \tfrac12\big)$, the trajectory u(t) goes straight to the border point $(0,0)$; located in the other parts of $\mathbb{R}^2$, it follows a circular path centered at $(1,0)$ and reaches a point of the half-line $\mathbb{R}_- \times \{0\}$ while it remains in this domain. In this case, the exponential convergence is also confirmed as long as the path u(t) crosses the set D (Figure 5.5.4).

Example 5.5.50 (Non-strictly convex case). Now, we consider two convex (non-strictly convex) functions $h_1(x_1,x_2) = \tfrac12 (x_1 - x_2)^2$ and $h_2(x_1,x_2) = x_2$. The corresponding set of weak Pareto optima is $S_h = \{(x_1,x_2)\in\mathbb{R}^2 \mid x_2 = x_1\}$, and for each $(x_1,x_2)\in\mathbb{R}^2$,

$$P_{\operatorname{conv}\{\nabla h_i(x_1,x_2)\}}(0,0) = \begin{cases} (0,\ 1) & \text{if } x_1 - x_2 < -1,\\ (x_1 - x_2,\ x_2 - x_1) & \text{if } -\tfrac12 \le x_1 - x_2 \le 0,\\ \dfrac{x_1 - x_2}{(x_1-x_2)^2 + (1 + x_1 - x_2)^2}\,(1 + x_1 - x_2,\ x_1 - x_2) & \text{elsewhere.} \end{cases}$$

Denote by $v(t) = u_1(t) - u_2(t)$. Then (5.162) becomes

$$(\dot u_1(t), \dot u_2(t)) = \begin{cases} (0,\ -1) & \text{if } v(t) \le -1,\\ (-v(t),\ v(t)) & \text{if } -\tfrac12 \le v(t) \le 0,\\ \dfrac{-v(t)}{v(t)^2 + (1 + v(t))^2}\,(1 + v(t),\ v(t)) & \text{elsewhere.} \end{cases}\tag{5.167}$$
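The closed-form projections above can be cross-checked numerically: the metric projection of the origin onto a segment [a, b] has an elementary formula, and comparing it with the piecewise expression guards against sign slips. A small sketch (our own code, written under the reconstruction of the formulas given above):

```python
import numpy as np

def proj_origin_onto_segment(a, b):
    # Metric projection of the origin onto the segment [a, b].
    d = b - a
    t = np.clip(-a @ d / (d @ d), 0.0, 1.0)
    return a + t * d

def grad_h1(x):  # h1(x1, x2) = 0.5 * (x1 - x2)**2
    v = x[0] - x[1]
    return np.array([v, -v])

def grad_h2(x):  # h2(x1, x2) = x2
    return np.array([0.0, 1.0])

x = np.array([0.7, 0.2])     # v = x1 - x2 = 0.5 > 0: the "elsewhere" branch
p = proj_origin_onto_segment(grad_h1(x), grad_h2(x))
v = x[0] - x[1]
closed_form = v / (v**2 + (1 + v)**2) * np.array([1 + v, v])
print(p, closed_form)        # the two projections coincide: (0.3, 0.1)
```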


Figure 5.5.5. Here, we only suppose $h_1, h_2$ convex. The convergence rate obtained in this non-strictly convex case is $O(1/t^3)$.

Indeed, by doing so, the underlying dynamics is preserved. We illustrate this in the right plot of Figure 5.5.5, where we show the estimate $O(1/t^3)$ of the norm $\|u(t) - \bar x_{u_0}\|$ whenever t is large enough. This further underlines the interest of solving (WVOP) via the first-order dynamic system (5.162). We note that this dynamic method has recently been investigated in the references [28, 30], and therefore deserves a development for second-order systems

$$0 = \ddot u(t) + \alpha \dot u(t) + [\operatorname{conv}\{\nabla h_i(u(t))\}]^{\oplus} = \ddot u(t) + \alpha \dot u(t) + P_{\operatorname{conv}\{\nabla h_i(u(t))\}}(0),\tag{5.168}$$

where the convergence rates for the values $\|h(u(t)) - h(\bar x_{u_0})\|$ would become faster. In the case of scalar optimization, this is illustrated by a substantial recent literature (see [25–27, 501, 521]).

In the examples solved above, we notice that the proposed systems (5.165), (5.166) and (5.167) admit different solutions u(t), depending on the initial points $u(t_0)$, that converge to different points in $S_h$ as $t \to +\infty$. Now, to reach a common target point (a sought optimal point) $\bar x_0$, we use the systems (5.163) and (5.164). Indeed, in Figures 5.5.6, 5.5.7 and 5.5.8, we numerically test the asymptotic behavior for the three elementary examples of multiobjective unconstrained optimization above. We adopt the differential systems (5.163) and (5.164), which are easier to solve, and for which the convergence of any solution toward a single optimal point is ensured with the same rate of convergence. Coming back to the first example (strongly convex case), we first note that all solutions

$$u(t) = (1 - 2\alpha,\ 0) + e^{-t}\big(u_1(0) - 1 + 2\alpha,\ u_2(0)\big)$$

of the proposed dynamical system

$$0 = \dot u(t) + \alpha \nabla h_1(u(t)) + (1-\alpha)\nabla h_2(u(t))\tag{5.169}$$


converge to a single point $\bar x_0 = (1 - 2\alpha,\ 0)$ in the solution set $S_h = [-1,+1]\times\{0\}$ whenever the parameter $\lambda^0 = (\alpha,\ 1-\alpha)$ is fixed, and that the position of this limit point depends on the choice of the multiplier $\lambda^0 \in \Delta_2$. In Figure 5.5.6, for $\alpha = \tfrac12$, we first remark that, starting from any initial point u(0) in $\mathbb{R}^2$, we reach the same limit point $\bar x = (0,0)$. Secondly, the convergence rate of $\|u(t) - \bar x_0\|^2$ is similar to that observed for the solution of the dynamic system

$$\dot u(t) + [\operatorname{conv}\{\nabla h_1(u(t)),\ \nabla h_2(u(t))\}]^{\oplus} = 0.$$

Figure 5.5.6. Here, for the strongly convex case, we fix the parameter $\alpha = \tfrac12$. Then, starting from any initial point u(0) in $\mathbb{R}^2$, we reach the same limit point $\bar x = (0,0)$ in $S_h$.

We pass to the second example (mid-strongly convex case). For $\alpha > 0$, the solution of (5.169) is $u(t) = \big(-\tfrac{1-\alpha}{\alpha},\ 0\big) + e^{-\alpha t}\big(u_1(0) + \tfrac{1-\alpha}{\alpha},\ u_2(0)\big)$, and the limit as $t \to +\infty$ is $\bar x_\alpha = \big(-\tfrac{1-\alpha}{\alpha},\ 0\big)$, which belongs to $S_h = (-\infty, 0] \times \{0\}$. Our comment on Figure 5.5.7 is similar to that made on Figure 5.5.6.

In the non-strictly convex case, the corresponding set of weak Pareto optima $S_h = \{(x_1,x_2)\in\mathbb{R}^2 \mid x_2 = x_1\}$ is unbounded and the problem is ill-conditioned, so its resolution requires reinforcing the proposed dynamical system by a strongly monotone factor. So, in this example, we switch to the system (5.164):

$$\begin{aligned} \dot u_1(t) + 0.99\,(u_1(t) - u_2(t)) + (t+1)^{-1/1000}\,u_1(t) &= 0,\\ \dot u_2(t) + 0.99\,(u_2(t) - u_1(t)) + 0.01 + (t+1)^{-1/1000}\,u_2(t) &= 0, \end{aligned}$$

to ensure the convergence of all solutions to a single point in $S_h$. As can be seen in Figure 5.5.8, the first component of $\lambda^0 = \big(\tfrac{99}{100},\ \tfrac{1}{100}\big)$ is favored because of the weight of the component $h_1$ in the problem $\min_{\mathbb{R}^2} h$.


Figure 5.5.7. For the mid-strongly convex case and $\alpha = \tfrac12$ in (5.163), starting from any initial point u(0) in $\mathbb{R}^2$, we reach more rapidly the same limit point $\bar x = (-1, 0)$ in $S_h$.

Figure 5.5.8. Even if the components of h are only convex, we manage to reach a unique solution (here $\bar x = (0,0)$) when the parameter $\alpha = \tfrac{99}{100}$ is close to 1 and the system (5.163) is penalized as in (5.164).

The choice of the penalty function $\varepsilon(t) = (t+1)^{-1/1000}$ is also required in order to accelerate the convergence of $\|u(t) - \bar x_0\|^2$.
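For completeness, a minimal Python sketch of the penalized system above (an explicit Euler discretization; the step size, horizon, and starting point are our own illustrative choices, not from the text):

```python
import numpy as np

lam = np.array([0.99, 0.01])   # lambda^0, favoring the component h1

def eps(t):
    # Slowly vanishing Tikhonov penalty, as in the text.
    return (t + 1.0) ** (-1.0 / 1000.0)

def rhs(t, u):
    # u' = -(0.99 * grad h1(u) + 0.01 * grad h2(u) + eps(t) * u), cf. (5.164).
    v = u[0] - u[1]
    grad = lam[0] * np.array([v, -v]) + lam[1] * np.array([0.0, 1.0])
    return -(grad + eps(t) * u)

u = np.array([2.0, -1.0])
t, dt = 0.0, 1e-2
for _ in range(50000):
    u += dt * rhs(t, u)
    t += dt
print(u)   # approaches a point near the diagonal x2 = x1, close to (0, 0)
```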

6 Scalar Optimization under Uncertainty

In this chapter, we consider scalar problems under uncertainty and introduce three general approaches (vector approach, set approach and nonlinear scalarization) to robustness and stochastic programming. These approaches permit a unified treatment of a large variety of models from robust optimization and stochastic programming, respectively. In this chapter, we review several classical concepts, both from robust optimization and from stochastic programming, and interpret them in the light of vector optimization (see Section 3.1), set optimization (see Sections 2.3 and 3.2) and using nonlinear scalarizing functionals (see Section 2.4), whenever this is possible and leads to meaningful characterizations. Under relatively mild assumptions, it turns out that solutions that are optimal for robust optimization or stochastic programming models are typically obtained as (weakly) efficient solutions of an appropriately formulated deterministic vector optimization counterpart problem. Similarly, nonlinear scalarizing functionals which yield (weakly) efficient solutions of the respective vector optimization counterparts can be applied to achieve similar results. Since the set-based interpretation does not reflect distributional information, it cannot be used in all cases. However, it provides an interesting interpretation of many classes of robust optimization problems where the focus is on worst and/or best case decisions. The assertions in this chapter are derived by Klamroth, Köbis, Schöbel and Tammer in [334].

6.1 Robustness and Stochastic Programming

It is well known that real-world problems are often contaminated with uncertain data, especially due to unknown future developments, errors in measurement, or incomplete information in the development of the model. Uncertainties can arise from future demands that have to be forecasted in order to plan a manufacturing process or the design of a network. Furthermore, uncertainties have to be taken into consideration in mathematical finance and risk theory. Due to market changes, changing preferences of customers and unforeseeable


events, assets are naturally affected by uncertainties. If the decision-maker does not take such uncertainties into account when solving real-world optimization problems, the generated solutions may perform very poorly under some scenarios, or may even be infeasible in some cases. Robust optimization and stochastic programming are two significant approaches for treating optimization problems under uncertainty. In robust optimization, it is typically assumed that the uncertain parameters belong to a certain set that is specified before the optimization problem is solved. The focus lies on the worst case or the worst-case regret of a solution to the problem, so no probability distribution for the uncertain data is required. The aim is typically to ensure that the solution is feasible and performs acceptably well in every possible future scenario, no matter how unlikely that scenario may be. Soyster [512] introduced robust optimization problems in 1973. These problems have been extensively discussed in the literature; see Kouvelis and Yu [351], Ben-Tal, El Ghaoui and Nemirovski [53], and Goerigk and Schöbel [222] for an overview of recently published developments and new concepts of robustness. In stochastic optimization, on the other hand, knowledge of the probability distribution of the uncertain parameter is assumed. Instead of focusing on the worst-case scenario, the objective functions are commonly given by the expected performance of a solution, or by criteria arising from stochastic dominance. Stochastic programming models often incorporate two-stage and multi-stage processes, reflecting the situation that the realization of the uncertain parameters unfolds over time. This field of optimization has been intensely studied; we refer to Birge and Louveaux [69] for an overview of the topic. In real-world applications, however, the required probability distribution is often not known. Additionally, solving a stochastic optimization problem does not guarantee a solution that is actually feasible for an uncertain scenario. Instead, using chance constraints, one often aims at obtaining a solution that is most likely to be feasible, or which has a good objective value with high probability. Rockafellar and Royset present an examination of superquantiles and their wide application to risk and random variables in [486]. Furthermore, in [487–489], Rockafellar and Royset investigate decision-making under uncertainty in a unified setting incorporating risk measures. Especially in [487], the authors explain how risk measures provide suitable models for handling such uncertainty. The interesting definition of measures of residual risk is introduced in [488]. This concept extends the notion of risk measures by considering an additional random variable and is motivated by trade-offs recognized by forecasters and investors. It provides important connections to regression, surrogate models, and distributional robustness. Moreover, Rockafellar and Royset show in [489] that even models involving utility functions blend into the unifying framework of measures of risk.


An introduction to both of these a priori approaches (stochastic and robust optimization) will be presented in Section 6.2, along with examples for a linear optimization problem. Other approaches dealing with uncertainty are online optimization, where decisions have to be made ad hoc in real time and without knowing all parameters involved in the problem (compare Grötschel, Krumke and Rambau [237]), and a posteriori approaches like parametric optimization (for example, see Klatte and Kummer [335]). Robust optimization and stochastic programming have usually been treated separately in the literature because of the basic distinction in modeling requirements (probability distribution unknown or known, respectively). Under the assumption that the set of scenarios is finite, and defining one objective function for each scenario, an analysis from the point of view of multiobjective optimization shows that both concepts lead to solutions that are efficient with respect to the same multiobjective counterpart problem (see Chapter 3, especially Sections 3.1, 3.4, and 3.7). Klamroth et al. [333] derived a unifying framework covering both robust and stochastic optimization based on this analysis. Furthermore, in [333], it is explained that the nonlinear translation invariant functional (2.42) (see Section 2.4), applied to the multiobjective counterpart problems, provides many of the known concepts from robust optimization and stochastic programming. In the case of a finite set of uncertainties, the interaction between scalar optimization under uncertainty and associated deterministic multiobjective counterpart problems, including a critical evaluation, is studied by Hites et al. in [277]. There is a strong focus on special robustness concepts and on transferring ideas from one modeling paradigm to another. In particular, Kouvelis and Sayin [350, 497] transfer methods from scalar optimization under uncertainty to deterministic multiobjective optimization to develop efficient solution procedures for the latter case. Conversely, multiobjective counterpart problems are employed to develop approximation algorithms for several classes of robust optimization problems; compare Aissi, Bazgan, and Vanderpooten [3, 4]. Furthermore, multiobjective counterpart problems can be employed to derive new concepts of robustness, see Ogryczak [448–450], Ogryczak and Śliwiński [453], and Perny et al. [472]. However, a deterministic multiobjective counterpart problem with a finite number of objectives is in general not sufficient to comprehensively represent the problem for infinite sets of uncertainties. Some of our assertions concerning deterministic vector optimization counterparts are closely related to recent papers by Rockafellar and Royset [486–489], who examine decision-making under uncertainty in a unified setting involving risk measures. This relationship will be explained in more detail in Remark 6.3.46 in Section 6.3.10. Deterministic multiobjective optimization counterpart problems are also studied by Engau in [186], where the concept of proper efficiency is generalized to the case of a countably infinite number of objective functions. With this approach, weakly efficient solutions can be


avoided, not only in the context of optimization under uncertainty. Gabrel, Murat, and Thiele present in [203] a discussion of recent developments in robust optimization, including a discussion of the relationship between different modeling paradigms. The aim of this chapter is to present unified approaches for deriving robust counterpart problems for scalar optimization problems under uncertainty with infinite uncertainty sets, using solution concepts from vector optimization (see Sections 3.1, 3.4 and 3.7), set optimization (see Section 2.3 and Definition 4.4.2) and nonlinear scalarization (see Section 2.4). We show that many concepts of robustness can be described using these approaches. Using these unifying approaches, we obtain important properties and useful interpretations of solutions to optimization problems under uncertainty. These properties are very helpful for an evaluation by the decision-maker as well as for developing algorithms for solving optimization problems under uncertainty. Employing these results, it is possible to apply well-known methods from vector optimization, set-valued optimization, and scalarization techniques to generate robust solutions to optimization problems under uncertainty. Furthermore, the analysis presented in this chapter naturally leads to new concepts for handling uncertainties, see Sections 6.3.9 and 6.5.8. The applicability of these unifying approaches is much more extensive. Innovative modeling ideas can be motivated by employing these and similar interpretations. Several concepts for handling uncertainty in multiobjective optimization have been developed in the literature, see [173, 177, 220, 284, 285] for an overview. These concepts may also be interpreted in functional spaces; first results are given in [88]. The results in the next sections are derived by Klamroth, Köbis, Schöbel, and Tammer in [334].

6.2 Scalar Optimization under Uncertainty

6.2.1 Formulation of Optimization Problems under Uncertainty

In this section, we are dealing with optimization problems (Q(ξ)) depending on uncertain parameters $\xi \in \mathcal{U} \subseteq \mathbb{R}^L$, where $\mathcal{U}$ denotes the set of uncertain parameters. The problem under consideration is given for fixed parameters (scenarios) $\xi \in \mathcal{U}$ as

$$\begin{aligned} f(x,\xi) &\to \inf\\ \text{s.t. } h_i(x,\xi) &\le 0, \quad i = 1,\dots,m,\\ x &\in \mathbb{R}^n, \end{aligned}\tag{Q(\xi)}$$

where $f : \mathbb{R}^n \times \mathcal{U} \to \mathbb{R}$, $h_i : \mathbb{R}^n \times \mathcal{U} \to \mathbb{R}$, $i = 1,\dots,m$.


The set of feasible solutions of (Q(ξ)) is denoted by $\mathcal X(\xi) = \{x\in\mathbb{R}^n : h_i(x,\xi)\le 0,\ i=1,\dots,m\}$. For every fixed scenario $\xi\in\mathcal U$, we suppose that the optimization problem (Q(ξ)) has an optimal solution; in particular, $\mathcal X(\xi) \ne \emptyset$. The uncertain parameters are modeled by ξ. In many real-world optimization problems, such uncertainties arise. For instance, uncertainties can be induced by measurement errors, by modeling assumptions, or simply because a future parameter is unknown before the optimization problem has to be solved. Consequently, it seems appropriate to treat some of the input data as uncertain, and it is important to find a way to handle uncertain data in optimization problems. Throughout this chapter, we suppose that the parameters (scenarios) ξ are unknown. The parameters ξ belong to an uncertainty set $\mathcal U$. We assume that $\mathcal U$ is nonempty and compact, and in general not finite. These are common assumptions in the context of robust optimization. Examples of uncertainty sets are given by interval-based uncertainties (see [63]), polyhedral uncertainties (see [499]), or ellipsoidal uncertainty sets (see [53]). An optimization problem under uncertainty P(U) is defined as a family of parametrized optimization problems

$$(Q(\xi),\ \xi\in\mathcal U).\tag{6.1}$$

The nominal value is denoted by $\hat\xi\in\mathcal U$. This is the value of ξ that we believe to be true today. Furthermore, the nominal problem is denoted by $(Q(\hat\xi))$. If the set of uncertain parameters $\mathcal U = \{\xi^1,\dots,\xi^q\}$ is finite, each scenario can be taken as an objective function. Then, for an element $x\in\mathbb{R}^n$, we obtain a q-dimensional vector $F_x\in\mathbb{R}^q$ which contains $f(x,\xi^i)$ in its ith coordinate. In order to compare two elements x and y, binary relations or order structures (see Section 2.1) for the vectors $F_x$ and $F_y$ are used. Using this idea, many concepts of robust optimization and of stochastic programming can be characterized using vector optimization problems as counterparts, see Section 6.3 and [333]. If the set of uncertain parameters $\mathcal U$ is not a finite set, we obtain not vectors but functions, i.e., $F_x : \mathcal U \to \mathbb{R}$, where $F_x(\xi) = f(x,\xi)$ contains the objective function value of x in scenario $\xi\in\mathcal U$. Let us introduce the real linear functional space $\mathbb{R}^{\mathcal U}$ of all mappings $F : \mathcal U \to \mathbb{R}$. We therefore need binary relations (or order structures) on the real linear functional space $\mathbb{R}^{\mathcal U}$ in order to compare two elements x and y (see Section 6.3). The following approaches to optimization problems under uncertainty have been established in the literature:

• Stochastic Optimization: This idea goes back to Dantzig [152] and has been intensely studied ever since. Stochastic optimization assumes that the uncertain parameter is probabilistic. Usually, one optimizes some cost function using the expected value of the uncertain parameter. We refer to [69] for more background on this field of research.


• Robust Optimization: Robustness, on the other hand, pursues a different approach to optimization problems with uncertainties, relying not on a probability distribution but only on the uncertainty set. Typically, one wishes to optimize the worst-case scenario, which we refer to as strict robustness [53]. Many other robustness concepts exist; see [222] for a survey.
• Parametric Optimization: One first looks for an optimal solution of the optimization problem with a fixed scenario and afterwards discusses stability properties of this solution based on continuity properties of the optimal-value mapping (see Klatte, Kummer [335]).

Approaches for handling optimization problems under uncertainty (Q(ξ), ξ ∈ U) frequently depend on the formulation of a deterministic counterpart problem in order to identify a most preferred solution under variable modeling assumptions. Let us illustrate this approach with the following example from robust optimization and from stochastic programming, respectively (see [334, Example 1]).

Example 6.2.1. (Linear optimization problem under uncertainty) We consider an objective function (involving uncertainties) $f(x,\xi) := c(\xi)^T x$, where $x\in\mathbb{R}^n$ is the decision variable and $c(\xi) = c_0 + \sum_{i=1}^{L} \xi_i c_i \in \mathbb{R}^n$ for given $c_i\in\mathbb{R}^n$, $i = 0,\dots,L$, and $\xi\in\mathbb{R}^L$. The uncertainty set is $\mathcal U := \{\xi\in\mathbb{R}^L \mid -1 \le \xi_i \le 1,\ i = 1,\dots,L\}$ and the uncertain parameters ξ belong to $\mathcal U$. Our aim is to solve

$$\Big(c_0 + \sum_{i=1}^{L} \xi_i c_i\Big)^T x \to \inf_{x\in\mathbb{R}^n},$$

where $\xi\in\mathcal U$ is unknown, so that we consider the uncertain optimization problem (Q(ξ), ξ ∈ U).

Strictly robust solution: A very well-known concept of robustness is to minimize the worst-case objective function value, i.e., our goal is to solve

$$\sup_{\xi\in\mathcal U}\Big(c_0 + \sum_{i=1}^{L} \xi_i c_i\Big)^T x \to \inf_{x\in\mathbb{R}^n}.$$

Minimizing the expectation: We minimize the expected objective value if a probability distribution over $\mathcal U$ is known, i.e., our aim is to solve

$$\mathbb{E}\Big[\Big(c_0 + \sum_{i=1}^{L} \xi_i c_i\Big)^T x\Big] \to \inf_{x\in\mathbb{R}^n}.$$

In this chapter, we discuss unifying settings for these two and several other known concepts which deal with optimization under uncertainty. Furthermore, these general settings motivate new concepts of robustness.
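For the box uncertainty set of Example 6.2.1, both counterpart objectives are available in closed form: the worst case equals $c_0^T x + \sum_{i=1}^{L} |c_i^T x|$ (each $\xi_i$ is pushed to ±1), and, assuming for illustration that the $\xi_i$ are independent and uniform on $[-1,1]$ (our own choice of distribution), the expectation reduces to $c_0^T x$. A minimal Python sketch under these assumptions (the data values are arbitrary):

```python
import numpy as np

def worst_case_value(x, c0, C):
    # sup over xi in [-1,1]^L of (c0 + sum_i xi_i c_i)^T x
    # = c0^T x + sum_i |c_i^T x|, with each xi_i set to sign(c_i^T x).
    return c0 @ x + np.sum(np.abs(C @ x))

def expected_value(x, c0, C):
    # E[(c0 + sum_i xi_i c_i)^T x] = c0^T x when E[xi_i] = 0,
    # e.g., for xi_i independent and uniform on [-1, 1].
    return c0 @ x

c0 = np.array([1.0, -2.0])
C = np.array([[0.5, 0.0], [0.3, -0.1]])  # rows are c_1, c_2
x = np.array([1.0, 1.0])
print(worst_case_value(x, c0, C), expected_value(x, c0, C))
```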


The aim of this chapter is to discuss the basic ideas which allow a unified treatment of well-known concepts from robust optimization and stochastic programming. In order to do this, we employ concepts from vector optimization (see Chapter 3), set relations (see Section 2.3), and nonlinear scalarization using the functional (2.42) discussed in Section 2.4. In the following subsections, we recall different concepts of robustness known from the literature.

6.2.2 Strict Robustness

A very prominent concept is strict robustness (also known under the name min-max robustness). Soyster introduced this concept in [512]. It has been extensively studied in the literature; compare Ben-Tal et al. [53] for an overview of corresponding results with respect to various uncertainty sets. In the conservative concept of strict robustness, a robust solution is required to be feasible for every scenario $\xi\in\mathcal U$, and the worst case is considered in the objective function. The strictly robust counterpart (RC) of the optimization problem under uncertainty (Q(ξ), ξ ∈ U) is given by

$$\begin{aligned} \rho_{RC}(x) = \sup_{\xi\in\mathcal U} f(x,\xi) &\to \inf\\ \text{s.t. } \forall\xi\in\mathcal U:\ h_i(x,\xi) &\le 0,\quad i=1,\dots,m,\\ x &\in \mathbb{R}^n. \end{aligned}\tag{RC}$$

A feasible solution to (RC) is called strictly robust, and we denote the set of strictly robust solutions by

$$\mathcal A_{strict} := \{x\in\mathbb{R}^n \mid \forall\xi\in\mathcal U:\ h_i(x,\xi)\le 0,\ i=1,\dots,m\}.\tag{6.2}$$

6.2.3 Optimistic Robustness

While the concept of strict robustness is focused on the worst case, so that it can be considered a pessimistic model, optimistic robustness aims at minimizing the best realization of the objective value of a feasible solution over all scenarios. This leads to the optimization problem

$$\begin{aligned} \rho_{oRC}(x) = \inf_{\xi\in\mathcal U} f(x,\xi) &\to \inf\\ \text{s.t. } \forall\xi\in\mathcal U:\ h_i(x,\xi) &\le 0,\quad i=1,\dots,m,\\ x &\in \mathbb{R}^n, \end{aligned}\tag{oRC}$$

which is called the optimistically robust counterpart. Problem (oRC) resembles the optimistic counterpart problem studied by Beck and Ben-Tal in [49] in the context of duality theory in robust optimization. In contrast to [49], a feasible solution to (oRC) is required to be strictly robust; the set of feasible solutions of (oRC) is hence given by $\mathcal A_{strict}$.


6.2.4 Regret Robustness

The following concept of robustness is known under the names min-max regret robustness or deviation robustness. In this concept, one evaluates a solution by comparing it with the best possible solution for the realized scenario in the worst case. This means that the function to be minimized is $\sup_{\xi\in\mathcal U}(f(x,\xi) - f^*(\xi))$, where $f^*(\xi)\in\mathbb{R}$ is the optimal value of problem (Q(ξ)) for the fixed parameter $\xi\in\mathcal U$. This robustness concept has been used in many applications, e.g., in scheduling or location theory, mostly in cases where no uncertainty in the constraints is present (see Kouvelis and Yu [351]), and for spanning trees and matroids (see Yaman et al. [567, 568]). The regret robust counterpart of the optimization problem under uncertainty (Q(ξ), ξ ∈ U) is given by

$$\begin{aligned} \rho_{rRC}(x) = \sup_{\xi\in\mathcal U}\big(f(x,\xi) - f^*(\xi)\big) &\to \inf\\ \text{s.t. } \forall\xi\in\mathcal U:\ h_i(x,\xi) &\le 0,\quad i=1,\dots,m,\\ x &\in \mathbb{R}^n. \end{aligned}\tag{rRC}$$

It is important to mention that we require $x\in\mathcal A_{strict}$, i.e., we only permit strictly robust solutions as admissible solutions for the regret robust counterpart. Now, we consider a function $f^*\in\mathbb{R}^{\mathcal U}$, $f^* : \mathcal U \to \mathbb{R}$, defined by

$$f^*(\xi) := \inf\{F_x(\xi) \mid x\in\mathcal A_{strict}\}.\tag{6.3}$$

Because we supposed that an optimal solution exists for every fixed scenario ξ (see Section 6.2.1), the inf in (6.3) can be replaced by min. $f^*$ is called the ideal solution to problem (Q(ξ)). In general, $f^*$ is not an admissible solution to (rRC); that is, there need not exist $x\in\mathcal A_{strict}$ with $f(x,\xi) = f^*(\xi)$ for all $\xi\in\mathcal U$. Thus, the regret robust counterpart (rRC) minimizes the maximum deviation (over all scenarios) between the objective value $f(x,\xi)$ of the implemented solution and the ideal solution $f^*(\xi)$.
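When $\mathcal U$ is discretized into finitely many scenarios and only finitely many candidates from $\mathcal A_{strict}$ are compared, the regret counterpart can be evaluated directly from a table of objective values. A minimal sketch (the data are arbitrary illustrative numbers, not from the text):

```python
import numpy as np

# f_vals[j, k] = f(x_j, xi_k) for candidates x_j in A_strict and scenarios xi_k.
f_vals = np.array([[4.0, 2.0, 3.0],
                   [3.0, 3.0, 2.0],
                   [5.0, 1.0, 4.0]])

f_star = f_vals.min(axis=0)              # ideal values f*(xi_k), cf. (6.3)
regret = (f_vals - f_star).max(axis=1)   # rho_rRC(x_j) = max_k (f - f*)
best = regret.argmin()
print(f_star, regret, best)              # candidate index 0 minimizes the max regret
```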

6.2.5 Reliability

Requiring the existence of a strictly robust solution x is a very conservative approach. In the case that the set of strictly robust solutions $\mathcal A_{strict}$ is empty, or $\mathcal A_{strict}$ only contains solutions with a high objective function value, the concept of reliability is a less conservative approach. In this approach, some slack is added to the right-hand side values of the constraints, i.e., the constraints $h_i(x,\xi)\le 0$ are relaxed to $h_i(x,\xi)\le\delta_i$ for some given $\delta_i\in\mathbb{R}_+$, $i=1,\dots,m$. However, it is required that the original constraints be fulfilled for the nominal scenario $\hat\xi$, i.e., $h_i(x,\hat\xi)\le 0$, $i=1,\dots,m$. Ben-Tal and Nemirovski proposed in [55] the reliable counterpart of (6.1), given by


$$\begin{aligned} \rho_{Rely}(x) = \sup_{\xi\in\mathcal U} f(x,\xi) &\to \inf\\ \text{s.t. } h_i(x,\hat\xi) &\le 0,\quad i=1,\dots,m,\\ \forall\xi\in\mathcal U:\ h_i(x,\xi) &\le \delta_i,\quad i=1,\dots,m,\\ x &\in \mathbb{R}^n. \end{aligned}\tag{Rely}$$

We call a feasible solution to (Rely) reliable. The reliable counterpart (Rely) reduces to the strictly robust problem (RC) in the case $\delta_i = 0$ for all $i=1,\dots,m$. Furthermore, the reliable counterpart can be seen as a strictly robust counterpart with the uncertain constraint functions $\tilde h_i(x,\xi) = h_i(x,\xi) - \delta_i$ and the deterministic constraints $h_i(x,\hat\xi)\le 0$, $i=1,\dots,m$. Already this interpretation suggests that many results can be transferred from strict robustness to reliability. The set of reliable solutions is denoted by

$$\mathcal A_{rely} := \{x\in\mathbb{R}^n \mid \forall\xi\in\mathcal U:\ h_i(x,\xi)\le\delta_i,\ h_i(x,\hat\xi)\le 0,\ i=1,\dots,m\}.$$

6.2.6 Adjustable Robustness

The concept of adjustable robustness is a two-stage approach in robust optimization which supposes that there are two types of variables:
• The here-and-now variables x, whose values have to be fixed before the scenario realizes.
• The wait-and-see variables u, which may be decided after the scenario becomes known. For fixed x and the realized scenario ξ, these variables are then chosen such that the objective function is minimized.

Ben-Tal et al. [54] introduced the concept of adjustable robustness; it is intensely discussed in [53]. In the adjustable robust counterpart, one looks for a solution x that, if adjusted optimally in the second stage, reaches the best overall objective value in the worst case. In terms of the here-and-now variables $x\in\mathbb{R}^n$ and the wait-and-see variables $u\in\mathbb{R}^p$, we reformulate (Q(ξ)) and obtain a problem under uncertainty with n + p variables:

$$\begin{aligned} f(x,u,\xi) &\to \inf\\ \text{s.t. } h_i(x,u,\xi) &\le 0,\quad i=1,\dots,m,\\ x\in\mathbb{R}^n,\ u&\in\mathbb{R}^p. \end{aligned}\tag{Q(\xi)}$$

The set of feasible second-stage variables u for given first-stage variables x and a given scenario ξ is defined by

$$\mathcal G(x,\xi) := \{u\in\mathbb{R}^p : h_i(x,u,\xi)\le 0,\ i=1,\dots,m\}.$$


Then, the second-stage optimization problem can be formulated as

$$Q(x,\xi) = \inf\ f(x,u,\xi)\quad \text{s.t. } u\in\mathcal G(x,\xi).\tag{6.4}$$

We now suppose that (Q(ξ)) has an optimal solution for every fixed scenario $\xi\in\mathcal U$, and furthermore, that Q(x,ξ) has an optimal solution for all $\xi\in\mathcal U$ and $x\in\mathbb{R}^n$. So, we can replace inf by min, and Q(x,ξ) is well defined. We call a solution $x\in\mathbb{R}^n$ adjustable robust if it can be completed to a solution (x, u) that is admissible for the problem (Q(ξ)) for each scenario $\xi\in\mathcal U$, i.e., if $\mathcal G(x,\xi)\ne\emptyset$ for each scenario $\xi\in\mathcal U$. The set of adjustable robust solutions is defined by

$$\mathcal A_{adjust} := \{x\in\mathbb{R}^n \mid \forall\xi\in\mathcal U\ \exists u\in\mathbb{R}^p : h_i(x,u,\xi)\le 0,\ i=1,\dots,m\} = \{x\in\mathbb{R}^n \mid \forall\xi\in\mathcal U : \mathcal G(x,\xi)\ne\emptyset\}$$

and the adjustable robust counterpart is given by

$$\rho_{aRC}(x) = \sup_{\xi\in\mathcal U} Q(x,\xi) \to \inf_{x\in\mathcal A_{adjust}}.\tag{aRC}$$

It is important to mention that the adjustable robust counterpart is closely related to the strictly robust counterpart studied in Section 6.2.2 (only with a modified feasible set), similar to the case of reliability (see Section 6.2.5). Furthermore, in the context of two-stage stochastic programming in Section 6.2.9, adjustable robust solutions will again be important in a second-stage decision.

6.2.7 Minimizing the Expectation

In many real-world problems, the decision-maker is not interested in minimizing the worst case. If a probability distribution on $\mathcal U$ is known, the decision-maker may prefer an acceptable objective function value in the average case, obtained by minimizing the expectation (compare [46], [69]). We suppose that a continuous density function $p : \mathcal U \to \mathbb{R}_+$ on $\mathcal U$ is given, i.e.,

$$P(\mathcal U') = \int_{\mathcal U'} p(\xi)\,d\xi$$

for all sets $\mathcal U' \subseteq \mathcal U$. Then, the expectation of the random variable $F_x$ with realizations $F_x(\xi) = f(x,\xi)$ for all $\xi\in\mathcal U$ can be calculated by

$$\mathbb{E}(F_x) = \int_{\mathcal U} f(x,\xi)p(\xi)\,d\xi.$$

In the following, we suppose that the integral on the right-hand side exists for all feasible $F_x$, $x\in\mathbb{R}^n$. In particular, we suppose that Y contains all random


variables $F\in\mathbb{R}^{\mathcal U}$ such that $\int_{\mathcal U} F(\xi)p(\xi)\,d\xi$ exists. Remark that this yields $C(\mathcal U,\mathbb{R}) \subseteq Y \subseteq \mathbb{R}^{\mathcal U}$. Now, we formulate the problem of minimizing the expectation as follows:

$$\begin{aligned} \rho_{Exp}(x) = \int_{\mathcal U} f(x,\xi)p(\xi)\,d\xi &\to \inf\\ \text{s.t. } \forall\xi\in\mathcal U:\ h_i(x,\xi) &\le 0,\quad i=1,\dots,m,\\ x &\in \mathbb{R}^n. \end{aligned}\tag{Exp}$$

The set of admissible solutions of (Exp) is denoted (as in (6.2)) by $\mathcal A_{strict} = \{x\in\mathbb{R}^n \mid \forall\xi\in\mathcal U : h_i(x,\xi)\le 0,\ i=1,\dots,m\}$. In Section 6.3.6, we reformulate (Exp) as a vector optimization problem, and in Section 6.5.6 using the nonlinear translation invariant functional (2.42) introduced and discussed in Section 2.4. An approach via set relations (see Section 6.4) is not possible here because the information on the distribution cannot be reflected in a (one-dimensional) outcome set.

6.2.8 Stochastic Dominance

Again, we interpret $F_x$ (for a given $x\in\mathbb{R}^n$) as a random variable with values $F_x(\xi) = f(x,\xi)$, $\xi\in\mathcal U$. In the concept of stochastic dominance, the distribution functions of random variables are compared with each other. For doing this, a probability density function on $\mathcal U$ is needed. For simplicity of the description, we use the same assumptions as in Section 6.2.7. Ogryczak [447], Ogryczak and Ruszczyński [452], and Michalowski and Ogryczak [404, 405] discussed decision problems with real-valued outcomes (such as return, net profit, or number of lives saved) in the case that the probability distribution of the uncertainties is known. Furthermore, Ogryczak and Ruszczyński [451] studied more general problems of comparing real-valued random variables (i.e., their distributions). In the stochastic dominance approach, random variables are compared by pointwise comparison of performance functions constructed from their distribution functions (see [451, 452]). In Section 6.2.1, we introduced the real linear functional space $\mathbb{R}^{\mathcal U}$. Now, we consider the functional space $Y := \mathbb{R}^{\mathcal U}$ as the space of random variables. The functions $F\in Y$ are considered as random variables with the probability measure P. Therefore, we use $P\{F\le\eta\} = P(\mathcal U')$, with $\mathcal U' = \{\xi\in\mathcal U \mid F(\xi)\le\eta\}$. Consider a random variable $R\in Y$ with probability measure P. The first-degree stochastic dominance (FSD) (see [451]) relies on the right-continuous cumulative distribution function:


$$\Psi_R^1(\eta) := \Psi_R(\eta) := P\{R\le\eta\} \quad \text{for all } \eta\in\mathbb{R},$$

and is given for random variables $R, S\in Y$ as

$$S \succeq_{FSD} R \ :\Longleftrightarrow\ \Psi_S(\eta) \ge \Psi_R(\eta) \ \text{ for all } \eta\in\mathbb{R}.$$

Moreover, the second-degree stochastic dominance (SSD) relies on measuring the area below the distribution function $\Psi_R$ (we suppose the existence of all integrals now and in the following),

$$\Psi_R^2(\eta) := \int_{-\infty}^{\eta} \Psi_R(s)\,ds \quad \text{for } \eta\in\mathbb{R},$$

and is given for random variables $R, S\in Y$ by

$$S \succeq_{SSD} R \ :\Longleftrightarrow\ \Psi_S^2(\eta) \ge \Psi_R^2(\eta) \ \text{ for all } \eta\in\mathbb{R}.$$

We consider the real linear functional space $Y = \mathbb{R}^{\mathcal U}$ of random variables $F_x : \mathcal U \to \mathbb{R}$ with outcomes $F_x(\xi) = f(x,\xi)$ for all $\xi\in\mathcal U$ in order to formulate the first-degree and second-degree stochastic dominance problems. Suppose that a probability distribution P on $\mathcal U$ is given and $\mathcal A_{stoch\text{-}dom}\subseteq\mathbb{R}^n$. Let us formulate the first-degree and the second-degree stochastic dominance problems:

First-degree stochastic dominance problem: Find FSD-solutions, i.e., solutions $x\in\mathcal A_{stoch\text{-}dom}$ such that for all $y\in\mathcal A_{stoch\text{-}dom}$ the implication $F_y \succeq_{FSD} F_x \Longrightarrow F_x \succeq_{FSD} F_y$ holds.

Second-degree stochastic dominance problem: Find SSD-solutions, i.e., solutions $x\in\mathcal A_{stoch\text{-}dom}$ such that for all $y\in\mathcal A_{stoch\text{-}dom}$ the implication $F_y \succeq_{SSD} F_x \Longrightarrow F_x \succeq_{SSD} F_y$ holds.

We will describe the vector approach for stochastic dominance in Section 6.3.7. Because the distribution of the values $f(x,\xi)$, $\xi\in\mathcal U$, cannot (as in Section 6.2.7) be reflected in the (one-dimensional) outcome sets, a set-valued representation (see Section 6.4) of stochastic dominance is not possible. Furthermore, a scalarization (see Section 6.5) that defines one objective value for every feasible element is not useful in this context for either concept of stochastic dominance, since a given problem may have a set of efficient outcomes rather than a unique optimal objective value.
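For sampled random variables, the FSD and SSD relations as displayed above can be checked on a grid. A minimal numerical sketch following the sign convention of the displayed definitions (the distributions, sample sizes, and grid are our own illustrative choices):

```python
import numpy as np

def cdf(sample, grid):
    # Empirical (right-continuous) cumulative distribution function on a grid.
    return np.array([(sample <= eta).mean() for eta in grid])

def fsd(s_sample, r_sample, grid):
    # S >=_FSD R  iff  Psi_S(eta) >= Psi_R(eta) for all eta.
    return np.all(cdf(s_sample, grid) >= cdf(r_sample, grid))

def ssd(s_sample, r_sample, grid):
    # S >=_SSD R via cumulated CDFs (a simple rectangle-rule approximation).
    d = np.diff(grid, prepend=grid[0])
    return np.all(np.cumsum(cdf(s_sample, grid) * d)
                  >= np.cumsum(cdf(r_sample, grid) * d))

rng = np.random.default_rng(0)
R = rng.normal(1.0, 1.0, 10000)   # stochastically larger outcomes
S = rng.normal(0.0, 1.0, 10000)   # stochastically smaller outcomes
grid = np.linspace(-5.0, 6.0, 200)
print(fsd(S, R, grid), ssd(S, R, grid))  # True True: S dominates R here
```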


6.2.9 Two-Stage Stochastic Programming

In two-stage stochastic programming, we suppose (as in Sections 6.2.7 and 6.2.8) that a continuous probability density function on $\mathcal U$ is given. We observe that two-stage stochastic programming shares features with adjustable robustness (Section 6.2.6) and with minimizing the expectation (see Section 6.2.7). The basic idea in two-stage stochastic programming is that a decision $x\in\mathbb{R}^n$ made in the first stage of the decision process can be adjusted in a second stage, once the realization of the random parameter $\xi\in\mathcal U$ is known, by taking an optimized recourse action $u\in\mathbb{R}^p$. For early references, we refer to Beale [48], Dantzig [152] and Tintner [546]; for a general introduction to stochastic programming, see Birge, Louveaux [69] and Shapiro et al. [502]. Let $\mathcal A_{2\text{-}stage}\subseteq\mathbb{R}^n$ be a set which describes the admissible solutions of the first-stage problem, and let Q(x,ξ) be the optimal objective value of the second-stage problem, given by

$$Q(x,\xi) = \inf\ f(x,u,\xi)\quad \text{s.t. } u\in\mathcal G(x,\xi).\tag{6.5}$$

In (6.5), the objective function f(x,u,ξ) and the set of feasible elements $\mathcal G(x,\xi)$ of the second-stage problem are parameterized w.r.t. the first-stage decision $x\in\mathcal A_{2\text{-}stage}$ and w.r.t. the uncertain parameter $\xi\in\mathcal U$. The second-stage problem ensures that, for given x and ξ, an optimal recourse action u is implemented in the second stage. Under these assumptions and preliminaries, a two-stage stochastic programming problem is given by

$$\rho_{2SP}(x) = \mathbb{E}[Q(x,\xi)] = \int_{\mathcal U} Q(x,\xi)p(\xi)\,d\xi \to \inf_{x\in\mathcal A_{2\text{-}stage}}.\tag{2SP}$$

The formulation in (2SP) permits several different interpretations: For example, $\mathcal A_{2\text{-}stage}$ could be defined as in (6.2), i.e., $\mathcal A_{2\text{-}stage} = \mathcal A_{strict} = \{x\in\mathbb{R}^n \mid \forall\xi\in\mathcal U : h_i(x,\xi)\le 0,\ i=1,\dots,m\}$ (see also Section 6.2.7). Alternatively, $\mathcal A_{2\text{-}stage}$ may also be determined as in the case of adjustable robustness (Section 6.2.6), i.e., $\mathcal A_{2\text{-}stage} = \mathcal A_{adjust} = \{x\in\mathbb{R}^n \mid \forall\xi\in\mathcal U : \mathcal G(x,\xi)\ne\emptyset\}$, involving the second-stage decisions. We suppose that the second-stage problem (6.5) has an optimal solution for each $x\in\mathbb{R}^n$ and $\xi\in\mathcal U$, and that the integral on the right-hand side of (2SP) exists for all admissible $x\in\mathbb{R}^n$.


Employing the interchangeability principle for two-stage stochastic programming (see, e.g., [502]), we can replace (2SP) and (6.5) by

$$\begin{aligned} \rho_{2SP}(x) = \mathbb{E}[f(x,u(\xi),\xi)] = \int_{\mathcal U} f(x,u(\xi),\xi)p(\xi)\,d\xi &\to \inf\\ \text{s.t. } x&\in\mathcal A_{2\text{-}stage},\\ u(\xi)&\in\mathcal G(x,\xi)\ \text{a.e. } \xi\in\mathcal U. \end{aligned}\tag{6.6}$$

Taking into account the assumption that the second-stage problem (6.5) has an optimal solution for every $x\in\mathbb{R}^n$ and $\xi\in\mathcal U$, we can replace the constraint induced by the feasibility of the second-stage problem by

$$u(\xi)\in\mathcal G(x,\xi)\quad\text{for all } \xi\in\mathcal U.\tag{6.7}$$
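To contrast (aRC) and (2SP) on one and the same second-stage value function, consider the following toy Python sketch (the objective, the grids, and the uniform distribution are our own illustrative assumptions, not taken from the text):

```python
import numpy as np

xi_grid = np.linspace(-1.0, 1.0, 201)   # discretized U, uniform weights
u_grid = np.linspace(-2.0, 2.0, 401)    # discretized recourse decisions

def Q(x, xi):
    # Second-stage value, cf. (6.5): best recourse u for fixed x and xi.
    return float(np.min((x - xi) ** 2 + (u_grid - xi) ** 2))

def rho_aRC(x):
    # Worst case of Q(x, .) over U, cf. (aRC).
    return max(Q(x, xi) for xi in xi_grid)

def rho_2SP(x):
    # Expectation of Q(x, .) for uniform xi, cf. (2SP).
    return np.mean([Q(x, xi) for xi in xi_grid])

xs = np.linspace(-1.0, 1.0, 21)
print(min(xs, key=rho_aRC), min(xs, key=rho_2SP))  # both select x = 0 here
```

For this symmetric toy problem, both criteria pick the same first-stage decision; in general, the worst-case and the average-case counterparts favor different solutions.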

6.3 Vector Optimization as Unifying Concept

In this section, we employ the solution concept well known from vector optimization (see Sections 3.1, 3.4 and 3.7) in order to characterize different concepts of robustness, so that the understanding and the algorithmic treatment of the robust counterpart problems can be improved. This approach generalizes the idea of multiobjective counterpart problems (discussed for finite sets of uncertainties in Klamroth et al. [333]) to the case of infinite sets of uncertainties. The underlying idea of this approach is derived by Rockafellar and Royset in [486–489] and by Engau in [186], and studied by Klamroth et al. in [334] in a broad scope; compare the overview in the summary in Section 6.3.10. For a formal description of the vector optimization approach (see the solution concepts introduced in Sections 3.1, 3.4 and 3.7), we consider the given optimization problem under uncertainty (Q(ξ), ξ ∈ U) and the space of all functions $F : \mathcal U \to \mathbb{R}$, denoted by $Y = \mathbb{R}^{\mathcal U}$. For a fixed element $x\in\mathbb{R}^n$, let us define $F_x\in Y$ by $F_x(\xi) := f(x,\xi)$. To compare elements of Y, we study different binary relations or order structures (see Definition 2.1.1) on the space Y, denoted by ≤. We know that (partial) order relations can be induced by cones (see Section 2.1.2). Suppose that $C\subset Y$ is a proper ($C\ne Y$ and $C\ne\{0\}$), closed, convex, and pointed cone. Then, C induces a partial order $\le_C$ given by

$$y_1 \le_C y_2 \ :\Longleftrightarrow\ y_1 \in y_2 - C \quad (\Longleftrightarrow\ y_2 \in y_1 + C).$$

We suppose that the functions $F_x(\xi) = f(x,\xi)$ are continuous in ξ for all feasible values of x, i.e., that $Y = C(\mathcal U,\mathbb{R})$, whenever we are dealing with the interior of an ordering cone. In the next definition, we introduce a special binary relation induced by a cone that will be of interest later in this section (compare the cone $C_+[0,1]$ in (2.6) as a special case).


Definition 6.3.1. Consider the cone $C_Y := \{F\in Y \mid \forall\xi\in\mathcal U : F(\xi)\ge 0\}$. The cone $C_Y$ induces the natural order relation $\le_{C_Y}$ on $Y = \mathbb{R}^{\mathcal U}$ for all $F, G\in Y$ by

$$F \le_{C_Y} G \ :\Longleftrightarrow\ G\in F + C_Y \ \Longleftrightarrow\ F(\xi)\le G(\xi)\ \text{for all } \xi\in\mathcal U.$$

In this approach, we are dealing with (weakly) efficient elements in subsets of Y (see Sections 3.4, 3.5, 3.7, and compare Definition 3.1.1 for ε = 0). Consider a nonempty subset $\mathcal F$ of Y and an order structure ≤ on Y. Recall that $F\in\mathcal F$ is an efficient element of $\mathcal F$ in Y w.r.t. ≤ if for all $G\in\mathcal F$: $G\le F \Longrightarrow F\le G$. Furthermore, if the order structure ≤ is induced by a proper, closed, convex cone C in Y with $\operatorname{int} C\ne\emptyset$, then $F\in\mathcal F$ is referred to as a weakly efficient element of $\mathcal F$ in Y w.r.t. $\le_C$ if $(F - \operatorname{int} C)\cap\mathcal F = \emptyset$. It is important to mention that an element $F\in\mathcal F$ is an efficient element of $\mathcal F$ in Y w.r.t. $\le_C$ (where $\le_C$ is induced by a proper convex cone C) if and only if $(F - C)\cap\mathcal F \subseteq F + C$ (see Sections 3.4, 3.7). If C is a proper, closed, convex cone with $\operatorname{int} C\ne\emptyset$, then efficiency implies weak efficiency.

Remark 6.3.2. Of course, the concept of efficiency (see Sections 3.1, 3.4 and 3.7) mentioned above is well known as (Edgeworth) Pareto efficiency in the context of vector or multiobjective optimization. A solution which is efficient with respect to the natural order relation $\le_{C_Y}$ is often called a Pareto solution in the literature.

For a given order relation (binary relation) ≤, we study the following vector optimization problem, where we look for efficient elements (see Sections 3.4, 3.7) of $\mathcal F$ w.r.t. ≤:

Compute efficient elements of $\mathcal F$ w.r.t. ≤.  (≤-VOP)

We will explain that many concepts of robustness for optimization problems under uncertainty can be interpreted as a vector optimization problem (≤-VOP), and, conversely, every order structure ≤ induces a certain concept of robustness for the treatment of uncertainty. Although not all such concepts necessarily have a meaningful interpretation in the context of optimization under uncertainty, this relationship provides a systematic means of deriving and understanding deterministic counterparts of an optimization problem under uncertainty.


Example 6.3.3. ([334, Example 2]) An element $F\in\mathcal F$ is an efficient element of $\mathcal F$ in Y w.r.t. the natural order relation $\le_{C_Y}$ of Y in the sense of Definition 6.3.1 if and only if

$$\nexists\, G\in\mathcal F\setminus\{F\} : \forall\xi\in\mathcal U : (G - F)(\xi)\le 0.$$

If we consider $Y = C(\mathcal U,\mathbb{R})$, then $\operatorname{int} C_Y = \{F\in Y \mid \forall\xi\in\mathcal U : F(\xi) > 0\}$. An element $F\in\mathcal F$ is a weakly efficient element of $\mathcal F$ in Y w.r.t. $\le_{C_Y}$ if and only if

$$\nexists\, G\in\mathcal F : \forall\xi\in\mathcal U : (G - F)(\xi) < 0.\tag{6.8}$$

Our intention in the following sections is to specify the order relation ≤ in order to characterize different concepts of robustness via the vector approach.

6.3.1 Vector Approach for Strict Robustness

The strictly robust counterpart (RC) (see Section 6.2.2) can be formulated as a vector optimization problem in the functional space $Y = \mathbb{R}^{\mathcal U}$, employing the ideas from Sections 3.1, 3.4 and 3.7. The set of strictly robust outcome functions in Y is denoted by

$$\mathcal F_{strict} := \{F_x\in Y \mid x\in\mathcal A_{strict}\}.\tag{6.9}$$

Now, our intention is to represent the strictly robust counterpart problem (RC) as a vector optimization problem (see Sections 3.4, 3.7). For doing this, we suppose that two functions $F_x, F_y\in Y$ are given and consider the following order relation on Y (see Section 2.1.1 and Figure 6.3.1):

$$F_x \le_{\sup} F_y \ :\Longleftrightarrow\ \sup_{\xi\in\mathcal U} F_x(\xi) \le \sup_{\xi\in\mathcal U} F_y(\xi).\tag{6.10}$$

Figure 6.3.1. Functions F, G with $F \le_{\sup} G$.


The order relation $\le_{\sup}$ corresponds to the max-order relation in multiobjective optimization in the special case of a finite uncertainty set $\mathcal U = \{\xi_1,\dots,\xi_q\}$, $q\in\mathbb{N}$ (see, for example, Ehrgott [172]). Thus, we will refer to $\le_{\sup}$ as the sup-order relation in the following. As in the finite-dimensional case, the sup-order relation $\le_{\sup}$ is not compatible with addition; this means that, for three elements $F_x, F_y, F_z\in Y$, $F_x \le_{\sup} F_y$ does not necessarily imply $(F_x + F_z) \le_{\sup} (F_y + F_z)$. Therefore, $\le_{\sup}$ cannot be represented by an ordering cone. However, it has the following important properties.

Remark 6.3.4. $\le_{\sup}$ is reflexive and transitive. Moreover, $\le_{\sup}$ is a total preorder.

Employing the sup-order relation $\le_{\sup}$, we introduce the following vector optimization problem, where we look for efficient elements (see Sections 3.4, 3.7) of $\mathcal F_{strict}$ w.r.t. $\le_{\sup}$:

Compute efficient elements of $\mathcal F_{strict}$ w.r.t. $\le_{\sup}$.  (≤sup-VOP)

The order relation $\le_{\sup}$ permits us to formulate the strictly robust optimization problem (see Section 6.2.2) as a vector optimization problem. We show in the next theorem ([334, Theorem 1]) that an element $x\in\mathbb{R}^n$ is an optimal solution to (RC) if and only if $F_x$ is an efficient element of $\mathcal F_{strict}$ w.r.t. the sup-order relation $\le_{\sup}$.

Theorem 6.3.5. $x\in\mathbb{R}^n$ solves (RC) $\Longleftrightarrow$ $F_x$ is an efficient element for (≤sup-VOP).

Proof. Consider $\bar x\in\mathcal A_{strict}$. Then,

$$\begin{aligned} \bar x \text{ is an optimal solution to (RC)} &\Longleftrightarrow \sup_{\xi\in\mathcal U} f(\bar x,\xi) \le \sup_{\xi\in\mathcal U} f(x,\xi) \ \text{for all } x\in\mathcal A_{strict}\\ &\Longleftrightarrow \sup_{\xi\in\mathcal U} F_{\bar x}(\xi) \le \sup_{\xi\in\mathcal U} F_x(\xi) \ \text{for all } x\in\mathcal A_{strict}\\ &\Longleftrightarrow F_{\bar x} \le_{\sup} F_x \ \text{for all } x\in\mathcal A_{strict}\\ &\Longleftrightarrow F_{\bar x} \le_{\sup} G \ \text{for all } G\in\mathcal F_{strict}. \end{aligned}$$

The assertion follows taking into account that $\le_{\sup}$ is a total preorder. □
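For a finite uncertainty set, Theorem 6.3.5 can be observed directly: the sup-order is total, so a minimizer of the suprema is exactly an efficient element. A minimal sketch with arbitrary illustrative data:

```python
import numpy as np

# F[j, k] = F_{x_j}(xi_k): outcome functions over a finite uncertainty set.
F = np.array([[3.0, 1.0],
              [2.0, 2.5],
              [4.0, 0.5]])

sup_vals = F.max(axis=1)           # sup_xi F_x(xi) for each candidate
opt = sup_vals.argmin()            # optimal solution of (RC)

# Efficiency w.r.t. the total preorder <=_sup: F_opt <=_sup G for all G.
assert np.all(sup_vals[opt] <= sup_vals)
print(opt, sup_vals)               # candidate 1 attains the minimal supremum
```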

Theorem 6.3.5 shows that optimal solutions of the strictly robust counterpart (RC) correspond to outcome functions whose suprema are efficient. Now, we describe the relationship between the sup-order relation $\le_{\sup}$ and the natural order relation $\le_{C_Y}$ introduced in Definition 6.3.1.

Remark 6.3.6. $\forall F, G\in Y : \quad F \le_{C_Y} G \ \Longrightarrow\ F \le_{\sup} G.$


Nevertheless, in general, this does not imply that every efficient element w.r.t. $\le_{\sup}$ is also an efficient element w.r.t. $\le_{C_Y}$, or vice versa; in other words, an optimal solution to (RC) need not be an efficient solution, or vice versa. Iancu and Trichakis [282] have shown that, under some additional assumptions, there exist optimal solutions to (RC) which are efficient. They call them PRO robust solutions. However, one can show the following general relation on efficient elements, derived in [334, Lemma 2].

Lemma 6.3.7. Consider $Y = C(\mathcal U,\mathbb{R})$. Suppose that every $F\in\mathcal F_{strict}$ attains its supremum on $\mathcal U$. If $F\in\mathcal F_{strict}$ is an efficient element for (≤sup-VOP), i.e., an efficient element of $\mathcal F_{strict}$ w.r.t. $\le_{\sup}$, then F is a weakly efficient element of $\mathcal F_{strict}$ w.r.t. the natural order relation $\le_{C_Y}$.

Proof. Suppose that $F\in\mathcal F_{strict}$ is an efficient element for (≤sup-VOP), i.e., an efficient element of $\mathcal F_{strict}$ in Y w.r.t. $\le_{\sup}$. Taking into account that $\le_{\sup}$ is a total preorder, this yields

$$\sup_{\xi\in\mathcal U} F(\xi) \le \sup_{\xi\in\mathcal U} G(\xi) \quad \text{for all } G\in\mathcal F_{strict}.\tag{6.11}$$

Now, we assume that F is not a weakly efficient element of $\mathcal F_{strict}$ in Y w.r.t. the natural order relation $\le_{C_Y}$ of Y. Then, there exists $G\in\mathcal F_{strict}$ with $\forall\xi\in\mathcal U : G(\xi) < F(\xi)$; compare (6.8). Because G attains its supremum on $\mathcal U$, this yields

$$\sup_{\xi\in\mathcal U} G(\xi) = G(\bar\xi) < F(\bar\xi) \le \sup_{\xi\in\mathcal U} F(\xi)$$

for some $\bar\xi\in\mathcal U$, in contradiction to (6.11). □

Employing this relationship together with Theorem 6.3.5, we get that $F_x$ is weakly efficient for every optimal solution x to (RC), as shown in [334, Corollary 1].

Corollary 6.3.8. Consider $Y = C(\mathcal U,\mathbb{R})$ and suppose that for every solution $x\in\mathcal A_{strict}$, the corresponding $F_x\in\mathcal F_{strict}$ attains its supremum on $\mathcal U$. Then, for every optimal solution x to the strictly robust counterpart (RC), $F_x$ is a weakly efficient element of $\mathcal F_{strict}$ w.r.t. the natural order relation $\le_{C_Y}$ in Y.

6.3.2 Vector Approach for Optimistic Robustness

In this section, we formulate the optimistically robust counterpart (oRC) introduced in Section 6.2.3 as a vector optimization problem in the functional space $Y = \mathbb{R}^{\mathcal U}$. Again, we employ the set of strictly robust outcome functions $\mathcal F_{strict} = \{F_x\in Y \mid x\in\mathcal A_{strict}\}$ in Y (see (6.9)).


Our intention is to represent the optimistically robust counterpart (oRC) as a vector optimization problem (see Sections 3.4, 3.7). For doing this, we suppose that two functions $F_x, F_y\in Y$ are given and consider the following order relation on Y:

$$F_x \le_{\inf} F_y \ :\Longleftrightarrow\ \inf_{\xi\in\mathcal U} F_x(\xi) \le \inf_{\xi\in\mathcal U} F_y(\xi).\tag{6.12}$$

We call this order relation the inf-order relation. The inf-order relation is closely related to the sup-order relation (6.10), taking into account

$$\inf_{\xi\in\mathcal U} F(\xi) \le \inf_{\xi\in\mathcal U} G(\xi) \ \Longleftrightarrow\ \sup_{\xi\in\mathcal U}(-G(\xi)) \le \sup_{\xi\in\mathcal U}(-F(\xi)),$$

and thus

$$F \le_{\inf} G \ \Longleftrightarrow\ (-G) \le_{\sup} (-F).\tag{6.13}$$

Therefore, the inf-order relation $\le_{\inf}$ defines a total preorder (compare Remark 6.3.4). Like $\le_{\sup}$, the inf-order relation $\le_{\inf}$ cannot be represented by an ordering cone in Y.

Remark 6.3.9. $\le_{\inf}$ is reflexive, transitive, and total. Hence, $\le_{\inf}$ is a total preorder.

Using the inf-order relation $\le_{\inf}$, we introduce the following vector optimization problem, where we look for efficient elements (see Sections 3.4, 3.7) of $\mathcal F_{strict}$ w.r.t. $\le_{\inf}$:

Compute efficient elements of $\mathcal F_{strict}$ w.r.t. $\le_{\inf}$.  (≤inf-VOP)

Analogously to Theorem 6.3.5, the order relation $\le_{\inf}$ allows us to formulate the optimistically robust optimization problem (oRC) (see Section 6.2.3) as a vector optimization problem. In the next theorem ([334, Theorem 1]), we show that an element $x\in\mathbb{R}^n$ is an optimal solution to (oRC) if and only if $F_x$ is an efficient element of $\mathcal F_{strict}$ w.r.t. the inf-order relation $\le_{\inf}$.

Theorem 6.3.10. $x\in\mathbb{R}^n$ solves (oRC) $\Longleftrightarrow$ $F_x$ is an efficient element for (≤inf-VOP).

Proof. We consider $\bar x\in\mathcal A_{strict}$ and obtain

$$\begin{aligned} \bar x \text{ is an optimal solution to (oRC)} &\Longleftrightarrow \inf_{\xi\in\mathcal U} f(\bar x,\xi) \le \inf_{\xi\in\mathcal U} f(x,\xi) \ \text{for all } x\in\mathcal A_{strict}\\ &\Longleftrightarrow \inf_{\xi\in\mathcal U} F_{\bar x}(\xi) \le \inf_{\xi\in\mathcal U} F_x(\xi) \ \text{for all } x\in\mathcal A_{strict}\\ &\Longleftrightarrow F_{\bar x} \le_{\inf} F_x \ \text{for all } x\in\mathcal A_{strict}. \end{aligned}$$

The assertion follows since $\le_{\inf}$ is a total preorder. □


Now, we study the relationship between the inf-order relation $\le_{\inf}$ and the natural order relation $\le_{C_Y}$. Let us first note that the natural order relation $\le_{C_Y}$ fulfills

$$F \le_{C_Y} G \ \Longleftrightarrow\ (-G) \le_{C_Y} (-F).\tag{6.14}$$

Taking into account (6.14), the assertions in Remark 6.3.6, Lemma 6.3.7 and Corollary 6.3.8 can easily be adapted to the case of $\le_{\inf}$ (as shown in [334, Lemma 4, Corollary 2]):

Remark 6.3.11. $\forall F, G\in Y : \quad F \le_{C_Y} G \ \Longrightarrow\ F \le_{\inf} G.$

However, in general this does not imply that every efficient element of $\mathcal F_{strict}$ w.r.t. $\le_{\inf}$ is also an efficient element of $\mathcal F_{strict}$ w.r.t. $\le_{C_Y}$, or vice versa. The only relation on efficient elements is the following:

Lemma 6.3.12. Consider $Y = C(\mathcal U,\mathbb{R})$ and suppose that every $F\in\mathcal F_{strict}$ attains its infimum on $\mathcal U$. If $F\in\mathcal F_{strict}$ is a solution of (≤inf-VOP), i.e., it is an efficient element of $\mathcal F_{strict}$ in Y w.r.t. $\le_{\inf}$, then F is a weakly efficient element of $\mathcal F_{strict}$ in Y w.r.t. the natural order relation $\le_{C_Y}$.

As in Corollary 6.3.8, this means that $F_x$ is a weakly efficient element of $\mathcal F_{strict}$ in Y w.r.t. the natural order relation $\le_{C_Y}$ for all optimal solutions x to (oRC).

Corollary 6.3.13. Consider $Y = C(\mathcal U,\mathbb{R})$ and suppose that for every solution $x\in\mathcal A_{strict}$, the corresponding $F_x\in\mathcal F_{strict}$ attains its infimum on $\mathcal U$. Then, for every optimal solution x to the optimistically robust counterpart (oRC), $F_x$ is a weakly efficient element of $\mathcal F_{strict}$ w.r.t. the natural order relation $\le_{C_Y}$ in Y.

6.3.3 Vector Approach for Regret Robustness

The regret robust counterpart (rRC) (see Section 6.2.4) can also be directly reformulated as a vector optimization problem of type (≤-VOP) in the real linear functional space $Y = \mathbb{R}^{\mathcal U}$, similar to the strictly robust counterpart (RC). Let us consider $\mathcal F_{strict}$ as in (6.9) and let two functions $F_x, F_y\in Y$ be given. Then, the order relation $\le_{regret}$ on Y is defined by (see Figure 6.3.2)

$$F_x \le_{regret} F_y \ :\Longleftrightarrow\ \sup_{\xi\in\mathcal U}\big(F_x(\xi) - f^*(\xi)\big) \le \sup_{\xi\in\mathcal U}\big(F_y(\xi) - f^*(\xi)\big).$$

Remark 6.3.14. The following relationship between $\le_{regret}$ and the sup-order relation $\le_{\sup}$ introduced in (6.10) holds:


Figure 6.3.2. Regret robustness with $G \le_{regret} F$.

$$F_x \le_{regret} F_y \ \Longleftrightarrow\ (F_x - f^*) \le_{\sup} (F_y - f^*).$$

Thus, we can interpret $\le_{regret}$ as a sup-order relation with reference point $f^*$, while the sup-order relation $\le_{\sup}$ has the origin $f^0 :\equiv 0\in Y$ as reference point. Intuitively, we shift the reference point from the origin $f^0$ (for $\le_{\sup}$) to the ideal solution $f^*$ (for $\le_{regret}$). Analogously to the sup-order relation $\le_{\sup}$, we obtain:

Remark 6.3.15. The sup-order relation with reference point $f^*$ ($\le_{regret}$) is reflexive and transitive. Furthermore, $\le_{regret}$ is a total preorder.

It is important to mention that $\le_{regret}$ is (like $\le_{\sup}$) not compatible with addition. Thus, $\le_{regret}$ cannot be represented by an ordering cone. Using $\le_{regret}$, we introduce the following vector optimization problem, where we look for efficient elements (see Sections 3.4, 3.7) of $\mathcal F_{strict}$ w.r.t. $\le_{regret}$:

Compute efficient elements of $\mathcal F_{strict}$ w.r.t. $\le_{regret}$.  (≤regret-VOP)

We will see that the concept of regret robustness can be represented as a vector optimization problem (≤regret-VOP). The assertion in the next theorem ([334, Theorem 8]) shows that a solution $x\in\mathbb{R}^n$ is an optimal solution to (rRC) if and only if $F_x$ is an efficient element of $\mathcal F_{strict}$ w.r.t. $\le_{regret}$, i.e., with respect to the sup-order relation with reference point $f^*$.

Theorem 6.3.16. $x\in\mathbb{R}^n$ solves (rRC) $\Longleftrightarrow$ $F_x$ is an efficient element for (≤regret-VOP).

Proof. We consider $\bar x\in\mathcal A_{strict}$. Then, we obtain


$$\begin{aligned} \bar x \text{ is an optimal solution to (rRC)} &\Longleftrightarrow \sup_{\xi\in\mathcal U}\big(f(\bar x,\xi) - f^*(\xi)\big) \le \sup_{\xi\in\mathcal U}\big(f(x,\xi) - f^*(\xi)\big) \ \text{for all } x\in\mathcal A_{strict}\\ &\Longleftrightarrow (F_{\bar x} - f^*) \le_{\sup} (F_x - f^*) \ \text{for all } x\in\mathcal A_{strict}\\ &\Longleftrightarrow F_{\bar x} \le_{regret} F_x \ \text{for all } x\in\mathcal A_{strict}\\ &\Longleftrightarrow F_{\bar x} \le_{regret} G \ \text{for all } G\in\mathcal F_{strict}. \end{aligned}$$

As before, the result follows taking into account that $\le_{regret}$ is a total preorder. □

Remark 6.3.17. The following implication is easy to see:

$$\forall F, G\in Y : \quad F \le_{C_Y} G \ \Longrightarrow\ F \le_{regret} G.$$

Similarly to the corresponding assertion for $\le_{\sup}$, we obtain the following relationship ([334, Lemma 6]) between efficient elements of $\mathcal F_{strict}$ in Y w.r.t. $\le_{regret}$ and weakly efficient elements w.r.t. $\le_{C_Y}$.

Lemma 6.3.18. Consider $Y = C(\mathcal U,\mathbb{R})$ and suppose that the function $F - f^*$ attains its supremum on $\mathcal U$ for every $F\in\mathcal F_{strict}$. If $F\in\mathcal F_{strict}$ is an efficient element for (≤regret-VOP), then F is a weakly efficient element of $\mathcal F_{strict}$ in Y w.r.t. the natural order relation $\le_{C_Y}$.

Proof. Suppose that $F\in\mathcal F_{strict}$ is an efficient element of $\mathcal F_{strict}$ in Y w.r.t. $\le_{regret}$. Taking into account that $\le_{regret}$ is a total preorder, this means that

$$\sup_{\xi\in\mathcal U}\big(F(\xi) - f^*(\xi)\big) \le \sup_{\xi\in\mathcal U}\big(G(\xi) - f^*(\xi)\big) \quad \text{for all } G\in\mathcal F_{strict}.\tag{6.15}$$

We now suppose that F is not a weakly efficient element of $\mathcal F_{strict}$ in Y w.r.t. the natural order relation $\le_{C_Y}$ of Y. Therefore, there exists $G\in\mathcal F_{strict}$ with

$$\forall\xi\in\mathcal U : G(\xi) < F(\xi),$$

and hence

$$\big(G(\xi) - f^*(\xi)\big) < \big(F(\xi) - f^*(\xi)\big).$$

Because $(G - f^*)$ attains its supremum on $\mathcal U$, we obtain

$$\sup_{\xi\in\mathcal U}\big(G(\xi) - f^*(\xi)\big) = G(\bar\xi) - f^*(\bar\xi) < F(\bar\xi) - f^*(\bar\xi) \le \sup_{\xi\in\mathcal U}\big(F(\xi) - f^*(\xi)\big)$$

for some $\bar\xi\in\mathcal U$. This is a contradiction to (6.15). □

As in Corollary 6.3.8, we get that $F_x$ is weakly efficient for all optimal solutions x to (rRC) (see [334, Corollary 3]).

Corollary 6.3.19. Consider $Y = C(\mathcal U,\mathbb{R})$ and suppose that for every solution $x\in\mathcal A_{strict}$, the corresponding $F_x - f^*$ attains its supremum on $\mathcal U$. Then, for every optimal solution x to the regret robust counterpart (rRC), $F_x$ is a weakly efficient element of $\mathcal F_{strict}$ w.r.t. the natural order relation $\le_{C_Y}$ in Y.


6.3.4 Vector Approach for Reliability

Similarly to the approaches in the previous sections, the reliable counterpart (Rely) introduced in Section 6.2.5 can be reformulated as a vector optimization problem of type (≤-VOP) in the functional space $Y = \mathbb{R}^{\mathcal U}$. For doing this, we denote the set of reliable outcome functions in Y by

$$\mathcal F_{rely} := \{F_x\in Y \mid x\in\mathcal A_{rely}\}.\tag{6.16}$$

Again, we consider the order relation $\le_{\sup}$ that was introduced in Section 6.3.1 and carry over the results obtained there. We introduce the following vector optimization problem, where we look for efficient elements (see Sections 3.4, 3.7) of $\mathcal F_{rely}$ w.r.t. $\le_{\sup}$:

Compute efficient elements of $\mathcal F_{rely}$ w.r.t. $\le_{\sup}$.  (≤rely-VOP)

We will see that the concept of reliability can be represented as a vector optimization problem (≤rely-VOP). The assertion in the next theorem ([334, Theorem 11]) shows that a solution $x\in\mathbb{R}^n$ is an optimal solution to (Rely) if and only if $F_x$ is an efficient element of $\mathcal F_{rely}$ w.r.t. $\le_{\sup}$, i.e., with respect to the sup-order relation.

Theorem 6.3.20. $x\in\mathbb{R}^n$ solves (Rely) $\Longleftrightarrow$ $F_x$ is an efficient element for (≤rely-VOP).

For the concept of reliability, we also have an analogous relationship between optimal reliable solutions and efficient elements w.r.t. the natural ordering $\le_{C_Y}$. This assertion is shown in [334, Lemma 8].

Lemma 6.3.21. Consider $Y = C(\mathcal U,\mathbb{R})$ and suppose that every $F\in\mathcal F_{rely}$ attains its supremum on $\mathcal U$. If $F\in\mathcal F_{rely}$ is an efficient element for (≤rely-VOP), then F is a weakly efficient element of $\mathcal F_{rely}$ w.r.t. the natural order relation $\le_{C_Y}$.

Analogously to the corresponding assertions in the previous sections, this means that $F_x$ is weakly efficient for all optimal solutions x to (Rely) (see [334, Corollary 4]).

Corollary 6.3.22. Consider $Y = C(\mathcal U,\mathbb{R})$ and suppose that for every solution $x\in\mathcal A_{rely}$, the corresponding $F_x\in\mathcal F_{rely}$ attains its supremum on $\mathcal U$. Then, for each optimal solution x to the reliable counterpart (Rely), $F_x$ is a weakly efficient element of $\mathcal F_{rely}$ w.r.t. the natural order relation $\le_{C_Y}$ in Y.


6.3.5 Vector Approach for Adjustable Robustness

We consider Y = R^U. Using the notation introduced in Section 6.2.6, we define the functions Fx ∈ Y, for x ∈ R^n, by

  Fx(ξ) := Q(x, ξ) = inf{f(x, u, ξ) | u ∈ G(x, ξ)}.

Moreover, we define Fadjust := {Fx ∈ Y | x ∈ Aadjust}. For ρaRC introduced in (aRC), we get

  ρaRC(x) = sup_{ξ∈U} Fx(ξ).

Again, we consider the sup-order relation as ordering relation:

  Fx ≤sup Fy :⇐⇒ sup_{ξ∈U} Fx(ξ) ≤ sup_{ξ∈U} Fy(ξ).

Now, we introduce the following vector optimization problem, where we are looking for efficient elements (see Sections 3.4, 3.7) of Fadjust w.r.t. ≤sup:

  Compute efficient elements of Fadjust w.r.t. ≤sup.   (≤adjust-VOP)

In the next theorem ([334, Theorem 14]), we will see that the concept of adjustable robustness can be represented as a vector optimization problem (≤adjust-VOP), i.e., we show that x ∈ R^n is an optimal solution to (aRC) if and only if Fx is an efficient element of Fadjust w.r.t. ≤sup, the sup-order relation.

Theorem 6.3.23. x ∈ R^n solves (aRC) ⇐⇒ Fx is an efficient element for (≤adjust-VOP).

Proof. For x̄ ∈ Aadjust, we obtain:

x̄ is an optimal solution to (aRC)
  ⇐⇒ sup_{ξ∈U} Q(x̄, ξ) ≤ sup_{ξ∈U} Q(x, ξ) for all x ∈ Aadjust
  ⇐⇒ sup_{ξ∈U} Fx̄(ξ) ≤ sup_{ξ∈U} Fx(ξ) for all x ∈ Aadjust
  ⇐⇒ Fx̄ ≤sup Fx for all x ∈ Aadjust
  ⇐⇒ Fx̄ ≤sup G for all G ∈ Fadjust.

The assertion follows since ≤sup is a total preorder. □
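Since ρaRC(x) = sup_{ξ∈U} inf{f(x, u, ξ) | u ∈ G(x, ξ)}, the equivalence in Theorem 6.3.23 can be made concrete for finite data. The following minimal Python sketch is our own illustration with hypothetical cost data and a recourse set that, for simplicity, does not depend on x.

```python
# Minimal sketch (hypothetical data): adjustable robustness with finite
# scenario and recourse sets. Q(x, xi) = min_u f(x, u, xi) is the optimal
# recourse value, rho_aRC(x) = max_xi Q(x, xi), and (aRC) minimizes rho_aRC.

scenarios = [0, 1]
recourse = [0, 1]                 # feasible second-stage actions u (x-independent here)

def f(x, u, xi):                  # hypothetical two-stage cost function
    return (x - xi) ** 2 + abs(u - xi)

def Q(x, xi):                     # best recourse once scenario xi is revealed
    return min(f(x, u, xi) for u in recourse)

def rho_aRC(x):                   # worst case of the optimal recourse values
    return max(Q(x, xi) for xi in scenarios)

print({x: rho_aRC(x) for x in [0, 1, 2]})   # {0: 1, 1: 1, 2: 4}
# x = 0 and x = 1 are both optimal for (aRC); by Theorem 6.3.23, their outcome
# functions Fx are exactly the efficient elements for (<=adjust-VOP).
```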




We obtain the following relationship between efficient elements of Fadjust w.r.t. ≤sup and weakly efficient elements w.r.t. the natural order relation ≤CY. Taking into account that the result of Lemma 6.3.7 depends only on the order relations ≤CY and ≤sup but not on the considered subset of functions F ⊆ Y, its conclusion, Corollary 6.3.8, remains true for the case of adjustable robustness, i.e., for every optimal solution x to (aRC), its outcome function is weakly efficient; this means there is no other solution x̄ that is strictly better in every scenario ξ ∈ U (see [334, Lemma 9 and Corollary 5]).

Lemma 6.3.24. Consider Y = C(U, R) and suppose that every F ∈ Fadjust attains its supremum on U. If F ∈ Fadjust is an efficient element for (≤adjust-VOP), then F is a weakly efficient element of Fadjust w.r.t. the natural order relation ≤CY.

Corollary 6.3.25. Consider Y = C(U, R) and suppose that the worst case is attained for every solution x ∈ Aadjust. Then, for every optimal solution x to the adjustable robust counterpart (aRC), Fx is a weakly efficient element of Fadjust w.r.t. the natural order relation ≤CY in Y.

6.3.6 Vector Approach for Minimizing the Expectation

In this section, we use the notation introduced in Section 6.2.7. Under the assumptions considered there, we define (see Figure 6.3.3) Fstrict := {Fx ∈ Y | x ∈ Astrict} and the order relation in Y by

  Fx ≤exp Fy :⇐⇒ ∫_U Fx(ξ)p(ξ)dξ ≤ ∫_U Fy(ξ)p(ξ)dξ.

Figure 6.3.3. Functions F, G with F ≤exp G: the weighted functions F·p and G·p are compared via the areas under their graphs over U.


Remark 6.3.26. The order relation ≤exp is reflexive and transitive, and it is a total preorder. Moreover, ≤exp is compatible with scalar multiplication and addition, i.e., for all F, G ∈ Y, it holds that F ≤exp G =⇒ (γF) ≤exp (γG) for all scalars γ ∈ R+, and F ≤exp G =⇒ (F + H) ≤exp (G + H) for all H ∈ Y. Therefore, ≤exp is generated by the cone {F ∈ Y | ∫_U F(ξ)p(ξ)dξ ≥ 0}.

We introduce the following vector optimization problem in the functional space Y, where we are looking for efficient elements (see Sections 3.4, 3.7) of Fstrict w.r.t. ≤exp:

  Compute efficient elements of Fstrict w.r.t. ≤exp.   (≤exp-VOP)

In the next theorem ([334, Theorem 17]), we will see that minimizing the expectation as a concept of robustness can be represented as a vector optimization problem (≤exp-VOP), i.e., we show that x ∈ R^n is an optimal solution to (Exp) if and only if Fx is an efficient element of Fstrict w.r.t. ≤exp.

Theorem 6.3.27. x ∈ R^n solves (Exp) ⇐⇒ Fx is an efficient element for (≤exp-VOP).

Proof. We consider x̄ ∈ Astrict and obtain:

x̄ is an optimal solution to (Exp)
  ⇐⇒ ∫_U f(x̄, ξ)p(ξ)dξ ≤ ∫_U f(x, ξ)p(ξ)dξ for all x ∈ Astrict
  ⇐⇒ Fx̄ ≤exp Fx for all x ∈ Astrict
  ⇐⇒ Fx̄ ≤exp G for all G ∈ Fstrict.

The assertion follows since ≤exp is a total preorder. □
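For a finite scenario set with probabilities p(ξ), the integrals defining ≤exp become weighted sums. The following minimal Python sketch, with hypothetical numbers of our own choosing, shows that ≤exp and ≤sup can order two outcome functions in opposite ways.

```python
# Minimal sketch (hypothetical data): on a finite scenario set with
# probabilities p, the integrals defining <=exp become weighted sums.

p = [0.8, 0.2]                    # scenario probabilities, summing to 1
F = [2.0, 10.0]                   # outcome function F(xi) of one solution
G = [4.0, 5.0]                    # outcome function G(xi) of another solution

exp_F = sum(fi * pi for fi, pi in zip(F, p))   # 0.8*2 + 0.2*10 = 3.6
exp_G = sum(gi * pi for gi, pi in zip(G, p))   # 0.8*4 + 0.2*5  = 4.2

print(exp_F <= exp_G)    # True:  F <=exp G, F is better in expectation
print(max(F) <= max(G))  # False: G <=sup F, G is better in the worst case
```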



Remark 6.3.28. Concerning the relationship between ≤exp and the natural ordering ≤CY, we easily obtain that ≤CY implies ≤exp: F ≤CY G =⇒ F ≤exp G. Moreover, we observe the same relationship between weakly efficient elements with respect to ≤CY and efficient elements with respect to ≤exp as in the previous sections: Every optimal solution to (Exp) is a weakly efficient element w.r.t. ≤CY (see [334, Lemma 10 and Corollary 6]).

Lemma 6.3.29. We consider Y = C(U, R). If Fx ∈ Fstrict is an efficient element for (≤exp-VOP), then Fx is a weakly efficient element of Fstrict w.r.t. the natural order relation ≤CY.


Proof. Assume that Fx ∈ Fstrict is an efficient element for (≤exp-VOP). Suppose Fx is not a weakly efficient element of Fstrict w.r.t. the natural order relation ≤CY in Y. Then, there exists Fy ∈ Fstrict with Fy(ξ) < Fx(ξ) for all ξ ∈ U. Therefore, we conclude that ∫_U Fy(ξ)p(ξ)dξ < ∫_U Fx(ξ)p(ξ)dξ. This is a contradiction to Fx being an efficient element of Fstrict w.r.t. ≤exp. □

Therefore, we also get here that, for every optimal solution x to (Exp), Fx is a weakly efficient element of Fstrict w.r.t. the natural order relation ≤CY.

Corollary 6.3.30. Consider Y = C(U, R) and suppose that x ∈ Astrict is an optimal solution to (Exp). Then, Fx is a weakly efficient element of Fstrict w.r.t. the natural order relation ≤CY in Y.

6.3.7 Vector Approach for Stochastic Dominance

In this section, we use the notation introduced in Section 6.2.8. Under the assumptions considered there, we define the set of feasible integrable outcome functions by Fstoch−dom := {Fx : x ∈ Astoch−dom}. Notice that Fstoch−dom ⊆ R^U. Since our intention is to characterize FSD-solutions (see Section 6.2.8) within our framework of vector optimization, we study the following order relation for two random variables (i.e., functions) Fx, Fy ∈ Fstoch−dom:

  Fx ≤stoc−1 Fy :⇐⇒ P{Fx ≤ η} ≥ P{Fy ≤ η} for all η ∈ R.   (6.17)

Remark 6.3.31. The order relation ≤stoc−1 is reflexive and transitive, but ≤stoc−1 is not a total preorder in general. Furthermore, because ≤stoc−1 is not compatible with addition, it cannot be induced by a cone.

We introduce the following vector optimization problem in the functional space Y, where we are looking for efficient elements (see Sections 3.4, 3.7) of Fstoch−dom w.r.t. ≤stoc−1:

  Compute efficient elements of Fstoch−dom w.r.t. ≤stoc−1.   (≤stoc−1-VOP)

In the next theorem (derived in [334, Theorem 19]), we will see that stochastic dominance as a concept of robustness can be represented as a vector optimization problem (≤stoc−1-VOP), i.e., we show that a solution x ∈ Astoch−dom is an FSD-solution if and only if its corresponding random variable Fx is an efficient element of Fstoch−dom w.r.t. ≤stoc−1. We omit the proof since the assertion is obvious from the definition of ≤stoc−1.


Theorem 6.3.32. x ∈ R^n is an FSD-solution ⇐⇒ Fx is an efficient element for (≤stoc−1-VOP).

Remark 6.3.33. We establish the following relationship between the first-degree stochastic dominance relation and the natural order relation ≤CY, taking into account that F ≤CY G implies P{F ≤ η} ≥ P{G ≤ η} for all η ∈ R:

  F ≤CY G =⇒ F ≤stoc−1 G.

It is important to mention that a relationship between (weakly) efficient elements w.r.t. ≤CY and (weakly) efficient elements w.r.t. ≤stoc−1 does not hold in general. This means that we cannot say much about (weakly) efficient solutions in relation to (weakly) first-degree stochastically dominated solutions, because Fx can attain any value on sets S ⊆ R^n with probability P(S) = 0.

Now, since our intention is to characterize SSD-solutions (see Section 6.2.8) within our framework of vector optimization, we study the following order relation for two random variables (i.e., functions) Fx, Fy ∈ Fstoch−dom:

  Fx ≤stoc−2 Fy :⇐⇒ ∫_{−∞}^{η} P{Fx ≤ s} ds ≥ ∫_{−∞}^{η} P{Fy ≤ s} ds for all η ∈ R,   (6.18)

where we again suppose that the integrals exist for all feasible Fx. We get the same result as for ≤stoc−1:

Remark 6.3.34. ≤stoc−2 is reflexive and transitive. The order relation ≤stoc−2 is not a total preorder in general.

We introduce the following vector optimization problem in the space Y of random variables, where we are looking for efficient elements (see Sections 3.4, 3.7) of Fstoch−dom w.r.t. ≤stoc−2:

  Compute efficient elements of Fstoch−dom w.r.t. ≤stoc−2.   (≤stoc−2-VOP)

The following assertion (derived in [334, Theorem 20]) says that x ∈ Astoch−dom is an SSD-solution if and only if its corresponding random variable Fx is an efficient element of Fstoch−dom w.r.t. ≤stoc−2. It is an immediate consequence of the definition of ≤stoc−2.

Theorem 6.3.35. Let Fstoch−dom ⊆ Y and suppose that Ψ² is defined for all F ∈ Fstoch−dom. Then, x ∈ R^n is an SSD-solution ⇐⇒ Fx is an efficient element for (≤stoc−2-VOP).

Remark 6.3.36. Taking into account that R ≤stoc−1 S implies R ≤stoc−2 S, we obtain the known relationship to ≤CY also for ≤stoc−2: F ≤CY G =⇒ F ≤stoc−1 G =⇒ F ≤stoc−2 G.
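For random variables with finitely many equally likely outcomes, both dominance checks reduce to pointwise comparisons of (integrated) empirical distribution functions. The following minimal Python sketch uses hypothetical outcomes; the discretization grid is our own device, and the cumulative sums approximate the integrals in (6.18) only up to the constant grid spacing, which cancels in the comparison.

```python
# Minimal sketch (hypothetical data): checking <=stoc-1 (FSD) and <=stoc-2
# (SSD) for two cost random variables with finitely many equally likely
# outcomes, via empirical distribution functions on a common grid.

import numpy as np

Fx = np.array([1.0, 2.0, 4.0])        # outcomes of Fx, probability 1/3 each
Fy = np.array([2.0, 3.0, 4.0])        # outcomes of Fy, probability 1/3 each
grid = np.linspace(0.0, 5.0, 501)     # evaluation points eta

cdf_x = np.array([(Fx <= eta).mean() for eta in grid])   # P{Fx <= eta}
cdf_y = np.array([(Fy <= eta).mean() for eta in grid])   # P{Fy <= eta}

fsd = bool(np.all(cdf_x >= cdf_y))                        # Fx <=stoc-1 Fy
ssd = bool(np.all(np.cumsum(cdf_x) >= np.cumsum(cdf_y)))  # Fx <=stoc-2 Fy
print(fsd, ssd)   # True True: Fx dominates Fy in first (hence second) degree
```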


6.3.8 Vector Approach for Two-stage Stochastic Programming

In this section, we use the notation introduced in Section 6.2.9. We consider Y ⊆ R^U and suppose that ∫_U F(ξ)p(ξ)dξ exists for all F ∈ Y, as in Section 6.2.7. For x ∈ A2−stage, we define the random variables Fx ∈ Y by

  Fx(ξ) := inf{f(x, u, ξ) | u ∈ G(x, ξ)}, ξ ∈ U.

Notice that Fx is well defined for all x ∈ A2−stage because of the assumptions concerning the solvability of the second-stage problem (6.5). Considering the feasible set F2−stage := {Fx ∈ Y | x ∈ A2−stage}, we reformulate problem (6.6) as

  ρ2SP(x) = ∫_U Fx(ξ)p(ξ)dξ → inf_{Fx∈F2−stage}.

So, the second-stage problem is incorporated in the definition of the random variables Fx, and we can directly apply the results from Section 6.3.6 for minimizing the expectation. We introduce the following vector optimization problem in the functional space Y, where we are looking for efficient elements (see Sections 3.4, 3.7) of F2−stage w.r.t. ≤exp:

  Compute efficient elements of F2−stage w.r.t. ≤exp.   (≤2−stage-VOP)

In the next theorem, we show that x ∈ R^n is an optimal solution to (6.6) if and only if Fx ∈ F2−stage is an efficient element of F2−stage with respect to the order relation ≤exp (see [334, Theorem 21]).

Theorem 6.3.37. x ∈ R^n solves (6.6) ⇐⇒ Fx ∈ F2−stage is an efficient element for (≤2−stage-VOP).

Proof. Suppose x̄ ∈ A2−stage. Then,

x̄ is an optimal solution to (6.6)
  ⇐⇒ ∫_U f(x̄, u(ξ), ξ)p(ξ)dξ ≤ ∫_U f(x, u(ξ), ξ)p(ξ)dξ for all x ∈ A2−stage, u(ξ) ∈ G(x̄, ξ)
  ⇐⇒ Fx̄ ≤exp Fx for all x ∈ A2−stage. □

The next assertion concerning the relationship between solutions of problem (≤2−stage -VOP) and weakly efficient elements of F2−stage w.r.t. the natural order relation ≤CY is derived in [334, Lemma 11].


Lemma 6.3.38. Suppose that Y = C(U, R) and assume that F ∈ F2−stage is an efficient element of F2−stage w.r.t. ≤exp. Then, F is a weakly efficient element of F2−stage w.r.t. the natural order relation ≤CY.

Therefore, we also get that Fx is weakly efficient for all optimal solutions x to (6.6) ([334, Corollary 7]).

Corollary 6.3.39. Consider Y = C(U, R) and assume that x ∈ A2−stage is an optimal solution to (6.6). Then, Fx is a weakly efficient element of F2−stage w.r.t. the natural order relation ≤CY in Y.

Proof. Let x ∈ A2−stage be optimal to (6.6). Assume Fx is not a weakly efficient element of F2−stage w.r.t. the natural order relation ≤CY in Y. Then, there exists Fy ∈ F2−stage such that Fy(ξ) < Fx(ξ) for all ξ ∈ U. We conclude that then ∫_U Fy(ξ)p(ξ)dξ < ∫_U Fx(ξ)p(ξ)dξ, a contradiction to Fx being an efficient element w.r.t. ≤exp. □

6.3.9 Proper Robustness

Using ideas of the unifying vector approach, we motivate an alternative concept of robustness (proper robustness) in this section. Employing the general features of this approach, we obtain valuable and useful information for the decision-maker concerning the properties of the robust solutions as well as concerning corresponding solution methods. The concept of properly efficient solutions is well known in vector optimization (see Sections 3.3.6 and 3.7). Engau [186] discussed this concept in the context of optimization under uncertainty with a countably infinite number of scenarios. In [334, Section 4.1], an alternative (but closely related) derivation is provided. As seen in Section 3.7 (in the definition of Henig-proper efficient points), properly efficient solutions can be defined using the dilating cone introduced by Henig [266]. Suppose that Y = R^U is the space of all functions F : U → R and CY the natural ordering cone in Y (see Definition 6.3.1). Recall that a proper, convex cone D ⊂ Y is called a (Henig) dilating cone (see Section 3.3.6) if CY \ {0} ⊂ int D (compare the definition of Henig-proper efficient points in Section 3.3.6). The following example is given in [334, Example 7].

Example 6.3.40. Consider the space Y = C(U, R) of continuous real-valued functions on a compact subset U of a separable Banach space. Recall that the natural ordering cone (see Definition 6.3.1 and Example 6.3.3) in C(U, R) is given by C(U, R)+ := {F ∈ C(U, R) | ∀ξ ∈ U : F(ξ) ≥ 0}. For δ > 0, s ∈ U, we consider the sets

  Ds,δ := {F ∈ C(U, R) | F(s) > 0, ∀ξ ∈ U \ {s} : F(s) + δF(ξ) > 0},
  Dδ := ∪_{s∈U} Ds,δ.


For δ > 0, the sets Ds,δ and Dδ are open, and the sets Ds,δ ∪ {0} are proper, convex cones (see Winkler [565, Lemma 3.1]). Moreover, the sets Dδ ∪ {0} are cones, and it holds that C(U, R)+ \ {0} ⊂ Dδ, i.e., the sets Dδ ∪ {0} are dilating cones (see Section 3.3.6).

Example 6.3.41. We use the notation introduced in Example 6.3.40. The topological dual space of C(U, R) is given by the space M(U) of Radon measures on U. The dual cone to C(U, R)+ is exactly the cone of positive Radon measures,

  M(U)+ = {μ ∈ M(U) | ∀F ∈ C(U, R)+ : ∫_U F dμ ≥ 0}.

M(U)+ has empty interior. The quasi-interior of M(U)+ is given by

  qi M(U)+ := {μ ∈ M(U) | ∀F ∈ C(U, R)+ \ {0} : ∫_U F dμ > 0}.

Obviously, qi M(U)+ is nonempty. Consider the set

  Cμ,m := {F ∈ C(U, R) | ∀ξ ∈ U : F(ξ) + (1/m) ∫_U F dμ ≥ 0}

with μ ∈ qi M(U)+ and m > 0. Winkler [565, Lemma 3.3] has shown that Cμ,m is a convex cone for all m > 0 with

  int Cμ,m = {F ∈ C(U, R) | ∀ξ ∈ U : F(ξ) + (1/m) ∫_U F dμ > 0}

and C(U, R)+ ⊂ Cμ,m and Cμ,m1 \ {0} ⊂ int Cμ,m2 for m1 > m2 > 0. This means that Cμ,m is a dilating cone, see Section 3.3.6.

In [334, Section 4.1], a new concept of robustness based on dilating cones is derived for the optimization problem under uncertainty (Q(ξ)). The set of strictly robust outcome functions in Y is denoted (as before) by Fstrict = {Fx ∈ Y | x ∈ Astrict}. We consider two given functions F, G ∈ Y and the following order relation ≤prop, where a dilating cone D (see Section 3.3.6) is employed:

  F ≤prop G :⇐⇒ ∃ a dilating cone D ⊂ Y such that G ∈ F + D.   (6.19)

In all previous sections, we considered certain robust counterpart problems to handle uncertainty and derived an order relation that enabled us to rephrase the original problem as a vector optimization problem. In [334, Section 4.1], the authors proceed the other way around: they define an order relation ≤prop (see (6.19)) and use it to introduce a new concept of robustness. Toward this end, we say


x ∈ Astrict is properly robust :⇐⇒ Fx is an efficient element of Fstrict w.r.t. ≤prop.

Note that ≤prop is not a total preorder. The properly robust counterpart of (Q(ξ), ξ ∈ U) is a vector optimization problem in the functional space Y = R^U:

Definition 6.3.42. Let the set

  HEff(Fstrict, CY) := {Fx ∈ Fstrict | ∃ dilating cone D with CY \ {0} ⊂ int D such that (Fx − int D) ∩ Fstrict = ∅}

be given. Fx ∈ HEff(Fstrict, CY) is called a properly efficient element of Fstrict w.r.t. CY. The properly robust counterpart of the optimization problem under uncertainty (Q(ξ), ξ ∈ U) is given by the problem of finding properly efficient elements of Fstrict w.r.t. CY:

  Compute HEff(Fstrict, CY).   (pRC)

Remark 6.3.43. Examples of dilating cones D in Definition 6.3.42 are the cones Dδ ∪ {0} (with δ > 0) and Cμ,m from Examples 6.3.40 and 6.3.41. Another example of a dilating cone is the Bishop–Phelps cone, which has many useful properties, see Ha and Jahn [250].

In the next lemma ([334, Lemma 12]), we show that all solutions to (pRC) are efficient elements w.r.t. ≤CY (see corresponding results derived in Section 3.3.6).

Lemma 6.3.44. If F ∈ Fstrict is an efficient element of Fstrict w.r.t. ≤prop, then F is an efficient element of Fstrict w.r.t. the natural order relation ≤CY.

Proof. Suppose that F ∈ Fstrict is an efficient element of Fstrict in Y w.r.t. ≤prop. From the definition of ≤prop, we obtain that there exists a dilating cone D ⊂ Y with (F − int D) ∩ Fstrict = ∅, and hence (F − (CY \ {0})) ∩ Fstrict = ∅ since CY \ {0} ⊂ int D. Therefore, F is an efficient element of Fstrict w.r.t. the natural order relation ≤CY. □

The next assertion is derived in [334, Corollary 8].

Corollary 6.3.45. Suppose that x ∈ Astrict is a properly robust solution, i.e., Fx is a solution to (pRC). Then, Fx is an efficient element of Fstrict w.r.t. the natural order relation ≤CY in Y.


It is important to mention that the advantage of the concept of properly robust counterpart problems is that, in general, we obtain a smaller set of robust solutions in comparison with the concept of (weak) efficiency, where the natural order relation ≤CY induced by the cone CY is involved. Very often, the set of weakly efficient elements of Fstrict w.r.t. the natural order relation ≤CY is very large and includes elements that are not of interest for the decision-maker. Especially, dilating cones may be employed to avoid weakly efficient elements when considering classical concepts such as strict robustness, optimistic robustness, regret robustness, reliable robustness, and adjustable robustness.

6.3.10 An Overview on Concepts of Robustness Based on Vector Approach

In this section, we summarize the different characterizations of robust counterpart problems for scalar problems under uncertainty using the general concept based on the vector approach employed in the previous sections. Deterministic counterparts based on the vector approach offer a powerful tool for an improved understanding of a large diversity of models in optimization under uncertainty. Especially, trade-offs between different models can be examined, and the relative importance of specific scenarios in the respective models can be recognized from a new perspective. The main characteristics of the respective robust counterpart models via vector optimization are consolidated in Table 6.1. Note, however, that even though the same order relation may be used, differences can occur in the respective definitions of objective functions and feasible sets, which is not reflected in the table.

Remark 6.3.46. As already mentioned, the general idea of the vector optimization approach to optimization problems under uncertainty is studied in related works by Rockafellar and Royset [486–489], where the authors propose a similar unifying concept for handling uncertainty in a decision-making process. By construing the uncertain outcome Fx of a solution x ∈ R^n as a random variable (related to the interpretation in Sections 6.3.6 and 6.3.7), Rockafellar and Royset explain that many concepts from robust optimization and stochastic programming can be described by risk measures. Especially, consider a space Y of random variables. An extended real-valued function R : Y → R ∪ {±∞} that assigns to a response or cost random variable y a number R(y) as a quantification of the risk in y is called a measure of risk. Furthermore, in [487], it is shown how risk measures provide models for handling uncertainty, especially involving worst case optimization (see Section 6.2.2) and expected value minimization (see Section 6.2.7). Risk measures discussed in [486–489] include quantiles, safety margins, superquantiles, and utility functions. Engau considers in [186] uncertain optimization problems with a countably infinite uncertainty set. In this case, a vector optimization problem is formulated as a deterministic counterpart. This counterpart problem is employed to apply generalized concepts of proper efficiency in the context of optimization under uncertainty with the aim of avoiding weakly efficient solutions. In Section 6.3.9, a generalization of this concept via the vector approach is presented.


Concept | Section | Order relation | Definition (Fx, Fy ∈ Y): Fx ≤ Fy ⇐⇒
Strict robustness | 6.3.1 | ≤sup | sup_{ξ∈U} Fx(ξ) ≤ sup_{ξ∈U} Fy(ξ)
Optimistic robustness | 6.3.2 | ≤inf | inf_{ξ∈U} Fx(ξ) ≤ inf_{ξ∈U} Fy(ξ)
Regret robustness | 6.3.3 | ≤regret | sup_{ξ∈U}(Fx(ξ) − f*(ξ)) ≤ sup_{ξ∈U}(Fy(ξ) − f*(ξ))
Reliability | 6.3.4 | ≤sup | sup_{ξ∈U} Fx(ξ) ≤ sup_{ξ∈U} Fy(ξ)
Adjustable robustness | 6.3.5 | ≤sup | sup_{ξ∈U} Fx(ξ) ≤ sup_{ξ∈U} Fy(ξ)
Minimizing the expectation | 6.3.6 | ≤exp | ∫_U Fx(ξ)p(ξ)dξ ≤ ∫_U Fy(ξ)p(ξ)dξ
Stochastic dominance 1 | 6.3.7 | ≤stoc−1 | P{Fx ≤ η} ≥ P{Fy ≤ η} for all η ∈ R
Two-stage stochastic programming | 6.3.8 | ≤exp | ∫_U Fx(ξ)p(ξ)dξ ≤ ∫_U Fy(ξ)p(ξ)dξ
Proper robustness | 6.3.9 | ≤prop | ∃ a proper cone D ⊂ Y with CY \ {0} ⊂ int D and Fy ∈ Fx + D

Table 6.1. Summary of characteristics of robust optimization problems based on vector optimization counterparts, adapted from [334, Table 1].


6.4 Set Relations as Unifying Concept

We describe an approach for deriving robust counterpart problems to scalar optimization under uncertainty based on generalized set less relations and set-valued optimization (see Sections 2.3, 3.2 and 4.4). In the literature, there are several concepts of robustness for multiobjective optimization under uncertainty that deal with concepts from set-valued optimization. Ide et al. [284] study multiobjective optimization problems under uncertainty and


relationships to set less relations (compare also Ide and Köbis [283] and Crespi et al. [146]). In these papers, the authors introduce robust solutions to families of multiobjective problems under uncertainty and employ techniques from set optimization. A method to transfer the set optimization problem into a vector optimization problem (i.e., an embedding approach from set optimization as a vectorization) is employed in [146], and furthermore, well-posedness by means of several convexity notions with respect to the objective map is discussed. We consider scalar optimization problems under uncertainty with an infinite uncertainty set and study their corresponding set-valued robust counterpart problems. These robust counterpart problems are then deterministic. Corresponding results for multiobjective optimization problems under uncertainty and their robust set-valued counterpart problems are studied in [146].

In the set-based approach for handling optimization problems under uncertainty, we associate a feasible solution x ∈ R^n with the outcome set of possible objective function values that can appear if x is fixed. This means we consider the outcome sets

  Bx := f(x, U) := {f(x, ξ) | ξ ∈ U} ⊆ R.

To compare two feasible solutions x and y in this framework, we employ certain set less relations between their corresponding outcome sets Bx and By (see Section 2.3). Consider the set Z := P(R) of all nonempty subsets of R (the power set of R without the empty set, see Section 2.3). For fixed x ∈ R^n, the set Bx ∈ Z is therefore the image of U under the mapping Fx. If f(x, ·) is a continuous function on a convex uncertainty set U, then Bx ⊆ R is an interval. Throughout this section, we assume that the outcome sets associated with feasible solutions are closed. We consider various set less relations ⪯ := ⪯D (where D is a nonempty subset of Y) introduced in Section 2.3 to compare elements of Z.

Example 6.4.1. A first example of a set less relation in our approach is the upper set less relation introduced in Definition 2.3.1. In our special case (Y = R and D = R+), the upper set less relation is given as follows: Let A, B ∈ Z be nonempty closed sets. Then,

  A ⪯u B :⇐⇒ A ⊆ B − R+ ⇐⇒ sup A ≤ sup B.   (6.20)

Another example of a set less relation in our approach is the lower set less relation introduced in Definition 2.3.3. For Y = R and D = R+, the lower set less relation is given as follows: Let A, B ∈ Z be nonempty closed sets. Then,

  A ⪯l B :⇐⇒ B ⊆ A + R+ ⇐⇒ inf A ≤ inf B.   (6.21)
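For finite (hence closed) outcome sets, both relations reduce to comparisons of maxima and minima. The following minimal Python sketch, with hypothetical outcome sets of our own choosing, makes this concrete.

```python
# Minimal sketch (hypothetical data): the upper (6.20) and lower (6.21) set
# less relations for finite (hence closed) outcome sets Bx = {f(x, xi) | xi in U}.

def upper_set_less(A, B):   # A set-less-u B  <=>  A ⊆ B - R+  <=>  sup A <= sup B
    return max(A) <= max(B)

def lower_set_less(A, B):   # A set-less-l B  <=>  B ⊆ A + R+  <=>  inf A <= inf B
    return min(A) <= min(B)

A = {1.0, 3.0}              # outcome set of one solution
B = {2.0, 4.0}              # outcome set of another solution

print(upper_set_less(A, B))   # True: sup A = 3 <= 4 = sup B
print(lower_set_less(A, B))   # True: inf A = 1 <= 2 = inf B
```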


We are looking for ⪯-nondominated elements of a nonempty subset of Z (see Definition 3.2.1). Toward this end, let B be a nonempty subset of Z. Recall that nondominated elements in B w.r.t. a general set less relation ⪯ (see Section 2.3) are given as follows (compare Definitions 3.2.1, 3.2.4 and 4.4.2): Consider B ⊆ Z and the set less relation ⪯ on Z. Then Ā ∈ B is a nondominated element of B w.r.t. ⪯ if

  for all A ∈ B : A ⪯ Ā =⇒ Ā ⪯ A.

A set optimization problem asks for nondominated elements of B in Z w.r.t. ⪯ for a given set less relation ⪯ and a set B ⊆ Z. Therefore, we consider the following set optimization problem in the set-based approach for handling optimization problems under uncertainty:

  Compute nondominated elements of B w.r.t. ⪯.   (⪯-SP)

The aim of the following sections is to explain how many (though not all) concepts of robustness for optimization problems under uncertainty can be characterized by set-valued optimization problems w.r.t. different set less relations (see Section 2.3). Furthermore, every set less relation induces a certain concept for handling uncertainty. In the particular instance of a finite uncertainty set U = {ξ1, . . . , ξq} (q ∈ N), we get that Bx = {f(x, ξ) | ξ ∈ U} = {f(x, ξ1), . . . , f(x, ξq)} ⊂ R is a set of finitely many elements in R.

6.4.1 Set Approach for Strict Robustness

We reformulate the strictly robust counterpart (RC) (see Section 6.2.2) as a set optimization problem. We denote the set of strictly robust outcome sets in Z = P(R) by

  Bstrict := {Bx ∈ Z | x ∈ Astrict}.   (6.22)

We suppose that the sets Bx are closed for all x ∈ Astrict. For closed sets Bx, By ∈ Z, the upper set less relation ⪯u is given by

  Bx ⪯u By :⇐⇒ Bx ⊆ By − R+ ⇐⇒ sup Bx ≤ sup By,   (6.23)

see Definition 2.3.1, Example 6.4.1 and Figure 6.4.4.

Remark 6.4.2. The relation ⪯u in (6.23) is reflexive and transitive. Moreover, it is a total preorder.

Figure 6.4.4. Visualization of A ⪯u B for closed sets A, B ∈ Z: sup A lies to the left of sup B on the real line R.

The following relationship between ⪯u (see (6.23)) and ≤sup (see (6.10)) is derived in [334, Lemma 3].

Lemma 6.4.3. Consider x, y ∈ R^n with the corresponding outcome functions Fx, Fy and the corresponding closed outcome sets Bx, By. Then, Bx ⪯u By ⇐⇒ Fx ≤sup Fy.

Proof. Bx ⪯u By ⇐⇒ sup Bx ≤ sup By ⇐⇒ sup{Fx(ξ) | ξ ∈ U} ≤ sup{Fy(ξ) | ξ ∈ U} ⇐⇒ Fx ≤sup Fy. □

For the set less relation ⪯u and the set Bstrict ⊆ Z, we are looking for nondominated elements of Bstrict in Z w.r.t. ⪯u. This means we consider the following set optimization problem:

  Compute nondominated elements of Bstrict w.r.t. ⪯u.   (⪯u-SP)

Now, we are able to represent the strictly robust optimization problem as the set optimization problem (⪯u-SP). In the next theorem, we show that x ∈ R^n is an optimal solution to (RC) if and only if Bx is a nondominated element of Bstrict w.r.t. the set less relation ⪯u (see [334, Theorem 2]).

Theorem 6.4.4. Suppose that the sets Bx are closed for all x ∈ Astrict. Then, x ∈ R^n solves (RC) ⇐⇒ Bx ∈ Bstrict is nondominated for (⪯u-SP).

Proof. From Theorem 6.3.5, we know that x̄ ∈ Astrict is an optimal solution to (RC) if and only if Fx̄ ≤sup Fx for all x ∈ Astrict. Taking into account Lemma 6.4.3, this is equivalent to Bx̄ ⪯u Bx for all x ∈ Astrict, and the assertion follows. □


6.4.2 Set Approach for Optimistic Robustness

Employing the lower set less relation (see Definition 2.3.3), we reformulate the optimistically robust counterpart (oRC) introduced in Section 6.2.3 as a set optimization problem. As before (see Section 6.4.1), the set of optimistically robust outcome sets in Z = P(R) is Bstrict = {Bx ∈ Z | x ∈ Astrict}. In the optimistically robust counterpart problem (oRC), we are minimizing the best case objective value of an element x ∈ Astrict. In the set-based reformulation, we minimize the infimum of the outcome sets Bx ⊆ R with x ∈ Astrict. Therefore, we use the lower set less relation ⪯l given in (6.21) as a special case of Definition 2.3.3 (see Figure 6.4.5).

Remark 6.4.5. The relation ⪯l in (6.21) is reflexive and transitive. Moreover, it is a total preorder.

Figure 6.4.5. Visualization of A ⪯l B for closed sets A, B ∈ Z: inf A lies to the left of inf B on the real line R.

The following relationship between ⪯l (see (6.21)) and ≤inf (see (6.12)) is shown in [334, Lemma 5].

Lemma 6.4.6. Consider x, y ∈ R^n, their corresponding outcome functions Fx, Fy and their corresponding closed outcome sets Bx, By. Then, Bx ⪯l By ⇐⇒ Fx ≤inf Fy.

Now, we are looking for nondominated elements of Bstrict in Z w.r.t. ⪯l. Therefore, we consider the following set optimization problem:

  Compute nondominated elements of Bstrict w.r.t. ⪯l.   (⪯l-SP)

In the next theorem, we show that x ∈ R^n is an optimal solution to (oRC) if and only if Bx is a nondominated element of Bstrict w.r.t. the lower set less relation ⪯l (see [334, Theorem 5]).

Theorem 6.4.7. Suppose that the sets Bx are closed for all x ∈ Astrict. Then, x ∈ R^n solves (oRC) ⇐⇒ Bx ∈ Bstrict is nondominated for (⪯l-SP).


6.4.3 Set Approach for Regret Robustness

The reformulation of the strictly robust counterpart (RC) presented in Section 6.2.2 as a set optimization problem (see Section 6.4.1) does not directly transfer to the case of regret robustness introduced in Section 6.2.4, since the regret of an element x ∈ Astrict depends on the scenario ξ ∈ U, and the information about the functional dependence between the objective values f(x, ξ), ξ ∈ U, and the respective scenarios is not captured in the outcome set Bx ∈ Bstrict. More precisely, in general it holds that

  sup{f(x, ξ) − f*(ξ) | ξ ∈ U} ≠ sup({f(x, ξ) | ξ ∈ U} − {f*(ξ) | ξ ∈ U}),

where {f(x, ξ) | ξ ∈ U} = Bx. Therefore, for x ∈ Astrict, we define the regret robust outcome set

  Bx^{f−f*} := {f(x, ξ) − f*(ξ) | ξ ∈ U}

as the set of possible outcomes of the regret function f(x, ξ) − f*(ξ), ξ ∈ U. The set of feasible regret sets Bx^{f−f*}, x ∈ Astrict,

  Bregret := {Bx^{f−f*} ∈ Z | x ∈ Astrict},

is referred to as the set of regret robust outcome sets. In the regret robust counterpart problem (rRC), we minimize the worst case regret value of an element x ∈ Astrict. This is equivalent to minimizing the supremum of the regret robust outcome sets Bx^{f−f*} ∈ Bregret. Using the upper set less relation ⪯u introduced in (6.23) on Bregret, we consider the following set optimization problem w.r.t. ⪯u and the set Bregret ⊆ Z, where we are looking for nondominated elements of Bregret in Z w.r.t. ⪯u:

  Compute nondominated elements of Bregret w.r.t. ⪯u.   (⪯u-SPreg)

In the next theorem, we show that x ∈ R^n is an optimal solution to (rRC) if and only if the corresponding regret robust outcome set Bx^{f−f*} is a nondominated element of Bregret w.r.t. the set less relation ⪯u (see [334, Theorem 9]).

Theorem 6.4.8. Suppose that the sets Bx^{f−f*} are closed for all x ∈ Astrict. Then, x ∈ R^n solves (rRC) ⇐⇒ Bx^{f−f*} ∈ Bregret is nondominated for (⪯u-SPreg).

Proof. Consider x̄ ∈ Astrict. Taking into account that sup Bx^{f−f*} = sup_{ξ∈U}(f(x, ξ) − f*(ξ)), we obtain


x̄ is an optimal solution to (rRC)
  ⇐⇒ sup_{ξ∈U}(f(x̄, ξ) − f*(ξ)) ≤ sup_{ξ∈U}(f(x, ξ) − f*(ξ)) for all x ∈ Astrict
  ⇐⇒ sup Bx̄^{f−f*} ≤ sup Bx^{f−f*} for all x ∈ Astrict
  ⇐⇒ Bx̄^{f−f*} ⪯u Bx^{f−f*} for all x ∈ Astrict
  ⇐⇒ Bx̄^{f−f*} ⪯u B for all B ∈ Bregret.

Since ⪯u is a total preorder, the assertion follows. □



Notice that Theorem 6.4.8 can alternatively be shown employing the relationship between ≤regret and ≤sup and between ≤sup and ⪯u, compare Lemma 6.4.3.

6.4.4 Set Approach for Reliability

In this section, we analogously interpret the reliable counterpart problem (Rely) introduced in Section 6.2.5 as a set optimization problem. Toward this end, we denote the set of reliable outcome sets in Z = P(R) by

  Brely := {Bx ∈ Z | x ∈ Arely}.   (6.24)

We employ the upper set less relation ⪯u as given by (6.23) (compare Definition 2.3.1) in order to compare two sets of Brely. In the reliable counterpart problem (Rely), we minimize the worst case objective value of an element x ∈ Arely. This is equivalent to minimizing the supremum of the outcome sets Bx ⊆ R with x ∈ Arely in the set-based approach, where we consider the following set optimization problem w.r.t. the upper set less relation ⪯u and the set Brely ⊆ Z. Here, we are looking for nondominated elements of Brely in Z w.r.t. ⪯u:

  Compute nondominated elements of Brely w.r.t. ⪯u.   (⪯u-SPrel)

In the next theorem, we show that x ∈ R^n is an optimal solution to (Rely) if and only if Bx is a nondominated element of Brely w.r.t. the set less relation ⪯u (see [334, Theorem 12]).

Theorem 6.4.9. Suppose that the sets Bx are closed for all x ∈ Arely. Then, x ∈ R^n solves (Rely) ⇐⇒ Bx ∈ Brely is nondominated for (⪯u-SPrel).

Proof. Analogously to the proof of Theorem 6.4.4, the assertion follows from Theorem 6.3.20 and Lemma 6.4.3. □


6.4.5 Set Approach for Adjustable Robustness

Employing the set approach, a similar analysis as in the previous sections can be carried out for the adjustable robust counterpart introduced in Section 6.2.6. Consider Z = P(R) and Fx(ξ) = Q(x, ξ) as before. Let us define Bx := {Q(x, ξ) | ξ ∈ U} = {Fx(ξ) | ξ ∈ U} and denote the set of adjustable robust outcome sets by Badjust := {Bx ∈ Z | x ∈ Aadjust} ⊆ Z. In order to compare Bx, By ∈ Z, we use the upper set less relation ⪯u given by (6.20). Taking into account the relationship between ⪯u and ≤sup shown in Lemma 6.4.3, we employ ⪯u for a representation of the adjustable robust counterpart problem as a set optimization problem. Therefore, we consider a set optimization problem w.r.t. ⪯u and the set Badjust ⊆ Z and ask for nondominated elements of Badjust w.r.t. ⪯u:

  Compute nondominated elements of Badjust w.r.t. ⪯u.   (⪯u-SPadj)

In the next theorem, we show that x ∈ R^n is an optimal solution to (aRC) if and only if Bx is a nondominated element of Badjust w.r.t. the upper set less relation ⪯u (see [334, Theorem 15]).

Theorem 6.4.10. Suppose that the sets Bx are closed for all x ∈ Aadjust. Then, x ∈ R^n solves (aRC) ⇐⇒ Bx ∈ Badjust is nondominated for (⪯u-SPadj).

Proof. Consider x̄ ∈ Aadjust. From Theorem 6.3.23, we obtain that x̄ is an optimal solution to (aRC) if and only if Fx̄ ≤sup Fx for all x ∈ Aadjust. Taking into account Lemma 6.4.3, this is equivalent to Bx̄ ⪯u Bx for all x ∈ Aadjust. □

6.4.6 Certain Robustness

In [334], a motivation for a new concept of robustness (certain robustness) by means of the set approach is presented. A set-based reformulation given by a set less relation ⪯ always implies a corresponding reformulation as a vector optimization problem with an order relation ≤ by defining

  Fx ≤ Fy :⇐⇒ Bx ⪯ By,

see, e.g., Sections 6.3.6, 6.3.7 and 6.3.8.


In general, the converse is not true, since ⪯ is not well defined through

  Bx ⪯ By :⇐⇒ Fx ≤ Fy.

Therefore, the following concept, which is also based on a vector optimization model, is motivated. Nevertheless, in the case studied in this section, the set approach is better suited to highlight the particular problem characteristics as well as their interpretation. Consider closed sets Bx, By ∈ Z. Specializing the certainly less relation ⪯c (see Definition 2.3.5) to our setting, we consider

  Bx ⪯c By :⇐⇒ Bx ⊆ By − R+ and By ⊆ Bx + R+ ⇐⇒ sup Bx ≤ sup By and inf Bx ≤ inf By,

i.e., Bx dominates By if both the upper bound and the lower bound of the set Bx are smaller than the respective bounds of the set By (Figure 6.4.6). The following relationship between the set less relations ⪯u, ⪯l and ⪯c holds:

  Bx ⪯c By =⇒ Bx ⪯u By as well as Bx ⪯l By.

Figure 6.4.6. Visualization of A ⪯c B for closed sets A, B ∈ Z: both inf A ≤ inf B and sup A ≤ sup B on the real line R.

Toward this end, suppose that Bstrict ⊆ Z is defined as the set of strictly robust outcome sets (see (6.22)), and suppose that the sets Bx are closed for all x ∈ Astrict. We consider a set optimization problem w.r.t. the certainly less relation ⪯c and the set Bstrict ⊆ Z, where we ask for nondominated elements of Bstrict in Z w.r.t. ⪯c:

  Compute nondominated elements of Bstrict w.r.t. ⪯c.   (⪯c-SPcer)

We now use (⪯c-SPcer) to define certainly robust solutions:

  x is certainly robust :⇐⇒ Bx solves (⪯c-SPcer).

Taking into account the properties of the certainly less relation ⪯c, the concept of certainly robust solutions is useful if a decision-maker is interested in solutions that are not simultaneously dominated in both their upper and their lower bounds. By using the certainly less relation ⪯c, we get a rather weak concept. However, the concept of certainly robust solutions filters off solutions that are surely bad choices.
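The filtering effect of ⪯c is easy to see on finite data. The following self-contained Python sketch is our own illustration with hypothetical outcome sets; it implements the nondominatedness test "B ⪯c A implies A ⪯c B for all B" literally.

```python
# Minimal sketch (hypothetical data): the certainly less relation combines
# both bounds; certainly robust solutions are those whose outcome sets are
# nondominated w.r.t. it, filtering off surely bad choices.

def certainly_less(A, B):   # A certainly-less B <=> sup A <= sup B and inf A <= inf B
    return max(A) <= max(B) and min(A) <= min(B)

outcome_sets = [frozenset({1.0, 5.0}),   # hypothetical sets Bx for three solutions
                frozenset({2.0, 4.0}),
                frozenset({3.0, 6.0})]

# A is nondominated if, for all B, certainly_less(B, A) implies certainly_less(A, B).
nondominated = [A for A in outcome_sets
                if all(certainly_less(A, B) or not certainly_less(B, A)
                       for B in outcome_sets)]
print(nondominated)   # {1.0, 5.0} and {2.0, 4.0} survive; {3.0, 6.0} is filtered off
```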


6.4.7 An Overview on Concepts of Robustness Based on Set Relations

Because the set-based reformulation of robust counterpart problems for optimization problems under uncertainty does not reflect the distributional information that is needed, especially, when minimizing the expectation or in two-stage stochastic programming models, its practicality is mostly limited to concepts from robust optimization. A summary is given in Table 6.2.

Concept | Section | Set less relation | Definition (Bx, By ∈ Z closed): Bx ⪯ By ⇐⇒
Strict robustness | 6.4.1 | ⪯u | sup Bx ≤ sup By
Optimistic robustness | 6.4.2 | ⪯l | inf Bx ≤ inf By
Regret robustness | 6.4.3 | ⪯u | sup Bx^{f−f*} ≤ sup By^{f−f*}
Reliability | 6.4.4 | ⪯u | sup Bx ≤ sup By
Adjustable robustness | 6.4.5 | ⪯u | sup Bx ≤ sup By
Certain robustness | 6.4.6 | ⪯c | sup Bx ≤ sup By and inf Bx ≤ inf By

Table 6.2. Summary of interpretations using set-based counterparts, adapted from [334, Table 2].

6.5 Translation Invariant Functions as Unifying Concept

Several well-known concepts of robust and stochastic optimization can be reformulated employing the nonlinear translation invariant functional (2.42) discussed in Section 2.4. Intuitively, whenever a robust counterpart problem of an optimization problem under uncertainty generates (weakly) efficient elements of an associated vector optimization problem as discussed in Section 6.3 (compare Sections 3.4, 3.5 and 3.7), these solutions can also be generated employing suitable (nonlinear) scalarizing functionals. These scalarizing functionals have many useful properties depending on the properties of the involved parameters (see Theorem 2.4.1), especially translation invariance, monotonicity, and continuity properties. Accordingly, this gives a justification for the third unifying framework via nonlinear translation invariant functionals.


Suppose that Y is a linear topological space, k ∈ Y \ {0}, and the set of feasible elements F is a nonempty, proper subset of Y. Throughout this section, we suppose that Y is a linear topological space of real-valued functions F : U → R equipped with the closed natural ordering cone CY (see Definition 6.3.1). Under the assumption that B is a proper, closed subset of Y fulfilling (see (2.41))

  B + [0, +∞) · k ⊆ B,   (6.25)

we consider the nonlinear translation invariant functional (see (2.42)) ϕB,k : Y → R̄ := R ∪ {±∞},

  ϕB,k(y) = inf{t ∈ R | y ∈ tk − B}.   (6.26)

It is an interesting question whether the nonlinear translation invariant functional ϕB,k can be employed as a tool to characterize solutions of robust counterpart problems to scalar optimization problems under uncertainty. Employing the scalarizing functional (6.26), we consider the following scalar minimization problem that will be studied later on to describe certain concepts of robust and stochastic optimization. Suppose that F ⊆ Y and ϕB,k is given by (6.26). An element F̄ ∈ F is a minimal element of F in Y w.r.t. ϕB,k if

  ∀G ∈ F : ϕB,k(F̄) ≤ ϕB,k(G),

i.e., F̄ is a solution of the scalar optimization problem

  ϕB,k(F) → inf_{F∈F}.   (Pk,B,F)

It is important to mention that many well-known scalarization concepts for vector optimization problems discussed in the literature are special cases of the scalarized problem (Pk,B,F). In the case of vector optimization problems with a finite-dimensional image space, this scalarization method covers weighted sum, Tchebycheff, and ε-constraint scalarizations, among many others. Characterizations of solutions of vector optimization problems via scalarization are discussed in Sections 3.1.1 and 3.1.2. Moreover, the surrogate scalar problem by Pascoletti and Serafini [458] is related to a scalarization by the functional (6.26), see Eichfelder [174]. In Section 2.4, we have seen that the functional (6.26) has many interesting and useful properties (see Theorem 2.4.1). Especially, in the case that Y is a linear topological space and B is a proper, closed, convex cone in Y with nonempty interior and k ∈ int B, the functional (6.26) is a well-defined, continuous, sublinear function (see Corollary 2.4.5). The aim of the following sections is to explain how many concepts of robustness for optimization problems under uncertainty can be characterized via the translation invariant functional (6.26) by choosing different parameters k, B, F. Every choice of k, B, F induces a certain concept for handling uncertainty.
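Before specializing B and k in the following subsections, the functional (6.26) itself can be illustrated numerically. The following minimal Python sketch is our own construction: it evaluates ϕB,k on Y = R^q (a discretized scenario set), encodes B by a membership predicate, and exploits the monotonicity implied by (6.25) to compute the infimum by bisection; the two choices of B anticipate Theorems 6.5.1 and 6.5.4 below.

```python
# Minimal sketch (hypothetical): the translation invariant functional
# phi_{B,k}(y) = inf{t in R | y in t*k - B} on Y = R^q. By (6.25),
# feasibility of t is monotone, so the infimum is found by bisection.

import numpy as np

def phi(y, in_B, k, lo=-1e6, hi=1e6, tol=1e-9):
    """Bisection for inf{t | t*k - y in B}; assumes the bracket [lo, hi]
    is valid, i.e., lo is infeasible and hi is feasible."""
    while hi - lo > tol:
        t = 0.5 * (lo + hi)
        if in_B(t * k - y):
            hi = t
        else:
            lo = t
    return hi

y = np.array([2.0, -1.0, 3.0])
k = np.ones(3)

B_sup = lambda z: bool(np.all(z >= 0))   # B = R^q_+: phi equals max_i y_i
B_inf = lambda z: bool(np.any(z >= 0))   # B = {z | some z_i >= 0}: phi equals min_i y_i

print(phi(y, B_sup, k))   # ~ 3.0, the worst case (compare Theorem 6.5.1 below)
print(phi(y, B_inf, k))   # ~ -1.0, the best case (compare Theorem 6.5.4 below)
```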


6.5.1 Nonlinear Scalarizing Functional for Strict Robustness

We depict the strictly robust counterpart (RC) (introduced in Section 6.2.2) employing the translation invariant functional (6.26), see Section 2.4. The assertion in the following theorem holds in the general case that Y is a linear topological space of real-valued functions F : U → R. We show that x ∈ R^n is an optimal solution to (RC) if and only if Fx solves problem (Pk,B,Fstrict) for B := CY and k :≡ 1 ∈ Y (see [334, Theorem 3]).

Theorem 6.5.1. Suppose that B := CY and k :≡ 1 ∈ Y. Then, x ∈ R^n solves (RC) ⇐⇒ Fx is a solution of (Pk,B,Fstrict).

Proof. Obviously, B + [0, +∞) · k ⊆ B holds, such that inclusion (6.25) is fulfilled and the functional ϕB,k is well defined. Moreover, we obtain

  ϕB,k(Fx) = inf{t ∈ R | Fx ∈ tk − B} = inf{t ∈ R | Fx − tk ∈ −CY} = inf{t ∈ R | ∀ξ ∈ U : Fx(ξ) ≤ t} = sup_{ξ∈U} f(x, ξ).

Therefore, Fx is minimal for (Pk,B,Fstrict) if and only if x ∈ Astrict minimizes sup_{ξ∈U} f(x, ξ), i.e., if and only if x is an optimal solution to (RC). □

Remark 6.5.2. From Theorem 2.4.1 and Corollary 2.4.5, we obtain (under the assumption Y = C(U, R)) that ϕB,k is continuous, finite-valued, CY-monotone, strictly (int CY)-monotone and sublinear, and

  ∀ Fx ∈ Y, ∀ t ∈ R : ϕB,k(Fx) ≤ t ⇐⇒ Fx ∈ tk − CY,
  ∀ Fx ∈ Y, ∀ t ∈ R : ϕB,k(Fx) < t ⇐⇒ Fx ∈ tk − int CY,

because B = CY is a proper, closed, convex cone and k ∈ int CY.

In the special case of a discrete uncertainty set U = {ξ1, . . . , ξq}, Theorem 6.5.1 reduces to

  min_{Fx∈Fstrict} ϕB,k(Fx) = min_{x∈Astrict} max_{ξ∈U} f(x, ξ)

with B := R^q_+ and k := (1, . . . , 1)^T. This problem is equivalent to the reference point approach by Wierzbicki [564] employing the origin as reference point, and, in the case that f(x, ξ) ≥ 0 for all ξ ∈ U and x ∈ Astrict, to the weighted Tchebycheff scalarization (with equal weights), compare Steuer [520], applied to the corresponding vector optimization problem


  ≤Rq+ - opt(f(x, ξ1), . . . , f(x, ξq)) subject to x ∈ Astrict,   (6.27)

where the optimization is to be understood in the sense of efficiency (see Sections 3.4, 3.7) with the order relation ≤Rq+ induced by the natural ordering cone R^q_+ in R^q:

  y¹ ≤Rq+ y² :⇐⇒ y² ∈ y¹ + R^q_+ for all y¹, y² ∈ R^q.

Remark 6.5.3. We know from Corollary 6.3.8 that, for every optimal solution x of the scalarized problem (Pk,B,Fstrict), Fx is a weakly efficient element w.r.t. the natural order relation ≤CY under the assumption that the worst case is attained for every element x ∈ Astrict. This assumption is not always satisfiable, and especially in the context of scalarizing functionals, it is common practice to employ techniques that guarantee efficient (instead of weakly efficient) elements w.r.t. ≤CY. For example, this can be realized by a second-stage optimization applied to the set of optimal solutions of (Pk,B,Fstrict), as proposed by Iancu and Trichakis in [282], or by employing a suitable augmentation term for ϕB,k in the first stage (see, for example, Jahn [310]).

6.5.2 Nonlinear Scalarizing Functional for Optimistic Robustness

Now, we explain that the optimistically robust counterpart (oRC) introduced in Section 6.2.3 can be characterized by the translation invariant functional (6.26). The basic reformulation again holds under the assumption that Y is a linear topological space of real-valued functions F : U → R. In the next theorem, we show that x ∈ R^n is an optimal solution to (oRC) if and only if Fx solves problem (Pk,Binf,Fstrict) for Binf := {F ∈ Y | ∃ ξ ∈ U : F(ξ) ≥ 0} and k :≡ 1 ∈ Y (see [334, Theorem 6]).

Theorem 6.5.4. Consider the closed set Binf := {F ∈ Y | ∃ ξ ∈ U : F(ξ) ≥ 0} and k :≡ 1 ∈ Y. Then, x ∈ R^n solves (oRC) ⇐⇒ Fx is a solution of (Pk,Binf,Fstrict).

Proof. Since Binf + [0, +∞) · k ⊆ Binf holds, the inclusion (6.25) is satisfied such that the functional ϕBinf,k is well defined. We obtain

  ϕBinf,k(Fx) = inf{t ∈ R | Fx ∈ tk − Binf} = inf{t ∈ R | Fx − tk ∈ −Binf} = inf{t ∈ R | ∃ ξ ∈ U : Fx(ξ) ≤ t} = inf_{ξ∈U} f(x, ξ).


This means that Fx is minimal for (Pk,Binf,Fstrict) if and only if x ∈ Astrict minimizes inf_{ξ∈U} f(x, ξ), i.e., if and only if x is an optimal solution to (oRC). □

Remark 6.5.5. Under the assumption Y = C(U, R), Binf is a proper, convex cone and k ≡ 1 ∈ int Binf because of CY ⊂ Binf and int CY ≠ ∅. Since Binf = Y \ int(−CY), we obtain the closedness of Binf. Hence, from Theorem 2.4.1 and Corollary 2.4.5, we conclude that the functional ϕBinf,k is continuous, finite-valued, Binf-monotone, strictly (int Binf)-monotone and sublinear, and it holds that

  ∀ Fx ∈ Y, ∀ t ∈ R : ϕBinf,k(Fx) ≤ t ⇐⇒ Fx ∈ tk − Binf,
  ∀ Fx ∈ Y, ∀ t ∈ R : ϕBinf,k(Fx) < t ⇐⇒ Fx ∈ tk − int Binf.

If U = {ξ1, . . . , ξq} (q ∈ N) is a discrete set of uncertainties, Theorem 6.5.4 simplifies to

  min_{Fx∈Fstrict} ϕB^opt,k(Fx) = min_{x∈Astrict} min_{ξ∈U} f(x, ξ),

where B^opt := {y ∈ R^q | ∃i ∈ {1, . . . , q} : yi ≥ 0} = R^q \ int(−R^q_+) and k := (1, . . . , 1)^T. Therefore, an optimal solution to (oRC) is minimal for at least one of the individual objective functions. In general, this does not hold for strict robustness.

The duality relationship between the optimistically robust counterpart (oRC) and the strictly robust counterpart (RC) leads to an alternative formulation as a maximization problem using the natural ordering cone CY.

Theorem 6.5.6. Suppose that B := CY and k :≡ 1. Then, x ∈ R^n is an optimal solution to (oRC) if and only if Fx ∈ Fstrict minimizes sup{t ∈ R | Fx ∈ tk + B}.

Proof. Analogously to the proof of Theorem 6.5.1, we get

  sup{t ∈ R | Fx ∈ tk + B} = sup{t ∈ R | Fx − tk ∈ CY} = sup{t ∈ R | ∀ξ ∈ U : Fx(ξ) ≥ t} = inf_{ξ∈U} f(x, ξ).

Therefore, Fx ∈ Fstrict minimizes sup{t ∈ R | Fx ∈ tk + B} if and only if x ∈ Astrict minimizes inf_{ξ∈U} f(x, ξ), i.e., if and only if x is an optimal solution to (oRC). □

Remark 6.5.7. A second-stage optimization or an augmented scalarization can be used to avoid weakly efficient elements w.r.t. ≤CY, taking into account Corollary 6.3.13 (as in the case of strict robustness, see Remark 6.5.3).


6.5.3 Nonlinear Scalarizing Functional for Regret Robustness

As in the case of strict robustness (see Section 6.2.2), the reformulation of the regret robust counterpart (rRC) (see Section 6.2.4) as a vector optimization problem in a linear topological space Y of real-valued functions F : U → R leads to a representation using the translation invariant functional (6.26). The dominating set B ⊂ Y is now shifted by the ideal solution f* ∈ Y of problem (Q(ξ)). In the next theorem, we show that x ∈ R^n is an optimal solution to (rRC) if and only if Fx solves problem (Pk,Bregret,Fstrict) for Bregret := CY − f* and k :≡ 1 ∈ Y (see [334, Theorem 10]).

Theorem 6.5.8. Let Bregret := CY − f* and k :≡ 1 ∈ Y. Then, x ∈ R^n solves (rRC) ⇐⇒ Fx is a solution of (Pk,Bregret,Fstrict).

Proof. Because of Bregret + [0, +∞) · k ⊆ Bregret, condition (6.25) is fulfilled. Furthermore,

  ϕBregret,k(Fx) = inf{t ∈ R | Fx ∈ tk − Bregret} = inf{t ∈ R | ∀ξ ∈ U : Fx(ξ) − t ≤ f*(ξ)} = inf{t ∈ R | ∀ξ ∈ U : f(x, ξ) − f*(ξ) ≤ t} = sup_{ξ∈U}(f(x, ξ) − f*(ξ)).

Therefore, Fx is minimal for (Pk,Bregret,Fstrict) if and only if x minimizes sup_{ξ∈U}(f(x, ξ) − f*(ξ)), i.e., if it is optimal for (rRC). □

The next assertion is shown in [334, Lemma 7] as a conclusion of Theorem 2.4.1.

Lemma 6.5.9. Suppose that Y = C(U, R) and k :≡ 1 ∈ Y. Then, the functional ϕBregret,k is continuous, finite-valued, CY-monotone, strictly (int CY)-monotone, convex, and

  ∀ Fx ∈ Y, ∀ t ∈ R : ϕBregret,k(Fx) ≤ t ⇐⇒ Fx ∈ tk − (CY − f*),
  ∀ Fx ∈ Y, ∀ t ∈ R : ϕBregret,k(Fx) < t ⇐⇒ Fx ∈ tk − int(CY − f*).

Proof. For f* ∈ Y, we get for all k ∈ int CY

  ϕCY−f*,k(y) = ϕCY,k(y − f*),

i.e., it holds that ϕBregret,k(Fx) = ϕB,k(Fx − f*) for all Fx ∈ Y. Employing this shift, and since the functional ϕB,k is continuous, finite-valued, CY-monotone, strictly (int CY)-monotone and convex (see Theorem 2.4.1 and Remark 6.5.2), we obtain these properties for the functional ϕBregret,k as well. Moreover, we conclude


  ∀ Fx ∈ Y, ∀ t ∈ R : ϕBregret,k(Fx) ≤ t ⇐⇒ Fx ∈ tk − (CY − f*),
  ∀ Fx ∈ Y, ∀ t ∈ R : ϕBregret,k(Fx) < t ⇐⇒ Fx ∈ tk − int(CY − f*)

from the corresponding properties of the functional ϕB,k in Theorem 2.4.1. □

For the special case of a discrete uncertainty set U = {ξ1, . . . , ξq}, q ∈ N, the scalarized problem (Pk,Bregret,Fstrict) reduces to

  min_{Fx∈Fstrict} ϕB̄regret,k(Fx) = min_{x∈Astrict} max_{ξ∈U}(f(x, ξ) − f*(ξ))   (6.28)

with B̄regret := R^q_+ − (f*(ξ1), . . . , f*(ξq))^T and k := (1, . . . , 1)^T, see [333]. In the light of the associated vector optimization problem

  ≤Rq+ - opt_{x∈Astrict}(f(x, ξ1), . . . , f(x, ξq)),

(6.28) corresponds to a reference point approach with reference point f*, and to a weighted Tchebycheff scalarization with reference point f* (and with equal weights) if f(x, ξ) ≥ 0 for all ξ ∈ U and x ∈ Astrict.

Remark 6.5.10. Again (compare Remarks 6.5.3 and 6.5.7), a second-stage optimization or an augmented scalarization can be employed in this case as well to guarantee efficient (instead of weakly efficient) elements w.r.t. ≤CY.

6.5.4 Nonlinear Scalarizing Functional for Reliability

Suppose that Y is a linear topological space of real-valued functions F : U → R. The following theorem shows how the reliably robust optimization problem (see Section 6.2.5) can be depicted employing the nonlinear translation invariant functional ϕB,k (as for strict robustness (see Theorem 6.5.1), but with the feasible set Frely instead of Fstrict). We show that x ∈ R^n is an optimal solution to (Rely) if and only if Fx solves problem (Pk,B,Frely) for B = CY and k :≡ 1 ∈ Y (see [334, Theorem 13]).

Theorem 6.5.11. Let B = CY and k :≡ 1 ∈ Y. Then, x ∈ R^n solves (Rely) ⇐⇒ Fx is a solution of (Pk,B,Frely).

Taking into account that the scalarized problems (Pk,B,Frely) and (Pk,B,Fstrict) (for reliable robustness and for strict robustness, respectively) differ only in their respective admissible sets, they have a comparable interpretation in the case that the set of scenarios is finite: then (Pk,B,Frely) reduces to a reference point approach with the origin as reference point, which coincides with the weighted Tchebycheff scalarization (with equal weights) of the corresponding vector optimization problem


  ≤Rq+ - opt_{x∈Arely}(f(x, ξ1), . . . , f(x, ξq))

if f(x, ξ) ≥ 0 for all ξ ∈ U and x ∈ Arely.

Remark 6.5.12. As in the cases of strict, optimistic, and regret robustness (compare Remarks 6.5.3, 6.5.7 and 6.5.10), the scalarization by the functional ϕB,k can be complemented by a second-stage optimization or an appropriate augmentation in order to avoid weakly efficient elements.

6.5.5 Nonlinear Scalarizing Functional for Adjustable Robustness

Suppose that Y is a linear topological space of real-valued functions F : U → R. We study the relationship between nonlinear scalarization and adjustable robustness introduced in Section 6.2.6. In the next theorem, we show that x ∈ R^n is an optimal solution to (aRC) if and only if Fx solves problem (Pk,B,Fadjust) for B = CY and k :≡ 1 ∈ Y (see [334, Theorem 16]).

Theorem 6.5.13. Suppose that B = CY and k :≡ 1 ∈ Y. Then, x ∈ R^n solves (aRC) ⇐⇒ Fx is a solution of (Pk,B,Fadjust).

Proof. The proof is completely analogous to the proof of Theorem 6.5.1, with the adapted admissible set Fadjust instead of Fstrict. □

The scalarized problems (Pk,B,Fadjust) and (Pk,B,Fstrict) (for adjustable robustness and for strict robustness, respectively) again differ only in their admissible sets. Therefore, the scalarized problem (Pk,B,Fadjust) also corresponds to a reference point approach with the origin as reference point for finite sets of scenarios.

Remark 6.5.14. Also in the case of adjustable robustness, a second-stage optimization can be conducted, or a suitable augmentation term can be attached to the nonlinear translation invariant functional ϕB,k, in order to avoid weakly efficient elements as before.

6.5.6 Nonlinear Scalarizing Functional for Minimizing the Expectation

Let Y be a linear topological space of real-valued functions F : U → R. A reformulation of problem (Exp) (introduced in Section 6.2.7) based on the nonlinear translation invariant functional (6.26) can be realized as explained in the following. Toward this end, we prove that x ∈ R^n is an optimal solution to (Exp) if and only if Fx solves problem (Pk,Bexp,Fstrict) for Bexp := {F ∈ Y | ∫_U F(ξ)p(ξ)dξ ≥ 0} and k :≡ 1 ∈ Y (see [334, Theorem 18]).

Theorem 6.5.15. Let Bexp := {F ∈ Y | ∫_U F(ξ)p(ξ)dξ ≥ 0} be a closed set and k :≡ 1 ∈ Y. Then,

  x ∈ R^n solves (Exp) ⇐⇒ Fx is a solution of (Pk,Bexp,Fstrict).

Proof. Since Bexp + [0, +∞) · k ⊆ Bexp holds, the inclusion (6.25) is fulfilled. Whenever Fx ∈ Y and t ∈ R, we have Fx − tk ∈ Y. Therefore, we obtain

  ϕBexp,k(Fx) = inf{t ∈ R | Fx ∈ tk − Bexp}
            = inf{t ∈ R | Fx − tk ∈ −Bexp}
            = inf{t ∈ R | ∫_U (Fx(ξ) − t)p(ξ)dξ ≤ 0}
            = inf{t ∈ R | ∫_U Fx(ξ)p(ξ)dξ ≤ t · ∫_U p(ξ)dξ}   (where ∫_U p(ξ)dξ = 1)
            = inf{t ∈ R | ∫_U Fx(ξ)p(ξ)dξ ≤ t}
            = ∫_U f(x, ξ)p(ξ)dξ. □

(6.29)

514

6 Scalar Optimization under Uncertainty

6.5.7 Nonlinear Scalarizing Functional for Two-Stage Stochastic Programming We discuss a reformulation of the two-stage stochastic programming problem (6.6) (introduced in Section 6.2.9) based on the nonlinear translation invariant functionals (6.26) defined on a linear topological space Y of real-valued functions F : U → R. Again, we are using the similarity to the problem of minimizing the expectation and apply a very similar treatment as in Section 6.5.6. Therefore, we will state the adapted restatement without repeating the proof. The next theorem shows that x ∈ Rn is an optimal solution to the two-stage stochastic programming problem (6.6) if and only if Fx solves problem (Pk,Bexp ,F2−stage )  for Bexp = {F ∈ Y | F (ξ)p(ξ)dξ ≥ 0}, and k :≡ 1 ∈ Y (see [334, Theorem U

22]).  Theorem 6.5.17. Consider a closed set Bexp = {F ∈ Y | F (ξ)p(ξ)dξ ≥ 0}, U

and k :≡ 1 ∈ Y . Then,

x ∈ Rn solves (6.6) ⇐⇒ Fx is a solution of (Pk,Bexp ,F2−stage ) . 6.5.8 Nonlinear Scalarization Approach for ε-Constraint Robustness Using properties and tools from nonlinear scalarization in vector optimization, we get a motivation for deriving new concepts of robustness. In [334, Section 4.3], it is explained how the nonlinear translation invariant functional ϕB,k can be employed for deriving the new concept of ε-constraint robustness for a compact uncertainty set U (see [333]). Suppose that Y is a linear topological space of real-valued functions F : U → R. Consider a fixed ξ ∈ U, a functional ε : U → R, and Fstrict = {Fx ∈ Y | x ∈ Astrict }, where Astrict ⊆ Rn is the set of strictly feasible elements (compare (6.2)) and Fx (ξ) := f (x, ξ) for all ξ ∈ U. Moreover, consider a functional kε : U → R, 1 for ξ = ξ, kε (ξ) := (6.30) 0 otherwise. Again, we refer the natural ordering cone of Y by CY = {F ∈ Y | ∀ξ ∈ U : F (ξ) ≥ 0} (see Section 6.3). Furthermore, let us define Bε := {y ∈ Y | y ∈ CY − ε},

(6.31)

For given kε and Bε , conversely to our approaches presented in the previous sections, we employ the nonlinear translation invariant functional ϕBε ,kε on the feasible set Fstrict to define the concept of ε-constraint robust solutions:


An element x ∈ Rn is called an ε-constraint robust solution if x solves the scalarized problem

(Pkε,Bε,Fstrict):   ϕBε,kε(F) → inf,  F ∈ Fstrict.

The ε-constraint robust counterpart is formulated in the next theorem; see [334, Theorem 23].

Theorem 6.5.18. Consider ε : U → R. Then, for k = kε and B = Bε, condition (6.25) holds, and the scalarized problem (Pkε,Bε,Fstrict) is equivalent to

(εRC)   inf f(x, ξ̄) − ε(ξ̄)
        s.t. ∀ ξ ∈ U : hi(x, ξ) ≤ 0, i = 1, . . . , m, x ∈ Rn,
             ∀ ξ ∈ U \ {ξ̄} : f(x, ξ) ≤ ε(ξ).

Proof. Taking into account Bε + [0, +∞) · kε ⊆ Bε, condition (6.25) is fulfilled. Furthermore, for Fx ∈ Fstrict,

inf_{Fx ∈ Fstrict} ϕBε,kε(Fx)
  = inf_{Fx ∈ Fstrict} inf{t ∈ R | Fx ∈ tkε − Bε}
  = inf_{Fx ∈ Fstrict} inf{t ∈ R | Fx − tkε ∈ −Bε}
  = inf_{x ∈ Astrict} inf{t ∈ R | f(x, ξ̄) ≤ t + ε(ξ̄), ∀ ξ ∈ U \ {ξ̄} : f(x, ξ) ≤ ε(ξ)}
  = inf{f(x, ξ̄) − ε(ξ̄) | x ∈ Rn, ∀ ξ ∈ U : hi(x, ξ) ≤ 0, i = 1, . . . , m, ∀ ξ ∈ U \ {ξ̄} : f(x, ξ) ≤ ε(ξ)}.  □

The above discussion shows that the nonlinear translation invariant functional ϕB,k can be employed to define new concepts for handling uncertainty. We refer to problem (εRC) as the ε-constraint robust counterpart of an optimization problem under uncertainty (Q(ξ), ξ ∈ U). The concept of ε-constraint robustness can be interpreted as follows: the definition of kε in (6.30) signifies that only one objective function, for one particular value ξ̄ ∈ U, is minimized, whereas the remaining, possibly infinitely many, objective functions f(x, ξ), ξ ∈ U \ {ξ̄}, are handled as additional constraints bounded by ε(ξ), ξ ∈ U \ {ξ̄}. For a compact set of uncertainties U, the same difficulty arises as in the finite-dimensional case (see [333]), namely, how one should choose the function ε. The decision-maker must ensure that the upper bounds on the remaining objective functions are chosen in such a way that the problem remains feasible (this means that the values of ε should not be too small) and, at the same time, that the objective function values attain the requested levels (this means that the values of ε should be small enough). The ε-constraint robust counterpart problem may be helpful for a decision-maker whose preferences are not reflected by any other robustness approach, or it may offer a comprehensive choice of options. Furthermore, the functional ε could, for instance, encode a company's regulations or safety standards that have to be fulfilled.
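The reduction in the proof of Theorem 6.5.18 can be checked numerically. The following minimal Python sketch (an illustration under a finite-scenario assumption; all names are hypothetical) evaluates ϕBε,kε directly from the membership condition Fx − tkε ∈ −Bε derived above: the value is f(x, ξ̄) − ε(ξ̄) when all remaining scenarios respect their bounds, and +∞ otherwise, since t does not affect the scenarios ξ ≠ ξ̄.

```python
import numpy as np

def phi_eps_constraint(F, eps, j_bar):
    """Evaluate phi_{B_eps, k_eps}(F) = inf{ t : F - t*k_eps in eps - C_Y }
    for a finite scenario set; j_bar is the index of the fixed scenario."""
    others_ok = all(F[j] <= eps[j] for j in range(len(F)) if j != j_bar)
    if not others_ok:
        return np.inf                  # constraints on j != j_bar fail for every t
    return F[j_bar] - eps[j_bar]       # smallest t with F[j_bar] - t <= eps[j_bar]

eps = np.array([3.0, 2.0, 4.0])        # hypothetical bounds eps(xi_j)
F_good = np.array([5.0, 1.5, 3.0])     # scenarios 1 and 2 are within bounds
F_bad = np.array([5.0, 2.5, 3.0])      # scenario 1 violates its bound
print(phi_eps_constraint(F_good, eps, j_bar=0))   # 5.0 - 3.0 = 2.0
print(phi_eps_constraint(F_bad, eps, j_bar=0))    # inf (infeasible for eps)
```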


6.5.9 An Overview on Concepts of Robustness Based on Translation Invariant Functionals

The reformulations of robust counterpart problems based on nonlinear translation invariant functionals given in (2.42) are closely related to the respective vector optimization counterpart problems. A summary of the approach based on nonlinear scalarization is given in Table 6.3.

Concept                            Section   B                    k    F
Strict robustness                  6.5.1     CY                   1    Fstrict
Optimistic robustness              6.5.2     Binf                 1    Fstrict
Regret robustness                  6.5.3     Bregret = CY − f∗    1    Fstrict
Reliability                        6.5.4     CY                   1    Frely
Adjustable robustness              6.5.5     CY                   1    Fadjust
Minimizing the expectation         6.5.6     Bexp                 1    Fstrict
Two-stage stochastic programming   6.5.7     Bexp                 1    F2−stage
ε-constraint robustness            6.5.8     Bε                   kε   Fstrict

Table 6.3. Summary of interpretations based on scalarization by nonlinear translation invariant functionals given in (2.42), adapted from [334, Table 3]
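The unifying role of ϕB,k in Table 6.3 can also be illustrated computationally. The sketch below (a hypothetical illustration for a finite scenario set, not the book's method) evaluates the scalarization for three rows of the table; only reductions derived in this section are used, and the worst-case form for B = CY, k = 1 follows directly from the definition of CY, since Fx − t·1 ∈ −CY means f(x, ξ) ≤ t for all ξ.

```python
import numpy as np

def phi(concept, F, p=None, eps=None, j_bar=None):
    """Evaluate phi_{B,k}(F_x) for selected rows of Table 6.3 (finite U)."""
    if concept == "strict":            # B = C_Y, k = 1: worst case over scenarios
        return float(np.max(F))
    if concept == "expectation":       # B = B_exp, k = 1: Theorem 6.5.15
        return float(np.dot(F, p))
    if concept == "eps-constraint":    # B = B_eps, k = k_eps: Theorem 6.5.18
        if any(F[j] > eps[j] for j in range(len(F)) if j != j_bar):
            return np.inf
        return float(F[j_bar] - eps[j_bar])
    raise ValueError(concept)

F = np.array([4.0, -1.0, 2.5])         # values f(x, xi_j) for a fixed x
print(phi("strict", F))                                    # 4.0
print(phi("expectation", F, p=np.array([0.2, 0.5, 0.3])))  # 1.05
print(phi("eps-constraint", F,
          eps=np.array([9.0, 0.0, 3.0]), j_bar=0))         # 4.0 - 9.0 = -5.0
```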

References

1. Agarwal, R.P., Balaj, M., O'Regan, D.: Common fixed point theorems in topological vector spaces via intersection theorems. J. Optim. Theory Appl. 173(2), 443–458 (2017)
2. Agarwal, R.P., Balaj, M., O'Regan, D.: Intersection theorems with applications in optimization. J. Optim. Theory Appl. 179(3), 761–777 (2018)
3. Aissi, H., Bazgan, C., Vanderpooten, D.: Approximation of min-max and min-max regret versions of some combinatorial optimization problems. European J. Oper. Res. 179, 281–290 (2007)
4. Aissi, H., Bazgan, C., Vanderpooten, D.: General approximation schemes for min-max (regret) versions of some (pseudo-)polynomial problems. Discrete Optim. 7, 136–148 (2010)
5. Ait Mansour, M., Elakri, R.A., Laghdir, M.: Equilibrium and quasi-equilibrium problems under ϕ-quasimonotonicity and ϕ-quasiconvexity. Existence, stability and applications. Minimax Theory Appl. 2(2), 175–229 (2017)
6. Ait Mansour, M., Metrane, A., Théra, M.: Lower semicontinuous regularization for vector-valued mappings. J. Global Optim. 35(2), 283–309 (2006)
7. Ait Mansour, M., Popovici, N., Théra, M.: On directed sets and their suprema. Positivity 11(1), 155–169 (2007)
8. Ait Mansour, M., Riahi, H.: On the cone minima and maxima of directed convex free disposal subsets and applications. Minimax Theory Appl. 1(2), 163–195 (2016)
9. Akian, M.: Densities of idempotent measures and large deviations. Trans. Amer. Math. Soc. 351(11), 4515–4543 (1999)
10. Akian, M., Singer, I.: Topologies on lattice ordered groups, separation from closed downward sets and conjugations of type Lau. Optimization 52(6), 629–673 (2003)
11. Alefeld, G., Mayer, G.: Interval analysis: theory and applications. J. Comput. Appl. Math. 121(1–2), 421–464 (2000)
12. Aleman, A.: On some generalizations of convex sets and convex functions. Anal. Numér. Théor. Approx. 14(1), 1–6 (1985)
13. Aliprantis, C.D., Tourky, R.: Cones and Duality. American Mathematical Society, Providence, RI (2007)


14. Alzorba, S., Günther, C., Popovici, N., Tammer, C.: A new algorithm for solving planar multiobjective location problems involving the Manhattan norm. European J. Oper. Res. 258(7), 35–46 (2017)
15. Amahroq, T., Taa, A.: On Lagrange-Kuhn-Tucker multipliers for multiobjective optimization problems. Optimization 41(2), 159–172 (1997)
16. Ansari, A.H., Siddiqi, A.H., Yao, J.C.: Generalized vector variational-like inequalities and their scalarizations. In: Vector Variational Inequalities and Vector Equilibria, pp. 17–37. Kluwer Academic Publishers, Dordrecht (2000)
17. Ansari, Q.H.: On generalized vector variational-like inequalities. Ann. Sci. Math. Québec 19(2), 131–137 (1995)
18. Ansari, Q.H.: Vector equilibrium problems and vector variational inequalities. In: Vector Variational Inequalities and Vector Equilibria, pp. 1–15. Kluwer Academic Publishers, Dordrecht (2000)
19. Ansari, Q.H., Siddiqi, A.H.: A generalized vector variational-like inequality and optimization over an efficient set. In: Functional Analysis with Current Applications in Science, Technology and Industry (Aligarh, 1996), Pitman Research Notes in Mathematics Series, vol. 377, pp. 177–191. Longman, Harlow (1998)
20. Antipin, A.S.: Controlled proximal differential systems for solving saddle problems. Differentsial'nye Uravneniya 28(11), 1846–1861, 2021 (1992)
21. Antipin, A.S.: Saddle gradient feedback-controlled processes. Avtomat. i Telemekh. 3, 12–23 (1994)
22. Antipin, A.S.: Convergence and estimates for the rate of convergence of proximal methods to fixed points of extremal mappings. Zh. Vychisl. Mat. i Mat. Fiz. 35(5), 688–704 (1995)
23. Arrow, K.J., Barankin, E.W., Blackwell, D.: Admissible points of convex sets. In: Contributions to the Theory of Games, vol. 2, Annals of Mathematics Studies, no. 28, pp. 87–91. Princeton University Press, Princeton, NJ (1953)
24. Asplund, E.: Fréchet differentiability of convex functions. Acta Math. 121, 31–47 (1968)
25. Attouch, H., Cabot, A., Chbani, Z., Riahi, H.: Rate of convergence of inertial gradient dynamics with time-dependent viscous damping coefficient. Evol. Equ. Control Theory 7(3), 353–371 (2018)
26. Attouch, H., Chbani, Z., Riahi, H.: Fast proximal methods via time scaling of damped inertial dynamics. SIAM J. Optim. 29(3), 2227–2256 (2019)
27. Attouch, H., Chbani, Z., Riahi, H.: Rate of convergence of the Nesterov accelerated gradient method in the subcritical case α ≤ 3. ESAIM Control Optim. Calc. Var. 25, Paper No. 2, 34 (2019)
28. Attouch, H., Damlamian, A.: Strong solutions for parabolic variational inequalities. Nonlinear Anal. 2(3), 329–353 (1978)
29. Attouch, H., Garrigos, G.: Multiobjective optimization: an inertial dynamical approach to Pareto optima (2015)
30. Attouch, H., Garrigos, G., Goudou, X.: A dynamic gradient approach to Pareto optimization with nonsmooth convex objective functions. J. Math. Anal. Appl. 422(1), 741–771 (2015)
31. Attouch, H., Goudou, X.: A continuous gradient-like dynamical approach to Pareto-optimization in Hilbert spaces. Set-Valued Var. Anal. 22(1), 189–219 (2014)


32. Attouch, H., Riahi, H.: Stability results for Ekeland's ε-variational principle and cone extremal solutions. Math. Oper. Res. 18(1), 173–201 (1993)
33. Attouch, H., Wets, R.J.B.: Quantitative stability of variational systems. II. A framework for nonlinear conditioning. SIAM J. Optim. 3(2), 359–381 (1993)
34. Aubin, J.P.: Optima and Equilibria. Springer, Berlin (1993)
35. Aubin, J.P., Ekeland, I.: Applied Nonlinear Analysis. Pure and Applied Mathematics (New York). Wiley, New York (1984)
36. Aubin, J.P., Frankowska, H.: Set-Valued Analysis, Systems & Control: Foundations & Applications, vol. 2. Birkhäuser Boston Inc., Boston, MA (1990)
37. Baiocchi, C., Capelo, A.: Variational and Quasivariational Inequalities. Wiley, New York (1984)
38. Balaj, M.: Weakly G-KKM mappings, G-KKM property, and minimax inequalities. J. Math. Anal. Appl. 294(1), 237–245 (2004)
39. Ballestero, E., Romero, C.: Multiple Criteria Decision Making and its Applications to Economic Problems. Kluwer, Boston (1998)
40. Bao, T.Q., Hillmann, M., Tammer, C.: Subdifferentials of nonlinear scalarization functions and applications. J. Nonlinear Convex Anal. 18(4), 589–605 (2017)
41. Bao, T.Q., Mordukhovich, B.S.: Relative Pareto minimizers for multiobjective problems: existence and optimality conditions. Math. Program. 122(2, Ser. A), 301–347 (2010)
42. Bao, T.Q., Mordukhovich, B.S.: Necessary nondomination conditions in sets and vector optimization with variable ordering structures. J. Optim. Theory Appl. 162, 350–370 (2014)
43. Bao, T.Q., Tammer, C.: Subdifferentials and SNC property of scalarization functionals with uniform level sets and applications. J. Nonlinear Var. Anal. 2(3), 355–378 (2018)
44. Barbu, V.: Nonlinear Differential Equations of Monotone Types in Banach Spaces. Springer, New York (2010)
45. Pallu de la Barrière, R.: Cours d'automatique théorique. Dunod, Paris (1966)
46. Bastin, F., Cirillo, C., Toint, P.L.: Convergence theory for nonconvex stochastic programming with an application to mixed logit. Math. Program. 108, 207–234 (2006)
47. Bauschke, H.H., Combettes, P.L.: Convex Analysis and Monotone Operator Theory in Hilbert Spaces, 2nd edn. Springer, Cham (2017)
48. Beale, E.: On minimizing a convex function subject to linear inequalities. J. R. Stat. Soc. Ser. B 17, 173–184 (1955)
49. Beck, A., Ben-Tal, A.: Duality in robust optimization: primal worst equals dual best. Oper. Res. Lett. 37(1), 1–6 (2009)
50. Bednarczuk, E.M.: Some stability results for vector optimization problems in partially ordered topological vector spaces. In: World Congress of Nonlinear Analysts '92, Vol. I–IV (Tampa, FL, 1992), pp. 2371–2382. de Gruyter, Berlin (1996)
51. Bednarczuk, E.: An approach to well-posedness in vector optimization: consequences to stability. Control Cybernet. 23(1–2), 107–122 (1994)
52. Bednarczuk, E.M.: Berge-type theorems for vector optimization problems. Optimization 32(4), 373–384 (1995)
53. Ben-Tal, A., Ghaoui, L.E., Nemirovski, A.: Robust Optimization. Princeton University Press, Princeton and Oxford (2009)


54. Ben-Tal, A., Goryashko, A., Guslitzer, E., Nemirovski, A.: Adjustable robust solutions of uncertain linear programs. Math. Program. A 99, 351–376 (2003)
55. Ben-Tal, A., Nemirovski, A.: Robust solutions of linear programming problems contaminated with uncertain data. Math. Program. 88, 411–424 (2000)
56. Benker, H.: Upper and lower bounds for minimal norm problems under linear constraints. In: Mathematical Control Theory, Banach Center Publications, vol. 14, pp. 35–45. PWN, Warsaw (1985)
57. Benker, H., Hamel, A., Tammer, C.: A proximal point algorithm for control approximation problems. I. Theoretical background. Math. Methods Oper. Res. 43(3), 261–280 (1996)
58. Benker, H., Hamel, A., Tammer, C.: An algorithm for vectorial control approximation problems. In: Multiple Criteria Decision Making (Hagen, 1995), pp. 3–12. Springer, Berlin (1997)
59. Benker, H., Kossert, S.: Remarks on quadratic optimal control problems in Hilbert spaces. Z. Anal. Anwendungen 1(3), 13–21 (1982)
60. Berge, C.: Espaces topologiques: Fonctions multivoques. Collection Universitaire de Mathématiques, Vol. III. Dunod, Paris (1959)
61. Berge, C.: Topological Spaces. Dover Publications Inc, Mineola, NY (1997)
62. Bernau, H.: Interactive methods for vector optimization. In: Optimization in Mathematical Physics (Oberwolfach, 1985), Methoden und Verfahren der mathematischen Physik, vol. 34, pp. 21–36. Peter Lang, Frankfurt a. M. (1987)
63. Bertsimas, D., Sim, M.: The price of robustness. Oper. Res. 52(1), 35–53 (2004)
64. Bessaga, C., Klee, V.: Every non-normable Fréchet space is homeomorphic with all of its closed convex bodies. Math. Ann. 163, 161–166 (1966)
65. Bianchi, M., Hadjisavvas, N., Schaible, S.: Vector equilibrium problems with generalized monotone bifunctions. J. Optim. Theory Appl. 92(3), 527–542 (1997)
66. Bianchi, M., Schaible, S.: Generalized monotone bifunctions and equilibrium problems. J. Optim. Theory Appl. 90(1), 31–43 (1996)
67. Bigi, G., Castellani, M., Pappalardo, M., Passacantando, M.: Nonlinear Programming Techniques for Equilibria. Springer, Cham (2019)
68. Bigi, G., Passacantando, M.: D-gap functions and descent techniques for solving equilibrium problems. J. Global Optim. 62(1), 183–203 (2015)
69. Birge, J., Louveaux, F.: Introduction to Stochastic Programming. Springer, New York (1997)
70. Bitran, G.R.: Duality for nonlinear multiple-criteria optimization problems. J. Optim. Theory Appl. 35(3), 367–401 (1981)
71. Bitran, G.R., Magnanti, T.L.: The structure of admissible points with respect to cone dominance. J. Optim. Theory Appl. 29(4), 573–614 (1979)
72. Blum, E., Oettli, W.: Variational principles for equilibrium problems. In: Parametric Optimization and Related Topics, III (Güstrow, 1991), Approx. Optim., vol. 3, pp. 79–88. Peter Lang, Frankfurt am Main (1993)
73. Blum, E., Oettli, W.: From optimization and variational inequalities to equilibrium problems. Math. Stud. 63, 123–145 (1994)
74. Boţ, R.I., Grad, S.M., Wanka, G.: Duality in Vector Optimization. Springer, Berlin (2009)
75. Boţ, R.I., Wanka, G.: An analysis of some dual problems in multiobjective optimization. I, II. Optimization 53(3), 281–300, 301–324 (2004)


76. Bonnisseau, J.M., Crettez, B.: On the characterization of efficient production vectors. Econ. Theory 31, 213–223 (2007)
77. Borwein, J.: Proper efficient points for maximizations with respect to cones. SIAM J. Control Optim. 15(1), 57–63 (1977)
78. Borwein, J., Goebel, R.: Notions of relative interior in Banach spaces. J. Math. Sci. (N.Y.) 115(4), 2542–2553 (2003)
79. Borwein, J.M.: Convex relations in analysis and optimization. In: Generalized Concavity in Optimization and Economics, pp. 335–377. Academic Press, Inc. (1981)
80. Borwein, J.M.: A Lagrange multiplier theorem and a sandwich theorem for convex relations. Math. Scand. 48(2), 189–204 (1981)
81. Borwein, J.M.: Continuity and differentiability properties of convex operators. Proc. London Math. Soc. (3) 44(3), 420–444 (1982)
82. Borwein, J.M.: On the existence of Pareto efficient points. Math. Oper. Res. 8(1), 64–73 (1983)
83. Borwein, J.M., Lewis, A.S.: Partially finite convex programming. I. Quasi relative interiors and duality theory. Math. Program. 57(1, Ser. B), 15–48 (1992)
84. Borwein, J.M., Preiss, D.: A smooth variational principle with applications to subdifferentiability and to differentiability of convex functions. Trans. Amer. Math. Soc. 303(2), 517–527 (1987)
85. Borwein, J.M., Théra, M.: Sandwich theorems for semicontinuous operators. Canad. Math. Bull. 35(4), 463–474 (1992)
86. Borwein, J.M., Zhuang, D.: Super efficiency in vector optimization. Trans. Amer. Math. Soc. 338(1), 105–122 (1993)
87. Boţ, R.I.: Conjugate Duality in Convex Optimization. Springer, Berlin (2010)
88. Botte, M.: Aspects of multi-scenario efficiency. Master's thesis, Universität Göttingen, Faculty of Mathematics (2015)
89. Bouza, G., Quintana, E., Tammer, C.: A unified characterization of nonlinear scalarizing functionals in optimization. Vietnam J. Math. 47(3), 683–713 (2019)
90. Breckner, W.W.: Dualität bei Optimierungsaufgaben in halbgeordneten topologischen Vektorräumen. I [Duality for vector optimization problems in partially ordered vector spaces]. Math. Rev. Anal. Numér. Théor. Approx. 1, 5–35 (1972)
91. Breckner, W.W.: Dualität bei Optimierungsaufgaben in halbgeordneten topologischen Vektorräumen. I, II. Rev. Anal. Numér. Théorie Approx. 1, 5–35; ibid. 2 (1973), 27–35 (1972)
92. Breckner, W.W.: Derived sets for weak multiobjective optimization problems with state and control variables. J. Optim. Theory Appl. 93(1), 73–102 (1997)
93. Breckner, W.W., Sekatzek, M., Tammer, C.: Approximate saddle point assertions for a general class of approximation problems. In: Approximation, Optimization and Mathematical Economics (Pointe-à-Pitre, 1999), pp. 71–80. Physica, Heidelberg (2001)
94. Brézis, H.: Opérateurs maximaux monotones et semi-groupes de contractions dans les espaces de Hilbert. North-Holland Publishing Co., Amsterdam-London; American Elsevier Publishing Co., Inc., New York (1973)
95. Brézis, H., Nirenberg, L., Stampacchia, G.: A remark on Ky Fan's minimax principle. Boll. Un. Mat. Ital. 4(6), 293–300 (1972)


96. Brøndsted, A.: On a lemma of Bishop and Phelps. Pacific J. Math. 55, 335–341 (1974)
97. Brosowski, B., Conci, A.: On vector optimization and parametric programming. In: Segundas Jornados Latino Americanas de Matematica Aplicada, vol. 2, pp. 483–495 (1983)
98. Bruck, R.E., Jr.: Asymptotic convergence of nonlinear contraction semigroups in Hilbert space. J. Funct. Anal. 18, 15–26 (1975)
99. Brumelle, S.: Duality for multiple objective convex programs. Math. Oper. Res. 6(2), 159–172 (1981)
100. Bui, H., Kruger, A.: About extensions of the extremal principle. Vietnam J. Math. 46, 215–242 (2018)
101. Bui, H., Kruger, A.: Extremality, stationarity and generalized separation of collection of sets. J. Optim. Theory Appl. 182(1), 211–264 (2019)
102. Cappanera, P.: A survey on obnoxious facility location problems. Tech. rep. (1999)
103. Cârjă, O.: Elements of Nonlinear Functional Analysis. Editura Universităţii "Al. I. Cuza" Iaşi (1998)
104. Carrizosa, E., Conde, E., Fernandez, F.R., Puerto, J.: Efficiency in Euclidean constrained location problems. Oper. Res. Lett. 14(5), 291–295 (1993)
105. Carrizosa, E., Fernandez, R.: A polygonal upper bound for the efficient set for single-location problems with mixed norms. Top. Soc. Estad. Investig. Oper., Madrid, pp. 107–116 (1993)
106. Carrizosa, E., Fernandez, R., Puerto, J.: An axiomatic approach to location criteria. In: Orban, F., Rasson, J.P. (eds.) FUNDP. Namur, Belgium (1990)
107. Carrizosa, E., Fernandez, R., Puerto, J.: Determination of a pseudoefficient set for single-location problems with mixed polyhedral norms. In: Orban, F., Rasson, J.P. (eds.) FUNDP. Namur, Belgium (1990)
108. Carrizosa, E., Plastria, F.: A characterization of efficient points in constrained location problems with regional demand. Working paper, BEIF/53, Vrije Universiteit Brussel, Brussels, Belgium (1993)
109. Carrizosa, E., Plastria, F.: Location of semi-obnoxious facilities. Stud. Locat. Anal. 12, 1–27 (1999)
110. Censor, Y., Zenios, S.A.: Proximal minimization algorithm with D-functions. J. Optim. Theory Appl. 73(3), 451–464 (1992)
111. Cesari, L., Suryanarayana, M.B.: Existence theorems for Pareto optimization; multivalued and Banach space valued functionals. Trans. Amer. Math. Soc. 244, 37–65 (1978)
112. Chadli, O., Chbani, Z., Riahi, H.: Equilibrium problems with generalized monotone bifunctions and applications to variational inequalities. J. Optim. Theory Appl. 105(2), 299–323 (2000)
113. Chadli, O., Konnov, I.V., Yao, J.C.: Descent methods for equilibrium problems in a Banach space. Comput. Math. Appl. 48(3–4), 609–616 (2004)
114. Chalmet, L.G., Francis, R.L., Kolen, A.: Finding efficient solutions for rectilinear distance location problems efficiently. European J. Oper. Res. 6(2), 117–124 (1981)
115. Chang, T.H., Yen, C.L.: KKM property and fixed point theorems. J. Math. Anal. Appl. 203(1), 224–235 (1996)


116. Chbani, Z., Mazgouri, Z., Riahi, H.: From convergence of dynamical equilibrium systems to bilevel hierarchical Ky Fan minimax inequalities and applications. Minimax Theory Appl. 4(2), 231–270 (2019)
117. Chbani, Z., Riahi, H.: Existence and asymptotic behaviour for solutions of dynamical equilibrium systems. Evol. Equ. Control Theory 3(1), 1–14 (2014)
118. Chen, G., Teboulle, M.: Convergence analysis of a proximal-like minimization algorithm using Bregman functions. SIAM J. Optim. 3(3), 538–543 (1993)
119. Chen, G.Y.: Existence of solutions for a vector variational inequality: an extension of the Hartmann-Stampacchia theorem. J. Optim. Theory Appl. 74(3), 445–456 (1992)
120. Chen, G.Y., Chen, G.M.: Vector variational inequality and vector optimization. In: Lecture Notes in Economics and Mathematical Systems, vol. 285, pp. 408–416. Springer, New York (1987)
121. Chen, G.Y., Craven, B.D.: Approximate dual and approximate vector variational inequality for multiobjective optimization. J. Austral. Math. Soc. Ser. A 47(3), 418–423 (1989)
122. Chen, G.Y., Craven, B.D.: A vector variational inequality and optimization over an efficient set. ZOR 3, 1–12 (1990)
123. Chen, G.Y., Goh, C.J., Yang, X.Q.: Existence of a solution for generalized vector variational inequalities. Optimization 50(1–2), 1–15 (2001)
124. Chen, G.Y., Hou, S.H.: Existence of solutions for vector variational inequalities. In: Vector Variational Inequalities and Vector Equilibria, pp. 73–86. Kluwer Academic Publishers, Dordrecht (2000)
125. Chen, G.Y., Huang, X.X.: Ekeland's ε-variational principle for set-valued mappings. Math. Methods Oper. Res. 48(2), 181–186 (1998)
126. Chen, G.Y., Huang, X.X.: A unified approach to the existing three types of variational principles for vector valued functions. Math. Methods Oper. Res. 48(3), 349–357 (1998)
127. Chen, G.Y., Huang, X.X., Lee, G.M.: Equivalents of an approximate variational principle for vector-valued functions and applications. Math. Methods Oper. Res. 49(1), 125–136 (1999)
128. Chen, G.Y., Yang, X.: On the existence of solutions to vector complementarity problems. Kluwer Academic Publishers, Dordrecht (2000)
129. Chen, G.Y., Yang, X.Q.: The vector complementary problem and its equivalences with the weak minimal element in ordered spaces. J. Math. Anal. Appl. 153(1), 136–158 (1990)
130. Cheng, Y.H., Fu, W.T.: Strong efficiency in a locally convex space. Math. Methods Oper. Res. 50(3), 373–384 (1999)
131. Chew, K.L.: Maximal points with respect to cone dominance in Banach spaces and their existence. J. Optim. Theory Appl. 44(1), 1–53 (1984)
132. Chiang, Y.: Characterizations for solidness of dual cones with applications. J. Global Optim. 52(1), 79–94 (2012)
133. Chiriaev, A., Walster, G.: Interval arithmetic specification. Technical Report (1998)
134. Chowdhury, M.S.R., Tan, K.K.: Generalized variational inequalities for quasimonotone operators and applications. Bull. Polish Acad. Sci. Math. 45(1), 25–54 (1997)
135. Cieslik, D.: The Fermat-Steiner-Weber-problem in Minkowski spaces. Optimization 19(4), 485–489 (1988)


136. Clarke, F.H.: Optimization and Nonsmooth Analysis. Wiley, New York (1983)
137. Clarke, F.H., Ledyaev, Y.S., Stern, R.J., Wolenski, P.R.: Nonsmooth Analysis and Control Theory. Springer, New York (1998)
138. Cohon, J.L.: Multiobjective Programming and Planning. Dover Publications Inc, Mineola, NY (2003)
139. Combari, C., Laghdir, M., Thibault, L.: Sous-différentiel de fonctions convexes composées. Ann. Sci. Math. Québec 18(2), 119–148 (1994)
140. Corley, H.W.: An existence result for maximizations with respect to cones. J. Optim. Theory Appl. 31(2), 277–281 (1980)
141. Corley, H.W.: Duality theory for maximizations with respect to cones. J. Math. Anal. Appl. 84(2), 560–568 (1981)
142. Corley, H.W.: Existence and Lagrangian duality for maximizations of set-valued functions. J. Optim. Theory Appl. 54(3), 489–501 (1987)
143. Crank, J.: Free and Moving Boundary Problems. Oxford University Press, New York (1984)
144. Craven, B.D.: Invex functions and constrained local minima. Bull. Austral. Math. Soc. 24(3), 357–366 (1981)
145. Craven, B.D., Glover, B.M.: Invex functions and duality. J. Austral. Math. Soc. Ser. A 39(1), 1–20 (1985)
146. Crespi, G.P., Kuroiwa, D., Rocca, M.: Quasiconvexity of set-valued maps assures well-posedness of robust vector optimization. Ann. Oper. Res. 1–16 (2015)
147. Dafermos, S.: Traffic equilibrium and variational inequalities. Transp. Sci. 14, 42–54 (1980)
148. Daneš, J.: A geometric theorem useful in nonlinear functional analysis. Boll. Un. Mat. Ital. 4(6), 369–375 (1972)
149. Daniele, P., Maugeri, A.: Vector variational inequalities and modelling of a continuum traffic equilibrium problem. In: Vector Variational Inequalities and Vector Equilibria, pp. 97–111. Kluwer Academic Publishers, Dordrecht (2000)
150. Daniilidis, A.: Arrow-Barankin-Blackwell theorems and related results in cone duality: a survey. In: Optimization (Namur, 1998), pp. 119–131. Springer, Berlin (2000)
151. Daniilidis, A., Hadjisavvas, N.: Existence theorems for vector variational inequalities. Bull. Austral. Math. Soc. 54(3), 473–481 (1996)
152. Dantzig, G.: Linear programming under uncertainty. Manag. Sci. 1, 197–206 (1955)
153. Dauer, J.P., Gallagher, R.J.: Positive proper efficient points and related cone results in vector optimization theory. SIAM J. Control Optim. 28(1), 158–172 (1990)
154. Dauer, J.P., Stadler, W.: A survey of vector optimization in infinite-dimensional spaces II. J. Optim. Theory Appl. 51(2), 205–241 (1986)
155. Debreu, G.: Theory of Value: An Axiomatic Analysis of Economic Equilibrium. Wiley, New York; Chapman & Hall Ltd, London (1959)
156. Dedieu, J.P.: Critères de fermeture pour l'image d'un fermé non convexe par une multiapplication. C. R. Acad. Sci. Paris Sér. A-B 287(14), A941–A943 (1978)
157. Dentcheva, D., Helbig, S.: On several concepts for ε-efficiency. OR Spektrum 16, 179–186 (1994)


158. Dentcheva, D., Helbig, S.: On variational principles, level sets, well-posedness, and ε-solutions in vector optimization. J. Optim. Theory Appl. 89(2), 325–349 (1996)
159. Ding, X.P., Tarafdar, E.: Generalized vector variational-like inequalities without monotonicity. In: Vector Variational Inequalities and Vector Equilibria, pp. 113–124. Kluwer Academic Publishers, Dordrecht (2000)
160. Dinkelbach, W.: On nonlinear fractional programming. Manag. Sci. 13, 492–498 (1967)
161. Dolecki, S., Malivert, C.: Polarities and stability in vector optimization. In: Recent Advances and Historical Development of Vector Optimization, pp. 96–113. Springer, Berlin (1987)
162. Dolecki, S., Malivert, C.: General duality in vector optimization. Optimization 27(1–2), 97–119 (1993)
163. Drezner, Z. (ed.): Facility Location. Springer Series in Operations Research. Springer, New York (1995)
164. Drezner, Z., Wesolowsky, G.O.: A maximin location problem with maximum distance constraints. AIIE Trans. 12(3), 249–252 (1980)
165. Dunford, N., Schwartz, J.T.: Linear Operators. Part I. Wiley, New York (1988)
166. Durea, M., Strugariu, R.: On some Fermat rules for set-valued optimization problems. Optimization 60, 575–591 (2011)
167. Durea, M., Tammer, C.: Fuzzy necessary optimality conditions for vector optimization problems. Optimization 58, 449–467 (2009)
168. Durier, R.: On Pareto optima, the Fermat-Weber problem and polyhedral gauges. Math. Program. 47, 65–79 (1990)
169. Durier, R., Michelot, C.: Geometrical properties of the Fermat-Weber problem. European J. Oper. Res. 20(3), 332–343 (1985)
170. Durier, R., Michelot, C.: Sets of efficient points in a normed space. J. Math. Anal. Appl. 117(2), 506–528 (1986)
171. Eckstein, J.: Nonlinear proximal point algorithms using Bregman functions, with applications to convex programming. Math. Oper. Res. 18(1), 202–226 (1993)
172. Ehrgott, M.: Multicriteria Optimization. Springer, New York (2005)
173. Ehrgott, M., Ide, J., Schöbel, A.: Minmax robustness for multi-objective optimization problems. European J. Oper. Res. 239, 17–31 (2014)
174. Eichfelder, G.: Adaptive Scalarization Methods in Multiobjective Optimization. Springer, Berlin (2008)
175. Eichfelder, G.: Variable Ordering Structures in Vector Optimization. Springer, Berlin (2014)
176. Eichfelder, G., Jahn, J.: Vector optimization problems and their solution concepts. In: Ansari, Q.H., Yao, J.C. (eds.) Recent Developments in Vector Optimization, pp. 1–27. Springer, Berlin, Heidelberg (2012)
177. Eichfelder, G., Krüger, C., Schöbel, A.: Decision uncertainty in multiobjective optimization. J. Global Optim. 69(2), 485–510 (2017)
178. Eichfelder, G., Pilecka, M.: Set approach for set optimization with variable ordering structures Part I: set relations and relationship to vector approach. J. Optim. Theory Appl. 171(3), 931–946 (2016)
179. Eichfelder, G., Pilecka, M.: Set approach for set optimization with variable ordering structures Part II: scalarization approaches. J. Optim. Theory Appl. 171(3), 947–963 (2016)


180. Ekeland, I.: On the variational principle. J. Math. Anal. Appl. 47, 324–353 (1974)
181. Ekeland, I.: Nonconvex minimization problems. Bull. Amer. Math. Soc. 1, 443–474 (1979)
182. Ekeland, I., Lebourg, G.: Generic Fréchet differentiability and perturbed optimization problems in Banach spaces. Trans. Amer. Math. Soc. 193–216 (1976)
183. El Abdouni, B., Thibault, L.: Lagrange multipliers for Pareto nonsmooth programming problems in Banach spaces. Optimization 26(3–4), 277–285 (1992)
184. Elliott, R.J., Kohlmann, M.: The variational principle and stochastic optimal control. Stochastics 3(3), 229–241 (1980)
185. Elliott, R.J., Kohlmann, M.: The variational principle for optimal control of diffusions with partial information. Syst. Control Lett. 12(1), 63–69 (1989)
186. Engau, A.: Definition and characterization of Geoffrion proper efficiency for real vector optimization with infinitely many criteria. J. Optim. Theory Appl. 165, 439–457 (2015)
187. Ester, J.: Systemanalyse und mehrkriterielle Entscheidung. VEB Verlag Technik, Berlin (1987)
188. Fabian, M., Ioffe, A.: Separable reduction in the theory of Fréchet subdifferentials. Set-Valued Var. Anal. 21(4), 661–671 (2013)
189. Fan, K.: A generalization of Tychonoff's fixed point theorem. Math. Ann. 142, 305–310 (1961)
190. Fan, K.: A Minimax Inequality and Applications. Academic Press, New York (1972)
191. Fan, K.: A survey of some results closely related to the Knaster-Kuratowski-Mazurkiewicz theorem. In: Game Theory and Applications (Columbus, OH, 1987), pp. 358–370. Academic Press, San Diego, CA (1990)
192. Ferro, F.: An optimization result for set-valued mappings and a stability property in vector problems with constraints. J. Optim. Theory Appl. 90(1), 63–77 (1996)
193. Ferro, F.: Optimization and stability results through cone lower semicontinuity. Set-Valued Anal. 5(4), 365–375 (1997)
194. Ferro, F.: A new ABB theorem in Banach spaces. Optimization 46(4), 353–362 (1999)
195. de Figueiredo, D.G.: Lectures on the Ekeland variational principle with applications and detours. Published for the Tata Institute of Fundamental Research, Bombay; by Springer, Berlin (1989)
196. Flåm, S.D., Antipin, A.S.: Equilibrium programming using proximal-like algorithms. Math. Program. 78(1, Ser. A), 29–41 (1997)
197. Flores-Bazán, F.: Radial epiderivatives and asymptotic functions in nonconvex vector optimization. SIAM J. Optim. 14(1), 284–305 (2003)
198. Flores-Bazán, F., Hernández, E., Novo, V.: Characterizing efficiency without linear structure: a unified approach. J. Global Optim. 41(1), 43–60 (2008)
199. Föppl, A.: Vorlesungen über Technische Mechanik. Teubner-Verlag, Leipzig (1923)
200. Francis, R., White, J.: Facility Layout and Location: An Analytical Approach. Prentice-Hall, Englewood Cliffs, New Jersey (1974)
201. Friedrichs, K.O.: Ein Verfahren der Variationsrechnung, das Minimum eines Integrals als das Maximum eines anderen Ausdruckes darzustellen. Nachr. der Gesellschaft der Wiss. zu Göttingen, pp. 13–20 (1929)


202. Fu, J.: Simultaneous vector variational inequalities and vector implicit complementarity problem. J. Optim. Theory Appl. 93, 141–151 (1997)
203. Gabrel, V., Murat, C., Thiele, A.: Recent advances in robust optimization: an overview. European J. Oper. Res. 235, 471–483 (2014)
204. Gajek, L., Zagrodny, D.: Countably orderable sets and their applications in optimization. Optimization 26(3–4), 287–301 (1992)
205. Gajek, L., Zagrodny, D.: Geometric variational principle. Dissertationes Math. (Rozprawy Mat.) 340, 55–71 (1995)
206. Gallagher, R.J., Saleh, O.A.: Two generalizations of a theorem of Arrow, Barankin, and Blackwell. SIAM J. Control Optim. 31(1), 247–256 (1993)
207. Georgiev, P.G.: The strong Ekeland variational principle, the strong drop theorem and applications. J. Math. Anal. Appl. 131(1), 1–21 (1988)
208. Gerritse, G.: Lattice-valued semicontinuous functions. In: Probability and Lattices, CWI Tract, vol. 110, pp. 93–125. Math. Centrum, Centrum Wisk. Inform., Amsterdam (1997)
209. Gerstewitz (Tammer), C.: Nichtkonvexe Dualität in der Vektoroptimierung. Wiss. Z. Tech. Hochsch. Leuna-Merseburg 25(3), 357–364 (1983)
210. Gerstewitz (Tammer), C.: Beiträge zur Dualitätstheorie der nichtlinearen Vektoroptimierung [Contributions to duality theory in nonlinear vector optimization]. Ph.D. Thesis, Technische Hochschule Leuna-Merseburg (1984)
211. Gerstewitz (Tammer), C., Göpfert, A.: Zur Dualität in der Vektoroptimierung. Seminarberichte der Sektion Mathematik der Humboldt-Universität zu Berlin 39, 67–84 (1981)
212. Gerstewitz (Tammer), C., Iwanow, E.: Dualität für nichtkonvexe Vektoroptimierungsprobleme. Wiss. Z. Tech. Hochsch. Ilmenau 31(2), 61–81 (1985), workshop on vector optimization (Plaue, 1984)
213. Gerth (Tammer), C.: Näherungslösungen in der Vektoroptimierung [Approximate solutions in vector optimization]. Seminarberichte der Sektion Mathematik der Humboldt-Universität Berlin 90, 67–76 (1987)
214. Gerth (Tammer), C., Pöhler, K.: Dualität und algorithmische Anwendung beim vektoriellen Standortproblem. Optimization 19, 491–512 (1988)
215. Gerth (Tammer), C., Weidner, P.: Nonconvex separation theorems and some applications in vector optimization. J. Optim. Theory Appl. 67(2), 297–320 (1990)
216. Giannessi, F.: Theorems of alternative, quadratic programs and complementarity problems. In: Variational Inequalities and Complementarity Problems, pp. 151–186. Wiley, Chichester (1980)
217. Giannessi, F.: Vector Variational Inequalities and Vector Equilibria. Mathematical Theories. Kluwer Academic Publishers, Dordrecht (1999)
218. Giannessi, F., Mastroeni, G., Pellegrini, L.: On the theory of vector optimization and variational inequalities. Image space analysis and separation. In: Vector Variational Inequalities and Vector Equilibria, pp. 153–215. Kluwer Academic Publishers, Dordrecht (2000)
219. Gierz, G., Hofmann, K.H., Keimel, K., Lawson, J.D., Mislove, M.W., Scott, D.S.: A Compendium of Continuous Lattices. Springer, Berlin-New York (1980)
220. Goberna, M., Jeyakumar, V., Li, G., Vicente-Pérez, J.: Robust solutions to multi-objective linear programs with uncertain data. European J. Oper. Res. 242, 730–743 (2015)


221. Godefroy, G.: From Grothendieck to Naor: a stroll through the metric analysis of Banach spaces. Newsl. Europ. Math. Soc. 107, 9–16 (2018)
222. Goerigk, M., Schöbel, A.: Algorithm engineering in robust optimization. In: Kliemann, L., Sanders, P. (eds.) Algorithm Engineering: Selected Results and Surveys, LNCS State of the Art, vol. 9220. Springer (2016)
223. Goh, C.J., Yang, X.Q.: Scalarization methods for vector variational inequality. In: Vector Variational Inequalities and Vector Equilibria, Nonconvex Optimization and Its Applications, vol. 38, pp. 217–232. Kluwer Academic Publishers, Dordrecht (2000)
224. Gong, X.H.: Connectedness of efficient solution sets for set-valued maps in normed spaces. J. Optim. Theory Appl. 83(1), 83–96 (1994)
225. Gong, X.H.: Density of the set of positive proper minimal points in the set of minimal points. J. Optim. Theory Appl. 86(3), 609–630 (1995)
226. Göpfert, A., Gerth (Tammer), C.: Über die Skalarisierung und Dualisierung von Vektoroptimierungsproblemen. Z. Anal. Anwendungen 5(4), 377–384 (1986)
227. Göpfert, A., Tammer, C.: A new maximal point theorem. Z. Anal. Anwendungen 14(2), 379–390 (1995)
228. Göpfert, A., Tammer, C.: ε-approximate solutions and conical support points. A new maximal point theorem. ZAMM 75, 595–596 (1995)
229. Göpfert, A., Tammer, C.: Maximal point theorems in product spaces and applications for multicriteria approximation problems. In: Research and Practice in Multiple Criteria Decision Making (Charlottesville, VA, 1998), Lecture Notes in Economics and Mathematical Systems, vol. 487, pp. 93–104. Springer, Berlin (2000)
230. Göpfert, A., Tammer, C., Zălinescu, C.: A new minimal point theorem in product spaces. Z. Anal. Anwendungen 18(3), 767–770 (1999)
231. Göpfert, A., Tammer, C., Zălinescu, C.: On the vectorial Ekeland's variational principle and minimal points in product spaces. Nonlinear Anal. 39(7, Ser. A: Theory Methods), 909–922 (2000)
232. Göpfert, A., Tammer, C., Zălinescu, C.: A new ABB theorem in normed vector spaces. Optimization 53(4), 369–376 (2004)
233. Gorochowik, B., Kirillowa, F.: About scalarization of vector optimization problems (Russian). Dokl. Akad. Nauk 19, 588–591 (1975)
234. Gorokhovik, V., Gorokhovik, S.: A criterion of the global epi-Lipschitz property of sets (Russian). Izvestiya Natsional'no Akademii Nauk Belarusi, Seriya Fiziko-Matematicheskikh Nauk, pp. 118–120 (1995)
235. Graña Drummond, L.M., Svaiter, B.F.: A steepest descent method for vector optimization. J. Comput. Appl. Math. 2, 395–414 (2005)
236. Grecksch, W., Heyde, F., Isac, G., Tammer, C.: A characterization of approximate solutions of multiobjective stochastic optimal control problems. Optimization 52(2), 153–170 (2003)
237. Grötschel, M., Krumke, S.O., Rambau, J. (eds.): Online Optimization of Large Scale Systems. Springer (2001)
238. Guddat, J., Guerra Vasquez, F., Tammer, K., Wendler, K.: Multiobjective and Stochastic Optimization Based on Parametric Optimization. Mathematical Research, vol. 26. Akademie-Verlag, Berlin (1985)
239. Guerraggio, A., Luc, D.T.: Properly maximal points in product spaces. Math. Oper. Res. 31(2), 305–315 (2006)


240. Günther, C.: On generalized-convex constrained multi-objective optimization and application in location theory. Ph.D. Thesis, Martin-Luther-Universität Halle-Wittenberg (2018)
241. Günther, C., Khazayel, B., Tammer, C.: Vector optimization w.r.t. relatively solid convex cones in real linear spaces. J. Optim. Theory Appl. 193(1–3), 408–442 (2022)
242. Günther, C., Khazayel, B., Tammer, C.: Duality assertions in vector optimization w.r.t. relatively solid convex cones in real linear spaces. Minimax Theory Appl. 9(2), 1–xx (2024)
243. Gutiérrez, C., Huerga, L., Novo, V., Tammer, C.: Duality related to approximate proper solutions of vector optimization problems. J. Global Optim. 64(1), 117–139 (2016)
244. Gutiérrez, C., Jiménez, B., Miglierina, E., Molho, E.: Scalarization in set optimization with solid and nonsolid ordering cones. J. Global Optim. 61(3), 525–552 (2015)
245. Gutiérrez, C., Jiménez, B., Novo, V.: On approximate efficiency in multiobjective programming. Math. Methods Oper. Res. 64(1), 165–185 (2006)
246. Gutiérrez, C., Novo, V., Ródenas-Pedregosa, J.L., Tanaka, T.: Nonconvex separation functional in linear spaces with applications to vector equilibria. SIAM J. Optim. 26(4), 2677–2695 (2016)
247. Ha, C.W.: Minimax and fixed point theorems. Math. Ann. 248(1), 73–77 (1980)
248. Ha, T.X.D.: A note on a class of cones ensuring the existence of efficient points in bounded complete sets. Optimization 31(2), 141–152 (1994)
249. Ha, T.X.D.: On the existence of efficient points in locally convex spaces. J. Global Optim. 4(3), 265–278 (1994)
250. Ha, T.X.D., Jahn, J.: Bishop-Phelps cones given by an equation in Banach spaces. Optimization 72(5), 1309–1346 (2023)
251. Hadjisavvas, N., Schaible, S.: From scalar to vector equilibrium problems in the quasimonotone case. J. Optim. Theory Appl. 96(2), 297–309 (1998)
252. Hamacher, H.: Mathematische Lösungsverfahren für planare Standortprobleme. Vieweg, Braunschweig/Wiesbaden (1995)
253. Hamacher, W., Nickel, S.: Multicriterial planar location problems, vol. Preprint 243. Fachbereich Mathematik, Universität Kaiserslautern, Germany (1993)
254. Hamel, A.: Translative sets and functions and their applications to risk measure theory and nonlinear separation. Preprint serie d 21, IMPA, Rio de Janeiro (2006)
255. Hamel, A.: A very short history of directional translative functions. Draft, Yeshiva University, New York (2012)
256. Hamel, A., Löhne, A.: Minimal element theorems and Ekeland's principle with set relations. J. Nonlinear Convex Anal. 7(1), 19–37 (2006)
257. Hamel, A.H.: Phelps' lemma, Daneš' drop theorem and Ekeland's principle in locally convex spaces. Proc. Amer. Math. Soc. 131(10), 3025–3038 (2003)
258. Hamel, A.H., Heyde, F., Löhne, A., Rudloff, B., Schrage, C.: Set optimization - a rather short introduction. In: Hamel, A.H., Heyde, F., Löhne, A., Rudloff, B., Schrage, C. (eds.) Set Optimization and Applications - The State of the Art, pp. 65–141. Springer, Berlin (2015)


259. Han, Z.Q.: Remarks on the angle property and solid cones. J. Optim. Theory Appl. 82(1), 149–157 (1994)
260. Han, Z.Q.: Relationship between solid cones and cones with bases. J. Optim. Theory Appl. 90(2), 457–463 (1996)
261. Hansen, P., Thisse, J.: Recent advances in continuous location theory. Sistemi Urbani 1, 33–54 (1983)
262. Hanson, M.A.: On sufficiency of the Kuhn-Tucker conditions. J. Math. Anal. Appl. 80(2), 545–550 (1981)
263. Hartley, R.: On cone-efficiency, cone-convexity and cone-compactness. SIAM J. Appl. Math. 34(2), 211–222 (1978)
264. Hartman, P., Stampacchia, G.: On some non-linear elliptic differential-functional equations. Acta Math. 115, 271–310 (1966)
265. Hazen, G.B., Morin, T.L.: Optimality conditions in nonconical multiple-objective programming. J. Optim. Theory Appl. 40(1), 25–60 (1983)
266. Henig, M.I.: Existence and characterization of efficient decisions with respect to cones. Math. Program. 23(1), 111–116 (1982)
267. Henig, M.I.: Proper efficiency with respect to cones. J. Optim. Theory Appl. 36(3), 387–407 (1982)
268. Henkel, E.C., Tammer, C.: ε-variational inequalities for vector approximation problems. Optimization 38(1), 11–21 (1996), multicriteria optimization and decision theory (Holzhau, 1994)
269. Henkel, E.C., Tammer, C.: ε-variational inequalities in partially ordered spaces. Optimization 36(2), 105–118 (1996)
270. Hernández, E., Löhne, A., Rodríguez-Marín, L., Tammer, C.: Lagrange duality, stability and subdifferentials in vector optimization. Optimization 62(3), 415–428 (2013)
271. Hernández, E., Rodríguez-Marín, L.: Nonconvex scalarization in set optimization with set-valued maps. J. Math. Anal. Appl. 325(1), 1–18 (2007)
272. Heyde, F., Grecksch, W., Tammer, C.: Exploitation of necessary and sufficient conditions for suboptimal solutions of multiobjective stochastic control problems. Math. Methods Oper. Res. 54(3), 425–438 (2001)
273. Hiriart-Urruty, J.B.: New concepts in nondifferentiable programming. Bull. Soc. Math. France Mém. 60, 57–85 (1979)
274. Hiriart-Urruty, J.B.: Tangent cones, generalized gradients and mathematical programming in Banach spaces. Math. Oper. Res. 4(1), 79–97 (1979)
275. Hiriart-Urruty, J.B.: Images of connected sets by semicontinuous multifunctions. J. Math. Anal. Appl. 111(2), 407–422 (1985)
276. Hiriart-Urruty, J.B., Lemaréchal, C.: Fundamentals of Convex Analysis. Springer, Berlin (2001)
277. Hites, R., De Smet, Y., Risse, N., Salazar-Neumann, M., Vincke, P.: About the applicability of MCDA to some robustness problems. European J. Oper. Res. 174, 322–332 (2006)
278. Hogan, W.W.: The continuity of the perturbation function of a convex program. Oper. Res. 21, 351–352 (1973)
279. Holmes, R.B.: Geometric Functional Analysis and Its Applications. Springer, New York-Heidelberg (1975)
280. Holwerda, H.: Closed hypographs, semicontinuity and the topological closed-graph theorem, a unifying approach. Report 8935, Catholic University of Nijmegen (1989)


281. Huang, X.X.: Equivalents of a general approximate variational principle for set-valued maps and application to efficiency. Math. Methods Oper. Res. 51(3), 433–442 (2000)
282. Iancu, D., Trichakis, N.: Pareto efficiency in robust optimization. Manag. Sci. 60, 130–147 (2014)
283. Ide, J., Köbis, E.: Concepts of efficiency for uncertain multi-objective optimization problems based on set order relations. Math. Methods Oper. Res. 80, 99–127 (2014)
284. Ide, J., Köbis, E., Kuroiwa, D., Schöbel, A., Tammer, C.: The relationship between multi-objective robustness concepts and set valued optimization. Fixed Point Theory Appl. 2014(83) (2014)
285. Ide, J., Schöbel, A.: Robustness for uncertain multi-objective optimization: a survey and analysis of different concepts. OR Spectr. 38(1), 235–271 (2016)
286. Idrissi, H., Lefebvre, O., Michelot, C.: A primal dual algorithm for a constrained Fermat-Weber problem involving mixed gauges. Revue d'Automatique d'Informatique et de Recherche Opérationnelle, Operations Research 22, 313–330 (1988)
287. Idrissi, H., Lefebvre, O., Michelot, C.: Applications and numerical convergence of the partial inverse method. In: Optimization (Varetz, 1988), pp. 39–54. Springer, Berlin (1989)
288. Idrissi, H., Lefebvre, O., Michelot, C.: Duality for constrained multifacility location problems with mixed norms and applications. Ann. Oper. Res. 18(1–4), 71–92 (1989), facility location analysis: theory and applications (Namur, 1987)
289. Idrissi, H., Lefebvre, O., Michelot, C.: Solving constrained multifacility minimax location problems. Working paper, Centre de Recherches de Mathématiques, Université de Paris 22, 313–330 (1991)
290. Idrissi, H., Loridan, P., Michelot, C.: Approximation of solutions for location problems. J. Optim. Theory Appl. 56, 127–143 (1988)
291. Ioffe, A.D.: Variational analysis and mathematical economics 1: subdifferential calculus and the second theorem of welfare economics. In: Advances in Mathematical Economics, Volume 12, pp. 71–95. Springer, Tokyo (2009)
292. Ioffe, A.D.: Variational Analysis of Regular Mappings. Springer, Cham (2017)
293. Isac, G.: Sur l'existence de l'optimum de Pareto. Riv. Mat. Univ. Parma 4(9), 303–325 (1983)
294. Isac, G.: Supernormal cones and fixed point theory. Rocky Mt. J. Math. 17(2), 219–226 (1987)
295. Isac, G.: Pareto optimization in infinite-dimensional spaces: the importance of nuclear cones. J. Math. Anal. Appl. 182(2), 393–404 (1994)
296. Isac, G.: The Ekeland's principle and the Pareto ε-efficiency. In: Multiobjective Programming and Goal Programming: Theories and Applications, pp. 148–163. Springer, Berlin (1996)
297. Isac, G., Yuan, G.X.Z.: The existence of essentially connected components of solutions for variational inequalities. In: Vector Variational Inequalities and Vector Equilibria, pp. 253–265. Kluwer Academic Publishers, Dordrecht (2000)
298. Iusem, A.N.: Some properties of generalized proximal point methods for quadratic and linear programming. J. Optim. Theory Appl. 85(3), 593–612 (1995)


299. Iusem, A.N.: On some properties of generalized proximal point methods for variational inequalities. J. Optim. Theory Appl. 96(2), 337–362 (1998)
300. Iusem, A.N., Svaiter, B.F., Teboulle, M.: Entropy-like proximal methods in convex programming. Math. Oper. Res. 19(4), 790–814 (1994)
301. Iwanow, E., Nehse, R.: Some results on dual vector optimization problems. Math. Operationsforsch. Statist. Ser. Optim. 166, 505–517 (1985)
302. Jahn, J.: Duality in vector optimization. Math. Program. 25(3), 343–353 (1983)
303. Jahn, J.: Scalarization in vector optimization. Math. Program. 29(2), 203–218 (1984)
304. Jahn, J.: A characterization of properly minimal elements of a set. SIAM J. Control Optim. 23(5), 649–656 (1985)
305. Jahn, J.: Existence theorems in vector optimization. J. Optim. Theory Appl. 50(3), 397–406 (1986)
306. Jahn, J.: Mathematical Vector Optimization in Partially Ordered Linear Spaces, Methoden und Verfahren der Mathematischen Physik [Methods and Procedures in Mathematical Physics], vol. 31. Verlag Peter D. Lang, Frankfurt a. M. (1986)
307. Jahn, J.: Duality in partially ordered sets. In: Jahn, J., Krabs, W. (eds.) Recent Advances and Historical Development of Vector Optimization, pp. 160–172. Springer, Berlin, Heidelberg (1987)
308. Jahn, J.: Parametric approximation problems arising in vector optimization. J. Optim. Theory Appl. 54(3), 503–516 (1987)
309. Jahn, J.: A generalization of a theorem of Arrow, Barankin, and Blackwell. SIAM J. Control Optim. 26(5), 999–1005 (1988)
310. Jahn, J.: Vector Optimization - Introduction, Theory, and Extensions, 2nd edn. Springer (2011)
311. Jahn, J.: Characterizations of the set less order relation in nonconvex set optimization. J. Optim. Theory Appl. 193(1–3), 523–544 (2022)
312. Jahn, J.: A unified approach to Bishop-Phelps and scalarizing functionals. J. Appl. Numer. Optim. 5, 5–25 (2023)
313. Jahn, J., Ha, T.X.D.: New order relations in set optimization. J. Optim. Theory Appl. 148(2), 209–236 (2011)
314. Jahn, J., Krabs, W.: Applications of multicriteria optimization in approximation theory. In: Multicriteria Optimization in Engineering and in the Sciences, pp. 49–75. Plenum, New York (1988)
315. Jameson, G.: Ordered Linear Spaces. Springer, Berlin-New York (1970)
316. Jeyakumar, V., Luc, D.T.: Approximate Jacobian matrices for nonsmooth continuous maps and C¹-optimization. SIAM J. Control Optim. 36(5), 1815–1832 (1998)
317. Jeyakumar, V., Luc, D.T.: Nonsmooth calculus, minimality, and monotonicity of convexificators. J. Optim. Theory Appl. 101, 599–621 (1999)
318. Jofré, A., Jourani, A.: Characterizations of the free disposal condition for nonconvex economies on infinite dimensional commodity spaces. SIAM J. Optim. 25(1), 699–712 (2015)
319. Jouak, M., Thibault, L.: Directional derivatives and almost everywhere differentiability of biconvex and concave-convex operators. Math. Scand. 57(1), 215–224 (1985)


320. Jourani, A., Michelot, C., Ndiaye, M.: Efficiency for continuous facility location problems with attraction and repulsion. Ann. Oper. Res. 167, 43–60 (2009)
321. Kalmoun, E.: Résultats d'existence pour les problèmes d'équilibre vectoriels et leurs applications. Ph.D. thesis, University Cadi Ayyad of Marrakech (2001)
322. Kalmoun, E.M., Riahi, H.: Topological KKM theorems and generalized vector equilibria on G-convex spaces with applications. Proc. Amer. Math. Soc. 129(5), 1335–1348 (2001)
323. Kalmoun, E.M., Riahi, H., Tanaka, T.: On vector equilibrium problems: remarks on a general existence theorem and applications. Nihonkai Math. J. 12(2), 149–164 (2001)
324. Kalmoun, E.M., Riahi, H., Tanaka, T.: Remarks on a new existence theorem for generalized vector equilibrium problems and its applications. Sūrikaisekikenkyūsho Kōkyūroku 1246, 165–173 (2002)
325. Kawasaki, H.: Conjugate relations and weak subdifferentials of relations. Math. Oper. Res. 6(4), 593–607 (1981)
326. Kawasaki, H.: A duality theorem in multiobjective nonlinear programming. Math. Oper. Res. 7(1), 95–110 (1982)
327. Kazmi, K.R.: Existence of solutions for vector saddle-point problems. In: Vector Variational Inequalities and Vector Equilibria, pp. 267–275. Kluwer Academic Publishers, Dordrecht (2000)
328. Khaleelulla, S.M.: Counterexamples in Topological Vector Spaces. Springer, Berlin-New York (1982)
329. Khan, A., Tammer, C., Zălinescu, C.: Set-Valued Optimization (An Introduction with Applications). Springer, Heidelberg (2015)
330. Khanh, P.: On Caristi-Kirk's theorem and Ekeland's variational principle for Pareto extrema. Tech. Rep. 357, Institute of Mathematics, Polish Academy of Sciences (1986)
331. Khazayel, B., Farajzadeh, A.: New vectorial versions of Takahashi's nonconvex minimization problem. Optim. Lett. 15(3), 847–858 (2021)
332. Khazayel, B., Farajzadeh, A., Günther, C., Tammer, C.: On the intrinsic core of convex cones in real linear spaces. SIAM J. Optim. 31(2), 1276–1298 (2021)
333. Klamroth, K., Köbis, E., Schöbel, A., Tammer, C.: A unified approach for different concepts of robustness and stochastic programming via non-linear scalarizing functionals. Optimization 62(5), 649–671 (2013)
334. Klamroth, K., Köbis, E., Schöbel, A., Tammer, C.: A unified approach to uncertain optimization. European J. Oper. Res. 260(2), 403–420 (2017)
335. Klatte, D., Kummer, B.: Nonsmooth Equations in Optimization. Kluwer Academic Publishers, Dordrecht (2002)
336. Klose, J.: Sensitivity analysis using the tangent derivative. Numer. Funct. Anal. Optim. 13(1–2), 143–153 (1992)
337. Klötzler, R.: On a general concept of duality in optimal control. In: Equadiff IV (Proceedings of the Czechoslovak Conference on Differential Equations and their Applications, Prague, 1977), pp. 189–196. Springer, Berlin (1979)
338. Köbis, E., Köbis, M.: Treatment of set order relations by means of a nonlinear scalarization functional: a full characterization. Optimization 65(10), 1805–1827 (2016)


339. Köbis, E., Köbis, M., Yao, J.C.: Generalized upper set less order relation by means of a nonlinear scalarization functional. J. Nonlinear Convex Anal. 17, 725–734 (2017)
340. Köbis, E., Kuroiwa, D., Tammer, C.: Generalized set order relations and their numerical treatment. Appl. Anal. Optim. 1(1), 45–65 (2017)
341. Köbis, E., Tammer, C.: Relations between strictly robust optimization problems and a nonlinear scalarization method. AIP Conf. Proc. 1479, 2371–2374 (2012)
342. Köbis, E., Tammer, C.: Characterization of set relations by means of a nonlinear scalarization functional. In: Le Thi, H., Pham Dinh, T., Nguyen, N. (eds.) Modelling, Computation and Optimization in Information Systems and Management Sciences, Advances in Intelligent Systems and Computing, pp. 491–503. Springer, Cham (2015)
343. Köbis, E., Tammer, C.: Robust vector optimization with a variable domination structure. Carpathian J. Math. 33(3), 343–351 (2018)
344. Köbis, E., Tammer, C.: An existence principle of Takahashi's type for vector optimization problems and applications. J. Nonlinear Convex Anal. 23(1), 19–32 (2022)
345. Köbis, E., Tammer, C., Yao, J.C.: Optimality conditions for set-valued optimization problems based on set approach and applications in uncertain optimization. J. Nonlinear Convex Anal. 18(6), 1001–1014 (2017)
346. Konnov, I.V., Ali, M.S.S.: Descent methods for monotone equilibrium problems in Banach spaces. J. Comput. Appl. Math. 188(2), 165–179 (2006)
347. Konnov, I.V., Pinyagina, O.V.: A descent method with inexact linear search for nonsmooth equilibrium problems. Zh. Vychisl. Mat. Mat. Fiz. 48(10), 1812–1818 (2008)
348. Konnov, I.V., Yao, J.C.: On the generalized vector variational inequality problem. J. Math. Anal. Appl. 206(1), 42–58 (1997)
349. Köthe, G.: Topological Vector Spaces. I. Springer-Verlag New York Inc., New York (1969)
350. Kouvelis, P., Sayin, S.: Algorithm robust for the bicriteria discrete optimization problem. Ann. Oper. Res. 147, 71–85 (2006)
351. Kouvelis, P., Yu, G.: Robust Discrete Optimization and its Applications. Kluwer Academic Publishers (1997)
352. Krasnosel'skiĭ, M.A.: Polozhitel'nye resheniya operatornykh uravneniĭ. Gosudarstv. Izdat. Fiz.-Mat. Lit., Moscow (1962)
353. Krasnosel'skiĭ, M.A.: Positive Solutions of Operator Equations. Translated from the Russian by Richard E. Flaherty; edited by Leo F. Boron. P. Noordhoff Ltd., Groningen (1964)
354. Kruger, A.Y.: On the extremality of set systems. Dokl. Nats. Akad. Nauk Belarus 42(1), 24–28, 123 (1998)
355. Kruger, K., Mordukhovich, B.: Extremal points and the Euler equation in nonsmooth optimization problems. Dokl. Akad. Nauk BSSR 24, 684–687 (1980)
356. Kuhn, H.: A note on Fermat's problem. Math. Program. 4, 98–107 (1973)
357. Kuhpfahl, I., Patz, R., Tammer, C.: Location problems in town planning. In: Schweigert, T. (ed.) Multicriteria Decision Theory. University of Kaiserslautern (1996)
358. Kuk, H., Tanino, T., Tanaka, M.: Sensitivity analysis in parametrized convex vector optimization. J. Math. Anal. Appl. 202(2), 511–522 (1996)


359. Kuk, H., Tanino, T., Tanaka, M.: Sensitivity analysis in vector optimization. J. Optim. Theory Appl. 89(3), 713–730 (1996)
360. Kuratowski, K.: Topology. Vol. I. Academic Press, New York-London; Państwowe Wydawnictwo Naukowe, Warsaw (1966)
361. Kuroiwa, D.: The natural criteria in set-valued optimization. Res. Nonlinear Anal. Convex Anal. 1031, 85–90 (1998)
362. Kuroiwa, D.: Some duality theorems of set-valued optimization with natural criteria. In: Nonlinear Analysis and Convex Analysis (Niigata, 1998), pp. 221–228. World Scientific Publishing, River Edge, NJ (1999)
363. Lai, T.C., Yao, J.C.: Existence results for VVIP. Appl. Math. Lett. 9(3), 17–19 (1996)
364. Larsson, T., Patriksson, M.: Equilibrium characterizations of solutions to side constrained asymmetric traffic assignment models. Le Matematiche 49, 249–280 (1994)
365. Lee, B.S., Lee, G.M., Kim, D.S.: Generalized vector-valued variational inequalities and fuzzy extensions. J. Korean Math. Soc. 33(3), 609–624 (1996)
366. Lee, B.S., Lee, G.M., Kim, D.S.: Generalized vector variational-like inequalities on locally convex Hausdorff topological vector spaces. Indian J. Pure Appl. Math. 28(1), 33–41 (1997)
367. Lee, G.M., Kim, D.S., Lee, B.S.: Generalized vector variational inequality. Appl. Math. Lett. 9(1), 39–42 (1996)
368. Lee, G.M., Kim, D.S., Lee, B.S.: On noncooperative vector equilibrium. Indian J. Pure Appl. Math. 27(8), 735–739 (1996)
369. Lee, G.M., Kim, D.S., Lee, B.S., Cho, S.J.: Generalized vector variational inequality and fuzzy extension. Appl. Math. Lett. 6(6), 47–51 (1993)
370. Lee, G.M., Kim, D.S., Lee, B.S., Yen, N.D.: Vector variational inequality as a tool for studying vector optimization problems. Nonlinear Anal. 34(5), 745–765 (1998)
371. Lee, G.M., Kim, D.S., Lee, B.S., Yen, N.D.: Vector variational inequality as a tool for studying vector optimization problems. In: Vector Variational Inequalities and Vector Equilibria, pp. 277–305. Kluwer Academic Publishers, Dordrecht (2000)
372. Lee, G.M., Kum, S.H.: On implicit vector variational inequalities. J. Optim. Theory Appl. 104(2), 409–425 (2000)
373. Leugering, G., Schiel, R.: Regularized nonlinear scalarization for vector optimization problems with PDE-constraints. GAMM-Mitt. 35(2), 209–225 (2012)
374. Levy, H.: Stochastic dominance and expected utility. Manag. Sci. 38(4), 555–593 (1992)
375. Li, J., Tammer, C.: Set optimization problems on ordered topological vector spaces. Pure Appl. Funct. Anal. 5(3), 621–651 (2020)
376. Li, S.J., Yang, X.Q., Chen, G.: Vector Ekeland variational principle. In: Vector Variational Inequalities and Vector Equilibria, pp. 321–333. Kluwer Academic Publishers, Dordrecht (2000)
377. Li, X.B., Zhou, L.W., Huang, N.J.: Gap functions and descent methods for equilibrium problems on Hadamard manifolds. J. Nonlinear Convex Anal. 17(4), 807–826 (2016)
378. Li, Z.F., Wang, S.Y.: Lagrange multipliers and saddle points in multiobjective programming. J. Optim. Theory Appl. 83(1), 63–81 (1994)
379. Lin, K.L., Yang, D.P., Yao, J.C.: Generalized vector variational inequalities. J. Optim. Theory Appl. 92(1), 117–125 (1997)


380. Löhne, A.: Duality in vector optimization with infimum and supremum. In: Ansari, Q.H., Yao, J.C. (eds.) Recent Developments in Vector Optimization, pp. 61–94. Springer, Berlin (2012)
381. Löhne, A.: Vector Optimization with Infimum and Supremum. Springer, Berlin (2012)
382. Löhne, A., Tammer, C.: A new approach to duality in vector optimization. Optimization 56(1–2), 221–239 (2007)
383. Loridan, P.: A dual approach to the generalized Weber problem under locational uncertainty. Cahiers Centre Études Rech. Opér. 26(3–4), 241–253 (1984)
384. Loridan, P.: ε-solutions in vector minimization problems. J. Optim. Theory Appl. 43(2), 265–276 (1984)
385. Loridan, P., Morgan, J., Raucci, R.: Convergence of minimal and approximate minimal elements of sets in partially ordered vector spaces. J. Math. Anal. Appl. 239(2), 427–439 (1999)
386. Luc, D.T.: An existence theorem in vector optimization. Math. Oper. Res. 14(4), 693–699 (1989)
387. Luc, D.T.: Theory of Vector Optimization. Springer, Berlin, Heidelberg, New York, London, Paris, Tokyo (1989)
388. Luc, D.T., Lucchetti, R., Malivert, C.: Convergence of the efficient sets. Set-Valued Anal. 2(1–2), 207–218 (1994)
389. Luc, D.T., Vargas, C.: A saddlepoint theorem for set-valued maps. Nonlinear Anal. 18(1), 1–7 (1992)
390. Luu, D.V.: Convexificators and necessary conditions for efficiency. Optimization 63(3), 321–335 (2014)
391. Luu, D.V.: Optimality condition for local efficient solutions of vector equilibrium problems via convexificators and applications. J. Optim. Theory Appl. 171(2), 643–665 (2016)
392. Luu, D.V., Mai, T.T.: Optimality and duality in constrained interval-valued optimization. 4OR 16(3), 311–337 (2018)
393. Makarov, E.K., Rachkovski, N.N.: Density theorems for generalized Henig proper efficiency. J. Optim. Theory Appl. 91(2), 419–437 (1996)
394. Malivert, C.: Fenchel duality in vector optimization. In: Advances in Optimization (Lambrecht, 1991), pp. 420–438. Springer, Berlin (1992)
395. Mansour, M.A., Malivert, C., Théra, M.: Semicontinuity of vector-valued mappings. Optimization 56(1–2), 241–252 (2007)
396. Markowitz, H.: Portfolio selection. J. Finance 7, 77–91 (1952)
397. Markowitz, H.M.: Portfolio Selection: Efficient Diversification of Investments. Wiley, New York; Chapman & Hall Ltd., London (1959)
398. Markowitz, H.M.: Mean-Variance Analysis in Portfolio Choice and Capital Markets. Basil Blackwell, Oxford (1987)
399. Martinet, B.: Régularisation d'inéquations variationnelles par approximations successives. Rev. Française Informat. Recherche Opérationnelle 4(Sér. R-3), 154–158 (1970)
400. Martinet, B.: Algorithmes pour la résolution de problèmes d'optimisation et de minimax. Thèse d'État, Université de Grenoble (1972)
401. Maugeri, A.: Optimization problems with side constraints and generalized equilibrium principles. Matematiche (Catania) 49(2), 305–312 (1994)


402. McArthur, C.W.: Convergence of monotone nets in ordered topological vector spaces. Studia Math. 34, 1–16 (1970)
403. Michael, E.: Continuous selections. I. Ann. Math. 63, 361–382 (1956)
404. Michalowski, W., Ogryczak, W.: A recursive procedure for selecting optimal portfolio according to the MAD model. Control Cybernet. 28, 725–738 (1999)
405. Michalowski, W., Ogryczak, W.: Extending the MAD portfolio optimization model to incorporate downside risk aversion. Naval Res. Logist. 48, 185–200 (2001)
406. Michelot, C., Lefebvre, O.: A primal-dual algorithm for the Fermat-Weber problem involving mixed gauges. Math. Program. 39, 319–335 (1987)
407. Miettinen, K.: Nonlinear Multiobjective Optimization. Kluwer Academic Publishers, Boston, MA (1999)
408. Miettinen, K., Mäkelä, M.M.: Tangent and normal cones in nonconvex multiobjective optimization. In: Research and Practice in Multiple Criteria Decision Making (Charlottesville, VA, 1998), pp. 114–124. Springer, Berlin (2000)
409. Minami, M.: Weak Pareto-optimal necessary conditions in a nondifferentiable multiobjective program on a Banach space. J. Optim. Theory Appl. 41(3), 451–461 (1983)
410. Moore, R.E.: Methods and Applications of Interval Analysis. SIAM, Philadelphia (1979)
411. Mordukhovich, B.S.: Maximum principle in the problem of time optimal response with nonsmooth constraints. Prikl. Mat. Meh. 40(6), 1014–1023 (1976)
412. Mordukhovich, B.S.: Generalized differential calculus for nonsmooth and set-valued mappings. J. Math. Anal. Appl. 183(1), 250–288 (1994)
413. Mordukhovich, B.S.: Variational Analysis and Generalized Differentiation, Vol. I: Basic Theory, Vol. II: Applications. Springer, Berlin (2006)
414. Mordukhovich, B.S.: Variational Analysis and Applications. Springer, Berlin (2018)
415. Mordukhovich, B.S., Nam, N.M.: Convex Analysis and Beyond, Volume I: Basic Theory. Springer Nature Switzerland, Cham (2022)
416. Mordukhovich, B.S., Pérez-Aros, P.: New extremal principles with applications to stochastic and semi-infinite programming. Math. Program. 189(1–2), 527–553 (2021)
417. Mordukhovich, B.S., Phan, H.M.: Rated extremal principles for finite and infinite systems. Optimization 60(7–9), 893–923 (2011)
418. Mordukhovich, B.S., Shao, Y.: Nonsmooth sequential analysis in Asplund spaces. Trans. Amer. Math. Soc. 348, 1235–1280 (1996)
419. Moreau, J.J.: Décomposition orthogonale d'un espace hilbertien selon deux cônes mutuellement polaires. C. R. Acad. Sci. Paris 255, 238–240 (1962)
420. Moreau, J.J.: Fonctionnelles convexes. Séminaire Jean Leray (2), 1–108 (1966–1967)
421. Moudafi, A.: Proximal point algorithm extended to equilibrium problems. J. Nat. Geom. 15(1–2), 91–100 (1999)
422. Moudafi, A.: On finite and strong convergence of a proximal method for equilibrium problems. Numer. Funct. Anal. Optim. 28(11–12), 1347–1354 (2007)
423. Moudafi, A., Théra, M.: Proximal and dynamical approaches to equilibrium problems. In: Ill-Posed Variational Problems and Regularization Techniques (Trier, 1998), pp. 187–201. Springer, Berlin (1999)


424. Nagurney, A., Zhao, L.: Disequilibrium and variational inequalities. J. Comput. Appl. Math. 33(2), 181–198 (1990)
425. Nakayama, H.: Geometric consideration of duality in vector optimization. J. Optim. Theory Appl. 44(4), 625–655 (1984)
426. Nakayama, H.: Duality theory in vector optimization: an overview. In: Decision Making with Multiple Objectives (Cleveland, Ohio, 1984), pp. 109–125. Springer, Berlin (1985)
427. Nakayama, H.: Duality in multi-objective optimization. In: Gal, T., Stewart, T., Hanne, T. (eds.) Multicriteria Decision Making, pp. 3–29. Kluwer Academic Publishers, Boston-Dordrecht-London (1999)
428. Namioka, I., Phelps, R.R.: Banach spaces which are Asplund spaces. Duke Math. J. 42, 735–750 (1975)
429. Naniewicz, Z., Panagiotopoulos, P.D.: Mathematical Theory of Hemivariational Inequalities and Applications. Marcel Dekker Inc., New York (1995)
430. Nehse, R.: Duale Vektoroptimierungsprobleme vom Wolfe-Typ [Dual vector optimization problems of Wolfe type]. Seminarberichte der Sektion Mathematik der Humboldt-Universität zu Berlin 37, 55–60 (1981)
431. Németh, A.B.: The nonconvex minimization principle in ordered regular Banach spaces. Mathematica (Cluj) 23(46)(1), 43–48 (1981)
432. Németh, A.B.: A nonconvex vector minimization problem. Nonlinear Anal. 10(7), 669–678 (1986)
433. Németh, A.B.: Between Pareto efficiency and Pareto ε-efficiency. Optimization 20(5), 615–637 (1989)
434. Newhall, J., Goodrich, R.K.: On the density of Henig efficient points in locally convex topological vector spaces. J. Optim. Theory Appl. 165(3), 753–762 (2015)
435. Ng, K.F., Zheng, X.Y.: Existence of efficient points in vector optimization and generalized Bishop-Phelps theorem. J. Optim. Theory Appl. 115(1), 29–47 (2002)
436. Nguyen, T.T.V., Strodiot, J.J., Nguyen, V.H.: A bundle method for solving equilibrium problems. Math. Program. 116(1–2, Ser. B), 529–552 (2009)
437. Nickel, S., Dudenhöffer, E.M.: Weber's problem with attraction and repulsion under polyhedral gauges. J. Global Optim. 11(4), 409–432 (1997)
438. Nieuwenhuis, J.W.: Supremal points and generalized duality. Math. Operationsforsch. Statist. Ser. Optim. 11(1), 41–59 (1980)
439. Nieuwenhuis, J.W.: Some minimax theorems in vector-valued functions. J. Optim. Theory Appl. 40(3), 463–475 (1983)
440. Nikodem, K.: Continuity of K-convex set-valued functions. Bull. Polish Acad. Sci. Math. 34(7–8), 393–400 (1986)
441. Nikodem, K.: On midpoint convex set-valued functions. Aequationes Math. 33(1), 46–56 (1987)
442. Nishnianidze, Z.G.: Fixed points of monotone multivalued operators. Soobshch. Akad. Nauk Gruzin. SSR 114(3), 489–491 (1984)
443. Oettli, W.: Approximate solutions of variational inequalities. In: Quantitative Wirtschaftsforschung, pp. 535–538. J.C.B. Mohr, Tübingen (1977)
444. Oettli, W.: A remark on vector-valued equilibria and generalized monotonicity. Acta Math. Vietnam. 22(1), 213–221 (1997)
445. Oettli, W., Schläger, D.: Existence of equilibria for monotone multivalued mappings. Math. Methods Oper. Res. 48(2), 219–228 (1998)


446. Oettli, W., Schläger, D.: Generalized vectorial equilibria and generalized monotonicity. In: Functional Analysis with Current Applications in Science, Technology and Industry (Aligarh, 1996), Pitman Research Notes in Mathematics Series, vol. 377, pp. 145–154. Longman, Harlow (1998)
447. Ogryczak, W.: Multiple criteria linear programming model for portfolio selection. Ann. Oper. Res. 97, 143–162 (2000)
448. Ogryczak, W.: Multiple criteria optimization and decisions under risk. Control Cybern. 31, 975–1003 (2002)
449. Ogryczak, W.: Robust decisions under risk for imprecise probabilities. In: Ermoliev, Y., Makowski, M., Marti, K. (eds.) Managing Safety of Heterogeneous Systems, pp. 51–66. Springer (2012)
450. Ogryczak, W.: Tail mean and related robust solution concepts. Int. J. Syst. Sci. 45, 29–38 (2014)
451. Ogryczak, W., Ruszczyński, A.: On consistency of stochastic dominance and mean-semideviation models. Math. Program. 89, 217–232 (2001)
452. Ogryczak, W., Ruszczyński, A.: Dual stochastic dominance and related mean-risk models. SIAM J. Optim. 13, 60–78 (2002)
453. Ogryczak, W., Śliwiński, T.: On efficient WOWA optimization for decision support under risk. Int. J. Approx. Reason. 50, 915–928 (2009)
454. Opial, Z.: Weak convergence of the sequence of successive approximations for nonexpansive mappings. Bull. Amer. Math. Soc. 73, 591–597 (1967)
455. Pallaschke, D., Rolewicz, S.: Foundations of Mathematical Optimization. Kluwer Academic Publishers, Dordrecht (1997)
456. Park, S.: Generalizations of Ky Fan's matching theorems and their applications. J. Math. Anal. Appl. 141(1), 164–176 (1989)
457. Park, S.: Elements of the KKM theory for generalized convex spaces. Korean J. Comput. Appl. Math. 7(1), 1–28 (2000)
458. Pascoletti, A., Serafini, P.: Scalarizing vector optimization problems. J. Optim. Theory Appl. 42(4), 499–524 (1984)
459. Peano, G.: Démonstration de l'intégrabilité des équations différentielles ordinaires. Math. Ann. 37(2), 182–228 (1890)
460. Pelegrin, B., Fernandez, F.R.: Determination of efficient points in multiple-objective location problems. Naval Res. Logist. 35(6), 697–705 (1988)
461. Penot, J.P.: L'optimisation à la Pareto: deux ou trois choses que je sais d'elle. In: Structures Économiques et Économétrie (Lyon, 1978), pp. 4.14–4.33. Springer, Berlin (1978)
462. Penot, J.P.: Continuity properties of performance functions. In: Optimization: Theory and Algorithms (Confolant, 1981), pp. 77–90. Dekker, New York (1983)
463. Penot, J.P.: Differentiability of relations and differential stability of perturbed optimization problems. SIAM J. Control Optim. 22(4), 529–551 (1984)
464. Penot, J.P.: The drop theorem, the petal theorem and Ekeland's variational principle. Nonlinear Anal. 10(9), 813–822 (1986)
465. Penot, J.P.: Analysis. From Concepts to Applications. Springer International Publishing, Switzerland (2016)
466. Penot, J.P., Sterna-Karwat, A.: Parametrized multicriteria optimization: continuity and closedness of optimal multifunctions. J. Math. Anal. Appl. 120(1), 150–168 (1986)


467. Penot, J.P., Sterna-Karwat, A.: Parametrized multicriteria optimization; order continuity of the marginal multifunctions. J. Math. Anal. Appl. 144(1), 1–15 (1989)
468. Penot, J.P., Théra, M.: Semi-continuité des applications et des multiapplications. C. R. Acad. Sci. Paris Sér. A-B 288(4), A241–A244 (1979)
469. Penot, J.P., Théra, M.: Semicontinuous mappings in general topology. Arch. Math. (Basel) 38(2), 158–166 (1982)
470. Peressini, A.L.: Ordered Topological Vector Spaces. Harper & Row, Publishers, New York-London (1967)
471. Pérez-Abreu, V., Rosiński, J.: Representation of infinitely divisible distributions on cones. J. Theoret. Probab. 20(3), 535–544 (2007)
472. Perny, P., Spanjaard, O., Storme, L.X.: A decision-theoretic approach to robust optimization. Ann. Oper. Res. 147, 317–341 (2006)
473. Petschke, M.: On a theorem of Arrow, Barankin, and Blackwell. SIAM J. Control Optim. 28(2), 395–401 (1990)
474. Phelps, R.R.: Convex Functions, Monotone Operators and Differentiability. Lecture Notes in Mathematics, vol. 1364. Springer, Berlin (1989)
475. Phelps, R.R.: Convex Functions, Monotone Operators and Differentiability. Lecture Notes in Mathematics, vol. 1364, 2nd edn. Springer, Berlin (1993)
476. Planchart, A., Hurter, A.P., Jr.: An efficient algorithm for the solution of the Weber problem with mixed norms. SIAM J. Control 13, 650–665 (1975)
477. Postolică, V.: Existence conditions of efficient points for multifunctions with values in locally convex spaces. Stud. Cerc. Mat. 41(4), 325–331 (1989)
478. Postolică, V.: New existence results for efficient points in locally convex spaces ordered by supernormal cones. J. Global Optim. 3(2), 233–242 (1993)
479. Puerto, F., Fernandez, F.: Multicriteria decisions in location. Stud. Locat. Anal. 7, 185–199 (1994)
480. Puerto, J., Rodríguez-Chía, A.M.: On the structure of the solution set for the single facility location problem with average distances. Math. Program. 128(1–2, Ser. A), 373–401 (2011)
481. Qiu, J.H.: On solidness of polar cones. J. Optim. Theory Appl. 109(1), 199–214 (2001)
482. Reemtsen, R.: On level sets and an approximation problem for the numerical solution of a free boundary problem. Computing 27(1), 27–35 (1981)
483. Rockafellar, R.T.: Augmented Lagrangians and applications of the proximal point algorithm in convex programming. Math. Oper. Res. 1(2), 97–116 (1976)
484. Rockafellar, R.T.: Clarke's tangent cones and the boundaries of closed sets in R^n. Nonlinear Anal. Theory Methods Appl. 3, 145–154 (1979)
485. Rockafellar, R.T.: The Theory of Subgradients and its Applications to Problems of Optimization. R & E, vol. 1. Heldermann Verlag, Berlin (1981)
486. Rockafellar, R.T., Royset, J.: Superquantiles and their applications to risk, random variables, and regression. INFORMS Tutorials (2013)
487. Rockafellar, R.T., Royset, J.: Engineering decisions under risk averseness. ASCE-ASME J. Risk Uncertainty Eng. Syst. 1(2) (2015)
488. Rockafellar, R.T., Royset, J.: Measures of residual risk with connections to regression, risk tracking, surrogate models, and ambiguity. SIAM J. Optim. 25(2), 1179–1208 (2015)


489. Rockafellar, R.T., Royset, J.: Risk measures in engineering design under uncertainty. In: Proceedings of International Conference on Applications of Statistics and Probability in Civil Engineering (ICASP) (2015)
490. Rockafellar, R.T., Wets, R.J.B.: Variational Analysis. Springer, Berlin (1998)
491. Rolewicz, S.: On drop property. Studia Math. 85(1), 27–35 (1987)
492. Rosinger, E.E.: Multiobjective duality without convexity. J. Math. Anal. Appl. 66(2), 442–450 (1978)
493. Rubinov, A.M.: Sublinear operators and their applications. Uspehi Mat. Nauk 32(4(196)), 113–174, 287 (1977)
494. Salukvadze, M.: Vector Optimization Problems in Control Theory. Mecniereba, Tbilisi (1975)
495. Samuelson, P.: A spatial price equilibrium and linear programming. Amer. Econom. Rev. 42, 283–303 (1952)
496. Sawaragi, Y., Nakayama, H., Tanino, T.: Theory of Multiobjective Optimization. Academic Press Inc., Orlando, FL (1985)
497. Sayin, S., Kouvelis, P.: The multiobjective discrete optimization problem: a weighted min-max two-stage optimization approach and a bicriteria algorithm. Manag. Sci. 51, 1572–1581 (2005)
498. Schechter, M.: Principles of Functional Analysis, 2nd edn. American Mathematical Society (2002)
499. Schöbel, A.: Generalized light robustness and the trade-off between robustness and nominal quality. Math. Methods Oper. Res. 80(2), 161–191 (2014)
500. Schönfeld, P.: Some duality theorems for the non-linear vector maximum problem. Unternehmensforschung 14, 51–63 (1970)
501. Sebbouh, O., Dossal, C., Rondepierre, A.: Convergence rates of damped inertial dynamics under geometric conditions and perturbations. SIAM J. Optim. 30(3), 1850–1877 (2020)
502. Shapiro, A., Dentcheva, D., Ruszczyński, A.: Lectures on Stochastic Programming: Modeling and Theory. SIAM (2009)
503. Shi, D.S.: Contingent derivative of the perturbation map in multiobjective optimization. J. Optim. Theory Appl. 70(2), 385–396 (1991)
504. Shi, D.S.: Sensitivity analysis in convex vector optimization. J. Optim. Theory Appl. 77(1), 145–159 (1993)
505. Siddiqi, A.H., Ansari, Q.H., Ahmad, R.: On vector variational-like inequalities. Indian J. Pure Appl. Math. 28(8), 1009–1016 (1997)
506. Siddiqi, A.H., Ansari, Q.H., Khaliq, A.: On vector variational inequalities. J. Optim. Theory Appl. 84(1), 171–180 (1995)
507. Sion, M.: On general minimax theorems. Pacific J. Math. 8, 171–176 (1958)
508. Smithson, R.E.: Subcontinuity for multifunctions. Pacific J. Math. 61(1), 283–288 (1975)
509. Song, W.: On the connectivity of efficient point sets. Appl. Math. (Warsaw) 25(1), 121–127 (1998)
510. Song, W.: Generalized vector variational inequalities. In: Vector Variational Inequalities and Vector Equilibria, pp. 381–401. Kluwer Academic Publishers, Dordrecht (2000)
511. Sonntag, Y., Zălinescu, C.: Comparison of existence results for efficient points. J. Optim. Theory Appl. 105(1), 161–188 (2000)
512. Soyster, A.: Convex programming with set-inclusive constraints and applications to inexact linear programming. Oper. Res. 21, 1154–1157 (1973)


513. Spingarn, J.: Partial inverse of a monotone operator. Appl. Math. Optim. 10, 247–265 (1983)
514. Staib, T.: On two generalizations of Pareto minimality. J. Optim. Theory Appl. 59(2), 289–306 (1988)
515. Sterna-Karwat, A.: On existence of cone-maximal points in real topological linear spaces. Israel J. Math. 54(1), 33–41 (1986)
516. Sterna-Karwat, A.: Continuous dependence of solutions on a parameter in a scalarization method. J. Optim. Theory Appl. 55(3), 417–434 (1987)
517. Sterna-Karwat, A.: Remarks on convex cones. J. Optim. Theory Appl. 59(2), 335–340 (1988)
518. Sterna-Karwat, A.: Approximating families of cones and proper efficiency in vector optimization. Optimization 20(6), 809–817 (1989)
519. Sterna-Karwat, A.: Convexity of the optimal multifunctions and its consequences in vector optimization. Optimization 20(6), 799–808 (1989)
520. Steuer, R., Choo, E.: An interactive weighted Tchebycheff procedure for multiple objective programming. Math. Program. 26, 326–344 (1983)
521. Su, W., Boyd, S., Candès, E.J.: A differential equation for modeling Nesterov's accelerated gradient method: theory and insights. J. Mach. Learn. Res. 17, Paper No. 153, 43 pp. (2016)
522. Takahashi, W.: Existence theorems generalizing fixed point theorems for multivalued mappings. In: Fixed Point Theory and Applications (Marseille, 1989), pp. 397–406. Longman Scientific & Technical, Harlow (1991)
523. Tammer, C.: Lagrange-Dualität in der Vektoroptimierung [Lagrange duality in vector optimization]. Wiss. Z. Tech. Hochsch. Ilmenau 37(3), 71–88 (1991)
524. Tammer, C.: A generalization of Ekeland's variational principle. Optimization 25(2–3), 129–141 (1992)
525. Tammer, C.: Erweiterung und Anwendungen des Variationsprinzips von Ekeland [Extension and applications of Ekeland's variational principle]. Z. Angew. Math. Mech. 73(7–8), T823–T826 (1993)
526. Tammer, C.: Existence results and necessary conditions for ε-efficient elements. In: Multicriteria Decision; Proceedings of the 14th Meeting of the German Working Group "Mehrkriterielle Entscheidung", pp. 97–110. Peter Lang Verlag, Frankfurt, Bern (1993)
527. Tammer, C.: A variational principle and a fixed point problem. In: System Modelling and Optimization, pp. 248–257. Springer, London (1994)
528. Tammer, C.: Necessary conditions for approximately efficient solutions of vector approximation problems. In: Approximation and Optimization in the Caribbean, II (Havana, 1993), Approx. Optim., vol. 8, pp. 651–663. Peter Lang, Frankfurt am Main (1995)
529. Tammer, C.: A variational principle and applications for vectorial control approximation problems. Math. J. Univ. Bacău (Romania) (1996)
530. Tammer, C.: Multiobjective optimal control problems. In: Variational Calculus, Optimal Control and Applications (Trassenheide, 1996), pp. 97–106. Birkhäuser, Basel (1998)
531. Tammer, C., Gergele, M., Patz, R., Weinkauf, R.: Standortprobleme in der Landschaftsgestaltung [Location problems in landscape design]. In: Habenicht, W., Scheubrein, B.R. (eds.) Multi-Criteria- und Fuzzy-Systeme in Theorie und Praxis, pp. 261–286. Deutscher Universitätsverlag (2003)
532. Tammer, C., Weidner, P.: Scalarization and Separation by Translation Invariant Functions. Springer, Cham (2020)


533. Tammer, C., Zălinescu, C.: Lipschitz properties of the scalarization function and applications. Optimization 59(2), 305–319 (2010)
534. Tanaka, T.: Some minimax problems of vector-valued functions. J. Optim. Theory Appl. 59(3), 505–524 (1988)
535. Tanaka, T.: Existence theorems for cone saddle points of vector-valued functions in infinite-dimensional spaces. J. Optim. Theory Appl. 62(1), 127–138 (1989)
536. Tanaka, T.: Two types of minimax theorems for vector-valued functions. J. Optim. Theory Appl. 68(2), 321–334 (1991)
537. Tanaka, T.: Generalized quasiconvexities, cone saddle points, and minimax theorem for vector-valued functions. J. Optim. Theory Appl. 81(2), 355–377 (1994)
538. Tanino, T.: Sensitivity analysis in multiobjective optimization. J. Optim. Theory Appl. 56(3), 479–499 (1988)
539. Tanino, T.: Conjugate duality in vector optimization. J. Math. Anal. Appl. 167(1), 84–97 (1992)
540. Tanino, T., Sawaragi, Y.: Duality theory in multiobjective programming. J. Optim. Theory Appl. 27(4), 509–529 (1979)
541. Tanino, T., Sawaragi, Y.: Stability of nondominated solutions in multicriteria decision-making. J. Optim. Theory Appl. 30(2), 229–253 (1980)
542. Teschl, G.: Ordinary Differential Equations and Dynamical Systems. American Mathematical Society, Providence, RI (2012)
543. Théra, M.: Étude des fonctions convexes vectorielles semi-continues. Doctorat de Troisième Cycle, No. 76, Université de Pau et des Pays de l'Adour (1978)
544. Thibault, L.: Sequential convex subdifferential calculus and sequential Lagrange multipliers. SIAM J. Control Optim. 35(4), 1434–1444 (1997)
545. Tian, G.Q.: Generalized KKM theorems, minimax inequalities, and their applications. J. Optim. Theory Appl. 83(2), 375–389 (1994)
546. Tintner, G.: Stochastic linear programming with applications to agricultural economics. In: Antosiewicz, H. (ed.) Proceedings of the 2nd Symposium on Linear Programming, pp. 197–228. National Bureau of Standards (1955)
547. Turinici, M.: Cone maximal points in topological linear spaces. An. Ştiinţ. Univ. Al. I. Cuza Iaşi Secţ. I a Mat. 37(3), 371–390 (1991)
548. Valadier, M.: Sous-différentiabilité de fonctions convexes à valeurs dans un espace vectoriel ordonné. Math. Scand. 30, 65–74 (1972)
549. Vályi, I.: On duality theory related to approximate solutions of vector-valued optimization problems. In: Nondifferentiable Optimization: Motivations and Applications (Sopron, 1984), pp. 150–162. Springer, Berlin (1985)
550. Vályi, I.: Approximate saddle-point theorems in vector optimization. J. Optim. Theory Appl. 55(3), 435–448 (1987)
551. Wagner, A., Martínez-Legaz, J.E., Tammer, C.: Locating a semi-obnoxious facility - a Toland-Singer duality based approach. J. Convex Anal. 23(4), 1185–1204 (2016)
552. Wagner, D.H.: Semi-compactness with respect to a Euclidean cone. Canadian J. Math. 29(1), 29–36 (1977)
553. Wang, S.: Lagrange conditions in nonsmooth and multiobjective mathematical programming. Math. Econ. 1, 183–193 (1984)
554. Wanka, G.: Duality in vectorial control approximation problems with inequality restrictions. Optimization 22(5), 755–764 (1991)


555. Wanka, G.: On duality in the vectorial control-approximation problem. Z. Oper. Res. 35(4), 309–320 (1991)
556. Wanka, G.: Characterization of approximately efficient solutions to multiobjective location problems using Ekeland's variational principle. Stud. Locat. Anal. 10, 163–176 (1996)
557. Ward, J., Wendell, R.: Characterizing Efficient Points in Location Problems Under One-Infinity Norm. North Holland, Amsterdam (1983)
558. Wardrop, J.: Some theoretical aspects of road traffic research. Proceedings of the Institution of Civil Engineers, Part II, 325–378 (1952)
559. Weber, A.: Über den Standort der Industrien [On the location of industries]. Tübingen (1909)
560. Weidner, P.: Ein Trennungskonzept und seine Anwendungen auf Vektoroptimierungsverfahren [A separation concept and its applications to vector optimization methods]. Dissertation B, Martin-Luther-Universität Halle-Wittenberg (1991)
561. Wendell, R., Peterson, E.: A dual approach for obtaining lower bounds to the Weber problem. J. Reg. Sci. 24, 219–228 (1984)
562. Wendell, R.E., Hurter, A.P., Jr., Lowe, T.J.: Efficient points in location problems. AIIE Trans. 9(3), 238–246 (1977)
563. Whitmore, G., Findlay, M.: Stochastic Dominance: An Approach to Decision Making Under Risk. Lexington Books (1978)
564. Wierzbicki, A.: On the completeness and constructiveness of parametric characterizations to vector optimization problems. OR Spectr. 8, 73–87 (1986)
565. Winkler, K.: Geoffrion proper efficiency in an infinite dimensional space. Optimization 53(4), 355–368 (2004)
566. Witzgall, C.: Optimal Location of a Central Facility: Mathematical Models and Concepts. Report 8388, National Bureau of Standards, Washington (1964)
567. Yaman, H., Karasan, O., Pinar, M.: The robust spanning tree problem with interval data. Oper. Res. Lett. 29, 31–40 (2001)
568. Yaman, H., Karasan, O., Pinar, M.: Restricted robust optimization for maximization over uniform matroid with interval data uncertainty. Math. Program. 110(2), 431–441 (2007)
569. Yang, X.Q.: On some equivalent conditions of vector variational inequalities. In: Vector Variational Inequalities and Vector Equilibria, pp. 423–432. Kluwer Academic Publishers, Dordrecht (2000)
570. Yang, X.Q., Chen, G.Y.: On inverse vector variational inequalities. In: Vector Variational Inequalities and Vector Equilibria, pp. 433–446. Kluwer Academic Publishers, Dordrecht (2000)
571. Yang, X.Q., Goh, C.J.: On vector variational inequalities: application to vector equilibria. J. Optim. Theory Appl. 95(2), 431–443 (1997)
572. Yang, X.Q., Goh, C.J.: Vector variational inequalities, vector equilibrium flow and vector optimization. In: Vector Variational Inequalities and Vector Equilibria, pp. 447–465. Kluwer Academic Publishers, Dordrecht (2000)
573. Young, R.: The algebra of many-valued quantities. Math. Ann. 104(1), 260–290 (1931)
574. Yu, S.J., Yao, J.C.: On vector variational inequalities. J. Optim. Theory Appl. 89(3), 749–769 (1996)
575. Yuan, G.X.Z.: KKM Theory and Applications in Nonlinear Analysis. Marcel Dekker Inc., New York (1999)
576. Zălinescu, C.: A generalization of the Farkas lemma and applications to convex programming. J. Math. Anal. Appl. 66(3), 651–678 (1978)


577. Zălinescu, C.: Duality for vectorial nonconvex optimization by convexification and applications. An. Ştiinţ. Univ. Al. I. Cuza Iaşi Secţ. I a Mat. 29(3, suppl.), 15–34 (1983)
578. Zălinescu, C.: On two notions of proper efficiency. In: Optimization in Mathematical Physics (Oberwolfach, 1985), pp. 77–86. Peter Lang, Frankfurt am Main (1987)
579. Zălinescu, C.: Stability for a class of nonlinear optimization problems and applications. In: Nonsmooth Optimization and Related Topics, pp. 437–458. Plenum, New York (1989)
580. Zălinescu, C.: On a new stability condition in mathematical programming. In: Nonsmooth Optimization: Methods and Applications, pp. 429–438. Gordon and Breach, Montreux (1992)
581. Zălinescu, C.: Convex Analysis in General Vector Spaces. World Scientific Publishing Co., Inc., River Edge, NJ (2002)
582. Zeidler, E.: Nonlinear Functional Analysis and its Applications. Part III: Variational Methods and Optimization. Springer, New York (1986)
583. Zeidler, E.: Nonlinear Functional Analysis and its Applications. Springer, New York (1990)
584. Zheng, X., Yang, Z., Zou, J.: Exact separation theorem for closed sets in Asplund spaces. Optimization 66(7), 1065–1077 (2017)
585. Zhuang, D.: Density results for proper efficiencies. SIAM J. Control Optim. 32(1), 51–58 (1994)
586. Zowe, J.: Subdifferentiability of convex functions with values in an ordered vector space. Math. Scand. 34, 69–83 (1974)
587. Zowe, J.: A duality theorem for a convex programming problem in order complete vector lattices. J. Math. Anal. Appl. 50, 273–287 (1975)

Index

A
algebraic base (for a cone), 24
algorithm
– geometric, 399
– interactive, 378
– – stability results, 378
– one-phase vector proximal, 385
– – convergence, 386
– proximal-point, 367
– – convergence results, 376
– Spingarn, 369, 372
– two-phase vector proximal, 389
– – convergence, 389
annihilator, 236
approximation error, 298, 303
Asplund space, 52, 54
associativity, 20

B
base (for a cone), 44
base of neighborhoods, 26
best location, 8
bound
– lower, 20
– upper, 20
Bregman
– distance, 382
Bregman function, 382
– examples, 382

C
chain, 152

closure
– algebraic, 32
closure (of a set), 25
coderivatives of set-valued mappings, 307
commutativity, 20
comparable elements, 18
complementary slackness
– approximate, 298, 302
concave
– quasi-, 253
cone, 21
– (fully regular) regular, 39
– (sequentially) Daniell, 48
– acute, 70
– adjacent, 130
– angle property, 3
– asymptotic, 165
– based, 44
– Bouligand tangent, 130
– contingent, 130
– convex, 22
– correct, 164
– dual, 55
– generating, 22
– Henig dilating, 175
– nontrivial, 22
– normal, 36, 362
– nuclear, 70
– Phelps, 273
– pointed, 22
– proper, 22


– recession, 84
– reproducing, 22
– sequentially C-bound regular, 284
– supernormal, 70
– Ursescu tangent, 130
– well-based, 44
cone saddle point, 271
cones
– overview, 4
control, 11
– suboptimal, 12
convergence
– Kuratowski–Painlevé, 111
– net, 27
core (of a set), 32
– intrinsic, 32

D
derivative
– Dini lower, 131
– Dini upper, 131
differential inclusion, 456
domination set, 309
domination structure
– variable, 14
dual pair, 53
duality
– axiomatic, 222
– conjugation, 222
– converse, 228, 231
– – for approximation problems, 238
– direct, 227
– Lagrange, 226
– strong, 227, 230
– – for approximation problems, 238
– weak, 226, 229
– – for approximation problems, 237
– with scalarization, 225
– without scalarization, 225
dynamic approach, 456

E
efficiency
– approximate, 142
– ε-, 142
– εk0-, 142
element
– ≤C-nondominated, 148
– ≤Θ-nondominated, 332

– -nondominated, 147
– approximately efficient, 142
– efficient, 18, 158, 477
– – Benson proper, 208
– – Henig proper, 208
– – strong proper, 199
– – weakly, 209, 477
– Henig proper maximal, 172
– maximal (minimal), 18, 294
– null, 20
– properly efficient, 230
– properly maximal, 171
– unity, 20
– weakly ≤C-nondominated, 148
excess (of A over B), 101
existence
– of equilibria, 248, 257
existence of solutions
– of a variation-like inequality, 263
– of complementarity problems, 268
– of equilibrium problems, 5
– of hemivariational inequalities, 266
– of vector optimization problems, 269
extremal principle, 311, 314
– ε-extremal, 312
– approximate, 312
– exact, 312
– extended, 325

F
Fan's minimax inequality, 256
Fan–KKM Lemma, 243
Fermat rules, 320
free-disposal assumption, 84
– strong, 84
function
– B-monotone, 80
– – strictly, 80
– C-α-convex, 99
– C-α-quasiconvex, 99
– C-convex, 99
– C-lower continuous (C-l.c.), 116
– C-lower semicontinuous, 116
– C-mid-convex, 99
– C-mid-quasiconvex, 99
– C-nearly convex, 99
– C-nearly quasiconvex, 99
– C-quasiconvex, 99

– C-upper semicontinuous, 116
– continuous, 28
– convex, 78
– directional derivative, 356
– domain of, 78, 99
– epigraph of, 78, 99
– filtering, 27
– lower semicontinuous, 78
– marginal, 183
– proper, 78, 99
– strictly convex, 384
– strongly convex, 383
– upper continuous (u.c.)
– – C- (C-u.c.), 116
functional
– Minkowski, 34, 84
– monotone, 34
– positively homogeneous, 33
– subadditive, 33
– sublinear, 33
– symmetric, 33

G
game
– cooperative differential, 12
gauge technique, 282

H
hemivariational inequality, 266
homeomorphism, 29
hull
– affine, 21
– convex, 21
– – closed, 58
– linear, 21

I
infimum of a set, 20
interior, 25
– algebraic, 32
– – relative, 32

K
KKM Lemma, 243
KKM mapping, 243
Kolmogorov condition, 361
– ε-, 357

L
Lagrange multipliers, 288


– existence results, 288
Lagrangian, 292, 296
– generalized, 225, 229
– scalarized, 299
limit, 27
– inferior, 101
– superior, 101
linear openness, 338
Lipschitz
– epi-Lipschitz, 88
lower bound, 362, 363

M
manifold
– linear, 21
mapping
– invex, 269
– preinvex, 270
– pseudomonotone, 258
mean-risk approach, 13
metric, 25
– P-(valued), 284
minimal-point theorem, 275
– authentic, 277
minimizing the expectation, 472
Minty's linearization, 254, 257
multifunction, 96
– C-α-quasiconvex, C-mid-quasiconvex, C-nearly quasiconvex, C-quasiconvex, 98
– α-concave, C-α-concave, 97
– BH-pseudomonotone, 264
– C-continuous, 111
– – Hausdorff, 111
– C-convex, C-α-convex, 97
– C-lower continuous (C-l.c.), 111
– – Hausdorff (H-C-l.c.), 111
– – uniformly (at a point on a set), 111
– C-mid-convex, C-nearly convex, 97
– C-upper continuous (C-u.c.), 111
– – Hausdorff (H-C-u.c.), 111
– closed, 104
– – at a point, 104
– closed-valued, 104
– compact (at a point), 105
– continuous, 100
– – Hausdorff, 108
– convex, α-convex, 97


– derivative of, 209
– domain of, 97
– domination property, 190
– – around a point, 190
– epigraph of, 98
– graph of, 97
– image of, 97
– lower continuous (l.c.), 100
– – Hausdorff (H-l.c.), 108
– lower semicontinuous
– – C- (C-l.s.c.), 113, 204
– mid-convex, nearly convex, 97
– monotonicity notions, 242
– of class (S)+, 264
– optimal-value, 190, 206
– – weak, 190
– quasi-pseudomonotone, 264
– semidifferentiable, 131
– solution, 206
– sublevel, strict sublevel, 98
– upper continuous (u.c.), 100
– – Hausdorff (H-u.c.), 108
– upper Lipschitz, 134
– upper semicontinuous
– – C- (C-u.s.c.), 113

N
neighborhood, 26
net, 27
– C-decreasing, 158
– C-increasing, 158
– – strictly, 158
– convergent, 27
– t-decreasing, 152
– t-increasing, 152
– – strictly, 152
norm, 34
– vector-valued, 346

O
operator
– sublinear, 100
order
– lexicographic, 24
– linear, 17
– partial, 17
– total, 17
order interval, 36

P
parametric optimization, 468
Pareto minimum, 12
point
– efficient, 18, 158
– – Benson proper, 208
– – Henig proper, 208
– – properly, 230
– – strong proper, 199
– – super, 176
– – weakly, 209
power set, 75
preorder, 17
principle
– Ekeland's variational, 272
– – vector-valued, 273
– Pontryagin's minimum, 406
– – ε-, 402
– – stochastic ε-, 412
– Wardrop, 7
problem
– Lp-approximation, 357
– approximation, 232, 361, 362
– – special cases, 232
– complementarity, 267
– dual
– – strongly, 224
– – weakly, 224
– equilibrium, 4
– – scalar, 241
– Fermat–Weber, 9, 398
– inverse Stefan, 359
– linear programming, 362
– – perturbed, 362
– – surrogate problem, 362
– location, 362
– multiobjective stochastic control, 410
– multiobjective location, 398
– – algorithms, 398
– – geometric algorithm, 399
– – solution set, 9
– optimal control, 362
– – multiobjective, 11, 403
– – vector-valued (vector), 11
– optimal regulator, 362
– real-valued approximation, 350
– real-valued location, 9, 350

– scalar equilibrium, 381
– scalarized, 144
– under uncertainty, 466
– vector control approximation, 346, 377
– – finite-dimensional, 352, 355
– – interactive algorithm, 380
– – necessary conditions, 347
– – special cases, 350
– vector equilibrium, 5, 242
– – discrete case, 396
– – penalization, 391
– – perturbation, 391
– – relaxation, 391
– vector location, 9
– weak traffic equilibrium, 7
– weak vector equilibrium, 5, 381
– well-posed, 187
– – η-, 187
– – weakly, 187
process, 11
property
– angle, 70
– containment, 187
– – uniformly around a point, 195
– domination, 151, 186
– pi, 70
– weak pi, 70
proximal-point method, 381

R
radial epiderivative
– lower, 137
– upper, 137
relation
– antisymmetric, 17
– binary, 17
– composition, 18
– generalized certainly less, 77
– generalized lower set less, 76
– generalized possibly less, 78
– generalized set less, 77
– generalized upper set less, 75
– inf-order, 481
– inverse, 18
– natural order, 477
– reflexive, 17
– transitive, 17
robust optimization, 468


robustness
– ε-constraint, 514
– adjustable, 471
– certain, 503
– optimistic, 469
– proper, 492
– regret, 470
– reliability, 470
– strict, 469

S
saddle point
– approximate, 296
– (B, e)-, 294
– cone, 271
– y0*-, 292
scalarization, 144, 325
scalarization directions of sets, 326
section
– lower, 151
– upper, 151
segment parallel to, 184
seminorm, 34
separable determinacy, 313
separable reduction, 313
separation
– of convex sets, 56
– of nonconvex sets, 90
separation for two convex sets, 314
separation, generalized, 310
sequence
– generalized, 27
set
– absorbing, 29
– affine, 21
– α-convex, 94
– asymptotically compact (a.c.), 165
– balanced, 29
– bounded, 30
– – C-upper (lower), 158
– – lower, 20
– – upper, 20
– boundedly order complete, 162
– C-compact, 165
– C-complete, 39
– C-lower (upper) bounded, 39
– C-semicompact, 165
– closed, 25
– – sequentially, 28


– closed around x, 305
– cofinal, 27
– compact, 25
– connected, 25
– convex, 21
– directed, 27
– epigraph type, 78
– full, 36
– mid-convex, 94
– nearly convex, 94
– open, 25
– partially sequentially normally compact, 308
– polar, 55
– sequentially normally compact, 308
– solution, 186
– star-shaped, 34
– strict sublevel, 98, 99
– sublevel, 98, 99
– well-ordered, 18, 152
set of ε-normals, 311
set-valued problem (SP)
– solution concept, 148
set-valued problem (SP), 148
– C-nondominated solution, 149
set-valued problem (≤C)
– ≤C-nondominated element, 148
– – weak, 148
solution
– Θ-nondominated
– – to the set-valued problem Θ-SP, 333
– ε-optimal, 186
– optimal, 186
space
– (sequentially) quasi-complete, 41
– Asplund, 54
– Banach, 52
– Hilbert, 52
– lineality, 158
– linear, 20
– locally convex, 50
– – examples, 51
– – properties, 51
– metric, 26
– metrizable, 51
– normed, 51
– reflexive, 53

– separable, 54
– topological, 25
– – first-countable, 26
– – Hausdorff, 28
– – linear, 29
– – vector, 29
– vector, 20
state, 11
stochastic dominance, 13, 473
– first-degree, 473
– second-degree, 474
stochastic optimization, 467
stochastic programming
– two-stage, 475
structure
– compatible, 21
– linear, 20
– order, 17
subdifferential, 99, 347
– of a sublinear multifunction, 294
– of norm terms, 353
– of the indicator function, 355
– of the vector-valued norm, 347
subnet, 28
subspace
– linear, 20
supremum of a set, 20

T
theorem
– Alaoglu–Bourbaki, 55
– Fan, 5
– – generalized, 244
– Hahn–Banach, 56
– – geometric form, 56
– Kirk–Caristi fixed point, 287
– Krein–Rutman, 66
– nonconvex separation, 90
– Phelps minimal-point, 273
– vector minimax, 272
topology, 25
– compatible, 53
– induced, 25
– linear, 29
– Mackey, 53
– product, 29
– trace, 53
– weak, 53
– weak*, 53

town planning, 397
traffic control, 6
transportation network, 7
trustworthiness of the subdifferential, 313

U
upper limit, Kuratowski–Painlevé, 306

V
value
– optimal, 186
variational inequality
– scalar, 261
– vector, 261
variational-like inequality, 263

Z
Zorn's lemma, 20
