Schur-Convex Functions and Inequalities: Volume 1: Concepts, Properties, and Applications in Symmetric Function Inequalities 9783110607840, 9783110606126

This two-volume work introduces the theory and applications of Schur-convex functions. The first volume introduces concepts and properties of Schur-convex functions and their applications in symmetric function inequalities.


English · 236 pages · 2019




Huan-nan Shi Schur-Convex Functions and Inequalities

Also of Interest

Univalent Functions. A Primer. Derek K. Thomas, Nikola Tuneski, Allu Vasudevarao, 2018. ISBN 978-3-11-056009-1, e-ISBN (PDF) 978-3-11-056096-1, e-ISBN (EPUB) 978-3-11-056012-1

Numerical Analysis. An Introduction. Timo Heister, Leo G. Rebholz, Fei Xue, 2019. ISBN 978-3-11-057330-5, e-ISBN (PDF) 978-3-11-057332-9, e-ISBN (EPUB) 978-3-11-057333-6

Hamilton-Jacobi-Bellman Equations. Numerical Methods and Applications in Optimal Control. Dante Kalise, Karl Kunisch, Zhiping Rao (Eds.), 2018. ISBN 978-3-11-054263-9, e-ISBN (PDF) 978-3-11-054359-9, e-ISBN (EPUB) 978-3-11-054271-4

Hausdorff Calculus. Applications to Fractal Systems. Yingjie Liang, Wen Chen, Wei Cai, 2019. ISBN 978-3-11-060692-8, e-ISBN (PDF) 978-3-11-060852-6, e-ISBN (EPUB) 978-3-11-060705-5

Infinite-Dimensional Dynamical Systems. Volume 1: Attractors and Inertial Manifolds. Boling Guo, Liming Ling, Yansheng Ma, Hui Yang, 2018. ISBN 978-3-11-054925-6, e-ISBN (PDF) 978-3-11-054965-2, e-ISBN (EPUB) 978-3-11-054942-3

Huan-nan Shi

Schur-Convex Functions and Inequalities

Volume 1: Concepts, Properties, and Applications in Symmetric Function Inequalities

Mathematics Subject Classification 2010. Primary: 26A51, 26B25, 52A41; Secondary: 26D07, 26D10, 26D15, 26D20, 26E60

Author
Huan-nan Shi
Teacher's College, Beijing Union University
Beijing 100011
People's Republic of China
[email protected]

ISBN 978-3-11-060612-6
e-ISBN (PDF) 978-3-11-060784-0
e-ISBN (EPUB) 978-3-11-060691-1
Library of Congress Control Number: 2019937573

Bibliographic information published by the Deutsche Nationalbibliothek
The Deutsche Nationalbibliothek lists this publication in the Deutsche Nationalbibliografie; detailed bibliographic data are available on the Internet at http://dnb.dnb.de.

© 2019 Harbin Institute of Technology Press Ltd, Harbin, Heilongjiang and Walter de Gruyter GmbH, Berlin/Boston
Cover image: ugurhan / E+ / gettyimages.com
Typesetting: VTeX UAB, Lithuania
Printing and binding: CPI books GmbH, Leck
www.degruyter.com


Dedicated to the memory of my father

Contents

Preface | IX
Introduction | XI
Acknowledgment | XV
Notation and symbols | XVII

1 Majorization | 1
1.1 Increasing functions and convex functions | 1
1.2 Generalization of convex functions | 4
1.3 Definition and basic properties of majorization | 15
1.4 Some common majorization | 35
1.5 Convex functions and majorization | 45
1.6 Generalizations of Karamata's inequality | 49

2 Definitions and properties of Schur-convex functions | 51
2.1 Definitions and properties of Schur-convex functions | 51
2.2 Convex functions and Schur-convex functions | 57
2.3 Some applications of Karamata's inequality | 60
2.4 Generalizations of Schur-convex functions | 74
2.5 Symmetrization of convex and Schur-convex functions | 98
2.6 Abstract majorization inequalities | 109
2.7 Weak monotonic functions | 132

3 Schur-convex functions and elementary symmetric function inequalities | 135
3.1 Properties of elementary symmetric functions and dual forms | 135
3.2 Schur-convexity of quotient or difference for elementary symmetric functions | 142
3.3 Schur-convexity of some composite functions for elementary symmetric functions | 150
3.4 Generalizations of several well-known inequalities | 162

4 Schur-convex functions and other symmetric function inequalities | 179
4.1 Schur-convexity of complete symmetric functions | 179
4.2 Schur-convexity of Hamy symmetric functions | 186
4.3 Schur-convexity of Muirhead symmetric functions and applications | 194
4.4 Expansions of Kantorovich inequality | 200
4.5 Schur-convexity of two complementary symmetric functions | 202

Bibliography | 205
Index | 213

Preface

In 1979, A. W. Marshall and I. Olkin published "Inequalities: Theory of Majorization and Its Applications." Since then, majorization theory has become an independent discipline of mathematics. From September 1979 to September 1981, Professor Boying Wang of Beijing Normal University studied the theory as a visiting scholar at the University of California, Santa Barbara (UCSB). In 1984, he took the lead in setting up "Matrix and Majorization Inequalities," a postgraduate course on majorization theory, in China. From September 1984 to January 1986, I took this course during my studies at the Mathematics Department of Beijing Normal University. I did not expect this random choice to become my future research direction; so far, I have published 90 papers on majorization theory.

In 1990, Professor Boying Wang's book "Fundamentals of Majorization Inequalities" (in Chinese) was published. In addition to the classical basic theory in Marshall and Olkin's book, it contains a number of wonderful original contributions of Professor Wang; in the application part, the book focuses on applications of majorization theory to matrices. As Professor Wang puts it, "The majorization inequalities have almost infiltrated into various fields of mathematics, and played a wonderful role everywhere, because they can always profoundly describe the intrinsic relationship between many mathematical quantities, thus facilitating the derivation of the required conclusions. It can also easily derive many existing inequalities derived by different methods in a uniform way. It is a powerful means to generalize existing inequalities and discover new inequalities, and the theory and application of the majorization inequalities have a bright future." The publication of "Fundamentals of Majorization Inequalities" has greatly promoted the research of majorization theory in China.
At present, Chinese scholars have published more than 300 research papers in this field and have formed a research team with some influence in the world. In 2011, A. W. Marshall, I. Olkin, and B. C. Arnold published the second edition of "Inequalities: Theory of Majorization and Its Applications," which cites a number of papers by Chinese scholars (including five articles by the author). In 2012, China's Harbin Institute of Technology Press published my monograph "Majorization Theory and Analytic Inequalities" (in Chinese), which attracted the attention of my peers. Over the past five years, almost all of the problems put forward in that book have been pursued and settled by subsequent research. This English monograph, "Schur-Convex Functions and Inequalities," is a revision and supplement of "Majorization Theory and Analytic Inequalities." More than 160 papers published in the past five years have been added, 97 of which were published by Chinese scholars (including 30 by the author and collaborators).

https://doi.org/10.1515/9783110607840-201

This book is divided into nine chapters. Chapters 1 and 2 of the first volume introduce the basic concepts and main theorems of Schur-convex function theory. In order to save space, the book does not include detailed proofs of some basic theorems (these can be found in the monographs [109] and [54]); instead, we introduce the new developments of Schur-convex functions by Chinese scholars. Chapters 3 and 4 of the first volume introduce the wide applications of Schur-convex functions in symmetric function inequalities. Chapters 1 and 2 of the second volume introduce the applications of Schur-convex functions in sequence inequalities and integral inequalities, respectively. Chapters 3 and 4 of the second volume introduce the applications of Schur-convex functions to mean inequalities. Chapter 5 of the second volume introduces the applications of Schur-convex functions in geometric inequalities.

Introduction

There are many kinds of inequalities, and the techniques for proving them are varied and numerous, so there is no general theory that deals with all inequalities. In 1923, I. Schur summed up some common and useful elementary and advanced inequalities and deduced a complete theory for dealing with inequalities having certain characteristics; this is the theory of majorization. In the theory of majorization, there are two key concepts: majorizing relations (see Definition 1.3.1) and Schur-convex functions (see Definition 2.1.1). Majorizing relations are weak order relations among vectors, and Schur-convex functions form a class of functions more extensive than classical convex functions. Combining these two objects is an effective method of constructing inequalities.

In the research of majorization theory, there are two important and fundamental tasks: establishing majorizing relations among vectors and finding various Schur-convex functions. The judgment theorem of Schur-convex functions (that is, Theorem 2.1.3) is the main method for determining Schur-convex functions. This theorem depends only on the first derivative of the function, so it is convenient to use. Majorizing relations deeply characterize intrinsic connections among vectors, and combining a new majorizing relation with suitable Schur-convex functions can lead to various interesting inequalities.

Below we introduce an elementary question that I came across when I first studied majorization theory, hoping to arouse the readers' interest.

Problem 0.0.1 (IMO, 1984). Let x, y, z ≥ 0, x + y + z = 1. Then

0 ≤ xy + yz + zx − 2xyz ≤ 7/27.  (0.1)

This problem has many kinds of proofs. The author found the following equivalent form of inequality (0.1):

0 ≤ (1 − x)(1 − y)(1 − z) − xyz ≤ (1 − 1/3)^3 − (1/3)^3.  (0.2)

Thus, the high-dimensional extension is taken into account, i.e.,

0 ≤ ∏_{i=1}^{n} (1 − x_i) − ∏_{i=1}^{n} x_i ≤ (1 − 1/n)^n − (1/n)^n.  (0.3)
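As a quick numerical sanity check of the bounds in (0.1) and (0.3), one can sample random points of the simplex x_1 + ⋯ + x_n = 1; the helper names below are illustrative, not from the text:

```python
import random

# Monte Carlo sanity check of inequalities (0.1) and (0.3): sample random
# points of the simplex x_1 + ... + x_n = 1 and test both bounds.

def random_simplex_point(n):
    cuts = sorted(random.random() for _ in range(n - 1))
    return [b - a for a, b in zip([0.0] + cuts, cuts + [1.0])]

def check_01(trials=2000, tol=1e-12):
    random.seed(0)
    for _ in range(trials):
        x, y, z = random_simplex_point(3)
        v = x * y + y * z + z * x - 2 * x * y * z
        if not (-tol <= v <= 7 / 27 + tol):
            return False
    return True

def check_03(n, trials=2000, tol=1e-12):
    random.seed(0)
    upper = (1 - 1 / n) ** n - (1 / n) ** n
    for _ in range(trials):
        x = random_simplex_point(n)
        diff, prod = 1.0, 1.0
        for xi in x:
            diff *= 1 - xi
            prod *= xi
        if not (-tol <= diff - prod <= upper + tol):
            return False
    return True

print(check_01() and all(check_03(n) for n in (2, 3, 5, 8)))  # True
```

Of course, random sampling only probes the inequalities; the proof proceeds by the adjustment method described next.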

The author used a gradual adjustment method to prove (0.3) (see [70]) and then extended (0.3) to the case of elementary symmetric functions. Let x ∈ ℝ^n_+ with ∑_{i=1}^{n} x_i = 1. Then

0 ≤ E_k(1 − x) − E_k(x) ≤ (n k)[(1 − 1/n)^k − (1/n)^k],  (0.4)

where E_k(x) = E_k(x_1, ..., x_n) = ∑_{1 ≤ i_1 < ⋯ < i_k ≤ n} x_{i_1} ⋯ x_{i_k} is the kth elementary symmetric function of x, and 1 − x = (1 − x_1, ..., 1 − x_n).

Notation and symbols

ℝ^n_{++} = {(x_1, ..., x_n) | x_i > 0 for all i}, ℤ^n_+ = {(p_1, ..., p_n) | p_i is a nonnegative integer, i = 1, ..., n, n ≥ 2}.

Throughout this book, increasing means nondecreasing and decreasing means nonincreasing. Thus, if f : ℝ → ℝ, then f is increasing if x ≤ y ⇒ f(x) ≤ f(y), strictly increasing if x < y ⇒ f(x) < f(y), decreasing if x ≤ y ⇒ f(x) ≥ f(y), and strictly decreasing if x < y ⇒ f(x) > f(y).

For any x = (x_1, ..., x_n) ∈ ℝ^n, let x_[1] ≥ ⋯ ≥ x_[n] denote the components of x in decreasing order, and let x↓ = (x_[1], ..., x_[n]) denote the decreasing rearrangement of x. Similarly, let x_(1) ≤ ⋯ ≤ x_(n) denote the components of x in increasing order, and let x↑ = (x_(1), ..., x_(n)) denote the increasing rearrangement of x.

The elementwise vector ordering x_i ≤ y_i, i = 1, ..., n, is denoted by x ≤ y. For any x = (x_1, ..., x_n), y = (y_1, ..., y_n) ∈ ℝ^n, we write x + y = (x_1 + y_1, ..., x_n + y_n), xy = (x_1 y_1, ..., x_n y_n), and αx = (αx_1, ..., αx_n).

https://doi.org/10.1515/9783110607840-204

Generally, for f : ℝ → ℝ we write f(x) = (f(x_1), ..., f(x_n)), and for f : ℝ^2 → ℝ we write f(x, y) = (f(x_1, y_1), ..., f(x_n, y_n)). For f : Ω ⊂ ℝ^n → ℝ, we denote R(f) = {f(x) | x ∈ Ω} and

⨁_{k=1}^{n} I_k = {(x_1, ..., x_n) | x_k ∈ I_k ⊂ ℝ_{++}, k = 1, ..., n}.

(n k) = n!/(k!(n − k)!) is the number of combinations of n elements taken k at a time, with (n 0) = 1 and (n k) = 0 for k > n.

1 Majorization

1.1 Increasing functions and convex functions

The definitions and theorems in this section are quoted from the monographs [109] and [54].

Definition 1.1.1. A set Ω ⊂ ℝ^n is called a symmetric set if x ∈ Ω implies xP ∈ Ω for every n × n permutation matrix P. A function φ : Ω → ℝ is called symmetric if, for every permutation matrix P, φ(xP) = φ(x) for all x ∈ Ω.

Definition 1.1.2. Let x = (x_1, ..., x_n), y = (y_1, ..., y_n) ∈ ℝ^n.
(a) Write x ≤ y if x_i ≤ y_i, i = 1, ..., n. A function φ : ℝ^n → ℝ is said to be increasing if x ≤ y ⇒ φ(x) ≤ φ(y). If x ≤ y and x ≠ y ⇒ φ(x) < φ(y), then φ is said to be strictly increasing. If −φ is increasing (or strictly increasing, respectively), then φ is said to be decreasing (or strictly decreasing, respectively).
(b) The set Ω ⊂ ℝ^n is said to be a convex set if x, y ∈ Ω and 0 ≤ α ≤ 1 imply αx + (1 − α)y = (αx_1 + (1 − α)y_1, ..., αx_n + (1 − α)y_n) ∈ Ω.
(c) Let Ω ⊂ ℝ^n be a convex set. A function φ : Ω → ℝ is said to be a convex function on Ω if

φ(αx + (1 − α)y) ≤ αφ(x) + (1 − α)φ(y)  (1.1.1)

holds for all x, y ∈ Ω and all α ∈ [0, 1]. If strict inequality holds in (1.1.1) whenever x ≠ y and α ∈ (0, 1), then φ is said to be strictly convex. If −φ is convex, φ is said to be concave, and if −φ is strictly convex, φ is said to be strictly concave.

Remark 1.1.1 ([48, p. 413]). If we take α = 1/2 in (1.1.1), that is,

φ((x + y)/2) ≤ (φ(x) + φ(y))/2,  (1.1.2)

then φ is called midpoint convex on Ω, also called convex in the Jensen sense, or J-convex. Obviously every convex function is J-convex; conversely, it can be proved that a continuous J-convex function is convex.

Majorization theory is closely related to convex functions. The properties of convex functions to be used later are given without proof.

Theorem 1.1.1. Let I ⊂ ℝ be an open interval and let g : I → ℝ be differentiable. Then
(a) g is increasing on I if and only if g′(x) ≥ 0 for all x ∈ I;
(b) g is strictly increasing on I if and only if g′(x) ≥ 0 for all x ∈ I and the set where g′(x) = 0 contains no intervals.

Theorem 1.1.2. Let I ⊂ ℝ be an open interval and let g : I → ℝ be twice differentiable. Then
(a) g is convex on I if and only if g″(t) ≥ 0 for all t ∈ I;
(b) if g″(t) > 0 for all t ∈ I, then g is strictly convex on I.

The above two theorems are well-known results in calculus. The following theorem allows us to reduce the convexity of a multivariate function to that of functions of one variable.

Theorem 1.1.3. Let Ω ⊂ ℝ^n be an open convex set and let φ : Ω → ℝ. For x, y ∈ Ω, define the function of one variable g(t) = φ(tx + (1 − t)y) on (0, 1). Then
(a) φ is convex on Ω if and only if g is convex on (0, 1) for all x, y ∈ Ω;
(b) φ is strictly convex on Ω if and only if g is strictly convex on (0, 1) for all x, y ∈ Ω with x ≠ y.

Theorem 1.1.4. Let Ω ⊂ ℝ^n be an open convex set and let φ : Ω → ℝ be differentiable. Then
(a) φ is increasing on Ω if and only if ∇φ(x) := (∂φ(x)/∂x_1, ..., ∂φ(x)/∂x_n) ≥ 0 for all x ∈ Ω;
(b) φ is strictly increasing on Ω if and only if ∇φ(x) ≥ 0 for all x in the interior of Ω and, for fixed x_1, ..., x_{i−1}, x_{i+1}, ..., x_n, the set of all x_i such that ∂φ/∂x_i (x_1, ..., x_i, ..., x_n) = 0 contains no intervals, i = 1, ..., n.

Theorem 1.1.5. Let Ω ⊂ ℝ^n be an open convex set and let φ : Ω → ℝ be twice differentiable. Then φ is convex on Ω if and only if the Hessian matrix H(x) is nonnegative definite on Ω. If H(x) is positive definite on Ω, then φ is strictly convex on Ω.

Theorem 1.1.6. Let Ω ⊂ ℝ^n, φ_i : Ω → ℝ, i = 1, ..., k, h : ℝ^k → ℝ, and ψ(x) = h(φ_1(x), ..., φ_k(x)).
(a) If each φ_i is convex and h is increasing and convex, then ψ is convex;
(b) if each φ_i is concave and h is decreasing and convex, then ψ is convex;
(c) if each φ_i is convex and h is decreasing and concave, then ψ is concave;
(d) if each φ_i is concave and h is increasing and concave, then ψ is concave.
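Theorem 1.1.3 also suggests a simple numerical probe for convexity; the following sketch (the helper below is illustrative, not from the text) restricts φ to random segments and tests midpoint convexity of g(t) = φ(tx + (1 − t)y):

```python
import random

# Probe convexity via Theorem 1.1.3: φ is convex iff every restriction
# g(t) = φ(t·x + (1 − t)·y) to a segment is convex; here we test midpoint
# convexity of g at random parameters on random segments.

def midpoint_convex_on_lines(phi, dim, trials=2000, tol=1e-9):
    random.seed(1)
    for _ in range(trials):
        x = [random.uniform(-5, 5) for _ in range(dim)]
        y = [random.uniform(-5, 5) for _ in range(dim)]
        s, u = random.random(), random.random()
        g = lambda t: phi([t * a + (1 - t) * b for a, b in zip(x, y)])
        if g((s + u) / 2) > (g(s) + g(u)) / 2 + tol:
            return False
    return True

convex_phi = lambda v: sum(vi ** 2 for vi in v)    # convex: squared norm
concave_phi = lambda v: -sum(vi ** 2 for vi in v)  # concave, not convex
print(midpoint_convex_on_lines(convex_phi, 3))   # True
print(midpoint_convex_on_lines(concave_phi, 3))  # False
```

Such a probe can only refute convexity, never prove it; the theorems above provide the actual criteria.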


Corollary 1.1.1. Let Ω ⊂ ℝ^n and φ : Ω → ℝ_{++}. (a) If log φ is convex, then φ is convex; (b) if φ is concave, then log φ is concave.

Theorem 1.1.7. Let I ⊂ ℝ be an interval, g : I → ℝ, φ : ℝ^n → ℝ, and ψ(x) = φ(g(x_1), ..., g(x_n)).
(a) If g is convex and φ is increasing and convex, then ψ is convex;
(b) if g is concave and φ is decreasing and convex, then ψ is convex;
(c) if g is convex and φ is decreasing and concave, then ψ is concave;
(d) if g is concave and φ is increasing and concave, then ψ is concave.

Corollary 1.1.2. Let Ω ⊂ ℝ^n, φ_i : Ω → ℝ, a_i > 0, i = 1, ..., k. If each φ_i is convex (or concave, respectively), then ψ = ∑_{i=1}^{k} a_i φ_i is convex (or concave, respectively); if each φ_i is strictly convex (or strictly concave, respectively), then ψ is strictly convex (or strictly concave, respectively).

Corollary 1.1.3. Let I ⊂ ℝ, g : I → ℝ, and a_i > 0, i = 1, ..., n. If g is convex (or concave, respectively), then ψ(x) = ∑_{i=1}^{n} a_i g(x_i) is convex (or concave, respectively); if g is strictly convex (or strictly concave, respectively), then ψ is strictly convex (or strictly concave, respectively).

Example 1.1.1 ([23]). The function of two variables φ(x, y) = x²/(2a²) + y²/(2b²) is convex on ℝ²_{++}, where a > 0, b > 0.

Proof. The function g(t) = t² is convex on ℝ_{++} and 1/(2a²) > 0, 1/(2b²) > 0, so the claim follows from Corollary 1.1.3.

Remark 1.1.2. In judging the convexity of multivariate functions, we should make full use of the properties of convex functions; using Theorem 1.1.5 directly is often rather complicated. The proof of the following example is simpler than the original proof.

Example 1.1.2 ([25]). The following functions are convex on ℝ^n:
(a) ψ_1(x) = (∑_{i=1}^{n} x_i²)^{p/2}, p ≥ 1;
(b) ψ_2(x) = (1 + ∑_{i=1}^{n} x_i²)^{∑_{i=1}^{n} x_i²}.

Proof. We only prove (b); the proof of (a) is similar and is left to the reader. As in Example 1.1.1 we can prove that φ(x) = ∑_{i=1}^{n} x_i² is convex on ℝ^n. Let h(t) = (1 + t)^t. By computation we obtain

h′(t) = [log(1 + t) + t/(1 + t)] h(t) ≥ 0

and

h″(t) = [log(1 + t) + t/(1 + t)]² h(t) + ((2 + t)/(1 + t)²) h(t) ≥ 0  for t ≥ 0.

Therefore the function h(t) is increasing and convex on ℝ_+, and thus, by Theorem 1.1.6(a), ψ_2(x) = h(φ(x)) is convex on ℝ^n.

Next we present an example of how to use Theorem 1.1.5 to judge the convexity of multivariate functions.

Example 1.1.3 ([163]). Let a, x ∈ ℝ^n_{++} and ∑_{i=1}^{n} x_i = 1. Prove that the function l(a) = a_1^{x_1} ⋯ a_n^{x_n} is concave in a ∈ ℝ^n_{++}.

Proof. By computation, the Hessian matrix H = (h_{jk}) of l(a) has entries

h_{jj} = x_j(x_j − 1) l(a)/a_j²,  h_{jk} = x_j x_k l(a)/(a_j a_k)  (j ≠ k).  (1.1.3)

For 1 ≤ i ≤ n, the ith leading principal minor of (1.1.3) is

det H_i = l(a)^i ∏_{j=1}^{i} (x_j/a_j)² · det M_i,

where M_i is the i × i matrix whose jth diagonal entry is 1 − 1/x_j and whose off-diagonal entries all equal 1. Evaluating this determinant gives

det H_i = (−1)^i l(a)^i (1 − x_1 − x_2 − ⋯ − x_i) ∏_{j=1}^{i} x_j/a_j².

Since x ∈ ℝ^n_{++} and ∑_{i=1}^{n} x_i = 1, we have 1 − x_1 − ⋯ − x_i ≥ 0, so the sign of det H_i is (−1)^i or zero. By the discriminant method for negative semidefinite matrices (see [110, pp. 8–9]), it follows that (1.1.3) is negative semidefinite, and then from Theorem 1.1.5 (applied to −l) we conclude that l(a) = a_1^{x_1} ⋯ a_n^{x_n} is concave in a ∈ ℝ^n_{++}.

The above example shows that determining the convexity of multivariate functions directly can be a complicated matter.
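The concavity proved in Example 1.1.3 can also be probed numerically through the midpoint inequality l((a + b)/2) ≥ (l(a) + l(b))/2; the following sketch (helper names are illustrative) does this for random positive vectors:

```python
import math
import random

# Numerical check of Example 1.1.3: for fixed weights x_i > 0 summing to 1,
# l(a) = a_1^{x_1} ... a_n^{x_n} satisfies the midpoint concavity inequality
# l((a + b)/2) >= (l(a) + l(b))/2 on the positive orthant.

def weighted_gm(a, x):
    return math.prod(ai ** xi for ai, xi in zip(a, x))

def midpoint_concave(n=4, trials=2000, tol=1e-12):
    random.seed(2)
    w = [random.random() + 0.01 for _ in range(n)]
    s = sum(w)
    x = [wi / s for wi in w]                     # weights summing to 1
    for _ in range(trials):
        a = [random.uniform(0.1, 10.0) for _ in range(n)]
        b = [random.uniform(0.1, 10.0) for _ in range(n)]
        m = [(ai + bi) / 2 for ai, bi in zip(a, b)]
        if weighted_gm(m, x) + tol < (weighted_gm(a, x) + weighted_gm(b, x)) / 2:
            return False
    return True

print(midpoint_concave())  # True
```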

1.2 Generalization of convex functions

There are many kinds of generalized convex functions; this section introduces only several kinds that will be used later.


1.2.1 Logarithmically convex functions

Definition 1.2.1. Let Ω ⊂ ℝ^n. A function φ : Ω → ℝ_{++} is said to be logarithmically convex if log φ is convex on Ω, that is,

φ(αx + (1 − α)y) ≤ [φ(x)]^α [φ(y)]^{1−α}  (0 ≤ α ≤ 1)  (1.2.1)

holds for all x, y ∈ Ω; φ is said to be a logarithmically concave function if inequality (1.2.1) is reversed.

Theorem 1.2.1 ([61]). Let the interval I ⊂ ℝ and φ : I → ℝ_{++}. Then φ is logarithmically convex on I if and only if, for every a ∈ ℝ, e^{ax} φ(x) is convex on I.

1.2.2 Weakly logarithmically convex functions

Definition 1.2.2. Let the interval I ⊂ ℝ. A function φ : I → ℝ_{++} is said to be a weakly logarithmically convex function if

φ((x_1 + x_2)/2) ≤ √(φ(x_1) φ(x_2))  (1.2.2)

holds for all x_1, x_2 ∈ I; φ is said to be a weakly logarithmically concave function if inequality (1.2.2) is reversed.

For weakly logarithmically convex functions, Guan [29] obtained the following conclusion.

Theorem 1.2.2. Let the interval I ⊂ ℝ, let φ : I → ℝ_{++} be a weakly logarithmically convex function, and let x_k ∈ I, k = 1, 2, ..., n. Then

φ((1/n) ∑_{k=1}^{n} x_k) ≤ (∏_{k=1}^{n} φ(x_k))^{1/n}.  (1.2.3)

If φ is weakly logarithmically concave on I, then inequality (1.2.3) is reversed.

Theorem 1.2.3 ([61]). Let φ(x) be a positive twice differentiable function on an interval I. Then
(a) φ is weakly logarithmically convex on I ⇔ (φ′(x))² ≤ φ(x)φ″(x), i.e., φ′(x)/φ(x) is increasing for x ∈ I;
(b) φ is weakly logarithmically concave on I ⇔ (φ′(x))² ≥ φ(x)φ″(x), i.e., φ′(x)/φ(x) is decreasing for x ∈ I.
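Definition 1.2.2 is easy to probe numerically; the sketch below (helper names are illustrative) checks that cosh satisfies (1.2.2) while sinh does not on (0, ∞), in line with (log cosh x)″ = 1/cosh²x > 0 and (log sinh x)″ < 0:

```python
import math
import random

# Numerical illustration of Definition 1.2.2: cosh is weakly logarithmically
# convex (log cosh is convex), while sinh is weakly logarithmically concave
# on the positive half-line, so it fails the convexity test.

def weakly_log_convex(phi, lo, hi, trials=5000, tol=1e-9):
    random.seed(3)
    for _ in range(trials):
        x1, x2 = random.uniform(lo, hi), random.uniform(lo, hi)
        if phi((x1 + x2) / 2) > math.sqrt(phi(x1) * phi(x2)) + tol:
            return False
    return True

print(weakly_log_convex(math.cosh, -5.0, 5.0))  # True
print(weakly_log_convex(math.sinh, 0.1, 5.0))   # False: weakly log-concave
```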

1.2.3 Geometrically convex functions

Definition 1.2.3.
(a) Ω ⊂ ℝ^n_{++} is said to be a geometrically convex set (or logarithmically convex set) if x, y ∈ Ω and 0 ≤ α ≤ 1 imply x^α y^{1−α} ∈ Ω.
(b) Let Ω ⊂ ℝ^n_{++} be a geometrically convex set. A function φ : Ω → ℝ_{++} is said to be a geometrically convex function on Ω if

φ(x^α y^{1−α}) ≤ [φ(x)]^α [φ(y)]^{1−α}  (1.2.4)

holds for all x, y ∈ Ω and all α ∈ [0, 1]; φ is said to be a geometrically concave function if inequality (1.2.4) is reversed.

Theorem 1.2.4 ([160]). Let the interval I ⊂ ℝ_{++} and f : I → ℝ_{++}. Then the following four conclusions are equivalent to each other:
(a) The inequality

f(√(x_1 x_2)) ≤ √(f(x_1) f(x_2))  (1.2.5)

holds for all x_1, x_2 ∈ I.
(b) The inequality

f(x_1^α x_2^β) ≤ [f(x_1)]^α [f(x_2)]^β  (1.2.6)

holds for all x_1, x_2 ∈ I and all α, β > 0 with α + β = 1.
(c) Let n ∈ ℕ, n ≥ 2. The inequality

f((x_1 ⋯ x_n)^{1/n}) ≤ (f(x_1) ⋯ f(x_n))^{1/n}  (1.2.7)

holds for all x_1, ..., x_n ∈ I.
(d) Let n ∈ ℕ, n ≥ 2. The inequality

f(∏_{i=1}^{n} x_i^{λ_i}) ≤ ∏_{i=1}^{n} [f(x_i)]^{λ_i}  (1.2.8)

holds for all x_1, ..., x_n ∈ I and all λ_i > 0, i = 1, ..., n, with ∑_{i=1}^{n} λ_i = 1.

Theorem 1.2.5 ([157]).
(a) Let Ω ⊂ ℝ^n_{++}. If φ is geometrically convex (or geometrically concave, respectively) on Ω, then log φ(e^x) is convex (or concave, respectively) on log Ω = {log x | x ∈ Ω}.
(b) Let Ω ⊂ ℝ^n. If φ is convex (or concave, respectively) on Ω, then e^{φ(log x)} is geometrically convex (or geometrically concave, respectively) on e^Ω = {e^x | x ∈ Ω}.


Theorem 1.2.6 ([64]). Let the interval I ⊂ ℝ_{++} and let the function φ : I → ℝ_{++} be differentiable. Then the following conclusions are equivalent to each other:
(a) φ is geometrically convex on I;
(b) xφ′(x)/φ(x) is increasing on I;
(c) φ(x)/φ(y) ≥ (x/y)^{yφ′(y)/φ(y)} holds for all x, y ∈ I.
(d) Furthermore, if the function φ has a second derivative, then φ is geometrically convex (or geometrically concave, respectively) if and only if

x[φ(x)φ″(x) − (φ′(x))²] + φ(x)φ′(x) ≥ 0  (or ≤ 0, respectively)  (1.2.9)

holds for all x ∈ I.

Example 1.2.1. Let φ = φ(x) = tanh x. Then
(a) φ is an increasing concave function on ℝ_{++};
(b) φ is a geometrically concave function on ℝ_{++};
(c) for any x_1, x_2 ∈ ℝ_{++}, we have

tanh √(x_1 x_2) ≥ √(tanh x_1 tanh x_2).  (1.2.10)

Proof. (a) When x > 0, we have sinh x ≥ 0, cosh x ≥ 1, and 0 < tanh x < 1. From φ′(x) = 1/cosh²x > 0 and φ″(x) = −2 sinh x/cosh³x ≤ 0, it follows that φ = tanh x is an increasing concave function on ℝ_{++}.

(b) We have

Δ := xφφ″ − x(φ′)² + φφ′
  = −2x tanh x sinh x/cosh³x − x/cosh⁴x + tanh x/cosh²x
  = (tanh x cosh²x − x tanh x sinh 2x − x)/cosh⁴x
  = ψ(x)/cosh⁴x,

where ψ(x) = tanh x cosh²x − x tanh x sinh 2x − x. Since

ψ′(x) = (1 + tanh x sinh 2x) − tanh x sinh 2x − x(tanh x sinh 2x)′ − 1
  = −x(sinh 2x/cosh²x + 2 tanh x cosh 2x) ≤ 0

and ψ(0) = 0, we obtain ψ(x) = tanh x cosh²x − x tanh x sinh 2x − x ≤ 0 for x ≥ 0, and then Δ ≤ 0. From Theorem 1.2.6(d), it follows that φ is a geometrically concave function on ℝ_{++}.

(c) By case (b) and Definition 1.2.3 (with inequality (1.2.4) reversed), we obtain (1.2.10).

It is not difficult to verify that logarithmically convex functions and geometrically convex functions have the following relation.

Theorem 1.2.7 ([157, p. 30]). Let the interval I ⊂ ℝ_{++} and let φ : I → ℝ_{++} be logarithmically convex (or concave, respectively) and increasing (or decreasing, respectively). Then φ is geometrically convex (or geometrically concave, respectively).
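Criterion (b) of Theorem 1.2.6 is also convenient for numerical experiments. As a sketch (helper names are illustrative), for φ(x) = e^x the quotient xφ′(x)/φ(x) equals x, which is increasing, so e^x is geometrically convex; below the derivative is approximated by a central difference:

```python
import math

# Illustration of Theorem 1.2.6(b): φ is geometrically convex iff
# x·φ′(x)/φ(x) is increasing. For φ(x) = e^x this quotient is x itself;
# we approximate φ′ by a central difference and check monotonicity.

def geom_quotient(phi, x, h=1e-6):
    dphi = (phi(x + h) - phi(x - h)) / (2 * h)   # central-difference φ′(x)
    return x * dphi / phi(x)

xs = [0.1 * k for k in range(1, 80)]
vals = [geom_quotient(math.exp, x) for x in xs]
print(all(a < b for a, b in zip(vals, vals[1:])))  # True: increasing
```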

1.2.4 Harmonically convex functions

Definition 1.2.4.
(a) The set Ω ⊂ ℝ^n_{++} is said to be a harmonically convex set if x, y ∈ Ω and 0 ≤ α ≤ 1 imply (αx^{−1} + (1 − α)y^{−1})^{−1} ∈ Ω.
(b) Let Ω ⊂ ℝ^n_{++} be a harmonically convex set. A function φ : Ω → ℝ_{++} is said to be a harmonically convex function on Ω if

φ((αx^{−1} + (1 − α)y^{−1})^{−1}) ≤ (α(φ(x))^{−1} + (1 − α)(φ(y))^{−1})^{−1}  (1.2.11)

holds for all x, y ∈ Ω and all α ∈ [0, 1]; φ is said to be a harmonically concave function if inequality (1.2.11) is reversed.

Theorem 1.2.8 ([156]). Let the interval (a, b) ⊂ ℝ_{++} and φ : (a, b) → ℝ_{++}. The necessary and sufficient condition for φ to be harmonically convex (or concave, respectively) on (a, b) is that (φ(x^{−1}))^{−1} is concave (or convex, respectively) on (b^{−1}, a^{−1}).

Theorem 1.2.9 ([126]). Let the interval I ⊂ ℝ_{++} be a harmonically convex set and let φ : I → ℝ_{++} be twice differentiable. If

x[2(φ′(x))² − φ(x)φ″(x)] − 2φ(x)φ′(x) ≤ 0  (or ≥ 0, respectively)  (1.2.12)

holds for all x ∈ I, then φ is harmonically convex (or concave, respectively) on I.

1.2.5 Generalized convex functions

Definition 1.2.5. A function M : ℝ²_{++} → ℝ_{++} is called a mean function if
(a) M(x, y) = M(y, x);
(b) M(x, x) = x;


(c) x < M(x, y) < y whenever x < y;
(d) M(ax, ay) = aM(x, y) for all a > 0.

Example 1.2.2. We present the following examples:
(a) M(x, y) = A(x, y) = (x + y)/2 is the arithmetic mean;
(b) M(x, y) = G(x, y) = √(xy) is the geometric mean;
(c) M(x, y) = H(x, y) = 1/A(1/x, 1/y) = 2xy/(x + y) is the harmonic mean;
(d) M(x, y) = L(x, y) = (x − y)/(log x − log y) for x ≠ y, with L(x, x) = x, is the logarithmic mean;
(e) M(x, y) = I(x, y) = (1/e)(x^x/y^y)^{1/(x−y)} for x ≠ y, with I(x, x) = x, is the identric mean.

Anderson [1] has made a more systematic generalization of convex functions of one variable.

Definition 1.2.6. Let f : I → ℝ_{++} be continuous, where I is a subinterval of ℝ_{++}, and let M and N be any two mean functions. We say f is MN-convex (or concave, respectively) if f(M(x, y)) ≤ (or ≥, respectively) N(f(x), f(y)) for all x, y ∈ I.

It is clear that an AA-convex function is a convex function in the usual sense, a GG-convex function is a geometrically convex function, and an AG-convex function is a weakly logarithmically convex function.

Theorem 1.2.10. Let I be an open subinterval of ℝ_{++} and let f : I → ℝ_{++} be continuous. In (d)–(i), let I = (0, b), 0 < b < ∞. Then
(a) f is AA-convex (or concave, respectively) if and only if f is convex (or concave, respectively);
(b) f is AG-convex (or concave, respectively) if and only if log f is convex (or concave, respectively);
(c) f is AH-convex (or concave, respectively) if and only if 1/f is concave (or convex, respectively);
(d) f is GA-convex (or concave, respectively) on I if and only if f(be^{−x}) is convex (or concave, respectively) on ℝ_{++};
(e) f is GG-convex (or concave, respectively) on I if and only if log f(be^{−x}) is convex (or concave, respectively) on ℝ_{++};
(f) f is GH-convex (or concave, respectively) on I if and only if 1/f(be^{−x}) is concave (or convex, respectively) on ℝ_{++};
(g) f is HA-convex (or concave, respectively) on I if and only if f(1/x) is convex (or concave, respectively) on (1/b, ∞);
(h) f is HG-convex (or concave, respectively) on I if and only if log f(1/x) is convex (or concave, respectively) on (1/b, ∞);
(i) f is HH-convex (or concave, respectively) on I if and only if 1/f(1/x) is concave (or convex, respectively) on (1/b, ∞).

The next result is an immediate consequence of Theorem 1.2.10.

Corollary 1.2.1. Let I be an open subinterval of ℝ_{++} and let f : I → ℝ_{++} be differentiable. In (d)–(i), let I = (0, b), 0 < b < ∞. Then
(a) f is AA-convex (or concave, respectively) if and only if f′(x) is increasing (or decreasing, respectively);
(b) f is AG-convex (or concave, respectively) if and only if f′(x)/f(x) is increasing (or decreasing, respectively);
(c) f is AH-convex (or concave, respectively) if and only if f′(x)/f(x)² is increasing (or decreasing, respectively);
(d) f is GA-convex (or concave, respectively) if and only if xf′(x) is increasing (or decreasing, respectively);
(e) f is GG-convex (or concave, respectively) if and only if xf′(x)/f(x) is increasing (or decreasing, respectively);
(f) f is GH-convex (or concave, respectively) if and only if xf′(x)/f(x)² is increasing (or decreasing, respectively);
(g) f is HA-convex (or concave, respectively) if and only if x²f′(x) is increasing (or decreasing, respectively);
(h) f is HG-convex (or concave, respectively) if and only if x²f′(x)/f(x) is increasing (or decreasing, respectively);
(i) f is HH-convex (or concave, respectively) if and only if x²f′(x)/f(x)² is increasing (or decreasing, respectively).

Remark 1.2.1. Since H(x, y) ≤ G(x, y) ≤ A(x, y), it follows that
(a) f is AH-convex ⇒ f is AG-convex ⇒ f is AA-convex;
(b) f is GH-convex ⇒ f is GG-convex ⇒ f is GA-convex;
(c) f is HH-convex ⇒ f is HG-convex ⇒ f is HA-convex.
Further, if f is increasing (or decreasing, respectively), then AN-convexity (or concavity, respectively) implies GN-convexity (or concavity, respectively), which in turn implies HN-convexity (or concavity, respectively), where N is any mean function. For concavity, the implications in (a)–(c) are reversed. These implications are strict, as shown by the examples below.

Example 1.2.3. For x ∈ ℝ_{++},
(a) f(x) = cosh x is AG-convex, hence GG-convex and HG-convex, on ℝ_{++}; but it is not AH-convex, nor GH-convex, nor HH-convex;
(b) f(x) = sinh x is AA-convex, but AG-concave, on ℝ_{++};
(c) f(x) = e^x is GG-convex and HG-convex, but neither GH-convex nor HH-convex, on ℝ_{++};
(d) f(x) = log(1 + x) is GA-convex, but GG-concave, on ℝ_{++};
(e) f(x) = arctan x is HA-convex, but not HG-convex, on ℝ_{++}.
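Definition 1.2.6 lends itself to a direct numerical probe; the sketch below (the mean functions and helper are illustrative implementations) tests Example 1.2.3(a): cosh is AG-convex but not AH-convex:

```python
import math
import random

# Direct numerical probe of Definition 1.2.6: f is MN-convex when
# f(M(x, y)) <= N(f(x), f(y)) for all x, y in the interval.

A = lambda x, y: (x + y) / 2             # arithmetic mean
G = lambda x, y: math.sqrt(x * y)        # geometric mean
H = lambda x, y: 2 * x * y / (x + y)     # harmonic mean

def mn_convex(f, M, N, lo=0.01, hi=5.0, trials=5000, tol=1e-9):
    random.seed(5)
    for _ in range(trials):
        x, y = random.uniform(lo, hi), random.uniform(lo, hi)
        if f(M(x, y)) > N(f(x), f(y)) + tol:
            return False
    return True

print(mn_convex(math.cosh, A, G))  # True:  cosh is AG-convex
print(mn_convex(math.cosh, A, H))  # False: cosh is not AH-convex
```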


Guan and Guan [35] have obtained the following conclusions.

Theorem 1.2.11. Let I ⊂ ℝ_{++} be an interval and f : I → ℝ_{++} be continuous in I. Then
(a) f is GA-convex (or concave, respectively) in I if and only if f(e^x) is convex (or concave, respectively) in log I = {log x | x ∈ I};
(b) f is HA-convex (or concave, respectively) in I if and only if f(1/x) is convex (or concave, respectively) in 1/I = {1/x | x ∈ I};
(c) f is GG-convex (or concave, respectively) in I if and only if log f(e^x) is convex (or concave, respectively) in log I = {log x | x ∈ I}.

For more information about MN-convex functions, see references [64, 15, 36, 11].

1.2.6 Generalized power convex functions

In 2018, Dr Tao Zhang from the School of Mathematical Sciences of the Inner Mongolia University of China proposed the concept of generalized power convex functions in his doctoral thesis.

Let x = (x_1, …, x_n), y = (y_1, …, y_n) ∈ ℝ^n_{++}, m ∈ ℝ, and 0 ≤ λ ≤ 1. Then the m-order weighted power mean of x and y is defined as

M_m(x, y, λ) = ((λx_1^m + (1 − λ)y_1^m)^{1/m}, …, (λx_n^m + (1 − λ)y_n^m)^{1/m}) for m ≠ 0,
M_0(x, y, λ) = (x_1^λ y_1^{1−λ}, …, x_n^λ y_n^{1−λ}) for m = 0.

In particular, when λ = 1/2, we write M_m(x, y, 1/2) = M_m(x, y). It is clear that

M_0(x, y, λ) = lim_{m→0} M_m(x, y, λ) = G(x, y, λ)

and

M_1(x, y, λ) = A(x, y, λ), M_{−1}(x, y, λ) = H(x, y, λ).

For any x ∈ ℝ^n_+, let

φ_m(x) = x^m for m ≠ 0 and φ_m(x) = log x for m = 0;
φ_m^{−1}(x) = x^{1/m} for m ≠ 0 and φ_m^{−1}(x) = exp x for m = 0,

both applied componentwise. Then for any x, y ∈ Ω ⊂ ℝ^n_+ and 0 ≤ λ ≤ 1, we have

M_m(x, y, λ) = φ_m^{−1}(λφ_m(x) + (1 − λ)φ_m(y)) = φ_m^{−1}(A(φ_m(x), φ_m(y), λ)).  (1.2.13)
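The construction above, together with identity (1.2.13), can be sketched in code. This is a minimal componentwise implementation (the function names phi, phi_inv, and power_mean are my own); the m → 0 limit toward the geometric mean is checked by evaluating at a small m:

```python
import math

def phi(m, t):
    """φ_m(t): t**m for m ≠ 0, log t for m = 0."""
    return math.log(t) if m == 0 else t ** m

def phi_inv(m, t):
    """φ_m^{-1}(t): t**(1/m) for m ≠ 0, exp t for m = 0."""
    return math.exp(t) if m == 0 else t ** (1.0 / m)

def power_mean(m, x, y, lam):
    """Componentwise m-order weighted power mean M_m(x, y, λ), via (1.2.13)."""
    return tuple(phi_inv(m, lam * phi(m, a) + (1 - lam) * phi(m, b))
                 for a, b in zip(x, y))

x, y, lam = (1.0, 4.0), (9.0, 2.0), 0.25      # hypothetical sample data
a1 = power_mean(1, x, y, lam)                 # arithmetic mean A(x, y, λ)
h1 = power_mean(-1, x, y, lam)                # harmonic mean H(x, y, λ)
g0 = power_mean(0, x, y, lam)                 # geometric mean G(x, y, λ)
g_lim = power_mean(1e-6, x, y, lam)           # M_m ≈ M_0 for small m
print(a1, h1, g0)
```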

Definition 1.2.7. Let m ∈ ℝ. A subset Ω ⊂ ℝ^n_{++} is said to be an M_m-convex set if M_m(x, y, λ) ∈ Ω for all x, y ∈ Ω and 0 ≤ λ ≤ 1.

Obviously, the M_1-convex set is the usual convex set, the M_0-convex set is the geometrically convex set, and the M_{−1}-convex set is the harmonic convex set. The definition of M_{m1}M_{m2}-convex functions is given below.

Definition 1.2.8. Let m_1, m_2 ∈ ℝ, and let Ω ⊂ ℝ^n_{++} be an M_{m1}-convex set. If the function f : Ω → ℝ_{++} satisfies

f(M_{m1}(x, y, λ)) ≤ (or ≥, respectively) M_{m2}(f(x), f(y), λ)

for any x, y ∈ Ω and 0 ≤ λ ≤ 1, then f is called a generalized power convex (or generalized power concave, respectively) function on Ω, also called an M_{m1}M_{m2}-convex (or concave, respectively) function.

Theorem 1.2.12. Let Ω ⊂ ℝ^n_{++} be an M_{m1}-convex set, let Ω_1 = {φ_{m1}(x) | x ∈ Ω}, and let f : Ω → ℝ_{++} be a function.
(a) If m_2 ≥ 0, then f(x) is M_{m1}M_{m2}-convex (or M_{m1}M_{m2}-concave, respectively) on Ω if and only if the function φ_{m2}(f(φ_{m1}^{−1}(x))) is convex (or concave, respectively) on Ω_1;
(b) if m_2 < 0, then f(x) is M_{m1}M_{m2}-convex (or M_{m1}M_{m2}-concave, respectively) on Ω if and only if the function φ_{m2}(f(φ_{m1}^{−1}(x))) is concave (or convex, respectively) on Ω_1.

When taking m_1 = 1, 0, −1, respectively, the following three corollaries can be obtained from Theorem 1.2.12.

Corollary 1.2.2. Let Ω ⊂ ℝ^n_{++} be a convex set, and let f : Ω → ℝ_{++} be a function.
(a) If m ≥ 0, then f(x) is an AM_m-convex (or AM_m-concave, respectively) function on Ω if and only if the function φ_m(f(x)) is convex (or concave, respectively) on Ω;
(b) if m < 0, then f(x) is AM_m-convex (or AM_m-concave, respectively) on Ω if and only if the function φ_m(f(x)) is concave (or convex, respectively) on Ω.

Corollary 1.2.3. Let Ω ⊂ ℝ^n_{++} be a geometrically convex set, and let f : Ω → ℝ_{++} be a function.
(a) If m ≥ 0, then f(x) is GM_m-convex (or GM_m-concave, respectively) on Ω if and only if the function φ_0^{−1}(φ_m(f(x))) is geometrically convex (or geometrically concave, respectively) on Ω;
(b) if m < 0, then f(x) is GM_m-convex (or GM_m-concave, respectively) on Ω if and only if the function φ_0^{−1}(φ_m(f(x))) is geometrically concave (or geometrically convex, respectively) on Ω.

Corollary 1.2.4. Let Ω ⊂ ℝ^n_{++} be a harmonic convex set, and let f : Ω → ℝ_{++} be a function.


(a) If m > 0, then f (x) is HMm -convex (or HMm -concave, respectively) on Ω if and only if the function φ−m f (x) is harmonically concave (or harmonically convex, respectively) on Ω; (b) if m = 0, then f (x) is HG (HM0 )-convex (or HG-(HM0 )-concave, respectively) on Ω if and only if the function φ−1 φ0 f (x) is harmonically concave (or harmonically convex, respectively) on Ω; (c) if m < 0, then f (x) is HMm -convex (or HMm -concave, respectively) on Ω if and only if the function φ−m f (x) is harmonically convex (or harmonically concave, respectively) on Ω. Corollary 1.2.5. Let Ω ⊂ ℝn++ be an Mm1 -convex set, let Ω2 = {φ−1 m1 (x) | x ∈ Ω}, and let f : Ω → ℝ+ be a function. (a) If m2 ≥ 0, then f (x) is convex (or concave, respectively) on Ω if and only if the function φ−1 m2 (f (φm1 (x))) is Mm1 Mm2 -convex (or Mm1 Mm2 -concave, respectively) on Ω2 ; (b) if m2 < 0, then f (x) is convex (or concave, respectively) on Ω if and only if the function φ−1 m2 (f (φm1 (x))) is Mm1 Mm2 -concave (or Mm1 Mm2 -convex, respectively) on Ω2 . Theorem 1.2.13. Let Ω ⊂ ℝn++ be an Mm1 -convex set, and let f : Ω → ℝ++ be a function. (a) When m > m2 , if f (x) is Mm1 Mm2 -convex on Ω, then f (x) is Mm1 Mm -convex on Ω; conversely, if f (x) is Mm1 Mm -concave on Ω, then f (x) is Mm1 Mm2 -concave on Ω. (b) When m < m2 , if f (x) is Mm1 Mm2 -concave on Ω, then f (x) is Mm1 Mm -concave on Ω; conversely, if f (x) is Mm1 Mm -convex on Ω, then f (x) is Mm1 Mm2 -convex on Ω. Theorem 1.2.14. Let Ω ⊂ ℝn++ be an Mm1 -convex set, and let f : Ω → ℝ++ be a continuous function. Then f (x) is Mm1 Mm2 -convex (or Mm1 Mm2 -concave, respectively) on Ω if and only if the inequality f (Mm1 (x, y)) ≤ (or ≥, respectively) Mm2 (f (x), f (y)) holds for any x, y ∈ Ω. Theorem 1.2.15. Let Ω ⊂ ℝ++ be an Mm1 -convex set, gi : Ω → ℝ++ , 1 ≤ i ≤ k,

⨁ki=1 R(gi ) be the Mm2 convex set, f : ⨁ki=1 R(gi ) → ℝ++ , and F(x) = f (g1 (x), . . . , gk (x)).

(a) If gi (x) (1 ≤ i ≤ k) is Mm1 Mm2 -convex and f is increasing and Mm2 Mm3 -convex, then F(x) is Mm1 Mm3 -convex; (b) if gi (x) (1 ≤ i ≤ k) is Mm1 Mm2 -concave and f is increasing and Mm2 Mm3 -concave, then F(x) is Mm1 Mm3 -concave; (c) if gi (x) (1 ≤ i ≤ k) is Mm1 Mm2 -concave and f is decreasing and Mm2 Mm3 -convex, then F(x) is Mm1 Mm3 -convex;

(d) if g_i(x) (1 ≤ i ≤ k) is M_{m1}M_{m2}-convex and f is decreasing and M_{m2}M_{m3}-concave, then F(x) is M_{m1}M_{m3}-concave.

Theorem 1.2.16. Let g_i : ℝ^n_{++} → ℝ_{++}, 1 ≤ i ≤ k, f : ℝ^k_{++} → ℝ_{++}, t = (t_1, …, t_n) ∈ ℝ^n_{++}, and F(x) = f(g_1(t_1x_1, …, t_nx_n), …, g_k(t_1x_1, …, t_nx_n)).
(a) If g_i(x) (1 ≤ i ≤ k) is M_{m1}M_{m2}-convex and f(x) is increasing and M_{m2}M_{m3}-convex, then F(x) is M_{m1}M_{m3}-convex on ℝ^n_{++};
(b) if g_i(x) (1 ≤ i ≤ k) is M_{m1}M_{m2}-concave and f(x) is decreasing and M_{m2}M_{m3}-convex, then F(x) is M_{m1}M_{m3}-convex on ℝ^n_{++};
(c) if g_i(x) (1 ≤ i ≤ k) is M_{m1}M_{m2}-concave and f(x) is increasing and M_{m2}M_{m3}-concave, then F(x) is M_{m1}M_{m3}-concave on ℝ^n_{++};
(d) if g_i(x) (1 ≤ i ≤ k) is M_{m1}M_{m2}-convex and f(x) is decreasing and M_{m2}M_{m3}-concave, then F(x) is M_{m1}M_{m3}-concave on ℝ^n_{++}.

Theorem 1.2.17. Let f_k(x) (1 ≤ k ≤ n) be M_{m1}M_{m2}-convex (or M_{m1}M_{m2}-concave, respectively) functions on ℝ^n_{++}. Then

max_{1≤k≤n} f_k(x) and min_{1≤k≤n} f_k(x)

are both M_{m1}M_{m2}-convex (or M_{m1}M_{m2}-concave, respectively) functions on ℝ^n_{++}.

Theorem 1.2.18. Let Ω ⊂ ℝ^n_{++} be an M_{m1}-convex set, f_k : Ω → ℝ_{++}, 1 ≤ k ≤ n, and

F(x) = (∑_{k=1}^n f_k(x))^{1/m}.  (1.2.14)

(a) If m > 0 and f_k(x) (1 ≤ k ≤ n) is M_{m1}A-convex (or M_{m1}A-concave, respectively) on Ω, then F(x) is M_{m1}M_m-convex (or M_{m1}M_m-concave, respectively) on Ω;
(b) if m < 0 and f_k(x) (1 ≤ k ≤ n) is M_{m1}A-convex (or M_{m1}A-concave, respectively) on Ω, then F(x) is M_{m1}M_m-concave (or M_{m1}M_m-convex, respectively) on Ω.

Theorem 1.2.19. Let Ω ⊂ ℝ^n_{++} be an M_{m1}-convex set, f_k : Ω → ℝ_{++}, 1 ≤ k ≤ n, and

F(x) = (∏_{k=1}^n f_k(x))^{1/m}.  (1.2.15)

(a) If m > 0 and f_k(x) (1 ≤ k ≤ n) is M_{m1}G-convex (or M_{m1}G-concave, respectively) on Ω, then F(x) is M_{m1}M_m-convex (or M_{m1}M_m-concave, respectively) on Ω;
(b) if m < 0 and f_k(x) (1 ≤ k ≤ n) is M_{m1}G-convex (or M_{m1}G-concave, respectively) on Ω, then F(x) is M_{m1}M_m-concave (or M_{m1}M_m-convex, respectively) on Ω.


1.2.7 Wright-convex functions

Definition 1.2.9 ([34]). Let I ⊂ ℝ be an interval. The function f : I → ℝ is said to be Wright-convex if

f(λx + (1 − λ)y) + f((1 − λ)x + λy) ≤ f(x) + f(y)  (1.2.16)

for all x, y ∈ I and λ ∈ [0, 1].

A function f : ℝ → ℝ is additive if f(x + y) = f(x) + f(y) for all x, y ∈ ℝ.

Theorem 1.2.20 ([34]). Let I ⊂ ℝ be an open interval, and let f : I → ℝ be a function. Then f is Wright-convex if and only if there exist a convex function C : I → ℝ and an additive function A : ℝ → ℝ such that f(x) = C(x) + A(x), x ∈ I.

Definition 1.2.10 ([34]). A function f : I → ℝ_{++} is said to be Wright type multiplicatively (or geometrically) convex if

f(x^λ y^{1−λ}) f(x^{1−λ} y^λ) ≤ f(x) f(y)  (1.2.17)

for all x, y ∈ I and λ ∈ [0, 1].

A function f : ℝ_{++} → ℝ_{++} is multiplicative if f(xy) = f(x)f(y) for all x, y ∈ ℝ_{++}.

Theorem 1.2.21 ([34]). Let I ⊂ ℝ_{++} be an open interval, and let f : I → ℝ_{++} be a function. Then f is Wright type multiplicatively convex if and only if there exist a multiplicatively convex function G : I → ℝ_{++} and a multiplicative function M : ℝ_{++} → ℝ_{++} such that f(x) = G(x)M(x), x ∈ I.
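As a sanity check of Definition 1.2.9 and the decomposition in Theorem 1.2.20, the following sketch (the particular choice C(t) = t² and A(t) = 3t is my own example, not from the text) verifies inequality (1.2.16) at random points:

```python
import random

# f = C + A with C(t) = t*t convex and A(t) = 3t additive, so f should be
# Wright-convex; we spot-check (1.2.16) on random triples (x, y, λ).
def f(t):
    return t * t + 3 * t

random.seed(0)
ok = True
for _ in range(1000):
    x, y = random.uniform(-5, 5), random.uniform(-5, 5)
    lam = random.random()
    lhs = f(lam * x + (1 - lam) * y) + f((1 - lam) * x + lam * y)
    ok = ok and lhs <= f(x) + f(y) + 1e-9  # small tolerance for rounding
print(ok)
```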

1.3 Definition and basic properties of majorization For any x = (x1 , . . . , xn ) ∈ ℝn , let x[1] ≥ ⋅ ⋅ ⋅ ≥ x[n] denote the components of x in decreasing order, and let x ↓= (x[1] , . . . , x[n] ) denote the decreasing rearrangement of x. Similarly, let x(1) ≤ ⋅ ⋅ ⋅ ≤ x(n) denote the components of x in increasing order, and let x ↑= (x(1) , . . . , x(n) ) denote the increasing rearrangement of x. Obviously, x[i] = x(n+1−i) .

Definition 1.3.1 ([109, 53]). Let x = (x_1, …, x_n) and y = (y_1, …, y_n) ∈ ℝ^n.
(a) The vector x is said to be majorized by y (in symbols x ≺ y or y ≻ x) if

∑_{i=1}^k x_{[i]} ≤ ∑_{i=1}^k y_{[i]} for k = 1, 2, …, n − 1  (1.3.1)

and

∑_{i=1}^n x_i = ∑_{i=1}^n y_i.  (1.3.2)

If x is not a rearrangement of y, then x is said to be strictly majorized by y, denoted as x ≺≺ y.
(b) x is said to be weakly submajorized by y (in symbols x ≺_w y) if

∑_{i=1}^k x_{[i]} ≤ ∑_{i=1}^k y_{[i]} for k = 1, 2, …, n.  (1.3.3)

(c) x is said to be weakly supermajorized by y (in symbols x ≺^w y) if

∑_{i=1}^k x_{(i)} ≥ ∑_{i=1}^k y_{(i)} for k = 1, 2, …, n.  (1.3.4)
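Definition 1.3.1 translates directly into partial-sum tests on the sorted vectors. The following is a minimal sketch (the function names and the float tolerance are my own) of checkers for ≺, ≺_w, and ≺^w:

```python
def majorizes(y, x, tol=1e-9):
    """True if x ≺ y, i.e. conditions (1.3.1) and (1.3.2) hold."""
    xs = sorted(x, reverse=True)
    ys = sorted(y, reverse=True)
    if len(xs) != len(ys) or abs(sum(xs) - sum(ys)) > tol:  # (1.3.2)
        return False
    px = py = 0.0
    for k in range(len(xs) - 1):                            # (1.3.1)
        px += xs[k]
        py += ys[k]
        if px > py + tol:
            return False
    return True

def weakly_submajorizes(y, x, tol=1e-9):
    """True if x ≺_w y, i.e. condition (1.3.3) holds."""
    xs, ys = sorted(x, reverse=True), sorted(y, reverse=True)
    px = py = 0.0
    for a, b in zip(xs, ys):
        px += a; py += b
        if px > py + tol:
            return False
    return True

def weakly_supermajorizes(y, x, tol=1e-9):
    """True if x ≺^w y, i.e. condition (1.3.4) holds."""
    xs, ys = sorted(x), sorted(y)
    px = py = 0.0
    for a, b in zip(xs, ys):
        px += a; py += b
        if px < py - tol:
            return False
    return True

# (1, 1, 1) ≺ (3, 0, 0), but not conversely.
print(majorizes([3, 0, 0], [1, 1, 1]))
print(majorizes([1, 1, 1], [3, 0, 0]))
```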

The weak submajorization and the weak supermajorization are uniformly referred to as weak majorization. Relative to the weak majorization, majorization is also called strong majorization.

Remark 1.3.1. The condition (1.3.1) is equivalent to

max_{1≤i_1<⋯<i_k≤n} (x_{i_1} + ⋯ + x_{i_k}) ≤ max_{1≤i_1<⋯<i_k≤n} (y_{i_1} + ⋯ + y_{i_k}) for k = 1, 2, …, n − 1.

… m + r + 1. This is a contradiction with (1.3.30). (2) If 1 ≤ p ≤ m − 2, by (1.3.30) we have n − q ≤ n − r − 2. By using Lemma 1.3.2(iii) we obtain a^{(k+1)}_{n−r−1} ≤ a^{(k+1)}_{m−1}. It follows that

a^{(k+1)}_{n−q} ≤ a^{(k+1)}_{n−r−2} ≤ a^{(k+1)}_{n−r−1} ≤ a^{(k+1)}_{m−1}.

So the right side of (1.3.29) should include a^{(k+1)}_{m−1}. Therefore, p ≥ m − 1. This is a contradiction with 1 ≤ p ≤ m − 2. Thus p = m or p = m − 1. In a similar way as Corollary 1.3.2, we can prove the following.


Corollary 1.3.3. Let n ≥ 4, 2 ≤ k ≤ n − 2, 2 ≤ m ≤ n − k, 0 ≤ r ≤ k − 2. (k) (k+1) (i) If a(k) must be one of the following two cases: n ≤ am , then Sm m

(k+1) Sm = ∑ a(k+1) i i=1

or m−1

(k+1) Sm = ∑ a(k+1) + a(k+1) . n i i=1

(k+1) (k) (k) (ii) If a(k) n−r−1 ≤ am ≤ an−r , then Sm+r+1 must be one of the following two cases: m

r

i=1

i=0

m−1

r+1

i=1

i=0

(k+1) = ∑ a(k+1) Sm+r+1 + ∑ a(k+1) i n−i

or (k+1) = ∑ a(k+1) Sm+r+1 + ∑ a(k+1) i n−i .

(k) Corollary 1.3.4. Let n ≥ 4, 2 ≤ k ≤ n − 2, 1 ≤ m ≤ n − k, −1 ≤ r ≤ k − 2, and ∑−1 i=0 an−i = 0. If m

r

i=1

i=0

(k) (k) = ∑ a(k) Sm+r+1 i + ∑ an−i ,

then we have the following: (i) If m = 1, then r

(k+1) + ∑ a(k+1) Sr+2 = a(k+1) 1 n−i . i=0

(k+1) must be one of the following two cases: (ii) If 2 ≤ m ≤ n − k, then Sm+r+1 m

r

i=1

i=0

m−1

r+1

i=1

i=0

(k+1) = ∑ a(k+1) Sm+r+1 + ∑ a(k+1) i n−i

or (k+1) = ∑ a(k+1) Sm+r+1 + ∑ a(k+1) i n−i .

Theorem 1.3.1. For any n ≥ 2, we have a(2) ≺ a(1) = a.

(1.3.31)

26 | 1 Majorization Proof. It is clear that (1.3.31) holds if n = 2. Next we let n ≥ 3. Then we have a1 + a2 = S1(2) ≤ S1(1) = a1 , 2

Sn(2) = Sn(1) .

(2) (1) For any 2 ≤ m ≤ n − 1, we prove that Sm ≤ Sm by the following two cases: m (2) (2) (i) If Sm = ∑i=1 ai , then (2) (1) Sm − Sm =

am+1 − a1 ≤ 0. 2

m−1 (2) (2) (ii) If Sm = a(2) n + ∑i=1 ai , then (1) (2) = − Sm Sm

an − am ≤ 0. 2

So (1.3.31) holds. (k) Theorem 1.3.2. Let n ≥ 4, 2 ≤ k ≤ n − 2, 1 ≤ m ≤ n − k, −1 ≤ r ≤ k − 2, and ∑−1 i=0 an−i = 0. If r

m

{ S(k) = ∑ a(k) + ∑ a(k) { n−i , { { m+r+1 i=1 i i=0 m r { { (k+1) { {Sm+r+1 = ∑ a(k+1) + ∑ a(k+1) , i n−i { i=1 i=0

(1.3.32)

then (k+1) (k) . ≥ Sm+r+1 Sm+r+1

(1.3.33)

Proof. By a simple calculation we obtain (k+1) (k) = − Sm+r+1 Sm+r+1

k+m 1 (k) − ∑ ai ). (Sm+r+1 k+1 i=k−r

By using (1.3.32) we have (k) a(k) m+1 ≤ an−r .

It follows that am+1 + am+2 + ⋅ ⋅ ⋅ + ak+m ≤ an−r + an−r+1 + ⋅ ⋅ ⋅ + an−r+k−1 . So we have ak−r + ak−r+1 + ⋅ ⋅ ⋅ + ak+m ≤ an−r + an−r+1 + ⋅ ⋅ ⋅ + an+m .

(1.3.34)

1.3 Definition and basic properties of majorization

| 27

Note that an−r + an−r+1 + ⋅ ⋅ ⋅ + an+m ≤ an−r+j + an−r+1+j + ⋅ ⋅ ⋅ + an+m+j , { ak−r + ak−r+1 + ⋅ ⋅ ⋅ + ak+m ≤ an−r+j + an−r+1+j + ⋅ ⋅ ⋅ + an+m+j ,

0 ≤ j ≤ r,

r + 1 ≤ j ≤ k − 1.

Thus we can deduce m

r

i=1

i=0

(k) (k) = ∑ a(k) Sm+r+1 i + ∑ an−i =

k+m 1 n+m k−1 1 k−1 n+m ∑ ∑ aj+i = ∑ ∑ aj+i ≥ ∑ ai . k i=n−r j=0 k j=0 i=n−r i=k−r

This means that (1.3.33) holds. (k) Theorem 1.3.3. Let n ≥ 4, 2 ≤ k ≤ n − 2, 2 ≤ m ≤ n − k, −1 ≤ r ≤ k − 2, and ∑−1 i=0 an−i = 0. If m

r

{ { S(k) = ∑ a(k) + ∑ a(k) { n−i , { { m+r+1 i=1 i i=0 m−1 r+1 { { (k+1) { (k+1) { + ∑ a(k+1) {Sm+r+1 = ∑ ai n−i , i=1 i=0 {

(1.3.35)

then (k+1) (k) . ≥ Sm+r+1 Sm+r+1

(1.3.36)

Proof. Note that (k+1) (k) = − Sm+r+1 Sm+r+1

m−1 r 1 1 k+m−1 (k+1) ( ∑ a(k) + ∑ a(k) ) + a(k) ∑ a m − an−r−1 − i n−i k + 1 i=1 k + 1 i=k−r i i=0

=

m r n+m−1 1 (k) (∑ a(k) a + − ∑ ∑ ai ) n−i k + 1 i=1 i i=0 i=n−r−1

=

n+m−1 1 (k) − ∑ ai ). (Sm+r+1 k+1 i=n−r−1

By (1.3.35) we have (k) a(k) n−r−1 ≤ am .

It follows that an−r−1 + an−r + ⋅ ⋅ ⋅ + an−r+k−2 ≤ am + am+1 + ⋅ ⋅ ⋅ + am+k−1 . So we can deduce an−r−1 + an−r + ⋅ ⋅ ⋅ + an+m−1 ≤ ak−r−1 + ak−r + ⋅ ⋅ ⋅ + ak+m−1 .

(1.3.37)

28 | 1 Majorization Since an−r−1 + an−r + ⋅ ⋅ ⋅ + an+m−1 ≤ an−r+j + an−r+1+j + ⋅ ⋅ ⋅ + an+m+j , { ak−r−1 + ak−r + ⋅ ⋅ ⋅ + ak+m−1 ≤ an−r+j + an−r+1+j + ⋅ ⋅ ⋅ + an+m+j ,

0 ≤ j ≤ r,

r + 1 ≤ j ≤ k − 1,

we have (k) = Sm+r+1

n+m−1 1 k−1 n+m ∑ ∑ aj+i ≥ ∑ ai . k j=0 i=n−r i=n−r−1

This means that (1.3.36) holds. Theorem 1.3.4. For any n ≥ 2, 1 ≤ k ≤ n − 1, we have a(k+1) ≺ a(k) .

(1.3.38)

Proof. It is clear that (1.3.38) holds for any n ≥ 2, k = n − 1. If n ≥ 3, k = 1, we can deduce that (1.3.38) holds by using Theorem 1.3.1. Next we let n ≥ 4, 2 ≤ k ≤ n − 2. (k) For any 2 ≤ k ≤ n − 2, 1 ≤ m ≤ n − k, −1 ≤ r ≤ k − 2, let ∑−1 i=0 an−i = 0, and let m

r

i=1

i=0

(k) (k) = ∑ a(k) Sm+r+1 i + ∑ an−i .

Next we prove that (k+1) (k) ≥ Sm+r+1 Sm+r+1

holds by the following two cases. (i) If m = 1, −1 ≤ r ≤ k − 2, then by using Corollary 1.3.4(i) and Theorem 1.3.2 we obtain (k) (k+1) Sr+2 ≥ Sr+2 .

(ii) If 2 ≤ m ≤ n − k, and if −1 ≤ r ≤ k − 2, then by using Corollary 1.3.4(ii), Theorem 1.3.2, and Theorem 1.3.3 we obtain (k+1) (k) . ≥ Sm+r+1 Sm+r+1

Note that Sn(k) = Sn(k+1) , so (1.3.38) holds. Definition 1.3.3 ([54, pp. 29–30]). An n × k real matrix Q = (qij ) is called column stochastic if qij ≥ 0 for i = 1, 2, . . . , n, j = 1, 2, . . . .k, and all column sums of Q are equal to 1, that is, ∑i qij = 1, for j = 1, . . . , k. An n × n matrix Q = (qij ) is doubly stochastic if qij ≥ 0 for i, j = 1, . . . , n, and all row and column sums of Q are equal to 1, that is, ∑i qij = 1, j = 1, . . . , n; ∑j qij = 1, i = 1, . . . , n.


Theorem 1.3.5 ([109, pp. 27–28]). The necessary and sufficient condition for P to be a doubly stochastic matrix is the existence of permutation matrices G_1, G_2, …, G_l and positive real numbers a_1, a_2, …, a_l satisfying a_1 + a_2 + ⋯ + a_l = 1, such that P = a_1G_1 + a_2G_2 + ⋯ + a_lG_l.

Theorem 1.3.6. Let x, y ∈ ℝ^n.
(a) If x ≺ y, then there exist z^1, z^2, …, z^n ∈ ℝ^n such that x = z^n ≺ z^{n−1} ≺ ⋯ ≺ z^2 ≺ z^1 = y, and z^j and z^{j+1} have only two different components, j = 1, 2, …, n − 1.
(b) x ≺ y ⇔ there exists a doubly stochastic matrix Q such that x = yQ.

Example 1.3.2. Let x = (x_1, …, x_n) ∈ ℝ^n and x̄ = (1/n) ∑_{i=1}^n x_i. Then

(x̄, …, x̄) ≺ (x_1, …, x_n),  (1.3.39)

where x̄ appears n times.

Proof. Since (x̄, …, x̄) = (x_1, …, x_n)Q, where

Q is the n × n matrix all of whose entries equal 1/n  (1.3.40)

and is thus doubly stochastic, by Theorem 1.3.6(b), it follows that (1.3.39) holds. Of course, Example 1.3.2 can also be proved by Definition 1.3.1.

Example 1.3.3. Let x = (x_1, …, x_n) ∈ ℝ^n. Then

((x_1 + x_2)/2, (x_2 + x_3)/2, …, (x_{n−1} + x_n)/2, (x_n + x_1)/2) ≺ (x_1, …, x_n).  (1.3.41)

Proof. Since ((x_1 + x_2)/2, (x_2 + x_3)/2, …, (x_n + x_1)/2) = (x_1, …, x_n)P, where

P is the n × n circulant matrix with p_{jj} = p_{j+1,j} = 1/2 (indices taken mod n) and all other entries 0  (1.3.42)

and is thus doubly stochastic, by Theorem 1.3.6(b), it follows that (1.3.41) holds.
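The averaging in Example 1.3.3 can be checked numerically; the sample vector below is my own choice, and the inline majorizes helper simply restates Definition 1.3.1:

```python
def majorizes(y, x, tol=1e-9):
    """Partial-sum test of Definition 1.3.1: True if x ≺ y."""
    xs, ys = sorted(x, reverse=True), sorted(y, reverse=True)
    if abs(sum(xs) - sum(ys)) > tol:
        return False
    px = py = 0.0
    for a, b in zip(xs[:-1], ys[:-1]):
        px += a; py += b
        if px > py + tol:
            return False
    return True

# Cyclic pairwise averaging, i.e. multiplication by the circulant matrix P.
x = [5.0, 1.0, 2.0, 8.0]   # hypothetical sample
n = len(x)
averaged = [(x[j] + x[(j + 1) % n]) / 2 for j in range(n)]
print(averaged)            # the averaged vector is majorized by x
print(majorizes(x, averaged))
```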

Theorem 1.3.7 ([109, p. 12]). Let x, y ∈ ℝ^n. Then

∑_{i=1}^n x_{[i]} y_{(i)} ≤ ∑_{i=1}^n x_i y_i ≤ ∑_{i=1}^n x_{[i]} y_{[i]}.  (1.3.43)

Theorem 1.3.8 ([53, p. 445]). Let x, y ∈ ℝ^n. Then

∑_{i=1}^n x_i u_i ≤ ∑_{i=1}^n y_i u_i  (1.3.44)

holds for any u_1 ≥ ⋯ ≥ u_n if and only if

∑_{i=1}^k x_i ≤ ∑_{i=1}^k y_i for k = 1, 2, …, n − 1  (1.3.45)

and

∑_{i=1}^n x_i = ∑_{i=1}^n y_i.  (1.3.46)

Proof. If (1.3.44) holds whenever u_1 ≥ ⋯ ≥ u_n, then the choices u = (1, 1, …, 1) and u = (−1, −1, …, −1) yield (1.3.46), and the choices u = (1, …, 1, 0, …, 0), with k ones followed by n − k zeros, yield (1.3.45). Next, suppose (1.3.45) and (1.3.46). Then for any u_1 ≥ ⋯ ≥ u_n, by Abel's lemma [60, p. 63], we have

∑_{i=1}^n y_i u_i − ∑_{i=1}^n x_i u_i = ∑_{i=1}^n (y_i − x_i) u_i
= ∑_{i=1}^n (y_i − x_i) u_n + ∑_{k=1}^{n−1} [∑_{i=1}^k (y_i − x_i)] (u_k − u_{k+1}) ≥ 0.

The proof of Theorem 1.3.8 is completed.

Theorem 1.3.9 ([53, p. 445]). Let x, y ∈ ℝ^n. Then

∑_{i=1}^n x_i u_i ≤ ∑_{i=1}^n y_i u_i  (1.3.47)

holds for any u_1 ≥ ⋯ ≥ u_n ≥ 0 if and only if

∑_{i=1}^k x_i ≤ ∑_{i=1}^k y_i for k = 1, 2, …, n.  (1.3.48)

Proof. If (1.3.47) holds whenever u_1 ≥ ⋯ ≥ u_n ≥ 0, then the choices u = (1, …, 1, 0, …, 0), with k ones followed by n − k zeros, yield (1.3.48). Next, suppose (1.3.48). Then for any u_1 ≥ u_2 ≥ ⋯ ≥ u_n ≥ 0, let t_i = y_i − x_i. Then ∑_{i=1}^k t_i ≥ 0, k = 1, …, n, and then

∑_{i=1}^n y_i u_i − ∑_{i=1}^n x_i u_i = ∑_{i=1}^n t_i u_i
= t_1(u_1 − u_2) + (t_1 + t_2)(u_2 − u_3) + ⋯ + (t_1 + ⋯ + t_{n−1})(u_{n−1} − u_n) + (t_1 + ⋯ + t_n)u_n ≥ 0.

The proof of Theorem 1.3.9 is completed.

Remark 1.3.3. In Theorem 1.3.8, u ∈ ℝ^n, and in Theorem 1.3.9, u ∈ ℝ^n_+. Pay attention to this difference when using them.

From Theorem 1.3.8 and Theorem 1.3.9, it is not difficult to derive the following two theorems.

Theorem 1.3.10 ([109, p. 15]). Let x, y ∈ ℝ^n. Then
(a) x ≺ y ⇔ ∑_{i=1}^n x_{[i]} u_{[i]} ≤ ∑_{i=1}^n y_{[i]} u_{[i]}, ∀u ∈ ℝ^n;
(b) x ≺ y ⇔ ∑_{i=1}^n x_{(i)} u_{[i]} ≥ ∑_{i=1}^n y_{(i)} u_{[i]}, ∀u ∈ ℝ^n;
(c) x ≺ y ⇔ ∑_{i=1}^n x_{[i]} u_{(i)} ≥ ∑_{i=1}^n y_{[i]} u_{(i)}, ∀u ∈ ℝ^n.

Theorem 1.3.11 ([109, p. 14]). Let x, y ∈ ℝ^n. Then
(a) x ≺_w y ⇔ ∑_{i=1}^n x_{[i]} u_{[i]} ≤ ∑_{i=1}^n y_{[i]} u_{[i]}, ∀u ∈ ℝ^n_+;
(b) x ≺^w y ⇔ ∑_{i=1}^n x_{(i)} u_{[i]} ≥ ∑_{i=1}^n y_{(i)} u_{[i]}, ∀u ∈ ℝ^n_+.
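Theorem 1.3.10(a) can be spot-checked by random sampling; the concrete pair x ≺ y below is my own example:

```python
import random

# For x ≺ y, the decreasing rearrangements satisfy
# Σ x_[i] u_[i] <= Σ y_[i] u_[i] for every u ∈ ℝⁿ; we sample random u.
x = [2.0, 2.0, 2.0]   # x ≺ y
y = [4.0, 1.0, 1.0]

random.seed(1)
xs, ys = sorted(x, reverse=True), sorted(y, reverse=True)
ok = True
for _ in range(1000):
    u = sorted((random.uniform(-3, 3) for _ in range(3)), reverse=True)
    lhs = sum(a * b for a, b in zip(xs, u))
    rhs = sum(a * b for a, b in zip(ys, u))
    ok = ok and lhs <= rhs + 1e-9
print(ok)
```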

Example 1.3.4 ([145, p. 69]). Let x_i, λ_i ∈ ℝ_+, i = 1, 2, …, n, x_1 ≥ ⋯ ≥ x_n ≥ 0, λ_1 ≥ 1, λ_1 + λ_2 ≥ 2, λ_1 + λ_2 + λ_3 ≥ 3, …, λ_1 + λ_2 + ⋯ + λ_n ≥ n, α ≥ 1. Then

∑_{i=1}^n (λ_i x_i)^α ≥ ∑_{i=1}^n x_i^α.  (1.3.49)

Proof. For α > 1, the function t^α is increasing and convex on ℝ_+. Since x_1^α ≥ x_2^α ≥ ⋯ ≥ x_n^α ≥ 0 and, by convexity,

∑_{i=1}^k λ_i^α ≥ k((1/k) ∑_{i=1}^k λ_i)^α ≥ k((1/k)·k)^α = k, k = 1, …, n,

using Theorem 1.3.9 (with u_i = x_i^α), it follows that

∑_{i=1}^n λ_i^α x_i^α ≥ ∑_{i=1}^n 1·x_i^α,

that is, (1.3.49) holds.

Theorem 1.3.12 ([110, p. 193] [53]). Let x, y ∈ ℝ^n, x_1 ≥ ⋯ ≥ x_n, and ∑_{i=1}^n x_i = ∑_{i=1}^n y_i. If one of the following conditions holds, then x ≺ y:
(a) for some k, 1 ≤ k < n, x_i ≤ y_i, i = 1, …, k, x_i ≥ y_i for i = k + 1, …, n;
(b) y_i − x_i is decreasing in i = 1, …, n;
(c) x_i > 0 for all i and y_i/x_i is decreasing in i = 1, …, n.
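Condition (b) of Theorem 1.3.12 is easy to exercise numerically; the vectors below are my own sample, and the majorizes helper restates Definition 1.3.1:

```python
def majorizes(y, x, tol=1e-9):
    """Partial-sum test of Definition 1.3.1: True if x ≺ y."""
    xs, ys = sorted(x, reverse=True), sorted(y, reverse=True)
    if abs(sum(xs) - sum(ys)) > tol:
        return False
    px = py = 0.0
    for a, b in zip(xs[:-1], ys[:-1]):
        px += a; py += b
        if px > py + tol:
            return False
    return True

x = [5.0, 4.0, 3.0, 2.0]           # decreasing
d = [3.0, 1.0, -1.0, -3.0]         # y - x: decreasing and summing to 0
y = [a + b for a, b in zip(x, d)]  # equal totals, so condition (b) applies
print(y, majorizes(y, x))
```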

Theorem 1.3.13 ([110, p. 194]). Let x, y ∈ ℝ^n, x_1 ≥ ⋯ ≥ x_n, and ∑_{i=1}^n x_i ≤ ∑_{i=1}^n y_i. If any condition in Theorem 1.3.12 holds, then x ≺_w y.

Example 1.3.5. If m ≥ 3n, then

(n, 0, …, 0) ≺ ((m − n)/2, …, (m − n)/2, n − (1/2)(m + 1)(n + 1)(m − n)),  (1.3.50)

where 0 appears (m + 1)(n + 1) times on the left and (m − n)/2 appears (m + 1)(n + 1) times on the right.

Proof. Since m ≥ 3n, we have n ≤ (m − n)/2 and n − (1/2)(m + 1)(n + 1)(m − n) ≤ 0. From Theorem 1.3.12(a), it follows that (1.3.50) holds.

Theorem 1.3.14 ([109, p. 5]). Let x, y ∈ ℝ^n_+, u, v ∈ ℝ^m.
(a) If x ≺_w y, u ≺_w v, then (x, u) ≺_w (y, v);
(b) if x ≺^w y, u ≺^w v, then (x, u) ≺^w (y, v);
(c) if x ≺ y, u ≺ v, then (x, u) ≺ (y, v).

Theorem 1.3.15 ([109, p. 5]). Let u ∈ ℝ^m, y ∈ ℝ^n_+, 1 ≤ m < n. Then there exists a v ∈ ℝ^{n−m} such that (u, v) ≺ y if and only if u ≺_w (y_{[1]}, …, y_{[m]}) and u ≺^w (y_{(1)}, …, y_{(m)}).


Corollary 1.3.5 ([109, p. 10]). Let y ∈ ℝ^n, y_{[1]} ≥ α ≥ y_{[n]}, β = ∑_{i=1}^n y_i. Then

(α, (β − α)/(n − 1), …, (β − α)/(n − 1)) ≺ y.

Theorem 1.3.16 ([109, p. 10]). Let x, y, u, v ∈ ℝ^n_+.
(a) If x ≺_w y, u ≺_w v, then x + u ≺_w y↓ + v↓;
(b) if x ≺^w y, u ≺^w v, then x + u ≺^w y↓ + v↓;
(c) if x ≺ y, u ≺ v, then x + u ≺ y↓ + v↓.

Remark 1.3.4. Let x, y, u ∈ ℝ^n_+. By Theorem 1.3.16(c), we know that if x ≺ y, then x + u ≺ y↓ + u↓. But in general x + u ≺ y + u does not hold. For example, x = (1, 1, 1) ≺ y = (3, 0, 0) and u = (0, 3, 1), but x + u = (1, 4, 2) ≺ y + u = (3, 3, 1) does not hold.

Theorem 1.3.17 ([109, p. 11]). Let x, y ∈ ℝ^n_+. Then x↓ + y↑ ≺ x + y ≺ x↓ + y↓.

Theorem 1.3.18 ([109, p. 11]). Let x^{(j)} ≺ y ∈ ℝ^n, j = 1, …, m, α_j ≥ 0, and ∑_{j=1}^m α_j = 1. Then ∑_{j=1}^m α_j x^{(j)} ≺ y.

The condition of weak majorization is looser than that of strong majorization, so weak majorization is more common and easier to obtain than strong majorization. However, the inequalities obtained by using strong majorization are often stronger than those obtained by using weak majorization. Therefore, modifying or extending a weak majorization to a strong majorization is very meaningful, and in doing so one can often achieve the purpose of strengthening or refining existing inequalities.

Theorem 1.3.19 ([109, p. 10]). Let x, y ∈ ℝ^n.
(a) If x ≺_w y, then (x, x_{n+1}) ≺ (y, y_{n+1}), where

x_{n+1} = min{x_1, …, x_n, y_1, …, y_n}, y_{n+1} = ∑_{i=1}^{n+1} x_i − ∑_{i=1}^n y_i.

(b) If x ≺^w y, then (x_0, x) ≺ (y_0, y), where

x_0 = max{x_1, …, x_n, y_1, …, y_n}, y_0 = ∑_{i=0}^n x_i − ∑_{i=1}^n y_i.

(c) If x ≺_w y, then

(x, 0, 0) ≺ (y, ∑_{i=1}^n x_i, −∑_{i=1}^n y_i).

Theorem 1.3.20 ([109, p. 7]). Let x, y ∈ ℝ^n, y_1 ≥ ⋯ ≥ y_n, and let ỹ = y_n − (∑_{i=1}^n y_i − ∑_{i=1}^n x_i). If x ≺_w y, then (x_1, …, x_n) ≺ (y_1, …, y_{n−1}, ỹ).

Theorem 1.3.21 ([54, p. 177]). If x ≺_w y, where x ∈ ℝ^n_+, y ∈ ℝ^n, and δ = ∑_{i=1}^n (y_i − x_i), then for any integer k,

(x, δ/k, …, δ/k) ≺ (y, 0, …, 0),

where δ/k appears k times on the left and 0 appears k times on the right.

In reference [109, p. 7], the proof of the case k = n is given.

Remark 1.3.5. There is generally no number c such that (x, 0) ≺ (y, c).

Remark 1.3.6. There is generally no number u such that (x, u) ≺ (y, 0, …, 0).

Example 1.3.6 ([129]). Let x = (x_1, …, x_n) ∈ ℝ^n_{++}, n ≥ 2, and ∏_{i=1}^n x_i ≥ 1. Then

(1, …, 1) ≺_w (x_1, …, x_n).  (1.3.51)

Proof. Since (1, …, 1) ≤ (x̄, …, x̄) ≺ (x_1, …, x_n), where x̄ = (1/n) ∑_{i=1}^n x_i ≥ (∏_{i=1}^n x_i)^{1/n} ≥ 1 by the AM–GM inequality, (1.3.51) holds.

For positive numbers x_1, x_2, x_3, from (1.3.51), we have

(1, 1, 1) ≺_w ((x_2 + x_3)/(x_3 + x_1), (x_3 + x_1)/(x_1 + x_2), (x_1 + x_2)/(x_2 + x_3)),  (1.3.52)

(1, 1, 1) ≺_w (x_1/√(x_2x_3), x_2/√(x_3x_1), x_3/√(x_1x_2)),  (1.3.53)

and

(1, 1, 1) ≺_w (√(x_2x_3)/x_1, √(x_3x_1)/x_2, √(x_1x_2)/x_3).  (1.3.54)

The weak majorizations (1.3.53) and (1.3.54) are derived from reference [75].

Let x = (x_1, …, x_n) ∈ ℝ^n_{++}, n ≥ 2, and ∏_{i=1}^n x_i ≥ 1. Using Theorem 1.3.21 and Theorem 1.3.19, (1.3.51) can be strengthened as follows:

(1, …, 1, x̄ − 1, …, x̄ − 1) ≺ (x_1, …, x_n, 0, …, 0),  (1.3.55)

where 1, x̄ − 1, and 0 each appear n times, and

(1, …, 1, a) ≺ (x_1, …, x_n, x_{n+1}),  (1.3.56)

where 1 appears n times, a = min{x_1, …, x_n, 1}, and x_{n+1} = n + a − ∑_{i=1}^n x_i.
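The refinements (1.3.55) and (1.3.56) can be verified for a concrete x; the sample below (product 3 ≥ 1) is my own choice, and majorizes restates Definition 1.3.1:

```python
def majorizes(y, x, tol=1e-9):
    """Partial-sum test of Definition 1.3.1: True if x ≺ y."""
    xs, ys = sorted(x, reverse=True), sorted(y, reverse=True)
    if abs(sum(xs) - sum(ys)) > tol:
        return False
    px = py = 0.0
    for a, b in zip(xs[:-1], ys[:-1]):
        px += a; py += b
        if px > py + tol:
            return False
    return True

x = [0.5, 2.0, 3.0]          # product 3 >= 1
n = len(x)
xbar = sum(x) / n

# (1.3.55): (1,...,1, x̄-1,...,x̄-1) ≺ (x_1,...,x_n, 0,...,0)
left = [1.0] * n + [xbar - 1.0] * n
right = x + [0.0] * n
print(majorizes(right, left))

# (1.3.56): (1,...,1, a) ≺ (x_1,...,x_n, x_{n+1})
a = min(x + [1.0])
xn1 = n + a - sum(x)
print(majorizes(x + [xn1], [1.0] * n + [a]))
```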


1.4 Some common majorization

In the study of majorization theory, it is an important and basic task to find and establish majorizing relations between vectors. Because a majorizing relation profoundly describes the intrinsic connection between vectors, combining a new majorizing relation with the Schur-convex functions, which will be introduced in the next chapter, often produces many interesting inequalities of all kinds. Here we have collected some important majorizing relations between vectors; familiarity with them is necessary.

1. Let a ≤ x_i ≤ b, i = 1, …, n, n ≥ 2, ∑_{i=1}^n x_i = s. Then

x = (x_1, …, x_n) ≺ (b, …, b, c, a, …, a) = y,  (1.4.1)

where b appears n − 1 − u times, a appears u times, u = [(nb − s)/(b − a)], and c = s − (n − 1 − u)b − ua.
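Before the proof, the construction of u and c can be illustrated numerically; the sample data are my own choice, and majorizes restates Definition 1.3.1:

```python
import math

def majorizes(y, x, tol=1e-9):
    """Partial-sum test of Definition 1.3.1: True if x ≺ y."""
    xs, ys = sorted(x, reverse=True), sorted(y, reverse=True)
    if abs(sum(xs) - sum(ys)) > tol:
        return False
    px = py = 0.0
    for p, q in zip(xs[:-1], ys[:-1]):
        px += p; py += q
        if px > py + tol:
            return False
    return True

a, b = 0.0, 3.0
x = [0.5, 1.5, 2.5, 1.0, 2.0]          # all entries in [a, b]
n, s = len(x), sum(x)
u = math.floor((n * b - s) / (b - a))  # u = [(nb - s)/(b - a)]
c = s - (n - 1 - u) * b - u * a        # forces equal totals
y = [b] * (n - 1 - u) + [c] + [a] * u
print(a <= c <= b, majorizes(y, x))
```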

Proof. [128] According to the conditions of the problem, we have ∑ni=1 xi = ∑ni=1 yi , so to prove (1.4.1), we only need to prove ∑ki=1 x[i] ≤ ∑ki=1 y[i] , k = 1, . . . , n − 1. It is easy to verify that b ≥ c ≥ a. From a ≤ xi ≤ b (i = 1, 2, . . . , n), it follows that k

k

i=1

i=1

∑ x[i] ≤ kb = ∑ y[i] ,

k = 1, 2, . . . , n − 1 − u

and n

n

i=k+1

i=k+1

∑ x[i] ≥ (n − k)a = ∑ y[i] ,

k = n − u, n − u + 1, . . . , n;

therefore (1.4.1) holds. 2. Let x = (x1 , . . . , xn ) ∈ ℝn++ , n ≥ 2, ∑ni=1 xi = s > 0, c ≥ s. Then (

c − xn x x c − x1 ,..., ) ≺ ( 1 , . . . , 1 ). nc − s nc − s s s

Proof. [73] Without loss of generality, we may assume that x1 ≥ ⋅ ⋅ ⋅ ≥ xn . Then x1 x ≥ ⋅⋅⋅ ≥ 1 s s and c − xn c − x1 ≤ ⋅⋅⋅ ≤ . nc − s nc − s Obviously, n

∑ i=1

n x c − xi = ∑ i = 1, nc − s i=1 s

(1.4.2)

36 | 1 Majorization so to prove (1.4.2), we only need to prove k

∑ i=1

k xi c − xi ≥∑ , s i=1 nc − s

k = 1, 2, . . . , n − 1.

The above inequalities are equivalent to k

k

i=1

i=1

(nc − s) ∑ xi + s ∑ xn−i+1 ≥ kcs.

(1.4.3)

Firstly, we prove that k

k

i=1

i=1

(n − 1) ∑ xi + ∑ xn−i+1 ≥ ks.

(1.4.4)

In fact, the left side of inequality (1.4.4) can be written k

k

k

i=1

j=2

i=1

(n − k)x1 + ∑ xn−i+1 + ∑[(n − k)xj + ∑ xi ]. From x1 ≥ ⋅ ⋅ ⋅ ≥ xn , it follows that k

n−k

k

n

i=1

i=1

i=1

i=1

(n − k)x1 + ∑ xn−i+1 ≥ ∑ xi + ∑ xn−i+1 = ∑ xi = s and k

n

k

n

i=1

i=k+1

i=1

i=1

(n − k)xj + ∑ xi ≥ ∑ xi + ∑ xi = ∑ xi = s,

j = 2, 3, . . . , k.

Thus (1.4.4) holds. Moreover, it is obvious that k

k

i=1

i=1

∑ xi − ∑ xn−i+1 ≥ 0.

(1.4.5)

And then (1.4.4) × c + (1.4.5) × (c − s) yields (1.4.3); therefore (1.4.2) holds. 3. Let x = (x1 , . . . , xn ) ∈ ℝn++ , n ≥ 2, ∑ni=1 xi = s > 0, c ≥ 0. Then (

c + xn x x c + x1 ,..., ) ≺ ( 1 , . . . , 1 ). nc + s nc + s s s

(1.4.6)

Remark 1.4.1. Niezgoda gave an extension of (1.4.2) and (1.4.6) from the classical majorization ordering to a group-induced cone ordering induced by a compact group of orthogonal operators acting on a linear space. The interested readers can refer to the literature [65].


4. Let x = (x1 , . . . , xn ) ∈ ℝn++ , n ≥ 2, 0 < r ≤ s. Then (

x1r ,..., n ∑j=1 xjr

xnr ) n ∑j=1 xjr

≺(

x1s ,..., n ∑j=1 xjs

xns ). n ∑j=1 xjs

5. [90] Let a = (a_1, …, a_n) ∈ ℝ^n_{++}, ∑_{i=1}^n a_i = s_n, and

x_i = a_i + ((n − m − 1)/(n − 1))(s_n − a_i) ≥ 0, i = 1, …, n.  (1.4.7)

Then

x = (x_1, …, x_n) ≺ (s_n − ma_1, …, s_n − ma_n) = s_n − ma.  (1.4.8)

6. [53] Let x = (x1 , . . . , xn ) ∈ ℝn++ , n ≥ 2, and ∑ni=1 xi = (n − 2)c. Then (

c + xn c + x1 ,..., ) ≺ (c − x1 , . . . , c − xn ). n−1 n−1

(1.4.9)

7. [53] Let x = (x1 , . . . , xn ) ∈ ℝn++ , n ≥ 2, and ∑ni=1 xi = nc − 2s, c ≥ s. Then (

(c + xn )s (c + x1 )s ,..., ) ≺ (c − x1 , . . . , c − xn ). nc − s nc − s

(1.4.10)

8. [108] Let x = (x_1, …, x_n) ∈ ℝ^n. If x_{n−1} < 0, x_n ≥ 0, then

(x_1, …, x_{n−2}, x_{n−1} + x_n, 0) ≺ (x_1, …, x_n).  (1.4.11)

Proof. Let θ = x_{n−1}/(x_{n−1} − x_n), and let P be the n × n matrix that agrees with the identity matrix in the first n − 2 rows and columns and whose lower-right 2 × 2 block is

( θ  1 − θ )
( 1 − θ  θ ).

It is easy to see that P is a doubly stochastic matrix and (x_1, …, x_{n−2}, x_{n−1} + x_n, 0) = xP; therefore (1.4.11) holds.

9. [108] Let x_{n−1} < 0, x_n ≥ 0, ε > 0, x_{n−1} + ε ≤ 0, or x_{n−1} ≤ 0, x_n > 0, ε > 0, x_n − ε ≥ 0. Then

(x_1, …, x_{n−2}, x_{n−1} + ε, x_n − ε) ≺ (x_1, …, x_n).  (1.4.12)
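The 2 × 2 averaging block used in the proof of item 8 can be checked directly; the sample vector below is my own choice, and majorizes restates Definition 1.3.1:

```python
def majorizes(y, x, tol=1e-9):
    """Partial-sum test of Definition 1.3.1: True if x ≺ y."""
    xs, ys = sorted(x, reverse=True), sorted(y, reverse=True)
    if abs(sum(xs) - sum(ys)) > tol:
        return False
    px = py = 0.0
    for p, q in zip(xs[:-1], ys[:-1]):
        px += p; py += q
        if px > py + tol:
            return False
    return True

x = [4.0, 2.0, -1.5, 0.5]            # x_{n-1} = -1.5 < 0, x_n = 0.5 >= 0
th = x[-2] / (x[-2] - x[-1])         # θ ∈ [0, 1]
# Applying the 2x2 block [[θ, 1-θ], [1-θ, θ]] to the last two coordinates
# sends (x_{n-1}, x_n) to (x_{n-1} + x_n, 0).
z = [th * x[-2] + (1 - th) * x[-1],
     (1 - th) * x[-2] + th * x[-1]]
print(0 <= th <= 1, z, majorizes(x, x[:-2] + z))
```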

(1.4.12)

10. Let x > 0. Then x x x x (1 + , . . . , 1 + ) ≺ (1 + , 1). ,...,1 + ⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟ ⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟ n n n−1 n−1 n

n−1

(1.4.13)

38 | 1 Majorization 11. If x ≠ n − 1, then x+1 x+1 x x ( , 1). ,..., ,..., ) ≺≺ ( ⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟ ⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟ n n n−1 n−1 n

(1.4.14)

n−1

12. If xi > 0 or −1 < xi < 0, i = 1, . . . , n, then n

1, . . . , 1). (1 + x1 , . . . , 1 + xn ) ≺ (1 + ∑ xi , ⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟ i=1

n−1

(1.4.15)

13. We have 1 1 1 1 ( ,..., ) ≺ ( ,..., , 0) ≺ ⋅ ⋅ ⋅ n n n−1 n−1 1 1 ≺ ( , , 0, . . . , 0) ≺ (1, 0, . . . , 0). 2 2

(1.4.16)

14. [53] Let x = (x1 , . . . , xn ) ∈ ℝn+ . Then ≺ (x[1] , . . . , x[k] , ∑nj=k+1 x[j] , 0, . . . , 0) ≺ (∑ni=1 xi , 0, . . . , 0), (x1 , . . . , xn ) { ≺ (∑kj=1 x[j] , x[k+1] , . . . , x[n] , 0, . . . , 0) ≺ (∑ni=1 xi , 0, . . . , 0).

(1.4.17)

We have (x, . . . , x) ≺ (x[1] , . . . , x[k] , x,̇ . . . , x)̇ ≺ (x1 , . . . , xn )

(1.4.18)

(x, . . . , x) ≺ (x,̂ . . . , x,̂ x[k+1] , . . . , x[n] ) ≺ (x1 , . . . , xn ),

(1.4.19)

and

where ẋ = (1/(n − k)) ∑_{i=k+1}^n x_{[i]}, x̂ = (1/k) ∑_{i=1}^k x_{[i]}, and x̄ = (1/n) ∑_{i=1}^n x_{[i]}.

15. [53] If x_1 ≥ y_1 ≥ x_2 ≥ ⋯ ≥ y_{n−1} ≥ x_n, then

(y_1, …, y_{n−1}, ∑_{j=1}^n x_j − ∑_{j=1}^{n−1} y_j) ≺ (x_1, …, x_n)  (1.4.20)

and

(x_2, …, x_n) ≺_w (y_1, …, y_{n−1}) ≺_w (x_1, …, x_{n−1}).  (1.4.21)

16. [53] If x1 ≥ ⋅ ⋅ ⋅ ≥ x2n−1 > x2n > 0, then 2n−1

( ∑ (−1)i−1 xi , x2 , x4 , . . . , x2n−2 ) ≺ (x1 , x3 , . . . , x2n−1 ) i=1

(1.4.22)


and 2n

(∑(−1)i−1 xi , x2 , x4 , . . . , x2n ) ≺ (x1 , x3 , . . . , x2n−1 , 0). i=1

17. [53] If m ≥ l and α =

l m

(1.4.23)

≤ 1, then

. . . , c, 0, . . . , 0). (αc, . . . , αc, 0, . . . , 0) ≺ (c, ⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟ ⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟ m

(1.4.24)

l

18. [53] We have (

x x1 + c x +c x ,..., n 1 ) ≺ ( n 1 , . . . , n n ), ∑ni=1 xi + nc ∑i=1 xn + nc ∑i=1 xi ∑i=1 xi

c ≥ 0.

(1.4.25)

19. [53] Let x1 ≥ ⋅ ⋅ ⋅ ≥ xl ≥ a > xl+1 ≥ ⋅ ⋅ ⋅ ≥ xn , y1 ≥ ⋅ ⋅ ⋅ ≥ ym ≥ a > ym+1 ≥ ⋅ ⋅ ⋅ ≥ yn . If l ≥ m, then (x1 , . . . , xl ) ≺w (y1 , . . . , ym , ⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟ a, . . . , a)

(1.4.26)

(a, . . . , a, xl+1 , . . . , xn ) ≺w (ym+1 , . . . , yn ). ⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟

(1.4.27)

(x1 , . . . , xl , ⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟ a, . . . , a) ≺w (y1 , . . . , ym )

(1.4.28)

(xl+1 , . . . , xn ) ≺w (a, . . . , a, ym+1 , . . . , yn ). ⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟

(1.4.29)

l−m

and l−m

If l < m, then m−l

and m−l

20. [77] Let x > 0, λ > 0. Then (

x+λ x+λ x x λ λ ,..., ) ≺ ( ,..., , ,..., ). ⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟ n n k k ⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟ n−k n−k k

(1.4.30)

n−k

21. Let x ≠ 1, x ≥ 0. Then . . . , 1). (x, . . . , x ) ≺≺ ((n + 1)x − n, 1, ⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟ ⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟ n+1

22. [89] Let a ≤ b, u(t) = tb + (1 − t)a, v(t) = ta + (1 − t)b, (

(1.4.31)

n

1 2

≤ t2 ≤ t1 ≤ 1. Then

a+b a+b , ) ≺ (u(t2 ), v(t2 )) ≺ (u(t1 ), v(t1 )) ≺ (a, b). 2 2

(1.4.32)

40 | 1 Majorization 23. If xi > 0, i = 1, . . . , n, then for any constant c, 0 < c < (

1 n

∑ni=1 xi , we have

x x −c x −c x1 ,..., nn ) ≺ ( n 1 ,..., n n ). ∑ni=1 xi ∑i=1 xi ∑i=1 (xi − c) ∑i=1 (xi − c)

(1.4.33)

24. [53] Let m = min{xi }, M = max{xi }. Then (m,

∑nj=1 xj − m − M n−2

,...,

∑nj=1 xj − m − M n−2

) ≺ (x1 , . . . , xn ).

(1.4.34)

25. [53] If x[n] ≤ x[n−1] − d, then (x1 , . . . , xn ) ≺ (x[n] , x[n] + d, . . . , x[n] + d, M),

(1.4.35)

∑ni=1 xi

= x[n] + (n − 2)(x[n] + d) + M. where M is determined by 26. [53] If c ≥ 1 and x[1] ≥ cx[2] , x[n] ≥ 0, then (x1 , . . . , xn ) ≺ (x[n] , x[n] + d, . . . , x[n] + d, M)

(1.4.36)

and (x1 , . . . , xn ) ≺ (x[1] ,

x[n] x[n] , θ, ⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟ 0, . . . , 0), ,..., ⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟ c c n−l−2

(1.4.37)

l

x[1] c

x [ c[1] ]

∑ni=1 xi

where 0 ≤ θ < = x[1] + and + θ. 27. [53] If b ≥ 0 and x[1] ≥ x[2] + b, x[n] ≥ 0, then (x1 , . . . , xn ) ≺ (x[1] , ⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟ x[1] − b, . . . , x[1] − b, θ, ⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟ 0, . . . , 0), n−l−2

l

(1.4.38)

where 0 ≤ θ < x[1] − b and ∑ni=1 xi = x[1] + l(x[1] − b) + θ. 28. [53] If x[n] ≤ cx[n−1] , then (x1 , . . . , xn ) ≺ (x[n] ,

x[n] x[n] ,..., , M), c c

(1.4.39)

x

+ M determines M. where ∑ni=1 xi = x[n] + (n − 2) [n] c 29. [53] If xi ≥ m, i = 1, . . . , n, and ∑ni=1 xi = s, then (x1 , . . . , xn ) ≺ (m, . . . , m, s − (n − 1)m). If xi ≤ M, i = 1, . . . , n, and

∑ni=1 xi

(1.4.40)

= s, then

(x1 , . . . , xn ) ≺ (

s−M s−M ,..., , M). n−1 n−1

(1.4.41)

30. We have 2n, . . . , 2n) 4, . . . , 4, . . . , ⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟ (2, . . . , 2, ⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟ ⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟ n+1

n+1

n+1

≺ (1, . . . , 1, ⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟ 3, . . . , 3, . . . , 2n + 1, . . . , 2n + 1). ⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟ ⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟ n

n

n

(1.4.42)

1.4 Some common majorization | 41

Proof. [73] Using mathematical induction, when n = 2, (1.4.42) obviously holds. Suppose when n = k, (1.4.42) holds, namely, 2n, . . . , 2n) 4, . . . , 4, . . . , ⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟ x = (2, . . . , 2, ⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟ ⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟ k+1

k+1

k+1

+ 1, . . . , 2n + 1) = y. 3, . . . , 3, . . . , 2n ≺ (1, . . . , 1, ⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟ ⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟ ⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟ k

k

k

Now we consider the case n = k + 1. From Theorem 1.3.12(a), we know that u = (2, 4, . . . , 2k, 2k + 2, . . . , 2k + 2) ⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟ k+2

≺ (1, 3, . . . , 2k + 1, 2k + 3, . . . , 2k + 3 ) = v. ⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟ k+1

By Theorem 1.3.14(c), we have (x, u) ≺ (y, v), i. e., 2n, . . . , 2n) 4, . . . , 4, . . . , ⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟ (2, . . . , 2, ⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟ ⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟ k+2

k+2

k+2

+ 1, . . . , 2n + 1), 3, . . . , 3, . . . , 2n ≺ (1, . . . , 1, ⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟ ⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟ ⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟ k+1

k+1

k+1

such that when n = k + 1, (1.4.42) holds, so (1.4.42) is true for all n. 31. [55] If αi > 0, i = 1, . . . , n, β1 ≥ β2 ≥ ⋅ ⋅ ⋅ ≥ βn > 0, and

β1 α1

≤ ⋅⋅⋅ ≤

βn , αn

then

(b1 , . . . , bn ) ≺ (a1 , . . . , an ), where ai =

αi , ∑nj=1 αj

bi =

βi , ∑nj=1 βj

(1.4.43)

i = 1, . . . , n.

32. [164] Let x = (x1 , . . . , xn ) ∈ ℝn+ , ∑ni=1 xi = s, and xn+i = xi , i = 1, 2, . . . , n. If s = 1, then k k k k k ( , . . . , ) ≺ (∑ xi , ∑ xi+1 , . . . , ∑ xi+n−1 ) n n i=1 i=1 i=1

(1.4.44)

≺ (kx1 , . . . , kxn ) ≺ (k, . . . , 0). If s ≤ 1, then k k k k k ( , . . . , ) ≺w (∑ xi , ∑ xi+1 , . . . , ∑ xi+n−1 ) n n i=1 i=1 i=1

(1.4.45)

k k k k k ( , . . . , ) ≺w (∑ xi , ∑ xi+1 , . . . , ∑ xi+n−1 ) n n i=1 i=1 i=1

(1.4.46)

≺w (kx1 , . . . , kxn ) ≺w (k, . . . , 0).

If s ≥ 1, then

≺w (kx1 , . . . , kxn ) ≺w (k, . . . , 0).

42 | 1 Majorization 33. [152] Let x ∈ [0, 1], β ≥ 1, αi > 0, i = 1, 2, . . . , n, n ∈ ℕ. Then n

n

i=1

i=1

(β + (∑ αi )x − 1, α1 , . . . , αn ) ≺ (β + ∑ αi − 1, α1 x, . . . , αn x)

(1.4.47)

and n

0, . . . , 0). (β, α1 x, . . . , αn x) ≺ (β + (∑ αi )x − 1, ⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟ n

i=1

(1.4.48)

Proof. Let n

u = (u1 , . . . , un , un+1 ) = (β + (∑ αi )x − 1, α1 , . . . , αn ) i=1

and n

v = (v1 , . . . , vn , vn+1 ) = (β + ∑ αi − 1, α1 x, . . . , αn x). i=1

n+1 It is clear that ∑n+1 i=1 ui = ∑i=1 vi . Without loss of generality, we may assume that α1 ≥ α2 ≥ ⋅ ⋅ ⋅ ≥ αn . So v1 ≥ ⋅ ⋅ ⋅ ≥ vn+1 . The following discussion is divided into two cases. Case 1. We have β + (∑ni=1 αi )x − 1 ≥ α1 . Note that x ∈ [0, 1] and αi > 0, i = 1, . . . , n. We have n

n

i=1

i=1

u1 = β + (∑ αi )x − 1 ≤ β + ∑ αi − 1 = v1 and ui = αi−1 ≥ αi−1 x = vi ,

i = 2, . . . , n + 1.

Hence from Theorem 1.3.12(a), it follows that u ≺ v. Case 2. We have β + (∑ni=1 αi )x − 1 < α1 . Let u[1] ≥ ⋅ ⋅ ⋅ ≥ u[n+1] denote the components of u in decreasing order. There exist k ∈ {2, 3, . . . , n} such that n

α1 ≥ ⋅ ⋅ ⋅ ≥ αk−1 ≥ β + (∑ αi )x − 1 ≥ αk+1 ≥ ⋅ ⋅ ⋅ ≥ αn . i=1

Note that β − 1 ≥ 0, x ∈ [0, 1], and αi > 0. If 1 ≤ m ≤ k − 1, then m

m

n

m

i=1

i=1

i=1

i=1

∑ u[i] = ∑ αi ≤ β + ∑ αi − 1 ≤ ∑ vi .

1.4 Some common majorization |

43

If n ≥ m > k − 1, then m

n

i=1

i=1

k−1

m

i=1

i=k+1

∑ u[i] = β + (∑ αi )x − 1 + ∑ αi + ∑ αi m−1

m

(If m = k, let ∑ αi = 0.) i=k+1

m

k−1

n

= β + (( ∑ αi )x + αm x + ( ∑ αi )x) − 1 + ∑ αi + ∑ αi i=1

i=1

i=m+1

m−1

i=k+1

k−1

m

n

i=1

i=k+1

i=m+1

= β + ( ∑ αi )x − 1 + ( ∑ αi + αm x + ∑ αi + ( ∑ αi )x) i=1

m−1

n

i=1

i=1

≤ β + ( ∑ αi )x − 1 + ∑ αi m

= ∑ vi . i=1

Hence from Definition 1.3.1(a), it follows that u ≺ v. Let w = (w1 , . . . , wn , wn+1 ) = (β − 1, α1 x, . . . , αn x) and n

0, . . . , 0). z = (z1 , . . . , zn , zn+1 ) = (β + (∑ αi )x − 1, ⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟ i=1

n

n+1 It is clear that ∑n+1 i=1 wi = ∑i=1 zi . The following discussion is divided into two cases. Case 1. We have β − 1 ≥ α1 x. Note that x ∈ [0, 1] and αi > 0, i = 1, . . . , n. We have n

w1 = β − 1 ≤ β + (∑ αi )x − 1 = z1 i=1

and wi = αi−1 x ≥ 0 = zi ,

i = 2, . . . , n + 1.

Hence from Theorem 1.3.12(a), it follows that w ≺ z. Case 2. We have β − 1 < α1 x. Let w[1] ≥ ⋅ ⋅ ⋅ ≥ w[n+1] denote the components of w in decreasing order. There exist k ∈ k = 2, . . . , n such that α1 x ≥ ⋅ ⋅ ⋅ ≥ αk−1 x ≥ β − 1 ≥ αk+1 x ≥ ⋅ ⋅ ⋅ ≥ αn x. Now note that β − 1 ≥ 0, x ∈ [0, 1], and αi > 0. We have n

w[1] = α1 x ≤ β + (∑ αi )x − 1 = z1 , i=1

44 | 1 Majorization w[i] = αi x ≥ 0 = zi ,

i = 2, . . . , k − 1,

w[k] = β − 1 ≥ 0 = zk ,

and w[i] = αi−1 x ≥ 0 = zi ,

i = k + 1, . . . , n + 1.

Hence from Theorem 1.3.12(a), it follows that w ≺ z. 34. [53] Suppose that m ≤ xi ≤ M, i = 1, . . . , n. Then there exist a unique θ ∈ [m, M) and a unique integer l ∈ {0, 1, . . . , n} such that n

∑ xi = (n − l − 1)m + θ + lM, i=1

with l and θ determined as follows: m, . . . , m). (x1 , . . . , xn ) ≺ (M, . . . , M , θ, ⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟ ⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟ n−l−1

l

(1.4.49)

Note that because θ = ∑ni=1 xi − (n − l − 1)m − lM ∈ [m, M), ∑ni=1 xi − nm ∑n x − nm − 1 ≤ l < i=1 i , M−m M−m and this determines l. 35. [53] If b ≥ 0, c ≥ 1, and x[1] ≥ cx[2] , x[1] ≥ x[2] + b, x[n] ≥ 0, then (x1 , . . . , xn ) ≺ (x[1] , z, . . . , z, θ, 0, . . . , 0),

(1.4.50)

x

where z = min{ c[1] , x[1] − b} and 0 ≤ θ ≤ z. 36. [53] If 0 ≤ xi ≤ ci for i = 1, . . . , n, c1 ≥ ⋅ ⋅ ⋅ ≥ cn , and ∑ni=1 xi = s, then r

(x1 , . . . , xn ) ≺ (c1 , . . . , cr , s − ∑ ci , 0, . . . , 0), i=1

(1.4.51)

r+1 ci ≥ s. If no such integer exists, where r ∈ {1, . . . , n − 1} is such that ∑ri=1 ci < s and ∑i=1 then r = n. 37. [53] If 0 ≤ ai ≤ xi , i = 1, . . . , n, a1 ≥ ⋅ ⋅ ⋅ ≥ an , and ∑ni=1 xi = s, then r

(a1 , . . . , ar , s − ∑ ai , 0, . . . , 0) ≺ (x1 , . . . , xn ), i=1

(1.4.52)

r+1 ai ≥ s. If no such integer exists, where r ∈ {1, . . . , n − 1} is such that ∑ri=1 ai < s and ∑i=1 then r = n.

1.5 Convex functions and majorization

| 45

1.5 Convex functions and majorization Theorem 1.5.1 ([109]). Let Ω ⊂ ℝn be a symmetric convex set, x, y ∈ Ω. Then (a) x ≺ y ⇔ φ(x) ≤ (or ≥, respectively) φ(y) for all symmetric convex (or concave, respectively) functions φ : Ω → ℝ; (b) x ≺≺ y ⇔ φ(x) < (or >, respectively) φ(y) for all strictly symmetric convex (or concave, respectively) functions φ : Ω → ℝ; (c) x ≺w y ⇔ φ(x) ≤ (or ≥, respectively) φ(y) for all increasing (or decreasing, respectively) symmetric convex (or concave, respectively) functions φ : Ω → ℝ; (d) x ≺w y ⇔ φ(x) ≤ (or ≥, respectively) φ(y) for all decreasing (or increasing, respectively) symmetric convex (or concave, respectively) functions φ : Ω → ℝ. Theorem 1.5.2 ([109]). Let the interval I ⊂ ℝ, x, y ∈ I n ⊂ ℝn . Then (a) x ≺ y ⇔ ∑ni=1 g(xi ) ≤ (or ≥, respectively) ∑ni=1 g(yi ) for all convex (or concave, respectively) functions g : I → ℝ; (b) x ≺≺ y ⇔ ∑ni=1 g(xi ) < (or >, respectively) ∑ni=1 g(yi ) for all strictly convex (or concave, respectively) functions g : I → ℝ; (c) x ≺w y ⇔ ∑ni=1 g(xi ) ≤ (or ≥, respectively) ∑ni=1 g(yi ) for all increasing convex (or decreasing concave, respectively) functions g : I → ℝ; (d) x ≺w y ⇔ ∑ni=1 g(xi ) ≤ (or ≥, respectively) ∑ni=1 g(yi ) for all decreasing convex (or increasing concave, respectively) functions g : I → ℝ. Remark 1.5.1. Theorem 1.5.2(a) is the famous Karamata inequality. Example 1.5.1 (The 40th IMO, second question). Let n be a fixed integer, n ≥ 2. (a) Determine the smallest constant c such that ∑

i 1/2, then 1−x1 < 1/2 and we have (x2 , x3 , . . . , xn ) ≺ (1−x1 , 0, . . . , 0). Applying Karamata’s inequality once more, we obtain f (x1 ) + f (x2 ) + ⋅ ⋅ ⋅ + f (xn ) ≤ f (x1 ) + f (1 − x1 ) + (n − 2)f (0) = f (x1 ) + f (1 − x1 ). It is easy to prove that the function g(x) = f (x) + f (1 − x) has the maximum on the segment [0, 1] equal to g(1/2) = 1/8. Thus, in this case also, f (x1 )+f (x2 )+⋅ ⋅ ⋅+f (xn ) ≤ 1/8 follows. Equality occurs, e. g., for x1 = x2 = 1/2, which proves that C = 1/8. The author adds a lower bound estimate. Using the convexity of f (x) = x3 (1 − x) on [0, 1/2] and (1/n, 1/n, . . . , 1/n) ≺ (x1 , x2 , . . . , xn ), we have 3

1 1 n−1 f (x1 ) + f (x2 ) + ⋅ ⋅ ⋅ + f (xn ) ≥ n( ) (1 − ) = 3 . n n n Then we obtain the inverse inequality of (1.5.1), 4

n−1 n (∑ x ) ≤ ∑ xi xj (xi2 + xj2 ), n3 i=1 i i 0. If f (x) is a convex function in the interval [0, p] and p ≥ 2s , then the maximum value of F(x1 , . . . , xn ) = ∑ni=1 f (xi ) needs to be divided into two parts to discuss. First, all components are less than or equal to p. Second, there is a component to take the value greater than or equal to p; it is easy to know that the remaining components are less than or equal to p. In the first case, we have s s ( , . . . , ) ≺ (x1 , . . . , xn ) ≺ (p, s − p, 0, . . . , 0), ⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟ ⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟ n n n−2 n

1.5 Convex functions and majorization

| 47

and then s s F( , . . . , ) ≤ F(x1 , . . . , xn ) ≤ F(p, s − p, 0, . . . , 0). ⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟ ⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟ n n n−2 n

For the second case, by symmetry, we can assume that x1 ≥ p. Then we fix x1 and we let G(x2 , . . . , xn ) = ∑ni=1 f (xi ). Then from s − x1 s − x1 ( ) ≺ (x2 , . . . , xn ) ≺ (s − x1 , ⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟ 0, . . . , 0), ,..., ⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟ n−1 n−1 n−2 n−1

it follows that s − x1 s − x1 G( ) ≤ G(x2 , . . . , xn ) ≤ G(s − x1 , ⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟ 0, . . . , 0), ,..., ⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟ n−1 n−1 n−2 n

and therefore F(x1 ,

s − x1 s − x1 ) ≤ F(x1 , . . . , xn ) ≤ F(x1 , s − x1 , ⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟ 0, . . . , 0). ,..., ⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟ n−1 n−1 n−2 n−1

For univariate functions, we can easily find the highest value of their interval [p, s]. In the interval [p, s], let F(x1 ,

s − x1 s − x1 )≥l ,..., ⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟ n−1 n−1 n−1

and F(x1 , s − x1 , ⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟ 0, . . . , 0) ≤ m. n−2

Then we have the following theorem. Theorem 1.5.3 ([117]). Let x = (x1 , . . . , xn ) ∈ I n ⊂ ℝn+ and ∑ni=1 xi = s > 0. If f (x) is a convex function in the interval [0, p] and p ≥ 2s , then s s min{l, F( , . . . , )} ≤ F(x1 , x2 , . . . , xn ) ≤ max{m, F(p, s − p, 0, . . . , 0)}. n n Theorem 1.5.4 ([109, pp. 49–50]). Define the interval I ⊂ ℝ, x, y ∈ I n ⊂ ℝn . Then (a) x ≺ y ⇒ (g(x1 ), . . . , g(xn )) ≺w (or ≺w , respectively) (g(y1 ), . . . , g(yn )) for any convex (or concave, respectively) function g : I → ℝ; (b) x ≺w y ⇒ (g(x1 ), . . . , g(xn )) ≺w (or ≺w , respectively) (g(y1 ), . . . , g(yn )) for any increasing convex (or decreasing concave, respectively) function g : I → ℝ;

48 | 1 Majorization (c) x ≺w y ⇒ (g(x1 ), . . . , g(xn )) ≺w (or ≺w , respectively) (g(y1 ), . . . , g(yn )) for any decreasing convex (or increasing concave, respectively) function g : I → ℝ. Example 1.5.3. Let x = (x1 , . . . , xn ) ∈ ℝn++ , s = ∑ni=1 xi . (a) If r ≥ 1, α ≥ 1, t ≥ 1, then α

α

st−1 st−1 (( t−1 ) , . . . , ( t−1 ) ) n (rn − 1) n (rn − 1) ⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟ n

α

(1.5.3)

α

xnt x1t ) ,...,( ) ). ≺w (( rs − x1 rs − xn (b) If r ≥ 1, α ≥ 1, t > 0, then α

α

nt−1 (rn − 1) nt−1 (rn − 1) ) ,...,( ) ) (( t−1 s st−1 ⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟ α

≺w (( Proof. Let g(u) =

ut , rs−u

g 󸀠󸀠 (u) =

h(u) =

n

(1.5.4)

α

rs − xn rs − x1 ) ). ) ,...,( xnt x1t

rs−u . ut

Then

2tut−1 t(t − 1)ut−2 2ut + ≥0 + 2 rs − u (rs − u)3 (rs − u)

for r ≥ 1, t ≥ 1

and h󸀠󸀠 (u) =

t(1 + t)(rs − u) 2t ≥0 + ut+1 ut+2

for r ≥ 1, t > 0.

This means that for r ≥ 1, g(u) is convex on R++ , and for t ≥ 1 and t > 0, h(u) is convex on R++ . Then according to Theorem 1.1.7(a), g α (u) and hα (u) are convex functions on R++ . And then, by Theorem 1.5.4(a), from (x, . . . , x) ≺ (x1 , . . . , xn ), it follows that (g α (x), . . . , g α (x)) ≺w (g α (x1 ), . . . , g α (xn )) and (hα (x), . . . , hα (x)) ≺w (hα (x1 ), . . . , hα (xn )). That is, (1.5.3) and (1.5.4) hold. Remark 1.5.2. If we prove (1.5.3) and (1.5.4) directly using the definition of weak majorization, it will be much more troublesome than the above proof. The reader may feel free to try.

1.6 Generalizations of Karamata’s inequality | 49

1.6 Generalizations of Karamata’s inequality In 1947, Fuchs [26] gave a weighted generalization of the Karamata inequality, namely, Theorem 1.5.4(a). Theorem 1.6.1. Define the interval I ⊂ ℝ, x, y ∈ I n , x1 ≥ ⋅ ⋅ ⋅ ≥ xn , y1 ≥ ⋅ ⋅ ⋅ ≥ yn , w = (w1 , . . . , wn ) ∈ ℝn . If k

k

i=1

i=1

∑ wi xi ≤ ∑ wi yi ,

k = 1, 2, . . . , n − 1,

(1.6.1)

and n

n

i=1

i=1

∑ wi xi = ∑ wi yi ,

(1.6.2)

then n

n

i=1

i=1

∑ wi φ(xi ) ≤ ∑ wi φ(yi )

(1.6.3)

holds for any continuous convex function φ : I → ℝ. Bullen et al. further gave the following results. Theorem 1.6.2 ([5]). Define the interval I ⊂ ℝ, x, y ∈ I n , x1 ≥ ⋅ ⋅ ⋅ ≥ xn , y1 ≥ ⋅ ⋅ ⋅ ≥ yn , w = (w1 , . . . , wn ) ∈ ℝn . If the condition (1.6.1) is satisfied, then for any increasing continuous convex function, φ : I → ℝ, inequality (1.6.3) is established. In the paper “A generalization of Karamata’s inequality,” Janne Junnila proved the following interesting result based on Theorem 1.5.4(c), applying mathematical induction. Theorem 1.6.3. Let h0 , h1 , . . . , hn : I → I be increasing convex functions on an interval I ⊂ ℝ. We define functions Hk = hk ∘ hk−1 ∘ ⋅ ⋅ ⋅ ∘ h1 ∘ h0 for all k ∈ {0, 1, . . . , n}. Let a1 ≥ ⋅ ⋅ ⋅ ≥ an and b1 ≥ ⋅ ⋅ ⋅ ≥ bn be real numbers in the interval I. If m+1

m+1

k=1

k=1

n

n

k=1

k=1

∑ Hm (ak ) ≥ ∑ Hm (bk )

(1.6.4)

for all m ∈ {0, 1, . . . , n}, then ∑ Hm (ak ) ≥ ∑ Hm (bk ).

(1.6.5)

Example 1.6.1. Let a, b, c, x, y, z ∈ R++ with x ≥ y ≥ z. Assume that a ≥ x, a2 +b2 ≥ x2 +y2 and a3 + b3 + c3 ≥ x3 + y3 + z 3 . Then a6 + b6 + c6 ≥ x6 + y6 + z 6 .

(1.6.6)

50 | 1 Majorization Proof. Suppose that b > a. Then if we swap a and b we will not change a6 + b6 + c6 and the assumed inequalities are still satisfied. Similarly we can swap b and c if c > b. So we can assume a ≥ b ≥ c. 3 Consider functions H0 (x) = x, H1 (x) = (H0 (x))2 = x2 , H3 (x) = (H1 (x)) 2 = x3 , H4 (x) = (H3 (x))2 = x6 . Clearly the conditions for the above theorem are satisfied and the result follows. In 1985, Ng [63] proved the following extension of the Karamata inequality using Theorem 1.5.4(a) and Theorem 1.2.20. Theorem 1.6.4. Define the interval I ⊂ ℝ, and assume the inequality n

n

i=1

i=1

∑ g(xi ) ≤ ∑ g(yi ) n

for all x, y ∈ I and x ≺ y holds if and only if g : I → ℝ is a Wright-convex function. In 2012, Inoan and Rasa [41] gave an elementary proof of Theorem 1.6.4. In 2018, Monea and Marinescu [62] gave another elementary proof of the same theorem. Moreover, their proof uses weaker conditions than the proof from [41]. Theorem 1.6.4 remains valid for any interval I, not necessarily open. In 2009, Miao and Qi [59] put forward the following open question. Problem 1.6.1. For n ∈ ℕ, let x1 , . . . , xn and y1 , . . . , yn be two positive sequences satisfying x1 ≥ ⋅ ⋅ ⋅ ≥ xn , y1 ≥ ⋅ ⋅ ⋅ ≥ yn , and m

m

i=1

i=1

∑ xi ≤ ∑ yi

(1.6.7)

for 1 ≤ m ≤ n. Under what conditions does the inequality m

β

m

α+β

∑ xiα yi ≤ ∑ yi i=1

i=1

(1.6.8)

hold for α and β? Miao and Qi first proved that under the condition of Problem 1.6.1, inequality m

m

i=1

i=1

∑ xiα ≤ ∑ yiα

(1.6.9)

holds for α ≥ 1 and 1 ≤ m ≤ n. They then skillfully used the Hölder inequality to prove that inequality (1.6.8) holds for α ≥ 1, β ≥ 0. Miao and Qi proved in a very inefficient way inequality (1.6.9), but it is a direct corollary of Theorem 1.5.4(c), noting that (1.6.7) means x ≺w y and that t α is increasing and convex when α ≥ 1.

2 Definitions and properties of Schur-convex functions The majorizing relations and Schur-convex functions are the two most basic and important concepts of majorization theory. This chapter will introduce the classical definitions and properties of Schur-convex functions and generalizations of Schur-convex functions by Chinese scholars in recent years, including Schur-geometrically convex functions, Schur-harmonically convex functions, Schur-power convex functions, and abstract majorization inequality theory.

2.1 Definitions and properties of Schur-convex functions This section first introduces the classical definition of Schur-convex functions, and then gives some basic properties of Schur-convex functions without any proofs. Detailed proofs are found in monographs [109] and [53]. Definition 2.1.1 ([109, 53]). Let Ω ⊂ ℝn ; φ : Ω → ℝ is said to be a Schur-convex function if x ≺ y ⇒ φ(x) ≤ φ(y). If, in addition, φ(x) < φ(y) whenever x ≺≺ y, then φ is said to be strictly Schur-convex; φ is said to be Schur-concave if and only if −φ is Schur-convex. Theorem 2.1.1. If φ is Schur-convex (or Schur-concave) on a symmetric set Ω, then φ is a symmetric function on Ω. Proof. For all x ∈ Ω and any permutation matrix G, we have xG ∈ Ω and φ(x) ≤ φ(xG) ≤ φ(x) from x ≺ xG ≺ x, so φ(xG) = φ(x), that is, φ is a symmetric function on Ω. Remark 2.1.1. If φ is Schur-convex (or Schur-concave) on the asymmetric set Ω, it is not necessarily a symmetric function. Theorem 2.1.2 ([109, pp. 49–50]). Let Ω ⊂ ℝn , φ : Ω → ℝ. Then (a) x ≺w y ⇔ φ(x) ≤ φ(y) for any increasing Schur-convex function φ; (b) x ≺w y ⇔ φ(x) ≤ φ(y) for any decreasing Schur-convex function φ; (c) x ≺w y ⇔ φ(x) ≥ φ(y) for any decreasing Schur-concave function φ; (c) x ≺w y ⇔ φ(x) ≥ φ(y) for any increasing Schur-concave function φ. Theorem 2.1.3 ([109, 53]). Let Ω ⊂ ℝn be a convex set and have a nonempty interior set Ω0 . Let φ : Ω → ℝ be continuous on Ω and differentiable in Ω0 . Then φ is Schur-convex https://doi.org/10.1515/9783110607840-002

52 | 2 Definitions and properties of Schur-convex functions (or Schur-concave, respectively) if and only if it is symmetric on Ω and if (x1 − x2 )(

𝜕φ 𝜕φ − )≥0 𝜕x1 𝜕x2

(or ≤ 0, respectively)

(2.1.1)

holds for any x = (x1 , . . . , xn ) ∈ Ω0 . Inequality (2.1.1) is called the Schur-condition. Remark 2.1.2. Theorem 2.1.3, called the Schur-convex function judgment theorem, is the most important theorem of majorization theory. When using this theorem, pay attention to the condition of this theorem. First, let us consider whether the set Ω under consideration is a symmetric convex set. If so, then observe whether the function is a symmetric function. If not, it can be immediately concluded from Theorem 2.1.1 that the function is not a Schur-convex function (or Schur-concave function); if it is, then further test the Schur-condition. In short, the use of this theorem must take full account of all conditions of the theorem. Theorem 2.1.4 ([109, 53]). Let Ω ⊂ ℝn be a convex set and have a nonempty interior set Ω0 . Let φ : Ω → ℝ be continuous on Ω and differentiable in Ω0 . Then φ is a strictly Schur-convex (or Schur-concave, respectively) function if and only if it is symmetric on Ω and if (x1 − x2 )(

𝜕φ 𝜕φ − )>0 𝜕x1 𝜕x2

(or < 0, respectively)

(2.1.2)

holds for any x = (x1 , . . . , xn ) ∈ Ω0 and x1 ≠ x2 . Feng Qi [68] obtained the following inequality between the exponential and the quadratic sum of a nonnegative sequence sum using the analytical method. For (x1 , . . . , xn ) ∈ [0, ∞)n and n ≥ 2, inequality n e2 n 2 ∑ xi ≤ exp(∑ xi ) 4 i=1 i=1

(2.1.3)

is valid. Equality in (2.1.3) occurs if xi = 2 for some given 1 ≤ i ≤ n and xj = 0 for all 2

1 ≤ j ≤ n with j ≠ i. So, the constant e4 in (2.1.3) is the best possible. Shi [82] established the following theorem with methods from majorization theory, thus generalizing (2.1.3). Theorem 2.1.5. Let (x1 , . . . , xn ) ∈ ℝn+ and n ≥ 2. If α ≥ 1, then the inequality n eα n α xi ) x ) ≤ exp( ( ∑ ∑ i αα i=1 i=1

(2.1.4)

is valid. Equality in (2.1.4) occurs if and only if xi = α and xj = 0 for some given 1 ≤ i ≤ n and all 1 ≤ j ≤ n with j ≠ i.

2.1 Definitions and properties of Schur-convex functions | 53

In 2009, Witkowski [124] gave a decision theorem on binary Schur-convex functions. Theorem 2.1.6. Let I ⊂ ℝ. The binary symmetric function φ : I 2 → ℝ is Schur-convex if and only if the function φa (t) = φ(t, a − t) is decreasing on (−∞, a2 ) for any a ∈ ℝ++ (assuming (t, a − t) ∈ I 2 ). Proof. Suppose that φ is Schur-convex. Obviously (s, a − s) ≺ (t, a − t) for t < s < a2 ; therefore φ(s, a − s) ≤ φ(t, a − t). Conversely, suppose x ≺ y and let s = min{x1 , x2 }, t = min{y1 , y2 }, and a = x1 + x2 . Then t < s < a2 . From the monotonicity of φa (x), we have φ(s, a − s) ≤ φ(t, a − t), and then φ(x) ≤ φ(y) by the symmetry of φ. Example 2.1.1. Prove that the mean difference 1

MSA (x, y) = (

x2 + y2 2 x + y ) − 2 2

is Schur-convex on ℝ2++ . Proof. It is obvious that MSA (x, y) is symmetric. For all a ∈ ℝ++ , let 1

x2 + (a − x)2 2 a ) − . φa (x) = MSA (x, a − x) = ( 2 2 For x ≤ a2 , we have φ󸀠a (x) =

a 2 x2 +(a−x)2 21 ] [ 2

x−

≤ 0.

This means that MSA (x, y) is decreasing on (−∞, a2 ); by Theorem 2.1.6, it follows that MSA (x, y) is Schur-convex on ℝ2++ . For the Schur-convex (or concave) function on an asymmetric convex set D = {x ∈ ℝn | x1 ≥ ⋅ ⋅ ⋅ ≥ xn }, we have the following decision theorem. Theorem 2.1.7 ([109, p. 58]). Let φ : D → ℝ be continuous on D and differentiable on the interior Do of D. Then φ is Schur-convex (or Schur-concave, respectively) if and only if 𝜕φ(x) 𝜕φ(x) ≥ (or ≤, respectively) , 𝜕xi 𝜕xi+1 for all x ∈ Do .

i = 1, . . . , n − 1,

(2.1.5)

54 | 2 Definitions and properties of Schur-convex functions Theorem 2.1.8 ([109, p. 58]). Let φ : D → ℝ be continuous on D and differentiable on the interior Do of D. Then φ is strictly Schur-convex (or Schur-concave) if and only if 𝜕φ(x) 𝜕φ(x) > (or 0,

(2.1.7)

is Schur-convex (or concave, respectively) on D = {(x, y) ∈ R2++ | x ≥ y} if and only if ω ≤ 1 (or ≥ 1, respectively). Example 2.1.2 ([48, p. 57]). Let (x, y) ∈ ℝ2++ , m, n ∈ ℕ. Prove that K(m, n) =

mA(x, y) + nG(x, y) m(x + y) + 2n√xy = m+n 2(m + n)

(2.1.8)

is Schur-concave with (x, y) ∈ ℝ2++ . Proof. It is clear that K(m, n) is symmetric with (x, y) ∈ ℝ2++ . If taking in (2.1.7) ω=

(m + 2n)x − my − 2n√xy , mx − (m + 2n)y + 2n√xy

then M(ω; x, y) = K(m, n). Write t = xy . Then ω=

(m + 2n)( xy ) − m − 2n√ xy m( xy )

− (m + 2n) +

2n√ xy

=

(m + 2n)t − m − 2n√t . mt − (m + 2n) + 2n√t

By Theorem 2.1.10, to prove that K(m, n) is Schur-concave on D = {(x, y) ∈ R2++ | x ≥ y} we only need to prove that ω ≥ 1, but ω ≥ 1 ⇔ (m + 2n)t − m − 2n√t ≥ mt − (m + 2n) + 2n√t ⇔ t − 2√t + 1 = (√t − 1)2 ≥ 0.

From Remark 2.1.3, it follows that K(m, n) is also Schur-concave on ℝ2++ . In American Mathematical Monthly, Problem No. 10529, provided by Juan Bosco Romero Marquez [American Mathematical Monthly, 1996, 103 (6): 509] is as follows. Let λ ≥ 0, 0 < a ≤ b, n ∈ ℤ, n > 1. Prove n n n n n √ab ≤ √n a + b + λ((a + b) − a − b ) ≤ a + b . 2 + λ(2n − 2) 2

(2.1.9)

In 1998, Robin J. Chapman pointed out that a condition of this problem, “λ ≥ 0,” should be changed to “λ ≥ 1,” and gave a proof. Shi et al. [86] applied Theorem 2.1.7 to prove the following results.

56 | 2 Definitions and properties of Schur-convex functions Example 2.1.3. Let 0 < a ≤ b, α > 0. (a) If λ ≤ 1, 0 < α ≤ 1 or λ ≥ 1, α ≥ 1, then 1

aα + bα + λ((a + b)α − aα − bα ) α a + b ) ≤ ; ≤( 1 2 + λ(2α − 2) 2 (2 + λ(2α − 2)) α a+b

(2.1.10)

(b) if λ ≤ 1, α ≥ 1 or λ ≥ 1, 0 < α ≤ 1, the inequalities in (2.1.10) are reversed. 1

Proof. Let φ(a, b) = (ψ(a, b)) α , where ψ(a, b) =

aα + bα + λ[(a + b)α − aα − bα ] , 2 + λ(2α − 2)

𝜕φ αaα−1 + λα[(a + b)α−1 − aα−1 ] = , 𝜕a 2 + λ(2α − 2) and

𝜕φ αbα−1 + λα[(a + b)α−1 − bα−1 ] = . 𝜕b 2 + λ(2α − 2) (a) It is easy to see that 𝜕a ≥ 𝜕b is equivalent to α(bα−1 − aα−1 )(1 − λ) ≤ 0. According to Theorem 2.1.7, when λ ≤ 1, 0 < α ≤ 1, or λ ≥ 1 and α ≥ 1, ψ(x, y) is Schur-concave on D2 = {(a, b) ∈ ℝ2++ | b ≥ a}. So from 𝜕φ

(

𝜕φ

a+b a+b , ) ≺ (a, b) ≺ (a + b, 0), 2 2

it follows that φ(

a+b a+b a+b , ) ≥ φ(a, b) ≥ φ(a + b, 0) = , 1 2 2 (2 + λ(2α − 2)) α

that is, (2.1.10) holds. We have proved (a), and (b) can be proved similarly. Remark 2.1.4. Compared with the above Problem No. 10529, here n ∈ ℤ, n > 1 relaxed to α > 0, we changed the left end of (2.1.9), and we considered the case of 0 ≤ λ < 1. Theorem 2.1.11 ([109, p. 58]). Let the interval I ⊂ ℝ, φ : ℝn → ℝ, g : I → ℝ, and ψ(x) = φ(g(x1 ), . . . , g(xn )) : I n → ℝ. (a) If φ is increasing and Schur-convex and g is convex, then ψ is Schur-convex; (b) if φ is increasing and Schur-concave and g is concave, then ψ is Schur-concave; (c) if φ is decreasing and Schur-convex and g is concave, then ψ is Schur-convex; (d) if φ is decreasing and Schur-concave and g is convex, then ψ is Schur-concave; (e) if φ is increasing and Schur-convex and g is decreasing and convex, then ψ is decreasing and Schur-convex; (f) if φ is decreasing and Schur-concave and g is decreasing and convex, then ψ is increasing and Schur-concave;

2.2 Convex functions and Schur-convex functions | 57

(g) if φ is increasing and Schur-convex and g is increasing and convex, then ψ is increasing and Schur-convex; (h) if φ is decreasing and Schur-convex and g is decreasing and concave, then ψ is increasing and Schur-convex; (i) if φ is decreasing and Schur-convex and g is increasing and concave, then ψ is decreasing and Schur-convex. Corollary 2.1.2. Let g : I → ℝ be continuous, φ(x) = Σni=1 g(xi ). Then (a) φ is (strictly) Schur-convex on I n ⇔ g is (strictly) convex on I; (b) φ is (strictly) Schur-concave on I n ⇔ g is (strictly) concave on I. Corollary 2.1.3. Let g : I → ℝ be continuous, φ(x) = ∏ni=1 g(xi ). Then (a) φ is (strictly) Schur-convex on I n ⇔ log g is (strictly) convex on I; (b) φ is (strictly) Schur-concave on I n ⇔ log g is (strictly) concave on I.

2.2 Convex functions and Schur-convex functions The convex functions and the Schur-convex functions are closely related. In this section we provide a brief description. Example 2.2.1. We give the following example of a convex function rather than a Schur-convex function: φ(x, y) = x is obviously a convex function on ℝ2 , but it is not symmetric, so it is not Schur-convex. Example 2.2.2. We give the following example of a Schur-convex function rather than a convex function. For x1 > 0, x2 > 0, let φ(x) = φ(x1 , x2 ) = −x1 x2 . If (x1 , x2 ) ≺ (y1 , y2 ), then x1 + x2 = y1 +y2 := s. Without loss of generality, we may assume that x1 ≥ x2 , y1 ≥ y2 . Then x1 ≤ y1 , and −x1 x2 = x1 (s − x1 ) ≤ y1 (s − y1 ) = −y1 y2 ⇔ y12 − x12 ≥ s(y1 − x1 ) ⇔ y1 + x1 ≥ s. The last inequality obviously holds, so φ is Schur-convex. The Hesse matrix of the function φ is 0 H(x) = ( −1

−1 ). 0

Because det H(x) = −1 < 0, H(x) is not negative definite and φ is not a convex function. Theorem 2.2.1 ([109, p. 58]). Let Ω ⊂ ℝn be a symmetric convex set, x, y ∈ Ω. Then (a) x ≺ y ⇔ φ(x) ≤ φ(y) for any symmetric convex function φ : Ω → ℝ; (b) x ≺≺ y ⇔ φ(x) < φ(y) for any strictly symmetric convex function φ : Ω → ℝ; (c) x ≺ y ⇔ φ(x) ≥ φ(y) for any symmetric concave function φ : Ω → ℝ; (d) x ≺≺ y ⇔ φ(x) > φ(y) for any strictly symmetric concave function φ : Ω → ℝ; (e) x ≺w y ⇔ φ(x) ≤ φ(y) for any increasing symmetric convex function φ : Ω → ℝ;

58 | 2 Definitions and properties of Schur-convex functions (f) x ≺w y ⇔ φ(x) ≤ φ(y) for any decreasing symmetric convex function φ : Ω → ℝ; (g) x ≺w y ⇔ φ(x) ≥ φ(y) for any decreasing symmetric concave function φ : Ω → ℝ; (h) x ≺w y ⇔ φ(x) ≥ φ(y) for any increasing symmetric concave function φ : Ω → ℝ. The following two corollaries are obtained immediately from (a) and (c) in Theorem 2.2.1. Corollary 2.2.1. Let φ be a symmetric convex (or concave, respectively) function on the symmetric convex set Ω. Then φ is a Schur-convex (or Schur-concave, respectively) function on Ω. Corollary 2.2.2. Let I ⊂ ℝ be an interval and g be a (strictly) convex function on I → ℝ. Then φ(x) = Σni=1 g(xi ) is a (strictly) Schur-convex function on I n . Example 2.2.3. The standard deviation 1 n σ(x) = ( ∑(xi − x)2 ) n i=1 is a strictly Schur-convex function on ℝn , where x =

1 2

1 n

∑ni=1 xi .

Proof. The function g(t) = (t − x)2 is strictly convex. From Corollary 2.2.2 we know that 1 1 n ∑ g(xi ) is strictly Schur-convex. As h = t 2 is strictly increasing, from Theorem 2.1.7, n i=1 it follows that σ(x) is strictly Schur-convex on ℝn . Let f be a function defined on an interval I and let the derivative f 󸀠 exist. Define the function F of two variables by F(x, y) =

f (y) − f (x) (x ≠ y), y−x

F(x, x) = f 󸀠 (x),

where (x, y) ∈ I 2 . Let us now consider the following statements: (A) f 󸀠 is convex on I; 󸀠 󸀠 (y) (B) F(x, y) ≤ f (x)+f for all x, y ∈ I; 2 x+y 󸀠 (C) f ( 2 ) ≤ F(x, y) for all x, y ∈ I; (D) F is Schur-convex on I 2 ; and (A󸀠 ) (B󸀠 ) (C󸀠 ) (D󸀠 )

f 󸀠 is concave on I; 󸀠 󸀠 (y) F(x, y) ≥ f (x)+f for all x, y ∈ I; 2 󸀠 x+y f ( 2 ) ≥ F(x, y) for all x, y ∈ I; F is Schur-concave on I 2 .

In 2001, Merkle studied the equivalence of the above statements and obtained the following results.

2.2 Convex functions and Schur-convex functions | 59

Theorem 2.2.2 ([58]). If x → f 󸀠󸀠󸀠 (x) is continuous on I, then the conditions (A)–(D) are equivalent and the conditions (A󸀠 )–(D󸀠 ) are equivalent. Example 2.2.4 ([42]). Taking f (x) = xn+1 , from Theorem 2.2.2, we know that ∑ni=0 xi yn−i is Schur-convex on ℝ2 . Walorski [107] proved the following results. Theorem 2.2.3. If α > 0 and F(x, y) = f (x) + αf (

x+y ) + f (y) (x ≠ y), 2

F(x, x) = f 󸀠 (x),

then the following conditions (A)–(F) are equivalent: (A) “x → F(x; x)” is convex on I; , x+y ) ≤ x+y for x, y ∈ I; (B) F( x+y 2 2 2 (C) (D) (E) (F)

F(x; y) ≤ F(x,x)+F(y,y) for x, y ∈ I; 2 2 F is convex on I ; F is Schur-convex on I 2 ; f is convex on I.

Kun Zhu et al. [165] generalized Theorem 2.2.2 as follows. Let f , g be functions defined on an interval I; their derivatives f 󸀠 , g 󸀠 exist, and 󸀠 g ≠ 0. Define the function F of two variables by F(x, y) =

f (y) − f (x) , (x ≠ y), g(y) − g(x)

F(x, x) =

f 󸀠 (x) , g 󸀠 (x)

where (x, y) ∈ I 2 . Let us now consider the following statements: (A) f 󸀠 is convex, g 󸀠 is concave, f 󸀠 ≥ 0, and g 󸀠 > 0 on I; or f 󸀠 is concave, g 󸀠 is convex, f 󸀠 ≤ 0, and g 󸀠 < 0 on I;

(B) F(x, y) ≥ (C) F(x, y) ≤

) f 󸀠 ( x+y 2 for all x, y ∈ I; g 󸀠 ( x+y ) 2 󸀠 󸀠 f (x)+f (y) for all x, y ∈ g 󸀠 (x)+g 󸀠 (y) 2

I;

(D) F is Schur-convex on I

and (A󸀠 ) f 󸀠 is concave, g 󸀠 is convex, f 󸀠 ≥ 0, and g 󸀠 > 0 on I; or f 󸀠 is convex, g 󸀠 is concave, f 󸀠 ≤ 0, and g 󸀠 < 0 on I; (B󸀠 ) F(x, y) ≤

(C󸀠 ) F(x, y) ≥

) f 󸀠 ( x+y 2 for all x, y ∈ I; g 󸀠 ( x+y ) 2 f 󸀠 (x)+f 󸀠 (y) for all x, y ∈ g 󸀠 (x)+g 󸀠 (y) 2

I;

(D󸀠 ) F is Schur-concave on I .

Theorem 2.2.4. Let f 󸀠󸀠󸀠 (t), g 󸀠󸀠󸀠 (t) be continuous on I. If (A) holds, then the conditions (B)–(D) hold. If (A󸀠 ) holds, then the conditions (B󸀠 )–(D󸀠 ) hold.

60 | 2 Definitions and properties of Schur-convex functions

(A1 ) (A󸀠1 ) (A2 ) (A󸀠2 )

Let us now consider the other four statements: g 󸀠 is concave on I; g 󸀠 is convex on I; f 󸀠 is convex on I; f 󸀠 is concave on I.

Theorem 2.2.5 ([165]). If g 󸀠󸀠󸀠 (t) is continuous on I and f (t) = t, then the conditions (A1 ), (B), (C), (D) are equivalent, and the conditions (A󸀠1 ), (B󸀠 ), (C󸀠 ), (D󸀠 ) are equivalent. Theorem 2.2.6 ([165]). If g 󸀠󸀠󸀠 (t) is continuous on I and g(t) = t, then the conditions (A2 ), (B), (C), (D) are equivalent, and the conditions (A󸀠2 ), (B󸀠 ), (C󸀠 ), (D󸀠 ) are equivalent. Define the symmetric function G of three variables by G(x, y, z) = ∑ F(x, y) = F(x, y) + F(y, z) + F(z, x), cyclic

where (x, y, z) ∈ I 3 . Theorem 2.2.7 ([165]). Let x + y + z = s > 0, x, y, z ∈ I = (0, 2s ). (a) If F is Schur-convex on I 2 , ϕ1 (t) = F(t, t) is convex, and ϕ2 (t) = F( 2s , t) is strictly convex on I, then G( 3s , 3s , 3s ) ≤ G(x, y, z) < G( 2s , 2s , 0). (b) If F is Schur-concave on I 2 , ϕ1 (t) = F(t, t) is concave, and ϕ2 (t) = F( 2s , t) is strictly concave on I, then G( 3s , 3s , 3s ) ≥ G(x, y, z) > G( 2s , 2s , 0). Theorem 2.2.8 ([165]). Let x + y + z = s > 0, x, y, z ∈ I = (0, s). (a) If F is Schur-convex on I 2 , ϕ1 (t) = F(t, t) is convex, and ϕ2 (t) = F(t, 0) is strictly convex on I, then s s s G( , , ) ≤ G(x, y, z) < G(s, 0, 0); 3 3 3 (b) if F is Schur-concave on I 2 , ϕ1 (t) = F(t, t) is concave, and ϕ2 (t) = F(t, 0) is strictly concave on I, then s s s G( , , ) ≥ G(x, y, z) > G(s, 0, 0). 3 3 3 This theorem can be formulated for a function in n variables.

2.3 Some applications of Karamata's inequality

Let I ⊂ ℝ be an interval. Then

x ≺ y ⇒ ∑_{i=1}^{n} g(x_i) ≤ (≥) ∑_{i=1}^{n} g(y_i)  (2.3.1)

holds for all convex (or concave, respectively) functions g on I. Also,

x ≺≺ y ⇒ ∑_{i=1}^{n} g(x_i) < (or >, respectively) ∑_{i=1}^{n} g(y_i)  (2.3.2)

holds for all strictly convex (or strictly concave, respectively) functions g on I. The above conclusion, that is, Corollary 2.2.2, is the Karamata inequality, which is a very important result in majorization theory. This section focuses on its applications to polynomial inequalities, selected from the references [87] and [78].

2.3.1 Majorized proof of inequalities with the integer power functions
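The Karamata implication can be sanity-checked numerically. The sketch below is an illustration added here, not part of the text: it generates pairs with x ≺ y by averaging two coordinates of y (a T-transform, which always produces a majorized vector) and verifies the inequality for the convex function g(t) = t²:

```python
import random

def majorizes(y, x):
    # True if x ≺ y: equal sums, and the partial sums of the decreasing
    # rearrangement of y dominate those of x.
    xs, ys = sorted(x, reverse=True), sorted(y, reverse=True)
    if abs(sum(xs) - sum(ys)) > 1e-9:
        return False
    px = py = 0.0
    for a, b in zip(xs, ys):
        px += a
        py += b
        if py < px - 1e-9:
            return False
    return True

def g(t):
    # a convex function on the real line
    return t * t

random.seed(1)
for _ in range(1000):
    y = [random.uniform(0, 1) for _ in range(5)]
    # averaging two coordinates of y (a T-transform) yields x with x ≺ y
    x = y[:]
    i, j = random.sample(range(5), 2)
    lam = random.uniform(0, 1)
    x[i], x[j] = lam*y[i] + (1-lam)*y[j], (1-lam)*y[i] + lam*y[j]
    assert majorizes(y, x)
    # Karamata: x ≺ y implies sum g(x_i) <= sum g(y_i) for convex g
    assert sum(map(g, x)) <= sum(map(g, y)) + 1e-9
print("Karamata's inequality verified on 1000 random majorized pairs")
```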

Lemma 2.3.1. Let g(t) = log((x^t − 1)/t).
(a) If x > 1, then g(t) is a strictly convex function on ℝ++;
(b) if 0 < x < 1, then g(t) is a strictly concave function on ℝ++.

Proof. By computation,

g″(t) = −(x^t log² x)/(x^t − 1)² + 1/t².

To prove that g″(t) > 0, it is equivalent to prove that

t² x^t (log x)² < (x^t − 1)².  (2.3.3)

Taking square roots on both sides of this inequality and dividing both sides by x^{t/2}, (2.3.3) becomes equivalent to

f(t) := x^{t/2} − x^{−t/2} − t log x > 0.

Since

f′(t) = (1/2) log x (x^{t/2} + x^{−t/2} − 2) > 0

when x > 1, f(t) is strictly increasing on (0, +∞), so f(t) > f(0) = 0 for t > 0, that is, g″(t) > 0. Similarly, we can prove (b).

Example 2.3.1. Let x > 0, x ≠ 1, m, n, k ∈ ℕ. Then

x^{kn−1} + x < x^{kn} + 1,  (2.3.4)
x^{kn+m} + x^{(k−1)m} < x^{km+n} + x^{(k−1)n},  (2.3.5)
n x^k + (k − 1)n < k x^n + (n − 1)k,  (2.3.6)
n x^m + m > m x^n + n.  (2.3.7)

Proof. For x > 0, x ≠ 1, g(t) = x^t is a strictly convex function on [0, +∞) because g″(t) = x^t (log x)² > 0. It is easy to see that (kn − 1, 1) ≺≺ (kn, 0), and from (2.3.2) we can prove (2.3.4). From (kn + m, (k − 1)m) ≺≺ (km + n, (k − 1)n), it follows that x^{kn+m} + x^{(k−1)m} < x^{km+n} + x^{(k−1)n}, which is inequality (2.3.5). From

(k, …, k, 0, …, 0) ≺≺ (n, …, n, 0, …, 0),

where k appears n times and 0 appears (k − 1)n times on the left, and n appears k times and 0 appears (n − 1)k times on the right (this requires n > k),

n

n

m

it follows that n x^m + m > m x^n + n, which is inequality (2.3.7).

Example 2.3.2. Let x > 1, n, k ∈ ℕ, n ≥ 2, k < n. Then

k(x^n − x^{−n}) ≥ n(x^k − x^{−k}).  (2.3.8)

Proof. Let g(t) = x^t − x^{−t}. For x > 1 and t > 0, we have g″(t) = (log x)²(x^t − x^{−t}) = (log x)²(x^{2t} − 1)x^{−t} ≥ 0, so g(t) is convex on (0, +∞). It is not hard to verify that

(k, …, k) ≺ (n, …, n, 0, …, 0),

where k appears n times on the left, and n appears k times and 0 appears n − k times on the right. From (2.3.1), it follows that (2.3.8) holds.
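The inequalities of Examples 2.3.1 and 2.3.2 are easy to spot-check numerically. The sketch below uses the equivalent forms derived in the proofs; the parameter constraints m > n > k ≥ 2 are an assumption added here, since the majorizations in the proofs implicitly require them:

```python
# Numerical spot-check of Examples 2.3.1 and 2.3.2, in the equivalent forms
# derived via majorization in the proofs.  The constraints m > n > k >= 2
# are assumed here, as the majorizations implicitly require n > k in (2.3.6)
# and m > n in (2.3.5), (2.3.7).
def check(x, k, n, m):
    assert x > 0 and x != 1 and m > n > k >= 2
    assert x**(k*n - 1) + x < x**(k*n) + 1                            # (2.3.4)
    assert x**(k*n + m) + x**((k-1)*m) < x**(k*m + n) + x**((k-1)*n)  # (2.3.5)
    assert n*x**k + (k-1)*n < k*x**n + (n-1)*k                        # (2.3.6)
    assert n*x**m + m > m*x**n + n                                    # (2.3.7)
    if x > 1:
        # Example 2.3.2 needs x > 1
        assert k*(x**n - x**(-n)) >= n*(x**k - x**(-k))               # (2.3.8)

for x in (0.3, 0.9, 1.5, 2.0):
    check(x, k=2, n=3, m=5)
print("Examples 2.3.1 and 2.3.2 verified")
```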


Example 2.3.3. Let x > 0, x ≠ 1, n ∈ ℕ, n ≥ 2. Then

∑_{k=0}^{n} x^k > ((n + 1)/(n − 1)) ∑_{k=1}^{n−1} x^k.  (2.3.9)

Proof. When x > 0, x ≠ 1, g(t) = x^t is strictly convex on [0, +∞). Noting that

(1, …, 1, 2, …, 2, …, n − 1, …, n − 1) ≺≺ (0, …, 0, 1, …, 1, …, n, …, n),

where on the left each of 1, 2, …, n − 1 appears n + 1 times, and on the right each of 0, 1, …, n appears n − 1 times, from (2.3.2) we have

∑_{k=0}^{n} (n − 1)x^k > ∑_{k=1}^{n−1} (n + 1)x^k.

k2 2 2 )(x n − 1) < (x n−k − 1)(xn+k − 1) < (xn − 1) . n2

(2.3.10)

Proof. Since xt is strictly convex on [0, +∞), from (n, n) ≺≺ (n + k, n − k), it follows that 2xn < xn+k + xn−k ; this is equivalent to the right inequality in (2.3.10). For the left inequality in (2.3.10) we divide the proof into two cases. t (I) When x > 1, it is known from Lemma 2.3.1(a) that g(t) = log x t−1 is a strictly convex function on ℝ++ . From (2.3.2) and (n, n) ≺≺ (n + k, n − k), we have 2 log

xn−k − 1 xn+k − 1 xn − 1 < log + log ; n n−1 n+1

this inequality is equivalent to the left inequality in (2.3.10). (II) When 0 < x < 1, by Lemma 2.3.1(b), we also know that the left inequality in (2.3.10) holds. Example 2.3.5 ([46, p. 120]). Let 0 ≤ x ≤ 1. Then (2n + 1)xn (1 − x) ≤ 1 − x2n+1 ,

(2.3.11)

and when x ≠ 1, the inequality holds strictly. Proof. When x = 0 or 1, (2.3.11) clearly holds. When 0 < x < 1, since xt is strictly convex on [0, +∞), from (n, . . . , n) ≺≺ (0, 1, . . . , 2n), ⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟ 2n+1

64 | 2 Definitions and properties of Schur-convex functions it follows that (2n + 1)x n < x2n + x2n−1 + ⋅ ⋅ ⋅ + x + 1. Multiply 1 − x with both sides and note that 1 − x2n+1 = (1 − x)(x2n + x2n−1 + ⋅ ⋅ ⋅ + x + 1) immediately yields (2.3.11).

2.3.2 Refinements of an inequality for rational fractions

The monograph [48, pp. 150–151] states the following inequality for rational fractions. Let x ≥ 0, m > n, P_n(x) = ∑_{k=0}^{n} x^k, and write f(x) = P_m(x)/P_n(x), g(x) = f(x)x^{n−m}. Then

1 < g(x)
d²φ(t)/dt² = x^t (log x)² > 0, so φ(t) is a strictly convex function on ℝ. By Theorem 1.5.2(b), from (2.3.17), it follows that

x^{n∗} + (n + 1)P_m(x) x^n < (m + 1)x^m P_n(x) + x^n.

Dividing both sides by (n + 1)x^m P_n(x) yields

x^{n∗}/((n + 1)x^m P_n(x)) + g(x) < (m + 1)/(n + 1) + x^n/((n + 1)x^m P_n(x)),

that is,

g(x) < (m + 1)/(n + 1) − x^{n−m}(1 − x^{n∗−n})/((n + 1)P_n(x)).

This proves the fifth inequality of (2.3.14). Next, we prove (2.3.15). We only need to prove the second and fifth inequalities of (2.3.15). Note that for 0 < x < 1, we have 1 < 1/x < ∞ and g(x) = f(1/x). Replacing x by 1/x, the second and fifth inequalities of (2.3.15) are obtained from the second and fifth inequalities of (2.3.14), respectively.

Theorem 2.3.2. The conditions are the same as those of Theorem 2.3.1. Then

1 < g(x) < (m + 1)/(n + 1) − (m + 1)(x^{(m−n)/2} − 1)²/(x^m P_n(x))  (2.3.20)

m−n