SOLUTION MANUAL FOR Applied Functional Analysis, Third Edition
by
J. Tinsley Oden Leszek F. Demkowicz
Boca Raton London New York
CRC Press is an imprint of the Taylor & Francis Group, an informa business
CRC Press, Taylor & Francis Group, 6000 Broken Sound Parkway NW, Suite 300, Boca Raton, FL 33487-2742. © 2018 by Taylor & Francis Group, LLC. CRC Press is an imprint of Taylor & Francis Group, an Informa business. No claim to original U.S. Government works. Printed on acid-free paper. Version Date: 20171011. International Standard Book Number-13: 978-1-4987-6114-7 (Hardback).

This book contains information obtained from authentic and highly regarded sources. Reasonable efforts have been made to publish reliable data and information, but the author and publisher cannot assume responsibility for the validity of all materials or the consequences of their use. The authors and publishers have attempted to trace the copyright holders of all material reproduced in this publication and apologize to copyright holders if permission to publish in this form has not been obtained. If any copyright material has not been acknowledged please write and let us know so we may rectify in any future reprint.

Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced, transmitted, or utilized in any form by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying, microfilming, and recording, or in any information storage or retrieval system, without written permission from the publishers. For permission to photocopy or use material electronically from this work, please access www.copyright.com (http://www.copyright.com/) or contact the Copyright Clearance Center, Inc. (CCC), 222 Rosewood Drive, Danvers, MA 01923, 978-750-8400. CCC is a not-for-profit organization that provides licenses and registration for a variety of users. For organizations that have been granted a photocopy license by the CCC, a separate system of payment has been arranged.

Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation without intent to infringe.
Visit the Taylor & Francis Web site at http://www.taylorandfrancis.com and the CRC Press Web site at http://www.crcpress.com
1 Preliminaries

Elementary Logic and Set Theory

1.1 Sets and Preliminary Notations, Number Sets
Exercises

Exercise 1.1.1 If ℤ = {. . . , −2, −1, 0, 1, 2, . . .} denotes the set of all integers and ℕ = {1, 2, 3, . . .} the set of all natural numbers, exhibit the following sets in the form A = {a, b, c, . . .}:

(i) {x ∈ ℤ : x² − 2x + 1 = 0}
(ii) {x ∈ ℤ : 4 ≤ x ≤ 10}
(iii) {x ∈ ℕ : x² < 10}

(i) {1}  (ii) {4, 5, 6, 7, 8, 9, 10}  (iii) {1, 2, 3}
1.2 Level One Logic
Exercises

Exercise 1.2.1 Construct the truth table for De Morgan’s Law: ∼(p ∧ q) ⇔ ((∼p) ∨ (∼q))

p q | p ∧ q | ∼(p ∧ q) | ∼p | ∼q | (∼p) ∨ (∼q) | ⇔
0 0 |   0   |    1     |  1 |  1 |      1      | 1
0 1 |   0   |    1     |  1 |  0 |      1      | 1
1 0 |   0   |    1     |  0 |  1 |      1      | 1
1 1 |   1   |    0     |  0 |  0 |      0      | 1
Exercise 1.2.2 Construct truth tables to prove the following tautologies:

(p ⇒ q) ⇔ (∼q ⇒ ∼p)
∼(p ⇒ q) ⇔ p ∧ ∼q

p q | p ⇒ q | ∼q | ∼p | ∼q ⇒ ∼p | ⇔
0 0 |   1   |  1 |  1 |    1    | 1
0 1 |   1   |  0 |  1 |    1    | 1
1 0 |   0   |  1 |  0 |    0    | 1
1 1 |   1   |  0 |  0 |    1    | 1

p q | p ⇒ q | ∼(p ⇒ q) | ∼q | p ∧ ∼q | ⇔
0 0 |   1   |    0     |  1 |   0    | 1
0 1 |   1   |    0     |  0 |   0    | 1
1 0 |   0   |    1     |  1 |   1    | 1
1 1 |   1   |    0     |  0 |   0    | 1
Exercise 1.2.3 Construct truth tables to prove the associative laws in logic:

p ∨ (q ∨ r) ⇔ (p ∨ q) ∨ r
p ∧ (q ∧ r) ⇔ (p ∧ q) ∧ r

p q r | q ∨ r | p ∨ (q ∨ r) | p ∨ q | (p ∨ q) ∨ r | ⇔
0 0 0 |   0   |      0      |   0   |      0      | 1
0 0 1 |   1   |      1      |   0   |      1      | 1
0 1 0 |   1   |      1      |   1   |      1      | 1
0 1 1 |   1   |      1      |   1   |      1      | 1
1 0 0 |   0   |      1      |   1   |      1      | 1
1 0 1 |   1   |      1      |   1   |      1      | 1
1 1 0 |   1   |      1      |   1   |      1      | 1
1 1 1 |   1   |      1      |   1   |      1      | 1

p q r | q ∧ r | p ∧ (q ∧ r) | p ∧ q | (p ∧ q) ∧ r | ⇔
0 0 0 |   0   |      0      |   0   |      0      | 1
0 0 1 |   0   |      0      |   0   |      0      | 1
0 1 0 |   0   |      0      |   0   |      0      | 1
0 1 1 |   1   |      0      |   0   |      0      | 1
1 0 0 |   0   |      0      |   0   |      0      | 1
1 0 1 |   0   |      0      |   0   |      0      | 1
1 1 0 |   0   |      0      |   1   |      0      | 1
1 1 1 |   1   |      1      |   1   |      1      | 1
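Truth tables like the ones above can also be checked mechanically. The manual itself contains no code; the following is a small brute-force tautology tester, a sketch in Python (the helper name `is_tautology` is ours):

```python
from itertools import product

def is_tautology(f, nvars):
    """Brute-force check: f maps a tuple of booleans to a boolean."""
    return all(f(*vals) for vals in product([False, True], repeat=nvars))

# De Morgan's law: ~(p & q) <=> (~p | ~q)
assert is_tautology(lambda p, q: (not (p and q)) == ((not p) or (not q)), 2)
# Contrapositive: (p => q) <=> (~q => ~p); here p => q is written (not p) or q
assert is_tautology(lambda p, q: ((not p) or q) == (q or (not p)), 2)
# Negated implication: ~(p => q) <=> p & ~q
assert is_tautology(lambda p, q: (not ((not p) or q)) == (p and not q), 2)
# Associativity of disjunction and conjunction
assert is_tautology(lambda p, q, r: (p or (q or r)) == ((p or q) or r), 3)
assert is_tautology(lambda p, q, r: (p and (q and r)) == ((p and q) and r), 3)
print("all tautologies verified")
```

Each assertion enumerates all 2ⁿ rows of the corresponding truth table, exactly as done by hand above.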
1.3 Algebra of Sets
Exercises

Exercise 1.3.1 Of 100 students polled at a certain university, 40 were enrolled in an engineering course, 50 in a mathematics course, and 64 in a physics course. Of these, only 3 were enrolled in all three subjects, 10 were enrolled only in mathematics and engineering, 35 were enrolled only in physics and mathematics, and 18 were enrolled only in engineering and physics.

(i) How many students were enrolled only in mathematics?
(ii) How many of the students were not enrolled in any of these three subjects?

Let A, B, C denote the subsets of students enrolled in mathematics, engineering, and physics, respectively. The sets A ∩ B ∩ C, A ∩ B − (A ∩ B ∩ C), A ∩ C − (A ∩ B ∩ C), and A − (B ∪ C) are pairwise disjoint (no two of them have a nonempty common part) and their union equals set A, see Fig. 1.1. Consequently,

#(A − (B ∪ C)) = #A − #(A ∩ B ∩ C) − #(A ∩ B − (A ∩ B ∩ C)) − #(A ∩ C − (A ∩ B ∩ C)) = 50 − 3 − 10 − 35 = 2

In the same way we compute, #(B − (A ∪ C)) = 9
and
#(C − (A ∪ B)) = 8
Thus, the total number of students enrolled is

#(A − (B ∪ C)) + #(B − (A ∪ C)) + #(C − (A ∪ B)) + #(A ∩ B − C) + #(A ∩ C − B) + #(B ∩ C − A) + #(A ∩ B ∩ C) = 2 + 9 + 8 + 10 + 35 + 18 + 3 = 85

Consequently, 15 students did not enroll in any of the three classes.

Exercise 1.3.2 List all of the subsets of A = {1, 2, 3, 4}. Note: A and ∅ are considered to be subsets of A.

∅, {1}, {2}, {1, 2}, {3}, {1, 3}, {2, 3}, {1, 2, 3}, {4}, {1, 4}, {2, 4}, {1, 2, 4}, {3, 4}, {1, 3, 4}, {2, 3, 4}, {1, 2, 3, 4}
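Both results can be cross-checked by direct enumeration; a quick sketch in Python (the variable names are ours, chosen to mirror the Venn-diagram bookkeeping above):

```python
from itertools import combinations, chain

# Exercise 1.3.2: all subsets of {1, 2, 3, 4}
A = [1, 2, 3, 4]
subsets = list(chain.from_iterable(combinations(A, k) for k in range(len(A) + 1)))
assert len(subsets) == 16  # 2**4

# Exercise 1.3.1: the same region-by-region count as above
only_me, only_pm, only_ep, all_three = 10, 35, 18, 3
only_m = 50 - only_me - only_pm - all_three   # only mathematics
only_e = 40 - only_me - only_ep - all_three   # only engineering
only_p = 64 - only_pm - only_ep - all_three   # only physics
total = only_m + only_e + only_p + only_me + only_pm + only_ep + all_three
assert (only_m, only_e, only_p, total) == (2, 9, 8, 85)
print(100 - total, "students in none of the three courses")  # 15
```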
Figure 1.1 Illustration of Exercise 1.3.1.

Exercise 1.3.3 Construct Venn diagrams to illustrate the idempotent, commutative, associative, distributive, and identity laws. Note: some of these are trivially illustrated.

This is a very simple exercise. For example, Fig. 1.2 illustrates the associative law for the union of sets.
Figure 1.2 Venn diagrams illustrating the associative law for the union of sets.
Exercise 1.3.4 Construct Venn diagrams to illustrate De Morgan’s Laws.

Follow Exercise 1.3.3.

Exercise 1.3.5 Prove the distributive laws.
x ∈ A ∩ (B ∪ C)
⇕ (definition of intersection of sets)
x ∈ A and x ∈ (B ∪ C)
⇕ (definition of union of sets)
x ∈ A and (x ∈ B or x ∈ C)
⇕ (tautology: p ∧ (q ∨ r) ⇔ (p ∧ q) ∨ (p ∧ r))
(x ∈ A and x ∈ B) or (x ∈ A and x ∈ C)
⇕ (definition of intersection of sets)
x ∈ (A ∩ B) or x ∈ (A ∩ C)
⇕ (definition of union of sets)
x ∈ (A ∩ B) ∪ (A ∩ C)

In the same way,

x ∈ A ∪ (B ∩ C)
⇕ (definition of union of sets)
x ∈ A or x ∈ (B ∩ C)
⇕ (definition of intersection of sets)
x ∈ A or (x ∈ B and x ∈ C)
⇕ (tautology: p ∨ (q ∧ r) ⇔ (p ∨ q) ∧ (p ∨ r))
(x ∈ A or x ∈ B) and (x ∈ A or x ∈ C)
⇕ (definition of union of sets)
x ∈ (A ∪ B) and x ∈ (A ∪ C)
⇕ (definition of intersection of sets)
x ∈ (A ∪ B) ∩ (A ∪ C)

Exercise 1.3.6 Prove the identity laws.

In each case, one first has to identify and prove the corresponding logical law. For instance, using the truth tables, we first verify that, if f denotes a false statement, then

p ∨ f ⇔ p

for an arbitrary statement p. This tautology then provides the basis for the corresponding identity law in the algebra of sets:

A ∪ ∅ = A

Indeed,

x ∈ (A ∪ ∅)
⇕ (definition of union of sets)
x ∈ A or x ∈ ∅
⇕ (tautology above)
x ∈ A

The remaining three proofs are analogous.

Exercise 1.3.7 Prove the second of De Morgan’s Laws.
x ∈ A − (B ∩ C)
⇕ (definition of difference of sets)
x ∈ A and x ∉ (B ∩ C)
⇕ (x ∉ D ⇔ ∼(x ∈ D))
x ∈ A and ∼(x ∈ B ∩ C)
⇕ (definition of intersection)
x ∈ A and ∼(x ∈ B ∧ x ∈ C)
⇕ (tautology: p ∧ ∼(q ∧ r) ⇔ (p ∧ ∼q) ∨ (p ∧ ∼r))
(x ∈ A and x ∉ B) or (x ∈ A and x ∉ C)
⇕ (definition of difference of sets)
x ∈ (A − B) or x ∈ (A − C)
⇕ (definition of union)
x ∈ (A − B) ∪ (A − C)
Exercise 1.3.8 Prove that (A − B) ∩ B = ∅.

The empty set is a subset of any set, so the inclusion ∅ ⊂ (A − B) ∩ B is obviously satisfied. To prove the converse, notice that the statement x ∈ ∅ is equivalent to the statement that x does not exist. Suppose now, to the contrary, that there exists an x such that x ∈ (A − B) ∩ B. Then

x ∈ (A − B) ∩ B
⇓ (definition of intersection)
x ∈ (A − B) and x ∈ B
⇓ (definition of difference)
(x ∈ A and x ∉ B) and x ∈ B
⇓ (associative law for conjunction)
x ∈ A and (x ∉ B and x ∈ B)
⇓ (p ∧ ∼p is false)
x ∈ A and x ∈ ∅
⇓ (identity law for conjunction)
x ∈ ∅

In fact, the statements above are equivalent.

Exercise 1.3.9 Prove that B − A = B ∩ A′.

x ∈ B − A
⇕ (definition of difference)
x ∈ B and x ∉ A
⇕ (definition of complement)
x ∈ B and x ∈ A′
⇕ (definition of intersection)
x ∈ B ∩ A′
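The set identities proved in this section can be sanity-checked on randomly generated finite sets; a sketch in Python (the universe `U` and the random subsets are our test fixtures, not part of the exercises):

```python
import random

random.seed(0)
U = set(range(20))
for _ in range(100):
    A = {x for x in U if random.random() < 0.5}
    B = {x for x in U if random.random() < 0.5}
    C = {x for x in U if random.random() < 0.5}
    # distributive laws (Exercise 1.3.5)
    assert A & (B | C) == (A & B) | (A & C)
    assert A | (B & C) == (A | B) & (A | C)
    # second De Morgan law in difference form (Exercise 1.3.7)
    assert A - (B & C) == (A - B) | (A - C)
    # Exercises 1.3.8 and 1.3.9 (complement taken within U)
    assert (A - B) & B == set()
    assert B - A == B & (U - A)
print("all identities hold on random sets")
```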
1.4 Level Two Logic
Exercises

Exercise 1.4.1 Use Mathematical Induction to derive and prove a formula for the sum of squares of the first n positive integers:

Σ_{i=1}^n i² = 1 + 2² + . . . + n²

This is an “inverse engineering” problem. Based on elementary integration formulas for polynomials, we expect the formula to take the form:

Σ_{i=1}^n i² = (αn³ + βn² + γn + δ)/A

In the proof by induction, we will need to show that:

Σ_{i=1}^n i² + (n + 1)² = Σ_{i=1}^{n+1} i²

This leads to the identity:

(αn³ + βn² + γn + δ)/A + (n + 1)² = (α(n + 1)³ + β(n + 1)² + γ(n + 1) + δ)/A

Comparing coefficients in front of n³, n², n, 1 on both sides, we get the relations:

A = 3α,  2A = 3α + 2β,  A = α + β + γ

This leads to α = A/3, β = A/2, γ = A/6. Choosing A = 6, we get α = 2, β = 3, γ = 1. Validity of the formula for n = 1 implies that δ = 0.

Exercise 1.4.2 Use mathematical induction to prove that the power set of a set U with n elements has 2ⁿ elements:

#U = n
⇒  #P(U) = 2ⁿ
The hash symbol # replaces the phrase “number of elements of.”

• T(1). Let U = {a}. Then P(U) = {∅, {a}}, so #P(U) = 2.

• T(n) ⇒ T(n + 1). Assume the statement has been proved for every set with n elements. Let #U = n + 1. Pick an arbitrary element a from set U. The power set of U can then be split into two families: subsets that do not contain element a and subsets that do contain a:

P(U) = A ∪ B

where

A = {A ⊂ U : a ∉ A},  B = {B ⊂ U : a ∈ B}
The two families are disjoint, so #P(U) = #A + #B. But A = P(U − {a}) so, by the assumption of mathematical induction, family A has 2ⁿ elements. On the other side,

B ∈ B ⇔ B − {a} ∈ P(U − {a})

so family B also has exactly 2ⁿ elements. Consequently, power set P(U) has 2ⁿ + 2ⁿ = 2ⁿ⁺¹ elements, and the proof is finished.

Another way to see the result is to recall Newton’s formula:

(a + b)ⁿ = C(n, 0) aⁿb⁰ + C(n, 1) aⁿ⁻¹b¹ + . . . + C(n, n−1) a¹bⁿ⁻¹ + C(n, n) a⁰bⁿ

In the particular case of a = b = 1, Newton’s formula reduces to the identity:

2ⁿ = C(n, 0) + C(n, 1) + . . . + C(n, n−1) + C(n, n)

Recall that Newton’s symbol C(n, k) (the binomial coefficient) represents the number of k-combinations of n elements, i.e., the number of different subsets with k elements of a set with n elements. As all subsets of a set U with n elements can be partitioned into subfamilies of subsets with k elements, k = 0, 1, . . . , n, the right-hand side of the identity above clearly represents the number of all possible subsets of set U. Obviously, in order to prove the formula above, we may have to use mathematical induction as well.
1.5 Infinite Unions and Intersections
Exercises

Exercise 1.5.1 Let B(a, r) denote an open ball centered at a with radius r:

B(a, r) = {x : d(x, a) < r}

Here a, x are points in the Euclidean space and d(x, a) denotes the (Euclidean) distance between the points. Similarly, let B̄(a, r) denote a closed ball centered at a with radius r:

B̄(a, r) = {x : d(x, a) ≤ r}

Notice that the open ball does not include the points on the sphere with radius r, whereas the closed ball does.
Determine the following infinite unions and intersections:

⋃ B(a, r),  ⋂ B(a, r)

[. . .]

. . . we can find an infinite number of k’s such that d(x_k, x) < ε = 1/l. Proceed by induction. Pick any k₁ such that d(x_{k₁}, x) < ε = 1. Given k₁, . . . , k_l, pick k_{l+1} ≠ k₁, . . . , k_l (you have an infinite number of indices to pick from) such that d(x_{k_{l+1}}, x) < ε = 1/(l + 1). By construction, subsequence x_{k_l} → x.
Exercise 1.18.6 Let x_k = (x_{k1}, x_{k2}) be a sequence in ℝ² given by the formula

x_{ki} = (−1)^{k+i} (k + 1)/k,  i = 1, 2,  k ∈ ℕ

Determine the cluster points of the sequence.

For k even, the corresponding subsequence converges to (−1, 1). For k odd, the corresponding subsequence converges to (1, −1). These are the only two cluster points of the sequence.

Exercise 1.18.7 Calculate lim inf and lim sup of the following sequence in ℝ:

a_n = n/(n + 3) for n = 3k,  a_n = n²/(n + 3) for n = 3k + 1,  a_n = n²/(n + 3)² for n = 3k + 2

where k ∈ ℕ.

Each of the three subsequences is convergent, a_{3k} → 1, a_{3k+1} → ∞, and a_{3k+2} → 1. There are only two cluster points of the sequence, 1 and ∞. Consequently, lim sup a_n = ∞ and lim inf a_n = 1.

Exercise 1.18.8 Formulate and prove a theorem analogous to Proposition 1.17.2 for limit superior.

Follow precisely the reasoning in the text.

Exercise 1.18.9 Establish the convergence or divergence of the sequences {x_n}, where

(a) x_n = n²/(1 + n²)  (b) x_n = sin(n)  (c) x_n = (3n² + 2)/(1 + 3n²)  (d) x_n = (−1)ⁿ n²/(1 + n²)

Sequences (a) and (c) converge, (b) and (d) diverge.

Exercise 1.18.10 Let x₁ ∈ ℝ be > 1 and let x₂ = 2 − 1/x₁, . . . , x_{n+1} = 2 − 1/x_n. Show that this sequence converges and determine its limit.

For x₁ > 1,
1/x₁ < 1, so x₂ = 2 − 1/x₁ > 1. By the same argument, if x_n > 1 then x_{n+1} > 1. By induction, x_n is bounded below by 1. Also, if x > 1 then

2 − 1/x ≤ x
Indeed, multiply both sides of the inequality by x to obtain an equivalent inequality 2x − 1 ≤ x², which in turn is implied by (x − 1)² ≥ 0. The sequence is thus decreasing. By the Monotone Sequence Lemma, it must have a limit x. The value of x may be computed by passing to the limit in the recursive relation. We get

x = 2 − 1/x

which results in x = 1.
1.19 Limits and Continuity
Exercises

Exercise 1.19.1 Prove Proposition 1.18.2.

(i) ⇒ (ii). Let G be an open set. Let x ∈ f⁻¹(G). We need to show that x is an interior point of f⁻¹(G). Since G is open and f(x) ∈ G, there exists an open ball B(f(x), ε) ⊂ G. Continuity of f implies that there exists an open ball B(x, δ) such that

B(x, δ) ⊂ f⁻¹(B(f(x), ε)) ⊂ f⁻¹(G)

This proves that x is an interior point of f⁻¹(G).

(ii) ⇒ (i). Take an open ball B(f(x), ε) neighborhood of f(x). The inverse image f⁻¹(B(f(x), ε)) being open implies that there exists a ball B(x, δ) such that

B(x, δ) ⊂ f⁻¹(B(f(x), ε))

or, equivalently,

f(B(x, δ)) ⊂ B(f(x), ε)

which proves the continuity of f at x.

(ii) ⇔ (iii). This follows from the duality principle (the complement of a set is open iff the set is closed) and the identity

f⁻¹(ℝᵐ − G) = f⁻¹(ℝᵐ) − f⁻¹(G) = ℝⁿ − f⁻¹(G)

Exercise 1.19.2 Let g ∘ f denote the composition of a function f : ℝⁿ → ℝᵐ and g : ℝᵐ → ℝᵏ. Prove that if f is continuous at x₀ and g is continuous at f(x₀), then g ∘ f is continuous at x₀.
Pick an open ball B(g(f(x₀)), ε) neighborhood of g(f(x₀)). By continuity of function g, there exists an open ball B(f(x₀), δ) neighborhood of f(x₀) such that

g(B(f(x₀), δ)) ⊂ B(g(f(x₀)), ε)

In turn, by continuity of f, there exists an open ball B(x₀, α) neighborhood of x₀ such that

f(B(x₀, α)) ⊂ B(f(x₀), δ)

Consequently,

(g ∘ f)(B(x₀, α)) ⊂ g(B(f(x₀), δ)) ⊂ B(g(f(x₀)), ε)

which proves the continuity of the composition g ∘ f.

Exercise 1.19.3 Let f, g : ℝⁿ → ℝᵐ be two continuous functions. Prove that the linear combination of f, g defined as

(αf + βg)(x) = αf(x) + βg(x)

is also continuous.

Let d denote the Euclidean distance in ℝᵏ (k = n, m), and ‖ · ‖ the corresponding Euclidean norm (comp. Exercise 1.16.8). We have,

d(αf(x) + βg(x), αf(x₀) + βg(x₀)) = ‖αf(x) + βg(x) − (αf(x₀) + βg(x₀))‖
= ‖α(f(x) − f(x₀)) + β(g(x) − g(x₀))‖
≤ |α| ‖f(x) − f(x₀)‖ + |β| ‖g(x) − g(x₀)‖
= |α| d(f(x), f(x₀)) + |β| d(g(x), g(x₀))

Pick an arbitrary ε > 0. Continuity of f implies that there exists δ₁ such that

d(x, x₀) < δ₁  ⇒  d(f(x), f(x₀))
[. . .]

Case x < c is treated analogously. Define a polynomial

φ(x) = f(c) + f′(c)(x − c)/1! + f″(c)(x − c)²/2! + · · · + f⁽ⁿ⁾(c)(x − c)ⁿ/n! + A(x − c)ⁿ⁺¹
and select the constant A in such a way that f(x) matches φ(x) at x = b. By Rolle’s Theorem, there exists an intermediate point ξ₁ such that

(f − φ)′(ξ₁) = 0

But we also have (f − φ)′(c) = 0 so, by Rolle’s Theorem again, there exists an intermediate point ξ₂ ∈ (c, ξ₁) such that

(f − φ)″(ξ₂) = 0
Continuing in this manner, we arrive at the existence of a point ξₙ ∈ (c, x) such that

(f − φ)⁽ⁿ⁺¹⁾(ξₙ) = f⁽ⁿ⁺¹⁾(ξₙ) − A(n + 1)! = 0

Solving for A we get the final result.

Exercise 1.20.3 Let f be differentiable on (a, b). Prove the following:

(i) If f′(x) = 0 on (a, b), then f(x) = constant on (a, b).

Pick arbitrary c, x ∈ (a, b), c ≠ x. By the Lagrange Mean-Value Theorem,

∃ξ ∈ (c, x) : f(x) − f(c) = f′(ξ)(x − c) = 0

i.e., f(x) = f(c). Since x and c were arbitrary points, the function must be constant.

(ii) If f′(x) = g′(x) on (a, b), then f(x) − g(x) = constant.

Apply (i) to f(x) − g(x).

(iii) If f′(x) < 0 ∀ x ∈ (a, b) and if x₁ < x₂ ∈ (a, b), then f(x₁) > f(x₂).

Apply the Lagrange Mean-Value Theorem,

(f(x₂) − f(x₁))/(x₂ − x₁) = f′(ξ) < 0  ⇒  f(x₂) − f(x₁) < 0

(iv) If |f′(x)| ≤ M < ∞ on (a, b), then

|f(x₁) − f(x₂)| ≤ M |x₁ − x₂|  ∀ x₁, x₂ ∈ (a, b)

Again, by the Lagrange Mean-Value Theorem, f(x₁) − f(x₂) = f′(ξ)(x₁ − x₂) for some ξ ∈ (x₁, x₂). Take the absolute value on both sides to obtain

|f(x₁) − f(x₂)| = |f′(ξ)| |x₁ − x₂| ≤ M |x₁ − x₂|

Exercise 1.20.4 Let f and g be continuous on [a, b] and differentiable on (a, b). Prove that there exists a point c ∈ (a, b) such that f′(c)(g(b) − g(a)) = g′(c)(f(b) − f(a)). This result is sometimes called the Cauchy Mean-Value Theorem.
Hint: Consider the function h(x) = (g(b) − g(a))(f(x) − f(a)) − (g(x) − g(a))(f(b) − f(a)). Repeat the reasoning from the proof of the Lagrange Mean-Value Theorem.

We have: h(a) = h(b) = 0. By Rolle’s Theorem, there exists c ∈ (a, b) such that

h′(c) = (g(b) − g(a))f′(c) − g′(c)(f(b) − f(a)) = 0

Exercise 1.20.5 Prove L’Hôpital’s rule: If f(x) and g(x) are differentiable on (a, b), with g′(x) ≠ 0 ∀x ∈ (a, b), and if f(c) = g(c) = 0 and the limit K = lim_{x→c} f′(x)/g′(x) exists, then lim_{x→c} f(x)/g(x) = K. Hint: Use the result of Exercise 1.20.4.

According to the Cauchy Mean-Value Theorem, there exists ξ ∈ (c, x) such that

f′(ξ)(g(x) − g(c)) = g′(ξ)(f(x) − f(c))

or,

f′(ξ)/g′(ξ) = f(x)/g(x)

With x → c, the intermediate point ξ converges to c as well. As the limit on the left-hand side exists, the right-hand side has a limit as well, and the two limits are equal.
Exercise 1.20.6 Let f and g be Riemann integrable on I = [a, b]. Show that for any real numbers α and β, αf + βg is integrable, and

∫_a^b (αf + βg) dx = α ∫_a^b f dx + β ∫_a^b g dx

Let P be an arbitrary partition of I,

a = x₀ ≤ x₁ ≤ x₂ ≤ · · · ≤ xₙ = b

and ξₖ, k = 1, . . . , n arbitrary intermediate points. We have the following simple relation between the Riemann sums of functions f, g and αf + βg:

R(P, αf + βg) = Σ_{k=1}^n (αf(ξₖ) + βg(ξₖ))(xₖ − xₖ₋₁)
= α Σ_{k=1}^n f(ξₖ)(xₖ − xₖ₋₁) + β Σ_{k=1}^n g(ξₖ)(xₖ − xₖ₋₁)
= α R(P, f) + β R(P, g)

Thus, if the Riemann sums on the right-hand side converge, the sum on the left-hand side converges as well, and the two limits (integrals) are equal.
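The identity R(P, αf + βg) = αR(P, f) + βR(P, g) holds exactly for any partition and any choice of intermediate points, and can be illustrated numerically; a sketch in Python (the specific f, g, α, β below are our choices):

```python
import math
import random

def riemann_sum(h, a, b, n, tags):
    """Riemann sum of h over a uniform partition with given intermediate points."""
    dx = (b - a) / n
    return sum(h(t) * dx for t in tags)

random.seed(1)
a, b, n = 0.0, 1.0, 100_000
# one set of intermediate points shared by f, g and alpha*f + beta*g
tags = [a + (k + random.random()) * (b - a) / n for k in range(n)]
f, g = math.sin, math.exp
alpha, beta = 2.0, -3.0
lhs = riemann_sum(lambda x: alpha * f(x) + beta * g(x), a, b, n, tags)
rhs = alpha * riemann_sum(f, a, b, n, tags) + beta * riemann_sum(g, a, b, n, tags)
assert abs(lhs - rhs) < 1e-8   # exact linearity, up to floating-point roundoff
# and both approximate the integral alpha*(1 - cos 1) + beta*(e - 1)
assert abs(lhs - (alpha * (1 - math.cos(1)) + beta * (math.e - 1))) < 1e-3
print(lhs)
```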
Exercise 1.20.7 Let f and g be continuous on [a, b] and suppose that F and G are primitive functions of f and g, respectively, i.e., F′(x) = f(x) and G′(x) = g(x) ∀ x ∈ [a, b]. Prove the integration-by-parts formula:

∫_a^b F(x)g(x) dx = F(b)G(b) − F(a)G(a) − ∫_a^b f(x)G(x) dx

Integrate both sides of

[F(x)G(x)]′ = f(x)G(x) + F(x)g(x)

between a and b to obtain

F(b)G(b) − F(a)G(a) = ∫_a^b f(x)G(x) dx + ∫_a^b F(x)g(x) dx
Exercise 1.20.8 Prove that if f is Riemann integrable on [a, c], [c, b], and [a, b], then

∫_a^b f dx = ∫_a^c f dx + ∫_c^b f dx,  a < c < b
[. . .]

Thus,

δᵢⱼ = ⟨e**ᵢ, e*ⱼ⟩_{V**×V*}

and

⟨ι(eᵢ), e*ⱼ⟩_{V**×V*} = ⟨e*ⱼ, eᵢ⟩_{V*×V} = δⱼᵢ = δᵢⱼ

The relation then follows from the uniqueness of the (bi)dual basis.
2.11 Transpose of a Linear Transformation
Exercises

Exercise 2.11.1 The following is a “sanity check” of your understanding of concepts discussed in the last two sections. Consider ℝ².

(a) Prove that a₁ = (1, 0), a₂ = (1, 1) is a basis in ℝ².

It is sufficient to show linear independence. Any n linearly independent vectors in an n-dimensional vector space provide a basis for the space. The vectors are clearly not collinear, so they are linearly independent. Formally,

α₁a₁ + α₂a₂ = (α₁ + α₂, α₂) = (0, 0)

implies α₁ = α₂ = 0, so the vectors are linearly independent.

(b) Consider a functional f : ℝ² → ℝ, f(x₁, x₂) = 2x₁ + 3x₂. Prove that the functional is linear, and determine its components in the dual basis a*₁, a*₂.

Linearity is trivial. Dual basis functionals return components with respect to the original basis,

a*ⱼ(ξ₁a₁ + ξ₂a₂) = ξⱼ

It is, therefore, sufficient to determine ξ₁, ξ₂. We have,

ξ₁a₁ + ξ₂a₂ = ξ₁e₁ + ξ₂(e₁ + e₂) = (ξ₁ + ξ₂)e₁ + ξ₂e₂

so x₁ = ξ₁ + ξ₂ and x₂ = ξ₂. Inverting, we get ξ₁ = x₁ − x₂, ξ₂ = x₂. These are the dual basis functionals. Consequently,

f(x₁, x₂) = 2x₁ + 3x₂ = 2(ξ₁ + ξ₂) + 3ξ₂ = 2ξ₁ + 5ξ₂ = (2a*₁ + 5a*₂)(x₁, x₂)

Using the argumentless notation, f = 2a*₁ + 5a*₂.

If you are not interested in the form of the dual basis functionals, you can compute the components of f with respect to the dual basis faster. Assume α₁a*₁ + α₂a*₂ = f. Evaluating both sides at x = a₁ we get,

(α₁a*₁ + α₂a*₂)(a₁) = α₁ = f(a₁) = f(1, 0) = 2

Similarly, evaluating at x = a₂, we get α₂ = 5.

(c) Consider a linear map A : ℝ² → ℝ² whose matrix representation in basis a₁, a₂ is

[ 1 0 ]
[ 1 2 ]
Compute the matrix representation of the transpose operator with respect to the dual basis.

Nothing to compute. The matrix representation of the transpose operator with respect to the dual basis is equal to the transpose of the original matrix,

[ 1 1 ]
[ 0 2 ]

Exercise 2.11.2 Prove Proposition 2.11.3.

All five properties of the matrices are directly related to the properties of linear transformations discussed in Proposition 2.11.1 and Proposition 2.11.2. They can also be easily verified directly.

(i) (αAᵢⱼ + βBᵢⱼ)ᵀ = αAⱼᵢ + βBⱼᵢ = α(Aᵢⱼ)ᵀ + β(Bᵢⱼ)ᵀ
(ii) (Σ_{l=1}^n Bᵢₗ Aₗⱼ)ᵀ = Σ_{l=1}^n Bⱼₗ Aₗᵢ = Σ_{l=1}^n Aₗᵢ Bⱼₗ = Σ_{l=1}^n (Aᵢₗ)ᵀ(Bₗⱼ)ᵀ

(iii) (δᵢⱼ)ᵀ = δⱼᵢ = δᵢⱼ.
(iv) Follow the reasoning for linear transformations:

AA⁻¹ = I  ⇒  (A⁻¹)ᵀAᵀ = Iᵀ = I
A⁻¹A = I  ⇒  Aᵀ(A⁻¹)ᵀ = Iᵀ = I

Consequently, matrix Aᵀ is invertible, and (Aᵀ)⁻¹ = (A⁻¹)ᵀ.

(v) Conclude this from Proposition 2.11.2. Given a matrix Aᵢⱼ, i, j = 1, . . . , n, we can interpret it as the matrix representation of the map A : ℝⁿ → ℝⁿ, y = Ax, defined as:

yᵢ = Σ_{j=1}^n Aᵢⱼ xⱼ

with respect to the canonical basis eᵢ, i = 1, . . . , n. The transpose matrix Aᵀ can then be interpreted as the matrix of the transpose transformation:

Aᵀ : (ℝⁿ)* → (ℝⁿ)*

The conclusion follows then from the facts that the rank of the matrix equals the rank of the corresponding transformation (and similarly for the transpose matrix), and Proposition 2.11.2.

Exercise 2.11.3 Construct an example of square matrices A and B such that

(a) AB ≠ BA

A = [ 1 0 ]    B = [ 1 1 ]
    [ 1 1 ]        [ 0 1 ]

Then

AB = [ 1 1 ]    and    BA = [ 2 1 ]
     [ 1 2 ]                [ 1 1 ]
(b) AB = 0, but neither A = 0 nor B = 0

A = [ 1 0 ]    B = [ 0 0 ]
    [ 0 0 ]        [ 0 1 ]

(c) AB = AC, but B ≠ C

Take A, B from (b) and C = 0.
Exercise 2.11.4 If A = [Aᵢⱼ] is an m × n rectangular matrix, its transpose Aᵀ is the n × m matrix Aᵀ = [Aⱼᵢ]. Prove that

(i) (Aᵀ)ᵀ = A.

((Aᵢⱼ)ᵀ)ᵀ = (Aⱼᵢ)ᵀ = Aᵢⱼ

(ii) (A + B)ᵀ = Aᵀ + Bᵀ.

Particular case of Proposition 2.11.3(i).

(iii) (ABC · · · XYZ)ᵀ = ZᵀYᵀXᵀ · · · CᵀBᵀAᵀ.

Use Proposition 2.11.3(ii) and recursion,
(ABC . . . XYZ)ᵀ = (BC . . . XYZ)ᵀAᵀ = (C . . . XYZ)ᵀBᵀAᵀ = . . . = ZᵀYᵀXᵀ . . . CᵀBᵀAᵀ

(iv) (qA)ᵀ = qAᵀ.

Particular case of Proposition 2.11.3(i).

Exercise 2.11.5 In this exercise, we develop a classical formula for the inverse of a square matrix. Let A = [aᵢⱼ] be a matrix of order n. We define the cofactor Aᵢⱼ of the element aᵢⱼ of A as the determinant of the matrix obtained by deleting the ith row and jth column of A, multiplied by (−1)^{i+j}:

Aᵢⱼ = cofactor aᵢⱼ = (−1)^{i+j} det [ a₁₁     · · · a₁,ⱼ₋₁   a₁,ⱼ₊₁   · · · a₁ₙ   ]
                                    [   · · ·                                     ]
                                    [ aᵢ₋₁,₁  · · · aᵢ₋₁,ⱼ₋₁ aᵢ₋₁,ⱼ₊₁ · · · aᵢ₋₁,ₙ ]
                                    [ aᵢ₊₁,₁  · · · aᵢ₊₁,ⱼ₋₁ aᵢ₊₁,ⱼ₊₁ · · · aᵢ₊₁,ₙ ]
                                    [   · · ·                                     ]
                                    [ aₙ₁     · · · aₙ,ⱼ₋₁   aₙ,ⱼ₊₁   · · · aₙₙ   ]

(a) Show that

δᵢⱼ det A = Σ_{k=1}^n aᵢₖ Aⱼₖ,  1 ≤ i, j ≤ n
where δᵢⱼ is the Kronecker delta. Hint: Compare Exercise 2.13.4.

For i = j, the formula reduces to the Laplace Expansion Formula for determinants discussed in Exercise 2.13.4. For i ≠ j, the right-hand side represents the Laplace expansion of the determinant of an array where two rows are identical. The antisymmetry of the determinant (comp. Section 2.13) implies then that the value must be zero.
(b) Using the result in (a), conclude that

A⁻¹ = (1/det A) [Aᵢⱼ]ᵀ

Divide both sides of the identity in (a) by det A.

(c) Use (b) to compute the inverse of

A = [ 1  2 2 ]
    [ 1 −1 0 ]
    [ 2  1 3 ]

and verify your answer by showing that A⁻¹A = AA⁻¹ = I.

A⁻¹ = [  1  4/3 −2/3 ]
      [  1  1/3 −2/3 ]
      [ −1  −1    1  ]

Exercise 2.11.6 Consider the matrices

A = [ 1  0 4 1 ]
    [ 2 −1 3 0 ]

B = [ −1 4 ]
    [ 12 0 ]
    [  0 1 ]

C = [ 1 −1 4 −3 ]

D = [ 2 ]
    [ 3 ]

E = [  1 0  2 3 ]
    [ −1 4  0 1 ]
    [  1 0  2 4 ]
    [  0 1 −1 2 ]

If possible, compute the following:
(a) AAᵀ + 4DᵀD + Eᵀ

The expression is ill-defined: AAᵀ ∈ Matr(2, 2) and Eᵀ ∈ Matr(4, 4), so the two matrices cannot be added to each other.

(b) CᵀC + E − E²

= [ −1  −4   3 −17 ]
  [  3 −12  −1   1 ]
  [  2  −8  16 −27 ]
  [ −1  −2  −9  10 ]

(c) BᵀD

Ill-defined, mismatched dimensions.

(d) BᵀBD − D

= [ 276 ]
  [  40 ]

(e) EC − AᵀA

EC is not computable.

(f) AᵀDC(E − 2I)

= [  32 −40  40 144 ]
  [ −12  15 −15 −54 ]
  [  68 −85  85 306 ]
  [   8 −10  10  36 ]
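The computations above can be cross-checked with NumPy. The matrix entries are as reconstructed from the badly garbled original, so treat this as a best-effort verification of that reading rather than an authoritative restatement:

```python
import numpy as np

# Exercise 2.11.5(c): cofactor inverse of the 3x3 matrix
M = np.array([[1, 2, 2], [1, -1, 0], [2, 1, 3]], dtype=float)
Minv = np.linalg.inv(M)
assert np.allclose(Minv, [[1, 4/3, -2/3], [1, 1/3, -2/3], [-1, -1, 1]])
assert np.allclose(M @ Minv, np.eye(3))

# Exercise 2.11.6, parts (b) and (f), with the matrices as read above
A = np.array([[1, 0, 4, 1], [2, -1, 3, 0]], dtype=float)
C = np.array([[1, -1, 4, -3]], dtype=float)
D = np.array([[2], [3]], dtype=float)
E = np.array([[1, 0, 2, 3], [-1, 4, 0, 1], [1, 0, 2, 4], [0, 1, -1, 2]], dtype=float)
b_part = C.T @ C + E - E @ E
result = A.T @ D @ C @ (E - 2 * np.eye(4))
assert np.allclose(result, [[32, -40, 40, 144],
                            [-12, 15, -15, -54],
                            [68, -85, 85, 306],
                            [8, -10, 10, 36]])
print(b_part)
```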
Exercise 2.11.7 Do the following vectors provide a basis for ℝ⁴?

a = (1, 0, −1, 1),  b = (0, 1, 0, 22),  c = (3, 3, −3, 9),  d = (0, 0, 0, 1)

It is sufficient to check linear independence:

αa + βb + γc + δd = 0  ⇒?  α = β = γ = δ = 0

Computing

αa + βb + γc + δd = (α + 3γ, β + 3γ, −α − 3γ, α + 22β + 9γ + δ)

we arrive at the homogeneous system of equations

[  1  0  3 0 ] [ α ]   [ 0 ]
[  0  1  3 0 ] [ β ] = [ 0 ]
[ −1  0 −3 0 ] [ γ ]   [ 0 ]
[  1 22  9 1 ] [ δ ]   [ 0 ]

The system has a nontrivial solution iff the matrix is singular, i.e., det A = 0. By inspection, the third row equals minus the first one, so the determinant is zero. Vectors a, b, c, d are linearly dependent and, therefore, do not provide a basis for ℝ⁴.

Exercise 2.11.8 Evaluate the determinant of the matrix

A = [ 1 −1 0  4 ]
    [ 1  0 2  1 ]
    [ 4  7 1 −1 ]
    [ 1  0 1  2 ]
Use e.g. the Laplace expansion with respect to the last row and Sarrus’ formulas:

det A = −1 · det[ −1 0 4 ; 0 2 1 ; 7 1 −1 ] − 1 · det[ 1 −1 4 ; 1 0 1 ; 4 7 −1 ] + 2 · det[ 1 −1 0 ; 1 0 2 ; 4 7 1 ]

= −(2 − 56 + 1) − (−4 + 28 − 7 − 1) + 2(−8 − 14 + 1) = −(−53) − 16 + 2(−21) = 53 − 16 − 42 = −5
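The value can be double-checked numerically (the original printed arithmetic for this expansion does not add up, so a machine check is worthwhile); a sketch with NumPy:

```python
import numpy as np

A = np.array([[1, -1, 0, 4],
              [1, 0, 2, 1],
              [4, 7, 1, -1],
              [1, 0, 1, 2]], dtype=float)
det = np.linalg.det(A)
assert abs(det - (-5)) < 1e-9
print(det)
```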
Exercise 2.11.9 Invert the following matrices (see Exercise 2.11.5).

A = [ 1 −1 ]    B = [ 4 2 1 ]
    [ 1  2 ]        [ 2 4 2 ]
                    [ 1 2 2 ]

A⁻¹ = (1/3) [  2 1 ]    B⁻¹ = (1/12) [  4 −2  0 ]
            [ −1 1 ]                 [ −2  7 −6 ]
                                     [  0 −6 12 ]
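Both inverses (with the matrix entries as reconstructed from the garbled original) can be verified with NumPy:

```python
import numpy as np

A = np.array([[1, -1], [1, 2]], dtype=float)
Ainv = np.array([[2, 1], [-1, 1]]) / 3
assert np.allclose(np.linalg.inv(A), Ainv)
assert np.allclose(A @ Ainv, np.eye(2))

B = np.array([[4, 2, 1], [2, 4, 2], [1, 2, 2]], dtype=float)
Binv = np.array([[4, -2, 0], [-2, 7, -6], [0, -6, 12]]) / 12
assert np.allclose(np.linalg.inv(B), Binv)
assert np.allclose(B @ Binv, np.eye(3))
print("inverses verified")
```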
Exercise 2.11.10 Prove that if A is symmetric and nonsingular, so is A⁻¹.

Use Proposition 2.11.3(iv):

(A⁻¹)ᵀ = (Aᵀ)⁻¹ = A⁻¹

Exercise 2.11.11 Prove that if A, B, C, and D are nonsingular matrices of the same order then

(ABCD)⁻¹ = D⁻¹C⁻¹B⁻¹A⁻¹

Use the fact that the matrix product is associative,

(ABCD)(D⁻¹C⁻¹B⁻¹A⁻¹) = ABC(DD⁻¹)C⁻¹B⁻¹A⁻¹ = ABC I C⁻¹B⁻¹A⁻¹ = . . . = I

In the same way,

(D⁻¹C⁻¹B⁻¹A⁻¹)(ABCD) = I

So, (ABCD)⁻¹ = D⁻¹C⁻¹B⁻¹A⁻¹.

Exercise 2.11.12 Consider the linear problem T x = y with

T = [ 0 1  3 −2 ]    y = [ 1 ]
    [ 2 1 −4  3 ]        [ 5 ]
    [ 2 3  2 −1 ]        [ 7 ]

(i) Determine the rank of T.
Multiplication of columns (rows) by a nonzero factor, addition of columns (rows), and interchange of columns (rows) do not change the rank of a matrix. We may use those operations and mimic Gaussian elimination to compute the rank of matrices.

rank [ 0 1  3 −2 ]
     [ 2 1 −4  3 ]
     [ 2 3  2 −1 ]

= rank [ 1  3 −2 0 ]   (move the first column to the end)
       [ 1 −4  3 2 ]
       [ 3  2 −1 2 ]

= rank [ 1   3   −2    0  ]   (divide row 3 by 3)
       [ 1  −4    3    2  ]
       [ 1  2/3 −1/3  2/3 ]

= rank [ 1   3    −2    0  ]   (subtract row 1 from rows 2 and 3)
       [ 0  −7     5    2  ]
       [ 0 −7/3   5/3  2/3 ]

= rank [ 1   0    0    0  ]   (manipulate the columns the same way to zero out the first row)
       [ 0  −7    5    2  ]
       [ 0 −7/3  5/3  2/3 ]

= rank [ 1 0   0    0  ]
       [ 0 1 −5/7 −2/7 ]
       [ 0 0   0    0  ]

= rank [ 1 0 0 0 ]
       [ 0 1 0 0 ]
       [ 0 0 0 0 ]

= 2

(ii) Determine the null space of T.

Set x₃ = α and x₄ = β and solve for x₁, x₂ to obtain

N(T) = {( (7/2)α − (5/2)β, −3α + 2β, α, β )ᵀ : α, β ∈ ℝ}

(iii) Obtain a particular solution and the general solution.

Check that the rank of the augmented matrix is also equal to 2. Set x₃ = x₄ = 0 to obtain a particular solution

x = (2, 1, 0, 0)ᵀ

The general solution is then

x = (2 + (7/2)α − (5/2)β, 1 − 3α + 2β, α, β)ᵀ,  α, β ∈ ℝ

(iv) Determine the range space of T.

As rank T = 2, we know that the range of T is two-dimensional. It is sufficient thus to find two linearly independent vectors that are in the range; e.g., we can take T e₁, T e₂, represented by the first two columns of the matrix,

R(T) = span{(0, 2, 2)ᵀ, (1, 1, 3)ᵀ}
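All four answers can be verified at once with NumPy; a quick sketch:

```python
import numpy as np

T = np.array([[0, 1, 3, -2], [2, 1, -4, 3], [2, 3, 2, -1]], dtype=float)
y = np.array([1, 5, 7], dtype=float)

# (i) rank
assert np.linalg.matrix_rank(T) == 2

# (ii) null-space family ((7/2)a - (5/2)b, -3a + 2b, a, b)
for a, b in [(1.0, 0.0), (0.0, 1.0), (2.0, -3.0)]:
    x = np.array([3.5 * a - 2.5 * b, -3 * a + 2 * b, a, b])
    assert np.allclose(T @ x, 0)

# (iii) particular solution (2, 1, 0, 0)
assert np.allclose(T @ np.array([2, 1, 0, 0]), y)
print("rank, null space, and particular solution verified")
```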
Exercise 2.11.13 Construct examples of linear systems of equations having (1) no solutions, (2) infinitely many solutions, (3) if possible, unique solutions for the following cases:

(a) 3 equations, 4 unknowns

(1) T = [ 1 0 0 0 ]    y = [ 0 ]
        [ 1 0 0 0 ]        [ 1 ]
        [ 0 1 1 1 ]        [ 0 ]

(2) T = [ 1 1 1 1 ]    y = [ 0 ]
        [ 1 1 1 1 ]        [ 0 ]
        [ 1 1 1 1 ]        [ 0 ]

(3) A unique solution is not possible.

(b) 3 equations, 3 unknowns

(1) T = [ 1 0 0 ]    y = [ 0 ]
        [ 1 0 0 ]        [ 1 ]
        [ 0 1 1 ]        [ 0 ]

(2) T = [ 1 0 0 ]    y = [ 1 ]
        [ 2 0 0 ]        [ 2 ]
        [ 1 1 1 ]        [ 1 ]

(3) T = [ 1 0 0 ]    y = [ 1 ]
        [ 0 1 0 ]        [ 1 ]
        [ 0 0 1 ]        [ 1 ]

Exercise 2.11.14 Determine the rank of the following matrices:

T₁ = [ 2 1  4  7 ]    T₂ = [ 1 2 1 3 4 ]    T₃ = [ 4 2 −1 1 ]
     [ 0 1  2  1 ]         [ 2 0 3 2 1 ]         [ 5 2  0 1 ]
     [ 2 2  6  8 ]         [ 1 1 1 2 1 ]         [ 3 0  1 1 ]
     [ 4 4 14 10 ]

In all three cases, the rank is equal to 3.
Exercise 2.11.15 Solve, if possible, the following systems:

(a)
4x₁ + 3x₃ − x₄ + 2x₅ = 2
x₁ − x₂ + x₃ − x₄ + x₅ = 1
x₁ + x₂ + x₃ − x₄ + x₅ = 1
x₁ + 2x₂ + x₃ + x₅ = 0

x = (t + 1, 0, −2t − 1, −1, t)ᵀ,  t ∈ ℝ

(b)
−4x₁ − 8x₂ + 5x₃ = 1
2x₁ − 2x₂ + 3x₃ = 2
5x₁ + x₂ + 2x₃ = 4

x = (19/120, −47/120, 3/10)ᵀ

(c)
2x₁ + 3x₂ + 4x₃ + 3x₄ = 0
x₁ + 2x₂ + 3x₃ + 2x₄ = 0
x₁ + x₂ + x₃ + x₄ = 0

x = (α, −2α − β, α, β)ᵀ,  α, β ∈ ℝ
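The solution families for (a) and (c), as corrected above from the garbled original, can be verified by substitution; a quick sketch with NumPy (the coefficient matrices `M` and `N` are our transcriptions of the two systems):

```python
import numpy as np

# (a): check x = (t + 1, 0, -2t - 1, -1, t) for several t
M = np.array([[4, 0, 3, -1, 2],
              [1, -1, 1, -1, 1],
              [1, 1, 1, -1, 1],
              [1, 2, 1, 0, 1]], dtype=float)
rhs = np.array([2, 1, 1, 0], dtype=float)
for t in [-1.0, 0.0, 2.5]:
    x = np.array([t + 1, 0, -2 * t - 1, -1, t])
    assert np.allclose(M @ x, rhs)

# (c): check the homogeneous family x = (a, -2a - b, a, b)
N = np.array([[2, 3, 4, 3], [1, 2, 3, 2], [1, 1, 1, 1]], dtype=float)
for a, b in [(1.0, 0.0), (0.0, 1.0), (-2.0, 3.0)]:
    x = np.array([a, -2 * a - b, a, b])
    assert np.allclose(N @ x, 0)
print("solution families verified")
```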
2.12 Tensor Products, Covariant and Contravariant Tensors

2.13 Elements of Multilinear Algebra
Exercises

Exercise 2.13.1 Let X be a finite-dimensional space of dimension n. Prove that the dimension of the space Mₘˢ(X) of all m-linear symmetric functionals defined on X is given by the formula

dim Mₘˢ(X) = C(n + m − 1, m) = n(n + 1) . . . (n + m − 1)/m! = (n + m − 1)!/(m!(n − 1)!)

Proceed along the following steps.

(a) Let Pᵢ,ₘ denote the number of increasing sequences of m natural numbers ending with i,

1 ≤ a₁ ≤ a₂ ≤ . . . ≤ aₘ = i

Argue that

dim Mₘˢ(X) = Σ_{i=1}^n Pᵢ,ₘ
Let a be a general m-linear functional defined on X. Let e₁, . . . , eₙ be a basis for X, and let vʲ, j = 1, . . . , n, denote the components of a vector v with respect to the basis. The multilinearity of a implies the representation formula,

a(v₁, . . . , vₘ) = Σ_{j₁=1}^n Σ_{j₂=1}^n . . . Σ_{jₘ=1}^n a(e_{j₁}, e_{j₂}, . . . , e_{jₘ}) v₁^{j₁} v₂^{j₂} . . . vₘ^{jₘ}

On the other side, if the form a is symmetric, we can interchange any two arguments in the coefficient a(e_{j₁}, e_{j₂}, . . . , e_{jₘ}) without changing the value. The form is thus determined by the coefficients a(e_{j₁}, e_{j₂}, . . . , e_{jₘ}) where

1 ≤ j₁ ≤ . . . ≤ jₘ ≤ n

The number of such increasing sequences equals the dimension of space Mₘˢ(X). Obviously, we can partition the set of such sequences into subsets that contain sequences ending at particular indices 1, 2, . . . , n, from which the identity above follows.

(b) Argue that

Pᵢ,ₘ₊₁ = Σ_{j=1}^i Pⱼ,ₘ
The ﬁrst m elements of an increasing sequence of m + 1 integers ending at i, form an increasing sequence of m integers ending at j ≤ i. (c) Use the identity above and mathematical induction to prove that Pi,m =
i(i + 1) . . . (i + m − 2) (m − 1)!
For m = 1, $P_{i,1} = 1$. For m = 2,
$$P_{i,2} = \sum_{j=1}^{i} 1 = i$$
For m = 3,
$$P_{i,3} = \sum_{j=1}^{i} j = \frac{i(i+1)}{2}$$
Assume the formula is true for a particular m. Then
$$P_{i,m+1} = \sum_{j=1}^{i} \frac{j(j+1)\cdots(j+m-2)}{(m-1)!}$$
We shall use induction in i to prove that
$$\sum_{j=1}^{i} \frac{j(j+1)\cdots(j+m-2)}{(m-1)!} = \frac{i(i+1)\cdots(i+m-1)}{m!}$$
The case i = 1 is obvious. Suppose the formula is true for a particular value of i. Then,
$$\begin{aligned}
\sum_{j=1}^{i+1} \frac{j(j+1)\cdots(j+m-2)}{(m-1)!}
&= \sum_{j=1}^{i} \frac{j(j+1)\cdots(j+m-2)}{(m-1)!} + \frac{(i+1)(i+2)\cdots(i+m-1)}{(m-1)!}\\
&= \frac{i(i+1)\cdots(i+m-1)}{m!} + \frac{m(i+1)(i+2)\cdots(i+m-1)}{m!}\\
&= \frac{(i+1)(i+2)\cdots(i+m-1)(i+m)}{m!}\\
&= \frac{(i+1)(i+2)\cdots(i+m-1)\left((i+1)+m-1\right)}{m!}
\end{aligned}$$
(d) Conclude the final formula.

Just use the formula above.

Exercise 2.13.2 Prove that any bilinear functional can be decomposed in a unique way into the sum of a symmetric functional and an antisymmetric functional. In other words,
$$M_2(V) = M_2^s(V) \oplus M_2^a(V)$$
Does this result hold for a general m-linear functional with m > 2?

The result follows from the simple decomposition,
$$a(u,v) = \frac12\left(a(u,v) + a(v,u)\right) + \frac12\left(a(u,v) - a(v,u)\right)$$
Unfortunately, it does not generalize to m > 2. This can for instance be seen from a simple comparison of the dimensions of the involved spaces in the finite-dimensional case,
$$n^m > \binom{n+m-1}{m} + \binom{n}{m} \qquad \text{for } 2 < m \le n$$

Exercise 2.13.3 Antisymmetric linear functionals are a great tool to check for linear independence of vectors. Let a be an m-linear antisymmetric functional defined on a vector space V. Let $v_1,\ldots,v_m$ be m vectors in the space V such that $a(v_1,\ldots,v_m) \ne 0$. Prove that the vectors $v_1,\ldots,v_m$ are linearly independent.
Is the converse true? In other words, if the vectors $v_1,\ldots,v_m$ are linearly independent, and a is a nontrivial m-linear antisymmetric form, is $a(v_1,\ldots,v_m) \ne 0$?

Assume to the contrary that there exists an index i such that
$$v_i = \sum_{j\ne i} \beta_j v_j$$
for some constants $\beta_j$, $j \ne i$. Substituting into the functional a, we get,
$$a(v_1,\ldots,v_i,\ldots,v_m) = a\Big(v_1,\ldots,\sum_{j\ne i}\beta_j v_j,\ldots,v_m\Big) = \sum_{j\ne i}\beta_j\, a(v_1,\ldots,v_j,\ldots,v_m) = 0$$
since in each of the terms $a(v_1,\ldots,v_j,\ldots,v_m)$, two arguments are the same.

The converse is not true. Consider for instance a bilinear, antisymmetric form defined on a three-dimensional space. Let $e_1, e_2, e_3$ be a basis for the space. As discussed in the text, the form is uniquely determined by its values on pairs of basis vectors: $a(e_1,e_2)$, $a(e_1,e_3)$, $a(e_2,e_3)$. It is sufficient for one of these numbers to be nonzero in order to have a nontrivial form. Thus we may have $a(e_1,e_2) = 0$ for the linearly independent vectors $e_1, e_2$, and a nontrivial form a. The discussed criterion is only a sufficient condition for linear independence, not a necessary one.

Exercise 2.13.4 Use the fact that the determinant of a matrix A is a multilinear antisymmetric functional of the matrix columns and rows to prove the Laplace Expansion Formula. Select a particular column of matrix $A_{ij}$, say the jth column. Let $\mathbf{A}_{ij}$ denote the submatrix of A obtained by removing the ith row and jth column (do not confuse it with a matrix representation). Prove that
$$\det A = \sum_{i=1}^{n} (-1)^{i+j} A_{ij} \det \mathbf{A}_{ij}$$
Formulate and prove an analogous expansion formula with respect to an ith row.

It follows from the linearity of the determinant with respect to the jth column that,
$$\det \begin{bmatrix} \cdots & A_{1j} & \cdots \\ & \vdots & \\ \cdots & A_{nj} & \cdots \end{bmatrix}
= A_{1j}\det \begin{bmatrix} \cdots & 1 & \cdots \\ & \vdots & \\ \cdots & 0 & \cdots \end{bmatrix}
+ \ldots + A_{nj}\det \begin{bmatrix} \cdots & 0 & \cdots \\ & \vdots & \\ \cdots & 1 & \cdots \end{bmatrix}$$
On the other hand, the determinant of the matrix whose jth column is the ith canonical unit vector,
$$\begin{bmatrix} & 0^{(j)} & \\ & \vdots & \\ {}^{(i)} & 1 & \\ & \vdots & \\ & 0 & \end{bmatrix}$$
is a multilinear functional of the remaining columns (and rows) and, for $\mathbf{A}_{ij} = I$ (I denotes here the identity matrix in $\mathbb{R}^{n-1}$), its value reduces to $(-1)^{i+j}$. Hence,
$$\det \begin{bmatrix} & 0^{(j)} & \\ {}^{(i)} & 1 & \\ & 0 & \end{bmatrix} = (-1)^{i+j}\det \mathbf{A}_{ij}$$
The reasoning follows identical lines for the expansion with respect to an ith row.
Exercise 2.13.5 Prove Cramer's formulas for the solution of a nonsingular system of n equations with n unknowns,
$$\begin{bmatrix} a_{11} & \ldots & a_{1n} \\ \vdots & \ddots & \vdots \\ a_{n1} & \ldots & a_{nn} \end{bmatrix}
\begin{bmatrix} x_1 \\ \vdots \\ x_n \end{bmatrix}
= \begin{bmatrix} b_1 \\ \vdots \\ b_n \end{bmatrix}$$
Hint: In order to develop the formula for the jth unknown, rewrite the system in the form:
$$\begin{bmatrix} a_{11} & \ldots & a_{1n} \\ \vdots & & \vdots \\ a_{n1} & \ldots & a_{nn} \end{bmatrix}
\begin{bmatrix} 1 & \ldots & x_1 & \ldots & 0 \\ \vdots & \ddots & \vdots & & \vdots \\ 0 & \ldots & x_n & \ldots & 1 \end{bmatrix}
= \begin{bmatrix} a_{11} & \ldots & b_1 & \ldots & a_{1n} \\ \vdots & & \vdots & & \vdots \\ a_{n1} & \ldots & b_n & \ldots & a_{nn} \end{bmatrix}$$
where the column $(x_1,\ldots,x_n)^T$ occupies the jth column of the identity matrix on the left-hand side, and the column $(b_1,\ldots,b_n)^T$ replaces the jth column of A on the right-hand side. Compute the determinant of both sides of the identity, and use Cauchy's Theorem for Determinants for the left-hand side.
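The hint yields $x_j \det A = \det A^{(j)}$, where $A^{(j)}$ denotes A with its jth column replaced by b, hence $x_j = \det A^{(j)}/\det A$. A small numerical sketch of these formulas (NumPy assumed; the particular A and b are arbitrary):

```python
import numpy as np

# Cramer's rule: x_j = det(A with jth column replaced by b) / det(A)
A = np.array([[2.0, 1.0, 1.0],
              [1.0, 3.0, 2.0],
              [1.0, 0.0, 0.0]])
b = np.array([4.0, 5.0, 6.0])

x = np.empty(3)
for j in range(3):
    Aj = A.copy()
    Aj[:, j] = b            # replace the jth column by the right-hand side
    x[j] = np.linalg.det(Aj) / np.linalg.det(A)

print(np.allclose(A @ x, b))   # True: Cramer's formulas solve the system
```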
Exercise 2.13.6 Explain why the rank of a (not necessarily square) matrix is equal to the maximum size of a square submatrix with a nonzero determinant.

Consider an m × n matrix $A_{ij}$. The matrix can be considered to be a representation of a linear map A from an n-dimensional space X with a basis $e_i$, $i = 1,\ldots,n$, into an m-dimensional space Y with a basis $g_1,\ldots,g_m$. The transpose of the matrix represents the transpose operator $A^T$ mapping the dual space $Y^*$ into the dual space $X^*$, with respect to the dual bases $g_1^*,\ldots,g_m^*$ and $e_1^*,\ldots,e_n^*$. The rank of the matrix is equal to the dimension of the range space of operator A and of operator $A^T$. Let $e_{j_1},\ldots,e_{j_k}$ be such vectors that $Ae_{j_1},\ldots,Ae_{j_k}$ is a basis for the range of operator A. The corresponding submatrix represents a restriction B of operator A to the subspace $X_0 = \mathrm{span}(e_{j_1},\ldots,e_{j_k})$ and has the same rank k as the original whole matrix. Its transpose has the same rank, equal to k. By the same argument, there exist k vectors $g_{i_1}^*,\ldots,g_{i_k}^*$ such that $A^T g_{i_1}^*,\ldots,A^T g_{i_k}^*$ are linearly independent. The corresponding k × k submatrix represents the restriction of the transpose operator to the k-dimensional subspace $Y_0^* = \mathrm{span}(g_{i_1}^*,\ldots,g_{i_k}^*)$, with values in the dual space $X_0^*$, and has the same rank equal to k. Thus, the final submatrix represents an isomorphism from a k-dimensional space into a k-dimensional space and, consequently, must have a nonzero determinant.

Conversely, let $v_1,\ldots,v_k$ be k column vectors in $\mathbb{R}^m$. Consider the matrix composed of these columns. If there exists a k × k submatrix with a nonzero determinant, the vectors must be linearly independent. Indeed, the determinant of any k × k submatrix of the matrix represents a k-linear, antisymmetric functional of the column vectors, so, by Exercise 2.13.3, $v_1,\ldots,v_k$ are linearly independent vectors. The same argument applies to the rows of the matrix.
Euclidean Spaces

2.14 Scalar (Inner) Product. Representation Theorem in Finite-Dimensional Spaces

2.15 Basis and Cobasis. Adjoint of a Transformation. Contra- and Covariant Components of Tensors
Exercises

Exercise 2.15.1 Go back to Exercise 2.11.1 and consider the following product in $\mathbb{R}^2$,
$$\mathbb{R}^2\times\mathbb{R}^2 \ni (x,y) \to (x,y)_V = x_1y_1 + 2x_2y_2$$
Prove that $(x,y)_V$ satisfies the axioms for an inner product. Determine the adjoint of the map A from Exercise 2.11.1 with respect to this inner product.

The product is bilinear, symmetric, and positive definite, since $(x,x)_V = x_1^2 + 2x_2^2 \ge 0$, and $x_1^2 + 2x_2^2 = 0$ implies $x_1 = x_2 = 0$. The easiest way to determine a matrix representation of $A^*$ is to determine the cobasis of the basis used to define the map A. Assume that $a^1 = (\alpha,\beta)$. Then
$$(a_1, a^1)_V = \alpha = 1, \quad (a_2, a^1)_V = \alpha + 2\beta = 0 \;\Longrightarrow\; \beta = -\tfrac12$$
so $a^1 = (1, -\tfrac12)$. Similarly, if $a^2 = (\alpha,\beta)$ then,
$$(a_1, a^2)_V = \alpha = 0, \quad (a_2, a^2)_V = \alpha + 2\beta = 1 \;\Longrightarrow\; \beta = \tfrac12$$
so $a^2 = (0, \tfrac12)$. The matrix representation of $A^*$ in the cobasis is simply the transpose of the original matrix,
$$\begin{bmatrix} 1 & 1 \\ 0 & 2 \end{bmatrix}$$
In order to represent $A^*$ in the original, canonical basis, we need to switch between the bases.
$$a^1 = e_1 - \tfrac12 e_2, \quad a^2 = \tfrac12 e_2 \;\Longrightarrow\; e_1 = a^1 + a^2, \quad e_2 = 2a^2$$
Then,
$$\begin{aligned}
A^* y &= A^*(y_1 e_1 + y_2 e_2) = A^*\big(y_1(a^1 + a^2) + 2y_2 a^2\big) = A^*\big(y_1 a^1 + (y_1 + 2y_2)a^2\big)\\
&= y_1 A^* a^1 + (y_1 + 2y_2) A^* a^2 = y_1 a^1 + (y_1 + 2y_2)(a^1 + 2a^2)\\
&= y_1\big(e_1 - \tfrac12 e_2\big) + (y_1 + 2y_2)\big(e_1 + \tfrac12 e_2\big)\\
&= 2(y_1 + y_2)e_1 + y_2 e_2 = \big(2(y_1+y_2),\, y_2\big)
\end{aligned}$$
Now, let us check our calculations. First, let us compute the original map (that has been given to us in the basis $a_1, a_2$) in the canonical basis,
$$\begin{aligned}
A(x_1,x_2) &= A(x_1 e_1 + x_2 e_2) = A\big(x_1 a_1 + x_2(a_2 - a_1)\big) = A\big((x_1 - x_2)a_1 + x_2 a_2\big)\\
&= (x_1 - x_2)(a_1 + a_2) + 2x_2 a_2 = (x_1 - x_2)(2e_1 + e_2) + 2x_2(e_1 + e_2)\\
&= (2x_1,\, x_1 + x_2)
\end{aligned}$$
If our calculations are correct then,
$$(Ax, y)_V = 2x_1y_1 + 2(x_1 + x_2)y_2$$
must match
$$(x, A^*y)_V = 2x_1(y_1 + y_2) + 2x_2 y_2$$
which it does! Needless to say, you can solve this problem in many other ways.
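The adjoint identity $(Ax,y)_V = (x,A^*y)_V$ in the canonical basis can also be checked numerically (NumPy assumed; the matrices encode the maps computed above):

```python
import numpy as np

# Inner product (x, y)_V = x1*y1 + 2*x2*y2, encoded by the Gram matrix M
M = np.diag([1.0, 2.0])

# A in the canonical basis: A(x1, x2) = (2*x1, x1 + x2)
A = np.array([[2.0, 0.0],
              [1.0, 1.0]])

# Adjoint computed above: A*(y1, y2) = (2*(y1 + y2), y2)
A_star = np.array([[2.0, 2.0],
                   [0.0, 1.0]])

rng = np.random.default_rng(0)
x, y = rng.standard_normal(2), rng.standard_normal(2)

lhs = (A @ x) @ M @ y        # (Ax, y)_V
rhs = x @ M @ (A_star @ y)   # (x, A*y)_V
print(np.isclose(lhs, rhs))  # True
```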
3 Lebesgue Measure and Integration
Lebesgue Measure
3.1 Elementary Abstract Measure Theory
Exercises

Exercise 3.1.1 Prove Proposition 3.1.2. Let $S_\iota \subset \mathcal{P}(X)$ be a family of σ-algebras. Prove that the common part $\bigcap_\iota S_\iota$ is a σ-algebra as well.

The result is a simple consequence of the commutativity of universal quantifiers. For any open statement P(x, y), we have,
$$\forall x\,\forall y\;P(x,y) \iff \forall y\,\forall x\;P(x,y)$$
For finite sets of indices, the property follows by induction from level one logic axioms. The property serves then as a motivation for assuming the level two logic axiom above. The specific arguments are very simple and look as follows.

1. $\emptyset, X \in S_\iota$, $\forall\iota$, implies $\emptyset, X \in \bigcap_\iota S_\iota$, so $\bigcap_\iota S_\iota$ is nonempty.
2. Let $A \in \bigcap_\iota S_\iota$. Then $A \in S_\iota$, $\forall\iota$, and, consequently, $A' \in S_\iota$, $\forall\iota$, which implies that $A' \in \bigcap_\iota S_\iota$.
3. $A_i \in \bigcap_\iota S_\iota$, $i = 1, 2, \ldots$, implies $A_i \in S_\iota$, $\forall\iota$, $i = 1, 2, \ldots$, which in turn implies $\bigcup_{i=1}^\infty A_i \in S_\iota$, $\forall\iota$, and, consequently, $\bigcup_{i=1}^\infty A_i \in \bigcap_\iota S_\iota$.
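The closure argument can be illustrated by brute force on a small finite set: the intersection of two σ-algebras is again closed under complements and unions. A sketch (the particular σ-algebras are arbitrary, generated by two different partitions):

```python
X = frozenset({1, 2, 3, 4})

def is_sigma_algebra(S, X):
    """Brute-force check of the sigma-algebra axioms on a finite set."""
    if frozenset() not in S or X not in S:
        return False
    if any(X - A not in S for A in S):              # closed under complements
        return False
    if any(A | B not in S for A in S for B in S):   # closed under (finite) unions
        return False
    return True

S1 = {frozenset(), frozenset({1, 2}), frozenset({3, 4}), X}
S2 = {frozenset(), frozenset({1, 3}), frozenset({2, 4}), X}

common = S1 & S2   # the "common part" of the two sigma-algebras
print(is_sigma_algebra(S1, X), is_sigma_algebra(S2, X), is_sigma_algebra(common, X))
# True True True
```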
Exercise 3.1.2 Let f : X → Y be a function. Prove the following properties of the inverse image:
$$f^{-1}(Y - B) = X - f^{-1}(B) \qquad \text{and} \qquad f^{-1}\Big(\bigcup_{i=1}^{\infty} B_i\Big) = \bigcup_{i=1}^{\infty} f^{-1}(B_i)$$
for arbitrary $B, B_i \subset Y$.

Both proofs are very straightforward.
$$x \in f^{-1}(Y - B) \iff f(x) \in Y - B \iff \sim\!\big(f(x) \in B\big) \iff \sim\!\big(x \in f^{-1}(B)\big) \iff x \in X - f^{-1}(B)$$
and
$$x \in f^{-1}\Big(\bigcup_{i=1}^{\infty} B_i\Big) \iff f(x) \in \bigcup_{i=1}^{\infty} B_i \iff \exists i\; f(x) \in B_i \iff \exists i\; x \in f^{-1}(B_i) \iff x \in \bigcup_{i=1}^{\infty} f^{-1}(B_i)$$
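Both inverse-image identities can be spot-checked on small finite sets (a sketch; the particular f is arbitrary):

```python
def preimage(f, X, B):
    """Inverse image f^{-1}(B) for f given as a dict on X."""
    return {x for x in X if f[x] in B}

X = {1, 2, 3, 4}
Y = {'a', 'b', 'c'}
f = {1: 'a', 2: 'a', 3: 'b', 4: 'c'}

B = {'a', 'c'}
B1, B2 = {'a'}, {'b'}

# f^{-1}(Y - B) = X - f^{-1}(B)
print(preimage(f, X, Y - B) == X - preimage(f, X, B))      # True
# f^{-1}(B1 u B2) = f^{-1}(B1) u f^{-1}(B2)
print(preimage(f, X, B1 | B2) ==
      preimage(f, X, B1) | preimage(f, X, B2))             # True
```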
Exercise 3.1.3 Construct an example of f : X → Y and a σ-algebra S in X such that f(S) is not a σ-algebra in Y.

Take any X, Y and any nonsurjective function f : X → Y. Take the trivial σ-algebra in X,
$$S = \{\emptyset, X\}$$
Then, in particular, $f(S) = \{\emptyset, f(X)\}$ does not contain $Y = \emptyset'$ and, therefore, it violates the first axiom for σ-algebras.

Exercise 3.1.4 Prove Corollary 3.1.1. Let f : X → Y be a bijection and $S \subset \mathcal{P}(X)$ a σ-algebra. Prove that:
(i) $f(S) := \{f(A) : A \in S\}$ is a σ-algebra in Y.
(ii) If a set K generates S in X then f(K) generates f(S) in Y.

Solutions:
(i) Let $g = f^{-1}$ be the inverse function of f. Then $f(S) = g^{-1}(S)$ and the property follows from Proposition 3.1.3. Equivalently, f(S) = R defined in the same Proposition.
(ii) Let f : X → Y be an arbitrary function, and let S(K) denote the σ-algebra generated by a family K. By Proposition 3.1.3, $f^{-1}(S(K))$ is a σ-algebra in X, and it contains $f^{-1}(K)$. Consequently, it must contain the smallest σ-algebra containing $f^{-1}(K)$, i.e.,
$$f^{-1}(S(K)) \supset S(f^{-1}(K))$$
Applying the result to the inverse function $g = f^{-1}$, we get
$$f(S(K)) \supset S(f(K))$$
Applying the first result once more, to g and the set $L = f(K) \subset Y$, gives:
$$S(g(L)) \subset g(S(L))$$
which, after applying f to both sides and noting that $g(L) = K$ and $S(L) = S(f(K))$, leads to the reverse inclusion,
$$f(S(K)) \subset S(f(K))$$
Exercise 3.1.5 Prove the details from the proof of Proposition 3.1.5. Let $G \subset \mathbb{R}^n$ be an open set. Prove that the family
$$\{F \subset \mathbb{R}^m : G\times F \in \mathcal{B}(\mathbb{R}^n\times\mathbb{R}^m)\}$$
is a σ-algebra in $\mathbb{R}^m$.

For any open set $F \subset \mathbb{R}^m$, the Cartesian product G × F is open in $\mathbb{R}^n\times\mathbb{R}^m$. Thus, the family contains all open sets and, therefore, is not empty. The properties of a σ-algebra follow then from the simple identities,
$$G\times F' = G\times(\mathbb{R}^m - F) = G\times\mathbb{R}^m - G\times F$$
$$G\times\bigcup_n F_n = \bigcup_n (G\times F_n)$$
Exercise 3.1.6 Let X be a set, $S \subset \mathcal{P}(X)$ a σ-algebra of sets in X, and y a specific element of X. Prove that the function
$$\mu(A) := \begin{cases} 1 & \text{if } y \in A \\ 0 & \text{otherwise} \end{cases}$$
is a measure on S.

Obviously, $\mu \not\equiv \infty$. Let $A_n \in S$, $n = 1, 2, \ldots$, be a family of pairwise disjoint sets. If $y \notin \bigcup_{n=1}^{\infty} A_n$ then $y \notin A_n$, $\forall n$, and, therefore, $0 = \mu(\bigcup_{n=1}^{\infty} A_n) = \sum_{n=1}^{\infty} 0$. If $y \in \bigcup_{n=1}^{\infty} A_n$ then there must be exactly one m such that $y \in A_m$. Consequently,
$$1 = \mu\Big(\bigcup_{n=1}^{\infty} A_n\Big) = \mu(A_m) = \sum_{n=1}^{\infty}\mu(A_n)$$

3.2 Construction of Lebesgue Measure in $\mathbb{R}^n$
Exercises

Exercise 3.2.1 Let $F_1, F_2 \subset \mathbb{R}^n$ be two disjoint closed sets, not necessarily bounded. Construct open sets $G_1, G_2$ such that
$$F_i \subset G_i,\; i = 1, 2 \qquad \text{and} \qquad G_1 \cap G_2 = \emptyset$$
Obviously, $F_1 \subset F_2'$. Since $F_2'$ is open, for every $x \in F_1$, there exists a ball $B(x, \epsilon_x) \subset F_2'$. Define
$$G_1 = \bigcup_{x\in F_1} B\Big(x, \frac{\epsilon_x}{2}\Big)$$
Being a union of open sets, $G_1$ is open. Construct $G_2$ in the same way. We claim that $G_1, G_2$ are disjoint. Indeed, let $y \in G_1 \cap G_2$. It follows then from the construction that there exist $x_1 \in F_1$ and $x_2 \in F_2$ such that $y \in B(x_i, \frac{\epsilon_{x_i}}{2})$, i = 1, 2. Without any loss of generality, assume that $\epsilon_{x_1} \ge \epsilon_{x_2}$. Let d be the Euclidean metric. Then,
$$d(x_1, x_2) \le d(x_1, y) + d(x_2, y) \le \frac{\epsilon_{x_1}}{2} + \frac{\epsilon_{x_2}}{2} \le \epsilon_{x_1}$$
which implies that $x_2 \in B(x_1, \epsilon_{x_1})$. Since $x_2 \in F_2$, this contradicts the construction of $G_1$.
which implies that x2 ∈ B(x1 , �x1 ). Since x2 ∈ F2 , this contradicts the construction of G1 . Exercise 3.2.2 Complete proof of Proposition 3.2.4. Prove that the following families of sets coincide with Lebesgue measurable sets in R I n. (ii) {J ∪ Z : J is Fσ type, m∗ (Z) = 0}, (iii) S(B(I Rn ) ∪ {Z : m∗ (Z) = 0}). Let E be a Lebesgue measurable set. According to Proposition 3.2.3, for every i, there exists a closed subset Fi of E such that m∗ (E − Fi ) ≤ Deﬁne H =
�∞
i=1
Fi ⊂ E. As the sets E −
�n
i=1
Fi form an increasing sequence, we have,
m∗ (E − H) = m(E − H) = lim m(E − n→∞
1 i
n �
i=1
Fi ) ≤ lim m(E − Fn ) = 0 n→∞
Consequently, Z := E − H is of measure zero, and E = H ∪ Z. Conversely, from the fact that Fi , Z �∞ Rn ) is a σalgebra are Lebesgue measurable, it follows that i=1 Fi ∪Z must be measurable as well (L(I containing open sets).
Since L(I Rn ) is a σalgebra that contains both Borel sets and sets of measure zero, it must also contain
the smallest σalgebra containing the two families. Conversely, representation (ii) above proves that every Lebesgue measurable set belongs to S(B(I Rn ) ∪ {Z : m∗ (Z) = 0}).
3.3 The Fundamental Characterization of Lebesgue Measure

Exercises

Exercise 3.3.1 Follow the outlined steps to prove that every linear isomorphism $g : \mathbb{R}^n \to \mathbb{R}^n$ is a composition of simple isomorphisms $g^\lambda_{H,c}$.
Step 1: Let H be a hyperplane in $\mathbb{R}^n$, and let a, b denote two vectors such that $a, b, a - b \notin H$. Show that there exists a unique simple isomorphism $g^\lambda_{H,c}$ such that
$$g^\lambda_{H,c}(a) = b$$
Hint: Use c = b − a.

Indeed, consider the decompositions
$$a = a_0 + \alpha(b-a), \qquad a_0 \in H,\; \alpha \in \mathbb{R},$$
$$b = b_0 + \beta(b-a), \qquad b_0 \in H,\; \beta \in \mathbb{R}$$
Subtracting the two representations from each other, we get
$$b - a = b_0 - a_0 + (\beta - \alpha)(b-a)$$
which implies
$$b_0 - a_0 + (\beta - \alpha - 1)(b-a) = 0$$
It follows from the assumption $b - a \notin H$ that the two terms are linearly independent. Consequently, $a_0 = b_0$ (and $\beta = \alpha + 1$). Also, the assumption $a, b \notin H$ implies that $\alpha, \beta \ne 0$. Therefore, with $\lambda = \beta/\alpha$, we have
$$g^\lambda_{H,c}(a) = g^\lambda_{H,c}\big(a_0 + \alpha(b-a)\big) = b_0 + \beta(b-a) = b$$
Step 2: Let g be a linear isomorphism in $\mathbb{R}^n$ and consider the subspace Y = Y(g) such that g(x) = x on Y. Assume that $Y \ne \mathbb{R}^n$. Let H be any hyperplane containing Y. Show that there exist vectors a, b such that
$$g(a) \notin H$$
and
$$b \notin H, \qquad b - g(a) \notin H, \qquad b - a \notin H$$
Make use then of the Step 1 result and consider simple isomorphisms $g_1$ and $h_1$ invariant on H and mapping g(a) into b and b into a, respectively. Prove that
$$\dim Y(h_1 \circ g_1 \circ g) > \dim Y(g)$$

From the fact that g is an isomorphism it follows that there must exist a vector $a \ne 0$ for which $g(a) \notin H$. Denote $c = g(a) \notin H$ and consider the corresponding direct sum decomposition
$$\mathbb{R}^n = H \oplus \mathbb{R}c$$
Let b be an arbitrary vector. Consider the decompositions of vectors a and b corresponding to the direct sum above,
$$a = a_0 + \alpha c, \qquad b = b_0 + \beta c, \qquad a_0, b_0 \in H,\; \alpha, \beta \in \mathbb{R}$$
Then
$$b - g(a) = b - c = b_0 + (\beta - 1)c, \qquad b - a = b_0 - a_0 + (\beta - \alpha)c$$
By choosing any $\beta \in \mathbb{R} - \{0, 1, \alpha\}$ we satisfy the required conditions. Let $g_1, h_1$ be now the simple isomorphisms invariant on H and mapping g(a) into b, and b into a, respectively. By construction, $g(a) \ne a$ (otherwise, $a \in Y(g) \subset H$), and $(h_1 \circ g_1 \circ g)(a) = a$, so $a \in Y(h_1 \circ g_1 \circ g)$. As $h_1, g_1$ do not alter H, $Y(g) \subsetneq Y(h_1 \circ g_1 \circ g)$.
Step 3: Use induction to argue that after a finite number of steps m,
$$\dim Y(h_m \circ g_m \circ \ldots \circ h_1 \circ g_1 \circ g) = n$$
Consequently,
$$h_m \circ g_m \circ \ldots \circ h_1 \circ g_1 \circ g = \mathrm{id}_{\mathbb{R}^n}$$
Finish the proof by arguing that the inverse of a simple isomorphism is itself a simple isomorphism, too.

The induction argument is obvious. Also,
$$\left(g^\lambda_{H,c}\right)^{-1} = g^{1/\lambda}_{H,c}$$
Lebesgue Integration Theory
3.4 Measurable and Borel Functions
Exercises

Exercise 3.4.1 Let $\varphi : \mathbb{R}^n \to \bar{\mathbb{R}}$ be a function such that $\mathrm{dom}\,\varphi$ is measurable (Borel). Prove that the following conditions are equivalent to each other:
(i) φ is measurable (Borel).
(ii) $\{(x,y) \in \mathrm{dom}\,\varphi\times\mathbb{R} : y \le \varphi(x)\}$ is measurable (Borel).
(iii) $\{(x,y) \in \mathrm{dom}\,\varphi\times\mathbb{R} : y > \varphi(x)\}$ is measurable (Borel).
(iv) $\{(x,y) \in \mathrm{dom}\,\varphi\times\mathbb{R} : y \ge \varphi(x)\}$ is measurable (Borel).
For $\lambda \ne 0$, the function $g_\lambda(x, y) = (x, y/\lambda)$ is an isomorphism from $\mathbb{R}^{n+1}$ into itself and it maps $\{y < \varphi(x)\}$ onto $\{\lambda y < \varphi(x)\}$. Choose any $0 < \lambda_n \nearrow 1$. Then
$$\{y \le \varphi(x)\} = \bigcap_{n=1}^{\infty}\{y < \lambda_n^{-1}\varphi(x)\} = \bigcap_{n=1}^{\infty} g_{\lambda_n}\big(\{y < \varphi(x)\}\big)$$
Since a linear isomorphism maps measurable (Borel) sets into measurable (Borel) sets, each of the sets on the right-hand side is measurable (Borel) and, consequently, their common part must be measurable (Borel) as well. Use the identity
$$\{y < \varphi(x)\} = \bigcup_{n=1}^{\infty}\{y \le \lambda_n\varphi(x)\} = \bigcup_{n=1}^{\infty} g_{\lambda_n^{-1}}\big(\{y \le \varphi(x)\}\big)$$
to demonstrate the converse statement. The last two results follow from the simple representations
$$\{y < \varphi(x)\} = \mathrm{dom}\,\varphi\times\mathbb{R} - \{y \ge \varphi(x)\}, \qquad \{y \le \varphi(x)\} = \mathrm{dom}\,\varphi\times\mathbb{R} - \{y > \varphi(x)\}$$
Exercise 3.4.2 Prove Proposition 3.4.3. Let $g : \mathbb{R}^n \to \mathbb{R}^n$ be an affine isomorphism. Then a function $\varphi : \mathbb{R}^n \supset \mathrm{dom}\,\varphi \to \bar{\mathbb{R}}$ is measurable (Borel) if and only if the composition $\varphi \circ g$ is measurable (Borel).

Obviously, $g \otimes \mathrm{id}_{\mathbb{R}} : \mathbb{R}^n\times\mathbb{R} \ni (x,y) \to (g(x), y) \in \mathbb{R}^n\times\mathbb{R}$ is an affine isomorphism, too. The assertion follows then from the identity
$$\{(x,y) \in g^{-1}(\mathrm{dom}\,\varphi)\times\mathbb{R} : y < (\varphi\circ g)(x)\} = (g\otimes\mathrm{id}_{\mathbb{R}})^{-1}\{(z,y) \in \mathrm{dom}\,\varphi\times\mathbb{R} : y < \varphi(z)\}$$
Exercise 3.4.3 Prove Proposition 3.4.4. Let $\varphi_i : E \to \bar{\mathbb{R}}$, i = 1, 2, and $\varphi_1 = \varphi_2$ a.e. in E. Then $\varphi_1$ is measurable if and only if $\varphi_2$ is measurable.

Let $E_0 = \{x \in E : \varphi_1(x) = \varphi_2(x)\}$. Then, for i = 1, 2,
$$\underbrace{\{(x,y) \in E\times\mathbb{R} : y < \varphi_i(x)\}}_{=:S_i} = \underbrace{\{(x,y) \in E_0\times\mathbb{R} : y < \varphi_i(x)\}}_{=:S_0} \cup\, Z_i$$
where $Z_i \subset (E - E_0)\times\mathbb{R}$ are of measure zero. Thus, if $S_1$ is measurable then $S_0 = S_1 - Z_1$ is measurable and, in turn, $S_2 = S_0 \cup Z_2$ must be measurable as well.
3.5 Lebesgue Integral of Nonnegative Functions

3.6 Fubini's Theorem for Nonnegative Functions

3.7 Lebesgue Integral of Arbitrary Functions
Exercises

Exercise 3.7.1 Complete the proof of Proposition 3.7.1. Let $a_i \in \bar{\mathbb{R}}$, $i \in \mathbb{N}$. Suppose that $a_i = a_i^1 - a_i^2$, where $a_i^1, a_i^2 \ge 0$, and one of the series
$$\sum_{1}^{\infty} a_i^1, \qquad \sum_{1}^{\infty} a_i^2$$
is finite. Then the sum $\sum_{\mathbb{N}} a_i$ exists and
$$\sum_{\mathbb{N}} a_i = \sum_{1}^{\infty} a_i^1 - \sum_{1}^{\infty} a_i^2$$

Case 3: $\sum_{1}^{\infty} a_i^1 < \infty$, $\sum_{1}^{\infty} a_i^2 = \infty$. From $a_i^+ \le a_i^1$ it follows that $\sum_{1}^{\infty} a_i^+ < \infty$, and from $a_i^2 \le a_i^1 + a_i^-$ it follows that $\sum_{1}^{\infty} a_i^- = \infty$. Consequently, the sum $\sum_{\mathbb{N}} a_i$ exists but it is infinite (equal to $-\infty$). So is the difference of the sums of the two series.
Exercise 3.7.2 Prove Corollary 3.7.1. Let $a_i \in \bar{\mathbb{R}}$. The sum $\sum_{\mathbb{N}} a_i$ is finite if and only if $\sum_{\mathbb{N}} |a_i|$ is finite. In such a case
$$\sum_{\mathbb{N}} a_i = \sum_{1}^{\infty} a_i$$

We have
$$|a_i| = a_i^+ + a_i^-$$
Consequently,
$$\sum_{1}^{n} |a_i| = \sum_{1}^{n} a_i^+ + \sum_{1}^{n} a_i^-$$
If both sequences on the right-hand side have finite limits, so does the left-hand side, and the equality holds in the limit. As the negative part of $|a_i|$ is simply zero, $\sum_{\mathbb{N}} |a_i| = \sum_{1}^{\infty} |a_i|$.

Conversely, assume that $\sum_{1}^{\infty} |a_i|$ is finite. By the Monotone Sequence Lemma, both sequences $\sum_{1}^{n} a_i^+$, $\sum_{1}^{n} a_i^-$ converge. The equality above implies that both limits must be finite as well. Finally, the equality
$$\sum_{1}^{n} a_i = \sum_{1}^{n}\big(a_i^+ - a_i^-\big) = \sum_{1}^{n} a_i^+ - \sum_{1}^{n} a_i^-$$
implies that the left-hand side converges and
$$\sum_{1}^{\infty} a_i = \sum_{1}^{\infty} a_i^+ - \sum_{1}^{\infty} a_i^- =: \sum_{\mathbb{N}} a_i$$
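The decomposition into positive and negative parts behaves exactly as claimed for an absolutely convergent series; a finite-precision sketch (partial sums stand in for the limits):

```python
import math

# Partial sums of a_i = (-1)^i / i^2, an absolutely convergent series
N = 10_000
a = [(-1)**i / i**2 for i in range(1, N + 1)]

plus  = sum(x for x in a if x > 0)    # sum of positive parts a_i^+
minus = sum(-x for x in a if x < 0)   # sum of negative parts a_i^-

print(math.isclose(sum(a), plus - minus))                   # True
print(math.isclose(sum(abs(x) for x in a), plus + minus))   # True
```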
Exercise 3.7.3 Prove Corollary 3.7.2. Let $f : E \to \bar{\mathbb{R}}$ be measurable and $f = f_1 - f_2$, where $f_1, f_2 \ge 0$ are measurable. Assume that at least one of the integrals $\int f_1\,dm$, $\int f_2\,dm$ is finite. Then f is integrable and
$$\int f\,dm = \int f_1\,dm - \int f_2\,dm$$
Repeat the proof of Proposition 3.7.1 (compare also Exercise 3.7.1), replacing sums with integrals.

Exercise 3.7.4 Prove Proposition 3.7.4. The following conditions are equivalent to each other:
(i) f is summable.
(ii) $\int f^+\,dm,\; \int f^-\,dm < +\infty$.
(iii) $\int |f|\,dm < +\infty$.

(i) ⇒ (ii): We have
$$\int f\,dm = \int f^+\,dm - \int f^-\,dm$$
If the left-hand side is finite then so is the right-hand side. This implies that both $\int f^+\,dm$ and $\int f^-\,dm$ must be finite as well.

(ii) ⇒ (iii) follows from
$$\int |f|\,dm = \int (f^+ + f^-)\,dm = \int f^+\,dm + \int f^-\,dm$$

(iii) ⇒ (i) follows from
$$\left|\int f\,dm\right| \le \int |f|\,dm$$

Exercise 3.7.5 Prove Proposition 3.7.5. All functions are measurable. The following properties hold:
(i) f summable, E measurable ⇒ $f|_E$ summable.
(ii) $f, \varphi : E \to \bar{\mathbb{R}}$, $|f| \le \varphi$ a.e. in E, φ summable ⇒ f summable.
(iii) $f_1, f_2 : E \to \bar{\mathbb{R}}$ summable ⇒ $\alpha_1 f_1 + \alpha_2 f_2$ summable for $\alpha_1, \alpha_2 \in \mathbb{R}$.

(i) follows from Proposition 3.7.4 and the monotonicity of the integral, which implies that $\int_E f\,dm \le \int f\,dm$ for $f \ge 0$.
(ii) follows from Proposition 3.7.4 and the monotonicity of the integral.
(iii) follows from (ii) and the inequality
$$|\alpha_1 f_1 + \alpha_2 f_2| \le |\alpha_1|\,|f_1| + |\alpha_2|\,|f_2|$$

Exercise 3.7.6 Prove Theorem 3.7.2 (Fubini's Theorem): Let $f : \mathbb{R}^n\times\mathbb{R}^m \to \bar{\mathbb{R}}$ be summable (and Borel). Then the following properties hold:
(i) $y \to f(x,y)$ is summable for almost all x (Borel for all x).
(ii) $x \to \int f(x,y)\,dm(y)$ is summable (and Borel).
(iii) $\int f\,dm = \int\left(\int f(x,y)\,dm(y)\right) dm(x)$

The result is a direct consequence of Fubini's theorem for nonnegative functions, the definition of the integral for arbitrary functions, Corollary 3.7.2, and Proposition 3.7.4. Summability of f implies that both integrals $\int f^+\,dm$ and $\int f^-\,dm$ are finite. Applying Fubini's theorem for nonnegative functions, we conclude all three statements for both the positive and negative parts of f. Then use Corollary 3.7.2 and Proposition 3.7.4.
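For the counting measure on a finite grid, Fubini's theorem reduces to swapping the order of two finite sums, which is easy to check (a sketch; NumPy assumed):

```python
import numpy as np

# f on a finite product "grid": Fubini for counting measure is just
# summing a matrix by rows first or by columns first.
rng = np.random.default_rng(1)
f = rng.standard_normal((5, 7))     # f(x_i, y_j)

total      = f.sum()                # "double integral"
iterated_x = f.sum(axis=1).sum()    # integrate in y, then in x
iterated_y = f.sum(axis=0).sum()    # integrate in x, then in y

print(np.isclose(total, iterated_x) and np.isclose(total, iterated_y))  # True
```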
3.8 Lebesgue Approximation Sums, Riemann Integrals
Exercises

Exercise 3.8.1 Consider the function f from Example 3.8.1. Construct explicitly the Lebesgue and Riemann approximation sums and explain why the first sum converges while the other does not.

Let
$$\ldots < y_{-1} < y_0 < y_1 < \ldots < y_{k-1} \le 2 < y_k < \ldots < y_{n-1} \le 3 < y_n < \ldots$$
be any partition of the real axis. It follows from the definition of the function that $E_k = f^{-1}([y_{k-1}, y_k)) = \{\text{irrational numbers}\}\cap(0,1)$ and that $E_n = \{\text{rational numbers}\}\cap(0,1)$. Thus, $m(E_k) = 1$ and $m(E_n) = 0$. All other sets $E_i$ are empty. Consequently, the lower and upper Lebesgue sums reduce to
$$s = y_{k-1}\,m\big((0,1)\big) = y_{k-1} \qquad \text{and} \qquad S = y_k$$
If $\sup_i(y_i - y_{i-1}) \to 0$, then both $y_{k-1}, y_k$ must converge simply to 2, and both Lebesgue sums converge to the value of the Lebesgue integral, equal to 2.

On the other hand, if $0 = x_0 < \ldots < x_{k-1} < x_k < \ldots < x_n = 1$ is an arbitrary partition of the interval (0, 1), then for each subinterval $[x_{k-1}, x_k)$ we can choose either a rational or an irrational intermediate point $\xi_k$. If all intermediate points are irrational then $f(\xi_k) = 2$ and the Riemann sum is equal to
$$R = \sum_{k=1}^{n} f(\xi_k)(x_k - x_{k-1}) = \sum_{k=1}^{n} 2(x_k - x_{k-1}) = 2\sum_{k=1}^{n}(x_k - x_{k-1}) = 2$$
Similarly, if all intermediate points are rational, the corresponding value of the Riemann sum is equal to 3. For a Riemann integrable function, the Riemann sums must converge to a common value, which obviously cannot be the case.
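The two families of Riemann sums can be reproduced with exact symbolic arithmetic; the function coded below (value 3 on rationals, 2 on irrationals) matches the function as used in this solution. SymPy is assumed:

```python
from sympy import Rational, sqrt

def f(x):
    # f = 3 on rationals, 2 on irrationals (the function of Example 3.8.1)
    return 3 if x.is_rational else 2

n = 100
width = Rational(1, n)

# Rational sample points: the midpoints of [k/n, (k+1)/n]
R_rat = sum(f(Rational(2*k + 1, 2*n)) * width for k in range(n))

# Irrational sample points: midpoint + a tiny irrational shift
R_irr = sum(f(Rational(2*k + 1, 2*n) + sqrt(2)/10**9) * width for k in range(n))

print(R_rat, R_irr)   # 3 2  -- the Riemann sums cannot converge to a common value
```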
Similarly, if all intermediate points are rational, the corresponding value of the Riemann sum is equal 3. For a Riemann integrable function, the Riemann sums must converge to a common value which obviously cannot be the case. Exercise 3.8.2 Let f : R I n ⊃ D →R I¯ be a measurable (Borel) function. Prove that the inverse image f −1 ([c, d)) = {x ∈ D : c ≤ f (x) < d} is measurable (Borel), for any constants c, d. It is sufﬁcient to prove that f −1 ((c, ∞)) = {x ∈ D : c < f (x)} is a measurable (Borel) set in R I n , for any constant c. Indeed, it follows from the identity, f −1 ([c, ∞)) = f −1 (
∞ �
n=1
(c −
∞ � 1 1 , ∞)) = f −1 ((c − , ∞)) n n n=1
that f −1 ([c, ∞)) is measurable. Similarly, the identity, f −1 ([c, d)) = f −1 ([c, ∞) − [d, ∞)) = f −1 ([c, ∞)) − f −1 ([d, ∞)) implies that f −1 ([c, d)) must be measurable as well. I n � x → (x, c) ∈ R I n+1 is obviously continuous, and Function ic : R f −1 (c, ∞) = i−1 c ({(x, t) : x ∈ D, t < f (x)}) This proves the assertion for Borel functions, as the inverse image of a Borel set through a continuous function is Borel. If f is only measurable then the Fubini’s Theorem (Generic Case) implies only that
the set $f^{-1}((c,\infty))$ is measurable a.e. in c. The critical algebraic property that distinguishes the set $\{y < f(x)\}$ from an arbitrary set is that,
$$f^{-1}((c,\infty)) = f^{-1}\Big(\bigcup_{c_n \searrow c}(c_n, \infty)\Big) = \bigcup_{c_n \searrow c} f^{-1}((c_n, \infty))$$
for any sequence $c_n \searrow c$. As the sets $f^{-1}((c,\infty))$ are measurable a.e. in c, we can always find a sequence $c_n \searrow c$ for which the sets $f^{-1}((c_n,\infty))$ are measurable. Consequently, $f^{-1}((c,\infty))$ is measurable as well.
$L^p$ Spaces

3.9 Hölder and Minkowski Inequalities
Exercises

Exercise 3.9.1 Prove the generalized Hölder inequality:
$$\left|\int uvw\right| \le \|u\|_p\,\|v\|_q\,\|w\|_r$$
where $1 \le p, q, r \le \infty$, $1/p + 1/q + 1/r = 1$.

In view of the estimate:
$$\left|\int uvw\right| \le \int |uvw| = \int |u|\,|v|\,|w|$$
we can restrict ourselves to nonnegative, real-valued functions only. The inequality follows then from the original Hölder result,
$$\int uvw \le \left(\int u^p\right)^{\frac1p}\left(\int (vw)^{\frac{p}{p-1}}\right)^{\frac{p-1}{p}}$$
If $1/q + 1/r = 1 - 1/p = (p-1)/p$, then for $q' = q(p-1)/p$, $r' = r(p-1)/p$, we have $1/q' + 1/r' = 1$. Consequently,
$$\int v^{\frac{p}{p-1}} w^{\frac{p}{p-1}} \le \left(\int \big(v^{\frac{p}{p-1}}\big)^{q'}\right)^{\frac{1}{q'}}\left(\int \big(w^{\frac{p}{p-1}}\big)^{r'}\right)^{\frac{1}{r'}} = \left(\int v^q\right)^{\frac{p}{q(p-1)}}\left(\int w^r\right)^{\frac{p}{r(p-1)}}$$
Combining the two inequalities, we get the final result.
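For the counting measure on a finite set, the generalized Hölder inequality can be spot-checked numerically (a sketch with p = q = r = 3; NumPy assumed):

```python
import numpy as np

rng = np.random.default_rng(2)
u, v, w = rng.random(50), rng.random(50), rng.random(50)

p = q = r = 3.0                      # 1/3 + 1/3 + 1/3 = 1
lhs = abs(np.sum(u * v * w))         # |sum u*v*w| (counting measure)
rhs = (np.sum(u**p)**(1/p) *
       np.sum(v**q)**(1/q) *
       np.sum(w**r)**(1/r))          # ||u||_p ||v||_q ||w||_r

print(lhs <= rhs + 1e-12)            # True
```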
Exercise 3.9.2 Prove that the Hölder inequality
$$\int_\Omega |fg| \le \left(\int_\Omega |f|^p\right)^{\frac1p}\left(\int_\Omega |g|^q\right)^{\frac1q}, \qquad \frac1p + \frac1q = 1, \quad p, q > 1$$
turns into an equality if and only if there exist constants α and β such that
$$\alpha|f|^p + \beta|g|^q = 0 \qquad \text{a.e. in } \Omega$$

It is sufficient to prove the fact for real-valued and nonnegative functions $f, g \ge 0$, with unit norms, i.e.,
$$\int_\Omega f^p = 1, \qquad \int_\Omega g^q = 1$$
The critical observation here is the fact that the inequality used in the proof of the Hölder inequality,
$$t^{\frac1p} v^{\frac1q} \le \frac1p\,t + \frac1q\,v$$
turns into an equality if and only if t = v. Consequently, for nonnegative functions u, v,
$$\int_\Omega \left(\frac1p\,u + \frac1q\,v - u^{\frac1p} v^{\frac1q}\right) = 0 \quad \text{implies} \quad u = v \quad \text{a.e. in } \Omega$$
Substituting $u = f^p$, $v = g^q$ we get the desired result.

Exercise 3.9.3
(i) Show that the integral
$$\int_0^{\frac12} \frac{dx}{x\ln^2 x}$$
is finite, but, for any $\epsilon > 0$, the integral
$$\int_0^{\frac12} \frac{dx}{[x\ln^2 x]^{1+\epsilon}}$$
is infinite.
(ii) Use the property above to construct an example of a function $f : (0,1) \to \mathbb{R}$ which belongs to the space $L^p(0,1)$, $1 < p < \infty$, but does not belong to any $L^q(0,1)$, for q > p.

(i) Use the substitution y = ln x to compute:
$$\int \frac{dx}{x\ln^2 x} = \int \frac{dy}{y^2} = -\frac1y = -\frac{1}{\ln x}$$
Hence,
$$\int_\epsilon^{\frac12} \frac{dx}{x\ln^2 x} = -\frac{1}{\ln 1/2} + \frac{1}{\ln\epsilon} \;\longrightarrow\; \frac{1}{\ln 2} \qquad \text{as } \epsilon \to 0^+$$
Define:
$$f_n = \begin{cases} \dfrac{1}{x\ln^2 x} & x \in [\frac1n, \frac12) \\ 0 & x \in (0, \frac1n) \end{cases}$$
Then $f_n \nearrow f = \frac{1}{x\ln^2 x}$ in $(0, \frac12)$ and, therefore, f is integrable with $\int f_n \nearrow \int f$.

Raising function f to any power greater than 1 renders the resulting integral infinite. It is sufficient to construct a lower bound whose integral is infinite. It follows easily from applying L'Hospital's rule that
$$\lim_{x\to 0^+} x^{\epsilon'}|\ln x|^r = 0$$
for an arbitrarily small $\epsilon' > 0$ and an arbitrarily large $r > 0$. Consequently, for any $\epsilon', r$, there exists a $\delta > 0$ such that
$$x^{\epsilon'}|\ln x|^r \le 1, \quad \text{i.e.,} \quad \frac{1}{|\ln x|^r} \ge x^{\epsilon'}, \qquad \text{for } x < \delta$$
In particular, with $r = 2(1+\epsilon)$ and $\epsilon' < \epsilon$,
$$\frac{1}{[x\ln^2 x]^{1+\epsilon}} \ge \frac{x^{\epsilon'}}{x^{1+\epsilon}} = \frac{1}{x^{1+\epsilon-\epsilon'}} \qquad \text{for } x < \delta,$$
and the integral of this lower bound over $(0,\delta)$ is infinite.
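The value 1/ln 2 from part (i) can be confirmed by a crude quadrature after the substitution y = ln x (a sketch; the truncation at y = −50 stands in for −∞):

```python
import math

# After y = ln(x), the integral becomes  int_{-inf}^{ln(1/2)} dy / y^2.
a, b, n = -50.0, math.log(0.5), 400_000
h = (b - a) / n
ys = (a + (k + 0.5) * h for k in range(n))    # midpoint rule
approx = sum(h / y**2 for y in ys)

exact = 1.0 / math.log(2) - 1.0 / 50.0        # tail beyond y = -50 removed
print(abs(approx - exact) < 1e-5)             # True
```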
$x_n \in f^{-1}(G)$ for n > N. But this implies that $f(x_n) \in G$, for n > N, i.e. $f(x_n) \to f(x)$.
Equivalence of second and third conditions follows immediately from the duality principle (Prop. 4.4.2). Exercise 4.4.5 Let X be a sequential Hausdorff topological space, and f : X → Y a sequentially continuous function into an arbitrary topological space Y . Prove that f is continuous.
Let G ⊂ Y be an arbitrary open and, therefore, sequentially open set in Y . By Exercise 4.4.4, f −1 (G)
is sequentially open in X. But X is a sequential space, so f −1 (G) must be open. This proves that f is (globally) continuous.
Exercise 4.4.6 Let X be a topological space such that, for every function f : X → Y (with an arbitrary topological space Y), sequential continuity of f implies its continuity. Prove that X must be a sequential topological space.

1. Let Y = X with a new topology introduced through a family of open sets $\mathcal{X}_1$, where for the open sets we take all sequentially open sets in X. Prove that such sets satisfy the axioms for open sets.

• The empty set and the space X are open and, therefore, sequentially open as well.
• The union of an arbitrary family $G_\iota$, $\iota \in I$, of sequentially open sets is sequentially open. Indeed, let $x \in \bigcup_{\iota\in I} G_\iota$. There exists then $\kappa \in I$ such that $x \in G_\kappa$. Let $x_n \to x$. As $G_\kappa$ is sequentially open, there exists N such that, for n > N, $x_n \in G_\kappa \subset \bigcup_{\iota\in I} G_\iota$. Done.

• The intersection of a finite number of sequentially open sets is sequentially open. Indeed, let $x_n \to x$ and $x \in G_1\cap G_2\cap\ldots\cap G_m$. Let $N_i$, $i = 1,\ldots,m$, be such that $x_n \in G_i$ for $n > N_i$. Take $N = \max\{N_1,\ldots,N_m\}$ to see that, for n > N, $x_n \in G_1\cap\ldots\cap G_m$.
2. Take Y = X with the new, stronger (explain, why?) topology induced by X1 and consider the
identity map idX mapping original topological space (X, X ) onto (X, X1 ). Prove that the map is sequentially continuous.
Let xn → x in the original topology. We need to prove that xn → x in the new topology as
well. Take any open neighborhood G ∈ X1 of x. By construction, G is sequentially open in the original topology so there exists N such that, for n > N , xn ∈ G. But, due to the arbitrariness
of G, this proves that $x_n \to x$ in the new topology.
3. The identity map is, therefore, continuous as well. But this means that
$$G \in \mathcal{X}_1 \quad\Rightarrow\quad \mathrm{id}_X^{-1}(G) = G \in \mathcal{X}$$
i.e. every sequentially open set in X is automatically open.
4.5 Topological Equivalence. Homeomorphism

Theory of Metric Spaces

4.6 Metric and Normed Spaces, Examples
Exercises

Exercise 4.6.1 Let (X, d) be a metric space. Show that $\rho(x,y) = \min\{1, d(x,y)\}$ is also a metric on X.

Symmetry and positive definiteness of d(x, y) imply symmetry and positive definiteness of ρ(x, y). Perhaps the easiest way to see that the triangle inequality is satisfied is to consider the eight possible cases:

1: d(x,y) < 1, d(x,z) < 1, d(z,y) < 1. Then $\rho(x,y) = d(x,y) \le d(x,z) + d(z,y) = \rho(x,z) + \rho(z,y)$.

2: d(x,y) < 1, d(x,z) < 1, d(z,y) ≥ 1. Then $\rho(x,y) = d(x,y) < 1 = \rho(z,y) \le \rho(x,z) + \rho(z,y)$.

3: d(x,y) < 1, d(x,z) ≥ 1, d(z,y) < 1. Then $\rho(x,y) = d(x,y) < 1 = \rho(x,z) \le \rho(x,z) + \rho(z,y)$.

4: d(x,y) < 1, d(x,z) ≥ 1, d(z,y) ≥ 1. Then $\rho(x,y) = d(x,y) < 1 < 2 = \rho(x,z) + \rho(z,y)$.

5: d(x,y) ≥ 1, d(x,z) < 1, d(z,y) < 1. Then $\rho(x,y) = 1 \le d(x,y) \le d(x,z) + d(z,y) = \rho(x,z) + \rho(z,y)$.

6: d(x,y) ≥ 1, d(x,z) < 1, d(z,y) ≥ 1. Then $\rho(x,y) = 1 = \rho(z,y) \le \rho(x,z) + \rho(z,y)$.

7: d(x,y) ≥ 1, d(x,z) ≥ 1, d(z,y) < 1. Then $\rho(x,y) = 1 = \rho(x,z) \le \rho(x,z) + \rho(z,y)$.

8: d(x,y) ≥ 1, d(x,z) ≥ 1, d(z,y) ≥ 1. Then $\rho(x,y) = 1 < 2 = \rho(x,z) + \rho(z,y)$.
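The case analysis can also be checked by brute force over random triples in the plane (a sketch; NumPy assumed):

```python
import numpy as np

def rho(d):
    # The truncated metric rho = min{1, d}
    return min(1.0, d)

rng = np.random.default_rng(3)
ok = True
for _ in range(10_000):
    x, y, z = rng.uniform(-2, 2, size=(3, 2))   # random triple in R^2
    d = lambda u, v: float(np.linalg.norm(u - v))
    ok = ok and rho(d(x, y)) <= rho(d(x, z)) + rho(d(z, y)) + 1e-12
print(ok)   # True: triangle inequality holds in every sampled case
```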
Exercise 4.6.2 Show that any two norms � · �p and � · �q in R I n , 1 ≤ p, q ≤ ∞, are equivalent, i.e., there exist constants C1 > 0, C2 > 0 such that
and
�x�p ≤ C1 �x�q
�x�q ≤ C2 �x�p
for any x ∈ R I n . Try to determine optimal (minimum) constants C1 and C2 . It is sufﬁcient to show that any pnorm is equivalent to e.g. the ∞norm. We have, � p1 � n � 1 p p p �x�∞ = max xi  = xj  (for a particular index j) = (xj  ) ≤ xi  = �x�p i=1,...,n
i=1
At the same time,
�x�p =
�
n � i=1
xi 
p
� p1
≤
�
n � i=1
�x�p∞
� p1
= n �x�∞
Consider now 1 < p, q < ∞, with p < q. Let C_pq be the smallest constant for which the following estimate holds:

‖x‖_p ≤ C_pq ‖x‖_q  ∀x ∈ ℝⁿ.

Constant C_pq can be determined by solving the constrained maximization problem

C_pq = max_{‖x‖_q = 1} ‖x‖_p.

This leads to the Lagrangian

L(x, λ) = Σ_{i=1}^n |x_i|^p − λ (Σ_{i=1}^n |x_i|^q − 1)

and the necessary conditions

∂L/∂x_i = sgn(x_i) (p|x_i|^{p−1} − λq|x_i|^{q−1}) = 0  if x_i ≠ 0,  i = 1, …, n.

For x_i ≠ 0, we get

|x_i| = (λq/p)^{1/(p−q)}.

Raising both sides to power q and summing over i, we get

1 = Σ_{i=1}^n |x_i|^q = m (λq/p)^{q/(p−q)},
114
APPLIED FUNCTIONAL ANALYSIS
SOLUTION MANUAL
where m is the number of coordinates for which x_i ≠ 0. Notice that m must be positive to satisfy the constraint. This yields the value for λ,

λ = (p/q) m^{−(p−q)/q},

and the corresponding value for the p-norm,

(Σ_{i=1}^n |x_i|^p)^{1/p} = m^{1/p − 1/q}.

Consequently, since 1/p − 1/q > 0 for p < q, the maximum is attained at a point for which m = n, i.e., all coordinates are different from zero. This gives the value of the optimal constant,

C_pq = n^{1/p − 1/q}.

Upon passing with p, q to 1 or ∞, we get the corresponding values of the constant for the limiting cases.
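The optimal constant can be verified numerically. The sketch below (assuming p < q, as in the derivation above) checks that n^{1/p − 1/q} bounds the ratio ‖x‖_p/‖x‖_q over random vectors and that the bound is attained at the constant vector, i.e., at m = n:

```python
import random

def pnorm(x, p):
    # the p-norm on R^n for finite p
    return sum(abs(t) ** p for t in x) ** (1.0 / p)

n, p, q = 5, 2.0, 4.0                 # p < q, so C_pq = n^(1/p - 1/q)
C = n ** (1.0 / p - 1.0 / q)

rng = random.Random(1)
for _ in range(5000):
    x = [rng.gauss(0, 1) for _ in range(n)]
    # C_pq is an upper bound for ||x||_p / ||x||_q
    assert pnorm(x, p) / pnorm(x, q) <= C + 1e-9

ones = [1.0] * n                      # all coordinates nonzero: m = n
assert abs(pnorm(ones, p) / pnorm(ones, q) - C) < 1e-9   # bound attained
```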
Exercise 4.6.3 Consider ℝᴺ with the l¹ norm: for x = (x₁, …, x_N),

‖x‖₁ = Σ_{i=1}^N |x_i|.

Let ‖x‖ now be any other norm defined on ℝᴺ.

(i) Show that there exists a constant C > 0 such that ‖x‖ ≤ C‖x‖₁ ∀x ∈ ℝᴺ.

(ii) Use (i) to demonstrate that the function ℝᴺ ∋ x → ‖x‖ ∈ ℝ is continuous in the l¹ norm.

(iii) Use the Weierstrass Theorem to conclude that there exists a constant D > 0 such that ‖x‖₁ ≤ D‖x‖ ∀x ∈ ℝᴺ.

Therefore, the l¹ norm is equivalent to any other norm on ℝᴺ. Explain why the result implies that any two norms defined on an arbitrary finite-dimensional vector space must be equivalent.

(i) Let e_i denote the canonical basis in ℝᴺ. Then

‖x‖ = ‖Σ_{i=1}^N x_i e_i‖ ≤ Σ_{i=1}^N |x_i| ‖e_i‖ ≤ C Σ_{i=1}^N |x_i|,

where C = max{‖e₁‖, …, ‖e_N‖}.

(ii) This follows immediately from the fact that |‖x‖ − ‖y‖| ≤ ‖x − y‖ ≤ C‖x − y‖₁ and property (i).

(iii) The l¹ unit sphere {x : ‖x‖₁ = 1} is compact and, by (ii), the norm ‖·‖ is continuous in the l¹ topology. Consequently, norm ‖·‖ attains a minimum c on the unit sphere, i.e.,

c ≤ ‖ x/‖x‖₁ ‖  ∀x ≠ 0.

Positive definiteness of the norm implies that c > 0. Multiplying by ‖x‖₁/c, we get ‖x‖₁ ≤ c⁻¹‖x‖.

Take now two arbitrary norms. As each of them is equivalent to norm ‖·‖₁, they must be equivalent to each other as well.
4.7
Topological Properties of Metric Spaces
Exercises

Exercise 4.7.1 Prove that F : (X, d) → (Y, ρ) is continuous if and only if the inverse image of every (open) ball B(y, ε) in Y is an open set in X.

It is sufficient to show that the inverse image of any open set G in Y is open in X. For any y ∈ G, there exists ε_y > 0 such that B(y, ε_y) ⊂ G. Set G thus can be represented as a union of open balls,

G = ⋃_{y∈G} B(y, ε_y).

Consequently, the set

F⁻¹(G) = F⁻¹(⋃_{y∈G} B(y, ε_y)) = ⋃_{y∈G} F⁻¹(B(y, ε_y)),

as a union of open sets, must be open.
Exercise 4.7.2 Let X = C^∞(a, b) be the space of infinitely differentiable functions equipped with the Chebyshev metric. Let F : X → X be the derivative operator, F f = df/dx. Is F a continuous map on X?

No, it is not. Take, for instance, [a, b] = [0, π] and the sequence f_n(x) = sin(nx)/n. Then

‖f_n‖ = max_{x∈[0,π]} |sin(nx)/n| ≤ 1/n → 0,

but, at the same time,

‖f_n′‖ = max_{x∈[0,π]} |f_n′(x)| = max_{x∈[0,π]} |cos(nx)| = 1 ↛ 0.
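The counterexample can be checked numerically by sampling the sup norm on a grid (a sketch; the sampled maximum only approximates the true Chebyshev norm):

```python
import math

def sup_norm(f, a=0.0, b=math.pi, samples=10001):
    # approximate the Chebyshev (sup) norm by sampling on a uniform grid
    return max(abs(f(a + (b - a) * k / (samples - 1))) for k in range(samples))

for n in (1, 10, 100):
    fn = lambda x, n=n: math.sin(n * x) / n     # ||f_n|| <= 1/n -> 0
    dfn = lambda x, n=n: math.cos(n * x)        # (F f_n)(x) = cos(nx)
    assert sup_norm(fn) <= 1.0 / n + 1e-12
    assert sup_norm(dfn) == 1.0                 # ||F f_n|| stays at 1
```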
4.8
Completeness and Completion of Metric Spaces
Exercises

Exercise 4.8.1 Let Ω ⊂ ℝⁿ be an open set and let (C(Ω), ‖·‖_p) denote the (incomplete) metric space of continuous, real-valued functions on Ω with metric induced by the L^p norm. Construct arguments supporting the fact that the completion of this space is L^p(Ω). Hint: See Exercise 4.9.3 and use Theorem 4.8.2.

The definition of the completion of a metric space and the density result imply that L^p(Ω) is a completion of (C(Ω), ‖·‖_p). By Theorem 4.8.2, completions are unique (up to an isometry) and, therefore, the space L^p(Ω) is the completion of (C(Ω), ‖·‖_p).
Exercise 4.8.2 Let x_{n_k} be a subsequence of a Cauchy sequence x_n. Show that if x_{n_k} converges to x, so does the whole sequence x_n.

Recall that x_n is Cauchy if

∀ε > 0 ∃N₁ = N₁(ε) : n, m ≥ N₁ ⇒ d(x_n, x_m) < ε.

Also, x_{n_k} → x means that

∀ε > 0 ∃N₂ = N₂(ε) : k ≥ N₂ ⇒ d(x_{n_k}, x) < ε.
Let ε now be an arbitrary positive number. Choose N = max{N₁(ε/2), N₂(ε/2)} and pick any k ≥ N₂(ε/2) such that n_k ≥ N₁(ε/2). Then, for any n ≥ N,

d(x_n, x) ≤ d(x_n, x_{n_k}) + d(x_{n_k}, x) < ε/2 + ε/2 = ε.

Exercise 4.9.2 Let f ∈ L^∞(Ω). Show that, for every ε > 0, there exists a simple function φ_ε : Ω → ℝ such that

‖f − φ_ε‖_∞ ≤ ε.

Hint: Use the Lebesgue approximation sums.

Indeed, let … < y_{i−1} < y_i < y_{i+1} < … be a partition of ℝ such that

sup_i |y_i − y_{i−1}| ≤ ε.

Define E_i = f⁻¹([y_{i−1}, y_i)), select any c_i ∈ [y_{i−1}, y_i], and set

φ_ε = Σ_{i∈ℤ} c_i χ_{E_i}.
Exercise 4.9.3 Let Ω ⊂ ℝⁿ be an open set. Let f ∈ L^p(Ω), 1 ≤ p < ∞. Use the results of Exercise 4.9.1 and Exercise 4.9.2 to show that there exists a sequence of continuous functions φ_n : Ω → ℝ converging to function f in the L^p(Ω) norm.

1. Assume additionally that domain Ω is bounded, and 0 ≤ f ≤ M < ∞ a.e. in Ω. Pick an arbitrary ε > 0.

• Select a partition 0 = y₀ < y₁ < … < y_N = M such that

max_{i=1,…,N} |y_i − y_{i−1}| ≤ (ε/2) [m(Ω)]^{−1/p}

and define

φ = Σ_{i=1}^N y_i χ_{E_i},  E_i = f⁻¹([y_{i−1}, y_i)).

Then

‖f − φ‖_p ≤ ‖f − φ‖_∞ [m(Ω)]^{1/p} ≤ ε/2.

• Use Exercise 4.9.1 to select continuous functions φ_i such that

‖χ_{E_i} − φ_i‖_p ≤ ε / (2^{i+1} y_i).

Then

‖Σ_{i=1}^N y_i χ_{E_i} − Σ_{i=1}^N y_i φ_i‖_p ≤ Σ_{i=1}^N y_i ‖χ_{E_i} − φ_i‖_p ≤ Σ_{i=1}^N ε/2^{i+1} ≤ ε/2.

By the triangle inequality, ‖f − Σ_{i=1}^N y_i φ_i‖_p ≤ ε.

2. Drop the assumption of f being bounded. Define f_M(x) = min{f(x), M}. Then f_M → f pointwise as M → ∞ and, by construction, the functions f_M are dominated by f. Thus, by the Lebesgue Dominated Convergence Theorem, ‖f_M − f‖_p → 0 as M → ∞. Apply then the result of the previous step to the functions f_M.

3. Drop the assumption of f being nonnegative. Split function f into its positive and negative parts, f = f₊ − f₋, and apply the previous step to each of the parts separately.

4. Drop the assumption of Ω being bounded. Let B_n = B(0, n). Consider the restriction f_n of f to Ω ∩ B_n. By the Lebesgue Dominated Convergence argument again, ‖f_n − f‖_p → 0. Apply then the result of the previous step to f_n.
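Step 1 can be illustrated numerically. The sketch below uses the hypothetical choices Ω = (0, 1), f(x) = x², M = 1, p = 2, builds the Lebesgue-sum simple function φ, and checks the bound ‖f − φ‖_p ≤ ‖f − φ‖_∞ [m(Ω)]^{1/p} ≤ ε/2:

```python
import math

def lp_err(f, g, p=2, N=20000):
    # midpoint-rule approximation of ||f - g|| in L^p(0, 1)
    s = sum(abs(f((k + 0.5) / N) - g((k + 0.5) / N)) ** p for k in range(N)) / N
    return s ** (1.0 / p)

f = lambda x: x * x              # 0 <= f <= M = 1 on Omega = (0, 1), m(Omega) = 1
eps, p = 0.1, 2
n_lev = math.ceil(2 / eps)       # partition of [0, M] with step <= (eps/2) [m(Omega)]^(-1/p)
phi = lambda x: math.floor(f(x) * n_lev) / n_lev   # phi = y_i on E_i = f^{-1}([y_i, y_{i+1}))
assert lp_err(f, phi, p) <= eps / 2
```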
Exercise 4.9.4 Argue that, in the result of Exercise 4.9.3, one can additionally assume that the functions f_n have compact support.

Let Ω ⊂ ℝⁿ be an open set. Define

Ω_n := {x ∈ Ω : d(x, Ω′) > 1/n, x ∈ B_n},

where Ω′ denotes the complement of Ω. Continuity of the distance function d(x, A) implies that the sets Ω_n are open. By construction, they are also bounded and Ω̄_n ⊂ Ω_{n+1}. Consider the restriction f_n of f to Ω_n and apply the result of Exercise 4.9.3 to function f_n. Since Ω̄_n ⊂ Ω_{n+1}, one can assume that the open sets G used in Exercise 4.9.1 are contained in Ω_{n+1}. Consequently, the corresponding continuous approximations have their supports in Ω_{n+1} ⊂ Ω.
Exercise 4.9.5 Let F be a uniformly bounded class of functions in the Chebyshev space C[a, b], i.e.,

∃M > 0 : |f(x)| ≤ M  ∀x ∈ [a, b], ∀f ∈ F.

Let G be the corresponding class of primitive functions,

F(x) = ∫_a^x f(s) ds,  f ∈ F.

Show that G is precompact in the Chebyshev space.

According to the Arzelà–Ascoli Theorem, it is sufficient to show that G is uniformly bounded and equicontinuous. Uniform boundedness follows at once from |F(x)| ≤ M(b − a). Next, we have

|F(x) − F(y)| = |∫_y^x f(s) ds| ≤ |∫_y^x |f(s)| ds| ≤ M|x − y|.

Thus the functions F(x) are Lipschitz continuous with a uniform (with respect to f ∈ F) bound on the Lipschitz constant. This implies that the functions F, f ∈ F, are equicontinuous.
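The uniform Lipschitz bound |F(x) − F(y)| ≤ M|x − y| can be sanity-checked numerically. The sketch below uses a hypothetical sample family f_k(s) = sin(ks), which satisfies |f_k| ≤ M = 1 on [0, 1]:

```python
import math
import random

def primitive(f, x, N=2000):
    # midpoint approximation of F(x) = integral of f over [0, x]
    return sum(f(x * (k + 0.5) / N) for k in range(N)) * x / N

M = 1.0
family = [lambda s, k=k: math.sin(k * s) for k in range(1, 40)]
rng = random.Random(2)
for f in family:
    for _ in range(10):
        x, y = rng.random(), rng.random()
        # equicontinuity: one Lipschitz constant M for every member of the family
        assert abs(primitive(f, x) - primitive(f, y)) <= M * abs(x - y) + 1e-6
```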
4.10

Contraction Mappings and Fixed Points

Exercises

Exercise 4.10.1 Reformulate Example 4.10.6 concerning the Fredholm Integral Equation using the L^p spaces, 1 < p < ∞. What is the natural regularity assumption on the kernel function K(x, y)? Does it have to be bounded?

Let

Af = φ + λ ∫_a^b K(·, y) f(y) dy.
We need to come up with sufficient conditions to have the estimate

‖Af − Ag‖_p = ‖A(f − g)‖_p ≤ k ‖f − g‖_p,

where k < 1. The inequality will also automatically imply that operator A is well defined. Let h = f − g. We have

‖λ ∫_a^b K(·, y) h(y) dy‖_p^p = |λ|^p ∫_a^b |∫_a^b K(x, y) h(y) dy|^p dx

≤ |λ|^p ∫_a^b (∫_a^b |K(x, y)|^{p/(p−1)} dy)^{p−1} (∫_a^b |h(y)|^p dy) dx

= |λ|^p ∫_a^b (∫_a^b |K(x, y)|^{p/(p−1)} dy)^{p−1} dx ∫_a^b |h(y)|^p dy,

where we have applied the Hölder inequality to the inner integral to obtain an estimate with the desired L^p norm of function h. A natural assumption is thus to request the inequality

|λ| (∫_a^b (∫_a^b |K(x, y)|^{p/(p−1)} dy)^{p−1} dx)^{1/p} < 1.

In particular, the kernel does not have to be bounded.

… The function F(t, q) = t ln q is defined only for q > 0, and it is not uniformly Lipschitz with respect to q in the domain of definition. We need to restrict ourselves to a smaller domain [0, T] × [ε, ∞), where ε < 1 is fixed and T is to be determined. The Mean-Value Theorem implies then that
ln q₁ − ln q₂ = (1/ξ)(q₁ − q₂)

for some intermediate point ξ ∈ (q₁, q₂). This implies that, in the smaller domain, the function F(t, q) is uniformly Lipschitz in q:

|t ln q₁ − t ln q₂| ≤ (T/ε)|q₁ − q₂|  ∀t ∈ [0, T], q₁, q₂ ∈ [ε, ∞).

The initial-value problem is equivalent to the integral equation: q ∈ C([0, T]),

q(t) = 1 + ∫₀ᵗ s ln q(s) ds,

and function q is a solution of the integral equation if and only if q is a fixed point of the operator

A : C([0, T]) → C([0, T]),  (Aq)(t) = 1 + ∫₀ᵗ s ln q(s) ds.
We have

|(Aq₁)(t) − (Aq₂)(t)| ≤ ∫₀ᵗ |s ln q₁(s) − s ln q₂(s)| ds ≤ ∫₀ᵗ (T/ε)|q₁(s) − q₂(s)| ds ≤ (T²/ε) ‖q₁ − q₂‖_∞.

Consequently, map A is a contraction if T²/ε < 1.
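The contraction can be observed by iterating a discretized version of A (a sketch using the trapezoidal rule, my own choice of discretization; note that q ≡ 1 is the exact fixed point here, since ln 1 = 0):

```python
import math

T, N = 0.5, 200                  # T^2 / eps < 1 for eps close to 1: A is a contraction
ts = [T * k / (N - 1) for k in range(N)]

def A(q):
    # discretized (Aq)(t) = 1 + integral_0^t s ln q(s) ds, trapezoidal rule
    out, acc = [1.0], 0.0
    for k in range(1, N):
        h = ts[k] - ts[k - 1]
        acc += 0.5 * h * (ts[k - 1] * math.log(q[k - 1]) + ts[k] * math.log(q[k]))
        out.append(1.0 + acc)
    return out

q = [2.0] * N                    # start away from the fixed point
for _ in range(30):
    q_prev, q = q, A(q)
diff = max(abs(a - b) for a, b in zip(q, q_prev))
assert diff < 1e-12                           # successive iterates contract
assert max(abs(v - 1.0) for v in q) < 1e-9    # iterates converge to q = 1
```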
… such that v ∈ C, |β − α| < δ ⇒ βv ∈ B.
such that v ∈ C, β − α < δ
⇒
βv ∈ B
In particular, for β = α, v∈C
⇒
αv ∈ B
In other words, for every α, ∀B ∈ B0 ∃C ∈ B0 : αC ⊂ B i.e., αB0 � B0 . For α �= 0, this implies that αB0 ∼ B0 .
1 α B0
� B0 or, equivalently, B0 � αB0 . Consequently,
5.2

Locally Convex Topological Vector Spaces
Exercises

Exercise 5.2.1 Let A ∈ L(X, Y) be a linear transformation from a vector space X into a normed space Y. Assume that A is noninjective, and define

p(x) := ‖Ax‖_Y.

Explain why p is a seminorm but not a norm.

Function p(x) is homogeneous and it satisfies the triangle inequality. But it is not positive definite, as p(x) vanishes on the null space N(A).

Exercise 5.2.2 Show that each of the seminorms inducing a locally convex topology is continuous with respect to this topology.

Consider seminorm p_κ, κ ∈ I, and x ∈ V. We need to show that

∀ε > 0 ∃C ∈ B_x : y ∈ C ⇒ |p_κ(y) − p_κ(x)| ≤ ε.

It is sufficient to take C = x + B({κ}, ε). Indeed, by definition of B({κ}, ε), y ∈ C ⇔ p_κ(y − x) ≤ ε and, consequently,

p_κ(y) = p_κ(x + (y − x)) ≤ p_κ(x) + p_κ(y − x) ≤ p_κ(x) + ε.

At the same time,

p_κ(x) = p_κ(y + (x − y)) ≤ p_κ(y) + p_κ(x − y) = p_κ(y) + p_κ(y − x) ≤ p_κ(y) + ε.

Exercise 5.2.3 Show that replacing the weak inequality in the definition of set M_c with a strict one does not change the properties of M_c.

Properties (i)–(iii) follow from the same arguments as for the case with the weak inequality.

(iv) Consider two cases:

• If p(u) = 0, then p(αu) = αp(u) = 0 < c, so αu ∈ M_c for any α > 0.

• If p(u) ≠ 0, take any α > p(u)/c. Then

p(α⁻¹u) = α⁻¹ p(u) < c,

so α⁻¹u ∈ M_c.
… there exists a finite subset I₀ ⊂ I and a constant C > 0 such that

|f(v)| ≤ C max_{ι∈I₀} p_ι(v).

The following is a straightforward generalization of the arguments used in Proposition 5.6.1. For linear functionals defined on a t.v.s., continuity at 0 implies continuity at any u. Indeed, let u be an arbitrary vector. We need to show that

∀ε > 0 ∃B ∈ B₀ : y ∈ u + B ⇒ |f(y) − f(u)| < ε.

If f is continuous at 0, then

∀ε > 0 ∃B ∈ B₀ : z ∈ B ⇒ |f(z)| < ε.

Consequently, for y ∈ u + B, i.e., y − u ∈ B,

|f(y) − f(u)| = |f(y − u)| < ε.

Thus, it is sufficient to show that the condition above is equivalent to continuity of f at 0.

Necessity: Let f be continuous at 0. For any ε > 0 and, therefore, for ε = 1 as well, there exists a neighborhood B ∈ B₀ such that

x ∈ B ⇒ |f(x)| < 1.

Recalling the construction of the base of neighborhoods, we learn that there exists a finite subset I₀ ⊂ I and a constant δ > 0 such that

max_{ι∈I₀} p_ι(x) < δ ⇒ |f(x)| < 1.

Let λ := max_{ι∈I₀} p_ι(x). If λ = 0, then p_ι(tx) = 0 < δ for every t > 0 and ι ∈ I₀, so |f(x)| < 1/t for every t > 0, i.e., f(x) = 0. If λ > 0, then

max_{ι∈I₀} p_ι(x δ/(2λ)) = (δ/(2λ)) max_{ι∈I₀} p_ι(x) = δ/2 < δ

and, therefore,

|f(x δ/(2λ))| < 1

or, equivalently,

|f(x)| < (2/δ) max_{ι∈I₀} p_ι(x),

i.e., the required bound holds with C = 2/δ.

… ∀ε > 0 ∃N : n, m ≥ N ⇒ ‖u_n − u_m‖_U + ‖A(u_n − u_m)‖_V < ε.

This implies that u_n is Cauchy in U and Au_n is Cauchy in V. By completeness of U and V, u_n → u and Au_n → v, for some u ∈ U, v ∈ V. But operator A is closed, so (u, v) ∈ G(A), which in turn implies that u ∈ D(A) and v = Au.
Banach Spaces

5.11

Example of a Closed Operator
Exercises

Exercise 5.11.1 Let Ω ⊂ ℝⁿ be an open set. Prove that the following conditions are equivalent to each other.

(i) For every point x ∈ Ω, there exists a ball B = B(x, ε_x) ⊂ Ω such that u|_B ∈ L¹(B).

(ii) For every compact subset K ⊂ Ω, u|_K ∈ L¹(K).

(ii) ⇒ (i). Take a ball B whose closure B̄ is a compact subset of Ω and observe that ∫_B |u| = ∫_{B̄} |u|.

(i) ⇒ (ii). Let K be a compact subset of Ω. Use the neighborhoods from (i) to form an open cover for K,

K ⊂ ⋃_{x∈K} B(x, ε_x).

By compactness of K, there exists a finite number of points x_i ∈ K, i = 1, …, N, such that

K ⊂ ⋃_{i=1}^N B(x_i, ε_{x_i}).

But then

∫_K |u| ≤ Σ_{i=1}^N ∫_{B(x_i, ε_{x_i})} |u| < ∞.
Exercise 5.11.2 Consider X = L²(0, 1) and define a linear operator T u = u′, D(T) = C^∞([0, 1]) ⊂ L²(0, 1). Show that T is closable. Can you suggest what would be the closure of T?

We shall use Proposition 5.10.3. Assume u_n → 0 and T u_n = u_n′ → v, where the convergence is understood in the L² sense. We need to show that v = 0. Let φ ∈ D(0, 1). Then, integrating by parts (the boundary terms vanish since φ has compact support),

∫₀¹ u_n′ φ = −∫₀¹ u_n φ′.

L² convergence of u_n to zero implies that the right-hand side converges to zero. At the same time, L² convergence of u_n′ to v implies that the left-hand side converges to ∫₀¹ vφ. Consequently,

∫₀¹ vφ = 0  ∀φ ∈ D(0, 1).

Thus, density of test functions in L²(0, 1) implies that v must be zero and, by Proposition 5.10.3, the operator is closable. The closure of the operator is the distributional derivative defined on the Sobolev space,

L²(0, 1) ⊃ H¹(0, 1) ∋ u → u′ ∈ L²(0, 1).
Exercise 5.11.3 Show that the Sobolev space W^{m,p}(Ω) is a normed space.

All three conditions for a norm are easily verified. Case 1 ≤ p < ∞:

Positive definiteness:

‖u‖_{W^{m,p}(Ω)} = 0 ⇒ ‖u‖_{L^p(Ω)} = 0 ⇒ u = 0  (in the L^p sense, i.e., u = 0 a.e.).

Homogeneity:

‖λu‖_{W^{m,p}(Ω)} = (Σ_{|α|≤m} ‖D^α(λu)‖_{L^p(Ω)}^p)^{1/p} = (Σ_{|α|≤m} |λ|^p ‖D^α u‖_{L^p(Ω)}^p)^{1/p} = |λ| ‖u‖_{W^{m,p}(Ω)}.

Triangle inequality:

‖u + v‖_{W^{m,p}(Ω)} = (Σ_{|α|≤m} ‖D^α(u + v)‖_{L^p(Ω)}^p)^{1/p}

≤ (Σ_{|α|≤m} (‖D^α u‖_{L^p(Ω)} + ‖D^α v‖_{L^p(Ω)})^p)^{1/p}  (triangle inequality in L^p(Ω))

≤ (Σ_{|α|≤m} ‖D^α u‖_{L^p(Ω)}^p)^{1/p} + (Σ_{|α|≤m} ‖D^α v‖_{L^p(Ω)}^p)^{1/p}  (triangle inequality for the p-norm in ℝᴺ)

= ‖u‖_{W^{m,p}(Ω)} + ‖v‖_{W^{m,p}(Ω)},

where, in the last step, N equals the number of partial derivatives, i.e., N = #{α : |α| ≤ m}. The case p = ∞ is proved analogously.
Topological Duals, Weak Compactness
5.12
Examples of Dual Spaces, Representation Theorem for Topological Duals of Lp Spaces
Exercises

Exercise 5.12.1 Let Ω ⊂ ℝᴺ be a bounded set, and fix 1 ≤ p < ∞. Prove that, for every r such that p < r ≤ ∞, L^r(Ω) is dense in L^p(Ω).

Hint: For an arbitrary u ∈ L^p(Ω), define

u_n(x) = u(x) if |u(x)| ≤ n,  n sgn u(x) otherwise.

Show that 1. u_n ∈ L^r(Ω) and 2. ‖u_n − u‖_p → 0.

1. By construction, |u_n| ≤ n, so for r < ∞

∫_Ω |u_n|^r ≤ n^r meas(Ω) < ∞

(and u_n ∈ L^∞(Ω) trivially for r = ∞).

2. From the definition of u_n, it follows that u − u_n → 0 pointwise and

|u − u_n|^p ≤ |u|^p.

Since ∫_Ω |u|^p < ∞, the Lebesgue Dominated Convergence Theorem implies that

∫_Ω |u − u_n|^p → 0.
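A numerical illustration (a sketch with the hypothetical choice Ω = (0, 1) and u(x) = x^{−1/3}, which belongs to L²(0, 1) but is unbounded, so u ∉ L^∞): the truncations u_n are bounded, hence in L^r for every r, and the L² truncation error decays with n:

```python
def l2_trunc_err(alpha, n, N=100000):
    # midpoint-rule approximation of ||u - u_n|| in L^2(0, 1) for u(x) = x^(-alpha)
    s = 0.0
    for k in range(N):
        x = (k + 0.5) / N
        u = x ** (-alpha)
        s += (u - min(u, n)) ** 2 / N
    return s ** 0.5

errs = [l2_trunc_err(1.0 / 3.0, n) for n in (1, 10, 100)]
assert errs[0] > errs[1] > errs[2]   # truncation error decreases with n
assert errs[2] < 1e-2                # and tends to zero
```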
Exercise 5.12.2 Consider ℝⁿ equipped with the p-norm,

‖x‖_p = (Σ_{i=1}^n |x_i|^p)^{1/p}, 1 ≤ p < ∞;  ‖x‖_∞ = max_{1≤i≤n} |x_i|.

Prove that

sup_{‖y‖_p = 1} |Σ_{i=1}^n x_i y_i| = ‖x‖_q,

where q is the conjugate index, 1/p + 1/q = 1.

Case 1 < p < ∞: By the Hölder inequality, |Σ x_i y_i| ≤ ‖x‖_q ‖y‖_p = ‖x‖_q. The supremum is attained at a critical point of y → Σ x_i y_i on the (compact) unit sphere; the Lagrange multiplier conditions give sgn y_i = sgn x_i and |x_i| = λ|y_i|^{p−1}, with multiplier λ > 0. In order to determine the supremum, it is thus sufficient to compute λ. Solving for y_i and requesting y to be of unit norm, we get

Σ_{i=1}^n |y_i|^p = λ^{−p/(p−1)} Σ_{i=1}^n |x_i|^{p/(p−1)} = 1,  i.e.,  Σ_{i=1}^n |x_i|^{p/(p−1)} = λ^{p/(p−1)}.

Consequently,

λ = (Σ_{i=1}^n |x_i|^{p/(p−1)})^{(p−1)/p} = ‖x‖_q,

and, at the critical point, Σ_{i=1}^n x_i y_i = λ Σ_{i=1}^n |y_i|^p = λ = ‖x‖_q.

Case p = 1: We have

|Σ_{i=1}^n x_i y_i| ≤ Σ_{i=1}^n |x_i||y_i| ≤ max_{1≤i≤n} |x_i| Σ_{i=1}^n |y_i| = max_{1≤i≤n} |x_i| = ‖x‖_∞.

The bound is attained. Indeed, let i₀ be an index such that

|x_{i₀}| = max_{1≤i≤n} |x_i|

and select y_i = sgn(x_{i₀}) δ_{i,i₀}. Then ‖y‖₁ = 1 and

Σ_{i=1}^n x_i y_i = |x_{i₀}|.

Case p = ∞: We have

|Σ_{i=1}^n x_i y_i| ≤ Σ_{i=1}^n |x_i||y_i| ≤ max_{1≤i≤n} |y_i| Σ_{i=1}^n |x_i| = Σ_{i=1}^n |x_i| = ‖x‖₁.

The bound is attained. Indeed, select y_i = sgn x_i. Then ‖y‖_∞ = 1 and

Σ_{i=1}^n x_i y_i = Σ_{i=1}^n |x_i| = ‖x‖₁.

Finally, recall that every linear functional defined on a finite-dimensional vector space equipped with a norm is automatically continuous. The topological dual thus coincides with the algebraic dual. Given a linear functional f on ℝⁿ, and recalling the canonical basis e_i, i = 1, …, n, we have the standard representation formula

f(y) = f(Σ_{i=1}^n y_i e_i) = Σ_{i=1}^n f(e_i) y_i = Σ_{i=1}^n f_i y_i,  f_i := f(e_i).

The map

ℝⁿ ∋ (f₁, …, f_n) → {ℝⁿ ∋ y → Σ_{i=1}^n f_i y_i ∈ ℝ} ∈ (ℝⁿ)* = (ℝⁿ)′

is a linear isomorphism and, by the first part of this exercise, it is an isometry if the space for f is equipped with the q-norm.

Exercise 5.12.3 Prove Theorem 5.12.2.

Let u = (u₁, u₂, …) ∈ ℓ^p, 1 ≤ p < ∞, and let e_i = (0, …, 1⁽ⁱ⁾, …). Define
u_N = Σ_{i=1}^N u_i e_i = (u₁, …, u_N, 0, 0, …).

It follows from the definition of the ℓ^p spaces that the tail of the sequence converges to zero,

‖u − u_N‖_{ℓ^p} = (Σ_{i=N+1}^∞ |u_i|^p)^{1/p} → 0  as N → ∞.

Notice that the argument breaks down for p = ∞. Let f ∈ (ℓ^p)′. Set φ_i = f(e_i). Then

Σ_{i=1}^∞ φ_i u_i := lim_{N→∞} Σ_{i=1}^N φ_i u_i = lim_{N→∞} f(u_N) = f(u).

The Hölder inequality implies that

|f(u)| ≤ ‖φ‖_{ℓ^q} ‖u‖_{ℓ^p}.

We will show that the bound equals the supremum.

Case p = 1: Let i_j be a sequence of indices such that

|φ_{i_j}| → sup_i |φ_i| = ‖φ‖_∞  as j → ∞.

Take v^j = sgn(φ_{i_j}) e_{i_j}. Then ‖v^j‖_{ℓ¹} = 1 and

f(v^j) = |φ_{i_j}| → ‖φ‖_∞.

Case 1 < p < ∞: Use the choice derived in Exercise 5.12.2 for the finite-dimensional case,

u_i = |φ_i|^{1/(p−1)} sgn φ_i / (Σ_{i=1}^∞ |φ_i|^{p/(p−1)})^{1/p}.

Then ‖u‖_{ℓ^p} = 1 and

f(u) = Σ_{i=1}^∞ φ_i u_i = Σ_{i=1}^∞ |φ_i|^{p/(p−1)} / (Σ_{i=1}^∞ |φ_i|^{p/(p−1)})^{1/p} = ‖φ‖_{ℓ^q}.
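The extremizer used above can be checked numerically in a truncated (finite) setting — a sketch with arbitrary hypothetical sample coefficients φ:

```python
def pnorm(v, p):
    return sum(abs(t) ** p for t in v) ** (1.0 / p)

phi = [3.0, -1.0, 2.0, 0.5]          # sample functional coefficients (hypothetical data)
p = 3.0
q = p / (p - 1.0)                    # conjugate index, 1/p + 1/q = 1

# the norming element from Exercises 5.12.2 / 5.12.3
denom = sum(abs(t) ** (p / (p - 1.0)) for t in phi) ** (1.0 / p)
u = [abs(t) ** (1.0 / (p - 1.0)) * (1.0 if t >= 0 else -1.0) / denom for t in phi]

assert abs(pnorm(u, p) - 1.0) < 1e-9             # ||u||_p = 1
pairing = sum(a * b for a, b in zip(phi, u))
assert abs(pairing - pnorm(phi, q)) < 1e-9       # f(u) = <phi, u> = ||phi||_q
```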
Exercise 5.12.4 Generalize the Representation Theorem for L^p Spaces (Theorem 5.12.1) to the case of an arbitrary (measurable) set Ω. Hint: Consider a sequence of truncated domains Ω_n = Ω ∩ B(0, n), use Theorem 5.12.1 on Ω_n to conclude the existence of φ_n ∈ L^q(Ω_n), and investigate the convergence of φ_n.

Let φ_n ∈ L^q(Ω_n) be such that

∫_{Ω_n} φ_n v = f(ṽ)  ∀v ∈ L^p(Ω_n),

where ṽ is the zero extension of v to Ω. For m > n and v ∈ L^p(Ω_n),

∫_{Ω_n} φ_m v = f(ṽ) = ∫_{Ω_n} φ_n v.

The restriction φ_m|_{Ω_n} lives in L^q(Ω_n) so, by uniqueness of φ in the Representation Theorem, φ_m is an extension of φ_n. Let φ(x) be the (trivial) pointwise limit of the zero extensions φ̃_n(x). For p > 1, Lemma 3.5.1 implies that

∫_Ω |φ̃_n|^q → ∫_Ω |φ|^q.

At the same time, the norms ‖φ_n‖_{L^q(Ω_n)} are uniformly bounded as, by the Representation Theorem,

‖φ_n‖_{L^q(Ω_n)} = ‖f|_{L^p(Ω_n)}‖_{(L^p(Ω_n))′} ≤ ‖f‖_{(L^p(Ω))′},

which proves that ‖φ‖_{L^q(Ω)} is finite (case q = ∞ included). Let now v ∈ L^p(Ω), and let v_n be its restriction to Ω_n. By the Lebesgue Dominated Convergence Theorem, ṽ_n → v in L^p(Ω). Consequently, the L^p functions with support in Ω_n, n = 1, 2, …, are dense in L^p(Ω), and the identity

∫_Ω φv = ∫_{Ω_n} φ_n v = f(v),

true for any v ∈ L^p(Ω) with support in Ω_n, can be extended to arbitrary v ∈ L^p(Ω).

Exercise 5.12.5 Let Ω ⊂ ℝⁿ be an open set and f : Ω → ℝ a measurable function defined on Ω. Prove that the following conditions are equivalent to each other:
1. For every x ∈ Ω there exists a neighborhood N(x) of x (e.g., a ball B(x, ε) with some ε = ε(x) > 0) such that

∫_{N(x)} |f| dx < +∞.

2. For every compact K ⊂ Ω,

∫_K |f| dx < +∞.

Functions of this type are called locally integrable and form a vector space, denoted L¹_loc(Ω).

Closed balls contained in Ω are compact, so the second condition trivially implies the first. To show the converse, consider a compact set K and the corresponding open cover consisting of the balls present in the first condition,

K ⊂ ⋃_{x∈K} B(x, ε_x).

As the set is compact, there must exist a finite subcover, i.e., a collection of points x₁, …, x_N ∈ K such that

K ⊂ B(x₁, ε_{x₁}) ∪ … ∪ B(x_N, ε_{x_N}).

Standard properties of the Lebesgue integral imply

∫_K |f| dx ≤ ∫_{B(x₁, ε_{x₁})} |f| dx + … + ∫_{B(x_N, ε_{x_N})} |f| dx < ∞,

which proves the second assertion.

Exercise 5.12.6 Consider the set B defined in (5.1). Prove that B is balanced, convex, absorbing, and that B_i ⊂ B for each i = 1, 2, ….
• B is balanced: for every |α| ≤ 1 and every element Σ_{i∈I₀} ϕ_i ∈ B,

α Σ_{i∈I₀} ϕ_i = Σ_{i∈I₀} αϕ_i ∈ B,

since all the B_i's are balanced.

• B is convex. Indeed, let α ∈ [0, 1] and Σ_{i∈I₁} ϕ_i, Σ_{i∈I₂} ψ_i ∈ B. Set J₀ = I₁ ∩ I₂, J₁ = I₁ − J₀, J₂ = I₂ − J₀. Then

α Σ_{i∈I₁} ϕ_i + (1 − α) Σ_{i∈I₂} ψ_i = Σ_{j∈J₀} (αϕ_j + (1 − α)ψ_j) + Σ_{j∈J₁} αϕ_j + Σ_{j∈J₂} (1 − α)ψ_j ∈ B,

since the B_i's are convex and balanced.

• B is absorbing. Indeed, for every ϕ ∈ D(Ω), there exists i such that ϕ ∈ D(K_i), and B_i ⊂ D(K_i) is absorbing.

• The last condition is satisfied by construction.
Exercise 5.12.7 Let q be a linear functional on D(K). Prove that q is sequentially continuous iff there exist constants C_K > 0 and k ≥ 0 such that

|q(φ)| ≤ C_K sup_{|α|≤k} sup_{x∈K} |D^α φ(x)|  ∀φ ∈ D(K).
The proof is a direct consequence of the definition of the topology in D(K) and Exercise 5.2.6.

Exercise 5.12.8 Prove that the regular distributions and the Dirac delta functional defined in the text are continuous on D(Ω).

It is sufficient to use the criterion discussed in the text. Let f ∈ L¹_loc(Ω), and let K ⊂ Ω be an arbitrary compact set. Then, for ϕ ∈ D(K),

|∫_Ω f ϕ dx| = |∫_K f ϕ dx| ≤ (∫_K |f| dx) sup_{x∈K} |ϕ(x)|,

where the integral ∫_K |f| is finite.

Similarly, for the delta functional δ_{x₀} and a compact set K containing x₀, for every ϕ ∈ D(K),

|⟨δ_{x₀}, ϕ⟩| = |ϕ(x₀)| ≤ sup_{x∈K} |ϕ(x)|.
Exercise 5.12.9 Consider a function u : (0, 1) → ℝ of the form

u(x) = u₁(x), 0 < x ≤ x₀;  u₂(x), x₀ < x ≤ 1,  where x₀ ∈ (0, 1).

Here u₁ and u₂ are C¹ functions (see Example 5.11.1), but the global function u is not necessarily continuous at x₀. Follow the lines of Example 5.11.1 to prove that the distributional derivative of the regular distribution q_u corresponding to u is given by the formula

(q_u)′ = q_{u′} + [u(x₀)] δ_{x₀},

where u′ is the union of the two branches, the derivatives u₁′ and u₂′ (see Example 5.11.1), δ_{x₀} is the Dirac delta functional at x₀, and [u(x₀)] denotes the jump of u at x₀, [u(x₀)] = u₂(x₀) − u₁(x₀).

It is sufficient to interpret the result obtained in the text:

⟨(q_u)′, ϕ⟩ := −⟨q_u, ϕ′⟩ = −∫₀¹ u ϕ′ dx = −∫₀^{x₀} u₁ ϕ′ dx − ∫_{x₀}¹ u₂ ϕ′ dx

= ∫₀^{x₀} u₁′ ϕ dx + ∫_{x₀}¹ u₂′ ϕ dx + [u₂(x₀) − u₁(x₀)] ϕ(x₀)

= ∫₀¹ u′ ϕ dx + [u(x₀)] ⟨δ_{x₀}, ϕ⟩

= ⟨q_{u′} + [u(x₀)] δ_{x₀}, ϕ⟩.
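The jump formula can be verified numerically: with the hypothetical branches u₁(x) = x, u₂(x) = x + 1 (so [u(x₀)] = 1) and the polynomial test function φ(x) = x²(1 − x)² (vanishing at 0 and 1), both sides of ⟨(q_u)′, φ⟩ = ⟨q_{u′}, φ⟩ + [u(x₀)] φ(x₀) agree up to quadrature error:

```python
def integrate(g, a, b, N=20000):
    # midpoint rule
    h = (b - a) / N
    return sum(g(a + (k + 0.5) * h) for k in range(N)) * h

x0, jump = 0.5, 1.0
u = lambda x: x if x <= x0 else x + jump      # u1(x) = x, u2(x) = x + 1
du = lambda x: 1.0                            # branch derivatives: u1' = u2' = 1
phi = lambda x: (x * (1.0 - x)) ** 2          # test function, phi(0) = phi(1) = 0
dphi = lambda x: 2.0 * x * (1.0 - x) ** 2 - 2.0 * x ** 2 * (1.0 - x)

lhs = -integrate(lambda x: u(x) * dphi(x), 0.0, 1.0)       # <(q_u)', phi> = -<q_u, phi'>
rhs = integrate(lambda x: du(x) * phi(x), 0.0, 1.0) + jump * phi(x0)
assert abs(lhs - rhs) < 1e-6
```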
5.13

Bidual, Reflexive Spaces
Exercises

Exercise 5.13.1 Explain why every finite-dimensional space is reflexive.

Recall the discussion from Chapter 2. As the evaluation map is injective and the bidual space is of the same dimension as the original space, the evaluation map must be surjective.

Exercise 5.13.2 Let W^{m,p}(Ω) be a Sobolev space for Ω, a smooth domain in ℝⁿ. The closure in W^{m,p}(Ω) of the test functions C₀^∞(Ω) (with respect to the W^{m,p} norm), denoted by W₀^{m,p}(Ω),

W₀^{m,p}(Ω) = closure of C₀^∞(Ω) in W^{m,p}(Ω),

may be identified as the collection of all "functions" from W^{m,p}(Ω) which "vanish" on the boundary together with their derivatives up to order m − 1 (this is a very nontrivial result based on Lions' Trace Theorem; see [8, 10]). The duals of the spaces W₀^{m,p}(Ω) are the so-called negative Sobolev spaces,

W^{−m,p}(Ω) := (W₀^{m,p}(Ω))′,  m > 0.

Explain why both W₀^{m,p}(Ω) and W^{−m,p}(Ω), for 1 < p < ∞, are reflexive.

This is a simple consequence of Proposition 5.13.1. Space W₀^{m,p}(Ω), as a closed subspace of a reflexive space, must be reflexive, and W^{−m,p}(Ω), as the dual of a reflexive space, is reflexive as well.
5.14
Weak Topologies, Weak Sequential Compactness
Exercises

Exercise 5.14.1 Prove Proposition 5.14.1.

All properties are a direct consequence of the definitions.

Exercise 5.14.2 Let U and V be two normed spaces. Prove that if a linear transformation T ∈ L(U, V) is strongly continuous, then it is automatically weakly continuous, i.e., continuous with respect to the weak topologies in U and V.
Hint: Prove first the following Lemma: Let X be an arbitrary topological vector space, and let Y be a normed space. Let T ∈ L(X, Y). The following conditions are equivalent to each other:

(i) T : X → Y (with weak topology) is continuous;

(ii) f ∘ T : X → ℝ (ℂ) is continuous, ∀f ∈ Y′.

Follow then the discussion in the section about strongly and weakly continuous linear functionals.

(i) ⇒ (ii). Any linear functional f ∈ Y′ is also continuous on Y with the weak topology. A composition of two continuous functions is continuous.

(ii) ⇒ (i). Take an arbitrary B(I₀, ε), where I₀ is a finite subset of Y′. By (ii),

∀g ∈ I₀ ∃B_g, a neighborhood of 0 in X : u ∈ B_g ⇒ |g(T(u))| < ε.

It follows from the definition of the filter of neighborhoods that

B = ⋂_{g∈I₀} B_g

is also a neighborhood of 0. Consequently,

u ∈ B ⇒ |g(T(u))| < ε ∀g ∈ I₀ ⇒ T u ∈ B(I₀, ε).

To conclude the final result, it is sufficient now to show that, for any g ∈ Y′,

g ∘ T : X (with weak topology) → ℝ

is continuous. But g ∘ T, as a composition of continuous functions, is a strongly continuous linear functional and, consequently, it is continuous in the weak topology as well (compare the discussion in the text).

Exercise 5.14.3 Consider the space c₀ of infinite sequences of real numbers converging to zero, equipped with the ℓ^∞ norm:

c₀ := {x = {x_n} : x_n → 0},  ‖x‖ = sup_i |x_i|.

Show that:

(a) c₀′ = ℓ¹

(b) c₀′′ = ℓ^∞

(c) If e_n = (0, …, 1⁽ⁿ⁾, …), then e_n → 0 weakly* but it does not converge to zero weakly.
(a) We follow the same reasoning as in Exercise 5.12.3. Define

x_N = Σ_{i=1}^N x_i e_i = (x₁, …, x_N, 0, 0, …).

It follows from the definition of the space c₀ that

‖x − x_N‖ = sup_{i>N} |x_i| → 0.

Let f ∈ c₀′ and set φ_i = f(e_i). Then

Σ_{i=1}^∞ φ_i x_i := lim_{N→∞} Σ_{i=1}^N φ_i x_i = lim_{N→∞} f(x_N) = f(x).

Consequently,

|f(x)| ≤ ‖φ‖_{ℓ¹} ‖x‖.

In order to show that the bound equals the supremum, it is sufficient to take the sequence of vectors

x_N = (sgn φ₁, …, sgn φ_N, 0, …) ∈ c₀.

Then

f(x_N) = Σ_{i=1}^N |φ_i| → Σ_{i=1}^∞ |φ_i|.

(b) This follows from (ℓ¹)′ = ℓ^∞.

(c) We have

⟨e_N, x⟩_{ℓ¹×c₀} = x_N → 0  ∀x ∈ c₀,

but

⟨φ, e_N⟩_{ℓ^∞×ℓ¹} = 1 ↛ 0  for φ = (1, 1, …) ∈ ℓ^∞.

Exercise 5.14.4 Let U and V be normed spaces, and let either U or V be reflexive. Prove that every operator A ∈ L(U, V) maps bounded sequences in U into sequences having weakly convergent subsequences in V.

Case V reflexive: A maps a bounded sequence in U into a bounded sequence in V. In turn, any bounded sequence in the reflexive space V has a weakly convergent subsequence.

Case U reflexive: Any bounded sequence u_n in the reflexive space U has a weakly convergent subsequence u_{n_k}. As A is also weakly continuous (recall Exercise 5.14.2), it follows that Au_{n_k} is weakly convergent in V.
Exercise 5.14.5 In numerical analysis, one is often faced with the problem of approximating the integral of a given continuous function f ∈ C[0, 1] by using some sort of numerical quadrature formula. For instance, we might introduce in [0, 1] a sequence of integration points 0 ≤ x₁ⁿ < x₂ⁿ < ⋯ < x_jⁿ < ⋯ < x_nⁿ ≤ 1 and set

Q_n(f) := Σ_{k=1}^n a_kⁿ f(x_kⁿ) ≈ ∫₀¹ f(x) dx,  n = 1, 2, …,

where the coefficients a_kⁿ satisfy the condition

Σ_{k=1}^n |a_kⁿ| < M  ∀n ≥ 1.

Suppose that the quadrature rule Q_n integrates polynomials p(x) of degree n − 1 exactly, i.e.,

Q_n(p) = ∫₀¹ p(x) dx.

(a) Show that, for every f ∈ C[0, 1],

lim_{n→∞} (Q_n(f) − ∫₀¹ f(x) dx) = 0.

(b) Characterize the type of convergence this limit defines in terms of convergence in the dual of the space C[0, 1] (equipped with the Chebyshev norm).

(a) We start with a simple abstract result.

Lemma: Let U be a normed space, X a dense subspace of U, and f_n ∈ U′ a uniformly bounded sequence of continuous linear functionals on U, i.e., ‖f_n‖_{U′} ≤ M for some M > 0. Assume that the sequence converges to zero on X: f_n(x) → 0, ∀x ∈ X. Then the sequence converges to zero on the entire space,

f_n(u) → 0  ∀u ∈ U.

The proof follows from the simple inequality

|f_n(u)| ≤ |f_n(u − x)| + |f_n(x)| ≤ ‖f_n‖ ‖u − x‖ + |f_n(x)| ≤ M ‖u − x‖ + |f_n(x)|.

Given u ∈ U and ε > 0, select x ∈ X such that ‖u − x‖ < ε/(2M), and then N such that |f_n(x)| < ε/2 for n ≥ N. It follows from the inequality above that, for n ≥ N, |f_n(u)| < ε.

According to the Weierstrass Theorem on polynomial approximation of continuous functions, polynomials are dense in the space C([0, 1]). The functionals

C([0, 1]) ∋ f → Q_n(f) − ∫₀¹ f(x) dx ∈ ℝ

converge to zero for all polynomials f (for n exceeding the degree of the polynomial f, the functionals are identically zero). At the same time, the condition on the quadrature coefficients implies that the functionals are uniformly bounded. Consequently, by the lemma above,

Q_n(f) − ∫₀¹ f(x) dx → 0

for any f ∈ C([0, 1]).

(b) The sequence of functionals converges to zero in the weak* topology of C[0, 1]′.
5.15
Compact (Completely Continuous) Operators
Exercises

Exercise 5.15.1 Let T : U → V be a linear continuous operator from a normed space U into a reflexive Banach space V. Show that T is weakly sequentially compact, i.e., it maps bounded sets in U into sets whose closures are weakly sequentially compact in V.

If A is bounded in U, then T(A) is bounded in V, and its closure is weakly sequentially compact. This is a simple consequence of the fact that bounded sets in a reflexive Banach space are weakly sequentially compact.

Exercise 5.15.2 Let U and V be normed spaces. Prove that a linear operator T : U → V is compact iff T(B) is precompact in V for B the unit ball in U.

Assume T is linear and maps the unit ball in U into a precompact set in V. Let C be an arbitrary bounded set in U,

‖u‖_U ≤ M  ∀u ∈ C.

The set M⁻¹C is then a subset of the unit ball B and, consequently, M⁻¹T(C) is a subset of T(B). Thus the closure of M⁻¹T(C), as a closed subset of a compact set, is compact as well. Finally, since multiplication by a nonzero constant is a homeomorphism, the closure of T(C) is compact as well.
Use the Frechet–Kolmogorov Theorem (Theorem 4.9.4) to prove that operator T from
R) Example 5.15.1 with an appropriate condition on kernel K(x, ξ) is a compact operator from Lp (I R), 1 ≤ p, r < ∞. into Lr (I
150
APPLIED FUNCTIONAL ANALYSIS
SOLUTION MANUAL
According to Exercise 5.15.2 above, we can restrict ourselves to a unit ball B ∈ Lp (I R) and seek
conditions that would guarantee that T (B) is precompact in Lr (I R). By the FrechetKolmogorov Theorem, we need to come up with sufﬁcient conditions on kernel K(y, x) to satisfy the following three conditions. R). (i) T (B) is bounded in Lr (I (ii)
�
R I
(iii)
t→0
T u(t + s) − T u(s)r ds −→ 0, �
n→∞
s>n
T u(s)r ds −→ 0,
uniformly in u ∈ B
uniformly in u ∈ B
We shall restrict ourselves to a direct use of H¨older inequality only. We have the following estimates. (i)
�r � rq � �� � �� � � � � � K(y, x)u(y) dy � dx ≤ � K(y, x)q dy � � � � � R I R I R I R I
≤1 � rq � �� � � � K(y, x)q dy � dx =: A ≤ � � R I R I
(ii)
(iii)
� pr �� � � � u(y)p dy � dx � � I �R �� �
�r � �� � � � � K(y, t + s)u(y) dy − � ds K(y, s)u(y) dy � � R I R I R I � � � � � �r � [K(y, t + s) − K(y, s)]u(y) dy � ds = � � R I R I � rq �� � pr � �� � � � � � K(y, t + s) − K(y, s)q dy � � u(y)p dy � ds ≤ � � � � I R I R I �R �� � ≤1 � rq � �� � � � K(y, t + s) − K(y, s)q dy � ds =: B(t) ≤ � � R I R I
�� �r �� � rq � � � � � � K(y, s)u(y) dy � ds ≤ � K(y, s)q dy � � � � � I I s>n R s>n R
�
≤
�
s>n

�
R I
q
K(y, s) dy
r q
�� � pr � � � u(y)p dy � ds � � I �R �� � ≤1
ds =: C(n)
where q is the conjugate index to p. Consequently, if A < ∞, limt→0 B(t) = 0, limn→∞ C(n) = 0,
R) into Lr (I R). In fact, the ﬁrst condition implies the last then operator T is a compact map from Lp (I one and, if we assume that kernel K(y, x) is continuous a.e., we can use the Lebesgue Dominated Convergence Theorem to show that the second condition is veriﬁed as well. Indeed, the integrand
Banach Spaces
151
in B(t) converges pointwise a.e. to zero, as t → 0, and we can construct a dominating function by utilizing convexity of function xq for q ≥ 1, �q � 1 1 1 1 K(y, t + s) + K(y, s) ≤ K(y, t + s)q + K(y, s)q 2 2 2 2
which in turn implies
K(y, t + s) − K(y, s)q ≤ 2q−1 (K(y, t + s)q + K(y, s)q )
Closed Range Theorem, Solvability of Linear Equations

5.16 Topological Transpose Operators, Orthogonal Complements

Exercises

Exercise 5.16.1 Prove Proposition 5.16.1(i)–(iv).

The properties follow immediately from the properties of the algebraic transpose operator.

Exercise 5.16.2 Let $U, V$ be two Banach spaces, and let $A \in \mathcal{L}(U,V)$ be compact. Show that $A'$ is also compact. Hint: See Exercise 5.21.2 and recall the Arzelà–Ascoli Theorem.

See the proof of Lemma 5.21.5.
5.17 Solvability of Linear Equations in Banach Spaces, The Closed Range Theorem

Exercises

Exercise 5.17.1 Let $X$ be a Banach space, and $P : X \to X$ a continuous linear projection, i.e., $P^2 = P$. Prove that the range of $P$ is closed.

Let $u_n \in R(P)$, $u_n \to u$. We need to show that $u \in R(P)$ as well. Let $v_n \in X$ be such that $u_n = P v_n$. Then $P u_n = P^2 v_n = P v_n = u_n$ and, by the continuity of $P$, $P u_n \to P u$. By the uniqueness of the limit, it must be that $u = Pu$. Consequently, $u$ is the image of itself and must be in the range of the projection.
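The mechanism of the proof can be made concrete in finite dimensions. The construction below is our own illustration (not part of the text): an orthogonal projection onto a column space is continuous and idempotent, and limits of elements of its range stay in the range because each element is fixed by $P$.

```python
import numpy as np

# Finite-dimensional illustration (our construction): P projects R^5
# orthogonally onto the column space of V, so P is continuous, linear,
# and satisfies P^2 = P.
rng = np.random.default_rng(0)
V = rng.standard_normal((5, 2))
P = V @ np.linalg.solve(V.T @ V, V.T)
assert np.allclose(P @ P, P)

# The proof's mechanism: every u_n in R(P) satisfies P u_n = u_n, so
# continuity of P forces the limit u to satisfy P u = u as well.
v0, w = rng.standard_normal(5), rng.standard_normal(5)
u = P @ v0
u_n = [u + (P @ w) / n for n in range(1, 6)]   # sequence in R(P), u_n -> u

assert all(np.allclose(P @ x, x) for x in u_n)  # each term is fixed by P
assert np.allclose(P @ u, u)                     # hence so is the limit
```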
152
APPLIED FUNCTIONAL ANALYSIS
SOLUTION MANUAL
Exercise 5.17.2 Let $X$ be a Banach space, and $M \subset X$ a closed subspace. Consider the map:
$$
\iota : X' \supset M^\perp \ni f \mapsto \tilde f \in (X/M)', \qquad \tilde f(x + M) := f(x).
$$
Here $M^\perp$ is the orthogonal complement of $M$,
$$
M^\perp = \{ f \in X' : f|_M = 0 \}.
$$
Prove that the map $\iota$ is an isometric isomorphism. Consequently, $(X/M)'$ and $M^\perp$ can be identified with each other.

The map is well defined, i.e., $f(x+m) = f(x)$ for every $m \in M$. Next,
$$
|\tilde f(x+M)| = |f(x+m)| \le \|f\|_{X'} \, \|x+m\|.
$$
Taking the infimum in $m \in M$ on the right-hand side, we have
$$
|\tilde f(x+M)| \le \|f\|_{X'} \, \|x+M\|_{X/M}, \qquad \text{i.e.,} \qquad \|\tilde f\|_{(X/M)'} \le \|f\|_{X'}.
$$
On the other side,
$$
|f(x)| = |\tilde f(x+M)| \le \|\tilde f\|_{(X/M)'} \, \|x+M\|_{X/M} \le \|\tilde f\|_{(X/M)'} \, \|x\|,
$$
so also $\|f\|_{X'} \le \|\tilde f\|_{(X/M)'}$. Map $\iota$ is thus an isometry (and, therefore, injective). To show surjectivity, take $g \in (X/M)'$, and set $f(x) = g(x+M)$ ($f$ is a linear and continuous map). Then, for any $x \in M$,
$$
f(x) = g(x+M) = g(M) = 0,
$$
so $f \in M^\perp$ and, finally, $\tilde f = g$.
5.18 Generalization for Closed Operators
Exercises

Exercise 5.18.1 Let $X, Y$ be two normed spaces with $Y$ being complete. Let $\mathcal{X}$ be a dense subspace of $X$. Let $A$ be a linear and continuous map from $\mathcal{X}$ into $Y$. Prove that operator $A$ admits a unique continuous extension $\tilde A$ to the whole space $X$ that preserves the norm of $A$. Hint: For $x \in X$, take $x_n \to x$, $x_n \in \mathcal{X}$, and investigate the sequence $Ax_n$.

Let $x \in X$ and let $x_n \to x$, $x_n \in \mathcal{X}$. Sequence $Ax_n$ is Cauchy in $Y$. Indeed,
$$
\|Ax_n - Ax_m\|_Y \le \|A\|_{\mathcal{L}(\mathcal{X},Y)} \, \|x_n - x_m\|,
$$
and $x_n$, as a convergent sequence, is Cauchy in $X$. By the completeness of $Y$, sequence $Ax_n$ has a unique limit $y$ that can be identified as the value of the extension, $\tilde A x := y = \lim_{n\to\infty} Ax_n$. First of all, the extension $\tilde A$ is well defined, i.e., the limit $y$ is independent of the choice of $x_n$. Indeed, if we take another sequence converging to $x$, say $z_n \to x$, then
$$
\|Ax_n - Az_n\|_Y \le \|A\| \, \|x_n - z_n\|_X \le \|A\| \, (\|x_n - x\|_X + \|x - z_n\|_X) \to 0.
$$
Secondly, if $x_n \to x$ and $z_n \to z$ then $\alpha x_n + \beta z_n \to \alpha x + \beta z$ and, passing to the limit in
$$
A(\alpha x_n + \beta z_n) = \alpha A x_n + \beta A z_n,
$$
we learn that $\tilde A$ is linear. Finally, by passing to the limit in
$$
\|Ax_n\| \le \|A\|_{\mathcal{L}(\mathcal{X},Y)} \, \|x_n\|,
$$
we learn that $\tilde A$ is continuous and $\|\tilde A\| \le \|A\|$ and, therefore (since $\tilde A$ extends $A$), $\|\tilde A\| = \|A\|$.

Exercise 5.18.2 Discuss in your own words why the original definition of the transpose for a closed operator and the one discussed in Remark 5.18.1 are equivalent.

The original definition requires the identity
$$
\langle y', Ax \rangle = \langle x', x \rangle \qquad \forall x \in D(A)
$$
for some $x' \in X'$ and then sets $A' y' := x'$. Embedded in the condition is thus the requirement that $A' y'$ is continuous. Conversely, the more explicitly defined transpose satisfies the identity above with $x' = A' y'$.

Exercise 5.18.3 Prove Proposition 5.18.1.
(i) By definition, $y' \in D(A_1')$ and $x_1' = A_1' y'$ if
$$
\langle y', A_1 x \rangle = \langle x_1', x \rangle \qquad \forall x \in D.
$$
Similarly, $y' \in D(A_2')$ and $x_2' = A_2' y'$ if
$$
\langle y', A_2 x \rangle = \langle x_2', x \rangle \qquad \forall x \in D.
$$
Let $y' \in D(A_1') \cap D(A_2')$. Taking a linear combination of the equalities above, we get
$$
\langle y', (\alpha_1 A_1 + \alpha_2 A_2) x \rangle = \langle \alpha_1 x_1' + \alpha_2 x_2', x \rangle \qquad \forall x \in D.
$$
Consequently, $y' \in D((\alpha_1 A_1 + \alpha_2 A_2)')$ and
$$
(\alpha_1 A_1 + \alpha_2 A_2)' y' = \alpha_1 x_1' + \alpha_2 x_2' = \alpha_1 A_1' y' + \alpha_2 A_2' y'.
$$

(ii) Again, by definition, $y' \in D(A')$ and $x' = A' y'$ if
$$
\langle y', Ax \rangle = \langle x', x \rangle \qquad \forall x \in D(A).
$$
Similarly, $z' \in D(B')$ and $y' = B' z'$ if
$$
\langle z', By \rangle = \langle y', y \rangle \qquad \forall y \in D(B).
$$
Taking $y = Ax$ in the second equality and making use of the first one, we get
$$
\langle z', BAx \rangle = \langle y', Ax \rangle = \langle x', x \rangle \qquad \forall x \in D(A).
$$
Consequently, $z' \in D((BA)')$ and
$$
(BA)' z' = x' = A' y' = A' B' z'.
$$

(iii) It is sufficient to notice that
$$
\langle y', Ax \rangle = \langle x', x \rangle \qquad \forall x \in D(A)
$$
is equivalent to
$$
\langle y', y \rangle = \langle x', A^{-1} y \rangle \qquad \forall y \in D(A^{-1}).
$$
5.19 Closed Range Theorem for Closed Operators
Exercises

Exercise 5.19.1 Prove property (5.4).

Let $z' \in (M+N)^\perp$. By definition,
$$
\langle z', m+n \rangle = 0 \qquad \forall m \in M,\ n \in N.
$$
Setting $n = 0$, we have $\langle z', m \rangle = 0$ for all $m \in M$, i.e., $z' \in M^\perp$. By the same argument, $z' \in N^\perp$. Conversely, if
$$
\langle z', m \rangle = 0 \quad \forall m \in M \qquad \text{and} \qquad \langle z', n \rangle = 0 \quad \forall n \in N,
$$
then, by the linearity of $z'$,
$$
\langle z', m+n \rangle = \langle z', m \rangle + \langle z', n \rangle = 0.
$$

Exercise 5.19.2 Let $Z$ be a Banach space and $X \subset Z$ a closed subspace of $Z$. Let $Z = X \oplus Y$ for some $Y$, and let $P_X : Z \to X$ be the corresponding projection, $P_X z = x$, where $z = x + y$ is the unique decomposition of $z$. Prove that projection $P_X$ is a closed operator.

Let
$$
(z_n, x_n) = (z_n, P_X z_n) \to (z, x).
$$
Then $y_n = z_n - x_n \to z - x =: y$, so $z = x + y$. This proves that $x = P_X z$, i.e., $(z, x)$ belongs to the graph of $P_X$.

Exercise 5.19.3 Prove the algebraic identities (5.6).

Elementary.

Exercise 5.19.4 Let $X, Y$ be two topological vector spaces. Prove that a set $B \subset Y$ is closed in $Y$ if and only if the set $X + B$ is closed in $X \times Y$. Here $X$ is identified with $X \times \{0\} \subset X \times Y$, and $B$ is identified with $\{0\} \times B \subset X \times Y$.

Elementary. The whole trouble lies in the identification. We have
$$
X \times \{0\} + \{0\} \times B = X \times B,
$$
and the assertion follows from the construction of the topology in the Cartesian product.
5.20 Examples
Exercises

Exercise 5.20.1 Prove that the linear mapping (functional) $H^1(0,l) \ni w \mapsto w(x_0)$, where $x_0 \in [0,l]$, is continuous. Use the result to prove that space $W$ in Example 5.20.1 is closed. Hint: Consider first smooth functions $w \in C^\infty([0,l])$ and then use the density of $C^\infty([0,l])$ in $H^1(0,l)$.

Take $w \in C^\infty([0,l])$. Function
$$
\hat w(x) := w(x) - \frac{1}{l} \int_0^l w(s)\, ds
$$
is continuous and has a zero average. By the Mean-Value Theorem of Integral Calculus, there exists an intermediate point $c \in [0,l]$ such that $\hat w(c) = 0$. Integrating $\hat w' = w'$ from $c$ to $x \in [0,l]$, we obtain
$$
\int_c^x w'(s)\, ds = \int_c^x \hat w'(s)\, ds = \hat w(x).
$$
Consequently,
$$
w(x) = \frac{1}{l} \int_0^l w(s)\, ds + \int_c^x w'(s)\, ds.
$$
This implies the estimate
$$
|w(x)| \le \frac{1}{l} \Big( \int_0^l 1^2\, ds \Big)^{\frac12} \Big( \int_0^l |w(s)|^2\, ds \Big)^{\frac12}
+ \Big| \int_c^x 1^2\, ds \Big|^{\frac12} \Big| \int_c^x |w'(s)|^2\, ds \Big|^{\frac12}
\le l^{-\frac12} \|w\|_{L^2(0,l)} + l^{\frac12} \|w'\|_{L^2(0,l)} \le C(l)\, \|w\|_{H^1(0,l)},
$$
with a constant $C(l)$ depending only on $l$. By the density argument, the estimate generalizes to $w \in H^1(0,l)$. Applying the result to higher derivatives, we conclude that the operator
$$
f : H^4(0,l) \ni w \mapsto (w''(0), w'''(0), w''(l), w'''(l)) \in \mathbb{R}^4
$$
is continuous. Consequently, $W = f^{-1}(\{0\})$, as the inverse image of a closed set, must be closed.

Exercise 5.20.2 Let $u, v \in H^1(0,l)$. Prove the integration by parts formula
$$
\int_0^l u v'\, dx = - \int_0^l u' v\, dx + (uv)\big|_0^l.
$$
Hint: Make use of the density of $C^\infty([0,l])$ in $H^1(0,l)$.

Integrate $(uv)' = u'v + uv'$ to obtain the formula for smooth functions $u, v \in C^\infty([0,l])$. Let $u, v \in H^1(0,l)$. By the density argument, there exist sequences $u_n, v_n \in C^\infty([0,l])$ such that $u_n \to u$, $v_n \to v$ in $H^1(0,l)$. For each $n$, we have
$$
\int_0^l u_n v_n'\, dx = - \int_0^l u_n' v_n\, dx + (u_n v_n)\big|_0^l.
$$
The point is that both sides of the equality represent continuous functionals on the space $H^1(0,l)$. The integrals represent $L^2$ products, and the boundary terms are continuous by the result of Exercise 5.20.1 above. Consequently, we can pass on both sides to the limit with $n \to \infty$ to obtain the final result.

Exercise 5.20.3 Work out all the details of Example 5.20.1 once again, with different boundary conditions: $w(0) = w''(0) = 0$ and $w''(l) = w'''(l) = 0$ (left end of the beam is supported by a pin support).

We follow precisely the same lines to obtain
$$
W = \{ w \in H^4(0,l) : w(0) = w''(0) = w''(l) = w'''(l) = 0 \}.
$$
The transpose operator is defined on the whole $L^2(0,l)$ by the same formula as before (but a different $W$). The corresponding null space is
$$
N(A') = \{ v \in L^2(0,l) : v'''' = 0 \ \text{and} \ v(0) = v''(0) = v''(l) = v'''(l) = 0 \}
= \{ v \in L^2(0,l) : v(x) = \alpha x, \ \alpha \in \mathbb{R} \}.
$$
The necessary and sufficient condition for the existence of a solution $w \in W$, namely $q \in N(A')^\perp$, is equivalent to
$$
\int_0^l q(x)\, x\, dx = 0
$$
(the moment of the active load $q$ with respect to the pin must vanish). The solution is determined up to a linearized rotation about $x = 0$ (the pin).

Exercise 5.20.4 Prove that operator $A$ from Remark 5.20.1 is closed.

Consider a sequence of functions $(u_n, q_n) \in L^2(0,l) \times L^2(0,l)$ such that
$$
u_n \in D(A), \ \text{i.e.,} \ u_n'''' \in L^2(0,l), \quad u_n''(0) = u_n'''(0) = u_n''(l) = u_n'''(l) = 0, \quad \text{and} \quad u_n'''' = q_n.
$$
Assume $u_n \to u$, $q_n \to q$. The question is: does $u \in D(A)$ with $Au = q$? Recall the definition of the distributional derivative,
$$
\langle u_n'''', \phi \rangle \stackrel{\text{def}}{=} \langle u_n, \phi'''' \rangle = \int_0^l u_n \phi'''' = \int_0^l q_n \phi \qquad \forall \phi \in \mathcal{D}(0,l).
$$
Keeping $\phi$ fixed, we pass to the limit with $n \to \infty$ to learn that
$$
\langle u'''', \phi \rangle = \int_0^l u \phi'''' = \int_0^l q \phi \qquad \forall \phi \in \mathcal{D}(0,l),
$$
i.e., $u'''' = q$. The boundary conditions represent continuous functionals on $H^4(0,l)$ and, therefore, they are satisfied in the limit as well.

Exercise 5.20.5 (A finite-dimensional sanity check). Determine necessary conditions on data $f$ for solutions to the linear systems of equations that follow to exist. Determine if the solutions are unique and, if not, describe the null space of the associated operator: $Au = f$. Here
$$
A = \begin{pmatrix} 1 & -1 & 0 \\ -1 & 0 & 1 \end{pmatrix}, \qquad
A = \begin{pmatrix} 1 & 2 & -1 \\ 4 & 0 & 2 \\ 3 & -2 & -3 \end{pmatrix}, \qquad
A = \begin{pmatrix} 3 & 1 & -1 & 2 \\ 6 & 2 & -2 & 4 \\ 9 & 3 & -3 & 6 \end{pmatrix}.
$$
We shall discuss the first case, the other two being fully analogous. The matrix operator $A : \mathbb{R}^3 \to \mathbb{R}^2$. Identifying the dual of $\mathbb{R}^n$ with itself (through the canonical inner product), we have
$$
A' : \mathbb{R}^2 \to \mathbb{R}^3, \qquad A' = \begin{pmatrix} 1 & -1 \\ -1 & 0 \\ 0 & 1 \end{pmatrix}.
$$
The null space of $A'$ is trivial and, consequently, the linear problem has a solution for any right-hand side $f$. The solution is determined up to elements from the kernel of $A$,
$$
N(A) = \{ (t,t,t) : t \in \mathbb{R} \}.
$$
In elementary language, one can set $x_1 = t$ and solve the remaining $2 \times 2$ system for $x_2, x_3$ to obtain
$$
x_2 = -f_1 + t, \qquad x_3 = f_2 + t.
$$
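The first case can also be checked mechanically. The snippet below is a sanity check of ours (the matrix is the one from the exercise; the particular right-hand side is our choice):

```python
import numpy as np

# First system of Exercise 5.20.5: A maps R^3 to R^2.
A = np.array([[1.0, -1.0, 0.0],
              [-1.0, 0.0, 1.0]])

# A' (the transpose) has trivial null space: A has full row rank 2,
# so Au = f is solvable for every right-hand side f.
assert np.linalg.matrix_rank(A) == 2

# The kernel of A is spanned by (1, 1, 1).
assert np.allclose(A @ np.ones(3), 0.0)

# One-parameter family of solutions with x1 = t: x2 = -f1 + t, x3 = f2 + t.
f = np.array([2.0, 3.0])
for t in (0.0, 1.5):
    x = np.array([t, -f[0] + t, f[1] + t])
    assert np.allclose(A @ x, f)
```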
5.21 Equations with Completely Continuous Kernels, Fredholm Alternative

Exercises

Exercise 5.21.1 Complete the proof of Lemma 5.21.6.

Remark: We use the same notation as in the text. A linear functional may be denoted with both standard and boldface symbols when we want to emphasize the vectorial character of the dual space.
Case: $n \ge m$. Define the operator
$$
P \stackrel{\text{def}}{=} T' + Q, \qquad Qf = \sum_{k=1}^m f(y_k)\, f_k.
$$
Operator $I - P$ is injective. Indeed, assume
$$
f - Pf = f - T'f - Qf = A'f - Qf = 0.
$$
Evaluating both sides at $x_i$, we obtain
$$
\langle A'f, x_i \rangle - \sum_{k=1}^m f(y_k) \langle f_k, x_i \rangle = 0,
$$
which implies
$$
\langle f, A x_i \rangle - f(y_i) = 0.
$$
But $Ax_i = 0$, so $f(y_i) = 0$, $i = 1, \dots, m$, and, consequently, $A'f = 0$, i.e., $f \in N(A')$. This implies that $f$ can be represented in terms of the functionals $g_i$,
$$
f = \sum_{i=1}^m b_i g_i.
$$
Evaluating both sides at the vectors $y_i$, we conclude that $b_i = 0$, $i = 1, \dots, m$, and, therefore, $f = 0$. This proves that $I - P$ is injective. By Corollary 5.21.2, $I - P$ is surjective as well. There exists thus a solution $\bar f$ to the problem
$$
A' \bar f - \sum_{k=1}^m \bar f(y_k)\, f_k = f_{m+1}.
$$
Evaluating both sides at $x_{m+1}$, we arrive at a contradiction: the left-hand side vanishes while the right-hand side is equal to one. This proves that $n = m$.

Exercise 5.21.2 Let $X, d$ be a complete metric space and let $A \subset X$. Prove that the following conditions are equivalent to each other.
(i) $A$ is precompact in $X$, i.e., $\bar A$ is compact in $X$.

(ii) $A$ is totally bounded.

(iii) From every sequence in $A$ one can extract a Cauchy subsequence.

(i) $\Rightarrow$ (ii). Since $\bar A$ is compact then, by Theorem 4.9.2, it is totally bounded, i.e., for every $\epsilon > 0$ there exists an $\epsilon$-net $Y_\epsilon \subset \bar A$, i.e.,
$$
A \subset \bar A \subset \bigcup_{y \in Y_\epsilon} B(y, \epsilon).
$$
It remains to show that one can select $\epsilon$-nets from $A$ itself. Take $\epsilon > 0$, and let $Y_{\epsilon/2}$ be the corresponding $\frac{\epsilon}{2}$-net in $\bar A$. For each $y \in Y_{\epsilon/2}$, there exists a corresponding $z_y \in A$ such that $d(y, z_y) < \epsilon/2$. It follows from the triangle inequality that
$$
\{ z_y : y \in Y_{\epsilon/2} \} \subset A
$$
is an $\epsilon$-net for the set $A$.

(ii) $\Rightarrow$ (i). By Theorem 4.9.2 again, it is sufficient to show that $\bar A$ is totally bounded. Take $\epsilon > 0$ and select an arbitrary $\epsilon_1 < \epsilon$. Then an $\epsilon_1$-net $Y_{\epsilon_1}$ for $A$ is an $\epsilon$-net for $\bar A$. Indeed,
$$
A \subset \bigcup_{y \in Y_{\epsilon_1}} B(y, \epsilon_1)
$$
implies
$$
\bar A \subset \bigcup_{y \in Y_{\epsilon_1}} \overline{B(y, \epsilon_1)} \subset \bigcup_{y \in Y_{\epsilon_1}} B(y, \epsilon).
$$

(i) $\Rightarrow$ (iii). Let $x_n \in A$. As $\bar A$ is sequentially compact, one can extract a convergent and, therefore, Cauchy subsequence $x_{n_k}$.

(iii) $\Rightarrow$ (i). Let $x_n \in \bar A$. We need to demonstrate that there exists a subsequence $x_{n_k}$ converging to an $x \in \bar A$. For each $n$, select $y_n \in A$ such that $d(y_n, x_n) < 1/n$. By (iii), we can extract a Cauchy subsequence $y_{n_k}$; by the completeness of $X$, it converges to some $x \in \bar A$, and then $x_{n_k} \to x$ as well.

Prove that, in a real inner product space, the equality $\|u + v\| = \|u\| + \|v\|$ (for $u, v \ne 0$) holds if and only if $v = \alpha u$ for some $\alpha > 0$ (compare Exercise 3.9.2). Does the result extend to complex vector spaces?
The result is true for complex spaces, with a real, positive $\alpha$. Let $v = \alpha u$, $\alpha > 0$. Then both sides are equal to $(1+\alpha)\|u\|$. Conversely, squaring the left-hand side of the equality above,
$$
\|u+v\|^2 = (u+v, u+v) = \|u\|^2 + 2\,\mathrm{Re}\,(u,v) + \|v\|^2,
$$
and comparing it with the square of the right-hand side, we learn that
$$
\mathrm{Re}\,(u,v) = \|u\|\,\|v\|.
$$
Consider now a function of a real argument $\alpha$,
$$
\|u - \alpha v\|^2 = (u - \alpha v, u - \alpha v) = \|u\|^2 - 2\alpha\,\mathrm{Re}\,(u,v) + \alpha^2 \|v\|^2 = \alpha^2 \|v\|^2 - 2\alpha \|u\|\,\|v\| + \|u\|^2.
$$
The quadratic function on the right-hand side has a minimum equal to zero at $\alpha = \|u\|/\|v\| > 0$. Consequently, the left-hand side must vanish as well, which implies that $u - \alpha v = 0$.

Exercise 6.1.5 Let $\{u_n\}$ be a sequence of elements in an inner product space $V$. Prove that if
$$
(u_n, u) \longrightarrow (u, u) \qquad \text{and} \qquad \|u_n\| \longrightarrow \|u\|,
$$
then $u_n \to u$, i.e., $\|u_n - u\| \to 0$.

We have
$$
\|u_n - u\|^2 = (u_n - u, u_n - u) = \|u_n\|^2 - (u, u_n) - (u_n, u) + \|u\|^2 \to 2\|u\|^2 - 2(u,u) = 0,
$$
since $(u, u_n) = \overline{(u_n, u)} \to \overline{(u,u)} = (u,u)$.

Exercise 6.1.6 Show that the sequence of sequences
$$
u_1 = (\alpha_1, 0, 0, \dots), \quad u_2 = (0, \alpha_2, 0, \dots), \quad u_3 = (0, 0, \alpha_3, \dots),
$$
etc., where the $\alpha_i$ are scalars, is an orthogonal sequence in $\ell^2$, i.e., $(u_n, u_m) = 0$ for $m \ne n$.

Apply the definition of the inner product in $\ell^2$.

Exercise 6.1.7 Let $A : U \to V$ be a linear map from a Hilbert space $U, (\cdot,\cdot)_U$ into a Hilbert space $V, (\cdot,\cdot)_V$. Prove that the following conditions are equivalent to each other:

(i) $A$ is unitary, i.e., it preserves the inner product structure, $(Au, Av)_V = (u,v)_U$ for all $u, v \in U$;

(ii) $A$ is an isometry, i.e., it preserves the norm, $\|Au\|_V = \|u\|_U$ for all $u \in U$.

(i) $\Rightarrow$ (ii). Substitute $v = u$. (ii) $\Rightarrow$ (i). Use the polarization formula discussed in Exercise 6.1.2.
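The polarization step can be illustrated numerically. The sketch below is our own demonstration (not from the text), with the inner product taken linear in the first argument: it recovers inner products from norms alone and checks that a norm-preserving map (here a unitary matrix) automatically preserves inner products.

```python
import numpy as np

# Complex polarization identity, with (u, v) = sum_i u_i * conj(v_i)
# (linear in the first argument):
#   (u, v) = (1/4) * sum_{k=0..3} i^k * ||u + i^k v||^2
def polarize(u, v):
    return sum((1j ** k) * np.linalg.norm(u + (1j ** k) * v) ** 2
               for k in range(4)) / 4

rng = np.random.default_rng(1)
u = rng.standard_normal(4) + 1j * rng.standard_normal(4)
v = rng.standard_normal(4) + 1j * rng.standard_normal(4)

# np.vdot(v, u) = sum conj(v_i) u_i, i.e., (u, v) in the convention above
assert np.allclose(polarize(u, v), np.vdot(v, u))

# A unitary matrix Q preserves norms, hence -- by polarization --
# it preserves inner products as well.
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4)))
assert np.allclose(np.linalg.norm(Q @ u), np.linalg.norm(u))
assert np.allclose(polarize(Q @ u, Q @ v), polarize(u, v))
```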
Hilbert Spaces

6.2 Orthogonality and Orthogonal Projections
Exercises

Exercise 6.2.1 Let $V$ be an inner product space and $M, N$ denote vector subspaces of $V$. Prove the following algebraic properties of orthogonal complements:

(i) $M \subset N \Rightarrow N^\perp \subset M^\perp$.

(ii) $M \subset N \Rightarrow (M^\perp)^\perp \subset (N^\perp)^\perp$.

(iii) $M \cap M^\perp = \{0\}$.

(iv) If $M$ is dense in $V$ ($\bar M = V$), then $M^\perp = \{0\}$.

(i) Let $v \in N^\perp$. Then
$$
(n, v) = 0 \ \ \forall n \in N \quad \Rightarrow \quad (n, v) = 0 \ \ \forall n \in M \quad \Rightarrow \quad v \in M^\perp.
$$

(ii) Apply (i) twice.

(iii) Let $v \in M \cap M^\perp$. Then $v$ must be orthogonal to itself, i.e., $(v,v) = 0 \Rightarrow v = 0$.

(iv) Let $v \in M^\perp$ and $M \ni v_n \to v$. Passing to the limit in $(v_n, v) = 0$, we get $(v,v) = 0 \Rightarrow v = 0$.

Exercise 6.2.2 Let $M$ be a subspace of a Hilbert space $V$. Prove that
$$
\bar M = (M^\perp)^\perp.
$$
By Corollary 6.2.1,
$$
\bar M = \big( (\bar M)^\perp \big)^\perp.
$$
It is sufficient thus to show that
$$
(\bar M)^\perp = M^\perp.
$$
As $M \subset \bar M$, Exercise 6.2.1(i) implies that $(\bar M)^\perp \subset M^\perp$. Conversely, assume $v \in M^\perp$, and let $M \ni m_n \to m \in \bar M$. Passing to the limit in
$$
(m_n, v) = 0,
$$
we learn that $v \in (\bar M)^\perp$ as well.
Exercise 6.2.3 Two subspaces $M$ and $N$ of an inner product space $V$ are said to be orthogonal, denoted $M \perp N$, if
$$
(m, n) = 0 \qquad \forall m \in M,\ n \in N.
$$
Let $V$ now be a Hilbert space. Prove or disprove the following:

(i) $M \perp N \Longrightarrow M^\perp \perp N^\perp$.

(ii) $M \perp N \Longrightarrow (M^\perp)^\perp \perp (N^\perp)^\perp$.

The first assertion is false. Consider, e.g., $\mathbb{R}^3$ with the canonical inner product. Take $M = \mathbb{R} \times \{0\} \times \{0\}$ and $N = \{0\} \times \mathbb{R} \times \{0\}$. Then
$$
M^\perp = \{0\} \times \mathbb{R} \times \mathbb{R} \qquad \text{and} \qquad N^\perp = \mathbb{R} \times \{0\} \times \mathbb{R}.
$$
Obviously, spaces $M^\perp$ and $N^\perp$ are not orthogonal. To prove the second assertion, in view of Exercise 6.2.2, it is sufficient to show that
$$
M \perp N \ \Rightarrow \ \bar M \perp \bar N.
$$
But this follows immediately from the continuity of the inner product. Take $M \ni m_n \to m \in \bar M$ and $N \ni n_n \to n \in \bar N$, and pass to the limit in
$$
(m_n, n_n) = 0.
$$

Exercise 6.2.4 Let $\Omega$ be an open, bounded set in $\mathbb{R}^n$ and $V = L^2(\Omega)$ denote the space of square integrable functions on $\Omega$. Find the orthogonal complement in $V$ of the space of constant functions
$$
M = \{ u \in L^2(\Omega) : u = \text{const a.e. in } \Omega \}.
$$
Let $f \in L^2(\Omega)$. Projection of $f$ onto $M$ is equivalent to the variational problem:
$$
u_f \in \mathbb{R}, \qquad \int_\Omega u_f v = \int_\Omega f v \qquad \forall v \in \mathbb{R}.
$$
Selecting $v = 1$, we learn that $u_f$ is the average of $f$,
$$
u_f = \frac{1}{\operatorname{meas}(\Omega)} \int_\Omega f.
$$
The orthogonal complement $M^\perp$ contains the functions $f - u_f$, i.e., the functions of zero average,
$$
M^\perp = \Big\{ f \in L^2(\Omega) : \int_\Omega f = 0 \Big\}
$$
(compare Example 2.??).
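On a discrete grid, the projection onto the constants is just the mean, which makes the decomposition easy to check numerically. The snippet below is an illustration of ours (the test function is an arbitrary choice):

```python
import numpy as np

# Discrete analogue: project f onto the constants by taking its mean;
# the residual f - mean(f) has zero average, i.e., it lies in the
# orthogonal complement of the constant functions.
x = np.linspace(0.0, 1.0, 1001)
dx = x[1] - x[0]
f = np.sin(3.0 * x) + x ** 2      # an arbitrary test function (our choice)

u_f = f.mean()                     # projection of f onto M
g = f - u_f                        # component in the orthogonal complement

assert abs(g.mean()) < 1e-12       # zero average
# orthogonality to an arbitrary constant c: <g, c> = c * (integral of g) = 0
c = 7.3
assert abs(np.sum(g * c) * dx) < 1e-9
```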
Exercise 6.2.5 Let $\Omega \subset \mathbb{R}^N$ be a measurable set and $f_n : \Omega \to \mathbb{R}\,(\mathbb{C})$ a sequence of measurable functions. We say that sequence $f_n$ converges in measure to a measurable function $f : \Omega \to \mathbb{R}\,(\mathbb{C})$ if, for every $\varepsilon > 0$,
$$
m(\{ x \in \Omega : |f_n(x) - f(x)| \ge \varepsilon \}) \to 0 \qquad \text{as } n \to \infty.
$$
Let now $m(\Omega) < \infty$. Prove that $L^p(\Omega)$ convergence, for any $1 \le p \le \infty$, implies convergence in measure. Hint: For $1 \le p < \infty$, Chebyshev's inequality gives
$$
m(\{ x \in \Omega : |f_n(x) - f(x)| \ge \varepsilon \}) \le \frac{1}{\varepsilon^p} \int_\Omega |f_n(x) - f(x)|^p\, dx,
$$
and, for $p = \infty$, the left-hand side vanishes as soon as $\operatorname{ess\,sup}_{x \in \Omega} |f_n(x) - f(x)| < \varepsilon$.

Exercise 6.2.6 Let $f_n$ converge to $f$ in measure. Prove that one can extract a subsequence $f_{n_k}$ that converges to $f$ pointwise almost everywhere. Follow the steps:

Step 1. Show that, for every $\varepsilon > 0$, one can extract a subsequence $f_{n_k}$ such that
$$
m(\{ x \in \Omega : |f_{n_k}(x) - f(x)| \ge \varepsilon \}) \le \frac{1}{2^{k+1}} \qquad \forall k \ge 1.
$$

Step 2. Use the diagonal choice method to show that one can extract a subsequence $f_{n_k}$ such that
$$
m(\{ x \in \Omega : |f_{n_k}(x) - f(x)| \ge \tfrac{1}{k} \}) \le \frac{1}{2^{k+1}} \qquad \forall k \ge 1.
$$
Consequently,
$$
m(\{ x \in \Omega : |f_{n_k}(x) - f(x)| \ge \varepsilon \}) \le \frac{1}{2^{k+1}}
$$
for every $\varepsilon > 0$ and for $k$ large enough.

Step 3. Let $\varphi_k = f_{n_k}$ be the subsequence extracted in Step 2. Use the identities
$$
\{ x \in \Omega : \inf_{\nu \ge 0} \sup_{n \ge \nu} |\varphi_n(x) - f(x)| > 0 \}
= \bigcup_k \{ x \in \Omega : \inf_{\nu \ge 0} \sup_{n \ge \nu} |\varphi_n(x) - f(x)| \ge \tfrac{1}{k} \}
$$
$$
\{ x \in \Omega : \inf_{\nu \ge 0} \sup_{n \ge \nu} |\varphi_n(x) - f(x)| \ge \varepsilon \}
= \bigcap_{\nu \ge 0} \{ x \in \Omega : \sup_{n \ge \nu} |\varphi_n(x) - f(x)| \ge \varepsilon \}
$$
to prove that
$$
m(\{ x \in \Omega : \limsup_{n \to \infty} |\varphi_n(x) - f(x)| > 0 \})
\le \sum_k \lim_{\nu \to \infty} m(\{ x \in \Omega : \sup_{n \ge \nu} |\varphi_n(x) - f(x)| \ge \tfrac{1}{k} \}).
$$

Step 4. Use the identity
$$
\{ x \in \Omega : \sup_{n \ge \nu} |\varphi_n(x) - f(x)| > \tfrac{1}{k} \}
\subset \bigcup_{n \ge \nu} \{ x \in \Omega : |\varphi_n(x) - f(x)| > \tfrac{1}{k} \}
$$
and the result of Step 2 to show that
$$
m(\{ x \in \Omega : \sup_{n \ge \nu} |\varphi_n(x) - f(x)| \ge \varepsilon \}) \le \frac{1}{2^\nu}
$$
for every $\varepsilon > 0$ and ($\varepsilon$-dependent!) $\nu$ large enough.

Step 5. Use the results of Step 3 and Step 4 to conclude that
$$
m(\{ x \in \Omega : \lim_{k \to \infty} f_{n_k}(x) \ne f(x) \}) = 0.
$$

Remark: The Lebesgue Dominated Convergence Theorem establishes conditions under which pointwise convergence of a sequence of functions $f_n$ to a limit function $f$ implies the $L^p$ convergence. While the converse, in general, is not true, the results of the last two exercises at least show that the $L^p$ convergence of a sequence $f_n$ implies the pointwise convergence (almost everywhere only, of course) of a subsequence $f_{n_k}$.

Step 1. This follows directly from the definition of a convergent sequence. We have,
$$
\forall \delta \ \exists N : \quad m(\{ x \in \Omega : |f_n(x) - f(x)| \ge \varepsilon \}) \le \delta \qquad \forall n \ge N.
$$
Select an element $f_{n_1}$ that satisfies the condition for $\delta = 1/2^2$. By induction, given $n_1, \dots, n_{k-1}$, select $n_k > n_1, \dots, n_{k-1}$ such that $f_{n_k}$ satisfies the condition for $\delta = 1/2^{k+1}$. Notice that avoiding duplication (enforcing injectivity of the map $k \mapsto n_k$) is possible since we have an infinite number of elements of the sequence at our disposal.

Step 2. Use the Step 1 result for $\varepsilon = 1$. In particular, the subsequence converges in measure along with the original sequence. Take then $\varepsilon = 1/2$ and select a subsequence of the first subsequence $f_{n_k}$ (denoted with the same symbol) to satisfy the same condition. Proceed then by induction. The diagonal subsequence satisfies the required condition.
Step 3. By Proposition 3.1.6(v),
$$
m(\{ x \in \Omega : \inf_{\nu \ge 0} \sup_{n \ge \nu} |\varphi_n(x) - f(x)| \ge \tfrac{1}{k} \})
= \lim_{\nu \to \infty} m(\{ x \in \Omega : \sup_{n \ge \nu} |\varphi_n(x) - f(x)| \ge \tfrac{1}{k} \}).
$$
The final condition is then a consequence of the equality above, the first identity in Step 3, and the subadditivity of the measure.

Step 4. Given $\varepsilon$, choose $k$ such that $\varepsilon > \frac{1}{k}$. The identity and the subadditivity of the measure imply that, for $\nu \ge k$,
$$
m(\{ x \in \Omega : \sup_{n \ge \nu} |\varphi_n(x) - f(x)| \ge \varepsilon \})
\le m(\{ x \in \Omega : \sup_{n \ge \nu} |\varphi_n(x) - f(x)| > \tfrac{1}{k} \})
\le \sum_{n \ge \nu} m(\{ x \in \Omega : |\varphi_n(x) - f(x)| > \tfrac{1}{k} \})
\le \sum_{n = \nu}^\infty \frac{1}{2^{n+1}} = \frac{1}{2^\nu},
$$
where in the last step we used the Step 2 bound (note that $\frac{1}{k} \ge \frac{1}{n}$ for $n \ge k$).

Step 5. Combining the results of Step 3 and Step 4, we get
$$
m(\{ x \in \Omega : \limsup_{n \to \infty} |\varphi_n(x) - f(x)| > 0 \}) = 0,
$$
which is equivalent to the final assertion.
6.3 Orthonormal Bases and Fourier Series

Exercises

Exercise 6.3.1 Prove that every (not necessarily separable) nontrivial Hilbert space $V$ possesses an orthonormal basis. Hint: Compare the proof of Theorem 2.4.3 and prove that any orthonormal set in $V$ can be extended to an orthonormal basis.

Let $A_0$ be an orthonormal set. Let $\mathcal{U}$ be the class of orthonormal sets $A$ (i.e., $A$ contains unit vectors, and every two vectors are orthogonal to each other) containing $A_0$. Obviously, $\mathcal{U}$ is nonempty. Family $\mathcal{U}$ is partially ordered by inclusion. Let $A_\iota$, $\iota \in I$, be a chain in $\mathcal{U}$. Then
$$
A := \bigcup_{\iota \in I} A_\iota \in \mathcal{U} \qquad \text{and} \qquad A_\kappa \subset \bigcup_{\iota \in I} A_\iota, \ \forall \kappa \in I.
$$
Indeed, the linear ordering of the chain implies that, for each two vectors $u, v \in A$, there exists a common index $\iota \in I$ such that $u, v \in A_\iota$. Consequently, $u$ and $v$ are orthogonal. By the Kuratowski–Zorn Lemma, $\mathcal{U}$ has a maximal element, i.e., an orthonormal basis for space $V$ that contains $A_0$. To conclude the final result, pick an arbitrary vector $u_1 \ne 0$, and set $A_0 = \{ u_1 / \|u_1\| \}$.
Exercise 6.3.2 Let $\{e_n\}_{n=1}^\infty$ be an orthonormal family in a Hilbert space $V$. Prove that the following conditions are equivalent to each other.

(i) $\{e_n\}_{n=1}^\infty$ is an orthonormal basis, i.e., it is maximal.

(ii) $u = \displaystyle\sum_{n=1}^\infty (u, e_n)\, e_n$ for all $u \in V$.

(iii) $(u, v) = \displaystyle\sum_{n=1}^\infty (u, e_n)\, \overline{(v, e_n)}$.

(iv) $\|u\|^2 = \displaystyle\sum_{n=1}^\infty |(u, e_n)|^2$.

(i) $\Rightarrow$ (ii). Let
$$
u^N := \sum_{j=1}^N u_j e_j, \qquad u^N \to u.
$$
Multiply both sides of the equality above by $e_i$, and use the orthonormality of the $e_j$ to learn that
$$
u_i = (u^N, e_i) \to (u, e_i) \quad \text{as } N \to \infty.
$$

(ii) $\Rightarrow$ (iii). Use the orthogonality of the $e_i$ to learn that
$$
(u^N, v^N) = \sum_{i=1}^N u_i \bar v_i = \sum_{i=1}^N (u, e_i)\, \overline{(v, e_i)} \to \sum_{i=1}^\infty (u, e_i)\, \overline{(v, e_i)},
$$
while, by the continuity of the inner product, the left-hand side converges to $(u,v)$.

(iii) $\Rightarrow$ (iv). Substitute $v = u$.

(iv) $\Rightarrow$ (i). Suppose, to the contrary, that $\{e_1, e_2, \dots\}$ can be extended with a vector $u \ne 0$ to a bigger orthonormal family. Then $u$ is orthogonal to each $e_i$ and, by property (iv), $\|u\| = 0$. So $u = 0$, a contradiction.

Exercise 6.3.3 Let $\{e_n\}_{n=1}^\infty$ be an orthonormal family (not necessarily maximal) in a Hilbert space $V$. Prove Bessel's inequality:
$$
\sum_{i=1}^\infty |(u, e_i)|^2 \le \|u\|^2 \qquad \forall u \in V.
$$
Extend the family to an orthonormal basis (see Exercise 6.3.1), and use property (iv) proved in Exercise 6.3.2.

Exercise 6.3.4 Prove that every separable Hilbert space $V$ is unitarily equivalent to the space $\ell^2$. Hint: Establish a bijective correspondence between the canonical basis in $\ell^2$ and an orthonormal basis in $V$ and use it to define a unitary map mapping $\ell^2$ onto $V$.

Let $e_1, e_2, \dots$ be an orthonormal basis in $V$. Define the map
$$
T : \ell^2 \ni (x_1, x_2, \dots) \mapsto \sum_{i=1}^\infty x_i e_i =: x \in V.
$$
Linearity is obvious. $\sum_{i=1}^\infty |x_i|^2 < \infty$ implies that the sequence $x^N = \sum_{i=1}^N x_i e_i$ is Cauchy in $V$. Indeed, for $N > M$,
$$
\|x^N - x^M\|^2 = \sum_{i=M+1}^N |x_i|^2 \le \sum_{i=M+1}^\infty |x_i|^2 \to 0 \quad \text{as } M \to \infty.
$$
By the completeness of $V$, the series converges, i.e., the map is well defined. By Exercise 6.3.2(iv), the map is a surjection. Orthonormality of the $e_i$ implies that it is also an injection. Finally, it follows from the definition that the map is unitary.

Exercise 6.3.5 Prove the Riesz–Fischer Theorem. Let $V$ be a separable Hilbert space with an orthonormal basis $\{e_n\}_{n=1}^\infty$. Then
$$
V = \Big\{ \sum_{n=1}^\infty v_n e_n : \sum_{n=1}^\infty |v_n|^2 < \infty \Big\}.
$$
In other words, elements of $V$ can be characterized as infinite series $\sum_{n=1}^\infty v_n e_n$ with $\ell^2$-summable coefficients $v_n$.

See Exercise 6.3.4.
Exercise 6.3.6 Let $I = (-1,1)$ and let $V$ be the four-dimensional inner product space spanned by the monomials $\{1, x, x^2, x^3\}$ with
$$
(f, g)_V = \int_{-1}^1 f g\, dx.
$$

(i) Use the Gram–Schmidt process to construct an orthonormal basis for $V$.

(ii) Observing that $V \subset L^2(I)$, compute the orthogonal projection $\Pi u$ of the function $u(x) = x^4$ onto $V$.

(iii) Show that $(x^4 - \Pi x^4, v)_{L^2(I)} = 0$ for all $v \in V$.

(iv) Show that if $p(x)$ is any polynomial of degree $\le 3$, then $\Pi p = p$.

(v) Sketch the function $\Pi x^4$ and show graphically how it approximates $x^4$ in $V$.

(i) Taking the monomials $1, x, x^2, x^3$ in order, we obtain
$$
e_1(x) = \frac{1}{\sqrt 2}, \qquad e_2(x) = \sqrt{\tfrac32}\, x, \qquad
e_3(x) = \sqrt{\tfrac{45}{8}}\, \Big( x^2 - \tfrac13 \Big), \qquad
e_4(x) = \sqrt{\tfrac{175}{8}}\, \Big( x^3 - \tfrac35 x \Big).
$$

(ii) We get
$$
(\Pi u)(x) = \frac15 + \frac67 \Big( x^2 - \frac13 \Big) = \frac67 x^2 - \frac{3}{35}.
$$

(iii) Nothing to show. According to the Orthogonal Decomposition Theorem, $u - \Pi u$ is orthogonal to the subspace $V$.

(iv) This is a direct consequence of the orthogonality condition. If $u \in V$ then $u - \Pi u \in V$ as well and, in particular, $u - \Pi u$ must be orthogonal to itself,
$$
(u - \Pi u, u - \Pi u) = 0,
$$
which implies $u - \Pi u = 0$.

(v) See Fig. 6.1.

Figure 6.1 Function $x^4$ and its $L^2$ projection onto $P^3(-1,1)$.
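Parts (i), (ii), and (iv) can be verified with exact arithmetic. The sketch below is a verification of ours using sympy; the closed forms it confirms are the normalized Legendre polynomials on $(-1,1)$.

```python
import sympy as sp

x = sp.symbols('x')
ip = lambda f, g: sp.integrate(f * g, (x, -1, 1))   # the L2(-1, 1) inner product

# Gram-Schmidt on {1, x, x^2, x^3}
e = []
for p in [sp.Integer(1), x, x**2, x**3]:
    for q in e:
        p = p - ip(p, q) * q
    e.append(sp.expand(p / sp.sqrt(ip(p, p))))

# Closed forms from the solution (normalized Legendre polynomials)
assert sp.simplify(e[2] - sp.sqrt(sp.Rational(45, 8)) * (x**2 - sp.Rational(1, 3))) == 0
assert sp.simplify(e[3] - sp.sqrt(sp.Rational(175, 8)) * (x**3 - sp.Rational(3, 5) * x)) == 0

# (ii) Orthogonal projection of x^4 onto V
proj = sp.expand(sum(ip(x**4, q) * q for q in e))
assert sp.expand(proj - (sp.Rational(6, 7) * x**2 - sp.Rational(3, 35))) == 0

# (iv) The projection reproduces any polynomial of degree <= 3
p3 = 1 + 2*x - x**2 + 5*x**3
assert sp.expand(sum(ip(p3, q) * q for q in e) - p3) == 0
```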
Exercise 6.3.7 Use the orthonormal basis from Example 6.3.4 to construct the (classical) Fourier series representation of the following functions in $L^2(0,1)$:
$$
f(x) = x, \qquad f(x) = x + 1.
$$
Evaluation of the coefficients $(f, e_k)$ leads to the formulas
$$
x = \frac12 - \frac{1}{\pi} \sum_{k=1}^\infty \frac{1}{k} \sin 2\pi k x, \qquad
x + 1 = \frac32 - \frac{1}{\pi} \sum_{k=1}^\infty \frac{1}{k} \sin 2\pi k x.
$$
Duality in Hilbert Spaces

6.4 Riesz Representation Theorem

Exercises

Exercise 6.4.1 Revisit Example 6.4.1 and derive the matrix representation of the Riesz map under the assumption that the dual space consists of antilinear functionals.

Follow the lines in the text to obtain
$$
f_j = g_{jk} x_k.
$$
The only difference between the formula above and the formula in the text is the disappearance of the conjugate over $x_k$.
6.5 The Adjoint of a Linear Operator

6.6 Variational Boundary-Value Problems

Exercises

Exercise 6.6.1 Let $X$ be a Hilbert space and $V$ a closed subspace. Prove that the quotient space $X/V$, which a priori is only a Banach space, is in fact a Hilbert space.

Let $V^\perp$ be the orthogonal complement of $V$ in $X$,
$$
X = V \oplus V^\perp.
$$
As a closed subspace of a Hilbert space, $V^\perp$ is itself a Hilbert space. Consider the map
$$
T : V^\perp \ni w \mapsto [w] = w + V \in X/V.
$$
Map $T$ is an isometry from $V^\perp$ onto $X/V$. Indeed, $T$ is linear and, since $w \perp v$,
$$
\inf_{v \in V} \|w + v\|^2 = \inf_{v \in V} (\|w\|^2 + \|v\|^2) = \|w\|^2.
$$
From the representation $x = (x - Px) + Px$, where $P$ is the orthogonal projection onto $V$, it follows also that $T$ is surjective. Map $T$ transfers the inner product from $V^\perp$ into $X/V$,
$$
([w_1], [w_2])_{X/V} \stackrel{\text{def}}{=} (T^{-1}[w_1], T^{-1}[w_2]).
$$

Exercise 6.6.2 Prove a simplified version of the Poincaré inequality for the case of $\Gamma_1 = \Gamma$. Let $\Omega$ be a bounded, open set in $\mathbb{R}^n$. There exists a positive constant $c > 0$ such that
$$
\int_\Omega u^2\, dx \le c \int_\Omega |\nabla u|^2\, dx \qquad \forall u \in H_0^1(\Omega).
$$
Hint: Follow the steps:

Step 1. Assume that $\Omega$ is a cube in $\mathbb{R}^n$, $\Omega = (-a,a)^n$, and that $u \in C_0^\infty(\Omega)$. Since $u$ vanishes on the boundary of $\Omega$, we have
$$
u(x_1, \dots, x_n) = \int_{-a}^{x_n} \frac{\partial u}{\partial x_n}(x_1, \dots, t)\, dt.
$$
Use the Cauchy–Schwarz inequality to obtain
$$
u^2(x_1, \dots, x_n) \le (x_n + a) \int_{-a}^{a} \Big( \frac{\partial u}{\partial x_n}(x_1, \dots, t) \Big)^2 dt
$$
and integrate over $\Omega$ to get the result.

Step 2. $\Omega$ bounded, $u \in C_0^\infty(\Omega)$. Enclose $\Omega$ in a sufficiently large cube $(-a,a)^n$ and extend $u$ by zero to the cube. Apply the Step 1 result.

Step 3. Use the density of test functions $C_0^\infty(\Omega)$ in $H_0^1(\Omega)$.

Solution:

Step 1. Applying the Cauchy–Schwarz inequality to the identity above, we get
$$
u^2(x_1, \dots, x_n) \le (x_n + a) \int_{-a}^{x_n} \Big( \frac{\partial u}{\partial x_n}(x_1, \dots, t) \Big)^2 dt
\le (x_n + a) \int_{-a}^{a} \Big( \frac{\partial u}{\partial x_n}(x_1, \dots, t) \Big)^2 dt.
$$
Integrating over $\Omega$ on both sides, and noting that $\int_{-a}^a (x_n + a)\, dx_n = 2a^2$, we get
$$
\int_\Omega u^2\, dx \le 2a^2 \int_\Omega \Big( \frac{\partial u}{\partial x_n} \Big)^2 dx.
$$

Step 2. Applying the Step 1 result on the cube $\Omega_1 = (-a,a)^n$ to the extended function, we get
$$
\int_\Omega u^2\, dx = \int_{\Omega_1} u^2\, dx \le 2a^2 \int_{\Omega_1} \Big( \frac{\partial u}{\partial x_n} \Big)^2 dx = 2a^2 \int_\Omega \Big( \frac{\partial u}{\partial x_n} \Big)^2 dx.
$$

Step 3. Let $u \in H_0^1(\Omega)$ and $u_m \in C_0^\infty(\Omega)$ be a sequence converging to $u$ in $H^1(\Omega)$. Then
$$
\int_\Omega u_m^2\, dx \le 2a^2 \int_\Omega \Big( \frac{\partial u_m}{\partial x_n} \Big)^2 dx.
$$
Passing to the limit, we get
$$
\int_\Omega u^2\, dx \le 2a^2 \int_\Omega \Big( \frac{\partial u}{\partial x_n} \Big)^2 dx \le 2a^2 \int_\Omega |\nabla u|^2\, dx.
$$
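In one dimension, the Step 1 constant is easy to test numerically. The sketch below is an illustration of ours, with one particular test function vanishing at the endpoints:

```python
import numpy as np

# One-dimensional check of the Step 1 bound on (-a, a):
#   integral u^2 <= 2*a^2 * integral (u')^2   for u with u(-a) = u(a) = 0.
a = 1.0
x = np.linspace(-a, a, 4001)
dx = x[1] - x[0]

u = a**2 - x**2          # vanishes at the endpoints (our test function)
du = -2.0 * x            # exact derivative

lhs = np.sum(u**2) * dx               # analytically 16/15 for this u
rhs = 2 * a**2 * np.sum(du**2) * dx   # analytically 16/3

assert lhs <= rhs
assert abs(lhs - 16.0 / 15.0) < 1e-3
assert abs(rhs - 16.0 / 3.0) < 1e-2
```

The inequality holds with a wide margin here; the constant $2a^2$ is not claimed to be sharp.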
Exercise 6.6.3 Let $\Omega$ be a sufficiently regular domain in $\mathbb{R}^n$, $n \ge 1$, and let $\Gamma$ denote its boundary. Consider the diffusion-convection-reaction problem discussed in the text with slightly different boundary conditions,
$$
\begin{cases}
-(a_{ij} u_{,j})_{,i} + b_i u_{,i} + c u = f & \text{in } \Omega, \\
u = 0 & \text{on } \Gamma_1, \\
a_{ij} u_{,j} n_i = 0 & \text{on } \Gamma_2,
\end{cases}
$$
with commas denoting differentiation, e.g., $u_{,i} = \frac{\partial u}{\partial x_i}$, and the Einstein summation convention in use. In this and the following exercises, we ask the reader to reproduce the arguments in the text for this slightly modified problem. Make the same assumptions on the coefficients $a_{ij}, b_i, c$ as in the text.

Step 1: Derive (formally) the classical variational formulation:
$$
u \in H^1_{\Gamma_1}(\Omega), \qquad
\int_\Omega \{ a_{ij} u_{,j} v_{,i} + b_j u_{,j} v + c u v \}\, dx = \int_\Omega f v\, dx \qquad \forall v \in H^1_{\Gamma_1}(\Omega),
$$
where
$$
H^1_{\Gamma_1}(\Omega) := \{ u \in H^1(\Omega) : u = 0 \text{ on } \Gamma_1 \}.
$$

Step 2: Use the Cauchy–Schwarz inequality, the assumptions on the coefficients $a_{ij}, b_j, c$, and an appropriate assumption on the source term $f(x)$ to prove that the bilinear and linear forms are continuous on $H^1(\Omega)$.

Step 3: Use the Poincaré inequality and the assumptions on the coefficients $a_{ij}, b_j, c$ to prove that the bilinear form is coercive.

Step 4: Use the Lax–Milgram Theorem to conclude that the variational problem is well posed.

All reasoning is fully analogous to that in the text.

Exercise 6.6.4 Reformulate the second order diffusion-convection-reaction problem considered in Exercise 6.6.3 as a first order problem
$$
\begin{cases}
\sigma_i = a_{ij} u_{,j}, \\
-\sigma_{i,i} + b_i u_{,i} + c u = f,
\end{cases}
$$
where the first equation may be considered to be a (new) definition of the flux $\sigma_i$. Use the ellipticity condition to introduce the inverse $\alpha_{ij} = (a_{ij})^{-1}$ (the compliance matrix), and multiply the first equation by $\alpha_{ij}$ to arrive at the equivalent system
$$
\begin{cases}
\alpha_{ij} \sigma_j - u_{,i} = g_i, \\
-\sigma_{i,i} + b_i u_{,i} + c u = f,
\end{cases}
\tag{6.4}
$$
with the additional source term $g_i = 0$ vanishing for the original problem. We can cast the system into a general abstract problem $Au = f$, where $u, f$ are group variables and $A$ represents the first order system,
$$
u \mathrel{\hat=} (\sigma, u) \in (L^2(\Omega))^n \times L^2(\Omega), \qquad
f \mathrel{\hat=} (g, f) \in (L^2(\Omega))^n \times L^2(\Omega),
$$
$$
Au := (\alpha_{ij} \sigma_j - u_{,i},\ -\sigma_{i,i} + b_i u_{,i} + c u).
$$
Recall that the accent over the equality sign indicates a "metalanguage" and is supposed to help you survive the notational conflicts; on the abstract level both $u$ and $f$ gain a new meaning. The definition of the domain of $A$ incorporates the boundary conditions:
$$
D(A) := \{ (\sigma, u) \in (L^2(\Omega))^n \times L^2(\Omega) :
A(\sigma, u) \in (L^2(\Omega))^n \times L^2(\Omega),\ u = 0 \text{ on } \Gamma_1,\ \sigma \cdot n = 0 \text{ on } \Gamma_2 \}.
$$

Step 1: Prove that the operator $A$ is closed.

Step 2: Prove that the operator $A$ is bounded below,
$$
\|Au\| \ge \gamma \|u\|, \qquad u \in D(A).
$$
Hint: Eliminate the flux and reduce the problem back to the second order problem with the right-hand side equal to $f - (a_{ij} g_j)_{,i}$. Then upgrade slightly the arguments used in Exercise 6.6.3.

Step 3: Identify the adjoint operator $A^*$ (along with its domain),
$$
(Au, v) = (u, A^* v), \qquad u \in D(A),\ v \in D(A^*).
$$

Step 4: Show that the adjoint operator is injective. Recall then the Closed Range Theorem for Closed Operators and conclude that the adjoint is bounded below with the same constant $\gamma$ as well. Discuss then the well-posedness of the first order problem $Au = f$.

Step 5: Discuss the well-posedness of the adjoint problem,
$$
v \in D(A^*), \qquad A^* v = f.
$$

Again, the reasoning is identical to that in the text.
Hilbert Spaces
177
Exercise 6.6.5 Consider the ultraweak variational formulation corresponding to the first-order system studied in Exercise 6.6.4,
\[
\begin{cases}
u \in L^2(\Omega) \\
(u, A^*v) = (f, v), \qquad v \in D(A^*)
\end{cases}
\]
or, in a more explicit form,
\[
\begin{cases}
\sigma \in (L^2(\Omega))^n,\; u \in L^2(\Omega) \\
\displaystyle\int_\Omega \sigma_i(\alpha_{ji}\tau_j + v_{,i})\,dx + \int_\Omega u\big(\tau_{i,i} - (b_i v)_{,i} + cv\big)\,dx = \int_\Omega (g_i\tau_i + fv)\,dx \\
\tau \in H_{\Gamma_2}(\mathrm{div}, \Omega),\; v \in H^1_{\Gamma_1}(\Omega)
\end{cases}
\]
where
\[
H_{\Gamma_2}(\mathrm{div}, \Omega) := \{\sigma \in (L^2(\Omega))^n \,:\, \mathrm{div}\,\sigma \in L^2(\Omega),\; \sigma\cdot n = 0 \text{ on } \Gamma_2\}
\]

Step 1: Double-check that the energy spaces in the abstract and the concrete formulations are identical.

Step 2: Identify the strong form of the adjoint operator discussed in Exercise 6.6.4 as the transpose operator corresponding to the bilinear form b(u, v) = (u, A*v), and thus conclude that the conjugate operator is bounded below.

Step 3: Use the well-posedness of the (strong form of the) adjoint problem to conclude that the ultraweak operator
\[
B : L^2 \to (D(A^*))', \qquad \langle Bu, v\rangle = b(u, v)
\]
is injective.

Step 4: Use the Closed Range Theorem for Continuous Operators to conclude that the ultraweak operator B satisfies the inf-sup condition with the same constant γ as for the adjoint operator and, therefore, the same constant γ as for the original strong form of operator A.

Step 5: Conclude with a short discussion of the well-posedness of the ultraweak variational formulation. Just follow the text.

Exercise 6.6.6 Suppose we begin with the first-order system (6.4). We multiply the first equation by a test function τ_i, the second by a test function v, integrate over the domain Ω, and sum up all the equations. If we leave them alone, we obtain a "trivial" variational formulation with the solution (σ, u) in the graph-norm energy space and L² test functions (τ, v). If we integrate by parts ("relax") both equations, we obtain the ultraweak variational formulation. The energy spaces have been switched: the solution now lives in the L² space, and the test function comes from the graph-norm energy space for the adjoint. We have two more obvious choices left: we can relax one of the equations and leave the other one in the strong form. The purpose of this exercise is to study the formulation where we relax only the second equation (the conservation law). Identify the energy spaces and show that the problem is equivalent to the classical variational formulation discussed in Exercise 6.6.3 with the right-hand side equal to
f − (α_{ij}g_j)_{,i}. Discuss the relation between the two operators. Could you modify the standard norm in the H¹ space in such a way that the inf-sup constants corresponding to the two formulations would actually be identical? Follow the text.

Exercise 6.6.7 Analogously to Exercise 6.6.6, we now relax only the first equation (the constitutive law). We arrive at the so-called mixed formulation,
\[
\begin{cases}
\sigma \in H_{\Gamma_2}(\mathrm{div}, \Omega),\; u \in L^2(\Omega) \\
\displaystyle\int_\Omega \alpha_{ij}\sigma_j\tau_i\,dx + \int_\Omega u\,\tau_{j,j}\,dx = \int_\Omega g_i\tau_i\,dx, \qquad \tau \in H_{\Gamma_2}(\mathrm{div}, \Omega) \\
\displaystyle\int_\Omega (-\sigma_{i,i} + b_i u_{,i} + cu)v\,dx = \int_\Omega fv\,dx, \qquad v \in L^2(\Omega)
\end{cases}
\]
Identify the corresponding conjugate operator. Discuss boundedness below (inf-sup conditions) for both operators and the well-posedness of the formulation. Follow the text.
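As a sketch of where the relaxed first equation comes from (an illustration, assuming the boundary conditions of Exercise 6.6.4): integrating the constitutive law against τ and moving the derivative off u,
\[
\int_\Omega (\alpha_{ij}\sigma_j - u_{,i})\tau_i\,dx = \int_\Omega g_i\tau_i\,dx
\quad\Longrightarrow\quad
\int_\Omega \alpha_{ij}\sigma_j\tau_i\,dx + \int_\Omega u\,\tau_{i,i}\,dx - \int_{\partial\Omega} u\,\tau_i n_i\,ds = \int_\Omega g_i\tau_i\,dx
\]
where the boundary term drops for u = 0 on Γ₁ and τ·n = 0 on Γ₂; after the relaxation, the condition on u is imposed only weakly. The conservation law then remains in strong form, tested with v ∈ L²(Ω).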
6.7 Generalized Green's Formulae for Operators on Hilbert Spaces
Exercises

Exercise 6.7.1 Consider the elastic beam equation
\[
(EIw'')'' = q, \qquad 0 < x < l
\]

\[
n\int_{-a-\frac{1}{2n}}^{-a+\frac{1}{2n}} \frac{d\xi}{(a+\xi)^2} > 4n^2 \to \infty
\]
Concluding, the operator has only a continuous spectrum that coincides with the whole real line.
6.9 Spectra of Continuous Operators. Fundamental Properties
Exercises

Exercise 6.9.1 Let X be a real normed space and X × X its complex extension (comp. Section 6.1). Let A : X → X be a linear operator, and let $\tilde{A}$ denote its extension to the complex space, defined as
\[
\tilde{A}((u, v)) = (Au, Av)
\]
Suppose that λ ∈ ℂ is an eigenvalue of $\tilde{A}$ with a corresponding eigenvector w = (u, v). Show that the complex conjugate $\bar{\lambda}$ is an eigenvalue of $\tilde{A}$ as well, with the corresponding eigenvector $\bar{w} = (u, -v)$.

Exercise 6.9.2 Let U be a Banach space, and let λ and µ be two different eigenvalues (λ ≠ µ) of an operator A ∈ L(U, U) and of its transpose A′ ∈ L(U′, U′), respectively, with corresponding eigenvectors x ∈ U and g ∈ U′. Show that
\[
\langle g, x \rangle = 0
\]
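Neither exercise is worked out above; the following brief sketches are illustrations, assuming the standard convention for the complex structure on X × X (comp. Section 6.1), namely (a + ib)(u, v) = (au − bv, bu + av).

For Exercise 6.9.1, write λ = a + ib. Then $\tilde{A}w = \lambda w$ means Au = au − bv and Av = bu + av, so
\[
\tilde{A}\bar{w} = (Au, -Av) = (au - bv, -(bu + av)) = (a - ib)(u, -v) = \bar{\lambda}\,\bar{w}
\]

For Exercise 6.9.2, with Ax = λx and A′g = µg,
\[
\mu\langle g, x\rangle = \langle A'g, x\rangle = \langle g, Ax\rangle = \lambda\langle g, x\rangle
\quad\Longrightarrow\quad
(\lambda - \mu)\langle g, x\rangle = 0
\]
and λ ≠ µ forces ⟨g, x⟩ = 0.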
6.10 Spectral Theory for Compact Operators
Exercises

Exercise 6.10.1 Let T be a compact operator from a Hilbert space U into a Hilbert space V. Show that:

(i) T*T is a compact, self-adjoint, positive semidefinite operator from the space U into itself.

(ii) All eigenvalues of a self-adjoint operator on a Hilbert space are real. Conclude that all eigenvalues of T*T are real and non-negative.

By Proposition 5.15.3(ii), T*T, as a composition of a compact and a continuous operator, is compact. Since (T*T)* = T*T** = T*T, the operator T*T is also self-adjoint. Finally,
\[
(T^*Tu, u)_U = (Tu, Tu)_V = \|Tu\|^2_V \ge 0
\]
188
APPLIED FUNCTIONAL ANALYSIS
SOLUTION MANUAL
i.e., the operator is positive semidefinite. Next, if A is a self-adjoint operator on a Hilbert space H, and (λ, e) is an eigenpair of A, we have
\[
\lambda\|e\|^2 = (\lambda e, e) = (Ae, e) = (e, Ae) = (e, \lambda e) = \bar{\lambda}\|e\|^2
\]
which implies that $\lambda = \bar{\lambda}$, i.e., λ is real. Additionally, if A is positive semidefinite, then
\[
(Ae, e) = \lambda(e, e) \ge 0
\]
which implies λ ≥ 0.
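For a finite-dimensional illustration (a sketch, not part of the manual), these conclusions can be checked numerically: for any complex matrix T, the matrix T*T is Hermitian with real, non-negative eigenvalues.

```python
import numpy as np

rng = np.random.default_rng(0)

# An arbitrary "operator" T : C^4 -> C^3 (any complex matrix serves here)
T = rng.standard_normal((3, 4)) + 1j * rng.standard_normal((3, 4))

TsT = T.conj().T @ T                    # T*T : C^4 -> C^4

# Self-adjoint: (T*T)* = T*T** = T*T
assert np.allclose(TsT, TsT.conj().T)

# Positive semidefinite: (T*T u, u) = ||Tu||^2 >= 0 for any u
u = rng.standard_normal(4) + 1j * rng.standard_normal(4)
assert np.isclose(np.vdot(u, TsT @ u).real, np.linalg.norm(T @ u) ** 2)

# Eigenvalues are real (eigvalsh exploits the Hermitian symmetry) and >= 0
eigs = np.linalg.eigvalsh(TsT)
assert np.all(eigs >= -1e-12)
```

Note that `np.vdot` conjugates its first argument, so `np.vdot(u, TsT @ u)` is exactly the inner product (u, T*Tu) in the convention above.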
6.11 Spectral Theory for Self-Adjoint Operators
Exercises

Exercise 6.11.1 Determine the spectral properties of the integral operator
\[
(Au)(x) = \int_0^x \int_\xi^1 u(\eta)\,d\eta\,d\xi
\]
defined on the space U = L²(0, 1).

The operator is self-adjoint. Indeed, changing the order of integration twice, we get
\[
\int_0^1 \left( \int_0^x \int_\xi^1 u(\eta)\,d\eta\,d\xi \right) \bar{v}(x)\,dx
= \int_0^1 \left( \int_\xi^1 u(\eta)\,d\eta \right)\left( \int_\xi^1 \bar{v}(x)\,dx \right) d\xi
= \int_0^1 u(x) \int_0^x \int_\xi^1 \bar{v}(\eta)\,d\eta\,d\xi\,dx
\]
The intermediate (symmetric) expression above also shows that the operator is positive semidefinite: take v = u.

The operator is also compact. Indeed,
\[
\int_0^x \int_\xi^1 u(\eta)\,d\eta\,d\xi
= \int_0^x \int_\xi^x u(\eta)\,d\eta\,d\xi + \int_0^x \int_x^1 u(\eta)\,d\eta\,d\xi
= \int_0^x \left( \int_0^\eta d\xi \right) u(\eta)\,d\eta + x \int_x^1 u(\eta)\,d\eta
\]
\[
= \int_0^x \eta\,u(\eta)\,d\eta + x \int_x^1 u(\eta)\,d\eta
= \int_0^1 K(x, \eta)\,u(\eta)\,d\eta
\]
where
\[
K(x, \eta) = \begin{cases} \eta, & 0 \le \eta \le x \\ x, & x < \eta \le 1 \end{cases}
= \min\{x, \eta\}
\]
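As a numerical cross-check (a sketch, not part of the manual): discretizing the kernel K(x, η) = min{x, η} with a midpoint rule approximates the spectrum of A. The reference values λ_k = 1/((k + 1/2)²π²) used below are an assumption obtained by differentiating Aφ = λφ twice, which reduces it to −λφ″ = φ with φ(0) = 0, φ′(1) = 0.

```python
import numpy as np

# Midpoint-rule (Nystrom) discretization of the integral operator with
# kernel K(x, eta) = min(x, eta) on L^2(0, 1), derived above.
N = 400
h = 1.0 / N
x = (np.arange(N) + 0.5) * h            # midpoint nodes
A = np.minimum.outer(x, x) * h          # A[i, j] = K(x_i, x_j) * h

eigs = np.linalg.eigvalsh(A)[::-1]      # symmetric matrix; sort descending

# Reference eigenvalues lambda_k = 1 / ((k + 1/2)^2 pi^2) -- an assumption
# from reducing A*phi = lambda*phi to -lambda*phi'' = phi, phi(0)=0, phi'(1)=0
exact = 1.0 / ((np.arange(3) + 0.5) ** 2 * np.pi ** 2)

assert np.all(eigs > -1e-10)            # positive semidefinite
assert np.allclose(eigs[:3], exact, atol=1e-3)
print(np.round(eigs[:3], 5))
```

The agreement of the leading discrete eigenvalues with 1/((k + 1/2)²π²) is consistent with the operator being compact, self-adjoint, and positive semidefinite.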