257 32 2MB
English Pages [312] Year 2023
Basic Analysis I Introduction to Real Analysis, Volume I
by Jiří Lebl July 11, 2023 (version 6.0)
2
Typeset in LATEX. Copyright ©2009–2023 Jiří Lebl
This work is dual licensed under the Creative Commons Attribution-Noncommercial-Share Alike 4.0 International License and the Creative Commons Attribution-Share Alike 4.0 International License. To view a copy of these licenses, visit https://creativecommons.org/ licenses/by-nc-sa/4.0/ or https://creativecommons.org/licenses/by-sa/4.0/ or send a letter to Creative Commons PO Box 1866, Mountain View, CA 94042, USA. You can use, print, duplicate, share this book as much as you want. You can base your own notes on it and reuse parts if you keep the license the same. You can assume the license is either the CC-BY-NC-SA or CC-BY-SA, whichever is compatible with what you wish to do, your derivative works must use at least one of the licenses. Derivative works must be prominently marked as such. During the writing of this book, the author was in part supported by NSF grants DMS0900885 and DMS-1362337. The date is the main identifier of version. The major version / edition number is raised only if there have been substantial changes. From 6th edition onwards, both volumes share the same version number. Edition number started at 4, that is, version 4.0, as it was not kept track of before. See https://www.jirka.org/ra/ for more information (including contact information, possible updates and errata). The LATEX source for the book is available for possible modification and customization at github: https://github.com/jirilebl/ra
Contents Introduction 0.1 About this book . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 0.2 About analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 0.3 Basic set theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
Real Numbers 1.1 Basic properties . . . . . . . . . . . . . . 1.2 The set of real numbers . . . . . . . . . . 1.3 Absolute value and bounded functions 1.4 Intervals and the size of ℝ . . . . . . . . 1.5 Decimal representation of the reals . . .
2
Sequences and Series 2.1 Sequences and limits . . . . . . . . . . . . . . . . . . . . 2.2 Facts about limits of sequences . . . . . . . . . . . . . . 2.3 Limit superior, limit inferior, and Bolzano–Weierstrass . 2.4 Cauchy sequences . . . . . . . . . . . . . . . . . . . . . . 2.5 Series . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.6 More on series . . . . . . . . . . . . . . . . . . . . . . . .
3
Continuous Functions 3.1 Limits of functions . . . . . . . . . . . . . 3.2 Continuous functions . . . . . . . . . . . . 3.3 Extreme and intermediate value theorems 3.4 Uniform continuity . . . . . . . . . . . . . 3.5 Limits at infinity . . . . . . . . . . . . . . . 3.6 Monotone functions and continuity . . . .
. . . . . .
. . . . . .
. . . . . .
. . . . . .
. . . . . .
. . . . . .
. . . . . .
. . . . . .
. . . . . .
. . . . . .
. . . . . .
. . . . . .
. . . . . .
. . . . . .
. . . . . .
. . . . . .
. . . . . .
. . . . . .
4
The Derivative 4.1 The derivative . . . . . . 4.2 Mean value theorem . . 4.3 Taylor’s theorem . . . . . 4.4 Inverse function theorem
. . . .
. . . .
. . . .
. . . .
. . . .
. . . .
. . . .
. . . .
. . . .
. . . .
. . . .
. . . .
. . . .
. . . .
. . . .
. . . .
. . . .
. . . .
. . . .
. . . .
. . . .
. . . .
. . . .
. . . .
. . . .
. . . .
. . . .
. . . . .
. . . .
. . . . .
. . . . .
. . . . .
. . . . .
. . . . .
. . . . .
. . . . .
. . . . .
. . . . . . . . . . .
. . . . . . . . . . .
. . . . . . . . . . .
. . . . . . . . . . .
. . . . . . . . . . .
. . . . . . . . . . .
. . . . . . . . . . .
. . . . . . . . . . .
. . . . . . . . . . .
. . . . . . . . . . .
5 5 7 8
. . . . .
23 23 29 36 41 44
. . . . . .
51 51 61 73 84 87 100
. . . . . .
. . . . . .
113 113 122 130 138 145 149
. . . .
. . . .
155 155 162 171 176
. . . . . . . . . . .
4
CONTENTS
5
The Riemann Integral 5.1 The Riemann integral . . . . . . . . 5.2 Properties of the integral . . . . . . 5.3 Fundamental theorem of calculus . 5.4 The logarithm and the exponential 5.5 Improper integrals . . . . . . . . .
6
Sequences of Functions 227 6.1 Pointwise and uniform convergence . . . . . . . . . . . . . . . . . . . . . . . 227 6.2 Interchange of limits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 234 6.3 Picard’s theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 246
7
Metric Spaces 7.1 Metric spaces . . . . . . . . . . . . . . . . . . . . 7.2 Open and closed sets . . . . . . . . . . . . . . . . 7.3 Sequences and convergence . . . . . . . . . . . . 7.4 Completeness and compactness . . . . . . . . . . 7.5 Continuous functions . . . . . . . . . . . . . . . . 7.6 Fixed point theorem and Picard’s theorem again
. . . . .
. . . . .
. . . . .
. . . . .
. . . . .
. . . . .
. . . . .
. . . . .
. . . . .
. . . . . .
. . . . .
. . . . . .
. . . . .
. . . . . .
. . . . .
. . . . . .
. . . . .
. . . . . .
. . . . .
. . . . . .
. . . . .
. . . . . .
. . . . .
. . . . . .
. . . . .
. . . . . .
. . . . .
. . . . . .
. . . . .
. . . . . .
. . . . .
. . . . . .
. . . . .
. . . . . .
. . . . .
. . . . . .
. . . . .
. . . . . .
. . . . .
. . . . . .
181 181 191 200 207 214
255 255 264 274 279 288 296
Further Reading
301
Index
303
List of Notation
309
Introduction 0.1
About this book
This first volume is a one semester course in basic analysis. With the second volume it is a year-long course. The book started its life as my lecture notes for teaching Math 444 at the University of Illinois at Urbana-Champaign (UIUC) in the fall semester of 2009. I added the metric space chapter to teach Math 521 at University of Wisconsin–Madison (UW). Volume II was added to teach Math 4143/4153 at Oklahoma State University (OSU). A prerequisite for these courses is usually a basic proof course, using for example [H], [F], or [DW]. It should be possible to use the book for both a basic course for students who do not necessarily wish to go to graduate school (such as UIUC 444), but also as a more advanced one-semester course that also covers topics such as metric spaces (such as UW 521). Here are suggestions for a semester course. A slower course such as UIUC 444: §0.3, §1.1–§1.4, §2.1–§2.5, §3.1–§3.4, §4.1–§4.2, §5.1–§5.3, §6.1–§6.3 A more rigorous course covering metric spaces that runs quite a bit faster (e.g., UW 521): §0.3, §1.1–§1.4, §2.1–§2.5, §3.1–§3.4, §4.1–§4.2, §5.1–§5.3, §6.1–§6.2, §7.1–§7.6 It should also be possible to run a faster course without metric spaces covering all sections of chapters 0 through 6. The approximate number of lectures given in the section notes through chapter 6 are a very rough estimate and were designed for the slower course. The first few chapters of the book can be used in an introductory proofs course as is done, for example, at Iowa State University Math 201, where this book is used in conjunction with Hammack’s Book of Proof [H]. With volume II, one can run a year-long course that covers multivariable topics. In this scenario, it may make sense to cover most of the first volume in the first semester while leaving metric spaces for the beginning of the second semester. The structure of the beginning of volume I somewhat follows the standard syllabus of UIUC Math 444 and therefore has some similarities with Bartle and Sherbert, Introduction to Real Analysis [BS], which is the standard book at UIUC. A major difference is that we define the Riemann integral using Darboux sums and not tagged partitions. The Darboux approach is far more appropriate for a course of this level. Our approach allows us to fit a course such as UIUC 444 within a semester and still spend some time on the interchange of limits and end with Picard’s theorem on the
6
INTRODUCTION
existence and uniqueness of solutions of ordinary differential equations. This theorem is a wonderful example that uses many results proved in the book. For more advanced students, material may be covered faster so that we arrive at metric spaces and prove Picard’s theorem using the fixed point theorem as is usual. Other excellent books exist. My favorite is Rudin’s excellent Principles of Mathematical Analysis [R2] or, as it is commonly and lovingly called, baby Rudin (to distinguish it from his other great analysis textbook, big Rudin). I took a lot of inspiration and ideas from Rudin. However, Rudin is a bit more advanced and ambitious than this present course. For those that wish to continue mathematics, Rudin is a fine investment. An inexpensive and somewhat simpler alternative to Rudin is Rosenlicht’s Introduction to Analysis [R1]. There is also the freely downloadable Introduction to Real Analysis by William Trench [T]. A note about the style of some of the proofs: Many proofs traditionally done by contradiction, I prefer to do by a direct proof or by contrapositive. While the book does include proofs by contradiction, I only do so when the contrapositive statement seemed too awkward, or when contradiction follows rather quickly. Contradiction is more likely to get beginning students into trouble, as we are talking about objects that do not exist. I try to avoid unnecessary formalism where it is unhelpful. Furthermore, the proofs and the language get slightly less formal as we progress through the book, as more and more details are left out to avoid clutter. As a general rule, I use B instead of = to define an object rather than to simply show equality. I use this symbol rather more liberally than is usual for emphasis. I use it even when the context is “local,” that is, I may simply define a function 𝑓 (𝑥) B 𝑥 2 for a single exercise or example. Finally, I would like to acknowledge Jana Maříková, Glen Pugh, Paul Vojta, Frank Beatrous, Sönmez Şahutoğlu, Jim Brandt, Kenji Kozai, Arthur Busch, Anton Petrunin, Mark Meilstrup, Harold P. Boas, Atilla Yılmaz, Thomas Mahoney, Scott Armstrong, and Paul Sacks, Matthias Weber, Manuele Santoprete, Robert Niemeyer, Amanullah Nabavi, for teaching with the book and giving me lots of useful feedback. Frank Beatrous wrote the University of Pittsburgh version extensions, which served as inspiration for many more recent additions. I would also like to thank Dan Stoneham, Jeremy Sutter, Eliya Gwetta, Daniel Pimentel-Alarcón, Steve Hoerning, Yi Zhang, Nicole Caviris, Kristopher Lee, Baoyue Bi, Hannah Lund, Trevor Mannella, Mitchel Meyer, Gregory Beauregard, Chase Meadors, Andreas Giannopoulos, Nick Nelsen, Ru Wang, Trevor Fancher, Brandon Tague, Wang KP, Wai Yan Pong, Sam Merat, Judah Nouriyelian, Arnold Cross, Jesse Wallace, an anonymous reader or two, and in general all the students in my classes for suggestions and finding errors and typos.
0.2. ABOUT ANALYSIS
0.2
7
About analysis
Analysis is the branch of mathematics that deals with inequalities and limits. The present course deals with the most basic concepts in analysis. The goal of the course is to acquaint the reader with rigorous proofs in analysis and also to set a firm foundation for calculus of one variable (and several variables if volume II is also considered). Calculus has prepared you, the student, for using mathematics without telling you why what you learned is true. To use, or teach, mathematics effectively, you cannot simply know what is true, you must know why it is true. This course shows you why calculus is true. It is here to give you a good understanding of the concept of a limit, the derivative, and the integral. Let us use an analogy. An auto mechanic that has learned to change the oil, fix broken headlights, and charge the battery, but who does not understand how the engine works, will only be able to do those simple tasks. He will be unable to work independently to diagnose and fix problems. A high school teacher that does not understand the definition of the Riemann integral or the derivative may not be able to properly answer all the students’ questions. To this day I remember several nonsensical statements I heard from my calculus teacher in high school, who simply did not understand the concept of the limit, though he could “do” the problems in the textbook. We start with a discussion of the real number system, most importantly its completeness property, which is the basis for all that comes after. We then discuss the simplest form of a limit, the limit of a sequence. Afterwards, we study functions of one variable, continuity, and the derivative. Next, we define the Riemann integral and prove the fundamental theorem of calculus. We discuss sequences of functions and the interchange of limits. Finally, we give an introduction to metric spaces. Let us give the most important difference between analysis and algebra. In algebra, we prove equalities directly; we prove that an object, a number perhaps, is equal to another object. In analysis, we usually prove inequalities, and we prove those inequalities by estimating. To illustrate the point, consider the following statement. Let 𝑥 be a real number. If 𝑥 < 𝜖 is true for all real numbers 𝜖 > 0, then 𝑥 ≤ 0.
This statement is the general idea of what we do in analysis. Suppose we really wish to prove the equality 𝑥 = 0. In analysis, we prove two inequalities: 𝑥 ≤ 0 and 𝑥 ≥ 0. To prove the inequality 𝑥 ≤ 0, we prove 𝑥 < 𝜖 for all positive 𝜖. To prove the inequality 𝑥 ≥ 0, we prove 𝑥 > −𝜖 for all positive 𝜖.
The term real analysis is a little bit of a misnomer. I prefer to use simply analysis. The other type of analysis, complex analysis, really builds up on the present material, rather than being distinct. Furthermore, a more advanced course on real analysis would talk about complex numbers often. I suspect the nomenclature is historical baggage. Let us get on with the show. . .
8
INTRODUCTION
0.3
Basic set theory
Note: 1–3 lectures (some material can be skipped, covered lightly, or left as reading) Before we start talking about analysis, we need to fix some language. Modern* analysis uses the language of sets, and therefore that is where we start. We talk about sets in a rather informal way, using the so-called “naïve set theory.” Do not worry, that is what the majority of mathematicians use, and it is hard to get into trouble. The reader has hopefully seen the very basics of set theory and proof writing before, and this section should be a quick refresher.
0.3.1
Sets
Definition 0.3.1. A set is a collection of objects called elements or members. A set with no objects is called the empty set and is denoted by ∅ (or sometimes by {}).
Think of a set as a club with a certain membership. For example, the students who play chess are members of the chess club. The same student can be a member of many different clubs. However, do not take the analogy too far. A set is only defined by the members that form the set; two sets that have the same members are the same set. Most of the time we will consider sets of numbers. For example, the set 𝑆 B {0, 1, 2}
is the set containing the three elements 0, 1, and 2. By “ B ”, we mean we are defining what 𝑆 is, rather than just showing equality. We write 1∈𝑆
to denote that the number 1 belongs to the set 𝑆. That is, 1 is a member of 𝑆. At times we want to say that two elements are in a set 𝑆, so we write “1, 2 ∈ 𝑆” as a shorthand for “1 ∈ 𝑆 and 2 ∈ 𝑆.” Similarly, we write 7∉𝑆 to denote that the number 7 is not in 𝑆. That is, 7 is not a member of 𝑆. The elements of all sets under consideration come from some set we call the universe. For simplicity, we often consider the universe to be the set that contains only the elements we are interested in. The universe is generally understood from context and is not explicitly mentioned. In this course, our universe will most often be the set of real numbers. While the elements of a set are often numbers, other objects, such as other sets, can be elements of a set. A set may also contain some of the same elements as another set. For example, 𝑇 B {0, 2} contains the numbers 0 and 2. In this case all elements of 𝑇 also belong to 𝑆. We write 𝑇 ⊂ 𝑆. See Figure 1 for a diagram. *The
term “modern” refers to late 19th century up to the present.
9
0.3. BASIC SET THEORY
𝑆
0
1
𝑇
2
7
Figure 1: A diagram of the example sets 𝑆 and its subset 𝑇.
Definition 0.3.2. (i) A set 𝐴 is a subset of a set 𝐵 if 𝑥 ∈ 𝐴 implies 𝑥 ∈ 𝐵, and we write 𝐴 ⊂ 𝐵. That is, all members of 𝐴 are also members of 𝐵. At times we write 𝐵 ⊃ 𝐴 to mean the same thing. (ii) Two sets 𝐴 and 𝐵 are equal if 𝐴 ⊂ 𝐵 and 𝐵 ⊂ 𝐴. We write 𝐴 = 𝐵. That is, 𝐴 and 𝐵 contain exactly the same elements. If it is not true that 𝐴 and 𝐵 are equal, then we write 𝐴 ≠ 𝐵. (iii) A set 𝐴 is a proper subset of 𝐵 if 𝐴 ⊂ 𝐵 and 𝐴 ≠ 𝐵. We write 𝐴 ⊊ 𝐵. For the example 𝑆 and 𝑇 defined above, 𝑇 ⊂ 𝑆, but 𝑇 ≠ 𝑆. So 𝑇 is a proper subset of 𝑆. If 𝐴 = 𝐵, then 𝐴 and 𝐵 are simply two names for the same exact set. To define sets, one often uses the set building notation,
𝑥 ∈ 𝐴 : 𝑃(𝑥) .
This notation refers to a subset of the set 𝐴 containing all elements of 𝐴 that satisfy the property 𝑃(𝑥). Using 𝑆 = {0, 1, 2} as above, {𝑥 ∈ 𝑆 : 𝑥 ≠ 2} is the set {0, 1}. The notation is sometimes abbreviated as 𝑥 : 𝑃(𝑥) , that is, 𝐴 is not mentioned when understood from context. Furthermore, 𝑥 ∈ 𝐴 is sometimes replaced with a formula to make the notation easier to read. Example 0.3.3: The following are sets including the standard notations. (i) The set of natural numbers, ℕ B {1, 2, 3, . . .}.
(ii) The set of integers, ℤ B {0, −1, 1, −2, 2, . . .}.
(iii) The set of rational numbers, ℚ B
𝑚 𝑛
: 𝑚, 𝑛 ∈ ℤ and 𝑛 ≠ 0 .
(iv) The set of even natural numbers, {2𝑚 : 𝑚 ∈ ℕ }. (v) The set of real numbers, ℝ. Note that ℕ ⊂ ℤ ⊂ ℚ ⊂ ℝ.
We create new sets out of old ones by applying some natural operations.
10
INTRODUCTION
Definition 0.3.4. (i) A union of two sets 𝐴 and 𝐵 is defined as 𝐴 ∪ 𝐵 B {𝑥 : 𝑥 ∈ 𝐴 or 𝑥 ∈ 𝐵}. (ii) An intersection of two sets 𝐴 and 𝐵 is defined as 𝐴 ∩ 𝐵 B {𝑥 : 𝑥 ∈ 𝐴 and 𝑥 ∈ 𝐵}. (iii) A complement of 𝐵 relative to 𝐴 (or set-theoretic difference of 𝐴 and 𝐵) is defined as 𝐴 \ 𝐵 B {𝑥 : 𝑥 ∈ 𝐴 and 𝑥 ∉ 𝐵}. (iv) We say complement of 𝐵 and write 𝐵 𝑐 instead of 𝐴 \ 𝐵 if the set 𝐴 is either the entire universe or if it is the obvious set containing 𝐵, and is understood from context. (v) We say sets 𝐴 and 𝐵 are disjoint if 𝐴 ∩ 𝐵 = ∅.
The notation 𝐵 𝑐 may be a little vague at this point. If the set 𝐵 is a subset of the real numbers ℝ, then 𝐵 𝑐 means ℝ \ 𝐵. If 𝐵 is naturally a subset of the natural numbers, then 𝐵 𝑐 is ℕ \ 𝐵. If ambiguity can arise, we use the set difference notation 𝐴 \ 𝐵.
𝐴
𝐵
𝐴∪𝐵
𝐴
𝐴
𝐵
𝐴∩𝐵
𝐵
𝐵
𝐴\𝐵
𝐵𝑐
Figure 2: Venn diagrams of set operations, the result of the operation is shaded.
We illustrate the operations on the Venn diagrams in Figure 2. Let us now establish one of most basic theorems about sets and logic.
11
0.3. BASIC SET THEORY Theorem 0.3.5 (DeMorgan). Let 𝐴, 𝐵, 𝐶 be sets. Then (𝐵 ∪ 𝐶)𝑐 = 𝐵 𝑐 ∩ 𝐶 𝑐 ,
(𝐵 ∩ 𝐶)𝑐 = 𝐵 𝑐 ∪ 𝐶 𝑐 ,
or, more generally, 𝐴 \ (𝐵 ∪ 𝐶) = (𝐴 \ 𝐵) ∩ (𝐴 \ 𝐶),
𝐴 \ (𝐵 ∩ 𝐶) = (𝐴 \ 𝐵) ∪ (𝐴 \ 𝐶).
Proof. The first statement is proved by the second statement if we assume the set 𝐴 is our “universe.” Let us prove 𝐴 \ (𝐵 ∪ 𝐶) = (𝐴 \ 𝐵) ∩ (𝐴 \ 𝐶). Remember the definition of equality of sets. First, we must show that if 𝑥 ∈ 𝐴 \ (𝐵 ∪ 𝐶), then 𝑥 ∈ (𝐴 \ 𝐵) ∩ (𝐴 \ 𝐶). Second, we must also show that if 𝑥 ∈ (𝐴 \ 𝐵) ∩ (𝐴 \ 𝐶), then 𝑥 ∈ 𝐴 \ (𝐵 ∪ 𝐶). So let us assume 𝑥 ∈ 𝐴 \ (𝐵 ∪ 𝐶). Then 𝑥 is in 𝐴, but not in 𝐵 nor 𝐶. Hence 𝑥 is in 𝐴 and not in 𝐵, that is, 𝑥 ∈ 𝐴 \ 𝐵. Similarly 𝑥 ∈ 𝐴 \ 𝐶. Thus 𝑥 ∈ (𝐴 \ 𝐵) ∩ (𝐴 \ 𝐶). On the other hand suppose 𝑥 ∈ (𝐴 \ 𝐵) ∩ (𝐴 \ 𝐶). In particular, 𝑥 ∈ (𝐴 \ 𝐵), so 𝑥 ∈ 𝐴 and 𝑥 ∉ 𝐵. Also as 𝑥 ∈ (𝐴 \ 𝐶), then 𝑥 ∉ 𝐶. Hence 𝑥 ∈ 𝐴 \ (𝐵 ∪ 𝐶). The proof of the other equality is left as an exercise. □ The result above we called a Theorem, while most results we call a Proposition, and a few we call a Lemma (a result leading to another result) or Corollary (a quick consequence of the preceding result). Do not read too much into the naming. Some of it is traditional, some of it is stylistic choice. It is not necessarily true that a Theorem is always “more important” than a Proposition or a Lemma. We will also need to intersect or union several sets at once. If there are only finitely many, then we simply apply the union or intersection operation several times. However, suppose we have an infinite collection of sets (a set of sets) {𝐴1 , 𝐴2 , 𝐴3 , . . .}. We define ∞ Ø 𝑛=1 ∞ Ù 𝑛=1
𝐴𝑛 B {𝑥 : 𝑥 ∈ 𝐴𝑛 for some 𝑛 ∈ ℕ }, 𝐴𝑛 B {𝑥 : 𝑥 ∈ 𝐴𝑛 for all 𝑛 ∈ ℕ }.
We can also have sets indexed by two natural numbers. For example, we can have the set of sets {𝐴1,1 , 𝐴1,2 , 𝐴2,1 , 𝐴1,3 , 𝐴2,2 , 𝐴3,1 , . . .}. Then we write ∞ Ø ∞ Ø
𝑛=1 𝑚=1
𝐴𝑛,𝑚 =
∞ Ø ∞ Ø
! 𝐴𝑛,𝑚 .
𝑛=1 𝑚=1
And similarly with intersections. It is not hard to see that we can take the unions in any order. However, switching the order of unions and intersections is not generally permitted without proof. For instance, ∞ Ù ∞ Ø 𝑛=1 𝑚=1
{𝑘 ∈ ℕ : 𝑚𝑘 < 𝑛} =
∞ Ø 𝑛=1
∅ = ∅.
12
INTRODUCTION
However,
∞ Ø ∞ Ù 𝑚=1 𝑛=1
{𝑘 ∈ ℕ : 𝑚𝑘 < 𝑛} =
∞ Ù
ℕ = ℕ.
𝑚=1
Sometimes, the index set is not the natural numbers. In such a case we require a more general notation. Suppose 𝐼 is some set and for each 𝜆 ∈ 𝐼, there is a set 𝐴𝜆 . Then we define
Ø 𝜆∈𝐼
0.3.2
𝐴𝜆 B {𝑥 : 𝑥 ∈ 𝐴𝜆 for some 𝜆 ∈ 𝐼},
Ù 𝜆∈𝐼
𝐴𝜆 B {𝑥 : 𝑥 ∈ 𝐴𝜆 for all 𝜆 ∈ 𝐼}.
Induction
When a statement includes an arbitrary natural number, a common method of proof is the principle of induction. We start with the set of natural numbers ℕ = {1, 2, 3, . . .}, and we give them their natural ordering, that is, 1 < 2 < 3 < 4 < · · · . By 𝑆 ⊂ ℕ having a least element, we mean that there exists an 𝑥 ∈ 𝑆, such that for every 𝑦 ∈ 𝑆, we have 𝑥 ≤ 𝑦. The natural numbers ℕ ordered in the natural way possess the so-called well ordering property. We take this property as an axiom; we simply assume it is true. Well ordering property of ℕ . Every nonempty subset of ℕ has a least (smallest) element. The principle of induction is the following theorem, which is in a sense* equivalent to the well ordering property of the natural numbers. Theorem 0.3.6 (Principle of induction). Let 𝑃(𝑛) be a statement depending on a natural number 𝑛. Suppose that (i) (basis statement) 𝑃(1) is true. (ii) (induction step) If 𝑃(𝑛) is true, then 𝑃(𝑛 + 1) is true.
Then 𝑃(𝑛) is true for all 𝑛 ∈ ℕ .
Proof. Let 𝑆 be the set of natural numbers 𝑛 for which 𝑃(𝑛) is not true. Suppose for contradiction that 𝑆 is nonempty. Then 𝑆 has a least element by the well ordering property. Call 𝑚 ∈ 𝑆 the least element of 𝑆. We know 1 ∉ 𝑆 by hypothesis. So 𝑚 > 1, and 𝑚 − 1 is a natural number as well. Since 𝑚 is the least element of 𝑆, we know that 𝑃(𝑚 − 1) is true. But the induction step says that 𝑃(𝑚 − 1 + 1) = 𝑃(𝑚) is true, contradicting the statement that 𝑚 ∈ 𝑆. Therefore, 𝑆 is empty and 𝑃(𝑛) is true for all 𝑛 ∈ ℕ . □ Sometimes it is convenient to start at a different number than 1, all that changes is the labeling. The assumption that 𝑃(𝑛) is true in “if 𝑃(𝑛) is true, then 𝑃(𝑛 + 1) is true” is usually called the induction hypothesis. *To
be completely rigorous, this equivalence is only true if we also assume as an axiom that 𝑛 − 1 exists for all natural numbers bigger than 1, which we do. In this book, we are assuming all the usual arithmetic holds.
13
0.3. BASIC SET THEORY Example 0.3.7: Let us prove that for all 𝑛 ∈ ℕ , 2𝑛−1 ≤ 𝑛!
(recall 𝑛! = 1 · 2 · 3 · · · 𝑛).
We let 𝑃(𝑛) be the statement that 2𝑛−1 ≤ 𝑛! is true. Plug in 𝑛 = 1 to see that 𝑃(1) is true. Suppose 𝑃(𝑛) is true. That is, suppose 2𝑛−1 ≤ 𝑛! holds. Multiply both sides by 2 to obtain 2𝑛 ≤ 2(𝑛!). As 2 ≤ (𝑛 + 1) when 𝑛 ∈ ℕ , we have 2(𝑛!) ≤ (𝑛 + 1)(𝑛!) = (𝑛 + 1)!. That is, 2𝑛 ≤ 2(𝑛!) ≤ (𝑛 + 1)!, and hence 𝑃(𝑛 + 1) is true. By the principle of induction, 𝑃(𝑛) is true for all 𝑛 ∈ ℕ . In other words, 2𝑛−1 ≤ 𝑛! is true for all 𝑛 ∈ ℕ . Example 0.3.8: We claim that for all 𝑐 ≠ 1,
1 + 𝑐 + 𝑐2 + · · · + 𝑐𝑛 =
1 − 𝑐 𝑛+1 . 1−𝑐
Proof: It is easy to check that the equation holds with 𝑛 = 1. Suppose it is true for 𝑛. Then 1 + 𝑐 + 𝑐 2 + · · · + 𝑐 𝑛 + 𝑐 𝑛+1 = (1 + 𝑐 + 𝑐 2 + · · · + 𝑐 𝑛 ) + 𝑐 𝑛+1 1 − 𝑐 𝑛+1 + 𝑐 𝑛+1 1−𝑐 1 − 𝑐 𝑛+1 + (1 − 𝑐)𝑐 𝑛+1 = 1−𝑐 𝑛+2 1−𝑐 . = 1−𝑐 =
Sometimes, it is easier to use in the inductive step that 𝑃(𝑘) is true for all 𝑘 = 1, 2, . . . , 𝑛, not just for 𝑘 = 𝑛. This principle is called strong induction and is equivalent to the normal induction above. The proof of that equivalence is left as an exercise. Theorem 0.3.9 (Principle of strong induction). Let 𝑃(𝑛) be a statement depending on a natural number 𝑛. Suppose that (i) (basis statement) 𝑃(1) is true. (ii) (induction step) If 𝑃(𝑘) is true for all 𝑘 = 1, 2, . . . , 𝑛, then 𝑃(𝑛 + 1) is true.
Then 𝑃(𝑛) is true for all 𝑛 ∈ ℕ .
0.3.3
Functions
Informally, a set-theoretic function 𝑓 taking a set 𝐴 to a set 𝐵 is a mapping that to each 𝑥 ∈ 𝐴 assigns a unique 𝑦 ∈ 𝐵. We write 𝑓 : 𝐴 → 𝐵. An example function 𝑓 : 𝑆 → 𝑇 taking
14
INTRODUCTION
𝑆 B {0, 1, 2} to 𝑇 B {0, 2} can be defined by assigning 𝑓 (0) B 2, 𝑓 (1) B 2, and 𝑓 (2) B 0. That is, a function 𝑓 : 𝐴 → 𝐵 is a black box, into which we stick an element of 𝐴 and the function spits out an element of 𝐵. Sometimes 𝑓 is called a mapping or a map, and we say 𝑓 maps 𝐴 to 𝐵. Often, functions are defined by some sort of formula; however, you should really think of a function as just a very big table of values. The subtle issue here is that a single function can have several formulas, all giving the same function. Also, for many functions, there is no formula that expresses its values. To define a function rigorously, first let us define the Cartesian product. Definition 0.3.10. Let 𝐴 and 𝐵 be sets. The Cartesian product is the set of tuples defined as 𝐴 × 𝐵 B (𝑥, 𝑦) : 𝑥 ∈ 𝐴, 𝑦 ∈ 𝐵 .
For instance, {𝑎, 𝑏} × {𝑐, 𝑑} = (𝑎, 𝑐), (𝑎, 𝑑), (𝑏, 𝑐), (𝑏, 𝑑) . A more complicated example is the set [0, 1] × [0, 1]: a subset of the plane bounded by a square with vertices (0, 0), (0, 1), (1, 0), and (1, 1). When 𝐴 and 𝐵 are the same set we sometimes use a superscript 2 to denote such a product. For example, [0, 1]2 = [0, 1] × [0, 1] or ℝ2 = ℝ × ℝ (the Cartesian plane).
Definition 0.3.11. A function 𝑓 : 𝐴 → 𝐵 is a subset 𝑓 of 𝐴 × 𝐵 such that for each 𝑥 ∈ 𝐴, there exists a unique 𝑦 ∈ 𝐵 for which (𝑥, 𝑦) ∈ 𝑓 . We write 𝑓 (𝑥) = 𝑦. Sometimes the set 𝑓 is called the graph of the function rather than the function itself. The set 𝐴 is called the domain of 𝑓 (and sometimes confusingly denoted 𝐷( 𝑓 )). The set 𝑅( 𝑓 ) B {𝑦 ∈ 𝐵 : there exists an 𝑥 ∈ 𝐴 such that 𝑓 (𝑥) = 𝑦} is called the range of 𝑓 . The set 𝐵 is called the codomain of 𝑓 . It is possible that the range 𝑅( 𝑓 ) is a proper subset of the codomain 𝐵, while the domain of 𝑓 is always equal to 𝐴. We generally assume that the domain of 𝑓 is nonempty. Example 0.3.12: From calculus, you are most familiar with functions taking real numbers to real numbers. However, you saw some other types of functions as well. The derivative is a function mapping the set of differentiable functions to the set of all functions. Another example is the Laplace transform, which also takes functions to functions. Yet another example is the function that takes a continuous function 𝑔 defined on the interval [0, 1]
and returns the number
∫1 0
𝑔(𝑥) 𝑑𝑥.
Definition 0.3.13. Consider a function 𝑓 : 𝐴 → 𝐵. Define the image (or direct image) of a subset 𝐶 ⊂ 𝐴 as 𝑓 (𝐶) B 𝑓 (𝑥) ∈ 𝐵 : 𝑥 ∈ 𝐶 .
Define the inverse image of a subset 𝐷 ⊂ 𝐵 as
𝑓 −1 (𝐷) B 𝑥 ∈ 𝐴 : 𝑓 (𝑥) ∈ 𝐷 .
In particular, 𝑅( 𝑓 ) = 𝑓 (𝐴), the range is the direct image of the domain 𝐴.
15
0.3. BASIC SET THEORY
1
𝑓
2
𝑎
𝑓 {1, 2, 3, 4} = {𝑏, 𝑐, 𝑑}
𝑏
𝑓 {1} = {𝑏}
𝑓 {1, 2, 4} = {𝑏, 𝑑}
3
𝑐
𝑓 −1 {𝑎, 𝑏, 𝑐} = {1, 3, 4}
4
𝑑
𝑓 −1 {𝑏} = {1, 4}
𝑓 −1 {𝑎} = ∅
Figure 3: Example of direct and inverse images for the function 𝑓 : {1, 2, 3, 4} → {𝑎, 𝑏, 𝑐, 𝑑} defined by 𝑓 (1) B 𝑏, 𝑓 (2) B 𝑑, 𝑓 (3) B 𝑐, 𝑓 (4) B 𝑏.
Example 0.3.14: Define the function 𝑓 : ℝ → ℝ by 𝑓 (𝑥) B sin(𝜋𝑥). Then 𝑓 [0, 1/2] = [0, 1], 𝑓 −1 {0} = ℤ, etc.
Proposition 0.3.15. Consider 𝑓 : 𝐴 → 𝐵. Let 𝐶, 𝐷 be subsets of 𝐵. Then 𝑓 −1 (𝐶 ∪ 𝐷) = 𝑓 −1 (𝐶) ∪ 𝑓 −1 (𝐷),
𝑓 −1 (𝐶 ∩ 𝐷) = 𝑓 −1 (𝐶) ∩ 𝑓 −1 (𝐷),
𝑐
𝑓 −1 (𝐶 𝑐 ) = 𝑓 −1 (𝐶) .
Read the last line of the proposition as 𝑓 −1 (𝐵 \ 𝐶) = 𝐴 \ 𝑓 −1 (𝐶). Proof. We start with the union. If 𝑥 ∈ 𝑓 −1 (𝐶 ∪ 𝐷), then 𝑥 is taken to 𝐶 or 𝐷, that is, 𝑓 (𝑥) ∈ 𝐶 or 𝑓 (𝑥) ∈ 𝐷. Thus 𝑓 −1 (𝐶 ∪ 𝐷) ⊂ 𝑓 −1 (𝐶) ∪ 𝑓 −1 (𝐷). Conversely if 𝑥 ∈ 𝑓 −1 (𝐶), then 𝑥 ∈ 𝑓 −1 (𝐶 ∪ 𝐷). Similarly for 𝑥 ∈ 𝑓 −1 (𝐷). Hence 𝑓 −1 (𝐶 ∪ 𝐷) ⊃ 𝑓 −1 (𝐶) ∪ 𝑓 −1 (𝐷), and we have equality. The rest of the proof is left as an exercise. □ For direct images, the best we can do is the following weaker result. Proposition 0.3.16. Consider 𝑓 : 𝐴 → 𝐵. Let 𝐶, 𝐷 be subsets of 𝐴. Then 𝑓 (𝐶 ∪ 𝐷) = 𝑓 (𝐶) ∪ 𝑓 (𝐷),
𝑓 (𝐶 ∩ 𝐷) ⊂ 𝑓 (𝐶) ∩ 𝑓 (𝐷).
The proof is left as an exercise. Definition 0.3.17. Let 𝑓 : 𝐴 → 𝐵 be a function. The function 𝑓 is said to be injective or one-to-one if 𝑓 (𝑥1 ) = 𝑓 (𝑥 2 ) implies 𝑥1 = 𝑥 2 . In other words, 𝑓 is injective if for all 𝑦 ∈ 𝐵, the set 𝑓 −1 ({𝑦}) is empty or consists of a single element. We call such an 𝑓 an injection. If 𝑓 (𝐴) = 𝐵, then we say 𝑓 is surjective or onto. In other words, 𝑓 is surjective if the range and the codomain of 𝑓 are equal. We call such an 𝑓 a surjection. If 𝑓 is both surjective and injective, then we say 𝑓 is bijective or that 𝑓 is a bijection.
16
INTRODUCTION
When 𝑓 : 𝐴 → 𝐵 is a bijection, then the inverse image of a single element, 𝑓 −1 ({𝑦}), is always a unique element of 𝐴. We then consider 𝑓 −1 as a function 𝑓 −1 : 𝐵 → 𝐴 and we write simply 𝑓 −1 (𝑦). In this case, we call 𝑓 −1 the inverse function of 𝑓 . For instance, for the √ bijection 𝑓 : ℝ → ℝ defined by 𝑓 (𝑥) B 𝑥 3 , we have 𝑓 −1 (𝑥) = 3 𝑥. Definition 0.3.18. Consider 𝑓 : 𝐴 → 𝐵 and 𝑔 : 𝐵 → 𝐶. The composition of the functions 𝑓 and 𝑔 is the function 𝑔 ◦ 𝑓 : 𝐴 → 𝐶 defined as (𝑔 ◦ 𝑓 )(𝑥) B 𝑔 𝑓 (𝑥) .
For example, if 𝑓 : ℝ → ℝ is 𝑓 (𝑥) B 𝑥 3 and 𝑔 : ℝ → ℝ is 𝑔(𝑦) = sin(𝑦), then (𝑔 ◦ 𝑓 )(𝑥) = sin(𝑥 3 ). It is left to the reader as an easy exericise to show that composition of one-to-one maps is one-to-one and composition of onto maps is onto. Therefore, composition of bijections is a bijection.
0.3.4
Relations and equivalence classes
We often compare two objects in some way. We say 1 < 2 for natural numbers, or 1/2 = 2/4 for rational numbers, or {𝑎, 𝑐} ⊂ {𝑎, 𝑏, 𝑐} for sets. The ‘ Lichtenstein. A typical ordered set that you have used since primary school is the dictionary. It is the ordered set of words where the order is the so-called lexicographic ordering. Such ordered sets often appear, for example, in computer science. In this book we will mostly be interested in ordered sets of numbers. Definition 1.1.2. Let 𝐸 ⊂ 𝑆, where 𝑆 is an ordered set.
(i) If there exists a 𝑏 ∈ 𝑆 such that 𝑥 ≤ 𝑏 for all 𝑥 ∈ 𝐸, then we say 𝐸 is bounded above and 𝑏 is an upper bound of 𝐸.
(ii) If there exists a 𝑏 ∈ 𝑆 such that 𝑥 ≥ 𝑏 for all 𝑥 ∈ 𝐸, then we say 𝐸 is bounded below and 𝑏 is a lower bound of 𝐸.
24
CHAPTER 1. REAL NUMBERS
(iii) If there exists an upper bound 𝑏 0 of 𝐸 such that 𝑏 0 ≤ 𝑏 for all upper bounds 𝑏 of 𝐸, then 𝑏 0 is called the least upper bound or the supremum of 𝐸. See Figure 1.1. We write sup 𝐸 B 𝑏 0 . (iv) Similarly, if there exists a lower bound 𝑏 0 of 𝐸 such that 𝑏 0 ≥ 𝑏 for all lower bounds 𝑏 of 𝐸, then 𝑏 0 is called the greatest lower bound or the infimum of 𝐸. We write inf 𝐸 B 𝑏 0 . When a set 𝐸 is both bounded above and bounded below, we say simply that 𝐸 is bounded. The notation sup 𝐸 and inf 𝐸 is justified as the supremum (or infimum) is unique (if it exists): If 𝑏 and 𝑏 ′ are suprema of 𝐸, then 𝑏 ≤ 𝑏 ′ and 𝑏 ′ ≤ 𝑏, because both 𝑏 and 𝑏 ′ are the least upper bounds, so 𝑏 = 𝑏 ′.
𝐸
upper bounds of 𝐸
smaller
bigger least upper bound of 𝐸
Figure 1.1: A set 𝐸 bounded above and the least upper bound of 𝐸.
A simple example: Let 𝑆 B {𝑎, 𝑏, 𝑐, 𝑑, 𝑒} be ordered as 𝑎 < 𝑏 < 𝑐 < 𝑑 < 𝑒, and let 𝐸 B {𝑎, 𝑐}. Then 𝑐, 𝑑, and 𝑒 are upper bounds of 𝐸, and 𝑐 is the least upper bound or supremum of 𝐸. A supremum or infimum for 𝐸 (even if it exists) need not be in 𝐸. The set 𝐸 B {𝑥 ∈ ℚ : 𝑥 < 1} has a least upper bound of 1, but 1 is not in the set 𝐸 itself. The set 𝐺 B {𝑥 ∈ ℚ : 𝑥 ≤ 1} also has an upper bound of 1, and in this case 1 ∈ 𝐺. The set 𝑃 B {𝑥 ∈ ℚ : 𝑥 ≥ 0} has no upper bound (why?) and therefore it cannot have a least upper bound. The set 𝑃 does have a greatest lower bound: 0. Definition 1.1.3. An ordered set 𝑆 has the least-upper-bound property if every nonempty subset 𝐸 ⊂ 𝑆 that is bounded above has a least upper bound, that is sup 𝐸 exists in 𝑆. The least-upper-bound property is sometimes called the completeness property or the Dedekind completeness property* . The real numbers have this property. Example 1.1.4: The set ℚ of rational numbers does not have the least-upper-bound property. The subset {𝑥 ∈ ℚ : 𝑥 2 < 2} does not have a supremum in ℚ. We will see later * Named
after the German mathematician Julius Wilhelm Richard Dedekind (1831–1916).
1.1. BASIC PROPERTIES
25
√ (Example 1.2.3) that the supremum is 2, which is not rational* . Suppose 𝑥 ∈ ℚ such that 𝑥 2 = 2. Write 𝑥 = 𝑚/𝑛 in lowest terms. So (𝑚/𝑛 )2 = 2 or 𝑚 2 = 2𝑛 2 . Hence, 𝑚 2 is divisible by 2, and so 𝑚 is divisible by 2. Write 𝑚 = 2𝑘 and so (2𝑘)2 = 2𝑛 2 . Divide by 2 and note that 2𝑘 2 = 𝑛 2 , and hence 𝑛 is divisible by 2. But that is a contradiction as 𝑚/𝑛 is in lowest terms. That ℚ does not have the least-upper-bound property is one of the most important reasons why we work with ℝ in analysis. The set ℚ is just fine for algebraists. But us analysts require the least-upper-bound property to do any work. We also require our real numbers to have many algebraic properties. In particular, we require that they are a field. Definition 1.1.5. A set 𝐹 is called a field if it has two operations defined on it, addition 𝑥 + 𝑦 and multiplication 𝑥𝑦, and if it satisfies the following axioms: (A1) If 𝑥 ∈ 𝐹 and 𝑦 ∈ 𝐹, then 𝑥 + 𝑦 ∈ 𝐹.
(A2) (commutativity of addition) 𝑥 + 𝑦 = 𝑦 + 𝑥 for all 𝑥, 𝑦 ∈ 𝐹.
(A3) (associativity of addition) (𝑥 + 𝑦) + 𝑧 = 𝑥 + (𝑦 + 𝑧) for all 𝑥, 𝑦, 𝑧 ∈ 𝐹. (A4) There exists an element 0 ∈ 𝐹 such that 0 + 𝑥 = 𝑥 for all 𝑥 ∈ 𝐹.
(A5) For every element 𝑥 ∈ 𝐹, there exists an element −𝑥 ∈ 𝐹 such that 𝑥 + (−𝑥) = 0.
(M1) If 𝑥 ∈ 𝐹 and 𝑦 ∈ 𝐹, then 𝑥𝑦 ∈ 𝐹.
(M2) (commutativity of multiplication) 𝑥𝑦 = 𝑦𝑥 for all 𝑥, 𝑦 ∈ 𝐹.
(M3) (associativity of multiplication) (𝑥𝑦)𝑧 = 𝑥(𝑦𝑧) for all 𝑥, 𝑦, 𝑧 ∈ 𝐹.
(M4) There exists an element 1 ∈ 𝐹 (and 1 ≠ 0) such that 1𝑥 = 𝑥 for all 𝑥 ∈ 𝐹.
(M5) For every 𝑥 ∈ 𝐹 such that 𝑥 ≠ 0 there exists an element 1/𝑥 ∈ 𝐹 such that 𝑥(1/𝑥 ) = 1. (D) (distributive law) 𝑥(𝑦 + 𝑧) = 𝑥𝑦 + 𝑥𝑧 for all 𝑥, 𝑦, 𝑧 ∈ 𝐹.
Example 1.1.6: The set ℚ of rational numbers is a field. On the other hand ℤ is not a field, as it does not contain multiplicative inverses. For example, there is no 𝑥 ∈ ℤ such that 2𝑥 = 1, so (M5) is not satisfied. You can check that (M5) is the only property that fails† . We will assume the basic facts about fields that are easily proved from the axioms. For example, 0𝑥 = 0 is easily proved by noting that 𝑥𝑥 = (0 + 𝑥)𝑥 = 0𝑥 + 𝑥𝑥, using (A4), (D), and (M2). Then using (A5) on 𝑥𝑥, along with (A2), (A3), and (A4), we obtain 0 = 0𝑥. Definition 1.1.7. A field 𝐹 is said to be an ordered field if 𝐹 is also an ordered set such that (i) For 𝑥, 𝑦, 𝑧 ∈ 𝐹, 𝑥 < 𝑦 implies 𝑥 + 𝑧 < 𝑦 + 𝑧.
(ii) For 𝑥, 𝑦 ∈ 𝐹, 𝑥 > 0 and 𝑦 > 0 implies 𝑥𝑦 > 0.
If 𝑥 > 0, we say 𝑥 is positive. If 𝑥 < 0, we say 𝑥 is negative. We also say 𝑥 is nonnegative if 𝑥 ≥ 0, and 𝑥 is nonpositive if 𝑥 ≤ 0. √𝑘 is true for all other roots of 2, and interestingly, the fact that 2 is never rational for 𝑘 > 1 implies no piano can ever be perfectly tuned in all keys. See for example: https://youtu.be/1Hqm0dYKUx4. †An algebraist would say that ℤ is an ordered ring, or perhaps a commutative ordered ring. *This
26
CHAPTER 1. REAL NUMBERS
The rational numbers ℚ with the standard ordering is an ordered field. We leave the details to an interested reader. Proposition 1.1.8. Let 𝐹 be an ordered field and 𝑥, 𝑦, 𝑧, 𝑤 ∈ 𝐹. Then (i) If 𝑥 > 0, then −𝑥 < 0 (and vice versa).
(ii) If 𝑥 > 0 and 𝑦 < 𝑧, then 𝑥𝑦 < 𝑥𝑧. (iii) If 𝑥 < 0 and 𝑦 < 𝑧, then 𝑥𝑦 > 𝑥𝑧. (iv) If 𝑥 ≠ 0, then 𝑥 2 > 0. (v) If 0 < 𝑥 < 𝑦, then 0 < 1/𝑦 < 1/𝑥 .
(vi) If 0 < 𝑥 < 𝑦, then 𝑥 2 < 𝑦 2 .
(vii) If 𝑥 ≤ 𝑦 and 𝑧 ≤ 𝑤, then 𝑥 + 𝑧 ≤ 𝑦 + 𝑤.
Note that (iv) implies in particular that 1 > 0.
Proof. Let us prove (i). The inequality 𝑥 > 0 implies by item (i) of definition of ordered fields that 𝑥 + (−𝑥) > 0 + (−𝑥). Apply the algebraic properties of fields to obtain 0 > −𝑥. The “vice versa” follows by similar calculation. For (ii), notice that 𝑦 < 𝑧 implies 0 < 𝑧 − 𝑦 by item (i) of the definition of ordered fields. Apply item (ii) of the definition of ordered fields to obtain 0 < 𝑥(𝑧 − 𝑦). By algebraic properties, 0 < 𝑥𝑧 − 𝑥𝑦. Again by item (i) of the definition, 𝑥𝑦 < 𝑥𝑧. Part (iii) is left as an exercise. To prove part (iv), first suppose 𝑥 > 0. By item (ii) of the definition of ordered fields, 2 𝑥 > 0 (use 𝑦 = 𝑥). If 𝑥 < 0, we use part (iii) of this proposition, where we plug in 𝑦 = 𝑥 and 𝑧 = 0. To prove part (v), notice that 1/𝑥 cannot be equal to zero (why?). Suppose 1/𝑥 < 0, then −1/𝑥 > 0 by (i). Apply part (ii) of the definition (as 𝑥 > 0) to obtain 𝑥(−1/𝑥 ) > 0 or −1 > 0, which contradicts 1 > 0 by using part (i) again. Hence 1/𝑥 > 0. Similarly, 1/𝑦 > 0. Thus (1/𝑥 )(1/𝑦 ) > 0 by definition of ordered field, and by part (ii), (1/𝑥 )(1/𝑦 )𝑥 < (1/𝑥 )(1/𝑦 )𝑦.
By algebraic properties, 1/𝑦 < 1/𝑥 . Parts (vi) and (vii) are left as exercises.
□
The product of two positive numbers (elements of an ordered field) is positive. However, it is not true that if the product is positive, then each of the two factors must be positive. For instance, (−1)(−1) = 1 > 0. Proposition 1.1.9. Let 𝑥, 𝑦 ∈ 𝐹, where 𝐹 is an ordered field. If 𝑥𝑦 > 0, then either both 𝑥 and 𝑦 are positive, or both are negative.
Proof. We show the contrapositive: If either one of 𝑥 or 𝑦 is zero, or if 𝑥 and 𝑦 have opposite signs, then 𝑥𝑦 is not positive. If either 𝑥 or 𝑦 is zero, then 𝑥𝑦 is zero and hence not positive. Hence assume that 𝑥 and 𝑦 are nonzero and have opposite signs. Without loss of generality suppose 𝑥 > 0 and 𝑦 < 0. Multiply 𝑦 < 0 by 𝑥 to get 𝑥𝑦 < 0𝑥 = 0. □
27
1.1. BASIC PROPERTIES
Example 1.1.10: The reader may also know about the complex numbers, usually denoted by ℂ. That is, ℂ is the set of numbers of the form 𝑥 + 𝑖𝑦, where 𝑥 and 𝑦 are real numbers, and 𝑖 is the imaginary number, a number such that 𝑖 2 = −1. The reader may remember from algebra that ℂ is also a field; however, it is not an ordered field. While one can make ℂ into an ordered set in some way, it is not possible to put an order on ℂ that would make it an ordered field: In every ordered field, −1 < 0 and 𝑥 2 > 0 for all nonzero 𝑥, but in ℂ, 𝑖 2 = −1. Finally, an ordered field that has the least-upper-bound property has the corresponding property for greatest lower bounds.
Proposition 1.1.11. Let 𝐹 be an ordered field with the least-upper-bound property. Let 𝐴 ⊂ 𝐹 be a nonempty set that is bounded below. Then inf 𝐴 exists. Proof. Let 𝐵 B {−𝑥 : 𝑥 ∈ 𝐴}. Let 𝑏 ∈ 𝐹 be a lower bound for 𝐴: If 𝑥 ∈ 𝐴, then 𝑥 ≥ 𝑏. In other words, −𝑥 ≤ −𝑏. So −𝑏 is an upper bound for 𝐵. Since 𝐹 has the least-upper-bound property, 𝑐 B sup 𝐵 exists, and 𝑐 ≤ −𝑏. As 𝑦 ≤ 𝑐 for all 𝑦 ∈ 𝐵, then −𝑐 ≤ 𝑥 for all 𝑥 ∈ 𝐴. So −𝑐 is a lower bound for 𝐴. As −𝑐 ≥ 𝑏, −𝑐 is the greatest lower bound of 𝐴. □
1.1.1
Exercises
Exercise 1.1.1: Prove part (iii) of Proposition 1.1.8. That is, let 𝐹 be an ordered field and 𝑥, 𝑦, 𝑧 ∈ 𝐹. Prove If 𝑥 < 0 and 𝑦 < 𝑧, then 𝑥 𝑦 > 𝑥𝑧. Exercise 1.1.2: Let 𝑆 be an ordered set. Let 𝐴 ⊂ 𝑆 be a nonempty finite subset. Then 𝐴 is bounded. Furthermore, inf 𝐴 exists and is in 𝐴 and sup 𝐴 exists and is in 𝐴. Hint: Use induction. Exercise 1.1.3: Prove part (vi) of Proposition 1.1.8. That is, let 𝑥, 𝑦 ∈ 𝐹, where 𝐹 is an ordered field, such that 0 < 𝑥 < 𝑦. Show that 𝑥 2 < 𝑦 2 . Exercise 1.1.4: Let 𝑆 be an ordered set. Let 𝐵 ⊂ 𝑆 be bounded (above and below). Let 𝐴 ⊂ 𝐵 be a nonempty subset. Suppose all the infs and sups exist. Show that inf 𝐵 ≤ inf 𝐴 ≤ sup 𝐴 ≤ sup 𝐵. Exercise 1.1.5: Let 𝑆 be an ordered set. Let 𝐴 ⊂ 𝑆 and suppose 𝑏 is an upper bound for 𝐴. Suppose 𝑏 ∈ 𝐴. Show that 𝑏 = sup 𝐴. Exercise 1.1.6: Let 𝑆 be an ordered set. Let 𝐴 ⊂ 𝑆 be nonempty and bounded above. Suppose sup 𝐴 exists and sup 𝐴 ∉ 𝐴. Show that 𝐴 contains a countably infinite subset. Exercise 1.1.7: Find a (nonstandard) ordering of the set of natural numbers ℕ such that there exists a nonempty proper subset 𝐴 ⊊ ℕ and such that sup 𝐴 exists in ℕ , but sup 𝐴 ∉ 𝐴. To keep things straight it might be a good idea to use a different notation for the nonstandard ordering such as 𝑛 ≺ 𝑚.
Exercise 1.1.8: Let 𝐹 B {0, 1, 2}.
a) Prove that there is exactly one way to define addition and multiplication so that 𝐹 is a field if 0 and 1 have their usual meaning of (A4) and (M4).
b) Show that 𝐹 cannot be an ordered field.
28
CHAPTER 1. REAL NUMBERS
Exercise 1.1.9: Let 𝑆 be an ordered set and 𝐴 is a nonempty subset such that sup 𝐴 exists. Suppose there is a 𝐵 ⊂ 𝐴 such that whenever 𝑥 ∈ 𝐴 there is a 𝑦 ∈ 𝐵 such that 𝑥 ≤ 𝑦. Show that sup 𝐵 exists and sup 𝐵 = sup 𝐴. Exercise 1.1.10: Let 𝐷 be the ordered set of all possible words (not just English words, all strings of letters of arbitrary length) using the Latin alphabet using only lower case letters. The order is the lexicographic order as in a dictionary (e.g. aa < aaa < dog < door). Let 𝐴 be the subset of 𝐷 containing the words whose first letter is ‘a’ (e.g. a ∈ 𝐴, abcd ∈ 𝐴). Show that 𝐴 has a supremum and find what it is. Exercise 1.1.11: Let 𝐹 be an ordered field and 𝑥, 𝑦, 𝑧, 𝑤 ∈ 𝐹.
a) Prove part (vii) of Proposition 1.1.8. That is, if 𝑥 ≤ 𝑦 and 𝑧 ≤ 𝑤, then 𝑥 + 𝑧 ≤ 𝑦 + 𝑤.
b) Prove that if 𝑥 < 𝑦 and 𝑧 ≤ 𝑤, then 𝑥 + 𝑧 < 𝑦 + 𝑤.
Exercise 1.1.12: Prove that any ordered field must contain a countably infinite set. Exercise 1.1.13: Let ℕ∞ B ℕ ∪ {∞}, where elements of ℕ are ordered in the usual way amongst themselves, and 𝑘 < ∞ for every 𝑘 ∈ ℕ . Show ℕ∞ is an ordered set and that every subset 𝐸 ⊂ ℕ∞ has a supremum in ℕ∞ (make sure to also handle the case of an empty set). Exercise 1.1.14: Let 𝑆 B {𝑎 𝑘 : 𝑘 ∈ ℕ } ∪ {𝑏 𝑘 : 𝑘 ∈ ℕ }, ordered such that 𝑎 𝑘 < 𝑏 𝑗 for every 𝑘 and 𝑗, 𝑎 𝑘 < 𝑎 𝑚 whenever 𝑘 < 𝑚, and 𝑏 𝑘 > 𝑏 𝑚 whenever 𝑘 < 𝑚. a) Show that 𝑆 is an ordered set. b) Show that every subset of 𝑆 is bounded (both above and below). c) Find a bounded subset of 𝑆 that has no least upper bound.
1.2. THE SET OF REAL NUMBERS
1.2
29
The set of real numbers
Note: 2 lectures, the extended real numbers are optional
1.2.1
The set of real numbers
We finally get to the real number system. To simplify matters, instead of constructing the real number set from the rational numbers, we simply state their existence as a theorem without proof. Notice that ℚ is an ordered field. Theorem 1.2.1. There exists a unique* ordered field ℝ with the least-upper-bound property such that ℚ ⊂ ℝ.
Note that also ℕ ⊂ ℚ. We saw that 1 > 0. By induction (exercise) we can prove that 𝑛 > 0 for all 𝑛 ∈ ℕ . Similarly, we verify simple statements about rational numbers. For example, we proved that if 𝑛 > 0, then 1/𝑛 > 0. Then 𝑚 < 𝑘 implies 𝑚/𝑛 < 𝑘/𝑛 . Analysis consists of proving inequalities, and the following proposition, or one of its many variations, is how an analyst proves a nonstrict inequality. Proposition 1.2.2. If 𝑥 ∈ ℝ is such that 𝑥 ≤ 𝜖 for all 𝜖 ∈ ℝ where 𝜖 > 0, then 𝑥 ≤ 0. Proof. If 𝑥 > 0, then 0 < 𝑥/2 < 𝑥 (why?). Take 𝜖 = 𝑥/2 to get a contradiction. Thus 𝑥 ≤ 0. □
For nonnegative 𝑥, an equality results: If 𝑥 ≥ 0 is such that 𝑥 ≤ 𝜖 for all 𝜖 > 0, then 𝑥 = 0. A common version uses the absolute value (see §1.3): If |𝑥| ≤ 𝜖 for all 𝜖 > 0, then 𝑥 = 0. To prove 𝑥 ≥ 0, an analyst might prove that 𝑥 ≥ −𝜖 for all 𝜖 > 0. From now on, when we say 𝑥 ≥ 0 or 𝜖 > 0, we automatically mean that 𝑥 ∈ ℝ and 𝜖 ∈ ℝ. The idea behind the proposition above is that any time we have two real numbers 𝑎 < 𝑏, then there is another real number 𝑐 such that 𝑎 < 𝑐 < 𝑏. Infinitely many such 𝑐 exist. One of them is, for example, 𝑐 = 𝑎+𝑏 2 (why?). We will use this fact in the next example. The most useful property of ℝ for analysts is not just that it is an ordered field, but that it has the least-upper-bound property. Essentially, we want ℚ, but we also want to take suprema (and infima) willy-nilly. So what we do is take ℚ and throw in enough numbers to obtain ℝ. We mentioned already that ℝ contains elements that are not in ℚ because of the least-upper-bound property. Let us prove it. We saw there is no rational √ square root of 2 two. The set {𝑥 ∈ ℚ : 𝑥 < 2} implies the existence of the real number 2, although this fact requires a bit of work. See also Exercise 1.2.14. √ Example 1.2.3: Claim: There exists a unique positive 𝑟 ∈ ℝ such that 𝑟 2 = 2. We denote 𝑟 by 2. * Uniqueness
is up to isomorphism, but we wish to avoid excessive use of algebra. For us, it is simply enough to assume that a set of real numbers exists. See Rudin [R2] for the construction and more details.
30
CHAPTER 1. REAL NUMBERS
Proof. Take the set 𝐴 B {𝑥 ∈ ℝ : 𝑥 2 < 2}. We first show that 𝐴 is bounded above and nonempty. The equation 𝑥 ≥ 2 implies 𝑥 2 ≥ 4 (see Exercise 1.1.3), so if 𝑥 2 < 2, then 𝑥 < 2. So 𝐴 is bounded above. As 1 ∈ 𝐴, the set 𝐴 is nonempty. The supremum, therefore, exists. Let 𝑟 B sup 𝐴. We will show that 𝑟 2 = 2 by showing that 𝑟 2 ≥ 2 and 𝑟 2 ≤ 2. This is the way analysts show equality, by showing two inequalities. We already know that 𝑟 ≥ 1 > 0. In the following, it may seem we are pulling certain expressions out of a hat. When writing a proof such as this we would, of course, come up with the expressions only after playing around with what we wish to prove. The order in which we write the proof is not necessarily the order in which we come up with the proof. Let us first show that 𝑟 2 ≥ 2. Take a positive number 𝑠 such that 𝑠 2 < 2. We wish to find 2−𝑠 2 > 0. Choose an ℎ ∈ ℝ such an ℎ > 0 such that (𝑠 + ℎ)2 < 2. As 2 − 𝑠 2 > 0, we have 2𝑠+1 2 2−𝑠 that 0 < ℎ < 2𝑠+1 . Furthermore, assume ℎ < 1. Estimate, (𝑠 + ℎ)2 − 𝑠 2 = ℎ(2𝑠 + ℎ) < ℎ(2𝑠 + 1) < 2 − 𝑠2
since ℎ < 1 since ℎ
0, we have 𝑠 + ℎ > 𝑠. So 𝑠 < 𝑟 = sup 𝐴. As 𝑠 was an arbitrary positive number such that 𝑠 2 < 2, it follows that 𝑟 2 ≥ 2. Now take a positive number 𝑠 such that 𝑠 2 > 2. We wish to find an ℎ > 0 such that 2 2 (𝑠 − ℎ)2 > 2 and 𝑠 − ℎ is still positive. As 𝑠 2 − 2 > 0, we have 𝑠 2𝑠−2 > 0. Let ℎ B 𝑠 2𝑠−2 , and 2 check 𝑠 − ℎ = 𝑠 − 𝑠 2𝑠−2 = 2𝑠 + 1𝑠 > 0. Estimate, 𝑠 2 − (𝑠 − ℎ)2 = 2𝑠 ℎ − ℎ 2 < 2𝑠 ℎ
since ℎ 2 > 0 as ℎ ≠ 0
= 𝑠2 − 2
since ℎ =
𝑠 2 −2 2𝑠
.
By subtracting 𝑠 2 from both sides and multiplying by −1, we find (𝑠 − ℎ)2 > 2. Therefore, 𝑠 − ℎ ∉ 𝐴. Moreover, if 𝑥 ≥ 𝑠 − ℎ, then 𝑥 2 ≥ (𝑠 − ℎ)2 > 2 (as 𝑥 > 0 and 𝑠 − ℎ > 0) and so 𝑥 ∉ 𝐴. Thus, 𝑠 − ℎ is an upper bound for 𝐴. However, 𝑠 − ℎ < 𝑠, or in other words, 𝑠 > 𝑟 = sup 𝐴. Hence, 𝑟 2 ≤ 2. Together, 𝑟 2 ≥ 2 and 𝑟 2 ≤ 2 imply 𝑟 2 = 2. The existence part is finished. We still need to handle uniqueness. Suppose 𝑠 ∈ ℝ such that 𝑠 2 = 2 and 𝑠 > 0. Thus 𝑠 2 = 𝑟 2 . However, if 0 < 𝑠 < 𝑟, then 𝑠 2 < 𝑟 2 . Similarly, 0 < 𝑟 < 𝑠 implies 𝑟 2 < 𝑠 2 . Hence 𝑠 = 𝑟. □ √ The number 2 ∉ ℚ. The set ℝ \ ℚ is called the set of irrational numbers. We just proved that ℝ \ ℚ is nonempty. Not only is it nonempty, as we will see, it is very large indeed. Using the same technique as above, we can show that a positive real number 𝑥 1/𝑛 exists for all 𝑛 ∈ ℕ and all 𝑥 > 0. That is, for each 𝑥 > 0, there exists a unique positive real number 𝑟 such that 𝑟 𝑛 = 𝑥. The proof is left as an exercise.
31
1.2. THE SET OF REAL NUMBERS
1.2.2
Archimedean property
As we have seen, there are plenty of real numbers in any interval. But there are also infinitely many rational numbers in any interval. The following is one of the fundamental facts about the real numbers. The two parts of the next theorem are actually equivalent, even though it may not seem like that at first sight. Theorem 1.2.4. (i) (Archimedean property)* If 𝑥, 𝑦 ∈ ℝ and 𝑥 > 0, then there exists an 𝑛 ∈ ℕ such that 𝑛𝑥 > 𝑦. (ii) (ℚ is dense in ℝ) If 𝑥, 𝑦 ∈ ℝ and 𝑥 < 𝑦, then there exists an 𝑟 ∈ ℚ such that 𝑥 < 𝑟 < 𝑦.
Proof. Let us prove (i). Divide through by 𝑥. Then (i) says that for every real number 𝑡 B 𝑦/𝑥 , we can find 𝑛 ∈ ℕ such that 𝑛 > 𝑡. In other words, (i) says that ℕ ⊂ ℝ is not bounded above. Suppose for contradiction that ℕ is bounded above. Let 𝑏 B sup ℕ . The number 𝑏 − 1 cannot possibly be an upper bound for ℕ as it is strictly less than 𝑏 (the least upper bound). Thus there exists an 𝑚 ∈ ℕ such that 𝑚 > 𝑏 − 1. Add one to obtain 𝑚 + 1 > 𝑏, contradicting 𝑏 being an upper bound. 𝑚−1 𝑛
𝑚 𝑛
𝑚+1 𝑛
1 𝑛
𝑥
𝑦
Figure 1.2: Idea of the proof of the density of ℚ: Find 𝑛 such that 𝑦 − 𝑥 > 1/𝑛 , then take the least 𝑚 such that 𝑚/𝑛 > 𝑥.
Let us tackle (ii). See Figure 1.2 for a picture of the idea behind the proof. First assume 𝑥 ≥ 0. Note that 𝑦 − 𝑥 > 0. By (i), there exists an 𝑛 ∈ ℕ such that 𝑛(𝑦 − 𝑥) > 1
or
𝑦 − 𝑥 > 1/𝑛 .
Again by (i) the set 𝐴 B {𝑘 ∈ ℕ : 𝑘 > 𝑛𝑥} is nonempty. By the well ordering property of ℕ , 𝐴 has a least element 𝑚, and as 𝑚 ∈ 𝐴, then 𝑚 > 𝑛𝑥. Divide through by 𝑛 to get 𝑥 < 𝑚/𝑛 . As 𝑚 is the least element of 𝐴, 𝑚 − 1 ∉ 𝐴. If 𝑚 > 1, then 𝑚 − 1 ∈ ℕ , but 𝑚 − 1 ∉ 𝐴 and so 𝑚 − 1 ≤ 𝑛𝑥. If 𝑚 = 1, then 𝑚 − 1 = 0, and 𝑚 − 1 ≤ 𝑛𝑥 still holds as 𝑥 ≥ 0. In other words, 𝑚 − 1 ≤ 𝑛𝑥
or
𝑚 ≤ 𝑛𝑥 + 1.
On the other hand, from 𝑛(𝑦 − 𝑥) > 1 we obtain 𝑛𝑦 > 1 + 𝑛𝑥. Hence 𝑛𝑦 > 1 + 𝑛𝑥 ≥ 𝑚, and therefore 𝑦 > 𝑚/𝑛 . Putting everything together we obtain 𝑥 < 𝑚/𝑛 < 𝑦. So take 𝑟 = 𝑚/𝑛 . * Named
after the Ancient Greek mathematician Archimedes of Syracuse (c. 287 BC – c. 212 BC). This property is Axiom V from Archimedes’ “On the Sphere and Cylinder” 225 BC.
32
CHAPTER 1. REAL NUMBERS
Now assume 𝑥 < 0. If 𝑦 > 0, then just take 𝑟 = 0. If 𝑦 ≤ 0, then 0 ≤ −𝑦 < −𝑥, and we find a rational 𝑞 such that −𝑦 < 𝑞 < −𝑥. Then take 𝑟 = −𝑞. □ Let us state and prove a simple but useful corollary of the Archimedean property. Corollary 1.2.5. inf{1/𝑛 : 𝑛 ∈ ℕ } = 0. See Figure 1.3. Proof. Let 𝐴 B {1/𝑛 : 𝑛 ∈ ℕ }. Obviously 𝐴 is not empty. Furthermore, 1/𝑛 > 0 for all 𝑛 ∈ ℕ , so 0 is a lower bound, and 𝑏 B inf 𝐴 exists. As 0 is a lower bound, then 𝑏 ≥ 0. Take an arbitrary 𝑎 > 0. By the Archimedean property, there exists an 𝑛 such that 𝑛𝑎 > 1, that is, 𝑎 > 1/𝑛 ∈ 𝐴. Therefore, 𝑎 cannot be a lower bound for 𝐴. Hence 𝑏 = 0. □
0
···
11 1 1 87 6 5
1 4
1 3
1 2
1
Figure 1.3: The set {1/𝑛 : 𝑛 ∈ ℕ } and its infimum 0.
1.2.3
Using supremum and infimum
Suprema and infima are compatible with algebraic operations. For a set 𝐴 ⊂ ℝ and 𝑥 ∈ ℝ define 𝑥 + 𝐴 B {𝑥 + 𝑦 ∈ ℝ : 𝑦 ∈ 𝐴}, 𝑥𝐴 B {𝑥𝑦 ∈ ℝ : 𝑦 ∈ 𝐴}.
For example, if 𝐴 = {1, 2, 3}, then 5 + 𝐴 = {6, 7, 8} and 3𝐴 = {3, 6, 9}. Proposition 1.2.6. Let 𝐴 ⊂ ℝ be nonempty.
(i) If 𝑥 ∈ ℝ and 𝐴 is bounded above, then sup(𝑥 + 𝐴) = 𝑥 + sup 𝐴.
(ii) If 𝑥 ∈ ℝ and 𝐴 is bounded below, then inf(𝑥 + 𝐴) = 𝑥 + inf 𝐴.
(iii) If 𝑥 > 0 and 𝐴 is bounded above, then sup(𝑥𝐴) = 𝑥(sup 𝐴). (iv) If 𝑥 > 0 and 𝐴 is bounded below, then inf(𝑥𝐴) = 𝑥(inf 𝐴). (v) If 𝑥 < 0 and 𝐴 is bounded below, then sup(𝑥𝐴) = 𝑥(inf 𝐴). (vi) If 𝑥 < 0 and 𝐴 is bounded above, then inf(𝑥𝐴) = 𝑥(sup 𝐴).
Do note that multiplying a set by a negative number switches supremum for an infimum and vice versa. Also, as the proposition implies that supremum (resp. infimum) of 𝑥 + 𝐴 or 𝑥𝐴 exists, it also implies that 𝑥 + 𝐴 or 𝑥𝐴 is nonempty and bounded above (resp. below). Proof. Let us only prove the first statement. The rest are left as exercises.
33
1.2. THE SET OF REAL NUMBERS
Suppose 𝑏 is an upper bound for 𝐴. That is, 𝑦 ≤ 𝑏 for all 𝑦 ∈ 𝐴. Then 𝑥 + 𝑦 ≤ 𝑥 + 𝑏 for all 𝑦 ∈ 𝐴, and so 𝑥 + 𝑏 is an upper bound for 𝑥 + 𝐴. In particular, if 𝑏 = sup 𝐴, then sup(𝑥 + 𝐴) ≤ 𝑥 + 𝑏 = 𝑥 + sup 𝐴. The opposite inequality is similar: If 𝑐 is an upper bound for 𝑥 + 𝐴, then 𝑥 + 𝑦 ≤ 𝑐 for all 𝑦 ∈ 𝐴 and so 𝑦 ≤ 𝑐 − 𝑥 for all 𝑦 ∈ 𝐴. So 𝑐 − 𝑥 is an upper bound for 𝐴. If 𝑐 = sup(𝑥 + 𝐴), then sup 𝐴 ≤ 𝑐 − 𝑥 = sup(𝑥 + 𝐴) − 𝑥.
The result follows.
□
Sometimes we need to apply supremum or infimum twice. Here is an example. Proposition 1.2.7. Let 𝐴, 𝐵 ⊂ ℝ be nonempty sets such that 𝑥 ≤ 𝑦 whenever 𝑥 ∈ 𝐴 and 𝑦 ∈ 𝐵. Then 𝐴 is bounded above, 𝐵 is bounded below, and sup 𝐴 ≤ inf 𝐵. Proof. Any 𝑥 ∈ 𝐴 is a lower bound for 𝐵. Therefore 𝑥 ≤ inf 𝐵 for all 𝑥 ∈ 𝐴, so inf 𝐵 is an upper bound for 𝐴. Hence, sup 𝐴 ≤ inf 𝐵. □ We must be careful about strict inequalities and taking suprema and infima. Note that 𝑥 < 𝑦 whenever 𝑥 ∈ 𝐴 and 𝑦 ∈ 𝐵 still only implies sup 𝐴 ≤ inf 𝐵, and not a strict inequality. For example, take 𝐴 B {0} and 𝐵 B {1/𝑛 : 𝑛 ∈ ℕ }. Then 0 < 1/𝑛 for all 𝑛 ∈ ℕ . However, sup 𝐴 = 0 and inf 𝐵 = 0. This important subtle point comes up often. The proof of the following often used fact is left to the reader. A similar result holds for infima. Proposition 1.2.8. If 𝑆 ⊂ ℝ is nonempty and bounded above, then for every 𝜖 > 0 there exists an 𝑥 ∈ 𝑆 such that (sup 𝑆) − 𝜖 < 𝑥 ≤ sup 𝑆.
To make using suprema and infima even easier, we may want to write sup 𝐴 and inf 𝐴 without worrying about 𝐴 being bounded and nonempty. We make the following natural definitions. Definition 1.2.9. Let 𝐴 ⊂ ℝ be a set.
(i) If 𝐴 is empty, then sup 𝐴 B −∞.
(ii) If 𝐴 is not bounded above, then sup 𝐴 B ∞.
(iii) If 𝐴 is empty, then inf 𝐴 B ∞.
(iv) If 𝐴 is not bounded below, then inf 𝐴 B −∞.
For convenience, ∞ and −∞ are sometimes treated as if they were numbers, except we do not allow arbitrary arithmetic with them. We make ℝ∗ B ℝ ∪ {−∞, ∞} into an ordered set by letting −∞ < ∞
and
−∞ < 𝑥
and
𝑥 0 (𝑡 ∈ ℝ), then there exists an 𝑛 ∈ ℕ such that
1 < 𝑡. 𝑛2
Exercise 1.2.2: Prove that if 𝑡 ≥ 0 (𝑡 ∈ ℝ), then there exists an 𝑛 ∈ ℕ such that 𝑛 − 1 ≤ 𝑡 < 𝑛. Exercise 1.2.3: Finish the proof of Proposition 1.2.6. Exercise 1.2.4: Let 𝑥, 𝑦 ∈ ℝ. Suppose 𝑥 2 + 𝑦 2 = 0. Prove that 𝑥 = 0 and 𝑦 = 0. √ Exercise 1.2.5: Show that 3 is irrational. √ Exercise 1.2.6: Let 𝑛 ∈ ℕ . Show that 𝑛 is either an integer or it is irrational. Exercise 1.2.7: Prove the arithmetic-geometric mean inequality. For two positive real numbers 𝑥, 𝑦, √
𝑥𝑦 ≤
𝑥+𝑦 . 2
Furthermore, equality occurs if and only if 𝑥 = 𝑦. Exercise 1.2.8: Show that for every pair of real numbers 𝑥 and 𝑦 such that 𝑥 < 𝑦, there exists an irrational 𝑦 𝑥 number 𝑠 such that 𝑥 < 𝑠 < 𝑦. Hint: Apply the density of ℚ to √ and √ . 2 2
35
1.2. THE SET OF REAL NUMBERS
Exercise 1.2.9: Let 𝐴 and 𝐵 be two nonempty bounded sets of real numbers. Let 𝐶 B {𝑎 + 𝑏 : 𝑎 ∈ 𝐴, 𝑏 ∈ 𝐵}. Show that 𝐶 is a bounded set and that sup 𝐶 = sup 𝐴 + sup 𝐵
and
inf 𝐶 = inf 𝐴 + inf 𝐵.
Exercise 1.2.10: Let 𝐴 and 𝐵 be two nonempty bounded sets of nonnegative real numbers. Define the set 𝐶 B {𝑎𝑏 : 𝑎 ∈ 𝐴, 𝑏 ∈ 𝐵}. Show that 𝐶 is a bounded set and that sup 𝐶 = (sup 𝐴)(sup 𝐵)
and
inf 𝐶 = (inf 𝐴)(inf 𝐵).
Exercise 1.2.11 (Hard): Given 𝑥 > 0 and 𝑛 ∈ ℕ , show that there exists a unique positive real number 𝑟 such that 𝑥 = 𝑟 𝑛 . Usually 𝑟 is denoted by 𝑥 1/𝑛 . Exercise 1.2.12 (Easy): Prove Proposition 1.2.8. Exercise 1.2.13: Prove the so-called Bernoulli’s inequality* : If 1 + 𝑥 > 0, then for all 𝑛 ∈ ℕ , we have (1 + 𝑥)𝑛 ≥ 1 + 𝑛𝑥. Exercise 1.2.14: Prove sup{𝑥 ∈ ℚ : 𝑥 2 < 2} = sup{𝑥 ∈ ℝ : 𝑥 2 < 2}. Exercise 1.2.15: a) Prove that given 𝑦 ∈ ℝ, we have sup{𝑥 ∈ ℚ : 𝑥 < 𝑦} = 𝑦.
b) Let 𝐴 ⊂ ℚ be a set that is bounded above such that whenever 𝑥 ∈ 𝐴 and 𝑡 ∈ ℚ with 𝑡 < 𝑥, then 𝑡 ∈ 𝐴. Further suppose sup 𝐴 ∉ 𝐴. Show that there exists a 𝑦 ∈ ℝ such that 𝐴 = {𝑥 ∈ ℚ : 𝑥 < 𝑦}. A set such as 𝐴 is called a Dedekind cut. c) Show that there is a bijection between ℝ and Dedekind cuts. Note: Dedekind used sets as in part b) in his construction of the real numbers. Exercise 1.2.16: Prove that if 𝐴 ⊂ ℤ is a nonempty subset bounded below, then there exists a least element in 𝐴. Now describe why this statement would simplify the proof of Theorem 1.2.4 part (ii) so that you do not have to assume 𝑥 ≥ 0. Exercise 1.2.17: Let us suppose we know 𝑥 1/𝑛 exists for every 𝑥 > 0 and every 𝑛 ∈ ℕ (see Exercise 1.2.11 𝑝 above). For integers 𝑝 and 𝑞 > 0 where 𝑝/𝑞 is in lowest terms, define 𝑥 𝑝/𝑞 B (𝑥 1/𝑞 ) .
a) Show that the power is well-defined even if the fraction is not in lowest terms: If 𝑝/𝑞 = 𝑚/𝑘 where 𝑚 and 𝑝
𝑘
𝑘 > 0 are integers, then (𝑥 1/𝑞 ) = (𝑥 1/𝑚 ) .
b) Let 𝑥 and 𝑦 be two positive numbers and 𝑟 a rational number. Assuming 𝑟 > 0, show 𝑥 < 𝑦 if and only if 𝑥 𝑟 < 𝑦 𝑟 . Then suppose 𝑟 < 0 and show: 𝑥 < 𝑦 if and only if 𝑥 𝑟 > 𝑦 𝑟 . c) Suppose 𝑥 > 1 and 𝑟, 𝑠 are rational where 𝑟 < 𝑠. Show 𝑥 𝑟 < 𝑥 𝑠 . If 0 < 𝑥 < 1 and 𝑟 < 𝑠, show that 𝑥 𝑟 > 𝑥 𝑠 . Hint: Write 𝑟 and 𝑠 with the same denominator. d) (Challenging)† For an irrational 𝑧 ∈ ℝ \ ℚ and 𝑥 > 1 define 𝑥 𝑧 B sup{𝑥 𝑟 : 𝑟 ≤ 𝑧, 𝑟 ∈ ℚ}, for 𝑥 = 1 define 1𝑧 = 1, and for 0 < 𝑥 < 1 define 𝑥 𝑧 B inf{𝑥 𝑟 : 𝑟 ≤ 𝑧, 𝑟 ∈ ℚ}. Prove the two assertions of part b) for all real 𝑧. * Named
after the Swiss mathematician Jacob Bernoulli (1655–1705). §5.4 we will define the exponential and the logarithm and define 𝑥 𝑧 B exp(𝑧 ln 𝑥). We will then have sufficient machinery to make proofs of these assertions far easier. At this point, however, we do not yet have these tools. † In
36
CHAPTER 1. REAL NUMBERS
1.3
Absolute value and bounded functions
Note: 0.5–1 lecture A concept we will encounter over and over is the concept of absolute value. You want to think of the absolute value as the “size” of a real number. Here is the formal definition:
( |𝑥| B
𝑥 −𝑥
if 𝑥 ≥ 0, if 𝑥 < 0.
Let us give the main features of the absolute value as a proposition. Proposition 1.3.1. (i) |𝑥| ≥ 0, moreover, |𝑥| = 0 if and only if 𝑥 = 0.
(ii) |−𝑥| = |𝑥| for all 𝑥 ∈ ℝ.
(iii) |𝑥 𝑦| = |𝑥| |𝑦| for all 𝑥, 𝑦 ∈ ℝ. (iv) |𝑥| 2 = 𝑥 2 for all 𝑥 ∈ ℝ.
(v) |𝑥| ≤ 𝑦 if and only if −𝑦 ≤ 𝑥 ≤ 𝑦.
(vi) − |𝑥| ≤ 𝑥 ≤ |𝑥| for all 𝑥 ∈ ℝ.
Proof. (i): First suppose 𝑥 ≥ 0. Then |𝑥| = 𝑥 ≥ 0. Also, |𝑥| = 𝑥 = 0 if and only if 𝑥 = 0. On the other hand, if 𝑥 < 0, then |𝑥| = −𝑥 > 0, and |𝑥| is never zero.
(ii): If 𝑥 > 0, then −𝑥 < 0 and so |−𝑥| = −(−𝑥) = 𝑥 = |𝑥|. Similarly when 𝑥 < 0, or 𝑥 = 0.
(iii): If 𝑥 or 𝑦 is zero, then the result is immediate. When 𝑥 and 𝑦 are both positive, then |𝑥| |𝑦| = 𝑥𝑦. As 𝑥𝑦 is also positive, 𝑥𝑦 = |𝑥𝑦|. If 𝑥 and 𝑦 are both negative, then 𝑥 𝑦 = (−𝑥)(−𝑦) is still positive and |𝑥𝑦| = 𝑥𝑦. Also, |𝑥| |𝑦| = (−𝑥)(−𝑦) = 𝑥𝑦. If 𝑥 > 0 and 𝑦 < 0, then |𝑥| |𝑦| = 𝑥(−𝑦) = −(𝑥𝑦). Now 𝑥𝑦 is negative and |𝑥𝑦| = −(𝑥𝑦). Similarly when 𝑥 < 0 and 𝑦 > 0. (iv): Immediate if 𝑥 ≥ 0. If 𝑥 < 0, then |𝑥| 2 = (−𝑥)2 = 𝑥 2 .
(v): Suppose |𝑥| ≤ 𝑦. If 𝑥 ≥ 0, then 𝑥 ≤ 𝑦. It follows that 𝑦 ≥ 0, leading to −𝑦 ≤ 0 ≤ 𝑥. So −𝑦 ≤ 𝑥 ≤ 𝑦 holds. If 𝑥 < 0, then |𝑥| ≤ 𝑦 means −𝑥 ≤ 𝑦. Negating both sides we get 𝑥 ≥ −𝑦. Again 𝑦 ≥ 0 and so 𝑦 ≥ 0 > 𝑥. Hence, −𝑦 ≤ 𝑥 ≤ 𝑦. On the other hand, suppose −𝑦 ≤ 𝑥 ≤ 𝑦 is true. If 𝑥 ≥ 0, then 𝑥 ≤ 𝑦 is equivalent to |𝑥| ≤ 𝑦. If 𝑥 < 0, then −𝑦 ≤ 𝑥 implies (−𝑥) ≤ 𝑦, which is equivalent to |𝑥| ≤ 𝑦. (vi): Apply (v) with 𝑦 = |𝑥|.
□
A property used frequently enough to give it a name is the so-called triangle inequality. Proposition 1.3.2 (Triangle Inequality). |𝑥 + 𝑦| ≤ |𝑥| + |𝑦| for all 𝑥, 𝑦 ∈ ℝ.
1.3. ABSOLUTE VALUE AND BOUNDED FUNCTIONS
37
Proof. Proposition 1.3.1 gives −|𝑥| ≤ 𝑥 ≤ |𝑥| and −|𝑦| ≤ 𝑦 ≤ |𝑦|. Add these two inequalities to obtain − |𝑥| + |𝑦| ≤ 𝑥 + 𝑦 ≤ |𝑥| + |𝑦|.
Apply Proposition 1.3.1 again to find |𝑥 + 𝑦| ≤ |𝑥| + |𝑦|.
□
There are other often applied versions of the triangle inequality.
Corollary 1.3.3. Let 𝑥, 𝑦 ∈ ℝ.
(i) (reverse triangle inequality) (|𝑥| − |𝑦|) ≤ |𝑥 − 𝑦|.
(ii) |𝑥 − 𝑦| ≤ |𝑥| + |𝑦|.
Proof. Let us plug in 𝑥 = 𝑎 − 𝑏 and 𝑦 = 𝑏 into the standard triangle inequality to obtain |𝑎| = |𝑎 − 𝑏 + 𝑏| ≤ |𝑎 − 𝑏| + |𝑏| , or |𝑎| − |𝑏| ≤ |𝑎 − 𝑏|. Switching the roles of 𝑎 and 𝑏 we find |𝑏| − |𝑎| ≤ |𝑏 − 𝑎| = |𝑎 − 𝑏|. Applying Proposition 1.3.1, we obtain the reverse triangle inequality. The second item in the corollary is obtained from the standard triangle inequality by just replacing 𝑦 with −𝑦, and noting |−𝑦| = |𝑦|. □ Corollary 1.3.4. Let 𝑥 1 , 𝑥2 , . . . , 𝑥 𝑛 ∈ ℝ. Then |𝑥1 + 𝑥2 + · · · + 𝑥 𝑛 | ≤ |𝑥1 | + |𝑥2 | + · · · + |𝑥 𝑛 | . Proof. We proceed by induction. The conclusion holds trivially for 𝑛 = 1, and for 𝑛 = 2 it is the standard triangle inequality. Suppose the corollary holds for 𝑛. Take 𝑛 + 1 numbers 𝑥1 , 𝑥2 , . . . , 𝑥 𝑛+1 and first use the standard triangle inequality, then the induction hypothesis |𝑥1 + 𝑥2 + · · · + 𝑥 𝑛 + 𝑥 𝑛+1 | ≤ |𝑥 1 + 𝑥 2 + · · · + 𝑥 𝑛 | + |𝑥 𝑛+1 |
≤ |𝑥 1 | + |𝑥 2 | + · · · + |𝑥 𝑛 | + |𝑥 𝑛+1 |.
□
Let us see an example of the use of the triangle inequality. Example 1.3.5: Find a number 𝑀 such that |𝑥 2 − 9𝑥 + 1| ≤ 𝑀 for all −1 ≤ 𝑥 ≤ 5. Using the triangle inequality, write |𝑥 2 − 9𝑥 + 1| ≤ |𝑥 2 | + |9𝑥| + |1| = |𝑥| 2 + 9|𝑥| + 1. The expression |𝑥| 2 + 9|𝑥| + 1 is largest when |𝑥| is largest (why?). In the interval provided, |𝑥| is largest when 𝑥 = 5 and so |𝑥| = 5. One possibility for 𝑀 is 𝑀 = 52 + 9(5) + 1 = 71.
There are, of course, other 𝑀 that work. The bound of 71 is much higher than it need be, but we didn’t ask for the best possible 𝑀, just one that works. The last example leads us to the concept of bounded functions.
38
CHAPTER 1. REAL NUMBERS
Definition 1.3.6. Suppose 𝑓 : 𝐷 → ℝ is a function. We say 𝑓 is bounded if there exists a number 𝑀 such that | 𝑓 (𝑥)| ≤ 𝑀 for all 𝑥 ∈ 𝐷.
In the example, we proved 𝑥 2 − 9𝑥 + 1 is bounded when considered as a function on 𝐷 = {𝑥 : −1 ≤ 𝑥 ≤ 5}. On the other hand, if we consider the same polynomial as a function on the whole real line ℝ, then it is not bounded. 𝑀
sup 𝑓 (𝐷) 𝑓 (𝐷)
−𝑀
𝐷
inf 𝑓 (𝐷)
Figure 1.4: Example of a bounded function, a bound 𝑀, and its supremum and infimum.
For a function 𝑓 : 𝐷 → ℝ, we write (see Figure 1.4 for an example) sup 𝑓 (𝑥) B sup 𝑓 (𝐷)
and
𝑥∈𝐷
inf 𝑓 (𝑥) B inf 𝑓 (𝐷).
𝑥∈𝐷
We also sometimes replace the “𝑥 ∈ 𝐷” with an expression. For example if, as before, 𝑓 (𝑥) = 𝑥 2 − 9𝑥 + 1, for −1 ≤ 𝑥 ≤ 5, a little bit of calculus shows sup 𝑓 (𝑥) = sup (𝑥 2 − 9𝑥 + 1) = 11, 𝑥∈𝐷
inf 𝑓 (𝑥) =
𝑥∈𝐷
−1≤𝑥≤5
inf (𝑥 2 − 9𝑥 + 1) = −77/4.
−1≤𝑥≤5
Proposition 1.3.7. If 𝑓 : 𝐷 → ℝ and 𝑔 : 𝐷 → ℝ (𝐷 nonempty) are bounded* functions and 𝑓 (𝑥) ≤ 𝑔(𝑥) then
sup 𝑓 (𝑥) ≤ sup 𝑔(𝑥) 𝑥∈𝐷
for all 𝑥 ∈ 𝐷, and
𝑥∈𝐷
inf 𝑓 (𝑥) ≤ inf 𝑔(𝑥).
𝑥∈𝐷
𝑥∈𝐷
(1.1)
Be careful with the variables. The 𝑥 on the left side of the inequality in (1.1) is different from the 𝑥 on the right. You should really think of, say, the first inequality as sup 𝑓 (𝑥) ≤ sup 𝑔(𝑦). 𝑥∈𝐷
𝑦∈𝐷
Let us prove this inequality. If 𝑏 is an upper bound for 𝑔(𝐷), then 𝑓 (𝑥) ≤ 𝑔(𝑥) ≤ 𝑏 for all 𝑥 ∈ 𝐷, and hence 𝑏 is also an upper bound for 𝑓 (𝐷), or 𝑓 (𝑥) ≤ 𝑏 for all 𝑥 ∈ 𝐷. Take the least upper bound of 𝑔(𝐷) to get that for all 𝑥 ∈ 𝐷 𝑓 (𝑥) ≤ sup 𝑔(𝑦). 𝑦∈𝐷
*The boundedness hypothesis is for simplicity, it can be dropped if we allow for the extended real numbers.
39
1.3. ABSOLUTE VALUE AND BOUNDED FUNCTIONS
Therefore, sup 𝑦∈𝐷 𝑔(𝑦) is an upper bound for 𝑓 (𝐷) and thus greater than or equal to the least upper bound of 𝑓 (𝐷). sup 𝑓 (𝑥) ≤ sup 𝑔(𝑦). 𝑥∈𝐷
𝑦∈𝐷
The second inequality (the statement about the inf) is left as an exercise (Exercise 1.3.4). A common mistake is to conclude sup 𝑓 (𝑥) ≤ inf 𝑔(𝑦).
(1.2)
𝑦∈𝐷
𝑥∈𝐷
The inequality (1.2) is not true given the hypothesis of the proposition above. For this stronger inequality we need the stronger hypothesis for all 𝑥 ∈ 𝐷 and 𝑦 ∈ 𝐷.
𝑓 (𝑥) ≤ 𝑔(𝑦)
The proof as well as a counterexample is left as an exercise (Exercise 1.3.5).
1.3.1
Exercises
Exercise 1.3.1: Show that |𝑥 − 𝑦| < 𝜖 if and only if 𝑥 − 𝜖 < 𝑦 < 𝑥 + 𝜖. a) max{𝑥, 𝑦} =
Exercise 1.3.2: Show:
𝑥+𝑦+|𝑥−𝑦| 2
b) min{𝑥, 𝑦} =
𝑥+𝑦−|𝑥−𝑦| 2
Exercise 1.3.3: Find a number 𝑀 such that |𝑥 3 − 𝑥 2 + 8𝑥| ≤ 𝑀 for all −2 ≤ 𝑥 ≤ 10. Exercise 1.3.4: Finish the proof of Proposition 1.3.7. That is, prove that given a set 𝐷, and two bounded functions 𝑓 : 𝐷 → ℝ and 𝑔 : 𝐷 → ℝ such that 𝑓 (𝑥) ≤ 𝑔(𝑥) for all 𝑥 ∈ 𝐷, then inf 𝑓 (𝑥) ≤ inf 𝑔(𝑥).
𝑥∈𝐷
𝑥∈𝐷
Exercise 1.3.5: Let 𝑓 : 𝐷 → ℝ and 𝑔 : 𝐷 → ℝ be functions (𝐷 nonempty). a) Suppose 𝑓 (𝑥) ≤ 𝑔(𝑦) for all 𝑥 ∈ 𝐷 and 𝑦 ∈ 𝐷. Show that
sup 𝑓 (𝑥) ≤ inf 𝑔(𝑥). 𝑥∈𝐷
𝑥∈𝐷
b) Find a specific 𝐷, 𝑓 , and 𝑔, such that 𝑓 (𝑥) ≤ 𝑔(𝑥) for all 𝑥 ∈ 𝐷, but sup 𝑓 (𝑥) > inf 𝑔(𝑥). 𝑥∈𝐷
𝑥∈𝐷
Exercise 1.3.6: Prove Proposition 1.3.7 without the assumption that the functions are bounded. Hint: You need to use the extended real numbers. Exercise 1.3.7: Let 𝐷 be a nonempty set. Suppose 𝑓 : 𝐷 → ℝ and 𝑔 : 𝐷 → ℝ are bounded functions. a) Show
sup 𝑓 (𝑥) + 𝑔(𝑥) ≤ sup 𝑓 (𝑥) + sup 𝑔(𝑥)
𝑥∈𝐷
𝑥∈𝐷
𝑥∈𝐷
and
inf 𝑓 (𝑥) + 𝑔(𝑥) ≥ inf 𝑓 (𝑥) + inf 𝑔(𝑥).
𝑥∈𝐷
b) Find an example (or examples) where we obtain strict inequalities.
𝑥∈𝐷
𝑥∈𝐷
40
CHAPTER 1. REAL NUMBERS
Exercise 1.3.8: Suppose 𝐷 is nonempty, 𝑓 : 𝐷 → ℝ and 𝑔 : 𝐷 → ℝ are bounded functions, and 𝛼 ∈ ℝ. a) Show that 𝛼 𝑓 : 𝐷 → ℝ defined by (𝛼 𝑓 )(𝑥) B 𝛼 𝑓 (𝑥) is a bounded function.
b) Show that 𝑓 + 𝑔 : 𝐷 → ℝ defined by ( 𝑓 + 𝑔)(𝑥) B 𝑓 (𝑥) + 𝑔(𝑥) is a bounded function. Exercise 1.3.9: Let 𝑓 : 𝐷 → ℝ and 𝑔 : 𝐷 → ℝ be functions with 𝐷 nonempty, 𝛼 ∈ ℝ, and recall what 𝑓 + 𝑔 and 𝛼 𝑓 means from the previous exercise. a) Prove that if 𝑓 + 𝑔 and 𝑔 are bounded, then 𝑓 is bounded.
b) Find an example where 𝑓 and 𝑔 are both unbounded, but 𝑓 + 𝑔 is bounded. c) Prove that if 𝑓 is bounded but 𝑔 is unbounded, then 𝑓 + 𝑔 is unbounded.
d) Find an example where 𝑓 is unbounded but 𝛼 𝑓 is bounded.
1.4. INTERVALS AND THE SIZE OF ℝ
1.4
41
Intervals and the size of ℝ
Note: 0.5–1 lecture (proof of uncountability of ℝ can be optional) You surely saw the notation for intervals before, but let us give a formal definition here. For 𝑎, 𝑏 ∈ ℝ such that 𝑎 < 𝑏, we define [𝑎, 𝑏] B {𝑥 ∈ ℝ : 𝑎 ≤ 𝑥 ≤ 𝑏}, (𝑎, 𝑏) B {𝑥 ∈ ℝ : 𝑎 < 𝑥 < 𝑏},
(𝑎, 𝑏] B {𝑥 ∈ ℝ : 𝑎 < 𝑥 ≤ 𝑏}, [𝑎, 𝑏) B {𝑥 ∈ ℝ : 𝑎 ≤ 𝑥 < 𝑏}.
The interval [𝑎, 𝑏] is called a closed interval and (𝑎, 𝑏) is called an open interval. The intervals of the form (𝑎, 𝑏] and [𝑎, 𝑏) are called half-open intervals. The intervals above are bounded intervals, since both 𝑎 and 𝑏 are real numbers. We define unbounded intervals, [𝑎, ∞) B {𝑥 ∈ ℝ : 𝑎 ≤ 𝑥}, (𝑎, ∞) B {𝑥 ∈ ℝ : 𝑎 < 𝑥},
(−∞, 𝑏] B {𝑥 ∈ ℝ : 𝑥 ≤ 𝑏}, (−∞, 𝑏) B {𝑥 ∈ ℝ : 𝑥 < 𝑏}.
For completeness, we define (−∞, ∞) B ℝ. The intervals [𝑎, ∞), (−∞, 𝑏], and ℝ are sometimes called unbounded closed intervals, and (𝑎, ∞), (−∞, 𝑏), and ℝ are sometimes called unbounded open intervals. The proof of the following proposition is left as an exercise. In short, an interval is a set with at least two points that contains all points between any two points.* Proposition 1.4.1. A set 𝐼 ⊂ ℝ is an interval if and only if 𝐼 contains at least 2 points and for all 𝑎, 𝑐 ∈ 𝐼 and 𝑏 ∈ ℝ such that 𝑎 < 𝑏 < 𝑐, we have 𝑏 ∈ 𝐼.
We have already seen that every open interval (𝑎, 𝑏) (where 𝑎 < 𝑏 of course) must be nonempty. For example, it contains the number 𝑎+𝑏 2 . An unexpected fact is that from a set-theoretic perspective, all intervals have the same “size,” that is, they all have the same cardinality. For instance, the map 𝑓 (𝑥) B 2𝑥 takes the interval [0, 1] bijectively to the interval [0, 2]. Maybe more interestingly, the function 𝑓 (𝑥) B tan(𝑥) is a bijective map from (−𝜋/2, 𝜋/2) to ℝ. Hence the bounded interval (−𝜋/2, 𝜋/2) has the same cardinality as ℝ. It is not completely straightforward to construct a bijective map from [0, 1] to (0, 1), but it is possible. And do not worry, there does exist a way to measure the “size” of subsets of real numbers that “sees” the difference between [0, 1] and [0, 2]. However, its proper definition requires much more machinery than we have right now. * Sometimes
single point sets and the empty set are also called intervals, but in this book, intervals have at least 2 points. That is, we only defined the bounded intervals if 𝑎 < 𝑏.
42
CHAPTER 1. REAL NUMBERS
Let us say more about the cardinality of intervals and hence about the cardinality of ℝ. We have seen that there exist irrational numbers, that is ℝ \ ℚ, is nonempty. The question is: How many irrational numbers are there? It turns out there are a lot more irrational numbers than rational numbers. We have seen that ℚ is countable, and we will show that ℝ is uncountable. In fact, the cardinality of ℝ is the same as the cardinality of P(ℕ ), although we will not prove this claim here. Theorem 1.4.2 (Cantor). ℝ is uncountable. We give a version of Cantor’s original proof from 1874 as this proof requires the least setup. Normally this proof is stated as a contradiction, but a proof by contrapositive is easier to understand. Proof. Let 𝑋 ⊂ ℝ be a countably infinite subset such that for every pair of real numbers 𝑎 < 𝑏, there is an 𝑥 ∈ 𝑋 such that 𝑎 < 𝑥 < 𝑏. Were ℝ countable, we could take 𝑋 = ℝ. We will show that 𝑋 is necessarily a proper subset, and so 𝑋 cannot equal ℝ, and ℝ must be uncountable. As 𝑋 is countably infinite, there is a bijection from ℕ to 𝑋. We write 𝑋 as a sequence of real numbers 𝑥1 , 𝑥2 , 𝑥3 , . . ., such that each number in 𝑋 is given by 𝑥 𝑛 for some 𝑛 ∈ ℕ . We inductively construct two sequences of real numbers 𝑎 1 , 𝑎2 , 𝑎3 , . . . and 𝑏 1 , 𝑏2 , 𝑏3 , . . .. Let 𝑎 1 B 𝑥1 and 𝑏 1 B 𝑥1 + 1. Note that 𝑎 1 < 𝑏 1 and 𝑥1 ∉ (𝑎 1 , 𝑏1 ). For some 𝑘 > 1, suppose 𝑎 1 , 𝑎2 , . . . , 𝑎 𝑘−1 and 𝑏 1 , 𝑏2 , . . . , 𝑏 𝑘−1 have been defined, suppose 𝑎 1 < 𝑎 2 < · · · < 𝑎 𝑘−1 < 𝑏 𝑘−1 < · · · < 𝑏 2 < 𝑏 1 , and suppose for each 𝑗 = 1, 2, . . . , 𝑘 − 1, we have 𝑥ℓ ∉ (𝑎 𝑗 , 𝑏 𝑗 ) for ℓ = 1, 2, . . . , 𝑗. (i) Define 𝑎 𝑘 B 𝑥 𝑛 , where 𝑛 is the smallest 𝑛 ∈ ℕ such that 𝑥 𝑛 ∈ (𝑎 𝑘−1 , 𝑏 𝑘−1 ). Such an 𝑥 𝑛 exists by our assumption on 𝑋, and 𝑛 ≥ 𝑘 by the assumption on (𝑎 𝑘−1 , 𝑏 𝑘−1 ).
(ii) Next, define 𝑏 𝑘 to be some real number in (𝑎 𝑘 , 𝑏 𝑘−1 ).
Notice that 𝑎 𝑘−1 < 𝑎 𝑘 < 𝑏 𝑘 < 𝑏 𝑘−1 . Also notice that (𝑎 𝑘 , 𝑏 𝑘 ) does not contain 𝑥 𝑘 and hence does not contain 𝑥 𝑗 for 𝑗 = 1, 2, . . . , 𝑘. The two sequences are now defined. Claim: 𝑎 𝑛 < 𝑏 𝑚 for all 𝑛 and 𝑚 in ℕ . Proof: Let us first assume 𝑛 < 𝑚. Then 𝑎 𝑛 < 𝑎 𝑛+1 < · · · < 𝑎 𝑚−1 < 𝑎 𝑚 < 𝑏 𝑚 . Similarly for 𝑛 > 𝑚. The claim follows. Let 𝐴 B {𝑎 𝑛 : 𝑛 ∈ ℕ } and 𝐵 B {𝑏 𝑛 : 𝑛 ∈ ℕ }. By Proposition 1.2.7 and the claim above, sup 𝐴 ≤ inf 𝐵. Define 𝑦 B sup 𝐴. The number 𝑦 cannot be a member of 𝐴: If 𝑦 = 𝑎 𝑛 for some 𝑛, then 𝑦 < 𝑎 𝑛+1 , which is impossible. Similarly, 𝑦 cannot be a member of 𝐵. Therefore, 𝑎 𝑛 < 𝑦 for all 𝑛 ∈ ℕ and 𝑦 < 𝑏 𝑛 for all 𝑛 ∈ ℕ . In other words, for every 𝑛 ∈ ℕ , we have 𝑦 ∈ (𝑎 𝑛 , 𝑏 𝑛 ). By the construction of the sequence, 𝑥 𝑛 ∉ (𝑎 𝑛 , 𝑏 𝑛 ), and so 𝑦 ≠ 𝑥 𝑛 . As this was true for all 𝑛 ∈ ℕ , we have that 𝑦 ∉ 𝑋. We have constructed a real number 𝑦 that is not in 𝑋, and thus 𝑋 is a proper subset of ℝ. The sequence 𝑥1 , 𝑥2 , . . . cannot contain all elements of ℝ and thus ℝ is uncountable. □
43
1.4. INTERVALS AND THE SIZE OF ℝ
1.4.1
Exercises
Exercise 1.4.1: For 𝑎 < 𝑏, construct an explicit bijection from (𝑎, 𝑏] to (0, 1]. Exercise 1.4.2: Suppose 𝑓 : [0, 1] → (0, 1) is a bijection. Using 𝑓 , construct a bijection from [−1, 1] to ℝ. Exercise 1.4.3: Prove Proposition 1.4.1. That is, suppose 𝐼 ⊂ ℝ is a subset with at least 2 elements such that if 𝑎 < 𝑏 < 𝑐 and 𝑎, 𝑐 ∈ 𝐼, then 𝑏 ∈ 𝐼. Prove that 𝐼 is one of the nine types of intervals explicitly given in this section. Furthermore, prove that the intervals given in this section all satisfy this property. Exercise 1.4.4 (Hard): Construct an explicit bijection from (0, 1] to (0, 1). Hint: One approach is as follows: First map (1/2, 1] to (0, 1/2], then map (1/4, 1/2] to (1/2, 3/4], etc. Write down the map explicitly, that is, write down an algorithm that tells you exactly what number goes where. Then prove that the map is a bijection. Exercise 1.4.5 (Hard): Construct an explicit bijection from [0, 1] to (0, 1). Exercise 1.4.6: a) Show that every closed interval [𝑎, 𝑏] is the intersection of countably many open intervals.
b) Show that every open interval (𝑎, 𝑏) is a countable union of closed intervals.
c) Show that an intersection of a possibly infinite family of bounded closed intervals, empty, a single point, or a bounded closed interval.
Ñ 𝜆∈𝐼
[𝑎𝜆 , 𝑏𝜆 ], is either
Exercise 1.4.7: Suppose 𝑆 is a set of disjoint open intervals in ℝ. That is, if (𝑎, 𝑏) ∈ 𝑆 and (𝑐, 𝑑) ∈ 𝑆, then either (𝑎, 𝑏) = (𝑐, 𝑑) or (𝑎, 𝑏) ∩ (𝑐, 𝑑) = ∅. Prove 𝑆 is a countable set. Exercise 1.4.8: Prove that the cardinality of [0, 1] is the same as the cardinality of (0, 1) by showing that |[0, 1]| ≤ |(0, 1)| and |(0, 1)| ≤ |[0, 1]|. See Definition 0.3.28. This proof requires the Cantor–Bernstein– Schröder theorem, which we stated without proof. Note that this proof does not give you an explicit bijection. Exercise 1.4.9 (Challenging): A number 𝑥 is algebraic if 𝑥 is a root of a polynomial with integer coefficients, in other words, 𝑎 𝑛 𝑥 𝑛 + 𝑎 𝑛−1 𝑥 𝑛−1 + · · · + 𝑎 1 𝑥 + 𝑎 0 = 0 where 𝑎 0 , 𝑎1 , . . . , 𝑎 𝑛 ∈ ℤ. a) Show that there are only countably many algebraic numbers.
b) Show that there exist non-algebraic (transcendental) numbers (follow in the footsteps of Cantor, use the uncountability of ℝ). Hint: Feel free to use the fact that a polynomial of degree 𝑛 has at most 𝑛 real roots. Exercise 1.4.10 (Challenging): Let 𝐹 be the set of all functions 𝑓 : ℝ → ℝ. Prove | ℝ | < |𝐹| using Cantor’s Theorem 0.3.34.*
* Interestingly,
if 𝐶 is the set of continuous functions, then | ℝ | = |𝐶 |.
44
CHAPTER 1. REAL NUMBERS
1.5
Decimal representation of the reals
Note: 1 lecture (optional) We often think of real numbers as their decimal representation. For a positive integer 𝑛, we find the digits 𝑑𝐾 , 𝑑𝐾−1 , . . . , 𝑑2 , 𝑑1 , 𝑑0 for some 𝐾, where each 𝑑 𝑗 is an integer between 0 and 9, then 𝑛 = 𝑑𝐾 10𝐾 + 𝑑𝐾−1 10𝐾−1 + · · · + 𝑑2 102 + 𝑑1 10 + 𝑑0 .
We often assume 𝑑𝐾 ≠ 0. To represent 𝑛 we write the sequence of digits: 𝑛 = 𝑑𝐾 𝑑𝐾−1 · · · 𝑑2 𝑑1 𝑑0 . By a (decimal) digit, we mean an integer between 0 and 9. Similarly, we represent some rational numbers. That is, for certain numbers 𝑥, we can find a negative integer −𝑀, a positive integer 𝐾, and digits 𝑑𝐾 , 𝑑𝐾−1 , . . . , 𝑑1 , 𝑑0 , 𝑑−1 , . . . , 𝑑−𝑀 , such that 𝑥 = 𝑑𝐾 10𝐾 + 𝑑𝐾−1 10𝐾−1 + · · · + 𝑑2 102 + 𝑑1 10 + 𝑑0 + 𝑑−1 10−1 + 𝑑−2 10−2 + · · · + 𝑑−𝑀 10−𝑀 .
We write 𝑥 = 𝑑𝐾 𝑑𝐾−1 · · · 𝑑1 𝑑0 . 𝑑−1 𝑑−2 · · · 𝑑−𝑀 . 1 Not every real number has such √ a representation, even the simple rational number /3 does not. The irrational number 2 does not have such a representation either. To get a representation for all real numbers, we must allow infinitely many digits. Let us consider only real numbers in the interval (0, 1]. If we find a representation for these, adding integers to them obtains a representation for all real numbers. Take an infinite sequence of decimal digits: 0.𝑑1 𝑑2 𝑑3 . . . . That is, we have a digit 𝑑 𝑗 for every 𝑗 ∈ ℕ . We renumbered the digits to avoid the negative signs. We call the number 𝐷𝑛 B
𝑑1 𝑑2 𝑑3 𝑑𝑛 + 2 + 3 +···+ 𝑛. 10 10 10 10
the truncation of 𝑥 to 𝑛 decimal digits. We say this sequence of digits represents a real number 𝑥 if 𝑑1 𝑑2 𝑑3 𝑑𝑛 𝑥 = sup + 2 + 3 + · · · + 𝑛 = sup 𝐷𝑛 . 10 10 10 𝑛∈ℕ 𝑛∈ℕ 10 Proposition 1.5.1. (i) Every infinite sequence of digits 0.𝑑1 𝑑2 𝑑3 . . . represents a unique real number 𝑥 ∈ [0, 1], and 1 for all 𝑛 ∈ ℕ . 𝐷𝑛 ≤ 𝑥 ≤ 𝐷𝑛 + 𝑛 10 (ii) For every 𝑥 ∈ (0, 1] there exists an infinite sequence of digits 0.𝑑1 𝑑2 𝑑3 . . . that represents 𝑥. There exists a unique representation such that 𝐷𝑛 < 𝑥 ≤ 𝐷𝑛 +
1 10𝑛
for all 𝑛 ∈ ℕ .
1.5. DECIMAL REPRESENTATION OF THE REALS
45
Proof. We start with the first item. Take an arbitrary infinite sequence of digits 0.𝑑1 𝑑2 𝑑3 . . .. Use the geometric sum formula to write 𝐷𝑛 =
𝑑1 𝑑2 𝑑3 𝑑𝑛 9 9 9 9 + 2 + 3 +···+ 𝑛 ≤ + 2 + 3 +···+ 𝑛 10 10 10 10 10 10 10 10 9 = 1 + 1/10 + (1/10)2 + · · · + (1/10)𝑛−1 10 9 1 − (1/10)𝑛 = = 1 − (1/10)𝑛 < 1. 1 10 1 − /10
In particular, 𝐷𝑛 < 1 for all 𝑛. A sum of nonnegative numbers is nonnegative so 𝐷𝑛 ≥ 0, and hence 0 ≤ sup 𝐷𝑛 ≤ 1. 𝑛∈ℕ
Therefore, 0.𝑑1 𝑑2 𝑑3 . . . represents a unique number 𝑥 B sup𝑛∈ℕ 𝐷𝑛 ∈ [0, 1]. As 𝑥 is a supremum, then 𝐷𝑛 ≤ 𝑥. Take 𝑚 ∈ ℕ . If 𝑚 < 𝑛, then 𝐷𝑚 − 𝐷𝑛 ≤ 0. If 𝑚 > 𝑛, then computing as above 𝐷𝑚 − 𝐷𝑛 =
𝑑𝑛+1 10𝑛+1
+
𝑑𝑛+2 10𝑛+2
+
𝑑𝑛+3 10𝑛+3
+···+
1 1 𝑑𝑚 1/10)𝑚−𝑛 < ≤ 1 − ( . 𝑚 𝑛 10 10 10𝑛
Take the supremum over 𝑚 to find 𝑥 − 𝐷𝑛 ≤
1 . 10𝑛
We move on to the second item. Take any 𝑥 ∈ (0, 1]. First let us tackle the existence. For convenience, let 𝐷0 B 0. Then, 𝐷0 < 𝑥 ≤ 𝐷0 + 10−0 . Suppose we defined the digits 𝑑1 , 𝑑2 , . . . , 𝑑𝑛 , and that 𝐷 𝑘 < 𝑥 ≤ 𝐷 𝑘 + 10−𝑘 , for 𝑘 = 0, 1, 2, . . . , 𝑛. We need to define 𝑑𝑛+1 . By the Archimedean property of the real numbers, find an integer 𝑗 such that 𝑥 − 𝐷𝑛 ≤ −(𝑛+1) 𝑗10 . Take the least such 𝑗 and obtain (𝑗 − 1)10−(𝑛+1) < 𝑥 − 𝐷𝑛 ≤ 𝑗10−(𝑛+1) .
(1.3)
Let 𝑑𝑛+1 B 𝑗 − 1. As 𝐷𝑛 < 𝑥, then 𝑑𝑛+1 = 𝑗 − 1 ≥ 0. On the other hand, since 𝑥 − 𝐷𝑛 ≤ 10−𝑛 , we have that 𝑗 is at most 10, and therefore 𝑑𝑛+1 ≤ 9. So 𝑑𝑛+1 is a decimal digit. Since 𝐷𝑛+1 = 𝐷𝑛 + 𝑑𝑛+1 10−(𝑛+1) add 𝐷𝑛 to the inequality (1.3) above: 𝐷𝑛+1 = 𝐷𝑛 + (𝑗 − 1)10−(𝑛+1) < 𝑥 ≤ 𝐷𝑛 + 𝑗10−(𝑛+1)
= 𝐷𝑛 + (𝑗 − 1)10−(𝑛+1) + 10−(𝑛+1) = 𝐷𝑛+1 + 10−(𝑛+1) .
And so 𝐷𝑛+1 < 𝑥 ≤ 𝐷𝑛+1 + 10−(𝑛+1) holds. We inductively defined an infinite sequence of digits 0.𝑑1 𝑑2 𝑑3 . . .. Consider 𝐷𝑛 < 𝑥 ≤ 𝐷𝑛 + 10−𝑛 . As 𝐷𝑛 < 𝑥 for all 𝑛, then sup{𝐷𝑛 : 𝑛 ∈ ℕ } ≤ 𝑥. The second inequality for 𝐷𝑛 implies 𝑥 − sup{𝐷𝑚 : 𝑚 ∈ ℕ } ≤ 𝑥 − 𝐷𝑛 ≤ 10−𝑛 .
46
CHAPTER 1. REAL NUMBERS
As the inequality holds for all 𝑛 and 10−𝑛 can be made arbitrarily small (see Exercise 1.5.8), we have 𝑥 ≤ sup{𝐷𝑚 : 𝑚 ∈ ℕ }. Therefore, sup{𝐷𝑚 : 𝑚 ∈ ℕ } = 𝑥. What is left to show is the uniqueness. Suppose 0.𝑒1 𝑒2 𝑒3 . . . is another representation of 𝑥. Let 𝐸𝑛 be the 𝑛-digit truncation of 0.𝑒1 𝑒2 𝑒3 . . ., and suppose 𝐸𝑛 < 𝑥 ≤ 𝐸𝑛 + 10−𝑛 for all 𝑛 ∈ ℕ . Suppose for some 𝐾 ∈ ℕ , 𝑒 𝑛 = 𝑑𝑛 for all 𝑛 < 𝐾, so 𝐷𝐾−1 = 𝐸𝐾−1 . Then 𝐸𝐾 = 𝐷𝐾−1 + 𝑒 𝐾 10−𝐾 < 𝑥 ≤ 𝐸𝐾 + 10−𝐾 = 𝐷𝐾−1 + 𝑒 𝐾 10−𝐾 + 10−𝐾 . Subtracting 𝐷𝐾−1 and multiplying by 10𝐾 we get 𝑒 𝐾 < (𝑥 − 𝐷𝐾−1 )10𝐾 ≤ 𝑒 𝐾 + 1. Similarly,
𝑑𝐾 < (𝑥 − 𝐷𝐾−1 )10𝐾 ≤ 𝑑𝐾 + 1.
Hence, both 𝑒 𝐾 and 𝑑𝐾 are the largest integer 𝑗 such that 𝑗 < (𝑥 − 𝐷𝐾−1 )10𝐾 , and therefore 𝑒 𝐾 = 𝑑𝐾 . That is, the representation is unique. □ The representation is not unique if we do not require 𝐷𝑛 < 𝑥 for all 𝑛. For example, for the number 1/2, the method in the proof obtains the representation 0.49999 . . . . However, 1/2 also has the representation 0.50000 . . .. The only numbers that have nonunique representations are ones that end either in an infinite sequence of 0s or 9s, because the only representation for which 𝐷𝑛 = 𝑥 is one where all digits past the 𝑛th digit are zero. In this case there are exactly two representations of 𝑥 (see the exercises). Let us give another proof of the uncountability of the reals using decimal representations. This is Cantor’s second proof, and is probably better known. This proof may seem shorter, but it is because we already did the hard part above and we are left with a slick trick to prove that ℝ is uncountable. This trick is called Cantor diagonalization and finds use in other proofs as well. Theorem 1.5.2 (Cantor). The set (0, 1] is uncountable. Proof. Let 𝑋 B {𝑥 1 , 𝑥2 , 𝑥3 , . . .} be any countable subset of real numbers in (0, 1]. We will construct a real number not in 𝑋. Let 𝑥 𝑛 = 0.𝑑1𝑛 𝑑2𝑛 𝑑3𝑛 . . . be the unique representation from the proposition, that is, 𝑑 𝑛𝑗 is the 𝑗th digit of the 𝑛th number. Let ( 1 if 𝑑𝑛𝑛 ≠ 1, 𝑒𝑛 B 2 if 𝑑𝑛𝑛 = 1.
47
1.5. DECIMAL REPRESENTATION OF THE REALS
Let 𝐸𝑛 be the 𝑛-digit truncation of 𝑦 = 0.𝑒1 𝑒2 𝑒3 . . .. Because all the digits are nonzero we get 𝐸𝑛 < 𝐸𝑛+1 ≤ 𝑦. Therefore 𝐸𝑛 < 𝑦 ≤ 𝐸𝑛 + 10−𝑛
for all 𝑛, and the representation is the unique one for 𝑦 from the proposition. For every 𝑛, the 𝑛th digit of 𝑦 is different from the 𝑛th digit of 𝑥 𝑛 , so 𝑦 ≠ 𝑥 𝑛 . Therefore 𝑦 ∉ 𝑋, and as 𝑋 was an arbitrary countable subset, (0, 1] must be uncountable. See Figure 1.5 for an example. □ 𝑥1 = 𝑥2 = 𝑥3 = 𝑥4 = 𝑥5 = .. .
0. 0. 0. 0. 0. .. .
1 7 3 8 1 .. .
3 9 0 9 6 .. .
2 4 1 2 0 .. .
1 1 3 5 2 .. .
0 3 4 6 4 .. .
··· ··· ··· ··· ··· .. .
Number not in the list: 𝑦 = 0.21211 . . .
Figure 1.5: Example of Cantor diagonalization, the diagonal digits 𝑑𝑛𝑛 marked.
Using decimal digits we can also find lots of numbers that are not rational. The following proposition is true for every rational number, but we give it only for 𝑥 ∈ (0, 1] for simplicity.
Proposition 1.5.3. If 𝑥 ∈ (0, 1] is a rational number and 𝑥 = 0.𝑑1 𝑑2 𝑑3 . . ., then the decimal digits eventually start repeating. That is, there are positive integers 𝑁 and 𝑃, such that for all 𝑛 ≥ 𝑁, 𝑑𝑛 = 𝑑𝑛+𝑃 . Proof. Suppose 𝑥 = 𝑝/𝑞 for positive integers 𝑝 and 𝑞. Suppose also that 𝑥 is a number with a unique representation, as otherwise we have seen above that both its representations are repeating, see also Exercise 1.5.3. This also means that 𝑥 ≠ 1 so 𝑝 < 𝑞. To compute the first digit we take 10𝑝 and divide by 𝑞. Let 𝑑1 be the quotient, and the remainder 𝑟1 is some integer between 0 and 𝑞 − 1. That is, 𝑑1 is the largest integer such that 𝑑1 𝑞 ≤ 10𝑝 and then 𝑟1 = 10𝑝 − 𝑑1 𝑞. As 𝑝 < 𝑞, then 𝑑1 < 10, so 𝑑1 is a digit. Furthermore, 𝑝 𝑑1 𝑑1 𝑟1 𝑑1 1 ≤ = + ≤ + . 10 𝑞 10 10𝑞 10 10 The first inequality must be strict since 𝑥 has a unique representation. That is, 𝑑1 really is the first digit. What is left is 𝑟1/(10𝑞). This is the same as computing the first digit of 𝑟1/𝑞 . To compute 𝑑2 divide 10𝑟1 by 𝑞, and so on. After computing 𝑛 − 1 digits, we have 𝑝/𝑞 = 𝐷𝑛−1 + 𝑟𝑛−1/(10𝑛−1 𝑞). To get the 𝑛th digit, divide 10𝑟 𝑛−1 by 𝑞 to get quotient 𝑑 𝑛 , remainder 𝑟𝑛 , and the inequalities 𝑑𝑛 𝑟𝑛−1 𝑑𝑛 𝑟𝑛 𝑑𝑛 1 ≤ = + ≤ + . 10 𝑞 10 10𝑞 10 10
48
CHAPTER 1. REAL NUMBERS
Dividing by 10𝑛−1 and adding 𝐷𝑛−1 we find 𝐷𝑛 ≤ 𝐷𝑛−1 +
𝑝 𝑟𝑛−1 1 = ≤ 𝐷 + . 𝑛 𝑞 10𝑛 10𝑛−1 𝑞
By uniqueness we really have the 𝑛th digit 𝑑𝑛 from the construction. The new digit depends only the remainder from the previous step. There are at most 𝑞 possible remainders and hence at some step the process must start repeating itself, and 𝑃 is at most 𝑞. □ The converse of the proposition is also true and is left as an exercise. Example 1.5.4: The number 𝑥 = 0.101001000100001000001 . . . , is irrational. That is, the digits are 𝑛 zeros, then a one, then 𝑛 + 1 zeros, then a one, and so on and so forth. The fact that 𝑥 is irrational follows from the proposition; the digits never start repeating. For every 𝑃, if we go far enough, we find a 1 followed by at least 𝑃 + 1 zeros.
1.5.1
Exercises
Exercise 1.5.1 (Easy): What is the decimal representation of 1 guaranteed by Proposition 1.5.1? Make sure to show that it does satisfy the condition. Exercise 1.5.2: Prove the converse of Proposition 1.5.3, that is, if the digits in the decimal representation of 𝑥 are eventually repeating, then 𝑥 must be rational. Exercise 1.5.3: Show that real numbers 𝑥 ∈ (0, 1) with nonunique decimal representation are exactly the rational numbers that can be written as 10𝑚𝑛 for some integers 𝑚 and 𝑛. In this case show that there exist exactly two representations of 𝑥. Exercise 1.5.4: Let 𝑏 ≥ 2 be an integer. Define a representation of a real number in [0, 1] in terms of base 𝑏 rather than base 10 and prove Proposition 1.5.1 for base 𝑏. Exercise 1.5.5: Using the previous exercise with 𝑏 = 2 (binary), show that cardinality of ℝ is the same as the cardinality of P(ℕ ), obtaining yet another (though related) proof that ℝ is uncountable. Hint: Construct two injections, one from [0, 1] to P(ℕ ) and one from P(ℕ ) to [0, 1]. Hint 2: Given a set 𝐴 ⊂ ℕ , let the 𝑛th binary digit of 𝑥 be 1 if 𝑛 ∈ 𝐴. Exercise 1.5.6 (Challenging): Explicitly construct an injection from [0, 1] × [0, 1] to [0, 1] (think about why this is so surprising* ). Then describe the set of numbers in [0, 1] not in the image of your injection (unless, of course, you managed to construct a bijection). Hint: Consider even and odd digits of the decimal expansion. *With
quite a bit more work (or by applying the Cantor–Bernstein–Schröder theorem) one can prove that there is a bijection. When he proved this result, Cantor apparently wrote “I see it but I don’t believe it.”
1.5. DECIMAL REPRESENTATION OF THE REALS
49
Exercise 1.5.7: Prove that if 𝑥 = 𝑝/𝑞 ∈ (0, 1] is a rational number, 𝑞 > 1, then the period 𝑃 of repeating digits in the decimal representation of 𝑥 is in fact less than or equal to 𝑞 − 1. Exercise 1.5.8: Prove that if 𝑏 ∈ ℕ and 𝑏 ≥ 2, then for every 𝜖 > 0, there is an 𝑛 ∈ ℕ such that 𝑏 −𝑛 < 𝜖. Hint: One possibility is to first prove that 𝑏 𝑛 > 𝑛 for all 𝑛 ∈ ℕ by induction. Exercise 1.5.9: Explicitly construct an injection 𝑓 : ℝ → ℝ \ ℚ using Proposition 1.5.3.
50
CHAPTER 1. REAL NUMBERS
Chapter 2 Sequences and Series 2.1
Sequences and limits
Note: 2.5 lectures Analysis is essentially about taking limits. The most basic type of a limit is a limit of a sequence of real numbers. We have already seen sequences used informally. Let us give the formal definition. Definition 2.1.1. A sequence (of real numbers) is a function 𝑥 : ℕ → ℝ. Instead of 𝑥(𝑛), we usually denote the 𝑛th element in the sequence by 𝑥 𝑛 . To denote a sequence we write† {𝑥 𝑛 }∞ 𝑛=1 . A sequence {𝑥 𝑛 }∞ is bounded if the underlying function is bounded. That is, if there 𝑛=1 exists a 𝐵 ∈ ℝ such that for all 𝑛 ∈ ℕ . |𝑥 𝑛 | ≤ 𝐵
In other words, the sequence {𝑥 𝑛 }∞ is bounded whenever the set {𝑥 𝑛 : 𝑛 ∈ ℕ } is bounded. 𝑛=1 We similarly define the words bounded below and bounded above. When we need to give a concrete sequence, we often give each term as a formula in terms of 𝑛. For example, {1/𝑛 }∞ stands for the sequence 1, 1/2, 1/3, 1/4, 1/5, . . .. The sequence 𝑛=1 ∞ {1/𝑛 } 𝑛=1 is a bounded sequence (𝐵 = 1 suffices). On the other hand, the sequence {𝑛}∞ 𝑛=1 stands for 1, 2, 3, 4, . . ., and this sequence is not bounded (why?). While the notation for is similar‡ to that of a set, the notions are distinct. For a sequence 𝑛 ∞ example, the sequence (−1) 𝑛=1 is the sequence −1, 1, −1, 1, −1, 1, . . ., whereas the set of values, the range of the sequence, is just the set {−1, 1}. We write this set as (−1)𝑛 : 𝑛 ∈ ℕ . Another example of a sequence is the so-called constant sequence. That is a sequence ∞ {𝑐} 𝑛=1 = 𝑐, 𝑐, 𝑐, 𝑐, . . . consisting of a single constant 𝑐 ∈ ℝ repeating indefinitely. † It
is common to use {𝑥 𝑛 } or {𝑥 𝑛 } 𝑛 for brevity. use (𝑥 𝑛 )∞ to denote a sequence instead of {𝑥 𝑛 }∞ , which is what [R2] uses. Both are common. 𝑛=1 𝑛=1
‡ [BS]
52
CHAPTER 2. SEQUENCES AND SERIES
Definition 2.1.2. A sequence {𝑥 𝑛 }∞ is said to converge to a number 𝑥 ∈ ℝ if for every 𝑛=1 𝜖 > 0, there exists an 𝑀 ∈ ℕ such that |𝑥 𝑛 − 𝑥| < 𝜖 for all 𝑛 ≥ 𝑀. The number 𝑥 is called a limit of the sequence. If the limit 𝑥 is unique, we write* lim 𝑥 𝑛 B 𝑥.
𝑛→∞
A sequence that converges is said to be convergent. Otherwise, we say the sequence diverges or that it is divergent. Shortly, in Proposition 2.1.6 we will show that the limit 𝑥 is always unique if it exists. It makes sense to talk about the limit of a sequence and we only need to show that the sequence converges to one number. For the next couple of examples, let us pretend we have already proved that limits are unique. Intuitively, the limit being 𝑥 means that eventually every number in the sequence is close to the number 𝑥. More precisely, we get arbitrarily close to the limit, provided we go far enough in the sequence. It does not mean we ever reach the limit. It is possible, and quite common, that there is no 𝑥 𝑛 in the sequence that equals the limit 𝑥. We illustrate the concept in Figure 2.1. In the figure we first think of the sequence as a graph, as it is a function of ℕ . Secondly, we also plot it as a sequence of labeled points on the real line.
𝑥+𝜖
···
𝑥
𝑥−𝜖
1
2
3
4 𝑥−𝜖
𝑥2
𝑥6
𝑥5 𝑥4
5 𝑥
6
7 M
8
9
10
𝑥+𝜖
𝑥8 𝑥7 𝑥 10 𝑥 9
𝑥3 𝑥1
Figure 2.1: Illustration of convergence. On top, we show the first ten points of the sequence as a graph with 𝑀 and the interval around the limit 𝑥 marked. On bottom, the points of the same sequence are marked on the number line. * In
text, this may get rendered as lim𝑛→∞ 𝑥 𝑛 .
53
2.1. SEQUENCES AND LIMITS
When we write lim𝑛→∞ 𝑥 𝑛 = 𝑥 for some real number 𝑥, we are saying two things: First, that {𝑥 𝑛 }∞ is convergent, and second, that the limit is 𝑥. 𝑛=1 The definition above is one of the most important definitions in analysis, and it is necessary to understand it perfectly. The key point in the definition is that given any 𝜖 > 0, we can find an 𝑀. The 𝑀 can depend on 𝜖, so we only pick an 𝑀 once we know 𝜖. Let us illustrate convergence on a few examples. Example 2.1.3: The constant sequence 1, 1, 1, 1, . . . converges to 1. For every 𝜖 > 0, pick 𝑀 = 1. That is, |𝑥 𝑛 − 𝑥| = |1 − 1| < 𝜖 for all 𝑛. Example 2.1.4: Claim: The sequence {1/𝑛 }∞ is convergent and 𝑛=1 1 = 0. 𝑛→∞ 𝑛 lim
Proof: Given an 𝜖 > 0, find an 𝑀 ∈ ℕ such that 0 < 1/𝑀 < 𝜖 (Archimedean property at work). For all 𝑛 ≥ 𝑀,
1 1 1 1 < 𝜖. |𝑥 𝑛 − 𝑥| = − 0 = = ≤ 𝑛 𝑛 𝑛 𝑀 𝑛 ∞
Example 2.1.5: The sequence (−1) 𝑛=1 is divergent. Proof: If there were a limit 𝑥, then for 𝜖 = 12 we expect an 𝑀 that satisfies the definition. Suppose such an 𝑀 exists. Then for an even 𝑛 ≥ 𝑀, we compute 1/2
> |𝑥 𝑛 − 𝑥| = |1 − 𝑥|
And we obtain a contradiction
and
1/2
> |𝑥 𝑛+1 − 𝑥| = |−1 − 𝑥| .
2 = |1 − 𝑥 − (−1 − 𝑥)| ≤ |1 − 𝑥| + |−1 − 𝑥| < 1/2 + 1/2 = 1. Proposition 2.1.6. A convergent sequence has a unique limit. The proof of this proposition exhibits a useful technique in analysis. Many proofs follow the same general scheme. We want to show a certain quantity is zero. We write the quantity using the triangle inequality as two quantities, and we estimate each one by arbitrarily small numbers. Proof. Suppose {𝑥 𝑛 }∞ has limits 𝑥 and 𝑦. Take an arbitrary 𝜖 > 0. From the definition 𝑛=1 find an 𝑀1 such that for all 𝑛 ≥ 𝑀1 , |𝑥 𝑛 − 𝑥| < 𝜖/2. Similarly, find an 𝑀2 such that for all 𝑛 ≥ 𝑀2 , we have |𝑥 𝑛 − 𝑦| < 𝜖/2. Now take an 𝑛 such that 𝑛 ≥ 𝑀1 and also 𝑛 ≥ 𝑀2 , and estimate |𝑦 − 𝑥| = |𝑥 𝑛 − 𝑥 − (𝑥 𝑛 − 𝑦)| ≤ |𝑥 𝑛 − 𝑥| + |𝑥 𝑛 − 𝑦| 𝜖 𝜖 < + = 𝜖. 2 2
As |𝑦 − 𝑥| < 𝜖 for all 𝜖 > 0, then |𝑦 − 𝑥| = 0 and 𝑦 = 𝑥. Hence the limit (if it exists) is unique. □
54
CHAPTER 2. SEQUENCES AND SERIES
Proposition 2.1.7. A convergent sequence {𝑥 𝑛 }∞ is bounded. 𝑛=1 Proof. Suppose {𝑥 𝑛 }∞ converges to 𝑥. Thus there exists an 𝑀 ∈ ℕ such that for all 𝑛 ≥ 𝑀, 𝑛=1 we have |𝑥 𝑛 − 𝑥| < 1. For 𝑛 ≥ 𝑀, |𝑥 𝑛 | = |𝑥 𝑛 − 𝑥 + 𝑥|
≤ |𝑥 𝑛 − 𝑥| + |𝑥|
< 1 + |𝑥| .
The set |𝑥1 | , |𝑥2 | , . . . , |𝑥 𝑀−1 | , 1 + |𝑥| is a finite set and hence let
𝐵 B max |𝑥 1 | , |𝑥 2 | , . . . , |𝑥 𝑀−1 | , 1 + |𝑥| .
Then for all 𝑛 ∈ ℕ ,
|𝑥 𝑛 | ≤ 𝐵.
∞
□
The sequence (−1)𝑛 𝑛=1 shows that the converse does not hold. A bounded sequence is not necessarily convergent.
Example 2.1.8: Let us show
n
o∞ 𝑛 2 +1 𝑛 2 +𝑛 𝑛=1
converges and
𝑛2 + 1 = 1. 𝑛→∞ 𝑛 2 + 𝑛 lim
Given 𝜖 > 0, find 𝑀 ∈ ℕ such that
1 𝑀
< 𝜖. Then for all 𝑛 ≥ 𝑀,
2 2 𝑛 +1 𝑛 + 1 − (𝑛 2 + 𝑛) 1 − 𝑛 𝑛 2 + 𝑛 − 1 = = 𝑛2 + 𝑛 𝑛2 + 𝑛
𝑛−1 𝑛2 + 𝑛 𝑛 1 ≤ 2 = 𝑛 +𝑛 𝑛+1 1 1 ≤ ≤ < 𝜖. 𝑛 𝑀
=
2
+1 Therefore, lim𝑛→∞ 𝑛𝑛2 +𝑛 = 1. This example shows that sometimes to get what you want, you must throw away some information to get a simpler estimate.
2.1.1
Monotone sequences
The simplest type of a sequence is a monotone sequence. Checking that a monotone sequence converges is as easy as checking that it is bounded. It is also easy to find the limit for a convergent monotone sequence, provided we can find the supremum or infimum of a countable set of numbers.
55
2.1. SEQUENCES AND LIMITS
Definition 2.1.9. A sequence {𝑥 𝑛 }∞ is monotone increasing if 𝑥 𝑛 ≤ 𝑥 𝑛+1 for all 𝑛 ∈ ℕ . A 𝑛=1 ∞ sequence {𝑥 𝑛 } 𝑛=1 is monotone decreasing if 𝑥 𝑛 ≥ 𝑥 𝑛+1 for all 𝑛 ∈ ℕ . If a sequence is either monotone increasing or monotone decreasing, we can simply say the sequence is monotone.* For example, {𝑛}∞ is monotone increasing, {1/𝑛 }∞ is monotone decreasing, the 𝑛=1 𝑛=1 ∞ constant sequence {1} 𝑛=1 is both monotone increasing and monotone decreasing, and ∞ (−1)𝑛 𝑛=1 is not monotone. First few terms of a sample monotone increasing sequence are shown in Figure 2.2.
1
2
3
4
5
6
7
8
9 10
Figure 2.2: First few terms of a monotone increasing sequence as a graph.
Theorem 2.1.10 (Monotone convergence theorem). A monotone sequence {𝑥 𝑛 }∞ is bounded 𝑛=1 if and only if it is convergent. Furthermore, if {𝑥 𝑛 }∞ is monotone increasing and bounded, then 𝑛=1 lim 𝑥 𝑛 = sup{𝑥 𝑛 : 𝑛 ∈ ℕ }.
𝑛→∞
If {𝑥 𝑛 }∞ is monotone decreasing and bounded, then 𝑛=1 lim 𝑥 𝑛 = inf{𝑥 𝑛 : 𝑛 ∈ ℕ }.
𝑛→∞
Proof. Consider a monotone increasing sequence {𝑥 𝑛 }∞ . Suppose first the sequence is 𝑛=1 bounded, that is, the set {𝑥 𝑛 : 𝑛 ∈ ℕ } is bounded. Let 𝑥 B sup{𝑥 𝑛 : 𝑛 ∈ ℕ }. Let 𝜖 > 0 be arbitrary. As 𝑥 is the supremum, there must be at least one 𝑀 ∈ ℕ such that 𝑥 𝑀 > 𝑥 − 𝜖. As {𝑥 𝑛 }∞ is monotone increasing, then it is easy to see (by induction) that 𝑛=1 𝑥 𝑛 ≥ 𝑥 𝑀 for all 𝑛 ≥ 𝑀. Hence for all 𝑛 ≥ 𝑀, * Some
|𝑥 𝑛 − 𝑥| = 𝑥 − 𝑥 𝑛 ≤ 𝑥 − 𝑥 𝑀 < 𝜖.
authors use the word monotonic.
56
CHAPTER 2. SEQUENCES AND SERIES
So {𝑥 𝑛 }∞ converges to 𝑥. Therefore, a bounded monotone increasing sequence converges. 𝑛=1 For the other direction, we already proved that a convergent sequence is bounded. The proof for monotone decreasing sequences is left as an exercise. □ A monotone increasing sequence {𝑥 𝑛 }∞ is always bounded from below since 𝑥1 ≤ 𝑛=1 𝑥2 ≤ · · · ≤ 𝑥 𝑛 for any 𝑛, so 𝑥1 is a lower bound. So to see if a monotone increasing sequence is bounded, it is enough to check if it is bounded above. Similarly, a monotone decreasing sequence is always bounded from above, so it is enough to check whether it is bounded from below. Example 2.1.11: Take the sequence
∞ √1 . 𝑛 𝑛=1 as √1𝑛 > 0
The sequence is bounded below for all 𝑛 ∈ ℕ . Let us show that it is monotone √ √ decreasing. We start with 𝑛 + 1 ≥ 𝑛 (why is that true?). From this inequality we obtain √
1
1 ≤ √ . 𝑛 𝑛+1
So the sequence is monotone decreasing and bounded below (hence bounded). Via Theorem 2.1.10 we find that the sequence is convergent and 1 1 lim √ = inf √ : 𝑛 ∈ ℕ . 𝑛→∞ 𝑛 𝑛
We already know that the infimum is greater than or equal to 0, as 0 is a lower bound. Take a number 𝑏 ≥ 0 such that 𝑏 ≤ √1𝑛 for all 𝑛. We square both sides to obtain 𝑏2 ≤
1 𝑛
for all 𝑛 ∈ ℕ .
We have seen before that this implies that 𝑏 2 ≤ 0 (a consequence of the Archimedean property). As 𝑏 2 ≥ 0 as well, we have 𝑏 2 = 0 and so 𝑏 = 0. Hence, 𝑏 = 0 is the greatest lower bound, and lim √1𝑛 = 0. 𝑛→∞
Example 2.1.12: A word of caution: Showing that a monotone sequence is bounded in order to use Theorem 2.1.10 may be difficult. The sequence {1 + 1/2 + · · · + 1/𝑛 }∞ is a 𝑛=1 monotone increasing sequence that grows slowly and in fact grows slower and slower as 𝑛 gets larger. We will see, once we get to series, that this sequence has no upper bound and so does not converge. It is not at all obvious that this sequence has no upper bound. A common example of where monotone sequences arise is the following proposition. The proof is left as an exercise. Proposition 2.1.13. Let 𝑆 ⊂ ℝ be a nonempty bounded set. Then there exist monotone sequences and {𝑦𝑛 }∞ such that 𝑥 𝑛 , 𝑦𝑛 ∈ 𝑆 and {𝑥 𝑛 }∞ 𝑛=1 𝑛=1 sup 𝑆 = lim 𝑥 𝑛 𝑛→∞
and
inf 𝑆 = lim 𝑦𝑛 . 𝑛→∞
57
2.1. SEQUENCES AND LIMITS
2.1.2
Tail of a sequence
Definition 2.1.14. For a sequence {𝑥 𝑛 }∞ , the 𝐾-tail (where 𝐾 ∈ ℕ ), or just the tail, of 𝑛=1 ∞ {𝑥 𝑛 } 𝑛=1 is the sequence starting at 𝐾 + 1, usually written as {𝑥 𝑛+𝐾 }∞ 𝑛=1
or
{𝑥 𝑛 }∞ 𝑛=𝐾+1 .
For example, the 4-tail of {1/𝑛 }∞ is 1/5, 1/6, 1/7, 1/8, . . .. The 0-tail of a sequence is the 𝑛=1 sequence itself. The convergence and the limit of a sequence only depends on its tail. Proposition 2.1.15. Let {𝑥 𝑛 }∞ be a sequence. Then the following statements are equivalent: 𝑛=1 (i) The sequence {𝑥 𝑛 }∞ converges. 𝑛=1
(ii) The 𝐾-tail {𝑥 𝑛+𝐾 }∞ converges for all 𝐾 ∈ ℕ . 𝑛=1
(iii) The 𝐾-tail {𝑥 𝑛+𝐾 }∞ converges for some 𝐾 ∈ ℕ . 𝑛=1
Furthermore, if any (and hence all) of the limits exist, then for all 𝐾 ∈ ℕ lim 𝑥 𝑛 = lim 𝑥 𝑛+𝐾 .
𝑛→∞
𝑛→∞
Proof. It is clear that (ii) implies (iii). We will therefore show first that (i) implies (ii), and then we will show that (iii) implies (i). That is, to prove
(i)
(ii) to prove
(iii)
In the process we will also show that the limits are equal. We start with (i) implies (ii). Suppose {𝑥 𝑛 }∞ converges to some 𝑥 ∈ ℝ. Let 𝐾 ∈ ℕ be 𝑛=1 arbitrary, and define 𝑦𝑛 B 𝑥 𝑛+𝐾 . We wish to show that {𝑦𝑛 }∞ converges to 𝑥. Given an 𝑛=1 𝜖 > 0, there exists an 𝑀 ∈ ℕ such that |𝑥 − 𝑥 𝑛 | < 𝜖 for all 𝑛 ≥ 𝑀. Note that 𝑛 ≥ 𝑀 implies 𝑛 + 𝐾 ≥ 𝑀. Therefore, for all 𝑛 ≥ 𝑀, we have |𝑥 − 𝑦𝑛 | = |𝑥 − 𝑥 𝑛+𝐾 | < 𝜖. Consequently, {𝑦𝑛 }∞ converges to 𝑥. 𝑛=1 Let us move to (iii) implies (i). Let 𝐾 ∈ ℕ be given, define 𝑦𝑛 B 𝑥 𝑛+𝐾 , and suppose that {𝑦𝑛 }∞ converges to 𝑥 ∈ ℝ. That is, given an 𝜖 > 0, there exists an 𝑀 ′ ∈ ℕ such that 𝑛=1 |𝑥 − 𝑦𝑛 | < 𝜖 for all 𝑛 ≥ 𝑀 ′. Let 𝑀 B 𝑀 ′ + 𝐾. Then 𝑛 ≥ 𝑀 implies 𝑛 − 𝐾 ≥ 𝑀 ′. Thus, whenever 𝑛 ≥ 𝑀, we have |𝑥 − 𝑥 𝑛 | = |𝑥 − 𝑦𝑛−𝐾 | < 𝜖. Therefore, {𝑥 𝑛 }∞ converges to 𝑥. 𝑛=1
□
At the end of the day, the limit does not care about how the sequence begins, it only cares about the tail of the sequence. The beginning of the sequence may be arbitrary.
58
CHAPTER 2. SEQUENCES AND SERIES
𝑛 For example, the sequence defined by 𝑥 𝑛 B 𝑛 2 +16 is decreasing if we start at 𝑛 = 4 (it is ∞ 1 1 3 1 increasing before). That is: {𝑥 𝑛 } 𝑛=1 = /17 , /10 , /25 , /8, 5/41 , 3/26 , 7/65 , 1/10 , 9/97 , 5/58 , . . ., and 1/17
< 1/10 < 3/25 < 1/8 > 5/41 > 3/26 > 7/65 > 1/10 > 9/97 > 5/58 > . . . .
If we throw away the first 3 terms and look at the 3-tail, it is decreasing. The proof is left as an exercise. Since the 3-tail is monotone and bounded below by zero, it is convergent, and therefore the sequence is convergent.
2.1.3
Subsequences
It is useful to sometimes consider only some terms of a sequence. A subsequence of {𝑥 𝑛 }∞ 𝑛=1 is a sequence that contains only some of the numbers from {𝑥 𝑛 }∞ in the same order. 𝑛=1
Definition 2.1.16. Let {𝑥 𝑛 }∞ be a sequence. Let {𝑛 𝑖 }∞ be a strictly increasing sequence 𝑛=1 𝑖=1 of natural numbers, that is, 𝑛 𝑖 < 𝑛 𝑖+1 for all 𝑖 ∈ ℕ (in other words 𝑛1 < 𝑛2 < 𝑛3 < · · · ). The sequence {𝑥 𝑛 𝑖 }∞ 𝑖=1 is called a subsequence of {𝑥 𝑛 }∞ . 𝑛=1
So the subsequence is the sequence 𝑥 𝑛1 , 𝑥 𝑛2 , 𝑥 𝑛3 , . . .. Consider the sequence {1/𝑛 }∞ . The 𝑛=1 ∞ 1 sequence { /3𝑖 } 𝑖=1 is a subsequence. To see how these two sequences fit in the definition, take 𝑛 𝑖 B 3𝑖, that is, {𝑛 𝑖 }∞ is the sequence 3, 6, 9, 12, . . .. The numbers 𝑥 𝑛 𝑖 in the subsequence 𝑖=1 must come from the original sequence. So 1, 0, 1/3, 0, 1/5, . . . is not a subsequence of {1/𝑛 }∞ . 𝑛=1 1 1 1 Similarly, order must be preserved. So the sequence 1, /3, /2, /5, . . . is not a subsequence of {1/𝑛 }∞ . 𝑛=1 A tail of a sequence is one special type of a subsequence. For an arbitrary subsequence, we have the following proposition about convergence. Proposition 2.1.17. If {𝑥 𝑛 }∞ is a convergent sequence, then every subsequence {𝑥 𝑛 𝑖 }∞ is also 𝑛=1 𝑖=1 convergent, and lim 𝑥 𝑛 = lim 𝑥 𝑛 𝑖 . 𝑛→∞
𝑖→∞
Proof. Suppose lim𝑛→∞ 𝑥 𝑛 = 𝑥. So for every 𝜖 > 0, there is an 𝑀 ∈ ℕ such that for all 𝑛 ≥ 𝑀, |𝑥 𝑛 − 𝑥| < 𝜖.
It is not hard to prove (do it!) by induction that 𝑛 𝑖 ≥ 𝑖 for all 𝑖 ∈ ℕ . Hence 𝑖 ≥ 𝑀 implies 𝑛 𝑖 ≥ 𝑀. Thus, for all 𝑖 ≥ 𝑀, |𝑥 𝑛 𝑖 − 𝑥| < 𝜖, and we are done.
□
Example 2.1.18: Existence of a convergent subsequence does not imply convergence of the sequence itself. Take the sequence 0, 1, 0, 1, 0, 1, . . .. That is, 𝑥 𝑛 = 0 if 𝑛 is odd, and 𝑥 𝑛 = 1 if 𝑛 is even. The sequence {𝑥 𝑛 }∞ is divergent; however, the subsequence {𝑥2𝑖 }∞ 𝑛=1 𝑖=1 converges to 1 and the subsequence {𝑥2𝑖+1 }∞ converges to 0. Compare Proposition 2.3.7 . 𝑖=1
59
2.1. SEQUENCES AND LIMITS
2.1.4
Exercises
In the following exercises, feel free to use what you know from calculus to find the limit, if it exists. But you must prove that you found the correct limit, or prove that the sequence is divergent. Exercise 2.1.1: Is the sequence {3𝑛}∞ bounded? Prove or disprove. 𝑛=1
Exercise 2.1.2: Is the sequence {𝑛}∞ convergent? If so, what is the limit? 𝑛=1
(−1)𝑛 Exercise 2.1.3: Is the sequence 2𝑛
∞
convergent? If so, what is the limit?
𝑛=1
Exercise 2.1.4: Is the sequence {2−𝑛 }∞ convergent? If so, what is the limit? 𝑛=1 Exercise 2.1.5: Is the sequence
n 𝑛 o∞
convergent? If so, what is the limit? 𝑛 + 1 𝑛=1 n 𝑛 o∞ convergent? If so, what is the limit? Exercise 2.1.6: Is the sequence 2 𝑛 + 1 𝑛=1 Exercise 2.1.7: Let {𝑥 𝑛 }∞ be a sequence. 𝑛=1
a) Show that lim 𝑥 𝑛 = 0 (that is, the limit exists and is zero) if and only if lim |𝑥 𝑛 | = 0. 𝑛→∞
b) Find an example such that
𝑛→∞
{|𝑥 𝑛 |}∞ 𝑛=1 𝑛 ∞
Exercise 2.1.8: Is the sequence
2 𝑛!
converges and
{𝑥 𝑛 }∞ 𝑛=1
diverges.
convergent? If so, what is the limit?
𝑛=1
1 Exercise 2.1.9: Show that the sequence √3 𝑛 find the limit.
∞
𝑛+1 Exercise 2.1.10: Show that the sequence 𝑛 find the limit.
is monotone and bounded. Then use Theorem 2.1.10 to
𝑛=1
∞
is monotone and bounded. Then use Theorem 2.1.10 to
𝑛=1
Exercise 2.1.11: Finish the proof of Theorem 2.1.10 for monotone decreasing sequences. Exercise 2.1.12: Prove Proposition 2.1.13. Exercise 2.1.13: Let {𝑥 𝑛 }∞ be a convergent monotone sequence. Suppose there exists a 𝑘 ∈ ℕ such that 𝑛=1 lim 𝑥 𝑛 = 𝑥 𝑘 .
𝑛→∞
Show that 𝑥 𝑛 = 𝑥 𝑘 for all 𝑛 ≥ 𝑘.
Exercise 2.1.14: Find a convergent subsequence of the sequence (−1)𝑛
Exercise 2.1.15: Let {𝑥 𝑛 }∞ be a sequence defined by 𝑛=1
( 𝑥𝑛 B
𝑛
if 𝑛 is odd,
1/𝑛
if 𝑛 is even.
a) Is the sequence bounded? (prove or disprove) b) Is there a convergent subsequence? If so, find it.
∞
𝑛=1
.
60
CHAPTER 2. SEQUENCES AND SERIES
Exercise 2.1.16: Let {𝑥 𝑛 }∞ be a sequence. Suppose there are two convergent subsequences {𝑥 𝑛 𝑖 }∞ and 𝑛=1 𝑖=1 ∞ {𝑥 𝑚𝑖 } 𝑖=1 . Suppose lim 𝑥 𝑛 𝑖 = 𝑎 and lim 𝑥 𝑚𝑖 = 𝑏, 𝑖→∞
where 𝑎 ≠ 𝑏. Prove that
{𝑥 𝑛 }∞ 𝑛=1
𝑖→∞
is not convergent, without using Proposition 2.1.17.
Exercise 2.1.17 (Tricky): Find a sequence {𝑥 𝑛 }∞ such that for every 𝑦 ∈ ℝ, there exists a subsequence 𝑛=1 ∞ {𝑥 𝑛 𝑖 } 𝑖=1 converging to 𝑦. Exercise 2.1.18 (Easy): Let {𝑥 𝑛 }∞ be a sequence and 𝑥 ∈ ℝ. Suppose for every 𝜖 > 0, there is an 𝑀 such 𝑛=1 that |𝑥 𝑛 − 𝑥| ≤ 𝜖 for all 𝑛 ≥ 𝑀. Show that lim 𝑥 𝑛 = 𝑥. 𝑛→∞
Exercise 2.1.19 (Easy): Let {𝑥 𝑛 }∞ be a sequence and 𝑥 ∈ ℝ such that there exists a 𝑘 ∈ ℕ such that for all 𝑛=1 ∞ 𝑛 ≥ 𝑘, 𝑥 𝑛 = 𝑥. Prove that {𝑥 𝑛 } 𝑛=1 converges to 𝑥. Exercise 2.1.20: Let {𝑥 𝑛 }∞ be a sequence and define a sequence {𝑦𝑛 }∞ by 𝑦2𝑘 B 𝑥 𝑘 2 and 𝑦2𝑘−1 B 𝑥 𝑘 𝑛=1 𝑛=1 ∞ for all 𝑘 ∈ ℕ . Prove that {𝑥 𝑛 } 𝑛=1 converges if and only if {𝑦𝑛 }∞ converges. Furthermore, prove that if they 𝑛=1 converge, then lim 𝑥 𝑛 = lim 𝑦𝑛 . 𝑛→∞
𝑛→∞
𝑛 Exercise 2.1.21: Show that the 3-tail of the sequence defined by 𝑥 𝑛 B 𝑛 2 +16 is monotone decreasing. Hint: Suppose 𝑛 ≥ 𝑚 ≥ 4 and consider the numerator of the expression 𝑥 𝑛 − 𝑥 𝑚 .
Exercise 2.1.22: Suppose that {𝑥 𝑛 }∞ is a sequence such that the subsequences {𝑥 2𝑛 }∞ , {𝑥2𝑛−1 }∞ , and 𝑛=1 𝑛=1 𝑛=1 ∞ ∞ {𝑥 3𝑛 } 𝑛=1 all converge. Show that {𝑥 𝑛 } 𝑛=1 is convergent. Exercise 2.1.23: Suppose that {𝑥 𝑛 }∞ is a monotone increasing sequence that has a convergent subsequence. 𝑛=1 ∞ Show that {𝑥 𝑛 } 𝑛=1 is convergent. Note: So Proposition 2.1.17 is an “if and only if” for monotone sequences.
61
2.2. FACTS ABOUT LIMITS OF SEQUENCES
2.2
Facts about limits of sequences
Note: 2–2.5 lectures, recursively defined sequences can safely be skipped In this section we go over some basic results about the limits of sequences. We start by looking at how sequences interact with inequalities.
2.2.1
Limits and inequalities
A basic lemma about limits and inequalities is the so-called squeeze lemma. It allows us to show convergence of sequences in difficult cases if we find two other simpler convergent sequences that “squeeze” the original sequence. Lemma 2.2.1 (Squeeze lemma). Let {𝑎 𝑛 }∞ , {𝑏 𝑛 }∞ , and {𝑥 𝑛 }∞ be sequences such that 𝑛=1 𝑛=1 𝑛=1 𝑎𝑛 ≤ 𝑥𝑛 ≤ 𝑏𝑛
for all 𝑛 ∈ ℕ .
Suppose {𝑎 𝑛 }∞ and {𝑏 𝑛 }∞ converge and 𝑛=1 𝑛=1 lim 𝑎 𝑛 = lim 𝑏 𝑛 .
𝑛→∞
𝑛→∞
Then {𝑥 𝑛 }∞ converges and 𝑛=1 lim 𝑥 𝑛 = lim 𝑎 𝑛 = lim 𝑏 𝑛 .
𝑛→∞
𝑛→∞
𝑛→∞
Proof. Let 𝑥 B lim𝑛→∞ 𝑎 𝑛 = lim𝑛→∞ 𝑏 𝑛 . Let 𝜖 > 0 be given. Find an 𝑀1 such that for all 𝑛 ≥ 𝑀1 , we have that |𝑎 𝑛 − 𝑥| < 𝜖, and an 𝑀2 such that for all 𝑛 ≥ 𝑀2 , we have |𝑏 𝑛 − 𝑥| < 𝜖. Set 𝑀 B max{𝑀1 , 𝑀2 }. Suppose 𝑛 ≥ 𝑀. In particular, 𝑥 − 𝑎 𝑛 < 𝜖, or 𝑥 − 𝜖 < 𝑎 𝑛 . Similarly, 𝑏 𝑛 < 𝑥 + 𝜖. Putting everything together, we find 𝑥 − 𝜖 < 𝑎 𝑛 ≤ 𝑥 𝑛 ≤ 𝑏 𝑛 < 𝑥 + 𝜖. In other words, −𝜖 < 𝑥 𝑛 − 𝑥 < 𝜖 or |𝑥 𝑛 − 𝑥| < 𝜖. So {𝑥 𝑛 }∞ converges to 𝑥. See 𝑛=1 Figure 2.3. □
𝜖 𝑥−𝜖
𝑎𝑛
Figure 2.3: Squeeze lemma proof in picture.
𝜖 𝑥
𝑥𝑛
𝑏𝑛
𝑥+𝜖
62
CHAPTER 2. SEQUENCES AND SERIES
Example 2.2.2: One application of the squeeze lemma is to compute limits of sequences using limits that we already know. For example, consider the sequence { 𝑛 √1 𝑛 }∞ . Since 𝑛=1 √ 𝑛 ≥ 1 for all 𝑛 ∈ ℕ , we have 1 1 0≤ √ ≤ 𝑛 𝑛 𝑛 for all 𝑛 ∈ ℕ . We already know lim𝑛→∞ 1/𝑛 = 0. Hence, using the constant sequence {0}∞ 𝑛=1 and the sequence {1/𝑛 }∞ in the squeeze lemma, we conclude 𝑛=1 lim
𝑛→∞
1 √
𝑛 𝑛
= 0.
Limits, when they exist, preserve non-strict inequalities. Lemma 2.2.3. Let {𝑥 𝑛 }∞ and {𝑦𝑛 }∞ be convergent sequences and 𝑛=1 𝑛=1 𝑥 𝑛 ≤ 𝑦𝑛
Then
for all 𝑛 ∈ ℕ .
lim 𝑥 𝑛 ≤ lim 𝑦𝑛 .
𝑛→∞
𝑛→∞
Proof. Let 𝑥 B lim𝑛→∞ 𝑥 𝑛 and 𝑦 B lim𝑛→∞ 𝑦𝑛 . Let 𝜖 > 0 be given. Find an 𝑀1 such that for all 𝑛 ≥ 𝑀1 , we have |𝑥 𝑛 − 𝑥| < 𝜖/2. Find an 𝑀2 such that for all 𝑛 ≥ 𝑀2 , we have |𝑦𝑛 − 𝑦| < 𝜖/2. In particular, for some 𝑛 ≥ max{𝑀1 , 𝑀2 }, we have 𝑥 − 𝑥 𝑛 < 𝜖/2 and 𝑦𝑛 − 𝑦 < 𝜖/2. We add these inequalities to obtain 𝑦𝑛 − 𝑥 𝑛 + 𝑥 − 𝑦 < 𝜖,
or
𝑦𝑛 − 𝑥 𝑛 < 𝑦 − 𝑥 + 𝜖.
Since 𝑥 𝑛 ≤ 𝑦𝑛 , we have 0 ≤ 𝑦𝑛 − 𝑥 𝑛 and hence 0 < 𝑦 − 𝑥 + 𝜖. In other words, 𝑥 − 𝑦 < 𝜖.
Because 𝜖 > 0 was arbitrary, we obtain 𝑥 − 𝑦 ≤ 0. Therefore, 𝑥 ≤ 𝑦.
□
The next corollary follows by using constant sequences in Lemma 2.2.3. The proof is left as an exercise. Corollary 2.2.4. (i) If {𝑥 𝑛 }∞ is a convergent sequence such that 𝑥 𝑛 ≥ 0 for all 𝑛 ∈ ℕ , then 𝑛=1 lim 𝑥 𝑛 ≥ 0.
𝑛→∞
(ii) Let 𝑎, 𝑏 ∈ ℝ and let {𝑥 𝑛 }∞ be a convergent sequence such that 𝑛=1 𝑎 ≤ 𝑥𝑛 ≤ 𝑏
Then
for all 𝑛 ∈ ℕ .
𝑎 ≤ lim 𝑥 𝑛 ≤ 𝑏. 𝑛→∞
In Lemma 2.2.3 and Corollary 2.2.4 we cannot simply replace all the non-strict inequalities with strict inequalities. For example, let 𝑥 𝑛 B −1/𝑛 and 𝑦𝑛 B 1/𝑛 . Then 𝑥 𝑛 < 𝑦𝑛 , 𝑥 𝑛 < 0,
63
2.2. FACTS ABOUT LIMITS OF SEQUENCES
and 𝑦𝑛 > 0 for all 𝑛. However, these inequalities are not preserved by the limit operation as lim𝑛→∞ 𝑥 𝑛 = lim𝑛→∞ 𝑦𝑛 = 0. The moral of this example is that strict inequalities may become non-strict inequalities when limits are applied; if we know 𝑥 𝑛 < 𝑦𝑛 for all 𝑛, we may only conclude lim 𝑥 𝑛 ≤ lim 𝑦𝑛 . 𝑛→∞
𝑛→∞
This issue is a common source of errors.
2.2.2
Continuity of algebraic operations
Limits interact nicely with algebraic operations. Proposition 2.2.5. Let {𝑥 𝑛 }∞ and {𝑦𝑛 }∞ be convergent sequences. 𝑛=1 𝑛=1 (i) The sequence {𝑧 𝑛 }∞ , where 𝑧 𝑛 B 𝑥 𝑛 + 𝑦𝑛 , converges and 𝑛=1
lim (𝑥 𝑛 + 𝑦𝑛 ) = lim 𝑧 𝑛 = lim 𝑥 𝑛 + lim 𝑦𝑛 .
𝑛→∞
𝑛→∞
𝑛→∞
𝑛→∞
(ii) The sequence {𝑧 𝑛 }∞ , where 𝑧 𝑛 B 𝑥 𝑛 − 𝑦𝑛 , converges and 𝑛=1 lim (𝑥 𝑛 − 𝑦𝑛 ) = lim 𝑧 𝑛 = lim 𝑥 𝑛 − lim 𝑦𝑛 .
𝑛→∞
𝑛→∞
𝑛→∞
𝑛→∞
(iii) The sequence {𝑧 𝑛 }∞ , where 𝑧 𝑛 B 𝑥 𝑛 𝑦𝑛 , converges and 𝑛=1
lim (𝑥 𝑛 𝑦𝑛 ) = lim 𝑧 𝑛 = lim 𝑥 𝑛
𝑛→∞
𝑛→∞
𝑛→∞
lim 𝑦𝑛 .
𝑛→∞
(iv) If lim𝑛→∞ 𝑦𝑛 ≠ 0 and 𝑦𝑛 ≠ 0 for all 𝑛 ∈ ℕ , then the sequence {𝑧 𝑛 }∞ , where 𝑧 𝑛 B 𝑛=1 converges and
𝑥𝑛 , 𝑦𝑛
lim𝑛→∞ 𝑥 𝑛 𝑥𝑛 = lim 𝑧 𝑛 = . 𝑛→∞ 𝑛→∞ 𝑦 𝑛 lim𝑛→∞ 𝑦𝑛 lim
Proof. We start with (i). Suppose {𝑥 𝑛 }∞ and {𝑦𝑛 }∞ are convergent sequences and write 𝑛=1 𝑛=1 𝑧 𝑛 B 𝑥 𝑛 + 𝑦𝑛 . Let 𝑥 B lim𝑛→∞ 𝑥 𝑛 , 𝑦 B lim𝑛→∞ 𝑦𝑛 , and 𝑧 B 𝑥 + 𝑦. Let 𝜖 > 0 be given. Find an 𝑀1 such that for all 𝑛 ≥ 𝑀1 , we have |𝑥 𝑛 − 𝑥| < 𝜖/2. Find an 𝑀2 such that for all 𝑛 ≥ 𝑀2 , we have |𝑦𝑛 − 𝑦| < 𝜖/2. Take 𝑀 B max{𝑀1 , 𝑀2 }. For all 𝑛 ≥ 𝑀, we have |𝑧 𝑛 − 𝑧| = |(𝑥 𝑛 + 𝑦𝑛 ) − (𝑥 + 𝑦)| = |𝑥 𝑛 − 𝑥 + 𝑦𝑛 − 𝑦|
≤ |𝑥 𝑛 − 𝑥| + |𝑦𝑛 − 𝑦| 𝜖 𝜖 < + = 𝜖. 2 2
Therefore (i) is proved. Proof of (ii) is almost identical and is left as an exercise.
64
CHAPTER 2. SEQUENCES AND SERIES
Let us tackle (iii). Suppose again that {𝑥 𝑛 }∞ and {𝑦𝑛 }∞ are convergent sequences 𝑛=1 𝑛=1 and write 𝑧 𝑛 B 𝑥 𝑛 𝑦𝑛 . Let 𝑥 B lim𝑛→∞ 𝑥 𝑛 , 𝑦 B lim𝑛→∞ 𝑦𝑛 , and 𝑧 B 𝑥𝑦. Let 𝜖 > 0 be given. Let 𝐾 B max{|𝑥| , |𝑦| , 𝜖/3 , 1}. Find an 𝑀1 such that for all 𝑛 ≥ 𝑀1 , 𝜖 𝜖 we have |𝑥 𝑛 − 𝑥| < 3𝐾 . Find an 𝑀2 such that for all 𝑛 ≥ 𝑀2 , we have |𝑦𝑛 − 𝑦| < 3𝐾 . Take 𝑀 B max{𝑀1 , 𝑀2 }. For all 𝑛 ≥ 𝑀, we have |𝑧 𝑛 − 𝑧| = |(𝑥 𝑛 𝑦𝑛 ) − (𝑥𝑦)|
= |(𝑥 𝑛 − 𝑥 + 𝑥)(𝑦𝑛 − 𝑦 + 𝑦) − 𝑥𝑦|
= |(𝑥 𝑛 − 𝑥)𝑦 + 𝑥(𝑦𝑛 − 𝑦) + (𝑥 𝑛 − 𝑥)(𝑦𝑛 − 𝑦)|
≤ |(𝑥 𝑛 − 𝑥)𝑦| + |𝑥(𝑦𝑛 − 𝑦)| + |(𝑥 𝑛 − 𝑥)(𝑦𝑛 − 𝑦)|
= |𝑥 𝑛 − 𝑥| |𝑦| + |𝑥| |𝑦𝑛 − 𝑦| + |𝑥 𝑛 − 𝑥| |𝑦𝑛 − 𝑦| 𝜖 𝜖 𝜖 𝜖 𝐾+𝐾 + (now notice that < 3𝐾 3𝐾 3𝐾 3𝐾 𝜖 𝜖 𝜖 ≤ + + = 𝜖. 3 3 3
𝜖 3𝐾
≤ 1 and 𝐾 ≥ 1)
Finally, we examine (iv). Instead of proving (iv) directly, we prove the following simpler claim: Claim: If {𝑦𝑛 }∞ is a convergent sequence such that lim𝑛→∞ 𝑦𝑛 ≠ 0 and 𝑦𝑛 ≠ 0 for all 𝑛 ∈ ℕ , 𝑛=1 ∞ 1 then { /𝑦𝑛 } 𝑛=1 converges and lim
𝑛→∞
1 1 = . 𝑦𝑛 lim𝑛→∞ 𝑦𝑛
Once the claim is proved, we take the sequence {1/𝑦𝑛 }∞ , multiply it by the sequence 𝑛=1 {𝑥 𝑛 }∞ and apply item (iii) . 𝑛=1 Proof of claim: Let 𝜖 > 0 be given. Let 𝑦 B lim𝑛→∞ 𝑦𝑛 . As |𝑦| ≠ 0, then n o
min |𝑦| 2 2𝜖 ,
|𝑦| 2
> 0. Find an 𝑀 such that for all 𝑛 ≥ 𝑀, we have 𝜖 |𝑦| . |𝑦𝑛 − 𝑦| < min |𝑦| , 2 2
For all 𝑛 ≥ 𝑀, we have |𝑦 − 𝑦𝑛 |
0).
√ √ 𝑥𝑛 − 𝑥 𝑥𝑛 − 𝑥 = √ 𝑥 + √𝑥 𝑛
1
√ |𝑥 𝑛 − 𝑥| 𝑥𝑛 + 𝑥 1 ≤ √ |𝑥 𝑛 − 𝑥| . 𝑥
=√
We leave the rest of the proof to the reader.
□
66
CHAPTER 2. SEQUENCES AND SERIES 1/𝑘
A similar proof works for the 𝑘th root. That is, we also obtain lim𝑛→∞ 𝑥 𝑛 = (lim𝑛→∞ 𝑥 𝑛 )1/𝑘 . We leave this to the reader as a challenging exercise. We may also want to take the limit past the absolute value sign. The converse of this proposition is not true, see Exercise 2.1.7 part b). Proposition 2.2.7. If {𝑥 𝑛 }∞ is a convergent sequence, then {|𝑥 𝑛 |}∞ is convergent and 𝑛=1 𝑛=1
lim |𝑥 𝑛 | = lim 𝑥 𝑛 .
𝑛→∞
𝑛→∞
Proof. We simply note the reverse triangle inequality
|𝑥 𝑛 | − |𝑥| ≤ |𝑥 𝑛 − 𝑥| . Hence if |𝑥 𝑛 − 𝑥| can be made arbitrarily small, so can |𝑥 𝑛 | − |𝑥| . Details are left to the reader. □ Let us see an example putting the propositions above together. Since lim𝑛→∞ 1/𝑛 = 0, then r
p
lim 1 + 1/𝑛 − 100/𝑛 2 = 1 + lim 1/𝑛 − 100 lim 1/𝑛
𝑛→∞
𝑛→∞
𝑛→∞
lim 1/𝑛 = 1.
𝑛→∞
That is, the limit on the left-hand side exists because the right-hand side exists. You really should read the equality above from right to left. On the other hand you must apply the propositions carefully. For example, by rewriting the expression with common denominator first we find lim
𝑛→∞
However,
2.2.3
𝑛2 ∞ 𝑛+1 𝑛=1
𝑛2 − 𝑛 = −1. 𝑛+1
𝑛2 𝑛+1 𝑛→∞
and {𝑛}∞ are not convergent, so lim 𝑛=1
− lim 𝑛 is nonsense. 𝑛→∞
Recursively defined sequences
Now that we know we can interchange limits and algebraic operations, we can compute the limits of many sequences. One such class are recursively defined sequences, that is, sequences where the next number in the sequence is computed using a formula from a fixed number of preceding elements in the sequence. Example 2.2.8: Let {𝑥 𝑛 }∞ be defined by 𝑥 1 B 2 and 𝑛=1 𝑥 𝑛+1 B 𝑥 𝑛 −
𝑥 𝑛2 − 2 . 2𝑥 𝑛
We must first find out if this sequence is well-defined; we must show we never divide by zero. Then we must find out if the sequence converges. Only then can we attempt to find the limit.
67
2.2. FACTS ABOUT LIMITS OF SEQUENCES
So let us prove that for all 𝑛 𝑥 𝑛 exists and 𝑥 𝑛 > 0 (so the sequence is well-defined and bounded below). Let us show this by induction. We know that 𝑥1 = 2 > 0. For the induction step, suppose 𝑥 𝑛 exists and 𝑥 𝑛 > 0. Then 𝑥 𝑛+1
𝑥 𝑛2 − 2 2𝑥 𝑛2 − 𝑥 𝑛2 + 2 𝑥 𝑛2 + 2 = 𝑥𝑛 − = = . 2𝑥 𝑛 2𝑥 𝑛 2𝑥 𝑛 𝑥 2 +2
𝑛 It is always true that 𝑥 𝑛2 + 2 > 0, and as 𝑥 𝑛 > 0, then 𝑥 𝑛+1 = 2𝑥 > 0. 𝑛 Next let us show that the sequence is monotone decreasing. If we show that 𝑥 𝑛2 − 2 ≥ 0 for all 𝑛, then 𝑥 𝑛+1 ≤ 𝑥 𝑛 for all 𝑛. Obviously 𝑥12 − 2 = 4 − 2 = 2 > 0. For an arbitrary 𝑛, we have
2 𝑥 𝑛+1
𝑥2 + 2 −2= 𝑛 2𝑥 𝑛
2
−2=
𝑥 𝑛4 + 4𝑥 𝑛2 + 4 − 8𝑥 𝑛2 4𝑥 𝑛2
=
𝑥 𝑛4 − 4𝑥 𝑛2 + 4 4𝑥 𝑛2
=
𝑥 𝑛2 − 2 4𝑥 𝑛2
2 .
2 Since squares are nonnegative, 𝑥 𝑛+1 − 2 ≥ 0 for all 𝑛. Therefore, {𝑥 𝑛 }∞ is monotone 𝑛=1 decreasing and bounded (𝑥 𝑛 > 0 for all 𝑛), and so the limit exists. It remains to find the limit. Write 2𝑥 𝑛 𝑥 𝑛+1 = 𝑥 𝑛2 + 2.
Since {𝑥 𝑛+1 }∞ is the 1-tail of {𝑥 𝑛 }∞ , it converges to the same limit. Let us define 𝑛=1 𝑛=1 𝑥 B lim𝑛→∞ 𝑥 𝑛 . Take the limit of both sides to obtain 2𝑥 2 = 𝑥 2 + 2,
or 𝑥 2 = 2. As 𝑥 𝑛 > 0 for all 𝑛 we get 𝑥 ≥ 0, and therefore 𝑥 =
√ 2.
You may have seen the sequence above before. It is Newton’s method* for finding the square root of 2. This method comes up often in practice and converges very rapidly. We used the fact that 𝑥12 − 2 > 0, although it was not strictly needed to show convergence by considering a tail of the sequence. The sequence converges as long as 𝑥1 ≠ 0, although √ with a negative 𝑥1 we would arrive at 𝑥 = − 2. By replacing the 2 in the numerator we obtain the square root of any positive number. These statements are left as an exercise. You should, however, be careful. Before taking any limits, you must make sure the sequence converges. Let us see an example. Example 2.2.9: Suppose 𝑥1 B 1 and 𝑥 𝑛+1 B 𝑥 𝑛2 + 𝑥 𝑛 . If we blindly assumed that the limit exists (call it 𝑥), then we would get the equation 𝑥 = 𝑥 2 + 𝑥, from which we might conclude 𝑥 = 0. However, it is not hard to show that {𝑥 𝑛 }∞ is unbounded and therefore does not 𝑛=1 converge. The thing to notice in this example is that the method still works, but it depends on the initial value 𝑥1 . If we set 𝑥1 B 0, then the sequence converges and the limit really is 0. An entire branch of mathematics, called dynamics, deals precisely with these issues. See Exercise 2.2.14. * Named
after the English physicist and mathematician Isaac Newton (1642–1726/7).
68
CHAPTER 2. SEQUENCES AND SERIES
2.2.4
Some convergence tests
It is not always necessary to go back to the definition of convergence to prove that a sequence is convergent. We first give a simple convergence test. The main idea is that {𝑥 𝑛 }∞ converges to 𝑥 if and only if {|𝑥 𝑛 − 𝑥|}∞ converges to zero. 𝑛=1 𝑛=1
Proposition 2.2.10. Let {𝑥 𝑛 }∞ be a sequence. Suppose there is an 𝑥 ∈ ℝ and a convergent 𝑛=1 sequence {𝑎 𝑛 }∞ such that 𝑛=1 lim 𝑎 𝑛 = 0 𝑛→∞
and |𝑥 𝑛 − 𝑥| ≤ 𝑎 𝑛
Then {𝑥 𝑛 }∞ converges and lim 𝑥 𝑛 = 𝑥. 𝑛=1
for all 𝑛 ∈ ℕ .
𝑛→∞
Proof. Let 𝜖 > 0 be given. Note that 𝑎 𝑛 ≥ 0 for all 𝑛. Find an 𝑀 ∈ ℕ such that for all 𝑛 ≥ 𝑀, we have 𝑎 𝑛 = |𝑎 𝑛 − 0| < 𝜖. Then, for all 𝑛 ≥ 𝑀, we have |𝑥 𝑛 − 𝑥| ≤ 𝑎 𝑛 < 𝜖.
□
As the proposition shows, to study when a sequence has a limit is the same as studying when another sequence goes to zero. In general, it may be hard to decide if a sequence converges, but for certain sequences there exist easy to apply tests that tell us if the sequence converges or not. Let us see one such test. First, let us compute the limit of a certain specific sequence. Proposition 2.2.11. Let 𝑐 > 0. (i) If 𝑐 < 1, then
lim 𝑐 𝑛 = 0.
𝑛→∞
(ii) If 𝑐 > 1, then {𝑐 𝑛 }∞ is unbounded. 𝑛=1
Proof. First consider 𝑐 < 1. As 𝑐 > 0, then 𝑐 𝑛 > 0 for all 𝑛 ∈ ℕ by induction. As 𝑐 < 1, then 𝑐 𝑛+1 < 𝑐 𝑛 for all 𝑛. So {𝑐 𝑛 }∞ is a decreasing sequence that is bounded below. Hence, it is 𝑛=1 convergent. Let 𝑥 B lim𝑛→∞ 𝑐 𝑛 . The 1-tail {𝑐 𝑛+1 }∞ also converges to 𝑥. Taking the limit 𝑛=1 𝑛+1 𝑛 of both sides of 𝑐 = 𝑐 · 𝑐 , we obtain 𝑥 = 𝑐𝑥, or (1 − 𝑐)𝑥 = 0. It follows that 𝑥 = 0 as 𝑐 ≠ 1. 𝑛 ∞ 1 1 Now consider 𝑐 > 1. Let 𝐵 > 0 be arbitrary. As /𝑐 < 1, then ( /𝑐 ) 𝑛=1 converges to 0. Hence for some large enough 𝑛, we get
𝑛
1 1 = 𝑛 𝑐 𝑐
𝐵, and 𝐵 is not an upper bound for {𝑐 𝑛 }∞ . As 𝐵 was arbitrary, 𝑛=1 ∞ 𝑛 {𝑐 } 𝑛=1 is unbounded. □ In the proposition above, the ratio of the (𝑛 + 1)th term and the 𝑛th term is 𝑐. We generalize this simple result to a larger class of sequences. The following lemma will come up again once we get to series.
69
2.2. FACTS ABOUT LIMITS OF SEQUENCES
Lemma 2.2.12 (Ratio test for sequences). Let {𝑥 𝑛 }∞ be a sequence such that 𝑥 𝑛 ≠ 0 for all 𝑛 𝑛=1 and such that the limit |𝑥 𝑛+1 | 𝐿 B lim exists. 𝑛→∞ |𝑥 𝑛 | (i) If 𝐿 < 1, then {𝑥 𝑛 }∞ converges and lim 𝑥 𝑛 = 0. 𝑛=1 𝑛→∞
(ii) If 𝐿 > 1, then
{𝑥 𝑛 }∞ 𝑛=1
is unbounded (hence diverges).
If 𝐿 exists, but 𝐿 = 1, the lemma says nothing. We cannot make any conclusion based on that information alone. For example, the sequence {1/𝑛 }∞ converges to zero, but 𝐿 = 1. 𝑛=1 𝑛 ∞ The constant sequence {1}∞ converges to 1, not zero, and 𝐿 = 1. The sequence (−1) 𝑛=1 𝑛=1 does not converge at all, and 𝐿 = 1 as well. Finally, the sequence {𝑛}∞ is unbounded, yet 𝑛=1 again 𝐿 = 1. The statement of the lemma may be strengthened somewhat, see exercises 2.2.13 and 2.3.15. | Proof. Suppose 𝐿 < 1. As |𝑥|𝑥𝑛+1 ≥ 0 for all 𝑛, then 𝐿 ≥ 0. Pick 𝑟 such that 𝐿 < 𝑟 < 1. We 𝑛| wish to compare the sequence {𝑥 𝑛 }∞ to the sequence {𝑟 𝑛 }∞ . The idea is that while the 𝑛=1 𝑛=1 | ratio |𝑥|𝑥𝑛+1 is not going to be less than 𝐿 eventually, it will eventually be less than 𝑟, which 𝑛| is still less than 1. The intuitive idea of the proof is illustrated in Figure 2.4.
𝐿
1
𝑟
Figure 2.4: The short lines represent the ratios
|𝑥 𝑛+1 | |𝑥 𝑛 |
approaching 𝐿 < 1.
As 𝑟 − 𝐿 > 0, there exists an 𝑀 ∈ ℕ such that for all 𝑛 ≥ 𝑀, we have
|𝑥 𝑛+1 | < 𝑟 − 𝐿. − 𝐿 |𝑥 𝑛 | Therefore, for 𝑛 ≥ 𝑀, |𝑥 𝑛+1 | −𝐿< 𝑟−𝐿 |𝑥 𝑛 |
or
|𝑥 𝑛+1 | < 𝑟. |𝑥 𝑛 |
For 𝑛 > 𝑀 (that is for 𝑛 ≥ 𝑀 + 1) write |𝑥 𝑛 | = |𝑥 𝑀 |
|𝑥 𝑀+1 | |𝑥 𝑀+2 | |𝑥 𝑛 | ··· < |𝑥 𝑀 | 𝑟𝑟 · · · 𝑟 = |𝑥 𝑀 | 𝑟 𝑛−𝑀 = (|𝑥 𝑀 | 𝑟 −𝑀 )𝑟 𝑛 . |𝑥 𝑀 | |𝑥 𝑀+1 | |𝑥 𝑛−1 |
The sequence {𝑟 𝑛 }∞ converges to zero and hence |𝑥 𝑀 | 𝑟 −𝑀 𝑟 𝑛 converges to zero. By 𝑛=1 Proposition 2.2.10, the 𝑀-tail {𝑥 𝑛 }∞ converges to zero and therefore {𝑥 𝑛 }∞ converges 𝑛=1 𝑛=𝑀+1 to zero.
70
CHAPTER 2. SEQUENCES AND SERIES
Now suppose 𝐿 > 1. Pick 𝑟 such that 1 < 𝑟 < 𝐿. As 𝐿 − 𝑟 > 0, there exists an 𝑀 ∈ ℕ such that for all 𝑛 ≥ 𝑀 |𝑥 𝑛+1 | < 𝐿 − 𝑟. − 𝐿 |𝑥 𝑛 |
Therefore,
|𝑥 𝑛+1 | > 𝑟. |𝑥 𝑛 |
Again for 𝑛 > 𝑀, write |𝑥 𝑛 | = |𝑥 𝑀 |
|𝑥 𝑛 | |𝑥 𝑀+1 | |𝑥 𝑀+2 | ··· > |𝑥 𝑀 | 𝑟𝑟 · · · 𝑟 = |𝑥 𝑀 | 𝑟 𝑛−𝑀 = (|𝑥 𝑀 | 𝑟 −𝑀 )𝑟 𝑛 . |𝑥 𝑀 | |𝑥 𝑀+1 | |𝑥 𝑛−1 |
The sequence {𝑟 𝑛 }∞ is unbounded (since 𝑟 > 1), and so {𝑥 𝑛 }∞ cannot be bounded (if 𝑛=1 𝑛=1 𝐵 𝑛 𝑀 |𝑥 𝑛 | ≤ 𝐵 for all 𝑛, then 𝑟 < |𝑥 𝑀 | 𝑟 for all 𝑛 > 𝑀, which is impossible). Consequently, {𝑥 𝑛 }∞ cannot converge. □ 𝑛=1 Example 2.2.13: A simple application of the lemma above is to prove 2𝑛 lim = 0. 𝑛→∞ 𝑛! Proof: Compute
It is not hard to see that
2𝑛+1 /(𝑛 + 1)! 2𝑛+1 𝑛! 2 = 𝑛 = . 𝑛 2 (𝑛 + 1)! 𝑛 + 1 2 /𝑛!
∞ 2 𝑛+1 𝑛=1
converges to zero. The conclusion follows by the lemma.
Example 2.2.14: A more complicated (and useful) application of the ratio test is to prove lim 𝑛 1/𝑛 = 1.
𝑛→∞
Proof: Let 𝜖 > 0 be given. Consider the sequence
∞ 𝑛 . 𝑛 (1+𝜖) 𝑛=1
Compute
(𝑛 + 1)/(1 + 𝜖)𝑛+1 𝑛 + 1 1 = . 𝑛 1+𝜖 𝑛/(1 + 𝜖)𝑛 The limit of
𝑛+1 𝑛
=1+
1 𝑛
as 𝑛 → ∞ is 1, and so (𝑛 + 1)/(1 + 𝜖)𝑛+1 1 lim = < 1. 𝑛 𝑛→∞ 1+𝜖 𝑛/(1 + 𝜖)
∞ 𝑛 converges to 0. 𝑛 (1+𝜖) 𝑛=1 𝑛 < 1, or 𝑛 < (1 + 𝜖)𝑛 , (1+𝜖)𝑛
Therefore, we have
In particular, there exists an 𝑀 such that for 𝑛 ≥ 𝑀,
or 𝑛 1/𝑛 < 1 + 𝜖. As 𝑛 ≥ 1, then 𝑛 1/𝑛 ≥ 1, and so
0 ≤ 𝑛 1/𝑛 − 1 < 𝜖. Consequently, lim 𝑛 1/𝑛 = 1. 𝑛→∞
71
2.2. FACTS ABOUT LIMITS OF SEQUENCES
2.2.5
Exercises
Exercise 2.2.1: Prove Corollary 2.2.4. Hint: Use constant sequences and Lemma 2.2.3. Exercise 2.2.2: Prove part (ii) of Proposition 2.2.5. is a convergent sequence, 𝑘 ∈ ℕ , then Exercise 2.2.3: Prove that if {𝑥 𝑛 }∞ 𝑛=1 lim
𝑛→∞
𝑥 𝑛𝑘
= lim 𝑥 𝑛
𝑘
𝑛→∞
.
Hint: Use induction. Exercise 2.2.4: Suppose 𝑥 1 B You cannot divide by zero! Exercise 2.2.5: Let 𝑥 𝑛 B limit.
1 2
and 𝑥 𝑛+1 B 𝑥 𝑛2 . Show that {𝑥 𝑛 }∞ converges and find lim𝑛→∞ 𝑥 𝑛 . Hint: 𝑛=1
𝑛−cos(𝑛) . 𝑛
Use the squeeze lemma to show that {𝑥 𝑛 }∞ converges and find the 𝑛=1 𝑦
Exercise 2.2.6: Let 𝑥 𝑛 B 𝑛12 and 𝑦𝑛 B 𝑛1 . Define 𝑧 𝑛 B 𝑥𝑦𝑛𝑛 and 𝑤 𝑛 B 𝑥 𝑛𝑛 . Do {𝑧 𝑛 }∞ and {𝑤 𝑛 }∞ 𝑛=1 𝑛=1 converge? What are the limits? Can you apply Proposition 2.2.5? Why or why not? Exercise 2.2.7: True or false, prove or find a counterexample. If {𝑥 𝑛 }∞ is a sequence such that {𝑥 𝑛2 }∞ 𝑛=1 𝑛=1 converges, then {𝑥 𝑛 }∞ converges. 𝑛=1 Exercise 2.2.8: Show that
𝑛2 = 0. 𝑛→∞ 2𝑛 lim
Exercise 2.2.9: Suppose {𝑥 𝑛 }∞ is a sequence, 𝑥 ∈ ℝ, and 𝑥 𝑛 ≠ 𝑥 for all 𝑛 ∈ ℕ . Suppose the limit 𝑛=1 𝐿 B lim
𝑛→∞
|𝑥 𝑛+1 − 𝑥| |𝑥 𝑛 − 𝑥|
exists and 𝐿 < 1. Show that {𝑥 𝑛 }∞ converges to 𝑥. 𝑛=1 Exercise 2.2.10 (Challenging): Let {𝑥 𝑛 }∞ be a convergent sequence such that 𝑥 𝑛 ≥ 0 and 𝑘 ∈ ℕ . Then 𝑛=1 1/𝑘
lim 𝑥 𝑛
𝑛→∞
Hint: Find an expression 𝑞 such that
1/𝑘
𝑥 𝑛 −𝑥 1/𝑘 𝑥 𝑛 −𝑥
= lim 𝑥 𝑛
1/𝑘
𝑛→∞
.
= 1𝑞 .
Exercise 2.2.11: Let 𝑟 > 0. Show that starting with an arbitrary 𝑥 1 ≠ 0, the sequence defined by 𝑥 𝑛+1 B 𝑥 𝑛 − converges to
√
√ 𝑟 if 𝑥 1 > 0 and − 𝑟 if 𝑥1 < 0.
𝑥 𝑛2 − 𝑟 2𝑥 𝑛
72
CHAPTER 2. SEQUENCES AND SERIES
be sequences. and {𝑏 𝑛 }∞ Exercise 2.2.12: Let {𝑎 𝑛 }∞ 𝑛=1 𝑛=1
converges to 0. converges to 0. Show that {𝑎 𝑛 𝑏 𝑛 }∞ is bounded and {𝑏 𝑛 }∞ a) Suppose {𝑎 𝑛 }∞ 𝑛=1 𝑛=1 𝑛=1
is unbounded, {𝑏 𝑛 }∞ converges to 0, and {𝑎 𝑛 𝑏 𝑛 }∞ is not convergent. b) Find an example where {𝑎 𝑛 }∞ 𝑛=1 𝑛=1 𝑛=1 c) Find an example where {𝑎 𝑛 }∞ is bounded, {𝑏 𝑛 }∞ converges to some 𝑥 ≠ 0, and {𝑎 𝑛 𝑏 𝑛 }∞ is not 𝑛=1 𝑛=1 𝑛=1 convergent.
Exercise 2.2.13 (Easy): Prove the following stronger version of Lemma 2.2.12, the ratio test. Suppose {𝑥 𝑛 }∞ is a sequence such that 𝑥 𝑛 ≠ 0 for all 𝑛. 𝑛=1 a) Prove that if there exists an 𝑟 < 1 and 𝑀 ∈ ℕ such that |𝑥 𝑛+1 | ≤𝑟 |𝑥 𝑛 |
then {𝑥 𝑛 }∞ converges to 0. 𝑛=1
for all 𝑛 ≥ 𝑀,
b) Prove that if there exists an 𝑟 > 1 and 𝑀 ∈ ℕ such that |𝑥 𝑛+1 | ≥𝑟 |𝑥 𝑛 |
for all 𝑛 ≥ 𝑀,
then {𝑥 𝑛 }∞ is unbounded. 𝑛=1 Exercise 2.2.14: Suppose 𝑥 1 B 𝑐 and 𝑥 𝑛+1 B 𝑥 𝑛2 + 𝑥 𝑛 . Show that {𝑥 𝑛 }∞ converges if and only if 𝑛=1 −1 ≤ 𝑐 ≤ 0, in which case it converges to 0. Exercise 2.2.15: Prove lim (𝑛 2 + 1)
1/𝑛
𝑛→∞
Exercise 2.2.16: Prove that (𝑛!)1/𝑛 converges to zero.
= 1.
∞
𝑛=1
is unbounded. Hint: Show that for every 𝐶 > 0,
𝐶𝑛 ∞ 𝑛!
𝑛=1
2.3. LIMIT SUPERIOR, LIMIT INFERIOR, AND BOLZANO–WEIERSTRASS
2.3
73
Limit superior, limit inferior, and Bolzano–Weierstrass
Note: 1–2 lectures, alternative proof of BW optional In this section we study bounded sequences and their subsequences. In particular, we define the so-called limit superior and limit inferior of a bounded sequence and talk about limits of subsequences. Furthermore, we prove the Bolzano–Weierstrass theorem* , an indispensable tool in analysis, showing the existence of convergent subsequences. We proved that every convergent sequence is bounded; nevertheless, ∞ there exist many bounded divergent sequences. For instance, the sequence (−1)𝑛 𝑛=1 is bounded, but divergent. All is not lost, however, and we can still compute certain limits with a bounded divergent sequence.
2.3.1
Upper and lower limits
There are ways of creating monotone sequences out of any sequence, and in this fashion we get the so-called limit superior and limit inferior. These limits always exist for bounded sequences. If a sequence {𝑥 𝑛 }∞ is bounded, then the set {𝑥 𝑘 : 𝑘 ∈ ℕ } is bounded. For every 𝑛, the 𝑛=1 set {𝑥 𝑘 : 𝑘 ≥ 𝑛} is also bounded (as it is a subset), so we take its supremum and infimum. Definition 2.3.1. Let {𝑥 𝑛 }∞ be a bounded sequence. Define the sequences {𝑎 𝑛 }∞ and 𝑛=1 𝑛=1 ∞ {𝑏 𝑛 } 𝑛=1 by 𝑎 𝑛 B sup{𝑥 𝑘 : 𝑘 ≥ 𝑛} and 𝑏 𝑛 B inf{𝑥 𝑘 : 𝑘 ≥ 𝑛}. Define, if the limits exist, lim sup 𝑥 𝑛 B lim 𝑎 𝑛 , 𝑛→∞
𝑛→∞
lim inf 𝑥 𝑛 B lim 𝑏 𝑛 . 𝑛→∞
𝑛→∞
For a bounded sequence, liminf and limsup always exist (see below). It is possible to define liminf and limsup for unbounded sequences if we allow ∞ and −∞, and we do so later in this section. It is not hard to generalize the following results to include unbounded sequences; however, we first restrict our attention to bounded ones. Proposition 2.3.2. Let {𝑥 𝑛 }∞ be a bounded sequence. Let 𝑎 𝑛 and 𝑏 𝑛 be as in the definition above. 𝑛=1 (i) The sequence {𝑎 𝑛 }∞ is bounded monotone decreasing and {𝑏 𝑛 }∞ is bounded monotone 𝑛=1 𝑛=1 increasing. In particular, lim inf 𝑥 𝑛 and lim sup 𝑥 𝑛 exist. 𝑛→∞
𝑛→∞
(ii) lim sup 𝑥 𝑛 = inf{𝑎 𝑛 : 𝑛 ∈ ℕ } and lim inf 𝑥 𝑛 = sup{𝑏 𝑛 : 𝑛 ∈ ℕ }. 𝑛→∞
𝑛→∞
(iii) lim inf 𝑥 𝑛 ≤ lim sup 𝑥 𝑛 . 𝑛→∞
𝑛→∞
Proof. Let us see why {𝑎 𝑛 }∞ is a decreasing sequence. As 𝑎 𝑛 is the least upper bound for 𝑛=1 {𝑥 𝑘 : 𝑘 ≥ 𝑛}, it is also an upper bound for the subset {𝑥 𝑘 : 𝑘 ≥ 𝑛 + 1}. Therefore 𝑎 𝑛+1 , the least upper bound for {𝑥 𝑘 : 𝑘 ≥ 𝑛 + 1}, has to be less than or equal to 𝑎 𝑛 , the least upper * Named
after the Czech mathematician Bernhard Placidus Johann Nepomuk Bolzano (1781–1848), and the German mathematician Karl Theodor Wilhelm Weierstrass (1815–1897).
74
CHAPTER 2. SEQUENCES AND SERIES
bound for {𝑥 𝑘 : 𝑘 ≥ 𝑛)}. That is, 𝑎 𝑛 ≥ 𝑎 𝑛+1 for all 𝑛. Similarly (an exercise), {𝑏 𝑛 }∞ is 𝑛=1 ∞ an increasing sequence. It is left as an exercise to show that if {𝑥 𝑛 } 𝑛=1 is bounded, then {𝑎 𝑛 }∞ and {𝑏 𝑛 }∞ must be bounded. 𝑛=1 𝑛=1 The second item follows as the sequences {𝑎 𝑛 }∞ and {𝑏 𝑛 }∞ are monotone and 𝑛=1 𝑛=1 bounded. For the third item, note that 𝑏 𝑛 ≤ 𝑎 𝑛 , as the inf of a nonempty set is less than or equal to its sup. The sequences {𝑎 𝑛 }∞ and {𝑏 𝑛 }∞ converge to the limsup and the liminf 𝑛=1 𝑛=1 respectively. Apply Lemma 2.2.3 to obtain lim 𝑏 𝑛 ≤ lim 𝑎 𝑛 .
𝑛→∞
□
𝑛→∞
lim sup 𝑥 𝑛 𝑛→∞
lim inf 𝑥 𝑛 𝑛→∞
⋄⋄⋄⋄⋄⋄⋄⋄⋄⋄⋄⋄⋄⋄⋄⋄⋄⋄⋄⋄ ⋄⋄⋄ ⋄⋄⋄⋄⋄⋄⋄⋄⋄⋄⋄⋄⋄⋄⋄⋄ ⋄⋄⋄⋄ ⋄⋄⋄⋄⋄⋄⋄
Figure 2.5: First 50 terms of an example sequence. Terms 𝑥 𝑛 of the sequence are marked with dots (•), 𝑎 𝑛 are marked with circles (◦), and 𝑏 𝑛 are marked with diamonds (⋄).
Example 2.3.3: Let {𝑥 𝑛 }∞ be defined by 𝑛=1
( 𝑥𝑛 B
𝑛+1 𝑛
0
if 𝑛 is odd, if 𝑛 is even.
Let us compute the lim inf and lim sup of this sequence. See also Figure 2.6. First the limit inferior: lim inf 𝑥 𝑛 = lim inf{𝑥 𝑘 : 𝑘 ≥ 𝑛} = lim 0 = 0. 𝑛→∞
𝑛→∞
𝑛→∞
For the limit superior, we write lim sup 𝑥 𝑛 = lim sup{𝑥 𝑘 : 𝑘 ≥ 𝑛} .
𝑛→∞
𝑛→∞
2.3. LIMIT SUPERIOR, LIMIT INFERIOR, AND BOLZANO–WEIERSTRASS
75
It is not hard to see that
( sup{𝑥 𝑘 : 𝑘 ≥ 𝑛} =
𝑛+1 𝑛 𝑛+2 𝑛+1
if 𝑛 is odd, if 𝑛 is even.
We leave it to the reader to show that the limit is 1. That is, lim sup 𝑥 𝑛 = 1. 𝑛→∞
Do note that the sequence {𝑥 𝑛 }∞ is not a convergent sequence. 𝑛=1
lim sup 𝑥 𝑛 𝑛→∞
lim inf 𝑥 𝑛 𝑛→∞
⋄ ⋄ ⋄ ⋄ ⋄ ⋄ ⋄ ⋄ ⋄ ⋄ ⋄ ⋄ ⋄ ⋄ ⋄ ⋄ ⋄ ⋄ ⋄ ⋄
Figure 2.6: First 20 terms of the sequence in Example 2.3.3. The marking is as in Figure 2.5.
We associate certain subsequences with lim sup and lim inf. It is important to notice that {𝑎 𝑛 }∞ and {𝑏 𝑛 }∞ are not subsequences of {𝑥 𝑛 }∞ , nor do they have to even consist 𝑛=1 𝑛=1 𝑛=1 of the same numbers. For example, for the sequence {1/𝑛 }∞ , 𝑏 = 0 for all 𝑛 ∈ ℕ . 𝑛=1 𝑛
Theorem 2.3.4. If {𝑥 𝑛 }∞ is a bounded sequence, then there exists a subsequence {𝑥 𝑛 𝑘 }∞ such 𝑛=1 𝑘=1 that lim 𝑥 𝑛 𝑘 = lim sup 𝑥 𝑛 . 𝑘→∞
𝑛→∞
such that Similarly, there exists a (perhaps different) subsequence {𝑥 𝑚 𝑘 }∞ 𝑘=1 lim 𝑥 𝑚 𝑘 = lim inf 𝑥 𝑛 .
𝑘→∞
𝑛→∞
Proof. Define 𝑎 𝑛 B sup{𝑥 𝑘 : 𝑘 ≥ 𝑛}. Write 𝑥 B lim sup𝑛→∞ 𝑥 𝑛 = lim𝑛→∞ 𝑎 𝑛 . We define the subsequence inductively. Let 𝑛1 B 1, and suppose 𝑛1 , 𝑛2 , . . . , 𝑛 𝑘−1 are already defined for some 𝑘 ≥ 2. Pick an 𝑚 ≥ 𝑛 𝑘−1 + 1 such that 𝑎 (𝑛 𝑘−1 +1) − 𝑥 𝑚
0 be given. As {𝑎 𝑛 }∞ converges to 𝑥, the subsequence {𝑎 𝑛 𝑘 }∞ 𝑛=1 𝑘=1 converges to 𝑥. Thus there exists an 𝑀1 ∈ ℕ such that for all 𝑘 ≥ 𝑀1 , we have
𝑎𝑛 − 𝑥 < 𝜖 . 𝑘 2
Find an 𝑀2 ∈ ℕ such that
1 𝜖 ≤ . 𝑀2 2
Take 𝑀 B max{𝑀1 , 𝑀2 , 2}. For all 𝑘 ≥ 𝑀,
𝑥 − 𝑥 𝑛 = 𝑎 𝑛 − 𝑥 𝑛 + 𝑥 − 𝑎 𝑛 𝑘 𝑘 𝑘 𝑘 ≤ 𝑎𝑛𝑘 − 𝑥𝑛𝑘 + 𝑥 − 𝑎𝑛𝑘
1 𝜖 + 𝑘 2 1 𝜖 𝜖 𝜖 ≤ + ≤ + = 𝜖. 𝑀2 2 2 2
𝑛 𝑘 such that 𝑥 𝑛 𝑘+1 ∈ [𝑎 𝑘 , 𝑦]. If there are not infinitely many 𝑗 such that 𝑥 𝑗 ∈ [𝑎 𝑘 , 𝑦], then it must be true that there are infinitely many 𝑗 ∈ ℕ such that 𝑥 𝑗 ∈ [𝑦, 𝑏 𝑘 ]. In this case pick 𝑎 𝑘+1 B 𝑦, 𝑏 𝑘+1 B 𝑏 𝑘 , and pick 𝑛 𝑘+1 > 𝑛 𝑘 such that 𝑥 𝑛 𝑘+1 ∈ [𝑦, 𝑏 𝑘 ]. We now have the sequences defined. What is left to prove is that lim𝑖→∞ 𝑎 𝑖 = lim𝑖→∞ 𝑏 𝑖 . The limits exist as the sequences are monotone. In the construction, 𝑏 𝑖 − 𝑎 𝑖 is cut in half in 𝑖 each step. Therefore, 𝑏 𝑖+1 − 𝑎 𝑖+1 = 𝑏 𝑖 −𝑎 2 . By induction, 𝑏1 − 𝑎1 . 2𝑖−1 Let 𝑥 B lim𝑖→∞ 𝑎 𝑖 . As {𝑎 𝑖 }∞ is monotone, 𝑖=1 𝑏𝑖 − 𝑎𝑖 =
𝑥 = sup{𝑎 𝑖 : 𝑖 ∈ ℕ }.
Let 𝑦 B lim𝑖→∞ 𝑏 𝑖 = inf{𝑏 𝑖 : 𝑖 ∈ ℕ }. Since 𝑎 𝑖 < 𝑏 𝑖 for all 𝑖, then 𝑥 ≤ 𝑦. As the sequences are monotone, then for all 𝑖, we have (why?) 𝑦 − 𝑥 ≤ 𝑏𝑖 − 𝑎𝑖 =
𝑏1 − 𝑎1 . 2𝑖−1
2.3. LIMIT SUPERIOR, LIMIT INFERIOR, AND BOLZANO–WEIERSTRASS Because lemma.
𝑏1 −𝑎 1 2𝑖−1
79
is arbitrarily small and 𝑦 − 𝑥 ≥ 0, we have 𝑦 − 𝑥 = 0. Finish by the squeeze
□
Yet another proof of the Bolzano–Weierstrass theorem is to show the following claim, which is left as a challenging exercise. Claim: Every sequence has a monotone subsequence.
2.3.4
Infinite limits
Just as for infima and suprema, it is possible to allow certain limits to be infinite. That is, we write lim𝑛→∞ 𝑥 𝑛 = ∞ or lim𝑛→∞ 𝑥 𝑛 = −∞ for certain divergent sequences. Definition 2.3.9. We say {𝑥 𝑛 }∞ diverges to infinity* if for every 𝐾 ∈ ℝ, there exists an 𝑛=1 𝑀 ∈ ℕ such that for all 𝑛 ≥ 𝑀, we have 𝑥 𝑛 > 𝐾. In this case we write lim 𝑥 𝑛 B ∞.
𝑛→∞
Similarly, if for every 𝐾 ∈ ℝ there exists an 𝑀 ∈ ℕ such that for all 𝑛 ≥ 𝑀, we have 𝑥 𝑛 < 𝐾, we say {𝑥 𝑛 }∞ diverges to minus infinity and we write 𝑛=1 lim 𝑥 𝑛 B −∞.
𝑛→∞
With this definition and allowing ∞ and −∞, we can write lim𝑛→∞ 𝑥 𝑛 for any monotone sequence. Proposition 2.3.10. Suppose {𝑥 𝑛 }∞ is a monotone unbounded sequence. Then 𝑛=1
( lim 𝑥 𝑛 =
𝑛→∞
∞
−∞
if {𝑥 𝑛 }∞ is increasing, 𝑛=1
if {𝑥 𝑛 }∞ is decreasing. 𝑛=1
Proof. The case of monotone increasing follows from Exercise 2.3.14 part c) below. Suppose {𝑥 𝑛 }∞ is decreasing and unbounded. That the sequence is unbounded means that for 𝑛=1 every 𝐾 ∈ ℝ, there is an 𝑀 ∈ ℕ such that 𝑥 𝑀 < 𝐾. By monotonicity, 𝑥 𝑛 ≤ 𝑥 𝑀 < 𝐾 for all 𝑛 ≥ 𝑀. Therefore, lim𝑛→∞ 𝑥 𝑛 = −∞. □ Example 2.3.11: lim 𝑛 = ∞,
𝑛→∞
lim 𝑛 2 = ∞,
𝑛→∞
lim −𝑛 = −∞.
𝑛→∞
We leave verification to the reader. We may also allow lim inf and lim sup to take on the values ∞ and −∞, so that we can apply lim inf and lim sup to absolutely any sequence, not just a bounded one. Unfortunately, the sequences {𝑎 𝑛 }∞ and {𝑏 𝑛 }∞ are not sequences of real numbers but of extended real 𝑛=1 𝑛=1 numbers. In particular, 𝑎 𝑛 can equal ∞ for some 𝑛, and 𝑏 𝑛 can equal −∞. So we have no definition for the limits. But since the extended real numbers are still an ordered set, we can take suprema and infima. * Sometimes
it is said that {𝑥 𝑛 }∞ converges to infinity. 𝑛=1
80
CHAPTER 2. SEQUENCES AND SERIES
Definition 2.3.12. Let {𝑥 𝑛 }∞ be an unbounded sequence of real numbers. Define 𝑛=1 sequences of extended real numbers by 𝑎 𝑛 B sup{𝑥 𝑘 : 𝑘 ≥ 𝑛} and 𝑏 𝑛 B inf{𝑥 𝑘 : 𝑘 ≥ 𝑛}. Define lim sup 𝑥 𝑛 B inf{𝑎 𝑛 : 𝑛 ∈ ℕ }, 𝑛→∞
and
lim inf 𝑥 𝑛 B sup{𝑏 𝑛 : 𝑛 ∈ ℕ }. 𝑛→∞
This definition agrees with the definition for bounded sequences. Proposition 2.3.13. Let {𝑥 𝑛 }∞ be an unbounded sequence. Define {𝑎 𝑛 }∞ and {𝑏 𝑛 }∞ as 𝑛=1 𝑛=1 𝑛=1 ∞ ∞ above. Then {𝑎 𝑛 } 𝑛=1 is decreasing, and {𝑏 𝑛 } 𝑛=1 is increasing. If 𝑎 𝑛 is a real number for every 𝑛, then lim sup𝑛→∞ 𝑥 𝑛 = lim𝑛→∞ 𝑎 𝑛 . If 𝑏 𝑛 is a real number for every 𝑛, then lim inf𝑛→∞ 𝑥 𝑛 = lim𝑛→∞ 𝑏 𝑛 . Proof. As before, 𝑎 𝑛 = sup{𝑥 𝑘 : 𝑘 ≥ 𝑛} ≥ sup{𝑥 𝑘 : 𝑘 ≥ 𝑛 + 1} = 𝑎 𝑛+1 . So {𝑎 𝑛 }∞ is 𝑛=1 decreasing. Similarly, {𝑏 𝑛 }∞ is increasing. 𝑛=1 If the sequence {𝑎 𝑛 }∞ is a sequence of real numbers, then lim𝑛→∞ 𝑎 𝑛 = inf{𝑎 𝑛 : 𝑛 ∈ ℕ }. 𝑛=1 This follows from Theorem 2.1.10 if {𝑎 𝑛 }∞ is bounded and Proposition 2.3.10 if {𝑎 𝑛 }∞ is 𝑛=1 𝑛=1 ∞ unbounded. We proceed similarly with {𝑏 𝑛 } 𝑛=1 . □ The definition behaves as expected with lim sup and lim inf, see exercises 2.3.13 and 2.3.14.
Example 2.3.14: Suppose 𝑥 𝑛 B 0 for odd 𝑛 and 𝑥 𝑛 B 𝑛 for even 𝑛. Then 𝑎 𝑛 = ∞ for all 𝑛, since for every 𝑀, there exists an even 𝑘 such that 𝑥 𝑘 = 𝑘 ≥ 𝑀. On the other hand, 𝑏 𝑛 = 0 for all 𝑛, as for every 𝑛, the set {𝑏 𝑘 : 𝑘 ≥ 𝑛} consists of 0 and positive numbers. So, lim 𝑥 𝑛
𝑛→∞
2.3.5
does not exist,
lim sup 𝑥 𝑛 = ∞, 𝑛→∞
lim inf 𝑥 𝑛 = 0. 𝑛→∞
Exercises
Exercise 2.3.1: Suppose {𝑥 𝑛 }∞ is a bounded sequence. Define 𝑎 𝑛 and 𝑏 𝑛 as in Definition 2.3.1. Show that 𝑛=1 ∞ ∞ {𝑎 𝑛 } 𝑛=1 and {𝑏 𝑛 } 𝑛=1 are bounded. Exercise 2.3.2: Suppose {𝑥 𝑛 }∞ is a bounded sequence. Define 𝑏 𝑛 as in Definition 2.3.1. Show that 𝑛=1 {𝑏 𝑛 }∞ is an increasing sequence. 𝑛=1 Exercise 2.3.3: Finish the proof of Proposition 2.3.6. That is, suppose {𝑥 𝑛 }∞ is a bounded sequence and 𝑛=1 ∞ {𝑥 𝑛 𝑘 } 𝑘=1 is a subsequence. Prove lim inf 𝑥 𝑛 ≤ lim inf 𝑥 𝑛 𝑘 . 𝑛→∞
𝑘→∞
Exercise 2.3.4: Prove Proposition 2.3.7. Exercise 2.3.5: (−1)𝑛 a) Let 𝑥 𝑛 B . Find lim sup 𝑥 𝑛 and lim inf 𝑥 𝑛 . 𝑛→∞ 𝑛 𝑛→∞ b) Let 𝑥 𝑛 B
(𝑛 − 1)(−1)𝑛 . Find lim sup 𝑥 𝑛 and lim inf 𝑥 𝑛 . 𝑛→∞ 𝑛 𝑛→∞
2.3. LIMIT SUPERIOR, LIMIT INFERIOR, AND BOLZANO–WEIERSTRASS
81
be bounded sequences such that 𝑥 𝑛 ≤ 𝑦𝑛 for all 𝑛. Show and {𝑦𝑛 }∞ Exercise 2.3.6: Let {𝑥 𝑛 }∞ 𝑛=1 𝑛=1 lim sup 𝑥 𝑛 ≤ lim sup 𝑦𝑛 𝑛→∞
lim inf 𝑥 𝑛 ≤ lim inf 𝑦𝑛 .
and
𝑛→∞
𝑛→∞
𝑛→∞
Exercise 2.3.7: Let {𝑥 𝑛 }∞ and {𝑦𝑛 }∞ be bounded sequences. 𝑛=1 𝑛=1 is bounded. a) Show that {𝑥 𝑛 + 𝑦𝑛 }∞ 𝑛=1
b) Show that
lim inf 𝑥 𝑛 + lim inf 𝑦𝑛 ≤ lim inf (𝑥 𝑛 + 𝑦𝑛 ). 𝑛→∞
𝑛→∞
𝑛→∞
Hint: One proof is to find a subsequence {𝑥 𝑛𝑚 + 𝑦𝑛𝑚 }∞ of {𝑥 𝑛 + 𝑦𝑛 }∞ that converges. Then find a 𝑛=1 𝑚=1 ∞ ∞ subsequence {𝑥 𝑛𝑚𝑖 } 𝑖=1 of {𝑥 𝑛𝑚 } 𝑚=1 that converges.
c) Find an explicit {𝑥 𝑛 }∞ and {𝑦𝑛 }∞ such that 𝑛=1 𝑛=1
lim inf 𝑥 𝑛 + lim inf 𝑦𝑛 < lim inf (𝑥 𝑛 + 𝑦𝑛 ). 𝑛→∞
𝑛→∞
𝑛→∞
Hint: Look for examples that do not have a limit. Exercise 2.3.8: Let {𝑥 𝑛 }∞ and {𝑦𝑛 }∞ be bounded sequences (by the previous exercise, {𝑥 𝑛 + 𝑦𝑛 }∞ is 𝑛=1 𝑛=1 𝑛=1 bounded). a) Show that
lim sup 𝑥 𝑛 + lim sup 𝑦𝑛 ≥ lim sup (𝑥 𝑛 + 𝑦𝑛 ). 𝑛→∞
𝑛→∞
𝑛→∞
Hint: See previous exercise. b) Find an explicit {𝑥 𝑛 }∞ and {𝑦𝑛 }∞ such that 𝑛=1 𝑛=1
lim sup 𝑥 𝑛 + lim sup 𝑦𝑛 > lim sup (𝑥 𝑛 + 𝑦𝑛 ). 𝑛→∞
𝑛→∞
𝑛→∞
Hint: See previous exercise. Exercise 2.3.9: If 𝑆 ⊂ ℝ is a set, then 𝑥 ∈ ℝ is a cluster point if for every 𝜖 > 0, the set (𝑥 − 𝜖, 𝑥 + 𝜖)∩𝑆 \{𝑥} is not empty. That is, if there are points of 𝑆 arbitrarily close to 𝑥. For example, 𝑆 B {1/𝑛 : 𝑛 ∈ ℕ } has a unique (only one) cluster point 0, but 0 ∉ 𝑆. Prove the following version of the Bolzano–Weierstrass theorem: Theorem. Let 𝑆 ⊂ ℝ be a bounded infinite set, then there exists at least one cluster point of 𝑆.
Hint: If 𝑆 is infinite, then 𝑆 contains a countably infinite subset. That is, there is a sequence {𝑥 𝑛 }∞ of 𝑛=1 distinct numbers in 𝑆. Exercise 2.3.10 (Challenging): a) Prove that every sequence contains a monotone subsequence. Hint: Call 𝑛 ∈ ℕ a peak if 𝑎 𝑚 ≤ 𝑎 𝑛 for all 𝑚 ≥ 𝑛. There are two possibilities: Either the sequence has at most finitely many peaks, or it has infinitely many peaks. b) Conclude the Bolzano–Weierstrass theorem.
82
CHAPTER 2. SEQUENCES AND SERIES
Exercise 2.3.11: Prove a stronger version of Proposition 2.3.7. Suppose {𝑥 𝑛 }∞ is a sequence such that 𝑛=1 ∞ ∞ every subsequence {𝑥 𝑛𝑚 } 𝑚=1 has a subsequence {𝑥 𝑛𝑚𝑖 } 𝑖=1 that converges to 𝑥. a) First show that {𝑥 𝑛 }∞ is bounded. 𝑛=1
b) Now show that {𝑥 𝑛 }∞ converges to 𝑥. 𝑛=1 Exercise 2.3.12: Let {𝑥 𝑛 }∞ be a bounded sequence. 𝑛=1
a) Prove that there exists an 𝑠 such that for every 𝑟 > 𝑠, there exists an 𝑀 ∈ ℕ such that for all 𝑛 ≥ 𝑀, we have 𝑥 𝑛 < 𝑟.
b) If 𝑠 is a number as in a), then prove lim sup 𝑥 𝑛 ≤ 𝑠. 𝑛→∞
c) Show that if 𝑆 is the set of all 𝑠 as in a), then lim sup 𝑥 𝑛 = inf 𝑆. 𝑛→∞
Exercise 2.3.13 (Easy): Suppose {𝑥 𝑛 }∞ is such that lim inf 𝑥 𝑛 = −∞, lim sup 𝑥 𝑛 = ∞. 𝑛=1 𝑛→∞
a) Show that
{𝑥 𝑛 }∞ 𝑛=1
𝑛→∞
is not convergent, and also that neither lim 𝑥 𝑛 = ∞ nor lim 𝑥 𝑛 = −∞ is true. 𝑛→∞
𝑛→∞
b) Find an example of such a sequence. Exercise 2.3.14: Let {𝑥 𝑛 }∞ be a sequence. 𝑛=1
a) Show that lim 𝑥 𝑛 = ∞ if and only if lim inf 𝑥 𝑛 = ∞. 𝑛→∞
𝑛→∞
b) Then show that lim 𝑥 𝑛 = −∞ if and only if lim sup 𝑥 𝑛 = −∞. 𝑛→∞
𝑛→∞
c) If is monotone increasing, show that either lim𝑛→∞ 𝑥 𝑛 exists and is finite or lim𝑛→∞ 𝑥 𝑛 = ∞. In either case, lim𝑛→∞ 𝑥 𝑛 = sup{𝑥 𝑛 : 𝑛 ∈ ℕ }. {𝑥 𝑛 }∞ 𝑛=1
Exercise 2.3.15: Prove the following stronger version of Lemma 2.2.12, the ratio test. Suppose {𝑥 𝑛 }∞ is a 𝑛=1 sequence such that 𝑥 𝑛 ≠ 0 for all 𝑛. a) Prove that if lim sup
|𝑥 𝑛+1 | < 1, |𝑥 𝑛 |
lim inf
|𝑥 𝑛+1 | > 1, |𝑥 𝑛 |
𝑛→∞
then {𝑥 𝑛 }∞ converges to 0. 𝑛=1
b) Prove that if
𝑛→∞
then {𝑥 𝑛 }∞ is unbounded. 𝑛=1
Exercise 2.3.16: Suppose {𝑥 𝑛 }∞ is a bounded sequence, 𝑎 𝑛 B sup{𝑥 𝑘 : 𝑘 ≥ 𝑛} as before. Suppose that 𝑛=1 for some ℓ ∈ ℕ , 𝑎ℓ ∉ {𝑥 𝑘 : 𝑘 ≥ ℓ }. Then show that 𝑎 𝑗 = 𝑎ℓ for all 𝑗 ≥ ℓ , and hence lim sup 𝑥 𝑛 = 𝑎ℓ . 𝑛→∞
Exercise 2.3.17: Suppose {𝑥 𝑛 }∞ is a sequence, and 𝑎 𝑛 B sup{𝑥 𝑘 : 𝑘 ≥ 𝑛} and 𝑏 𝑛 B sup{𝑥 𝑘 : 𝑘 ≥ 𝑛} 𝑛=1 as before. a) Prove that if 𝑎ℓ = ∞ for some ℓ ∈ ℕ , then lim sup 𝑥 𝑛 = ∞. 𝑛→∞
b) Prove that if 𝑏ℓ = −∞ for some ℓ ∈ ℕ , then lim inf 𝑥 𝑛 = −∞. 𝑛→∞
2.3. LIMIT SUPERIOR, LIMIT INFERIOR, AND BOLZANO–WEIERSTRASS
83
Exercise 2.3.18: Suppose {𝑥 𝑛 }∞ is a sequence such that both lim inf𝑛→∞ 𝑥 𝑛 and lim sup𝑛→∞ 𝑥 𝑛 are 𝑛=1 ∞ finite. Prove that {𝑥 𝑛 } 𝑛=1 is bounded. Exercise 2.3.19: Suppose {𝑥 𝑛 }∞ is a bounded sequence, and 𝜖 > 0 is given. Prove that there exists an 𝑀 𝑛=1 such that for all 𝑘 ≥ 𝑀,
𝑥 𝑘 − lim sup 𝑥 𝑛 < 𝜖 𝑛→∞
and
lim inf 𝑥 𝑛 − 𝑥 𝑘 < 𝜖. 𝑛→∞
Exercise 2.3.20: Extend Theorem 2.3.4 to unbounded sequences: Suppose that {𝑥 𝑛 }∞ is a sequence. If 𝑛=1 ∞ lim sup𝑛→∞ 𝑥 𝑛 = ∞, then prove that there exists a subsequence {𝑥 𝑛 𝑖 } 𝑖=1 converging to ∞. Then prove the same result for −∞, and then prove both statements for lim inf.
84
CHAPTER 2. SEQUENCES AND SERIES
2.4
Cauchy sequences
Note: less than a lecture Often we wish to describe a certain number by a sequence that converges to it. In this case, it is impossible to use the number itself in the proof that the sequence converges. It would be nice if we could check for convergence without knowing the limit. Definition 2.4.1. A sequence {𝑥 𝑛 }∞ is a Cauchy sequence* if for every 𝜖 > 0 there exists an 𝑛=1 𝑀 ∈ ℕ such that for all 𝑛 ≥ 𝑀 and all 𝑘 ≥ 𝑀, we have |𝑥 𝑛 − 𝑥 𝑘 | < 𝜖.
Informally, being Cauchy means that the terms of the sequence are eventually all arbitrarily close to each other. We might expect such a sequence to be convergent, and we would be correct due to ℝ having the least-upper-bound property. Before we prove this fact, we look at some examples. Example 2.4.2: The sequence {1/𝑛 }∞ is a Cauchy sequence. 𝑛=1 Proof: Given 𝜖 > 0, find 𝑀 such that 𝑀 > 2/𝜖. Then for 𝑛, 𝑘 ≥ 𝑀, we have 1/𝑛 < 𝜖/2 and 1/𝑘 < 𝜖/2. Therefore, for 𝑛, 𝑘 ≥ 𝑀, we have
1 1 1 1 𝜖 𝜖 − ≤ + < + = 𝜖. 𝑛 𝑘 𝑛 𝑘 2 2 𝑛 ∞
Example 2.4.3: The sequence (−1) 𝑛=1 is not a Cauchy sequence. Proof: Given any 𝑀 ∈ ℕ , take 𝑛 ≥ 𝑀 to be any even number, and let 𝑘 B 𝑛 + 1. Then
𝑛 𝑘 𝑛 𝑛+1 (−1) − (−1) = (−1) − (−1) = |1 − (−1)| = 2.
Therefore, for any 𝜖 ≤ 2 the definition cannot be satisfied, and the sequence is not Cauchy. Proposition 2.4.4. If a sequence is Cauchy, then it is bounded. Proof. Suppose {𝑥 𝑛 }∞ is Cauchy. Pick an 𝑀 such that for all 𝑛, 𝑘 ≥ 𝑀, we have 𝑛=1 |𝑥 𝑛 − 𝑥 𝑘 | < 1. In particular, for all 𝑛 ≥ 𝑀, |𝑥 𝑛 − 𝑥 𝑀 | < 1.
By the reverse triangle inequality, |𝑥 𝑛 | − |𝑥 𝑀 | ≤ |𝑥 𝑛 − 𝑥 𝑀 | < 1. Hence for 𝑛 ≥ 𝑀, Let
|𝑥 𝑛 | < 1 + |𝑥 𝑀 | .
𝐵 B max |𝑥 1 | , |𝑥 2 | , . . . , |𝑥 𝑀−1 | , 1 + |𝑥 𝑀 | .
Then |𝑥 𝑛 | ≤ 𝐵 for all 𝑛 ∈ ℕ . * Named
after the French mathematician Augustin-Louis Cauchy (1789–1857).
□
85
2.4. CAUCHY SEQUENCES Theorem 2.4.5. A sequence of real numbers is Cauchy if and only if it converges.
Proof. Suppose {𝑥 𝑛 }∞ converges to 𝑥, and let 𝜖 > 0 be given. Then there exists an 𝑀 such 𝑛=1 that for 𝑛 ≥ 𝑀, 𝜖 |𝑥 𝑛 − 𝑥| < . 2 Hence for 𝑛 ≥ 𝑀 and 𝑘 ≥ 𝑀, 𝜖 𝜖 |𝑥 𝑛 − 𝑥 𝑘 | = |𝑥 𝑛 − 𝑥 + 𝑥 − 𝑥 𝑘 | ≤ |𝑥 𝑛 − 𝑥| + |𝑥 − 𝑥 𝑘 | < + = 𝜖. 2 2 Alright, that direction was easy. Now suppose {𝑥 𝑛 }∞ is Cauchy. We have shown that 𝑛=1 ∞ {𝑥 𝑛 } 𝑛=1 is bounded. For a bounded sequence, liminf and limsup exist, and this is where we use the least-upper-bound property. If we show that lim inf 𝑥 𝑛 = lim sup 𝑥 𝑛 , 𝑛→∞
𝑛→∞
then must be convergent by Proposition 2.3.5. Define 𝑎 B lim sup𝑛→∞ 𝑥 𝑛 and 𝑏 B lim inf𝑛→∞ 𝑥 𝑛 . By Theorem 2.3.4, there exist subsequences {𝑥 𝑛 𝑖 }∞ and {𝑥 𝑚𝑖 }∞ , such that 𝑖=1 𝑖=1 {𝑥 𝑛 }∞ 𝑛=1
lim 𝑥 𝑛 𝑖 = 𝑎
𝑖→∞
and
lim 𝑥 𝑚𝑖 = 𝑏.
𝑖→∞
Given an 𝜖 > 0, there exists an 𝑀1 such that |𝑥 𝑛 𝑖 − 𝑎| < 𝜖/3 for all 𝑖 ≥ 𝑀1 and an 𝑀2 such that |𝑥 𝑚𝑖 − 𝑏| < 𝜖/3 for all 𝑖 ≥ 𝑀2 . There also exists an 𝑀3 such that |𝑥 𝑛 − 𝑥 𝑘 | < 𝜖/3 for all 𝑛, 𝑘 ≥ 𝑀3 . Let 𝑀 B max{𝑀1 , 𝑀2 , 𝑀3 }. If 𝑖 ≥ 𝑀, then 𝑛 𝑖 ≥ 𝑀 and 𝑚 𝑖 ≥ 𝑀. Hence, |𝑎 − 𝑏| = |𝑎 − 𝑥 𝑛 𝑖 + 𝑥 𝑛 𝑖 − 𝑥 𝑚𝑖 + 𝑥 𝑚𝑖 − 𝑏|
≤ |𝑎 − 𝑥 𝑛 𝑖 | + |𝑥 𝑛 𝑖 − 𝑥 𝑚𝑖 | + |𝑥 𝑚𝑖 − 𝑏| 𝜖 𝜖 𝜖 < + + = 𝜖. 3 3 3 As |𝑎 − 𝑏| < 𝜖 for all 𝜖 > 0, then 𝑎 = 𝑏 and the sequence converges.
□
Remark 2.4.6. The statement of this theorem is sometimes used to define the completeness property of the real numbers. We say a set is Cauchy-complete (or sometimes just complete) if every Cauchy sequence converges to something in the set. Above, we proved that as ℝ has the least-upper-bound property, ℝ is Cauchy-complete. One can construct ℝ via “completing” ℚ by “throwing in” just enough points to make all Cauchy sequences converge (we omit the details). The resulting field has the least-upper-bound property. The advantage of defining completeness via Cauchy sequences is that it generalizes to more abstract settings such as metric spaces, see chapter 7.
The Cauchy criterion is stronger than |𝑥 𝑛+1 − 𝑥 𝑛 | (or 𝑥 𝑛+𝑗 − 𝑥 𝑛 for a fixed 𝑗) going to zero as 𝑛 goes to infinity. When we get to the partial sums of the harmonic series (see Example 2.5.11 in the next section), we will have a sequence 𝑥 𝑛+1 − 𝑥 𝑛 = 1/𝑛 , such that ∞ yet {𝑥 𝑛 } 𝑛=1 is divergent. In fact, for that sequence, lim𝑛→∞ 𝑥 𝑛+𝑗 − 𝑥 𝑛 = 0 for every 𝑗 ∈ ℕ (compare Exercise 2.5.12). The key point in the definition of Cauchy is that 𝑛 and 𝑘 vary independently and can be arbitrarily far apart.
86
2.4.1
CHAPTER 2. SEQUENCES AND SERIES
Exercises 2
∞ Exercise 2.4.1: Prove that { 𝑛𝑛−1 2 } 𝑛=1 is Cauchy using directly the definition of Cauchy sequences.
be a sequence such that there exists a positive 𝐶 < 1 and for all 𝑛, Exercise 2.4.2: Let {𝑥 𝑛 }∞ 𝑛=1 |𝑥 𝑛+1 − 𝑥 𝑛 | ≤ 𝐶 |𝑥 𝑛 − 𝑥 𝑛−1 | . Prove that {𝑥 𝑛 }∞ is Cauchy. Hint: You can freely use the formula (for 𝐶 ≠ 1) 𝑛=1 1 + 𝐶 + 𝐶2 + · · · + 𝐶𝑛 =
1 − 𝐶 𝑛+1 . 1−𝐶
Exercise 2.4.3 (Challenging): Suppose 𝐹 is an ordered field that contains the rational numbers ℚ, such that ℚ is dense, that is: Whenever 𝑥, 𝑦 ∈ 𝐹 are such that 𝑥 < 𝑦, then there exists a 𝑞 ∈ ℚ such that 𝑥 < 𝑞 < 𝑦. Say a sequence {𝑥 𝑛 }∞ of rational numbers is Cauchy if given every 𝜖 ∈ ℚ with 𝜖 > 0, there exists an 𝑀 𝑛=1 such that for all 𝑛, 𝑘 ≥ 𝑀, we have |𝑥 𝑛 − 𝑥 𝑘 | < 𝜖. Suppose every Cauchy sequence of rational numbers has a limit in 𝐹. Prove that 𝐹 has the least-upper-bound property. Exercise 2.4.4: Let {𝑥 𝑛 }∞ and {𝑦𝑛 }∞ be sequences such that lim𝑛→∞ 𝑦𝑛 = 0. Suppose that for all 𝑘 ∈ ℕ 𝑛=1 𝑛=1 and for all 𝑚 ≥ 𝑘, we have |𝑥 𝑚 − 𝑥 𝑘 | ≤ 𝑦 𝑘 . Show that {𝑥 𝑛 }∞ is Cauchy. 𝑛=1
Exercise 2.4.5: Suppose a Cauchy sequence {𝑥 𝑛 }∞ is such that for every 𝑀 ∈ ℕ , there exists a 𝑘 ≥ 𝑀 𝑛=1 and an 𝑛 ≥ 𝑀 such that 𝑥 𝑘 < 0 and 𝑥 𝑛 > 0. Using simply the definition of a Cauchy sequence and of a convergent sequence, show that the sequence converges to 0. Exercise 2.4.6: Suppose |𝑥 𝑛 − 𝑥 𝑘 | ≤ 𝑛/𝑘 2 for all 𝑛 and 𝑘. Show that {𝑥 𝑛 }∞ is Cauchy. 𝑛=1 Exercise 2.4.7: Suppose {𝑥 𝑛 }∞ is a Cauchy sequence such that for infinitely many 𝑛, 𝑥 𝑛 = 𝑐. Using only 𝑛=1 the definition of Cauchy sequence prove that lim 𝑥 𝑛 = 𝑐. 𝑛→∞
Exercise 2.4.8: True or false, prove or find a counterexample: If {𝑥 𝑛 }∞ is a Cauchy sequence, then there 𝑛=1 exists an 𝑀 such that for all 𝑛 ≥ 𝑀, we have |𝑥 𝑛+1 − 𝑥 𝑛 | ≤ |𝑥 𝑛 − 𝑥 𝑛−1 |.
87
2.5. SERIES
2.5
Series
Note: 2 lectures A fundamental object in mathematics is that of a series. In fact, when the foundations of analysis were being developed, the motivation was to understand series. Understanding series is important in applications of analysis. For example, solutions to differential equations are often given as series, and differential equations are the basis for understanding almost all of modern science.
2.5.1
Definition
Definition 2.5.1. Given a sequence {𝑥 𝑛 }∞ , we write the formal object 𝑛=1 ∞ Õ
𝑥𝑛
𝑛=1
and call it a series. A series converges if the sequence {𝑠 𝑘 }∞ defined by 𝑘=1 𝑠𝑘 B
𝑘 Õ 𝑛=1
𝑥 𝑛 = 𝑥1 + 𝑥2 + · · · + 𝑥 𝑘
converges. The numbers 𝑠 𝑘 are called partial sums. If the series converges, we write ∞ Õ
𝑥 𝑛 = lim 𝑠 𝑘 . 𝑘→∞
𝑛=1
In this case, we cheat a little and treat ∞ 𝑛=1 𝑥 𝑛 as a number. Í ∞ If the sequence {𝑠 𝑘 } 𝑘=1 diverges, we say the series is divergent. In this case, ∞ 𝑛=1 𝑥 𝑛 is simply a formal object and not a number.
Í
In other words, for a convergent series, we have ∞ Õ
𝑥 𝑛 = lim
𝑛=1
𝑘→∞
𝑘 Õ
𝑥𝑛 .
𝑛=1
We only have this equality if the limit on the right actually exists. If the series does not converge, the right-hand side does not make sense (the limit does not exist). Therefore, be Í careful as ∞ 𝑛=1 𝑥 𝑛 means two different things (a notation for the series itself or the limit of the partial sums), and you must use context to distinguish. Remark 2.5.2. It is sometimes convenient to start the series at an index different from 1. For instance, we can write ∞ Õ 𝑛=0
𝑟𝑛 =
∞ Õ 𝑛=1
The left-hand side is more convenient to write.
𝑟 𝑛−1 .
88
CHAPTER 2. SEQUENCES AND SERIES
Í∞
Remark 2.5.3. It is common to write the series
𝑛=1
𝑥 𝑛 as
𝑥1 + 𝑥2 + 𝑥3 + · · · with the understanding that the ellipsis indicates a series and not a simple sum. We do not use this notation as it is the sort of informal notation that leads to mistakes in proofs. Example 2.5.4: The series
∞ Õ 1
2𝑛
𝑛=1
converges and the limit is 1. That is, ∞ Õ 1 𝑛=1
= lim
2𝑛
𝑘 Õ 1
𝑘→∞
𝑛=1
2𝑛
= 1.
Proof: We need the equality 𝑘 Õ 1 𝑛=1
!
2𝑛
+
1 = 1. 2𝑘
The equality is immediate when 𝑘 = 1. The proof for general 𝑘 follows by induction, which we leave to the reader. See Figure 2.7 for an illustration. 0
1/2 1/2
Figure 2.7: The equality
Í
𝑘 1 𝑛=1 2𝑛
1
+ 1/4 + 1/8 1/4
+
1 2𝑘
1/8
1/8
= 1 illustrated for 𝑘 = 3.
Let 𝑠 𝑘 be the partial sum. We write
𝑘 Õ 1 1 1 |1 − 𝑠 𝑘 | = 1 − = 𝑘 = 𝑘. 𝑛 2 2 2 𝑛=1
The sequence to 1.
1 ∞ 2𝑘
𝑘=1
, and therefore |1 − 𝑠 𝑘 |
∞
𝑘=1
, converges to zero. So, {𝑠 𝑘 }∞ converges 𝑘=1
Proposition 2.5.5. Suppose −1 < 𝑟 < 1. Then the geometric series ∞ Õ 𝑛=0
𝑟𝑛 =
1 . 1−𝑟
Í∞
𝑛=0
𝑟 𝑛 converges, and
89
2.5. SERIES Details of the proof are left as an exercise. The proof consists of showing 𝑘−1 Õ
𝑟𝑛 =
𝑛=0
1 − 𝑟𝑘 , 1−𝑟
and then taking the limit as 𝑘 goes to ∞. Geometric series is one of the most important series, and in fact it is one of the few series for which we can so explicitly find the limit. As for sequences we can talk about a tail of a series. Proposition 2.5.6. Let
Í∞
∞ Õ
𝑛=1
𝑥𝑛
𝑥 𝑛 be a series. Let 𝑀 ∈ ℕ . Then ∞ Õ
converges if and only if
𝑛=1
𝑥𝑛
converges.
𝑛=𝑀
Proof. We look at partial sums of the two series (for 𝑘 ≥ 𝑀) 𝑘 Õ
𝑥𝑛 =
𝑛=1
Note that
2.5.2
Í𝑀−1 𝑛=1
𝑀−1 Õ 𝑛=1
𝑘 Õ
! 𝑥𝑛 +
𝑥𝑛 .
𝑛=𝑀
𝑥 𝑛 is a fixed number. Use Proposition 2.2.5 to finish the proof.
□
Cauchy series
Definition 2.5.7. A series ∞ 𝑛=1 𝑥 𝑛 is said to be Cauchy or a Cauchy series if the sequence of ∞ partial sums {𝑠 𝑛 } 𝑛=1 is a Cauchy sequence.
Í
A sequence of real numbers converges if and only if it is Cauchy. Therefore, a series is Í convergent if and only if it is Cauchy. The series ∞ 𝑛=1 𝑥 𝑛 is Cauchy if for every 𝜖 > 0, there exists an 𝑀 ∈ ℕ , such that for every 𝑛 ≥ 𝑀 and 𝑘 ≥ 𝑀, we have
! ! 𝑘 𝑛 Õ Õ 𝑥𝑖 − 𝑥 𝑖 < 𝜖. 𝑖=1
𝑖=1
Without loss of generality we assume 𝑛 < 𝑘. Then we write
! ! 𝑘 𝑘 𝑛 Õ Õ Õ 𝑥𝑖 − 𝑥𝑖 = 𝑥 𝑖 < 𝜖. 𝑖=1
𝑖=1
𝑖=𝑛+1
We have proved the following simple proposition. Proposition 2.5.8. The series ∞ 𝑛=1 𝑥 𝑛 is Cauchy if for every 𝜖 > 0, there exists an 𝑀 ∈ ℕ such that for every 𝑛 ≥ 𝑀 and every 𝑘 > 𝑛,
Í
𝑘 Õ 𝑥 𝑖 < 𝜖. 𝑖=𝑛+1
90
CHAPTER 2. SEQUENCES AND SERIES
2.5.3
Basic properties
Proposition 2.5.9. Let and
Í∞
𝑛=1
𝑥 𝑛 be a convergent series. Then the sequence {𝑥 𝑛 }∞ is convergent 𝑛=1 lim 𝑥 𝑛 = 0.
𝑛→∞
Proof. Let 𝜖 > 0 be given. As that for every 𝑛 ≥ 𝑀,
Í∞
𝑛=1
𝑥 𝑛 is convergent, it is Cauchy. Thus we find an 𝑀 such
𝑛+1 Õ 𝜖> 𝑥 𝑖 = |𝑥 𝑛+1 | . 𝑖=𝑛+1
Hence for every 𝑛 ≥ 𝑀 + 1, we have |𝑥 𝑛 | < 𝜖.
□
𝑛 Example 2.5.10: If 𝑟 ≥ 1 or 𝑟 ≤ −1, then the geometric series ∞ 𝑛=0 𝑟 diverges. 𝑛 Proof: |𝑟 𝑛 | = |𝑟| ≥ 1𝑛 = 1. The terms do not go to zero and the series cannot converge.
Í
So if a series converges, the terms of the series go to zero. The implication, however, goes only one way. Let us give an example. 1 1 Example 2.5.11: The series ∞ 𝑛=1 /𝑛 diverges (despite the fact that lim𝑛→∞ /𝑛 = 0). This is * the famous harmonic series . Proof: We will show that the sequence of partial sums is unbounded, and hence cannot converge. Write the partial sums 𝑠 𝑛 for 𝑛 = 2 𝑘 as:
Í
𝑠1 = 1, 𝑠2 = (1) +
1 , 2
1 1 1 𝑠4 = (1) + + + , 2 3 4
1 1 1 1 1 1 1 𝑠8 = (1) + + + + + + + , 2 3 4 5 6 7 8 .. .
𝑠2𝑘
𝑘 2𝑖 Õ © Õ 1ª =1+ ®. 𝑚 𝑖=1 «𝑚=2𝑖−1 +1 ¬
Notice 1/3 + 1/4 ≥ 1/4 + 1/4 = 1/2 and 1/5 + 1/6 + 1/7 + 1/8 ≥ 1/8 + 1/8 + 1/8 + 1/8 = 1/2. More generally 2𝑘 Õ 𝑚=2 𝑘−1 +1 *The
1 ≥ 𝑚
2𝑘 Õ 𝑚=2 𝑘−1 +1
1 1 1 = (2 𝑘−1 ) 𝑘 = . 𝑘 2 2 2
divergence of the harmonic series was known long before the theory of series was made rigorous. The proof we give is the earliest proof and was given by Nicole Oresme (1323?–1382).
91
2.5. SERIES Therefore, 𝑠2𝑘
𝑘 𝑘 2𝑖 Õ Õ 1 𝑘 © Õ 1ª =1+ . =1+ ® ≥ 1+ 𝑚 2 2 𝑖=1 𝑖=1 «𝑚=2𝑖−1 +1 ¬
As { 𝑘/2}∞ is unbounded by the Archimedean property, that means that {𝑠2𝑘 }∞ is un𝑘=1 𝑘=1 ∞ ∞ bounded, and therefore {𝑠 𝑛 } 𝑛=1 is unbounded. Hence {𝑠 𝑛 } 𝑛=1 diverges, and consequently Í∞ 1 𝑛=1 /𝑛 diverges. Convergent series are linear. That is, we can multiply them by constants and add them and these operations are done term by term. Proposition 2.5.12 (Linearity of series). Let 𝛼 ∈ ℝ and series. Then (i)
Í∞
𝑛=1
𝛼𝑥 𝑛 = 𝛼
∞ Õ
𝑛=1
Í∞
𝑛=1 𝑥 𝑛 and
Í∞
𝑛=1
𝑦𝑛 be convergent
𝛼𝑥 𝑛 is a convergent series and ∞ Õ
(ii)
Í∞
𝑛=1 (𝑥 𝑛
𝑥𝑛 .
𝑛=1
+ 𝑦𝑛 ) is a convergent series and ∞ Õ
∞ Õ
𝑛=1
𝑛=1
(𝑥 𝑛 + 𝑦𝑛 ) =
! 𝑥𝑛 +
∞ Õ
! 𝑦𝑛 .
𝑛=1
Proof. For the first item, we simply write the 𝑘th partial sum 𝑘 Õ
𝛼𝑥 𝑛 = 𝛼
𝑛=1
𝑘 Õ
! 𝑥𝑛 .
𝑛=1
We look at the right-hand side and note that the constant multiple of a convergent sequence is convergent. Hence, we take the limit of both sides to obtain the result. For the second item we also look at the 𝑘th partial sum 𝑘 Õ
𝑘 Õ
𝑛=1
𝑛=1
(𝑥 𝑛 + 𝑦𝑛 ) =
! 𝑥𝑛 +
𝑘 Õ
! 𝑦𝑛 .
𝑛=1
We look at the right-hand side and note that the sum of convergent sequences is convergent. Hence, we take the limit of both sides to obtain the proposition. □ An example of a useful application of the first item is the following formula. If |𝑟| < 1 and 𝑖 ∈ ℕ , then ∞ Õ 𝑟𝑖 𝑟𝑛 = . 1−𝑟 𝑛=𝑖
92
CHAPTER 2. SEQUENCES AND SERIES
The formula follows by using the geometric series and multiplying by 𝑟 𝑖 : 𝑟
𝑖
∞ Õ
𝑛
𝑟 =
𝑛=0
∞ Õ
𝑟
𝑛+𝑖
𝑛=0
=
∞ Õ
𝑟𝑛.
𝑛=𝑖
Multiplying series is not as simple as adding, see the next section. It is not true, of course, that we multiply term by term. That strategy does not work even for finite sums: (𝑎 + 𝑏)(𝑐 + 𝑑) ≠ 𝑎𝑐 + 𝑏𝑑.
2.5.4
Absolute convergence
As monotone sequences are easier to work with than arbitrary sequences, it is usually Í easier to work with series ∞ 𝑛=1 𝑥 𝑛 , where 𝑥 𝑛 ≥ 0 for all 𝑛. The sequence of partial sums is then monotone increasing and converges if it is bounded above. Let us formalize this statement as a proposition. Proposition 2.5.13. If 𝑥 𝑛 ≥ 0 for all 𝑛, then partial sums is bounded above.
Í∞
𝑛=1
𝑥 𝑛 converges if and only if the sequence of
As the limit of a monotone increasing sequence is the supremum, then when 𝑥 𝑛 ≥ 0 for all 𝑛, we have the inequality 𝑘 Õ 𝑛=1
𝑥𝑛 ≤
∞ Õ
𝑥𝑛 .
𝑛=1
If we allow infinite limits, the inequality still holds even when the series diverges to infinity, although in that case it is not terribly useful. We will see that the following common criterion for convergence of series has big implications for how the series can be manipulated. ∞ Definition 2.5.14. A series ∞ 𝑛=1 𝑥 𝑛 converges absolutely if the series 𝑛=1 |𝑥 𝑛 | converges. If a series converges, but does not converge absolutely, we say it converges conditionally.
Í
Proposition 2.5.15. If the series
Í
Í∞
𝑛=1
𝑥 𝑛 converges absolutely, then it converges.
Proof. A series is convergent if and only if it is Cauchy. Hence suppose ∞ 𝑛=1 |𝑥 𝑛 | is Cauchy. That is, for every 𝜖 > 0, there exists an 𝑀 such that for all 𝑘 ≥ 𝑀 and all 𝑛 > 𝑘, we have
Í
𝑛 Õ 𝑖=𝑘+1
𝑛 Õ = |𝑥 𝑖 | |𝑥 𝑖 | < 𝜖. 𝑖=𝑘+1
We apply the triangle inequality for a finite sum to obtain
𝑛 𝑛 Õ Õ 𝑥𝑖 ≤ |𝑥 𝑖 | < 𝜖. 𝑖=𝑘+1
Hence
Í∞
𝑛=1
𝑖=𝑘+1
𝑥 𝑛 is Cauchy, and therefore it converges.
□
93
2.5. SERIES
∞ ∞ If ∞ 𝑛=1 𝑥 𝑛 converges absolutely, the limits of 𝑛=1 𝑥 𝑛 and 𝑛=1 |𝑥 𝑛 | are generally different. Computing one does not help us compute the other. However, the computation above leads to a useful inequality for absolutely convergent series, a series version of the triangle inequality, a proof of which we leave as an exercise:
Í
Í
Í
∞ ∞ Õ Õ 𝑥 ≤ |𝑥 𝑖 | . 𝑖 𝑖=1
𝑖=1
Absolutely convergent series have many wonderful properties. For example, absolutely convergent series can be rearranged arbitrarily, or we can multiply such series together easily. Conditionally convergent series on the other hand often do not behave as one would expect. See the next section. We leave as an exercise to show that ∞ Õ (−1)𝑛
𝑛
𝑛=1
converges, although the reader should finish this section before trying. On the other hand, we proved ∞ Õ 1 𝑛=1
diverges. Therefore,
2.5.5
Í∞
𝑛=1
(−1)𝑛 𝑛
𝑛
is a conditionally convergent series.
Comparison test and the 𝑝-series
We noted above that for a series with positive terms to converge the terms not only have to go to zero, but they have to go to zero “fast enough.” If we know about convergence of a certain series, we can use the following comparison test to see if the terms of another series go to zero “fast enough.” Proposition 2.5.16 (Comparison test). Let for all 𝑛 ∈ ℕ . (i) If (ii) If
Í∞
𝑛=1 Í∞ 𝑛=1
𝑦𝑛 converges, then so does
𝑥 𝑛 diverges, then so does
Í∞
𝑛=1
Í∞
𝑛=1
Í∞
𝑛=1
𝑥 𝑛 and
Í∞
𝑛=1
𝑦𝑛 be series such that 0 ≤ 𝑥 𝑛 ≤ 𝑦𝑛
𝑥𝑛 .
𝑦𝑛 .
Proof. As the terms of the series are all nonnegative, the sequences of partial sums are both monotone increasing. Since 𝑥 𝑛 ≤ 𝑦𝑛 for all 𝑛, the partial sums satisfy for all 𝑘 𝑘 Õ 𝑛=1
𝑥𝑛 ≤
𝑘 Õ 𝑛=1
𝑦𝑛 .
(2.1)
94
CHAPTER 2. SEQUENCES AND SERIES
If the series ∞ the 𝑛=1 𝑦 𝑛 converges, the partial sums for the series are bounded. Therefore, Í𝑘 right-hand side of (2.1) is bounded for all 𝑘; there exists some 𝐵 ∈ ℝ such that 𝑛=1 𝑦𝑛 ≤ 𝐵 for all 𝑘, and so
Í
𝑘 Õ 𝑛=1
Í∞
𝑥𝑛 ≤
𝑘 Õ 𝑛=1
𝑦𝑛 ≤ 𝐵.
Hence the partial sums for 𝑛=1 𝑥 𝑛 are also bounded. Since the partial sums are a monotone increasing sequence they are convergent. The first item is thus proved. Í On the other hand if ∞ sums must be unbounded 𝑛=1 𝑥 𝑛 diverges, the sequence of partial Í since it is monotone increasing. That is, the partial sums for ∞ 𝑛=1 𝑥 𝑛 are eventually bigger than any real number. Putting this together with (2.1) we see that for every 𝐵 ∈ ℝ, there is a 𝑘 such that 𝐵≤ Hence the partial sums for
Í∞
𝑛=1
𝑘 Õ 𝑛=1
𝑥𝑛 ≤
𝑘 Õ
𝑦𝑛 .
𝑛=1
𝑦𝑛 are also unbounded, and
Í∞
𝑛=1
𝑦𝑛 also diverges.
□
A useful series to use with the comparison test is the 𝑝-series* . Proposition 2.5.17 (𝑝-series or the 𝑝-test). For 𝑝 ∈ ℝ, the series ∞ Õ 1 𝑛=1
𝑛𝑝
converges if and only if 𝑝 > 1. ∞ 1 1 Proof. First suppose 𝑝 ≤ 1. As 𝑛 ≥ 1, we have 𝑛1𝑝 ≥ 𝑛1 . Since ∞ 𝑛=1 𝑛 diverges, 𝑛=1 𝑛 𝑝 must diverge for all 𝑝 ≤ 1 by the comparison test. Now suppose 𝑝 > 1. We proceed as we did for the harmonic series, but instead of showing that the sequence of partial sums is unbounded, we show that it is bounded. The terms of the series are positive, so the sequence of partial sums is monotone increasing and converges if it is bounded above. Let 𝑠 𝑛 denote the 𝑛th partial sum.
Í
Í
𝑠1 = 1, 1 1 𝑠3 = (1) + 𝑝 + 𝑝 , 2 3
1 1 1 1 1 1 𝑠7 = (1) + 𝑝 + 𝑝 + 𝑝 + 𝑝 + 𝑝 + 𝑝 , 2 3 4 5 6 7 .. .
𝑠2𝑘 −1 *We
𝑖+1 −1 𝑘−1 2Õ Õ 1 ª © =1+ ®. 𝑚𝑝 𝑖 𝑖=1 « 𝑚=2 ¬
have not yet defined 𝑥 𝑝 for 𝑥 > 0 and an arbitrary 𝑝 ∈ ℝ. The definition is 𝑥 𝑝 B exp(𝑝 ln 𝑥). We will define the logarithm and the exponential in §5.4. For now you can just think of rational 𝑝 where 𝑘
𝑥 𝑘/𝑚 = (𝑥 1/𝑚 ) . See also Exercise 1.2.17.
95
2.5. SERIES
Instead of estimating from below, we estimate from above. As 𝑝 is positive, then 2𝑝 < 3𝑝 , and hence 21𝑝 + 31𝑝 < 21𝑝 + 21𝑝 . Similarly, 41𝑝 + 51𝑝 + 61𝑝 + 71𝑝 < 41𝑝 + 41𝑝 + 41𝑝 + 41𝑝 . Therefore, for all 𝑘 ≥ 2, 𝑠2𝑘 −1
𝑖+1 −1 𝑘−1 2Õ Õ 1 ª © =1+ ® 𝑚𝑝 𝑖 𝑖=1 « 𝑚=2 ¬ 𝑖+1 −1 𝑘−1 2 Õ© Õ 1 ª < 1+ ® 𝑖 )𝑝 (2 𝑖 𝑖=1 « 𝑚=2 ¬ 𝑘−1 Õ 2𝑖
=1+
=1+ As 𝑝 > 1, then
1 2𝑝−1
𝑖=1 𝑘−1 Õ
(2𝑖 ) 1
𝑖=1
𝑝
𝑖
2𝑝−1
.
< 1. Proposition 2.5.5 says that
𝑖 ∞ Õ 1 2𝑝−1
𝑖=1
converges. Thus, 𝑠2𝑘 −1 < 1 +
𝑖 𝑘−1 Õ 1 𝑖=1
2𝑝−1
For every 𝑛 there is a 𝑘 ≥ 2 such that 𝑛 ≤ 𝑠 𝑛 ≤ 𝑠2𝑘 −1 . So for all 𝑛,
2𝑘
𝑠𝑛 < 1 +
≤ 1+
𝑖 ∞ Õ 1 𝑖=1
2𝑝−1
.
− 1, and as {𝑠 𝑛 }∞ is a monotone sequence, 𝑛=1
𝑖 ∞ Õ 1 𝑖=1
2𝑝−1
Thus the sequence of partial sums is bounded, and the series converges.
□
Neither the 𝑝-series test nor the comparison test tell us what the sum converges to. They only tell us that a limit of the partial sums exists. For instance, while we know that Í∞ Í∞ * 1 2 𝜋2 1 𝑝 𝑛=1 /𝑛 converges, it is far harder to find that the limit is /6. If we treat 𝑛=1 /𝑛 as a function of 𝑝, we get the so-called Riemann 𝜁 (zeta) function. Understanding the behavior of this function contains one of the most famous unsolved problems in mathematics today and has applications in seemingly unrelated areas such as modern cryptography. 1 Example 2.5.18: The series ∞ 𝑛=1 𝑛 2 +1 converges. Í Proof: First, 𝑛 21+1 < 𝑛12 for all 𝑛 ∈ ℕ . The series ∞ 𝑛=1 Í 1 Therefore, by the comparison test, ∞ converges. 𝑛=1 𝑛 2 +1
Í
* Demonstration
famous.
1 𝑛2
converges by the 𝑝-series test.
of this fact is what made the Swiss mathematician Leonhard Paul Euler (1707–1783)
96
CHAPTER 2. SEQUENCES AND SERIES
2.5.6
Ratio test 𝑛+1
𝑟 𝑛 Suppose 𝑟 > 0. The ratio of two subsequent terms in the geometric series ∞ 𝑛=0 𝑟 is 𝑟 𝑛 = 𝑟, and the series converges whenever 𝑟 < 1. Just as for sequences, this fact can be generalized to more arbitrary series as long as we have such a ratio “in the limit.” We then compare the tail of a series to the geometric series.
Í
Proposition 2.5.19 (Ratio test). Let
Í∞
𝑛=1
𝐿 B lim
𝑥 𝑛 be a series, 𝑥 𝑛 ≠ 0 for all 𝑛, and such that
𝑛→∞
|𝑥 𝑛+1 | |𝑥 𝑛 |
(i) If 𝐿 < 1, then
Í∞
𝑥 𝑛 converges absolutely.
(ii) If 𝐿 > 1, then
Í∞
𝑥 𝑛 diverges.
𝑛=1 𝑛=1
exists.
Although the test as stated is often sufficient, it can be strengthened a bit, see Exercise 2.5.6. Proof. If 𝐿 > 1, then Lemma 2.2.12 says that the sequence {𝑥 𝑛 }∞ diverges. Since it is a 𝑛=1 necessary condition for the convergence of series that the terms go to zero, we know that Í∞ 𝑛=1 𝑥 𝑛 must diverge. Í Thus suppose 𝐿 < 1. We will argue that ∞ 𝑛=1 |𝑥 𝑛 | must converge. The proof is similar to that of Lemma 2.2.12. Of course 𝐿 ≥ 0. Pick 𝑟 such that 𝐿 < 𝑟 < 1. As 𝑟 − 𝐿 > 0, there exists an 𝑀 ∈ ℕ such that for all 𝑛 ≥ 𝑀,
|𝑥 𝑛+1 | |𝑥 𝑛 | − 𝐿 < 𝑟 − 𝐿. Therefore,
|𝑥 𝑛+1 | < 𝑟. |𝑥 𝑛 |
For 𝑛 > 𝑀 (that is for 𝑛 ≥ 𝑀 + 1), write |𝑥 𝑛 | = |𝑥 𝑀 |
|𝑥 𝑛 | |𝑥 𝑀+1 | |𝑥 𝑀+2 | ··· < |𝑥 𝑀 | 𝑟𝑟 · · · 𝑟 = |𝑥 𝑀 | 𝑟 𝑛−𝑀 = (|𝑥 𝑀 | 𝑟 −𝑀 )𝑟 𝑛 . |𝑥 𝑀 | |𝑥 𝑀+1 | |𝑥 𝑛−1 |
For 𝑘 > 𝑀, write the partial sum as 𝑘 Õ 𝑛=1
|𝑥 𝑛 | = < =
𝑀 Õ 𝑛=1 𝑀 Õ 𝑛=1 𝑀 Õ 𝑛=1
! |𝑥 𝑛 | +
! |𝑥 𝑛 | +
𝑘 Õ
!
𝑛=𝑀+1 𝑘 Õ
|𝑥 𝑛 |
! (|𝑥 𝑀 | 𝑟 −𝑀 )𝑟 𝑛
𝑛=𝑀+1
! |𝑥 𝑛 | + |𝑥 𝑀 | 𝑟
−𝑀
𝑘 Õ 𝑛=𝑀+1
! 𝑟𝑛 .
97
2.5. SERIES
∞ 𝑛 𝑛 As 0 < 𝑟 < 1, the geometric series ∞ 𝑛=0 𝑟 converges, so 𝑛=𝑀+1 𝑟 converges as well. We take the limit as 𝑘 goes to infinity on the right-hand side above to obtain
Í
𝑘 Õ 𝑛=1
Í
𝑀 Õ
|𝑥 𝑛 |
0 and 𝑏 𝑛 > 0 for all 𝑛, and 𝑎𝑛 < ∞, 𝑛→∞ 𝑏 𝑛
0 < lim then either
Í∞
𝑛=1
𝑎 𝑛 and
Í∞
𝑛=1 𝑏 𝑛
Exercise 2.5.12: Let 𝑥 𝑛 B
Í𝑛
𝑖=1
Cauchy.
both converge or both diverge. 1/𝑖 .
Show that for every 𝑘, we get lim |𝑥 𝑛+𝑘 − 𝑥 𝑛 | = 0, yet {𝑥 𝑛 }∞ is not 𝑛=1 𝑛→∞
Exercise 2.5.13: Let 𝑠 𝑘 be the 𝑘th partial sum of
Í∞
𝑛=1
𝑥𝑛 .
a) Suppose there exists an 𝑚 ∈ ℕ such that lim 𝑠 𝑚 𝑘 exists and lim 𝑥 𝑛 = 0. Show that 𝑘→∞
𝑛→∞
b) Find an example where lim 𝑠 2𝑘 exists and lim 𝑥 𝑛 ≠ 0 (and therefore 𝑘→∞
𝑛→∞
Í∞
𝑛=1
Í∞
𝑛=1
𝑥 𝑛 converges.
𝑥 𝑛 diverges).
c) (Challenging) Find an example where lim𝑛→∞ 𝑥 𝑛 = 0, and there exists a subsequence {𝑠 𝑘 𝑖 }∞ such that 𝑖=1 Í lim 𝑠 𝑘 𝑖 exists, but ∞ 𝑥 still diverges. 𝑛=1 𝑛 𝑖→∞
99
2.5. SERIES Exercise 2.5.14: Suppose
Í∞
𝑛=1
𝑥 𝑛 converges and 𝑥 𝑛 ≥ 0 for all 𝑛. Prove that
Í∞
𝑛=1
𝑥 𝑛2 converges.
Exercise 2.5.15 (Challenging): Suppose {𝑥 𝑛 }∞ is a decreasing sequence of positive numbers. The proof of 𝑛=1 convergence/divergence for the 𝑝-series generalizes. Prove the so-called Cauchy condensation principle: ∞ Õ
𝑥𝑛
∞ Õ
converges if and only if
𝑛=1
2 𝑛 𝑥 2𝑛
converges.
𝑛=1
Exercise 2.5.16: Use the Cauchy condensation principle (see Exercise 2.5.15) to decide the convergence of a)
∞ Õ ln 𝑛 𝑛=1
𝑛2
b)
∞ Õ 𝑛=2
1 𝑛 ln 𝑛
c)
∞ Õ 𝑛=2
1 2
𝑛(ln 𝑛)
d)
∞ Õ
1
𝑛=2
𝑛(ln 𝑛)(ln ln 𝑛)2
Note that only the tails of some of these series satisfy the hypotheses of the principle; you should argue why that is sufficient. Hint: Feel free to use the identity ln(2𝑛 ) = 𝑛 ln 2. Exercise 2.5.17 (Challenging): Prove Abel’s theorem: Theorem. Suppose ∞ whose sequence of partial sums is bounded, {𝜆𝑛 }∞ is a 𝑛=1 𝑥 𝑛 is a series 𝑛=1 Í Í∞ sequence with lim𝑛→∞ 𝜆𝑛 = 0, and 𝑛=1 |𝜆𝑛+1 − 𝜆𝑛 | is convergent. Then ∞ 𝜆 𝑥 is convergent. 𝑛=1 𝑛 𝑛
Í
100
2.6
CHAPTER 2. SEQUENCES AND SERIES
More on series
Note: up to 2–3 lectures (optional, can safely be skipped or covered partially)
2.6.1
Root test
A test similar to the ratio test is the so-called root test. The proof of this test is similar. Again, the idea is to generalize what happens for the geometric series. Proposition 2.6.1 (Root test). Let
Í∞
𝑛=1
𝑥 𝑛 be a series and let
𝐿 B lim sup |𝑥 𝑛 | 1/𝑛 . 𝑛→∞
(i) If 𝐿 < 1, then
Í∞
𝑥 𝑛 converges absolutely.
(ii) If 𝐿 > 1, then
Í∞
𝑥 𝑛 diverges.
𝑛=1 𝑛=1
such that 𝐿 = lim 𝑘→∞ |𝑥 𝑛 𝑘 | 1/𝑛 𝑘 . Proof. If 𝐿 > 1, then there exists* a subsequence {𝑥 𝑛 𝑘 }∞ 𝑘=1 Let 𝑟 be such that 𝐿 > 𝑟 > 1. There exists an 𝑀 such that for all 𝑘 ≥ 𝑀, we have |𝑥 𝑛 𝑘 | 1/𝑛 𝑘 > 𝑟 > 1, or in other words |𝑥 𝑛 𝑘 | > 𝑟 𝑛 𝑘 > 1. The subsequence {|𝑥 𝑛 𝑘 |}∞ , and 𝑘=1 ∞ therefore also {|𝑥 𝑛 |} 𝑛=1 , cannot possibly converge to zero, and so the series diverges. Now suppose 𝐿 < 1. Pick 𝑟 such that 𝐿 < 𝑟 < 1. By definition of limit supremum, there is an 𝑀 such that for all 𝑛 ≥ 𝑀, sup |𝑥 𝑘 | 1/𝑘 : 𝑘 ≥ 𝑛 < 𝑟.
Therefore, for all 𝑛 ≥ 𝑀,
|𝑥 𝑛 | 1/𝑛 < 𝑟,
or in other words
|𝑥 𝑛 | < 𝑟 𝑛 .
Let 𝑘 > 𝑀, and estimate the 𝑘th partial sum: 𝑘 Õ 𝑛=1
|𝑥 𝑛 | =
𝑀 Õ 𝑛=1
𝑘 Õ
! |𝑥 𝑛 | +
As 0 < 𝑟 < 1, the geometric series 𝑘 Õ 𝑛=1
!
𝑛=𝑀+1
Í∞
𝑛=𝑀+1
|𝑥 𝑛 | ≤
𝑛=1
𝑛=1
Í∞
𝑛=1
! |𝑥 𝑛 | +
𝑘 Õ
! |𝑥 𝑛 | +
𝑟 𝑛 converges to
𝑀 Õ
Thus the sequence of partial sums of Í Therefore, ∞ 𝑛=1 𝑥 𝑛 converges absolutely. * In
|𝑥 𝑛 | ≤
𝑀 Õ
𝑟 𝑀+1 1−𝑟 .
! 𝑟𝑛 .
𝑛=𝑀+1
As everything is positive,
𝑟 𝑀+1 . 1−𝑟
|𝑥 𝑛 | is bounded, and the series converges.
□
case 𝐿 = ∞, see Exercise 2.3.20. Alternatively, note that if 𝐿 = ∞, then |𝑥 𝑛 | unbounded.
1/𝑛 ∞
𝑛=1
and thus {𝑥 𝑛 }∞ is 𝑛=1
101
2.6. MORE ON SERIES
2.6.2
Alternating series test
The tests we have seen so far only addressed absolute convergence. The following test gives a large supply of conditionally convergent series. Proposition 2.6.2 (Alternating series). Let {𝑥 𝑛 }∞ be a monotone decreasing sequence of 𝑛=1 positive real numbers such that lim 𝑥 𝑛 = 0. Then 𝑛→∞
∞ Õ 𝑛=1
Proof. Let 𝑠 𝑚 B 𝑠2𝑘 =
Í𝑚
𝑘=1 (−1)
2𝑘 Õ 𝑛=1
𝑛
(−1)𝑛 𝑥 𝑛
converges.
𝑥 𝑛 be the 𝑚th partial sum. Then
𝑛
(−1) 𝑥 𝑛 = (−𝑥1 + 𝑥2 ) + · · · + (−𝑥2𝑘−1 + 𝑥 2𝑘 ) =
𝑘 Õ ℓ =1
(−𝑥2ℓ −1 + 𝑥 2ℓ ).
The sequence {𝑥 𝑛 }∞ is decreasing, so (−𝑥 2ℓ −1 + 𝑥2ℓ ) ≤ 0 for all ℓ . Thus, the subsequence 𝑛=1 ∞ {𝑠2𝑘 } 𝑘=1 of partial sums is a decreasing sequence. Similarly, (𝑥 2ℓ − 𝑥2ℓ +1 ) ≥ 0, and so 𝑠2𝑘 = −𝑥1 + (𝑥2 − 𝑥3 ) + · · · + (𝑥2𝑘−2 − 𝑥2𝑘−1 ) + 𝑥2𝑘 ≥ −𝑥1 .
The intuition behind the bound 0 ≥ 𝑠2𝑘 ≥ −𝑥 1 is illustrated in Figure 2.8. 𝑥2 𝑥4 𝑥6 −𝑥 7 + 𝑥 8 −𝑥 5 + 𝑥 6 −𝑥 3 + 𝑥 4 −𝑥 1 + 𝑥 2
−𝑥 1
−𝑥 2
−𝑥 3
−𝑥 4
−𝑥 5
−𝑥 6
𝑥8 −𝑥 7 −𝑥 8
Figure 2.8: Showing that 0 ≥ 𝑠2𝑘 ≥ −𝑥 1 where 𝑘 = 4 for an alternating series.
As {𝑠2𝑘 }∞ is decreasing and bounded below, it converges. Let 𝑎 B lim 𝑘→∞ 𝑠2𝑘 . We 𝑘=1 wish to show that lim𝑚→∞ 𝑠 𝑚 = 𝑎 (and not just for the subsequence). Given 𝜖 > 0, pick 𝑀 such that |𝑠2𝑘 − 𝑎| < 𝜖/2 whenever 𝑘 ≥ 𝑀. Since lim𝑛→∞ 𝑥 𝑛 = 0, we also make 𝑀 possibly larger to obtain 𝑥 2𝑘+1 < 𝜖/2 whenever 𝑘 ≥ 𝑀. Suppose 𝑚 ≥ 2𝑀 + 1. If 𝑚 = 2𝑘, then 𝑘 ≥ 𝑀 + 1/2 ≥ 𝑀 and |𝑠 𝑚 − 𝑎| = |𝑠2𝑘 − 𝑎| < 𝜖/2 < 𝜖. If 𝑚 = 2𝑘 + 1, then also 𝑘 ≥ 𝑀. Notice 𝑠2𝑘+1 = 𝑠2𝑘 − 𝑥2𝑘+1 . Thus |𝑠 𝑚 − 𝑎| = |𝑠2𝑘+1 − 𝑎| = |𝑠 2𝑘 − 𝑎 − 𝑥2𝑘+1 | ≤ |𝑠2𝑘 − 𝑎| + 𝑥2𝑘+1 < 𝜖/2 + 𝜖/2 = 𝜖.
□
102
CHAPTER 2. SEQUENCES AND SERIES
Notably, there exist conditionally convergent series where the absolute values of the terms go to zero arbitrarily slowly. The series ∞ Õ (−1)𝑛
𝑛𝑝
𝑛=1
converges for arbitrarily small 𝑝 > 0, but it does not converge absolutely when 𝑝 ≤ 1.
2.6.3
Rearrangements
Absolutely convergent series behave as we imagine they should. For example, absolutely convergent series can be summed in any order whatsoever. Nothing of the sort holds for conditionally convergent series (see Example 2.6.4 and Exercise 2.6.3). Consider a series ∞ Õ
𝑥𝑛 .
𝑛=1
Given a bijective function 𝜎 : ℕ → ℕ , the corresponding rearrangement is the series: ∞ Õ
𝑥 𝜎(𝑘) .
𝑘=1
We simply sum the series in a different order. Proposition 2.6.3. Let ∞ be an absolutely convergent series converging to a number 𝑥. Let 𝑛=1 𝑥 𝑛Í 𝜎 : ℕ → ℕ be a bijection. Then ∞ 𝑛=1 𝑥 𝜎(𝑛) is absolutely convergent and converges to 𝑥.
Í
In other words, a rearrangement of an absolutely convergent series converges (absolutely) to the same number.
Proof. Let 𝜖 > 0 be given. As
Í∞
𝑛=1
𝑥 𝑛 is absolutely convergent, take 𝑀 such that
! 𝑀 Õ 𝜖 𝑥𝑛 − 𝑥 < 2 𝑛=1
and
∞ Õ 𝑛=𝑀+1
|𝑥 𝑛 |
𝑀 ¬ « ! 𝑀 𝑁 Õ Õ 𝑥 𝜎(𝑛) ≤ 𝑥𝑛 − 𝑥 + 𝑛=1
𝑛=1 𝜎(𝑛)>𝑀
! 𝑄 𝑀 Õ Õ ≤ 𝑥𝑛 − 𝑥 + |𝑥 𝑛 | 𝑛=1
𝑁 be where we first switch back from even to odd. Then ′ −1 𝑁−1 𝑁 Õ Õ 1 1 𝐿+ ≥ 𝑎 𝜎(𝑛) > 𝑎 𝜎(𝑛) > 𝐿 − . 𝜎(𝑁) 𝜎(𝑁 ′)
𝑛=1
𝑛=1
104
CHAPTER 2. SEQUENCES AND SERIES
Similarly for switching in the other direction. Therefore, the sum up to 𝑁 ′ − 1 is within 1 of 𝐿. As we switch infinitely many times, 𝜎(𝑁) → ∞ and 𝜎(𝑁 ′) → ∞. Hence min{𝜎(𝑁),𝜎(𝑁 ′ )} ∞ Õ
𝑎 𝜎(𝑛) =
𝑛=1
∞ Õ (−1)𝜎(𝑛)+1 𝑛=1
𝜎(𝑛)
= 𝐿.
Here is an example to illustrate the proof. Suppose 𝐿 = 1.2, then the order is 1 + 1/3 − 1/2 + 1/5 + 1/7 + 1/9 − 1/4 + 1/11 + 1/13 − 1/6 + 1/15 + 1/17 + 1/19 − 1/8 + · · · . At this point we are no more than 1/8 from the limit. See Figure 2.9.
Figure 2.9: The first 14 partial sums of the rearrangement converging to 1.2.
2.6.4
Multiplication of series
As we have already mentioned, multiplication of series is somewhat harder than addition. If at least one of the series converges absolutely, then we can use the following theorem. For this result, it is convenient to start the series at 0, rather than at 1. ∞ Theorem 2.6.5 (Mertens’ theorem* ). Suppose ∞ 𝑛=0 𝑎 𝑛 and 𝑛=0 𝑏 𝑛 are two convergent series, converging to 𝐴 and 𝐵 respectively. Suppose at least one of the series converges absolutely. Define
Í
Í
𝑐 𝑛 B 𝑎 0 𝑏 𝑛 + 𝑎1 𝑏 𝑛−1 + · · · + 𝑎 𝑛 𝑏 0 = Then the series
Í∞
converges to 𝐴𝐵.
The series
Í∞
is called the Cauchy product of
* Proved
𝑛=0 𝑐 𝑛
𝑛=0 𝑐 𝑛
Í∞
𝑛 Õ
𝑎 𝑖 𝑏 𝑛−𝑖 .
𝑖=0
𝑛=0
𝑎 𝑛 and
by the German mathematician Franz Mertens (1840–1927).
Í∞
𝑛=0 𝑏 𝑛 .
105
2.6. MORE ON SERIES
Proof. Suppose ∞ 𝑛=0 𝑎 𝑛 converges absolutely, and let 𝜖 > 0 be given. In this proof instead of picking complicated estimates just to make the final estimate come out as less than 𝜖, let us simply obtain an estimate that depends on 𝜖 and can be made arbitrarily small. Write
Í
𝐴𝑚 B
𝑚 Õ 𝑛=0
We rearrange the 𝑚th partial sum of
𝑎𝑛 ,
𝐵𝑚 B
𝑚 Õ
𝑏𝑛 .
𝑛=0
Í∞
𝑛=0 𝑐 𝑛 :
! ! 𝑚 𝑚 Õ 𝑛 Õ Õ 𝑐 − 𝐴𝐵 = 𝑎 𝑏 − 𝐴𝐵 𝑛 𝑖 𝑛−𝑖 𝑛=0 𝑛=0 𝑖=0 ! 𝑚 Õ = 𝐵𝑛 𝑎 𝑚−𝑛 − 𝐴𝐵 𝑛=0 ! 𝑚 Õ = (𝐵𝑛 − 𝐵)𝑎 𝑚−𝑛 + 𝐵𝐴𝑚 − 𝐴𝐵 𝑛=0 ! 𝑚 Õ ≤
𝑛=0
|𝐵𝑛 − 𝐵| |𝑎 𝑚−𝑛 | + |𝐵| |𝐴𝑚 − 𝐴|
We can surely make the second term on the right-hand side go to zero. The trick is to handle the first term. Pick 𝐾 such that for all 𝑚 ≥ 𝐾, we have |𝐴𝑚 − 𝐴| < 𝜖 and also Í |𝐵𝑚 − 𝐵| < 𝜖. Finally, as ∞ 𝑛=0 𝑎 𝑛 converges absolutely, make sure that 𝐾 is large enough such that for all 𝑚 ≥ 𝐾, 𝑚 Õ
𝑛=𝐾
Í∞
|𝑎 𝑛 | < 𝜖.
As 𝑛=0 𝑏 𝑛 converges, then 𝐵max B sup |𝐵𝑛 − 𝐵| : 𝑛 = 0, 1, 2, . . . is finite. Take 𝑚 ≥ 2𝐾. In particular 𝑚 − 𝐾 + 1 > 𝐾. So
𝑚 Õ 𝑛=0
|𝐵𝑛 − 𝐵| |𝑎 𝑚−𝑛 | = ≤
𝑚−𝐾 Õ 𝑛=0 𝑚 Õ 𝑛=𝐾
!
|𝐵𝑛 − 𝐵| |𝑎 𝑚−𝑛 | +
! |𝑎 𝑛 | 𝐵max +
≤ 𝜖𝐵max + 𝜖
∞ Õ 𝑛=0
𝐾−1 Õ
!𝑛=0
𝑚 Õ
𝑛=𝑚−𝐾+1
!
|𝐵𝑛 − 𝐵| |𝑎 𝑚−𝑛 |
! 𝜖 |𝑎 𝑛 |
|𝑎 𝑛 | .
Therefore, for 𝑚 ≥ 2𝐾, we have
! ! 𝑚 𝑚 Õ Õ 𝑐 𝑛 − 𝐴𝐵 ≤ |𝐵𝑛 − 𝐵| |𝑎 𝑚−𝑛 | + |𝐵| |𝐴𝑚 − 𝐴| 𝑛=0 𝑛=0 ! ∞ Õ ≤ 𝜖𝐵max + 𝜖
𝑛=0
|𝑎 𝑛 | + |𝐵| 𝜖 = 𝜖 𝐵max +
∞ Õ 𝑛=0
!
!
|𝑎 𝑛 | + |𝐵| .
106
CHAPTER 2. SEQUENCES AND SERIES
The expression in the parenthesis on the right-hand side is a fixed number. Hence, we can Í make the right-hand side arbitrarily small by picking a small enough 𝜖 > 0. So ∞ 𝑛=0 𝑐 𝑛 converges to 𝐴𝐵. □ Example 2.6.6: If both series are only conditionally convergent, the Cauchy product series need not even converge. Suppose we take 𝑎 𝑛 = 𝑏 𝑛 = (−1)𝑛 √ 1 . The series
Í∞
𝑛+1
∞ 𝑛=0 𝑎 𝑛 = 𝑛=0 𝑏 𝑛 converges by the alternating series test; however, it does not converge absolutely as can be seen from the 𝑝-test. Let us look at the Cauchy product.
Í
𝑐 𝑛 = (−1) Therefore,
𝑛
|𝑐 𝑛 | =
𝑛 Õ 𝑖=0
1
p
(𝑖 + 1)(𝑛 − 𝑖 + 1)
The terms do not go to zero and hence
2.6.5
𝑛
!
Õ 1 1 1 1 +···+ √ . +√ +p = (−1)𝑛 √ p 2𝑛 𝑛+1 𝑛+1 3(𝑛 − 1) (𝑖 + 1)(𝑛 − 𝑖 + 1) 𝑖=0 1
≥
Í∞
𝑛=0 𝑐 𝑛
𝑛 Õ 𝑖=0
1
p
(𝑛 + 1)(𝑛 + 1)
= 1.
cannot converge.
Power series
Fix 𝑥0 ∈ ℝ. A power series about 𝑥0 is a series of the form ∞ Õ 𝑛=0
𝑎 𝑛 (𝑥 − 𝑥0 )𝑛 .
A power series is really a function of 𝑥, and many important functions in analysis can be written as a power series. We use the convention that 00 = 1 (that is, if 𝑥 = 𝑥0 and 𝑛 = 0). A power series is said to be convergent if there is at least one 𝑥 ≠ 𝑥 0 that makes the series converge. If 𝑥 = 𝑥0 , then the series always converges since all terms except the first are zero. If the series does not converge for any point 𝑥 ≠ 𝑥 0 , we say that the series is divergent. Example 2.6.7: The series
∞ Õ 1 𝑛=0
𝑛!
𝑥𝑛
is absolutely convergent for all 𝑥 ∈ ℝ using the ratio test: For any 𝑥 ∈ ℝ lim
𝑛→∞
1/(𝑛 + 1)! 𝑥 𝑛+1
(1/𝑛!) 𝑥 𝑛
𝑥 = 0. 𝑛→∞ 𝑛 + 1
= lim
Recall from calculus that this series converges to 𝑒 𝑥 . Example 2.6.8: The series
∞ Õ 1 𝑛=1
𝑛
𝑥𝑛
107
2.6. MORE ON SERIES converges absolutely for all 𝑥 ∈ (−1, 1) via the ratio test:
1/(𝑛 + 1) 𝑥 𝑛+1 𝑛 = |𝑥| < 1. lim = lim |𝑥| 𝑛 𝑛→∞ 𝑛+1 𝑛→∞ (1/𝑛) 𝑥 𝑛
(−1) The series converges at 𝑥 = −1, as ∞ series test. But 𝑛=1 𝑛 converges by the alternating Í∞ 1 the power series does not converge absolutely at 𝑥 = −1, because 𝑛=1 𝑛 does not converge. The series diverges at 𝑥 = 1. When |𝑥| > 1, then the series diverges via the ratio test.
Í
Example 2.6.9: The series
∞ Õ
𝑛𝑛 𝑥𝑛
𝑛=1
diverges for all 𝑥 ≠ 0. Let us apply the root test lim sup |𝑛 𝑛 𝑥 𝑛 | 1/𝑛 = lim sup 𝑛 |𝑥| = ∞. 𝑛→∞
𝑛→∞
Therefore, the series diverges for all 𝑥 ≠ 0. Convergence of power series in general works analogously to the three examples above. 𝑛 Proposition 2.6.10. Let ∞ 𝑛=0 𝑎 𝑛 (𝑥 − 𝑥 0 ) be a power series. If the series is convergent, then either it converges absolutely at all 𝑥 ∈ ℝ, or there exists a number 𝜌, such that the series converges absolutely on the interval (𝑥0 − 𝜌, 𝑥0 + 𝜌) and diverges when 𝑥 < 𝑥 0 − 𝜌 or 𝑥 > 𝑥0 + 𝜌.
Í
The number 𝜌 is called the radius of convergence of the power series. We write 𝜌 = ∞ if the series converges for all 𝑥, and we write 𝜌 = 0 if the series is divergent. At the endpoints, that is, if 𝑥 = 𝑥 0 + 𝜌 or 𝑥 = 𝑥0 − 𝜌, the proposition says nothing, and the series might or might not converge. See Figure 2.10. In Example 2.6.8 the radius of convergence is 𝜌 = 1, in Example 2.6.7, the radius of convergence is 𝜌 = ∞, and in Example 2.6.9, the radius of convergence is 𝜌 = 0. diverges
?
converges absolutely
?
𝑥0 − 𝜌
𝑥0
𝑥0 + 𝜌
Figure 2.10: Convergence of a power series.
Proof. Write
𝑅 B lim sup |𝑎 𝑛 | 1/𝑛 . 𝑛→∞
diverges
108
CHAPTER 2. SEQUENCES AND SERIES
We apply the root test,
1/𝑛
𝐿 = lim sup 𝑎 𝑛 (𝑥 − 𝑥0 )𝑛
= |𝑥 − 𝑥 0 | lim sup |𝑎 𝑛 | 1/𝑛 = |𝑥 − 𝑥0 | 𝑅. 𝑛→∞
𝑛→∞
If 𝑅 = ∞, then 𝐿 = ∞ for every 𝑥 ≠ 𝑥0 , and the series diverges by the root test. On the other hand, if 𝑅 = 0, then 𝐿 = 0 for every 𝑥, and the series converges absolutely for all 𝑥. Suppose 0 < 𝑅 < ∞. The series converges absolutely if 1 > 𝐿 = 𝑅 |𝑥 − 𝑥0 |, that is, |𝑥 − 𝑥0 | < 1/𝑅 . The series diverges when 1 < 𝐿 = 𝑅 |𝑥 − 𝑥 0 |, or |𝑥 − 𝑥0 | > 1/𝑅 . Letting 𝜌 B 1/𝑅 completes the proof.
□
It may be useful to restate what we have learned in the proof as a separate proposition.

Proposition 2.6.11. Let ∑_{n=0}^∞ a_n (x − x_0)^n be a power series, and let

R := lim sup_{n→∞} |a_n|^{1/n}.

If R = ∞, the power series is divergent. If R = 0, then the power series converges everywhere. Otherwise, the radius of convergence is ρ = 1/R.

Often, the radius of convergence is written as ρ = 1/R in all three cases, with the understanding of what ρ should be if R = 0 or R = ∞. Convergent power series can be added and multiplied together, and multiplied by constants. The following proposition has a straightforward proof using what we know about series in general, and power series in particular. We leave the proof to the reader.
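As a quick numerical companion to Proposition 2.6.11 (a sketch added to this text, not something from the original), the following Python snippet approximates R = lim sup_{n→∞} |a_n|^{1/n} from a finite tail of coefficients for the three examples above. A finite tail only approximates the limit superior: the three estimates drift toward 1, drift toward 0, and grow without bound, consistent with ρ = 1, ρ = ∞, and ρ = 0.

import math

def root_test_estimate(log_abs_a, N):
    # approximate R = limsup |a_n|^{1/n} by exp(log|a_n| / n) over a tail of large n
    return max(math.exp(log_abs_a(n) / n) for n in range(N // 2, N + 1))

examples = {
    "sum x^n / n    (rho = 1)":   lambda n: -math.log(n),
    "sum x^n / n!   (rho = inf)": lambda n: -math.lgamma(n + 1),   # log(n!) = lgamma(n + 1)
    "sum n^n x^n    (rho = 0)":   lambda n: n * math.log(n),
}

for name, log_a in examples.items():
    print(name, "estimated R ~", root_test_estimate(log_a, N=400))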
Proposition 2.6.12. Let ∑_{n=0}^∞ a_n (x − x_0)^n and ∑_{n=0}^∞ b_n (x − x_0)^n be two convergent power series with radius of convergence at least ρ > 0 and α ∈ ℝ. Then for all x such that |x − x_0| < ρ, we have

( ∑_{n=0}^∞ a_n (x − x_0)^n ) + ( ∑_{n=0}^∞ b_n (x − x_0)^n ) = ∑_{n=0}^∞ (a_n + b_n)(x − x_0)^n,

α ( ∑_{n=0}^∞ a_n (x − x_0)^n ) = ∑_{n=0}^∞ α a_n (x − x_0)^n,

and

( ∑_{n=0}^∞ a_n (x − x_0)^n ) ( ∑_{n=0}^∞ b_n (x − x_0)^n ) = ∑_{n=0}^∞ c_n (x − x_0)^n,

where c_n = a_0 b_n + a_1 b_{n−1} + ··· + a_n b_0.
For all x with |x − x_0| < ρ, we have two convergent series, so their term-by-term addition and multiplication by constants follows from the previous section. As for such x the series converge absolutely, we can apply Mertens' theorem to find the product of the two series. Consequently, after performing the algebraic operations, the radius of convergence of the resulting series is at least ρ. The radius of convergence of the result could be strictly larger than the radius of convergence of either of the series we started with. See the exercises.

Let us look at some examples of power series. Polynomials are simply finite power series: A polynomial is a power series where the a_n are zero for all n large enough. We expand a polynomial as a power series about any point x_0 by writing the polynomial as a polynomial in (x − x_0). For example, 2x^2 − 3x + 4 as a power series around x_0 = 1 is

2x^2 − 3x + 4 = 3 + (x − 1) + 2(x − 1)^2.

We can also expand rational functions (that is, ratios of polynomials) as power series, although we will not completely prove this fact here. Notice that a series for a rational function only defines the function on an interval, even if the function is defined elsewhere. For example, for the geometric series, we have that for x ∈ (−1, 1),

1/(1 − x) = ∑_{n=0}^∞ x^n.

The series diverges when |x| > 1, even though 1/(1 − x) is defined for all x ≠ 1. We can use the geometric series together with rules for addition and multiplication of power series to expand rational functions as power series around x_0, as long as the denominator is not zero at x_0. We state without proof that this is always possible, and we give an example of such a computation using the geometric series.
Example 2.6.13: Let us expand x/(1 + 2x + x^2) as a power series around the origin (x_0 = 0) and find the radius of convergence.

Write 1 + 2x + x^2 = (1 + x)^2 = (1 − (−x))^2, and suppose |x| < 1. Compute

x/(1 + 2x + x^2) = x ( 1/(1 − (−x)) )^2
                 = x ( ∑_{n=0}^∞ (−1)^n x^n )^2
                 = x ( ∑_{n=0}^∞ c_n x^n )
                 = ∑_{n=0}^∞ c_n x^{n+1}.

Using the formula for the product of series, we obtain c_0 = 1, c_1 = −1 − 1 = −2, c_2 = 1 + 1 + 1 = 3, etc. Hence, for |x| < 1,

x/(1 + 2x + x^2) = ∑_{n=1}^∞ (−1)^{n+1} n x^n.

The radius of convergence is at least 1. We leave it to the reader to verify that the radius of convergence is exactly equal to 1.

You can use the method of partial fractions you know from calculus. For example, to find the power series for (x^3 + x)/(x^2 − 1) at 0, write

(x^3 + x)/(x^2 − 1) = x + 1/(1 + x) − 1/(1 − x) = x + ∑_{n=0}^∞ (−1)^n x^n − ∑_{n=0}^∞ x^n.
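As an illustration of the coefficient formula c_n = a_0 b_n + ··· + a_n b_0 used in the example above (a sketch added here, not part of the original text), the following Python snippet multiplies the truncated series for 1/(1 + x) by itself, shifts by one power of x, and compares the result with the closed form (−1)^{n+1} n.

def cauchy_product(a, b):
    # coefficient-wise product of two power series given as lists of coefficients
    n = len(a)
    return [sum(a[i] * b[k - i] for i in range(k + 1)) for k in range(n)]

N = 10
geom = [(-1) ** n for n in range(N)]      # 1/(1 + x) = sum (-1)^n x^n for |x| < 1
sq = cauchy_product(geom, geom)           # coefficients of 1/(1 + x)^2
series = [0] + sq[:-1]                    # multiply by x: shift coefficients up by one

closed_form = [(-1) ** (n + 1) * n for n in range(N)]
print(series)                   # [0, 1, -2, 3, -4, 5, -6, 7, -8, 9]
print(series == closed_form)    # True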
2.6.6  Exercises

Exercise 2.6.1: Decide the convergence or divergence of the following series.

a) ∑_{n=1}^∞ 1/2^{2n+1}
b) ∑_{n=1}^∞ (−1)^n (n − 1)/n
c) ∑_{n=1}^∞ (−1)^n / n^{1/10}
d) ∑_{n=1}^∞ n^n / (n + 1)^{2n}
Exercise 2.6.2: Suppose both ∑_{n=0}^∞ a_n and ∑_{n=0}^∞ b_n converge absolutely. Show that the product series ∑_{n=0}^∞ c_n, where c_n = a_0 b_n + a_1 b_{n−1} + ··· + a_n b_0, also converges absolutely.

Exercise 2.6.3 (Challenging): Let ∑_{n=1}^∞ a_n be conditionally convergent. Show that given an arbitrary x ∈ ℝ, there exists a rearrangement of ∑_{n=1}^∞ a_n such that the rearranged series converges to x. Hint: See Example 2.6.4.
Exercise 2.6.4:

a) Show that the alternating harmonic series ∑_{n=1}^∞ (−1)^{n+1}/n has a rearrangement such that for every interval (x, y), there exists a partial sum s_n of the rearranged series such that s_n ∈ (x, y).

b) Show that the rearrangement you found does not converge. See Example 2.6.4.

c) Show that for every x ∈ ℝ, there exists a subsequence of partial sums {s_{n_k}}_{k=1}^∞ of your rearrangement such that lim_{k→∞} s_{n_k} = x.
Exercise 2.6.5: For the following power series, find if they are convergent or not, and if so find their radius of convergence.

a) ∑_{n=0}^∞ 2^n x^n
b) ∑_{n=0}^∞ n x^n
c) ∑_{n=0}^∞ n! x^n
d) ∑_{n=0}^∞ (1/(2n)!) (x − 10)^n
e) ∑_{n=0}^∞ x^{2n}
f) ∑_{n=0}^∞ n! x^{n!}

Exercise 2.6.6: Suppose ∑_{n=0}^∞ a_n x^n converges for x = 1.

a) What can you say about the radius of convergence?

b) If you further know that at x = 1 the convergence is not absolute, what can you say?
Exercise 2.6.7: Expand x/(4 − x^2) as a power series around x_0 = 0, and compute its radius of convergence.

Exercise 2.6.8:

a) Find an example where the radii of convergence of ∑_{n=0}^∞ a_n x^n and ∑_{n=0}^∞ b_n x^n are both 1, but the radius of convergence of the sum of the two series is infinite.

b) (Trickier) Find an example where the radii of convergence of ∑_{n=0}^∞ a_n x^n and ∑_{n=0}^∞ b_n x^n are both 1, but the radius of convergence of the product of the two series is infinite.
Exercise 2.6.9: Figure out how to compute the radius of convergence using the ratio test. That is, suppose ∑_{n=0}^∞ a_n x^n is a power series and R := lim_{n→∞} |a_{n+1}|/|a_n| exists or is ∞. Find the radius of convergence and prove your claim.

Exercise 2.6.10:

a) Prove that lim_{n→∞} n^{1/n} = 1 using the following procedure: Write n^{1/n} = 1 + b_n and note b_n > 0. Then show that (1 + b_n)^n ≥ (n(n−1)/2) b_n^2 and use this to show that lim_{n→∞} b_n = 0.

b) Use the result of part a) to show that if ∑_{n=0}^∞ a_n x^n is a convergent power series with radius of convergence R, then ∑_{n=0}^∞ n a_n x^n is also convergent with the same radius of convergence.

There are different notions of summability (convergence) of a series than just the one we have seen. A common one is Cesàro summability*. Let ∑_{n=1}^∞ a_n be a series and let s_n be the nth partial sum. The series is said to be Cesàro summable to a if

a = lim_{n→∞} (s_1 + s_2 + ··· + s_n)/n.
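A small numerical illustration of this definition (a sketch added here, not part of the original text): for a series that converges in the usual sense, such as ∑_{n=1}^∞ 1/2^n = 1, the Cesàro means of the partial sums approach the same value.

def cesaro_means(terms):
    # given the terms a_1, a_2, ..., return the Cesàro means (s_1 + ... + s_n)/n
    means, s, total = [], 0.0, 0.0
    for k, a in enumerate(terms, start=1):
        s += a            # s is the k-th partial sum
        total += s        # running sum of the partial sums
        means.append(total / k)
    return means

terms = [1.0 / 2 ** n for n in range(1, 200)]
print(cesaro_means(terms)[-1])   # close to 1, the ordinary sum of the series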
Exercise 2.6.11 (Challenging):

a) If ∑_{n=1}^∞ a_n is convergent to a (in the usual sense), show that ∑_{n=1}^∞ a_n is Cesàro summable (see above) to a.

b) Show that in the sense of Cesàro, ∑_{n=1}^∞ (−1)^n is summable to −1/2.

c) Let a_n := k when n = k^3 for some k ∈ ℕ, a_n := −k when n = k^3 + 1 for some k ∈ ℕ, and otherwise let a_n := 0. Show that ∑_{n=1}^∞ a_n diverges in the usual sense (in fact, both the sequence of terms and the partial sums are unbounded), but it is Cesàro summable to 0 (which seems a little paradoxical at first sight).

Exercise 2.6.12 (Challenging): Show that the monotonicity in the alternating series test is necessary. That is, find a sequence of positive real numbers {x_n}_{n=1}^∞ with lim_{n→∞} x_n = 0, but such that ∑_{n=1}^∞ (−1)^n x_n diverges.

Exercise 2.6.13: Find a series ∑_{n=1}^∞ x_n that converges, but such that ∑_{n=1}^∞ x_n^2 diverges. Hint: Compare Exercise 2.5.14.
Exercise 2.6.14: Suppose {c_n}_{n=1}^∞ is a sequence. Prove that for every r ∈ (0, 1), there exists a strictly increasing sequence {n_k}_{k=1}^∞ of natural numbers (n_{k+1} > n_k) such that

∑_{k=1}^∞ c_k x^{n_k}

converges absolutely for all x ∈ [−r, r].

* Named for the Italian mathematician Ernesto Cesàro (1859–1906).
Exercise 2.6.15 (Tonelli/Fubini for sums, challenging): Let {x_{k,ℓ}}_{k=1,ℓ=1}^∞ denote a doubly indexed sequence and let σ : ℕ → ℕ^2 be a bijection. Consider the series

i) ∑_{i=1}^∞ x_{σ(i)},    ii) ∑_{k=1}^∞ ( ∑_{ℓ=1}^∞ x_{k,ℓ} ),    iii) ∑_{ℓ=1}^∞ ( ∑_{k=1}^∞ x_{k,ℓ} ).
The expressions ii) and iii) are series of series and so we say they converge if the inner series always converges and the outer series then also converges. a) (Tonelli) Suppose 𝑥 𝑘,ℓ ≥ 0 for all 𝑘, ℓ . Show that the three series i), ii), iii) either all diverge (to ∞) or they all converge to the same number. In the case of divergence, some of the “inner” series might be infinity in which case we consider the entire sum to diverge. b) (Fubini) Suppose i) converges absolutely. Show that ii) and iii) converge and they both converge to the same number as i).
Chapter 3

Continuous Functions

3.1  Limits of functions

Note: 2–3 lectures

Before we define continuity of functions, we visit a somewhat more general notion of a limit than that of a sequence. Given a function f : S → ℝ, we want to see how f(x) behaves as x tends to a certain point.

3.1.1  Cluster points
First, we return to a concept we have previously seen in an exercise. When moving within the set 𝑆 we can only approach points that have elements of 𝑆 arbitrarily near. Definition 3.1.1. Let 𝑆 ⊂ ℝ be a set. A number 𝑥 ∈ ℝ is called a cluster point of 𝑆 if for every 𝜖 > 0, the set (𝑥 − 𝜖, 𝑥 + 𝜖) ∩ 𝑆 \ {𝑥} is not empty. That is, 𝑥 is a cluster point of 𝑆 if there are points of 𝑆 arbitrarily close to 𝑥. Another way to phrase the definition is to say that 𝑥 is a cluster point of 𝑆 if for every 𝜖 > 0, there exists a 𝑦 ∈ 𝑆 such that 𝑦 ≠ 𝑥 and |𝑥 − 𝑦| < 𝜖. Note that a cluster point of 𝑆 need not lie in 𝑆. Let us see some examples. (i) The set {1/𝑛 : 𝑛 ∈ ℕ } has a unique cluster point zero.
(ii) The cluster points of the open interval (0, 1) are all points in the closed interval [0, 1].
(iii) The set of cluster points of ℚ is the whole real line ℝ.
(iv) The set of cluster points of [0, 1) ∪ {2} is the interval [0, 1]. (v) The set ℕ has no cluster points in ℝ.
Proposition 3.1.2. Let S ⊂ ℝ. Then x ∈ ℝ is a cluster point of S if and only if there exists a convergent sequence of numbers {x_n}_{n=1}^∞ such that x_n ≠ x and x_n ∈ S for all n, and lim_{n→∞} x_n = x.
Proof. First suppose 𝑥 is a cluster point of 𝑆. For every 𝑛 ∈ ℕ , pick 𝑥 𝑛 to be an arbitrary point of (𝑥 − 1/𝑛 , 𝑥 + 1/𝑛 ) ∩ 𝑆 \ {𝑥}, which is nonempty because 𝑥 is a cluster point of 𝑆. Then 𝑥 𝑛 is within 1/𝑛 of 𝑥, that is, |𝑥 − 𝑥 𝑛 | < 1/𝑛 .
As {1/n}_{n=1}^∞ converges to zero, {x_n}_{n=1}^∞ converges to x. On the other hand, if we start with a sequence of numbers {x_n}_{n=1}^∞ in S converging to x such that x_n ≠ x for all n, then for every ε > 0 there is an M such that, in particular, |x_M − x| < ε. That is, x_M ∈ (x − ε, x + ε) ∩ S \ {x}. □
3.1.2  Limits of functions
If a function 𝑓 is defined on a set 𝑆 and 𝑐 is a cluster point of 𝑆, then we define the limit of 𝑓 (𝑥) as 𝑥 approaches 𝑐. It is irrelevant for the definition whether 𝑓 is defined at 𝑐 or not. Even if the function is defined at 𝑐, the limit of the function as 𝑥 goes to 𝑐 can very well be different from 𝑓 (𝑐). Definition 3.1.3. Let 𝑓 : 𝑆 → ℝ be a function and 𝑐 a cluster point of 𝑆 ⊂ ℝ. Suppose there exists an 𝐿 ∈ ℝ and for every 𝜖 > 0, there exists a 𝛿 > 0 such that whenever 𝑥 ∈ 𝑆 \ {𝑐} and |𝑥 − 𝑐| < 𝛿, we have | 𝑓 (𝑥) − 𝐿| < 𝜖.
We then say f(x) converges to L as x goes to c, and we write

f(x) → L   as   x → c.

We say L is a limit of f(x) as x goes to c, and if L is unique (it is), we write

lim_{x→c} f(x) := L.
If no such 𝐿 exists, then we say that the limit does not exist or that 𝑓 diverges at 𝑐. Again the notation and language we are using above assumes the limit 𝐿, if it exists, is unique, which needs to be proved. Note that the fact that 𝑐 is a cluster point is important to prove uniqueness. Proposition 3.1.4. Let 𝑐 be a cluster point of 𝑆 ⊂ ℝ and let 𝑓 : 𝑆 → ℝ be a function such that 𝑓 (𝑥) converges as 𝑥 goes to 𝑐. Then the limit of 𝑓 (𝑥) as 𝑥 goes to 𝑐 is unique.
Proof. Let 𝐿1 and 𝐿2 be two numbers that both satisfy the definition. Take an 𝜖 > 0 and find a 𝛿 1 > 0 such that | 𝑓 (𝑥) − 𝐿1 | < 𝜖/2 for all 𝑥 ∈ 𝑆 \ {𝑐} with |𝑥 − 𝑐| < 𝛿 1 . Also find 𝛿2 > 0 such that | 𝑓 (𝑥) − 𝐿2 | < 𝜖/2 for all 𝑥 ∈ 𝑆 \ {𝑐} with |𝑥 − 𝑐| < 𝛿2 . Put 𝛿 B min{𝛿 1 , 𝛿2 }. Suppose 𝑥 ∈ 𝑆, |𝑥 − 𝑐| < 𝛿, and 𝑥 ≠ 𝑐. As 𝛿 > 0 and 𝑐 is a cluster point, such an 𝑥 exists. Then 𝜖 𝜖 |𝐿1 − 𝐿2 | = |𝐿1 − 𝑓 (𝑥) + 𝑓 (𝑥) − 𝐿2 | ≤ |𝐿1 − 𝑓 (𝑥)| + | 𝑓 (𝑥) − 𝐿2 | < + = 𝜖. 2 2 As |𝐿1 − 𝐿2 | < 𝜖 for arbitrary 𝜖 > 0, then 𝐿1 = 𝐿2 .
□
Example 3.1.5: Consider f : ℝ → ℝ defined by f(x) := x^2. Then for any c ∈ ℝ,

lim_{x→c} f(x) = lim_{x→c} x^2 = c^2.

Proof: Let c ∈ ℝ be fixed, and suppose ε > 0 is given. Write

δ := min{ 1, ε/(2|c| + 1) }.

Take x ≠ c such that |x − c| < δ. In particular, |x − c| < 1. By the reverse triangle inequality, |x| − |c| ≤ |x − c| < 1. Adding 2|c| to both sides, we obtain |x| + |c| < 2|c| + 1. Estimate

|f(x) − c^2| = |x^2 − c^2|
            = |(x + c)(x − c)| = |x + c| |x − c|
            ≤ (|x| + |c|) |x − c|
            < (2|c| + 1) |x − c|
            < (2|c| + 1) · ε/(2|c| + 1) = ε.

Example 3.1.6: Define f : [0, 1) → ℝ by
f(x) := x if x > 0,   and   f(x) := 1 if x = 0.

Then lim_{x→0} f(x) = 0, even though f(0) = 1. See Figure 3.1.
Figure 3.1: Function with a different limit and value at 0.
Proof: Let 𝜖 > 0 be given. Let 𝛿 B 𝜖. For 𝑥 ∈ [0, 1), 𝑥 ≠ 0, and |𝑥 − 0| < 𝛿, we get | 𝑓 (𝑥) − 0| = |𝑥| < 𝛿 = 𝜖.
3.1.3  Sequential limits
Let us connect the limit as defined above with limits of sequences.

Lemma 3.1.7. Let S ⊂ ℝ, let c be a cluster point of S, let f : S → ℝ be a function, and let L ∈ ℝ. Then f(x) → L as x → c if and only if for every sequence {x_n}_{n=1}^∞ such that x_n ∈ S \ {c} for all n, and such that lim_{n→∞} x_n = c, we have that the sequence {f(x_n)}_{n=1}^∞ converges to L.

Proof. Suppose f(x) → L as x → c, and {x_n}_{n=1}^∞ is a sequence such that x_n ∈ S \ {c} and lim_{n→∞} x_n = c. We wish to show that {f(x_n)}_{n=1}^∞ converges to L. Let ε > 0 be given. Find a δ > 0 such that if x ∈ S \ {c} and |x − c| < δ, then |f(x) − L| < ε. As {x_n}_{n=1}^∞ converges to c, find an M such that for n ≥ M, we have |x_n − c| < δ. Therefore, for n ≥ M,

|f(x_n) − L| < ε.

Thus {f(x_n)}_{n=1}^∞ converges to L.

For the other direction, we use proof by contrapositive. Suppose it is not true that f(x) → L as x → c. The negation of the definition is that there exists an ε > 0 such that for every δ > 0 there exists an x ∈ S \ {c}, where |x − c| < δ and |f(x) − L| ≥ ε. Let us use 1/n for δ in the statement above to construct a sequence {x_n}_{n=1}^∞. We have that there exists an ε > 0 such that for every n, there exists a point x_n ∈ S \ {c}, where |x_n − c| < 1/n and |f(x_n) − L| ≥ ε. The sequence {x_n}_{n=1}^∞ just constructed converges to c, but the sequence {f(x_n)}_{n=1}^∞ does not converge to L. And we are done. □

It is possible to strengthen the reverse direction of the lemma by simply stating that "{f(x_n)}_{n=1}^∞ converges," without requiring a specific limit. See Exercise 3.1.11.

Example 3.1.8: lim_{x→0} sin(1/x) does not exist, but lim_{x→0} x sin(1/x) = 0. See Figure 3.2.

Figure 3.2: Graphs of sin(1/x) and x sin(1/x). Note that the computer cannot properly graph sin(1/x) near zero as it oscillates too fast.

Proof: We start with sin(1/x). Define a sequence by x_n := 1/(πn + π/2). It is not hard to see that lim_{n→∞} x_n = 0. Furthermore,

sin(1/x_n) = sin(πn + π/2) = (−1)^n.

Therefore, {sin(1/x_n)}_{n=1}^∞ does not converge. By Lemma 3.1.7, lim_{x→0} sin(1/x) does not exist.

Now consider x sin(1/x). Let {x_n}_{n=1}^∞ be a sequence such that x_n ≠ 0 for all n, and such that lim_{n→∞} x_n = 0. Notice that |sin(t)| ≤ 1 for all t ∈ ℝ. Therefore,

|x_n sin(1/x_n) − 0| = |x_n| |sin(1/x_n)| ≤ |x_n|.
As x_n goes to 0, then |x_n| goes to zero, and hence {x_n sin(1/x_n)}_{n=1}^∞ converges to zero. By Lemma 3.1.7, lim_{x→0} x sin(1/x) = 0.
Keep in mind the phrase "for every sequence" in the lemma. For example, take sin(1/x) and the sequence given by x_n := 1/(πn). Then {sin(1/x_n)}_{n=1}^∞ is the constant zero sequence, and therefore converges to zero, but the limit of sin(1/x) as x → 0 does not exist. Using Lemma 3.1.7, we can start applying everything we know about sequential limits to limits of functions. Let us give a few important examples.
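To make the "for every sequence" point concrete, here is a short Python sketch (an added illustration, not part of the original text) that evaluates sin(1/x_n) along the two sequences above, 1/(πn) and 1/(πn + π/2), and x_n sin(1/x_n) along the second one.

import math

def along(seq, f, N=6):
    # evaluate f at the first N points of the sequence seq(1), seq(2), ...
    return [f(seq(n)) for n in range(1, N + 1)]

x_zero = lambda n: 1.0 / (math.pi * n)                # sin(1/x_n) is identically 0
x_osc  = lambda n: 1.0 / (math.pi * n + math.pi / 2)  # sin(1/x_n) alternates between -1 and 1

print(along(x_zero, lambda x: math.sin(1 / x)))      # approximately [0, 0, 0, ...]
print(along(x_osc,  lambda x: math.sin(1 / x)))      # approximately [-1, 1, -1, ...]
print(along(x_osc,  lambda x: x * math.sin(1 / x)))  # tends to 0 in absolute value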
Corollary 3.1.9. Let S ⊂ ℝ and let c be a cluster point of S. Suppose f : S → ℝ and g : S → ℝ are functions such that the limits of f(x) and g(x) as x goes to c both exist, and

f(x) ≤ g(x)   for all x ∈ S \ {c}.

Then

lim_{x→c} f(x) ≤ lim_{x→c} g(x).
Proof. Take {x_n}_{n=1}^∞ to be a sequence of numbers in S \ {c} that converges to c. Let

L_1 := lim_{x→c} f(x)   and   L_2 := lim_{x→c} g(x).

Lemma 3.1.7 says that {f(x_n)}_{n=1}^∞ converges to L_1 and {g(x_n)}_{n=1}^∞ converges to L_2. We also have f(x_n) ≤ g(x_n) for all n. We obtain L_1 ≤ L_2 using Lemma 2.2.3. □
By applying constant functions, we get the following corollary. The proof is left as an exercise. Corollary 3.1.10. Let 𝑆 ⊂ ℝ and let 𝑐 be a cluster point of 𝑆. Suppose 𝑓 : 𝑆 → ℝ is a function such that the limit of 𝑓 (𝑥) as 𝑥 goes to 𝑐 exists. Suppose there are two real numbers 𝑎 and 𝑏 such that 𝑎 ≤ 𝑓 (𝑥) ≤ 𝑏 for all 𝑥 ∈ 𝑆 \ {𝑐}. Then
a ≤ lim_{x→c} f(x) ≤ b.
Using Lemma 3.1.7 in the same way as above, we also get the following corollaries, whose proofs are again left as exercises.

Corollary 3.1.11. Let S ⊂ ℝ and let c be a cluster point of S. Suppose f : S → ℝ, g : S → ℝ, and h : S → ℝ are functions such that

f(x) ≤ g(x) ≤ h(x)   for all x ∈ S \ {c}.

Suppose the limits of f(x) and h(x) as x goes to c both exist, and

lim_{x→c} f(x) = lim_{x→c} h(x).

Then the limit of g(x) as x goes to c exists and

lim_{x→c} g(x) = lim_{x→c} f(x) = lim_{x→c} h(x).
Corollary 3.1.12. Let S ⊂ ℝ and let c be a cluster point of S. Suppose f : S → ℝ and g : S → ℝ are functions such that the limits of f(x) and g(x) as x goes to c both exist. Then:

(i) lim_{x→c} ( f(x) + g(x) ) = ( lim_{x→c} f(x) ) + ( lim_{x→c} g(x) ).

(ii) lim_{x→c} ( f(x) − g(x) ) = ( lim_{x→c} f(x) ) − ( lim_{x→c} g(x) ).

(iii) lim_{x→c} ( f(x) g(x) ) = ( lim_{x→c} f(x) ) ( lim_{x→c} g(x) ).

(iv) If lim_{x→c} g(x) ≠ 0 and g(x) ≠ 0 for all x ∈ S \ {c}, then

lim_{x→c} f(x)/g(x) = ( lim_{x→c} f(x) ) / ( lim_{x→c} g(x) ).
Corollary 3.1.13. Let S ⊂ ℝ and let c be a cluster point of S. Suppose f : S → ℝ is a function such that the limit of f(x) as x goes to c exists. Then

lim_{x→c} |f(x)| = | lim_{x→c} f(x) |.

3.1.4  Limits of restrictions and one-sided limits
Sometimes we work with the function defined on a subset. Definition 3.1.14. Let 𝑓 : 𝑆 → ℝ be a function and 𝐴 ⊂ 𝑆. Define the function 𝑓 | 𝐴 : 𝐴 → ℝ by 𝑓 | 𝐴 (𝑥) B 𝑓 (𝑥) for 𝑥 ∈ 𝐴.
We call 𝑓 | 𝐴 the restriction of 𝑓 to 𝐴.
The function 𝑓 | 𝐴 is simply the function 𝑓 taken on a smaller domain. The following proposition is the analogue of taking a tail of a sequence. It says that the limit is “local”: The limit only depends on points arbitrarily near 𝑐.
Proposition 3.1.15. Let 𝑆 ⊂ ℝ, 𝑐 ∈ ℝ, and let 𝑓 : 𝑆 → ℝ be a function. Suppose 𝐴 ⊂ 𝑆 is such that there is some 𝛼 > 0 such that (𝐴 \ {𝑐}) ∩ (𝑐 − 𝛼, 𝑐 + 𝛼) = (𝑆 \ {𝑐}) ∩ (𝑐 − 𝛼, 𝑐 + 𝛼). (i) The point 𝑐 is a cluster point of 𝐴 if and only if 𝑐 is a cluster point of 𝑆.
(ii) Supposing 𝑐 is a cluster point of 𝑆, then 𝑓 (𝑥) → 𝐿 as 𝑥 → 𝑐 if and only if 𝑓 | 𝐴 (𝑥) → 𝐿 as 𝑥 → 𝑐.
Proof. First, let 𝑐 be a cluster point of 𝐴. Since 𝐴 ⊂ 𝑆, then if (𝐴 \ {𝑐}) ∩ (𝑐 − 𝜖, 𝑐 + 𝜖) is nonempty for every 𝜖 > 0, then (𝑆 \ {𝑐}) ∩ (𝑐 − 𝜖, 𝑐 + 𝜖) is nonempty for every 𝜖 > 0. Thus 𝑐 is a cluster point of 𝑆. Second, suppose 𝑐 is a cluster point of 𝑆. Then for 𝜖 > 0 such that 𝜖 < 𝛼 we get that (𝐴 \ {𝑐}) ∩ (𝑐 − 𝜖, 𝑐 + 𝜖) = (𝑆 \ {𝑐}) ∩ (𝑐 − 𝜖, 𝑐 + 𝜖), which is nonempty. This is true for all 𝜖 < 𝛼 and hence (𝐴 \ {𝑐}) ∩ (𝑐 − 𝜖, 𝑐 + 𝜖) must be nonempty for all 𝜖 > 0. Thus 𝑐 is a cluster point of 𝐴. Now suppose 𝑐 is a cluster point of 𝑆 and 𝑓 (𝑥) → 𝐿 as 𝑥 → 𝑐. That is, for every 𝜖 > 0 there is a 𝛿 > 0 such that if 𝑥 ∈ 𝑆 \ {𝑐} and |𝑥 − 𝑐| < 𝛿, then | 𝑓 (𝑥) − 𝐿| < 𝜖. Because 𝐴 ⊂ 𝑆, if 𝑥 ∈ 𝐴 \ {𝑐}, then 𝑥 ∈ 𝑆 \ {𝑐}, and hence 𝑓 | 𝐴 (𝑥) → 𝐿 as 𝑥 → 𝑐. Finally, suppose 𝑓 | 𝐴 (𝑥) → 𝐿 as 𝑥 → 𝑐 and let 𝜖 > 0 be given. There is a 𝛿′ > 0 such that if 𝑥 ∈ 𝐴 \ {𝑐} and |𝑥 − 𝑐| < 𝛿′, then 𝑓 | 𝐴 (𝑥) − 𝐿 < 𝜖. Take 𝛿 B min{𝛿′ , 𝛼}. Now suppose ′ 𝑥 ∈ 𝑆 \ {𝑐} and |𝑥 − 𝑐| < 𝛿. As |𝑥 − 𝑐| < 𝛼, we find 𝑥 ∈ 𝐴 \ {𝑐}, and as |𝑥 − 𝑐| < 𝛿 , we get □ | 𝑓 (𝑥) − 𝐿| = 𝑓 | 𝐴 (𝑥) − 𝐿 < 𝜖. The hypothesis on 𝐴 in the proposition is necessary. For an arbitrary restriction we generally get an implication in only one direction, see Exercise 3.1.6. The usual notation for the limit is lim 𝑓 (𝑥) B lim 𝑓 | 𝐴 (𝑥). 𝑥→𝑐 𝑥∈𝐴
𝑥→𝑐
A common use of restriction with respect to limits, which does not satisfy the hypothesis in the proposition, is the so-called one-sided limit*.

Definition 3.1.16. Let f : S → ℝ be a function and let c ∈ ℝ. If c is a cluster point of S ∩ (c, ∞) and the limit of the restriction of f to S ∩ (c, ∞) as x → c exists, define

lim_{x→c+} f(x) := lim_{x→c} f|_{S∩(c,∞)}(x).

Similarly, if c is a cluster point of S ∩ (−∞, c) and the limit of the restriction as x → c exists, define

lim_{x→c−} f(x) := lim_{x→c} f|_{S∩(−∞,c)}(x).
Proposition 3.1.15 does not apply to one-sided limits. It is possible to have one-sided limits, but no limit at a point. For example, define f : ℝ → ℝ by f(x) := 1 for x < 0 and f(x) := 0 for x ≥ 0. We leave it to the reader to verify that

lim_{x→0−} f(x) = 1,    lim_{x→0+} f(x) = 0,    lim_{x→0} f(x) does not exist.
All is not lost, however, for we have the following replacement.

* One sees a plethora of one-sided limit notations, e.g., lim_{x→c, x<c} f(x) for lim_{x→c−} f(x).

3.2  Continuous functions

Definition 3.2.1. Suppose f : S → ℝ is a function and c ∈ S. We say f is continuous at c if for every ε > 0 there is a δ > 0 such that whenever x ∈ S and |x − c| < δ, we have |f(x) − f(c)| < ε. When f : S → ℝ is continuous at all c ∈ S, then we simply say f is a continuous function.
Figure 3.3: For |x − c| < δ, the graph of f(x) should be within the gray region.
If 𝑓 is continuous for all 𝑐 ∈ 𝐴, we say 𝑓 is continuous on 𝐴 ⊂ 𝑆. A straightforward exercise (Exercise 3.2.7) shows that this implies that 𝑓 | 𝐴 is continuous, although the converse does not hold (as we will see in Example 3.2.13). Continuity may be the most important definition to understand in analysis, and it is not an easy one. See Figure 3.3. Note that 𝛿 not only depends on 𝜖, but also on 𝑐; we need not pick one 𝛿 for all 𝑐 ∈ 𝑆. It is no accident that the definition of continuity is similar to the definition of a limit of a function. The main feature of continuous functions is that these are precisely the functions that behave nicely with limits. Proposition 3.2.2. Consider a function 𝑓 : 𝑆 → ℝ defined on a set 𝑆 ⊂ ℝ and let 𝑐 ∈ 𝑆. Then: (i) If 𝑐 is not a cluster point of 𝑆, then 𝑓 is continuous at 𝑐.
(ii) If 𝑐 is a cluster point of 𝑆, then 𝑓 is continuous at 𝑐 if and only if the limit of 𝑓 (𝑥) as 𝑥 → 𝑐 exists and lim 𝑓 (𝑥) = 𝑓 (𝑐). 𝑥→𝑐
(iii) The function 𝑓 is continuous at 𝑐 if and only if for every sequence {𝑥 𝑛 }∞ where 𝑥 𝑛 ∈ 𝑆 𝑛=1 ∞ and lim 𝑥 𝑛 = 𝑐, the sequence 𝑓 (𝑥 𝑛 ) 𝑛=1 converges to 𝑓 (𝑐). 𝑛→∞
Proof. We start with (i). Suppose 𝑐 is not a cluster point of 𝑆. Then there exists a 𝛿 > 0 such that 𝑆 ∩ (𝑐 − 𝛿, 𝑐 + 𝛿) = {𝑐}. For any 𝜖 > 0, simply pick this given 𝛿. The only 𝑥 ∈ 𝑆 such that |𝑥 − 𝑐| < 𝛿 is 𝑥 = 𝑐. Then | 𝑓 (𝑥) − 𝑓 (𝑐)| = | 𝑓 (𝑐) − 𝑓 (𝑐)| = 0 < 𝜖. Let us move to (ii). Suppose 𝑐 is a cluster point of 𝑆. Let us first suppose that lim𝑥→𝑐 𝑓 (𝑥) = 𝑓 (𝑐). Then for every 𝜖 > 0, there is a 𝛿 > 0 such that if 𝑥 ∈ 𝑆 \ {𝑐} and |𝑥 − 𝑐| < 𝛿, then | 𝑓 (𝑥) − 𝑓 (𝑐)| < 𝜖. Also | 𝑓 (𝑐) − 𝑓 (𝑐)| = 0 < 𝜖, so the definition of continuity at 𝑐 is satisfied. On the other hand, suppose 𝑓 is continuous at 𝑐. For every 𝜖 > 0, there exists a 𝛿 > 0 such that for 𝑥 ∈ 𝑆 where |𝑥 − 𝑐| < 𝛿, we have | 𝑓 (𝑥) − 𝑓 (𝑐)| < 𝜖. Then the statement is, of course, still true if 𝑥 ∈ 𝑆 \ {𝑐} ⊂ 𝑆. Therefore, lim𝑥→𝑐 𝑓 (𝑥) = 𝑓 (𝑐). For (iii), first suppose 𝑓 is continuous at 𝑐. Let {𝑥 𝑛 }∞ be a sequence such that 𝑥 𝑛 ∈ 𝑆 𝑛=1 and lim𝑛→∞ 𝑥 𝑛 = 𝑐. Let 𝜖 > 0 be given. Find a 𝛿 > 0 such that | 𝑓 (𝑥) − 𝑓 (𝑐)| < 𝜖 for all 𝑥 ∈ 𝑆 where |𝑥 − 𝑐| < 𝛿. Find an 𝑀 ∈ ℕ such that for ∞𝑛 ≥ 𝑀, we have |𝑥 𝑛 − 𝑐| < 𝛿. Then for 𝑛 ≥ 𝑀, we have that | 𝑓 (𝑥 𝑛 ) − 𝑓 (𝑐)| < 𝜖, so 𝑓 (𝑥 𝑛 ) 𝑛=1 converges to 𝑓 (𝑐). We prove the other direction of (iii) by contrapositive. Suppose 𝑓 is not continuous at 𝑐. Then there exists an 𝜖 > 0 such that for every 𝛿 > 0, there exists an 𝑥 ∈ 𝑆 such that as follows. Let 𝑥 𝑛 ∈ 𝑆 be |𝑥 − 𝑐| < 𝛿 and | 𝑓 (𝑥) − 𝑓 (𝑐)| ≥ 𝜖. Define a sequence {𝑥 𝑛 }∞ 𝑛=1 ∞ 1 such that |𝑥 𝑛 − 𝑐| < /𝑛 and | 𝑓 (𝑥 𝑛 ) − 𝑓 (𝑐)| ≥ 𝜖. Now {𝑥 𝑛 } 𝑛=1 is a sequence in 𝑆 such that ∞ lim𝑛→∞ 𝑥 𝑛 = 𝑐 and such that | 𝑓 (𝑥 𝑛 ) − 𝑓 (𝑐)| ≥ 𝜖 for all 𝑛 ∈ ℕ . Thus 𝑓 (𝑥 𝑛 ) 𝑛=1 does not converge to 𝑓 (𝑐). It may or may not converge, but it definitely does not converge to 𝑓 (𝑐). □ The last item in the proposition is particularly powerful. It allows us to quickly apply what we know about limits of sequences to continuous functions and even to prove that certain functions are continuous. It can also be strengthened, see Exercise 3.2.13. Example 3.2.3: The function 𝑓 : (0, ∞) → ℝ defined by 𝑓 (𝑥) B 1/𝑥 is continuous. Proof: Fix 𝑐 ∈ (0, ∞). Let {𝑥 𝑛 }∞ be a sequence in (0, ∞) such that lim𝑛→∞ 𝑥 𝑛 = 𝑐. Then 𝑛=1 𝑓 (𝑐) =
1/c = 1/(lim_{n→∞} x_n) = lim_{n→∞} (1/x_n) = lim_{n→∞} f(x_n).
Thus 𝑓 is continuous at 𝑐. As 𝑓 is continuous at all 𝑐 ∈ (0, ∞), 𝑓 is continuous. We have previously shown lim𝑥→𝑐 𝑥 2 = 𝑐 2 directly. Therefore the function 𝑥 2 is continuous. The last item of Proposition 3.2.2 and the continuity of algebraic operations with respect to limits of sequences, Proposition 2.2.5, gives a quick proof of a much more general result. Proposition 3.2.4. Let 𝑓 : ℝ → ℝ be a polynomial. That is, 𝑓 (𝑥) = 𝑎 𝑑 𝑥 𝑑 + 𝑎 𝑑−1 𝑥 𝑑−1 + · · · + 𝑎1 𝑥 + 𝑎0 , for some constants 𝑎0 , 𝑎1 , . . . , 𝑎 𝑑 . Then 𝑓 is continuous.
Proof. Fix c ∈ ℝ. Let {x_n}_{n=1}^∞ be a sequence such that lim_{n→∞} x_n = c. Then

f(c) = a_d c^d + a_{d−1} c^{d−1} + ··· + a_1 c + a_0
     = a_d (lim_{n→∞} x_n)^d + a_{d−1} (lim_{n→∞} x_n)^{d−1} + ··· + a_1 (lim_{n→∞} x_n) + a_0
     = lim_{n→∞} ( a_d x_n^d + a_{d−1} x_n^{d−1} + ··· + a_1 x_n + a_0 ) = lim_{n→∞} f(x_n).

Thus f is continuous at c. As f is continuous at all c ∈ ℝ, f is continuous. □
By similar reasoning, or by appealing to Corollary 3.1.12, we can prove the following proposition. The proof is left as an exercise. Proposition 3.2.5. Let 𝑓 : 𝑆 → ℝ and 𝑔 : 𝑆 → ℝ be functions continuous at 𝑐 ∈ 𝑆. (i) The function ℎ : 𝑆 → ℝ defined by ℎ(𝑥) B 𝑓 (𝑥) + 𝑔(𝑥) is continuous at 𝑐.
(ii) The function ℎ : 𝑆 → ℝ defined by ℎ(𝑥) B 𝑓 (𝑥) − 𝑔(𝑥) is continuous at 𝑐.
(iii) The function ℎ : 𝑆 → ℝ defined by ℎ(𝑥) B 𝑓 (𝑥)𝑔(𝑥) is continuous at 𝑐. (iv) If 𝑔(𝑥) ≠ 0 for all 𝑥 ∈ 𝑆, the function ℎ : 𝑆 → ℝ given by ℎ(𝑥) B
f(x)/g(x) is continuous at c.
Example 3.2.6: The functions sin(𝑥) and cos(𝑥) are continuous. In the following computations we use the sum-to-product trigonometric identities. We also use the simple facts that |sin(𝑥)| ≤ |𝑥|, |cos(𝑥)| ≤ 1, and |sin(𝑥)| ≤ 1.
|sin(x) − sin(c)| = | 2 sin((x − c)/2) cos((x + c)/2) |
                 = 2 |sin((x − c)/2)| |cos((x + c)/2)|
                 ≤ 2 |sin((x − c)/2)|
                 ≤ 2 |(x − c)/2| = |x − c|,

|cos(x) − cos(c)| = | −2 sin((x − c)/2) sin((x + c)/2) |
                 = 2 |sin((x − c)/2)| |sin((x + c)/2)|
                 ≤ 2 |sin((x − c)/2)|
                 ≤ 2 |(x − c)/2| = |x − c|.

The claim that sin and cos are continuous follows by taking an arbitrary sequence {x_n}_{n=1}^∞ converging to c, or by applying the definition of continuity directly. Details are left to the reader.
3.2.2  Composition of continuous functions
You probably already realized that one of the basic tools in constructing complicated functions out of simple ones is composition. Recall that for two functions 𝑓 and 𝑔, the composition 𝑓 ◦ 𝑔 is defined by ( 𝑓 ◦ 𝑔)(𝑥) B 𝑓 𝑔(𝑥) . A composition of continuous functions is again continuous. Proposition 3.2.7. Let 𝐴, 𝐵 ⊂ ℝ and 𝑓 : 𝐵 → ℝ and 𝑔 : 𝐴 → 𝐵 be functions. If 𝑔 is continuous at 𝑐 ∈ 𝐴 and 𝑓 is continuous at 𝑔(𝑐), then 𝑓 ◦ 𝑔 : 𝐴 → ℝ is continuous at 𝑐.
Proof. Let {x_n}_{n=1}^∞ be a sequence in A such that lim_{n→∞} x_n = c. As g is continuous at c, we have that {g(x_n)}_{n=1}^∞ converges to g(c). As f is continuous at g(c), we have that {f(g(x_n))}_{n=1}^∞ converges to f(g(c)). Thus f ∘ g is continuous at c. □
Example 3.2.8: Claim: (sin(1/x))^2 is a continuous function on (0, ∞).
Proof: The function 1/x is continuous on (0, ∞) and sin(x) is continuous on (0, ∞) (actually on ℝ, but (0, ∞) is the range for 1/x). Hence the composition sin(1/x) is continuous. Also, x^2 is continuous on the interval [−1, 1] (the range of sin). Thus the composition (sin(1/x))^2 is continuous on (0, ∞).
3.2.3  Discontinuous functions
When f is not continuous at c, we say f is discontinuous at c, or that it has a discontinuity at c. The following proposition is a useful test and follows immediately from the third item of Proposition 3.2.2.

Proposition 3.2.9. Let f : S → ℝ be a function and c ∈ S. Suppose there exists a sequence {x_n}_{n=1}^∞, x_n ∈ S for all n, and lim_{n→∞} x_n = c such that {f(x_n)}_{n=1}^∞ does not converge to f(c). Then f is discontinuous at c.

Again, saying that {f(x_n)}_{n=1}^∞ does not converge to f(c) means that it either does not converge at all, or it converges to something other than f(c).
Example 3.2.10: The function f : ℝ → ℝ defined by

f(x) := −1 if x < 0,   and   f(x) := 1 if x ≥ 0,

is not continuous at 0.
Proof: Consider {−1/n}_{n=1}^∞, which converges to 0. Then f(−1/n) = −1 for every n, and so lim_{n→∞} f(−1/n) = −1, but f(0) = 1. Thus the function is not continuous at 0. See Figure 3.4.
Notice that f(1/n) = 1 for all n ∈ ℕ. Hence, lim_{n→∞} f(1/n) = f(0) = 1. So {f(x_n)}_{n=1}^∞ may converge to f(0) for some specific sequence {x_n}_{n=1}^∞ going to 0, despite the function being discontinuous at 0. Finally, consider f((−1)^n/n) = (−1)^n. This sequence diverges.

Figure 3.4: Jump discontinuity. The values of f(−1/n) and f(0) are marked as black dots.
Example 3.2.11: For an extreme example, take the so-called Dirichlet function*:

f(x) := 1 if x is rational,   and   f(x) := 0 if x is irrational.

The function f is discontinuous at all c ∈ ℝ.
Proof: If c is rational, take a sequence {x_n}_{n=1}^∞ of irrational numbers such that lim_{n→∞} x_n = c (why can we?). Then f(x_n) = 0 and so lim_{n→∞} f(x_n) = 0, but f(c) = 1. If c is irrational, take a sequence of rational numbers {x_n}_{n=1}^∞ that converges to c (why can we?). Then lim_{n→∞} f(x_n) = 1, but f(c) = 0.

Let us test the limits of our intuition. Can there exist a function continuous at all irrational numbers, but discontinuous at all rational numbers? There are rational numbers arbitrarily close to any irrational number. Perhaps strangely, the answer is yes, such a function exists. The following example is called the Thomae function† or the popcorn function.

Example 3.2.12: Define f : (0, 1) → ℝ as

f(x) := 1/k if x = m/k, where m, k ∈ ℕ and have no common divisors (lowest terms),   and   f(x) := 0 if x is irrational.

See the graph of f in Figure 3.5. We claim that f is continuous at all irrational c and discontinuous at all rational c.

Figure 3.5: Graph of the "popcorn function."

Proof: Let c = m/k be rational and in lowest terms. Take a sequence of irrational numbers {x_n}_{n=1}^∞ such that lim_{n→∞} x_n = c. Then lim_{n→∞} f(x_n) = lim_{n→∞} 0 = 0, but f(c) = 1/k ≠ 0. So f is discontinuous at c.
Now let c be irrational, so f(c) = 0. Take a sequence {x_n}_{n=1}^∞ in (0, 1) such that lim_{n→∞} x_n = c. Given ε > 0, find K ∈ ℕ such that 1/K < ε by the Archimedean property. If m/k ∈ (0, 1) and m, k ∈ ℕ, then 0 < m < k. So there are only finitely many rational numbers in (0, 1) whose denominator k in lowest terms is less than K. As lim_{n→∞} x_n = c, every number not equal to c can appear at most finitely many times in {x_n}_{n=1}^∞. Hence, there is an M such that for n ≥ M, all the numbers x_n that are rational have a denominator larger than or equal to K. Thus for n ≥ M,

|f(x_n) − 0| = f(x_n) ≤ 1/K < ε.

Therefore, f is continuous at irrational c.

Let us end on an easier example.

Example 3.2.13: Define g : ℝ → ℝ by g(x) := 0 if x ≠ 0 and g(0) := 1. Then g is not continuous at zero, but continuous everywhere else (why?). The point x = 0 is called a removable discontinuity. That is because if we would change the definition of g, by insisting that g(0) be 0, we would obtain a continuous function. On the other hand, let f be the function of Example 3.2.10. Then f does not have a removable discontinuity at 0. No matter how we would define f(0), the function would still fail to be continuous. The difference is that lim_{x→0} g(x) exists while lim_{x→0} f(x) does not.
We stay with this example to show another phenomenon. Let A := {0}; then g|_A is continuous (why?), while g is not continuous on A. Similarly, if B := ℝ \ {0}, then g|_B is also continuous, and g is in fact continuous on B.

* Named after the German mathematician Johann Peter Gustav Lejeune Dirichlet (1805–1859).
† Named after the German mathematician Carl Johannes Thomae (1840–1921).
3.2.4  Exercises
Exercise 3.2.1: Using the definition of continuity directly prove that 𝑓 : ℝ → ℝ defined by 𝑓 (𝑥) B 𝑥 2 is continuous. Exercise 3.2.2: Using the definition of continuity directly prove that 𝑓 : (0, ∞) → ℝ defined by 𝑓 (𝑥) B 1/𝑥 is continuous. Exercise 3.2.3: Define 𝑓 : ℝ → ℝ by
( 𝑓 (𝑥) B
𝑥
if 𝑥 is rational,
𝑥2
if 𝑥 is irrational.
Using the definition of continuity directly prove that 𝑓 is continuous at 1 and discontinuous at 2.
Exercise 3.2.4: Define 𝑓 : ℝ → ℝ by
( 𝑓 (𝑥) B
sin(1/𝑥 ) 0
if 𝑥 ≠ 0, if 𝑥 = 0.
Is 𝑓 continuous? Prove your assertion. Exercise 3.2.5: Define 𝑓 : ℝ → ℝ by
( 𝑓 (𝑥) B
𝑥 sin(1/𝑥 )
0
if 𝑥 ≠ 0, if 𝑥 = 0.
Is 𝑓 continuous? Prove your assertion. Exercise 3.2.6: Prove Proposition 3.2.5. Exercise 3.2.7: Let 𝑆 ⊂ ℝ and 𝐴 ⊂ 𝑆. Let 𝑓 : 𝑆 → ℝ be a continuous function. Prove that the restriction 𝑓 | 𝐴 is continuous. Exercise 3.2.8: Suppose 𝑆 ⊂ ℝ, such that (𝑐 − 𝛼, 𝑐 + 𝛼) ⊂ 𝑆 for some 𝑐 ∈ ℝ and 𝛼 > 0. Let 𝑓 : 𝑆 → ℝ be a function and 𝐴 B (𝑐 − 𝛼, 𝑐 + 𝛼). Prove that if 𝑓 | 𝐴 is continuous at 𝑐, then 𝑓 is continuous at 𝑐. Exercise 3.2.9: Give an example of functions 𝑓 : ℝ → ℝ and 𝑔 : ℝ → ℝ such that the function ℎ, defined by ℎ(𝑥) B 𝑓 (𝑥) + 𝑔(𝑥), is continuous, but 𝑓 and 𝑔 are not continuous. Can you find 𝑓 and 𝑔 that are nowhere continuous, but ℎ is a continuous function? Exercise 3.2.10: Let 𝑓 : ℝ → ℝ and 𝑔 : ℝ → ℝ be continuous functions. Suppose that 𝑓 (𝑟) = 𝑔(𝑟) for all 𝑟 ∈ ℚ. Show that 𝑓 (𝑥) = 𝑔(𝑥) for all 𝑥 ∈ ℝ. Exercise 3.2.11: Let 𝑓 : ℝ → ℝ be continuous. Suppose 𝑓 (𝑐) > 0. Show that there exists an 𝛼 > 0 such that for all 𝑥 ∈ (𝑐 − 𝛼, 𝑐 + 𝛼), we have 𝑓 (𝑥) > 0. Exercise 3.2.12: Let 𝑓 : ℤ → ℝ be a function. Show that 𝑓 is continuous. Exercise 3.2.13: Let 𝑓 : 𝑆 → ℝ be a function and 𝑐 ∈ 𝑆, such that for every sequence {𝑥 𝑛 }∞ in 𝑆 with 𝑛=1 ∞ lim𝑛→∞ 𝑥 𝑛 = 𝑐, the sequence 𝑓 (𝑥 𝑛 ) 𝑛=1 converges. Show that 𝑓 is continuous at 𝑐. Exercise 3.2.14: Suppose 𝑓 : [−1, 0] → ℝ and 𝑔 : [0, 1] → ℝ are continuous and 𝑓 (0) = 𝑔(0). Define ℎ : [−1, 1] → ℝ by ℎ(𝑥) B 𝑓 (𝑥) if 𝑥 ≤ 0 and ℎ(𝑥) B 𝑔(𝑥) if 𝑥 > 0. Show that ℎ is continuous. Exercise 3.2.15: Suppose 𝑔 : ℝ → ℝ is a continuous function such that 𝑔(0) = 0, and suppose 𝑓 : ℝ → ℝ is such that | 𝑓 (𝑥) − 𝑓 (𝑦)| ≤ 𝑔(𝑥 − 𝑦) for all 𝑥 and 𝑦. Show that 𝑓 is continuous. Exercise 3.2.16 (Challenging): Suppose 𝑓 : ℝ → ℝ is continuous at 0 and such that 𝑓 (𝑥 + 𝑦) = 𝑓 (𝑥)+ 𝑓 (𝑦) for every 𝑥 and 𝑦. Show that 𝑓 (𝑥) = 𝑎𝑥 for some 𝑎 ∈ ℝ. Hint: Show that 𝑓 (𝑛𝑥) = 𝑛 𝑓 (𝑥), then show 𝑓 is continuous on ℝ. Then show that 𝑓 (𝑥)/𝑥 = 𝑓 (1) for all rational 𝑥. Exercise 3.2.17: Suppose 𝑆 ⊂ ℝ and let 𝑓 : 𝑆 → ℝ and 𝑔 : 𝑆 → ℝ be continuous functions. Define 𝑝 : 𝑆 → ℝ by 𝑝(𝑥) B max 𝑓 (𝑥), 𝑔(𝑥) and 𝑞 : 𝑆 → ℝ by 𝑞(𝑥) B min 𝑓 (𝑥), 𝑔(𝑥) . Prove that 𝑝 and 𝑞 are continuous.
Exercise 3.2.18: Suppose f : [−1, 1] → ℝ is a function continuous at all x ∈ [−1, 1] \ {0}. Show that for every ε such that 0 < ε < 1, there exists a function g : [−1, 1] → ℝ continuous on all of [−1, 1], such that f(x) = g(x) for all x ∈ [−1, −ε] ∪ [ε, 1], and |g(x)| ≤ |f(x)| for all x ∈ [−1, 1].

Exercise 3.2.19 (Challenging): A function f : I → ℝ is convex if whenever a ≤ x ≤ b for a, x, b in I, we have f(x) ≤ f(a)(b − x)/(b − a) + f(b)(x − a)/(b − a). In other words, if the line drawn between (a, f(a)) and (b, f(b)) is above the graph of f.

a) Prove that if I = (α, β) is an open interval and f : I → ℝ is convex, then f is continuous.

b) Find an example of a convex f : [0, 1] → ℝ that is not continuous.
3.3  Extreme and intermediate value theorems

Note: 1.5 lectures

Continuous functions on closed and bounded intervals are quite well behaved.

3.3.1  Min-max or extreme value theorem
Recall that 𝑓 : [𝑎, 𝑏] → ℝ is bounded if there exists a 𝐵 ∈ ℝ such that | 𝑓 (𝑥)| ≤ 𝐵 for all 𝑥 ∈ [𝑎, 𝑏]. For a continuous function on a closed and bounded interval, we have the following lemma. Lemma 3.3.1. A continuous function 𝑓 : [𝑎, 𝑏] → ℝ is bounded.
Proof. We prove the claim by contrapositive. Suppose f is not bounded. Then for each n ∈ ℕ, there is an x_n ∈ [a, b] such that |f(x_n)| ≥ n.
The sequence {x_n}_{n=1}^∞ is bounded as a ≤ x_n ≤ b. By the Bolzano–Weierstrass theorem, there is a convergent subsequence {x_{n_i}}_{i=1}^∞. Let x := lim_{i→∞} x_{n_i}. Since a ≤ x_{n_i} ≤ b for all i, then a ≤ x ≤ b. The sequence {f(x_{n_i})}_{i=1}^∞ is not bounded as |f(x_{n_i})| ≥ n_i ≥ i. Thus f is not continuous at x as

f(x) = f( lim_{i→∞} x_{n_i} ),   but   lim_{i→∞} f(x_{n_i}) does not exist. □
Notice a key point of the proof. Boundedness of [𝑎, 𝑏] allows us to use Bolzano– Weierstrass, while the fact that it is closed gives us that the limit is back in [𝑎, 𝑏]. The technique is a common one: Find a sequence with a certain property, then use Bolzano– Weierstrass to make such a sequence that also converges. Recall from calculus that 𝑓 : 𝑆 → ℝ achieves an absolute minimum at 𝑐 ∈ 𝑆 if 𝑓 (𝑥) ≥ 𝑓 (𝑐)
for all 𝑥 ∈ 𝑆.
On the other hand, 𝑓 achieves an absolute maximum at 𝑐 ∈ 𝑆 if 𝑓 (𝑥) ≤ 𝑓 (𝑐)
for all 𝑥 ∈ 𝑆.
If such a 𝑐 ∈ 𝑆 exists, then we say 𝑓 achieves an absolute minimum (resp. absolute maximum) on 𝑆, and we call 𝑓 (𝑐) the absolute minimum (resp. absolute maximum). If 𝑆 is a closed and bounded interval, then a continuous 𝑓 is not just bounded, it must achieve an absolute minimum and an absolute maximum on 𝑆. Theorem 3.3.2 (Minimum-maximum theorem / Extreme value theorem). A continuous function 𝑓 : [𝑎, 𝑏] → ℝ achieves both an absolute minimum and an absolute maximum on [𝑎, 𝑏].
Again, we remark that it is important that the domain of f is a closed and bounded interval [a, b].
Figure 3.6: f : [a, b] → ℝ achieves an absolute maximum f(c) at c, and an absolute minimum f(d) at d.
Proof. The lemma says that f is bounded, so the set f([a, b]) = { f(x) : x ∈ [a, b] } has a supremum and an infimum. There exist sequences in the set f([a, b]) that approach its supremum and its infimum. That is, there are sequences {f(x_n)}_{n=1}^∞ and {f(y_n)}_{n=1}^∞, where x_n and y_n are in [a, b], such that

lim_{n→∞} f(x_n) = inf f([a, b])   and   lim_{n→∞} f(y_n) = sup f([a, b]).

We are not done yet; we need to find where the minima and the maxima are. The problem is that the sequences {x_n}_{n=1}^∞ and {y_n}_{n=1}^∞ need not converge. We know {x_n}_{n=1}^∞ and {y_n}_{n=1}^∞ are bounded (their elements belong to a bounded interval [a, b]). Apply the Bolzano–Weierstrass theorem to find convergent subsequences {x_{n_i}}_{i=1}^∞ and {y_{m_i}}_{i=1}^∞. Let

x := lim_{i→∞} x_{n_i}   and   y := lim_{i→∞} y_{m_i}.

As a ≤ x_{n_i} ≤ b for all i, we have a ≤ x ≤ b. Similarly, a ≤ y ≤ b. So x and y are in [a, b]. A limit of a subsequence is the same as the limit of the sequence, and we can take a limit past the continuous function f:

inf f([a, b]) = lim_{n→∞} f(x_n) = lim_{i→∞} f(x_{n_i}) = f( lim_{i→∞} x_{n_i} ) = f(x).

Similarly,

sup f([a, b]) = lim_{n→∞} f(y_n) = lim_{i→∞} f(y_{m_i}) = f( lim_{i→∞} y_{m_i} ) = f(y).

Hence, f achieves an absolute minimum at x and an absolute maximum at y. □
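The theorem is about existence, but a crude numerical search illustrates it (a sketch added here, not part of the original text): sampling a continuous function on a closed and bounded interval and taking the smallest and largest sampled values approximates the absolute extrema, which the theorem guarantees are actually attained.

def sample_extrema(f, a, b, n=100000):
    # sample f at n + 1 evenly spaced points of [a, b] and return the smallest and largest values
    xs = [a + (b - a) * k / n for k in range(n + 1)]
    ys = [f(x) for x in xs]
    return min(ys), max(ys)

print(sample_extrema(lambda x: x * x + 1, -1.0, 2.0))  # approximately (1, 5), as in Example 3.3.3 below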
Example 3.3.3: The function 𝑓 (𝑥) B 𝑥 2 + 1 defined on the interval [−1, 2] achieves a minimum at 𝑥 = 0 when 𝑓 (0) = 1. It achieves a maximum at 𝑥 = 2 where 𝑓 (2) = 5. Do note that the domain of definition matters. If we instead took the domain to be [−10, 10], then 𝑓 would no longer have a maximum at 𝑥 = 2. Instead, the maximum would be achieved at either 𝑥 = 10 or 𝑥 = −10. We show by examples that the different hypotheses of the theorem are truly needed.
Example 3.3.4: The function 𝑓 : ℝ → ℝ defined by 𝑓 (𝑥) B 𝑥 achieves neither a minimum, nor a maximum. So it is important that we are looking at a bounded interval. Example 3.3.5: The function 𝑓 : (0, 1) → ℝ defined by 𝑓 (𝑥) B 1/𝑥 achieves neither a minimum, nor a maximum. It is continuous, but (0, 1) is not closed. The values of the function are unbounded as we approach 0. Also as we approach 𝑥 = 1, the values of the function approach 1, but 𝑓 (𝑥) > 1 for all 𝑥 ∈ (0, 1). There is no 𝑥 ∈ (0, 1) such that 𝑓 (𝑥) = 1. So it is important that we are looking at a closed interval. Example 3.3.6: Continuity is important. Define 𝑓 : [0, 1] → ℝ by 𝑓 (𝑥) B 1/𝑥 for 𝑥 > 0 and let 𝑓 (0) B 0. The function does not achieve a maximum. The domain [0, 1] is closed and bounded, but the problem is that the function is not continuous at 0.
3.3.2  Bolzano's intermediate value theorem
Bolzano’s intermediate value theorem is one of the cornerstones of analysis. It is sometimes only called the intermediate value theorem, or just Bolzano’s theorem. To prove Bolzano’s theorem we prove the following simpler lemma. Lemma 3.3.7. Let 𝑓 : [𝑎, 𝑏] → ℝ be a continuous function. Suppose 𝑓 (𝑎) < 0 and 𝑓 (𝑏) > 0. Then there exists a number 𝑐 ∈ (𝑎, 𝑏) such that 𝑓 (𝑐) = 0. Proof. We define two sequences {𝑎 𝑛 }∞ and {𝑏 𝑛 }∞ inductively: 𝑛=1 𝑛=1 (i) Let 𝑎 1 B 𝑎 and 𝑏 1 B 𝑏.
(ii) If 𝑓
𝑎 𝑛 +𝑏 𝑛 2
≥ 0, let 𝑎 𝑛+1 B 𝑎 𝑛 and 𝑏 𝑛+1 B
(iii) If 𝑓
𝑎 𝑛 +𝑏 𝑛 2
< 0, let 𝑎 𝑛+1 B
𝑎 𝑛 +𝑏 𝑛 2
𝑎 𝑛 +𝑏 𝑛 2 .
and 𝑏 𝑛+1 B 𝑏 𝑛 .
𝑛 See Figure 3.7 for an example defining the first five steps. If 𝑎 𝑛 < 𝑏 𝑛 , then 𝑎 𝑛 < 𝑎 𝑛 +𝑏 < 𝑏𝑛 . 2 So 𝑎 𝑛+1 < 𝑏 𝑛+1 . Thus by induction 𝑎 𝑛 < 𝑏 𝑛 for all 𝑛. Furthermore, 𝑎 𝑛 ≤ 𝑎 𝑛+1 and 𝑏 𝑛 ≥ 𝑏 𝑛+1 for all 𝑛, that is, the sequences are monotone. As 𝑎 𝑛 < 𝑏 𝑛 ≤ 𝑏 1 = 𝑏 and 𝑏 𝑛 > 𝑎 𝑛 ≥ 𝑎 1 = 𝑎 for all 𝑛, the sequences are also bounded. Therefore, the sequences converge. Let 𝑐 B lim𝑛→∞ 𝑎 𝑛 and 𝑑 B lim𝑛→∞ 𝑏 𝑛 , where also 𝑎 ≤ 𝑐 ≤ 𝑑 ≤ 𝑏. We need to show that 𝑐 = 𝑑. Notice 𝑏𝑛 − 𝑎𝑛 𝑏 𝑛+1 − 𝑎 𝑛+1 = . 2 By induction, 𝑏1 − 𝑎1 𝑏 𝑛 − 𝑎 𝑛 = 𝑛−1 = 21−𝑛 (𝑏 − 𝑎). 2
As 21−𝑛 (𝑏 − 𝑎) converges to zero, we take the limit as 𝑛 goes to infinity to get 𝑑 − 𝑐 = lim (𝑏 𝑛 − 𝑎 𝑛 ) = lim 21−𝑛 (𝑏 − 𝑎) = 0. 𝑛→∞
In other words, 𝑑 = 𝑐.
𝑛→∞
133
3.3. EXTREME AND INTERMEDIATE VALUE THEOREMS
c
𝑎1 𝑎2 𝑎3
𝑏1 𝑏2 𝑏3 𝑎4 𝑏4 𝑎5 𝑏5
Figure 3.7: Finding roots (bisection method).
By construction, for all 𝑛, 𝑓 (𝑎 𝑛 ) < 0
and
𝑓 (𝑏 𝑛 ) ≥ 0.
Since lim𝑛→∞ 𝑎 𝑛 = lim𝑛→∞ 𝑏 𝑛 = 𝑐 and 𝑓 is continuous at 𝑐, we may take limits in those inequalities: 𝑓 (𝑐) = lim 𝑓 (𝑎 𝑛 ) ≤ 0 and 𝑓 (𝑐) = lim 𝑓 (𝑏 𝑛 ) ≥ 0. 𝑛→∞
𝑛→∞
As 𝑓 (𝑐) ≥ 0 and 𝑓 (𝑐) ≤ 0, we conclude 𝑓 (𝑐) = 0. Thus also 𝑐 ≠ 𝑎 and 𝑐 ≠ 𝑏, so 𝑎 < 𝑐 < 𝑏. □
Theorem 3.3.8 (Bolzano’s intermediate value theorem). Let 𝑓 : [𝑎, 𝑏] → ℝ be a continuous function. Suppose 𝑦 ∈ ℝ is such that 𝑓 (𝑎) < 𝑦 < 𝑓 (𝑏) or 𝑓 (𝑎) > 𝑦 > 𝑓 (𝑏). Then there exists a 𝑐 ∈ (𝑎, 𝑏) such that 𝑓 (𝑐) = 𝑦. The theorem says that a continuous function on a closed interval achieves all the values between the values at the endpoints.
Proof. If 𝑓 (𝑎) < 𝑦 < 𝑓 (𝑏), then define 𝑔(𝑥) B 𝑓 (𝑥) − 𝑦. Then 𝑔(𝑎) < 0 and 𝑔(𝑏) > 0, and we apply Lemma 3.3.7 to 𝑔 to find 𝑐. If 𝑔(𝑐) = 0, then 𝑓 (𝑐) = 𝑦. Similarly, if 𝑓 (𝑎) > 𝑦 > 𝑓 (𝑏), then define 𝑔(𝑥) B 𝑦 − 𝑓 (𝑥). Again, 𝑔(𝑎) < 0 and 𝑔(𝑏) > 0, and we apply Lemma 3.3.7 to find 𝑐. As before, if 𝑔(𝑐) = 0, then 𝑓 (𝑐) = 𝑦. □ If a function is continuous, then the restriction to a subset is continuous; if 𝑓 : 𝑆 → ℝ is continuous and [𝑎, 𝑏] ⊂ 𝑆, then 𝑓 | [𝑎,𝑏] is also continuous. We generally apply the theorem to a function continuous on some large set 𝑆, but we restrict our attention to an interval. The proof of the lemma tells us how to find the root 𝑐. The proof is not only useful for us pure mathematicians, it is a useful idea in applied mathematics, where it is called the bisection method.
Example 3.3.9 (Bisection method): The polynomial f(x) := x^3 − 2x^2 + x − 1 has a real root in (1, 2). We simply notice that f(1) = −1 and f(2) = 1. Hence there must exist a point c ∈ (1, 2) such that f(c) = 0. To find a better approximation of the root we follow the proof of Lemma 3.3.7. We look at 1.5 and find that f(1.5) = −0.625. Therefore, there is a root of the polynomial in (1.5, 2). Next we look at 1.75 and note that f(1.75) ≈ −0.016. Hence there is a root of f in (1.75, 2). Next we look at 1.875 and find that f(1.875) ≈ 0.44, thus there is a root in (1.75, 1.875). We follow this procedure until we gain sufficient precision. In fact, the root is at c ≈ 1.7549.

The technique is the simplest method of finding roots of polynomials, a common problem in applied mathematics. In general, finding roots is hard to do quickly, precisely, and automatically. There are other, faster methods of finding roots of polynomials, such as Newton's method. One advantage of the method above is its simplicity. The moment we find an interval where the intermediate value theorem applies, we are guaranteed to find a root up to a desired precision in finitely many steps; see the sketch below. Furthermore, the bisection method finds roots of any continuous function, not just a polynomial.

The theorem guarantees one c such that f(c) = y, but there may be other roots of the equation f(c) = y. If we follow the procedure of the proof, we are guaranteed to find approximations to one such root. We need to work harder to find any other roots.

Polynomials of even degree may not have any real roots. There is no real number x such that x^2 + 1 = 0. Odd polynomials, on the other hand, always have at least one real root.
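The bisection procedure described above is easy to carry out mechanically. Here is a minimal Python sketch (an illustration added to this example, not part of the original text) applying it to f(x) = x^3 − 2x^2 + x − 1 on [1, 2]; it assumes only that f is continuous with f(a) < 0 and f(b) > 0, as in Lemma 3.3.7.

def bisect(f, a, b, tol=1e-6):
    # assumes f continuous, f(a) < 0 and f(b) > 0, as in Lemma 3.3.7
    while b - a > tol:
        m = (a + b) / 2
        if f(m) >= 0:
            b = m      # a root remains in [a, m]
        else:
            a = m      # a root remains in [m, b]
    return (a + b) / 2

f = lambda x: x**3 - 2 * x**2 + x - 1
print(bisect(f, 1.0, 2.0))   # approximately 1.7549, matching the text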
Proposition 3.3.10. Let f(x) be a polynomial of odd degree. Then f has a real root.

Proof. Suppose f is a polynomial of odd degree d. We write

f(x) = a_d x^d + a_{d−1} x^{d−1} + ··· + a_1 x + a_0,

where a_d ≠ 0. We divide by a_d to obtain a monic polynomial*

g(x) := x^d + b_{d−1} x^{d−1} + ··· + b_1 x + b_0,

where b_k = a_k / a_d. Let us show that g(n) is positive for some large n ∈ ℕ. We first compare the highest order term with the rest:

| (b_{d−1} n^{d−1} + ··· + b_1 n + b_0) / n^d |
    ≤ ( |b_{d−1}| n^{d−1} + ··· + |b_1| n + |b_0| ) / n^d
    ≤ ( |b_{d−1}| n^{d−1} + ··· + |b_1| n^{d−1} + |b_0| n^{d−1} ) / n^d
    = n^{d−1} ( |b_{d−1}| + ··· + |b_1| + |b_0| ) / n^d
    = (1/n) ( |b_{d−1}| + ··· + |b_1| + |b_0| ).

Therefore,

lim_{n→∞} (b_{d−1} n^{d−1} + ··· + b_1 n + b_0) / n^d = 0.

Thus there exists an M ∈ ℕ such that

| (b_{d−1} M^{d−1} + ··· + b_1 M + b_0) / M^d | < 1,

which implies

−(b_{d−1} M^{d−1} + ··· + b_1 M + b_0) < M^d.

Therefore, g(M) > 0.

Next, consider g(−n) for n ∈ ℕ. By a similar argument, there exists a K ∈ ℕ such that b_{d−1}(−K)^{d−1} + ··· + b_1(−K) + b_0 < K^d and therefore g(−K) < 0 (see Exercise 3.3.5). In the proof, make sure you use the fact that d is odd. In particular, if d is odd, then (−n)^d = −(n^d).

We appeal to the intermediate value theorem to find a c ∈ (−K, M), such that g(c) = 0. As g(x) = f(x)/a_d, then f(c) = 0, and the proof is done. □

* The word monic means that the coefficient of x^d is 1.
Example 3.3.11: You may recall how hard we worked in Example 1.2.3 to show that √2 exists. With Bolzano's theorem, we can prove the existence of the kth root of any positive number y > 0 without any effort. We claim that for any k ∈ ℕ and any y > 0, there exists a number x > 0 such that x^k = y.
Proof: If y = 1, then it is clear, so assume y ≠ 1. Let f(x) := x^k − y. We notice f(0) = −y < 0. If y < 1, then f(1) = 1^k − y > 0. If y > 1, then f(y) = y^k − y = y(y^{k−1} − 1) > 0. In either case, apply Bolzano's theorem to find an x > 0 such that f(x) = 0, or in other words x^k = y.

Example 3.3.12: Interestingly, there exist discontinuous functions with the intermediate value property. The function

f(x) := sin(1/x) if x ≠ 0,   and   f(x) := 0 if x = 0,

is not continuous at 0; however, f has the intermediate value property: Whenever a < b and y is such that f(a) < y < f(b) or f(a) > y > f(b), there exists a c ∈ (a, b) such that f(c) = y. See Figure 3.2 for a graph of sin(1/x). The proof is left as Exercise 3.3.4.

The intermediate value theorem says that if f : [a, b] → ℝ is continuous, then f([a, b]) contains all the values between f(a) and f(b). In fact, more is true. Combining all the results of this section, one can prove the following useful corollary, whose proof is left as an exercise. Hint: See Figure 3.8 and notice what the endpoints of the image interval are.
Corollary 3.3.13. If 𝑓 : [𝑎, 𝑏] → ℝ is continuous, then the direct image 𝑓 [𝑎, 𝑏] is a closed and bounded interval or a single number.
Figure 3.8: The image of a continuous f : [a, b] → ℝ.
3.3.3  Exercises
Exercise 3.3.1: Find an example of a discontinuous function f : [0, 1] → ℝ where the conclusion of the intermediate value theorem fails.

Exercise 3.3.2: Find an example of a bounded discontinuous function f : [0, 1] → ℝ that has neither an absolute minimum nor an absolute maximum.

Exercise 3.3.3: Let f : (0, 1) → ℝ be a continuous function such that lim_{x→0} f(x) = lim_{x→1} f(x) = 0. Show that f achieves either an absolute minimum or an absolute maximum on (0, 1) (but perhaps not both).

Exercise 3.3.4: Let f(x) := sin(1/x) if x ≠ 0, and f(x) := 0 if x = 0.
Show that 𝑓 has the intermediate value property. That is, whenever 𝑎 < 𝑏, if there exists a 𝑦 such that 𝑓 (𝑎) < 𝑦 < 𝑓 (𝑏) or 𝑓 (𝑎) > 𝑦 > 𝑓 (𝑏), then there exists a 𝑐 ∈ (𝑎, 𝑏) such that 𝑓 (𝑐) = 𝑦. Exercise 3.3.5: Suppose 𝑔(𝑥) is a monic polynomial of odd degree 𝑑, that is, 𝑔(𝑥) = 𝑥 𝑑 + 𝑏 𝑑−1 𝑥 𝑑−1 + · · · + 𝑏1 𝑥 + 𝑏0 , for some real numbers 𝑏0 , 𝑏1 , . . . , 𝑏 𝑑−1 . Show that there exists a 𝐾 ∈ ℕ such that 𝑔(−𝐾) < 0. Hint: Make sure to use the fact that 𝑑 is odd. You will have to use that (−𝑛)𝑑 = −(𝑛 𝑑 ). Exercise 3.3.6: Suppose 𝑔(𝑥) is a monic polynomial of positive even degree 𝑑, that is, 𝑔(𝑥) = 𝑥 𝑑 + 𝑏 𝑑−1 𝑥 𝑑−1 + · · · + 𝑏1 𝑥 + 𝑏0 , for some real numbers 𝑏0 , 𝑏1 , . . . , 𝑏 𝑑−1 . Suppose 𝑔(0) < 0. Show that 𝑔 has at least two distinct real roots. Exercise 3.3.7: Prove Corollary 3.3.13: Suppose 𝑓 : [𝑎, 𝑏] → ℝ is a continuous function. Prove that the direct image 𝑓 [𝑎, 𝑏] is a closed and bounded interval or a single number. Exercise 3.3.8: Suppose 𝑓 : ℝ → ℝ is continuous and periodic with period 𝑃 > 0. That is, 𝑓 (𝑥 + 𝑃) = 𝑓 (𝑥) for all 𝑥 ∈ ℝ. Show that 𝑓 achieves an absolute minimum and an absolute maximum.
Exercise 3.3.9 (Challenging): Suppose 𝑓 (𝑥) is a bounded polynomial, in other words, there is an 𝑀 such that | 𝑓 (𝑥)| ≤ 𝑀 for all 𝑥 ∈ ℝ. Prove that 𝑓 must be a constant. Exercise 3.3.10: Suppose 𝑓 : [0, 1] → [0, 1] is continuous. Show that 𝑓 has a fixed point, in other words, show that there exists an 𝑥 ∈ [0, 1] such that 𝑓 (𝑥) = 𝑥. Exercise 3.3.11: Find an example of a continuous bounded function 𝑓 : ℝ → ℝ that does not achieve an absolute minimum nor an absolute maximum on ℝ. Exercise 3.3.12: Suppose 𝑓 : ℝ → ℝ is continuous such that 𝑥 ≤ 𝑓 (𝑥) ≤ 𝑥 + 1 for all 𝑥 ∈ ℝ. Find 𝑓 (ℝ). Exercise 3.3.13: True/False, prove or find a counterexample. If 𝑓 : ℝ → ℝ is a continuous function such that 𝑓 | ℤ is bounded, then 𝑓 is bounded. Exercise 3.3.14: Suppose 𝑓 : [0, 1] → (0, 1) is a bijection. Prove that 𝑓 is not continuous. Exercise 3.3.15: Suppose 𝑓 : ℝ → ℝ is continuous.
a) Prove that if there is a 𝑐 such that 𝑓 (𝑐) 𝑓 (−𝑐) < 0, then there is a 𝑑 ∈ ℝ such that 𝑓 (𝑑) = 0.
b) Find a continuous function 𝑓 such that 𝑓 (ℝ) = ℝ, but 𝑓 (𝑥) 𝑓 (−𝑥) ≥ 0 for all 𝑥 ∈ ℝ. Exercise 3.3.16: Suppose 𝑔(𝑥) is a monic polynomial of even degree 𝑑, that is, 𝑔(𝑥) = 𝑥 𝑑 + 𝑏 𝑑−1 𝑥 𝑑−1 + · · · + 𝑏1 𝑥 + 𝑏0 ,
for some real numbers 𝑏0 , 𝑏1 , . . . , 𝑏 𝑑−1 . Show that 𝑔 achieves an absolute minimum on ℝ. Exercise 3.3.17: Suppose 𝑓 (𝑥) is a polynomial of degree 𝑑 and 𝑓 (ℝ) = ℝ. Show that 𝑑 is odd.
3.4  Uniform continuity

Note: 1.5–2 lectures (continuous extension can be optional)

3.4.1  Uniform continuity
We made a fuss of saying that the δ in the definition of continuity depended on the point c. There are situations when it is advantageous to be able to pick a δ independent of any point, and so we give a name to this concept.

Definition 3.4.1. Let S ⊂ ℝ, and let f : S → ℝ be a function. Suppose for every ε > 0 there exists a δ > 0 such that whenever x, c ∈ S and |x − c| < δ, then |f(x) − f(c)| < ε. Then we say f is uniformly continuous.

A uniformly continuous function must be continuous. The only difference in the definitions is that in uniform continuity, for a given ε > 0 we pick a δ > 0 that works for all c ∈ S. That is, δ can no longer depend on c, it only depends on ε. The domain of definition of the function makes a difference now. A function that is not uniformly continuous on a larger set may be uniformly continuous when restricted to a smaller set. Note that x and c are not treated any differently in this definition.

Example 3.4.2: f : [0, 1] → ℝ defined by f(x) := x^2 is uniformly continuous.
Proof: Note that 0 ≤ x, c ≤ 1. Then

|x^2 − c^2| = |x + c| |x − c| ≤ (|x| + |c|) |x − c| ≤ (1 + 1) |x − c|.
Therefore, given 𝜖 > 0, let 𝛿 B 𝜖/2. If |𝑥 − 𝑐| < 𝛿, then |𝑥 2 − 𝑐 2 | < 𝜖.
On the other hand, 𝑔 : ℝ → ℝ defined by 𝑔(𝑥) B 𝑥 2 is not uniformly continuous. Proof: Suppose it is uniformly continuous, then for every 𝜖 > 0, there would exist a 𝛿 > 0 such that if |𝑥 − 𝑐| < 𝛿, then |𝑥 2 − 𝑐 2 | < 𝜖. Take 𝑥 > 0 and let 𝑐 B 𝑥 + 𝛿/2. Write 𝜖 > |𝑥 2 − 𝑐 2 | = |𝑥 + 𝑐||𝑥 − 𝑐| = (2𝑥 + 𝛿/2)𝛿/2 ≥ 𝛿𝑥.
Therefore, 𝑥 < 𝜖/𝛿 for all 𝑥 > 0, which is a contradiction.
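As a numerical illustration of this contrast (a sketch added here, not part of the original text), the following Python snippet computes, for the squaring function, the largest admissible δ at each point c for a fixed ε: for c in [0, 1] it stays comparable to the example's δ = ε/2, while on ℝ it shrinks as c grows, so no single δ > 0 works for all c.

eps = 0.1

def max_delta(c, eps):
    # largest d such that |x^2 - c^2| < eps for every x >= 0 with |x - c| < d,
    # i.e. the d solving (c + d)^2 - c^2 = eps
    return (c * c + eps) ** 0.5 - c

for c in (0.0, 0.5, 1.0, 10.0, 100.0, 1000.0):
    print(c, max_delta(c, eps))
# For c in [0, 1] the printed d stays comparable to eps/2 = 0.05 (the delta from the example),
# but as c grows it shrinks roughly like eps / (2 c): no single delta works on all of R.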
Example 3.4.3: The function f : (0, 1) → ℝ defined by f(x) := 1/x is not uniformly continuous.
Proof: Given ε > 0, the inequality ε > |1/x − 1/y| holds if and only if

ε > |1/x − 1/y| = |y − x| / |xy| = |y − x| / (xy),

or

|x − y| < xyε.

Suppose ε < 1, and we wish to see if a small δ > 0 would work. If x ∈ (0, 1) and y = x + δ/2 ∈ (0, 1), then |x − y| = δ/2 < δ. We plug y into the inequality above to get

δ/2 < x(x + δ/2)ε < x.

If the definition of uniform continuity is satisfied, then the inequality δ/2 < x holds for all x > 0. But then δ ≤ 0. Therefore, no single δ > 0 works for all points.
The examples show that if 𝑓 is defined on an interval that is either not closed or not bounded, then 𝑓 can be continuous, but not uniformly continuous. For a closed and bounded interval [𝑎, 𝑏], we can, however, make the following statement.
Theorem 3.4.4. Let 𝑓 : [𝑎, 𝑏] → ℝ be a continuous function. Then 𝑓 is uniformly continuous.
Proof. We prove the statement by contrapositive. Suppose 𝑓 is not uniformly continuous. We will prove that there is some 𝑐 ∈ [𝑎, 𝑏] where 𝑓 is not continuous. Let us negate the definition of uniformly continuous. There exists an 𝜖 > 0 such that for every 𝛿 > 0, there exist points 𝑥, 𝑦 in [𝑎, 𝑏] with |𝑥 − 𝑦| < 𝛿 and | 𝑓 (𝑥) − 𝑓 (𝑦)| ≥ 𝜖. So for the 𝜖 > 0 above, we find sequences {𝑥 𝑛 }∞ and {𝑦𝑛 }∞ such that |𝑥 𝑛 − 𝑦𝑛 | < 1/𝑛 𝑛=1 𝑛=1 and such that | 𝑓 (𝑥 𝑛 ) − 𝑓 (𝑦𝑛 )| ≥ 𝜖. By Bolzano–Weierstrass, there exists a convergent subsequence {𝑥 𝑛 𝑘 }∞ . Let 𝑐 B lim 𝑘→∞ 𝑥 𝑛 𝑘 . As 𝑎 ≤ 𝑥 𝑛 𝑘 ≤ 𝑏 for all 𝑘, we have 𝑎 ≤ 𝑐 ≤ 𝑏. 𝑘=1 Estimate |𝑦𝑛 𝑘 − 𝑐| = |𝑦𝑛 𝑘 − 𝑥 𝑛 𝑘 + 𝑥 𝑛 𝑘 − 𝑐| ≤ |𝑦𝑛 𝑘 − 𝑥 𝑛 𝑘 | + |𝑥 𝑛 𝑘 − 𝑐| < 1/𝑛 𝑘 + |𝑥 𝑛 𝑘 − 𝑐|. As 1/𝑛 𝑘 and 𝑥 𝑛 𝑘 − 𝑐 both go to zero when 𝑘 goes to infinity, {𝑦𝑛 𝑘 }∞ converges and the 𝑘=1 limit is 𝑐. We now show that 𝑓 is not continuous at 𝑐. Estimate
|f(x_{n_k}) − f(c)| = |f(x_{n_k}) − f(y_{n_k}) + f(y_{n_k}) − f(c)|
                   ≥ |f(x_{n_k}) − f(y_{n_k})| − |f(y_{n_k}) − f(c)|
                   ≥ ε − |f(y_{n_k}) − f(c)|.

Or in other words,

|f(x_{n_k}) − f(c)| + |f(y_{n_k}) − f(c)| ≥ ε.

At least one of the sequences {f(x_{n_k})}_{k=1}^∞ or {f(y_{n_k})}_{k=1}^∞ cannot converge to f(c), otherwise the left-hand side of the inequality would go to zero while the right-hand side is positive. Thus f cannot be continuous at c. □
3.4.2  Continuous extension
Uniformly continuous functions on open intervals extend continuously to the endpoints. The key is the following lemma, which also has many other uses. It says that uniformly continuous functions preserve Cauchy sequences. The new issue here is that for a Cauchy sequence, the limit may not end up in the domain of the function. Lemma 3.4.5. Let 𝑆 ⊂ ℝ and let 𝑓 : 𝑆 → ℝ be a uniformly continuous function. Let {𝑥 𝑛 }∞ be 𝑛=1 ∞ a Cauchy sequence in 𝑆. Then 𝑓 (𝑥 𝑛 ) 𝑛=1 is Cauchy.
Proof. Let 𝜖 > 0 be given. There is a 𝛿 > 0 such that |𝑓(𝑥) − 𝑓(𝑦)| < 𝜖 whenever 𝑥, 𝑦 ∈ 𝑆 and |𝑥 − 𝑦| < 𝛿. Find an 𝑀 ∈ ℕ such that for all 𝑛, 𝑘 ≥ 𝑀, we have |𝑥_𝑛 − 𝑥_𝑘| < 𝛿. Then for all 𝑛, 𝑘 ≥ 𝑀, we have |𝑓(𝑥_𝑛) − 𝑓(𝑥_𝑘)| < 𝜖. □

An application of the lemma above is the following extension result. It says that a function on an open interval is uniformly continuous if and only if it can be extended to a continuous function on the closed interval.

Proposition 3.4.6. A function 𝑓 : (𝑎, 𝑏) → ℝ is uniformly continuous if and only if the limits
𝐿_𝑎 ≔ lim_{𝑥→𝑎} 𝑓(𝑥)   and   𝐿_𝑏 ≔ lim_{𝑥→𝑏} 𝑓(𝑥)
exist and the function 𝑓̃ : [𝑎, 𝑏] → ℝ defined by
𝑓̃(𝑥) ≔ 𝑓(𝑥) if 𝑥 ∈ (𝑎, 𝑏),   𝑓̃(𝑎) ≔ 𝐿_𝑎,   𝑓̃(𝑏) ≔ 𝐿_𝑏
is continuous.

Proof. One direction is quick. If 𝑓̃ is continuous, then it is uniformly continuous by Theorem 3.4.4. As 𝑓 is the restriction of 𝑓̃ to (𝑎, 𝑏), 𝑓 is also uniformly continuous (exercise).
Now suppose 𝑓 is uniformly continuous. We must first show that the limits 𝐿_𝑎 and 𝐿_𝑏 exist. Let us concentrate on 𝐿_𝑎. Take {𝑥_𝑛}_{𝑛=1}^∞ in (𝑎, 𝑏) such that lim_{𝑛→∞} 𝑥_𝑛 = 𝑎. The sequence {𝑥_𝑛}_{𝑛=1}^∞ is Cauchy, so by Lemma 3.4.5 the sequence {𝑓(𝑥_𝑛)}_{𝑛=1}^∞ is Cauchy and thus convergent. Let 𝐿_1 ≔ lim_{𝑛→∞} 𝑓(𝑥_𝑛). Take another sequence {𝑦_𝑛}_{𝑛=1}^∞ in (𝑎, 𝑏) such that lim_{𝑛→∞} 𝑦_𝑛 = 𝑎. By the same reasoning we get 𝐿_2 ≔ lim_{𝑛→∞} 𝑓(𝑦_𝑛). If we show that 𝐿_1 = 𝐿_2, then the limit 𝐿_𝑎 = lim_{𝑥→𝑎} 𝑓(𝑥) exists.
Let 𝜖 > 0 be given. Find 𝛿 > 0 such that |𝑥 − 𝑦| < 𝛿 implies |𝑓(𝑥) − 𝑓(𝑦)| < 𝜖/3. Find 𝑀 ∈ ℕ such that for 𝑛 ≥ 𝑀, we have |𝑎 − 𝑥_𝑛| < 𝛿/2, |𝑎 − 𝑦_𝑛| < 𝛿/2, |𝑓(𝑥_𝑛) − 𝐿_1| < 𝜖/3, and |𝑓(𝑦_𝑛) − 𝐿_2| < 𝜖/3. Then for 𝑛 ≥ 𝑀,
|𝑥_𝑛 − 𝑦_𝑛| = |𝑥_𝑛 − 𝑎 + 𝑎 − 𝑦_𝑛| ≤ |𝑥_𝑛 − 𝑎| + |𝑎 − 𝑦_𝑛| < 𝛿/2 + 𝛿/2 = 𝛿.
So
|𝐿_1 − 𝐿_2| = |𝐿_1 − 𝑓(𝑥_𝑛) + 𝑓(𝑥_𝑛) − 𝑓(𝑦_𝑛) + 𝑓(𝑦_𝑛) − 𝐿_2| ≤ |𝐿_1 − 𝑓(𝑥_𝑛)| + |𝑓(𝑥_𝑛) − 𝑓(𝑦_𝑛)| + |𝑓(𝑦_𝑛) − 𝐿_2| ≤ 𝜖/3 + 𝜖/3 + 𝜖/3 = 𝜖.
Therefore, 𝐿_1 = 𝐿_2. Thus 𝐿_𝑎 exists. To show that 𝐿_𝑏 exists is left as an exercise.
If 𝐿_𝑎 = lim_{𝑥→𝑎} 𝑓(𝑥) exists, then lim_{𝑥→𝑎} 𝑓̃(𝑥) exists and equals 𝐿_𝑎 (see Proposition 3.1.15). Similarly for 𝐿_𝑏. Hence 𝑓̃ is continuous at 𝑎 and 𝑏. And since 𝑓 is continuous at 𝑐 ∈ (𝑎, 𝑏), then 𝑓̃ is continuous at 𝑐 ∈ (𝑎, 𝑏) (Proposition 3.1.15 again). □

A typical application of this proposition (together with Proposition 3.1.17) is the following. Suppose 𝑓 : (−1, 0) ∪ (0, 1) → ℝ is uniformly continuous, then lim_{𝑥→0} 𝑓(𝑥) exists and the function has a removable singularity, that is, we can extend the function to a continuous function on (−1, 1).
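The following is a hedged numerical sketch of the extension idea, with our own example function sin(𝑥)/𝑥 (the book does not use it here): a uniformly continuous function sends a Cauchy sequence tending to the endpoint to a Cauchy sequence of values, and the limit of those values is what the continuous extension takes at the endpoint.

```python
# Illustrative sketch (our example): extending f(x) = sin(x)/x from (0, 1) to [0, 1].
import math

def f(x):
    return math.sin(x) / x

x_n = [1 / 2**n for n in range(1, 25)]   # a Cauchy sequence in (0, 1) tending to 0
values = [f(x) for x in x_n]

print(values[-1])                                                  # ~1.0: candidate value for f~(0)
print(max(abs(values[i+1] - values[i]) for i in range(15, 23)))    # tiny: the values are Cauchy
```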
3.4.3 Lipschitz continuous functions
Definition 3.4.7. A function 𝑓 : 𝑆 → ℝ is Lipschitz continuous*, if there exists a 𝐾 ∈ ℝ, such that
|𝑓(𝑥) − 𝑓(𝑦)| ≤ 𝐾|𝑥 − 𝑦|   for all 𝑥 and 𝑦 in 𝑆.

A large class of functions is Lipschitz continuous. Be careful, just as for uniformly continuous functions, the domain of definition of the function is important. See the examples below and the exercises. First, we justify the use of the word continuous.

Proposition 3.4.8. A Lipschitz continuous function is uniformly continuous.

Proof. Let 𝑓 : 𝑆 → ℝ be a function and let 𝐾 be a constant such that |𝑓(𝑥) − 𝑓(𝑦)| ≤ 𝐾|𝑥 − 𝑦| for all 𝑥, 𝑦 in 𝑆. Let 𝜖 > 0 be given. Take 𝛿 ≔ 𝜖/𝐾. For all 𝑥 and 𝑦 in 𝑆 such that |𝑥 − 𝑦| < 𝛿,
|𝑓(𝑥) − 𝑓(𝑦)| ≤ 𝐾|𝑥 − 𝑦| < 𝐾𝛿 = 𝐾(𝜖/𝐾) = 𝜖.
Therefore, 𝑓 is uniformly continuous. □
We interpret Lipschitz continuity geometrically. Let 𝑓 be a Lipschitz continuous function with some constant 𝐾. We rewrite the inequality to say that for 𝑥 ≠ 𝑦, we have
|(𝑓(𝑥) − 𝑓(𝑦))/(𝑥 − 𝑦)| ≤ 𝐾.
The quantity (𝑓(𝑥) − 𝑓(𝑦))/(𝑥 − 𝑦) is the slope of the line between the points (𝑥, 𝑓(𝑥)) and (𝑦, 𝑓(𝑦)), that is, a secant line. Therefore, 𝑓 is Lipschitz continuous if and only if every line that intersects the graph of 𝑓 in at least two distinct points has slope in absolute value less than or equal to 𝐾. See Figure 3.9.

Figure 3.9: The slope of a secant line. A function is Lipschitz if |(𝑓(𝑥) − 𝑓(𝑦))/(𝑥 − 𝑦)| ≤ 𝐾 for all 𝑥 and 𝑦.
* Named after the German mathematician Rudolf Otto Sigismund Lipschitz (1832–1903).
Example 3.4.9: The functions sin(𝑥) and cos(𝑥) are Lipschitz continuous. In Example 3.2.6 we have seen the following two inequalities:
|sin(𝑥) − sin(𝑦)| ≤ |𝑥 − 𝑦|   and   |cos(𝑥) − cos(𝑦)| ≤ |𝑥 − 𝑦|.
Hence sine and cosine are Lipschitz continuous with 𝐾 = 1.

Example 3.4.10: The function 𝑓 : [1, ∞) → ℝ defined by 𝑓(𝑥) ≔ √𝑥 is Lipschitz continuous. Proof:
|√𝑥 − √𝑦| = |𝑥 − 𝑦| / (√𝑥 + √𝑦).
As 𝑥 ≥ 1 and 𝑦 ≥ 1, we see that 1/(√𝑥 + √𝑦) ≤ 1/2. Therefore,
|√𝑥 − √𝑦| = |𝑥 − 𝑦| / (√𝑥 + √𝑦) ≤ (1/2)|𝑥 − 𝑦|.

On the other hand, 𝑔 : [0, ∞) → ℝ defined by 𝑔(𝑥) ≔ √𝑥 is not Lipschitz continuous. Proof: Suppose for all 𝑥, 𝑦 ∈ [0, ∞),
|√𝑥 − √𝑦| ≤ 𝐾|𝑥 − 𝑦|
for some 𝐾. Set 𝑦 = 0 to obtain √𝑥 ≤ 𝐾𝑥. If 𝐾 > 0, then for 𝑥 > 0 we get 1/𝐾 ≤ √𝑥, or 1/𝐾² ≤ 𝑥. This cannot possibly be true for all 𝑥 > 0. Thus no such 𝐾 > 0 exists and 𝑔 is not Lipschitz continuous. See Figure 3.10 and note how secant lines would be more and more vertical as we get closer to 𝑥 = 0.
Figure 3.10: Graph of √𝑥 and some secant lines through (0, 0) and (𝑥, √𝑥).
The last example 𝑔 is a function that is uniformly continuous but not Lipschitz continuous. To see that √𝑥 is uniformly continuous as a function on [0, ∞), note that it is uniformly continuous when restricted to [0, 1] by Theorem 3.4.4. It is also Lipschitz (and so uniformly continuous) when restricted to [1, ∞). It is not hard (exercise) to show that this means that √𝑥 is a uniformly continuous function on [0, ∞).
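A hedged numerical companion to this discussion (our own sketch and sample values): the secant slopes of √𝑥 through the origin blow up, so no Lipschitz constant exists, while the choice 𝛿 = 𝜖² witnesses uniform continuity via the elementary bound |√𝑥 − √𝑦| ≤ √|𝑥 − 𝑦|.

```python
# Illustrative sketch: sqrt is uniformly continuous on [0, 1] but not Lipschitz.
import math

for x in [1e-2, 1e-4, 1e-6, 1e-8]:
    print(f"secant slope through (0,0) and (x, sqrt(x)) at x={x:.0e}: {math.sqrt(x)/x:.1e}")

eps = 1e-3
delta = eps**2                     # works because |sqrt(x) - sqrt(y)| <= sqrt(|x - y|)
pairs = [(k/1000, k/1000 + delta/2) for k in range(1000)]
print(all(abs(math.sqrt(y) - math.sqrt(x)) < eps for x, y in pairs))   # True
```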
3.4.4 Exercises
Exercise 3.4.1: Let 𝑓 : 𝑆 → ℝ be uniformly continuous. Let 𝐴 ⊂ 𝑆. Then the restriction 𝑓 | 𝐴 is uniformly continuous. Exercise 3.4.2: Let 𝑓 : (𝑎, 𝑏) → ℝ be a uniformly continuous function. Finish the proof of Proposition 3.4.6 by showing that the limit lim 𝑓 (𝑥) exists. 𝑥→𝑏
Exercise 3.4.3: Show that 𝑓 : (𝑐, ∞) → ℝ for some 𝑐 > 0 and defined by 𝑓 (𝑥) B 1/𝑥 is Lipschitz continuous. Exercise 3.4.4: Show that 𝑓 : (0, ∞) → ℝ defined by 𝑓 (𝑥) B 1/𝑥 is not Lipschitz continuous. Exercise 3.4.5: Let 𝐴, 𝐵 be intervals. Let 𝑓 : 𝐴 → ℝ and 𝑔 : 𝐵 → ℝ be uniformly continuous functions such that 𝑓 (𝑥) = 𝑔(𝑥) for 𝑥 ∈ 𝐴 ∩ 𝐵. Define the function ℎ : 𝐴 ∪ 𝐵 → ℝ by ℎ(𝑥) B 𝑓 (𝑥) if 𝑥 ∈ 𝐴 and ℎ(𝑥) B 𝑔(𝑥) if 𝑥 ∈ 𝐵 \ 𝐴. a) Prove that if 𝐴 ∩ 𝐵 ≠ ∅, then ℎ is uniformly continuous.
b) Find an example where 𝐴 ∩ 𝐵 = ∅ and ℎ is not even continuous. Exercise 3.4.6 (Challenging): Let 𝑓 : ℝ → ℝ be a polynomial of degree 𝑑 ≥ 2. Show that 𝑓 is not Lipschitz continuous. Exercise 3.4.7: Let 𝑓 : (0, 1) → ℝ be a bounded continuous function. Show that the function 𝑔(𝑥) B 𝑥(1 − 𝑥) 𝑓 (𝑥) is uniformly continuous. Exercise 3.4.8: Show that 𝑓 : (0, ∞) → ℝ defined by 𝑓 (𝑥) B sin(1/𝑥 ) is not uniformly continuous. Exercise 3.4.9 (Challenging): Let 𝑓 : ℚ → ℝ be a uniformly continuous function. Show that there exists a uniformly continuous function e 𝑓 : ℝ → ℝ such that 𝑓 (𝑥) = e 𝑓 (𝑥) for all 𝑥 ∈ ℚ. Exercise 3.4.10: a) Find a continuous 𝑓 : (0, 1) → ℝ and a sequence {𝑥 𝑛 }∞ in (0, 1) that is Cauchy, but such that 𝑛=1 ∞ 𝑓 (𝑥 𝑛 ) 𝑛=1 is not Cauchy.
b) Prove that if 𝑓 : ℝ → ℝ is continuous, and {𝑥 𝑛 }∞ is Cauchy, then 𝑓 (𝑥 𝑛 ) 𝑛=1
∞
𝑛=1
is Cauchy.
Exercise 3.4.11: Prove: a) If 𝑓 : 𝑆 → ℝ and 𝑔 : 𝑆 → ℝ are uniformly continuous, then ℎ : 𝑆 → ℝ given by ℎ(𝑥) B 𝑓 (𝑥) + 𝑔(𝑥) is uniformly continuous. b) If 𝑓 : 𝑆 → ℝ is uniformly continuous and 𝑎 ∈ ℝ, then ℎ : 𝑆 → ℝ given by ℎ(𝑥) B 𝑎 𝑓 (𝑥) is uniformly continuous. Exercise 3.4.12: Prove: a) If 𝑓 : 𝑆 → ℝ and 𝑔 : 𝑆 → ℝ are Lipschitz, then ℎ : 𝑆 → ℝ given by ℎ(𝑥) B 𝑓 (𝑥) + 𝑔(𝑥) is Lipschitz.
b) If 𝑓 : 𝑆 → ℝ is Lipschitz and 𝑎 ∈ ℝ, then ℎ : 𝑆 → ℝ given by ℎ(𝑥) B 𝑎 𝑓 (𝑥) is Lipschitz.
144
CHAPTER 3. CONTINUOUS FUNCTIONS
Exercise 3.4.13: a) If 𝑓 : [0, 1] → ℝ is given by 𝑓 (𝑥) B 𝑥 𝑚 for an integer 𝑚 ≥ 0, show 𝑓 is Lipschitz and find the best (the smallest) Lipschitz constant 𝐾 (depending on 𝑚 of course). Hint: (𝑥 − 𝑦)(𝑥 𝑚−1 + 𝑥 𝑚−2 𝑦 + 𝑥 𝑚−3 𝑦 2 + · · · + 𝑥𝑦 𝑚−2 + 𝑦 𝑚−1 ) = 𝑥 𝑚 − 𝑦 𝑚 .
b) Using the previous exercise, show that if 𝑓 : [0, 1] → ℝ is a polynomial, that is, 𝑓 (𝑥) B 𝑎 𝑚 𝑥 𝑚 + 𝑎 𝑚−1 𝑥 𝑚−1 + · · · + 𝑎 0 , then 𝑓 is Lipschitz. Exercise 3.4.14: Suppose for 𝑓 : [0, 1] → ℝ, we have | 𝑓 (𝑥) − 𝑓 (𝑦)| ≤ 𝐾 |𝑥 − 𝑦| for all 𝑥, 𝑦 in [0, 1], and 𝑓 (0) = 𝑓 (1) = 0. Prove that | 𝑓 (𝑥)| ≤ 𝐾/2 for all 𝑥 ∈ [0, 1]. Further show by example that 𝐾/2 is the best possible, that is, there exists such a continuous function for which | 𝑓 (𝑥)| = 𝐾/2 for some 𝑥 ∈ [0, 1]. Exercise 3.4.15: Suppose 𝑓 : ℝ → ℝ is continuous and periodic with period 𝑃 > 0. That is, 𝑓 (𝑥 + 𝑃) = 𝑓 (𝑥) for all 𝑥 ∈ ℝ. Show that 𝑓 is uniformly continuous. Exercise 3.4.16: Suppose 𝑓 : 𝑆 → ℝ and 𝑔 : [0, ∞) → [0, ∞) are functions, 𝑔 is continuous at 0, 𝑔(0) = 0, and whenever 𝑥 and 𝑦 are in 𝑆, we have | 𝑓 (𝑥) − 𝑓 (𝑦)| ≤ 𝑔 |𝑥 − 𝑦| . Prove that 𝑓 is uniformly continuous. Exercise 3.4.17: Suppose 𝑓 : [𝑎, 𝑏] → ℝ is a function such that for every 𝑐 ∈ [𝑎, 𝑏] there is a 𝐾 𝑐 > 0 and an 𝜖 𝑐 > 0 for which | 𝑓 (𝑥) − 𝑓 (𝑦)| ≤ 𝐾 𝑐 |𝑥 − 𝑦| for all 𝑥 and 𝑦 in (𝑐 − 𝜖 𝑐 , 𝑐 + 𝜖 𝑐 ) ∩ [𝑎, 𝑏]. In other words, 𝑓 is “locally Lipschitz.” a) Prove that there exists a single 𝐾 > 0 such that | 𝑓 (𝑥) − 𝑓 (𝑦)| ≤ 𝐾 |𝑥 − 𝑦| for all 𝑥, 𝑦 in [𝑎, 𝑏].
b) Find a counterexample to the above if the interval is open, that is, find an 𝑓 : (𝑎, 𝑏) → ℝ that is locally Lipschitz, but not Lipschitz.
3.5 Limits at infinity
Note: less than 1 lecture (optional, can safely be omitted unless §3.6 or §5.5 is also covered)
3.5.1 Limits at infinity
As for sequences, a continuous variable can also approach infinity.

Definition 3.5.1. We say ∞ is a cluster point of 𝑆 ⊂ ℝ if for every 𝑀 ∈ ℝ, there exists an 𝑥 ∈ 𝑆 such that 𝑥 ≥ 𝑀. Similarly, −∞ is a cluster point of 𝑆 ⊂ ℝ if for every 𝑀 ∈ ℝ, there exists an 𝑥 ∈ 𝑆 such that 𝑥 ≤ 𝑀.
Let 𝑓 : 𝑆 → ℝ be a function, where ∞ is a cluster point of 𝑆. If there exists an 𝐿 ∈ ℝ such that for every 𝜖 > 0, there is an 𝑀 ∈ ℝ such that |𝑓(𝑥) − 𝐿| < 𝜖 whenever 𝑥 ∈ 𝑆 and 𝑥 ≥ 𝑀, then we say 𝑓(𝑥) converges to 𝐿 as 𝑥 goes to ∞. We call 𝐿 the limit and write
lim_{𝑥→∞} 𝑓(𝑥) ≔ 𝐿.
Alternatively we write 𝑓(𝑥) → 𝐿 as 𝑥 → ∞.
Similarly, if −∞ is a cluster point of 𝑆 and there exists an 𝐿 ∈ ℝ such that for every 𝜖 > 0, there is an 𝑀 ∈ ℝ such that |𝑓(𝑥) − 𝐿| < 𝜖 whenever 𝑥 ∈ 𝑆 and 𝑥 ≤ 𝑀, then we say 𝑓(𝑥) converges to 𝐿 as 𝑥 goes to −∞. Alternatively, we write 𝑓(𝑥) → 𝐿 as 𝑥 → −∞. We call 𝐿 a limit and, if unique, write
lim_{𝑥→−∞} 𝑓(𝑥) ≔ 𝐿.
The first thing to do, as usual, is to prove that the limit, if it exists, is unique. We leave it as an exercise for the reader.

Proposition 3.5.2. The limit at ∞ or −∞ as defined above is unique if it exists.

Example 3.5.3: Let 𝑓(𝑥) ≔ 1/(|𝑥| + 1). Then
lim_{𝑥→∞} 𝑓(𝑥) = 0   and   lim_{𝑥→−∞} 𝑓(𝑥) = 0.
Proof: Let 𝜖 > 0 be given. Find 𝑀 > 0 large enough so that 1/(𝑀 + 1) < 𝜖. If 𝑥 ≥ 𝑀, then 0 < 1/(|𝑥| + 1) = 1/(𝑥 + 1) ≤ 1/(𝑀 + 1) < 𝜖. The first limit follows. The proof for −∞ is left to the reader.
Example 3.5.4: Let 𝑓 (𝑥) B sin(𝜋𝑥). Then lim𝑥→∞ 𝑓 (𝑥) does not exist. To prove this fact note that if 𝑥 = 2𝑛 + 1/2 for some 𝑛 ∈ ℕ , then 𝑓 (𝑥) = 1, while if 𝑥 = 2𝑛 + 3/2, then 𝑓 (𝑥) = −1. So they cannot both be within a small 𝜖 of a single real number.
Be careful not to confuse continuous limits with limits of sequences. We could say
lim_{𝑛→∞} sin(𝜋𝑛) = 0,   but   lim_{𝑥→∞} sin(𝜋𝑥) does not exist.
Of course the notation is ambiguous: Are we thinking of the sequence {sin(𝜋𝑛)}_{𝑛=1}^∞ or the function sin(𝜋𝑥) of a real variable? We are simply using the convention that 𝑛 ∈ ℕ, while 𝑥 ∈ ℝ. When the notation is not clear, it is good to explicitly mention where the variable lives, or what kind of limit you are using. If there is a possibility of confusion, one can write, for example,
lim_{𝑛→∞, 𝑛∈ℕ} sin(𝜋𝑛).
There is a connection of continuous limits to limits of sequences, but we must take all sequences going to infinity, just as before in Lemma 3.1.7.

Lemma 3.5.5. Suppose 𝑓 : 𝑆 → ℝ is a function, ∞ is a cluster point of 𝑆 ⊂ ℝ, and 𝐿 ∈ ℝ. Then
lim_{𝑥→∞} 𝑓(𝑥) = 𝐿   if and only if   lim_{𝑛→∞} 𝑓(𝑥_𝑛) = 𝐿
for all sequences {𝑥_𝑛}_{𝑛=1}^∞ in 𝑆 such that lim_{𝑛→∞} 𝑥_𝑛 = ∞.
The lemma also holds for the limit as 𝑥 → −∞. Its proof is almost identical and is left as an exercise.
Proof. First suppose 𝑓(𝑥) → 𝐿 as 𝑥 → ∞. Given an 𝜖 > 0, there exists an 𝑀 such that for all 𝑥 ≥ 𝑀, we have |𝑓(𝑥) − 𝐿| < 𝜖. Let {𝑥_𝑛}_{𝑛=1}^∞ be a sequence in 𝑆 such that lim_{𝑛→∞} 𝑥_𝑛 = ∞. Then there exists an 𝑁 such that for all 𝑛 ≥ 𝑁, we have 𝑥_𝑛 ≥ 𝑀. And thus |𝑓(𝑥_𝑛) − 𝐿| < 𝜖.
We prove the converse by contrapositive. Suppose 𝑓(𝑥) does not go to 𝐿 as 𝑥 → ∞. This means that there exists an 𝜖 > 0, such that for every 𝑛 ∈ ℕ, there exists an 𝑥 ∈ 𝑆, 𝑥 ≥ 𝑛, let us call it 𝑥_𝑛, such that |𝑓(𝑥_𝑛) − 𝐿| ≥ 𝜖. Consider the sequence {𝑥_𝑛}_{𝑛=1}^∞. Clearly {𝑓(𝑥_𝑛)}_{𝑛=1}^∞ does not converge to 𝐿. It remains to note that lim_{𝑛→∞} 𝑥_𝑛 = ∞, because 𝑥_𝑛 ≥ 𝑛 for all 𝑛. □

Using the lemma, we again translate results about sequential limits into results about continuous limits as 𝑥 goes to infinity. That is, we have almost immediate analogues of the corollaries in §3.1.3. We simply allow the cluster point 𝑐 to be either ∞ or −∞, in addition to a real number. We leave it to the student to verify these statements.
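The sketch below is a hedged numerical illustration (ours, not the book's) of why the sequential characterization requires all sequences tending to infinity: along the integers sin(𝜋𝑛) tends to 0, but along another sequence tending to ∞ the values stay at 1, so the continuous limit cannot exist.

```python
# Illustrative sketch for Lemma 3.5.5 and the sin(pi x) discussion above.
import math

print([round(math.sin(math.pi * n), 12) for n in range(1, 6)])    # ~0.0 along the integers
print([math.sin(math.pi * (2*n + 0.5)) for n in range(1, 6)])     # ~1.0 along x_n = 2n + 1/2
```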
3.5.2 Infinite limit
Just as for sequences, it is often convenient to distinguish certain divergent sequences, and talk about limits being infinite almost as if the limits existed.

Definition 3.5.6. Let 𝑓 : 𝑆 → ℝ be a function and suppose 𝑆 has ∞ as a cluster point. We say 𝑓(𝑥) diverges to infinity as 𝑥 goes to ∞ if for every 𝑁 ∈ ℝ there exists an 𝑀 ∈ ℝ such that
𝑓(𝑥) > 𝑁
whenever 𝑥 ∈ 𝑆 and 𝑥 ≥ 𝑀. We write
lim_{𝑥→∞} 𝑓(𝑥) ≔ ∞,
or we say that 𝑓(𝑥) → ∞ as 𝑥 → ∞.
A similar definition can be made for limits as 𝑥 → −∞ or as 𝑥 → 𝑐 for a finite 𝑐. Also similar definitions can be made for limits being −∞. Stating these definitions is left as an exercise. Note that sometimes converges to infinity is used. We can again use sequential limits, and an analogue of Lemma 3.1.7 is left as an exercise.

Example 3.5.7: Let us show that lim_{𝑥→∞} (1 + 𝑥²)/(1 + 𝑥) = ∞.
Proof: For 𝑥 ≥ 1, we have
(1 + 𝑥²)/(1 + 𝑥) ≥ 𝑥²/(𝑥 + 𝑥) = 𝑥/2.
Given 𝑁 ∈ ℝ, take 𝑀 = max{2𝑁 + 1, 1}. If 𝑥 ≥ 𝑀, then 𝑥 ≥ 1 and 𝑥/2 > 𝑁. So
(1 + 𝑥²)/(1 + 𝑥) ≥ 𝑥/2 > 𝑁.
3.5.3 Compositions
Finally, just as for limits at finite numbers we can compose functions easily.

Proposition 3.5.8. Suppose 𝑓 : 𝐴 → 𝐵, 𝑔 : 𝐵 → ℝ, 𝐴, 𝐵 ⊂ ℝ, 𝑎 ∈ ℝ ∪ {−∞, ∞} is a cluster point of 𝐴, and 𝑏 ∈ ℝ ∪ {−∞, ∞} is a cluster point of 𝐵. Suppose
lim_{𝑥→𝑎} 𝑓(𝑥) = 𝑏   and   lim_{𝑦→𝑏} 𝑔(𝑦) = 𝑐
for some 𝑐 ∈ ℝ ∪ {−∞, ∞}. If 𝑏 ∈ 𝐵, then suppose 𝑔(𝑏) = 𝑐. Then
lim_{𝑥→𝑎} 𝑔(𝑓(𝑥)) = 𝑐.
The proof is straightforward, and left as an exercise. We already know the proposition when 𝑎, 𝑏, 𝑐 ∈ ℝ, see Exercises 3.1.9 and 3.1.14. Again the requirement that 𝑔 is continuous at 𝑏, if 𝑏 ∈ 𝐵, is necessary.

Example 3.5.9: Let ℎ(𝑥) ≔ 𝑒^{−𝑥²+𝑥}. Then
lim_{𝑥→∞} ℎ(𝑥) = 0.
Proof: The claim follows once we know
lim_{𝑥→∞} (−𝑥² + 𝑥) = −∞   and   lim_{𝑦→−∞} 𝑒^𝑦 = 0,
which is usually proved when the exponential function is defined.
3.5.4 Exercises
Exercise 3.5.1: Prove Proposition 3.5.2. Exercise 3.5.2: Let 𝑓 : [1, ∞) → ℝ be a function. Define 𝑔 : (0, 1] → ℝ via 𝑔(𝑥) B 𝑓 (1/𝑥 ). Using the definitions of limits directly, show that lim𝑥→0+ 𝑔(𝑥) exists if and only if lim𝑥→∞ 𝑓 (𝑥) exists, in which case they are equal. Exercise 3.5.3: Prove Proposition 3.5.8. Exercise 3.5.4: Let us justify terminology. Let 𝑓 : ℝ → ℝ be a function such that lim𝑥→∞ 𝑓 (𝑥) = ∞ (diverges to infinity). Show that 𝑓 (𝑥) diverges (i.e. does not converge) as 𝑥 → ∞. Exercise 3.5.5: Come up with the definitions for limits of 𝑓 (𝑥) going to −∞ as 𝑥 → ∞, 𝑥 → −∞, and as 𝑥 → 𝑐 for a finite 𝑐 ∈ ℝ. Then state the definitions for limits of 𝑓 (𝑥) going to ∞ as 𝑥 → −∞, and as 𝑥 → 𝑐 for a finite 𝑐 ∈ ℝ. Exercise 3.5.6: Suppose 𝑃(𝑥) B 𝑥 𝑛 + 𝑎 𝑛−1 𝑥 𝑛−1 + · · · + 𝑎 1 𝑥 + 𝑎 0 is a monic polynomial of degree 𝑛 ≥ 1 (monic means that the coefficient of 𝑥 𝑛 is 1). a) Show that if 𝑛 is even, then lim 𝑃(𝑥) = lim 𝑃(𝑥) = ∞. 𝑥→∞
𝑥→−∞
b) Show that if 𝑛 is odd, then lim 𝑃(𝑥) = ∞ and lim 𝑃(𝑥) = −∞ (see previous exercise). 𝑥→∞
𝑥→−∞
Exercise 3.5.7: Let {𝑥 𝑛 }∞ be a sequence. Consider 𝑆 B ℕ ⊂ ℝ, and 𝑓 : 𝑆 → ℝ defined by 𝑓 (𝑛) B 𝑥 𝑛 . 𝑛=1 Show that the two notions of limit, lim 𝑥 𝑛
𝑛→∞
lim 𝑓 (𝑥)
and
𝑥→∞
are equivalent. That is, show that if one exists so does the other one, and in this case they are equal. Exercise 3.5.8: Extend Lemma 3.5.5 as follows. Suppose 𝑆 ⊂ ℝ has a cluster point 𝑐 ∈ ℝ, 𝑐 = ∞, or 𝑐 = −∞. Let 𝑓 : 𝑆 → ℝ be a function and suppose 𝐿 = ∞ or 𝐿 = −∞. Show that lim 𝑓 (𝑥) = 𝐿
𝑥→𝑐
if and only if
lim 𝑓 (𝑥 𝑛 ) = 𝐿 for all sequences {𝑥 𝑛 }∞ 𝑛=1 such that lim 𝑥 𝑛 = 𝑐.
𝑛→∞
𝑛→∞
Exercise 3.5.9: Suppose 𝑓 : ℝ → ℝ is a 2-periodic function, that is 𝑓 (𝑥 + 2) = 𝑓 (𝑥) for all 𝑥. Define 𝑔 : ℝ → ℝ by ! √ 𝑥2 + 1 − 1 𝑔(𝑥) B 𝑓 𝑥 a) Find the function 𝜑 : (−1, 1) → ℝ such that 𝑔 𝜑(𝑡) = 𝑓 (𝑡), that is 𝜑 −1 (𝑥) =
b) Show that 𝑓 is continuous if and only if 𝑔 is continuous and
lim 𝑔(𝑥) = lim 𝑔(𝑥) = 𝑓 (1) = 𝑓 (−1).
𝑥→∞
𝑥→−∞
√
𝑥 2 +1−1 . 𝑥
3.6 Monotone functions and continuity
Note: 1 lecture (optional, can safely be omitted unless §4.4 is also covered, requires §3.5) Definition 3.6.1. Let 𝑆 ⊂ ℝ. We say 𝑓 : 𝑆 → ℝ is increasing (resp. strictly increasing) if 𝑥, 𝑦 ∈ 𝑆 with 𝑥 < 𝑦 implies 𝑓 (𝑥) ≤ 𝑓 (𝑦) (resp. 𝑓 (𝑥) < 𝑓 (𝑦)). We define decreasing and strictly decreasing in the same way by switching the inequalities for 𝑓 . If a function is either increasing or decreasing, we say it is monotone. If it is strictly increasing or strictly decreasing, we say it is strictly monotone. Sometimes nondecreasing (resp. nonincreasing) is used for increasing (resp. decreasing) function to emphasize it is not strictly increasing (resp. strictly decreasing). If 𝑓 is increasing, then − 𝑓 is decreasing and vice versa. Therefore, many results about monotone functions can just be proved for, say, increasing functions, and the results follow easily for decreasing functions.
3.6.1 Continuity of monotone functions
One-sided limits for monotone functions are computed by computing infima and suprema.

Proposition 3.6.2. Let 𝑆 ⊂ ℝ, 𝑐 ∈ ℝ, 𝑓 : 𝑆 → ℝ be increasing, and 𝑔 : 𝑆 → ℝ be decreasing. If 𝑐 is a cluster point of 𝑆 ∩ (−∞, 𝑐), then
lim_{𝑥→𝑐⁻} 𝑓(𝑥) = sup{𝑓(𝑥) : 𝑥 < 𝑐, 𝑥 ∈ 𝑆}   and   lim_{𝑥→𝑐⁻} 𝑔(𝑥) = inf{𝑔(𝑥) : 𝑥 < 𝑐, 𝑥 ∈ 𝑆}.
If 𝑐 is a cluster point of 𝑆 ∩ (𝑐, ∞), then
lim_{𝑥→𝑐⁺} 𝑓(𝑥) = inf{𝑓(𝑥) : 𝑥 > 𝑐, 𝑥 ∈ 𝑆}   and   lim_{𝑥→𝑐⁺} 𝑔(𝑥) = sup{𝑔(𝑥) : 𝑥 > 𝑐, 𝑥 ∈ 𝑆}.
If ∞ is a cluster point of 𝑆, then
lim_{𝑥→∞} 𝑓(𝑥) = sup{𝑓(𝑥) : 𝑥 ∈ 𝑆}   and   lim_{𝑥→∞} 𝑔(𝑥) = inf{𝑔(𝑥) : 𝑥 ∈ 𝑆}.
If −∞ is a cluster point of 𝑆, then
lim_{𝑥→−∞} 𝑓(𝑥) = inf{𝑓(𝑥) : 𝑥 ∈ 𝑆}   and   lim_{𝑥→−∞} 𝑔(𝑥) = sup{𝑔(𝑥) : 𝑥 ∈ 𝑆}.

Namely, all the one-sided limits exist whenever they make sense. For monotone functions therefore, when we say the left-hand limit 𝑥 → 𝑐⁻ exists, we mean that 𝑐 is a cluster point of 𝑆 ∩ (−∞, 𝑐), and same for the right-hand limit.
Proof. Let us assume 𝑓 is increasing, and we will show the first equality. The rest of the proof is very similar and is left as an exercise. Let 𝑎 B sup{ 𝑓 (𝑥) : 𝑥 < 𝑐, 𝑥 ∈ 𝑆}. If 𝑎 = ∞, then given an 𝑀 ∈ ℝ, there exists an 𝑥 𝑀 ∈ 𝑆, 𝑥 𝑀 < 𝑐, such that 𝑓 (𝑥 𝑀 ) > 𝑀. As 𝑓 is increasing, 𝑓 (𝑥) ≥ 𝑓 (𝑥 𝑀 ) > 𝑀 for all 𝑥 ∈ 𝑆 with 𝑥 > 𝑥 𝑀 . Take 𝛿 B 𝑐 − 𝑥 𝑀 > 0 to obtain the definition of the limit going to infinity.
150
CHAPTER 3. CONTINUOUS FUNCTIONS
Next suppose 𝑎 < ∞. Let 𝜖 > 0 be given. Because 𝑎 is the supremum and 𝑆 ∩ (−∞, 𝑐) is nonempty, 𝑎 ∈ ℝ and there exists an 𝑥_𝜖 ∈ 𝑆, 𝑥_𝜖 < 𝑐, such that 𝑓(𝑥_𝜖) > 𝑎 − 𝜖. As 𝑓 is increasing, if 𝑥 ∈ 𝑆 and 𝑥_𝜖 < 𝑥 < 𝑐, we have 𝑎 − 𝜖 < 𝑓(𝑥_𝜖) ≤ 𝑓(𝑥) ≤ 𝑎. Let 𝛿 ≔ 𝑐 − 𝑥_𝜖. Then for 𝑥 ∈ 𝑆 ∩ (−∞, 𝑐) with |𝑥 − 𝑐| < 𝛿, we have |𝑓(𝑥) − 𝑎| < 𝜖. □

Suppose 𝑓 : 𝑆 → ℝ is increasing, 𝑐 ∈ 𝑆, and that both one-sided limits exist. Since 𝑓(𝑥) ≤ 𝑓(𝑐) ≤ 𝑓(𝑦) whenever 𝑥 < 𝑐 < 𝑦, taking the limits we obtain
lim_{𝑥→𝑐⁻} 𝑓(𝑥) ≤ 𝑓(𝑐) ≤ lim_{𝑥→𝑐⁺} 𝑓(𝑥).
Then 𝑓 is continuous at 𝑐 if and only if both limits are equal to each other (and hence equal to 𝑓(𝑐)). See also Proposition 3.1.17. See Figure 3.11 to get an idea of what a discontinuity looks like.

Corollary 3.6.3. If 𝐼 ⊂ ℝ is an interval and 𝑓 : 𝐼 → ℝ is monotone and not constant, then 𝑓(𝐼) is an interval if and only if 𝑓 is continuous.

Assuming 𝑓 is not constant is to avoid the technicality that 𝑓(𝐼) is a single point: 𝑓(𝐼) is a single point if and only if 𝑓 is constant. A constant function is continuous.

Proof. Without loss of generality, suppose 𝑓 is increasing. First suppose 𝑓 is continuous. Take two points 𝑓(𝑥₁), 𝑓(𝑥₂) in 𝑓(𝐼) and suppose 𝑓(𝑥₁) < 𝑓(𝑥₂). As 𝑓 is increasing, then 𝑥₁ < 𝑥₂. By the intermediate value theorem, given 𝑦 with 𝑓(𝑥₁) < 𝑦 < 𝑓(𝑥₂), we find a 𝑐 ∈ (𝑥₁, 𝑥₂) ⊂ 𝐼 such that 𝑓(𝑐) = 𝑦, so 𝑦 ∈ 𝑓(𝐼). Hence, 𝑓(𝐼) is an interval.
Let us prove the reverse direction by contrapositive. Suppose 𝑓 is not continuous at 𝑐 ∈ 𝐼, and that 𝑐 is not an endpoint of 𝐼. Let
𝑎 ≔ lim_{𝑥→𝑐⁻} 𝑓(𝑥) = sup{𝑓(𝑥) : 𝑥 ∈ 𝐼, 𝑥 < 𝑐},   𝑏 ≔ lim_{𝑥→𝑐⁺} 𝑓(𝑥) = inf{𝑓(𝑥) : 𝑥 ∈ 𝐼, 𝑥 > 𝑐}.
As 𝑐 is a discontinuity, 𝑎 < 𝑏. If 𝑥 < 𝑐, then 𝑓(𝑥) ≤ 𝑎, and if 𝑥 > 𝑐, then 𝑓(𝑥) ≥ 𝑏. Therefore no point in (𝑎, 𝑏) \ {𝑓(𝑐)} is in 𝑓(𝐼). There exists 𝑥₁ ∈ 𝐼 with 𝑥₁ < 𝑐, so 𝑓(𝑥₁) ≤ 𝑎, and there exists 𝑥₂ ∈ 𝐼 with 𝑥₂ > 𝑐, so 𝑓(𝑥₂) ≥ 𝑏. Both 𝑓(𝑥₁) and 𝑓(𝑥₂) are in 𝑓(𝐼), but there are points in between them that are not in 𝑓(𝐼). So 𝑓(𝐼) is not an interval. See Figure 3.11. When 𝑐 ∈ 𝐼 is an endpoint, the proof is similar and is left as an exercise. □
A striking property of monotone functions is that they cannot have too many discontinuities.

Corollary 3.6.4. Let 𝐼 ⊂ ℝ be an interval and 𝑓 : 𝐼 → ℝ be monotone. Then 𝑓 has at most countably many discontinuities.

Proof. Let 𝐸 ⊂ 𝐼 be the set of all discontinuities that are not endpoints of 𝐼. As there are only two endpoints, it is enough to show that 𝐸 is countable. Without loss of generality, suppose 𝑓 is increasing. We will define an injection ℎ : 𝐸 → ℚ. For each 𝑐 ∈ 𝐸, both one-sided limits of 𝑓 exist as 𝑐 is not an endpoint. Let
𝑎 ≔ lim_{𝑥→𝑐⁻} 𝑓(𝑥) = sup{𝑓(𝑥) : 𝑥 ∈ 𝐼, 𝑥 < 𝑐},   𝑏 ≔ lim_{𝑥→𝑐⁺} 𝑓(𝑥) = inf{𝑓(𝑥) : 𝑥 ∈ 𝐼, 𝑥 > 𝑐}.
Figure 3.11: Increasing function 𝑓 : 𝐼 → ℝ with a discontinuity at 𝑐.
As 𝑐 is a discontinuity, 𝑎 < 𝑏. There exists a rational number 𝑞 ∈ (𝑎, 𝑏), so let ℎ(𝑐) ≔ 𝑞. Suppose 𝑑 ∈ 𝐸 is another discontinuity. If 𝑑 > 𝑐, there exists an 𝑥 ∈ 𝐼 with 𝑐 < 𝑥 < 𝑑, and so lim_{𝑥→𝑑⁻} 𝑓(𝑥) ≥ 𝑏. Hence the rational number we choose for ℎ(𝑑) is different from 𝑞, since 𝑞 = ℎ(𝑐) < 𝑏 and ℎ(𝑑) > 𝑏. Similarly if 𝑑 < 𝑐. After making such a choice for every element of 𝐸, we have a one-to-one (injective) function into ℚ. Therefore, 𝐸 is countable. □

Example 3.6.5: By ⌊𝑥⌋ denote the largest integer less than or equal to 𝑥. Define 𝑓 : [0, 1] → ℝ by
𝑓(𝑥) ≔ 𝑥 + ∑_{𝑛=0}^{⌊1/(1−𝑥)⌋} 2^{−𝑛}
for 𝑥 < 1 and 𝑓(1) ≔ 3. It is an exercise to show that 𝑓 is strictly increasing, bounded, and has a discontinuity at all points 1 − 1/𝑘 for 𝑘 ∈ ℕ. In particular, there are countably many discontinuities, but the function is bounded and defined on a closed bounded interval. See Figure 3.12.

Figure 3.12: Strictly increasing function on [0, 1] with countably many discontinuities.
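The sketch below implements the function of Example 3.6.5 as reconstructed above (the implementation and the sampling offsets are ours) and samples it just to the left of, and at, the points 1 − 1/𝑘; the printed gaps are roughly 2^{−𝑘}, which is where the jump discontinuities sit.

```python
# Hedged sketch of the function from Example 3.6.5 (floor-sum formula as reconstructed above).
import math

def f(x):
    if x == 1:
        return 3.0
    top = math.floor(1 / (1 - x))
    return x + sum(2.0**(-n) for n in range(0, top + 1))

for k in [2, 3, 4, 5]:
    c = 1 - 1/k
    left, right = f(c - 1e-9), f(c)
    print(f"jump near x = 1 - 1/{k}: about {right - left:.4f}")   # roughly 2**(-k)
```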
Similarly, one can find an example of a monotone function discontinuous on a dense set such as the rational numbers. See the exercises.
3.6.2 Continuity of inverse functions
A strictly monotone function 𝑓 is one-to-one (injective). To see this fact, notice that if 𝑥 ≠ 𝑦, then we can assume 𝑥 < 𝑦. Either 𝑓 (𝑥) < 𝑓 (𝑦) if 𝑓 is strictly increasing or 𝑓 (𝑥) > 𝑓 (𝑦) if 𝑓 is strictly decreasing, so 𝑓 (𝑥) ≠ 𝑓 (𝑦). Hence, 𝑓 must have an inverse 𝑓 −1 defined on its range. Proposition 3.6.6. If 𝐼 ⊂ ℝ is an interval and 𝑓 : 𝐼 → ℝ is strictly monotone, then the inverse 𝑓 −1 : 𝑓 (𝐼) → 𝐼 is continuous.
Proof. Let us suppose 𝑓 is strictly increasing. The proof is almost identical for a strictly decreasing function. Since 𝑓 is strictly increasing, so is 𝑓⁻¹. That is, if 𝑓(𝑥) < 𝑓(𝑦), then we must have 𝑥 < 𝑦 and therefore 𝑓⁻¹(𝑓(𝑥)) < 𝑓⁻¹(𝑓(𝑦)).
Take 𝑐 ∈ 𝑓(𝐼). If 𝑐 is not a cluster point of 𝑓(𝐼), then 𝑓⁻¹ is continuous at 𝑐 automatically. So let 𝑐 be a cluster point of 𝑓(𝐼). Suppose both of the following one-sided limits exist:
𝑥₀ ≔ lim_{𝑦→𝑐⁻} 𝑓⁻¹(𝑦) = sup{𝑓⁻¹(𝑦) : 𝑦 < 𝑐, 𝑦 ∈ 𝑓(𝐼)} = sup{𝑥 ∈ 𝐼 : 𝑓(𝑥) < 𝑐},
𝑥₁ ≔ lim_{𝑦→𝑐⁺} 𝑓⁻¹(𝑦) = inf{𝑓⁻¹(𝑦) : 𝑦 > 𝑐, 𝑦 ∈ 𝑓(𝐼)} = inf{𝑥 ∈ 𝐼 : 𝑓(𝑥) > 𝑐}.
We have 𝑥₀ ≤ 𝑥₁ as 𝑓⁻¹ is increasing. For all 𝑥 ∈ 𝐼 where 𝑥 > 𝑥₀, we have 𝑓(𝑥) ≥ 𝑐. As 𝑓 is strictly increasing, we must have 𝑓(𝑥) > 𝑐 for all 𝑥 ∈ 𝐼 where 𝑥 > 𝑥₀. Therefore,
{𝑥 ∈ 𝐼 : 𝑥 > 𝑥₀} ⊂ {𝑥 ∈ 𝐼 : 𝑓(𝑥) > 𝑐}.
The infimum of the left-hand set is 𝑥₀, and the infimum of the right-hand set is 𝑥₁, so we obtain 𝑥₀ ≥ 𝑥₁. So 𝑥₁ = 𝑥₀, and 𝑓⁻¹ is continuous at 𝑐. If one of the one-sided limits does not exist, the argument is similar and is left as an exercise. □

Example 3.6.7: The proposition does not require 𝑓 itself to be continuous. Let 𝑓 : ℝ → ℝ be defined by
𝑓(𝑥) ≔ 𝑥 if 𝑥 < 0,   and   𝑓(𝑥) ≔ 𝑥 + 1 if 𝑥 ≥ 0.
The function 𝑓 is not continuous at 0. The image of 𝐼 = ℝ is the set (−∞, 0) ∪ [1, ∞), not an interval. Then 𝑓⁻¹ : (−∞, 0) ∪ [1, ∞) → ℝ can be written as
𝑓⁻¹(𝑦) = 𝑦 if 𝑦 < 0,   and   𝑓⁻¹(𝑦) = 𝑦 − 1 if 𝑦 ≥ 1.
It is not difficult to see that 𝑓⁻¹ is a continuous function. See Figure 3.13 for the graphs.
Notice what happens with the proposition if 𝑓(𝐼) is an interval. In that case, we could simply apply Corollary 3.6.3 to both 𝑓 and 𝑓⁻¹. That is, if 𝑓 : 𝐼 → 𝐽 is an onto strictly monotone function and 𝐼 and 𝐽 are intervals, then both 𝑓 and 𝑓⁻¹ are continuous. Furthermore, 𝑓(𝐼) is an interval precisely when 𝑓 is continuous.
Figure 3.13: Graph of 𝑓 on the left and 𝑓 −1 on the right.
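A short hedged sketch of Example 3.6.7 (the function definitions mirror the example; the sample points are ours): even though 𝑓 jumps at 0, composing the inverse formula with 𝑓 recovers every sample point, consistent with 𝑓⁻¹ being continuous on the image (−∞, 0) ∪ [1, ∞).

```python
# Illustrative sketch of Example 3.6.7: a discontinuous f with a continuous inverse.
def f(x):
    return x if x < 0 else x + 1

def f_inv(y):
    return y if y < 0 else y - 1     # defined for y < 0 or y >= 1 (the image of f)

xs = [-2.0, -0.5, 0.0, 0.5, 2.0]
print([f_inv(f(x)) for x in xs])     # recovers the original points
print(f(0.0) - f(-1e-12))            # ~1.0: the jump of f at 0
```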
3.6.3 Exercises
Exercise 3.6.1: Suppose 𝑓 : [0, 1] → ℝ is monotone. Prove 𝑓 is bounded. Exercise 3.6.2: Finish the proof of Proposition 3.6.2. Hint: You can halve your work by noticing that if 𝑔 is decreasing, then −𝑔 is increasing. Exercise 3.6.3: Finish the proof of Corollary 3.6.3. Exercise 3.6.4: Prove the claims in Example 3.6.5. Exercise 3.6.5: Finish the proof of Proposition 3.6.6. Exercise 3.6.6: Suppose 𝑆 ⊂ ℝ, and 𝑓 : 𝑆 → ℝ is an increasing function. Prove: a) If 𝑐 is a cluster point of 𝑆 ∩ (𝑐, ∞), then lim+ 𝑓 (𝑥) < ∞. 𝑥→𝑐
b) If 𝑐 is a cluster point of 𝑆 ∩ (−∞, 𝑐) and lim− 𝑓 (𝑥) = ∞, then 𝑆 ⊂ (−∞, 𝑐). 𝑥→𝑐
Exercise 3.6.7: Let 𝐼 ⊂ ℝ be an interval and 𝑓 : 𝐼 → ℝ a function. Suppose that for each 𝑐 ∈ 𝐼, there exist 𝑎, 𝑏 ∈ ℝ with 𝑎 > 0 such that 𝑓 (𝑥) ≥ 𝑎𝑥 + 𝑏 for all 𝑥 ∈ 𝐼 and 𝑓 (𝑐) = 𝑎𝑐 + 𝑏. Show that 𝑓 is strictly increasing. Exercise 3.6.8: Suppose 𝐼 and 𝐽 are intervals and 𝑓 : 𝐼 → 𝐽 is a continuous, bijective (one-to-one and onto) function. Show that 𝑓 is strictly monotone. Exercise 3.6.9: Consider a monotone function 𝑓 : 𝐼 → ℝ on an interval 𝐼. Prove that there exists a function 𝑔 : 𝐼 → ℝ such that lim− 𝑔(𝑥) = 𝑔(𝑐) for all 𝑐 in 𝐼 except the smaller (left) endpoint of 𝐼, and such that 𝑥→𝑐
𝑔(𝑥) = 𝑓 (𝑥) for all but countably many 𝑥 ∈ 𝐼.
Exercise 3.6.10: a) Let 𝑆 ⊂ ℝ be a subset. If 𝑓 : 𝑆 → ℝ is increasing and bounded, then show that there exists an increasing 𝐹 : ℝ → ℝ such that 𝑓 (𝑥) = 𝐹(𝑥) for all 𝑥 ∈ 𝑆.
b) Find an example of a strictly increasing bounded 𝑓 : 𝑆 → ℝ such that an increasing 𝐹 as above is never strictly increasing.
Exercise 3.6.11 (Challenging): Find an example of an increasing function 𝑓 : [0, 1] → ℝ that has a discontinuity at each rational number. Then show that the image 𝑓 [0, 1] contains no interval. Hint: Enumerate the rational numbers and define the function with a series. Exercise 3.6.12: Suppose 𝐼 is an interval and 𝑓 : 𝐼 → ℝ is monotone. Show that ℝ \ 𝑓 (𝐼) is a countable union of disjoint intervals. Exercise 3.6.13: Suppose 𝑓 : [0, 1] → (0, 1) is increasing. Show that for every 𝜖 > 0, there exists a strictly increasing 𝑔 : [0, 1] → (0, 1) such that 𝑔(0) = 𝑓 (0), 𝑓 (𝑥) ≤ 𝑔(𝑥) for all 𝑥, and 𝑔(1) − 𝑓 (1) < 𝜖. Exercise 3.6.14: Prove that the Dirichlet function 𝑓 : [0, 1] → ℝ, defined by 𝑓 (𝑥) B 1 if 𝑥 is rational and 𝑓 (𝑥) B 0 otherwise, cannot be written as a difference of two increasing functions. That is, there do not exist increasing 𝑔 and ℎ such that, 𝑓 (𝑥) = 𝑔(𝑥) − ℎ(𝑥). Exercise 3.6.15: Suppose 𝑓 : (𝑎, 𝑏) → (𝑐, 𝑑) is a strictly increasing onto function. Prove that there exists a 𝑔 : (𝑎, 𝑏) → (𝑐, 𝑑), which is also strictly increasing and onto, and 𝑔(𝑥) < 𝑓 (𝑥) for all 𝑥 ∈ (𝑎, 𝑏).
Chapter 4

The Derivative

4.1 The derivative

Note: 1 lecture

The idea of a derivative is the following. If the graph of a function looks locally like a straight line, then we can talk about the slope of this line. The slope tells us the rate at which the value of the function is changing at that particular point. Of course, we are leaving out any function that has corners or discontinuities. Let us be precise.
4.1.1 Definition and basic properties
Definition 4.1.1. Let 𝐼 be an interval, let 𝑓 : 𝐼 → ℝ be a function, and let 𝑐 ∈ 𝐼. If the limit
𝐿 ≔ lim_{𝑥→𝑐} (𝑓(𝑥) − 𝑓(𝑐))/(𝑥 − 𝑐)
exists, then we say 𝑓 is differentiable at 𝑐, we call 𝐿 the derivative of 𝑓 at 𝑐, and we write 𝑓′(𝑐) ≔ 𝐿.
If 𝑓 is differentiable at all 𝑐 ∈ 𝐼, then we simply say that 𝑓 is differentiable, and then we obtain a function 𝑓′ : 𝐼 → ℝ. The derivative is sometimes written as 𝑑𝑓/𝑑𝑥 or (𝑑/𝑑𝑥)(𝑓(𝑥)). The expression (𝑓(𝑥) − 𝑓(𝑐))/(𝑥 − 𝑐) is called the difference quotient.

The graphical interpretation of the derivative is depicted in Figure 4.1. The left-hand plot gives the line through (𝑐, 𝑓(𝑐)) and (𝑥, 𝑓(𝑥)) with slope (𝑓(𝑥) − 𝑓(𝑐))/(𝑥 − 𝑐), that is, the so-called secant line. When we take the limit as 𝑥 goes to 𝑐, we get the right-hand plot, where we see that the derivative of the function at the point 𝑐 is the slope of the line tangent to the graph of 𝑓 at the point (𝑐, 𝑓(𝑐)).
We allow 𝐼 to be a closed interval and we allow 𝑐 to be an endpoint of 𝐼. Some calculus books do not allow 𝑐 to be an endpoint of an interval, but all the theory still works by allowing it, and it makes our work easier.
Figure 4.1: Graphical interpretation of the derivative.
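As a hedged numerical companion to Definition 4.1.1 (the function, the point 𝑐, and the step sizes below are our own choices), the sketch evaluates difference quotients of 𝑓(𝑥) = 𝑥² at 𝑐 = 1.5 and watches them approach the derivative value 2𝑐 = 3.

```python
# Illustrative sketch: difference quotients approaching the derivative.
def diff_quotient(f, c, h):
    return (f(c + h) - f(c)) / h

f = lambda x: x * x
c = 1.5
for h in [1e-1, 1e-3, 1e-5, 1e-7]:
    print(h, diff_quotient(f, c, h))   # tends to 3.0 = 2*c
```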
Example 4.1.2: Let 𝑓(𝑥) ≔ 𝑥² defined on the whole real line. Let 𝑐 ∈ ℝ be arbitrary. We find that if 𝑥 ≠ 𝑐,
(𝑥² − 𝑐²)/(𝑥 − 𝑐) = ((𝑥 + 𝑐)(𝑥 − 𝑐))/(𝑥 − 𝑐) = 𝑥 + 𝑐.
Therefore,
𝑓′(𝑐) = lim_{𝑥→𝑐} (𝑥² − 𝑐²)/(𝑥 − 𝑐) = lim_{𝑥→𝑐} (𝑥 + 𝑐) = 2𝑐.

Example 4.1.3: Let 𝑓(𝑥) ≔ 𝑎𝑥 + 𝑏 for numbers 𝑎, 𝑏 ∈ ℝ. Let 𝑐 ∈ ℝ be arbitrary. For 𝑥 ≠ 𝑐,
(𝑓(𝑥) − 𝑓(𝑐))/(𝑥 − 𝑐) = (𝑎(𝑥 − 𝑐))/(𝑥 − 𝑐) = 𝑎.
Therefore,
𝑓′(𝑐) = lim_{𝑥→𝑐} (𝑓(𝑥) − 𝑓(𝑐))/(𝑥 − 𝑐) = lim_{𝑥→𝑐} 𝑎 = 𝑎.

In fact, every differentiable function “infinitesimally” behaves like the affine function 𝑎𝑥 + 𝑏. You can guess many results and formulas for derivatives, if you work them out for affine functions first.

Example 4.1.4: The function 𝑓(𝑥) ≔ √𝑥 is differentiable for 𝑥 > 0. To see this fact, fix 𝑐 > 0, and suppose 𝑥 ≠ 𝑐 and 𝑥 > 0. Compute
(√𝑥 − √𝑐)/(𝑥 − 𝑐) = (√𝑥 − √𝑐)/((√𝑥 − √𝑐)(√𝑥 + √𝑐)) = 1/(√𝑥 + √𝑐).
Therefore,
𝑓′(𝑐) = lim_{𝑥→𝑐} (√𝑥 − √𝑐)/(𝑥 − 𝑐) = lim_{𝑥→𝑐} 1/(√𝑥 + √𝑐) = 1/(2√𝑐).
Example 4.1.5: The function 𝑓(𝑥) ≔ |𝑥| is not differentiable at the origin. When 𝑥 > 0,
(|𝑥| − |0|)/(𝑥 − 0) = (𝑥 − 0)/(𝑥 − 0) = 1.
When 𝑥 < 0,
(|𝑥| − |0|)/(𝑥 − 0) = (−𝑥 − 0)/(𝑥 − 0) = −1.

A famous example of Weierstrass shows that there exists a continuous function that is not differentiable at any point. The construction of this function is beyond the scope of this chapter. On the other hand, a differentiable function is always continuous.

Proposition 4.1.6. Let 𝑓 : 𝐼 → ℝ be differentiable at 𝑐 ∈ 𝐼, then it is continuous at 𝑐.

Proof. We know the limits
lim_{𝑥→𝑐} (𝑓(𝑥) − 𝑓(𝑐))/(𝑥 − 𝑐) = 𝑓′(𝑐)   and   lim_{𝑥→𝑐} (𝑥 − 𝑐) = 0
exist. Furthermore,
𝑓(𝑥) − 𝑓(𝑐) = ((𝑓(𝑥) − 𝑓(𝑐))/(𝑥 − 𝑐)) (𝑥 − 𝑐).
Therefore, the limit of 𝑓(𝑥) − 𝑓(𝑐) exists and
lim_{𝑥→𝑐} (𝑓(𝑥) − 𝑓(𝑐)) = (lim_{𝑥→𝑐} (𝑓(𝑥) − 𝑓(𝑐))/(𝑥 − 𝑐)) (lim_{𝑥→𝑐} (𝑥 − 𝑐)) = 𝑓′(𝑐) · 0 = 0.
Hence lim_{𝑥→𝑐} 𝑓(𝑥) = 𝑓(𝑐), and 𝑓 is continuous at 𝑐. □
An important property of the derivative is linearity. The derivative is the approximation of a function by a straight line. The slope of a line through two points changes linearly when the 𝑦-coordinates are changed linearly. By taking the limit, it makes sense that the derivative is linear. Proposition 4.1.7 (Linearity). Let 𝐼 be an interval, let 𝑓 : 𝐼 → ℝ and 𝑔 : 𝐼 → ℝ be differentiable at 𝑐 ∈ 𝐼, and let 𝛼 ∈ ℝ. (i) Define ℎ : 𝐼 → ℝ by ℎ(𝑥) B 𝛼 𝑓 (𝑥). Then ℎ is differentiable at 𝑐 and ℎ ′(𝑐) = 𝛼 𝑓 ′(𝑐).
(ii) Define ℎ : 𝐼 → ℝ by ℎ(𝑥) B 𝑓 (𝑥) + 𝑔(𝑥). Then ℎ is differentiable at 𝑐 and ℎ ′(𝑐) = 𝑓 ′(𝑐) + 𝑔 ′(𝑐).
Proof. First, let ℎ(𝑥) ≔ 𝛼𝑓(𝑥). For 𝑥 ∈ 𝐼, 𝑥 ≠ 𝑐,
(ℎ(𝑥) − ℎ(𝑐))/(𝑥 − 𝑐) = (𝛼𝑓(𝑥) − 𝛼𝑓(𝑐))/(𝑥 − 𝑐) = 𝛼 (𝑓(𝑥) − 𝑓(𝑐))/(𝑥 − 𝑐).
The limit as 𝑥 goes to 𝑐 exists on the right-hand side by Corollary 3.1.12. We get
lim_{𝑥→𝑐} (ℎ(𝑥) − ℎ(𝑐))/(𝑥 − 𝑐) = 𝛼 lim_{𝑥→𝑐} (𝑓(𝑥) − 𝑓(𝑐))/(𝑥 − 𝑐).
Therefore, ℎ is differentiable at 𝑐, and the derivative is computed as given.
Next, define ℎ(𝑥) ≔ 𝑓(𝑥) + 𝑔(𝑥). For 𝑥 ∈ 𝐼, 𝑥 ≠ 𝑐, we have
(ℎ(𝑥) − ℎ(𝑐))/(𝑥 − 𝑐) = ((𝑓(𝑥) + 𝑔(𝑥)) − (𝑓(𝑐) + 𝑔(𝑐)))/(𝑥 − 𝑐) = (𝑓(𝑥) − 𝑓(𝑐))/(𝑥 − 𝑐) + (𝑔(𝑥) − 𝑔(𝑐))/(𝑥 − 𝑐).
The limit as 𝑥 goes to 𝑐 exists on the right-hand side by Corollary 3.1.12. We get
lim_{𝑥→𝑐} (ℎ(𝑥) − ℎ(𝑐))/(𝑥 − 𝑐) = lim_{𝑥→𝑐} (𝑓(𝑥) − 𝑓(𝑐))/(𝑥 − 𝑐) + lim_{𝑥→𝑐} (𝑔(𝑥) − 𝑔(𝑐))/(𝑥 − 𝑐).
Therefore, ℎ is differentiable at 𝑐, and the derivative is computed as given.
□
It is not true that the derivative of a product of two functions is the product of the derivatives. Instead we get the so-called product rule or the Leibniz rule*.

Proposition 4.1.8 (Product rule). Let 𝐼 be an interval, let 𝑓 : 𝐼 → ℝ and 𝑔 : 𝐼 → ℝ be functions differentiable at 𝑐. If ℎ : 𝐼 → ℝ is defined by ℎ(𝑥) ≔ 𝑓(𝑥)𝑔(𝑥), then ℎ is differentiable at 𝑐 and
ℎ′(𝑐) = 𝑓(𝑐)𝑔′(𝑐) + 𝑓′(𝑐)𝑔(𝑐).

The proof of the product rule is left as an exercise. The key to the proof is the identity
𝑓(𝑥)𝑔(𝑥) − 𝑓(𝑐)𝑔(𝑐) = 𝑓(𝑥)(𝑔(𝑥) − 𝑔(𝑐)) + (𝑓(𝑥) − 𝑓(𝑐))𝑔(𝑐),
which is illustrated in Figure 4.2.
Figure 4.2: The idea of product rule. The area of the entire rectangle 𝑓(𝑥)𝑔(𝑥) differs from the area of the white rectangle 𝑓(𝑐)𝑔(𝑐) by the area of the lightly shaded rectangle 𝑓(𝑥)(𝑔(𝑥) − 𝑔(𝑐)) plus the darker rectangle (𝑓(𝑥) − 𝑓(𝑐))𝑔(𝑐). In other words, Δ(𝑓 · 𝑔) = 𝑓 · Δ𝑔 + Δ𝑓 · 𝑔.

* Named for the German mathematician Gottfried Wilhelm Leibniz (1646–1716).
Proposition 4.1.9 (Quotient rule). Let 𝐼 be an interval, let 𝑓 : 𝐼 → ℝ and 𝑔 : 𝐼 → ℝ be differentiable at 𝑐 and 𝑔(𝑥) ≠ 0 for all 𝑥 ∈ 𝐼. If ℎ : 𝐼 → ℝ is defined by
ℎ(𝑥) ≔ 𝑓(𝑥)/𝑔(𝑥),
then ℎ is differentiable at 𝑐 and
ℎ′(𝑐) = (𝑓′(𝑐)𝑔(𝑐) − 𝑓(𝑐)𝑔′(𝑐)) / (𝑔(𝑐))².
Again, the proof is left as an exercise.
4.1.2 Chain rule
More complicated functions are often obtained by composition, which is differentiated via the chain rule. The rule also tells us how a derivative changes if we change variables.

Proposition 4.1.10 (Chain rule). Let 𝐼₁, 𝐼₂ be intervals, let 𝑔 : 𝐼₁ → 𝐼₂ be differentiable at 𝑐 ∈ 𝐼₁, and 𝑓 : 𝐼₂ → ℝ be differentiable at 𝑔(𝑐). If ℎ : 𝐼₁ → ℝ is defined by
ℎ(𝑥) ≔ (𝑓 ∘ 𝑔)(𝑥) = 𝑓(𝑔(𝑥)),
then ℎ is differentiable at 𝑐 and
ℎ′(𝑐) = 𝑓′(𝑔(𝑐)) 𝑔′(𝑐).

Proof. Let 𝑑 ≔ 𝑔(𝑐). Define 𝑢 : 𝐼₂ → ℝ and 𝑣 : 𝐼₁ → ℝ by
𝑢(𝑦) ≔ (𝑓(𝑦) − 𝑓(𝑑))/(𝑦 − 𝑑) if 𝑦 ≠ 𝑑,   and   𝑢(𝑦) ≔ 𝑓′(𝑑) if 𝑦 = 𝑑,
𝑣(𝑥) ≔ (𝑔(𝑥) − 𝑔(𝑐))/(𝑥 − 𝑐) if 𝑥 ≠ 𝑐,   and   𝑣(𝑥) ≔ 𝑔′(𝑐) if 𝑥 = 𝑐.
Because 𝑓 is differentiable at 𝑑 = 𝑔(𝑐), we find that 𝑢 is continuous at 𝑑. Similarly, 𝑣 is continuous at 𝑐. For any 𝑥 and 𝑦,
𝑓(𝑦) − 𝑓(𝑑) = 𝑢(𝑦)(𝑦 − 𝑑)   and   𝑔(𝑥) − 𝑔(𝑐) = 𝑣(𝑥)(𝑥 − 𝑐).
Plug in to obtain
ℎ(𝑥) − ℎ(𝑐) = 𝑓(𝑔(𝑥)) − 𝑓(𝑔(𝑐)) = 𝑢(𝑔(𝑥))(𝑔(𝑥) − 𝑔(𝑐)) = 𝑢(𝑔(𝑥))(𝑣(𝑥)(𝑥 − 𝑐)).
Therefore, if 𝑥 ≠ 𝑐,
(ℎ(𝑥) − ℎ(𝑐))/(𝑥 − 𝑐) = 𝑢(𝑔(𝑥)) 𝑣(𝑥).     (4.1)
By continuity of 𝑢 and 𝑣 at 𝑑 and 𝑐 respectively, we find lim_{𝑦→𝑑} 𝑢(𝑦) = 𝑓′(𝑑) = 𝑓′(𝑔(𝑐)) and lim_{𝑥→𝑐} 𝑣(𝑥) = 𝑔′(𝑐). The function 𝑔 is continuous at 𝑐, and so lim_{𝑥→𝑐} 𝑔(𝑥) = 𝑔(𝑐). Hence the limit of the right-hand side of (4.1) as 𝑥 goes to 𝑐 exists and is equal to 𝑓′(𝑔(𝑐)) 𝑔′(𝑐). Thus ℎ is differentiable at 𝑐 and ℎ′(𝑐) = 𝑓′(𝑔(𝑐)) 𝑔′(𝑐). □
4.1.3 Exercises
Exercise 4.1.1: Prove the product rule. Hint: Prove and use 𝑓 (𝑥)𝑔(𝑥) − 𝑓 (𝑐)𝑔(𝑐) = 𝑓 (𝑥) 𝑔(𝑥) − 𝑔(𝑐) + 𝑓 (𝑥) − 𝑓 (𝑐) 𝑔(𝑐).
Exercise 4.1.2: Prove the quotient rule. Hint: You can do this directly, but it may be easier to find the derivative of 1/𝑥 and then use the chain rule and the product rule. Exercise 4.1.3: For 𝑛 ∈ ℤ, prove that 𝑥 𝑛 is differentiable and find the derivative, unless, of course, 𝑛 < 0 and 𝑥 = 0. Hint: Use the product rule. Exercise 4.1.4: Prove that a polynomial is differentiable and find the derivative. Hint: Use the previous exercise. Exercise 4.1.5: Define 𝑓 : ℝ → ℝ by
𝑓(𝑥) ≔ 𝑥² if 𝑥 ∈ ℚ,   and   𝑓(𝑥) ≔ 0 otherwise.
Prove that 𝑓 is differentiable at 0, but discontinuous at all points except 0. Exercise 4.1.6: Assume the inequality |𝑥 − sin(𝑥)| ≤ 𝑥 2 . Prove that sin is differentiable at 0, and find the derivative at 0. Exercise 4.1.7: Using the previous exercise, prove that sin is differentiable at all 𝑥 and that the derivative is cos(𝑥). Hint: Use the sum-to-product trigonometric identity as we did before.
Exercise 4.1.8: Let 𝑓 : 𝐼 → ℝ be differentiable. For 𝑛 ∈ ℤ, let 𝑓ⁿ be the function defined by 𝑓ⁿ(𝑥) ≔ (𝑓(𝑥))ⁿ. If 𝑛 < 0, assume 𝑓(𝑥) ≠ 0 for all 𝑥 ∈ 𝐼. Prove that
(𝑓ⁿ)′(𝑥) = 𝑛 (𝑓(𝑥))^{𝑛−1} 𝑓′(𝑥).
Exercise 4.1.9: Suppose 𝑓 : ℝ → ℝ is a differentiable Lipschitz continuous function. Prove that 𝑓 ′ is a bounded function. Exercise 4.1.10: Let 𝐼1 , 𝐼2 be intervals. Let 𝑓 : 𝐼1 → 𝐼2 be a bijective function and 𝑔 : 𝐼2 → 𝐼1 be the inverse. Suppose that both 𝑓 is differentiable at 𝑐 ∈ 𝐼1 and 𝑓 ′(𝑐) ≠ 0 and 𝑔 is differentiable at 𝑓 (𝑐). Use the chain rule to find a formula for 𝑔 ′ 𝑓 (𝑐) (in terms of 𝑓 ′(𝑐)). Exercise 4.1.11: Suppose 𝑓 : 𝐼 → ℝ is bounded, 𝑔 : 𝐼 → ℝ is differentiable at 𝑐 ∈ 𝐼, and 𝑔(𝑐) = 𝑔 ′(𝑐) = 0. Show that ℎ(𝑥) B 𝑓 (𝑥)𝑔(𝑥) is differentiable at 𝑐. Hint: You cannot apply the product rule. Exercise 4.1.12: Suppose 𝑓 : 𝐼 → ℝ, 𝑔 : 𝐼 → ℝ, and ℎ : 𝐼 → ℝ, are functions. Suppose 𝑐 ∈ 𝐼 is such that 𝑓 (𝑐) = 𝑔(𝑐) = ℎ(𝑐), 𝑔 and ℎ are differentiable at 𝑐, and 𝑔 ′(𝑐) = ℎ ′(𝑐). Furthermore suppose ℎ(𝑥) ≤ 𝑓 (𝑥) ≤ 𝑔(𝑥) for all 𝑥 ∈ 𝐼. Prove 𝑓 is differentiable at 𝑐 and 𝑓 ′(𝑐) = 𝑔 ′(𝑐) = ℎ ′(𝑐). Exercise 4.1.13: Suppose 𝑓 : (−1, 1) → ℝ is a function such that 𝑓 (𝑥) = 𝑥 ℎ(𝑥) for a bounded function ℎ. a) Show that 𝑔(𝑥) B 𝑓 (𝑥)
2
is differentiable at the origin and 𝑔 ′(0) = 0.
b) Find an example of a continuous function 𝑓 : (−1, 1) → ℝ with 𝑓 (0) = 0, but such that 𝑔(𝑥) B 𝑓 (𝑥) is not differentiable at the origin.
2
Exercise 4.1.14: Suppose 𝑓 : 𝐼 → ℝ is differentiable at 𝑐 ∈ 𝐼. Prove there exist numbers 𝑎 and 𝑏 with the property that for every 𝜖 > 0, there is a 𝛿 > 0, such that |𝑎 + 𝑏(𝑥 − 𝑐) − 𝑓 (𝑥)| ≤ 𝜖 |𝑥 − 𝑐|, whenever 𝑥 ∈ 𝐼 and |𝑥 − 𝑐| < 𝛿. In other words, show that there exists a function 𝑔 : 𝐼 → ℝ such that lim𝑥→𝑐 𝑔(𝑥) = 0 and |𝑎 + 𝑏(𝑥 − 𝑐) − 𝑓 (𝑥)| = 𝑔(𝑥) |𝑥 − 𝑐|. Exercise 4.1.15: Prove the following simple version of L’Hôpital’s rule. Suppose 𝑓 : (𝑎, 𝑏) → ℝ and 𝑔 : (𝑎, 𝑏) → ℝ are differentiable functions whose derivatives 𝑓 ′ and 𝑔 ′ are continuous functions. Suppose that at 𝑐 ∈ (𝑎, 𝑏), 𝑓 (𝑐) = 0, 𝑔(𝑐) = 0, 𝑔 ′(𝑥) ≠ 0 for all 𝑥 ∈ (𝑎, 𝑏), and 𝑔(𝑥) ≠ 0 whenever 𝑥 ≠ 𝑐. Note that the limit of 𝑓 ′(𝑥)/𝑔′(𝑥) as 𝑥 goes to 𝑐 exists. Show that lim
𝑥→𝑐
𝑓 ′(𝑥) 𝑓 (𝑥) = lim ′ . 𝑔(𝑥) 𝑥→𝑐 𝑔 (𝑥)
Exercise 4.1.16: Suppose 𝑓 : (𝑎, 𝑏) → ℝ is differentiable at 𝑐 ∈ (𝑎, 𝑏), 𝑓 (𝑐) = 0, and 𝑓 ′(𝑐) > 0. Prove that there is a 𝛿 > 0 such that 𝑓 (𝑥) < 0 whenever 𝑐 − 𝛿 < 𝑥 < 𝑐 and 𝑓 (𝑥) > 0 whenever 𝑐 < 𝑥 < 𝑐 + 𝛿.
4.2 Mean value theorem
Note: 2 lectures (some applications may be skipped)
4.2.1 Relative minima and maxima
We talked about absolute maxima and minima. These are the tallest peaks and the lowest valleys in the entire mountain range. What about peaks of individual mountains and bottoms of individual valleys? The derivative, being a local concept, is like walking around in a fog; it cannot tell you if you are on the highest peak, but it can tell you whether you are at the top of some peak. Definition 4.2.1. Let 𝑆 ⊂ ℝ be a set and let 𝑓 : 𝑆 → ℝ be a function. The function 𝑓 is said to have a relative maximum at 𝑐 ∈ 𝑆 if there exists a 𝛿 > 0 such that for all 𝑥 ∈ 𝑆 where |𝑥 − 𝑐| < 𝛿, we have 𝑓 (𝑥) ≤ 𝑓 (𝑐). The definition of relative minimum is analogous. Lemma 4.2.2. Suppose 𝑓 : (𝑎, 𝑏) → ℝ is differentiable at 𝑐 ∈ (𝑎, 𝑏), and 𝑓 has a relative minimum or a relative maximum at 𝑐. Then 𝑓 ′(𝑐) = 0.
Proof. Suppose 𝑐 is a relative maximum of 𝑓. That is, there is a 𝛿 > 0 such that for every 𝑥 ∈ (𝑎, 𝑏) where |𝑥 − 𝑐| < 𝛿, we have 𝑓(𝑥) − 𝑓(𝑐) ≤ 0. Consider the difference quotient. If 𝑐 < 𝑥 < 𝑐 + 𝛿, then
(𝑓(𝑥) − 𝑓(𝑐))/(𝑥 − 𝑐) ≤ 0,
and if 𝑐 − 𝛿 < 𝑦 < 𝑐, then
(𝑓(𝑦) − 𝑓(𝑐))/(𝑦 − 𝑐) ≥ 0.
See Figure 4.3 for an illustration.
Figure 4.3: Slopes of secants at a relative maximum.
As 𝑎 < 𝑐 < 𝑏, there exist sequences {𝑥_𝑛}_{𝑛=1}^∞ and {𝑦_𝑛}_{𝑛=1}^∞ in (𝑎, 𝑏) and within 𝛿 of 𝑐, such that 𝑥_𝑛 > 𝑐 and 𝑦_𝑛 < 𝑐 for all 𝑛 ∈ ℕ, and such that lim_{𝑛→∞} 𝑥_𝑛 = lim_{𝑛→∞} 𝑦_𝑛 = 𝑐. Since 𝑓 is differentiable at 𝑐,
0 ≥ lim_{𝑛→∞} (𝑓(𝑥_𝑛) − 𝑓(𝑐))/(𝑥_𝑛 − 𝑐) = 𝑓′(𝑐) = lim_{𝑛→∞} (𝑓(𝑦_𝑛) − 𝑓(𝑐))/(𝑦_𝑛 − 𝑐) ≥ 0.
We are done with a maximum. For a minimum, consider the function − 𝑓 .
□
For a differentiable function, a point where 𝑓 ′(𝑐) = 0 is called a critical point. When 𝑓 is not differentiable at some points, it is common to also say that 𝑐 is a critical point if 𝑓 ′(𝑐) does not exist. The theorem says that a relative minimum or maximum at an interior point of an interval must be a critical point. As you remember from calculus, one finds minima and maxima of a function by finding all the critical points together with the endpoints of the interval and simply checking at which of these points is the function biggest or smallest.
4.2.2 Rolle’s theorem
Suppose a function has the same value at both endpoints of an interval. Intuitively it ought to attain a minimum or a maximum in the interior of the interval, then at such a minimum or a maximum, the derivative should be zero. See Figure 4.4 for the geometric idea. This is the content of the so-called Rolle’s theorem* .
Figure 4.4: Point where the tangent line is horizontal, that is 𝑓 ′(𝑐) = 0.
Theorem 4.2.3 (Rolle). Let 𝑓 : [𝑎, 𝑏] → ℝ be a continuous function differentiable on (𝑎, 𝑏) such that 𝑓 (𝑎) = 𝑓 (𝑏). Then there exists a 𝑐 ∈ (𝑎, 𝑏) such that 𝑓 ′(𝑐) = 0.
Proof. As 𝑓 is continuous on [𝑎, 𝑏], it attains an absolute minimum and an absolute maximum in [𝑎, 𝑏]. We wish to apply Lemma 4.2.2, and so we need to find some 𝑐 ∈ (𝑎, 𝑏) where 𝑓 attains a minimum or a maximum. Write 𝐾 ≔ 𝑓(𝑎) = 𝑓(𝑏). If there exists an 𝑥 such that 𝑓(𝑥) > 𝐾, then the absolute maximum is bigger than 𝐾 and hence occurs at some 𝑐 ∈ (𝑎, 𝑏), and therefore 𝑓′(𝑐) = 0. On the other hand, if there exists an 𝑥 such that 𝑓(𝑥) < 𝐾, then the absolute minimum occurs at some 𝑐 ∈ (𝑎, 𝑏), and so 𝑓′(𝑐) = 0. If there is no 𝑥 such that 𝑓(𝑥) > 𝐾 or 𝑓(𝑥) < 𝐾, then 𝑓(𝑥) = 𝐾 for all 𝑥 and then 𝑓′(𝑥) = 0 for all 𝑥 ∈ [𝑎, 𝑏], so any 𝑐 ∈ (𝑎, 𝑏) works. □

* Named after the French mathematician Michel Rolle (1652–1719).
It is absolutely necessary for the derivative to exist for all 𝑥 ∈ (𝑎, 𝑏). Consider the function 𝑓 (𝑥) B |𝑥| on [−1, 1]. Clearly 𝑓 (−1) = 𝑓 (1), but there is no point 𝑐 where 𝑓 ′(𝑐) = 0.
4.2.3 Mean value theorem
We extend Rolle’s theorem to functions that attain different values at the endpoints. Theorem 4.2.4 (Mean value theorem). Let 𝑓 : [𝑎, 𝑏] → ℝ be a continuous function differentiable on (𝑎, 𝑏). Then there exists a point 𝑐 ∈ (𝑎, 𝑏) such that 𝑓 (𝑏) − 𝑓 (𝑎) = 𝑓 ′(𝑐)(𝑏 − 𝑎).
For a geometric interpretation of the mean value theorem, see Figure 4.5. The idea is that the value (𝑓(𝑏) − 𝑓(𝑎))/(𝑏 − 𝑎) is the slope of the line between the points (𝑎, 𝑓(𝑎)) and (𝑏, 𝑓(𝑏)). Then 𝑐 is the point such that 𝑓′(𝑐) = (𝑓(𝑏) − 𝑓(𝑎))/(𝑏 − 𝑎), that is, the tangent line at the point (𝑐, 𝑓(𝑐)) has the same slope as the line between (𝑎, 𝑓(𝑎)) and (𝑏, 𝑓(𝑏)). The name comes from the fact that the slope of the secant line is the mean value of the derivative, so the average derivative is achieved in the interior of the interval.
The theorem follows from Rolle’s theorem, by subtracting from 𝑓 the affine linear function with the derivative (𝑓(𝑏) − 𝑓(𝑎))/(𝑏 − 𝑎) with the same values at 𝑎 and 𝑏 as 𝑓. That is, we subtract the function whose graph is the straight line through (𝑎, 𝑓(𝑎)) and (𝑏, 𝑓(𝑏)). Then we are looking for a point where this new function has derivative zero.

Figure 4.5: Graphical interpretation of the mean value theorem.
Proof. Define the function 𝑔 : [𝑎, 𝑏] → ℝ by
𝑔(𝑥) ≔ 𝑓(𝑥) − 𝑓(𝑏) − ((𝑓(𝑏) − 𝑓(𝑎))/(𝑏 − 𝑎)) (𝑥 − 𝑏).
The function 𝑔 is differentiable on (𝑎, 𝑏), continuous on [𝑎, 𝑏], such that 𝑔(𝑎) = 0 and 𝑔(𝑏) = 0. Thus there exists a 𝑐 ∈ (𝑎, 𝑏) such that 𝑔′(𝑐) = 0, that is,
0 = 𝑔′(𝑐) = 𝑓′(𝑐) − (𝑓(𝑏) − 𝑓(𝑎))/(𝑏 − 𝑎).
In other words, 𝑓(𝑏) − 𝑓(𝑎) = 𝑓′(𝑐)(𝑏 − 𝑎). □

The proof generalizes. By considering 𝑔(𝑥) ≔ 𝑓(𝑥) − 𝑓(𝑏) − ((𝑓(𝑏) − 𝑓(𝑎))/(𝜑(𝑏) − 𝜑(𝑎))) (𝜑(𝑥) − 𝜑(𝑏)), one can prove the following version. We leave the proof as an exercise.

Theorem 4.2.5 (Cauchy’s mean value theorem). Let 𝑓 : [𝑎, 𝑏] → ℝ and 𝜑 : [𝑎, 𝑏] → ℝ be continuous functions differentiable on (𝑎, 𝑏). Then there exists a point 𝑐 ∈ (𝑎, 𝑏) such that
(𝑓(𝑏) − 𝑓(𝑎)) 𝜑′(𝑐) = 𝑓′(𝑐) (𝜑(𝑏) − 𝜑(𝑎)).
The mean value theorem has the distinction of being one of the few theorems commonly cited in court. That is, when police measure the speed of cars by aircraft, or via cameras reading license plates, they measure the time the car takes to go between two points. The mean value theorem then says that the car must have somewhere attained the speed you get by dividing the difference in distance by the difference in time.
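The short sketch below illustrates the “cited in court” use of the theorem with entirely hypothetical numbers of our own (two time stamps and two positions): the average speed over the interval is computed, and the mean value theorem is what guarantees the instantaneous speed equaled that average at some moment in between.

```python
# Illustrative sketch with hypothetical measurements (not from the text).
t0, t1 = 0.0, 120.0        # seconds between the two observations
s0, s1 = 0.0, 4000.0       # meters traveled between the two observations
avg_speed = (s1 - s0) / (t1 - t0)
print(f"average speed = {avg_speed:.2f} m/s = {avg_speed * 3.6:.1f} km/h")
# If the legal limit is below this average, the driver exceeded it at some instant.
```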
4.2.4 Applications
Let us look at a few applications of the mean value theorem. The applications show the typical use of the theorem, which is to get rid of a limit by finding the right sort of points where the derivative is not just close to some difference quotient, but actually equal to one. First, we solve our very first differential equation. Proposition 4.2.6. Let 𝐼 be an interval and let 𝑓 : 𝐼 → ℝ be a differentiable function such that 𝑓 ′(𝑥) = 0 for all 𝑥 ∈ 𝐼. Then 𝑓 is constant.
Proof. Take arbitrary 𝑥, 𝑦 ∈ 𝐼 with 𝑥 < 𝑦. As 𝐼 is an interval, [𝑥, 𝑦] ⊂ 𝐼. Then 𝑓 restricted to [𝑥, 𝑦] satisfies the hypotheses of the mean value theorem. Therefore, there is a 𝑐 ∈ (𝑥, 𝑦) such that 𝑓 (𝑦) − 𝑓 (𝑥) = 𝑓 ′(𝑐)(𝑦 − 𝑥).
As 𝑓 ′(𝑐) = 0, we have 𝑓 (𝑦) = 𝑓 (𝑥). Hence, the function is constant.
□
Now that we know what it means for the function to stay constant, we look at increasing and decreasing functions. We say 𝑓 : 𝐼 → ℝ is increasing (resp. strictly increasing) if 𝑥 < 𝑦 implies 𝑓 (𝑥) ≤ 𝑓 (𝑦) (resp. 𝑓 (𝑥) < 𝑓 (𝑦)). We define decreasing and strictly decreasing in the same way by switching the inequalities for 𝑓 . Proposition 4.2.7. Let 𝐼 be an interval and let 𝑓 : 𝐼 → ℝ be a differentiable function. (i) 𝑓 is increasing if and only if 𝑓 ′(𝑥) ≥ 0 for all 𝑥 ∈ 𝐼.
(ii) 𝑓 is decreasing if and only if 𝑓 ′(𝑥) ≤ 0 for all 𝑥 ∈ 𝐼.
Proof. Let us prove the first item. Suppose 𝑓 is increasing. For all 𝑥, 𝑐 ∈ 𝐼 with 𝑥 ≠ 𝑐,
(𝑓(𝑥) − 𝑓(𝑐))/(𝑥 − 𝑐) ≥ 0.
Taking a limit as 𝑥 goes to 𝑐, we see that 𝑓′(𝑐) ≥ 0. For the other direction, suppose 𝑓′(𝑥) ≥ 0 for all 𝑥 ∈ 𝐼. Take any 𝑥, 𝑦 ∈ 𝐼 where 𝑥 < 𝑦, and note that [𝑥, 𝑦] ⊂ 𝐼. By the mean value theorem, there is some 𝑐 ∈ (𝑥, 𝑦) such that 𝑓(𝑦) − 𝑓(𝑥) = 𝑓′(𝑐)(𝑦 − 𝑥).
As 𝑓 ′(𝑐) ≥ 0 and 𝑦 − 𝑥 > 0, then 𝑓 (𝑦) − 𝑓 (𝑥) ≥ 0 or 𝑓 (𝑥) ≤ 𝑓 (𝑦), and so 𝑓 is increasing. We leave the second item, decreasing 𝑓 , to the reader as exercise. □ A similar but weaker statement is true for strictly increasing and decreasing functions. Proposition 4.2.8. Let 𝐼 be an interval and let 𝑓 : 𝐼 → ℝ be a differentiable function. (i) If 𝑓 ′(𝑥) > 0 for all 𝑥 ∈ 𝐼, then 𝑓 is strictly increasing.
(ii) If 𝑓 ′(𝑥) < 0 for all 𝑥 ∈ 𝐼, then 𝑓 is strictly decreasing.
The proof of (i) is left as an exercise. Then (ii) follows from (i) by considering − 𝑓 instead. The converse of this proposition is not true. The function 𝑓 (𝑥) B 𝑥 3 is strictly increasing, but 𝑓 ′(0) = 0. Another application of the mean value theorem is the following result about location of extrema, sometimes called the first derivative test. The result is stated for an absolute minimum and maximum. To apply it to find relative minima and maxima, restrict 𝑓 to an interval (𝑐 − 𝛿, 𝑐 + 𝛿). Proposition 4.2.9. Let 𝑓 : (𝑎, 𝑏) → ℝ be continuous. Let 𝑐 ∈ (𝑎, 𝑏) and suppose 𝑓 is differentiable on (𝑎, 𝑐) and (𝑐, 𝑏).
(i) If 𝑓 ′(𝑥) ≤ 0 whenever 𝑥 ∈ (𝑎, 𝑐) and 𝑓 ′(𝑥) ≥ 0 whenever 𝑥 ∈ (𝑐, 𝑏), then 𝑓 has an absolute minimum at 𝑐.
(ii) If 𝑓 ′(𝑥) ≥ 0 whenever 𝑥 ∈ (𝑎, 𝑐) and 𝑓 ′(𝑥) ≤ 0 whenever 𝑥 ∈ (𝑐, 𝑏), then 𝑓 has an absolute maximum at 𝑐. Proof. We prove the first item and leave the second to the reader. Take 𝑥 ∈ (𝑎, 𝑐) and a sequence {𝑦𝑛 }∞ such that 𝑥 < 𝑦𝑛 < 𝑐 for all 𝑛 and lim𝑛→∞ 𝑦𝑛 = 𝑐. By the preceding 𝑛=1 proposition, 𝑓 is decreasing on (𝑎, 𝑐) so 𝑓 (𝑥) ≥ 𝑓 (𝑦𝑛 ) for all 𝑛. As 𝑓 is continuous at 𝑐, we take the limit to get 𝑓 (𝑥) ≥ 𝑓 (𝑐). Similarly, take 𝑥 ∈ (𝑐, 𝑏) and {𝑦𝑛 }∞ a sequence such that 𝑐 < 𝑦𝑛 < 𝑥 and lim𝑛→∞ 𝑦𝑛 = 𝑐. 𝑛=1 The function is increasing on (𝑐, 𝑏) so 𝑓 (𝑥) ≥ 𝑓 (𝑦𝑛 ) for all 𝑛. By continuity of 𝑓 , we get 𝑓 (𝑥) ≥ 𝑓 (𝑐). Thus 𝑓 (𝑥) ≥ 𝑓 (𝑐) for all 𝑥 ∈ (𝑎, 𝑏). □ The converse of the proposition does not hold. See Example 4.2.12 below.
Another often used application of the mean value theorem you have possibly seen in calculus is the following result on differentiability at the end points of an interval. The proof is Exercise 4.2.13.
Proposition 4.2.10.
(i) Suppose 𝑓 : [𝑎, 𝑏) → ℝ is continuous, differentiable in (𝑎, 𝑏), and lim𝑥→𝑎 𝑓 ′(𝑥) = 𝐿. Then 𝑓 is differentiable at 𝑎 and 𝑓 ′(𝑎) = 𝐿.
(ii) Suppose 𝑓 : (𝑎, 𝑏] → ℝ is continuous, differentiable in (𝑎, 𝑏), and lim𝑥→𝑏 𝑓 ′(𝑥) = 𝐿. Then 𝑓 is differentiable at 𝑏 and 𝑓 ′(𝑏) = 𝐿.
In fact, using the extension result Proposition 3.4.6, you do not need to assume that 𝑓 is defined at the end point. See Exercise 4.2.14.
4.2.5 Continuity of derivatives and the intermediate value theorem
Derivatives of functions satisfy an intermediate value property. Theorem 4.2.11 (Darboux). Let 𝑓 : [𝑎, 𝑏] → ℝ be differentiable. Suppose 𝑦 ∈ ℝ is such that 𝑓 ′(𝑎) < 𝑦 < 𝑓 ′(𝑏) or 𝑓 ′(𝑎) > 𝑦 > 𝑓 ′(𝑏). Then there exists a 𝑐 ∈ (𝑎, 𝑏) such that 𝑓 ′(𝑐) = 𝑦.
The proof follows by subtracting 𝑓 and a linear function with derivative 𝑦. The new function 𝑔 reduces the problem to the case 𝑦 = 0, where 𝑔′(𝑎) > 0 > 𝑔′(𝑏). That is, 𝑔 is increasing at 𝑎 and decreasing at 𝑏, so it must attain a maximum inside (𝑎, 𝑏), where the derivative is zero. See Figure 4.6.

Figure 4.6: Idea of the proof of Darboux theorem.
Proof. Suppose 𝑓′(𝑎) < 𝑦 < 𝑓′(𝑏). Define 𝑔(𝑥) ≔ 𝑦𝑥 − 𝑓(𝑥). The function 𝑔 is continuous on [𝑎, 𝑏], and so 𝑔 attains a maximum at some 𝑐 ∈ [𝑎, 𝑏]. The function 𝑔 is also differentiable on [𝑎, 𝑏]. Compute 𝑔′(𝑥) = 𝑦 − 𝑓′(𝑥). Thus 𝑔′(𝑎) > 0. As the derivative is the limit of difference quotients and is positive, there must be some difference quotient that is positive. That is, there must exist an 𝑥 > 𝑎 such that
(𝑔(𝑥) − 𝑔(𝑎))/(𝑥 − 𝑎) > 0,
or 𝑔(𝑥) > 𝑔(𝑎). Thus 𝑔 cannot possibly have a maximum at 𝑎. Similarly, as 𝑔′(𝑏) < 0, we find an 𝑥 < 𝑏 (a different 𝑥) such that (𝑔(𝑥) − 𝑔(𝑏))/(𝑥 − 𝑏) < 0 or that 𝑔(𝑥) > 𝑔(𝑏), thus 𝑔 cannot possibly have a maximum at 𝑏. Therefore, 𝑐 ∈ (𝑎, 𝑏), and Lemma 4.2.2 applies: As 𝑔 attains a maximum at 𝑐 we find 𝑔′(𝑐) = 0 and so 𝑓′(𝑐) = 𝑦. Similarly, if 𝑓′(𝑎) > 𝑦 > 𝑓′(𝑏), consider 𝑔(𝑥) ≔ 𝑓(𝑥) − 𝑦𝑥. □

We have seen already that there exist discontinuous functions that have the intermediate value property. While it is hard to imagine at first, there also exist functions that are differentiable everywhere and the derivative is not continuous.

Example 4.2.12: Let 𝑓 : ℝ → ℝ be the function defined by
𝑓(𝑥) ≔ 𝑥² sin²(1/𝑥) if 𝑥 ≠ 0,   and   𝑓(𝑥) ≔ 0 if 𝑥 = 0.
We claim that 𝑓 is differentiable everywhere, but 𝑓 ′ : ℝ → ℝ is not continuous at the origin. Furthermore, 𝑓 has a minimum at 0, but the derivative changes sign infinitely often near the origin. See Figure 4.7.
Figure 4.7: A function with a discontinuous derivative. The function 𝑓 is on the left and 𝑓 ′ is on the right. Notice that 𝑓 (𝑥) ≤ 𝑥 2 on the left graph.
Proof: It is immediate from the definition that 𝑓 has an absolute minimum at 0; we know 𝑓(𝑥) ≥ 0 for all 𝑥 and 𝑓(0) = 0. For 𝑥 ≠ 0, 𝑓 is differentiable and the derivative is 2 sin(1/𝑥) (𝑥 sin(1/𝑥) − cos(1/𝑥)). As an exercise, show that for 𝑥_𝑛 = 4/((8𝑛 + 1)𝜋), we have lim_{𝑛→∞} 𝑓′(𝑥_𝑛) = −1, and for 𝑦_𝑛 = 4/((8𝑛 + 3)𝜋), we have lim_{𝑛→∞} 𝑓′(𝑦_𝑛) = 1. So 𝑓′ cannot be continuous at 0 no matter what 𝑓′(0) is.
Let us show that 𝑓 is differentiable at 0 and 𝑓′(0) = 0. For 𝑥 ≠ 0,
|(𝑓(𝑥) − 𝑓(0))/(𝑥 − 0) − 0| = |𝑥² sin²(1/𝑥) / 𝑥| = |𝑥 sin²(1/𝑥)| ≤ |𝑥|.
And, of course, as 𝑥 tends to zero, |𝑥| tends to zero, and hence |(𝑓(𝑥) − 𝑓(0))/(𝑥 − 0) − 0| goes to zero. Therefore, 𝑓 is differentiable at 0 and the derivative at 0 is 0. A key point in the calculation above is that |𝑓(𝑥)| ≤ 𝑥², see also Exercises 4.1.11 and 4.1.12.
It is sometimes useful to assume the derivative of a differentiable function is continuous. If 𝑓 : 𝐼 → ℝ is differentiable and the derivative 𝑓 ′ is continuous on 𝐼, then we say 𝑓 is continuously differentiable. It is common to write 𝐶 1 (𝐼) for the set of continuously differentiable functions on 𝐼.
4.2.6 Exercises
Exercise 4.2.1: Finish the proof of Proposition 4.2.7. Exercise 4.2.2: Finish the proof of Proposition 4.2.9. Exercise 4.2.3: Suppose 𝑓 : ℝ → ℝ is a differentiable function such that 𝑓 ′ is a bounded function. Prove that 𝑓 is a Lipschitz continuous function. Exercise 4.2.4: Suppose 𝑓 : [𝑎, 𝑏] → ℝ is differentiable and 𝑐 ∈ [𝑎, 𝑏]. Show there exists a sequence {𝑥 𝑛 }∞ 𝑛=1 converging to 𝑐, 𝑥 𝑛 ≠ 𝑐 for all 𝑛, such that 𝑓 ′(𝑐) = lim 𝑓 ′(𝑥 𝑛 ). 𝑛→∞
Do note this does not imply that 𝑓 ′ is continuous (why?). Exercise 4.2.5: Suppose 𝑓 : ℝ → ℝ is a function such that | 𝑓 (𝑥) − 𝑓 (𝑦)| ≤ |𝑥 − 𝑦| 2 for all 𝑥 and 𝑦. Show that 𝑓 (𝑥) = 𝐶 for some constant 𝐶. Hint: Show that 𝑓 is differentiable at all points and compute the derivative. Exercise 4.2.6: Finish the proof of Proposition 4.2.8. That is, suppose 𝐼 is an interval and 𝑓 : 𝐼 → ℝ is a differentiable function such that 𝑓 ′(𝑥) > 0 for all 𝑥 ∈ 𝐼. Show that 𝑓 is strictly increasing. Exercise 4.2.7: Suppose 𝑓 : (𝑎, 𝑏) → ℝ is a differentiable function such that 𝑓 ′(𝑥) ≠ 0 for all 𝑥 ∈ (𝑎, 𝑏). Suppose there exists a point 𝑐 ∈ (𝑎, 𝑏) such that 𝑓 ′(𝑐) > 0. Prove 𝑓 ′(𝑥) > 0 for all 𝑥 ∈ (𝑎, 𝑏). Exercise 4.2.8: Suppose 𝑓 : (𝑎, 𝑏) → ℝ and 𝑔 : (𝑎, 𝑏) → ℝ are differentiable functions such that 𝑓 ′(𝑥) = 𝑔 ′(𝑥) for all 𝑥 ∈ (𝑎, 𝑏), then show that there exists a constant 𝐶 such that 𝑓 (𝑥) = 𝑔(𝑥) + 𝐶. Exercise 4.2.9: Prove the following version of L’Hôpital’s rule. Suppose 𝑓 : (𝑎, 𝑏) → ℝ and 𝑔 : (𝑎, 𝑏) → ℝ are differentiable functions and 𝑐 ∈ (𝑎, 𝑏). Suppose that 𝑓 (𝑐) = 0, 𝑔(𝑐) = 0, 𝑔 ′(𝑥) ≠ 0 when 𝑥 ≠ 𝑐, and that the limit of 𝑓 ′(𝑥)/𝑔′(𝑥) as 𝑥 goes to 𝑐 exists. Show that lim
𝑥→𝑐
𝑓 (𝑥) 𝑓 ′(𝑥) = lim ′ . 𝑔(𝑥) 𝑥→𝑐 𝑔 (𝑥)
Compare to Exercise 4.1.15. Note: Before you do anything else, prove that 𝑔(𝑥) ≠ 0 when 𝑥 ≠ 𝑐. Exercise 4.2.10: Let 𝑓 : (𝑎, 𝑏) → ℝ be an unbounded differentiable function. Show 𝑓 ′ : (𝑎, 𝑏) → ℝ is unbounded.
Exercise 4.2.11: Prove the theorem Rolle actually proved in 1691: If 𝑓 is a polynomial, 𝑓 ′(𝑎) = 𝑓 ′(𝑏) = 0 for some 𝑎 < 𝑏, and there is no 𝑐 ∈ (𝑎, 𝑏) such that 𝑓 ′(𝑐) = 0, then there is at most one root of 𝑓 in (𝑎, 𝑏), that is at most one 𝑥 ∈ (𝑎, 𝑏) such that 𝑓 (𝑥) = 0. In other words, between any two consecutive roots of 𝑓 ′ is at most one root of 𝑓 . Hint: Suppose there are two roots and see what happens. Exercise 4.2.12: Suppose 𝑎, 𝑏 ∈ ℝ and 𝑓 : ℝ → ℝ is differentiable, 𝑓 ′(𝑥) = 𝑎 for all 𝑥, and 𝑓 (0) = 𝑏. Find 𝑓 and prove that it is the unique differentiable function with this property. Exercise 4.2.13: a) Prove Proposition 4.2.10. b) Suppose 𝑓 : (𝑎, 𝑏) → ℝ is continuous, and suppose 𝑓 is differentiable everywhere except at 𝑐 ∈ (𝑎, 𝑏) and lim𝑥→𝑐 𝑓 ′(𝑥) = 𝐿. Prove that 𝑓 is differentiable at 𝑐 and 𝑓 ′(𝑐) = 𝐿. Exercise 4.2.14: Suppose 𝑓 : (0, 1) → ℝ is differentiable and 𝑓 ′ is bounded.
a) Show that there exists a continuous function 𝑔 : [0, 1) → ℝ such that 𝑓 (𝑥) = 𝑔(𝑥) for all 𝑥 ≠ 0. Hint: Proposition 3.4.6 and Exercise 4.2.3.
b) Find an example where the 𝑔 is not differentiable at 𝑥 = 0. Hint: Consider something based on sin(ln 𝑥), and assume you know basic properties of sin and ln from calculus. c) Instead of assuming that 𝑓 ′ is bounded, assume that lim𝑥→0 𝑓 ′(𝑥) = 𝐿. Prove that not only does 𝑔 exist but it is differentiable at 0 and 𝑔 ′(0) = 𝐿. Exercise 4.2.15: Prove Theorem 4.2.5.
4.3 Taylor's theorem
Note: less than a lecture (optional section)
4.3.1 Derivatives of higher orders
When 𝑓 : 𝐼 → ℝ is differentiable, we obtain a function 𝑓 ′ : 𝐼 → ℝ. The function 𝑓 ′ is called the first derivative of 𝑓 . If 𝑓 ′ is differentiable, we denote by 𝑓 ′′ : 𝐼 → ℝ the derivative of 𝑓 ′. The function 𝑓 ′′ is called the second derivative of 𝑓 . We similarly obtain 𝑓 ′′′, 𝑓 ′′′′, and so on. With a larger number of derivatives the notation would get out of hand; we denote by 𝑓 (𝑛) the 𝑛th derivative of 𝑓 . When 𝑓 possesses 𝑛 derivatives, we say 𝑓 is 𝑛 times differentiable.
4.3.2 Taylor's theorem
Taylor's theorem* is a generalization of the mean value theorem. The mean value theorem says that, up to a small error, f(x) for x near x_0 can be approximated by f(x_0), that is,
\[ f(x) = f(x_0) + f'(c)(x - x_0), \]
where the "error" is measured in terms of the first derivative at some point c between x and x_0. Taylor's theorem generalizes this result to higher derivatives. It tells us that up to a small error, any n times differentiable function can be approximated at a point x_0 by a polynomial. The error of this approximation behaves like (x - x_0)^n near the point x_0. To see why this is a good approximation notice that for a big n, (x - x_0)^n is very small in a small interval around x_0.

Definition 4.3.1. For an n times differentiable function f defined near a point x_0 ∈ ℝ, define the nth order Taylor polynomial for f at x_0 as
\[ P_n^{x_0}(x) := \sum_{k=0}^{n} \frac{f^{(k)}(x_0)}{k!} (x - x_0)^k = f(x_0) + f'(x_0)(x - x_0) + \frac{f''(x_0)}{2}(x - x_0)^2 + \frac{f^{(3)}(x_0)}{6}(x - x_0)^3 + \cdots + \frac{f^{(n)}(x_0)}{n!}(x - x_0)^n . \]
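The definition is easy to check numerically. The following short Python sketch (a hedged illustration of ours, not part of the original text; the function name taylor_sin is our choice) builds P_n^0 for the sine function straight from the formula, using the fact that the derivatives of sine at 0 cycle through 0, 1, 0, −1:

import math

def taylor_sin(n, x):
    # P_n^0(x) = sum_{k=0}^n f^(k)(0)/k! * x^k for f = sin; sin's derivatives at 0 cycle 0, 1, 0, -1
    cycle = [0.0, 1.0, 0.0, -1.0]
    return sum(cycle[k % 4] / math.factorial(k) * x ** k for k in range(n + 1))

for n in (1, 3, 5, 7):
    print(n, taylor_sin(n, 1.0), math.sin(1.0))

Already P_7^0(1) agrees with sin(1) to several decimal places, in line with Figure 4.8.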
See Figure 4.8 for the odd degree Taylor polynomials for the sine function at x_0 = 0. The even degree terms are all zero, as even derivatives of sine are again sines, which are zero at the origin.

* Named for the English mathematician Brook Taylor (1685–1731). It was first found by the Scottish mathematician James Gregory (1638–1675). The statement we give was proved by Joseph-Louis Lagrange (1736–1813).
Figure 4.8: The odd degree Taylor polynomials P_1^0, P_3^0, P_5^0, P_7^0 for the sine function y = sin(x).
Taylor's theorem says a function behaves like its nth Taylor polynomial. The mean value theorem is really Taylor's theorem for the first derivative.

Theorem 4.3.2 (Taylor). Suppose f : [a, b] → ℝ is a function with n continuous derivatives on [a, b] and such that f^{(n+1)} exists on (a, b). Given distinct points x_0 and x in [a, b], we can find a point c between x_0 and x such that
\[ f(x) = P_n^{x_0}(x) + \frac{f^{(n+1)}(c)}{(n+1)!} (x - x_0)^{n+1} . \]

The term
\[ R_n^{x_0}(x) := \frac{f^{(n+1)}(c)}{(n+1)!} (x - x_0)^{n+1} \]
is called the remainder term. This form of the remainder term is called the Lagrange form of the remainder. There are other ways to write the remainder term, but we skip those. Note that c depends on both x and x_0.

Proof. Find a number M_{x,x_0} (depending on x and x_0) solving the equation
\[ f(x) = P_n^{x_0}(x) + M_{x,x_0} (x - x_0)^{n+1} . \]
Define a function g(s) by
\[ g(s) := f(s) - P_n^{x_0}(s) - M_{x,x_0} (s - x_0)^{n+1} . \]
We compute the kth derivative at x_0 of the Taylor polynomial, (P_n^{x_0})^{(k)}(x_0) = f^{(k)}(x_0) for k = 0, 1, 2, ..., n (the zeroth derivative of a function is the function itself). Therefore,
\[ g(x_0) = g'(x_0) = g''(x_0) = \cdots = g^{(n)}(x_0) = 0 . \]
In particular, g(x_0) = 0. On the other hand g(x) = 0. By the mean value theorem there exists an x_1 between x_0 and x such that g'(x_1) = 0. Applying the mean value theorem to g' we obtain that there exists x_2 between x_0 and x_1 (and therefore between x_0 and x) such that g''(x_2) = 0. We repeat the argument n + 1 times to obtain a number x_{n+1} between x_0 and x_n (and therefore between x_0 and x) such that g^{(n+1)}(x_{n+1}) = 0. Let c := x_{n+1}. We compute the (n+1)th derivative of g to find
\[ g^{(n+1)}(s) = f^{(n+1)}(s) - (n+1)!\, M_{x,x_0} . \]
Plugging in c for s we obtain M_{x,x_0} = \frac{f^{(n+1)}(c)}{(n+1)!}, and we are done. □
In the proof, we found (P_n^{x_0})^{(k)}(x_0) = f^{(k)}(x_0) for k = 0, 1, 2, ..., n. Therefore, the Taylor polynomial has the same derivatives as f at x_0 up to the nth derivative. That is why the Taylor polynomial is a good approximation to f.
Notice how in Figure 4.8 the Taylor polynomials are reasonably good approximations to the sine near x = 0. We do not necessarily get good approximations by the Taylor polynomial everywhere. Consider expanding the function f(x) := x/(1−x) around 0 for x < 1; we get the graphs in Figure 4.9. The dotted lines are the first, second, and third degree approximations. The dashed line is the 20th degree polynomial. The approximation does seem to get better as the degree rises for x > −1. However, for x < −1, it in fact gets worse. The polynomials are the partial sums of the geometric series \sum_{n=1}^{\infty} x^n, and the series only converges on (−1, 1). See the discussion of power series in §2.6.

Figure 4.9: The function x/(1−x), and the Taylor polynomials P_1^0, P_2^0, P_3^0 (all dotted), and the polynomial P_20^0 (dashed).
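A quick numerical sketch of this failure (Python; the degrees and sample points are our own choices) evaluates the partial sums of the geometric series, which are exactly the Taylor polynomials of x/(1 − x) at 0, inside and outside (−1, 1):

def taylor_geom(n, x):
    # P_n^0(x) for f(x) = x/(1 - x): the nth Taylor polynomial is the nth partial sum x + x^2 + ... + x^n
    return sum(x ** k for k in range(1, n + 1))

for x in (0.5, -1.5):
    exact = x / (1 - x)
    print(x, [round(taylor_geom(n, x), 3) for n in (1, 3, 10, 20)], "exact:", round(exact, 3))

At x = 0.5 the values approach the exact 1.0; at x = −1.5 they oscillate with growing magnitude and never approach −0.6.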
If f is infinitely differentiable, that is, if f can be differentiated any number of times, then we define the Taylor series:
\[ \sum_{k=0}^{\infty} \frac{f^{(k)}(x_0)}{k!} (x - x_0)^k . \]
There is no guarantee that this series converges for any 𝑥 ≠ 𝑥0 . And even where it does converge, there is no guarantee that it converges to the function 𝑓 . Functions 𝑓 whose Taylor series at every point 𝑥 0 converges to 𝑓 in some open interval containing 𝑥 0 are
called analytic functions. Many functions one tends to see in practice are analytic. See Exercise 5.4.11, for an example of a non-analytic function.
The definition of derivative says that a function is differentiable if it is locally approximated by a line. We mention in passing that there exists a converse to Taylor's theorem, which we will neither state nor prove, saying that if a function is locally approximated in a certain way by a polynomial of degree d, then it has d derivatives.
Taylor's theorem gives us a quick proof of a version of the second derivative test. By a strict relative minimum of f at c, we mean that there exists a δ > 0 such that f(x) > f(c) for all x ∈ (c − δ, c + δ) where x ≠ c. A strict relative maximum is defined similarly. Continuity of the second derivative is not needed, but the proof is more difficult and is left as an exercise. The proof also generalizes immediately into the nth derivative test, which is also left as an exercise.

Proposition 4.3.3 (Second derivative test). Suppose f : (a, b) → ℝ is twice continuously differentiable, x_0 ∈ (a, b), f'(x_0) = 0 and f''(x_0) > 0. Then f has a strict relative minimum at x_0.

Proof. As f'' is continuous, there exists a δ > 0 such that f''(c) > 0 for all c ∈ (x_0 − δ, x_0 + δ), see Exercise 3.2.11. Take x ∈ (x_0 − δ, x_0 + δ), x ≠ x_0. Taylor's theorem says that for some c between x_0 and x,
\[ f(x) = f(x_0) + f'(x_0)(x - x_0) + \frac{f''(c)}{2}(x - x_0)^2 = f(x_0) + \frac{f''(c)}{2}(x - x_0)^2 . \]
As f''(c) > 0, and (x − x_0)^2 > 0, then f(x) > f(x_0). □

4.3.3 Exercises
Exercise 4.3.1: Compute the nth Taylor polynomial at 0 for the exponential function.

Exercise 4.3.2: Suppose p is a polynomial of degree d. Given x_0 ∈ ℝ, show that the dth Taylor polynomial for p at x_0 is equal to p.

Exercise 4.3.3: Let f(x) := |x|^3. Compute f'(x) and f''(x) for all x, but show that f^{(3)}(0) does not exist.

Exercise 4.3.4: Suppose f : ℝ → ℝ has n continuous derivatives. Show that for every x_0 ∈ ℝ, there exist polynomials P and Q of degree n and an ε > 0 such that P(x) ≤ f(x) ≤ Q(x) for all x ∈ [x_0, x_0 + ε] and Q(x) − P(x) = λ(x − x_0)^n for some λ ≥ 0.

Exercise 4.3.5: If f : [a, b] → ℝ has n + 1 continuous derivatives and x_0 ∈ [a, b], prove
\[ \lim_{x \to x_0} \frac{R_n^{x_0}(x)}{(x - x_0)^n} = 0 . \]

Exercise 4.3.6: Suppose f : [a, b] → ℝ has n + 1 continuous derivatives and x_0 ∈ (a, b). Prove: f^{(k)}(x_0) = 0 for all k = 0, 1, 2, ..., n if and only if \lim_{x \to x_0} \frac{f(x)}{(x - x_0)^{n+1}} exists.

Exercise 4.3.7: Suppose a, b, c ∈ ℝ and f : ℝ → ℝ is differentiable, f''(x) = a for all x, f'(0) = b, and f(0) = c. Find f and prove that it is the unique differentiable function with this property.
Exercise 4.3.8 (Challenging): Show that a simple converse to Taylor's theorem does not hold. Find a function f : ℝ → ℝ with no second derivative at x = 0 such that |f(x)| ≤ |x|^3, that is, f goes to zero at 0 faster than x^2, and while f'(0) exists, f''(0) does not.

Exercise 4.3.9: Suppose f : (0, 1) → ℝ is differentiable and f'' is bounded.
a) Show that there exists a once differentiable function g : [0, 1) → ℝ such that f(x) = g(x) for all x ≠ 0. Hint: See Exercise 4.2.14.
b) Find an example where the g is not twice differentiable at x = 0.

Exercise 4.3.10: Prove the nth derivative test. Suppose n ∈ ℕ, x_0 ∈ (a, b), and f : (a, b) → ℝ is n times continuously differentiable, with f^{(k)}(x_0) = 0 for k = 1, 2, ..., n − 1, and f^{(n)}(x_0) ≠ 0. Prove:
a) If n is odd, then f has neither a relative minimum, nor a maximum at x_0.
b) If n is even, then f has a strict relative minimum at x_0 if f^{(n)}(x_0) > 0 and a strict relative maximum at x_0 if f^{(n)}(x_0) < 0.

Exercise 4.3.11: Prove the more general version of the second derivative test. Suppose f : (a, b) → ℝ is differentiable and x_0 ∈ (a, b) is such that f'(x_0) = 0, f''(x_0) exists, and f''(x_0) > 0. Prove that f has a strict relative minimum at x_0. Hint: Consider the limit definition of f''(x_0).
4.4 Inverse function theorem
Note: less than 1 lecture (optional section, needed for §5.4, requires §3.6)
4.4.1 Inverse function theorem
We start with a simple example. Consider the function f(x) := ax for a number a ≠ 0. Then f : ℝ → ℝ is bijective, and the inverse is f^{-1}(y) = (1/a) y. In particular, f'(x) = a and (f^{-1})'(y) = 1/a. As differentiable functions are "infinitesimally like" linear functions, we expect the same behavior from the inverse function. The main idea of differentiating inverse functions is the following lemma.

Lemma 4.4.1. Let I, J ⊂ ℝ be intervals. If f : I → J is strictly monotone (hence one-to-one), onto (f(I) = J), differentiable at x_0 ∈ I, and f'(x_0) ≠ 0, then the inverse f^{-1} is differentiable at y_0 = f(x_0) and
\[ (f^{-1})'(y_0) = \frac{1}{f'(x_0)} = \frac{1}{f'\bigl(f^{-1}(y_0)\bigr)} . \]
If f is continuously differentiable and f' is never zero, then f^{-1} is continuously differentiable.

Proof. By Proposition 3.6.6, f has a continuous inverse. For convenience call the inverse g : J → I. Let x_0, y_0 be as in the statement. For x ∈ I write y := f(x). If x ≠ x_0 and so y ≠ y_0, we find
\[ \frac{g(y) - g(y_0)}{y - y_0} = \frac{g\bigl(f(x)\bigr) - g\bigl(f(x_0)\bigr)}{f(x) - f(x_0)} = \frac{x - x_0}{f(x) - f(x_0)} . \]
See Figure 4.10 for the geometric idea.
Figure 4.10: Interpretation of the derivative of the inverse function. At y_0 = f(x_0), the graph of f has slope (f(x) − f(x_0))/(x − x_0) = (y − y_0)/(g(y) − g(y_0)), while the graph of the inverse g has the reciprocal slope (x − x_0)/(f(x) − f(x_0)) = (g(y) − g(y_0))/(y − y_0).
Let
\[ Q(x) := \begin{cases} \dfrac{x - x_0}{f(x) - f(x_0)} & \text{if } x \neq x_0, \\[1ex] \dfrac{1}{f'(x_0)} & \text{if } x = x_0 \end{cases} \]
(notice that f'(x_0) ≠ 0). As f is differentiable at x_0,
\[ \lim_{x \to x_0} Q(x) = \lim_{x \to x_0} \frac{x - x_0}{f(x) - f(x_0)} = \frac{1}{f'(x_0)} = Q(x_0), \]
that is, Q is continuous at x_0. As g(y) is continuous at y_0, the composition Q(g(y)) = \frac{g(y) - g(y_0)}{y - y_0} is continuous at y_0 by Proposition 3.2.7. Therefore,
\[ \frac{1}{f'\bigl(g(y_0)\bigr)} = Q\bigl(g(y_0)\bigr) = \lim_{y \to y_0} Q\bigl(g(y)\bigr) = \lim_{y \to y_0} \frac{g(y) - g(y_0)}{y - y_0} . \]
So g is differentiable at y_0 and g'(y_0) = \frac{1}{f'(g(y_0))}.
If f' is continuous and nonzero at all x ∈ I, then the lemma applies at all x ∈ I. As g is also continuous (it is differentiable), the derivative g'(y) = \frac{1}{f'(g(y))} must be continuous. □

What is usually called the inverse function theorem is the following result.
Theorem 4.4.2 (Inverse function theorem). Let f : (a, b) → ℝ be a continuously differentiable function, x_0 ∈ (a, b) a point where f'(x_0) ≠ 0. Then there exists an open interval I ⊂ (a, b) with x_0 ∈ I, the restriction f|_I is injective with a continuously differentiable inverse g : J → I defined on an interval J := f(I), and
\[ g'(y) = \frac{1}{f'\bigl(g(y)\bigr)} \qquad \text{for all } y \in J . \]

Proof. Without loss of generality, suppose f'(x_0) > 0. As f' is continuous, there must exist an open interval I = (x_0 − δ, x_0 + δ) such that f'(x) > 0 for all x ∈ I. See Exercise 3.2.11. By Proposition 4.2.8, f is strictly increasing on I, and hence the restriction f|_I is bijective onto J := f(I). As f is continuous, then by Corollary 3.6.3 (or directly via the intermediate value theorem) f(I) is an interval. Now apply Lemma 4.4.1. □

In Example 1.2.3 we saw what a difficult endeavor it is to prove the existence of √2. With the intermediate value theorem we made the existence of roots almost trivial, and with the machinery of this section we will prove far more than mere existence.

Corollary 4.4.3. Given n ∈ ℕ and x ≥ 0, there exists a unique number y ≥ 0 (denoted x^{1/n} := y), such that y^n = x. Furthermore, the function g : (0, ∞) → (0, ∞) defined by g(x) := x^{1/n} is continuously differentiable and
\[ g'(x) = \frac{1}{n x^{(n-1)/n}} = \frac{1}{n} x^{(1-n)/n} , \]
using the convention x^{m/n} := (x^{1/n})^m.
Proof. For x = 0 the existence of a unique root is trivial. Let f : (0, ∞) → (0, ∞) be defined by f(y) := y^n. The function f is continuously differentiable and f'(y) = n y^{n−1}, see Exercise 4.1.3. For y > 0 the derivative f' is strictly positive and so again by Proposition 4.2.8, f is strictly increasing (this can also be proved directly). Given any M > 1, f(M) = M^n ≥ M, and given any 1 > ε > 0, f(ε) = ε^n ≤ ε. For every x with ε < x < M, the intermediate value theorem therefore gives a y ∈ (ε, M) with y^n = x. As ε and M were arbitrary, f is onto (0, ∞), and as f is strictly increasing, the root y is unique. Theorem 4.4.2 (or Lemma 4.4.1) now gives the continuously differentiable inverse g together with the stated formula for g'. □

Example 4.4.4: The interval I in the inverse function theorem cannot always be taken to be the whole domain. For instance, for f(x) := x^2, if x_0 > 0, we can take I = (0, ∞), but no larger.

Example 4.4.5: Another useful example is f(x) := x^3. The function f : ℝ → ℝ is one-to-one and onto, so f^{-1}(y) = y^{1/3} exists on the entire real line including zero and negative y. The function f has a continuous derivative, but f^{-1} has no derivative at the origin. The point is that f'(0) = 0. See Figure 4.11 for a graph; notice the vertical tangent on the cube root at the origin. See also Exercise 4.4.4.
𝑦 = 𝑥3 𝑦 = 𝑥 1/3
Figure 4.11: Graphs of 𝑥 3 and 𝑥 1/3 .
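The derivative formula in Corollary 4.4.3 is simple to sanity-check numerically. In the Python sketch below (n = 3, x = 2, and the step h are arbitrary choices of ours) a symmetric difference quotient of the cube root is compared with (1/n) x^{(1−n)/n}:

n = 3
x = 2.0
h = 1e-6
g = lambda t: t ** (1.0 / n)                 # the nth root function of Corollary 4.4.3
numeric = (g(x + h) - g(x - h)) / (2 * h)    # symmetric difference quotient approximating g'(x)
formula = (1.0 / n) * x ** ((1 - n) / n)
print(numeric, formula)                      # the two values agree to many digits

Trying small positive x (keeping x > h) shows the difference quotient growing without bound, matching the vertical tangent of the cube root at the origin.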
4.4.2 Exercises
Exercise 4.4.1: Suppose f : ℝ → ℝ is continuously differentiable and f'(x) > 0 for all x. Show that f is invertible on the interval J = f(ℝ), the inverse is continuously differentiable, and (f^{-1})'(y) > 0 for all y ∈ f(ℝ).
Exercise 4.4.2: Suppose I, J are intervals and a monotone onto f : I → J has an inverse g : J → I. Suppose you already know that both f and g are differentiable everywhere and f' is never zero. Using the chain rule but not Lemma 4.4.1 prove the formula g'(y) = \frac{1}{f'(g(y))}. Remark: This exercise is the same as Exercise 4.1.10; no need to do it again if you have solved it already.

Exercise 4.4.3: Let n ∈ ℕ be even. Prove that every x > 0 has a unique negative nth root. That is, there exists a negative number y such that y^n = x. Compute the derivative of the function g(x) := y.

Exercise 4.4.4: Let n ∈ ℕ be odd and n ≥ 3. Prove that every x has a unique nth root. That is, there exists a number y such that y^n = x. Prove that the function defined by g(x) := y is differentiable except at x = 0 and compute the derivative. Prove that g is not differentiable at x = 0.

Exercise 4.4.5 (requires §4.3): Show that if in the inverse function theorem f has k continuous derivatives, then the inverse function g also has k continuous derivatives.

Exercise 4.4.6: Let f(x) := x + 2x^2 sin(1/x) for x ≠ 0 and f(0) := 0. Show that f is differentiable at all x, that f'(0) > 0, but that f is not invertible on any open interval containing the origin.

Exercise 4.4.7:
a) Let f : ℝ → ℝ be a continuously differentiable function and k > 0 be a number such that f'(x) ≥ k for all x ∈ ℝ. Show f is one-to-one and onto, and has a continuously differentiable inverse f^{-1} : ℝ → ℝ.
b) Find an example f : ℝ → ℝ where f'(x) > 0 for all x, but f is not onto.

Exercise 4.4.8: Suppose I, J are intervals and a monotone onto f : I → J has an inverse g : J → I. Suppose x ∈ I and y := f(x) ∈ J, and that g is differentiable at y. Prove:
a) If g'(y) ≠ 0, then f is differentiable at x.
b) If g'(y) = 0, then f is not differentiable at x.
Chapter 5
The Riemann Integral

5.1 The Riemann integral

Note: 1.5 lectures

An integral is a way to "sum" the values of a function. There is sometimes confusion among students of calculus between the integral and the antiderivative. The integral is (informally) the area under the curve, nothing else. That we can compute an antiderivative using the integral is a nontrivial result we must prove. We will define the Riemann integral† using the Darboux integral‡, an equivalent but technically simpler definition.

5.1.1 Partitions and lower and upper integrals
We want to integrate a bounded function defined on an interval [a, b]. We first define two auxiliary integrals that are defined for all bounded functions. Only then can we talk about the Riemann integral and the Riemann integrable functions.

Definition 5.1.1. A partition P of [a, b] is a finite set of numbers {x_0, x_1, x_2, ..., x_n} such that
\[ a = x_0 < x_1 < x_2 < \cdots < x_{n-1} < x_n = b . \]
We write
\[ \Delta x_i := x_i - x_{i-1} . \]
Suppose f : [a, b] → ℝ is bounded and P is a partition of [a, b]. Define
\[ m_i := \inf\{\, f(x) : x_{i-1} \le x \le x_i \,\}, \qquad M_i := \sup\{\, f(x) : x_{i-1} \le x \le x_i \,\}, \]
\[ L(P, f) := \sum_{i=1}^{n} m_i \,\Delta x_i , \qquad U(P, f) := \sum_{i=1}^{n} M_i \,\Delta x_i . \]
We call L(P, f) the lower Darboux sum and U(P, f) the upper Darboux sum.

† Named after the German mathematician Georg Friedrich Bernhard Riemann (1826–1866).
‡ Named after the French mathematician Jean-Gaston Darboux (1842–1917).
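Here is a minimal Python sketch of the definition (the helper name and the sampling approach are ours); the infimum and supremum over each subinterval are approximated by dense sampling, which is adequate for the continuous functions used in the examples:

def darboux_sums(f, P, samples=1000):
    # Approximate L(P, f) and U(P, f) for a partition P = [x_0, x_1, ..., x_n].
    L = U = 0.0
    for a, b in zip(P, P[1:]):
        vals = [f(a + (b - a) * j / samples) for j in range(samples + 1)]
        L += min(vals) * (b - a)   # m_i * Delta x_i (approximately)
        U += max(vals) * (b - a)   # M_i * Delta x_i (approximately)
    return L, U

print(darboux_sums(lambda x: x ** 3, [0, 0.1, 0.4, 1]))

For f(x) = x^3 and the partition {0, 0.1, 0.4, 1} (the data of Exercise 5.1.1 below), the lower sum comes out below 1/4 and the upper sum above 1/4.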
The geometric idea of Darboux sums is indicated in Figure 5.1. The lower sum is the area of the shaded rectangles, and the upper sum is the area of the entire rectangles, shaded plus unshaded parts. The width of the ith rectangle is Δx_i, the height of the shaded rectangle is m_i, and the height of the entire rectangle is M_i.

Figure 5.1: Sample Darboux sums on a partition x_0 < x_1 < · · · < x_8.
Proposition 5.1.2. Let f : [a, b] → ℝ be a bounded function. Let m, M ∈ ℝ be such that for all x ∈ [a, b], we have m ≤ f(x) ≤ M. Then for every partition P of [a, b],
\[ m(b - a) \le L(P, f) \le U(P, f) \le M(b - a) . \tag{5.1} \]

Proof. Let P be a partition of [a, b]. Note that m ≤ m_i for all i and M_i ≤ M for all i. Also m_i ≤ M_i for all i. Finally, \sum_{i=1}^{n} \Delta x_i = (b - a). Therefore,
\[ m(b-a) = m \left( \sum_{i=1}^{n} \Delta x_i \right) = \sum_{i=1}^{n} m \,\Delta x_i \le \sum_{i=1}^{n} m_i \,\Delta x_i \le \sum_{i=1}^{n} M_i \,\Delta x_i \le \sum_{i=1}^{n} M \,\Delta x_i = M \left( \sum_{i=1}^{n} \Delta x_i \right) = M(b-a) . \]
Hence we get (5.1). In particular, the sets of lower and upper sums are bounded. □
Definition 5.1.3. As the sets of lower and upper Darboux sums are bounded, we define
\[ \underline{\int_a^b} f(x)\,dx := \sup\{\, L(P,f) : P \text{ a partition of } [a,b] \,\}, \qquad \overline{\int_a^b} f(x)\,dx := \inf\{\, U(P,f) : P \text{ a partition of } [a,b] \,\}. \]
We call \underline{\int} the lower Darboux integral and \overline{\int} the upper Darboux integral. To avoid worrying about the variable of integration, we often simply write
\[ \underline{\int_a^b} f := \underline{\int_a^b} f(x)\,dx \qquad \text{and} \qquad \overline{\int_a^b} f := \overline{\int_a^b} f(x)\,dx . \]
If integration is to make sense, then the lower and upper Darboux integrals should be the same number, as we want a single number to call the integral. However, these two integrals may differ for some functions.

Example 5.1.4: Take the Dirichlet function f : [0, 1] → ℝ, where f(x) := 1 if x ∈ ℚ and f(x) := 0 if x ∉ ℚ. Then
\[ \underline{\int_0^1} f = 0 \qquad \text{and} \qquad \overline{\int_0^1} f = 1 . \]
The reason is that for any partition P and every i, we have m_i = inf{ f(x) : x ∈ [x_{i−1}, x_i] } = 0 and M_i = sup{ f(x) : x ∈ [x_{i−1}, x_i] } = 1. Thus
\[ L(P, f) = \sum_{i=1}^{n} 0 \cdot \Delta x_i = 0, \qquad \text{and} \qquad U(P, f) = \sum_{i=1}^{n} 1 \cdot \Delta x_i = \sum_{i=1}^{n} \Delta x_i = 1 . \]

Remark 5.1.5. The same definition of \underline{\int_a^b} f and \overline{\int_a^b} f is used when f is defined on a larger set S such that [a, b] ⊂ S. In that case, we use the restriction of f to [a, b] and we must ensure that the restriction is bounded on [a, b].

To compute the integral, we often take a partition P and make it finer. That is, we cut intervals in the partition into yet smaller pieces.
Definition 5.1.6. Let P = {x_0, x_1, ..., x_n} and P̃ = {x̃_0, x̃_1, ..., x̃_ℓ} be partitions of [a, b]. We say P̃ is a refinement of P if as sets P ⊂ P̃.

That is, P̃ is a refinement of P if it contains all the numbers in P and perhaps some other numbers in between. For example, {0, 0.5, 1, 2} is a partition of [0, 2] and {0, 0.2, 0.5, 1, 1.5, 1.75, 2} is a refinement. The main reason for introducing refinements is the following proposition.

Proposition 5.1.7. Let f : [a, b] → ℝ be a bounded function, and let P be a partition of [a, b]. Let P̃ be a refinement of P. Then
\[ L(P, f) \le L(\tilde{P}, f) \qquad \text{and} \qquad U(\tilde{P}, f) \le U(P, f) . \]

Proof. The tricky part of this proof is to get the notation correct. Let P̃ = {x̃_0, x̃_1, ..., x̃_ℓ} be a refinement of P = {x_0, x_1, ..., x_n}. Then x_0 = x̃_0 and x_n = x̃_ℓ. In fact, there are integers k_0 < k_1 < · · · < k_n such that x_i = x̃_{k_i} for i = 0, 1, 2, ..., n. Let Δx̃_q := x̃_q − x̃_{q−1} for q = 1, 2, ..., ℓ. See Figure 5.2. We get
\[ \Delta x_i = x_i - x_{i-1} = \tilde{x}_{k_i} - \tilde{x}_{k_{i-1}} = \sum_{q=k_{i-1}+1}^{k_i} \bigl( \tilde{x}_q - \tilde{x}_{q-1} \bigr) = \sum_{q=k_{i-1}+1}^{k_i} \Delta \tilde{x}_q . \]
Figure 5.2: Refinement of a subinterval. Notice Δx_i = Δx̃_{q−2} + Δx̃_{q−1} + Δx̃_q, and also k_{i−1} + 1 = q − 2 and k_i = q.
Let m_i be as before and correspond to the partition P. Let m̃_q := inf{ f(x) : x̃_{q−1} ≤ x ≤ x̃_q }. Now, m_i ≤ m̃_q for k_{i−1} < q ≤ k_i. Therefore,
\[ m_i \,\Delta x_i = m_i \sum_{q=k_{i-1}+1}^{k_i} \Delta \tilde{x}_q = \sum_{q=k_{i-1}+1}^{k_i} m_i \,\Delta \tilde{x}_q \le \sum_{q=k_{i-1}+1}^{k_i} \tilde{m}_q \,\Delta \tilde{x}_q . \]
So
\[ L(P, f) = \sum_{i=1}^{n} m_i \,\Delta x_i \le \sum_{i=1}^{n} \sum_{q=k_{i-1}+1}^{k_i} \tilde{m}_q \,\Delta \tilde{x}_q = \sum_{q=1}^{\ell} \tilde{m}_q \,\Delta \tilde{x}_q = L(\tilde{P}, f) . \]
The proof of U(P̃, f) ≤ U(P, f) is left as an exercise. □
Armed with refinements we prove the following. The key point of this next proposition is that the lower Darboux integral is less than or equal to the upper Darboux integral.

Proposition 5.1.8. Let f : [a, b] → ℝ be a bounded function. Let m, M ∈ ℝ be such that for all x ∈ [a, b], we have m ≤ f(x) ≤ M. Then
\[ m(b-a) \le \underline{\int_a^b} f \le \overline{\int_a^b} f \le M(b-a) . \tag{5.2} \]

Proof. By Proposition 5.1.2, for every partition P,
\[ m(b-a) \le L(P,f) \le U(P,f) \le M(b-a) . \]
The inequality m(b−a) ≤ L(P, f) implies m(b−a) ≤ \underline{\int_a^b} f. The inequality U(P, f) ≤ M(b−a) implies \overline{\int_a^b} f ≤ M(b−a).
The middle inequality in (5.2) is the main point of the proposition. Let P_1, P_2 be partitions of [a, b]. Define P̃ := P_1 ∪ P_2. The set P̃ is a partition of [a, b], which is a refinement of P_1 and a refinement of P_2. By Proposition 5.1.7, L(P_1, f) ≤ L(P̃, f) and U(P̃, f) ≤ U(P_2, f). So
\[ L(P_1, f) \le L(\tilde{P}, f) \le U(\tilde{P}, f) \le U(P_2, f) . \]
In other words, for two arbitrary partitions P_1 and P_2, we have L(P_1, f) ≤ U(P_2, f). Recall Proposition 1.2.7, and take the supremum and infimum over all partitions:
\[ \underline{\int_a^b} f = \sup\{\, L(P,f) : P \text{ a partition of } [a,b] \,\} \le \inf\{\, U(P,f) : P \text{ a partition of } [a,b] \,\} = \overline{\int_a^b} f . \]
□

5.1.2 Riemann integral
We can finally define the Riemann integral. However, the Riemann integral is only defined on a certain class of functions, called the Riemann integrable functions.

Definition 5.1.9. Let f : [a, b] → ℝ be a bounded function such that
\[ \underline{\int_a^b} f(x)\,dx = \overline{\int_a^b} f(x)\,dx . \]
Then f is said to be Riemann integrable. The set of Riemann integrable functions on [a, b] is denoted by R[a, b]. When f ∈ R[a, b], we define
\[ \int_a^b f(x)\,dx := \underline{\int_a^b} f(x)\,dx = \overline{\int_a^b} f(x)\,dx . \]
As before, we often write
\[ \int_a^b f := \int_a^b f(x)\,dx . \]
The number \int_a^b f is called the Riemann integral of f, or sometimes simply the integral of f.
By definition, a Riemann integrable function is bounded. Appealing to Proposition 5.1.8, we immediately obtain the following proposition. See also Figure 5.3.

Proposition 5.1.10. Let f : [a, b] → ℝ be a Riemann integrable function. Let m, M ∈ ℝ be such that m ≤ f(x) ≤ M for all x ∈ [a, b]. Then
\[ m(b-a) \le \int_a^b f \le M(b-a) . \]
A weaker form of this proposition is often useful: If |f(x)| ≤ M for all x ∈ [a, b], then
\[ \left| \int_a^b f \right| \le M(b-a) . \]
Figure 5.3: The area under the curve is bounded from above by the area of the entire rectangle, M(b − a), and from below by the area of the shaded part, m(b − a).
Example 5.1.11: We integrate constant functions using Proposition 5.1.8. If f(x) := c for some constant c, then we take m = M = c. In inequality (5.2) all the inequalities must be equalities. Thus f is integrable on [a, b] and \int_a^b f = c(b − a).

Example 5.1.12: Let f : [0, 2] → ℝ be defined by
\[ f(x) := \begin{cases} 1 & \text{if } x < 1, \\ 1/2 & \text{if } x = 1, \\ 0 & \text{if } x > 1. \end{cases} \]
We claim f is Riemann integrable and \int_0^2 f = 1.
Proof: Let 0 < ε < 1 be arbitrary. Let P := {0, 1 − ε, 1 + ε, 2} be a partition. We use the notation from the definition of the Darboux sums. Then
\[ \begin{aligned} m_1 &= \inf\{\, f(x) : x \in [0, 1-\epsilon] \,\} = 1, & M_1 &= \sup\{\, f(x) : x \in [0, 1-\epsilon] \,\} = 1, \\ m_2 &= \inf\{\, f(x) : x \in [1-\epsilon, 1+\epsilon] \,\} = 0, & M_2 &= \sup\{\, f(x) : x \in [1-\epsilon, 1+\epsilon] \,\} = 1, \\ m_3 &= \inf\{\, f(x) : x \in [1+\epsilon, 2] \,\} = 0, & M_3 &= \sup\{\, f(x) : x \in [1+\epsilon, 2] \,\} = 0. \end{aligned} \]
Furthermore, Δx_1 = 1 − ε, Δx_2 = 2ε, and Δx_3 = 1 − ε. See Figure 5.4. We compute
\[ L(P, f) = \sum_{i=1}^{3} m_i \,\Delta x_i = 1 \cdot (1-\epsilon) + 0 \cdot 2\epsilon + 0 \cdot (1-\epsilon) = 1 - \epsilon , \]
\[ U(P, f) = \sum_{i=1}^{3} M_i \,\Delta x_i = 1 \cdot (1-\epsilon) + 1 \cdot 2\epsilon + 0 \cdot (1-\epsilon) = 1 + \epsilon . \]
Thus,
\[ \overline{\int_0^2} f - \underline{\int_0^2} f \le U(P, f) - L(P, f) = (1+\epsilon) - (1-\epsilon) = 2\epsilon . \]
Figure 5.4: Darboux sums for the step function. L(P, f) is the area of the shaded rectangle, U(P, f) is the area of both rectangles, and U(P, f) − L(P, f) is the area of the unshaded rectangle.
By Proposition 5.1.8, \underline{\int_0^2} f \le \overline{\int_0^2} f. As ε was arbitrary, \underline{\int_0^2} f = \overline{\int_0^2} f. So f is Riemann integrable. Finally,
\[ 1 - \epsilon = L(P, f) \le \int_0^2 f \le U(P, f) = 1 + \epsilon . \]
Hence, \bigl| \int_0^2 f - 1 \bigr| \le \epsilon. As ε was arbitrary, we conclude \int_0^2 f = 1.
It may be worthwhile to extract part of the technique of the example into a proposition. Note that U(P, f) − L(P, f) is exactly the total area of the white part of the rectangles in Figure 5.1.

Proposition 5.1.13. Let f : [a, b] → ℝ be a bounded function. Then f is Riemann integrable if for every ε > 0, there exists a partition P of [a, b] such that U(P, f) − L(P, f) < ε.
Proof. If for every ε > 0 such a P exists, then
\[ 0 \le \overline{\int_a^b} f - \underline{\int_a^b} f \le U(P, f) - L(P, f) < \epsilon . \]
Therefore, \overline{\int_a^b} f = \underline{\int_a^b} f, and f is integrable. □
Example 5.1.14: Let us show 1/(1+x) is integrable on [0, b] for all b > 0. We will see later that continuous functions are integrable, but let us demonstrate how we do it directly.
Let ε > 0 be given. Take n ∈ ℕ and let x_i := ib/n form the partition P := {x_0, x_1, ..., x_n} of [0, b]. Then Δx_i = b/n for all i. As f is decreasing, for every subinterval [x_{i−1}, x_i],
\[ m_i = \inf\left\{ \frac{1}{1+x} : x \in [x_{i-1}, x_i] \right\} = \frac{1}{1+x_i}, \qquad M_i = \sup\left\{ \frac{1}{1+x} : x \in [x_{i-1}, x_i] \right\} = \frac{1}{1+x_{i-1}} . \]
Then
\[ U(P,f) - L(P,f) = \sum_{i=1}^{n} \Delta x_i \,(M_i - m_i) = \frac{b}{n} \sum_{i=1}^{n} \left( \frac{1}{1+(i-1)b/n} - \frac{1}{1+ib/n} \right) = \frac{b}{n} \left( \frac{1}{1+0b/n} - \frac{1}{1+nb/n} \right) = \frac{b^2}{n(b+1)} . \]
The sum telescopes, the terms successively cancel each other, something we have seen before. Picking n to be such that \frac{b^2}{n(b+1)} < \epsilon, the proposition is satisfied, and the function is integrable.

Remark 5.1.15. A way of thinking of the integral is that it adds up (integrates) lots of local information—it sums f(x) dx over all x. The integral sign was chosen by Leibniz to be the long S to mean summation. Unlike derivatives, which are "local," integrals show up in applications when one wants a "global" answer: total distance travelled, average temperature, total charge, etc.
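The telescoping computation in Example 5.1.14 is easy to confirm numerically; in this Python sketch (the values of b and n are arbitrary choices of ours), the difference U(P, f) − L(P, f) for the decreasing function 1/(1 + x) is computed directly and compared with b^2/(n(b + 1)):

b, n = 2.0, 50
f = lambda x: 1.0 / (1.0 + x)
xs = [i * b / n for i in range(n + 1)]
# f is decreasing, so on [x_{i-1}, x_i] the supremum is f(x_{i-1}) and the infimum is f(x_i).
U = sum(f(xs[i - 1]) * (b / n) for i in range(1, n + 1))
L = sum(f(xs[i]) * (b / n) for i in range(1, n + 1))
print(U - L, b ** 2 / (n * (b + 1)))   # both print 0.02666..., up to rounding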
5.1.3 More notation
When f : S → ℝ is defined on a larger set S and [a, b] ⊂ S, we say f is Riemann integrable on [a, b] if the restriction of f to [a, b] is Riemann integrable. In this case, we say f ∈ R[a, b], and we write \int_a^b f to mean the Riemann integral of the restriction of f to [a, b].
It is useful to define the integral \int_a^b f even if a ≮ b. Suppose b < a and f ∈ R[b, a], then define
\[ \int_a^b f := - \int_b^a f . \]
For any function f, define
\[ \int_a^a f := 0 . \]
At times, the variable x may already have some other meaning. When we need to write down the variable of integration, we may simply use a different letter. For example,
\[ \int_a^b f(s)\,ds := \int_a^b f(x)\,dx . \]

5.1.4 Exercises
Exercise 5.1.1: Define f : [0, 1] → ℝ by f(x) := x^3 and let P := {0, 0.1, 0.4, 1}. Compute L(P, f) and U(P, f).

Exercise 5.1.2: Let f : [0, 1] → ℝ be defined by f(x) := x. Show that f ∈ R[0, 1] and compute \int_0^1 f using the definition of the integral (but feel free to use the propositions of this section).
Exercise 5.1.3: Let f : [a, b] → ℝ be a bounded function. Suppose there exists a sequence of partitions {P_k}_{k=1}^∞ of [a, b] such that
\[ \lim_{k\to\infty} \bigl( U(P_k, f) - L(P_k, f) \bigr) = 0 . \]
Show that f is Riemann integrable and that
\[ \int_a^b f = \lim_{k\to\infty} U(P_k, f) = \lim_{k\to\infty} L(P_k, f) . \]

Exercise 5.1.4: Finish the proof of Proposition 5.1.7.

Exercise 5.1.5: Suppose f : [−1, 1] → ℝ is defined as f(x) := 1 if x > 0, and f(x) := 0 if x ≤ 0. Prove that f ∈ R[−1, 1] and compute \int_{-1}^{1} f using the definition of the integral (but feel free to use the propositions of this section).

Exercise 5.1.6: Let c ∈ (a, b) and let d ∈ ℝ. Define f : [a, b] → ℝ as f(x) := d if x = c, and f(x) := 0 if x ≠ c. Prove that f ∈ R[a, b] and compute \int_a^b f using the definition of the integral (but feel free to use the propositions of this section).

Exercise 5.1.7: Suppose f : [a, b] → ℝ is Riemann integrable. Let ε > 0 be given. Then show that there exists a partition P = {x_0, x_1, ..., x_n} such that for every set of numbers {c_1, c_2, ..., c_n} with c_k ∈ [x_{k−1}, x_k] for all k, we have
\[ \left| \int_a^b f - \sum_{k=1}^{n} f(c_k)\,\Delta x_k \right| < \epsilon . \]

Exercise 5.1.8: Let f : [a, b] → ℝ be a Riemann integrable function. Let α > 0 and β ∈ ℝ. Then define g(x) := f(αx + β) on the interval I = [(a−β)/α, (b−β)/α]. Show that g is Riemann integrable on I.

Exercise 5.1.9: Suppose f : [0, 1] → ℝ and g : [0, 1] → ℝ are such that for all x ∈ (0, 1], we have f(x) = g(x). Suppose f is Riemann integrable. Prove g is Riemann integrable and \int_0^1 f = \int_0^1 g.

Exercise 5.1.10: Let f : [0, 1] → ℝ be a bounded function. Let P_n = {x_0, x_1, ..., x_n} be a uniform partition of [0, 1], that is, x_i = i/n. Is {L(P_n, f)}_{n=1}^∞ always monotone? Yes/No: Prove or find a counterexample.

Exercise 5.1.11 (Challenging): For a bounded function f : [0, 1] → ℝ, let R_n := (1/n) \sum_{i=1}^{n} f(i/n) (the uniform right-hand rule).
a) If f is Riemann integrable show \int_0^1 f = \lim_{n\to\infty} R_n.
b) Find an f that is not Riemann integrable, but \lim_{n\to\infty} R_n exists.
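For part a) of the last exercise, a small Python sketch (with the particular choice f(x) = x^2, which is ours) shows the right-hand sums R_n approaching the integral 1/3:

f = lambda x: x * x
for n in (10, 100, 1000):
    R_n = sum(f(i / n) for i in range(1, n + 1)) / n   # the uniform right-hand rule
    print(n, R_n)   # 0.385, 0.33835, 0.3338335: tending to 1/3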
Exercise 5.1.12 (Challenging): Generalize the previous exercise. Show that f ∈ R[a, b] if and only if there exists an I ∈ ℝ, such that for every ε > 0 there exists a δ > 0 such that if P is a partition with Δx_i < δ for all i, then |L(P, f) − I| < ε and |U(P, f) − I| < ε. If f ∈ R[a, b], then I = \int_a^b f.

Exercise 5.1.13: Using Exercise 5.1.12 and the idea of the proof in Exercise 5.1.7, show that the Darboux integral is the same as the standard definition of the Riemann integral, which you have most likely seen in calculus. That is, show that f ∈ R[a, b] if and only if there exists an I ∈ ℝ, such that for every ε > 0 there exists a δ > 0 such that if P = {x_0, x_1, ..., x_n} is a partition with Δx_i < δ for all i, then | \sum_{i=1}^{n} f(c_i)\,\Delta x_i − I | < ε for every set {c_1, c_2, ..., c_n} with c_i ∈ [x_{i−1}, x_i]. If f ∈ R[a, b], then I = \int_a^b f.
Exercise 5.1.14 (Challenging): Construct functions f and g, where f : [0, 1] → ℝ is Riemann integrable, g : [0, 1] → [0, 1] is one-to-one and onto, and such that the composition f ∘ g is not Riemann integrable.

Exercise 5.1.15: Suppose that f : [a, b] → ℝ is a bounded function, and P is a partition of [a, b] such that L(P, f) = U(P, f). Prove that f is a constant function.
5.2 Properties of the integral
Note: 2 lectures, integrability of functions with discontinuities can safely be skipped
5.2.1 Additivity
Adding a bunch of things in two parts and then adding those two parts should be the same as adding everything all at once. The corresponding property for integrals is called the additive property of the integral. First, we prove the additivity property for the lower and upper Darboux integrals.

Lemma 5.2.1. Suppose a < b < c and f : [a, c] → ℝ is a bounded function. Then
\[ \underline{\int_a^c} f = \underline{\int_a^b} f + \underline{\int_b^c} f \qquad \text{and} \qquad \overline{\int_a^c} f = \overline{\int_a^b} f + \overline{\int_b^c} f . \]
Proof. If we have partitions P_1 = {x_0, x_1, ..., x_k} of [a, b] and P_2 = {x_k, x_{k+1}, ..., x_n} of [b, c], then the set P := P_1 ∪ P_2 = {x_0, x_1, ..., x_n} is a partition of [a, c]. We find
\[ L(P, f) = \sum_{i=1}^{n} m_i \,\Delta x_i = \sum_{i=1}^{k} m_i \,\Delta x_i + \sum_{i=k+1}^{n} m_i \,\Delta x_i = L(P_1, f) + L(P_2, f) . \]
When we take the supremum of the right-hand side over all P_1 and P_2, we are taking a supremum of the left-hand side over all partitions P of [a, c] that contain b. If Q is a partition of [a, c] and P = Q ∪ {b}, then P is a refinement of Q and so L(Q, f) ≤ L(P, f). Therefore, taking a supremum only over the P that contain b is sufficient to find the supremum of L(P, f) over all partitions P, see Exercise 1.1.9. Finally, recall Exercise 1.2.9 to compute
\[ \begin{aligned} \underline{\int_a^c} f &= \sup\{\, L(P, f) : P \text{ a partition of } [a, c] \,\} \\ &= \sup\{\, L(P, f) : P \text{ a partition of } [a, c],\ b \in P \,\} \\ &= \sup\{\, L(P_1, f) + L(P_2, f) : P_1 \text{ a partition of } [a, b],\ P_2 \text{ a partition of } [b, c] \,\} \\ &= \sup\{\, L(P_1, f) : P_1 \text{ a partition of } [a, b] \,\} + \sup\{\, L(P_2, f) : P_2 \text{ a partition of } [b, c] \,\} \\ &= \underline{\int_a^b} f + \underline{\int_b^c} f . \end{aligned} \]
Similarly, for P, P_1, and P_2 as above, we obtain
\[ U(P, f) = \sum_{i=1}^{n} M_i \,\Delta x_i = \sum_{i=1}^{k} M_i \,\Delta x_i + \sum_{i=k+1}^{n} M_i \,\Delta x_i = U(P_1, f) + U(P_2, f) . \]
We wish to take the infimum on the right over all P_1 and P_2, and so we are taking the infimum over all partitions P of [a, c] that contain b. If Q is a partition of [a, c] and P = Q ∪ {b}, then P is a refinement of Q and so U(Q, f) ≥ U(P, f). Therefore, taking an infimum only over the P that contain b is sufficient to find the infimum of U(P, f) for all P. We obtain
\[ \overline{\int_a^c} f = \overline{\int_a^b} f + \overline{\int_b^c} f . \]
□
Proposition 5.2.2. Let a < b < c. A function f : [a, c] → ℝ is Riemann integrable if and only if f is Riemann integrable on [a, b] and [b, c]. If f is Riemann integrable, then
\[ \int_a^c f = \int_a^b f + \int_b^c f . \]

Proof. Suppose f ∈ R[a, c], then it is bounded and the lemma gives
\[ \int_a^c f = \underline{\int_a^c} f = \underline{\int_a^b} f + \underline{\int_b^c} f \le \overline{\int_a^b} f + \overline{\int_b^c} f = \overline{\int_a^c} f = \int_a^c f . \]
Thus the inequality is an equality,
\[ \underline{\int_a^b} f + \underline{\int_b^c} f = \overline{\int_a^b} f + \overline{\int_b^c} f . \]
As we also know \underline{\int_a^b} f \le \overline{\int_a^b} f and \underline{\int_b^c} f \le \overline{\int_b^c} f, we conclude
\[ \underline{\int_a^b} f = \overline{\int_a^b} f \qquad \text{and} \qquad \underline{\int_b^c} f = \overline{\int_b^c} f . \]
Thus f is Riemann integrable on [a, b] and [b, c] and the desired formula holds.
Now assume f is Riemann integrable on [a, b] and on [b, c]. Again it is bounded, and the lemma gives
\[ \underline{\int_a^c} f = \underline{\int_a^b} f + \underline{\int_b^c} f = \int_a^b f + \int_b^c f = \overline{\int_a^b} f + \overline{\int_b^c} f = \overline{\int_a^c} f . \]
Therefore, f is Riemann integrable on [a, c], and the integral is computed as indicated. □
An easy consequence of the additivity is the following corollary. We leave the details to the reader as an exercise.

Corollary 5.2.3. If f ∈ R[a, b] and [c, d] ⊂ [a, b], then the restriction f|_{[c,d]} is in R[c, d].
5.2.2 Linearity and monotonicity
A sum is a linear function of the summands. So is the integral.

Proposition 5.2.4 (Linearity). Let f and g be in R[a, b] and α ∈ ℝ.
(i) αf is in R[a, b] and
\[ \int_a^b \alpha f(x)\,dx = \alpha \int_a^b f(x)\,dx . \]
(ii) f + g is in R[a, b] and
\[ \int_a^b \bigl( f(x) + g(x) \bigr)\,dx = \int_a^b f(x)\,dx + \int_a^b g(x)\,dx . \]

Proof. Let us prove the first item for α ≥ 0. Let P be a partition of [a, b], and m_i := inf{ f(x) : x ∈ [x_{i−1}, x_i] } as usual. As α ≥ 0, the multiplication by α moves past the infimum,
\[ \inf\{\, \alpha f(x) : x \in [x_{i-1}, x_i] \,\} = \alpha \inf\{\, f(x) : x \in [x_{i-1}, x_i] \,\} = \alpha m_i . \]
Therefore,
\[ L(P, \alpha f) = \sum_{i=1}^{n} \alpha m_i \,\Delta x_i = \alpha \sum_{i=1}^{n} m_i \,\Delta x_i = \alpha L(P, f) . \]
Similarly,
\[ U(P, \alpha f) = \alpha U(P, f) . \]
Again, as α ≥ 0, we may move multiplication by α past the supremum. Hence,
\[ \begin{aligned} \underline{\int_a^b} \alpha f(x)\,dx &= \sup\{\, L(P, \alpha f) : P \text{ a partition of } [a, b] \,\} \\ &= \sup\{\, \alpha L(P, f) : P \text{ a partition of } [a, b] \,\} \\ &= \alpha \sup\{\, L(P, f) : P \text{ a partition of } [a, b] \,\} \\ &= \alpha \underline{\int_a^b} f(x)\,dx . \end{aligned} \]
Similarly, we show
\[ \overline{\int_a^b} \alpha f(x)\,dx = \alpha \overline{\int_a^b} f(x)\,dx . \]
The conclusion now follows for α ≥ 0.
To finish the proof of the first item (for α < 0), we need to show that −f is Riemann integrable and \int_a^b (-f(x))\,dx = - \int_a^b f(x)\,dx. The proof of this fact is left as Exercise 5.2.1.
The proof of the second item is left as Exercise 5.2.2. It is not difficult, but it is not as trivial as it may appear at first glance. □
The second item in the proposition does not hold with equality for the Darboux integrals, but we do obtain inequalities. The proof of the following proposition is Exercise 5.2.16. It follows for upper and lower sums on a fixed partition by Exercise 1.3.7, that is, supremum of a sum is less than or equal to the sum of suprema and similarly for infima.

Proposition 5.2.5. Let f : [a, b] → ℝ and g : [a, b] → ℝ be bounded functions. Then
\[ \overline{\int_a^b} (f + g) \le \overline{\int_a^b} f + \overline{\int_a^b} g \qquad \text{and} \qquad \underline{\int_a^b} (f + g) \ge \underline{\int_a^b} f + \underline{\int_a^b} g . \]
Adding up smaller numbers should give us a smaller result. That is true for an integral as well.

Proposition 5.2.6 (Monotonicity). Let f : [a, b] → ℝ and g : [a, b] → ℝ be bounded, and f(x) ≤ g(x) for all x ∈ [a, b]. Then
\[ \underline{\int_a^b} f \le \underline{\int_a^b} g \qquad \text{and} \qquad \overline{\int_a^b} f \le \overline{\int_a^b} g . \]
Moreover, if f and g are in R[a, b], then
\[ \int_a^b f \le \int_a^b g . \]

Proof. Let P = {x_0, x_1, ..., x_n} be a partition of [a, b]. Then let
\[ m_i := \inf\{\, f(x) : x \in [x_{i-1}, x_i] \,\} \qquad \text{and} \qquad \tilde{m}_i := \inf\{\, g(x) : x \in [x_{i-1}, x_i] \,\} . \]
As f(x) ≤ g(x), then m_i ≤ m̃_i. Therefore,
\[ L(P, f) = \sum_{i=1}^{n} m_i \,\Delta x_i \le \sum_{i=1}^{n} \tilde{m}_i \,\Delta x_i = L(P, g) . \]
We take the supremum over all P (see Proposition 1.3.7) to obtain
\[ \underline{\int_a^b} f \le \underline{\int_a^b} g . \]
Similarly, we obtain the same conclusion for the upper integrals. Finally, if f and g are Riemann integrable all the integrals are equal, and the conclusion follows. □
5.2.3 Continuous functions
Let us show that continuous functions are Riemann integrable. We can even allow some discontinuities. We start with a function continuous on the whole closed interval [𝑎, 𝑏].
Lemma 5.2.7. If f : [a, b] → ℝ is a continuous function, then f ∈ R[a, b].

Proof. As f is continuous on a closed bounded interval, it is bounded and uniformly continuous. Given ε > 0, find a δ > 0 such that |x − y| < δ implies |f(x) − f(y)| < ε/(b−a). Let P = {x_0, x_1, ..., x_n} be a partition of [a, b] such that Δx_i < δ for all i = 1, 2, ..., n. For example, take n such that (b−a)/n < δ, and let x_i := (i/n)(b − a) + a. Then for all x, y ∈ [x_{i−1}, x_i], we have |x − y| ≤ Δx_i < δ, and so
\[ f(x) - f(y) \le |f(x) - f(y)| < \frac{\epsilon}{b-a} . \]
Taking the supremum over such x and y gives M_i − m_i ≤ ε/(b−a), and hence
\[ U(P, f) - L(P, f) = \sum_{i=1}^{n} (M_i - m_i)\,\Delta x_i \le \frac{\epsilon}{b-a} \sum_{i=1}^{n} \Delta x_i = \epsilon . \]
As ε > 0 was arbitrary, 0 ≤ \overline{\int_a^b} f − \underline{\int_a^b} f ≤ ε for every ε > 0, so f is Riemann integrable. □

The next lemma lets us handle bad behavior at the endpoints of the interval.

Lemma 5.2.8. Let f : [a, b] → ℝ be a bounded function. Let {a_n} and {b_n} be sequences with a < a_n < b_n < b for all n, lim a_n = a, and lim b_n = b. Suppose f ∈ R[a_n, b_n] for all n. Then f ∈ R[a, b] and
\[ \int_a^b f = \lim_{n\to\infty} \int_{a_n}^{b_n} f . \]

Proof. Let M > 0 be a real number such that |f(x)| ≤ M. As (b − a) ≥ (b_n − a_n),
\[ -M(b-a) \le -M(b_n - a_n) \le \int_{a_n}^{b_n} f \le M(b_n - a_n) \le M(b-a) . \]
Therefore, the sequence of numbers \{ \int_{a_n}^{b_n} f \}_{n=1}^{\infty} is bounded and by Bolzano–Weierstrass has a convergent subsequence indexed by n_k. Let us call L the limit of the subsequence \{ \int_{a_{n_k}}^{b_{n_k}} f \}_{k=1}^{\infty}.
Lemma 5.2.1 says that the lower and upper integral are additive and the hypothesis says that f is integrable on [a_{n_k}, b_{n_k}]. Therefore
\[ \underline{\int_a^b} f = \underline{\int_a^{a_{n_k}}} f + \int_{a_{n_k}}^{b_{n_k}} f + \underline{\int_{b_{n_k}}^{b}} f \ge -M(a_{n_k} - a) + \int_{a_{n_k}}^{b_{n_k}} f - M(b - b_{n_k}) . \]
We take the limit as k goes to ∞ on the right-hand side,
\[ \underline{\int_a^b} f \ge -M \cdot 0 + L - M \cdot 0 = L . \]
Next we use additivity of the upper integral,
\[ \overline{\int_a^b} f = \overline{\int_a^{a_{n_k}}} f + \int_{a_{n_k}}^{b_{n_k}} f + \overline{\int_{b_{n_k}}^{b}} f \le M(a_{n_k} - a) + \int_{a_{n_k}}^{b_{n_k}} f + M(b - b_{n_k}) . \]
We take the same subsequence \{ \int_{a_{n_k}}^{b_{n_k}} f \}_{k=1}^{\infty} and take the limit to obtain
\[ \overline{\int_a^b} f \le M \cdot 0 + L + M \cdot 0 = L . \]
Thus \overline{\int_a^b} f = \underline{\int_a^b} f = L and hence f is Riemann integrable and \int_a^b f = L. In particular, no matter what subsequence we chose, the L is the same number.
To prove the final statement of the lemma we use Proposition 2.3.7. We have shown that every convergent subsequence \{ \int_{a_{n_k}}^{b_{n_k}} f \}_{k=1}^{\infty} converges to L = \int_a^b f. Therefore, the sequence \{ \int_{a_n}^{b_n} f \}_{n=1}^{\infty} is convergent and converges to \int_a^b f. □

We say a function f : [a, b] → ℝ has finitely many discontinuities if there exists a finite set S = {x_1, x_2, ..., x_n} ⊂ [a, b], and f is continuous at all points of [a, b] \ S.

Theorem 5.2.9. Let f : [a, b] → ℝ be a bounded function with finitely many discontinuities. Then f ∈ R[a, b].

Proof. We divide the interval into finitely many intervals [a_i, b_i] so that f is continuous on the interior (a_i, b_i). If f is continuous on (a_i, b_i), then it is continuous and hence integrable on [c_i, d_i] whenever a_i < c_i < d_i < b_i. By Lemma 5.2.8, the restriction of f to [a_i, b_i] is integrable. By additivity of the integral (and induction), f is integrable on the union of the intervals. □
5.2.4 More on integrable functions
Sometimes it is convenient (or necessary) to change certain values of a function and then integrate. The next result says that if we change the values at finitely many points, the integral does not change.

Proposition 5.2.10. Let f : [a, b] → ℝ be Riemann integrable. Let g : [a, b] → ℝ be such that f(x) = g(x) for all x ∈ [a, b] \ S, where S is a finite set. Then g is Riemann integrable and
\[ \int_a^b g = \int_a^b f . \]

Sketch of proof. Using additivity of the integral, split the interval [a, b] into smaller intervals such that f(x) = g(x) holds for all x except at the endpoints (details are left to the reader). Therefore, without loss of generality suppose f(x) = g(x) for all x ∈ (a, b). The proof follows by Lemma 5.2.8, and is left as Exercise 5.2.3. □

Finally, monotone (increasing or decreasing) functions are always Riemann integrable. The proof is left to the reader as part of Exercise 5.2.14.

Proposition 5.2.11. Let f : [a, b] → ℝ be a monotone function. Then f ∈ R[a, b].
5.2.5 Exercises
Exercise 5.2.1: Finish the proof of the first part of Proposition 5.2.4. Let f be in R[a, b]. Prove that −f is in R[a, b] and
\[ \int_a^b \bigl( -f(x) \bigr)\,dx = - \int_a^b f(x)\,dx . \]

Exercise 5.2.2: Prove the second part of Proposition 5.2.4. Let f and g be in R[a, b]. Prove, without using Proposition 5.2.5, that f + g is in R[a, b] and
\[ \int_a^b \bigl( f(x) + g(x) \bigr)\,dx = \int_a^b f(x)\,dx + \int_a^b g(x)\,dx . \]
Hint: One way to do it is to use Proposition 5.1.7 to find a single partition P such that U(P, f) − L(P, f) < ε/2 and U(P, g) − L(P, g) < ε/2.

Exercise 5.2.3: Let f : [a, b] → ℝ be Riemann integrable, and g : [a, b] → ℝ be such that f(x) = g(x) for all x ∈ (a, b). Prove that g is Riemann integrable and that
\[ \int_a^b g = \int_a^b f . \]

Exercise 5.2.4: Prove the mean value theorem for integrals: If f : [a, b] → ℝ is continuous, then there exists a c ∈ [a, b] such that \int_a^b f = f(c)(b − a).

Exercise 5.2.5: Let f : [a, b] → ℝ be a continuous function such that f(x) ≥ 0 for all x ∈ [a, b] and \int_a^b f = 0. Prove that f(x) = 0 for all x.
Exercise 5.2.6: Let f : [a, b] → ℝ be a continuous function such that \int_a^b f = 0. Prove that there exists a c ∈ [a, b] such that f(c) = 0. (Compare with the previous exercise.)

Exercise 5.2.7: Let f : [a, b] → ℝ and g : [a, b] → ℝ be continuous functions such that \int_a^b f = \int_a^b g. Show that there exists a c ∈ [a, b] such that f(c) = g(c).

Exercise 5.2.8: Let f ∈ R[a, b]. Let α, β, γ be arbitrary numbers in [a, b] (not necessarily ordered in any way). Prove
\[ \int_\alpha^\gamma f = \int_\alpha^\beta f + \int_\beta^\gamma f . \]
Recall what \int_a^b f means if b ≤ a.

Exercise 5.2.9: Prove Corollary 5.2.3.

Exercise 5.2.10: Suppose f : [a, b] → ℝ is bounded and has finitely many discontinuities. Show that as a function of x the expression |f(x)| is bounded with finitely many discontinuities and is thus Riemann integrable. Then show
\[ \left| \int_a^b f(x)\,dx \right| \le \int_a^b |f(x)|\,dx . \]

Exercise 5.2.11 (Hard): Show that the Thomae or popcorn function (see Example 3.2.12) is Riemann integrable. Therefore, there exists a function discontinuous at all rational numbers (a dense set) that is Riemann integrable. That is, define f : [0, 1] → ℝ by
\[ f(x) := \begin{cases} 1/k & \text{if } x = m/k \text{ where } m, k \in \mathbb{N} \text{ and } m \text{ and } k \text{ have no common divisors}, \\ 0 & \text{if } x \text{ is irrational}. \end{cases} \]
Show \int_0^1 f = 0.

If I ⊂ ℝ is a bounded interval, then the function
\[ \varphi_I(x) := \begin{cases} 1 & \text{if } x \in I, \\ 0 & \text{otherwise}, \end{cases} \]
is called an elementary step function.

Exercise 5.2.12: Let I be an arbitrary bounded interval (you should consider all types of intervals: closed, open, half-open) and a < b, then using only the definition of the integral show that the elementary step function φ_I is integrable on [a, b], and find the integral in terms of a, b, and the endpoints of I.

A function f is called a step function if it can be written as
\[ f(x) = \sum_{k=1}^{n} \alpha_k \,\varphi_{I_k}(x) \]
for some real numbers α_1, α_2, ..., α_n and some bounded intervals I_1, I_2, ..., I_n.

Exercise 5.2.13: Using Exercise 5.2.12, show that a step function (see above) is integrable on every interval [a, b]. Furthermore, find the integral in terms of a, b, the endpoints of I_k and the α_k.

Exercise 5.2.14: Let f : [a, b] → ℝ be a function.
a) Show that if f is increasing, then it is Riemann integrable. Hint: Use a uniform partition; each subinterval of same length.
b) Use part a) to show that if f is decreasing, then it is Riemann integrable.
c) Suppose h = f − g where f and g are increasing functions on [a, b]. Show that h is Riemann integrable. (Such an h is said to be of bounded variation.)

Exercise 5.2.15 (Challenging): Suppose f ∈ R[a, b], then the function that takes x to |f(x)| is also Riemann integrable on [a, b]. Then show the same inequality as Exercise 5.2.10.

Exercise 5.2.16: Suppose f : [a, b] → ℝ and g : [a, b] → ℝ are bounded.
a) Show \overline{\int_a^b} (f + g) \le \overline{\int_a^b} f + \overline{\int_a^b} g and \underline{\int_a^b} (f + g) \ge \underline{\int_a^b} f + \underline{\int_a^b} g.
b) Find example f and g where the inequality is strict. Hint: f and g should not be Riemann integrable.

Exercise 5.2.17: Suppose f : [a, b] → ℝ is continuous and g : ℝ → ℝ is Lipschitz continuous. Define
\[ h(x) := \int_a^b g(t - x)\,f(t)\,dt . \]
Prove that h is Lipschitz continuous.

Exercise 5.2.18: Prove a version of the so-called Riemann–Lebesgue Lemma (one of several such named): Suppose f : [a, b] → ℝ is continuous and define the sequence {x_n}_{n=1}^∞ by
\[ x_n := \int_a^b f(t) \sin(nt)\,dt . \]
Prove that \lim_{n\to\infty} x_n = 0.
5.3 Fundamental theorem of calculus

Note: 1.5 lectures

In this chapter we discuss and prove the fundamental theorem of calculus. The entirety of integral calculus is built upon this theorem, ergo the name. The theorem relates the seemingly unrelated concepts of integral and derivative. It tells us how to compute the antiderivative of a function using the integral and vice versa.
5.3.1 First form of the theorem

Theorem 5.3.1. Let F : [a, b] → ℝ be a continuous function, differentiable on (a, b). Let f ∈ R[a, b] be such that f(x) = F'(x) for x ∈ (a, b). Then
\[ \int_a^b f = F(b) - F(a) . \]
It is not hard to generalize the theorem to allow a finite number of points in [a, b] where F is not differentiable, as long as it is continuous. This generalization is left as an exercise.

Proof. Let P = {x_0, x_1, ..., x_n} be a partition of [a, b]. For each interval [x_{i−1}, x_i], use the mean value theorem to find a c_i ∈ (x_{i−1}, x_i) such that
\[ f(c_i)\,\Delta x_i = F'(c_i)(x_i - x_{i-1}) = F(x_i) - F(x_{i-1}) . \]
See Figure 5.5, and notice that the area of all three shaded rectangles is F(x_{i+1}) − F(x_{i−2}). The idea is that by taking smaller and smaller subintervals we prove that this area is the integral of f.
Figure 5.5: Mean value theorem on subintervals of a partition approximating the area under the curve y = f(x) = F'(x). The area of each shaded rectangle is f(c_i)Δx_i = F(x_i) − F(x_{i−1}).
Using the notation from the definition of the integral, we have m_i ≤ f(c_i) ≤ M_i, and so
\[ m_i \,\Delta x_i \le F(x_i) - F(x_{i-1}) \le M_i \,\Delta x_i . \]
We sum over i = 1, 2, ..., n to get
\[ \sum_{i=1}^{n} m_i \,\Delta x_i \le \sum_{i=1}^{n} \bigl( F(x_i) - F(x_{i-1}) \bigr) \le \sum_{i=1}^{n} M_i \,\Delta x_i . \]
In the middle sum, all the terms except the first and last cancel and we end up with F(x_n) − F(x_0) = F(b) − F(a). The sums on the left and on the right are the lower and the upper sum respectively. So
\[ L(P, f) \le F(b) - F(a) \le U(P, f) . \]
We take the supremum of L(P, f) over all partitions P and the left inequality yields
\[ \underline{\int_a^b} f \le F(b) - F(a) . \]
Similarly, taking the infimum of U(P, f) over all partitions P yields
\[ F(b) - F(a) \le \overline{\int_a^b} f . \]
As f is Riemann integrable, we have
\[ \int_a^b f = \underline{\int_a^b} f \le F(b) - F(a) \le \overline{\int_a^b} f = \int_a^b f . \]
The inequalities must be equalities and we are done. □
The theorem is used to compute integrals. Suppose we know that the function f(x) is a derivative of some other function F(x), then we can find an explicit expression for \int_a^b f.

Example 5.3.2: To compute
\[ \int_0^1 x^2\,dx , \]
we notice x^2 is the derivative of x^3/3. The fundamental theorem says
\[ \int_0^1 x^2\,dx = \frac{1^3}{3} - \frac{0^3}{3} = \frac{1}{3} . \]
5.3.2
CHAPTER 5. THE RIEMANN INTEGRAL
Second form of the theorem
The second form of the fundamental theorem gives us a way to solve the differential equation 𝐹′(𝑥) = 𝑓 (𝑥), where 𝑓 is a known function and we are trying to find an 𝐹 that satisfies the equation. Theorem 5.3.3. Let 𝑓 : [𝑎, 𝑏] → ℝ be a Riemann integrable function. Define 𝑥
∫
𝐹(𝑥) B
𝑓.
𝑎
First, 𝐹 is continuous on [𝑎, 𝑏]. Second, if 𝑓 is continuous at 𝑐 ∈ [𝑎, 𝑏], then 𝐹 is differentiable at 𝑐 and 𝐹′(𝑐) = 𝑓 (𝑐). Proof. As 𝑓 is bounded, there is an 𝑀 > 0 such that | 𝑓 (𝑥)| ≤ 𝑀 for all 𝑥 ∈ [𝑎, 𝑏]. Suppose 𝑥, 𝑦 ∈ [𝑎, 𝑏] with 𝑥 > 𝑦. Then
∫ |𝐹(𝑥) − 𝐹(𝑦)| =
𝑎
𝑥
𝑓 −
∫ 𝑥 𝑓 = 𝑦
𝑦
∫ 𝑎
𝑓 ≤ 𝑀 |𝑥 − 𝑦| .
By symmetry, the same also holds if 𝑥 < 𝑦. So 𝐹 is Lipschitz continuous and hence continuous. Now suppose 𝑓 is continuous at 𝑐. Let 𝜖 > 0 be given. Let 𝛿 > 0 be such that for 𝑥 ∈ [𝑎, 𝑏], |𝑥 − 𝑐| < 𝛿 implies | 𝑓 (𝑥) − 𝑓 (𝑐)| < 𝜖. In particular, for such 𝑥, we have 𝑓 (𝑐) − 𝜖 < 𝑓 (𝑥) < 𝑓 (𝑐) + 𝜖. Thus if 𝑥 > 𝑐, then 𝑓 (𝑐) − 𝜖 (𝑥 − 𝑐) ≤
𝑥
∫
𝑓 ≤ 𝑓 (𝑐) + 𝜖 (𝑥 − 𝑐).
𝑐
When 𝑐 > 𝑥, then the inequalities are reversed. Therefore, assuming 𝑥 ≠ 𝑐, we get 𝑥
∫ 𝑓 (𝑐) − 𝜖 ≤ As
we have
𝐹(𝑥) − 𝐹(𝑐) = 𝑥−𝑐
𝑐
𝑓
𝑥−𝑐
∫ 𝑎
𝑥
≤ 𝑓 (𝑐) + 𝜖.
𝑓 −
∫ 𝑎
𝑥−𝑐
𝑐
𝑓
∫ =
𝑐
𝑥
𝑓
𝑥−𝑐
𝐹(𝑥) − 𝐹(𝑐) ≤ 𝜖. − 𝑓 (𝑐) 𝑥−𝑐
,
The result follows. It is left to the reader to see why is it OK that we just have a non-strict inequality. □
Of course, if 𝑓 is continuous on [𝑎, 𝑏], then it is automatically Riemann integrable, 𝐹 is differentiable on all of [𝑎, 𝑏] and 𝐹′(𝑥) = 𝑓 (𝑥) for all 𝑥 ∈ [𝑎, 𝑏].
Remark 5.3.4. The second form of the fundamental theorem of calculus still holds if we let d ∈ [a, b] and define
\[ F(x) := \int_d^x f . \]
That is, we can use any point of [a, b] as our base point. The proof is left as an exercise.
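As a hedged numerical sketch of the second form of the theorem (the integrand, base point, and step sizes below are our own choices), approximate F(x) = \int_0^x f by Riemann sums and check that a difference quotient of F recovers f(c) at a point of continuity:

import math

f = math.cos

def F(x, n=20000):
    # crude right-endpoint Riemann-sum approximation of the integral of f from 0 to x
    return sum(f(x * i / n) for i in range(1, n + 1)) * x / n

c, h = 0.7, 1e-3
print((F(c + h) - F(c - h)) / (2 * h), f(c))   # the difference quotient is close to cos(0.7)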
Let us look at what a simple discontinuity can do. Take f(x) := −1 if x < 0, and f(x) := 1 if x ≥ 0. Let F(x) := \int_0^x f. It is not difficult to see that F(x) = |x|. Notice that f is discontinuous at 0 and F is not differentiable at 0. However, the converse in the theorem does not hold. Let g(x) := 0 if x ≠ 0, and g(0) := 1. Letting G(x) := \int_0^x g, we find that G(x) = 0 for all x. So g is discontinuous at 0, but G'(0) exists and is equal to 0.
A common misunderstanding of the integral for calculus students is to think of integrals whose solution cannot be given in closed-form as somehow deficient. This is not the case. Most integrals we write down are not computable in closed-form. Even some integrals that we consider in closed-form are not really such. We define the natural logarithm as the antiderivative of 1/x such that ln 1 = 0:
\[ \ln x := \int_1^x \frac{1}{s}\,ds . \]
How does a computer find the value of ln x? One way to do it is to numerically approximate this integral. Morally, we did not really "simplify" \int_1^x \frac{1}{s}\,ds by writing down ln x. We simply gave the integral a name. If we require numerical answers, it is possible we end up doing the calculation by approximating an integral anyway. In the next section, we even define the exponential using the logarithm, which we define in terms of the integral.
Another common function defined by an integral that cannot be evaluated symbolically in terms of elementary functions is the erf function, defined as
\[ \operatorname{erf}(x) := \frac{2}{\sqrt{\pi}} \int_0^x e^{-s^2}\,ds . \]
This function comes up often in applied mathematics. It is simply the antiderivative of (2/√π) e^{−x^2} that is zero at zero. The second form of the fundamental theorem tells us that we can write the function as an integral. If we wish to compute any particular value, we numerically approximate the integral.
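For instance, a few lines of Python (a midpoint-rule sketch; the number of subintervals is arbitrary) approximate erf(1) from the defining integral and compare the result with the library value:

import math

def erf_approx(x, n=1000):
    # midpoint rule for (2/sqrt(pi)) * integral of exp(-s^2) from 0 to x
    h = x / n
    return 2.0 / math.sqrt(math.pi) * h * sum(math.exp(-((i + 0.5) * h) ** 2) for i in range(n))

print(erf_approx(1.0), math.erf(1.0))   # both are about 0.8427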
5.3.3 Change of variables

A theorem often used in calculus to solve integrals is the change of variables theorem; you may have called it u-substitution. Let us prove it now. Recall a function is continuously differentiable if it is differentiable and the derivative is continuous.
Theorem 5.3.5 (Change of variables). Let g : [a, b] → ℝ be a continuously differentiable function, let f : [c, d] → ℝ be continuous, and suppose g([a, b]) ⊂ [c, d]. Then
\[ \int_a^b f\bigl(g(x)\bigr)\,g'(x)\,dx = \int_{g(a)}^{g(b)} f(s)\,ds . \]

Proof. As g, g', and f are continuous, f(g(x)) g'(x) is a continuous function on [a, b], therefore it is Riemann integrable. Similarly, f is integrable on every subinterval of [c, d]. Define F : [c, d] → ℝ by
\[ F(y) := \int_{g(a)}^{y} f(s)\,ds . \]
By the second form of the fundamental theorem of calculus (see Remark 5.3.4 and Exercise 5.3.4), F is a differentiable function and F'(y) = f(y). Apply the chain rule,
\[ (F \circ g)'(x) = F'\bigl(g(x)\bigr)\,g'(x) = f\bigl(g(x)\bigr)\,g'(x) . \]
Note that F(g(a)) = 0 and use the first form of the fundamental theorem to obtain
\[ \int_{g(a)}^{g(b)} f(s)\,ds = F\bigl(g(b)\bigr) = F\bigl(g(b)\bigr) - F\bigl(g(a)\bigr) = \int_a^b (F \circ g)'(x)\,dx = \int_a^b f\bigl(g(x)\bigr)\,g'(x)\,dx . \]
□
The change of variables theorem is often used to solve integrals by changing them to integrals that we know or that we can solve using the fundamental theorem of calculus.

Example 5.3.6: The derivative of sin(x) is cos(x). Using g(x) := x^2, we solve
\[ \int_0^{\sqrt{\pi}} x \cos(x^2)\,dx = \int_0^{\pi} \frac{\cos(s)}{2}\,ds = \frac{1}{2} \int_0^{\pi} \cos(s)\,ds = \frac{\sin(\pi) - \sin(0)}{2} = 0 . \]
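Both sides of the substitution can be checked numerically; this Python sketch (midpoint rule, with an arbitrary number of subintervals of our choosing) evaluates the two integrals of Example 5.3.6:

import math

def midpoint(f, a, b, n=10000):
    # midpoint-rule approximation of the integral of f over [a, b]
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

lhs = midpoint(lambda x: x * math.cos(x * x), 0.0, math.sqrt(math.pi))   # integral of f(g(x)) g'(x)
rhs = midpoint(lambda s: 0.5 * math.cos(s), 0.0, math.pi)                # integral of f(s) = cos(s)/2
print(lhs, rhs)   # both are approximately 0, as computed above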
However, beware that we must satisfy the hypotheses of the theorem. The following example demonstrates why we should not just move symbols around mindlessly. We must be careful that those symbols really make sense.

Example 5.3.7: Consider
\[ \int_{-1}^{1} \frac{\ln|x|}{x}\,dx . \]
It may be tempting to take g(x) := ln|x|. Compute g'(x) = 1/x and try to write
\[ \int_{g(-1)}^{g(1)} s\,ds = \int_0^0 s\,ds = 0 . \]
This "solution" is incorrect, and it does not say that we can solve the given integral. The first problem is that \frac{\ln|x|}{x} is not continuous on [−1, 1]. It is not defined at 0, and cannot be made continuous by defining a value at 0. Second, \frac{\ln|x|}{x} is not even Riemann integrable on [−1, 1] (it is unbounded). The integral we wrote down simply does not make sense. Finally, g is not continuous on [−1, 1], let alone continuously differentiable.
5.3.4 Exercises
Exercise 5.3.1: Compute \frac{d}{dx} \left( \int_{-x}^{x} e^{s^2}\,ds \right).

Exercise 5.3.2: Compute \frac{d}{dx} \left( \int_{0}^{x^2} \sin(s^2)\,ds \right).

Exercise 5.3.3: Suppose F : [a, b] → ℝ is continuous and differentiable on [a, b] \ S, where S is a finite set. Suppose there exists an f ∈ R[a, b] such that f(x) = F'(x) for x ∈ [a, b] \ S. Show that \int_a^b f = F(b) − F(a).

Exercise 5.3.4: Let f : [a, b] → ℝ be a continuous function. Let c ∈ [a, b] be arbitrary. Define
\[ F(x) := \int_c^x f . \]
Prove that F is differentiable and that F'(x) = f(x) for all x ∈ [a, b].

Exercise 5.3.5: Prove integration by parts. That is, suppose F and G are continuously differentiable functions on [a, b]. Then prove
\[ \int_a^b F(x)\,G'(x)\,dx = F(b)G(b) - F(a)G(a) - \int_a^b F'(x)\,G(x)\,dx . \]

Exercise 5.3.6: Suppose F and G are continuously differentiable functions defined on [a, b] such that F'(x) = G'(x) for all x ∈ [a, b]. Using the fundamental theorem of calculus, show that F and G differ by a constant. That is, show that there exists a C ∈ ℝ such that F(x) − G(x) = C. (Compare this hypothesis to Exercise 4.2.8.)

The next exercise shows how we can use the integral to "smooth out" a non-differentiable function.

Exercise 5.3.7: Let f : [a, b] → ℝ be a continuous function. Let ε > 0 be a constant so that a + ε < b − ε. For x ∈ [a + ε, b − ε], define
\[ g(x) := \frac{1}{2\epsilon} \int_{x-\epsilon}^{x+\epsilon} f . \]
a) Show that g is differentiable and find the derivative.
b) Let f be differentiable and fix x ∈ (a, b) (let ε be small enough). What happens to g'(x) as ε gets smaller?
c) Find g for f(x) := |x|, ε = 1 (you can assume [a, b] is large enough).

Exercise 5.3.8: Suppose f : [a, b] → ℝ is continuous and \int_a^x f = \int_x^b f for all x ∈ [a, b]. Show that f(x) = 0 for all x ∈ [a, b].

Exercise 5.3.9: Suppose f : [a, b] → ℝ is continuous and \int_a^x f = 0 for all rational x in [a, b]. Show that f(x) = 0 for all x ∈ [a, b].

Exercise 5.3.10: A function f is an odd function if f(x) = −f(−x), and f is an even function if f(x) = f(−x). Let a > 0. Assume f is continuous. Prove:
a) If f is odd, then \int_{-a}^{a} f = 0.
b) If f is even, then \int_{-a}^{a} f = 2 \int_0^a f.

Exercise 5.3.11:
a) Show that f(x) := sin(1/x) is integrable on every interval (you can define f(0) to be anything).
b) Compute \int_{-1}^{1} \sin(1/x)\,dx (mind the discontinuity).

Exercise 5.3.12 (uses §3.6):
a) Suppose f : [a, b] → ℝ is increasing, and so by Proposition 5.2.11, f is Riemann integrable. Suppose f has a discontinuity at c ∈ (a, b); show that F(x) := \int_a^x f is not differentiable at c.
b) In Exercise 3.6.11, you constructed an increasing function f : [0, 1] → ℝ that is discontinuous at every x ∈ [0, 1] ∩ ℚ. Use this f to construct a function F(x) that is continuous on [0, 1], but not differentiable at every x ∈ [0, 1] ∩ ℚ.

Exercise 5.3.13: For any ℓ ∈ ℕ, show that the following limit exists and find what it is:
\[ \lim_{n\to\infty} \sum_{k=1}^{n} \frac{k^{\ell}}{n^{\ell+1}} . \]
207
5.4. THE LOGARITHM AND THE EXPONENTIAL
5.4
The logarithm and the exponential
Note: 1 lecture (optional, requires the optional sections §3.5, §3.6, §4.4) We now have the tools required to properly define the exponential and the logarithm that you know from calculus so well. We start with exponentiation. If 𝑛 is a positive integer, it is obvious to define 𝑥𝑛 B 𝑥 · 𝑥 · · · · · 𝑥 .
|
{z
𝑛 times
}
𝑥0
It makes sense to define B 1. For negative integers, let 𝑥 −𝑛 B 1/𝑥 𝑛 . If 𝑥 > 0, define 𝑥 1/𝑛 as the unique positive 𝑛th root. Finally, for a rational number 𝑛/𝑚 (in lowest terms), define 𝑥 𝑛/𝑚 B 𝑥 1/𝑚
𝑛
.
It is not difficult to show we get the same number no matter what representation of 𝑛/𝑚 we use, so we do not need to use lowest terms. √ √2 However, what do we mean by 2 ? Or 𝑥 𝑦 in general? In particular, what is 𝑒 𝑥 for all 𝑥? And how do we solve 𝑦 = 𝑒 𝑥 for 𝑥? This section answers these questions and more.
5.4.1
The logarithm
It is convenient to define the logarithm first. Let us show that a unique function with the right properties exists, and only then will we call it the logarithm. Proposition 5.4.1. There exists a unique function 𝐿 : (0, ∞) → ℝ such that (i) 𝐿(1) = 0.
(ii) 𝐿 is differentiable and 𝐿′(𝑥) = 1/𝑥 .
(iii) 𝐿 is strictly increasing, bijective, and lim 𝐿(𝑥) = −∞,
and
𝑥→0
lim 𝐿(𝑥) = ∞.
𝑥→∞
(iv) 𝐿(𝑥 𝑦) = 𝐿(𝑥) + 𝐿(𝑦) for all 𝑥, 𝑦 ∈ (0, ∞).
(v) If 𝑞 is a rational number and 𝑥 > 0, then 𝐿(𝑥 𝑞 ) = 𝑞𝐿(𝑥).
Proof. To prove existence, we define a candidate and show it satisfies all the properties. Let 𝐿(𝑥) B
∫ 1
𝑥
1 𝑑𝑡. 𝑡
Obviously, (i) holds. Property (ii) holds via the second form of the fundamental theorem of calculus (Theorem 5.3.3). To prove property (iv), we change variables 𝑢 = 𝑦𝑡 to obtain 𝐿(𝑥) =
∫ 1
𝑥
1 𝑑𝑡 = 𝑡
∫ 𝑦
𝑥𝑦
1 𝑑𝑢 = 𝑢
∫ 1
𝑥𝑦
1 𝑑𝑢 − 𝑢
∫ 1
𝑦
1 𝑑𝑢 = 𝐿(𝑥𝑦) − 𝐿(𝑦). 𝑢
208
CHAPTER 5. THE RIEMANN INTEGRAL
Let us prove (iii). Property (ii) together with the fact that 𝐿′(𝑥) = 1/𝑥 > 0 for 𝑥 > 0, implies that 𝐿 is strictly increasing and hence one-to-one. Let us show 𝐿 is onto. As 1/𝑡 ≥ 1/2 when 𝑡 ∈ [1, 2], ∫ 2 1 𝑑𝑡 ≥ 1/2. 𝐿(2) = 𝑡 1 By induction, (iv) implies that for 𝑛 ∈ ℕ , 𝐿(2𝑛 ) = 𝐿(2) + 𝐿(2) + · · · + 𝐿(2) = 𝑛𝐿(2). Given 𝑦 > 0, by the Archimedean property of the real numbers (notice 𝐿(2) > 0), there is an 𝑛 ∈ ℕ such that 𝐿(2𝑛 ) > 𝑦. The intermediate value theorem gives an 𝑥 1 ∈ (1, 2𝑛 ) such that 𝐿(𝑥1 ) = 𝑦. Thus (0, ∞) is in the image of 𝐿. As 𝐿 is increasing, 𝐿(𝑥) > 𝑦 for all 𝑥 > 2𝑛 , and so lim 𝐿(𝑥) = ∞. 𝑥→∞
Next 0 = 𝐿(𝑥/𝑥 ) = 𝐿(𝑥) + 𝐿(1/𝑥 ), and so 𝐿(𝑥) = −𝐿(1/𝑥 ). Using 𝑥 = 2−𝑛 , we obtain as above that 𝐿 achieves all negative numbers. And lim 𝐿(𝑥) = lim −𝐿(1/𝑥 ) = lim −𝐿(𝑥) = −∞.
𝑥→0
𝑥→∞
𝑥→0
In the limits, note that only 𝑥 > 0 are in the domain of 𝐿. Let us prove (v). Fix 𝑥 > 0. As above, (iv) implies 𝐿(𝑥 𝑛 ) = 𝑛𝐿(𝑥) for all 𝑛 ∈ ℕ . We already found that 𝐿(𝑥) = −𝐿(1/𝑥 ), so 𝐿(𝑥 −𝑛 ) = −𝐿(𝑥 𝑛 ) = −𝑛𝐿(𝑥). Then for 𝑚 ∈ ℕ 𝐿(𝑥) = 𝐿 (𝑥 1/𝑚 )
𝑚
= 𝑚𝐿 𝑥 1/𝑚 .
Putting everything together for 𝑛 ∈ ℤ and 𝑚 ∈ ℕ , we have 𝐿(𝑥 𝑛/𝑚 ) = 𝑛𝐿(𝑥 1/𝑚 ) = (𝑛/𝑚 )𝐿(𝑥). Uniqueness follows using properties (i) and (ii). Via the first form of the fundamental theorem of calculus (Theorem 5.3.1), 𝐿(𝑥) =
∫ 1
𝑥
1 𝑑𝑡 𝑡
is the unique function such that 𝐿(1) = 0 and 𝐿′(𝑥) = 1/𝑥 .
□
Having proved that there is a unique function with these properties, we simply define the logarithm or sometimes called the natural logarithm: ln(𝑥) B 𝐿(𝑥). See Figure 5.6. Mathematicians usually write log(𝑥) instead of ln(𝑥), which is more familiar to calculus students. For all practical purposes, there is only one logarithm: the natural logarithm. See Exercise 5.4.2.
209
5.4. THE LOGARITHM AND THE EXPONENTIAL
2 𝑦 = 1/𝑥 1
shaded area = ln(4)
0
2
1
3
4
5
𝑦 = ln(𝑥)
−1 −2
Figure 5.6: Plot of ln(𝑥) together with 1/𝑥 , showing the value ln(4).
5.4.2
The exponential
Just as with the logarithm we define the exponential via a list of properties. Proposition 5.4.2. There exists a unique function 𝐸 : ℝ → (0, ∞) such that (i) 𝐸(0) = 1.
(ii) 𝐸 is differentiable and 𝐸′(𝑥) = 𝐸(𝑥).
(iii) 𝐸 is strictly increasing, bijective, and lim 𝐸(𝑥) = 0,
𝑥→−∞
and
lim 𝐸(𝑥) = ∞.
𝑥→∞
(iv) 𝐸(𝑥 + 𝑦) = 𝐸(𝑥)𝐸(𝑦) for all 𝑥, 𝑦 ∈ ℝ. (v) If 𝑞 ∈ ℚ, then 𝐸(𝑞𝑥) = 𝐸(𝑥)𝑞 .
Proof. Again, we prove existence of such a function by defining a candidate and proving that it satisfies all the properties. The 𝐿 = ln defined above is invertible. Let 𝐸 be the inverse function of 𝐿. Property (i) is immediate. Property (ii) follows via the inverse function theorem, in particular via Lemma 4.4.1: 𝐿 satisfies all the hypotheses of the lemma, and hence 𝐸′(𝑥) =
1 = 𝐸(𝑥). 𝐿′ 𝐸(𝑥)
Let us look at property (iii). The function 𝐸 is strictly increasing since 𝐸′(𝑥) = 𝐸(𝑥) > 0. As 𝐸 is the inverse of 𝐿, it must also be bijective. To find the limits, we use that 𝐸 is strictly increasing and onto (0, ∞). For every 𝑀 > 0, there is an 𝑥 0 such that 𝐸(𝑥0 ) = 𝑀 and 𝐸(𝑥) ≥ 𝑀 for all 𝑥 ≥ 𝑥0 . Similarly, for every 𝜖 > 0, there is an 𝑥0 such that 𝐸(𝑥0 ) = 𝜖 and 𝐸(𝑥) < 𝜖 for all 𝑥 < 𝑥0 . Therefore, lim 𝐸(𝑥) = 0,
𝑛→−∞
and
lim 𝐸(𝑥) = ∞.
𝑛→∞
210
CHAPTER 5. THE RIEMANN INTEGRAL
To prove property (iv), we use the corresponding property for the logarithm. Take 𝑥, 𝑦 ∈ ℝ. As 𝐿 is bijective, find 𝑎 and 𝑏 such that 𝑥 = 𝐿(𝑎) and 𝑦 = 𝐿(𝑏). Then 𝐸(𝑥 + 𝑦) = 𝐸 𝐿(𝑎) + 𝐿(𝑏) = 𝐸 𝐿(𝑎𝑏) = 𝑎𝑏 = 𝐸(𝑥)𝐸(𝑦).
Property (v) also follows from the corresponding property of 𝐿. Given 𝑥 ∈ ℝ, let 𝑎 be such that 𝑥 = 𝐿(𝑎) and 𝐸(𝑞𝑥) = 𝐸 𝑞𝐿(𝑎) = 𝐸 𝐿(𝑎 𝑞 ) = 𝑎 𝑞 = 𝐸(𝑥)𝑞 .
Uniqueness follows from (i) and (ii). Let 𝐸 and 𝐹 be two functions satisfying (i) and (ii).
𝑑 𝐹(𝑥)𝐸(−𝑥) = 𝐹′(𝑥)𝐸(−𝑥) − 𝐸′(−𝑥)𝐹(𝑥) = 𝐹(𝑥)𝐸(−𝑥) − 𝐸(−𝑥)𝐹(𝑥) = 0. 𝑑𝑥 Therefore, by Proposition 4.2.6, 𝐹(𝑥)𝐸(−𝑥) = 𝐹(0)𝐸(−0) = 1 for all 𝑥 ∈ ℝ. Next, 1 = 𝐸(0) = 𝐸(𝑥 − 𝑥) = 𝐸(𝑥)𝐸(−𝑥). Then 0 = 1 − 1 = 𝐹(𝑥)𝐸(−𝑥) − 𝐸(𝑥)𝐸(−𝑥) = 𝐹(𝑥) − 𝐸(𝑥) 𝐸(−𝑥).
Finally, 𝐸(−𝑥) ≠ 0* for all 𝑥 ∈ ℝ. So 𝐹(𝑥) − 𝐸(𝑥) = 0 for all 𝑥, and we are done.
□
Having proved 𝐸 is unique, we define the exponential function (see Figure 5.7) as exp(𝑥) B 𝐸(𝑥).
4 3 2 1
−1
0
1
Figure 5.7: Plot of 𝑒 𝑥 , together with a slope field giving slope 𝑦 at every point (𝑥, 𝑦). The 𝑑 𝑥 equation 𝑑𝑥 𝑒 = 𝑒 𝑥 means that 𝑦 = 𝑒 𝑥 follows these slopes. *𝐸
is a function into (0, ∞) after all. However, 𝐸(−𝑥) ≠ 0 also follows from 𝐸(𝑥)𝐸(−𝑥) = 1. Therefore, we can prove uniqueness of 𝐸 given (i) and (ii), even for functions 𝐸 : ℝ → ℝ.
211
5.4. THE LOGARITHM AND THE EXPONENTIAL If 𝑦 ∈ ℚ and 𝑥 > 0, then
𝑥 𝑦 = exp ln(𝑥 𝑦 ) = exp 𝑦 ln(𝑥) .
We can now make sense of exponentiation 𝑥 𝑦 for arbitrary 𝑦 ∈ ℝ; if 𝑥 > 0 and 𝑦 is irrational, define 𝑥 𝑦 B exp 𝑦 ln(𝑥) . As exp is continuous, then 𝑥 𝑦 is a continuous function of 𝑦. Therefore, we would obtain the same result had we taken a sequence of rational numbers {𝑦𝑛 }∞ approaching 𝑦 and 𝑛=1 𝑦 𝑦 𝑛 defined 𝑥 = lim𝑛→∞ 𝑥 . Define the number 𝑒, called Euler’s number or the base of the natural logarithm, as 𝑒 B exp(1). Let us justify the notation 𝑒 𝑥 for exp(𝑥): 𝑒 𝑥 = exp 𝑥 ln(𝑒) = exp(𝑥).
The properties of the logarithm and the exponential extend to irrational powers. The proof is immediate. Proposition 5.4.3. Let 𝑥, 𝑦 ∈ ℝ.
𝑦
(i) exp(𝑥𝑦) = exp(𝑥) .
(ii) If 𝑥 > 0, then ln(𝑥 𝑦 ) = 𝑦 ln(𝑥).
Remark 5.4.4. There are other equivalent ways to define the exponential and the logarithm. A common way is to define 𝐸 as the solution to the differential equation 𝐸′(𝑥) = 𝐸(𝑥), 𝐸(0) = 1. See Example 6.3.3, for a sketch of that approach. Yet another approach is to define the exponential function by power series, see Example 6.2.14. Remark 5.4.5. We proved the uniqueness of the functions 𝐿 and 𝐸 from just the properties 𝐿(1) = 0, 𝐿′(𝑥) = 1/𝑥 and the equivalent condition for the exponential 𝐸′(𝑥) = 𝐸(𝑥), 𝐸(0) = 1. Existence also follows from just these properties. Alternatively, uniqueness also follows from the laws of exponents, see the exercises.
5.4.3
Exercises
Exercise 5.4.1: Given a real number 𝑦 and 𝑏 > 0, define 𝑓 : (0, ∞) → ℝ and 𝑔 : ℝ → ℝ as 𝑓 (𝑥) B 𝑥 𝑦 and 𝑔(𝑥) B 𝑏 𝑥 . Show that 𝑓 and 𝑔 are differentiable and find their derivative. Exercise 5.4.2: Let 𝑏 > 0, 𝑏 ≠ 1 be given. a) Show that for every 𝑦 > 0, there exists a unique number 𝑥 such that 𝑦 = 𝑏 𝑥 . Define the logarithm base 𝑏, log𝑏 : (0, ∞) → ℝ, by log𝑏 (𝑦) B 𝑥.
b) Show that log𝑏 (𝑥) =
ln(𝑥) . ln(𝑏)
c) Prove that if 𝑐 > 0, 𝑐 ≠ 1, then log𝑏 (𝑥) =
log𝑐 (𝑥) . log𝑐 (𝑏)
d) Prove log𝑏 (𝑥 𝑦) = log𝑏 (𝑥) + log𝑏 (𝑦), and log𝑏 (𝑥 𝑦 ) = 𝑦 log𝑏 (𝑥).
212
CHAPTER 5. THE RIEMANN INTEGRAL
Exercise 5.4.3 (requires §4.3): Use Taylor’s theorem to study the remainder term and show that for all 𝑥∈ℝ ∞ Õ 𝑥𝑛 𝑥 𝑒 = . 𝑛! 𝑛=0
Hint: Do not differentiate the series term by term (unless you would prove that it works). Exercise 5.4.4: Use the geometric sum formula to show (for 𝑡 ≠ −1) 1 − 𝑡 + 𝑡 2 − · · · + (−1)𝑛 𝑡 𝑛 = Using this fact show ln(1 + 𝑥) =
1 (−1)𝑛+1 𝑡 𝑛+1 − . 1+𝑡 1+𝑡
∞ Õ (−1)𝑛+1 𝑥 𝑛
𝑛
𝑛=1
for all 𝑥 ∈ (−1, 1] (note that 𝑥 = 1 is included). Finally, find the limit of the alternating harmonic series ∞ Õ (−1)𝑛+1 𝑛=1
Exercise 5.4.5: Show
𝑛
=1−
1 1 1 + − +··· 2 3 4
𝑒 𝑥 = lim 1 + 𝑛→∞
𝑥 𝑛 . 𝑛
Hint: Take the logarithm. 𝑛 Note: The expression 1 + 𝑛𝑥 arises in compound interest calculations. It is the amount of money in a bank account after 1 year if 1 dollar was deposited initially at interest 𝑥 and the interest was compounded 𝑛 times during the year. The exponential 𝑒 𝑥 is the result of continuous compounding. Exercise 5.4.6: a) Prove that for 𝑛 ∈ ℕ ,
𝑛 Õ 1 𝑘=2
𝑘
≤ ln(𝑛) ≤
b) Prove that the limit 𝛾 B lim
𝑛→∞
𝑛 Õ 1 𝑘=1
𝑘
𝑛−1 Õ 1 𝑘=1
𝑘
.
!
! − ln(𝑛)
exists. This constant is known as the Euler–Mascheroni constant* . It is not known if this constant is rational or not. It is approximately 𝛾 ≈ 0.5772. Exercise 5.4.7: Show lim
𝑥→∞
ln(𝑥) = 0. 𝑥
𝑏 𝑥−𝑎 Exercise 5.4.8: Show that 𝑒 𝑥 is convex, in other words, show that if 𝑎 ≤ 𝑥 ≤ 𝑏, then 𝑒 𝑥 ≤ 𝑒 𝑎 𝑏−𝑥 𝑏−𝑎 + 𝑒 𝑏−𝑎 . * Named
for the Swiss mathematician Leonhard Paul Euler (1707–1783) and the Italian mathematician Lorenzo Mascheroni (1750–1800).
5.4. THE LOGARITHM AND THE EXPONENTIAL Exercise 5.4.9: Using the logarithm find
213
lim 𝑛 1/𝑛 .
𝑛→∞
Exercise 5.4.10: Show that 𝐸(𝑥) = 𝑒 𝑥 is the unique continuous function such that 𝐸(𝑥 + 𝑦) = 𝐸(𝑥)𝐸(𝑦) and 𝐸(1) = 𝑒. Similarly, prove that 𝐿(𝑥) = ln(𝑥) is the unique continuous function defined on positive 𝑥 such that 𝐿(𝑥 𝑦) = 𝐿(𝑥) + 𝐿(𝑦) and 𝐿(𝑒) = 1. Exercise 5.4.11 (requires §4.3): Since (𝑒 𝑥 )′ = 𝑒 𝑥 , it is easy to see that 𝑒 𝑥 is infinitely differentiable (has derivatives of all orders). Define the function 𝑓 : ℝ → ℝ.
( 𝑓 (𝑥) B a) Prove that for every 𝑚 ∈ ℕ ,
𝑒 −1/𝑥
if 𝑥 > 0,
0
if 𝑥 ≤ 0.
lim+
𝑥→0
𝑒 −1/𝑥 = 0. 𝑥𝑚
b) Prove that 𝑓 is infinitely differentiable. c) Compute the Taylor series for 𝑓 at the origin, that is, ∞ Õ 𝑓 (𝑘) (0) 𝑘=0
𝑘!
𝑥𝑘.
Show that it converges, but show that it does not converge to 𝑓 (𝑥) for any given 𝑥 > 0.
214
5.5
CHAPTER 5. THE RIEMANN INTEGRAL
Improper integrals
Note: 2–3 lectures (optional section, can safely be skipped, requires the optional §3.5) Often it is necessary to integrate over the entire real line, or an unbounded interval of the form [𝑎, ∞) or (−∞, 𝑏]. We may also wish to integrate unbounded functions defined on an open bounded interval (𝑎, 𝑏). For such intervals or functions, the Riemann integral is not defined, but we will write down the integral anyway in the spirit of Lemma 5.2.8. These integrals are called improper integrals and are limits of integrals rather than integrals themselves. Definition 5.5.1. Suppose 𝑓 : [𝑎, 𝑏) → ℝ is a function (not necessarily bounded) that is Riemann integrable on [𝑎, 𝑐] for all 𝑐 < 𝑏. We define
∫ 𝑎
𝑏
𝑓 B lim−
𝑐
∫
𝑐→𝑏
𝑎
𝑓
if the limit exists. Suppose 𝑓 : [𝑎, ∞) → ℝ is a function such that 𝑓 is Riemann integrable on [𝑎, 𝑐] for all 𝑐 < ∞. We define ∫ ∞ ∫ 𝑐 𝑎
𝑓 B lim
𝑐→∞
𝑎
𝑓
if the limit exists. If the limit exists, we say the improper integral converges. If the limit does not exist, we say the improper integral diverges. We similarly define improper integrals for the left-hand endpoint, we leave this to the reader. For a finite endpoint 𝑏, if 𝑓 is bounded, then Lemma 5.2.8 says that we defined nothing new. What is new is that we can apply this definition to unbounded functions. The following set of examples is so useful that we state it as a proposition. Proposition 5.5.2 (𝑝-test for integrals). The improper integral
∫ 1
∞
1 𝑑𝑥 𝑥𝑝
1 converges to 𝑝−1 if 𝑝 > 1 and diverges if 0 < 𝑝 ≤ 1. The improper integral ∫ 1 1 𝑝 𝑑𝑥 0 𝑥
converges to
1 1−𝑝
if 0 < 𝑝 < 1 and diverges if 𝑝 ≥ 1.
Proof. The proof follows by application of the fundamental theorem of calculus. Let us do the proof for 𝑝 > 1 for the infinite right endpoint and leave the rest to the reader. Hint: You should handle 𝑝 = 1 separately.
215
5.5. IMPROPER INTEGRALS Suppose 𝑝 > 1. Then using the fundamental theorem, 𝑏
∫ 1
1 𝑑𝑥 = 𝑥𝑝
∫ 1
𝑏
𝑥 −𝑝 𝑑𝑥 =
1−𝑝+1 −1 𝑏 −𝑝+1 1 − = . + 𝑝−1 −𝑝 + 1 −𝑝 + 1 (𝑝 − 1)𝑏 𝑝−1
As 𝑝 > 1, then 𝑝 − 1 > 0. Take the limit as 𝑏 → ∞ to obtain that follows.
1 𝑏 𝑝−1
goes to 0. The result
□
We state the following proposition on “tails” for just one type of improper integral, though the proof is straightforward and the same for other types of improper integrals. Proposition 5.5.3. Let 𝑓 : [𝑎, ∞) → ∫ ∞ℝ be a function that is Riemann ∫ ∞ integrable on [𝑎, 𝑏] for all 𝑏 > 𝑎. For every 𝑏 > 𝑎, the integral 𝑏 𝑓 converges if and only if 𝑎 𝑓 converges, in which case ∞
∫ 𝑎
Proof. Let 𝑐 > 𝑏. Then
∫
𝑓 =
𝑐
𝑓 =
𝑎
∫
𝑏
𝑓 +
𝑎
∫
𝑏
𝑎
𝑓 +
∞
∫ 𝑏
𝑓.
𝑐
∫
𝑓.
𝑏
Taking the limit 𝑐 → ∞ finishes the proof.
□
Nonnegative functions are easier to work with as the following proposition demonstrates. The exercises will show that this proposition holds only for nonnegative functions. Analogues of this proposition exist for all the other types of improper limits and are left to the student. Proposition 5.5.4. Suppose 𝑓 : [𝑎, ∞) → ℝ is nonnegative ( 𝑓 (𝑥) ≥ 0 for all 𝑥) and 𝑓 is Riemann integrable on [𝑎, 𝑏] for all 𝑏 > 𝑎. (i)
∫ 𝑎
∞
𝑓 = sup
∫
𝑥
𝑎
𝑓 :𝑥≥𝑎 .
(ii) Suppose {𝑥 𝑛 }∞ is a sequence with lim𝑛→∞ 𝑥 𝑛 = ∞. Then 𝑛=1 lim𝑛→∞
∫
𝑥𝑛
𝑎
𝑓 exists, in which case
∫ 𝑎
∞
𝑓 = lim
𝑛→∞
∫
𝑥𝑛
𝑎
∫∞ 𝑎
𝑓 converges if and only if
𝑓.
In the first item we allow for the value of ∞ in the supremum indicating that the integral diverges to infinity. Proof. We start with the first item. As 𝑓 is nonnegative,
∫ 𝑎
𝑥
𝑓 is increasing as a function of
𝑥. If the supremum is infinite, then for every 𝑀 ∈ ℝ we find 𝑁 such that
∫
𝑎
𝑥
𝑓 is increasing,
∫
𝑎
𝑥
𝑓 ≥ 𝑀 for all 𝑥 ≥ 𝑁. So
∫∞ 𝑎
𝑓 diverges to infinity.
∫ 𝑎
𝑁
𝑓 ≥ 𝑀. As
216
CHAPTER 5. THE RIEMANN INTEGRAL
Next suppose the supremum is finite, say 𝐴 B sup find an 𝑁 such that 𝐴 − and hence
∫∞ 𝑎
𝑁
∫
𝑓 < 𝜖. As
𝑎
𝑓 converges to 𝐴.
∫
𝑥
𝑎
n∫
𝑥
o
𝑓 : 𝑥 ≥ 𝑎 . For every 𝜖 > 0, we
𝑎
𝑓 is increasing, then 𝐴 −
∫
𝑎
𝑥
𝑓 < 𝜖 for all 𝑥 ≥ 𝑁
∫∞
Let us look at the second item. If 𝑎 𝑓 converges, then every sequence {𝑥 𝑛 }∞ going 𝑛=1 to infinity works. The trick is proving the other direction. Suppose {𝑥 𝑛 }∞ is such that 𝑛=1 lim𝑛→∞ 𝑥 𝑛 = ∞ and ∫ 𝑥𝑛
lim
𝑛→∞
𝑓 =𝐴
𝑎
converges. Given 𝜖 > 0, pick 𝑁 such that for all 𝑛 ≥ 𝑁, we have 𝐴 − 𝜖 < Because
∫
𝑥
𝑎
𝑓 is increasing as a function of 𝑥, we have that for all 𝑥 ≥ 𝑥 𝑁 𝑥𝑁
∫
𝐴−𝜖
𝑎. Suppose that for all 𝑥 ≥ 𝑎, | 𝑓 (𝑥)| ≤ 𝑔(𝑥). (i) If
∫∞
(ii) If
∫∞
𝑎
𝑎
𝑔 converges, then 𝑓 diverges, then
∫∞ 𝑎
∫∞ 𝑎
∫ ∞ ∫∞ 𝑓 ≤ 𝑎 𝑔. 𝑎
𝑓 converges, and in this case
𝑔 diverges.
Proof. We start with the first item. For every 𝑏 and 𝑐, such that 𝑎 ≤ 𝑏 ≤ 𝑐, we have −𝑔(𝑥) ≤ 𝑓 (𝑥) ≤ 𝑔(𝑥), and so 𝑐
∫ 𝑏
∫
In other words,
𝑐
𝑏
𝑓 ≤
∫ 𝑏
𝑐
−𝑔 ≤
𝑐
∫
𝑓 ≤
𝑏
𝑐
∫ 𝑏
𝑔.
𝑔.
Let 𝜖 > 0 be given. Because of Proposition 5.5.3,
∫ 𝑎
∫𝑏
As 𝑎 𝑔 goes to such that
∫∞ 𝑎
∞
𝑔=
∫
𝐵
𝑔+
𝑎
𝑔 as 𝑏 goes to infinity,
∫
𝑏
∫∞
∞
𝑏
∫ 𝑏
∞
𝑔.
𝑔 goes to 0 as 𝑏 goes to infinity. Choose 𝐵
𝑔 < 𝜖.
217
5.5. IMPROPER INTEGRALS 𝑐
As 𝑔 is nonnegative, if 𝐵 ≤ 𝑏 < 𝑐, then 𝑏 𝑔 < 𝜖 as well. Let {𝑥 𝑛 }∞ be a sequence going 𝑛=1 to infinity. Let 𝑀 be such that 𝑥 𝑛 ≥ 𝐵 for all 𝑛 ≥ 𝑀. Take 𝑛, 𝑚 ≥ 𝑀, with 𝑥 𝑛 ≤ 𝑥 𝑚 ,
∫
∫
𝑥𝑚
𝑎 𝑥𝑛
𝑓 −
𝑥𝑛
∫ 𝑎
𝑥𝑚
∫ 𝑓 =
𝑥𝑛
∫ 𝑓 ≤
𝑥𝑚
𝑥𝑛
𝑔 < 𝜖.
∞
Therefore, the sequence 𝑎 𝑓 𝑛=1 is Cauchy and hence converges. We need to show that the limit is unique. Suppose {𝑥 𝑛 }∞ is a sequence converging 𝑛=1 ∫ 𝑥 𝑛 ∞ ∞ to infinity such that 𝑎 𝑓 𝑛=1 converges to 𝐿1 , and {𝑦𝑛 } 𝑛=1 is a sequence converging
∫
∫ 𝑦𝑛 ∞ 𝑓 𝑛=1 converges to 𝐿2 . Then there must be some 𝑛 such that ∫ 𝑥 𝑛 ∫ 𝑦𝑎𝑛 < 𝜖 and < 𝜖. We can also suppose 𝑥 𝑛 ≥ 𝐵 and 𝑦𝑛 ≥ 𝐵. Then 𝑓 − 𝐿 𝑓 − 𝐿 1 2 𝑎 𝑎 ∫ 𝑦𝑛 ∫ 𝑥 𝑛 ∫ 𝑥 𝑛 ∫ 𝑦𝑛 ∫ 𝑦𝑛 𝑓 + 𝜖 < 3𝜖. 𝑓 + 𝑓 − 𝑓 + 𝑓 − 𝐿2 < 𝜖 + |𝐿1 − 𝐿2 | ≤ 𝐿1 − to infinity such that
𝑎
𝑎
𝑎
As 𝜖 > 0 was arbitrary, 𝐿1 = 𝐿2 , and hence
𝑎
∫∞
𝑥𝑛
𝑓 converges. Above we have shown that
𝑎 ∫ ∫ 𝑐 𝑐 𝑎 𝑓 ≤ 𝑎 𝑔 for all 𝑐 > 𝑎. By taking the limit 𝑐 → ∞, the first item is proved.
The second item is simply a contrapositive of the first item.
□
Example 5.5.6: The improper integral
∫ 0
∞
sin(𝑥 2 )(𝑥 + 2) 𝑑𝑥 𝑥3 + 1
converges. Proof: Observe we simply need to show that the integral converges when going from 1 to infinity. For 𝑥 ≥ 1 we obtain
Then
sin(𝑥 2 )(𝑥 + 2) ≤ 𝑥 + 2 ≤ 𝑥 + 2 ≤ 𝑥 + 2𝑥 ≤ 3 . 𝑥3 + 1 3 𝑥 +1 𝑥3 𝑥3 𝑥2 ∫ 1
∞
3 𝑑𝑥 = 3 𝑥2
∫ 1
∞
1 𝑑𝑥 = 3. 𝑥2
So using the comparison test and the tail test, the original integral converges. Example 5.5.7: You should be careful when doing formal manipulations with improper integrals. The integral ∫ ∞ 2 𝑑𝑥 𝑥2 − 1 2 converges via the comparison test using 1/𝑥 2 again. However, if you succumb to the temptation to write 2 1 1 = − 𝑥2 − 1 𝑥 − 1 𝑥 + 1
218
CHAPTER 5. THE RIEMANN INTEGRAL
and try to integrate each part separately, you will not succeed. It is not true that you can split the improper integral in two; you cannot split the limit.
∫ 2
∞
2 𝑑𝑥 = lim 𝑏→∞ 𝑥2 − 1 = lim ≠
𝑏→∞ ∞
∫
2
𝑏
∫ 2
∫
2 𝑑𝑥 𝑥2 − 1
𝑏
2
1 𝑑𝑥 − 𝑥−1
1 𝑑𝑥 − 𝑥−1
∞
∫
2
∫ 2
𝑏
1 𝑑𝑥 𝑥+1
1 𝑑𝑥. 𝑥+1
The last line in the computation does not even make sense. Both of the integrals diverge to infinity, since we can apply the comparison test appropriately with 1/𝑥 . We get ∞ − ∞. Now suppose we need to take limits at both endpoints. Definition 5.5.8. Suppose 𝑓 : (𝑎, 𝑏) → ℝ is a function that is Riemann integrable on [𝑐, 𝑑] for all 𝑐, 𝑑 such that 𝑎 < 𝑐 < 𝑑 < 𝑏, then we define
∫
𝑏
𝑓 B lim+ lim− 𝑐→𝑎
𝑎
𝑑→𝑏
𝑑
∫
𝑓
𝑐
if the limits exist. Suppose 𝑓 : ℝ → ℝ is a function such that 𝑓 is Riemann integrable on all bounded intervals [𝑎, 𝑏]. Then we define ∞
∫
−∞
𝑓 B lim lim
𝑐→−∞ 𝑑→∞
∫ 𝑐
𝑑
𝑓
if the limits exist. We similarly define improper integrals with one infinite and one finite improper endpoint, we leave this to the reader. One ought to always be careful about double limits. The definition given above says that we first take the limit as 𝑑 goes to 𝑏 or ∞ for a fixed 𝑐, and then we take the limit in 𝑐. We will have to prove that in this case it does not matter which limit we compute first. Example 5.5.9:
∫
∞
−∞
1 𝑑𝑥 = lim lim 𝑎→−∞ 𝑏→∞ 1 + 𝑥2
∫ 𝑎
𝑏
1 𝑑𝑥 = lim lim arctan(𝑏) − arctan(𝑎) = 𝜋. 𝑎→−∞ 𝑏→∞ 1 + 𝑥2
In the definition, the order of the limits can always be switched if they exist. Let us state and prove this fact only for the limits at infinity.
219
5.5. IMPROPER INTEGRALS
Proposition 5.5.10. Suppose 𝑓 : ℝ → ℝ is integrable on every bounded interval [𝑎, 𝑏]. Then lim lim
∫
𝑎→−∞ 𝑏→∞
𝑏
𝑓
𝑎
converges
lim lim
if and only if
𝑏
∫
𝑏→∞ 𝑎→−∞
𝑓
𝑎
converges,
in which case the two expressions are equal. If either of the expressions converges, then the improper integral converges and 𝑎
∫
lim
𝑎→∞
𝑓 =
−𝑎
∞
∫
−∞
𝑓.
Proof. Without loss of generality, assume 𝑎 < 0 and 𝑏 > 0. Suppose the first expression converges. Then lim lim
𝑎→−∞ 𝑏→∞
∫ 𝑎
𝑏
𝑓 = lim lim
𝑎→−∞ 𝑏→∞
= lim
𝑓 +
𝑎
0
∫
lim
𝑎→−∞
𝑏→∞
0
∫
𝑓 =
0
𝑓 +
𝑎
𝑏
∫
𝑏
∫
∫
lim
𝑎→−∞
0
𝑓 + lim
𝑏→∞
𝑎
𝑓 = lim lim
𝑏→∞ 𝑎→−∞
0
0
∫ 𝑎
𝑓 +
∫
𝑏
𝑓
0
∫
𝑏
0
𝑓 .
Similar computation shows the other direction. Therefore, if either expression converges, then the improper integral converges and
∫
∞
−∞
𝑓 = lim lim
𝑎→−∞ 𝑏→∞
𝑏
∫ 𝑎
𝑓 =
= lim
𝑎→∞
lim
∫
𝑎→−∞ 0
∫
0
𝑓 + lim
𝑏→∞
𝑎
𝑎
∫
𝑓 + lim
𝑎→∞
−𝑎
∫ 0
𝑏
𝑓
𝑓 = lim
∫
𝑎→∞
0
0
−𝑎
𝑓 +
∫ 0
𝑎
𝑓 = lim
𝑎→∞
∫
𝑎
−𝑎
𝑓.
□ Example 5.5.11: On the other hand, you must be careful to take the limits independently 𝑥 before you know convergence. Let 𝑓 (𝑥) = |𝑥| for 𝑥 ≠ 0 and 𝑓 (0) = 0. If 𝑎 < 0 and 𝑏 > 0, then
∫ 𝑎
𝑏
∫
𝑓 =
0
𝑎
𝑓 +
∫ 0
𝑏
𝑓 = 𝑎 + 𝑏.
For every fixed 𝑎 < 0, the limit ∫ ∞ as 𝑏 → ∞ is infinite. So even the first limit does not exist, and the improper integral −∞ 𝑓 does not converge. On the other hand, if 𝑎 > 0, then
∫
𝑎
−𝑎
𝑓 = (−𝑎) + 𝑎 = 0.
Therefore, lim
𝑎→∞
∫
𝑎
−𝑎
𝑓 = 0.
220
CHAPTER 5. THE RIEMANN INTEGRAL
Example 5.5.12: An example to keep in mind for improper integrals is the so-called sinc function* . This function comes up quite often in both pure and applied mathematics. Define
( sinc(𝑥) B
sin(𝑥) 𝑥
1
if 𝑥 ≠ 0, if 𝑥 = 0.
1
1 2
−4𝜋
−2𝜋
2𝜋
4𝜋
− 14
Figure 5.8: The sinc function.
It is not difficult to show that the sinc function is continuous at zero, but that is not important right now. What is important is that
∫
∞
−∞
sinc(𝑥) 𝑑𝑥 = 𝜋,
while
∫
∞
−∞
|sinc(𝑥)| 𝑑𝑥 = ∞.
The integral of the sinc function is a continuous analogue of the alternating harmonic Í Í∞ (−1)𝑛/𝑛 , while the absolute value is like the regular harmonic series 1 series ∞ 𝑛=1 𝑛=1 /𝑛 . In particular, the fact that the integral converges must be done directly rather than using comparison test. We will not prove the first statement exactly. Let us simply prove that the integral of the sinc function converges, but we will not worry about the exact limit. Because sin(−𝑥) = sin(𝑥) −𝑥 𝑥 , it is enough to show that ∫ ∞ sin(𝑥) 𝑑𝑥 𝑥 2𝜋 converges. We also avoid 𝑥 = 0 this way to make our life simpler. For every 𝑛 ∈ ℕ , we have that for 𝑥 ∈ [𝜋2𝑛, 𝜋(2𝑛 + 1)],
* Shortened
sin(𝑥) sin(𝑥) sin(𝑥) ≤ ≤ , 𝑥 𝜋2𝑛 𝜋(2𝑛 + 1)
from Latin: sinus cardinalis
221
5.5. IMPROPER INTEGRALS as sin(𝑥) ≥ 0. For 𝑥 ∈ [𝜋(2𝑛 + 1), 𝜋(2𝑛 + 2)],
sin(𝑥) sin(𝑥) sin(𝑥) ≤ ≤ , 𝑥 𝜋(2𝑛 + 1) 𝜋(2𝑛 + 2)
as sin(𝑥) ≤ 0. Via the fundamental theorem of calculus, 2 = 𝜋(2𝑛 + 1)
𝜋(2𝑛+1)
∫
𝜋2𝑛
Similarly,
sin(𝑥) 𝑑𝑥 ≤ 𝜋(2𝑛 + 1)
−2 ≤ 𝜋(2𝑛 + 1)
∫
𝜋(2𝑛+1)
𝜋2𝑛
𝜋(2𝑛+2)
∫
𝜋(2𝑛+1)
Adding the two together we find
−2 2 + ≤ 0= 𝜋(2𝑛 + 1) 𝜋(2𝑛 + 1)
∫
2𝜋(𝑛+1)
2𝜋𝑛
See Figure 5.9.
sin(𝑥) 𝑑𝑥 ≤ 𝑥
∫
𝜋(2𝑛+1)
𝜋2𝑛
sin(𝑥) −1 𝑑𝑥 ≤ . 𝑥 𝜋(𝑛 + 1)
1 −1 1 sin(𝑥) 𝑑𝑥 ≤ + = . 𝑥 𝜋𝑛 𝜋(𝑛 + 1) 𝜋𝑛(𝑛 + 1)
sin(𝑥) 𝑥
+
sin(𝑥) 𝜋(2𝑛+1)
𝜋(2𝑛 + 1)
𝜋2𝑛 sin(𝑥) 𝜋(2𝑛+1)
sin(𝑥) 𝑥
Figure 5.9: Bound of
∫ 2𝜋(𝑛+1) 2𝜋𝑛
sin(𝑥) 𝜋2𝑛
sin(𝑥) 𝑥
sin(𝑥) 1 𝑑𝑥 = . 𝜋2𝑛 𝜋𝑛
sin(𝑥) 𝜋(2𝑛+2)
𝜋(2𝑛 + 2) −
𝑑𝑥 using the shaded integral (signed area
1 𝜋𝑛
+
−1 ). 𝜋(𝑛+1)
For 𝑘 ∈ ℕ ,
∫
2𝑘𝜋
2𝜋
𝑘−1
Õ sin(𝑥) 𝑑𝑥 = 𝑥 𝑛=1
∫
2𝜋(𝑛+1)
2𝜋𝑛
𝑘−1 Õ sin(𝑥) 1 𝑑𝑥 ≤ . 𝑥 𝜋𝑛(𝑛 + 1) 𝑛=1
We find the partial sums of a series with positive terms. The series converges as is a convergent series. Thus as a sequence, lim
𝑘→∞
∫
2𝑘𝜋
2𝜋
∞ Õ sin(𝑥) 1 𝑑𝑥 = 𝐿 ≤ < ∞. 𝑥 𝜋𝑛(𝑛 + 1) 𝑛=1
1 𝑛=1 𝜋𝑛(𝑛+1)
Í∞
222
CHAPTER 5. THE RIEMANN INTEGRAL
Let 𝑀 > 2𝜋 be arbitrary, and let 𝑘 ∈ ℕ be the largest integer such that 2𝑘𝜋 ≤ 𝑀. For 1 −1 ≤ 2𝑘𝜋 ≤ sin(𝑥) , and so 𝑥 ∈ [2𝑘𝜋, 𝑀], we have 2𝑘𝜋 𝑥
∫
𝑀
2𝑘𝜋
sin(𝑥) 𝑀 − 2𝑘𝜋 1 𝑑𝑥 ≤ ≤ . 𝑥 2𝑘𝜋 𝑘
As 𝑘 is the largest 𝑘 such that 2𝑘𝜋 ≤ 𝑀, then as 𝑀 ∈ ℝ goes to infinity, so does 𝑘 ∈ ℕ . Then ∫ 2𝑘𝜋 ∫ 𝑀 ∫ 𝑀 sin(𝑥) sin(𝑥) sin(𝑥) 𝑑𝑥 = 𝑑𝑥 + 𝑑𝑥. 𝑥 𝑥 𝑥 2𝜋 2𝑘𝜋 2𝜋 As 𝑀 goes to infinity, the first term on the right-hand side goes to 𝐿, and the second term on the right-hand side goes to zero. Hence
∫
∞
2𝜋
sin(𝑥) 𝑑𝑥 = 𝐿. 𝑥
The double-sided integral of sinc also exists as noted above. We leave the other statement—that the integral of the absolute value of the sinc function diverges—as an exercise.
5.5.1
Integral test for series
The fundamental theorem of calculus can be used in proving a series is summable and to estimate its sum. Proposition 5.5.13 (Integral test). Suppose 𝑓 : [𝑘, ∞) → ℝ is a decreasing nonnegative function where 𝑘 ∈ ℤ. Then ∞ Õ 𝑛=𝑘
𝑓 (𝑛)
∞
∫ converges
if and only if 𝑘
In this case
∫
∞
𝑘
𝑓 ≤
∞ Õ 𝑛=𝑘
𝑓 (𝑛) ≤ 𝑓 (𝑘) +
∞
∫ 𝑘
𝑓
converges.
𝑓.
See Figure 5.10, for an illustration with 𝑘 = 1. By Proposition 5.2.11, 𝑓 is integrable on every interval [𝑘, 𝑏] for all 𝑏 > 𝑘, so the statement of the theorem makes sense without additional hypotheses of integrability. Proof. Let ℓ , 𝑚 ∈ ℤ be such that 𝑚 > ℓ ≥ 𝑘. Because 𝑓 is decreasing, we have 𝑓 (𝑛) ≤
𝑛 𝑛−1
∫
∫ ℓ
𝑓 . Therefore, 𝑚
𝑓 =
𝑚−1 Õ ∫ 𝑛+1 𝑛=ℓ
𝑛
𝑓 ≤
𝑚−1 Õ 𝑛=ℓ
𝑓 (𝑛) ≤ 𝑓 (ℓ ) +
𝑚−1 Õ 𝑛=ℓ +1
∫
𝑛
𝑛−1
𝑓 ≤ 𝑓 (ℓ ) +
∫ ℓ
𝑚−1
𝑓.
𝑛+1 𝑛
∫
𝑓 ≤
(5.3)
223
5.5. IMPROPER INTEGRALS
0
2
1
3
5
4
6
7
8
9
··· 10
∫∞
Figure 5.10: The area under the curve, 1 𝑓 , is bounded below by the area of the shaded rectangles, 𝑓 (2) + 𝑓 (3) + 𝑓 (4) + · · · , and bounded above by the area entire rectangles, 𝑓 (1) + 𝑓 (2) + 𝑓 (3) + · · · .
Suppose first that
∫∞ 𝑘
𝑓 converges and let 𝜖 > 0 be given. As before, since 𝑓 is positive, 𝑚
then there exists an 𝐿 ∈ ℕ such that if ℓ ≥ 𝐿, then ℓ 𝑓 < 𝜖/2 for all 𝑚 ≥ ℓ . The function 𝑓 must decrease to zero (why?), so make 𝐿 large enough so that for ℓ ≥ 𝐿, we have 𝑓 (ℓ ) < 𝜖/2. Thus, for 𝑚 > ℓ ≥ 𝐿, we have via (5.3),
∫
𝑚 Õ 𝑛=ℓ
𝑓 (𝑛) ≤ 𝑓 (ℓ ) +
∫ ℓ
𝑚
𝑓 < 𝜖/2 + 𝜖/2 = 𝜖.
The series is therefore Cauchy and thus converges. The estimate in the proposition is obtained by letting 𝑚 go to ∫ ∞infinity in (5.3) with ℓ = 𝑘. Conversely, suppose 𝑘 𝑓 diverges. As 𝑓 is positive, then by Proposition 5.5.4, the sequence {
𝑚 𝑘
∫
𝑓 }∞ diverges to infinity. Using (5.3) with ℓ = 𝑘, we find 𝑚=𝑘 𝑚
∫
𝑓 ≤
𝑘
𝑚−1 Õ 𝑛=𝑘
𝑓 (𝑛).
As the left-hand side goes to infinity as 𝑚 → ∞, so does the right-hand side.
□
Example 5.5.14: The integral test can be used not only to show that a series converges, but Í 1 to estimate its sum to arbitrary precision. Let us show ∞ 𝑛=1 𝑛 2 exists and estimate its sum to within 0.01. As this series is the 𝑝-series for 𝑝 = 2, we already proved it converges (let us pretend we do not know that), but we only roughly estimated its sum. The fundamental theorem of calculus says that for all 𝑘 ∈ ℕ ,
∫
𝑘
∞
1 1 𝑑𝑥 = . 2 𝑘 𝑥
In particular, the series must converge. But we also have 1 = 𝑘
∫ 𝑘
∞
∫ ∞ ∞ Õ 1 1 1 1 1 1 𝑑𝑥 ≤ ≤ + 𝑑𝑥 = + . 𝑥2 𝑛2 𝑘2 𝑥2 𝑘2 𝑘 𝑘 𝑛=𝑘
224
CHAPTER 5. THE RIEMANN INTEGRAL
Adding the partial sum up to 𝑘 − 1 we get 𝑘−1 𝑘−1 ∞ Õ 1 1 Õ 1 1 Õ 1 1 + ≤ ≤ 2+ + . 2 2 2 𝑘 𝑘 𝑛 𝑛 𝑘 𝑛 𝑛=1 𝑛=1 𝑛=1 𝑘−1 1 2 In other words, 1/𝑘 + 𝑛=1 /𝑛 is an estimate for the sum to within 1/𝑘 2 . Therefore, if we wish to find the sum to within 0.01, we note 1/102 = 0.01. We obtain
Í
9 9 ∞ Õ 1 Õ 1 1 Õ 1 1 1 + + + 1.6397 . . . ≈ ≤ ≤ ≈ 1.6497 . . . . 10 100 10 𝑛 2 𝑛=1 𝑛 2 𝑛2 𝑛=1 𝑛=1
The actual sum is 𝜋2/6 ≈ 1.6449 . . ..
5.5.2
Exercises
Exercise 5.5.1: Finish the proof of Proposition 5.5.2. Exercise 5.5.2: Find out for which 𝑎 ∈ ℝ does bound for the sum.
Í∞
𝑛=1
𝑒 𝑎𝑛 converge. When the series converges, find an upper
Exercise 5.5.3: a) Estimate
1 𝑛=1 𝑛(𝑛+1)
Í∞
correct to within 0.01 using the integral test.
b) Compute the limit of the series exactly and compare. Hint: The sum telescopes. Exercise 5.5.4: Prove
∫
∞
−∞
|sinc(𝑥)| 𝑑𝑥 = ∞.
Hint: Again, it is enough to show this on just one side. Exercise 5.5.5: Can you interpret
1
∫
−1
1
p
as an improper integral? If so, compute its value.
|𝑥|
𝑑𝑥
Exercise 5.5.6: Take 𝑓 : [0, ∞) → ℝ, Riemann integrable on every interval [0, 𝑏], and such that there exist 𝑀, 𝑎, and 𝑇, such that | 𝑓 (𝑡)| ≤ 𝑀𝑒 𝑎𝑡 for all 𝑡 ≥ 𝑇. Show that the Laplace transform of 𝑓 exists. That is, for every 𝑠 > 𝑎 the following integral converges: 𝐹(𝑠) B
∫ 0
∞
𝑓 (𝑡)𝑒 −𝑠𝑡 𝑑𝑡.
Exercise 5.5.7: Let 𝑓 : ℝ → ℝ be a Riemann integrable function on every interval [𝑎, 𝑏], and such that ∫∞ | 𝑓 (𝑥)| 𝑑𝑥 < ∞. Show that the Fourier sine and cosine transforms exist. That is, for every 𝜔 ≥ 0 the −∞ following integrals converge 1 𝐹 (𝜔) B 𝜋 𝑠
∫
∞
−∞
𝑓 (𝑡) sin(𝜔𝑡) 𝑑𝑡,
1 𝐹 (𝜔) B 𝜋
Furthermore, show that 𝐹 𝑠 and 𝐹 𝑐 are bounded functions.
𝑐
∫
∞
−∞
𝑓 (𝑡) cos(𝜔𝑡) 𝑑𝑡.
225
5.5. IMPROPER INTEGRALS Exercise 5.5.8: Suppose 𝑓 : [0, ∞) → ℝ is Riemann integrable on every interval [0, 𝑏]. Show that
∫ 𝑏 converges if and only if for every 𝜖 > 0 there exists an 𝑀 such that if 𝑀 ≤ 𝑎 < 𝑏, then 𝑎 𝑓 < 𝜖.
∫∞ 0
𝑓
Exercise 5.5.9: Suppose 𝑓 : [0, ∞) → ℝ is nonnegative and decreasing. Prove: a) If
∫∞ 0
𝑓 < ∞, then lim 𝑓 (𝑥) = 0. 𝑥→∞
b) The converse does not hold. Exercise 5.5.10: ∫ ∞Find an example of an unbounded continuous function 𝑓 : [0, ∞) → ℝ that is nonnegative and such that 0 𝑓 < ∞. Note that lim𝑥→∞ 𝑓 (𝑥) will not exist; compare previous exercise. Hint: On each interval [𝑘, 𝑘 + 1], 𝑘 ∈ ℕ , define a function whose integral over this interval is less than say 2−𝑘 .
Exercise 5.5.11 (More challenging): Find an example of a function 𝑓 : [0, ∞) → ℝ integrable on all ∫𝑛 ∫∞ intervals such that lim𝑛→∞ 0 𝑓 converges as a limit of a sequence (so 𝑛 ∈ ℕ ), but such that 0 𝑓 does not exist. Hint: For all 𝑛 ∈ ℕ , divide [𝑛, 𝑛 + 1] into two halves. On one half make the function negative, on the other make the function positive. Exercise 5.5.12: Suppose 𝑓 : [1, ∞) → ℝ is such that 𝑔(𝑥) B 𝑥 2 𝑓 (𝑥) is a bounded function. Prove that ∫∞ 𝑓 converges. 1 It is sometimes desirable to assign a value to integrals that normally cannot be interpreted even
∫1
as improper integrals, e.g. −1 1/𝑥 𝑑𝑥. Suppose 𝑓 : [𝑎, 𝑏] → ℝ is a function and 𝑎 < 𝑐 < 𝑏, where 𝑓 is Riemann integrable on the intervals [𝑎, 𝑐 − 𝜖] and [𝑐 + 𝜖, 𝑏] for all 𝜖 > 0. Define the Cauchy principal value of
∫𝑏 𝑎
𝑓 as
𝑝.𝑣.
∫ 𝑎
𝑏
𝑓 B lim+
∫
𝜖→0
𝑐−𝜖
𝑓 +
𝑎
∫
𝑏
𝑓 ,
𝑐+𝜖
if the limit exists. Exercise 5.5.13: a) Compute 𝑝.𝑣.
∫1 −1
1/𝑥
𝑑𝑥.
∫ −𝜖
b) Compute lim𝜖→0+ (
−1
1/𝑥
𝑑𝑥 +
∫1
2𝜖
1/𝑥
𝑑𝑥) and show it is not equal to the principal value.
c) Show that if 𝑓 is integrable on [𝑎, 𝑏], then 𝑝.𝑣.
∫𝑏 𝑎
𝑓 =
∫𝑏 𝑎
𝑓 (for an arbitrary 𝑐 ∈ (𝑎, 𝑏)).
d) Suppose 𝑓 : [−1, 1] → ℝ is an odd function ( 𝑓 (−𝑥) = − 𝑓 (𝑥)) that is integrable on [−1, −𝜖] and [𝜖, 1] for all 𝜖 > 0. Prove that 𝑝.𝑣.
∫1
−1
𝑓 =0
e) Suppose 𝑓 : [−1, 1] → ℝ is continuous and differentiable at 0. Show that 𝑝.𝑣.
∫1 −1
𝑓 (𝑥) 𝑥
𝑑𝑥 exists.
Exercise 5.5.14: Let 𝑓 : ℝ → ℝ and 𝑔 : ℝ → ℝ be continuous functions, where 𝑔(𝑥) = 0 for all 𝑥 ∉ [𝑎, 𝑏] for some interval [𝑎, 𝑏]. a) Show that the convolution
(𝑔 ∗ 𝑓 )(𝑥) B is well-defined for all 𝑥 ∈ ℝ.
b) Suppose
∫∞
−∞
∫
∞
−∞
𝑓 (𝑡)𝑔(𝑥 − 𝑡) 𝑑𝑡
| 𝑓 (𝑥)| 𝑑𝑥 < ∞. Prove that
lim (𝑔 ∗ 𝑓 )(𝑥) = 0,
𝑥→−∞
and
lim (𝑔 ∗ 𝑓 )(𝑥) = 0.
𝑥→∞
226
CHAPTER 5. THE RIEMANN INTEGRAL
Chapter 6 Sequences of Functions 6.1
Pointwise and uniform convergence
Note: 1–1.5 lecture Up till now, when we talked about limits of sequences we talked about sequences of numbers. A very useful concept in analysis is a sequence of functions. For example, a solution to some differential equation might be found by finding only approximate solutions. Then the actual solution is some sort of limit of those approximate solutions. When talking about sequences of functions, the tricky part is that there are multiple notions of a limit. Let us describe two common notions of a limit of a sequence of functions.
6.1.1
Pointwise convergence
Definition 6.1.1. For every 𝑛 ∈ ℕ , let 𝑓𝑛 : 𝑆 → ℝ be a function. The sequence { 𝑓𝑛 }∞ 𝑛=1 converges pointwise† to 𝑓 : 𝑆 → ℝ if for every 𝑥 ∈ 𝑆, we have 𝑓 (𝑥) = lim 𝑓𝑛 (𝑥). 𝑛→∞
Limits of sequences of numbers are unique, and so if a sequence { 𝑓𝑛 }∞ converges 𝑛=1 pointwise, the limit function 𝑓 is unique. It is common to say that 𝑓𝑛 : 𝑆 → ℝ converges pointwise to 𝑓 on 𝑇 ⊂ 𝑆 for some 𝑓 : 𝑇 → ℝ. In that case we mean 𝑓 (𝑥) = lim𝑛→∞ 𝑓𝑛 (𝑥) for every 𝑥 ∈ 𝑇. In other words, the restrictions of 𝑓𝑛 to 𝑇 converge pointwise to 𝑓 . Example 6.1.2: On [−1, 1], the sequence of functions defined by 𝑓𝑛 (𝑥) B 𝑥 2𝑛 converges pointwise to 𝑓 : [−1, 1] → ℝ, where
( 𝑓 (𝑥) =
1 0
if 𝑥 = −1 or 𝑥 = 1, otherwise.
See Figure 6.1. † Unless
otherwise specified, converges generally means converges pointwise.
228
CHAPTER 6. SEQUENCES OF FUNCTIONS
𝑥2
𝑥4
𝑥 6 𝑥 16
Figure 6.1: Graphs of 𝑓1 , 𝑓2 , 𝑓3 , and 𝑓8 for 𝑓𝑛 (𝑥) B 𝑥 2𝑛 .
To see this is so, first take 𝑥 ∈ (−1, 1). Then 0 ≤ 𝑥 2 < 1. We have seen before that
2𝑛 𝑥 − 0 = (𝑥 2 )𝑛 → 0 as 𝑛 → ∞.
Therefore, lim𝑛→∞ 𝑓𝑛 (𝑥) = 0. When 𝑥 = 1 or then 𝑥 2𝑛 = 1 for all 𝑛 and hence lim𝑛→∞ 𝑓𝑛 (𝑥) = 1. For all other 𝑥 = −1, ∞ 𝑥, the sequence 𝑓𝑛 (𝑥) 𝑛=1 does not converge. Often, functions are given as a series. In this case, we use the notion of pointwise convergence to find the values of the function. Example 6.1.3: We write
∞ Õ
𝑥𝑘
𝑘=0
to denote the limit of the functions 𝑓𝑛 (𝑥) B
𝑛 Õ
𝑥𝑘.
𝑘=0
When studying series, we saw that for (−1, 1) the 𝑓𝑛 converge pointwise to 1 . 1−𝑥
1 The subtle point here is that while 1−𝑥 is defined for all 𝑥 ≠ 1, and 𝑓𝑛 are defined for all 𝑥 (even at 𝑥 = 1), convergence only happens on (−1, 1). Therefore, when we write
𝑓 (𝑥) B
∞ Õ
𝑥𝑘
𝑘=0
we mean that 𝑓 is defined on (−1, 1) and is the pointwise limit of the partial sums.
6.1. POINTWISE AND UNIFORM CONVERGENCE
229
Example 6.1.4: Let 𝑓𝑛 (𝑥) B sin(𝑛𝑥). Then 𝑓𝑛 does not converge pointwise to any function on any interval. It may converge at certain points, such as when 𝑥 = 0 or 𝑥 = 𝜋. It is left as an exercise that in any interval [𝑎, 𝑏], there exists an 𝑥 such that sin(𝑥𝑛) does not have a limit as 𝑛 goes to infinity. See Figure 6.2.
Figure 6.2: Graphs of sin(𝑛𝑥) for 𝑛 = 1, 2, . . . , 10, with higher 𝑛 in lighter gray.
Before we move to uniform convergence, let us reformulate pointwise convergence in a different way. We leave the proof to the reader—it is a simple application of the definition of convergence of a sequence of real numbers. Proposition 6.1.5. Let 𝑓𝑛 : 𝑆 → ℝ and 𝑓 : 𝑆 → ℝ be functions. Then { 𝑓𝑛 }∞ converges 𝑛=1 pointwise to 𝑓 if and only if for every 𝑥 ∈ 𝑆 and every 𝜖 > 0, there exists an 𝑁 ∈ ℕ such that | 𝑓𝑛 (𝑥) − 𝑓 (𝑥)| < 𝜖
for all 𝑛 ≥ 𝑁 .
The key point is that 𝑁 can depend on 𝑥, not just on 𝜖. For each 𝑥, we can pick a different 𝑁. If we could pick one 𝑁 for all 𝑥, we would have what is called uniform convergence.
6.1.2
Uniform convergence
Definition 6.1.6. Let 𝑓𝑛 : 𝑆 → ℝ and 𝑓 : 𝑆 → ℝ be functions. The sequence { 𝑓𝑛 }∞ converges 𝑛=1 uniformly to 𝑓 if for every 𝜖 > 0, there exists an 𝑁 ∈ ℕ such that for all 𝑛 ≥ 𝑁, | 𝑓𝑛 (𝑥) − 𝑓 (𝑥)| < 𝜖
for all 𝑥 ∈ 𝑆.
In uniform convergence, 𝑁 cannot depend on 𝑥. Given 𝜖 > 0, we must find an 𝑁 that works for all 𝑥 ∈ 𝑆. See Figure 6.3 for an illustration. Uniform convergence implies pointwise convergence, and the proof follows by Proposition 6.1.5: Proposition 6.1.7. Let { 𝑓𝑛 }∞ be a sequence of functions 𝑓𝑛 : 𝑆 → ℝ. If { 𝑓𝑛 }∞ converges 𝑛=1 𝑛=1 ∞ uniformly to 𝑓 : 𝑆 → ℝ, then { 𝑓𝑛 } 𝑛=1 converges pointwise to 𝑓 . The converse does not hold.
230
CHAPTER 6. SEQUENCES OF FUNCTIONS
𝑓 +𝜖 𝑓
𝑓𝑛
𝑓 −𝜖
Figure 6.3: In uniform convergence, for 𝑛 ≥ 𝑁, the functions 𝑓𝑛 are within a strip of ±𝜖 from 𝑓 .
Example 6.1.8: The functions 𝑓𝑛 (𝑥) B 𝑥 2𝑛 do not converge uniformly on [−1, 1], even though they converge pointwise. To see this, suppose for contradiction that the convergence is uniform. For 𝜖 B 1/2, there would have to exist an 𝑁 such that 𝑥 2𝑁 = 𝑥 2𝑁 − 0 < 1/2 for all 𝑥 ∈ (−1, 1) (as 𝑓𝑛 (𝑥) converges to 0 on (−1, 1)). But that means that for every sequence {𝑥 𝑘 }∞ in (−1, 1) such that lim 𝑘→∞ 𝑥 𝑘 = 1, we have 𝑥 2𝑁 < 1/2 for all 𝑘. On the other hand, 𝑘=1 𝑘 𝑥 2𝑁 is a continuous function of 𝑥 (it is a polynomial). Therefore, we obtain a contradiction 1 1 = 12𝑁 = lim 𝑥 2𝑁 𝑘 ≤ /2. 𝑘→∞
However, if we restrict our domain to [−𝑎, 𝑎] where 0 < 𝑎 < 1, then { 𝑓𝑛 }∞ converges 𝑛=1 2𝑛 uniformly to 0 on [−𝑎, 𝑎]. Note that 𝑎 → 0 as 𝑛 → ∞. Given 𝜖 > 0, pick 𝑁 ∈ ℕ such that 𝑎 2𝑛 < 𝜖 for all 𝑛 ≥ 𝑁. If 𝑥 ∈ [−𝑎, 𝑎], then |𝑥| ≤ 𝑎. So for all 𝑛 ≥ 𝑁 and all 𝑥 ∈ [−𝑎, 𝑎],
2𝑛 𝑥 = |𝑥| 2𝑛 ≤ 𝑎 2𝑛 < 𝜖.
6.1.3
Convergence in uniform norm
For bounded functions, there is another more abstract way to think of uniform convergence. To every bounded function we assign a certain nonnegative number that measures the “distance” of the function from the constant function 0. This number allows us to “measure” how far two functions are from each other. We then translate a statement about uniform convergence into a statement about a certain sequence of real numbers converging to zero. Definition 6.1.9. Let 𝑓 : 𝑆 → ℝ be a bounded function. Define ∥ 𝑓 ∥ 𝑆 B sup | 𝑓 (𝑥)| : 𝑥 ∈ 𝑆 .
We call ∥·∥ 𝑆 the uniform norm. Sometimes other notation* is used, such as ∥ 𝑓 ∥ 𝑢 . *The
notation nor terminology is not completely standardized. The norm is also called the sup norm or infinity norm, and in addition to ∥ 𝑓 ∥ 𝑢 and ∥ 𝑓 ∥ 𝑆 it is sometimes written as ∥ 𝑓 ∥ ∞ or ∥ 𝑓 ∥ ∞,𝑆 .
231
6.1. POINTWISE AND UNIFORM CONVERGENCE The subscript is the set over which the supremum is taken. So if 𝐾 ⊂ 𝑆 then ∥ 𝑓 ∥ 𝐾 = sup | 𝑓 (𝑥)| : 𝑥 ∈ 𝐾 .
Proposition 6.1.10. A sequence of bounded functions 𝑓𝑛 : 𝑆 → ℝ converges uniformly to 𝑓 : 𝑆 → ℝ if and only if lim ∥ 𝑓𝑛 − 𝑓 ∥ 𝑆 = 0. 𝑛→∞
Proof. First suppose lim𝑛→∞ ∥ 𝑓𝑛 − 𝑓 ∥ 𝑆 = 0. Let 𝜖 > 0 be given. There exists an 𝑁 such that for 𝑛 ≥ 𝑁, we have ∥ 𝑓𝑛 − 𝑓 ∥ 𝑆 < 𝜖. As ∥ 𝑓𝑛 − 𝑓 ∥ 𝑆 is the supremum of | 𝑓𝑛 (𝑥) − 𝑓 (𝑥)|, we see that for all 𝑥 ∈ 𝑆, we have | 𝑓𝑛 (𝑥) − 𝑓 (𝑥)| ≤ ∥ 𝑓𝑛 − 𝑓 ∥ 𝑆 < 𝜖. On the other hand, suppose { 𝑓𝑛 }∞ converges uniformly to 𝑓 . Let 𝜖 > 0 be given. 𝑛=1 Then find 𝑁 such that for all 𝑛 ≥ 𝑁, we have | 𝑓𝑛 (𝑥) − 𝑓 (𝑥)| < 𝜖 for all 𝑥 ∈ 𝑆. Taking the supremum over 𝑥 ∈ 𝑆, we see that ∥ 𝑓𝑛 − 𝑓 ∥ 𝑆 ≤ 𝜖. Hence lim𝑛→∞ ∥ 𝑓𝑛 − 𝑓 ∥ 𝑆 = 0. □ Sometimes it is said that { 𝑓𝑛 }∞ converges to 𝑓 in uniform norm instead of converges 𝑛=1 uniformly if ∥ 𝑓𝑛 − 𝑓 ∥ 𝑆 → 0. The proposition says that the two notions are the same thing for bounded functions. Example 6.1.11: Let 𝑓𝑛 : [0, 1] → ℝ be defined by 𝑓𝑛 (𝑥) B converges uniformly to 𝑓 (𝑥) B 𝑥. Let us compute: ∥ 𝑓𝑛 − 𝑓 ∥ [0,1]
𝑛𝑥+sin(𝑛𝑥 2 ) . 𝑛
We claim { 𝑓𝑛 }∞ 𝑛=1
𝑛𝑥 + sin(𝑛𝑥 2 ) = sup − 𝑥 : 𝑥 ∈ [0, 1] 𝑛 ( ) sin(𝑛𝑥 2 ) = sup
𝑛
: 𝑥 ∈ [0, 1]
≤ sup 1/𝑛 : 𝑥 ∈ [0, 1]
= 1/𝑛 .
Using the uniform norm, we define Cauchy sequences in a similar way as we define Cauchy sequences of real numbers. Definition 6.1.12. Let 𝑓𝑛 : 𝑆 → ℝ be bounded functions. The sequence is Cauchy in the uniform norm or uniformly Cauchy if for every 𝜖 > 0, there exists an 𝑁 ∈ ℕ such that for all 𝑚, 𝑘 ≥ 𝑁, ∥ 𝑓𝑚 − 𝑓 𝑘 ∥ 𝑆 < 𝜖. Proposition 6.1.13. Let 𝑓𝑛 : 𝑆 → ℝ be bounded functions. Then { 𝑓𝑛 }∞ is Cauchy in the uniform 𝑛=1 ∞ norm if and only if there exists an 𝑓 : 𝑆 → ℝ and { 𝑓𝑛 } 𝑛=1 converges uniformly to 𝑓 . Proof. First suppose { 𝑓𝑛 }∞ is Cauchy in the uniform norm. Let us define 𝑓 . Fix 𝑥. The 𝑛=1 ∞ sequence 𝑓𝑛 (𝑥) 𝑛=1 is Cauchy because | 𝑓𝑚 (𝑥) − 𝑓 𝑘 (𝑥)| ≤ ∥ 𝑓𝑚 − 𝑓 𝑘 ∥ 𝑆 .
232
CHAPTER 6. SEQUENCES OF FUNCTIONS
Thus 𝑓𝑛 (𝑥)
∞
𝑛=1
converges to some real number. Define 𝑓 : 𝑆 → ℝ by 𝑓 (𝑥) B lim 𝑓𝑛 (𝑥). 𝑛→∞
The sequence { 𝑓𝑛 }∞ converges pointwise to 𝑓 . To show that the convergence is uniform, 𝑛=1 let 𝜖 > 0 be given. Find an 𝑁 such that for all 𝑚, 𝑘 ≥ 𝑁, we have ∥ 𝑓𝑚 − 𝑓 𝑘 ∥ 𝑆 < 𝜖/2. In other words, for all 𝑥, we have | 𝑓𝑚 (𝑥) − 𝑓 𝑘 (𝑥)| < 𝜖/2. For any fixed 𝑥, take the limit as 𝑘 goes to infinity. Then | 𝑓𝑚 (𝑥) − 𝑓 𝑘 (𝑥)| goes to | 𝑓𝑚 (𝑥) − 𝑓 (𝑥)|. Consequently for all 𝑥, | 𝑓𝑚 (𝑥) − 𝑓 (𝑥)| ≤ 𝜖/2 < 𝜖. Hence, { 𝑓𝑛 }∞ converges uniformly. 𝑛=1 Next, we prove the other direction. Suppose { 𝑓𝑛 }∞ converges uniformly to 𝑓 . Given 𝑛=1 𝜖 > 0, find 𝑁 such that for all 𝑛 ≥ 𝑁, we have | 𝑓𝑛 (𝑥) − 𝑓 (𝑥)| < 𝜖/4 for all 𝑥 ∈ 𝑆. Therefore, for all 𝑚, 𝑘 ≥ 𝑁 and all 𝑥, | 𝑓𝑚 (𝑥) − 𝑓 𝑘 (𝑥)| = | 𝑓𝑚 (𝑥) − 𝑓 (𝑥) + 𝑓 (𝑥) − 𝑓 𝑘 (𝑥)|
≤ | 𝑓𝑚 (𝑥) − 𝑓 (𝑥)| + | 𝑓 (𝑥) − 𝑓 𝑘 (𝑥)| < 𝜖/4 + 𝜖/4 = 𝜖/2.
Take the supremum over all 𝑥 to obtain ∥ 𝑓𝑚 − 𝑓 𝑘 ∥ 𝑆 ≤ 𝜖/2 < 𝜖.
6.1.4
□
Exercises
Exercise 6.1.1: Let 𝑓 and 𝑔 be bounded functions on [𝑎, 𝑏]. Prove ∥ 𝑓 + 𝑔 ∥ [𝑎,𝑏] ≤ ∥ 𝑓 ∥ [𝑎,𝑏] + ∥ 𝑔∥ [𝑎,𝑏] . Exercise 6.1.2: 𝑒 𝑥/𝑛 for 𝑥 ∈ ℝ. 𝑛 b) Is the limit uniform on ℝ? a) Find the pointwise limit
c) Is the limit uniform on [0, 1]? Exercise 6.1.3: Suppose 𝑓𝑛 : 𝑆 → ℝ are functions that converge uniformly to 𝑓 : 𝑆 → ℝ. Suppose 𝐴 ⊂ 𝑆. Show that the sequence of restrictions { 𝑓𝑛 | 𝐴 }∞ converges uniformly to 𝑓 | 𝐴 . 𝑛=1 Exercise 6.1.4: Suppose { 𝑓𝑛 }∞ and {𝑔𝑛 }∞ defined on some set 𝐴 converge to 𝑓 and 𝑔 respectively 𝑛=1 𝑛=1 ∞ pointwise. Show that { 𝑓𝑛 + 𝑔𝑛 } 𝑛=1 converges pointwise to 𝑓 + 𝑔. Exercise 6.1.5: Suppose { 𝑓𝑛 }∞ and {𝑔𝑛 }∞ defined on some set 𝐴 converge to 𝑓 and 𝑔 respectively 𝑛=1 𝑛=1 uniformly on 𝐴. Show that { 𝑓𝑛 + 𝑔𝑛 }∞ converges uniformly to 𝑓 + 𝑔 on 𝐴. 𝑛=1 Exercise 6.1.6: Find an example of a sequence of functions { 𝑓𝑛 }∞ and {𝑔𝑛 }∞ that converge uniformly to 𝑛=1 𝑛=1 ∞ some 𝑓 and 𝑔 on some set 𝐴, but such that { 𝑓𝑛 𝑔𝑛 } 𝑛=1 (the multiple) does not converge uniformly to 𝑓 𝑔 on 𝐴. Hint: Let 𝐴 B ℝ, let 𝑓 (𝑥) B 𝑔(𝑥) B 𝑥. You can even pick 𝑓𝑛 = 𝑔𝑛 .
233
6.1. POINTWISE AND UNIFORM CONVERGENCE
Exercise 6.1.7: Suppose there exists a sequence of functions {𝑔𝑛 }∞ uniformly converging to 0 on 𝐴. Now 𝑛=1 ∞ suppose we have a sequence of functions { 𝑓𝑛 } 𝑛=1 and a function 𝑓 on 𝐴 such that | 𝑓𝑛 (𝑥) − 𝑓 (𝑥)| ≤ 𝑔𝑛 (𝑥) for all 𝑥 ∈ 𝐴. Show that { 𝑓𝑛 }∞ converges uniformly to 𝑓 on 𝐴. 𝑛=1 Exercise 6.1.8: Let { 𝑓𝑛 }∞ , {𝑔𝑛 }∞ and {ℎ 𝑛 }∞ be sequences of functions on [𝑎, 𝑏]. Suppose { 𝑓𝑛 }∞ 𝑛=1 𝑛=1 𝑛=1 𝑛=1 and {ℎ 𝑛 }∞ converge uniformly to some function 𝑓 : [𝑎, 𝑏] → ℝ and suppose 𝑓 (𝑥) ≤ 𝑔 (𝑥) ≤ ℎ (𝑥) for 𝑛 𝑛 𝑛 𝑛=1 all 𝑥 ∈ [𝑎, 𝑏]. Show that {𝑔𝑛 }∞ converges uniformly to 𝑓 . 𝑛=1 Exercise 6.1.9: Let 𝑓𝑛 : [0, 1] → ℝ be a sequence of increasing functions (that is, 𝑓𝑛 (𝑥) ≥ 𝑓𝑛 (𝑦) whenever 𝑥 ≥ 𝑦). Suppose 𝑓𝑛 (0) = 0 and lim 𝑓𝑛 (1) = 0. Show that { 𝑓𝑛 }∞ converges uniformly to 0. 𝑛=1 𝑛→∞
Exercise 6.1.10: Consider a sequence of functions 𝑓𝑛 : [0, 1] → ℝ so that there is a sequence of distinct numbers 𝑥 𝑛 ∈ [0, 1] such that for all 𝑛, 𝑓𝑛 (𝑥 𝑛 ) = 1. Prove or disprove the following statements: a) True or false: There exists { 𝑓𝑛 }∞ as above that converges pointwise to 0. 𝑛=1
b) True or false: There exists { 𝑓𝑛 }∞ as above that converges uniformly to 0. 𝑛=1 Exercise 6.1.11: Fix a continuous ℎ : [𝑎, 𝑏] → ℝ. Let 𝑓 (𝑥) B ℎ(𝑥) for 𝑥 ∈ [𝑎, 𝑏], 𝑓 (𝑥) B ℎ(𝑎) for 𝑥 < 𝑎 and 𝑓 (𝑥) B ℎ(𝑏) for all 𝑥 > 𝑏. First show that 𝑓 : ℝ → ℝ is continuous. Now let 𝑓𝑛 be the function 𝑔 from Exercise 5.3.7 with 𝜖 = 1/𝑛 , defined on the interval [𝑎, 𝑏]. That is, 𝑛 𝑓𝑛 (𝑥) B 2
∫
𝑥+1/𝑛
𝑓.
𝑥−1/𝑛
Show that { 𝑓𝑛 }∞ converges uniformly to ℎ on [𝑎, 𝑏]. 𝑛=1 Exercise 6.1.12: Prove that if a sequence of functions 𝑓𝑛 : 𝑆 → ℝ converge uniformly to a bounded function 𝑓 : 𝑆 → ℝ, then there exists an 𝑁 such that for all 𝑛 ≥ 𝑁, the 𝑓𝑛 are bounded. Exercise 6.1.13: Suppose there is a single constant 𝐵 and a sequence of functions 𝑓𝑛 : 𝑆 → ℝ that are bounded by 𝐵, that is | 𝑓𝑛 (𝑥)| ≤ 𝐵 for all 𝑥 ∈ 𝑆. Suppose that { 𝑓𝑛 }∞ converges pointwise to 𝑓 : 𝑆 → ℝ. 𝑛=1 Prove that 𝑓 is bounded. Exercise 6.1.14 (requires §2.6): In Example 6.1.3 we saw a) Show that whenever 0 ≤ 𝑐 < 1, the series
b) Show that the series
Í∞
𝑘=0
Í∞
𝑘=0
𝑥𝑘
Í∞
𝑘=0
𝑥 𝑘 converges pointwise to
converges uniformly on [−𝑐, 𝑐].
𝑥 𝑘 does not converge uniformly on (−1, 1).
1 1−𝑥
on (−1, 1).
234
6.2
CHAPTER 6. SEQUENCES OF FUNCTIONS
Interchange of limits
Note: 1–2.5 lectures, subsections on derivatives and power series (which requires §2.6) optional. Large parts of modern analysis deal mainly with the question of the interchange of two limiting operations. When we have a chain of two limits, we cannot always just swap the limits. For instance,
𝑛 𝑛 0 = lim lim ≠ lim lim = 1. 𝑛→∞ 𝑘→∞ 𝑛 + 𝑘 𝑘→∞ 𝑛→∞ 𝑛 + 𝑘
When talking about sequences of functions, interchange of limits comes up quite often. We look at several instances: continuity of the limit, the integral of the limit, the derivative of the limit, and the convergence of power series.
6.2.1
Continuity of the limit
If we have a sequence { 𝑓𝑛 }∞ of continuous functions, is the limit continuous? Suppose 𝑛=1 𝑓 is the (pointwise) limit of { 𝑓𝑛 }∞ . If lim 𝑘→∞ 𝑥 𝑘 = 𝑥, we are interested in the following 𝑛=1 interchange of limits, where the equality to prove is marked with a question mark. Equality is not always true, in fact, the limits to the left of the question mark might not even exist.
?
lim 𝑓 (𝑥 𝑘 ) = lim lim 𝑓𝑛 (𝑥 𝑘 ) = lim lim 𝑓𝑛 (𝑥 𝑘 ) = lim 𝑓𝑛 (𝑥) = 𝑓 (𝑥).
𝑘→∞
𝑘→∞ 𝑛→∞
𝑛→∞ 𝑘→∞
𝑛→∞
We wish to find conditions on the sequence { 𝑓𝑛 }∞ so that the equation above holds. If we 𝑛=1 only require pointwise convergence, then the limit of a sequence of functions need not be continuous, and the equation above need not hold. Example 6.2.1: Define 𝑓𝑛 : [0, 1] → ℝ as
(
𝑓𝑛 (𝑥) B
1 − 𝑛𝑥 0
if 𝑥 < 1/𝑛 ,
if 𝑥 ≥ 1/𝑛 .
See Figure 6.4. Each function 𝑓𝑛 is continuous. Fix an 𝑥 ∈ (0, 1]. If 𝑛 ≥ 1/𝑥 , then 𝑥 ≥ 1/𝑛 . Therefore for 𝑛 ≥ 1/𝑥 , we have 𝑓𝑛 (𝑥) = 0, and so lim 𝑓𝑛 (𝑥) = 0.
𝑛→∞
On the other hand, if 𝑥 = 0, then lim 𝑓𝑛 (0) = lim 1 = 1.
𝑛→∞
𝑛→∞
Thus the pointwise limit of 𝑓𝑛 is the function 𝑓 : [0, 1] → ℝ defined by
(
𝑓 (𝑥) B The function 𝑓 is not continuous at 0.
1
if 𝑥 = 0,
0
if 𝑥 > 0.
235
6.2. INTERCHANGE OF LIMITS
1
1/𝑛
Figure 6.4: Graph of 𝑓𝑛 (𝑥).
If we, however, require the convergence to be uniform, the limits can be interchanged. Theorem 6.2.2. Suppose 𝑆 ⊂ ℝ. Let { 𝑓𝑛 }∞ be a sequence of continuous functions 𝑓𝑛 : 𝑆 → ℝ 𝑛=1 converging uniformly to 𝑓 : 𝑆 → ℝ. Then 𝑓 is continuous. Proof. Let 𝑥 ∈ 𝑆 be fixed. Let {𝑥 𝑛 }∞ be a sequence in 𝑆 converging to 𝑥. 𝑛=1 ∞ Let 𝜖 > 0 be given. As { 𝑓 𝑘 } 𝑘=1 converges uniformly to 𝑓 , we find a 𝑘 ∈ ℕ such that | 𝑓 𝑘 (𝑦) − 𝑓 (𝑦)| < 𝜖/3 for all 𝑦 ∈ 𝑆. As 𝑓 𝑘 is continuous at 𝑥, we find an 𝑁 ∈ ℕ such that for all 𝑚 ≥ 𝑁, | 𝑓 𝑘 (𝑥 𝑚 ) − 𝑓 𝑘 (𝑥)| < 𝜖/3. Thus for all 𝑚 ≥ 𝑁, | 𝑓 (𝑥 𝑚 ) − 𝑓 (𝑥)| = | 𝑓 (𝑥 𝑚 ) − 𝑓 𝑘 (𝑥 𝑚 ) + 𝑓 𝑘 (𝑥 𝑚 ) − 𝑓 𝑘 (𝑥) + 𝑓 𝑘 (𝑥) − 𝑓 (𝑥)|
≤ | 𝑓 (𝑥 𝑚 ) − 𝑓 𝑘 (𝑥 𝑚 )| + | 𝑓 𝑘 (𝑥 𝑚 ) − 𝑓 𝑘 (𝑥)| + | 𝑓 𝑘 (𝑥) − 𝑓 (𝑥)|
< 𝜖/3 + 𝜖/3 + 𝜖/3 = 𝜖.
∞
Therefore, 𝑓 (𝑥 𝑚 ) 𝑚=1 converges to 𝑓 (𝑥), and consequently 𝑓 is continuous at 𝑥. As 𝑥 was arbitrary, 𝑓 is continuous everywhere. □
6.2.2
Integral of the limit
Again, if we simply require pointwise convergence, then the integral of a limit of a sequence of functions need not be equal to the limit of the integrals.
236
CHAPTER 6. SEQUENCES OF FUNCTIONS
Example 6.2.3: Define 𝑓𝑛 : [0, 1] → ℝ as
0
if 𝑥 = 0,
𝑓𝑛 (𝑥) B 𝑛 −
𝑛2 𝑥
0
if 0 < 𝑥 < 1/𝑛 , if 𝑥 ≥ 1/𝑛 .
See Figure 6.5.
𝑛
1/𝑛
Figure 6.5: Graph of 𝑓𝑛 (𝑥).
Each 𝑓𝑛 is Riemann integrable (it is continuous on (0, 1] and bounded), and the fundamental theorem of calculus says that
∫ 0
1
𝑓𝑛 =
1/𝑛
∫ 0
(𝑛 − 𝑛 2 𝑥) 𝑑𝑥 = 1/2.
Let us compute the pointwise limit of { 𝑓𝑛 }∞ . Fix an 𝑥 ∈ (0, 1]. For 𝑛 ≥ 1/𝑥 , we have 𝑛=1 𝑥 ≥ 1/𝑛 and so 𝑓𝑛 (𝑥) = 0. Hence, lim 𝑓𝑛 (𝑥) = 0. 𝑛→∞
We also have 𝑓𝑛 (0) = 0 for all 𝑛. So the pointwise limit of { 𝑓𝑛 }∞ is the zero function. In 𝑛=1 summary, 1/2
= lim
𝑛→∞
∫ 0
1
𝑓𝑛 (𝑥) 𝑑𝑥 ≠
∫ 0
1
lim 𝑓𝑛 (𝑥) 𝑑𝑥 =
𝑛→∞
∫ 0
1
0 𝑑𝑥 = 0.
But if we require the convergence to be uniform, the limits can be interchanged.* *Weaker
conditions are sufficient for this kind of theorem, but to prove such a generalization requires more sophisticated machinery than we cover here: the Lebesgue integral. In particular, the theorem holds with pointwise convergence as long as 𝑓 is integrable and there is an 𝑀 such that ∥ 𝑓𝑛 ∥ [𝑎,𝑏] ≤ 𝑀 for all 𝑛.
237
6.2. INTERCHANGE OF LIMITS
Theorem 6.2.4. Let { 𝑓𝑛 }∞ be a sequence of Riemann integrable functions 𝑓𝑛 : [𝑎, 𝑏] → ℝ 𝑛=1 converging uniformly to 𝑓 : [𝑎, 𝑏] → ℝ. Then 𝑓 is Riemann integrable, and 𝑏
∫
𝑓 = lim
𝑛→∞
𝑎
𝑏
∫ 𝑎
𝑓𝑛 .
Proof. Let 𝜖 > 0 be given. As 𝑓𝑛 goes to 𝑓 uniformly, we find an 𝑀 ∈ ℕ such that for all 𝜖 𝑛 ≥ 𝑀, we have | 𝑓𝑛 (𝑥) − 𝑓 (𝑥)| < 2(𝑏−𝑎) for all 𝑥 ∈ [𝑎, 𝑏]. In particular, by reverse triangle 𝜖 inequality, | 𝑓 (𝑥)| < 2(𝑏−𝑎) + | 𝑓𝑛 (𝑥)| for all 𝑥. Hence 𝑓 is bounded, as 𝑓𝑛 is bounded. Note that 𝑓𝑛 is integrable and compute
∫ 𝑎
𝑏
𝑓 −
∫ 𝑎
𝑏
𝑓 = ≤ = =
𝑏
∫
𝑓 (𝑥) − 𝑓𝑛 (𝑥) + 𝑓𝑛 (𝑥) 𝑑𝑥 −
𝑎
∫
𝑏
𝑓 (𝑥) − 𝑓𝑛 (𝑥) 𝑑𝑥 +
𝑎
∫
𝑏
𝑓 (𝑥) − 𝑓𝑛 (𝑥) 𝑑𝑥 +
𝑎
∫
𝑏
𝑓 (𝑥) − 𝑓𝑛 (𝑥) 𝑑𝑥 −
𝑎
∫
𝑎
𝑓 (𝑥) − 𝑓𝑛 (𝑥) + 𝑓𝑛 (𝑥) 𝑑𝑥
𝑓𝑛 (𝑥) 𝑑𝑥 − 𝑏
𝑓𝑛 (𝑥) 𝑑𝑥 −
𝑎
∫
𝑎
𝑏
𝑏
𝑎
∫
∫
𝑏
∫
𝑓 (𝑥) − 𝑓𝑛 (𝑥) 𝑑𝑥 −
𝑎 𝑏
∫
𝑓 (𝑥) − 𝑓𝑛 (𝑥) 𝑑𝑥 −
𝑎
𝑏
∫
𝑓𝑛 (𝑥) 𝑑𝑥
𝑎
∫ 𝑎
𝑏
𝑓𝑛 (𝑥) 𝑑𝑥
𝑏
𝑓 (𝑥) − 𝑓𝑛 (𝑥) 𝑑𝑥
𝜖 𝜖 ≤ (𝑏 − 𝑎) + (𝑏 − 𝑎) = 𝜖. 2(𝑏 − 𝑎) 2(𝑏 − 𝑎)
The first inequality is Proposition 5.2.5. The second inequality follows by Proposition 5.1.8 −𝜖 𝜖 and the fact that for all 𝑥 ∈ [𝑎, 𝑏], we have 2(𝑏−𝑎) . As 𝜖 > 0 was < 𝑓 (𝑥) − 𝑓𝑛 (𝑥) < 2(𝑏−𝑎) arbitrary, 𝑓 is Riemann integrable.
∫𝑏
Finally, we compute 𝑎 𝑓 . We apply Proposition 5.1.10 in the calculation. Again, for all 𝑛 ≥ 𝑀 (where 𝑀 is the same as above),
∫ ∫ 𝑏 ∫ 𝑏 𝑏 𝑓 − 𝑓𝑛 = 𝑓 (𝑥) − 𝑓𝑛 (𝑥) 𝑑𝑥 𝑎 𝑎 𝑎 ≤ Therefore,
∫ 𝑏 𝑎
𝑓𝑛
∞
converges to 𝑛=1
∫𝑏 𝑎
𝜖 𝜖 (𝑏 − 𝑎) = < 𝜖. 2 2(𝑏 − 𝑎)
𝑓.
□
Example 6.2.5: Suppose we wish to compute lim
𝑛→∞
∫ 0
1
𝑛𝑥 + sin(𝑛𝑥 2 ) 𝑑𝑥. 𝑛
It is impossible to compute the integrals for any particular 𝑛 using calculus as sin(𝑛𝑥 2 ) has no closed-form antiderivative. However, we can compute the limit. We have shown before
238 that
CHAPTER 6. SEQUENCES OF FUNCTIONS 𝑛𝑥+sin(𝑛𝑥 2 ) 𝑛
converges uniformly on [0, 1] to 𝑥. By Theorem 6.2.4, the limit exists and lim
𝑛→∞
1
∫ 0
𝑛𝑥 + sin(𝑛𝑥 2 ) 𝑑𝑥 = 𝑛
∫
1
0
𝑥 𝑑𝑥 = 1/2.
Example 6.2.6: If convergence is only pointwise, the limit need not even be Riemann integrable. On [0, 1] define
( 𝑓𝑛 (𝑥) B
1 if 𝑥 = 𝑝/𝑞 in lowest terms and 𝑞 ≤ 𝑛,
0 otherwise.
Each function 𝑓𝑛 differs from the zero function at finitely many points; there are only finitely many fractions in [0, 1] with denominator less than or equal to 𝑛. So 𝑓𝑛 is integrable
∫1
∫1
and 0 𝑓𝑛 = 0 0 = 0. It is an easy exercise to show that { 𝑓𝑛 }∞ converges pointwise to the 𝑛=1 Dirichlet function ( 1 if 𝑥 ∈ ℚ , 𝑓 (𝑥) B 0 otherwise, which is not Riemann integrable. Example 6.2.7: In fact, if the convergence is only pointwise, the limit of bounded functions is not even necessarily bounded. Define 𝑓𝑛 : [0, 1] → ℝ by
( 𝑓𝑛 (𝑥) B
0 1/𝑥
if 𝑥 < 1/𝑛 , else.
For every 𝑛 we get that | 𝑓𝑛 (𝑥)| ≤ 𝑛 for all 𝑥 ∈ [0, 1] so the functions are bounded. However, { 𝑓𝑛 } ∞ converges pointwise to the unbounded function 𝑛=1
( 𝑓 (𝑥) B
6.2.3
0
if 𝑥 = 0,
1/𝑥
else.
Derivative of the limit
While uniform convergence is enough to swap limits with integrals, it is not, however, enough to swap limits with derivatives, unless you also have uniform convergence of the derivatives themselves. Example 6.2.8: Let 𝑓𝑛 (𝑥) B sin(𝑛𝑥) 𝑛 . Then 𝑓𝑛 converges uniformly to 0. See Figure 6.6. The derivative of the limit is 0. But 𝑓𝑛′ (𝑥) = cos(𝑛𝑥), which does not converge even pointwise, for example 𝑓𝑛′ (𝜋) = (−1)𝑛 . Moreover, 𝑓𝑛′ (0) = 1 for all 𝑛, which does converge, but not to 0.
239
6.2. INTERCHANGE OF LIMITS
Figure 6.6: Graphs of
sin(𝑛𝑥) 𝑛
for 𝑛 = 1, 2, . . . , 10, with higher 𝑛 in lighter gray.
1 Example 6.2.9: Let 𝑓𝑛 (𝑥) B 1+𝑛𝑥 2 . If 𝑥 ≠ 0, then lim𝑛→∞ 𝑓𝑛 (𝑥) = 0, but lim𝑛→∞ 𝑓𝑛 (0) = 1. ∞ Hence, { 𝑓𝑛 } 𝑛=1 converges pointwise to a function that is not continuous at 0. We compute
𝑓𝑛′ (𝑥) =
−2𝑛𝑥 . (1 + 𝑛𝑥 2 )2
For every 𝑥, lim𝑛→∞ 𝑓𝑛′ (𝑥) = 0, so the derivatives converge pointwise to 0, but the reader can check that the convergence is not uniform on any interval containing 0. The limit of 𝑓𝑛 is not differentiable at 0—it is not even continuous at 0. See Figure 6.7.
Figure 6.7: Graphs of
1 1+𝑛𝑥 2
and its derivatives for 𝑛 = 1, 2, . . . , 10, with higher 𝑛 in lighter gray.
See the exercises for more examples. Using the fundamental theorem of calculus, we find an answer for continuously differentiable functions. The following theorem is true even if we do not assume continuity of the derivatives, but the proof is more difficult. Theorem 6.2.10. Let 𝐼 be a bounded interval and let 𝑓𝑛 : 𝐼 → ℝ be continuouslydifferentiable ∞ converges uniformly to 𝑔 : 𝐼 → ℝ , and suppose 𝑓 functions. Suppose { 𝑓𝑛′ }∞ (𝑐) is 𝑛 𝑛=1 𝑛=1 ∞ a convergent sequence for some 𝑐 ∈ 𝐼. Then { 𝑓𝑛 } 𝑛=1 converges uniformly to a continuously differentiable function 𝑓 : 𝐼 → ℝ, and 𝑓 ′ = 𝑔.
240
CHAPTER 6. SEQUENCES OF FUNCTIONS
Proof. Define 𝑓 (𝑐) B lim𝑛→∞ 𝑓𝑛 (𝑐). As 𝑓𝑛′ are continuous and hence Riemann integrable, then via the fundamental theorem of calculus, we find that for 𝑥 ∈ 𝐼, 𝑓𝑛 (𝑥) = 𝑓𝑛 (𝑐) +
𝑥
∫
𝑓𝑛′ .
𝑐
As { 𝑓𝑛′ }∞ converges uniformly on 𝐼, it converges uniformly on [𝑐, 𝑥] (or [𝑥, 𝑐] if 𝑥 < 𝑐). 𝑛=1 Thus, the limit as 𝑛 → ∞ on the right-hand side exists. Define 𝑓 at the remaining points (where 𝑥 ≠ 𝑐) by this limit: 𝑓 (𝑥) B lim 𝑓𝑛 (𝑐) + lim 𝑛→∞
∫
𝑛→∞
𝑥
𝑐
𝑓𝑛′
= 𝑓 (𝑐) +
𝑥
∫ 𝑐
𝑔.
The function 𝑔 is continuous, being the uniform limit of continuous functions. Hence 𝑓 is differentiable and 𝑓 ′(𝑥) = 𝑔(𝑥) for all 𝑥 ∈ 𝐼 by the second form of the fundamental theorem. It remains to prove uniform convergence. Suppose 𝐼 has a lower bound 𝑎 and upper bound 𝑏. Let 𝜖 > 0 be given. Take 𝑀 such that for all 𝑛 ≥ 𝑀, we have | 𝑓 (𝑐) − 𝑓𝑛 (𝑐)| < 𝜖/2 𝜖 and | 𝑔(𝑥) − 𝑓𝑛′ (𝑥)| < 2(𝑏−𝑎) for all 𝑥 ∈ 𝐼. Then
∫ | 𝑓 (𝑥) − 𝑓𝑛 (𝑥)| = 𝑓 (𝑐) +
𝑥
𝑥
𝑔 − 𝑓𝑛 (𝑐) + 𝑓𝑛′ 𝑐 ∫ 𝑥 ∫ 𝑥𝑐 ≤ | 𝑓 (𝑐) − 𝑓𝑛 (𝑐)| + 𝑔− 𝑓𝑛′ 𝑐 ∫𝑐 𝑥 = | 𝑓 (𝑐) − 𝑓𝑛 (𝑐)| + 𝑔(𝑠) − 𝑓𝑛′ (𝑠) 𝑑𝑠
∫
𝑐
𝜖 𝜖 (𝑏 − 𝑎) = 𝜖. < + 2 2(𝑏 − 𝑎)
□
The proof goes through without boundedness of 𝐼, except for the uniform convergence of 𝑓𝑛 to 𝑓 . As an example suppose 𝐼 = ℝ and let 𝑓𝑛 (𝑥) B 𝑥/𝑛 . Then 𝑓𝑛′ (𝑥) = 1/𝑛 , which converges uniformly to 0. However, { 𝑓𝑛 }∞ converges to 0 only pointwise. 𝑛=1
6.2.4
Convergence of power series
In §2.6 we saw that a power series converges absolutely inside its radius of convergence, so it converges pointwise. Let us show that it (and all its derivatives) also converges uniformly. This fact allows us to swap several types of limits. Not only is the limit continuous, we can integrate and even differentiate convergent power series term by term. 𝑛 Proposition 6.2.11. Let ∞ 𝑛=0 𝑐 𝑛 (𝑥 − 𝑎) be a convergent power series with a radius of convergence 𝜌, where 0 < 𝜌 ≤ ∞. Then the series converges uniformly in [𝑎 − 𝑟, 𝑎 + 𝑟] whenever 0 < 𝑟 < 𝜌. In particular, the series converges (pointwise) to a continuous function on (𝑎 − 𝜌, 𝑎 + 𝜌) if 𝜌 < ∞, or on ℝ if 𝜌 = ∞.
Í
241
6.2. INTERCHANGE OF LIMITS
Proof. Let 𝐼 B (𝑎 − 𝜌, 𝑎 + 𝜌) if 𝜌 < ∞, or let 𝐼 B ℝ if 𝜌 = ∞. Take 0 < 𝑟 < 𝜌. The series Í 𝑛 converges absolutely for every 𝑥 ∈ 𝐼, in particular if 𝑥 = 𝑎 + 𝑟. So ∞ 𝑛=0 |𝑐 𝑛 | 𝑟 converges. Given 𝜖 > 0, find 𝑀 such that for all 𝑘 ≥ 𝑀, ∞ Õ 𝑛=𝑘+1
|𝑐 𝑛 | 𝑟 𝑛 < 𝜖.
For all 𝑥 ∈ [𝑎 − 𝑟, 𝑎 + 𝑟] and all 𝑚 > 𝑘,
𝑚 𝑘 𝑚 Õ Õ Õ 𝑛 𝑛 𝑛 𝑐 𝑛 (𝑥 − 𝑎) − 𝑐 𝑛 (𝑥 − 𝑎) = 𝑐 𝑛 (𝑥 − 𝑎) 𝑛=0
𝑛=0
≤
𝑛=𝑘+1 𝑚 Õ
𝑛=𝑘+1
𝑚 Õ
𝑛
|𝑐 𝑛 | |𝑥 − 𝑎| ≤
𝑛=𝑘+1
𝑛
|𝑐 𝑛 | 𝑟 ≤
∞ Õ 𝑛=𝑘+1
|𝑐 𝑛 | 𝑟 𝑛 < 𝜖.
The partial sums are therefore uniformly Cauchy on [𝑎 − 𝑟, 𝑎 + 𝑟] and hence converge uniformly on that set. Moreover, the partial sums are polynomials, which are continuous, and so their uniform limit on [𝑎 − 𝑟, 𝑎 + 𝑟] is a continuous function. As 𝑟 < 𝜌 was arbitrary, the limit function is continuous on all of 𝐼. □ As we said, we will show that power series can be differentiated and integrated term by term. The differentiated or integrated series is again a power series, and we will show it has the same radius of convergence. Therefore, any power series defines an infinitely differentiable function. We first prove that we can antidifferentiate, as integration only needs uniform limits. 𝑛 Corollary 6.2.12. Let ∞ 𝑛=0 𝑐 𝑛 (𝑥 − 𝑎) be a convergent power series with a radius of convergence 0 < 𝜌 ≤ ∞. Let 𝐼 B (𝑎 − 𝜌, 𝑎 + 𝜌) if 𝜌 < ∞ or 𝐼 B ℝ if 𝜌 = ∞. Let 𝑓 : 𝐼 → ℝ be the limit. Then
Í
∫ 𝑎
𝑥
𝑓 =
∞ Õ 𝑐 𝑛=1
𝑛−1
𝑛
(𝑥 − 𝑎)𝑛 ,
where the radius of convergence of this series is at least 𝜌. 𝑘 Proof. Take 0 < 𝑟 < 𝜌. The partial sums 𝑛=0 𝑐 𝑛 (𝑥 − 𝑎)𝑛 converge uniformly on [𝑎 − 𝑟, 𝑎 + 𝑟]. For every fixed 𝑥 ∈ [𝑎 − 𝑟, 𝑎 + 𝑟], the convergence is also uniform on [𝑎, 𝑥] (or [𝑥, 𝑎] if 𝑥 < 𝑎). Hence,
Í
∫ 𝑎
𝑥
𝑓 =
∫ 𝑎
𝑥
lim
𝑘→∞
𝑘 Õ 𝑛=0
𝑛
𝑐 𝑛 (𝑠 − 𝑎) 𝑑𝑠 = lim
𝑘→∞
∫ 𝑎
𝑥
𝑘 Õ 𝑛=0
𝑛
𝑐 𝑛 (𝑠 − 𝑎) 𝑑𝑠 = lim
𝑘→∞
𝑘+1 Õ 𝑐 𝑛=1
𝑛−1
𝑛
(𝑥 − 𝑎)𝑛 . □
242
CHAPTER 6. SEQUENCES OF FUNCTIONS
𝑛 Corollary 6.2.13. Let ∞ 𝑛=0 𝑐 𝑛 (𝑥 − 𝑎) be a convergent power series with a radius of convergence 0 < 𝜌 ≤ ∞. Let 𝐼 B (𝑎 − 𝜌, 𝑎 + 𝜌) if 𝜌 < ∞ or 𝐼 B ℝ if 𝜌 = ∞. Let 𝑓 : 𝐼 → ℝ be the limit. Then 𝑓 is a differentiable function, and
Í
′
𝑓 (𝑥) =
∞ Õ 𝑛=0
(𝑛 + 1)𝑐 𝑛+1 (𝑥 − 𝑎)𝑛 ,
where the radius of convergence of this series is 𝜌. Proof. Take 0 < 𝑟 < 𝜌. The series converges uniformly on [𝑎 − 𝑟, 𝑎 + 𝑟], but we need uniform convergence of the derivative. Let 𝑅 B lim sup |𝑐 𝑛 | 1/𝑛 . 𝑛→∞
As the series is convergent 𝑅 < ∞, and the radius of convergence is 1/𝑅 (or ∞ if 𝑅 = 0). Let 𝜖 > 0 be given. In Example 2.2.14, we saw lim𝑛→∞ 𝑛 1/𝑛 = 1. Hence there exists an 𝑁 such that for all 𝑛 ≥ 𝑁, we have 𝑛 1/𝑛 < 1 + 𝜖. So 𝑅 = lim sup |𝑐 𝑛 | 1/𝑛 ≤ lim sup |𝑛𝑐 𝑛 | 1/𝑛 ≤ (1 + 𝜖) lim sup |𝑐 𝑛 | 1/𝑛 = (1 + 𝜖)𝑅. 𝑛→∞
𝑛→∞
𝑛→∞
𝑛 As 𝜖 was arbitrary, lim sup𝑛→∞ |𝑛𝑐 𝑛 | 1/𝑛 = 𝑅. Therefore, ∞ 𝑛=1 𝑛𝑐 𝑛 (𝑥 − 𝑎) has radius of Í∞ 𝑛 convergence 𝜌. By dividing by (𝑥 − 𝑎), we find 𝑛=0 (𝑛 + 1)𝑐 𝑛+1 (𝑥 − 𝑎) has radius of convergence 𝜌 as well. Í𝑘 Consequently, the partial sums 𝑛=0 (𝑛 + 1)𝑐 𝑛+1 (𝑥 − 𝑎)𝑛 , which are derivatives of the Í 𝑘+1 partial sums 𝑛=0 𝑐 𝑛 (𝑥 − 𝑎)𝑛 , converge uniformly on [𝑎 − 𝑟, 𝑎 + 𝑟]. Furthermore, the series clearly converges at 𝑥 = 𝑎. We may thus apply Theorem 6.2.10, and we are done as 𝑟 < 𝜌 was arbitrary. □
Í
Example 6.2.14: We could have used this result to define the exponential function. That is, the power series ∞ Õ 𝑥𝑛 𝑓 (𝑥) B 𝑛! 𝑛=0
has radius of convergence 𝜌 = ∞. Furthermore, 𝑓 (0) = 1, and by differentiating term by term, we find that 𝑓 ′(𝑥) = 𝑓 (𝑥). Example 6.2.15: The series
∞ Õ
𝑛𝑥 𝑛
𝑛=1
converges to
𝑥 (1−𝑥)2
on (−1, 1).
∞ 1 𝑛 𝑛−1 then converges Proof: On (−1, 1), ∞ 𝑛=0 𝑥 converges to 1−𝑥 . The derivative 𝑛=1 𝑛𝑥 1 on the same interval to 2 . Multiplying by 𝑥 obtains the result.
Í
(1−𝑥)
Í
243
6.2. INTERCHANGE OF LIMITS
6.2.5
Exercises
Exercise 6.2.1: Find an explicit example of a sequence of differentiable functions on [−1, 1] that converge uniformly to a function 𝑓 such that 𝑓 is not differentiable. Hint: q There are many possibilities, simplest is
1 perhaps to combine |𝑥| and 𝑛2 𝑥 2 + 2𝑛 , another is to consider 𝑥 2 + (1/𝑛 )2 . Show that these functions are differentiable, converge uniformly, and then show that the limit is not differentiable. 𝑛
Exercise 6.2.2: Let 𝑓𝑛 (𝑥) B 𝑥𝑛 . Show that { 𝑓𝑛 }∞ converges uniformly to a differentiable function 𝑓 on 𝑛=1 [0, 1] (find 𝑓 ). However, show that 𝑓 ′(1) ≠ lim 𝑓𝑛′ (1). 𝑛→∞
Exercise 6.2.3: Let 𝑓 : [0, 1] → ℝ be Riemann integrable (hence bounded). Find lim
𝑛→∞
Exercise 6.2.4: Show lim
𝑛→∞
∫ 1
2
∫ 0
1
𝑓 (𝑥) 𝑑𝑥. 𝑛
2
𝑒 −𝑛𝑥 𝑑𝑥 = 0. Feel free to use calculus facts about the exponential.
Exercise 6.2.5: Find an example of a sequence of continuous functions on (0, 1) that converges pointwise to a continuous function on (0, 1), but the convergence is not uniform. Note: In the previous exercise, (0, 1) was picked for simplicity. For a more challenging exercise, replace (0, 1) with [0, 1]. Exercise 6.2.6: True/False; prove or find a counterexample to the following statement: If { 𝑓𝑛 }∞ is a sequence 𝑛=1 of everywhere discontinuous functions on [0, 1] that converge uniformly to a function 𝑓 , then 𝑓 is everywhere discontinuous. Exercise 6.2.7: For a continuously differentiable function 𝑓 : [𝑎, 𝑏] → ℝ, define ∥ 𝑓 ∥ 𝐶 1 B ∥ 𝑓 ∥ [𝑎,𝑏] + ∥ 𝑓 ′ ∥ [𝑎,𝑏] . Suppose { 𝑓𝑛 }∞ is a sequence of continuously differentiable functions such that for every 𝜖 > 0, there exists 𝑛=1 an 𝑀 such that for all 𝑛, 𝑘 ≥ 𝑀, we have ∥ 𝑓𝑛 − 𝑓 𝑘 ∥ 𝐶 1 < 𝜖. Show that { 𝑓𝑛 }∞ converges uniformly to some continuously differentiable function 𝑓 : [𝑎, 𝑏] → ℝ. 𝑛=1 Suppose 𝑓 : [0, 1] → ℝ is Riemann integrable. For the following two exercises define the number ∥ 𝑓 ∥ 𝐿1 B
∫ 0
1
| 𝑓 (𝑥)| 𝑑𝑥.
It is true that | 𝑓 | is integrable whenever 𝑓 is, see Exercise 5.2.15. The number is called the 𝐿1 -norm and defines another very common type of convergence called the 𝐿1 -convergence. It is, however, a bit more subtle. Exercise 6.2.8: Suppose { 𝑓𝑛 }∞ is a sequence of Riemann integrable functions on [0, 1] that converges 𝑛=1 uniformly to 0. Show that lim ∥ 𝑓𝑛 ∥ 𝐿1 = 0. 𝑛→∞
244
CHAPTER 6. SEQUENCES OF FUNCTIONS
Exercise 6.2.9: Find a sequence { 𝑓𝑛 }∞ of Riemann integrable functions on [0, 1] converging pointwise 𝑛=1 to 0, but such that lim ∥ 𝑓𝑛 ∥ 𝐿1 does not exist (is ∞). 𝑛→∞
Exercise 6.2.10 (Hard): Prove Dini’s theorem: Let 𝑓𝑛 : [𝑎, 𝑏] → ℝ be a sequence of continuous functions such that 0 ≤ 𝑓𝑛+1 (𝑥) ≤ 𝑓𝑛 (𝑥) ≤ · · · ≤ 𝑓1 (𝑥) for all 𝑛 ∈ ℕ .
Suppose { 𝑓𝑛 }∞ converges pointwise to 0. Show that { 𝑓𝑛 }∞ converges to zero uniformly. 𝑛=1 𝑛=1
Exercise 6.2.11: Suppose 𝑓𝑛 : [𝑎, 𝑏] → ℝ is a sequence of continuous functions that converges ∞ pointwise to a continuous 𝑓 : [𝑎, 𝑏] → ℝ. Suppose that for every 𝑥 ∈ [𝑎, 𝑏], the sequence | 𝑓𝑛 (𝑥) − 𝑓 (𝑥)| 𝑛=1 is monotone. Show that the sequence { 𝑓𝑛 }∞ converges uniformly. 𝑛=1 Exercise 6.2.12: Find sequences of Riemann integrable functions 𝑓𝑛 : [0, 1] → ℝ such that { 𝑓𝑛 }∞ converges 𝑛=1 to zero pointwise, and such that a) b)
∫ 1
∞ 𝑓 0 𝑛 𝑛=1 ∫ 1 ∞ 𝑓 0 𝑛 𝑛=1
increases without bound, is the sequence −1, 1, −1, 1, −1, 1, . . ..
It is possible to define a joint limit of a double sequence {𝑥 𝑛,𝑚 }∞ of real numbers (that is a 𝑛,𝑚=1 ∞ function from ℕ × ℕ to ℝ). We say 𝐿 is the joint limit of {𝑥 𝑛,𝑚 } 𝑛,𝑚=1 and write lim 𝑥 𝑛,𝑚 = 𝐿,
𝑛→∞ 𝑚→∞
or
lim
(𝑛,𝑚)→∞
𝑥 𝑛,𝑚 = 𝐿,
if for every 𝜖 > 0, there exists an 𝑀 such that if 𝑛 ≥ 𝑀 and 𝑚 ≥ 𝑀, then |𝑥 𝑛,𝑚 − 𝐿| < 𝜖.
Exercise 6.2.13: Suppose the joint limit (see above) of {𝑥 𝑛,𝑚 }∞ is 𝐿, and suppose that for all 𝑛, lim 𝑥 𝑛,𝑚 𝑛,𝑚=1 exists, and for all 𝑚, lim 𝑥 𝑛,𝑚 exists. Then show lim lim 𝑥 𝑛,𝑚 = lim lim 𝑥 𝑛,𝑚 = 𝐿. 𝑛→∞
𝑛→∞ 𝑚→∞
𝑚→∞
𝑚→∞ 𝑛→∞
Exercise 6.2.14: A joint limit (see above) does not mean the iterated limits exist. Consider 𝑥 𝑛,𝑚 B
(−1)𝑛+𝑚 . min{𝑛,𝑚}
a) Show that for no 𝑛 does lim 𝑥 𝑛,𝑚 exist, and for no 𝑚 does lim 𝑥 𝑛,𝑚 exist. So neither lim lim 𝑥 𝑛,𝑚 𝑚→∞
𝑛→∞
nor lim lim 𝑥 𝑛,𝑚 makes any sense at all.
𝑛→∞ 𝑚→∞
𝑚→∞ 𝑛→∞
b) Show that the joint limit of {𝑥 𝑛,𝑚 }∞ exists and equals 0. 𝑛,𝑚=1
Exercise 6.2.15: We say that a sequence of functions 𝑓𝑛 : ℝ → ℝ converges uniformly on compact subsets if for every 𝑘 ∈ ℕ , the sequence { 𝑓𝑛 }∞ converges uniformly on [−𝑘, 𝑘]. 𝑛=1
a) Prove that if 𝑓𝑛 : ℝ → ℝ is a sequence of continuous functions converging uniformly on compact subsets, then the limit is continuous.
b) Prove that if 𝑓𝑛 : ℝ → ℝ is a sequence of functions Riemann integrable on every closed and bounded interval [𝑎, 𝑏], and converging uniformly on compact subsets to an 𝑓 : ℝ → ℝ, then for every interval [𝑎, 𝑏], we have 𝑓 ∈ R [𝑎, 𝑏] , and
∫𝑏 𝑎
𝑓 = lim
∫𝑏
𝑛→∞ 𝑎
𝑓𝑛 .
Exercise 6.2.16 (Challenging): Find a sequence of continuous functions 𝑓𝑛 : [0, 1] → ℝ that converge to the popcorn function 𝑓 : [0, 1] → ℝ, that is the function such that 𝑓 (𝑝/𝑞 ) B 1/𝑞 (if 𝑝/𝑞 is in lowest terms) and 𝑓 (𝑥) B 0 if 𝑥 is not rational (note that 𝑓 (0) = 𝑓 (1) = 1), see Example 3.2.12. So a pointwise limit of continuous functions can have a dense set of discontinuities. See also the next exercise.
245
6.2. INTERCHANGE OF LIMITS
Exercise 6.2.17 (Challenging): The Dirichlet function 𝑓 : [0, 1] → ℝ, that is the function such that 𝑓 (𝑥) B 1 if 𝑥 ∈ ℚ and 𝑓 (𝑥) B 0 if 𝑥 ∉ ℚ, is not the pointwise limit of continuous functions, although this is difficult to show. Prove, however, that 𝑓 is a pointwise limit of functions that are themselves pointwise limits of continuous functions themselves. Exercise 6.2.18: a) Find a sequence of Lipschitz continuous functions on [0, 1] whose uniform limit is non-Lipschitz function.
√
𝑥, which is a
b) On the other hand, show that if 𝑓𝑛 : 𝑆 → ℝ are Lipschitz with a uniform constant 𝐾 (meaning all of them satisfy the definition with the same constant) and { 𝑓𝑛 }∞ converges pointwise to 𝑓 : 𝑆 → ℝ, then the 𝑛=1 limit 𝑓 is a Lipschitz continuous function with Lipschitz constant 𝐾. 𝑛 Exercise 6.2.19 (requires §2.6): If ∞ 𝑛=0 𝑐 𝑛 (𝑥 − 𝑎) has radius of convergence 𝜌, show that the term by term Í∞ 𝑐 𝑛−1 𝑛 integral 𝑛=1 𝑛 (𝑥 − 𝑎) has radius of convergence 𝜌. Note that we only proved above that the radius of convergence was at least 𝜌.
Í
Exercise 6.2.20 (requires §2.6 and §4.3): Suppose 𝑓 (𝑥) B
Í∞
𝑛=0 𝑐 𝑛 (𝑥
− 𝑎)𝑛 converges in (𝑎 − 𝜌, 𝑎 + 𝜌).
a) Suppose that 𝑓 (𝑘) (𝑎) = 0 for all 𝑘 = 0, 1, 2, 3, . . .. Prove that 𝑐 𝑛 = 0 for all 𝑛, or in other words, 𝑓 (𝑥) = 0 for all 𝑥 ∈ (𝑎 − 𝜌, 𝑎 + 𝜌).
b) Using part a) prove a version of the so-called “identity theorem for analytic functions”: If there exists an 𝜖 > 0 such that 𝑓 (𝑥) = 0 for all 𝑥 ∈ (𝑎 − 𝜖, 𝑎 + 𝜖), then 𝑓 (𝑥) = 0 for all 𝑥 ∈ (𝑎 − 𝜌, 𝑎 + 𝜌). Exercise 6.2.21: Let 𝑓𝑛 (𝑥) B
𝑥 . 1+(𝑛𝑥)2
Notice that 𝑓𝑛 are differentiable functions.
a) Show that { 𝑓𝑛 }∞ converges uniformly to 0. 𝑛=1
b) Show that | 𝑓𝑛′ (𝑥)| ≤ 1 for all 𝑥 and all 𝑛.
c) Show that { 𝑓𝑛′ }∞ converges pointwise to a function discontinuous at the origin. 𝑛=1
d) Let {𝑎 𝑛 }∞ be an enumeration of the rational numbers. Define 𝑛=1 𝑔𝑛 (𝑥) B
𝑛 Õ 𝑘=1
2−𝑘 𝑓𝑛 (𝑥 − 𝑎 𝑘 ).
Show that {𝑔𝑛 }∞ converges uniformly to 0. 𝑛=1
converges pointwise to a function 𝜓 that is discontinuous at every rational number e) Show that {𝑔𝑛′ }∞ 𝑛=1 and continuous at every irrational number. In particular, lim 𝑔𝑛′ (𝑥) ≠ 0 for every rational number 𝑥. 𝑛→∞
Exercise 6.2.22 (requires §5.5): Show that uniform convergence is not enough to pass the limit through improper integrals over infinite intervals. That is, find a sequence of functions∫ 𝑓𝑛 : ℝ → ℝ Riemann ∞ integrable on every bounded interval, converging uniformly to zero, and such that −∞ 𝑓𝑛 = 1 for every 𝑛.
246
6.3
CHAPTER 6. SEQUENCES OF FUNCTIONS
Picard’s theorem
Note: 1–2 lectures (can be safely skipped) A first semester course in analysis should have a pièce de résistance caliber theorem. We pick a theorem whose proof combines everything we have learned. It is more sophisticated than the fundamental theorem of calculus, the first highlight theorem of this course. The theorem we are talking about is Picard’s theorem* on existence and uniqueness of a solution to an ordinary differential equation. Both the statement and the proof are beautiful examples of what one can do with the material we mastered so far. It is also a good example of how analysis is applied, as differential equations are indispensable in science of every stripe.
6.3.1
First order ordinary differential equation
Modern science is described in the language of differential equations. That is, equations involving not only the unknown, but also its derivatives. The simplest nontrivial form of a differential equation is the so-called first order ordinary differential equation 𝑦 ′ = 𝐹(𝑥, 𝑦). Generally, we also specify an initial condition 𝑦(𝑥 0 ) = 𝑦0 . The solution of the equation is a ′ function 𝑦(𝑥) such that 𝑦(𝑥 0 ) = 𝑦0 and 𝑦 (𝑥) = 𝐹 𝑥, 𝑦(𝑥) . See Figure 6.8 for a graphical representation as a so-called slope field.
(𝑥0 , 𝑦0 )
Figure 6.8: A slope field giving the slope 𝐹(𝑥, 𝑦) at each point, in this case 𝐹(𝑥, 𝑦) = 𝑥(1 − 𝑦). A solution is drawn going through the point (𝑥 0 , 𝑦0 ) = (1, 0.3), notice how it follows the slopes.
When 𝐹 involves only the 𝑥 variable, the solution is given by the fundamental theorem of calculus. On the other hand, when 𝐹 depends on both 𝑥 and 𝑦, we need far more firepower. It is not always true that a solution exists, and if it does, that it is the unique solution. Picard’s theorem gives us certain sufficient conditions for existence and uniqueness. * Named
for the French mathematician Charles Émile Picard (1856–1941).
247
6.3. PICARD’S THEOREM
6.3.2
The theorem
We need a definition of continuity in two variables. A point in the plane ℝ2 = ℝ × ℝ is denoted by an ordered pair (𝑥, 𝑦). For simplicity, we give the following sequential definition of continuity. Definition 6.3.1. Let 𝑈 ⊂ ℝ2 be a set, 𝐹 : 𝑈 → ℝ a function, (𝑥, 𝑦) ∈ 𝑈 a point. The and ∞ function 𝐹 is continuous at (𝑥, 𝑦) if for every sequence (𝑥 𝑛 , 𝑦𝑛 ) 𝑛=1 of points in 𝑈 such that lim𝑛→∞ 𝑥 𝑛 = 𝑥 and lim𝑛→∞ 𝑦𝑛 = 𝑦, we have lim 𝐹(𝑥 𝑛 , 𝑦𝑛 ) = 𝐹(𝑥, 𝑦).
𝑛→∞
We say 𝐹 is continuous if it is continuous at all points in 𝑈. Theorem 6.3.2 (Picard’s theorem on existence and uniqueness). Let 𝐼, 𝐽 ⊂ ℝ be closed bounded intervals, let 𝐼 ◦ and 𝐽 ◦ be their interiors* , and let (𝑥 0 , 𝑦0 ) ∈ 𝐼 ◦ × 𝐽 ◦ . Suppose 𝐹 : 𝐼 × 𝐽 → ℝ is continuous and Lipschitz in the second variable, that is, there exists an 𝐿 ∈ ℝ such that |𝐹(𝑥, 𝑦) − 𝐹(𝑥, 𝑧)| ≤ 𝐿 |𝑦 − 𝑧|
for all 𝑦, 𝑧 ∈ 𝐽, 𝑥 ∈ 𝐼.
Then there exists an ℎ > 0 such that [𝑥 0 − ℎ, 𝑥0 + ℎ] ⊂ 𝐼 and a unique differentiable function 𝑓 : [𝑥0 − ℎ, 𝑥0 + ℎ] → 𝐽 ⊂ ℝ such that 𝑓 ′(𝑥) = 𝐹 𝑥, 𝑓 (𝑥)
and
𝑓 (𝑥0 ) = 𝑦0 .
(6.1)
Proof. Suppose we could find a solution 𝑓 . Using the fundamental theorem of calculus we integrate the equation 𝑓 ′(𝑥) = 𝐹 𝑥, 𝑓 (𝑥) , 𝑓 (𝑥0 ) = 𝑦0 , and write (6.1) as the integral equation ∫ 𝑥
𝑓 (𝑥) = 𝑦0 +
𝐹 𝑡, 𝑓 (𝑡) 𝑑𝑡.
𝑥0
(6.2)
The idea of our proof is that we try to plug in approximations to a solution to the right-hand side of (6.2) to get better approximations on the left-hand side of (6.2). We hope that in the end the sequence converges and solves (6.2) and hence (6.1). The technique below is called Picard iteration, and the individual functions 𝑓 𝑘 are called the Picard iterates. Without loss of generality, suppose 𝑥0 = 0 (exercise below). Another exercise tells us that 𝐹 is bounded as it is continuous. Therefore pick some 𝑀 > 0 so that |𝐹(𝑥, 𝑦)| ≤ 𝑀 for all (𝑥, 𝑦) ∈ 𝐼 × 𝐽. Pick 𝛼 > 0 such that [−𝛼, 𝛼] ⊂ 𝐼 and [𝑦0 − 𝛼, 𝑦0 + 𝛼] ⊂ 𝐽. Define
n
ℎ B min 𝛼,
o 𝛼 . 𝑀 + 𝐿𝛼
Observe [−ℎ, ℎ] ⊂ 𝐼. Set 𝑓0 (𝑥) B 𝑦0 . We define 𝑓 𝑘 inductively. Assuming 𝑓 𝑘−1 ([−ℎ, ℎ]) ⊂ [𝑦0 − 𝛼, 𝑦0 + 𝛼], we see 𝐹 𝑡, 𝑓 𝑘−1 (𝑡) is a well-defined function of 𝑡 for 𝑡 ∈ [−ℎ, ℎ]. Further if 𝑓 𝑘−1 is continuous * By
interior of [𝑎, 𝑏], we mean (𝑎, 𝑏).
248
CHAPTER 6. SEQUENCES OF FUNCTIONS
on [−ℎ, ℎ], then 𝐹 𝑡, 𝑓 𝑘−1 (𝑡) is continuous as a function of 𝑡 on [−ℎ, ℎ] (left as an exercise). Define ∫ 𝑥
𝑓 𝑘 (𝑥) B 𝑦0 +
𝐹 𝑡, 𝑓 𝑘−1 (𝑡) 𝑑𝑡,
0
and 𝑓 𝑘 is continuous on [−ℎ, ℎ] by the fundamental theorem of calculus. To see that 𝑓 𝑘 maps [−ℎ, ℎ] to [𝑦0 − 𝛼, 𝑦0 + 𝛼], we compute for 𝑥 ∈ [−ℎ, ℎ]
∫ | 𝑓 𝑘 (𝑥) − 𝑦0 | =
0
𝑥
𝛼 ≤ 𝛼. 𝐹 𝑡, 𝑓 𝑘−1 (𝑡) 𝑑𝑡 ≤ 𝑀 |𝑥| ≤ 𝑀 ℎ ≤ 𝑀 𝑀 + 𝐿𝛼
We next define 𝑓 𝑘+1 using 𝑓 𝑘 and so on. Thus we have inductively defined a sequence { 𝑓 𝑘 }∞ of functions. We need to show that it converges to a function 𝑓 that solves the 𝑘=1 equation (6.2) and therefore (6.1). We wish to show that the sequence { 𝑓 𝑘 }∞ converges uniformly to some function on 𝑘=1 [−ℎ, ℎ]. First, for 𝑡 ∈ [−ℎ, ℎ], we have the following useful bound
𝐹 𝑡, 𝑓𝑛 (𝑡) − 𝐹 𝑡, 𝑓 𝑘 (𝑡) ≤ 𝐿 | 𝑓𝑛 (𝑡) − 𝑓 𝑘 (𝑡)| ≤ 𝐿 ∥ 𝑓𝑛 − 𝑓 𝑘 ∥ [−ℎ,ℎ] , where ∥ 𝑓𝑛 − 𝑓 𝑘 ∥ [−ℎ,ℎ] is the uniform norm, that is the supremum of | 𝑓𝑛 (𝑡) − 𝑓 𝑘 (𝑡)| for 𝛼 𝑡 ∈ [−ℎ, ℎ]. Now note that |𝑥| ≤ ℎ ≤ 𝑀+𝐿𝛼 . Therefore
∫ 𝑥 ∫ 𝑥 𝐹 𝑡, 𝑓𝑛−1 (𝑡) 𝑑𝑡 − 𝐹 𝑡, 𝑓 𝑘−1 (𝑡) 𝑑𝑡 | 𝑓𝑛 (𝑥) − 𝑓 𝑘 (𝑥)| = 0 0 ∫ 𝑥 = 𝐹 𝑡, 𝑓𝑛−1 (𝑡) − 𝐹 𝑡, 𝑓 𝑘−1 (𝑡) 𝑑𝑡 0
≤ 𝐿 ∥ 𝑓𝑛−1 − 𝑓 𝑘−1 ∥ [−ℎ,ℎ] |𝑥| ≤ Let 𝐶 B
𝐿𝛼 𝑀+𝐿𝛼
𝐿𝛼 ∥ 𝑓𝑛−1 − 𝑓 𝑘−1 ∥ [−ℎ,ℎ] . 𝑀 + 𝐿𝛼
and note that 𝐶 < 1. Taking supremum on the left-hand side we get ∥ 𝑓𝑛 − 𝑓 𝑘 ∥ [−ℎ,ℎ] ≤ 𝐶 ∥ 𝑓𝑛−1 − 𝑓 𝑘−1 ∥ [−ℎ,ℎ] .
Without loss of generality, suppose 𝑛 ≥ 𝑘. Then by induction we can show ∥ 𝑓𝑛 − 𝑓 𝑘 ∥ [−ℎ,ℎ] ≤ 𝐶 𝑘 ∥ 𝑓𝑛−𝑘 − 𝑓0 ∥ [−ℎ,ℎ] . For 𝑥 ∈ [−ℎ, ℎ], we have | 𝑓𝑛−𝑘 (𝑥) − 𝑓0 (𝑥)| = | 𝑓𝑛−𝑘 (𝑥) − 𝑦0 | ≤ 𝛼. Therefore,
∥ 𝑓𝑛 − 𝑓 𝑘 ∥ [−ℎ,ℎ] ≤ 𝐶 𝑘 ∥ 𝑓𝑛−𝑘 − 𝑓0 ∥ [−ℎ,ℎ] ≤ 𝐶 𝑘 𝛼.
As 𝐶 < 1, { 𝑓𝑛 }∞ is uniformly Cauchy and by Proposition 6.1.13 we obtain that { 𝑓𝑛 }∞ 𝑛=1 𝑛=1 converges uniformly on [−ℎ, ℎ] to some function 𝑓 : [−ℎ, ℎ] → ℝ. The function 𝑓 is the
249
6.3. PICARD’S THEOREM
uniform limit of continuous functions and therefore continuous. Furthermore, since 𝑓𝑛 [−ℎ, ℎ] ⊂ [𝑦0 − 𝛼, 𝑦0 + 𝛼] for all 𝑛, then 𝑓 [−ℎ, ℎ] ⊂ [𝑦0 − 𝛼, 𝑦0 + 𝛼] (why?). We now need to show that 𝑓 solves (6.2). First, as before we notice
𝐹 𝑡, 𝑓𝑛 (𝑡) − 𝐹 𝑡, 𝑓 (𝑡) ≤ 𝐿 | 𝑓𝑛 (𝑡) − 𝑓 (𝑡)| ≤ 𝐿 ∥ 𝑓𝑛 − 𝑓 ∥ [−ℎ,ℎ] . As ∥ 𝑓𝑛 − 𝑓 ∥ [−ℎ,ℎ] converges to 0, then 𝐹 𝑡, 𝑓𝑛 (𝑡) converges uniformly to 𝐹 𝑡, 𝑓 (𝑡) for 𝑡 ∈ [−ℎ, ℎ]. Hence, for 𝑥 ∈ [−ℎ, ℎ] the convergence is uniform for 𝑡 ∈ [0, 𝑥] (or [𝑥, 0] if 𝑥 < 0). Therefore,
𝑦0 +
∫ 0
𝑥
𝐹(𝑡, 𝑓 (𝑡) 𝑑𝑡 = 𝑦0 +
= 𝑦0 +
∫
𝑥
∫0 𝑥 0
𝐹 𝑡, lim 𝑓𝑛 (𝑡) 𝑑𝑡
𝑛→∞
lim 𝐹 𝑡, 𝑓𝑛 (𝑡) 𝑑𝑡
(by continuity of 𝐹)
𝑛→∞
= lim 𝑦0 + 𝑛→∞
∫ 0
𝑥
𝐹 𝑡, 𝑓𝑛 (𝑡) 𝑑𝑡
(by uniform convergence)
= lim 𝑓𝑛+1 (𝑥) = 𝑓 (𝑥). 𝑛→∞
We apply the fundamental theorem of calculus (Theorem 5.3.3) to show that 𝑓 is differentiable and its derivative is 𝐹 𝑥, 𝑓 (𝑥) . It is obvious that 𝑓 (0) = 𝑦0 . Finally, what is left to do is to show uniqueness. Suppose 𝑔 : [−ℎ, ℎ] → 𝐽 ⊂ ℝ is another solution. As before we use the fact that 𝐹 𝑡, 𝑓 (𝑡) − 𝐹 𝑡, 𝑔(𝑡) ≤ 𝐿 ∥ 𝑓 − 𝑔∥ [−ℎ,ℎ] . Then
∫ 𝑥 ∫ 𝑥 𝐹 𝑡, 𝑓 (𝑡) 𝑑𝑡 − 𝑦0 + 𝐹 𝑡, 𝑔(𝑡) 𝑑𝑡 | 𝑓 (𝑥) − 𝑔(𝑥)| = 𝑦0 + ∫ 𝑥 0 0 = 𝐹 𝑡, 𝑓 (𝑡) − 𝐹 𝑡, 𝑔(𝑡) 𝑑𝑡 0
≤ 𝐿 ∥ 𝑓 − 𝑔∥ [−ℎ,ℎ] |𝑥| ≤ 𝐿ℎ ∥ 𝑓 − 𝑔∥ [−ℎ,ℎ] ≤ As before, 𝐶 = obtain
𝐿𝛼 𝑀+𝐿𝛼
𝐿𝛼 ∥ 𝑓 − 𝑔∥ [−ℎ,ℎ] . 𝑀 + 𝐿𝛼
< 1. By taking supremum over 𝑥 ∈ [−ℎ, ℎ] on the left-hand side we ∥ 𝑓 − 𝑔∥ [−ℎ,ℎ] ≤ 𝐶 ∥ 𝑓 − 𝑔 ∥ [−ℎ,ℎ] .
This is only possible if ∥ 𝑓 − 𝑔∥ [−ℎ,ℎ] = 0. Therefore, 𝑓 = 𝑔, and the solution is unique.
6.3.3
□
Examples
Let us look at some examples. The proof of the theorem gives us an explicit way to find an ℎ that works. It does not, however, give us the best ℎ. It is often possible to find a much larger ℎ for which the conclusion of the theorem holds. The proof also gives us the Picard iterates as approximations to the solution. So the proof actually tells us how to obtain the solution, not just that the solution exists.
250
CHAPTER 6. SEQUENCES OF FUNCTIONS
Example 6.3.3: Consider
𝑓 (0) = 1.
𝑓 ′(𝑥) = 𝑓 (𝑥),
That is, we suppose 𝐹(𝑥, 𝑦) = 𝑦, and we are looking for a function 𝑓 such that 𝑓 ′(𝑥) = 𝑓 (𝑥). Let us forget for the moment that we solved this equation in §5.4. See also Figure 5.7 for a plot of both the equation, showing the slope 𝐹(𝑥, 𝑦) = 𝑦 at each point, and the solution, the exponential, that satisfies 𝑓 (0) = 1. We pick any 𝐼 that contains 0 in the interior. We pick an arbitrary 𝐽 that contains 1 in its interior. We can use 𝐿 = 1. The theorem guarantees an ℎ > 0 such that there exists a unique solution 𝑓 : [−ℎ, ℎ] → ℝ. This solution is usually denoted by 𝑒 𝑥 B 𝑓 (𝑥). We leave it to the reader to verify that by picking 𝐼 and 𝐽 large enough the proof of the theorem guarantees that we are able to pick 𝛼 such that we get any ℎ we want as long as ℎ < 1/2. We omit the calculation. Of course, we know this function exists as a function for all 𝑥, so an arbitrary ℎ ought to work, but the theorem only provides ℎ < 1/2. By same reasoning as above, no matter what 𝑥0 and 𝑦0 are, the proof guarantees an arbitrary ℎ as long as ℎ < 1/2. Fix such an ℎ. We get a unique function defined on [𝑥0 − ℎ, 𝑥0 + ℎ]. After defining the function on [−ℎ, ℎ] we find a solution on the interval [0, 2ℎ] and notice that the two functions must coincide on [0, ℎ] by uniqueness. We thus iteratively construct the exponential for all 𝑥 ∈ ℝ. Therefore, Picard’s theorem could be used to prove the existence and uniqueness of the exponential. Let us compute the Picard iterates. We start with the constant function 𝑓0 (𝑥) B 1. Then 𝑓1 (𝑥) = 1 +
∫
𝑓2 (𝑥) = 1 +
∫
𝑓3 (𝑥) = 1 +
0
𝑥
𝑥
∫0 𝑥 0
𝑓0 (𝑠) 𝑑𝑠 = 1 + 𝑥, 𝑓1 (𝑠) 𝑑𝑠 = 1 + 𝑓2 (𝑠) 𝑑𝑠 = 1 +
∫
𝑥
(1 + 𝑠) 𝑑𝑠 = 1 + 𝑥 +
∫0 𝑥 0
𝑥2 , 2
𝑠2 𝑥2 𝑥3 1+𝑠+ 𝑑𝑠 = 1 + 𝑥 + + . 2 2 6
We recognize the beginning of the Taylor series for the exponential. See Figure 6.9. Example 6.3.4: Consider the equation 𝑓 ′(𝑥) = 𝑓 (𝑥)
2
and
𝑓 (0) = 1.
From elementary differential equations we know 𝑓 (𝑥) =
1 1−𝑥
is the solution. The solution is only defined on (−∞, 1). That is, we are able to use ℎ < 1, but never a larger ℎ. The function that takes 𝑦 to 𝑦 2 is not Lipschitz as a function on all
251
6.3. PICARD’S THEOREM
4 3 2 1
0
−1
1
Figure 6.9: The exponential (solid line) together with 𝑓0 , 𝑓1 , 𝑓2 , 𝑓3 (dashed).
of ℝ. As we approach 𝑥 = 1 from the left, the solution becomes larger and larger. The derivative of the solution grows as 𝑦 2 , and so the 𝐿 required has to be larger and larger as 1 𝑦0 grows. If we apply the theorem with 𝑥0 close to 1 and 𝑦0 = 1−𝑥 we find that the ℎ that 0 the proof guarantees is smaller and smaller as 𝑥0 approaches 1. The ℎ from the √ proof is not the best ℎ. By picking 𝛼 correctly, the proof of the theorem guarantees ℎ = 1 − 3/2 ≈ 0.134 (we omit the calculation) for 𝑥0 = 0 and 𝑦0 = 1, even though we saw above that any ℎ < 1 should work. Example 6.3.5: Consider the equation
q
𝑓 (𝑥) = 2 | 𝑓 (𝑥)|, ′
𝑓 (0) = 0.
The function 𝐹(𝑥, 𝑦) = 2 |𝑦| is continuous, but not Lipschitz in 𝑦 (why?). The equation does not satisfy the hypotheses of the theorem. The function
p
( 𝑓 (𝑥) =
𝑥2 −𝑥 2
if 𝑥 ≥ 0, if 𝑥 < 0,
is a solution, but 𝑓 (𝑥) = 0 is also a solution. A solution exists, but is not unique.
Example 6.3.6: Consider 𝑦 ′ = 𝜑(𝑥) where 𝜑(𝑥) B 0 if 𝑥 ∈ ℚ and 𝜑(𝑥) B 1 if 𝑥 ∉ ℚ. In other words, the 𝐹(𝑥, 𝑦) = 𝜑(𝑥) is discontinuous. The equation has no solution regardless of the initial conditions. A solution would have derivative 𝜑, but 𝜑 does not have the intermediate value property at any point (why?). No solution exists by Darboux’s theorem. The examples show that without the Lipschitz condition, a solution might exist but not be unique, and without continuity of 𝐹, we may not have a solution at all. It is in fact a theorem, the Peano existence theorem, that if 𝐹 is continuous a solution exists (but may not be unique).
252
CHAPTER 6. SEQUENCES OF FUNCTIONS
′ Remark 6.3.7. It is possible to weaken what we ∫ 𝑥 mean by “solution to 𝑦 = 𝐹(𝑥, 𝑦)” by focusing on the integral equation 𝑓 (𝑥) = 𝑦0 + 𝑥 𝐹 𝑡, 𝑓 (𝑡) 𝑑𝑡. For example, let 𝐻 be the 0 Heaviside function* , that is 𝐻(𝑥) B 0 for 𝑥 < 0 and 𝐻(𝑥) B 1 for 𝑥 ≥ 0. Then 𝑦 ′ = 𝐻(𝑥), 𝑦(0) = 0, is a common equation. The “solution” is the∫ ramp function 𝑓 (𝑥) B 0 if 𝑥 < 0 and 𝑥 𝑓 (𝑥) B 𝑥 if 𝑥 ≥ 0, since this function satisfies 𝑓 (𝑥) = 0 𝐻(𝑡) 𝑑𝑡. Notice, however, that 𝑓 ′(0) does not exist, so 𝑓 is only a so-called weak solution to the differential equation.
6.3.4
Exercises
Exercise 6.3.1: Let 𝐼, 𝐽 ⊂ ℝ be intervals. Let 𝐹 : 𝐼 × 𝐽 → ℝ be a continuous function of two variables and suppose 𝑓 : 𝐼 → 𝐽 be a continuous function. Show that 𝐹 𝑥, 𝑓 (𝑥) is a continuous function on 𝐼. Exercise 6.3.2: Let 𝐼, 𝐽 ⊂ ℝ be closed bounded intervals. Show that if 𝐹 : 𝐼 × 𝐽 → ℝ is continuous, then 𝐹 is bounded. Exercise 6.3.3: We proved Picard’s theorem under the assumption that 𝑥 0 = 0. Prove the full statement of Picard’s theorem for an arbitrary 𝑥 0 . Exercise 6.3.4: Let 𝑓 ′(𝑥) = 𝑥 𝑓 (𝑥) be our equation. Start with the initial condition 𝑓 (0) = 2 and find the Picard iterates 𝑓0 , 𝑓1 , 𝑓2 , 𝑓3 , 𝑓4 . Exercise 6.3.5: Suppose 𝐹 : 𝐼 × 𝐽 → ℝ is a function that is continuous in the first variable, that is, for every fixed 𝑦 the function that takes 𝑥 to 𝐹(𝑥, 𝑦) is continuous. Further, suppose 𝐹 is Lipschitz in the second variable, that is, there exists a number 𝐿 such that |𝐹(𝑥, 𝑦) − 𝐹(𝑥, 𝑧)| ≤ 𝐿 |𝑦 − 𝑧|
for all 𝑦, 𝑧 ∈ 𝐽, 𝑥 ∈ 𝐼.
Show that 𝐹 is continuous as a function of two variables. Therefore, the hypotheses in the theorem could be made even weaker. Exercise 6.3.6: A common type of equation one encounters are linear first order differential equations, that is equations of the form 𝑦 ′ + 𝑝(𝑥)𝑦 = 𝑞(𝑥), 𝑦(𝑥 0 ) = 𝑦0 . Prove Picard’s theorem for linear equations. Suppose 𝐼 is an interval, 𝑥 0 ∈ 𝐼, and 𝑝 : 𝐼 → ℝ and 𝑞 : 𝐼 → ℝ are continuous. Show that there exists a unique differentiable 𝑓 : 𝐼 → ℝ, such that 𝑦 = 𝑓 (𝑥) satisfies the equation and the initial condition. Hint: Assume existence of the exponential function and use the integrating factor formula for existence of 𝑓 (prove that it works and then that it is unique): 𝑓 (𝑥) B 𝑒
−
𝑥 𝑥0
∫
𝑝(𝑠) 𝑑𝑠
∫
𝑥
𝑥0
𝑒
∫𝑡 𝑥0
𝑝(𝑠) 𝑑𝑠
𝑞(𝑡) 𝑑𝑡 + 𝑦0 .
Exercise 6.3.7: Consider the equation 𝑓 ′(𝑥) = 𝑓 (𝑥), from Example 6.3.3. Show that given any 𝑥 0 , any 𝑦0 , and any positive ℎ < 1/2, we can pick 𝛼 > 0 large enough that the proof of Picard’s theorem guarantees a solution for the initial condition 𝑓 (𝑥 0 ) = 𝑦0 in the interval [𝑥0 − ℎ, 𝑥0 + ℎ]. * Named
for the English engineer, mathematician, and physicist Oliver Heaviside (1850–1825).
6.3. PICARD’S THEOREM
253
Exercise 6.3.8: Consider the equation 𝑦 ′ = 𝑦 1/3 𝑥. a) Show that for the initial condition 𝑦(1) = 1, Picard’s theorem applies. Find an 𝛼 > 0, 𝑀, 𝐿, and ℎ that would work in the proof. b) Show that for the initial condition 𝑦(1) = 0, Picard’s theorem does not apply. c) Find a solution for 𝑦(1) = 0 anyway. Exercise 6.3.9: Consider the equation 𝑥 𝑦 ′ = 2𝑦. a) Show that 𝑦 = 𝐶𝑥 2 is a solution for every constant 𝐶. b) Show that for every 𝑥 0 ≠ 0 and every 𝑦0 , Picard’s theorem applies for the initial condition 𝑦(𝑥 0 ) = 𝑦0 . c) Show that 𝑦(0) = 𝑦0 is solvable if and only if 𝑦0 = 0.
254
CHAPTER 6. SEQUENCES OF FUNCTIONS
Chapter 7 Metric Spaces 7.1
Metric spaces
Note: 1.5 lectures As mentioned in the introduction, the main idea in analysis is to take limits. In chapter 2 we learned to take limits of sequences of real numbers. And in chapter 3 we learned to take limits of functions as a real number approached some other real number. We want to take limits in more complicated contexts. For example, we want to have sequences of points in 3-dimensional space. We wish to define continuous functions of several variables. We even want to define functions on spaces that are a little harder to describe, such as the surface of the earth. We still want to talk about limits there. Finally, we have seen the limit of a sequence of functions in chapter 6. We wish to unify all these notions so that we do not have to reprove theorems over and over again in each context. The concept of a metric space is an elementary yet powerful tool in analysis. And while it is not sufficient to describe every type of limit we find in modern analysis, it gets us very far indeed. Definition 7.1.1. Let 𝑋 be a set, and let 𝑑 : 𝑋 × 𝑋 → ℝ be a function such that for all 𝑥, 𝑦, 𝑧 ∈ 𝑋 (i) 𝑑(𝑥, 𝑦) ≥ 0.
(nonnegativity)
(ii) 𝑑(𝑥, 𝑦) = 0 if and only if 𝑥 = 𝑦.
(identity of indiscernibles)
(iii) 𝑑(𝑥, 𝑦) = 𝑑(𝑦, 𝑥).
(symmetry)
(iv) 𝑑(𝑥, 𝑧) ≤ 𝑑(𝑥, 𝑦) + 𝑑(𝑦, 𝑧).
(triangle inequality)
The pair (𝑋 , 𝑑) is called a metric space. The function 𝑑 is called the metric or the distance function. Sometimes we write just 𝑋 as the metric space instead of (𝑋 , 𝑑) if the metric is clear from context. The geometric idea is that 𝑑 is the distance between two points. Items (i)–(iii) have obvious geometric interpretation: Distance is always nonnegative, the only point that is
256
CHAPTER 7. METRIC SPACES
distance 0 away from 𝑥 is 𝑥 itself, and finally that the distance from 𝑥 to 𝑦 is the same as the distance from 𝑦 to 𝑥. The triangle inequality (iv) has the interpretation given in Figure 7.1. 𝑧
𝑑(𝑥, 𝑧)
shorter
𝑑(𝑦, 𝑧)
longer 𝑥
𝑑(𝑥, 𝑦)
𝑦
Figure 7.1: Diagram of the triangle inequality in metric spaces.
For the purposes of drawing, it is convenient to draw figures and diagrams in the plane with the metric being the euclidean distance. However, that is only one particular metric space. Just because a certain fact seems to be clear from drawing a picture does not mean it is true in every metric space. You might be getting sidetracked by intuition from euclidean geometry, whereas the concept of a metric space is a lot more general. Let us give some examples of metric spaces. Example 7.1.2: The set of real numbers ℝ is a metric space with the metric 𝑑(𝑥, 𝑦) B |𝑥 − 𝑦| . Items (i)–(iii) of the definition are easy to verify. The triangle inequality (iv) follows immediately from the standard triangle inequality for real numbers: 𝑑(𝑥, 𝑧) = |𝑥 − 𝑧| = |𝑥 − 𝑦 + 𝑦 − 𝑧| ≤ |𝑥 − 𝑦| + |𝑦 − 𝑧| = 𝑑(𝑥, 𝑦) + 𝑑(𝑦, 𝑧). This metric is the standard metric on ℝ. If we talk about ℝ as a metric space without mentioning a specific metric, we mean this particular metric. Example 7.1.3: We can also put a different metric on the set of real numbers. For example, take the set of real numbers ℝ together with the metric 𝑑(𝑥, 𝑦) B
|𝑥 − 𝑦| . |𝑥 − 𝑦| + 1
Items (i)–(iii) are again easy to verify. The triangle inequality (iv) is a little bit more difficult. 𝑡 Note that 𝑑(𝑥, 𝑦) = 𝜑(|𝑥 − 𝑦|) where 𝜑(𝑡) = 𝑡+1 and 𝜑 is an increasing function (positive
257
7.1. METRIC SPACES derivative, see Figure 7.2). Hence 𝑑(𝑥, 𝑧) = 𝜑(|𝑥 − 𝑧|) = 𝜑(|𝑥 − 𝑦 + 𝑦 − 𝑧|)
≤ 𝜑(|𝑥 − 𝑦| + |𝑦 − 𝑧|) |𝑥 − 𝑦| + |𝑦 − 𝑧| = |𝑥 − 𝑦| + |𝑦 − 𝑧| + 1 |𝑦 − 𝑧| |𝑥 − 𝑦| + = |𝑥 − 𝑦| + |𝑦 − 𝑧| + 1 |𝑥 − 𝑦| + |𝑦 − 𝑧| + 1 |𝑥 − 𝑦| |𝑦 − 𝑧| ≤ + = 𝑑(𝑥, 𝑦) + 𝑑(𝑦, 𝑧). |𝑥 − 𝑦| + 1 |𝑦 − 𝑧| + 1
The function 𝑑 is thus a metric, and gives an example of a nonstandard metric on ℝ. With this metric, 𝑑(𝑥, 𝑦) < 1 for all 𝑥, 𝑦 ∈ ℝ. That is, every two points are less than 1 unit apart.
Figure 7.2: Graph of
𝑡 𝑡+1
for positive 𝑡 with an asymptote at 1.
An important metric space is the 𝑛-dimensional euclidean space ℝ𝑛 = ℝ × ℝ × · · · × ℝ. We use the following notation for points: 𝑥 = (𝑥1 , 𝑥2 , . . . , 𝑥 𝑛 ) ∈ ℝ𝑛 . We will not write 𝑥® nor x for a point in ℝ𝑛 as is common in multivariable calculus, we simply give it a name such as 𝑥 and we will remember that 𝑥 is an element of ℝ𝑛 . We also write simply 0 ∈ ℝ𝑛 to mean the point (0, 0, . . . , 0). Before making ℝ𝑛 a metric space, we prove an important inequality, the so-called Cauchy–Schwarz inequality. Lemma 7.1.4 (Cauchy–Schwarz inequality* ). Suppose 𝑥 = (𝑥1 , 𝑥2 , . . . , 𝑥 𝑛 ) ∈ ℝ𝑛 , 𝑦 = (𝑦1 , 𝑦2 , . . . , 𝑦𝑛 ) ∈ ℝ𝑛 . Then
Õ 𝑛 𝑘=1 * Sometimes
𝑥 𝑘 𝑦𝑘
2
≤
Õ 𝑛 𝑘=1
𝑥 2𝑘
Õ 𝑛
𝑦 𝑘2
.
𝑘=1
it is called the Cauchy–Bunyakovsky–Schwarz inequality. Karl Hermann Amandus Schwarz (1843–1921) was a German mathematician and Viktor Yakovlevich Bunyakovsky (1804–1889) was a Ukrainian mathematician. What we stated should really be called the Cauchy inequality, as Bunyakovsky and Schwarz provided proofs for infinite-dimensional versions.
258
CHAPTER 7. METRIC SPACES
Proof. A square of a real number is nonnegative. Hence a sum of squares is nonnegative: 0≤ =
𝑛 Õ 𝑛 Õ 𝑘=1 ℓ =1
(𝑥 𝑘 𝑦ℓ − 𝑥ℓ 𝑦 𝑘 )2
𝑛 Õ 𝑛 Õ
𝑥 2𝑘 𝑦ℓ2 + 𝑥ℓ2 𝑦 𝑘2 − 2𝑥 𝑘 𝑥ℓ 𝑦 𝑘 𝑦ℓ
𝑘=1 ℓ =1
=
Õ 𝑛
𝑥 2𝑘
Õ 𝑛
𝑦ℓ2
+
ℓ =1
𝑘=1
Õ 𝑛
𝑦 𝑘2
Õ 𝑛
𝑥ℓ2
ℓ =1
𝑘=1
−2
Õ 𝑛
𝑥 𝑘 𝑦𝑘
Õ 𝑛
𝑥ℓ 𝑦ℓ .
ℓ =1
𝑘=1
We relabel and divide by 2 to obtain precisely what we wanted, 0≤
Õ 𝑛
𝑥 2𝑘
Õ 𝑛
𝑘=1
𝑘=1
𝑦 𝑘2 −
Õ 𝑛
𝑥 𝑘 𝑦𝑘
2
.
□
𝑘=1
Example 7.1.5: Let us construct the standard metric for ℝ𝑛 . Define 𝑑(𝑥, 𝑦) B
q
2
2
2
(𝑥1 − 𝑦1 ) + (𝑥 2 − 𝑦2 ) + · · · + (𝑥 𝑛 − 𝑦𝑛 ) =
v t
𝑛 Õ 𝑘=1
(𝑥 𝑘 − 𝑦 𝑘 )2 .
For 𝑛 = 1, the real line, this metric agrees with what we defined above. For 𝑛 > 1, the only tricky part of the definition to check, as before, is the triangle inequality. It is less messy to work with the square of the metric. In the following estimate, note the use of the Cauchy–Schwarz inequality. 𝑑(𝑥, 𝑧)
2
=
𝑛 Õ 𝑘=1
= = =
𝑛 Õ
(𝑥 𝑘 − 𝑦 𝑘 + 𝑦 𝑘 − 𝑧 𝑘 )2
𝑘=1 𝑛 Õ 𝑘=1 𝑛 Õ 𝑘=1
≤
(𝑥 𝑘 − 𝑧 𝑘 )2
𝑛 Õ 𝑘=1
v t
2
2
(𝑥 𝑘 − 𝑦 𝑘 ) + (𝑦 𝑘 − 𝑧 𝑘 ) + 2(𝑥 𝑘 − 𝑦 𝑘 )(𝑦 𝑘 − 𝑧 𝑘 ) 2
(𝑥 𝑘 − 𝑦 𝑘 ) + 2
(𝑥 𝑘 − 𝑦 𝑘 ) +
𝑛 Õ 𝑘=1 𝑛 Õ 𝑘=1
𝑛 © Õ = (𝑥 𝑘 − 𝑦 𝑘 )2 + « 𝑘=1
2
(𝑦 𝑘 − 𝑧 𝑘 ) + 2 2
(𝑦 𝑘 − 𝑧 𝑘 ) + 2
v t
𝑛 Õ 𝑘=1
𝑛 Õ 𝑘=1
v t
(𝑥 𝑘 − 𝑦 𝑘 )(𝑦 𝑘 − 𝑧 𝑘 )
𝑛 Õ 𝑘=1 2
(𝑥 𝑘 − 𝑦 𝑘 )
2
𝑛 Õ 𝑘=1
(𝑦 𝑘 − 𝑧 𝑘 )2
2
(𝑦 𝑘 − 𝑧 𝑘 )2 ® = 𝑑(𝑥, 𝑦) + 𝑑(𝑦, 𝑧) .
ª ¬
Because the square root is an increasing function, the inequality is preserved when we take the square root of both sides, and we obtain the triangle inequality.
259
7.1. METRIC SPACES
Example 7.1.6: The set of complex numbers ℂ is the set of numbers 𝑧 = 𝑥 + 𝑖𝑦, where 𝑥 and 𝑦 are in ℝ. By imposing 𝑖 2 = −1, we make ℂ into a field. For the purposes of taking limits, the set ℂ is regarded as the metric space ℝ2 , where 𝑧 = 𝑥 + p𝑖𝑦 ∈ ℂ corresponds to 2 (𝑥, 𝑦) ∈ ℝ . For 𝑧 = 𝑥 + 𝑖𝑦 define the complex modulus by |𝑧| B 𝑥 2 + 𝑦 2 . Then for two complex numbers 𝑧 1 = 𝑥1 + 𝑖𝑦1 and 𝑧2 = 𝑥 2 + 𝑖𝑦2 , the distance is 𝑑(𝑧 1 , 𝑧2 ) =
q
(𝑥1 − 𝑥2 )2 + (𝑦1 − 𝑦2 )2 = |𝑧 1 − 𝑧 2 |.
Furthermore, when working with complex numbers it is often convenient to write the metric in terms of the so-called complex conjugate: The conjugate of 𝑧 = 𝑥 + 𝑖𝑦 is 𝑧¯ B 𝑥 − 𝑖𝑦. Then |𝑧| 2 = 𝑥 2 + 𝑦 2 = 𝑧 𝑧¯ , and so |𝑧1 − 𝑧 2 | 2 = (𝑧 1 − 𝑧 2 )(𝑧 1 − 𝑧 2 ). Example 7.1.7: An example to keep in mind is the so-called discrete metric. For any set 𝑋, define ( 1 if 𝑥 ≠ 𝑦, 𝑑(𝑥, 𝑦) B 0 if 𝑥 = 𝑦.
That is, all points are equally distant from each other. When 𝑋 is a finite set, we can draw a diagram, see for example Figure 7.3. Of course, in the diagram the distances are not the normal euclidean distances in the plane. Things become subtle when 𝑋 is an infinite set such as the real numbers. 𝑒 1 1
𝑎 1
1 1 1
𝑏
1
𝑑
1 1
1 𝑐
Figure 7.3: Sample discrete metric space {𝑎, 𝑏, 𝑐, 𝑑, 𝑒}, the distance between any two points is 1.
While this particular example may seldom come up in practice, it gives a useful “smell test.” If you make a statement about metric spaces, try it with the discrete metric. To show that (𝑋 , 𝑑) is indeed a metric space is left as an exercise. Example 7.1.8: Let 𝐶 [𝑎, 𝑏], ℝ be the set of continuous real-valued functions on the interval [𝑎, 𝑏]. Define the metric on 𝐶 [𝑎, 𝑏], ℝ as
𝑑( 𝑓 , 𝑔) B sup | 𝑓 (𝑥) − 𝑔(𝑥)| . 𝑥∈[𝑎,𝑏]
Let us check the properties. First, 𝑑( 𝑓 , 𝑔) is finite as | 𝑓 (𝑥) − 𝑔(𝑥)| is a continuous function on a closed bounded interval [𝑎, 𝑏], and so is bounded. It is clear that 𝑑( 𝑓 , 𝑔) ≥ 0, it is the
260
CHAPTER 7. METRIC SPACES
supremum of nonnegative numbers. If 𝑓 = 𝑔, then | 𝑓 (𝑥) − 𝑔(𝑥)| = 0 for all 𝑥, and hence 𝑑( 𝑓 , 𝑔) = 0. Conversely, if 𝑑( 𝑓 , 𝑔) = 0, then for every 𝑥, we have | 𝑓 (𝑥) − 𝑔(𝑥)| ≤ 𝑑( 𝑓 , 𝑔) = 0, and hence 𝑓 (𝑥) = 𝑔(𝑥) for all 𝑥, and so 𝑓 = 𝑔. That 𝑑( 𝑓 , 𝑔) = 𝑑(𝑔, 𝑓 ) is equally trivial. To show the triangle inequality we use the standard triangle inequality; 𝑑( 𝑓 , 𝑔) = sup | 𝑓 (𝑥) − 𝑔(𝑥)| = sup | 𝑓 (𝑥) − ℎ(𝑥) + ℎ(𝑥) − 𝑔(𝑥)| 𝑥∈[𝑎,𝑏]
𝑥∈[𝑎,𝑏]
≤ sup | 𝑓 (𝑥) − ℎ(𝑥)| + |ℎ(𝑥) − 𝑔(𝑥)|
𝑥∈[𝑎,𝑏]
≤ sup | 𝑓 (𝑥) − ℎ(𝑥)| + sup |ℎ(𝑥) − 𝑔(𝑥)| = 𝑑( 𝑓 , ℎ) + 𝑑(ℎ, 𝑔). 𝑥∈[𝑎,𝑏]
𝑥∈[𝑎,𝑏]
When treating 𝐶 [𝑎, 𝑏], ℝ as a metric space without mentioning a metric, we mean this particular metric. Notice that 𝑑( 𝑓 , 𝑔) = ∥ 𝑓 − 𝑔∥ [𝑎,𝑏] , the uniform norm of Definition 6.1.9. This example may seem esoteric at first, but it turns out that working with spaces such as 𝐶 [𝑎, 𝑏], ℝ is really the meat of a large part of modern analysis. Treating sets of functions as metric spaces allows us to abstract away a lot of the grubby detail and prove powerful results such as Picard’s theorem with less work.
Example 7.1.9: Another useful example of a metric space is the sphere with a metric usually called the great circle distance. Let 𝑆 2 be the unit sphere in ℝ3 , that is 𝑆2 B {𝑥 ∈ ℝ3 : 𝑥12 + 𝑥22 + 𝑥32 = 1}. Take 𝑥 and 𝑦 in 𝑆2 , draw a line through the origin and 𝑥, and another line through the origin and 𝑦, and let 𝜃 be the angle that the two lines make. Then define 𝑑(𝑥, 𝑦) B 𝜃. See Figure 7.4. The law of cosines from vector calculus says 𝑑(𝑥, 𝑦) = arccos(𝑥1 𝑦1 + 𝑥2 𝑦2 + 𝑥3 𝑦3 ). It is relatively easy to see that this function satisfies the first three properties of a metric. Triangle inequality is harder to prove, and requires a bit more trigonometry and linear algebra than we wish to indulge in right now, so let us leave it without proof. 𝑥
𝑆2
𝜃 0
𝑦
Figure 7.4: The great circle distance on the unit sphere.
This distance is the shortest distance between points on a sphere if we are allowed to travel on the sphere only. It is easy to generalize to arbitrary diameters. If we take a sphere of radius 𝑟, we let the distance be 𝑑(𝑥, 𝑦) B 𝑟𝜃. As an example, this is the standard distance you would use if you compute a distance on the surface of the earth, such as computing the distance a plane travels from London to Los Angeles.
261
7.1. METRIC SPACES
Oftentimes it is useful to consider a subset of a larger metric space as a metric space itself. We obtain the following proposition, which has a trivial proof. Proposition 7.1.10. Let (𝑋 , 𝑑) be a metric space and 𝑌 ⊂ 𝑋. Then the restriction 𝑑|𝑌×𝑌 is a metric on 𝑌. Definition 7.1.11. If (𝑋 , 𝑑) is a metric space, 𝑌 ⊂ 𝑋, and 𝑑′ B 𝑑|𝑌×𝑌 , then (𝑌, 𝑑′) is said to be a subspace of (𝑋 , 𝑑). It is common to simply write 𝑑 for the metric on 𝑌, as it is the restriction of the metric on 𝑋. Sometimes we say 𝑑′ is the subspace metric and 𝑌 has the subspace topology.
A subset of the real numbers is bounded whenever all its elements are at most some fixed distance from 0. When dealing with an arbitrary metric space there may not be some natural fixed point 0, but for the purposes of boundedness it does not matter. Definition 7.1.12. Let (𝑋 , 𝑑) be a metric space. A subset 𝑆 ⊂ 𝑋 is said to be bounded if there exists a 𝑝 ∈ 𝑋 and a 𝐵 ∈ ℝ such that 𝑑(𝑝, 𝑥) ≤ 𝐵
for all 𝑥 ∈ 𝑆.
We say (𝑋 , 𝑑) is bounded if 𝑋 itself is a bounded subset.
For example, the set of real numbers with the standard metric is not a bounded metric space. It is not hard to see that a subset of the real numbers is bounded in the sense of chapter 1 if and only if it is bounded as a subset of the metric space of real numbers with the standard metric. On the other hand, if we take the real numbers with the discrete metric, then we obtain a bounded metric space. In fact, any set with the discrete metric is bounded. There are other equivalent ways we could generalize boundedness, and are left as exercises. Suppose 𝑋 is nonempty to avoid a technicality. Then 𝑆 ⊂ 𝑋 being bounded is equivalent to either (i) For every 𝑝 ∈ 𝑋, there exists a 𝐵 > 0 such that 𝑑(𝑝, 𝑥) ≤ 𝐵 for all 𝑥 ∈ 𝑆.
(ii) diam(𝑆) B sup 𝑑(𝑥, 𝑦) : 𝑥, 𝑦 ∈ 𝑆 < ∞.
The quantity diam(𝑆) is called the diameter of a set and is usually only defined for a nonempty set.
7.1.1
Exercises
Exercise 7.1.1: Show that for every set 𝑋, the discrete metric (𝑑(𝑥, 𝑦) = 1 if 𝑥 ≠ 𝑦 and 𝑑(𝑥, 𝑥) = 0) does give a metric space (𝑋 , 𝑑). Exercise 7.1.2: Let 𝑋 B {0} be a set. Can you make it into a metric space? Exercise 7.1.3: Let 𝑋 B {𝑎, 𝑏} be a set. Can you make it into two distinct metric spaces? (define two distinct metrics on it)
262
CHAPTER 7. METRIC SPACES
Exercise 7.1.4: Let the set 𝑋 B {𝐴, 𝐵, 𝐶} represent 3 buildings on campus. Suppose we wish our distance to be the time it takes to walk from one building to the other. It takes 5 minutes either way between buildings 𝐴 and 𝐵. However, building 𝐶 is on a hill and it takes 10 minutes from 𝐴 and 15 minutes from 𝐵 to get to 𝐶. On the other hand it takes 5 minutes to go from 𝐶 to 𝐴 and 7 minutes to go from 𝐶 to 𝐵, as we are going downhill. Do these distances define a metric? If so, prove it, if not, say why not. Exercise 7.1.5: Suppose (𝑋 , 𝑑) is a metric space and 𝜑 : [0, ∞) → ℝ is an increasing function such that 𝜑(𝑡) ≥ 0 for all 𝑡 and 𝜑(𝑡) = 0 if and only if 𝑡 = 0. Also suppose 𝜑 is subadditive, that is, 𝜑(𝑠 + 𝑡) ≤ 𝜑(𝑠) + 𝜑(𝑡). Show that with 𝑑′(𝑥, 𝑦) B 𝜑 𝑑(𝑥, 𝑦) , we obtain a new metric space (𝑋 , 𝑑′). Exercise 7.1.6: Let (𝑋 , 𝑑𝑋 ) and (𝑌, 𝑑𝑌 ) be metric spaces.
a) Show that (𝑋 × 𝑌, 𝑑) with 𝑑 (𝑥 1 , 𝑦1 ), (𝑥 2 , 𝑦2 ) B 𝑑𝑋 (𝑥 1 , 𝑥2 ) + 𝑑𝑌 (𝑦1 , 𝑦2 ) is a metric space.
b) Show that (𝑋 × 𝑌, 𝑑) with 𝑑 (𝑥 1 , 𝑦1 ), (𝑥 2 , 𝑦2 ) B max 𝑑𝑋 (𝑥1 , 𝑥2 ), 𝑑𝑌 (𝑦1 , 𝑦2 ) is a metric space.
Exercise 7.1.7: Let 𝑋 be the set of continuous functions on [0, 1]. Let 𝜑 : [0, 1] → (0, ∞) be continuous. Define 𝑑( 𝑓 , 𝑔) B
∫ 0
1
| 𝑓 (𝑥) − 𝑔(𝑥)| 𝜑(𝑥) 𝑑𝑥.
Show that (𝑋 , 𝑑) is a metric space. Exercise 7.1.8: Let (𝑋 , 𝑑) be a metric space. For nonempty bounded subsets 𝐴 and 𝐵 let 𝑑(𝑥, 𝐵) B inf 𝑑(𝑥, 𝑏) : 𝑏 ∈ 𝐵
and
𝑑(𝐴, 𝐵) B sup 𝑑(𝑎, 𝐵) : 𝑎 ∈ 𝐴 .
Now define the Hausdorff metric as 𝑑𝐻 (𝐴, 𝐵) B max 𝑑(𝐴, 𝐵), 𝑑(𝐵, 𝐴) .
Note: 𝑑𝐻 can be defined for arbitrary nonempty subsets if we allow the extended reals. a) Let 𝑌 ⊂ P(𝑋) be the set of bounded nonempty subsets. Prove that (𝑌, 𝑑𝐻 ) is a so-called pseudometric space: 𝑑𝐻 satisfies the metric properties (i), (iii), (iv), and further 𝑑𝐻 (𝐴, 𝐴) = 0 for all 𝐴 ∈ 𝑌.
b) Show by example that 𝑑 itself is not symmetric, that is 𝑑(𝐴, 𝐵) ≠ 𝑑(𝐵, 𝐴).
c) Find a metric space 𝑋 and two different nonempty bounded subsets 𝐴 and 𝐵 such that 𝑑𝐻 (𝐴, 𝐵) = 0. Exercise 7.1.9: Let (𝑋 , 𝑑) be a nonempty metric space and 𝑆 ⊂ 𝑋 a subset. Prove:
a) 𝑆 is bounded if and only if for every 𝑝 ∈ 𝑋, there exists a 𝐵 > 0 such that 𝑑(𝑝, 𝑥) ≤ 𝐵 for all 𝑥 ∈ 𝑆.
b) A nonempty 𝑆 is bounded if and only if diam(𝑆) B sup{𝑑(𝑥, 𝑦) : 𝑥, 𝑦 ∈ 𝑆} < ∞. Exercise 7.1.10: a) Working in ℝ, compute diam [𝑎, 𝑏] .
b) Working in ℝ𝑛 , for every 𝑟 > 0, let 𝐵𝑟 B {𝑥 12 + 𝑥 22 + · · · + 𝑥 𝑛2 < 𝑟 2 }. Compute diam(𝐵𝑟 ).
c) Suppose (𝑋 , 𝑑) is a metric space with at least two points, 𝑑 is the discrete metric, and 𝑝 ∈ 𝑋. Compute diam({𝑝}) and diam(𝑋), then conclude that (𝑋 , 𝑑) is bounded.
263
7.1. METRIC SPACES Exercise 7.1.11: a) Find a metric 𝑑 on ℕ such that ℕ is an unbounded set in (ℕ , 𝑑).
b) Find a metric 𝑑 on ℕ such that ℕ is a bounded set in (ℕ , 𝑑).
c) Find a metric 𝑑 on ℕ such that for every 𝑛 ∈ ℕ and every 𝜖 > 0, there exists an 𝑚 ∈ ℕ such that 𝑑(𝑛, 𝑚) < 𝜖.
Exercise 7.1.12: Let 𝐶 1 [𝑎, 𝑏], ℝ be the set of once continuously differentiable functions on [𝑎, 𝑏]. Define
𝑑( 𝑓 , 𝑔) B ∥ 𝑓 − 𝑔∥ [𝑎,𝑏] + ∥ 𝑓 ′ − 𝑔 ′ ∥ [𝑎,𝑏] , where ∥·∥ [𝑎,𝑏] is the uniform norm. Prove that 𝑑 is a metric. Exercise 7.1.13: Consider ℓ 2 the set of sequences {𝑥 𝑛 }∞ of real numbers such that 𝑛=1
Í∞
𝑛=1
𝑥 𝑛2 < ∞.
a) Prove the Cauchy–Schwarz inequality for two sequences {𝑥 𝑛 }∞ and {𝑦𝑛 }∞ in ℓ 2 : Prove that 𝑛=1 𝑛=1 Í∞ 𝑛=1 𝑥 𝑛 𝑦 𝑛 converges (absolutely) and
Õ ∞ 𝑛=1
𝑥 𝑛 𝑦𝑛
2
≤
Õ ∞
𝑥 𝑛2
Õ ∞
𝑛=1
𝑦𝑛2
.
𝑛=1
q
2 ∞ b) Prove is a metric space with the metric 𝑑(𝑥, 𝑦) B 𝑛=1 (𝑥 𝑛 − 𝑦 𝑛 ) . Hint: Don’t forget to show that the series for 𝑑(𝑥, 𝑦) always converges to some finite number.
that ℓ 2
Í
264
7.2
CHAPTER 7. METRIC SPACES
Open and closed sets
Note: 2 lectures
7.2.1
Topology
Before we get to convergence, we define the so-called topology. That is, we define open and closed sets in a metric space. And before that, we define two special open and closed sets. Definition 7.2.1. Let (𝑋 , 𝑑) be a metric space, 𝑥 ∈ 𝑋, and 𝛿 > 0. Define the open ball, or simply ball, of radius 𝛿 around 𝑥 as 𝐵(𝑥, 𝛿) B 𝑦 ∈ 𝑋 : 𝑑(𝑥, 𝑦) < 𝛿 .
Define the closed ball as
𝐶(𝑥, 𝛿) B 𝑦 ∈ 𝑋 : 𝑑(𝑥, 𝑦) ≤ 𝛿 .
When dealing with different metric spaces, it is sometimes vital to emphasize which metric space the ball is in. We do this by writing 𝐵𝑋 (𝑥, 𝛿) B 𝐵(𝑥, 𝛿) or 𝐶 𝑋 (𝑥, 𝛿) B 𝐶(𝑥, 𝛿). Example 7.2.2: Take the metric space ℝ with the standard metric. For 𝑥 ∈ ℝ and 𝛿 > 0, and
𝐵(𝑥, 𝛿) = (𝑥 − 𝛿, 𝑥 + 𝛿)
𝐶(𝑥, 𝛿) = [𝑥 − 𝛿, 𝑥 + 𝛿].
Example 7.2.3: Be careful when working on a subspace. Consider the metric space [0, 1] as a subspace of ℝ. Then in [0, 1], 𝐵(0, 1/2) = 𝐵[0,1] (0, 1/2) = 𝑦 ∈ [0, 1] : |0 − 𝑦| < 1/2 = [0, 1/2).
This is different from 𝐵ℝ (0, 1/2) = (−1/2 , 1/2). The important thing to keep in mind is which metric space we are working in. Definition 7.2.4. Let (𝑋 , 𝑑) be a metric space. A subset 𝑉 ⊂ 𝑋 is open if for every 𝑥 ∈ 𝑉, there exists a 𝛿 > 0 such that 𝐵(𝑥, 𝛿) ⊂ 𝑉. See Figure 7.5. A subset 𝐸 ⊂ 𝑋 is closed if the complement 𝐸 𝑐 = 𝑋 \ 𝐸 is open. When the ambient space 𝑋 is not clear from context, we say 𝑉 is open in 𝑋 and 𝐸 is closed in 𝑋. If 𝑥 ∈ 𝑉 and 𝑉 is open, then we say 𝑉 is an open neighborhood of 𝑥 (or sometimes just neighborhood). Intuitively, an open set 𝑉 is a set that does not include its “boundary.” Wherever we are in 𝑉, we are allowed to “wiggle” a little bit and stay in 𝑉. Similarly, a set 𝐸 is closed if everything not in 𝐸 is some distance away from 𝐸. The open and closed balls are examples of open and closed sets (this must still be proved). But not every set is either open or closed. Generally, most subsets are neither.
265
7.2. OPEN AND CLOSED SETS
𝑉
𝛿 𝑥 𝐵(𝑥, 𝛿)
Figure 7.5: Open set in a metric space. Note that 𝛿 depends on 𝑥.
Example 7.2.5: The set (0, ∞) ⊂ ℝ is open: Given any 𝑥 ∈ (0, ∞), let 𝛿 B 𝑥. Then 𝐵(𝑥, 𝛿) = (0, 2𝑥) ⊂ (0, ∞). The set [0, ∞) ⊂ ℝ is closed: Given 𝑥 ∈ (−∞, 0) = [0, ∞)𝑐 , let 𝛿 B −𝑥. Then 𝐵(𝑥, 𝛿) = (−2𝑥, 0) ⊂ (−∞, 0) = [0, ∞)𝑐 . The set [0, 1) ⊂ ℝ is neither open nor closed. First, every ball in ℝ around 0, 𝐵(0, 𝛿) = (−𝛿, 𝛿), contains negative numbers and hence is not contained in [0, 1). So [0, 1) is not open. Second, every ball in ℝ around 1, 𝐵(1, 𝛿) = (1 − 𝛿, 1 + 𝛿), contains numbers strictly less than 1 and greater than 0 (e.g. 1 − 𝛿/2 as long as 𝛿 < 2). Thus [0, 1)𝑐 = ℝ \ [0, 1) is not open, and [0, 1) is not closed. If (𝑋 , 𝑑) is any metric space, and 𝑥 ∈ 𝑋 is a point, then {𝑥} is closed (exercise). On the other hand, {𝑥} may or may not be open depending on 𝑋. The set {0} ⊂ ℝ is not open as 𝐵(0, 𝛿) contains nonzero numbers for every 𝛿 > 0. If 𝑋 = {𝑥}, then {𝑥} is open. Proposition 7.2.6. Let (𝑋 , 𝑑) be a metric space. (i) ∅ and 𝑋 are open.
(ii) If 𝑉1 , 𝑉2 , . . . , 𝑉𝑘 are open subsets of 𝑋, then 𝑘 Ù
𝑉𝑗
𝑗=1
is also open. That is, a finite intersection of open sets is open. (iii) If {𝑉𝜆 }𝜆∈𝐼 is an arbitrary collection of open subsets of 𝑋, then
Ø
𝑉𝜆
𝜆∈𝐼
is also open. That is, a union of open sets is open. The index set 𝐼 in (iii) can be arbitrarily large. By all 𝑥 such that 𝑥 ∈ 𝑉𝜆 for at least one 𝜆 ∈ 𝐼.
Ð
𝜆∈𝐼
𝑉𝜆 , we simply mean the set of
Proof. The sets ∅ and 𝑋 are obviously open in 𝑋. Ñ Let us prove (ii). If 𝑥 ∈ 𝑘𝑗=1 𝑉𝑗 , then 𝑥 ∈ 𝑉𝑗 for all 𝑗. As 𝑉𝑗 are all open, for every 𝑗 there exists a 𝛿 𝑗 > 0 such that 𝐵(𝑥, 𝛿 𝑗 ) ⊂ 𝑉𝑗 . Take 𝛿 B min{𝛿1 , 𝛿 2 , . . . , 𝛿 𝑘 } and notice 𝛿 > 0.
266
CHAPTER 7. METRIC SPACES
We have 𝐵(𝑥, 𝛿) ⊂ 𝐵(𝑥, 𝛿 𝑗 ) ⊂ 𝑉𝑗 for every 𝑗 and so 𝐵(𝑥, 𝛿) ⊂ 𝑘𝑗=1 𝑉𝑗 . Consequently the intersection is open. Ð Let us prove (iii). If 𝑥 ∈ 𝜆∈𝐼 𝑉𝜆 , then 𝑥 ∈ 𝑉𝜆 for some 𝜆 ∈ 𝐼. As 𝑉𝜆 is open, there exists Ð a 𝛿 > 0 such that 𝐵(𝑥, 𝛿) ⊂ 𝑉𝜆 . But then 𝐵(𝑥, 𝛿) ⊂ 𝜆∈𝐼 𝑉𝜆 , and so the union is open. □
Ñ
Example 7.2.7: Notice the difference between items (ii) and (iii). Item (ii) is not true for an Ñ −1 1 arbitrary intersection. For instance, ∞ 𝑛=1 ( /𝑛 , /𝑛 ) = {0}, which is not open. The proof of the following analogous proposition for closed sets is left as an exercise.
Proposition 7.2.8. Let (𝑋 , 𝑑) be a metric space. (i) ∅ and 𝑋 are closed.
(ii) If {𝐸𝜆 }𝜆∈𝐼 is an arbitrary collection of closed subsets of 𝑋, then
Ù
𝐸𝜆
𝜆∈𝐼
is also closed. That is, an intersection of closed sets is closed. (iii) If 𝐸1 , 𝐸2 , . . . , 𝐸 𝑘 are closed subsets of 𝑋, then 𝑘 Ø
𝐸𝑗
𝑗=1
is also closed. That is, a finite union of closed sets is closed. Despite the naming, we have not yet shown that the open ball is open and the closed ball is closed. Let us show these facts now to justify the terminology. Proposition 7.2.9. Let (𝑋 , 𝑑) be a metric space, 𝑥 ∈ 𝑋, and 𝛿 > 0. Then 𝐵(𝑥, 𝛿) is open and 𝐶(𝑥, 𝛿) is closed. Proof. Let 𝑦 ∈ 𝐵(𝑥, 𝛿). Let 𝛼 B 𝛿 − 𝑑(𝑥, 𝑦). As 𝛼 > 0, consider 𝑧 ∈ 𝐵(𝑦, 𝛼). Then 𝑑(𝑥, 𝑧) ≤ 𝑑(𝑥, 𝑦) + 𝑑(𝑦, 𝑧) < 𝑑(𝑥, 𝑦) + 𝛼 = 𝑑(𝑥, 𝑦) + 𝛿 − 𝑑(𝑥, 𝑦) = 𝛿. Therefore, 𝑧 ∈ 𝐵(𝑥, 𝛿) for every 𝑧 ∈ 𝐵(𝑦, 𝛼). So 𝐵(𝑦, 𝛼) ⊂ 𝐵(𝑥, 𝛿), and so 𝐵(𝑥, 𝛿) is open. See Figure 7.6. The proof that 𝐶(𝑥, 𝛿) is closed is left as an exercise. □ Again, be careful about which metric space we are in. The set [0, 1/2) is an open ball in [0, 1], and so [0, 1/2) is an open set in [0, 1]. On the other hand, [0, 1/2) is neither open nor closed in ℝ. Proposition 7.2.10. Let 𝑎, 𝑏 be two real numbers, 𝑎 < 𝑏. Then (𝑎, 𝑏), (𝑎, ∞), and (−∞, 𝑏) are open in ℝ. Also [𝑎, 𝑏], [𝑎, ∞), and (−∞, 𝑏] are closed in ℝ. The proof is left as an exercise. Keep in mind that there are many other open and closed sets in the set of real numbers.
267
7.2. OPEN AND CLOSED SETS
𝛼 𝑦 𝑧 𝛿
𝑥 𝐵(𝑥, 𝛿)
Figure 7.6: Proof that 𝐵(𝑥, 𝛿) is open: 𝐵(𝑦, 𝛼) ⊂ 𝐵(𝑥, 𝛿) with the triangle inequality illustrated.
Proposition 7.2.11. Suppose (𝑋 , 𝑑) is a metric space, and 𝑌 ⊂ 𝑋. Then 𝑈 ⊂ 𝑌 is open in 𝑌 (in the subspace topology) if and only if there exists an open set 𝑉 ⊂ 𝑋 (so open in 𝑋) such that 𝑉 ∩ 𝑌 = 𝑈.
For example, let 𝑋 B ℝ, 𝑌 B [0, 1], 𝑈 B [0, 1/2). We saw that 𝑈 is an open set in 𝑌. We may take 𝑉 B (−1/2, 1/2). Proof. Suppose 𝑉 ⊂ 𝑋 is open and 𝑉 ∩ 𝑌 = 𝑈. Let 𝑥 ∈ 𝑈. As 𝑉 is open and 𝑥 ∈ 𝑉, there exists a 𝛿 > 0 such that 𝐵𝑋 (𝑥, 𝛿) ⊂ 𝑉. Then 𝐵𝑌 (𝑥, 𝛿) = 𝐵𝑋 (𝑥, 𝛿) ∩ 𝑌 ⊂ 𝑉 ∩ 𝑌 = 𝑈.
So 𝑈 is open in 𝑌. The proof of the opposite direction, that is, that if 𝑈 ⊂ 𝑌 is open in the subspace topology there exists a 𝑉 is left as Exercise 7.2.12. □ A hint for finishing the proof (the exercise) is that a useful way to think about an open set is as a union of open balls. If 𝑈 is open, then for each 𝑥 ∈ 𝑈, there is a 𝛿 𝑥 > 0 (depending Ð on 𝑥) such that 𝐵(𝑥, 𝛿 𝑥 ) ⊂ 𝑈. Then 𝑈 = 𝑥∈𝑈 𝐵(𝑥, 𝛿 𝑥 ). In the case of an open subset of an open set or a closed subset of a closed set, matters are simpler. Proposition 7.2.12. Suppose (𝑋 , 𝑑) is a metric space, 𝑉 ⊂ 𝑋 is open, and 𝐸 ⊂ 𝑋 is closed. (i) 𝑈 ⊂ 𝑉 is open in the subspace topology if and only if 𝑈 is open in 𝑋.
(ii) 𝐹 ⊂ 𝐸 is closed in the subspace topology if and only if 𝐹 is closed in 𝑋. Proof. We prove (i) and leave (ii) as an exercise. If 𝑈 ⊂ 𝑉 is open in the subspace topology, by Proposition 7.2.11, there is a set 𝑊 ⊂ 𝑋 open in 𝑋 such that 𝑈 = 𝑊 ∩ 𝑉. Intersection of two open sets is open so 𝑈 is open in 𝑋. Now suppose 𝑈 is open in 𝑋. Then 𝑈 = 𝑈 ∩ 𝑉. So 𝑈 is open in 𝑉 again by Proposition 7.2.11. □
268
7.2.2
CHAPTER 7. METRIC SPACES
Connected sets
Let us generalize the idea of an interval to general metric spaces. One of the main features of an interval in ℝ is that it is connected—that we can continuously move from one point of it to another point without jumping. For example, in ℝ we usually study functions on intervals, and in more general metric spaces we usually study functions on connected sets. Definition 7.2.13. A nonempty* metric space (𝑋 , 𝑑) is connected if the only subsets of 𝑋 that are both open and closed (so-called clopen subsets) are ∅ and 𝑋 itself. If a nonempty (𝑋 , 𝑑) is not connected we say it is disconnected. When we apply the term connected to a nonempty subset 𝐴 ⊂ 𝑋, we mean that 𝐴 with the subspace topology is connected. In other words, a nonempty 𝑋 is connected if whenever we write 𝑋 = 𝑋1 ∪ 𝑋2 where 𝑋1 ∩ 𝑋2 = ∅ and 𝑋1 and 𝑋2 are open, then either 𝑋1 = ∅ or 𝑋2 = ∅. So to show 𝑋 is disconnected, we need to find nonempty disjoint open sets 𝑋1 and 𝑋2 whose union is 𝑋. For subsets, we state this idea as a proposition. The proposition is illustrated in Figure 7.7. Proposition 7.2.14. Let (𝑋 , 𝑑) be a metric space. A nonempty set 𝑆 ⊂ 𝑋 is disconnected if and only if there exist open sets 𝑈1 and 𝑈2 in 𝑋 such that 𝑈1 ∩ 𝑈2 ∩ 𝑆 = ∅, 𝑈1 ∩ 𝑆 ≠ ∅, 𝑈2 ∩ 𝑆 ≠ ∅, and 𝑆 = 𝑈1 ∩ 𝑆 ∪ 𝑈2 ∩ 𝑆 .
𝑈1
𝑆
𝑆
𝑈2
Figure 7.7: Disconnected subset. Notice that 𝑈1 ∩ 𝑈2 need not be empty, but 𝑈1 ∩ 𝑈2 ∩ 𝑆 = ∅.
Proof. First suppose 𝑆 is disconnected: There are nonempty disjoint 𝑆1 and 𝑆2 that are open in 𝑆 and 𝑆 = 𝑆1 ∪ 𝑆2 . Proposition 7.2.11 says there exist 𝑈1 and 𝑈2 that are open in 𝑋 such that 𝑈1 ∩ 𝑆 = 𝑆1 and 𝑈2 ∩ 𝑆 = 𝑆2 . For the other direction start with the 𝑈1 and 𝑈2 . Then 𝑈1 ∩ 𝑆 and 𝑈2 ∩ 𝑆 are open in 𝑆 by Proposition 7.2.11. Via the discussion before the proposition, 𝑆 is disconnected. □ Example 7.2.15: Suppose 𝑆 ⊂ ℝ and there are 𝑥, 𝑦, 𝑧 such that 𝑥 < 𝑧 < 𝑦 with 𝑥, 𝑦 ∈ 𝑆 and 𝑧 ∉ 𝑆. Claim: 𝑆 is disconnected. Proof: Notice (−∞, 𝑧) ∩ 𝑆 ∪ (𝑧, ∞) ∩ 𝑆 = 𝑆.
* Some
authors do not exclude the empty set from the definition, and the empty set would then be connected. We avoid the empty set for essentially the same reason why 1 is neither a prime nor a composite number: Our connected sets have exactly two clopen subsets and disconnected sets have more than two. The empty set has exactly one.
269
7.2. OPEN AND CLOSED SETS
Proposition 7.2.16. A nonempty set 𝑆 ⊂ ℝ is connected if and only if 𝑆 is an interval or a single point. Proof. Suppose 𝑆 is connected. If 𝑆 is a single point, then we are done. So suppose 𝑥 < 𝑦 and 𝑥, 𝑦 ∈ 𝑆. If 𝑧 ∈ ℝ is such that 𝑥 < 𝑧 < 𝑦, then (−∞, 𝑧) ∩ 𝑆 is nonempty and (𝑧, ∞) ∩ 𝑆 is nonempty. The two sets are disjoint. As 𝑆 is connected, we must have they their union is not 𝑆, so 𝑧 ∈ 𝑆. By Proposition 1.4.1, 𝑆 is an interval. If 𝑆 is a single point, it is connected. Therefore, suppose 𝑆 is an interval. Consider open subsets 𝑈1 and 𝑈2 of ℝ such that 𝑈1 ∩𝑆 and 𝑈2 ∩𝑆 are nonempty, and 𝑆 = 𝑈1 ∩𝑆 ∪ 𝑈2 ∩𝑆 . We will show that 𝑈1 ∩ 𝑆 and 𝑈2 ∩ 𝑆 contain a common point, so they are not disjoint, proving that 𝑆 is connected. Suppose 𝑥 ∈ 𝑈1 ∩ 𝑆 and 𝑦 ∈ 𝑈2 ∩ 𝑆. Without loss of generality, assume 𝑥 < 𝑦. As 𝑆 is an interval, [𝑥, 𝑦] ⊂ 𝑆. Note that 𝑈2 ∩ [𝑥, 𝑦] ≠ ∅, and let 𝑧 B inf(𝑈2 ∩ [𝑥, 𝑦]). We wish to show that 𝑧 ∈ 𝑈1 . If 𝑧 = 𝑥, then 𝑧 ∈ 𝑈1 . If 𝑧 > 𝑥, then for every 𝜖 > 0, the ball 𝐵(𝑧, 𝜖) = (𝑧 − 𝜖, 𝑧 + 𝜖) contains points of [𝑥, 𝑦] not in 𝑈2 , as 𝑧 is the infimum of 𝑈2 ∩ [𝑥, 𝑦]. So 𝑧 ∉ 𝑈2 as 𝑈2 is open. Therefore, 𝑧 ∈ 𝑈1 as every point of [𝑥, 𝑦] is in 𝑈1 or 𝑈2 . As 𝑈1 is open, 𝐵(𝑧, 𝛿) ⊂ 𝑈1 for a small enough 𝛿 > 0. As 𝑧 is the infimum of the nonempty set 𝑈2 ∩ [𝑥, 𝑦], there must exist some 𝑤 ∈ 𝑈2 ∩ [𝑥, 𝑦] such that 𝑤 ∈ [𝑧, 𝑧 + 𝛿) ⊂ 𝐵(𝑧, 𝛿) ⊂ 𝑈1 . Therefore, 𝑤 ∈ 𝑈1 ∩ 𝑈2 ∩ [𝑥, 𝑦]. So 𝑈1 ∩ 𝑆 and 𝑈2 ∩ 𝑆 are not disjoint, and 𝑆 is connected. See Figure 7.8. □
𝑈2
𝑈1
𝑧
𝑥
𝑤
𝑦
(𝑧 − 𝛿, 𝑧 + 𝛿) Figure 7.8: Proof that an interval is connected.
Example 7.2.17: Oftentimes a ball 𝐵(𝑥, 𝛿) is connected, but this is not necessarily true in every metric space. For a simplest example, take a two point space {𝑎, 𝑏} with the discrete metric. Then 𝐵(𝑎, 2) = {𝑎, 𝑏}, which is not connected as 𝐵(𝑎, 1) = {𝑎} and 𝐵(𝑏, 1) = {𝑏} are open and disjoint.
7.2.3
Closure and boundary
Sometimes we wish to take a set and throw in everything that we can approach from the set. This concept is called the closure. Definition 7.2.18. Let (𝑋 , 𝑑) be a metric space and 𝐴 ⊂ 𝑋. The closure of 𝐴 is the set 𝐴B
Ù
{𝐸 ⊂ 𝑋 : 𝐸 is closed and 𝐴 ⊂ 𝐸}.
That is, 𝐴 is the intersection of all closed sets that contain 𝐴.
270
CHAPTER 7. METRIC SPACES
Proposition 7.2.19. Let (𝑋 , 𝑑) be a metric space and 𝐴 ⊂ 𝑋. The closure 𝐴 is closed, and 𝐴 ⊂ 𝐴. Furthermore, if 𝐴 is closed, then 𝐴 = 𝐴. Proof. The closure is an intersection of closed sets, so 𝐴 is closed. There is at least one closed set containing 𝐴, namely 𝑋 itself, so 𝐴 ⊂ 𝐴. If 𝐴 is closed, then 𝐴 is a closed set that contains 𝐴. So 𝐴 ⊂ 𝐴, and thus 𝐴 = 𝐴. □ Example 7.2.20: The closure of (0, 1) in ℝ is [0, 1]. Proof: If 𝐸 is closed and contains (0, 1), then 𝐸 contains 0 and 1 (why?). Thus [0, 1] ⊂ 𝐸. But [0, 1] is also closed. Hence, the closure (0, 1) = [0, 1]. Example 7.2.21: Be careful to notice what ambient metric space you are working with. If 𝑋 = (0, ∞), then the closure of (0, 1) in (0, ∞) is (0, 1]. Proof: Similarly as above, (0, 1] is closed in (0, ∞) (why?). Any closed set 𝐸 that contains (0, 1) must contain 1 (why?). Therefore, (0, 1] ⊂ 𝐸, and hence (0, 1) = (0, 1] when working in (0, ∞). Let us justify the statement that the closure is everything that we can “approach” from within the set. Proposition 7.2.22. Let (𝑋 , 𝑑) be a metric space and 𝐴 ⊂ 𝑋. Then 𝑥 ∈ 𝐴 if and only if for every 𝛿 > 0, 𝐵(𝑥, 𝛿) ∩ 𝐴 ≠ ∅.
Proof. Let us prove the two contrapositives. Let us show that 𝑥 ∉ 𝐴 if and only if there exists a 𝛿 > 0 such that 𝐵(𝑥, 𝛿) ∩ 𝐴 = ∅. 𝑐 First suppose 𝑥 ∉ 𝐴. We know 𝐴 is closed. Thus there is a 𝛿 > 0 such that 𝐵(𝑥, 𝛿) ⊂ 𝐴 . 𝑐 As 𝐴 ⊂ 𝐴 we see that 𝐵(𝑥, 𝛿) ⊂ 𝐴 ⊂ 𝐴 𝑐 and hence 𝐵(𝑥, 𝛿) ∩ 𝐴 = ∅. On the other hand, suppose there is a 𝛿 > 0 such that 𝐵(𝑥, 𝛿) ∩ 𝐴 = ∅. In other words, 𝐴 ⊂ 𝐵(𝑥, 𝛿)𝑐 . As 𝐵(𝑥, 𝛿)𝑐 is a closed set, as 𝑥 ∉ 𝐵(𝑥, 𝛿)𝑐 , and as 𝐴 is the intersection of closed sets containing 𝐴, we have 𝑥 ∉ 𝐴. □ We can also talk about the interior of a set (points we cannot approach from the complement), and the boundary of a set (points we can approach both from the set and its complement). Definition 7.2.23. Let (𝑋 , 𝑑) be a metric space and 𝐴 ⊂ 𝑋. The interior of 𝐴 is the set 𝐴◦ B {𝑥 ∈ 𝐴 : there exists a 𝛿 > 0 such that 𝐵(𝑥, 𝛿) ⊂ 𝐴}. The boundary of 𝐴 is the set
𝜕𝐴 B 𝐴 \ 𝐴◦ .
Alternatively, the interior is the union of open sets lying in 𝐴, see Exercise 7.2.14. By definition, 𝐴◦ ⊂ 𝐴; however, the points of the boundary may or may not be in 𝐴. Example 7.2.24: Suppose 𝐴 B (0, 1] and 𝑋 B ℝ. Then 𝐴 = [0, 1], 𝐴◦ = (0, 1), and 𝜕𝐴 = {0, 1}.
271
7.2. OPEN AND CLOSED SETS
Example 7.2.25: Consider 𝑋 B {𝑎, 𝑏} with the discrete metric, and let 𝐴 B {𝑎}. Then 𝐴 = 𝐴◦ = 𝐴 and 𝜕𝐴 = ∅. Proposition 7.2.26. Let (𝑋 , 𝑑) be a metric space and 𝐴 ⊂ 𝑋. Then 𝐴◦ is open and 𝜕𝐴 is closed. Proof. Given 𝑥 ∈ 𝐴◦ , there is a 𝛿 > 0 such that 𝐵(𝑥, 𝛿) ⊂ 𝐴. If 𝑧 ∈ 𝐵(𝑥, 𝛿), then as open balls are open, there is an 𝜖 > 0 such that 𝐵(𝑧, 𝜖) ⊂ 𝐵(𝑥, 𝛿) ⊂ 𝐴. So 𝑧 ∈ 𝐴◦ . Therefore, 𝐵(𝑥, 𝛿) ⊂ 𝐴◦ , and so 𝐴◦ is open. As 𝐴◦ is open, then 𝜕𝐴 = 𝐴 \ 𝐴◦ = 𝐴 ∩ (𝐴◦ )𝑐 is closed. □ The boundary is the set of points that are close to both the set and its complement. See Figure 7.9 for a diagram of the next proposition. Proposition 7.2.27. Let (𝑋 , 𝑑) be a metric space and 𝐴 ⊂ 𝑋. Then 𝑥 ∈ 𝜕𝐴 if and only if for every 𝛿 > 0, 𝐵(𝑥, 𝛿) ∩ 𝐴 and 𝐵(𝑥, 𝛿) ∩ 𝐴 𝑐 are both nonempty.
𝐴𝑐 𝛿 𝑥 𝐴
𝜕𝐴
𝐵(𝑥, 𝛿)
Figure 7.9: Boundary is the set where every ball contains points in the set and also its complement.
Proof. Suppose 𝑥 ∈ 𝜕𝐴 = 𝐴 \ 𝐴◦ and let 𝛿 > 0 be arbitrary. By Proposition 7.2.22, 𝐵(𝑥, 𝛿) contains a point of 𝐴. If 𝐵(𝑥, 𝛿) contained no points of 𝐴 𝑐 , then 𝑥 would be in 𝐴◦ . Hence 𝐵(𝑥, 𝛿) contains a point of 𝐴 𝑐 as well. Let us prove the other direction by contrapositive. Suppose 𝑥 ∉ 𝜕𝐴, so 𝑥 ∉ 𝐴 or 𝑥 ∈ 𝐴◦ . 𝑐 If 𝑥 ∉ 𝐴, then 𝐵(𝑥, 𝛿) ⊂ 𝐴 for some 𝛿 > 0 as 𝐴 is closed. So 𝐵(𝑥, 𝛿) ∩ 𝐴 is empty, because 𝑐 𝐴 ⊂ 𝐴 𝑐 . If 𝑥 ∈ 𝐴◦ , then 𝐵(𝑥, 𝛿) ⊂ 𝐴 for some 𝛿 > 0, so 𝐵(𝑥, 𝛿) ∩ 𝐴 𝑐 is empty. □ We obtain the following immediate corollary about closures of 𝐴 and 𝐴 𝑐 . We simply apply Proposition 7.2.22. Corollary 7.2.28. Let (𝑋 , 𝑑) be a metric space and 𝐴 ⊂ 𝑋. Then 𝜕𝐴 = 𝐴 ∩ 𝐴 𝑐 .
272
7.2.4
CHAPTER 7. METRIC SPACES
Exercises
Exercise 7.2.1: Prove Proposition 7.2.8. Hint: Apply Proposition 7.2.6 to the complements of the sets.

Exercise 7.2.2: Finish the proof of Proposition 7.2.9 by proving that 𝐶(𝑥, 𝛿) is closed.

Exercise 7.2.3: Prove Proposition 7.2.10.

Exercise 7.2.4: Suppose (𝑋, 𝑑) is a nonempty metric space with the discrete topology. Show that 𝑋 is connected if and only if it contains exactly one element.

Exercise 7.2.5: Take ℚ with the standard metric, 𝑑(𝑥, 𝑦) = |𝑥 − 𝑦|, as our metric space. Prove that ℚ is totally disconnected, that is, show that for every 𝑥, 𝑦 ∈ ℚ with 𝑥 ≠ 𝑦, there exist two open sets 𝑈 and 𝑉 such that 𝑥 ∈ 𝑈, 𝑦 ∈ 𝑉, 𝑈 ∩ 𝑉 = ∅, and 𝑈 ∪ 𝑉 = ℚ.

Exercise 7.2.6: Show that in a metric space, every open set can be written as a union of closed sets.

Exercise 7.2.7: Prove that in a metric space,
a) 𝐸 is closed if and only if 𝜕𝐸 ⊂ 𝐸.
b) 𝑈 is open if and only if 𝜕𝑈 ∩ 𝑈 = ∅.

Exercise 7.2.8: Prove that in a metric space,
a) 𝐴 is open if and only if 𝐴◦ = 𝐴.
b) 𝑈 ⊂ 𝐴◦ for every open set 𝑈 such that 𝑈 ⊂ 𝐴.

Exercise 7.2.9: Let 𝑋 be a set and 𝑑, 𝑑′ be two metrics on 𝑋. Suppose there exist an 𝛼 > 0 and a 𝛽 > 0 such that 𝛼𝑑(𝑥, 𝑦) ≤ 𝑑′(𝑥, 𝑦) ≤ 𝛽𝑑(𝑥, 𝑦) for all 𝑥, 𝑦 ∈ 𝑋. Show that 𝑈 is open in (𝑋, 𝑑) if and only if 𝑈 is open in (𝑋, 𝑑′). That is, the topologies of (𝑋, 𝑑) and (𝑋, 𝑑′) are the same.

Exercise 7.2.10: Suppose {𝑆ᵢ}, 𝑖 ∈ ℕ, is a collection of connected subsets of a metric space (𝑋, 𝑑), and there exists an 𝑥 ∈ 𝑋 such that 𝑥 ∈ 𝑆ᵢ for all 𝑖 ∈ ℕ. Show that $\bigcup_{i=1}^\infty S_i$ is connected.

Exercise 7.2.11: Let 𝐴 be a connected set in a metric space.
a) Is the closure $\overline{A}$ connected? Prove or find a counterexample.
b) Is the interior 𝐴◦ connected? Prove or find a counterexample.
Hint: Think of sets in ℝ².

Exercise 7.2.12: Finish the proof of Proposition 7.2.11. Suppose (𝑋, 𝑑) is a metric space and 𝑌 ⊂ 𝑋. Show that with the subspace metric on 𝑌, if a set 𝑈 ⊂ 𝑌 is open (in 𝑌), then there exists an open set 𝑉 ⊂ 𝑋 such that 𝑈 = 𝑉 ∩ 𝑌.

Exercise 7.2.13: Let (𝑋, 𝑑) be a metric space.
a) For every 𝑥 ∈ 𝑋 and 𝛿 > 0, show that $\overline{B(x,\delta)} \subset C(x,\delta)$.
b) Is it always true that $\overline{B(x,\delta)} = C(x,\delta)$? Prove or find a counterexample.

Exercise 7.2.14: Let (𝑋, 𝑑) be a metric space and 𝐴 ⊂ 𝑋. Show that $A^\circ = \bigcup \{ V : V \text{ is open and } V \subset A \}$.

Exercise 7.2.15: Finish the proof of Proposition 7.2.12.
Exercise 7.2.16: Let (𝑋 , 𝑑) be a metric space. Show that there exists a bounded metric 𝑑′ such that (𝑋 , 𝑑′) has the same open sets, that is, the topology is the same. Exercise 7.2.17: Let (𝑋 , 𝑑) be a metric space.
a) Prove that for every 𝑥 ∈ 𝑋, there either exists a 𝛿 > 0 such that 𝐵(𝑥, 𝛿) = {𝑥}, or 𝐵(𝑥, 𝛿) is infinite for every 𝛿 > 0.
b) Find an explicit example of (𝑋 , 𝑑), 𝑋 infinite, where for every 𝛿 > 0 and every 𝑥 ∈ 𝑋, the ball 𝐵(𝑥, 𝛿) is finite. c) Find an explicit example of (𝑋 , 𝑑) where for every 𝛿 > 0 and every 𝑥 ∈ 𝑋, the ball 𝐵(𝑥, 𝛿) is countably infinite. d) Prove that if 𝑋 is uncountable, then there exists an 𝑥 ∈ 𝑋 and a 𝛿 > 0 such that 𝐵(𝑥, 𝛿) is uncountable. Exercise 7.2.18: For every 𝑥 ∈ ℝ𝑛 and every 𝛿 > 0 define the “rectangle” 𝑅(𝑥, 𝛿) B (𝑥 1 − 𝛿, 𝑥1 + 𝛿) × (𝑥 2 − 𝛿, 𝑥2 + 𝛿) × · · · × (𝑥 𝑛 − 𝛿, 𝑥 𝑛 + 𝛿). Show that these sets generate the same open sets as the balls in standard metric. That is, show that a set 𝑈 ⊂ ℝ𝑛 is open in the sense of the standard metric if and only if for every point 𝑥 ∈ 𝑈, there exists a 𝛿 > 0 such that 𝑅(𝑥, 𝛿) ⊂ 𝑈.
7.3
Sequences and convergence
Note: 1 lecture
7.3.1
Sequences
The notion of a sequence in a metric space is very similar to a sequence of real numbers. The related definitions are essentially the same as those for real numbers in the sense of chapter 2, where ℝ with the standard metric 𝑑(𝑥, 𝑦) = |𝑥 − 𝑦| is replaced by an arbitrary metric space (𝑋 , 𝑑). Definition 7.3.1. A sequence in a metric space (𝑋 , 𝑑) is a function 𝑥 : ℕ → 𝑋. As before we write 𝑥 𝑛 for the 𝑛th element in the sequence, and for the whole sequence use the notation {𝑥 𝑛 }∞ 𝑛=1 . A sequence {𝑥 𝑛 }∞ is bounded if there exists a point 𝑝 ∈ 𝑋 and 𝐵 ∈ ℝ such that 𝑛=1 for all 𝑛 ∈ ℕ .
𝑑(𝑝, 𝑥 𝑛 ) ≤ 𝐵
In other words, the sequence {𝑥 𝑛 }∞ is bounded whenever the set {𝑥 𝑛 : 𝑛 ∈ ℕ } is bounded. 𝑛=1 ∞ If {𝑛 𝑘 } 𝑘=1 is a sequence of natural numbers such that 𝑛 𝑘+1 > 𝑛 𝑘 for all 𝑘, then the sequence {𝑥 𝑛 𝑘 }∞ is said to be a subsequence of {𝑥 𝑛 }∞ . 𝑛=1 𝑘=1 Similarly we define convergence. See Figure 7.10, for an idea of the definition. Definition 7.3.2. A sequence {𝑥 𝑛 }∞ in a metric space (𝑋 , 𝑑) is said to converge to a point 𝑛=1 𝑝 ∈ 𝑋 if for every 𝜖 > 0, there exists an 𝑀 ∈ ℕ such that 𝑑(𝑥 𝑛 , 𝑝) < 𝜖 for all 𝑛 ≥ 𝑀. The point 𝑝 is said to be a limit of {𝑥 𝑛 }∞ . If the limit is unique, we write 𝑛=1 lim 𝑥 𝑛 B 𝑝.
𝑛→∞
A sequence that converges is convergent. Otherwise, the sequence is divergent.
Figure 7.10: Sequence converging to 𝑝. The first 10 points are shown and 𝑀 = 7 for this 𝜖.
The limit is unique. The proof is almost identical (word for word) to the proof of the same fact for sequences of real numbers, Proposition 2.1.6. Proofs of many results we know for sequences of real numbers can be adapted to the more general settings of metric spaces. We must replace |𝑥 − 𝑦| with 𝑑(𝑥, 𝑦) in the proofs and apply the triangle inequality correctly. Proposition 7.3.3. A convergent sequence in a metric space has a unique limit. Proof. Suppose {𝑥 𝑛 }∞ has limits 𝑥 and 𝑦. Take an arbitrary 𝜖 > 0. From the definition 𝑛=1 find an 𝑀1 such that for all 𝑛 ≥ 𝑀1 , 𝑑(𝑥 𝑛 , 𝑥) < 𝜖/2. Similarly find an 𝑀2 such that for all 𝑛 ≥ 𝑀2 , we have 𝑑(𝑥 𝑛 , 𝑦) < 𝜖/2. Now take an 𝑛 such that 𝑛 ≥ 𝑀1 and also 𝑛 ≥ 𝑀2 , and estimate 𝑑(𝑦, 𝑥) ≤ 𝑑(𝑦, 𝑥 𝑛 ) + 𝑑(𝑥 𝑛 , 𝑥) 𝜖 𝜖 < + = 𝜖. 2 2 As 𝑑(𝑦, 𝑥) < 𝜖 for all 𝜖 > 0, then 𝑑(𝑥, 𝑦) = 0 and 𝑦 = 𝑥. Hence the limit (if it exists) is unique. □ The proofs of the following propositions are left as exercises. Proposition 7.3.4. A convergent sequence in a metric space is bounded. Proposition 7.3.5. A sequence {𝑥 𝑛 }∞ in a metric space (𝑋 , 𝑑) converges to 𝑝 ∈ 𝑋 if and only if 𝑛=1 there exists a sequence {𝑎 𝑛 }∞ of real numbers such that 𝑛=1 𝑑(𝑥 𝑛 , 𝑝) ≤ 𝑎 𝑛
for all 𝑛 ∈ ℕ, and $\lim_{n\to\infty} a_n = 0$.
Proposition 7.3.6. Let {𝑥 𝑛 }∞ be a sequence in a metric space (𝑋 , 𝑑). 𝑛=1
(i) If {𝑥 𝑛 }∞ converges to 𝑝 ∈ 𝑋, then every subsequence {𝑥 𝑛 𝑘 }∞ converges to 𝑝. 𝑛=1 𝑘=1
(ii) If for some 𝐾 ∈ ℕ the 𝐾-tail {𝑥 𝑛 }∞ converges to 𝑝 ∈ 𝑋, then {𝑥 𝑛 }∞ converges to 𝑝. 𝑛=1 𝑛=𝐾+1 Example 7.3.7: Take 𝐶 [0, 1], ℝ be the set of continuous functions with the metric being the uniform metric. We saw that we obtain a metric space. If we look at the definition of convergence, we notice that it is identical to uniform convergence. That is, { 𝑓𝑛 }∞ 𝑛=1 converges uniformly if and only if it converges in the metric space sense.
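To make the uniform metric of this example concrete, here is a small numerical sketch in Python (ours, not part of the text). It approximates 𝑑(𝑓, 𝑔) = sup|𝑓(𝑥) − 𝑔(𝑥)| by sampling a fine grid, so it only approximates the supremum; the helper name, the grid size, and the choice of the sequence 𝑓ₙ(𝑥) = 𝑥ⁿ are illustrative assumptions.

```python
import numpy as np

def uniform_distance(f, g, a, b, num=10001):
    """Approximate d(f, g) = sup_{x in [a, b]} |f(x) - g(x)| by sampling a fine grid."""
    x = np.linspace(a, b, num)
    return float(np.max(np.abs(f(x) - g(x))))

def zero(x):
    return np.zeros_like(x)

# f_n(x) = x^n tends to 0 pointwise on [0, 1), yet d(f_n, 0) = 1 for every n
# (the supremum over [0, 1] is attained at x = 1).  So {f_n} does not converge
# to the zero function in this metric, i.e., the convergence is not uniform.
for n in (1, 5, 50):
    print(n, uniform_distance(lambda x, n=n: x**n, zero, 0.0, 1.0))  # 1.0 each time
```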
Remark 7.3.8. It is perhaps surprising that on the set of functions 𝑓 : [𝑎, 𝑏] → ℝ (continuous or not) there is no metric that gives pointwise convergence. The proof of this fact is, however, beyond the scope of this book.
7.3.2
Convergence in euclidean space
In the euclidean space ℝ𝑛 , a sequence converges if and only if every component converges:
Proposition 7.3.9. Let $\{x_m\}_{m=1}^\infty$ be a sequence in ℝⁿ, where $x_m = (x_{m,1}, x_{m,2}, \ldots, x_{m,n}) \in \mathbb{R}^n$. Then $\{x_m\}_{m=1}^\infty$ converges if and only if $\{x_{m,k}\}_{m=1}^\infty$ converges for every 𝑘 = 1, 2, …, 𝑛, in which case
\[ \lim_{m\to\infty} x_m = \Bigl( \lim_{m\to\infty} x_{m,1}, \; \lim_{m\to\infty} x_{m,2}, \; \ldots, \; \lim_{m\to\infty} x_{m,n} \Bigr) . \]

Proof. Suppose $\{x_m\}_{m=1}^\infty$ converges to $y = (y_1, y_2, \ldots, y_n) \in \mathbb{R}^n$. Given 𝜖 > 0, there exists an 𝑀 such that for all 𝑚 ≥ 𝑀, we have $d(y, x_m) < \epsilon$. Fix some 𝑘 = 1, 2, …, 𝑛. For all 𝑚 ≥ 𝑀,
\[ |y_k - x_{m,k}| = \sqrt{(y_k - x_{m,k})^2} \le \sqrt{\sum_{\ell=1}^n (y_\ell - x_{m,\ell})^2} = d(y, x_m) < \epsilon . \]
Hence the sequence $\{x_{m,k}\}_{m=1}^\infty$ converges to $y_k$.

For the other direction, suppose $\{x_{m,k}\}_{m=1}^\infty$ converges to $y_k$ for every 𝑘 = 1, 2, …, 𝑛. Given 𝜖 > 0, pick an 𝑀 such that if 𝑚 ≥ 𝑀, then $|y_k - x_{m,k}| < \epsilon/\sqrt{n}$ for all 𝑘 = 1, 2, …, 𝑛. Then
\[ d(y, x_m) = \sqrt{\sum_{k=1}^n (y_k - x_{m,k})^2} < \sqrt{\sum_{k=1}^n \Bigl(\frac{\epsilon}{\sqrt{n}}\Bigr)^2} = \sqrt{\epsilon^2} = \epsilon . \]
That is, the sequence $\{x_m\}_{m=1}^\infty$ converges to 𝑦. □

7.3.3 Convergence and topology

The topology of a metric space, that is, its collection of open sets, determines which sequences converge: convergence can be phrased purely in terms of open neighborhoods.

Proposition 7.3.11. Let (𝑋, 𝑑) be a metric space and $\{x_n\}_{n=1}^\infty$ a sequence in 𝑋. Then $\{x_n\}_{n=1}^\infty$ converges to 𝑝 ∈ 𝑋 if and only if for every open neighborhood 𝑈 of 𝑝, there exists an 𝑀 ∈ ℕ such that for all 𝑛 ≥ 𝑀, we have $x_n \in U$.

Proof. First suppose $\{x_n\}_{n=1}^\infty$ converges to 𝑝. Let 𝑈 be an open neighborhood of 𝑝. As 𝑈 is open, there is an 𝜖 >
0 such that 𝐵(𝑝, 𝜖) ⊂ 𝑈. As the sequence converges, find an 𝑀 ∈ ℕ such that for all 𝑛 ≥ 𝑀, we have 𝑑(𝑝, 𝑥 𝑛 ) < 𝜖, or in other words 𝑥 𝑛 ∈ 𝐵(𝑝, 𝜖) ⊂ 𝑈. Let us prove the other direction. Given 𝜖 > 0, let 𝑈 B 𝐵(𝑝, 𝜖) be the neighborhood of 𝑝. Then there is an 𝑀 ∈ ℕ such that for 𝑛 ≥ 𝑀, we have 𝑥 𝑛 ∈ 𝑈 = 𝐵(𝑝, 𝜖), or in other words, 𝑑(𝑝, 𝑥 𝑛 ) < 𝜖. □ A closed set contains the limits of its convergent sequences.
Proposition 7.3.12. Let (𝑋 , 𝑑) be a metric space, 𝐸 ⊂ 𝑋 a closed set, and {𝑥 𝑛 }∞ a sequence in 𝑛=1 𝐸 that converges to some 𝑝 ∈ 𝑋. Then 𝑝 ∈ 𝐸. is a sequence in 𝑋 that converges Proof. Let us prove the contrapositive. Suppose {𝑥 𝑛 }∞ 𝑛=1 𝑐 𝑐 to 𝑝 ∈ 𝐸 . As 𝐸 is open, Proposition 7.3.11 says that there is an 𝑀 such that for all 𝑛 ≥ 𝑀, 𝑥 𝑛 ∈ 𝐸 𝑐 . So {𝑥 𝑛 }∞ is not a sequence in 𝐸. □ 𝑛=1 To take a closure of a set 𝐴, we start with 𝐴, and we throw in points that are limits of sequences in 𝐴. Proposition 7.3.13. Let (𝑋 , 𝑑) be a metric space and 𝐴 ⊂ 𝑋. Then 𝑝 ∈ 𝐴 if and only if there exists a sequence {𝑥 𝑛 }∞ of elements in 𝐴 such that lim 𝑥 𝑛 = 𝑝. 𝑛=1 𝑛→∞
Proof. Let 𝑝 ∈ 𝐴. For every 𝑛 ∈ ℕ , Proposition 7.2.22 guarantees a point 𝑥 𝑛 ∈ 𝐵(𝑝, 1/𝑛 ) ∩ 𝐴. As 𝑑(𝑝, 𝑥 𝑛 ) < 1/𝑛 , we have lim𝑛→∞ 𝑥 𝑛 = 𝑝. For the other direction, see Exercise 7.3.1. □
7.3.4
Exercises
Exercise 7.3.1: Finish the proof of Proposition 7.3.13: Let (𝑋 , 𝑑) be a metric space and 𝐴 ⊂ 𝑋. Let 𝑝 ∈ 𝑋 be such that there exists a sequence {𝑥 𝑛 }∞ in 𝐴 that converges to 𝑝. Prove that 𝑝 ∈ 𝐴. 𝑛=1 Exercise 7.3.2: a) Show that 𝑑(𝑥, 𝑦) B min 1, |𝑥 − 𝑦| defines a metric on ℝ.
b) Show that a sequence converges in (ℝ , 𝑑) if and only if it converges in the standard metric. c) Find a bounded sequence in (ℝ , 𝑑) that contains no convergent subsequence.
Exercise 7.3.3: Prove Proposition 7.3.4. Exercise 7.3.4: Prove Proposition 7.3.5. Exercise 7.3.5: Suppose {𝑥 𝑛 }∞ converges to 𝑝. Suppose 𝑓 : ℕ → ℕ is a one-to-one function. Show that 𝑛=1 {𝑥 𝑓 (𝑛) }∞ converges to 𝑝. 𝑛=1 Exercise 7.3.6: Let (𝑋 , 𝑑) be a metric space where 𝑑 is the discrete metric. Suppose {𝑥 𝑛 }∞ is a convergent 𝑛=1 sequence in 𝑋. Show that there exists a 𝐾 ∈ ℕ such that for all 𝑛 ≥ 𝐾, we have 𝑥 𝑛 = 𝑥 𝐾 . Exercise 7.3.7: A set 𝑆 ⊂ 𝑋 is said to be dense in 𝑋 if 𝑋 ⊂ 𝑆 or in other words if for every 𝑝 ∈ 𝑋, there exists a sequence {𝑥 𝑛 }∞ in 𝑆 that converges to 𝑝. Prove that ℝ𝑛 contains a countable dense subset. 𝑛=1 Exercise 7.3.8 (Tricky): Suppose {𝑈𝑛 }∞ is a decreasing (𝑈𝑛+1 ⊂ 𝑈𝑛 for all 𝑛) sequence of open sets in a 𝑛=1 Ñ∞ metric space (𝑋 , 𝑑) such that 𝑛=1 𝑈𝑛 = {𝑝} for some 𝑝 ∈ 𝑋. Suppose {𝑥 𝑛 }∞ is a sequence of points in 𝑋 𝑛=1 such that 𝑥 𝑛 ∈ 𝑈𝑛 . Does {𝑥 𝑛 }∞ necessarily converge to 𝑝? Prove or construct a counterexample. 𝑛=1 Exercise 7.3.9: Let 𝐸 ⊂ 𝑋 be closed and let {𝑥 𝑛 }∞ be a sequence in 𝑋 converging to 𝑝 ∈ 𝑋. Suppose 𝑛=1 𝑥 𝑛 ∈ 𝐸 for infinitely many 𝑛 ∈ ℕ . Show 𝑝 ∈ 𝐸.
Exercise 7.3.10: Take ℝ* = {−∞} ∪ ℝ ∪ {∞} to be the extended reals. Define
\[ d(x,y) := \left| \frac{x}{1+|x|} - \frac{y}{1+|y|} \right| \ \text{ if } x, y \in \mathbb{R}, \qquad d(\infty, x) := 1 - \frac{x}{1+|x|}, \quad d(-\infty, x) := 1 + \frac{x}{1+|x|} \ \text{ for all } x \in \mathbb{R}, \]
and let 𝑑(∞, −∞) ≔ 2.
a) Show that (ℝ*, 𝑑) is a metric space.
b) Suppose $\{x_n\}_{n=1}^\infty$ is a sequence of real numbers such that for every 𝑀 ∈ ℝ, there exists an 𝑁 such that $x_n \ge M$ for all 𝑛 ≥ 𝑁. Show that $\lim_{n\to\infty} x_n = \infty$ in (ℝ*, 𝑑).
c) Show that a sequence of real numbers converges to a real number in (ℝ∗ , 𝑑) if and only if it converges in ℝ with the standard metric.
Exercise 7.3.11: Suppose $\{V_n\}_{n=1}^\infty$ is a sequence of open sets in (𝑋, 𝑑) such that $V_{n+1} \supset V_n$ for all 𝑛. Let $\{x_n\}_{n=1}^\infty$ be a sequence such that $x_n \in V_{n+1} \setminus V_n$ and suppose $\{x_n\}_{n=1}^\infty$ converges to 𝑝 ∈ 𝑋. Show that $p \in \partial V$ where $V = \bigcup_{n=1}^\infty V_n$.

Exercise 7.3.12: Prove Proposition 7.3.6.

Exercise 7.3.13: Let (𝑋, 𝑑) be a metric space and $\{x_n\}_{n=1}^\infty$ a sequence in 𝑋. Prove that $\{x_n\}_{n=1}^\infty$ converges to 𝑝 ∈ 𝑋 if and only if every subsequence of $\{x_n\}_{n=1}^\infty$ has a subsequence that converges to 𝑝.

Exercise 7.3.14: Consider ℝⁿ, and let 𝑑 be the standard euclidean metric. Let $d'(x,y) := \sum_{\ell=1}^n |x_\ell - y_\ell|$ and $d''(x,y) := \max\{ |x_1 - y_1|, |x_2 - y_2|, \ldots, |x_n - y_n| \}$.
a) Use Exercise 7.1.6 to show that (ℝⁿ, 𝑑′) and (ℝⁿ, 𝑑′′) are metric spaces.
b) Let $\{x_k\}_{k=1}^\infty$ be a sequence in ℝⁿ and 𝑝 ∈ ℝⁿ. Prove that the following statements are equivalent:
(1) $\{x_k\}_{k=1}^\infty$ converges to 𝑝 in (ℝⁿ, 𝑑).
(2) $\{x_k\}_{k=1}^\infty$ converges to 𝑝 in (ℝⁿ, 𝑑′).
(3) $\{x_k\}_{k=1}^\infty$ converges to 𝑝 in (ℝⁿ, 𝑑′′).
7.4
Completeness and compactness
Note: 2 lectures
7.4.1
Cauchy sequences and completeness
Just like with sequences of real numbers, we define Cauchy sequences.

Definition 7.4.1. Let (𝑋, 𝑑) be a metric space. A sequence $\{x_n\}_{n=1}^\infty$ in 𝑋 is a Cauchy sequence if for every 𝜖 > 0, there exists an 𝑀 ∈ ℕ such that for all 𝑛 ≥ 𝑀 and all 𝑘 ≥ 𝑀, we have $d(x_n, x_k) < \epsilon$.

The definition is again simply a translation of the concept from the real numbers to metric spaces. A sequence of real numbers is Cauchy in the sense of chapter 2 if and only if it is Cauchy in the sense above, provided we equip the real numbers with the standard metric 𝑑(𝑥, 𝑦) = |𝑥 − 𝑦|.

Proposition 7.4.2. A convergent sequence in a metric space is Cauchy.

Proof. Suppose $\{x_n\}_{n=1}^\infty$ converges to 𝑝. Given 𝜖 > 0, there is an 𝑀 such that for all 𝑛 ≥ 𝑀, we have $d(p, x_n) < \epsilon/2$. Hence for all 𝑛, 𝑘 ≥ 𝑀, we have $d(x_n, x_k) \le d(x_n, p) + d(p, x_k) < \epsilon/2 + \epsilon/2 = \epsilon$. □

Definition 7.4.3. We say a metric space (𝑋, 𝑑) is complete or Cauchy-complete if every Cauchy sequence $\{x_n\}_{n=1}^\infty$ in 𝑋 converges to a 𝑝 ∈ 𝑋.

Proposition 7.4.4. The space ℝⁿ with the standard metric is a complete metric space.

For ℝ = ℝ¹, completeness was proved in chapter 2. The proof of completeness in ℝⁿ is a reduction to the one-dimensional case.

Proof. Let $\{x_m\}_{m=1}^\infty$ be a Cauchy sequence in ℝⁿ, where $x_m = (x_{m,1}, x_{m,2}, \ldots, x_{m,n}) \in \mathbb{R}^n$. As the sequence is Cauchy, given 𝜖 > 0, there exists an 𝑀 such that for all 𝑖, 𝑗 ≥ 𝑀,
\[ d(x_i, x_j) < \epsilon . \]
Fix some 𝑘 = 1, 2, …, 𝑛. For 𝑖, 𝑗 ≥ 𝑀,
\[ |x_{i,k} - x_{j,k}| = \sqrt{(x_{i,k} - x_{j,k})^2} \le \sqrt{\sum_{\ell=1}^n (x_{i,\ell} - x_{j,\ell})^2} = d(x_i, x_j) < \epsilon . \]
Hence the sequence $\{x_{m,k}\}_{m=1}^\infty$ is Cauchy. As ℝ is complete, the sequence converges; there exists a $y_k \in \mathbb{R}$ such that $y_k = \lim_{m\to\infty} x_{m,k}$. Write $y = (y_1, y_2, \ldots, y_n) \in \mathbb{R}^n$. By Proposition 7.3.9, $\{x_m\}_{m=1}^\infty$ converges to 𝑦 ∈ ℝⁿ, and hence ℝⁿ is complete. □
In the language of metric spaces, the results on continuity of section §6.2, say that the metric space 𝐶 [𝑎, 𝑏], ℝ of Example 7.1.8 is complete. The proof follows by “unrolling the definitions,” and is left as Exercise 7.4.7. Proposition 7.4.5. The space of continuous functions 𝐶 [𝑎, 𝑏], ℝ with the uniform norm as metric is a complete metric space.
A subset of a complete metric space such as ℝ𝑛 with the subspace metric need not be complete. For example, (0, 1] with the subspace metric is not complete, as {1/𝑛 }∞ is a 𝑛=1 Cauchy sequence in (0, 1] with no limit in (0, 1]. However, a closed subspace of a complete metric space is complete. After all, one way to think of a closed set is that it contains all points reachable from the set via a sequence. The proof is Exercise 7.4.16. Proposition 7.4.6. Suppose (𝑋 , 𝑑) is a complete metric space and 𝐸 ⊂ 𝑋 is closed. Then 𝐸 is a complete metric space with the subspace metric.
7.4.2
Compactness
Definition 7.4.7. Let (𝑋 , 𝑑) be a metric space and 𝐾 ⊂ 𝑋. The set 𝐾 is said to be compact if for every collection of open sets {𝑈𝜆 }𝜆∈𝐼 such that 𝐾⊂
Ø
𝑈𝜆 ,
𝜆∈𝐼
there exists a finite subset {𝜆1 , 𝜆2 , . . . , 𝜆𝑚 } ⊂ 𝐼 such that 𝐾⊂
𝑚 Ø
𝑈𝜆 𝑗 .
𝑗=1
A collection of open sets {𝑈𝜆 }𝜆∈𝐼 as above is said to be an open cover of 𝐾. A way to say that 𝐾 is compact is to say that every open cover of 𝐾 has a finite subcover. Example 7.4.8: Let ℝ be the metric space with the standard metric. The set ℝ is not compact. Proof: For 𝑗 ∈ ℕ , let 𝑈 𝑗 B (−𝑗, 𝑗). Any 𝑥 ∈ ℝ is in some 𝑈 𝑗 (by the Archimedean property), so we have an open cover. Suppose we have a finite subcover ℝ ⊂ 𝑈 𝑗1 ∪ 𝑈 𝑗2 ∪ · · · ∪ 𝑈 𝑗𝑚 , and suppose 𝑗1 < 𝑗2 < · · · < 𝑗𝑚 . Then ℝ ⊂ 𝑈 𝑗𝑚 , but that is a contradiction as 𝑗𝑚 ∈ ℝ on one hand and 𝑗𝑚 ∉ 𝑈 𝑗𝑚 = (−𝑗𝑚 , 𝑗𝑚 ) on the other. The set (0, 1) ⊂ ℝ is also not compact. Proof: Take the sets 𝑈 𝑗 B (1/𝑗 , 1 − 1/𝑗 ) for Ð 𝑗 = 3, 4, 5, . . .. As above (0, 1) = ∞ 𝑗=3 𝑈 𝑗 . And similarly as above, if there exists a finite subcover, then there is one 𝑈 𝑗 such that (0, 1) ⊂ 𝑈 𝑗 , which again leads to a contradiction. The set {0} ⊂ ℝ is compact. Proof: Given an open cover {𝑈𝜆 }𝜆∈𝐼 , there must exist a 𝜆0 such that 0 ∈ 𝑈𝜆0 as it is a cover. But then 𝑈𝜆0 gives a finite subcover. We will prove below that [0, 1], and in fact every closed and bounded interval [𝑎, 𝑏], is compact. Proposition 7.4.9. Let (𝑋 , 𝑑) be a metric space. If 𝐾 ⊂ 𝑋 is compact, then 𝐾 is closed and bounded.
Proof. First, we prove that a compact set is bounded. Fix 𝑝 ∈ 𝑋. We have the open cover
\[ K \subset \bigcup_{n=1}^\infty B(p, n) = X . \]
If 𝐾 is compact, then there exists some set of indices $n_1 < n_2 < \cdots < n_m$ such that
\[ K \subset \bigcup_{j=1}^m B(p, n_j) = B(p, n_m) . \]
As 𝐾 is contained in a ball, 𝐾 is bounded. See the left-hand side of Figure 7.11.

Next, we show that a set that is not closed is not compact. Suppose $\overline{K} \neq K$, that is, there is a point $x \in \overline{K} \setminus K$. If $y \neq x$, then $y \notin C(x, 1/n)$ for 𝑛 ∈ ℕ such that $1/n < d(x,y)$. Furthermore, $x \notin K$, so
\[ K \subset \bigcup_{n=1}^\infty C(x, 1/n)^c . \]
A closed ball is closed, so its complement $C(x, 1/n)^c$ is open, and we have an open cover. If we take any finite collection of indices $n_1 < n_2 < \cdots < n_m$, then
\[ \bigcup_{j=1}^m C(x, 1/n_j)^c = C(x, 1/n_m)^c . \]
As 𝑥 is in the closure of 𝐾, we have $C(x, 1/n_m) \cap K \neq \emptyset$. So there is no finite subcover and 𝐾 is not compact. See the right-hand side of Figure 7.11. □
Figure 7.11: Proving compact set is bounded (left) and closed (right).
We prove below that in a finite-dimensional euclidean space, every closed bounded set is compact. So closed bounded sets of ℝ𝑛 are examples of compact sets. It is not true that in every metric space, closed and bounded is equivalent to compact. A simple example is an incomplete metric space such as (0, 1) with the subspace metric from ℝ. There are
many complete and very useful metric spaces where closed and bounded is not enough to give compactness: 𝐶 [𝑎, 𝑏], ℝ is a complete metric space, but the closed unit ball 𝐶(0, 1) is not compact, see Exercise 7.4.8. However, see also Exercise 7.4.12. Not worrying about the boundedness for a moment, note further the difference between being closed and being compact. Being closed depends on the ambient metric space: The set (0, 1] is not closed in ℝ, but it is closed in the subspace (0, ∞). However, a set 𝐾 is compact in some metric space (𝑋 , 𝑑) if and only if it is compact in the subspace metric on 𝐾. So for a compact set, we do not have to ask what metric space it lives in. On the other hand, every set is always closed in the subspace metric as a subset of itself. See also Exercise 7.4.6. A useful property of compact sets in a metric space is that every sequence in the set has a convergent subsequence converging to a point in the set. Such sets are called sequentially compact. We will prove that in the context of metric spaces, a set is compact if and only if it is sequentially compact. First we prove a lemma. Lemma 7.4.10 (Lebesgue covering lemma* ). Let (𝑋 , 𝑑) be a metric space and 𝐾 ⊂ 𝑋. Suppose every sequence in 𝐾 has a subsequence convergent in 𝐾. Given an open cover {𝑈𝜆 }𝜆∈𝐼 of 𝐾, there exists a 𝛿 > 0 such that for every 𝑥 ∈ 𝐾, there exists a 𝜆 ∈ 𝐼 with 𝐵(𝑥, 𝛿) ⊂ 𝑈𝜆 . Proof. We prove the lemma by contrapositive. If the conclusion is not true, then there is an open cover {𝑈𝜆 }𝜆∈𝐼 of 𝐾 with the following property. For every 𝑛 ∈ ℕ , there exists an 𝑥 𝑛 ∈ 𝐾 such that 𝐵(𝑥 𝑛 , 1/𝑛 ) is not a subset of any 𝑈𝜆 . Take any 𝑥 ∈ 𝐾. There is a 𝜆 ∈ 𝐼 such that 𝑥 ∈ 𝑈𝜆 . As 𝑈𝜆 is open, there is an 𝜖 > 0 such that 𝐵(𝑥, 𝜖) ⊂ 𝑈𝜆 . Take 𝑀 such that 1/𝑀 < 𝜖/2. If 𝑦 ∈ 𝐵(𝑥, 𝜖/2) and 𝑛 ≥ 𝑀, then 𝐵(𝑦, 1/𝑛 ) ⊂ 𝐵(𝑦, 1/𝑀 ) ⊂ 𝐵(𝑦, 𝜖/2) ⊂ 𝐵(𝑥, 𝜖) ⊂ 𝑈𝜆 , where 𝐵(𝑦, 𝜖/2) ⊂ 𝐵(𝑥, 𝜖) follows by triangle inequality. See Figure 7.12. Thus 𝑦 ≠ 𝑥 𝑛 . In other words, for all 𝑛 ≥ 𝑀, 𝑥 𝑛 ∉ 𝐵(𝑥, 𝜖/2). The sequence cannot have a subsequence converging to 𝑥. As 𝑥 ∈ 𝐾 was arbitrary we are done. □ It is important to recognize what the lemma says. It says that if 𝐾 is sequentially compact, then given any cover there is a single 𝛿 > 0. The 𝛿 depends on the cover, but, of course, it does not depend on 𝑥. For example, let 𝐾 B [−10, 10] and let 𝑈𝑛 B (𝑛, 𝑛 + 2) for 𝑛 ∈ ℤ give an open cover. 1 Consider 𝑥 ∈ 𝐾. There is an 𝑛 ∈ ℤ, such that 𝑛 ≤ 𝑥 < 𝑛 + 1. If 𝑛 ≤ 𝑥 < 𝑛 + /2, then 𝐵 𝑥, 1/2 ⊂ 𝑈𝑛−1 . If 𝑛 + 1/2 ≤ 𝑥 < 𝑛 + 1, then 𝐵 𝑥, 1/2 ⊂ 𝑈𝑛 . So 𝛿 = 1/2 will do. The sets 𝑛 𝑛+2 𝑈𝑛′ B 2 , 2 , again give an open cover, but now the largest 𝛿 that works is 1/4. On the other hand, ℕ ⊂ ℝ is not sequentially compact. It is an exercise to find a cover for which no 𝛿 > 0 works. Theorem 7.4.11. Let (𝑋 , 𝑑) be a metric space. Then 𝐾 ⊂ 𝑋 is compact if and only if every sequence in 𝐾 has a subsequence converging to a point in 𝐾. * Named
after the French mathematician Henri Léon Lebesgue (1875–1941). The number 𝛿 is sometimes called the Lebesgue number of the cover.
Figure 7.12: Proof of Lebesgue covering lemma. Note that 𝐵(𝑦, 𝜖/2) ⊂ 𝐵(𝑥, 𝜖) by triangle inequality.
Proof. Claim: Let 𝐾 ⊂ 𝑋 be a subset of 𝑋 and {𝑥 𝑛 }∞ a sequence in 𝐾. Suppose that for each 𝑛=1 𝑥 ∈ 𝐾, there is a ball 𝐵(𝑥, 𝛼 𝑥 ) for some 𝛼 𝑥 > 0 such that 𝑥 𝑛 ∈ 𝐵(𝑥, 𝛼 𝑥 ) for only finitely many 𝑛 ∈ ℕ . Then 𝐾 is not compact. Proof of the claim: Notice Ø 𝐾⊂ 𝐵(𝑥, 𝛼 𝑥 ). 𝑥∈𝐾
Any finite collection of these balls contains at most finitely many elements of {𝑥 𝑛 }∞ , and 𝑛=1 so there must be an 𝑥 𝑛 ∈ 𝐾 not in their union. Hence, 𝐾 is not compact and the claim is proved. So suppose that 𝐾 is compact and {𝑥 𝑛 }∞ is a sequence in 𝐾. Then there exists an 𝑛=1 𝑥 ∈ 𝐾 such that for all 𝛿 > 0, 𝐵(𝑥, 𝛿) contains 𝑥 𝑛 for infinitely many 𝑛 ∈ ℕ . We define the subsequence inductively. The ball 𝐵(𝑥, 1) contains some 𝑥 𝑘 , so let 𝑛1 B 𝑘. Suppose 𝑛 𝑗−1 is defined. There must exist a 𝑘 > 𝑛 𝑗−1 such that 𝑥 𝑘 ∈ 𝐵(𝑥, 1/𝑗 ). Define 𝑛 𝑗 B 𝑘. We now posses a subsequence {𝑥 𝑛 𝑗 }∞ . Since 𝑑(𝑥, 𝑥 𝑛 𝑗 ) < 1/𝑗 , Proposition 7.3.5 says lim 𝑗→∞ 𝑥 𝑛 𝑗 = 𝑥. 𝑗=1 For the other direction, suppose every sequence in 𝐾 has a subsequence converging in 𝐾. Take an open cover {𝑈𝜆 }𝜆∈𝐼 of 𝐾. Using the Lebesgue covering lemma above, find a 𝛿 > 0 such that for every 𝑥 ∈ 𝐾, there is a 𝜆 ∈ 𝐼 with 𝐵(𝑥, 𝛿) ⊂ 𝑈𝜆 . Pick 𝑥 1 ∈ 𝐾 and find 𝜆1 ∈ 𝐼 such that 𝐵(𝑥1 , 𝛿) ⊂ 𝑈𝜆1 . If 𝐾 ⊂ 𝑈𝜆1 , we stop as we have found a finite subcover. Otherwise, there must be a point 𝑥2 ∈ 𝐾 \ 𝑈𝜆1 . Note that 𝑑(𝑥2 , 𝑥1 ) ≥ 𝛿. There must exist some 𝜆2 ∈ 𝐼 such that 𝐵(𝑥2 , 𝛿) ⊂ 𝑈𝜆2 . We work inductively. Suppose 𝜆𝑛−1 is defined. Either 𝑈𝜆1 ∪ 𝑈𝜆2 ∪ · · · ∪ 𝑈𝜆𝑛−1 is a finite cover of 𝐾, in which case we stop, or there must be a point 𝑥 𝑛 ∈ 𝐾 \ 𝑈𝜆1 ∪ 𝑈𝜆2 ∪ · · · ∪ 𝑈𝜆𝑛−1 . Note that 𝑑(𝑥 𝑛 , 𝑥 𝑗 ) ≥ 𝛿 for all 𝑗 = 1, 2, . . . , 𝑛 − 1. Next, there must be some 𝜆𝑛 ∈ 𝐼 such that 𝐵(𝑥 𝑛 , 𝛿) ⊂ 𝑈𝜆𝑛 . See Figure 7.13. Either at some point we obtain a finite subcover of 𝐾, or we obtain an infinite sequence {𝑥 𝑛 }∞ as above. For contradiction, suppose that there is no finite subcover and we 𝑛=1 have the sequence {𝑥 𝑛 }∞ . For all 𝑛 and 𝑘, 𝑛 ≠ 𝑘, we have 𝑑(𝑥 𝑛 , 𝑥 𝑘 ) ≥ 𝛿. So no 𝑛=1 ∞ subsequence of {𝑥 𝑛 } 𝑛=1 is Cauchy. Hence, no subsequence of {𝑥 𝑛 }∞ is convergent, which 𝑛=1 is a contradiction. □
Figure 7.13: Covering 𝐾 by the sets 𝑈𝜆. The points 𝑥₁, 𝑥₂, 𝑥₃, 𝑥₄, the three sets 𝑈𝜆₁, 𝑈𝜆₂, 𝑈𝜆₃, and the first three balls of radius 𝛿 are drawn.
Example 7.4.12: Theorem 2.3.8, the Bolzano–Weierstrass theorem for sequences of real numbers, says that every bounded sequence in ℝ has a convergent subsequence. Therefore, every sequence in a closed interval [𝑎, 𝑏] ⊂ ℝ has a convergent subsequence. The limit is also in [𝑎, 𝑏] as limits preserve non-strict inequalities. Hence a closed bounded interval [𝑎, 𝑏] ⊂ ℝ is (sequentially) compact. Proposition 7.4.13. Let (𝑋 , 𝑑) be a metric space and let 𝐾 ⊂ 𝑋 be compact. If 𝐸 ⊂ 𝐾 is a closed set, then 𝐸 is compact. Because 𝐾 is closed, 𝐸 is closed in 𝐾 if and only if it is closed in 𝑋. See Proposition 7.2.12. Proof. Let {𝑥 𝑛 }∞ be a sequence in 𝐸. It is also a sequence in 𝐾. Therefore, it has a 𝑛=1 convergent subsequence {𝑥 𝑛 𝑗 }∞ that converges to some 𝑥 ∈ 𝐾. As 𝐸 is closed the limit of 𝑗=1 a sequence in 𝐸 is also in 𝐸 and so 𝑥 ∈ 𝐸. Thus 𝐸 must be compact. □
Theorem 7.4.14 (Heine–Borel* ). A closed bounded subset 𝐾 ⊂ ℝ𝑛 is compact.
So subsets of ℝ𝑛 are compact if and only if they are closed and bounded, a condition that is much easier to check. Let us reiterate that the Heine–Borel theorem only holds for ℝ𝑛 and not for metric spaces in general. The theorem does not hold even for subspaces of ℝ𝑛 , just in ℝ𝑛 itself. In general, compact implies closed and bounded, but not vice versa.
Proof. For ℝ = ℝ1 , suppose 𝐾 ⊂ ℝ is closed and bounded. Then 𝐾 ⊂ [𝑎, 𝑏] for some closed and bounded interval, which is compact by Example 7.4.12. As 𝐾 is a closed subset of a compact set, it is compact by Proposition 7.4.13. We carry out the proof for 𝑛 = 2 and leave arbitrary 𝑛 as an exercise. As 𝐾 ⊂ ℝ2 is bounded, there exists a set 𝐵 = [𝑎, 𝑏] × [𝑐, 𝑑] ⊂ ℝ2 such that 𝐾 ⊂ 𝐵. We will show that 𝐵 is compact. Then 𝐾, ∞being a closed subset of a compact 𝐵, is also compact. Let (𝑥 𝑘 , 𝑦 𝑘 ) 𝑘=1 be a sequence in 𝐵. That is, 𝑎 ≤ 𝑥 𝑘 ≤ 𝑏 and 𝑐 ≤ 𝑦 𝑘 ≤ 𝑑 for all 𝑘. A bounded sequence of real numbers has a convergent subsequence so there is a subsequence * Named
after the German mathematician Heinrich Eduard Heine (1821–1881), and the French mathematician Félix Édouard Justin Émile Borel (1871–1956).
$\{x_{k_j}\}_{j=1}^\infty$ that is convergent. The subsequence $\{y_{k_j}\}_{j=1}^\infty$ is also a bounded sequence, so there exists a subsequence $\{y_{k_{j_i}}\}_{i=1}^\infty$ that is convergent. A subsequence of a convergent sequence is still convergent, so $\{x_{k_{j_i}}\}_{i=1}^\infty$ is convergent. Let
\[ x := \lim_{i\to\infty} x_{k_{j_i}} \qquad \text{and} \qquad y := \lim_{i\to\infty} y_{k_{j_i}} . \]
By Proposition 7.3.9, $\bigl( (x_{k_{j_i}}, y_{k_{j_i}}) \bigr)_{i=1}^\infty$ converges to (𝑥, 𝑦). Furthermore, as $a \le x_k \le b$ and $c \le y_k \le d$ for all 𝑘, we know that (𝑥, 𝑦) ∈ 𝐵. □
Example 7.4.15: The discrete metric provides interesting counterexamples again. Let (𝑋 , 𝑑) be a metric space with the discrete metric, that is, 𝑑(𝑥, 𝑦) = 1 if 𝑥 ≠ 𝑦. Suppose 𝑋 is an infinite set. Then (i) (𝑋 , 𝑑) is a complete metric space.
(ii) Any subset 𝐾 ⊂ 𝑋 is closed and bounded.
(iii) A subset 𝐾 ⊂ 𝑋 is compact if and only if it is a finite set.
(iv) The conclusion of the Lebesgue covering lemma is always satisfied, e.g. with 𝛿 = 1/2, even for noncompact 𝐾 ⊂ 𝑋.
The proofs of the statements above are either trivial or are relegated to the exercises below. Remark 7.4.16. A subtle issue with Cauchy sequences, completeness, compactness, and convergence is that compactness and convergence only depend on the topology, that is, on which sets are the open sets. On the other hand, Cauchy sequences and completeness depend on the actual metric. See Exercise 7.4.19.
7.4.3
Exercises
Exercise 7.4.1: Let (𝑋 , 𝑑) be a metric space and 𝐴 a finite subset of 𝑋. Show that 𝐴 is compact. Exercise 7.4.2: Let 𝐴 B {1/𝑛 : 𝑛 ∈ ℕ } ⊂ ℝ.
a) Show that 𝐴 is not compact directly using the definition.
b) Show that 𝐴 ∪ {0} is compact directly using the definition. Exercise 7.4.3: Let (𝑋 , 𝑑) be a metric space with the discrete metric. a) Prove that 𝑋 is complete.
b) Prove that 𝑋 is compact if and only if 𝑋 is a finite set. Exercise 7.4.4: a) Show that the union of finitely many compact sets is a compact set. b) Find an example where the union of infinitely many compact sets is not compact. Exercise 7.4.5: Prove Theorem 7.4.14 for arbitrary dimension. Hint: The trick is to use the correct notation.
Exercise 7.4.6: Show that a compact set 𝐾 (in any metric space) is itself a complete metric space (using the subspace metric). Exercise 7.4.7: Let 𝐶 [𝑎, 𝑏], ℝ be the metric space as in Example 7.1.8. Show that 𝐶 [𝑎, 𝑏], ℝ is a complete metric space.
Exercise 7.4.8 (Challenging): Let 𝐶 [0, 1], ℝ be the metric space of Example 7.1.8. Let 0 denote the zero function. Then show that the closed ball 𝐶(0, 1) is not compact (even though it is closed and bounded). Hints: Construct a sequence of distinct continuous functions { 𝑓𝑛 }∞ such that 𝑑( 𝑓𝑛 , 0) = 1 and 𝑑( 𝑓𝑛 , 𝑓 𝑘 ) = 1 for 𝑛=1 all 𝑛 ≠ 𝑘. Show that the set { 𝑓𝑛 : 𝑛 ∈ ℕ } ⊂ 𝐶(0, 1) is closed but not compact. See chapter 6 for inspiration.
Exercise 7.4.9 (Challenging): Show that there exists a metric on ℝ that makes ℝ into a compact set.
Exercise 7.4.10: Suppose (𝑋 , 𝑑) is complete and suppose we have a countably infinite collection of nonempty Ñ compact sets 𝐸1 ⊃ 𝐸2 ⊃ 𝐸3 ⊃ · · · . Prove ∞ 𝑗=1 𝐸 𝑗 ≠ ∅. Exercise 7.4.11 (Challenging): Let 𝐶 [0, 1], ℝ be the metric space of Example 7.1.8. Let 𝐾 be the set of 𝑓 ∈ 𝐶 [0, 1], ℝ such that 𝑓 is equal to a quadratic polynomial, i.e. 𝑓 (𝑥) = 𝑎 + 𝑏𝑥 + 𝑐𝑥 2 , and such that | 𝑓 (𝑥)| ≤ 1 for all 𝑥 ∈ [0, 1], that is 𝑓 ∈ 𝐶(0, 1). Show that 𝐾 is compact.
Exercise 7.4.12 (Challenging): Let (𝑋 , 𝑑) be a complete metric space. Show that 𝐾 ⊂ 𝑋 is compact if and only if 𝐾 is closed and such that for every 𝜖 > 0 there exists a finite set of points 𝑥 1 , 𝑥2 , . . . , 𝑥 𝑛 with Ð 𝐾 ⊂ 𝑛𝑗=1 𝐵(𝑥 𝑗 , 𝜖). Note: Such a set 𝐾 is said to be totally bounded, so in a complete metric space a set is compact if and only if it is closed and totally bounded. Exercise 7.4.13: Take ℕ ⊂ ℝ using the standard metric. Find an open cover of ℕ such that the conclusion of the Lebesgue covering lemma does not hold. Exercise 7.4.14: Prove the general Bolzano–Weierstrass theorem: Any bounded sequence {𝑥 𝑘 }∞ in ℝ𝑛 has 𝑘=1 a convergent subsequence. Exercise 7.4.15: Let 𝑋 be a metric space and 𝐶 ⊂ P(𝑋) the set of nonempty compact subsets of 𝑋. Using the Hausdorff metric from Exercise 7.1.8, show that (𝐶, 𝑑𝐻 ) is a metric space. That is, show that if 𝐿 and 𝐾 are nonempty compact subsets, then 𝑑𝐻 (𝐿, 𝐾) = 0 if and only if 𝐿 = 𝐾. Exercise 7.4.16: Prove Proposition 7.4.6. That is, let (𝑋 , 𝑑) be a complete metric space and 𝐸 ⊂ 𝑋 a closed set. Show that 𝐸 with the subspace metric is a complete metric space. Exercise 7.4.17: Let (𝑋 , 𝑑) be an incomplete metric space. Show that there exists a closed and bounded set 𝐸 ⊂ 𝑋 that is not compact.
Exercise 7.4.18: Let (𝑋 , 𝑑) be a metric space and 𝐾 ⊂ 𝑋. Prove that 𝐾 is compact as a subset of (𝑋 , 𝑑) if and only if 𝐾 is compact as a subset of itself with the subspace metric. Exercise 7.4.19: two metrics on ℝ. Let 𝑑(𝑥, 𝑦) B |𝑥 − 𝑦| be the standard metric, and let 𝑥 Consider 𝑦 ′ 𝑑 (𝑥, 𝑦) B 1+|𝑥| − 1+|𝑦| . a) Show that (ℝ , 𝑑′) is a metric space (if you have done Exercise 7.3.10, the computation is the same).
b) Show that the topology is the same, that is, a set is open in (ℝ , 𝑑) if and only if it is open in (ℝ , 𝑑′). c) Show that a set is compact in (ℝ , 𝑑) if and only if it is compact in (ℝ , 𝑑′).
d) Show that a sequence converges in (ℝ , 𝑑) if and only if it converges in (ℝ , 𝑑′).
e) Find a sequence of real numbers that is Cauchy in (ℝ , 𝑑′) but not Cauchy in (ℝ , 𝑑). f) While (ℝ , 𝑑) is complete, show that (ℝ , 𝑑′) is not complete.
Exercise 7.4.20: Let (𝑋 , 𝑑) be a complete metric space. We say a set 𝑆 ⊂ 𝑋 is relatively compact if the closure 𝑆 is compact. Prove that 𝑆 ⊂ 𝑋 is relatively compact if and only if given any sequence {𝑥 𝑛 }∞ in 𝑆, 𝑛=1 there exists a subsequence {𝑥 𝑛 𝑘 }∞ that converges (in 𝑋). 𝑘=1
7.5
Continuous functions
Note: 1.5–2 lectures
7.5.1
Continuity
Definition 7.5.1. Let (𝑋 , 𝑑𝑋 ) and (𝑌, 𝑑𝑌 ) be metric spaces and 𝑐 ∈ 𝑋. Then 𝑓 : 𝑋 → 𝑌 is continuous at 𝑐 if for every 𝜖 > 0 there is a 𝛿 > 0 such that whenever 𝑥 ∈ 𝑋 and 𝑑𝑋 (𝑥, 𝑐) < 𝛿, then 𝑑𝑌 𝑓 (𝑥), 𝑓 (𝑐) < 𝜖.
When 𝑓 : 𝑋 → 𝑌 is continuous at all 𝑐 ∈ 𝑋, we simply say that 𝑓 is a continuous function.
The definition agrees with the definition from chapter 3 when 𝑓 is a real-valued function on the real line—as long as we take the standard metric on ℝ, of course. Proposition 7.5.2. Let (𝑋 , 𝑑𝑋 ) and (𝑌, 𝑑𝑌 ) be metric spaces. Then 𝑓 : 𝑋 → 𝑌 is continuous ∞at ∞ 𝑐 ∈ 𝑋 if and only if for every sequence {𝑥 𝑛 } 𝑛=1 in 𝑋 converging to 𝑐, the sequence 𝑓 (𝑥 𝑛 ) 𝑛=1 converges to 𝑓 (𝑐).
Proof. Suppose 𝑓 is continuous at 𝑐. Let {𝑥 𝑛 }∞ be a sequence in 𝑋 converging to 𝑐. Given 𝑛=1 𝜖 > 0, there is a 𝛿 > 0 such that 𝑑𝑋 (𝑥, 𝑐) < 𝛿 implies 𝑑𝑌 𝑓 (𝑥), 𝑓 (𝑐) < 𝜖. So take 𝑀 such ∞ that for all 𝑛 ≥ 𝑀, we have 𝑑𝑋 (𝑥 𝑛 , 𝑐) < 𝛿, then 𝑑𝑌 𝑓 (𝑥 𝑛 ), 𝑓 (𝑐) < 𝜖. Hence 𝑓 (𝑥 𝑛 ) 𝑛=1 converges to 𝑓 (𝑐). On the other hand, suppose 𝑓 is not continuous at 𝑐. Then there exists an 𝜖 > 0, such that for every 𝑛 ∈ ℕ there exists an 𝑥 𝑛 ∈ 𝑋, with 𝑑𝑋 (𝑥 𝑛 , 𝑐) < 1/𝑛 such that 𝑑𝑌 𝑓 (𝑥 𝑛 ), 𝑓 (𝑐) ≥ 𝜖. ∞ Then {𝑥 𝑛 }∞ converges to 𝑐, but 𝑓 (𝑥 ) does not converge to 𝑓 (𝑐). □ 𝑛 𝑛=1 𝑛=1 Example 7.5.3: Suppose 𝑓 : ℝ2 → ℝ is a polynomial. That is, 𝑓 (𝑥, 𝑦) =
\[ \sum_{j=0}^{d} \sum_{k=0}^{d-j} a_{jk}\, x^j y^k = a_{00} + a_{10}x + a_{01}y + a_{20}x^2 + a_{11}xy + a_{02}y^2 + \cdots + a_{0d}y^d , \]
for some 𝑑 ∈ ℕ (the degree) and $a_{jk} \in \mathbb{R}$. We claim 𝑓 is continuous. Let $\bigl( (x_n, y_n) \bigr)_{n=1}^\infty$ be a sequence in ℝ² that converges to (𝑥, 𝑦) ∈ ℝ². We proved that this means $\lim_{n\to\infty} x_n = x$ and $\lim_{n\to\infty} y_n = y$. By Proposition 2.2.5,
\[ \lim_{n\to\infty} f(x_n, y_n) = \lim_{n\to\infty} \sum_{j=0}^{d} \sum_{k=0}^{d-j} a_{jk}\, x_n^j y_n^k = \sum_{j=0}^{d} \sum_{k=0}^{d-j} a_{jk}\, x^j y^k = f(x,y) . \]
So 𝑓 is continuous at (𝑥, 𝑦), and as (𝑥, 𝑦) was arbitrary, 𝑓 is continuous everywhere. Similarly, a polynomial in 𝑛 variables is continuous.

Be careful about taking limits separately. Consider 𝑓 : ℝ² → ℝ defined by $f(x,y) := \frac{xy}{x^2+y^2}$ outside the origin and 𝑓(0, 0) ≔ 0. See Figure 7.14. In Exercise 7.5.2, you are asked to prove that 𝑓 is not continuous at the origin. However, for every fixed 𝑦, the function 𝑔(𝑥) ≔ 𝑓(𝑥, 𝑦) is continuous, and for every fixed 𝑥, the function ℎ(𝑦) ≔ 𝑓(𝑥, 𝑦) is continuous.
Figure 7.14: Graph of $\frac{xy}{x^2+y^2}$.
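A quick numerical check (a sketch of ours, not from the book) makes the failure of joint continuity at the origin visible: sampling 𝑓 along two different lines through the origin produces two different limiting values, even though 𝑓 is continuous in each variable separately.

```python
def f(x, y):
    """The function xy / (x^2 + y^2) of the example, extended by 0 at the origin."""
    return 0.0 if (x, y) == (0.0, 0.0) else x * y / (x**2 + y**2)

# Along the axis y = 0 the values are identically 0, while along the line y = x
# they are identically 1/2, so f has no limit at the origin.
for t in (0.1, 0.01, 0.001):
    print(f(t, 0.0), f(t, t))  # prints 0.0 and 0.5 every time
```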
Example 7.5.4: Let 𝑋 be a metric space and 𝑓 : 𝑋 → ℂ a complex-valued function. Write 𝑓 (𝑝) = 𝑔(𝑝) + 𝑖 ℎ(𝑝), where 𝑔 : 𝑋 → ℝ and ℎ : 𝑋 → ℝ are the real and imaginary parts of 𝑓 . Then 𝑓 is continuous at 𝑐 ∈ 𝑋 if and only if its real and imaginary parts are continuous at 𝑐. ∞ This fact follows because 𝑓 (𝑝 𝑛 ) = 𝑔(𝑝 𝑛 ) + 𝑖 ℎ(𝑝 𝑛 ) 𝑛=1 converges to 𝑓 (𝑝) = 𝑔(𝑝) + 𝑖 ℎ(𝑝) if ∞ ∞ and only if 𝑔(𝑝 𝑛 ) 𝑛=1 converges to 𝑔(𝑝) and ℎ(𝑝 𝑛 ) 𝑛=1 converges to ℎ(𝑝).
7.5.2
Compactness and continuity
Continuous maps do not map closed sets to closed sets. For example, 𝑓 : (0, 1) → ℝ defined by 𝑓 (𝑥) B 𝑥 takes the set (0, 1), which is closed in (0, 1), to the set (0, 1), which is not closed in ℝ. On the other hand, continuous maps do preserve compact sets. Lemma 7.5.5. Let (𝑋 , 𝑑𝑋 ) and (𝑌, 𝑑𝑌 ) be metric spaces and 𝑓 : 𝑋 → 𝑌 a continuous function. If 𝐾 ⊂ 𝑋 is a compact set, then 𝑓 (𝐾) is a compact set.
∞
Proof. A sequence in 𝑓 (𝐾) can be written as 𝑓 (𝑥 𝑛 ) 𝑛=1 , where {𝑥 𝑛 }∞ is a sequence in 𝐾. 𝑛=1 ∞ The set 𝐾 is compact and therefore there is a subsequence {𝑥 𝑛 𝑗 } 𝑗=1 that converges to some 𝑥 ∈ 𝐾. By continuity, lim 𝑓 (𝑥 𝑛 𝑗 ) = 𝑓 (𝑥) ∈ 𝑓 (𝐾).
𝑗→∞
So every sequence in 𝑓 (𝐾) has a subsequence convergent to a point in 𝑓 (𝐾), and 𝑓 (𝐾) is compact by Theorem 7.4.11. □ As before, 𝑓 : 𝑋 → ℝ achieves an absolute minimum at 𝑐 ∈ 𝑋 if 𝑓 (𝑥) ≥ 𝑓 (𝑐)
for all 𝑥 ∈ 𝑋.
On the other hand, 𝑓 achieves an absolute maximum at 𝑐 ∈ 𝑋 if 𝑓 (𝑥) ≤ 𝑓 (𝑐)
for all 𝑥 ∈ 𝑋.
Theorem 7.5.6. Let (𝑋 , 𝑑) be a nonempty compact metric space and let 𝑓 : 𝑋 → ℝ be continuous. Then 𝑓 is bounded and in fact 𝑓 achieves an absolute minimum and an absolute maximum on 𝑋. Proof. As 𝑋 is compact and 𝑓 is continuous, 𝑓 (𝑋) ⊂ ℝ is compact. Hence 𝑓 (𝑋) is closed and bounded. In particular, sup 𝑓 (𝑋) ∈ 𝑓 (𝑋) and inf 𝑓 (𝑋) ∈ 𝑓 (𝑋), because both the sup and the inf can be achieved by sequences in 𝑓 (𝑋) and 𝑓 (𝑋) is closed. Therefore, there is some 𝑥 ∈ 𝑋 such that 𝑓 (𝑥) = sup 𝑓 (𝑋) and some 𝑦 ∈ 𝑋 such that 𝑓 (𝑦) = inf 𝑓 (𝑋). □
7.5.3
Continuity and topology
Let us see how to define continuity in terms of the topology, that is, the open sets. We have already seen that topology determines which sequences converge, and so it is no wonder that the topology also determines continuity of functions. Lemma 7.5.7. Let (𝑋 , 𝑑𝑋 ) and (𝑌, 𝑑𝑌 ) be metric spaces. A function 𝑓 : 𝑋 → 𝑌 is continuous at 𝑐 ∈ 𝑋 if and only if for every open neighborhood 𝑈 of 𝑓 (𝑐) in 𝑌, the set 𝑓 −1 (𝑈) contains an open neighborhood of 𝑐 in 𝑋. See Figure 7.15.
Figure 7.15: For every neighborhood 𝑈 of 𝑓 (𝑐), the set 𝑓 −1 (𝑈) contains an open neighborhood 𝑊 of 𝑐.
Proof. First suppose that 𝑓 is continuous at 𝑐. Let 𝑈 be an open neighborhood of 𝑓 (𝑐) in 𝑌, then 𝐵𝑌 𝑓 (𝑐), 𝜖 ⊂ 𝑈 for some 𝜖 > 0. By continuity of 𝑓 , there exists a 𝛿 > 0 such that whenever 𝑥 is such that 𝑑𝑋 (𝑥, 𝑐) < 𝛿, then 𝑑𝑌 𝑓 (𝑥), 𝑓 (𝑐) < 𝜖. In other words, 𝐵𝑋 (𝑐, 𝛿) ⊂ 𝑓 −1 𝐵𝑌 𝑓 (𝑐), 𝜖
⊂ 𝑓 −1 (𝑈),
and 𝐵𝑋 (𝑐, 𝛿) is an open neighborhood of 𝑐. For the other direction, let 𝜖 > 0 be given. If 𝑓 −1 𝐵𝑌 𝑓 (𝑐), 𝜖 contains an open neighborhood 𝑊 of 𝑐, it contains a ball. That is, there is some 𝛿 > 0 such that 𝐵𝑋 (𝑐, 𝛿) ⊂ 𝑊 ⊂ 𝑓 −1 𝐵𝑌 𝑓 (𝑐), 𝜖 .
That means precisely that if 𝑑𝑋 (𝑥, 𝑐) < 𝛿, then 𝑑𝑌 𝑓 (𝑥), 𝑓 (𝑐) < 𝜖. So 𝑓 is continuous at 𝑐. □
Theorem 7.5.8. Let (𝑋 , 𝑑𝑋 ) and (𝑌, 𝑑𝑌 ) be metric spaces. A function 𝑓 : 𝑋 → 𝑌 is continuous if and only if for every open 𝑈 ⊂ 𝑌, 𝑓 −1 (𝑈) is open in 𝑋. The proof follows from Lemma 7.5.7 and is left as an exercise.
Example 7.5.9: Let 𝑓 : 𝑋 → 𝑌 be a continuous function. Theorem 7.5.8 tells us that if 𝐸 ⊂ 𝑌 is closed, then 𝑓 −1 (𝐸) = 𝑋 \ 𝑓 −1 (𝐸 𝑐 ) is also closed. Therefore, if we have a continuous −1 function 𝑓 : 𝑋 → ℝ, then the zero set of 𝑓 , that is, 𝑓 (0) = 𝑥 ∈ 𝑋 : 𝑓 (𝑥) = 0 , is closed. We have just proved the most basic result in algebraic geometry, the study of zero sets of polynomials: The zero set of a polynomial is closed. Similarly, the set where 𝑓 is nonnegative, 𝑓 −1 [0, ∞) = 𝑥 ∈ 𝑋 : 𝑓 (𝑥) ≥ 0 , is closed. On the other hand, the set where 𝑓 is positive, 𝑓 −1 (0, ∞) = 𝑥 ∈ 𝑋 : 𝑓 (𝑥) > 0 , is open.
7.5.4
Uniform continuity
As for continuous functions on the real line, in the definition of continuity it is sometimes convenient to be able to pick one 𝛿 for all points. Definition 7.5.10. Let (𝑋 , 𝑑𝑋 ) and (𝑌, 𝑑𝑌 ) be metric spaces. Then 𝑓 : 𝑋 → 𝑌 is uniformly continuous if for every𝜖 > 0 there is a 𝛿 > 0 such that whenever 𝑝, 𝑞 ∈ 𝑋 and 𝑑𝑋 (𝑝, 𝑞) < 𝛿, we have 𝑑𝑌 𝑓 (𝑝), 𝑓 (𝑞) < 𝜖. A uniformly continuous function is continuous, but not necessarily vice versa as we have seen. Theorem 7.5.11. Let (𝑋 , 𝑑𝑋 ) and (𝑌, 𝑑𝑌 ) be metric spaces. Suppose 𝑓 : 𝑋 → 𝑌 is continuous and 𝑋 is compact. Then 𝑓 is uniformly continuous. Proof. Let 𝜖 > 0 be given. For each 𝑐 ∈ 𝑋, pick 𝛿 𝑐 > 0 such that 𝑑𝑌 𝑓 (𝑥), 𝑓 (𝑐) < 𝜖/2 whenever 𝑥 ∈ 𝐵(𝑐, 𝛿 𝑐 ). The balls 𝐵(𝑐, 𝛿 𝑐 ) cover 𝑋, and the space 𝑋 is compact. Apply the Lebesgue covering lemma to obtain a 𝛿 > 0 such that for every 𝑥 ∈ 𝑋, there is a 𝑐 ∈ 𝑋 for which 𝐵(𝑥, 𝛿) ⊂ 𝐵(𝑐, 𝛿 𝑐 ). Suppose 𝑝, 𝑞 ∈ 𝑋 where 𝑑𝑋 (𝑝, 𝑞) < 𝛿. Find a 𝑐 ∈ 𝑋 such that 𝐵(𝑝, 𝛿) ⊂ 𝐵(𝑐, 𝛿 𝑐 ). Then 𝑞 ∈ 𝐵(𝑐, 𝛿 𝑐 ). By the triangle inequality and the definition of 𝛿 𝑐 ,
𝑑𝑌 𝑓 (𝑝), 𝑓 (𝑞) ≤ 𝑑𝑌 𝑓 (𝑝), 𝑓 (𝑐) + 𝑑𝑌 𝑓 (𝑐), 𝑓 (𝑞) < 𝜖/2 + 𝜖/2 = 𝜖.
□
As an application of uniform continuity, we prove a useful criterion for continuity of functions defined by integrals. Let 𝑓(𝑥, 𝑦) be a function of two variables and define
\[ g(y) := \int_a^b f(x,y)\,dx . \]
The question is whether 𝑔 is continuous. We are really asking when two limiting operations commute, which is not always possible, so some extra hypothesis is necessary. A useful sufficient (but not necessary) condition is that 𝑓 is continuous on a closed rectangle.
Proposition 7.5.12. If 𝑓 : [𝑎, 𝑏] × [𝑐, 𝑑] → ℝ is continuous, then 𝑔 : [𝑐, 𝑑] → ℝ defined by
\[ g(y) := \int_a^b f(x,y)\,dx \]
is continuous.

Proof. Fix 𝑦 ∈ [𝑐, 𝑑] and let 𝜖 > 0 be given. As 𝑓 is continuous on [𝑎, 𝑏] × [𝑐, 𝑑], which is compact, 𝑓 is uniformly continuous. In particular, there exists a 𝛿 > 0 such that whenever 𝑧 ∈ [𝑐, 𝑑] and |𝑧 − 𝑦| < 𝛿, we have $|f(x,z) - f(x,y)| < \frac{\epsilon}{b-a}$ for all 𝑥 ∈ [𝑎, 𝑏]. So suppose |𝑧 − 𝑦| < 𝛿. Then
\[ |g(z) - g(y)| = \left| \int_a^b f(x,z)\,dx - \int_a^b f(x,y)\,dx \right| = \left| \int_a^b \bigl( f(x,z) - f(x,y) \bigr)\,dx \right| \le (b-a)\frac{\epsilon}{b-a} = \epsilon . \qquad \Box \]

In applications, if we are interested in continuity at $y_0$, we just need to apply the proposition in $[a,b] \times [y_0 - \epsilon, y_0 + \epsilon]$ for some small 𝜖 > 0. For example, if 𝑓 is continuous on [𝑎, 𝑏] × ℝ, then 𝑔 is continuous on ℝ.

Example 7.5.13: Useful examples of uniformly continuous functions are again the so-called Lipschitz continuous functions. That is, if (𝑋, 𝑑𝑋) and (𝑌, 𝑑𝑌) are metric spaces, then 𝑓 : 𝑋 → 𝑌 is called Lipschitz or 𝐾-Lipschitz if there exists a 𝐾 ∈ ℝ such that
\[ d_Y\bigl( f(p), f(q) \bigr) \le K\, d_X(p,q) \]
for all 𝑝, 𝑞 ∈ 𝑋.
A Lipschitz function is uniformly continuous: Take 𝛿 = 𝜖/𝐾 . A function can be uniformly √ continuous but not Lipschitz, as we already saw: 𝑥 on [0, 1] is uniformly continuous but not Lipschitz. It is worth mentioning that, if a function is Lipschitz, it tends to be easiest to simply show it is Lipschitz even if we are only interested in knowing continuity.
7.5.5
Cluster points and limits of functions
While we have not started the discussion of continuity with them and we will not need them until volume II, let us also translate the idea of a limit of a function from the real line to metric spaces. Again we need to start with cluster points. Definition 7.5.14. Let (𝑋 , 𝑑) be a metric space and 𝑆 ⊂ 𝑋. A point 𝑝 ∈ 𝑋 is called a cluster point of 𝑆 if for every 𝜖 > 0, the set 𝐵(𝑝, 𝜖) ∩ 𝑆 \ {𝑝} is not empty. It is not enough that 𝑝 is in the closure of 𝑆, it must be in the closure of 𝑆 \ {𝑝} (exercise). So, 𝑝 is a cluster point if and only if there exists a sequence in 𝑆 \ {𝑝} that converges to 𝑝.
Definition 7.5.15. Let (𝑋 , 𝑑𝑋 ), (𝑌, 𝑑𝑌 ) be metric spaces, 𝑆 ⊂ 𝑋, 𝑝 ∈ 𝑋 a cluster point of 𝑆, and 𝑓 : 𝑆 → 𝑌 a function. Suppose there exists an 𝐿 ∈ 𝑌 and for every 𝜖 > 0, there exists a 𝛿 > 0 such that whenever 𝑥 ∈ 𝑆 \ {𝑝} and 𝑑𝑋 (𝑥, 𝑝) < 𝛿, then 𝑑𝑌 𝑓 (𝑥), 𝐿 < 𝜖.
Then we say 𝑓 (𝑥) converges to 𝐿 as 𝑥 goes to 𝑝, and 𝐿 is a limit of 𝑓 (𝑥) as 𝑥 goes to 𝑝. If 𝐿 is unique, we write lim 𝑓 (𝑥) B 𝐿. 𝑥→𝑝
If 𝑓 (𝑥) does not converge as 𝑥 goes to 𝑝, we say 𝑓 diverges at 𝑝. As usual, we prove that the limit, if it exists, is unique. The proof is a direct translation of the proof from chapter 3, so we leave it as an exercise. Proposition 7.5.16. Let (𝑋 , 𝑑𝑋 ) and (𝑌, 𝑑𝑌 ) be metric spaces, 𝑆 ⊂ 𝑋, 𝑝 ∈ 𝑋 a cluster point of 𝑆, and let 𝑓 : 𝑆 → 𝑌 be a function such that 𝑓 (𝑥) converges as 𝑥 goes to 𝑝. Then the limit of 𝑓 (𝑥) as 𝑥 goes to 𝑝 is unique. In any metric space, just like in ℝ, continuous limits may be replaced by sequential limits. The proof is again a direct translation of the proof from chapter 3, and we leave it as an exercise. The upshot is that we really only need to prove things for sequential limits. Lemma 7.5.17. Let (𝑋 , 𝑑𝑋 ) and (𝑌, 𝑑𝑌 ) be metric spaces, 𝑆 ⊂ 𝑋, 𝑝 ∈ 𝑋 a cluster point of 𝑆, and let 𝑓 : 𝑆 → 𝑌 be a function. Then 𝑓 (𝑥) converges to 𝐿 ∈ 𝑌 as 𝑥 goes to 𝑝 if and only if for every sequence {𝑥 𝑛 }∞ in 𝑆 \ {𝑝} 𝑛=1 ∞ such that lim𝑛→∞ 𝑥 𝑛 = 𝑝, the sequence 𝑓 (𝑥 𝑛 ) 𝑛=1 converges to 𝐿.
By applying Proposition 7.5.2 or the definition directly we find (exercise) as in chapter 3, that for cluster points 𝑝 of 𝑆 ⊂ 𝑋, the function 𝑓 : 𝑆 → 𝑌 is continuous at 𝑝 if and only if lim 𝑓 (𝑥) = 𝑓 (𝑝).
𝑥→𝑝
7.5.6
Exercises
Exercise 7.5.1: Consider ℕ ⊂ ℝ with the standard metric. Let (𝑋 , 𝑑) be a metric space and 𝑓 : 𝑋 → ℕ a continuous function. a) Prove that if 𝑋 is connected, then 𝑓 is constant (the range of 𝑓 is a single value). b) Find an example where 𝑋 is disconnected and 𝑓 is not constant. Exercise 7.5.2: Define 𝑓 : ℝ2 → ℝ by 𝑓 (0, 0) B 0, and 𝑓 (𝑥, 𝑦) B
$\frac{xy}{x^2+y^2}$ if (𝑥, 𝑦) ≠ (0, 0). See Figure 7.14.
a) Show that for every fixed 𝑥, the function that takes 𝑦 to 𝑓 (𝑥, 𝑦) is continuous. Similarly for every fixed 𝑦, the function that takes 𝑥 to 𝑓 (𝑥, 𝑦) is continuous.
b) Show that 𝑓 is not continuous.
Exercise 7.5.3: Suppose (𝑋 , 𝑑𝑋 ), (𝑌, 𝑑𝑌 ) are metric spaces and 𝑓 : 𝑋 → 𝑌 is continuous. Let 𝐴 ⊂ 𝑋. a) Show that 𝑓 (𝐴) ⊂ 𝑓 (𝐴).
b) Show that the subset can be proper. Exercise 7.5.4: Prove Theorem 7.5.8. Hint: Use Lemma 7.5.7. Exercise 7.5.5: Suppose 𝑓 : 𝑋 → 𝑌 is continuous for metric spaces (𝑋 , 𝑑𝑋 ) and (𝑌, 𝑑𝑌 ). Show that if 𝑋 is connected, then 𝑓 (𝑋) is connected. Exercise 7.5.6: Prove the following version of the intermediate value theorem. Let (𝑋 , 𝑑) be a connected metric space and 𝑓 : 𝑋 → ℝ a continuous function. Suppose 𝑥 0 , 𝑥1 ∈ 𝑋 and 𝑦 ∈ ℝ are such that 𝑓 (𝑥 0 ) < 𝑦 < 𝑓 (𝑥1 ). Then prove that there exists a 𝑧 ∈ 𝑋 such that 𝑓 (𝑧) = 𝑦. Hint: See Exercise 7.5.5. Exercise 7.5.7: A continuous 𝑓 : 𝑋 → 𝑌 between metric spaces (𝑋 , 𝑑𝑋 ) and (𝑌, 𝑑𝑌 ) is said to be proper if for every compact set 𝐾 ⊂ 𝑌, the set 𝑓 −1 (𝐾) is compact. Supposea continuous 𝑓 : (0, 1) → (0, 1) is ∞ proper and {𝑥 𝑛 }∞ is a sequence in (0, 1) converging to 0. Show that 𝑓 (𝑥 ) has no subsequence that 𝑛 𝑛=1 𝑛=1 converges in (0, 1). Exercise 7.5.8: Let (𝑋 , 𝑑𝑋 ) and (𝑌, 𝑑𝑌 ) be metric spaces and 𝑓 : 𝑋 → 𝑌 be a one-to-one and onto continuous function. Suppose 𝑋 is compact. Prove that the inverse 𝑓 −1 : 𝑌 → 𝑋 is continuous. Exercise 7.5.9: Take the metric space of continuous functions 𝐶 [0, 1], ℝ . Let 𝑘 : [0, 1] × [0, 1] → ℝ be a continuous function. Given 𝑓 ∈ 𝐶 [0, 1], ℝ define
\[ \varphi_f(x) := \int_0^1 k(x,y)\, f(y)\,dy . \]
a) Show that 𝑇( 𝑓 ) B 𝜑 𝑓 defines a function 𝑇 : 𝐶 [0, 1], ℝ → 𝐶 [0, 1], ℝ .
b) Show that 𝑇 is continuous.
Exercise 7.5.10: Let (𝑋 , 𝑑) be a metric space.
a) If 𝑝 ∈ 𝑋, show that 𝑓 : 𝑋 → ℝ defined by 𝑓 (𝑥) B 𝑑(𝑥, 𝑝) is continuous.
b) Define a metric on 𝑋 × 𝑋 as in Exercise 7.1.6 part b, and show that 𝑔 : 𝑋 × 𝑋 → ℝ defined by 𝑔(𝑥, 𝑦) B 𝑑(𝑥, 𝑦) is continuous. c) Show that if 𝐾 1 and 𝐾 2 are compact subsets of 𝑋, then there exists a 𝑝 ∈ 𝐾1 and 𝑞 ∈ 𝐾 2 such that 𝑑(𝑝, 𝑞) is minimal, that is, 𝑑(𝑝, 𝑞) = inf{𝑑(𝑥, 𝑦) : 𝑥 ∈ 𝐾1 , 𝑦 ∈ 𝐾 2 }. Exercise 7.5.11: Let (𝑋 , 𝑑) be a compact metric space, let 𝐶(𝑋 , ℝ) be the set of real-valued continuous functions. Define 𝑑( 𝑓 , 𝑔) B ∥ 𝑓 − 𝑔 ∥ 𝑋 = sup | 𝑓 (𝑥) − 𝑔(𝑥)| . 𝑥∈𝑋
a) Show that 𝑑 makes 𝐶(𝑋 , ℝ) into a metric space.
b) Show that for every 𝑥 ∈ 𝑋, the evaluation function 𝐸 𝑥 : 𝐶(𝑋 , ℝ) → ℝ defined by 𝐸 𝑥 ( 𝑓 ) B 𝑓 (𝑥) is a continuous function.
Exercise 7.5.12: Let 𝐶 [𝑎, 𝑏], ℝ be the set of continuous functions and 𝐶 1 [𝑎, 𝑏], ℝ the set of once continuously differentiable functions on [𝑎, 𝑏]. Define
\[ d_C(f,g) := \|f-g\|_{[a,b]} \qquad \text{and} \qquad d_{C^1}(f,g) := \|f-g\|_{[a,b]} + \|f'-g'\|_{[a,b]} , \]
where ∥·∥ [𝑎,𝑏] is the uniform norm. By Example 7.1.8 and Exercise 7.1.12, we know that 𝐶 [𝑎, 𝑏], ℝ with 𝑑𝐶 is a metric space and so is 𝐶 1 [𝑎, 𝑏], ℝ with 𝑑𝐶 1 .
a) Prove that the derivative operator 𝐷 : 𝐶 1 [𝑎, 𝑏], ℝ → 𝐶 [𝑎, 𝑏], ℝ defined by 𝐷( 𝑓 ) B 𝑓 ′ is continuous.
b) On the other hand if we consider the metric 𝑑𝐶 on 𝐶 1 [𝑎, 𝑏], ℝ , then prove the derivative operator is no longer continuous. Hint: Consider sin(𝑛𝑥).
Exercise 7.5.13: Let (𝑋 , 𝑑) be a metric space, 𝑆 ⊂ 𝑋, and 𝑝 ∈ 𝑋. Prove that 𝑝 is a cluster point of 𝑆 if and only if 𝑝 ∈ 𝑆 \ {𝑝}. Exercise 7.5.14: Prove Proposition 7.5.16. Exercise 7.5.15: Prove Lemma 7.5.17. Exercise 7.5.16: Let (𝑋 , 𝑑𝑋 ) and (𝑌, 𝑑𝑌 ) be metric spaces, 𝑆 ⊂ 𝑋, 𝑝 ∈ 𝑋 a cluster point of 𝑆, and let 𝑓 : 𝑆 → 𝑌 be a function. Prove that 𝑓 : 𝑆 → 𝑌 is continuous at 𝑝 if and only if lim 𝑓 (𝑥) = 𝑓 (𝑝).
𝑥→𝑝
Exercise 7.5.17: Define
\[ f(x,y) := \begin{cases} \dfrac{2xy}{x^4+y^2} & \text{if } (x,y) \neq (0,0), \\ 0 & \text{if } (x,y) = (0,0). \end{cases} \]
a) Show that for every fixed 𝑦, the function that takes 𝑥 to 𝑓(𝑥, 𝑦) is continuous and hence Riemann integrable.
b) For every fixed 𝑥, the function that takes 𝑦 to 𝑓(𝑥, 𝑦) is continuous.
c) Show that 𝑓 is not continuous at (0, 0).
d) Now show that $g(y) := \int_0^1 f(x,y)\,dx$ is not continuous at 𝑦 = 0.
Note: Feel free to use what you know about arctan from calculus, in particular that $\frac{d}{ds}\arctan(s) = \frac{1}{1+s^2}$.
Exercise 7.5.18: Prove a stronger version of Proposition 7.5.12: If 𝑓 : (𝑎, 𝑏) × (𝑐, 𝑑) → ℝ is a bounded continuous function, then 𝑔 : (𝑐, 𝑑) → ℝ defined by
\[ g(y) := \int_a^b f(x,y)\,dx \]
is continuous. Hint: First integrate over $[a + 1/n,\; b - 1/n]$.
7.6
Fixed point theorem and Picard’s theorem again
Note: 1 lecture (optional, does not require §6.3) In this section we prove the fixed point theorem for contraction mappings. As an application we prove Picard’s theorem, which we proved without metric spaces in §6.3. The proof presented here is similar, but the proof goes a lot smoother with metric spaces and the fixed point theorem.
7.6.1
Fixed point theorem
Definition 7.6.1. Let (𝑋 , 𝑑𝑋 ) and (𝑌, 𝑑𝑌 ) be metric spaces. A map 𝜑 : 𝑋 → 𝑌 is a contraction (or a contractive map) if it is a 𝑘-Lipschitz map for some 𝑘 < 1, i.e. if there exists a 𝑘 < 1 such that 𝑑𝑌 𝜑(𝑝), 𝜑(𝑞) ≤ 𝑘 𝑑𝑋 (𝑝, 𝑞) for all 𝑝, 𝑞 ∈ 𝑋. Given a map 𝜑 : 𝑋 → 𝑋, a point 𝑥 ∈ 𝑋 is called a fixed point if 𝜑(𝑥) = 𝑥.
Theorem 7.6.2 (Contraction mapping principle or Banach fixed point theorem* ). Let (𝑋 , 𝑑) be a nonempty complete metric space and 𝜑 : 𝑋 → 𝑋 a contraction. Then 𝜑 has a unique fixed point. The words complete and contraction are necessary. See Exercise 7.6.6. Proof. Pick 𝑥0 ∈ 𝑋. Define a sequence {𝑥 𝑛 }∞ by 𝑥 𝑛+1 B 𝜑(𝑥 𝑛 ). Then 𝑛=1 𝑑(𝑥 𝑛+1 , 𝑥 𝑛 ) = 𝑑 𝜑(𝑥 𝑛 ), 𝜑(𝑥 𝑛−1 ) ≤ 𝑘𝑑(𝑥 𝑛 , 𝑥 𝑛−1 ).
Repeating 𝑛 times, we get $d(x_{n+1}, x_n) \le k^n d(x_1, x_0)$. For 𝑚 > 𝑛,
\[ d(x_m, x_n) \le \sum_{i=n}^{m-1} d(x_{i+1}, x_i) \le \sum_{i=n}^{m-1} k^i d(x_1, x_0) = k^n d(x_1, x_0) \sum_{i=0}^{m-n-1} k^i \le k^n d(x_1, x_0) \sum_{i=0}^{\infty} k^i = k^n d(x_1, x_0) \frac{1}{1-k} . \]
In particular, the sequence is Cauchy (why?). Since 𝑋 is complete, we let 𝑥 B lim𝑛→∞ 𝑥 𝑛 , and we claim that 𝑥 is our unique fixed point. * Named
after the Polish mathematician Stefan Banach (1892–1945) who first stated the theorem in 1922.
Fixed point? The function 𝜑 is a contraction, so it is Lipschitz continuous:
\[ \varphi(x) = \varphi\Bigl( \lim_{n\to\infty} x_n \Bigr) = \lim_{n\to\infty} \varphi(x_n) = \lim_{n\to\infty} x_{n+1} = x . \]
Unique? Let 𝑥 and 𝑦 be fixed points. 𝑑(𝑥, 𝑦) = 𝑑 𝜑(𝑥), 𝜑(𝑦) ≤ 𝑘 𝑑(𝑥, 𝑦).
As 𝑘 < 1, the inequality means that 𝑑(𝑥, 𝑦) = 0, and hence 𝑥 = 𝑦. The theorem is proved. □ The proof is constructive. Not only do we know a unique fixed point exists, we know how to find it. Start with any point 𝑥0 ∈ 𝑋, then iterate 𝜑(𝑥0 ), 𝜑(𝜑(𝑥0 )), 𝜑(𝜑(𝜑(𝑥0 ))), etc. to find better and better approximations. We can even find how far away from the fixed point we are, see the exercises. The idea of the proof is therefore useful in real-world applications.
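Here is a minimal sketch in Python (ours, not from the text) of the iteration just described. The helper name, the tolerance-based stopping rule, and the choice of the contraction 𝜑 = cos on [0, 1] (where |𝜑′(𝑥)| = |sin 𝑥| ≤ sin 1 < 1) are all assumptions made for illustration.

```python
import math

def fixed_point_iterate(phi, x0, tol=1e-12, max_iter=1000):
    """Iterate x_{n+1} = phi(x_n) starting from x0 and return an approximate fixed point.

    The loop stops once successive iterates agree to within tol.
    """
    x = x0
    for _ in range(max_iter):
        x_next = phi(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# cos maps [0, 1] into itself and is a contraction there, so it has a unique
# fixed point in [0, 1], which the iteration approximates.
print(fixed_point_iterate(math.cos, 1.0))  # approximately 0.739085...
```

The stopping rule is reasonable precisely because of the geometric decay of 𝑑(𝑥ₙ₊₁, 𝑥ₙ) established in the proof: once successive iterates are close, the remaining iterates cannot wander far.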
7.6.2
Picard’s theorem
We start with the metric space where we will apply the fixed point theorem: the space 𝐶 [𝑎, 𝑏], ℝ of Example 7.1.8, the space of continuous functions 𝑓 : [𝑎, 𝑏] → ℝ with the metric 𝑑( 𝑓 , 𝑔) B ∥ 𝑓 − 𝑔∥ [𝑎,𝑏] = sup | 𝑓 (𝑥) − 𝑔(𝑥)| . 𝑥∈[𝑎,𝑏]
Convergence in this metric is convergence in uniform norm, or in other words, uniform convergence. Therefore, 𝐶 [𝑎, 𝑏], ℝ is a complete metric space, see Proposition 7.4.5. Consider now the ordinary differential equation 𝑑𝑦 = 𝐹(𝑥, 𝑦). 𝑑𝑥 Given some 𝑥0 , 𝑦0 , we desire a function 𝑦 = 𝑓 (𝑥) such that 𝑓 (𝑥 0 ) = 𝑦0 and such that 𝑓 ′(𝑥) = 𝐹 𝑥, 𝑓 (𝑥) .
To avoid having to come up with many names, we often simply write 𝑦′ = 𝐹(𝑥, 𝑦) for the equation and 𝑦(𝑥) for the solution. The simplest example is the equation 𝑦′ = 𝑦, 𝑦(0) = 1. The solution is the exponential $y(x) = e^x$. A somewhat more complicated example is 𝑦′ = −2𝑥𝑦, 𝑦(0) = 1, whose solution is the Gaussian $y(x) = e^{-x^2}$. A subtle issue is how long the solution exists. Consider the equation 𝑦′ = 𝑦², 𝑦(0) = 1. Then $y(x) = \frac{1}{1-x}$ is a solution. While 𝐹 is a reasonably “nice” function and in particular it exists for all 𝑥 and 𝑦, the solution “blows up” at 𝑥 = 1. For more examples related to Picard’s theorem, see §6.3.
We will look for the solution in 𝐶 [𝑎, 𝑏], ℝ , which may feel strange at first as we are searching for a differentiable function. The explanation is that we consider the corresponding integral equation
\[ f(x) = y_0 + \int_{x_0}^{x} F\bigl(t, f(t)\bigr)\,dt . \]
To solve this integral equation we only need a continuous function, and in some sense our task should be easier—we have more candidate functions to try. This way of thinking is quite typical when solving differential equations. Theorem 7.6.3 (Picard’s theorem on existence and uniqueness). Let 𝐼, 𝐽 ⊂ ℝ be closed and bounded intervals, let 𝐼 ◦ and 𝐽 ◦ be their interiors, and let (𝑥0 , 𝑦0 ) ∈ 𝐼 ◦ × 𝐽 ◦ . Suppose 𝐹 : 𝐼 × 𝐽 → ℝ is continuous and Lipschitz in the second variable, that is, there exists an 𝐿 ∈ ℝ such that |𝐹(𝑥, 𝑦) − 𝐹(𝑥, 𝑧)| ≤ 𝐿 |𝑦 − 𝑧|
for all 𝑦, 𝑧 ∈ 𝐽, 𝑥 ∈ 𝐼.
Then there exists an ℎ > 0 such that [𝑥₀ − ℎ, 𝑥₀ + ℎ] ⊂ 𝐼 and a unique differentiable function 𝑓 : [𝑥₀ − ℎ, 𝑥₀ + ℎ] → 𝐽 ⊂ ℝ such that
\[ f'(x) = F\bigl(x, f(x)\bigr) \qquad \text{and} \qquad f(x_0) = y_0 . \]
Proof. Without loss of generality, assume 𝑥0 = 0 (exercise). As 𝐼 × 𝐽 is compact and 𝐹 is continuous, 𝐹 is bounded. So find an 𝑀 > 0 such that |𝐹(𝑥, 𝑦)| ≤ 𝑀 for all (𝑥, 𝑦) ∈ 𝐼 × 𝐽. Pick 𝛼 > 0 such that [−𝛼, 𝛼] ⊂ 𝐼 and [𝑦0 − 𝛼, 𝑦0 + 𝛼] ⊂ 𝐽. Let
\[ h := \min\Bigl\{ \alpha, \ \frac{\alpha}{M + L\alpha} \Bigr\} . \]
Note [−ℎ, ℎ] ⊂ 𝐼. Let
𝑌 B 𝑓 ∈ 𝐶 [−ℎ, ℎ], ℝ : 𝑓 [−ℎ, ℎ] ⊂ 𝐽 .
That is, 𝑌 is the set of continuous functions on [−ℎ, ℎ] with values in 𝐽, in other words, exactly those functions where 𝐹 𝑥, 𝑓 (𝑥) makes sense. It is left as an exercise to show that 𝑌 is a closed subset of 𝐶 [−ℎ, ℎ], ℝ (because 𝐽 is closed). The space 𝐶 [−ℎ, ℎ], ℝ is complete, and a closed subset of a complete metric space is a complete metric space with the subspace metric, see Proposition 7.4.6. So 𝑌 with the subspace metric is a complete metric space. We will write 𝑑( 𝑓 , 𝑔) = ∥ 𝑓 − 𝑔∥ [−ℎ,ℎ] for this metric. Define a mapping 𝑇 : 𝑌 → 𝐶 [−ℎ, ℎ], ℝ by 𝑇( 𝑓 )(𝑥) B 𝑦0 +
\[ \int_0^x F\bigl(t, f(t)\bigr)\,dt . \]
It is an exercise to check that 𝑇 is well-defined, and that for 𝑓 ∈ 𝑌, 𝑇( 𝑓 ) really is in 𝐶 [−ℎ, ℎ], ℝ . Let 𝑓 ∈ 𝑌 and |𝑥| ≤ ℎ. As 𝐹 is bounded by 𝑀, we have
\[ |T(f)(x) - y_0| = \left| \int_0^x F\bigl(t, f(t)\bigr)\,dt \right| \le |x|\, M \le hM \le \frac{\alpha M}{M + L\alpha} \le \alpha . \]
So 𝑇( 𝑓 ) [−ℎ, ℎ] ⊂ [𝑦0 − 𝛼, 𝑦0 + 𝛼] ⊂ 𝐽, and 𝑇( 𝑓 ) ∈ 𝑌. In other words, 𝑇(𝑌) ⊂ 𝑌. From now on, we consider 𝑇 as a mapping of 𝑌 to 𝑌. We claim 𝑇 : 𝑌 → 𝑌 is a contraction. First, for 𝑥 ∈ [−ℎ, ℎ] and 𝑓 , 𝑔 ∈ 𝑌, we have
𝐹 𝑥, 𝑓 (𝑥) − 𝐹 𝑥, 𝑔(𝑥) ≤ 𝐿 | 𝑓 (𝑥) − 𝑔(𝑥)| ≤ 𝐿 𝑑( 𝑓 , 𝑔). Therefore,
\[ |T(f)(x) - T(g)(x)| = \left| \int_0^x \Bigl( F\bigl(t, f(t)\bigr) - F\bigl(t, g(t)\bigr) \Bigr)\,dt \right| \le |x|\, L\, d(f,g) \le hL\, d(f,g) \le \frac{L\alpha}{M + L\alpha}\, d(f,g) . \]
We chose 𝑀 > 0, and so $\frac{L\alpha}{M+L\alpha} < 1$. Take the supremum over 𝑥 ∈ [−ℎ, ℎ] of the left-hand side above to obtain $d\bigl(T(f), T(g)\bigr) \le \frac{L\alpha}{M+L\alpha}\, d(f,g)$, that is, 𝑇 is a contraction.

The fixed point theorem (Theorem 7.6.2) gives a unique 𝑓 ∈ 𝑌 such that 𝑇(𝑓) = 𝑓. In other words,
\[ f(x) = y_0 + \int_0^x F\bigl(t, f(t)\bigr)\,dt . \]
Clearly, 𝑓 (0) = 𝑦0 . By the fundamental theorem of calculus (Theorem 5.3.3), 𝑓 is differentiable and its derivative is 𝐹 𝑥, 𝑓 (𝑥) . Differentiable functions are continuous, so 𝑓 is the ′ unique differentiable 𝑓 : [−ℎ, ℎ] → 𝐽 such that 𝑓 (𝑥) = 𝐹 𝑥, 𝑓 (𝑥) and 𝑓 (0) = 𝑦0 . □
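To see the map 𝑇 from the proof in action, here is a small numerical sketch (ours, not part of the text) of the Picard iteration, using NumPy and the trapezoid rule to approximate the integral on a grid; the function names, the grid parameters, and the choice of the example 𝑦′ = 𝑦, 𝑦(0) = 1 are assumptions made for illustration.

```python
import numpy as np

def picard_iterates(F, x0, y0, h, n_iter=10, n_grid=201):
    """Run the Picard iteration f_{k+1}(x) = y0 + integral from x0 to x of F(t, f_k(t)) dt
    numerically on [x0, x0 + h], approximating the integral by the trapezoid rule."""
    x = np.linspace(x0, x0 + h, n_grid)
    dx = x[1] - x[0]
    f = np.full_like(x, y0)  # start from the constant function f_0 = y0
    for _ in range(n_iter):
        integrand = F(x, f)
        # cumulative trapezoid rule: the integral from x0 up to each grid point
        trapezoids = (integrand[1:] + integrand[:-1]) / 2.0 * dx
        integral = np.concatenate(([0.0], np.cumsum(trapezoids)))
        f = y0 + integral
    return x, f

# For y' = y, y(0) = 1 the solution is e^x; each pass through the loop applies
# a discretized version of the map T, and the iterates approach e^x uniformly.
x, f = picard_iterates(lambda t, y: y, 0.0, 1.0, h=0.5)
print(np.max(np.abs(f - np.exp(x))))  # a very small number
```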
7.6.3
Exercises
For more exercises related to Picard’s theorem see §6.3.

Exercise 7.6.1: Let 𝐽 be a closed and bounded interval and $Y := \bigl\{ f \in C([-h,h], \mathbb{R}) : f([-h,h]) \subset J \bigr\}$. Show that $Y \subset C([-h,h], \mathbb{R})$ is closed. Hint: 𝐽 is closed.

Exercise 7.6.2: In the proof of Picard’s theorem, show that if 𝑓 : [−ℎ, ℎ] → 𝐽 is continuous, then $F\bigl(t, f(t)\bigr)$ is continuous on [−ℎ, ℎ] as a function of 𝑡. Use this to show that
\[ T(f)(x) := y_0 + \int_0^x F\bigl(t, f(t)\bigr)\,dt \]
is well-defined and that $T(f) \in C([-h,h], \mathbb{R})$.
Exercise 7.6.3: Prove that in the proof of Picard’s theorem, the statement “Without loss of generality assume 𝑥 0 = 0” is justified. That is, prove that if we know the theorem with 𝑥 0 = 0, the theorem is true as stated. Exercise 7.6.4: Let 𝐹 : ℝ → ℝ be defined by 𝐹(𝑥) B 𝑘𝑥 + 𝑏 where 0 < 𝑘 < 1, 𝑏 ∈ ℝ. a) Show that 𝐹 is a contraction.
b) Find the fixed point and show directly that it is unique.
Exercise 7.6.5: Let 𝑓 : [0, 1/4] → [0, 1/4] be defined by 𝑓 (𝑥) B 𝑥 2 .
a) Show that 𝑓 is a contraction, and find the best (smallest) 𝑘 from the definition that works.
b) Find the fixed point and show directly that it is unique. Exercise 7.6.6: a) Find an example of a contraction 𝑓 : 𝑋 → 𝑋 of a non-complete metric space 𝑋 with no fixed point.
b) Find a 1-Lipschitz map 𝑓 : 𝑋 → 𝑋 of a complete metric space 𝑋 with no fixed point.
Exercise 7.6.7: Consider 𝑦′ = 𝑦², 𝑦(0) = 1. Use the iteration scheme from the proof of the contraction mapping principle. Start with $f_0(x) = 1$. Find a few iterates (at least up to $f_2$). Prove that the pointwise limit of $f_n$ is $\frac{1}{1-x}$, that is, for every 𝑥 with |𝑥| < ℎ for some ℎ > 0, prove that $\lim_{n\to\infty} f_n(x) = \frac{1}{1-x}$.
Exercise 7.6.8: Suppose 𝑓 : 𝑋 → 𝑋 is a contraction for 𝑘 < 1. Suppose you use the iteration procedure with $x_{n+1} := f(x_n)$ as in the proof of the fixed point theorem. Suppose 𝑥 is the fixed point of 𝑓.
a) Show that $d(x, x_n) \le k^n d(x_1, x_0) \frac{1}{1-k}$ for all 𝑛 ∈ ℕ.
b) Suppose $d(y_1, y_2) \le 16$ for all $y_1, y_2 \in X$, and 𝑘 = 1/2. Find an 𝑁 such that starting at any given point $x_0 \in X$, we have $d(x, x_n) \le 2^{-16}$ for all 𝑛 ≥ 𝑁.

Exercise 7.6.9: Let $f(x) := x - \frac{x^2-2}{2x}$ (you may recognize Newton’s method for $\sqrt{2}$).
a) Prove $f\bigl([1,\infty)\bigr) \subset [1,\infty)$.
b) Prove that 𝑓 : [1, ∞) → [1, ∞) is a contraction.
c) Show that the fixed point theorem applies, find the unique 𝑥 ≥ 1 such that 𝑓(𝑥) = 𝑥, and show that $x = \sqrt{2}$. Note: In particular, the technique from the proof of the theorem can be used to approximate $\sqrt{2}$.
Exercise 7.6.10: Suppose 𝑓 : 𝑋 → 𝑋 is a contraction, and (𝑋 , 𝑑) is a metric space with the discrete metric, that is, 𝑑(𝑥, 𝑦) = 1 whenever 𝑥 ≠ 𝑦. Show that 𝑓 is constant, that is, there exists a 𝑐 ∈ 𝑋 such that 𝑓 (𝑥) = 𝑐 for all 𝑥 ∈ 𝑋. Exercise 7.6.11: Suppose (𝑋 , 𝑑) is a nonempty complete metric space, 𝑓 : 𝑋 → 𝑋 is a mapping, and denote by 𝑓 𝑛 the 𝑛th iterate of 𝑓 . Suppose for every 𝑛 there exists a 𝑘 𝑛 > 0 such that 𝑑 𝑓 𝑛 (𝑥), 𝑓 𝑛 (𝑦) ≤ 𝑘 𝑛 𝑑(𝑥, 𝑦) Í∞ for all 𝑥, 𝑦 ∈ 𝑋, where 𝑛=1 𝑘 𝑛 < ∞. Prove that 𝑓 has a unique fixed point in 𝑋.
Further Reading

[BS] Robert G. Bartle and Donald R. Sherbert, Introduction to Real Analysis, 3rd ed., John Wiley & Sons Inc., New York, 2000.
[DW] John P. D’Angelo and Douglas B. West, Mathematical Thinking: Problem-Solving and Proofs, 2nd ed., Prentice Hall, 1999.
[F] Joseph E. Fields, A Gentle Introduction to the Art of Mathematics. Available at http://giam.southernct.edu/GIAM/.
[H] Richard Hammack, Book of Proof. Available at http://www.people.vcu.edu/~rhammack/BookOfProof/.
[R1] Maxwell Rosenlicht, Introduction to Analysis, Dover Publications Inc., New York, 1986. Reprint of the 1968 edition.
[R2] Walter Rudin, Principles of Mathematical Analysis, 3rd ed., McGraw-Hill Book Co., New York, 1976. International Series in Pure and Applied Mathematics.
[T] William F. Trench, Introduction to Real Analysis, Pearson Education, 2003. http://ramanujan.math.trinity.edu/wtrench/texts/TRENCH_REAL_ANALYSIS.PDF.
Index

Abel’s theorem, 99 absolute convergence, 92 absolute maximum, 130, 289 absolute minimum, 130, 289 absolute value, 36 achieves absolute maximum, 130 achieves absolute minimum, 130 additive property of the integral, 191 algebraic geometry, 291 algebraic number, 43 analytic function, 174 Archimedean property, 31 arithmetic-geometric mean inequality, 34 axiom, 12, 25 ball, 264 Banach fixed point theorem, 296 base of the natural logarithm, 211 basis statement, 12, 13 Bernoulli’s inequality, 35 bijection, 15 bijective, 15 binary relation, 16 bisection method, 133 Bolzano’s intermediate value theorem, 133 Bolzano’s theorem, 133 Bolzano–Weierstrass theorem, 78, 286 boundary, 270 bounded above, 23 sequence, 51 bounded below, 23 sequence, 51
bounded function, 38, 130 bounded interval, 41 bounded sequence, 51, 274 bounded set, 24, 261 bounded variation, 199 Cantor diagonalization, 46 Cantor’s theorem, 19, 42 Cantor–Bernstein–Schröder, 18 cardinality, 17 Cartesian product, 14 Cauchy condensation principle, 99 Cauchy in the uniform norm, 231 Cauchy principal value, 225 Cauchy product, 104 Cauchy sequence, 84, 279 Cauchy series, 89 Cauchy’s mean value theorem, 165 Cauchy–Bunyakovsky–Schwarz inequality, 257 Cauchy–Schwarz inequality, 257, 263 Cauchy-complete, 85, 279 Cesàro summability, 111 chain rule, 159 change of variables theorem, 204 clopen, 268 closed ball, 264 closed interval, 41 closed set, 264 closure, 269 cluster point, 81, 113, 145 in a metric space, 292 codomain, 14
compact, 280 comparison test for improper integrals, 216 comparison test for series, 93 complement, 10 complement relative to, 10 complete, 85, 279 completeness property, 24 complex conjugate, 259 complex modulus, 259 complex numbers, 27 composition of functions, 16 conditional convergence, 92 connected, 268 constant sequence, 51 continuous at 𝑐, 122, 288 continuous function, 122 in a metric space, 288 continuous function of two variables, 247 continuously differentiable, 169 contraction, 296 contraction mapping principle, 296 convergent improper integral, 214 power series, 106 sequence, 52 sequence in a metric space, 274 series, 87 converges function, 114, 145 function in a metric space, 293 converges absolutely, 92 converges conditionally, 92 converges in uniform norm, 231 converges pointwise, 227 converges to infinity, 147 converges uniformly, 229 converges uniformly on compact subsets, 244 convex, 129, 212 convolution, 225 countable, 18
countably infinite, 18 critical point, 163 Darboux integral, 182 Darboux sum, 181 Darboux’s theorem, 167 decimal digit, 44 decimal representation, 44 decreasing, 149, 165 Dedekind completeness property, 24 Dedekind cut, 35 DeMorgan’s theorem, 11 dense, 277 density of rational numbers, 31 derivative, 155 diagonalization, 46 diameter, 261 difference quotient, 155 differentiable, 155 continuously, 169 infinitely, 173, 213 𝑛 times, 171 differential equation, 246 digit, 44 Dini’s theorem, 244 direct image, 14 Dirichlet function, 126, 154, 183, 238, 245 disconnected, 268 discontinuity, 125 discontinuous, 125 discrete metric, 259 disjoint, 10 distance function, 255 divergent improper integral, 214 power series, 106 sequence, 52 sequence in a metric space, 274 series, 87 diverges, 114 function in a metric space, 293 diverges to infinity, 79, 146 diverges to minus infinity, 79
domain, 14
Heine–Borel theorem, 284
element, 8 elementary step function, 198 empty set, 8 equal, 9 equivalence class, 17 equivalence relation, 16 euclidean space, 257 Euler’s number, 211 Euler–Mascheroni constant, 212 even function, 205 existence and uniqueness theorem, 247, 298 exponential, 209, 210 extended real numbers, 34 extreme value theorem, 130
identity of indiscernibles, 255 image, 14 improper integrals, 214 increasing, 149, 165 induction, 12 induction hypothesis, 12 induction step, 12, 13 infimum, 24 infinite, 17 infinite limit of a function, 146 of a sequence, 79 infinitely differentiable, 173, 213 infinity norm, 230 initial condition, 246 injection, 15 injective, 15 integers, 9 integral test for series, 222 integration by parts, 205 interior, 270 intermediate value theorem, 133 intersection, 10 interval, 41 inverse function, 16 inverse function theorem, 177 inverse image, 14 irrational, 30
field, 25 finite, 17 finitely many discontinuities, 196 first derivative, 171 first derivative test, 166 first order ordinary differential equation, 246 fixed point, 296 fixed point theorem, 296 Fourier sine and cosine transforms, 224 Fubini for sums, 112 function, 13 bounded, 130 continuous, 122, 288 differentiable, 155 Lipschitz, 141, 292 fundamental theorem of calculus, 200 geometric series, 88, 109 graph, 14 great circle distance, 260 greatest lower bound, 24 half-open interval, 41 harmonic series, 90 Hausdorff metric, 262
joint limit, 244 L’Hôpital’s rule, 161, 169 L’Hospital’s rule, 161, 169 𝐿1 -convergence, 243 𝐿1 -norm, 243 Lagrange form, 172 Laplace transform, 224 least element, 12 least upper bound, 24 least-upper-bound property, 24 Lebesgue covering lemma, 282
Lebesgue number, 282 Leibniz rule, 158 liminf, 73, 80 limit infinite, 79, 146 of a function, 114 of a function at infinity, 145 of a function in a metric space, 293 of a sequence in a metric space, 274 limit comparison test, 98 limit inferior, 73, 80 limit superior, 73, 80 limsup, 73, 80 linear first order differential equations, 252 linearity of series, 91 linearity of the derivative, 157 linearity of the integral, 193 Lipschitz continuous, 141 in a metric space, 292 logarithm, 207, 208 logarithm base 𝑏, 211 lower bound, 23 lower Darboux integral, 182 lower Darboux sum, 181
minimum-maximum theorem, 130 modulus, 259 monic polynomial, 134, 148 monotone convergence theorem, 55 monotone decreasing sequence, 55 monotone function, 149 monotone increasing sequence, 55 monotone sequence, 55 monotonic sequence, 55 monotonicity of the integral, 194
map, 14 mapping, 14 maximum, 34 absolute, 130 relative, 162 strict relative, 174 maximum-minimum theorem, 130 mean value theorem, 164 mean value theorem for integrals, 197 member, 8 Mertens’ theorem, 104 metric, 255 metric space, 255 minimum, 34 absolute, 130 relative, 162 strict relative, 174
odd function, 205 one-sided limit, 119 one-to-one, 15 onto, 15 open ball, 264 open cover, 280 open interval, 41 open neighborhood, 264 open set, 264 ordered field, 25 ordered set, 23 ordinary differential equation, 246
𝑛 times differentiable, 171 naïve set theory, 8 natural logarithm, 208 natural numbers, 9 negative, 25 neighborhood, 264 nondecreasing, 149 nonincreasing, 149 nonnegative, 25 nonnegativity of a metric, 255 nonpositive, 25 𝑛th derivative, 171 𝑛th derivative test, 175 𝑛th order Taylor polynomial, 171
𝑝-series, 94 𝑝-test, 94 𝑝-test for integrals, 214 partial sums, 87
partition, 181 Picard iterate, 247 Picard iteration, 247 Picard’s theorem, 247, 298 pointwise convergence, 227 polynomial, 123 popcorn function, 126, 198 positive, 25 power series, 106 power set, 19 principle of induction, 12 principle of strong induction, 13 product rule, 158 proper, 294 proper subset, 9 pseudometric space, 262 quotient rule, 159 radius of convergence, 107 range, 14 range of a sequence, 51 ratio test for sequences, 69 ratio test for series, 96 rational functions, 109 rational numbers, 9 real numbers, 23 rearrangement of a series, 102 refinement of a partition, 183 reflexive relation, 16 relation, 16 relative maximum, 162 relative minimum, 162 relatively compact, 287 remainder term in Taylor’s formula, 172 removable discontinuity, 127 removable singularity, 140 restriction, 118 reverse triangle inequality, 37 Riemann integrable, 185 Riemann integral, 185 Riemann–Lebesgue Lemma, 199 Rolle’s theorem, 163
root test, 100 secant line, 141, 155 second derivative, 171 second derivative test, 174 sequence, 51, 274 sequentially compact, 282 series, 87 set, 8 set building notation, 9 set theory, 8 set-theoretic difference, 10 set-theoretic function, 13 sinc function, 220 slope field, 246 sphere, 260 squeeze lemma, 61 standard metric on ℝ𝑛 , 258 standard metric on ℝ, 256 step function, 198 strict relative maximum, 174 strict relative minimum, 174 strictly decreasing, 149, 165 strictly increasing, 149, 165 strictly monotone function, 149 strong induction, 13 subadditive, 262 subcover, 280 subsequence, 58, 274 subset, 9 subspace, 261 subspace metric, 261 subspace topology, 261 sup norm, 230 supremum, 24 surjection, 15 surjective, 15 symmetric difference, 20 symmetric relation, 16 symmetry of a metric, 255 tail of a sequence, 57 tail of a series, 89
Taylor polynomial, 171 Taylor series, 173 Taylor’s theorem, 172 Thomae function, 126, 198 Tonelli for sums, 112 topology, 264 totally bounded, 286 totally disconnected, 272 transitive relation, 16 triangle inequality, 36, 255 trichotomy, 23 unbounded closed intervals, 41 unbounded interval, 41 unbounded open intervals, 41 uncountable, 18 uniform convergence, 229 uniform convergence on compact subsets, 244 uniform norm, 230 uniform norm convergence, 231 uniformly Cauchy, 231 uniformly continuous, 138 in a metric space, 291 union, 10 unit sphere, 260 universe, 8 upper bound, 23 upper Darboux integral, 182 upper Darboux sum, 181 Venn diagram, 10 weak solution, 252 well ordering property, 12 zero set, 291
List of Notation

Notation : Description (page)

∅ : the empty set (8)
{1, 2, 3} : set with the given elements (8)
A ≔ B : define A to equal B (8)
x ∈ S : x is an element of S (8)
x ∉ S : x is not an element of S (8)
A ⊂ B : A is a subset of B (8)
A = B : A and B are equal (9)
A ⊊ B : A is a proper subset of B (9)
{x ∈ S : P(x)} : set building notation (9)
ℕ : the natural numbers: 1, 2, 3, . . . (9)
ℤ : the integers: . . . , −2, −1, 0, 1, 2, . . . (9)
ℚ : the rational numbers (9)
ℝ : the real numbers (9)
A ∪ B : union of A and B (10)
A ∩ B : intersection of A and B (10)
A \ B : set minus, elements of A not in B (10)
B^c : set complement, elements not in B (10)
⋃_{n=1}^∞ A_n : union of all A_n for all n ∈ ℕ (11)
⋂_{n=1}^∞ A_n : intersection of all A_n for all n ∈ ℕ (11)
⋃_{λ∈I} A_λ : union of all A_λ for all λ ∈ I (12)
⋂_{λ∈I} A_λ : intersection of all A_λ for all λ ∈ I (12)
f : A → B : function with domain A and codomain B (13)
A × B : Cartesian product of A and B (14)
f(A) : direct image of A by f (14)
f^{−1}(A) : inverse image of A by f (14)
f^{−1} : inverse function (16)
f ∘ g : composition of functions (16)
[a] : equivalence class of a (17)
|A| : cardinality of a set A (17)
𝒫(A) : power set of A (19)
x = y : x is equal to y (23)
x > y : x is greater than y (23)
x ≥ y : x is greater than or equal to y (23)
sup E : supremum of E (24)
inf E : infimum of E (24)
ℂ : the complex numbers (27)
ℝ* : the extended real numbers (33)
∞ : infinity (33)
max E : maximum of E (34)
min E : minimum of E (34)
|x| : absolute value (36)
sup_{x∈D} f(x) : supremum of f(D) (38)
inf_{x∈D} f(x) : infimum of f(D) (38)
(a, b) : open bounded interval (41)
[a, b] : closed bounded interval (41)
(a, b], [a, b) : half-open bounded interval (41)
(a, ∞), (−∞, b) : open unbounded interval (41)
[a, ∞), (−∞, b] : closed unbounded interval (41)
{x_n}_{n=1}^∞ : sequence (51, 274)
lim_{n→∞} x_n : limit of a sequence (52, 274)
{x_{n_i}}_{i=1}^∞ : subsequence (58, 274)
lim sup_{n→∞} x_n : limit superior (73, 80)
lim inf_{n→∞} x_n : limit inferior (73, 80)
∑_{n=1}^∞ a_n : series (87)
∑_{n=1}^k a_n : sum a_1 + a_2 + · · · + a_k (87)
f(x) → L as x → c : f(x) converges to L as x goes to c (114)
lim_{x→c} f(x) : limit of a function (114, 293)
lim_{x→c+} f(x), lim_{x→c−} f(x) : one-sided limit of a function (119)
lim_{x→∞} f(x), lim_{x→−∞} f(x) : limit of a function at infinity (145)
f′(x), df/dx, (d/dx) f : derivative of f (155)
f′′, f′′′, f′′′′ : second, third, fourth derivative of f (171)
f^{(n)} : nth derivative of f (171)
L(P, f) : lower Darboux sum of f over partition P (181)
U(P, f) : upper Darboux sum of f over partition P (181)
\underline{∫}_a^b f : lower Darboux integral (182)
\overline{∫}_a^b f : upper Darboux integral (182)
ℛ[a, b] : Riemann integrable functions on [a, b] (185)
∫_a^b f, ∫_a^b f(x) dx : Riemann integral of f on [a, b] (185)
ln(x), log(x) : natural logarithm function (208)
exp(x), e^x : exponential function (210)
x^y : exponentiation of x > 0 and y ∈ ℝ (211)
e : Euler’s number, base of the natural logarithm (211)
∥f∥_S : uniform norm of f over S (230)
ℝ^n : the n-dimensional euclidean space (257)
C(S, ℝ) : continuous functions f : S → ℝ (259)
diam(S) : diameter of S (261)
C^1(S, ℝ) : continuously differentiable functions f : S → ℝ (263, 295)
B(p, δ), B_X(p, δ) : open ball in a metric space (264)
C(p, δ), C_X(p, δ) : closed ball in a metric space (264)
Ā : closure of A (269)
A° : interior of A (270)
∂A : boundary of A (270)