Secondary Mathematics for Mathematicians and Educators
In this engaging text, Michael Weiss offers an advanced view of the secondary mathematics curriculum through the prism of theory, analysis, and history, aiming to take an intellectually and mathematically mature perspective on the content normally taught in high school mathematics courses. Rather than a secondary mathematics textbook, Weiss presents here a textbook about the secondary mathematics curriculum, written for mathematics educators and mathematicians and presenting a long-overdue modern-day integration of the disparate topics and methods of secondary mathematics into a coherent mathematical theory. Areas covered include:

• Polynomials and polynomial functions;
• Geometry, graphs, and symmetry;
• Abstract algebra, linear algebra, and solving equations;
• Exponential and logarithmic functions;
• Complex numbers;
• The historical development of the secondary mathematics curriculum.
Using precise definitions and proofs throughout, and building on a foundation of advanced content knowledge, Weiss offers a compelling and timely investigation into the secondary mathematics curriculum, relevant for preservice secondary teachers as well as graduate students and scholars in both mathematics and mathematics education.

Michael Weiss is currently a member of the faculty at the Department of Mathematics at the University of Michigan, USA. His background is in mathematics education and pure mathematics, and he was formerly a high school mathematics teacher in the United States.
Secondary Mathematics for Mathematicians and Educators
A View from Above
Michael Weiss
First published 2021 by Routledge, 52 Vanderbilt Avenue, New York, NY 10017
and by Routledge, 2 Park Square, Milton Park, Abingdon, Oxon OX14 4RN
Routledge is an imprint of the Taylor & Francis Group, an informa business
© 2021 Taylor & Francis
The right of Michael Weiss to be identified as author of this work has been asserted by him in accordance with sections 77 and 78 of the Copyright, Designs and Patents Act 1988.
All rights reserved. No part of this book may be reprinted or reproduced or utilised in any form or by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying and recording, or in any information storage or retrieval system, without permission in writing from the publishers.
Trademark notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation without intent to infringe.
Library of Congress Cataloging-in-Publication Data
A catalog record for this title has been requested
ISBN: 978-1-138-29466-0 (hbk)
ISBN: 978-1-138-29467-7 (pbk)
ISBN: 978-1-315-10096-8 (ebk)
Typeset in Times New Roman by Newgen Publishing UK
Contents
Acknowledgments vii
Introduction 1
0.1 Who This Book is For 1
0.2 Preservice Secondary Teachers 2
0.3 Mathematics Graduate Students 4
0.4 Mathematics Education Doctoral Students 5
0.5 Thinking Like a Mathematician 6
0.6 The Theory-Building Disposition 7
0.7 Structure of the Book 10
1. Numbers and Number Systems 12
1.1 Old and New Math 12
1.2 Back to Basics 15
1.3 What are Real Numbers? 18
1.4 Characterizing the Reals 21
1.5 Groups 23
1.6 Fields and Rings 28
1.7 Important Examples 34
1.8 Order Properties and Ordered Fields 38
1.9 Examples (and Non-Examples) of Ordered Fields 43
1.10 Rational Subfields and the Completeness Property 45
1.11 The Real Number Characterization Theorem, At Last 52
1.12 Existence of a Complete Ordered Field 57
1.13 Decimal Representations 63
1.14 Recommended Reading 70
2. Polynomials and Polynomial Functions 76
2.1 Polynomials in the Secondary Curriculum 76
2.2 Just What is a Polynomial? 77
2.3 Functions 79
2.4 Constant Functions and Polynomial Functions 84
2.5 Formal Polynomials 89
2.6 Interpreting Polynomials as Functions 95
2.7 Polynomials over Finite Rings 106
2.8 Recommended Reading 114
3. Solving Equations 117
3.1 “Equivalence” in the Secondary Curriculum 117
3.2 Strings and Algebraic Strings 120
3.3 Algebraic Equivalence 124
3.4 Equations, Strong and Weak Equivalence, and Solutions 126
3.5 A Complete (?) Algorithm for Solving Polynomial Equations in High School 135
3.6 Equations in Two Variables 137
3.7 Recommended Reading 146
4. Geometry, Graphs and Symmetry 150
4.1 Euclidean Geometry in the Secondary Curriculum 150
4.2 Compass-and-Straightedge Constructions in the Euclidean Plane 153
4.3 Measuring Ratios in the Plane 159
4.4 From Geometry to Algebra: Coordinatizing Lines and the Plane 165
4.5 Coordinate Systems, Lines and 1st-Degree Equations 175
4.6 Non-Orthonormal Coordinate Systems 185
4.7 Transformations and Symmetry 191
4.8 Groups of Transformations 204
4.9 Operations on Functions 213
4.10 Recommended Reading 219
5. Exponential and Logarithmic Functions 224
5.1 What We Talk About when We Talk About Logs 224
5.2 Exponential Functions, Roots, and the AM–GM Inequality 230
5.3 Exponential Equations and Logarithmic Functions 246
5.4 Logarithm-Like and Exponential-Like Functions 258
5.5 Exponentials and Logarithms in Other Fields and Rings 270
5.6 Applications to Cryptography 279
5.7 Recommended Reading 284
6. Complex Numbers 289
6.1 A World of Pure Imagination? 289
6.2 Hamilton’s Construction 292
6.3 Building a Multiplicative Structure from Scratch 296
6.4 The Field Criterion 301
6.5 The Complex Criterion 303
6.6 The Case µ² + 4λ ≥ 0 305
6.7 Quadratic Polynomials, Factoring and Completing the Square 309
6.8 Quotient Rings and Abstract Algebra 312
6.9 Recommended Reading 316
Index 319
Acknowledgments
This book was written over the course of nearly a decade, during which time I was on the faculty of three different universities. Any project that spans such a lengthy period of time inevitably depends on the support and encouragement of many, many people, and it is a pleasure to have the opportunity to thank some of them now.

Secondary Mathematics for Mathematicians and Educators began its life as a course initially taught to a small group of graduate students in the Mathematics Department at Oakland University, and I am extremely grateful to Professor Eddie Cheng, then chair of the OU Math Dept., for taking a chance on a junior faculty member with an unconventional idea. One year later, in its second incarnation, the course moved online, where its small local enrollment was supplemented by a cohort of Math Education doctoral students from San Diego State University—a development that was possible only because of the support and encouragement of Prof. Randy Phillips of SDSU. Finally, the course was taught for a third and fourth time to two lively cohorts of Math Ed graduate students at Michigan State University, for which I am indebted to Professor Vince Melfi, chair of MSU’s Program in Mathematics Education.

While thanking the departmental leadership that gave me the opportunity to pilot these materials in their earliest stages, I must also pause to thank the students (too many to name here) who not only provided a receptive audience but also took and shared with me the lecture notes that became the basis for the eventual manuscript of this text. I am grateful to Karen Smith, Patricio Herbst, and Carolyn Masserang, all of whom provided valuable feedback on this manuscript as it neared completion. I have also been fortunate to have the support of a succession of three wonderful editors: Catherine Bernard, who initially saw value in my proposal; Karen Adler, who presided over the middle stages of the book’s development; and Simon Jacobs, who helped usher it across the finish line.

In Fall 2018 the writing of this book was temporarily interrupted by a sudden and unexpected diagnosis of non-Hodgkin’s lymphoma. I am grateful to Dr. Tycell Phillips, and the entire hematology team at the University of Michigan’s Rogel Cancer Center, for helping to ensure the survival of this project (and its author).

Most important of all, none of this would have been possible without the constant love, support and encouragement of my family—my five wonderful children, and (most especially) my wife, Fruma Taub. Words simply cannot do justice to how fortunate I am to have her in my life, so I will simply say thank you for everything.
Introduction
0.1 Who This Book is For

This is a textbook about the secondary mathematics curriculum, but it is not a secondary mathematics textbook. In much the same way that a text on mathematical biology or mathematical finance applies mathematical tools to study biology or finance, so too does this text take the mathematics of the secondary curriculum as an object of study and deploy more advanced-level mathematics in the service of analyzing it. The aim of this book, then, is to take an intellectually and mathematically mature perspective on the content normally taught in high school mathematics courses.

More than a century ago, the great German mathematician Felix Klein1 published Elementary Mathematics from an Advanced Standpoint, which expanded on a series of lectures originally concerned with “the different ways in which the problem of instruction can be presented to the mathematician” (Preface to the First Edition, 1908). Klein explained that his purpose was

to put before the teacher, as well as the maturing student, from the view-point of modern science, but in a manner as simple, stimulating, and convincing as possible, both the content and the foundations of the topics of instruction, with due regard for the current methods of teaching. (ibid)

Later, in his preface to the third edition (1924), Klein further explained his goal as “to bring to the attention of secondary school teachers of mathematics and science the significance for their professional work of their academic studies, especially their studies in pure mathematics.” Klein’s work, eventually published in English in two volumes, was the first to seriously examine the curriculum of K–12 mathematics from the perspective of advanced—that is, university and research-level—mathematics, and it has stood the test of time as a landmark in the field of mathematics education. In the past century, however, neither mathematics nor the curriculum has stood still; indeed, much of Klein’s exposition is inaccessible to contemporary readers, both because the curriculum as it stands today is significantly different from that of Klein’s time, and because the language, notation, and techniques of mathematics have moved on.

The primary focus of this book, then, is on integrating the disparate topics and methods of secondary mathematics into a coherent mathematical theory. What are the theorems of first-year and second-year Algebra? What are its postulates? How far can those theorems and postulates be generalized, and what mathematically interesting phenomena appear in those generalizations? Precise definitions and proofs are used throughout the text, which assumes an advanced-level undergraduate or beginning-level graduate student in mathematics or
mathematics education. Topics and ideas from abstract algebra, analysis, Euclidean geometry, and linear algebra are brought in where appropriate. A secondary focus is on the historical development of the mathematics and of the curriculum. When did inequalities enter the high school curriculum—and why? How were real numbers defined in the 1920s, 1960s, and in contemporary high school textbooks, and what (if any) are the mathematical differences between those approaches? Through sustained investigation of these (and many other) questions, the mathematics of the secondary curriculum itself becomes the object of mathematical analysis.

This book is intended to be of use for three similar, but distinct, audiences: (1) preservice secondary teachers at or near the conclusion of their undergraduate studies; (2) graduate-level students in Mathematics with a particular interest in education at the secondary or community-college level; and (3) doctoral students in Mathematics Education. In the sections below, I discuss briefly the specific needs of each of these groups, and how this book is intended to address those needs.
0.2 Preservice Secondary Teachers

What should be done about the mathematical training of future mathematics teachers? The usual answer is the simplest one: that there should be more of it. Indeed, few would disagree with this answer; who among us, after all, would argue that preservice teachers should study less mathematics? Many of the teacher education reforms of the last few decades take the need for more mathematics courses for teachers as their starting position. In the United States, for example, the No Child Left Behind Act of 2001 mandated that all teachers be “highly qualified” in the subject area in which they teach, a standard that many states have interpreted to mean that secondary teachers must have a bachelor’s degree in their major subject. In such a system a preservice secondary mathematics teacher must take the same undergraduate courses and meet the same degree requirements as a mathematics major who intends to pursue a graduate degree in mathematics.

However, the “more is better” approach alone fails to engage substantively with the difficult question of which mathematics topics and courses would actually be of value for a preservice teacher. Is a semester of graph theory just as good as one in partial differential equations? Which is more useful for the aspiring mathematics teacher, topology or Galois theory? The tacit premise often seems to be that it doesn’t matter—as if mathematics were one large, undifferentiated mass, of which students should absorb as much as possible, without regard for their intended future careers.

This premise—that mathematical knowledge is, so to speak, fungible—is not supported by research. As far back as the 1970s, the noted scholar Ed Begle (1979) surveyed studies of the correlation between teachers’ knowledge of undergraduate-level mathematics (as measured by both the number of college mathematics courses they took and by their average grade in those courses) and the performance of their students, and found that such studies showed either no significant correlation at all or, in some cases, a negative correlation. This finding is so counterintuitive that it bears repeating: the best research available at the time showed that the more undergraduate-level mathematics the teachers knew, the worse their own students did. Begle further noted that whether teachers were mathematics majors had a statistically significant impact on their students’ learning in only 20% of cases. When it comes to teacher knowledge, more is not better!

However, Begle noted, there were exceptions. His review did find that some undergraduate math courses had a positive correlation with student achievement. In reviewing these findings, Begle concluded that “The small, but positive, correlation between teacher understanding of the real number system and student achievement in ninth-grade Algebra would lead to
the recommendation that teachers should be provided with a solid understanding of the courses they are expected to teach.”

Over the ensuing decades many have echoed these findings. Wu (2011) argued that “To help teachers teach effectively, we must provide them with a body of mathematical knowledge that satisfies both of the following conditions: (A) It is relevant to teaching, i.e., does not stray far from the material they teach in school. (B) It is consistent with the fundamental principles of mathematics.” Elsewhere Wu (1999) clarifies the distinction between the mathematics courses that customarily form the core of a mathematics major, and the type of mathematics courses that (he believes) would be most beneficial for teachers, as follows:

In contrast with the normal courses that are relentlessly ‘forward-looking’ (i.e., the far-better-things-to-come in graduate courses), considerable time should be devoted to ‘looking back.’ (p. 13)

In keeping with Begle’s findings and Wu’s recommendations, the Conference Board of the Mathematical Sciences (CBMS 2001) recommended that preservice teachers complete a “capstone course connecting their college mathematics courses with high school mathematics”; a study by Cox, Chesler, Beisiegel, Kenney, Newton & Stone (2011) found that slightly more than half of the universities in their sample had such a capstone course. More recently, the Conference Board has recommended that

prospective high school teachers of mathematics should be required to complete the equivalent of an undergraduate major in mathematics that includes three courses with a primary focus on high school mathematics from an advanced viewpoint. (CBMS 2012, p. 18)

But just how “advanced” should this advanced viewpoint be? In order to “look back”, as Wu has it, one must first determine the vantage point from which the looking back is to occur. This text assumes the perspective of a student who has completed, or is near the very end of, an undergraduate mathematics degree. Thus, we assume that the reader knows the basics of linear algebra; is familiar with the definition of groups, rings and fields; and is comfortable with various methods of proof. The question that drives this book is: what is all of that material useful for, for a preservice secondary teacher?

Consider, for example, the teaching of exponential functions in an Algebra 2 classroom. One “big idea” for this topic is that there is an analogy between linear functions and exponential functions: just as in the well-known form y = mx + b the parameter b stands for the initial value (or y-intercept) of the function, and the parameter m stands for the function’s growth rate, so too in the exponential form y = AB^x the parameter A stands for the initial value and the parameter B stands for the multiplicative growth factor. The two types of functions, linear and exponential, are thus in a certain sense “just like” each other. From an advanced perspective, one might observe that this “analogy” can more precisely be called an “isomorphism”—and that, in fact, the mapping x ↦ B^x establishes an isomorphism from the additive group of real numbers to the multiplicative group of positive reals. Linear and exponential functions are “just like” each other because the two groups are isomorphic, and the isomorphism transforms one kind of function into the other kind.
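To make this concrete, here is a minimal sketch of how the isomorphism carries one kind of function into the other; the symbols φ, L, A and C below are our own notation, not anything in the secondary curriculum. Fix B > 0 with B ≠ 1 and write φ(x) = B^x. Then

φ(x + y) = B^(x + y) = B^x · B^y = φ(x) · φ(y),

so φ carries addition of real numbers to multiplication of positive reals. Applying φ to the output of the linear function L(x) = mx + b gives

φ(L(x)) = B^(mx + b) = B^b · (B^m)^x = A · C^x, where A = B^b and C = B^m.

Composing with the isomorphism thus turns a linear function into an exponential one, with the y-intercept b becoming the initial value A and the growth rate m becoming the growth factor C.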
In fact, the well-known identities B^(x + y) = B^x · B^y and log_B(xy) = log_B(x) + log_B(y) really say nothing more than that the functions f(x) = B^x and g(x) = log_B(x) are group homomorphisms. The point of this example is to illustrate the principle that the advanced mathematics courses commonly taken by the multiple intended audiences of the book (see below) both can be and should be relevant for those embarking on careers in mathematics education—but
that such relevance is not self-evident for students, and must be made explicit through a careful consideration of the links between the mathematics content taught at the secondary level and the mathematics content studied at the advanced undergraduate/beginning graduate level. These important connections do not just “happen”, but rather must be brought out deliberately. That is one of the goals of this book.
0.3 Mathematics Graduate Students

This book is adapted from a course taught four times, at two universities, over a seven-year period. The first time it was taught the enrollment consisted exclusively of mathematics graduate students, some near the end of a master’s degree and others at the beginning of their doctoral studies. All of them had in common a background as former high school teachers; some of them intended to return to those careers, but most hoped to pursue a new career as faculty at the community-college level, where the majority of the mathematics courses are roughly equivalent to courses taught at the secondary level. These students were not interested in going on to do research in mathematics, or even in finding careers working in industry—and their complaint was essentially the same as that of their undergraduate counterparts: they saw little relevance for their future work in the advanced-level mathematics courses they were currently engaged in studying.

Although little data seems to be available on how many secondary teachers eventually return to university to earn a master’s degree in Mathematics, anecdotal evidence suggests that at universities that offer a terminal master’s degree in math, a large share of the students (perhaps even the majority) are either current secondary mathematics teachers or are preparing for a career as one. Depending on locale, high school teachers often have an incentive to earn an advanced degree, which can confer not only a higher salary but also the opportunity to take on a leadership position (department chair, etc.) in their school or district. While some students choose to earn master’s degrees or Ph.D.s in education or a related subject, some of them quite naturally choose to pursue an advanced degree in Mathematics. These are teachers who love mathematics, love sharing it with their students, and want to gain an advanced mathematical perspective on the subject they love.

In order to earn their degree these students need to take graduate-level mathematics courses; in order for the degree to be relevant for their professional aspirations, they need courses that focus on secondary mathematics. There are no textbooks currently available that meet both of those needs simultaneously. This book is intended to fill that gap by providing a mathematically serious approach, suitable for a graduate-level mathematics course, that is nevertheless anchored in the mathematics of the secondary curriculum.

This text assumes an audience that has already completed the courses that form the core of an undergraduate mathematics major at most universities: specifically, it assumes that the student has prior experience with abstract algebra and linear algebra, and has some experience with reading and writing proofs. To be clear, the text does not assume expertise or proficiency in these areas; the typical student in my courses (particularly a teacher returning for a graduate degree) may have completed his or her undergraduate mathematics coursework years or even decades earlier, so that the meanings of words like group, field and basis may be little more than dim memories. For this reason, the text weaves a thorough review of all essential prerequisite mathematical knowledge into its exposition. Examples (and non-examples) play a substantial role in this exposition: each new concept or definition is developed through a consideration of multiple examples, including both conventional and unconventional ones.
But while this background knowledge is not taken for granted, neither are these advanced topics avoided: on the contrary, they lie at the very heart of the book’s approach. The thesis of the book is that advanced mathematical knowledge is neither
irrelevant nor tangential for the educator, but rather can form the foundation of a teacher’s mathematical knowledge for teaching.
0.4 Mathematics Education Doctoral Students

The third group of students for whom this book is intended are doctoral students in Mathematics Education. Although this population is the smallest of the three intended audiences, it is also one where the need is perhaps greatest. At most universities that offer a doctorate in Math Ed, graduate students are required to take a certain number of advanced-level mathematics courses, often at the master’s level or higher. As with the other populations described above, the institutional requirement for these students to take advanced-level mathematics courses is complicated by the need for those courses to be relevant for their future professional work as mathematics teacher educators or as scholars in the field of mathematics education. There is broad (if not universal) agreement, it seems, that a student seeking a Ph.D. in Math Education should take at least some graduate-level mathematics courses. But here, again, we face the question: Which ones? Do we really think that a course in differential forms and a course in algebraic geometry are interchangeable—that what matters is that students take something at a sufficiently high level, and that the specifics don’t matter?

In recent years, a great deal of research has been done on what Ball, Thames & Phelps (2008) call Mathematical Knowledge for Teaching (MKT). In the MKT model, the specialized content knowledge that teachers draw upon in the course of their work is classified into specific knowledge domains. For example, there is Knowledge of Content and Students (KCS), which includes a familiarity with common student misconceptions and errors, and Knowledge of Content and Teaching (KCT), which encompasses the range of instructional decisions a teacher must make (posing problems, managing discussions, remediating an error) in the course of his or her work. Less attention has been paid to the analogous problem of describing what might be called Mathematical Knowledge for Teaching Teachers2 (MKTT) or Mathematical Knowledge for Education Research (MKER). Certainly, it seems reasonable to assume that, just as MKT includes but is not limited to the mathematical content that elementary and secondary students are expected to learn, so too would MKTT and MKER include MKT as a proper subset. But what else do teacher educators or mathematics education researchers need to know?

I propose that for those who are engaged in the scholarship of mathematics education, two particularly useful domains of knowledge are what Ball et al. call Knowledge of Content and Curriculum and Horizon Content Knowledge. Knowledge of Content and Curriculum comprises an awareness of how different curricula structure the relationships between mathematical topics in different ways; it also, arguably, includes an understanding of how and why curricula have changed over the years. Horizon Content Knowledge is “how the mathematics [taught] is related to … what will come later”, and “includes the vision useful in seeing connections to much later mathematical ideas” (Ball et al., p. 403). This textbook seeks to address these two domains in particular; in connecting the mathematics of the secondary curriculum to the mathematical topics of the advanced undergraduate and beginning graduate curriculum, it seeks to explore and cultivate Horizon Content Knowledge, while in reviewing the history and evolution of mathematics curricula it aims to help develop Knowledge of Content and Curriculum.
In addition to the mathematical knowledge described above, future scholars of mathematics education need a broad familiarity with the existing research literature in their chosen field. Toward this end, each chapter of this book ends with a section of recommended readings
from the math ed literature, each one linked to the topic of the chapter. Accompanying these recommended readings is a brief summary or discussion of the indicated papers, and several research projects that would be suitable for doctoral students to work on either individually or in small partnerships.
0.5 Thinking Like a Mathematician

Another, less explicit goal of this text is to help students develop some of the habits of mind of a mathematician. While undergraduate coursework commonly makes explicit the norms of proof and problem-solving in mathematical work, other important aspects of mathematical practice—including systematizing, generalizing, and other theory-generating moves—are typically invisible to the student. It seems worthwhile to pause here and consider some of these practices, as they generate the narrative thrust that moves this book forward.

One of the essential features of mathematical practice is the posing of new questions. While students in K–12 and undergraduate courses generally spend most of their time answering questions posed by others, research mathematicians typically work on questions that they have asked for themselves. Learning to generate interesting questions is a nontrivial accomplishment! Where do good problems come from? Brown & Walter (2004) illustrate one paradigm for problem-posing by describing what they call the “what-if-not” strategy. In this framework, one begins by observing a fact that is known to be true under certain conditions, and then asks what would happen if one of the conditions is falsified or altered. To take an example that will be central to our study of polynomials in Chapter 2: it is well-known that if two polynomials p(x) = a_n x^n + … + a_0 and q(x) = b_m x^m + … + b_0 (with all coefficients real numbers) have identical graphs, then the degrees of the polynomials must be the same, and in fact the polynomials must actually have all the same terms, i.e. a_k = b_k for all k. But what if the coefficients are not real numbers? What if, instead, they belong to some other ring or field, like ℤ_n? Suddenly everything changes, and it is possible for two different polynomials to nevertheless have identical graphs (over ℤ_2, for instance, the distinct polynomials x and x² take the same value at both elements of the ring, and so have identical graphs). On the other hand, other well-known facts of the secondary curriculum still turn out to be true, even in the more generalized setting. Asking the question “What if the coefficients are not real numbers?” turns out to be an enormously fruitful strategy for generating new questions.

A more elaborated model of problem-posing is described in Weiss (2009) and Weiss & Moore-Russo (2012), in which an effort is made to articulate the components of a mathematical sensibility. In this context, the phrase “mathematical sensibility” includes a network of “generative moves” that mathematicians can use to produce new questions or conjectures from existing knowledge. Some examples of generative moves are weakening a hypothesis, generalization, strengthening a conclusion, specialization, and considering the converse. Throughout this book, these generative moves are used to raise new questions whenever a new result or counterexample is established. In addition to the generative moves, the mathematical sensibility also includes a set of “categories of perception and appreciation”—different ways of describing and ascribing value to a piece of mathematical work. In general these categories of perception and appreciation come in dialectical pairs (Weiss 2009, pp. 62–75): for example, mathematics can be appreciated because it is useful or abstract; because it is complicated or simple; or because it either surprises us or confirms our expectations.
In addition, sometimes when we describe a body of mathematical work, we perceive it as being unified by a coherent body of problem-solving techniques, while other times we place a high value on the organization of a body of mathematical work. We will have more to say on this latter category of appreciation—the Theory-Building disposition—in the next section.
0.6 The Theory-Building Disposition

Here, we turn to pay particular attention to one of the most important (but perhaps least-appreciated) elements of the mathematical sensibility: the disposition towards theory-building (Weiss, 2009; Weiss & Herbst, 2015). The Theory-Building disposition attaches value not only to the individual elements of a mathematical theory—its theorems, definitions, proofs, and so on—but also to the organization of those elements into an interconnected structure. For example, consider the following two theorems, both mainstays of the secondary Algebra 2 course:
The Remainder Theorem. If p(x) is any polynomial, and a is any real number, then on dividing p(x) by x − a we obtain a quotient, q(x), and a remainder, R, with R = p(a).

The Factor Theorem. If p(x) is any polynomial, and a is any real number, then x − a is a factor of p(x) if and only if p(a) = 0.

Virtually all secondary curricula teach these two theorems, and show students their utility; some textbooks even include a proof (but not most—see Thompson, Senk & Johnson, 2012). Yet from the point of view of the Theory-Building disposition, what is important about these two theorems is not only that they are both true, but also that they are equivalent to one another (Weiss 2016). That is, each one can be proved as a consequence or corollary of the other; neither one is more or less powerful, or more or less general, than the other. (A sketch of this equivalence is given below, just before the proof of the slope criterion.)

Mathematical texts are traditionally organized in a logical structure that takes certain properties as unproven axioms or postulates, and deduces other properties as theorems. It is noteworthy that the very act of re-organizing an existing theory into a different structure or on a different foundation may sometimes be regarded as a significant mathematical accomplishment; even if none of the results themselves are “new”, there is value in revealing previously unrecognized links or relationships between them. And yet, in standard secondary curricula, the structural relationships among the parts are nearly entirely absent. High school mathematics is, by default, organized around the work and value of problem-solving, rather than theory-building.

How, then, shall the secondary mathematics curriculum be organized into a theory? What shall we take as its postulates, and what should be its theorems? In what order should the theorems be proved so as to produce a theory that is parsimonious without being bewildering? How to create coherence from what is normally a disconnected body of facts? To illustrate this problem, consider the following important property, familiar to all teachers and students of first-year Algebra:

If two lines (neither one vertical) in the plane are perpendicular, then their slopes are opposite reciprocals of each other.

There is little doubt that this property is important. But what kind of “thing” is it? Should we adopt this as the definition of perpendicular lines? If so, we would need to eventually somehow connect this with the more conventional notion that perpendicular lines are lines that intersect at a 90° angle. On the other hand, if this is not the definition of perpendicular lines, then perhaps it ought to be a theorem. If so, how can we prove it?
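Before taking up that question, it is worth pausing to sketch the equivalence of the Remainder and Factor Theorems claimed above. This is a minimal sketch, and it assumes only the division algorithm for polynomials: that dividing p(x) by x − a produces a unique quotient q(x) and a unique constant remainder R, so that p(x) = (x − a)·q(x) + R. Granting the Remainder Theorem, we have R = p(a), so p(a) = 0 precisely when p(x) = (x − a)·q(x); that is, x − a is a factor of p(x) if and only if p(a) = 0, which is the Factor Theorem. Conversely, granting the Factor Theorem, apply it to the polynomial g(x) = p(x) − p(a): since g(a) = 0, we may write p(x) − p(a) = (x − a)·q(x), and therefore p(x) = (x − a)·q(x) + p(a). By the uniqueness of the remainder, the constant p(a) is the remainder on dividing p(x) by x − a, which is the Remainder Theorem.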
Figure 0.1 If lines ℓ1 and ℓ2 are perpendicular, then the triangles ΔOPR and ΔQOS must be congruent.
One possible proof might run (in abbreviated form) as follows: Suppose two perpendicular lines ℓ1 and ℓ2 have slopes m1 and m2, respectively. Assume without loss of generality that m1 > 0 and m2 < 0, and that the lines intersect at the origin O. Now let P be the point (1, m1), which (by the definition of slope) lies on ℓ1, and let Q be the point (1/m2, 1), which (for the same reason) lies on ℓ2. Finally, let R be the point (1, 0) and let S be the point (1/m2, 0). (Refer to Figure 0.1.) Now observe that, because the lines are perpendicular, ∠POR and ∠QOS are complementary. On the other hand, ∆OPR is a right triangle, and therefore ∠POR and ∠OPR are also complementary. Therefore, ∠QOS ≅ ∠OPR. Since OR = QS = 1, the two triangles ∆OPR and ∆QOS are congruent by AAS. It follows that PR = OS, i.e., m1 = −m2⁻¹ (remember, m2 is negative by assumption); this completes the proof.

Observe just how much background must be established before this proof can make sense. All of the following facts are necessary prerequisites:

• That every line is characterized by a number called the slope, and that this number can be used to relate the coordinates of two different points on the same line;
• That the two acute angles of a right triangle are complementary;
• That if two angles are both complementary to a third angle, they are congruent to each other;
• That two triangles can be proven congruent by establishing a correspondence between their vertices and sides in such a way that two pairs of angles and one pair of sides are congruent (AAS congruence).
Nor is the list above exhaustive. To really make this rigorous, we would need to define what certain key words (e.g., “complementary”, “perpendicular”, “slope”) mean. In a coherent theory, all of this would need to be done before we can set out to prove our theorem. Here we see the problem: we are trying to prove a basic theorem of Algebra, but most of the material we need in order to prove it comes from Geometry, which (in the United States) is normally
studied the year after Algebra. No wonder results like this are usually not proven in high school—when could they be?

It might be objected that we have chosen an example proof that relies particularly heavily on techniques of Geometry, and certainly other proofs exist. Here is another: Let ℓ1 and ℓ2 be any two lines (intersecting, but not necessarily perpendicular). Once again we may assume without loss of generality that the lines intersect at the origin O(0, 0). Choose any point P(a, b) on ℓ1; then, if the slope of ℓ1 is m1, we have b = am1. Similarly, if Q(c, d) is a point on ℓ2, and if the slope of ℓ2 is m2, then d = cm2. Now consider the lengths of the sides of triangle ∆OPQ (see Figure 0.2). We have

OP = √(a² + (am1)²) = a√(1 + m1²)
OQ = √(c² + (cm2)²) = c√(1 + m2²)
PQ = √((c − a)² + (cm2 − am1)²)

Now by the Pythagorean Theorem (and its converse), ∠POQ is a right angle if and only if OP² + OQ² = PQ², that is,

a²(1 + m1²) + c²(1 + m2²) = (c − a)² + (cm2 − am1)²

Expanding out both sides of this equation we find that it is equivalent to

a² + a²m1² + c² + c²m2² = c² − 2ac + a² + c²m2² − 2acm1m2 + a²m1²
Figure 0.2 Alternate diagram used for proving that two lines are perpendicular if and only if their slopes are opposite reciprocals.
Cancelling out common terms, we end up with 0 = 2ac + 2acm1m2, which, if a and c are both nonzero, leads to m1m2 = −1. Thus, the lines are perpendicular if and only if the slopes are opposite reciprocals.

This proof is less overtly geometric than the previous one, but it, too, requires a great deal of prerequisite knowledge. In order to write such a proof, one needs to know:

• That every line is characterized by a number called the slope, and that this number can be used to relate the coordinates of two different points on the same line;
• That the distance between two points can be computed using the Pythagorean Theorem;
• That the converse of the Pythagorean Theorem is true.
These, in turn, rely on other, unstated assumptions—in particular, that the plane has been coordinatized in such a way that the x- and y-axes are perpendicular, and that the horizontal and vertical scales are equal, so that the distance between two points on a horizontal (or vertical) line corresponds to the difference between their x- (or y-) coordinates. All of this is normally taken for granted; however, as we will see in Chapter 4, these assumptions need not always hold, and there are good reasons for considering coordinate systems in which they do not.

The goals of this book, then, are to weave all (or at least much) of secondary mathematics into a single, coherent narrative; to organize the main results into a coherent mathematical theory; to see how far and how well that theory can be generalized; and to explore how the curriculum has evolved over the past century.
0.7 Structure of the Book

This book is suitable for use in either a two-semester undergraduate sequence or in a single-semester graduate seminar. Because it is intended for use by diverse audiences in a variety of settings, we offer here some suggestions for how abridged versions of the text may be used in a setting in which an abbreviated course is required.

A short version of the course, suitable for use in a one-semester undergraduate capstone course, might include Chapters 1 and 2, skip over Chapter 3, and conclude with Chapter 4. If time permits, Chapter 6 is independent of most of what precedes it, and could be included as well. Instructors who wish to emphasize real analysis should be sure to include Chapters 1 and 5, which rely heavily on the completeness property of the reals; in contrast, Chapters 2, 3 and 6 are primarily algebraic in flavor. Both Chapters 4 and 6 have a particularly geometric flavor. Students interested in the history of mathematics and math curricula should pay particular attention to §§1.1–3, 4.1–3, 4.8, 5.1, the second half of 5.3, and 6.1–2. These sections, together with the recommended reading at the end of each chapter, would be a suitable basis for a graduate-level math ed seminar.

Exercises may be found not only at the end of each section, but also (in the case of some of the longer chapters) within the sections, where they serve to break up the exposition into smaller units. Projects related to the recommended reading are included at the end of each chapter.
Notes

1. Klein will reappear in this textbook in section 4.8.
2. Some important first steps in studying MKTT have been taken by Zopf (2010) and Castro Superfine & Li (2014).
References

Ball, D. L., Thames, M., & Phelps, G. (2008). Content knowledge for teaching: What makes it special? Journal of Teacher Education, 59(5), 389–407.
Begle, E. (1979). Critical variables in mathematics education: Findings from a survey of empirical literature. Washington, DC: Mathematical Association of America.
Brown, S., & Walter, M. (2004). The art of problem posing (3rd ed.). Hillsdale, NJ: Lawrence Erlbaum.
Castro Superfine, A., & Li, W. (2014). Exploring the mathematical knowledge needed for teaching teachers. Journal of Teacher Education, 65(4), 303–314.
Conference Board of the Mathematical Sciences (CBMS). (2001). The mathematical education of teachers. Providence, RI: American Mathematical Society.
Conference Board of the Mathematical Sciences (CBMS). (2012). The mathematical education of teachers II. Providence, RI: American Mathematical Society.
Cox, D. C., Chesler, J., Beisiegel, M., Kenney, R., Newton, J., & Stone, J. (2013). The status of capstone courses for pre-service secondary mathematics teachers. Issues in the Undergraduate Mathematics Preparation of School Teachers, 4.
Klein, F. (2004). Elementary mathematics from an advanced standpoint. Translated from the third German edition by E. R. Hedrick and C. A. Noble. Mineola, NY: Dover Publications.
Thompson, D. R., Senk, S. L., & Johnson, G. J. (2012). Opportunities to learn reasoning and proof in high school mathematics textbooks. Journal for Research in Mathematics Education, 43(3), 253–295.
Weiss, M. (2009). Mathematical sense, mathematical sensibility: The role of the secondary Geometry course in teaching students to be like mathematicians. Unpublished doctoral dissertation. University of Michigan, Ann Arbor, MI.
Weiss, M. (2016). Factor and Remainder Theorems: An appreciation. Mathematics Teacher, 110(2), 153–156.
Weiss, M., & Herbst, P. (2015). The role of theory building in the teaching of secondary geometry. Educational Studies in Mathematics, 89(2), 205–229.
Weiss, M., & Moore-Russo, D. (2012). Thinking like a mathematician. Mathematics Teacher, Focus Issue on Fostering Flexible Mathematical Thinking, Nov. 2012.
Wu, H. (1999). Preservice professional development of mathematics teachers. Unpublished manuscript. Retrieved July 7, 2016 from https://math.berkeley.edu/~wu/pspd2.pdf.
Wu, H. (2011). The mis-education of mathematics teachers. Notices of the AMS, 58(3), 372–384.
Zopf, D. A. (2010). Mathematical knowledge for teaching teachers: The mathematical work of and knowledge entailed by teacher education.
1 Numbers and Number Systems
“Happy families are all alike; every unhappy family is unhappy in its own way.” — Anna Karenina by Leo Tolstoy
1.1 Old and New Math

The story of how mathematics is taught in the United States is a long and complicated one, with many moments of crisis and change. Mathematicians, teachers, teacher educators, and researchers in mathematics education often speak of different “eras” in mathematics education: e.g., the Standards era of the late 1980s and early 1990s, the Math Wars era of the late 1990s, and most recently the Common Core era of the second decade of the 21st century. But one period in mathematics education stands out among the others for its comprehensiveness, its audacity, its notoriety, and—depending on one’s perspective—either its stunning collapse, or its long-lasting impact. The so-called New Math initiatives of the mid-20th century set out to completely transform not only how school mathematics was taught, but more profoundly to completely re-conceptualize what “school mathematics” even was. Indeed, it is not too much of an oversimplification to divide the story of math education in the United States (and, to an extent, worldwide) into just three eras: before, during, and after the New Math.

Although the New Math era is widely associated with the mid-1960s, its roots go back to the 1940s and the immediate postwar period. As Europe recovered from the destruction of World War II and the global balance of power reshaped itself along the lines of what would come to be known as the Cold War, many in the West looked to science and technology as the hope of humanity. In its first decade, the so-called “Atomic Age” was still viewed with a Utopian enthusiasm, with scholars and scientists expected to lead the way into the future. In the United States, the middle class flourished, the so-called “GI Bill” led to a massive surge in college enrollment, and for the first time higher-level mathematics topics (advanced Algebra up to beginning Calculus) found their way into the mainstream high school curriculum of ordinary students, rather than just a small elite1. This massive social change coincided with a fear, in the West, of being technologically outpaced by the Soviet Union—a fear that took physical form when the Soviets launched Sputnik, the first artificial satellite, into orbit in October 1957. The prospect of an armada of orbital Russian spy satellites (and perhaps even futuristic space-based weapons) trained on the USSR’s rivals struck fear into the hearts of American experts and laypeople alike, and triggered the start of the “Space Race”.

In this context, a broad-based coalition of mathematicians, scientists, educational experts and policy-makers saw an urgent need for a massive overhaul of science education, and especially mathematics education, in the United States. Mathematics, it was argued, had gone through a dramatic transformation since the beginning of the 20th century, with its
very foundations rewritten and reinvented along entirely new, modern lines; but despite this, critics said, school mathematics was still stuck in the mid-19th century. Entire new disciplines had been created within mathematics (set theory, abstract or “modern” algebra, non-Euclidean geometry and many others), and with them new conceptions of mathematics and a new language for describing it had arisen—but the mathematics that students encountered in school was essentially unchanged from what their grandparents had studied. The time for reinventing school mathematics for the modern age had come.

At the center of the reforms that eventually led to the New Math was the School Mathematics Study Group (SMSG), an academic think tank funded by the National Science Foundation and chaired by mathematician Edward Begle. Beginning in 1958 and continuing for nearly two decades, the SMSG produced dozens of volumes of curriculum materials2, ran teacher training institutes around the country, and conducted research on adoption and implementation of “New Math” curricula. Although the SMSG textbooks themselves were not intended to be commercially sold, they were widely disseminated and circulated as models to be imitated and improved upon, and by the late 1960s SMSG-inspired curricula were being marketed by all major publishers and were in use around the country.

To say that these curricula were ambitious would be a dramatic understatement. Inspired by the mathematicians of the Bourbaki movement3, New Math curricula stressed mathematical structures as a key topic. The language and notation of sets was introduced as early as 1st grade; students were told to distinguish the word number (referring to a quantity) from numeral (referring to the representation of a number in written form), and to focus on properties such as order and equivalence. In upper elementary grades, students wrote numbers in different base systems (e.g. base 6 and base 2 in addition to the familiar base 10). At the middle-grades level, students were taught to distinguish between an “open sentence” (like 3x + 5 = 17) and an “equation” (such as 2(3 + 5) = 6 + 10).

So strong was the emphasis on set theory and precise language that some felt the New Math went too far. Richard Feynman4, who served on the California State Curriculum Commission in 1964, objected to what he saw as the excesses of the New Math’s emphasis on language:

In regard to this question of words, there is also in the new mathematics books a great deal of talk about the value of precise language—such things as that one must be very careful to distinguish a number from a numeral and, in general, a symbol from the object that it represents… For example, one of the books pedantically insists on pointing out that a picture of a ball and a ball are not the same thing. I doubt that any child would make an error in this particular direction. It is therefore unnecessary to be precise in the language and to say in each case, “Color the picture of the ball red,” whereas the ordinary book would say, “Color the ball red.”… Although this sounds like a trivial example, this disease of increased precision rises in many of the textbooks to such a pitch that there are almost incomprehensibly complex sentences to say the very simplest thing.
In a first-grade book (a primer, in fact) I find a sentence of the type: “Find out if the set of the lollypops is equal in number to the set of girls”—whereas what is meant is: “Find out if there are just enough lollypops for the girls.”… If we would like to, we can and do say, “The answer is a whole number less than 9 and bigger than 6,” but we do not have to say, “The answer is a member of the set which is the intersection of the set of those numbers which is larger than 6 and the set of numbers which are smaller than 9.” It will perhaps surprise most people who have studied these textbooks to discover that the symbol ∪ or ∩ representing union and intersection of sets, and the special use of the brackets {} and so forth, all the elaborate notation for sets that is given in these books, almost never appear in any writings in theoretical physics, in engineering, in
business arithmetic, computer design, or other places where mathematics is being used. I see no need or reason for this all to be explained or to be taught in school. It is not a useful way to express one’s self. It is not a cogent and simple way. It is claimed to be precise, but precise for what purpose?5

Perhaps nothing illustrates in more striking fashion the broad ambition of the New Math than the table of contents from a 1968 high school textbook, Algebra 2 and Trigonometry: Theory and Application. Published by a division of Doubleday Books as part of the New Laidlaw Mathematics Program for Secondary Schools, this textbook promised to provide “an up-to-date program that represents the best of contemporary thinking”, and claimed that a “practical, eminently modern approach to mathematics will help students develop logical mathematical thinking, understand mathematical structure, and build precise language.” The table of contents is reproduced (in abbreviated form) in Table 1.1 below, with subtopics listed for the early chapters only.

Table 1.1 Chapter and partial topic list from Algebra 2 and Trigonometry: Theory and Application, New Laidlaw Mathematics Program for Secondary Schools, 1968.

Chapter 1. Mathematical Reasoning. (pp. 14–65) Inductive reasoning. Counterexamples. Deductive reasoning. Negation. Composite statements. Negation of composite statements. Special emphasis on conditionals and biconditionals. Other wordings for conditions and biconditionals. Logical inference. Mathematical proof.
Chapter 2. Mathematical Structure: The Group. (pp. 66–105) Operations. Closure property. Identity property. Inverse property. Associative property. A group. A commutative group. Application of the group concept.
Chapter 3. Mathematical Structure: The Field. (pp. 106–151) Distributive property. A field. The rational ordered field. Periodic and non-periodic decimals. Irrationals and rational approximations. Density of the rationals and the rational number line. Completeness of the reals and the real number line. The complete ordered field (the real ordered field).
Chapter 4. Algebraic Phrases and Sentences. (pp. 152–215) Mathematical phrases. First-degree open sentences with one variable. Open sentences involving absolute value. Applications of open sentences to problem solving. Positive integral exponents and polynomials. Factoring polynomials. Rational expressions. Complex fractions. Integral exponents. Introduction to rational exponents. More about rational exponents and radicals. Application of radical expressions.
Chapter 5. Relations and Functions. (pp. 216–261)
Chapter 6. Linear Polynomial Functions and Coordinate Geometry. (pp. 262–293)
Chapter 7. Systems of first-degree open sentences. (pp. 294–325)
Chapter 8. Quadratic polynomial functions. (pp. 326–367)
Chapter 9. Conic sections and systems involving second-degree open sentences. (pp. 368–405)
Chapter 10. Exponential and logarithmic functions. (pp. 406–451)
Chapter 11. Circular functions. (pp. 452–499)
Chapter 12. Application of circular functions. (pp. 500–535)
Chapter 13. Inverse circular functions. (pp. 536–557)
Chapter 14. The complex number system. (pp. 558–597)
Chapter 15. Higher-degree polynomial functions. (pp. 598–635)
Chapter 16. Sequences, mathematical induction, and the Binomial Theorem. (pp. 636–678)

To the modern reader acquainted with similarly-titled textbooks from our own era, many of the topics in the Laidlaw Algebra 2 and Trigonometry text will seem very familiar. There are systems of equations (albeit called “open sentences”), exponential and logarithmic functions, complex numbers, and so on. But the first third of the book—Chapters 1–4—may
come as something of a shock. Groups? Fields? Density of the rationals? These are topics one would expect to find in an upper-division college-level course for mathematics majors. Was this really meant for a high school classroom?
1.2 Back to Basics

It will come as little surprise to anyone who has lived through the more recent waves of mathematics (and other curriculum) reforms over the past several years, and the sometimes intense political controversies that have accompanied them, to learn that the New Math met with an intense and prolonged backlash from parents, teachers, politicians and other stakeholders. Critiques ran the gamut, from the gentle mockery of comedian Tom Lehrer’s song New Math6 and whimsical references in Charles Schulz’s Peanuts cartoon7, to the no-holds-barred attack of the best-selling 1973 book Why Johnny Can’t Add: The Failure of the New Math, written by New York University mathematics professor Morris Kline. Taking its cue in large part from Feynman’s 1965 critique, Kline’s book opens with a caricature of a New Math classroom:

Let us look into a modern mathematics classroom. The teacher asks, “Why is 2 + 3 = 3 + 2?” Unhesitatingly the students reply, “Because both equal 5.” No, reproves the teacher, the correct answer is because the commutative law of addition holds. Her next question is, Why is 9 + 2 = 11? Again the students respond at once: “9 and 1 are 10 and 1 more is 11.” “Wrong,” the teacher exclaims. “The correct answer is that by the definition of 2,

9 + 2 = 9 + (1 + 1)

But because the associative law of addition holds,

9 + (1 + 1) = (9 + 1) + 1

Now 9 + 1 is 10 by the definition of 10 and 10 + 1 is 11 by the definition of 11.”

Evidently the class is not doing too well and so the teacher tries a simpler question. “Is 7 a number?” The students, taken aback by the simplicity of the question, hardly deem it necessary to answer; but the sheer habit of obedience causes them to reply affirmatively. The teacher is aghast. “If I asked you who you are, what would you say?” The students are now wary of replying, but one more courageous youngster does do so: “I am Robert Smith.” The teacher looks incredulous and says chidingly, “You mean that you are the name Robert Smith? Of course not. You are a person and your name is Robert Smith. Now let us get back to my original question: Is 7 a number? Of course not! It is the name of a number. 5 + 2, 6 + 1, and 8 − 1 are names for the same number. The symbol 7 is a numeral for the number.”

The teacher sees that the students do not appreciate the distinction and so she tries another tack. “Is the number 3 half of the number 8?” she asks. Then she answers her own question: “Of course not! But the numeral 3 is half of the numeral 8, the right half.”

The students are now bursting to ask, “What then is a number?” However, they are so discouraged by the wrong answers they have given that they no longer have the heart to voice the question. This is extremely fortunate for the teacher, because to explain what
a number really is would be beyond her capacity and certainly beyond the capacity of the students to understand it. And so thereafter the students are careful to say that 7 is a numeral, not a number. Just what a number is they never find out.

The teacher is not fazed by the pupils' poor answers. She asks, "How can we express properly the whole numbers between 6 and 9?" "Why," one pupil answers, "just 7 and 8." "No", the teacher replies. "It is the set of numbers which is the intersection of the set of whole numbers larger than 6 and the set of whole numbers less than 9."

Thus are students taught the use of sets and, presumably, precision.8

It would probably be giving Kline's book too much credit to claim that it single-handedly ended the New Math, but it is certainly correct that it served as a punctuation mark at the end of the era. By the late 1970s the New Math had virtually disappeared from classroom use. The era of the New Math had given way to what came to be known as the Back to Basics movement. But while many of the New Math's signature innovations (e.g. the emphasis on the language of set theory, the practice of converting numbers from one base to another, the language of "groups" and "fields" at the secondary level) have disappeared9, some of the less controversial topics that entered the curriculum with the New Math survived the purge that followed. These topics survive in the secondary curriculum to this day, disconnected from the movement that spawned them, like fossilized footprints from some long-extinct creature. In this section we briefly identify some of these "remnants" of the New Math, and ask the questions: What do these topics have in common? Is there a common, unifying thread that ties them together?

Remnant 1: Inequalities. It may come as a shock to the modern reader, but the study of inequalities was not a common part of K-12 curricula prior to the New Math. Even the symbols < and > for "less than" and "greater than" were rarely found in textbooks below the level of Calculus. One of the signature innovations of the New Math was to introduce an emphasis on the ordering of real numbers even at the early elementary level; this theme continued throughout the curriculum, and survives to this day.

Remnant 2: Names of number sets. Virtually every Algebra 1 or Algebra 2 textbook in use today contains, usually somewhere in the first chapter, a Venn diagram something like the one shown below. This diagram shows the relationships among different types of numbers that are encountered in K-12 mathematics: The set of real numbers (usually denoted with a stylized letter R, such as ℝ or R), which includes both the rationals (ℚ) and irrationals10; the set of rationals in turn contains the integers (ℤ); the set of integers contains both positive and negative whole numbers11, as well as zero. Later on—towards the end of an Algebra 2 course, say—this diagram may be supplemented by the inclusion of the imaginary numbers, with a still-larger superset identifying the complex numbers enclosing the reals and imaginaries. Diagrams such as these always contain the following two key features:

• Attention is given to the sets of numbers, rather than simply to properties of individual numbers.
• The visual representation of the sets highlights the hierarchical structure of these number systems, with some sets fully enclosed within other sets.
Figure 1.1 Structure of the real numbers and its main subsets. (The figure shows a nested diagram: the real numbers contain the rationals and the irrationals such as √2, π, and e; the rationals contain the integers, which in turn contain the whole numbers, that is, the natural numbers 1, 2, 3, … together with 0, as well as the negative integers −1, −2, −3, …; numbers such as 1/2, 2/3, 2.3, −4.6, and −5/2 are rational but not integers.)
Again it may come as a surprise to the modern reader that diagrams such as these were entirely absent from mathematics textbooks before the era of the New Math. To be clear, this is not to say that the individual words "integer", "rational", "real", etc. were absent from these texts. On the contrary, one would certainly find these words in use—but they were used as descriptors of individual numbers, rather than as names of sets. That is to say, a textbook might say something like "a rational number can be written as a ratio of two whole numbers", or "the product of two negative numbers is positive", or "√2 is irrational". But a textbook would not say "the set of rational numbers is closed under multiplication", or "the set of rational numbers is a subset of the set of real numbers". The emphasis on the sets themselves as objects of interest was perhaps the signature characteristic of the New Math, and it survives (in a somewhat vestigial form) in contemporary curricula.

Remnant 3: Properties and structures. Once we have identified sets of numbers as the key objects of interest, it is only natural to start describing the structures of these sets, in particular with reference to the operations that can be performed within and among the sets. Thus, contemporary curricula list various properties of these sets under the operations of addition and multiplication, for instance:

• ℤ, ℚ, and ℝ are all closed under addition and multiplication.
• Each of these sets contains an additive identity (0) with the property that a + 0 = 0 + a = a for any a.
• Each of these sets also contains a multiplicative identity (1) with the property that a ⋅ 1 = 1 ⋅ a = a for any a.
• Addition and multiplication are both commutative, i.e. a + b = b + a and a ⋅ b = b ⋅ a.
• Addition and multiplication are both associative, i.e. a + (b + c) = (a + b) + c and a ⋅ (b ⋅ c) = (a ⋅ b) ⋅ c.
• Multiplication distributes over addition, i.e. a ⋅ (b + c) = a ⋅ b + a ⋅ c.
…and so forth. At the risk of repetition, the modern reader may be surprised to find that these properties were not normally identified by pre-New Math era textbooks. This is not to say that those older textbooks did not use the properties; on the contrary, it is virtually impossible to simplify a complicated algebraic expression without repeated use of these properties. But for the most part, these properties were used without being named. And this is not all that different from the situation in the modern era: In current curriculum materials, these properties are usually presented in the first or second chapter of the textbook, but after that their names are mostly forgotten. Again, the claim here is not that the properties are not used throughout the year (they certainly are), but rather that they are not mentioned. For example, when one adds two polynomials like 2x² + 3x + 7 and 5x² − 8x, one would typically not see any explicit attention to when the commutative, associative, and distributive properties were used; one would simply rearrange and combine like terms as needed without attending to why one may do that12.

Given that so many of the innovations of the New Math have fallen by the wayside, it is reasonable to wonder why these three remnants did not. If we no longer expect students to learn the language and notation of sets, why is it so important for them to know the names of specific sets of numbers? Why do we think it is important for students to learn the names of the properties of addition and multiplication, but not important for them to learn that those properties make a group and a field? If the distinction between number and numeral is no longer regarded as valuable, why are the symbols < and > so enduring? It's probably not possible to answer the above questions definitively. However, the rest of this chapter is devoted to a development of the proposal that each of these three remnants is necessary for, and tightly intertwined with, the need to understand a single question: What are real numbers?
1.3 What are Real Numbers?

Typically, textbooks answer this question with a definition or description along one of the following lines:

• Examples of real numbers include not only integers (like 5 and −3) and rational numbers (like 2/3 and −7/2) but also numbers like π and √2.
• The set of real numbers includes all rational and irrational numbers.
• A real number is a (possibly infinite) decimal.
• A real number is a number that can be represented by a (possibly infinite) decimal, or by a point on a number line.
• A real number is a limit of a convergent sequence of rational numbers.

As descriptions go, none of these are wrong—at least not exactly—but none of them is really an adequate answer to the question. Let's take a look at these proposed answers one at a time.

Examples of real numbers include not only integers (like 5 and −3) and rational numbers (like 2/3 and −7/2) but also irrational numbers like π and √2. This is an example of what is sometimes called an "ostensive definition". In an ostensive definition, one "defines" a class or category of things by identifying several examples of the class or category. Here we place the word "defines" in quotes, because an ostensive definition arguably does not actually define anything; it merely exemplifies. Suppose, for instance, we did not know what the word
"pastry" meant. If someone were to take us to a bakery and point at a donut, a cheese danish, and an almond croissant, we might feel that we have an approximate notion of what "pastries" are—but still feel unable to definitively answer a question like "Is a bagel a pastry?" or "Is a slice of pumpkin bread a pastry?"13, 14. Similarly, this definition tells us that integers and rational numbers belong to ℝ, as do irrational numbers "like π and √2"—but the word "like" is doing an awful lot of work here. Just which numbers are "like" the two named examples? Is −1 "like" √2? If not, why not? It turns out to be awfully difficult to answer this question without invoking some other, independent notion of "real number". But of course that is precisely the idea that we are trying to define!

The set of real numbers includes all rational and irrational numbers. This definition is found, for example, in a popular 1911 textbook15, which explains (p. 93, emphasis in original):

All the numbers of algebra are in one or the other of two classes, real numbers and imaginary numbers. Real numbers are of two kinds, rational numbers and irrational numbers. A rational number is a positive or a negative integer, or a number which may be expressed as the quotient of two such integers. Any real number which is not a rational number is an irrational number.

On close reading, this definition reveals itself as circular: at the beginning it tells us that there are two kinds of real numbers, rational and irrational; then it tells us that any real number that is not rational is irrational. If we knew already what real numbers were, this definition would be quite useful as a way of defining what irrational numbers are; conversely if we knew what irrational numbers were, this definition would tell us what real numbers are. But as with the previous attempted definition, it turns out to be extremely difficult to say what an "irrational number" is without a prior notion of "real number".

A real number is a (possibly infinite) decimal. This definition avoids appealing to any prior understanding of other types of numbers, and focuses solely on what might be called the form (or perhaps a form) of a real number. As such, it neatly sidesteps the circularity problem of the previous attempt. However, it could (potentially) lead to a too-literal identification of what real numbers look like with what they are. For example, the decimals 0.999… and 1.000… are unequivocally and unambiguously different decimals, but they nevertheless label the same real number16. That is to say, if we want to define or identify real numbers by their decimal representation, we need to also have some notion of "equivalence" of decimals.

A real number is a number that can be represented by a (possibly infinite) decimal, or by a point on a number line. This definition seems to avoid the pitfall of the previous definition, while simultaneously getting closer to the question of what a real number actually is, rather than simply classifying the set of reals into categories. But it, too, falls short. For one thing, what exactly does it mean to "represent a number" by a decimal or a point on a number line? How are we supposed to tell whether a number can be represented by an infinite decimal (whatever that means) or by a point on a number line (whatever that is)? Are there numbers that can't be represented in these ways? But the problems with this definition lie at an even deeper level.
Consider how the sentence begins: “A real number is a number that…” In other words, if you want to know what a real number is, begin with the set of numbers, and then check to see which numbers meet a certain collection of conditions. But what exactly are these “numbers” we begin with? In the end, all this definition seems to tell us is that real numbers are numbers. A real number is a limit of a convergent sequence of rational numbers. Now we are really getting somewhere! Assuming that we can define what rational numbers are (not too hard),
and what a sequence is (pretty easy), we are fairly well along. The only problem is explaining what convergence means. The normal definition of "convergent sequence" reads something like this: A sequence (sₙ) is said to converge to L if for any ε > 0 there exists N such that if n > N then |sₙ − L| < ε. In this definition, L is the limit of the sequence (sₙ). The problem here is much the same as in the previous definitions: in order to know whether a given sequence of rationals is convergent, you need to already have a limiting value L in mind, which you can then test to see if it meets the conditions of this definition. But if you don't know what real numbers are, where does the value of L come from? This definition, too, involves a subtle but inescapable circularity: it amounts to telling us that a real number is a real number (that satisfies some property).

By now it should be clear that defining real numbers is more challenging than it may seem. In fact, some textbooks from the pre-New Math era avoided the problem altogether. For example, Webster Wells's 1908 textbook A First Course in Algebra, discussing surds17, states that "a numerical expression involving surds is an irrational number." A footnote to that description remarks:

Note that we do not define irrational number. The two most important irrationals—π and e (the base of a system of logarithms)—have been proved not to involve surds.

In other words, Wells's 1908 text identifies expressions involving surds as irrational, and acknowledges that such expressions do not comprise the entirety of the set of irrationals, while at the same time expressly disavowing any attempt to define what, exactly, the full set of irrational numbers consists of.

The SMSG text Intermediate Mathematics (Part 1) adopts the definition "real numbers are decimals", but—like the Wells text—openly acknowledges the challenges inherent in trying to give a precise definition for "real number":

The real number system, which we shall call R, may be constructed with the decimal expressions playing the role of its numbers. To construct such a system, it is necessary to define relations "equality", "order", and operations "addition", "multiplication" for the decimal expressions. Having made satisfactory definitions for these relations and operations, we should be obliged to determine which… properties of the rational number system might be valid in R. This is no mean task. Indeed it is quite formidable. It required several thousand years of human thought to accomplish the transition from the natural number system to the real number system. This fact alone should convince anybody that the problem is not an easy one. (p. 73)

Regarding this, the Teacher's Commentary to the SMSG text explains:

Using decimal expressions to introduce real numbers involves a number of bothersome details which we try to "sweep under the carpet."… We go so far as to frame definitions for = and < in the part of R not in Q. We restrict ourselves to this subset to avoid discussing decimals in which the digit 0 repeats and which are all rational. For similar reasons we omit entirely any definition of 'sum' or 'product'… We believe students have a fairly good idea of what decimals are, so we say about as much as we think they can
take in this context and omit all the rest. We announce that R has all the Q properties, and take this announcement as license to invoke any of the Q properties we wish when working with real numbers throughout the rest of this book. (p. 48–49)

Note that the SMSG text stresses the importance of both the algebraic properties of the set of real numbers, as well as the order properties. Taking this approach to its ultimate conclusion, the Laidlaw Brothers Algebra 2 and Trigonometry devotes approximately 1/3 of its total page count to slowly building up to a definition that allows it to describe the set of real numbers as a complete ordered field (refer back to §1.1 for the table of contents and partial topics list from the Laidlaw textbook). But in order for this to serve as a description of the set ℝ, one needs to ensure that it uniquely characterizes one, and only one, numerical structure. That is to say: if we want to define or describe the set of real numbers in terms of its algebraic and order properties, we need to make sure that there are not many different sets that have the same properties. As we shall see, this is possible—and (in retrospect) this effort sheds light on the three remnants of the New Math curriculum that survive in contemporary curricula.
1.4 Characterizing the Reals

The remainder of this chapter is devoted to elaborating on the claim made at the end of the last section: that the introduction of some of the curricular innovations that have outlived the New Math—and, in particular, the introduction of inequalities, the explicit identification of formal properties, and the Venn-diagrammatic representation of number system hierarchies—can be explained on the grounds that they make it possible to answer the fundamental question: What is a real number? More precisely, the rest of this chapter is devoted to explaining and proving the following important theorem:

Real Number Characterization Theorem. Any two complete ordered fields are isomorphic.

If the significance of this theorem is not immediately clear to you, have no fear—the whole purpose of the remainder of this chapter is to (slowly and gradually) develop the ideas that will make its meaning visible. It will take some time to do so. Along the way, we will have to learn:

• what fields are (and how they differ from related algebraic structures, like groups and rings)
• what it means for a field to be ordered
• what it means when we say that an ordered field is complete
• what exactly it means to say that two algebraic structures are isomorphic
• and why the fact that any two complete ordered fields are isomorphic is a big deal.

Even before we answer the first four questions, let's sketch out a perspective on the last one: Why is the Characterization Theorem18 so important? The set of real numbers (denoted ℝ) is, we will see, an example of a complete ordered field. From the Characterization Theorem we know—or we will know, once we finally get around to proving it—that all complete ordered fields are, in some sense, "the same" (although what exactly "the same" means remains to be seen).
In this sense, complete ordered fields have something in common with the "happy families" of the opening sentence of Tolstoy's Anna Karenina, quoted at the beginning of this chapter: There are many different ways that numerical structures can fail to be complete ordered fields—they might not be fields, or they can be fields but not ordered, or they can be ordered but not complete—but all complete ordered fields are fundamentally equivalent. The practical takeaway of the Characterization Theorem is therefore that, from the perspective of mathematical structure, it doesn't matter what the real numbers actually are: if a room full of different mathematicians, each one working with a different definition of "real number", independently construct different systems that are all complete ordered fields, then—despite the differences that may exist in how they conceive of the objects under study—those different systems will all be equivalent to one another in every important respect. The Characterization Theorem thus is a powerful result that allows us to look away from the question "What are the real numbers?" and instead focus on the question "What are the properties of the set of real numbers?" The question "But what are real numbers, really?" then ceases to be a mathematical one and becomes a question for philosophers to worry about.

So let's take some tentative steps in the direction of the Characterization Theorem. The first order of business is to be explicit about what we mean when we say that a set is a field. The properties of a field can be expressed in several different (equivalent) forms; here is one of the most concise ways to define the notion.

Definition. A field is a set F, on which two binary operations (denoted ⊕ and ⊗) are defined, and which contains two distinguished elements, denoted θ and e, with the following three properties:

• (F1) With respect to ⊕, F is an abelian group with identity element θ
• (F2) With respect to ⊗, F − {θ} is an abelian group with identity element e
• (F3) The operation ⊗ distributes over ⊕, in the sense that for any three elements a, b, c ∈ F, a ⊗ (b ⊕ c) = (a ⊗ b) ⊕ (a ⊗ c).

The definition above has one major problem: (F1) and (F2) don't make any sense at all unless you already know what an abelian group is! As we will do with similar notions throughout this text, we will assume that the reader has encountered the definition of an abelian group at some point in the (possibly distant) past, but we do not assume that the reader has a full command of what this means or why it is important. So before we can really understand what a field is, we need to back up several steps and explain what an abelian group is. Once we have a few examples of that more fundamental notion, we will circle back around to discuss what a field is.

A few preliminary comments about notation are in order. Later in the book we will refer to the two field operations using the common symbols for addition and multiplication (+ and ⋅), and we will use the symbols 0 and 1, respectively, to stand for the two identity elements for those operations. At this point, though, we are deliberately avoiding those symbols, for a very specific reason: throughout much of this chapter we will consider examples of fields (and related structures) in which the "objects" are not numbers, as we usually think of them, but something else.
For example, at various points in this book the objects under consideration might be vectors, matrices, or rational functions, or still more exotic creatures19. The meaning of the “addition” and “multiplication” operations on these “number-like objects” will need to be carefully defined, and their properties checked; our intuitive ideas about
adding and multiplying numbers together may or may not carry over into these alternative regimes. The identity elements will play roles analogous to the familiar numbers 0 and 1, but they will not always be those numbers. Using different symbols is a way of reminding us not to take anything for granted, and not to over-generalize from our experiences with numbers20. On the other hand, while intuition and experience can't be counted on reliably, they do help orient us towards what we expect might be true, and they can help point us in the direction of the questions we need to be asking. For that reason it's helpful to use symbols that are at least vaguely reminiscent of the familiar ones; that's why we use θ for the additive identity (it looks a lot like the symbol for 0) and e for the multiplicative identity (it's the first letter in the German word ein, which means "one").
1.5 Groups

So, with those notational preliminaries out of the way, what does it mean to say that a set is a "group"? We have the following definition:

Definition. A group is a set G on which a binary operation ∗ is defined, and which contains a distinguished element i, with the following properties:

• (G1) The operation ∗ is associative; in other words, for any three elements a, b, c ∈ G, the combinations (a ∗ b) ∗ c and a ∗ (b ∗ c) are equal.
• (G2) The element i is "neutral" with respect to ∗; in other words, for any element a ∈ G, we have a ∗ i = a and i ∗ a = a.
• (G3) For any element a ∈ G, there is a "partner" or "inverse" element ā (with respect to the operation ∗ and the neutral element i) with the property that a ∗ ā = i and ā ∗ a = i.

The group G is said to be abelian if, in addition to the above three properties, we also have the following:

• (G4) The operation ∗ is commutative; in other words, for any two elements a, b ∈ G, the combinations a ∗ b and b ∗ a are equal.
As we will see in the examples and exercises, not all groups are abelian; in fact some of the most important groups are not! However, abelian groups play a special role in the theory of fields—take a look back at properties (F1) and (F2) if you are not sure why this is true.

A few more words about notation are in order: As the examples below will show, some group operations are best thought of as "like addition", while others are best thought of as "like multiplication"; in order not to bias our intuition, when we want to talk about groups in general, we use the symbol ∗, because the asterisk somewhat resembles a multiplication operator (×) superimposed on an addition symbol (+). As for the neutral element, we use the letter i for identity21 in order to avoid committing to either the symbol for an additive identity (θ, which is itself a way of avoiding the symbol 0) or a multiplicative identity (e, which is a surrogate for 1). We will say more about the notation for inverses below.

Example. Consider the set {1, 2, 3, 4, 5, 6}. We can define addition on this set "clockwise": that is, imagine the numbers 1 through 6 arranged on the face of a circular clock, so that as we count upwards we wrap around back to 1 every time we pass 6. On such a clock, for example, we would say 5 + 4 = 3, because if we start at 5 and advance 4 spaces we end up at 3 (see Figure 1.2). This set is commonly22 denoted ℤ₆ and is called "the integers modulo 6." The following table shows the result of "adding" (in this sense) any two elements of the set:

Figure 1.2 Illustrating 5 + 4 = 3 in ℤ₆.

+ | 1  2  3  4  5  6
--+-----------------
1 | 2  3  4  5  6  1
2 | 3  4  5  6  1  2
3 | 4  5  6  1  2  3
4 | 5  6  1  2  3  4
5 | 6  1  2  3  4  5
6 | 1  2  3  4  5  6

Addition in ℤ₆

Is this set a group with respect to the addition operation? Let's observe a few things about this table:

1. Notice that the last column and last row of the table exactly match the column and row labels. Symbolically, this matching can be expressed by saying that for any x ∈ ℤ₆, x + 6 = 6 + x = x. This means that 6 is an additive identity, so (G2) is verified, with i = 6. (In terms of the clock diagram, the equation x + 6 = x simply says that if you start at any position on the clock and go one full loop around it, you end up back where you started.)
2. Imagine a diagonal line from the top-left corner through the bottom-right corner of the table, and observe that the table is symmetric across that line. This reflection symmetry can be expressed in symbols by saying that for any two elements a, b ∈ ℤ₆, a + b = b + a. In other words, (G4) holds.
3. Notice also that the identity element (6) appears in every row and every column. This means that for every x ∈ ℤ₆, there is an element x̄ with the property that x + x̄ = 6. This element x̄ is the additive inverse promised by (G3).
4. Is this operation associative? It is difficult to tell directly from the table, but in fact it is true that (a + b) + c = a + (b + c) for any three elements a, b, c ∈ ℤ₆. To check (G1) directly, one would need to explicitly list all 6³ = 216 possible combinations of three elements and verify that the sum works out the same regardless of how one groups the elements. This tedious task can be made rather more efficient by cleverly exploiting properties (G2) and (G4), but the details are not particularly interesting so we will omit them here.

Since (G1)–(G4) are all true, ℤ₆ is an abelian group with respect to the addition operation defined here.

It is probably clear that there is nothing particularly special about the number 6 in the above example: we can take any set of numbers of the form {1, 2, 3, …, n}, define addition "clockwise", and find that the resulting set ℤₙ is an abelian group, with identity element n.

If G is any group, then the properties (G1)–(G3) guarantee the existence of an identity element, but don't tell us anything about its uniqueness. Is it possible to have a group in which two different elements both act like identities? That is, can there be two different elements i and j with the property that for all g ∈ G, both i ∗ g = g ∗ i = g and j ∗ g = g ∗ j = g are true?

Theorem (Uniqueness of Identities). If G is any group, and i and j are elements both satisfying the identity property (G2), then i = j.

Proof. Suppose i and j are two such elements, and consider the product i ∗ j. On the one hand, since i is an identity, we know that i ∗ j = j. On the other hand, since j is an identity, we know that i ∗ j = i. Therefore i = j.
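For readers who would rather let a machine grind through the 216 associativity checks, the following short Python sketch (an illustration of the verification just described, and nothing more) brute-forces properties (G1)–(G4) for ℤ₆, using the same representatives 1 through 6, with 6 playing the role of the identity.

```python
# Brute-force check that Z_6 (representatives 1..6, with 6 acting as the identity)
# satisfies the abelian group properties (G1)-(G4).

def add_mod(a, b, n=6):
    """'Clockwise' addition: wrap around to the range 1..n instead of 0..n-1."""
    s = (a + b) % n
    return s if s != 0 else n

elements = range(1, 7)

# (G1) associativity: all 6^3 = 216 triples
assert all(add_mod(add_mod(a, b), c) == add_mod(a, add_mod(b, c))
           for a in elements for b in elements for c in elements)

# (G2) 6 is a neutral element
assert all(add_mod(a, 6) == a and add_mod(6, a) == a for a in elements)

# (G3) every element has a partner summing to 6
assert all(any(add_mod(a, b) == 6 for b in elements) for a in elements)

# (G4) commutativity
assert all(add_mod(a, b) == add_mod(b, a) for a in elements for b in elements)

print("Z_6 is an abelian group under clockwise addition.")
```

Replacing 6 by any other positive integer n performs the same check for ℤₙ.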
Exercises

1. Prove that inverses are also unique. That is: Suppose G is a group, and x ∈ G is some element that has two "inverses": i.e., we assume that there are two elements, denoted x̄ and x̂, for which x ∗ x̄ = x̄ ∗ x = i and x ∗ x̂ = x̂ ∗ x = i. Prove that under these assumptions x̄ = x̂.
2. Prove that the inverse of the inverse of x is x.
3. Consider M₂(ℝ), the set of 2 × 2 matrices with real entries, with respect to the operation of matrix addition. Is this a group? If so, is it abelian? If it is not a group, which of the group properties does it have, and which ones does it not have?
4. Now consider the same set M₂(ℝ), this time with respect to the operation of matrix multiplication. Is this a group? If so, is it abelian? If not, which of the group properties does it have, and which ones does it not have?
5. Suppose we restrict the set M₂(ℝ) to consider only those matrices that have a nonzero determinant. (Recall from linear algebra that the determinant of a matrix is zero if and only if the matrix has no inverse.) Is that smaller set of matrices a group with respect to multiplication? Is it a group with respect to addition? Why or why not?
6. Let A, B, and C denote the 4 × 4 matrices whose rows are, respectively,
   A: (0, −1, 0, 0), (1, 0, 0, 0), (0, 0, 0, −1), (0, 0, 1, 0);
   B: (0, 0, −1, 0), (0, 0, 0, 1), (1, 0, 0, 0), (0, −1, 0, 0);
   C: (0, 0, 0, −1), (0, 0, −1, 0), (0, 1, 0, 0), (1, 0, 0, 0).
   (These matrices are elements of M₄(ℝ).) Explicitly compute all 9 products that can be formed by any choice of two of these three matrices (including repetitions), in either order. Based on your calculations, describe the smallest group possible containing A, B, and C. Make a multiplication table for the group. What is its identity element? What is each matrix's multiplicative inverse? Is this group abelian?
Here is another important example that we will see throughout the rest of this book:

Example. Let S be any set, and consider the set Func(S) that consists of all functions f : S → S. There is a natural binary composition law (called composition, in fact) on Func(S), denoted by the symbol ∘, and defined as follows: given any two functions f and g, we can define a new function f ∘ g by (f ∘ g)(x) = f(g(x)).

Does function composition make Func(S) a group? Let's first check associativity of the operation: for any three functions f, g, h, we need to see if (f ∘ g) ∘ h = f ∘ (g ∘ h). What does this mean? Two functions are equal if they act identically on their domain, so we need to compute how (f ∘ g) ∘ h and f ∘ (g ∘ h) act on any element s ∈ S. By the way composition is defined, we have

((f ∘ g) ∘ h)(s) = (f ∘ g)(h(s)) = f(g(h(s)))

while also

(f ∘ (g ∘ h))(s) = f((g ∘ h)(s)) = f(g(h(s)))

Since the two functions (f ∘ g) ∘ h and f ∘ (g ∘ h) act identically on every s ∈ S, they are the same function, so ∘ is associative.
What about the existence of an identity element? We need to find some function i ∈ Func(S) with the property that for any other function f ∈ Func(S), f ∘ i = i ∘ f = f. Again this condition is best understood in terms of the action of the functions on an element of the set: we need to have that for every s ∈ S,

f(i(s)) = f(s) and i(f(s)) = f(s)

There is one obvious function that meets both of these conditions: namely, the identity function on S, denoted id_S, and defined by the equation id_S(x) = x. In the case where S is just the set of real numbers ℝ, the graph of the identity function is just the familiar line y = x.

So we have an associative operation (composition), and an identity element (id_S). Is Func(S) therefore a group? We have not yet considered the third group property, (G3), which states that for any function f ∈ Func(S) there must be a corresponding "partner" function f̄ with the property that f ∘ f̄ = id_S and f̄ ∘ f = id_S. Expressed in terms of the action of these functions on an element s ∈ S, we need

f(f̄(s)) = s and f̄(f(s)) = s

Notice that these conditions are precisely the ones that we use when we say that f and f̄ are "inverse functions" in the usual sense of the word: that is, each function "reverses" the effect of the other. But—as is commonly taught in secondary Algebra 2—most functions, even in the case of functions on ℝ, are not invertible! Consider for example the function f : ℝ → ℝ defined by f(x) = x². There is no function g : ℝ → ℝ with the property that g(x²) = (g(x))² = x for every real number x. Indeed there can't be, because f is not a one-to-one function. For example, since (−3)² and 3² are both equal to 9, we would need to have

g(9) = g(3²) = 3 but also g(9) = g((−3)²) = −3.

If both of these were simultaneously true, we would have −3 = 3, which is of course false. Although the language and perspective we are adopting here may be unfamiliar, the phenomenon we are describing should be quite recognizable: it is nothing more nor less than the observation that the function f(x) = x² is not invertible on its full domain23. In fact a function f : ℝ → ℝ is invertible in this sense if and only if it is both onto and one-to-one; the one-to-one condition is exactly what the so-called "horizontal line test" checks. Since not every function f ∈ Func(S) has an inverse in this sense, the set Func(S) is not a group with respect to composition of functions.

This example illustrates something quite important, and perhaps unexpected: many of the important ideas of the secondary curriculum can be understood as instances of more general ideas at an abstract level. Certainly we do not claim that abstract group theory
belongs in high school mathematics, at least not at the level we are discussing it here. But composition of functions, and the question of when a function is invertible, does have a natural and well-established position in the secondary curriculum. It is not an accident that we use the words "identity function" and "inverse function" in the ways that we do: on the contrary, the identity function is precisely the same thing as the identity element in the set Func(S) with respect to the operation of function composition, and the inverse of a function is precisely the same thing as its inverse with respect to composition, in the sense of property (G3).
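The failure of Func(S) to be a group can be seen very concretely when S is a small finite set, where every function can be written down explicitly. The sketch below is our own illustration (in Python, with functions on S = {0, 1, 2} stored as dictionaries); it confirms that composition has an identity, that a one-to-one and onto function has an inverse, and that a function which is not one-to-one has none.

```python
# Functions on the finite set S = {0, 1, 2}, represented as dictionaries.
S = [0, 1, 2]

def compose(f, g):
    """(f o g)(x) = f(g(x))."""
    return {x: f[g[x]] for x in S}

id_S = {x: x for x in S}       # the identity function on S
f = {0: 1, 1: 2, 2: 0}         # one-to-one and onto (a bijection)
g = {0: 0, 1: 0, 2: 1}         # not one-to-one
h = {0: 2, 1: 2, 2: 1}

# Composition is associative (here checked on one triple), and id_S is an identity.
assert compose(compose(f, g), h) == compose(f, compose(g, h))
assert compose(f, id_S) == f and compose(id_S, f) == f

# Search all 27 functions S -> S for compositional inverses.
all_functions = [{0: a, 1: b, 2: c} for a in S for b in S for c in S]

def has_inverse(k):
    return any(compose(k, m) == id_S and compose(m, k) == id_S
               for m in all_functions)

print(has_inverse(f))   # True: the bijection f has an inverse
print(has_inverse(g))   # False: g is not one-to-one, so Func(S) is not a group
```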
1.6 Fields and Rings

Now that we have stated the definition of an abelian group, and seen a few examples, let's return to what the first two field properties (F1) and (F2) actually say:

• (F1) says that with respect to ⊕, F is an abelian group with identity element θ. In particular:
  • (G1) and (G4), in the context of (F1), mean that the operation ⊕ is both associative and commutative.
  • (G2) means that for all a ∈ F, it is true that a ⊕ θ = θ ⊕ a = a.
  • (G3) means that for all a ∈ F, there is some element that is the inverse of a with respect to the operation ⊕ and the element θ. If we denote this "additive inverse" with the symbol −a, then (G3) says that a ⊕ −a = −a ⊕ a = θ.
• (F2) says that with respect to ⊗, F − {θ} is an abelian group with identity element e. In particular:
  • (G1) and (G4), in the context of (F2), mean that the operation ⊗ is both commutative and associative.
  • (G2) means that for all a ∈ F (except perhaps for θ) it is true that a ⊗ e = e ⊗ a = a.
  • (G3) means that for all a ∈ F (except perhaps for θ) there is some element that is the inverse of a with respect to the operation ⊗ and the element e. If we denote this "multiplicative inverse" with the symbol a⁻, then (G3) says that a⁻ ⊗ a = a ⊗ a⁻ = e.
All of the above (and much more) is packed into the definition of a field! A few important comments are in order before we consider this definition further:

1. Because there are two operations defined on a field, there are also two types of "inverse" for each element: an additive inverse and a multiplicative inverse. For this reason we will use two distinct symbols, −a and a⁻, in place of the single symbol ā that we use for a general group.
2. The notation a⁻ is meant to be suggestive of the more familiar a⁻¹, and later on we will use the latter notation as well. For now, though, we want to postpone notation that suggests exponentiation. (The ideas of multiplicative inverse and repeated multiplication are, at least initially, unrelated concepts, and there is no reason to expect a single notational convention to handle both of them.)
3. You may be wondering why whenever we have referred to the action of ⊗ on F we have taken pains to explicitly mention that it is only a group on F − {θ}, and that the properties (G2) and (G3) are not assured to work if the element a is chosen to be the additive identity element θ. It turns out that there is no good way to define a field so that (G3) extends to all of F. In particular, if we were to try to construct an example of a
field in which θ is invertible, we would discover that the entire field collapses down into a single element; see Exercise 22 below.
4. Finally, note that (F3) is the only property that connects the two operations together. Without (F3) we would have two independent binary operations defined on our set F, and lots of information about how each of them works, but no information about how they interact with one another.

It is often useful to consider sets that are almost, but not quite, fields. One common field-like algebraic structure is a ring. A ring can perhaps best be thought of as a set that has all the properties of a field, with the crucial difference that multiplication need not be commutative, and multiplicative inverses need not exist (remember that in a field, multiplicative inverses are guaranteed to exist for all non-θ elements). If multiplication is commutative, then the ring is called a commutative ring. For another type of algebraic structure that is almost (but not quite) a field, see Exercise 11.

Examples of familiar rings abound. Here are some important examples that we will be dealing with in this book:

Examples.
1. The set of integers, denoted ℤ, is a commutative ring with respect to the usual operations of addition and multiplication, but not a field. Note that most integers do not have multiplicative inverses in ℤ; in fact the only integers that do have inverses in ℤ are 1 and −1.
2. The presence of the phrase "…in ℤ" in the previous example may have seemed like a somewhat artificial restriction. Of course all integers (with the exception of 0) do have multiplicative inverses, but to locate those inverses we must go to a larger set—namely, the set of rational numbers, denoted ℚ. So ℚ is a field, but its subset ℤ is only a ring.
3. It would be a mistake, however, to overly generalize the example of the relationship between ℤ and ℚ. It is not the case that the non-invertible elements of a ring always have inverses "somewhere else", and that any ring can therefore be "enlarged" to a field. Consider, for example, the set M₂(ℝ) described earlier (see Exercises, §1.5). With respect to matrix addition, this set is an abelian group; with respect to both the operations of matrix addition and matrix multiplication, M₂(ℝ) is a noncommutative ring24. Elements of M₂(ℝ) have multiplicative inverses if and only if their determinant is nonzero, and there is no "larger field" in which it is possible to find an inverse for a singular matrix like the one with rows (1, 1) and (2, 2).
4. More generally, given any ring R, we can form a matrix ring Mₙ(R) consisting of all n × n matrices with entries from R, and with multiplication and addition defined according to the usual rules for matrices. See Exercise 10 below.
5. The additive group ℤ₆, considered in §1.5, can be made into a ring in a natural way: we interpret an expression like 4 ⋅ 5 to mean 5 + 5 + 5 + 5 (i.e., a sum of 4 copies of 5), with addition interpreted clockwise as in Figure 1.2. Thus, we would have 4 ⋅ 5 = 2 in ℤ₆. The complete multiplication table for ℤ₆ is shown below:

× | 1  2  3  4  5  6
--+-----------------
1 | 1  2  3  4  5  6
2 | 2  4  6  2  4  6
3 | 3  6  3  6  3  6
4 | 4  2  6  4  2  6
5 | 5  4  3  2  1  6
6 | 6  6  6  6  6  6
Multiplication in ℤ₆

From the table, we notice a few important facts:

• This table, like the addition table shown earlier, has reflection symmetry across a diagonal line drawn from top-left to bottom-right. This symmetry manifests the fact that multiplication is commutative—so ℤ₆ is a commutative ring. (This is not particularly surprising, but it is worth mentioning.)
• The first row and column of the table show that for every x ∈ ℤ₆, 1 ⋅ x = x ⋅ 1 = x. So the element 1 functions as a multiplicative identity. (This, too, is not really a surprise.)
• However, it may be a surprise to observe that most elements in our ring do not have multiplicative inverses! A particular number is invertible if and only if 1 appears in its corresponding row or column. In fact the only invertible elements in ℤ₆ are 1 and 5. Moreover, each of these elements is its own multiplicative inverse: we have 1⁻ = 1 and 5⁻ = 5.
• Remember that in this ring 5 = −1; in other words, 5 is the additive inverse of 1, because 5 + 1 = 6 and 6 is the additive identity. Then the fact that 5⁻ = 5 says nothing more than (−1)⁻ = −1, which is true in every ring (see Exercise 9 below).
• It's also worth noting that the last row and last column of the multiplication table consist solely of 6's. We say that 6 is a multiplicative annihilator: for every x ∈ ℤ₆, we have 6 ⋅ x = 6. The property of being an annihilator is in some sense the exact opposite of being an identity, in that multiplying by an identity preserves information (leaves elements unchanged) while multiplying by an annihilator erases information.

Let's dwell on this last property a moment. Notice that nowhere in the definition of a ring or a field is anything said about the existence of multiplicative annihilators; and yet one has appeared, seemingly of its own volition. In the integers, an analogous thing can be said about the number 0: with respect to addition, it is an identity, but with respect to multiplication, it is an annihilator. At this point a natural question surfaces: is it just a coincidence that in both of these examples the additive identity is also a multiplicative annihilator? Or are these two behaviors inevitably linked? Can we have a ring in which the additive identity is not a multiplicative annihilator? Can we have a ring in which there are two (or more) distinct multiplicative annihilators?
Theorem (Additive Identities are Multiplicative Annihilators). In any ring R, the additive identity θ will also be a multiplicative annihilator.

Proof. Choose any r ∈ R and consider the product (θ ⊕ e) ⊗ r. On the one hand, we have

(θ ⊕ e) ⊗ r = e ⊗ r = r

where the first equality is because θ is an additive identity, and the second equality is because e is a multiplicative identity. On the other hand, we have

(θ ⊕ e) ⊗ r = (θ ⊗ r) ⊕ (e ⊗ r) = (θ ⊗ r) ⊕ r

where the first equality follows from the distributive property, and the second equality is because e is a multiplicative identity. Combining these results, we have

(θ ⊗ r) ⊕ r = r

Now we add the additive inverse −r to both sides of the equation:

(θ ⊗ r) ⊕ r ⊕ −r = r ⊕ −r

By associativity and the definition of −r, we get

(θ ⊗ r) ⊕ θ = θ

and finally since θ is an additive identity, we conclude

θ ⊗ r = θ

Since r ∈ R was chosen arbitrarily, this shows that θ is a multiplicative annihilator.
Exercises

7. Prove that the multiplicative annihilator in a ring is unique. That is: Suppose R is a ring, and φ ∈ R is some element with the property that φ ⊗ r = r ⊗ φ = φ for every r ∈ R. Show that φ = θ. [Hint: consider the proof of Uniqueness of Identities in §1.5.]
8. Prove that in any ring R, and for any element r ∈ R, we have (−e) ⊗ r = −r. [Hint: To prove that something is equal to −r, show that when it is added to r the result is θ.]
9. Prove that in any ring R, (−e)⁻ = −e.
10. Consider the ring M₂(ℤ₆). How many elements does this ring have? List a few of them, and choose a few examples of matrix multiplication and addition in this ring to illustrate how they work.
11. Let A, B, and C be the 4 × 4 matrices from §1.5 Exercise 6, and let I denote the 4 × 4 identity matrix. Consider the ring consisting of all 4 × 4 real matrices of the form xI + yA + zB + wC, where x, y, z, w ∈ ℝ. Show that in this ring, every nonzero element has a multiplicative inverse. (Multiplication in this ring is noncommutative, so this ring is not a field. A noncommutative ring in which every nonzero element has an inverse is called a division algebra or a skew field.)

Let's contrast ℤ₆ with a similar ring, ℤ₅. Its multiplication table is given below:

× | 1  2  3  4  5
--+--------------
1 | 1  2  3  4  5
2 | 2  4  1  3  5
3 | 3  1  4  2  5
4 | 4  3  2  1  5
5 | 5  5  5  5  5
Multiplication in ℤ₅

Nearly all of the observations we made about ℤ₆ hold here as well: multiplication is commutative, the additive identity (5, in this ring) is also a multiplicative annihilator, and so forth. However, there is one notable difference between ℤ₆ and ℤ₅: in the latter ring, every element (other than the additive identity) has an inverse, as can be seen from the fact that every row and column of the multiplication table contains a 1 (with the exception of the row and column corresponding to 5). So ℤ₅, unlike our previous example, is a field. What accounts for the fact that ℤ₅ is a field, while ℤ₆ is not? To explore this further, let's consider two more related examples: ℤ₉ and ℤ₁₁.
Exercises

12. Construct multiplication tables for ℤ₉ and ℤ₁₁. Identify the non-invertible elements in each ring. Which ring is a field?
13. In the example that is not a field, which elements are non-invertible? What do they have in common with one another?
If you have not answered the two questions in the box above, please stop reading now and answer them. Then continue on to the theorem below:
Theorem (Invertible and non-invertible elements in ℤₙ).
(a) An element k in the ring ℤₙ will be non-invertible if k has a factor in common with n.
(b) An element k in the ring ℤₙ will be invertible if k has no factors in common with n. (In this case, we say that k and n are relatively prime.)
(c) A ring ℤₙ will be a field if and only if n is a prime number. (In this case, we typically use the letter p instead of n and write ℤₚ.)
Proof (outline). The essence of the proof is that if k has a factor in common with n, then the multiples of k will contain a repeating cycle that ends at n. (Look, for example, at the row and column of the multiplication table for ℤ₉ corresponding to the elements 3 and 6.) Conversely, if k and n are relatively prime, then n does not appear among the multiples of k until you reach the very last row and column of the multiplication table. (Look at the row and column of the multiplication table for ℤ₉ corresponding to the elements 2 and 4.) That means that for such a k, the other n − 1 entries in the row must be filled with distinct values. So 1 has to be in there somewhere.
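The theorem is also easy to test by machine. The sketch below is our own illustration, written with representatives 0 through n − 1 rather than the 1-through-n labeling of the tables above (the invertible elements come out the same either way, with n in the tables playing the role of 0); it compares a brute-force search for inverses with the relatively-prime criterion.

```python
from math import gcd

def invertible_by_search(n):
    """Elements that have a multiplicative inverse in Z_n (brute force)."""
    return {k for k in range(n) if any((k * m) % n == 1 for m in range(n))}

def invertible_by_gcd(n):
    """Elements that are relatively prime to n."""
    return {k for k in range(n) if gcd(k, n) == 1}

for n in (5, 6):
    assert invertible_by_search(n) == invertible_by_gcd(n)
    print(n, sorted(invertible_by_search(n)))
# 5 -> [1, 2, 3, 4]   (every nonzero element is invertible: Z_5 is a field)
# 6 -> [1, 5]         (matching the Z_6 table above)
```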
Exercises

14. Which elements of ℤ₁₂ and ℤ₁₅ are invertible?
15. Find three distinct elements a, b, c in ℤ₁₂ with the property that a ⋅ b = c, b ⋅ c = a, and c ⋅ a = b.
16. How many solutions does the equation x² = 1 have in ℤ₁₂?

Exercise 16 illustrates a surprising fact: An equation of the form x² − 1 = 0 can have more than two distinct solutions, depending on the ring in which x takes its values. This is in plain contradiction to what happens when we work over the set of real numbers. In fact one of the most fundamental theorems25 of the high school Algebra 2 course can be paraphrased as follows: "A polynomial of degree n has at most n real roots; if we use complex numbers, and count repeated roots with multiplicity, then a polynomial of degree n has exactly n roots." The fact that this theorem is false when we work over ℤ₁₂ instead of ℝ means that, at some deep level, the proof of the theorem must depend on a property (or set of properties) that are true in ℝ but not in ℤ₁₂. But which property is the crucial one? To better understand this, let's think through how one would solve the equation x² = 1 and try to identify where the logic breaks down:

1. First, we rewrite the equation as x² − 1 = 0.
2. Next, we observe that x² − 1 = (x + 1)(x − 1).
3. So the equation to be solved is (x + 1)(x − 1) = 0.
4. So either x + 1 = 0 (in which case x = −1) or x − 1 = 0 (in which case x = 1).
Which of those four steps is illegitimate in ℤ₁₂? Certainly, the first step only involves subtracting 1 from both sides of the equation, which seems hard to quibble with. The factoring step works no matter what ring the coefficients are from, because multiplication distributes over addition. This brings us to the final step: If we know that (x + 1)(x − 1) = 0, what justifies the assertion that we must have either x + 1 = 0 or x − 1 = 0?

The answer—as any experienced Algebra teacher likely knows—is the property commonly known as the zero-product property: if the product of two real numbers (or two integers, or rational numbers, or whole numbers) is known to be zero, then at least one of the two numbers must be zero. Put another way, there are no nonzero numbers whose product is zero. This is precisely where, when we shift contexts to ℤ₁₂, the reasoning breaks down: in ℤ₁₂, it is possible to have two numbers A and B, neither of which is equal to the additive identity26 0, with product AB = 0. For example, 4 ⋅ 3 = 0, 2 ⋅ 6 = 0, etc. So from the equation (x + 1)(x − 1) = 0 we cannot conclude that one of those two factors must be 0; it is also possible, for example, that x + 1 = 6 and x − 1 = 4, which would happen if x = 5. These observations motivate the following:

Definitions. Let R be any ring. If a, b ∈ R are two elements such that a ≠ θ and b ≠ θ, but nevertheless a ⊗ b = θ, then a and b are called zero divisors27. A ring that contains no zero divisors is called an integral28 domain (sometimes shortened to simply domain).
In an integral domain, the equation a ⊗ b = θ implies that either a = θ or b = θ. The rings ℤ, ℚ, and ℝ are all integral domains. However, as we have just seen, ℤ₁₂ is not an integral domain; in ℤ₁₂ the elements 2, 3, 4, 6, 8, 9, and 10 are all zero divisors.
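That list of zero divisors is easy to confirm by brute force. The following sketch is our own illustration, using representatives 0 through 11 so that the additive identity is literally 0; it searches ℤ₁₂ for nonzero elements that multiply some other nonzero element to zero.

```python
n = 12
elements = range(n)   # representatives 0, 1, ..., 11; the additive identity is 0

zero_divisors = sorted(
    a for a in elements
    if a != 0 and any(b != 0 and (a * b) % n == 0 for b in elements)
)
print(zero_divisors)   # [2, 3, 4, 6, 8, 9, 10], exactly the elements listed above
```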
Exercises

17. Which elements of ℤ₁₅ are zero divisors?
18. Prove that if n is a composite number, then ℤₙ is not an integral domain.
19. Prove that if p is a prime number, then ℤₚ is an integral domain.
20. How many solutions does the equation x² = e have in an integral domain? Prove your answer.
21. How many solutions does the polynomial equation x² + 3x + 2 = 0 have in each of the following rings, and what are they? (a) ℤ₁₀ (b) ℤ₁₁ (c) ℤ₁₂ (d) ℤ.

The attentive reader may have noticed that the zero divisors of ℤ₁₂ and ℤ₁₅ are all non-invertible. This is not a coincidence, as the following theorem shows:

Theorem. In any ring R, zero divisors are always non-invertible.
Proof. Suppose a, b ∈ R are zero divisors; that is, assume a ≠ θ and b ≠ θ but a ⊗ b = θ. Now suppose a has a multiplicative inverse, a⁻. Then we have:

a ⊗ b = θ, so
a⁻ ⊗ a ⊗ b = a⁻ ⊗ θ; therefore
e ⊗ b = θ, because a⁻ ⊗ a = e and θ is a multiplicative annihilator; so
b = θ, because e is a multiplicative identity.

But this contradicts our assumption that b ≠ θ. Therefore, the multiplicative inverse a⁻ cannot exist, which completes the proof.
Exercises

22. Suppose there were an example of a field F in which the additive identity θ were invertible. Prove that such a field can only contain a single element. (In other words: show that if a ∈ F is any element, then a = θ.) (Hint: mimic the proof of the theorem above.)
23. Can you have a field that is not a domain?
1.7 Important Examples

Having marshaled a number of basic definitions, let's acquaint ourselves with some important examples of groups, rings, and fields.
Example 1. ℤ, the set of integers, is:
• an abelian group with respect to ordinary addition,
• a commutative ring with respect to ordinary addition and multiplication,
• an integral domain,
• but not a field (in fact, the only numbers with multiplicative inverses in ℤ are 1 and −1).

Example 2. ℚ, the set of rational numbers, is:
• an abelian group with respect to ordinary addition,
• a commutative ring with respect to ordinary addition and multiplication,
• an integral domain,
• and also a field.

Example 3. ℤₙ, the integers modulo n, is:
• always an abelian group with respect to "clockwise" addition,
• always a commutative ring,
• an integral domain if and only if n is prime,
• and also a field if and only if n is prime.
Example 4. Mₙ(ℝ), the set of n × n matrices with entries from ℝ, is:
• always an abelian group with respect to matrix addition,
• a non-commutative ring,
• neither an integral domain nor a field, because it is possible to multiply two nonzero matrices to produce a zero matrix (see Exercise 24).

Example 5. GLₙ(ℝ), the "general linear group" consisting of n × n matrices with entries from ℝ and whose determinant is nonzero:
• The requirement that the determinant of a matrix be nonzero guarantees that all matrices in GLₙ(ℝ) are invertible.
• However, this set of matrices is not closed under addition, and does not contain an additive identity.
• So GLₙ(ℝ) is a non-abelian group with respect to matrix multiplication, but is not a ring (and therefore certainly not a field).

Example 6. ℝ², the set of all ordered pairs of real numbers:
• There is a natural way to define addition, in which pairs are added "component-wise": that is, (a, b) + (c, d) = (a + c, b + d).
• With respect to this addition, ℝ² is an abelian group (see Exercise 25).
• But is this set a ring, and if so, is it a domain and a field? The answer to this question depends on how one defines "multiplication" of ordered pairs. Surprisingly, there are many different ways to define a product on this set; each of those definitions leads to a different ring structure, with different properties. (See Exercises 26 and 27.) We will return to this question in Chapter 6.
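To preview how much depends on the choice of multiplication, here is a small sketch of our own comparing two candidate products on ℝ² (they are the formulas that appear in Exercises 26 and 27 below; the function names are ours), applied to the same pair of nonzero elements.

```python
# Two candidate multiplications on R^2, applied to the same pair of nonzero elements.

def mult_componentwise(p, q):
    (a, b), (c, d) = p, q
    return (a * c, b * d)

def mult_twisted(p, q):
    (a, b), (c, d) = p, q
    return (a * c - b * d, a * d + b * c)

p, q = (1.0, 0.0), (0.0, 1.0)

print(mult_componentwise(p, q))   # (0.0, 0.0): two nonzero pairs multiply to zero
print(mult_twisted(p, q))         # (0.0, 1.0): nonzero under the second rule
```

Under the first rule ℝ² acquires zero divisors; under the second rule it does not, and readers who recognize the second formula will already see why.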
36 Numbers and Number Systems Example 7. Generalizing the previous example, for any group G we can form the set G 2 consisting of ordered pairs of elements from G. A typical element of G 2 would have the form ( a, b ) with both a ∈ G and b ∈ G . We can use the operation that comes with the group G to create a new operation on ordered pairs: we define ( a, b ) ( c, d ) = ( a b, c d ). • With this definition, we find that the additive identity for G 2 is simply the ordered pair (iG , iG ), where iG is the identity from G. • Then G 2 is a group. Moreover if G is abelian than G 2 will be as well (see Exercise 28). •
Example 8. In this spirit, we can regard 2 , 2 , and even ( M n ( ) ) as groups. (See Exercise 29). 2
Example 9. ℝ, the set of all real numbers, is a field.
Example 10. ℂ, the set of complex numbers, is a field.
Example 11. ℚ[x], the set of polynomials in a single variable x with coefficients from ℚ, is a commutative ring. (We will define this ring precisely and study its properties in detail in Chapter 2.) We could also consider ℝ[x], which includes all of ℚ[x] but also allows for polynomials with irrational coefficients, such as 3x² + πx + 3√2.
Example 12. Similar to ℚ[x], we can also consider ℚ(x), the set of rational expressions in a single variable x with coefficients in ℚ. Every element of ℚ(x) is of the form p(x)/q(x), where p(x) and q(x) are polynomials in ℚ[x] with q(x) ≠ 0. (Notice the use of rounded parentheses, rather than square brackets, to distinguish ℚ(x) from ℚ[x].) Using the ordinary rules of high school algebra, any two rational expressions can be added, subtracted, or multiplied, and any nonzero rational expression can be inverted, so this is a field, called the "field of rational functions".
• Note, some care must be taken to deal with equivalent rational functions. To be precise, we should define two rational expressions p(x)/q(x) and r(x)/s(x) to be equivalent if and only if p(x) ⋅ s(x) = r(x) ⋅ q(x), where the "dot" stands for ordinary polynomial multiplication. We then should really say that the elements of ℚ(x) are equivalence classes of rational expressions. Usually we will elide that distinction, in much the same way that when dealing with rational numbers we do not fuss over the fact that 4/10 and 6/15 are different representations of the same element of ℚ.
• Notice also that although we call these "rational functions", we are not really concerned here with how these behave as functions—that is, we are not concerned with domains, continuity, etc. For example, the rational expressions (x² − 1)/(x − 1) and (x² − x − 2)/(x − 2) are equivalent as elements of ℚ(x), even though as functions they are not identical because they have different domains.
• Finally note that there is no particular reason why we need to restrict the coefficients to ℚ. We could just as well consider ℝ(x), or even ℤ_p(x) for some prime p. In fact for any field F we can form a field of rational functions F(x).
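The equivalence test by cross-multiplication is entirely mechanical, as the following sketch suggests (our own illustration, not the text's; polynomials are stored as coefficient lists with the constant term first, and the function names are invented):

```python
def poly_mult(p, q):
    """Multiply two polynomials given as coefficient lists (constant term first)."""
    result = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            result[i + j] += a * b
    return result

def equivalent(p, q, r, s):
    """p/q ~ r/s  iff  p*s == r*q  (as polynomials)."""
    return poly_mult(p, s) == poly_mult(r, q)

# (x^2 - 1)/(x - 1)  vs  (x^2 - x - 2)/(x - 2)
p, q = [-1, 0, 1], [-1, 1]      # x^2 - 1,      x - 1
r, s = [-2, -1, 1], [-2, 1]     # x^2 - x - 2,  x - 2
print(equivalent(p, q, r, s))   # True: both represent the same rational function
```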
Example 13. ℚ(√2), the set of real numbers of the form a + b√2, where both a and b are rational numbers. For example, 5 + (1/4)√2 is a member of this set, as is every rational number (take b = 0).
• (a + b√2) + (c + d√2) = (a + c) + (b + d)√2, so ℚ(√2) is closed under addition.
• ℚ(√2) has an additive identity (namely, 0, which can be written as 0 + 0√2) and additive inverses, so it is an (abelian) group with respect to addition.
• ℚ(√2) contains 1, which is a multiplicative identity.
• Perhaps surprisingly, ℚ(√2) is also closed under multiplication: (a + b√2)(c + d√2) = (ac + 2bd) + (bc + ad)√2.
• Do multiplicative inverses exist in ℚ(√2)? To answer this question, we consider an arbitrary element a + b√2. As long as at least one of a and b is nonzero, this real number has an inverse in the larger field ℝ. That inverse can be written (and re-written) as follows, using the familiar technique of rationalizing the denominator:
1/(a + b√2) = 1/(a + b√2) ⋅ (a − b√2)/(a − b√2) = a/(a² − 2b²) − (b/(a² − 2b²))√2
This is an element of ℚ(√2), provided that a² − 2b² is not zero. But the equation a² − 2b² = 0 has no rational solutions a, b, because if it did, a/b would be a rational number whose square equals 2. Since 2 does not have a rational square root (see §1.10), every nonzero element of ℚ(√2) has an inverse in ℚ(√2), so ℚ(√2) is a field.
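As a computational aside (our own sketch, not the book's; the class name QSqrt2 is invented), the arithmetic of Example 13 can be carried out exactly with rational coefficients. The inverse method below is just the rationalize-the-denominator formula derived above:

```python
from fractions import Fraction

class QSqrt2:
    """An element a + b*sqrt(2) of Q(sqrt 2), with a and b rational."""
    def __init__(self, a, b):
        self.a, self.b = Fraction(a), Fraction(b)

    def __add__(self, other):
        return QSqrt2(self.a + other.a, self.b + other.b)

    def __mul__(self, other):
        # (a + b*sqrt2)(c + d*sqrt2) = (ac + 2bd) + (bc + ad)*sqrt2
        return QSqrt2(self.a * other.a + 2 * self.b * other.b,
                      self.b * other.a + self.a * other.b)

    def inverse(self):
        # Rationalize the denominator: 1/(a + b*sqrt2) = (a - b*sqrt2)/(a^2 - 2b^2)
        denom = self.a ** 2 - 2 * self.b ** 2   # nonzero whenever (a, b) != (0, 0)
        return QSqrt2(self.a / denom, -self.b / denom)

    def __repr__(self):
        return f"{self.a} + {self.b}*sqrt(2)"

x = QSqrt2(5, Fraction(1, 4))      # the element 5 + (1/4)*sqrt(2) from the text
print(x * x.inverse())             # 1 + 0*sqrt(2): the inverse really is an inverse
```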
Exercises
24. Find an example of two nonzero 2 × 2 real matrices A and B such that AB = 0 but BA ≠ 0. (A single such example shows that the ring M₂(ℝ) is neither commutative nor an integral domain.)
25. Show that ℝ² is an abelian group with respect to component-wise addition.
26. Suppose we define multiplication in ℝ² component-wise in the obvious way, i.e. (a, b) ⋅ (c, d) = (ac, bd). Show that ℝ² would not be an integral domain. Describe all of the zero divisors in this ring.
27. Contrary to the previous exercise, suppose we instead define multiplication in ℝ² via the not at all obvious formula (a, b) ⋅ (c, d) = (ac − bd, ad + bc). Show that with this multiplication, ℝ² would be an integral domain. (This exercise and the previous one demonstrate that one and the same underlying set can be made into a ring in more than one distinct way, with different results and properties.)
28. Show that if G is an abelian group, then G² will be as well.
29. Give two explicit examples of what elements of (M₂(ℝ))² look like, and show how to compute their product.
30. Mimic Example 13 to show that if n is any positive whole number that is not a perfect square, then ℚ(√n) is a field. (On the other hand, if n is a perfect square, then ℚ(√n) reduces to just ℚ, which is already a field.)
1.8 Order Properties and Ordered Fields
At this point it's probably a good idea to pause and remember what we are working toward. The goal of this chapter is to prove (and understand the meaning of) the following important theorem:
Real Number Characterization Theorem. Any two complete ordered fields are isomorphic.
We have not yet defined what "ordered fields" are, nor what it means for an ordered field to be "complete", nor what it means for two complete ordered fields to be "isomorphic", so there is still quite a bit of work to do before we can understand (let alone prove) this theorem. Nevertheless, we can assess our progress on the way to this theorem by noting that in the last section we considered some important examples of different fields: ℚ, ℝ, ℂ, ℚ(√2), ℤ_p and ℚ(x). The variety among these examples gives us some insight into how striking the Real Number Characterization Theorem really is: among all of the possible fields, ℝ is unique in that it is (essentially) the only field that is complete and ordered. In contrast, ℚ and ℚ(√2) are ordered, but not complete; ℂ is complete, but not ordered. The theorem says that if you want both properties, you need to work with ℝ.
So what is an ordered field? Let's start with an even more basic notion: A set S is called a totally ordered set if there is some binary relation (written with the symbol <) that satisfies the following two properties:
• Transitivity: for any a, b, c ∈ S, if a < b and b < c, then a < c.
• The Law of Trichotomy: for any a, b ∈ S, exactly one of the following three statements is true: a < b, b < a, or a = b.
The statement a < b may be read aloud as "a comes before b" or "a precedes b"; note that although we use the familiar symbol <, the relation itself need not have anything to do with relative size. (Alternatively, we may write b > a, which should be read aloud as "b comes after a" or "b follows a".) Indeed, it's extremely common to order sets of objects according to criteria that have nothing whatsoever to do with size; consider the way words are ordered in a dictionary, in which "ALPHABETICAL" precedes "ARGENTINA". Similarly, if we have two elements a, b from a field F, the statement a < b says only that a precedes b according to some rule for ordering the elements; it does not mean that b is "larger" than a in any meaningful sense.
The Law of Trichotomy asserts, among other things, that the ordering of the elements in S cannot contain any "loops". Suppose, for example, that for some collection of elements we had a < b < c < ⋯ < a. Then by transitivity, we would be able to conclude that a < a. But since a = a automatically, this would violate trichotomy.
The Law of Trichotomy also ensures that there are no "incomparable" elements. This is not as trivial as it sounds. Suppose, for example, you wanted to order the polynomials in ℚ[x] by degree—in other words, for any two polynomials p(x), q(x) we decide to say that p(x) comes before q(x) if the degree of p is lower than the degree of q. With this convention, we would be able to say with confidence that 5x + 2 < x³ + 7. But which would come first, x² + 2x − 5 or x² + 7? Since they have the same degree, we would have neither x² + 2x − 5 < x² + 7 nor x² + 7 < x² + 2x − 5; the two polynomials would be said to be incomparable in this order, and this would therefore not constitute a total order.
One commonly-used order is the lexicographic (also called the "dictionary") order, which describes the way that words are sorted alphabetically: to place two words in alphabetical order, we first compare the initial letters of the two words, and (if they are different) choose the word that has the letter that comes earlier in the alphabet; if the initial letters are the same, we move on to compare the second letter of each word and (if they are different) choose the word that has the letter that comes earlier in the alphabet; if the second letters are also the same, we move on through the letters, continuing until we find a difference. We are guaranteed to eventually find a difference, unless the words are the same, so the law of trichotomy is guaranteed: given any two distinct words, we can always say unambiguously which one comes first in alphabetical order. Notice that this ordering on words requires that we already have an ordering on letters; in order to determine that CARPENTER comes before CARPETBAG we need to know that the letter N comes before T. Notice also that we need some technique for handling words of different length; one way to handle that is to "pad" a shorter word with enough blank spaces at the end to make it the same length as a given longer word, and to regard blank spaces as coming at the very beginning of the alphabet.
The lexicographic order can be adapted to handle a variety of mathematical structures.
Example. The lexicographic order on ℝ². Recall that an element of ℝ² is a pair of real numbers, which we may write (a, b). Now suppose that we have two such pairs, (a, b) and (c, d), and we need to decide which one should come first in a total order. We will write (a, b) < (c, d) if one of the following two conditions holds:
(a) Either a < c, or
(b) a = c and b < d.
(In the conditions above, the symbol < denotes the usual ordering on the real numbers.) These conditions allow us to compare any two elements of ℝ² by following the same procedure one uses to alphabetize two-letter words:
1. First, we compare the first entries in each pair. If they are different, then whichever entry is less (in the usual sense) determines the ordered pair that comes first.
2. If the first entries in the pairs are the same, then we move on to the second entries.
For example, we would have (3, 6) < (5, 2) because 3 < 5; we would also have (3, −1) < (3, 6), because the first entries are the same and −1 < 6. Graphically, we can describe the lexicographic order on ℝ² as follows: given any two points A and B in the plane, we consider the vertical lines through those points.
If they lie on different vertical lines, then whichever point lies on the left-most line comes first; if they lie on the same vertical line, then whichever point is below the other comes first. See Figure 1.3, which shows the points ( 3, −1), ( 3, 6 ), and ( 5, 2 ).
Figure 1.3 Order relation between three points in ℝ² with the lexicographic order.
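The comparison procedure just described is easy to express as code. Here is a minimal sketch (ours, with invented function names) of the left-to-right order, together with the right-to-left variant discussed next:

```python
def lex_left_to_right(p, q):
    """True if p < q in the left-to-right lexicographic order on R^2."""
    (a, b), (c, d) = p, q
    return a < c or (a == c and b < d)

def lex_right_to_left(p, q):
    """True if p precedes q when the second coordinates are compared first."""
    (a, b), (c, d) = p, q
    return b < d or (b == d and a < c)

print(lex_left_to_right((3, 6), (5, 2)))    # True:  3 < 5
print(lex_left_to_right((3, -1), (3, 6)))   # True:  first entries tie, -1 < 6
print(lex_right_to_left((3, 6), (5, 2)))    # False: 6 > 2, so (5, 2) comes first
# Python's built-in tuple comparison is exactly the left-to-right order:
print((3, -1) < (3, 6))                     # True
```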
Note, however, that a different lexicographic order—based on exactly the same principle, but producing a different result—can be defined by applying alphabetical principles right-to-left, rather than left-to-right. That is, we can define an alternative order on ordered pairs by saying that (a, b) ≺ (c, d) if
(a) Either b < d, or
(b) b = d and a < c.
(Note the slight typographic difference between the symbols ≺ and <.)
Either of these lexicographic orders makes ℝ² into a totally ordered set. But a field carries more structure than a bare set: it also comes with the operations ⊕ and ⊗, and for an order to be genuinely useful it must be compatible with those operations. This is the content of the following definition:
Definition. An ordered field is a field F together with a total order < on F satisfying the following two properties:
(O1) For any two elements x, y ∈ F, x < y if and only if θ < y − x.
(O2) For any three elements x, y, z ∈ F, if θ < x and y < z then x ⊗ y < x ⊗ z.
The two properties (O1), (O2) establish the connection between the algebraic structure of a field, and its order structure. Properties (O1) and (O2) both call attention to the elements that come after θ in the order, which warrants the following definition:
Definition. An element x in an ordered field F is called positive if θ < x; alternatively, x is called negative if x < θ. (Note that according to the law of trichotomy, every nonzero element must be either positive or negative.)
The properties (O1) and (O2) can now be paraphrased as follows:
• (O1) states that whether one element precedes or follows another depends on whether the difference between them is positive or negative.
• (O2) states that multiplication by a positive element is an order-preserving operation.
Equipped with (O1) and (O2), we may proceed to prove a number of basic properties of ordered fields:
Theorem. (Addition is order-preserving) In an ordered field, if x < y, then for any z, x ⊕ z < y ⊕ z.
Proof. By (O1), x ⊕ z < y ⊕ z holds if and only if (y ⊕ z) − (x ⊕ z) is positive. But (y ⊕ z) − (x ⊕ z) = y − x, which by (O1) (again) is positive if and only if x < y.
Theorem. In an ordered field, if x is positive then −x is negative; conversely if x is negative then −x is positive.
Proof. Suppose x is positive, i.e. θ < x. By the theorem just proved, adding −x to both sides of the inequality preserves the order relation, so θ ⊕ ( −x ) < x ⊕ ( −x ). But this says simply that −x < θ , i.e. −x is negative. The reverse argument is similar. Theorem. The set of positive elements is closed under both addition and multiplication.
Proof. If x > θ and y is any element, then x ⊕ y > θ ⊕ y = y . If, in addition, y > θ , then by transitivity we have x ⊕ y > θ . This shows the positive elements are closed under addition. The analogous claim for multiplication is a direct consequence of (O2).
Theorem. For any two elements x, y in an ordered field F:
(a) If x > θ and y < θ, then x ⊗ y < θ.
(b) If x < θ and y < θ, then x ⊗ y > θ.
Proof. For part (a), we observe that if y < θ then by property (O2) if we multiply by the positive element x the order relation is preserved, so x ⊗ y < x ⊗ θ = θ, where the last equality follows from the fact that the additive identity is also a multiplicative annihilator. For (b), note that if x and y are both negative, then both −x and −y are positive. By the theorem just proved, this means that (−x) ⊗ (−y) is positive. But in any field (−x) ⊗ (−y) = x ⊗ y. So if x and y are both negative, then x ⊗ y is positive.
The previous theorem is really nothing more than the abstract analogue of the familiar property that the product of a negative and a positive is negative, while the product of two negatives is positive. Moreover, we have the following corollaries:
Corollary. In any ordered field, the square of any element x ≠ θ is always positive.
Proof. If x is positive, then x² = x ⊗ x is positive because the set of positives is closed under multiplication; if x is negative, then x² = x ⊗ x is still positive by the preceding theorem. By trichotomy, there are no other possibilities.
Corollary. In any ordered field, the multiplicative identity e is positive.
Proof. This follows from the fact that e = e².
Corollary. In any ordered field, the equation x² = −e has no solution.
Proof. Since e is positive, −e is negative; but a square can never be negative.
Corollary. It is not possible to define an order on ℂ such that ℂ becomes an ordered field.
Proof. In ℂ, there is an element i whose square is −1. But as we just showed, in an ordered field, no element has a square that is the additive inverse of the multiplicative identity.
The preceding Corollary shows what we meant when we said, a few pages up, that even though it is possible to define an order on ℂ (specifically, we used the lexicographic order), that order does not make the field into an ordered field! In fact there are lots of ways to define a total order on ℂ. However, the Corollary guarantees that none of these possible
orders satisfy both (O1) and (O2), because it is not possible to define an order on ℂ in which both of those properties hold.
We can also prove the following, which will be useful in subsequent sections:
Theorem. Multiplicative inversion is a sign-preserving but order-reversing operation: that is, if x and y are any two positive elements with x < y, then x⁻¹ and y⁻¹ are also positive, with x⁻¹ > y⁻¹.
Proof. Suppose x > θ. We know that θ is never invertible, so x⁻¹ ≠ θ, and therefore by trichotomy, either x⁻¹ < θ or x⁻¹ > θ. However, if x⁻¹ < θ, then by (O2), x ⊗ x⁻¹ < x ⊗ θ = θ. But this implies that e < θ, contradicting the corollary above. So it must be true that x⁻¹ > θ. This shows that multiplicative inversion is sign-preserving. Now suppose that x < y. Since x⁻¹ ⊗ y⁻¹ is positive, multiplying both sides of x < y by x⁻¹ ⊗ y⁻¹ should preserve the order of the inequality; therefore x ⊗ x⁻¹ ⊗ y⁻¹ < x⁻¹ ⊗ y⁻¹ ⊗ y. But this says precisely that y⁻¹ < x⁻¹, proving that multiplicative inversion is order-reversing.
Exercises
31. Place the following elements of ℝ² in order using both the left-to-right lexicographic order < and the right-to-left lexicographic order ≺: (7, 2), (−5, −3), (−5, 4), (7, −5).
32. Draw a sketch showing the set of all elements that precede (5, 2) in the left-to-right lexicographic order.
33. Consider the subset S of ℝ² defined by S = {(a, b) | a < 3}, where < denotes the left-to-right lexicographic order. Show that S is bounded above, but has neither a maximum element nor a least upper bound. Would you expect these properties to change if the condition a < 3 in the definition of S were replaced with a ≤ 3?
34. We have shown that the field ℂ can be endowed with an order (e.g. the lexicographic order) but that this order does not make ℂ into an ordered field. Check the axioms (O1) and (O2) for an ordered field and determine which (if any) are satisfied by this order (give proofs) and which are not (provide a counterexample).
35. Consider the ring ℚ[x] of polynomials with rational coefficients, and suppose we define a polynomial to be positive if its leading coefficient is a positive real number (in the usual sense); then, for any two polynomials p(x) and q(x), we define p(x) ≺ q(x) if q(x) − p(x) is positive.
(a) Prove that this defines a total order.
(b) Put the following five polynomials in order: 2x² + 4x − 7, 0, −5x² + 8x + 12, 2x² + 6x − 10, 12x + 5.
(c) Verify that even though ℚ[x] is not a field, (O1) and (O2) are nevertheless both satisfied with this order.
(d) Show that for polynomials of degree n, this order is equivalent to a lexicographic order.
(e) Would any of these questions have different answers if we replaced ℚ[x] with [x] or with [x]? If so, which ones?
1.9 Examples (and Non-Examples) of Ordered Fields
Example 1. ℚ, the set of rational numbers, in the "usual" order, is an ordered field.
Example 2. So is ℝ, the set of real numbers in the "usual" order.
Example 3. The field ℚ(√2), regarded as a subset of ℝ, "inherits" an order; that is, given two elements a + b√2 and c + d√2 from our field, we can order them in ℚ(√2) by comparing them in their order as real numbers.
Example 4. Fields of the form ℤ_p can never be ordered fields. This is because if you begin with the order relation 0 < 1 (which is necessarily true in any ordered field) and begin adding 1 repeatedly, you obtain, by the order-preserving property of addition, the loop 0 < 1 < 1 + 1 < 1 + 1 + 1 < ⋯ < 0. But we have already observed that totally ordered sets cannot contain any loops.
Example 5. Consider ℚ(x), the field of rational expressions with coefficients in ℚ (refer back to Example 12 of §1.7). We may define an order in ℚ(x) by the rule p(x)/q(x) ≺ r(x)/s(x) if and only if p(x)s(x) ≺ q(x)r(x), where in the latter expression the order on polynomials is the one described in Exercise 35. It may be verified (see Exercise 36) that this order satisfies all of the axioms of an ordered field.
How does this definition work in practice? For example, suppose we want to compare the elements (3x − 5)/(10x + 7) and (12x + 4)/(2x² + 5x + 3). We observe that (3x − 5)(2x² + 5x + 3) is a polynomial of degree 3 with a positive leading coefficient, while (12x + 4)(10x + 7) is a polynomial of degree 2; therefore (3x − 5)(2x² + 5x + 3) − (12x + 4)(10x + 7) is a positive polynomial (in the sense of Exercise 35). It follows that (12x + 4)/(2x² + 5x + 3) ≺ (3x − 5)/(10x + 7) in this particular order. (Later we will see that this same field can be ordered in more than one—in fact, infinitely many!—different ways.)
The exact same definition can be used to define an order on ℝ(x). Notice that the set ℝ of real numbers can be thought of in a natural way as a subset of ℝ(x); that is to say, every constant can be thought of as a degree-zero polynomial, and therefore also as a rational function. In this sense, the usual order on ℝ is compatible with the order on ℝ(x), in that for any two real numbers a, b we would have a < b (as real numbers) if and only if a ≺ b (as rational functions). So ℝ(x) contains within it a copy of the entire real number line.
However, in addition to the real number line, ℝ(x) also contains the element x. Notice that for any real number r, it is automatically true that r ≺ x, because x − r is a polynomial with positive leading coefficient. This means that we can regard x as a "transfinite element" of ℝ(x), in the sense that it comes after the entire real number line. Indeed, any element of the form x + r is transfinite in this sense. We can visualize the rational functions of the form x + r as forming a second number line that comes after the "main" number line. Let's denote this second number line ℓ₁ and the "main" number line as ℓ₀ = ℝ. But this is only the beginning: in fact for any given real number a, the set of rational functions of the form ax + r forms a number line ℓ_a that (taken in isolation) looks exactly like the "standard" number line. If a < b, then every element of ℓ_a precedes every element of ℓ_b. So we can visualize the set of 1st-degree polynomials as a "stack" of infinitely many number lines, one for each real number (see Figure 1.4). If this looks familiar, it should: it is precisely the right-to-left lexicographic order on ℝ² (see Exercise 31).
And this just scratches the surface! In addition to the transfinite element x, we also have x², x³, etc., each of which comes after all of the lower-degree monomials. Moreover, in this ordered field there are also infinitesimal elements—that is, elements that are positive but smaller than any positive real number. See Exercises 38 and 39.
Figure 1.4 Ordering in ℝ(x).
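Because the comparison rule of Example 5 reduces to polynomial arithmetic, it can be automated. The sketch below (our own, with invented function names; polynomials are coefficient lists with the constant term first) reproduces the comparison worked out above and also checks that 1/x is smaller than a small positive constant, anticipating Exercise 38:

```python
def poly_mult(p, q):
    """Multiply polynomials given as coefficient lists (constant term first)."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def poly_sub(p, q):
    n = max(len(p), len(q))
    p, q = p + [0] * (n - len(p)), q + [0] * (n - len(q))
    return [a - b for a, b in zip(p, q)]

def is_positive(p):
    """Positive in the sense of Exercise 35: nonzero with positive leading coefficient."""
    while p and p[-1] == 0:
        p = p[:-1]
    return bool(p) and p[-1] > 0

def precedes(p, q, r, s):
    """p/q comes before r/s in the order of Example 5, i.e. q*r - p*s is positive."""
    return is_positive(poly_sub(poly_mult(q, r), poly_mult(p, s)))

# (12x + 4)/(2x^2 + 5x + 3) precedes (3x - 5)/(10x + 7):
print(precedes([4, 12], [3, 5, 2], [-5, 3], [7, 10]))   # True
# 1/x exceeds 0 but precedes the constant 1/1000 (Exercise 38):
print(precedes([0], [1], [1], [0, 1]), precedes([1], [0, 1], [1], [1000]))  # True True
```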
Exercises
36. Verify that the order on ℚ(x) and ℝ(x) described in Example 5 satisfies all of the properties of an ordered field.
37. Show that an element of ℚ(x) or ℝ(x) is positive if and only if it can be written in the form (a_n x^n + ⋯ + a_0)/(x^m + ⋯ + b_0), where the numerator and denominator have no common factors (i.e. the expression is in lowest terms), the leading coefficient of the denominator is 1, and the leading coefficient of the numerator, a_n, is positive with respect to the usual order on the reals.
38. Consider the rational function 1/x in ℚ(x) or ℝ(x). Show that 1/x is infinitesimal, in the sense that 0 ≺ 1/x, but 1/x ≺ r for any positive r ∈ ℚ (or ℝ).
39. Let a be an arbitrary positive real number. Show that (ax + 1)/x comes after a, but comes before any real number r that comes after a. Show furthermore that for any positive real number b, the rational function (ax + 1)/(x + b) lies between a and (ax + 1)/x.
1.10 Rational Subfields and the Completeness Property
Remember, we are working our way toward the following important theorem:
Real Number Characterization Theorem. Any two complete ordered fields are isomorphic.
We are still not quite ready to define what "complete" means, but we are getting close! As a warm-up, we are going to prove the following important theorem:
The Rational Subfield Theorem. Any ordered field contains a subfield that is isomorphic to ℚ.
Towards a proof of the Rational Subfield Theorem, we first notice that on any field F (whether ordered or not), we can define a binary operation ℤ × F → F as follows:
(i) If n is a positive integer and f ∈ F an arbitrary element of the field, we define n ⋅ f to mean the element f ⊕ f ⊕ ⋯ ⊕ f (n terms).
(ii) If n is a negative integer and f ∈ F, then −n is positive, so we can define n ⋅ f to mean (−n) ⋅ (−f), i.e. (−f) ⊕ (−f) ⊕ ⋯ ⊕ (−f) (−n terms).
With this definition, the notation n ⋅ f makes sense even though n and f come from entirely different sets; for example, if we take F = ℤ₅ then for any f ∈ F, the expression 8 ⋅ f means f + f + f + f + f + f + f + f, which is an element of ℤ₅ even though 8 is not in ℤ₅. For brevity, we will usually write nf rather than n ⋅ f. Now for any field F, and any f ∈ F, we can consider the "integer multiples" of f, i.e. the elements …, −3f, −2f, −f, 0, f, 2f, 3f, … It's important to realize that these elements need not all be distinct! For example, if we choose f = 2 ∈ ℤ₅ this list repeats itself: …, 4, 1, 3, 0, 2, 4, 1, 3, … However, in an ordered field this never happens:
Lemma. If F is an ordered field, and f any nonzero element of F, then the elements nf are all distinct; that is, if n and m are distinct integers then nf and mf are distinct elements of F.
Proof (sketch). This follows immediately from the Law of Trichotomy and the order properties (O1) and (O2). For example, if f > θ then f + f > f, and f + f + f > f + f, etc. If any two elements nf and mf were equal then we would have a loop, which is impossible in a totally ordered set.
Corollary. Ordered fields are always infinite.
Theorem. Let F be an ordered field and let e ∈ F be the identity element. Then the set ℤe = {ne | n ∈ ℤ} is a subring of F, and is isomorphic to ℤ.
Proof. By the Lemma above, the map n ↦ ne is a one-to-one correspondence ℤ → ℤe. It suffices to describe how multiplication and addition behave in ℤe. For positive n, m ∈ ℤ we have
(ne) ⊕ (me) = (e ⊕ e ⊕ ⋯ ⊕ e) ⊕ (e ⊕ e ⊕ ⋯ ⊕ e) = e ⊕ e ⊕ ⋯ ⊕ e = (n + m)e,
where the first two sums contain n and m terms respectively, and the combined sum contains n + m terms. Likewise,
(ne) ⊗ (me) = (e ⊕ e ⊕ ⋯ ⊕ e) ⊗ (e ⊕ e ⊕ ⋯ ⊕ e) = (e ⊗ e) ⊕ (e ⊗ e) ⊕ ⋯ ⊕ (e ⊗ e) = (mn) ⋅ (e ⊗ e) = (mn) ⋅ e,
where the expanded sum contains mn terms. The proof that ne ⊕ me = (n + m)e and ne ⊗ me = (nm)e when one or both of n and m is negative is similar. This shows that the map n ↦ ne is an isomorphism.
The theorem above can be extended further. Suppose n ∈ ℤ is any (nonzero) integer. Then the element ne just defined is nonzero, and therefore (because F is a field) it has a multiplicative inverse, (ne)⁻¹. Then for any nonzero rational number m/n ∈ ℚ, we may define (m/n) ⋅ e to mean m ⋅ (ne)⁻¹, i.e. (ne)⁻¹ ⊕ (ne)⁻¹ ⊕ ⋯ ⊕ (ne)⁻¹ (m terms). As before, we will often omit the ⋅ symbol for brevity. One can verify (see Exercise 40) that m(ne)⁻¹ = p(qe)⁻¹ if and only if m/n and p/q are equivalent fractions (i.e. if m, n, p, q are integers such that mq = np). Consequently, we have the following:
The Rational Subfield Theorem. Any ordered field F contains a subfield that is isomorphic to ℚ. This subfield is called the rational subfield of F and is denoted F₀.
Proof. We define F₀ = {(m/n)e | m/n ∈ ℚ}. It can be verified (see Exercise 41) that for any two rational numbers m/n and p/q, (m/n)e ⊕ (p/q)e = (m/n + p/q)e and (m/n)e ⊗ (p/q)e = ((m/n) ⋅ (p/q))e. Since the map m/n ↦ (m/n)e is a 1–1 correspondence between ℚ and F₀ that preserves addition and multiplication, it is an isomorphism.
Let's take stock of where we are so far. We have shown that every ordered field F contains a subfield isomorphic to the field of rational numbers. In many cases, F can be described by specifying what other elements it contains beyond the rational subfield. We can describe the field F = ℚ(√2), for example, by saying that it contains the rational subfield, the single irrational element √2, and every possible combination of those ingredients—and nothing more. (In fact, that's precisely what the notation ℚ(√2) is meant to convey.) In the case of ℝ, on the other hand, the problem of completely describing what the other ("irrational") elements are is quite a bit harder. The following definitions are helpful with that problem:
Definition. In any totally ordered set T, a subset S is said to be bounded above (sometimes we simply say "bounded") by b ∈ T if s ≤ b for all s ∈ S.
Definition. A least upper bound for a set S is an upper bound with the property that no smaller upper bound for the same set exists.
For example, in ℚ, the set S = {q | q² < 10} is bounded above by 4, because if s ∈ S then s < 4. However, 4 is not a least upper bound, because a smaller upper bound exists; for instance, 3.2 is also an upper bound for S (although it, too, is not a least upper bound).
Finally, we are ready to define what it means for an ordered field to be complete:
Completeness Property: An ordered field is said to be complete if every nonempty bounded subset has a least upper bound.
This seemingly simple property is actually one of the deepest and most significant properties in mathematics—so deep, in fact, that it completely characterizes the set of real numbers, up to isomorphism, as the only complete ordered field. In order to get a better understanding of what this property means, and what it looks like when it fails, let's consider a few important examples.
Example 1. ℚ, the set of rational numbers, is not complete. To see this, consider the set S = {q ∈ ℚ | q > 0 and q² ≤ 2}. This set is bounded above (for example, 1.5, 1.42, and 1.416 are all upper bounds for this set). However, we can show that it does not have a least upper bound, as follows. First, we show that there is no element in ℚ whose square is exactly 2. This is a classical proof that is found (in a slightly different version) all the way back in Euclid's Elements:
Theorem. (Irrationality of √2). There is no rational number whose square equals 2.
Proof. Suppose to the contrary that there were such a rational number. If so, then we would have two positive integers m and n whose ratio m/n would have the property that (m/n)² = 2. Moreover, we may assume without loss of generality that m and n have no common factors; if they did have a common factor, then m/n would be reducible to a fraction in lower terms, and we could then use the numerator and denominator of that fraction for m and n. So we may as well suppose that all common factors have already been removed, and m/n is in lowest terms.
We now ask: Can we say with certainty whether m or n is even? Note that we are assuming that (m/n)² = 2, which means that m² = 2n². This means that m² is an even number, which means (in turn) that m must be an even number; if m were not even, then it would be odd, which would mean m² would be odd as well, which we know it is not. So m is even. Therefore for some whole number k, m = 2k. However, once we know that m = 2k, we can conclude that m² = 4k². Since we already are assuming that m² = 2n², this means that 4k² = 2n², so n² = 2k². But this means that n² is even, so (by the same reasoning as for m), n must be even.
We have arrived at a contradiction: we began by assuming the existence of a pair of whole numbers m and n with no common factors; we have concluded that if (m/n)² = 2 then m and n
must both be even, and therefore would have a common factor, after all. The contradiction forces us to the conclusion that no such numbers m and n can exist.
Notwithstanding the fact that the equation x² = 2 has no rational solutions, we nevertheless can find rational numbers whose squares are as close to 2 as we like, in the following sense. Let s be any positive rational number, and set t = 2/s. By definition, st = 2. We can show that one of these numbers has a square less than 2, and the other number has a square greater than 2 (see Exercise 45). Without loss of generality, then, we may assume s² < 2 and t² > 2. Then we can prove the following theorem:
Theorem (Approximation Theorem for √2). With s and t as described above, there exist two numbers s' and t' whose squares are closer to 2: more precisely, the inequalities s < s', t' < t, s² < (s')² < 2, and 2 < (t')² < t² are all satisfied. Moreover it is possible to find values of s' and t' whose squares are arbitrarily close to 2.
Proof. Let t' = (s + t)/2, the average of s and t. A straightforward application of (O1) and (O2) shows that t' < t (see Exercise 46). It's also true that for any two distinct positive real numbers s and t, the average satisfies the inequality ((s + t)/2)² > st (see Exercise 47). Since st = 2, this means that 2 < (t')². Now we can set s' = 2/t'. Since t' < t, it follows that s' > s. Likewise since 2 < (t')², it follows that (s')² < 2. Finally, let ∆ = t² − 2. It can be shown (see Exercise 48) that (t')² − 2 ≤ ∆/4. This shows that the distance between 2 and the square of the upper bound can be made arbitrarily small by iterating this construction sufficiently often; as the values of the square of the upper bound get closer and closer to 2, so too do the values of the square of the lower bound.
Let's illustrate the Approximation Theorem with a specific example. We have already noted that the number 1.5 is an upper bound for the set S = {q ∈ ℚ | q > 0 and q² ≤ 2}, because (1.5)² > 2; so let t = 3/2. Then s = 2/(3/2) = 4/3 satisfies s² = 16/9 < 2. As indicated in the proof above, we construct t' = (s + t)/2 = 17/12, which (as you can confirm directly) is another upper bound for S. Then with s' = 2/t' = 24/17 we have the chain of inequalities
(4/3)² < (24/17)² < 2 < (17/12)² < (3/2)².
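The proof of the Approximation Theorem is constructive, and the construction can be iterated directly. Here is a sketch (ours, not the book's) using exact rational arithmetic, starting from s = 4/3 and t = 3/2 as in the example above:

```python
from fractions import Fraction

s, t = Fraction(4, 3), Fraction(3, 2)    # s^2 < 2 < t^2, and s*t = 2
for step in range(5):
    t = (s + t) / 2        # new upper bound: the average of s and t
    s = 2 / t              # new lower bound, so that s*t = 2 still holds
    print(step + 1, s, t, float(t * t - 2))

# The gap t^2 - 2 shrinks by a factor of at least 4 at each step (Exercise 48),
# so the squares of both bounds approach 2, even though no rational has square 2.
```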
and either possibility leads to a contradiction (Exercise 56). So even though we have proved that 2 does not have a square root among the
rational numbers, it does have a square root in a complete ordered field! If we introduce the notation √2 to stand for this least upper bound, then the set S described above may be identified with S_√2. Of course, in any other complete ordered field, we can also find a square root of 2, using the exact same argument. And this is the essence of what makes completeness such a powerful property: in a very real and specific sense, no complete ordered field can have any "extra" elements that are not also present in any other complete ordered field. Formalizing this last observation, we have finally arrived at our destination:
The Real Number Characterization Theorem. Any two complete ordered fields are isomorphic.
Proof. Let F₁ and F₂ be two complete ordered fields. We will (temporarily) denote the multiplicative identities of these fields by e₁ and e₂, respectively. Likewise let ℚ₁ and ℚ₂ denote the rational subfields of F₁ and F₂, respectively. Now, choose any DCOBRS S₁ ⊆ F₁. This DCOBRS consists of elements of ℚ₁ of the form (m/n)e₁, with m/n ∈ ℚ. But for each m/n ∈ ℚ with (m/n)e₁ ∈ F₁, there is a corresponding element (m/n)e₂ ∈ F₂ as well. So we may form a DCOBRS in F₂ via
S₂ = {(m/n)e₂ | (m/n)e₁ ∈ S₁}
This clearly establishes a one-to-one correspondence between the DCOBRSes of F₁ and the DCOBRSes of F₂. But we already know that the DCOBRSes of F₁ are in one-to-one correspondence with the elements of F₁, and the DCOBRSes of F₂ are in one-to-one correspondence with the elements of F₂. So we have, via a chain of correspondences, a bijection between F₁ and F₂, as follows:
• any element f₁ ∈ F₁ may be identified with the DCOBRS in F₁ of which it is the least upper bound;
• this DCOBRS is a subset of ℚ₁, and is a copy of a subset of ℚ;
• but this very same subset of ℚ also has a copy that sits inside of ℚ₂;
• this subset of ℚ₂ is a DCOBRS in F₂;
• and that DCOBRS has some least upper bound, f₂ ∈ F₂.
The map f₁ ↦ f₂ described above establishes the desired isomorphism between F₁ and F₂.
The fact that all complete ordered fields are isomorphic justifies us in (finally) stopping the use of intentionally artificial notation for the identity elements and operations in such a field F. Under the isomorphism just described, the additive and multiplicative identity elements θ and e ∈ F can simply be identified with the rational numbers 0 and 1, respectively; and from this point on, we will stop using the symbols ⊗ and ⊕ and simply use ordinary notation for multiplication and addition.
Exercises 52. In Example 3, why was it important that the number τ be transcendental? What would go wrong with the order 2 lead to contradictions, and therefore it must be the case that b2 = 2 .
1.12 Existence of a Complete Ordered Field
In the previous section we showed that any two complete ordered fields are isomorphic, and therefore the question "what are real numbers, really?" with which we opened §1.3 ceases to be important—any complete ordered field will do just as well as any other. However, attentive readers may have noticed a gap in the argument: While we have shown that all complete ordered fields are isomorphic, we have not yet shown that any actually exist! Until we demonstrate the existence of a complete ordered field, it is possible (in principle) that all of the work we have done up to this point was devoted to proving properties of mathematical objects that are in some sense impossible. In this section we remedy this omission by describing explicitly a construction of a complete ordered field, which we may then take as a definition of "the" set of real numbers ℝ. The elements of this field may not necessarily conform to our intuition of what a "number" is, but that does not really matter, as long as our constructed field has all of the properties we want ℝ to have, and as long as we know any other field with the same set of properties will be isomorphic to the field we construct.
The discussion at the beginning of §1.3 indicated the challenges of articulating a definition of "real number" that does not inadvertently rely on the very concept we wish to define. At the same time, however, it is impossible to define everything in mathematics; every definition must make use of more fundamental or "primitive" terms, and some of these must be left undefined, lest we find ourselves trapped in an infinite regress of definitions. For our purposes, we will take as undefined the notion of a natural number, and we will assume that natural numbers satisfy all of the following properties:
1. The set of all natural numbers, denoted ℕ, is not empty; in particular, it contains an element denoted by the symbol 1.
2. There exists a function S : ℕ → ℕ, called the successor function, which maps each natural number n to another natural number, denoted S(n), called the successor of n.
3. The successor function satisfies the following properties:
a. It is injective; that is, if m and n are natural numbers, then S ( m ) = S ( n ) if and only if m = n. b. It is not surjective; in particular, 1 is not the successor of any natural number.
4. Finally, ℕ satisfies the induction principle: if X is any subset of ℕ that contains 1 and is closed under the action of the successor function, then X = ℕ.
This set of four properties is known, collectively, as the Peano axioms for the natural numbers, named for the 19th-century Italian mathematician Giuseppe Peano, who initially formulated a version of them in 1889. The system of numbers characterized by the Peano axioms is intentionally very sparsely described, which makes it well suited as a foundation for defining more complicated number systems: in general, we want to develop our theories as much as we can while assuming as little as possible. A complete description of how the theory of real numbers can be built up from ℕ is beyond the scope of this textbook, but we will outline some of the main steps along the way. First, we have the following important lemma:
Lemma. Every natural number is either 1 or the successor of another natural number.
Proof. Let X ⊆ be the set of all natural numbers that are either 1 or the successor of another natural number. We show that X is closed under the successor function. Let n ∈ X be any element; we need to show that S ( n ) ∈ X as well. But this is obvious, by the definition of X . Since X contains 1 and is closed under S, by the induction principle X = . This proves that every natural number is either 1 or the successor of another natural number. This lemma makes it possible to define a binary operation, called the sum operation, as follows: Definition. The sum of two natural numbers, denoted n + m, is defined as follows: (a) If m = 1 then n + 1 = S ( n ). (b) If m ≠ 1 then m = S ( p ) for some p ∈ ; then we define n + m = S ( n + p ). The sum operation is also called addition; the two words are used interchangeably. Note that the definition above is a recursive definition, in that in order to know the meaning of n + S ( p ) we have to already know the meaning of n + p and then take its successor. Similarly, we can recursively define a product operation (also called multiplication): Definition. The product of two natural numbers, denoted n ⋅ m, is defined as follows: (a) If m = 1 then m ⋅1 =m. (b) If m ≠ 1 then m = S ( p ) for some p ∈ ; then we define n ⋅ m =⋅ ( n p ) + n. The following important properties can be proved directly from the definitions above: Theorem. (Properties of sum and product in ). Let m, n, p be any three natural numbers. Then: (a) Addition is commutative: m + n =+ n m. (b) Addition is associative: ( m + n ) + p = m + ( n + p ).
Numbers and Number Systems 59 (c) Multiplication is commutative: m ⋅ n =⋅ n m. (d) Multiplication is associative: ( m ⋅ n ) ⋅ p = m ⋅ ( n ⋅ p ). (e) Multiplication distributes over sums: m ⋅ ( n + p= ) ( m ⋅ n ) + ( m ⋅ p ). Proof. Omitted. We can also define an order relation on : Definition. Given two natural numbers m, n, we write m < n if there exists p ∈ such that m+ p = n. This order relation has the following important property, which should look familiar to you: Theorem. (Trichotomy in ). Let m, n, be any two natural numbers. Then exactly one of the following three properties holds: (a) m < n, (b) n < m, or (c) m = n. Proof. Omitted. Moreover, the order relationship is interconnected with the operations in : Theorem. (Order and operations in ). Let m < n and let p be any natural number. Then m + p < n + p and m ⋅ p < n ⋅ p.
Proof. Omitted. These properties suggest that is something like an ordered ring. However, is not a ring—nor even an additive group—because it lacks both an additive identity element and additive inverses. However, we can remedy this by constructing a larger set of numbers, called the integers. The elements of this set are built up from the natural numbers, as follows: first, we consider the set × , consisting of all ordered pairs of natural numbers. Then we define a relation on these ordered pairs as follows: given any two ordered pairs ( m, n ) and ( p, q ), we write ( m, n ) ∼ ( p, q ) if m + q =+ n p . Finally, it can be shown that the relation ~ satisfies three important properties: Lemma. The relation ~ defined above satisfies the following: (a) (Reflexivity) For any m, n ∈, ( m, n ) ∼ ( m, n ). (b) (Symmetry) For any m, n, p, q ∈ , if ( m, n ) ∼ ( p, q ) then ( p, q ) ∼ ( m, n ). (c) (Transitivity) For any m, n, p, q, r, s ∈ , if ( m, n ) ∼ ( p, q ) and ( p, q ) ∼ ( r, s ), then ( m, n ) ∼ ( r, s ). Proof. Omitted.
60 Numbers and Number Systems Any relation that is reflexive, symmetric and transitive, in the sense of the Lemma above, is called an equivalence relation. Given any ordered pair ( m, n ) of natural numbers, the equivalence class of ( m, n ), denoted ( m, n ) , is the set of all ordered pairs that are equivalent to ( m, n ). The following important lemma tells us that there are only three types of equivalence classes: Lemma. For any ( m, n ), exactly one of the three following cases holds: (a) ( m, n ) ∼ ( p + 1,1) for some p ∈ . In this case, we denote ( p + 1,1) with the symbol [ p ]. (b) ( m, n ) ∼ (1, p + 1) for some p ∈ . In this case, we denote (1, p + 1) with the symbol [ − p ]. (c) ( m, n ) ∼ (1,1). In this case, we denote (1,1) with the symbol [ 0 ]. Proof. Omitted. We now are ready to define , the set of integers: Definition. An integer is an equivalence class of ordered pairs of natural numbers. Furthermore, the order, sum and product of integers is defined as follows: (a) (b) (c)
( m, n ) < ( p, q ) if and only if m + q < n + p . ( m, n ) + ( p, q ) = ( m + p, n + q ) . ( m, n ) ⋅ ( p, q ) = ( m ⋅ p + n ⋅ q, n ⋅ p + m ⋅ q ).
(See Exercises 57–59.) These definitions often seem non-intuitive the first time they are encountered. We are accustomed to thinking of an integer as a single object; here it is being defined as a set containing infinitely many objects, each of which is an ordered pair of natural numbers. It may be helpful to think of the expression ( m, n ) as a roundabout way of writing what will eventually be denoted, using the operation of subtraction, as m − n . The following theorem can now be proved: Theorem. (Properties of sum and product in ). Let p, q, and r be any three integers. Then: (a) Addition is commutative: p + q =+ q p. (b) Addition is associative: ( p + q ) + r =p + ( q + r ). (c) [ 0 ] is an additive identity: p + [ 0 ] = p . (d) Additive inverses: for any p, there exists − p such that p + ( − p ) = [ 0 ]. (e) Multiplication is commutative: p ⋅ q =⋅ q p. (f) Multiplication is associative: ( p ⋅ q ) ⋅ r =p ⋅ ( q ⋅ r ). (g) [1] is a multiplicative identity: p ⋅ [1] = p. (h) Multiplication distributes over sums: p ⋅ ( q + r= ) ( p ⋅ q ) + ( p ⋅ r ). Proof. Omitted. This theorem states, in the language of §1.6–7, that is a ring. Furthermore, for any integer p ≠ [ 0 ], either [ 0 ] < p or p < [ 0 ]; accordingly we say that p is positive or negative, respectively.
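To make the pair construction concrete, here is a small sketch (ours, with invented helper names) implementing the equivalence, sum, and product of pairs of natural numbers exactly as defined above:

```python
def equiv(x, y):
    """(m, n) ~ (p, q)  iff  m + q == n + p."""
    (m, n), (p, q) = x, y
    return m + q == n + p

def add(x, y):
    """(m, n) + (p, q) = (m + p, n + q)."""
    (m, n), (p, q) = x, y
    return (m + p, n + q)

def mul(x, y):
    """(m, n) * (p, q) = (m*p + n*q, n*p + m*q)."""
    (m, n), (p, q) = x, y
    return (m * p + n * q, n * p + m * q)

three = (4, 1)       # a representative of [3]:  think "4 - 1"
minus_five = (1, 6)  # a representative of [-5]: think "1 - 6"

print(equiv(add(three, minus_five), (1, 3)))    # [3] + [-5] = [-2]:  True
print(equiv(mul(three, minus_five), (1, 16)))   # [3] * [-5] = [-15]: True
```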
Numbers and Number Systems 61 We can also prove the analogue of the two defining properties of an ordered field (refer back to §1.8). Moreover, if we identify each natural number n with the corresponding integer [ n] = ( n + 1,1) , then we may regard as a subset of , in a natural sense (more precisely, we should say that contains within it an isomorphic copy of ). This allows us to drop the awkward brackets and simply write 0,1, etc., rather than [ 0 ] , [1] , and so on. But why stop here? We were able to construct integers as equivalence classes of ordered pairs of natural numbers; let’s use the same strategy to construct the rational numbers. Definition. The set of rational numbers, denoted , is the set of equivalence classes of ordered pairs ( p, q ), where p, q ∈, q ≠ 0, and with equivalence defined by
( p, q ) ∼ (r, s ) if and only if p ∙ s = q ∙ r Just as we used addition, multiplication, and the order relation in to define corresponding notions in , so too we can “lift” these definitions from to obtain definitions of addition, multiplication, and order in . (See Exercises 60–62). Furthermore, just as we identified with an isomorphic copy of it that is a subset of , so too may we identify with an isomorphic copy of it that is a subset of : we simply associate, to each integer p, the rational number that is the equivalence class of the ordered pair ( p,1). Finally, we may prove that has all of the properties of an ordered field that were enumerated in §1.6–1.8 (Exercise 63–65). It is worth pausing at this point to reflect on the journey so far. We have invested a fair amount of time in showing (if admittedly in outline form only) how to develop, from first principles, a set that we previously accepted—all the way back in §1.7, Example 2—as an example of an ordered field. We have done this to show that what was previously assumed to be true can, in fact, be constructed and proved. What, now, shall we do about ? In the spirit of what has gone before, we want to somehow construct a new field, the field of real numbers, using rational numbers as raw ingredients. How can we do this? The answer to this question is actually implicit in the work we did in proving the Real Number Characterization Theorem in §1.11. In that section we showed that the elements of any complete ordered field can be put in one-to-one correspondence with the downward- closed open bounded rational subsets (DCOBRSes) of their rational subfields. Since every complete ordered field’s rational subfield is isomorphic to , this suggests a strategy: why not take the DCOBRSes of to be the elements of our complete ordered field? In more detail: we will now borrow an important idea from the German mathematician Richard Dedekind, who recognized in 1872 that every real number can be associated to a unique ordered pair ( X ,Y ) in which X and Y are nonempty subsets of satisfying the following three properties: (a) X is downward-closed; i.e., if a ∈ X and b < a then b ∈ X . (b) Y is upward-closed; i.e., if a ∈Y and b > a then b ∈Y . (c) X does not contain a maximal element. Informally, we can think of “cutting” the set of rationals into a “lower set” X and an “upper set”Y . If the upper set Y contains a minimal rational q, then the rational line is cut
at q; on the other hand, if Y does not contain a minimal element, then the rational line is cut at a "hole" where an irrational belongs.
The definition of a Dedekind cut as an ordered pair (X, Y) is slightly redundant, insofar as if X is any DCOBRS, then its complement ℚ ∖ X is automatically upward-closed, and hence (X, ℚ ∖ X) is a cut in Dedekind's sense (Exercise 66). Thus, we don't really need to make explicit reference to Y at all. Instead, we make the following concise definition:
Definitions. A Dedekind cut is a downward-closed open bounded subset of ℚ. A Dedekind cut is also called a real number. The set of all Dedekind cuts (that is, the set of real numbers) is denoted ℝ.
Once again, this definition may be hard to reconcile with our intuitive notions of what a real number "is". We think of a real number as a single number; with our definitions above, a real number is an infinite set of rationals, each of which is an equivalence class of ordered pairs of integers; each of those integers is, in turn, an equivalence class of ordered pairs of natural numbers. Whew!
In order to make sense of this definition, it remains to show how to define addition, subtraction, inversion (additive and multiplicative), and order in our set of Dedekind cuts. Some of these are simpler to define than others. For example, any rational number q may be identified with the Dedekind cut {x | x < q}. This allows us, for example, to denote the set of negative rational numbers simply as 0.
Definitions. Let X and Y be two real numbers (i.e., two Dedekind cuts). Then we define:
(a) X + Y = {x + y | x ∈ X and y ∈ Y}.
(b) −X = {z − x | z < 0 and x ∉ X}.
(c) X < Y if and only if X ⊂ Y.
(d) If X, Y are both positive, then X ⋅ Y = {x ⋅ y | x and y are both positive, x ∈ X and y ∈ Y} ∪ 0.
(e) If X < 0 and Y > 0, we define X ⋅ Y = −(−X ⋅ Y), using (b) and (d) above. Similarly if X > 0 and Y < 0 we define X ⋅ Y = −(X ⋅ −Y), and if both X and Y are negative we define X ⋅ Y = (−X) ⋅ (−Y).
(f) If X > 0 we define X⁻¹ = {z/x | z < 1 and x ∉ X}.
(g) If X < 0 we define X⁻¹ = −(−X)⁻¹.
With these definitions in place, it is possible to prove explicitly (although the details are rather long) that , so defined, is a complete ordered field. This construction of a complete ordered field is admittedly quite technical and may in some respects feel unsatisfying. Do we really mean to say that real numbers are Dedekind cuts? Fortunately, we do not need to take a position on this question. Whatever real numbers “really are”, we know that we want the set of real numbers to be a complete ordered field. We have proven that all complete ordered fields are isomorphic, and therefore any one instance of such a field is just as good as any other. Now that we have constructed an example of such a field, we are free to use it as a model of the real numbers; alternatively, if we prefer to construct a different complete ordered field and use that as our model of the real numbers, we may do so, secure in the knowledge that the resulting theory will be identical in all significant respects, regardless of the nature of the underlying “numbers” we work with.
Exercises 57. Let m, n, m' , n' , p, q, p' , and q' be natural numbers. Show that if ( m, n ) ∼ ( m' , n' ) and ( p, q ) ∼ ( p' , q' ), then m + q < n + p if and only if m' + q' < n' + p' . (This shows that the definition of integer order is well defined.) 58. Let m, n, m' , n' , p, q, p' , and q' be natural numbers. Show that if ( m, n ) ∼ ( m' , n' ) and ( p, q ) ∼ ( p' , q' ) then ( m + p, n + q ) ∼ ( m' + p' , n' + q' ). (This shows that the definition of integer addition is well defined.) 59. Let m, n, m' , n' , p, q, p' , and q' be natural numbers. Show that if ( m, n ) ∼ ( m' , n' ) and ( p, q ) ∼ ( p' , q' ) then ( mp + nq, np + mq ) ∼ ( m' p' + n' q' , n' p' + m' q' ). (This shows that the definition of integer multiplication is well defined.) 60. Let p, q, p' , q' , r, s, r' and s' be integers. Show that if ( p, q ) ∼ ( p' , q' ) and ( r, s ) ∼ ( r' , s' ), then ps < rq if and only if p' s' < r' q' . Using this result, explain how to define an order relation on the set of rational numbers. 61. Let p, q, p' , q' , r, s, r' and s' be integers. Show that if ( p, q ) ∼ ( p' , q' ) and ( r, s ) ∼ ( r' , s' ), then ( ps + rq, qs ) ∼ ( p' s' + r' q' , q' s' ). Using this result, explain how to define addition of rational numbers. 62. Let p, q, p' , q' , r, s, r' and s' be integers. Show that if ( p, q ) ∼ ( p' , q' ) and ( r, s ) ∼ ( r' , s' ), then ( pr, qs ) ∼ ( p' r' , q' s' ). Using this result, explain how to define multiplication of rational numbers. 63. Prove that addition of rational numbers, as defined in Exercise 61, is commutative and associative; that the rationals contain an additive identity; and that every rational contains an additive inverse. 64. Prove that multiplication of rational numbers, as defined in Exercise 62, is commutative and associative; that the rationals contain an additive identity; and that every rational contains an additive inverse. 65. Prove that the order relation on the set of rational numbers, as defined in Exercise 60, satisfies the axioms of an ordered field. 66. Show that if X is any downward-closed open bounded rational subset, then its complement X = X is automatically upward- closed, and hence the pair (X , X ) is a cut in Dedekind’s sense.
1.13 Decimal Representations
We began this chapter by considering several proposed ways of defining real numbers, one of which was "A real number is a (possibly infinite) decimal." We rejected that definition at the time, both because it does not adequately distinguish between the representation of a real number and the number itself, and because two different decimals may represent the same real number. Up to this point, we have managed to avoid decimals entirely in this chapter—but the time has come to return to the topic, and develop a precise theory of just what we mean by "a (possibly infinite) decimal". We begin with the simpler, related problem of representing an integer as a finite string of digits.
Proposition (Decimal Representation of Integers). For any positive integer N, there exists a unique finite ordered sequence of numbers (n_k, n_{k−1}, …, n_0) satisfying all three of the following properties:
(a) Each n_i is a member of the set {0, 1, 2, …, 9};
(b) n_k ≠ 0; and
(c) N = n_k 10^k + n_{k−1} 10^{k−1} + ⋯ + n_1 10 + n_0.
We call such a finite sequence (n_k, n_{k−1}, …, n_0) a decimal representation of N.
Proof. We construct the decimal representation (n_k, n_{k−1}, …, n_0) recursively, beginning with n_k and working our way down to n_0.
(a) Let k be the largest natural number such that 10^k ≤ N (such a k exists by the Archimedean property of ℝ; see Exercise 67). Then let n_k be the largest natural number such that n_k 10^k ≤ N. By the choice of k, n_k is automatically a member of the set {1, 2, 3, …, 9}.
(b) Next, let n_{k−1} be the largest integer such that n_{k−1} 10^{k−1} ≤ N − n_k 10^k; then by construction n_{k−1} is automatically a member of the set {0, 1, 2, …, 9}.
(c) Continuing, for each i < k let n_i be the largest integer such that n_i 10^i ≤ N − (n_k 10^k + n_{k−1} 10^{k−1} + ⋯ + n_{i+1} 10^{i+1}).
(d) After k steps, we reach n_0, which is chosen to be n_0 = N − (n_k 10^k + n_{k−1} 10^{k−1} + ⋯ + n_1 10). The right-hand side of this, by virtue of the construction of the preceding values n_i, is automatically a member of {0, 1, 2, …, 9} (Exercise 68).
The preceding construction shows that N can be written in the form N = n_k 10^k + n_{k−1} 10^{k−1} + ⋯ + n_1 10 + n_0, where each n_i is a member of {0, 1, 2, …, 9} and with n_k ≠ 0. It remains to show that such a decimal representation is unique. Suppose that for some number N we have two different decimal representations (n_k, n_{k−1}, …, n_0) and (m_{k'}, m_{k'−1}, …, m_0). Choose the least such N for which this is possible. Observe that if the sequences (n_k, n_{k−1}, …, n_0) and (m_{k'}, m_{k'−1}, …, m_0) are different, then there must be some minimal i such that n_i ≠ m_i. Then it follows that
n_k 10^k + n_{k−1} 10^{k−1} + ⋯ + n_{i+1} 10^{i+1} = m_{k'} 10^{k'} + m_{k'−1} 10^{k'−1} + ⋯ + m_{i+1} 10^{i+1}
Dividing both sides of this equation by 10^{i+1}, we conclude that
n_k 10^{k−i−1} + n_{k−1} 10^{k−i−2} + ⋯ + n_{i+1} = m_{k'} 10^{k'−i−1} + m_{k'−1} 10^{k'−i−2} + ⋯ + m_{i+1}
Let N' be the integer represented by the two sides of this equation. We have shown that (n_k, …, n_{i+1}) and (m_{k'}, …, m_{i+1}) are both decimal representations of N', contradicting the assumed minimality of N. Therefore no such N can exist, which proves that decimal representations are unique.
As an example, let N be the integer 7964. Then the construction above plays out as follows:
• We can verify that 10^3 < 7964 but 10^4 > 7964, so k = 3.
• Next, we note that 7 ⋅ 10^3 < 7964 but 8 ⋅ 10^3 > 7964, so n_3 = 7.
• We continue by computing N − 7 ⋅ 10^3 = 964. Then we observe that 9 ⋅ 10^2 < 964 but 10 ⋅ 10^2 > 964, so n_2 = 9.
• Next we compute N − (7 ⋅ 10^3 + 9 ⋅ 10^2) = 64, and observe that 6 ⋅ 10 < 64 but 7 ⋅ 10 > 64, so n_1 = 6.
• Finally we compute N − (7 ⋅ 10^3 + 9 ⋅ 10^2 + 6 ⋅ 10) = 4, and set n_0 = 4.
Therefore the decimal representation of 7964 is precisely (7, 9, 6, 4)—in other words, this algorithm extracts the familiar base-10 digits of the number N, one at a time. It is hopefully clear that there is nothing special about the number 10 in the proposition above. One can just as well substitute another positive whole number as the base in place of 10:
Proposition (Base b Representation of Integers). Fix a positive integer b. Then for any positive integer N, there exists a unique finite ordered sequence of numbers (n_k, n_{k−1}, …, n_0) satisfying all three of the following properties:

(a) Each n_i is a member of the set {0, 1, 2, …, b − 1};
(b) n_k ≠ 0; and
(c) N = n_k·b^k + n_{k−1}·b^{k−1} + ⋯ + n_1·b + n_0.

We call such a finite sequence (n_k, n_{k−1}, …, n_0) a base b representation of N.
Proof. Exercise 69.

For example, letting N = 7964 as before, the base 6 representation of N is the sequence (1, 0, 0, 5, 1, 2), and the base 16 representation of N is the sequence (1, 15, 1, 12) (Exercise 70). Notice that some of the "digits" used in the base 16 representation are themselves two-digit base 10 representations of numbers! We could, if we wanted, introduce distinct single-character symbols for the numbers 10 through 15. In fact, this is commonly done: in computer programming the base 16 system (called "hexadecimal") is widely used to represent numbers; in that context, the letter A is used to stand for 10, B is used for 11, and so on, through F. Then the base 16 representation of our example number N would be (1, F, 1, C). If all digits are represented by a single-character symbol, we typically omit the commas and parentheses for convenience, and indicate the base with a subscript. On the other hand if not all digits are represented by a single-character symbol, we may still omit the parentheses but use colons or some other character to separate the "place values". Thus all of the following are representations of the same number in different bases:

7964_10,   100512_6,   1F1C_16,   2:12:44_60

The base 60 notation may seem familiar to you—it is almost34 exactly the system we customarily use for measuring time!
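Readers who want to experiment with this construction can translate it directly into a short program. The Python sketch below is only a rough illustration (the function name base_b_digits is ours, not terminology from this book): it chooses each digit greedily, exactly as in steps (a)–(d) of the proof, with integer division playing the role of "the largest digit whose contribution still fits."

```python
def base_b_digits(N, b=10):
    """Greedy digit extraction, following the construction in the proof above:
    find the largest k with b**k <= N, then at each power choose the largest
    digit whose contribution still fits in what remains of N."""
    assert N > 0 and b >= 2
    k = 0
    while b ** (k + 1) <= N:          # largest k with b**k <= N
        k += 1
    digits, remainder = [], N
    for i in range(k, -1, -1):
        d = remainder // (b ** i)     # largest d with d * b**i <= remainder
        digits.append(d)
        remainder -= d * (b ** i)
    return digits

print(base_b_digits(7964))            # [7, 9, 6, 4]
print(base_b_digits(7964, 6))         # [1, 0, 0, 5, 1, 2]
print(base_b_digits(7964, 16))        # [1, 15, 1, 12]
```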
So much for integers. What about real numbers in general? We begin with an elementary observation about real numbers:

Proposition. Let r be any positive real number. Then r can be written uniquely in the form r = N + d, where N is an integer and 0 ≤ d < 1.

Proof. By the Archimedean property of ℝ, there exists some integer n such that n > r. Let n_0 be the least such integer; then set N = n_0 − 1 and d = r − N. We observe that by construction N ≤ r (if not, then N would be greater than r, contradicting the minimality of n_0),
so d = r − N ≥ 0. Further, N + 1 > r, so d < 1. Finally, we show that this representation is unique: Suppose we have both r = N + d and r = N′ + d′, where N and N′ are both integers, and where d and d′ are both greater than or equal to 0 and strictly less than 1. If N ≠ N′ then one of them is larger than the other; without loss of generality, we may assume N′ > N. But then from N + d = N′ + d′ we conclude that d − d′ = N′ − N is a positive integer. This is impossible, since d − d′ is less than d, which is less than 1. Therefore N = N′; from this it follows that d = d′ as well.

When a positive real number r is decomposed in the form N + d as above, we call N the integer part of r, and call d the fractional part of r. We have already shown how to represent the integer part of r as a decimal sequence (or indeed in any other base b). Our goal now is to show how to represent the fractional part of r as a decimal sequence as well—that is, as a (possibly infinite) sequence of integers from the set of digits {0, 1, 2, …, 9}.

Definition. For any real number d with 0 ≤ d < 1, the proper decimal expansion of d is the sequence d_1, d_2, d_3, … defined recursively as follows:

(a) d_1 is the largest integer such that d_1 ≤ 10d;
(b) d_2 is the largest integer such that d_2 ≤ 100d − 10d_1;
(c) d_3 is the largest integer such that d_3 ≤ 1000d − (100d_1 + 10d_2); and
(d) in general, d_k is the largest integer such that d_k ≤ 10^k·d − Σ_{i=1}^{k−1} 10^{k−i}·d_i.
For example, suppose d = 1/8. Then:

(a) To compute d_1 we first calculate 10·(1/8) = 5/4; then d_1 = 1, because 1 = 4/4 ≤ 5/4 but 2 = 8/4 > 5/4.
(b) To compute d_2 we first calculate 100·(1/8) − 10(1) = 5/2; then d_2 = 2 because 2 = 4/2 ≤ 5/2 but 3 = 6/2 > 5/2.
(c) To compute d_3 we calculate 1000·(1/8) − (100(1) + 10(2)) = 5; then d_3 = 5 because 5 ≤ 5 but 6 > 5.
(d) To compute d_4 we calculate 10^4·(1/8) − (1000(1) + 100(2) + 10(5)) = 0; then d_4 = 0.
(e) Likewise, for all subsequent terms in the sequence we have d_k = 0.

Thus the proper decimal expansion of 1/8 is the sequence 1, 2, 5, 0, 0, 0, … This is an example of a real number with a terminating decimal expansion, in the sense that only finitely many terms of the sequence are nonzero.

In contrast, consider the example of d = 9/37. In this case, the proper decimal expansion is computed as follows:
(a) To compute d_1 we first calculate 10·(9/37) = 90/37; then d_1 = 2, because 2 = 74/37 ≤ 90/37 but 3 = 111/37 > 90/37.
(b) To compute d_2 we first calculate 100·(9/37) − 10(2) = 160/37; then d_2 = 4 because 4 = 148/37 ≤ 160/37 but 5 = 185/37 > 160/37.
(c) To compute d_3 we calculate 1000·(9/37) − (100(2) + 10(4)) = 120/37; then d_3 = 3 because 3 = 111/37 ≤ 120/37 but 4 = 148/37 > 120/37.
(d) To compute d_4 we calculate 10000·(9/37) − (1000(2) + 100(4) + 10(3)) = 90/37; then d_4 = 2 because 2 = 74/37 ≤ 90/37 but 3 = 111/37 > 90/37.

Notice that the number 90/37 has reappeared in the algorithm! From this point on the decimal sequence will repeat. The proper decimal expansion of 9/37 is therefore 2, 4, 3, 2, 4, 3, …, and is an example of a periodic or repeating decimal expansion.

Once again it will probably come as no surprise to the reader that there is nothing special about the number 10 here, and that indeed any positive base b could be used. We are now ready to introduce the proper decimal representation of an arbitrary real number: We write the decimal representation of its integer part, followed by a decimal point, i.e. the period symbol (.), followed by the proper decimal expansion of its fractional part. Thus, the proper decimal representation for 25/8 = 3 + 1/8 is written 3.125000…, and the proper decimal representation for 9296/37 = 251 + 9/37 is written 251.243243243…

For an arbitrarily chosen real number, it is not necessarily the case that the proper decimal representation (or the base b representation) will either terminate or repeat. In fact, the next Theorem gives precise conditions on when these two cases occur.
Theorem (Repeating and terminating representations). Fix a base b, and choose any r ∈ ℝ. Then:

(a) The proper base b representation of r will terminate or repeat if and only if r is rational.
(b) Furthermore, if r ∈ ℚ can be written in lowest terms as r = m/n (i.e., m and n have no common factors), then the proper base b representation of r terminates if and only if every prime factor of n is also a factor of b.

Proof. Exercises 71–75.
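As an informal illustration of part (b), the following Python sketch (the helper names prime_factors and terminates are our own) tests the prime-factor condition for a few of the examples in this section. It assumes the fraction has been reduced to lowest terms, which Fraction does automatically.

```python
from fractions import Fraction

def prime_factors(n):
    """Set of prime factors of a positive integer (simple trial division)."""
    factors, p = set(), 2
    while p * p <= n:
        while n % p == 0:
            factors.add(p)
            n //= p
        p += 1
    if n > 1:
        factors.add(n)
    return factors

def terminates(m, n, b):
    """Part (b) of the theorem: m/n (in lowest terms) has a terminating
    base-b representation iff every prime factor of n divides b."""
    r = Fraction(m, n)                 # reduces to lowest terms
    return prime_factors(r.denominator) <= prime_factors(b)

print(terminates(1, 8, 10))   # True:  1/8 = 0.125
print(terminates(9, 37, 10))  # False: 9/37 = 0.243243...
print(terminates(1, 3, 6))    # True:  1/3 = 0.2 in base 6
```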
The theory we have developed so far is adequate for most purposes, but it leaves some questions unaddressed. In particular, we have shown how to associate a decimal (or base b) expansion to each real number, but we have not yet shown whether this association is a bijection. That is to say: Fix a base b. Then given any (possibly infinite) sequence (d_1, d_2, d_3, …), where each d_i ∈ {0, 1, 2, …, b − 1}, does there exist a real number r ∈ ℝ corresponding to that sequence? The answer is: almost.

Proposition (Correspondence Theorem for Proper Representations). Corresponding to any sequence (d_1, d_2, d_3, …), where each d_i ∈ {0, 1, 2, …, b − 1}, and in which the sequence does not eventually become the constant sequence b − 1, b − 1, b − 1, …, there exists a real number r ∈ ℝ whose proper base b representation is the given sequence.

Proof. We will use the sequence (d_1, d_2, d_3, …) to construct a DCOBRS (that is, a Dedekind cut) corresponding to the desired real number. We do this in stages.

(a) First, let S_1 be the set of all rational numbers less than or equal to d_1/10. This is clearly a DCOBRS.
(b) Next, let S_2 be the set of all rational numbers less than or equal to d_1/10 + d_2/100. This is clearly also a DCOBRS, and S_1 ⊆ S_2.
(c) In general, let S_k be the set of all rational numbers less than or equal to Σ_{i=1}^{k} d_i/10^i. Then each S_k is a DCOBRS, and they are nested in an infinite ascending chain: S_1 ⊆ S_2 ⊆ S_3 ⊆ ⋯.
(d) Now form the union S = ∪_k S_k. This is also a DCOBRS (see Exercise 78).

We claim that the real number corresponding to S—that is, its least upper bound—has precisely the base b representation (d_1, d_2, d_3, …). We leave the details of verifying this claim to the reader.

The preceding proposition shows that given a fixed base, there is almost a 1-to-1 correspondence between sequences (d_1, d_2, d_3, …) and real numbers r with 0 ≤ r < 1. The proof given above does not work for sequences that eventually become constant sequences of the form b − 1, b − 1, b − 1, … That is to say, there is no real number whose proper decimal representation is 0.249999…, or whose proper base 5 representation is 0.34444… Exercises 78 and 79 suggest why this is so. Fortunately, this problem can be solved: we just need to go back and define the decimal (or base b) representations in a different way:

Definition. For any real number d with 0 ≤ d < 1, the nonterminating decimal expansion of d is the sequence d_1, d_2, d_3, … defined recursively as follows:

(a) d_1 is the largest integer such that d_1 < 10d;
(b) d_2 is the largest integer such that d_2 < 100d − 10d_1;
(c) d_3 is the largest integer such that d_3 < 1000d − (100d_1 + 10d_2);
(d) and in general d_k is the largest integer such that d_k < 10^k·d − Σ_{i=1}^{k−1} 10^{k−i}·d_i.
This definition should be contrasted carefully with that of the proper decimal expansion, defined earlier. They are exactly alike in all respects, except that the ≤ symbol has been replaced throughout with the < symbol. Let's see what the ramifications of this change are. Consider once again the example d = 1/8. Then:

(a) To compute d_1 we first calculate 10·(1/8) = 5/4; then d_1 = 1, because 1 = 4/4 < 5/4 but 2 = 8/4 ≥ 5/4.
(b) To compute d_2 we first calculate 100·(1/8) − 10(1) = 5/2; then d_2 = 2 because 2 = 4/2 < 5/2 but 3 = 6/2 ≥ 5/2.
(c) So far things have unfolded exactly as they did when we computed the proper decimal representation of 1/8. Here is where things change: To compute d_3 we calculate 1000·(1/8) − (100(1) + 10(2)) = 5. This time, instead of taking d_3 = 5, we have d_3 = 4, because 4 < 5 but 5 ≥ 5.
(d) Now to compute d_4 we calculate 10^4·(1/8) − (1000(1) + 100(2) + 10(4)) = 10; then d_4 = 9, because 9 < 10 but 10 ≥ 10.
(e) Likewise, for all subsequent terms in the sequence we have d_k = 9.

Thus the nonterminating decimal expansion of 1/8 is the sequence 1, 2, 4, 9, 9, 9, … Now we can see why this is called a "nonterminating" decimal expansion! In fact, a close inspection of the two algorithms shows that the two different decimal representations of a real number r (i.e. its proper and nonterminating representations) will always coincide—unless the proper decimal expansion terminates. In that case, the proper expansion will end with an infinite string of 0s, and the nonterminating expansion will end with an infinite string of 9s.

This essentially completes the theory of decimal and base b representations. We have shown that every real number r can be expressed as a unique base b representation, unless the real number is a rational number of the form m/n, where every prime factor of n is also a factor of the base b. In that case, and in that case only, there are two distinct representations, one terminating and one nonterminating.

Seen from this vantage point, the perennial question "Is 0.999… = 1.0?" seems both trivial and subtle at the same time. The correct response is that neither one of these expressions is a number. Rather, they are both decimal representations of a number—the same number, to be sure35. That a single number can be represented in more than one way should not come as a surprise to anyone; after all, we are all accustomed to the fact that 1/3, 3/9 and 4/12 are different representations of the same rational number, and we have seen that 7964_10, 100512_6, and 1F1C_16 are different ways of describing the same integer. Seen from this perspective, there is nothing unusual about the fact that a real number may have more than one equivalent decimal representation, too36.
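The two recursive definitions can be compared side by side with a short computation. The following Python sketch (the function name expansion and its parameters are our own) follows the definitions literally, using exact rational arithmetic so that the distinction between ≤ and < is not blurred by floating-point rounding.

```python
from fractions import Fraction
from math import floor, ceil

def expansion(d, num_digits=8, proper=True):
    """Digits of the proper (uses <=) or nonterminating (uses <) decimal
    expansion of a rational 0 <= d < 1, following the recursive definitions
    in the text."""
    d = Fraction(d)
    digits = []
    for k in range(1, num_digits + 1):
        target = 10**k * d - sum(10**(k - i) * digits[i - 1] for i in range(1, k))
        # largest integer <= target, or strictly < target, respectively
        digits.append(floor(target) if proper else ceil(target) - 1)
    return digits

print(expansion(Fraction(1, 8)))                  # [1, 2, 5, 0, 0, 0, 0, 0]
print(expansion(Fraction(1, 8), proper=False))    # [1, 2, 4, 9, 9, 9, 9, 9]
print(expansion(Fraction(9, 37)))                 # [2, 4, 3, 2, 4, 3, 2, 4]
```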
Exercises

67. Let N be any positive integer. Use the Archimedean property of ℝ to prove that there exists a largest natural number k such that 10^k ≤ N.
68. In the proof of the Decimal Representation of Integers Proposition, show that at each step the integer n_i is a member of the set {0, 1, 2, …, 9}.
69. Prove the Base b Representation of Integers Proposition.
70. Find the base b representations of N = 7964 in each of the following bases: b = 6, b = 7, b = 12, b = 16, and b = 20.
71. Let r be any real number. Prove that if the base b representation of r terminates, then r is rational.
72. Let r be any real number. Prove that if the base b representation of r repeats, then r is rational.
73. Prove that the base b representation of any rational number will either terminate or repeat.
74. Let r = m/n be a rational number, where m and n have no common factors. Prove that if the base b representation of r terminates, then every prime factor of n is also a factor of b.
75. Let r = m/n be a rational number, where m and n have no common factors. Prove that if every prime factor of n is also a factor of b, then the base b representation of r terminates.
76. Classify whether 1/2, 1/3, and 1/5 are terminating or repeating in each of the following bases: (a) b = 6 (b) b = 15 (c) b = 20 (d) b = 24.
77. Find the base b representation of 1/2, 1/3, and 1/5 in each of the following bases: (a) b = 6 (b) b = 15 (c) b = 20 (d) b = 24.
78. Fix b = 10, and consider the sequence (2, 4, 9, 9, 9, …). Show that if we use this sequence to construct a DCOBRS S as in the proof of the Correspondence Theorem for Proper Representations, the resulting real number is 1/4, which has the proper decimal representation 0.250000…
79. Fix b = 5, and consider the sequence (3, 4, 4, 4, 4, …). Show that if we use this sequence to construct a DCOBRS S as in the proof of the Correspondence Theorem for Proper Representations, the resulting real number is 4/5, which has the proper base 5 representation 0.40000…
1.14 Recommended Reading

We began this chapter with two motivating questions, one mathematical and one curricular. The mathematical question was "What is a real number?"; the curricular question was "Why do some specific innovations of the New Math survive to the present day when others have mostly disappeared?" While we are not exactly in a position to give definitive answers to these questions, our journey through Chapter 1 has provided us with some insight into
both of these questions, and (most important) we can now see some important connections between them. Among the topics introduced into the secondary curriculum by the New Math were inequalities, the language of set theory, names and notation for important sets of numbers, and names for the properties of those sets of numbers—all topics that have persisted (albeit in some cases in a rather vestigial form) to contemporary curricula. We have seen that all of these topics are necessary, in one way or another, to meaningfully characterize the set of real numbers as a complete ordered field. Thus, it may be that these topics entered the curriculum in part because the architects of the New Math sought to lay the groundwork for an answer to the question "What is a real number?"—a question, it will be recalled, that previous textbooks largely avoided dealing with.

Your Recommended Reading for this chapter is:

Ely, R. (2010). Nonstandard student conceptions about infinitesimals. Journal for Research in Mathematics Education, 41(2), 117–146.
Bergé, A. (2008). The completeness property of the real numbers in the transition from calculus to analysis. Educational Studies in Mathematics, 67, 217–235.
Núñez, R. (2006). Do real numbers really move? Language, thought, and gesture: The embodied cognitive foundations of mathematics. In 18 Unconventional Essays on the Nature of Mathematics (pp. 160–181). Springer, New York, NY.

Ely (2010) presents a case study of a student, Sarah, who holds a different but internally consistent conception of the real numbers. Sarah's conception of the real numbers is one in which finite numbers coexist alongside both infinite and infinitesimal "numbers". In Sarah's conception, the two decimals 1.0 and 0.999… represent two distinct numbers that differ from one another by an infinitesimally small quantity. This conception of the real number line is fundamentally different from and incompatible with the "standard" conception of the reals. However, as Ely observes, Sarah's nonstandard conception of the reals is internally consistent, and therefore "unperturbable". Ely writes:

But what if the stable structures into which these two competing conceptions can be built are in fact not appreciably different in power and flexibility either? In other words, suppose there is no reason, based on stability, viability, power, or flexibility, that I as the teacher can anticipate preferring one conception to the other. In this scenario, however unlikely, we are justified in calling the learner's conception a nonstandard conception. It is a conception that contradicts the standard conception, yet it is no objective "misconception," nor is it a conception that appears to await perturbation by the learner's other conceptions or future experiences due to inconsistency, lack of power, or viability. (p. 119)

Ely notes that Sarah's nonstandard conception aligns both with historical nonstandard conceptions, such as that of G. W. Leibniz, and with the modern "nonstandard analysis" developed by Abraham Robinson in the 1960s. (It also has much in common with one of the ordered field structures on (x) described earlier in this chapter37.) His analysis suggests that educators may incorrectly classify as "misconceptions" what should more accurately be regarded as entirely valid, mathematically coherent nonstandard conceptions.
Such nonstandard conceptions are likely to be extremely resistant to perturbation, and may be consistent with those found in the history of mathematics. Bergé (2008) also closely examines the properties of ℝ in order to gain insight into students' learning trajectories. Unlike Ely's case study of a single student, for Bergé the unit
of analysis is the undergraduate course; in particular, Bergé studies four different undergraduate courses in Calculus and Analysis from a single university, and investigates the different expectations these four courses hold for students with respect to learning about the completeness property of the real numbers. Corresponding to her focus on institutions, rather than individuals, Bergé's research methods involve studying course materials (syllabi, tasks, etc.)—what she calls the "praxeologies" of the courses—rather than interviewing students. Bergé finds that

The notion of completeness of R inhabits both, Calculus and Analysis courses, but it takes more or less explicit forms with regard to theoretical justification. Understanding the changes of status that this property undergoes in passing from Calculus to Analysis requires a perspective that learners do not spontaneously take in accomplishing the tasks. Students are not naturally inclined to take this perspective, and this fact is not sufficiently seriously taken into account in the mathematical organization of the courses. (p. 234)

The third reading, Núñez (2006), begins from the premise that "most of the idealized abstract technical entities in Mathematics are created via human cognitive mechanisms that extend the structure of bodily experience (thermic, spatial, chromatic, etc.) while preserving the inferential organization of these domains of bodily experience", and asks the question: "How can an embodied view of the mind give an account of an abstract, idealized, precise, sophisticated and powerful domain of ideas if direct bodily experience with the subject matter is not possible?" Núñez observes that the language we use to describe real numbers is richly saturated with metaphors, in particular metaphors of "motion". Núñez asks,

Strictly speaking, absolutely no dynamic entities are involved in the formal definitions of these terms. So, if no entities are really moving, why do authors speak of "approaching," "tending to," "going farther and father," and "oscillating"? Where is this motion coming from? What does dynamism mean in these cases? What role is it playing (if any) in these statements about mathematics facts? (p. 165)

Núñez concludes, on the basis of an analysis of multiple mathematical texts, that "Formal definitions and axioms neither fully formalize nor generalize human concepts… Motion, in those examples, is a genuine and constitutive manifestation of the nature of mathematical ideas. In pure mathematics, however, motion is not captured by formalisms and axiomatic systems" (pp. 167–168).
Projects

A. These three articles ask very different research questions and employ very different methods and theoretical frameworks, but they share a close attention to the mathematical properties of real numbers. Choose another mathematical topic and develop a proposal for a research study centered around your chosen topic. Your proposal should include a clearly-stated research question and a description of how it could be investigated.
B. Do additional reading on Robinson's "nonstandard analysis". Prepare a report on what you have learned, intended for an audience that is familiar only with "standard analysis".
C. Bergé's study focused on four Calculus and Analysis courses at a single university. Replicate or adapt her methods to a sequence of mathematics courses (at different levels of mathematical rigor) at your own university. Focus your study by choosing a specific mathematical topic, property or theorem, as Bergé did with "completeness".
D. Examine curricular materials at the secondary level for evidence that, as claimed by Núñez, the language of "motion" is used throughout mathematics to describe static objects.
E. Choose two references from one of the Recommended Readings and prepare a summary of each, including synopses of (a) its research question, (b) the theoretical framework, (c) the research methods, (d) its findings and conclusions.
Notes

1 As transformative as these changes were, it should be acknowledged that they were largely confined to the white middle class, as many of the policies that shaped the post-war economy were carefully crafted to exclude African Americans from benefiting from them.
2 A large archive of these textbooks—all in the public domain—is available for download in PDF format at http://bit.ly/SMSG-archive.
3 The Bourbaki movement was a group of European (originally French) mathematicians who published their work under the collective pseudonym "Nicolas Bourbaki". Founded in 1934, the Bourbaki group sought to re-establish all of mathematics on a modern structuralist footing. For more information, see Amir D. Aczel's "biography", The Artist and the Mathematician: The Story of Nicolas Bourbaki, the Genius Mathematician Who Never Existed.
4 One of the most famous and influential physicists of the 20th century, Feynman was awarded the Nobel Prize in 1965 for his work on the theory of quantum electrodynamics (QED).
5 Feynman, R.P. (1965). New textbooks for the "new" mathematics. Engineering and Science 28(6), pp. 9–15.
6 An animated version of this song is available at http://bit.ly/newmathlehrer.
7 See http://bit.ly/newmathpeanuts.
8 The full text of Kline's book is available online (with the copyright holder's permission) at www.rationalsys.com/mk_johnny.html.
9 Except, it should be noted, in the curriculum of mathematics teacher education. Many of the topics that disappeared with New Math survive in courses and textbooks with titles like "Math for Elementary Teachers."
10 There is no standard convention for naming the set of irrational numbers; typically one writes ℝ∖ℚ, where ∖ denotes the set-theoretical difference.
11 Conventions differ about the distinction between the "whole numbers" and the "natural numbers". Some authors use the latter phrase to refer to the set {1, 2, 3, …}, while others include 0 among the natural numbers and refer to the smaller set {1, 2, 3, …} as the set of whole numbers.
12 The main exception to this admittedly broad claim is the distributive property. Throughout the curriculum, students are frequently reminded to simplify expressions like 3(x + 5) by "distributing the 3". Likewise, many textbooks and teachers make a point of teaching that multiplication of two binomials, such as in (x + 4)(x − 5) = x^2 − x − 20, makes use of a repeated application of the distributive property. Notice, though, that it is extremely rare to also mention that the exact same algorithm also involves commuting and re-associating the four resulting terms.
13 The attentive reader may notice that this example could serve as an ostensive definition of "ostensive definition".
14 Similarly, the question "Is a hotdog a sandwich?" tends to elicit strong feelings precisely because most people's definition of "sandwich" is ostensive.
15 Hawkes, H.E., Luby, W.A., and Touton, F.C. (1911) Second Course in Algebra. Ginn and Company: Boston.
16 At least, they do in most interpretations. See §1.13 for a slightly more nuanced view of this question.
17 Surd is an archaic term that was once a common vocabulary word in the secondary curriculum. Wells (2008) defines "surd" as "the indicated root of a number, or expression, which is not a perfect power of the degree denoted by the index of the radical sign; as √2, ∛5, ⁴√(x + y), (3)^(1/2)."
18 We will abbreviate the full name of the theorem for convenience, but it should be noted that there are also characterization theorems for other mathematical structures. For example Chapter 6 is devoted in large part to a characterization theorem for the set of complex numbers.
19 At one point in Chapter 2 we will consider polynomials whose coefficients are matrices. We could also consider matrices whose entries are polynomials. (An interesting question: Are those two sets "the same"?)
20 Whatever those are—we still haven't defined what numbers are, so at this point we are relying purely on intuition and instinct.
21 Don't confuse this use of the letter i with the imaginary number whose square is −1! We will eventually have to reckon with imaginary numbers, but fortunately by then we will no longer be using the symbol i for a generic group identity element, so hopefully there will be no confusion about what we mean. Likewise, don't mistake the letter e, which we use for the multiplicative identity of a field, with the transcendental number that is approximately equal to 2.718.
22 But not always! Many mathematicians prefer to use either the notation ℤ/(6) or ℤ/6. This is particularly common among mathematicians who work in contexts in which ℤ_p denotes the so-called "p-adic integers". Since we will not encounter p-adic integers in this textbook, we don't have to worry about that, so throughout the remainder of this textbook we will use the simpler notation ℤ_n to refer to the integers mod n.
23 The square root function, g(x) = √x, is only a partial inverse. It is certainly true that if x is positive or zero then g(f(x)) = √(x^2) = x and f(g(x)) = (√x)^2 = x. However, if x is negative then g(f(x)) = √(x^2) = −x ≠ x. What can you say about f(g(x)) if x < 0?
24 Can you explain why it is noncommutative?
25 But not the Fundamental Theorem of Algebra (FTA), which only says that a polynomial always has at least one solution if we allow complex numbers. The theorem discussed here is often misidentified by teachers as the Fundamental Theorem of Algebra. We will return to this theorem in more detail in Chapter 2.
26 For the moment we will revert to the common practice of using the symbol 0 to stand for the additive identity in ℤ_12, rather than writing 0_{ℤ_12} or θ.
27 The phrase "zero divisors" is somewhat liable to misunderstanding; it would probably be better to use the more clear (but also longer and clumsier) phrase "nonzero divisors of zero".
28 The word "integral" here has nothing to do with the way the same word is used in Calculus; rather, it is an adjectival form of the word "integer". The set ℤ is, in some respects, the prototypical example of an integral domain.
29 We are cheating a little bit here. In this section we will finally define what "complete" means, and the definition will require us to be already working with an ordered field. Since ℂ isn't an ordered field, it doesn't really make sense to say that it is complete. On the other hand, there is a more general notion of "completeness" that has to do with topology, and in that sense, ℂ is a complete (but not ordered) field. We won't be studying the topological notion of completeness in this book.
30 It is perhaps worth observing that in this proof we are taking for granted the following basic facts about odd and even numbers: (1) every number is either even or odd; (2) no number is both even and odd; (3) the product of two odd numbers is odd. All of these properties can be proven—see Exercises 42–44.
31 This sentence, and the next several paragraphs that follow it, would probably be a lot easier to read and write if we just said something like "√2 can be approximated by rationals to as much precision as we want". There is a very good reason for not saying it this way, though: at this point in our theoretical development, we don't know that √2 even exists! That is, how do we know that there is a real number whose square equals exactly 2? Until we have established that √2 actually
means something, we can't meaningfully talk about approximating it. But while we can't meaningfully talk about finding a rational number close to √2, we can talk about finding a rational number whose square is close to 2.
32 Among his many other accomplishments, the Greek mathematician and natural philosopher Archimedes developed the basic physical principles of the lever, and is well known for the saying "Give me a lever large enough, and a place to stand, and I can move the world." The Archimedean property of ordered fields may be thought of as a less well-known principle in the same spirit.
33 A real number τ is called transcendental if there is no polynomial p(x) (with rational coefficients) that has the property that p(τ) = 0. (Don't confuse "transcendental" with "transfinite"!) For example, the familiar irrational numbers π ≈ 3.1415… and e ≈ 2.718… are both transcendental, but the irrational number √2 ≈ 1.414… is not transcendental, as it is a solution of the polynomial equation x^2 − 2 = 0. More will be said about transcendental numbers in §4.4.
34 Can you identify how it is different?
35 The system described in this section is not the only reasonable way to make sense of decimal expansions. There are in fact some interpretations in which infinite decimals do not necessarily correspond to real numbers; in some of these systems, 0.999… need not denote the same real number as 1. See Katz, K.U. and Katz, M.G. (2010). When is 0.999… less than 1? The Montana Mathematics Enthusiast 7(1), 3–30.
36 In this connection, it might be worthwhile to now return to §1.2 and reconsider Morris Kline's critique of the New Math, particularly its emphasis on the distinction between a "number" and a "name for a number".
37 Refer to §1.9, Example 5.
2 Polynomials and Polynomial Functions
“Algebraic symbols are what you use when you do not know what you are talking about.” —S. E. Davis, The Work of the Teacher
2.1 Polynomials in the Secondary Curriculum

Polynomials occupy a central role in the secondary curriculum. Much of the Algebra 1 and Algebra 2 course is devoted to them; their reach extends throughout both of those courses and into Precalculus and Calculus, where they are a primary source of examples. Even when studying other topics (rational functions, trigonometric equations, etc.) they play a crucial background role. By the end of Algebra 2, students are expected to learn to do most of the following:

• Recognize "like terms"
• Add, subtract, and multiply polynomials, and write the results in "standard form"
• Identify the degree of a polynomial
• Graph 1st-degree polynomial functions
• Solve 1st-degree polynomial equations
• Write the formula for a given 1st-degree polynomial function, given information about its graph
• Factor (some) 2nd-degree polynomials
• Use factoring to solve a 2nd-degree polynomial equation
• Graph a 2nd-degree polynomial function in standard form
• Complete the square to write a 2nd-degree polynomial in "vertex form"
• Complete the square to solve a quadratic equation
• Use the quadratic formula to solve a quadratic equation
• Use the discriminant to determine how many solutions a quadratic equation has
• Write a formula for a quadratic polynomial, given information about its graph
• Recognize special factoring patterns for 2nd- and 3rd-degree polynomials
• Factor higher-degree polynomials of "quadratic type"
• Identify the possible rational zeros of a higher-degree polynomial
• Use polynomial long division to find a quotient and remainder
• Use synthetic division to find a quotient and a remainder
• Use synthetic substitution to evaluate a function
• Know the relationship between the roots of a polynomial and its factors
• Describe and recognize the end behavior for polynomials of odd and even degree
• Understand and make use of the Fundamental Theorem of Algebra
• Factor a higher-degree polynomial (assuming it has enough rational roots) as a product of 1st-degree and irreducible 2nd-degree factors
• Describe the complex roots of a polynomial
• Sketch the graph of a polynomial using its factorization
• Write a possible formula for a polynomial function using information from its graph
(See Exercise 1.) One thing that you may notice about this list—which, despite its length, is not exhaustive!—is that, broadly speaking, these topics can be divided into two groups. Some of these topics have to do with the form of a polynomial; that is, how it is written, and how it may be rewritten. On the other hand, other topics on this list have to do with the values of a polynomial, considered as a function. When we factor a polynomial, or complete the square to rewrite a quadratic in vertex form, we are attending to the way a polynomial is expressed in terms of a variable; when we solve a polynomial equation, or graph a polynomial, we are attending to what happens when the polynomial is evaluated by replacing the variable with specific numerical values. These twin perspectives on polynomials—the formal perspective, and the functional perspective—are of course closely tied together, so much so that when teaching we tend to slip seamlessly from one perspective to the other. But as we will see in the next section, there is much to be gained from teasing these two points of view apart and studying them separately. This will be the task of this chapter.
Exercises

1. Classify the list of topics pertaining to polynomials in this section into two categories, depending on whether or not they are about polynomial expressions or polynomial functions. What other ways of classifying those topics can you identify?
2. Examine two or more Algebra or Algebra 2 textbooks and create a concept map1 for the topics pertaining to polynomials in each. How well does the list of topics in this section (see above) correspond to what you find? If possible, compare two textbooks that differ significantly: for example, you might compare a pre-New Math textbook to a more modern book, or use texts from different countries.
3. Examine a list of relevant secondary mathematics standards from your own context (these could be state standards, a national curriculum document, or something like the Common Core State Standards). How do the polynomial-related topics listed in this section appear in the standards? Which ones are present, and which ones are absent? How are those topics organized and presented in the standards?
2.2 Just What is a Polynomial?

As with our discussion of real numbers in Chapter 1, the problem of defining the objects under investigation proves more subtle than one might expect. Just what is a polynomial? One widely-used Algebra 1 textbook2 defines a "polynomial" as "a monomial or a sum of monomials," where "monomial" is defined as "a number, a variable, or a product of a number and one or more variables." But this definition leaves many criteria implicit: Just what counts as a "number"? Need the coefficients be whole or rational numbers, or are arbitrary real numbers acceptable? Is 5^(1/2)·x^3 + πx^2 + e^3 a polynomial? It's doubtful that any textbook would include it as an example of one—certainly, none of the secondary curriculum's
techniques for factoring a polynomial or finding its roots are of much use when working with a polynomial with irrational coefficients. But if we restrict the definition of "polynomial" to only include those expressions with "nice" coefficients, one of the most important theorems of the Algebra 2 course—the "Complete Factorization Theorem"—breaks down completely. All of this is to say simply that the word "polynomial", by itself, lacks specificity; to be precise, we should speak of polynomials with coefficients in a specific field or ring. Whether a polynomial is irreducible3, for example, depends entirely on what kind of coefficients are allowed (see Exercise 4). In fact, we have already seen this: in Chapter 1, we noted that if we interpret the coefficients of the polynomial x^2 − 1 as elements of ℤ_12, rather than as real numbers, then this 2nd-degree polynomial has four distinct factors rather than the two we would expect; likewise, if we interpret the coefficients of x^2 + 1 in ℤ_17, then it has two factors, despite the fact that it has none when interpreted in ℝ. In subsequent sections we will make the definition of "polynomial" and "coefficient ring" more precise, but for now we will work informally.

We begin with a simple question: When are two polynomials the same? Consider, for example, the following three polynomials:

x^5 + 3x^2 + 4x,   3x^4 + 5x,   and   2x

These three expressions are—unequivocally, unambiguously—three different polynomials. They have different numbers of terms, different degrees, different leading coefficients, and entirely different factorizations. Moreover these distinctions remain even if we regard the coefficients of these polynomials as elements of a ring other than or . However, if we interpret these polynomials as functions, then the story is quite different. In particular, suppose we evaluate each of these functions, interpreting the variable x as an element of ℤ_6, and performing all computations in ℤ_6. That is, suppose we complete each of the following function tables, writing each result as an element of the set ℤ_6:

x | x^5 + 3x^2 + 4x      x | 3x^4 + 5x      x | 2x
0 | ____                 0 | 0              0 | ____
1 | ____                 1 | ____           1 | 2
2 | 4                    2 | ____           2 | ____
3 | ____                 3 | ____           3 | 0
4 | 2                    4 | ____           4 | ____
5 | ____                 5 | 4              5 | ____
In fact, these three tables turn out to be exactly the same (see Exercise 5). This may come as something of a shock—we are used to thinking of the values of a polynomial function as being inextricably bound up with the form of a polynomial expression. And indeed, that is the case when working over the familiar number systems of , , and . But when working over ℤ_6, this example shows that it is possible for several different polynomial expressions to determine the same polynomial function. Why does this happen? One might think that it has something to do with the fact that ℤ_6 is not an integral domain; that was, after all, the reason we proposed (back in §1.6) for why it was possible for a quadratic equation to have more than two solutions in ℤ_12. But that explanation is not adequate here. Consider the polynomials x^7 − 6x^4 + 5x + 2 and x^4 − x − 5, interpreted in ℤ_7. We know that ℤ_7 is a field (because 7 is prime), and therefore does not contain zero divisors.
But nevertheless, these two polynomials, when interpreted as functions, are completely identical (see Exercise 6). The fact that x^7 − 6x^4 + 5x + 2 and x^4 − x − 5 are the same function when interpreted in ℤ_7 can partly be accounted for simply by the fact that in any ring ℤ_n there are many different names for the same element. For example, when working over ℤ_7 the constant term +2 at the end of the first polynomial and the constant term −5 at the end of the second polynomial are exactly the same thing. Likewise, the coefficients −6 and 1 attached to the 4th-degree terms in the respective polynomials are identical. But the linear terms of the two polynomials, 5x and −x, are not the same when working over ℤ_7; and in any case the two polynomials have different degrees and a different number of terms. So while part of this phenomenon can be accounted for on the grounds that every element of ℤ_7 can be written in multiple ways, that does not fully explain this phenomenon. Just what is going on here? These examples illustrate that it is possible for two polynomials to be different when considered as expressions, but identical when considered as functions. Much of the rest of this chapter will be devoted to answering the following:
Guiding question for Chapter 2: Under what conditions will two distinct polynomials determine the same function? In order to make progress on answering this guiding question, we will have to build up a fair amount of mathematical infrastructure. In particular we will need to define “polynomials” and “polynomial functions” as two completely different kinds of mathematical object, and understand the properties of both types. Once we have developed parallel, independent theories of polynomials and polynomial functions, we will then be able to investigate the relationships between these different kinds of objects, and understand (partly) why there is a one-to-one correspondence between them when working over , , and —and why and how that correspondence breaks down when working over other coefficient rings.
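Before turning to the exercises, here is a small Python sketch (our own illustration, not part of the text's formal development; the function name as_function is ours) that tabulates the polynomials above as functions on ℤ_6 and ℤ_7 and confirms that the tables coincide. It anticipates Exercises 5 and 6.

```python
def as_function(coeffs, n):
    """Return the function table of a polynomial over Z_n.
    coeffs lists coefficients from the constant term upward; values mod n."""
    return tuple(sum(c * x**k for k, c in enumerate(coeffs)) % n
                 for x in range(n))

# Over Z_6: x^5 + 3x^2 + 4x, 3x^4 + 5x, and 2x all give the same table.
p1 = as_function([0, 4, 3, 0, 0, 1], 6)
p2 = as_function([0, 5, 0, 0, 3], 6)
p3 = as_function([0, 2], 6)
print(p1, p2, p3, p1 == p2 == p3)

# Over Z_7: x^7 - 6x^4 + 5x + 2 and x^4 - x - 5 also coincide as functions.
q1 = as_function([2, 5, 0, 0, -6, 0, 0, 1], 7)
q2 = as_function([-5, -1, 0, 0, 1], 7)
print(q1 == q2)
```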
Exercises

4. Give an example of a polynomial that is irreducible (i.e., not factorable) over , but not ; of a polynomial that is irreducible over , but not over ; and of a polynomial that is irreducible over , but not over .
5. Compute all values for the function tables for x^5 + 3x^2 + 4x, 3x^4 + 5x, and 2x, interpreted in ℤ_6, and confirm that they are identical.
6. Construct complete function tables for x^7 − 6x^4 + 5x + 2 and x^4 − x − 5, interpreted in ℤ_7, and confirm that they are identical.
2.3 Functions

We begin with the following definition, which is probably familiar to you:

Definition. Let X and Y be any two sets. A function from X to Y is a set F made up of ordered pairs of the form (a, b), with a ∈ X and b ∈ Y, that has the following two properties:
(1) If a ∈ X, then there is some b ∈ Y such that (a, b) ∈ F.
(2) If (a, b_1) ∈ F and (a, b_2) ∈ F, then b_1 = b_2.

In this setting, the set X is called the domain of F, while the set Y is called by various names: although some authors refer to it as the range of F, others call Y the codomain or target set of F, reserving the word "range" for the set {b ∈ Y | (a, b) ∈ F for some a ∈ X}. (That is the convention followed in this book.)

Notice that a function need not be defined by a formula or rule of some kind; nor do we need to introduce additional undefined terms like "mapping" or "correspondence" to explain what a function is. A function is neither more nor less than a set of ordered pairs with the property that the "first element" in each ordered pair never re-occurs with a different "second element". If X is a finite set, then it is possible to list all of the ordered pairs for a specific function. For example, if X and Y are both ℤ_6, then the set f = {(1, 4), (2, 1), (3, 2), (4, 1), (5, 4), (6, 3)} is a function from X to Y. (Note that, as required, each value in the set {1, 2, 3, 4, 5, 6} appears only once as a "first element", although they can repeat as "second elements".) Instead of writing (1, 4) ∈ f, it is common to write f(1) = 4 to mean that in the ordered pair whose first element is 1, the second element is 4. It is sometimes more helpful to display the elements of a function as a table of values. For the function f above, we would write

a | f(a)
1 | 4
2 | 1
3 | 2
4 | 1
5 | 4
6 | 3
Here, the symbol a stands for an unspecified or "generic" member of the domain, and f(a) stands for the second element in the ordered pair of f whose first element is a. For now, we will avoid the common practice of using the letter x to stand for an element of X; instead we will reserve that notation for later in this chapter, when we study formal polynomials. We will use the notation4 Func(X, Y) to refer to the set of all functions from X to Y. Throughout most of this chapter, we will usually be interested in the case where X and Y are the same set. If that set is denoted S, then we will abbreviate the set of functions Func(S, S) with the shorter name Func(S). This set was considered in an example in §1.5, and you are encouraged to go back and re-read that example now. Back in §1.5, we observed that there is an operation called composition on Func(S), denoted by the symbol ∘, which we may now formally define as follows: given any two functions f and g, we can define a new function f ∘ g by the rule that (a, b) ∈ f ∘ g if and only if for some c ∈ S, (a, c) ∈ g and (c, b) ∈ f. (You should verify that this formal definition in terms of sets of ordered pairs is equivalent to the usual notion of function composition; see Exercise 8.)
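Since a function here is literally a set of ordered pairs, the definition of composition can be mirrored almost word for word in code. In the Python sketch below (a toy illustration of our own; f is the table above and g is a second sample function), a dictionary plays the role of the set of ordered pairs, with keys as "first elements" and values as "second elements".

```python
# Functions as sets of ordered pairs, stored as dictionaries.
f = {1: 4, 2: 1, 3: 2, 4: 1, 5: 4, 6: 3}
g = {1: 1, 2: 2, 3: 1, 4: 4, 5: 3, 6: 4}

def compose(f, g):
    """(a, b) is in f∘g exactly when (a, c) is in g and (c, b) is in f."""
    return {a: f[g[a]] for a in g}

print(compose(f, g))   # e.g. (f∘g)(1) = f(g(1)) = f(1) = 4
```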
This operation has a certain "multiplication-like" quality, under which Func(S) seems vaguely group-like; in particular, composition is associative, in the sense that for any three functions f, g, h ∈ Func(S), we have

(f ∘ g) ∘ h = f ∘ (g ∘ h)
(see Exercise 9). Furthermore, there is a special "identity function", denoted id_S, and defined by the equation id_S(a) = a. This identity function is an identity element of Func(S) with respect to the operation ∘, in the sense that for any other function f ∈ Func(S), we have

id_S ∘ f = f = f ∘ id_S

However, despite the existence of an identity element for an associative operation, the set Func(S) is not a group with respect to the operation ∘, because not every function has an inverse: that is, given any function f, there may not be (and usually isn't) a function f^(−1) with the property that f ∘ f^(−1) = f^(−1) ∘ f = id_S. In the rare cases in which such a function f^(−1) does exist, we say that f is an invertible function and refer to f^(−1) as the inverse function. So Func(S) is not a group, at least with respect to the function composition operation ∘. However, if the set S happens to be a ring R, then Func(R) "inherits" an addition and multiplication operation from R, as follows:

Definition. Let R be any ring, and let f, g be two elements of Func(R). Then we can define two new functions, denoted f + g and f · g, as follows:

• (a, b) ∈ f + g if and only if b = b_1 + b_2, where (a, b_1) ∈ f and (a, b_2) ∈ g.
• (a, b) ∈ f · g if and only if b = b_1 · b_2, where (a, b_1) ∈ f and (a, b_2) ∈ g.
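Continuing the dictionary representation from the previous sketch, the pointwise operations just defined can be illustrated as follows. This is again our own toy code, with the values reduced to the representatives 0–5 of ℤ_6; compare Exercise 11.

```python
# Pointwise addition and multiplication of functions on the ring Z_6.
def f_add(f, g, n=6):
    return {a: (f[a] + g[a]) % n for a in f}

def f_mul(f, g, n=6):
    return {a: (f[a] * g[a]) % n for a in f}

f = {1: 4, 2: 1, 3: 2, 4: 1, 5: 4, 6: 3}
g = {1: 1, 2: 2, 3: 1, 4: 4, 5: 3, 6: 4}
print(f_add(f, g))   # {1: 5, 2: 3, 3: 3, 4: 5, 5: 1, 6: 1}
print(f_mul(f, g))   # {1: 4, 2: 2, 3: 2, 4: 4, 5: 0, 6: 0}
```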
These formal definitions have a more natural (and probably more familiar) expression if we use the common notation f(a) = b_1 for (a, b_1) ∈ f, etc. (See Exercises 10 and 11.) A word of notational caution: don't confuse f ∘ g with f · g! The former signifies composition of functions while the latter signifies multiplication of functions.

Is Func(R) a ring? The question is ambiguous: since we have (so far) three different operations on Func(R), and the definition of a ring requires two operations, we have to specify which ones we are interested in.

Example 1. Is Func(R) a ring with respect to the operations ∘ and +? In order to answer this, we need to check all of the following properties:

• Commutativity of function addition
• Associativity of function addition
• Existence of an additive identity function
• Existence of additive inverse functions
• Associativity of function composition
• Existence of a compositional identity function
• Distributivity of composition over addition
In particular, we recall that rings differ from fields precisely in that the “multiplication” operation does not need to be commutative or invertible. This is good, because in general
f ∘ g ≠ g ∘ f, and most functions are not invertible, in the sense that there need not exist an inverse function f^(−1) with the property that f^(−1)(f(r)) = f(f^(−1)(r)) = r for all r ∈ R. So is Func(R) a ring with respect to + and ∘? The answer is almost—but not quite. Most of the required properties on the list above hold. For example, we can verify:

• Function addition is commutative. We need to confirm that for any two functions f + g = g + f. But for two functions to be equal means precisely that they produce the same result for any member of the domain. So we need to check that (f + g)(r) = (g + f)(r) for any r ∈ R. But the left-hand side of this equation is (by the definition of function addition) f(r) + g(r), and the right-hand side is (again by the same definition) g(r) + f(r), and these are equal to one another because addition of elements of R is commutative. (Remember that f(r) and g(r) are elements of R.) Another way to describe this phenomenon is that the commutativity of addition in R is "inherited" by the addition of functions in Func(R).
• Similarly, we can verify that associativity of addition in R is inherited in Func(R) (Exercise 12).
• Existence of an additive identity function. We need to find a function—let's call it θ̂ (pronounced "zero-hat")—with the property that f + θ̂ = f for any function f. In still more detail, this means that for any r ∈ R, we require that (f + θ̂)(r) = f(r). Since the left-hand side is equal to f(r) + θ̂(r), this motivates the definition of a special function, called the zero function on R, and defined by the property that θ̂(r) = θ_R for any r ∈ R. (Here we are using the notation θ_R to denote the additive identity element of R.) The zero function is precisely the additive identity we need.
• Existence of additive inverses. See Exercise 12.
• Associativity of function composition. See Exercise 9.
• Existence of a compositional identity element. This is precisely the identity function id_R described above, which can be defined formally by id_R = {(r, r) | r ∈ R}.
In fact, every single one of our required properties holds, except for the distributive property. More explicitly, recall that in a noncommutative ring there are really two distributive properties. In our case, the first property would say that for any three functions f, g and h,

f ∘ (g + h) = (f ∘ g) + (f ∘ h)

(this is sometimes called the left-handed distributive property) and the second property would say that
(f + g) ∘ h = (f ∘ h) + (g ∘ h)

(the right-handed distributive property). Since function composition is not commutative, these two properties are genuinely different from one another. In fact, it turns out that the second property is true, but the first property is false! In slightly more detail: the right-handed distributive property states that for any element r ∈ R, we would have ((f + g) ∘ h)(r) = ((f ∘ h) + (g ∘ h))(r). Following our definitions, the left-hand side of this is (f + g)(h(r)) = f(h(r)) + g(h(r)); the right-hand side is (f ∘ h)(r) + (g ∘ h)(r) = f(h(r)) + g(h(r)). Since these are equal, function composition is "right-distributive" over addition.
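A quick numerical experiment makes the asymmetry concrete. In the sketch below, f, g, and h are sample real functions of our own choosing (they anticipate the first classroom example discussed shortly): the right-handed law checks out at every test point, while the left-handed one fails.

```python
from math import sqrt, isclose

f = sqrt
g = lambda x: x ** 2
h = lambda x: 9.0          # constant function

def add(p, q):             # pointwise addition of functions
    return lambda x: p(x) + q(x)

def comp(p, q):            # composition p∘q
    return lambda x: p(q(x))

for r in [0.5, 1.0, 2.0, 3.7]:
    right = isclose(comp(add(f, g), h)(r), add(comp(f, h), comp(g, h))(r))
    left = isclose(comp(f, add(g, h))(r), add(comp(f, g), comp(f, h))(r))
    print(r, right, left)  # right-distributivity holds; left-distributivity fails
```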
Polynomials and Polynomial Functions 83 However, function composition is not “left- distributive” over addition! In general f ( g + h ) is not equal to ( f g ) + ( f h ), because for an arbitrarily chosen r ∈ R we do not expect f ( g ( r ) + h ( r ) ) to equal f ( g ( r ) ) + f ( h ( r ) ) R 5. So it turns out that with respect to addition and composition, Func( R ) “just barely” fails to be a ring: it has all of the required properties except for half of the distributive property. This failure—the fact that function composition does not distribute from the left—is in fact the root of a whole collection of familiar student misconceptions at the secondary level. For example, consider the following examples of functions in Func( ): • If f ( x ) = x , g ( x ) = x 2, and h( x ) = 9, then the pseudo- property f ( g + h ) = x 3. ( f g ) + ( f h ) corresponds to the false identity x2 + 9 =+ • If f ( x ) = x 2, g ( x ) = a and h ( x ) = b (for some constants a, b), then the pseudo-property 2 f ( g + h ) = ( f g ) + ( f h ) corresponds to the false identity ( a + b )= a 2 + b2 . • If f ( x ) = sin( x ), g ( x ) = a and h ( x ) = b (for some constants a, b), then the pseudo- property f ( g + h ) corresponds to the false identity sin ( a + b ) = sin ( a ) + sin ( b ). Each of the three examples above is probably familiar to you as a common student error; the middle one is commonly known as the “Freshman’s Dream”6. The fact that they can all be understood as a different manifestation of the same abstract algebraic fact—that function composition is not left-distributive over addition—is a powerful example of how an abstract perspective can be relevant for secondary teachers. Example 2. On the other hand, we could consider the operation of multiplication, rather than composition, and ask: is Func( R ) a ring with respect to the operations ⋅ and +? In order to answer this, we would need to check all of the following properties: • • • • • • •
• Commutativity of function addition
• Associativity of function addition
• Existence of an additive identity function
• Existence of additive inverse functions
• Associativity of function multiplication
• Existence of a multiplicative identity function
• Distributivity of multiplication over addition
The first four of these properties have already been verified: we showed, in the previous example, that Func(R) is an additive group. The rest of them are easily shown (see Exercise 13) to be inherited from the properties of the multiplication operation in R. In particular, if we use the notation e_R to denote the multiplicative identity in R, then the multiplicative identity function can be described (using a variation on the notation above) as the function ê, defined by the property that ê(r) = e_R for all r ∈ R. Note that the multiplicative identity ê and the compositional identity id_R are different functions!

The similarity between function composition and function multiplication also underlies the source of a common student confusion at the secondary level: namely, the meaning of the raised exponent −1 in expressions like x^(−1) and f^(−1)(x). In the first of these, the superscript −1 indicates a multiplicative inverse; in the second, it indicates a compositional inverse. It is not just an unfortunate accident of history that the exact same notation (and even the same word) is used in two completely different ways—it reflects the fact that there are really two different "multiplication-like" operations in Func(ℝ):
• For example, the function g(x) = sin x has a "compositional inverse", which we call the inverse function of sin x and denote as sin^(−1) x or arcsin x; it also has a "multiplicative inverse", which we call the reciprocal of sin x and denote as 1/sin x or csc x.
• Likewise, the function h(x) = 2x has an inverse with respect to composition, denoted h^(−1)(x) = x/2, but it also has an inverse with respect to multiplication, denoted (h(x))^(−1) = 1/(2x).
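A short check makes the contrast concrete. This is sample code of our own, using h(x) = 2x from the second bullet above; the names h_inv and h_recip are our own labels for the two kinds of inverse.

```python
# The two meanings of "inverse" for h(x) = 2x, checked at a sample point.
h = lambda x: 2 * x
h_inv = lambda x: x / 2          # compositional inverse: h(h_inv(x)) = x
h_recip = lambda x: 1 / (2 * x)  # multiplicative inverse: h(x) * h_recip(x) = 1

x = 5.0
print(h(h_inv(x)), h_inv(h(x)))  # 5.0 5.0  (composition returns the input)
print(h(x) * h_recip(x))         # 1.0      (the product is the constant 1)
```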
Exercises

7. Suppose S is a finite set containing n elements. How many different functions from S to itself are there? Put another way: what is the size of Func(S)? (Hint: consider a table of values for a function, with the elements of S listed down the first column, and the second column left blank. How many different ways are there to fill in values for the second column?)
8. Show that the formal definition of function composition given in this section (in terms of sets of ordered pairs) is equivalent to the property that (f ∘ g)(a) = f(g(a)).
9. Show that function composition is associative.
10. Let R be a ring and consider two functions f, g ∈ Func(R). Show that the formal definitions of f + g and f · g (in terms of sets of ordered pairs) given above are equivalent to the properties that (f + g)(a) = f(a) + g(a) and (f · g)(a) = f(a) · g(a), where on the right-hand side of each of these equations the operation is being performed in R.
11. Let R = ℤ_6 and consider f, g ∈ Func(R) where f = {(1, 4), (2, 1), (3, 2), (4, 1), (5, 4), (6, 3)} and g = {(1, 1), (2, 2), (3, 1), (4, 4), (5, 3), (6, 4)}. Compute f ∘ g, g ∘ f, f + g and f · g.
12. Fill in the missing details in the proof that Func(R), with the operations + and ∘, has all of the properties of a ring except for the distributive property. Specifically, show that addition of functions is associative, and demonstrate that every function f ∈ Func(R) has an additive inverse.
13. Fill in the missing details in the proof that Func(R), with the operations + and ·, is a commutative ring. Specifically, show that function multiplication is commutative and associative, verify that ê is a multiplicative identity, and confirm that multiplication distributes over addition.
14. Suppose R is actually a field, i.e. assume every nonzero element r ∈ R has a multiplicative inverse. Under those conditions, would Func(R) be a field?
15. Give an example of a function that has a multiplicative inverse but not a compositional one; then give an example of a function that has a compositional inverse but not a multiplicative one. What general properties do such functions have to have?
2.4 Constant Functions and Polynomial Functions

In the preceding discussion of the additive and multiplicative identity functions, we introduced a notational convention that will be used heavily later on, so it's probably a good idea to pause and take note of it now: we use the "hat" accent on top of a symbol to distinguish a function on R from an element of R. That is, θ_R denotes the additive identity element in the ring, but θ̂ denotes the additive identity function: in terms of the formal definition of a function, we would describe θ̂ as consisting of ordered pairs in which every element of R appears as a "first element", but the "second element" is always θ_R. That is,
θ̂ = {(r, θ_R) | r ∈ R}
Likewise e_R denotes the multiplicative identity element in R, but ê denotes the multiplicative identity function, defined by

ê = {(r, e_R) | r ∈ R}

Motivated by these examples, we introduce an entire category of constant functions:

Definition. For any element a ∈ R, the constant function â is defined by â(r) = a for all r ∈ R.

The use of the "hat" accent solves a notational problem that exists (but is usually not fully recognized) in teaching and learning mathematics at the secondary level: namely, an expression of the form

y = 3

is ambiguous. Does it mean that y is a single number, whose value is equal to 3? Or does it mean that y is a function, whose value everywhere is equal to 3? Graphically, we would represent the first case by a single point marked at 3 on a number line; we would represent the second case by a horizontal line that crosses 3 on the y-axis. With the hat notation, we can distinguish these two cases: y = 3 describes a single number, but y = 3̂ describes a function.

Similarly, does f(x) = 1 mean that there is a particular value of x for which the function f produces 1? Or does it mean that for all values of x the function produces 1? In the former case, we would have an equation (presumably one we want to solve); in the latter case, we would have an identity. For example, sin(x) + cos(x) = 1 is an equation that is true only for very specific values of x, whereas sin²(x) + cos²(x) = 1 is an identity that is true for all x. With "hat" notation, we can always be clear about which one we mean. If we want to say that f takes on the value 1 at a specific value of r, we write f(r) = 1. On the other hand, if we want to say that f takes on the value 1 at all values of r, we write f = 1̂. We write f rather than f(r) because this is a statement about the function taken as a whole, not at an individual point.
This "hat" notation also has another advantage: it allows us to recognize that a copy of R sits naturally inside Func(R):
Proposition. The set of all constant functions in Func(R) forms a subring, denoted (perhaps unsurprisingly) by R̂. This subring is isomorphic to the original ring R, in the following sense:
• If a and b are any two elements of R, then \widehat{a + b} = â + b̂.
• If a and b are any two elements of R, then \widehat{a ⋅ b} = â ⋅ b̂.

Proof. See Exercise 16.

The notation here is subtle: for example, in \widehat{a + b}, we are first performing addition in R, and then using the result to produce a constant function; in â + b̂, we are first converting a and b to constant functions, and then performing addition of functions in Func(R). (See Exercise 16.) Moreover, if the underlying ring R happens to be a field, then R̂ is automatically a field as well (see Exercise 17); in this case, Func(R) would be a ring that contains a field inside it.

Example 1. Consider ℤ₅. Note this is a field, because 5 is prime. Then Func(ℤ₅) (which is a ring with respect to multiplication and addition, but not a field) contains inside it a set of five constant functions, ℤ̂₅ = {1̂, 2̂, 3̂, 4̂, 5̂}, and this set of constant functions is itself a field.

Now consider a function f given by the following table:

r    | 1 | 2 | 3 | 4 | 5
f(r) | 2 | 5 | 5 | 2 | 1
This function does not have an "inverse function" in the sense of function composition; nor does it have a "multiplicative inverse". It's non-invertible in two different ways, but for two completely different reasons. (See Exercise 18.)

At this point, for any ring R we know that we can construct a ring of functions Func(R), but other than the constant functions in R̂ and the identity function id_R, we don't have any interesting examples of functions. That's about to change, as we are ready (finally) to define a polynomial function:

Definition. The set PolyFunc(R) is the smallest subset of Func(R) that contains R̂ and id_R and is closed under addition and multiplication of functions. A member of PolyFunc(R) is called a polynomial function.

This definition is not, on its face, very transparent; it's hard to see what (if anything) this definition has to do with the familiar notion of "polynomial". So let's unpack this definition, slowly, and see what's inside it.
First, when we say that PolyFunc(R) is the smallest subset of Func(R) that contains R̂ and id_R and is closed under addition and multiplication of functions, what we really mean is that it contains any function you can build by adding and multiplying any number of constant functions and identity functions together, and nothing more⁷. So a typical element of PolyFunc(R) can be written in the form

f = a₀ + a₁ ⋅ id_R + a₂ ⋅ (id_R)² + ⋯ + aₙ ⋅ (id_R)ⁿ

where each aᵢ is a constant function. If we introduce the notational convention that (id_R)⁰ = 1̂, then any element of PolyFunc(R) can be written as

f = ∑_{k=0}^{n} aₖ ⋅ (id_R)ᵏ
Example 2. Let R = ℤ₆. We might want to consider a polynomial function like f = 4̂ ⋅ id_ℤ₆ + 3̂ ⋅ (id_ℤ₆)² + (id_ℤ₆)⁵. What would the values of such a function be? We can begin to answer this question by completing function tables for id_ℤ₆, (id_ℤ₆)², and higher powers of id_ℤ₆, as follows:

r | id_ℤ₆(r) | (id_ℤ₆)²(r) | (id_ℤ₆)³(r) | (id_ℤ₆)⁴(r) | (id_ℤ₆)⁵(r)
0 |    0     |      0      |      0      |      0      |      0
1 |    1     |      1      |      1      |             |
2 |    2     |      4      |      2      |             |
3 |    3     |      3      |             |             |
4 |    4     |      4      |             |             |
5 |    5     |      1      |             |             |
(You will complete the rest of the table in Exercise 19.) Then we can combine the results together with the indicated constant functions as shown below:
r | 4̂ ⋅ id_ℤ₆ + 3̂ ⋅ (id_ℤ₆)² + (id_ℤ₆)⁵ evaluated at r
0 | 4⋅0 + 3⋅0 + 0 = 0
1 | 4⋅1 + 3⋅1 + 1 = 2
2 |
3 | 4⋅3 + 3⋅3 + 3 = 0
4 |
5 |

(Again, you will fill in the missing rows of this table in the Exercises.)
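A quick computational cross-check of this table is possible. The sketch below (illustrative only; poly_func_table is an invented helper) tabulates the polynomial function of Example 2 over ℤ₆; readers who prefer to complete Exercise 19 by hand should do so first.

```python
# Sketch: tabulate a polynomial function over Z_n.  The list "coeffs" holds
# [a_0, a_1, ..., a_m], representing a_0-hat + a_1-hat*id + ... + a_m-hat*id^m.
def poly_func_table(coeffs, n):
    return {r: sum(a * pow(r, k, n) for k, a in enumerate(coeffs)) % n
            for r in range(n)}

# 4*id + 3*id^2 + id^5 over Z_6  (coefficients a_0 .. a_5)
print(poly_func_table([0, 4, 3, 0, 0, 1], 6))
# prints {0: 0, 1: 2, 2: 4, 3: 0, 4: 2, 5: 4}
```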
Does this table look familiar? It should! (If not, go back and redo Exercise 5 from earlier in this chapter.) Likewise, we could also compute function tables for the polynomial functions 5̂ ⋅ id_ℤ₆ + 3̂ ⋅ (id_ℤ₆)⁴ and 2̂ ⋅ id_ℤ₆. What do you think we will find? (See Exercise 19.) In fact, these three polynomial functions:

4̂ ⋅ id_ℤ₆ + 3̂ ⋅ (id_ℤ₆)² + (id_ℤ₆)⁵
5̂ ⋅ id_ℤ₆ + 3̂ ⋅ (id_ℤ₆)⁴
and 2̂ ⋅ id_ℤ₆

are all the exact same function—they are the same set of ordered pairs. This example shows that a single polynomial function may be built up from the building blocks of constant functions and powers of the identity function in more than one way. When we say that a polynomial function is a function that can be represented in the form ∑_{k=0}^{n} aₖ ⋅ (id_R)ᵏ, it is important to realize that such a representation need not be unique.

By now it is probably becoming clear why we call these things "polynomial functions", but in case the notation is still obscuring the central idea, let's see what happens when we act with a general polynomial function a₀ + a₁ ⋅ id_R + a₂ ⋅ (id_R)² + ⋯ + aₙ ⋅ (id_R)ⁿ on an arbitrary element r ∈ R. We have:
(a₀ + a₁ ⋅ id_R + a₂ ⋅ (id_R)² + ⋯ + aₙ ⋅ (id_R)ⁿ)(r) = a₀(r) + a₁(r) ⋅ id_R(r) + a₂(r) ⋅ (id_R)²(r) + ⋯ + aₙ(r) ⋅ (id_R)ⁿ(r)

To simplify this, just remember that aᵢ(r) = aᵢ, id_R(r) = r, and (id_R)ⁿ(r) = (id_R(r))ⁿ = rⁿ, so we have:
(a₀ + a₁ ⋅ id_R + a₂ ⋅ (id_R)² + ⋯ + aₙ ⋅ (id_R)ⁿ)(r) = a₀ + a₁ ⋅ r + a₂ ⋅ r² + ⋯ + aₙ ⋅ rⁿ
This shows that when a polynomial function in our technical sense acts on an element r, the value it produces is given by a polynomial expression in r, in the familiar, informal sense. As this section comes to a close, we are ready to refine and extend our guiding question from earlier in this chapter into a pair of more precise questions:
Guiding Question 1. When does a polynomial function have a unique representation? We have seen that this is not the case in general. On the other hand, we know (or at least we think we know) that when working over the real numbers, there is a one-to-one correspondence between polynomial functions and polynomial expressions. What is it about ℝ that accounts for this correspondence?
Guiding Question 2. What about the functions that are not polynomial? That is, what (if anything) are the properties of the set Func(R) \ PolyFunc(R)? When working over the reals, we are used to dealing with non-polynomial functions like f(x) = 2ˣ. Do such non-polynomial functions exist in general? What can we say about them?

We will (mostly) answer these questions in §2.6. Before we can do that, though, we need to shift gears and consider polynomial expressions as mathematical objects that are distinct from polynomial functions.
Exercises
16. Prove that for any two elements a and b in R, \widehat{a + b} = â + b̂ and \widehat{a ⋅ b} = â ⋅ b̂.
17. Show that if R is a field, then any constant function â (other than θ̂) has a multiplicative inverse, and therefore R̂ is a field as well. Do the notations (â)⁻¹ and \widehat{a⁻¹} mean the same thing? Are they equal?
18. (a) Explain why the function f ∈ Func(ℤ₅) of Example 1 in this section has no multiplicative inverse. (b) Then explain why it has no inverse function in the sense of function composition. (c) Can you find a polynomial formula for f?
19. Complete both of the tables in Example 2. Then also compute function tables for 5̂ ⋅ id_ℤ₆ + 3̂ ⋅ (id_ℤ₆)⁴ and 2̂ ⋅ id_ℤ₆.
2.5 Formal Polynomials

In this section we shift gears away from considering polynomials as functions and toward considering polynomials as expressions. Choose any ring R. We use the notation R[x] to represent the collection of all finite sequences of elements from R. More precisely:

Definition. A finite sequence in R is a sequence of the form

(a₀, a₁, a₂, …)

where each aᵢ ∈ R, and all but finitely many of the aᵢ are zero⁸.

Put slightly differently, the finiteness condition means that eventually (after some possibly very large list of terms) the sequence becomes "zeros all the way down". Such a finite sequence will necessarily have a last nonzero term; if N is the largest natural number for which a_N ≠ 0, then N is called the degree of the sequence. Thus for example for any r ∈ R, the degree of (r, 0, 0, 0, …) is 0; the degree of (r, s, 0, 0, 0, …) is 1; and so forth⁹. The set of all finite sequences in R is denoted R[x]. We can describe a finite sequence either by listing (some or all of) its terms, as in the notation (a₀, a₁, a₂, …), or with the shorthand notation (aᵢ); sometimes we will use a single boldfaced letter to describe the entire sequence as a whole, as in a = (aᵢ). There are two natural operations that can be performed on finite sequences (and one less natural operation, which will be introduced shortly).
Definitions.
• If a = (a₀, a₁, a₂, …) and b = (b₀, b₁, b₂, …) are two finite sequences in R, their sum is defined to be a + b = (a₀ + b₀, a₁ + b₁, a₂ + b₂, …).
• If a = (a₀, a₁, a₂, …) is any finite sequence in R, and r is any single element of R, then the scalar multiplication r ⋅ a is defined to be r ⋅ a = (ra₀, ra₁, ra₂, …).
Proposition. R[ x ] is an abelian group with respect to addition.
Proof. See Exercise 20.
Proposition. Scalar multiplication distributes over addition from both the left, i.e. r ⋅ (a + b) = r ⋅ a + r ⋅ b, and from the right, i.e. (r + s) ⋅ a = r ⋅ a + s ⋅ a.

Proof. See Exercise 21.

Proposition. Scalar multiplication is compatible with multiplication in the ring, i.e. r ⋅ (s ⋅ a) = (rs) ⋅ a
Proof. See Exercise 22.

We summarize the properties above by saying that the set R[x] is an R-module. If these properties look familiar, it is because, in the case where R is a field, these are exactly the same properties that describe a vector space. In fact one way to define an R-module is to just say "It's like a vector space, but instead of the scalars coming from a field, they can come from any ring."

Now, what about multiplication of finite sequences? Is there a way to define a product of two sequences a and b? One natural idea, by analogy with how addition is defined, would be to define the product componentwise: that is, we might very reasonably try to define

a ⋅ b = (a₀b₀, a₁b₁, a₂b₂, …)

This definition has the advantage of simplicity, but suffers from a major defect: if we define multiplication this way, the degree-1 sequence a = (0, 1, 0, 0, …) and the degree-2 sequence b = (0, 0, 1, 0, 0, …) would have the property that a ⋅ b = (0, 0, 0, …), which would make R[x] not be an integral domain. (Compare Chapter 1, Exercise 26.) Instead, we will define multiplication of finite sequences by the following, admittedly non-intuitive, definition:

Definition. Let (aᵢ) and (bⱼ) be any two finite sequences in R[x]. We define the polynomial product (aᵢ) ⋅ (bⱼ) to be (cₖ), where

cₖ = ∑_{i=0}^{k} aᵢ bₖ₋ᵢ
This definition is rather hard to make sense of at first glance, so let's work out an example.

Example. Suppose R = ℤ, and consider the set of finite sequences ℤ[x]. Let

a = (1, 2, 3, 0, 0, 0, 0, 0, …)
b = (5, 4, 8, −6, 0, 0, 0, …)

We compute the polynomial product a ⋅ b = c one term at a time. By definition, each term cₖ is found by summing together all terms of the form aᵢbⱼ where the indices i and j add up to k. For k = 0, there is only one such term:

c₀ = a₀b₀ = 1⋅5 = 5

For k = 1, there are two terms to add:

c₁ = a₀b₁ + a₁b₀ = 1⋅4 + 2⋅5 = 14

For k = 2, there are three terms:

c₂ = a₀b₂ + a₁b₁ + a₂b₀ = 1⋅8 + 2⋅4 + 3⋅5 = 31

Continuing in this fashion, we can find that c₃ = 22, c₄ = 12, and c₅ = −18 (Exercise 23). However, a funny thing happens when we try to compute cₖ for k ≥ 6:

c₆ = a₀b₆ + a₁b₅ + a₂b₄ + a₃b₃ + a₄b₂ + a₅b₁ + a₆b₀ = 0 + 0 + 0 + 0 + 0 + 0 + 0 = 0

and indeed for all subsequent terms of c we also get cₖ = 0. So the product is

a ⋅ b = c = (5, 14, 31, 22, 12, −18, 0, 0, 0, …)
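Since the polynomial product is just the convolution of the two coefficient sequences, the computation above is easy to check by machine. Here is a minimal sketch (the helper conv is an invention for this illustration):

```python
# Sketch: the polynomial product of two finite sequences, c_k = sum a_i * b_{k-i}.
# The result has length len(a) + len(b) - 1.
def conv(a, b):
    c = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[i + j] += ai * bj
    return c

print(conv([1, 2, 3], [5, 4, 8, -6]))
# prints [5, 14, 31, 22, 12, -18], matching the example above
```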
Exercises
20. Prove that R[x] is an abelian group with respect to addition. In particular, explicitly identify what the additive identity is, and verify properties (G1)–(G4) of §1.5.
21. Prove that r ⋅ (a + b) = r ⋅ a + r ⋅ b, and (r + s) ⋅ a = r ⋅ a + s ⋅ a.
22. Prove that r ⋅ (s ⋅ a) = (rs) ⋅ a.
23. For a = (1, 2, 3, 0, 0, 0, 0, 0, …) and b = (5, 4, 8, −6, 0, 0, 0, …), define c = a ⋅ b and confirm that c₃ = 22, c₄ = 12, and c₅ = −18. Explain why cₖ = 0 for all k ≥ 6.
24. Compute the polynomial product of (4, 0, 2, −1, −2, 0, 0, …) and (3, −3, 1, 0, −1, 0, 0, …).
25. Let R be any commutative ring, and let a = (r, s, 0, 0, …) and b = (r, −s, 0, 0, …). Compute the polynomial products a ⋅ b, a ⋅ a, and b ⋅ b. How are the answers different if R is noncommutative?
When multiplication in R[x] is defined via the polynomial product, each finite sequence in R[x] is called a polynomial with coefficients in R (or, for brevity, "a polynomial over R"), and the set R[x] becomes a ring, called the ring of polynomials over R. We have the following important properties:
Theorem. Let R be any ring and R[x] the ring of polynomials over R. Then:
(a) Multiplication in R[x] is associative.
(b) If multiplication in R is commutative, then multiplication in R[x] is as well.
(c) Multiplication in R[x] distributes over addition in R[x], i.e. for any three polynomials a, b, and c, we have a ⋅ (b + c) = a ⋅ b + a ⋅ c.
(d) If 1_R is the multiplicative identity in R, then (1_R, 0, 0, 0, …) is a multiplicative identity in R[x].

Proof. Exercises 26–29.

Inside R[x], the degree-zero polynomials form a subring. Let r ∈ R be any element of the ring; we denote the degree-zero polynomial (r, 0, 0, …) by the symbol r̄ (pronounced "r-bar"). It turns out that multiplying any other polynomial a by r̄ using the polynomial product has the exact same effect as multiplying a by the scalar r (Exercise 30). Moreover, if r, s ∈ R are any two members of the ring, then r̄ ⋅ s̄ = \overline{rs} (Exercise 31). Consequently, if we denote the subring of zero-degree polynomials by R̄, then this subring of R[x] is an isomorphic copy of R.

If this all seems strangely familiar to you, it should! In the previous section, we saw that the ring of polynomial functions (which we denoted PolyFunc(R)) contained inside it an isomorphic copy of R, which we denoted R̂, consisting of constant functions of the form â, where â(r) = a for all r. Now we have seen that the ring of polynomial expressions, which we are calling R[x], contains inside it another isomorphic copy of R, which we are calling R̄, and which consists of zero-degree polynomials of the form ā = (a, 0, 0, …). Notationally, we use the accent marks to distinguish "scalars" (elements of the ring R, which have no accent mark) from "constant functions" (denoted with a "hat") and "zero-degree polynomials" (denoted with a "bar"). Formally speaking, each of these is a different object. For example, if we work over the real numbers, we can distinguish between the number 3, the function 3̂, and the polynomial 3̄:

• The function 3̂ is the set of ordered pairs 3̂ = {(r, 3) | r ∈ ℝ}.
• The polynomial 3̄ is the finite sequence 3̄ = (3, 0, 0, …).

The isomorphism between ℝ, ℝ̂, and ℝ̄ expresses the idea that even though 3, 3̂ and 3̄ are technically different things, they all behave the same way when combined with other mathematical objects.

In addition to the zero-degree polynomials in R̄, the polynomial ring R[x] contains one special element that deserves particular attention: consider the degree-1 polynomial

(0, 1_R, 0, 0, …)

This polynomial gets its own name: it is referred to by the single letter x. With this notational convention, we have the following extremely important properties:
Theorem. Let R be any ring, R[x] the ring of polynomials over R, and x the polynomial (0, 1_R, 0, 0, …). Multiplying any sequence a by x has the effect of shifting the terms of a one index to the right: that is,

x ⋅ (a₀, a₁, a₂, …) = (0, a₀, a₁, a₂, …)

Proof. Exercise 32.

Corollary. For any positive whole number n, the nth power of x is a sequence consisting of all 0s except for a single nonzero term equal to 1_R in the nth position. That is,

x² = (0, 0, 1_R, 0, 0, …)
x³ = (0, 0, 0, 1_R, 0, …)
x⁴ = (0, 0, 0, 0, 1_R, …)

etc.
Proof. Exercise 33.

Corollary. Any polynomial can be written as a finite sum of terms, each of which is a scalar multiple of a power of x. Specifically, if the degree of a is N, then

(a₀, a₁, a₂, a₃, …, a_N, 0, 0, 0, …) = a₀ + a₁x + a₂x² + a₃x³ + ⋯ + a_N x^N

which may also be written as

(a₀, a₁, a₂, a₃, …, a_N, 0, 0, 0, …) = ā₀ + ā₁x + ā₂x² + ā₃x³ + ⋯ + ā_N x^N

Proof. Exercise 34.

At this point the name "polynomial" and the name of the "polynomial product" are hopefully clear: the strange-seeming definition of multiplication we have introduced conforms precisely with the "normal" polynomial multiplication. For example, earlier in this section we computed the product of two polynomials over ℤ:
(1, 2, 3, 0, 0, …) ⋅ (5, 4, 8, −6, 0, 0, …) = (5, 14, 31, 22, 12, −18, 0, 0, 0, …)

Using the notation of the preceding corollaries, we can write this as

(1 + 2x + 3x²) ⋅ (5 + 4x + 8x² − 6x³) = 5 + 14x + 31x² + 22x³ + 12x⁴ − 18x⁵
which can be verified directly using ordinary, high school-level algebra.
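The shift property of x can likewise be checked computationally. The sketch below (again illustrative and self-contained, so the convolution helper is repeated) confirms that multiplying by x = (0, 1, 0, …) shifts a sequence one index to the right and that xᵏ has a single 1 in position k:

```python
# Sketch: multiplying by x = (0, 1, 0, ...) shifts coefficients one index right.
def conv(a, b):
    c = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[i + j] += ai * bj
    return c

x = [0, 1]
a = [7, 3, 5]                 # the sequence (7, 3, 5, 0, 0, ...)
print(conv(x, a))             # [0, 7, 3, 5]: shifted right by one

xk = [1]                      # x^0 = (1, 0, 0, ...)
for k in range(1, 5):
    xk = conv(xk, x)
    print(k, xk)              # x^k has a single 1 in position k
```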
Exercises
26. Prove that multiplication in R[x] is associative.
27. Prove that if multiplication in R is commutative, then multiplication in R[x] is as well.
28. Prove that multiplication in R[x] distributes over addition in R[x], i.e. for any three polynomials a, b, and c, we have a ⋅ (b + c) = a ⋅ b + a ⋅ c.
29. Prove that if 1_R is the multiplicative identity in R, then (1_R, 0, 0, 0, …) is a multiplicative identity in R[x].
30. Let R be any ring, let a be any polynomial in R[x] and let r ∈ R. Prove that r̄ ⋅ a (computed using the polynomial product) is equal to r ⋅ a (computed using the scalar product).
31. Let R be a ring, and r, s ∈ R any two members of the ring. Prove that r̄ ⋅ s̄ = \overline{rs}.
32. Prove that x ⋅ (a₀, a₁, a₂, …) = (0, a₀, a₁, a₂, …).
33. Prove that xⁿ is a sequence consisting of all 0s except for a single nonzero term equal to 1_R in the nth position.
34. Prove that (a₀, a₁, a₂, a₃, …, a_N, 0, 0, 0, …) = a₀ + a₁x + a₂x² + a₃x³ + ⋯ + a_N x^N.
35. Prove directly from the definition of the polynomial product that if R is a commutative ring then (ax + b)² = a²x² + 2abx + b² and (ax + b)(ax − b) = a²x² − b². How are things different if multiplication in R is not commutative?
36. Consider R = M₂(ℝ), the ring of 2 × 2 real matrices. Give a careful explanation of what R[x] denotes, illustrated with at least two examples of elements of R[x], described using both the formal definition (i.e. in terms of sequences) and the more conventional "in terms of x" notation. Show how each of those examples can also be regarded in a natural way as an element of M₂(ℝ[x]). Your solution should include a clear (verbal) explanation of what M₂(ℝ)[x] and M₂(ℝ[x]) are, and how they are (technically) different rings, and an answer to the question: Are M₂(ℝ)[x] and M₂(ℝ[x]) isomorphic?
37. For each a ∈ R, define a function μ_a ∈ Func(R) by μ_a(r) = ar. (This function is called "left multiplication by a"; the Greek letter mu stands for "multiplication".) Prove: (a) μ_{a+b} = μ_a + μ_b. (b) μ_{ab} = μ_a ∘ μ_b. (c) μ_1 = id_R.
38. Let μ(R) denote the set of all functions of the form μ_a, a ∈ R (see previous exercise). Prove that μ(R) is a ring with respect to the operations of addition and composition. Is μ(R) isomorphic to R? Is it the same set of functions as R̂?
39. Define a linear function on R to be any function of the form f = μ_a + b̂, for a, b ∈ R. (Here μ_a is defined as in the previous two questions.) Define LinFunc(R) to be the set of all such linear functions. Explain the correspondence between this set, and the things we call "linear functions" in high school Algebra. Prove that LinFunc(R) is closed under composition.
40. If we associate the linear function μ_a + b̂ (see previous exercise) with the ordered pair (a, b) ∈ R², then composition of linear functions induces a kind of "multiplication" on ordered pairs. Describe this multiplication rule explicitly; that is, what is (a, b) ∗ (c, d)? Is this product commutative? Associative? Is it distributive over addition? Does it have an identity element? If so, are all elements invertible?
Let's pause and take stock of what we have so far. A polynomial is a finite sequence of terms from a ring R, and if we introduce the notational convention that x⁰ = 1̄_R, then any polynomial can be written in the form ∑_{k=0}^{N} aₖxᵏ. But in this expression the letter x does not stand for a member of the ring R, or even a member of R̄; rather it represents the sequence (0, e_R, 0, 0, …). So a polynomial, in this sense, is an expression, but not a function. It doesn't make any sense to speak of the "values" of a polynomial, or its "zeros", or its "graph"—a polynomial has none of those things. It does, however, make sense to multiply two polynomials together, and therefore it is meaningful to speak of whether one polynomial divides another. It also makes sense to ask whether a particular polynomial is irreducible or not, and to try to factor a polynomial into irreducible polynomials of lesser degree. Of course, we expect there to be a connection between the factors of a polynomial and the zeros or roots of a polynomial function—but in order to explore that connection, we have to find a way to interpret a formal polynomial as a polynomial function.
2.6 Interpreting Polynomials as Functions

Let p = a₀ + a₁x + a₂x² + ⋯ + a_N x^N be any polynomial in R[x]. There is a natural way to use the coefficients of this polynomial as a "recipe" for forming a member of PolyFunc(R): we define the functional interpretation of p to be the function

p̂ = â₀ + â₁ ⋅ id_R + â₂ ⋅ (id_R)² + ⋯ + â_N ⋅ (id_R)^N

(Note that once again we use the "hat" accent mark to distinguish a function from another mathematical object.) Hopefully it is clear that any polynomial function can be obtained by functionally interpreting a suitable polynomial p ∈ R[x]. However, it is important to realize that two different polynomials p, q ∈ R[x] may have functional interpretations p̂, q̂ ∈ PolyFunc(R) that are nevertheless identical to one another. Indeed we have seen this before, and it is the motivation for distinguishing between "polynomial expressions" and "polynomial functions" in the first place! Consider, for example, the ring R = ℤ₆, and the three polynomials

p = (0, 4, 3, 0, 0, 1, 0, 0, …) = 4x + 3x² + x⁵
q = (0, 5, 0, 0, 3, 0, 0, …) = 5x + 3x⁴
r = (0, 2, 0, 0, …) = 2x

As members of ℤ₆[x], these are three completely different things. However, when interpreted as functions, they are

p̂ = 4̂ ⋅ id + 3̂ ⋅ id² + id⁵
q̂ = 5̂ ⋅ id + 3̂ ⋅ id⁴
r̂ = 2̂ ⋅ id

and we have already observed (see the discussion after Example 2, §2.4) that these are the same function; specifically, each of them is equal to the set of ordered pairs

{(0, 0), (1, 2), (2, 4), (3, 0), (4, 2), (5, 4)}

which is a member of PolyFunc(ℤ₆). Our investigation in this chapter is motivated by trying to understand this question: What happens when we interpret a polynomial as a function, and when do two different polynomials have the same functional interpretation? To frame this question somewhat more precisely, we introduce the following definition.

Definition. The functional interpretation map¹⁰ is the map Φ : R[x] → Func(R) defined by Φ(p) = p̂.

Theorem. The functional interpretation map is a ring homomorphism; in particular, for any two polynomials p, q ∈ R[x],

Φ(p + q) = Φ(p) + Φ(q)
Φ(p ⋅ q) = Φ(p) ⋅ Φ(q)

Proof. Exercises 41–42.

Let's illustrate how polynomials can be interpreted as functions in a slightly less familiar context:
Example. Let R = ℤ₃ × ℤ₅ = {(a, b) | a ∈ ℤ₃, b ∈ ℤ₅}. This is a set consisting of 15 distinct ordered pairs, just barely small enough for us to list them all:

R = {(0, 0), (0, 1), (0, 2), (0, 3), (0, 4), (1, 0), (1, 1), (1, 2), (1, 3), (1, 4), (2, 0), (2, 1), (2, 2), (2, 3), (2, 4)}

We can make R into a ring by defining both multiplication and addition componentwise, i.e. so that (a, b) + (c, d) = (a + c, b + d) and (a, b) ⋅ (c, d) = (ac, bd), keeping in mind that in the first component addition and multiplication are interpreted in ℤ₃ and in the second component both are interpreted in ℤ₅. For example,

(2, 2) + (2, 3) = (1, 0)

and

(2, 2) ⋅ (2, 3) = (1, 1)

Even though both ℤ₃ and ℤ₅ are fields, the product R = ℤ₃ × ℤ₅ is not a field—it's not even an integral domain (Exercise 43). Now, consider the polynomial g ∈ R[x] given by

g = (2, 1)x² + (1, 3)

This is something we really haven't seen before: a polynomial whose coefficients are ordered pairs. Under the functional interpretation map Φ : R[x] → Func(R), g is mapped to the polynomial function

ĝ = \widehat{(2, 1)} ⋅ (id_R)² + \widehat{(1, 3)}

How does this polynomial function act on elements of R? For example, ĝ(2, 3) can be computed as follows:

ĝ(2, 3) = \widehat{(2, 1)}(2, 3) ⋅ (id_R)²(2, 3) + \widehat{(1, 3)}(2, 3) = (2, 1) ⋅ (2, 3)² + (1, 3)

Now we compute: (2, 3)² = (1, 4), so (2, 1) ⋅ (2, 3)² = (2, 1) ⋅ (1, 4) = (2, 4), and finally

ĝ(2, 3) = (2, 4) + (1, 3) = (0, 2)

This example shows how we can compute the value of such a polynomial function on R. But what about solving a polynomial equation? That is, what if we want to find the "roots" of g, i.e. the elements (a, b) ∈ R such that ĝ(a, b) = (0, 0)? Again we calculate:

ĝ(a, b) = \widehat{(2, 1)}(a, b) ⋅ (id_R)²(a, b) + \widehat{(1, 3)}(a, b) = (2, 1) ⋅ (a, b)² + (1, 3) = (2a² + 1, b² + 3)

So solving the equation ĝ(a, b) = (0, 0) splits into finding a solution a ∈ ℤ₃ to the equation 2a² + 1 = 0, and a solution b ∈ ℤ₅ to the equation b² + 3 = 0. The solution to the first equation is a = 1 or a = 2, but the second equation has no solutions in ℤ₅, so the equation ĝ(a, b) = (0, 0) has no solutions.

The functional interpretation map Φ provides a language for restating the guiding questions for this chapter in terms of the properties of a ring homomorphism:
Guiding Question 1 (revised). For what rings R is the map Φ : R [ x ] → Func ( R ) injective, i.e. one-to-one? If Φ is not injective, for which distinct polynomials p, q ∈ R [ x ] will Φ ( p ) = Φ ( q )? Guiding Question 2 (revised). For what rings R is the map Φ : R [ x ] → Func ( R ) surjective, i.e. an onto function? If Φ is not surjective, what can be said about the relationship between Func( R ) and the proper subset PolyFunc( R )?
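To make the map Φ concrete over a finite ring, here is a small sketch (illustrative only; Phi is an invented helper). It interprets a coefficient list as a function on ℤ₆ and exhibits exactly the failure of injectivity raised in Guiding Question 1: the three polynomials of the ℤ₆ example above all map to the same function.

```python
# Sketch: the functional interpretation map Phi over Z_n, with a polynomial
# given by its coefficient list [a_0, a_1, ...].
def Phi(coeffs, n):
    return tuple(sum(a * pow(r, k, n) for k, a in enumerate(coeffs)) % n
                 for r in range(n))

n = 6
p = [0, 4, 3, 0, 0, 1]   # 4x + 3x^2 + x^5
q = [0, 5, 0, 0, 3]      # 5x + 3x^4
r = [0, 2]               # 2x

# Three different polynomials, one and the same polynomial function:
print(Phi(p, n) == Phi(q, n) == Phi(r, n))   # True
print(Phi(r, n))                             # (0, 2, 4, 0, 2, 4)
```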
In case you need a refresher on the vocabulary used above, recall that a function f from a set A to another set B is called injective or one-to-one (the words mean the same thing) if whenever a₁ and a₂ are distinct elements of A, then f(a₁) and f(a₂) are distinct elements of B; alternatively, injectivity is the property that

f(a₁) = f(a₂) ⇒ a₁ = a₂

We have been wondering throughout this chapter when and why it can happen that two different polynomials p and q determine identical polynomial functions, i.e. when will Φ(p) = Φ(q) but p ≠ q. This is precisely the question "When will Φ fail to be injective?" Another useful way to frame this question is: What is the kernel of Φ?

Definition. Let R and S be two rings, and suppose f : R → S is a ring homomorphism. Then the kernel of f is the set of all elements of R that are mapped by f to the zero element in S, i.e.

ker f = {r ∈ R | f(r) = 0_S}

The relevance of this definition is contained in the following proposition:

Proposition. Let R and S be two rings, and f : R → S a ring homomorphism. Then two elements r₁, r₂ ∈ R are mapped by f to the same element in S if and only if r₁ − r₂ ∈ ker f.

Proof. Suppose f(r₁) = f(r₂). Then f(r₁) − f(r₂) = 0_S. But ring homomorphisms preserve sums and differences¹¹, so f(r₁ − r₂) = f(r₁) − f(r₂) = 0_S, which shows that r₁ − r₂ ∈ ker f. The converse is essentially the same proof in the other order.

In terms of our first guiding question, the preceding proposition allows us to ask it in yet another (final) version:

Guiding Question 1 (final revision). For what rings R is the map Φ : R[x] → Func(R) injective, i.e. one-to-one? If Φ is not injective, what are the polynomials in ker Φ?

Returning to the example of R = ℤ₆, we have seen that if g = 5x³ + 3x⁴ and h = 2x then ĝ = ĥ, i.e. Φ(g) = Φ(h). This means that the polynomial g − h = 5x³ + 3x⁴ − 2x has the property that \widehat{g − h} = 0̂, i.e. g − h ∈ ker Φ. This can be verified directly by computing the value of the polynomial function Φ(g − h) induced by 5x³ + 3x⁴ − 2x for each value in ℤ₆ (Exercise 45).

This polynomial g − h is not the only polynomial in ker Φ when R = ℤ₆. Consider also the 6th-degree polynomial

p = x(x − 1)(x − 2)(x − 3)(x − 4)(x − 5)

Its functional interpretation is p̂ = id_R(id_R − 1̂)(id_R − 2̂)(id_R − 3̂)(id_R − 4̂)(id_R − 5̂). When this is evaluated for each element in ℤ₆ we find that
p̂(0) = 0(0 − 1)(0 − 2)(0 − 3)(0 − 4)(0 − 5) = 0
p̂(1) = 1(1 − 1)(1 − 2)(1 − 3)(1 − 4)(1 − 5) = 0
p̂(2) = 2(2 − 1)(2 − 2)(2 − 3)(2 − 4)(2 − 5) = 0

and so on. For every r ∈ ℤ₆, we have p̂(r) = 0, so p̂ = 0̂ and p ∈ ker Φ.¹²

We know that in the special case R = ℝ, i.e. when we consider polynomials over the real numbers, this kind of thing doesn't happen: there is no nonzero polynomial whose graph is identical to y = 0. Equivalently, two polynomials over ℝ agree as functions if and only if they are equal term by term; that is, in the context of high school algebra, the interpretation map Φ : ℝ[x] → Func(ℝ) is injective, and ker Φ = {0}. But we also know that this is not true for all rings. In what follows, we begin to uncover the properties of Φ on different rings. It turns out (perhaps surprisingly!) that far from being abstract properties of ring theory, the main ideas we need are actually central to the high school curriculum. We begin with a pair of definitions, and a major theorem:
Definitions. (1) The coefficient of the highest-degree term of a polynomial p ∈ R[x] is called the leading coefficient of p. (2) If the leading coefficient of p is 1, p is called a monic polynomial.

Theorem (The Euclidean Algorithm). Let R be any ring, let p ∈ R[x] be any polynomial, and let d ∈ R[x] be a monic polynomial. Then there exists a unique q ∈ R[x] and a unique r ∈ R[x] with the property that deg r < deg d and such that p = q ⋅ d + r.
This theorem describes the possibility of performing long division on polynomials: in this situation, we divide the "dividend" polynomial p by a "divisor" polynomial d and obtain a "quotient" q and a "remainder" r. Each of p, d, q and r is a polynomial; the theorem promises us that as long as the divisor is monic, we can always find a quotient, and the remainder will have a lower degree than the divisor.

Proof. We sketch the idea for the proof by considering how the long division algorithm actually works. Suppose our initial polynomial p is of the form p = aₙxⁿ + a_{n−1}x^(n−1) + … + a₀ and the divisor is d = xᵐ + b_{m−1}x^(m−1) + … + b₀, with m ≤ n. The long division process would start as follows: We write the polynomial p inside the long division enclosure and the divisor d to its left. Then we ask the question, "What do we need to multiply xᵐ by to get aₙxⁿ?" The answer is aₙx^(n−m), which we write directly above the division enclosure:

                                    aₙx^(n−m)
xᵐ + b_{m−1}x^(m−1) + ⋯ + b₀ ) aₙxⁿ + a_{n−1}x^(n−1) + ⋯ + a₀
This tells us what the first term of the quotient needs to be. Now to continue, we multiply this first term by the divisor, and subtract the result from the dividend. This has the effect of producing a new dividend polynomial, whose degree is lower than the one we began with. We repeat the process, each time getting one additional term in the quotient, and reducing the degree of the dividend. Eventually we end up with a dividend whose degree is less than that of the divisor; at that point we stop, and call what is left over the remainder.

This iterative algorithm can be formalized as a proof by induction. Let n be the degree of p, and m the degree of d as above, and write k = n − m. If k < 0 then the dividend p already has lower degree than the divisor, so we can simply take q = 0, r = p and write p = 0 ⋅ d + p. So we really only need to consider the case when k ≥ 0.

Base case: k = 0. In this case, n = m, i.e. the dividend and the divisor have the same degree. In such a situation, the long division algorithm would have only one step to it; the quotient would be q = āₙ, and the remainder would be r = p − q ⋅ d, which is guaranteed to have degree less than n because the first term of p and the first term of q ⋅ d would match and cancel out. Then p = q ⋅ d + r and we are done.

Induction step: Suppose that we can perform long division for polynomials if n and m differ by no more than some positive value k; we need to prove that we can also divide a polynomial whose leading term is aₙxⁿ by a divisor whose leading term is xᵐ with n = m + k + 1. The first term of the quotient will need to be aₙx^(k+1). Then set

p′ = p − aₙx^(k+1) ⋅ d

By construction, p′ is a polynomial whose degree is less than that of p, because the first term of p and the first term of aₙx^(k+1) ⋅ d will match and cancel. But this means that the difference between the degree of p′ and the degree of d is at most k, and so by the induction hypothesis we can write

p′ = q′ ⋅ d + r

where q′ is the quotient we get from dividing p′ by d, and r is the remainder whose degree is less than that of d. Now let's put it all together. We have

p = aₙx^(k+1) ⋅ d + p′ = aₙx^(k+1) ⋅ d + (q′ ⋅ d + r) = (aₙx^(k+1) + q′) ⋅ d + r

If we set q = aₙx^(k+1) + q′ then we have found a quotient q and a remainder r such that p = q ⋅ d + r, as desired. This proves the induction step, and completes the proof.

A few comments are in order before we proceed. First, notice that the proof replicates precisely the algorithm for long division. In the induction step, we subtract aₙx^(k+1) ⋅ d from p to get a lower-degree polynomial p′; in the algorithm we would say "now iterate and divide p′ by d to get a quotient q′", whereas in the proof we say "by assumption, p′ can be divided by d to get a quotient q′". This is really a difference in style more than substance.

Second, we emphasize that this algorithm only works over general rings if the divisor polynomial is monic. If the leading coefficient of d is not 1, it may not be possible to multiply d by something to "match" the leading coefficient of p. For example, let's work over ℤ₆, and consider what happens if we try to divide p = 3x² + x + 1 by q = 2x + 5:
2x + 5 ) 3x² + x + 1

The algorithm begins: we ask, "What can we multiply 2x by to match 3x²?" In order to make the degrees match, it should be of the form a₁x. Unfortunately, here the algorithm immediately grinds to a halt, as there is no element a₁ ∈ ℤ₆ with the property that 2 ⋅ a₁ = 3. Essentially, the problem is that as an element of ℤ₆, 2 is not invertible. When we work over a field, this is not a problem, because every nonzero element is invertible. Over general rings, we need to be more careful. In fact the theorem just proved above can be generalized slightly: the Euclidean Algorithm (polynomial long division) works over any ring as long as the leading coefficient of the divisor is an invertible element in the ring. In particular, when working over ℤₙ we can always do long division as long as the leading coefficient of the divisor has no factors in common with n; see Exercises 46–48. Fortunately, this bit of nuance won't matter very much in what follows, as the only long division we will need involves monic divisors.
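The division algorithm described above translates into a short routine over ℤₙ. The sketch below (an editorial illustration; polydivmod is an invented name) requires the divisor's leading coefficient to be a unit mod n, which in particular covers every monic divisor:

```python
# Sketch: long division of polynomials over Z_n.  A polynomial is a coefficient
# list [a_0, a_1, ..., a_deg]; the divisor's leading coefficient must be a unit
# mod n (any monic divisor qualifies).  Returns (quotient, remainder).
def polydivmod(p, d, n):
    p = [a % n for a in p]
    d = [a % n for a in d]
    inv = pow(d[-1], -1, n)          # raises ValueError if not invertible mod n
    q = [0] * max(len(p) - len(d) + 1, 1)
    r = p[:]
    while len(r) >= len(d):
        shift = len(r) - len(d)
        coef = (r[-1] * inv) % n     # next quotient coefficient
        q[shift] = coef
        for i, di in enumerate(d):   # subtract coef * x^shift * d from r
            r[i + shift] = (r[i + shift] - coef * di) % n
        r.pop()                      # the leading term is now zero; drop it
    return q, r

# Over Z_6, divide 3x^2 + x + 1 by the monic divisor x - 2:
print(polydivmod([1, 1, 3], [-2, 1], 6))   # ([1, 3], [3]): quotient 3x + 1, remainder 3
# Dividing by 2x + 5 instead, i.e. polydivmod([1, 1, 3], [5, 2], 6), raises
# ValueError, reflecting the fact that 2 is not invertible mod 6.
```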
Exercises
41. For any two polynomials p, q ∈ R[x], show that Φ(p + q) = Φ(p) + Φ(q).
42. For any two polynomials p, q ∈ R[x], show that Φ(pq) = Φ(p) ⋅ Φ(q).
43. Verify that R = ℤ₃ × ℤ₅ is not an integral domain, and hence not a field. Which of its 15 elements are invertible?
44. Let R = ℤ₃ × ℤ₉, h = (2, 1)x² + (1, 0) ∈ R[x]. Solve the equation ĥ(a, b) = (0, 0).
45. Let R = ℤ₆ and let p ∈ R[x] be given by p = 5x³ + 3x⁴ − 2x. Compute a table of values for the function p̂ ∈ PolyFunc(R) and verify directly that p ∈ ker Φ.
46. Let R = ℤ₅ and use long division to write p = 3x² + x + 4 in the form q ⋅ d + r, where d = 2x − 1.
47. Repeat the previous exercise, this time working over ℤ₇.
48. Repeat the previous two exercises, this time working over ℚ. What relationship, if any, do you see between the solutions of these three problems?

When the divisor d has the particular form d = x − c for some c ∈ R, there is a faster alternative to long division, called synthetic division. The key idea underlying synthetic division is summarized in the following proposition:

Proposition (Synthetic Division). Let p = aₙxⁿ + ⋯ + a₀ be an nth-degree polynomial, and let d = x − c for some c ∈ R. Then if we write the quotient polynomial in the form q = bₙx^(n−1) + ⋯ + b₁ (notice that the subscripts on the coefficients do not match the degree of the term) and denote the remainder as r = b₀, then each coefficient bᵢ may be found from the recursive computation

bₙ = aₙ,   bₖ = aₖ + c ⋅ bₖ₊₁  (for k < n)

Proof. By definition, we have

aₙxⁿ + ⋯ + a₀ = (x − c)(bₙx^(n−1) + ⋯ + b₁) + b₀
Expanding out the right-hand side and collecting terms of equal degree, we have
aₙxⁿ + ⋯ + a₀ = bₙxⁿ + (b_{n−1} − c⋅bₙ)x^(n−1) + ⋯ + (b₁ − c⋅b₂)x + (b₀ − c⋅b₁)

Equating the coefficients of the left-hand side with the corresponding coefficients on the right-hand side, we obtain

aₙ = bₙ,   aₖ = bₖ − c ⋅ bₖ₊₁  (for k < n)

from which the conclusion of the proposition follows.

The preceding proposition provides a recursive algorithm for generating, one at a time, the coefficients of the quotient polynomial q and the remainder r in the special case when the divisor is of the form d = x − c. This recursive algorithm is commonly implemented in tabular form, as shown below:

c |  aₙ        a_{n−1}                  a_{n−2}    ⋯   a₁       a₀
  |  ↓         c⋅bₙ                     c⋅b_{n−1}  ⋯   c⋅b₂     c⋅b₁
  |  bₙ = aₙ   b_{n−1} = a_{n−1} + c⋅bₙ   b_{n−2}    ⋯   b₁     | b₀ = r
In this implementation, we write the coefficients aₖ of the dividend polynomial p across the top row, with the value of c (from the divisor x − c) in the top-left corner, set off from the rest of the coefficients by a border. We then fill in the first entry of the bottom row by bringing down the value directly above it, as shown above. This completes the first column. We continue working left-to-right and top-to-bottom: each time we write down one of the coefficients bₖ in the bottom row, we multiply it by c and write the result in the cell above and to the right of it; then we add the two entries in the next column and record the sum in the bottom row. In this way we proceed to generate each of the coefficients bₖ from the ones that have already been produced. When we complete the table, the last entry in the bottom row is the remainder r, shown set off from the rest of the row by a border.

The Euclidean Algorithm (i.e. polynomial long division) and its variant, Synthetic Division, are fixtures of the high school Algebra 2 curriculum, but only in the context of polynomials over ℝ. In fact, neither algorithm is restricted to that specific coefficient field, as the following examples show.

Example. Let R = ℂ, p = 3x² + (2 + i)x + (4 − 3i), and d = x − 2i. Then the synthetic division algorithm is implemented as shown in the following table:

2i |  3    2 + i     4 − 3i
   |  ↓    6i        −14 + 4i
   |  3    2 + 7i  |  −10 + i
We conclude from this calculation that on dividing 3x² + (2 + i)x + (4 − 3i) by x − 2i, the quotient is 3x + (2 + 7i), with remainder −10 + i; that is, we have found that

3x² + (2 + i)x + (4 − 3i) = (x − 2i)(3x + (2 + 7i)) + (−10 + i)
which can be verified by expanding and simplifying the right-hand side of the equation.
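The recursion bₙ = aₙ, bₖ = aₖ + c⋅bₖ₊₁ is short enough to implement in a few lines. A sketch (illustrative only; ordinary Python complex numbers stand in for ℂ):

```python
# Sketch: synthetic division of p = a_n x^n + ... + a_0 by d = x - c.
# Coefficients are given highest degree first; returns (quotient, remainder).
def synthetic_division(coeffs, c):
    b = [coeffs[0]]                  # b_n = a_n
    for a in coeffs[1:]:
        b.append(a + c * b[-1])      # b_k = a_k + c * b_{k+1}
    return b[:-1], b[-1]

# the example above: 3x^2 + (2+i)x + (4-3i) divided by x - 2i
q, r = synthetic_division([3, 2 + 1j, 4 - 3j], 2j)
print(q, r)   # quotient 3x + (2+7i), remainder -10 + i
```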
Example. Let R = ℤ₁₂, p = 6x³ + 4x² + 3x + 8, and d = x + 7. Then the synthetic division algorithm is implemented as shown in the following table (note that in ℤ₁₂ we can write x + 7 as x − 5):

5 |  6     4     3    8
  |  ↓     6     2    1
  |  6    10     5  | 9
which shows that, over ℤ₁₂, the result of dividing 6x³ + 4x² + 3x + 8 by x + 7 is 6x² + 10x + 5 with remainder 9, so that we may write 6x³ + 4x² + 3x + 8 = (x + 7)(6x² + 10x + 5) + 9.

We continue our treatment of polynomials with another topic that is, like Long Division and Synthetic Division, a mainstay of the Algebra 2 curriculum.

Theorem (The Remainder Theorem). Let p ∈ R[x] be any polynomial and let d = x − a for some a ∈ R. Then on dividing p by d, the remainder is given by r̄, where r = p̂(a).

Proof. We know that when we divide p by d we get a remainder whose degree is less than that of d. If d = x − a then d is 1st-degree, so the degree of the remainder must be 0, i.e. the remainder is some constant. Thus we know

p = q ⋅ (x − a) + r̄

Now let's apply Φ to both sides of this equation, turning everything into a function. We have

p̂ = q̂ ⋅ (id_R − â) + r̂

This next bit is clever: apply the function above to the element a ∈ R. We get

p̂(a) = q̂(a) ⋅ (id_R(a) − â(a)) + r̂(a)

This rather messy equation can be cleaned up quite a bit if we remember that id_R(a) = a, â(a) = a, and r̂(a) = r, so that

p̂(a) = q̂(a) ⋅ (a − a) + r

and finally since q̂(a) ⋅ (a − a) = q̂(a) ⋅ 0 = 0, we get p̂(a) = r, which is what we wanted to prove.

In the high school context, the Remainder Theorem says (without all of the "bar" and "hat" accents) that when you divide a polynomial p by x − a, the remainder is p(a).
This theorem is what justifies the use of what is called "synthetic substitution" as a method of evaluating a polynomial at a particular value; instead of plugging the value directly into the polynomial, we perform synthetic division and discard every part of the result except the remainder at the end, which is the value we want. In our more abstract context, the Remainder Theorem provides the main conceptual bridge between the properties of a polynomial expression and the properties of the corresponding polynomial function. In particular, we have the following essential corollary:

Corollary (The Factor Theorem). If R is any ring, p ∈ R[x] a polynomial, and a ∈ R, then p̂(a) = 0 if and only if x − a is a factor of p, i.e. if there is some q ∈ R[x] such that p = (x − a)q.

Proof. By the Remainder Theorem, we know that p = (x − a)q + r̄, where r = p̂(a). Therefore if p̂(a) = 0 we have p = (x − a)q, and conversely if p = (x − a)q then the remainder on dividing p by x − a is 0̄, and hence p̂(a) = 0.

Example. If p̂(a) = 0 then the Factor Theorem assures us that we can write p = (x − a)q for some polynomial q. However, nothing in the theorem guarantees that this factorization is unique, and in fact it need not be. Consider the example R = ℤ₁₂ and the polynomial p = x² + 3x + 2 ∈ ℤ₁₂[x]. Let's try to factor p. It's fairly easy to see that p = (x + 2)(x + 1), which could also be written (since we are working over ℤ₁₂) as p = (x − 10)(x − 11).

We can easily verify that p̂(11) = 0 and p̂(10) = 0, as expected by the Factor Theorem. However, surprisingly, these are not the only zeros of the polynomial function. In fact it is also easy to verify directly that p̂(7) = 0 and p̂(2) = 0 as well. By the Factor Theorem, this means that (x − 7) and (x − 2) are factors of p as well. But p is only a second-degree polynomial—how can it have four different factors?

The surprising thing here is that (x − 10)(x − 11) and (x − 7)(x − 2) both multiply out (in ℤ₁₂) to give p = x² + 3x + 2. (See Exercise 53.) That is, p has two completely different factorizations. Another way to say this is that the polynomial function has four different roots, each of which is associated with a factor of p, but each of the two distinct factorizations of p makes use of only two of those factors at a time. Why is this strange behavior happening? It turns out the key is that ℤ₁₂ is not an integral domain; if it were, this kind of thing would not be possible, as the following Lemma and Corollary show:

Lemma. Let R be an integral domain and p ∈ R[x].
(a) If a, b ∈ R are distinct elements that are both zeros of Φ(p), then p = (x − a)q for some polynomial q ∈ R[x] with the property that b is a zero of Φ(q).
(b) If a₁, a₂, …, aₖ are distinct elements of R that are all zeros of Φ(p), then p = (x − a₁)(x − a₂)⋯(x − aₖ)q for some polynomial q ∈ R[x].
Remark. Before we prove the Lemma, let's reflect on what it says, and how it fits in with the example we just considered. We already know that an individual zero of a polynomial function corresponds to a factor of the polynomial; that much is contained in the Factor Theorem, which applies to all rings, not just integral domains. But this Lemma goes further: it asserts that when working over an integral domain, if we have multiple zeros of a polynomial function then those zeros correspond to factors which can be combined together in a factorization of the polynomial. This does not happen over rings in general; as we have seen, over ℤ₁₂ the polynomial function corresponding to x² + 3x + 2 has four zeros (2, 7, 10 and 11), which correspond to four factors, but those four factors can not all go together into a single factorization of the polynomial. So why is the Lemma true?

Proof. Suppose a ∈ R and b ∈ R are distinct zeros of Φ(p). By the Factor Theorem, we know p = (x − a) ⋅ q for some polynomial q ∈ R[x]. When we interpret both sides of this equation as functions via the map Φ, we get p̂ = (id_R − â) ⋅ q̂. Now let's act with both sides of this equation on b ∈ R, getting

0 = (b − a) ⋅ q̂(b)

since p̂(b) = 0 by hypothesis. At this point we invoke the fact that R is assumed to be an integral domain: since a and b are distinct, it must be true that q̂(b) = 0. This proves part (a).

Now suppose a₁, a₂, …, aₖ are distinct elements of R that are all zeros of p̂. By (a), we know that we can write p = (x − a₁)q₁ for some polynomial q₁ ∈ R[x] for which a₂, …, aₖ are all zeros. Then q₁ can be written as q₁ = (x − a₂)q₂ for some other polynomial q₂ ∈ R[x] for which a₃, …, aₖ are all zeros. Putting this factorization together with the previous one, we have p = (x − a₁)(x − a₂)q₂. We can continue iteratively, pulling a factor (x − aₖ₊₁) out of each qₖ, eventually getting the complete factorization of part (b).

This Lemma gives us two very helpful Corollaries:

Corollary 1. If R is an integral domain, p ∈ R[x], deg p = n, then Φ(p) has at most n distinct zeros.
Proof. By part (b) of the Lemma, each zero of Φ(p) contributes a factor to the factorization of p. If there were more than n zeros, then p would be a product of more than n 1st-degree factors, and hence would have a degree higher than n.

Corollary 2. If R is an integral domain, and p and q are two monic, degree n polynomials with the same set of n distinct zeros, then p = q.

Proof. Exercise 55.
Corollary 3. If R is an infinite integral domain, then Φ is a one-to-one function, and therefore no two polynomials induce the same polynomial function.

Proof. Suppose Φ were not one-to-one; then as discussed above, there would be some nonzero polynomial p ∈ R[x] with p̂(r) = 0 for every r ∈ R. But if R is infinite, this contradicts Corollary 1, which asserts that the number of zeros of p̂ must be finite.

Corollary 3 explains (finally!) why the correspondence between polynomials and polynomial functions over ℝ is one-to-one: ℝ is an infinite integral domain¹³. Over ℝ, no two distinct polynomials ever produce the same polynomial function. It also tells us more generally that the same rule applies to any infinite integral domain, like ℤ or ℚ or ℂ. In these settings, it is completely safe to totally ignore the technical distinction between "polynomial" and "polynomial function"—they may be identified in a completely natural manner and no information is lost in doing so. But what about finite rings, or infinite rings that are not integral domains? Those are more complicated; we turn to them in the next section.
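Before turning to the exercises, the ℤ₁₂ example above can be double-checked by brute force (compare Exercises 53 and 54). A small sketch, offered only as an illustration:

```python
# Sketch: over Z_12, x^2 + 3x + 2 has four zeros and two distinct factorizations.
n = 12
zeros = [r for r in range(n) if (r * r + 3 * r + 2) % n == 0]
print(zeros)                         # [2, 7, 10, 11]

def expand(c1, c2):                  # expand (x - c1)(x - c2) mod n
    return [(c1 * c2) % n, (-(c1 + c2)) % n, 1]   # [constant, x-coeff, x^2-coeff]

print(expand(10, 11), expand(7, 2))  # both give [2, 3, 1], i.e. x^2 + 3x + 2
```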
Exercises
49. Let R = ℂ. Use synthetic division to divide p = x⁴ − 8x³ + 27x² − 38x + 26 by x − (3 + 2i). Then divide the resulting quotient by x − (3 − 2i). Use your result to write p as a product of two quadratic polynomials with real coefficients, each irreducible over ℝ.
50. Let R = ℤ₁₈. Use synthetic division to verify that both x − 2 and x − 5 are factors of p = x⁴ − x³ + 5x² − 7x + 4. Then use polynomial long division to verify that the product (x − 2)(x − 5) is not a factor of p. Is this a contradiction? Explain.
51. Let R = ℤ₁₇. Use synthetic substitution to verify that 4 and 10 are both zeros of p = x⁴ + 5x³ − 6x² − 8x − 6. Does this imply that the product (x − 4)(x − 10) is a factor of p? Why or why not? Find a complete factorization of p.
52. As noted in this section, synthetic division can only be used when the divisor is of the form d = x − c. However, in the case of a quadratic divisor that itself can be factored in the form d = (x − c₁)(x − c₂), division by d can be performed by first dividing by x − c₁ and then dividing the result by x − c₂. Illustrate this with a well-chosen example, being careful to explain how to find both the quotient and remainder for division by d.
53. Verify that in ℤ₁₂, both (x − 10)(x − 11) and (x − 7)(x − 2) multiply out to give x² + 3x + 2.
54. Verify that in ℤ₁₂, the values 2, 7, 10 and 11 are all zeros of the polynomial function Φ(x² + 3x + 2).
55. Prove Corollary 2.
2.7 Polynomials over Finite Rings

At the end of the last section we saw (Corollary 3) that if R is an infinite integral domain, then the functional interpretation map Φ : R[x] → Func(R) is injective, i.e. no nonzero polynomial in R[x] acts "like zero" on all of R. The situation for finite rings is very different:
Proposition. Let R be a finite ring with n elements, say R = {r₁, r₂, …, rₙ}. Then the nth-degree monic polynomial

χ_R = (x − r₁)(x − r₂)(x − r₃)⋯(x − rₙ)

has the property that χ̂_R = 0̂. (In other words, for all r ∈ R, we have χ̂_R(r) = 0.)

Proof. Any r ∈ R is rᵢ for some i, and therefore the factor (x − rᵢ) (and hence all of χ_R) yields 0 when interpreted as a function and evaluated at r.

This simple proposition has far-reaching consequences. We refer to the polynomial constructed in the proposition above as the characteristic polynomial¹⁴ of R. Below, we calculate a few examples of characteristic polynomials for different finite rings.

Examples 1–7.
1. If R = ℤ₅ then χ_R = x(x − 1)(x − 2)(x − 3)(x − 4). To calculate this efficiently, recall that since we are working over ℤ₅, we can write x − 4 = x + 1 and x − 3 = x + 2. Therefore χ_R = x(x − 1)(x + 1)(x − 2)(x + 2) = x(x² − 1)(x² − 4) = x(x⁴ + 4) = x⁵ − x.
2. If R = ℤ₆ then χ_R = x(x − 1)(x − 2)(x − 3)(x − 4)(x − 5). We can rewrite this as χ_R = x(x − 1)(x + 1)(x − 2)(x + 2)(x − 3) = x(x² − 1)(x² − 4)(x − 3), and after some tedious calculation we finally end up with χ_R = x⁶ − 3x⁵ + x⁴ + 3x³ + 4x².
3. If R = ℤ₇ then χ_R = x⁷ − x. (Exercise 56.)
4. If R = ℤ₈ then χ_R = x⁸ − 4x⁷ + 2x⁶ + x⁴ + 4x³ + 4x². (Exercise 57.)
5. If R = ℤ₉ then χ_R = x⁹ − 3x⁷ + 3x⁵ − x³. (Exercise 58.)
6. If R = ℤ₁₀ then χ_R = x¹⁰ − 5x⁹ + 3x⁶ + 5x⁵ + 6x². (Exercise 59.)
7. If R = ℤ₁₁ then χ_R = x¹¹ − x. (Exercise 60.)

In each case, the characteristic polynomial of ℤₙ is an nth-degree monic polynomial that "acts like zero" on ℤₙ. Having such a polynomial in hand is incredibly helpful for answering our first guiding question:

Guiding Question 1 (final revision). For what rings R is the map Φ : R[x] → Func(R) injective, i.e. one-to-one? If Φ is not injective, what are the polynomials in ker Φ?

We now know that if R is finite then Φ is not injective, and ker Φ contains (at least) the characteristic polynomial χ_R. Consequently if two polynomials differ by χ_R, they are "functionally equivalent"; i.e., if p, q ∈ R[x] and p − q = χ_R then the polynomial functions p̂ and q̂ are the same function. This notion of "functional equivalence" is a useful way of thinking about the non-injectivity of Φ. We can use the characteristic polynomial of a finite ring to replace any polynomial of degree ≥ n with an equivalent polynomial of degree < n:

Example 8. Let R = ℤ₅. We calculated in Example 1 above that χ_R = x⁵ − x. This polynomial is functionally equivalent to 0̂, and therefore the polynomials x⁵ and x are functionally equivalent to each other, i.e. Φ(x⁵) = Φ(x). Therefore in any polynomial of degree 5 or higher, if we replace each occurrence of x⁵ with x, we obtain a lower-degree equivalent polynomial.
This replacement can be repeated iteratively until we eventually get a polynomial of degree 4 or lower. Let's see how this plays out with an example. Consider p = x⁹ + 2x + 1 ∈ ℤ₅[x]. We wish to find another polynomial q, of degree 4 or lower, with the property that q̂ = p̂. We begin by factoring out an x⁵ from the highest-degree term and replacing it with x:

x⁹ + 2x + 1 = (x⁵)x⁴ + 2x + 1 ⇝ (x)x⁴ + 2x + 1 = x⁵ + 2x + 1

This shows that Φ(x⁹ + 2x + 1) = Φ(x⁵ + 2x + 1). But we can go farther: repeating the process, we have

x⁵ + 2x + 1 ⇝ x + 2x + 1 = 3x + 1

So it turns out that the 9th-degree polynomial x⁹ + 2x + 1, when interpreted as a function on ℤ₅, is equivalent to the 1st-degree polynomial 3x + 1. This surprisingly simple conclusion can be verified by computing a table of values for each polynomial, interpreted as a function on ℤ₅, and confirming that they are identical (Exercise 61).

Example 9. Let R = ℤ₆. In this case (see Example 2 above), χ_R = x⁶ − 3x⁵ + x⁴ + 3x³ + 4x². As in the previous case, because Φ(x⁶ − 3x⁵ + x⁴ + 3x³ + 4x²) = 0̂ we can write

Φ(x⁶) = Φ(3x⁵ − x⁴ − 3x³ − 4x²)

This means that in any polynomial of degree 6 or higher, we can replace any occurrence of x⁶ with the expression 3x⁵ − x⁴ − 3x³ − 4x² without changing the functional interpretation of the polynomial. In this way, we can iteratively reduce any polynomial over ℤ₆ to an equivalent polynomial of degree at most 5. Clearly this replacement strategy in ℤ₆ is considerably more complex than the analogous strategy in ℤ₅, due to the more complicated form of the characteristic polynomial. Let's see how this plays out with an example.

Consider the polynomial p = x⁷ + 2x² + 3x + 1 ∈ ℤ₆[x]. We wish to find another polynomial q, of degree 5 or lower, with the property that q̂ = p̂. We begin by factoring out x⁶ from the highest-degree term and replacing it with 3x⁵ − x⁴ − 3x³ − 4x²:

x⁷ + 2x² + 3x + 1 = (x⁶)x + 2x² + 3x + 1 ⇝ (3x⁵ − x⁴ − 3x³ − 4x²)x + 2x² + 3x + 1

Simplifying this we have 3x⁶ − x⁵ − 3x⁴ − 4x³ + 2x² + 3x + 1. Now we need to reduce the 6th-degree term by making the same replacement:

3x⁶ − x⁵ − 3x⁴ − 4x³ + 2x² + 3x + 1 ⇝ 3(3x⁵ − x⁴ − 3x³ − 4x²) − x⁵ − 3x⁴ − 4x³ + 2x² + 3x + 1 = 2x⁵ − x³ + 2x² + 3x + 1

where we have also made use of the fact that we are working over ℤ₆ to reduce the coefficients mod 6. At this point we have reduced the polynomial p = x⁷ + 2x² + 3x + 1 ∈ ℤ₆[x] to an equivalent polynomial q = 2x⁵ − x³ + 2x² + 3x + 1 ∈ ℤ₆[x]. We can verify that these polynomials do indeed determine identical functions over ℤ₆ (Exercise 62).
indeed determine identical functions over ℤ₆ (Exercise 62). In addition, we observe that p − q = x⁷ − 2x⁵ + x³, and we can confirm using polynomial long division that the characteristic polynomial χ_R = x⁶ − 3x⁵ + x⁴ + 3x³ + 4x² divides evenly into p − q with no remainder (Exercise 63). Since p − q is a multiple of χ_R, it too is functionally equivalent to 0̂.

This method is, admittedly, tedious, but as we have seen it is much simpler when the characteristic polynomial has a simple form, as in the case of ℤ₅. So one might reasonably ask: for which values of n does the characteristic polynomial of ℤₙ have a simple form? An examination of Examples 1–7 suggests that it might have to do with whether n is prime or composite, and this is in fact the case. We begin with a famous theorem of number theory:

Theorem (Fermat's Little Theorem¹⁵). If p is a prime number, then for any integer a, the number aᵖ − a is an integer multiple of p.
Proof. Omitted¹⁶.

Fermat's Little Theorem has a natural expression in terms of polynomial functions over ℤₚ for a prime p:

Corollary 1. If p is a prime number, then the polynomial xᵖ − x ∈ ℤₚ[x] is functionally equivalent to 0̂.
Proof. Exercise 64.

For a prime p, we will refer to the polynomial xᵖ − x as the Fermat polynomial of ℤₚ[x]. We have arrived at an interesting situation: for p prime, we now have two completely different ways of producing a polynomial that is functionally equivalent to 0̂ on ℤₚ:

(a) On the one hand, we can form the characteristic polynomial χ_R = x(x − 1)⋯(x − (p − 1));
(b) But on the other hand, we can form the Fermat polynomial xᵖ − x.

These two polynomials are both degree p and both monic, and they have the same set of p distinct zeros, namely {0, 1, 2, …, p − 1}. Consequently, we have:

Corollary 2. The characteristic polynomial of ℤₚ (p prime) is identical with the Fermat polynomial; that is, χ_ℤₚ = xᵖ − x.

Proof. Use Corollary 2 of §2.6.

Not only is χ_ℤₚ an element of ker Φ, it is actually the lowest-degree monic polynomial that is functionally equivalent to 0̂ on ℤₚ; in fact, when working over ℤₚ, the Lemma of §2.6 proves that any polynomial in ker Φ can be written in the form χ_ℤₚ · q for some polynomial q. (The technical way to say this is that ker Φ is a principal ideal generated by the characteristic polynomial.)
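Readers who like to experiment may find it helpful to check reductions like the ones in Examples 8 and 9 by machine. The following Python sketch is our own informal illustration (the names reduce_mod_p and poly_eval are ad hoc choices, not standard notation): it carries out the replacement xᵖ → x over ℤₚ and confirms, by tabulating values, that the reduced polynomial determines the same function.

```python
def poly_eval(coeffs, x, n):
    """Evaluate a polynomial (coeffs[k] is the coefficient of x^k) at x, mod n."""
    return sum(c * pow(x, k, n) for k, c in enumerate(coeffs)) % n

def reduce_mod_p(coeffs, p):
    """Repeatedly replace x^p by x (legitimate over Z_p, since x^p - x acts like zero)."""
    coeffs = [c % p for c in coeffs]
    while len(coeffs) > p:                 # leading exponent is len(coeffs) - 1 >= p
        top = coeffs.pop()                 # coefficient of the leading term x^m
        m = len(coeffs)                    # the leading exponent that was just removed
        # x^m = x^(m-p) * x^p  ->  x^(m-p) * x = x^(m-p+1)
        coeffs[m - p + 1] = (coeffs[m - p + 1] + top) % p
        while coeffs and coeffs[-1] == 0:  # discard any trailing zero coefficients
            coeffs.pop()
    return coeffs

# Example 8: x^9 + 2x + 1 over Z_5 should reduce to 3x + 1.
p = 5
original = [1, 2, 0, 0, 0, 0, 0, 0, 0, 1]   # 1 + 2x + x^9
reduced = reduce_mod_p(original, p)
print(reduced)                               # expect [1, 3], i.e. 1 + 3x
print(all(poly_eval(original, a, p) == poly_eval(reduced, a, p) for a in range(p)))
```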
Using the above result, any polynomial function over ℤₚ can be reduced to an equivalent polynomial of degree less than p by using the replacement xᵖ → x (possibly multiple times). Furthermore, no two distinct polynomials of degree less than p can be functionally equivalent; if they were, their difference would be equivalent to 0̂, and we have already established that such a polynomial must be degree p or higher. Putting all of these observations together, we have the following:

Theorem. Let R = ℤₚ for some prime p. Then there is a 1-to-1 correspondence between PolyFunc(R) and the subset of R[x] consisting of polynomials whose degree is less than p.

Proof. Any polynomial function can be written in the form a₀ + a₁·id_R + a₂·(id_R)² + ⋯ + aₙ·(id_R)ⁿ, which corresponds (via Φ) to the formal polynomial a₀ + a₁x + a₂x² + ⋯ + aₙxⁿ. If the degree of this polynomial is p or higher, use the replacement xᵖ → x repeatedly to reduce it to an equivalent polynomial of degree less than p. This is the unique such polynomial that corresponds to the desired polynomial function.

Corollary. Every function in Func(ℤₚ) is a polynomial function; in other words, PolyFunc(ℤₚ) = Func(ℤₚ) and Φ : ℤₚ[x] → Func(ℤₚ) is surjective.
Proof. The number of distinct polynomials in ℤₚ[x] whose degree is less than p is exactly pᵖ (Exercise 65), and this is therefore the number of distinct polynomial functions. However, this is also the total number of all functions on ℤₚ (see Exercise 7 earlier in this chapter), so there is no room left in Func(ℤₚ) for any non-polynomial functions.

This result is (or at least should be) really surprising, so it is worth pausing for a moment to reflect on how unexpected it is. We are used to working over ℝ or ℂ, which abound with examples of non-polynomial functions: exponential, trigonometric, logarithmic, rational, etc. Indeed in the conventional settings with which we are most familiar, polynomials are a relatively rare type of function—arguably the simplest type. In Calculus, the whole theory of Taylor series can be understood as an attempt to approximate an arbitrary function by a sequence of polynomials of ever-increasing degree. But we have just shown that when working over ℤₚ for a prime p, none of this exists. There are no non-polynomial functions—any function, no matter where it comes from, can be expressed by a polynomial of degree less than p, and (if we restrict our attention to degree less than p) that representation is unique.
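For a small prime this surprising fact can be checked exhaustively by machine (compare Exercise 66 below). The following Python sketch (an informal illustration, not part of the formal development; the variable names are ad hoc) tabulates the 27 polynomials of degree less than 3 over ℤ₃ and confirms that every one of the 27 functions from ℤ₃ to itself arises from exactly one of them.

```python
from itertools import product

p = 3
tables = {}
for coeffs in product(range(p), repeat=p):           # (a0, a1, a2) with each a_k in Z_p
    table = tuple(sum(a * x**k for k, a in enumerate(coeffs)) % p for x in range(p))
    tables.setdefault(table, []).append(coeffs)

print(len(tables))                                    # expect 3^3 = 27 distinct value tables
print(all(len(v) == 1 for v in tables.values()))      # each table comes from exactly one polynomial
```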
Exercises
56. Show by direct calculation that if R = ℤ₇ then χ_R = x⁷ − x.
57. Show by direct calculation that if R = ℤ₈ then χ_R = x⁸ − 4x⁷ + 2x⁶ + x⁴ + 4x³ + 4x².
58. Show by direct calculation that if R = ℤ₉ then χ_R = x⁹ − 3x⁷ + 3x⁵ − x³.
59. Show by direct calculation that if R = ℤ₁₀ then χ_R = x¹⁰ − 5x⁹ + 3x⁶ + 5x⁵ + 6x².
60. Show by direct calculation that if R = ℤ₁₁ then χ_R = x¹¹ − x.
61. Compute a table of values for each of the polynomial functions determined by x⁹ + 2x + 1 and 3x + 1 over ℤ₅, and confirm that they are identical.
62. Compute a table of values for each of the polynomial functions determined by x⁷ + 2x² + 3x + 1 and 2x⁵ − x³ + 2x² + 3x + 1 over ℤ₆, and confirm that they are identical.
63. Let R = ℤ₆ and let p and q be the two polynomials of the previous exercise. Use polynomial long division (i.e. the Euclidean algorithm) to verify that the characteristic polynomial divides evenly into p − q with no remainder.
64. Show that Corollary 1 of this section is equivalent to Fermat's Little Theorem.
65. Explain why the number of distinct polynomials in ℤₚ[x] of degree less than p is exactly pᵖ.
66. The case p = 3 is small enough that it is not unreasonable to explicitly write each of the 27 polynomials of degree ≤ 2 in ℤ₃[x]. Do this, and then generate for each one a table of values for the corresponding polynomial function over ℤ₃. Confirm that every possible table of values occurs exactly once among the set. (Perhaps the most difficult part of this question will be finding a compact yet readable way to tabulate your answers. You may want to consider using a spreadsheet to organize and automate your calculations, but this is not necessary to answer the question.)
67. (This problem is a continuation of the previous exercise.) We know that a polynomial p over a field has a linear factor of the form (x − a) if, and only if, a is a zero of its functional interpretation p̂. Use this fact, together with your inventory of functions found above, to identify all of the irreducible (i.e. unfactorable) quadratic polynomials over ℤ₃.
68. Write a computer program that repeats the previous two exercises, this time for the case p = 5, for which there are 5⁵ = 3125 different polynomial functions.

At this point it is fair to say that we have completely answered both of our guiding questions for rings of the form R = ℤₚ when p is a prime number. We know that in such a case the functional interpretation map Φ : R[x] → Func(R) is not injective, but we have completely described its kernel, and we know exactly when two polynomials map to the same polynomial function; we also know that in this case the map is surjective, and that therefore every function can be represented by a polynomial. However, the situation is radically different for a ring R = ℤₙ when n is a composite number. We still have a characteristic polynomial χ_R = x(x − 1)(x − 2)⋯(x − (n − 1)), which has the property that Φ(χ_R) = 0̂, so that χ_R ∈ ker Φ. However, in this case it is not true that every polynomial in ker Φ must be a multiple of χ_R. Consider once again the case n = 6: the characteristic polynomial has degree 6, but we have seen (see Exercise 49) that p = 3x⁴ + 5x³ − 2x has the property that p̂ = 0̂. This means that while we can use the method of Example 9 to reduce any polynomial in ℤ₆[x] to an equivalent polynomial of degree 5 or less, there is still redundancy even among those lower-degree polynomials; for example, Φ(3x⁴) = Φ(x³ + 2x), despite the fact that the difference between them is not a multiple of the characteristic polynomial. The upshot of all of this is that while there are 6⁶ polynomials of degree 5 or less over ℤ₆, the number of distinct polynomial functions is less than that. How much less? The following Theorem provides an upper bound.
Theorem. Let n be a composite number that is the product of two primes, p₁ and p₂ (which may or may not be distinct from each other). Then the polynomial

(x^p₁ − x)(x^p₂ − x)

is functionally equivalent to 0̂.

Proof. Let a be any integer. By Fermat's Little Theorem, a^p₁ − a is a multiple of p₁, and a^p₂ − a is a multiple of p₂. Therefore the product (a^p₁ − a)(a^p₂ − a) is a multiple of p₁p₂ = n. But this means precisely that Φ((x^p₁ − x)(x^p₂ − x)) = 0̂ on ℤₙ.

Definition. Generalizing the case for ℤₚ, if n = p₁p₂ as in the previous theorem we call (x^p₁ − x)(x^p₂ − x) the Fermat Polynomial of ℤₙ.
Example 10. We have been interested in the case n = 6 ever since the start of this chapter, so let's see what the preceding theorem tells us about that case. The Fermat Polynomial of ℤ₆ is

(x² − x)(x³ − x) = x⁵ − x⁴ − x³ + x².

This is a monic, 5th-degree polynomial that is functionally equivalent to 0̂ on ℤ₆, and we can use it to reduce any polynomial over ℤ₆ to another, functionally equivalent polynomial of degree at most 4. (Specifically, since Φ(x⁵) = Φ(x⁴ + x³ − x²), we can iteratively replace any occurrence of x⁵ with x⁴ + x³ − x².) Since the number of possible polynomials of degree ≤ 4 with coefficients in the set {0, 1, 2, 3, 4, 5} is 6⁵, this means that the number of distinct polynomial functions is at most 6⁵. It may even be lower—nothing we have done so far precludes the possibility of additional polynomials in ker Φ, and as a matter of fact we already know that Φ(3x⁴) = Φ(x³ + 2x), which we could use to reduce any polynomial whose leading term is 3x⁴ to an equivalent 3rd-degree polynomial. Can we go even further? It turns out that accounting for all of the redundancies that exist among the polynomials over ℤ₆, and thereby describing exactly a set of polynomials that is in 1-to-1 correspondence with PolyFunc(ℤ₆), is beyond the scope of this textbook. But no matter: we definitely know that such a set, if we could find it, contains fewer than 6⁵ different polynomials; meanwhile, the set Func(ℤ₆) definitely contains 6⁶ different functions. Comparing these two numbers, we can conclude that the set of polynomial functions is less than 1/6 of the size of the set of all functions. In other words, most functions (at least 5/6, or about 83% of them) on ℤ₆ cannot be represented by a polynomial.

The situation becomes even more extreme when our composite number n is made up of larger primes. Consider the case n = 15 = 3 · 5. The Fermat polynomial is then
(x³ − x)(x⁵ − x), an 8th-degree polynomial. Using this, any polynomial can be reduced to one of degree ≤ 7, and therefore the number of distinct polynomial functions on ℤ₁₅ is at most 15⁸ (in fact it is probably much smaller). On the other hand, the total number of functions on ℤ₁₅ is 15¹⁵. Consequently, the polynomial functions make up less than 15⁸/15¹⁵ = 15⁻⁷ ≈ 0.000000585% of the set of all functions!
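Before stating the general counting result, it is natural to wonder how sharp the bound 6⁵ really is for ℤ₆. A brute-force computation is feasible, since by the reduction above every polynomial function on ℤ₆ is induced by some polynomial of degree at most 4. The Python sketch below is an exploratory aside (the exact count it reports is not needed anywhere in what follows); it simply collects the distinct tables of values.

```python
from itertools import product

n = 6
value_tables = set()
for coeffs in product(range(n), repeat=5):            # all polynomials of degree <= 4 over Z_6
    table = tuple(sum(a * x**k for k, a in enumerate(coeffs)) % n for x in range(n))
    value_tables.add(table)

print(len(value_tables), "distinct polynomial functions, out of", n**n, "functions in total")
```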
In general, we have:

Theorem. If n = p₁p₂ (where p₁ and p₂ are both prime) then there are at most n^(p₁+p₂) polynomial functions, out of nⁿ functions all together.

To get a sense of how small n^(p₁+p₂) is compared to nⁿ, a few values of the ratio of these two numbers are tabulated below.

n     p₁    p₂    n^(p₁+p₂)    nⁿ       n^(p₁+p₂) / nⁿ
6     2     3     6⁵           6⁶       1/6 ≈ 0.166
9     3     3     9⁶           9⁹       1/9³ ≈ 0.00137
10    2     5     10⁷          10¹⁰     1/10³ = 0.001
14    2     7     14⁹          14¹⁴     1/14⁵ ≈ 0.00000186
15    3     5     15⁸          15¹⁵     1/15⁷ ≈ 0.00000000585
21    3     7     21¹⁰         21²¹     1/21¹¹ ≈ 2.86 × 10⁻¹⁵
This is so dramatically different from the situation when n is a prime that it is really worth pausing once again to consider what we have shown. When working over ℤₚ for p prime, we have proved that every function is given by a polynomial; yet when working over ℤₙ for n composite, we now know that virtually every function (relatively speaking) is not expressible by a polynomial. Over primes, non-polynomial functions simply do not exist at all; over composites, they are essentially everywhere, with the polynomial functions comprising only a tiny, tiny subset of all functions. (See Exercise 69 for more on this.)
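Exercise 69 below asks you to generalize the construction of the Fermat polynomial to composite numbers with three or more prime factors. The following Python sketch suggests one way to check such a generalization numerically; it assumes (as the generalization asserts, but as you should verify for yourself) that the relevant polynomial is the product of the factors x^pᵢ − x taken over the prime factorization of n, counted with multiplicity. All names here are our own ad hoc choices.

```python
from math import prod

def prime_factors(n):
    """Prime factorization of n, with multiplicity (simple trial division)."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

def fermat_polynomial_vanishes(n):
    """Check that prod_i (a^{p_i} - a) is divisible by n for every a in Z_n."""
    ps = prime_factors(n)
    return sum(ps), all(prod(pow(a, p, n) - a for p in ps) % n == 0 for a in range(n))

for n in (8, 12, 16, 18, 27):
    degree, vanishes = fermat_polynomial_vanishes(n)
    print(n, "candidate polynomial of degree", degree, "- acts like zero on Z_n:", vanishes)
```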
Exercises
69. Generalize the results of the preceding section to the case of a composite number that is the product of three or more primes. Use your generalization to estimate the fraction of Func(ℤₙ) made up of polynomial functions for n = 8, 12, 16, 18, and 27.
70. Let R = ℤ₄. Compute the characteristic polynomial χ_R and the Fermat polynomial of R explicitly, and confirm that they are distinct monic polynomials of the same degree.
71. Use the results of the previous exercise to find a polynomial f of degree < 4 for which Φ(f) = 0̂. Use this to find an upper bound on the number of distinct polynomial functions there are over ℤ₄.
2.8 Recommended Reading

In this chapter we investigated the closely-related notions of function and polynomial, as well as the blended concept of a polynomial function. We found that, when working over an infinite integral domain (like ℤ, ℚ, or ℝ), no two polynomials induce the same function, and therefore it is generally safe to conflate the notions of "polynomial" and "polynomial function". However, when working over a finite ring, the distinction between these two notions is crucial. The questions "What is a function?" and "What do polynomials (and variables) mean?" have been the focus of much research in secondary mathematics education for decades. Your recommended reading for this chapter is:

Usiskin, Z. (1999). Conceptions of school algebra and uses of variables. In B. Moses (Ed.), Algebraic Thinking, Grades K–12: Readings from NCTM's School-Based Journals and Other Publications (pp. 7–13). Reston, VA: National Council of Teachers of Mathematics.
Vinner, S. & Dreyfus, T. (1989). Images and definitions for the concept of function. Journal for Research in Mathematics Education, 20(4), 356–366.

Usiskin (1999) distinguishes between multiple distinct notions of "variable" in school algebra. He describes the use of variables as pattern generalizers, unknowns (to be solved for), arguments (to be substituted into functions), parameters, and referent-free symbols ("marks on paper"). Usiskin also calls attention to "the question of the role of functions and the timing of their introduction":

It is clear that these two issues relate to the very purposes for teaching and learning algebra, to the goals of algebra instruction, to the conceptions we have of this body of subject matter. What is not as obvious is that they relate to the ways in which variables are used… My thesis is that the purposes we have for teaching algebra, the conceptions we have of the subject, and the uses of variables are inextricably related. Purposes for algebra are determined by, or are related to, different conceptions of algebra, which correlate with the different relative importance given to various uses of variables. (pp. 8–9, emphasis in original)

Usiskin identifies four distinct "conceptions of Algebra", each corresponding to a different use of variables. Algebra is, in his analysis, (1) a generalization of arithmetic, (2) the study of procedures for solving certain kinds of problems, (3) the study of relationships among quantities, and (4) the study of structures. When we (for example) study the graph of a polynomial function or inquire after its zeros, we are primarily attending to uses (2) and (3); in that context, a variable stands for an unspecified element of a replacement set. On the other hand, when we factor polynomials we are primarily focusing on (4). In that context, variables stand for indeterminates; that is, "marks on paper" that are to be manipulated without attending to what they stand for. Just as Usiskin distinguishes among different uses of variables, Vinner & Dreyfus (1989) distinguish among different concept images for functions. Vinner & Dreyfus introduce the notion of concept image to refer to "the set of all the mental pictures associated in the student's mind with the concept name, together with all the properties characterizing them" (p. 356). The "modern conception of function, which can be called the Dirichlet–Bourbaki concept" is essentially the one provided as a definition in §2.3 of this chapter: that is, a
Polynomials and Polynomial Functions 115 function is a correspondence between two sets, or “a set of ordered pairs that satisfies a certain condition” (p. 357). Although the Dirichlet–Bourbaki definition of function is mathematically privileged as the most correct, Vinner & Dreyfus show that students’ understandings of functions are primarily guided by other, more visually-linked concept images: Thus, when asked about the function definition, a student may well come up with the Dirichlet– Bourbaki formulation, but when working on identification or construction tasks, his or her behavior might be based on the formula conception. This inconsistent behavior is a specific case of the compartmentalization phenomenon mentioned in Vinner, Hershkowitz, and Bruckheimer (1981). This phenomenon occurs when a person has two different, potentially conflicting schemes in his or her cognitive structure. Certain situations stimulate one scheme, and other situations stimulate the other. (p. 357) An analogous case of this kind of compartmentalization is discussed in one of the Recommended Readings at the end of Chapter 4.
Projects
A. Use Usiskin's analysis as an analytical frame for analyzing one or more textbooks (possibly including this one). Which conceptions of algebra, and which uses of variables, are present most, and which least? Can you identify any additional conceptions or uses of variables that Usiskin overlooked?
B. Administer Vinner & Dreyfus's 7-item questionnaire to a suitable group of high school or college students. How do your results compare with the ones reported in their study?
C. Compare and contrast Usiskin's four "conceptions of algebra" with Vinner & Dreyfus's six definition categories for function. What (if any) correspondences do you see between these different constructs?
D. Investigate options for how one can "graph" a function over a discrete ring (or field) such as ℤₙ. How do the concept images discussed by Vinner & Dreyfus help inform such a representation?
E. Usiskin gives, as an example, the problem "Factor 3x² + 4ax − 132a²," and argues that in solving such a problem students typically do not (and are not expected to) assign meaning to the symbols x and a; they are instead supposed to "treat the variables as marks on paper, without numbers as a referent." Discuss this example in light of the content of this chapter; in particular, what happens if we regard the coefficients of this polynomial as belonging to a ring other than ℤ or ℝ?
F. Choose two references from one of the Recommended Readings and prepare a summary of each, including synopses of (a) its research question, (b) the theoretical framework, (c) the research methods, (d) its findings and conclusions.
Notes
1 See https://en.wikipedia.org/wiki/Concept_map for a description of concept maps.
2 Glencoe, Algebra 1.
3 A polynomial is irreducible over a specific ring or field if it is not possible to write it as a product of lower-degree polynomials with coefficients in that field. For example, x² + 2x + 6 is an irreducible quadratic over the field ℝ.
4 This is not standard notation; most set theory references call this set Y^X.
5 A function f that has this property is called a group homomorphism on the (additive structure of) R.
6 There's a whole Wikipedia article of that name about this misconception.
7 The technical way to say this is that PolyFunc(R) is the subring of Func(R) generated by R̂ ∪ {id_R}.
8 For the rest of this chapter, we will use 0_R and 1_R, respectively, for the additive and multiplicative identities in R, rather than continue to use θ_R and e_R. When there is no risk of confusion about which ring is intended, we will sometimes drop the suffix R.
9 This definition covers all finite sequences, with the exception of one special case: the sequence (0, 0, 0, …), which has no nonzero terms. The degree of this sequence is undefined.
10 We use the Greek letter Φ (phi) because it makes the initial sound of the word "functional". You can think of Φ(p) as "the function of p", i.e. what you get when you interpret the polynomial as a function.
11 Products, too, but that doesn't really matter here.
12 Or at least, we think we know—at this point you might be beginning to doubt it!
13 If you've been worrying ever since Footnote 12, you can relax now!
14 We use the Greek letter χ (chi) because it is the sound at the beginning of the word "characteristic". The letter χ looks a lot like the English letter x—be careful not to confuse them!
15 Despite its name and acronym, this FLT should not be confused with the other, more famous FLT, i.e. "Fermat's Last Theorem". The "Last" theorem was notorious because it defied proof until the mid-1990s, but the "Little" theorem was proved by Leibniz in the late 17th century.
16 There are many proofs of Fermat's Little Theorem; in fact there is an entire Wikipedia article collecting alternative proofs (currently containing six different proofs, using methods from combinatorics, dynamical systems, group theory and modular arithmetic).
3 Solving Equations
“[Albert’s] uncle Jakob Einstein, the engineer, introduced him to the joys of algebra. ‘It’s a merry science,’ he explained. ‘When the animal that we are hunting cannot be caught, we call it X temporarily and continue to hunt until it is bagged.’ ” —Walter Isaacson, Einstein: His Life and Universe “Mathematics is a game played according to certain simple rules with meaningless marks on paper.” —David Hilbert
3.1 “Equivalence” in the Secondary Curriculum What does it mean to “solve an equation”? However we answer this question—and as we will see, there is more than one meaningful interpretation of this seemingly straightforward phrase—it is undeniable that solving equations of various sorts constitutes one of the central activities of secondary mathematics. The process of solving equations, in turn, rests heavily on the act of transforming expressions and equations into equivalent expressions and equations. Policy documents such as the NCTM Principles and Standards for School Mathematics (2000) and the Common Core State Standards in Mathematics (2010) all lay stress on the role of equivalence. The Principles and Standards calls for students to “write equivalent forms of equations, inequalities, and systems of equations and solve them with fluency”; to “understand the meaning of equivalent forms of expressions, equations, inequalities and relations”; to “become fluent in performing [algebraic] manipulations by appropriate means—mentally, by hand, or by machine—to solve equations and inequalities, to generate equivalent forms of expressions or functions, or to prove general results” (pp. 312–313). Likewise, the Common Core State Standards for High School Mathematics state that students should learn to “Write expressions in equivalent forms to solve problems” (Standard A-SSE.3) and “Write a function defined by an expression in different but equivalent forms to reveal and explain different properties of the function” (Standard F-IF.8). However, despite the apparently universal agreement that “equivalence” is a central idea in high school mathematics, precisely what this word actually means is often left tacit and vague. In that sense, the words “equivalent” and “equivalence” are analogous to the phrase “real number”, considered at the beginning of Chapter 1: though fundamental, it typically passes without a definition, as if the meaning were unproblematic. This chapter is devoted, in large part, to problematizing the notion of equivalence; that is, we seek to reveal that it is more complicated than it may appear at first glance, and therefore warrants careful
investigation. We will then define not one but three distinct types of equivalence, and investigate how these types can be used to analyze the process of solving equations. Consider, for example, the following two equations:

x⁴ + 11x² + 18 = 6x³ + 12x

729e^(2x−6) = x⁹/27
Both equations are quite difficult (if not quite impossible) to solve by normal algebraic methods; however, one may verify by direct substitution that x = 3 is a solution to each one. Moreover, it can also be verified (though it is somewhat harder) that this is the only real solution to each equation, so the equations have exactly the same solution set. Are they "equivalent"? The answer, of course, is that it depends on how we choose to define "equivalence". Like policy documents, textbooks rarely provide an explicit definition; rather, they introduce the word through usage. For example, one widely used Algebra 1 textbook introduces the term "equivalent equation" with the sentence

If the same number is added to each side of an equation, then the result is an equivalent equation. Equivalent equations have the same solution. (emphasis in original)

Note in particular what this does not say. It does not say that adding the same number to both sides of an equation is the only way to produce an equivalent equation (and of course just a few pages later the student learns that if one multiplies both sides of an equation by the same number, the result is also an equivalent equation¹). This sentence does tell us that equivalent equations have the same solution, but it does not say whether all equations with the same solution are equivalent. Written in the notation of logical implication, we might say that

equations produced by specified algebraic "moves" ⇒ equivalent equations ⇒ equations with the same solutions

This statement is perhaps uncontroversial. But it raises the question of the converse of these implications. Are all three categories of equation one and the same? Can the logical arrows above be reversed to flow in the other direction? In our example, the two equations x⁴ + 11x² + 18 = 6x³ + 12x and 729e^(2x−6) = x⁹/27 have the same solution set, but there is no obvious way to algebraically "transform" one equation into the other by a series of steps in which one "does the same thing to both sides". This suggests that, just as distinguishing between polynomials and polynomial functions in Chapter 2 allowed us to explore the relationships between the two concepts, here we may gain some insight by defining two different kinds of equivalence.
•
We will say that two equations are strongly equivalent if it is possible to transform each equation into the other via a series of (specified) moves in which one does the same thing to both sides. The set of moves is determined by the context; for example, “adding 7 to both sides” may be a legitimate move in Algebra 1, but “taking the logarithm”
would not be. So strong equivalence is a relative notion, and in some sense depends on non-mathematical considerations such as local culture and custom.
• We will say that two equations are weakly equivalent if they have the same set of solutions. Notice that this, too, depends on some context, this time about what kind of solutions are permitted; for example, it may be argued that x⁴ + 11x² + 18 = 6x³ + 12x and 729e^(2x−6) = x⁹/27 don't actually have the same solution sets, because in addition to the shared real solution x = 3, each also has distinct complex roots. If the context in which we operate includes only real numbers, then, we would say that the equations are weakly equivalent, but if the context includes complex numbers as well then we would say that they are not.
The reason for calling these two equivalences "strong" and "weak" is that, as stated earlier, the strong form implies the weak form: if you know that it's possible to transform two equations into each other via a set of algebraic rules, then you can reliably conclude they have the same solutions. However, the weak form does not imply the strong form; just knowing that two equations have the same solution does not guarantee that they can be transformed algebraically into one another. Thus the "strong" relationship tells you more than the "weak" one does. But even this distinction does not fully capture the subtlety of what goes on when students solve equations. Consider the equation

3(x − 15) + 4x = 2x − 10

Typically a student (for example, at the level of high school Algebra 1) would begin by using the distributive property by rewriting the equation as

3x − 45 + 4x = 2x − 10

and then combining like terms to get

7x − 45 = 2x − 10

In the next step students would probably either add 45 to both sides, or subtract 2x from both sides. Notice that either of these would be the first time an algebraic move was performed in which the same operation was done to both sides of an equation. All of the work that has preceded it was operating solely on one side of the equation. How shall we describe this? The central feature in this example is that, just as we distinguished between polynomials and polynomial functions in Chapter 2, we now find it necessary to distinguish between expressions and polynomials. Certainly 3(x − 15) + 4x and 7x − 45 are the same polynomial; this is true regardless of what coefficient ring we work in (as long as the symbols for 3, 4, 7, 15 and 45 refer to some elements in the ring). But they are entirely different expressions, and a great deal of Algebra instruction is devoted to teaching students how to turn one expression into another via legitimate means. So just as equations can be equivalent (in two distinct ways), we also must define and clarify a notion of equivalent expression. In the next section, we set out to formalize these ideas and prove some basic theorems about the various kinds of equivalence that exist among equations and relations.
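As a quick sanity check on the claims made about the two equations at the start of this section, one can at least confirm numerically that x = 3 satisfies both of them. The short Python sketch below does exactly that (it does not, of course, establish that 3 is the only real solution; that claim requires a separate argument).

```python
import math

def lhs1(x): return x**4 + 11 * x**2 + 18
def rhs1(x): return 6 * x**3 + 12 * x

def lhs2(x): return 729 * math.exp(2 * x - 6)
def rhs2(x): return x**9 / 27

print(lhs1(3), rhs1(3))    # both sides equal 198
print(lhs2(3), rhs2(3))    # both sides equal 729.0 (up to floating-point arithmetic)
```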
Exercises 1. Give another example of two equations that are weakly equivalent but not strongly equivalent. 2. Create a complicated, multi-step 1st-degree equation that would be difficult but not impossible for a typical Algebra 1 student. Solve the equation, taking note of the different algebraic moves you perform as you go. Identify which kinds of equivalence are involved in the transition from each step to the next. Note that this is not the same thing as “Justify with an algebraic property”; the question here is not Why can you do this? but the more basic question What do you do? Use your example to generate a provisional set of moves that would be adequate to solve all such problems. Are you confident that your set is complete? How could you be sure? 3. Repeat Problem 2, but for a quadratic equation to be solved by a hypothetical Algebra 1 student. 4. Repeat Problem 2, but for a higher-degree polynomial equation to be solved by a hypothetical Algebra 2 student. 5. Repeat Problem 2, but for a trigonometric equation to be solved by a Precalculus student.
3.2 Strings and Algebraic Strings

As we have just discussed, we need a way to distinguish between a polynomial and the form in which it is written. All of the following expressions denote the same polynomial:

x² − 4(x − 2) + 8x − 1
8x − 1 + x² − 4(x − 2)
(x + 2)² + 3
x² + 4x + 7
(1/3)(x + 1)³ + (3/2)(x + 1)² + (31/6)(x + 1) − (1/3)x³ − (3/2)x² − (31/6)x

and yet they are manifestly not written in the same form, and the work involved in recognizing that they are supposed to be "the same" is not insignificant. Indeed this is the whole point of teaching students to simplify expressions, factor polynomials, write a quadratic in both vertex and standard form, etc.; sometimes the form in which a polynomial is written tells us something that would not be immediately clear were the same polynomial written in a different form. So we need a theory of forms, or of expressions. We start with the basic idea of a string of characters.

Definition. Let 𝒜 be any set; we will call the elements of 𝒜 characters, and say that 𝒜 is an alphabet of characters. An 𝒜-string (often abbreviated as simply a "string" when the
alphabet is clear from context) is a finite sequence of characters from 𝒜. We use the notation a₁a₂a₃a₄⋯aₙ to represent an 𝒜-string, where each aₖ is a character from 𝒜. For example, let 𝒜 be the 26 lowercase letters of the English alphabet. Then an 𝒜-string is simply a string of letters, like elephant, pineapple, or eeehainjytgf. As the last example shows, a string does not have to be meaningful; the definition only speaks of a string's form, not its content. We also allow for the special case of an empty string, i.e. a string with no characters in it. Such a string may be denoted by simply writing nothing at all, but this could also be interpreted to mean a string with a single "space" character in it, so instead we use the special symbol ∅ to designate the empty string. We also can speak naturally of substrings, which are strings of characters that appear contiguously inside a given string: for instance eap is a substring of pineapple, but npp is not. Any string is a substring of itself; likewise ∅ is a substring of every string. All other substrings are called proper substrings. Of course, if we want to write more than a single word—if we want to write a sentence or a phrase, for example—we would need a larger alphabet, including not only lowercase letters but also capitals, punctuation, and a "space" character. This gives us a different alphabet², ℬ = 𝒜 ∪ { A B C … Z . , ! ?  }
(where the gap at the end indicates a "space".) With an alphabet like this, we could write strings like Granny Smith apples, Why? Because I said so!, and ftb,,, . ep !?. If we want to emphasize the alphabet from which a string is built, we call it an 𝒜-string, or a ℬ-string, etc. We could also enlarge our alphabet still further by including the ten digits {0, 1, 2, 3, 4, 5, 6, 7, 8, 9}. With these available, most computer passwords could be written. Of course we are not interested in writing words, sentences, or passwords. Our plan is to be able to write expressions like (6x + 8)(½x − 5) and 3x² − 26x − 40. In order to accommodate such strings, we would need an alphabet that includes not only individual digits, but also symbols for addition and subtraction, both left and right parentheses, one or more letter that can be used for variables, and symbols for fractions. The most straightforward way to accomplish this is to choose for our alphabet

ℚ ∪ { + − ( ) x x² x³ … }

This is, of course, an infinite alphabet, containing not only the single digits 0–9 but also multiple-digit integers like 247, fractions like 5/8, and variables raised to a positive integer power, each of which is (per our definition) a single character. That means in particular that a string like 123 is liable to be interpreted in several different ways: as a 1-character string containing the single character 123; as a 3-character string made up of the digits 1, 2 and 3; as a 2-character string beginning with the single character 12 and concluding with the single character 3; or as a 2-character string formed from the character 1 followed by the single character 23. This situation is not really acceptable for us, because we want to be able to talk about the length of a string in an unambiguous way. We can resolve this problem by introducing one additional notational device: when a single number is normally written using two or more digits, but we want to consider it as a single character, we will use square brackets, i.e. [ ], to separate it from its neighbors. Note that the square brackets are not actually characters in our alphabet, but rather a notational device
to aid in reading. With this modification to our system, we can write 3x² − [26]x − [40], a string that is unambiguously 7 characters long. We use absolute value bars around a string to denote its length: thus we can say

|3x² − [26]x − [40]| = 7

What about if we want to discuss polynomials with coefficients in some other ring, as we did in Chapter 2? In such a case all we need to do is to replace the set ℚ in our alphabet with a different ring R. In general, we choose a ring R and define an alphabet

ℛ = R ∪ { + − ( ) x x² x³ … }

Notice that we use the same letter (R) in a different typeface (ℛ) to emphasize that the alphabet and the ring of coefficients are two different mathematical objects, with one built from the other. However, when the ring is a familiar one like ℤ or ℚ, we will typically use the same symbol to denote both the ring and its associated alphabet, in order to avoid the need for additional symbols. The set of all possible ℛ-strings is denoted String(ℛ), and is large enough to accommodate expressions representing all polynomials with coefficients in R. If we wanted to be able to describe rational functions, square roots and other mathematical operators, we would need a larger alphabet, but for now this is adequate.

For any alphabet 𝒜, whether or not it is based on a ring, there is a natural binary composition law on String(𝒜), called concatenation. The concatenation of two strings is a new string, consisting of all of the characters from the first string followed immediately by all of the characters from the second string. To indicate the concatenation of two strings, we simply write one of them adjacent to the other. Thus if we return to the alphabet consisting solely of lowercase English letters, we may write a concatenation such as

banana aardvark = bananaaardvark

Concatenation is an associative operation for which there is an identity element, but String(𝒜) is not a group (Exercise 6). Sometimes rather than referring to a string by exhibiting its full content, it is more convenient to give the string a name. To indicate a named string, we use a dollar sign³. For example, we might decide to use the name $a to stand for banana and the name $b to stand for aardvark. Then the concatenation above could be written

$a$b = bananaaardvark

It's very important not to confuse the name of a string with its content. A name of a string may be thought of as a "variable for strings", in the sense that we can say things like "Let $s and $t be any two strings. Then |$s$t| = ___." (Exercise 7 asks you to complete this sentence.) Neither the dollar sign, nor the letters s and t in this example, are part of the string; the dollar sign is not even a character in our alphabet! We return now to the case of an alphabet based on a ring R. In addition to expressions for polynomial and rational functions, String(ℛ) also includes malformed gibberish like
⋅x +⋅ − + ) − 2( x², an 11-character string. We need to have some way to restrict our attention to strings that have meaningful interpretations. For this, we need a new definition.

Definition. Let R be any ring and let ℛ be the associated alphabet. The set of algebraic ℛ-strings, denoted AlgString(ℛ), is the smallest subset of String(ℛ) that satisfies all of the following properties:
• For any r ∈ R, and for any positive integer n, the one-character strings r and xⁿ, and the two-character strings rxⁿ, belong to AlgString(ℛ).
• If $a belongs to AlgString(ℛ), then ($a) and −($a) belong to AlgString(ℛ) as well, and vice versa.
• If $a and $b are nonempty algebraic ℛ-strings, then the concatenation ($a) + ($b) is an algebraic ℛ-string, as is ($a) − ($b).
• If $a and $b are nonempty algebraic ℛ-strings, then the concatenation ($a)($b) is an algebraic ℛ-string⁴.
Note that we do not introduce a symbol for "division" into our alphabet. There are two important reasons for not doing so at this time: (a) In general, division in a ring is an undefined operation; and (b) Even if we work over a field, in which division is defined, we must take extra care to avoid dividing by a string that is "equivalent to 0". Since we have not yet defined what it means for strings to be equivalent, we will defer consideration of division until the end of the next section.

When it is clear which alphabet we are using, we will often abbreviate the phrase "algebraic ℛ-strings" as simply "algebraic strings". However, if more than one alphabet is in use it is important to distinguish different sets of algebraic strings. For example, (2/3)x − (5) is not an algebraic ℤ-string, but it is an algebraic ℚ-string; on the other hand, (π) + (4) is an algebraic ℝ-string, but neither a ℤ-string nor a ℚ-string. AlgString(ℛ) contains all of the strings we need to write algebraic expressions, and nothing else. In particular note that AlgString(ℛ) is not, itself, closed under concatenation: the concatenation of two algebraic strings is not necessarily algebraic! For example, with R = ℤ we have that both 2x² and x³ are algebraic strings, as is (2x²) + (x³), but the simple concatenation 2x²x³ is not an algebraic string. However, the string (2x²)(x³) is, and for practical purposes this is good enough. The use of superfluous parentheses for representing addition, subtraction and multiplication is inconvenient and can make algebraic strings difficult to read, but it is a necessary evil; the alternative is that we might end up concatenating two strings 2 + x and 2 − x and get as a result 2 + x2 − x, when what we really want (since we are trying to represent algebraic rules using formal strings) is to get the longer string ((2) + (x))((2) − (x)). More important, in the next section we will begin replacing substrings of algebraic strings with algebraically equivalent strings, and in order for that process to work smoothly we need to have some redundant safeguards in our expressions to make sure our rules do not allow transforming, say, 2x + y into 2y + x by an overzealous use of the rule that says that the order of strings representing sums can be interchanged. Often, however, we will ignore layers of parentheses for increased readability.
At this point we have marshaled all of the definitions and notation we need in order to begin discussing what it means for two algebraic strings to be algebraically equivalent, which we turn to in the next section.
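It may help to see the recursive construction of algebraic strings carried out concretely. In the Python sketch below, ordinary Python strings stand in for ℛ-strings; the helper names (atom, add, sub, mul, neg) are our own ad hoc choices, and we write x^2, x^3, … as stand-ins for the single characters x², x³, … of the alphabet.

```python
def atom(r=None, n=None):
    """The one- and two-character strings of the alphabet: r, x^n, or r x^n."""
    r_part = "" if r is None else (str(r) if len(str(r)) == 1 else "[" + str(r) + "]")
    x_part = "" if n is None else ("x" if n == 1 else "x^" + str(n))
    return r_part + x_part

def add(a, b): return "(" + a + ")+(" + b + ")"   # ($a)+($b)
def sub(a, b): return "(" + a + ")-(" + b + ")"   # ($a)-($b)
def mul(a, b): return "(" + a + ")(" + b + ")"    # ($a)($b)
def neg(a):    return "-(" + a + ")"              # -($a)

# The expression (x + 3)(x - 5), written with all of its required parentheses:
s = mul(add(atom(n=1), atom(r=3)), sub(atom(n=1), atom(r=5)))
print(s)   # ((x)+(3))((x)-(5))
```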
Exercises
6. Show that concatenation is an associative operation with an identity element, but that String(𝒜) is not a group under concatenation.
7. How can you compute |$s$t| if you know the lengths of $s and $t?
8. How many substrings does an n-character string have?
9. Are $s and $t always substrings of $s$t? Support your answer with either a proof or a counterexample.
10. Are 2^x and x^(1/2) algebraic strings? If not, how could you extend the alphabet so that they would be?
3.3 Algebraic Equivalence

Our goal is to define an equivalence relation on ℛ-strings, which we will denote ∼_ℛ, that captures in formal terms the idea of two strings standing for the same algebraic "thing". Essentially, we need a way to formalize what happens in writing when we manipulate an algebraic expression. For example, what happens when we rewrite the string (3)((x) − (5)) as (3x) − ([15]), or when we rewrite (5) + (x³) as (x³) + (5)?

Definition. Two algebraic ℛ-strings $a and $b are said to be algebraically equivalent, denoted $a ∼_ℛ $b, if one can be turned into another by a finite sequence of moves, where the set of moves contains all of the following:
• Turn a substring of the form ($r) + ($s) into ($s) + ($r).
• Turn a substring of the form ($r) − ($s) into −($s) + ($r).
• Turn a substring of the form ($r) + (−($s)) into ($r) − ($s).
• Turn a substring of the form (($r) + ($s)) + ($t) into ($r) + (($s) + ($t)) and vice versa.
• Turn a substring of the form (xⁿ)(xᵐ) into (x^[n + m]) and vice versa.
• Turn a substring of the form (r)(xⁿ), where r ∈ R, into (rxⁿ), and vice versa.
• Turn a substring of the form (r) + (s), where r, s ∈ R, into ([r + s]), where [r + s] denotes a single character, and vice versa.
• Turn a substring of the form (r) − (s), where r, s ∈ R, into ([r − s]), where [r − s] denotes a single character, and vice versa.
• Turn a substring of the form (r)(s), where r, s ∈ R, into ([rs]) and vice versa.
• Plus further moves formalizing the commutative and associative properties of multiplication, the distributive property of multiplication over sums and differences, the distributive property of exponents over products, the identity properties of addition and multiplication…
• … Plus anything else that we might need. (Exercises 12–16)
As the last two bullet points suggest, actually listing out every single move that is performed in the course of turning one algebraic expression into another is absolutely exhausting, and past a certain point not particularly productive. What is important here is that it’s possible
to, at least in principle. We can describe the transformation of one algebraic expression into another one syntactically, rather than semantically; that is, we base our manipulations solely on how the expressions are written, rather than on what they mean. The fact that algebraic meaning can be formalized in symbols to such an extent that we don't even need to know what the symbols mean is precisely what makes it possible to write computer systems that perform algebra⁵. When the alphabet is clear from context, or when it doesn't matter, we will typically write $a ∼ $b, omitting the small ℛ that is normally attached to the ∼ symbol; if more than one alphabet is in use, we can distinguish between $a ∼_ℛ $b and $a ∼_ℛ′ $b.

Our set of moves is large enough at this point to show, for example, that ((x) + (3))((x) − (5)) ∼ (x²) − (2x) − ([15]) (Exercise 17). More generally any string representing a combination of products and sums of multiple polynomials can be transformed, step by step, into the "standard form" of a single polynomial. This means in particular that normal classroom practices like "simplifying" and "factoring" turn one string into an equivalent, but different, string. Just as we can always enlarge our alphabet if we want to move to a slightly different mathematical context, we can also always enlarge our set of legal moves if necessary. For example, in the context of Algebra 2 one would need symbols for square roots and logarithms, and moves corresponding to laws like √a · √b = √(ab) and ln(ab) = ln(a) + ln(b). In the context of Calculus, we would need to let d/dx be a single character, and introduce rules that would allow us to turn (d/dx)xⁿ into nxⁿ⁻¹, and so on.

From this point forward, we will not try to create a comprehensive and exhaustive list of all possibly useful moves; instead we will satisfy ourselves with the conviction that we could do so if we really wanted to. That is, from this point on we will assume that our alphabet has all of the characters we need to represent whatever mathematical objects we need, and that the set of moves contains all of the essential properties and identities that one uses in a step-by-step rewriting of an algebraic expression of whatever sort is under consideration into an algebraically equivalent form. If this is so, then there is a natural map Σ : AlgString(ℛ) → R[x], where R[x] is as usual the set of polynomials with coefficients in R. This map is called the string interpretation map⁶, and it plays a role analogous to the functional interpretation map Φ : R[x] → PolyFunc(R) that was so central to our analysis in Chapter 2. Moreover, if $a ∼ $b, then Σ($a) = Σ($b). Finally, the map Σ : AlgString(ℛ) → R[x] is surjective (Exercise 18), so we may identify R[x] with the set of equivalence classes in AlgString(ℛ). (Exercise 19.)
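A very rough computational model of the map Σ can be built on top of an existing computer algebra system. The Python sketch below uses SymPy's parser as a stand-in for our formal alphabet; note that SymPy accepts far more than AlgString(ℛ) does (and requires an explicit * for multiplication), so this is only an illustration of the idea, not a faithful implementation of the definition.

```python
import sympy

x = sympy.symbols("x")

def sigma(s):
    """Interpret a string as a polynomial in Q[x], returned as a list of coefficients."""
    return sympy.Poly(sympy.sympify(s), x).all_coeffs()

a = "(3)*((x)-(5))"
b = "(3*x)-(15)"
print(sigma(a), sigma(b), sigma(a) == sigma(b))   # same polynomial, so Sigma agrees on both strings
```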
Exercises 11. Show that ∼ is an equivalence relation. (If you have forgotten what this means, you may want to look it up in another source.) 12. Write explicitly the moves that formalize the commutative and associative properties of multiplication. (Be careful with your use of parentheses.) 13. Write explicitly the moves that formalize the distributive property of multiplication over sums and differences.
14. Write explicitly the moves that formalize the identity properties of addition and multiplication.
15. Write explicitly the moves that formalize the distributive property of exponents over products.
16. What else might we need?
17. Show explicitly every move that is performed in proving that ((x) + (3))((x) − (5)) ∼ (x²) − (2x) − ([15]). (This may require you to have first answered one or more of the previous exercises.)
18. In Chapter 2 we defined a polynomial as a finite sequence p = (a₀, a₁, a₂, …) where each aᵢ ∈ R. Describe explicitly how to construct an algebraic string $s such that Σ($s) = p.
19. Explain explicitly what is meant by "we may identify R[x] with the set of equivalence classes in AlgString(ℛ)".
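In connection with Exercise 18, the following Python sketch shows one possible way to turn a coefficient sequence into a fully parenthesized algebraic string, using the rule ($a) + ($b) from the previous section. The formatting choices (order of terms, square brackets for multi-digit coefficients, x^k as a stand-in for the character xᵏ) are our own, not the text's.

```python
def to_alg_string(coeffs):
    """Build a parenthesized algebraic string from a coefficient sequence a0, a1, a2, ..."""
    def char(a):                      # a ring element written as a single character
        return str(a) if len(str(a)) == 1 else "[" + str(a) + "]"
    terms = []
    for k, a in enumerate(coeffs):
        if a == 0:
            continue
        terms.append(char(a) if k == 0 else char(a) + ("x" if k == 1 else "x^" + str(k)))
    if not terms:
        return "0"
    s = terms[0]
    for t in terms[1:]:
        s = "(" + s + ")+(" + t + ")"     # the rule ($a)+($b)
    return s

print(to_alg_string([7, 0, 2, 1]))        # ((7)+(2x^2))+(1x^3)
```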
3.4 Equations, Strong and Weak Equivalence, and Solutions

We are finally ready to define what an equation is. Compared to the technical details of the last two sections, this is actually quite easy:

Definition. A formal equation is an ordered pair ($a, $b) of two algebraic strings written using the same alphabet.

Thus for example⁷ with $a = 2x + 3 and $b = 5x − [15], we can write the formal equation (2x + 3, 5x − [15]). We use ordered pairs to capture the notion that an equation has a "left side" and a "right side"; that is to say, when we see something like (2x + 3, 5x − [15]) we think of it as a formalized way of writing the equation
2x + 3 = 5x − 15 We don’t want to actually use the equals sign to separate the two sides of a formal equation, because that symbol has multiple uses and it may not always be clear which one is intended (see the Recommended Reading at the end of Chapter 2). Of course if you give the equation 2 x + 3 = 5x − 15 to a high school class, you would expect them to do something like transform it into 2 x + 18 =, 5x and then into 18 = 3x, and finally into x = 6. In our terms, the students would perform a series of algebraic moves to turn the formal equation ( 2 x + 3 , 5x − [15] ) into ( 2 x + [18] , 5x ), then ( [18] , 3x ), and ( x , 6 ). Notice that the set of moves that one uses to transform an equation into an equivalent equation is not the same set of moves as the ones used to transform an expression into an equivalent expression, although the latter are in a sense a subset of the former. More specifically, we have the following: Definition. Two equations ($s, $t ) and ($u, $v ) are said to be strongly equivalent (denoted ($s, $t ) ≡ ($u, $v )) if each one can be turned into the other by a sequence of moves, each of which is of the following types: •
Type I moves (“Rewriting one side”): Replacing ($s, $t ) with ($s' , $t ), where $s ∼ $s'; or replacing ($s, $t ) with ($s, $t' ), where $t ∼ $t'.
• Type II moves ("Adding or subtracting the same thing on both sides"): Replacing ($s, $t) with ($s + $r, $t + $r) for an arbitrary algebraic string $r; or replacing ($s, $t) with ($s − ($r), $t − ($r)) for an arbitrary algebraic string $r.
• Type III moves ("Multiplying by the same thing on both sides"): Replacing ($s, $t) with (($s)($r), ($t)($r)) for an arbitrary algebraic string $r, provided that $r is not algebraically equivalent to 0.
This list could be augmented with additional moves. For example if the underlying ring R is a field, and our alphabet has been augmented to include a symbol for division, we could allow a move that replaces ($s, $t ) with ( ( $s ) / ( $r ) , ( $t ) / ( $r ) ), provided that $r is not algebraically equivalent to 0. (“dividing by the same thing on both sides”). In addition, in higher-level courses, one begins to add moves corresponding to “take the logarithm of both sides”, “exponentiate both sides”, etc. Before one can do that, of course, one needs an alphabet rich enough to include ways of writing exponential and logarithmic expressions, and an extended notion of algebraic equivalence that accounts for the typical manipulations one performs in working with those expressions, e.g. changing log ( ab k ) into log ( a ) + k log( b ). Then one can extend the set of equation moves accordingly. We will not pursue this in more detail here. As with the definitions in the previous section, it turns out to be not very practical to specify every single move one might perform in the course of solving every conceivable sort of equation. Rather, our goal here is to illustrate the general point, which is that the operations involved in solving equations can be represented by operations that act purely on text strings without regard for their meaning or interpretation. Of course it might be objected that we don’t want our students to operate on equations without thinking of their meaning, but that does not change the fact that it can be done, and thinking of the process in terms of moves like this allows us to consider the algorithm as an object of mathematical analysis, rather than as the focus of student psychology. A few observations need to be made at this point. Note 1. You may have expected to see a move corresponding to “interchange the left-and right-hand sides of the equation”, i.e. replace ($s, $t ) with ($t, $s ). It turns out that this is not necessary, as this move can be accomplished as a combination of Type I and Type II moves (Exercise 20). Note 2. We have defined strong equivalence as a symmetric relation, because in order for two equations to be strongly equivalent each equation needs to be transformable into the other. Often one wants to consider a slightly more relaxed notion, in which ($s, $t ) can be transformed into ($u, $v ) but the process is not required to be reversible. This happens, for example, when squaring both sides of an equation, or more generally whenever the operation performed on both sides of an equation is not one-to-one. In such cases, we will say that ($s, $t ) is an ancestor of ($u, $v ), and ($u, $v ) is a descendant of ($s, $t ), but they are not strongly equivalent unless there is a sequence of moves that runs in the other direction as well. Strong equivalence is, as was discussed in §3.1, only one notion of equivalence that is important when considering equations. In order to discuss the second type, we first need to consider what it means for a member of the ring R to be a solution to an equation ($s, $t ). The basic idea is really an extension and combination of a number of tools we already have in place. First, we take the strings $s and $t, and use the string interpretation map Σ : AlgString ( ) → R [ x ] on each of them to obtain two polynomials Σ ( $s ) and Σ ( $t ) . Each of these polynomials can then be interpreted as a function in a natural way via the functional interpretation map Φ, to get two members of Func( R ), namely Φ ( Σ ( $s ) ) and
Φ(Σ($t)). As these are functions on R, we can take a value r ∈ R, evaluate both functions there, and see if the results are the same. In other words, we have the following:

Definitions. An element r ∈ R is called a solution to the equation ($s, $t) if Φ(Σ($s))(r) = Φ(Σ($t))(r). The set of all solutions in R to a given equation is called the solution set of the equation, and is denoted SolR($s, $t). For convenience, we will often use the notation SolR(f, g), where f, g ∈ R[x]. This is really just a shorthand for SolR($s, $t) where Σ($s) = f and Σ($t) = g. A special case of the above is when one side of the equation is equal to 0. We define the zero set⁸ of a string $s to be ZR($s) = SolR($s, 0). Similarly for a polynomial f ∈ R[x] we define the zero set of f to be ZR(f) = SolR(f, 0).

We anticipate that if two equations are strongly equivalent, then they should have the same solution set. But is the reverse true? The example at the beginning of this chapter suggests it is not true in general, but that example involved exponential functions, which are not part of our alphabet. Perhaps if we restrict our attention to polynomial functions, it might be true that two equations with the same solution set would always be strongly equivalent? However, consider the example of the two equations
(x
4
− 3x3 , 9x 2 + 3x + 10
)
(x
4
− 3x3 , 8x 2 + 6 x + 20
)
and
Let us consider the solution sets of these two equations in the set of real numbers, the context of normal high school algebra. It can be shown (Exercise 21) that the solution sets of these two equations are identical; despite that, there is no series of Type I, II or III moves that can possibly transform either equation into the other (Exercise 23). So while we anticipate that strongly equivalent equations will have the same solution sets, we should not expect all equations with the same solution sets to be strongly equivalent. This justifies our final important definition of this section: Definition. Two equations ($s, $t ) and ($u, $v ) are said to be weakly equivalent over R if SolR ($s, $t ) = SolR ($u, $v ). The ring here is quite important: In our example above, ( x 4 − 3x3 , 9x 2 + 3x + 10 ) and ( x 4 − 3x3 , 8x2 + 6x + 20 ) are weakly equivalent over , , and , but not over or over 12 (Exercise 22). We can now prove some useful lemmas that justify the most common student moves in the practice of solving equations. Lemma 1. Type I moves preserve solution sets. That is, if an equation ($s, $t ) is transformed by a Type I move (or a sequence of Type I moves) into a new equation ($s' , $t' ), then ($s, $t ) and ($s' , $t' ) are weakly equivalent.
Proof. Suppose r ∈ R is a solution to ($s, $t ). By definition, this means that Φ ( Σ ( $s ) ) ( r ) = Φ ( Σ ( $t ) ) ( r )
129
Solving Equations 129 In addition, since all Type I moves are reversible, we have $s ∼ $s' and $t ∼ $t', which in turn implies that as elements of R[ x ], Σ ($s ) = Σ ($s' ) ,
Σ ($t ) = Σ ($t' )
From this it follows that Φ ( Σ ($s )) ( r ) = Φ ( Σ ($t )) ( r ), and therefore r is also a solution to ($s' , $t' ). This proves that SolR ($s, $t ) ⊆ SolR ($s' , $t' ). The reverse inclusion follows from performing the Type I moves in reverse.
Lemma 2. Type II moves preserve solution sets. That is, if an equation ($s, $t ) is transformed by a Type II move (or a sequence of Type II moves) into a new equation ($s' , $t' ), then ($s, $t ) and ($s' , $t' ) are weakly equivalent.
Proof. A Type II move replaces $s with $s' = $s + $u and $t with $t' = $t + $u for some algebraic string $u. Suppose now that r ∈ SolR ($s,$t ). Then by definition we have Φ ( Σ ($s )) ( r ) = Φ ( Σ ($t )) ( r ). Moreover, by the definition of Φ and Σ, we have
(
)
(
Φ Σ ($s + $u ) = Φ ( Σ ($s )) + Φ ( Σ ($u )) = Φ ( Σ ($t )) + Φ ( Σ ($u )) = Φ Σ ($t + $u )
)
which shows that SolR ($s, $t ) ⊆ SolR ($s' , $t' ). The reverse inclusion is Exercise 24. (Note: there is more to the reverse inclusion than to the forward inclusion, because removing part of a string is trickier than concatenating on to one.) As an immediate corollary, we have the following: Corollary (“Move everything to one side”). For any two algebraic strings $s and $t, − $t, 0 ) and SolR ($s, $t ) = ZR ($s − $t ).
($s, $t ) ≡ ($s
Proof. That ($s, $t ) is an ancestor of ($s − $t, 0 ) follows from applying a Type II move (concatenating both sides of the equation on the right with − $t), followed by a Type I move (replacing the right-hand side $t − $t with the algebraically equivalent string 0 . Likewise, that ($s − $t, 0 ) is an ancestor of ($s, $t ) follows from applying a different Type II move (concatenating both sides of the equation on the right with + $t), followed by two Type I moves (replacing $s − $t + $t on the left-hand side of the equation with the algebraically equivalent string $s, and replacing 0 + $t on the right-hand side of the equation with the algebraically equivalent $t). This shows the equations are strongly equivalent; by the two Lemmas just proved, we also know they have the same solution sets. Our third Lemma is perhaps the most interesting, as it deals with Type III moves, which do not, in general, preserve solution sets, but may enlarge them. Lemma 3. If an equation ($s, $t ) is transformed by a Type III move into a new equation ($s' , $t' ), then SolR ($s, $t ) ⊆ SolR ($s' , $t' ).
130
130 Solving Equations Proof. A Type III move replaces $s with ( $s ) ( $u ) and $t with ( $t )( $u ) for some algebraic string $u, where $u is not algebraically equivalent to 0 . Now let r ∈ R be a solution to ($s,$t ). As usual this means that Φ ( Σ ($s )) ( r ) = Φ ( Σ ($t )) ( r ). Note that this is an equation asserting the equality not of two strings or polynomials, but of two elements of R. We may then use multiplication in R itself to obtain Φ ( Σ ($s )) ( r ) Φ ( Σ ($u )) ( r ) = Φ ( Σ ($t )) ( r ) Φ ( Σ ($u )) ( r ) By the usual methods we conclude that
(
Φ Σ ( ( $s )( $u )
)) = Φ ( Σ ( (
$t )( $u )
))
which completes the proof that r is a solution to ($s' ,$t' ), and therefore SolR ($s,$t ) ⊆ SolR ($s' ,$t' ). It is worth considering why multiplication does not preserve solution sets, but only in general enlarges them. Consider the equation 2 x − 3 = x + 4. This equation has a single solution, namely x = 7. If we multiplied both sides of this equation by x − 4, we would obtain the quadratic equation ( 2 x − 3 ) ( x − 4= ) ( x + 4 )( x − 4 ), which has two solutions. A different, but related, issue arises if we work over a ring that is not an integral domain: over 12, the equation 2 x = 3x has only the single solution x = 0, but if we multiply both sides by 4, we obtain 8x = 0 , which has solution set {0, 3, 6, 9}. In general additional solutions that appear in the course of solving an equation but are not solutions to the original problem are called “extraneous solutions”. Type III moves do not always introduce extraneous solutions, of course; sometimes they are reversible, in which case solution sets are preserved, as the following Corollary states: Corollary to Lemma 3. If an equation ($s,$t ) is transformed by a Type III move into a new equation ($s' ,$t' ), and if that transformation is reversible by a combination of further Type I, II and III moves, then the two equations have the same solution sets.
Proof. Exercise 26. Combining Lemmas 1, 2 and 3, we have (finally) the theorem we have been working up to for this whole chapter: Theorem. If two equations are strongly equivalent, then they are weakly equivalent.
Proof. If two equations are strongly equivalent, then each one can be transformed into the other by a sequence of Type I, II, and III moves. By the three Lemmas (and the Corollary), each of these moves preserves solution sets. Because Type III moves can introduce extraneous solutions, it is worth considering certain circumstances in which they are solution set preserving. We have the following important results:
131
Solving Equations 131 Lemma 4. Let R be an integral domain, and let u ∈ R be any nonzero element, with corresponding single-character -string u . Then the Type III move replacing ($s,$t ) with ( ( $s )( u ) , ( $t )( u ) ) preserves solution sets. Before the proof, a few comments about this Proposition. By Lemma 3, we already know that SolR ($s,$t ) ⊆ SolR ( ( $s )( u ) , ( $t )( u ) ). It remains to show the reverse inclusion. Notice that we cannot, in general, transform the second equation into the first by a series of moves, because we have carefully avoided defining an equation move corresponding to “division by an element of R”. Indeed, if we did allow such a move in general, it would be disastrous, as in general multiplication in an arbitrary ring R is not a reversible operation. For example, if we work over the ring R = 12, then the equation ((3x )( 2 ) , (6 x )( 2 )) has solution set {0, 2, 4, 6, 8,10}, but the equation (3x, 6 x ) has only the smaller solution set {0, 4, 8} (Exercise 25). However, in any integral domain, we have the following: Proposition (Integral domain cancellation law). In an integral domain, if a, b are any two elements and c is a nonzero element such that ac = bc, then a = b.
Proof of Proposition. If ac = bc, then ac − bc = 0, so ( a − b ) c = 0. Since c is nonzero, and R is an integral domain, this implies a − b = 0, and therefore a = b. (Notice that “division by c”, which does not exist in all integral domains, is not used at all here!). Now we return to the proof of Lemma 4: Proof. Let r ∈ SolR ( ( $s )( u ) , ( $t )( u ) ). Then as usual we have Φ( Σ ( ( $s )( u )
) (r ) = Φ (Σ ( (
$t )( u )
)) ( r )
By the way we have constructed our interpretation maps, this is the same as saying Φ ( Σ ($s )) ( r ) ⋅ u ( r ) = Φ ( Σ ($t )) ( r ) ⋅ u ( r ) where u denotes the constant function9. This means that Φ ( Σ ($s )) ( r ) ⋅ u = Φ ( Σ ($t )) ( r ) ⋅ u , in which the “dot” indicates multiplication in R. Since R is an integral domain and u ( r ) = u is nonzero, the cancellation law allows us to conclude that Φ ( Σ ($s )) ( r ) = Φ ( Σ ($t )) ( r ), which finally leads us back to the conclusion that r ∈ SolR ($s,$t ). Similarly, we have the following related special case: Lemma 5. Suppose R is any ring, not necessarily an integral domain, and u ∈ R is an invertible element with corresponding single-character -string u. Then the Type III move replacing ($s,$t ) with ( ( $s )( u ) , ( $t )( u ) ) preserves solution sets.
Proof. Exercise 27.
132
132 Solving Equations Finally, we consider what happens when one multiplies both sides of an equation by an algebraic string that is not just a single character corresponding to an element of R: Lemma 6. (“Zero- product property”) Let $s and $t be two algebraic strings. Then ZR ( ( $s )( $t ) ) ⊇ ZR ($s ) ∪ ZR ($t ); if R is an integral domain then ZR ( ( $s )( $t ) ) = ZR ($s ) ∪ ZR ($t ).
Proof. Written in the simpler notation of polynomials, this Lemma claims that for any two functions f , g ∈ R [ x ], the zero set ZR ( fg ) contains both ZR ( f ) and ZR ( g ), and moreover that if R is a domain then ZR ( fg ) contains nothing else. We will prove it in this more natural form for ease of legibility, but you should convince yourself that it could all be formalized in the notation of strings. First, we need to show that ZR ( f ) and ZR ( g ) are both contained within ZR ( fg ). Suppose r ∈ ZR ( f ). Then Φ ( f )( r ) = 0, and therefore Φ ( fg )( r ) = ( Φ ( f ) ⋅ Φ ( g )) ( r ) = Φ ( f )( r ) ⋅ Φ ( g )( r ) = 0 ⋅ Φ ( g )( r ) = 0 so r ∈ ZR ( fg ). This shows that ZR ( f ) ⊆ ZR ( fg ). The proof that ZR ( g ) ⊆ ZR ( fg ) is essentially the same. Next, we need to show that if R is an integral domain, then every r ∈ ZR ( fg ) belongs to at least one of ZR ( f ) or ZR ( g ) . Suppose r ∈ ZR ( fg ). Then Φ ( fg )( r ) = 0. This means that 0 = ( Φ ( f ) ⋅ Φ ( g )) ( r ) = Φ ( f )( r ) ⋅ Φ ( g )( r ) But Φ ( f )( r ) and Φ ( g )( r ) are elements of the ring R; in the notation of Chapter 2 we could also call them fˆ ( r ) and gˆ ( r ). If their product is 0, and R is an integral domain, then either Φ ( f )( r ) = 0 or Φ ( g )( r ) = 0. In the former case, we conclude r ∈ ZR ( f ), and in the latter case we conclude r ∈ ZR ( g ) . Earlier in this chapter, we observed that if the underlying ring R is a field, we may wish to include a symbol for “division” into our alphabet. In that case, we would have an additional rule for producing new strings from old ones, namely: •
If the underlying ring R is a field, and $a and $b are nonempty algebraic -strings, and $b is not algebraically equivalent to 0 , then ( $a ) / ( $b ) is an algebraic -string.
With “division of strings” thus defined, we would next need to further augment our notion of “algebraic equivalence” to account for the fact that a string of the form ( $a ) / ( $a ) x should be algebraically equivalent to 1 . Or should it? As rational expressions over , is x equivalent to 1, but as functions they are slightly different: 1 is a constant function, defined x for all real numbers, but is undefined at x = 0. A more general version of this problem x arises as soon as we consider what happens to our string interpretation map Σ and the functional interpretation map Φ. For a string of the form $s = ( $a ) / ( $b ) , the natural
133
Solving Equations 133 interpretation Σ ( $s ) is not a polynomial function in R [ x ], but rather an element of the field R ( x ).10 This is not in itself a problem, but when we try to interpret a rational expression as a function R → R via the functional interpretation map Φ, this is problematized by the fact that the “function” may not be defined on the entirety of the underlying field. Thus for example if we work over , then $s = ( 2 x + 1) / (3x − 6 ) would be a valid 2x + 1 -string, and Σ($s ) would be the rational expression ∈ ( x ), but there is no way to 3x − 6 2 ⋅ 2 +1 5 consistently assign meaning to Φ ( Σ ($s )) ( 2 ), which would evaluate to = . In such a 3⋅2 − 6 0 case, Φ ( Σ ($s )) is not a function on all of R, but rather a partial function; that is, a function defined on only a subset of R. This is not an insurmountable challenge, but it is one that does need to be addressed. The resolution is to define an extended version of Φ : R [ x ] → PolyFunc ( R ), namely Φ : R ( x ) → PartialFunc ( R ), where PartialFunc ( R ) denotes (as its name suggests) the set of all partial functions R → R . More specifically, if R is a field, and $s is an algebraic string of the form $s = ( $a ) / ( $b ) , then we take the domain of the partial function Φ ( Σ ($s )) to be to R ZR ( $b ), where the symbol denotes the set-theoretical difference, i.e. the elements that belong to the first set but not the second set. This formalizes the notion that a rational function is undefined where the denominator is equal to zero. Now that we know how to interpret a string corresponding to a rational expression as a partial function, we could also augment our set of equation moves with a “Type IV” move that replaces ($s,$t ) with ( ( $s ) / $r, ( $t ) / $r ), provided that ZR ( $r ) is empty, i.e. that ($r, 0 ) has no solutions. This seems straightforward enough—as long as one is dividing by an expression that is never equal to zero, there seems to be no harm done—but it, too, is complicated by the fact that whether an equation has solutions depends on the ring or field over which one works! For example (using the less cumbersome notation of functions, rather than strings) a student may want to solve an equation like
( 2x − 3) ( x2 + 1=) ( x + 5 ) ( x2 + 1) by first dividing both sides by x 2 + 1, obtaining
( 2x − 3) ( x2 + 1) ( x + 5 ) ( x 2 + 1) x2 + 1
=
x2 + 1
x2 + 1 x2 + 1 with 1, obtaining 2 x − 3 = x + 5. As long as we are working over (or a subfield of it), this is fine, but if we work over it should be disallowed, because i is a solution to ( 2x − 3) ( x2 + 1=) ( x + 5 ) ( x2 + 1) but is not a solution to 2x − 3 =x + 5. It is one thing to sometimes introduce extraneous solutions via the process of using Type III moves to solve an equation; that can always be handled by simply testing the solutions you obtain by substituting them into the original equation and seeing if they work. It would be quite another matter if we lost solutions by applying a Type IV move—how would we ever get them back, if we don’t know that they are gone? We conclude this brief (and mostly non-technical) discussion of rational equations over a field with two simple Lemmas, the proofs of which are left to the exercises: The student may now wish to “simplify” each side of the equation by replacing
134
134 Solving Equations Lemma 7. (“Zeros of rational functions”). Let $s be a string of the form $s = ( $a ) / ( $b ) . Then ZR ( $s ) = ZR ($a ) ZR ( $b ).
Proof. Exercise 29.
Lemma 8. (“Cross-multiplying”). Suppose R is a field. Let $s and $t be two algebraic strings, of the form $s = ( $a ) / ( $b ) $t = ( $c ) / ( $d ) Then ($s,$t ) is an ancestor of ( ( $a )( $d ) , ( $b )( $c ) ), and SolR ($s,$t ) ⊆ SolR ( ( $a )( $d ) , ( $b )( $c ) ). More precisely, if these two sets are unequal, then the set-theoretical difference between these two solution sets consists precisely of ZR ($b ) ∪ ZR ($d ). (That is, the elements of ZR ($b ) ∪ ZR ($d ) are extraneous solutions.)
Proof. Exercise 30.
Exercises 20. Show that using a combination of Type I and Type II moves, it is possible to transform $s,$t into $t,$s . 21. Find Sol ( x 4 − 3x3 , 9x 2 + 3x + 10 ) and Sol ( x 4 − 3x3 , 8x 2 + 6 x + 20 ) and confirm that the two equations are weakly equivalent over . 22. Find SolR ( x 4 − 3x3 , 9x 2 + 3x + 10 ) and SolR ( x 4 − 3x3 , 8x 2 + 6 x + 20 ) for each of R = , , and 12. 23. Prove that there is no sequence of Type I, II or III moves that can possibly transform either ( x 4 − 3x3 , 9x 2 + 3x + 10 ) or ( x 4 − 3x3 , 8x 2 + 6 x + 20 ) into the other. 24. In the proof of Lemma 2 it was shown that SolR ($s,$t ) ⊆ SolR ($s' ,$t' ). Show the reverse inclusion. (Hint: there is more to proving the reverse inclusion than to the forward inclusion, because removing part of a string is trickier than concatenating on to the end of one.) 25. Explain explicitly why over the ring R = 12 the equation ((3x )( 2 ) , (6 x )( 2 )) has solution set {0, 2, 4, 6, 8,10}, while the equation (3x, 6 x ) has only the smaller solution set {0, 4, 8}. 26. Prove the Corollary to Lemma 3. 27. Prove Lemma 5. 28. Complete the proof of Lemma 1 by showing that ($s − $t, 0 ) is an ancestor of ($s,$t ), and ZR ($s − $t ) ⊆ SolR ($s,$t ). 29. Prove Lemma 7. 30. Prove Lemma 8.
135
Solving Equations 135
3.5 A Complete (?) Algorithm for Solving Polynomial Equations in High School At this point we join some of the formalism of the previous two sections together with some techniques from Chapter 2, and consider what kinds of methods secondary students actually learn to use to solve polynomial equations. While not all polynomial equations can be solved, it is true that equations found in the secondary curriculum tend to be carefully curated so that they can be handled using a fairly limited set of techniques. It turns out that Type I, II and III moves are not even close to enough—one also uses factoring, the quadratic formula, and other techniques, none of which are represented yet in our inventory of moves. This set of techniques is taught (in the United States) in bits and pieces, typically over at least a three-year period spanning both Algebra 1 and Algebra 2, so there may be some value in compiling all of the techniques together here in algorithmic form. We emphasize here that the mere possibility of distilling all of equation-solving into an algorithm of this sort does not mean that it should be taught to students this way. Indeed from a pedagogical perspective that would almost surely prove disastrous! Rather, whatever value this algorithm possesses mainly lies in its ability to summarize and synthesize what students should already have learned; to connect it to the theoretical results of this chapter; to see how far those methods generalize to other rings and fields; and to suggest how the act of equation-solving could be encoded into software. Solving Polynomial and Rational Equations 1. Begin with an equation ($s,$t ), where $s and $t are algebraic strings. In high school, these will typically be -strings or -strings, although in parts of Algebra 2 and Precalculus one occasionally works with -strings; here we also consider general R-strings. 2. If we are working over a field, and if one or both strings represent a rational function, begin by cross-multiplying (Lemma 8) to obtain a polynomial equation. Be aware that in doing so you may have introduced extraneous solutions; these will need to be identified and eliminated at the end. 3. “Simplify both sides of the equation”: That is, use a sequence of Type I moves to replace ($s,$t ) with ($s' ,$t' ), where $s' and $t' are in “standard form”. From this point on we will simply write ( p, q ) rather than ($s,$t ), where p = Σ ( $s ) and q = Σ ( $t ), and elide the technical distinction between strings and polynomials. 4. “Move everything to one side of the equation”: Use Lemma 3 to justify replacing ( p, q ) with ( p − q, 0 ). 5. (Optional but recommended): Do any coefficients contain fractions or decimals? If so, clear denominators by a Type III move. As long as we are working over , or , and the denominators are purely numerical, this is unproblematic and produces an equivalent equation by Lemmas 4 and 5. From this point on we follow different strategies, depending on the degree of p − q . 6. Is p − q a constant? a. If so, and the constant is nonzero, then ZR ( p − q ) = ∅ (e.g., an equation like 1 = 0 is an inconsistent equation with no solutions). b. If so, and the constant is zero, then ZR ( p − q ) = R (i.e., an equation like 0 = 0 is an identity or tautological equation for which all numbers are solutions). 7. Is p − q a first-degree polynomial? If so, then it is of the form ax + b for some a, b ∈ R , where a ≠ 0.
a. First use a Type II move to change ( ax + b, 0 ) to ( ax, −b ).
136
136 Solving Equations b. If R is a field, then following the discussion at the end of the previous section we may use a Type IV move followed by a Type I move to change ( ax, −b ) to the strongly equivalent equation ( x, −b / a ). Alternatively, one may use a Type III move and produce ( x, −a −1b ). Either way we have found a unique solution, i.e. ZR ( x, −b / a ) = {−a −1b}. c. If R is not a field, or even an integral domain, a may nevertheless be an invertible element. If so, then we may use a Type III move and multiply both sides of ( ax, −b ) by a −1, obtaining the same unique result as above. For example in 12 the element 5 is invertible, with 5−1 = 7, and so for any b the equation 5x = b has the unique solution x = 7b. d. If a is not an invertible element, then there may or may not be a solution to ( ax, −b ), and if one does exist it may not be unique. For example over 12 the equation 4 x = 1 has no solutions, while the equation 4 x = 0 has four solutions. e. If R is an integral domain but not a field, there may or may not be a solution to ( ax, −b ), but if one does exist it is unique. This is because if r and s are two solutions to ax = −b , then ar = as, and by the integral domain cancellation law we conclude r = s . As an example of a case in which no solution exists, consider R = and the equation 2 x = 3 , which has no solutions in , although of course it has a solution in . f. As the last example suggests, in the case where R is an integral domain but not a field, it is always possible to “enlarge” R to a unique (up to isomorphism) minimal field containing R, called the “field of quotients”11 of R and denoted Q( R ). If we allow ourselves to find solutions in Q( R ), a unique solution always exists. 8. If p − q is a polynomial of degree 2 or higher, and we are working over an integral domain, start trying to factor: a. If p − q is quadratic with integer coefficients, try factoring it (over R) as a product of two binomials. Various heuristics exist for this and it is the easiest possible case. If successful, use the Zero Product Property (Lemma 6) to reduce the problem to two instances of solving linear equations. Assuming this does not work, continue below. b. First use the rational roots theorem to generate a list of all “candidate” rational zeros. (This only makes sense if one is working with coefficients from a unique factorization domain, and we allow ourselves to find solutions in the field of quotients of the domain; see above.) c. Check the candidate rational zeros one at a time, either using direct substitution, synthetic substitution, or polynomial long division (i.e. the Euclidean Algorithm of §2.6). d. When we find a rational zero rk , we simultaneously discover a factor ( x − r ) for our polynomial p − q (by the Factor Theorem, §2.6), so we may write p − q = ( x − r ) s, where s is a polynomial of degree one less than the degree of p − q . If we are working over an integral domain, Z ( p − q ) = Z ( x − r ) ∪ Z ( s ) (Lemma 6), and Z ( x − r ) = {r}, so any further zeros of p − q will necessarily be zeros of s. This allows us to reduce the degree of the problem by 1; we now return to (a) and iterate until all rational zeros have been found. e. At this point the polynomial has been factored into the form p−q = ( x − r1 ) ( x − r2 ) ( x − rk ) s , where s is a polynomial with no rational zeros. Such a polynomial is irreducible over (or the quotient field of whatever ring we are working with). f. 
If we are working with real numbers, or with a subset of the real numbers, then if we are lucky s is quadratic, in which case its irrational (and even complex) zeros can be found by the quadratic formula or by the method of completing the square.
137
Solving Equations 137
g. If we are working with real numbers, or with a subset of the real numbers, and s is not quadratic, we may be out of luck. Although there are general methods for finding irrational solutions of irreducible 3rd-and 4th-degree equations, they are far beyond the scope of what is normally taught in high school12. For 5th- degree equations and higher, the situation is far worse: although the Fundamental Theorem of Algebra guarantees the existence of solutions (possibly complex) to all such equations, and a variety of numerical methods can be used to find decimal approximations to an arbitrary degree of precision, the Abel–Ruffini Theorem shows that in general there is no purely “algebraic” way to find them, and no “exact form” for them exists.13 h. If we are working not with real numbers but rather with a field of the form p for some prime p, we can use the characteristic polynomial to reduce the degree of the polynomial until we have a functionally equivalent polynomial whose degree is less than p (see Chapter 2). Then we may exhaustively evaluate sˆ( r ) for every member of the field and see directly which ones are solutions. For small p this is practical enough to do by hand, if inelegant, but for large p it may be necessary to use software to assist in the calculations. i. If we are working with a ring of the form n for some composite n, the steps above are no longer valid, and we cannot assume that factorizations are unique. We therefore cannot reduce the problem of factoring the original polynomial to the simpler problem of factoring s. We can, however, still use the characteristic polynomial to reduce the degree of the polynomial. In this case if n is a product of primes p1 p2 pk then we can reduce the degree until it is less than p1 + p2 + pk (refer to Chapter 2), and if we can find additional polynomials that are functionally equivalent to zero (which often exist), we may be able to reduce the degree still further. At this point we may exhaustively evaluate the elements of the ring one at a time to find all solutions14. As you can see, the complete algorithm for solving polynomial and rational equations in one variable is quite complex, with subtle connections to deep abstract mathematical principles.
Exercises 31. Some Algebra 2 textbooks incorrectly claim that using a combination of the rational roots theorem, synthetic division, and the quadratic formula it is possible to find all roots of any polynomial. Obtain one or more textbooks and see if they make this error. 32. Use the algorithm to find all solutions to 2 x7 + 3x − 5 = 0 over 5 . 33. Use the algorithm to find all solutions to 2 x7 + 3x − 5 = 0 over 6 . 34. Use the algorithm to find all rational solutions to 2 x7 + 3x − 5. How do you know you have found them all? (The other roots are all complex, and cannot be found by algebraic methods.)
3.6 Equations in Two Variables One of the advantages of framing the theory of polynomials and equations in the abstract form that we have used is that it allows us with almost no additional work to consider polynomials in two variables. Recall that for any ring R we can form the ring of polynomials
138
138 Solving Equations R[ x ]. We originally defined a formal polynomial with coefficients in R as a sequence of elements from R, which we denoted ( r0 , r1 , r2 ,…), in which only finitely many terms are nonzero15. We then introduced addition and multiplication laws, and introduced the special polynomial x = ( 0R ,1R , 0R , 0R ,…), whose elements are the additive and multiplicative identities of R. Armed with these tools we were able to write any polynomial in the more familiar form r0 + r1x + r2 x 2 + + rn x n. What happens, now, if we take our ring R[ x ], and use it to build a formal polynomial with coefficients in R[ x ]? Such a polynomial would have the form
( p ( x ) , p ( x ) , p ( x ) , p ( x ) ,…) 0
1
2
3
where each pk ( x ) is itself a polynomial in x. In other words, each term in the sequence above is itself a sequence of elements of R. We make a few observations about this construction: •
(
)
First, the sequence 0, 1, 0, 0 … needs to have a name. Note carefully that the elements of this sequence are not members of the underlying ring R, but rather constant polynomials in x (which is why they have bars over them). That is to say, each term is itself a polynomial. We can unpack things by writing it as
((0, 0,) , (1, 0, 0,) , (0, 0,) , (0, 0,) ,)
•
In particular this sequence is not the same mathematical object as x = ( 0R ,1R , 0R ,…), whose terms are elements of R, rather than sequences of elements of R. So we need to call it something else; the natural thing to do is to call it y. This means our ring of polynomials with coefficients in R[ x ] should be denoted ( R [ x ]) [ y ]. Now any member of ( R [ x ]) [ y ] can be written uniquely in the form p0 ( x ) + p1 ( x ) y + p2 ( x ) y2 + p3 ( x ) y3 + + pn ( x ) y n
•
where each pk ( x ) is a polynomial in x. In the above expression, n would be the “degree in y” of the polynomial, but each coefficient would have its own “degree in x”. For any polynomial p ( x ) , the sequence ( p ( x ) , 0, 0, 0,) (following our conventions in Chapter 2) should be denoted p( x ). p( x ) is not a “constant” in the normal sense, but its “degree in y” is 0.
It may sometimes be helpful to display a sequence of sequences in the form of a grid or matrix, in which each column represents a sequence of elements of R, and the list of columns represents the entire sequence of sequences. For example, all three of the following represent the same polynomial in ( [ x ]) [ y ]: ( 2 x 2 + 3, 5x − 4, x6 , 0, 0,…)
( 2 x 2 + 3 ) + ( 5x − 4 ) y + x 6 y 2
139
Solving Equations 139 3 −4 0 5 2 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 1 0
0 0 0 0 0 0 0 0
One reason this is useful is that it allows us to consider what would happen if we changed notation by using the letter y to represent the “inner” variable and x for the “outer” variable. This would give us a ring denoted ( R [ y ]) [ x ]. Is this the same ring as ( R [ x ]) [ y ]? The answer (perhaps unsurprisingly) is that it depends what you mean by “the same”. Certainly at a formal level ( R [ x ]) [ y ] and ( R [ y ]) [ x ] are two different sets. However, there is a natural way to identify each element of one ring with a corresponding element in the other: we simply use the ordinary properties of algebra to distribute the powers of y, then regroup terms with equal degree in x. For the example above, we would write
(2x2 + 3) + (5x − 4) y + x6 y2 = 2x2 + 3 + 5xy − 4 y + x6 y2 = ( 3 − 4 y ) + (5 y ) x + 2 x 2 + ( y 2 ) x 6 If we write this reshuffled version of the polynomial in matrix form, with the columns representing the coefficients (each a polynomial in y), then we would have 3 −4 0 0
0 5 0 0
2 0 0 0
0 0 0 0
0 0 0 0
0 0 0 0
0 0 1 0
0 0 0 0
which is the transpose of the matrix we wrote before. So there is a clear one-to-one correspondence between the elements of ( R [ x ]) [ y ] and the elements of ( R [ y ]) [ x ], and this correspondence in fact induces an isomorphism of rings. For this reason, we normally treat the two rings as interchangeable; except in the contexts of certain proofs (as we will see below) it usually does not matter which variable is on the “inside” and which is on the “outside” of the construction. When we wish to treat x and y on an equal footing, we will simply write R [ x, y ], the ring of polynomials in two variables with coefficients in R. In general, any polynomial in two variables can be written in the form
∑a
i, j
i, j
xi y j
140
140 Solving Equations where the sum runs over all pairs of non-negative integers i , j , and only finitely-many of the coefficients ai , j ∈ R are nonzero. We will often use the notation p ( x, y ) to indicate a polynomial in two variables. For example, we have already shown that the polynomial p ( x= , y ) ( 2 x 2 + 3 ) + ( 5x − 4 ) y + x6 y2 considered earlier has the form 2 x 2 + 3 + 5xy − 4 y + x6 y2 ; this means that a0,0 = 3,
a2,0 = 2,
a1,1 = 5,
a0,1 = −4,
a6,2 = 1
with all other ai , j = 0. Note that it is possible to write any polynomial in two variables in the order of decreasing (or increasing) degree in x, and it is also possible to write it in the order of decreasing (or increasing) degree in y, but it is not always possible to do both simultaneously, so “standard form” is not really well defined here. We still have a functional interpretation map—or rather, we have two functional interpretation maps. If we regard R [ x, y ] as ( R [ x ]) [ y ], then the functional interpretation map Φ takes a polynomial expression in two variables and turns it into a function on R [ x ]; that is, we have Φ : R [ x, y ] → Func ( R [ x ]). On the other hand, if we regard R [ x, y ] as ( R [ y ]) [ x ], then we have a different map Φ : R [ x, y ] → Func ( R [ y ]). As these are two different maps, taking their image in different sets of functions, we need to denote them with different symbols. We will write Φx : R [ x, y ] → Func ( R [ x ]) for the first one, and Φy : R [ x, y ] → Func ( R [ x ]) for the second one. (We don’t use the “hat” notation here because there is no easy way to distinguish one from the other without using two differently shaped hats, and we have enough unusual accent marks cluttering up our text already.) What do these maps do? Consider the action of Φx on our example, 2 x 2 + 3 + 5xy − 4 y + x6 y2 . The result Φx (2 x 2 + 3 + 5xy − 4 y + x6 y2 ) is supposed to be a function on R [ x ]. This means that we ought to be able to “plug in” any polynomial that’s purely in x into our function, and produce another polynomial that’s purely in x as a result. The way we do this is baked into our definitions: we substitute an arbitrary polynomial q ( x ) in for y, and then algebraically simplify. For example, with q ( x= ) x2 − x − 1, we would have
( Φ (2 x x
2
+ 3 + 5xy − 4 y + x6 y2 )) ( x 2 − x − 1)
= 2 x 2 + 3 + 5x ( x 2 − x − 1) − 4 ( x 2 − x − 1) + x6 ( x 2 − x − 1)
2
which of course can be simplified. The notation Φx ( p ( x, y )) ( q ( x )) is not only cumbersome but potentially misleading, as it looks like we are multiplying two polynomials, when we are actually substituting the second polynomial into a function defined by the first polynomial, so a better notation (and one we will use from now on) is p( x, y ) y = q ( x ) , which is probably familiar to you from Calculus. Likewise, Φy (2 x 2 + 3 + 5xy − 4 y + x6 y2 ) is supposed to be a function on R [ y ], which means we ought to be able to take any polynomial r( y ) and “plug it in for x”, obtaining another polynomial in y. We will write the result of acting on r( y ) by the function Φy ( p ( x, y )) as p( x, y ) x = r ( y ). We are now prepared to describe the solution set to a two-variable polynomial equation. It turns out there are two distinct ways to do this, depending on which functional interpretation map we use. (Later we will define yet a third way, using both variables together.) Following our work earlier in the chapter, we define an equation to be an ordered pair ( p ( x, y ) , q ( x, y ) ). A solution for y in terms of x is a polynomial r( x ) with p ( x, y ) y = r ( x ) = q ( x, y ) y = r ( x )
141
Solving Equations 141 while a solution for x in terms of y is a polynomial s( y ) with p ( x, y ) x = s ( y ) = q ( x, y ) x = s ( y ) To make this concrete, let’s consider a simple example. Suppose that p ( x, y= ) 2x + 3 y, and q ( x, y )= y − 6 x + 4, both regarded as polynomials in [ x, y ], and consider the equation ( p ( x, y) , q ( x, y)). For this equation, the polynomial r ( x )= 2 − 4x is a solution for y in terms of x, as shown by the equality of p ( x, y ) y = r ( x= 2 x + 3 ( 2 − 4 x )= 6 − 10 x ) and q ( x, y ) y = r ( x= )
( 2 − 4x ) − 6x + 4=
6 − 10 x.
On the other hand, considering the same equation ( p ( x, y ) , q ( x, y )), the polynomial 1 1 s ( y )= − y is a solution for x in terms of y, as can be verified by computing 2 4 5 1 1 p ( x, y ) x = s ( y ) = 2 − y + 3 y = 1 + y 2 4 2 and 5 1 1 q ( x, y ) x = s ( y ) = y − 6 − y + 4 = 1 + y. 2 4 2 Note that far from being some exotic exercise in mathematical formalism, we have naturally rediscovered “solving for one variable in terms of another”, a fairly standard part of high school classroom mathematics! The sets of all solutions (of both types) are SolR[x] ( p ( x, y ) , q ( x, y )) and SolR[ y] ( p ( x, y ) , q ( x, y )), respectively. Likewise ZR[x] ( p ( x, y ) )
denotes the set of polynomials q ( x ) ∈ R[ x ] with the property that p ( x, y ) y = q ( x ) = 0, and ZR[ y] ( p ( x, y ) ) denotes the set of polynomials r ( y ) ∈ R[ y ] with the property that p ( x, y ) x = r ( y ) = 0. Everything seems to be working quite well. There is, however, a problem: in almost every case, both SolR[x] ( p ( x, y ) , q ( x, y )) and SolR[ y] ( p ( x, y ) , q ( x, y )) are empty sets. This is because when you solve a two-variable polynomial equation for one variable in terms of another, the result is rarely a polynomial. This can be seen even in very simple cases. Consider for example xy2 = 1. Thinking of this as a 1st-degree equation in x (with coefficients from [ y ]), we would anticipate a single solution, and normal algebraic manipulation leads to x=
1 y2
But this is not a polynomial! We have to find our solutions outside of [ y ]. In hindsight this is not surprising; [ y ] is an integral domain, not a field, and in many ways it behaves like the
142
142 Solving Equations integers . After all, 3x = 2 is an equation with integer coefficients, but its solution lies outside of and must be found in the field of rational numbers . It’s straightforward enough for us to extend our “universe” of possible solutions from [ y ] to its field of quotients, i.e. to the field of rational expressions ( y ). But this is still not fully adequate, as we see as soon as we try to solve xy2 = 1 for y in terms of x. Regarding this as a quadratic equation in y with coefficients from [ x ], we expect to find two solutions, and ordinary manipulation leads to y=
1 x
,
y=
−1 x
But these do not “live” in either [ x ] or ( x ). They are neither polynomials nor rational functions, but something else. What kind of “thing” are they, and what algebraic structure do we need to find them in? The situation here is, in many ways, analogous to the one high school students encounter the first time they try to solve a quadratic equation like x 2 = 10 . All of the coefficients are rational (even integers!) but in order to find a solution you have to look beyond to the larger field of real numbers, . Similarly, when students try to solve x 2 + 2 x + 10 = 0 they eventually learn to look beyond to the field of complex numbers, . It is something of a surprise (or should be) that it’s never necessary to go any further than that when solving higher-degree equations, or when using more complicated coefficients (including complex ones); indeed, the Fundamental Theorem of Algebra says precisely that any polynomial equation in one variable has a solution in . Another way to express this is to say that is algebraically closed: Definition. A field F is algebraically closed if every polynomial with coefficients in F has a solution in F . Definition. An algebraic closure of a field F is an algebraically closed field F that contains F as a subfield, and that contains no smaller algebraically closed fields between F and F . It is a deep and profound theorem, the proof of which is beyond this text, that every field F has an algebraic closure, and that up to isomorphism all algebraic closures of F are isomorphic to one another. For this reason we usually refer to F as the algebraic closure of F , rather than an algebraic closure. To connect these definitions with our remarks above, the algebraic closure of is , but is its own algebraic closure, since it’s algebraically closed to begin with. We don’t usually talk about “the algebraic closure of ” because is not a field; however, it is an integral domain, and any integral domain can be enlarged to a field of quotients, which would then have an algebraic closure. In this case the field of quotients of is just , and the algebraic closure of is . We are now ready (finally) to answer the question, “When we solve a polynomial for one variable in terms of another, where do we find solutions?” The answer lies in the algebraic closures of R ( x ) and R ( y ), which contain not only all rational functions in their respective variables, but also roots of all orders, and other partial functions for which there is no notation16. Such functions are called algebraic functions (even though many of them are only partial functions). Not all partial functions from R → R are algebraic, nor even all functions; for example the algebraic closure of ( x ), denoted ( x ), does not contain functions like sin( x ) or 2 x . (Non-algebraic functions like these are called “transcendental”
143
Solving Equations 143 functions.) But ( x ) does contain things like= y x2 /3 +
5
. It is not at all 1 + 12 − x5 / 7 + x1/ 4 obvious how one could find a polynomial p( x, y ) with the property that plugging in this complicated formula for y would lead to 0, but there is one; that is what it means to be in the algebraic closure of R( x ). A few other observations about polynomials in two variables are in order here. We know that for any ring R, the Euclidean Algorithm17 provides a method for dividing any polynomial p ( x ) ∈ R [ x ] by another polynomial d ( x ) ∈ R [ x ], provided d ( x ) is monic, obtaining as a result a quotient q ( x ) and a remainder r ( x ) of lower degree than d ( x ). The same method works if we replace the ring R with a polynomial ring R[ x ] or R[ y ] and then add on a second variable. For example, we can divide p ( x= , y ) 2 x3 y + 3x 2 y2 + 2 y2 + 5x − 4 y + 6 by 2 d ( x, y= ) x + y + 3 in two different ways. If we think of them both as polynomials in y with coefficients in [ x ], then (with a little bit of regrouping) we are dividing
(3x2 + 2) y2 + (2x3 − 4) y + (5x + 6)
4
by y + ( x 2 + 3)
Alternatively if we regard both p ( x, y ) and d ( x, y ) as polynomials in x with coefficients in [ y ], then we are dividing 2 x3 + (3 y2 ) x 2 + 5x + (2 y2 − 4 y + 6 ) by x 2 + ( y + 3) In either case, the divisor is monic, so the standard long division algorithm can proceed (Exercise 40). More generally, any time we perform long division with two-variable polynomials we end up with either p ( x, y ) = d ( x, y ) q1 ( x, y ) + r1 ( x, y ) where the y-degree of r1 ( x, y ) is less than the y-degree of d ( x, y ); or we end up with p ( x, y ) = d ( x, y ) q2 ( x, y ) + r2 ( x, y ) where the x-degree of r1 ( x, y ) is less than the x-degree of d ( x, y ). In general the two quotients q1 ( x, y ) and q2 ( x, y ) are different, as are the two remainders r1 ( x, y ) and r2 ( x, y ) . In the case where the divisor is of the form y − a( x ), where a ( x ) ∈ R [ x ], the first of these two forms reads p ( x, y= )
( y − a ( x ) ) q1 ( x, y ) + r1 ( x, y )
where the y-degree of r1 ( x, y ) is less than the y-degree of y − a ( x ). But the y-degree of y − a ( x ) is just 1, so this implies that r1 ( x, y ) has y-degree zero; in other words, it depends only on x. In this case we therefore have p ( x, y= )
( y − a ( x ) ) q1 ( x, y ) + r1 ( x )
Moreover, if we evaluate this at y = a ( x ), we get p ( x, y ) y= =a( x )
( a( x ) − a ( x ) ) q1 ( x, a ( x ) ) + r1 ( x )=
0 + r1 ( x ) ,
144
144 Solving Equations so r1 ( x ) can be determined by just evaluating p( x, y ) at y = a( x ). This should look familiar; it’s just the Remainder Theorem, adapted to the 2-variable case: Theorem. (2-variable Remainder Theorem, Version 1) If p( x, y ) is divided by y − a ( x ), the remainder is p ( x, y ) y = a ( x ). If we interchange the roles of x and y we arrive at a symmetric statement (Exercise 41). As an immediate corollary, we also have the 2-variable Factor Theorem: Theorem. (2-variable Factor Theorem, Version 1). The polynomial y − a ( x ) is a factor of p ( x, y ) if and only if p ( x, y ) y = a ( x ) = 0. Once again, a symmetric version exists (Exercise 42). At this point we have fully adapted our theory of polynomials in a single variable to the two-variable case, and in particular we have explained what it means to say that we have solved a two-variable polynomial equation for one variable in terms of another. But there is another sense in which we commonly speak of “solutions” to an equation in two variables. Everyone is familiar with statements like (3, 4 ) is a solution to x 2 + y2 = 25 and we all know what it means. Statements like this don’t fit into the framework we have described above. How can we adapt our theory to provide us with a language for this kind of solution? The key is to realize that we can define a two-variable functional interpretation map. Every polynomial p in R [ x, y ] can naturally be interpreted as a function pˆ : R 2 → R. That is, pˆ takes two separate inputs and produces a single output. It turns out that the two-variable functional interpretation map can be built out of the existing one-variable functional interpretation maps. Definition. The two-variable functional interpretation map, denoted Φxy : R [ x, y ] → Func( R 2 , R ), is defined as follows: For any polynomial p ∈ R [ x, y ], we have Φxy ( p ) = pˆ, where for any a, b ∈ R we have
(
pˆ ( a, b ) = Φ Φx ( p ) y = b
)
x=a
This definition is rather subtle, so it is worth considering what it says and why it all makes sense. First, we take p ( x, y ) and interpret it as a function on R [ x ]. This means that we can take any polynomial in x, and plug it in for y, to get another polynomial in x. In this case, the “polynomial in x” is just the constant b. Up to this point, all we have really done is set y = b ; what remains still has x in it. So now we interpret this single-variable polynomial (via the normal, 1-variable functional interpretation map Φ) as a function on R, and evaluate it as x = a.
145
Solving Equations 145 We don’t have to define Φxy this way; alternatively we could interchange the role of x and y and get the exact same map (Exercise 36). However we define it, it is natural to use the notation p ( x, y ) =( a ,b ) or just p ( a ,b ) as a shorthand way to expressing (Φxy ( p )) ( a, b ). We are now ready to define a two-variable solution. Definition. If p and q are both polynomials in two variables, then a two-variable solution is an ordered pair ( a, b ) ∈ R 2 with the property that p ( a ,b ) = q ( a ,b ). The set of all such solutions is denoted SolR2 ( p, q ). We also define the zero set of a polynomial to be ZR2 ( p ) = SolR2 ( p, 0 ). Note that although in high school we tend to speak of solving an equation p = q (i.e. we want to make both sides equal), it’s more common in algebraic geometry to talk about finding the zero set of a single polynomial. There really is no difference; the solution set of x 2 + y2 = 1 is the same as the zero set of x 2 + y2 − 1. What is the solution set of x 2 + y2 = 1? Of course we know it consists not of a single point, but of an entire set of points, and we know that the set of points describes a circle. But this is a geometric description, and so far our theory has been purely algebraic! We need some way to talk about graphing a solution set; we turn to this in Chapter 4. In light of the fact that we have a fully worked out theory of dividing polynomials from R [ x, y ] from two different perspectives—i.e., regarding R [ x, y ] as either ( R [ x ]) [ y ] or as ( R [ y]) [ x ]—it is only reasonable to ask whether there is a “symmetric” approach. That is, given a polynomial p ( x, y ) and a (monic) divisor d ( x, y ), is there a uniquely determined quotient q( x, y ) and remainder r( x, y ) for which = p qd + r, and for which the total degree of r( x, y ) is less than the total degree of d ( x, y )? Unfortunately, the answer is no. It is possible to generalize the idea of “degree” in two or more variables to one of “monomial order”—so, for example, we might order all monomials xi y j first by total degree i + j , and then within monomials of the same total degree we could use a lexicographic ordering—and once we have done so the Euclidean Algorithm can be deployed, producing eventually a remainder whose leading coefficient “comes before” the leading coefficient of the divisor. However, this is not particularly useful for gaining insight into the relationship between factorization and the zero set of a polynomial in two variables (which was our main application of polynomial long division in the other settings), so we will not pursue it further here. In general the problem of factoring a polynomial in two variables is quite hard, and it rarely yields to algorithmic approaches. Of course it is possible to handcraft specific examples that factor easily: in high school, for example, students typically learn to factor x 2 − y2, x3 + y3 and x3 − y3, as well as variations on those forms. (Exercises 44–45). If the equation has “quadratic form”, i.e. Ax 2 n + Bx n y k + Cy2 k , then it can often be factored as a product of two binomials. Whether those two binomials can themselves be further factored is of course another matter. If we are working over n, it’s possible in principle to write a list of all polynomials of degree less than the degree of the polynomial we want to factor, and check them one at a time using the Factor Theorem, but as n grows the number of things to check quickly becomes computationally prohibitive.
Exercises 35. Let p ( x, y= ) x2 + y2 − 1. Find Z[x] ( p ), Z( x ) ( p ), and Z( x ) ( p ).
(
36. We defined Φxy ( p ) = pˆ where pˆ ( a, b ) = Φ Φx ( p ) y = b
(
have defined pˆ ( a, b ) = Φ Φy ( p ) x = a
)
y=b
)
x=a
. Show that we could also
and would have obtained the same result.
146
146 Solving Equations 37. Let R be the ring of 2 × 2 matrices with real coefficients, and let p ∈ R[ x, y ] be 1 0 2 2 −3 2 4 0 0 12 0 given by p = xy + x + y + + x . Let 0 4 1 −1 0 2 −1 0 1 −3 −1 0 1 0 q ∈ R [ y ] be the polynomial q = y+ . Evaluate p x = q . What do you 2 4 0 3 5 1 get when you evaluate p x = q at ? −3 −1 38. Let R = 6 , and let p ∈ R [ x, y ] be given by = p 3x 2 + 4 y − 2. Find ZR2 ( p ). (Hint: Factoring this is a dead-end, and you can’t take square roots in 6 . But you can still brute-force check every element of R 2 and see if it’s a solution. How many combinations would you need to check? Could you streamline the process using a spreadsheet?) 39. Let p( x, y= ) y5 + y + x and consider the equation p ( x, y ) = 0. Show that even though it is not possible to solve this algebraically for y in terms of x, it is nevertheless true that for every real number x there is a unique real number y such that this equation is satisfied. An equivalent way of saying this is that there is an implicitly defined algebraic function q( x ) such that p ( x, y ) y = q ( x ) = 0. Use a computer algebra system to find the graph of q( x ). 40. Perform polynomial long division for(2 x3 y + 3x 2 y2 + 2 y2 + 5x − 4 y + 6 ) ÷ ( x 2 + y + 3) two different ways: first by regarding each as a polynomial in ( [ x ]) [ y ], and then by regarding each as a polynomial in ( [ y ]) [ x ]. In each case, find a quotient and a remainder. Are the answers the same? 41. State and prove the second version of the 2-variable Remainder Theorem, in which the roles of x and y are reversed. 42. State and prove the second version of the 2-variable Factor Theorem, in which the roles of x and y are reversed. 43. Use the Factor Theorem to confirm that y − x 2 + 1 is a factor of –x 4 y − x3 y + x 2 y2 + x 2 y − 3x 2 + xy2 + xy + 3 y + 3. Then use long division (or synthetic division!) to find the other factor. 44. How do you factor x n − y n for an arbitrary positive integer n? What about x n + y n for an arbitrary odd integer? Show that in both cases you obtain two factors, both irreducible over . Why is there no way to factor x n + y n when n is even? 45. Explain how to factor any polynomial of the form Ax 2 n + Bx n y k + Cy2 k .
3.7 Recommended Reading Your recommended reading for this chapter is: Chazan, D., Yerushalmy, M., & Leikin, R (2008). An analytic conception of equation and teachers’ views of school algebra. The Journal of Mathematical Behavior 27(2), 87–100. Chazan, Yerushalmy and Leikin (2008) situate their article in a school that was consciously and deliberately shifting the way that teachers and students talk about what an equation is: “from a statement about unknown numbers to a particular kind of comparison of two functions” (p. 87). They take pains to point out that this shift entails more than merely the
147
Solving Equations 147 addition of new techniques for solving equations—although it certainly does that as well— but a fundamental conceptual change in what the objects themselves are. They observe that this transition requires not only that teachers teach differently, but also that they themselves learn to think differently about equations; this observation motivated the research study, which aimed at understanding not only how teachers implement the new perspective, but also at how they think about equations themselves, and how they relate to the function- based perspective. Because the research questions were about how the teachers think about equations, rather than on how they teach equations, the authors did not use classroom observations as a data source; instead they conducted interviews in which teachers solved mathematical problems and discussed how they thought about those problems. Notably, the set of mathematical tasks included an equation that cannot be solved by algebraic methods, namely 2 x = x 2, and a system of equations for which no solution exists. Teachers were also asked to produce their own examples of systems of equations with no solutions, and equations that are equivalent to 0 = 0, and to discuss how they explain such results to their students. Another task asked teachers to describe the solution set of the x 2 y2 equation 1. Finally, teachers were asked + = 9 4 whether they felt there were differences between the equations 10 x − 45 = 5 and x = 5. Here we were interested in seeing how teachers would talk about x = 5, whether they would see it as an equivalent equation and also the value of the solution, or the description of the solution set. Are functions in one variable meaningfully different from equations in two variables? And, how does one talk about such issues with students? We were interested in how teachers would describe similarities and differences between = y 3x + 4 and f ( x= ) 3x + 4. Do they see these as representations of the same relation? Are both of these strings representations of a function? (p. 90). It is perhaps noteworthy that Chazan et al. observe in a footnote (p. 90, note 4) that Equivalent equations can be thought of in two ways. Equivalent equations can be equations that have the same solutions. But, in school algebra, equivalent equations typically have been those that have the same solutions AND can be derived from the other by applying an identity. As you have no doubt noticed, nearly all of the mathematical tasks and issues explored in the research study have direct connections with the material in this chapter. The main difference is that while we have attended to these issues from the perspective of mathematical analysis, Chazan et al. attend to these issues from the point of view of teacher knowledge and learning. Epistemologically the two approaches are different but complementary. Chazan concludes that Furthermore, we suggest that this curricular phenomenon in school mathematics is representative of activity in the discipline. In mathematics, one can develop a theory from slightly different starting points and end up with structures that seem quite similar, but that have different qualities. We suggest that this sort of mathematical thinking should be conceptualized as part of the work of teaching (see Sandow, 2002) and that it deserves attention by researchers. Do different sorts of curricular changes have different mathematical loads in terms
148
148 Solving Equations of teacher learning? These teachers had to “relearn” some aspects of what they knew before… Do other curriculum changes that involve a change of the mathematical approach, e.g., from a typical Euclidean geometry course to a transformational geometry course, have a similar mathematical load? Is there a difference between curricula that ask a teacher to learn a new mathematical point of view for familiar material, as opposed to curricula that require learning material (say data analysis) that is unfamiliar? And, do such different curricula have different affordances for addressing issues of student engagement with classroom mathematics? (p. 99)
Projects

A. Examine two or more high school textbooks (Algebra 1 or Algebra 2). Is the process of solving equations described as one of finding an unknown number, or of finding when two functions agree? Focus not on the methods taught, but rather on the explanatory text. Write an analysis of how the textbooks are like and unlike one another in this respect.

B. How do teachers and/or high school students conceptualize equations (in one or two variables) which lead to either inconsistent results (i.e. no solutions) or redundant ones (i.e. infinitely many solutions)? Conduct interviews with two or more teachers or students, with the goal of understanding whether they conceptualize these situations in terms of unknown numbers or functions. As part of designing your study, prepare three or more mathematical tasks that you think might elicit discussion of these issues. Does it matter whether the equations have one or two variables?

C. Read two references cited by the Chazan et al. article and prepare an analytical summary of each of them. Your summary should include (at a minimum) synopses of (a) the research question, (b) the theoretical framework, (c) the research methods, and (d) the findings and conclusions.
Notes

1 This paraphrase of the textbook intentionally contains an error, one which was also found in the source material (although presumably in that context it was unintentional). Can you find the error?

2 Here, we temporarily abandon the standard set-theory notational convention of using commas to separate the elements of a set, because the comma is itself a character in the alphabet.

3 The use of the dollar sign for this purpose is inspired by the old BASIC computer programming language, in which variables containing text strings were marked by a dollar sign. Thus in BASIC one would write instructions like LET NAME$ = "Emmy Noether", which would set the variable NAME$ equal to the text string "Emmy Noether". We do not, however, adopt the BASIC convention of using quotation marks to enclose the characters of a text string, because it can be difficult to visually distinguish multiple quotation marks in a row, as would be the case when concatenating strings; instead we use angle brackets, which allow us to easily distinguish between the start of a string and its end.

4 Read this carefully. How many different strings are being concatenated here?

5 It's also the basis of S.E. Davis's quip that "Algebraic symbols are what you use when you don't know what you're talking about", as well as the logician David Hilbert's slightly more serious maxim, quoted at the beginning of this chapter, that "Mathematics is a game played according to certain simple rules with meaningless marks on paper."
6 The notation here is meant to be somewhat mnemonic: Just as we used the capital Greek letter Φ for the functional interpretation map because phi makes the "f" sound for "functional", we use the capital letter Σ for the name of this map because sigma makes the "s" sound for "string". Note that Σ does not indicate any kind of summation in this context; it is simply the name of a map between sets!

7 In this example, and throughout most of what follows, we omit some of the redundant parentheses, sacrificing precision in favor of improved legibility. You are encouraged to consider where they ought to be, and write them back in with pencil if you are bored.

8 Unlike most of the notation in this chapter, which was invented for this book, the use of Z_R(f) to denote the zero set of a polynomial or rational function is standard practice in algebraic geometry.

9 Refer back to Chapter 2 if you don't remember this notation.

10 Refer to §1.7, Example 12 if you have forgotten what this means.

11 We know, of course, that every rational number can be represented as an ordered pair of integers; in fact it is possible to define the field ℚ as the collection of all equivalence classes of ordered pairs of integers (with nonzero second coordinate), where two ordered pairs of integers (a, b) and (c, d) are equivalent if and only if ad = bc. This construction generalizes: given any arbitrary integral domain, if we take ordered pairs of elements and define equivalence in exactly the same way, we obtain a field, called the field of quotients for our domain.

12 Refer to the Wikipedia articles on "Cubic function" and "Quartic function" for the gory details.

13 More precisely, the Abel–Ruffini Theorem shows that some equations of degree ≥ 5 are not "solvable by radicals", i.e. there is no quintic analogue of the quadratic, cubic and quartic formulas that allows us to write the solutions to a 5th-degree equation using only a combination of addition, multiplication, division and taking roots (of any order). The claim of this theorem is not just that no such formula is known, but rather that such a formula cannot exist, even in principle. This remarkable result led to a radical shift in our understanding of what mathematics can and cannot do, and even in our understanding of what numbers are. An incomplete proof was first published by Paolo Ruffini in 1799, and a finished proof by Niels Henrik Abel in 1824, but arguably it was only with the posthumous publication of Évariste Galois's theory of field extensions, after his 1832 death in a duel at the age of 20, that the significance of the theorem was fully understood.

14 When solving polynomial equations over ℤ_n, additional techniques exist, but these typically involve more complicated number-theoretical results, and they are out of place in the list above.

15 See §2.5.

16 This is analogous to the fact that 5th-degree polynomials in one variable have solutions that cannot be expressed in radicals (see footnote 13 above).

17 See §2.6.
4 Geometry, Graphs and Symmetry
“One must always be able to say ‘tables, chairs, and beer mugs’ each time in place of ‘points, lines and planes’.” —David Hilbert
4.1 Euclidean Geometry in the Secondary Curriculum

At the end of the last chapter we observed that a two-variable polynomial equation such as x² + y³ = 3xy has three distinct types of solution sets, corresponding to whether we interpret x² + y³ and 3xy as functions on ℝ[x], ℝ[y], or on ℝ². In the first two cases, "solving" means specifically "solving for one variable in terms of the other", obtaining one or more algebraic functions. In the latter case, the solution set Sol₂(x² + y³, 3xy) is a set of ordered pairs of real numbers; for example, you can easily verify that (2, 2) and (4, 2) both belong to Sol₂(x² + y³, 3xy). Typically we don't want to just find some solutions to an equation, but rather to find a description of all of them. How to do that, when (as in this case) there are infinitely many distinct ordered pairs in ℝ² that are solutions to our equation? The answer, of course, is that we usually describe a solution set in ℝ² by describing its shape. We say that the solution set of 3x − 4 = 2y + 6 is a line, the solution set of x²/16 + y²/25 = 1 is an ellipse, and so forth. Focusing on the shape of the solution set allows us to notice things about symmetry, about boundedness, about the smoothness of the curve, whether there are asymptotes and what kinds. All of this information is implicit in the algebraic description of the solution set, but is only revealed when we have a picture to look at.

In order to discuss the graph of a solution set, we need to shift our focus and develop a language for talking about the geometry of Euclidean planes. By "Euclidean planes", we mean any mathematical system in which it is possible to perform the classical constructions of Euclidean geometry. We say "Euclidean planes" rather than "the Euclidean plane" because, as we will see, there are many distinct mathematical structures, each of which has all of the mathematical properties we need. Rather than single one of them out and call it the Euclidean plane, we will focus on how these various planes differ from one another, and what they all have in common.

The study of Euclidean geometry has been a part of the mathematical curriculum since before "secondary schools" as we know them even existed. Indeed, from the time Euclid's Elements was first written and disseminated in approximately 300 BCE, continuing well into the modern era, the study of this seminal textbook has played a central role in mathematics education. Throughout the Renaissance, university students studied the Elements, and in the 18th and 19th centuries a knowledge of Euclid was understood to be an essential component of the education of any gentleman. (Of course, it is also broadly true
that only white men of a certain social and economic class had access to education at this level.) The style of Euclid's Elements inspired not only the structure of the Declaration of Independence but also the speeches of Abraham Lincoln. With the beginnings of large-scale public education in the 19th century, the study of the Elements was incorporated into the nascent high school curriculum.

The Elements was not without its competitors. Beginning in the 18th century, mathematicians from Legendre to Playfair began producing their own versions of the Elements, often departing from the original in organization and coverage. So many competing versions of the Elements appeared that in 1879 the mathematician and satirist Charles Dodgson1 published Euclid and his Modern Rivals, in which he mounted a vigorous defense of the original. Still, by the end of the 19th century, a consensus emerged among mathematicians that Euclid's work contained serious logical flaws.

To understand the nature of these flaws, we must first understand Euclid's unique contribution to mathematics. The Elements was not the first collection of mathematical knowledge, nor even the first geometry textbook; it was, however, the first textbook to produce a systematized mathematical theory. Euclid introduced the deductive "Definition–Postulate–Theorem" structure that characterizes mathematical writing to this day (including, of course, the very textbook that you are reading now). This structure seeks to prove as many results ("theorems") as possible from as small a set of unproven assumptions ("postulates" or "axioms") as possible. In Book I of the Elements, Euclid assumed just five postulates and proved 48 separate theorems. But in establishing this structure, and the need to justify every claim with either a proof or a postulate, Euclid also planted the seeds for his own future critics, because Euclid himself often relies on properties of diagrams that are not justified, nor even justifiable, using his own axiom scheme. For example, the proof of the very first Proposition in the Elements relies on constructing the intersection of two circles with a common radius AB, one centered at A and one at B (see Figure 4.1 below). Although it is visually obvious that these circles must intersect, none of Euclid's postulates actually justifies the assertion that they do, and Euclid offers no explanation for why it should be so. Rather, he seems to tacitly rely on the visual appearance of the diagram, rather than on the rules of his own mathematical system.

In 1899, the German mathematician David Hilbert published Grundlagen der Geometrie ("Foundations of Geometry"), in which he re-established the entire theory of Euclidean geometry on a modern, rigorous basis. Hilbert's Grundlagen begins with not just Euclid's five postulates, but with five sets of postulates, comprising twenty in all. These postulates include:
• Eight axioms of incidence, laying out the fundamental relationships between points, lines and planes;
• Four axioms of order, establishing basic notions of "betweenness", "side", and "interior/exterior";
• One axiom of parallels;
• Five axioms of congruence; and
• Two axioms of continuity.
With one exception, these twenty axioms are both necessary and sufficient for proving all of the essential theorems, and performing all of the compass-and-straightedge constructions, of Euclidean geometry. The one exception is Hilbert’s twentieth and final axiom, which he called the “Axiom of Completeness”. The Axiom of Completeness is not only unnecessary
Figure 4.1 Two intersecting circles—or are they?
for doing Euclidean geometry, it actually erases some of the most interesting features relating geometry to abstract algebra, and in what follows we will mostly ignore it, except when we need to point out that we want to violate it.

To be clear, this chapter is not devoted to a thorough axiomatic development of Euclidean geometry. Such an endeavor would require an entire book, and indeed there is already at least one outstanding book intended for preservice secondary teachers, mathematics educators, and mathematics teacher educators (among others) that does it exceptionally well: namely, Marvin Greenberg's Euclidean and Non-Euclidean Geometries. In fact a 2000 survey2 of mathematics teacher education programs found that approximately 40% of respondent institutions require all preservice secondary mathematics teachers to take a course focused on an axiomatic development of geometry, with roughly 40% of those courses using Greenberg's text. So there is a fairly good chance that the reader of this book has already encountered at least one thorough axiomatic development of geometry, and there does not seem to be much point in trying to cram another one into a single chapter.

The goal of this chapter is more modest: to begin with a (presumably) mature notion of the Euclidean plane, and see what we can do with it. Specifically, our goal is to focus on how to put coordinates on the plane, and thereby establish a correspondence between a set of ordered pairs of real numbers, on the one hand, and a set of points on the other. There are, it should be noted, many different ways to do this, and the "shapes" one ends up with will look quite different depending on the coordinates one chooses. For example, if the ordered pair (a, b) is identified with the point a units from the origin at an angle of elevation of b radians—in other words, if we use polar coordinates—then the solution set of y = 3x + 2 would not be a line at all, but rather a pair of ever-widening Archimedean spirals, one clockwise and one counter-clockwise (see Figure 4.2). Our goal, at least preliminarily, is to come up with a working theory of rectangular coordinates on a Euclidean plane. That is, we will want to have some way of attaching to each point on the plane a pair of real numbers (a, b) in such a way that, for example, the solution sets of first-degree equations in two variables correspond in some natural way to
Figure 4.2 The graph of y = 3x + 2, if x and y are plotted in polar coordinates
straight lines, the solution sets of quadratic equations in two variables correspond to conic sections, and so on. So one of our goals will be to establish a coordinate function,
χ : Euclidean plane → ℝ²

Of course in order to do this we will first need to stop dancing around the question of what, exactly, we mean by a "Euclidean plane", and this will be the first order of business in the next section. The next thing we will have to do is set up the function χ. Then we can start to ask some really interesting questions, like: Is χ surjective? In other words, we may be able to set things up so that every point in the plane gets a pair of real numbers attached to it—but does that necessarily mean that every pair of real numbers is attached to some point? Or might there be certain real numbers that never get "used" as coordinates? These considerations will lead us to the important question: how much of the real number system can be reconstructed using purely geometric methods? After that we will turn our attention to some important properties of graphs of solution sets. We will want to establish definitively when a solution set forms a line, when it has various sorts of symmetry, and so on. These questions will be taken up in the last sections of this chapter.
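As an aside, the claim illustrated in Figure 4.2, namely that reading solution pairs of y = 3x + 2 as polar data (radius, angle) traces out spirals rather than a line, is easy to check numerically. The following short Python sketch (an illustration only, not part of the development of this chapter; the helper name polar_plot_point is ours) converts a sample of solution pairs into rectangular plotting coordinates. Plotting a finer sample of x-values with any graphing tool reproduces the two spirals shown in the figure, one for x ≥ 0 and one for x < 0.

import math

def polar_plot_point(x, y):
    """Interpret the solution pair (x, y) as polar data: radius x, angle y radians."""
    return (x * math.cos(y), x * math.sin(y))

# Sample solutions of y = 3x + 2 and convert each one to a plotted point.
for x in [0.0, 0.5, 1.0, 1.5, 2.0, -0.5, -1.0]:
    y = 3 * x + 2
    px, py = polar_plot_point(x, y)
    print(f"solution ({x:5.2f}, {y:5.2f}) plots at ({px:6.3f}, {py:6.3f})")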
4.2 Compass-and-Straightedge Constructions in the Euclidean Plane

Broadly speaking, Geometry curricula take two distinct approaches to axiomatizing the properties of the Euclidean plane. Since the days of Euclid's Elements, the primary approach
has been to adopt axioms describing the functioning of a compass and an (unmarked) straightedge. Euclid's own original five postulates3, for example, include the following:

I. To draw a straight line4 from any point to any point.
II. To produce a finite straight line continuously in a straight line.
III. To draw a circle with any center and any radius.

The first two of these postulates say, more or less, "I have a straightedge, and I know how to use it"; the third says "I also have a compass, and I know how to use it, too." Notice that the first two postulates describe very different ways of using a straightedge: joining two points together by constructing a line segment is, both visually and kinesthetically, a different activity than extending an existing line segment by making it longer.

Armed with just these three "compass and straightedge" postulates, Euclid sets out, in a series of Propositions, to show that ever more complicated geometric constructions are possible. The very first Proposition is to show how, given any line segment, an equilateral triangle may be constructed whose edges are all equal to the given segment. Proposition 9 shows how any angle may be bisected into two equal sub-angles; Proposition 10 shows how any segment may be bisected into two equal sub-segments. Subsequent Propositions show how to construct a perpendicular to any line through any point, how to construct a parallel to any line through any point not on the line, to copy an angle to another segment, and to construct a parallelogram, equal in area to a given triangle, containing an arbitrary given angle. In between these constructions, another collection of Propositions asserts general properties that apply to all figures possessing certain (other) properties: for example, that the three angles of a triangle always sum to the equivalent of two right angles, or that two triangles with three pairs of corresponding equal sides will also have three corresponding equal angles.

One noticeable feature of the geometric constructions in the Elements is that each one begins with specific given geometric objects and uses the compass and straightedge to produce additional geometric objects. In no case is a geometric object constructed "out of nothingness". Another, less obvious feature of the constructions is that no "auxiliary object" is ever used in a proof until a construction justifying its existence has already been demonstrated. For example, in Euclid's proof of the Base Angles Theorem for isosceles triangles, he does not draw an angle bisector from the vertex angle of the triangle to its base, cutting it into two congruent triangles, because at that point in the exposition he has not yet shown that angle bisectors can even be constructed at all! For Euclid, until a construction has been demonstrated to be possible, it is essentially "off-limits". Thus in a very real sense, Euclid's constructions function in much the same way that a modern mathematics text would treat "existence proofs". This observation becomes especially important when one considers the geometric constructions that Euclid doesn't demonstrate the possibility of. Euclid never shows, for example, how to trisect an angle into three equal parts using only a compass and straightedge, and for a very good reason: no such construction can exist, even in principle5!
The problem of trisecting an angle is, along with squaring a circle6 and doubling a cube7, one of the three classic "unsolvable" construction problems of antiquity. In particular, there is no compass-and-straightedge method for trisecting a 60° angle into three 20° angles. From a strictly Euclidean perspective, it is not too much of a stretch to say that 20° angles don't even really exist. More precisely, there is no way to construct a 20° angle, and therefore no way to make use of one in a proof or in another construction; so for all practical purposes, they may as well not exist at all. Likewise, the impossibility of doubling the cube means that if you have a segment that is one unit long, then in a certain sense there is no segment whose length is ∛2 units long. If you can't construct it, then you can't justify the claim that it exists, so it may as well not!
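The impossibility results mentioned here are proved with field theory rather than geometry, and we do not prove them in this book; still, the algebraic facts behind them are easy to check with a computer algebra system. The sketch below (using the sympy library; it is an illustration, not a proof) verifies two standard facts: cos 20° is a root of the cubic 8t³ − 6t − 1 (this follows from the triple-angle identity cos 3θ = 4cos³θ − 3cos θ with cos 60° = 1/2), and that cubic, like the minimal polynomial x³ − 2 of ∛2, is irreducible of degree 3 over ℚ. The standard criterion (a constructible number must have degree a power of 2 over ℚ) is quoted here without proof.

from sympy import cos, pi, Rational, Poly, minimal_polynomial, symbols

x = symbols('x')

# cos(20 degrees) = cos(pi/9) satisfies 8t^3 - 6t - 1 = 0 (triple-angle identity).
value = 8 * cos(pi / 9)**3 - 6 * cos(pi / 9) - 1
print(value.evalf())  # numerically indistinguishable from 0

# The cubic is irreducible over the rationals, so cos(20 degrees) has degree 3 over Q.
print(Poly(8 * x**3 - 6 * x - 1, x).is_irreducible)  # True

# The cube root of 2 (doubling the cube) likewise has degree 3 over Q.
print(minimal_polynomial(2**Rational(1, 3), x))  # x**3 - 2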
The need to justify constructions with compass and straightedge thus leads to some interesting questions: if not all lengths can be constructed, then which ones can? If not all angles can be constructed, then which ones can? Evidently the power of a compass and straightedge is limited—just which numbers are within their reach?

All of these concerns and questions disappear and become trivial if instead of compass-and-straightedge axioms, one structures a geometric theory around ruler-and-protractor axioms, which brings us to the second of the two approaches commonly used for axiomatizing geometry. Beginning in the mid-20th century, a number of geometry textbooks began introducing two axioms that asserted the correspondence between the geometric objects of Euclid, and the numerical objects of algebra and arithmetic. The first major text to follow this approach was Basic Geometry by Birkhoff & Beatley (1941). The protractor and ruler axioms were later incorporated into the School Mathematics Study Group's8 1960 text Geometry9, which introduced the relevant axioms this way:

Postulate 2. (The Distance Postulate). To every pair of points there corresponds a unique positive number. (SMSG Geometry, p. 34)

Postulate 3. (The Ruler Postulate). The points of a line can be placed in correspondence with the real numbers in such a way that (1) To every point of the line there corresponds exactly one real number, (2) To every real number there corresponds exactly one point of the line, and (3) The distance between two points is the absolute value of the difference of the corresponding numbers. (SMSG Geometry, p. 36)

Postulate 4. (The Ruler Placement Postulate). Given two points P and Q of a line, the coordinate system can be chosen in such a way that the coordinate of P is zero and the coordinate of Q is positive. (SMSG Geometry, p. 40)

Postulate 11. (The Angle Measurement Postulate). To every angle ∠BAC there corresponds a real number between 0 and 180. (SMSG Geometry, p. 80)

Postulate 12. (The Angle Construction Postulate.) Let AB be a ray on the edge of the half-plane H. For every number r between 0 and 180 there is exactly one ray AP, with P in H, such that m∠PAB = r. (SMSG Geometry, p. 81)

Commenting on their adoption of the ruler-and-protractor scheme, the SMSG authors wrote in their Preface:

The basic scheme in the postulates is that of G. D. Birkhoff. In this scheme, it is assumed that the real numbers are known, and they are used freely for measuring both distances and angles. This has two main advantages. In the first place, the real numbers give us a sort of head start. It has been correctly pointed out that Euclid's postulates are not logically sufficient for geometry, and that the treatments based on them do not meet modern standards of rigor. They were improved and sharpened by Hilbert. But the foundations of geometry, in the sense of Hilbert, are not a part of elementary mathematics, and do not belong in the tenth-grade curriculum. If we assume the real numbers, as in the Birkhoff treatment, then the handling of our postulates becomes a much easier task, and we need not face a cruel choice between mathematical accuracy and intelligibility.

The ruler-and-protractor axioms were also adopted by the influential 1964 text, also called Geometry, by Moise & Downs; and in the United States subsequent geometry textbooks for high school students (down to the modern era) have, almost without exception, followed
suit. Most modern textbooks also include compass-and-straightedge axioms, leading to an overloaded set of axioms, many of which are redundant (Exercise 1).

The incorporation of ruler-and-protractor axioms into a Geometry curriculum entirely short-circuits the questions of which constructions are possible and which are not. With SMSG-style protractor axioms, any angle is (trivially) trisectable: SMSG's Postulate 11 asserts the possibility of assigning a real number to the measure of any angle; that real number can (obviously) be divided by 3; the resulting value corresponds, by Postulate 12, to some angle. Done! The problems of doubling a cube or squaring a circle become similarly trivial (Exercises 2 and 3). Indeed the question raised at the end of the last section—how much of the real number system can be reconstructed using purely geometric methods?—becomes completely moot when the entirety of the set of real numbers is "baked into" the axioms themselves.

For all of these reasons, we will not adopt protractor-and-ruler axioms in this chapter. They are simply too powerful for our purposes10; while they may make instruction easier at the high school level, they do so by turning important mathematical subtleties into trivial corollaries. Instead, we will try to hold at least a little bit closer to the Euclidean ideal: to prove as much as possible, while assuming as little as we can reasonably get away with. So, after all of this preamble, it is time to ask: What are we going to assume, and just what are we going to try to get away with?

Let's first establish some basic vocabulary and notation. We use the symbol 𝔼² to refer to a Euclidean plane. At this point we will still refrain from defining precisely what that means, but you should visualize a Euclidean plane as a vast, empty canvas on which geometric figures can be drawn. Euclidean planes are flat, homogeneous11, and infinite in extent; in particular, they are entirely bare of axes, coordinates, grid lines, and the other paraphernalia of graph paper. We will assume that we have sufficiently powerful axioms to allow us to perform all of Euclid's compass-and-straightedge constructions; shortly we will list a few examples of constructions that will be particularly useful for us.

Note that 𝔼², a Euclidean plane, is not the same thing as ℝ², the collection of all ordered pairs of real numbers. ℝ² is an algebraic and numerical structure, not a geometric one; there are no "pictures" of (or in) ℝ². Whereas ℝ² contains ordered pairs like (2, 3), and sets of ordered pairs like {(a, b) | a² + b² = 25}, 𝔼² contains points and lines and circles. We are, of course, accustomed through many years of habit to identify these two very different kinds of structures; we naturally think of (2, 5) as a point in the plane and of {(a, b) | a² + b² = 25} as a set of points that forms a circle. How does such an identification actually function?

Broadly speaking, there are two different strategies one can follow in trying to establish a connection between 𝔼² and ℝ². The first strategy can be summarized by saying that we can try to geometrize ℝ².
That is, we can start with a mature notion of ℝ², and make the following definitions (a small computational sketch of these definitions appears immediately after the list):

• We define a point of ℝ² to be an ordered pair (p, q) ∈ ℝ².
• We define a line of ℝ² to be an equivalence class of equations, each of the form ax + by = c, where:
  • a, b, c ∈ ℝ,
  • at least one of a, b is nonzero, and
  • two equations ax + by = c and a′x + b′y = c′ are said to be equivalent if for some nonzero k ∈ ℝ we have a′ = ka, b′ = kb, and c′ = kc.
• The point (p, q) is said to lie on the line ax + by = c if and only if ap + bq = c, i.e. if (p, q) ∈ Sol₂(ax + by, c).
• Given three points (p, q), (p′, q′), and (p′′, q′′), all lying on a common line, we say (p′, q′) is between (p, q) and (p′′, q′′) if any of the following conditions holds: p < p′ < p′′, or p′′ < p′ < p, or q < q′ < q′′, or q′′ < q′ < q.
• We define a circle in ℝ² to be an equation of the form (x − h)² + (y − k)² = r², where
  • h, k, r ∈ ℝ, and
  • r > 0.
• And we say that a point (p, q) lies on the circle (x − h)² + (y − k)² = r² if and only if (p − h)² + (q − k)² = r², i.e. if (p, q) ∈ Sol₂((x − h)² + (y − k)², r²).
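The definitions above are easy to render as code, and doing so makes the phrase "geometrize ℝ²" concrete. The following Python sketch is a toy illustration only: the helper names are ours (not notation from this book), a point is an ordered pair, a line is represented by one coefficient triple (a, b, c) standing in for its equivalence class, and a circle by its data (h, k, r). The arithmetic is deliberately left generic, so the same functions work just as well over ℚ (as in the discussion below) as over ℝ.

from fractions import Fraction

def lines_equivalent(L1, L2):
    """Two triples (a, b, c), each with a or b nonzero, describe the same line
    exactly when one triple is a nonzero multiple of the other."""
    a1, b1, c1 = L1
    a2, b2, c2 = L2
    # Cross-multiplication avoids division, so this works for ints, Fractions, floats, ...
    return a1 * b2 == a2 * b1 and a1 * c2 == a2 * c1 and b1 * c2 == b2 * c1

def lies_on_line(P, L):
    """(p, q) lies on ax + by = c exactly when ap + bq = c."""
    p, q = P
    a, b, c = L
    return a * p + b * q == c

def lies_on_circle(P, C):
    """(p, q) lies on (x - h)^2 + (y - k)^2 = r^2 exactly when the equation holds."""
    p, q = P
    h, k, r = C
    return (p - h)**2 + (q - k)**2 == r**2

# A quick check over the rationals: (3/5, 4/5) lies on the unit circle and on the
# line 3x + 4y = 5, and that line is "the same line" as 6x + 8y = 10.
P = (Fraction(3, 5), Fraction(4, 5))
print(lies_on_circle(P, (0, 0, 1)))              # True
print(lies_on_line(P, (3, 4, 5)))                # True
print(lines_equivalent((3, 4, 5), (6, 8, 10)))   # True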
Armed with these definitions—and more, for "segment", "ray", "angle" and "congruence"—one proceeds to verify that all of the incidence, betweenness, congruence, and other axioms of geometry (whichever axiom system one is using!) are satisfied by the points, lines, and circles of ℝ². This shows that ℝ² is a model of Euclidean geometry, or, to use our language, that ℝ² is a Euclidean plane12. Consequently all of the theorems of Euclidean geometry also apply to the points, lines, and circles of ℝ². That this can all be done is probably not surprising, and it is done in detail in any textbook that deals with Euclidean (and non-Euclidean) geometry in a thorough axiomatic way; Greenberg, for example, provides a comprehensive treatment along these lines.

It is worth pausing at this point to consider what would happen if we tried to replicate all of the above using some other field F, instead of ℝ, as the basis of our theory. Suppose, for example, we copy the above set of definitions, replacing ℝ with ℚ throughout. That is, we define a point to be an ordered pair (p, q) ∈ ℚ², a line to be an equivalence class of equations of the form ax + by = c with a, b, c ∈ ℚ, and so forth. Would Euclidean geometry still "work"?

The answer is: partly. All of the incidence, order, congruence and parallel axioms turn out to be perfectly well satisfied in a "rationals-only" geometry. It's not until we try to verify the continuity axioms that things start to break down. For example, consider the line y = 0 and the circle x² + y² = 2. One can easily verify that the points (0, 0) and (2, 0) both lie on the line, with one point in the interior of the circle and one point in its exterior; in any reasonable axiomatic scheme, one would expect that the line and the circle would necessarily intersect somewhere. But there is no point (p, q) ∈ ℚ² that lies on both the line and the circle (Exercise 4)! Likewise, the two circles x² + y² = 1 and (x − 1)² + y² = 1 have centers located at opposite endpoints of a common radius, but there is no point (x, y) ∈ ℚ² that lies on both circles (Exercise 5). In particular this means that the construction of an equilateral triangle in Euclid's very first Proposition (refer back to Figure 4.1) would fail! The problem is that ℚ—and therefore ℚ²—has too many "holes" in it for us to perform geometric constructions. Lines and circles that seemingly ought to cross instead somehow slip through each other without intersecting. This "porous" quality makes ℚ² fail to be a Euclidean plane.

Very well, what if we try something even more radical, like replacing ℚ with a finite field, like ℤ_p (for some prime p)? In this case things break down even earlier: because ℤ_p is not an ordered field, it becomes impossible to define what it means for one point to be
between two others on a line. Essentially the issue is that ℤ_p doesn't "look like" a number line, but rather like a circle. (Recall that we first introduced the sets ℤ_n in Chapter 1 by making an analogy with how arithmetic is done on a clock.) Without a well-defined notion of betweenness, we can't define what a line segment is, or what it means for two points to be on opposite sides of a line; without segments, the very notion of "congruent segments" becomes moot; without sides, we can't define the "interior" or "exterior" of an angle, or a polygon, or a circle… Nearly everything we need to do Euclidean geometry breaks down.

If ℚ² is too full of "holes", perhaps going to a larger field, like ℂ, would work? But here we run into essentially the same problem as with a finite field: ℂ, like ℤ_p, is not an ordered field (refer back to §1.8 if you don't remember why this is), and this makes it impossible to define betweenness, and all of the other geometric properties that depend on it. For example, the line x − y = 0 would contain, in ℂ², the points (0, 0), (1, 1), and (i, i). Which of these is "between" the other two? The question cannot be answered—it doesn't even make sense in this context.

So ℚ and ℤ_p are too small to use as the backbone of a Euclidean plane, while ℂ is too large. Does that mean that ℝ is really the only choice? Surprisingly, the answer is no. There are subfields of ℝ, larger than ℚ but far smaller than the entirety of the set of real numbers, that are large enough to let us define a Euclidean plane. A significant portion of Greenberg's Euclidean and non-Euclidean Geometries is devoted to exploring the question of just which fields are big enough to let us do Euclidean geometry.

The preceding brief considerations illustrate, in capsule form, the first of the two strategies for trying to forge a connection between 𝔼² and ℝ²: to begin with the algebraic structure of ℝ² (or a smaller structure F² for some suitably chosen subfield F), and to define geometric objects in it. But there is another direction we can go in, and it is this other direction that will occupy our attention for the rest of this chapter: to begin with the geometric structure 𝔼², and try to "build" a number system inside of it. In the next section, we start to do exactly this.
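The two "missing intersection" examples just described (and taken up again in Exercises 4 and 5) can be made vivid with a computer algebra system. The sketch below, using sympy, is an illustration of the claims, not a substitute for the proofs requested in the exercises: it computes the intersection points over ℝ and observes that their coordinates are irrational, so no point of ℚ² lies on them.

from sympy import symbols, Eq, solve, sqrt

x, y = symbols('x y', real=True)

# The line y = 0 meets the circle x^2 + y^2 = 2 only at (+-sqrt(2), 0).
print(solve([Eq(y, 0), Eq(x**2 + y**2, 2)], [x, y]))

# The unit circles centered at (0, 0) and (1, 0) meet only at (1/2, +-sqrt(3)/2).
print(solve([Eq(x**2 + y**2, 1), Eq((x - 1)**2 + y**2, 1)], [x, y]))

# None of these coordinates is a rational number.
print(sqrt(2).is_rational, (sqrt(3) / 2).is_rational)   # False False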
Exercises

1. Examine two or more contemporary Geometry textbooks and create an inventory of their use of compass, straightedge, protractor and ruler axioms. How many axioms in all does each textbook introduce?
2. Show how Ruler and Protractor axioms can be used to "solve" the problem of doubling a cube. ("Solve" is in quotes here because the "solution" does not tell you how to construct a solution, merely that one exists.)
3. Show how Ruler and Protractor axioms can be used to "solve" the problem of squaring a circle. ("Solve" is in quotes here because the "solution" does not tell you how to construct a solution, merely that one exists.)
4. Show that there is no point in ℚ² that lies on both the line y = 0 and the circle x² + y² = 2.
5. Show that there is no point in ℚ² that lies on both the circle x² + y² = 1 and the circle (x − 1)² + y² = 1.
6. Describe the circles x² + y² = k for x, y, k ∈ ℤ₅. ("Describe", here, means at a minimum to identify the solution sets of each of these five equations.)
4.3 Measuring Ratios in the Plane

In this section, we finally begin to construct an algebraic system inside of 𝔼². Our first step is to assign meaning to the ratio of two segments. At first glance, this might seem to be either trivially simple, or absurd: don't we just measure the lengths of two segments, then divide one by the other? And if we don't do that, how can we possibly make sense of the ratio of two lengths without knowing what the lengths are?

Odd as it may seem to the reader familiar with modern Geometry textbooks, Euclid himself never—not even once, in all of the Elements—assigns a numerical value to the length of a segment. Euclid talks about two segments having the same length, and about one segment being longer than another, but he never says something like "segment AB is 5 units long". Remember, the Ruler Axiom is a modern innovation! Euclid's geometry is almost purely non-numerical.

The closest Euclid gets to assigning numerical values to lengths is when he compares two segments to one another. Euclid says that a segment AB measures a second segment CD if copies of AB can be laid end to end a whole number of times to exactly fill segment CD. For example, in Figure 4.3 below, segment AB measures CD because exactly four copies of AB fit inside of CD. In such a situation, Euclid is perfectly comfortable saying that segment CD is four times as long as segment AB; in modern notation we would write either CD = 4·AB or CD/AB = 4. Note carefully that we do not have to actually know how long either segment is in order to say this.

Things get more interesting when we consider two segments that have a common measure—that is, when there is a single ("short") segment that measures (i.e. fits exactly into a whole number of times) two different ("longer") segments. In this case, Euclid says that the two longer segments are commensurable (which just means "measurable together"). For example, in a 30°–60°–90° triangle, the hypotenuse and short leg are commensurable; if you take the short leg and bisect it, the resulting short segment fits into the short leg twice and the hypotenuse four times (see Figure 4.4 below). More generally, if two segments AB and CD are commensurable, with a common measure that fits m times into AB and n times into CD, Euclid says that the ratio of AB to CD is m to n, which we will write (using slightly more modern notation) as m : n. In Figure 4.5, the sides of the rectangle are in the ratio 5:3. Notice that in order to say this we don't need to know how long the sides individually are; we just need to know that there is some segment (the common measure) that fits into one side 5 times, and into the other side 3 times.
Figure 4.3 Segment AB fits into segment CD four times.
Figure 4.4 The hypotenuse and short leg of a 30°–60°–90° triangle are commensurable.
Figure 4.5 A rectangle whose sides are in 5:3 ratio.
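Although Euclid's common measures are segments, not numbers, the bookkeeping behind them is ordinary arithmetic, and a tiny computation can make the idea concrete. In the sketch below we cheat, purely for illustration, and pretend that the two sides of the rectangle in Figure 4.5 happen to contain 15 and 9 copies of some very small segment (the numbers are ours, chosen only so the arithmetic is visible). The greatest common measure then falls out of the Euclidean algorithm, and refining the measure changes the names of the ratio but not the rational number it determines.

from fractions import Fraction
from math import gcd

side_a, side_b = 15, 9          # sides counted in copies of a tiny unit (illustrative numbers)

measure = gcd(side_a, side_b)   # the greatest common measure: a segment of 3 tiny units
print(side_a // measure, ':', side_b // measure)          # 5 : 3

# Subdividing that common measure into 4 equal parts gives a finer measure...
print(4 * side_a // measure, ':', 4 * side_b // measure)  # 20 : 12

# ...but both ratios name the same rational number (cross-multiplication: 20*3 == 12*5).
print(Fraction(20, 12) == Fraction(5, 3), 20 * 3 == 12 * 5)   # True True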
It is not too much of a stretch for us to take Euclid’s idea a bit further, and say that when two segments AB, CD are commensurable, with ratio m : n, we can assign to the pair ( AB,CD ) the rational number m / n —although to be clear, Euclid himself never quite does this. The difference between Euclid’s “ratios” and our “rational numbers” is a subtle one; perhaps the
best way to think about it is that for Euclid, a ratio was a relationship between two numbers, whereas we are accustomed to thinking of a rational number as a single number.

There are some important questions that we have glossed over so far. Perhaps the most glaring omission has to do with how we handle the fact that if two segments are commensurable, there are many different ways of finding a common measure for them, each of which leads to a different ratio. For example, in Figure 4.5, if we subdivide each of the marked segments along the sides of the rectangle into (let's say) 4 smaller, equal segments, we would get a new, smaller measure that fits into one side 20 times, and into the other side 12 times. The ratio of sides would then be 20:12, not 5:3. Fortunately, Euclid works out a complete theory of what it means for two ratios to be in proportion. Specifically, two ratios m : n and p : q are in proportion if mq = np (where the products indicate ordinary whole-number multiplication). In our case, 20:12 and 5:3 are easily seen to be in proportion, which means (for Euclid) that they may be treated as equivalent. With our extension of Euclid's idea, we can see that the rational numbers 20/12 and 5/3 are the same rational number. So it really doesn't matter which common measure we use; the assignment of a rational number to the pair (AB, CD) (where AB and CD are commensurable) is "measure-independent".

But what about when two segments are not commensurable? The ancient Greek geometers were well aware that one may have two segments that simply cannot be measured by a common segment; in such a case, we say that the segments are incommensurable. The lengths of two incommensurable segments simply cannot be compared using a ratio of whole numbers. For example, Figure 4.6 shows a square with side AB and diagonal AC. It has been known since the days of the ancient Pythagoreans that no segment exists (even theoretically) that can possibly fit into both AB and AC a whole number of times; equivalently, there is no pair of whole numbers m, n for which we can say that the ratio of AC to AB is m : n (Exercise 7). In modern language, we would say that this is because AC/AB is an irrational number, but even in writing down the expression AC/AB we are (implicitly) embracing the idea that both AC and AB are numbers that can be divided, and we are trying to avoid that whole idea! So what to do about the incommensurable case?
Figure 4.6 The side and diagonal of a square are incommensurable.
Figure 4.7 AC is cut in extreme and mean ratio if AC/BC = BC/AB.
Figure 4.8 AD and CE cut each other in extreme and mean ratio; that is, AD/AP = AP/PD and CE/CP = CP/PE.
The side and diagonal of a square are hardly the only incommensurable segments in Euclid's geometry. Book VI, Proposition 30 shows how to cut any segment into "extreme and mean ratio"—by which Euclid means that the ratio of the whole to the larger part is the same as the ratio of the larger part to the smaller (see Figure 4.7). In this case one can prove that the common ratio is equal to φ = (1 + √5)/2, the irrational number known as the golden ratio (Exercise 8). Much later (Book XIII, Proposition 8) Euclid shows that two consecutive diagonals of a regular pentagon cut each other in extreme and mean ratio (see Figure 4.8).

Quite apart from the mathematical problem of "how do you prove this?", a proof such as this raises a definitional problem: what does it even mean to say that two ratios are "the same" when the ratios themselves involve incommensurable quantities that cannot simultaneously be represented by whole numbers? What even is a "ratio", when dealing with irrational quantities? Euclid is not oblivious to this question; on the contrary, the Elements contains two entire Books—the fifth13 and the tenth14—devoted to dealing with ratios, proportions, and incommensurable magnitudes. The fifth, sixth and seventh definitions in Book V are:
5. Magnitudes are said to be in the same ratio, the first to the second, and the third to the fourth, when equal multiples of the first and the third either both exceed, are both equal to, or are both less than, equal multiples of the second and the fourth, respectively, being taken in corresponding order, according to any kind of multiplication whatever.

6. And let magnitudes having the same ratio be called proportional.

7. And when for equal multiples (as in Def. 5), the multiple of the first (magnitude) exceeds the multiple of the second, and the multiple of the third (magnitude) does not exceed the multiple of the fourth, then the first (magnitude) is said to have a greater ratio to the second than the third (magnitude has) to the fourth.

These definitions can be quite difficult for modern readers to parse, so let's take them slowly and try to express them in something like modern notation. Definition 5 says that a set of four magnitudes—i.e., lengths of segments—are in the same ratio when a certain condition is met; definition 6 says that in this situation, the segments are called "proportional". Let's call our four segments AB, CD, EF, and GH. The question Euclid is trying to answer is: when does it make sense to say that the ratio AB : CD is the same as the ratio EF : GH? Remember, we are not going to actually measure the four segments directly, just compare them to each other in pairs! Euclid's answer is that we will say AB : CD = EF : GH precisely if "when equal multiples of the first (AB) and third (EF) either both exceed, are both equal to, or are both less than, equal multiples of the second (CD) and the fourth (GH), respectively." This means that if we multiply AB and EF both by some common whole number m, and we multiply CD and GH both by some (possibly different) common whole number n, then we always have the following conditions met:

(a) m·AB > n·CD whenever m·EF > n·GH, and vice versa;
(b) m·AB = n·CD whenever m·EF = n·GH, and vice versa;
(c) m·AB < n·CD whenever m·EF < n·GH, and vice versa.

Notice that condition (b) can hold if and only if AB and CD are commensurable, and EF and GH are also commensurable, with the ratio of AB to CD and the ratio of EF to GH both given by n : m. This is precisely what we would expect to say for commensurable segments: that one pair of commensurable segments is proportional to another pair of commensurable segments precisely if both pairs can be described with the same ratio of whole numbers. It's conditions (a) and (c) that are most interesting, as they apply even to the case of a pair of incommensurable segments, like the diagonal and side of a square. Taken together, conditions (a) and (c) can be paraphrased as follows:

For two incommensurable segments, AB : CD = EF : GH if, for any pair of whole numbers m, n, (n/m)·CD < AB ⇔ (n/m)·GH < EF,

where we have introduced the (definitely non-Euclidean) notation (n/m)·CD to indicate the length of a segment produced by taking n copies of CD and then subdividing it into m equal parts. (We show in the next section how this construction can be performed using only compass and straightedge.) If we further allow ourselves to use modern notation (and concepts) that Euclid would not have understood, this in turn can be paraphrased in what, for us, will be its final form:
For two incommensurable segments, AB : CD = EF : GH if, for any pair of whole numbers m, n, n/m < AB/CD ⇔ n/m < EF/GH,
where we have used the notation AB/CD to refer to the ratio, in the modern sense, of the lengths of the two indicated segments. (Remember, Euclid did not measure individual segments, and we are not yet going to do so either, so this really is abusing notation.) Written in this form, Euclid's (or Eudoxus's) definition of "proportion" begins to look remarkably familiar. Do you recognize it? What Euclid is doing here is almost exactly the same thing that we did in Chapter 1, when we (following the lead of the late 19th century mathematician Richard Dedekind) established a one-to-one correspondence between a real number r and the set of rational numbers S_r = {q ∈ ℚ | q < r}. We used this correspondence to show that any two complete, ordered fields are isomorphic—the "real number characterization theorem" that was our focus at the start of this book15. Euclid's theory of proportions is remarkably similar: he asserts that two ratios of lengths AB : CD and EF : GH are "equivalent" if and only if the two sets of rational numbers S_{AB:CD} = {n/m ∈ ℚ | n·CD < m·AB} and S_{EF:GH} = {n/m ∈ ℚ | n·GH < m·EF} are the same set. In the language we used in Chapter 1, S_{AB:CD} is a "downward closed, open, bounded, rational subset" of ℝ, or a Dedekind cut. Euclid is identifying a ratio of incommensurable lengths with a Dedekind cut—that is, with a real number, in the modern sense of the word!

The seventh definition of Book V translates what it means for one ratio to be larger than another, also by making use of the DCOBRS associated with each pair of magnitudes. Thus, Euclid's theory of proportions includes all of the mathematical "infrastructure" needed to assign a real number to each pair of segments AB, CD, and to compare one such real number to another. For this reason, we feel justified in adopting the following "ratio principle", as being both in the spirit of Euclid's own geometry and consistent with our own approach to real numbers in Chapter 1:
Ratio Principle. To any pair of segments, AB and CD, we may associate a real number, written AB/CD, called the ratio of the two segments. If AB and CD are commensurable, then their ratio is a rational number; if they are incommensurable, then their ratio is an irrational number.

Having spent some time justifying this Ratio Principle, it is important to step back and explicitly acknowledge what it does not say:
• The Ratio Principle does not say that we can assign a numerical value to a single segment. It's not the same thing as the Ruler Postulate of 20th-century geometry texts! All the Ratio Principle does is let us assign a numerical value to a pair of segments.
• The Ratio Principle also does not assert the existence of a one-to-one correspondence between ratios and real numbers. It says that we can associate a real number to any pair of segments, but it does not say that every real number gets a pair of segments associated to it! In modern notation, we can say that the Ratio Principle establishes a mapping (AB, CD) ↦ r ∈ ℝ, but it does not say that this mapping is surjective. There may be real numbers that never arise as ratios of segments!
This last point is, for our purposes, vitally important. The project of this chapter is to see how much of the real number system we can construct inside a Euclidean plane using only compass-and-straightedge constructions. The Ratio Principle provides a kind of “Rosetta stone”—a way of translating geometric objects into numerical ones—but that translation
key only works in one direction. We have already observed (see footnote 7) that, given a segment AB, it is not possible (using only compass and straightedge) to construct a second segment CD with the property that CD/AB = ∛2. One way of interpreting a "no-go" theorem like this is that the theory of Euclidean planes splits into two different cases:
•
If we model a Euclidean plane on 2 , as discussed in §4.2, then of course there exist CD 3 segments AB and CD with = 2 . One simply takes A = C = ( 0, 0 ), B = (1, 0 ), and AB D = 3 2 , 0 . Then AB = 1, CD = 3 2 , and everything works out the way we want it to. On the other hand if we take a purely geometric notion of 2 and only allow the existence of points and lengths that we can prove exist using compass-and-straightedge constructions, then no such pairs of segments exist.
(
)
We see now, at last, why we have insisted on referring to “a Euclidean plane”, rather than “the Euclidean plane”. It turns out there are many different Euclidean planes, each corresponding (in some sense) to the different types of numbers that one “allows” to be constructed inside of it. So our guiding question for the chapter can now be restated: Guiding Question: What ratios can we prove exist, using only compass, straightedge, and the properties of Euclidean geometry, inside any Euclidean plane? Alternatively, what numbers are constructible? In the next section, we finally get some definitive answers to this question.
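Before turning to the exercises, here is a small computational illustration of the Dedekind-cut reading of Euclid's Definition 5 from the previous discussion. To compute anything we have to cheat and give the segments numerical lengths (which, as emphasized above, Euclid never does), but the point survives: restricted to fractions with small numerator and denominator, the cut determined by the pair (diagonal of a unit square, side) coincides with the cut determined by the pair (2, diagonal), because both ratios are √2, while the cut determined by the pair (3, 2) differs. The bound N on numerators and denominators, and the helper name euclid_cut, are choices of ours made only for this sketch.

from fractions import Fraction
from math import sqrt

def euclid_cut(AB, CD, N=12):
    """All fractions n/m (1 <= n, m <= N) with n*CD < m*AB -- Euclid's Definition 5 test."""
    return {Fraction(n, m) for n in range(1, N + 1)
                           for m in range(1, N + 1) if n * CD < m * AB}

side, diagonal = 1.0, sqrt(2.0)

cut1 = euclid_cut(diagonal, side)   # ratio diagonal : side  (equal to sqrt(2))
cut2 = euclid_cut(2.0, diagonal)    # ratio 2 : diagonal     (also equal to sqrt(2))
cut3 = euclid_cut(3.0, 2.0)         # ratio 3 : 2

print(cut1 == cut2)   # True: the two pairs of segments are "in the same ratio"
print(cut1 == cut3)   # False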
Exercises

7. Prove that the side and diagonal of a square are incommensurable, avoiding carefully any language that would not have made sense to Euclid.
8. Prove that when a segment is cut into extreme and mean ratio, the common ratio is φ = (1 + √5)/2.
9. Read Elements Book XIII, Proposition 8 (that two consecutive diagonals of a regular pentagon cut each other in extreme and mean ratio) and translate the proof into modern language and notation. (As part of this, you may need to refer back to earlier propositions as well.)
4.4 From Geometry to Algebra: Coordinatizing Lines and the Plane

From this point on we assume that we have chosen some suitably mature axiomatic scheme to characterize a Euclidean plane. Without getting bogged down in the details of exactly which axioms we will use, we will simply assume that we have enough available for all of the following:
• (Construct parallels) Given a line ℓ and a point P not on ℓ, we may construct a second line m through P and parallel to ℓ, and this line m is unique.
• (Construct perpendiculars) Given a line ℓ and a point P (whether or not on ℓ), we may construct a second line m through P and perpendicular to ℓ, and this line m is unique.
• (Corresponding angles) If two parallel lines ℓ and m are cut by a transversal t, then the four angles formed by the intersection of ℓ and t are each congruent to the "corresponding angle"16 formed by the intersection of m and t.
• (AA Similarity) If two triangles ∆ABC and ∆PQR have the property that ∠A ≅ ∠P and ∠B ≅ ∠Q, then also ∠C ≅ ∠R, and AB/PQ = AC/PR = BC/QR.
• (SAS Similarity) If two triangles ∆ABC and ∆PQR have the properties AB/PQ = BC/QR and ∠B ≅ ∠Q, then also ∠A ≅ ∠P.
• (Pythagorean Theorem) If ∆ABC is a right triangle, with hypotenuse AC, then (AB/AC)² + (BC/AC)² = 1.
The last three statements all rely, in one way or another, on the Ratio Principle of the previous section. Notice that our version of the Pythagorean Theorem is different from the version you probably were expecting, in that it avoids referring directly to the lengths of the sides of the right triangle, and instead refers to the ratios of the sides. Remember, we don't (yet) have a way to measure lengths directly! But we will begin to rectify that shortcoming in our language now. Suppose we choose any line ℓ, pick any two points on that line, and call one of the points O and the second point U. Now, for any other point P on the same line, we make the following definition:

Definition. Let ℓ be any line, and choose any two points on the line; call one of the points O and the other point U. Now, for any other point P ∈ ℓ, we define the number d_OU(P) = OP/OU. We call d_OU(P) the distance from O to P relative to the scale OU; the segment OU is called a unit of measure or a unit segment for this distance function.

This definition establishes a way to talk about how far away points on a line are from one another, relative to an (arbitrarily chosen) unit of measure. (Notice that we don't have to use the same unit of measure for different lines!) Now the questions we raised earlier in this chapter can be phrased as follows:

Guiding Question: For what real numbers r ∈ ℝ does there exist a point P ∈ ℓ such that d_OU(P) = r?

For example, we can prove that there exists a point P ∈ ℓ with d_OU(P) = √2, via the construction shown in Figure 4.9. The figure is constructed using only compass and straightedge, so that UQ ≅ OU, OQ ≅ OP, and ∠U is a right angle. By our version of the Pythagorean Theorem, (UQ/OQ)² + (OU/OQ)² = 1. But UQ and OU are congruent by construction, so the two indicated ratios are equal, and therefore (OU/OQ)² = 1/2. But OQ and OP also have the same length, so (OU/OP)² = 1/2. This implies that OP/OU = √2, and therefore that d_OU(P) = √2.
Figure 4.9 Constructing a point whose distance from the origin is √2 times a unit segment.
Figure 4.10 Constructing a point whose distance from the origin is √3 times a unit segment.
Once this construction has been demonstrated, it is a natural follow-up to show that for every natural number n, we can construct a point P_n with d_OU(P_n) = √n. The following diagram shows how to use the point P = P₂ to construct P₃. More generally, once P_n has been constructed, the same essential idea can be used to construct P_{n+1} (Exercise 11). So the square roots of all natural numbers are "constructible numbers". On the other hand, the fact that the doubling of a cube cannot be accomplished using only compass and straightedge means that ∛2 is not a constructible number. What other numbers can we construct?
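The iteration described here (each new right triangle has legs √n and 1, so its hypotenuse is √(n+1); this is the familiar "spiral of Theodorus" picture) is easy to check numerically. The following sketch is only an arithmetic check of the lengths involved, not a replacement for the compass-and-straightedge argument asked for in Exercise 11.

import math

hyp = 1.0                      # d_OU(P_1) = 1: the unit segment itself
for n in range(1, 8):
    # Right triangle with legs hyp (= sqrt(n)) and 1: the new hypotenuse is sqrt(n + 1).
    hyp = math.hypot(hyp, 1.0)
    print(n + 1, hyp, math.sqrt(n + 1))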
Figure 4.11 Constructing a point whose distance from the origin is 1/5 times a unit segment.
Given any natural number n, we can construct a point P with d_OU(P) = 1/n. Figure 4.11 shows this construction with n = 5. We first construct a line perpendicular to OU through O. We then mark off five equally spaced points, Q₁ through Q₅, along this perpendicular; the spacing between O and Q₁ is arbitrary, but we then use a compass to replicate that distance between the other points. Then we join Q₅ to U. Next, we construct a line parallel to Q₅U through Q₁; where this line intersects OU, we mark P. The resulting point P has d_OU(P) = 1/5 (Exercise 12). (A quick coordinate check of this construction is sketched after the list below.)

At this point we know that the set of constructible distances contains, at the very least:
• All natural numbers n;
• The reciprocals of all natural numbers n;
• The square roots of all natural numbers n.
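As promised above, here is a quick coordinate check of the construction in Figure 4.11. It is carried out in the model plane ℝ², so it verifies the arithmetic but is not a substitute for a synthetic proof (that is Exercise 12). With O = (0, 0) and U = (1, 0), the points Q₁, …, Q₅ sit on the perpendicular through O with an arbitrary spacing t (the symbol t is introduced only for this sketch), and the parallel through Q₁ to the line Q₅U is intersected with the line OU.

from sympy import symbols, solve

x, t = symbols('x t', positive=True)

# Q1 = (0, t) and Q5 = (0, 5t); the line through Q5 and U = (1, 0) has slope -5t.
# The parallel line through Q1 is y = t - 5*t*x; intersect it with the line OU (y = 0).
print(solve(t - 5 * t * x, x))   # [1/5], independent of the arbitrary spacing t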
It’s also clear that the set of constructible distances is closed under addition; one simply places two segments adjacent to one another to construct a new segment whose length is the sum of the original two. What else can we do? The following lemma is useful for converting measurements in different units:

Lemma. (Change of Scale¹⁷) Let O, A, B and C be any four points. Then dOA(C) = dOB(C)/dOB(A).

Proof (sketch only). The claim is equivalent to dOA(C) · dOB(A) = dOB(C), i.e. (OC/OA) · (OA/OB) = OC/OB. This may seem to be trivially true, and indeed it would be if we regarded OA, OB and OC as numbers, and the corresponding ratios as ratios of numbers; remember, though, that we are not assigning absolute meaning to “segment lengths”, only relative meaning. However, if we interpret each of these ratios OC/OA as a Dedekind cut (i.e., each one is a set of rational numbers satisfying a specific inequality), then the indicated product is a shorthand notation for multiplying together the rational numbers in each set, and in this interpretation the result can be proven rigorously, although we omit the details.

The following two properties of Euclidean geometry are particularly useful for what comes next:

Figure 4.12 Two intersecting chords in a circle.

Lemma. (Intersecting chords of a circle) Let A, B, C, and D be four points on a circle, with AC and BD intersecting at a point P in the interior of the circle, as shown in Figure 4.12. Then PC/PB = PD/PA (equivalently, PA · PC = PB · PD).

Proof. Omitted.

Lemma. (Circle through three points) Given any three non-collinear points in the plane, we may construct a (unique) circle passing through all three points.

Proof. Omitted.

Both of the preceding lemmas are standard parts of the high school curriculum and their proofs can be found in virtually any high school geometry textbook. It turns out we’re going to get a lot of mileage out of these two basic tools. Our first major application of them is the following:

Theorem. (Products are constructible) Let r and s be two constructible distances; then rs is also constructible.

Proof. Choose a line ℓ and a unit of measure OU, and let R and S be two points on ℓ such that dOU(R) = r and dOU(S) = s. Construct the diagram shown, in stages, in Figure 4.13: first, place two segments of length r and s (relative to OU, of course) adjacent to each other in a line. At the point where those two segments meet, erect a perpendicular segment of length 1
Figure 4.13 Constructing a segment whose length is equal to the product of two given lengths.
(i.e., congruent to OU) (Figure 4.13a). Then construct a circle passing through the three endpoints of the segments (Figure 4.13b). Next, extend the segment of unit length until it passes through the opposite side of the circle in a point Q (Figure 4.13c). Finally, use a compass to locate a point P on the same line as R and S with OP = OQ (Figure 4.13d). The claim is that dOU(P) = rs.

To verify this, first note that by the Two Chords lemma, OS/OU = OQ/OR. Next, by construction, OP ≅ OQ, so OP/OR = OQ/OR. Combining these two equalities, we have OS/OU = OP/OR, or dOU(S) = dOR(P). By the Change of Scale lemma, we have dOR(P) = dOU(P)/dOU(R), i.e. s = dOU(P)/r, from which the claim follows.

Theorem. (Inverses are constructible) Let r be a constructible (nonzero) distance; then 1/r is also constructible.
Proof. Exercise 13.

At this point we have shown that the set of constructible distances includes all positive rational numbers, and is closed under addition, multiplication, and multiplicative inverses. You may be starting to suspect that we are constructing a field, and you would be right! There is only one problem: constructible distances are always positive, and in order to have a field, we need to have negative numbers as well. This is not a serious problem:

Definition. Let ℓ be any line and choose a unit segment OU on ℓ. Then for any point P ∈ ℓ, we define the coordinate of P relative to OU, denoted¹⁸ χOU(P), by

χOU(P) = dOU(P) if P and U are on the same side of O;
χOU(P) = −dOU(P) if P and U are on opposite sides of O.

A real number r ∈ ℝ is called a constructible number if there exists some P ∈ ℓ such that χOU(P) = r. The set of all constructible numbers is denoted K.

The results of the previous two theorems, together with our preceding observations, can be summarized with the following:

Corollary. The set of constructible numbers, K, is a proper subfield of the real numbers that contains ℚ as a proper subfield. That is, ℚ ⊊ K ⊊ ℝ.
Proof. To show that K is a field, all we need to do is observe that K contains additive inverses and multiplicative inverses, and is closed under addition and multiplication, all of which we have essentially done. To show that K is a proper subfield of ℝ we simply recall that ∛2 is not constructible. Finally, to show that ℚ is a proper subfield of K, we recall that √2 is constructible.
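Readers who like to experiment may also enjoy verifying the intersecting-chords construction behind the Products theorem numerically. The sketch below again models points with Cartesian coordinates (an assumption made only for this illustration), and the particular values of r and s are arbitrary.

```python
import math

def circumcenter(A, B, C):
    # Standard formula for the center of the circle through three
    # non-collinear points (the "circle through three points" lemma).
    ax, ay = A; bx, by = B; cx, cy = C
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax*ax + ay*ay) * (by - cy) + (bx*bx + by*by) * (cy - ay)
          + (cx*cx + cy*cy) * (ay - by)) / d
    uy = ((ax*ax + ay*ay) * (cx - bx) + (bx*bx + by*by) * (ax - cx)
          + (cx*cx + cy*cy) * (bx - ax)) / d
    return ux, uy

r, s = 1.7, 2.3                              # two constructible distances to multiply
R, S, U = (-r, 0.0), (s, 0.0), (0.0, 1.0)    # laid out as in Figure 4.13, with O at the origin

cx, cy = circumcenter(R, S, U)
radius = math.hypot(R[0] - cx, R[1] - cy)

# The unit segment OU lies on the y-axis; find the circle's *other*
# intersection with that axis (the first one is U itself).
dy = math.sqrt(radius**2 - cx**2)
Qy = cy - dy if abs((cy + dy) - 1.0) < 1e-9 else cy + dy

print(abs(Qy), r * s)                        # both print approximately 3.91
```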
What else is inside this mysterious field K? The following theorem is an extension of the fact (already shown) that √n ∈ K for all natural numbers n.

Theorem. Let r ∈ K be any positive constructible number. Then √r is also a constructible number.
Proof. Exercise 14.

An ordered field that contains square roots for all positive elements is called a Euclidean field. We have just shown that the field of constructible numbers is a Euclidean field; in fact, this property more or less completely characterizes K, as the next theorem asserts.

Theorem. K is the smallest Euclidean subfield of ℝ. Equivalently, any Euclidean subfield of ℝ will contain K as a subfield.
Proof (sketch). The proof of this theorem is, unfortunately, far beyond the scope of this book. The key idea is that any point in ℝ² that can be “reached” by a sequence of compass-and-straightedge constructions can be interpreted as a complex number that satisfies a polynomial equation of degree 2ᵏ for some positive integer k. Why? Fundamentally, it’s because the equation of a circle is quadratic, and the equation of a line is linear, so any construction involving intersecting lines and circles can be translated into a system of quadratic and linear equations, each of which contains coefficients that are themselves solutions of quadratic and linear equations. Finally, one shows that all such equations can be solved by taking square roots (possibly many times), so that any Euclidean subfield of ℝ must contain all such constructible points. For a more detailed version of this argument, refer to any textbook on abstract algebra that deals with the theory of field extensions; for example, Michael Artin’s Algebra (2nd ed.), pp. 450–455.

It is worth stepping back here and considering what we have learned about the set of real numbers. All the way back in Chapter 1, we observed that it is commonplace for high school Algebra textbooks to include a diagram something like the one below (Figure 4.14), showing that the set ℝ includes both rational and irrational numbers. Such a diagram establishes a taxonomy of different types of number. We now know that this taxonomy drastically oversimplifies the situation, because not all irrational numbers are created equal. Indeed we might say that some irrational numbers are “simpler” than others. To be specific, we can now name several subfields of ℝ:

Example 1. The set of algebraic real numbers (sometimes, but not always, denoted ℚ̄ ∩ ℝ) consists of every real number that is a root of a polynomial with rational (or integer) coefficients. Slightly more generally, the field of algebraic numbers (without the additional modifier “real”) is the subfield of ℂ consisting of all of the roots of such polynomials, whether real or complex; this field can also be described as the algebraic closure of ℚ, and is usually denoted¹⁹ ℚ̄. It is quite a remarkable fact that not all irrational numbers are algebraic! Such real numbers are called “transcendental”, and their existence was not confirmed until the middle of the 19th century, when Joseph Liouville gave the first explicit construction of a transcendental number. Shortly after this, Charles Hermite proved (in 1873) that the familiar
Figure 4.14 The taxonomy of numbers, as typically shown in high school textbooks.
irrational constant e (the base of the natural logarithm) is transcendental; less than a decade later, Ferdinand von Lindemann proved (in 1882) that π is transcendental as well. Perhaps counterintuitively, it turns out that transcendental numbers are not rare exceptions, but rather comprise (in a technical sense) “most” of the real numbers (see Exercises 15 and 16). The set of transcendental real numbers may be denoted ℝ \ ℚ̄, where the “backslash” symbol denotes the “set difference” operation.

Example 2. The set of expressible real numbers. Just because a real number is the solution to a polynomial equation does not mean that we can express it simply. As was mentioned in Chapter 3 (see footnotes 12 and 13), while there are explicit formulas for the solutions to 2nd-, 3rd-, and 4th-degree equations, some equations of degree ≥ 5 are not “solvable by radicals”: i.e. there is no quintic analog of the quadratic, cubic and quartic formulas that allows us to write the solutions to a 5th-degree equation using only a combination of addition, multiplication, division and taking roots (of any order). The best one can do with such a number is to describe it indirectly; for example, we may speak of “the unique real solution of x⁵ − x − 1 = 0”, which unambiguously characterizes a real number whose decimal representation is approximately 1.167; however, an explicit formula for this number does not exist. Thus, we may distinguish between expressible²⁰ algebraic real numbers and inexpressible algebraic real numbers. Likewise one may discuss expressible and inexpressible algebraic complex numbers.

Example 3. The set of constructible numbers. As we have discussed at length, some real numbers can be constructed using only a compass and straightedge, while others (most) cannot. The set of constructible reals K is the smallest subfield of the reals that is closed under the operation of taking square roots of positive elements; consequently every constructible real number is
Figure 4.15 An elaborated taxonomy of real and complex numbers.
expressible, and every expressible real number is algebraic. However, some numbers (like ∛2) are expressible, but not constructible. Thus the standard diagram above can be extended to show different types of irrational numbers (Figure 4.15).

At this point, we have a choice to make. Do we want to adopt an additional axiom asserting that every real number corresponds to a point on a line, or don’t we? This is really more of a philosophical and aesthetic question than a mathematical one. We don’t have to include such an axiom—it is entirely possible to do all of Geometry with only the constructible reals—but we are free to do so if we want to. There are many equivalent ways of incorporating such an axiom into our system. We have already discussed the “Ruler Axiom”, which (in the SMSG version) explicitly states that placing coordinates on a line establishes a one-to-one correspondence between the points of the line and the set ℝ. Another option is to include Hilbert’s “Completeness Axiom”, which asserts:

To a system of points, straight lines, and planes, it is impossible to add other elements in such a manner that the system thus generalized shall form a new geometry obeying all of the five groups of axioms. In other words, the elements of geometry form a system which is not susceptible of extension, if we regard the five groups of axioms as valid.

By “a system which is not susceptible of extension”, Hilbert is essentially saying “The Euclidean plane 𝔼² contains everything it could conceivably contain; there is no way to make it any larger by including more points.”²¹ Such an axiom functions as a “backdoor” to
the set of real numbers; if we were working in a system that did not include all real numbers, it could be enlarged to one that did, which would violate the Completeness Axiom, and therefore the Completeness Axiom implies the SMSG Ruler Axiom. As we have previously stated, throughout most of this chapter we will not assume the Ruler Axiom, or the Completeness Axiom, or indeed any equivalent version. So in our geometry, we will not assume that all real numbers “exist” as points on any line. On the other hand, while we are not asserting that the points of a line are in one-to-one correspondence with ℝ, we are also not asserting that they aren’t. Rather, the position we adopt will be a neutral one: we know (because we have proven it) that the points on any line can be put in one-to-one correspondence with a subfield of ℝ, and that this subfield includes at the very least the field of constructible numbers K, but that it also could include more numbers than that—it can include any subfield of ℝ that contains K, up to and including the entire field ℝ.
Exercises

10. Write a careful definition of what “corresponding angles” are, assuming that you have already established what it means for two points to be on the “same side” of a line.
11. Explain in detail how the point Pn with dOU(Pn) = √n can be used to construct Pn+1 with dOU(Pn+1) = √(n + 1).
12. Complete the proof that the point P constructed in Figure 4.11 has dOU(P) = 1/5.
13. Prove that if r is a constructible distance, then 1/r is also constructible. (Hint: mimic the proof that products are constructible, with the three segments replaced by just two, arranged in a suitable configuration.)
14. Prove that if r is a constructible distance, then √r is also constructible. (Hint: mimic the proof that products are constructible, with the three segments replaced by just two, arranged in a suitable configuration.)
15. Prove that the set of algebraic real numbers is countably infinite; that is, there exists a one-to-one correspondence between the set of algebraic real numbers and the set of integers ℤ.
16. Prove that the set of transcendental real numbers is uncountably infinite; that is, there does not exist a one-to-one correspondence between the set of transcendental real numbers and ℤ. (Hint: what do you know about the cardinality of ℝ?)
4.5 Coordinate Systems, Lines and 1st-Degree Equations

Up to this point, we have put coordinates on one line at a time, essentially showing that every line can be regarded as a “number line”. Now we are ready to extend this method to the entire plane. After all of our previous hard work, this will be refreshingly simple. We proceed in steps:

(a) First, choose any line h in 𝔼².
(b) Next, choose any two points O, U ∈ h, and use them to define a coordinate function χOU on h. Now every point on h has been assigned a real number.
(c) Through O, choose a (different) line v in 𝔼².
(d) Choose a point V ∈ v and use O, V to define a coordinate function χOV on v. Now every point on v has been assigned a real number as well.
Let’s pause and translate what has been done into familiar language. The line h will be called a horizontal axis, and the line v will be called a vertical axis. These two axes intersect at O, the origin of our coordinate system; the choice of U and V establishes scales (possibly different) along the two coordinate axes. Note that the words “horizontal” and “vertical” are being used here only for mnemonic reasons. A Euclidean plane, remember, is completely bare—an infinite, blank canvas with no intrinsic center, scale or orientation. We create these things by choosing the lines h and v and the unit measures OU and OV. Also notice that nothing in our description so far requires that h and v are perpendicular to each other, or that OU ≅ OV. We may want to make our choices so that they are, but we don’t have to. On the other hand, it is worth having terminology available for describing when this is the case.

Definition. A coordinate system in which h ⊥ v is called an orthogonal system; a coordinate system in which OU ≅ OV is called an isometric system; and a system that is both orthogonal and isometric is called an orthonormal coordinate system.

We resume our construction now:

(e) For every point P ∈ h, with coordinate χOU(P) = α, we define vα to be the unique line through P and parallel to v. The line vα is called the vertical gridline with coordinate α.
(f) Similarly, for every point Q ∈ v, with coordinate χOV(Q) = β, we define hβ to be the unique line through Q and parallel to h. The line hβ is called the horizontal gridline with coordinate β.

Now we prove the following:

Theorem (Uniqueness of Coordinates). (a) If α ≠ β ∈ ℝ are both coordinates, then hα ∩ hβ = ∅ and vα ∩ vβ = ∅ (equivalently, hα ∥ hβ and vα ∥ vβ); (b) For any point N in the plane, there exists exactly one ordered pair (α, β) ∈ ℝ² such that N ∈ vα and N ∈ hβ.

Proof. Part (1) is left for Exercise 17. For part (2), choose any N ∈ 𝔼². There is a unique line through N parallel to h, and this line intersects v at some point P. Then by definition N ∈ hβ, where χOV(P) = β. Similarly there is a unique line through N parallel to v, and this line intersects h at some point Q; then N ∈ vα, where χOU(Q) = α. This shows the existence of the desired ordered pair (α, β). Its uniqueness follows from Part (1).

The Uniqueness of Coordinates Theorem establishes a coordinatization of the entire plane 𝔼², allowing us to identify it with a subset of ℝ². Armed with this theorem, we may define our coordinate function on the plane:

Definition. Suppose a coordinate system has been constructed as above. Then for any point N ∈ 𝔼², we define the coordinates of N to be the ordered pair (α, β) described above, and we write
χ(N) = (α, β)

where we introduce the notation χ for the coordinate function 𝔼² → ℝ².
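If we are willing, purely for illustration, to model the Euclidean plane by ℝ², then a coordinate system and its coordinate function χΣ can be sketched in a few lines of code. In this model a system Σ is represented only by the three points O, U, V (the axes h and v are determined by them), and the function name chi is an ad hoc choice, not the book’s notation.

```python
def chi(Sigma, N):
    """Coordinates of the point N relative to the coordinate system
    Sigma = (O, U, V), where O is the origin and OU, OV fix the two scales.
    (We model the Euclidean plane by R^2 purely for illustration.)"""
    O, U, V = Sigma
    ux, uy = U[0] - O[0], U[1] - O[1]      # direction and scale of the horizontal axis
    vx, vy = V[0] - O[0], V[1] - O[1]      # direction and scale of the vertical axis
    nx, ny = N[0] - O[0], N[1] - O[1]
    det = ux * vy - uy * vx                # nonzero because the two axes are distinct lines through O
    alpha = (nx * vy - ny * vx) / det      # solve N - O = alpha*(U-O) + beta*(V-O)
    beta  = (ux * ny - uy * nx) / det
    return alpha, beta

# An orthonormal system and a skewed, rescaled one:
Sigma1 = ((0, 0), (1, 0), (0, 1))
Sigma2 = ((3, 0), (5, 0), (4, 2))
N = (2.0, 3.0)
print(chi(Sigma1, N))   # (2.0, 3.0)
print(chi(Sigma2, N))   # (-1.25, 1.5): the same point, relabeled in the second system
```

The second system is neither orthogonal nor isometric; the same point simply receives different labels, which is exactly the phenomenon studied later in this chapter.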
Once again, whether χ is surjective depends on whether or not we want to include a variation of the Ruler Axiom or the Completeness Axiom, and we are remaining steadfastly agnostic on that point. At this point we have completed our goal, which was to “algebraize” 𝔼², in contrast to the method sketched earlier in this chapter of how one can “geometrize” ℝ². Notice that the construction of the coordinate function χ : 𝔼² → ℝ² is dependent on a number of independent choices: we must choose a horizontal axis h, an origin O on h, a second point U on h to create a horizontal scale, a vertical axis v through O, and a second point V on v to create a vertical scale. Thus the definition of χ depends on five separate geometric objects. We therefore should probably use the notation
χ(h,O,U,v,V) : 𝔼² → ℝ²

to emphasize the dependence of the coordinate function on those five ingredients; however, this is notationally awkward, so we will not do it. Instead, we will sometimes refer to the five ingredients of a coordinate system collectively using the single Greek letter Σ. That is, we will write Σ = (h, O, U, v, V) for a coordinate system²², and then write χΣ if we want to emphasize the dependence of χ on the choice of Σ. Different choices of coordinate systems Σ1, Σ2 will lead to completely different ways of assigning coordinates to points.

Now that we have coordinatized the plane, we can—finally!—talk about how to graph the solution set of an equation in two variables. The graph is, of course, relative to our coordinate system.

Definition. Let Σ = (h, O, U, v, V) be a coordinate system and let χΣ be the associated coordinate function. Let F denote the field of all coordinates (this necessarily contains at least K but could contain all of ℝ). Let (f, g), with f, g ∈ F[x, y], be a polynomial equation in two variables, and let SolF²(f, g) be its solution set over F². Then the graph of (f, g) relative to Σ is the set

{ N ∈ 𝔼² : χΣ(N) ∈ SolF²(f, g) }
It’s important to match the field of coordinates, F, to the field we use for the coefficients of our polynomial, and to the field in which we look for solutions. To illustrate why this is so, consider the polynomial p = y − ∛2·x ∈ ℝ[x, y], and the equation (p, 0). The solution set of this equation consists of all ordered pairs of the form (a, a∛2). But in 𝔼², there may be no points (other than the origin) whose coordinates have this form—it all depends on which model of the Euclidean plane we are using, and in particular whether it is based on the entirety of ℝ (in which case the graph of (p, 0) would be a line), whether it is based on just the constructible numbers K (in which case the graph of (p, 0) would be the single point O), or whether it is based on some other field intermediate between them (in which case it’s anybody’s guess). For this reason, we will usually choose a subfield F ⊂ ℝ first, making sure that it includes at least K; then consider a Euclidean plane isomorphic to F², and a polynomial equation with coefficients in F; find the solutions over F²; then choose a coordinatization of our plane; and finally consider the graph of the solution set, relative to our coordinatization.

Now that we have assigned coordinates to the plane, we are in a position to describe the graphs of familiar types of equations. Our first main result concerns first-degree equations in two variables. Of course everyone learns (usually in Algebra 1, if not in pre-Algebra) that the graphs of such equations are lines in the plane; so well known is this fact that we usually
call first-degree equations “linear equations” without giving it a second thought. But this fact should not pass by us without scrutiny; indeed, it is such a foundational fact of the secondary curriculum that it deserves close attention. For this reason, we refer to it in this book as the “Fundamental Theorem of High School Algebra.”

Theorem. (Fundamental Theorem of High School Algebra). Let ax + by + c be any first-degree polynomial in two variables. Then relative to any orthonormal coordinate system Σ, the graph of the equation (ax + by + c, 0) is a line in the plane. Specifically,

(a) If b = 0 then the graph of (ax + c, 0) is the vertical gridline v−c/a.
(b) If b ≠ 0 then the graph of (ax + by + c, 0) is the same as the graph of (y, mx + k), where m = −a/b and k = −c/b.

Furthermore, every line in the plane is the graph of an equation of one of these forms.
Proof. There are actually several things we need to prove here. First, we need to show that (in both of the two cases described in the statement of the theorem) all points in the graph of the solution set are collinear. To do this, it is enough to prove that any three points in the graph of the solution set are collinear. (Why is this enough? See Exercise 18.) Once we have done that, we also need to prove that every point along the line defined by the graph of the solution set corresponds to a solution. This is necessary because otherwise it’s possible that the graph might consist of points that are all scattered along a line, but not fill the entire line. We need to show that this doesn’t actually happen. Finally, once we have shown that the graph of a solution set of a first-degree equation is a line, we will need to show that the entire process can be reversed: that any line can be realized as the graph of the solution set of a first-degree equation. That’s a lot to prove, so let’s get started!

Step 1. We prove that the graph of the set of all solutions to (ax + by + c, 0) is collinear in 𝔼². We consider two cases separately:

(i) If b = 0 then a ≠ 0; otherwise the polynomial is not first-degree, but a constant. Then (since we are working over a field) the equation (ax + c, 0) is strongly equivalent to (ax, −c) (by a Type I move), which in turn is strongly equivalent to (x, −c/a) (by a reversible Type II move). The solution set of (x, −c/a) consists of all ordered pairs of the form (−c/a, β), where β is arbitrary. Now we ask: when will a point N ∈ 𝔼² have coordinates χ(N) = (−c/a, β)? The answer is precisely when N ∈ v−c/a, because (by definition) all points on that vertical gridline have a first coordinate equal to −c/a. So in the case b = 0, all points lie on a single line.

(ii) If b ≠ 0 then (ax + by + c, 0) is strongly equivalent to (by, −ax − c) (Type I moves), which in turn is strongly equivalent to (y, −(a/b)x − (c/b)) (reversible Type II move). If we introduce
Figure 4.16 Diagram for the proof of the Fundamental Theorem of High School Algebra, Step 1, part (ii).
the notation m = −a/b and k = −c/b, then we may write (y, mx + k) as an equivalent equation. The solutions to this equation all have the form (α, mα + k) for arbitrary α. We need to show that any three such solutions are collinear. Consider the diagram shown in Figure 4.16, below. In Figure 4.16, an orthonormal coordinate system has been chosen, and three points N, P and Q, with coordinates
(α1, mα1 + k), (α2, mα2 + k), (α3, mα3 + k)

have been plotted. Each of these three points corresponds to a solution to the equation (y, mx + k). We need to show that they are collinear.
Notice that in the diagram, the horizontal and vertical coordinates of each of the three points have been marked along the horizontal and vertical axes. If we add horizontal gridlines (i.e., lines parallel to the horizontal axis) through N and P, and vertical gridlines (i.e. lines parallel to the vertical axis) through P and Q, we form two right triangles, as shown. The lengths of the horizontal and vertical sides of these triangles are (relative to our choice of scale) simply found by subtracting the appropriate coordinates. One triangle has sides of length α2 − α1 and mα2 − mα1; the other triangle has sides of length α3 − α2 and mα3 − mα2. In both triangles, the ratio of the vertical side to the horizontal side is

(mα2 − mα1)/(α2 − α1) = (mα3 − mα2)/(α3 − α2) = m
Thus the legs of the two triangles are in the same proportion. We also know both triangles have right angles between the legs; therefore by SAS similarity, the two angles marked θ1 and θ2 are congruent. But this implies that the points N, P, Q are all collinear, which was what we wanted to prove.

Step 2. Having shown in the previous step that any three points from the solution set are collinear, and hence that the entire solution set is collinear, we next prove that the entire line is contained in the solution set. We have already established that in the case b = 0 every point on the vertical gridline v−c/a has coordinates (−c/a, β) for some β, and therefore is a solution to (x, −c/a), which in turn means it is a solution to (ax + c, 0). So it remains to consider the non-vertical case. Let N, P be two points whose coordinates are solutions to (y, mx + k), and let ℓ be the line through N and P. We need to show that any other point R ∈ ℓ also has coordinates that are solutions to the same equation. So let’s say, as in the previous part of the proof, that χ(N) = (α1, mα1 + k) and χ(P) = (α2, mα2 + k). Let the coordinates of R be given by χ(R) = (β, γ). Refer to Figure 4.17. Once again we have two right triangles. This time we are assuming that the three points N, P, R are collinear, and therefore the two angles marked θ and φ are congruent. Since the two triangles both contain right angles, in addition to the congruent acute angles, we have by AA similarity that the sides are in proportion: that is,
Figure 4.17 Diagram for the proof of the Fundamental Theorem of High School Algebra, Step 2.
(γ − (mα2 + k))/(β − α2) = (mα2 − mα1)/(α2 − α1) = m

It follows that γ − (mα2 + k) = m(β − α2). But this is equivalent to γ − k = mβ, i.e. γ = mβ + k. This shows that the coordinates of R are actually (β, mβ + k). This is exactly the conclusion we need: it shows that the coordinates of R are solutions to the equation (y, mx + k), which is what we wanted to show.

Step 3. Finally, we need to show that every line in the plane is the solution set to some linear equation. Let ℓ be any line. There are two possibilities:

(i) ℓ is a vertical gridline, i.e. ℓ ∥ v. Then ℓ intersects h at some point P with coordinate (on h) equal to some value α. This means precisely that ℓ = vα, and that every point on ℓ has coordinates (α, β) for arbitrary β. Such points are all solutions to the equation (x, α), and therefore ℓ is the graph of the solution set for this equation.

(ii) ℓ is not a vertical gridline. In this case ℓ ∦ v, so ℓ intersects v at some point P; let us call the vertical coordinate of that point k. Now choose any other point Q ∈ ℓ and let the coordinates of Q be (α, β). We claim that the coordinates of both P and Q are solutions to the equation

(y, ((β − k)/α)x + k)

This claim can be verified by direct substitution: plugging in (0, k) (the coordinates of P) into the equation, we have

k = ((β − k)/α)·0 + k

which is true by inspection; and plugging in (α, β) (the coordinates of Q) into the equation, we have

β = ((β − k)/α)·α + k

which is also true by inspection. If we introduce the shorthand notation m = (β − k)/α, then we have shown that both P and Q are solutions to the equation (y, mx + k). But we have already shown that if two points on a line are solutions to a particular first-degree equation, then every point on the same line is also a solution to the same equation! So all points on ℓ are solutions to (y, mx + k).
These three steps, taken together, complete the proof of the Fundamental Theorem of High School Algebra. There are a few things to observe about the proof. Notice that the traditional high school distinction between vertical lines (which have “undefined slope” and are given by equations of the form x = a) and non-vertical lines (which have a slope computed by finding the ratio of the “rise” and “run” of two points on the line, and are given by equations of the form y = mx + b) occurs almost spontaneously in the proof. Notice also that both Steps 1 and 2 require the use of some fundamental properties of similar triangles, but in different ways. That is because the very notion of “constant slope”—the essence of straight lines—is really
about the proportions of sides in a right triangle. At a really basic level, the Fundamental Theorem of High School Algebra is really a theorem of Geometry, not Algebra, because it describes the linearity of a set of points. Since most high school students in the United States learn Geometry the year after Algebra, it’s not possible to prove (or even talk seriously about) why this theorem is true at the time that it is most relevant, and by the time students have learned enough Geometry to really grapple with it, the “facts” of the theorem are so well known already that there doesn’t seem to be anything worth proving.

Once we have the Fundamental Theorem in place, we can define a few additional key terms of the curriculum.

Definition. Let ℓ be any line in a Euclidean plane and choose a coordinatization Σ. By the FTHSA, it is either a vertical line (i.e. parallel to v) or is the graph of the solution set of an equation of the form (y, mx + k). In the latter case, m is called the slope of the line, and k is called the y-intercept of the line; we will refer to (y, mx + k) as simply the equation of the line and say that ℓ is the graph of the equation (y, mx + k).

The FTHSA, and the preceding definitions, make it possible for us to prove a few additional well-known (but crucial) theorems of the high school curriculum.
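The computational content of the FTHSA (in particular Step 3 of the proof) can be summarized in a short routine that recovers the equation of the line through two given points, with the vertical case handled separately. This is only a sketch; the representation of equations by tuples is an ad hoc convention, not the book’s notation.

```python
from fractions import Fraction

def line_through(P, Q):
    """Return the first-degree equation determined by two distinct points,
    in the spirit of Step 3 of the FTHSA: either ('x', c) for the vertical
    gridline x = c, or ('y', m, k) for y = m*x + k.  Coordinates are taken
    to be exact field elements (here: rationals)."""
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2:
        return ('x', x1)                  # vertical gridline with coordinate x1
    m = (y2 - y1) / (x2 - x1)             # "rise over run"
    k = y1 - m * x1                       # y-intercept
    return ('y', m, k)

print(line_through((Fraction(0), Fraction(1)), (Fraction(2), Fraction(5))))
# ('y', Fraction(2, 1), Fraction(1, 1))   i.e. the equation (y, 2x + 1)
print(line_through((Fraction(3), Fraction(-1)), (Fraction(3), Fraction(7))))
# ('x', Fraction(3, 1))                   i.e. the vertical gridline x = 3
```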
Theorem (Slopes of Parallel lines). Let ℓ1 and ℓ2 be two parallel lines in 𝔼². Choose an orthonormal coordinatization Σ of the plane. Then one of the following two cases is true: (a) ℓ1 and ℓ2 are both vertical gridlines (i.e. parallel to v), or (b) ℓ1 and ℓ2 are the graphs of two lines with the same slopes but different y-intercepts. Conversely, if ℓ1 and ℓ2 are any two lines in 𝔼² for which either (a) or (b) is true, then ℓ1 and ℓ2 are parallel.
Proof. Suppose first that ℓ1 and ℓ2 are parallel lines. By the FTHSA, each line is either vertical or of the form (y, mx + k) for some slope m and y-intercept k. We show that there are two impossible cases:

(i) One line is vertical and the other is not. Without loss of generality, suppose ℓ1 is the graph of (x, c) and ℓ2 is the graph of (y, mx + k) for some constants c, m, k ∈ F. But it can be verified directly (Exercise 19) that the point P with coordinates (c, mc + k) lies on both of these lines, contradicting the premise that the lines are parallel. So this case cannot actually happen.

(ii) Neither line is vertical, and the slopes are different. That is, assume ℓ1 is the graph of (y, m1x + k1) and ℓ2 is the graph of (y, m2x + k2) for some (different) slopes m1 and m2. Then we can verify directly (Exercise 20) that if we define c = (k2 − k1)/(m1 − m2), then the point with coordinates (c, m1c + k1) lies on both lines, contradicting the premise that the lines are parallel. So this case also cannot happen.

The only cases remaining, after we have eliminated the two impossible cases, are that either both lines are vertical, or both lines are non-vertical with the same slope and different y-intercepts. These are precisely the two cases asserted by the theorem.
For the converse, we first observe that if ℓ1 and ℓ2 are both vertical, then they are both parallel to v, and it is a basic theorem of Euclidean geometry that two distinct lines both parallel to a common third line must be parallel to each other. On the other hand, if they are the graphs of two equations with the same slopes but different y-intercepts, then they can have no point in common; for the only way that (α, β) can be a solution to both (y, mx + k1) and (y, mx + k2) is if β = mα + k1 and also β = mα + k2, which would imply k1 = k2, contradicting the hypothesis that they have different y-intercepts. Since the two lines have no point in common, by definition they are parallel.

Theorem. (Slopes of Perpendicular lines). Let ℓ1 and ℓ2 be two perpendicular lines in 𝔼². Choose an orthonormal coordinatization Σ of the plane. Then one of the following two cases is true: (a) One of ℓ1 and ℓ2 is a vertical gridline (i.e. parallel to v) and the other one is a horizontal gridline (i.e. parallel to h), or (b) ℓ1 and ℓ2 are the graphs of two lines whose respective slopes m1 and m2 are “negative reciprocals”, i.e. m2 = −1/m1. Conversely, if ℓ1 and ℓ2 are any two lines in 𝔼² for which either (a) or (b) is true, then ℓ1 and ℓ2 are perpendicular.
Proof. Suppose first that ℓ1 and ℓ2 are perpendicular lines. Then in particular they are not parallel, so by the previous theorem we know that they are not both vertical; this leaves two possibilities:

(a) Exactly one of the lines is vertical. Without loss of generality, assume ℓ1 is vertical, and ℓ2 is not. If ℓ1 is vertical, then it is parallel to v, which is perpendicular to h (remember, we are in an orthonormal coordinatization); therefore ℓ1 is perpendicular to h. Since ℓ1 is also perpendicular to ℓ2 by hypothesis, ℓ2 must be parallel to h. This implies that ℓ2 is horizontal. Likewise, if ℓ2 is vertical, ℓ1 must be horizontal.

(b) Neither line is vertical. Assume the equation of ℓ1 is (y, m1x + k1) and the equation of ℓ2 is (y, m2x + k2). We also know that since the lines are not parallel, m1 ≠ m2. Let P be the intersection point of ℓ1 and ℓ2. Move along the horizontal gridline through P an arbitrary distance to some point Q, and then move along the vertical gridline through Q until it intersects with ℓ1 at a point R (refer to Figure 4.18). Then return to P, and again move along the horizontal gridline through P, this time to locate a point S with PS ≅ QR. Then travel along the vertical gridline through S until it intersects with ℓ2 at a point T. Now observe the following facts:
i. Since ℓ1 and ℓ2 are perpendicular by hypothesis, ∠RPQ and ∠SPT are complementary.
ii. Since ∆PQR is a right triangle, ∠RPQ and ∠PRQ are complementary, as well.
iii. Therefore ∠SPT ≅ ∠PRQ.
iv. By AA similarity, RQ/PQ = PS/TS. Since we have constructed PS ≅ QR, this implies PQ ≅ TS.
v. Finally, the slope of ℓ1 is m1 = RQ/PQ and the slope of ℓ2 is m2 = −TS/PS = −PQ/RQ = −1/m1.
Figure 4.18 Diagram for the proof of the Perpendicular Lines theorem.
For the other direction of the proof, it is clear that if one of ℓ1, ℓ2 is vertical and the other is horizontal, then the lines would be perpendicular (again, this is specifically true because we are in an orthogonal coordinate system). It remains only to show that if they are non-vertical with slopes that are opposite reciprocals, then they are perpendicular. Suppose, then, that they are non-vertical lines with opposite reciprocal slopes. In particular, the slopes are not equal, so the lines are not parallel, and therefore they intersect at some point P. There must be some line perpendicular to ℓ1 through P; let’s call that hypothetical line ℓ3. By the part of this theorem already proved, the slopes of ℓ3 and ℓ1 must be opposite reciprocals, as well. But this implies that ℓ2 and ℓ3 have the same slope. Since ℓ2 and ℓ3 pass through the same point (P) with the same slope, they must in fact be the same line; this shows that ℓ1 ⊥ ℓ2.
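Both slope theorems are easy to sanity-check numerically in an orthonormal model of the plane. The little routines below are illustrative only; the perpendicularity test uses the dot product of the direction vectors (1, m1) and (1, m2), which is an equivalent way of expressing the negative-reciprocal condition.

```python
def intersection(m1, k1, m2, k2):
    """Intersection of y = m1*x + k1 and y = m2*x + k2, or None if parallel."""
    if m1 == m2:
        return None if k1 != k2 else 'same line'
    c = (k2 - k1) / (m1 - m2)          # the x-coordinate found in the proof
    return (c, m1 * c + k1)

def perpendicular(m1, m2):
    """In an orthonormal system, direction vectors (1, m1) and (1, m2)
    are perpendicular exactly when 1 + m1*m2 vanishes."""
    return 1 + m1 * m2 == 0

print(intersection(2, 1, 2, 5))        # None: equal slopes, different intercepts
print(intersection(2, 1, -0.5, 5))     # (1.6, 4.2): the two lines meet at a single point
print(perpendicular(2, -0.5))          # True: the slopes are negative reciprocals
```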
Exercises

17. Prove part (1) of the Uniqueness of Coordinates Theorem.
18. Explain why, in order to prove that all of the solutions of an equation are collinear, it is enough to prove that any three solutions are collinear.
19. Show that the point P with coordinates (c, mc + k) (relative to some coordinatization Σ) lies on the graphs of both (x, c) and (y, mx + k).
20. Assume ℓ1 is the graph of (y, m1x + k1) and ℓ2 is the graph of (y, m2x + k2) for some (different) slopes m1 and m2. Verify that if we define c = (k2 − k1)/(m1 − m2), then the point with coordinates (c, m1c + k1) lies on both lines.
21. Assume the FTHSA, and show that a line is completely determined by its slope and any point on it; that is, if two lines ℓ1 and ℓ2 both pass through the same point P and have equal slopes, then they are the same line.
4.6 Non-Orthonormal Coordinate Systems

The main theorems of the previous section—the Fundamental Theorem of High School Algebra and the two theorems on Slopes of Parallel and Perpendicular Lines—were all proved, for simplicity, in the case of an orthonormal coordinate system. It is worth observing, however, that two of those theorems are also true in non-orthonormal coordinate systems. (Can you identify which ones?)

Why should we care about non-orthonormal systems? Recall that in order to be orthonormal, a coordinate system needs to satisfy two separate conditions: (a) it needs to be orthogonal, i.e. the vertical axis v needs to be perpendicular to the horizontal axis h; and (b) it needs to be isometric, i.e. the point U chosen along h to define the scale along the horizontal axis needs to be the same distance from O as the point V chosen along v to define the scale along the vertical axis. The second of these conditions is often violated for practical reasons. Suppose, for example, we are in a high school Algebra 1 classroom and we want to look at the graph of a function like y = 100x² on the portion of the x-axis defined by −10 ≤ x ≤ 10. If we use an isometric coordinate system, the graph would look like the one shown in Figure 4.19a. With this scale, very little of the graph’s shape can be recognized. On the other hand, with the non-isometric coordinate system shown in Figure 4.19b, the familiar parabolic shape is easily seen. Indeed for any function (or, for that matter, any set of data) whose vertical and horizontal scales are naturally “different”, it makes sense to use different scales for the vertical and horizontal axes—i.e., to use a non-isometric coordinate system.

Of course, using different scales on the vertical and horizontal axes can lead to distortions or misunderstandings. Anyone who has used a graphing calculator is surely aware that, because the screen on such a calculator is typically wider than it is tall, the default Window settings are generally non-isometric; as a result, the graph of a “circle” will typically look like an ellipse, and the graph of two “perpendicular” lines like y = x and y = −x will appear to intersect at an angle that is clearly not 90°. For this very reason, graphing calculators usually have commands²³ that automatically “isometrize” and “de-isometrize” the window settings.

But why would we want to use a non-orthogonal coordinate system? What possible value is there in having axes that are not perpendicular to one another? Here, too, there are pragmatic uses. Consider a perspective drawing of a 3-dimensional object in the plane. We are quite accustomed to using non-orthogonal axes to represent different planes in 3-dimensional space; for example, the “cube” shown in Figure 4.20 makes use of three different overlapping coordinate systems to create the illusion of depth:

Σ1 = (h, O, U, v, V) to define the “floor” of the cube
Σ2 = (h, O, U, w, W) to define the “front face” of the cube
Σ3 = (v, O, V, w, W) to define the “left face” of the cube
Figure 4.19 The graph of y = 100x² in both (a) an isometric coordinate system and (b) a non-isometric coordinate system.
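Readers who want to reproduce the contrast in Figure 4.19 can do so with a few lines of plotting code. The sketch below assumes the third-party libraries numpy and matplotlib are available; forcing an equal aspect ratio plays the role of an isometric coordinate system, while the default behaviour rescales the two axes independently (a non-isometric system).

```python
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-10, 10, 400)
y = 100 * x**2

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))

ax1.plot(x, y)
ax1.set_aspect('equal')        # isometric: one unit on each axis has the same length
ax1.set_title('isometric scales')

ax2.plot(x, y)                 # default: axes scaled independently (non-isometric)
ax2.set_title('independent (non-isometric) scales')

plt.show()
```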
Figure 4.20 Overlapping, non-orthogonal coordinate systems create the illusion of depth.
None of these coordinate systems are orthogonal (or isometric, for that matter), yet we are quite comfortable reading diagrams such as Figure 4.20 and interpreting, for example, the acute angle shown as ∠VOU as a “right angle” shown in perspective.

Quite apart from the practical applications of non-orthonormal coordinate systems, there are mathematical reasons for considering them. First, throughout this book we have always been interested in seeing how far results can generalize, and precisely how they break down when they do not; and second, some of the ideas we develop in this section will provide important background information when we turn to consider transformations and symmetry in the next sections.

So let’s consider a non-orthonormal coordinate system Σ1 = (h, O, U, v, V). We will use this system to build two additional coordinate systems, Σ2 and Σ3, where Σ2 will be orthogonal (but not, in general, isometric), and Σ3 will be orthonormal (i.e. both orthogonal and isometric). The construction proceeds as follows:

Step 1. As shown in Figure 4.21(a), we first construct a line v' perpendicular to h through O. Then construct a line through V parallel to h; let V' be the point where it intersects v'. Then Σ2 = (h, O, U, v', V') is an orthogonal coordinate system, called the orthogonalization of Σ1.

Step 2. As shown in Figure 4.21(b), we next locate a point V'' on v', located on the same side of h as V' is, such that OV'' ≅ OU (equivalently, such that OV''/OU = 1). Then Σ3 = (h, O, U, v', V'') is both orthogonal and isometric. We call Σ3 the isometrization of Σ2, or alternatively the orthonormalization of Σ1.

In order to see how the three main theorems of the previous section are affected by the use of a non-orthonormal coordinate system, we need to first understand how the coordinates of a point P change when we change from one coordinate system to another. Specifically, we need the following:
Figure 4.21 (a) Using a non-orthonormal coordinate system (shown with solid lines), we first construct an orthogonal coordinate system (shown with dashed lines). Then (b) we locate a point V ′′ with OV ′′ ≅ OU to construct an orthonormal system.
Proposition. (Orthonormalization of Coordinates). Let Σ1 be a non-orthonormal coordinate system (h, O, U, v, V), and construct its orthogonalization Σ2 = (h, O, U, v', V') and orthonormalization Σ3 = (h, O, U, v', V''), as described above. Choose any point N ∈ 𝔼² and let χ1(N) = (α1, β1) be the coordinates of N with respect to Σ1, let χ2(N) = (α2, β2) be the coordinates of N with respect to Σ2, and let χ3(N) = (α3, β3) be the coordinates of N with respect to Σ3. Then there exist two constants r, s (independent of N but uniquely determined by Σ1) such that
(α2, β2) = (α1 + rβ1, β1)
(α3, β3) = (α1 + rβ1, sβ1)
Proof. Refer to Figure 4.22. Let W be the intersection of v with a line through U parallel to v', and set r = χOW(V) and s = χOV''(V'). These two constants are called shear constants. Now consider any point N; for simplicity, we assume N is in the first quadrant relative to Σ1 (i.e. we assume α1, β1 ≥ 0); if it is not, the proof below needs minor modification with occasional negative signs. We next construct the following points on the different coordinate axes:

(a) Let P be the intersection of h with a line parallel to v through N;
(b) Let P' be the intersection of h with a line parallel to v' through N;
(c) Let Q be the intersection of v with a line parallel to h through N;
(d) Let Q' be the intersection of v' with a line parallel to h through N.

Then by definition,
α1 = χOU(P) = OP/OU

while

α2 = χOU(P') = OP'/OU = OP/OU + PP'/OU = α1 + PP'/OU.

However, by AA similarity, PP'/OU = NP/OW (Exercise 22). We also have NP ≅ OQ (Exercise 23), so PP'/OU = OQ/OW. By the change of base formula, OQ/OW = χOV(Q)/χOV(W) = β1 · χOW(V) = rβ1. Putting this all together, we have

α2 = α1 + rβ1
Figure 4.22 Diagram used for proving the Orthonormalization of Coordinates proposition.
We also have β2 = OQ'/OV'. But (again by AA similarity), OQ'/OV' = OQ/OV = β1 (Exercise 24). So (α2, β2) = (α1 + rβ1, β1), proving the first half of the proposition. For the second half, we note simply that α3 = OP'/OU = α2 and β3 = OQ'/OV'' = χOV''(Q')/χOV''(V'') = β2 · χOV''(V') = sβ2.
Now that we have established how the coordinates of a point change when we change from a non-orthonormal coordinate system to its orthogonalization and orthonormalization, we are ready to prove generalizations of two of the main results from the previous section.

Theorem (Generalized Fundamental Theorem of High School Algebra). Let Σ be any coordinate system (not necessarily orthonormal), and let ℓ be any line. Then there exist constants a', b', c' ∈ F such that ℓ is the graph (relative to Σ) of the equation (a'x + b'y + c', 0). Conversely, for any equation of this form, the graph relative to Σ is a line.
Proof. Consider any line ℓ. If Σ' is the orthonormalization of Σ, then by the FTHSA, ℓ is the graph (relative to Σ') of some equation of the form (ax + by + c, 0). This means that for any point N ∈ ℓ, the coordinates (α', β') of N relative to Σ' satisfy the equation aα' + bβ' + c = 0, and conversely every pair of coordinates satisfying this equation corresponds to some point N ∈ ℓ. However, by the Orthonormalization of Coordinates proposition, we know α' = α + rβ and β' = sβ, where (α, β) are the coordinates of N relative to Σ, and r, s are the two shear constants defined in the previous proof. Thus for any point N ∈ ℓ we have that a(α + rβ) + b(sβ) + c = 0, which can be rewritten as

aα + (ar + bs)β + c = 0

If we set a' = a, b' = ar + bs, c' = c, then the coordinates relative to Σ of every point on ℓ satisfy a'α + b'β + c' = 0, from which it follows that ℓ is the graph of the equation (a'x + b'y + c', 0) relative to Σ. The converse is left as Exercise 25.

Theorem (Generalization of Slopes of Parallel lines). Let ℓ1 and ℓ2 be two parallel lines in 𝔼². Choose any coordinatization Σ (not necessarily orthonormal). Then one of the following cases is true: (a) ℓ1 and ℓ2 are both vertical gridlines (i.e. parallel to v), or (b) ℓ1 and ℓ2 are the graphs of two lines with the same slopes but different y-intercepts. Conversely, if ℓ1 and ℓ2 are any two lines in 𝔼² for which either (a) or (b) is true, then ℓ1 and ℓ2 are parallel.

Proof. Exercise 26.
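The coefficient bookkeeping in the proof of the Generalized FTHSA is easy to exercise numerically. In the sketch below the shear constants r, s and the coefficients a, b, c are arbitrary illustrative values, and points are represented directly by their coordinate pairs; this is an assumption made only for the check, not part of the book’s development.

```python
# Shear constants of some non-orthonormal system (arbitrary illustrative values):
r, s = 0.4, 2.5

# A line given, relative to the orthonormalization Sigma', by a*x + b*y + c = 0:
a, b, c = 3.0, -2.0, 7.0

# Coefficients of the same line relative to the original system Sigma,
# as in the proof of the Generalized FTHSA:
a2, b2, c2 = a, a * r + b * s, c

# Pick points whose Sigma-coordinates satisfy a2*alpha + b2*beta + c2 = 0
# and confirm that their Sigma'-coordinates satisfy the original equation.
for beta in (-1.0, 0.0, 2.0, 5.0):
    alpha = -(b2 * beta + c2) / a2
    alpha_p, beta_p = alpha + r * beta, s * beta       # Orthonormalization of Coordinates
    print(round(a * alpha_p + b * beta_p + c, 12))     # 0.0 every time
```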
Exercises
22. Explain why, in Figure 4.22, PP'/OU = NP/OW.
23. Explain why, in Figure 4.22, NP ≅ OQ.
24. Explain why, in Figure 4.22, OQ'/OV' = OQ/OV = β1.
25. Complete the proof of the Generalized FTHSA; that is, show that the graph of any equation of the form (a'x + b'y + c', 0) relative to any coordinate system is a line.
26. Prove the Generalization of the Slopes of Parallel Lines theorem.
27. Why can’t we generalize the Slopes of Perpendicular Lines theorem to the non-orthogonal case? Why can’t we generalize it to the non-isometric case?
4.7 Transformations and Symmetry

In the last section we saw an example of how changing from one coordinate system Σ to a second coordinate system Σ' amounts to a “relabeling” of all of the points in 𝔼²: each point N has one set of coordinates (α, β) with respect to Σ and a different set of coordinates (α', β') with respect to Σ'. One way to understand this is to think of a coordinate system as establishing a one-to-one correspondence between the points of the Euclidean plane 𝔼² and the ordered pairs of the “algebraic plane”²⁴ ℝ². Using this correspondence, the change of coordinates Σ → Σ' induces a map ℝ² → ℝ², as shown in the diagram below:

ℝ² —Σ→ 𝔼² —Σ'→ ℝ²

But because of this correspondence between 𝔼² and ℝ², a change of coordinates can also be used to induce a map 𝔼² → 𝔼², as indicated below:

𝔼² —Σ→ ℝ² —Σ'→ 𝔼²

What does this mean? One way of thinking about it is to begin with the (bare, uncoordinatized) Euclidean plane, and choose a point N. Now lay over the plane a coordinate system Σ; this assigns a pair of coordinates (α, β) to N. If we then change to another coordinate system Σ', that same pair of coordinates (α, β) now labels a different point, N'. Thus the change of coordinates Σ → Σ' induces a map N ↦ N'. There are thus two different ways of thinking²⁵ about what a change of coordinates “does”:

• From one point of view, the points stay the same and the labels change.
• From the other point of view, the labels stay the same while the points move.
In the last section, we took the first point of view; in this section, we take the second. To make this concrete, choose an orthonormal coordinate system Σ = (h, O, U, v, V) and consider the line ℓ that is the graph, relative to Σ, of the equation y = 2x − 1. Now choose O' to be the point whose coordinates (still relative to Σ) are (3, 0), let U' be the point whose
Figure 4.23 Relative to Σ = (h, O, U, v, V), line ℓ is the graph of y = 2x − 1, and relative to Σ' = (h, O', U', v', V') line ℓ' is the graph of the same equation. However, relative to Σ, line ℓ' is the graph of y = 2x − 7.
coordinates are (4, 0), let V' be the point with coordinates (3, 1), and let v' be the line through O' and V'. Then if we define a new orthonormal coordinate system Σ' = (h, O', U', v', V'), then relative to Σ' the equation y = 2x − 1 determines a different line, ℓ'. (See Figure 4.23.) In fact, ℓ' can be understood as a translation of ℓ three units to the right. We can also observe that, relative to the original coordinate system, the equation of ℓ' is y = 2x − 7.

As this example suggests, the formalism of changing coordinate systems provides a language for describing what, in high school Algebra 1 and 2, are called “function transformations” (i.e. horizontal and vertical translations, reflections, and “stretches”). In the rest of this section we will develop a general framework for these transformations and others, use that framework to discuss function symmetry, and see how categories of familiar “special functions” naturally arise from considerations of function symmetry. We begin with some basic definitions and notation:

Definitions. Let Σ, Σ' be any two coordinate systems on 𝔼². Every point P ∈ 𝔼² corresponds, via Σ, to an ordered pair (α, β) ∈ ℝ²; this ordered pair then corresponds, via Σ', to a (generally different) point P' ∈ 𝔼². We write χ and χ' for χΣ and χΣ', respectively, to distinguish between the coordinate functions corresponding to each coordinate system; then the equation χ(P) = χ'(P') expresses the fact that the two points have the same coordinates, but in different systems. The mapping P ↦ P' determines a map T : 𝔼² → 𝔼², called the transformation induced by the change of coordinates Σ → Σ'. Any set of points S ⊆ 𝔼² is mapped, via T, to a (generally different) set of points S' ⊆ 𝔼², called the image of S under the transformation T. If it happens to be the case that S' = S, then the set S is said to be invariant under T, and T is said to be a symmetry of S.

Notice that if S is invariant under T, that does not mean that T fixes each individual point of S, only that it maps each point of S onto another point of S in a one-to-one correspondence. For example, any horizontal gridline is invariant under a horizontal translation; each point on the line slides left or right, but the line as a whole remains unchanged. If we were purely interested in geometry, we would be interested in the symmetries of common geometric figures like regular polygons and circles, but our purpose here is to
193
Geometry, Graphs and Symmetry 193 connect the geometric language to the algebraic study of functions, so we need some additional language for that: Definitions. Let f ∈ Func ( ). The graph of f is the solution set of the equation ( y, f ( x )) ; equivalently it is the zero set of y − f ( x ), regarded as a function 2 → . If T is a transformation of 2 induced by a change of coordinates, and if the graph of f is invariant under T , then we say that the function26 f is invariant under T , and T is said to be a symmetry of f . Let’s consider a few specific categories of transformations and what it means for a function to be invariant under them. Example 1 (Horizontal translations). Let Σ = ( h,O,U , v,V ) and Σ' = ( h,O' ,U' , v' ,V ' ) be two coordinate systems that share a common horizontal axis, and for which v v', ∆VOU ≅ ∆V 'OU ' ', and with V and V' on the same side of h. Now we consider two sub-cases: (a) If we assume that O' is on the ray originating at O and passing through U , so that O' and U are on the same side of O, then the induced transformation T is called a horizontal translation to the right from O to O'. (b) If, on the other hand, O' and U are on opposite sides of O, then T is a horizontal translation to the left from O to O'. In either case we also introduce the language of vectors, and say that T is a translation of the plane through the vector OO' . Suppose we translate to the right through a distance d (i.e. through a vector OO' with OO' = d ). If χ ( P ) = ( α, β ), then χ ( P' ) = ( α + d , β ). Any function f can be translated to the right. Suppose we translate f through a distance d . Then a point P lies on the graph of f ( x ) with respect to Σ if and only if χ ( P ) = ( α, f ( α )); by the remarks above, then χ ( P' ) = ( α + d , f ( α )). From this we see that a point lies on the translation of f if and only if its coordinates satisfy the equation y = f ( x − d ) relative to the original coordinate system Σ' . (Here, f ( x − d ) is the common shorthand notation for the composition of functions f id − dˆ .) Most functions, of course, are not invariant when you translate them to the right: the graph of a translated function typically is in a different position than the graph of the original function. However, if a function f is invariant under a translation to the right, it is also invariant under a translation to the left (Exercise 28). If a function f is invariant under every horizontal translation, it is a constant function, whose graph is a horizontal line. A more interesting case is when f is only invariant under certain horizontal translations. In this case, we can consider the set of distances
(
)
= dOU (O' ) O' ≠ O and f is invariant under translation through OO'
{
}
If this set D has some minimal value d > 0, then we say that f is periodic with period d. See Figure 4.24 for an illustration²⁷. Moreover, if f is invariant under a translation to the right through a distance d, it is also invariant under a translation through a distance 2d, 3d, and indeed any whole-number multiple of d. If we relax our language enough to allow an expression like “translation to the right through a distance −2d” to mean a translation to the left through a distance 2d, then we can regard D as an additive subgroup of ℝ (Exercise 29).
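Periodicity as translation-invariance can be probed numerically. The sketch below samples the graph at finitely many points, so it provides evidence rather than a proof; the sample spacing and tolerance are arbitrary choices made only for this illustration.

```python
import math

def invariant_under_right_translation(f, d, samples=1000, tol=1e-9):
    """Check (at finitely many sample points) whether the graph of f is
    unchanged when translated d units to the right, i.e. whether
    f(x - d) == f(x) for all sampled x."""
    xs = [k * 0.137 for k in range(samples)]        # arbitrary sample points
    return all(abs(f(x - d) - f(x)) < tol for x in xs)

print(invariant_under_right_translation(math.sin, 2 * math.pi))   # True
print(invariant_under_right_translation(math.sin, 1.0))           # False
```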
Figure 4.24 The graph of a periodic function is translation-invariant; the period is d = OO ′ where O ′ is a point on h of minimal distance from O such that f is invariant under translation through OO’.
Note further that, as Figure 4.24 suggests, given any point P and its image P' = T(P), the vector PP' is independent of P. That is, the vectors OO', UU', and VV' are all the same vector²⁸.

Example 2 (Vertical translations). Following the pattern of Example 1, we may consider two coordinate systems Σ = (h, O, U, v, V) and Σ' = (h', O', U', v, V') that share a common vertical axis, and for which h ∥ h', ∆VOU ≅ ∆V'O'U', and with U and U' on the same side of v. Then we consider two sub-cases:

(a) If we assume that O' is on the ray originating at O and passing through V, so that O' and V are on the same side of O, then the induced transformation T is called a vertical translation up from O to O'.
(b) If, on the other hand, O' and V are on opposite sides of O, then T is a vertical translation down from O to O'.

The rest of the details in the previous example adapt naturally to the case of vertical translations, with a few key differences. If P is a point with χ(P) = (α, β), and P' is the translation of P up through a distance d, then χ(P') = (α, β + d). Therefore if P is a point on the graph of a function y = f(x) with respect to Σ, then P' is on the graph of the function y = f(x) + d (with respect to the same coordinate system). For this reason, no function can ever be invariant under a vertical translation; for if f were invariant under a vertical translation, then (α, f(α)) and (α, f(α) + d) would be two different points on the graph of the function, contradicting the basic definition of a function as a set of ordered pairs in which each 1st coordinate is matched to one and only one 2nd coordinate.

Example 3 (General translations). Now consider two coordinate systems Σ = (h, O, U, v, V) and Σ' = (h', O', U', v', V') that share neither axis, but for which h ∥ h', v ∥ v', and with OU = O'U' and OV = O'V' as vectors (see footnote 26). The transformation T induced by Σ → Σ' can be regarded as a composition of a horizontal translation Th and a vertical translation Tv; as a result, for any point P with χ(P) = (α, β), the translated point P' has χ(P') = (α + d, β + e), where the vector components of OO' are ⟨d, e⟩. (In the
high school context, the letters h and k are more commonly used, but we already have h in use as the name of the horizontal axis.) Consequently if we translate the graph of the function y = f(x), the resulting set of points all have coordinates (relative to Σ) satisfying the equation y = f(x − d) + e. Can a function be invariant under such a translation? The answer is—perhaps surprisingly, given the previous example—yes, at least provided that d ≠ 0. In fact, we have the following:

Proposition. If f is invariant under a translation through OO' = ⟨d, e⟩, with d ≠ 0, then f(x) must be of the form f(x) = g(x) + l(x), where g(x) is invariant under a horizontal translation and l(x) is a linear function; conversely, any function f(x) of this form is invariant under some translation.

Proof. If f is invariant under a translation through OO' = ⟨d, e⟩ then f(x + d) = f(x) + e for all x. Define the linear function l(x) = ex/d, and set g(x) = f(x) − ex/d. We now calculate explicitly:

g(x + d) = f(x + d) − e(x + d)/d = f(x) + e − ex/d − ed/d = f(x) − ex/d = g(x)
Since g(x + d) = g(x), g(x) is invariant under a horizontal translation through a distance d, which completes the proof.

Figure 4.25(a) shows an example of a function that is invariant under a composition of a horizontal and vertical translation; Figure 4.25(b) shows the decomposition of the same function into its linear component (shown with a dashed line) and its horizontally-invariant component. Note, however, that for a given function f(x) that is invariant under a translation through ⟨d, e⟩, the three conditions
Figure 4.25 (a) The graph of a function f(x) whose graph is invariant under a translation through a vector ⟨d, e⟩ with d ≠ 0. (b) The graphs of two functions g(x) (invariant under horizontal translations) and l(x) (linear), whose sum is f(x).
(a) f(x) = g(x) + l(x);
(b) g(x) is invariant under a horizontal translation through d;
(c) l(x) is linear

do not uniquely determine g(x) and l(x). Indeed, if g1(x) and l1(x) are two functions satisfying the three conditions above, then for any constant C, the two functions defined by g2(x) = g1(x) + C and l2(x) = l1(x) − C also satisfy the same conditions.

In the previous three examples, the origin was translated (keeping the axes parallel) while all measurement scales were kept unchanged (that is, O'U'/OU = 1, etc.). Next we consider what happens if the origin and axes are left in place while one or both measurement scales are changed.

Example 4 (Dilations). Let Σ = (h, O, U, v, V) and Σ' = (h, O, U', v, V') be two coordinate systems with the same origin and coordinate axes, but with different units of measure. We also assume that U and U' are on the same side of O (i.e. O is not between U and U') and that V and V' are also on the same side of O (i.e. O is also not between V and V'). The transformation T induced by Σ → Σ' is called a dilation. More specifically, we may describe particular types of dilations:

(a) If OU'/OU = OV'/OV, then T is called an isotropic dilation; in this case, the value k = OU'/OU = OV'/OV is called the scale factor of the dilation.
(b) If OU'/OU ≠ OV'/OV, then T is called an anisotropic dilation; in this case, we can distinguish between the horizontal scale factor kx = OU'/OU and the vertical scale factor ky = OV'/OV.
The reader is cautioned that this terminology is not standardized, and many alternatives are in use. What we have called an "isotropic dilation" may also be called a "homogeneous dilation", an "isometric dilation", or simply a "dilation"; other authors call them "homotheties". What we are calling an "anisotropic dilation" may be called by other authors an "inhomogeneous dilation", a "non-isometric dilation", or nothing at all (some authors don't consider them to be "dilations" in the first place).

The effects of a dilation on a point P are easy to describe. By the change of scale property, if P has coordinates (α, β) with respect to Σ, then P' = T(P) has coordinates (kxα, kyβ) in the same coordinate system, where kx and ky are the two scale factors. If P is on the graph of a function y = f(x), then P' is on the graph of y = ky · f(x/kx). If both kx and ky are greater than 1, we typically say (in the high school context) that under this transformation, the graph of f(x) is horizontally stretched by a factor of kx, and vertically stretched by a factor of ky. (Note, of course, the well-known asymmetry in the behavior of the dependent and independent variable here: we divide x by kx to stretch horizontally, but we multiply y by ky to stretch vertically.) If either kx or ky is less than 1, we may describe the dilation as a compression rather than a stretch, although here, too, the language is not standardized; if we have a function g(x) and we define f(x) = (1/5)g(3x), we might say that f(x) is a vertical compression by a factor of 5 and a horizontal compression by a factor of 3, but it is also common
to say that f(x) is a vertical compression by a factor of 1/5 and a horizontal compression by a factor of 1/3; usually it is clear from context what is meant.

In some cases horizontal and vertical dilations are essentially the same thing. For example, consider the familiar parabola, y = x². For k > 1, a horizontal compression by a factor of k produces y = (kx)², but this is the same function as y = k²x², which can also be interpreted as a vertical stretch by a factor of k². This can be a source of confusion for students: we want them to understand y = (2x)² as a parabola that is "skinnier" than y = x² (specifically, the x-coordinates of all the points on the parabola are divided by 2), but students are likely to interpret its graph as a parabola that is "taller" than y = x² (because the y-coordinates of all the points on the parabola are multiplied by 4). In general, making a graph "skinnier" has a very different effect than making it "taller", but for the special case of a monomial of the form y = x^d the two transformations coalesce into a single operation.

Can a function be invariant under a dilation? The preceding discussion suggests one way: If you rescale a parabola horizontally by a factor of 2, and then rescale it vertically by a factor of 4, you recover the exact same parabola you began with. More generally, any monomial of the form f(x) = x^d is invariant under an anisotropic dilation with kx = k and ky = k^d for some positive constant k (Exercise 31). But there are other (perhaps more interesting) ways to find a function that is invariant under a dilation. Consider the family of functions of the form
f(x) = |x| cos(2π log_b|x|)
where b is any positive constant. (We have not yet provided a formal treatment of logarithms in this book29, so for now we rely on your prior knowledge of how they work.) This function has the property that for any x,

b · f(x/b) = b · |x/b| · cos(2π log_b|x/b|) = |x| cos(2π log_b|x| − 2π) = |x| cos(2π log_b|x|) = f(x)

which shows that the graph of f is invariant on being stretched horizontally by a factor of b and then stretched vertically by the same factor. More generally, if g(x) is any function that is periodic with period 1, then if we define

f(x) = |x| · g(log_b|x|)

then f(x) is "scale-periodic", in the sense that b · f(x/b) = f(x) (Exercise 32). (Note that the absolute value bars are not, strictly speaking, necessary: however, if we omit them then f(x) is defined only for x > 0, whereas including them makes it possible to extend the domain of the function to the entire real line.) The graph of such a function is an example of a fractal, i.e. a geometric object that exhibits the phenomenon of "self-similarity": as you zoom deeper and deeper into it, you continually encounter new copies of the original graph. (See Figure 4.26 for an example.) In fact, the converse of this is also true: if
Figure 4.26 The graph of y = |x|·cos(2π log_4|x|) is invariant under a dilation with scale factor 4 (or any power of 4). For an interactive, scalable version of this graph, visit http://bit.ly/scaleperiodic and try zooming in and out.
f(x) is any scale-periodic function defined for x > 0 and obeying b · f(x/b) = f(x), then f(x) = x · g(log_b x) for some periodic function g(x) (Exercise 33).

Example 5 (Reflections). Let Σ = (h, O, U, v, V) and Σ' = (h, O, U', v, V') be two orthogonal coordinate systems with the same origin and coordinate axes, with ∆VOU ≅ ∆V'OU' and OV' ≅ OV (equivalently: with OU'/OU = OV'/OV = 1), and in which exactly one of the following holds:

(a) Either V = V' and O is between U and U';
(b) or, U = U' and O is between V and V'.

In this situation the induced transformation T : ℝ² → ℝ² is called a reflection across an axis. (We will define a more general notion of 'reflection' below.) Corresponding to the two cases (a) and (b), we have two different types of reflections:
(a) If V = V' and O is between U and U', then T is called a horizontal reflection across the vertical axis;
(b) while if U = U' and O is between V and V', then T is called a vertical reflection across the horizontal axis.

For brevity, we usually call these simply "a horizontal reflection" and a "vertical reflection" (or alternatively a "reflection across the vertical axis" and "a reflection across the horizontal axis").

What happens if we reflect around both axes? Let T1 be a horizontal reflection and T2 be a vertical reflection. Then the composite T = T1 ∘ T2 maps the point P with coordinates (α, β) to P' with coordinates (−α, −β). Is this a reflection? In order to answer this question, we need a more general definition of 'reflection' against which to compare the transformation.

Definition. A reflection is a transformation T : ℝ² → ℝ² with the following properties:

(a) There exists a unique line ℓ (called the "line of reflection") with the property that for every P ∈ ℓ, T(P) = P (i.e. the points of ℓ are fixed by T).
(b) For every point P ∉ ℓ, T(P) = P', where P' is the unique point in ℝ² such that ℓ is the perpendicular bisector of PP' (i.e. P and P' are on opposite sides of ℓ and equally far away from it).

We often call such a transformation a reflection in ℓ or a reflection across ℓ when we want to emphasize the special role of the line of reflection. We should pause to check that this definition includes the special cases we have already discussed (Exercise 34). But what about the transformation T : (α, β) ↦ (−α, −β)? It is not a reflection: the only point fixed by T is the origin, (0, 0), so no line of reflection exists. Nevertheless, many secondary textbooks (at least in the United States) refer to T as a "reflection in the origin" or a "reflection across the origin". It would be more accurate to describe T as a rotation around the origin (see Example 6 below), but calling T a "reflection in the origin" is so well established that there doesn't seem to be much point in fighting it.

Reflections across the coordinate axes and across the origin are a standard part of the high school curriculum. If P has coordinates (α, β) then the horizontal reflection of P has coordinates (−α, β), while the vertical reflection of P has coordinates (α, −β). Consequently the horizontal reflection of the graph of y = f(x) is the graph of the function y = f(−x), while the vertical reflection of the graph of y = f(x) is the graph of the function y = −f(x). If T is a reflection across the origin, then the image of the graph of y = f(x) is the graph of y = −f(−x).

Functions can certainly be invariant under reflections. If f(x) is invariant under a horizontal reflection across the vertical axis, then f(−x) = f(x) for all x; such a function is said to have even symmetry. If f(x) is invariant under a reflection in the origin, then −f(−x) = f(x) for all x; this condition is usually written in the equivalent form f(−x) = −f(x), and such a function is said to have odd symmetry. The language "even" and "odd" comes from the fact that a polynomial with only even-degree terms will always have even symmetry, while a polynomial with only odd-degree terms will always have odd symmetry (Exercise 35). However, the only function that is invariant under a vertical reflection across the horizontal axis is the constant function f(x) = 0 (Exercise 36). What about other kinds of reflection symmetry?
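Before taking up that question, here is a quick numerical check of the even/odd criteria just described (a minimal sketch; the sample polynomials are arbitrary choices).

```python
import numpy as np

xs = np.linspace(-5, 5, 401)

def is_even(f):  # invariant under a horizontal reflection: f(-x) = f(x)
    return np.allclose(f(-xs), f(xs))

def is_odd(f):   # invariant under a reflection in the origin: f(-x) = -f(x)
    return np.allclose(f(-xs), -f(xs))

p_even = lambda x: x**4 + 8 * x**2 - 7     # only even-degree terms
p_odd  = lambda x: 2 * x**3 + x            # only odd-degree terms
p_mix  = lambda x: x**2 + x                # mixed terms

print(is_even(p_even), is_odd(p_even))     # True  False
print(is_even(p_odd),  is_odd(p_odd))      # False True
print(is_even(p_mix),  is_odd(p_mix))      # False False
```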
Consider for example a general quadratic function of the form f(x) = ax² + bx + c for some constants a, b, c. If b ≠ 0 then such a function does not have even symmetry; nevertheless its graph, a parabola, does have some kind of reflection symmetry. How do we describe this?
We begin by observing that if T is a reflection across a vertical line given by the equation x = K for some constant K, then T maps the point P with coordinates (α, β) to the point P' with coordinates (2K − α, β) (Exercise 37). Consequently the condition for a function to be symmetric under such a reflection is that f(2K − x) = f(x) for all x. It may easily be verified that the criterion for even symmetry is just the special case K = 0 of this more general condition; we can also easily verify that the parabola described by the equation f(x) = ax² + bx + c is symmetric across the vertical line described by x = −b/(2a) (Exercise 38).

Example 6 (Rotations). Let Σ = (h, O, U, v, V) and Σ' = (h', O, U', v', V') be two coordinate systems with the same origin, with OU' ≅ OU, OV' ≅ OV (equivalently: with OU'/OU = OV'/OV = 1), ∠OUV ≅ ∠OU'V', and in which (relative to the coordinate system Σ) the rays OU' and OV' satisfy all of the following conditions:
• OV' is in the quadrant "after" OU'; that is, (a) if OU' is in Quadrant I, then OV' is in Quadrant II, (b) if OU' is in Quadrant II, then OV' is in Quadrant III, (c) if OU' is in Quadrant III, then OV' is in Quadrant IV, and (d) if OU' is in Quadrant IV, then OV' is in Quadrant I.
• If OU' is on the boundary between two adjacent quadrants, then OV' is on the boundary between the subsequent adjacent quadrants (e.g., if OU' is on the boundary between QI and QII then OV' is on the boundary between QII and QIII, etc.).
The transformation T : ℝ² → ℝ² induced by the change of coordinates Σ → Σ' is called a rotation around the origin. The angle of the rotation is the measure of ∠UOU' (which is also the measure of ∠VOV'). There are many conventions about how to measure angles, but for simplicity we will assume here that if OU' is in the upper half-plane (i.e. Quadrants I or II) then the angle of rotation is interpreted as a positive value, corresponding to a counter-clockwise rotation, while if OU' is in the lower half-plane (i.e. Quadrants III or IV) then the angle of rotation is a negative value, corresponding to a clockwise rotation. Perhaps unsurprisingly, writing an explicit formula for the coordinates of a point under a rotation through an arbitrary angle θ requires some trigonometry. It can be shown (although we will not derive it here) that if P has coordinates (α, β) with respect to Σ, and if T(P) = P', then the coordinates of P' with respect to Σ are
(α cos θ − β sin θ, β cos θ + α sin θ) In particular, if θ = 180° , then the coordinates of P' are ( −α, −β ), which shows that what we previously called “reflection in the origin” is really a 180° rotation around the origin. In general, rotating the graph of a function does not produce another function. The main exception is, as noted above, the case of a 180° rotation, which transforms the graph of y = f ( x ) into the graph of y = − f ( − x ). If a function is invariant under such a transformation, it is said to have odd symmetry. Just as we can reflect across lines that are not the coordinate axes, we can also rotate around points other than the origin. We consider the general case below when we consider compositions of transformations, but let’s pause to consider the special case of a 180°
rotation around an arbitrarily chosen point (A, B). If any point P with coordinates (α, β) is rotated 180° around (A, B), its image P' has coordinates (2A − α, 2B − β) (Exercise 39). Therefore if P is on the graph of y = f(x), P' will be on the graph of y = 2B − f(2A − x). This is the equation of the rotated graph. Moreover, the condition that the graph is invariant under such a rotation can be written in the form f(A + x) − B = B − f(A − x), or equivalently as
B = (f(A + x) + f(A − x)) / 2
which expresses succinctly the notion that two equally spaced input values on either side of x = A evaluate to output values that are equally spaced on either side of y = B.

In the previous example we saw that every parabola has reflection symmetry across the line x = −b/(2a). It turns out that every cubic function has a symmetry, too:

Proposition. Let f(x) = ax³ + bx² + cx + d be an arbitrary cubic function. There exists a point P, with coordinates (A, B) and with B = f(A), with the property that the graph of y = f(x) is invariant under a 180° rotation around P.
Proof. Rather than pull a formula for A and B out of thin air, let's see if we can discover it. Based on the discussion above, we want

2f(A) = f(A + x) + f(A − x)

Evaluating this for our specific function f(x) = ax³ + bx² + cx + d, we obtain the (somewhat unwieldy) equation

2(aA³ + bA² + cA + d) = a(A + x)³ + b(A + x)² + c(A + x) + d + a(A − x)³ + b(A − x)² + c(A − x) + d

Expanding and simplifying the right-hand side, and canceling common terms that appear on both sides of the equation, we obtain (eventually)

0 = 3Aax² + bx²

Now we want this to be true for all values of x. The only way for this to happen is if 3Aa + b = 0, i.e. if A = −b/(3a). This gives us the x-coordinate of our desired center of rotation. The corresponding y-coordinate is found by simply computing B = f(A).

The proof above is rather indirect, so perhaps an example is in order. Consider the cubic function f(x) = x³ − 6x² + 2x + 5. Based on our calculation above, the graph of y = f(x) is
Figure 4.27 The graph of y = x³ − 6x² + 2x + 5 is symmetric under a 180° rotation around (2, −7).
symmetric around a point (A, B) with A = −(−6)/(3(1)) = 2 and with B = f(2) = −7. Thus (2, −7) is
the center of our rotation; see Figure 4.27. The attentive reader may notice that this point also appears to be the inflection point of the cubic equation, and indeed this is neither an illusion nor a coincidence, as the next proposition shows. Proposition. Every cubic function has 180° rotational symmetry around its sole inflection point.
Proof. There are actually two distinct ways to prove this fact (at least). Both, of course, require Calculus—how else can we even talk about an inflection point? One method of proof is to simply take the second derivative of f(x) = ax³ + bx² + cx + d, set it equal to zero, and find that the solution occurs at x = −b/(3a), the same x-coordinate as was found in the previous proposition. It is then a simple matter (using 1st-semester Calculus techniques) to confirm that the concavity changes at this point, so that the center of rotational symmetry is also an inflection point of the function.

A slightly more interesting proof runs as follows: first, notice that the first derivative of y = f(x) is the quadratic function f'(x) = 3ax² + 2bx + c. Based on our discussion in the previous example, the graph of this quadratic function is a parabola with reflection symmetry around the line x = −2b/(2(3a)) = −b/(3a). This implies that if we translate the function
y = f(x) to the left by −b/(3a), the resulting translated function g(x) would have a quadratic derivative g'(x) whose axis of symmetry would be on the y-axis. Then the second derivative g''(x) would be a linear function that passes through the origin; this tells us that g(x) has an inflection point at x = 0. Finally, translating back to the right we conclude that the original function f(x) had its inflection point at x = −b/(3a), as claimed.
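As a numerical sanity check of these two propositions (a minimal sketch, using the cubic from the example above), we can verify that f(A + x) + f(A − x) = 2B with A = −b/(3a) and B = f(A).

```python
import numpy as np

a, b, c, d = 1, -6, 2, 5                       # f(x) = x^3 - 6x^2 + 2x + 5
f = lambda x: a * x**3 + b * x**2 + c * x + d

A = -b / (3 * a)                               # x-coordinate of the center of symmetry
B = f(A)                                       # y-coordinate: B = f(A)
print(A, B)                                    # 2.0 -7.0

xs = np.linspace(-10, 10, 1001)
# 180-degree rotational symmetry about (A, B): f(A + x) + f(A - x) = 2B for all x
print(np.allclose(f(A + xs) + f(A - xs), 2 * B))   # True
```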
Example 7 (Shears). Our final example of this section is a shear. Let Σ be a non-orthonormal coordinate system, and let Σ'' be the orthonormal coordinate system produced from it using the method of §4.6. Now suppose we begin in the coordinate system Σ'' and change to Σ. This induces a transformation T : ℝ² → ℝ². If P has coordinates (α, β) with respect to Σ'', then (still with respect to the orthonormal coordinates Σ''), T(P) = P' has coordinates (α + rβ, sβ), where r, s are the shear constants introduced in the proof of the Orthonormalization of Coordinates proposition.

Shears are unlike reflections, rotations, translations and isotropic dilations, and like anisotropic dilations, in that they distort the shapes of geometric objects. The image of a unit square (oriented with sides parallel to an orthonormal coordinate system) under a reflection, rotation, translation or isotropic dilation is always a square; a shear, on the other hand, turns such a square into a parallelogram, while an anisotropic dilation turns it into a rectangle. We will have more to say about this in the next section.
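To make the contrast concrete, the following sketch (with arbitrary sample shear constants and scale factors) applies a shear and an anisotropic dilation to the corners of the unit square.

```python
import numpy as np

square = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)  # corners (alpha, beta)

def shear(p, r, s):               # (alpha, beta) -> (alpha + r*beta, s*beta)
    a, b = p
    return np.array([a + r * b, s * b])

def aniso_dilation(p, kx, ky):    # (alpha, beta) -> (kx*alpha, ky*beta)
    a, b = p
    return np.array([kx * a, ky * b])

print([tuple(shear(p, r=0.5, s=1.0)) for p in square])
# corners map to (0,0), (1,0), (1.5,1), (0.5,1): a parallelogram

print([tuple(aniso_dilation(p, kx=2.0, ky=3.0)) for p in square])
# corners map to (0,0), (2,0), (2,3), (0,3): a rectangle
```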
Exercises

28. Show that if f is invariant under a translation to the right, it is also invariant under a translation to the left.
29. In Example 1, explain why we can regard D as isomorphic to an additive subgroup of ℝ.
30. Prove that a non-constant polynomial function p(x) cannot be periodic. (Hint: suppose it were, and consider the function p(x) − p(0). How many zeros would it have?)
31. Show that any monomial of the form f(x) = x^d is invariant under an anisotropic dilation with kx = k and ky = k^d for some positive constant k.
32. Show that if g(x) is any function that is periodic with period 1, then f(x) = |x| · g(log_b|x|) is "scale-periodic", in the sense that b · f(x/b) = f(x).
33. Prove the converse of Exercise 32: if f(x) is any scale-periodic function defined for x > 0 and obeying b · f(x/b) = f(x), then f(x) = x · g(log_b x) for some periodic function g(x). Explicitly show how to define g(x) from f(x).
34. Verify that the general definition of "reflection" includes as special cases the transformations "horizontal reflection across the vertical axis" and "vertical reflection across the horizontal axis".
35. Show that a polynomial with only even-degree terms will always have even symmetry, while a polynomial with only odd-degree terms will always have odd symmetry.
36. Show that the only function that is invariant under a vertical reflection across the horizontal axis is the constant function f(x) = 0.
37. Show that if T is a reflection across a vertical line given by the equation x = K for some constant K, then T maps the point P with coordinates (α, β) to the point P' with coordinates (2K − α, β).
38. Show that the parabola described by the equation y = ax² + bx + c is symmetric across the vertical line described by x = −b/(2a).
39. Show that if any point P with coordinates (α, β) is rotated 180° around (A, B), its image P' has coordinates (2A − α, 2B − β). (Hint: draw a clear diagram.)
4.8 Groups of Transformations

In the last section we saw how changing from one coordinate system to another induces a transformation T : ℝ² → ℝ²; in particular, we saw how translations, dilations, reflections, rotations and shears can arise in this fashion. In this section we consider how combinations of transformations affect the geometry of the plane, and continue our inquiry into how some of the "standard" functions of high school mathematics can be characterized by their invariance under certain transformations.

We begin with a brief survey of the role transformations play in high school geometry. The role of transformations in geometry teaching has waxed and waned over the past hundred years, but there is little doubt that the way mathematicians conceptualize geometry has been radically transformed by the transformational perspective. In 1872 Felix Klein became a full professor at the University of Erlangen, where he launched what came to be known as the "Erlangen programme"—nothing less ambitious than a complete reorganization of geometry along the lines of what was then just beginning to be understood as group theory. Klein proposed that "geometry" be understood as the study of those properties of a space (such as, for example, a Euclidean plane) that are left invariant under the action of some group of transformations. Different choices of the group of transformations lead, naturally, to different sets of invariant properties, and thus to different kinds of geometries. Thus, projective geometry, Euclidean geometry, hyperbolic (and other non-Euclidean) geometries are each realized as a set of invariants under a different group of transformations. This transformation-based perspective was a radical departure from the "synthetic" approach of Euclid and his modern updaters, in which the goal was to build towers of theorems on a foundation of axioms and postulates. Klein's Erlangen Programme was unlike Hilbert's Grundlagen in that it did not seek to fill the logical holes in the foundations of Euclid's Elements, but rather to replace the entire structure with a completely new approach.

The impact of Klein's transformation-based geometry on schools varied across different countries. It had its greatest impact in the European educational system, where over the course of the 20th century transformations were incorporated into the secondary curriculum in France, Germany and Russia. In the United States, however, the transformation-based approach to geometry largely failed to penetrate the secondary curriculum until 1972, when Usiskin and Coxford published their textbook Geometry: A transformation approach. More recently, the Common Core State Standards in Mathematics have advocated for a more thorough incorporation of transformations into the secondary curriculum, and a number of Standards-aligned textbooks have followed suit.
Just as we did not attempt a thorough treatment of axiomatic Euclidean geometry at the beginning of this chapter, so too will we refrain from attempting a thorough analysis of geometry from a transformation perspective. We will, however, summarize some of the main definitions and theorems of this approach, omitting the proofs (some of which will be left for the exercises; the rest can be found in most upper-level Geometry texts). We begin with the key idea of a rigid motion:

Definition. A rigid motion (also called an isometry) of the plane is a transformation T : ℝ² → ℝ² that is bijective and preserves distances, in the following sense: if P and Q are any points in the plane, and if P' = T(P) and Q' = T(Q) are their images under the transformation, we require PQ ≅ P'Q', or equivalently P'Q'/PQ = 1. Note that neither of these conditions requires us to actually assign a numerical measurement to the length of a segment—we are (for now) still playing with the same rules we established earlier, in which segments are only measured relative to one another, not relative to some fixed absolute scale.

The following proposition identifies some (probably unsurprising) examples of rigid motions:

Proposition. (a) Any translation, reflection, or rotation is a rigid motion. (b) The set of all rigid motions of the plane forms a group: a composition of rigid motions is itself a rigid motion, the identity transformation is a rigid motion, and every rigid motion is invertible.
Proof. (a) Omitted. (b) Exercise 40. While it is perhaps unsurprising that a composition of distance-preserving transformations is a distance-preserving transformation, there are some surprising relationships among the set of isometries. For example we have the following double-reflection properties:
Proposition. Let ℓ and m be any two lines in the plane, and let r_ℓ and r_m be reflections across ℓ and m, respectively. Then:

(a) If ℓ ∥ m, the composition r_ℓ ∘ r_m is a translation.
(b) If ℓ ∦ m, the composition r_ℓ ∘ r_m is a rotation.

Conversely, any translation or rotation can be "factored" as a composition of two reflections (although this decomposition into reflections is not unique).
Proof. Omitted.

In a very real sense, these last two propositions tell us that all of the rigid motions we know about so far—the reflections, rotations, and translations of the previous section—can be generated from reflections. This raises the basic question: can every rigid motion be generated from reflections? And if so, how many are necessary? The next Theorem sits at the core of the theory of rigid motions, and in a more detailed treatment would be the climax of the theory's development, so we give it an appropriately grandiose name:
Theorem. (Fundamental Theorem of Rigid Motions). Every isometry T : ℝ² → ℝ² can be expressed as a composition of three reflections or fewer. Moreover, every isometry is one of the following types:

(i) A reflection
(ii) A rotation
(iii) A translation
(iv) A glide reflection (that is, a translation through a vector, followed by a reflection across a line parallel to that vector)
Proof. Omitted.

It is perhaps worth noticing that of the four types of isometry enumerated in the Fundamental Theorem of Rigid Motions, only a glide reflection requires three reflections; the other types of isometry can all be performed with one or two reflections.

If all we were interested in were rigid motions, we could stop here. But we have already introduced other, non-isometric transformations in the previous section, and many of them also play important roles in the study of geometry. So, in the spirit of Klein's Erlangen Programme, we now enlarge the group of isometries to a larger group, the group of similarities.

Definition. A similarity of the plane is a transformation T : ℝ² → ℝ² that preserves ratios of lengths, in the following sense: if PQ and RS are any segments in the plane, with images P'Q' and R'S', respectively, we require P'Q'/PQ = R'S'/RS. This common ratio is called the scale factor of the similarity. (As with the definition of a rigid motion, note that this condition does not require us to actually assign a numerical measurement to the length of an individual segment, only to measure one segment relative to another.)

It is straightforward to show that the set of similarities is a group (Exercise 42), and that every isometry of the plane is automatically a similarity (Exercise 43). But the group of similarities also includes some non-isometric transformations; in particular, every isotropic dilation is a similarity of the plane. In fact, we have the following theorem, which completely classifies the structure of the group of similarities:

Theorem. (Fundamental Theorem of Similarities). Every similarity T : ℝ² → ℝ² can be expressed as a composition of an isotropic dilation and a rigid motion.
Proof. Omitted.

While the definition of similarity makes reference only to the preservation of ratios of lengths, it is a remarkable fact that similarities also preserve angles. That is, if a similarity T : ℝ² → ℝ² sends three points P, Q and R to images P', Q' and R', then the two angles ∠PQR and ∠P'Q'R' are congruent. We omit here the proof of this; however, this important fact forms the link between the similarity transformations we are discussing here, on the one hand, and the topic of "similar figures" as it is found in secondary curricula, on the other. That is, we have the following:
Proposition. Let P1 P2 … Pn be a polygon with n sides, and let T: E2 → E2 be a similarity. Then the image P1' P2' … Pn' is a similar polygon, in the sense that corresponding sides of P1 P2 … Pn and P1' P2' … Pn' are proportional, and corresponding angles are congruent. Moreover, if P1 P2 … Pn and Q1 Q2 … Qn are two similar polygons, then there exists a similarity T: E2 → E2 mapping each Pi onto the corresponding vertex Qi.
Proof. Omitted. Since an isometry is just a similarity with scale factor 1, we have the following special case: Proposition. Let P1 P2 … Pn be a polygon with n sides, and let T: E2 → E2 be an isometry. Then the image P1' P2' … Pn' is a congruent polygon, in the sense that corresponding sides of P1 P2 … Pn and P1' P2' … Pn' are congruent, as are corresponding angles. Moreover, if P1 P2 … Pn and Q1 Q2 … Qn are two congruent polygons, then there exists an isometry T: E2 → E2 mapping each Pi onto the corresponding vertex Qi.
Proof. Omitted.

Much of high school geometry is concerned with studying the properties of congruent or similar figures; the two preceding propositions allow us to reframe those properties in terms of transformations. For example, the following is a restatement of the well-known "Side-Angle-Side" congruence property for triangles:

Proposition. (SAS Congruence) Let {A, B, C} and {P, Q, R} be two sets of points in the plane, for which AB/PQ = BC/QR = 1 and ∠ABC ≅ ∠PQR. Then there exists a (unique) isometry T: E2 → E2 with T(A) = P, T(B) = Q and T(C) = R.
Proof. Omitted.

Other properties of congruence and similarity—Angle-Side-Angle, Side-Side-Side, etc.—can similarly be expressed in the language of transformations (see Exercises 44 and 45). Thus the groups of isometries and similarities contain within them essentially all of the raw material for the study of Euclidean geometry. This is the goal of Klein's Erlangen Programme: to realize the properties of geometry as aspects of particular groups of transformations.

Similarity properties are, of course, ubiquitous throughout the Geometry curriculum, where they are deeply interconnected with the properties of parallel lines. It is less obvious, but no less true, that they are also deeply interconnected with the properties of linear and quadratic functions, the staples of the Algebra 1 curriculum. For example, the proof of the Fundamental Theorem of High School Algebra (§4.5)—the correspondence between 1st-degree equations and straight lines in E2—relied in an essential way on the SAS similarity property. In addition, we have the following somewhat remarkable example:
Proposition. (Parabolas and Similarity) All parabolas are similar to each other.

Before embarking on the proof, we first observe that we have not, to this point, defined the word "parabola", an oversight which obviously needs to be corrected before a proof is possible. In the context of high school Algebra 1, the word parabola is used exclusively to refer to the graph of an equation of the form y = ax² + bx + c, where a, b, c ∈ ℝ, and with a ≠ 0. Such graphs always have a vertical axis of symmetry (refer back to Exercise 38). Later, in Algebra 2, the usage of the word is expanded to include parabolas that are horizontally oriented as well; in this context, students typically learn that a parabola can be described by specifying a (horizontal or vertical) line, called the directrix of the parabola, and a specified point (not on the directrix) called the focus. The most general definition of a parabola is as follows:

Definition. Given any line ℓ and point P not on ℓ, the parabola determined by ℓ and P is the set 𝒫 consisting of all points in the plane that are equidistant from ℓ and P. More precisely, let Q be any point in E2, and let R be the unique point on ℓ with the property that QR ⊥ ℓ. Then Q is on the parabola if and only if QR/QP = 1. Note that in this definition the directrix need not be horizontal or vertical; also note that, consistent with our approach throughout this chapter, this definition does not require us to actually measure the distances QR and QP, only to know whether or not they are equal, or, equivalently, to measure their ratio. (See Figure 4.28.)
Figure 4.28 Point Q on the parabola is equally distant from the directrix and the focus P.
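As a concrete illustration of the focus/directrix condition (a numerical sketch; the focus and directrix here are arbitrary sample choices), the points of the graph of y = x²/4 are equidistant from the focus P = (0, 1) and the horizontal directrix y = −1.

```python
import numpy as np

# Sample focus and directrix (chosen for illustration): focus P = (0, 1),
# directrix the horizontal line y = -1.  The resulting parabola is y = x**2 / 4.
focus = np.array([0.0, 1.0])
directrix_y = -1.0

xs = np.linspace(-6, 6, 501)
points = np.stack([xs, xs**2 / 4], axis=1)              # points Q on the parabola

dist_to_focus = np.linalg.norm(points - focus, axis=1)
dist_to_directrix = np.abs(points[:, 1] - directrix_y)

print(np.allclose(dist_to_focus, dist_to_directrix))    # True: QR/QP = 1 for every Q
```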
Of particular interest are the vertically oriented parabolas; that is, those parabolas for which the directrix is a horizontal line. Let ℓ be the line corresponding to the equation y = a, for some constant a, and let P = (b, c). Then the condition for a point Q = (x, y) to be equally distant from ℓ and P is

y − a = √((x − b)² + (y − c)²)

which, after some manipulation, is equivalent to

y = A(x − b)² + B
where A and B are constants that can be expressed in terms of a, b and c (Exercises 46, 47). Conversely, any equation of this form corresponds to a vertically oriented parabola. With these basic facts in hand, we are ready to prove the Parabolas and Similarity Proposition:

Proof. We will actually show that any parabola can be mapped via a similarity to the parabola 𝒫₀ defined by the equation y = x². This implies the general claim: for if T₁ : ℝ² → ℝ² is a similarity that maps a parabola 𝒫₁ onto 𝒫₀, and T₂ : ℝ² → ℝ² is another similarity that maps a second parabola 𝒫₂ onto 𝒫₀, then the composition (T₂)⁻¹ ∘ T₁ is a similarity that maps 𝒫₁ onto 𝒫₂.

We begin by observing that any parabola can be mapped onto another parabola with a horizontal directrix by means of a simple rotation of the plane; then, by a translation, any such (vertically oriented) parabola can be mapped onto yet another parabola with its vertex at the origin. Thus, by a rigid motion we can map any parabola onto a parabola 𝒫' corresponding to an equation of the form y = Ax². To complete the proof, we need to show how to map a parabola 𝒫' corresponding to y = Ax² onto 𝒫₀. In fact, this can be done by an isotropic dilation, as follows: We need to find some scale factor k such that the dilation (a, b) ↦ (ka, kb) maps every point on 𝒫' to a point on 𝒫₀. Since every point on 𝒫' has the form (a, Aa²), this means we need to find k such that (ka, kAa²) satisfies the equation y = x². In other words, we need to have
kAa² = (ka)²

It may easily be verified that this equation is solved by choosing k = A. Thus, any parabola can be mapped onto 𝒫₀ by a rotation, followed by a translation, followed by a dilation.

In the language of group theory, we can summarize the previous proposition by saying that the group of similarities acts transitively on the set of all parabolas in the plane: any parabola can be mapped onto any other parabola by choosing an appropriate similarity.

But why stop with similarities? What if we want to consider additional transformations of the plane, ones that are neither isometries nor similarities? In fact, we already have two examples of such transformations: the shear and the anisotropic dilation. Recall from the last section that a shear maps a point with coordinates (α, β) onto the point with coordinates (α + rβ, sβ), where r, s are two real numbers called the shear constants; an anisotropic dilation maps (α, β) onto (kxα, kyβ), where kx, ky are the horizontal and vertical scale factors (which need not be equal). In general shears and anisotropic dilations do not preserve lengths, ratios of lengths, or angles; in fact, we have the following proposition.
Proposition. Choose an orthonormal coordinate system, and let A, B, C, D be the points with coordinates (0, 0), (1, 0), (1, 1), (0, 1), respectively, so that polygon ABCD is a square. Consider a transformation of the plane T : ℝ² → ℝ², and let A'B'C'D' be the image of ABCD under T. Then:

(a) If T is an anisotropic dilation, with scale factors kx and ky, then A'B'C'D' is a rectangle.
(b) If T is a shear, with shear constants r, s, then A'B'C'D' is a parallelogram.
Proof. Exercises 50 and 51.

Although shears and anisotropic dilations do not preserve angles or the ratios of lengths, they do preserve incidence, lines, and parallelism. We turn our attention now to the set of all transformations that preserve these properties:

Definition. An affine transformation (sometimes called an "affinity") of the plane is a bijective transformation T : ℝ² → ℝ² that preserves lines, incidence, and parallelism; that is,

(a) If ℓ ⊂ ℝ² is a line, then so is ℓ', its image under T.
(b) If point P lies on line ℓ, and P' is the image of P under T, then P' lies on ℓ'.
(c) If lines ℓ and m are parallel, then their images ℓ' and m' are parallel as well.

The study of those properties that are preserved by affine transformations is called affine geometry. Unsurprisingly, the set of all affine transformations forms a group (Exercise 52), and all isometries and similarities are affine transformations (Exercise 53). We have already observed that shears and anisotropic dilations are also affine transformations. Remarkably, all affinities can be generated by similarities and shears:

Theorem. (Fundamental Theorem of Affine Transformations). Every affinity T : ℝ² → ℝ² can be expressed as a composition of similarities and shears.
Proof. Omitted.

For example, an anisotropic dilation T : (α, β) ↦ (kxα, kyβ) can be decomposed as a composition T₂ ∘ T₁, where T₁ is a similarity and T₂ is a shear (Exercise 54).

One useful way of thinking about affine transformations is by representing them using matrix operations. Let

M = [ a  b ]
    [ c  d ]

be an invertible 2 × 2 real matrix, and let

v = [ h ]
    [ k ]
be a column matrix. Then the transformation (α, β) ↦ (α', β') defined by

[ α' ]   [ a  b ][ α ]   [ h ]
[ β' ] = [ c  d ][ β ] + [ k ]

is an affine transformation. If we introduce the column-vector notation

x = [ α ]      y = [ α' ]
    [ β ],         [ β' ]

then this can be written compactly as y = Mx + v, a formula reminiscent of the familiar slope-intercept equation of a line. Moreover, given any affine transformation T : ℝ² → ℝ², there exists a unique pair M, v such that T(x) = Mx + v.

Let's consider a few familiar examples of affine transformations, and their representations in matrix notation:

Example 1 (Translations). Let T : ℝ² → ℝ² be a translation. Then, relative to some coordinate system, T acts by mapping a point with coordinates (α, β) to a point with coordinates (α + h, β + k) for some constants h, k (refer back to §4.7, Example 3). This transformation can be represented by the matrix function T(x) = I₂x + v, where I₂ is the 2 × 2 identity matrix and v is the column matrix with entries h and k. For simplicity, we may also write T(x) = x + v.

Any transformation of the form T(x) = Mx + v can be naturally regarded as a composition of two transformations, T = T₂ ∘ T₁, where T₁(x) = Mx is a transformation fixing the origin, and T₂(x) = x + v is a translation. The set of affine transformations that fix the origin can therefore be identified with the group of invertible 2 × 2 real matrices, called the general linear group in two dimensions and denoted GL(2). All of the basic affine transformations we have considered so far have simple representations as matrices:

Example 2 (Rotations). Let T : ℝ² → ℝ² be a rotation around the origin through an angle θ. Then T can be represented by the matrix function T(x) = Mx where M is the 2 × 2 rotation matrix

M = [ cos θ  −sin θ ]
    [ sin θ   cos θ ]

Example 3 (Reflections). Let T : ℝ² → ℝ² be a reflection across the vertical axis. Then T can be represented by the matrix function T(x) = Mx where M is the 2 × 2 reflection matrix

M = [ −1  0 ]
    [  0  1 ]

Likewise, if T is a reflection across the horizontal axis, then the corresponding matrix is

M = [ 1   0 ]
    [ 0  −1 ]

Example 4 (Shears). Let T : ℝ² → ℝ² be a shear, (α, β) ↦ (α + rβ, sβ). Then T corresponds to the matrix function T(x) = Mx where M is the 2 × 2 shear matrix

M = [ 1  r ]
    [ 0  s ]
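The following sketch (a minimal numpy illustration; the angle, shear constants and translation vector are arbitrary sample values) assembles a few of these matrices, applies the resulting affine maps T(x) = Mx + v to a point, and checks that composing the origin-fixing parts corresponds to multiplying the matrices.

```python
import numpy as np

def affine(M, v):
    """Return the affine map T(x) = M @ x + v as a Python function."""
    return lambda x: M @ x + v

theta = np.pi / 6                                   # sample rotation angle
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])     # rotation about the origin
S = np.array([[1.0, 0.5],
              [0.0, 1.0]])                          # shear with r = 0.5, s = 1
v = np.array([3.0, -2.0])                           # sample translation vector

p = np.array([1.0, 2.0])
T1 = affine(R, np.zeros(2))                         # rotation (fixes the origin)
T2 = affine(S, v)                                   # shear followed by a translation

print(T2(T1(p)))                                    # the composite applied to p
# Composition of the origin-fixing parts corresponds to matrix multiplication:
print(np.allclose(T2(T1(p)), (S @ R) @ p + v))      # True
```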
The matrix representations of dilations (both isotropic and anisotropic) are left for Exercise 55.

We conclude this section by asking, in the spirit of Klein's Erlangen Programme, the question: What types of functions are left invariant under the various sorts of transformations discussed above? We have already seen (in §4.7) that periodic functions, such as the common trigonometric functions, are invariant under horizontal translations; we have also seen that monomials (i.e. power functions) are invariant under anisotropic dilations. What about more exotic combinations?

Example 5. Let T be a horizontal translation followed by a vertical stretch:
(α, β) ↦ (α + h, β) ↦ (α + h, kβ).

A function f : ℝ → ℝ is invariant under T if and only if k·f(x − h) = f(x). Exponential functions have precisely this property! Consider the exponential function f(x) = A^x for some positive base A. Then if the vertical scale factor is k = A^h, we have k·f(x − h) = A^h · A^(x−h) = A^x = f(x), as required. We will see (in Chapter 5) that this invariance property almost completely characterizes the family of exponential functions.
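A quick numerical check of this invariance (a sketch only; the base A and shift h are arbitrary sample values):

```python
import numpy as np

A, h = 2.0, 1.5                      # sample base and horizontal shift
k = A ** h                           # the matching vertical scale factor
f = lambda x: A ** x

xs = np.linspace(-5, 5, 401)
print(np.allclose(k * f(xs - h), f(xs)))   # True: f is invariant under T
```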
Example 6. Let T be a horizontal stretch followed by a vertical translation:

(α, β) ↦ (kα, β) ↦ (kα, β + h).

A function f : ℝ → ℝ is invariant under T if and only if f(x/k) + h = f(x). Logarithmic functions have this property—for if f(x) = log_b x for some positive base b, then with h = log_b k we have f(x/k) + h = log_b(x/k) + log_b k = log_b x. Again, we will see in Chapter 5 that this property essentially characterizes the logarithmic functions uniquely.

Examples 5 and 6, together with the cases considered earlier, show that essentially all of the basic toolkit functions of the secondary curriculum have special properties that can be described using the language of transformations:
• The first-degree functions correspond to the graphs of straight lines, which are invariant under (certain) translations and reflections; moreover the group of affinities acts transitively on the set of lines in the plane.
• The quadratic functions correspond to the graphs of parabolas, which are invariant under reflections across their axis of symmetry; moreover the group of similarities acts transitively on the set of parabolas.
• The monomials (or power functions) are precisely the functions that are invariant under anisotropic dilations.
• An exponential function is invariant under a combination of a horizontal shift and a vertical stretch.
• A logarithmic function is invariant under a combination of a horizontal stretch and a vertical shift (see the sketch following this list).
• A trigonometric function is invariant under a horizontal translation.
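As a check of the logarithmic case flagged in the list above (a sketch; the base and stretch factor are arbitrary sample values), for f(x) = log_b(x) and h = log_b(k) we have f(x/k) + h = f(x).

```python
import numpy as np

b, k = 10.0, 4.0                        # sample base and horizontal stretch factor
h = np.log(k) / np.log(b)               # the matching vertical shift, h = log_b(k)
f = lambda x: np.log(x) / np.log(b)     # f(x) = log_b(x)

xs = np.linspace(0.1, 50, 500)
print(np.allclose(f(xs / k) + h, f(xs)))   # True: f is invariant under T
```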
Exercises

40. Prove (using only the definition) that the set of all rigid motions forms a group: that is, it is closed under composition, it contains an identity element, and every rigid motion has an inverse that is also a rigid motion.
41. Find a contemporary secondary Geometry textbook that includes the double-reflection properties. Then find a second contemporary Geometry textbook that does not include them.
42. Prove that the set of similarities of the plane forms a group.
43. Prove, using only the definitions, that every rigid motion is a similarity of the plane with scale factor 1.
44. Restate the main triangle congruence properties of secondary Geometry (Side-Side-Side, Angle-Side-Angle, and Angle-Angle-Side) using the language of transformations.
45. Restate the main triangle similarity properties of secondary Geometry (Angle-Angle, Side-Side-Side, and Side-Angle-Side) using the language of transformations.
46. Let ℓ be the line corresponding to the equation y = a, for some constant a, and let P = (b, c). Show that the condition for a point Q = (x, y) to be equally distant from ℓ and P can be expressed in the form y = A(x − b)² + B, where A and B are constants that can be expressed in terms of a, b and c.
47. High school Algebra 2 courses often write the equation of a vertically oriented parabola in so-called "standard form": y = (1/(4p))(x − h)² + k. Draw a diagram showing the relationships between the parameters h, k and p and the constants a, b and c from Exercise 46.
48. Find an explicit quadratic equation in two variables for the parabola determined by the directrix ℓ : x + y = −2 and the focus P = (0, 0).
49. Describe explicitly the sequence of transformations that maps the parabola of Exercise 48 onto the parabola 𝒫₀ corresponding to the equation y = x².
50. Prove that the image of a unit square under an anisotropic dilation is a rectangle. Express the side lengths, their ratio, and the area of the image in terms of the scale factors.
51. Prove that the image of a unit square under a shear is a parallelogram. Express the area, side lengths, and the measures of the angles of the image in terms of the shear constants.
52. Prove that the set of affine transformations is a group.
53. Prove that all isometries and similarities are affine transformations.
54. Show that an anisotropic dilation T : (α, β) ↦ (kxα, kyβ) can be decomposed as a composition T₂ ∘ T₁, where T₁ is a similarity and T₂ is a shear.
55. Describe the matrix representation of a dilation (whether isotropic or anisotropic).
56. Show that an affine transformation preserves ratios of lengths along a line. That is, while in general for two segments AB, CD it is not true that AB/CD = A'B'/C'D', it is true if A, B, C, D all lie on a common line ℓ. Consequently, an affine transformation preserves midpoints.
57. Show that any parallelogram can be mapped onto any other parallelogram via a unique affine transformation (in other words, the group of affinities acts transitively on the set of all parallelograms).
4.9 Operations on Functions

Recall from §4.7 that a function f : ℝ → ℝ is called an odd function if its graph is invariant under a 180° rotation around the origin, or, equivalently, if it satisfies the condition f(−x) = −f(x); and it is called an even function if its graph is invariant under a reflection across the y-axis, or, equivalently, if it satisfies the condition f(−x) = f(x). Most functions,
of course, are neither odd nor even. For example, while f(x) = 2x³ + x is an odd function, and g(x) = x⁴ + 8x² − 7 is an even function, their sum (f + g)(x) = x⁴ + 2x³ + 8x² + x − 7 is neither odd nor even. This example is fairly typical: in general, the sum of an even function and an odd function is neither even nor odd.

Surprisingly, however, every function can be decomposed as a sum of an "odd part" and an "even part"! In the case of a polynomial, this is not particularly surprising; in the example above, one can easily identify the odd and even parts of x⁴ + 2x³ + 8x² + x − 7 by simply grouping together the odd-degree and even-degree terms. But what about non-polynomial functions? What, for example, are the odd and even parts of an exponential function like f(x) = 2^x?

Definition. Let f : ℝ → ℝ be any function. Define the even part of f, denoted f₀, by
f₀(x) = (f(x) + f(−x)) / 2
and define the odd part of f, denoted f₁, by
f₁(x) = (f(x) − f(−x)) / 2.
(The subscripts 0 and 1 are intended to be mnemonics for "even" and "odd", respectively.) The following proposition is easily verified:

Proposition. (Existence of Decompositions) For any function f : ℝ → ℝ,

(a) For all x ∈ ℝ, f(x) = f₀(x) + f₁(x).
(b) f₀ is an even function.
(c) f₁ is an odd function.
(d) f(x) = f₀(x) for all x ∈ ℝ if and only if f is even.
(e) f(x) = f₁(x) for all x ∈ ℝ if and only if f is odd.
Proof. See Exercises 58–60.

The preceding proposition shows that decompositions of a function into even and odd parts always exist. Furthermore, this decomposition of a function is unique. That is, we have the following:

Proposition (Uniqueness of Decompositions). Let f = f₀ + f₁ and f = g₀ + g₁ be two decompositions of a function f into even and odd parts. Then for all x ∈ ℝ, f₀(x) = g₀(x) and f₁(x) = g₁(x).
Proof. Under the hypotheses of the proposition, we have f₀ + f₁ = g₀ + g₁, from which we find

f₀ − g₀ = g₁ − f₁
Now we observe that the left-hand side of this equation, f₀ − g₀, is a difference of two even functions, and is therefore itself even; similarly g₁ − f₁ is a difference of two odd functions, and is therefore odd (see Exercise 61). Therefore, the two sides of this equation describe a function that is simultaneously both odd and even. However, the only function that is both odd and even is the constant function f(x) = 0 (Exercise 63). Thus, f₀(x) − g₀(x) = 0 and f₁(x) − g₁(x) = 0, from which the conclusion follows.

In the case where f is a polynomial function, it can be verified (Exercise 64) that f₀ consists precisely of the even-degree terms of f, and f₁ consists of its odd-degree terms. What about non-polynomial functions?

Example. Consider the exponential function f(x) = e^x. Then the even and odd parts of f are
f₀(x) = (e^x + e^(−x)) / 2,    f₁(x) = (e^x − e^(−x)) / 2
These two functions are known (respectively) as the hyperbolic cosine and hyperbolic sine functions, denoted cosh x and sinh x. As their names suggest, there are a number of strong analogies between the hyperbolic sine and cosine, on the one hand, and the more familiar trigonometric functions on the other:

Proposition. (a) The derivative of cosh x is sinh x, and the derivative of sinh x is cosh x. (b) For all x ∈ ℝ, cosh²x − sinh²x = 1.

Proof. Exercises 65–66.

Part (b) of the preceding proposition implies that, just as the set of points of the form (cos t, sin t) lies on the unit circle, given by the equation x² + y² = 1, so too do the points of the form (cosh t, sinh t) all lie on the unit hyperbola, given by the equation x² − y² = 1. See Figure 4.29, below.

We now introduce some additional notation: for any function f, we will write Even(f) = f₀ and Odd(f) = f₁. Thus, we can say that f is even if and only if Even(f) = f, etc. The introduction of new notation may not seem to add much to our discussion; however, as we have seen throughout this book, the mere act of naming something often makes it possible to investigate its properties. We can ask, now: What sort of mathematical "things" are Even and Odd?

It's important to realize that neither Even nor Odd is a function—that is, neither one establishes a correspondence between input numbers and output numbers, or (more formally) neither one is a set of ordered pairs of real numbers. Instead, Even and Odd are examples of what are called operators: each one takes as input a function, and returns as its output another function. Operators act on functions in a way that is roughly analogous to how functions act on numbers; indeed, from a formal perspective, both Even and Odd are members of the set Func(Func(ℝ)). Lest this seem like an excessively abstract perspective, we observe that there are other, more well-known operators:
Figure 4.29 (a) The graph of f(x) = cosh x. (b) The graph of f(x) = sinh x. (c) The parametrized path given by x = cosh t, y = sinh t is the right half of the unit hyperbola x² − y² = 1.
Example (Differentiation). As all students of Calculus know, if f ∈ Func(ℝ) is a differentiable function, then we can construct its derivative, f′ ∈ Func(ℝ). Thus "differentiation" is another example of an operator. We introduce here the notation D(f) = f′. Then D is an operator on the set of differentiable functions30.

Example (Degree-raising). Let f ∈ Func(ℝ) be any function. We define a new function, X(f), by X(f)(x) = x·f(x). Then X is a degree-raising operator. For example, if f(x) = 3x² + 2x − 5, then X(f)(x) = 3x³ + 2x² − 5x. (Note, however, that X(f) is defined even if f is not a polynomial.)

Example (Identity and Zero Operators). Let f be any function; then we define I(f) = f and 0(f) = 0̂. The operator I is called the identity operator; it plays a role analogous to the identity function id ∈ Func(ℝ). The operator 0 is called the zero operator; it plays a role analogous to the zero constant function.

Recall from Chapter 2 that there are three different operations defined on Func(ℝ): addition of functions, multiplication of functions, and composition of functions. Likewise, we may consider three different operations on operators:

Definition. Let A and B be any two operators on Func(ℝ). Then:

(a) A + B is the operator defined by (A + B)(f) = A(f) + B(f).
(b) A · B is the operator defined by (A · B)(f) = A(f) · B(f).
(c) A ∘ B is the operator defined by (A ∘ B)(f) = A(B(f)).
It turns out that multiplication of operators is not particularly interesting, whereas composition has some very important properties; for this reason, whenever two operators are written next to each other without a symbol, it is understood that they are combined by composition. Thus, unless otherwise specified, AB denotes A ∘ B. With these conventions in hand, we may explore some of the relationships among the operators we have identified so far:

Theorem (Operator Compositions).

(a) 0 is an additive identity for operators.
(b) I is an identity operator relative to composition.
(c) Even + Odd = I.
(d) Odd ∘ Even = Even ∘ Odd = 0.
(e) D ∘ Even = Odd ∘ D
(f) D ∘ Odd = Even ∘ D
(g) X ∘ Even = Odd ∘ X
(h) X ∘ Odd = Even ∘ X
Proof. Exercises 67–70. Parts (e)–(h) of the Operator Compositions theorem show that the set of operators, regarded as a ring with respect to addition and composition, is not commutative31, and indeed this is the case. For example, we have the following relationship between the operators D and X: Proposition. DX − XD = I. Proof. For any function f , the chain rule for derivatives says that ( x ⋅ f ( x ))' = f ( x ) + xf ′ ( x ). Expressed in the language of operators, this reads D ( X ( f )) = f + X ( D ( f )) or equivalently ( DX − XD )( f ) = f , from which the claim follows. In the study of noncommutative rings, the notation [ A, B ] denotes the combination AB − BA, and is called the commutator of two elements A and B. Thus, a ring is commutative if and only if [ A, B ] = 0 for all A, B. The result just proved shows that [ D, X ] = I . Many of the other results in the Operator Compositions theorem can be expressed using commutator notation; see Exercises 71–72.
Exercises 58. Let f : → be an arbitrary function. Prove that for all x ∈, f ( x ) = f0 ( x ) + f1 ( x ). 59. Let f : → be an arbitrary function. Prove that f0 is an even function and f1 is an odd function. 60. Prove that f ( x ) = f0 ( x ) for all x ∈ if and only if f is even and f ( x ) = f1 ( x ) for all x ∈ if and only if f is odd.
219
Geometry, Graphs and Symmetry 219 61. Show that a sum of two even (respectively, odd) functions is also even (respectively, odd). 62. Generalize Exercise 61 to the case of multiplication. That is, what can you say about the product of even and odd functions? ˆ i.e. f is the constant 63. Prove that if f is both even and odd, then f = 0, function f ( x ) = 0. 64. Prove that if f is a polynomial function, then f0 consists precisely of the even- degree terms of f , and f1 consists of its odd-degree terms. 65. Prove that the derivative of cosh x is sinh x, and the derivative of sinh x is cosh x. 66. Prove that for all x ∈, cosh 2 x − sinh 2 x = 1. 67. Prove that 0 is an additive identity for operators, and I is an identity relative to composition. 68. Prove that Even + Odd = I and Odd Even = Even Odd = 0. 69. Prove that D Even = Odd D and D Odd = Even D. (Hint: use the fact that the derivative of an odd function is even, and vice versa.) 70. Prove that X Even = Odd X and X Odd = Even X . 71. Prove that [ D, Even ] = D ( Even − Odd ) and [ D, Odd ] = D (Odd − Even ). 72. Prove that [ X, Even ] = X ( Even − Odd ) and [ X, Odd ] = X (Odd − Even ).
4.10 Recommended Reading This chapter was devoted to an exploration of the complicated relationships between abstract mathematical concepts and the geometric representations of those concepts—that is, between ideas and pictures. In order to realize the solution set for an equation in two variables as a diagram, one needs to first assign coordinates to the plane; different choices of coordinates lead to different kinds of visual representations. Needless to say, this distinction, between a concrete diagram and the abstract ideas the diagram is intended to represent, is a subtle one for students, and much of the research in how students think and learn about secondary Geometry has focused on the challenges students encounter in this connection. Your Recommended Reading for this chapter is: Chazan, D. (1993). High school geometry students’ justification for their views of empirical evidence and mathematical proof. Educational Studies in Mathematics, 24(4), 359–387. Schoenfeld, A. (1988). When good teaching leads to bad results: The disasters of “well-taught” mathematics courses. Educational Psychologist, 23(2), 145–166. Both of the above papers deal (albeit in very different ways) with the relationship between a specific diagram and the abstract geometric relationships the diagram represents. The Chazan article explores students’ perspectives on the relationship between empirical evidence (i.e., measurement) and deductive reasoning. Chazan finds two persistent, widespread beliefs among the students in his study, which he summarizes as “Evidence is Proof ” and “Deductive Proof is Just Evidence”: Some students contend that measuring, like writing a deductive proof, can allow one to reach conclusions that are certain and that are applicable to sets that have an infinite
220
220 Geometry, Graphs and Symmetry number of members… [while some] students view deductive proofs in geometry as proofs for a single case, the case that is pictured in the associated diagram… Students who hold this belief do not understand the generalization principle for deductive proofs; they do not understand that the validity of the conclusion is meant to be generalizable to all figures which satisfy the givens. (pp. 360–362) The issue at play here—the relationship between a single diagram, on the one hand, and the collection of “all figures which satisfy the givens”, on the other—is not exactly the same as the relationship between an algebraic object (e.g., a set of ordered pairs of real numbers) and a geometric one (e.g., a line in the Euclidean plane); however, the two issues share, at their cores, some common conceptual DNA. In both cases, a single diagram plays multiple simultaneous roles, both as a thing in its own right and as a symbol for something else. In order to successfully understand the material being taught, the student needs to smoothly move back and forth between these two roles. Failure to navigate the space between these two roles leads to a breakdown in students’ understanding of what it means to prove something. The Schoenfeld article also deals with a breakdown in students’ ability to move between two different conceptual spheres and, like the Chazan study, focuses on the distinction between deduction and empiricism. In this case, the two spheres are geometric proof and geometric construction; as Schoenfeld writes, High school and college students who had taken a full year of high school geometry, which focuses on proving theorems about geometric objects, uniformly approached geometric construction problems as empiricists. They engaged in empirical guess-and-test loops, completely ignoring their proof-related knowledge… Such behavior indicated that these students saw little or no connection between their “proof knowledge,” abstract mathematical knowledge about geometric figures obtained by formal deductive means, and their “construction knowledge,” procedures and information they had mastered in the very same class for working straightedge and compass construction problems. (pp. 150–151) For the students in Schoenfeld’s study, a “construction problem” entailed the physical production of line segments and circles that satisfied certain constraints, and whose accuracy is determined by empirical measurement, not by logical argument. Contrast this with the approach to construction used earlier in this chapter, for example in Figs. 10, 11 and 13, which shows how compass-and-straightedge constructions could be used to create segments of particular lengths. Or contrast the approach described in Schoenfeld’s study with the way Euclid used constructions in the Elements: there, constructions are theorems that establish the existence of certain objects, and are interwoven with theorems that assert properties of objects. The difference, again, rests at least in part on a perception of each geometric diagram as a single, unique object to be acted on materially, rather than as a representative of a category of abstract concepts sharing certain properties.
Projects A. Prepare a concept map showing all 48 of the theorems of Book I of Euclid’s Elements, distinguishing between construction problems and other theorems, and indicating the logical dependence among them. In other words, your map should make it clear (for example) which constructions make use of which theorems, and which theorems require the use of which constructions.
221
Geometry, Graphs and Symmetry 221 B. Observe a secondary Geometry classroom in which one or more compass-and- straightedge constructions is taught. Does the instruction include a proof that the constructions are correct? C. Some of the students in the Chazan paper expressed the belief that even after a theorem has been proven, there remains the possibility that a counterexample could be found because the proof may have relied unwittingly on specific features of a diagram. Find (or create) an example of a proof for which this possibility actually occurs. D. Replicate the methods of the Chazan study with an appropriate population (e.g., a high school geometry class, one or more preservice or inservice secondary mathematics teachers, etc.). How do your findings compare with Chazan’s? E. A great deal has changed since the Schoenfeld paper was written in 1988. In particular, state and national standards documents have been issued, reissued and revised multiple times in the ensuing decades. Survey a variety of current standards documents to see if (and how) they address the question of what students should learn about geometric constructions, and in particular whether they expect students to be able to prove the validity of specific constructions. F. One way to justify the need for constructions to be validated by a deductive proof (rather than by empirical measurement) is to call attention to the three classical “impossible construction” problems: trisecting an arbitrary angle, doubling a cube, and squaring a circle. (Refer to §4.2.) Each of these problems has a solution that can be approximated to arbitrarily high precision, but cannot be solved exactly using only a compass and straightedge. What, if anything, do secondary Geometry textbooks have to say about these impossible constructions? Survey a variety of current and archival textbooks and report on your findings. G. Choose two references from one of the Recommended Readings and prepare an analytical summary of each of them. Your summary should include (at a minimum) synopses of (a) the research question, (b) the theoretical framework, (c) the research methods, (d) its findings and conclusions.
Notes 1 Dodgson, under the pseudonym “Lewis Carroll”, is best known as the author of Alice’s Adventures in Wonderland and Through the Looking-Glass. 2 Grover, B. & Connor, J. (2000). Characteristics of the college geometry course for preservice secondary teachers. Journal of Mathematics Teacher Education 3, 47–67. 3 Euclid distinguished between “axioms”, by which he meant general principles of mathematical reasoning (e.g., “Equals added to equals produce equals”), and “postulates”, by which he meant geometric properties that were stipulated to be true without requiring proof. Modern mathematics treats these two words essentially as synonyms, and they are used essentially interchangeably throughout this chapter. 4 Euclid uses the phrase “straight line” for what we would call a “line segment”. The notion of an infinite line does not really exist for Euclid; he focuses on diagrams, and diagrams are (necessarily) finite. However, Euclid understands that any straight line can be extended as far as we like, without bound or limit, and in that sense is “potentially infinite”. 5 The impossibility of trisecting an angle can be proven using the theory of field extensions—the same theory that shows that fifth-degree polynomial equations are not solvable by radicals (see Chapter 3, footnote 12).
222
222 Geometry, Graphs and Symmetry 6 To “square a circle” means to construct a square with an area equal to that of a given circle. It’s possible to square any polygon, and Euclid eventually demonstrates how to do it, so it is only natural to want to square a circle, too. This, however, is also impossible, as can be shown by (once again) the theory of field extensions. 7 To “double a cube” means to construct a segment whose length, if used as the edge of a cube, would produce a cube with double the volume of a given cube. As a practical matter, this amounts to the same thing as producing a new segment whose length is 3 2 times that of a given segment. This turns out to be also impossible, which can be proven using—you guessed it!—the theory of field extensions. 8 Refer back to Chapter 1 for background information on the SMSG’s reconstruction of the school curriculum. 9 In addition to the 1960 Geometry, the SMSG also issued a two-volume Geometry with Coordinates in 1965. 10 Much the same observation applies to Hilbert’s final postulate, the “Completeness Axiom”, which—while not explicitly invoking the existence of the real numbers—has the exact same effect. 11 This means that every point on a Euclidean plane is essentially the same as any other point; there is no designated “center” or “origin” or any other special point in E2. 12 Not the Euclidean plane, mind you; just a Euclidean plane. 13 Historians of Greek mathematics believe the contents of Euclid’s Book V rest on previous work by Eudoxus of Cnidus. 14 Just as historians believe that most of Book V is due to Eudoxus of Cnidus, Book X is believed to be mostly the work of Thaetetus of Athens. 15 If you don’t remember this argument, you might want to refer back to §1.11. 16 This phrase can be made precise provided we have good working definitions of what it means for two points to be on the “same side” of a line; see Exercise 10. 17 If this formula reminds you of the “Change of Base” formula for logarithms, congratulations— you have just noticed an isomorphism! We will discuss this more in Chapter 5. 18 The Greek letter chi makes the initial sound of the word “coordinate”. 19 Notation and terminology here is very inconsistent, and caution is recommended when reading or referring to these fields. Some textbooks use the symbol A to refer to the set of algebraic complex ˉ mean exactly the same thing; with that convention, the set of algebraic numbers, so that A and Q real numbers would be denoted ∩ or ∩ . 20 Here, too, naming conventions are not at all standard. What we have called “expressible algebraic real numbers” are sometimes called “explicit algebraic real numbers”, or simply “real numbers that are expressible by radicals”. 21 More colloquially, “That’s all there is, there ain’t no more.” 22 We use “Sigma” because it makes the initial sound in the word “System”. In this context, it has nothing to do with summations; it also has nothing to do with the “string interpretation map” introduced in Ch. 3, which was denoted with the same symbol. 23 For example, on Texas Instruments’ TI-83/84 series graphing calculators, the command ZSquare will set the vertical and horizontal scales equal, whereas the command ZFit will adjust the vertical scale to display the largest and smallest y-value corresponding to the current horizontal scale. 24 Up to this point, we have avoided making a commitment as to whether our coordinate field contains all of R or only a subset of it. 
Beginning with this section, we assume for simplicity of exposition that the coordinate field F is the entirety of the real numbers; however, almost all of the results generalize to an arbitrary coordinate field. 25 Physicists and Engineers often use the phrases “passive coordinate transformation” and “active coordinate transformation” to distinguish between these two senses; a passive transformation refers to the description of the same geometric object in different coordinates, while an active transformation refers to an actual change from one geometric object to another. 26 Technically, we should distinguish between the function and its graph, which are two different things; probably we should say “the graph of the function f is invariant” and “T is a symmetry of the graph of f”, and we may do so at times, but in most cases it does no harm to elide the distinction, and makes our already-technical language somewhat less clumsy.
223
Geometry, Graphs and Symmetry 223 27 Figure 4.24 shows the familiar shape of a trigonometric function, but to be clear, these are not the only periodic functions; they are, however, the first (and perhaps only?) ones students are likely to encounter in the high school setting. 28 To really explain what this means we would need a formal definition of “vector” as an equivalence class of directed line segments, where two directed line segments are equivalent to one another if joining the initial and terminal points of one segment to the initial and terminal points of the second segment forms a parallelogram. However, this footnote is too small to contain such a definition. 29 Although we will in the next chapter! 30 Since not all functions are differentiable, the domain of D is considerably less than the entirety of Func ( ). We could try to extend D to all of Func ( ), but the cost of this would be that the output D ( f ) might be only a partial function. Throughout the rest of this chapter, we will assume that in any discussion involving D we are restricting our attention to the set of differentiable functions. 31 Recall from Chapter 2 that the addition operation in a ring is always commutative; when we refer to a ring as “commutative” or “non-commutative” we are describing its multiplicative structure (which, in this case, really means its structure under the composition of operators).
224
5 Exponential and Logarithmic Functions
Anyone who believes exponential growth can go on forever in a finite world is either a madman or an economist.
—Kenneth Boulding Napier’s Logarithms, by shortening the labours [of calculation], doubled the life of the astronomer.
— Pierre-Simon Laplace Imagine a person with a gift of ridicule [He might say] First that a negative quantity has no logarithm; secondly that a negative quantity has no square root; thirdly that the first non-existent is to the second as the circumference of a circle is to the diameter.
—Augustus de Morgan
5.1 What We Talk About when We Talk About Logs In this chapter we study the properties of exponential and logarithmic functions. While in the United States students often meet exponential functions in an Algebra 1 course, their inverses—the logarithmic functions—are usually encountered for the first time in Algebra 2. Exponential and logarithmic functions are many students’ first encounter with mathematics that is genuinely abstract, rather than algorithmic in nature. That is to say, while a polynomial like 3x 2 + 20 x − 9 can be evaluated (by hand, if needed) for essentially any value of x, an expression like 2 x or log3 x cannot be, except for a few carefully chosen values of x. (Quick: how could you compute the value of 21.57 or log315 without using a calculator?) Unlike most of the functions encountered earlier, logarithmic and exponential functions behave like a black box: a number goes in, and another number (generally irrational) comes out. What happens inside the box—the process by which input is transformed into output— remains essentially invisible. Commenting on the placement of these functions in Algebra 2, the mathematician and educator Paul Lockhart has written1: Exponential and logarithmic functions are also introduced in Algebra II, despite not being algebraic objects, simply because they have to be stuck in somewhere, apparently. Lockhart’s observation that these functions are not “algebraic objects” means something specific. Lockhart is noting (correctly, if concisely) that these functions do not belong to
225
Exponential and Logarithmic Functions 225 [ x ], the ring of polynomial functions2, nor to the field of rational functions ( x ), nor even to its algebraic closure, the field of “algebraic functions”3, ( x ). When we choose to work with exponential and logarithmic functions, we leave the world of algebraic functions behind and enter the realm of the transcendental functions. And yet, seen from another perspective, exponential and logarithmic functions are quintessentially algebraic, in the sense that they are useful precisely for their algebraic properties. All exponential functions of the form f ( x ) = a x satisfy the property f ( x + y ) = f ( x ) f ( y ), while all logarithmic functions of the form g ( x ) = log b ( x ) obey the complementary property g ( xy ) = g ( x ) + g ( y ). What are these properties—what type of “thing” are they? From an algebraic perspective, they express precisely the fact that the exponential and logarithmic functions are group homomorphisms. In more detail: Let us use ( ,+ ) to denote the set of real numbers, regarded as a group with respect to the operation of addition4. Let us also write ( + , × ) to denote the set of positive real numbers, regarded as a group with respect to the operation of multiplication. (You should confirm that these are both groups; see Exercise 1.) Although both are sets of real numbers, they are two different groups. For example, the identity element in the first group is 0, while the identity in the second group is 1; similarly, the inverse of 10 in the first 1 group is −10, while its inverse in the second group is . Now, a group homomorphism from 10 one group to the other is a mapping that translates the group operations and elements in the domain to the operations and elements in the range. That is, a function f : ( , + ) → ( + , × ) is a homomorphism if and only if it obeys all of the following: f (0) = 1 f (x + y) = f (x) f ( y) f ( −x) = f (x)
−1
while a function g : ( + , × ) → ( , + ) is a homomorphism if and only if it obeys g (1) = 0 g ( xy ) = g ( x ) + g ( y ) g ( x −1 ) = − g ( x ) The claim made above, that exponential and logarithmic functions are both group homomorphisms, is now (hopefully) clear: exponential functions satisfy the first set of properties, while logarithmic functions satisfy the second set. In fact, since exponential and logarithmic functions are bijections between the two sets and +, they are actually group isomorphisms. What could be a more quintessentially algebraic object than that? To be sure, this algebraic perspective is not, historically, the reason why exponential and logarithmic functions have been historically useful. Indeed, the relationship between exponential and logarithmic functions was not even well understood at first! The first comprehensive theory of logarithms was published by the English mathematician John Napier in his 1614 volume Mirifici Logarithmorum Canonis Descriptio (“Description of the Wonderful
226
226 Exponential and Logarithmic Functions Rule of Logarithms”). Surprising as it may seem, Napier’s book says essentially nothing at all about exponential functions. For Napier, and for most users of logarithms in the next 350 years, logarithms were, first and foremost, a means of efficient numerical computation— a tool valued for utilitarian, rather than abstract, reasons. Logarithms were valuable because they could be used to multiply and divide multidigit numbers more efficiently that standard pencil-and-paper algorithms; they could also be used to raise numbers to powers and to take roots. For example, the 1911 Algebra 2 textbook A Second Course in Algebra by Hawkes, Luby & Touton5 demonstrates (on p. 196) the following procedure for using logarithms to multiply two three-digit numbers, using the example of 432 ⋅ 0.574: 1. 2.
First, find the (common, or base 10) logarithm of 423 as follows: a. In a table of logarithms, look up the logarithm of 4.23, finding log 4.23 = 0.6355. b. Because 423 is two orders of magnitude larger than 4.23, its logarithm is 2.6355. Next, find the (common) logarithm of 0.574 as follows: a. In a table, look up the logarithm of 5.74, finding log 5.74 = 0.7589. b. Because 0.574 is one order of magnitude smaller than 5.74, its logarithm is −1 + 0.574 , which is written as 1.574 . For later ease of computation, this is also expressed as 9.574 − 10 . 3. Add the two logarithms: log 423 + log 0.574 = 2.6355 + 9.574 − 10 = 12.3944 − 10 = 2.3944. 4. Finally, find the antilogarithm of 2.3944 as follows: a. In a table, find two consecutive values whose logarithms are on either side of 0.3944. We find log 2.47 = 0.3927 and log 2.48 = 0.3945. 17 b. The value we seek the antilogarithm of, 0.3944, is of the way from the smaller 18 17 value to the larger, so we take for the antilogarithm 2.47 + ( 0.01) ≈ 2.479. 18 c. Finally, the antilogarithm of 2.3944 should be two orders of magnitude larger than the antilogarithm of 0.3944, so the answer is (to four significant digits) 247.9. The text’s method is shown (in somewhat more concise form than the verbal description above) in Figure 5.1, below. Along with mastering this procedure, students were also expected to learn a technical language for describing it. For example, earlier in the text (p. 189–191), the authors had explained that in an expression like log432 = 2.6355 the logarithm’s integer part (i.e. the whole number 2 preceding the decimal point in 2.6355) is called the characteristic of 432, while the digits that follow the decimal point, i.e. the number 6355, is called its mantissa. The characteristic corresponding to a given number N determines the scale or magnitude of a number, while the mantissa determines its actual digits: two numbers that share a mantissa will have the same decimal digits, differing only in the placement of the decimal point. In the example of log 0.574 = 1.7589 , the mantissa is 7589, and the bar over the 1 is used to indicate that the characteristic is −1, corresponding to the fact that 0.574 = 5.74 × 10 −1. Notice also that in the last step of the solution method (i.e., finding the antilogarithm of 2.3944) the text, as shown in Figure 5.1, interpolates between two values on the table of logarithms, without showing the work that goes into this process. Interpolation had been previously modeled by the text with the following example:
227
Exponential and Logarithmic Functions 227
Figure 5.1 The calculation of 432 ⋅ 0.574, as shown in A Second Course in Algebra (Hawkes, Luby & Touton, 1911), p. 196.
If the mantissa of a given logarithm, as 2.5271 [sic6], is not in the table, the antilogarithm is obtained by interpolation as follows: The mantissa 5271 lies just between .5263, the mantissa of 336, and .5276, the mantissa of 337. Therefore the antilogarithm of 1.5271 lies between 33.6 and 33.7. Since the tabular difference [i.e., the difference between 5263 and 5276] is 13, and the difference 8 between .5263 and .5271 is 8, the mantissa .5271 lies of the way from .5263 to .5276. 13 8 Therefore the required antilogarithm lies of the way from 33.6 to 33.7. 13 8 Then antilog 1.5271 = 33.6 + × .1 = 33.6 + .061 = 33.66. 13 (Hawkes, Luby & Touton, pp. 194–5). The modern reader is likely to be quite dizzy at this point. Not only is all of the jargon (mantissa, characteristic) unfamiliar, even to those very familiar with the contemporary secondary curriculum, as is much of the technique; not only do decimal points appear and reappear with seemingly reckless abandon—but this is supposed to be efficient? But indeed, in a pre-calculator era, calculating with logarithms was in fact a huge time- saver. Consider what would be involved in multiplying 423 ⋅ 0.574 by hand: one must perform nine separate single-digit multiplication operations and five single-digit “carry” additions to arrive at the three partial products; then the three partial products (each 4 to 6 digits long) must be added, requiring three more carries. In contrast, the method of logarithms requires only that two numbers be looked up in a table and added, with the result then looked up in a table again. (If one is content with only three digits’ worth of precision, the interpolation step can be skipped altogether.) Likewise, logarithms can be used to compute roots (called “involution” in the 1911 text) and powers (“evolution”), which are extremely difficult (if not impossible) to do by hand. Hawks et al. explain how to compute 3 .000639 as follows (p. 199): 1. First, use a table to find the mantissa of 6.39, which is 8055. Since 0.000639 = 6.39 × 10 −4, the characteristic is −4, so log 000639 = 4.8055. 2. We need to divide this by 3. The easiest way to do this is to first write 4.8055 = 2.8055 − 6. Then, on dividing by 3, we have 0.9351 − 2 . 3. Finally, we find antilog 2.9351 = 0.08612 , which is the solution to the problem. a. In order to perform this last step (finding the antilog of 2.9351) we have to do another interpolation: 9351 is 1 of the way from 9350, the mantissa of 8.61, and 5 1 9355, the mantissa of 8.62. Therefore 9351 is the mantissa of 8.61 + (.01) = 8.612. 5 Finally, we move the decimal point two spaces to the left because the characteristic is −2.
228
228 Exponential and Logarithmic Functions
b. If we only wanted our answer to be accurate to three significant figures, we would skip the interpolation step (at the cost of a less accurate result) and simply choose from between 8.61 and 8.62 whichever number has a mantissa closer to 9351. Since the mantissa of 8.61 is 9350, we could approximate the answer as 0.0861; indeed we 3 can verify that ( 0.0861) ≈ 0.000638.
Needless to say, all of this mathematics—the jargon, the tables, the archaic notation, the techniques—is today completely obsolete. The numerical methods that were so indispensable to generations of mathematicians, scientists and engineers, the methods that were used to calculate the mass of the electron, design the first controlled nuclear chain reaction, and bring humanity safely to the moon and back, all have been entirely moot since roughly the mid-1970s, when inexpensive pocket-sized electronic calculators came on the marketplace. We no longer need logarithms to multiply or divide numbers, or to find roots, or raise difficult numbers to powers. And yet, we still teach them. Why? What are they good for? What do we talk about today, when we talk about logs? In contemporary curricula, logarithms are used to solve exponential equations, such as arise in the study of quantities that grow or decay exponentially. We use logarithms to find the half-life of a radioactive isotope or the doubling time of an investment. Students are taught to write continuously growing or decaying quantities in the form y = Ae kt , where e is an irrational number approximately equal to 2.718, and k is a (positive or negative) constant called the “instantaneous growth (or decay) rate”. They learn to use the “natural logarithm” (i.e. the logarithm base e) to work with such quantities, and to convert back and forth between expressions in the form y = Ae kt and y = AB t . Typical exercises will ask students to condense an expression with multiple logarithms into a single logarithm, or to expand a single complicated logarithm as a combination of simpler ones. Virtually none7 of this material is found in Hawkes et al.’s 1911 textbook. In fact, the curriculum’s understanding of what logarithms are “for”—why we learn them and what we do with them—has changed almost completely in the past 100 years. The set of applications of logarithms in contemporary textbooks is almost completely disjoint from the corresponding set in textbooks from before the Second World War. Perhaps nothing illustrates this change more profoundly than the now-archaic word “antilogarithm”. The fact that older textbooks write (for example) antilog1.5271 rather than simply write 101.5271 hints at the cultural gulf that separates past from present. In the words of the British author L. P. Hartley, “The past is a foreign country; they do things differently there.” In this chapter, we will investigate the question: What are logarithms (and their counterparts, exponential functions) really all about? What, if anything, is the connective tissue that links the computational techniques of the past with the applications of the present? The logarithm was computationally useful precisely for 350 years because it converts multiplication problems to addition problems and exponentiation to multiplication; that is, because it is a group isomorphism from ( + , × ) → ( , + ). It is emphasized today for essentially the same reason! Consider the ways in which contemporary students are taught to regard exponential functions as “just like” linear functions, as shown in the Table 5.1 below. We might describe these similarities between linear and exponential functions by saying that there is an “analogy” between them—a sense in which, though different, they are structurally “the same”. 
The precise mathematical name for this “structural sameness” is isomorphism: If you take the logarithm of AB x, the exponent becomes a product and the multiplication becomes addition, and the whole thing becomes a linear function. Conversely,
229
Exponential and Logarithmic Functions 229 Table 5.1 A dictionary relating linear functions to exponential functions Linear Functions
Exponential Functions
General form: y = b + mx
General form: y = AB x
“Starting value” ( y-intercept): b Growth rate (slope): m To find m using a table of values, subtract y values corresponding to values of x that differ by 1
“Starting value”: ( y-intercept): A Growth factor (base): B To find B using a table of values, divide y values corresponding to values of x that differ by 1
Table 5.2 A dictionary showing the correspondence between the additive structure of and the multiplicative structure of +. In :
In +
Addition: a + b Additive identity: 0 Additive inverse: −a Subtraction: a − b Repeated addition: n ⋅ a (for n ∈ + )
Multiplication: ab Multiplicative identity: 1 Multiplicative inverse: a − Division: a / b Repeated multiplication: a n (for n ∈ + )
if we exponentiate b + mx everything transforms in reverse, and the linear function becomes an exponential one. This isomorphism, elaborated on in Table 5.2, lies at the heart of everything students do with exponential and logarithmic functions. In the rest of this chapter, we investigate exponential and logarithmic functions from both an algebraic and an analytic viewpoint. Our main questions will be the following: 1. Are logarithmic and exponential functions completely characterized by the fact that they are group isomorphisms between ( ,+ ) and ( + , × )? Put another way, are there other group isomorphisms between these two groups? If so, what are they, and how are they like or unlike the logarithmic and exponential functions? 2. What happens if we change from to another field, or even a ring? How much of the theory of logarithms and exponential functions generalizes to these other cases? As we will see in subsequent sections, the answer to the first question (“Are logarithmic and exponential functions completely characterized by the fact that they are group isomorphisms between ( ,+ ) and ( + , × )?”) is: almost, but not quite. There are other group isomorphisms between these two groups, isomorphisms that behave like—but are not actually!—exponential or logarithmic functions. However, if we require one extra property8 of our isomorphisms, then these “imposter” functions disappear entirely, and we have a complete characterization of the two types of functions. The next several sections are devoted to making this claim precise. To make the discussion simpler, in what follows we will make use of the following conventions: we will abbreviate the additive group ( ,+ ) with the single symbol , and will abbreviate the multiplicative group ( + , × ) with the symbol +. Notice that the superscript + sign on + does not denote the group operation, which is not addition, but rather refers to the fact that it consists only of positive real numbers.
230
230 Exponential and Logarithmic Functions
Exercises 1. Verify (or perhaps re-verify) that ( ,+ ) and ( + , × ) are both groups. 2. Download A Second Course in Algebra by Hawkes, Luby & Touton (1911) (see footnote 1) and solve the following exercises from its pages, using the Table of Logarithms found on pp. 200–201 of that text. (a) (b) (c) (d) (e)
p. 193, #1–15 p. 195, #1–12 p. 197, #14–22 p. 197, #32–42 p. 198, #1–12
3. Locate Algebra 2 textbooks from the 1970s, 1980s, and/or 1990s and create an inventory of the type of exercises students are asked to work when studying logarithms. Can you observe a shift in what the curriculum held students accountable for learning?
5.2 Exponential Functions, Roots, and the AM–GM Inequality The goal of this section is to prove that for any positive real number b, the function f : → + given by f ( x ) = b x is a group homomorphism; in the next section we show that it is invertible. We begin by formally defining the meaning of the notation b n in the case where n is a non-negative integer: Definition. If n ≥ 0 is an integer, and b any fixed real number, then the expression b n denotes ⋅ b the product b b . More formally, we can define b n recursively as follows: n factors
(a) b0 = 1, and (b) b n +1 = b ⋅ b n . Note that, for the moment, there are no restrictions whatsoever on b; in particular, b need not be an integer, and we allow b to be positive, negative, or zero. (This will change, shortly.) The following elementary proposition is undoubtedly familiar: Proposition. (Properties of Non-Negative Integer Exponents) Let b, c ∈ be any two real numbers, and let m, n be two non-negative integers. Then: (a)
n b n c n = ( bc ) .
(b)
b n b m = b n + m.
(c)
(bn )m = bnm.
Proof. In what follows we assume n and m are both positive; the cases in which either n = 0 or m = 0 (or both) are left as an exercise for the reader (Exercise 4).
231
Exponential and Logarithmic Functions 231 (a) We use the fact that multiplication is both commutative and associative to write b n c n = ( b ⋅ b b ) ⋅ (c ⋅ c c ) = ( bc )( bc ) ( bc ) = ( bc ) n factors
n factors
n
n factors of bc
(b) By the associative property, n+ m b n b m = ( b ⋅ b b )⋅ ( b ⋅ b b ) = b ⋅ b b = b n + m factors n factors
m factors
(c) By the associative property again,
(b )
n m
n factors nm = ( b ⋅ b b ) ( b ⋅ b b ) ( b ⋅ b b ) = b ⋅ b b = b . nm factors m groups , each co ontaining n factors
These properties are the first three steps in our journey toward proving that f ( x ) = b x is a group homomorphism. However, before we can reach that goal, we first need to define what the notation b x even means when x is not a positive integer! We will proceed in stages: first, we define b x for the case when x is a negative integer; then we treat the case when x is a rational number; and finally we deal with the general case, in which x is a real number9. At each stage, we will need to stop and check if the three properties of the previous proposition still hold even in the more general setting. Let’s begin our consideration of the case in which the exponent is a negative integer. We remark that the notation b −1, although commonly used to denote the multiplicative inverse of the real number b, has (so far) been avoided in this book. In fact, back in Chapter 1 we introduced the notation b − to stand for the multiplicative inverse, and specifically said that we were going to avoid using the more common notation. At the time10, we said: The notation a − is meant to be suggestive of the more familiar a −1, and later on we will use the latter notation as well. For now, though, we want to postpone notation that suggests exponentiation. (The ideas of multiplicative inverse and repeated multiplication are, at least initially, unrelated concepts, and there is no reason to expect a single notational convention to handle both of them.) The time has finally arrived for us to unite these two apparently unrelated concepts. Our goal is to introduce the notation b −1 as an alternative notation for b − , the multiplicative inverse of b. But we want to ensure that this notational convention does not “ruin” the three properties proved in the previous proposition. In particular, we have to worry about a posn sible ambiguity in the notation b − n: does it mean (b − ) , i.e. the nth power of the multiplica− tive inverse of b—or does it mean (b n ) , i.e. the multiplicative inverse of the nth power of b? Fortunately, these two alternatives coincide: Proposition. For any real (nonzero) base b, with multiplicative inverse b − , and for any positive integer n,
( b − )n = ( b n ) − .
232
232 Exponential and Logarithmic Functions Proof. We need to show that the left-hand side of this equation, (b − ) , is the multiplicative inverse of b n. A straightforward calculation using the first of the three properties proved n n n above shows that (b − ) ⋅ b n = (b − b ) = 1n = 1; therefore (b − ) is the multiplicative inverse of b n, which completes the proof. n − With this result in hand, we can safely write b − n as a shorthand notation for (b − ) = (b n ) . We now can extend the earlier proposition to show that the same three properties hold for all integer exponents: n
Proposition. (Properties of Integer Exponents) Let b, c ∈ be any two nonzero real numbers, and let m, n be any two integers. Then: (a) b n c n = ( bc ) . n
(b) b n b m = b n + m. (c) (b n ) = b nm. m
Proof. We already know these properties hold if neither m nor n is negative, so we need only consider the case where at least one of them is negative. In what follows, we show the proof for the case where n < 0 and m ≥ 0; the case where both are negative is left for Exercise 5. Write n = − k where k is a positive integer; then:
(
)
− k
(a) b n c n = b − k c − k = (b − ) (c − ) = (b − c − ) = ( bc ) = ( bc ) = ( bc ) . n m −k m − k m (b) First we observe b b = b b = (b ) b . Now, if m > k then we can write b m = b k b m − k; then k
k
k
−k
n
( b − )k b m = ( b − )k b k b m − k = ( b − b ) k b m − k = b m − k = b m + n On the other hand, if m < k then we can write (b − ) = (b − ) k
k −m
(b − )m , and
( b − )k b m = ( b − )k − m ( b − ) m b m = ( b − )k − m ( b − b ) m = ( b − ) k − m = b m − k = b m + n . (c) Finally, by definition and the previous proposition we have
( b n )m = ( b − k )m = ( ( b k )− )
m
(
= (b k )
)
m −
= (b km ) = b − km = b nm −
Notice that while the original set of properties were true for only positive integer exponents, but for all bases, in our just-proved extended set of properties we require the base to be nonzero. This is actually fairly typical of what is coming up: if we want to extend the notation and rules for exponentiating to a larger set of exponents, we often have to simultaneously restrict the set of allowable bases. Our strategy for the rest of this section is as follows: (a) First, we extend our three properties to the case of rational exponents. (b) Finally, we extend them further to the case of any real exponents.
233
Exponential and Logarithmic Functions 233 In doing so we draw close to two important issues of the secondary curriculum, the first of which is normally dealt with somewhat explicitly, while the second is typically glossed over. The first issue has to do with what br means when r is a rational number. Students in high m school are explicitly taught that an expression of the form b m / n means either n b m or n b : that is, it is the nth root of b m, or the mth power of the nth root of b. However, there are still some important questions that need to be asked:
( )
( )
m
(1) How do we know that the expressions n b m and n b actually define real numbers? That is: Are we sure that positive real numbers always have nth roots? This is not a trivial question, as it isn’t true in most fields—it relies very specifically on the properties of . m (2) Even if we know that n b m and n b both define real numbers, are we sure that they define the same real number? Here again, the question is nontrivial, as can be seen by considering the case where b = −1, m = 2, n = 2 (see Exercise 6). (3) The last example suggests that the meaning of br where r ∈ may, in some cases, change depending on how we write r as a ratio of two integers. Suppose we choose two different m m' representations and for the same rational number. Are we sure that n b m = n' b m' ? n n' (see Exercise 7).
( )
If the answer to any of the above questions turns out to be “No”, then we would have a real problem on our hands—it would mean that the expression br is not really well defined when r is a rational number, and would call into question whether exponential and logarithmic functions even exist. Fortunately, these questions are actually not all that hard to resolve, and we address them below. A far more difficult issue (one that high school textbooks usually do not address directly) concerns what to do with expressions of the form br when r ∈ is irrational. To be specific: We may (eventually) become comfortable with the idea that 25 / 7 means 7 25 , and that this designates a unique real number (approximately equal to 1.64). But what possible interpretation can we give to an expression like 2 2 , or 2 π ? If the exponent cannot be expressed as a ratio, then we can’t interpret these as whole-number roots of whole-number powers (or vice versa) at all. What, if anything, do such expressions mean? We will return to this question shortly.
Exercises 4. Prove all three properties of exponents for the case where either n = 0 or m = 0 (or both). 5. Prove all three properties of exponents for the case where both n and m are negative. m 6. Consider the expressions n b m and n b for b = −1, m = 2, n = 2 , and explain why this example makes the expression b m / n problematic. 7. Repeat the previous exercise for b = −8, m = 1, n = 3 and b = −8, m = 2, n = 6.
( )
We continue our discussion of exponentiation by recalling some important findings from Chapter 1. Theorem. (Irrationality of 2). There is no rational number whose square equals 2.
234
234 Exponential and Logarithmic Functions Proof. See §1.10. This theorem tells us that, if we were to operate exclusively in the field , then 2 would have no square root (as indeed most numbers would not). However, notwithstanding this, we also proved the following: Theorem (Approximation Theorem for 2). Let s be a positive rational number with 2 s 2 < 2, and let t = , so that t 2 > 2. Then there exist two rational numbers s' and t' s whose squares are closer to 2: that is, the inequalities s < s', t' < t, and s 2 < ( s' ) < 2 < (t' ) < t 2 2
2
are all satisfied. Moreover it is possible to find values of s' and t' whose squares are arbitrarily close to 2.
Proof. See §1.10. The Approximation Theorem for 2 tells us that although 2 has no square root in , it has arbitrarily good approximate square roots. Furthermore, if we enlarge the field to one that has the completeness property, then there does exist a number whose square is exactly 2 (see §1.12, Exercises 55–56). We need to generalize this last result to the case of arbitrary roots of arbitrary numbers. We begin with a lemma: Lemma. (Monotonicity of x n ). For any fixed positive integer n, the function f ( x ) = x n is a strictly increasing function on +; that is, if 0 < x1 < x2 then f ( x1 ) < f ( x2 ).
Proof. First, we note that for any r > 1, we have r n > 1 (Exercise 8). Now, choose two positive n
n
x x x2 xn > 1. It follows that 2 > 1. But 2 = 2n , and if the latter x1 x1 x1 x1 n n expression is greater than 1, then x1 < x2 , which is what we needed to prove. The preceding lemma will be used frequently in what follows, usually tacitly. Our main goal is to generalize the Approximation Theorem for 2, and the existence of 2 in , to the case of arbitrary roots of arbitrary numbers. We begin by defining what it means to be an “approximate nth root”. Let r be any positive real number, and choose a positive integer n. Choose a positive number b such that b n > r ; then b is an upper bound for the nth root of r . r If we define a = n −1 , then a n < r , so a is a lower bound for the nth root of r, and ab n−1 = r. b (We might think of a as being too small by exactly the right amount to compensate for the too-largeness of b.) For example, suppose we want to approximate the cube root (n = 3) of 5. We begin with an upper bound, say b = 2. We know this is an upper bound because 23 is larger than 5. It’s real numbers x1 < x2. Then
235
Exponential and Logarithmic Functions 235 5 not a particularly good upper bound, mind you. Corresponding to this, we construct a = ; 4 3 125 5 this is a lower bound, as can be observed by computing = , which is easily seen to 4 64 be less than 5. The following theorem tells us that this pair of bounds can be improved:
Theorem (Approximation of nth roots). Let b be an upper bound for an nth root of r, r and let a = n −1 be the corresponding lower bound, as described above. Then there b exists another lower bound b' that better approximates an nth root of r, in the sense r n n that r < ( b' ) < b n; moreover if we define a' = then we have a n < ( a' ) < r as well. n −1 ( b' )
Proof. The basic idea is that, by hypothesis, the nth root of r (if it exists, which we have yet a, b, b,…, b . to prove!) would be the geometric mean of the n numbers n −1 terms We will choose b' to be the arithmetic mean of this same set of numbers, and set n −1 a' = r / ( b' ) . Then our conclusion will follow from the fact that the arithmetic mean of any set of positive numbers is always greater than or equal to its geometric mean. We pause the proof of the Approximation Theorem to prove this useful fact as a separate lemma: Lemma. (AM–GM Inequality). Given any set of (not necessarily distinct) positive numbers a1 , a2 ,… an, we have a + a + an a1a2 an ≤ 1 2 n
n
Before proceeding with the proof of the Lemma, we pause to note that the AM–GM inequality is normally written in the more familiar form n
a1a2 an ≤
a1 + + an n
in which the right-hand side is (of course) the arithmetic mean, and the left-hand side is the geometric mean. However, at this point in our discussion, we don’t even know that that the left-hand side is even defined—remember, we are in the middle of proving that nth roots exist! Until we have finished that proof, we need to avoid assuming that an expression like n a a a corresponds to a unique real number. Fortunately, we do not need to take nth 1 2 n roots (or even write expressions involving them) in order to prove the lemma.
236
236 Exponential and Logarithmic Functions Proof of Lemma. First, we show that the inequality holds in the special case when n is a power of 2. We use induction: 2
a +a 1. The base case is n = 2. We need to prove that a1a2 ≤ 1 2 . But this is equivalent 2 to (a1 − a2 )2 ≥ 0 , which is true for any a1 , a2. (See Exercise 9.) 2. For the induction step, we assume the result is true for n = 2 k for some k, and prove that it is true for 2 n = 2 k +1. Choose any set of 2n positive numbers a1 , a2 , an , an +1 , … a2 n . Let M1 stand for the product of the first n numbers, and M 2 the product of the second set of n numbers; similarly let A1 and A2 stand for the arithmetic means of the same sets of numbers. Then n A + A2 a1a2 an an +1 a2 n = M1M 2 ≤ A1n A2n = ( A1A2 ) ≤ 1 2
2n
2n
a + a + a2 n = 1 2 . 2n
(see Exercise 10). Now, we continue to prove the lemma for the general case, in which n is not necessarily a power of 2. In this case, we can always find a larger value N > n that is a power of 2. Let A stand for the arithmetic mean of a1 , a2 , an . The idea is that we augment the original set of n numbers by adding N − n new terms, each equal to A; this does not change the arithmetic mean of the set (Exercise 12). Then we compute the product a + an + A + A a1a2 an A A ≤ 1 N N − n factors
N
= AN
n factors
If we cancel out N − n factors of A from both sides of this inequality, we conclude a1a2 an ≤ An, which concludes the proof of the lemma. Before we return to the proof of the Approximation Theorem, we make a few observations about the relationship of arithmetic and geometric means to the topic of this chapter, and to the role of geometric means in the secondary curriculum. A geometric mean is “just like” an arithmetic mean, in exactly the same way that an exponential function is “just like” a linear one: instead of adding we multiply, and instead of multiplying we exponentiate (refer back to Table 5.2 in §5.1). More formally, we can express this analogy by saying that taking the logarithm of a geometric mean turns it into an arithmetic mean, and exponentiating an arithmetic mean turns it into a geometric mean (see Exercises 12 and 13). Geometric means appear in contemporary secondary curricula in only a few limited contexts: 1. Students in Geometry usually learn that when an altitude is dropped to the hypotenuse of a right triangle, three different relationships are created, each involving a geometric mean (see Figure 5.2 and Exercise 14). 2. Also in Geometry, students learn that the length of a tangent segment to a circle is equal to the geometric mean of two secant segments (see Figure 5.3 and Exercise 15). 3. Finally, in Algebra 2, students often learn to interpolate between two nonconsecutive terms of a geometric series by computing a geometric mean.
237
Exponential and Logarithmic Functions 237
Figure 5.2 BD is the geometric mean of AD and DC ; BA is the geometric mean of AD and AC ; and BC is the geometric mean of CD and CA.
Figure 5.3 The length of the tangent segment PC is the geometric mean of PA and PB .
These three contexts all share an important feature: the geometric mean is computed from two (and only two) numbers. In fact Geometry textbooks usually define the geometric mean of two numbers as the solution to a proportion of the form a x = x b or by the formula x = ab . The more general notion (of taking the geometric mean of a set of two or more positive numbers) is usually not taught. This was not always the case: prior to
238
238 Exponential and Logarithmic Functions the 1980s, Algebra 2 textbooks typically covered the geometric mean during the sections on geometric sequences, exponential growth, and compound interest, where it still (potentially) could have a natural place in the curriculum. Consider, for example, the following problem: The value of an investment increases in three consecutive years by 10%, 15%, and 2%. What is the average annual percent increase for the value of the investment? A natural (but incorrect) approach to solving this problem is to compute the average (i.e., 10 + 15 + 2 the arithmetic mean) of the three individual percent increases: = 9. However, a 3 9% increase for three consecutive years is not equivalent to successive increases of 10%, 15% and 2%. This is because consecutive percent increases combine not by addition, but by multiplication. That is, after each year the previous year’s value is multiplied by 1.10, 1.15, and 1.02, respectively; therefore the cumulative effect is that over three years the value is multiplied by (1.10 )(1.15)(1.02 ) ≈ 1.29 . The effect of this is that the net increase over three years is 29%, not 27%. To find the annual average percent increase, we take the geometric mean of these three multiplicative factors: 3
(1.10 )(1.15 )(1.02 ) ≈ 1.0887
and therefore the average annual percent increase is 8.87%. Such problems were once fairly commonplace in the secondary curriculum. Some older textbooks even included a statement of the AM–GM inequality, although it was nearly always stated without proof. (In the previous example, the fact that the geometric mean calculation leads to the answer 8.87%, while the incorrect arithmetic mean calculation leads to the larger answer 9%, is an instance of the AM–GM inequality.) For the most part, these materials have disappeared from the secondary curriculum. We return now (at last) to the proof of the Approximation Theorem for nth roots: Proof of Theorem (continued): Suppose b is an upper bound for the nth root of r, and set 1 a = r / b n −1. Recall that this implies a n < r < b n . We define b' = ( a + ( n − 1) b ); then b' < b n n (Exercise 16). Furthermore, by the AM–GM inequality, r < ( b' ) , and therefore b' is a (better) upper bound for the nth root of r. If we now define a' = r / ( b' ) , then we have n −1
a n < ( a' ) < r < ( b' ) < b n , n
n
as desired. Let’s illustrate the Approximation Theorem with some numerical computations. Earlier, we proposed using b = 2 as a (very poor) upper bound for the cube root of 5. Following 5 5 the procedure above, we compute a = 2 = as our (extremely bad) lower bound. Then 2 4 15 5 80 7 b' = + 2 + 2 = is our improved upper bound, and a' = = is our improved 2 4 34 (7 / 4) 49 lower bound. It can be confirmed directly that 3
3
3
80 7 5 3 < < 5 < < 2 4 49 4
239
Exponential and Logarithmic Functions 239 1 80 7 7 503 The process can be iterated, getting an even better upper bound b" = + + = , 3 49 4 4 294 5 3 3 and corresponding lower bound a" = . It can be verified that both ( a" ) and ( b" ) 2 (503 / 294) are quite close to 5 (see Exercise 17). Note that if r and the original upper bound b are both rational, then so too will be a, b' and a' . Thus the theorem just proved also shows that nth roots of rational numbers have rational approximations of arbitrary precision. We also observe, in passing, that although our statement (and proof) of the Approximation Theorem begins with an upper bound, it is just as true if we start with a lower bound (see Exercise 18). We are now (at last!) ready to prove the following: Theorem (Existence of Roots). Let r ∈ + be an arbitrary positive real number, and let n be any positive integer. Then the equation x n = r has a solution in . Proof. We begin by defining the set S = {c ∈ | c n < r}. This set is nonempty (since 0 ∈S), and is bounded above (for example, by r +1; see Exercise 21); therefore, by the completeness property of the reals, it has a least upper bound. Let’s denote this least upper bound u. We claim that u n = r. To see this, we will show that the other two possibilities (u n < r and u n > r) both lead to contradictions. (a) Suppose first that u n > r . Then u is an upper bound for an nth root of r. Therefore, by n the Approximation Theorem, there exists a real number u' with u' < u and ( u' ) > r . The latter condition implies that for all c ∈ S , u' > c , and hence u' is also an upper bound for S, contradicting the “leastness” of u. (b) Next, suppose that u n < r . Then u is a lower bound for an nth root of r, and therefore, again by the Approximation Theorem (in particular, the variation of it in Exercise 18) n there exists a real number u' with u' > u and ( u' ) < r . Now, u ’ may or may not be rational; however, by the Density of Rationals theorem11 there exists a rational number c with u < c < u' . This rational satisfies c n < r, and therefore c ∈ S . But this contradicts the fact that u is supposed to be an upper bound for S. Because both u n < r and u n > r lead to contradictions, by trichotomy we conclude that u n = r, which completes the proof that r has an nth root. The theorem just proved shows that nth roots exist for any positive number. It is fortunately quite a bit simpler to prove that they are unique; that is, we have the following: Theorem (Uniqueness of Roots). Let n be a positive integer, and let x and y be two positive real numbers with x n = y n. Then x = y. Proof. Exercise 22. Now that we have proven that positive nth roots of positive numbers exist and are unique, it is safe for us to introduce the notation n a for the unique positive solution of the equation x n = a.
We turn now to the questions raised earlier: Do we know for certain that $\sqrt[n]{b^m}$ and $\left(\sqrt[n]{b}\right)^m$ are equal? That is, does the operation of exponentiation (by a positive integer) commute with taking an $n$th root? Furthermore, if $\frac{m}{n}$ and $\frac{m'}{n'}$ are two different representations of the same rational number, are we certain that $\sqrt[n]{b^m}$ and $\sqrt[n']{b^{m'}}$ are the same real number? The answer to both of these questions (fortunately) is yes, with one important caveat. The next two propositions are similar in form to the earlier proof that the notation $b^{-n}$ is unambiguous:

Proposition. For any positive real number $b$ and any two positive integers $m$ and $n$,

$$\sqrt[n]{b^m} = \left(\sqrt[n]{b}\right)^m.$$

Proof. We need to show that $\left(\sqrt[n]{b}\right)^m$ is the $n$th root of $b^m$, so we raise $\left(\sqrt[n]{b}\right)^m$ to the $n$th power:

$$\left(\left(\sqrt[n]{b}\right)^m\right)^n = \left(\sqrt[n]{b}\right)^{mn} = \left(\left(\sqrt[n]{b}\right)^n\right)^m = b^m$$
Similarly, we have:

Proposition. Let $b$ be any positive real number, and let $m, n, m', n'$ be four positive integers with $\frac{m}{n} = \frac{m'}{n'}$. Then $\sqrt[n]{b^m} = \sqrt[n']{b^{m'}}$.

Proof. We raise both $\sqrt[n]{b^m}$ and $\sqrt[n']{b^{m'}}$ to the power of $nn'$ and show that the results are equal. First, on the left-hand side we have

$$\left(\sqrt[n]{b^m}\right)^{nn'} = \left(\left(\sqrt[n]{b^m}\right)^n\right)^{n'} = (b^m)^{n'} = b^{mn'}$$

while on the right-hand side we have

$$\left(\sqrt[n']{b^{m'}}\right)^{nn'} = \left(\left(\sqrt[n']{b^{m'}}\right)^{n'}\right)^n = (b^{m'})^n = b^{m'n}$$

These are equal, because $\frac{m}{n} = \frac{m'}{n'}$ implies that $mn' = m'n$. Since raising both $\sqrt[n]{b^m}$ and $\sqrt[n']{b^{m'}}$ to the same power produces equal results, $\sqrt[n]{b^m} = \sqrt[n']{b^{m'}}$.

The results above allow us, at last, to introduce the notation $b^{m/n}$ to mean $\sqrt[n]{b^m}$ for any rational number $\frac{m}{n}$, secure in the knowledge that the result does not change no matter which (equivalent) representation of the fraction we choose. Moreover, we can freely write

$$(b^{1/n})^m = b^{m/n} = (b^m)^{1/n}$$
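Floating-point arithmetic cannot substitute for the proof, but it does make the consistency claim tangible. The following sketch is my own illustration (not from the text): it evaluates $b^{m/n}$ as the $n$th root of $b^m$ for several equivalent representations of the same rational exponent and checks that the results agree up to rounding error.

```python
import math

def rational_power(b, m, n):
    """Compute b**(m/n) for a positive base b as the nth root of b**m."""
    return math.pow(b, m) ** (1.0 / n)

b = 7.3  # an arbitrarily chosen positive base
# 2/3, 4/6 and 10/15 are three representations of the same rational number.
values = [rational_power(b, m, n) for (m, n) in [(2, 3), (4, 6), (10, 15)]]
print(values)
assert all(math.isclose(v, values[0], rel_tol=1e-12) for v in values)
# The "power of a root" and "root of a power" orders also agree:
assert math.isclose((b ** (1 / 3)) ** 2, (b ** 2) ** (1 / 3), rel_tol=1e-12)
```

Any positive base could be substituted for the arbitrarily chosen 7.3; the negative-base failure discussed shortly is exactly where such checks break down.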
We now have everything in place to extend the properties of exponents, already known for the case where the exponents are positive integers, to the more general case where the exponents are rationals.

Proposition (Properties of Rational Exponents). Let $b, c$ be any two positive real numbers and let $q, q' \in \mathbb{Q}$ be two rationals. Then:
(a) $b^q c^q = (bc)^q$.
(b) $b^q b^{q'} = b^{q+q'}$.
(c) $(b^q)^{q'} = b^{qq'}$.

Proof. We prove (b), leaving the proofs of (a) and (c) to Exercises 23 and 24. Let $q = \frac{m}{n}$ and $q' = \frac{m'}{n'}$. Without loss of generality, we assume that $n$ and $n'$ are positive. Then $q + q' = \frac{mn' + m'n}{nn'}$, and we need to show that

$$b^{(mn' + m'n)/nn'} = b^{m/n}\, b^{m'/n'}$$

As in the proof of the last proposition, we show this by raising both expressions to the power of $nn'$ and showing that the results are equal. On the left-hand side, we have just $b^{mn' + m'n}$. On the right-hand side, we have

$$\left(b^{m/n} b^{m'/n'}\right)^{nn'} = \left(b^{m/n}\right)^{nn'} \left(b^{m'/n'}\right)^{nn'} = b^{mn'} b^{m'n} = b^{mn' + m'n}$$

which completes the proof of (b).

The careful reader may have noticed that for the last several pages we have quietly inserted the condition that $b$ is positive into all of our propositions, theorems and proofs. In fact, most of the theory of rational exponents fails if we allow negative bases! In particular:

1. If $b$ is negative, then for even values of $n$ the equation $x^n = b$ has no solution, and therefore the expressions $\sqrt[n]{b}$ and $b^{1/n}$ are not well defined;
2. If $b$ is negative, and $m = n = 2$, then the expression $\sqrt[n]{b^m}$ is defined, but the expression $\left(\sqrt[n]{b}\right)^m$ does not refer to any real number, so the notation $b^{m/n}$ cannot be used in the latter sense; and
3. If $b$ is negative, and $m$ and $n$ are odd positive integers, then the expressions $\sqrt[2n]{b^{2m}}$ and $\sqrt[n]{b^m}$ are both defined, but refer to different real numbers, and therefore we cannot define the expression $b^q$ in a way that does not depend on the representation of the rational number $q$.
All of these problems go away if we only consider b > 0. For this reason, if we want to allow rational exponents (which we certainly do!), we restrict the universe of permissible bases to only positive real numbers. In what follows, we will always assume that the base b is positive, unless stated otherwise.
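Item 3 above is easy to see concretely. The snippet below is my own illustration (not the author's): it takes $b = -8$ and writes the exponent $1/3$ in the two equivalent forms $1/3$ and $2/6$.

```python
def odd_root(x, n):
    """Real nth root for odd n, defined for negative x as well."""
    return -((-x) ** (1.0 / n)) if x < 0 else x ** (1.0 / n)

b = -8.0
via_odd_root  = odd_root(b, 3)            # cube root of -8:          about -2.0
via_even_root = (b ** 2) ** (1.0 / 6)     # sixth root of (-8)**2:    about  2.0
print(via_odd_root, via_even_root)        # the two values disagree
```

Both computations correspond to the same rational exponent ($1/3 = 2/6$), yet they return approximately $-2$ and $2$ respectively; this is exactly the ambiguity that restricting to positive bases eliminates.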
For future reference, we can also extend the Monotonicity Lemma from earlier in this section:

Lemma (Monotonicity of $x^q$). For any fixed positive rational $q$, the function $f(x) = x^q$ is a strictly increasing function on $\mathbb{R}^+$; that is, if $0 < x_1 < x_2$ then $f(x_1) < f(x_2)$.
Proof. Exercise 25.
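Exercise 25 asks for a proof; as a purely numerical sanity check (my own, with an arbitrarily chosen exponent), one can sample the function at increasing inputs:

```python
# A quick numerical sanity check of the lemma for one choice of q (q = 2/3).
q = 2 / 3
xs = [0.1, 0.5, 1.0, 2.0, 10.0, 100.0]
values = [x ** q for x in xs]
print(values)
assert all(values[i] < values[i + 1] for i in range(len(values) - 1))
```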
Exercises

8. Show that if $r > 1$, then for any positive integer $n$, $r^n > 1$. (Hint: Use induction, and the properties of an ordered field from Chapter 1.)
9. Show that $\sqrt{a_1 a_2} \le \frac{a_1 + a_2}{2}$ is equivalent to $(a_1 - a_2)^2 \ge 0$.
10. Explain each step in the proof of the AM–GM inequality. Specifically: (a) Why is $M_1 M_2 \le A_1^n A_2^n$? (b) Why is $(A_1 A_2)^n \le \left(\frac{A_1 + A_2}{2}\right)^{2n}$? (c) Why is $\frac{A_1 + A_2}{2} = \frac{a_1 + a_2 + \cdots + a_{2n}}{2n}$?
11. Show that if $A$ is the arithmetic mean of a set of numbers $a_1, a_2, \ldots, a_n$, then the arithmetic mean of $a_1, a_2, \ldots, a_n, A, A, \ldots, A$ is also $A$.
12. Let $f$ be any function satisfying the property $f(xy) = f(x) + f(y)$. (Later we shall call such a function $f$ a "logarithm-like function".) Show that applying $f$ to the geometric mean of $a_1, a_2, \ldots, a_n$ (any set of positive real numbers) produces the arithmetic mean of $f(a_1), f(a_2), \ldots, f(a_n)$.
13. Let $g$ be any function satisfying the property $g(x + y) = g(x)g(y)$. (Later we shall call such a function $g$ an "exponential-like function".) Show that applying $g$ to the arithmetic mean of $a_1, a_2, \ldots, a_n$ (any set of real numbers) produces the geometric mean of $g(a_1), g(a_2), \ldots, g(a_n)$.
14. Prove the three geometric mean relationships in Figure 5.2.
15. Prove the geometric mean relationship in Figure 5.3.
16. In the proof of the Approximation Theorem for nth roots, explain why $b' < b$.
17. Confirm that $\left(\frac{5}{(503/294)^2}\right)^3$ and $\left(\frac{503}{294}\right)^3$ are both very close to 5, either by hand, or (preferred) by using a table of logarithms (such as that found on pp. 200–201 of A Second Course in Algebra, 1911) and the methods of §5.1.
18. Prove the following slight variation on the Approximation Theorem: Let $a$ be a lower bound for an $n$th root of $r$, and let $b = \frac{r}{a^{n-1}}$ be the corresponding upper bound. Then there exists another lower bound $a'$ that better approximates an $n$th root of $r$, in the sense that $a^n < (a')^n < r$; moreover if we define $b' = \frac{r}{(a')^{n-1}}$ then we have $r < (b')^n < b^n$ as well.
19. Use the methods of the Approximation Theorem to find upper and lower bounds for $\sqrt[5]{180}$, beginning with 3 as an initial upper bound and using at least two iterations of the method.
20. Use the methods of the Approximation Theorem (and the variation of it you prove in Exercise 18) to find upper and lower bounds for $\sqrt[4]{700}$, beginning with 5 as an initial lower bound and using at least two iterations of the method.
21. Let $r > 0$ and let $n$ be a positive integer. Show that $r + 1$ is an upper bound of the set $S = \{c \in \mathbb{R} \mid c^n < r\}$.
22. Let $n$ be a positive integer, and let $x$ and $y$ be two positive real numbers with $x^n = y^n$. Show that $x = y$.
23. Prove that for any two positive real numbers $b, c$ and any rational $q$, $b^q c^q = (bc)^q$.
24. Prove that for any positive real number $b$ and any two rationals $q, q'$, $(b^q)^{q'} = b^{qq'}$.
25. Prove that for any fixed positive rational $q$, the function $f(x) = x^q$ is a strictly increasing function on $\mathbb{R}^+$; that is, if $0 < x_1 < x_2$ then $f(x_1) < f(x_2)$.

What should be the meaning of $b^r$ when $r$ is an irrational number? As we observed earlier in this section, this question is more difficult to answer than it may seem. By definition an irrational number cannot be written as a ratio of integers, and therefore the "root of a power" approach is unavailable to us. Most curricula sidestep the issue entirely. For example, A Second Course in Algebra (1911) introduces the basic properties of exponents, justifies them on the grounds that exponentiation means repeated multiplication, and then simply says "It is assumed that these laws hold for all real values of [the exponents]" (p. 90). Writing in 1960 about the challenge of explaining what is meant by irrational exponents, the authors of the School Mathematics Study Group's12 Intermediate Mathematics wrote:

…It is exceptionally difficult to present a satisfactory treatment of exponents, and the usual high school courses in mathematics give only a small fragment of the theory. What is the meaning of $3^{\sqrt{2}}$, $10^{\pi}$, …, and how do we prove that the usual laws of exponents hold for rational and irrational exponents? It is not possible to give satisfactory answers to these questions in the usual treatment of exponents. (Intermediate Mathematics, Teacher's Commentary [Part II], p. 548; emphasis added.)

One possible approach toward solving this problem is to begin with the observation that every real number, whether rational or irrational, can be approximated by a sequence of rationals. For example, suppose we want to calculate $10^{\pi}$. Using decimals, $\pi$ can be approximated by the sequence 3, 3.1, 3.14, 3.141, 3.1415, … Because each number in this sequence is a rational number, we know what it means to use each one as an exponent: that is,

$$10^{3} = 1000$$
$$10^{3.1} = 10^{31/10} = \sqrt[10]{10^{31}} \approx 1{,}258.925$$
$$10^{3.14} = 10^{314/100} = 10^{157/50} = \sqrt[50]{10^{157}} \approx 1{,}380.384$$
$$10^{3.141} = 10^{3141/1000} = \sqrt[1000]{10^{3141}} \approx 1{,}383.566$$
$$10^{3.1415} = 10^{31415/10000} = 10^{6283/2000} = \sqrt[2000]{10^{6283}} \approx 1{,}385.160$$

As we add on more digits to the approximation of $\pi$, the decimal representation of $10^{\pi}$ begins to take shape. So one way to define an expression of the form $b^r$ where $r$ is irrational is as follows: find a sequence of rational numbers $q_n$ that converges to $r$, and then define

$$b^r = \lim_{n \to \infty} b^{q_n}$$

Apart from the fact that this definition requires us to first work out a reasonably thorough theory of limits—something which is normally not covered until first-year Calculus—this approach seems sensible. However, it raises some subtle questions. How do we know that the limit defined by this expression is the same value regardless of which sequence we use to approximate $r$? For example, in addition to the decimals above, $\pi$ can also be approximated by the following sequence of rationals:

$$\frac{22}{7}, \ \frac{179}{57}, \ \frac{333}{106}, \ \frac{355}{113}, \ldots$$

which leads to the following sequence of approximate values for $10^{\pi}$:

$$10^{22/7} = \sqrt[7]{10^{22}} \approx 1{,}389.495$$
$$10^{179/57} = \sqrt[57]{10^{179}} \approx 1{,}381.500$$
$$10^{333/106} = \sqrt[106]{10^{333}} \approx 1{,}385.190$$
$$10^{355/113} = \sqrt[113]{10^{355}} \approx 1{,}385.457$$

It seems as though this sequence might be converging to the same limit as the one we found before, but is it actually? What is that limit, anyway?
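Before turning to a more careful construction, it is worth seeing numerically where these values are heading. The sketch below is my own illustration (it leans on ordinary floating-point exponentiation, which the text has not yet justified for this purpose); it evaluates $10^q$ along both sequences of rational approximations to $\pi$.

```python
from fractions import Fraction

# Two different sequences of rationals converging to pi.
decimals    = [Fraction(3), Fraction(31, 10), Fraction(314, 100),
               Fraction(3141, 1000), Fraction(31415, 10000)]
convergents = [Fraction(22, 7), Fraction(179, 57),
               Fraction(333, 106), Fraction(355, 113)]

for label, seq in [("decimal truncations", decimals),
                   ("continued-fraction convergents", convergents)]:
    print(label)
    for q in seq:
        # 10 raised to a rational exponent, computed in floating point
        print(f"  10^({q}) ~ {10.0 ** float(q):,.3f}")
```

Both lists appear to settle in near the same value (roughly 1,385.46), which is encouraging, but it is not yet a definition: we have not shown that every sequence of rational approximations to $\pi$ leads to the same limit.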
A (somewhat) simpler approach is to refer back to our study of the Real Number Characterization Theorem in Chapter 1, and recall that there is a one-to-one correspondence pairing each real number with a downward-closed, open, bounded, rational subset (DCOBRS, or Dedekind cut) (see §1.11). More specifically, we associate to each real number $r$ the set of all rationals less than $r$:

$$S_r = \{q \in \mathbb{Q} \mid q < r\}$$

Now, choose a (positive) base $b$. For each $q \in S_r$, we can define $b^q$, since the exponents are all rational numbers. Our strategy is to take all of the different values of $b^q$ corresponding to all $q \in S_r$ and use them to form a new Dedekind cut that can be used to define $b^r$. It's not quite as simple as putting all of those values of $b^q$ together into one set, though, for two reasons: (a) most of the values of $b^q$ aren't rational, and (b) the set $\{b^q \mid q \in S_r\}$ is not downward-closed (Exercise 26). However, each of the numbers $b^q$ is also associated to its own Dedekind cut. Moreover, these sets are nested, in the following sense: For each $q \in S_r$, let $A_q$ denote the set of all rational numbers less than $b^q$. Then for any two rationals $q_1, q_2$, we can see that $q_1 < q_2$ if and only if $A_{q_1} \subset A_{q_2}$ (Exercise 27). Then, if we take the union of all of the Dedekind cuts $A_q$, we form a new Dedekind cut, $\bigcup_{q \in S_r} A_q$ (Exercise 28). This Dedekind cut has a least upper bound, $u$. The following properties are straightforward to verify:

Proposition. With $b$, $r$, and $u$ as above:
(a) If $q$ is a rational number with $q < r$, then $b^q < u$.
(b) If $q$ is a rational number with $q > r$, then $b^q > u$.
Proof. Exercise 29.

This proposition tells us that the least upper bound $u$ is the cut-off value between those numbers $b^q$ corresponding to rational numbers below $r$ and those above it. For this reason, it is reasonable to define $b^r$ to be equal to $u$.

Definition. For an irrational number $r$, we define $b^r$ to be the least upper bound of the set $\bigcup_{q \in S_r} A_q$, where $S_r = \{q \in \mathbb{Q} \mid q < r\}$, and for each $q \in S_r$, $A_q$ is the set of all rational numbers less than $b^q$.

With this definition in place, we can finally extend the properties of exponentiation to the most general case:

Proposition (Properties of Real Exponents). Let $b, c$ be any two positive real numbers and let $r, r' \in \mathbb{R}$ be two real numbers. Then:
(a) $b^r c^r = (bc)^r$.
(b) $b^r b^{r'} = b^{r+r'}$.
(c) $(b^r)^{r'} = b^{rr'}$.
Proof. We have already dealt with the case in which both $r$ and $r'$ are rational, so we need only deal with the case in which at least one of them is irrational. In what follows we assume that $r \in \mathbb{Q}$ but $r'$ is irrational; the case in which both are irrational is similar. Moreover, for brevity's sake we give the proof for (b) only, leaving the others to Exercises 30 and 31.

By the previous proposition, we know that for all rationals $q$, $q < r'$ if and only if $b^q < b^{r'}$. Since $b^r > 0$, we can also say that $q + r < r + r'$ if and only if $b^{q+r} = b^q b^r < b^r b^{r'}$. If we write $t = q + r$ and $s = r + r'$, this can be re-stated as: $t < s$ if and only if $b^t < b^r b^{r'}$. However, by definition, $t < s$ if and only if $b^t < b^s$. It therefore follows that $b^r b^{r'} = b^s$, i.e. that $b^r b^{r'} = b^{r+r'}$, which was the property we wanted to show.
This last proposition can be paraphrased as follows:

Corollary. For any positive real number $b$, the function $f : \mathbb{R} \to \mathbb{R}^+$ defined by $f(x) = b^x$ is a group homomorphism from the additive group of real numbers $(\mathbb{R}, +)$ to the multiplicative group of positive reals $(\mathbb{R}^+, \times)$.

Proof. The three properties that need to be verified are that $f(0) = 1$, $f(x + y) = f(x)f(y)$, and $f(-x) = f(x)^{-1}$, all of which have already been established.

This last proposition concludes our discussion of exponents, their meaning, and their properties. In the next section we turn to consider their inverses, the logarithmic functions.
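These properties are easy to spot-check numerically. The snippet below is a sanity check of my own (not a proof; the base 2.7 is arbitrary):

```python
import math
import random

b = 2.7                       # an arbitrary positive base

def f(x):
    return b ** x

assert math.isclose(f(0.0), 1.0)
for _ in range(1000):
    x, y = random.uniform(-5, 5), random.uniform(-5, 5)
    assert math.isclose(f(x + y), f(x) * f(y), rel_tol=1e-9)   # additive -> multiplicative
    assert math.isclose(f(-x), 1.0 / f(x), rel_tol=1e-9)       # inverses map to inverses
print("all checks passed")
```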
Exercises

26. Let $S_r = \{q \in \mathbb{Q} \mid q < r\}$ for some irrational $r$, choose a positive real base $b$, and form the set $\{b^q \mid q \in S_r\}$. Show that this set is not a DCOBRS because (a) most of its members are not rational, and (b) it is not downward-closed.
27. For each $q \in S_r$, let $A_q$ denote the set of all rational numbers less than $b^q$. Then for any two rationals $q_1, q_2$, prove that $q_1 < q_2$ if and only if $A_{q_1} \subset A_{q_2}$.
28. Let $b$ be a positive base, and let $r$ be an irrational number, with corresponding DCOBRS $S_r = \{q \in \mathbb{Q} \mid q < r\}$. For each $q \in S_r$, let $A_q$ denote the set of all rational numbers less than $b^q$. Prove that the set $\bigcup_{q \in S_r} A_q$ is a DCOBRS.
29. With the notation of the previous problem, let $u$ be the least upper bound of the set $\bigcup_{q \in S_r} A_q$. Show that (a) if $q$ is a rational number with $q < r$, then $b^q < u$; and (b) if $q$ is a rational number with $q > r$, then $b^q > u$.
30. Let $b, c$ be any two positive real numbers and let $r \in \mathbb{R}$ be any real number; prove that $b^r c^r = (bc)^r$.
31. Let $b$ be any positive real number and let $r, r' \in \mathbb{R}$ be two real numbers; prove that $(b^r)^{r'} = b^{rr'}$.
5.3 Exponential Equations and Logarithmic Functions

In the last section, we showed how exponentiation can be extended to all real numbers: that is, given any positive real number $b$, we now have an exponential function $f : \mathbb{R} \to \mathbb{R}^+$ given by $f(x) = b^x$. This exponential function satisfies all three of the properties of a group homomorphism: (a) $f(0) = 1$, (b) $f(x + y) = f(x)f(y)$, and (c) $f(-x) = (f(x))^{-1}$.
We now turn to the problem of defining logarithmic functions. We begin with a fundamental proposition:

Proposition (Monotonicity of $b^x$). Let $b$ be a positive real number. Then if $b > 1$ the exponential function $f(x) = b^x$ is strictly increasing; that is, if $x_1 < x_2$ then $f(x_1) < f(x_2)$. On the other hand if $0 < b < 1$ then $f(x) = b^x$ is strictly decreasing; that is, if $x_1 < x_2$ then $f(x_1) > f(x_2)$.

The reader may be experiencing a bit of déjà vu here. Didn't we already prove this in the last section? In fact, no! Our first monotonicity result, near the beginning of §5.2, proved that $x^n$ is a strictly increasing function for any positive integer value of $n$; later, toward the end of §5.2, we extended this result to show that $x^q$ is also strictly increasing for any positive rational value of $q$. In both cases, we took the exponent to be a fixed (positive) constant, and let the (positive) base vary. In our current case, we fix the base and let the exponent vary. In other words: although the preceding results were about exponentiation, the functions involved, $g(x) = x^n$ and $h(x) = x^q$, were not exponential functions13!

Proof. There are a lot of different cases to consider.

(a) First, consider the case $b > 1$, and suppose $x_1$ and $x_2$ are both positive integers. Then $b^{x_1} = \underbrace{b \cdots b}_{x_1\ \text{factors}}$ and $b^{x_2} = \underbrace{b \cdots b}_{x_2\ \text{factors}}$. If $x_1 < x_2$ then there are more factors of $b$ in the second expression than in the first; the result then follows by using induction and the fact that if $A$ is any positive number, and $b > 1$, then $bA > A$.

(b) Now continue to assume $b > 1$ and suppose that $x_1$ and $x_2$ are both positive rational numbers. Then we write $x_1 = \frac{m_1}{n_1}$ and $x_2 = \frac{m_2}{n_2}$, where $m_1, n_1, m_2$ and $n_2$ are all positive integers; we then must prove that $x_1 < x_2$ implies $b^{m_1/n_1} < b^{m_2/n_2}$. By our monotonicity results of the previous section, the latter inequality (the one we want to prove) is equivalent to

$$\bigl(b^{m_1/n_1}\bigr)^{n_1 n_2} < \bigl(b^{m_2/n_2}\bigr)^{n_1 n_2}$$

which is in turn equivalent to

$$b^{m_1 n_2} < b^{m_2 n_1}$$

In this expression, both exponents are positive integers; moreover the assumption that $m_1/n_1 < m_2/n_2$ means precisely that $m_1 n_2 < m_2 n_1$. So by (a), $b^{m_1 n_2} < b^{m_2 n_1}$, and therefore $b^{x_1} < b^{x_2}$.

(c) Next, still under the assumption $b > 1$, we let one of $x_1$ and $x_2$ be a positive rational, and the other one a positive irrational. Now one of the propositions of the previous section (and Exercise 29) said that if $q$ is a rational number and $r$ an irrational with $q < r$, then $b^q < b^r$; and if $q > r$, then $b^q > b^r$. Therefore regardless of which of $x_1$ and $x_2$ is irrational, we have $b^{x_1} < b^{x_2}$.
(d) Now suppose that $b > 1$ and that $x_1$ and $x_2$ are both positive irrationals. By the density of rationals, there exists a rational number $q$ with $x_1 < q < x_2$. By (c), we find that $b^{x_1} < b^q < b^{x_2}$, and therefore $b^{x_1} < b^{x_2}$.

(e) If $0 < b < 1$ and $x_1$ and $x_2$ are both positive then all of the above argument works exactly the same way, but with all of the inequalities reversed; this is because if $A$ is any positive number, and $0 < b < 1$, then $bA < A$.

(f) If $x_1$, $x_2$ or both are negative, we use the fact that (by definition) $b^{-x} = \left(\frac{1}{b}\right)^x$ to complete the argument. We omit the details.

Now, we take fixed values of $a, b > 0$, and ask the question: Does the equation $a^x = b$ have a solution? The question may seem like a simple one. But actually finding a solution using algebraic methods is typically impossible—so why are we so sure that one exists?

Consider for example the case where $a = 4$. The equation $4^x = b$ certainly has a solution if, for example, $b$ is a pure power of 2; in that case, we can write $b = 2^n$, and then solve the equation by writing both sides as powers of 2:

$$(2^2)^x = 2^n$$
$$2^{2x} = 2^n$$
$$2x = n$$
$$x = n/2$$

For example, 32 is a pure power of 2 (because $2^5 = 32$), and applying the argument above we find that the unique solution to the equation $4^x = 32$ is $x = 5/2$. But most real numbers are not pure powers of 2, so this method is of limited applicability. How, for instance, can we solve an equation like $4^x = 40$? We might recognize that because $4^{5/2} = 32$ and $4^3 = 64$, by monotonicity the solution (if one exists) would need to lie between $5/2$ and 3. But where, exactly? Any attempts to express 40 as an $n$th root of an $m$th power of 4, for some $n$ and $m$, are doomed to failure, because the equation $4^x = 40$ has no rational solution. This is not hard to prove; the easiest way relies on the following basic result of number theory:

Theorem (Fundamental Theorem of Arithmetic). Any positive integer $N$ has a unique factorization into primes. That is, if $p_1^{n_1} \cdots p_k^{n_k}$ is a factorization of $N$ into distinct prime numbers $p_1, \ldots, p_k$, and $q_1^{m_1} \cdots q_j^{m_j}$ is a second factorization of $N$ into distinct prime numbers $q_1, \ldots, q_j$, then $k = j$, and the two expressions are identical except for a possible re-ordering of the factors.
Proof. Omitted.

The fact that $4^x = 40$ has no rational solution follows directly from the Fundamental Theorem of Arithmetic:

Proposition. The equation $4^x = 40$ has no rational solution.

Proof. Suppose that $m, n$ are two integers such that $4^{m/n} = 40$. Then, exponentiating both sides, we have

$$4^m = 40^n$$

Both $4^m$ and $40^n$ are integers, and therefore can be factored into primes in a unique way. The prime factorization of $4^m$ is $2^{2m}$, while the prime factorization of $40^n$ is $2^{3n} 5^n$ (Exercise 32). Thus the equation we are trying to solve is equivalent to

$$2^{2m} = 2^{3n} 5^n$$

But this is impossible, because if both of these expressions are equal to the same number, that number would have two different prime factorizations: one factorization includes 5 as a factor, while the other does not. This would contradict the FTA, and therefore no such $m, n$ can exist.

This method can be adapted to prove that most exponential equations have no rational solutions (see Exercise 33). However, the fact that we can't find an exact rational solution to $4^x = 40$ does not mean we can't find good approximate solutions. Inspired by the above proof, we observe that it would really be sufficient for us to find an approximate solution to the equation $2^r = 5$. If we had one—that is, if we could find a rational value for $r$ with the property that $2^r \approx 5$—then we would have $2^3 \cdot 2^r \approx 8 \cdot 5 = 40$, and therefore our equation becomes $2^{2x} \approx 2^3 \cdot 2^r = 2^{3+r}$, from which we could conclude that $2x \approx 3 + r$, and therefore $x \approx \frac{1}{2}(3 + r)$. So the problem of finding an approximate rational solution to $4^x = 40$ reduces to the problem of finding positive whole numbers $m, n$ with the property that $2^{m/n} \approx 5$, or equivalently $2^m \approx 5^n$. Once again, by the Fundamental Theorem of Arithmetic, we know the equation $2^m = 5^n$ has no exact solutions (other than the trivial solution, $m = n = 0$). But in order to find an approximate solution, all we have to do is find a power of 2 and a power of 5 that are "close" to one another. The powers of 2 are
2, 4, 8, 16, 32, 64, 128, 256, 512, … while the powers of 5 are 5, 25, 125, 625, 3125, …
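This kind of search is easy to automate. The sketch below is my own illustration (the 3% tolerance and the search ranges are arbitrary choices of mine); it uses only integer arithmetic to flag pairs of powers whose values nearly agree.

```python
# Scan for powers of 2 and powers of 5 that agree to within 3 percent
# (measured relative to the smaller of the two). No logarithms are used.
powers_of_2 = {m: 2 ** m for m in range(1, 40)}
powers_of_5 = {n: 5 ** n for n in range(1, 18)}

for m, p2 in powers_of_2.items():
    for n, p5 in powers_of_5.items():
        if abs(p2 - p5) * 100 <= 3 * min(p2, p5):
            print(f"2^{m} = {p2}  ~  5^{n} = {p5}")
```

Within these ranges the only near-coincidence it reports is the pair discussed next; noticeably better pairs (the next good one is $2^{65}$ and $5^{28}$) require much larger exponents.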
Inspecting these two lists, we notice that $2^7 = 128$ is not too far from $5^3 = 125$. (To be precise, they differ by a little bit more than 2%.) From the observation that $2^7 \approx 5^3$ we deduce that $2^{7/3} \approx 5$, and therefore $2^{3 + 7/3} \approx 2^3 \cdot 5 = 40$. Thus $x = \frac{1}{2}\left(3 + \frac{7}{3}\right)$, or $x = \frac{8}{3}$, would seem to be a reasonable approximate solution to $4^x = 40$. Since $2^7$ is a bit larger than $5^3$, we can conclude that $x = \frac{8}{3}$ is actually an overestimate, in the sense that $4^{8/3}$ is slightly larger than 40 (Exercise 34). In fact, the two values differ by less than 1%. We can always find a better approximate solution by looking for another pair, consisting of a power of 2 and a power of 5, that are even closer together. In fact, we have the following theorem, analogous to what we proved in the previous section about approximating $n$th roots:

Theorem (Approximate Solutions for exponential equations). Let $a, b$ be any two positive real numbers, and consider the exponential equation $b^x = a$. Then any rational approximation to a solution can be improved upon. That is, (a) if $c$ is an approximate rational solution such that $b^c < a$ (i.e. an underestimate for a solution), then there exists another rational $c'$ that is a better underestimate, in the sense that $b^c < b^{c'} < a$; and (b) if $c$ is an approximate rational solution such that $b^c > a$ (i.e. an overestimate for a solution), then there exists another rational $c'$ that is a better overestimate, in the sense that $b^c > b^{c'} > a$.
Proof. First, assume that $b > 1$, and suppose we have $b^c < a$. We want to show that we can increase the exponent slightly without exceeding $a$. In other words, we want to show that there exists some (possibly very small) value $u \in \mathbb{Q}$ such that $b^c < b^{c+u} < a$, or equivalently $1 < b^u < a/b^c$. In fact, we will show that there exists some $n$ such that $1 < b^{1/n} < a/b^c$. This amounts to proving that there exists $n$ such that $(a/b^c)^n > b$. This fact follows from the following lemma, which is analogous to a result we proved about complete ordered fields in Chapter 1:

Lemma (Archimedean Property of exponential functions). Given any $a, b > 1$, there exists a natural number $n$ with $a^n > b$.

Before proceeding with the proof, we pause to illustrate the meaning of this lemma. Suppose the number $a$ is just slightly larger than 1—for example, consider $a = 1.00001$—and suppose $b$ is very, very large (say, $b = 1{,}000{,}000$). The exponential function $f(x) = 1.00001^x$ grows very slowly, at least for small values of $x$, but the Archimedean property assures us that eventually it will catch up with and pass $b$. And indeed, no matter how large $b$ is, and no matter how close to 1 we choose $a$ to be, for sufficiently large $n$ we will eventually have $a^n > b$.

Proof of Lemma. Write $a = 1 + t$ where $t$ is a (typically small) number. Then by the Binomial Theorem (Exercise 35),

$$a^n = (1 + t)^n = 1 + nt + \frac{n(n-1)}{2}t^2 + \cdots + t^n$$
In this sum, all of the terms are positive, and therefore $a^n > 1 + nt$. Now by the Archimedean Property of $\mathbb{R}$, we can find an $n$ large enough that $nt > b$; it follows that for the same value of $n$, $a^n > b$.

Equipped with this lemma, we return to the proof of the Approximation Theorem for Exponential Equations.

Proof of Theorem (continued). Still assuming that $b > 1$, and $b^c < a$, we know by the Archimedean Property of Exponential Functions that there exists $n$ such that $(a/b^c)^n > b$, which in turn implies $b^{1/n} < a/b^c$. Then multiplying both sides by $b^c$ we have $b^c b^{1/n} < a$. If we write $c' = c + \frac{1}{n}$ then we conclude $b^c < b^{c'} < a$, as desired.

Next, suppose still that $b > 1$, but this time assume that $c$ is an overestimate, i.e. $b^c > a$. We claim that there exists a small positive $u$ such that $a < b^{c-u} < b^c$; in that case, $c' = c - u$ would be an improved overestimate. This is equivalent to $a/b^c < b^{-u}$, which in turn is the same as $b^c/a > b^u$. Once again we claim that we may take $u = 1/n$ for some sufficiently large $n$. For then we need to prove that there exists an $n$ such that $(b^c/a)^n > b$. But this also follows from the lemma just proved! So both rational overestimates and underestimates can be improved. The case where $0 < b < 1$ is left for Exercise 36.

As we did after proving the Approximation Theorem for nth roots, we illustrate how the Approximation Theorem for exponential equations can be used with a numerical example. Let's return to our previous example, in which we had $\frac{8}{3}$ as an approximate solution (in fact, an overestimate) to the equation $4^x = 40$. Following the argument of the Approximation Theorem, we seek a value of $n$ such that $\left(\frac{4^{8/3}}{40}\right)^n > 4$. Writing $t = \frac{4^{8/3}}{40} - 1$, we know that $\left(\frac{4^{8/3}}{40}\right)^n > 1 + nt$, so we choose $n > 4/t$. With a little bit of algebra, this is equivalent to

$$n > \frac{4 \cdot 40}{4^{8/3} - 40} \approx 504$$

(Exercises 37, 38). If we choose any integer $n$ larger than this value, then $\frac{8}{3} - \frac{1}{n}$ should be a better approximate solution; so, for example, $\frac{8}{3} - \frac{1}{600} = \frac{1599}{600}$ is an improved overestimate (Exercise 39).

Of course we don't really want an approximate solution; we want to prove that there is an exact solution to $4^x = 40$, even if we can't find it exactly. In some ways this question is not unlike the problem of solving the equation $x^2 = 2$, which (as we know) also has no rational solution. Just as in that case, the solution to our problem relies in an essential way on the fact that $\mathbb{R}$ is a complete ordered field. We have the following theorem:

Theorem (Existence and Uniqueness of Solutions of exponential equations). For any positive base $b$ and any positive real number $a$, the equation $b^x = a$ has a unique solution.
Proof. We construct the set $S = \{q \in \mathbb{Q} \mid b^q < a\}$. This set is easily verified to be a downward-closed open bounded rational subset (a DCOBRS). As such it corresponds to a single real number $s$. More precisely, let $s$ be the least upper bound of $S$. We claim that $b^s = a$. This is because the other two possibilities ($b^s > a$ and $b^s < a$) each lead to contradictions (see Exercise 41). That $s$ is the unique solution follows from the fact that if $b^x = b^s$ then $b^{x-s} = 1$, which is possible if and only if $x - s = 0$, i.e. $x = s$.

Now that we have established that the equation $b^x = a$ has a unique solution, we are ready to define logarithms:

Definition. For any positive real numbers $a, b$, the logarithm base $b$ of $a$, denoted $\log_b(a)$, is the unique solution to the equation $b^x = a$.

Proposition. The functions $f : \mathbb{R} \to \mathbb{R}^+$ given by $f(x) = b^x$ and $g : \mathbb{R}^+ \to \mathbb{R}$ given by $g(x) = \log_b(x)$ are inverse functions. That is, $f \circ g$ is the identity function $\mathbb{R}^+ \to \mathbb{R}^+$ and $g \circ f$ is the identity function $\mathbb{R} \to \mathbb{R}$.
Proof. Exercise 42.

After all of the hard work of the previous section, proving the properties of logarithms is relatively straightforward.

Theorem (Properties of Logarithms). For any positive base $b$:
(a) For any positive $r, s \in \mathbb{R}$, $\log_b(rs) = \log_b(r) + \log_b(s)$.
(b) For any positive $r \in \mathbb{R}$ and any $k \in \mathbb{R}$ we have $\log_b(r^k) = k \cdot \log_b(r)$.

Proof. (a) Write $x = \log_b(r)$ and $y = \log_b(s)$. Then by definition, $r = b^x$ and $s = b^y$. Therefore, $rs = b^x b^y = b^{x+y}$, which shows that $\log_b(rs) = x + y = \log_b(r) + \log_b(s)$.
(b) Write $x = \log_b(r)$. Then by definition $r = b^x$, so $r^k = (b^x)^k = b^{kx}$. This shows that $\log_b(r^k) = kx = k \cdot \log_b(r)$.

As before, we can restate our work as follows:

Corollary. For any positive real number $b$, the function $g : \mathbb{R}^+ \to \mathbb{R}$ defined by $g(x) = \log_b(x)$ is a group homomorphism from the multiplicative group of positive reals $(\mathbb{R}^+, \times)$ to the additive group of real numbers $(\mathbb{R}, +)$.
Proof. The three properties that need to be verified are that $g(1) = 0$, $g(xy) = g(x) + g(y)$, and $g(x^{-1}) = -g(x)$, all of which have been established above.
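The existence proof is non-constructive, but the completeness idea behind it suggests a practical procedure: repeatedly halve an interval whose endpoints undershoot and overshoot $a$. The sketch below is my own illustration (the function name and starting bounds are my choices); it locates $\log_4(40)$ starting from the bounds $5/2$ and $3$ identified earlier.

```python
def log_by_bisection(b, a, lo, hi, iterations=60):
    """Approximate log_b(a) by bisection, assuming b > 1 and b**lo < a < b**hi."""
    for _ in range(iterations):
        mid = (lo + hi) / 2
        if b ** mid < a:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

x = log_by_bisection(4, 40, 2.5, 3)     # start from the bounds 5/2 and 3 found earlier
print(x, 4 ** x)                        # x is about 2.66096, and 4**x is essentially 40
print(1599 / 600, 4 ** (1599 / 600))    # the overestimate from the text: about 40.22
```

For comparison, the improved overestimate $1599/600 = 2.665$ obtained above is already within about $0.004$ of the bisection value.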
Exercises

32. Explain why the prime factorization of $4^m$ is $2^{2m}$ and the prime factorization of $40^n$ is $2^{3n} 5^n$.
33. Show that the equation $b^x = a$ has rational solutions if and only if $a$ and $b$ have the same set of prime factors, and the multiplicities of those factors are in a fixed ratio.
34. Use a table of logarithms and the methods of §5.1 to verify that $4^{8/3}$ is slightly larger than 40.
35. The Binomial Theorem is usually taught in high school Algebra 2 textbooks, typically in the context of a section on induction proofs. Prove it.
36. Complete the proof of the Approximation Theorem for Exponential Equations by treating the case $0 < b < 1$.
37. Show that with $t = \frac{4^{8/3}}{40} - 1$, the condition $nt > 4$ is equivalent to $n > \frac{4 \cdot 40}{4^{8/3} - 40}$.
38. Without using a calculator (for example, using a table of logarithms and the methods of §5.1) show that $\frac{4 \cdot 40}{4^{8/3} - 40} \approx 504$.
39. Without using a calculator (for example, by using a table of logarithms and the methods of §5.1), calculate the value of $4^{1599/600}$ and confirm that it is approximately equal to 40.
40. Consider the exponential equation $12^x = 150$ and the approximate solution $x = 2$, an underestimate. Use the methods described in the Approximation Theorem to find an improved underestimate.
41. Complete the proof of the Existence of Solutions theorem by showing that both $b^s > a$ and $b^s < a$ lead to contradictions. (Hint: mimic the proof of the Existence of Roots theorem in §5.2.)
42. Show that $f(x) = b^x$ and $g(x) = \log_b(x)$ are inverse functions.

You may have noticed, and perhaps been puzzled by, the notable absence throughout this entire discussion of the transcendental number e. It is a perhaps remarkable fact that, from the point of view we have adopted so far, there is quite literally nothing special at all about this particular base. If all we care about are the algebraic properties of exponential and logarithmic functions (i.e. the fact that they are a pair of isomorphisms between $\mathbb{R}$ and $\mathbb{R}^+$) then any base at all will do14. This phenomenon—if the absence of any reason to pay attention to e can be called a "phenomenon"—is in large part a consequence of a choice we made (without much fanfare or reflection) back at the beginning of §5.2; namely, we began by defining and studying the properties of exponential functions first, and introducing logarithmic functions second. The "exponentials first" approach is the standard one in secondary curricula, for obvious reasons: conceptually, the idea of exponentiation as "repeated multiplication" is a natural extension of the idea of multiplication as "repeated addition", and the extension to rational exponents is a natural extension of the prior idea of "root extraction".

There is, however, another approach to the study of exponential and logarithmic functions, one that turns the subject on its head by beginning with logarithms. This "logarithms first" approach is rarely, if ever, encountered in contemporary secondary textbooks; however, it was not always so. In fact, the "logarithms first" approach was one of the innovations pioneered by the SMSG in the 1960s.
Writing in the Teacher's Commentary for Intermediate Mathematics, the SMSG's authors wrote:

The treatment of logarithms and exponents presented here is completely different from the one which has been taught in high school in the past. The traditional treatment has started with the theory of exponents from which in turn the theory of logarithms was derived. The present treatment begins with the theory of logarithms and derives from it the theory of exponential functions and the theory of exponents… The first thing for the teacher to realize is that the definition and treatment of logarithms given in this chapter are completely different from those which have been given in high school in the past. The teacher should observe that general exponents do not enter in this chapter until Section 9-8, where a complete treatment is provided. The teacher must be prepared for a new approach to an old and familiar subject.

The teacher will find the definition of y = log x new and strange and will undoubtedly ask why it has been given in preference to the traditional definition in terms of exponents. There are several reasons for choosing the new definition. First, it is exceptionally difficult to present a satisfactory treatment of exponents, and the usual high school courses in mathematics give only a small fragment of the theory. What is the meaning of $3^{\sqrt{2}}$, $10^{\pi}$, …, and how do we prove that the usual laws of exponents hold for rational and irrational exponents? It is not possible to give satisfactory answers to these questions in the usual treatment of exponents. If logarithms are defined in terms of exponents, the theory of logarithms is left in unsatisfactory condition also. The definition of logarithms used in this course places the theory of logarithms on a solid foundation. Furthermore, the definition of y = log x used here enables us to give a satisfactory treatment of exponents also, but it comes after the treatment of logarithms…

… The method used here makes it possible to define and treat all of the logarithm functions simultaneously. The common logarithm function and the natural logarithm function are only two special cases of the general logarithm function. [In addition], the treatment given here makes it possible to define the number e in a simple and concrete fashion. The definition does not include any mysterious limits. (pp. 546–548).

Below, we briefly summarize the SMSG's approach, quoting liberally from the text and presenting all major results and definitions but omitting most of the proofs. Interested readers are referred to both the Student's Text and Teacher's Commentary of SMSG Intermediate Mathematics (Part II) for more details15.

The SMSG approach begins by considering the graph of the function $y = k/x$ for an arbitrary positive parameter $k$ and for $x > 0$. The text shades the portion of the plane between this function and the x-axis, bounded by vertical lines at $x = 1$ and at an arbitrary second value of $x$ (see Figure 5.4 below). The text then observes that "there is no simple formula that gives the area of the shaded region; however, the shaded region will be used to define a function". More precisely, Intermediate Mathematics presents the following definition, which we quote verbatim:

Definition (SMSG Def. 9-1, p. 455). The logarithm function is defined for all $x > 0$ by the following correspondence between $x$ and $y$.
(a) For each $x > 1$, the corresponding value of $y$ is the area of the region bounded by the x-axis, the hyperbola $y = k/x$, and the vertical lines at 1 and $x$.
(b) For $x = 1$, the value of $y$ is 0.
(c) For each $x$ such that $0 < x < 1$, the corresponding value of $y$ is the negative of the area of the region bounded by the x-axis, the hyperbola $y = k/x$, and the vertical lines at $x$ and 1.
$$\log x > 0 \quad \text{for } x > 1, \qquad \log x < 0 \quad \text{for } 0 < x < 1$$
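The SMSG definition is easy to explore numerically. The sketch below is my own (not from either SMSG text); it approximates the signed area in the definition with a midpoint Riemann sum, taking $k = 1$, in which case the resulting function agrees with the natural logarithm.

```python
import math

def smsg_log(x, k=1.0, steps=100_000):
    """Signed area between the hyperbola y = k/t and the t-axis, from 1 to x."""
    if x == 1:
        return 0.0
    lo, hi, sign = (1.0, x, 1.0) if x > 1 else (x, 1.0, -1.0)
    width = (hi - lo) / steps
    area = sum(k / (lo + (i + 0.5) * width) for i in range(steps)) * width
    return sign * area

for x in [0.5, 1, 2, 10]:
    print(x, smsg_log(x), math.log(x))   # the two columns agree closely
```

Different choices of $k$ simply rescale the area, which is presumably how the common and natural logarithm functions arise as special cases in the SMSG treatment.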