Lectures on Geometry [1 ed.] 9783031514135, 9783031514142

This is an introductory textbook on geometry (affine, Euclidean and projective) suitable for any undergraduate or first-


English Pages 490 [493] Year 2024



Table of contents:
Preface
Contents
1 Linear Algebra
1.1 Introductory Notions
1.1.1 Matrices and Linear Systems
1.2 Vector Spaces
1.2.1 Vector Subspaces, Generators, Linearly Independent Sets, Bases and Dimension of a Vector Space
1.3 Linear Maps
1.3.1 The Dual Vector Space
1.3.2 Direct Sum of Vector Subspaces
1.3.3 Eigenvalues and Eigenvectors
1.4 Euclidean Vector Spaces
1.4.1 Orthogonal Automorphisms
1.4.2 Self-Adjoint Operators
1.5 Exercises
2 Bilinear and Quadratic Forms
2.1 Bilinear Maps
2.1.1 Diagonalization of Quadratic Forms
2.2 Cross-Product
2.3 Exercises
3 Affine Spaces
3.1 Affine Spaces
3.1.1 The Affine Ratio and Menelaus' Theorem
3.1.2 The Affine Subspaces of An(K)
3.2 Affine Morphisms
3.2.1 Dimension Theorem
3.2.2 Projections and Symmetries
3.2.3 Thales' and Ceva's Theorems
3.2.4 Real Affine Spaces and Convex Sets
3.3 Exercises
4 Euclidean Spaces
4.1 Euclidean Affine Spaces
4.1.1 Orthogonality
4.2 Orthogonal Affine Morphisms
4.2.1 Structure of Isometries
4.3 Exercises
5 Affine Hyperquadrics
5.1 Affine Hypersurfaces
5.1.1 Tangent Hyperplanes and Multiple Points
5.2 Affine Hyperquadrics
5.2.1 The Reduced Equation of a Hyperquadric of An(K)
5.2.2 Euclidean Classification of the Real Hyperquadrics
5.3 Real Conics
5.4 Real Quadrics
5.5 Exercises
6 Projective Spaces
6.1 Some Elementary Synthetic Projective Geometry
6.1.1 General Projective Spaces
6.2 The Projective Space Associated with a K-Vector Space
6.2.1 Projective Subspaces of the Standard Projective Space
6.3 Dual Projective Space and Projective Duality
6.4 Exercises
7 Desargues' Axiom
7.1 Exercises
8 General Linear Projective Automorphisms
8.1 Projective Homotheties and Translations
8.1.1 Desargues' Axiom, Pappus' Axiom and the Division Ring of the Coordinates
8.2 The Group of Projective Automorphisms of Pn(K)
8.2.1 Geometric Characterization of the Field K
8.3 Exercises
9 Affine Geometry and Projective Geometry
9.1 Affine Space Structure on the Complement of a Hyperplane of Pn(K)
9.2 Projective Closure of an Affine Subspace
9.2.1 Projective Automorphisms and Affine Automorphisms
9.3 Cross-Ratio (or Anharmonic Ratio) of Four Collinear Points
9.3.1 Harmonic Ratio
9.3.2 Involutions of P1(K)
9.3.3 Homographies of the Complex Affine Line A1(C)
9.4 Exercises
10 Projective Hyperquadrics
10.1 Projective Hypersurfaces
10.1.1 Smooth Points and Tangent Hyperplanes
10.2 Projective Hyperquadrics
10.2.1 Reduced Equations of Projective Hyperquadrics
10.3 Polarity with Respect to a Hyperquadric
10.3.1 Affine and Euclidean Geometry in a Projective Setting
10.4 Exercises
11 Bézout's Theorem for Curves of P2(K)
11.1 Proof of a Simple Case of Weak Bézout's Theorem
11.1.1 Two Applications of Bézout's Theorem
11.2 The General Bézout's Theorem and Further Applications
11.3 The Resultant of Two Polynomials
11.4 Intersection Multiplicity of Two Curves
11.5 Applications of Bézout's Theorem
11.5.1 Points of Inflection
11.5.2 Legendre Form of Cubics
11.5.3 Max Noether's Theorem
11.5.4 Conics Passing Through a Finite Number of Points
11.6 Exercises
12 Absolute Plane Geometry
12.1 Elements of Absolute Plane Geometry
12.1.1 Angles in an Absolute Plane
12.1.2 Triangles
12.2 The Poincaré Hyperbolic Plane
12.3 Exercises
13 Cayley–Klein Geometries
13.1 Euclidean Metric from a Projective Point of View
13.2 Projective Metrics on P1(R)
13.3 Projective Metrics of P2(R) and P3(R)
13.3.1 Non-degenerate Absolute
13.3.2 Non-ruled Quadric (i.e. i(F) = 2), F = x0² + x1² + x2² − x3². We Get Four Actual Spaces
13.3.3 Ruled Quadric (i.e. i(F) = 1), F = x0² + x1² − x2² − x3². We Get Three Actual Spaces on X = P3(R) \ V+(F)
13.3.4 Degenerate Absolute
13.4 General Absolutes
13.5 Exercises
References
Index

UNITEXT 158

Lucian Bădescu · Ettore Carletti

Lectures on Geometry

UNITEXT

La Matematica per il 3+2 Volume 158

Editor-in-Chief
Alfio Quarteroni, Politecnico di Milano, Milan, Italy; École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland

Series Editors
Luigi Ambrosio, Scuola Normale Superiore, Pisa, Italy
Paolo Biscari, Politecnico di Milano, Milan, Italy
Ciro Ciliberto, Università di Roma “Tor Vergata”, Rome, Italy
Camillo De Lellis, Institute for Advanced Study, Princeton, NJ, USA
Victor Panaretos, Institute of Mathematics, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
Lorenzo Rosasco, DIBRIS, Università degli Studi di Genova, Genova, Italy; Center for Brains Mind and Machines, Massachusetts Institute of Technology, Cambridge, Massachusetts, US; Istituto Italiano di Tecnologia, Genova, Italy

The UNITEXT - La Matematica per il 3+2 series is designed for undergraduate and graduate academic courses, and also includes books addressed to PhD students in mathematics, presented at a sufficiently general and advanced level so that the student or scholar interested in a more specific theme would get the necessary background to explore it. Originally released in Italian, the series now publishes textbooks in English addressed to students in mathematics worldwide. Some of the most successful books in the series have evolved through several editions, adapting to the evolution of teaching curricula. Submissions must include at least 3 sample chapters, a table of contents, and a preface outlining the aims and scope of the book, how the book fits in with the current literature, and which courses the book is suitable for. For any further information, please contact the Editor at Springer: [email protected] THE SERIES IS INDEXED IN SCOPUS *** UNITEXT is glad to announce a new series of free webinars and interviews handled by the Board members, who rotate in order to interview top experts in their field. Access this link to subscribe to the events: https://cassyni.com/s/springer-unitext

Lucian Bădescu • Ettore Carletti

Lectures on Geometry

Lucian Bădescu, Department of Mathematics, University of Genoa, Genoa, Italy

Ettore Carletti Department of Mathematics University of Genoa (retired since 2019) Genoa, Italy

ISSN 2038-5714 ISSN 2532-3318 (electronic) UNITEXT ISSN 2038-5722 ISSN 2038-5757 (electronic) La Matematica per il 3+2 ISBN 978-3-031-51413-5 ISBN 978-3-031-51414-2 (eBook) https://doi.org/10.1007/978-3-031-51414-2 © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2024 This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. This Springer imprint is published by the registered company Springer Nature Switzerland AG The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland Paper in this product is recyclable.

To my family
Miruna and Ciprian
Matei and Eliza
Cosmin and Alexandra
Cornel and Iuliana
Mihaela
Lucian Bădescu

To my nephews
Francesco and Anna
Beatrice and Chiara
Ettore Carletti

Preface

Μηδεὶς ἀγεωμέτρητος εἰσίτω.

The aim of this book is to offer some fundamentals of the affine, euclidean and projective geometries which are indispensable for any course in mathematics and, in general, useful for students of scientific faculties. We are also strongly convinced that a comparative study of these geometries is essential for a true understanding of their concepts and methods. Nevertheless, one can use the first five chapters as a course in affine and euclidean geometry independently of the remaining eight chapters. As a companion to this book, we recommend [12], which is a very good collection of exercises in projective geometry.

The first ten chapters give an outline of affine, euclidean and projective geometry somewhat broader than is usually required by academic programs. Base fields have been chosen arbitrarily (not only R and C), unless otherwise required, in order to offer a more complete point of view. We have followed an axiomatic-synthetic approach as well as an analytic one, which permits a deeper comprehension and a more essential and elegant exposition of various subjects.

The first chapter is devoted to explaining those concepts and results from linear algebra which are absolutely necessary for the following chapters. In Chap. 2, we study symmetric bilinear forms defined on vector spaces over arbitrary fields of characteristic ≠ 2. Sylvester’s theorem on real and complex symmetric bilinear forms is the core of this chapter. This important result is essential for classifying real and complex hyperquadrics. We conclude this chapter by introducing Gram determinants and cross-products, which are indispensable tools in euclidean geometry of dimension ≥ 3.

Chapter 3 is devoted to affine spaces of finite dimension over an arbitrary field, presented from a synthetic as well as an analytic point of view. Barycentric coordinates play a central role in this exposition; the classical theorems of Menelaus, Ceva and Thales are proved by means of them.
A careful treatment of affine morphisms ends with the structure theorem for the group of automorphisms AutK(A) of an affine space A (Proposition 3.57).

In Chap. 4, some euclidean geometry in dimension n ≥ 2 is presented (also using the cross-product defined in Chap. 2). The core of this chapter is the structure theorem for the group of isometries O(A, (V, 〈·,·〉V), ϕ) of a euclidean space (A, (V, 〈·,·〉V), ϕ) (Theorem 4.35).

After a brief introduction to the affine hypersurfaces of An(K), Chap. 5 is devoted to the study of hyperquadrics of An(K), where K is an arbitrary field of characteristic ≠ 2. A reduced form for hyperquadrics is given by Theorem 5.37, followed by the euclidean classification of the hyperquadrics of En(R). At the end of this chapter, an affine version of Bézout’s theorem is proved and used to prove the affine version of Pascal’s theorem.

Chapters 6, 7 and 8 are mainly devoted to the study of projective spaces associated with a vector space over an arbitrary field. Some synthetic projective geometry is developed in order to introduce the crucial Desargues’ axiom (the entire Chap. 7) and Pappus’ theorem (Chap. 8). The group Aut(Pn(K)) of projective automorphisms of Pn(K), and in particular the structure of the projective general linear group PGLn(K), are treated in Chap. 8. At the end of Chap. 8, we give a geometric characterization of the base field K by means of two classes of general linear automorphisms: the projective translations and the projective homotheties. They are used in Chap. 9 for a synthetic presentation of the cross-ratio.

In the first part of Chap. 9, the basic links between affine geometry and projective geometry are outlined. Then the fundamental concept of cross-ratio is introduced from a synthetic and an analytic point of view. At the end of this chapter we prove a classical theorem of von Staudt that highlights the importance of the harmonic cross-ratio.

Projective hypersurfaces of Pn(K) are briefly introduced in Chap.
10, which is mainly concerned with the projective hyperquadrics of Pn(K), where char(K) ≠ 2. The projective classification of hyperquadrics is given by Theorems 10.53, 10.55 and 10.63 for an arbitrary field K (char(K) ≠ 2), for the complex field C and for the real field R respectively. Polarity with respect to a hyperquadric of Pn(K) is a main theme of this chapter. The projective point of view on various concepts of the affine and euclidean theory of hyperquadrics (in particular conics and quadrics) is shown to be very fruitful through various examples, also in the exercise sections.

Bézout’s theorem for projective curves of P2(K), where K is an arbitrary algebraically closed field, is proved in a particular case and in its general form in Chap. 11. We have adopted the axiomatic definition of the intersection multiplicity of two curves given in [19] in order to keep the necessary commutative algebra to a minimum. Several important geometric applications, such as Pascal’s and Brianchon’s theorems, the inflection points of a curve of degree ≥ 3, the Legendre form of cubic curves, and linear systems of conics passing through a finite number of points, are presented in detail.

Absolute planes are axiomatically introduced and studied in Chap. 12. An important example is the Poincaré hyperbolic plane, which is constructed at the end of that chapter. The “projective nature” of this geometry is proved by the relation (12.13), which expresses the hyperbolic distance by means of a cross-ratio. This link between projective geometry and non-euclidean geometries is the central theme of Chap. 13, which is concerned with the so-called Cayley–Klein geometries. Euclidean, hyperbolic and elliptic geometries are Cayley–Klein geometries, but other geometries, like Minkowskian geometry, can also be “projectively” generated. We give some details on all projective geometries of P1(R) and of P2(R), and also a brief account of some geometries of P3(R). A generalization of these geometries to Pn(R) according to [37] is mentioned at the end of this chapter. This exposition is of course far from complete, but its aim is to stir the reader’s interest in this topic. Classical references on Cayley–Klein geometries are [10], [20] and [33], but we would also like to mention [9], [27], [36], [35] and [37].

We wish to thank our friend Giacomo Monti Bragadin for his contributions to various chapters of this book. We would like to express our heartfelt thanks to our friend Antonio Lanteri, who read our manuscript and helped us to eliminate many misprints and mistakes and to improve the exposition. We also wish to thank Mrs. Barbara Ionescu for helping us to solve some LaTeX problems. Thanks are due to Dr. Francesca Bonadei of Springer Italia for her encouragement and support.

The second author wishes to pay a heartfelt tribute to the memory of Lucian Bădescu, dearest friend and master.

Genoa, Italy
April 2023

L. Bădescu
E. Carletti


Chapter 1

Linear Algebra

1.1 Introductory Notions

Notation i.e. (from the Latin id est) means “namely”; e.g. (from the Latin exempli gratia) means “for instance”. We will denote the sets of the natural numbers {1, 2, . . .}, the integers, the rationals, the reals and the complex numbers by N, Z, Q, R and C respectively. Let X be a non-empty set. The cartesian product X × · · · × X (n times) will be denoted by Xⁿ. We shall denote the set of all subsets of a set X (its power set) by P(X), and the cardinal number of X (i.e. “the number of elements” of X) by #(X). We shall not be concerned with cardinal arithmetic; we only recall some basic definitions. A set X is finite and #(X) = n if there is a bijective map X → {1, . . . , n} for some natural number n. The empty set ∅ is also considered finite, with #(∅) = 0. A set that is not finite will be called infinite. We simply write #(X) = ∞ to say that X is infinite and #(X) < ∞ when X is finite. A set X is countable and #(X) = ℵ0 if there is a bijective map X → Z. An infinite set is uncountable if it is not countable; e.g. R and C are uncountable.

Definition 1.1 (Equivalence Relations) Let X be a non-empty set. A non-empty subset R ⊂ X × X is called a binary relation in X. If x, y ∈ X and (x, y) ∈ R, it is customary to write xRy and say that x is in the relation R to y. For instance R := {(x, x) | x ∈ X} (the diagonal of X × X) is the relation of equality in X. A binary relation R in a set X is an equivalence relation if the following conditions are satisfied:

(i) xRx for all x ∈ X (Reflexivity).
(ii) xRy implies yRx for all x, y ∈ X (Symmetry).
(iii) xRy and yRz imply xRz for all x, y, z ∈ X (Transitivity).

If R is an equivalence relation and xRy, we will say that x and y are equivalent. The equivalence class or coset of an element x ∈ X is the subset x̂ := {y ∈ X | xRy}.
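As a quick illustration (ours, not the book’s; Python is our choice of language and all names below are hypothetical), the three conditions of Definition 1.1 can be checked mechanically for a relation given as a finite set of pairs:

```python
# A binary relation R on a finite set X, represented as a set of
# ordered pairs R ⊂ X × X, as in Definition 1.1.

def is_equivalence(X, R):
    """Check reflexivity, symmetry and transitivity of R on X."""
    reflexive = all((x, x) in R for x in X)
    symmetric = all((y, x) in R for (x, y) in R)
    transitive = all((x, z) in R
                     for (x, y) in R
                     for (w, z) in R if w == y)
    return reflexive and symmetric and transitive

X = set(range(9))
# Congruence modulo 3, written out as an explicit subset of X × X.
R3 = {(x, y) for x in X for y in X if (x - y) % 3 == 0}
# The diagonal (the relation of equality) from Definition 1.1.
diag = {(x, x) for x in X}

print(is_equivalence(X, R3))    # True
print(is_equivalence(X, diag))  # True
```

Removing the pair (0, 0) from diag breaks reflexivity, and is_equivalence then returns False.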


Equivalence classes determine the equivalence relation uniquely, and they satisfy the following properties:

(i) x ∈ x̂ for every x ∈ X.
(ii) X = ⋃_{x∈X} x̂.
(iii) For every x, y ∈ X, x̂ ∩ ŷ ≠ ∅ if and only if x̂ = ŷ, i.e. if and only if xRy.

Thus the collection of distinct equivalence classes is a partition of X. Conversely, to any partition {Ui : i ∈ I} of a set X we can associate the equivalence relation in X

xRy ⟺ x, y ∈ Ui for some index i ∈ I.

The set of equivalence classes X/R := {x̂ | x ∈ X} is called the quotient set of X under the relation R (or of X mod R). The well-defined map π : X → X/R, π(x) = x̂, is called the quotient map. A complete system of representatives of X mod R is a family {xi}i∈I of elements of X such that:

(i) For each (i, j) ∈ I × I, xi R xj if and only if i = j.
(ii) X = ⋃_{i∈I} x̂i.
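Continuing the illustration (again our own sketch with hypothetical names, not taken from the book), the quotient set X/R and a complete system of representatives can be computed for a finite X, and the partition properties verified:

```python
# Building the quotient set X/R for a finite set X, where the relation
# is given as a predicate related(x, y).

def equivalence_class(X, related, x):
    """The class of x: all y in X with x R y."""
    return frozenset(y for y in X if related(x, y))

def quotient_set(X, related):
    """X/R as a set of (frozen) equivalence classes."""
    return {equivalence_class(X, related, x) for x in X}

X = set(range(12))
related = lambda x, y: (x - y) % 4 == 0    # congruence modulo 4

classes = quotient_set(X, related)
# Distinct classes are pairwise disjoint and cover X: a partition.
assert sum(len(c) for c in classes) == len(X)
assert set().union(*classes) == X

# Picking one element per class gives a complete system of representatives.
reps = sorted(min(c) for c in classes)
print(reps)    # [0, 1, 2, 3]
```

The two asserts are exactly the partition properties: the classes are pairwise disjoint (so their sizes add up to #(X)) and their union is X.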

An equivalence relation R is most frequently denoted by ∼, so we will write x ∼ y instead of xRy and X/∼ instead of X/R.

Examples 1.2
(1) Given a natural number n ≥ 2 and two integers a and b, we say that a is congruent to b modulo n, and we write

a ≡ b (mod n),    (1.1)

if a − b = kn for some k ∈ Z. The congruence (1.1) is the equivalence relation Rn in Z defined by

Rn = {(x, y) ∈ Z × Z | x − y ∈ nZ}.

The set {0, 1, . . . , n − 1} is a complete system of representatives of Z mod Rn. We will denote the quotient set Z/Rn by Zn.
(2) It is easy to see that R = {(x, y) ∈ Q × Q | x − y ∈ Z} is an equivalence relation in Q. The set {x ∈ Q | 0 ≤ x < 1} is a complete system of representatives of Q mod R.

The next two subsections are devoted to recalling some basic definitions and results concerning groups, rings and fields. Very good references are [6], [7], [22] and [39].

Groups

A binary operation in a non-empty set G is a map G × G → G. It is customary to denote the image of the pair (g, h) ∈ G × G by g · h (or simply gh), “the

multiplicative notation”, or g + h, “the additive notation”. We will write (G, ·) or (G, +) to show our choice.

Definition 1.3 (Groups) A non-empty set G is called a group if there is a binary operation · in G such that the following conditions are satisfied:

1. If g, h, k ∈ G, then (gh)k = g(hk) (associative law).
2. There is u ∈ G such that ug = gu = g for every g ∈ G. The element u is unique and is called the identity of G, denoted by 1 or 1G. Other names are neutral element, unity and zero (denoted by 0 or 0G when the operation is written additively).
3. For every g ∈ G there is xg ∈ G such that gxg = xg g = 1. The element xg, which is unique, is called the inverse of g and is denoted by g⁻¹ (in additive notation it is the opposite of g, denoted by −g).

We define the nth power of g ∈ G for n ∈ Z as follows:

gⁿ = g · · · g (n times) if n > 0,   gⁿ = g⁻¹ · · · g⁻¹ (−n times) if n < 0,   g⁰ = 1G.

Therefore one has g n+m = g n g m and g nm = (g n )m for n, m ∈ Z. The Reader can formulate an analogue definition for additive groups. We define the order or the period of an element g ∈ G as the smallest positive integer n such that g n = 1G if this integer n exists, otherwise we say that g has infinite order. The cardinal number #(G) of G will be called the order of G and denoted also by |G|. The centre of a group G is the subset C(G) consisting of all elements g ∈ G such that gh = hg for all h ∈ G. It is obvious that C(G) is a subgroup of G. A group G is commutative or abelian if C(G) = G i.e. gh = hg for every g, h ∈ G. The sets Z, Q, R and C are abelian groups with respect to the usual sum while Q  {0}, R  {0} and C  {0} are abelian groups with respect to the usual product. If X has more than 2 elements the set of bijective maps X → X is a non-commutative group with respect to the composition of applications. Let G be a group and S a subset of G. We shall say that S generates G (and write G = 〈S〉) if for every g ∈ G, g = s1 · · · sr , where si ∈ S or si−1 ∈ S for i = 1, . . . , r. We shall call cyclic a group G if there exists an element a ∈ G such that every element g ∈ G can be written in the form a n for some n ∈ Z i.e G = 〈{a}〉. A cyclic group is obviously abelian. A non-empty set H ⊂ G is a subgroup of G if gh−1 ∈ H whenever g, h ∈ H . We can associate to every subgroup H two equivalence relations defined by g ∼r g '

.

⇐⇒

g(g ' )−1 ∈ H,

g ∼l g '

⇐⇒

g −1 g ' ∈ H.

The equivalence classes of ∼_r are the sets Hg := {hg : h ∈ H} and they are called right cosets, while gH := {gh : h ∈ H} are the equivalence classes of ∼_l and they are called left cosets. The quotient spaces will be denoted by H\G and G/H respectively. The natural map H\G → G/H, Hg ↦ gH, is bijective, so that #(H\G) = #(G/H). Thus we call #(G/H) the index of H in G and denote it by [G : H]. It can be proved that

|G| = [G : H] |H|.

This is immediate if G is a finite group, i.e. |G| < ∞.

Definition 1.4 A subgroup H is normal if gH = Hg for every g ∈ G. Therefore H\G = G/H and there is a well-defined binary operation in G/H,

gH · g'H = gg'H, ∀ g, g' ∈ G,

so that G/H is a group, called the quotient group of G with respect to H (or also the factor group of G by H). If G = Z and H = nZ := {nm | m ∈ Z}, the additive group G/H will be denoted by Z_n. The centre C(G) is a normal subgroup of G.

Let G and G' be two groups. A map ϕ : G → G' is a homomorphism of groups (or a group-homomorphism) if ϕ(gh) = ϕ(g)ϕ(h) for all g, h ∈ G. The following statements are very easy to prove.

(a) ϕ(1_G) = 1_{G'} and ϕ(g⁻¹) = (ϕ(g))⁻¹ for all g ∈ G.
(b) If H is a subgroup of G then ϕ(H) is a subgroup of G'.
(c) If L is a subgroup of G' then ϕ⁻¹(L) is a subgroup of G; in particular ker(ϕ) := ϕ⁻¹({1_{G'}}) is a normal subgroup of G called the kernel of ϕ. We can deduce at once that ϕ is injective if and only if ker(ϕ) = {1_G}.
(d) If H is a normal subgroup of G, the quotient map π : G → G/H is a homomorphism of groups.
(e) If ϕ : G → G' is a bijective homomorphism of groups, its inverse map ϕ⁻¹ : G' → G is a homomorphism of groups.

A bijective homomorphism of groups is called an isomorphism of groups.

Theorem 1.5 (First Isomorphism Theorem) Let ϕ : G → G' be a homomorphism of groups; then there is a unique homomorphism of groups ψ : G/ker(ϕ) → G' such that ϕ = ψ ∘ π, where π : G → G/ker(ϕ) is the quotient map (i.e. the corresponding triangular diagram is commutative), and ψ : G/ker(ϕ) → ϕ(G) is an isomorphism.

Definition 1.6 (Transformation Groups) Let G be a group and X a non-empty set. We say that G acts on the left on X if there is a map G × X → X which, denoting by g ⊙ x the image of (g, x), satisfies the following conditions:

(i) For every g, g' ∈ G and x ∈ X, g'g ⊙ x = g' ⊙ (g ⊙ x).
(ii) For every x ∈ X, 1_G ⊙ x = x.

For every g ∈ G the map X → X defined by x ↦ g ⊙ x is bijective; thus we call the triple (G, X, ⊙) a transformation group. In a very similar manner an action on the right of a group on a set is defined. There is a bijective correspondence between the left actions and the right actions of a group G on a set X: g ⊙ x ←→ x ⊙ g⁻¹. The set Orb(x) := {g ⊙ x | g ∈ G} is called the orbit of x. For every x ∈ X the subset O(x) := {g ∈ G | g ⊙ x = x} is a subgroup of G which is called the isotropy subgroup (or stabilizer) of x. For any fixed x ∈ X the map f_x : G → Orb(x), defined by f_x(g) = g ⊙ x, is surjective, and f_x(g) = f_x(h) if and only if (h⁻¹g) ⊙ x = x, i.e. h⁻¹g ∈ O(x). Then f_x induces the natural bijective map F_x : G/O(x) → Orb(x), F_x(ĝ) := g ⊙ x, where ĝ := gO(x). Every action of G on X induces an equivalence relation in X:

x ∼ y ⟺ y ∈ Orb(x), ∀ x, y ∈ X.

The equivalence class of x is nothing else than Orb(x), and the quotient space X/∼ is the set of all orbits. It is worthwhile to observe that an action of a group G on a set X induces an action of any subgroup H of G on X.

Rings and Fields

Definition 1.7 (Rings) A non-empty set A in which two binary operations, + (addition) and · (multiplication), are given is called a ring if the following conditions are satisfied:

1. A is an abelian group with respect to addition.
2. If a, b, c ∈ A, then a(bc) = (ab)c (associative law).
3. If a, b, c ∈ A, then a(b + c) = ab + ac and (a + b)c = ac + bc (distributive laws).

A ring A is unitary if there is an element ≠ 0_A, denoted by 1 or 1_A, such that a · 1 = 1 · a = a for every a ∈ A; such an element is unique and is called the identity of A. The set of integers Z is an abelian unitary ring, as well as the "ring of residue classes (mod n)" Z_n. If A is a commutative unitary ring, the set of polynomials in n ≥ 1 indeterminates A[X_1, …, X_n] is a commutative unitary ring with respect to the usual operations on polynomials. We shall see that the set M_n(R) of square matrices of order n ≥ 2 (Definition 1.14) with entries in a commutative unitary ring R is a non-abelian unitary ring.

From Now on, Ring Will be Synonymous with Unitary Ring

The centre of a ring A is the subset C(A) consisting of all elements a ∈ A such that ax = xa for all x ∈ A. It is obvious that C(A) is a subring of A. A ring A is commutative if C(A) = A, i.e. ab = ba for every a, b ∈ A. A subgroup B of a ring A containing 1_A is called a subring of A if xy ∈ B whenever x, y ∈ B. A left-ideal 𝔞 of A is a subgroup of A such that A𝔞 := {xy : x ∈ A, y ∈ 𝔞} ⊂ 𝔞. To define a right-ideal we require 𝔞A ⊂ 𝔞. A two-sided ideal is a subset which is both a left and a right ideal. If A is commutative every ideal is two-sided, thus we call it simply an ideal.

An element a ∈ A is a unit (or invertible) if there exists b ∈ A such that ab = ba = 1; this element b is unique and is denoted by a⁻¹, the inverse of a. The set A* := {a ∈ A : a is invertible} is a group with respect to the product. If A* = A ∖ {0}, A is called a division ring. A commutative division ring is called a field, while a non-commutative division ring is called a skew-field. Familiar examples of fields are Q, R, C and Z_p if p is a prime number.

Let A be a ring and 𝔞 a two-sided ideal of A. Since 𝔞 is a normal subgroup of A, A/𝔞 is an abelian group. Furthermore there is a well-defined product in A/𝔞:

x̂ · ŷ := (xy)^, x, y ∈ A,

where x̂ denotes the class of x. Thus A/𝔞 is a unitary ring called the factor ring of A by 𝔞. If A is commutative, A/𝔞 is commutative.

Let A and B be two rings and ψ : A → B a map. We say that ψ is a homomorphism of rings if it is a homomorphism of additive groups, ψ(aa') = ψ(a)ψ(a') for every a, a' ∈ A, and ψ(1_A) = 1_B. One can easily prove the following statements.

(a) If C is a subring of A then ψ(C) is a subring of B.
(b) If 𝔟 is a left-ideal (right-ideal, subring) of B then ψ⁻¹(𝔟) is a left-ideal (right-ideal, subring) of A.
(c) We call the set ker(ψ) := ψ⁻¹(0_B) the kernel of ψ. The kernel of a homomorphism of rings is a two-sided ideal, and ker(ψ) = {0_A} if and only if ψ is injective.
(d) If 𝔞 is a two-sided ideal of A, the quotient map π : A → A/𝔞 is a homomorphism of rings.
(e) If ϕ : A → B is a bijective homomorphism of rings, its inverse map ϕ⁻¹ : B → A is a homomorphism of rings.

A bijective homomorphism of rings is called an isomorphism of rings.

Theorem 1.8 (First Isomorphism Theorem) Let ϕ : A → B be a homomorphism of rings; then there is a unique homomorphism of rings ψ : A/ker(ϕ) → B such that ϕ = ψ ∘ π, where π : A → A/ker(ϕ) is the quotient map (i.e. the corresponding triangular diagram is commutative), and ψ : A/ker(ϕ) → ϕ(A) is an isomorphism.
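Theorem 1.8 can be made concrete for the homomorphism ψ : Z → Z_6, ψ(a) = a mod 6. The sketch below (illustrative only, checked on a finite window of Z) verifies that ψ is a homomorphism of rings, that its kernel is 6Z, and that the induced map on classes is a bijection onto Z_6:

```python
# A concrete check of the First Isomorphism Theorem for rings with
# psi : Z -> Z_6, psi(a) = a mod 6, tested on the window [-30, 30).
n = 6
psi = lambda a: a % n

window = range(-30, 30)
# psi is a homomorphism of rings (checked on the window)
for a in window:
    for b in window:
        assert psi(a + b) == (psi(a) + psi(b)) % n
        assert psi(a * b) == (psi(a) * psi(b)) % n
assert psi(1) == 1

# ker(psi) = 6Z ...
kernel = [a for a in window if psi(a) == 0]
assert all(a % n == 0 for a in kernel)

# ... and the induced map Z/ker(psi) -> Z_6, a + 6Z -> psi(a), is well
# defined (constant on each class) and bijective onto Z_6
classes = {frozenset(a for a in window if (a - r) % n == 0) for r in range(n)}
induced = {c: psi(next(iter(c))) for c in classes}
for c in classes:
    assert all(psi(a) == induced[c] for a in c)
assert sorted(induced.values()) == list(range(n))
```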

Let 𝔞 be a left-ideal of a ring A. A set S ⊂ 𝔞 generates 𝔞 if every element of 𝔞 can be written in the form

Σ_{i=1}^{r} a_i s_i,  a_i ∈ A, s_i ∈ S, i = 1, …, r.  (1.2)

A left-ideal 𝔞 is principal if it is generated by a single element of A, i.e. there exists s ∈ A such that 𝔞 = As := {as : a ∈ A}. For right-ideals we have to put the elements a_i of A in (1.2) on the right. From elementary mathematics we know that the only ideals of Z are the principal ideals nZ with n ≥ 0.

Up to the End of this Subsection We Suppose that A Is a Commutative Ring

If a ≠ 0 and b ≠ 0 are elements of A, we say that b divides a if there exists c ∈ A such that a = bc; we write b | a. We say also that b is a divisor or a factor of a. A zero-divisor (or 0-divisor) of A is an element a ∈ A such that there is b ∈ A, b ≠ 0, with ab = 0; a 0-divisor a ≠ 0 is called a proper 0-divisor. If A does not contain proper zero-divisors it is called an integral domain or simply a domain. A non-unit element a ∈ A, a ≠ 0, is irreducible if a = bc implies that b or c is a unit; it is prime if from a | bc it follows that a | b or a | c. Every prime element is irreducible, but the converse is not always true. Two elements a and b are associates if a = ub, where u is a unit.

An integral domain A is a unique factorization domain (UFD), or a factorial domain, if for every non-unit a ∈ A, a ≠ 0, there exist uniquely determined irreducible elements p_1, …, p_r such that

a = p_1^{n_1} p_2^{n_2} ⋯ p_r^{n_r},  n_j ∈ N, j = 1, …, r,

where p_i and p_j are not associates if i ≠ j. The above factorization of a is unique up to order and unit factors. The only divisors of a are, up to unit factors, of the form

p_1^{m_1} p_2^{m_2} ⋯ p_r^{m_r},  0 ≤ m_j ≤ n_j, j = 1, …, r.

The ring of rational integers Z is a unique factorization domain by the fundamental theorem of arithmetic, and every field is trivially a unique factorization domain. We observe that in a factorial domain every irreducible element is prime. Two elements a and b of a factorial domain A,

a = p_1^{n_1} p_2^{n_2} ⋯ p_r^{n_r},  b = q_1^{m_1} q_2^{m_2} ⋯ q_s^{m_s},

are coprime if they do not have common irreducible divisors, i.e. if each irreducible factor p_i of a is not associated to any irreducible factor q_j of b.
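In Z these notions are effectively computable; the following sketch (plain trial division, illustration only) produces the factorization a = p_1^{n_1} ⋯ p_r^{n_r} and tests coprimality as the absence of a common irreducible factor:

```python
from math import gcd

# Z is a factorial domain: factor an integer into primes with
# multiplicities, then test coprimality via shared irreducible factors.
def factor(n):
    """Return the exponents {p: n_p} with n = prod p**n_p (n >= 2)."""
    exps = {}
    p = 2
    while p * p <= n:
        while n % p == 0:
            exps[p] = exps.get(p, 0) + 1
            n //= p
        p += 1
    if n > 1:
        exps[n] = exps.get(n, 0) + 1
    return exps

a, b = 360, 77
fa, fb = factor(a), factor(b)

# rebuilding a from its factorization recovers a (360 = 2^3 * 3^2 * 5)
prod = 1
for p, e in fa.items():
    prod *= p ** e
assert prod == a

# coprime  <=>  no shared irreducible factor; in Z this matches gcd = 1
assert set(fa) & set(fb) == set()
assert gcd(a, b) == 1
```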

The following two theorems are crucial for this chapter and for Chap. 10. Their proofs can be found in [39].

Theorem 1.9 (Gauss) If A is a factorial domain, then A[X_1, …, X_n] is also a factorial domain for every n ≥ 1. In other words, every non-constant polynomial F can be factorized, in a unique way up to the order and unit factors, in the form

F = F_1^{m_1} ⋯ F_r^{m_r},  (1.3)

where F_1, …, F_r are pairwise coprime irreducible polynomials and m_i ∈ N, i = 1, …, r. Furthermore, F is homogeneous if and only if F_1, …, F_r are homogeneous polynomials.

If in (1.3) we have m_1 = m_2 = ⋯ = m_r = 1, F is called reduced.

Theorem 1.10 Let A be an integral domain and let f(X), g(X) ∈ A[X] be two polynomials of respective degrees m and n. If a is the leading coefficient of g(X), there exist polynomials q(X), r(X) ∈ A[X] such that

a^k f(X) = q(X) g(X) + r(X),

where k = max(m − n + 1, 0) and r(X) is either of degree less than n or is the zero polynomial. Furthermore, q(X) and r(X) are uniquely determined. In particular, b ∈ A is a root of f(X) if and only if X − b divides f(X). Hence a non-zero polynomial f(X) of degree m can have at most m distinct roots.

Definition 1.11 (Characteristic of a Field) Let K be a field. The map ϕ : Z → K defined by ϕ(n) = n · 1_K, where

n · 1_K := 1_K + ⋯ + 1_K (n times) if n > 0,  n · 1_K := 0_K if n = 0,  n · 1_K := (−1_K) + ⋯ + (−1_K) (−n times) if n < 0,

is a homomorphism of rings. Then Ker(ϕ) is an ideal of Z, say Ker(ϕ) = pZ with p ≥ 0. If p = 0, i.e. ϕ is injective, we say that K has characteristic 0. The homomorphism ϕ : Z → K can then be uniquely continued to a homomorphism ϕ̃ : Q → K defined by

ϕ̃(m/n) = ϕ(m)/ϕ(n),

which is injective. Thus every field K of characteristic 0 contains a subfield H isomorphic to the field of rational numbers Q, and H is the intersection of all subfields of K.

If p > 0, then p is the smallest positive integer such that p · 1_K = 0_K and we say that K has characteristic p. Moreover, p is a prime number: otherwise, if p = mn with 1 < n < p, 1 < m < p, then (m · 1_K)(n · 1_K) = p · 1_K = 0_K, which would imply m · 1_K = 0_K or n · 1_K = 0_K, contradicting the assumption that p is the smallest positive integer such that p · 1_K = 0_K. By Theorem 1.8, K contains a subfield H isomorphic to the field Z_p. This field H is the intersection of all subfields of K. In both cases we denote the characteristic of K by char(K) and we call the smallest field contained in K the prime field of K.

Definition 1.12 A field K is algebraically closed if every polynomial P(X) ∈ K[X] of degree ≥ 1 has at least one root in K.

The fundamental theorem of algebra asserts that the field C of complex numbers is algebraically closed, whereas the field R of real numbers is not algebraically closed because the polynomial X² + 1 has no real roots. Moreover, for every field K there is an algebraically closed field K̄, which is an algebraic extension of K, i.e. every element of K̄ is a root of a polynomial f(X) ∈ K[X]. This field K̄ is unique up to K-isomorphism and it is called the algebraic closure of K (see Chapters V and VI of [22]); for instance the field C is the algebraic closure of R.

Lemma 1.13 Every algebraically closed field is infinite.

Proof Assume that K = {a_1 = 0, a_2 = 1, a_3, …, a_m}. Then the polynomial

P(X) = (X − a_1)(X − a_2) ⋯ (X − a_m) + 1

has no roots in K, since P(a_i) = 1 ≠ 0 for i = 1, 2, …, m, contradicting our hypothesis. □
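Both the characteristic and the polynomial trick of Lemma 1.13 can be checked directly for the finite field Z_5 (a sketch, not part of the text; Z_5 is of course not algebraically closed, which is exactly what the computation shows):

```python
# Characteristic of Z_5 and the polynomial of Lemma 1.13 made concrete.
p = 5
elements = list(range(p))

# char(Z_5) = 5: the smallest positive n with n * 1 = 0 in Z_5
char = next(n for n in range(1, 100) if (n * 1) % p == 0)
assert char == p

# P(X) = (X - a_1)...(X - a_m) + 1 satisfies P(a) = 1 != 0 for every
# a in Z_5, so P has no root in Z_5
def P(x):
    prod = 1
    for a in elements:
        prod = (prod * (x - a)) % p
    return (prod + 1) % p

assert all(P(a) == 1 for a in elements)
```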

1.1.1 Matrices and Linear Systems

In this subsection we recall some basic notions concerning matrices, determinants and linear systems that will be frequently used. We have thought it convenient to consider matrices with entries in a commutative ring (and not only in a field) in view of concepts like the characteristic polynomial of a matrix (Definition 1.116) and the resultant of two polynomials (Chap. 11). As a matter of fact, none of the proofs of the statements presented here change if the entries of a matrix belong to a commutative ring rather than to a field. The reader can find all proofs (for fields) in [14] and [15]. We suggest [6–8] and [24] for advanced treatments of linear algebra over a commutative ring.

Definition 1.14 Let R be a commutative ring with identity 1. A rectangular array

A = ⎛ a_11 a_12 ⋯ a_1n ⎞
    ⎜ a_21 a_22 ⋯ a_2n ⎟
    ⎜  ⋮    ⋮   ⋱   ⋮  ⎟
    ⎝ a_m1 a_m2 ⋯ a_mn ⎠

of mn scalars (i.e. elements of R) is called a matrix of m rows and n columns or, in brief, an m × n-matrix. The scalars a_ij are called the entries. In the following we shall also use the abbreviated form A = (a_ij)_{i=1,…,m; j=1,…,n}, or simply (a_ij). The set of all m × n-matrices with entries in R will be denoted by M_{m,n}(R). An n × n-matrix is called a square matrix. The set of n × n-matrices will be denoted by M_n(R).

A square matrix A = (a_ij)_{i,j} ∈ M_n(R) is

• diagonal if a_ij = 0, ∀ i ≠ j, i, j = 1, …, n;
• upper triangular if a_ij = 0, ∀ i > j, i, j = 1, …, n;
• lower triangular if a_ij = 0, ∀ i < j, i, j = 1, …, n.

We shall denote a diagonal matrix A = (a_ij)_{i,j} ∈ M_n(R) by diag(d_1, …, d_n), where d_i = a_ii, i = 1, …, n. The main diagonal of a square matrix A = (a_ij)_{i,j} ∈ M_n(R) is the list of entries a_11, a_22, …, a_nn.

We define the sum of two matrices A = (a_ij)_{i,j} and B = (b_ij)_{i,j} ∈ M_{m,n}(R) as the matrix

A + B = (a_ij)_{i,j} + (b_ij)_{i,j} := (a_ij + b_ij)_{i,j} ∈ M_{m,n}(R)

and the product of a scalar r ∈ R and a matrix A = (a_ij)_{i,j} ∈ M_{m,n}(R) as the matrix

rA := (r a_ij)_{i,j} ∈ M_{m,n}(R).

The reader can easily prove the following properties, which hold for every A, B, C ∈ M_{m,n}(R) and h, k ∈ R:

(1) A + B = B + A (commutativity)
(2) A + (B + C) = (A + B) + C (associativity)
(3) k(hA) = (kh)A (compatibility)
(4) k(A + B) = kA + kB (distributivity)
(5) (k + h)A = kA + hA (distributivity)
(6) 0_{m,n} + A = A (neutral element)
(7) 1A = A (unitarity)
(8) A + (−A) = 0_{m,n} (opposite element)
(9) 0A = 0_{m,n} (0 ∈ R)
(10) k 0_{m,n} = 0_{m,n}

where 0_{m,n} = (a_ij)_{i,j} with a_ij = 0, 1 ≤ i ≤ m, 1 ≤ j ≤ n, and −A = (−a_ij)_{i,j} if A = (a_ij)_{i,j}. Hence M_{m,n}(R) is an abelian group with respect to the sum (as a matter of fact M_{m,n}(R) is an R-module (see Remark 1.32)).

Interchanging rows and columns of a matrix A = (a_ij)_{i,j} ∈ M_{m,n}(R) we obtain from A the transposed matrix

A^t := ⎛ a_11 a_21 ⋯ a_m1 ⎞
       ⎜ a_12 a_22 ⋯ a_m2 ⎟
       ⎜  ⋮    ⋮   ⋱   ⋮  ⎟
       ⎝ a_1n a_2n ⋯ a_mn ⎠  ∈ M_{n,m}(R).

For all A, B ∈ M_{m,n}(R) and r ∈ R, we have immediately

(A + B)^t = A^t + B^t,  (rA)^t = r A^t.
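The two transposition rules can be verified mechanically; a minimal sketch (illustration only) with matrices over Q represented as nested lists:

```python
from fractions import Fraction

# (A + B)^t = A^t + B^t and (rA)^t = r A^t for 2 x 3 matrices over Q.
A = [[1, 2, 3], [4, 5, 6]]
B = [[0, -1, 1], [2, 2, -3]]
r = Fraction(3, 2)

T = lambda M: [[M[i][j] for i in range(len(M))] for j in range(len(M[0]))]
addM = lambda X, Y: [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]
smul = lambda s, M: [[s * x for x in row] for row in M]

assert T(addM(A, B)) == addM(T(A), T(B))    # (A + B)^t = A^t + B^t
assert T(smul(r, A)) == smul(r, T(A))       # (rA)^t = r A^t
```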

Definition 1.15 The product of the m × n-matrix A = (a_ij)_{i,j} and the n × p-matrix B = (b_jk)_{j,k} is the m × p-matrix (c_ik)_{i,k}, denoted by AB, where

c_ik = Σ_{j=1}^{n} a_ij b_jk,  i = 1, …, m, k = 1, …, p.

Matrix multiplication has the following properties, which hold for every A, B ∈ M_{m,n}(R), C, D ∈ M_{n,p}(R) and k ∈ R:

(1) (A + B)C = AC + BC (distributivity)
(2) A(C + D) = AC + AD (distributivity)
(3) A(kC) = k(AC) = (kA)C (homogeneity)
(4) A I_n = A, I_n C = C (unitarity)

where I_n = (δ_ij)_{i,j} ∈ M_n(R) is the unit matrix. Furthermore, if E ∈ M_{m,n}(R), F ∈ M_{n,p}(R) and G ∈ M_{p,s}(R) we have

(5) (EF)G = E(FG) (associativity)
(6) (EF)^t = F^t E^t.

Hence M_n(R) is a unitary ring (not commutative if n ≥ 2) with respect to the sum and to the product introduced above.

We define the trace of a matrix A ∈ M_n(R) as the scalar tr(A) := Σ_{i=1}^{n} a_ii. The following properties are immediate:

tr(kA) = k tr(A),  tr(A + B) = tr(A) + tr(B),  ∀ k ∈ R, ∀ A, B ∈ M_n(R).  (1.4)

Thanks to (1.4) we can say that tr : M_n(R) → R is a homomorphism of R-modules (see Remark 1.67). Moreover we have

tr(A^t) = tr(A),  tr(AB) = tr(BA),  ∀ A, B ∈ M_n(R).  (1.5)
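The identities (1.4) and (1.5) are easy to test numerically; the sketch below (illustrative only) uses random 3 × 3 matrices with rational entries, i.e. over the commutative ring Q:

```python
from fractions import Fraction
from random import seed, randint

# Checking the trace identities (1.4) and (1.5) on random matrices over Q.
seed(0)
n = 3
rand = lambda: Fraction(randint(-5, 5), randint(1, 5))
A = [[rand() for _ in range(n)] for _ in range(n)]
B = [[rand() for _ in range(n)] for _ in range(n)]

tr = lambda M: sum(M[i][i] for i in range(n))
mul = lambda X, Y: [[sum(X[i][j] * Y[j][k] for j in range(n))
                     for k in range(n)] for i in range(n)]
addM = lambda X, Y: [[X[i][j] + Y[i][j] for j in range(n)] for i in range(n)]
k = Fraction(7, 2)

assert tr([[k * x for x in row] for row in A]) == k * tr(A)      # (1.4)
assert tr(addM(A, B)) == tr(A) + tr(B)                           # (1.4)
At = [[A[j][i] for j in range(n)] for i in range(n)]
assert tr(At) == tr(A)                                           # (1.5)
assert tr(mul(A, B)) == tr(mul(B, A))                            # (1.5)
```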

Definition 1.16 Let X = {1, 2, …, n}. A bijective map p : X → X is called a permutation of X. It is customary to represent a permutation p by a matrix like this:

⎛  1    2   …   n  ⎞
⎝ p(1) p(2) … p(n) ⎠

The set S_n of all permutations of X is a group (with respect to composition of permutations) of order n!. There is a canonical action of Z on X (see Definition 1.6) associated to p ∈ S_n,

Z × X → X,  (n, x) ↦ p^n(x),  (1.6)

where p^n = p ∘ ⋯ ∘ p (n times) if n > 0, p^n = p⁻¹ ∘ ⋯ ∘ p⁻¹ (−n times) if n < 0, and p^0 = id_X.

Let m_p be the number of orbits of the action (1.6). We call the number ε(p) = (−1)^{n−m_p} the signature of the permutation p. For instance, if p is the permutation

⎛ 1 2 3 4 ⎞
⎝ 1 3 2 4 ⎠

then ε(p) = −1 since p has 3 orbits: {1}, {2, 3}, {4}.

Definition 1.17 Let A = (a_ij)_{i,j} ∈ M_n(R); the determinant of A is the scalar defined by

det(A) := Σ_{p∈S_n} ε(p) a_{1p(1)} a_{2p(2)} ⋯ a_{np(n)}.

The determinant of A = (a_ij)_{i,j} is also written in the form

| a_11 … a_1n |
|  ⋮   ⋱   ⋮  |
| a_n1 … a_nn |.
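The orbit count of Definition 1.16 and the permutation sum of Definition 1.17 can both be implemented in a few lines; the following sketch (an illustration, with hypothetical helper names, 0-based indices) computes ε(p) and det(A) exactly as defined:

```python
from itertools import permutations

# The determinant straight from Definition 1.17, with the signature
# eps(p) computed by counting orbits as in Definition 1.16.
def signature(perm):                 # perm maps {0, ..., n-1} to itself
    n, seen, orbits = len(perm), set(), 0
    for start in range(n):
        if start not in seen:
            orbits += 1
            x = start
            while x not in seen:
                seen.add(x)
                x = perm[x]
    return (-1) ** (n - orbits)

def det(A):
    n = len(A)
    total = 0
    for perm in permutations(range(n)):
        term = signature(perm)
        for i in range(n):
            term *= A[i][perm[i]]
        total += term
    return total

# the permutation (1 2 3 4 -> 1 3 2 4) of the text has 3 orbits, eps = -1
assert signature((0, 2, 1, 3)) == -1
# agreement with the closed 2 x 2 formula a11*a22 - a12*a21
assert det([[1, 2], [3, 4]]) == 1 * 4 - 2 * 3
```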

For instance,

| a_11 a_12 |
| a_21 a_22 | = a_11 a_22 − a_12 a_21,

| a_11 a_12 a_13 |
| a_21 a_22 a_23 |
| a_31 a_32 a_33 | = a_11 a_22 a_33 + a_12 a_23 a_31 + a_13 a_21 a_32 − a_13 a_22 a_31 − a_11 a_23 a_32 − a_12 a_21 a_33.

Definition 1.18 Let A = (a_ij)_{i,j} ∈ M_{m,n}(R). For every set of indices

1 ≤ i_1 < ⋯ < i_r ≤ m,  1 ≤ j_1 < ⋯ < j_r ≤ n  (1.7)

denote by A^{j_1,…,j_r}_{i_1,…,i_r} the r × r-matrix consisting of the rows i_1, …, i_r and the columns j_1, …, j_r of A. We call A^{j_1,…,j_r}_{i_1,…,i_r} a submatrix of A. The number of r × r submatrices of A is clearly (m choose r)(n choose r).

The determinant |A^{j_1,…,j_r}_{i_1,…,i_r}| := det(A^{j_1,…,j_r}_{i_1,…,i_r}) is called a minor of order r of the matrix A.

Let A = (a_ij)_{i,j} ∈ M_n(R). To every set of indices

1 ≤ i_1 < ⋯ < i_m ≤ n,  1 ≤ j_1 < ⋯ < j_m ≤ n  (1.8)

we associate the (n − m) × (n − m)-matrix obtained from A by deleting the rows i_1, …, i_m and the columns j_1, …, j_m, and denote it by Â^{j_1,…,j_m}_{i_1,…,i_m}. We define the cofactor of size n − m as the scalar

[Â^{j_1,…,j_m}_{i_1,…,i_m}] := (−1)^{i_1+⋯+i_m+j_1+⋯+j_m} det(Â^{j_1,…,j_m}_{i_1,…,i_m}).  (1.9)

If m = 1 (and i_1 = i, j_1 = j), instead of [Â^j_i] we adopt the simpler notation A_ij, and we call A_ij the cofactor of the element a_ij. The n × n-matrix (A_ji)_{i,j}, the transpose of the matrix of cofactors, is called the adjoint matrix of A and denoted by adj(A).

The following important result holds.

Theorem 1.19 (Laplace) Let A ∈ M_n(R). We have the Laplace expansion by cofactors along row i,

Σ_{k=1}^{n} a_ik A_jk = det(A) δ_ij,  (1.10)

and the Laplace expansion by cofactors along column j,

Σ_{k=1}^{n} a_ki A_kj = det(A) δ_ij.  (1.11)

Hence we obtain

adj(A) A = A adj(A) = det(A) I_n.  (1.12)
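Identity (1.12) can be verified for a concrete integer matrix; in the sketch below (illustrative only) the adjoint is built entry by entry from the cofactors:

```python
# Laplace's theorem in the form (1.12): adj(A) A = A adj(A) = det(A) I_n,
# checked for one 3 x 3 integer matrix.
def minor(A, i, j):
    return [row[:j] + row[j + 1:] for k, row in enumerate(A) if k != i]

def det(A):
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det(minor(A, 0, j))
               for j in range(len(A)))

def adj(A):
    n = len(A)
    # entry (i, j) of adj(A) is the cofactor A_{ji}: note the transposition
    return [[(-1) ** (i + j) * det(minor(A, j, i)) for j in range(n)]
            for i in range(n)]

A = [[2, 0, 1], [1, 3, -1], [0, 5, 4]]
n, d, Aadj = len(A), det(A), adj(A)
prod = [[sum(Aadj[i][k] * A[k][j] for k in range(n)) for j in range(n)]
        for i in range(n)]
assert prod == [[d if i == j else 0 for j in range(n)] for i in range(n)]
```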

More generally, for every integer 1 ≤ m ≤ n, fixing the rows i_1, …, i_m we have the generalized Laplace expansion

det(A) = Σ_{1≤j_1<⋯<j_m≤n} |A^{j_1,…,j_m}_{i_1,…,i_m}| · [Â^{j_1,…,j_m}_{i_1,…,i_m}],

where the second factor is the cofactor defined in (1.9).

Exercise 1.6 Show that the set R⁺ := {x ∈ R | x > 0}, equipped with the operations

R⁺ × R⁺ → R⁺, (x, y) ↦ x ⊕ y := xy,
R × R⁺ → R⁺, (α, x) ↦ α ⊙ x := x^α,

is an R-vector space.

Exercise 1.7 Find an infinite (countable and uncountable) free subset of the Q-vector spaces R and C.

Exercise 1.8 Let K be a finite field of characteristic p. Show that #(K) = p^n, with n ≥ 1. (Hint: use Theorem 1.49.)

Exercise 1.9 Let u : K → A be a homomorphism of unitary rings, where K is a field and A is a unitary ring (not necessarily commutative). Show that u is injective.

Exercise 1.10 If v_1, v_2, v_3 are three linearly dependent vectors of a K-vector space V, is it true that 〈v_1, v_2〉 = 〈v_2, v_3〉?

Exercise 1.11 Let V_1, V_2 and V_3 be three vector subspaces of a K-vector space V such that V_1 ⊂ V_3. Prove the equality (V_1 + V_2) ∩ V_3 = V_1 + (V_2 ∩ V_3).

Exercise 1.12 Find an infinite (countable and uncountable) free subset of the R-vector space C([a, b]; R) (see Example 1.33, 4).

Exercise 1.13 Let W_1, W_2 and W_3 be three subspaces of a K-vector space V. Is it true that W_2 = W_3 if W_1 ⊕ W_2 = W_1 ⊕ W_3?

Exercise 1.14 Let V be a K-vector space with char(K) ≠ 2. An endomorphism f : V → V is called idempotent if f² := f ∘ f = f, involutive if f² = id_V.

1. Prove that f is idempotent if and only if V = ker(f − id_V) ⊕ ker(f).

Note that ker(f − id_V) = Im(f) and f|_{Im(f)} = id_{Im(f)}. If ker(f) = 0_V then f = id_V; if ker(f − id_V) = 0_V then f(v) = 0_V for all v ∈ V.

2. Let V = A ⊕ B; the endomorphism p_{A,B} : V → V, p_{A,B}(v) = a if v = a + b, is called the projection onto A along B. It is immediate that p_{A,B} is an idempotent endomorphism. Prove that if f : V → V is an idempotent linear endomorphism, then f = p_{Im(f),ker(f)}.

3. Prove that f is involutive if and only if V = ker(f − id_V) ⊕ ker(f + id_V).

The subspace ker(f − id_V) is called the axis of f. It may of course happen that one of the subspaces ker(f − id_V), ker(f + id_V) is trivial (i.e. 0_V). If ker(f + id_V) = 0_V then f = id_V; if ker(f − id_V) = 0_V then f = −id_V and f is called the symmetry about the origin.

4. Let V = A ⊕ B. The endomorphism s_{A,B} : V → V, s_{A,B}(v) = a − b if v = a + b, is called the reflection (or symmetry) through A along B. It is easily seen that s_{A,B} is involutive, so that s_{A,B} is an isomorphism. Prove that if s is involutive, then s = s_{A,B} where A = ker(s − id_V) and B = ker(s + id_V).

5. Let s ≠ id_V be involutive. Show that p := (1/2)s + (1/2)id_V is idempotent and p ≠ id_V. Conversely, if p ≠ id_V is idempotent, then s := 2p − id_V is involutive and s ≠ id_V.

6. Show that a reflection s_{A,B} is an orthogonal linear isomorphism if and only if B = A^⊥.

Exercise 1.15 Let V_1 be the subset of symmetric matrices and V_2 the subset of skew-symmetric matrices of V = M_n(K).

(i) Show that V_1 and V_2 are vector subspaces of V and dim_K(V_1) = n(n + 1)/2.
(ii) If char(K) ≠ 2, show that dim_K(V_2) = n(n − 1)/2.
(iii) If char(K) ≠ 2, show that V = V_1 ⊕ V_2.

Exercise 1.16 Let K be a field with char(K) = p > 0 and n ≥ 1 a fixed integer. Assume that p ∤ n. Prove that V_1 := {aI_n | a ∈ K} and V_2 := {A ∈ V | tr(A) = 0} are subspaces of V = M_n(K) and V = V_1 ⊕ V_2.

Exercise 1.17 Let V = R[X]. A polynomial P = P(X) ∈ R[X] is called even if P(−X) = P(X), odd if P(−X) = −P(X). Let W be the subset of all even polynomials and W' the subset of all odd polynomials of V. Show that W and W' are subspaces of V and V = W ⊕ W'.

Exercise 1.18 Is the vector space of Exercise 1.6 isomorphic to a finite-dimensional R-vector space?

Exercise 1.19 Let V, W, Z be three K-vector spaces and f : V → W, g : V → Z two linear maps. Prove that

ker(f) ⊂ ker(g) ⟺ ∃ h : W → Z linear such that g = h ∘ f.

If k : W → Z is a linear map, prove that

Im(g) ⊂ Im(k) ⟺ ∃ l : V → W linear such that g = k ∘ l.

Exercise 1.20 Let V be a K-vector space and V* its dual space. Prove the equalities

(W_1 + W_2)⁰ = W_1⁰ ∩ W_2⁰,  (W_1 ∩ W_2)⁰ = W_1⁰ + W_2⁰,

for every pair of subspaces W_1 and W_2 of V.

Exercise 1.21 Let f ∈ End_K V be an endomorphism of a K-vector space V of dimension n ≥ 2 such that f^n = 0 and f^{n−1} ≠ 0. Prove that there exists a vector v ∈ V such that B = {v, f(v), …, f^{n−1}(v)} is a basis of V. Find M_f^B.

Exercise 1.22 Let A = (a_ij)_{i,j=1,…,n} ∈ M_n(K). Show that if a_ij = 0 ∀ i ≥ j then A^n = 0.

Exercise 1.23 Let A, B, C, D, A', B', C', D' ∈ M_n(R) (where R is a commutative unitary ring). Show the equality

⎛ A B ⎞   ⎛ A' B' ⎞   ⎛ AA' + BC'  AB' + BD' ⎞
⎝ C D ⎠ · ⎝ C' D' ⎠ = ⎝ CA' + DC'  CB' + DD' ⎠.

Exercise 1.24 Let A ∈ M_n(K), B ∈ M_m(K), C ∈ M_{n,m}(K) and

M = ⎛ A C ⎞
    ⎝ 0 B ⎠.

Show that the characteristic polynomials P_A(T), P_B(T) and P_M(T) satisfy the following relation:

P_M(T) = P_A(T) P_B(T).

Hint: note that if A ∈ M_n(R), B ∈ M_m(R), C ∈ M_{n,m}(R) (where R is a commutative unitary ring), then we have

⎛ A        C ⎞   ⎛ I_n      0_{n,m} ⎞   ⎛ A        C   ⎞
⎝ 0_{m,n}  B ⎠ = ⎝ 0_{m,n}  B       ⎠ · ⎝ 0_{m,n}  I_m ⎠.
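The relation P_M(T) = P_A(T) P_B(T) of Exercise 1.24 can be tested numerically by evaluating det(M − tI) at integer points t; since both sides are polynomials in t, agreement at enough points pins them down (a sketch, not a solution of the exercise):

```python
# Exercise 1.24 checked numerically: for M = [[A, C], [0, B]] one has
# det(M - t I) = det(A - t I) det(B - t I), verified at enough integer
# points t to determine both polynomials.
def det(X):
    if len(X) == 1:
        return X[0][0]
    return sum((-1) ** j * X[0][j] *
               det([r[:j] + r[j + 1:] for r in X[1:]])
               for j in range(len(X)))

A = [[1, 2], [3, 4]]
B = [[0, -1, 2], [5, 1, 0], [1, 1, 1]]
C = [[7, 0, 1], [2, -3, 4]]
n, m = len(A), len(B)

M = [A[i] + C[i] for i in range(n)] + \
    [[0] * n + B[i] for i in range(m)]

shift = lambda X, t: [[X[i][j] - (t if i == j else 0)
                       for j in range(len(X))] for i in range(len(X))]

for t in range(-(n + m), n + m + 1):    # more than deg(P_M) + 1 points
    assert det(shift(M, t)) == det(shift(A, t)) * det(shift(B, t))
```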

Exercise 1.25 Let f, g ∈ End_K(V) be two endomorphisms of a K-vector space V of finite dimension. Show that the characteristic polynomials of f ∘ g and g ∘ f coincide. (Hint: from Exercise 1.24 and the equality

⎛ AB − T I_n  −A     ⎞   ⎛ I_n  0   ⎞   ⎛ I_n  0   ⎞   ⎛ −T I_n  −A         ⎞
⎝ 0           −T I_n ⎠ · ⎝ B    I_n ⎠ = ⎝ B    I_n ⎠ · ⎝ 0       BA − T I_n ⎠,  A, B ∈ M_n(K).)

Exercise 1.26 Let R[X]_n be the R-vector space of all polynomials of degree ≤ n (with a fixed n ≥ 2) (see Example 1.33, 2) and D the formal derivative of Example 1.69-5). Prove that D|_{R[X]_n} is an endomorphism of R[X]_n such that D^{n+1} = 0, and find its eigenvalues.

Exercise 1.27 Prove that the set of diagonalizable matrices of M_n(R) generates M_n(R). (Hint: use the elementary matrices E_ii(1) and diag(1, 2, …, n).)



Exercise 1.28 Prove that for all a ∈ R the matrix

⎛ a 1 0 ⎞
⎜ 0 a 1 ⎟
⎝ 0 0 a ⎠ ∈ M_3(R)

is not diagonalizable.

Exercise 1.29 Applying the diagonalization algorithm in Subsection 1.3.3, diagonalize

⎛ 1  0  2 ⎞
⎜ 0 −2  5 ⎟
⎝ 0  5 −2 ⎠ ∈ M_3(R).

Exercise 1.30 Show that the characteristic polynomial of a skew-symmetric matrix A ∈ M_n(R) has all its roots purely imaginary. (Hint: adapt the proof of Proposition 1.122.)

Exercise 1.31 Let V = V_1 ⊕ V_2 be a K-vector space, direct sum of its subspaces V_1 and V_2. Let f : V_1 → V_1 and g : V_2 → V_2 be two maps. If h : V → V is defined by h(v) = f(a) + g(b), where v = a + b, a ∈ V_1 and b ∈ V_2, prove that:

(i) h is linear if and only if f and g are linear.
(ii) h is diagonalizable if and only if f and g are diagonalizable.

Exercise 1.32 Show that a pair of diagonalizable matrices A, B ∈ M_n(K) are simultaneously diagonalizable (i.e. there exists a matrix M ∈ GL_n(K) such that M⁻¹AM and M⁻¹BM are diagonal) if and only if AB = BA.

Exercise 1.33 Let d_1, …, d_n be n ≥ 2 positive real numbers. Show that

〈(x_1, …, x_n), (y_1, …, y_n)〉 := d_1 x_1 y_1 + ⋯ + d_n x_n y_n

is an inner product on R^n.

Exercise 1.34 Consider the euclidean vector space R[X]_{≤5} with inner product defined by

〈f(X), g(X)〉 = ∫_{−1}^{1} f(x) g(x) dx.

Applying the Gram–Schmidt process, orthonormalize the basis {1, X, X², X³, X⁴, X⁵}.

Exercise 1.35 Let v and w be two vectors of a euclidean vector space V.

1. If ‖v‖ = ‖w‖, show that v − w and v + w are orthogonal and explain the geometric meaning of this.
2. Prove the equality ‖v + w‖² + ‖v − w‖² = 2‖v‖² + 2‖w‖² and explain its geometric meaning.
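Exercise 1.34 can be explored with exact rational arithmetic, since ∫_{−1}^{1} x^k dx = 2/(k+1) for even k and 0 for odd k. The sketch below (illustrative only) performs the Gram–Schmidt orthogonalization step and checks orthogonality exactly, leaving only the final normalization by √⟨u, u⟩:

```python
from fractions import Fraction

# Gram-Schmidt on {1, X, ..., X^5} for <f, g> = int_{-1}^{1} f g dx,
# with polynomials represented as coefficient lists [a0, ..., a5].
def inner(f, g):
    s = Fraction(0)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            if (i + j) % 2 == 0:             # int_{-1}^{1} x^(i+j) dx
                s += a * b * Fraction(2, i + j + 1)
    return s

# the monomial basis 1, X, X^2, ..., X^5 as length-6 coefficient lists
basis = [[Fraction(int(i == k)) for i in range(6)] for k in range(6)]

ortho = []
for v in basis:
    w = v[:]
    for u in ortho:
        c = inner(v, u) / inner(u, u)
        w = [wi - c * ui for wi, ui in zip(w, u)]
    ortho.append(w)

# the results are pairwise orthogonal (Legendre-type polynomials);
# dividing each u by sqrt(inner(u, u)) would give the orthonormal basis
for i in range(6):
    for j in range(i):
        assert inner(ortho[i], ortho[j]) == 0
```

For instance the third vector produced is X² − 1/3, a scalar multiple of the Legendre polynomial of degree 2.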

Exercise 1.36 Let {v_1, …, v_n} be an orthonormal basis of a euclidean vector space V. For every i = 1, …, n put x_i := v_1 + ⋯ + v_i. Show that {x_1, …, x_n} is a basis of V and orthonormalize it by the Gram–Schmidt process.

Exercise 1.37 Let W and U be two vector subspaces of a euclidean vector space V of finite dimension. Show the equalities (W + U)^⊥ = W^⊥ ∩ U^⊥ and (W ∩ U)^⊥ = W^⊥ + U^⊥.

Exercise 1.38 Let A ∈ M_n(R) with rank(A) = k.

(a) Prove that there exist P ∈ GL_n(R) and Q ∈ O(n) such that

P A Q = ⎛ I_k 0 ⎞
        ⎝ 0   0 ⎠.

(b) Using (a), prove that rank(A^t A) = rank(A).

Exercise 1.39 Let (V, 〈·,·〉) be a euclidean vector space of dimension n and f an orthogonal self-adjoint linear operator on V. Prove that f can only have +1 and −1 as eigenvalues. If n is odd and det(f) = 1, then f necessarily has +1 as an eigenvalue.

Exercise 1.40 Let (V, 〈·,·〉) be a euclidean vector space of dimension n and f a linear operator on V. Suppose that

〈v, w〉 = 0 ⟹ 〈f(v), f(w)〉 = 0, ∀ v, w ∈ V.

Prove that there exist an orthogonal linear automorphism g : (V, 〈·,·〉) → (V, 〈·,·〉) and a scalar λ ∈ R such that f = λg.

Exercise 1.41 Let (V, 〈·,·〉_V) and (W, 〈·,·〉_W) be two euclidean spaces of the same dimension. Suppose that a map f : V → W preserves the inner product, i.e. 〈f(u), f(v)〉_W = 〈u, v〉_V for every u, v ∈ V. Show that f is a linear isomorphism.

Exercise 1.42 Show that every finite subgroup of SO(2) is cyclic.

Exercise 1.43 Show that for any natural number m one gets

⎛ cos θ −sin θ ⎞^m   ⎛ cos mθ −sin mθ ⎞
⎝ sin θ  cos θ ⎠   = ⎝ sin mθ  cos mθ ⎠.

Exercise 1.44 Let σ and ρ be the orthogonal automorphisms of E²(R) defined by the matrices

⎛ 1  0 ⎞        ⎛ cos(2π/n) −sin(2π/n) ⎞
⎝ 0 −1 ⎠   and  ⎝ sin(2π/n)  cos(2π/n) ⎠,  (n ≥ 2),

respectively. Show that σ² = ρⁿ = id and ρ ∘ σ = σ ∘ ρ^{n−1}. Furthermore, prove that the subgroup of O(2) generated by σ and ρ has 2n elements. This group is called the n-th dihedral subgroup of O(2).
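The claim of Exercise 1.44 that 〈σ, ρ〉 has 2n elements can be checked numerically for a fixed n by closing {σ, ρ} under matrix multiplication (a sketch for n = 5, not a proof; entries are rounded so that floating-point matrices can be compared):

```python
from math import cos, sin, pi

# Exercise 1.44 for n = 5: sigma and rho generate a subgroup of O(2)
# with exactly 2n = 10 elements (5 rotations and 5 reflections).
n = 5
sigma = ((1.0, 0.0), (0.0, -1.0))
a = 2 * pi / n
rho = ((cos(a), -sin(a)), (sin(a), cos(a)))

def mul(X, Y):
    return tuple(tuple(sum(X[i][k] * Y[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

def key(X):                 # round to make float matrices hashable keys
    return tuple(round(x, 9) for row in X for x in row)

# close {sigma, rho} under multiplication; since every element has
# finite order, this closure is the generated subgroup
group = {key(sigma): sigma, key(rho): rho}
changed = True
while changed:
    changed = False
    for X in list(group.values()):
        for Y in list(group.values()):
            Z = mul(X, Y)
            if key(Z) not in group:
                group[key(Z)] = Z
                changed = True
assert len(group) == 2 * n
```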

Exercise 1.45 Adapting the proof of Theorem 1.178 prove Schur’s theorem given below: Let C ∈ Mn (R) be a matrix whose characteristic polynomial has all real roots, then there exists an orthogonal matrix A ∈ Mn (R) such that ACAt is upper triangular.

Which entries does ACAt have on its main diagonal?

Chapter 2

Bilinear and Quadratic Forms

2.1 Bilinear Maps

In this chapter K denotes a field of characteristic ≠ 2.

Definition 2.1 Let V_1, …, V_n, W be K-vector spaces. A map

f : V_1 × ⋯ × V_n → W  (2.1)

is called multilinear if it is linear in each variable separately, i.e. if

f(v_1, …, v_{i−1}, λv + μv', v_{i+1}, …, v_n) = λ f(v_1, …, v_{i−1}, v, v_{i+1}, …, v_n) + μ f(v_1, …, v_{i−1}, v', v_{i+1}, …, v_n), ∀ v, v' ∈ V_i, ∀ λ, μ ∈ K,

for all i = 1, …, n and for any fixed v_j ∈ V_j, j = 1, …, n, j ≠ i. If n = 2 we use the term bilinear instead of multilinear. If V_1 = ⋯ = V_n = V we call f an n-linear map. If in addition W = K, then it is said to be an n-linear form on V; if n = 2, a bilinear form.

The set of all bilinear forms on V, denoted by Bil(V), equipped with the operations

(f + g)(v, w) := f(v, w) + g(v, w), ∀ v, w ∈ V, ∀ f, g ∈ Bil(V),
(λf)(v, w) := λ f(v, w), ∀ v, w ∈ V, ∀ f ∈ Bil(V), ∀ λ ∈ K,

is a K-vector space. The neutral element is the null form 0 ∈ Bil(V), 0(v, w) := 0, ∀ v, w ∈ V; the opposite of f ∈ Bil(V) is the form −f, (−f)(v, w) := −f(v, w), ∀ v, w ∈ V.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2024. L. Bădescu, E. Carletti, Lectures on Geometry, La Matematica per il 3+2 158, https://doi.org/10.1007/978-3-031-51414-2_2


A bilinear form f ∈ Bil(V) is said to be symmetric if f(v, w) = f(w, v) for all v, w ∈ V; skew-symmetric if f(v, w) = −f(w, v) for all v, w ∈ V.

Examples 2.2
1. The product

$$(A, B) \in M(m, n, K) \times M(n, p, K) \mapsto AB \in M(m, p, K)$$

is a bilinear map.
2. Let (V, 〈·, ·〉) be a euclidean vector space; then f : V × V → R, f(v, w) := 〈v, w〉, is a symmetric bilinear form. In particular the standard inner product on Rⁿ (see Example 1.140, 1) is called the standard bilinear form on Rⁿ.
3. The standard bilinear form on Kⁿ is given by f(x, y) = x1y1 + · · · + xnyn for all x = (x1, . . . , xn), y = (y1, . . . , yn) ∈ Kⁿ.
4. If V is a K-vector space of finite dimension, the natural pairing of V and V*

$$\langle \cdot, \cdot \rangle : V \times V^* \to K, \qquad \langle v, w^* \rangle = w^*(v),$$

is a bilinear form.

Definition 2.3 Let f ∈ Bil(V) be a bilinear form on a K-vector space V. For every v ∈ V, the following linear functionals on V are defined:

$$f_v : V \to K, \quad f_v(w) := f(v, w), \quad \forall\, w \in V,$$
$$f'_v : V \to K, \quad f'_v(w) := f(w, v), \quad \forall\, w \in V.$$

Thus we have the linear maps

$$\delta_f : V \to V^*, \quad \delta_f(v) := f_v, \quad \forall\, v \in V, \tag{2.2}$$
$$\delta'_f : V \to V^*, \quad \delta'_f(v) := f'_v, \quad \forall\, v \in V. \tag{2.3}$$

Definition 2.4 Let B = {v1, . . . , vn} be a basis of a K-vector space V and f ∈ Bil(V) a bilinear form on V. We call

$$M_f^B := \left(a_{ij}\right)_{i,j=1,\ldots,n} \in M_n(K), \qquad a_{ij} := f(v_i, v_j), \quad i, j = 1, \ldots, n,$$

the matrix associated to the bilinear form f with respect to the basis B. If f is symmetric then M_f^B is obviously symmetric.
Conversely, if with A = (a_{ij})_{i,j=1,...,n} ∈ Mn(K) we associate the map

$$f_A : V \times V \to K, \qquad f_A(v, w) = \sum_{i,j=1}^{n} a_{ij} x_i y_j, \qquad v = \sum_{i=1}^{n} x_i v_i, \quad w = \sum_{i=1}^{n} y_i v_i, \tag{2.4}$$


then f_A is a bilinear form on V such that M_{f_A}^B = A. If A is the null matrix, then f_A is the null form. If A is symmetric, then f_A is symmetric.

Proposition 2.5 Under the above hypothesis and notation the map

$$\mathrm{Bil}(V) \to M_n(K), \qquad f \mapsto M_f^B, \quad \forall\, f \in \mathrm{Bil}(V),$$

is an isomorphism of K-vector spaces, such that f is symmetric if and only if M_f^B is symmetric.
If B′ is another basis of V and Λ^{B,B′} ∈ GLn(K) is the matrix of transition from the basis B to the basis B′, then we have

$$M_f^{B'} = (\Lambda^{B,B'})^t\, M_f^{B}\, \Lambda^{B,B'}, \tag{2.5}$$

so that M_f^{B′} and M_f^{B} are congruent. In particular, rank(M_f^{B′}) = rank(M_f^{B}).

Proof Let B′ = {w1, . . . , wn} be another basis of V. If Λ^{B,B′} = (α_{ij})_{i,j=1,...,n}, then w_i = Σ_{j=1}^{n} α_{ji} v_j, i = 1, . . . , n, so that

$$f(w_i, w_j) = f\Big(\sum_{k=1}^{n} \alpha_{ki} v_k, \sum_{l=1}^{n} \alpha_{lj} v_l\Big) = \sum_{k,l=1}^{n} \alpha_{ki}\, f(v_k, v_l)\, \alpha_{lj}.$$

Hence we have M_f^{B′} = (Λ^{B,B′})ᵗ M_f^{B} Λ^{B,B′}. ⨆⨅
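Formula (2.5) is easy to test numerically. In the sketch below (an illustration of ours, not from the text) the columns of Λ hold the B-coordinates of the new basis vectors, and the entries f(w_i, w_j) computed directly from the coordinates agree with Λᵗ M Λ.

```python
import numpy as np

rng = np.random.default_rng(0)
M_B = rng.integers(-3, 4, size=(3, 3)).astype(float)
M_B = (M_B + M_B.T) / 2          # matrix of a symmetric bilinear form f in basis B
Lam = np.array([[1., 2., 0.],    # transition matrix: column i holds the
                [0., 1., 1.],    # B-coordinates of the new basis vector w_i
                [1., 0., 1.]])

M_Bp = Lam.T @ M_B @ Lam         # formula (2.5)

# f(w_i, w_j) computed directly from coordinates agrees entrywise
w = [Lam[:, i] for i in range(3)]
direct = np.array([[wi @ M_B @ wj for wj in w] for wi in w])
```

The rank is also preserved under congruence, as stated after (2.5).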

For instance, the matrix associated with the standard bilinear form on Kⁿ (see Example 2.2, 3) with respect to the canonical basis e1 = (1, 0, . . . , 0), . . . , en = (0, . . . , 0, 1) of Kⁿ is the identity matrix In ∈ Mn(K).

Definition 2.6 We define the rank rank(f) of a bilinear form f ∈ Bil(V) by the relation

$$\mathrm{rank}(f) := \mathrm{rank}(M_f^B),$$

where B is any basis of V. By Proposition 2.5, rank(f) is independent of the basis B chosen. We have the obvious inequality rank(f) ≤ dim_K V. If rank(f) < dim_K V we call f a degenerate bilinear form, while f is called non-degenerate if

$$\mathrm{rank}(f) = \dim_K V.$$


Proposition 2.7 Let f ∈ Bil(V) be a bilinear form on a K-vector space V of finite dimension. The following conditions are equivalent:

(i) f is non-degenerate.
(ii) For every v ≠ 0_V in V there exists w ∈ V such that f(v, w) ≠ 0.
(iii) For every w ≠ 0_V in V there exists v ∈ V such that f(v, w) ≠ 0.
(iv) The map δ_f : V → V* of (2.2) is an isomorphism of K-vector spaces.
(v) The map δ′_f : V → V* of (2.3) is an isomorphism of K-vector spaces.

Proof (ii) ⇐⇒ (iv). A vector v ∈ V satisfies f(v, w) = 0 ∀ w ∈ V if and only if v ∈ Ker(δ_f), while Ker(δ_f) = 0 if and only if δ_f is an isomorphism (by Proposition 1.74 and since dim_K V = dim_K V*). The equivalence (iii) ⇐⇒ (v) is proved in the same way.
Let B = {v1, . . . , vn} be a basis of V; if M_f^B = (a_{ij})_{i,j=1,...,n} and B* = {v1*, . . . , vn*} is the dual basis of V*, for every v ∈ V we have

$$\delta_f(v) = \sum_{j=1}^{n} \delta_f(v)(v_j)\, v_j^* = \sum_{j=1}^{n} f(v, v_j)\, v_j^*$$

(see the proof of Theorem 1.96). In particular,

$$\delta_f(v_i) = \sum_{j=1}^{n} a_{ij}\, v_j^*, \qquad i = 1, \ldots, n.$$

Thus

$$M_{\delta_f}^{B,B^*} = (M_f^B)^t$$

and the equivalence (i) ⇐⇒ (iv) follows. The equivalence (i) ⇐⇒ (v) is completely analogous. ⨆⨅

Definition 2.8 Let f ∈ Bil(V) be a symmetric bilinear form on a K-vector space V. Two vectors v, w ∈ V are f-orthogonal (or orthogonal with respect to f) if f(v, w) = 0, written v ⊥_f w. If S ⊂ V is a subset of V we define the f-orthogonal complement of S:

$$S^{\perp_f} := \{ w \in V \mid f(v, w) = 0 \ \forall\, v \in S \}.$$

The subset S^{⊥_f} is clearly a vector subspace of V. Two vector subspaces V1 and V2 are called f-orthogonal (or orthogonal with respect to f) if f(v, w) = 0 for every v ∈ V1 and w ∈ V2, written V1 ⊥ V2.


The subspace V^{⊥_f} = {v ∈ V | f(v, w) = 0 ∀ w ∈ V} is called the radical of f. By Proposition 2.7, f is non-degenerate if and only if V^{⊥_f} = 0_V. It is easy to see that

$$\dim_K(V^{\perp_f}) = n - \mathrm{rank}(f).$$

A vector v ∈ V is called isotropic with respect to f if f(v, v) = 0. If v is isotropic, then λv is isotropic ∀ λ ∈ K. If v is not isotropic, v is called non-isotropic. The only isotropic vector with respect to the symmetric bilinear form f(v, w) = 〈v, w〉 of Example 2.2, 2 is 0_V.
Let v be a non-isotropic vector of V. The Fourier coefficient of w ∈ V with respect to v relative to a symmetric bilinear form f is the scalar defined by

$$a_v(w) := \frac{f(v, w)}{f(v, v)}. \tag{2.6}$$

Proposition 2.9 Let f ∈ Bil(V) be a symmetric bilinear form on a K-vector space V. If v is a non-isotropic vector then

$$V = \langle v \rangle \oplus v^{\perp_f},$$

where 〈v〉 is the subspace generated by v.

Proof From (2.6) we obtain f(v, w − a_v(w)v) = 0, i.e. w − a_v(w)v ∈ v^{⊥_f}. Since w = a_v(w)v + (w − a_v(w)v), it follows that V = 〈v〉 + v^{⊥_f}. Furthermore, since v is non-isotropic, 〈v〉 ∩ v^{⊥_f} = 0_V, so that our thesis follows. ⨆⨅

Remark 2.10 Let f ∈ Bil(V) be a symmetric bilinear form on a K-vector space V and let W be a subspace of V. In general we have V ≠ W ⊕ W^{⊥_f}. For instance, take f ∈ Bil(R²), f((x1, x2), (y1, y2)) = x1y1 − x2y2, and W = 〈(1, 1)〉; then we have W = W^{⊥_f}. Furthermore, W ⊊ (W^{⊥_f})^{⊥_f} may occur. For instance, take f ∈ Bil(R³), f((x1, x2, x3), (y1, y2, y3)) = x1y1 − x2y2, and W = 〈(1, 1, 1)〉; then (W^{⊥_f})^{⊥_f} = {(a, a, b) | a, b ∈ R} ⊋ W.

Definition 2.11 Let f ∈ Bil(V) be a symmetric bilinear form on a K-vector space V. The map

$$q : V \to K, \qquad q(v) = f(v, v), \quad \forall\, v \in V,$$

is called the quadratic form associated with f, and f is referred to as the bilinear form polar to the quadratic form q. The rank of q is defined by rank q := rank f. If A is a symmetric matrix, q_A will denote the quadratic form associated with f_A.
The quadratic form associated with the standard bilinear form on Kⁿ is q(v) = x1² + · · · + xn², v = (x1, . . . , xn) ∈ Kⁿ. This form is called the standard quadratic form on Kⁿ.


Proposition 2.12 Let f ∈ Bil(V) be a symmetric bilinear form on a K-vector space V and q : V → K the quadratic form associated with f. Then

$$q(\lambda v) = \lambda^2 q(v), \qquad 2f(v, w) = q(v + w) - q(v) - q(w), \qquad \forall\, \lambda \in K,\ \forall\, v, w \in V.$$

Proof From Definition 2.11 we have

$$q(v + w) - q(v) - q(w) = f(v + w, v + w) - f(v, v) - f(w, w) = f(v, w) + f(w, v) = 2f(v, w)$$

and q(λv) = f(λv, λv) = λ²f(v, v) = λ²q(v). ⨆⨅

Since char(K) ≠ 2, by Proposition 2.12 we can assert that the polar form f is uniquely determined by its quadratic form q.
Let B = {v1, . . . , vn} be a basis of V and M_f^B = (a_{ij})_{i,j=1,...,n} the matrix associated with a symmetric bilinear form f ∈ Bil(V) relative to B. If v = x1v1 + · · · + xnvn (x := (x1, . . . , xn) ∈ Kⁿ) then

$$q(v) = f(v, v) = x\, M_f^B\, x^t = \sum_{i,j=1}^{n} a_{ij} x_i x_j.$$

The polynomial

$$Q(x) = x\, M_f^B\, x^t = \sum_{i,j=1}^{n} a_{ij} x_i x_j \in K[x_1, \ldots, x_n] \tag{2.7}$$

is homogeneous of degree 2 and is said to represent the quadratic form q with respect to the basis B. We observe that any quadratic homogeneous polynomial of K[x1, . . . , xn] can be written in the form (2.7), since char(K) ≠ 2.
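The polarization identity of Proposition 2.12 can be illustrated over R with an arbitrary symmetric matrix (the following is an illustration of ours, not part of the text):

```python
import numpy as np

A = np.array([[2., 1.5, 2.],
              [1.5, 1., 0.],
              [2., 0., 1.]])          # a symmetric matrix over R

def q(x):                             # quadratic form q(v) = x A x^t
    return x @ A @ x

def f(x, y):                          # its polar bilinear form
    return x @ A @ y

v = np.array([1., -2., 3.])
w = np.array([0., 1., 1.])
lam = 5.0
```

Here q(λv) = λ²q(v) and 2f(v, w) = q(v + w) − q(v) − q(w) hold for any choice of v, w and λ.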

2.1.1 Diagonalization of Quadratic Forms

Definition 2.13 Let f ∈ Bil(V) be a symmetric bilinear form on a K-vector space V and B = {v1, . . . , vn} a basis of V. We call B orthogonal with respect to f (or f-orthogonal) if f(vi, vj) = 0 for all i ≠ j. We say that f is diagonalizable if there exists an orthogonal basis with respect to f.


If r = rank(f), possibly after renumbering the vectors of B, for every pair of vectors v = x1v1 + · · · + xnvn and w = y1v1 + · · · + ynvn, with xi, yi ∈ K, i = 1, . . . , n, we have

$$f(v, w) = a_{11} x_1 y_1 + \cdots + a_{rr} x_r y_r, \qquad \text{with } a_{ii} = f(v_i, v_i) \neq 0, \quad i = 1, \ldots, r, \tag{2.8}$$

i.e.

$$M_f^B = \begin{pmatrix} a_{11} & 0 & \cdots & 0 & 0 & \cdots & 0 \\ 0 & a_{22} & \cdots & 0 & 0 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots & & \vdots \\ 0 & 0 & \cdots & a_{rr} & 0 & \cdots & 0 \\ 0 & 0 & \cdots & 0 & 0 & \cdots & 0 \\ \vdots & \vdots & & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 0 & 0 & \cdots & 0 \end{pmatrix}. \tag{2.9}$$

We say that the quadratic form q associated to the symmetric bilinear form f, with respect to an orthogonal basis, is represented as a sum of squares, i.e.

$$q(v) = a_{11} x_1^2 + \cdots + a_{rr} x_r^2, \qquad \text{with } a_{ii} = f(v_i, v_i) \neq 0, \quad i = 1, \ldots, r. \tag{2.10}$$

Theorem 2.14 (Lagrange) Any symmetric bilinear form f ∈ Bil(V) on a K-vector space V of dimension n is diagonalizable.

Proof There are several proofs of this fundamental result. We choose a proof based on the concept of a non-isotropic vector. We proceed by induction on n. If n = 1 we need not prove anything. Let n ≥ 2. We can suppose f ≠ 0. Let v, w ∈ V be two vectors such that f(v, w) ≠ 0. Then at least one among the vectors v, w, v + w is non-isotropic. Indeed, if v and w are both isotropic, then

$$f(v + w, v + w) = f(v, v) + f(w, w) + 2f(v, w) = 2f(v, w) \neq 0,$$

since char(K) ≠ 2.
Therefore we can suppose the existence of a vector e1 ≠ 0 such that f(e1, e1) ≠ 0. By Proposition 2.9 we have V = 〈e1〉 ⊕ V′, where V′ := e1^{⊥_f} and dim_K V′ = n − 1. By our inductive hypothesis, V′ has an orthogonal basis e2, . . . , en with respect to the symmetric bilinear form

$$f' : V' \times V' \to K, \qquad f' := f|_{V' \times V'}.$$


Now we prove that e1, e2, . . . , en is an orthogonal basis of V. Let λ1e1 + λ2e2 + · · · + λnen = 0_V with λi ∈ K, i = 1, . . . , n. If we put v := λ2e2 + · · · + λnen ∈ V′, we get λ1e1 + v = 0_V, hence

$$0 = f(\lambda_1 e_1 + v, e_1) = \lambda_1 f(e_1, e_1) + f(v, e_1) = \lambda_1 f(e_1, e_1)$$

(because one has f(v, e1) = f(e1, v) = 0), which implies λ1 = 0 since f(e1, e1) ≠ 0. Therefore 0_V = v = λ2e2 + · · · + λnen and λ2 = · · · = λn = 0 because e2, . . . , en is a basis of V′ over K. Thus our basis e1, . . . , en is orthogonal with respect to f. ⨆⨅

By Theorem 2.14 and Proposition 2.5 we get:

Corollary 2.15 Any symmetric matrix of Mn(K) is congruent to a diagonal matrix.

Proof Let A = (a_{ij})_{i,j=1,...,n} ∈ Mn(K) be a symmetric matrix. Then

$$f_A : K^n \times K^n \to K, \qquad f_A(v, w) = \sum_{i,j=1}^{n} a_{ij} x_i y_j, \qquad \forall\, v = (x_1, \ldots, x_n),\ w = (y_1, \ldots, y_n) \in K^n,$$

is a symmetric bilinear form on Kⁿ. Then A = M_{f_A}^E, where E is the canonical basis of Kⁿ. By Theorem 2.14 there exists an orthogonal basis B = (u1, . . . , un), with ui = (z_{i1}, . . . , z_{in}), i = 1, . . . , n, such that M_{f_A}^B = diag(b11, . . . , bnn). By Proposition 2.5 we have

$$M_{f_A}^B = (\Lambda^{E,B})^t\, A\, \Lambda^{E,B},$$

where

$$\Lambda^{E,B} = \begin{pmatrix} z_{11} & z_{21} & \cdots & z_{n1} \\ z_{12} & z_{22} & \cdots & z_{n2} \\ \vdots & \vdots & \ddots & \vdots \\ z_{1n} & z_{2n} & \cdots & z_{nn} \end{pmatrix}$$

is the matrix of transition from the basis E to the basis B. ⨆⨅

Remark 2.16 (1) We can associate to a symmetric bilinear form f several diagonal representations, which depend on the orthogonal bases chosen, so that in general we don't have a canonical diagonal representation. Let rank(f) = r and let B = {v1, . . . , vr, vr+1, . . . , vn} be an orthogonal basis with respect to f such that M_f^B is given by (2.9). If λ1, . . . , λr ∈ K* and B′ = {λ1v1, . . . , λrvr, vr+1, . . . , vn}, then B′ is an orthogonal basis with respect to f such that

$$M_f^{B'} = \begin{pmatrix} \lambda_1^2 a_{11} & 0 & \cdots & 0 & 0 & \cdots & 0 \\ 0 & \lambda_2^2 a_{22} & \cdots & 0 & 0 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots & & \vdots \\ 0 & 0 & \cdots & \lambda_r^2 a_{rr} & 0 & \cdots & 0 \\ 0 & 0 & \cdots & 0 & 0 & \cdots & 0 \\ \vdots & \vdots & & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 0 & 0 & \cdots & 0 \end{pmatrix}. \tag{2.11}$$

We obviously have many other ways to obtain different diagonal representations: for instance, if K = Q, the quadratic form 2x² + 3y² has the diagonal form 5X² + 30Y² with respect to the orthogonal basis {v1 = (1, 1), v2 = (−3, 2)}; indeed

$$\begin{pmatrix} 1 & 1 \\ -3 & 2 \end{pmatrix} \begin{pmatrix} 2 & 0 \\ 0 & 3 \end{pmatrix} \begin{pmatrix} 1 & -3 \\ 1 & 2 \end{pmatrix} = \begin{pmatrix} 5 & 0 \\ 0 & 30 \end{pmatrix}.$$

(2) The matrix

$$A = \begin{pmatrix} 2 & 3/2 & 2 \\ 3/2 & 1 & 0 \\ 2 & 0 & 1 \end{pmatrix} \in M_3(\mathbb{Q})$$

is congruent to the matrix

$$\begin{pmatrix} 1/2 & 0 & 0 \\ 0 & -8 & 0 \\ 0 & 0 & 1/17 \end{pmatrix},$$

but A is not similar to any diagonal matrix with entries in Q, since its eigenvalues are 1 and (3 ± √26)/2.

If K is an algebraically closed field or K = R we can obtain a canonical diagonal representation for a symmetric bilinear form.

Corollary 2.17 Let f ∈ Bil(V) be a symmetric bilinear form on a K-vector space of dimension n, where K is an algebraically closed field. Then there exists a basis B = (v1, . . . , vn) of V such that

$$M_f^B = \begin{pmatrix} I_r & 0 \\ 0 & 0 \end{pmatrix},$$

where r := rank(f) and Ir is the identity matrix of the ring Mr(K).

Proof By Theorem 2.14 there is an orthogonal basis w1, . . . , wn of V such that (2.8) holds. Since K is algebraically closed, there exist bi ∈ K such that bi² = aii,


i = 1, . . . , r. We obtain the required basis if we define vi = bi⁻¹wi, i = 1, . . . , r, and vj = wj, j = r + 1, . . . , n. ⨆⨅

Corollary 2.18 (Sylvester's Law of Inertia) Let f ∈ Bil(V) be a symmetric bilinear form on an R-vector space V of dimension n. Then there exist a non-negative integer p, which depends only on the form f, with 0 ≤ p ≤ r := rank(f), and a basis B = (v1, . . . , vn) of V such that

$$M_f^B = \begin{pmatrix} I_p & 0 & 0 \\ 0 & -I_{r-p} & 0 \\ 0 & 0 & 0 \end{pmatrix}.$$

Proof The existence of an orthogonal basis follows from Theorem 2.14. Let w1, . . . , wn be such a basis, with aii := f(wi, wi) ≠ 0 if and only if i ≤ r := rank(f). Let p be the number of indices i such that aii > 0. Let the vectors wi be arranged so that

$$a_{ii} > 0 \iff 1 \leq i \leq p, \qquad \text{and} \qquad a_{jj} < 0 \iff p + 1 \leq j \leq r.$$

Then there exist b1, . . . , br ∈ R \ {0} such that aii = bi², i = 1, . . . , p, and ajj = −bj², j = p + 1, . . . , r. Therefore the required basis consists of the vectors

$$v_i := b_i^{-1} w_i, \quad i = 1, \ldots, r, \qquad \text{and} \qquad v_k := w_k, \quad k = r + 1, \ldots, n.$$

It remains to prove that p is an invariant of f. The quadratic form q associated with f with respect to the above basis (v1, . . . , vn) is

$$q(v) = x_1^2 + \cdots + x_p^2 - x_{p+1}^2 - \cdots - x_r^2, \qquad v = x_1 v_1 + \cdots + x_n v_n. \tag{2.12}$$

Suppose that (u1, . . . , un) is another basis of V such that

$$q(v) = y_1^2 + \cdots + y_t^2 - y_{t+1}^2 - \cdots - y_r^2, \qquad v = y_1 u_1 + \cdots + y_n u_n, \tag{2.13}$$

with 0 ≤ t ≤ r. We have to prove the equality p = t. Suppose that p ≠ t, e.g. t < p. Let V1 := 〈v1, . . . , vp〉 and V2 := 〈ut+1, . . . , un〉. Then

$$\dim_{\mathbb{R}}(V_1) + \dim_{\mathbb{R}}(V_2) = p + n - t > n,$$

so that by Grassmann's formula (Corollary 1.76) we have V1 ∩ V2 ≠ {0_V}. Hence there exists 0_V ≠ v ∈ V1 ∩ V2. Let v = x1v1 + · · · + xpvp = yt+1ut+1 + · · · + ynun, with xi, yj ∈ R. By (2.12), q(v) = x1² + · · · + xp² > 0, and by (2.13), q(v) = −y²ₜ₊₁ − · · · − y²ᵣ ≤ 0; these two conditions are incompatible, thus p = t. ⨆⨅


Definition 2.19 Let q be a quadratic form on an n-dimensional R-vector space V. The equality (2.12) (with 0 ≤ p ≤ r = rank(q)) is called the canonical representation of q. It is easy to see that:

• The integer p (called the positivity index and denoted by i₊(q)) is the maximum dimension of a subspace W such that q(w) > 0 for all w ∈ W, w ≠ 0.
• The integer r − p (called the negativity index and denoted by i₋(q)) is the maximum dimension of a subspace W such that q(w) < 0 for all w ∈ W, w ≠ 0.
• The integer n − r (called the nullity index and denoted by i₀(q)) is the maximum dimension of a subspace W such that q(w) = 0 for all w ∈ W.

We can obviously extend the above definitions to real symmetric matrices. If A is a real symmetric matrix,

$$i_+(A) := i_+(q_A), \qquad i_-(A) := i_-(q_A), \qquad i_0(A) := i_0(q_A).$$

The pair of integers Sign(q) := (i₊(q), i₋(q)) is called the signature of the quadratic form q, and Sign(A) := (i₊(A), i₋(A)) is called the signature of the real symmetric matrix A. Some authors define the signature of a quadratic form q as the difference i₊(q) − i₋(q).

Corollary 2.20 Let A = (a_{ij})_{i,j=1,...,n} ∈ Mn(R) be a symmetric matrix. Then A is congruent to the matrix

$$\begin{pmatrix} I_p & 0 & 0 \\ 0 & -I_{r-p} & 0 \\ 0 & 0 & 0 \end{pmatrix}, \tag{2.14}$$

where p = i₊(A), r − p = i₋(A) and r = rank(q_A) = rank(A).

Proof By Theorem 1.178 there exists an orthogonal matrix P such that P⁻¹AP = diag(λ1, . . . , λr, 0, . . . , 0), where λ1, . . . , λr are the non-zero eigenvalues of A (counted with their multiplicities) and λ1, . . . , λp > 0, while λp+1, . . . , λr < 0. Let

$$B = \mathrm{diag}(|\lambda_1|^{-1/2}, \ldots, |\lambda_r|^{-1/2}, 1, \ldots, 1).$$

Then (PB)ᵗ A (PB) is the diagonal matrix (2.14). ⨆⨅
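By the remark following Corollary 2.20, the signature can be read off from the eigenvalues of the symmetric matrix. A minimal sketch (ours; the helper name `signature` is hypothetical) using NumPy's `eigvalsh`:

```python
import numpy as np

def signature(A, tol=1e-9):
    """(i_+, i_-, i_0) of a real symmetric matrix: count positive,
    negative and zero eigenvalues with their multiplicities."""
    w = np.linalg.eigvalsh(A)
    return (int(np.sum(w > tol)),
            int(np.sum(w < -tol)),
            int(np.sum(np.abs(w) <= tol)))

# the Minkowski form of Example 2.22 (2) has signature (3, 1)
minkowski = np.diag([1., 1., 1., -1.])
```

For large matrices one would rather use Descartes' rule (Proposition 2.24) on the characteristic polynomial, as Remark 2.23 suggests, but for a quick numerical check the eigenvalue count is convenient.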

Remark 2.21 From Corollary 2.20 it follows that i₊(A) is the number of positive eigenvalues of A (counted with their multiplicities), i₋(A) the number of negative eigenvalues (counted with their multiplicities), and i₀(A) the multiplicity of the eigenvalue 0.

A quadratic form q on an n-dimensional R-vector space is called:
1. positive definite if Sign(q) = (n, 0), i.e. q(v) > 0, ∀ v ∈ V \ {0_V};
2. negative definite if Sign(q) = (0, n), i.e. q(v) < 0, ∀ v ∈ V \ {0_V};
3. positive semi-definite if Sign(q) = (r, 0), with 0 ≤ r ≤ n, i.e. q(v) ≥ 0, ∀ v ∈ V;
4. negative semi-definite if Sign(q) = (0, r), with 0 ≤ r ≤ n, i.e. q(v) ≤ 0, ∀ v ∈ V;
5. indefinite if Sign(q) = (p, r − p), with 0 < p < r ≤ n.

Examples 2.22
(1) An inner product on a real vector space is a positive definite quadratic form.
(2) An example of a non-degenerate indefinite quadratic form (of signature (3, 1)) is

$$q(x_1, x_2, x_3, x_4) = x_1^2 + x_2^2 + x_3^2 - x_4^2.$$

The pair (R⁴, q) is called the Minkowski space, which plays an important role in the theory of relativity (see Chapter 13 on Cayley–Klein geometries).

Remark 2.23 To determine the signature of a quadratic form on R one does not need to find the eigenvalues of the associated symmetric matrix (a very difficult task if the order of the matrix is ≥ 5). The following criterion is very useful (see [29]):

Proposition 2.24 (Descartes' Rule) Let f(X) = a_nXⁿ + a_{n−1}Xⁿ⁻¹ + · · · + a₁X + a₀ ∈ R[X] be a polynomial all of whose roots are real and ≠ 0. The number of roots > 0, counted with their multiplicities, is equal to the number of sign changes in the sequence a_n, . . . , a₀ of non-zero coefficients of f.

Lemma 2.25 Let V be a real vector space of dimension n and b : V × V → R an indefinite symmetric bilinear form of signature (p, n − p) with p < n. Let W be a subspace of V of dimension p such that b(v, v) > 0 for all v ∈ W, v ≠ 0. Then, for every w ∈ W^{⊥_b}, w ≠ 0, we have

$$b(w, w) < 0.$$

Similarly, if W is a subspace of V of dimension n − p such that b(v, v) < 0 for all v ∈ W, v ≠ 0, then, for every w ∈ W^{⊥_b}, w ≠ 0, we have

$$b(w, w) > 0.$$

Proof Suppose that there exists z ∈ W^{⊥_b}, z ≠ 0, such that b(z, z) > 0. Let N = W ⊕ 〈z〉. Every element of N is of the form w + λz with w ∈ W and λ ∈ R, and since b(w, z) = 0,

$$b(w + \lambda z, w + \lambda z) = b(w, w) + \lambda^2 b(z, z) + 2\lambda b(w, z) = b(w, w) + \lambda^2 b(z, z) > 0$$

whenever w + λz ≠ 0. Hence b is positive definite on N and dim N > dim W = p, contradicting the fact that the signature of b is (p, n − p). (The case b(z, z) = 0 is excluded similarly: since b restricted to W is positive definite, V = W ⊕ W^{⊥_b}, and comparing signatures shows that b restricted to W^{⊥_b} has signature (0, n − p), i.e. it is negative definite.) The second statement follows by applying the first to −b. ⨆⨅

Lemma 2.26 (Cauchy–Schwarz Inequalities) Let V be a real vector space of dimension n and b : V × V → R a symmetric bilinear form.


(a) If b is positive definite, then for all x and y ∈ V we have

$$b(x, y)^2 \leq b(x, x)\, b(y, y), \tag{2.15}$$

with equality if and only if x and y are linearly dependent.
(b) If b is non-degenerate of signature (n − 1, 1), b(x, x) < 0 and b(y, y) < 0, then

$$b(x, y)^2 > b(x, x)\, b(y, y). \tag{2.16}$$

Proof
(a) A positive definite symmetric bilinear form is an inner product, so that (a) is Theorem 1.147.
(b) The subspace 〈x〉 generated by x is a one-dimensional subspace on which b is negative definite, and this dimension is maximal since the signature is (n − 1, 1). Take the orthogonal complement 〈x〉^{⊥_b}; then

$$y = \lambda x + z, \qquad z \in \langle x \rangle^{\perp_b}.$$

By Lemma 2.25 we have b(z, z) > 0, hence

$$0 > b(y, y) = \lambda^2 b(x, x) + b(z, z).$$

Thus

$$b(x, y)^2 = b(x, \lambda x + z)^2 = \lambda^2 b(x, x)^2 = \lambda^2 b(x, x)\, b(x, x) = [b(y, y) - b(z, z)]\, b(x, x) > b(y, y)\, b(x, x),$$

= λ2 b(x, x)b(x, x) = [b(y, y) − b(z, z)]b(x, x) > b(y, y)b(x, x), which is our thesis. ⨆ ⨅ Also for finite fields we have a canonical form. Corollary 2.27 Let f ∈ Bil(V ) be a bilinear symmetric form on a K-vector space of dimension n where K is finite. Then there exists a basis B = (v1 , . . . , vn ) of V such that MfB is the diagonal matrix ⎛



1

⎟ ⎟ ⎟ ⎟ ⎟ ⎟ ⎟, ⎟ ⎟ ⎟ ⎟ ⎠

⎜ . ⎜ .. ⎜ ⎜ 1 ⎜ ⎜ .⎜ d ⎜ ⎜ 0 ⎜ ⎜ .. ⎝ . 0

where r := rank(f ) and d is unique up to multiplication by a square.

Proof See [32], pp. 290–291. ⨆⨅

Lagrange's Method We now show how to select a coordinate system (ξ1, . . . , ξn) in which the quadratic form on the standard K-vector space Kⁿ

$$q(x_1, \ldots, x_n) = \sum_{i,j=1}^{n} a_{ij} x_i x_j, \qquad a_{ij} = a_{ji},$$

is represented as a sum of squares

$$\lambda_1 \xi_1^2 + \lambda_2 \xi_2^2 + \cdots + \lambda_n \xi_n^2. \tag{2.17}$$

Case 1 There exists a diagonal entry aii ≠ 0. By renumbering the variables we can assume that a11 ≠ 0. Then we have

$$q(x_1, \ldots, x_n) = a_{11} x_1^2 + x_1 (2a_{12} x_2 + \cdots + 2a_{1n} x_n) + q_1(x_2, \ldots, x_n) = a_{11} \Big( x_1 + \frac{a_{12}}{a_{11}} x_2 + \cdots + \frac{a_{1n}}{a_{11}} x_n \Big)^2 + q_2(x_2, \ldots, x_n), \tag{2.18}$$

where q1 and q2 are quadratic forms in ≤ n − 1 variables. By the change of variables

$$y_1 = x_1 + \frac{1}{a_{11}} (a_{12} x_2 + \cdots + a_{1n} x_n), \qquad y_i = x_i, \quad i = 2, \ldots, n,$$

we get

$$a_{11} y_1^2 + q_2(y_2, \ldots, y_n).$$

The next step is to apply this method to q2.

The next step is to apply this method to q2 . Case 2 All diagonal entries aii are null. By a suitable change of the numbering of the variables we can assume that a12 /= 0. Then q(x1 , . . . , xn ) = 2a12 x1 x2 + x1 𝓁1 (x3 , . . . , xn ) + x1 𝓁2 (x3 , . . . , xn )

.

+ q1 (x3 , . . . , xn ),

(2.19)

where 𝓁1 and 𝓁2 are linear forms and q1 is a quadratic form. Consider the coordinate transformation defined by x1 = y1 + y2 ,

.

x2 = y1 − y2 , ,

xi = yi ,

i = 3, . . . , n.


Under this transformation we have

$$2a_{12}(y_1^2 - y_2^2) + q_2(y_1, \ldots, y_n),$$

where q2 is a quadratic form not containing y1² and y2². Thus we are under the assumption of Case 1. After a finite number of steps of the type just described our quadratic form will finally take the form (2.17).

Example 2.28 We shall now give two examples illustrating Lagrange's method.

(1) Consider the quadratic form 2x1² + 3x1x2 + 4x1x3 + x2² + x3². Then

$$2x_1^2 + 3x_1x_2 + 4x_1x_3 + x_2^2 + x_3^2 = 2x_1^2 + x_1(3x_2 + 4x_3) + x_2^2 + x_3^2 = 2\Big(x_1 + \frac{3}{4}x_2 + x_3\Big)^2 - \frac{9}{8}x_2^2 - 3x_2x_3 - 2x_3^2 + x_2^2 + x_3^2 = 2\Big(x_1 + \frac{3}{4}x_2 + x_3\Big)^2 - \frac{1}{8}x_2^2 - 3x_2x_3 - x_3^2.$$

Under the change of variables

$$y_1 = x_1 + \frac{3}{4}x_2 + x_3, \qquad y_2 = x_2, \qquad y_3 = x_3,$$

we obtain

$$2y_1^2 - \frac{1}{8}y_2^2 - 3y_2y_3 - y_3^2.$$

Iterating the above procedure we have

$$\frac{1}{8}y_2^2 + 3y_2y_3 + y_3^2 = \frac{1}{8}(y_2 + 12y_3)^2 - 17y_3^2.$$

If we put

$$z_1 = y_1, \qquad z_2 = y_2 + 12y_3, \qquad z_3 = y_3,$$

then we get

$$2z_1^2 - \frac{1}{8}z_2^2 + 17z_3^2.$$


Thus the change of variables diagonalizing our quadratic form is

$$\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} 1 & -\frac{3}{4} & 8 \\ 0 & 1 & -12 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} z_1 \\ z_2 \\ z_3 \end{pmatrix}.$$

(2) Consider the quadratic form 3x1x2 + 4x1x3 − x2x3. Under the change of variables

$$x_1 = y_1 + y_2, \qquad x_2 = y_1 - y_2, \qquad x_3 = y_3,$$

we obtain 3(y1² − y2²) + 3y1y3 + 5y2y3. As in Case 1 we have

$$3y_1^2 - 3y_2^2 + 3y_1y_3 + 5y_2y_3 = 3\Big(y_1 + \frac{1}{2}y_3\Big)^2 - 3\Big(y_2 - \frac{5}{6}y_3\Big)^2 + \frac{4}{3}y_3^2.$$

By the following change of variables

$$z_1 = y_1 + \frac{1}{2}y_3, \qquad z_2 = y_2 - \frac{5}{6}y_3, \qquad z_3 = y_3,$$

we get

$$3z_1^2 - 3z_2^2 + \frac{4}{3}z_3^2.$$

Thus the change of variables diagonalizing our quadratic form is

$$\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} 1 & 1 & \frac{1}{3} \\ 1 & -1 & -\frac{4}{3} \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} z_1 \\ z_2 \\ z_3 \end{pmatrix}.$$
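Both computations of Example 2.28 can be verified at once by checking that MᵗAM is diagonal, where A is the symmetric matrix of the form (off-diagonal entries are half the mixed coefficients) and M the change of variables found above. This is an illustrative check of ours, not part of the text.

```python
import numpy as np

# Example 2.28 (1): q = 2*x1^2 + 3*x1*x2 + 4*x1*x3 + x2^2 + x3^2
A1 = np.array([[2., 1.5, 2.],
               [1.5, 1., 0.],
               [2., 0., 1.]])
M1 = np.array([[1., -0.75, 8.],
               [0., 1., -12.],
               [0., 0., 1.]])

# Example 2.28 (2): q = 3*x1*x2 + 4*x1*x3 - x2*x3
A2 = np.array([[0., 1.5, 2.],
               [1.5, 0., -0.5],
               [2., -0.5, 0.]])
M2 = np.array([[1., 1., 1/3],
               [1., -1., -4/3],
               [0., 0., 1.]])

D1 = M1.T @ A1 @ M1   # expected diag(2, -1/8, 17)
D2 = M2.T @ A2 @ M2   # expected diag(3, -3, 4/3)
```

Note that the diagonal entries obtained by Lagrange's method are not the eigenvalues of A: congruence, not similarity, is at work here (compare Remark 2.16 (2)).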

2.2 Cross-Product

Definition 2.29 Let V be an R-vector space of dimension n ≥ 1. We shall say that two ordered bases S and T are identically oriented if det(Λ^{S,T}) > 0. It is clear that the set of all ordered bases of V is divided into precisely two classes, each consisting of identically oriented bases, while bases from different classes are oriented differently. The choice of one of these classes is called an orientation of the space V.


If V = Rⁿ is the standard vector space over R of dimension n, we call a basis S right-handed or positive if it is identically oriented with the canonical basis; left-handed or negative in the opposite case.

Definition 2.30 Let V and W be two real vector spaces and f : Vⁿ → W an n-linear map. The group Sn of permutations of {1, . . . , n} acts on the set ML(Vⁿ, W) of n-linear maps in such a way that

$$S_n \times ML(V^n, W) \to ML(V^n, W), \qquad (\sigma \cdot f)(v_1, \ldots, v_n) := f(v_{\sigma(1)}, \ldots, v_{\sigma(n)}).$$

We shall say that f is alternating if for every transposition τ ∈ Sn we have τ · f = −f. We recall that a transposition is a permutation that interchanges two elements and leaves all the others fixed. Since every permutation is a product of transpositions, for an alternating f we have

$$f(v_{\sigma(1)}, \ldots, v_{\sigma(n)}) = \mathrm{sgn}(\sigma)\, f(v_1, \ldots, v_n), \qquad \forall\, \sigma \in S_n,$$

where

$$\mathrm{sgn}(\sigma) = \begin{cases} 1 & \text{if } \sigma \text{ is a product of an even number of transpositions,} \\ -1 & \text{if } \sigma \text{ is a product of an odd number of transpositions.} \end{cases}$$

If W = R we call an alternating n-linear map an alternating n-linear form.

Proposition 2.31 An n-linear map f : Vⁿ → W is alternating if and only if

$$v_i = v_j \ \text{for some}\ i \neq j \implies f(v_1, \ldots, v_n) = 0_W.$$

Proof If f is alternating then, swapping the i-th and j-th entries of a tuple with vi = vj,

$$f(v_1, \ldots, v_i, \ldots, v_j, \ldots, v_n) = -f(v_1, \ldots, v_j, \ldots, v_i, \ldots, v_n) = -f(v_1, \ldots, v_i, \ldots, v_j, \ldots, v_n),$$

hence f(v1, . . . , vi, . . . , vj, . . . , vn) = 0_W.
Conversely, assume the implication: if vi = vj for some i ≠ j then f(v1, . . . , vn) = 0_W. If τ is any transposition, τ(i) = j and τ(j) = i, it follows that

$$0_W = f(v_1, \ldots, v_i + v_j, \ldots, v_i + v_j, \ldots, v_n) = f(v_1, \ldots, v_i, \ldots, v_i, \ldots, v_n) + f(v_1, \ldots, v_i, \ldots, v_j, \ldots, v_n) + f(v_1, \ldots, v_j, \ldots, v_i, \ldots, v_n) + f(v_1, \ldots, v_j, \ldots, v_j, \ldots, v_n) = f(v_1, \ldots, v_i, \ldots, v_j, \ldots, v_n) + f(v_1, \ldots, v_j, \ldots, v_i, \ldots, v_n),$$

i.e. τ · f + f = 0. ⨆⨅

Proposition 2.32 Let f : Vⁿ → W be an alternating n-linear map. If v1, . . . , vn are linearly dependent then f(v1, . . . , vn) = 0_W.

Proof Left to the reader as an exercise. ⨆⨅

Examples 2.33
(1) A fundamental example of an alternating n-linear map is the determinant (see [22], Chap. XIII, § 4). Let Rⁿ be the space of column vectors. Define

$$f : \underbrace{\mathbb{R}^n \times \cdots \times \mathbb{R}^n}_{n \text{ times}} \to \mathbb{R}, \qquad f(v_1, \ldots, v_n) := \det(A),$$

where A is the n × n matrix whose i-th column is vi, i = 1, . . . , n.
(2) The map f : Mn(R) × Mn(R) → Mn(R) defined by

$$f(A, B) = AB - BA, \qquad \forall\, A, B \in M_n(\mathbb{R}),$$

is an alternating bilinear map.

Definition 2.34 Let (V, 〈·, ·〉) be a euclidean vector space of dimension n and B = {e1, . . . , en} an orthonormal basis of V. Let v = (v1, . . . , vn) ∈ Vⁿ. By Theorem 1.146 we have

$$v_j = \sum_{i=1}^{n} \langle e_i, v_j \rangle\, e_i, \qquad j = 1, \ldots, n.$$

Let v : V → V be the linear operator such that v(ej) = vj, j = 1, . . . , n. Then

$$M_v^B = \left( \langle e_i, v_j \rangle \right)_{i,j=1,\ldots,n}.$$

The map

$$\lambda_B : V^n \to \mathbb{R}, \qquad \lambda_B(v_1, \ldots, v_n) := \det(M_v^B) = \begin{vmatrix} \langle e_1, v_1 \rangle & \langle e_1, v_2 \rangle & \cdots & \langle e_1, v_n \rangle \\ \langle e_2, v_1 \rangle & \langle e_2, v_2 \rangle & \cdots & \langle e_2, v_n \rangle \\ \vdots & \vdots & \ddots & \vdots \\ \langle e_n, v_1 \rangle & \langle e_n, v_2 \rangle & \cdots & \langle e_n, v_n \rangle \end{vmatrix}$$

is called the volume form associated with the orthonormal basis B. In particular, if B′ = (v1, . . . , vn) is an ordered basis of V, then λ_B(v1, . . . , vn) = det(Λ^{B,B′}).
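The two maps of Examples 2.33 — the determinant (which is exactly the volume form λ_B when B is the canonical basis of Rⁿ) and the commutator — can be checked numerically. This sketch is an illustration of ours, with hypothetical helper names.

```python
import numpy as np

rng = np.random.default_rng(1)
v1, v2, v3 = rng.standard_normal((3, 3))

def det_form(*cols):
    """Examples 2.33 (1): determinant of the matrix with the given columns."""
    return np.linalg.det(np.column_stack(cols))

def comm(A, B):
    """Examples 2.33 (2): the commutator AB - BA."""
    return A @ B - B @ A
```

Swapping two arguments of `det_form` flips the sign, and repeating an argument gives 0, in accordance with Proposition 2.31; `comm` is bilinear with comm(A, A) = 0.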


Proposition 2.35 Assume the notation of Definition 2.34. The volume form λ_B satisfies the following properties:

(a) If B′ = {f1, . . . , fn} is another orthonormal basis of V then

$$\lambda_B(f_1, \ldots, f_n) = \begin{cases} 1 & \text{if } B' \text{ and } B \text{ are identically oriented,} \\ -1 & \text{otherwise.} \end{cases}$$

(b) λ_B does not depend on B but only on the orientation class of B.
(c) λ_B is an alternating n-linear form.
(d) λ_B(v1, . . . , vn) = 0 if and only if v1, . . . , vn are linearly dependent.

Proof
(a) The matrix Λ^{B,B′} = M_{id_V}^{B′,B} is orthogonal, so that λ_B(f1, . . . , fn) = det(Λ^{B,B′}) = ±1.
(b) Let B′ = {f1, . . . , fn} be another orthonormal basis of V and v′ : V → V the linear operator such that v′(fj) = vj, j = 1, . . . , n. As above,

$$v_j = \sum_{i=1}^{n} \langle f_i, v_j \rangle\, f_i, \qquad j = 1, \ldots, n,$$

and

$$M_{v'}^{B'} = \left( \langle f_i, v_j \rangle \right)_{i,j=1,\ldots,n}.$$

From (1.28) we have

$$M_v^B = \Lambda^{B,B'}\, M_{v'}^{B'},$$

so that

$$\lambda_B(v_1, \ldots, v_n) = \det(\Lambda^{B,B'})\, \lambda_{B'}(v_1, \ldots, v_n),$$

which implies our thesis by (a).
(c) Indeed, λ_B is a determinant.


(d) It follows from the chain of equivalences

$$\sum_{j=1}^{n} \alpha_j v_j = 0_V \iff \Big\langle e_i, \sum_{j=1}^{n} \alpha_j v_j \Big\rangle = 0,\ i = 1, \ldots, n \iff \sum_{j=1}^{n} \alpha_j \langle e_i, v_j \rangle = 0,\ i = 1, \ldots, n \iff \sum_{j=1}^{n} \alpha_j \left( \langle e_1, v_j \rangle, \langle e_2, v_j \rangle, \ldots, \langle e_n, v_j \rangle \right) = 0,$$

together with a basic property of the determinant. ⨆⨅

Definition 2.36 Let v1, . . . , vp be vectors of a euclidean vector space (V, 〈·, ·〉). The Gram determinant of v1, . . . , vp is defined by

$$\mathrm{Gram}(v_1, \ldots, v_p) := \det\left( (\langle v_i, v_j \rangle)_{i,j} \right) = \begin{vmatrix} \|v_1\|^2 & \langle v_2, v_1 \rangle & \cdots & \langle v_p, v_1 \rangle \\ \langle v_1, v_2 \rangle & \|v_2\|^2 & \cdots & \langle v_p, v_2 \rangle \\ \vdots & \vdots & \ddots & \vdots \\ \langle v_1, v_p \rangle & \langle v_2, v_p \rangle & \cdots & \|v_p\|^2 \end{vmatrix}. \tag{2.20}$$

From the basic properties of the inner product and of the determinant we deduce the identity

$$\mathrm{Gram}(v_1, \ldots, v_p) = \mathrm{Gram}\Big(v_1, \ldots, v_i + \sum_{j \neq i} \alpha_j v_j, \ldots, v_p\Big), \qquad \alpha_j \in \mathbb{R}, \tag{2.21}$$

so that if v1, . . . , vp are linearly dependent, Gram(v1, . . . , vp) = 0. If v1, . . . , vn is an orthonormal basis of V, then

$$\mathrm{Gram}(v_1, \ldots, v_n) = 1.$$

Let B = {e1, . . . , en} be an orthonormal basis of V and let v1, . . . , vn ∈ V. By Parseval's identity (Theorem 1.146),

$$\langle v_i, v_k \rangle = \sum_{j=1}^{n} \langle e_j, v_i \rangle \langle e_j, v_k \rangle, \qquad i, k = 1, \ldots, n,$$


we have

$$\big( \langle v_i, v_k \rangle \big)_{i,k=1,\ldots,n} = \big( \langle e_i, v_j \rangle \big)^t_{i,j}\, \big( \langle e_i, v_j \rangle \big)_{i,j} = (M_v^B)^t\, M_v^B.$$

Hence

$$\mathrm{Gram}(v_1, \ldots, v_n) = (\lambda_B(v_1, \ldots, v_n))^2. \tag{2.22}$$
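Identity (2.22) is easy to test numerically: for vectors given by their coordinates with respect to an orthonormal basis, the Gram determinant equals the squared determinant of the coordinate matrix (an illustrative check of ours):

```python
import numpy as np

V = np.array([[1., 2., 3.],
              [4., 5., 6.],
              [7., 8., 10.]])     # rows: v1, v2, v3 in coordinates w.r.t. an orthonormal basis
gram = np.linalg.det(V @ V.T)     # det of the matrix of inner products <vi, vj>
vol2 = np.linalg.det(V) ** 2      # (lambda_B(v1, v2, v3))^2
```

Here det(V) = −3, so both sides equal 9; replacing a row by a linear combination of the others would make both sides vanish, in accordance with (2.21).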

Definition 2.37 Let (V, 〈·, ·〉) be a euclidean vector space of dimension n ≥ 3 and B an orthonormal basis of V. Fixing n − 1 vectors v1, . . . , vn−1 ∈ V, there exists one and only one w ∈ V such that

$$\langle w, u \rangle = \lambda_B(v_1, \ldots, v_{n-1}, u), \qquad \forall\, u \in V. \tag{2.23}$$

Indeed, the map V → V*, w ↦ w*, where w*(u) = 〈w, u〉 for every u ∈ V, is an isomorphism (see Proposition 2.7). We call the vector w the cross-product of v1, . . . , vn−1 and we denote it by

$$w := v_1 \times v_2 \times \cdots \times v_{n-1}.$$

The cross-product depends only on the orientation class of B. The following proposition collects the basic properties of the cross-product.

Proposition 2.38 Let (V, 〈·, ·〉) be a euclidean vector space of dimension n ≥ 3 and B an orthonormal basis of V.

(i) The map

$$V^{n-1} \to V, \qquad (v_1, \ldots, v_{n-1}) \mapsto v_1 \times v_2 \times \cdots \times v_{n-1},$$

is alternating multilinear.
(ii) v1 × v2 × · · · × vn−1 = 0 if and only if v1, . . . , vn−1 are linearly dependent.
(iii) v1 × v2 × · · · × vn−1 ∈ W^⊥, where W is the subspace generated by v1, . . . , vn−1.
(iv) ‖v1 × v2 × · · · × vn−1‖² = Gram(v1, . . . , vn−1).
(v) If v1, . . . , vn−1 are linearly independent, then B′ = (v1, . . . , vn−1, v1 × v2 × · · · × vn−1) is a basis of V identically oriented with B.


Proof Throughout we write w := v1 × v2 × ⋯ × vn−1.

(i) It comes from the fact that λB is alternating multilinear.

(ii) The implication ⇒ is immediate by point (d) of Proposition 2.35. Suppose now that v1, …, vn−1 are linearly independent. Add a vector vn so that {v1, …, vn−1, vn} is a basis of V. Then

〈w, vn〉 = λB(v1, …, vn−1, vn) ≠ 0

by point (d) of Proposition 2.35, hence w ≠ 0.

(iii) Again by point (d) of Proposition 2.35 we have

〈w, vi〉 = λB(v1, …, vn−1, vi) = 0,   i = 1, …, n − 1,      (2.24)

which implies point (iii).

(iv) By (iii) and Definition 2.36 we have

Gram(v1, …, vn−1, w) =
| ‖v1‖²      〈v2, v1〉   …   〈w, v1〉 |     | ‖v1‖²      〈v2, v1〉   …    0    |
| 〈v1, v2〉   ‖v2‖²      …   〈w, v2〉 |     | 〈v1, v2〉   ‖v2‖²      …    0    |
|    ⋮           ⋮       ⋱      ⋮    |  =  |    ⋮           ⋮       ⋱    ⋮    |  = Gram(v1, …, vn−1) ‖w‖².      (2.25)
| 〈v1, w〉    〈v2, w〉    …   ‖w‖²   |     |    0           0       …   ‖w‖²  |

From (2.22) and (2.25) it follows that

‖w‖⁴ = 〈w, w〉² = (λB(v1, …, vn−1, w))² = Gram(v1, …, vn−1, w) = Gram(v1, …, vn−1) ‖w‖²,


which implies our thesis if ‖v1 × v2 × ⋯ × vn−1‖ ≠ 0. If ‖v1 × v2 × ⋯ × vn−1‖ = 0, then v1, …, vn−1 are linearly dependent by (ii), so that Gram(v1, …, vn−1) = 0 and (iv) holds.

(v) By (ii) and (iii), B' is a basis. Furthermore

λB(v1, …, vn−1, v1 × v2 × ⋯ × vn−1) = 〈v1 × v2 × ⋯ × vn−1, v1 × v2 × ⋯ × vn−1〉 > 0,

so that B and B' are identically oriented. ⊓⊔

Now we find the components of v1 × v2 × ⋯ × vn−1 with respect to a frame of V.

Proposition 2.39 Let (V, 〈·, ·〉) be a euclidean vector space of dimension n and B = {e1, …, en} an on-basis of V. Let v1, …, vn−1 ∈ V:

vj = Σ_{i=1}^{n} 〈ei, vj〉 ei,   j = 1, …, n − 1.

Then

v1 × v2 × ⋯ × vn−1 = Σ_{k=1}^{n} (−1)^{n+k} det(Ak) ek,

where A = (〈ei, vj〉)_{i=1,…,n; j=1,…,n−1} and Ak is the (n − 1) × (n − 1)-matrix obtained from A by deleting the k-th row of A.

Proof Let

v1 × v2 × ⋯ × vn−1 = Σ_{k=1}^{n} 〈ek, v1 × v2 × ⋯ × vn−1〉 ek.

Then we have

                                                          | 〈e1, v1〉  …  〈e1, vn−1〉  0 |
                                                          | 〈e2, v1〉  …  〈e2, vn−1〉  0 |
〈ek, v1 × v2 × ⋯ × vn−1〉 = λB(v1, …, vn−1, ek) =          |    ⋮      ⋱      ⋮        ⋮ |
                                                          | 〈ek, v1〉  …  〈ek, vn−1〉  1 |
                                                          |    ⋮      ⋱      ⋮        ⋮ |
                                                          | 〈en, v1〉  …  〈en, vn−1〉  0 |

Our thesis follows by Laplace expansion of the determinant along the last column. ⊓⊔
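The cofactor formula of Proposition 2.39 is easy to exercise in the smallest case n = 3. The sketch below (assumed concrete setting, not part of the text) builds v1 × v2 from the minors of the 3 × 2 coordinate matrix and checks properties (iii) of Proposition 2.38 along the way.

```python
# Proposition 2.39 for n = 3: the k-th component of v1 x v2 is
# (-1)^(n+k) * det(A_k), where A_k drops the k-th row of A = (<e_i, v_j>).

def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def cross_by_cofactors(v1, v2):
    n = 3
    A = [[v1[i], v2[i]] for i in range(n)]       # columns are v1 and v2
    comps = []
    for k in range(1, n + 1):                    # 1-based row index, as in the text
        Ak = [row for i, row in enumerate(A, start=1) if i != k]
        comps.append((-1) ** (n + k) * det2(Ak))
    return tuple(comps)

v1, v2 = (1, 2, 3), (4, 5, 6)
w = cross_by_cofactors(v1, v2)

# agrees with the familiar component formula for the cross-product
assert w == (2*6 - 3*5, 3*4 - 1*6, 1*5 - 2*4)

# property (iii) of Proposition 2.38: w is orthogonal to v1 and v2
assert sum(a * b for a, b in zip(w, v1)) == 0
assert sum(a * b for a, b in zip(w, v2)) == 0
```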


The cross-product is not, in general, invariant under linear automorphisms; indeed we have the following result.

Proposition 2.40 Let (V, 〈·, ·〉) be a euclidean vector space of dimension n and ϕ : (V, 〈·, ·〉) → (V, 〈·, ·〉) a linear automorphism. Then, for every (n − 1)-tuple (v1, …, vn−1) ∈ V^{n−1} we have

ϕ(v1) × ⋯ × ϕ(vn−1) = det(ϕ) (ϕ*)^{−1}(v1 × v2 × ⋯ × vn−1),      (2.26)

where ϕ* is the adjoint of ϕ.

Proof We fix an on-basis B = (e1, …, en) of V. We recall that det(ϕ) = det(MϕB) and MϕB* = (MϕB)^t. Let u be any vector of V. From Definition 1.176 we have

〈ϕ(v1) × ⋯ × ϕ(vn−1), ϕ(u)〉 = 〈ϕ*(ϕ(v1) × ⋯ × ϕ(vn−1)), u〉.      (2.27)

On the other hand, we have

〈ϕ(v1) × ⋯ × ϕ(vn−1), ϕ(u)〉 = λB(ϕ(v1), …, ϕ(vn−1), ϕ(u)).

Now the columns of the matrix

M = ( 〈e1, ϕ(v1)〉  〈e1, ϕ(v2)〉  …  〈e1, ϕ(vn−1)〉  〈e1, ϕ(u)〉 )
    ( 〈e2, ϕ(v1)〉  〈e2, ϕ(v2)〉  …  〈e2, ϕ(vn−1)〉  〈e2, ϕ(u)〉 )
    (      ⋮             ⋮       ⋱        ⋮             ⋮      )
    ( 〈en, ϕ(v1)〉  〈en, ϕ(v2)〉  …  〈en, ϕ(vn−1)〉  〈en, ϕ(u)〉 )

are the coordinates of ϕ(v1), …, ϕ(vn−1), ϕ(u) with respect to B, so that

M = MϕB N,

where N is the matrix whose columns are the coordinates of v1, …, vn−1, u with respect to B. Hence

λB(ϕ(v1), …, ϕ(vn−1), ϕ(u)) = det(M) = det(MϕB) det(N) = det(MϕB) λB(v1, …, vn−1, u),

i.e.

〈ϕ(v1) × ⋯ × ϕ(vn−1), ϕ(u)〉 = det(MϕB) 〈v1 × v2 × ⋯ × vn−1, u〉.

From (2.27) we get, for all u ∈ V,

〈ϕ*(ϕ(v1) × ⋯ × ϕ(vn−1)), u〉 = det(MϕB) 〈v1 × v2 × ⋯ × vn−1, u〉,


which implies

ϕ*(ϕ(v1) × ⋯ × ϕ(vn−1)) = det(MϕB) v1 × v2 × ⋯ × vn−1,

i.e. (2.26), since ϕ* is an automorphism. ⊓⊔
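Formula (2.26) simplifies nicely for orthogonal maps, where the matrix of ϕ* is the inverse of the matrix of ϕ, so (ϕ*)^{−1} = ϕ. The sketch below (an assumed concrete setting in R³, not from the book) checks the resulting identity ϕ(v) × ϕ(w) = det(ϕ) ϕ(v × w) for a rotation and for a reflection.

```python
# For orthogonal phi, (2.26) reads: phi(v) x phi(w) = det(phi) * phi(v x w).
from math import cos, sin

def cross(v, w):
    return (v[1]*w[2] - v[2]*w[1],
            v[2]*w[0] - v[0]*w[2],
            v[0]*w[1] - v[1]*w[0])

def apply(M, v):  # M acts on column vectors
    return tuple(sum(M[i][j] * v[j] for j in range(3)) for i in range(3))

def det3(M):
    a, b, c = M[0]; d, e, f = M[1]; g, h, i = M[2]
    return a * (e*i - f*h) - b * (d*i - f*g) + c * (d*h - e*g)

t = 0.7
R = [[cos(t), -sin(t), 0.0], [sin(t), cos(t), 0.0], [0.0, 0.0, 1.0]]  # rotation, det = +1
S = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, -1.0]]              # reflection, det = -1
v, w = (1.0, 2.0, 3.0), (-2.0, 0.5, 1.0)

for M in (R, S):
    lhs = cross(apply(M, v), apply(M, w))
    rhs = tuple(det3(M) * x for x in apply(M, cross(v, w)))
    assert all(abs(a - b) < 1e-12 for a, b in zip(lhs, rhs))
```

The reflection case shows the determinant factor at work: an orientation-reversing isometry flips the sign of the cross-product, which is why v × w behaves as a "pseudovector".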

2.3 Exercises

Exercise 2.1 Prove that the map

f : Mm,n(R) × Mm,n(R) → R,   f(A, B) = tr(A^t B),   ∀ A, B ∈ Mm,n(R),

is symmetric bilinear.

Exercise 2.2 Prove that the maps

f : Mn(R) × Mn(R) → R,   f(A, B) = tr(AB),   ∀ A, B ∈ Mn(R),
g : Mn(R) × Mn(R) → R,   g(A, B) = det(AB),   ∀ A, B ∈ Mn(R),

are symmetric bilinear.

(a) Determine Sn(R)⊥, where Sn(R) is the subspace of symmetric matrices.
(b) Find the signatures of f and g.

Exercise 2.3 Let V = {f ∈ C¹([a, b]; R) : f(a) = f(b) = 0}. Show that

b(f, g) = ∫_a^b f g' dt,   ∀ f, g ∈ V,

is a skew-symmetric bilinear form on V.

Exercise 2.4 Let b ∈ Bil(V) be a symmetric bilinear form on an n-dimensional R-vector space V. If b(v, v) ≠ 0 for every v ∈ V \ {0V}, show that b is positive definite or negative definite.

Exercise 2.5 Let b ∈ Bil(V) be a bilinear form on an n-dimensional R-vector space V. Prove or disprove the following statement: there exist two linear functionals f, g ∈ V* such that b(v, w) = f(v)g(w), ∀ v, w ∈ V.

Exercise 2.6 Let V be an n-dimensional R-vector space and v1, …, vk vectors of V. Show that

b(f, g) = Σ_{i=1}^{k} f(vi) g(vi),   ∀ f, g ∈ V*,


is a symmetric bilinear form on the dual space V*, and determine the positivity, negativity and nullity indexes of b.

Exercise 2.7 Determine the signature of the quadratic form on R⁴

(2 + c)x² + xy + xz + xt + (2 + c)y² + yz + yt + (2 + c)z² + zt + (2 + c)t²

when c varies in R. Find the endomorphism which gives the canonical form for c = 0.

Exercise 2.8 Diagonalize the following quadratic forms on Q:

x² + 4xy + 4y² + 2xz + z² + 2yz,      x² + 2xy + 4xz + 3y² + yz + 7z²,
2x² + 3xy + 6y²,      8xy + 4y².

Exercise 2.9 Diagonalize the quadratic form 2x² + xy + 3z² on Z5.

Exercise 2.10 Determine the signature of the quadratic form on R

q(x1, x2, x3) = x2² − 3x3² + 2x1x2 + 6x1x3 + 2x2x3.

Diagonalize q(x1, x2, x3) on R and on Q.

Exercise 2.11 (Cross-Product in (R³, 〈·, ·〉s)) Let v = (v1, v2, v3), w = (w1, w2, w3) ∈ R³. The cross-product v × w ∈ R³ is defined by the formal determinant (according to Proposition 2.39)

        | e1 e2 e3 |
v × w = | v1 v2 v3 | = (v2w3 − v3w2) e1 − (v1w3 − v3w1) e2 + (v1w2 − v2w1) e3
        | w1 w2 w3 |
      = (v2w3 − v3w2, v3w1 − v1w3, v1w2 − v2w1),      (2.28)

with (e1, e2, e3) the canonical basis (or any on-basis) of R³. One can verify the following properties from (2.28):

(i) e1 × e2 = e3, e2 × e3 = e1, e3 × e1 = e2;
(ii) v × w = −w × v, ∀ v, w ∈ R³;
(iii) v × (w1 + w2) = v × w1 + v × w2, ∀ v, w1, w2 ∈ R³;
(iv) (av) × w = v × (aw) = a(v × w), ∀ v, w ∈ R³, ∀ a ∈ R;
(v) u × (v × w) = 〈u, w〉v − 〈u, v〉w, ∀ u, v, w ∈ R³.

Moreover,

            | u1 u2 u3 |
〈u, v × w〉 = | v1 v2 v3 |,   ∀ u = (u1, u2, u3) ∈ R³.
            | w1 w2 w3 |


In particular, 〈v, v × w〉 = 〈w, v × w〉 = 0, i.e. v × w is orthogonal to both v and w.

Exercise 2.12 Show that

v × w = (xy' − yx') u1 × u2 + (xz' − zx') u1 × u3 + (yz' − zy') u2 × u3,

if v, w ∈ R³ have coordinates (x, y, z) and (x', y', z') respectively, with respect to an arbitrary basis (u1, u2, u3).

Exercise 2.13 Verify Lagrange's identity

‖v × w‖² = ‖v‖² · ‖w‖² − 〈v, w〉²,   ∀ v, w ∈ R³.

Exercise 2.14 Let θ = ∠(v, w), where v and w are two non-zero vectors of R³ (see Definition 1.149). Prove the identity

‖v × w‖ = ‖v‖ · ‖w‖ sin θ.

In particular, the area A of the parallelogram with v and w as adjacent sides is given by the formula A = ‖v × w‖.

Exercise 2.15 Let the vectors u, v and w be the edges from the vertex O of a parallelepiped P and of a tetrahedron T. Prove that the volumes VP of P and VT of T are given by the formulas VP = |〈u, v × w〉| and VT = (1/6)|〈u, v × w〉|.

Exercise 2.16 Prove Jacobi's identity

(u × v) × w + (v × w) × u + (w × u) × v = 0_{R³},   ∀ u, v, w ∈ R³.
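Exercises 2.13 and 2.16 lend themselves to a quick computational sanity check before attempting the proofs. The sketch below (illustration only, with assumed sample vectors) verifies Lagrange's identity and Jacobi's identity exactly over the integers.

```python
# Numerical checks of Lagrange's identity (Exercise 2.13) and Jacobi's
# identity (Exercise 2.16) for the cross-product in R^3.

def cross(v, w):
    return (v[1]*w[2] - v[2]*w[1],
            v[2]*w[0] - v[0]*w[2],
            v[0]*w[1] - v[1]*w[0])

def dot(v, w):
    return sum(a * b for a, b in zip(v, w))

def add(*vs):
    return tuple(sum(c) for c in zip(*vs))

u, v, w = (1, -2, 3), (4, 0, -1), (2, 5, 1)

# Lagrange: ||v x w||^2 = ||v||^2 * ||w||^2 - <v, w>^2
assert dot(cross(v, w), cross(v, w)) == dot(v, v) * dot(w, w) - dot(v, w) ** 2

# Jacobi: (u x v) x w + (v x w) x u + (w x u) x v = 0
jacobi = add(cross(cross(u, v), w), cross(cross(v, w), u), cross(cross(w, u), v))
assert jacobi == (0, 0, 0)
```

A check on sample data is of course no substitute for a proof; both exercises follow by direct computation from (2.28), or from property (v) in the case of Jacobi's identity.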

Chapter 3

Affine Spaces

3.1 Affine Spaces

Definition 3.1 An affine space over a field K is a triple (A, V, ϕ), consisting of a non-empty set A whose elements are called points, a K-vector space V of finite dimension called director space (or orienting space), and a map

ϕ : A × A → V,   (A, B) → →AB := ϕ(A, B),   ∀ A, B ∈ A,

satisfying the following axioms:

(AFF1) For any three points A, B, C ∈ A, →AB + →BC = →AC.
(AFF2) There exists O ∈ A such that the map

ϕO : A → V,   ϕO(A) := →OA,   ∀ A ∈ A,      (3.1)

is bijective.

The integer dimK A := dimK(V) is the dimension of (A, V, ϕ).

Proposition 3.2 Let (A, V, ϕ) be an affine space.

(a) For every A, B ∈ A, →AA = 0V and →AB = −→BA.
(b) For every point O' ∈ A, the map ϕO' : A → V, defined by ϕO'(A) = →O'A, ∀ A ∈ A, is bijective.

Proof (a) follows immediately from axiom (AFF1). From axiom (AFF1) we have →O'A = →O'O + →OA, so that ϕO' = ψ ∘ ϕO is bijective by axiom (AFF2), since ψ : V → V, defined by ψ(v) = v + →O'O, is bijective. ⊓⊔

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2024
L. Bădescu, E. Carletti, Lectures on Geometry, La Matematica per il 3+2 158, https://doi.org/10.1007/978-3-031-51414-2_3


Fig. 3.1 Parallelogram rule

By axiom (AFF2), A equipped with the operations

A + B = C,   where C is defined by →OA + →OB = →OC,   ∀ A, B ∈ A,
λ A = B,    where B is defined by λ→OA = →OB,   ∀ λ ∈ K, ∀ A ∈ A,

is a K-vector space (whose null vector is the point O) such that the map ϕO : A → V is an isomorphism of K-vector spaces. This vector space will be denoted by →OA. Thanks to (b) of Proposition 3.2 the point O can be arbitrarily chosen.

Lemma 3.3 (Parallelogram Rule) Let (A, V, ϕ) be an affine space, and let A, B, C, D ∈ A be four points such that →AB = →CD. Then →AC = →BD.

Proof By axiom (AFF1) we have →AC + →CD = →AD = →AB + →BD, so that our assumption →AB = →CD implies →AC = →BD (see Fig. 3.1). ⊓⊔

Example 3.4 A basic example of affine space is the affine space associated with a vector space. Let V be a K-vector space of finite dimension. Let A be the set V, and let ϕ : A × A → V be defined by ϕ(u, v) = v − u, where u and v on the left-hand side are two points of the set V, while v − u on the right-hand side is the difference between the vectors v and u. It is easy to see that A(V) = (A = V, V, ϕ) is an affine space, which is called the affine space associated with the vector space V, and that dimK A(V) = dimK V. If V = K^n, with n ≥ 1, then the affine space A(V) will be denoted by An(K) and called the standard affine space of dimension n on K.

Definition 3.5 Let A1, …, An be any points in an affine space (A, V, ϕ). For any a1, …, an ∈ K with a1 + · · · + an = 1 we define the weighted barycentre of the n-tuple (A1, …, An) with weights (or masses) (a1, …, an) as the unique point of A

A := a1 A1 + · · · + an An

such that

→OA = a1 →OA1 + · · · + an →OAn

for some point O of A. The existence of O and the uniqueness of A follow from axiom (AFF2).
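Definition 3.5 becomes very concrete in the standard affine space. The sketch below (assumed example data in A²(R), not from the book) computes a weighted barycentre as O + Σ ai →OAi and checks that the result does not depend on the choice of the auxiliary point O, provided the weights sum to 1.

```python
# Weighted barycentre in A^2(R): O + a1*(A1 - O) + ... + an*(An - O).
# The result is independent of O whenever a1 + ... + an = 1.

def barycentre(points, weights, O):
    assert abs(sum(weights) - 1.0) < 1e-12   # weights must sum to 1
    return tuple(O[i] + sum(a * (P[i] - O[i]) for a, P in zip(weights, points))
                 for i in range(len(O)))

pts = [(0.0, 0.0), (4.0, 0.0), (0.0, 6.0)]
w = [0.5, 0.25, 0.25]

b1 = barycentre(pts, w, O=(0.0, 0.0))
b2 = barycentre(pts, w, O=(-3.0, 7.0))   # a completely different reference point

assert b1 == (1.0, 1.5)
assert b2 == b1                           # same point, as the definition promises
```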


As a matter of fact, the weighted barycentre A ∈ A does not depend on the choice of the point O ∈ A, but only on the points A1, …, An and the scalars a1, …, an (satisfying a1 + · · · + an = 1). Indeed, let O' be any other point of A; we have to prove that →O'A = a1 →O'A1 + · · · + an →O'An. By axiom (AFF1) of Definition 3.1 we have →O'B = →O'O + →OB, ∀ B ∈ A. Then

→O'A = →O'O + →OA = →O'O + a1 →OA1 + · · · + an →OAn
     = (a1 + · · · + an)→O'O + a1 →OA1 + · · · + an →OAn
     = a1(→O'O + →OA1) + a2(→O'O + →OA2) + · · · + an(→O'O + →OAn)
     = a1 →O'A1 + · · · + an →O'An.

In the affine space An(R), if we assign a mass ai to Ai, i = 1, …, n, then A represents the physical barycentre of the ordered system {A1, …, An, a1, …, an}. If char(K) ∤ n and a1 = · · · = an = 1/n, then the point A = (1/n)A1 + · · · + (1/n)An is simply called the barycentre of the system of points {A1, …, An}; if n = 2, A = (1/2)A1 + (1/2)A2 is called the midpoint of the pair {A1, A2}, and B := (−1)A1 + 2A2 is the symmetric of A1 with respect to A2.

Definition 3.6 Let a1, …, an ∈ K be n scalars such that a1 + · · · + an = 0. Then the vector a1 →OA1 + · · · + an →OAn does not depend on the point O for any points A1, …, An ∈ A. Indeed, we have

a1 →O'A1 + · · · + an →O'An = (a1 + · · · + an)→O'O + a1 →OA1 + · · · + an →OAn = a1 →OA1 + · · · + an →OAn.

We write a1 A1 + · · · + an An = ◯ if there exists a point O ∈ A such that a1 →OA1 + · · · + an →OAn = 0V.

Remark 3.7 Let (A, V, ϕ) be an affine space over K. Let O ∈ A and ϕO : A → V as in (3.1). From Definition 3.5 we have

ϕO(a1 A1 + · · · + an An) = a1 ϕO(A1) + · · · + an ϕO(An),

∀ O, A1, …, An ∈ A and ∀ a1, …, an ∈ K such that a1 + · · · + an = 1.

Definition 3.8 Let (A, V, ϕ) be an affine space over K and A' a subset of A. We say that A' is an affine subspace of (A, V, ϕ) if A' = ∅ or if there exists a point O ∈ A' such that ϕO(A') (= {→OA | A ∈ A'}) is a vector subspace of V.


Let A' be a non-empty affine subspace of (A, V, ϕ). If O' ∈ A' is any other point of A', then ϕO'(A') = v + ϕO(A'), with v = →O'O ∈ ϕO(A'), so that ϕO'(A') = ϕO(A'). Hence the vector subspace ϕO(A') of V does not depend on the choice of O ∈ A', and it is called the director subspace (or orienting subspace) of A'. We shall denote ϕO(A') by D(A'). Since the map ϕO is bijective, we have A' = ϕO^{−1}(D(A')) for all O ∈ A', which will also be denoted by A' = O + D(A').

Therefore a non-empty subset A' is an affine subspace of (A, V, ϕ) if and only if (A', D(A'), ϕ|A'×A') is an affine space over K.

Proposition 3.9 Let (A, V, ϕ) be an affine space over K and A' a non-empty subset of A. The following conditions are equivalent:

(i) A' is an affine subspace of (A, V, ϕ).
(ii) For every finite number of points A1, …, An ∈ A' and for every a1, …, an ∈ K such that a1 + · · · + an = 1, we have a1 A1 + · · · + an An ∈ A'.

Proof (i) =⇒ (ii): Let A := Σ_{i=1}^{n} ai Ai, with Ai ∈ A' and Σ_{i=1}^{n} ai = 1. Then there exists O ∈ A' such that →OA = Σ_{i=1}^{n} ai →OAi. Since ϕO(A') is a vector subspace of V, Ai ∈ A' (i.e. →OAi ∈ ϕO(A')) and ai ∈ K, i = 1, …, n, we get →OA ∈ ϕO(A'), i.e. A ∈ A'.

(ii) =⇒ (i): We have to show that ϕO(A') is a vector subspace for O ∈ A'. Let v1, v2 ∈ ϕO(A'), vi = →OAi, with Ai ∈ A', i = 1, 2. If we put A3 := O, a1 = a2 = 1 and a3 = −1, so that a1 + a2 + a3 = 1, by our assumption we obtain that A := a1 A1 + a2 A2 + a3 A3 = A1 + A2 − O ∈ A', which is equivalent to

v1 + v2 = →OA1 + →OA2 = →OA1 + →OA2 − →OO = →OA ∈ ϕO(A').

Finally, if λ ∈ K then B := (1 − λ)O + λA1 ∈ A', so that λv1 = (1 − λ)→OO + λ→OA1 = →OB ∈ ϕO(A'). ⊓⊔

Corollary 3.10 Let {Ai}_{i∈I} be a non-empty family of affine subspaces of an affine space (A, V, ϕ). Then the intersection ∩_{i∈I} Ai is an affine subspace (possibly empty) of (A, V, ϕ).

Proof It follows from Proposition 3.9. ⊓⊔

Definition 3.11 Let A' be an affine subspace of an affine space (A, V, ϕ). We define the dimension of A' as the integer

dimK A' := dimK(D(A'))   if A' ≠ ∅,
dimK A' := −1            if A' = ∅.


The codimension of A' in (A, V, ϕ) is defined by codim(A') := dimK A − dimK A'. The points of (A, V, ϕ) are the affine subspaces of dimension 0. The affine subspaces of dimension 1 are called affine lines (or simply lines) of (A, V, ϕ), while those of dimension 2 are called affine planes (or simply planes). The affine subspaces of codimension 1 are called affine hyperplanes (or simply hyperplanes) of (A, V, ϕ).

Proposition 3.12 Let A' be an affine subspace of an affine space (A, V, ϕ) of dimension n. Then dimK A' ≤ n, and dimK A' = n if and only if A' = A.

Proof It follows from Definition 3.11 and from Corollary 1.61. ⊓⊔

Example 3.13 Let A' be a non-empty subset of the standard affine space An(K). Then A' is an affine subspace of An(K) if and only if there exist a vector subspace W ⊂ K^n and a vector v ∈ K^n such that A' = v + W := {v + w | w ∈ W}. (If A' is an affine subspace then W = D(A').)

Definition 3.14 Let M ⊂ A be a subset of the affine space (A, V, ϕ). The affine span 〈M〉 of M is the intersection of all affine subspaces of (A, V, ϕ) which contain M. From Corollary 3.10 we obtain the following properties:

(i) 〈M〉 is an affine subspace of (A, V, ϕ) such that M ⊆ 〈M〉.
(ii) If B is an affine subspace of (A, V, ϕ) containing M, then 〈M〉 ⊆ B.
(iii) If M = ∅ then 〈M〉 = ∅.

Proposition 3.15 Let {A1, …, Am} ⊆ A be a finite subset of an affine space (A, V, ϕ). Then

〈A1, …, Am〉 = { Σ_{i=1}^{m} ai Ai | ai ∈ K, Σ_{i=1}^{m} ai = 1 },

i.e. the affine span 〈A1, …, Am〉 is the set of all weighted barycentres, and

D(〈A1, …, Am〉) = 〈→A1A2, …, →A1Am〉.

In particular dimK 〈A1, …, Am〉 ≤ m − 1.

Proof Let A' := { Σ_{i=1}^{m} ai Ai | ai ∈ K, Σ_{i=1}^{m} ai = 1 } and O := A1. By Proposition 3.9, (ii), we have A' ⊂ 〈A1, …, Am〉. Conversely, it will suffice to prove that A' is an affine subspace of (A, V, ϕ), since Ai ∈ A'. So we have to prove that ϕ_{A1}(A') is a vector subspace of V. Let v = →A1A and w = →A1B, where A = Σ_{i=1}^{m} ai Ai and B = Σ_{i=1}^{m} bi Ai, with Σ_{i=1}^{m} ai = Σ_{i=1}^{m} bi = 1. Therefore

v = →A1A = Σ_{i=2}^{m} ai →A1Ai   and   w = →A1B = Σ_{i=2}^{m} bi →A1Ai,

which imply

v + w = Σ_{i=2}^{m} (ai + bi) →A1Ai.

If we put ci = ai + bi, i = 2, …, m, and c1 := 1 − Σ_{i=2}^{m} ci, we get the point C := Σ_{i=1}^{m} ci Ai ∈ A' such that v + w = →A1C, thus v + w ∈ ϕ_{A1}(A'). Now we have to prove that λv ∈ ϕ_{A1}(A') for every λ ∈ K and v ∈ ϕ_{A1}(A') as above. The point D := (1 − λ)A1 + λA = (1 − λ)A1 + λa1 A1 + · · · + λam Am belongs to A' by definition of A'. Hence

λv = λ→A1A = (1 − λ)→A1A1 + λ→A1A = →A1D ∈ ϕ_{A1}(A').

Therefore ϕ_{A1}(A') is a vector subspace of V and A' is an affine subspace of (A, V, ϕ), so that 〈A1, …, Am〉 ⊂ A' since {A1, …, Am} ⊂ A'. The last statement is now obvious. ⊓⊔

Definition 3.16 Let (A, V, ϕ) be an affine space of dimension n. We call m points A1, …, Am ∈ A (or the set {A1, …, Am}) affinely independent if →A1A2, …, →A1Am are linearly independent vectors of V. If A1, …, Am are not affinely independent we call them affinely dependent. Moreover, we say that A1, …, Am are in general position if m ≤ n and they are affinely independent or, if m > n, every subset of {A1, …, Am} consisting of n points is affinely independent. For instance, n distinct points P1, …, Pn ∈ A²(K), with n > 2, are in general position if no subset of three distinct points is contained in a line.

Corollary 3.17 Under the assumptions of Proposition 3.15, the following conditions are equivalent:

(i) {A1, …, Am} is affinely independent.
(ii) dimK 〈A1, …, Am〉 = m − 1.
(iii) If (a1, …, am) ∈ K^m is such that a1 + · · · + am = 0 and a1 A1 + · · · + am Am = ◯ (see Definition 3.6), then a1 = · · · = am = 0.
(iv) For every A ∈ 〈A1, …, Am〉 there exists a unique (a1, …, am) ∈ K^m such that a1 + · · · + am = 1 and A = a1 A1 + · · · + am Am.


Proof The equivalence (i) ⇐⇒ (ii) follows from Definition 3.16 and from Proposition 3.15.

(i) =⇒ (iii): Let a1, …, am ∈ K be such that a1 + · · · + am = 0 and a1 A1 + · · · + am Am = ◯. If we take O = A1, then we have a2 →A1A2 + · · · + am →A1Am = 0V, so that, by (i), a2 = · · · = am = 0. Furthermore, a1 = −(a2 + · · · + am) = 0.

(iii) =⇒ (i): We have to show that →A1A2, …, →A1Am are linearly independent. Let a2 →A1A2 + · · · + am →A1Am = 0V, with a2, …, am ∈ K. We have just seen that this equality is equivalent to a1 A1 + · · · + am Am = ◯, where a1 = −(a2 + · · · + am). From (iii) we conclude that a1 = a2 = · · · = am = 0.

(iii) =⇒ (iv): Let A ∈ 〈A1, …, Am〉 be such that A = a1 A1 + · · · + am Am = b1 A1 + · · · + bm Am, with a1 + · · · + am = b1 + · · · + bm = 1. Then we have Σ_{i=2}^{m} ai →A1Ai = Σ_{i=2}^{m} bi →A1Ai. Since (iii) ⇐⇒ (i), it follows that ai = bi, i = 2, …, m, from which we obtain a1 = 1 − (a2 + · · · + am) = 1 − (b2 + · · · + bm) = b1.

(iv) =⇒ (i): Let a2 →A1A2 + · · · + am →A1Am = 0V, with a2, …, am ∈ K. Then we have A1 = a1 A1 + · · · + am Am with a1 = 1 − (a2 + · · · + am). Since A1 = 1 · A1 + 0 · A2 + · · · + 0 · Am, (iv) implies ai = 0, i = 2, …, m. ⊓⊔
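Definition 3.16 gives a directly computable test for affine independence. In the plane A²(R) (an assumed concrete setting) three points are affinely independent exactly when the vectors A1A2 and A1A3 are linearly independent, i.e. when a 2 × 2 determinant is non-zero:

```python
# Affine independence of three points of A^2(R), following Definition 3.16.

def affinely_independent(A1, A2, A3):
    u = (A2[0] - A1[0], A2[1] - A1[1])   # the vector A1A2
    v = (A3[0] - A1[0], A3[1] - A1[1])   # the vector A1A3
    return u[0] * v[1] - u[1] * v[0] != 0  # det of the 2x2 coordinate matrix

assert affinely_independent((0, 0), (1, 0), (0, 1))       # a genuine triangle
assert not affinely_independent((0, 0), (1, 1), (2, 2))   # three collinear points
```

By Remark 3.18 the answer does not depend on which of the three points plays the role of A1.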

Remark 3.18 As an immediate consequence of Corollary 3.17, the affine independence of a set {A1, …, Am} does not depend on the order of A1, …, Am.

Definition 3.19 We call m points A1, …, Am ∈ (A, V, ϕ) collinear if 〈A1, …, Am〉 is a line, and coplanar if 〈A1, …, Am〉 is a plane.

Definition 3.20 Let (A, V, ϕ) be an affine space of dimension n. If a subset {A1, …, Am} of A is affinely independent, every point A ∈ 〈A1, …, Am〉 can be written in the unique form A = a1 A1 + · · · + am Am (with a1 + · · · + am = 1) by Corollary 3.17. Thus (a1, …, am) ∈ K^m can be called the barycentric coordinates of A with respect to the ordered affinely independent set {A1, …, Am}. Every permutation of the set {A1, …, Am} determines the same permutation of the coordinates (a1, …, am).

Proposition 3.21 Let {A1, …, Am} be a non-empty set of points of an affine space (A, V, ϕ). Then {A1, …, Am} is affinely independent if and only if

Ai ∉ 〈A1, …, Ai−1, Ai+1, …, Am〉   for every i ∈ {1, …, m}.      (3.2)


Proof If m = 1 there is nothing to prove. Let m ≥ 2. Suppose that {A1, …, Am} is affinely independent and that there is an index i such that Ai ∈ 〈A1, …, Ai−1, Ai+1, …, Am〉. After renumbering our points we can suppose i ≠ 1, so that

Ai = Σ_{j=1, j≠i}^{m} aj Aj,   with Σ_{j=1, j≠i}^{m} aj = 1,   i.e.   →A1Ai = Σ_{j=2, j≠i}^{m} aj →A1Aj,

which contradicts the affine independence of {A1, …, Am}.

Conversely, if {A1, …, Am} is not affinely independent, we can suppose that there is an index i0 ≠ 1 such that →A1Ai0 = Σ_{j=2, j≠i0}^{m} aj →A1Aj, so that

Ai0 = a1 A1 + Σ_{j=2, j≠i0}^{m} aj Aj,   with a1 = 1 − Σ_{j=2, j≠i0}^{m} aj,

which contradicts (3.2). ⊓⊔

Proposition 3.22 Let (A, V, ϕ) be an affine space of dimension n and A' a non-empty affine subspace of dimension m (m ≤ n). Then there exists an affinely independent subset {A0, A1, …, Am} ⊂ A' such that A' = 〈A0, A1, …, Am〉.

Proof Let {v1, …, vm} be a basis of D(A'). Take a point O of A' and define Ai := ϕO^{−1}(vi) (i.e. vi = →OAi), i = 1, …, m. If we put A0 := O, the set {A0, A1, …, Am} is affinely independent and such that A' = 〈A0, A1, …, Am〉. ⊓⊔

Definition 3.23 Let (A, V, ϕ) be an affine space of dimension n ≥ 1. An affine frame of (A, V, ϕ) is an ordered set of n + 1 points A0, A1, …, An ∈ A which are affinely independent. The point A0 is called the origin of the affine frame A0, A1, …, An. By Proposition 3.22 every affine space (A, V, ϕ) of dimension n ≥ 1 has at least one affine frame. For instance, E0 = (0, 0, …, 0), E1 = (1, 0, …, 0), …, En = (0, 0, …, 0, 1) is an affine frame of An(K), called the standard affine frame of An(K).

Proposition 3.24 Let (A, V, ϕ) be an affine space of dimension n ≥ 1. Giving an affine frame of (A, V, ϕ) is equivalent to taking a point O ∈ A (the origin) and a vector basis of V.

Proof Let A0, A1, …, An be an affine frame of (A, V, ϕ). Take O := A0 and vi := →A0Ai, i = 1, …, n. Then the vectors v1, …, vn are linearly independent and {v1, …, vn} is a basis of V because dimK V = n.

Conversely, given any point O ∈ A and an ordered basis v1, …, vn of V, by axiom (AFF2) and Proposition 3.2, there exists a uniquely determined point Ai ∈ A


such that vi = →OAi, i = 1, …, n. Putting A0 := O, the set A0, A1, …, An is an affine frame of (A, V, ϕ). ⊓⊔

An immediate consequence of Corollary 3.17, (iv), is the following:

Corollary 3.25 Let A0, A1, …, An be an affine frame of (A, V, ϕ) of dimension n. Then for every point A ∈ A there exists a unique (n + 1)-tuple (a0, a1, …, an) ∈ K^{n+1} such that a0 + a1 + · · · + an = 1 and A = a0 A0 + a1 A1 + · · · + an An.

Definition 3.26 Under the assumptions and notation of Corollary 3.25, the scalars a0, a1, …, an ∈ K, with a0 + a1 + · · · + an = 1, such that A = a0 A0 + a1 A1 + · · · + an An, are called the barycentric coordinates of A with respect to the affine frame A0, A1, …, An.

Since {→A0A1, …, →A0An} is a basis of V, there is a unique n-tuple (x1(A), …, xn(A)) ∈ K^n such that →A0A = x1(A)→A0A1 + · · · + xn(A)→A0An. We call x1(A), …, xn(A) ∈ K the affine coordinates of A with respect to the affine frame A0, A1, …, An, or also a system of affine coordinates. If (a0, a1, …, an) are the barycentric coordinates of A with respect to the affine frame A0, A1, …, An, then the affine coordinates of A (with respect to the same frame) are given by xi(A) = ai, i = 1, …, n. Indeed, the equality A = a0 A0 + a1 A1 + · · · + an An implies the relations

→A0A = a0 →A0A0 + a1 →A0A1 + · · · + an →A0An = a1 →A0A1 + · · · + an →A0An.

Conversely, if (x1(A), …, xn(A)) are the affine coordinates of A, then

(a0, a1, …, an) = (1 − (x1(A) + · · · + xn(A)), x1(A), …, xn(A))

are the barycentric coordinates of A.

Proposition 3.27 Let A0, A1, …, An and B0, B1, …, Bn be two affine frames of an affine space (A, V, ϕ) of dimension n ≥ 1. Denote the associated vector bases by S = {→A0Ai}_{i=1,…,n} and T = {→B0Bi}_{i=1,…,n}, and the matrix of transition from S to T by Λ_{S,T} = (aij)_{i,j=1,…,n}, so that

→B0Bk = Σ_{i=1}^{n} aik →A0Ai,   k = 1, …, n.      (3.3)

If (x1, …, xn) and (y1, …, yn) are the affine coordinates of any point A ∈ A with respect to A0, A1, …, An and B0, B1, …, Bn respectively, then we have

xi = Σ_{k=1}^{n} aik yk + bi,   i = 1, …, n,      (3.4)

where

→A0B0 = Σ_{i=1}^{n} bi →A0Ai.      (3.5)

Proof From (3.5) we have

→B0A = →A0A − →A0B0 = Σ_{i=1}^{n} xi →A0Ai − Σ_{i=1}^{n} bi →A0Ai = Σ_{i=1}^{n} (xi − bi) →A0Ai.

On the other hand, we have

→B0A = Σ_{k=1}^{n} yk →B0Bk = Σ_{i,k=1}^{n} aik yk →A0Ai = Σ_{i=1}^{n} ( Σ_{k=1}^{n} aik yk ) →A0Ai.

These two equalities imply (3.4). ⊓⊔
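The change-of-frame formula (3.4) can be checked on concrete data. The sketch below (assumed example in A²(R), not from the book) takes the standard frame as the A-frame, so that A-affine coordinates are ordinary cartesian coordinates, and compares the point built directly from its B-frame coordinates with the point given by (3.4).

```python
# Verifying (3.4) in A^2(R). The A-frame is the standard frame, so the
# coordinates of B0 are exactly (b1, b2) of (3.5).

Lam = [[1, -1],
       [2,  1]]          # column k holds the coordinates of B0Bk in the basis A0Ai
b = (1, 2)               # coordinates of A0B0, as in (3.5)
B0 = (1, 2)              # equals b because the A-frame is standard

y = (2, -1)              # affine coordinates of a point A in the B-frame

# the point itself: A = B0 + y1*B0B1 + y2*B0B2
A = tuple(B0[i] + sum(Lam[i][k] * y[k] for k in range(2)) for i in range(2))

# formula (3.4): x_i = sum_k a_ik * y_k + b_i
x = tuple(sum(Lam[i][k] * y[k] for k in range(2)) + b[i] for i in range(2))

assert x == A == (4, 5)
```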

Lines in the Affine Spaces

Let A and B be two distinct points of an affine space (A, V, ϕ). Then the set {A, B} is obviously affinely independent. Denote by AB := 〈A, B〉 the affine subspace of A generated by {A, B}. By Proposition 3.15 we have

AB = {(1 − t)A + tB | t ∈ K}.

Moreover, by Corollary 3.17, dimK(AB) = 1, so AB is a line according to Definition 3.11. We shall say that AB joins A and B. Conversely, every line B of (A, V, ϕ) is of the form AB, with A, B ∈ B, A ≠ B. Indeed, if we take any two points A, B ∈ B, A ≠ B, then AB ⊂ B and, by Proposition 3.12, AB = B (since dimK AB = dimK B = 1). Thus we have:

Corollary 3.28 Through two distinct points A and B of an affine space (A, V, ϕ) there is exactly one line.

Proposition 3.29 Let (A, V, ϕ) be an affine space and A' a non-empty subset of A. If A' is an affine subspace of (A, V, ϕ), then for every pair of distinct points A, B ∈ A' the line AB is contained in A'. Conversely, if char(K) ≠ 2 and AB ⊆ A' for every pair of distinct points A, B ∈ A', then A' is an affine subspace of (A, V, ϕ).

Proof The first assertion follows from Proposition 3.9, since the line AB is the set of all weighted barycentres of A and B, by definition. Conversely, assume that char(K) ≠ 2 and that AB ⊂ A' for every pair of distinct points A, B ∈ A'. Let O be any point of A'. We have to show that W := ϕO(A') is a vector subspace of V.


Let v = →OM ∈ W, with M ∈ A', and λ ∈ K. Since OM ⊂ A', then N := (1 − λ)O + λM ∈ A', i.e. λv = λ→OM = (1 − λ)→OO + λ→OM = →ON ∈ W. We now have to prove that v + v' ∈ W if v, v' ∈ W. Let M, M' ∈ A' be such that v = →OM and v' = →OM'. Take N = (1/2)M + (1/2)M' (we are assuming that char(K) ≠ 2!); since MM' ⊂ A', then N ∈ A' and →ON ∈ W. By definition we have →ON = (1/2)→OM + (1/2)→OM' = (1/2)(v + v'), hence (1/2)(v + v') ∈ W and v + v' ∈ W. ⊓⊔

Remark 3.30 By Corollary 3.17, three points A, B, C ∈ A are affinely independent if and only if A, B, C are not collinear. By the same corollary, A, B, C are collinear if and only if there exists (a, b, c) ∈ K³ \ {(0, 0, 0)} such that a + b + c = 0 and aA + bB + cC = ◯.

3.1.1 The Affine Ratio and Menelaus' Theorem

Definition 3.31 Let A and B be two distinct points of an affine space (A, V, ϕ) of dimension ≥ 1 over a field K. The ordered pair (A, B) is an affine frame of the line AB (as an affine space of dimension 1). From Corollary 3.25 it follows that, for every C ∈ AB, there is a unique t ∈ K such that

C = (1 − t)A + tB.      (3.6)

Hence the barycentric coordinates of C with respect to the affine frame (A, B) are (1 − t, t), and the affine coordinate of C with respect to the same affine frame is t. From (3.6) we get

→AC = (1 − t)→AA + t→AB = t→AB,      (3.7)

so that we can define

→AC / →AB := t ∈ K.      (3.8)

If C ∈ AB and C ≠ A, B, the relation (3.6) also implies

A = (t/(t − 1)) B + (−1/(t − 1)) C,

from which we get

→CA = (t/(t − 1))→CB + (−1/(t − 1))→CC = (t/(t − 1))→CB.


Fig. 3.2 Menelaus' theorem

These last relations justify the definition of the affine ratio ar(A, B, C) of the ordered triple (A, B, C):

ar(A, B, C) = →CA / →CB := t/(t − 1) ∈ K.      (3.9)

Definition 3.32 Let A, B and C be three affinely independent points of an affine space (A, V, ϕ) of dimension ≥ 2. We call the triple {A, B, C} a (non-degenerate) triangle and we denote it by ΔABC. The points A, B and C, which are not collinear, are called the vertices of ΔABC, while the lines AB, AC and BC are the sides of ΔABC.

The affine ratio is a fruitful tool in affine geometry and, to show it, we prove the well-known Menelaus' theorem.

Proposition 3.33 (Menelaus' Theorem) Let ΔABC be a triangle of an affine space (A, V, ϕ) of dimension 2. Let d be a line such that (see Fig. 3.2)

d ∩ BC = A' ≠ B, C;   d ∩ CA = B' ≠ C, A;   d ∩ AB = C' ≠ A, B.

Then

(→A'B / →A'C) · (→B'C / →B'A) · (→C'A / →C'B) = 1.      (3.10)

Conversely, if A' ∈ BC \ {B, C}, B' ∈ CA \ {C, A} and C' ∈ AB \ {A, B} are three points such that (3.10) holds, then A', B', C' are collinear. In other words, if A', B', C' are three points on the sides of ΔABC opposite A, B, C respectively, all different from the vertices, then A', B', C' are collinear if and only if (3.10) holds.

Proof If we put A' = (1 − tA')B + tA'C, B' = (1 − tB')C + tB'A and C' = (1 − tC')A + tC'B, with tA', tB', tC' ∈ K \ {0, 1}, then the equality (3.10) is equivalent to

(tA' tB' tC') / ((tA' − 1)(tB' − 1)(tC' − 1)) = 1.      (3.11)


On the other hand, the equalities $B'=(1-t_{B'})C+t_{B'}A$ and $C'=(1-t_{C'})A+t_{C'}B$ imply
$$\overrightarrow{A'B'}=(1-t_{B'})\overrightarrow{A'C}+t_{B'}\overrightarrow{A'A}\qquad\text{and}\qquad\overrightarrow{A'C'}=(1-t_{C'})\overrightarrow{A'A}+t_{C'}\overrightarrow{A'B}. \tag{3.12}$$
From $A'=(1-t_{A'})B+t_{A'}C$ it follows that $0_V=(1-t_{A'})\overrightarrow{A'B}+t_{A'}\overrightarrow{A'C}$ and
$$\overrightarrow{A'B}=\frac{t_{A'}}{t_{A'}-1}\,\overrightarrow{A'C}. \tag{3.13}$$
Inserting (3.13) in (3.12) we get
$$\overrightarrow{A'C'}=\frac{t_{C'}\,t_{A'}}{t_{A'}-1}\,\overrightarrow{A'C}+(1-t_{C'})\overrightarrow{A'A}. \tag{3.14}$$
Since $\overrightarrow{A'A}$ and $\overrightarrow{A'C}$ are linearly independent vectors, the equalities (3.12) and (3.14) imply the equivalences
$$A',B',C'\ \text{are collinear}\iff\overrightarrow{A'B'}\ \text{and}\ \overrightarrow{A'C'}\ \text{are linearly dependent}\iff(1-t_{B'})(1-t_{C'})=\frac{t_{B'}\,t_{C'}\,t_{A'}}{t_{A'}-1}.$$
The last equality is equivalent to (3.11), so that the thesis is proved. $\square$

Remark 3.34 Menelaus’ theorem obviously holds in any affine space (A, V , ϕ) of dimension ≥ 2 because its statement concerns only the plane 〈A, B, C〉 generated by A, B, C.
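Menelaus' theorem is easy to test numerically. The sketch below uses illustrative data (the triangle and the transversal $d:y=x+1$ are arbitrary choices, not taken from the text) and computes the three affine ratios of (3.10) with exact rational arithmetic:

```python
from fractions import Fraction

# Numerical check of Menelaus' theorem (3.10) for one concrete triangle
# and transversal in the real affine plane; all data are illustrative.

def affine_ratio(A, B, C):
    """ar(A, B, C) = t/(t - 1), where C = (1 - t)A + tB (see (3.6), (3.9))."""
    for i in range(2):                      # a coordinate where A, B differ
        if A[i] != B[i]:
            t = Fraction(C[i] - A[i]) / Fraction(B[i] - A[i])
            return t / (t - 1)
    raise ValueError("A and B must be distinct points")

A, B, C = (0, 0), (4, 0), (0, 4)
# intersections of d: y = x + 1 with the side lines BC, CA, AB:
A1 = (Fraction(3, 2), Fraction(5, 2))       # d ∩ BC (line x + y = 4)
B1 = (0, 1)                                 # d ∩ CA (line x = 0)
C1 = (-1, 0)                                # d ∩ AB (line y = 0)

product = (affine_ratio(B, C, A1) *         # A'B / A'C
           affine_ratio(C, A, B1) *         # B'C / B'A
           affine_ratio(A, B, C1))          # C'A / C'B
print(product)                              # -> 1
```

The individual ratios here are $-5/3$, $-3$ and $1/5$; only their product is invariant, as the theorem asserts.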

3.1.2 The Affine Subspaces of $\mathbb A^n(K)$

We can now give two analytical descriptions of the subspaces of the standard affine space $\mathbb A^n(K)$.

Proposition 3.35 Let $\mathbb A$ be any non-empty affine subspace of $\mathbb A^n(K)$ of dimension $k$. Fix a point $O=(a_1,\dots,a_n)\in\mathbb A$ and a basis $v_1,\dots,v_k$ of $D(\mathbb A)$ with $v_j=(a_{1j},\dots,a_{nj})$, $j=1,\dots,k$. Then all points $(x_1,\dots,x_n)\in\mathbb A$ are given by parametric equations of the form
$$x_i=a_i+\sum_{j=1}^{k}a_{ij}t_j,\qquad i=1,\dots,n, \tag{3.15}$$
where $(t_1,\dots,t_k)$ runs over $K^k$ and $\big(a_{ij}\big)_{i=1,\dots,n;\,j=1,\dots,k}$ is a matrix of (maximum) rank $k$.


Conversely, every subset of $\mathbb A^n(K)$ described by the Eq. (3.15), with $\big(a_{ij}\big)_{i=1,\dots,n;\,j=1,\dots,k}$ a matrix of rank $k$ and $O:=(a_1,\dots,a_n)\in\mathbb A^n(K)$, is an affine subspace $\mathbb A$ of $\mathbb A^n(K)$ of dimension $k$ such that $O\in\mathbb A$.

Proof Since
$$D(\mathbb A)=\Big\{\sum_{j=1}^{k}t_jv_j\ \Big|\ (t_1,\dots,t_k)\in K^k\Big\}$$
and a point $X:=(x_1,\dots,x_n)\in\mathbb A$ if and only if $\overrightarrow{OX}\in D(\mathbb A)$, we have
$$x_i-a_i=\sum_{j=1}^{k}a_{ij}t_j,\qquad i=1,\dots,n.$$
Conversely, since the matrix $\big(a_{ij}\big)_{i=1,\dots,n;\,j=1,\dots,k}$ has rank $k$, the linearly independent column vectors $v_1,\dots,v_k$, where $v_i:=(a_{1i},\dots,a_{ni})$, $i=1,\dots,k$, generate a vector subspace $W$ of dimension $k$. Then the Eq. (3.15) define the affine subspace $\mathbb A=O+W$ of dimension $k$. $\square$

Proposition 3.36 Let $\mathbb A$ be any non-empty affine subspace of $\mathbb A^n(K)$ of dimension $k$. Then all points $X:=(x_1,\dots,x_n)\in\mathbb A$ are solutions of a system of linear equations, called implicit equations of $\mathbb A$, of the form
$$\sum_{j=1}^{n}b_{ij}x_j=c_i,\qquad i=k+1,\dots,n, \tag{3.16}$$
where the matrix $\big(b_{ij}\big)_{i=k+1,\dots,n;\,j=1,\dots,n}$ has rank $n-k$ and $c_{k+1},\dots,c_n\in K$.

Conversely, every linear system (3.16), where $\big(b_{ij}\big)_{i=k+1,\dots,n;\,j=1,\dots,n}$ is a matrix of rank $n-k$, defines an affine subspace $\mathbb A$ of $\mathbb A^n(K)$ of dimension $k$.

Proof Fix a point $O=(a_1,\dots,a_n)\in\mathbb A$ and a basis $\mathcal B=\{v_1,\dots,v_k\}$ of $D(\mathbb A)$. By Corollary 1.53 there exists a basis $\mathcal B'=\{v_1,\dots,v_k,v_{k+1},\dots,v_n\}$ of $K^n$ which contains $\mathcal B$. Let $v_i=(a_{1i},\dots,a_{ni})$, $i=1,\dots,n$. The non-singular matrix $\big(a_{ij}\big)_{i,j=1,\dots,n}\in M_n(K)$ coincides with $\Lambda_{E,\mathcal B'}$, the transition matrix from the canonical basis $E=\{e_1,\dots,e_n\}$ of $K^n$ (see Example 1.62-1)) to $\mathcal B'$. Let $B=\Lambda_{E,\mathcal B'}^{-1}=\big(b_{ij}\big)_{i,j=1,\dots,n}$. If $(y_1,\dots,y_n)$ are the coordinates of $\overrightarrow{OX}=(x_1-a_1,\dots,x_n-a_n)\in K^n$ with respect to $\mathcal B'$, we have $(x_1-a_1,\dots,x_n-a_n)=\sum_{i=1}^{n}y_iv_i$, so that $y_i=\sum_{j=1}^{n}b_{ij}(x_j-a_j)$, $i=1,\dots,n$.


Furthermore, $(x_1-a_1,\dots,x_n-a_n)\in D(\mathbb A)$ if and only if $y_{k+1}=\dots=y_n=0$, hence
$$\mathbb A=O+D(\mathbb A)=\Big\{(a_1,\dots,a_n)+(x_1,\dots,x_n)\in\mathbb A^n(K)\ \Big|\ \sum_{j=1}^{n}b_{ij}(x_j-a_j)=0,\ i=k+1,\dots,n\Big\}.$$
Since $\det\big(b_{ij}\big)_{i,j=1,\dots,n}\ne 0$, the matrix $\big(b_{ij}\big)_{i=k+1,\dots,n;\,j=1,\dots,n}$ has maximum rank $n-k$. In this way we have obtained a general form of implicit equations that define the subspace $\mathbb A$ of $\mathbb A^n(K)$ of dimension $k$:
$$\sum_{j=1}^{n}b_{ij}(x_j-a_j)=0,\qquad i=k+1,\dots,n, \tag{3.17}$$
where $O=(a_1,\dots,a_n)$ and $\operatorname{rank}\big(b_{ij}\big)_{i=k+1,\dots,n;\,j=1,\dots,n}=n-k$, or
$$\sum_{j=1}^{n}b_{ij}x_j=c_i,\qquad i=k+1,\dots,n, \tag{3.18}$$
with $c_{k+1},\dots,c_n\in K$, if we do not specify the point $O$.

Conversely, the set of the points $(x_1,\dots,x_n)$ of $\mathbb A^n(K)$ which satisfy (3.16) (where $\operatorname{rank}\big(b_{ij}\big)_{i=k+1,\dots,n;\,j=1,\dots,n}=n-k$) is an affine subspace of $\mathbb A^n(K)$ of dimension $k$, whose director space is given by the equations
$$\sum_{j=1}^{n}b_{ij}x_j=0,\qquad i=k+1,\dots,n, \tag{3.19}$$
where, with abuse of notation, we have denoted the vectors of $K^n$ by $(x_1,\dots,x_n)$, like the points of $\mathbb A^n(K)$. $\square$

Remark 3.37 We note that both the parametric equations and the implicit equations of the affine subspace $\mathbb A$ given in Propositions 3.35 and 3.36 depend on the choice of a basis for its director subspace $D(\mathbb A)$.

Examples 3.38
1. Every hyperplane of $\mathbb A^n(K)$ is given by an implicit equation of the form
$$b_1x_1+\dots+b_nx_n=c,\qquad\text{where}\quad(b_1,\dots,b_n)\in K^n\setminus\{(0,\dots,0)\}\quad\text{and}\quad c\in K. \tag{3.20}$$


2. Parametric equations of a line $r$ containing a point $(a_1,\dots,a_n)\in\mathbb A^n(K)$ have the general form
$$x_j=a_j+l_jt,\qquad j=1,\dots,n, \tag{3.21}$$
where the vector $(l_1,\dots,l_n)\ne(0,\dots,0)$ generates the director space of $r$; the scalars $l_1,\dots,l_n$ are called director parameters of $r$ (they are unique up to a non-zero multiplicative factor). If $A=(a_1,\dots,a_n)$ and $B=(b_1,\dots,b_n)$ are two distinct points of $r$, we can take
$$l_i=b_i-a_i,\qquad i=1,\dots,n.$$
3. Implicit equations of a line of $\mathbb A^n(K)$ are of the form
$$\sum_{j=1}^{n}b_{ij}x_j=c_i,\qquad i=2,\dots,n, \tag{3.22}$$
where $\operatorname{rank}\big(b_{ij}\big)_{i=2,\dots,n;\,j=1,\dots,n}=n-1$.

In particular, if $n=3$, parametric and implicit equations of a line $r$ containing a point $(a_1,a_2,a_3)$ and having director parameters $(l_1,l_2,l_3)\in K^3\setminus\{(0,0,0)\}$ have the form, respectively,
$$\begin{cases}x_1=a_1+l_1t\\ x_2=a_2+l_2t\\ x_3=a_3+l_3t\end{cases}\quad\forall\,t\in K,\qquad\qquad\begin{cases}a_1x_1+a_2x_2+a_3x_3=c\\ b_1x_1+b_2x_2+b_3x_3=d,\end{cases}$$
with $a_i,b_i,c,d\in K$, $i=1,2,3$, such that
$$\operatorname{rank}\begin{pmatrix}a_1&a_2&a_3\\ b_1&b_2&b_3\end{pmatrix}=2.$$
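The passage from parametric to implicit equations (Propositions 3.35 and 3.36) can be sketched on a small example; the point and director parameters below are arbitrary illustrative data, not taken from the text:

```python
from fractions import Fraction

# Parametric -> implicit equations for a line in A^3 over Q; the point P
# and the director parameters l are arbitrary illustrative choices.
P = (1, 2, 3)          # a point of the line
l = (2, 1, -1)         # director parameters (l1, l2, l3), l1 != 0

# Implicit equations come from the vanishing 2x2 minors of
#   ( x1 - a1   x2 - a2   x3 - a3 )
#   (   l1        l2        l3    )
# using the non-zero entry l1 as pivot:
#   l2*(x1 - a1) - l1*(x2 - a2) = 0
#   l3*(x1 - a1) - l1*(x3 - a3) = 0

def implicit(x):
    x1, x2, x3 = x
    return (l[1]*(x1 - P[0]) - l[0]*(x2 - P[1]),
            l[2]*(x1 - P[0]) - l[0]*(x3 - P[2]))

# every parametric point P + t*l satisfies both implicit equations
for t in (Fraction(-2), Fraction(1, 3), Fraction(5)):
    point = tuple(P[i] + l[i]*t for i in range(3))
    assert implicit(point) == (0, 0)
print("implicit equations vanish on the line")
```

The coefficient matrix of the two equations, $\begin{pmatrix}1&-2&0\\-1&0&-2\end{pmatrix}$, has rank $2=n-k$, as Proposition 3.36 requires.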

Now we give another way to determine implicit equations of an affine subspace $\mathbb A$ of $\mathbb A^n(K)$. Fix a point $O=(a_1,\dots,a_n)\in\mathbb A$ and a basis $\{v_1,\dots,v_k\}$ of $D(\mathbb A)$ with $v_i=(a_{i1},\dots,a_{in})$, $i=1,\dots,k$. A point $X:=(x_1,\dots,x_n)$ belongs to $\mathbb A$ if and only if $\overrightarrow{OX}\in D(\mathbb A)$, i.e. if and only if
$$\Omega=\begin{pmatrix}x_1-a_1&x_2-a_2&\cdots&x_n-a_n\\ a_{11}&a_{12}&\cdots&a_{1n}\\ \vdots&\vdots&\ddots&\vdots\\ a_{k1}&a_{k2}&\cdots&a_{kn}\end{pmatrix}$$
has rank $k$. Since $v_1,\dots,v_k$ are linearly independent, we can find a non-singular $k\times k$ submatrix $M$ of
$$\Omega'=\begin{pmatrix}a_{11}&a_{12}&\cdots&a_{1n}\\ \vdots&\vdots&\ddots&\vdots\\ a_{k1}&a_{k2}&\cdots&a_{kn}\end{pmatrix}.$$
Then we write down $n-k$ implicit equations of $\mathbb A$ by setting equal to zero the determinants of the $n-k$ submatrices $N\in M_{k+1}(K)$ of $\Omega$ containing $M$. Indeed, by Kronecker's rule, $\operatorname{rank}(\Omega)=k$ if and only if the $n-k$ submatrices $N\in M_{k+1}(K)$ containing $M$ are singular.

Lemma 3.39 Let $A_1,\dots,A_n$ be $n$ arbitrary points of $\mathbb A^n(K)$, with $A_i=(a_{i1},\dots,a_{in})$, $i=1,\dots,n$. Then $A_1,\dots,A_n$ are affinely independent if and only if the matrix
$$\Omega(A_1,\dots,A_n):=\begin{pmatrix}1&a_{11}&\cdots&a_{1n}\\ 1&a_{21}&\cdots&a_{2n}\\ \vdots&\vdots&\ddots&\vdots\\ 1&a_{n1}&\cdots&a_{nn}\end{pmatrix}$$
contains an $n\times n$ non-singular submatrix obtained by deleting one of the last $n$ columns.

Proof We see that, for $i=1,\dots,n$, subtracting the first row from the others leaves the determinant unchanged:
$$\begin{vmatrix}1&a_{11}&\cdots&a_{1\,i-1}&a_{1\,i+1}&\cdots&a_{1n}\\ 1&a_{21}&\cdots&a_{2\,i-1}&a_{2\,i+1}&\cdots&a_{2n}\\ \vdots&\vdots&&\vdots&\vdots&&\vdots\\ 1&a_{n1}&\cdots&a_{n\,i-1}&a_{n\,i+1}&\cdots&a_{nn}\end{vmatrix}=\begin{vmatrix}1&a_{11}&\cdots&a_{1\,i-1}&a_{1\,i+1}&\cdots&a_{1n}\\ 0&a_{21}-a_{11}&\cdots&a_{2\,i-1}-a_{1\,i-1}&a_{2\,i+1}-a_{1\,i+1}&\cdots&a_{2n}-a_{1n}\\ \vdots&\vdots&&\vdots&\vdots&&\vdots\\ 0&a_{n1}-a_{11}&\cdots&a_{n\,i-1}-a_{1\,i-1}&a_{n\,i+1}-a_{1\,i+1}&\cdots&a_{nn}-a_{1n}\end{vmatrix},$$
thus the matrix $\Omega(A_1,\dots,A_n)$ contains an $n\times n$ non-singular submatrix obtained by deleting one of the last $n$ columns if and only if
$$\Omega'(A_1,\dots,A_n):=\begin{pmatrix}1&a_{11}&\cdots&a_{1n}\\ 0&a_{21}-a_{11}&\cdots&a_{2n}-a_{1n}\\ \vdots&\vdots&\ddots&\vdots\\ 0&a_{n1}-a_{11}&\cdots&a_{nn}-a_{1n}\end{pmatrix}$$


contains an $n\times n$ non-singular submatrix obtained by deleting one of the last $n$ columns. This is equivalent to saying that the submatrix of $\Omega'(A_1,\dots,A_n)$ obtained by deleting the first row and the first column has maximum rank $n-1$. This last fact means that $\overrightarrow{A_1A_2},\dots,\overrightarrow{A_1A_n}$ are linearly independent, i.e. that $A_1,\dots,A_n$ are affinely independent. $\square$

Proposition 3.40 Let $A_1,\dots,A_n$ be $n$ affinely independent points of $\mathbb A^n(K)$ $(n\ge 2)$, where $A_i=(a_{i1},\dots,a_{in})$, $i=1,\dots,n$. Then the hyperplane $\langle A_1,\dots,A_n\rangle$ has the equation
$$\begin{vmatrix}1&x_1&\cdots&x_n\\ 1&a_{11}&\cdots&a_{1n}\\ \vdots&\vdots&\ddots&\vdots\\ 1&a_{n1}&\cdots&a_{nn}\end{vmatrix}=0. \tag{3.23}$$

Proof By Corollary 3.17 the subspace $\mathcal H:=\langle A_1,\dots,A_n\rangle$ is a hyperplane. On the other hand, by Lemma 3.39, the polynomial
$$\begin{vmatrix}1&x_1&\cdots&x_n\\ 1&a_{11}&\cdots&a_{1n}\\ \vdots&\vdots&\ddots&\vdots\\ 1&a_{n1}&\cdots&a_{nn}\end{vmatrix} \tag{3.24}$$
has total degree 1 and represents a hyperplane $\mathcal H'$. Since $A_i\in\mathcal H'$, $i=1,\dots,n$, we have $\mathcal H\subset\mathcal H'$ and hence $\mathcal H=\mathcal H'$, because $\mathcal H$ and $\mathcal H'$ are two hyperplanes. $\square$

We have seen that a hyperplane of $\mathbb A^n(K)$ has an equation of the form (3.20), and this equation is essentially unique, as stated in the following

Lemma 3.41 Let $b_1x_1+\dots+b_nx_n=c$ and $b_1'x_1+\dots+b_n'x_n=c'$ be two equations that define the same hyperplane $\mathbb A$ of $\mathbb A^n(K)$ (with $n\ge 2$ and $(b_1,\dots,b_n),(b_1',\dots,b_n')\in K^n\setminus\{(0,\dots,0)\}$). Then there exists $d\in K\setminus\{0\}$ such that $b_i'=db_i$, $i=1,\dots,n$, and $c'=dc$.

Proof First suppose $c\ne 0$. Then $c'\ne 0$; otherwise $c'=0$ would imply $(0,\dots,0)\in\mathbb A$ and $c=0$. Furthermore, for every $i$ such that $b_i\ne 0$, the point $(0,\dots,0,cb_i^{-1},0,\dots,0)\in\mathbb A$. Because the $n$-tuple $(0,\dots,0,cb_i^{-1},0,\dots,0)$ solves the second equation, we get $b_i'cb_i^{-1}=c'$, so that $\dfrac{b_i'}{b_i}=\dfrac{c'}{c}$ for every $i$ with $b_i\ne 0$. In particular, $b_i\ne 0$ implies $b_i'\ne 0$. Interchanging $b_i$ with $b_i'$ we obtain the reciprocal assertion. So our thesis follows if $c\ne 0$.

If $c=0$ (and consequently $c'=0$), let $i$ be an index such that $b_i\ne 0$ and let $j\ne i$ be any other index. Then the $n$-tuple $(\lambda_1,\dots,\lambda_n)$, where $\lambda_i=-b_j$, $\lambda_j=b_i$ and $\lambda_k=0$ for all $k\ne i,j$, solves the equation $b_1x_1+\dots+b_nx_n=0$, and hence $(\lambda_1,\dots,\lambda_n)\in\mathbb A$. Therefore it also solves $b_1'x_1+\dots+b_n'x_n=0$, so that $b_i'(-b_j)+b_j'b_i=0$. Our proof is now complete. $\square$

Proposition 3.42 Let $\mathbb A$ be a $k$-dimensional affine subspace of $\mathbb A^n(K)$ $(0\le k\le n-1$, $n\ge 2)$ whose implicit equations are
$$F_i(x_1,\dots,x_n):=\sum_{j=1}^{n}b_{ij}x_j-c_i=0,\qquad i=k+1,\dots,n,$$
with $\operatorname{rank}\big(b_{ij}\big)_{i=k+1,\dots,n;\,j=1,\dots,n}=n-k$. For every hyperplane $\mathcal H$ of $\mathbb A^n(K)$ containing $\mathbb A$ there exists $(\lambda_{k+1},\dots,\lambda_n)\in K^{n-k}\setminus\{(0,\dots,0)\}$ such that $\mathcal H$ has the equation
$$\lambda_{k+1}F_{k+1}(x_1,\dots,x_n)+\dots+\lambda_nF_n(x_1,\dots,x_n)=0.$$

Proof Let $\mathbb A=A+D(\mathbb A)$, with $A=(a_1,\dots,a_n)$. The director subspace $D(\mathbb A)$ is a $K$-vector subspace of dimension $k$ of $K^n$. By Corollary 1.107 we have $\dim_K(D(\mathbb A)^0)=n-k$ (see also Definition 1.106). Since the linear functionals on $K^n$
$$f_i(x_1,\dots,x_n)=\sum_{j=1}^{n}b_{ij}x_j=F_i+c_i,\qquad i=k+1,\dots,n,$$
with $c_i=f_i(a_1,\dots,a_n)$, belong to $D(\mathbb A)^0$ for $i=k+1,\dots,n$, and $\operatorname{rank}\big(b_{ij}\big)_{i=k+1,\dots,n;\,j=1,\dots,n}=n-k$ is equivalent to the linear independence of $f_{k+1},\dots,f_n$, it follows that $f_{k+1},\dots,f_n$ is a basis of $D(\mathbb A)^0$. On the other hand, every hyperplane $\mathcal H$ which contains $\mathbb A$ has an equation of the form
$$F(x_1,\dots,x_n)=f(x_1,\dots,x_n)-f(a_1,\dots,a_n)=0,$$
with $f\in D(\mathbb A)^0$. Since $f_{k+1},\dots,f_n$ is a basis of $D(\mathbb A)^0$, there are $\lambda_{k+1},\dots,\lambda_n\in K$ such that $f=\lambda_{k+1}f_{k+1}+\dots+\lambda_nf_n$. Because $c_i=f_i(a_1,\dots,a_n)$, we get $f(a_1,\dots,a_n)=\lambda_{k+1}c_{k+1}+\dots+\lambda_nc_n$, which implies $F=\lambda_{k+1}F_{k+1}+\dots+\lambda_nF_n$. $\square$

Definition 3.43 Under the notation of Proposition 3.42, the set of hyperplanes of $\mathbb A^n(K)$ whose intersection is an affine subspace $\mathbb A$ is called an affine linear system of hyperplanes with base locus $\mathbb A$. If $\dim\mathbb A=n-2$ we shall use the term affine pencil of hyperplanes generated by $\mathbb A$; if $\mathbb A=P\in\mathbb A^2(K)$ we call the set of all lines through $P$ a pencil with base locus $P$.
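The determinant equation (3.23) of Proposition 3.40 can be checked on a small hypothetical example, the plane of $\mathbb A^3(\mathbb Q)$ through $(1,0,0)$, $(0,1,0)$, $(0,0,1)$ (illustrative data, not from the text):

```python
from fractions import Fraction
from itertools import permutations

# Hyperplane through n affinely independent points of A^n(K) via the
# determinant equation (3.23); the three points below are arbitrary.

def det(M):
    """Leibniz-formula determinant, exact for small matrices."""
    n = len(M)
    total = Fraction(0)
    for perm in permutations(range(n)):
        # sign of the permutation from its inversion count
        inv = sum(1 for i in range(n) for j in range(i + 1, n)
                  if perm[i] > perm[j])
        prod = Fraction(1)
        for i in range(n):
            prod *= M[i][perm[i]]
        total += prod if inv % 2 == 0 else -prod
    return total

def hyperplane_value(points, x):
    """Value of the polynomial (3.24) at x; zero iff x lies on the hyperplane."""
    M = [[1, *x]] + [[1, *p] for p in points]
    return det(M)

pts = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]
# the plane through these points is x1 + x2 + x3 = 1:
assert hyperplane_value(pts, (Fraction(1, 3),) * 3) == 0   # on the plane
assert hyperplane_value(pts, (0, 0, 0)) != 0               # off the plane
print("determinant equation (3.23) behaves as expected")
```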


3.2 Affine Morphisms

Definition 3.44 Let $(\mathbb A,V,\varphi)$ and $(\mathbb B,W,\psi)$ be two affine spaces over $K$, and let $f:\mathbb A\to\mathbb B$ be a map. Fixing $O\in\mathbb A$, from Definition 3.1 (and Proposition 3.2) the maps $\varphi_O:\mathbb A\to V$ and $\psi_{f(O)}:\mathbb B\to W$ are bijective. The trace map with respect to the point $O$, $\operatorname{Tr}_O(f):V\to W$, is defined by
$$\operatorname{Tr}_O(f):=\psi_{f(O)}\circ f\circ\varphi_O^{-1}.$$
In other words, for every $A\in\mathbb A$ we have
$$\operatorname{Tr}_O(f)(\overrightarrow{OA})=\overrightarrow{f(O)f(A)}, \tag{3.25}$$
i.e. the diagram
$$\begin{array}{ccc}\mathbb A&\xrightarrow{\ f\ }&\mathbb B\\ \varphi_O\big\downarrow&&\big\downarrow\psi_{f(O)}\\ V&\xrightarrow{\operatorname{Tr}_O(f)}&W\end{array}$$

is commutative. If there exists a point O ∈ A such that TrO (f ) : V → W is a linear map, we call f : A → B a morphism of affine spaces or affine map and we write f : (A, V , ϕ) → (B, W, ψ). A morphism of affine spaces f : (A, V , ϕ) → (B, W, ψ) is called an affine isomorphism if there exists a morphism of affine spaces g : (B, W, ψ) → (A, V , ϕ) such that g ◦f = idA and f ◦g = idB (so that g = f −1 ). An isomorphism of affine spaces f : (A, V , ϕ) → (A, V , ϕ) is called an affine automorphism of (A, V , ϕ). The set of all affine automorphisms of (A, V , ϕ) will be denoted by AutK (A, V , ϕ) or simply by AutK (A). We will see that AutK (A, V , ϕ) is a group which lies at the base of the geometry of the affine space (A, V , ϕ). If (A, V , ϕ) = A(V ) we simply write AutK (A(V )). The following lemma shows that Definition 3.44 does not depend on the choice of O ∈ A. Lemma 3.45 Under the assumptions and notation of Definition 3.44, suppose that there is O ∈ A such that TrO (f ) : V → W is a linear map. Then for every point O ' ∈ A we have TrO (f ) = TrO ' (f ).


Proof Since $\overrightarrow{OA}=\overrightarrow{OO'}+\overrightarrow{O'A}$ and $\operatorname{Tr}_O(f)$ is a linear map, for every $A\in\mathbb A$ we have
$$\overrightarrow{f(O)f(O')}+\overrightarrow{f(O')f(A)}=\overrightarrow{f(O)f(A)}=\operatorname{Tr}_O(f)(\overrightarrow{OA})=\operatorname{Tr}_O(f)(\overrightarrow{OO'}+\overrightarrow{O'A})=\operatorname{Tr}_O(f)(\overrightarrow{OO'})+\operatorname{Tr}_O(f)(\overrightarrow{O'A})=\overrightarrow{f(O)f(O')}+\operatorname{Tr}_O(f)(\overrightarrow{O'A}),$$
so that $\operatorname{Tr}_O(f)(\overrightarrow{O'A})=\overrightarrow{f(O')f(A)}$, $\forall\,A\in\mathbb A$, i.e. $\operatorname{Tr}_O(f)=\operatorname{Tr}_{O'}(f)$. $\square$

Definition 3.46 Let $f:(\mathbb A,V,\varphi)\to(\mathbb B,W,\psi)$ be an affine map and $O$ any point of $\mathbb A$. By Lemma 3.45 the linear map $\operatorname{Tr}_O(f)$ does not depend on the point $O$, so that we can define
$$\operatorname{Tr}(f):V\to W,\qquad\operatorname{Tr}(f):=\operatorname{Tr}_O(f),$$
and call it the trace of $f$. Therefore we can write, for any fixed $O\in\mathbb A$,
$$f(A)=f(O)+\operatorname{Tr}(f)(\overrightarrow{OA}),\qquad\forall\,A\in\mathbb A.$$

Remark 3.47 Let $(\mathbb A,V,\varphi)$ and $(\mathbb B,W,\psi)$ be two affine spaces and $O\in\mathbb A$, $O'\in\mathbb B$ two fixed points. Every linear map $f:V\to W$ induces an affine map
$$f_{O,O'}:(\mathbb A,V,\varphi)\to(\mathbb B,W,\psi),\qquad f_{O,O'}(A)=B\quad\text{such that}\quad f(\overrightarrow{OA})=\overrightarrow{O'B}.$$
It is immediately seen that $\operatorname{Tr}_O(f_{O,O'})=f$.

Proposition 3.48 Let $(\mathbb A,V,\varphi)$, $(\mathbb B,W,\psi)$ and $(\mathbb C,U,\chi)$ be affine spaces.
(i) The identity map $\operatorname{id}_{\mathbb A}:(\mathbb A,V,\varphi)\to(\mathbb A,V,\varphi)$ is an affine automorphism of $(\mathbb A,V,\varphi)$.
(ii) If $f:(\mathbb A,V,\varphi)\to(\mathbb B,W,\psi)$ and $g:(\mathbb B,W,\psi)\to(\mathbb C,U,\chi)$ are affine maps, then we have $\operatorname{Tr}(g\circ f)=\operatorname{Tr}(g)\circ\operatorname{Tr}(f)$. In particular, $g\circ f:(\mathbb A,V,\varphi)\to(\mathbb C,U,\chi)$ is an affine map.
(iii) An affine map $f:(\mathbb A,V,\varphi)\to(\mathbb B,W,\psi)$ is
(a) injective if and only if $\operatorname{Tr}(f):V\to W$ is injective;
(b) surjective if and only if $\operatorname{Tr}(f):V\to W$ is surjective;
(c) bijective if and only if $\operatorname{Tr}(f):V\to W$ is bijective.
In particular, an affine map $f:(\mathbb A,V,\varphi)\to(\mathbb B,W,\psi)$ is an affine isomorphism if and only if $\operatorname{Tr}(f)$ is an isomorphism of $K$-vector spaces.


(iv) $\operatorname{Aut}_K(\mathbb A,V,\varphi)$ is a group with respect to composition of automorphisms. Furthermore, the trace map $\operatorname{Tr}:\operatorname{Aut}_K(\mathbb A,V,\varphi)\to GL_K(V)$ is a surjective homomorphism of groups.

Proof (i) is obvious; (ii) and (iii) are left to the reader as an exercise; (iv) follows from (ii), (iii) and Remark 3.47. $\square$

Lemma 3.49 Let $(\mathbb A,V,\varphi)$ and $(\mathbb B,W,\psi)$ be affine spaces. A map $f:(\mathbb A,V,\varphi)\to(\mathbb B,W,\psi)$ is an affine morphism if and only if for every $A,B,C,D\in\mathbb A$ and $k\in K$
$$\overrightarrow{AB}=k\,\overrightarrow{CD}\quad\Longrightarrow\quad\overrightarrow{f(A)f(B)}=k\,\overrightarrow{f(C)f(D)}. \tag{3.26}$$

Proof If $f$ is an affine morphism, then
$$\overrightarrow{f(A)f(B)}=\operatorname{Tr}(f)(\overrightarrow{AB})=\operatorname{Tr}(f)(k\,\overrightarrow{CD})=k\operatorname{Tr}(f)(\overrightarrow{CD})=k\,\overrightarrow{f(C)f(D)}.$$
Conversely, if $v\in V$ we can write $v=\overrightarrow{AB}$, so that $\operatorname{Tr}(f)(v)=\operatorname{Tr}(f)(\overrightarrow{AB}):=\overrightarrow{f(A)f(B)}$. If $v=\overrightarrow{A'B'}$, then $\operatorname{Tr}(f)(v)=\overrightarrow{f(A')f(B')}=\overrightarrow{f(A)f(B)}$ by (3.26), choosing $k=1$, because $\overrightarrow{AB}=v=\overrightarrow{A'B'}$. Therefore $\operatorname{Tr}(f)$ is well defined. It remains to prove that $\operatorname{Tr}(f)$ is linear. First we note that $\operatorname{Tr}(f)(0_V)=\operatorname{Tr}(f)(\overrightarrow{AA})=\overrightarrow{f(A)f(A)}=0_W$. Next, given $v,v'\in V$, let us write $v=\overrightarrow{AB}$ and $v'=\overrightarrow{BC}$. Then
$$\operatorname{Tr}(f)(v+v')=\operatorname{Tr}(f)(\overrightarrow{AB}+\overrightarrow{BC})=\operatorname{Tr}(f)(\overrightarrow{AC})=\overrightarrow{f(A)f(C)}=\overrightarrow{f(A)f(B)}+\overrightarrow{f(B)f(C)}=\operatorname{Tr}(f)(v)+\operatorname{Tr}(f)(v').$$
Finally, given $v\in V$ and $k\in K$, we again write $v=\overrightarrow{AD}$ and put $B=A+k\,\overrightarrow{AD}$. Thus $\overrightarrow{AB}=k\,\overrightarrow{AD}$ and we have
$$\operatorname{Tr}(f)(kv)=\operatorname{Tr}(f)(k\,\overrightarrow{AD})=\operatorname{Tr}(f)(\overrightarrow{AB})=\overrightarrow{f(A)f(B)}=k\,\overrightarrow{f(A)f(D)}=k\operatorname{Tr}(f)(\overrightarrow{AD})=k\operatorname{Tr}(f)(v).\quad\square$$

Proposition 3.50 Let $(\mathbb A,V,\varphi)$ and $(\mathbb B,W,\psi)$ be two affine spaces and $f:\mathbb A\to\mathbb B$ a map. Then $f$ is an affine map if and only if for every finite set of points $\{A_1,\dots,A_m\}\subset\mathbb A$ and for every finite set of scalars $\{a_1,\dots,a_m\}\subset K$ such that $a_1+\dots+a_m=1$, we have
$$f(a_1A_1+\dots+a_mA_m)=a_1f(A_1)+\dots+a_mf(A_m).$$


Proof Let $f$ be an affine map. Take $A:=a_1A_1+\dots+a_mA_m$ and $B:=a_1f(A_1)+\dots+a_mf(A_m)$. We have to prove that $f(A)=B$, i.e. that $\overrightarrow{f(O)f(A)}=\overrightarrow{f(O)B}$. Since $\overrightarrow{OA}=a_1\overrightarrow{OA_1}+\dots+a_m\overrightarrow{OA_m}$, we have (because $\operatorname{Tr}_O(f)$ is a linear map)
$$\overrightarrow{f(O)f(A)}=\operatorname{Tr}_O(f)(\overrightarrow{OA})=a_1\operatorname{Tr}_O(f)(\overrightarrow{OA_1})+\dots+a_m\operatorname{Tr}_O(f)(\overrightarrow{OA_m})=a_1\overrightarrow{f(O)f(A_1)}+\dots+a_m\overrightarrow{f(O)f(A_m)}=\overrightarrow{f(O)B}.$$
The reverse implication is proved in a completely similar manner. $\square$

Proposition 3.51 Under the assumptions and notation of Definition 3.44, if $\operatorname{char}(K)\ne 2$, then $f:\mathbb A\to\mathbb B$ is an affine map if and only if
$$f((1-t)A+tB)=(1-t)f(A)+tf(B),\qquad\forall\,A,B\in\mathbb A,\ \forall\,t\in K. \tag{3.27}$$

Proof One implication is a special case of Proposition 3.50. Suppose now that (3.27) holds. Fixing $O\in\mathbb A$, we have to prove that $\operatorname{Tr}_O(f)$ is linear. Let $v\in V$ and $t\in K$; then $v=\overrightarrow{OM}$ with $M\in\mathbb A$. Taking $N=(1-t)O+tM$ we have $\overrightarrow{ON}=(1-t)\overrightarrow{OO}+t\overrightarrow{OM}=tv$, so that $\operatorname{Tr}_O(f)(tv)=\operatorname{Tr}_O(f)(\overrightarrow{ON})$. Since $f(N)=(1-t)f(O)+tf(M)$, we have $\overrightarrow{f(O)f(N)}=t\,\overrightarrow{f(O)f(M)}$, hence
$$\operatorname{Tr}_O(f)(tv)=\operatorname{Tr}_O(f)(\overrightarrow{ON})=\overrightarrow{f(O)f(N)}=t\,\overrightarrow{f(O)f(M)}=t\operatorname{Tr}_O(f)(\overrightarrow{OM})=t\operatorname{Tr}_O(f)(v).$$
Now let $v=\overrightarrow{OM}$ and $v'=\overrightarrow{OM'}$. Taking $N=\frac{1}{2}M+\frac{1}{2}M'$ we have $\overrightarrow{ON}=\frac{1}{2}\overrightarrow{OM}+\frac{1}{2}\overrightarrow{OM'}$, so that $v+v'=\overrightarrow{OM}+\overrightarrow{OM'}=2\overrightarrow{ON}$. Since $f(N)=\frac{1}{2}f(M)+\frac{1}{2}f(M')$, we get $\overrightarrow{f(O)f(N)}=\frac{1}{2}\overrightarrow{f(O)f(M)}+\frac{1}{2}\overrightarrow{f(O)f(M')}$. Hence
$$\operatorname{Tr}_O(f)(v+v')=\operatorname{Tr}_O(f)(2\overrightarrow{ON})=2\operatorname{Tr}_O(f)(\overrightarrow{ON})=2\,\overrightarrow{f(O)f(N)}=2\Big(\tfrac{1}{2}\overrightarrow{f(O)f(M)}+\tfrac{1}{2}\overrightarrow{f(O)f(M')}\Big)=\overrightarrow{f(O)f(M)}+\overrightarrow{f(O)f(M')}=\operatorname{Tr}_O(f)(v)+\operatorname{Tr}_O(f)(v').$$
Therefore $\operatorname{Tr}_O(f)$ is linear. $\square$

Corollary 3.52 Let (A, V , ϕ) and (B, W, ψ) be two affine spaces and {A0 , A1 , . . . , An } an affine frame of (A, V , ϕ).


(i) If $\sigma:(\mathbb A,V,\varphi)\to(\mathbb B,W,\psi)$ is an affine isomorphism, then $\{\sigma(A_0),\sigma(A_1),\dots,\sigma(A_n)\}$ is an affine frame of $(\mathbb B,W,\psi)$.
(ii) Let $\{B_0,B_1,\dots,B_n\}$ be an affine frame of $(\mathbb B,W,\psi)$. Then there exists a unique affine isomorphism $\sigma:(\mathbb A,V,\varphi)\to(\mathbb B,W,\psi)$ such that $\sigma(A_i)=B_i$, $i=0,1,\dots,n$.
(iii) If $\sigma:(\mathbb A,V,\varphi)\to(\mathbb B,W,\psi)$ is an affine isomorphism and $A$, $B$ and $C$ are three distinct collinear points of $\mathbb A$, then $\sigma(A)$, $\sigma(B)$ and $\sigma(C)$ are three distinct collinear points of $\mathbb B$ and
$$\operatorname{ar}(A,B,C)=\operatorname{ar}(\sigma(A),\sigma(B),\sigma(C)).$$

Proof
(i) It suffices to observe that $\{\overrightarrow{\sigma(A_0)\sigma(A_1)},\dots,\overrightarrow{\sigma(A_0)\sigma(A_n)}\}$ is a basis of $W$, since $\{\overrightarrow{A_0A_1},\dots,\overrightarrow{A_0A_n}\}$ is a basis of $V$.
(ii) Since $\{A_0,A_1,\dots,A_n\}$ is an affine frame, every point $A\in\mathbb A$ is uniquely written in the form $A=a_0A_0+a_1A_1+\dots+a_nA_n$, with $a_0,a_1,\dots,a_n\in K$ and $a_0+a_1+\dots+a_n=1$ (Corollary 3.25). If $\sigma:\mathbb A\to\mathbb B$ is an affine isomorphism such that $\sigma(A_i)=B_i$, $i=0,1,\dots,n$, by Proposition 3.50 we have the equality
$$\sigma(A)=a_0B_0+a_1B_1+\dots+a_nB_n,$$
which uniquely determines $\sigma$. Furthermore, the map $\sigma$ so defined is an affine isomorphism by Proposition 3.50.
(iii) If $C=(1-t)A+tB$ then $\sigma(C)=(1-t)\sigma(A)+t\sigma(B)$ by Proposition 3.50. $\square$

Examples 3.53
1. If $\mathbb A'$ is a non-empty affine subspace of an affine space $(\mathbb A,V,\varphi)$, then the inclusion $i:\mathbb A'\hookrightarrow\mathbb A$ is an affine map.
2. An affine automorphism $\alpha\in\operatorname{Aut}_K(\mathbb A,V,\varphi)$ is called an affine homothety if there exist a point $O\in\mathbb A$ and a scalar $a\in K^*$ such that $\alpha(O)=O$ and $\overrightarrow{O\alpha(A)}=a\,\overrightarrow{OA}$, $\forall\,A\in\mathbb A$. Hence we have
$$\operatorname{Tr}_O(\alpha)(\overrightarrow{OA})=\overrightarrow{\alpha(O)\alpha(A)}=\overrightarrow{O\alpha(A)}=a\,\overrightarrow{OA}.$$

The point O, necessarily unique if a /= 1, is called the centre of the homothety α. We shall denote the set of all homotheties of centre O by HO (A, V , ϕ) which is easily seen to be a subgroup of AutK (A, V , ϕ) isomorphic to the multiplicative group K ∗ .


3. Let $v\in V$. An affine automorphism $\tau_v\in\operatorname{Aut}_K(\mathbb A,V,\varphi)$ is called an affine translation by the vector $v$ if $\overrightarrow{A\tau_v(A)}=v$, $\forall\,A\in\mathbb A$. Hence
$$\operatorname{Tr}_O(\tau_v)(\overrightarrow{OA})=\overrightarrow{\tau_v(O)\tau_v(A)}=\overrightarrow{\tau_v(O)O}+\overrightarrow{OA}+\overrightarrow{A\tau_v(A)}=-v+\overrightarrow{OA}+v=\overrightarrow{OA},$$
i.e. $\operatorname{Tr}(\tau_v)=\operatorname{id}_V$. The set of affine translations will be denoted by $T_K(\mathbb A,V,\varphi)$, or simply by $T_K(\mathbb A)$. It is immediate to see that $T_K(\mathbb A,V,\varphi)=\operatorname{Ker}(\operatorname{Tr})$, so that $T_K(\mathbb A,V,\varphi)$ is a normal subgroup of $\operatorname{Aut}_K(\mathbb A,V,\varphi)$.
4. Let $(\mathbb A,V,\varphi)$ be an affine space and $O\in\mathbb A$ a fixed point. We can see $\varphi_O:\mathbb A\to V$ as an affine isomorphism $\varphi_O:(\mathbb A,V,\varphi)\to\mathbb A(V)$, where $\mathbb A(V)$ is the affine space associated with $V$ (see Example 3.4). In this case we have $\varphi_O(O)=0_V$ and $\operatorname{Tr}_O(\varphi_O)=\operatorname{id}_V$. For this reason we can always refer to this standard example.

Proposition 3.54 Let $f:(\mathbb A,V,\varphi)\to(\mathbb B,W,\psi)$ be an affine map. If $\mathbb A_1$ and $\mathbb B_1$ are two affine subspaces of $\mathbb A$ and $\mathbb B$, respectively, then $f(\mathbb A_1)$ and $f^{-1}(\mathbb B_1)$ are affine subspaces of $\mathbb B$ and $\mathbb A$, respectively. Furthermore, $f|_{\mathbb A_1}:\mathbb A_1\to f(\mathbb A_1)$ is an affine map.

Proof One can verify the following equalities:
$$f(\mathbb A_1)=f(A)+\operatorname{Tr}(f)(D(\mathbb A_1))\qquad\text{if }\mathbb A_1=A+D(\mathbb A_1),$$
$$f^{-1}(\mathbb B_1)=A+\operatorname{Tr}(f)^{-1}(D(\mathbb B_1))\qquad\text{if }\mathbb B_1=B+D(\mathbb B_1),\ f(A)=B.$$
Since $\operatorname{Tr}(f)(D(\mathbb A_1))$ and $\operatorname{Tr}(f)^{-1}(D(\mathbb B_1))$ are vector subspaces of $W$ and $V$, respectively, both assertions are proved. The last statement is now obvious. $\square$

Definition 3.55 Let $O\in(\mathbb A,V,\varphi)$. If $\alpha\in\operatorname{Aut}_K(\mathbb A)$ satisfies the equality $\alpha(O)=O$, we shall say that $O$ is a fixed point of $\alpha$. It is clear that
$$\operatorname{Aut}_K(\mathbb A,O):=\{\alpha\in\operatorname{Aut}_K(\mathbb A)\mid\alpha(O)=O\}$$
is a subgroup of $\operatorname{Aut}_K(\mathbb A)$ such that
$$\operatorname{Tr}|_{\operatorname{Aut}_K(\mathbb A,O)}:\operatorname{Aut}_K(\mathbb A,O)\to GL_K(V)$$
is an isomorphism of groups. We recall the following definition from the theory of groups.
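The decomposition behind Proposition 3.57 — every affine automorphism as a translation composed with an automorphism fixing $O$ — can be sketched for a hypothetical map $\alpha(x)=Mx+b$ of $\mathbb A^2(\mathbb Q)$, taking $O$ the origin (the matrix and vector are arbitrary illustrative data):

```python
from fractions import Fraction as F

# alpha(x) = Mx + b decomposed as (translation by alpha(O)) ∘ (part fixing O),
# with O = (0, 0); M (invertible) and b are arbitrary illustrative data.
M = ((F(1), F(2)), (F(1), F(3)))        # det M = 1, so alpha is invertible
b = (F(5), F(-4))

def alpha(x):                            # alpha(x) = Mx + b
    return tuple(M[i][0]*x[0] + M[i][1]*x[1] + b[i] for i in range(2))

def alpha_O(x):                          # the linear part: fixes O = (0, 0)
    return tuple(M[i][0]*x[0] + M[i][1]*x[1] for i in range(2))

def tau_b(x):                            # translation by b = alpha(O)
    return tuple(x[i] + b[i] for i in range(2))

for x in ((F(0), F(0)), (F(1), F(1)), (F(-2), F(7))):
    assert alpha(x) == tau_b(alpha_O(x))
assert alpha_O((F(0), F(0))) == (F(0), F(0))   # O is a fixed point
print("alpha = tau_b composed with alpha_O")
```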

AutK (A) = TK (A) > 0, .ann = 1 and .h = 1 nn so that

.

di = ti ,

.

i = 1, . . . , r.

⨆ ⨅

Remark 5.51 From Theorem 5.48, every hyperquadric of $\mathbb E^1(\mathbb R)$ is isometric to one and only one of the following types:
(1) $x_1^2=0$: a real double point.
(2) $d_1x_1^2=-1$, with $d_1>0$: two complex-conjugate points.
(3) $d_1x_1^2=1$, with $d_1>0$: two distinct real points.


Principal Diametral Hyperplanes and Axes of a Hyperquadric

Definition 5.52 Let $\widehat F\in H_2(n,\mathbb R)$ be a non-degenerate hyperquadric of $\mathbb E^n(\mathbb R)$ $(n\ge 2)$. A hyperplane $H$ of $\mathbb E^n(\mathbb R)$ is called a principal diametral hyperplane of $\widehat F$ if
$$s_H\cdot\widehat F=\widehat F,$$
where $s_H$ is the orthogonal symmetry with respect to $H$. An axis of $\widehat F$ is a line which is an intersection of principal diametral hyperplanes. If $n=2$, principal diametral hyperplanes obviously coincide with axes.

Remark 5.53 Under the assumptions and notation of Lemma 3.87, if $s_{B,W}\cdot\widehat F=\widehat F$ then
$$s_{\sigma(B),\operatorname{Tr}(\sigma)(W)}\circ\sigma\cdot\widehat F=\sigma\circ s_{B,W}\cdot\widehat F=\sigma\cdot\widehat F. \tag{5.64}$$
In particular, let $\widehat F$ be orthogonally symmetric with respect to a euclidean subspace $B$, i.e. $s_B\cdot\widehat F=\widehat F$, where $s_B$ is the orthogonal symmetry with respect to $B$ (Definition 4.25). If $\sigma\in O(\mathbb E^n(\mathbb R))$, then $\sigma\circ s_B=s_{\sigma(B)}\circ\sigma$ by Corollary 4.27, so that $\sigma\cdot\widehat F$ is orthogonally symmetric with respect to the euclidean subspace $\sigma(B)$.

Now we give some obvious statements concerning principal diametral hyperplanes and axes of the canonical non-degenerate hyperquadrics of Theorem 5.48. These assertions are valid in general thanks to Remark 5.53. In Chap. 10 we will come back to this subject.

Corollary 5.54 Principal diametral hyperplanes and axes of the canonical non-degenerate hyperquadrics of Theorem 5.48 satisfy the following properties:
1. The non-degenerate hyperquadrics with centre $\Gamma^{-1}_{p,n;d_1,\dots,d_n}$ and $\Gamma^{1}_{p,n;d_1,\dots,d_n}$ have $n$ pairwise orthogonal principal diametral hyperplanes $x_i=0$, $i=1,\dots,n$, and $n$ axes, each of them being the intersection of $n-1$ principal diametral hyperplanes. Furthermore, their centres are the intersections of their axes, i.e. the intersection of the $n$ pairwise orthogonal principal hyperplanes.
2. Every hyperplane or line passing through the centre $(0,\dots,0)$ of the sphere $\Gamma^{1}_{n,n;d_1}: d_1x_1^2+\dots+d_1x_n^2=1$ is a principal diametral hyperplane or an axis, respectively.
3. The paraboloid $\Gamma^{2}_{p,n-1;d_1,\dots,d_{n-1}}$ has $n-1$ pairwise orthogonal principal diametral hyperplanes $x_i=0$, $i=1,\dots,n-1$, and only one axis, which is the intersection of the $n-1$ pairwise orthogonal principal diametral hyperplanes.
4. The non-degenerate hyperquadrics are symmetric with respect to their axes.


Table 5.2 Real affine conics (see Table 5.1)

| Conic | rank($C_F$) | rank($\overline C_F$) | Sign($C_F$) | Sign($\overline C_F$) |
|---|---|---|---|---|
| Ellipse: $\Gamma^{1}_{2,2}$ | 2 | 3 | (2, 0) | (2, 1) |
| Imaginary ellipse: $\Gamma^{-1}_{2,2}$ | 2 | 3 | (2, 0) | (3, 0) |
| Hyperbola: $\Gamma^{-1}_{1,2}$ | 2 | 3 | (1, 1) | (2, 1) |
| Parabola: $\Gamma^{2}_{1,1}$ | 1 | 3 | (1, 0) | (2, 1) |
| Pair of complex-conjugate lines: $\Gamma^{0}_{2,2}$ | 2 | 2 | (2, 0) | (2, 0) |
| Pair of incident real lines: $\Gamma^{0}_{1,2}$ | 2 | 2 | (1, 1) | (1, 1) |
| Pair of parallel real lines: $\Gamma^{1}_{1,1}$ | 1 | 2 | (1, 0) | (1, 1) |
| Pair of parallel complex-conjugate lines: $\Gamma^{-1}_{1,1}$ | 1 | 2 | (1, 0) | (2, 0) |
| Double line: $\Gamma^{0}_{1,1}$ | 1 | 1 | (1, 0) | (1, 0) |
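For the non-degenerate cases, the rank/signature data of the table can be condensed into the classical determinant test below. This is a standard reformulation consistent with Table 5.2, not the book's own procedure: with $\delta=\det C_F$ and $\Delta=\det\overline C_F$, the type depends on the sign of $\delta$, and a real versus imaginary ellipse is distinguished by the sign of $(a_{11}+a_{22})\Delta$.

```python
from fractions import Fraction as F

# Classify a non-degenerate real conic
#   a11 x^2 + 2 a12 xy + a22 y^2 + 2 a13 x + 2 a23 y + a33 = 0
# from det(C_F) and det of the full 3x3 matrix; a standard invariant test.

def det2(m):
    return m[0][0]*m[1][1] - m[0][1]*m[1][0]

def det3(m):
    return (m[0][0]*(m[1][1]*m[2][2] - m[1][2]*m[2][1])
            - m[0][1]*(m[1][0]*m[2][2] - m[1][2]*m[2][0])
            + m[0][2]*(m[1][0]*m[2][1] - m[1][1]*m[2][0]))

def classify(a11, a12, a22, a13, a23, a33):
    C = ((a11, a12), (a12, a22))
    Cbar = ((a11, a12, a13), (a12, a22, a23), (a13, a23, a33))
    delta, Delta = det2(C), det3(Cbar)
    if Delta == 0:
        return "degenerate"
    if delta > 0:
        return "real ellipse" if (a11 + a22) * Delta < 0 else "imaginary ellipse"
    return "parabola" if delta == 0 else "hyperbola"

print(classify(1, 0, 1, 0, 0, -1))        # x^2 + y^2 = 1  -> real ellipse
print(classify(1, 0, -1, 0, 0, -1))       # x^2 - y^2 = 1  -> hyperbola
print(classify(-1, 0, 0, 0, F(1, 2), 0))  # y = x^2        -> parabola
```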

5.3 Real Conics

Ellipse, Hyperbola and Parabola

See Table 5.2 (classification of real affine conics).

Definition 5.55 The ellipse $\Gamma^{1}_{2,2;d_1,d_2}$ of equation $d_1x_1^2+d_2x_2^2=1$ $(0<d_1\le d_2)$ is usually put in the form
$$\Gamma_{a,b}:\ \frac{x^2}{a^2}+\frac{y^2}{b^2}=1,\qquad\text{where}\ a=\frac{1}{\sqrt{d_1}}\ge b=\frac{1}{\sqrt{d_2}},\quad x=x_1,\ y=x_2. \tag{5.65}$$
If $a=b$ the ellipse $\Gamma_{a,a}$ is the circle $S^1(O,a):x^2+y^2=a^2$ with centre the origin $O$ and radius $a$. To the ellipse $\Gamma_{a,b}$ with $a>b$ we associate:
1. The centre (of symmetry) $O$ and the axes (of symmetry) $y=0$ and $x=0$.
2. The foci $F_1=(c,0)$ and $F_2=(-c,0)$, with $c=\sqrt{a^2-b^2}$.
3. The directrices $r_2:x=-\dfrac{a^2}{c}$ and $r_1:x=\dfrac{a^2}{c}$.
Foci and directrices of $\Gamma_{a,b}$ have the following geometrical characterizations, whose easy proofs are left to the reader.


Fig. 5.1 Ellipse: $\dfrac{x^2}{a^2}+\dfrac{y^2}{b^2}=1$

Proposition 5.56 The geometric locus of $\Gamma_{a,b}$ can be characterized by each of the following relations:
$$\mathcal V(\widehat\Gamma_{a,b})=\{P\in\mathbb E^2(\mathbb R):d(P,F_1)+d(P,F_2)=2a\}=\Big\{P\in\mathbb E^2(\mathbb R):\frac{d(P,F_1)}{d(P,r_1)}=\frac{c}{a}\Big\}=\Big\{P\in\mathbb E^2(\mathbb R):\frac{d(P,F_2)}{d(P,r_2)}=\frac{c}{a}\Big\},$$
where $c=\sqrt{a^2-b^2}$ (Fig. 5.1).
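Both characterizations of Proposition 5.56 can be checked numerically; the semi-axes $(a,b)=(5,3)$ and the sample parameters below are arbitrary illustrative choices:

```python
import math

# Check of the focal and focus-directrix properties of the ellipse for
# (a, b) = (5, 3); here c = sqrt(a^2 - b^2) = 4 and the directrix r1 is
# the line x = a^2/c.
a, b = 5.0, 3.0
c = math.sqrt(a*a - b*b)
F1, F2 = (c, 0.0), (-c, 0.0)

def dist(P, Q):
    return math.hypot(P[0] - Q[0], P[1] - Q[1])

for t in (0.1, 1.0, 2.5, 4.0):
    P = (a*math.cos(t), b*math.sin(t))   # a point of the ellipse
    assert abs(dist(P, F1) + dist(P, F2) - 2*a) < 1e-9
    d_r1 = abs(P[0] - a*a/c)             # distance to the directrix r1
    assert abs(dist(P, F1)/d_r1 - c/a) < 1e-9
print("focal and focus-directrix properties hold")
```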

Definition 5.57 We put the hyperbola $\Gamma^{-1}_{1,2;d_1,d_2}$ of equation $d_1x_1^2-d_2x_2^2=-1$ in the more usual form (interchanging the coordinates $x_1$ and $x_2$)
$$\Delta_{a,b}:\ \frac{x^2}{a^2}-\frac{y^2}{b^2}=1,\qquad\text{where}\ a=\frac{1}{\sqrt{d_2}},\ b=\frac{1}{\sqrt{d_1}},\quad x=x_1,\ y=x_2. \tag{5.66}$$
To the hyperbola $\Delta_{a,b}$ we associate:
1. The centre (of symmetry) $O$ and the axes (of symmetry) $y=0$ and $x=0$.
2. The foci $F_1=(c,0)$ and $F_2=(-c,0)$, with $c=\sqrt{a^2+b^2}$.
3. The directrices $r_2:x=-\dfrac{a^2}{c}$ and $r_1:x=\dfrac{a^2}{c}$.
4. The asymptotes $l_2:\dfrac{x}{a}+\dfrac{y}{b}=0$ and $l_1:\dfrac{x}{a}-\dfrac{y}{b}=0$.


Fig. 5.2 Hyperbola: $\dfrac{x^2}{a^2}-\dfrac{y^2}{b^2}=1$

We let the reader prove the following geometrical characterizations:

Proposition 5.58 The geometric locus of $\Delta_{a,b}$ can be characterized by each of the following relations:
$$\mathcal V(\widehat\Delta_{a,b})=\{P\in\mathbb E^2(\mathbb R):|d(P,F_1)-d(P,F_2)|=2a\}=\Big\{P\in\mathbb E^2(\mathbb R):\frac{d(P,F_1)}{d(P,r_1)}=\frac{c}{a}\Big\}=\Big\{P\in\mathbb E^2(\mathbb R):\frac{d(P,F_2)}{d(P,r_2)}=\frac{c}{a}\Big\},$$
where $c=\sqrt{a^2+b^2}$ (Fig. 5.2).
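The focal property of Proposition 5.58 can be verified on both branches of an illustrative hyperbola with $(a,b)=(3,4)$ (arbitrary data, not from the text), using the hyperbolic parametrization $(\pm a\cosh t,\,b\sinh t)$:

```python
import math

# Check of |d(P, F1) - d(P, F2)| = 2a for the hyperbola with (a, b) = (3, 4);
# here c = sqrt(a^2 + b^2) = 5.
a, b = 3.0, 4.0
c = math.sqrt(a*a + b*b)
F1, F2 = (c, 0.0), (-c, 0.0)

def dist(P, Q):
    return math.hypot(P[0] - Q[0], P[1] - Q[1])

for t in (-1.5, 0.2, 1.0):
    for sign in (1.0, -1.0):             # the two branches
        P = (sign*a*math.cosh(t), b*math.sinh(t))
        assert abs(abs(dist(P, F1) - dist(P, F2)) - 2*a) < 1e-9
print("focal property of the hyperbola holds")
```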

Definition 5.59 The parabola $\Gamma^{2}_{1,1;d_1}:d_1x_1^2=2x_2$, usually written in the form
$$\Lambda_p:\ y=2px^2, \tag{5.67}$$
where $p=\dfrac{d_1}{4}$, $x=x_1$ and $y=x_2$, has no centre. The point $F=\Big(0,\dfrac{1}{8p}\Big)$ is called the focus of $\Lambda_p$, and the line $r:y=-\dfrac{1}{8p}$ is the directrix of $\Lambda_p$.


Fig. 5.3 Parabola: $y=2px^2$

Proposition 5.60 The geometric locus of $\Lambda_p$ is the set of the points $P\in\mathbb E^2(\mathbb R)$ such that
$$d(P,F)=d(P,r).$$
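The focus-directrix property of Proposition 5.60 can be checked for an illustrative value $p=3/4$ (an arbitrary choice, not from the text):

```python
import math

# Check of d(P, F) = d(P, r) for y = 2px^2 with p = 0.75, so that
# F = (0, 1/(8p)) and the directrix r is the line y = -1/(8p).
p = 0.75
F = (0.0, 1/(8*p))

for x in (-2.0, -0.3, 0.0, 1.7):
    P = (x, 2*p*x*x)                       # a point of the parabola
    d_focus = math.hypot(P[0] - F[0], P[1] - F[1])
    d_directrix = abs(P[1] + 1/(8*p))
    assert abs(d_focus - d_directrix) < 1e-9
print("focus-directrix property of the parabola holds")
```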

Remark 5.61 We have defined foci, directrices and asymptotes of a conic referring to the canonical forms (5.65), (5.66) and (5.67). We will see their intrinsic definitions with the aid of projective geometry in Chap. 10 (Fig. 5.3).

Intersections Between a Non-degenerate Conic and an Algebraic Plane Curve

In order to prove Theorem 5.63 we need a rational parametrization of the non-degenerate conics. For $\Gamma_{a,b}$ and $\Delta_{a,b}$ we need a homogeneous parametrization:

$$\Gamma_{a,b}:\ \begin{cases}x=\dfrac{a(u^2-v^2)}{u^2+v^2}\\[2mm] y=\dfrac{2buv}{u^2+v^2},\end{cases}\qquad(u,v)\in\mathbb R^2\setminus\{(0,0)\}, \tag{5.68}$$
$$\Delta_{a,b}:\ \begin{cases}x=\dfrac{a(u^2+v^2)}{u^2-v^2}\\[2mm] y=\dfrac{2buv}{u^2-v^2},\end{cases}\qquad(u,v)\in\mathbb R^2\setminus\{(0,0)\},\ u\ne\pm v, \tag{5.69}$$

while for $\Lambda_p$ we simply have $t\mapsto(t,2pt^2)$, $t\in\mathbb R$. The following theorem is a very simple case of the affine Bézout theorem and is an interesting consequence of the euclidean (or affine) classification of conics. We need the following preliminary remark.

Remark 5.62 Let $f(X,Y)=a_0X^n+a_1X^{n-1}Y+\dots+a_{n-1}XY^{n-1}+a_nY^n\in K[X,Y]$ be a homogeneous polynomial of degree $n$ with coefficients in a field $K$.


Let
$$Z(f):=\{(a,b)\in K\times K\setminus\{(0,0)\}:f(a,b)=0\}.$$
We put on $Z(f)$ the following equivalence relation: $(a,b)\sim(c,d)$ iff $a=kc$, $b=kd$ for some $k\in K^*$. We call the equivalence classes $\widehat{(a,b)}\in Z(f)/{\sim}$ the zeros of $f$. The zeros $(x,y)$ of $f(X,Y)$ with $x\ne 0$ are in bijection with the zeros of the polynomial
$$g(Z)=a_0+a_1Z+\dots+a_{n-1}Z^{n-1}+a_nZ^n\in K[Z].$$

If f (0, 1) = 0 (i.e. an = 0), then f (X, Y ) = X(a0 Xn−1 + a1 Xn−2 Y + · · · + an−1 Y n−1 ). Therefore we can conclude that f (X, Y ) has at most n zeros. ^∈ ^ ∈ H2 (2, R) be a non-degenerate conic of A2 (R) and G Theorem 5.63 Let F 2 Hd (2, R) an algebraic curve of degree d of A (R). Suppose that F does not divide G in R[X, Y ], then ( ) ^) ∩ V(G) ^ ≤ 2d. # V(F

In other words, the number of solutions of the algebraic system

F(x, y) = 0,   G(x, y) = 0

is ≤ 2d.
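Theorem 5.63 can be illustrated numerically. Below we take for F̂ the unit circle (the ellipse Γ1,1) and for G a product of three lines, so d = 3; we count the zeros of G along the circle by sign changes. The curve is an arbitrary example chosen so that the bound 2d = 6 is attained.

```python
import math

# G(x, y) = y (y - x)(y + x), an algebraic curve of degree 3.
def G(x, y):
    return y * (y - x) * (y + x)

# Sample G along the unit circle x^2 + y^2 = 1 and count sign changes;
# each transversal intersection of V(G) with the circle gives one change.
# The half-step offset avoids sampling a zero of G exactly.
N = 100000
vals = [G(math.cos(2 * math.pi * (k + 0.5) / N),
          math.sin(2 * math.pi * (k + 0.5) / N)) for k in range(N)]
changes = sum(1 for k in range(N) if vals[k] * vals[(k + 1) % N] < 0)
d = 3
assert changes <= 2 * d        # affine Bezout bound of Theorem 5.63
assert changes == 6            # here all six intersections are transversal
print(changes)                 # -> 6
```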

Proof If σ is an affine automorphism of A²(R), by (5.12) we have

V(σ · F̂) ∩ V(σ · Ĝ) = σ(V(F̂)) ∩ σ(V(Ĝ)) = σ(V(F̂) ∩ V(Ĝ)),

so that we can suppose that F̂ is Γa,b, Δa,b or Λp (if F̂ is an imaginary ellipse then V(F̂) = ∅, so that we have nothing to prove). We can restrict ourselves to the case of the ellipse F = Γa,b, the other cases being completely analogous (much easier if F̂ is Λp). We have to solve the algebraic system F(x, y) = G(x, y) = 0 in order to determine V(F̂) ∩ V(Ĝ). Thanks to parametrization (5.68) we have to find the solutions (u, v) ≠ (0, 0) of

G( a(u² − v²)/(u² + v²), 2buv/(u² + v²) ) = 0.   (5.70)

First we prove that the function G( a(u² − v²)/(u² + v²), 2buv/(u² + v²) ) is not identically zero, otherwise F would divide G. Indeed, by Theorem 1.10, in (R[Y])[X] we can write

G(X, Y) = F(X, Y)q(X, Y) + A(Y)X + B(Y),   (5.71)


5 Affine Hyperquadrics

where A(Y), B(Y) ∈ R[Y]. We have to prove that A(Y) and B(Y) are null polynomials. Let

A(Y) = a₀ + a₁Y + ··· + aₙYⁿ,   B(Y) = b₀ + b₁Y + ··· + bₙYⁿ,   (5.72)

with n = max{deg(A(Y)), deg(B(Y))}. If G( a(u² − v²)/(u² + v²), 2buv/(u² + v²) ) is identically zero, since F( a(u² − v²)/(u² + v²), 2buv/(u² + v²) ) is identically zero, it follows that

A( 2buv/(u² + v²) ) · a(u² − v²)/(u² + v²) + B( 2buv/(u² + v²) ) = 0,   ∀ (u, v) ∈ R² \ {(0, 0)}.   (5.73)

Taking u = 0 we have −aa₀ + b₀ = 0. Furthermore, multiplying (5.73) by (u² + v²)ⁿ⁺¹ and using (5.72), we get

aa₀(u² + v²)ⁿ(u² − v²) + 2aba₁uv(u² + v²)ⁿ⁻¹(u² − v²) + ··· + 2ⁿaₙabⁿuⁿvⁿ(u² − v²)
+ b₀(u² + v²)ⁿ⁺¹ + 2bb₁uv(u² + v²)ⁿ + ··· + 2ⁿbₙbⁿuⁿvⁿ(u² + v²) = 0,   ∀ (u, v) ∈ R² \ {(0, 0)},

so that the coefficient of u²ⁿ⁺² is aa₀ + b₀ = 0. Then the equalities aa₀ + b₀ = −aa₀ + b₀ = 0 imply a₀ = b₀ = 0, i.e. A(Y) = Y A'(Y) and B(Y) = Y B'(Y), with A'(Y), B'(Y) ∈ R[Y]. Hence the remainder A(Y)X + B(Y) in (5.71) can be rewritten as Y(A'(Y)X + B'(Y)). Since Y = 2buv/(u² + v²) is not a null function we have

A'( 2buv/(u² + v²) ) · a(u² − v²)/(u² + v²) + B'( 2buv/(u² + v²) ) = 0,   ∀ (u, v) ∈ R² \ {(0, 0)}.

Iterating our argument we get A(Y) = B(Y) = 0, i.e. F | G.

Let G(X, Y) = Fd(X, Y) + Fd−1(X, Y) + ··· + F₀(X, Y), where Fi(X, Y) is a homogeneous polynomial of degree i, i = 0, ..., d, and Fd(X, Y) ≠ 0. Hence we can write (5.70) in the form

(1/(u² + v²)^d) Fd(a(u² − v²), 2buv) + (1/(u² + v²)^(d−1)) Fd−1(a(u² − v²), 2buv) + ··· + F₀ = 0,


or, equivalently,

Fd(a(u² − v²), 2buv) + (u² + v²)Fd−1(a(u² − v²), 2buv) + ··· + (u² + v²)^d F₀ = 0.

The left-hand side is a homogeneous polynomial of R[u, v] of degree ≤ 2d which is not identically zero, since G( a(u² − v²)/(u² + v²), 2buv/(u² + v²) ) is not the null rational function; by Remark 5.62 it has at most 2d zeros. □

Now we wish to prove the following classical result by means of Theorem 5.63.

Theorem 5.64 (Affine Pascal's Theorem) Let F̂ ∈ H₂(2, R) be a non-degenerate conic of A²(R) and let P₁, ..., P₆ be six points of V(F̂) (a hexagon inscribed in the conic). Assume that

P₁P₂ ∦ P₄P₅,   P₃P₄ ∦ P₁P₆,   P₂P₃ ∦ P₅P₆;

then the points A = P₁P₂ ∩ P₄P₅, B = P₃P₄ ∩ P₁P₆ and C = P₂P₃ ∩ P₅P₆ are collinear (see Fig. 5.4).

Proof (Plücker) Let Lij be the line passing through Pi and Pj. Consider the polynomial

Fλ := L₁₂L₃₄L₅₆ + λL₁₆L₂₃L₄₅,   λ ∈ R.

For every λ ∈ R, Fλ is a non-zero polynomial of degree ≤ 3. Indeed, the ring R[X, Y] is factorial, so that Fλ ≠ 0 by our hypothesis on the lines Lij. We observe that A, B, C ∈ V(F̂λ) for all λ ∈ R but A, B, C ∉ V(F̂); indeed, by Theorem 5.63, a line and a non-degenerate conic have at most two points in common. Suppose that Fλ has degree ≤ 2. Since {P₁, ..., P₆} ⊆ V(F̂) ∩ V(F̂λ) for all λ, by Theorem 5.63 we get F | Fλ, contradicting what was said above. Hence Fλ has degree 3 for all λ ∈ R.

Fig. 5.4 Affine Pascal’s theorem


Table 5.3 Canonical forms of real conics: Theorems 5.40 and 5.48

Non-degenerate conics                       Affine equation    Euclidean equations
Ellipse                                     x₁² + x₂² = 1      d₁x₁² + d₂x₂² = 1, 0 < d₁ ≤ d₂
Imaginary ellipse                           x₁² + x₂² = −1     d₁x₁² + d₂x₂² = −1, 0 < d₁ ≤ d₂
Hyperbola                                   x₁² − x₂² = −1     d₁x₁² − d₂x₂² = −1, d₁, d₂ > 0
Parabola                                    x₁² = 2x₂          d₁x₁² = 2x₂, 0 < d₁

Degenerate conics                           Affine equation    Euclidean equations
Pair of complex-conjugate lines             x₁² + x₂² = 0      x₁² + d₂x₂² = 0, 1 ≤ d₂
Pair of incident real lines                 x₁² − x₂² = 0      x₁² − d₂x₂² = 0, 0 < d₂
Pair of parallel real lines                 x₁² = 1            d₁x₁² = 1, 0 < d₁
Pair of parallel complex-conjugate lines    x₁² = −1           d₁x₁² = −1, 0 < d₁
Double line                                 x₁² = 0            x₁² = 0

Choose a point P ∈ V(F̂) with P ≠ Pi, i = 1, ..., 6, and determine λ such that Fλ(P) = 0. Since

V(F̂) ∩ V(F̂λ) ⊇ {P₁, ..., P₆, P},

by Theorem 5.63 we have F | Fλ, i.e. Fλ = F · L, where L is a polynomial of degree 1. As noted above,

A, B, C ∈ V(F̂λ) \ V(F̂),

so that A, B and C belong to V(L̂). □
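Theorem 5.64 is easy to test numerically. The sketch below inscribes six points in the unit circle (the angles are arbitrary choices for which the three non-parallelism hypotheses hold), computes A, B, C and checks their collinearity.

```python
import math

def line(P, Q):
    # Coefficients (a, b, c) of the line ax + by + c = 0 through P and Q.
    (x1, y1), (x2, y2) = P, Q
    return (y1 - y2, x2 - x1, x1 * y2 - x2 * y1)

def meet(l1, l2):
    a1, b1, c1 = l1
    a2, b2, c2 = l2
    det = a1 * b2 - a2 * b1        # non-zero iff the lines are not parallel
    return ((b1 * c2 - b2 * c1) / det, (a2 * c1 - a1 * c2) / det)

# Six points of a hexagon inscribed in the unit circle x^2 + y^2 = 1.
angles = [0.0, 2.0, 4.5, 1.0, 3.0, 5.0]
P1, P2, P3, P4, P5, P6 = [(math.cos(t), math.sin(t)) for t in angles]

A = meet(line(P1, P2), line(P4, P5))
B = meet(line(P3, P4), line(P1, P6))
C = meet(line(P2, P3), line(P5, P6))

# Collinearity: the 3x3 determinant with rows (x, y, 1) must vanish.
det = (A[0] * (B[1] - C[1]) - A[1] * (B[0] - C[0])
       + (B[0] * C[1] - B[1] * C[0]))
assert abs(det) < 1e-9
print("A, B, C are collinear")
```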

Corollary 5.65 There is at least one conic passing through five points P₁, ..., P₅ of A²(R). Furthermore, if no four of P₁, ..., P₅ lie on a line, then the conic which contains them is unique, and if no three lie on a line the conic is necessarily non-degenerate.

Proof It follows from Table 5.3 and from Bézout's theorem. □
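Corollary 5.65 can be made concrete: a conic ax² + bxy + cy² + dx + ey + f = 0 has six coefficients up to scale, so five points impose five linear conditions. The sketch below (an illustrative implementation, not from the text) finds a nullspace vector of the 5×6 system by exact Gaussian elimination.

```python
from fractions import Fraction

def conic_through(points):
    # Rows of the 5x6 system: [x^2, xy, y^2, x, y, 1] for each point.
    M = [[Fraction(x) * x, Fraction(x) * y, Fraction(y) * y,
          Fraction(x), Fraction(y), Fraction(1)] for (x, y) in points]
    pivots, row = [], 0
    for col in range(6):
        piv = next((r for r in range(row, len(M)) if M[r][col] != 0), None)
        if piv is None:
            continue
        M[row], M[piv] = M[piv], M[row]
        for r in range(len(M)):
            if r != row and M[r][col] != 0:
                f = M[r][col] / M[row][col]
                M[r] = [a - f * b for a, b in zip(M[r], M[row])]
        pivots.append(col)
        row += 1
    # One free variable (the system has rank 5 for points in general position).
    free = next(c for c in range(6) if c not in pivots)
    sol = [Fraction(0)] * 6
    sol[free] = Fraction(1)
    for r, col in reversed(list(enumerate(pivots))):
        sol[col] = -sum(M[r][c] * sol[c] for c in range(6) if c != col) / M[r][col]
    return sol

pts = [(0, 0), (1, 0), (0, 1), (2, 3), (-1, 2)]
a, b, c, d, e, f = conic_through(pts)
for (x, y) in pts:
    assert a * x * x + b * x * y + c * y * y + d * x + e * y + f == 0
print("conic through the five points:", [str(t) for t in (a, b, c, d, e, f)])
```

Using `Fraction` keeps the arithmetic exact, so the vanishing of the conic at the five points is checked with equality rather than a tolerance.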

Corollary 5.66 There exists one and only one pencil of conics passing through four non-collinear points of A²(R).

Proof It follows from Corollary 5.65. □

5.4 Real Quadrics

Ellipsoids, Hyperboloids and Paraboloids

We need a preliminary definition (Table 5.4).


Table 5.4 Real affine quadrics: see Table 5.1

Non-degenerate quadrics              Symbol        rank(CF)  rank(C̄F)  Sign(CF)  Sign(C̄F)
Ellipsoid                            Γ^1_{3,3}        3         4       (3, 0)    (3, 1)
Imaginary ellipsoid                  Γ^−1_{3,3}       3         4       (3, 0)    (4, 0)
One-sheet hyperboloid                Γ^1_{2,3}        3         4       (2, 1)    (2, 2)
Two-sheet hyperboloid                Γ^−1_{2,3}       3         4       (2, 1)    (3, 1)
Hyperbolic paraboloid                Γ^2_{1,3}        2         4       (1, 1)    (2, 2)
Elliptic paraboloid                  Γ^2_{2,3}        2         4       (2, 0)    (3, 1)

Simply degenerate quadrics
Cone                                 Γ^0_{2,3}        3         3       (2, 1)    (2, 1)
Imaginary cone                       Γ^0_{3,3}        3         3       (3, 0)    (3, 0)
Elliptic cylinder                    Γ^1_{2,2}        2         3       (2, 0)    (2, 1)
Hyperbolic cylinder                  Γ^−1_{1,2}       2         3       (1, 1)    (2, 1)
Parabolic cylinder                   Γ^2_{1,1}        1         3       (1, 0)    (2, 1)
Imaginary cylinder                   Γ^−1_{2,2}       2         3       (2, 0)    (3, 0)

Reducible quadrics
2 distinct incident planes           Γ^0_{1,2}        2         2       (1, 1)    (1, 1)
2 complex-conjugate planes           Γ^0_{2,2}        2         2       (2, 0)    (2, 0)
2 distinct parallel planes           Γ^1_{1,1}        1         2       (1, 0)    (1, 1)
2 parallel complex-conjugate planes  Γ^−1_{1,1}       1         2       (1, 0)    (2, 0)
Double plane                                          1         1       (1, 0)    (1, 0)

Definition 5.67 A ruled surface of A³(K) is defined by the property that through every point of the surface there is at least one straight line which also lies on the surface. The lines are called the generators of the surface. Examples of ruled surfaces are the cones and the cylinders (Definition 5.14).

We obtain for non-degenerate quadrics their classical forms taking a = 1/√d₁, b = 1/√d₂, c = 1/√d₃ (x = x₁, y = x₂ and z = x₃). For the real ellipsoid F̂re and the imaginary ellipsoid F̂ie we have

F̂re : x²/a² + y²/b² + z²/c² = 1,   F̂ie : x²/a² + y²/b² + z²/c² = −1.   (5.74)

The principal diametral planes of F̂re are x = 0, y = 0, z = 0 and its axes are x = y = 0, y = z = 0, x = z = 0, while the origin is the centre.


The canonical equations of the one-sheet hyperboloid F̂i1 and the two-sheet hyperboloid F̂i2 are

F̂i1 : x²/a² + y²/b² − z²/c² = 1,   F̂i2 : x²/a² + y²/b² − z²/c² = −1.   (5.75)

They have the same principal diametral planes, axes and centre as F̂re. Furthermore, the one-sheet hyperboloid F̂i1 is a ruled surface covered by the lines

rt :  x/a − z/c = t(1 − y/b),   t(x/a + z/c) = 1 + y/b,   t ∈ R,
ru :  x/a + z/c = u(1 − y/b),   u(x/a − z/c) = 1 + y/b,   u ∈ R,   (5.76)

and

rt=∞ :  x/a + z/c = 0,  1 − y/b = 0;    ru=∞ :  x/a − z/c = 0,  1 − y/b = 0.   (5.77)
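That every line rt of (5.76) lies on F̂i1 can be confirmed directly: solving the two linear equations for x/a and z/c in terms of y/b gives (x/a)² − (z/c)² = (1 − y/b)(1 + y/b). The numerical sketch below checks this (a, b, c and t are arbitrary sample values).

```python
# For fixed t != 0, solve the system (5.76) for x and z in terms of y:
#   x/a - z/c = t(1 - y/b),   t(x/a + z/c) = 1 + y/b.
a, b, c = 2.0, 3.0, 1.5

def ruling_point(t, y):
    p = t * (1 - y / b)        # value of x/a - z/c
    q = (1 + y / b) / t        # value of x/a + z/c
    return a * (p + q) / 2, y, c * (q - p) / 2

t = 0.8
for y in [-4.0, -1.0, 0.0, 2.5, 7.0]:
    x, y0, z = ruling_point(t, y)
    F = x * x / a**2 + y0 * y0 / b**2 - z * z / c**2
    assert abs(F - 1) < 1e-12  # the whole line lies on the hyperboloid
print("the ruling r_t lies on the one-sheet hyperboloid")
```

The same computation with (5.79) verifies the rulings of the hyperbolic paraboloid.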

The sets {rt | t ∈ R ∪ {∞}} and {ru | u ∈ R ∪ {∞}} are called the rulings of F̂i1.

The elliptic and hyperbolic paraboloids take the forms

F̂ep : x²/a² + y²/b² = 2z,   F̂hp : x²/a² − y²/b² = 2z.   (5.78)

Their principal diametral planes are x = 0 and y = 0, and x = y = 0 is the axis of symmetry. They do not have a centre.

The hyperbolic paraboloid F̂hp is a ruled surface covered by the lines

r't :  x/a − y/b = t,   t(x/a + y/b) = 2z,   t ∈ R,
r'u :  x/a + y/b = u,   u(x/a − y/b) = 2z,   u ∈ R.   (5.79)

The sets {r't | t ∈ R} and {r'u | u ∈ R} are the rulings of F̂hp.

The only ruled non-degenerate quadrics are the one-sheet hyperboloid and the hyperbolic paraboloid. All the degenerate quadrics (with non-empty geometric


locus) are obviously ruled. The reader can easily prove the following properties of the rulings of a ruled non-degenerate quadric F̂:

(i) Two lines belonging to the same ruling are skew.
(ii) Every line of a ruling meets every line of the other ruling.
(iii) For every point P ∈ V(F̂) there is one and only one line of each of the two rulings passing through P.
(iv) For every point P ∈ V(F̂) the tangent plane to F̂ at P cuts V(F̂) along the lines of the two rulings passing through P.

Thanks to Proposition 5.29 and Table 5.2 we have the following

Definition 5.68 Let F̂ be an irreducible quadric of A³(R). A non-singular point P ∈ V(F̂) is:

1. elliptic if TP,F̂ ∩ F̂ is a pair of complex-conjugate lines,
2. hyperbolic if TP,F̂ ∩ F̂ is a pair of distinct real lines,
3. parabolic if TP,F̂ ∩ F̂ is a double line.

Proposition 5.69 Let F̂ be an irreducible quadric of A³(R). The regular points of F̂ are all:

(i) elliptic if and only if F̂ is an ellipsoid, a two-sheet hyperboloid or an elliptic paraboloid,
(ii) hyperbolic if and only if F̂ is a one-sheet hyperboloid or a hyperbolic paraboloid,
(iii) parabolic if and only if F̂ is a cone or a cylinder.

Proof If F̂ is non-degenerate, γ := TP,F̂ ∩ F̂ is a degenerate conic of rank 2, so that it is a pair of distinct real lines or two complex-conjugate lines by Proposition 5.29. If F̂ does not contain lines, then every point is elliptic. Suppose that F̂ contains a line r; then for every point P ∈ r, the line r is contained in TP,F̂ and hence also in γ, which is consequently the union of two lines through P. If instead P ∈ F̂ \ r, the plane joining r and P intersects F̂ in another line s passing through P, so that P is hyperbolic.

If F̂ is a cone or a cylinder, one and only one line passes through each non-singular point, so that F̂ has only parabolic points.
Conversely, if F̂ has a parabolic point P, then r = TP,F̂ ∩ F̂ is a double line passing through P; then F̂ must be simply degenerate, so that it is a cone or a cylinder. □

Quadrics of Revolution

Definition 5.70 Let F̂ be a non-degenerate or simply degenerate quadric of E³(R) such that #(V(F̂)) > 1. If there exists a line r such that ϕ(V(F̂)) = V(F̂) for all rotations ϕ about r, we say that F̂ is a quadric of revolution about r and we call r an axis of revolution. Every plane orthogonal to an axis of revolution of F̂ cuts F̂ in a circle (possibly reduced to a point) or in the empty set. From Table 5.5 we see that the only possible quadrics of revolution are the ellipsoids, the hyperboloids (one-sheet and two-sheet), the elliptic paraboloids, the cones and the cylinders.

Table 5.5 Real euclidean quadrics: see Theorem 5.48

Non-degenerate quadrics              Affine equation         Euclidean equations
Ellipsoid                            x₁² + x₂² + x₃² = 1     d₁x₁² + d₂x₂² + d₃x₃² = 1, 0 < d₁ ≤ d₂ ≤ d₃
Imaginary ellipsoid                  x₁² + x₂² + x₃² = −1    d₁x₁² + d₂x₂² + d₃x₃² = −1, 0 < d₁ ≤ d₂ ≤ d₃
One-sheet hyperboloid                x₁² + x₂² − x₃² = 1     d₁x₁² + d₂x₂² − d₃x₃² = 1, 0 < d₁ ≤ d₂, 0 < d₃
Two-sheet hyperboloid                x₁² + x₂² − x₃² = −1    d₁x₁² + d₂x₂² − d₃x₃² = −1, 0 < d₁ ≤ d₂, 0 < d₃
Hyperbolic paraboloid                x₁² − x₂² = 2x₃         d₁x₁² − d₂x₂² = 2x₃, 0 < d₁, 0 < d₂
Elliptic paraboloid                  x₁² + x₂² = 2x₃         d₁x₁² + d₂x₂² = 2x₃, 0 < d₁ ≤ d₂

Simply degenerate quadrics
Cone                                 x₁² + x₂² − x₃² = 0     x₁² + d₂x₂² − d₃x₃² = 0, 1 ≤ d₂, d₃ > 0
Imaginary cone                       x₁² + x₂² + x₃² = 0     x₁² + d₂x₂² + d₃x₃² = 0, 1 ≤ d₂ ≤ d₃
Elliptic cylinder                    x₁² + x₂² = 1           d₁x₁² + d₂x₂² = 1, 0 < d₁ ≤ d₂
Hyperbolic cylinder                  x₁² − x₂² = −1          d₁x₁² − d₂x₂² = −1, 0 < d₁, 0 < d₂
Parabolic cylinder                   x₁² = 2x₃               d₁x₁² = 2x₃, d₁ > 0
Imaginary cylinder                   x₁² + x₂² = −1          d₁x₁² + d₂x₂² = −1, 0 < d₁ ≤ d₂

Reducible quadrics
2 distinct incident planes           x₁² − x₂² = 0           x₁² − d₂x₂² = 0, d₂ > 0
2 complex-conjugate planes           x₁² + x₂² = 0           x₁² + d₂x₂² = 0, 1 ≤ d₂
2 distinct parallel planes           x₁² = 1                 d₁x₁² = 1, d₁ > 0
2 parallel complex-conjugate planes  x₁² = −1                d₁x₁² = −1, d₁ > 0
Double plane                         x₁² = 0                 x₁² = 0


Indeed, a non-degenerate or simply degenerate real quadric F̂ of E³(R) such that #(V(F̂)) > 1 is of revolution if and only if its matrix CF has a double non-zero eigenvalue. If the eigenvalue of CF is triple then F̂ is a sphere, which has every line through its centre as an axis of revolution. In the remaining cases there is only one axis of revolution (the axis of revolution), whose director subspace is generated by an eigenvector of the simple eigenvalue. Every axis of revolution is an axis of symmetry of F̂ and every plane containing the axis of revolution is a principal diametral plane of F̂. Canonical forms of quadrics of revolution are:

• d₁x₁² + d₂x₂² + d₃x₃² = 1, with 0 < d₁ = d₂ < d₃: ellipsoid of revolution with axis the line x₁ = x₂ = 0;
• d₁x₁² + d₂x₂² − d₃x₃² = ±1, with d₁ = d₂ > 0, d₃ > 0: hyperboloid (one-sheet and two-sheet) of revolution with axis the line x₁ = x₂ = 0;
• d₁x₁² + d₂x₂² = 2x₃, with d₁ = d₂ > 0: elliptic paraboloid of revolution with axis the line x₁ = x₂ = 0;
• d₁x₁² + d₂x₂² = 1, with d₁ = d₂ > 0: cylinder of revolution with axis the line x₁ = x₂ = 0;
• x₁² + d₂x₂² − d₃x₃² = 0, with d₂ = 1: cone of revolution with axis the line x₁ = x₂ = 0.
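The revolution property can also be tested directly: a quadric is of revolution about the x₃-axis iff its equation is invariant under every rotation about that axis. A small numerical check (the two sample quadrics are arbitrary illustrative choices):

```python
import math, random

def rot_z(theta, p):
    x, y, z = p
    ct, st = math.cos(theta), math.sin(theta)
    return (ct * x - st * y, st * x + ct * y, z)

# d1 = d2: a one-sheet hyperboloid of revolution about x1 = x2 = 0.
F_rev = lambda p: 2 * p[0]**2 + 2 * p[1]**2 - p[2]**2
# d1 != d2: not of revolution.
F_gen = lambda p: 1 * p[0]**2 + 4 * p[1]**2 - p[2]**2

random.seed(0)
for _ in range(100):
    p = tuple(random.uniform(-2, 2) for _ in range(3))
    th = random.uniform(0, 2 * math.pi)
    assert abs(F_rev(rot_z(th, p)) - F_rev(p)) < 1e-9

# For unequal coefficients, invariance fails already for a quarter turn:
p, th = (1.0, 0.0, 0.0), math.pi / 2
assert abs(F_gen(rot_z(th, p)) - F_gen(p)) > 1.0
print("equal x1^2, x2^2 coefficients give rotation invariance about the x3-axis")
```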

5.5 Exercises

Exercise 5.1 Find the multiplicity of O = (0, 0) and the tangent cone for the following real plane curves:

x + 2x² + y³ = 0;   2x + 2x² + y⁴ = 0;   x − y = y³;   xy − y² = x³;
3xy = x³ + y³;   2xy + x⁴ + y⁴ = 0;   xy = x⁴ + xy³ + y³;   8xy = (x² + y²)²;
x² − yx² + y⁴ = 0;   x² − y³ = 0;   xy² = x⁴ + y⁴.

Exercise 5.2 Find the multiplicity of O = (0, 0, 0) and the tangent cone for the following surfaces of A³(R):

z = xy + x³;   xy − z² + x³ − y³ = 0;   xy + z³ = 0;
y² + x³ + z² = 0;   xyz + x³ + y³ = 0;   xyz = x²y² + y²z² + z²x²;
x³ + xy² + z³ = 0;   x² + xy² + z³ = 0;   z(x² + y²) = xy;
z² = x(x² + y²);   x + y² + z³ = 0;   xyz = (x + y + z)³;
xyz = 1.

Exercise 5.3 ([12]) Let F̂ be a hypersurface of Aⁿ(K), P ∈ F̂ and Ĥ a hyperplane passing through P and not contained in F̂. Prove the following assertions:

(a) mP(F̂ ∩ Ĥ) ≥ mP(F̂); in particular, if P is a singular point of F̂, then it is a singular point for F̂ ∩ Ĥ.
(b) The hypersurface F̂ ∩ Ĥ of Ĥ is singular at P if and only if Ĥ ⊂ TP(F̂).
(c) There exists a hyperplane Ĥ such that mP(F̂ ∩ Ĥ) = mP(F̂).

Exercise 5.4 Let F = 0 and G = 0 be the equations of two non-parallel planes of A³(K) and ϕ ∈ K[u, v] be a polynomial in the indeterminates u and v. Show that ϕ(F, G) = 0 represents a cylinder whose generators are parallel to the line F = G = 0.

Exercise 5.5 A conoid is a ruled surface whose generators stay parallel to a plane, called the director plane of the conoid, while intersecting a line L, called the axis of the conoid. Prove that a surface F̂ of A³(K) is a conoid whose director plane is A = 0 and whose axis is the line B = C = 0 if it can be written in the form ϕ(A, B/C) = 0 (after clearing denominators), where ϕ ∈ K[u, v] is a polynomial in the indeterminates u and v, and A = 0, B = 0 and C = 0 are the equations of three planes of A³(K) not belonging to the same pencil.

Exercise 5.6 Consider the following surfaces:

x − 2y + 3z − 2(3x + 2y − 1)³ = 0

(x − 2y + z) − 3(x + y + 2z)³ = (2x − y + 3z)²
2x³ − 3y³ + z³ + x²z = 0
(x − 1)(y − 2)² + 3(z − 1)³ = 0
(x − y)² + 2(y − z)³ − 3(z − x)⁵ = 0
y = 3x(z − x)³
(x − 2y + z)² = (x³ − x + 1)(x + y + z)²
x²z = y² (Whitney's umbrella).

Determine which are cones, cylinders or conoids and find their vertices, director planes and axes.

Exercise 5.7 Let F(x, y, z) = xy² − z² ∈ R[x, y, z].

(a) Find the singular points of F̂.
(b) Determine the lines contained in V(F̂).
(c) Say whether F̂ is irreducible.

Exercise 5.8 Consider the non-degenerate hyperquadric Γ^1_{p,n;d₁,...,dₙ} of Eⁿ(R) (see Theorem 5.48). Prove that

min { d(O, P)² | P ∈ V(Γ^1_{p,n;d₁,...,dₙ}) } = 1/dp,

where d(O, P) is the euclidean distance of P from the origin O = (0, ..., 0), and determine the point of minimum.
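For Exercise 5.8, a quick numerical experiment with n = 2: on the hyperbola d₁x₁² − d₂x₂² = 1 (so p = 1) the squared distance to the origin is minimized at x₂ = 0, x₁² = 1/d₁, giving 1/dp. A brute-force sketch over the parametrization x₁ = cosh(t)/√d₁, x₂ = sinh(t)/√d₂ (the values of d₁, d₂ are arbitrary):

```python
import math

d1, d2 = 0.5, 3.0   # hyperbola 0.5 x1^2 - 3 x2^2 = 1, hence p = 1
best = min(
    math.cosh(t)**2 / d1 + math.sinh(t)**2 / d2
    for t in [k / 1000.0 for k in range(-5000, 5001)]
)
assert abs(best - 1 / d1) < 1e-6   # min d(O, P)^2 = 1/d_p with p = 1
print(best)                        # -> 2.0, attained at t = 0
```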


Exercise 5.9 Let Sⁿ⁻¹(A, R) be the (n − 1)-dimensional sphere of Eⁿ(R) having centre A = (a₁, ..., aₙ) and radius R, whose equation is F = Σᵢ₌₁ⁿ (xᵢ − aᵢ)² − R² (see Definition 4.21). Let d be a line passing through a fixed point B = (b₁, ..., bₙ) and intersecting Sⁿ⁻¹(A, R) at B₁ and B₂ (where we may have B₁ = B₂). The real number F(b₁, ..., bₙ) is called the power of B with respect to Sⁿ⁻¹(A, R) and denoted by PB(Sⁿ⁻¹(A, R)). Prove that:

1. PB(Sⁿ⁻¹(A, R)) = ⟨B − A, B − A⟩ − R²,
2. d(B, B₁) · d(B, B₂) = |PB(Sⁿ⁻¹(A, R))|.

In particular, PB(Sⁿ⁻¹(A, R)) depends only on B and not on the line d.

Exercise 5.10 Let Sⁿ⁻¹(A, R) and Sⁿ⁻¹(A', R') be two eccentric spheres (i.e. A ≠ A'). Show that the set

{B ∈ Eⁿ(R) : PB(Sⁿ⁻¹(A, R)) = PB(Sⁿ⁻¹(A', R'))}

is a hyperplane orthogonal to the line AA', called the radical hyperplane (or radical axis if n = 2) of Sⁿ⁻¹(A, R) and Sⁿ⁻¹(A', R'). If the two spheres are secant, the radical hyperplane is merely the hyperplane containing their intersection. Prove that the geometric locus of the points of E³(R) having the same power with respect to three spheres, whose centres are not collinear, is a line perpendicular to the plane determined by the centres of the spheres.

Exercise 5.11 Determine the affine type of the following quadrics of E³(R):

xy − yz = 1;   x² + 4y² − 6z² − 2x + 8y − 2 = 0;   x² + y² + z² + z(3x − y − 1) = 0;
xy + yz + xz = 1;   2x² + y² − 2xy + 2x − z² = 0;   x² + y² + 3yz − 2x + 4y = 0;
x² − xy + y² − 2x = 1;   xy − xz + z² = 1;   3z² + 2xy = 1;   x² − 6xy = 2z + 3;
3x² − xy + y² = z + 3;   xy + yz + xz = −1;   (x + y − 6z)² = 4z;   z² − y² = 3;
(2x − 1)² + z² − 3z = 0;   2x² − xy + xz − yz + z² − 3x + z = 0.

(a) Say which of the above quadrics are of revolution and find their axes of revolution.
(b) Say which of the above quadrics are ruled and describe their rulings.
(c) Find the euclidean canonical forms for some of the non-degenerate quadrics.
(d) Determine their possible centres, principal diametral planes and axes of symmetry.

Exercise 5.12 Determine the planes passing through the line r : x = y = z + 2 of E³(R) and tangent to the sphere x² + y² + z² − 2x = 0.
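The power of a point of Exercises 5.9 and 5.10 can be previewed numerically: for a sphere of centre A and radius R, the power of B equals ⟨B − A, B − A⟩ − R², and for any line through B meeting the sphere at B₁, B₂ the product of distances equals its absolute value. An illustrative sketch (centre, radius and direction are arbitrary choices):

```python
import math

A, R = (1.0, -2.0, 0.5), 2.0
B = (4.0, 1.0, -1.0)

def sub(p, q): return tuple(a - b for a, b in zip(p, q))
def dot(p, q): return sum(a * b for a, b in zip(p, q))

power = dot(sub(B, A), sub(B, A)) - R * R

# Line B + s u: substituting into |P - A|^2 = R^2 gives s^2 + 2 b s + g = 0.
u = (-3.0, -3.0, 1.5)                  # aimed toward A, so the line meets the sphere
u = tuple(c / math.sqrt(dot(u, u)) for c in u)   # unit direction
w = sub(B, A)
bcoef, g = dot(u, w), dot(w, w) - R * R
disc = bcoef * bcoef - g
assert disc >= 0
s1, s2 = -bcoef + math.sqrt(disc), -bcoef - math.sqrt(disc)
# |s1| and |s2| are d(B, B1) and d(B, B2) since u is a unit vector.
assert abs(abs(s1) * abs(s2) - abs(power)) < 1e-9
print(power)                           # -> 16.25
```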


Exercise 5.13 Determine the cone of vertex V circumscribed around a quadric F̂ of E³(R), i.e. the locus of all lines passing through V and tangent to F̂. By a more geometric method determine the cone of vertex the origin O = (0, 0, 0) circumscribed around the sphere of centre C = (2, 3, 4) and radius 1.

Exercise 5.14 Prove that the asymptotes of a hyperbola Γ and the lines parallel to the asymptotes passing through a point P ∈ V(Γ) form a parallelogram of constant area.

Exercise 5.15 Let C₁ and C₂ be two tangent circles. Find the geometric locus of the centres of all circles tangent to C₁ and C₂.

Exercise 5.16 Find all parabolas whose axis is the line y = 2x and passing through the point A(1, 0).

Exercise 5.17 Determine the equilateral hyperbolas (i.e. with orthogonal asymptotes) having the line y = 3x as an asymptote and passing through the point (2, 0).

Exercise 5.18 Find the conic whose axes are the lines x = y and x = −y, having the line y = 3x − 5 as tangent at the point (2, 1).

Exercise 5.19 Determine all the conics with centre at the point (0, 1), tangent to the line 2x − y = 1 at the point (1, 1) and tangent to the line x = 2y.

Exercise 5.20 Let r₁ and r₂ be two perpendicular lines. Determine the locus

{P ∈ E²(R) : d(P, r₁)d(P, r₂) = k},

where k ∈ R is fixed.

Exercise 5.21 Let Γ be a hyperbola with asymptotes the lines r₁ and r₂. Show that the product d(P, r₁)d(P, r₂) does not depend on P ∈ Γ.

Exercise 5.22 Let Γλ be the conic of E²(R) having equation

(2 − λ)x² − 2λxy + (2 − λ)y² − 8x − 8y + 8 = 0,

where λ > 0.

(i) Show that all the conics Γλ have a common symmetry axis.
(ii) Determine the affine type of Γλ.
(iii) Find the euclidean canonical form of Γλ for λ = 1.

Exercise 5.23 Let r and s be two skew lines of E³(R). Show that the locus

{P ∈ E³(R) : d(P, r)/d(P, s) = λ},   λ > 0 fixed,

is a hyperbolic paraboloid if λ = 1 and a one-sheet hyperboloid if λ ≠ 1.


Exercise 5.24 Prove that all quadrics of the pencil generated by a quadric of revolution and a sphere are of revolution, having all their axes parallel and lying on the same plane.

Exercise 5.25 Determine the planes passing through the line r : x − z = y − 2z − 1 = 0 of E³(R) and tangent to the quadric z = xy.

Exercise 5.26 Find the affine type of the following conics:

⎧ x − y = 0,             ⎧ (x − 2y)² = z,        ⎧ xy + z² = 1,
⎩ x² + y² − yz = 1;      ⎩ 3x + y + 4z = 8;      ⎩ 2x = y + 3.

Exercise 5.27 (Group Law on a Non-degenerate Affine Conic) Let f̂ be a non-degenerate conic of A²(K), and let N be any fixed point on it. We define an operation as follows:

1. Let P, Q ∈ V(f̂), P ≠ Q. Through N we draw a line rN parallel to PQ. This line must intersect f̂ at another point, say R. We define P + Q := R.
2. If P = Q, then the line PQ is tangent to f̂ at P. So to find P + P, which we denote by 2P, we draw a line through N parallel to the tangent at P to f̂; then its second point of intersection with f̂ is 2P.
3. If the line through N parallel to PQ is tangent to f̂ at N, then we take R = N, i.e. P + Q = N.

Prove that V(f̂) is an abelian group equipped with the operation defined above. (Hint: Use Pascal's Theorem 5.64 to check associativity.)
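For K = R one can experiment with this group law on the unit circle f̂ : x² + y² = 1 with N = (1, 0). The chord construction below (an illustrative sketch, not from the text) realizes P + Q as the second intersection with f̂ of the line through N parallel to PQ; writing points as angles, the operation becomes addition of angles, which makes commutativity and associativity transparent.

```python
import math

N = (1.0, 0.0)

def add(P, Q):
    # Direction of PQ (or of the tangent at P when P = Q).
    if P == Q:
        d = (-P[1], P[0])                # tangent direction at P
    else:
        d = (Q[0] - P[0], Q[1] - P[1])
    # Second intersection of N + s d with x^2 + y^2 = 1:
    # (1 + s dx)^2 + (s dy)^2 = 1  =>  s = -2 dx / (dx^2 + dy^2).
    s = -2 * d[0] / (d[0]**2 + d[1]**2)  # s = 0 gives back N (rule 3)
    return (N[0] + s * d[0], N[1] + s * d[1])

def pt(theta):
    return (math.cos(theta), math.sin(theta))

p, q = 0.7, 1.1
S = add(pt(p), pt(q))
# The sum of the points at angles p and q is the point at angle p + q.
assert abs(S[0] - math.cos(p + q)) < 1e-12
assert abs(S[1] - math.sin(p + q)) < 1e-12
# N is the neutral element.
assert max(abs(a - b) for a, b in zip(add(pt(p), N), pt(p))) < 1e-12
print("P + Q corresponds to adding angles on the circle")
```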

Chapter 6

Projective Spaces

6.1 Some Elementary Synthetic Projective Geometry

We start by giving some notions on projective geometry from a synthetic point of view. However, the main topic of this chapter is the projective space associated with a K-vector space, where K is a field. Good references for the synthetic approach are [1, 5, 16] and [23].

Projective Planes

Definition 6.1 Let X be a non-empty set and D a non-empty subset of P(X), the set of all subsets of X. The elements of X will be called points and those of D, lines. If a point A belongs to the line d we say that A lies on d or that d passes through A. A pair (X, D) is an incidence structure if it satisfies the following axiom:

(P₁) For every two distinct points A, B ∈ X there is one and only one line d ∈ D such that d joins A and B, i.e. A ∈ d and B ∈ d. The line d will be denoted by AB.

.

It is immediately seen that the intersection of two lines d and d' of (X, D) is either d if d = d', one point, or ∅.

Example 6.2 Let (A, V, ϕ) be an affine space and D the set of all affine subspaces of dimension 1 of A. Then the pair (A, D) is an incidence structure by Corollary 3.28.

Definition 6.3 Let Y ⊂ X be a non-empty subset of an incidence structure (X, D). We say that Y is a linear subvariety of (X, D) if for every two distinct points A, B ∈ Y the line AB ∈ D is contained in Y, i.e. if the pair (Y, D') is an incidence structure, where D' is the set of the lines of D contained in Y. The points and the lines of (X, D) are examples of linear subvarieties. We also consider the empty set ∅ as a linear subvariety of X.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2024. L. Bădescu, E. Carletti, Lectures on Geometry, La Matematica per il 3+2 158, https://doi.org/10.1007/978-3-031-51414-2_6


Definition 6.4 Let (X, D) be an incidence structure and A, B, C ∈ X three points. We say that A, B, C are collinear if there exists a line d ∈ D such that A, B, C ∈ d. In other words, A, B, C are collinear if and only if C ∈ AB. Since AC = AB and BC = AC this definition does not depend on the order of the points.

Definition 6.5 An incidence structure (X, D) is called a projective plane if it satisfies the following axioms:

(P₂) For every two lines d, d' ∈ D, d ∩ d' ≠ ∅.
(P₃) There exist four points A, B, C, D ∈ X no three of which are collinear.

The elements of .D will be called projective lines. Definition 6.6 An affine plane is an incidence structure .(X, D) such that: (A2 ) given any line .d ∈ D and any point P not on d, there is exactly one line ' .d ∈ D through P that does not meet d; .(A3 ) there exist four points A, B, C, .D ∈ X and no three of them are collinear. .

We say that two lines are parallel if they are equal, or if they have no points in common. An affine space .(A, V , ϕ) of dimension 2 is an affine plane according to the above definition (for .(A2 ) see Proposition 3.74 and in .A2 (K) the points .(0, 0), (1, 0), (0, 1), (1, 1) satisfy axiom .(A3 )). Example 6.7 Consider the sets .X = {1, 2, 3, 4, 5, 6, 7} and D = {{1, 2, 6}, {2, 3, 4}, {1, 3, 5}, {1, 4, 7}, {2, 5, 7}, {3, 6, 7}, {4, 5, 6}}.

.

Then .(X, D) is a projective plane called Fano’s plane (see Fig. 6.1). It is the “smallest” projective plane. Example 6.8 Let .D' be the set of all affine lines of the affine plane .A2 (K). We put on .D' the following equivalence relation: d ∼ d'

.

Fig. 6.1 Fano’s projective plane

⇐⇒

d ‖ d '.


If we denote the equivalence class of d by [d] we can put

[d] = m   if d has the equation y = mx + n,
[d] = ∞   if d has the equation x = a.

Thus the quotient set .ω := D' / ∼ can be identified with .K ∪ {∞}. Let .X := A2 (K) ∪ ω (disjoint union). The points of .A2 (K) will be called the proper points of .X; the points of .ω the improper points or the points at infinity of .X. The set .ω is called the line at infinity of .X. To every line .d ∈ D' we associate d := d ∪ {[d]}.

.

Then .d ⊂ X. Finally, we define the set of projective lines D := {d| d ∈ D' } ∪ {ω}.

.

Lemma 6.9 The pair (X̄, D̄) of Example 6.8 is a projective plane.

Proof We have to check axiom (P₁). Let A, B ∈ X̄, A ≠ B.

• If A and B are proper points, there exists only one affine line d ∈ D' passing through A and B. Then d̄ is the unique line of D̄ passing through A and B.
• If A is a proper point and B ∈ ω, then B = [d], with d ∈ D'. By Proposition 3.74 there is a unique affine line d' parallel to d through A; hence d̄' is the unique line of D̄ passing through A and B = [d] = [d'].
• If A, B ∈ ω, then ω is the unique line of D̄ through A and B.

Now we check axiom (P₂). Let d̄ and d̄' be two distinct lines of D̄, with d, d' ∈ D' (hence d ≠ d'). If d and d' are not parallel, i.e. [d] ≠ [d'] and d ∩ d' ≠ ∅, then a fortiori d̄ ∩ d̄' ≠ ∅. If d ‖ d' (i.e. [d] = [d'] =: A ∈ ω) we have d̄ ∩ d̄' = {A}. Finally, every line d̄, with d ∈ D', meets the line at infinity ω in the point at infinity [d].

Axiom (P₃) is obviously satisfied by the points (0, 0), (1, 0), (0, 1), (1, 1) of A²(K). □
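The projective-plane axioms are finite checks for Fano's plane of Example 6.7, so they can also be verified exhaustively by a short script (an illustrative sketch):

```python
from itertools import combinations

X = set(range(1, 8))
D = [{1, 2, 6}, {2, 3, 4}, {1, 3, 5}, {1, 4, 7},
     {2, 5, 7}, {3, 6, 7}, {4, 5, 6}]

# (P1): every two distinct points lie on exactly one line.
for A, B in combinations(X, 2):
    assert sum(1 for d in D if A in d and B in d) == 1

# (P2): every two lines meet.
for d1, d2 in combinations(D, 2):
    assert d1 & d2

# (P3): there are four points no three of which are collinear.
quads = [Q for Q in combinations(X, 4)
         if all(not set(T) <= d for T in combinations(Q, 3) for d in D)]
assert quads
print("Fano's plane satisfies (P1), (P2) and (P3)")
```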

Lemma 6.10 Let (X, D) be a projective plane and A, B, C ∈ X three non-collinear points. Then we have

X = ⋃_{F ∈ BC} AF.

Proof We have only to prove the inclusion .⊂. Let .D ∈ X, .D /= A. By axiom .(P2 ), AD ∩ BC /= ∅, hence there exists a point .F ∈ AD ∩ BC, so that .D ∈ AF , with .F ∈ BC. ⨆ ⨅ .

Lemma 6.11 Let .(X, D) be a projective plane and d, .d ' ∈ D two arbitrary lines. Then there exists a bijection .f : d → d ' .


Fig. 6.2 Bijection between two projective lines

Proof We can obviously suppose that .d /= d ' . We claim that there exists a point ' .P ∈ X \ (d ∪ d ). By axiom .(P3 ) let A, B, C, .D ∈ X be four points such that no line contains all three of them. If all these points belong to .d ∪ d ' , we should have A, .B ∈ d \ d ' and C, .D ∈ d ' \ d for instance. We claim that .AD ∩ BC = P ∈ X \ (d ∪ d ' ). Indeed, if .P ∈ d (or .P ∈ d ' ) then A, B and D (A, D and C respectively) would be collinear. Fix a point .P ∈ X \ (d ∪ d ' ). We can define the map .f : d → d ' such that ' ' .f (A) := AP ∩ d , for all .A ∈ d (see Fig. 6.2). Let .g : d → d be the map defined ' ' ' ' by .g(B ) := B P ∩ d for all .B ∈ d . By axiom .(P1 ), g is the inverse of f so that f is bijective. ⨆ ⨅ We introduce two more axioms concerning an incidence structure .(X, D): (P3' ) Every line .d ∈ D contains at least three points, '' .(P ) There exist three non-collinear points in .X. 3 .

Proposition 6.12 Let .(X, D) be an incidence structure. The following conditions are equivalent: (i) .(X, D) is a projective plane. (ii) .(X, D) satisfies axioms .(P2 ), .(P3' ) and .(P3'' ). Proof (i).=⇒ (ii). It is obvious that .(P3 ) implies .(P3'' ). Let A, B, C, .D ∈ X be four points no three collinear. then the intersection .AD ∩ BC is a point E distinct from B and from C. Hence the line BC has at least three distinct points. By Lemma 6.11 every other line of .X has at least three distinct points, so that .(P3 ) implies .(P3' ). (ii).=⇒ (i). By axiom .(P3'' ) there exist three non-collinear points A, B, .C ∈ X. By axiom .(P3' ) the line BC contains a point E with .E /= B and .E /= C. By the same axiom AE contains a point D different from A and E. Hence A, B, C, D are four points such that no three of them lie on a line, so that axiom .(P3 ) is verified. ⨆ ⨅ Proposition 6.13 A projective plane .(X, D) satisfies the following axiom: (P3⋆ ) Through every point .P ∈ X there pass at least three lines of .D.

.

Proof Let P be an arbitrary point of .X and r be a line which does not contain P . Take three distinct points Q, R and S on r, then through P there pass the distinct


lines PQ, PR and PS. For each P a line r not containing P can be found by taking two distinct lines s and t through P, a point A ∈ s and a point B ∈ t, with A ≠ P and B ≠ P, so that the line AB does not contain P. □

Definition 6.14 Let (X, D) and (X', D') be two projective planes. A map f : X → X' is called an isomorphism of projective planes if it is bijective and for every three collinear points A, B, C ∈ X, f(A), f(B) and f(C) are collinear points of X'. A projective plane (X, D) is isomorphic to the projective plane (X', D') if there is an isomorphism of projective planes f : X → X'. If f : X → X' and g : X' → X'' are isomorphisms, then g ∘ f : X → X'' is clearly an isomorphism. An isomorphism f : X → X is called an automorphism of X. Isomorphism of affine planes is defined in the same manner. The following proposition states that the relation of isomorphism of projective planes is symmetric.

Proposition 6.15 If f : X → X' is an isomorphism of projective (affine) planes, then also f⁻¹ : X' → X is an isomorphism of projective (affine) planes.
Every line of Fano's plane (Example 6.7) contains exactly three points. We also have the converse.

Proposition 6.17 Let (X', D') be a projective plane all of whose lines contain exactly three points. Then (X', D') is isomorphic to Fano's plane (see Exercise 1.3).

Proof Since every line contains exactly three points, by Lemma 6.10 we see immediately that #(X') = 7. Let A, B, C ∈ X' be three non-collinear points. The remaining points are as in Fig. 6.3 (Fano's projective plane). It is easy to see that the map f : {1, 2, 3, 4, 5, 6, 7} →


{A, B, C, D, E, F, G}, where f(1) = A, f(2) = B, f(3) = C, f(4) = D, f(5) = E, f(6) = F and f(7) = G, is an isomorphism of projective planes (compare it with Fig. 6.1). ⨆⨅
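Since Fano's plane is finite, the axioms can be checked exhaustively by machine. A minimal sketch (not from the book; the seven 3-point lines below are one standard labelling of Fano's plane) that verifies axiom (P1), that every line has exactly three points, and that any two distinct lines meet in exactly one point:

```python
from itertools import combinations

# One standard labelling of the Fano plane: 7 points, 7 lines of 3 points each.
FANO_LINES = [
    {1, 2, 3}, {1, 4, 5}, {1, 6, 7},
    {2, 4, 6}, {2, 5, 7}, {3, 4, 7}, {3, 5, 6},
]
POINTS = set().union(*FANO_LINES)

def is_incidence_structure(points, lines):
    """Axiom (P1): every pair of distinct points lies on exactly one line."""
    return all(
        sum(1 for line in lines if p in line and q in line) == 1
        for p, q in combinations(points, 2)
    )

assert is_incidence_structure(POINTS, FANO_LINES)
assert all(len(line) == 3 for line in FANO_LINES)   # three points per line
assert all(len(l1 & l2) == 1                        # (P2): two lines always meet
           for l1, l2 in combinations(FANO_LINES, 2))
```

The exhaustive pair checks are exactly what Proposition 6.17's counting argument (#(X') = 7) makes finite.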


Proposition 6.18 Let (X, D) be a projective plane, r ∈ D a fixed line, Xr = X \ r and Dr = {dr := d ∩ Xr : d ∈ D, d ≠ r}. Then (Xr, Dr) is an affine plane.

Conversely, let (X, D) be an affine plane. Consider the equivalence relation on D defined by d ∼ d' if and only if d ∥ d', and denote the equivalence class of a line d by d̃. Put

X̃ := X ∪ {d̃ : d ∈ D},   d∞ := {d̃ : d ∈ D},   D̃ := {d ∪ {d̃} : d ∈ D} ∪ {d∞}.

Then (X̃, D̃) is a projective plane, called the projective closure of (X, D) (see Example 6.8). Finally, if (X, D) is a projective plane, then the projective closure of (Xr, Dr) is (X, D) itself, with d∞ = r; analogously, if (X, D) is an affine plane, then (X̃)d∞ = X and (D̃)d∞ = D.

Proof Two lines dr and er of Xr are parallel if and only if d ∩ e ∈ r. First we immediately see that (Xr, Dr) is an incidence structure; the axioms (A2) and (A3) of Definition 6.6 are then easily proved. Let P be a point and dr a line of Xr not containing P, and let e = PQ where Q = r ∩ d; then the line er := e ∩ Xr is the only line of Xr passing through P and parallel to dr. Finally, let A, B, C, D be four points of X such that no three of them lie on a line. We can check axioms (A2) and (A3) directly on Fano's plane, and then suppose that every projective line contains at least four points. If A ∈ r but B, C and D do not lie on r, we can choose a point E on AB with E ≠ A, B and E ∉ CD, so that E, B, C, D satisfy axiom (A3): indeed E, B, C are not collinear (otherwise A, B, C would be collinear), and likewise E, B, D are not collinear; the points B, C, D are not collinear by hypothesis, as well as E, C, D. By a similar argument we see that if A, B ∈ r but C and D do not lie on r, we can find two points E, F ∈ Xr such that C, D, E and F satisfy axiom (A3). We leave the converse assertion to the reader as an exercise. ⨆⨅
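The closure construction of Proposition 6.18 can be carried out concretely on the smallest affine plane. A sketch (assuming coordinates over F2, which the book's synthetic treatment does not use): starting from the four-point affine plane, adding one ideal point per parallel class and one line at infinity yields a 7-point projective plane.

```python
from itertools import combinations

# The affine plane of order 2: points of (F_2)^2; over F_2 every 2-subset
# of points is an affine line.
points = [(x, y) for x in (0, 1) for y in (0, 1)]
lines = [frozenset(p) for p in combinations(points, 2)]

def direction(line):
    """Parallel class of an affine line over F_2: the difference of its points."""
    (x1, y1), (x2, y2) = sorted(line)
    return ((x1 - x2) % 2, (y1 - y2) % 2)

def projective_closure(points, lines):
    """Proposition 6.18: one ideal point per parallel class, each line is
    extended by its ideal point, and the line at infinity is added."""
    ideal = {direction(line) for line in lines}          # parallel classes
    X = set(points) | {("inf", d) for d in ideal}
    D = [line | {("inf", direction(line))} for line in lines]
    D.append(frozenset(("inf", d) for d in ideal))       # line at infinity
    return X, D

X, D = projective_closure(points, lines)

# The closure of the affine plane of order 2 is a projective plane of order 2:
assert len(X) == 7 and len(D) == 7
assert all(len(line) == 3 for line in D)
assert all(sum(1 for line in D if p in line and q in line) == 1
           for p, q in combinations(X, 2))
```

The final assertion checks axiom (P1) for the closure, confirming that it is (isomorphic to) Fano's plane.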

Finally, we observe that an isomorphism of projective planes f : (X, D) → (Y, E) induces an isomorphism of affine planes fr : (Xr, Dr) → (Yf(r), Ef(r)). Analogously, an isomorphism of affine planes f : (X, D) → (Y, E) extends to an isomorphism of projective planes f̃ : (X̃, D̃) → (Ỹ, Ẽ) defined by

f̃(P) = f(P), ∀ P ∈ X,   f̃(d̃) = (f(d))˜, ∀ d̃ ∈ d∞.

Remark 6.19 From Lemma 6.11 and Propositions 6.12 and 6.18 we deduce immediately:

1. Let d, d' ∈ D be two arbitrary lines in an affine plane. Then there exists a bijection f : d → d'.
2. Every line in an affine plane contains at least two points.


6.1.1 General Projective Spaces

We are going to extend Definitions 6.5 and 6.14.

Definition 6.20 An incidence structure (X, D) satisfying the following axioms:

(P3'') there exist three non-collinear points in X;
(P4) for every three non-collinear points A, B, C ∈ X there exists a linear subvariety Y of (X, D) containing A, B, C which is a projective plane;

is called a general projective space. In particular a projective plane according to Definition 6.5 is a general projective space.

If (X, D) is a projective space and A, B, C ∈ X are three non-collinear points, it follows that every projective plane Y containing A, B, C coincides by Lemma 6.10 (as a set) with the union of the lines AP for P ∈ BC. In particular, Y is uniquely determined by any three non-collinear points A, B, C ∈ Y; it is called the projective plane passing through A, B, C and will be denoted by ABC. Hence every projective space satisfies axiom (P3'), i.e. every line contains at least three distinct points.

On the other hand, axiom (P2) need not hold in a general projective space. We can assert that a projective space (X, D) satisfies axiom (P2) if and only if it is a projective plane. The implication ⇐ is obvious. Conversely, let A, B, C ∈ X be three non-collinear points and Y the plane passing through them. Let r = AB and s = AC, so that A = r ∩ s. Let D be any point of X \ (r ∪ s). If t is the line joining A and D and u = BC, by axiom (P2) we have t ∩ u = E, so that both A and E belong to Y. Then t ⊂ Y and D ∈ Y, so X = Y is a plane.

Thus a projective space (X, D) which is not a projective plane contains four points A, B, C, D such that A, B, C are not collinear and D ∉ ABC. Four points of a projective space (X, D) are called non-coplanar if they are not contained in a plane of X. In particular, if four points are non-coplanar then no three of them are collinear.

The concept of projective isomorphism between projective spaces is the same as for projective planes.

Definition 6.21 Let (X, D) and (X', D') be two projective spaces. A bijective map f : X → X' is called an isomorphism of projective spaces if for every A, B, C ∈ X,

A, B, C are collinear in X  ⇐⇒  f(A), f(B), f(C) are collinear in X'.

In other words, an isomorphism f : X → X' is characterized by the following property:

• for every subset Y ⊂ X, Y is a line of X (i.e. Y ∈ D) if and only if f(Y) is a line of X' (i.e. f(Y) ∈ D').

The reader can immediately show that a map f : X → X' is an isomorphism of projective spaces if and only if the following conditions are satisfied:

(i) for every three collinear points A, B, C ∈ X, the points f(A), f(B) and f(C) are collinear in X';


(ii) there exists a map g : X' → X such that g ∘ f = idX and f ∘ g = idX', and for every three collinear points A', B', C' ∈ X', the points g(A'), g(B') and g(C') are collinear in X.

Definition 6.22 Let (X, D) be a projective space. An isomorphism of projective spaces f : X → X is called an automorphism of (X, D) (or simply of X). The set of all automorphisms of X will be denoted by Aut(X). It is immediately seen that Aut(X) is a group (in general non-commutative) with respect to composition of automorphisms.

Projective Subspaces

Definition 6.23 Let (X, D) be a projective space. A subset Y ⊂ X is called a projective subspace, or simply a subspace of X, if Y is a linear subvariety of the incidence structure (X, D). In other words, Y is a subspace of X if for every two distinct points A, B ∈ Y, the line AB is contained in Y. We agree that the points and the lines of X, as well as the empty set ∅, are projective subspaces of X. If Y is the empty set, a point or a line, we define the dimension dim Y as follows:

dim Y = −1 if Y = ∅,   dim Y = 0 if Y is a point,   dim Y = 1 if Y is a line.

If Y is a subspace of X and DY := {d ∈ D | d ⊂ Y}, then (Y, DY) is a projective space according to Definition 7.1, provided that Y contains at least three non-collinear points. A hyperplane of X is a proper subspace Y ⊊ X such that, whenever Z is a subspace of X with Y ⊂ Z ⊂ X, either Z = Y or Z = X. If f : X → X' is an isomorphism of projective spaces and Y is a subspace of X, it is immediately seen that Y' = f(Y) is a subspace of X' and f|Y : Y → Y' is an isomorphism of projective spaces.

Lemma 6.24 Let {Yi}i∈I be an arbitrary family of projective subspaces of a projective space X. Then ⋂i∈I Yi is a projective subspace of X.

Proof It is an immediate consequence of the above definition. ⨆⨅

Thanks to Lemma 6.24 we can give the following:

Definition 6.25 Let M be a subset of a projective space X. The intersection of all subspaces of X containing M is a subspace of X, called the subspace generated by M; it will be denoted by 〈M〉. It is the smallest subspace containing M. Let Y1 and Y2 be two subspaces of X. We shall denote the subspace 〈Y1 ∪ Y2〉 by Y1 + Y2 and call it the sum of Y1 and Y2. If M = {A1, ..., Am}, where Ai ∈ X, i = 1, ..., m, then 〈M〉 will be denoted by A1 + · · · + Am. For instance, if A and B are two distinct points of X, the subspace A + B is the line AB. If X is a plane and d1, d2 are two distinct lines of X, then d1 + d2 = X.

The following theorem, which is a generalization of Lemma 6.10, gives an explicit description of the sum of two subspaces.


Theorem 6.26 Let Y1 and Y2 be two non-empty subspaces of a projective space X such that Y1 ∪ Y2 contains at least two distinct points. Then we have

Y1 + Y2 = ⋃ A1A2,   (6.1)

where the union is taken over all A1 ∈ Y1 and A2 ∈ Y2 with A1 ≠ A2.

Proof Let X' be the right-hand side of (6.1). Both inclusions X' ⊆ Y1 + Y2 and Y1 ∪ Y2 ⊆ X' are obvious, so to prove (6.1) we only have to show that X' is a subspace of X. Consider two arbitrary points P1, P2 ∈ X', P1 ≠ P2. There exist Ai, Bi ∈ Yi, i = 1, 2, such that A1 ≠ A2, B1 ≠ B2, P1 ∈ A1A2 and P2 ∈ B1B2. Two cases are possible:

(a) The points A1, A2, B1 and B2 are coplanar, i.e. the subspace A1 + A2 + B1 + B2 is contained in a plane Z. If A1, A2, B1 and B2 lie on the same line r, there is nothing to prove, since P1P2 coincides with r (which is contained in both subspaces Y1 and Y2). Otherwise we have the equality A1 + A2 + B1 + B2 = Z. By axiom (P2) (applied to Z) the lines A1A2 and B1B2 intersect in a point R ∈ Z. If P is an arbitrary point of P1P2, by axiom (P2) the line RP meets A1B1 and A2B2 in points C1 and C2 respectively (see Fig. 6.4). Since Yi, i = 1, 2, are subspaces, Ci ∈ Yi, i = 1, 2, and P ∈ C1C2. Therefore in case (a) we have checked that the line P1P2 lies in X'.

(b) The points A1, A2, B1 and B2 are not coplanar. As in the previous case, let P ∈ P1P2. By axiom (P4) we have the plane A1A2P. Since P1, P ∈ A1A2P, we get P1P = P1P2 ⊂ A1A2P, so that P2 ∈ A1A2P. By axiom (P2) (applied to A1A2P) there exists a point Q such that A1P ∩ A2P2 = Q (see Fig. 6.5). Furthermore, Q ∈ A2P2 ⊂ A2B1B2, so that, by axiom (P2) applied to the plane A2B1B2, there exists a point C2 such that B1Q ∩ A2B2 = C2. Finally, since Q ∈ B1C2 ⊂ A1B1C2 and P ∈ A1Q ⊂ A1B1C2, by axiom (P2) applied to the plane A1B1C2 there exists a point C1 such that C2P ∩ A1B1 = C1. Thus we have obtained points C1 ∈ A1B1 and C2 ∈ A2B2 (hence C1 ∈ Y1 and C2 ∈ Y2, because Y1 and Y2 are subspaces) such that P ∈ C1C2. This ends the proof that X' is a subspace of X. ⨆⨅

Fig. 6.4 Three coplanar lines


Fig. 6.5 Three non-coplanar lines

Remark 6.27 Let (X, D) be a projective space. If Y is a hyperplane of X and d a line, then d ∩ Y = d or d ∩ Y is exactly one point. Indeed, suppose d ∩ Y = ∅ and take a point P ∈ d. Consider the subspace P + Y. By Theorem 6.26 every point of P + Y lies on a line PA with A ∈ Y; if P + Y were all of X, any second point of d would lie on such a line PA, forcing d = PA to meet Y. Hence

Y ⊊ P + Y ⊊ X,

contradicting the assumption that Y is a hyperplane. Thus d meets Y, and if d ⊄ Y the intersection is a single point (two common points would give d ⊂ Y).

6.2 The Projective Space Associated with a K-Vector Space

Definition 6.28 Let V be a K-vector space of dimension n + 1 (with n ≥ 2). Consider the set X of all vector subspaces U of V with dimK(U) = 1. For every vector subspace W of dimension 2 let dW be the set of all subspaces of dimension 1 contained in W (i.e. dW := {〈v〉 | v ∈ W, v ≠ 0V}). We define the set of lines

D := {dW | W vector subspace of V, dimK W = 2}.

We call P(V) := (X, D) the projective space associated with the K-vector space V. The above definition is meaningful also if dim V = 2; in that case P(V) := (X, D) is called the projective line associated with the K-vector space V, since X = dV and D = {dV}. If V = K^(n+1), the projective space Pn(K) := P(K^(n+1)) is called the standard projective space of dimension n over K.

There is a canonical identification

(V \ {0V})/∼ → P(V),   v̄ ↦ 〈v〉,   (6.2)

where v ∼ w if and only if v = kw for some k ∈ K*. We will denote the equivalence class of (x0, ..., xn) ∈ K^(n+1) by [x0, ..., xn].

Theorem 6.29 The projective space P(V) = (X, D) associated with a K-vector space V with dim V ≥ 3 is a general projective space according to Definition 7.1.


Proof First we have to show that P(V) is an incidence structure. Let A, B ∈ X be two distinct points, i.e. A = 〈v〉 and B = 〈w〉 with v, w ∈ V \ {0V} linearly independent. Let W = 〈v, w〉; then dimK W = 2 and the line dW ∈ D contains A and B. Let W' be a vector subspace of dimension 2 such that A, B ∈ dW', i.e. v, w ∈ W'. Since dimK W' = 2 and v, w ∈ W' are linearly independent, we get W' = 〈v, w〉, so that W = W' and dW = dW'. Therefore axiom (P1) of Definition 6.1 is verified and P(V) is an incidence structure. ⨆⨅

We need the following result.

Lemma 6.30 Suppose dimK V = 3; then P(V) = (X, D) is a projective plane.

Proof Axiom (P2). Let dW and dW' be two distinct lines of P(V), i.e. W ≠ W'. Since dimK V = 3 and dimK W = dimK W' = 2, we have W + W' = V. By Grassmann's formula (Corollary 1.76) we get

dimK(W ∩ W') = dimK W + dimK W' − dimK(W + W') = 2 + 2 − 3 = 1,

so that W ∩ W' is a point of X common to dW and dW'.

Axiom (P3'). Let dW ∈ D be a line and {v1, v2} a K-basis of W. Then 〈v1〉, 〈v2〉 and 〈v1 + v2〉 are three distinct subspaces of W of dimension 1, so that they represent three distinct points of dW.

Axiom (P3''). Let {v1, v2, v3} be a basis of V. It is clear that the vector subspaces Wi := Kvi, i = 1, 2, 3, define three non-collinear points of X. ⨆⨅

By Lemma 6.30 we can suppose n ≥ 3, i.e. dimK V ≥ 4. The proof of axiom (P3'') is exactly the same as for Lemma 6.30. Axiom (P4). Let A, B, C be three non-collinear points of X, i.e. A = Kv1, B = Kv2 and C = Kv3 with v1, v2 and v3 linearly independent. Let W = 〈v1, v2, v3〉. Since dimK W = 3, by Lemma 6.30 P(W) (the set of all 1-dimensional vector subspaces of V contained in W) is a projective plane. Thus the projective plane P(W) is a linear subvariety of the incidence structure P(V) containing A, B and C. ⨆⨅

Theorem 6.31 Let V be a K-vector space of dimension ≥ 3. Consider the canonical map

π : V \ {0V} → P(V),   π(v) = Kv.

The correspondence

Y ↦ VY := π⁻¹(Y) ∪ {0V}   (6.3)


between the set of all non-empty subspaces of P(V) and the set of all non-null vector subspaces of V is bijective and preserves inclusions. The correspondence (6.3) can also be seen as

W ↦ P(W) ⊂ P(V),   W a vector subspace of V, W ≠ {0V}.   (6.4)

Thus if W is a vector subspace of V we can also identify P(W) with (W \ {0V})/∼.

Proof Let Y be a non-empty projective subspace of P(V). First we have to check that VY is a vector subspace of V. Let v1, v2 ∈ VY. If either v1 or v2 is 0V, then it is obvious that v1 + v2 ∈ VY. Suppose vi ≠ 0V, i = 1, 2. Then π(vi) = Kvi, i = 1, 2, determine two points P1 and P2 of Y. If P1 = P2, i.e. Kv1 = Kv2, then v1, v2 ∈ Kv1, hence v1 + v2 ∈ VY. Suppose now that Kv1 ≠ Kv2, i.e. P1 ≠ P2; then P1P2 ⊂ Y since Y is a subspace. According to Definition 6.28, P1P2 is the set of all 1-dimensional vector subspaces of Kv1 + Kv2, hence v1 + v2 ∈ VY. Let v ∈ VY and λ ∈ K. If λ = 0K, then λv = 0V ∈ VY. If λ ≠ 0K, since Kv = K(λv) we have λv ∈ VY.

The correspondence Y ↦ VY clearly preserves inclusions. Now we prove that it is bijective. Let W be a non-null vector subspace of V. If dimK W = 1, then W represents a uniquely determined point A ∈ P(V) such that VA = W. If dimK W = 2, then W corresponds to a uniquely determined line d of P(V) such that Vd = W. Hence suppose dimK W ≥ 3. The set YW of all 1-dimensional vector subspaces of V contained in W is a projective subspace of P(V) such that V(YW) = W. Hence the map W ↦ YW is the inverse of Y ↦ VY. ⨆⨅

Definition 6.32 Under the hypothesis and notation of Theorem 6.31, we define the dimension of a non-empty projective subspace Y of P(V) as the integer

dimK Y := dimK VY − 1,

where dimK VY is the dimension of VY as a vector subspace of V. In particular we have dim P(V) := dimK V − 1, and dimK Y = 0 if and only if Y is a point. Finally, we put

dimK Y = −1 if Y = ∅.

From now on we use the notation dim Y (instead of dimK Y) when the field K is fixed. If dim P(V) = n and Y is a projective subspace of P(V), then dim Y ≤ n. More generally, if Z is another projective subspace of P(V) such that Y ⊂ Z, then dim Y ≤ dim Z, and dim Y = dim Z if and only if Y = Z. If dim P(V) = n ≥ 2, a projective subspace Y of P(V) is called a projective line if dim Y = 1, a projective plane if dim Y = 2 and, when n ≥ 4, a projective hyperplane if dim Y = n − 1. The concept of dimension for projective subspaces of Definition 6.23 can be found in Chapter 8 of [1].


The analogue of Grassmann's formula (1.26) is given by the following:

Corollary 6.33 Let Y and Z be two projective subspaces of P(V). We have the identity

dim Y + dim Z = dim(Y ∩ Z) + dim(Y + Z),   (6.5)

called the projective Grassmann formula.

Proof We obtain (6.5) from Grassmann's formula of Corollary 1.76 and Definition 6.32, taking into account the relations

V(Y+Z) = VY + VZ,   V(Y∩Z) = VY ∩ VZ. ⨆⨅

There are some immediate consequences of the projective Grassmann formula.

Corollary 6.34 Let Y and Z be two projective subspaces of P(V), where dim P(V) = n.

(i) If Y is a line and Z is a hyperplane, then Y ∩ Z ≠ ∅. If Y ⊄ Z, then Y ∩ Z is a point.
(ii) If Y is a plane and Z is a hyperplane, then dim(Y ∩ Z) ≥ 1. If Y ⊄ Z, then Y ∩ Z is a line.
(iii) If Y and Z are distinct hyperplanes, then dim(Y ∩ Z) = n − 2.

Proof (i) It is obvious that

Y ⊂ Z  =⇒  Y ∩ Z = Y ≠ ∅,   Y ⊄ Z  =⇒  Z ⊊ Y + Z.

In the second case, since dim Z = n − 1 we have dim(Y + Z) > n − 1, which implies dim(Y + Z) = n (i.e. Y + Z = P(V)). By the projective Grassmann formula we get

dim(Y ∩ Z) = dim Y + dim Z − dim(Y + Z) = 1 + (n − 1) − n = 0,

so that Y ∩ Z is a point. We leave the remaining points (ii) and (iii) to the reader as an exercise. ⨆⨅

Definition 6.35 Let B0, B1, ..., Bm be m + 1 points (m ≥ 0) of the projective space P(V) of dimension n. Then we have

dim(B0 + B1 + · · · + Bm) ≤ m.   (6.6)

Indeed, if Bi corresponds to the vector subspace Kvi of dimension 1, i = 0, 1, ..., m, then

V(B0+B1+···+Bm) = Kv0 + Kv1 + · · · + Kvm,

so that dim(V(B0+B1+···+Bm)) ≤ m + 1 and (6.6) follows.
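Via Theorem 6.31 and Definition 6.32, the projective Grassmann formula (6.5) and the bound (6.6) reduce to rank computations on spanning vectors. A minimal sketch over Q (the two planes in P3 and their intersection are illustrative choices written down by hand, not computed from the text):

```python
from fractions import Fraction

def rank(rows):
    """Row rank by Gaussian elimination over the rationals."""
    m = [[Fraction(x) for x in r] for r in rows]
    r = 0
    for c in range(len(m[0]) if m else 0):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(r + 1, len(m)):
            if m[i][c] != 0:
                f = m[i][c] / m[r][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def proj_dim(spanning_vectors):
    """Projective dimension: dim_K V_Y - 1 (Definition 6.32)."""
    return rank(spanning_vectors) - 1

# Two planes (projective dimension 2) in P^3(Q), chosen so that they meet
# in the line corresponding to <e1, e2>:
Y = [(1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0)]
Z = [(0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)]
dim_sum = proj_dim(Y + Z)                               # Y + Z = P^3
dim_int = proj_dim([(0, 1, 0, 0), (0, 0, 1, 0)])        # the common line
assert proj_dim(Y) + proj_dim(Z) == dim_int + dim_sum   # 2 + 2 = 1 + 3
```

The same `proj_dim` also witnesses (6.6): any m + 1 coordinate vectors have rank at most m + 1.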


We say that the set {B0, B1, ..., Bm} is:

• projectively independent, or simply independent, if dim(B0 + B1 + · · · + Bm) = m;
• projectively dependent, or simply dependent, if dim(B0 + B1 + · · · + Bm) < m;
• a system of generators of a subspace Y of P(V) if Y = B0 + B1 + · · · + Bm; we also say that B0, B1, ..., Bm generate Y;
• a basis of P(V) if it is a system of generators of P(V) which is projectively independent.

If Bi = Kvi, i = 0, 1, ..., m, we easily get the following equivalences:

1. {B0, B1, ..., Bm} is a projectively independent subset of P(V) if and only if v0, v1, ..., vm are linearly independent vectors of V. In particular m + 1 ≤ n + 1.
2. {B0, B1, ..., Bm} is a system of generators of P(V) if and only if v0, v1, ..., vm generate the vector space V. In particular m + 1 ≥ n + 1.
3. {B0, B1, ..., Bm} is a basis of P(V) if and only if {v0, v1, ..., vm} is a basis of V. In particular m + 1 = n + 1.

Every subset of a projectively independent set is obviously projectively independent, and every (finite) set containing a system of generators of P(V) generates P(V). More generally, we shall say that r projective subspaces Y1, ..., Yr of P(V) are projectively independent if

dim(Y1 + · · · + Yr) = dim(Y1) + · · · + dim(Yr) + r − 1.
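Equivalence 1 above turns projective independence into a rank computation on homogeneous coordinate vectors. A sketch over Q (the sample points are illustrative):

```python
from fractions import Fraction

def rank(rows):
    """Row rank by Gaussian elimination over Q."""
    m = [[Fraction(x) for x in r] for r in rows]
    r = 0
    for c in range(len(m[0]) if m else 0):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(r + 1, len(m)):
            if m[i][c] != 0:
                f = m[i][c] / m[r][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def projectively_independent(pts):
    """{B_0,...,B_m} independent <=> the representing vectors are
    linearly independent <=> rank = m + 1 (equivalence 1)."""
    return rank(pts) == len(pts)

# Three points of P^2(Q) on the line x2 = 0 are dependent (collinear):
assert not projectively_independent([(1, 0, 0), (0, 1, 0), (1, 1, 0)])
# The three fundamental points are independent:
assert projectively_independent([(1, 0, 0), (0, 1, 0), (0, 0, 1)])
```

Replacing a point by any non-zero scalar multiple of its representative does not change the rank, so the test is well defined on projective points.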

Example 6.36 Two points B0, B1 ∈ P(V) are projectively independent if and only if B0 ≠ B1; three points B0, B1, B2 ∈ P(V) are projectively independent if and only if they are not collinear; four points B0, B1, B2, B3 ∈ P(V) are projectively independent if and only if they are not coplanar.

We say that m + 1 points B0, B1, ..., Bm are in general position if:

• m ≤ n and B0, B1, ..., Bm are projectively independent; or
• m > n and every subset of n + 1 points contained in {B0, ..., Bm} is projectively independent.

For instance, if n = 2, then m + 1 ≥ 3 distinct points B0, B1, ..., Bm ∈ P(V) are in general position if no three of them are collinear. Any ordered set {B0, ..., Bn, Bn+1} consisting of n + 2 points in general position is called a projective frame of P(V); the points B0, B1, ..., Bn are called the fundamental points of the frame, while Bn+1 is called the unit point of the frame. In Pn(K) the frame

{[1, 0, ..., 0], [0, 1, ..., 0], ..., [0, 0, ..., 1], [1, 1, ..., 1]}

is called the standard projective frame; a corresponding normalized basis is the canonical basis of K^(n+1).

Theorem 6.37 Let P(V) be a projective space of dimension n ≥ 2.

(i) Every projectively independent subset of P(V) is contained in a basis of P(V).
(ii) Every system of generators of P(V) contains a basis of P(V).


Proof It is immediate from the above definitions and Theorem 1.49 and Corollary 1.53. ⨆⨅

Corollary 6.38 Let P(V) be a projective space of dimension n and let B be a finite subset of P(V). The following assertions are equivalent:

(i) B is a maximal projectively independent subset (with respect to inclusion) of P(V).
(ii) B is a minimal system of generators of P(V).
(iii) B is a basis of P(V).

Proof It follows immediately from Theorem 6.37. ⨆⨅

Definition 6.39 Let F = {P0, P1, ..., Pn, Pn+1} be a projective frame of P(V). For any u ∈ V, u ≠ 0V, with Pn+1 = 〈u〉, there exists a unique basis Bu = {v0, ..., vn} of V such that

Pi = 〈vi〉, i = 0, ..., n,   u = v0 + · · · + vn.   (6.7)

Any basis Bu satisfying (6.7) is called a normalized basis of V. If P = 〈v〉 and v = x0v0 + · · · + xnvn (i.e. (x0, ..., xn) are the coordinates of v with respect to Bu), we say that (x0, ..., xn) are homogeneous coordinates of P with respect to the frame F. These coordinates are uniquely determined up to a non-zero scalar (indeed, for any k ∈ K* we have Bku = {kv0, ..., kvn}), hence by the homogeneous coordinates of P we mean the homogeneous (n + 1)-tuple [P]F = [x0, ..., xn] ∈ Pn(K). In particular, we have

[P0]F = [1, 0, ..., 0],  [P1]F = [0, 1, ..., 0],  ...,  [Pn]F = [0, 0, ..., 1],  [Pn+1]F = [1, 1, ..., 1].

Once a projective frame of P(V) has been chosen, we can simply write P = [x0, ..., xn]. Choosing a projective frame F is equivalent to choosing a projective isomorphism

P(V) → Pn(K),   P ↦ [P]F,

induced by the linear isomorphism which sends v ∈ V to its coordinates with respect to the basis B, where B is any normalized basis associated with F.

Definition 6.40 Let V and V' be two K-vector spaces with dimK(V), dimK(V') ≥ 2. A map

T : P(V) → P(V')

is called a linear projective transformation if there exists an injective linear map φ : V → V' such that

T(〈v〉) = 〈φ(v)〉,   ∀ v ∈ V \ {0V}.   (6.8)

Lemma 6.41 The linear map which induces a linear projective transformation is determined only up to multiplication by a non-zero scalar:

Tφ = Tψ  ⇐⇒  ∃ k ∈ K* such that φ = kψ.

Proof The implication ⇐ is immediate. Suppose Tφ = Tψ, i.e. φ(v) = kv ψ(v) with kv ∈ K* for every v ∈ V \ {0V}. Take a basis {ei, i = 0, ..., n} of V; then φ(ei) = k(ei) ψ(ei), i = 0, ..., n. Hence

φ(e0 + · · · + en) = φ(e0) + · · · + φ(en) = k(e0) ψ(e0) + · · · + k(en) ψ(en)

and

φ(e0 + · · · + en) = k(e0+···+en) ψ(e0 + · · · + en) = k(e0+···+en) ψ(e0) + · · · + k(e0+···+en) ψ(en).

By the linear independence of ψ(e0), ..., ψ(en) it follows that

k(e0) = · · · = k(en) = k(e0+···+en) =: k.

Therefore for every v = λ0 e0 + · · · + λn en ∈ V we have

φ(v) = λ0 φ(e0) + · · · + λn φ(en) = λ0 k ψ(e0) + · · · + λn k ψ(en) = kψ(v). ⨆⨅

We call any such linear map φ associated with T and write T = Tφ. If φ is an isomorphism of vector spaces, we say that Tφ is a linear projective isomorphism of projective spaces. Two projective spaces over a field K are said to be linearly isomorphic if there exists a linear projective isomorphism between them. Two projective spaces over a field K are linearly isomorphic if and only if they have the same dimension. A linear projective isomorphism T : P(V) → P(V) is also called a linear projectivity (or a linear projective automorphism) of P(V). We denote the set of all linear projective automorphisms of P(V) by PGLK(V); if V = K^(n+1)


we use the notation PGLn(K). Two subsets Y and Z of P(V) are said to be linearly equivalent if there exists a linear projectivity T of P(V) such that T(Y) = Z.

Proposition 6.42 Let T : P(V) → P(V') and T' : P(V') → P(V'') be two linear projective transformations. Then T' ∘ T : P(V) → P(V'') is a linear projective transformation. If T : P(V) → P(V') is a linear projective isomorphism, then T⁻¹ : P(V') → P(V) is a linear projective isomorphism.

Proof Let T = Tφ and T' = Tφ'; then we have T' ∘ T = T(φ'∘φ). Furthermore T⁻¹ = T(φ⁻¹). ⨆⨅

Corollary 6.43 The above-defined set PGLK(V) is a group with respect to composition of automorphisms, called the general linear projective group of P(V).

Proposition 6.44 Let T : P(V) → P(V') be a linear projective transformation. Then:

(i) T is an injective map.
(ii) If P1, P2 and P3 are collinear points of P(V), then T(P1), T(P2) and T(P3) are collinear points of P(V').
(iii) If T is an isomorphism according to Definition 6.40 and dim P(V) ≥ 2, then T is an isomorphism according to Definition 6.21.
(iv) If Y is a subspace of P(V), then T(Y) is a subspace of P(V') having the same dimension as Y, and T|Y : Y → T(Y) is a linear projective isomorphism.

Proof (i) Suppose T(P) = T(Q) with P = 〈v〉 and Q = 〈w〉. Then, if T = Tφ, we have 〈φ(v)〉 = 〈φ(w)〉, so that φ(v) = kφ(w) = φ(kw) for some k ∈ K*; since φ is injective, v = kw and P = Q.

(ii) If P1 = 〈v1〉, P2 = 〈v2〉 and P3 = 〈v3〉, then v1, v2, v3 are linearly dependent vectors of V. Let T = Tφ. Since φ is an injective linear map, φ(v1), φ(v2), φ(v3) are linearly dependent, i.e. T(P1), T(P2) and T(P3) are three collinear points of P(V').

(iii) We have only to show that if T(P1), T(P2) and T(P3) are collinear points of P(V'), then P1, P2 and P3 are collinear points of P(V). Since T⁻¹ is a linear projective isomorphism, by (ii) T⁻¹(T(P1)) = P1, T⁻¹(T(P2)) = P2 and T⁻¹(T(P3)) = P3 are collinear.

(iv) If Y = P(W) and T = Tφ, then T(Y) = P(φ(W)). ⨆⨅

Definition 6.45 Let V be a K-vector space of dimension n + 1 (n ≥ 2) and P(V) the associated projective space. By Proposition 6.42 the map T : GLK(V) → Aut(P(V)) defined by T(φ) = Tφ is a homomorphism of groups with ker(T) = {k·idV | k ∈ K*} (by Lemma 6.41). Since T(GLK(V)) = PGLK(V) by definition, T induces a group isomorphism GLK(V)/ker(T) ≅ PGLK(V) by Theorem 1.5.
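Concretely, ker(T) = {k·idV | k ∈ K*} in Definition 6.45 means that a matrix and any non-zero scalar multiple of it induce the same map on homogeneous coordinates. A sketch in P2(Q) (the matrix entries are arbitrary illustrative values):

```python
from fractions import Fraction

def normalize(p):
    """Canonical representative of [x0,...,xn]: divide by the first non-zero entry."""
    pivot = next(x for x in p if x != 0)
    return tuple(Fraction(x) / Fraction(pivot) for x in p)

def apply(A, p):
    """Action of T_A on homogeneous coordinates: [p] -> [A p]."""
    return normalize(tuple(sum(a * x for a, x in zip(row, p)) for row in A))

A = [[1, 2, 0],
     [0, 1, 3],
     [1, 0, 1]]                            # an invertible 3x3 matrix (det = 7)
kA = [[5 * a for a in row] for row in A]   # a non-zero scalar multiple

P = (1, 1, 1)
assert apply(A, P) == apply(kA, P)         # T_A = T_{5A}: the same projectivity
assert apply(A, P) == apply(A, tuple(3 * x for x in P))  # well defined on classes
```

The second assertion checks that the action does not depend on the chosen representative of the class [P], which is exactly why T descends to Pn(K).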


Example 6.46 If V = K^(n+1), identifying GLK(V) with GLn+1(K) (the invertible (n + 1) × (n + 1) matrices with entries in K), we associate to every matrix A = (aij), i, j = 0, ..., n, the linear projective isomorphism

TA : Pn(K) → Pn(K),   P ↦ AP,   ∀ P = (x0, ..., xn)^t,   (6.9)

i.e.

TA([x0, x1, ..., xn]) := [y0, y1, ..., yn],  with  yi = ai0 x0 + ai1 x1 + · · · + ain xn,  i = 0, 1, ..., n.

The relations yi = ai0 x0 + · · · + ain xn, i = 0, 1, ..., n, are also called the equations of TA. They can be written in matrix form ϱ y^t = A x^t, where x = (x0, ..., xn), y = (y0, ..., yn) and ϱ ∈ K*. In this case we denote the above group homomorphism T by Tn; so Tn : GLn+1(K) → PGLn(K). Thus we have the isomorphism GLn+1(K)/ker(Tn) ≅ PGLn(K).

The existence of a linear projective transformation T : P(V) → P(V') with prescribed values on a frame is guaranteed by the fundamental theorem of projective transformations:

Theorem 6.47 (Fundamental Theorem of Projective Transformations) Let P(V) and P(V') be projective spaces over the field K such that dim P(V) = n ≤ dim P(V'). Assume that {P0, ..., Pn, Pn+1} is a projective frame of P(V) and that {Q0, ..., Qn, Qn+1} is a projective frame of an n-dimensional subspace P(W) of P(V'). Then there exists a unique linear projective transformation T : P(V) → P(V') such that T(Pi) = Qi for each i = 0, 1, ..., n + 1. If P(W) = P(V'), then T is a linear projective isomorphism, and vice versa.

Proof Let Pi = 〈vi〉, i = 0, ..., n + 1, and Qi = 〈wi〉, i = 0, ..., n + 1, where {v0, ..., vn} is a basis of V and {w0, ..., wn} is a basis of W. By the conditions T(Pi) = Qi we have to define a linear map ϕ : V → V' such that ϕ(vi) = λi wi, with λi ≠ 0, i = 0, ..., n + 1. Dividing λ0, λ1, ..., λn, λn+1 by λn+1 (which does not change the projective transformation T we are looking for), we can suppose λn+1 = 1. By a suitable choice of the λi we can also get T(Pn+1) = Qn+1. Indeed, since {v0, ..., vn} is a basis of V and {w0, ..., wn} is a basis of W, we have

vn+1 = μ0 v0 + μ1 v1 + · · · + μn vn   and   wn+1 = μ'0 w0 + μ'1 w1 + · · · + μ'n wn,   (6.10)

with μi, μ'i ≠ 0 for each i = 0, 1, ..., n (since any n + 1 of the vectors {v0, ..., vn+1}, resp. of {w0, ..., wn+1}, are linearly independent). Then

ϕ(vn+1) = μ0 ϕ(v0) + · · · + μn ϕ(vn) = μ0 λ0 w0 + · · · + μn λn wn = wn+1   (6.11)

if μ'i = λi μi, i = 0, 1, ..., n. Hence we satisfy T(Pn+1) = Qn+1 by choosing λi = μ'i/μi, i = 0, 1, ..., n. ⨆⨅
if we have μ'i = λi μi , i = 0, 1, . . . , n. We satisfy T (Pn+1 ) = Qn+1 by choosing μ' ⨆ ⨅ λi = i , i = 0, 1, . . . , n. μi Remark 6.48 In general we have a strict inclusion PGLK (V ) ⊊ Aut(P(V )), i.e. not all projective automorphism of P(V ) (according to Definition 6.21) are linear projective automorphisms (see Chap. 8). Definition 6.49 Let S, T be projective subspaces of the projective space P(V ) of dimension n such that S ∩ T = ∅ and P(V ) = S + T. If S = P(W ) and T = P(U ), then V = W ⊕ U . The projection pU,W : V → U onto U along W (see Exercise 1.14) induces a map πS : P(V ) \ S → T,

.

πS (〈v〉) = 〈pU,W (v)〉,

∀ v ∈ V \ W,

(〈pU,W (v)〉 = 〈W ∪ {v}〉 ∩ U ) called the projection onto T centred at S. Let T1 , T2 be two projective subspaces of the projective space P(V ) of dimension n with dim(T1 ) = dim(T2 ) = m and let S be a subspace such that S ∩ T1 = S ∩ T2 = ∅,

.

dim(S) = n − m − 1.

Then the restriction to T1 of the projection onto T2 centred at S is a projective isomorphism f : T1 → T2 , called perspectivity centred at S. It is induced by the linear isomorphism p1 : U1 → U2 , where T2 = P(U2 ), S = P(W ), T1 = P(U1 ) and p1 is the restiction to U1 of pU2 ,W . The inverse isomorphism f −1 : T2 → T1 is a perspectivity centred at S and the restriction of f to T1 ∩ T2 is the identity map.
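In coordinates, the projection πS of Definition 6.49 simply drops the W-component of a representative vector. A sketch in P2(Q) for the simplest choice S = [0, 0, 1] and T = P(〈e0, e1〉), i.e. the line x2 = 0 (this particular centre and target are assumptions made for the example, not taken from the text):

```python
from fractions import Fraction

def normalize(p):
    piv = next(x for x in p if x != 0)
    return tuple(Fraction(x) / piv for x in p)

def project_from_point(p):
    """pi_S for S = [0,0,1] in P^2 with T the line x2 = 0: here V = <e0,e1> + <e2>
    and p_{U,W} forgets the e2-component of a representative."""
    x0, x1, x2 = p
    if x0 == 0 and x1 == 0:
        raise ValueError("the centre S has no image under pi_S")
    return normalize((x0, x1, 0))

# All points of the line <S, P> (other than S) project to the same point of T:
P = (2, 3, 5)
on_line_SP = [(2, 3, t) for t in range(-2, 3)]   # representatives 2*e0+3*e1+t*e2
assert all(project_from_point(q) == project_from_point(P) for q in on_line_SP)
# Points already on T are fixed (the restriction of pi_S to T is the identity):
assert project_from_point((4, 7, 0)) == normalize((4, 7, 0))
```

The first assertion is the geometric content of πS: the image of P is the point where the line joining S and P meets T.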

6.2.1 Projective Subspaces of the Standard Projective Space Definition 6.50 Let Y be a projective subspace of Pn (K) with dim Y = m. We n+1 of choose a base {v0 , . . . , vm } of the associated vector subspace VY  of K dimension m + 1. Let vj = (a0j , . . . , anj ), j = 0, . . . , m and aij i=0,1,...,n , j =0,1,...,m

a (n + 1) × (m + 1)-matrix of maximum rank m + 1. Consider the linear map φ : K m+1 → K n+1 which sends the canonical basis {e0 , . . . , en } of K m+1 (see

258

6 Projective Spaces

Example 1.62-(1)) to {v0 , . . . , vm }, i.e. φ(ei ) = vi , i = 0, . . . , m. Thus φ(W ) = VY , where W = 〈e0 , . . . , em 〉. Then we have ⎞ ⎛ m m m

.φ(t0 , t1 , . . . , tm ) = ⎝ a0j tj , a1j tj , . . . , anj tj ⎠ . j =0

j =0

j =0

The linear map φ induces a projective transformation f : Pm(K) → Pn(K) defined by

f([t0, t1, . . . , tm]) = [∑_{j=0}^m a0j tj, ∑_{j=0}^m a1j tj, . . . , ∑_{j=0}^m anj tj]

such that f(Pm(K)) = Y. The components of the analytic representation of f,

xi = ∑_{j=0}^m aij tj,  i = 0, 1, . . . , n,  ∀ [t0, t1, . . . , tm] ∈ Pm(K),   (6.12)

are called parametric equations of Y. As in the affine case, the parametric equations of a projective subspace Y are not uniquely determined; indeed they depend on the choice of a base for VY.

Example 6.51 Let A0 = [a00, a10, . . . , an0] and A1 = [a01, a11, . . . , an1] be two distinct points of Pn(K) and Y be the line A0A1. Then its parametric equations are given by

xi = ai0 t0 + ai1 t1,  i = 0, 1, . . . , n,  ∀ [t0, t1] ∈ P1(K).   (6.13)

Indeed, we can choose {(a00, a10, . . . , an0), (a01, a11, . . . , an1)} as a basis of VY.

Definition 6.52 Let Pi = [pi0, pi1, . . . , pin] ∈ Pn(K), i = 0, 1, . . . , m, be m + 1 ≥ 2 projectively independent points. Consider the subspace Y = P0 + P1 + · · · + Pm and the (m + 2) × (n + 1) matrix M whose rows are

(x0, x1, . . . , xn), (p00, p01, . . . , p0n), . . . , (pm0, pm1, . . . , pmn).

We have the following equivalences:

[x0, . . . , xn] ∈ Y  ⟺  (x0, . . . , xn) ∈ VY  ⟺  rank M = m + 1.
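The rank condition translates into vanishing minors, and from them one reads off Cartesian equations. Here is a small computation of mine (with assumed data, not from the book): the line of P³(Q) through A0 = [1 : 2 : 3 : 4] and A1 = [0 : 1 : 1 : 2], whose equations come from the two 3 × 3 bordered minors of M that contain the invertible 2 × 2 block in columns 0, 1.

```python
# Cartesian equations of the line Y = A0 + A1 in P^3 from the vanishing
# of the 3x3 bordered minors of M = (x; A0; A1).
def det3(m):
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

A0, A1 = (1, 2, 3, 4), (0, 1, 1, 2)   # columns 0, 1 give an invertible 2x2 block

def equations(x):
    # bordered minors on columns (0, 1, j) for j = 2, 3
    return [det3([[x[0], x[1], x[j]],
                  [A0[0], A0[1], A0[j]],
                  [A1[0], A1[1], A1[j]]]) for j in (2, 3)]

# both generating points, and every combination t0*A0 + t1*A1, satisfy them
for t0, t1 in [(1, 0), (0, 1), (1, 1), (2, -3)]:
    x = tuple(t0 * a + t1 * b for a, b in zip(A0, A1))
    assert equations(x) == [0, 0]
assert equations((1, 0, 0, 0)) != [0, 0]   # a point off the line
```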


By Proposition 1.25 (Kronecker's rule), rank M = m + 1 if and only if M contains an invertible (m + 1) × (m + 1) submatrix M' such that the determinant of every (m + 2) × (m + 2) submatrix of M containing M' vanishes; since there are n − m such submatrices, this provides a Cartesian representation (or implicit representation) of Y consisting of n − m homogeneous linear equations. If m = n − 1, the hyperplane Y = P0 + P1 + · · · + Pn−1 has the equation det N = 0, where N is the (n + 1) × (n + 1) matrix with rows (x0, . . . , xn), (p00, . . . , p0n), . . . , (p_{n−1,0}, . . . , p_{n−1,n}). Expanding along the first row, this reads

α0 x0 + · · · + αn xn = 0,

where αj equals (−1)^j times the determinant of the n × n matrix obtained from the rows (p00, . . . , p0n), . . . , (p_{n−1,0}, . . . , p_{n−1,n}) by deleting the j-th column, j = 0, . . . , n.

We also have the converse result.

Proposition 6.53 For every matrix (cij), i = 1, . . . , n − m, j = 0, 1, . . . , n, in Mn−m,n+1(K) of rank n − m, the system of homogeneous linear equations

∑_{j=0}^n cij xj = 0,  i = 1, . . . , n − m

represents a projective subspace of Pn(K) of dimension m.

Proof Use Corollary 1.65. ⨆ ⨅
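As an illustration (my own numerical sketch, not from the text), for n = 2 the coefficients αj of the line through two points of P²(K) are just the signed 2 × 2 minors, i.e. the cross product of the two coordinate vectors:

```python
# Line through P0 = [1:2:3] and P1 = [2:1:0] in P^2(Q):
# alpha_j = (-1)^j * (2x2 minor with the j-th column deleted),
# which is exactly the cross product of the coordinate vectors.
def line_through(p, q):
    return (p[1] * q[2] - p[2] * q[1],   # alpha_0
            p[2] * q[0] - p[0] * q[2],   # alpha_1
            p[0] * q[1] - p[1] * q[0])   # alpha_2

P0, P1 = (1, 2, 3), (2, 1, 0)
alpha = line_through(P0, P1)             # (-3, 6, -3), i.e. x0 - 2x1 + x2 = 0
for P in (P0, P1):
    assert sum(a * x for a, x in zip(alpha, P)) == 0
```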

Remark 6.54 It is obvious that Cartesian equations of a projective subspace are not uniquely determined. Indeed, different choices of projectively independent points (to generate a subspace) produce different homogeneous linear systems. However, if F = a0x0 + a1x1 + · · · + anxn and G = b0x0 + b1x1 + · · · + bnxn are two homogeneous polynomials of degree 1 in the indeterminates x0, x1, . . . , xn over a field K (n ≥ 2), then

{[x0, x1, . . . , xn] ∈ Pn(K) | ∑_{i=0}^n ai xi = 0} = {[y0, y1, . . . , yn] ∈ Pn(K) | ∑_{i=0}^n bi yi = 0}   (6.14)

if and only if there exists c ∈ K \ {0} such that F = cG (i.e. ai = cbi, i = 0, 1, . . . , n). Suppose that (6.14) is true and let 0 ≤ i < j ≤ n be two arbitrary indices such that ai ≠ 0 or aj ≠ 0. Then the point (0, . . . , 0, aj, 0, . . . , 0, −ai, 0, . . . , 0) (with aj at the place i and −ai at the place j) is a zero of F. Hence it is also a zero of G, i.e. bi aj − bj ai = 0. Since i and j are arbitrary, it follows that a0, a1, . . . , an and b0, b1, . . . , bn are proportional, i.e. ai = cbi, i = 0, 1, . . . , n, with c ≠ 0. ⨆ ⨅

Hence if Y is a hyperplane of Pn(K) defined by the equation c0x0 + c1x1 + · · · + cnxn = 0, any other equation of Y is of the form kc0x0 + kc1x1 + · · · + kcnxn = 0 for k ∈ K∗. Thus we are allowed to say that c0x0 + c1x1 + · · · + cnxn = 0 is the equation of Y. Therefore there is a bijective correspondence between the set of all hyperplanes of Pn(K) and hK[X0, . . . , Xn]1/∼ (see Example 1.33-(2)), where F ∼ G if and only if F = kG for some k ∈ K∗.
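The proportionality criterion of Remark 6.54 holds over any field with n ≥ 2; over a small finite field it can even be checked exhaustively. A brute-force sketch of mine over F3:

```python
from itertools import product

q = 3
# one representative per point of P^2(F_3): first nonzero coordinate = 1
points = [v for v in product(range(q), repeat=3)
          if any(v) and v[next(i for i, x in enumerate(v) if x)] == 1]
assert len(points) == 13

def zeros(f):
    # zero locus in P^2(F_3) of the degree-1 form f0*x0 + f1*x1 + f2*x2
    return frozenset(p for p in points
                     if sum(a * x for a, x in zip(f, p)) % q == 0)

# proportional forms define the same hyperplane: (2, 1, 0) = 2*(1, 2, 0) in F_3
assert zeros((1, 2, 0)) == zeros((2, 1, 0))
# non-proportional forms do not
assert zeros((1, 0, 0)) != zeros((0, 1, 0))
```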

6.3 Dual Projective Space and Projective Duality

Definition 6.55 Let V be a K-vector space of dimension n + 1 ≥ 3 and denote by V∗ its dual space. Since dimK(V) = dimK(V∗), V is isomorphic to V∗ but there is no canonical isomorphism V → V∗. However, if we fix a basis B = {v0, . . . , vn} of V and take the dual basis B∗ = {v0∗, . . . , vn∗} we have the isomorphism (depending only on the choice of the basis B of V) we might call "semi-canonical"

φB : V → V∗,  φB(∑_{i=0}^n ai vi) = ∑_{i=0}^n ai vi∗,  ai ∈ K, i = 0, . . . , n.   (6.15)

The projective space P(V∗) is called the dual projective space; if V = K^{n+1} we will denote P(V∗) by Pn(K)∗. The isomorphism φB induces an isomorphism of projective spaces TB : P(V) → P(V∗) according to Definition 6.40.

Notation Let V be a K-vector space of dimension n + 1 ≥ 3. We denote the set of m-dimensional projective subspaces of P(V) by Sm(P(V)), where 0 ≤ m ≤ n. For instance S0(P(V)) coincides with P(V) as a set; Sn−1(P(V)) is the set of hyperplanes of P(V), while Sn(P(V)) = {P(V)}. The set

S(P(V)) = ⋃_{m=0}^n Sm(P(V))

is clearly a lattice partially ordered by inclusion.

Theorem 6.56 Assume the above hypothesis and notation. There exists a bijective correspondence

α : P(V∗) → Sn−1(P(V))   (6.16)


such that A1, A2 and A3 are collinear points of P(V∗) if and only if dim(α(A1) ∩ α(A2) ∩ α(A3)) ≥ n − 2.

Proof By the projective Grassmann formula (Corollary 6.33), for three hyperplanes H1, H2 and H3 of P(V) we have

dim(H1 ∩ H2 ∩ H3) ≥ n − 3

(with equality if H1, H2 and H3 are pairwise distinct and H1 ∩ H2 ⊄ H3). Let f ∈ V∗ \ {0}; then Wf := Ker(f) is a vector subspace of V of dimension n. By Theorem 6.31 there exists a unique hyperplane Yf (= P(Wf)) of P(V) such that VYf = Wf. Note that if g = λ·f, with λ ∈ K \ {0}, then Ker(f) = Ker(g), i.e. Wf = Wg, or also Yf = Yg. Thus we obtain a map

α : P(V∗) → Sn−1(P(V)),  α(〈f〉) := Yf   (6.17)

(as usual, 〈f〉 is the subspace of V∗ generated by f). Conversely, let Y be a hyperplane of P(V) and VY be the n-dimensional vector subspace of V given by Theorem 6.31. The annihilator VY◦ of VY has dimension 1, so that there exists a linear functional fY ≠ 0V∗ such that VY◦ = 〈fY〉. Thus we are able to define the map

β : Sn−1(P(V)) → P(V∗),  β(Y) := 〈fY〉.

It follows immediately that α ∘ β = id and β ∘ α = id, so that α is bijective. We have to prove the last assertion. Suppose Ai = 〈fi〉, with fi ∈ V∗, fi ≠ 0V∗, i = 1, 2, 3. Then A1, A2 and A3 are collinear if and only if f1, f2 and f3 are linearly dependent. By Theorem 6.31 we have

dim(α(A1) ∩ α(A2) ∩ α(A3)) = dim(Yf1 ∩ Yf2 ∩ Yf3) = dim(P(ker f1) ∩ P(ker f2) ∩ P(ker f3)) = dim(P((ker f1) ∩ (ker f2) ∩ (ker f3))).

Therefore dim(α(A1) ∩ α(A2) ∩ α(A3)) ≥ n − 2 if and only if dim((ker f1) ∩ (ker f2) ∩ (ker f3)) ≥ n − 1. This last inequality holds if and only if f1, f2 and f3 are linearly dependent (we let the reader show this last equivalence). ⨆ ⨅

Remark 6.57 We have just seen (Theorem 6.56) that the dual projective space P(V∗) induces a structure of projective space on the set Sn−1(P(V)) of all hyperplanes of P(V). Thus the map α : P(V∗) → Sn−1(P(V)) is a canonical isomorphism of projective spaces, so that P(V∗) and Sn−1(P(V)) can be identified. The composition of maps αB = α ∘ TB gives the isomorphism of projective spaces

αB : P(V) → Sn−1(P(V)),  αB([a0, . . . , an]) = Y[a0,a1,...,an],   (6.18)


where Y[a0,a1,...,an] is the hyperplane of P(V) of equation a0x0 + a1x1 + · · · + anxn = 0 (here x0, . . . , xn are the projective coordinates on P(V) associated with the base B).

Definition 6.58 Let dim P(V) = n and Y be a subspace of P(V) of dimension m, i.e. Y = P(W), where dimK(W) = m + 1 (Theorem 6.31). The annihilator W◦ of W (Definition 1.106) has dimension n − m (Corollary 1.107), so that P(W◦) is a subspace of P(V∗) of dimension n − m − 1. Thus we have the natural map, called the duality correspondence,

δ : S(P(V)) → S(P(V∗)),  δ(Y) = P(W◦),   (6.19)

which reverses inclusions. From Corollary 1.107 we get immediately the following relations:

dim(δ(Y)) = n − m − 1,  δ(Y1 ∩ Y2) = δ(Y1) + δ(Y2),  δ(Y1 + Y2) = δ(Y1) ∩ δ(Y2).   (6.20)

In particular we have δ(∅) = P(V∗) and δ(P(V)) = ∅. When m = n − 1 we recover the bijective correspondence between hyperplanes of P(V) and points of P(V∗), and when m = 0 between points of P(V) and hyperplanes of P(V∗). We have a bijective correspondence between the set of hyperplanes containing a subspace Y of P(V) and the points of δ(Y).

The isomorphism TB⁻¹ : P(V∗) → P(V) induces a bijective map

(TB⁻¹)S : S(P(V∗)) → S(P(V)),  (TB⁻¹)S(Y) = TB⁻¹(Y),  ∀ Y ∈ S(P(V∗)),

so that we have the bijection

δB : S(P(V)) → S(P(V)),  δB = (TB⁻¹)S ∘ δ,   (6.21)

which transforms subspaces of dimension m of P(V) into subspaces of dual dimension n − m − 1 of P(V). More explicitly, if Y = P0 + · · · + Pm, where P0, . . . , Pm are m + 1 projectively independent points, and Pi = [pi0, . . . , pin], i = 0, . . . , m, with respect to the base B, then, by (6.20), we have

δB(Y) = δB(P0) ∩ · · · ∩ δB(Pm),

so that δB(Y) is described by the system of homogeneous linear equations

∑_{j=0}^n pij xj = 0,  i = 0, . . . , m.   (6.22)
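Both the system (6.22) and the collinearity criterion of Theorem 6.56 are rank conditions on coefficient matrices. A small sketch of mine (assumed data, exact arithmetic over Q): three planes of P³ with linearly dependent coefficient vectors meet in a line (projective dimension 1 = n − 2), while an independent triple meets in a single point.

```python
from fractions import Fraction

def rank(rows):
    # Gaussian elimination over the rationals
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for c in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                factor = m[i][c] / m[r][c]
                m[i] = [a - factor * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

# three planes of P^3: coefficient vectors f1, f2 and f3 = f1 + f2
f1, f2 = [1, 0, 2, 0], [0, 1, -1, 3]
f3 = [a + b for a, b in zip(f1, f2)]
# the common zero locus is P(ker), of projective dimension (4 - rank) - 1
assert rank([f1, f2, f3]) == 2            # dependent: intersection has dim 1
assert rank([f1, f2, [0, 0, 1, 0]]) == 3  # independent: intersection is a point
```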

The duality principle, which we are going to explain, was discovered by geometers of the École Polytechnique of Paris about two centuries ago. This principle, which does not hold in affine geometry, is very important in projective geometry, since one can immediately deduce new results from known ones.

Theorem 6.59 (Duality Principle) Let P(V) be a projective space of dimension n. Let P be a statement about subspaces of P(V), their intersections, joins, and dimensions. If P∗ is the dual statement obtained from P by replacing the words intersection, join, contained, containing, dimension by join, intersection, containing, contained, dual dimension respectively, then P holds if and only if P∗ holds.

Proof It follows immediately from the properties (6.20) of δ. ⨆ ⨅

Examples 6.60 Here are some assertions together with their dual assertions.

P : Every line of P(V) (with dim P(V) = 2) contains at least three distinct points.
P∗ : For every point P of P(V) there are at least three distinct lines passing through P.
P1 : The lines d and e of P(V) (with dim P(V) = 3) are concurrent at a point P.
P∗1 : The lines δ(d) and δ(e) are contained in the plane δ(P).
P2 : If the line d of P(V) (with dim P(V) = 3) does not pass through the point P, there exists a unique plane π containing d ∪ {P}.
P∗2 : If the line δ(d) is not contained in the plane δ(P), then δ(d) ∩ δ(P) = δ(π).

Let dim P(V) = n ≥ 2.

P3 : Two distinct points of P(V) generate a line.
P∗3 : Two distinct hyperplanes meet at a subspace of dimension n − 2.
P4 : n points of P(V) generate a subspace of dimension ≤ n − 1.
P∗4 : n hyperplanes meet at least at a point.
P5 : n projectively independent points generate a hyperplane.
P∗5 : n projectively independent hyperplanes (as points of the projective space Sn−1(P(V))) meet exactly at one point.

The duality principle allows us to apply the concepts of projective geometry of P(V) to the sets whose elements are subspaces (of fixed dimension) of P(V), which is very useful in projective geometry of higher dimensions. For instance, we shall see in Chap. 9 how the cross-ratio of 4 collinear points is immediately defined for 4 coplanar lines or for 4 hyperplanes passing through a subspace of codimension 2. The reader can find further developments in the beautiful book by Enriques [11].

Remark 6.61 The concepts of dual projective space and duality can be defined also for the general projective spaces of Definition 7.1. The reader can find a thorough exposition in [1] and in [11].

Remark 6.62 The canonical isomorphism αV : V → V∗∗ (see Definition 1.98) induces a canonical isomorphism of projective spaces

ᾱV : P(V) → P(V∗∗),  ᾱV(〈v〉) = 〈αV(v)〉,  ∀ v ∈ V,   (6.23)

where P(V∗∗) is the dual projective space of P(V∗).


Proposition 6.63 Let dim P(V) = n and Y be a subspace of P(V) of dimension m, 0 ≤ m ≤ n − 1. If the equations of Y with respect to a projective coordinate system B of P(V) are

Fi := ∑_{j=0}^n cij xj = 0,  i = 1, 2, . . . , n − m,

and Z is a hyperplane of P(V) containing Y, then there exists (λ1, . . . , λn−m) ∈ K^{n−m} \ {(0, . . . , 0)} such that Z has an equation of the form

λ1 F1 + · · · + λn−m Fn−m = 0.   (6.24)

Proof We have just seen that δ(Y) is a subspace of P(V∗) of dimension n − m − 1 and that {Z ∈ Sn−1(P(V)) | Y ⊂ Z} is in bijective correspondence with δ(Y). On the other hand, the map

f : Pn−m−1(K) → P(V∗),  f([λ1, . . . , λn−m]) = [λ1 F1 + · · · + λn−m Fn−m],

is a projective transformation whose image L := f(Pn−m−1(K)) is contained in δ(Y). Since dim L = dim(δ(Y)), we get L = δ(Y). ⨆ ⨅

Definition 6.64 Let Sr be an r-dimensional subspace of P(V), r ≤ n − 1, and let s be an integer such that r ≤ s ≤ n − 1. The set Фs(Sr) = {Ss : Ss ⊃ Sr} of all s-dimensional subspaces of P(V) containing Sr is called the star of s-dimensional subspaces with base locus Sr. The union Ф(Sr) = ⋃_{s=r+1}^{n−1} Фs(Sr) is the star with base locus Sr. The star Фn−1(Sr) of all hyperplanes passing through Sr (which coincides with δ(Sr)) is called a (projective) linear system with base locus Sr. If r = n − 2, Фn−1(Sr) is also called the pencil of hyperplanes passing through Sr.

Definition 6.65 Let V be a K-vector space of dimension n + 1 ≥ 3. The set G(r, V) of r-dimensional vector subspaces M of V is called a Grassmann variety. In particular we have G(1, V) = P(V) and the bijective canonical map

G(n, V) ←→ G(1, V∗) = P(V∗),  M ←→ M◦.   (6.25)

Let Sr = P(W) be a fixed subspace of P(V) of dimension 0 ≤ r < n − 1. Let s be an integer with r < s ≤ n − 1. If Ss ⊃ Sr, then Ss = P(U) with U ⊃ W (so that U◦ ⊂ W◦). Hence, by duality, Фs(Sr) can be seen as the set of subspaces δ(Ss) = P(U◦), δ(Ss) ⊂ δ(Sr) = P(W◦). Therefore there is a bijective correspondence

Фs(Sr) ←→ G(n − s, W◦)   (6.26)


between Фs(Sr) and the set of (n − s)-dimensional vector subspaces of V∗ contained in W◦ (which has dimension n − r). From (6.26) we get

Фn−1(Sr) ←→ G(1, W◦) ←→ P(W◦) ←→ Pn−r−1(K).   (6.27)

Now let K be an infinite field and V be a K-vector space of dimension n + 1. We say that P(V) has ∞^n points, since every point is uniquely determined by a homogeneous (n + 1)-tuple of parameters. In particular the projective line P1(K) has ∞^1 points, the projective plane P2(K) has ∞^2 points, and so on. We can also say that in P(V) there are ∞^{n−r−1} hyperplanes passing through a subspace Sr. Thus we shall call Фn−1(Sr) (and every set in bijective correspondence with Pn−r−1(K)) a form of the (n − r − 1)-th kind.

The following sets are forms of the first kind:

1. A projective line.
2. A pencil of hyperplanes passing through a subspace Sn−2 of P(V); in particular, a pencil of lines in P2(K) passing through a point and a pencil of planes in P3(K) containing a line.

Forms of the second kind are:

1. A projective plane.
2. A linear system of planes in P3(K) passing through a point.

Let 1 ≤ m < n be an integer and Sm−1 = P(W) a subspace of P(V) having dimension m − 1. From (6.26) we have a bijective correspondence

Фm(Sm−1) ←→ G(n − m, W◦),

where dim W◦ = n + 1 − m. Hence by (6.25) we get

G(n − m, W◦) = G((n + 1 − m) − 1, W◦) ←→ P((W◦)∗).

Therefore there are ∞^{n−m} subspaces Sm containing a fixed Sm−1 in P(V). In particular, a star Ф1(S0) of lines of P3(K) with base locus a point S0 is a form of the second kind.
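Over a finite field the correspondence (6.27) can be checked by brute force. The sketch below (my own, with assumed data) counts the planes of P³(F2) containing a fixed line: there should be as many as points of P^{n−r−1}(F2) = P¹(F2), namely three.

```python
from itertools import product

# planes of P^3(F_2): nonzero coefficient vectors (the only scalar is 1)
planes = [c for c in product((0, 1), repeat=4) if any(c)]
assert len(planes) == 15                       # (2^4 - 1)/(2 - 1)

# the line through A0 = [1:0:0:0] and A1 = [0:1:0:0] has exactly 3 points
A0, A1 = (1, 0, 0, 0), (0, 1, 0, 0)
line_pts = [tuple((t0 * a + t1 * b) % 2 for a, b in zip(A0, A1))
            for t0, t1 in [(1, 0), (0, 1), (1, 1)]]

def contains(c, pts):
    return all(sum(ci * xi for ci, xi in zip(c, p)) % 2 == 0 for p in pts)

# pencil Ф_{n-1}(S_r) with n = 3, r = 1: expect #P^1(F_2) = 3 planes
assert sum(contains(c, line_pts) for c in planes) == 3
```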

6.4 Exercises

Exercise 6.1 Show that a projective plane X contains exactly m² − m + 1 points if there exists in X a line consisting of m points. Furthermore, prove that exactly m distinct lines pass through any point P ∈ X.

Exercise 6.2 Under the assumptions of Exercise 6.1 calculate the number of elements of the following set: {(A, B, C) ∈ X × X × X | A, B, C non-collinear}.


Exercise 6.3 Let K be a finite field with p^k elements, where p is a prime number and k ≥ 1.

(a) Prove that Pn(K) (with n ≥ 2) consists of (p^{k(n+1)} − 1)/(p^k − 1) points.
(b) Calculate the number of the lines and the planes of P3(K).

Exercise 6.4 Show that P2(Z2) is isomorphic to Fano's plane.

Exercise 6.5 Prove that P2(R) is isomorphic to the projective plane of Example 6.8.

Exercise 6.6 Let A and B be two points of a projective space P(V) of dimension ≥ 2. Prove that there exists a hyperplane of P(V) passing through A not containing B.

Exercise 6.7 Let {E0, E1, . . . , En} be a basis of a projective space P(V) of dimension n ≥ 2. For every i = 0, 1, . . . , n denote by Xi the hyperplane E0 + · · · + Êi + · · · + En (where Êi means that Ei is omitted). Prove that ⋂_{i=0}^n Xi = ∅. (Hint: induction on n.)

Exercise 6.8 Let P(V) be a projective space of dimension n ≥ 2 and let Y0, Y1, . . . , Yn be n + 1 hyperplanes such that Y0 ∩ Y1 ∩ · · · ∩ Yn = ∅. Show that P(V) ≠ Y0 ∪ Y1 ∪ · · · ∪ Yn. (Hint: induction on n.)

Exercise 6.9 Let P, Q be two distinct points of a projective space P(V) of dimension ≥ 2. Prove that there exists a hyperplane Z of P(V) such that P, Q ∉ Z.

Exercise 6.10 Let Z be a subspace of P(V) such that dim Z ≤ dim P(V) − 2, and P, Q ∈ P(V) \ Z. Prove that there exists a hyperplane Y of P(V) containing Z and such that P, Q ∉ Y.

Exercise 6.11 Let P(V) be a projective space of dimension n and Y be a subspace of dimension k ≥ 0.

(i) Show that there exists a subspace Z of P(V) with dim Z = n − k − 1 such that Y ∩ Z = ∅.
(ii) Prove that for every i, 0 ≤ i ≤ k, and for every subspace Z of dimension n − k + i one has dim(Y ∩ Z) ≥ i. Furthermore, there exists a subspace Z of dimension n − k + i such that dim(Y ∩ Z) = i.

In both cases (i) and (ii) we have Y + Z = P(V).

Exercise 6.12 Let Y be a subspace of P(V) of dimension dim(P(V)) − 2. Prove that there are at least three distinct hyperplanes of P(V) containing Y.

Exercise 6.13 Let d and d' be two arbitrary lines of P(V) (dim(P(V)) ≥ 2). Prove that there exists a projective isomorphism f : P(V) → P(V) such that f(d) = d'.
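The count in Exercise 6.3-(a) is easy to sanity-check by enumerating normalized representatives (first nonzero coordinate equal to 1); this quick sketch of mine does it for a few small cases (it verifies the formula, not the proof asked for):

```python
from itertools import product

def count_points(q, n):
    # points of P^n(F_q), one canonical representative per point:
    # first nonzero coordinate normalized to 1
    reps = [v for v in product(range(q), repeat=n + 1)
            if any(v) and v[next(i for i, x in enumerate(v) if x)] == 1]
    return len(reps)

# formula (q^{n+1} - 1)/(q - 1) with q = p^k
assert count_points(2, 2) == (2**3 - 1) // (2 - 1)   # 7: the Fano plane
assert count_points(3, 2) == (3**3 - 1) // (3 - 1)   # 13
assert count_points(2, 3) == (2**4 - 1) // (2 - 1)   # 15
```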


Exercise 6.14 Let P(V) be a projective space of dimension n ≥ 2 containing a line d consisting exactly of m points (m ≥ 3). Prove that

#(P(V)) = ((m − 1)^{n+1} − 1)/(m − 2).

(Hint: Theorem 6.26.)

Exercise 6.15 Let P(V) be a projective space of dimension n ≥ 2. Suppose that there exists a line d in P(V) consisting of an infinite number of points. Prove that P(V) is not a finite union of hyperplanes.

Exercise 6.16 Let dim(P(V)) = 3. Let d and d' be two skew lines of P(V) and A ∈ P(V) \ (d ∪ d'). Show that there exists a unique line e through A which meets both d and d' (see Exercise 3.24). Give the dual statement of the above assertion.

Exercise 6.17 Let d1, d2 and d3 be three lines in a projective space P(V) of dimension 4, no two of them being coplanar. Prove that there exists a line of P(V) which meets d1, d2 and d3. Is such a line unique?

Exercise 6.18 Prove that in a projective space P(V) of dimension 5 there exist three lines d1, d2 and d3 such that no other line of P(V) meets d1, d2 and d3.

Exercise 6.19 Give the dual statements of the following assertions:

(i) Two arbitrary planes of P(V) with dim(P(V)) = 4 have a non-empty intersection.
(ii) Two distinct planes of P(V) with dim(P(V)) = 4 have a line in common.

Exercise 6.20 Let K be an infinite field and P(V) be a projective space of dimension n ≥ 2. Prove that for every finite subset A of P(V) there exists a hyperplane H such that H ∩ A = ∅. What is the dual statement?

Exercise 6.21 Let r and s be two lines of P2(K). Prove the following assertions:

(i) If r ≠ s and P = r ∩ s, then a projectivity φ : r → s is a perspectivity if and only if φ(P) = P.
(ii) Every projectivity φ : r → s is a composition of at most two perspectivities if r ≠ s and of at most three perspectivities if r = s.

Chapter 7

Desargues’ Axiom

Desargues’ axiom (and Pappus’ axiom) are extremely important configuration properties as will come clear in the next chapter (Remark 8.15). This chapter is devoted to Desargues’ axiom, while Pappus’ axiom will be treated in Chap. 8. Let .(X, D) be a projective space according to Definition 7.1. Desargues’ axiom asserts: (P5 ) For every three lines .d1 , .d2 , .d3 of .(X, D) concurrent at a point O and for every pair of distinct points .Ai , Bi ∈ di \ {O}, .i = 1, 2, 3, the points .D12 := A1 A2 ∩ B1 B2 , .D13 := A1 A3 ∩ B1 B3 , .D23 := A2 A3 ∩ B2 B3 are collinear (see Fig. 7.1).

.

In other words, if the lines joining corresponding vertices of two triangles are concurrent (in Fig. 7.1 .ΔA1 A2 A3 and .ΔB1 B2 B3 , then the intersections of corresponding pairs of sides are collinear. Theorem 7.1 (Desargues) The projective space .X = P(V ) satisfies Desargues’ axiom .(P5 ). Proof Case 1: Suppose that .d1 , .d2 and .d3 are not coplanar, hence .dim(d1 + d2 + d3 ) = 3. We can observe that .D12 ∈ A1 A2 A3 ∩ B1 B2 B3 . Analogously, .D13 , D23 ∈ A1 A2 A3 ∩B1 B2 B3 . Since .Ai and .Bi are distinct for .i = 1, 2, 3, and the subspace .A1 A2 A3 + B1 B2 B3 = d1 + d2 + d3 has dimension 3, by Grassmann’s formula (Corollary 6.33) the intersection between the planes .A1 A2 A3 and .B1 B2 B3 is a line d which passes through .D12 , D13 and .D23 so our assertion is proved. Case 2: Now suppose that .d1 , d2 and .d3 are coplanar, contained in the plane .P := d1 + d2 + d3 . If .A1 , A2 , A3 (or .B1 , B2 , B3 ) are collinear, then .D12 = D13 so that .D12 , D13 and .D23 are obviously collinear. Therefore we can suppose that ' .A1 , .A2 , .A3 and .B1 , .B2 , .B3 are not collinear. Since .n ≥ 3, there exists a line .d in ' '' .P(V ) passing through O and not lying on the plane .P. Take two points .O /= O ' '' ' distinct from O (i.e. .O , O ∈ / P) on .d (every line contains at least three distinct © The Author(s), under exclusive license to Springer Nature Switzerland AG 2024 L. B˘adescu, E. Carletti, Lectures on Geometry, La Matematica per il 3+2 158, https://doi.org/10.1007/978-3-031-51414-2_7

269


Fig. 7.1 Desargues’ axiom

points). Put Ci := O'Ai ∩ O''Bi ∈ di + d', i = 1, 2, 3. Since A1, A2, A3 are not collinear, the lines O'A1, O'A2, O'A3 are not coplanar, so that we are under Case 1 and we can deduce that P12 := A1A2 ∩ C1C2, P13 := A1A3 ∩ C1C3 and P23 := A2A3 ∩ C2C3 are collinear. Analogously, since the lines O''B1, O''B2 and O''B3 are not coplanar, by Case 1, Q12 := B1B2 ∩ C1C2, Q13 := B1B3 ∩ C1C3 and Q23 := B2B3 ∩ C2C3 are collinear points. On the other hand, the line CiCj does not lie on P, so that the intersection (in the subspace d' + P of dimension 3) CiCj ∩ P (i < j) is a point. Hence we have Pij = CiCj ∩ P = Qij (i < j); thus we deduce that Pij = Qij = Dij for every i < j, so that D12, D13 and D23 are collinear. ⨆ ⨅

The converse of Desargues' axiom is the following statement:

(P5') Under the notation of Fig. 7.1, given three lines d1, d2, d3 and three pairs of distinct points Ai ≠ Bi on every line di, i = 1, 2, 3, such that the intersections D12, D13 and D23 (Dij = AiAj ∩ BiBj) are collinear, the lines d1, d2 and d3 are concurrent.

The dual statement of Desargues' axiom (P5) in a projective plane P(V) (according to Theorem 6.59) coincides with the converse of Desargues' axiom (P5') (Exercise 7.1). Thus from Theorem 7.1 and Theorem 6.59 we get:

Corollary 7.2 A projective plane P(V) satisfies the converse of Desargues' axiom (P5').

A Projective Plane which Does Not Satisfy Desargues' Axiom

Definition 7.3 A configuration is a pair of sets (X, D) where the elements of X are called points and D is a collection of subsets of X, called lines of X, which satisfies the following axiom:

(C1) Two distinct points of X lie on at most one line of X.

7 Desargues’ Axiom

271

It follows that two distinct lines have at most one point in common. If Y ⊂ X, E is a collection of subsets of Y and E ⊂ D, then (Y, E) is a configuration, called a subconfiguration of (X, D).

Examples 7.4

(i) Every non-empty set X with D = ∅ (i.e. there is no line) is a configuration. If X has exactly four distinct points A, B, C, D, then no three of them are collinear (since D = ∅).
(ii) Every incidence structure is a configuration.
(iii) The set of points and lines which occurs in Fig. 7.1,

X = {O, A1, A2, A3, B1, B2, B3, D12, D13, D23},
D = {{O, A1, B1}, {O, A2, B2}, {O, A3, B3}, {A1, A2, D12}, {B1, B2, D12}, {A1, A3, D13}, {B1, B3, D13}, {A2, A3, D23}, {B2, B3, D23}, {D12, D13, D23}},

is a configuration which has ten points and ten lines; every line consists of exactly three distinct points, while through every point there pass exactly three distinct lines. This configuration is called Desargues' configuration.

Definition 7.5 (The Free Projective Plane Generated by a Configuration) Let (X0, D0) be a configuration. We will construct a new configuration (X1, D1) such that X1 := X0, and D1 consists of all lines of D0 and of all subsets of X1 = X0 of the form {A, B}, with A, B ∈ X0 not on a line of D0. We emphasize that the new lines (i.e. those belonging to D1 \ D0) contain exactly two distinct points. Therefore (X1, D1) satisfies the following property:

(a) Every two distinct points of X1 lie on one and only one line of D1 (i.e. (X1, D1) is an incidence structure).

Construct (X2, D2) from (X1, D1) as follows. The points of X2 are the points of X1, plus, for each pair of lines d and d' of D1 which do not meet, a new point denoted by d · d'. If e, e' are two lines which do not meet, then d · d' = e · e' if and only if {d, d'} = {e, e'} by definition. Each line d̃ of D2 is a line d ∈ D1 extended with all possible points of the form d · d', with d' ∈ D1 and d ∩ d' = ∅. In particular, the map f : D1 → D2 defined by f(d) = d̃ is bijective. By construction, through a point of X2 of the form d · d' (hence not belonging to X1) there pass only two lines: d̃ and d̃'. Therefore (X2, D2) satisfies the following property:

(b) Every pair of distinct lines of D2 meets in one (and only one) point.

We have to observe that (X2, D2) no longer has the property (a); indeed a pair of points d · d' and e · e', with d · d' ≠ e · e', does not lie on any line of D2. Suppose by induction that the configuration (Xn, Dn) has been constructed. We have to consider two cases:


(i) If n is even, then Xn+1 := Xn. A line of Dn+1 is a line of Dn or a subset of the form {A, B}, where A and B are distinct points of Xn not joined by any line of Dn. As said above, the new lines (i.e. those belonging to Dn+1 \ Dn) contain exactly two distinct points. The configuration (Xn+1, Dn+1) has the property (a).

(ii) If n is odd, Xn+1 contains Xn and the new points d · d', where d, d' are two lines of Dn which do not meet. The lines of Dn+1 are the lines of Dn extended as described above for n = 1. Through a point of Xn+1 of the form d · d' (hence not belonging to Xn) there pass only two lines: d̃ and d̃'. Thus the configuration (Xn+1, Dn+1) satisfies the property (b) (but no longer (a)).

Now we define

X := ⋃_{n=0}^∞ Xn,  D = {L ⊂ X | L ∩ Xn ∈ Dn for all large enough n}.

In other words, a subset L of X is a line of (X, D) if and only if there exists a natural number nL (which depends on L) such that L ∩ Xn is a line of Dn for every n ≥ nL. The level of P ∈ X is the smallest integer n ≥ 0 such that P ∈ Xn. Analogously, the level of a line d ∈ D is the smallest integer n ≥ 0 such that d ∩ Xn is a line of (Xn, Dn). From the above construction we see that three points A, B, C ∈ Xn are collinear in (Xn, Dn) if and only if they are collinear in (Xn+1, Dn+1), for every n ≥ 0. Therefore if A, B and C are collinear in some (Xn, Dn) they are collinear in (X, D); conversely, if they are collinear in (X, D), then they are collinear in (Xn, Dn) for every sufficiently large n.

Proposition 7.6 Under the above notation, suppose that X0 contains at least four points no three of which lie on a line in (X0, D0); then (X, D) is a projective plane.

Proof Since (Xn, Dn) satisfies the condition (a) for every odd integer n and the condition (b) for every even integer n ≥ 2, (X, D) has both the properties (a) and (b). In other words, (X, D) satisfies axioms (P1) and (P2) of the projective plane (Definitions 6.1 and 6.5). Moreover, if A, B, C, D ∈ X0 are four points no three of which lie on a line in (X0, D0), then the same property holds for A, B, C, D in (X, D). Thus (X, D) satisfies axiom (P3). ⨆ ⨅

Definition 7.7 The pair (X, D) constructed above is called the free projective plane generated by the configuration (X0, D0).

Definition 7.8 A configuration (X, D) is called confined if each point lies on at least three lines, and each line contains at least three points. Desargues' configuration of Example 7.4-(iii) is confined. A projective plane is confined by Propositions 6.12 and 6.13.

Proposition 7.9 Any finite, confined subconfiguration of the free projective plane (X, D) generated by (X0, D0) is contained in (X0, D0).

7 Desargues’ Axiom

273

Proof Let (Y, D') be a finite, confined subconfiguration of (X, D). Let n be the maximum of all levels of the points and the lines of (Y, D'), which is finite. We have to prove that n = 0. By reductio ad absurdum suppose that n ≥ 1.

Case 1. If the maximum level n is reached by a line d ∈ D', then it follows that d ∩ Xn ∈ Dn and d ∩ Xn−1 ∉ Dn−1. Hence d ∩ Xn is a line of Xn having the form {A, B}, where A and B are two distinct points of Xn−1 which do not lie on a line of Dn−1. Since the level of each point P ∈ Y is ≤ n, we have P ∈ Xn, so that D' ⊂ Dn and (Y, D') is a subconfiguration of (Xn, Dn) such that the line d consists only of the points A and B, contradicting the hypothesis that (Y, D') is confined.

Case 2. Suppose now that n is the level of a point P of Y, i.e. Y ⊂ Xn. We can also suppose that the level of each line of D' is ≤ n − 1 (otherwise we are in Case 1). Then the point P ∈ Xn \ Xn−1, so that it is of the form d · d', where {d, d'} is a uniquely determined pair of lines of Dn−1 such that d ∩ d' = ∅. It follows that through the point P ∈ Y there pass exactly two distinct lines of Dn (i.e. d̃ and d̃'), hence at most two distinct lines of D', contradicting the hypothesis that (Y, D') is confined. ⨆ ⨅

Corollary 7.10 The free projective plane (X, D) generated by a configuration (X0, D0), where X0 is a set consisting of four distinct points and D0 = ∅ (see Example 7.4-(i)), is infinite. In particular, every line of D is an infinite set.

Proof If X is a finite set, then (X, D) is a finite, confined subconfiguration of itself; hence, by Proposition 7.9, (X, D) is contained in (X0, D0), which is absurd. ⨆ ⨅

Example 7.11 (A Non-Desarguesian Projective Plane) Let (X0, D0) be the configuration with X0 = {O, A1, A2, A3} and D0 = ∅ (see Example 7.4-(i)). We claim that the free projective plane (X, D) generated by (X0, D0) does not satisfy Desargues' axiom (P5). Since the line OAi (∈ D) is infinite (i = 1, 2, 3) we can find on OAi a point Bi (i = 1, 2, 3) such that B1, B2 and B3 are not collinear. Since D0 = ∅, the points A1, A2 and A3 are not collinear in (X, D). Suppose that (X, D) satisfies Desargues' axiom. Then D12 := A1A2 ∩ B1B2, D13 := A1A3 ∩ B1B3 and D23 := A2A3 ∩ B2B3 are collinear. The pair (Y, E), where Y = {O, A1, A2, A3, B1, B2, B3, D12, D13, D23} and

E = {{O, A1, B1}, {O, A2, B2}, {O, A3, B3}, {A1, A2, D12}, {B1, B2, D12}, {A1, A3, D13}, {B1, B3, D13}, {A2, A3, D23}, {B2, B3, D23}, {D12, D13, D23}},

is a finite, confined subconfiguration of (X, D) (it is Desargues' configuration of Example 7.4-(iii)). Then by Proposition 7.9 we get Y ⊆ X0, which is impossible because Y consists of ten points while X0 has only four points. Therefore the


projective plane (X, D) is non-Desarguesian; hence (X, D) is not isomorphic to any projective plane associated with a K-vector space.

Remark 7.12 The non-Desarguesian projective plane of Example 7.11 is infinite. The reader can find examples of finite non-Desarguesian projective planes in [16], Exercises 26, 27, 28 and 29.
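The counting behind Example 7.11 rests on the level-by-level construction of the free plane. The first two levels over four points can be tallied with a short script (a sketch with illustrative names; the encoding of lines as point-pairs is ours, not the book's):

```python
from itertools import combinations

# Level 0: four points, no lines (Example 7.4-i)).
points = {"A1", "A2", "A3", "A4"}

# Level 1: one new line through every pair of points not yet joined by a line.
lines = [frozenset(p) for p in combinations(sorted(points), 2)]

# Level 2: one new intersection point for every pair of lines that do not
# already meet in a point of the configuration (here: disjoint point-pairs).
new_points = [pair for pair in combinations(lines, 2) if not (pair[0] & pair[1])]

print(len(lines), len(new_points))  # 6 lines, 3 new "diagonal" points
```

Iterating this process forever is what produces the infinite plane of Corollary 7.10.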

7.1 Exercises

Exercise 7.1 Prove that the dual assertion of Desargues' axiom in a projective plane P(V) coincides with the converse of Desargues' axiom.

Exercise 7.2 (The Moulton Plane) Let X = R². The set of lines D is as follows:

D = {x = a | a ∈ R} ∪ {y = mx + n | m ≥ 0, n ∈ R} ∪ {𝓁m,n | m ≤ 0, n ∈ R},

where 𝓁m,n is the "bent" line

y = mx + n if x ≤ 0,  y = 2mx + n if x ≥ 0.

Prove that (X, D) is a projective plane which does not satisfy Desargues' axiom.
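The affine part of the Moulton plane is easy to experiment with numerically. The sketch below (function name ours) evaluates the bent line 𝓁m,n and shows that three points on the Euclidean line y = −x are no longer collinear in the Moulton plane:

```python
def moulton_y(m, n, x):
    """y-coordinate of the Moulton line l_{m,n} at abscissa x (for m <= 0):
    slope m on the left half-plane, slope 2m on the right."""
    return m * x + n if x <= 0 else 2 * m * x + n

# (-1, 1) and (0, 0) lie on l_{-1,0}; the Euclidean line through them is
# y = -x, which also contains (1, -1) -- but the Moulton line does not:
print(moulton_y(-1, 0, -1))  # 1
print(moulton_y(-1, 0, 1))   # -2, not -1
```

Failures of Euclidean collinearity like this one are the source of the Desargues counterexamples in this plane.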

Chapter 8

General Linear Projective Automorphisms

8.1 Projective Homotheties and Translations

Let (X, D) be a general projective space and Aut(X) be the group of all its projective automorphisms.

Definition 8.1 Let σ ∈ Aut(X). A point A ∈ X is called a fixed point of σ if σ(A) = A. A subspace Y of X is called a fixed subspace of σ if σ(Y) = Y. If σ(P) = P for all P ∈ Y we say that Y is a subspace of fixed points of σ. A point O ∈ X is called a centre of σ if every line through O is a fixed line of σ. Since there are two distinct lines d and d' passing through O, we have O = d ∩ d', so that σ(O) = σ(d ∩ d') = σ(d) ∩ σ(d') = d ∩ d' = O. Hence a centre of an automorphism σ is a fixed point of σ. A hyperplane Y of X is called an axis of σ if Y consists of fixed points of σ.

Proposition 8.2 Let σ ∈ Aut(X). If σ has two distinct centres (or two distinct axes), then σ = idX, the identity map of X.

Proof Suppose that σ has two centres O, O' ∈ X, with O ≠ O'. Since O and O' are fixed points of σ, we have to prove that σ(A) = A for each A ∈ X \ {O, O'}. Let A ∉ OO'; then A = AO ∩ AO', so that σ(A) = σ(AO ∩ AO') = σ(AO) ∩ σ(AO') = AO ∩ AO' = A. Let B ∈ OO' \ {O, O'}. The line AB contains a third point C (which does not belong to OO', so that σ(C) = C). Hence B = AC ∩ OO', which implies σ(B) = σ(AC ∩ OO') = σ(AC) ∩ σ(OO') = AC ∩ OO' = B. Therefore σ = idX.
Let Y and Z be two distinct axes of σ. Since Y ∪ Z consists of fixed points of σ we only have to examine the points A ∈ X \ (Y ∪ Z). Let B and C be two distinct points of the hyperplane Y. By Remark 6.27, BA ∩ Z and CA ∩ Z are two points B' and C'. Then A = BB' ∩ CC', so that σ(A) = σ(BB' ∩ CC') = σ(BB') ∩ σ(CC') = BB' ∩ CC' = A, since B, B', C, C' are fixed points of σ. ⨆ ⨅
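The fixed-point behaviour of Definition 8.1 can be checked on a concrete matrix. The sketch below (helper names ours) acts on P²(Q) with [x0, x1, x2] ↦ [x0, 2x1, 2x2], which fixes the hyperplane x0 = 0 pointwise and moves every other point along a line through O = [1, 0, 0]:

```python
from fractions import Fraction as F

def normalize(p):
    """Scale a projective point so its first nonzero coordinate is 1."""
    pivot = next(c for c in p if c != 0)
    return tuple(F(c, 1) / pivot for c in p)

def apply(M, p):
    return tuple(sum(M[i][j] * p[j] for j in range(3)) for i in range(3))

A = [[1, 0, 0], [0, 2, 0], [0, 0, 2]]   # the homothety sigma_2 on P^2(Q)

# every point of the axis x0 = 0 is fixed (projectively) ...
for p in [(0, 1, 0), (0, 0, 1), (0, 3, -5)]:
    assert normalize(apply(A, p)) == normalize(p)

# ... and a point P off the axis moves along the line OP, with O = [1,0,0]:
P = (1, 1, 1)
Q = apply(A, P)                          # the image point
det = lambda M: (M[0][0]*(M[1][1]*M[2][2]-M[1][2]*M[2][1])
                 - M[0][1]*(M[1][0]*M[2][2]-M[1][2]*M[2][0])
                 + M[0][2]*(M[1][0]*M[2][1]-M[1][1]*M[2][0]))
assert det([[1, 0, 0], list(P), list(Q)]) == 0   # O, P, Q collinear
```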

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2024 L. B˘adescu, E. Carletti, Lectures on Geometry, La Matematica per il 3+2 158, https://doi.org/10.1007/978-3-031-51414-2_8



Theorem 8.3 Let σ ∈ Aut(X) \ {idX} be an automorphism of a projective space (X, D). Then σ has a centre if and only if it has an axis.

Proof First suppose that σ has the hyperplane Y as its axis.
Case 1: There exists a fixed point O ∈ X \ Y of σ. We wish to prove that O is the centre of σ. Indeed, let d be a line passing through O and A = d ∩ Y (see Remark 6.27). Then σ(A) = A and σ(d) = σ(OA) = σ(O)σ(A) = OA = d, since σ(O) = O.
Case 2: For every P ∈ X \ Y we have σ(P) ≠ P. Since Y is an axis and P ∉ Y, we have σ(P) ∉ Y (otherwise P = σ−1(σ(P)) ∈ Y). Fix a point P and put O := Pσ(P) ∩ Y. First observe that σ(OP) = σ(O)σ(P) = Oσ(P) = OP (since O ∈ Pσ(P)), so OP is a fixed line of σ. Let d be another line passing through O. If d ⊂ Y, then d is obviously fixed. Let d ⊄ Y. Take a point Q ∈ d \ Y and put U := PQ ∩ Y. Thus Q ∈ UP and σ(Q) ∈ σ(U)σ(P) = Uσ(P). In particular P, Q, σ(P) and σ(Q) are coplanar. On the plane determined by P, Q, σ(P) and σ(Q) consider O' := Pσ(P) ∩ Qσ(Q). The line Qσ(Q) is fixed (if Qσ(Q) ∩ Y = E, then σ(EQ) = σ(E)σ(Q) = Eσ(Q) = Qσ(Q)), so if we prove that O = O', then d = Qσ(Q), d is fixed and O is the centre of σ. The point O' is fixed because it is the intersection of the fixed lines Pσ(P) and Qσ(Q). By the above assumption (Case 2) O' must belong to Y. If O' ≠ O we should have Pσ(P) = OO' ⊂ Y, contradicting the hypothesis P ∉ Y; hence O' = O.
Now suppose that σ has a centre O. We have to consider two cases.
Case 1': There exists a hyperplane Y of X such that σ(Y) = Y and O ∉ Y. Then Y is an axis of σ. Indeed, if P ∈ Y the line OP is fixed (since O is a centre) and σ(P) ∈ OP ∩ Y. Since O ∉ Y the intersection OP ∩ Y consists of the point P only, so that σ(P) = P.
Case 2': For each hyperplane Z which does not contain O one has σ(Z) ≠ Z. Since we have not introduced the concept of dimension of a general projective subspace (Definition 7.1), we suppose that X = P(V) (with dim(P(V)) = n ≥ 2). However, the proof of the general case is the same as the one we are giving below (see [1]). Fix such a hyperplane Z. Since σ(Z) ≠ Z, by the projective Grassmann formula (Corollary 6.33) we have dim(σ(Z) ∩ Z) = n − 2. Let U := σ(Z) ∩ Z. By Corollary 6.33 the subspace Y := O + U is a hyperplane. We shall show that Y is an axis of σ. First we prove that every point of U is fixed. Indeed, if P ∈ U, then σ(P) ∈ OP (OP is fixed) and σ(P) = P since OP ∩ σ(Z) = σ(P) and P ∈ OP ∩ σ(Z). By Theorem 6.26 we have

Y = ∪_{P ∈ U} OP.

To finish our proof it remains to prove that every point R ∈ OP is fixed (for arbitrary P ∈ U). By reductio ad absurdum suppose there is R ∈ OP such that σ(R) ≠ R. Take a hyperplane V of P(V) passing through R and not containing


O. By Corollary 6.33, V ∩ σ(V) is a subspace of dimension n − 2 and every point of V ∩ σ(V) is a fixed point. Indeed, if P ∈ V ∩ σ(V), then σ(P) ∈ OP (OP is fixed) and σ(P) = P since OP ∩ σ(V) = σ(P) and P ∈ OP ∩ σ(V). Furthermore, there exists S ∈ V ∩ σ(V) \ U: otherwise V ∩ σ(V) = U (both subspaces are of dimension n − 2), hence in particular P ∈ V ∩ σ(V) ⊂ V; since R ∈ V, we would have RP ⊂ V, so that O ∈ V (since O ∈ RP), contradicting the assumptions on the hyperplane V. Take a point S ∈ V ∩ σ(V) \ U and consider the hyperplane

S + U = ∪_{T ∈ U} ST.

It is fixed since S and every T ∈ U are fixed points; moreover it does not contain O. But this contradicts the assumption of Case 2'. ⨆ ⨅

Definition 8.4 Let σ ∈ Aut(X) \ {idX} be an automorphism of a general projective space (X, D).

• σ is a projective translation if σ has an axis Y and a centre O such that O ∈ Y.
• σ is a projective homothety if σ has an axis Y and a centre O such that O ∉ Y.

Let σ ∈ Aut(X) be an automorphism of axis Y and centre O (i.e. a projective translation or homothety). If P ∈ X \ (Y ∪ {O}), then σ(P) ∈ PO and σ(P) ∉ Y ∪ {O}. The identity idX is a projective translation as well as a projective homothety by definition: every point of X is a centre and every hyperplane of X is an axis of idX.

Notation Let Y be a hyperplane of (X, D). We introduce the following notation:

• SY: the set of all projective automorphisms in Aut(X) of axis Y.
• TY: the set of all projective translations of axis Y.
• TY,U: the set of all projective translations of axis Y and centre U.
• HY,O: the set of all projective homotheties of axis Y and centre O.
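A concrete projective translation illustrates Definition 8.4. The map [x0, x1, x2] ↦ [x0, x0 + x1, x2] on P²(Q) has axis Y : x0 = 0 and centre U = [0, 1, 0] ∈ Y; the sketch below (helper names ours) verifies that the axis is fixed pointwise and that every line through U is stable:

```python
from fractions import Fraction as F

def apply(M, p):
    return tuple(sum(M[i][j] * p[j] for j in range(3)) for i in range(3))

def proj_eq(p, q):
    """Projective equality: p = lambda * q for some lambda != 0."""
    i = next(k for k, c in enumerate(p) if c != 0)
    if q[i] == 0:
        return False
    lam = F(p[i], q[i])
    return all(F(c, 1) == lam * d for c, d in zip(p, q))

T = [[1, 0, 0], [1, 1, 0], [0, 0, 1]]   # [x0,x1,x2] -> [x0, x0+x1, x2]

# the axis Y : x0 = 0 is fixed pointwise ...
assert all(proj_eq(apply(T, p), p) for p in [(0, 1, 0), (0, 0, 1), (0, 2, 7)])

# ... and each line u0*x0 + u2*x2 = 0 through the centre U = [0,1,0] is
# stable: image points satisfy the same equation.
for (u0, u2) in [(1, 0), (0, 1), (3, -2)]:
    for p in [(u2, 5, -u0), (2 * u2, 1, -2 * u0)]:   # points on that line
        q = apply(T, p)
        assert u0 * q[0] + u2 * q[2] == 0
```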

The following inclusions are obvious: TY,U ⊆ TY ⊆ SY and HY,O ⊆ SY.

Proposition 8.5 Under the above notation we have:

(i) SY is a subgroup of Aut(X).
(ii) TY is a normal subgroup of SY.
(iii) For every U, V ∈ Y, TY,U is a subgroup of TY and TY,U ∩ TY,V = {idX} if U ≠ V.
(iv) For every O ∈ X \ Y, HY,O is a subgroup of SY and HY,O ∩ TY = {idX}.
(v) An automorphism α ∈ Aut(X) such that α(Y) = Y' and α(O) = O' induces two group isomorphisms

HY,O → HY',O',  TY,O → TY',O',  σ ↦ α ◦ σ ◦ α−1.


Proof The assertions (i), (iii), (iv) and (v) easily follow from Definition 8.1 and from Proposition 8.2 (exercise left to the reader).
(ii) First we prove that TY is a subgroup of SY. It is obvious that if σ ∈ TY,U then σ−1 ∈ TY,U. Let σ, τ ∈ TY be of centres U, V ∈ Y respectively. The axis of σ ◦ τ is clearly Y. If U = V the centre of σ ◦ τ is obviously U. So suppose that both automorphisms are distinct from idX (otherwise there is nothing to prove) and U ≠ V. If P ∈ X \ Y, then σ(P) ∈ UP and σ(P) ≠ P (otherwise every line through P, since it meets Y, would be fixed and P would be a second centre of σ, which is impossible by Proposition 8.2 since σ ≠ idX). Analogously, τ(P) ∈ VP and τ(P) ≠ P. It follows that σ(τ(P)) ∈ Uτ(P), hence P ≠ σ(τ(P)) for each P ∈ X \ Y (if σ(τ(P)) = P, then U would lie on the line Pτ(P) = PV, forcing U = PV ∩ Y = V). Therefore the centre of σ ◦ τ ∈ SY (which exists by Theorem 8.3) belongs to Y, i.e. σ ◦ τ ∈ TY.
Now we show that TY is a normal subgroup of SY. Let σ ∈ TY and α ∈ SY. We have to prove that α−1 ◦ σ ◦ α ∈ TY. Let U ∈ Y be the centre of σ. We shall prove that U is also the centre of α−1 ◦ σ ◦ α. Let d be a line of X through U. Since α(U) = U the line α(d) also passes through U, and because U is the centre of σ, we have σ(α(d)) = α(d); therefore α−1(σ(α(d))) = α−1(α(d)) = d, i.e. d is a fixed line of α−1 ◦ σ ◦ α. ⨆ ⨅

Throughout this chapter K will be a fixed field and n ≥ 1 a fixed integer. Most results of this chapter also hold if K is a skew field (e.g. see [2] or [1]).

Example 8.6 Let A = diag(1, a, . . . , a) be the diagonal matrix of order n + 1 with a ∈ K \ {0}. Since det(A) = a^n ≠ 0, A ∈ GLn+1(K). The general linear projective automorphism σa := TA ∈ PGLn(K),

σa([x0, x1, . . . , xn]) = [x0, ax1, . . . , axn],

is a homothety with centre O = [1, 0, . . . , 0] and axis the hyperplane Y : x0 = 0. The set {σa | a ∈ K*} is a commutative subgroup of PGLn(K), isomorphic to the multiplicative group K*.

Example 8.7 For every (a1, . . . , an) ∈ K^n consider the matrix A(a1,...,an) of order n + 1 obtained from the identity matrix by replacing its first column with (1, a1, . . . , an)^t. Since det(A(a1,...,an)) = 1, A(a1,...,an) ∈ GLn+1(K), and the linear projective automorphism

τ(a1,...,an)([x0, x1, . . . , xn]) = [x0, a1x0 + x1, . . . , anx0 + xn]

is a projective translation of centre U := [0, a1, . . . , an] and axis the hyperplane Y : x0 = 0. The set {τ(a1,...,an) | (a1, . . . , an) ∈ K^n} (with τ(0,...,0) = idPn(K)) is a commutative subgroup of PGLn(K) isomorphic to the additive group of K^n (note that A(a1,...,an)A(b1,...,bn) = A(a1+b1,...,an+bn)).


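The identity A(a1,...,an)A(b1,...,bn) = A(a1+b1,...,an+bn) of Example 8.7 can be verified directly (a sketch; the helper names are ours):

```python
def T(a):
    """Matrix A_(a1,...,an): the identity of order n+1 with first
    column replaced by (1, a1, ..., an)^t."""
    n = len(a)
    M = [[1 if i == j else 0 for j in range(n + 1)] for i in range(n + 1)]
    for i, ai in enumerate(a, start=1):
        M[i][0] = ai
    return M

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# (a1,...,an) -> A_(a1,...,an) is a homomorphism from the additive group K^n:
assert matmul(T((2, 3)), T((4, -1))) == T((6, 2))
assert matmul(T((2, 3)), T((4, -1))) == matmul(T((4, -1)), T((2, 3)))
```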
The following result shows the importance of Desargues' axiom.

Proposition 8.8 Let (X, D) be a projective space which satisfies Desargues' axiom. Let Y be a hyperplane of (X, D) and O be a point of X. For every pair of points P, P' ∈ X \ (Y ∪ {O}) such that P, P' and O are collinear, there exists one and only one automorphism σ ∈ Aut(X) of axis Y and with centre O such that σ(P) = P'.

Proof First we prove uniqueness, which does not require Desargues' axiom. Let σ, τ ∈ Aut(X) be two automorphisms which satisfy the assertion of the proposition. Then α := τ−1 ◦ σ is an automorphism of axis Y and with centre O such that α(P) = P. Every line d through P is fixed by α, since d joins P and Q = d ∩ Y. Hence α has two distinct centres, which implies α = id by Proposition 8.2.
For a synthetic proof of existence in the general case see [1]. We suppose that X = Pn(K) with K a field.

Case of projective homothety. Let Y : x0 = 0 and O = [1, 0, . . . , 0]. Suppose OP ∩ Y = Q = [0, a1, . . . , an]. Then P = [1, λa1, . . . , λan] and P' = [1, λ'a1, . . . , λ'an], with λ, λ' ∈ K \ {0}. Put a = λ' · λ−1 and σa = TA, where A = diag(1, a, . . . , a).
Then σa is a projective homothety with axis Y and centre O such that σa(P) = P'.

Case of projective translation. Let Y : x0 = 0, O = [0, a1, . . . , an] ∈ Y and P = [1, c1, . . . , cn]. Then P' = [1, c1 + λa1, . . . , cn + λan], with λ ∈ K. Put τ(λa1,...,λan) = TA, where A = A(λa1,...,λan) is the matrix of Example 8.7.


Then τ(λa1,...,λan) is a projective translation with axis Y and centre O such that τ(λa1,...,λan)(P) = P'. Observe that the projective homothety and the projective translation determined above are elements of PGLn(K).
Let Y' be an arbitrary hyperplane of Pn(K) and O' ∈ Pn(K) an arbitrary point. There exists σ ∈ PGLn(K) such that σ(Y) = Y' and σ(O) = O' (Theorem 6.47). If O' ∉ Y', then σ ◦ σa ◦ σ−1 ∈ HY',O' sends σ(P) to σ(P'). If O' ∈ Y', then σ ◦ τ(λa1,...,λan) ◦ σ−1 ∈ TY',O' sends σ(P) to σ(P'). ⨆ ⨅

Corollary 8.9 Every projective translation (resp. projective homothety) of Pn(K) is an element of PGLn(K). Furthermore, for every hyperplane Y of Pn(K) and for every O ∉ Y and U ∈ Y, we have the following group isomorphisms:

HY,O ≅ K*,  TY,U ≅ K.

In particular HY,O and TY,U are commutative subgroups of PGLn(K).

Proof The first statement follows from the proof of Proposition 8.8 (existence and uniqueness). The isomorphism HY,O ≅ K* follows from Example 8.6. From the identity

τ(λa1,...,λan) ◦ τ(μa1,...,μan) = τ((λ+μ)a1,...,(λ+μ)an),  λ, μ ∈ K,

we deduce the isomorphism TY,U ≅ K. ⨆ ⨅
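The isomorphism HY,O ≅ K* is visible at the matrix level: a ↦ diag(1, a, …, a) turns products in K* into products of matrices. A quick sketch (helper names ours):

```python
def sigma(a, n=2):
    """diag(1, a, ..., a): the matrix of the homothety sigma_a of Example 8.6."""
    return [[a if (i == j and i > 0) else (1 if i == j else 0)
             for j in range(n + 1)] for i in range(n + 1)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# a -> sigma_a is a group homomorphism K* -> H_{Y,O}, and H_{Y,O} is commutative:
assert matmul(sigma(3), sigma(5)) == sigma(15)
assert matmul(sigma(3), sigma(5)) == matmul(sigma(5), sigma(3))
```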

Theorem 8.10 Let Y be a hyperplane of a general projective space (X, D). If (X, D) is a plane we require that X satisfies Desargues' axiom (P5). Then:

(i) The group TY of translations with axis Y is commutative.
(ii) Let O ∈ X \ Y. For each σ ∈ SY there exist τ ∈ TY and α ∈ HY,O such that σ = τ ◦ α.

Proof From Theorem 7.1 and from the hypothesis it follows that X satisfies Desargues' axiom (P5).
(i) Let τ1, τ2 ∈ TY be two translations ≠ idX of centres U1 and U2 respectively.
Case 1: U1 ≠ U2. Let P ∈ X be an arbitrary point. Put P1 := τ1(P) and P2 := τ2(P). We can assume P ∉ Y (otherwise τ1(τ2(P)) = P = τ2(τ1(P))). Since τi ≠ idX, Pi ≠ P, i = 1, 2 (otherwise P would be another centre of τ1 and τ2). The points P2, τ2(P1) and τ2(U1) = U1 are collinear since P1 ∈ U1P. It follows that τ2(P1) ∈ U1P2. Furthermore τ2(P1) ∈ U2P1. Hence τ2(P1) = U1P2 ∩ U2P1 (U1 ≠ U2). By symmetry τ1(P2) = U1P2 ∩ U2P1, hence τ1 ◦ τ2 = τ2 ◦ τ1 if U1 ≠ U2. Thus we have proved the equality τ1 ◦ τ2 = τ2 ◦ τ1 if U1 ≠ U2 without assuming Desargues' axiom.
Case 2: U1 = U2 =: U. Choose a point V ∈ Y, V ≠ U, and two distinct points A, A' ∉ Y such that A, A' and V are collinear. Since X satisfies Desargues' axiom, by Proposition 8.8 there exists a unique projective translation τ of axis


Y and centre V such that τ(A) = A'. If W is the centre of τ2 ◦ τ, then W ≠ U (otherwise A, U, τ2(τ(A)) would be collinear as well as A', U, τ2(τ(A)), so that U ∈ AA'; but AA' ∩ Y = V ≠ U). Since V ≠ U and W ≠ U, by Case 1 we have

τ1 ◦ τ2 = τ1 ◦ τ2 ◦ τ ◦ τ−1 = τ1 ◦ (τ2 ◦ τ) ◦ τ−1 = (τ2 ◦ τ) ◦ τ1 ◦ τ−1 = τ2 ◦ (τ ◦ τ1) ◦ τ−1 = τ2 ◦ (τ1 ◦ τ) ◦ τ−1 = τ2 ◦ τ1

and (i) is proved.
(ii) Let σ ∈ SY. If σ ∈ HY,O we take τ = id and α = σ. If we suppose σ ∉ HY,O, then we have O' := σ(O) ≠ O. By Proposition 8.8 there exists a unique projective translation τ ∈ TY of centre U = OO' ∩ Y such that τ(O) = O'. Therefore τ−1 ◦ σ(O) = O; thus if we put α := τ−1 ◦ σ we get α ∈ HY,O. ⨆ ⨅

Corollary 8.11 Assume the hypothesis of Proposition 8.8. Let A, B, C ∈ X \ Y.

(a) If τBA is the unique translation τ ∈ TY such that τ(A) = B, then τAB = (τBA)−1.
(b) τCA = τCB ◦ τBA.
(c) τBA(C) = τCA(B).
(d) If σBA is the unique homothety σ ∈ HY,O such that σ(A) = B, then σAB = (σBA)−1.
(e) σCA = σCB ◦ σBA, where O, A, B and C are necessarily collinear.
(f) σBA(C) = σCA(B), where O, A, B and C are necessarily collinear.

Proof (c) It is clear that τBA(C) = τCA(B) if and only if τAB(τBA(C)) = τAB(τCA(B)). Since τAB(τBA(C)) = C by (a), and τAB(τCA(B)) = τCA(τAB(B)) = τCA(A) = C (because TY is commutative and by the definition of τCA), (c) follows.
As above, σBA(C) = σCA(B) if and only if σAB(σBA(C)) = σAB(σCA(B)). By (d) σAB(σBA(C)) = C and, by commutativity of HY,O, σAB(σCA(B)) = σCA(σAB(B)) = σCA(A) = C. ⨆ ⨅
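For translations of Pn(K) with axis x0 = 0, parts (a) and (b) of Corollary 8.11 reduce to matrix identities, since τBA corresponds to A(b−a) in the notation of Example 8.7. A sketch (helper names and the sample points are ours):

```python
def T(a):
    """A_(a1,...,an): identity of order n+1 with first column (1, a1, ..., an)^t."""
    n = len(a)
    M = [[1 if i == j else 0 for j in range(n + 1)] for i in range(n + 1)]
    for i, ai in enumerate(a, start=1):
        M[i][0] = ai
    return M

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

sub = lambda u, v: tuple(x - y for x, y in zip(u, v))

# affine parts of A = [1,2,1], B = [1,5,-3], C = [1,0,4]:
A, B, C = (2, 1), (5, -3), (0, 4)
tau_BA, tau_CB, tau_CA = T(sub(B, A)), T(sub(C, B)), T(sub(C, A))

assert matmul(tau_CB, tau_BA) == tau_CA            # (b): tau_C^A = tau_C^B . tau_B^A
assert matmul(T(sub(A, B)), tau_BA) == T((0, 0))   # (a): tau_A^B inverts tau_B^A
```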

8.1.1 Desargues' Axiom, Pappus' Axiom and the Division Ring of the Coordinates

Definition 8.12 Let (X, D) be a general projective space (according to Definition 7.1). We say that Pappus' axiom holds in (X, D) if:


Fig. 8.1 Pappus’ theorem

(P6) For any two lines d and d' of (X, D) concurrent at a point O and for any six distinct points A, B, C ∈ d \ {O} and A', B', C' ∈ d' \ {O}, the points of intersection

U := AB' ∩ A'B,  V := AC' ∩ A'C,  W := BC' ∩ B'C

lie on a common line in the plane d + d' (see Fig. 8.1).

Now we give an application of projective homotheties and translations by proving an important classical theorem of projective geometry.

Theorem 8.13 (Pappus' Theorem) Pappus' axiom holds in a projective space P(V) associated with a K-vector space V of dimension ≥ 3.

Proof Assume the notation of Definition 8.12.
Case 1. Suppose O ∈ UW and put Y = UW. For P, Q ∈ d \ {O} and P', Q' ∈ d' \ {O}, by Proposition 8.8 there is exactly one projective translation τQP ∈ TY,O (resp. τQ'P' ∈ TY,O) such that τQP(P) = Q (resp. τQ'P'(P') = Q'). We claim that the following equalities hold:

τBA = τA'B',  τCB = τB'C'.

Indeed, to prove the first equality (the second is analogous) it is enough to observe that τBA(B') ∈ BU, since τBA maps the line AU = AB' to BU (A is sent to B and U, lying on the axis, is fixed). Furthermore τBA(B') ∈ d' (d' passes through the centre O), and since B, U, A' are collinear we get τBA(B') = BU ∩ d' = A', i.e. τBA = τA'B'. Therefore

τCA = τCB ◦ τBA = τB'C' ◦ τA'B' = τA'B' ◦ τB'C' = τA'C'

(τB'C' ◦ τA'B' = τA'B' ◦ τB'C' follows from Corollary 8.9, which implies that TY,O is a commutative group). In other words τCA(AC') = CA'. Furthermore τCA(V) ∈ OV, which implies τCA(V) = τCA(AC') ∩ OV = CA' ∩ OV = V, so that V ∈ Y = UW, i.e. U, V and W are collinear.
Case 2. Now suppose O ∉ UW and put Y = UW. In this case the subgroup TY,O is replaced by the subgroup HY,O, which is commutative too (Corollary 8.9).


Let P, Q ∈ d \ (Y ∪ {O}) and P', Q' ∈ d' \ (Y ∪ {O}). By Proposition 8.8 there is exactly one projective homothety σQP ∈ HY,O (resp. σQ'P' ∈ HY,O) such that σQP(P) = Q (resp. σQ'P'(P') = Q'). As above we get σBA = σA'B' and σCB = σB'C'. Hence

σCA = σCB ◦ σBA = σB'C' ◦ σA'B' = σA'B' ◦ σB'C' = σA'C'

(the equality σB'C' ◦ σA'B' = σA'B' ◦ σB'C' holds because the subgroup HY,O is commutative). In other words σCA(AC') = CA'. Furthermore σCA(V) ∈ OV, which implies σCA(V) = σCA(AC') ∩ OV = CA' ∩ OV = V, so that V ∈ Y = UW, i.e. U, V and W are collinear. ⨆ ⨅

Theorem 8.14 (Hessenberg's Theorem) Let (X, D) be a general projective space. If Pappus' axiom holds in (X, D), then Desargues' axiom is also true in (X, D).

Proof See [1] pp. 147–148. ⨆ ⨅
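Pappus' theorem can be sanity-checked numerically in P²(Q), where both the line through two points and the intersection of two lines are given by the cross product of coordinate triples (a sketch; the chosen points are ours):

```python
def cross(p, q):
    """Cross product: the line through two points, or the meet of two lines."""
    return (p[1]*q[2] - p[2]*q[1], p[2]*q[0] - p[0]*q[2], p[0]*q[1] - p[1]*q[0])

def collinear(p, q, r):
    return sum(c * x for c, x in zip(cross(p, q), r)) == 0

O = (1, 0, 0)
A, B, C = (1, 1, 0), (1, 2, 0), (1, 5, 0)       # on the line d  : x2 = 0
A1, B1, C1 = (1, 0, 1), (1, 0, 3), (1, 0, 4)    # on the line d' : x1 = 0

U = cross(cross(A, B1), cross(A1, B))           # U := AB' ∩ A'B
V = cross(cross(A, C1), cross(A1, C))           # V := AC' ∩ A'C
W = cross(cross(B, C1), cross(B1, C))           # W := BC' ∩ B'C

assert collinear(U, V, W)                       # Pappus: U, V, W collinear
```

Integer arithmetic keeps the test exact, so the final collinearity check is a genuine verification for these six points.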

Remark 8.15 Let (X, D) be a general projective space satisfying Desargues’ axiom. Fix a hyperplane Y and two points O, E ∈ X\Y and put KO,E,Y := OE\Y. Then by means of projective homotheties and translations we can define a structure of division ring (in general non-commutative) on KO,E,Y (see Sect. 8.2.1 when X = Pn (K)). We point out some basic properties which underline the relevance of Desargues’ and Pappus’ axioms. 1. If we fix another triple (O ' , E ' , Y' ) as above, then KO,E,Y is isomorphic to KO ' ,E ' ,Y' , thus we can define K(X) := KO,E,Y the division ring of the coordinates of X which is unique up to division ring isomorphisms. 2. We can extend Definition 6.28 also to the vector spaces over a non-commutative division ring K without any change. In particular we can introduce the left standard projective space Pnl (K) as well as the right standard projective space Pnr (K). Every projective plane P2l (K) (or P2r (K)) satisfies Desargues’ axiom. The projective spaces over the skew field of quaternions play an important role in algebraic topology (see [17]). 3. There exists a projective isomorphism X → Pnl (K) (or X → Pnr (K)), where K∼ = K(X). 4. The division ring of the coordinates of X is commutative if and only if X satisfies Pappus’ axiom. In this case KO,E,Y will be called the field of coordinates. In Chapter 11 of [1] one can find a very complete account of this subject.


8.2 The Group of Projective Automorphisms of Pn (K) Definition 8.16 Let Aut(K) be the group (in general non-commutative) of all field automorphisms of K with respect to composition of maps. Every α ∈ Aut(K) induces a well-defined map Sα : Pn (K) → Pn (K), where n ≥ 2, given by Sα ([x0 , x1 , . . . , xn ]) = [α(x0 ), . . . , α(xn )].

.

It is immediately seen that Sα is bijective. Since

Sα([λa0 + μb0, . . . , λan + μbn]) = α(λ)Sα([a0, a1, . . . , an]) + α(μ)Sα([b0, b1, . . . , bn]),

Sα maps lines into lines and so it is a projective automorphism of Pn(K). Furthermore, since Sαβ = Sα ◦ Sβ, ∀ α, β ∈ Aut(K), the map

Sn : Aut(K) → Aut(Pn(K)),  Sn(α) = Sα,

is an injective group homomorphism. We shall denote the subgroup Sn(Aut(K)) of Aut(Pn(K)) by PAutn(K). Since Sn is an injective group homomorphism, PAutn(K) is isomorphic to Aut(K).

Proposition 8.17 Aut(R) = {idR}, i.e. idR is the unique automorphism of R.

Proof Let f ∈ Aut(R). Since f(1) = 1, we have f(2) = f(1 + 1) = f(1) + f(1) = 1 + 1 = 2. Suppose by induction that f(n) = n; then f(n + 1) = f(n) + f(1) = n + 1. Hence f(n) = n for all n ∈ N. Furthermore f(0) = 0 and f(−n) = −f(n) = −n, ∀ n ∈ N, so that f(n) = n, ∀ n ∈ Z. Let n > 0 be a positive integer. Then

1 = f(1) = f(1/n + · · · + 1/n) = f(1/n) + · · · + f(1/n) = n f(1/n)  (n summands),

i.e. f(1/n) = 1/n. If m > 0 is another positive integer, we have

f(m/n) = f(1/n + · · · + 1/n) = m f(1/n) = m/n  (m summands).


Hence f(q) = q for every q ∈ Q, q > 0. Therefore f(−q) = −q, ∀ q > 0, so that f(q) = q for all q ∈ Q.
If x ∈ R and x > 0, then x = a² for some a ∈ R, so that f(x) = f(a²) = f(a)² > 0. If x, y ∈ R with x < y, then f(y) − f(x) = f(y − x) > 0 since y − x > 0; hence f is strictly increasing.
Let x ∈ R. Since Q is dense in R, there exist two sequences of rational numbers {pn} and {qn}, with {pn} strictly increasing and {qn} strictly decreasing, such that

lim pn = x = lim qn,  pn < x < qn, ∀ n ∈ N.

Since f is strictly increasing we have f(pn) = pn < f(x) < f(qn) = qn, ∀ n ∈ N. Letting n → ∞ we get f(x) = x by uniqueness of limits. ⨆ ⨅

Corollary 8.18 PAutn(R) = {idPn(R)}.

Remark 8.19 From the proof of Proposition 8.17 we get PAutn(Q) = {idPn(Q)}. Also PAutn(Zp) = {idPn(Zp)} (Exercise 12.2). On the contrary, PAutn(C) is uncountable (see [38]). However, only the identity idC and the conjugation z ↦ z̄ are continuous with respect to the Euclidean topology.
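For K = C the remark can be illustrated with α = complex conjugation: the sketch below (helper names ours) checks on P¹(C) that Sα is well defined on projective points and is an involution:

```python
def S_conj(p):
    """S_alpha for alpha(z) = conj(z), acting on coordinate tuples."""
    return tuple(z.conjugate() for z in p)

def normalize(p):
    pivot = next(z for z in p if z != 0)
    return tuple(z / pivot for z in p)

p = (1 + 2j, 3 - 1j)
lam = 2 - 5j

# well defined: rescaling a representative does not change the image point
q1 = normalize(S_conj(p))
q2 = normalize(S_conj(tuple(lam * z for z in p)))
assert all(abs(a - b) < 1e-12 for a, b in zip(q1, q2))

# S_conj . S_conj = id, since conjugation is an involution of C
assert S_conj(S_conj(p)) == p
```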

8.2.1 Geometric Characterization of the Field K

In order to determine the structure of Aut(Pn(K)) we need to give a geometric characterization of the field K.

Definition 8.20 Let

O' = [1, 0, . . . , 0], E' = [1, 1, . . . , 1], E1' = [0, 1, 0, . . . , 0], . . . , En' = [0, 0, . . . , 0, 1]  (8.1)

be the standard projective frame of Pn(K) and Y0 be the hyperplane generated by E1', . . . , En' (whose equation is x0 = 0). The line O'E' (of equations x1 = x2 = · · · = xn) meets Y0 at the point U' = [0, 1, . . . , 1]. Hence the triple {O', E', U'} is a projective frame of O'E'. Put

KO',E',Y0 := O'E' \ {U'} = {[1, a, . . . , a] | a ∈ K}.

Then the natural map

ϕO',E',Y0 : K → KO',E',Y0,  ϕO',E',Y0(a) = [1, a, . . . , a],  ∀ a ∈ K,  (8.2)


is bijective. Therefore by means of ϕO',E',Y0 we can carry the field structure of K to KO',E',Y0. More precisely, the triple (KO',E',Y0, +, ·) becomes a field canonically isomorphic to K with respect to the operations defined by

[1, a, . . . , a] + [1, b, . . . , b] = [1, a + b, . . . , a + b],
[1, a, . . . , a] · [1, b, . . . , b] = [1, ab, . . . , ab],  ∀ a, b ∈ K.  (8.3)

If [1, a, . . . , a], [1, b, . . . , b] ∈ KO',E',Y0 are two elements of KO',E',Y0, the addition of KO',E',Y0 defined by (8.3) can be seen geometrically as

τA'+B'O' = τA'O' ◦ τB'O',  ∀ A' = [1, a, . . . , a], B' = [1, b, . . . , b],  (8.4)

where τA'O' is the unique projective translation of axis Y0 and centre U' = [0, 1, . . . , 1] which sends O' = [1, 0, . . . , 0] to A' = [1, a, . . . , a]. Indeed, from the proof of Proposition 8.8, τA'O', τB'O' and τA'+B'O' correspond to the matrices A(a,...,a), A(b,...,b) and A(a+b,...,a+b) of Example 8.7
respectively. Analogously the multiplication of .KO ' ,E ' ,Y0 defined by (8.3) is geometrically given by '

'

'

σAE' ·B ' = σAE' ◦ σBE' ,

.

∀ A' = [1, a, . . . , a], B ' = [1, b, . . . , b], a, b ∈ K \ {0}, (8.5)

'

where .σAE' is the unique projective homothety of axis .Y0 and centre .O ' which sends E' E' ' ' .E to .A = [1, a, . . . , a]. Indeed, from the proof of Proposition 8.8, .σ ' , .σ ' and A B ' E .σ ' ' correspond to the matrices A ·B ⎛ 1 ⎜0 ⎜ . ⎜. ⎝ ..

··· ··· .. .

0 a .. .

⎞ 0 0⎟ ⎟ .. ⎟ , .⎠

0 0 ··· a

⎛ 1 ⎜0 ⎜ ⎜. ⎝ ..

0 b .. .

··· ··· .. .

⎞ 0 0⎟ ⎟ .. ⎟ .⎠

0 0 ··· b

and

⎛ 1 ⎜0 ⎜ ⎜. ⎝ ..

0 ab .. .

··· ··· .. .

⎞ 0 0⎟ ⎟ .. ⎟ .⎠

0 0 · · · ab

respectively. In particular we have E ' · A' = A'

.

∀ A' = [1, a, . . . , a] ∈ KO ' ,E ' ,Y0 .

(8.6)

8.2 The Group of Projective Automorphisms of Pn (K)

287

If .ab = 0 then .A' = O ' (or .B ' = O ' ) and we put by definition .O ' · A' = O ' for every .A' = [1, a, . . . , a] ∈ KO ' ,E ' ,Y0 . Proposition 8.21 Assume the notation of Definition 8.20. Let .Y be a hyperplane of Pn (K) and O, .E ∈ Pn (K) \ Y be two distinct points. Let

.

KO,E,Y := OE \ Y,

.

U := OE ∩ Y.

Then there exists a natural field structure on .KO,E,Y whose algebraic operations are given by O τA+B = τAO ◦ τBO ,

.

i.e. A + B = τAO ◦ τBO (O) = τAO (B),

∀ A, B ∈ KO,E,Y , . E σA·B

=

σAE

◦ σBE

i.e. A · B =

(8.7) σAE

◦ σBE (E)

=

σAE (B),

∀ A, B ∈ KO,E,Y \ {O}, .

(8.8)

O ·A = O

∀ A ∈ KO,E,Y , .

(8.9)

E·A = A

∀ A ∈ KO,E,Y ,

(8.10)

where τAO is the projective translation of axis Y and centre U such that τAO(O) = A, and σAE is the projective homothety of axis Y and centre O such that σAE(E) = A, if A ≠ O. Furthermore, there exists a canonical field isomorphism ϕO,E,Y : K → KO,E,Y such that ϕO,E,Y(0) = O and ϕO,E,Y(1) = E, given by (8.13).

Proof Choose n points E1, . . . , En ∈ Y which generate Y (hence E1, . . . , En ∈ Y are projectively independent). Therefore {O, E, E1, . . . , En} is a projective frame of Pn(K) and the set {O, E, U} is a projective frame of OE. By Theorem 6.47 there exists a unique σ ∈ PGLn(K) such that

.

In particular, σ(Y0) = Y. Since U' = O'E' ∩ Y0 and U = OE ∩ Y, we also have σ(U') = U. Furthermore, if Y* is another hyperplane of Pn(K) passing through U such that O, E ∈ Pn(K) \ Y*, and if E1*, . . . , En* ∈ Y* are n points generating Y*, then the set {O, E, E1*, . . . , En*} is another projective frame of Pn(K). By Theorem 6.47 there exists a unique automorphism τ ∈ PGLn(K) such that

.

Since the hyperplanes Y and Y* contain U, it follows that σ(U') = τ(U') = U. Therefore the restrictions σ|O'E' : O'E' → OE and τ|O'E' : O'E' → OE are the same projective linear isomorphism O'E' → OE (they agree on the projective frame {O', E', U'} of O'E'). In particular, the projective frame {O, E, U} of the line OE does not depend on the choice of the hyperplane Y


passing through U and such that O, E ∈ Pn(K) \ Y. If we put

σ̃O,E : KO',E',Y0 → KO,E,Y,  σ̃O,E := σ|O'E'\{U'} = τ|O'E'\{U'},  (8.11)

(σ̃O,E : O'E' \ {U'} → OE \ {U} since σ(U') = U), then σ̃O,E(O') = O and σ̃O,E(E') = E.
Now let A, B ∈ KO,E,Y. By what we saw above there exist uniquely determined a, b ∈ K such that A' = [1, a, . . . , a] and B' = [1, b, . . . , b] with A = σ(A') = σ̃O,E(A') and B = σ(B') = σ̃O,E(B'). Since σ ◦ τA'O' ◦ σ−1 is a projective translation with axis Y which sends O to A, and σ ◦ σA'E' ◦ σ−1 is a projective homothety with axis Y and centre O which sends E to A, by Proposition 8.8 we have the following formulas:

τAO = σ ◦ τA'O' ◦ σ−1  and  σAE = σ ◦ σA'E' ◦ σ−1.  (8.12)

By means of (8.4), (8.5), (8.6) and (8.12) the operations of KO,E,Y are naturally defined by the formulas (8.7) to (8.10). Therefore σ̃O,E is an isomorphism of fields:

σ̃O,E(A' · B') = σ̃O,E(σA'E'(B')) = σ̃O,E(σ−1 ◦ σAE ◦ σ(B')) = σAE(B) = A · B = σ̃O,E(A') · σ̃O,E(B');
σ̃O,E(A' + B') = σ̃O,E(τA'O'(B')) = σ̃O,E(σ−1 ◦ τAO ◦ σ(B')) = τAO(B) = A + B = σ̃O,E(A') + σ̃O,E(B').

Thus the composition

ϕO,E,Y : K → KO,E,Y,  ϕO,E,Y = σ̃O,E ◦ ϕO',E',Y0  (8.13)

is a canonical isomorphism of fields. ⨆ ⨅

Corollary 8.22 Let f ∈ Aut(Pn(K)). Let Y be a hyperplane and O, E ∈ Pn(K) \ Y be two distinct points. Then the restriction f|KO,E,Y : KO,E,Y → Kf(O),f(E),f(Y) is a field isomorphism.


Proof Let: (a) .τAO be the unique projective translation of axis .Y and centre .U = OE ∩ Y such that .τAO (O) = A, (b) .σBE be the unique projective homothety of axis .Y and centre O such that E .σ (E) = B /= O. B

8.2 The Group of Projective Automorphisms of Pn (K)

289

Then by Proposition 8.8

τ_{f(A)}^{f(O)} = f ◦ τAO ◦ f−1, ∀ A ∈ KO,E,Y;  σ_{f(A)}^{f(E)} = f ◦ σAE ◦ f−1, ∀ A ∈ KO,E,Y \ {O},  (8.14)

where τ_{f(A)}^{f(O)} has axis f(Y) and centre f(U), while σ_{f(A)}^{f(E)} has axis f(Y) and centre f(O). By the group structure on Kf(O),f(E),f(Y) and by (8.14) we have the relations

.

f (O)

f (O)

O ◦ f −1 = τf (A+B) = f ◦ (τAO ◦ τBO ) ◦ f −1 = f ◦ τA+B f (O)

and σf (A)·f (B) = σf (A) ◦ σf (B) = (f ◦ σAE ◦ f −1 ) ◦ (f ◦ σBE ◦ f −1 )

.

f (E)

f (E)

f (E)

E = f ◦ (σAE ◦ σBE ) ◦ f −1 = f ◦ σA·B ◦ f −1 = σf (A·B) . f (E)

This concludes the proof.

⨆ ⨅

Definition 8.23 The field KO,E,Y is called the coordinate field of Pn(K) (according to Remark 8.15), unique up to field isomorphisms.

Proposition 8.24 Let

O' = [1, 0, . . . , 0], E' = [1, 1, . . . , 1], E1' = [0, 1, 0, . . . , 0], . . . , En' = [0, 0, . . . , 0, 1]  (8.15)

be the standard projective frame of Pn(K), with n ≥ 2. If α ∈ Aut(Pn(K)) is such that α(O') = O', α(Ei') = Ei', i = 1, 2, . . . , n, and α(E') = E', then α ∈ PAutn(K).

Proof We assume the notation of the discussion preceding Proposition 8.21 (Definition 8.20 to (8.3)) and put K' := KO',E',Y0. By Corollary 8.22, α|K' : K' → K' is a field isomorphism. By means of the canonical isomorphism ϕ' := ϕO',E',Y0 : K → K' defined by (8.2), we obtain the automorphism β : K → K given by β(a) = b, where α|K'([1, a, . . . , a]) = [1, b, . . . , b] for every a ∈ K. We have to show that α([a0, a1, . . . , an]) = [β(a0), β(a1), . . . , β(an)] for each P = [a0, a1, . . . , an] ∈ Pn(K).
First suppose P = [a0, a1, . . . , an] ∉ Y0; we may assume a0 = 1. Put α(P) = [1, b1, . . . , bn] and fix i ∈ {1, . . . , n}. Consider the point O'E' ∩ (α(P) + E1' + · · · + Êi' + · · · + En'), where Êi' means that Ei' is omitted. Then

O'E' ∩ (α(P) + E1' + · · · + Êi' + · · · + En') = α(O'E') ∩ α(P + E1' + · · · + Êi' + · · · + En') = α(O'E' ∩ (P + E1' + · · · + Êi' + · · · + En'))

290

8 General Linear Projective Automorphisms

since $\alpha(O') = O'$, $\alpha(E'_i) = E'_i$, $i = 1, \dots, n$, and $\alpha(E') = E'$. The hyperplane $\alpha(P) + E'_1 + \dots + \widehat{E'_i} + \dots + E'_n$ has the equation $b_i x_0 - x_i = 0$, so that

$$O'E' \cap (\alpha(P) + E'_1 + \dots + \widehat{E'_i} + \dots + E'_n) = [1, b_i, \dots, b_i], \quad i = 1, \dots, n.$$

The hyperplane $P + E'_1 + \dots + \widehat{E'_i} + \dots + E'_n$ has the equation $a_i x_0 - x_i = 0$, so that

$$O'E' \cap (P + E'_1 + \dots + \widehat{E'_i} + \dots + E'_n) = [1, a_i, \dots, a_i], \quad i = 1, \dots, n.$$

Therefore $\alpha([1, a_i, \dots, a_i]) = [1, b_i, \dots, b_i]$, i.e. $\beta(a_i) = b_i$, $i = 1, \dots, n$.

Now suppose $P = [0, a_1, \dots, a_n] \in Y_0$. Take $Q = [1, a_1, \dots, a_n]$, so that $P = O'Q \cap Y_0$. By the above case $\alpha(Q) = [1, \beta(a_1), \dots, \beta(a_n)]$. Then $\alpha(P) = \alpha(O')\alpha(Q) \cap Y_0 = [0, b_1, \dots, b_n]$, so that $b_i = \beta(a_i)$, $i = 1, \dots, n$. □

The fundamental result concerning the structure of $\operatorname{Aut}(\mathbb P^n(K))$ is the following:

Theorem 8.25 Let $\alpha \in \operatorname{Aut}(\mathbb P^n(K))$ $(n \ge 2)$ be any automorphism. Then there exist $\beta \in \operatorname{PGL}_n(K)$ and $\gamma \in \operatorname{PAut}_n(K)$ such that $\alpha$ has a unique expression of the form $\alpha = \beta \circ \gamma$. Since $\operatorname{PGL}_n(K)$ is a normal subgroup of $\operatorname{Aut}(\mathbb P^n(K))$ and $\operatorname{PGL}_n(K) \cap \operatorname{PAut}_n(K) = \{\mathrm{id}\}$, $\operatorname{Aut}(\mathbb P^n(K))$ is the semi-direct product $\operatorname{PGL}_n(K) \rtimes \operatorname{PAut}_n(K)$.

Proposition 10.80 Let $\widetilde F \in \mathcal H^p_2(n, K)$ with matrix $C$ and let $S = \sigma(\mathbb P^m(K))$ be a projective subspace of $\mathbb P^n(K)$. Then:

(a) $S \subset V_+(\widetilde F)$ if and only if any pair of points $P, Q \in S$ are conjugate with respect to $\widetilde F$. In particular, if $S$ contains a non-singular point $P$ of $\widetilde F$, then $S \subset T^p_{\widetilde F, P}$.
(b) If $S \not\subset V_+(\widetilde F)$, then $P, Q \in S$ are conjugate with respect to $\widetilde F$ if and only if they are conjugate with respect to $\widetilde F|_{S,\sigma}$, the restriction of $\widetilde F$ to $S$.

Proof
(a) For any pair of points $P, Q \in S$ the line $PQ \subset S$; therefore, if $S \subset V_+(\widetilde F)$, then $\lambda P + \mu Q \in V_+(\widetilde F)$ for all $(\lambda, \mu) \in K^2 \setminus \{(0,0)\}$, i.e. the equation

$$\lambda^2 P^t C P + 2\lambda\mu\, P^t C Q + \mu^2 Q^t C Q = 0$$

vanishes identically; this implies $P^t C Q = 0$, i.e. $P$ and $Q$ are conjugate with respect to $\widetilde F$. Conversely, if any pair of points $P, Q \in S$ are conjugate with respect to $\widetilde F$, then in particular $P^t C P = Q^t C Q = 0$, so that $P, Q \in V_+(\widetilde F)$, i.e. $S \subset V_+(\widetilde F)$.

(b) Let $P = \sigma([p_0, \dots, p_m])$ and $Q = \sigma([q_0, \dots, q_m])$, i.e.

$$x^P_k = \sum_{j=0}^m a_{kj}\, p_j, \quad k = 0, \dots, n, \qquad\text{and}\qquad x^Q_l = \sum_{h=0}^m a_{lh}\, q_h, \quad l = 0, \dots, n.$$

The points $P$ and $Q$ are conjugate with respect to $\widetilde F$ if and only if

$$\sum_{k,l=0}^{n} c_{kl}\, x^P_k x^Q_l = 0, \tag{10.52}$$

while $P$ and $Q$ are conjugate with respect to $\widetilde F|_{S,\sigma}$ if and only if

$$\sum_{j,h=0}^{m} \Bigl(\sum_{k,l=0}^{n} c_{kl}\, a_{kj} a_{lh}\Bigr) p_j q_h = 0. \tag{10.53}$$

The equivalence between (10.52) and (10.53) follows from the equalities (10.39) which define $\widetilde F|_{S,\sigma}$. □

Corollary 10.81 Let $P \in \operatorname{Reg}(\widetilde F)$ be a non-singular point of an irreducible hyperquadric $\widetilde F \in \mathcal H^p_2(n, K)$ and let $Y$ be a hyperplane of $\mathbb P^n(K)$ such that $P \in Y$. Then $Y = T^p_{\widetilde F, P}$ if and only if $P$ is singular for the restriction of $\widetilde F$ to $Y$.

Proof Since $\widetilde F$ is irreducible, by Lemma 10.48 $V_+(\widetilde F)$ does not contain any hyperplane of $\mathbb P^n(K)$ passing through $P$. Let $Q$ be any point of $Y$. If $Y = T^p_{\widetilde F, P}$, then $Q$ is conjugate to $P$ with respect to $\widetilde F$, hence by Proposition 10.80 $Q$ is conjugate to $P$ with respect to the restriction of $\widetilde F$ to $Y$. Therefore $P$ is singular for the restriction of $\widetilde F$ to $Y$ by Proposition 10.70. Conversely, if $P$ is singular for the restriction of $\widetilde F$ to $Y$, then $P$ is conjugate to any point $Q$ of $Y$ with respect to the restriction of $\widetilde F$ to $Y$. By Proposition 10.80 $P$ is conjugate to any point $Q$ of $Y$ with respect to $\widetilde F$, so that $Y \subset T^p_{\widetilde F, P}$. □

Remark 10.82 By Proposition 10.49 and Corollary 10.81 the restriction of an irreducible hyperquadric $\widetilde F$ to a hyperplane $Y$ which does not meet the vertex of $\widetilde F$ is non-degenerate.

Corollary 10.83 (Bertini) Let $\widetilde F \in \mathcal H^p_2(n, K)$ be a non-singular hyperquadric of $\mathbb P^n(K)$ with $n \ge 3$. Then there exists a hyperplane $Y$ of $\mathbb P^n(K)$ such that the restriction of $\widetilde F$ to $Y$ is a non-singular hyperquadric.

Proof By Lemma 10.7 there exists a point $P \notin V_+(\widetilde F)$, so that, by Theorem 10.74, $\pi^{\widetilde F}(P) \ne T^p_{\widetilde F, Q}$ for all $Q \in V_+(\widetilde F)$. Take $Y = \pi^{\widetilde F}(P)$. By Corollary 10.81, every $Q \in V_+(\widetilde F) \cap Y$ is non-singular for the restriction of $\widetilde F$ to $Y$, i.e. the restriction of $\widetilde F$ to $Y$ is non-singular. □

Definition 10.84 We say that a projective basis $P_0, \dots, P_n$ of $\mathbb P^n(K)$ is a self-polar $(n+1)$-hedron for a hyperquadric $\widetilde F$ if

$$\pi^{\widetilde F}(P_i) = P_0 + \dots + P_{i-1} + P_{i+1} + \dots + P_n, \quad i = 0, \dots, n.$$
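As a quick numerical sanity check (the diagonal conic below is an assumed sample, not taken from the text): for a conic of $\mathbb P^2(\mathbb R)$ with diagonal matrix, the standard frame points are pairwise conjugate, i.e. they form a self-polar triangle in the sense of Definition 10.84.

```python
# Sketch: for the (assumed) diagonal conic with matrix C = diag(1, 2, -3),
# the frame points P0 = [1,0,0], P1 = [0,1,0], P2 = [0,0,1] satisfy
# Pi^t C Pj = 0 for i != j, so the polar line of each Pi is spanned by the
# other two points: a self-polar triangle.

C = [[1, 0, 0], [0, 2, 0], [0, 0, -3]]
points = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]

def bilinear(C, P, Q):
    """Evaluate P^t C Q for a symmetric 3x3 matrix C."""
    return sum(C[i][j] * P[i] * Q[j] for i in range(3) for j in range(3))

for i in range(3):
    for j in range(3):
        if i != j:
            assert bilinear(C, points[i], points[j]) == 0  # conjugate pair
        else:
            assert bilinear(C, points[i], points[i]) != 0  # Pi not on the conic
```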

If $n = 2$ we speak of a self-polar triangle and if $n = 3$ of a self-polar tetrahedron. It is immediately seen that, taking a self-polar $(n+1)$-hedron as the vertices of a projective frame, $\widetilde F$ assumes the reduced form of Theorem 10.53 up to a projective isomorphism.

Proposition 10.85 For any hyperquadric $\widetilde F$ of $\mathbb P^n(K)$ there is a self-polar $(n+1)$-hedron for $\widetilde F$.

Proof By Lemma 10.7 there exists a point $P_0 \notin V_+(\widetilde F)$. Let $Y = P^{\widetilde F}_{P_0}$ be its polar hyperplane; since $P_0 \notin V_+(\widetilde F)$, we have $P_0 \notin Y$. If $n = 1$, then $Y = \{P_1\}$, where $P_1 \ne P_0$ and $P_0$ and $P_1$ are conjugate. Therefore $\{P_0, P_1\}$ is a self-polar $2$-hedron for $\widetilde F$.

If $n > 1$, then, using induction, we assume the claim to be true for any hyperquadric of $\mathbb P^{n-1}(K)$. If $Y \subset V_+(\widetilde F)$, we take any independent points $P_1, \dots, P_n \in Y$. Otherwise, $\widetilde F \cap Y$ is a hyperquadric of $Y$ and by the induction hypothesis we have a self-polar $n$-hedron $P_1, \dots, P_n$ for $\widetilde F \cap Y$. Since $P_0 \notin Y$, in both cases $P_0, \dots, P_n$ are independent points, and we take $P_0, \dots, P_n$ as a self-polar $(n+1)$-hedron for $\widetilde F$. Also, in both cases $P_0$ is conjugate to all $P_i$, $i = 1, \dots, n$, because they have been taken in $Y$. In the case $Y \subset V_+(\widetilde F)$ any two points $P_i, P_j$, $i, j = 1, \dots, n$, are conjugate with respect to $\widetilde F$. If $Y \not\subset V_+(\widetilde F)$, by the way they have been chosen, any $P_i, P_j$, $i \ne j$, $i, j = 1, \dots, n$, are conjugate with respect to $\widetilde F \cap Y$, and so they are also conjugate with respect to $\widetilde F$ by Proposition 10.80. Thus in both cases $P_0, \dots, P_n$ is a self-polar $(n+1)$-hedron, as wanted. □

Non-degenerate Ruled Quadrics of $\mathbb P^3(\mathbb C)$ and $\mathbb P^3(\mathbb R)$

First we point out the following elementary fact.

Remark 10.86 Let $\widetilde F$ be a non-degenerate hyperquadric of $\mathbb P^n(K)$ and let $S_h \subset \mathbb P^n(K)$ be a projective subspace of dimension $h$. If $S_h \subset V_+(\widetilde F)$, then every point of $S_h$ belongs to its polar hyperplane, so that $S_h$ is contained in its polar subspace $S_{n-h-1}$. Hence $h \le n - h - 1$, i.e.

$$h \le \left[\frac{n-1}{2}\right],$$

where $[x]$ is the integer part of the real number $x$. Nevertheless, if $K$ is algebraically closed, then there exist infinitely many projective subspaces of maximal dimension $\left[\frac{n-1}{2}\right]$ (see [3]).

Definition 10.87 By Remark 10.65, through any point $P$ of a quadric $\widetilde F$ of $\mathbb P^3(\mathbb C)$ there pass two distinct lines $a$ and $b$ which lie on $\widetilde F$ and on the tangent plane $T^p_{\widetilde F, P}$. Let $\Phi_{\lambda,\mu}(a)$ be the pencil of planes passing through $a$; if $\sigma_{\lambda,\mu}(a)$ is a plane of $\Phi_{\lambda,\mu}(a)$, then $\sigma_{\lambda,\mu}(a) \cap V_+(\widetilde F)$ is a degenerate conic consisting of $a$ and another line $r_{a,\lambda,\mu} \ne a$. The family $R_a := \{r_{a,\lambda,\mu}\}$ is called a ruling of $\widetilde F$. A second ruling $R_b$ is obtained in the same manner.

Let $\widetilde F$ be a non-degenerate quadric of $\mathbb P^3(\mathbb R)$. Suppose that there exists a point $P$ such that

$$T^p_{\widetilde F, P} \cap V_+(\widetilde F) = r \cup s,$$

where $r$ and $s$ are distinct real lines. Then, if $Q$ is any other point of $\widetilde F$, we have

$$T^p_{\widetilde F, Q} \cap V_+(\widetilde F) = r' \cup s',$$

where $r', s'$ are distinct real lines. Indeed, assuming the contrary, if for a point $Q$ the lines $r', s'$ are complex-conjugate lines, then the plane $r + Q$ would meet $V_+(\widetilde F)$ in $r$ and in a real line $t$ passing through $Q$, so that $t \subset T^p_{\widetilde F, Q}$, which contradicts our hypothesis.

Every non-degenerate quadric of $\mathbb P^3(\mathbb R)$ is projectively isomorphic to $\widetilde\Gamma_{3,3}$, $\widetilde\Gamma_{2,3}$ or $\widetilde\Gamma_{1,3}$. It is easily seen that $\widetilde\Gamma_{3,3}$ and $\widetilde\Gamma_{2,3}$ do not contain real lines (for $\widetilde\Gamma_{2,3}$ take the point $P_0 = [1, 0, 0, -1]$). The quadric $\widetilde\Gamma_{1,3}$ is a ruled quadric: indeed, the tangent plane at $P_0 = [1, 0, 1, 0] \in V_+(\widetilde\Gamma_{1,3})$ has the equation $x_0 - x_2 = 0$ and

$$T^p_{\widetilde\Gamma_{1,3}, P_0} \cap V_+(\widetilde\Gamma_{1,3}) = r \cup s,$$

where $r \colon x_0 - x_2 = x_1 - x_3 = 0$ and $s \colon x_0 - x_2 = x_1 + x_3 = 0$. From the equation $x_0^2 - x_2^2 = x_3^2 - x_1^2$ we deduce immediately that all the lines of $\widetilde\Gamma_{1,3}$ are

$$\{r_\lambda\} \colon \begin{cases} x_0 - x_2 = \lambda(x_3 + x_1) \\ \lambda(x_0 + x_2) = x_3 - x_1 \end{cases} \qquad\text{together with}\qquad \begin{cases} x_0 + x_2 = 0 \\ x_3 + x_1 = 0, \end{cases}$$

$$\{s_\mu\} \colon \begin{cases} x_0 - x_2 = \mu(x_3 - x_1) \\ \mu(x_0 + x_2) = x_3 + x_1 \end{cases} \qquad\text{together with}\qquad \begin{cases} x_0 + x_2 = 0 \\ x_3 - x_1 = 0. \end{cases}$$
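A small numerical check (helper names are mine; the quadric equation $x_0^2 + x_1^2 - x_2^2 - x_3^2 = 0$ for $\widetilde\Gamma_{1,3}$ is the one used above): sampled points of the lines $r_\lambda$ and $s_\mu$ do lie on the quadric.

```python
# Sketch (assuming Gamma_{1,3}: x0^2 + x1^2 - x2^2 - x3^2 = 0):
# parametrize a point of r_lambda / s_mu by (x1, x3) and verify it
# satisfies the quadric equation.

def on_quadric(x0, x1, x2, x3):
    return abs(x0**2 + x1**2 - x2**2 - x3**2) < 1e-9

def point_on_r(lam, x1, x3):
    """A point of r_lambda: x0 - x2 = lam*(x3 + x1), lam*(x0 + x2) = x3 - x1."""
    d, s = lam * (x3 + x1), (x3 - x1) / lam   # x0 - x2 and x0 + x2
    return ((d + s) / 2, x1, (s - d) / 2, x3)

def point_on_s(mu, x1, x3):
    """A point of s_mu: x0 - x2 = mu*(x3 - x1), mu*(x0 + x2) = x3 + x1."""
    d, s = mu * (x3 - x1), (x3 + x1) / mu
    return ((d + s) / 2, x1, (s - d) / 2, x3)

for lam in (0.5, 1.0, 2.0):
    for x1, x3 in ((1.0, 2.0), (-3.0, 0.5)):
        assert on_quadric(*point_on_r(lam, x1, x3))
        assert on_quadric(*point_on_s(lam, x1, x3))
```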

Proposition 10.88 Under the above notation and hypotheses we have:

(i) For every point $Q \in V_+(\widetilde F)$ there exists one and only one pair of lines $r_a \in R_a$ and $r_b \in R_b$ such that $Q = r_a \cap r_b$ and

$$T^p_{\widetilde F, Q} \cap V_+(\widetilde F) = r_a \cup r_b.$$

(ii) Two lines $r_1$ and $r_2$ belonging to the same ruling are skew.
(iii) Every line of a ruling meets every line of the other ruling.

Proof
(i) If $Q \notin a \cup b$, then $r_a = (a + Q) \cap V_+(\widetilde F)$ and $r_b = (b + Q) \cap V_+(\widetilde F)$; moreover $r_a \ne r_b$, otherwise $P \in r_a \cap r_b \cap a \cap b$. If $Q \in a$ and $Q \notin b$, then $r_a$ is the residual line of $T^p_{\widetilde F, Q} \cap V_+(\widetilde F)$, which lies in a plane of the pencil $\Phi_{\lambda,\mu}(a)$ (since $a \subset T^p_{\widetilde F, Q}$), and $r_b = a$; if $Q = a \cap b$, then $r_a = b$ and $r_b = a$.
(ii) The lines $r_1$ and $r_2$ lie on two different planes of $\Phi_{\lambda,\mu}(a)$, so that if $r_1 \cap r_2 = Q$, we have $Q \in a$. Hence $\widetilde F$ would be degenerate, because three distinct lines lying on $V_+(\widetilde F)$ would pass through $Q \in V_+(\widetilde F)$.
(iii) Any line $r_a$ meets any plane $\sigma_{\lambda,\mu}(b) \in \Phi_{\lambda,\mu}(b)$ at a point $Q \in V_+(\widetilde F)$, which must lie on $r_b$ ($Q \notin b$ since every $r_a$ is skew with $b$, which belongs to $R_a$). □
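Properties (ii) and (iii) can be verified numerically for $\widetilde\Gamma_{1,3}$ (a sketch with helper names of my own): two lines of $\mathbb P^3$, each given as the intersection of two planes, meet exactly when the $4\times 4$ matrix of the four plane coefficients is singular.

```python
# Sketch: lines of the two rulings of Gamma_{1,3}, each written as a pair of
# plane-coefficient rows on (x0, x1, x2, x3).  Two lines meet iff the combined
# 4x4 system has a nontrivial solution, i.e. its determinant vanishes.

def det(M):
    """Determinant by Laplace expansion along the first row (exact for ints)."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] *
               det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def r_line(lam):   # r_lambda: x0 - x2 = lam(x3 + x1), lam(x0 + x2) = x3 - x1
    return [[1, -lam, -1, -lam], [lam, 1, lam, -1]]

def s_line(mu):    # s_mu: x0 - x2 = mu(x3 - x1), mu(x0 + x2) = x3 + x1
    return [[1, mu, -1, -mu], [mu, -1, mu, -1]]

def lines_meet(l1, l2):
    return det(l1 + l2) == 0

assert not lines_meet(r_line(1), r_line(2))   # same ruling: skew
assert not lines_meet(s_line(1), s_line(3))   # same ruling: skew
assert lines_meet(r_line(1), s_line(2))       # opposite rulings: meet
```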


Involutions Generated by a Polarity

Definition 10.89 Let $\widetilde F$ be a non-degenerate hyperquadric of $\mathbb P^n(K)$ with $n \ge 2$ and let $\pi^{\widetilde F}$ be its polarity. Let $r$ be a line which is not tangent to $\widetilde F$. The involution induced by $\pi^{\widetilde F}$ on $r$ is defined as the map

$$\varphi_r \colon r \to r, \qquad \varphi_r(P) = P^{\widetilde F}_P \cap r, \quad \forall\, P \in r. \tag{10.54}$$

It is clear that $\varphi_r$ is an involution according to Definition 9.32. We may extend the above definition to any form of first kind (Definition 6.65). If $S$ is a subspace of dimension $n-2$ not tangent to $\widetilde F$ and $\Phi_{n-1}(S)$ is the pencil of the hyperplanes passing through $S$ (see Definition 6.65), we may define the involution induced by $\pi^{\widetilde F}$ on $\Phi_{n-1}(S)$:

$$\Phi_S \colon \Phi_{n-1}(S) \to \Phi_{n-1}(S), \qquad \Phi_S(Y) = S + (\pi^{\widetilde F})^*(Y), \quad \forall\, Y \in \Phi_{n-1}(S). \tag{10.55}$$

Proposition 10.90 The maps (10.54) and (10.55) are involutions according to Exercise 9.32. Furthermore:

(a) Either $r \cap V_+(\widetilde F) = \{P_1, P_2\}$, where $P_1$ and $P_2$ are the (distinct) fixed points of $\varphi_r$ (for instance if $K$ is algebraically closed), or $r \cap V_+(\widetilde F) = \emptyset$.
(b) Two distinct points $P$ and $Q$ of $\mathbb P^n(K)$ such that the line $r = PQ$ meets $V_+(\widetilde F)$ at two distinct points $P_1$ and $P_2$ are conjugate if and only if

$$\rho(P_1, P_2, P, Q) = -1. \tag{10.56}$$

(c) Either there exist exactly two hyperplanes passing through $S$ which are tangent to $\widetilde F$ and represent the fixed points of $\Phi_S$ (for instance if $K$ is algebraically closed), or none.
(d) If $\Phi_S$ has the hyperplanes $Y_1$ and $Y_2$ as fixed points, then two distinct hyperplanes $Z_1$ and $Z_2$ are conjugate if and only if

$$\rho(Y_1, Y_2, Z_1, Z_2) = -1. \tag{10.57}$$



Proof First we observe that $r$ is not contained in $P^{\widetilde F}_P$ for any $P \in r$; otherwise $P \in V_+(\widetilde F)$ (being self-conjugate) and $r$ would be tangent to $\widetilde F$ at $P$. Therefore $\varphi_r$ is well-defined. Since every hyperplane $Y \in \Phi_{n-1}(S)$ has its pole $(\pi^{\widetilde F})^*(Y)$ not belonging to $S$ (otherwise $S$ would be tangent to $\widetilde F$ at $(\pi^{\widetilde F})^*(Y)$), the map $\Phi_S$ is also well-defined.

We may suppose that $r$ has equations $x_2 = \dots = x_n = 0$, so that $[x_0, x_1]$ are projective coordinates on $r$. Let $F = X^t C X$, where $C = (c_{ij})$. Since $r$ is not tangent to $\widetilde F$, we have $c_{01}^2 - c_{00} c_{11} \ne 0$. The equation of $P^{\widetilde F}_P$ at $P = [p_0, p_1, 0, \dots, 0]$ is

$$(c_{00} p_0 + c_{10} p_1) x_0 + (c_{01} p_0 + c_{11} p_1) x_1 + \dots + (c_{0n} p_0 + c_{1n} p_1) x_n = 0,$$

so that

$$P^{\widetilde F}_P \cap r = \{[\,c_{01} p_0 + c_{11} p_1,\; -(c_{00} p_0 + c_{10} p_1)\,]\},$$

i.e.

$$\varphi_r([p_0, p_1]) = \begin{pmatrix} c_{01} & c_{11} \\ -c_{00} & -c_{10} \end{pmatrix} \begin{pmatrix} p_0 \\ p_1 \end{pmatrix}.$$

Hence $\varphi_r$ is an involution.

Similarly, we may assume that $S$ has equations $x_0 = x_1 = 0$, so that any hyperplane $Y \in \Phi_{n-1}(S)$ has the equation $\lambda x_0 + \mu x_1 = 0$ and $[\lambda, \mu]$ are projective coordinates on $\Phi_{n-1}(S)$. Let $C^{-1} = (d_{ij})$. Since $S$ is not tangent to $\widetilde F$, the poles of the hyperplanes $Y \in \Phi_{n-1}(S)$ do not lie on $S$. Since

$$(\pi^{\widetilde F})^*(Y) = [\lambda, \mu, 0, \dots, 0]\, C^{-1} = [\lambda d_{00} + \mu d_{10},\ \lambda d_{01} + \mu d_{11},\ \dots,\ \lambda d_{0n} + \mu d_{1n}], \tag{10.58}$$

the condition $(\pi^{\widetilde F})^*(Y) \in Y$ is equivalent to the equation

$$d_{00} \lambda^2 + 2 d_{01} \lambda\mu + d_{11} \mu^2 = 0, \tag{10.59}$$

which does not vanish identically (if $d_{00} = d_{01} = d_{11} = 0$, then $(\pi^{\widetilde F})^*(Y) \in S$). If $d_{01}^2 - d_{00} d_{11} = 0$, then $d_{00} \ne 0$ (otherwise $d_{01} = d_{11} = 0$), hence $\lambda = -\frac{d_{01}}{d_{00}}$ and $\mu = 1$. Therefore from (10.58) we get

$$(\pi^{\widetilde F})^*(Y) = [0, 0, \lambda d_{02} + \mu d_{12}, \dots, \lambda d_{0n} + \mu d_{1n}] \in S,$$

a contradiction, so that $d_{01}^2 - d_{00} d_{11} \ne 0$ and Eq. (10.59) has two distinct roots. The hyperplane $S + (\pi^{\widetilde F})^*(Y)$ has the equation

$$\begin{vmatrix} x_0 & x_1 & x_2 & x_3 & \cdots & x_n \\ \lambda d_{00} + \mu d_{10} & \lambda d_{01} + \mu d_{11} & \lambda d_{02} + \mu d_{12} & \lambda d_{03} + \mu d_{13} & \cdots & \lambda d_{0n} + \mu d_{1n} \\ 0 & 0 & 1 & 0 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & \cdots & 1 \end{vmatrix} = 0,$$

i.e. $(\lambda d_{01} + \mu d_{11}) x_0 - (\lambda d_{00} + \mu d_{10}) x_1 = 0$. Therefore we get the relation

$$\Phi_S([\lambda, \mu]) = \begin{pmatrix} d_{01} & d_{11} \\ -d_{00} & -d_{01} \end{pmatrix} \begin{pmatrix} \lambda \\ \mu \end{pmatrix},$$

which shows that $\Phi_S$ is an involution. The assertions (a), (b), (c) and (d) easily follow from Exercise 9.32. □

Remark 10.91 If in Definition 10.89 we take $n = 2$, then $S$ is a point $P$ which is not self-conjugate, i.e. $P \notin V_+(\widetilde F)$.
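As a sanity check of the computation above (the coefficients below are an assumed sample): the matrix of $\varphi_r$, with $c_{10} = c_{01}$ by symmetry, squares to the scalar matrix $(c_{01}^2 - c_{00} c_{11}) I$, so $\varphi_r \circ \varphi_r = \mathrm{id}$ on the projective line.

```python
# Sketch: M = [[c01, c11], [-c00, -c01]] is the matrix of phi_r derived above
# (using c10 = c01 for a symmetric form).  M^2 is a scalar matrix, hence M
# induces an involution of P^1.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

c00, c01, c11 = 2, 3, -5          # assumed sample values, c01^2 - c00*c11 != 0
M = [[c01, c11], [-c00, -c01]]
M2 = matmul(M, M)
scale = c01**2 - c00 * c11
assert M2 == [[scale, 0], [0, scale]]   # scalar matrix => identity in PGL_1
```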


Remark 10.92 To a non-degenerate conic $\widetilde F \colon a_{00} x_0^2 + 2 a_{01} x_0 x_1 + a_{11} x_1^2$ of $\mathbb P^1(K)$ we can associate the polarity $\pi^{\widetilde F} \colon \mathbb P^1(K) \to \mathbb P^1(K)^*$ defined by

$$[x_0, x_1] \mapsto [u_0, u_1] = [a_{00} x_0 + a_{01} x_1,\ a_{01} x_0 + a_{11} x_1].$$

The point coordinates of the "line" $[u_0, u_1]$ are $[u_1, -u_0]$, just because $u_0 u_1 - u_1 u_0 = 0$; therefore the conjugate point of $[x_0, x_1]$ has coordinates $[a_{01} x_0 + a_{11} x_1,\ -(a_{00} x_0 + a_{01} x_1)]$, so that the map $\mathbb P^1(K) \to \mathbb P^1(K)$ defined by

$$[x_0, x_1] \mapsto [a_{01} x_0 + a_{11} x_1,\ -(a_{00} x_0 + a_{01} x_1)] = \begin{pmatrix} a_{01} & a_{11} \\ -a_{00} & -a_{01} \end{pmatrix} \begin{pmatrix} x_0 \\ x_1 \end{pmatrix}$$

is the involution of conjugation. Let $\overline K$ be an algebraic closure of $K$ and let $j \colon \mathbb P^1(K) \to \mathbb P^1(\overline K)$ be the natural inclusion; then $P, Q \in \mathbb P^1(K)$ are conjugate with respect to $\widetilde F$ if and only if

$$\rho(A, B, P, Q) = -1,$$

where $\{A, B\} = V_+(\widetilde F) \subset \mathbb P^1(\overline K)$.
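Numerically (the quadratic form below is an assumed sample, not from the text): pairs of points exchanged by this involution have cross-ratio $-1$ with the zeros $A$, $B$ of the form.

```python
# Sketch: F = 10*x0^2 - 7*x0*x1 + x1^2, i.e. a00 = 10, a01 = -7/2, a11 = 1.
# In the affine chart t = x1/x0 its zeros are A = 2 and B = 5, and the
# conjugation involution is t -> -(a00 + a01*t) / (a01 + a11*t).

a00, a01, a11 = 10.0, -3.5, 1.0
A, B = 2.0, 5.0                      # roots of a11*t^2 + 2*a01*t + a00

def conjugate(t):
    # affine form of [x0, x1] -> [a01*x0 + a11*x1, -(a00*x0 + a01*x1)]
    return -(a00 + a01 * t) / (a01 + a11 * t)

def cross_ratio(a, b, c, d):
    return ((c - a) * (d - b)) / ((c - b) * (d - a))

for p in (0.0, 1.0, 7.0, -4.0):
    q = conjugate(p)
    assert abs(cross_ratio(A, B, p, q) + 1.0) < 1e-9   # rho(A, B, P, Q) = -1
```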

10.3.1 Affine and Euclidean Geometry in a Projective Setting

Several affine (or euclidean) definitions and properties of a hyperquadric $\widetilde f$ can be rewritten in terms of the projective closure $\widetilde{f^{x_0}}$ (with respect to the hyperplane at infinity $Y_0 \colon x_0 = 0$), as well as of the complexifications of $\widetilde f$ and $\widetilde{f^{x_0}}$. This is a very fruitful approach, as shown, e.g., in Exercise 10.19. We wish to give a projective version of some concepts which occur in the theory of hyperquadrics of $\mathbb A^n(\mathbb R)$ or $\mathbb E^n(\mathbb R)$. We shall follow [12].

Let us fix some notation. Let $\widetilde f$ be a non-degenerate hyperquadric of $\mathbb A^n(\mathbb R)$, with associated matrices

$$C_f = \begin{pmatrix} c_{11} & c_{12} & \cdots & c_{1n} \\ c_{21} & c_{22} & \cdots & c_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ c_{n1} & c_{n2} & \cdots & c_{nn} \end{pmatrix}, \qquad \overline C_f = \begin{pmatrix} c & c_1 & c_2 & \cdots & c_n \\ c_1 & c_{11} & c_{12} & \cdots & c_{1n} \\ c_2 & c_{21} & c_{22} & \cdots & c_{2n} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ c_n & c_{n1} & c_{n2} & \cdots & c_{nn} \end{pmatrix}.$$

Then the projective closure $\widetilde{f^{x_0}}$ (with respect to the hyperplane at infinity $Y_0 \colon x_0 = 0$) is associated to the matrix

$$\overline C^{\,p}_f := \begin{pmatrix} c_{00} & c_{01} & c_{02} & \cdots & c_{0n} \\ c_{10} & c_{11} & c_{12} & \cdots & c_{1n} \\ c_{20} & c_{21} & c_{22} & \cdots & c_{2n} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ c_{n0} & c_{n1} & c_{n2} & \cdots & c_{nn} \end{pmatrix}, \qquad c_{00} = c, \quad c_{0i} = c_i,\ i = 1, \dots, n.$$

The hyperquadric at infinity of $\widetilde f$ is $\widetilde f_\infty := \widetilde{f^{x_0}} \cap Y_0$.

Definition 10.93 (Centre of a Hyperquadric) If $\widetilde f$ is a non-degenerate quadric with centre $P = (p_1, \dots, p_n)$, the point $\overline P = [1, p_1, \dots, p_n]$ coincides with the pole of $Y_0$. Indeed,

$$\overline C^{\,p}_f\, \overline P^{\,t} = (1, \mathbf 0)^t, \qquad \mathbf 0 = (0, \dots, 0),$$

by Proposition 5.31. The pole $Q = [q_0, q_1, \dots, q_n]$ of $Y_0$ is improper, that is $q_0 = 0$, if and only if $\widetilde f$ is a paraboloid. Indeed, since

$$(\overline C^{\,p}_f)^{-1} = \begin{pmatrix} \dfrac{\det(C_f)}{\det(\overline C_f)} & \cdots & \cdots \\ \vdots & \ddots & \vdots \\ \cdots & \cdots & \cdots \end{pmatrix},$$

we have $q_0 = 0$ if and only if $\det(C_f) = 0$, i.e. if and only if $\widetilde f$ is a paraboloid; in that case $(\overline C^{\,p}_f)^{-1}(1, \mathbf 0)^t = (0, q_1, \dots, q_n)^t$. Therefore $\widetilde{f^{x_0}}$ is tangent to $Y_0$ at the point $Q$, which may be called the improper centre of the paraboloid.

We can complete our classification of affine hyperquadrics of $\mathbb A^n(\mathbb R)$ (see Definition 5.46). A hyperquadric $\widetilde f$ is called:

(a) a hyperboloid if it is non-degenerate with centre and $\widetilde f_\infty$ is a non-degenerate quadric of $Y_0$ with non-empty support;
(b) an ellipsoid if it is non-degenerate with centre and $\widetilde f_\infty$ is a non-degenerate quadric of $Y_0$ with empty support.

Suppose that the point $\overline P = [1, p_1, \dots, p_n]$ is the pole of the hyperplane at infinity $Y_0$. Then any line $r$ passing through $P = (p_1, \dots, p_n)$ meets $V(\widetilde f)$ in two distinct points $P_1$, $P_2$ or in none. Indeed, if $r \cap V(\widetilde f) = \{Q\}$, then $r$, being tangent to $\widetilde f$, would be contained in the polar hyperplane of $Q$; therefore, by reciprocity, $Q$ would belong to the polar hyperplane $Y_0$ of $\overline P$, which is absurd. Furthermore $\varphi_r(\overline P) = Y_0 \cap r$ (the improper point of $r$), where $\varphi_r$ is the involution induced by $\pi^{\widetilde F}$ on $r$; hence

$$\rho(P_1, P_2, \overline P, \varphi_r(\overline P)) = -1,$$


i.e. $P$ is the midpoint of the segment $\{P_1, P_2\}$ (see Definition 9.26), so that $P$ is the centre according to Definition 5.30. All that was said above clearly holds true over any field of characteristic $\ne 2$.

Definition 10.94 (Diametral Hyperplanes, Axes and Asymptotes of a Hyperquadric) An affine hyperplane is called a diametral hyperplane of a non-degenerate hyperquadric $\widetilde f$ (a diametral plane if $n = 3$, a diameter if $n = 2$) if its projective closure is the polar hyperplane of a point of the hyperplane at infinity $Y_0$. Every line (not lying on $Y_0$) passing through the centre (proper or improper) is called a diameter of $\widetilde f$ (if $n = 2$ the notions of diametral hyperplane and diameter coincide). The diameters tangent to $\widetilde{f^{x_0}}$ are called the asymptotes of $\widetilde f$. If $\widetilde f$ has a (proper) centre $C$ and $d$ is a diameter tangent to $\widetilde{f^{x_0}}$ at a point $P$, then $d$ is contained in the polar hyperplane $P^{\widetilde{f^{x_0}}}_P$, so that $P \in P^{\widetilde{f^{x_0}}}_C = Y_0$ by reciprocity. Therefore an asymptote is tangent to $\widetilde{f^{x_0}}$ at a point of the improper locus $\widetilde f_\infty := \widetilde{f^{x_0}} \cap Y_0$ of $\widetilde f$. For this reason the set of all lines passing through $C$ and tangent to $\widetilde{f^{x_0}}$, which is the cone circumscribed to $\widetilde{f^{x_0}}$ from $C$ (see Exercise 10.7), is called the asymptotic cone of $\widetilde f$. If $P$ is the centre of the hyperquadric, then a hyperplane is diametral if and only if its projective closure passes through the centre $\overline P$, by reciprocity. If $\widetilde f$ is a paraboloid, all diametral hyperplanes are parallel to one line, because their projective closures pass through the improper centre $Q$ of $\widetilde f$ (so that all their orienting subspaces contain the 1-dimensional vector subspace $V_Q$).

In order to define the principal diametral hyperplanes (for hyperquadrics of $\mathbb E^n(\mathbb R)$) we need the following:

Proposition 10.95 Let $K$ be any field of characteristic $\ne 2$. Let $\widetilde f$ be a non-degenerate quadric of $\mathbb A^n(K)$, let $Y$ be a diametral hyperplane and let $P = [0, p_1, \dots, p_n] \in \mathbb P^n(K)$ be the pole of $\overline Y$ with respect to $\widetilde{f^{x_0}}$. Then:

(a) $P \in \overline Y$ if and only if the vector $v = (p_1, \dots, p_n)$ is parallel to $Y$.
(b) Suppose $P \notin \overline Y$; then

$$s_{Y, \langle v \rangle}(V(\widetilde f)) = V(\widetilde f),$$

where $s_{Y, \langle v \rangle}$ is the symmetry through $Y$ parallel to the vector subspace $\langle v \rangle$ (Definition 3.82).

Proof
(a) Let $Y \colon a_1 x_1 + \dots + a_n x_n + b = 0$; then $\overline Y$ has the equation $b x_0 + a_1 x_1 + \dots + a_n x_n = 0$. Therefore $P \in \overline Y$ if and only if $a_1 p_1 + \dots + a_n p_n = 0$, i.e. if and only if $v$ is parallel to $Y$.
(b) Since $P \notin \overline Y$, by part (a) we can take affine coordinates $x_1, \dots, x_n$ such that $v = (0, 0, \dots, 0, 1)$ and $Y \colon x_n = 0$. With respect to this system of coordinates we have

$$s_{Y, \langle v \rangle}(x_1, \dots, x_{n-1}, x_n) = (x_1, \dots, x_{n-1}, -x_n).$$


Since the polar hyperplane of $[0, \dots, 0, 1]$ has the equation $x_n = 0$, i.e.

$$\begin{pmatrix} c_{00} & c_{01} & c_{02} & \cdots & c_{0n} \\ c_{10} & c_{11} & c_{12} & \cdots & c_{1n} \\ c_{20} & c_{21} & c_{22} & \cdots & c_{2n} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ c_{n0} & c_{n1} & c_{n2} & \cdots & c_{nn} \end{pmatrix} \begin{pmatrix} 0 \\ 0 \\ \vdots \\ 0 \\ 1 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ \vdots \\ 0 \\ 1 \end{pmatrix},$$

we get $c_{in} = 0$ for $i = 0, \dots, n-1$. Therefore

$$f(x_1, \dots, x_n) = \sum_{i,j=0}^{n-1} c_{ij}\, x_i x_j + c_{nn} x_n^2,$$

which implies $s_{Y, \langle v \rangle} \cdot \widetilde f = \widetilde f$. □
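Proposition 10.95(b) can be checked on a concrete example (the ellipse below is an assumed sample, not from the text): for $f = x^2/4 + y^2 - 1$, the line $Y \colon x = 0$ is the diameter polar to the improper point $[0, 1, 0]$, the pole direction $v = (1, 0)$ is not parallel to $Y$, and the symmetry through $Y$ parallel to $v$ preserves the support of $f$.

```python
# Sketch: points of the ellipse x^2/4 + y^2 = 1 and their images under the
# symmetry (x, y) -> (-x, y) (through Y: x = 0, parallel to v = (1, 0))
# both satisfy the equation, illustrating Proposition 10.95(b).
import math

def f(x, y):
    return x * x / 4 + y * y - 1

def sym(x, y):          # symmetry through Y: x = 0 parallel to v = (1, 0)
    return (-x, y)

for t in (0.0, 0.7, 2.1, 3.9):
    p = (2 * math.cos(t), math.sin(t))   # a point of the ellipse
    assert abs(f(*p)) < 1e-12
    assert abs(f(*sym(*p))) < 1e-12      # its mirror image is on it too
```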

If the diametral hyperplane $Y$ relative to a point $P = [0, p_1, \dots, p_n] \in Y_0$ is orthogonal to the vector $(p_1, \dots, p_n)$, then $Y$ is called a principal diametral hyperplane of $\widetilde f$ (see Definition 5.52 and Corollary 5.54). Moreover, we define an axis of $\widetilde f$ as every line of $\mathbb E^n(\mathbb R)$ which is an intersection of principal diametral hyperplanes (if $n = 2$ the notions of principal diametral hyperplane and axis coincide), and a vertex of a non-degenerate hyperquadric $\widetilde f$ as every point where $\widetilde f$ intersects an axis.

The following proposition asserts that the notions of principal diametral hyperplane of Definitions 5.52 and 10.94 coincide.

Proposition 10.96 Let $Y$ be a hyperplane of $\mathbb E^n(\mathbb R)$ and let $s_Y$ be the orthogonal symmetry through $Y$.

(a) If the projective closure $\overline Y$ is tangent to $\widetilde{f^{x_0}}$, then $s_Y \cdot \widetilde f \ne \widetilde f$.
(b) A hyperplane $Y$ is a principal diametral hyperplane of $\widetilde f$ if and only if the orthogonal symmetry $s_Y$ through $Y$ preserves $\widetilde f$, i.e. $s_Y \cdot \widetilde f = \widetilde f$.

Proof
(a) Let $\overline Y$ be tangent to $\widetilde{f^{x_0}}$ at a point $\overline P$. If $\overline P$ is a proper point, we may suppose that $P = (0, \dots, 0)$ and that $Y$ is defined by the equation $x_1 = 0$ (up to an isometry). With respect to this system of coordinates we have

$$f(x_1, \dots, x_n) = x_1 + g(x_1, \dots, x_n),$$

where $g$ is a homogeneous polynomial of degree 2 (indeed $c = c_2 = \dots = c_n = 0$ and $c_1 = 1$). The orthogonal reflection through $Y$ is given by

$$s_Y(x_1, \dots, x_n) = (-x_1, x_2, \dots, x_n),$$


so that $s_Y \cdot f$ is defined by

$$(s_Y \cdot f)(x_1, \dots, x_n) = -x_1 + g(-x_1, x_2, \dots, x_n).$$

Suppose by contradiction that $s_Y \cdot \widetilde f = \widetilde f$. Then there exists $\lambda \in \mathbb R$ such that $f(-x_1, x_2, \dots, x_n) = \lambda f(x_1, \dots, x_n)$. Since $f(-x_1, x_2, \dots, x_n) = -x_1 + g(-x_1, x_2, \dots, x_n)$, we must have $\lambda = -1$ and $g(-x_1, x_2, \dots, x_n) = -g(x_1, \dots, x_n)$. Therefore each monomial appearing in $g$ is of type $x_1 x_j$ for some $2 \le j \le n$. Hence $g$ is divisible by $x_1$, and the same is true for $f$. Since $\widetilde f$ is non-degenerate, this is impossible.

Let $\overline P$ be an improper point. We can always suppose, as above, that $Y$ is the hyperplane of equation $x_1 = 0$ and that $\overline P = [0, \dots, 0, 1]$. In this system of coordinates we have

$$f(x_1, \dots, x_n) = c + g(x_1, \dots, x_n) + h(x_1, \dots, x_n),$$

where $c \in \mathbb R$, the polynomial $g$ is either zero or homogeneous of degree 1, and $h(x_1, \dots, x_n) = x_1 x_n + r(x_1, \dots, x_{n-1})$, where $r$ is a homogeneous polynomial of degree 2 in $x_1, \dots, x_{n-1}$ (indeed $c_n = c_{2n} = \dots = c_{nn} = 0$ and $c_{1n} = 1$). In this case $s_Y \cdot f$ is defined by

$$f(-x_1, x_2, \dots, x_n) = c - x_1 x_n + g(-x_1, x_2, \dots, x_n) + r(-x_1, x_2, \dots, x_{n-1}).$$

Then, as in the previous case, if $s_Y \cdot \widetilde f = \widetilde f$ then $x_1$ divides $f$, against the hypothesis that $\widetilde f$ is non-degenerate.

(b) Let $v = (v_1, \dots, v_n) \in \mathbb R^n \setminus \{0\}$ be a vector orthogonal to $Y$, and let $R = [0, v_1, \dots, v_n] \in \mathbb P^n(\mathbb R)$ be the point at infinity corresponding to the direction $v$. Of course we have $R \notin \overline Y$. Moreover, $Y$ is a principal hyperplane if and only if $R$ is the pole of $\overline Y$ with respect to $\widetilde{f^{x_0}}$. If $Y$ is a principal hyperplane, then $s_Y \cdot \widetilde f = \widetilde f$ by Proposition 10.95. Conversely, suppose that $s_Y \cdot \widetilde f = \widetilde f$ and let $P$ be the pole of $\overline Y$. Let $\overline s_Y \colon \mathbb P^n(\mathbb R) \to \mathbb P^n(\mathbb R)$ be the projectivity induced by $s_Y$ (Proposition 9.13). Let $x = (x_1, \dots, x_n)$, $b = (b_1, \dots, b_n)$, $A = (a_{ij})_{i,j = 1, \dots, n}$ and $s_Y(x) = A x^t + b^t$. Since $v$ is orthogonal to $Y$, we have $A v^t = k v$ for some $k \in \mathbb R^*$, therefore

$$\begin{pmatrix} 1 & 0 & \cdots & 0 \\ b_1 & a_{11} & \cdots & a_{1n} \\ \vdots & \vdots & \ddots & \vdots \\ b_n & a_{n1} & \cdots & a_{nn} \end{pmatrix} \begin{pmatrix} 0 \\ v_1 \\ \vdots \\ v_n \end{pmatrix} = \begin{pmatrix} 0 \\ k v_1 \\ \vdots \\ k v_n \end{pmatrix},$$

so that $\overline s_Y(R) = R$; hence the set $\overline Y \cup \{R\}$ is the fixed locus of $\overline s_Y$. Since $\overline s_Y \cdot \widetilde{f^{x_0}} = \widetilde{f^{x_0}}$ we also have $\overline s_Y(P) = P$. By (a) the projective hyperplane $\overline Y$ is


not tangent to $\widetilde{f^{x_0}}$, so that the point $P$ does not belong to $\overline Y$; hence $P = R$ and $Y$ is a principal hyperplane. □

Complexification

Let

$$\sigma_{\mathbb C} \colon \mathbb C^n \to \mathbb C^n, \qquad \sigma_{\mathbb C}(z_1, \dots, z_n) = (\overline z_1, \dots, \overline z_n), \tag{10.60}$$

where $\overline z_j$ is the complex conjugate of $z_j$, $j = 1, \dots, n$. For every polynomial (homogeneous or not)

$$f = \sum_{i_1 \ge 0, \dots, i_n \ge 0} a_{i_1, \dots, i_n} X_1^{i_1} \cdots X_n^{i_n} \in \mathbb C[X_1, \dots, X_n]$$

we define the conjugation $\sigma_{\mathbb C}$ as follows:

$$\sigma_{\mathbb C}(f) := \sum_{i_1 \ge 0, \dots, i_n \ge 0} \overline{a_{i_1, \dots, i_n}}\, X_1^{i_1} \cdots X_n^{i_n} \in \mathbb C[X_1, \dots, X_n], \tag{10.61}$$

obtained from $f$ by conjugating each coefficient. For any polynomial $f \in \mathbb C[X_1, \dots, X_n]$ the affine hypersurface $\sigma_{\mathbb C}(\widetilde f) := \widetilde{\sigma_{\mathbb C}(f)}$ is called the complex conjugate of the affine hypersurface $\widetilde f$. If $\widetilde f \in \mathcal H_d(n, \mathbb C)$, the intersection $V_{\mathbb R}(\widetilde f) := V(\widetilde f) \cap \mathbb A^n(\mathbb R)$ is the set of the real points of $\widetilde f$. Analogously, for any homogeneous polynomial $F \in \mathbb C[X_0, \dots, X_n]$ the projective hypersurface $\sigma_{\mathbb C}(\widetilde F) := \widetilde{\sigma_{\mathbb C}(F)}$ is called the complex conjugate of the projective hypersurface $\widetilde F$. If $\widetilde F \in \mathcal H^p_d(n, \mathbb C)$, the intersection $V_{\mathbb R,+}(\widetilde F) := V_+(\widetilde F) \cap \mathbb P^n(\mathbb R)$ is the set of the real points of $\widetilde F$.

If $f \in \mathbb R[X_1, \dots, X_n]_d$ with $d \ge 1$, then we can consider both the equivalence class $\widetilde f$ in $\mathcal H_d(n, \mathbb R)$ and the equivalence class $\widetilde f_{\mathbb C}$ in $\mathcal H_d(n, \mathbb C)$. We call the hypersurface $\widetilde f_{\mathbb C}$ the complexification of $\widetilde f$. Since every affine subspace $S$ of dimension $k$ in $\mathbb A^n(\mathbb R)$ is the intersection of $n - k$ independent hyperplanes $Y_i = \widetilde f_i$, $i = 1, \dots, n-k$, its complexification $S_{\mathbb C}$ is an affine subspace of dimension $k$ which is the intersection of the $n - k$ independent hyperplanes $(\widetilde f_i)_{\mathbb C}$. Any affinity of $\mathbb A^n(\mathbb R)$ trivially extends to an affinity of $\mathbb A^n(\mathbb C)$. Conversely, an affinity $\varphi(X) = M X^t + N^t$ (with $X = (x_1, \dots, x_n) \in \mathbb C^n$, $M \in \operatorname{GL}_n(\mathbb C)$ and $N = (b_1, \dots, b_n) \in \mathbb C^n$) maps points of $\mathbb A^n(\mathbb R)$ to points of $\mathbb A^n(\mathbb R)$ if and only if $M \in \operatorname{GL}_n(\mathbb R)$ and $N \in \mathbb R^n$. In this case the restriction of $\varphi$ to $\mathbb A^n(\mathbb R)$ is an affinity of $\mathbb A^n(\mathbb R)$. It is immediately seen that:

1. A hyperquadric $\widetilde f$ of $\mathbb A^n(\mathbb R)$ is non-degenerate if and only if $\widetilde f_{\mathbb C}$ is a non-degenerate hyperquadric of $\mathbb A^n(\mathbb C)$.
2. A non-degenerate hyperquadric $\widetilde f$ of $\mathbb A^n(\mathbb R)$ has a centre if and only if $\widetilde f_{\mathbb C}$ has a centre; moreover the two centres coincide.
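The coefficient-wise conjugation (10.61) is easy to experiment with. The sketch below (an ad-hoc dictionary representation of polynomials, with helper names of my own) also illustrates the remark that follows: $x^2 + y^2$, irreducible over $\mathbb R$, splits over $\mathbb C$ as $(x + iy)(x - iy)$, and $\sigma_{\mathbb C}$ exchanges the two factors.

```python
# Sketch: a polynomial in x, y is stored as {(i, j): coefficient} for the
# monomial x^i * y^j.  sigma_C conjugates every coefficient (eq. 10.61).

def conj_poly(p):
    return {e: c.conjugate() for e, c in p.items()}

def mul(p, q):
    out = {}
    for e1, c1 in p.items():
        for e2, c2 in q.items():
            e = (e1[0] + e2[0], e1[1] + e2[1])
            out[e] = out.get(e, 0) + c1 * c2
    return {e: c for e, c in out.items() if c != 0}

f1 = {(1, 0): 1, (0, 1): 1j}     # x + i*y
f2 = {(1, 0): 1, (0, 1): -1j}    # x - i*y

assert mul(f1, f2) == {(2, 0): 1, (0, 2): 1}   # (x+iy)(x-iy) = x^2 + y^2
assert conj_poly(f1) == f2                      # sigma_C swaps the factors
```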


We note that the curve of equation $x^2 + y^2 = 0$ is reducible when regarded as a complex curve, while it is irreducible when regarded as a real curve. If $\widetilde f \in \mathcal H_d(n, \mathbb R)$, the subset $V(\widetilde f_{\mathbb C}) \setminus V(\widetilde f)$ of $\mathbb A^n(\mathbb C)$ is the set of complex-conjugate points of $\widetilde f$.

Analogous considerations can be made in the projective case. If $F \in {}^h\mathbb R[X_0, \dots, X_n]^*_d$ with $d \ge 1$ (see Definition 10.1), then we can consider both the equivalence class $\widetilde F$ in $\mathcal H^p_d(n, \mathbb R)$ and the equivalence class $\widetilde F_{\mathbb C}$ in $\mathcal H^p_d(n, \mathbb C)$. We call the hypersurface $\widetilde F_{\mathbb C}$ the complexification of $\widetilde F$. Since every projective subspace $S$ of dimension $k$ in $\mathbb P^n(\mathbb R)$ is the intersection of $n - k$ independent hyperplanes $Y_i = \widetilde F_i$, $i = 1, \dots, n-k$, its complexification $S_{\mathbb C}$ is a projective subspace of dimension $k$ which is the intersection of the $n - k$ independent hyperplanes $(\widetilde F_i)_{\mathbb C}$. It is immediate to see that a hyperquadric $\widetilde F$ of $\mathbb P^n(\mathbb R)$ is non-degenerate if and only if $\widetilde F_{\mathbb C}$ is a non-degenerate hyperquadric of $\mathbb P^n(\mathbb C)$. Every projectivity of $\mathbb P^n(\mathbb R)$ trivially extends to a projectivity of $\mathbb P^n(\mathbb C)$. Conversely, a projectivity $\phi$ of $\mathbb P^n(\mathbb C)$ represented by a matrix $A \in \operatorname{GL}_{n+1}(\mathbb C)$ transforms points of $\mathbb P^n(\mathbb R)$ into points of $\mathbb P^n(\mathbb R)$ if and only if there exists $k \in \mathbb C^*$ such that $k A \in \operatorname{GL}_{n+1}(\mathbb R)$, so that the restriction of $\phi$ to $\mathbb P^n(\mathbb R)$ is a projectivity of $\mathbb P^n(\mathbb R)$. If $\widetilde F \in \mathcal H^p_d(n, \mathbb R)$, the subset $V_+(\widetilde F_{\mathbb C}) \setminus V_+(\widetilde F)$ of $\mathbb P^n(\mathbb C)$ is the set of complex-conjugate points of $\widetilde F$.

Lemma 10.97 A hypersurface $\widetilde F$ of $\mathbb P^n(\mathbb C)$ is the complexification $\widetilde G_{\mathbb C}$ of a hypersurface $\widetilde G$ of $\mathbb P^n(\mathbb R)$ if and only if $\sigma_{\mathbb C}(\widetilde F) = \widetilde F$.

Proof If $\widetilde F = \widetilde G_{\mathbb C}$, then we can take $F = G \in {}^h\mathbb R[X_0, \dots, X_n]^*_d$, so that $\sigma_{\mathbb C}(F) = F$ and $\sigma_{\mathbb C}(\widetilde F) = \widetilde F$. Conversely, let $\sigma_{\mathbb C}(\widetilde F) = \widetilde F$ with $F \in \mathbb C[z_0, \dots, z_n]$ homogeneous. The conjugate hypersurface is defined by $\sigma_{\mathbb C}(F) = 0$, so there exists $k \in \mathbb C^*$ such that $\sigma_{\mathbb C}(F) = k F$. Write $z = (z_0, \dots, z_n)$ and $F(z) = G(z) + i H(z)$ with $G, H \in \mathbb R[z_0, \dots, z_n]$ homogeneous.
Since $F \ne 0$, up to replacing $F$ by $iF$ we may assume $G \ne 0$, so that $G$ defines a real hypersurface $\widetilde G$. Therefore we have

$$G(z) = \frac{1}{2}\bigl(F(z) + \sigma_{\mathbb C}(F)(z)\bigr) = \frac{1+k}{2}\, F(z),$$

with $k \ne -1$, so that $\widetilde F = \widetilde G_{\mathbb C}$. □

Lemma 10.98 Let $r$ be a line of $\mathbb P^2(\mathbb C)$.

(a) If $r = \sigma_{\mathbb C}(r)$, then $r$ contains infinitely many real points.
(b) If $r \ne \sigma_{\mathbb C}(r)$, then the point $P = r \cap \sigma_{\mathbb C}(r)$ is the only real point of $r$.

Proof
(a) If $r = \sigma_{\mathbb C}(r)$, by Lemma 10.97 the line $r$ is the complexification of a real line $s$, so it contains infinitely many real points.
(b) If $r \ne \sigma_{\mathbb C}(r)$, then the point $P = r \cap \sigma_{\mathbb C}(r)$ is a real point, since $\sigma_{\mathbb C}(P) \in \sigma_{\mathbb C}(r) \cap \sigma_{\mathbb C}(\sigma_{\mathbb C}(r)) = \sigma_{\mathbb C}(r) \cap r$, so that $\sigma_{\mathbb C}(P) = P$. If $Q \in r$ is a real point, then $Q = \sigma_{\mathbb C}(Q) \in r \cap \sigma_{\mathbb C}(r) = \{P\}$. □


Remark 10.99 Let $\widetilde F$ be a non-degenerate quadric of $\mathbb P^3(\mathbb R)$ and let $\widetilde F_{\mathbb C}$ be its complexification. Suppose that $\widetilde F$ is not ruled and $V_+(\widetilde F) \ne \emptyset$. Consider the rulings $R_a$ and $R_b$ of $\widetilde F_{\mathbb C}$. The conjugation $\sigma_{\mathbb C}$ preserves incidence relations, so that $\sigma_{\mathbb C}$ either transforms each ruling into itself or switches $R_a$ and $R_b$. Let $P \in V_+(\widetilde F)$. Then $\sigma_{\mathbb C}(T^p_{\widetilde F_{\mathbb C}, P}) = T^p_{\widetilde F_{\mathbb C}, P}$, where $\sigma_{\mathbb C}$ is the conjugation defined by (10.61). Therefore $\sigma_{\mathbb C}$ preserves the degenerate conic $T^p_{\widetilde F_{\mathbb C}, P} \cap V_+(\widetilde F_{\mathbb C})$, consisting of two complex-conjugate lines $l_1$ and $l_2$. Hence $\sigma_{\mathbb C}(l_1) = l_2$ and $\sigma_{\mathbb C}(l_2) = l_1$, i.e. $\sigma_{\mathbb C}$ switches $R_a$ and $R_b$.

Definition 10.100 Let $\widetilde F$ be a non-degenerate quadric of $\mathbb P^3(\mathbb R)$ and let $r$ be a line not contained in $V_+(\widetilde F)$. We say that $r$ is secant to $\widetilde F$ if $r \cap V_+(\widetilde F)$ consists of two distinct real points, and that $r$ is external to $\widetilde F$ if $r_{\mathbb C} \cap V_+(\widetilde F_{\mathbb C})$ consists of two complex-conjugate points.

The source of the following proposition is Exercise 185 of [12].

Proposition 10.101 Let $\widetilde F$ be a non-degenerate quadric of $\mathbb P^3(\mathbb R)$ and let $r$ be a line not contained in $V_+(\widetilde F)$. Let $r^{\widetilde F}$ be the polar line of $r$. If $\widetilde F$ is ruled, then:

(a) $r$ is secant if and only if $r^{\widetilde F}$ is secant.
(b) There exist two planes of $\mathbb P^3(\mathbb R)$ that pass through $r$ and are tangent to $\widetilde F$ if and only if $r$ is secant.

If $\widetilde F$ is not ruled and $V_+(\widetilde F) \ne \emptyset$, then:

(c) $r$ is secant if and only if $r^{\widetilde F}$ is external.
(d) There exist two planes of $\mathbb P^3(\mathbb R)$ that pass through $r$ and are tangent to $\widetilde F$ if and only if $r$ is external.

Proof
(a) If $r^{\widetilde F}$ is secant to $\widetilde F$ at distinct points $Q_1$ and $Q_2$, by Proposition 10.78-(f) there exist two planes $Y_1$ and $Y_2$ passing through $r$ and tangent to $\widetilde F$ at the points $Q_1$ and $Q_2$. Let $R_a$ and $R_b$ be the rulings of lines contained in $\widetilde F$. Hence $Y_1 \cap V_+(\widetilde F) = a_1 \cup b_1$ and $Y_2 \cap V_+(\widetilde F) = a_2 \cup b_2$, with $a_1, a_2 \in R_a$ and $b_1, b_2 \in R_b$. Since $r$ is not contained in $V_+(\widetilde F)$, the four lines $a_1, a_2, b_1, b_2$ are distinct. The lines $a_1$ and $b_2$ meet at a point of $\widetilde F$, and on the other hand $a_1 \cap b_2 \in Y_1 \cap Y_2 = r$, so that the point $a_1 \cap b_2 \in r \cap V_+(\widetilde F)$. Analogously, $a_2 \cap b_1 \in r \cap V_+(\widetilde F)$. The points $a_1 \cap b_2$ and $a_2 \cap b_1$ are distinct, otherwise there would be three lines contained in $V_+(\widetilde F)$ passing through a point. The converse immediately follows from the fact that $r$ is the polar of $r^{\widetilde F}$.

(b) is an immediate consequence of (a) and Proposition 10.78-(f).

(c) Let $r^{\widetilde F}$ be secant to $\widetilde F$ and $r^{\widetilde F} \cap V_+(\widetilde F) = \{Q_1, Q_2\}$. Then

$$T^p_{\widetilde F, Q_1} \cap T^p_{\widetilde F, Q_2} = \pi^{\widetilde F}(Q_1) \cap \pi^{\widetilde F}(Q_2) = r$$


and

$$T^p_{\widetilde F, Q_1} \cap V_+(\widetilde F) = \{Q_1\}, \qquad T^p_{\widetilde F, Q_2} \cap V_+(\widetilde F) = \{Q_2\},$$

since $\widetilde F$ is not ruled. The lines $r$ and $r^{\widetilde F}$ are distinct, because $r$ is not contained in $V_+(\widetilde F)$, and they are not incident, since $r$ is not tangent to $\widetilde F$ (Proposition 10.78). Therefore, if there existed a point $R \in r \cap V_+(\widetilde F)$, we should have $R \ne Q_1$, contradicting the fact that

$$r \cap V_+(\widetilde F) \subset T^p_{\widetilde F, Q_1} \cap V_+(\widetilde F) = \{Q_1\}.$$

Conversely, suppose that $r$ is external. By Proposition 10.78-(e), $r_{\tilde F}$ is not tangent to $\tilde F$. Let $r_{\mathbb C} \cap V_+(\tilde F_{\mathbb C}) = \{Q_1, Q_2\}$, where $r_{\mathbb C}$ is the complexification of $r$. The points $Q_1$ and $Q_2$ are complex conjugate, as are the tangent planes $T^p_{\tilde F_{\mathbb C},Q_1}$ and $T^p_{\tilde F_{\mathbb C},Q_2}$. Therefore

$$T^p_{\tilde F_{\mathbb C},Q_1} \cap V_+(\tilde F_{\mathbb C}) = a_1 \cup b_1, \qquad T^p_{\tilde F_{\mathbb C},Q_2} \cap V_+(\tilde F_{\mathbb C}) = a_2 \cup b_2,$$

where $a_1, a_2 \in \mathcal R_a$ and $b_1, b_2 \in \mathcal R_b$ (see Remark 10.99). Hence $\sigma_{\mathbb C}(a_1) = b_2$ and $\sigma_{\mathbb C}(a_2) = b_1$. Therefore
$$a_1 \cap b_2 = R \in T^p_{\tilde F_{\mathbb C},Q_1} \cap T^p_{\tilde F_{\mathbb C},Q_2} = s$$

and
$$a_2 \cap b_1 = S \in T^p_{\tilde F_{\mathbb C},Q_2} \cap T^p_{\tilde F_{\mathbb C},Q_1} = s,$$
where $s$ is the polar of $r_{\mathbb C}$. Since $R$ and $S$ are real, $R, S \in r_{\tilde F}$, hence $r_{\tilde F}$ is secant.
(d) is an immediate consequence of (c) and Proposition 10.78-(f).

⨆ ⨅
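Proposition 10.101-(c) can be sanity-checked in coordinates. In the sketch below (our own example, not from the text) the non-ruled quadric is the sphere $x^2+y^2+z^2-w^2=0$ of $\mathbb P^3(\mathbb R)$, $r$ is the $z$-axis, and the type of a line is read off from the discriminant of the quadric restricted to it:

```python
# Sanity check of Proposition 10.101-(c) for the (non-ruled) sphere
# x^2 + y^2 + z^2 - w^2 = 0 in P^3(R), matrix M = diag(1, 1, 1, -1).
# The line r and the two points spanning its polar line are ad hoc choices.

M = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, -1]]

def F(u, v):
    """Polar (bilinear) form u M v^t of the quadric."""
    return sum(u[i] * M[i][j] * v[j] for i in range(4) for j in range(4))

def line_type(u, v):
    """Restrict the quadric to the line lambda*u + mu*v and classify it
    by the discriminant of the resulting binary quadratic form."""
    disc = F(u, v) ** 2 - F(u, u) * F(v, v)
    return "secant" if disc > 0 else ("external" if disc < 0 else "tangent")

# r = the z-axis, spanned by the origin [0,0,0,1] and the improper point [0,0,1,0].
P, Q = [0, 0, 0, 1], [0, 0, 1, 0]
# Its polar line is cut out by the polar planes of P and Q (here w = 0 and z = 0),
# and is spanned by:
A, B = [1, 0, 0, 0], [0, 1, 0, 0]

print(line_type(P, Q))  # secant: r meets the sphere at z = ±1
print(line_type(A, B))  # external, as (c) predicts
```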

Definition 10.102 (The Absolute Hyperquadric) We define the absolute hyperquadric (a pair of points if $n = 2$, a conic if $n = 3$), or simply absolute, of the standard euclidean space $\mathbb E^n(\mathbb R)$ as the non-degenerate hyperquadric $\mathrm{Ab}_n := (\tilde Y_0)_{\mathbb C} \cap (\tilde\Gamma_n)_{\mathbb C}$ of the hyperplane at infinity $(\tilde Y_0)_{\mathbb C}$, i.e.
$$V_+(\mathrm{Ab}_n) = \{[x_0, \dots, x_n] \in \mathbb P^n(\mathbb C) : x_0^2 + x_1^2 + \cdots + x_n^2 = 0 = x_0\}.$$
In the case $n = 2$ the absolute is the pair of complex-conjugate points
$$I = [0, 1, i], \qquad J = [0, 1, -i].$$

These points are called the cyclic or circular points of the plane, and a line of $\mathbb A^2(\mathbb C)$ having a cyclic point as improper point is called an isotropic line. In general, we call any two complex-conjugate points lying on the absolute $\mathrm{Ab}_n$ circular points of $\mathbb E^n(\mathbb R)$. It is easy to see that a hyperquadric $\tilde f$ of $\mathbb E^n(\mathbb R)$ is a sphere if and only if the hyperquadric at infinity of $\tilde f_{\mathbb C}$ is $\mathrm{Ab}_n$; in particular, a conic of $\mathbb E^2(\mathbb R)$ is a circle if and only if its complexification has the cyclic points as improper points, and two circles have the same centre if and only if they are tangent at the cyclic points, i.e. they have the same tangent lines at the cyclic points.

Let $v = (x_1, \dots, x_n)$ and $w = (y_1, \dots, y_n)$ be two vectors of $\mathbb R^n$; they are orthogonal if $x_1y_1 + \cdots + x_ny_n = 0$, i.e. if the points $P = [0, x_1, \dots, x_n]$ and $Q = [0, y_1, \dots, y_n]$ are conjugate with respect to the absolute hyperquadric $\mathrm{Ab}_n$. Thus we are led to the following:
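The criterion "circle $\iff$ the complexified closure passes through $I$ and $J$" can be tested by direct evaluation; a minimal sketch with two ad hoc conics (the function names are ours):

```python
# A conic of E^2(R) is a circle exactly when its complexified projective
# closure passes through the cyclic points I = [0,1,i], J = [0,1,-i].
# conic(x0, x1, x2) below is the homogenized equation.

I, J = (0, 1, 1j), (0, 1, -1j)

def circle(x0, x1, x2):      # x^2 + y^2 - 2x - 4 = 0, homogenized
    return x1**2 + x2**2 - 2*x1*x0 - 4*x0**2

def ellipse(x0, x1, x2):     # x^2 + 4y^2 - 4 = 0, homogenized (not a circle)
    return x1**2 + 4*x2**2 - 4*x0**2

print(circle(*I), circle(*J))    # both zero: the cyclic points lie on it
print(ellipse(*I), ellipse(*J))  # both nonzero: they do not
```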

Proposition 10.103 Let $S$ and $T$ be two subspaces of $\mathbb E^n(\mathbb R)$ of dimensions $m > 0$ and $r > 0$ respectively. Then:
(a) If $m + r \le n$, $S \perp T$ if and only if the improper part of either of them is contained in the polar space of the improper part of the other relative to the absolute hyperquadric, i.e. $S_\infty \subset (T_\infty)^{\mathrm{Ab}_n}$ (equivalently $T_\infty \subset (S_\infty)^{\mathrm{Ab}_n}$), where $S_\infty = \mathbb P(D(S))$ and $T_\infty = \mathbb P(D(T))$ according to the notation of Definition 10.77.
(b) If $m + r \ge n$, $S \perp T$ if and only if $(T_\infty)^{\mathrm{Ab}_n} \subset S_\infty$ (equivalently $(S_\infty)^{\mathrm{Ab}_n} \subset T_\infty$).

Proof
(a) If $m + r \le n$ and $S \perp T$, then, by Definition 4.10, $D(S) \subset D(T)^\perp$ ($\iff D(T) \subset D(S)^\perp$), so that $S_\infty \subset (T_\infty)^{\mathrm{Ab}_n}$, which is equivalent to $T_\infty \subset (S_\infty)^{\mathrm{Ab}_n}$. Conversely, if $S_\infty \subset (T_\infty)^{\mathrm{Ab}_n}$ we have $m - 1 \le n - 1 - (r - 1) - 1 = n - r - 1$, i.e. $m + r \le n$, and $D(S) \subset D(T)^\perp$, i.e. $S \perp T$.
(b) If $m + r \ge n$ and $S \perp T$, then $D(T)^\perp \subset D(S)$ ($\iff D(S)^\perp \subset D(T)$), so that $(T_\infty)^{\mathrm{Ab}_n} \subset S_\infty$, which is equivalent to $(S_\infty)^{\mathrm{Ab}_n} \subset T_\infty$. Conversely, if $(T_\infty)^{\mathrm{Ab}_n} \subset S_\infty$ we get $n - 1 - (r - 1) - 1 \le m - 1$, i.e. $m + r \ge n$, and $D(T)^\perp \subset D(S)$, i.e. $S \perp T$. ⨆ ⨅

In particular we have:
(a) Two lines $r$ and $s$ of $\mathbb E^n(\mathbb R)$ are orthogonal if and only if their improper points are conjugate relative to $\mathrm{Ab}_n$. If $n = 2$ and $R$ and $S$ are the improper points of $r$ and $s$ respectively, $r$ and $s$ are orthogonal if and only if (see Remark 10.92)
$$\rho(I, J, R, S) = -1.$$
(b) Two planes $\sigma$, $\sigma'$ of $\mathbb E^3(\mathbb R)$ are orthogonal if and only if their improper lines are conjugate relative to $\mathrm{Ab}_3$.
(c) A line $r$ and a plane $\sigma$ of $\mathbb E^3(\mathbb R)$ are orthogonal if and only if the improper point of $r$ is the pole of the improper line of $\sigma$ relative to $\mathrm{Ab}_3$.

Definition 10.104 Let $f$ be a non-degenerate conic, let $r_0 : x_0 = 0$ be the improper line and $\widehat{x_0}(f)$ the projective closure of $f$. We recall some definitions (some of them given above for a general hyperquadric):


1. The centre of $f$ is the pole of $r_0$.
2. A diameter of $f$ is the polar of a point of $r_0$.
3. An axis of $f$ is a diameter $d$ such that the pole of $d$ and its point at infinity are conjugate with respect to the absolute $x_1^2 + x_2^2 = 0$ of $r_0$.
4. According to Definition 10.89, if $f$ has a proper centre $C$, the involution of conjugate diameters is defined:
$$\Phi_C : \Phi_1(C) \to \Phi_1(C), \qquad \Phi_C(d) = \bigl(\pi^{\widehat{x_0}(f)}\bigr)_*(d) + C, \quad \forall\, d \in \Phi_1(C), \tag{10.62}$$
where $\Phi_1(C)$ is the pencil of lines passing through $C$. Therefore two diameters $d$ and $d'$ are conjugate if and only if the pole of $d$ is the point at infinity of $d'$ (and conversely).
5. The fixed points of $\Phi_C$ are the asymptotes, so that the asymptotes are the self-conjugate diameters. If $f$ is an ellipse, then $\Phi_C$ is an elliptic involution (see Definition 9.32) and the asymptotes are complex-conjugate lines. If $f$ is a hyperbola, then $\Phi_C$ is a hyperbolic involution and the asymptotes are real lines.
6. If $f$ is a parabola, the diameters are real lines parallel to the axis, and we may improperly say that all asymptotes coincide with the improper line $r_0$.
7. Let $P \in \mathbb P^2(\mathbb R)$ be a non-self-conjugate point and $\Phi_P : \Phi_1(P) \to \Phi_1(P)$ be the associated involution. We say that $\Phi_P$ is circular if any two conjugate lines are orthogonal, i.e. their improper points are conjugate with respect to the absolute $x_1^2 + x_2^2 = 0$ of $r_0$.
8. If $f$ has a proper centre $C$, the line at infinity $r_0 : x_0 = 0$ is not tangent to $\widehat{x_0}(f)_{\mathbb C}$. If $f$ is not a circle, then $\widehat{x_0}(f)_{\mathbb C}$ does not contain the cyclic points. By Proposition 10.90-(c) there are two distinct lines $s_1$ and $s_2$ of $\mathbb P^2(\mathbb C)$ that pass through $I$ and are tangent to $\widehat{x_0}(f)_{\mathbb C}$ at the proper points $Q_1 = s_1 \cap \widehat{x_0}(f)_{\mathbb C}$ and $Q_2 = s_2 \cap \widehat{x_0}(f)_{\mathbb C}$. The complex-conjugate lines $t_1 = \sigma_{\mathbb C}(s_1)$ and $t_2 = \sigma_{\mathbb C}(s_2)$ pass through $J$, and they are also tangent to $\widehat{x_0}(f)_{\mathbb C}$, since $\sigma_{\mathbb C}(\widehat{x_0}(f)_{\mathbb C}) = \widehat{x_0}(f)_{\mathbb C}$. By Lemma 10.98, $F_1 := s_1 \cap t_1$ and $F_2 := s_2 \cap t_2$ are the only real points of $s_1 \cup s_2 \cup t_1 \cup t_2$. We call $F_1$ and $F_2$ the foci of $f$. Moreover, $F_1$ and $F_2$ are distinct from the centre $C$ (if $F_i = C$ for $i = 1$ or $i = 2$, then $C \in s_i = \pi^{\widehat{x_0}(f)_{\mathbb C}}(Q_i)$, so that $Q_i \in r_0$, contradicting the fact that $Q_i$ is proper).
9. If $f$ is a circle, then $I$ and $J$ belong to $\widehat{x_0}(f)_{\mathbb C}$. If $r$ is the line tangent to $\widehat{x_0}(f)_{\mathbb C}$ at $I$, then $r \ne r_0$, since the improper line $r_0$ intersects $\widehat{x_0}(f)_{\mathbb C}$ at $I$ and $J$. The complex-conjugate line $s = \sigma_{\mathbb C}(r)$ is tangent to $\widehat{x_0}(f)_{\mathbb C}$ at $J$, hence it is distinct from $r$. The lines $r$ and $s$, being the polars of points at infinity, intersect in the centre $C$ of $f$, which can also be called the focus of $f$. If $f$ is a parabola, then two of the tangents $s_1$, $s_2$, $t_1$, $t_2$ from the cyclic points coincide with $r_0$, and the remaining lines meet at the only real point $F$, which is called the focus of $f$. A directrix of a conic $f$ which is not a circle is the polar of a focus.
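The focal construction above (tangents to the complexified conic from the cyclic points, paired with their conjugates) can be carried out by hand for the ellipse $x^2/4 + y^2 = 1$. In the sketch below the tangent line coordinates were precomputed by us from the tangency condition; the code only verifies the final incidence computation:

```python
# Foci of x^2/4 + y^2 = 1 via the cyclic points: the tangents from I = [0,1,i]
# to x1^2 + 4 x2^2 - 4 x0^2 = 0, in line coordinates [u0, u1, u2], are the two
# lines s1, s2 below (computed by hand); their conjugates are the tangents
# from J, and pairing each tangent with its conjugate yields the real foci.
import math

s3 = math.sqrt(3)
s1 = (1j * s3, -1j, 1)
s2 = (-1j * s3, -1j, 1)
t1 = tuple(z.conjugate() for z in s1)   # tangents from J
t2 = tuple(z.conjugate() for z in s2)

def meet(a, b):
    """Intersection point of two lines of P^2 (cross product of coordinates)."""
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def affine(p):
    """Dehomogenize [x0, x1, x2] -> (x1/x0, x2/x0), assuming x0 != 0."""
    return (p[1] / p[0]).real, (p[2] / p[0]).real

F1, F2 = meet(s1, t1), meet(s2, t2)
print(affine(F1), affine(F2))   # the classical foci (±sqrt(3), 0), up to rounding
```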


Proposition 10.105 Let $f$ be a non-degenerate conic.
(a) If $F$ is a focus of $f$, then the involution $\Phi_F$ is circular.
(b) If $f$ has a proper centre but is not a circle, then the foci $F_1$ and $F_2$ lie on an axis, called the focal axis. Moreover, the directrices are orthogonal to the focal axis.
(c) The focus of a parabola lies on its axis; moreover, the directrix is orthogonal to the axis.

Proof
(a) Let $s$ and $t$ be the two complex-conjugate tangent lines from $I$ and $J$ meeting at $F$. By Proposition 10.90-(d), two lines $u$ and $v$ through $F$ are conjugate with respect to $f$ (or, equivalently, with respect to $\widehat{x_0}(f)_{\mathbb C}$) if and only if $\rho(s, t, u, v) = -1$, which is equivalent to $\rho(I, J, U, V) = -1$ (see Definition 9.29), where $U = u \cap r_0$ and $V = v \cap r_0$; i.e. $U$ and $V$ are conjugate with respect to the absolute of $r_0$. Therefore $u$ and $v$ are orthogonal.
(b) Consider now the line $r = F_1F_2$, its improper point $P$ and the conjugate lines $s_1 = \Phi_{F_1}(r)$ and $s_2 = \Phi_{F_2}(r)$ through $F_1$ and $F_2$ respectively. Then $s_1$ and $s_2$ pass through the pole $Q$ of $r$ by definition. If $A_1$ and $A_2$ are the improper points of $s_1$ and $s_2$ respectively, then $\rho(I, J, A_1, P) = \rho(I, J, A_2, P) = -1$ (by definition), so that $A_1 = A_2$, hence $A_1 = A_2 = Q$ (as $s_1 \ne s_2$), so that $\rho(I, J, Q, P) = -1$, i.e. $r$ is an axis of $f$. Since a directrix $d$ is the polar of a focus, $d$ contains the pole $Q$ of $r$ (since $r$ contains the pole of $d$), which is an improper point; hence $Q$ is the point at infinity of $d$, and since $\rho(I, J, Q, P) = -1$ we get $d \perp r$. Since the pole $Q$ of $r$ belongs to $r_0$, the centre $C$ belongs to $r$ by reciprocity.
(c) If $f$ is a parabola and $r$ and $s$ are the isotropic lines meeting at the focus $F$, then (by Proposition 10.90-(d)) $\rho(r, s, u, v) = -1$, where $u$ is the line joining $F$ and the centre $C$, and $v$ its conjugate with respect to $\widehat{x_0}(f)_{\mathbb C}$. Therefore $\rho(I, J, C, U) = -1$, where $U$ is the pole of $FC$ (which lies on $r_0$, since the pole of $r_0$, i.e. $C$, lies on $FC$); hence $FC$ is the axis of $f$. Let $d$ be the directrix of $f$ and $P$ its improper point. Since $d$ is the polar of $F$, $d$ contains the pole of $FC$, i.e. $U \in d$; therefore $U = P$, so that $d \perp FC$. ⨆ ⨅

Definition 10.106 Let $\pi_0 : x_0 = 0$ be the plane at infinity of $\mathbb E^3(\mathbb R)$. Also for non-degenerate quadrics $f$ of $\mathbb E^3(\mathbb R)$ we recall the following definitions:
(a) The centre $C$ of $f$ is the pole of the improper plane $\pi_0$.
(b) A diametral plane of $f$ is the polar plane of a point of $\pi_0$.
(c) A diameter of $f$ is the polar line of a line of $\pi_0$.
(d) A principal diametral plane $\pi$ of $f$ is a diametral plane such that its pole $O$ is the pole of the improper line $\pi \cap \pi_0$ with respect to the absolute $\mathrm{Ab}_3$.
(e) An axis of $f$ is the line of intersection of two principal diametral planes.


Remark 10.107 To find foci, asymptotes or directrices of a conic $f$ in the affine/euclidean space we can avoid having to determine its canonical form, thanks to the above definitions, which involve the projective closure of $f$ and its complexification.

We end this chapter by introducing a concept which will be crucial in Chap. 13.

Interior and Exterior of a Real Hyperquadric

Definition 10.108 Let $P \in \mathbb P^n(\mathbb R)$ and let $\tilde F$ be a non-singular hyperquadric of $\mathbb P^n(\mathbb R)$, $n > 1$. We say that $P$ is interior to $\tilde F$ if its polar hyperplane $\pi^{\tilde F}(P)$ shares no real points with $\tilde F$ (see Definition 10.89). Since the points of $\tilde F$ lie in their polar hyperplanes, no interior point belongs to $\tilde F$. One may equivalently say that the interior points are those through which no real line tangent to $\tilde F$ passes. A point $P \notin V_+(\tilde F)$ is exterior to $\tilde F$ if $V_+(\tilde F) \cap \pi^{\tilde F}(P)$ has some real points, i.e. through $P$ there is some real line tangent to $\tilde F$. We denote the sets of interior (exterior) points by $\mathrm{Int}(\tilde F)$ ($\mathrm{Ext}(\tilde F)$ respectively). Thus we have a decomposition of $\mathbb P^n(\mathbb R)$ into three pairwise disjoint subsets
$$\mathbb P^n(\mathbb R) = \mathrm{Int}(\tilde F) \cup V_+(\tilde F) \cup \mathrm{Ext}(\tilde F). \tag{10.63}$$

It is easy to see that the above splitting (10.63) has a projective character: if $\varphi : \mathbb P^n(\mathbb R) \to \mathbb P^n(\mathbb R)$ is a projectivity, then
$$\varphi(\mathrm{Int}(\tilde F)) = \mathrm{Int}(\varphi(\tilde F)), \qquad \varphi(\mathrm{Ext}(\tilde F)) = \mathrm{Ext}(\varphi(\tilde F)). \tag{10.64}$$

By the definition of the index of a hyperquadric (see Definition 10.60) we immediately have:

Proposition 10.109 Let $\tilde F$ be a non-singular hyperquadric of $\mathbb P^n(\mathbb R)$, $n > 1$.
(a) If $i(\tilde F) = n$, then $\mathrm{Int}(\tilde F) = \mathbb P^n(\mathbb R)$. This is the case of $F = x_0^2 + x_1^2 + \cdots + x_n^2$.
(b) If $i(\tilde F) \le n - 2$, then $\mathrm{Int}(\tilde F) = \emptyset$.

Therefore we have to focus our attention on the non-singular hyperquadrics $\tilde F$ of $\mathbb P^n(\mathbb R)$, $n > 1$, with $i(\tilde F) = n - 1$, i.e. with reduced equation $x_0^2 + x_1^2 + \cdots + x_{n-1}^2 - x_n^2 = 0$.

Definition 10.110 In order to study the splitting (10.63) for the hyperquadrics of index $n - 1$ it is useful to introduce the notion of the sign of a point with respect to a hyperquadric. Let $\tilde F$ be a non-singular hyperquadric of $\mathbb P^n(\mathbb R)$, $n > 1$. Once the representative $F$ is fixed, we may assign to each point $P = [x] \notin V_+(\tilde F)$ the sign, positive or negative, of $F(x, x)$, which is independent of the choice of $x$ but depends on the choice of $F$. We say that two points $P = [x]$ and $Q = [y]$, $P, Q \notin V_+(\tilde F)$, have the same sign or opposite signs relative to $\tilde F$ if $F(x, x)F(y, y) > 0$ or $F(x, x)F(y, y) < 0$ respectively. Thus comparing signs is independent of the choice of the representative of $\tilde F$.


If $\alpha = T_A \in \mathrm{PGL}_n(\mathbb R)$ and $\tilde G = \alpha \cdot \tilde F$ (see Lemma 10.46), then for each point $P = [x]$ one has $F(x, x) = G(Ax, Ax)$, so that two points $P$ and $Q$ have the same sign relative to $\tilde F$ if and only if $\alpha(P)$ and $\alpha(Q)$ have the same sign relative to $\tilde G$; i.e. this notion is a projective invariant.

Lemma 10.111 Let $\tilde F$ be a non-singular hyperquadric of $\mathbb P^n(\mathbb R)$ such that $i(\tilde F) = n - 1$, $n > 1$. Then any self-polar $(n+1)$-hedron $S$ has only one vertex interior to $\tilde F$, and all the other vertices are exterior. The interior vertex is called the lone vertex of $S$. Moreover, the lone vertex and all the other vertices have opposite signs.

Proof Let $S = \{P_0, \dots, P_n\}$ be a self-polar $(n+1)$-hedron relative to $\tilde F$, hence
$$\pi^{\tilde F}(P_i) = P_0 + \cdots + \widehat{P_i} + \cdots + P_n, \qquad i = 0, \dots, n.$$

We may assume $P_0 = [1, 0, \dots, 0]$, $P_1 = [0, 1, 0, \dots, 0]$, $\dots$, $P_n = [0, 0, \dots, 1]$, so that $\tilde F$ has the equation
$$x_0^2 + x_1^2 + \cdots + x_{n-1}^2 - x_n^2 = 0.$$
Since $\pi^{\tilde F}(P_n)$ has the equation $x_n = 0$, we have $\pi^{\tilde F}(P_n) \cap V_+(\tilde F) = \emptyset$, so that $P_n$ is interior to $\tilde F$. If $i < n$, $\pi^{\tilde F}(P_i)$ has the equation $x_i = 0$ and $\pi^{\tilde F}(P_i) \cap V_+(\tilde F) \ne \emptyset$, so that $P_i$ is exterior to $\tilde F$. The last assertion is immediately verified. ⨆ ⨅

) =  be a non-singular hyperquadric of .Pn (R) such that .i(F Corollary 10.112 Let .F n − 1, .n > 1, then ) /= ∅, Int(F

.

) /= ∅. Ext(F

2 If .F = x02 + x12 + · · · + xn−1 − xn2 then 2 ) = {[x0 , . . . , xn ] : x02 + x12 + · · · + xn−1 Int(F − xn2 < 0}.

.

Proof The point $[a_0, a_1, \dots, a_n] \in \mathrm{Int}(\tilde F)$ if and only if the system
$$\begin{cases} a_0x_0 + a_1x_1 + \cdots + a_{n-1}x_{n-1} - a_nx_n = 0 \\ x_0^2 + x_1^2 + \cdots + x_{n-1}^2 - x_n^2 = 0 \end{cases}$$
has no real solutions, i.e. $a_0^2 + a_1^2 + \cdots + a_{n-1}^2 - a_n^2 < 0$. ⨆ ⨅
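Corollary 10.112 in coordinates for $n = 2$: a point is interior to the conic $x_0^2 + x_1^2 - x_2^2 = 0$ exactly when the form is negative on it. On the affine chart $x_2 = 1$ this is the familiar interior of the unit circle; a minimal sketch (function names are ours):

```python
# The splitting (10.63) for x0^2 + x1^2 - x2^2 = 0 in P^2(R), evaluated on the
# affine chart x2 = 1, where V_+(F) is the unit circle x^2 + y^2 = 1.

def F(x, y):                      # representative evaluated at [x, y, 1]
    return x * x + y * y - 1

def classify(x, y):
    s = F(x, y)
    return "interior" if s < 0 else ("exterior" if s > 0 else "on the conic")

print(classify(0.0, 0.0))   # interior  (the centre, i.e. the lone vertex [0,0,1])
print(classify(0.5, 0.5))   # interior
print(classify(2.0, 0.0))   # exterior
print(classify(1.0, 0.0))   # on the conic
```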

Corollary 10.113 Assume the notation and hypothesis of Lemma 10.111. If $P$ is a point interior to $\tilde F$, then $P$ and each point $Q$ conjugate to $P$ have opposite signs with respect to $\tilde F$. Therefore two points both interior (exterior) to $\tilde F$ have the same sign, while an interior point and an exterior point have opposite signs.


Proof Let $Q \in \pi^{\tilde F}(P)$. Since $\pi^{\tilde F}(P) \cap V_+(\tilde F) = \emptyset$, we have $Q \notin V_+(\tilde F)$. Since $P \notin \pi^{\tilde F}(P)$, we may find a self-polar $(n+1)$-hedron containing $P$ and $Q$ (see Proposition 10.85). By Lemma 10.111, $Q$ is exterior, because $P$ is interior by assumption, and they have opposite signs.

If both $P$ and $P'$ are interior points, then each point $Q \in \pi^{\tilde F}(P) \cap \pi^{\tilde F}(P')$ has the sign opposite to that of $P$ and to that of $P'$, which therefore necessarily have the same sign. If $Q$ is exterior, there exists a self-polar $(n+1)$-hedron $S$ with vertex $Q$, and the lone vertex $P$ of $S$ is interior and conjugate to $Q$, so that $P$ and $Q$ have opposite signs. Finally, if $Q$ and $Q'$ are exterior points, then, being conjugate to two interior points $P$ and $P'$, they have signs opposite to those of $P$ and $P'$ (which have the same sign), and therefore they have the same sign. ⨆ ⨅

Proposition 10.114 Assume the notation and hypothesis of Lemma 10.111. If $P$ is a point interior to $\tilde F$, then each line through $P$ meets $\tilde F$ at two distinct real points. Moreover, each line which meets $\tilde F$ at two distinct real points $P_1$ and $P_2$ contains both interior and exterior points. More precisely, one open segment with endpoints $P_1$ and $P_2$ consists of interior points, while the other consists of exterior points.

Proof Let $P = [x]$ be an interior point and let $r$ be a line through $P$. Since $P \notin \pi^{\tilde F}(P)$, we have $r \cap \pi^{\tilde F}(P) = Q \ne P$. Therefore $P$ and $Q = [y]$ have opposite signs by Corollary 10.113, i.e. $F(x, x)F(y, y) < 0$. Since $P$ and $Q$ are conjugate, $F(x, y) = 0$, so that the equation $F(x, x)\lambda^2 + F(y, y)\mu^2 = 0$ has two distinct real solutions, i.e. $V_+(\tilde F) \cap r$ consists of two distinct real points.

Let $r$ be a line which meets $\tilde F$ at two distinct real points $P_1$ and $P_2$. Let $P \in r$, $P \ne P_1, P_2$. Take $P'$ as the fourth harmonic of $P_1$, $P_2$ and $P$. Then $P$ and $P'$ are conjugate (by Proposition 10.90-(b)), so that one is interior and the other is exterior. If $P = [x]$ and $P' = [y]$ are two points belonging to one open segment with endpoints $P_1$ and $P_2$, then by (13.15) (in the proof of Lemma 13.4-(b)) $F(x, x)F(y, y) > 0$, so that $P$ and $P'$ have the same sign, and the last claim follows from Corollary 10.113. ⨆ ⨅
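The first claim of Proposition 10.114 is easy to test numerically for the unit circle: every line through an interior point has positive discriminant against the circle. A small sketch (the interior point and the sampled directions are arbitrary choices of ours):

```python
# Every line through an interior point of x^2 + y^2 = 1 is secant: the
# restriction (px + t*c)^2 + (py + t*s)^2 = 1 gives
# t^2 + 2(px*c + py*s) t + (px^2 + py^2 - 1) = 0, whose (half-)discriminant
# is positive whenever px^2 + py^2 < 1.
import math

px, py = 0.3, -0.2                       # interior point: px^2 + py^2 < 1
for k in range(12):                      # twelve directions through it
    c, s = math.cos(k * math.pi / 12), math.sin(k * math.pi / 12)
    disc = (px * c + py * s) ** 2 - (px * px + py * py - 1)
    assert disc > 0                      # two distinct real intersection points
print("all 12 sampled lines are secant")
```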

10.4 Exercises

Exercise 10.1 Prove the formula $\dim_K({}^hK[x_0, \dots, x_n]_d) = \binom{n+d}{d}$. (Hint: find the number of monomials $x_0^{i_0}x_1^{i_1}\cdots x_n^{i_n}$ with $i_0 + i_1 + \cdots + i_n = d$, $i_j \ge 0$.)

Exercise 10.2 Let $\sigma$ be a permutation of $\{0, 1, \dots, n\}$ ($n \ge 1$), and $f : \mathbb P^n(K) \to \mathbb P^n(K)$ be the map defined by $f([x_0, x_1, \dots, x_n]) = [x_{\sigma(0)}, x_{\sigma(1)}, \dots, x_{\sigma(n)}]$, $\forall\, [x_0, x_1, \dots, x_n] \in \mathbb P^n(K)$. Show that $f \in \mathrm{PGL}_n(K)$.

Exercise 10.3 Let $F = x_0^3 + x_1^3 + x_2^3 + a(x_0 + x_1 + x_2)^3$. Find all complex numbers $a$ such that the curve $\tilde F$ of $\mathbb P^2(\mathbb C)$ has singular points, and determine such points.
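The count asked for in Exercise 10.1 is the classical "stars and bars" number; it can be cross-checked by brute force for small $n$ and $d$:

```python
# Exercise 10.1 checked for small n, d: the number of degree-d monomials in
# x0, ..., xn equals binomial(n+d, d). A multiset of d indices drawn from
# {0, ..., n} corresponds to exactly one monomial of degree d.
from itertools import combinations_with_replacement
from math import comb

for n in range(4):
    for d in range(1, 5):
        count = sum(1 for _ in combinations_with_replacement(range(n + 1), d))
        assert count == comb(n + d, d)
print("dim of the degree-d part agrees with C(n+d, d) for all n <= 3, d <= 4")
```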


Exercise 10.4 Find the singular locus $\mathrm{Sing}(\tilde F)$ of the Steiner surface $\tilde F$ of $\mathbb P^3(\mathbb C)$, where $F = x_0^2x_1^2 + x_1^2x_2^2 + x_0^2x_2^2 - x_0x_1x_2x_3$.

Exercise 10.5 Find the singular locus $\mathrm{Sing}(\tilde F)$ of the dual Steiner surface $\tilde F$ of $\mathbb P^3(\mathbb C)$, where $F = x_1x_2x_3 + x_0x_2x_3 + x_0x_1x_3 + x_0x_1x_2$.

Exercise 10.6 Let $\tilde F \in H_3(2, \mathbb C)$ be a curve of $\mathbb P^2(\mathbb C)$ of degree 3 which has three non-collinear singular points $A$, $B$, $C$. Show that $V(\tilde F)$ is the union of the lines $AB$, $BC$ and $CA$.

Exercise 10.7 (Cone Circumscribed to a Hyperquadric) Let $\tilde F$ be a hyperquadric of $\mathbb P^n(K)$, where $F = xCx^t$, $x = (x_0, \dots, x_n)$ and $C = (c_{ij})_{i,j=0,\dots,n}$. Fix a point $Y = [y_0, \dots, y_n] \in \mathbb P^n(K) \setminus V_+(\tilde F)$. Show that the equation
$$(yCx^t)^2 - (yCy^t)(xCx^t) = 0, \qquad y = (y_0, \dots, y_n), \tag{10.65}$$

represents the locus of the lines passing through $Y$ and tangent to $\tilde F$, called the cone circumscribed to $\tilde F$ from $Y$.

Exercise 10.8 Prove that a non-degenerate hyperquadric $\tilde F$ of $\mathbb P^n(\mathbb C)$, $n \ge 3$, contains infinitely many lines. (Hint: use Corollary 10.83 (Bertini's theorem).)

Exercise 10.9 Prove that a hypersurface of degree 1 is uniquely determined by its support.

Exercise 10.10 The Fermat cubic $x_0^3 + x_1^3 + x_2^3 + x_3^3 = 0$ of $\mathbb P^3(\mathbb C)$ contains exactly 27 lines. (Hint: up to a permutation of coordinates, every line in $\mathbb P^3(\mathbb C)$ is given by two linear equations of the form
$$\begin{cases} x_0 = a_2x_2 + a_3x_3 \\ x_1 = b_2x_2 + b_3x_3 \end{cases}$$
for suitable $a_2, a_3, b_2, b_3 \in \mathbb C$.)

Exercise 10.11 Prove that for any non-degenerate ruled quadric $\tilde F$ of $\mathbb P^3(\mathbb C)$ and of $\mathbb P^3(\mathbb R)$ there is a projective frame relative to which $\tilde F$ has the equation $x_0x_1 - x_2x_3 = 0$.

Exercise 10.12 Let $P_1, \dots, P_5$ be five points of $\mathbb P^2(K)$ in general position.
1. Prove that there is one and only one non-degenerate conic passing through $P_1, \dots, P_5$. (Hint: choose a convenient projective frame.)
2. Let $\{P_0, P_1, P_2, U\}$ be a projective frame of $\mathbb P^2(K)$. Prove that there is one and only one non-degenerate conic passing through $P_1$, $P_2$, $U$ whose tangent lines at $P_1$ and $P_2$ meet at $P_0$.

Exercise 10.13 Let $f$ be an affine non-degenerate conic, $\widehat{x_0}(f)$ its projective closure and $r_0 : x_0 = 0$ the line at infinity. Prove that:
(a) $f$ is a parabola if and only if $f_\infty := \widehat{x_0}(f) \cap r_0$ is a double point, i.e. $r_0$ is tangent to $\widehat{x_0}(f)$.
(b) $f$ is an ellipse if and only if $f_\infty$ is a pair of complex-conjugate points.
(c) $f$ is a hyperbola if and only if $f_\infty$ is a pair of distinct real points.
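In coordinates, the improper points of $\widehat{x_0}(f)$ depend only on the degree-2 part of $f$, so (a)–(c) reduce to the sign of its discriminant. A small sketch (the helper name is ours), applied to three of the conics of Exercise 10.16 below:

```python
# Exercise 10.13 in practice: the improper points of the closure of
# f = a x^2 + b xy + c y^2 + (lower-order terms) are the [0, x1, x2] with
# a x1^2 + b x1 x2 + c x2^2 = 0, so for non-degenerate f the affine type is
# decided by the discriminant b^2 - 4ac of the quadratic part alone.

def type_at_infinity(a, b, c):
    d = b * b - 4 * a * c
    if d < 0:
        return "ellipse"       # pair of complex-conjugate improper points
    if d > 0:
        return "hyperbola"     # two distinct real improper points
    return "parabola"          # double improper point (r0 is tangent)

print(type_at_infinity(1, 1, 1))    # x^2 + xy + y^2 = 1       -> ellipse
print(type_at_infinity(1, 1, -1))   # x^2 + xy - y^2 = 1       -> hyperbola
print(type_at_infinity(1, 2, 1))    # x^2 + 2xy + y^2 - 4x = 0 -> parabola
```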

Exercise 10.14 Show that a conic $f$ of $\mathbb E^2(\mathbb R)$ is an equilateral hyperbola (i.e. with orthogonal asymptotes) if and only if the cyclic points $I$ and $J$ are conjugate with respect to $\widehat{x_0}(f)_{\mathbb C}$.

Exercise 10.15 Prove that the axes of a hyperbola are the bisector lines of the asymptotes. The converse is true for equilateral hyperbolas.

Exercise 10.16 Find centres, axes, asymptotes, foci and directrices of the following real conics:
$$x^2 + 2y^2 - 3xy + 8 = 0, \qquad x^2 + xy + y^2 = 1,$$
$$x^2 + xy - y^2 = 1, \qquad x^2 + 2xy + y^2 - 4x = 0,$$
$$x^2 + y^2 - (x + y - 1)^2 = 0, \qquad x^2 + 4xy + 4y^2 - 24x + 32y = 0.$$

Exercise 10.17 Let $f$ be an affine quadric, let $\widehat{x_0}(f)$ be its projective closure and $f_\infty := \widehat{x_0}(f) \cap \pi_0$ be its conic at infinity. Prove that the classification of non-degenerate quadrics of $\mathbb A^3(\mathbb R)$ can be rephrased as follows (compare with the one given in Chapter 5):

(a) $f$ is an elliptic paraboloid if $f_\infty$ is a pair of complex-conjugate lines.
(b) $f$ is a hyperbolic paraboloid if $f_\infty$ is a pair of real lines.
(c) $f$ is a real ellipsoid if $f_\infty$ is a non-degenerate conic such that $V(f_\infty) = \emptyset$ and $V(f) \ne \emptyset$.
(d) $f$ is an imaginary ellipsoid if $f_\infty$ is a non-degenerate conic such that $V(f_\infty) = \emptyset$ and $V(f) = \emptyset$.
(e) $f$ is a one-sheeted hyperboloid if $f_\infty$ is a non-degenerate conic and $f$ is ruled.
(f) $f$ is a two-sheeted hyperboloid if $f_\infty$ is a non-degenerate conic and $f$ is not ruled.

Exercise 10.18 Determine the possible centres, principal diametral planes and axes of symmetry of the non-degenerate quadrics of Exercise 5.11.

Exercise 10.19 (Circles on a Real Non-degenerate Quadric) Let $f$ be a non-degenerate quadric of $\mathbb E^3(\mathbb R)$ which is not a sphere. Let $f_\infty$ be its conic at infinity, and let
$$f_\infty \cap \mathrm{Ab}_3 = \{A, \bar A, B, \bar B\}.$$

First suppose $A \ne B$. Then we have two pencils of parallel planes, passing through the real line $A\bar A$ and the real line $B\bar B$ respectively. These planes cut $V(f)$ into circles (why?), and these are the only circles contained in $V(f)$. If $A = B$, i.e. $f_\infty$ and $\mathrm{Ab}_3$ are tangent at $A = B$ and at $\bar A = \bar B$, then there is only one pencil $\Phi_2(r)$ of parallel planes


passing through the real line $r := A\bar A$ that cut $V(f)$ into circles (the only circles on $V(f)$). In this case $f$ has a proper centre (indeed $f_\infty$ is a non-degenerate conic) and it is of revolution around the line passing through the centre and orthogonal to the planes of $\Phi_2(r)$. Find all circles lying on the quadric $z^2 + xy + 1 = 0$.

Exercise 10.20 Here is a procedure to determine a self-polar tetrahedron for a non-degenerate quadric $\tilde F$ of $\mathbb P^3(K)$.
1. Let $A \notin V_+(\tilde F)$ and $\pi_A := \pi^{\tilde F}(A)$ its polar plane.
2. Take $B \in \pi_A \setminus V_+(\tilde F)$. The polar plane $\pi_B$ passes through $A$, and $\pi_B \cap \pi_A$ is a line $r \not\subset V_+(\tilde F)$, otherwise $\pi_B$ would be tangent to $\tilde F$ and $B \in V_+(\tilde F)$.
3. Take $C \in r \setminus V_+(\tilde F)$. The polar plane $\pi_C$ passes through $A$ and $B$, and $\pi_C \cap \pi_A$ is a line $s$ not lying on $\tilde F$ (otherwise $\pi_C$ would be tangent to $\tilde F$ and $C \in V_+(\tilde F)$).
4. Let $D = r \cap s$ and $\pi_D$ its polar plane. Then $D \notin V_+(\tilde F)$: otherwise (since $D \in \pi_D$ and $\pi_D \supset \{A, B, C\}$) $\pi_D$ would be tangent to $\tilde F$ and we should have $r, s \subset \pi_D$, which implies $\pi_D = \pi_A$, which is absurd.

The quadruple $\{A, B, C, D\}$ is a self-polar tetrahedron for $\tilde F$; indeed
$$\{B, C, D\} \subset \pi_A, \quad \{A, C, D\} \subset \pi_B, \quad \{A, B, D\} \subset \pi_C, \quad \{A, B, C\} \subset \pi_D.$$

Determine a self-polar tetrahedron for the quadric $3x_0^2 + x_1^2 + 3x_2^2 + 2x_0x_2 + 2x_0x_3 - 2x_1x_3 + 6x_2x_3 + 3x_3^2 = 0$.
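The four steps of Exercise 10.20 can be followed numerically for the quadric just given. The concrete choices of $A$, $B$, $C$ below are ours (any admissible choices work); the final assertions check the defining property of a self-polar tetrahedron:

```python
# Exercise 10.20 carried out for
# 3x0^2 + x1^2 + 3x2^2 + 2x0x2 + 2x0x3 - 2x1x3 + 6x2x3 + 3x3^2 = 0.

M = [[3, 0, 1, 1],
     [0, 1, 0, -1],
     [1, 0, 3, 3],
     [1, -1, 3, 3]]          # symmetric matrix of the quadric

def pol(u, v):               # polar form u M v^t
    return sum(u[i] * M[i][j] * v[j] for i in range(4) for j in range(4))

A = [1, 0, 0, 0]             # step 1: A not on the quadric (pol(A, A) = 3)
B = [0, 1, 0, 0]             # step 2: B in pi_A = {3x0 + x2 + x3 = 0}, off the quadric
C = [0, 1, -1, 1]            # step 3: C on r = pi_A ∩ pi_B, off the quadric
D = [1, 0, -3, 0]            # step 4: D = r ∩ s, solving the three linear conditions

pts = [A, B, C, D]
for p in pts:
    assert pol(p, p) != 0                     # no vertex lies on the quadric
for i in range(4):
    for j in range(i + 1, 4):
        assert pol(pts[i], pts[j]) == 0       # the vertices are pairwise conjugate
print("self-polar tetrahedron:", pts)
```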

Chapter 11

Bézout's Theorem for Curves of $\mathbb P^2(K)$

In this chapter we shall prove Bézout's theorem for curves of $\mathbb P^2(K)$, where $K$ is a fixed algebraically closed field of arbitrary characteristic. We begin by stating the weak form of Bézout's theorem (see [1, 13, 26], or [30]):

Theorem 11.1 (Weak Form of Bézout's Theorem) If two projective plane curves $\mathcal C$ and $\mathcal D$ of degrees $m \ge 1$ and $n \ge 1$ have no common component, then
$$0 < \#(\mathcal C \cap \mathcal D) \le mn. \tag{11.1}$$

Before giving a general proof of Theorem 11.1, we prove a very special case of it by using the projective classification of conics.

11.1 Proof of a Simple Case of Weak Bézout's Theorem

We suppose that $\mathcal C = \tilde F$ is a curve of degree $\le 2$, $\mathcal D = \tilde G$ is a curve of degree $n$ and $\operatorname{char}(K) \ne 2$. By Proposition 10.27 we may suppose that $F$ is of the following type:
(i) $F = x_0$ ($\mathcal C$ is a line).
(ii) $F = x_0^2 + x_1^2 + x_2^2$ ($\mathcal C$ is a non-degenerate conic).
(iii) $F = x_0^2 + x_1^2$ ($\mathcal C$ is a pair of two distinct lines).
(iv) $F = x_0^2$ ($\mathcal C$ is a double line).

(i) We can take $x_0 = 0$, $x_1 = s$, $x_2 = t$, with $[s, t] \in \mathbb P^1(K)$, as parametric equations of $\mathcal C$. Therefore finding the points of $\mathcal C \cap \mathcal D$ is equivalent to determining the roots (in $\mathbb P^1(K)$) of the equation $G(0, s, t) = 0$. Since $x_0$ does not divide $G(x_0, x_1, x_2)$, the homogeneous polynomial $H(s, t) := G(0, s, t)$ of degree $n$ in $s$ and $t$ is not identically zero. Therefore by Proposition 10.5, if $p$ is the number of the roots of $H(s, t)$, then $1 \le p \le n$, so that $\#(\mathcal C \cap \mathcal D) \le n$. The above argument also works if $\operatorname{char}(K) = 2$. We can also say that $x_0$ divides $G(x_0, x_1, x_2)$ if and only if $\mathcal C \subseteq \mathcal D$.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2024. L. Bădescu, E. Carletti, Lectures on Geometry, La Matematica per il 3+2 158, https://doi.org/10.1007/978-3-031-51414-2_11

(ii) Since $\operatorname{char}(K) \ne 2$ we can write $F = x_0^2 + x_1^2 + x_2^2 = x_1^2 - (x_0 + ix_2)(ix_2 - x_0)$, with $i \in K$ such that $i^2 = -1$. By means of the change of coordinates $y_0 = x_0 + ix_2$, $y_1 = x_1$ and $y_2 = ix_2 - x_0$ we can suppose that the equation of $\mathcal C$ is
$$x_1^2 - x_0x_2 = 0. \tag{11.2}$$

It is easy to see that
$$x_0 = s^2, \qquad x_1 = st, \qquad x_2 = t^2, \qquad \text{with } [s, t] \in \mathbb P^1(K) \tag{11.3}$$

are parametric equations of $\mathcal C$, so that finding the points of $\mathcal C \cap \mathcal D$ is equivalent to solving the equation of degree $2n$ in $s$ and $t$ ($[s, t] \in \mathbb P^1(K)$)
$$H(s, t) := G(s^2, st, t^2) = 0. \tag{11.4}$$

Claim: $H(s, t) \equiv 0$ if and only if $F$ divides $G$ in $K[x_0, x_1, x_2]$.

To prove the claim, consider $F = F(x_0, x_1, x_2) = x_1^2 - x_0x_2$ and $G = G(x_0, x_1, x_2)$ as polynomials in $x_1$ with coefficients in $K[x_0, x_2]$. In the euclidean domain $K[x_0, x_2][x_1]$ we have (since $F$ is monic with respect to $x_1$)
$$G(x_0, x_1, x_2) = Q(x_0, x_1, x_2)F(x_0, x_1, x_2) + R(x_0, x_1, x_2),$$

where $Q = Q(x_0, x_1, x_2) \in K[x_0, x_1, x_2]$ and $R(x_0, x_1, x_2) = a(x_0, x_2)x_1 + b(x_0, x_2)$, with $a(x_0, x_2), b(x_0, x_2) \in K[x_0, x_2]$. By (11.3) we get
$$0 \equiv H(s, t) = G(s^2, st, t^2) = Q(s^2, st, t^2) \cdot 0 + a(s^2, t^2)st + b(s^2, t^2),$$
i.e. $a(s^2, t^2)st + b(s^2, t^2) \equiv 0$. Since $a(s^2, t^2)st$ contains only powers of odd degree in $s$ and $t$, while $b(s^2, t^2)$ contains only powers of even degree in $s$ and $t$, we have $a(s, t) \equiv 0$ and $b(s, t) \equiv 0$, which proves the claim.

By assumption $F$ does not divide $G$, hence $H(s, t)$ is not identically zero. Since we know that $V_+(H(s, t))$ is in bijective correspondence with $\mathcal C \cap \mathcal D$ and $0 < \#(V_+(H(s, t))) \le 2n$ by Proposition 10.5, we get $0 < \#(\mathcal C \cap \mathcal D) \le 2n$. The assumption $\operatorname{char}(K) \ne 2$ is essential, since the inverse of the projectivity $y_0 = x_0 + ix_2$, $y_1 = x_1$, $y_2 = ix_2 - x_0$ is the projectivity $y_0 = \frac12 x_0 - \frac12 x_2$, $y_1 = x_1$, $y_2 = \frac{1}{2i}x_0 + \frac{1}{2i}x_2$.

(iii) In this case the degenerate conic is the union of the lines $L_1 := x_0 + ix_1 = 0$ and $L_2 := x_0 - ix_1 = 0$. By (i), $\#(V_+(L_1) \cap \mathcal D) \le n$ and $\#(V_+(L_2) \cap \mathcal D) \le n$, so that $\#(\mathcal C \cap \mathcal D) \le 2n$.
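The substitution trick of case (ii) is easy to run symbolically: for the conic $\mathcal C : x_1^2 = x_0x_2$ with parametrization $[s^2, st, t^2]$ and a curve $\mathcal D$ of degree $n$, the points of $\mathcal C \cap \mathcal D$ correspond to the roots of $H(s,t) = G(s^2, st, t^2)$, of degree $2n$. A sketch with an ad hoc cubic:

```python
# Intersecting the conic x1^2 = x0 x2 with a sample cubic via (11.3)-(11.4).
import sympy as sp

s, t, x0, x1, x2 = sp.symbols('s t x0 x1 x2')
G = x0**3 + x1**3 + x2**3 - 3*x0*x1*x2          # a sample cubic curve D (n = 3)
H = sp.expand(G.subs({x0: s**2, x1: s*t, x2: t**2}))

print(sp.Poly(H, s, t).total_degree())          # 6 = 2n, as in (11.4)
assert H.subs({s: 0, t: 1}) != 0                # no intersection point at [0, 1]
print(len(sp.roots(H.subs(s, 1), t)))           # 3 distinct points, and 0 < 3 <= 6
```

Here $H = (s^3 - t^3)^2$, so $\mathcal C \cap \mathcal D$ consists of 3 distinct points, each "counted twice" — a first glimpse of the multiplicities that the general Bézout theorem keeps track of.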


(iv) In this case $\mathcal C = V_+(x_0^2) = V_+(x_0)$, i.e. $\mathcal C \cap \mathcal D = V_+(x_0) \cap \mathcal D$, hence by (i) $\#(V_+(x_0) \cap \mathcal D) \le n < 2n$.

11.1.1 Two Applications of Bézout's Theorem

Proposition 11.2 Let $P_1, \dots, P_5$ be five points of $\mathbb P^2(K)$. Then:
(i) There exists at least one conic $\mathcal C$ passing through $P_1, \dots, P_5$.
(ii) A conic $\mathcal C$ passing through $P_1, \dots, P_5$ is non-degenerate if and only if no three of $P_1, \dots, P_5$ are collinear. Moreover such a non-degenerate conic is unique.
(iii) A degenerate conic $\mathcal C$ passing through $P_1, \dots, P_5$ is unique if and only if no four of $P_1, \dots, P_5$ are collinear.
(iv) There exists at most one conic $\mathcal C$ passing through $P_1, \dots, P_5$ if no four of $P_1, \dots, P_5$ are collinear.

Proof
(i) It follows from Proposition 10.45.
(ii) Since a degenerate conic of $\mathbb P^2(K)$ is a pair of distinct lines or a double line, a degenerate conic through $P_1, \dots, P_5$ contains at least three collinear points among them; hence if no three of $P_1, \dots, P_5$ are collinear, every conic through $P_1, \dots, P_5$ is non-degenerate. Suppose now that $\mathcal D = \tilde G$ is another conic passing through $P_1, \dots, P_5$. We have just proved that $\mathcal D$ is non-degenerate. By Bézout's theorem, $F \mid G$ and $G \mid F$, so that $\tilde G = \tilde F$, i.e. $\mathcal C = \mathcal D$.
(iii) Let $\mathcal C = \tilde F$ and $\mathcal D = \tilde G$ be two conics passing through $P_1, \dots, P_5$; then by (ii) both conics are degenerate, say
$$F = L_1 \cdot L_2, \qquad G = L_1' \cdot L_2',$$
with $L_1, L_2, L_1', L_2'$ homogeneous polynomials of degree 1 in $x_0, x_1, x_2$. Since $\#(\mathcal C \cap \mathcal D) \ge 5$, by Theorem 11.1 $F$ and $G$ have a common factor of degree 1, e.g. $F = L_1 \cdot L_2$ and $G = L_1 \cdot L_2'$. Therefore $\mathcal C \ne \mathcal D$ if and only if the lines $r : L_2 = 0$ and $s : L_2' = 0$ are distinct, if and only if $r \cap s$ is a point, if and only if the common line $L_1 = 0$ passes through at least four points among $P_1, \dots, P_5$.
(iv) It follows from (ii) and (iii). ⨆ ⨅

Corollary 11.3 Through four non-collinear points of $\mathbb P^2(K)$ there passes a unique pencil of conics.

Definition 11.4 If $m \ge 3$, an m-gon $\mathcal P_m$ is a set of $m$ distinct points cyclically ordered, i.e. $P_1, P_2, \dots, P_{m+1} = P_1$, called vertices of $\mathcal P_m$, and $m$ lines $r_i = P_iP_{i+1}$, $i = 1, \dots, m$, called sides of $\mathcal P_m$, in such a way that no three vertices


are collinear and no three sides are concurrent. Reversing the ordering of the vertices does not change the m-gon. Obviously an m-gon is determined by its cyclically ordered vertices, and also by its cyclically ordered sides. The 3-gons are just triangles, a 4-gon is a quadrilateral, and for $m = 5, 6, \dots$, m-gons are called pentagons, hexagons, etc. If $m$ is even, by definition,
$$(P_1, P_{(m/2)+1}),\ (P_2, P_{(m/2)+2}),\ \dots,\ (P_{m/2}, P_m)$$
are the pairs of opposite vertices and
$$(r_1, r_{(m/2)+1}),\ (r_2, r_{(m/2)+2}),\ \dots,\ (r_{m/2}, r_m)$$

are the pairs of opposite sides of $\mathcal P_m$. An m-gon $\mathcal P_m$ is said to be inscribed in a non-degenerate conic $\mathcal C$ if and only if all its vertices belong to $\mathcal C$, in which case it is equivalently said that $\mathcal C$ is circumscribed to $\mathcal P_m$. If all sides of $\mathcal P_m$ are tangent to a non-degenerate conic $\mathcal C$, then it is said that $\mathcal P_m$ is circumscribed to $\mathcal C$, and also that $\mathcal C$ is inscribed in $\mathcal P_m$. In particular, if $P_1, P_2, P_3, P_4, P_5, P_6$ are the vertices of a hexagon, then the three pairs of vertices $(P_1, P_4)$, $(P_2, P_5)$, $(P_3, P_6)$ are the opposite vertices and
$$(P_1P_2, P_4P_5), \qquad (P_2P_3, P_5P_6), \qquad (P_3P_4, P_6P_1)$$
are the opposite sides. The three lines $P_1P_4$, $P_2P_5$ and $P_3P_6$ joining opposite vertices are called the diagonals of the hexagon.

We now use Bézout's theorem to prove a classical (projective) Pascal's theorem:

Theorem 11.5 (Pascal) Let $\mathcal C$ be a non-degenerate conic of $\mathbb P^2(K)$, and $\mathcal P_6 = (P_1, P_2, \dots, P_6)$ be a hexagon inscribed in $\mathcal C$. Then the points $A := P_1P_2 \cap P_4P_5$, $B := P_3P_4 \cap P_1P_6$ and $C := P_2P_3 \cap P_5P_6$ are collinear (see Fig. 11.1). In other words, the intersection points of the pairs of opposite sides of a hexagon inscribed in a non-degenerate conic are collinear (Fig. 11.2).

Fig. 11.1 Degenerate conics

Proof (Plücker) Let $L_{ij} = 0$ be the equation of the line $P_iP_j$, $i, j \in \{1, 2, \dots, 6\}$, $i \ne j$. Consider the polynomials
$$F_t := L_{12}L_{34}L_{56} + tL_{16}L_{23}L_{45}, \qquad t \in K.$$


Fig. 11.2 Projective Pascal’s theorem

Fig. 11.3 Brianchon’s theorem

For every $t \in K$, $F_t$ is a polynomial of degree 3; indeed $F_t$ is not identically 0 for any $t \in K$: if $F_t$ were identically 0 for some $t \in K$, we should have $L_{12}L_{34}L_{56} = -tL_{16}L_{23}L_{45}$, contradicting the factoriality of $K[x_0, x_1, x_2]$. First observe that $P_1, \dots, P_6, A, B, C \in V_+(F_t)$ and that $A$, $B$, $C$ do not lie on $\mathcal C$ (if, e.g., $A \in \mathcal C$, then $P_1P_2$ would meet $\mathcal C$ at three points, contradicting Theorem 11.1). Fix a point $P \in \mathcal C$, $P \ne P_j$, $j = 1, \dots, 6$. We can find $t_P \in K$ such that $F_{t_P}(P) = 0$. Consider the cubic $\mathcal D = \tilde F_{t_P}$. Then
$$\mathcal D \cap \mathcal C \supset \{P_1, P_2, \dots, P_6, P\}.$$

By Theorem 11.1, since $\#(\mathcal D \cap \mathcal C) \ge 7 > 3 \cdot 2$, the curves $\mathcal C = \tilde F$ and $\mathcal D$ have a common component, so we have $F_{t_P} = F \cdot L$, where $L$ is a homogeneous polynomial of degree 1. Since $A, B, C \in V_+(F_{t_P}) \setminus V_+(F)$, we must have $L(A) = L(B) = L(C) = 0$, i.e. $A$, $B$ and $C$ belong to the line of equation $L = 0$. ⨆ ⨅

Theorem 11.6 (Brianchon) In a projective plane $\mathbb P^2(K)$, consider a hexagon whose six sides are tangent to a non-degenerate conic $\mathcal C$. Then the three diagonals of the hexagon intersect at the same point (see Fig. 11.3).


Proof Brianchon's theorem is the dual statement of Pascal's theorem. By Proposition 10.71 and Theorem 10.53, i), we may suppose that $\mathcal C$ is defined by the equation $x_0^2 + x_1^2 + x_2^2 = 0$. In this case the polarity $\pi_{\mathcal C} : P^2(K) \to P^2(K)^*$ is given by
$$\pi_{\mathcal C}([a_0, a_1, a_2]) = V_+(a_0x_0 + a_1x_1 + a_2x_2), \qquad \forall\,[a_0, a_1, a_2] \in P^2(K).$$
By Theorem 10.74, $\pi_{\mathcal C}$ is a projective isomorphism that transforms the circumscribed Brianchon hexagon of $P^2(K)$ into an inscribed Pascal hexagon of $P^2(K)^*$ and conversely. ∎
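Pascal's theorem lends itself to a quick numerical sanity check. The following sketch (not part of the text's argument; all helper names are ours) places six points on the conic $x_0x_2 = x_1^2$, parametrized as $[1, t, t^2]$, computes the intersections of the three pairs of opposite sides by cross products of homogeneous coordinates, and verifies that they are collinear:

```python
# Numeric check of Pascal's theorem; points of x0*x2 = x1^2 are [1, t, t^2].

def cross(u, v):
    """Cross product: the line through two points, or the meet of two lines."""
    return [u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0]]

def det3(a, b, c):
    """3x3 determinant: zero iff the three points are collinear."""
    return (a[0]*(b[1]*c[2] - b[2]*c[1])
          - a[1]*(b[0]*c[2] - b[2]*c[0])
          + a[2]*(b[0]*c[1] - b[1]*c[0]))

# Six distinct points on the conic: the vertices of the inscribed hexagon.
P1, P2, P3, P4, P5, P6 = ([1, t, t*t] for t in (0, 1, 2, 3, 4, 5))

# Intersections of the three pairs of opposite sides.
A = cross(cross(P1, P2), cross(P4, P5))
B = cross(cross(P3, P4), cross(P1, P6))
C = cross(cross(P2, P3), cross(P5, P6))

print(det3(A, B, C))  # 0: A, B, C are collinear
```

Because the arithmetic is exact over the integers, the vanishing determinant genuinely confirms the statement for this configuration.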

11.2 The General Bézout's Theorem and Further Applications

Since Bézout's theorem is a fundamental result in the theory of projective curves of $P^2(K)$ (where $K$ is an algebraically closed field of arbitrary characteristic), we wish to prove its general form, which requires more powerful tools. We shall also discuss some important applications.

11.3 The Resultant of Two Polynomials

Let $A$ be a factorial domain, and let $f, g \in A[X]$ be two non-constant polynomials:
$$f = a_0X^n + a_1X^{n-1} + \cdots + a_n \quad\text{and}\quad g = b_0X^m + b_1X^{m-1} + \cdots + b_m, \tag{11.5}$$
with $n, m \geq 1$ and $a_0b_0 \neq 0$.

Proposition 11.7 The polynomials $f$ and $g$ have a non-constant common divisor in $A[X]$ if and only if there exist two non-zero polynomials $f_1, g_1 \in A[X]$ such that
$$fg_1 + gf_1 = 0, \quad\text{with } \deg(f_1) \leq n-1 \text{ and } \deg(g_1) \leq m-1. \tag{11.6}$$

Proof Assume that there exists a polynomial $h \in A[X]$ of degree $\geq 1$ such that $f = hf_1$ and $g = -hg_1$, with $f_1, g_1 \in A[X]$. Then $\deg(f_1) \leq n-1$ and $\deg(g_1) \leq m-1$, and clearly $fg_1 + gf_1 = 0$. Conversely, assume that (11.6) holds. Since by Gauss' theorem the ring $A[X]$ is factorial, we can write $f = aF_1^{s_1}\cdots F_p^{s_p}$, with $a \in A$, $F_1, \ldots, F_p$ pairwise coprime irreducible polynomials in $A[X]$ of degrees $\geq 1$, and $s_i \geq 1$, $i = 1, \ldots, p$. Since $A[X]$ is factorial, it follows that $F_i$ is prime in $A[X]$, $i = 1, \ldots, p$. Then we claim that there exists an $i \in \{1, \ldots, p\}$ such that $F_i$ divides $g$. Indeed, otherwise from


the equality $fg_1 = -gf_1$ we deduce that $F_i^{s_i}$ divides $f_1$, $i = 1, \ldots, p$, hence $F = F_1^{s_1}\cdots F_p^{s_p}$ divides $f_1$, which is absurd because $\deg(f_1) < n = \deg(F)$. ∎

Now write
$$f_1 = p_0X^{n-1} + p_1X^{n-2} + \cdots + p_{n-1} \quad\text{and}\quad g_1 = q_0X^{m-1} + q_1X^{m-2} + \cdots + q_{m-1}. \tag{11.7}$$
Then (11.6), (11.5) and (11.7) imply
$$\begin{cases} a_0q_0 + b_0p_0 = 0\\ a_1q_0 + a_0q_1 + b_1p_0 + b_0p_1 = 0\\ \qquad\cdots\cdots\cdots\cdots\cdots\\ a_nq_{m-1} + b_mp_{n-1} = 0. \end{cases} \tag{11.8}$$
Then (11.8) can be considered as a system of $n+m$ homogeneous equations in the $n+m$ indeterminates $q_0, \ldots, q_{m-1}, p_0, \ldots, p_{n-1}$ having the $(m+n) \times (m+n)$ matrix

$$\Omega = \begin{pmatrix}
a_0 & 0 & \cdots & 0 & b_0 & 0 & \cdots & 0\\
a_1 & a_0 & \cdots & 0 & b_1 & b_0 & \cdots & 0\\
a_2 & a_1 & \cdots & 0 & b_2 & b_1 & \cdots & 0\\
\vdots & \vdots & & \vdots & \vdots & \vdots & & \vdots\\
0 & 0 & \cdots & a_n & 0 & 0 & \cdots & b_m
\end{pmatrix},$$
where the first $m$ columns contain the coefficients $a_0, \ldots, a_n$ of $f$ (each column shifted down by one place with respect to the previous one) and the last $n$ columns contain the coefficients $b_0, \ldots, b_m$ of $g$. Now we define
$$R(f,g) := \det(\Omega^t) = \begin{vmatrix}
a_0 & a_1 & a_2 & \cdots & a_n & & & \\
0 & a_0 & a_1 & \cdots & & a_n & & \\
\vdots & & \ddots & & & & \ddots & \\
0 & \cdots & 0 & a_0 & a_1 & \cdots & & a_n\\
b_0 & b_1 & b_2 & \cdots & b_m & & & \\
0 & b_0 & b_1 & \cdots & & b_m & & \\
\vdots & & \ddots & & & & \ddots & \\
0 & \cdots & 0 & b_0 & b_1 & \cdots & & b_m
\end{vmatrix},$$


where $\Omega^t$ is the transpose of $\Omega$, the blank entries are zero, the entries $a_0, \ldots, a_n$ occur only in the first $m$ rows, and the entries $b_0, \ldots, b_m$ occur in the last $n$ rows. The entries on the principal diagonal are $a_0$ (repeated $m$ times) and $b_m$ (repeated $n$ times). For example, for
$$f(x) = a_0x + a_1, \qquad g(x) = b_0x + b_1$$
we have
$$R(f,g) = \begin{vmatrix} a_0 & a_1\\ b_0 & b_1 \end{vmatrix} = a_0b_1 - a_1b_0,$$
while for
$$f(x) = a_0x^3 + a_1x^2 + a_2x + a_3, \qquad g(x) = b_0x^2 + b_1x + b_2$$
we have
$$R(f,g) = \begin{vmatrix} a_0 & a_1 & a_2 & a_3 & 0\\ 0 & a_0 & a_1 & a_2 & a_3\\ b_0 & b_1 & b_2 & 0 & 0\\ 0 & b_0 & b_1 & b_2 & 0\\ 0 & 0 & b_0 & b_1 & b_2 \end{vmatrix}.$$
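The shape of $\Omega^t$ is easy to reproduce programmatically. The sketch below (helper names are ours; coefficient lists are written from the leading coefficient, as in (11.5)) builds the matrix row by row and evaluates the resultant by Laplace expansion, reproducing the $2\times 2$ and $5\times 5$ patterns of the examples above:

```python
# A sketch of the Euler–Sylvester resultant; coefficient lists are in
# descending order of powers, e.g. [a0, a1, a2] stands for a0*X^2 + a1*X + a2.

def sylvester(f, g):
    """The (n+m) x (n+m) matrix Omega^t: deg(g) shifted rows of f's
    coefficients followed by deg(f) shifted rows of g's coefficients."""
    n, m = len(f) - 1, len(g) - 1
    size = n + m
    rows = [[0]*i + f + [0]*(size - n - 1 - i) for i in range(m)]
    rows += [[0]*k + g + [0]*(size - m - 1 - k) for k in range(n)]
    return rows

def det(M):
    """Determinant by Laplace expansion along the first row (fine for small M)."""
    if len(M) == 1:
        return M[0][0]
    total = 0
    for j, entry in enumerate(M[0]):
        if entry:
            minor = [row[:j] + row[j+1:] for row in M[1:]]
            total += (-1)**j * entry * det(minor)
    return total

def resultant(f, g):
    return det(sylvester(f, g))

# The 5x5 pattern of the text: deg f = 3, deg g = 2.
print(sylvester([1, 2, 3, 4], [5, 6, 7]))
# The 2x2 example: R(f, g) = a0*b1 - a1*b0.
print(resultant([2, 3], [5, 7]))   # 2*7 - 3*5 = -1
```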

Definition 11.8 Under the above notation, the determinant $R(f,g) \in A$ is called the Euler–Sylvester resultant, or simply the resultant, of the polynomials $f, g \in A[X]$.

The resultant has the following symmetry property:
$$R(g,f) = (-1)^{mn}R(f,g). \tag{11.9}$$
Furthermore, if $\lambda, \mu \in A$, $\lambda\mu \neq 0$, we have
$$R(\lambda f, \mu g) = \lambda^m\mu^nR(f,g). \tag{11.10}$$

Proposition 11.7 and the above discussion yield (see Theorem 1.30):

Corollary 11.9 Under the above assumptions, the polynomials $f, g \in A[X]$ have a non-constant common factor if and only if $R(f,g) = 0$. In particular, if $A$ is a field, $f$ and $g$ have a common root in some field $K \supset A$ if and only if $R(f,g) = 0$.

Proposition 11.10 Under the above assumptions there exist two polynomials $P, Q \in A[X]$ such that
$$R(f,g) = Pf + Qg. \tag{11.11}$$
Moreover, one can choose $P$ and $Q$ such that $\deg(P) \leq m-1$ and $\deg(Q) \leq n-1$.

Proof We need the following simple fact: fix a positive integer $t$ and let
$$P_i = \sum_{j=0}^{t} a_{ij}X^j \in A[X], \qquad i = 1, \ldots, t+1$$


be $t+1$ polynomials of degree $\leq t$. Then
$$\begin{vmatrix} a_{10} & a_{11} & \cdots & a_{1t}\\ a_{20} & a_{21} & \cdots & a_{2t}\\ \vdots & \vdots & & \vdots\\ a_{t+1,0} & a_{t+1,1} & \cdots & a_{t+1,t} \end{vmatrix} = \begin{vmatrix} P_1 & a_{11} & \cdots & a_{1t}\\ P_2 & a_{21} & \cdots & a_{2t}\\ \vdots & \vdots & & \vdots\\ P_{t+1} & a_{t+1,1} & \cdots & a_{t+1,t} \end{vmatrix} = \sum_{i=1}^{t+1} h_iP_i,$$
with $h_i \in A$ (the first equality is obtained by adding to the first column the $j$-th coefficient column multiplied by $X^j$, $j = 1, \ldots, t$; the $h_i$ are the cofactors of the first column). Taking in particular $t = n+m-1$ and
$$P_i := X^{i-1}f, \quad i = 1, 2, \ldots, m, \qquad P_{m+j} := X^{j-1}g, \quad j = 1, \ldots, n,$$
we get
$$R(f,g) = h_1f + \cdots + h_mX^{m-1}f + h_{m+1}g + \cdots + h_{n+m}X^{n-1}g.$$
Putting $P := h_1 + h_2X + \cdots + h_mX^{m-1}$ and $Q := h_{m+1} + h_{m+2}X + \cdots + h_{n+m}X^{n-1}$ we get $R(f,g) = Pf + Qg$, with $P, Q \in A[X]$. ∎

Definition 11.11 The determinant $R(f,g)$ is a sum of terms of the following type:
$$\pm\, a_{i_1}\cdots a_{i_m}b_{j_1}\cdots b_{j_n}.$$

We shall say that this term has weight $i_1 + \cdots + i_m + j_1 + \cdots + j_n$. In particular the weight of each $a_i$ is $i$, and the weight of each $b_j$ is $j$.

Proposition 11.12 Every term in the development of the determinant $R(f,g)$ has weight $mn$.

Proof Let us denote by $(i,j)$ the entry in the $i$-th row and $j$-th column of the $(n+m) \times (n+m)$ matrix
$$\Omega^t = \begin{pmatrix}
a_0 & a_1 & a_2 & \cdots & a_n & & & \\
0 & a_0 & a_1 & \cdots & & a_n & & \\
\vdots & & \ddots & & & & \ddots & \\
0 & \cdots & 0 & a_0 & a_1 & \cdots & & a_n\\
b_0 & b_1 & b_2 & \cdots & b_m & & & \\
0 & b_0 & b_1 & \cdots & & b_m & & \\
\vdots & & \ddots & & & & \ddots & \\
0 & \cdots & 0 & b_0 & b_1 & \cdots & & b_m
\end{pmatrix}.$$

If we adopt the convention that $a_i = 0$ if $i < 0$ or $i > n$ (respectively, $b_j = 0$ if $j < 0$ or $j > m$), then clearly
$$(i,j) = \begin{cases} a_{j-i} & \text{if } i \leq m,\\ b_{j-i+m} & \text{if } i \geq m+1. \end{cases} \tag{11.12}$$
Now a general term in the development of the determinant $R(f,g)$ is of the form
$$\pm\,(1,j_1)(2,j_2)\cdots(n+m,j_{n+m}), \tag{11.13}$$
where
$$\sigma = \begin{pmatrix} 1 & 2 & \cdots & n+m\\ j_1 & j_2 & \cdots & j_{n+m} \end{pmatrix}$$
is an arbitrary permutation of $\{1, 2, \ldots, n+m\}$. Now, using (11.12), the weight of (11.13) is given by
$$[(j_1 - 1) + \cdots + (j_m - m)] + \underbrace{[(j_{m+1} - (m+1) + m) + \cdots + (j_{n+m} - (n+m) + m)]}_{n \text{ summands}}$$
$$= \sum_{i=1}^{n+m} j_i - \sum_{i=1}^{n+m} i + mn = mn,$$
because $\sum_{i=1}^{n+m} i = \sum_{i=1}^{n+m} j_i$ (since $\sigma$ is a permutation). ∎

Corollary 11.13 Let A be a factorial domain. If F (x0 , . . . , xn ) = a0 x0m + a1 (x1 , . . . , xn )x0m−1 + · · · + am (x1 , . . . , xn ),

.

p

p−1

G(x0 , . . . , xn ) = b0 x0 + b1 (x1 , . . . , xn )x0

+ · · · + bp (x1 , . . . , xn )

are two homogeneous polynomials of degrees .m ≥ 1, .p ≥ 1, respectively, where .ai (x1 , . . . , xn ) and .bj (x1 , . . . , xn ) are zero or homogeneous of degrees i, j respectively, such that .a0 b0 /= 0, their resultant .R(F, G) is either 0 or an homogeneous polynomial of .A[x1 , . . . , xn ] of degree mp. Furthermore, .R(F, G)(x1 , . . . , xn ) is identically zero if and only if F and G have a nonconstant homogeneous common factor. Proof The requirement .a0 b0 /= 0 implies that the degrees of F and G as polynomials in .B[x0 ], where .B = A[x1 , . . . , xn ], are m and p respectively. We have only to observe that the weight of every term ./= 0 in the development of

11.3 The Resultant of Two Polynomials

395

the determinant .R(F, G) is its degree as a polynomial of .A[x1 , . . . , xn ] and apply Proposition 11.12 and Corollary 11.9. ⨆ ⨅ Proposition 11.14 Let A be a factorial domain. Given two polynomials f (X) =

m 

.

(X − λi ),

g(X) =

p 

(X − μi )

j =1

i=1

where .λi and .μj belong to some field .K ⊃ A, then R(f, g) =

p m  

.

(λi − μj ).

(11.14)

i=1 j =1

Proof Let us consider the polynomials f and g in the indeterminates X, λ1 , . . . , λm , μ1 , . . . .μp

.

f (X, λ1 , . . . , λm ) =

m 

.

(X − λi ),

i=1

g(X, μ1 , . . . .μp ) =

p 

(X − μi ) ∈ A[X, λ1 , . . . , λm , μ1 , . . . .μp ],

j =1

which are homogeneous of degrees m and p respectively. Then .R(f, g) is non-zero and homogeneous of degree mp in the indeterminates .λ1 , . . . , λm , μ1 , . . . .μp . If we replace .λi with .μj in .f (X, λ1 , . . . .λp ), we get a polynomial .f1 , which has the factor .X − μj common with .g(X, μ1 , . . . , μm ). Therefore .R(f1 , g) = 0 and, since .R(f1 , g) is obtained from .R(f, g) by changing .λi with .μj , i.e. R(f, g1 )(λ1 , . . . , λi−1 , λi+1 , . . . , λm , μ1 , . . . , . . . , μp )

.

= R(f, g)(λ1 , . . . ,

μj , . . . , , λm , μ1 , . . . , μp ),  i-th place

then .R(f, g)(λ1 , . . . ,

μj , . . . , , λm , μ1 , . . . , μp ) = 0. It means that the poly  i-th place

nomial R(f, g)(λi ) ∈ A[λ1 , . . . , λi−1 , λi+1 , . . . , λm , μ1 , . . . , . . . , μp ][λi ]

.

11 Bézout’s Theorem for Curves of .P2 (K)

396

vanishes for .λi = μj and .λi − μj divides .R(f, g) by Theorem 1.10. That occurs for every .1 ≤ i ≤ m, .1 ≤ j ≤ p, therefore, for degree reasons, R(f, g)(λ1 , . . . , λm , μ1 , . . . .μp ) = λ

p m  

.

(λi − μj ),

λ ∈ A, λ /= 0.

i=1 j =1

(11.15) To calculate .λ we evaluate the polynomial .R(f, g)(λ1 , . . . , λm , μ1 , . . . , μp ) at a point μj /= 0, j = 1, . . . , p.

(0, . . . , 0, μ1 , . . . , μp ),

.

Then  1 0  0 1  . .  .. ..   0 0 .R(f, g)(0, . . . , 0, μ1 , . . . , μp ) =  1 b1  0 1   .. .. . .  0 0

0 0 .. .

... ... .. .

... ... .. .

... ... .. .

... ... .. .

... b2 b1 .. .

1 ... ... .. .

0 ... bp 0 . . . bp .. .. . .

... ... ... .. .

0 ... ... ... ...

 0  0  ..  .  0 . 0  0   0  bp 

Therefore the matrix is lower triangular and its determinant is the product bpm = ((−1)p μ1 · · · μp )m = (−1)pm (μ1 · · · μp )m .

.

On the right-hand side of (11.15) the coefficient of .(μ1 · · · μp )m is .(−1)pm λ, so that we conclude that .λ = 1. ⨆ ⨅ Remark 11.15 Let

f (X) = a0

m 

.

(X − λi ),

g(X) = b0

p 

(X − μi ).

From

j =1

i=1

(11.10) we get p

R(f, g) = a0 b0m

.

p m  

p

(λi − μj ) = a0

i=1 j =1

m 

g(λi ) = (−1)mp b0m

i=1

p 

f (μj ).

j =1

(11.16) Corollary 11.16 Let .f (X), .g(X) and .h(X) ∈ A[X]. Then we have R(f, gh) = R(f, g)R(f, h),

.

R(fg, h) = R(f, h)R(g, h).

(11.17)

11.3 The Resultant of Two Polynomials

397

Proof If f (X) = a0

m 

.

(X − λi ),

g(X) = b0

p 

(X − μi ),

h(X) = c0

j =1

i=1

q 

(X − γk )

k=1

then R(f, g) = (−1)pm b0m

p 

.

R(f, h) = (−1)mq c0m

f (μj ),

j =1

q 

f (γk ).

k=1

Since g(X)h(X) = b0 c0

p 

.

(X − μi )

j =1

q 

(X − γk )

k=1

we get R(f, g)R(f, h) = (−1)mp+mq b0m c0m

p 

.

f (μj )

j =1

= (−1)m(p+q) (b0 c0 )m

q 

f (γk )

k=1

p 

f (μj )

j =1

q 

f (γk ) = R(f, gh).

k=1

Finally, we have by (11.9) R(fg, h) = (−1)(m+p)q R(h, fg)

.

= (−1)mq R(h, f )(−1)pq R(h, g) = R(f, h)R(g, h). ⨆ ⨅ Corollary 11.17 We assume the notation of Remark 11.15. Let .f (X) g(X)q(X) + r(X). Then we have m−q

R(f, g) = (−1)p(m−q) b0

.

R(r, g),

where .b0 is the leading coefficient of .g(X) and .q = deg r(X).

=

(11.18)

11 Bézout’s Theorem for Curves of .P2 (K)

398

Proof Since .f (μj ) = r(μj ) for .j = 1, . . . , p we have R(f, g) = (−1)pm b0m

p 

.

m−q

f (μj ) = (−1)p(m−q) b0

j =1 m−q

= (−1)p(m−q) b0

q

(−1)pq b0

p 

r(μj )

j =1

R(r, g) ⨆ ⨅

by (11.16).

Corollary 11.18 We assume the notation of Remark 11.15. with .m ≤ p. Let .h(X) ∈ A[X] of degree .p − m, then R(f, g) = R(f, f h + g).

.

Proof Since .(f h + g)(λi ) = g(λi ) for .i = 1, . . . , m we have R(f, g) =

.

p a0

m  i=1

g(λi ) =

p a0

m 

(f h + g)(λi ) = R(f, f h + g).

⨆ ⨅

i=1

Now we are ready to prove Theorem 11.1. Proof Let .F (x0 , x1 , x2 ) and .G(x0 , x1 , x2 ) be the equations of .C and .D respectively, with F and G non-zero homogeneous polynomials of degrees n and m respectively. We claim that we can find a projective coordinate system .[x0 , x1 , x2 ] of .P2 such that the point .[1, 0, 0] does not belong to .C ∪ D. In fact, by Lemma 10.7, formula (10.2) and Exercise 8.3, we can choose a point .Q ∈ P2  (C ∪ D) and a projective linear automorphism .σ of .P2 such that .σ (Q) = [1, 0, 0]. The new projective coordinate system will be given by the projective linear automorphism .σ −1 and with respect to it the coordinates of the point Q will be .[1, 0, 0]. Notice that, as the equations .σ are linear and homogeneous, the homogeneous polynomials .σ · F and .σ · G (which are the equations of .C and .D in the new system of coordinates) are still of degrees n and m respectively. We notice that Bézout’s theorem holds for .C and .D if and only if it holds for their images .σ (C) and .σ (D). With this choice of the coordinate system the equations F and G are of the form F (x0 , x1 , x2 ) = a0 x0n + a1 (x1 , x2 )x0n−1 + · · · + an (x1 , x2 ),

.

G(x0 , x1 , x2 ) = b0 x0m + b1 (x1 , x2 )x0m−1 + · · · + bm (x1 , x2 ), where .ai (x1 , x2 ) ∈ K[x1 , x2 ] is a homogeneous polynomial of degree i, .i = 0, . . . , n, and .bj (x1 , x2 ) ∈ K[x1 , x2 ] is a homogeneous polynomial of degree j , .j = 0, . . . , m. The fact that .P0 ∈ / C ∪ D implies that .a0 b0 /= 0. Now, setting .A := K[x1 , x2 ] we know by Theorem 1.9 that A is factorial. Consider F and G as polynomials of .A[x0 ]. Then by Corollary 11.13 the resultant .R(F, G) = R(x1 , x2 ) is a non-zero homogeneous polynomial of degree mn of .K[x1 , x2 ] because F and

11.4 Intersection Multiplicity of Two Curves

399

G have no non-constant common factor. Let .[c1 , c2 ] ∈ P1 be a zero of .R(x1 , x2 ). Then the polynomials h(x0 ) := F (x0 , c1 , c2 ) = a0 x0n + a1 (c1 , c2 )x0n−1 + · · · + an (c1 , c2 ) ∈ K[x0 ],

.

k(x0 ) := G(x0 , c1 , c2 ) = b0 x0m + b1 (c1 , c2 )x0n−1 + · · · + bm (c1 , c2 ) ∈ K[x0 ] have by Corollary 11.9 a common root .c0 ∈ K (K is algebraically closed). Therefore the point .[c0 , c1 , c2 ] ∈ P2 belongs to .C ∩ D. Conversely, if a point 2 .[c0 , c1 , c2 ] ∈ P belongs to .C ∩ D then the polynomials .h(x0 ) and .k(x0 ) have the common root .c0 , hence by Corollary 11.9, .R(c1 , c2 ) = 0. Since for a fixed 1 .[c1 , c2 ] ∈ P such that .R(c1 , c2 ) = 0 there are only finitely many possibilities for .c0 such that .[c0 , c1 , c2 ] ∈ C ∩ D and since by Proposition 10.5 the equation 1 .R(x1 , x2 ) = 0 has at most mn zeros (and at least one) on .P , we infer that .C ∩ D is finite and nonempty. Now, consider the (finite) set .A of lines of .P2 joining any two different points of the intersection .C ∩ D (.A empty if .C ∩ D consists only of one point). We may arrange the above projective linear automorphism .σ in such a way that no line of .A passes through the point .P0 . Set .P = [c0 , c1 , c2 ] ∈ C ∩ D. Then the line .P0 P has the equation c2 x1 − c1 x2 = 0.

.

(11.19)

On the other hand, if for a given .[c1 , c2 ] ∈ V+ (R(x1 , x2 )) there are two distinct elements .c0 , .c0' ∈ K such that [c0 , c1 , c2 ], [c0' , c1 , c2 ] ∈ C ∩ D,

.

then these two points are on the line of equation (11.19), i.e. .P0 and these two points are collinear. In other words, if .P ∈ C ∩ D then the line .P0 P does not contain any point of .C ∩ D other than P . Consequently, there is a bijective correspondence between .C ∩ D and the set of all lines .P0 P , with .P ∈ C ∩ D. It follows that the set .C ∩ D is in a bijective correspondence with .V+ (R(x1 , x2 )), and in particular, .#(C∩D) = #V+ (R(x1 , x2 )). Finally, by Proposition 10.5, .#(V+ (R(x1 , x2 ))) ≤ nm. ⨆ ⨅

11.4 Intersection Multiplicity of Two Curves We are about to introduce the intersection multiplicity of two curves which permits us to reformulate Bézout’s theorem in its strongest form. We closely follow [19].

11 Bézout’s Theorem for Curves of .P2 (K)

400

 and .G  there exists a uniquely Theorem 11.19 Given two projective curves .F , G)  ∈ N ∪ {0, ∞} satisfying the following properties: determined .mP (F (i) (ii) (iii) (iv) (v) (vi)

, G)  = mP (G,  F ). mP (F    and .G.  .mP (F , G) = ∞ if and only if P lies on a common component of .F     .mP (F , G) = 0 if and only if .P ∈ / F ∩ G. , G)  = 1 if .F  and .G  are lines and P is their unique point of intersection. .mP (F , G)  = mP (F1 , G)  + mP (F2 , G).  If .F = F1 F2 then . mP (F   If .F and .G have degrees n and m respectively with .n ≤ m and H is a homogeneous polynomial of degree .m − n, then .

, G)  = mP (F , F mP (F H +G).

.

 and .G  do not have common components and we select a projective Moreover, if .F coordinate system such that the following conditions are satisfied:  ∪ G,  (a) .[1, 0, 0] ∈ /F  ∩ G,  (b) .[1, 0, 0] does not lie on any line connecting two distinct points of .F  or .G  at any point of .F  ∩ G,  (c) .[1, 0, 0] does not lie on the tangent line to .F  ∩ G,  we have then, setting .P = [a, b, c] ∈ F , G)  = m[b,c] R(F, G)(x1 , x2 ), mP (F

.

i.e. the largest integers k such that .(bx2 − cx1 )k divides the resultant R(F, G)(x1 , x2 ).

.

, G)  Proof First we prove uniqueness, i.e. the conditions (i)–(vi) determine .mP (F completely. We can assume that .P = [0, 0, 1] since these conditions are indepen and .G  are dent of the choice of coordinates. Furthermore we may assume that .F     irreducible by (i) and (v), that .mP (F , G) is finite by (ii) and that .mP (F , G) = k > 0 by (iii). By induction on k we assume that every intersection multiplicity .< k can be calculated using only the conditions (i)–(vi) (intersection multiplicity 0 is given by (iii)). We consider the polynomials .f (x0 ) := F (x0 , 0, 1) ∈ K[x0 ] and .g(x0 ) := G(x0 , 0, 1) ∈ K[x0 ] of degrees r and s respectively. By (i) we assume that .r ≤ s. • Case 1. If .r = 0, the polynomial .f (x0 ) is constant and hence .= 0 because .F (0, 0, 1) = 0. Since F is homogeneous the polynomial .F (x0 , 0, x2 ) is identically zero. Thus there exists a homogeneous polynomial .Q(x0 , x1 , x2 ) such that F (x0 , x1 , x2 ) = x1 Q(x0 , x1 , x2 ).

.

11.4 Intersection Multiplicity of Two Curves

401

We can also find two homogeneous polynomials .S(x0 , x1 , x2 ), .T (x0 , x2 ) with T (0, 1) /= 0 and some integer q such that

.

q

G(x0 , x1 , x2 ) = G(x0 , 0, x2 )+x1 S(x0 , x1 , x2 ) = x0 T (x0 , x2 )+x1 S(x0 , x1 , x2 ).

.

Since .G(0, 0, 1) = 0 the integer q is .> 0. Hence the point .P = [0, 0, 1] does not lie on the curve .T of equation .T (x0 , x2 ) = 0 and by (iii) and (iv) we have mP ( x1 , T) = 0,

.

mP ( x0 , x1 ) = 1,

(11.20)

where .x0 , .x1 are the lines .x0 = 0 and .x1 = 0 respectively. From (v) we obtain , G)  = mP (Q,  G)  + mP (  m P (F x1 , G)

.

(11.21)

and from (vi) we have q q q  = mP ( mP ( x1 , G) x1 , x0  T +x1 S) = mP ( x1 , x1  S +x0 T ) = mP ( x1 , x 0 T ). (11.22)

.

Iterating (v) q times we get by (11.20) q q mP ( x1 , x x1 , x0 ) + mP ( x1 , T) = qmP ( x0 , x1 ) + mP ( x1 , T) = q. 0 T ) = mP ( (11.23)

.

Hence from (11.21), (11.22) and (11.23) we have , G)  = mP (Q,  G)  + q. m P (F

.

(11.24)

 G)  < m P (F , G)  = k and Since .q > 0 from (11.24) we obtain that .mP (Q,   our inductive hypothesis tells us that .mP (Q, G) can be calculated by using only , G).  conditions (i)–(vi) so that the same happens to .mP (F • Case 2. If .r > 0, multiplying by suitable constants we make the polynomials .f (x0 ) and .g(x0 ) monic. The polynomial L(x0 , x1 , x2 ) = x2n+s−r G(x0 , x1 , x2 ) − x0s−r x2m F (x0 , x1 , x2 )

.

is homogeneous and not identically zero because .F (x0 , x1 , x2 ) and .G(x0 , x1 , x2 ) are irreducible and distinct. Moreover l(x0 ) := L(x0 , 0, 1) = g(x0 ) − x0s−r f (x0 ) = x0s + · · · − x0s−r (x0r + · · · )

.

11 Bézout’s Theorem for Curves of .P2 (K)

402

has degree .t < s. By (vi) and (v) we get n+s−r s−r x m F ) = m (F G) , L)  = m P (F , x n+s−r G−x m P (F P , x2 2 0 2

.

n+s−r , x , G)  = m P (F , G),  ) + m P (F = m P (F 2 n+s−r , x since .P ∈ / x2 and hence .mP (F ) = 0. Now, we can replace F and G by 2 F and L (or by L and F if .t < r) and continue until we reach the situation .r = 0. ⨆ ⨅

Remark 11.20 We need the assumption (b) in the statement of Theorem 11.19 in ∩ G  and the zeros order to have a bijective correspondence between the points of .F of .R(F, G)(x1 , x2 ), otherwise a single root of the resultant might give more points of intersection (see the proof of Theorem 11.1 above). The condition (c) allows us to prove the following result.  ∩ G,  we , .G  and a point .P ∈ F Proposition 11.21 Given two projective curves .F     have .mP (F , G) = 1 if and only if P is not a singular point of .F and .G and the  and .G  are distinct. tangent lines to .F One can find a proof in [19], pp. 63–66 if .K = C. We now prove the existence. We give the following:  and .G,  their intersection multiDefinition 11.22 Given two projective curves .F , G)  at a point .P ∈ P2 (K) is defined as follows: plicity .mP (F (α) .(β) .(γ ) .

, G)  = ∞ if P belongs to a common component of .F  and .G.  m P (F     .mP (F , G) = 0 if .P ∈ / F ∩ G. ∩G  but it does not lie in a common component of .F  and If .P ∈ F  .G, then removing any of their common components (obtaining coprime polynomials .F0 and .G0 ) and choosing a projective coordinate system such that the conditions (a)–(c) are satisfied and .P = [a, b, c] gives .

, G)  = m[b,c] (R(F0 , G0 ))), m P (F

.

(11.25)

i.e. the greatest integer k such that .(bx2 − cx1 )k divides .R(F0 , G0 ). We have to show that the conditions (i)–(vi) of Theorem 11.19 are satisfied: (i) It follows at once from (11.9) for .(γ ) and it is obvious for .(α) and .(β). (ii) It follows from definition and Corollary 11.13 (.R(F, G) is identically zero). , G)  = 0 =⇒ P ∈  ∩ G.  (iii) We have only to prove the implication .mP (F / F   By contradiction, suppose that .P = [a, b, c] ∈ F ∩ G i.e. .F (a, b, c) = G(a, b, c) = 0. Then the resultant .R(F, G)(x1 , x2 ) vanishes when .(x1 , x2 ) = (b, c) by Corollary 10.6. Hence .bx2 − cx1 divides .R(F, G)(x1 , x2 ) and .m[b,c] (R(F, G)) > 0 because by (a) .(b, c) /= (0, 0) (or .m[b,c] (R(F, G)) = ∞ if .R(F, G)(x1 , x2 ) is identically zero).

11.4 Intersection Multiplicity of Two Curves

403

(iv) If .F = a0 x0 + a1 x1 + a2 x2 and .G = b0 x0 + b1 x1 + b2 x2 represent two distinct lines with .a0 b0 /= 0, their intersection point is [a1 b2 − a2 b1 , a2 b0 − a0 b2 , a0 b1 − a1 b0 ] /= [1, 0, 0]

.

by condition (a). Then the polynomial   a a x + a2 x2   = (a0 b1 − a1 b0 )x1 + (a0 b2 − a2 b0 )x2 R(F, G)(x1 , x2 ) =  0 1 1 b0 b1 x1 + b2 x2 

.

is not identically zero and has .[a0 b2 − a2 b0 , −(a0 b1 − a1 b0 )] as a root of multiplicity 1. (v) It follows immediately from Corollary 11.16. (vi) It follows immediately from Corollary 11.18. ⨆ ⨅

The proof of Theorem 11.19 is now complete.

Remark 11.23 The uniqueness (in Proposition 11.19) shows that the intersection , G)  (Definition 11.22) is independent of the coordinates chosen multiplicity .mP (F , G)  depends  and .G.  Moreover (iii) and (v) imply that .mP (F and is intrinsic to P , .F   only on those components of .F and .G which contain P . Theorem 11.24 (Bézout) Under the assumptions of Theorem 11.1, .C ∩ D is finite, non-empty and 

mP (C, D) = mn.

.

P ∈C∩D

Proof From the proof of Theorem 11.1, Propositions 11.19 and 10.6 we have 

mP (C, D) =

.

P =[a,b,c]∈C∩D



m[b,c] (R(F, G)) = mn.

[b,c]∈V+ (R(F,G))

⨆ ⨅

Remark 11.25 The proof of Theorem 11.1 works also if .char(K) = 2. In particular, Theorem 11.5 and Proposition 11.2 hold true if .char(K) = 2. Remark 11.26 We want to compare the intersection multiplicity between a curve  and a line r given in Definition 10.18 with the one between two curves given F  and suppose that above. We assume that r is not a component of .F

.

F (x0 , x1 , x2 ) = a0 x0m + a1 (x1 , x2 )x0m−1 + · · · + am (x1 , x2 ), r : x0 = 0,

.

P = [0, 0, 1].

11 Bézout’s Theorem for Curves of .P2 (K)

404

Then we have  1 0  0 1   .. .. .R(F, r)(x1 , x2 ) =  . .  . ..  .. .  a a (x , x ) 0 1 1 2

0 0 .. . .. . ...

       = am (x1 , x2 )    1 0  . . . am (x1 , x2 ) 0 0 .. .

0 0 .. .

with .am (x1 , x2 ) not identically zero. Since .F (0, 0, 1) = 0 we have .am (0, 1) = 0 so that am (x1 , x2 ) = λ1 x1m + · · · λk x1k x2m−k = x1k (λ1 x1m−k + · · · λk x2m−k ).

.

, r) = k, according to Definition 11.22. Hence .k = m[0,1] (R(F, r)) ≥ 1 and .mP (F Choose .Q = [0, 1, 0] and write .r : uP + vQ = [0, u, v], so that .F (0, u, v) = am (u, v) = uk (λ1 um−k + · · · λk v m−k ) and the two intersection multiplicities coincide.

11.5 Applications of Bézout’s Theorem 11.5.1 Points of Inflection  ⊂ P2 (K) be a curve of degree d ≥ 2 such that the Definition 11.27 Let C = F characteristic of K does not divide d!. By Euler’s identities (10.4) and (10.5) we easily get d(d − 1)F = Fx''0 x0 x02 + Fx''1 x1 x12 + Fx''2 x2 x22

.

+ 2(Fx''0 x1 x0 x1 + Fx''0 x2 x0 x2 + Fx''1 x2 x1 x2 ),

(11.26)

where Fx''i xj denotes the second partial derivative of F with respect the variables xi and xj , 0 ≤ i, j ≤ 2. The identity (11.26) can be put in the matrix form d(d − 1)F = (x0 , x1 , x2 ) HF (x0 , x1 , x2 )t ,

.

where ⎛

⎞ Fx''0 x0 Fx''0 x1 Fx''0 x2 ⎜ '' '' '' ⎟ .HF = ⎝Fx x Fx x Fx x ⎠ 1 0 1 1 1 2 Fx''2 x0 Fx''2 x1 Fx''2 x2

11.5 Applications of Bézout’s Theorem

405

is the Hessian matrix of F . The determinant HF := det(HF ) is called the Hessian form (or simply the Hessian) of F and it is a homogeneous form of degree 3(d − 2).  The associated curve H F is called the Hessian of C. t If (y0 , y1 , y2 ) = A(x0 , x1 , x2 )t is a new system of projective coordinates (with A a non-singular 3 × 3 matrix) then one can easily check that HF = A HF At ,

.

where HF is the Hessian matrix with respect to (y0 , y1 , y2 ), then the two Hessian forms are equal up to a constant factor. If P is an arbitrary point of P2 (K) we can associate to the symmetric matrix ⎛

⎞ Fx''0 x0 (P ) Fx''0 x1 (P ) Fx''0 x2 (P ) ⎜ '' ⎟ '' '' .HF (P ) = ⎝Fx x (P ) Fx x (P ) Fx x (P )⎠ 1 0 1 1 1 2 Fx''2 x0 (P ) Fx''2 x1 (P ) Fx''2 x2 (P )  the conic γP := G P of equation GP := Fx''0 x0 (P )x02 + Fx''1 x1 (P )x12 + Fx''2 x2 (P )x22

.

+ 2Fx''0 x1 (P )x0 x1 + 2Fx''1 x2 (P )x1 x2 + 2Fx''2 x0 (P )x2 x0 = 0.

(11.27)

Proposition 11.28 Under the hypothesis and notation of Definition 11.27, if P ∈ P2 (K) is an arbitrary point we have: (i) GP (P ) = d(d − 1)F (P ), hence P ∈ C if and only if P ∈ γP .  (ii) HF (P ) = 0 if and only if the conic γP is degenerate, thus P ∈ C ∩ H F if and only if P belongs to the degenerate conic γP . (iii) P is a singular point of C if and only if P is a singular point of γP .  (iv) Sing(C) ⊂ H F.  (v) For each non singular point P ∈ C ∩ H F the tangent line TP (C) to C at P is an irriducible component of γP . Proof Statements (i) and (ii) trivially follow from (11.26) and from the above definitions. Euler’s identities for derivatives Fx0 , Fx1 and Fx2 give the identities 2(d − 1)

.

∂F ∂GP (P ) = (P ), ∂xi ∂xi

i = 0, 1, 2

(11.28)

from which the assertion (iii) follows. If P is singular for C it is singular for γP by (11.28), hence γP is degenerate and (iv) follows from (ii). By (11.28) the equations of the tangent lines to C and γP at P are the same and this common tangent must be an irriducible component of the degenerate conic γP . ⨆ ⨅  ⊂ P2 (K) be a curve of degree d ≥ 3 and suppose Definition 11.29 Let C = F that the characteristic of K does not divide d!. A non-singular point P ∈ C is called

11 Bézout’s Theorem for Curves of .P2 (K)

406

a flex (or point of inflection) if mP (C, TP (C)) ≥ 3.

.

If C has a line L as a component, a non-singular point P of C such that P ∈ L (then L = TP (C)) is trivially a flex. To avoid this degenerate case we suppose that the curves considered in this subsection have no line as a component. Lemma 11.30 Let F (x0 , x1 , x2 ) be a homogeneous polynomial of degree d > 1 such that the characteristic of K does not divide d!. Then    dF (d − 1)F ' (d − 1)F '   x1 x2   '  2 .x0 HF (x0 , x1 , x2 ) = (d − 1) Fx Fx''1 x1 Fx''1 x2  ;  '1  '' ''  Fx F x1 x 2 F x2 x 2  2

(11.29)

Proof Euler’s identities (d − 1)Fx' 0 = x0 Fx''0 x0 + x1 Fx''0 x1 + x2 Fx''0 x2 , (d − 1)Fx' 1 = x0 Fx''1 x0 + x1 Fx''1 x1 + x2 Fx''1 x2 , .

(d − 1)Fx' 2 = x0 Fx''2 x0 + x1 Fx''2 x1 + x2 Fx''2 x2 ,

(11.30)

dF = x0 Fx' 0 + x1 Fx' 1 + x2 Fx' 2 and the basic property of determinants give   x F '' F '' F ''   0 x0 x 0 x0 x 1 x0 x 2    x0 HF (x0 , x1 , x2 ) = x0 Fx''0 x1 Fx''1 x1 Fx''1 x2    x0 Fx'' x Fx'' x Fx'' x  0 2 1 2 2 2   (d − 1)F ' − x F '' − x F '' F '' F ''  1 x0 x 1 2 x0 x 2 x0 x 1 x0 x 2   x0   = (d − 1)Fx' 1 − x1 Fx''1 x1 − x2 Fx''1 x2 Fx''1 x1 Fx''1 x2    (d − 1)Fx' − x1 Fx'' x − x2 Fx'' x Fx'' x Fx'' x  2 1 2 2 2 1 2 2 2 .   (d − 1)F ' F '' F ''   x0 x0 x 1 x0 x 2    = (d − 1)Fx' 1 Fx''1 x1 Fx''1 x2    (d − 1)Fx' Fx'' x Fx'' x  2 1 2 2 2   F ' F '' F ''   x0 x0 x 1 x0 x 2    = (d − 1) Fx' 1 Fx''1 x1 Fx''1 x2  .   ' '' '' Fx Fx x Fx x  2

1 2

2 2

(11.31)

11.5 Applications of Bézout’s Theorem

407

Therefore   x F ' x F '' x F ''   0 x0 0 x0 x1 0 x0 x2   '  2 .x0 HF (x0 , x1 , x2 ) = (d − 1)  Fx Fx''1 x1 Fx''1 x2  .  '1  '' ''  Fx F x1 x2 F x2 x2  2

(11.32)

Applying the same procedure to the right side of (11.32) we obtain    dF (d − 1)F ' (d − 1)F '   x1 x2   '  2 .x0 HF (x0 , x1 , x2 ) = (d − 1) Fx Fx''1 x1 Fx''1 x2  .  '1   Fx Fx''1 x2 Fx''2 x2  2

⨆ ⨅

Proposition 11.31 Under the above assumptions and notation, let P be a nonsingular point of C such that the tangent line TP (C) is not contained in C, then Hessian polynomial HF is not identically zero and  m P (H F , TP (C)) = mP (C, TP (C)) − 2.

.

Proof We can choose a system of projective coordinates such that P = [1, 0, 0] and TP (C) has the equation x2 = 0. The affine curve C0 := C ∩ U0 (see Definition 9.9) has the equation f (x, y) = 0, where f (x, y) := F (1, x, y) and r0 := TP0 (C0 ) is the line y = 0 where P0 = (0, 0). By Proposition 10.40 we have 2 ≤ m := mP (C, TP (C)) = mP0 (C0 , r0 )

.

so that we can write f (x, y) = x m ϕ(x) + yψ(x, y),

.

(11.33)

with ϕ(x) ∈ K[x], ϕ(0) /= 0, and ψ(x, y) ∈ K[x, y] ψ(0, 0) /= 0. By differentiating f (x, y) we obtain fx' = x m−1 h(x) + yψx' (x, y),

with h(x) = mϕ(x) + xϕ ' (x)

fy' = ψ(x, y) + yψy' (x, y) .

'' '' = x m−2 k(x) + yψxx (x, y), fxx

with k(x) = (m − 1)h(x) + xh' (x)

'' '' fxy = ψx' (x, y) + yψxy (x, y) '' '' fyy = 2ψy' (x, y) + yψyy (x, y).

(11.34) In particular we have h(0) /= 0 and k(0) /= 0.

11 Bézout’s Theorem for Curves of .P2 (K)

408

From Lemma 11.30 we get   df (d − 1)f ' (d − 1)f '   x y   '' '' HF (1, x, y) = (d − 1)  fx' fxx fxy   '  '' ''  fy  fxy fyy .   d(x m ϕ + yψ) (d − 1)(x m−1 h + yψ ' ) (d − 1)(ψ + yψ ' )  x y    '' '' = (d − 1) x m−1 h + yψx' x m−2 k + yψxx ψx' + yψxy .   '' ''   ψ + yψy' ψx' + yψxy 2ψy' + yψyy (11.35)  In order to determine the intersection multiplicity mP0 (H F , r0 ) we have to calculate HF (1, x, 0):  m   dx ϕ (d − 1)x m−1 h (d − 1)ψ(x, 0)  m−1  .HF (1, x, 0) = (d − 1)  x h x m−2 k ψx' (x, 0)  = x m−2 g(x).  ψ(x, 0) ψ ' (x, 0) 2ψy' (x, 0)  x We observe that g(0) = −(d − 1)k(0)ψ(0, 0)2 /= 0 so the Hessian polynomial HF  is not identically zero and mP0 (H ⨆ ⨅ F , r0 ) = m − 2. Corollary 11.32 Under the above assumptions and notation, a non-singular curve  C cannot be a component of H F.  Proof If C were an irreducible component of H F then, for P ∈ C,  mP (C, TP (C)) ≤ mP (H F , TP (C)),

.

⨆ ⨅

but that is false by Proposition 11.31. Now we are able to prove the following result:

Theorem 11.33 Under the above assumptions and notation, a non-singular point $P \in C$ is a flex if and only if $P \in \widetilde{H}_F$. In particular, a non-singular curve $C$ of degree $d$ has at least one flex and at most $3d(d-2)$ flexes.

Proof Let $P = [p_0, p_1, p_2]$ be a non-singular point of $C$ and $T_P(C)$ the tangent line to $C$ at $P$, of equation $F'_{x_0}(P)x_0 + F'_{x_1}(P)x_1 + F'_{x_2}(P)x_2 = 0$. Let $Q = [q_0, q_1, q_2] \in T_P(C)$ be a variable point different from $P$. We have
$$F(sP + tQ) = \sum_{l=0}^{d} s^{d-l} t^l \Biggl( \sum_{i+j+k=l} \frac{1}{i!\,j!\,k!} \frac{\partial^l F}{\partial x_0^i\, \partial x_1^j\, \partial x_2^k}(P)\, q_0^i q_1^j q_2^k \Biggr)$$
$$= s^d F(P) + s^{d-1} t \bigl(F'_{x_0}(P)q_0 + F'_{x_1}(P)q_1 + F'_{x_2}(P)q_2\bigr) + \frac{1}{2} s^{d-2} t^2 G_P(Q) + t^3(\cdots)$$
$$= \frac{1}{2} s^{d-2} t^2 G_P(Q) + t^3(\cdots), \qquad (11.36)$$

since $F(P) = 0$ and $F'_{x_0}(P)q_0 + F'_{x_1}(P)q_1 + F'_{x_2}(P)q_2 = 0$. Then $P$ is a point of inflection of $C$ if and only if $G_P(Q) = 0$ for every $Q \in T_P(C)$. This means that $T_P(C)$ is contained in the conic $\gamma_P$, so that $\gamma_P$ is degenerate and $H_F(P) = 0$, i.e. $P \in \widetilde{H}_F$ by Proposition 11.28.
Conversely, if $P \in C \cap \widetilde{H}_F$ is non-singular, the tangent line $T_P(C)$ is a component of the degenerate conic $\gamma_P$ by Proposition 11.28, so $G_P(Q) = 0$ for every $Q \in T_P(C)$ and $P$ is a flex of $C$.
Since the curve $C$ is non-singular, it is also irreducible by Corollary 10.17; as $C$ cannot be an irreducible component of $\widetilde{H}_F$ by Corollary 11.32, the curves $C$ and $\widetilde{H}_F$ have no common component. By Bézout's Theorem 11.1 we then have $1 \le \#(C \cap \widetilde{H}_F) \le \deg(F)\deg(H_F) \le 3d(d-2)$. ⨆ ⨅

Corollary 11.34 Let $C \subset P^2(K)$ be a non-singular curve of degree 3 over an algebraically closed field of characteristic $\ne 2, 3$. Then $C$ has at least 1 and at most 9 flexes (Fig. 11.4).
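Theorem 11.33 makes flex-finding computational: in characteristic 0 the flexes of a non-singular curve are exactly its points where the Hessian determinant vanishes. The following sketch (Python with SymPy) checks this for a cubic in the Legendre form (11.37) with $\lambda = 2$; the curve and the test points are illustrative choices, and the plain determinant of second-order partials is used, which agrees with the Hessian polynomial of the text up to a nonzero constant factor.

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
# A non-singular cubic in Legendre form (11.37), with lambda = 2 (illustrative)
F = y**2*z - x*(x - z)*(x - 2*z)

V = (x, y, z)
# Hessian determinant of the second-order partial derivatives of F
H = sp.Matrix(3, 3, lambda i, j: sp.diff(F, V[i], V[j])).det()

def is_flex(P):
    """P should be a (projective) point lying on the curve F = 0."""
    s = dict(zip(V, P))
    assert sp.expand(F.subs(s)) == 0   # sanity check: P lies on the curve
    return sp.expand(H.subs(s)) == 0

print(is_flex((0, 1, 0)))   # True: the flex used in the proof of Theorem 11.38
print(is_flex((2, 0, 1)))   # False: a curve point that is not a flex
```

Repeating the same test at the remaining affine flexes found in Theorem 11.38 below would confirm the count of nine for this cubic.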

Fig. 11.4 Non-singular cubics $y^2 = x^3 - x$ and $y^2 = x^3 - x + \frac{1}{2}$

11.5.2 Legendre Form of Cubics

Theorem 11.35 (Legendre) Let $K$ be an algebraically closed field of characteristic $\ne 2, 3$. Every non-singular cubic $C$ in $P^2(K)$ is equivalent under a projective


transformation to the curve defined by
$$y^2 z = x(x - z)(x - \lambda z) \qquad (11.37)$$

for some $\lambda \in K \setminus \{0, 1\}$.

Proof By Theorem 11.33, $C$ has a point of inflection $P$. By a suitable change of coordinates we may assume that $P = [0, 1, 0]$ and that the tangent line to $C$ at $P$ is $x_2 = 0$. Let $C := \widetilde{F}$, where
$$F(x_0, x_1, x_2) = ax_0^3 + bx_1^3 + cx_2^3 + dx_0^2x_1 + ex_0x_1^2 + fx_0^2x_2 + gx_0x_2^2 + hx_1^2x_2 + kx_1x_2^2 + lx_0x_1x_2, \qquad (11.38)$$

with $a, b, \ldots, l \in K$. Since
$$\frac{\partial F}{\partial x_0} = 3ax_0^2 + 2dx_0x_1 + ex_1^2 + 2fx_0x_2 + gx_2^2 + lx_1x_2,$$
$$\frac{\partial F}{\partial x_1} = 3bx_1^2 + dx_0^2 + 2ex_0x_1 + 2hx_1x_2 + kx_2^2 + lx_0x_2,$$
$$\frac{\partial F}{\partial x_2} = 3cx_2^2 + fx_0^2 + 2gx_0x_2 + hx_1^2 + 2kx_1x_2 + lx_0x_1,$$
from our assumptions
$$F(0, 1, 0) = 0, \qquad \frac{\partial F}{\partial x_0}(0, 1, 0) = \frac{\partial F}{\partial x_1}(0, 1, 0) = 0, \qquad \frac{\partial F}{\partial x_2}(0, 1, 0) \ne 0,$$
we deduce $b = 0$, $e = 0$, $h \ne 0$. As in Lemma 11.30 we can obtain
$$x_1^2\, H_F(x_0, x_1, x_2) = 4 \begin{vmatrix} F''_{x_0x_0} & F'_{x_0} & F''_{x_0x_2} \\ F'_{x_0} & \frac{3}{2}F & F'_{x_2} \\ F''_{x_0x_2} & F'_{x_2} & F''_{x_2x_2} \end{vmatrix},$$
so, because $H_F(0, 1, 0) = 0$, we have
$$0 = H_F(0, 1, 0) = 4 \begin{vmatrix} F''_{x_0x_0}(0, 1, 0) & 0 & F''_{x_0x_2}(0, 1, 0) \\ 0 & 0 & F'_{x_2}(0, 1, 0) \\ F''_{x_0x_2}(0, 1, 0) & F'_{x_2}(0, 1, 0) & F''_{x_2x_2}(0, 1, 0) \end{vmatrix} = -4\bigl(F'_{x_2}(0, 1, 0)\bigr)^2 F''_{x_0x_0}(0, 1, 0).$$


Thus $F''_{x_0x_0}(0, 1, 0) = 0$. Since
$$\frac{\partial^2 F}{\partial x_0^2} = 6ax_0 + 2dx_1 + 2fx_2,$$
we deduce that $d = 0$. Therefore $F$ has the form
$$F(x_0, x_1, x_2) = ax_0^3 + cx_2^3 + fx_0^2x_2 + gx_0x_2^2 + hx_1^2x_2 + kx_1x_2^2 + lx_0x_1x_2 = x_1x_2(lx_0 + hx_1 + kx_2) + \varphi(x_0, x_2),$$
where $\varphi(x_0, x_2)$ is homogeneous of degree 3 in $x_0$ and $x_2$. After the change of coordinates $[x_0, x_1, x_2] \mapsto [x, y, z]$,
$$x_0 = x, \qquad x_1 = y - \frac{lx + kz}{2h}, \qquad x_2 = z,$$
we obtain
$$F(x, y, z) = z\Bigl(y - \frac{lx + kz}{2h}\Bigr)\Bigl(lx + h\Bigl(y - \frac{lx + kz}{2h}\Bigr) + kz\Bigr) + \varphi(x, z)$$
$$= z\Bigl(y - \frac{lx + kz}{2h}\Bigr)\Bigl(hy + \frac{lx + kz}{2}\Bigr) + \varphi(x, z) = hz\Bigl(y - \frac{lx + kz}{2h}\Bigr)\Bigl(y + \frac{lx + kz}{2h}\Bigr) + \varphi(x, z)$$
$$= hz\Bigl(y^2 - \Bigl(\frac{lx + kz}{2h}\Bigr)^2\Bigr) + \varphi(x, z) = hy^2z + \psi(x, z),$$

where $\psi(x, z)$ is homogeneous of degree 3 in $x$ and $z$. Since $K$ is algebraically closed,
$$\psi(x, z) = (a_1x - b_1z)(a_2x - b_2z)(a_3x - b_3z), \qquad (11.39)$$
and $z$ does not divide $\psi(x, z)$, otherwise $F$ would be reducible and $C$ singular; thus $a_1a_2a_3 \ne 0$. We write
$$\psi(x, z) = a_1a_2a_3\Bigl(x - \frac{b_1}{a_1}z\Bigr)\Bigl(x - \frac{b_2}{a_2}z\Bigr)\Bigl(x - \frac{b_3}{a_3}z\Bigr),$$
so
$$F(x, y, z) = hy^2z + a_1a_2a_3\Bigl(x - \frac{b_1}{a_1}z\Bigr)\Bigl(x - \frac{b_2}{a_2}z\Bigr)\Bigl(x - \frac{b_3}{a_3}z\Bigr).$$


After the change of coordinates $x = X$, $y = \delta Y$, $z = Z$, where $\delta^2 = -\dfrac{a_1a_2a_3}{h}$, we put $C$ into the form
$$Y^2Z = (X - \alpha Z)(X - \beta Z)(X - \gamma Z).$$
The elements $\alpha = \dfrac{b_1}{a_1}$, $\beta = \dfrac{b_2}{a_2}$, $\gamma = \dfrac{b_3}{a_3}$ are distinct, otherwise $C$ would be singular. If we apply the change of coordinates (using again the letters $x$, $y$ and $z$)

$$X = (\beta - \alpha)x + \alpha z, \qquad Y = \eta y, \qquad Z = z,$$
where $\eta^2 = (\beta - \alpha)^3$, we get
$$y^2z = x(x - z)\Bigl(x - \frac{\gamma - \alpha}{\beta - \alpha}\, z\Bigr),$$
i.e. (11.37) with $\lambda = \dfrac{\gamma - \alpha}{\beta - \alpha} \in K \setminus \{0, 1\}$. ⨆ ⨅

Definition 11.36 Equation (11.37) is called the Legendre form of the cubic $C$.

Remark 11.37 The proof of Theorem 11.35 also shows that an irreducible singular cubic admits an equation of the form (11.37); in this case the roots of the polynomial $\psi(x, z)$ are not distinct.

An Application of Legendre Form

Theorem 11.38 (Plücker) A non-singular cubic $C = \widetilde{F}$ of $P^2(K)$ has exactly nine distinct points of inflection, and the line joining two of them passes through a third inflection point.

Proof By Theorem 11.35 we can suppose that $F(x, y, z) = y^2z - x(x - z)(x - \lambda z)$, with $\lambda \ne 0, 1$. Because
$$\frac{1}{8} H_F(x, y, z) = \begin{vmatrix} -3x + (\lambda + 1)z & 0 & (\lambda + 1)x - \lambda z \\ 0 & z & y \\ (\lambda + 1)x - \lambda z & y & -\lambda x \end{vmatrix}$$
$$= (3\lambda - (\lambda + 1)^2)x^2z + \lambda(\lambda + 1)xz^2 - \lambda^2z^3 + 3xy^2 - (\lambda + 1)y^2z,$$
the flexes of $C$ are the solutions of the system of equations
$$\begin{cases} (3\lambda - (\lambda + 1)^2)x^2z + \lambda(\lambda + 1)xz^2 - \lambda^2z^3 + 3xy^2 - (\lambda + 1)y^2z = 0 \\ y^2z = x(x - z)(x - \lambda z). \end{cases} \qquad (11.40)$$


Since $P = [0, 1, 0]$ is a flex and the tangent line to $C$ at $P$ is $z = 0$, all other flexes belong to $U_z = \{[x, y, z] \in P^2(K) : z \ne 0\}$, so we can suppose that they have third coordinate $z = 1$. Therefore the system (11.40) becomes
$$\begin{cases} (3\lambda - (\lambda + 1)^2)x^2 + \lambda(\lambda + 1)x - \lambda^2 + 3xy^2 - (\lambda + 1)y^2 = 0 \\ y^2 = x(x - 1)(x - \lambda), \end{cases} \qquad (11.41)$$

from which we deduce the fourth-degree equation
$$3x^4 - 4(\lambda + 1)x^3 + 6\lambda x^2 - \lambda^2 = 0. \qquad (11.42)$$

If $\lambda = -1$, Eq. (11.42) is $3x^4 - 6x^2 - 1 = 0$, whose 4 roots are all distinct. We assume now $\lambda \ne -1$. We have to prove that Eq. (11.42) has 4 simple roots, i.e. that the system
$$\begin{cases} 3x^4 - 4(\lambda + 1)x^3 + 6\lambda x^2 - \lambda^2 = 0 \\ x^3 - (\lambda + 1)x^2 + \lambda x = 0 \end{cases} \qquad (11.43)$$
has no solution. We observe that $x = 0$ is not a solution of (11.43), so we may consider
$$\begin{cases} 3x^4 - 4(\lambda + 1)x^3 + 6\lambda x^2 - \lambda^2 = 0 \\ x^2 = (\lambda + 1)x - \lambda. \end{cases} \qquad (11.44)$$
Substituting the second equation into the first one twice, we get the equivalent system
$$\begin{cases} (\lambda - 1)^2(\lambda + 1)x = \lambda(\lambda - 1)^2 \\ x^2 = (\lambda + 1)x - \lambda. \end{cases}$$
The only possible solution is $x = \dfrac{\lambda}{\lambda + 1}$, but then from the second equation of (11.44) we obtain $\Bigl(\dfrac{\lambda}{\lambda + 1}\Bigr)^2 = 0$, which implies $\lambda = 0$, a contradiction.
Let $a_1, a_2, a_3, a_4$ be the roots of (11.42). To each root $a_i$ we associate $\pm b_i$ such that $(\pm b_i)^2 = a_i(a_i - 1)(a_i - \lambda)$, $i = 1, 2, 3, 4$. In this way we get the points $P = [0, 1, 0]$, $[a_1, \pm b_1, 1]$, $[a_2, \pm b_2, 1]$, $[a_3, \pm b_3, 1]$, $[a_4, \pm b_4, 1]$, which are the nine inflection points of $C$.
The last statement follows easily. Fix two flexes $P_1$, $P_2$ of $C$ and suppose that $P_1 = [0, 1, 0]$. From the above discussion we have $P_2 = [a, b, 1]$, with $a, b \in K$. Hence $P_3 := [a, -b, 1]$ is a flex of $C$ collinear with $P_1$, $P_2$. ⨆ ⨅

Remark 11.39 Theorem 11.38 agrees with Bézout's theorem. Since both the curves $C$ and $\widetilde{H}_F$ are of degree 3, the number of intersection points of the two


curves reaches the maximum possible of nine as stated by Bézout’s theorem. In particular, all these intersection points (i.e. points of inflection of C) have intersection multiplicity 1.
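The key step of the proof above, that the quartic (11.42) is squarefree so that every admissible $\lambda$ yields four simple roots and hence nine flexes, can be spot-checked by machine. A sketch with SymPy (the sample values of $\lambda$ are illustrative choices):

```python
import sympy as sp

x, lam = sp.symbols('x lam')
# Equation (11.42): its simple roots a_i give the eight affine flexes [a_i, ±b_i, 1]
quartic = 3*x**4 - 4*(lam + 1)*x**3 + 6*lam*x**2 - lam**2

def has_four_simple_roots(val):
    p = quartic.subs(lam, val)
    # a polynomial is squarefree iff gcd(p, p') is a nonzero constant
    return sp.degree(sp.gcd(p, sp.diff(p, x)), x) == 0

# three sample values of lambda (all different from 0 and 1)
print([has_four_simple_roots(v) for v in (2, -1, sp.Rational(1, 3))])
```

All three checks return True, in agreement with the discriminant argument of the proof.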

11.5.3 Max Noether's Theorem

Let $K$ be an algebraically closed field of arbitrary characteristic.

Theorem 11.40 (Max Noether) Let $C_1 = \widetilde{F}_1$ and $C_2 = \widetilde{F}_2$ be two curves of degree $n$ in $P^2(K)$ that intersect in exactly $n^2$ points. Fix a natural number $m < n$ and suppose that an irreducible curve $D = \widetilde{G}$ of degree $m$ contains exactly $mn$ of these points. Then there is a curve of degree $n - m$ which passes through the remaining $n(n - m)$ points.

Proof Choose a point $P = [a, b, c] \in D$, $P \notin C_1 \cap C_2$. Since $\deg(F_1) = \deg(F_2) = n$, the curve $C$ of degree $n$ defined by
$$F_2(a, b, c)\, F_1(x_0, x_1, x_2) - F_1(a, b, c)\, F_2(x_0, x_1, x_2)$$
meets $D$ in at least $mn + 1$ points, namely $P$ and the $mn$ points of $C_1 \cap C_2$ lying on $D$. By Bézout's theorem, $C$ and $D$ have a common irreducible component, which must be $D$ itself because $D$ is irreducible. Therefore
$$F_2(a, b, c)\, F_1(x_0, x_1, x_2) - F_1(a, b, c)\, F_2(x_0, x_1, x_2) = G(x_0, x_1, x_2)\, H(x_0, x_1, x_2)$$
for some homogeneous polynomial $H \in K[x_0, x_1, x_2]$ of degree $n - m$. If $[u, v, w] \in C_1 \cap C_2$, then either $G(u, v, w) = 0$ or $H(u, v, w) = 0$. Therefore the $n(n - m)$ points of $C_1 \cap C_2$ which do not lie on $D$ must all lie on $\widetilde{H}$. ⨆ ⨅

Corollary 11.41 If two cubics intersect in exactly nine points and an irreducible conic contains six of these points, then the remaining three points are collinear. Conversely, if three of these nine points are collinear, then the remaining six points lie on a conic.

Proof Apply Theorem 11.40 twice. ⨆ ⨅

11.5.4 Conics Passing Through a Finite Number of Points

We closely follow [30]. Throughout this subsection we shall denote the $K$-vector space ${}^h K[x_0, x_1, x_2]_d$ by $S_d$. By Exercise 10.1 we know that
$$\dim_K S_d = \binom{d + 2}{d} = \frac{(d + 1)(d + 2)}{2}. \qquad (11.45)$$


For $P_1, \ldots, P_n \in P^2(K)$, we define
$$S_d(P_1, \ldots, P_n) := \{F \in S_d \mid F(P_1) = \cdots = F(P_n) = 0\}.$$
It is immediately seen that $S_d(P_1, \ldots, P_n)$ is a subspace of $S_d$.

Proposition 11.42 Under the above notation we have
$$\dim_K S_d(P_1, \ldots, P_n) \ge \frac{(d + 1)(d + 2)}{2} - n. \qquad (11.46)$$
In particular, $\dim_K S_2(P_1, \ldots, P_n) \ge 6 - n$.

Proof Each of the conditions $F(P_i) = 0$ is a single linear condition on $F$, so that (11.46) follows from (11.45). ⨆ ⨅

Corollary 11.43 Let $P_1, \ldots, P_n$ be $n \le 5$ points no four of which are collinear. Then
$$\dim_K S_2(P_1, \ldots, P_n) = 6 - n.$$

Proof If $n = 5$, Proposition 11.2-iv) implies that $\dim_K S_2(P_1, \ldots, P_5) \le 1$, so that $\dim_K S_2(P_1, \ldots, P_5) = 1$ by Proposition 11.42. If $n \le 4$ we can add points $P_{n+1}, \ldots, P_5$ such that no four of $P_1, \ldots, P_5$ are collinear. Since $\dim_K S_2(P_1, \ldots, P_5) = 1$ and each point imposes at most one linear condition on $F$, we get
$$1 = \dim_K S_2(P_1, \ldots, P_5) \ge \dim_K S_2(P_1, \ldots, P_n) - (5 - n),$$
i.e. $\dim_K S_2(P_1, \ldots, P_n) \le 6 - n$, so that $\dim_K S_2(P_1, \ldots, P_n) = 6 - n$ by Proposition 11.42. ⨆ ⨅

Lemma 11.44 Let $F \in S_d$, $F \ne 0$.
(i) Let $L = \widetilde{L}$ be a line of $P^2(K)$. If $V_+(F) \supset L$, then $F = L \cdot G$ with $G \in S_{d-1}$.
(ii) Let $C = \widetilde{C}$ be a non-degenerate conic of $P^2(K)$. If $V_+(F) \supset C$, then $F = C \cdot H$ with $H \in S_{d-2}$.

Proof Immediate by Theorem 11.1. ⨆ ⨅
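Proposition 11.42 and Corollary 11.43 are pure linear algebra: $S_d(P_1, \ldots, P_n)$ is the null space of the matrix that evaluates the monomial basis of $S_d$ at the given points. A small sketch for $d = 2$ in Python with SymPy (the five points are an illustrative choice with no four collinear):

```python
import sympy as sp

x0, x1, x2 = sp.symbols('x0 x1 x2')
monos = [x0**2, x0*x1, x0*x2, x1**2, x1*x2, x2**2]   # basis of S_2, dimension 6

def dim_S2(points):
    # S_2(P_1,...,P_n) is the null space of the n x 6 evaluation matrix
    M = sp.Matrix([[m.subs(dict(zip((x0, x1, x2), P))) for m in monos]
                   for P in points])
    return 6 - M.rank()

pts = [(1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 1, 1), (1, 2, 3)]
print(dim_S2(pts))       # 1: a unique conic through five such points
print(dim_S2(pts[:4]))   # 2, in agreement with Corollary 11.43
```

The same evaluation-matrix computation with cubic monomials (a 10-dimensional space) illustrates Lemma 11.46 below.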

Corollary 11.45 Let $L = \widetilde{L}$ be a line, and let $C = \widetilde{C}$ be a non-degenerate conic of $P^2(K)$. Suppose that points $P_1, \ldots, P_n \in P^2(K)$ are given and fix $d \ge 1$.
(i) If $P_1, \ldots, P_a \in L$, $P_{a+1}, \ldots, P_n \notin L$ and $a > d$, then
$$S_d(P_1, \ldots, P_n) = L \cdot S_{d-1}(P_{a+1}, \ldots, P_n) := \{L \cdot D \mid D \in S_{d-1}(P_{a+1}, \ldots, P_n)\}.$$
(ii) If $P_1, \ldots, P_a \in C$, $P_{a+1}, \ldots, P_n \notin C$ and $a > 2d$, then
$$S_d(P_1, \ldots, P_n) = C \cdot S_{d-2}(P_{a+1}, \ldots, P_n) := \{C \cdot E \mid E \in S_{d-2}(P_{a+1}, \ldots, P_n)\}.$$


Proof (i) If $F \in S_d(P_1, \ldots, P_n)$, $F \ne 0$, by Theorem 11.1 we have $L \subset V_+(F)$ (since $F$ vanishes at the $a > d$ points $P_1, \ldots, P_a$ of $L$). Then Lemma 11.44-(i) implies $F = L \cdot D$, with $D \in S_{d-1}(P_{a+1}, \ldots, P_n)$, hence $S_d(P_1, \ldots, P_n) \subset L \cdot S_{d-1}(P_{a+1}, \ldots, P_n)$. The inclusion $L \cdot S_{d-1}(P_{a+1}, \ldots, P_n) \subset S_d(P_1, \ldots, P_n)$ is obvious, so we get (i). The proof of (ii) is exactly the same. ⨆ ⨅

Lemma 11.46 Let $P_1, \ldots, P_8 \in P^2(K)$ be distinct points such that no four of them are collinear and no seven of them lie on a non-degenerate conic. Then
$$\dim_K S_3(P_1, \ldots, P_8) = 2.$$

Proof By Proposition 11.42 we have $\dim_K S_3(P_1, \ldots, P_8) \ge 2$.

Case 1 No three points of $P_1, \ldots, P_8$ are collinear and no six lie on a non-degenerate conic. Suppose by contradiction that $\dim_K S_3(P_1, \ldots, P_8) \ge 3$. Let $P_9$, $P_{10}$ be two distinct points on the line $L = P_1P_2$. Then
$$\dim_K S_3(P_1, \ldots, P_{10}) \ge \dim_K S_3(P_1, \ldots, P_8) - 2 \ge 1,$$
so that there exists $0 \ne F \in S_3(P_1, \ldots, P_{10})$. By Corollary 11.45 ($a = 4$, since $P_1, P_2, P_9, P_{10} \in L$) we have $F = L \cdot C$, with $C \in S_2(P_3, \ldots, P_8)$. Since the conic $\mathcal{C} := \widetilde{C}$ passes through $P_3, \ldots, P_8$, by assumption $\mathcal{C}$ is degenerate, i.e. $\mathcal{C}$ is a pair of distinct lines or a double line. In both cases at least three points of $P_1, \ldots, P_8$ are collinear, contradicting our hypothesis.

Case 2 Suppose that $P_1, P_2, P_3$ lie on the line $L = \widetilde{L}$. Let $P_9$ be a fourth point on the line $L$, distinct from all the points $P_1, \ldots, P_8$. By Corollary 11.45 we have
$$S_3(P_1, \ldots, P_9) = L \cdot S_2(P_4, \ldots, P_8).$$
Also, since no four of $P_4, \ldots, P_8$ are collinear, by Corollary 11.43 we get $\dim_K S_2(P_4, \ldots, P_8) = 6 - 5 = 1$, hence $\dim_K S_3(P_1, \ldots, P_9) = 1$, which implies $\dim_K S_3(P_1, \ldots, P_8) \le 2$, so that $\dim_K S_3(P_1, \ldots, P_8) = 2$.

Case 3 Suppose that $P_1, \ldots, P_6$ belong to a non-degenerate conic $C = \widetilde{C}$. Choose a point $P_9 \in C$ distinct from $P_1, \ldots, P_6$. By Corollary 11.45-(ii) we have
$$S_3(P_1, \ldots, P_9) = C \cdot S_1(P_7, P_8).$$


Fig. 11.5 Nine points

Moreover, $S_1(P_7, P_8)$ is generated by $L$ (where $L = \widetilde{L}$ is the line $P_7P_8$), so that $S_3(P_1, \ldots, P_9)$ is the 1-dimensional space spanned by $C \cdot L$. Therefore $\dim_K S_3(P_1, \ldots, P_8) \le 2$ and $\dim_K S_3(P_1, \ldots, P_8) = 2$. ⨆ ⨅

Proposition 11.47 Let $C_1 = \widetilde{C}_1$, $C_2 = \widetilde{C}_2$ be two cubics of $P^2(K)$ whose intersection consists of nine distinct points $P_1, \ldots, P_9$. Then a cubic $D = \widetilde{D}$ through $P_1, \ldots, P_8$ also passes through $P_9$ (see Fig. 11.5).

Proof If four points of $P_1, \ldots, P_8$ belonged to a line $L$, then by Bézout's theorem $L$ would be a component of both $C_1$ and $C_2$, so that $C_1 \cap C_2$ would be infinite, which is absurd. For exactly the same reason, no seven points of $P_1, \ldots, P_8$ can lie on a non-degenerate conic. By Lemma 11.46 we get $\dim_K S_3(P_1, \ldots, P_8) = 2$. Therefore the polynomials $C_1$ and $C_2$ form a basis of $S_3(P_1, \ldots, P_8)$, and hence $D = \alpha_1 C_1 + \alpha_2 C_2$, with $\alpha_1, \alpha_2 \in K$. Now $C_1(P_9) = C_2(P_9) = 0$, so that $D(P_9) = 0$, i.e. $P_9 \in D$. ⨆ ⨅

From Proposition 11.47 we deduce the converse of Pascal's Theorem 11.5.

Corollary 11.48 (Converse of Pascal's Theorem) Consider Fig. 11.2. If the opposite sides of a hexagon $P_1P_2P_3P_4P_5P_6$ can be extended until they meet in three collinear points $A$, $B$, $C$, and the nine points and the six lines are all distinct, then $P_1, \ldots, P_6$ lie on a non-degenerate conic.

Proof Assume the notation of Theorem 11.5 and consider Fig. 11.2. The intersection of the cubics $C_1 := P_1P_2 \cup P_3P_4 \cup P_5P_6$ and $C_2 := P_2P_3 \cup P_4P_5 \cup P_6P_1$ consists of nine points:
$$C_1 \cap C_2 = \{P_1, P_2, P_3, P_4, P_5, P_6, A, B, C\}.$$


Fig. 11.6 Abelian group law on an elliptic curve (with identity O)

By Proposition 5.27-(i) there exists a conic $D$ passing through the points $P_1, \ldots, P_5$. Let $L$ be the line containing $A$, $B$, $C$; then the cubic $D \cup L$ passes through the points $P_1, P_2, P_3, P_4, P_5, A, B, C$. By Proposition 11.47, $P_6 \in D \cup L$. Since $P_6$ does not lie on $L$, by assumption, we conclude that $P_6 \in D$. ⨆ ⨅

Elliptic Curves

Definition 11.49 A non-singular cubic of $P^2(K)$ is called an elliptic curve.

Elliptic curves are very important objects in Algebraic Geometry and in Number Theory, with many remarkable applications. Proposition 11.47 is the basic result needed to prove the following fundamental theorem.

Theorem 11.50 Let $C$ be an elliptic curve of $P^2(K)$, where $K$ is an algebraically closed field of characteristic 0. Fix an arbitrary flex $O \in C$ ($O$ exists by Theorem 11.33). Then there exists an abelian group law on $C$ whose identity is $O$.

The reader can find a proof in [30], pp. 33–41, in [26], pp. 235–240 or in [21], pp. 67–74 (where there is a complete proof of associativity). The group law is illustrated in Fig. 11.6. Two remarks:
1. If $A = B$, the line $AB$ is the tangent to $C$ at $A$.
2. Note that the choice of a point of inflection as identity is forced, since the tangent to $C$ at $O$ meets $C$ only at $O$, so that $O + O = O$.
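For a cubic in Weierstrass form $y^2z = x^3 + axz^2 + bz^3$ with identity the flex $O = [0, 1, 0]$, the chord–tangent law of Theorem 11.50 has well-known affine formulas. A sketch over $\mathbb{Q}$ (the curve $y^2 = x^3 - x$ of Fig. 11.4 is an illustrative choice; this is the standard textbook formulation, not code taken from the text):

```python
from fractions import Fraction

# Chord-tangent addition on y^2 = x^3 + a*x + b, identity O = [0,1,0] (a flex).
# Affine points are (x, y) pairs; None stands for O.
a, b = Fraction(-1), Fraction(0)       # the curve y^2 = x^3 - x of Fig. 11.4

def add(P, Q):
    if P is None:
        return Q
    if Q is None:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and y1 == -y2:
        return None                     # P + (-P) = O (the line PQ is vertical)
    if P == Q:
        s = (3*x1**2 + a) / (2*y1)      # slope of the tangent at P
    else:
        s = (y2 - y1) / (x2 - x1)       # slope of the chord PQ
    x3 = s**2 - x1 - x2                 # x of the third intersection with the curve
    return (x3, s*(x3 - x1) - y1)       # reflect it to obtain P + Q

P, Q, R = (Fraction(0), Fraction(0)), (Fraction(-1), Fraction(0)), (Fraction(1), Fraction(0))
print(add(P, P) is None)   # True: P is 2-torsion, its tangent is vertical
print(add(Q, R) == P)      # True: the three 2-torsion points sum to O in pairs
```

Here $(0,0)$, $(\pm 1, 0)$ are the three points with vertical tangent; together with $O$ they form a subgroup isomorphic to $\mathbb{Z}/2 \times \mathbb{Z}/2$.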

11.6 Exercises

Exercise 11.1 Let $C$ be a non-singular curve of $P^2(K)$ (where $K$ is an algebraically closed field of characteristic 0), let $P \in C$ be a flex of $C$ and let $\sigma \in PGL_2(K)$ be a projective automorphism. Show that $\sigma(P)$ is a flex of $\sigma(C)$.


Exercise 11.2 Let $C = \widetilde{f}$ be the affine curve of equation $y^2 = x^3 + px + q$ over an algebraically closed field of characteristic $\ne 2, 3$. Prove that the projective closure of $C$ is non-singular if and only if $x^3 + px + q$ has only simple roots.

Exercise 11.3 Prove Proposition 11.42. (Hint: By induction on $n$, taking into account that each point imposes at most one linear condition on $F$.)

Exercise 11.4 Let $C$ be the projective curve of equation $x_1^2x_2 - x_0^3 + x_0x_2^2 = 0$ over an algebraically closed field of characteristic $\ne 2, 3$. Show that $C$ is non-singular and that $[0, 1, 0]$ is a flex of $C$.

Exercise 11.5 (Group Law on a Projective Conic) Let $\widetilde{F}$ be a non-degenerate conic of $P^2(K)$. Let $\ell$ be a line and let $N$ be a point on $V_+(F) \setminus \ell$. We define:
1. For $A, B \in V_+(F)$, $A \ne B$, let $P_{AB} = AB \cap \ell$. Set
$$A \circ B := \begin{cases} R & \text{if } P_{AB}N \text{ meets } V_+(F) \text{ in a second point } R \ne N, \\ N & \text{if } P_{AB}N \text{ is tangent to } \widetilde{F} \text{ at } N. \end{cases}$$
2. If $A = B \in V_+(F)$ and $t_A$ is the tangent line at $A$ to $\widetilde{F}$, let $P_A = t_A \cap \ell$ and set
$$A \circ B := \begin{cases} R & \text{if } P_A N \text{ meets } V_+(F) \text{ in a second point } R \ne N, \\ N & \text{if } P_A N \text{ is tangent to } \widetilde{F} \text{ at } N. \end{cases}$$
Prove that $V_+(F)$ equipped with the operation $\circ$ is an abelian group. (Hint: Use Pascal's Theorem 11.5 for associativity.) Further show that if $\ell$ is tangent to $\widetilde{F}$, the group $(V_+(F), \circ)$ is isomorphic to the additive group of $K$, while if $\ell$ intersects the conic in two distinct points, then $(V_+(F), \circ)$ is isomorphic to the multiplicative group $K^*$ of $K$.

Chapter 12

Absolute Plane Geometry

12.1 Elements of Absolute Plane Geometry

This chapter is an introduction to absolute plane geometry, closely following [23]. In particular, we shall give an example of a non-euclidean absolute plane geometry.

Definition 12.1 An incidence structure $P := (X, D)$ (see Definition 6.1) is called an absolute plane if the following group of axioms $a_1), \ldots, a_{10})$ is satisfied:

Incidence axioms.
($a_1$) Each line of $P$ contains at least two points.
($a_2$) There exist three non-collinear points of $X$.

Order axioms.
($a_3$) There are two total order relations on each line, say $\le$ and the opposite $\le_0$, defined by $A \le_0 B$ iff $B \le A$.

Let $A$, $B$, $C \in X$ be three distinct collinear points. We say that $C$ is between $A$ and $B$ if $A \le C \le B$ or $B \le C \le A$. The betweenness relation does not depend on the order chosen on the line $d$ containing $A$, $B$ and $C$; indeed
$$A \le_0 C \le_0 B \iff B \le C \le A, \qquad B \le_0 C \le_0 A \iff A \le C \le B.$$

Let $A$ and $B$ be two distinct points of $X$. We define the segment with endpoints $A$ and $B$ to be the set
$$[AB] := \{C \in X : C \in AB,\ C \text{ is between } A \text{ and } B\}.$$

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2024. L. Bădescu, E. Carletti, Lectures on Geometry, La Matematica per il 3+2 158, https://doi.org/10.1007/978-3-031-51414-2_12



We obviously have $[AB] = [BA]$. Let $A$, $B$, $C \in X$ be three distinct non-collinear points; we call the segments $[BC]$, $[CA]$ and $[AB]$ the sides of the triangle $\Delta ABC$. Now we can state the second order axiom:

($a_4$) (Pasch's axiom). If a line $d \in D$ meets one side of $\Delta ABC$, then $d$ also meets another side of $\Delta ABC$.

Distance axioms. There exists a function $d : X \times X \to \mathbb{R}_+ := [0, \infty)$, called the distance function, which satisfies the following axioms:

($a_5$) For every $A$, $B \in X$, $d(A, B) = d(B, A)$.
($a_6$) If $A$, $B \in X$, then $d(A, B) = 0$ if and only if $A = B$.
($a_7$) A point $C \in AB$ lies on the segment $[AB]$ if and only if $d(A, B) = d(A, C) + d(C, B)$.

Let $\le$ be a total order on a line $d$ (by axiom $a_3$)). Fix a point $O \in d$. The sets
$$d_O^+ := \{A \in d \mid A \ge O\}, \qquad d_O^- := \{A \in d \mid A \le O\}$$
are called the opposite rays of origin $O$ of $d$, or simply $O$-rays. The opposite rays of origin $O$ with respect to the order $\le$ coincide with the opposite rays of origin $O$ with respect to the order $\le_0$. From now on we shall denote either of $d_O^+$ and $d_O^-$ simply by $d_O$. A ray with origin $O$ will also be denoted by $(OX)$, and the line which contains $(OX)$ by $OX$.

($a_8$) For every line $d \in D$, for every $O$-ray $d_O$ and for every real number $x \ge 0$, there exists $A \in d_O$ such that $d(O, A) = x$.

Let $f : d_O \to \mathbb{R}_+ = [0, \infty)$ be the function defined by $f(A) := d(O, A)$ for all $A \in d_O$. Then $f$ is bijective: surjective by axiom $a_8$), and injective by axioms $a_6$) and $a_7$).

Superposition axioms. We need a preliminary definition. Let $P = (X, D)$ be an incidence structure satisfying axioms $(a_1), \ldots, (a_8)$.

Definition 12.2 A map $\sigma : X \to X$ is called an automorphism of $P$ if $\sigma$ is bijective, $\sigma(d) \in D$ for every $d \in D$, and
$$d(\sigma(A), \sigma(B)) = d(A, B), \qquad \forall\, A, B \in X.$$
It is immediately seen that the set $\mathrm{Aut}(P)$ of all automorphisms of $P$ is a group with respect to composition of automorphisms. Now we can state the two superposition axioms.

($a_9$) For each $d \in D$ there is a unique automorphism $s_d \in \mathrm{Aut}(P)$, $s_d \ne \mathrm{id}_X$, which fixes every point of $d$, i.e. $s_d(P) = P$ for all $P \in d$. We call $s_d$ a symmetry of axis $d$.


($a_{10}$) Let $d$, $d' \in D$ be two lines such that $d \cap d' = \{O\}$. Let $d_O$ and $d'_O$ be $O$-rays of $d$ and $d'$ respectively. Then there exists a line $e \in D$ such that $s_e(d_O) = d'_O$.

Absolute planes are the subject of absolute plane geometry. By abuse of notation we shall also use $P$ instead of $X$.

Proposition 12.3 The standard euclidean plane $P = E^2(\mathbb{R})$ is an absolute plane.

Proof All the axioms are easily verified. Furthermore, $\mathrm{Aut}(P)$ coincides with the group $O(E^2(\mathbb{R}))$ of the isometries of $E^2(\mathbb{R})$ by Proposition 4.24. By Exercise 5.6 we know that an isometry of $E^2(\mathbb{R})$ which fixes every point of a line $d$ is necessarily the orthogonal symmetry through $d$. Finally, the line $e$ of axiom $a_{10}$) is one of the bisector lines of $d$ and $d'$ (Definition 4.20). ⨆ ⨅

Definition 12.4 Two lines $d$ and $d'$ of $P$ are called parallel if either $d = d'$, or $d \cap d' = \emptyset$. If $P = E^2(\mathbb{R})$, two lines are parallel according to Definition 12.4 if and only if they are parallel according to Definition 3.68.

Euclid's axiom or parallel postulate:
($\star$) Let $d$ be a line of $P$, and let $A \in P$ be a point not lying on $d$. Then there exists at most one line $d'$ through $A$ which is parallel to $d$.

We shall see in Proposition 12.40 that there always exists at least one line $d'$ through $A$ which is parallel to $d$. Propositions 12.3 and 3.74 imply:

Corollary 12.5 The standard euclidean plane $P = E^2(\mathbb{R})$ satisfies Euclid's axiom ($\star$).

An absolute plane satisfying axiom ($\star$) will be called a euclidean (absolute) plane and its geometry euclidean (absolute) geometry; in the opposite case, it will be called a non-euclidean (absolute) plane and its geometry non-euclidean (absolute) geometry. At the end of this chapter we shall treat the Poincaré hyperbolic half-plane, which is an absolute plane not satisfying axiom ($\star$). In particular, we shall show that Euclid's axiom ($\star$) is independent of the axioms $a_1), \ldots, a_{10})$.

First of all we wish to investigate some fundamental properties which follow only from the axioms $a_1), \ldots, a_{10})$, i.e. which are common to euclidean and non-euclidean geometry. From now on $P = (X, D)$ shall denote an arbitrary absolute plane. Therefore $P$ satisfies all the axioms $a_1), \ldots, a_{10})$ but does not necessarily satisfy Euclid's axiom ($\star$).

Proposition 12.6 If a line $d$ of $P$ meets all sides of a triangle $\Delta ABC$, then it passes through one vertex of $\Delta ABC$.


Proof Suppose by contradiction that
$$d \cap [BC] = \{M\}, \qquad d \cap [CA] = \{N\}, \qquad d \cap [AB] = \{P\} \qquad \text{and} \qquad A, B, C \notin d.$$

⇐⇒

.

[AB] ∩ d = ∅

is an equivalence relation. Furthermore .#(P \ d)/R) = 2. Proof Reflexivity and symmetry of .R are evident. If A, B and C are three points such that .A R B and .B R C, i.e. .[AB] and .[BC] do not meet d, then .[AC] ∩ d = ∅ by Pasch’s axiom, hence .A R C. There exist two points A, .B ∈ P such that .[AB] ∩ d /= ∅. Indeed, take one point + .O ∈ d and a line e through O, .e /= d. By axiom .a8 ) there exists a point .A ∈ e O and − .B ∈ e O (for instance we can take .d(O, A) = d(O, B) = 1). Since .[AB]∩d = {O}, + − A is not equivalent to B. Let .C ∈ P \ d. If .C ∈ e, then .C ∈ eO or .eO but .C /= O. + − / e, then the line d does If .C ∈ eO then .C R A, while if .C ∈ eO then .C R B. If .C ∈ not pass through any vertex of .ΔABC and .[AB] ∩ d = {O} so that by Pasch’s axiom, either .[AC] ∩ d /= ∅ or .[BC] ∩ d /= ∅. By Proposition 12.6 only one of .[AC] ∩ d /= ∅ and .[BC] ∩ d /= ∅ is possibile, hence .C R A or .C R B. ⨆ ⨅  and .B  are the (2) elements of .(P  d)/R we define .H± as Definition 12.8 If .A d being the sets .{P ∈ P  d : P R A} and .{P ∈ P  d : P R B}. We call the sets ± ± ± .H open half-planes separated by d. The sets .Hd := H ∪ d are called the closed d d ±

half-planes separated by d. The line d is the boundary of .H± d and of .Hd .

Definition 12.9 A line d of .P is oriented if one of the total order relations (given by axiom (.a3 )) is chosen. Proposition 12.10 For every line d of .P and for every point .O ∈ d, there is a unique bijective map .f : d → R such that .f (O) = 0, .|f (A) − f (B)| = d(A, B), .∀ A, .B ∈ d which is an increasing function, i.e. .f (A) ≤ f (B) if .A ≤ B. + − Proof Since d is oriented we have .dO = {P ∈ d : O ≤ P } and .dO = {P ∈ d : P ≤ O}. Thus we can define

 f (P ) =

.

d(O, P ) −d(O, P )

+ if P ∈ dO , − . if P ∈ dO

In particular, .f (O) = 0. The requested properties of f are easily proved.

⨆ ⨅


Definition 12.11 Let d be an oriented line of .P and let .O ∈ d be a fixed point. The pair .(d, O) is called pointed oriented line. If .(d, O) is a pointed oriented line, we call the real number .f (P ), where f is given by Proposition 12.10, abscissa of .P ∈ d. Corollary 12.12 For every pair of distinct points .{A, B} of .P, there exists a unique point .C ∈ [AB] such that .d(A, C) = d(B, C). We call C the midpoint of the segment .[AB]. If .A = B the midpoint of .[AA] is A. Proof Suppose .A ≤ B. By axiom .a8 ) there exists a unique point C of the ray 1 emanating from A and containing B such that .d(A, C) = d(A, B). Since by 2 axiom (.a7 ) one has .d(A, B) = d(A, C) + d(B, C), then .d(A, C) = d(B, C). ⨆ ⨅ Proposition 12.13 Let .σ : P → P be an automorphism of .P. (i) .σ ([AB]) = [σ (A)σ (B)]. (ii) Every ray with origin O is transformed by .σ into a ray with origin .σ (O). (iii) Every half-plane with boundary d is transformed by .σ into a half-plane with boundary .σ (d). Proof (i) If A, B, .C ∈ P are three distinct collinear points then .σ (A), σ (B) and .σ (C) are collinear. Therefore, by axiom .a7 ), if .C ∈ [AB], then .d(A, C) + d(C, B) = d(A, B) which implies .d(σ (A), σ (C)) + d(σ (C), σ (B)) = d(σ (A), σ (B)), i.e. .σ (C) ∈ [σ (A)σ (B)]. Therefore if d and .σ (d) are oriented, then the restriction .σ|d : d → σ (d) is bijective and monotone. Now (ii) and (iii) easily follow. ⨆ ⨅ .

Proposition 12.14 Let .σ : P → P be an automorphism of .P. (i) If .σ has two distinct fixed points A and B, then .σ fixes every point of the line AB. (ii) If .σ permutes two distinct points A and B, then .σ fixes the midpoint of the segment .[AB]. (iii) If .σ fixes three distinct non-collinear points A, B and C, then .σ = idP . Proof (i) Any point P of AB is uniquely determined by its distances from A and B taking into account that .d(A, σ (P )) = d(A, P ) and .d(B, σ (P )) = d(B, P ). (ii) Let C be the midpoint of .[AB]. Since .σ (A) = B, .σ (B) = A and .d(σ (C), σ (A)) = d(A, C) = d(C, B) = d(σ (C), σ (B)) it follows that .σ (C) is the midpoint of .[σ (A)σ (B)] = [BA] so that .σ (C) = C. (iii) By (i), .σ fixes all points of the lines BC, CA and AB. Let .M ∈ P  (AB ∪ BC ∪ CA) and let .D ∈ [BC] with .D /= B, C. By Pasch’s axiom .a4 ) the line DM meets a second side of .ΔABC at a point E. Since D and E are distinct fixed points and .M ∈ DE, then M is fixed by .σ . ⨆ ⨅


Definition 12.15 Let .A and .B be two subsets of .P (more precisely, if .A, .B ⊂ X). We say that .A and .B are congruent, written .A ≡ B, if there exists an automorphism .σ ∈ Aut(P) such that .σ (A) = B. Taking into account that .Aut(P) is a group, it immediately follows that the congruence relation .≡ is an equivalence relation on the set of all subsets of .P. Remark 12.16 The proofs of Propositions 12.13 and 12.14 do not require the superposition axioms. We now show some consequences of axioms (.a9 ) and (.a10 ). Proposition 12.17 If .sd is a symmetry of .P of axis d, then: (i) .sd2 = idP . (ii) The locus of fixed points of .sd is the line d. ± ∓ ∓ (iii) .σd (H± d ) = Hd and .σd (Hd ) = Hd . Proof (i) The automorphism .sd2 is different from .sd and it fixes all points of d, then by axiom .a9 ) we have .sd2 = idP . (ii) It follows from Proposition 12.14. (iii) .A ∈ P \ d. By Proposition 12.14-(ii) and by (i) the midpoint O of the segment .[Asd (A)] is fixed by .sd . Hence .O ∈ d, so that the points A and .sd (A) do not belong to the same half-plane separated by d. ⨆ ⨅ Proposition 12.18 Let A and B two distinct points of .P. Then there exists a unique line d such that .sd (A) = B. →



Proof Let O be the midpoint of the segment .[AB] and let .(OA) (resp. .(OB)) be the ray with origin O containing A (resp. B). By axiom .a10 ) there exists a line d →



such that .sd ((OA)) = (OB). Since .d(O, A) = d(O, B), we get .sd (A) = B. By contradiction, let .d ' be a line different from d such that .sd ' (A) = B (hence ' .sd ∩ s = O by Proposition 12.14-(ii)). Then .sd and .sd ' are automorphisms which d permute A and B. In particular, .sd and .sd ' fix the line .r := AB (.r /= d and .r /= d ' ) and they are distinct from .sr . Hence by Proposition 12.13-(iii), .sd and .sd ' transform a half-plane with boundary r into a half-plane with the same boundary r. Let .Hr be any of the half-planes separated by r. If .P ∈ Hr ∩ d then .sd (P ) = P ∈ Hr so that .sd preserves the half-planes separated by r. The same happens for .sd ' and so for their composition .sd ◦ sd ' . The composition .sd ◦ sd ' fixes A and B so that it fixes every point of r by Proposition 12.14-(i) but does not permute the half-planes with boundary r, hence .sd ◦ sd ' /= sr . Therefore by axiom .a9 ), .sd ◦ sd ' = idX . Since −1 ' .s ' = sd ' , we have .sd = sd ' and then .d = d (Proposition 12.17). ⨆ ⨅ d Definition 12.19 The unique line d given by Proposition 12.18 is called the symmetry axis of segment .[AB].




Corollary 12.20 Let (OX) and (OY) be two rays of P with the same origin O. Then there exists a unique line d such that s_d((OX)) = (OY).

Proof By Proposition 12.13-(ii), if s_d((OX)) = (OY), then s_d(O) = O. In particular, such a line d necessarily passes through O. First suppose that (OX) and (OY) do not lie on the same line. Let A ∈ (OX) and B ∈ (OY) be such that d(O, A) = d(O, B) > 0. Then the equality s_d((OX)) = (OY) implies s_d(A) = B, so that d is the line joining O and the midpoint of [AB]. If (OX) = (OY), then d = OX. If (OX) and (OY) lie on the same line but (OX) ≠ (OY), then d is the symmetry axis of the segment [AB]. ∎

Definition 12.21 The unique line d given by Corollary 12.20 is called the bisector line of the rays (OX) and (OY).

Proposition 12.22 Let A and B be two distinct points of P. Then {M ∈ P : d(M, A) = d(M, B)} coincides with the symmetry axis of [AB].

Proof Let d be the symmetry axis of [AB]. If M ∈ d, then d(M, A) = d(s_d(M), s_d(A)) = d(M, B) (since M ∈ d is fixed by s_d). Conversely, let M ∈ P be such that d(M, A) = d(M, B). Let e be the bisector line of the rays (MA) and (MB). Then s_e(A) = B and e = d, by Proposition 12.18. ∎

Definition 12.23 Let d, e be two lines of P. We say that d is perpendicular to e, written d ⊥ e, if d ≠ e and s_d(e) = e. Two rays (OX) and (OY) are perpendicular if the lines OX and OY are perpendicular.

Proposition 12.24 We have d ⊥ e ⟺ e ⊥ d. Moreover, if d ⊥ e then d ∩ e ≠ ∅.

Proof Let A ∈ e \ d. Since s_d(e) = e, B := s_d(A) ∈ e and B ≠ A. The midpoint O of [AB] is fixed by s_d, so that O ∈ d ∩ e. Moreover, d is the symmetry axis of [AB] (Proposition 12.22). Since s_e preserves distances and fixes A and B, for every point M ∈ d we have

d(s_e(M), A) = d(s_e(M), s_e(A)) = d(M, A) = d(M, B) = d(s_e(M), s_e(B)) = d(s_e(M), B).

Hence s_e(M) belongs to the symmetry axis of [AB], so that e ⊥ d. ∎

Remark 12.25 From the proof of Proposition 12.24 we see that the symmetry axis of a segment [AB] (A ≠ B) is the line perpendicular to AB passing through the midpoint O of [AB].

Proposition 12.26 Let A be a point and let d be a line of P. Then there exists one and only one line e perpendicular to d and passing through A.


12 Absolute Plane Geometry

Proof Suppose A ∉ d. The perpendicular line e necessarily passes through B := s_d(A), so that e = AB, which implies the uniqueness of e. Moreover, s_d(AB) = s_d(A)s_d(B) = BA, hence d ⊥ e. If A ∈ d, then e is the bisector line of the rays dA+ and dA− (Corollary 12.20 and Definition 12.21). ∎

Proposition 12.27 Let d, d' be two distinct lines perpendicular to a line e. Then d ∩ d' = ∅.

Proof Suppose by contradiction that there exists A ∈ d ∩ d'. Then through A there would be two lines perpendicular to e, contradicting Proposition 12.26. ∎

Corollary 12.28 Let d, e be two lines of P and let σ be an automorphism of P. Then d ⊥ e if and only if σ(d) ⊥ σ(e).

Proof Let d ⊥ e, O = d ∩ e, and let A, B ∈ e be such that the midpoint of [AB] is O. Then d = {M ∈ P : d(M, A) = d(M, B)}. Hence σ(d) = {N ∈ P : d(N, σ(A)) = d(N, σ(B))}, so that σ(d) ⊥ σ(e). ∎

Proposition 12.29 Let ΔABC and ΔA'B'C' be two triangles of P. Assume that

d(A, B) = d(A', B'),  d(B, C) = d(B', C'),  d(A, C) = d(A', C').

Then there exists a unique automorphism σ ∈ Aut(P) such that

σ(A) = A',  σ(B) = B',  σ(C) = C'.   (12.1)

Moreover, σ is the composition of two or three axial symmetries.

Proof Let σ, τ ∈ Aut(P) be two automorphisms which satisfy the assumptions. Then α := σ^{-1} ∘ τ fixes three non-collinear points A, B and C; hence, by Proposition 12.14-(iii), α = id_P, i.e. σ = τ, and uniqueness is proved. Let s_1 be an axial symmetry such that s_1(A) = A' (s_1 is uniquely determined if A ≠ A') and let B_1 := s_1(B), C_1 := s_1(C). Then d(A', B_1) = d(A, B) = d(A', B'), so that there is an axial symmetry s_2 such that s_2(A') = A' and s_2(B_1) = B'. Let C_2 := s_2(C_1); then

d(A', C_2) = d(A', C_1) = d(A, C) = d(A', C'),
d(B', C_2) = d(B_1, C_1) = d(B, C) = d(B', C').

There are two cases. If C_2 = C', then σ := s_2 ∘ s_1 satisfies (12.1). If C_2 ≠ C', then the line A'B' is the symmetry axis of [C'C_2] (indeed d(A', C_2) = d(A', C') and d(B', C_2) = d(B', C')). If s_3 is the symmetry of axis A'B', then σ := s_3 ∘ s_2 ∘ s_1 is the required automorphism (indeed s_3(C_2) = C'). ∎

Corollary 12.30 Every automorphism of P is the composition of two or three axial symmetries.
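The proof of Proposition 12.29 is constructive. As an illustrative sketch only (not part of the text), it can be run in the Euclidean model R², where axial symmetries are ordinary reflections; the helper names `reflect`, `perp_bisector` and `congruence_maps` are ad hoc:

```python
import math

def reflect(p, a, b):
    """Reflect point p across the line through points a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    t = ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)
    fx, fy = ax + t * dx, ay + t * dy  # foot of the perpendicular from p
    return (2 * fx - px, 2 * fy - py)

def perp_bisector(p, q):
    """Two points spanning the symmetry axis of the segment [pq]."""
    mx, my = (p[0] + q[0]) / 2, (p[1] + q[1]) / 2
    dx, dy = q[0] - p[0], q[1] - p[1]
    return (mx, my), (mx - dy, my + dx)

def close(p, q, eps=1e-9):
    return abs(p[0] - q[0]) < eps and abs(p[1] - q[1]) < eps

def congruence_maps(src, dst):
    """Axes of at most three reflections taking triangle src onto dst,
    following the steps s1, s2, s3 of the proof of Proposition 12.29."""
    pts, axes = list(src), []
    for i, axis_hint in enumerate([None, None, (dst[0], dst[1])]):
        if close(pts[i], dst[i]):
            continue  # this vertex is already in place, skip the reflection
        # steps 1 and 2 use the symmetry axis of [P_i P_i']; step 3 the line A'B'
        axis = axis_hint or perp_bisector(pts[i], dst[i])
        axes.append(axis)
        pts = [reflect(P, *axis) for P in pts]
    return axes, pts
```

Step 2 works because d(A', B_1) = d(A', B') forces the symmetry axis of [B_1 B'] to pass through A', so A' stays fixed, exactly as in the proof.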


Definition 12.31 An automorphism σ ∈ Aut(P) is called a rotation if either σ = id_P or σ has a unique fixed point O, called the centre of σ. By convention, every point of P is the centre of id_P.

Proposition 12.32 Let P be an absolute plane.

(i) The composition of two symmetries whose axes meet at a point O is a rotation with centre O.
(ii) Conversely, for every rotation σ with centre O and for every line d passing through O, there exist two lines d_1 and d_2 passing through O such that σ = s_{d_1} ∘ s_d = s_d ∘ s_{d_2}.

Proof (i) Let d, d_1 be two lines passing through O and let σ := s_d ∘ s_{d_1}. Then O is fixed by σ. If d = d_1 then σ = id_P, which is a rotation. Let d ≠ d_1. We have to prove that O is the only fixed point of σ. Suppose that a point A ≠ O is fixed by σ. Take B := s_{d_1}(A); then s_d(B) = σ(A) = A and s_{d_1}(B) = A. By Proposition 12.18 we conclude that A = B or d = d_1. Then A = B, hence A ∈ d_1 ∩ d = {O}, i.e. A = O.

(ii) If σ = id_P we can take d_1 = d_2 = d. Let σ ≠ id_P. Take A ∈ d \ {O} and B := σ(A). Then d(O, A) = d(O, B), so that there exists a symmetry whose axis d_1 passes through O (and through the midpoint of [AB]) such that s_{d_1}(A) = B. Hence s_{d_1} ∘ σ is an automorphism ≠ id_P which fixes O and A. By Proposition 12.14-(ii) every point of d = OA is fixed by s_{d_1} ∘ σ. By axiom a9), we have s_{d_1} ∘ σ = s_d, i.e. σ = s_{d_1} ∘ s_d. In a completely analogous way one proves that there exists a line d_2 through O such that σ = s_d ∘ s_{d_2}. ∎

Corollary 12.33 If σ is a rotation with centre O and d is a line passing through O, then the automorphisms s_d ∘ σ and σ ∘ s_d are symmetries whose axes pass through O.

Proof By Proposition 12.32 we have σ = s_{d_1} ∘ s_d, i.e. σ ∘ s_d = s_{d_1}, and σ = s_d ∘ s_{d_2}, i.e. s_d ∘ σ = s_{d_2}. ∎

Proposition 12.34 The set R_O of all rotations with centre O is an abelian subgroup of Aut(P).

Proof First we prove that R_O is a subgroup of Aut(P). Let r, r' ∈ R_O be two rotations. We have to show that r' ∘ r^{-1} ∈ R_O. Fix a line d passing through O. By Corollary 12.33, there exist two symmetries with axes d_1, d_2 through O such that r = s_{d_1} ∘ s_d and r' = s_{d_2} ∘ s_d. Then

r' ∘ r^{-1} = s_{d_2} ∘ s_d ∘ (s_{d_1} ∘ s_d)^{-1} = s_{d_2} ∘ s_{d_1},

so that by Proposition 12.32-(i), r' ∘ r^{-1} ∈ R_O.


Now we have to show that r ∘ r' = r' ∘ r for every r, r' ∈ R_O. Let A ∈ P \ {O} and s := s_{OA}. By Corollary 12.33, t := r ∘ s is the symmetry whose axis passes through O such that t(A) = r(A) =: B. Hence r ∘ s(B) = A. Analogously, t' := r' ∘ s is the symmetry whose axis passes through O such that t'(A) = r'(A) =: B'; then r' ∘ s(B') = A. Since R_O is a subgroup of Aut(P), r ∘ r', r' ∘ r ∈ R_O. By Corollary 12.33, u := r' ∘ r ∘ s is the symmetry whose axis passes through O such that u(B) = B'; analogously, v := r ∘ r' ∘ s is the symmetry whose axis passes through O such that v(B') = B. Since u and v fix the midpoint M of [BB'] (Proposition 12.14), their axes coincide with the line OM; hence (by axiom a9)) u = v, i.e. r' ∘ r ∘ s = r ∘ r' ∘ s, which implies r' ∘ r = r ∘ r'. ∎

Corollary 12.35 For every pair of rays (OX) and (OY) with the same origin O, there exists a unique rotation σ with centre O such that σ((OX)) = (OY).

Proof Let s be a symmetry whose axis contains (OX) and let s' be the axial symmetry interchanging the rays (OX) and (OY) (Corollary 12.20). Then σ := s' ∘ s is a rotation with centre O (Proposition 12.32-(i)) such that σ((OX)) = (OY). Let σ' be another rotation with centre O such that σ'((OX)) = (OY). Let A ∈ (OX), A ≠ O. Then σ(A) = σ'(A), so that A is a fixed point of the rotation σ^{-1} ∘ σ' with centre O, which implies σ = σ'. ∎
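Propositions 12.32 and 12.34 can be sanity-checked numerically in the Euclidean model, where the reflection across the line through the origin O at angle θ is linear and the composition of two such reflections is the rotation about O by twice the angle between the axes. This is only an illustration in that model:

```python
import math

def reflect(p, theta):
    """Reflection across the line through the origin at angle theta."""
    c, s = math.cos(2 * theta), math.sin(2 * theta)
    return (c * p[0] + s * p[1], s * p[0] - c * p[1])

def rotate(p, phi):
    c, s = math.cos(phi), math.sin(phi)
    return (c * p[0] - s * p[1], s * p[0] + c * p[1])

p = (0.3, -1.7)
t1, t2 = math.pi / 7, 2 * math.pi / 5
q = reflect(reflect(p, t1), t2)      # s_{d2} o s_{d1}
r = rotate(p, 2 * (t2 - t1))         # rotation about O by twice the axis angle
assert math.hypot(q[0] - r[0], q[1] - r[1]) < 1e-12

# rotations with the same centre commute (Proposition 12.34)
a = rotate(rotate(p, 0.8), 2.1)
b = rotate(rotate(p, 2.1), 0.8)
assert math.hypot(a[0] - b[0], a[1] - b[1]) < 1e-12
```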

Proposition 12.36 Let O and O' be two distinct points of P, and let s be an axial symmetry which interchanges O and O'. Then the map

f : R_O → R_{O'},  f(r) = s ∘ r ∘ s,  ∀ r ∈ R_O,

is a group isomorphism.

Proof Exercise for the reader. ∎

Definition 12.37 Let O be a point of P. The symmetry through O (or the central symmetry through O) is the map σ_O : P → P defined by the following condition: O is the midpoint of the segment [M σ_O(M)], for all M ∈ P.

Proposition 12.38 Let σ_O be the central symmetry through O. Then there exist two axial symmetries s_d and s_e, whose axes d and e are perpendicular lines through O, such that

σ_O = s_d ∘ s_e = s_e ∘ s_d.

In particular, the central symmetry through O is the unique involutive rotation with centre O different from id_P.

Proof Let d, e be two perpendicular lines passing through O; then the axial symmetries s_d, s_e commute. Indeed, take A ∈ e \ {O}; then B = s_d(A) belongs to e by Definition 12.23, so that s_e ∘ s_d(A) = s_e(B) = B and s_d ∘ s_e(A) = s_d(A) = B.


By Corollary 12.35 we have the equality s_d ∘ s_e = s_e ∘ s_d =: r, the unique rotation with centre O such that r((OA)) = (OB). Therefore

(s_e ∘ s_d)² = (s_e ∘ s_d) ∘ (s_d ∘ s_e) = s_e ∘ s_d² ∘ s_e = s_e² = id_P,

i.e. r is an involutive rotation. Furthermore r ≠ id_P because d ≠ e. We shall now prove that each rotation r ≠ id_P with centre O such that r² = id_P coincides with the central symmetry σ_O. Let A ∈ P. If Q is the midpoint of the segment [A r(A)], then r(Q) is the midpoint of the segment [r(A) r(r(A))] = [r(A) A], hence Q is fixed by r. Since r is a rotation, Q = O and r = σ_O. ∎

Proposition 12.39 Let d be a line and let O be a point of an absolute plane P such that O ∉ d. If σ_O is the symmetry through O, then d ∩ σ_O(d) = ∅.

Proof Assume by contradiction that there exists A ∈ d ∩ σ_O(d); then B := σ_O(A) ∈ d ∩ σ_O(d). Since B ≠ A (as A ≠ O), the lines d and σ_O(d) would coincide with the line AB. In particular, O (which is the midpoint of [AB]) would belong to d, a contradiction. ∎

Proposition 12.40 Let d be a line and let A be a point of an absolute plane P such that A ∉ d. Then there exists at least one line d' passing through A such that d ∩ d' = ∅, i.e. d' ‖ d.

Proof Let B be an arbitrary point of d and let O be the midpoint of the segment [AB]. By Proposition 12.39, the line d' := σ_O(d) satisfies the thesis. ∎

12.1.1 Angles in an Absolute Plane

Let P be an absolute plane, let O be a point of P, and let M_O be the set of pairs ((OX), (OY)) of rays emanating from O. Let d = OX be the line containing (OX) and e = OY be the line containing (OY). Denote by:

(a) H_d((OY)) (resp. H̄_d((OY))) the open (resp. closed) half-plane delimited by d which contains (OY),
(b) H_e((OX)) (resp. H̄_e((OX))) the open (resp. closed) half-plane delimited by e which contains (OX).

With each pair ((OX), (OY)) ∈ M_O are associated an open angular sector S̊((OX), (OY)) of sides (OX) and (OY) and a closed angular sector S̄((OX), (OY)), defined by

S̊((OX), (OY)) := H_d((OY)) ∩ H_e((OX)),
S̄((OX), (OY)) := H̄_d((OY)) ∩ H̄_e((OX)),

respectively.







Definition 12.41 Two pairs ((OX), (OY)) and ((O'X'), (O'Y')) of M_O are called congruent if there exists an automorphism σ ∈ Aut(P) such that σ((OX)) = (O'X') and σ((OY)) = (O'Y'). Since Aut(P) is a group, the congruence relation is an equivalence relation on M_O, and its congruence classes are called non-oriented angles. The congruence class of ((OX), (OY)) ∈ M_O will be denoted by ∠XOY.

Since there exists an axial symmetry s which interchanges (OX) and (OY) (Corollary 12.20), we have ∠XOY = ∠YOX. If (OX') and (OY') are the opposite rays of (OX) and (OY) respectively, it is also true that ∠X'OY' = ∠XOY. Indeed, the central symmetry σ_O interchanges (OX) and (OX') as well as (OY) and (OY').

The congruence class ∠XOX of two equal rays (OX) is called the zero angle, denoted by 0; the congruence class ∠XOX' of two opposite rays (OX) and (OX') is called a straight angle, denoted by ω. The study of angles will be simplified by representing them by a pair ((OX), (OY)), where (OX) is fixed and (OY) lies in a fixed closed half-plane delimited by d = OX.



Proposition 12.42 Let (OX) be a ray of P, and let Π be a closed half-plane delimited by the line OX. Then every non-oriented angle of P has a unique representation of the form ((OX), (OY)), where (OY) is a ray contained in Π. In other words, if S_{O,Π} is the set of rays emanating from O and contained in Π, and 𝒜 is the set of non-oriented angles of P, the map S_{O,Π} → 𝒜 defined by (OY) ↦ ∠XOY is bijective.

Proof We have to show that for any non-oriented angle α there exists a unique ray (OY) contained in Π such that α = ∠XOY. If α = ∠X'O'X' = 0, there exists an axial symmetry s_1 such that s_1(O') = O; therefore s_1((O'X')) = (OZ) for some Z ∈ P (Proposition 12.18). Moreover, there exists an axial symmetry s_2 such that s_2((OZ)) = (OX) (Corollary 12.20). Hence σ := s_2 ∘ s_1 is an automorphism such that σ((O'X')) = (OX). Thus the thesis is proved if α = 0. The same proof works for the straight angle ω.

Hence we can assume α ≠ 0, ω. Uniqueness is immediate. Indeed, if ∠XOY = ∠XOZ (with (OY), (OZ) ⊂ Π), then there exists σ ∈ Aut(P) such that σ((OX)) = (OX) (hence σ fixes every point of the line d = OX) and σ((OY)) = (OZ). By axiom a9) we have σ = id_P or σ = s_d. We claim that σ = id_P. Indeed, the rays (OY) and (OZ) are both contained in the closed half-plane Π (but not contained in d, since α ≠ 0, ω), while s_d interchanges the open half-planes delimited by d (Proposition 12.17).

Let α = ∠UAV ≠ 0, ω. It remains to prove that there exists a ray (OY) ⊂ Π such that α = ∠XOY. As above, there exists an axial symmetry s_1 such that s_1(A) = O. Put s_1((AU)) = (OU_1) and s_1((AV)) = (OV_1). There exists a symmetry s_2 whose axis passes through O such that s_2((OU_1)) = (OX). If s_2((OV_1)) ⊂ Π, the ray (OY) := s_2((OV_1)) verifies the thesis. Otherwise, we can take (OY) = s_d(s_2((OV_1))), where d is the line OX. ∎

Corollary 12.43 Let ((OX), (OY)) and ((O'X'), (O'Y')) be two pairs of perpendicular rays. Then ((OX), (OY)) and ((O'X'), (O'Y')) are congruent. We call the non-oriented angle ∠XOY a right angle and denote it by δ.

Proof Let Π = H̄_d((OY)) be the closed half-plane delimited by d = OX containing (OY). The couple ((O'X'), (O'Y')) is congruent to ((OX), (OZ)), with (OZ) ⊂ Π (Proposition 12.42). Since (O'X') and (O'Y') are perpendicular, the same holds for (OX) and (OZ) by Corollary 12.28. By assumption (OX) and (OY) are perpendicular, so that (OY) = (OZ) by Proposition 12.26. ∎

Order Relation on the Set of All Angles

Definition 12.44 We fix a ray (OX) and a closed half-plane Π delimited by OX. By Proposition 12.42 every angle α is of the form ∠XOY (where the ray (OY) ⊂ Π is uniquely determined); therefore we can associate α with the closed angular sector S̄_α delimited by (OX) and (OY); for instance, S̄_0 = (OX) and S̄_ω = Π. Let 𝒜 be the set of all angles of P. We put on 𝒜 the following order relation:

α ≤ β  ⟺  S̄_α ⊆ S̄_β,  ∀ α, β ∈ 𝒜.

Therefore 0 is the smallest element of 𝒜, and ω is the biggest element of 𝒜. In order to prove that ≤ is a total order relation on 𝒜 we need the following result.


Fig. 12.1 Order relation on the set of angles: I

Fig. 12.2 Order relation on the set of angles: II



Proposition 12.45 Assume the above notation and assumptions. Let (OX') be the opposite ray of (OX). Let (OY) be a ray contained in Π, distinct from (OX) and (OX'). Let A ∈ (OX), B ∈ (OY) and A' ∈ (OX') (Fig. 12.1).

(i) Every ray (OZ) contained in Π, such that (OZ) ≠ (OX), (OY), (OX'), meets one and only one of the segments

(a) [AB], if S̄((OX), (OZ)) ⊂ S̄((OX), (OY)),
(b) [A'B], if S̄((OX), (OZ)) ⊃ S̄((OX), (OY)).

(ii) Let (OU) and (OV) be two rays contained in S̄((OX), (OY)) which meet the segment [AB] at the points P and Q respectively (Fig. 12.2). Then

S̄((OX), (OU)) ⊂ S̄((OX), (OV))  ⟺  P ∈ [AQ].

Proof (i) By Proposition 12.6 and Pasch's axiom applied to the triangle ΔABA', the line OZ meets one and only one of the sides [AB] and [A'B], at a point which necessarily belongs to the ray (OZ). Since A, B ∈ H̄_d((OX)) (with d = OY), as well as A, B ∈ H̄_e((OY)) (with e = OX), we have

[AB] ⊂ H̄_d((OX)) ∩ H̄_e((OY)) = S̄((OX), (OY)).

In the same way one sees that [A'B] ⊂ S̄((OY), (OX')). Therefore (OZ) ⊂ S̄((OX), (OY)) or (OZ) ⊂ S̄((OY), (OX')). In other words, S̄((OX), (OY)) (resp. S̄((OY), (OX'))) is the union of all rays with origin O which meet [AB] (resp. [A'B]).

If C := OZ ∩ [AB], by the same argument as above we see that S̄((OX), (OZ)) is the union of all rays with origin O which meet [AC]. In particular,

S̄((OX), (OZ)) ⊂ S̄((OX), (OY)).

If OZ meets [A'B], in the same way we see that S̄((OX'), (OZ)) ⊂ S̄((OX'), (OY)) (and S̊((OX'), (OZ)) ⊂ S̊((OX'), (OY))), so that

S̄((OX), (OZ)) = Π \ S̊((OX'), (OZ)) ⊃ Π \ S̊((OX'), (OY)) = S̄((OX), (OY)).

(ii) By (i) we have S̄((OX), (OU)) ⊂ S̄((OX), (OV)) if and only if (OU) ⊂ S̄((OX), (OV)), i.e. P ∈ [AQ]. ∎

Corollary 12.46 The order relation ≤ on 𝒜 of Definition 12.44 is a total order relation such that every non-empty subset of 𝒜 has a supremum and an infimum.

Proof The first assertion follows from Proposition 12.45-(i). Let α := ∠XOY be as in Proposition 12.45. The function f : 𝒜 → [AB] ∪ [BA'] defined by

f(∠AOZ) = C = OZ ∩ [AB]  if 0 ≤ ∠AOZ ≤ ∠XOY,
f(∠AOZ) = C = OZ ∩ [BA']  if ∠XOY ≤ ∠AOZ ≤ ω,

is strictly increasing and bijective. Therefore there exists a bijective strictly increasing map 𝒜 → [a, b], where [a, b] is a bounded closed interval of R, of which every non-empty subset has a supremum and an infimum. ∎

Sum of Angles

We say that a non-oriented angle γ is the sum of two non-oriented angles α and β, written γ = α + β, if there exist three rays (OX), (OY) and (OZ) with origin O, lying in the same closed half-plane delimited by OX, such that α = ∠XOY, β = ∠YOZ and γ = ∠XOZ, and satisfying one of the following conditions:

(a) S̄((OX), (OZ)) = S̄((OX), (OY)) ∪ S̄((OY), (OZ)), or
(b) the ray (OZ) is the opposite of (OX) (in this case we say that α and β are supplementary and α + β = ω).

If γ = α + β we can write α = γ − β and β = γ − α.


Fig. 12.3 Bisector

The sum of two angles may not be defined; e.g. ω + ω is not defined. For every angle α, α + 0 is always defined and we have α + 0 = α. If n is a positive integer, then n · ∠XOY := ∠XOY + ··· + ∠XOY (n times), whenever the sum is defined. Moreover, 0 · α := 0 for every angle α. The sum of two angles (whenever it is defined) is clearly commutative and associative. The implication α + β = α + γ ⟹ β = γ is immediate.

Bisector of an Angle

In Definition 12.21 we introduced the bisector of two rays (OY) and (OZ) as the unique line d passing through O such that s_d((OY)) = (OZ). If (OY) and (OZ) are not opposite rays, we call the interior bisector of (OY) and (OZ) the ray (OU) of d contained in S̄((OY), (OZ)) (Fig. 12.3).

Clearly we have ∠YOU = ∠UOZ. Since ∠YOZ = ∠YOU + ∠UOZ, then ∠YOZ = 2 · ∠YOU, so we can say that the angle ∠YOU = ∠UOZ bisects ∠YOZ, and write ∠YOU = (1/2) · ∠YOZ. The bisector of a straight angle ∠XOX' is the ray emanating from O, lying in Π and perpendicular at O to OX. Therefore the right angle δ is half of ω, i.e. δ = (1/2) · ω. An angle is called acute if it is < ω/2, obtuse if it is > ω/2.

By induction, for every non-oriented angle α and for every integer n ≥ 1 we can define the angle α_n by means of the formula α = 2^n · α_n; thus α_n will be denoted by 2^{-n} · α.

Measure of Angles

Let α = ∠XOY, where the rays (OX) and (OY) are contained in a closed half-plane Π delimited by OX. For every integer n ≥ 1 consider the angle 2^{-n} · ω and the rays (OY_n^q) ⊂ Π defined by

∠XOY_n^q := q 2^{-n} · ω,  q = 0, 1, …, 2^n.


For fixed n we clearly have

⋃_{q=0}^{2^n − 1} S̄((OY_n^q), (OY_n^{q+1})) = Π.

As n → +∞ the covering {S̄((OY_n^q), (OY_n^{q+1}))} becomes finer and finer, so that we have:

Proposition 12.47 For any α and for any integer n ≥ 1 there exists a unique integer 0 ≤ q_n ≤ 2^n such that

q_n 2^{-n} · ω ≤ α < (q_n + 1) 2^{-n} · ω.

Archimedes' axiom holds true for angles.

Proposition 12.48 For every angle α ≠ 0 there exists a natural number n ≥ 1 such that 2^{-n} · ω ≤ α.

Proof Let A := {2^{-n} · ω | n ≥ 1} ⊂ 𝒜. By Corollary 12.46, A has an infimum ε. Let f : 𝒜 → [0, δ] be defined by f(β) = (1/2) · β. It is easy to see that f is bijective and strictly increasing on the interval [0, δ] of 𝒜, and that f(A) ⊂ A. Hence inf A ≤ inf f(A), i.e. ε ≤ (1/2) ε, which implies ε = 0. ∎

Corollary 12.49 For every angle α let E_α := {q 2^{-n} · ω | n ≥ 1, 0 ≤ q ≤ 2^n, q 2^{-n} · ω ≤ α}. Then

α = sup E_α.

Proof We obviously have α ≥ γ for each γ ∈ E_α. For any angle β with β < α, there exists an integer n ≥ 1 such that 2^{-n} · ω < α − β (Proposition 12.48). For this n, by Proposition 12.47, there is an integer q_n such that q_n 2^{-n} · ω ≤ α < (q_n + 1) 2^{-n} · ω. Hence

β < α − 2^{-n} · ω < (q_n + 1) 2^{-n} · ω − 2^{-n} · ω = q_n 2^{-n} · ω ≤ α.

Since q_n 2^{-n} · ω ∈ E_α, α is the supremum of E_α. ∎

Definition 12.50 A measure on the set of non-oriented angles of P is any strictly increasing function φ : 𝒜 → R⁺ such that

φ(α + β) = φ(α) + φ(β),   (12.2)

for all α, β ∈ 𝒜 whose sum α + β is defined.


Theorem 12.51 For every real number k > 0 there exists a unique measure φ such that φ(ω) = k, and φ is a bijective map 𝒜 → [0, k].

Proof Suppose that φ exists. By (12.2), for every pair of natural numbers (n, q) with q ≤ 2^n, we have

φ(q 2^{-n} · ω) = q 2^{-n} k.   (12.3)

By Corollary 12.49 we also have

φ(α) = sup_{β ∈ E_α} φ(β).   (12.4)

Then (12.3) and (12.4) imply the uniqueness of φ. We observe that the relations (12.3) and (12.4) define a measure, as the reader can easily check. Finally, let

D_ξ = {q 2^{-n} k : q ≤ 2^n, q 2^{-n} k ≤ ξ}

for a fixed number ξ ∈ [0, k]; then α := sup { (x/k) · ω | x ∈ D_ξ } is the only angle which satisfies the equality φ(α) = ξ (if ξ = k then α = ω). ∎

Definition 12.52 If in Theorem 12.51 we take k = π, the measure φ(α) is called the radian measure of α.
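As a hedged numerical illustration only — worked in the Euclidean model, where each angle already carries a real value in [0, ω] — the dyadic construction behind Proposition 12.47 and Theorem 12.51 can be imitated directly; `dyadic_measure` is an ad-hoc name:

```python
import math

def dyadic_measure(alpha, omega=math.pi, k=180.0, n=40):
    """Approximate phi(alpha), the unique measure with phi(omega) = k,
    by the dyadic angle q * 2**-n * omega of Proposition 12.47."""
    q = int((alpha / omega) * 2**n)   # largest q with q * 2**-n * omega <= alpha
    return q * 2.0**-n * k            # phi(q * 2**-n * omega) = q * 2**-n * k

# with k = 180 this recovers degree measure; k = pi gives the radian measure
assert dyadic_measure(math.pi / 2) == 90.0
assert abs(dyadic_measure(math.pi / 3) - 60.0) < 1e-9
```

The truncation error is at most 2^{-n} k, which is the content of the supremum formula (12.4).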

12.1.2 Triangles

Definition 12.53 Let ΔABC be a triangle in an absolute plane P. We denote the angle ∠BAC determined by the rays (AB) and (AC) simply by Â. The supplementary angle ω − ∠BAC of ∠BAC is the angle ∠DAB, where the ray (AD) is the opposite of (AC); it will be called the exterior angle of ΔABC adjacent to A.

Proposition 12.54 Let ΔABC be a triangle in an absolute plane P. Then

d(A, B) = d(A, C)  ⟺  B̂ = Ĉ.

Proof If d(A, B) = d(A, C), there exists a symmetry σ whose axis passes through A and such that σ(B) = C; hence ∠ABC = ∠ACB. Conversely, let ∠ABC = ∠ACB and let A' be the symmetric point of A with respect to the axis d of [BC]. Then the equalities ∠A'BC = ∠ACB = ∠ABC imply that the rays (BA) and (BA') coincide, as well as (CA) and (CA'). Hence A' = A, A ∈ d and, in particular, d(A, B) = d(A, C). ∎


Fig. 12.4 (a) Exterior angle. (b) Sides and angles

Proposition 12.55 Let ΔABC be a triangle in an absolute plane P. Then the exterior angle of ΔABC adjacent to B is strictly greater than Â and Ĉ, i.e.

ω − B̂ > Â, Ĉ.

Proof We prove, for instance, that ω − B̂ > Â. Let (BX) be the ray opposite to (BC). We have to show that ∠ABX > Â. Let σ_O be the symmetry with centre the midpoint O of the side [AB] and let D = σ_O(C) (Fig. 12.4a). Since σ_O is an automorphism, we get ∠BAC = ∠ABD. Since D ∈ S̊((BX), (BA)) we conclude that ∠ABX > ∠ABD = ∠BAC. ∎

Corollary 12.56 The sum of two angles of a triangle is strictly smaller than the straight angle ω.

Proof Assume the notation of Proposition 12.55. Then we have ∠BAC + ∠CBA < ∠ABX + ∠CBA = ω. ∎

Proposition 12.57 Let ΔABC be a triangle in an absolute plane P. Then

d(A, B) > d(A, C)  ⟺  ∠ACB > ∠ABC.

Proof Suppose d(A, B) > d(A, C). Let D ∈ [AB] be such that d(A, D) = d(A, C), by axiom a8) (Fig. 12.4b). By Proposition 12.54, ∠ADC = ∠ACD, and by Proposition 12.55 we get ∠ABC < ∠ADC, which implies ∠ABC < ∠ACB. Conversely, let ∠ACB > ∠ABC. Assume by contradiction that d(A, B) ≤ d(A, C). Then d(A, B) ≠ d(A, C) by Proposition 12.54, and the inequality d(A, B) < d(A, C) would imply (by what was just proved above) ∠ABC > ∠ACB, a contradiction. ∎

Proposition 12.58 Let ΔABC be a triangle in an absolute plane P. Then we have the inequality

d(B, C) < d(A, B) + d(A, C).


Fig. 12.5 Triangular inequality

Fig. 12.6 Parallels



Proof Let D be a point on the ray opposite to (AC), such that d(A, D) = d(A, B) (Fig. 12.5). By Proposition 12.54 we have ∠BDC = ∠DBA < ∠DBC; hence by Proposition 12.57 we get d(B, C) < d(D, C), i.e. d(B, C) < d(A, B) + d(A, C). ∎

Corollary 12.59 For any three points A, B, C of P we have the inequality

d(B, C) ≤ d(A, B) + d(A, C).

Moreover, d(B, C) = d(A, B) + d(A, C) if and only if A ∈ [BC].

Corollary 12.59 states that any absolute plane is a metric space according to Definition 1.153.

Proposition 12.60 Let d = AB, d_1 = AX, d_2 = BY be three distinct lines in an absolute plane P such that ∠XAB = ∠YBZ, as in Fig. 12.6. Then d_1 and d_2 are parallel.

Proof If O is the midpoint of [AB], then d_2 = σ_O(d_1), so that Proposition 12.39 ends the proof. ∎

12.2 The Poincaré Hyperbolic Plane

In this section we give an example (due to Henri Poincaré) of an absolute plane not satisfying Euclid's axiom (⋆).


Fig. 12.7 Hyperbolic geodesics

Definition 12.61 Let

H := {(x, y) ∈ R² | y > 0},
D := {r_a : a ∈ R} ∪ {C_{b,r} : b, r ∈ R, r > 0},

where r_a := {(a, y) ∈ R² : y > 0} and C_{b,r} := {(x, y) ∈ R² : (x − b)² + y² = r², y > 0}. We call the couple P = (H, D) the Poincaré hyperbolic half-plane and the elements of D hyperbolic lines (see Fig. 12.7). We shall denote a point (x, y) ∈ H also by the corresponding complex number z = x + iy.

The main result of this section is the following:

Theorem 12.62 (Poincaré) The couple P = (H, D) of Definition 12.61 is an absolute plane not satisfying Euclid's axiom (⋆).

Proof It is a very easy exercise to prove that P is an incidence structure that satisfies axioms a1) and a2). If z_1 = a + iy_1 ∈ H and z_2 = a + iy_2 ∈ H, only the line x = a contains z_1 and z_2. If z_1 = a + iy_1 ∈ H and z_2 = b + iy_2 ∈ H with a ≠ b, there is one and only one circle passing through z_1 and z_2 having its centre on the real axis y = 0. On the lines r_a we have the natural order (z_1 ≤ z_2 iff y_1 ≤ y_2) and the opposite order. On the lines C_{b,r} we have the counter-clockwise and the clockwise orders (see Fig. 12.7), so that P satisfies axiom a3).

If d = r_a, the half-planes are

H_d⁺ = {z ∈ H : Re(z) > a},  H_d⁻ = {z ∈ H : Re(z) < a},
H̄_d⁺ = {z ∈ H : Re(z) ≥ a},  H̄_d⁻ = {z ∈ H : Re(z) ≤ a}.

If d = C_{b,r} we have

H_d⁺ = {z ∈ H : |z − b| > r},  H_d⁻ = {z ∈ H : |z − b| < r},
H̄_d⁺ = {z ∈ H : |z − b| ≥ r},  H̄_d⁻ = {z ∈ H : |z − b| ≤ r}.

Two points z_1 and z_2 belong to the same half-plane above if and only if the segment [z_1 z_2] does not meet d. Thus Pasch's axiom a4) is easily verified.
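The incidence axioms just checked can be made concrete: the hyperbolic line through two points of H is either a vertical ray r_a or a half-circle C_{b,r}, whose centre b is found by equating the squared distances to (b, 0). A small sketch (the helper name `hyperbolic_line` is hypothetical):

```python
import math

def hyperbolic_line(z1, z2):
    """Return ('r', a) for the vertical line x = a, or ('C', b, r) for the
    half-circle with centre (b, 0) and radius r, through z1, z2 in H."""
    (x1, y1), (x2, y2) = z1, z2
    if abs(x1 - x2) < 1e-12:
        return ('r', x1)
    # (x1 - b)**2 + y1**2 = (x2 - b)**2 + y2**2 determines the centre b
    b = (x1**2 + y1**2 - x2**2 - y2**2) / (2 * (x1 - x2))
    return ('C', b, math.hypot(x1 - b, y1))

assert hyperbolic_line((2.0, 1.0), (2.0, 5.0)) == ('r', 2.0)
assert hyperbolic_line((-3.0, 4.0), (3.0, 4.0)) == ('C', 0.0, 5.0)
```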

442

12 Absolute Plane Geometry

Euclid's axiom (⋆) is clearly not satisfied by P: through a point P which does not belong to a line d there are infinitely many hyperbolic lines parallel to d (for instance, take d = r_0 : x = 0 and P = (2, 1); then the lines C_{b,r} with r² = (2 − b)² + 1 and b > r (i.e. b > 5/4) are parallel to d). ∎

Lemma 12.63 Let z_1, z_2 ∈ H (z_1 ≠ z_2), and let the line d joining z_1 and z_2 have end-points z_1*, z_2* ∈ R ∪ {∞}, chosen in such a way that z_1 lies between z_1* and z_2. Then there exists a unique element τ ∈ PSL_2(R) such that

τ(z_1*) = 0,  τ(z_2*) = ∞,  τ(z_1) = i.   (12.5)

Also τ(z_2) = ri (r > 1), so that the cross-ratio ρ(z_2, z_1, z_1*, z_2*) = r.

Proof Assume that neither z_1* nor z_2* is ∞. We may suppose z_1* > z_2*, otherwise we just relabel. Let

σ(z) = (z − z_1*)/(z − z_2*);

then σ ∈ PSL_2(R), σ(z_1*) = 0, σ(z_2*) = ∞, so that σ maps d to the imaginary axis. If σ(z_1) = ki, then

τ = υ_{1/k} ∘ σ,  where  υ_{1/k}(z) = (1/k) z,

is the required transformation. Uniqueness follows from Exercise 9.7, and as z_1 lies between z_1* and z_2, τ(z_1) = i lies between τ(z_1*) = 0 and τ(z_2), so that τ(z_2) = ri, r > 1; hence ρ(z_2, z_1, z_1*, z_2*) = r (see Definition 9.35). If Re(z_1) = Re(z_2) = α ∈ R, so that z_1* = α and z_2* = ∞, then z_1 = α + ki and

τ(z) = (1/k)(z − α)

is the required transformation. ∎
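The failure of (⋆) in the example above is easy to verify numerically: every C_{b,r} with r² = (2 − b)² + 1 passes through P = (2, 1), and b > r forces the whole half-circle into the region x > 0, hence off d = r_0. A quick check (illustration only):

```python
import math

P = (2.0, 1.0)
for b in (1.3, 1.5, 2.0, 5.0):                 # any b > 5/4 works
    r = math.sqrt((2.0 - b) ** 2 + 1.0)
    assert abs((P[0] - b) ** 2 + P[1] ** 2 - r ** 2) < 1e-12  # P lies on C_{b,r}
    assert b - r > 0           # leftmost point b - r > 0: C_{b,r} misses x = 0
```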

Definition 12.64 (Hyperbolic Distance) Let z_1, z_2 be two points of H and let γ(t) = (x(t), y(t)), t_0 ≤ t ≤ t_1, be any regular parametrization of the hyperbolic segment [z_1 z_2]. The hyperbolic distance d(z_1, z_2) is given by

d(z_1, z_2) = ∫_{t_0}^{t_1} √(ẋ(t)² + ẏ(t)²) / y(t) dt.   (12.6)

It is well known from elementary calculus that the right-hand side of (12.6) is independent of the regular parametrization chosen, so that d(z_1, z_2) depends only on z_1 and z_2.
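For instance, along a vertical segment the integral (12.6) can be approximated by a midpoint Riemann sum; this numerical sanity check (an illustration, not part of the text) agrees with the closed form log(y_2/y_1) computed in case (i):

```python
import math

def vertical_distance(y1, y2, steps=100000):
    """Midpoint Riemann sum for (12.6) along the segment x = a, y = t."""
    lo, hi = min(y1, y2), max(y1, y2)
    h = (hi - lo) / steps
    return sum(h / (lo + (i + 0.5) * h) for i in range(steps))

assert abs(vertical_distance(1.0, math.e) - 1.0) < 1e-8
```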


Fig. 12.8 Hyperbolic distance

(i) If z_1 = α + iy_1 and z_2 = α + iy_2 (α ∈ R), choose the regular parametrization x = α, y = t, so that

d(z_1, z_2) = ∫_{y_1}^{y_2} dt / t = log(y_2 / y_1).   (12.7)

(ii) If z_1, z_2, z_1*, z_2* are defined as in Lemma 12.63 with Re(z_1) ≠ Re(z_2), we choose the regular parametrization

x = c + r cos θ,  y = r sin θ,  θ ∈ (0, π),  r = (z_1* − z_2*)/2,  c = (z_1* + z_2*)/2.

According to Fig. 12.8 let

z_1 = c + r cos θ_1 + i r sin θ_1,  z_2 = c + r cos θ_2 + i r sin θ_2.   (12.8)

Since ẋ(θ) = −r sin θ and ẏ(θ) = r cos θ, we get

d(z_1, z_2) = ∫_{θ_1}^{θ_2} dθ / sin θ = [ log tg(θ/2) ]_{θ_1}^{θ_2}.   (12.9)

Taking into account the relations

tg(θ/2) = sin θ / (1 + cos θ) = √((1 − cos θ)/(1 + cos θ)),  θ ∈ (0, π),

if z = c + r cos θ + i(r sin θ) ∈ C_{c,r} we easily get

tg(θ/2) = |z_1* − z| / |z_2* − z|.   (12.10)


Hence

[ log tg(θ/2) ]_{θ_1}^{θ_2} = log(|z_1* − z_2| / |z_2* − z_2|) − log(|z_1* − z_1| / |z_2* − z_1|)
 = log( (|z_1* − z_2| / |z_2* − z_2|) : (|z_1* − z_1| / |z_2* − z_1|) ).   (12.11)

Since z_1, z_2, z_1* and z_2* lie on the same circle, the cross-ratio (Definition 9.35)

ρ(z_2, z_1, z_1*, z_2*) = (z_2 − z_1*)/(z_2 − z_2*) : (z_1 − z_1*)/(z_1 − z_2*)

is a real number (Exercise 9.9). Moreover, by Lemma 12.63 there is a Moebius transform τ such that

τ(z_1*) = 0,  τ(z_2*) = ∞,  τ(z_1) = i,  τ(z_2) = ri,  r = ρ(z_2, z_1, z_1*, z_2*) > 1.   (12.12)

Hence from (12.9) and (12.11) we have d(z_1, z_2) = log(ρ(z_2, z_1, z_1*, z_2*)). Since

ρ(z_2, z_1, z_2*, z_1*) = 1/ρ(z_2, z_1, z_1*, z_2*),
ρ(z_1, z_2, z_1*, z_2*) = 1/ρ(z_2, z_1, z_1*, z_2*),
ρ(z_1, z_2, z_2*, z_1*) = ρ(z_2, z_1, z_1*, z_2*),

the formula

d(z_1, z_2) = |log(ρ(z_2, z_1, z_1*, z_2*))|   (12.13)

holds true when interchanging z_1 and z_2 as well as z_1* and z_2*. Also formula (12.7) can be put in the form (12.13): if z_1 = α + iy_1, z_2 = α + iy_2, z_1* = α and z_2* = ∞, then

|log(ρ(z_2, z_1, z_1*, z_2*))| = |log(y_2 / y_1)|.

In both cases (i) and (ii) (of Definition 12.64) we have the inequality

d(z_1, z_2) ≥ |log(y_2 / y_1)|.   (12.14)

In case (i) the inequality (12.14) coincides with the equality (12.7). In case (ii), since

1/sin θ ≥ |ctg θ|,  ∀ θ ∈ (0, π),


we have (z_1 = x_1 + iy_1, z_2 = x_2 + iy_2 as in (12.8))

d(z_1, z_2) = ∫_{θ_1}^{θ_2} dθ / sin θ ≥ ∫_{θ_1}^{θ_2} |ctg θ| dθ ≥ | ∫_{θ_1}^{θ_2} ctg θ dθ |
 = |log(sin θ_2 / sin θ_1)| = |log((r sin θ_2)/(r sin θ_1))| = |log(y_2 / y_1)|.

Thus axioms a5), a6) are clearly satisfied.

Now we check axiom a7). Let z_1 = α + iy_1, z_2 = α + iy_2, z_3 = α + iy_3 be three distinct points on a hyperbolic line x = α ∈ R such that z_1 < z_2 < z_3, i.e. y_1 < y_2 < y_3. Then

d(z_1, z_2) = log(y_2/y_1),  d(z_2, z_3) = log(y_3/y_2),  d(z_1, z_3) = log(y_3/y_1),

therefore

d(z_1, z_2) + d(z_2, z_3) = log(y_2/y_1) + log(y_3/y_2) = log((y_2 y_3)/(y_1 y_2)) = log(y_3/y_1)

= d(z1 , z3 ). Conversely, if .z1 < z3 and .z2 ∈ z1 z3 with .z2 /= z1 , .z3 are such that .d(z1 , z2 ) + d(z2 , z3 ) = d(z1 , z3 ), then .z2 ∈ [z1 z3 ], i.e. .z1 < z2 < z3 . Otherwise, if .z3 < z1 < z2 , we have just showed that .d(z3 , z1 ) + d(z1 , z2 ) = d(z3 , z2 ) which implies .d(z1 , z2 ) + d(z2 , z3 ) + d(z1 , z2 ) = d(z2 , z3 ), i.e. .d(z1 , z2 ) = 0 contradicting the hypothesis. Similarly, one sees that .z1 < z3 < z2 contradicts the hypothesis. Assume the notation of (12.8). Let .z3 = c+r cos θ3 +ir sin θ3 ∈ Cc,r be between .z1 and .z2 , i.e. .θ1 < θ3 < θ2 . Let .τ be defined as in (12.12), then .τ (z3 ) = ki with .1 < k < r. Therefore d(z1 , z3 ) + d(z3 , z2 ) = |log ρ(z3 , z1 , z1∗ , z3∗ )| + |log ρ(z2 , z3 , z3∗ , z2∗ )| = |log ρ(z3 , z1 , z1∗ , z2∗ )| + |log ρ(z2 , z3 , z1∗ , z2∗ )| = |log ρ(τ (z3 ), τ (z1 ), τ (z1∗ ), τ (z2∗ ))| + |log ρ(τ (z2 ), τ (z3 ), τ (z1∗ ), τ (z2∗ ))|

.

= log k + log

r k

= log r = |log ρ(z2 , z1 , z1∗ , z2∗ )| = d(z1 , z2 ). The converse can be proved as above. By (12.14) we have .

lim d(z1 , z2 ) =

y2 →0+

lim d(z1 , z2 ) = +∞.

y2 →+∞


The map $r \to \mathbb{R}^+$, $z_2 \mapsto d(z_1, z_2)$, defined on any line $r$ emanating from $z_1$, is continuous with respect to $z_2$ (on $r$ and $\mathbb{R}^+$ there is the euclidean topology) and bijective, i.e. $d(z_1, z_2)$ verifies axiom $(a_8)$.

Finally, we have to prove the superposition axioms $(a_9)$ and $(a_{10})$. By Exercise 9.10, Moebius transforms of $\mathrm{PSL}_2(\mathbb{R})$ preserve the half-plane $\mathbb{H}$ as well as the real line $\mathrm{Im}(z) = 0$. Also the maps

$$\tau(z) = \frac{a\bar z + b}{c\bar z + d}, \qquad ad - bc < 0, \tag{12.15}$$

which are the compositions of Moebius transforms of $\mathrm{GL}_2^-(\mathbb{R}) := \{A \in \mathrm{GL}_2(\mathbb{R}) : \det(A) < 0\}$ with the automorphism $z \mapsto \bar z$ of $\mathbb{C}$, preserve the half-plane $\mathbb{H}$ as well as the real line $\mathrm{Im}(z) = 0$.

Lemma 12.65 The Moebius transforms of $\mathrm{PSL}_2(\mathbb{R})$ and the transforms (12.15) are automorphisms of $\mathbb{P}$, i.e. they map hyperbolic lines to hyperbolic lines and preserve the distances.

Proof The assertions concerning the Moebius transforms of $\mathrm{PSL}_2(\mathbb{R})$ follow from Exercise 9.10, Corollary 9.36 and formula (12.13). The reader can proceed as in Exercise 9.8, taking into account that the transforms (12.15) are compositions of $z \mapsto a\bar z + b$ ($a$, $b \in \mathbb{R}$, $a < 0$) and of $z \mapsto \dfrac{1}{\bar z}$. Moreover one can easily check that the transforms $\tau(z)$ of the form (12.15) change the cross-ratio into its complex conjugate:

$$\rho(\tau(z_1), \tau(z_2), \tau(z_3), \tau(z_4)) = \overline{\rho(z_1, z_2, z_3, z_4)}.$$

Hence

$$d(\tau(z_1), \tau(z_2)) = |\log\rho(\tau(z_2), \tau(z_1), \tau(z_1)^*, \tau(z_2)^*)| = |\log\overline{\rho(z_2, z_1, z_1^*, z_2^*)}| = |\log\rho(z_2, z_1, z_1^*, z_2^*)| = d(z_1, z_2). \qquad ⨆ ⨅$$
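As a sanity check on Lemma 12.65 (an illustration, not the book's proof), the sketch below applies one transform of type (12.15) and compares distances computed with the classical closed form $\cosh d = 1 + |z - w|^2/(2\,\mathrm{Im}\,z\,\mathrm{Im}\,w)$:

```python
import math

def d_hyp(z, w):
    # classical closed form for the half-plane distance
    return math.acosh(1 + abs(z - w) ** 2 / (2 * z.imag * w.imag))

def anti_moebius(a, b, c, d):
    # transform (12.15): z -> (a*conj(z) + b) / (c*conj(z) + d), ad - bc < 0
    assert a * d - b * c < 0
    return lambda z: (a * z.conjugate() + b) / (c * z.conjugate() + d)

tau = anti_moebius(1.0, 2.0, 3.0, 1.0)       # ad - bc = 1 - 6 = -5 < 0
z, w = 0.3 + 1.7j, -2.0 + 0.4j
assert tau(z).imag > 0 and tau(w).imag > 0   # the half-plane H is preserved
assert abs(d_hyp(tau(z), tau(w)) - d_hyp(z, w)) < 1e-12
```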

Corollary 12.66 Let $a$, $k \in \mathbb{R}$, $k > 0$. Let

$$M_1(z) = -\bar z + 2a, \qquad M_2(z) = \frac{k}{\bar z - a} + a. \tag{12.16}$$

Then $M_1(z)$ is the euclidean symmetry whose axis is the line $\mathrm{Re}(z) = a$, and $M_2(z)$ is the symmetry whose axis is the circle of centre $a$ and radius $r = \sqrt{k}$. These transforms, restricted to $\mathbb{H}$, are called hyperbolic symmetries.
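Corollary 12.66 can be illustrated numerically. In the sketch below the values of $a$ and $k$ are hypothetical; the distance is again checked against the classical closed form for the half-plane metric:

```python
import math

def d_hyp(z, w):
    # classical closed form: cosh d = 1 + |z - w|^2 / (2 Im z Im w)
    return math.acosh(1 + abs(z - w) ** 2 / (2 * z.imag * w.imag))

a, k = 1.5, 4.0   # hypothetical: axis Re(z) = 1.5, circle of radius sqrt(k) = 2
M1 = lambda z: -z.conjugate() + 2 * a          # (12.16)
M2 = lambda z: k / (z.conjugate() - a) + a     # (12.16)

z, w = 0.2 + 0.9j, 3.0 + 2.5j
for M in (M1, M2):
    assert abs(d_hyp(M(z), M(w)) - d_hyp(z, w)) < 1e-12

# M2 fixes its axis pointwise: points of H on the circle |z - a| = sqrt(k)
p = a + 2 * complex(math.cos(1.0), math.sin(1.0))
assert abs(M2(p) - p) < 1e-12
assert abs(M1(a + 3.1j) - (a + 3.1j)) < 1e-12  # M1 fixes Re(z) = a
```

Both maps are of the form (12.15), since $M_1$ has determinant $-1$ and $M_2$ has determinant $-k < 0$.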


Lemma 12.67 For all $z$, $w \in \mathbb{H}$, $\tau(z, w) := \dfrac{|z - w|}{|z - \bar w|} \in [0, 1)$ is invariant with respect to $\mathrm{PSL}_2(\mathbb{R})$, i.e.

$$\tau(\sigma(z), \sigma(w)) = \tau(z, w), \qquad \forall\, \sigma \in \mathrm{PSL}_2(\mathbb{R}).$$

Proof This follows from the formula

$$|\sigma(z) - \sigma(w)| = |z - w|\sqrt{|\sigma'(z)\sigma'(w)|}$$

(here $\sigma'$ denotes the derivative of $\sigma$), which is easily proved by direct computation. ⨆ ⨅

Lemma 12.68 Let $z$, $w \in \mathbb{P}$. Then

$$d(z, w) = \log\left(\frac{1 + \dfrac{|z - w|}{|z - \bar w|}}{1 - \dfrac{|z - w|}{|z - \bar w|}}\right).$$

Proof We may suppose that $z$, $w$ and $r$ are as $z_1$, $z_2$ and $r$ (respectively) in Lemma 12.63, so that $d(z, w) = \log r$. Also

$$\tau(z, w) = \tau(i, ri) = \frac{|1 - r|}{1 + r} = \frac{r - 1}{r + 1} = \frac{e^{d(z,w)} - 1}{e^{d(z,w)} + 1}$$

and therefore

$$d(z, w) = \log\left(\frac{1 + \tau(z, w)}{1 - \tau(z, w)}\right) = \log\left(\frac{1 + \dfrac{|z - w|}{|z - \bar w|}}{1 - \dfrac{|z - w|}{|z - \bar w|}}\right). \qquad ⨆ ⨅$$
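Lemma 12.68 can be verified pointwise. The following minimal check (assuming, as an independent reference, the classical closed form $\cosh d = 1 + |z - w|^2/(2\,\mathrm{Im}\,z\,\mathrm{Im}\,w)$) confirms the formula:

```python
import math

def d_hyp(z, w):
    # classical closed form for the half-plane distance
    return math.acosh(1 + abs(z - w) ** 2 / (2 * z.imag * w.imag))

def d_lemma(z, w):
    # Lemma 12.68: d(z, w) = log((1 + t)/(1 - t)) with t = |z - w| / |z - conj(w)|
    t = abs(z - w) / abs(z - w.conjugate())
    return math.log((1 + t) / (1 - t))

z, w = 0.7 + 0.8j, -1.2 + 2.3j
assert abs(d_lemma(z, w) - d_hyp(z, w)) < 1e-12
```

The two expressions agree because $|z - \bar w|^2 - |z - w|^2 = 4\,\mathrm{Im}\,z\,\mathrm{Im}\,w$.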

Lemma 12.69 For any hyperbolic line $d$ there exists a hyperbolic symmetry $s_d$.

Proof The existence of $s_d$ is given by Corollary 12.66. Let $\sigma \ne \mathrm{id}_{\mathbb{P}}$ be an automorphism of $\mathbb{P}$ which fixes every point of $d$. By Proposition 12.14-(ii) (which does not require superposition axioms), $\sigma$ does not have any other fixed point. By Lemma 12.63 there exists an automorphism $\tau$ of $\mathbb{P}$ such that $\tau(d)$ is the imaginary half-axis $\mathrm{Re}(z) = 0$, $\mathrm{Im}(z) > 0$. Then $\gamma := \tau \circ \sigma \circ \tau^{-1}$ fixes every point of $\tau(d)$.


Since $d(\gamma(z), \gamma(w)) = d(z, w)$, we have

$$\log\left(\frac{1 + \dfrac{|\gamma(z) - \gamma(w)|}{|\gamma(z) - \overline{\gamma(w)}|}}{1 - \dfrac{|\gamma(z) - \gamma(w)|}{|\gamma(z) - \overline{\gamma(w)}|}}\right) = \log\left(\frac{1 + \dfrac{|z - w|}{|z - \bar w|}}{1 - \dfrac{|z - w|}{|z - \bar w|}}\right).$$

For every $w \in \tau(d)$ we have $\gamma(w) = w$, hence

$$\log\left(\frac{1 + \dfrac{|\gamma(z) - w|}{|\gamma(z) - \bar w|}}{1 - \dfrac{|\gamma(z) - w|}{|\gamma(z) - \bar w|}}\right) = \log\left(\frac{1 + \dfrac{|z - w|}{|z - \bar w|}}{1 - \dfrac{|z - w|}{|z - \bar w|}}\right).$$

By Lemma 12.68, taking into account that $t \mapsto \log\left(\dfrac{1 + t}{1 - t}\right)$, $0 \le t < 1$, is injective, we get

$$\frac{|\gamma(z) - w|}{|\gamma(z) - \bar w|} = \frac{|z - w|}{|z - \bar w|}. \tag{12.17}$$

Since the symmetry $s_{\tau(d)}(z) = -\bar z$ satisfies (12.17), and the triangles $\Delta z w \bar w$ and $\Delta \gamma(z) w \bar w$ are congruent (with a common side), we have $\gamma(z) = s_{\tau(d)}(z)$ and $\sigma(z) = s_d(z)$. ⨆ ⨅

Thus axiom $a_9$) is satisfied by Lemma 12.69.

Lemma 12.70 Axiom $(a_{10})$ is satisfied.

Proof As in the proof of Lemma 12.69, we may assume that one of the hyperbolic rays lies on a euclidean line $r$ parallel to the imaginary axis and the other is the arc $\overrightarrow{(AP)}$ on the semicircle $\Gamma$ as in Fig. 12.9. Since the line $r$ meets $\Gamma$, the point $H$ lies between $P$ and $Q$. Consider the hyperbolic rays $\overrightarrow{(AH)}$ and $\overrightarrow{(AZ)}$ lying on $r$ and emanating from $A$. Let $p$ and $q$ be the abscissas of $P$ and $Q$ respectively, so that the centre of $\Gamma$ is $\left(\dfrac{p+q}{2}, 0\right)$ and its radius is $\dfrac{q-p}{2}$. Furthermore $A = h + i\sqrt{(q-h)(h-p)}$ and

$$[AH] : z = h + it, \quad 0 \le t \le \sqrt{(q-h)(h-p)},$$

$$\overrightarrow{(AZ)} : z = h + it, \quad \sqrt{(q-h)(h-p)} \le t < +\infty.$$

Fig. 12.9 Hyperbolic bisector

Let

$$s_e(z) = \frac{(q-p)(q-h)}{\bar z - q} + q, \qquad s_f(z) = \frac{(q-p)(h-p)}{\bar z - p} + p.$$

Then $s_e(H) = P$ and

$$\begin{aligned}
s_e(A) &= \frac{(q-p)(q-h)}{h - q - i\sqrt{(q-h)(h-p)}} + q \\
&= \frac{(q-p)(q-h)\left[h - q + i\sqrt{(q-h)(h-p)}\right] + q(h-q)(p-q)}{(h-q)(p-q)} \\
&= h + i\sqrt{(q-h)(h-p)} = A.
\end{aligned}$$

The hyperbolic line $e$ is the semicircle of centre $(q, 0)$ and radius $\sqrt{(q-h)(q-p)}$. In a similar way we have $s_f(A) = A$. Also $\lim_{t \to +\infty} s_f(h + it) = p$, hence $s_e(\overrightarrow{(AH)}) = \overrightarrow{(AP)}$ and $s_f(\overrightarrow{(AZ)}) = \overrightarrow{(AP)}$. The hyperbolic line $f$ is the semicircle of centre $(p, 0)$ and radius $\sqrt{(h-p)(q-p)}$. ⨆ ⨅

Thus the proof of Theorem 12.62 is complete.

⨆ ⨅
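The computations in the proof of Lemma 12.70 can be replayed with concrete numbers. In the sketch below the values of $p$, $q$, $h$ are hypothetical, and the symmetries $s_e$, $s_f$ are written with the conjugate as in (12.16):

```python
import math

# hypothetical data: semicircle Gamma through P = (p, 0) and Q = (q, 0),
# vertical line r at abscissa h with p < h < q, and A the point r meets Gamma
p, q, h = -1.0, 3.0, 1.5
A = h + 1j * math.sqrt((q - h) * (h - p))

se = lambda z: (q - p) * (q - h) / (z.conjugate() - q) + q
sf = lambda z: (q - p) * (h - p) / (z.conjugate() - p) + p

H = complex(h, 0)
assert abs(se(H) - p) < 1e-12      # se(H) = P
assert abs(se(A) - A) < 1e-12      # A lies on the axis of se
assert abs(sf(A) - A) < 1e-12      # ... and on the axis of sf
# sf sends the upward ray from A on r toward P (the limit in the proof)
assert abs(sf(h + 1e8j) - p) < 1e-6
```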

12.3 Exercises

Exercise 12.1 Let $\mathbb{P}$ be an absolute plane. Use Corollary 12.59 to prove that a map $\sigma : \mathbb{P} \to \mathbb{P}$ is an automorphism of $\mathbb{P}$ if and only if $\sigma$ is bijective and $d(\sigma(A), \sigma(B)) = d(A, B)$ for all $A$, $B \in \mathbb{P}$.

Exercise 12.2 Let $d$ be a line of an absolute plane $\mathbb{P}$, and let $A \in \mathbb{P} \setminus d$. Let $M = d \cap e$, where $e$ is the perpendicular to $d$ passing through $A$. Prove the following assertions:

1. $d(A, N) \ge d(A, M)$ for all $N \in d$.


2. For any ray $d_M^\pm$ the map $d_M^\pm \to \mathbb{R}$, $N \mapsto d(A, N)$, is continuous (with respect to euclidean topologies), strictly increasing and such that $\lim_{d(M,N) \to +\infty} d(A, N) = +\infty$.

The distance $d(A, d)$ from $A$ to $d$ is defined by $d(A, d) := d(A, M)$.

Exercise 12.3 Let $d$ be a line of an absolute plane $\mathbb{P}$. The circle $C(O, r)$ of centre $O$ and radius $r$ is defined in the usual way,

$$C(O, r) := \{P \in \mathbb{P} : d(P, O) = r\}.$$

Prove that (use Exercise 12.2)

$$C(O, r) \cap d \ne \emptyset \iff d(O, d) \le r.$$

Furthermore $d \cap C(O, r)$ consists of exactly one point iff $d(O, d) = r$, and of two distinct points iff $d(O, d) < r$.

Exercise 12.4 Let $C(O, r)$ and $C(O', r')$ be two circles in an absolute plane $\mathbb{P}$. Then

$$C(O, r) \cap C(O', r') \ne \emptyset \iff |r - r'| \le d(O, O') \le r + r'.$$

Furthermore $C(O, r) \cap C(O', r')$ consists of exactly one point iff $d(O, O') = |r - r'|$ or $d(O, O') = r + r'$, and of two distinct points iff $|r - r'| < d(O, O') < r + r'$.

Chapter 13

Cayley--Klein Geometries

13.1 Euclidean Metric from a Projective Point of View

In Chap. 12 we introduced the Poincaré half-plane as a model of plane hyperbolic geometry. It is a remarkable fact that the hyperbolic distance may be expressed by the cross-ratio (12.13), i.e. by means of a projective quantity. In the first half of the nineteenth century J. V. Poncelet (1788–1867) discovered that some euclidean notions might be formulated in terms of projective geometry. In order to show these connections we have to consider the euclidean plane immersed into the complex projective plane.

We fix the immersion $j_0 : \mathbb{A}^2(\mathbb{R}) \to \mathbb{P}^2(\mathbb{C})$, $j_0(x, y) = [1, x, y]$ for all $(x, y) \in \mathbb{A}^2(\mathbb{R})$. We recall from Definition 10.102 that $I = [0, 1, i]$, $J = [0, 1, -i]$ are the cyclic points (with respect to $j_0$), and that the lines of the pencils with base locus $I$ and $J$ are the isotropic lines. We extend the standard euclidean metric to a "complex-valued distance"

$$d(A, B) = \left[\left(\frac{a_1}{a_0} - \frac{b_1}{b_0}\right)^2 + \left(\frac{a_2}{a_0} - \frac{b_2}{b_0}\right)^2\right]^{1/2}, \qquad A = [a_0, a_1, a_2],\ B = [b_0, b_1, b_2],\ a_0 b_0 \ne 0, \tag{13.1}$$

where $\sqrt{z} > 0$ if $z > 0$. The locus of the points which have distance 0 from a fixed point $A = [a_0, a_1, a_2]$ is determined by the equation

$$(x_1 a_0 - x_0 a_1)^2 + (x_2 a_0 - x_0 a_2)^2 = 0.$$

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2024. L. Bădescu, E. Carletti, Lectures on Geometry, La Matematica per il 3+2 158, https://doi.org/10.1007/978-3-031-51414-2_13


Such a locus consists of the lines

$$x_1 a_0 - x_0 a_1 + i(x_2 a_0 - x_0 a_2) = 0, \qquad x_1 a_0 - x_0 a_1 - i(x_2 a_0 - x_0 a_2) = 0,$$

which are the isotropic lines of the pencil with base locus $A$. Every pencil (whose base locus is a proper point) contains two isotropic lines, which are the locus of the points whose distance from the base locus is 0. Two points $P$, $Q$ lying on an isotropic line but distinct from the cyclic points have distance 0, and the distance of two points is not defined if one of them is cyclic. Cyclic points and isotropic lines play an essential role in the projective setting of metrical notions. E. N. Laguerre (1834–1886) proved that the angle

$$F(x, y)^2 - F(x, x)F(y, y) > 0. \tag{13.13}$$

If $\xi$, $\eta$ do not separate the pair $x$, $y$, i.e. $\rho(x, y, \xi, \eta) > 0$, by (13.10) we have

$$\frac{F(x, y) + \sqrt{F(x, y)^2 - F(x, x)F(y, y)}}{F(x, y) - \sqrt{F(x, y)^2 - F(x, x)F(y, y)}} > 0. \tag{13.14}$$

The inequality (13.14) is equivalent to

$$0 < \left(F(x, y) + \sqrt{F(x, y)^2 - F(x, x)F(y, y)}\right)\left(F(x, y) - \sqrt{F(x, y)^2 - F(x, x)F(y, y)}\right) = F(x, x)F(y, y). \tag{13.15}$$

Then by (13.13) and (13.15) we get

$$\frac{F^2(x, y)}{F(x, x)F(y, y)} > 1.$$

As in case (a), changing the sign of the coordinates of $y$ so that $F(x, y) > 0$, we obtain

$$\frac{F(x, y)}{\sqrt{F(x, x)F(y, y)}} > 1.$$

In the same way as (13.11) one has

$$\log\rho(x, y, x + \lambda_0 y, x + \lambda_1 y) = 2\log\left(\frac{F(x, y)}{\sqrt{F(x, x)F(y, y)}} + \sqrt{\frac{F^2(x, y)}{F(x, x)F(y, y)} - 1}\right).$$

Taking into account the relation

$$\operatorname{arccosh}(z) = \log\left(z + \sqrt{z^2 - 1}\right), \qquad z \in \mathbb{R},\ z > 1, \tag{13.16}$$

we end the proof of case (b). ⨆ ⨅


Corollary 13.5 Under the notation and hypothesis of Lemma 13.4, in case (a)

$$d_{ell}(x, y) := 2k_e \arccos\left(\frac{F(x, y)}{\sqrt{F(x, x)F(y, y)}}\right), \qquad k_e > 0, \tag{13.17}$$

defines a metric on $\mathbb{P}^1(\mathbb{R})$ called an elliptic metric. In particular we have $0 \le d_{ell}(x, y) \le \pi k_e$. In case (b)

$$d_{hyp}(x, y) = 2k_h \operatorname{arccosh}\left(\frac{F(x, y)}{\sqrt{F(x, x)F(y, y)}}\right), \qquad k_h > 0, \tag{13.18}$$

defines a metric on each of the open segments with endpoints $\xi$ and $\eta$, called a hyperbolic metric. In this case $0 \le d_{hyp}(x, y) < +\infty$. Both formulas are valid if $x = y$, the distances being 0. Furthermore we can assume that $d_{hyp}(x, \xi) = d_{hyp}(x, \eta) = \infty$ for $x \ne \xi$, $\eta$.

Proof In case (a), since $\log(\alpha + \sqrt{\alpha^2 - 1})$ is a pure imaginary complex number by (13.12), we may take $k = -ik_e$ with $k_e > 0$, so that $d_{ell}(x, y)$ is a non-negative real number. In case (b), since the right-hand side of (13.9) is non-negative, we put $k_h = k > 0$. The last assertion follows from the relations $\rho(x, \xi, \xi, \eta) = \infty$ and $\rho(x, \eta, \xi, \eta) = 0$ by Remark 13.2. ⨆ ⨅

Example 13.6 Consider the symmetric bilinear form $F : \mathbb{R}^2 \times \mathbb{R}^2 \to \mathbb{R}$ defined by

$$F(x, y) = \begin{cases} x_0 y_0 + x_1 y_1 & \text{elliptic case,} \\ x_0 y_0 - x_1 y_1 & \text{hyperbolic case,} \end{cases} \qquad x = (x_0, x_1),\ y = (y_0, y_1) \in \mathbb{R}^2.$$

Then the elliptic metric on $\mathbb{P}^1(\mathbb{R})$ is given by

$$d_{ell}(x, y) = 2k_e \arccos\left(\frac{x_0 y_0 + x_1 y_1}{\sqrt{(x_0^2 + x_1^2)(y_0^2 + y_1^2)}}\right). \tag{13.19}$$

The elliptic distance (13.19) coincides with the euclidean length of the arc $\beta$ as in Fig. 13.1. In particular the length of an elliptic line is $2\pi k_e$.

Let $F(x, x) = x_0^2 - x_1^2$. Denote by $I$ each of the two segments with endpoints $[1, 1]$ and $[1, -1]$. If $x$, $y$ belong to the same $I$ and $x \ne [1, 1], [1, -1]$, $y \ne [1, 1], [1, -1]$, then the hyperbolic metric on $I \setminus \{[1, 1], [1, -1]\}$ is given by

$$d_{hyp}(x, y) = 2k_h \operatorname{arccosh}\left(\frac{x_0 y_0 - x_1 y_1}{\sqrt{(x_0^2 - x_1^2)(y_0^2 - y_1^2)}}\right). \tag{13.20}$$
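Example 13.6 can be probed numerically. The sketch below (with illustrative constants) checks the maximal elliptic distance and the additivity of (13.20) along the segment $I$, using the identity $d_{hyp}([1, a], [1, b]) = 2k_h|\operatorname{artanh} a - \operatorname{artanh} b|$, a consequence of (13.20) not stated in the text:

```python
import math

ke = kh = 0.5   # illustrative constants

def d_ell(x, y):
    # elliptic metric (13.19), x = (x0, x1), y = (y0, y1)
    c = (x[0]*y[0] + x[1]*y[1]) / math.sqrt((x[0]**2 + x[1]**2) * (y[0]**2 + y[1]**2))
    return 2 * ke * math.acos(max(-1.0, min(1.0, c)))

def d_hyp(x, y):
    # hyperbolic metric (13.20), valid for x, y in the same segment I
    c = (x[0]*y[0] - x[1]*y[1]) / math.sqrt((x[0]**2 - x[1]**2) * (y[0]**2 - y[1]**2))
    return 2 * kh * math.acosh(c)

# the maximal elliptic distance pi*ke is attained at "orthogonal" points
assert abs(d_ell((1.0, 0.0), (0.0, 1.0)) - math.pi * ke) < 1e-12

# additivity along I, via d_hyp([1,a],[1,b]) = 2*kh*|artanh(a) - artanh(b)|
a, b, c = -0.3, 0.2, 0.8
x, y, z = (1.0, a), (1.0, b), (1.0, c)
assert abs(d_hyp(x, y) + d_hyp(y, z) - d_hyp(x, z)) < 1e-12
assert abs(d_hyp(x, z) - 2 * kh * abs(math.atanh(a) - math.atanh(c))) < 1e-12
```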

13.2 Projective Metrics on P1(R)

Fig. 13.1 Elliptic metric

Fig. 13.2 Hyperbolic metric

The reader can prove that the hyperbolic distance $d_{hyp}(x, y)$ is the euclidean area of the hyperbolic sector as in Fig. 13.2. In particular the length of a hyperbolic line is infinite.

Definition 13.7 (Euclidean Metric) The euclidean metric on $\mathbb{A}^1(\mathbb{R})$ should be associated with the degenerate conic $x_0^2 = 0$, i.e. with $\xi = \eta = [0, 1]$; but, in this case, we have $\rho(x, y, \xi, \eta) = 1$, i.e. $\ell(x, y) = 0$ for all $x$, $y$ with $x \ne y$. Nevertheless we may obtain the euclidean (or parabolic) metric in the following way. Consider the non-degenerate quadratic form

$$F_\varepsilon(z_0, z_1) = z_0^2 + \varepsilon z_1^2, \qquad \varepsilon \in \mathbb{R} \setminus \{0\},$$


and let

$$d_\varepsilon^+(x, y) = 2k_\varepsilon \arccos\left(\frac{x_0 y_0 + \varepsilon x_1 y_1}{\sqrt{(x_0^2 + \varepsilon x_1^2)(y_0^2 + \varepsilon y_1^2)}}\right), \qquad \varepsilon > 0,$$

$$d_\varepsilon^-(x, y) = 2k_\varepsilon \operatorname{arccosh}\left(\frac{x_0 y_0 + \varepsilon x_1 y_1}{\sqrt{(x_0^2 + \varepsilon x_1^2)(y_0^2 + \varepsilon y_1^2)}}\right), \qquad \varepsilon < 0,$$

be the corresponding elliptic ($\varepsilon > 0$) and hyperbolic ($\varepsilon < 0$) metrics. Choose $k_\varepsilon = \dfrac{1}{2\sqrt{|\varepsilon|}}$. From the relation

$$\arccos\alpha = \arcsin\sqrt{1 - \alpha^2}, \qquad 0 \le \alpha \le 1, \tag{13.21}$$

we have for $\varepsilon > 0$

$$d_\varepsilon^+(x, y) = \frac{1}{\sqrt{\varepsilon}} \arcsin\sqrt{\frac{\varepsilon (x_0 y_1 - x_1 y_0)^2}{(x_0^2 + \varepsilon x_1^2)(y_0^2 + \varepsilon y_1^2)}}.$$

Since $\lim_{\alpha \to 0} \dfrac{\arcsin(\alpha)}{\alpha} = 1$, we get

$$\lim_{\varepsilon \to 0^+} d_\varepsilon^+(x, y) = \sqrt{\left(\frac{x_1}{x_0} - \frac{y_1}{y_0}\right)^2} = \left|\frac{x_1}{x_0} - \frac{y_1}{y_0}\right|. \tag{13.22}$$

Similarly, from the formula

$$\operatorname{arccosh}\alpha = \operatorname{arcsinh}\sqrt{\alpha^2 - 1}, \qquad 1 \le \alpha, \tag{13.23}$$

we have for $\varepsilon < 0$

$$d_\varepsilon^-(x, y) = \frac{1}{\sqrt{|\varepsilon|}} \operatorname{arcsinh}\sqrt{\frac{|\varepsilon| (x_0 y_1 - x_1 y_0)^2}{(x_0^2 + \varepsilon x_1^2)(y_0^2 + \varepsilon y_1^2)}}.$$

Since $\lim_{\alpha \to 0} \dfrac{\operatorname{arcsinh}(\alpha)}{\alpha} = 1$, we get

$$\lim_{\varepsilon \to 0^-} d_\varepsilon^-(x, y) = \sqrt{\left(\frac{x_1}{x_0} - \frac{y_1}{y_0}\right)^2} = \left|\frac{x_1}{x_0} - \frac{y_1}{y_0}\right|. \tag{13.24}$$
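The limit (13.22) is easy to observe numerically. A small sketch (with hypothetical sample points) shows $d_\varepsilon^+$ approaching the euclidean distance $|x_1/x_0 - y_1/y_0|$ as $\varepsilon \to 0^+$:

```python
import math

def d_eps_plus(x, y, eps):
    # elliptic metric for F_eps with k_eps = 1/(2*sqrt(eps)), eps > 0
    c = (x[0]*y[0] + eps * x[1]*y[1]) / math.sqrt(
        (x[0]**2 + eps * x[1]**2) * (y[0]**2 + eps * y[1]**2))
    return (1 / math.sqrt(eps)) * math.acos(max(-1.0, min(1.0, c)))

x, y = (1.0, 2.0), (1.0, -0.5)           # affine points with x1/x0 = 2, y1/y0 = -0.5
target = abs(x[1] / x[0] - y[1] / y[0])  # euclidean distance
for eps in (1e-2, 1e-4, 1e-6):
    assert abs(d_eps_plus(x, y, eps) - target) < 100 * eps  # converges as eps -> 0+
```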


Therefore both relations (13.22) and (13.24) give the euclidean metric on $\mathbb{A}^1(\mathbb{R})$. According to Klein [20] the euclidean metric is the "tangent metric" at $\varepsilon = 0$ to the metrics associated with the absolutes $z_0^2 + \varepsilon z_1^2 = 0$, $\varepsilon \in \mathbb{R} \setminus \{0\}$.

13.3 Projective Metrics of P2(R) and P3(R)

Elliptic, hyperbolic and parabolic (i.e. euclidean) metrics on $\mathbb{P}^1(\mathbb{R})$ are the only projective metrics of dimension 1. Each of them is determined by the choice of an absolute and by the corresponding group of projectivities which fix the absolute and with respect to which the metric is invariant. This procedure may be generalized to $\mathbb{P}^n(\mathbb{R})$ with $n \ge 2$.

Definition 13.8 We rename a conic of $\mathbb{P}^2(\mathbb{R})$ or a quadric of $\mathbb{P}^3(\mathbb{R})$ a 2-quadric or a 3-quadric respectively. A non-singular $n$-quadric $\widetilde F$ (with $n = 2, 3$) will be called an absolute $n$-quadric of $\mathbb{P}^n(\mathbb{R})$. We can define a distance on a suitable subset $X$ of $\mathbb{P}^n(\mathbb{R}) \setminus V_+(\widetilde F)$ as follows:

$$d(x, y) := k \log \rho(x, y, \xi, \eta), \qquad k \in \mathbb{C}^*, \quad x, y \in X, \tag{13.25}$$
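As an illustration of (13.25), the following sketch assumes the absolute is the conic $x^2 + y^2 = 1$ and $k = 1/2$ (the classical Beltrami–Klein disk; this specific choice is not spelled out in the text) and computes the distance by intersecting the line $xy$ with the absolute:

```python
import math

def klein_distance(u, v, k=0.5):
    # d(u, v) = k * |log rho(u, v, xi, eta)| as in (13.25), where xi, eta are
    # the intersections of the line uv with the absolute conic x^2 + y^2 = 1
    dx, dy = v[0] - u[0], v[1] - u[1]
    # parametrize the line as u + t*(v - u); the absolute gives a quadratic in t
    A = dx * dx + dy * dy
    B = 2 * (u[0] * dx + u[1] * dy)
    C = u[0] ** 2 + u[1] ** 2 - 1
    disc = math.sqrt(B * B - 4 * A * C)
    t1, t2 = (-B + disc) / (2 * A), (-B - disc) / (2 * A)
    # cross-ratio of the collinear points with parameters 0 (= u), 1 (= v), t1, t2
    rho = ((0 - t1) / (0 - t2)) / ((1 - t1) / (1 - t2))
    return k * abs(math.log(rho))

# the distance is additive along a chord, and on a diameter it reduces to artanh
u, v, w = (0.0, 0.0), (0.3, 0.2), (0.6, 0.4)
assert abs(klein_distance(u, v) + klein_distance(v, w) - klein_distance(u, w)) < 1e-12
assert abs(klein_distance((0.0, 0.0), (0.5, 0.0)) - math.atanh(0.5)) < 1e-12
```

The additivity reflects the multiplicativity of the cross-ratio for collinear points with the same $\xi$, $\eta$.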

where $\xi$ and $\eta$ are the (possibly complex-conjugate) intersections of the line $xy$ with $V_+(\widetilde F)$ and $k$ is fixed. The relative group of isometries consists of the projectivities of $\mathbb{P}^n(\mathbb{R})$ which fix $\widetilde F$ (see (10.64)). If $n = 2$, the lines through a point $P \in X$ tangent to $\widetilde F$ are called the isotropic lines of the pencil with base locus $P$. They may be real or complex-conjugate lines. If $x$, $y \in X$ lie on an isotropic line then $\xi = \eta$, so that $d(x, y) = 0$. If $n = 3$ we define an isotropic cone with vertex $P \in X$ as the tangent cone $C_P(\widetilde F)$. It may be a real cone or an imaginary cone (having only its vertex real). Furthermore, if $x \in X$ and $y \in V_+(\widetilde F)$, then $d(x, y) = \infty$. We have to choose a suitable set of lines

$$\mathcal{L}_X \subset \{r \cap X : r \text{ a projective line}\},$$

and if $n = 3$, also a suitable set of planes

$$\mathcal{P}_X \subset \{\sigma \cap X : \sigma \text{ a projective plane}\}.$$

Let $r$, $s \in \mathcal{L}_X$ be two lines which meet at a point $P$ of $X$ and let $\sigma$, $\tau \in \mathcal{P}_X$ be two planes passing through a line $t$ of $X$. The angle