This textbook directly continues the first volume of a course of geometry (M. M. Postnikov.
Lectures in Geometry: Semester 1. Analytic Geometry. Moscow, Mir Publishers, 1981) based on lectures read by the author at Moscow University for students specializing in mathematics. It contains 27 lectures, each a nearly exact reproduction of an original lecture. It treats linear algebra, with elementary differential geometry of curves and surfaces in three-dimensional space added to pave the way for further discussions.
M. M. POSTNIKOV
LECTURES
IN
GEOMETRY
SEMESTER 2
LINEAR ALGEBRA AND DIFFERENTIAL GEOMETRY
MOSCOW «NAUKA»
Main Editorial Board for Physical and Mathematical Literature
M. POSTNIKOV
LECTURES IN GEOMETRY SEMESTER II
LINEAR ALGEBRA AND DIFFERENTIAL GEOMETRY
Translated from the Russian by Vladimir Shokurov
MIR PUBLISHERS MOSCOW
First published 1982 Revised from the 1979 Russian edition
In English
© Main Editorial Board for Physical and Mathematical Literature, Nauka Publishers, 1979
© English translation, Mir Publishers, 1982
PREFACE
This book is a direct continuation of the author’s previous book* and is akin to it in being a nearly faithful record of the lectures delivered by the author in the second semester of the first year at the Mathematics-Mechanics Faculty of Moscow State University named after M. V. Lomonosov to mathematical students (a course in Linear Algebra and Analytic Geometry). Naturally, in the selection of the material and the order of presentation the author was guided by the same considerations as in the first semester (see the Preface in [1]). The number of lectures in the book is explained by the fact that although the curriculum assigns 32 lectures to the course, in practice it is impossible to deliver more than 27 lectures.

The course in Linear Algebra and Analytic Geometry is just a part of a single two-year course in geometry, and much in this book is accounted for, as regards the choice of the material and its accentuation, by orientation to the second year, devoted to the differential geometry of manifolds. In particular, it has proved possible (although it is not envisaged by the curriculum) to transfer part of the propaedeutic material of the third semester (the elementary differential geometry of curves and surfaces in three-dimensional space) to the second-semester course, and this has

* M. M. Postnikov. Lectures in Geometry: Semester 1. Analytic Geometry. Moscow, Nauka Publishers, 1979 (English translation, Mir Publishers, Moscow, 1981, referred to as [1] in what follows).
substantially facilitated the third-semester course (not only for the lecturer but, what is of course more important, also for the students). At the same time, as experience has shown, this material appeals to the students and on the whole they learn it well already in the second semester.

M. M. Postnikov
October 27, 1977
CONTENTS

Preface 5

Lecture 1 11
Vector spaces. Subspaces. Intersection of subspaces. Linear spans. A sum of subspaces. The dimension of a subspace. The dimension of a sum of subspaces. The dimension of a linear span

Lecture 2 19
Matrix rank theorem. The rank of a matrix product. The Kronecker-Capelli theorem. Solution of systems of linear equations

Lecture 3 28
Direct sums of subspaces. Decomposition of a space as a direct sum of subspaces. Factor spaces. Homomorphisms of vector spaces. Direct sums of spaces

Lecture 4 36
The conjugate space. Dual spaces. A second conjugate space. The transformation of a conjugate basis and of the coordinates of covectors. Annulets. The space of solutions of a system of homogeneous linear equations

Lecture 5 47
An annulet of an annulet and annulets of direct summands. Bilinear functionals and bilinear forms. Bilinear functionals in a conjugate space. Mixed bilinear functionals. Tensors

Lecture 6 58
Multiplication of tensors. The basis of a space of tensors. Contraction of tensors. The rank space of a multilinear functional

Lecture 7
The rank of a multilinear functional. Functionals and permutations. Alternation

Lecture 8
Skew-symmetric multilinear functionals. External multiplication. Grassmann algebra. External sums of covectors. Expansion of skew-symmetric functionals with respect to the external products of covectors of a basis

Lecture 9
The basis of a space of skew-symmetric functionals. Formulas for the transformation of the basis of that space. Multivectors. The external rank of a skew-symmetric functional. Multivector rank theorem. Conditions for the equality of multivectors

Lecture 10
Cartan’s divisibility theorem. Plücker relations. The Plücker coordinates of subspaces. Planes in an affine space. Planes in a projective space and their coordinates

Lecture 11
Symmetric and skew-symmetric bilinear functionals. A matrix of symmetric bilinear functionals. The rank of a bilinear functional. Quadratic functionals and quadratic forms. Lagrange theorem

Lecture 12
Jacobi theorem. Quadratic forms over the fields of complex and real numbers. The law of inertia. Positively definite quadratic functionals and forms

Lecture 13
Second degree hypersurfaces of an n-dimensional projective space. Second degree hypersurfaces in a complex and a real projective space. Second degree hypersurfaces of an n-dimensional affine space. Second degree hypersurfaces in a complex and a real affine space

Lecture 14
The algebra of linear operators. Operators and mixed bilinear functionals. Linear operators and matrices. Invertible operators. The adjoint operator. The Fredholm alternative. Invariant subspaces and induced operators

Lecture 15 151
Eigenvalues. Characteristic roots. Diagonalizable operators. Operators with simple spectrum. The existence of a basis in which the matrix of an operator is triangular. Nilpotent operators

Lecture 16 160
Decomposition of a nilpotent operator as a direct sum of cyclic operators. Root subspaces. Normal Jordan form. The Hamilton-Cayley theorem

Lecture 17 170
Complexification of a linear operator. Proper subspaces belonging to characteristic roots. Operators whose complexification is diagonalizable

Lecture 18 179
Euclidean and unitary spaces. Orthogonal complements. The identification of vectors and covectors. Annulets and orthogonal complements. Bilinear functionals and linear operators. Elimination of arbitrariness in the identification of tensors of different types. The metric tensor. Lowering and raising of indices

Lecture 19 191
Adjoint operators. Self-adjoint operators. Skew-symmetric and skew-Hermitian operators. Analogy between Hermitian operators and real numbers. Spectral properties of self-adjoint operators. The orthogonal diagonalizability of self-adjoint operators

Lecture 20 199
Bringing quadratic forms into canonical form by orthogonal transformation of variables. Second degree hypersurfaces in a Euclidean point space. The minimax property of eigenvalues of self-adjoint operators. Orthogonally diagonalizable operators

Lecture 21 208
Positive operators. Isometric operators. Unitary matrices. Polar factorization of invertible operators. A geometrical interpretation of polar factorization. Parallel translations and centroaffine transformations. Bringing a unitary operator into diagonal form. A rotation of an n-dimensional Euclidean space as a composition of rotations in two-dimensional planes

Lecture 22 221
Smooth functions. Smooth hypersurfaces. Gradient. Derivatives with respect to a vector. Vector fields. Singular points of a vector field. A module of vector fields. Potential and irrotational vector fields. The rotation of a vector field. The divergence of a vector field. Vector analysis. Hamilton’s symbolic vector. Formulas for products. Compositions of operators

Lecture 23 243
Continuous, smooth, and regular curves. Equivalent curves. Regular curves in the plane and graphs of functions. The tangential hyperplane of a hypersurface. The length of a curve. Curves in the plane. Curves in a three-dimensional space

Lecture 24 262
Projections of a curve onto the coordinate planes of the moving n-hedron. Frenet’s formulas for a curve in the n-dimensional space. Representation of a curve by its curvatures. Regular surfaces. Examples of surfaces

Lecture 25 276
Vectors tangential to a surface. The tangential plane. The first quadratic form of a surface. Mensuration of lengths and angles on a surface. Diffeomorphisms of surfaces. Isometries and the intrinsic geometry of a surface. Examples. Developables

Lecture 26 291
The tangential plane and the normal vector. The curvature of a normal section. The second quadratic form of a surface. The indicatrix of Dupin. Principal curvatures. The second quadratic form of a graph. Ruled surfaces of zero curvature. Surfaces of revolution

Lecture 27 310
Weingarten’s derivation formulas. Coefficients of connection. The Gauss theorem. The necessary and sufficient conditions of isometry

Subject Index
Lecture 1
Vector spaces • Subspaces • Intersection of subspaces • Linear spans • A sum of subspaces • The dimension of a subspace • The dimension of a sum of subspaces • The dimension of a linear span

In this semester we shall transfer the results obtained in the first semester to the case of any n. In the main we shall follow the same plan of presentation as before.

Recall (see Definition 1 in Lecture 1 of [1]) that a vector (or linear) space over a field K is a set V whose members are called vectors and where the operation of addition x, y ↦ x + y and the operation x ↦ kx of multiplication by any number k ∈ K are defined. It is also required that under addition V should be an Abelian group and that four natural axioms should hold for multiplication by numbers in K. The concepts of a linear combination of vectors and of linearly dependent or independent families and sets of vectors have meaning in such a space. A space V is said to be finite-dimensional if there exists in it a finite basis, i.e. a family of vectors in terms of which any vector of V can be linearly expressed in a unique way. The number of vectors is the same in all the bases. It is called the dimension of the vector space V and designated by the symbol dim V.

Let V be an arbitrary finite-dimensional vector space.

Definition 1. A subset P of a space V is said to be its subspace if every linear combination k1x1 + ... + kmxm of any vectors x1, ..., xm ∈ P belongs to P.

If b = (b1, ..., bn), then the vector equation

(5)    x1a1 + ... + xmam = b

is equivalent to the n numerical equations

       a11x1 + ... + am1xm = b1,
(6)    . . . . . . . . . . . . .
       a1nx1 + ... + amnxm = bn.
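The equivalence between the vector equation (5) and the coordinate equations (6) is easy to verify numerically. A minimal sketch with NumPy (the particular vectors a1, a2 and b are hypothetical, chosen so that b = 3a1 + 2a2):

```python
import numpy as np

# Hypothetical data: m = 2 vectors a1, a2 in K^n with n = 3.
a1 = np.array([1.0, 0.0, 2.0])
a2 = np.array([0.0, 1.0, 1.0])
b  = np.array([3.0, 2.0, 8.0])   # b = 3*a1 + 2*a2

# The vector equation x1*a1 + x2*a2 = b is the same as the n = 3
# numerical equations (6): each coordinate gives one equation.
# Stacking a1, a2 as columns gives the matrix of coefficients.
A = np.column_stack([a1, a2])
x, residuals, rank, _ = np.linalg.lstsq(A, b, rcond=None)

# The system is compatible, and the solution recovers x1 = 3, x2 = 2:
assert np.allclose(A @ x, b)
assert np.allclose(x, [3.0, 2.0])
```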
Relations (6) form a system of n nonhomogeneous linear equations in m unknowns. This system is compatible, i.e. has at least one solution x1, ..., xm, if and only if equation (5) holds, i.e. if the vector b is linearly expressible in terms of the vectors a1, ..., am. On the other hand, by Theorem 1 the rank of the set of vectors a1, ..., am is equal to the rank of the matrix of the coefficients

       a11 ... am1
(7)    . . . . . .
       a1n ... amn

of system (6), and the rank of the set of vectors a1, ..., am, b is equal to the rank of the augmented matrix of the coefficients

       a11 ... am1  b1
(8)    . . . . . . . .
       a1n ... amn  bn

obtained from the matrix (7) by adding the column of constant terms. This proves the following theorem.

Theorem 2 (Kronecker-Capelli theorem). The system of linear equations (6) is compatible if and only if the rank of the matrix of its coefficients (7) is equal to the rank of the augmented matrix (8).

Let system (6) be compatible. How can all of its solutions be found? Let r be the rank of the matrix (7). On interchanging the equations and renaming (if necessary) the unknowns we may assume without loss of generality that

       | a11 ... ar1 |
(9)    | . . . . . . |  ≠ 0.
       | a1r ... arr |

Since under the hypothesis system (6) is compatible, the rank of the matrix (8) is, by the Kronecker-Capelli theorem, also equal to r. This means (in view of condition (9)) that
the first r rows of the matrix (8) (i.e. the first r equations (6)) are linearly independent and that any other row of the matrix (8) (any other equation (6)) is a linear combination of them. Therefore system (6) is equivalent to the system
       a11x1 + ... + ar1xr + ... + am1xm = b1,
(10)   . . . . . . . . . . . . . . . . . . . .
       a1rx1 + ... + arrxr + ... + amrxm = br,

consisting of its first r equations, i.e. that any solution of system (6) is a solution of system (10) and conversely any solution of system (10) is a solution of system (6). Thus everything has reduced to the solution of system (10), consisting of linearly independent equations. To solve this system we rewrite it in the form

       a11x1 + ... + ar1xr = b1 − a_{r+1,1}x_{r+1} − ... − a_{m1}x_m,
(11)   . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
       a1rx1 + ... + arrxr = br − a_{r+1,r}x_{r+1} − ... − a_{mr}x_m.

If we assign to the unknowns x_{r+1}, ..., xm arbitrary values, then system (11) becomes a system of r equations in r unknowns x1, ..., xr with a nonzero (by (9)) determinant Δ. We can therefore find the unknowns x1, ..., xr in a unique way by Cramer’s formulas, which we know from the algebra course. It is clear that this method gives us all the solutions of system (10) (i.e. of system (6)).

In practice, there is of course no need to interchange the equations in advance and to rename the unknowns. The procedure for solving an arbitrary system of linear equations (6) is therefore as follows:

Stage 1. Computing the minors of the matrix of the coefficients (7), we find its rank r, simultaneously discovering at least one nonzero minor Δ of order r.

Stage 2. Bordering the found minor in the matrix (8), we check that the rank of that matrix is also equal to r. (If it is greater than r, i.e. equal to r + 1, then system (6) is incompatible.) At this stage it is obviously sufficient to compute only n − r minors of order r + 1.
Stage 3. The minor Δ contains the coefficients of r unknowns in r equations. Leaving only these equations, assigning to the other m − r unknowns arbitrary values and obtaining in this way a system of r equations in r unknowns with a nonzero determinant, we solve that system by Cramer’s formulas. Thus we find the values of the remaining r unknowns.

The values obtained at Stage 3 for the unknowns x1, ..., xm are solutions of system (6), and any solution of this system can be obtained in this way.
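The three stages can be sketched numerically. A minimal illustration with NumPy (the system below is hypothetical, built so that its rank is r = 2 and the third equation is the sum of the first two, leaving one free unknown for Stage 3):

```python
import numpy as np

# Hypothetical 3x3 system of rank 2: row 3 = row 1 + row 2.
A = np.array([[1.0, 2.0, 1.0],
              [0.0, 1.0, 1.0],
              [1.0, 3.0, 2.0]])
b = np.array([4.0, 1.0, 5.0])

# Stages 1-2: compare the rank of (7) with the rank of the augmented (8).
r = np.linalg.matrix_rank(A)
r_aug = np.linalg.matrix_rank(np.column_stack([A, b]))
assert r == r_aug == 2          # compatible by the Kronecker-Capelli theorem

# Stage 3: keep the first r equations, assign an arbitrary value to the
# free unknown x3, and solve the r x r system by Cramer's formulas.
t = 7.0                          # arbitrary value for the free unknown x3
M = A[:2, :2]                    # the r x r minor with nonzero determinant
rhs = b[:2] - A[:2, 2] * t       # move the x3 terms to the right-hand side
det = np.linalg.det(M)
x1 = np.linalg.det(np.column_stack([rhs, M[:, 1]])) / det
x2 = np.linalg.det(np.column_stack([M[:, 0], rhs])) / det

# (x1, x2, t) satisfies all three original equations:
assert np.allclose(A @ np.array([x1, x2, t]), b)
```

Each choice of t gives one solution; letting t range over all values gives every solution of the system.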
Lecture 3
Direct sums of subspaces • Decomposition of a space as a direct sum of subspaces • Factor spaces • Homomorphisms of vector spaces • Direct sums of spaces Let