LIE GROUPS, LIE ALGEBRAS, AND COHOMOLOGY
by
Anthony W. Knapp
Mathematical Notes 34
PRINCETON UNIVERSITY PRESS PRINCETON, NEW JERSEY
1988
Published by Princeton University Press, 41 William Street, Princeton, New Jersey 08540
In the United Kingdom: Princeton University Press, Chichester, West Sussex

Copyright © 1988 by Princeton University Press
All Rights Reserved

Library of Congress Cataloging-in-Publication Data
Knapp, Anthony W.
  Lie groups, Lie algebras, and cohomology.
  (Mathematical notes ; 34)
  Bibliography: p.  Includes index.
  1. Lie groups.  2. Lie algebras.  3. Homology theory.  I. Title.  II. Series: Mathematical notes (Princeton University Press) ; 34.
  QA387.K57  1988  512'.55  88-2514
  ISBN 0-691-08498-X (pbk.)

Printed in the United States of America

Princeton University Press books are printed on acid-free paper and meet the guidelines for permanence and durability of the Committee on Production Guidelines for Book Longevity of the Council on Library Resources

The Princeton Mathematical Notes are edited by Luis A. Caffarelli, John N. Mather, and Elias M. Stein
To My Family With Love
CONTENTS
Preface                                                              ix

Chapter I.  Lie Groups and Lie Algebras
  1. SO(3) and its Lie algebra                                        3
  2. Exponential of a matrix                                          7
  3. Closed linear groups                                            10
  4. Manifolds and Lie groups                                        15
  5. Closed linear groups as Lie groups                              18
  6. Homomorphisms                                                   25
  7. An interesting homomorphism                                     31
  8. Representations                                                 36

Chapter II.  Representations and Tensors
  1. Abstract Lie algebras                                           45
  2. Tensor product of two representations                           48
  3. Representations on the tensor algebra                           56
  4. Representations on exterior and symmetric algebras              64
  5. Extension of scalars - complexification                         73
  6. Universal enveloping algebra                                    75
  7. Symmetrization                                                  86
  8. Tensor products over an algebra                                 92

Chapter III.  Representations of Compact Groups
  1. Abstract theory                                                 99
  2. Irreducible representations of SU(2)                           111
  3. Root space decomposition for U(n)                              114
  4. Roots and weights for U(n)                                     118
  5. Theorem of the Highest Weight for U(n)                         124
  6. Weyl group for U(n)                                            129
  7. Analytic form of Borel-Weil Theorem for U(n)                   138

Chapter IV.  Cohomology of Lie Algebras
  1. Motivation from differential forms                             153
  2. Motivation from extensions                                     161
  3. Definition and examples                                        166
  4. Computation from any free resolution                           175
  5. Lemmas for Koszul resolution                                   188
  6. Exactness of Koszul resolution                                 190

Chapter V.  Homological Algebra
  1. Projectives and injectives                                     199
  2. Functors                                                       210
  3. Derived functors                                               230
  4. Connecting homomorphisms and long exact sequences              238
  5. Long exact sequence for derived functors                       245
  6. Naturality of long exact sequence                              258

Chapter VI.  Application to Lie Algebras
  1. Projectives and injectives                                     266
  2. Lie algebra homology and cohomology                            282
  3. Poincaré duality                                               288
  4. Kostant's Theorem for U(n)                                     292
  5. Harish-Chandra isomorphism for U(n)                            301
  6. Casselman-Osborne Theorem                                      317

Chapter VII.  Relative Lie Algebra Cohomology
  1. Motivation for how to construct representations                325
  2. (g,K) modules                                                  334
  3. The algebra R(g,K)                                             340
  4. The category C(g,K)                                            360
  5. The functors P and I                                           365
  6. Projectives and injectives                                     373
  7. Homology, cohomology, and Ext                                  384
  8. Standard resolutions                                           386
  9. Poincaré duality                                               396
  10. Revised setting for Kostant's Theorem                         402
  11. Borel-Weil-Bott Theorem for U(n)                              408

Chapter VIII.  Representations of Noncompact Groups
  1. Structure theory for U(m,n)                                    417
  2. Cohomological induction                                        428
  3. Vanishing above the middle dimension                           437
  4. First reduction below the middle dimension                     448
  5. Second reduction below the middle dimension                    460
  6. Vanishing below the middle dimension                           468
  7. Effect on infinitesimal character                              473
  8. Effect on multiplicities of K types                            479

Notes                                                               490

References                                                          500

Index of Notation                                                   503

Index                                                               506
PREFACE
This material is based on a one-semester course given at SUNY Stony Brook in Fall 1986. The audience consisted largely of graduate students knowledgeable about geometry, acquainted with tensor products of vector spaces, and having little or no background in Lie groups. The objective was to go in one semester from the beginnings of Lie theory to the frontier in algebraic constructions of group representations. The course consisted of much of the first seven chapters of the present book, done in a slightly different order. Actually the course was designed backwards from a key algebraic computation (7.79) that yields the Borel-Weil-Bott Theorem, and it ended up including whatever seemed appropriate as preliminary material. Chapter VIII was added to indicate how the computation (7.79) leads to the frontier. The topic of interest here is the representation theory of compact Lie groups and of their natural noncompact analogs, the noncompact semisimple Lie groups. Special linear groups, symplectic groups, and isometry groups of quadratic forms give examples of noncompact semisimple Lie groups. Group representations are homomorphisms of a group into invertible linear transformations on a complex vector space, possibly infinite-dimensional and possibly with some continuity assumption. Understanding of group representations allows one to take advantage of symmetries in various problems in analysis and algebra. For a compact group, the irreducible representations (those with no nontrivial closed subspaces invariant under the representation) are finite-dimensional. In the case of any compact connected Lie group, the Borel-Weil Theorem gives a way of realizing all such irreducible representations in
spaces of holomorphic functions, and a generalization due to Bott gives alternative realizations in spaces of differential forms having only dz̄'s present.
In 1966 Langlands conjectured that a version of the Borel-Weil-Bott Theorem should provide a realization of "discrete series" representations of noncompact semisimple Lie groups.
"Discrete series" are certain irreducible infinite-
dimensional representations that are known to play a fundamental role in the construction of all irreducible representations.
Over a period of some years ending in the
mid 1970's, Schmid proved the Langlands conjecture and several variants of it. In 1978 Zuckerman found that he could bypass a number of difficult analytic problems of Schmid's by phrasing the Langlands conjecture in algebraic terms and using constructions in homological algebra.
He found that similar
algebraic constructions yielded other interesting representations for a variety of groups.
His technique, now known as
cohomological induction, has become a fundamental tool in representation theory. Our goal is to start with elementary Lie theory and to arrive at the definition, elementary properties, and first applications of cohomological induction, all the while developing the computational techniques that are so important in handling Lie groups.
A byproduct is that we are able to
study homological algebra with a significant application in mind; we see as a consequence just what results are fundamental and what results are minor.
A person who wants to
study further may wish to read the general theory of roots and weights from any number of books (e.g., Knapp [1986], Chapters IV and V) and to study cohomological induction further from the book by Vogan [1981]. Prerequisites for reading the present book are a knowledge of metric spaces, an advanced course in linear algebra, and a passing acquaintance with topological groups. Also invariant integration for compact groups plays a role in Chapters III, VII, and VIII.
Chapter I is a quick introduction to Lie groups of matrices. The reader who already knows beginning Lie theory may skip this chapter except for the last section (representations). Representations of groups are the objects of interest in these notes, but the approach is by means of algebra. What we therefore study is representations of Lie algebras that are related to representations of Lie groups; the sense in which we can actually pass from the Lie algebras to the Lie groups is not addressed here very seriously. Since the group representations are the objects of interest, we constantly need examples of them in order to motivate what happens with Lie algebras. The first half of Chapter II contains two kinds of topics alternately: a development of multilinear algebra and the imposition of representations on various spaces of tensors. Most readers will be able to skip at least part of the material in §§1-4. The second half of Chapter II constructs some algebraic objects that will be used repeatedly throughout the later chapters: the universal enveloping algebra, symmetrization, and tensor products over an algebra. Chapter III treats the representation theory of compact groups, with emphasis on the unitary groups as examples. This material is included at this stage mostly for motivation and is not used until the latter part of Chapter VI. Chapters IV and V, as well as the first half of Chapter VI, develop homological algebra as it applies to Lie algebras and their representations. The motivation that is provided comes largely from geometry and starts with the analytic Borel-Weil Theorem in §7 of Chapter III. A second line of motivation begins in §1 of Chapter IV. The two lines merge in §1 of Chapter VI and continue in §1 of Chapter VII. Chapter VII makes a modification in the theory of the previous three chapters, so that the setting matches better what is needed for representation theory. The ingredients of cohomological induction are assembled in this chapter. Section 11 is the most important and illustrates how the theory can be applied in the ideal setting of a compact
connected Lie group.
This section contains the key
computation (7.79) mentioned earlier. Chapter VIII indicates the kinds of adjustments that need to be made when working with noncompact semisimple Lie groups, rather than compact groups.
Specific illustrations are given
for the unitary groups of indefinite quadratic forms. At the end of the book is a chapter entitled "Notes," which gives historical comments, amplifies some notions, and points to the list of references at the end. I am grateful to David Vogan for his advice in presenting this material and to the members of the class at SUNY Stony Brook for a number of comments and suggestions.
Clifford
Earle helped with some of the motivation, Chin-Han Sah helped with some of the bibliographical material, and Leticia Barchini helped with checking the mathematics.
Some of the
material in Chapters VII and VIII is new, and part of it is taken from the unpublished notes Knapp and Vogan [1986] and from other unpublished work joint with Vogan.
This research
was sponsored by the National Science Foundation under one or more of the grants DMS 85-01793, DMS 87-11593, and DMS 85-04029. A.W.K. December 1987
CHAPTER I LIE GROUPS AND LIE ALGEBRAS
1. SO(3) and its Lie algebra

We denote by GL(n,R) the real general linear group consisting of all nonsingular n-by-n real matrices with matrix multiplication as group operation. Similarly GL(n,C) denotes the group of nonsingular n-by-n complex matrices. Let G be the set of matrices SO(3) defined by

    SO(3) = {x ∈ GL(3,R) | x x^tr = I and det x = 1}.

This is the set of 3-by-3 rotation matrices and will be an example of a Lie group. For now we can notice that G is a group (being closed under multiplication and inversion within GL(3,R)) and is also a compact subset of R^9 (being closed under entry-by-entry limits and having all entries at most 1 in absolute value). The group structure and the topological structure are related in that multiplication and inversion are continuous (being given by nice formulas).

Of more interest, however, is the fact that a kind of differentiation is available. Let us agree for now that a smooth curve c(t) in SO(3) is a C∞ curve into R^9 whose image is in SO(3) for each t. We consider smooth curves c(t) in SO(3) with c(0) = I. One such is

             ( cos t   -sin t   0 )
    c(t) =   ( sin t    cos t   0 ) .
             (   0        0     1 )

For any such, we can form c'(0), which will be some 3-by-3 matrix. With c as above,

              ( -sin t   -cos t   0 )                  ( 0   -1   0 )
    c'(t) =   (  cos t   -sin t   0 ) ,     c'(0) =    ( 1    0   0 ) .          (1.1)
              (    0        0     0 )                  ( 0    0   0 )

Here c'(0) tells us the direction in G in which the curve extends from I. Think of it as telling how a parameter describing G is to start out from the identity. Let

    g = {c'(0) | c(t) = smooth curve in G with c(0) = I}.

Then g is a subset of 3-by-3 matrices, not necessarily invertible.

Let us identify g. First of all, g is a vector space. To see closure under addition, let c'(0) and b'(0) be in g. Forming c(t)b(t), we have

    d/dt [c(t)b(t)] |_{t=0} = [c(t)b'(t) + c'(t)b(t)] |_{t=0} = c(0)b'(0) + c'(0)b(0) = b'(0) + c'(0).

Thus g is closed under addition. To see closure under scalar multiplication, let c'(0) be in g and k be in R. Forming c(kt), we have

    d/dt [c(kt)] |_{t=0} = c'(kt)k |_{t=0} = kc'(0).

Thus g is closed under scalar multiplication.

Next let us find additional concrete matrices that are in g. Two such are

              (  0   0   1 )                      (  cos t   0   sin t )
    c'(0) =   (  0   0   0 )      from   c(t) =   (    0     1     0   )
              ( -1   0   0 )                      ( -sin t   0   cos t )

and

              (  0   0   0 )                      (  1      0        0   )
    c'(0) =   (  0   0   1 )      from   c(t) =   (  0    cos t   sin t  ) .
              (  0  -1   0 )                      (  0   -sin t   cos t  )

Taking into account (1.1) and the vector space operations, we see that

             (  0   a   b )
    g  ⊇  { ( -a   0   c ) }  =  {skew-symmetric real matrices}.                 (1.2)
             ( -b  -c   0 )

Actually equality holds in (1.2). In fact, if c(t)^tr c(t) = I, then

    c(t)^tr c'(t) + c'(t)^tr c(t) = 0
    I^tr c'(0) + c'(0)^tr I = 0
    c'(0)^tr + c'(0) = 0.

Thus

    g = {skew-symmetric real matrices}.                                          (1.3)

The group structure of G forces g to have additional structure. Indeed, G is closed under group conjugation x → g x g^{-1}. Thus if c(t) is a smooth curve in G passing through I at t = 0, then so is g c(t) g^{-1}. Differentiating at t = 0, we see that g c'(0) g^{-1} is in g. Thus g is closed under the linear maps Ad(g), for g ∈ G, that carry g into g and are given by

    Ad(g)X = g X g^{-1}.                                                         (1.4)

Next let X be in g and let c(t) be a smooth curve in G with c(0) = I. Then t → Ad(c(t))X is a smooth function into g, as we see by substituting g = c(t) into (1.4) and writing matters out. By definition we have

    d/dt [Ad(c(t))X] |_{t=0} = lim_{t→0} (1/t)[Ad(c(t))X - X],                   (1.5)

and we know that the left side exists. The right side involves vector space operations and a passage to the limit, all on members of g. Since g, as a 3-dimensional subspace of R^9, is closed topologically, the limit is in g. Let us calculate this limit.

We need a preliminary formula, namely

    d/dt [c(t)^{-1}] = -c(t)^{-1} c'(t) c(t)^{-1}.                               (1.6)

To see this, we differentiate c(t)c(t)^{-1} = I by the product rule, obtaining

    c'(t)c(t)^{-1} + c(t) d/dt [c(t)^{-1}] = 0,

and (1.6) follows. Therefore

    d/dt [Ad(c(t))X] = d/dt [c(t) X c(t)^{-1}] = c'(t) X c(t)^{-1} - c(t) X c(t)^{-1} c'(t) c(t)^{-1}.

Putting t = 0 and taking (1.5) into account, we see that

    c'(0)X - Xc'(0)

is in g. We conclude that g is closed under the Lie bracket operation

    [X,Y] = XY - YX.                                                             (1.7)

Of course, we could have calculated directly that g is closed under (1.4) and (1.7) once we had identified g explicitly as in (1.3). But the point is that these closure properties did not depend upon our explicit knowledge of g. We shall return to this matter in §3, considering it in a wider context.
2. Exponential of a matrix

It is possible also to go backwards from the g of §1 to G. The tool for doing so is the exponential map.

If A is an n-by-n complex matrix, then we define

    exp A = e^A = Σ_{N=0}^∞ (1/N!) A^N.

This definition makes sense, according to the following proposition.

Proposition 1.1. For any n-by-n complex matrix A, e^A is given by a convergent series (entry-by-entry). Moreover
  (a) e^X e^Y = e^{X+Y} if X and Y commute
  (b) e^X is nonsingular
  (c) t → e^{tX} is a smooth curve into GL(n,C) that is I at t = 0
  (d) d/dt (e^{tX}) = X e^{tX}
  (e) det e^X = e^{Tr X}
  (f) X → e^X is a C∞ mapping from matrix space gl(n,C) into itself.

Proof. For any n-by-n matrix M, put

    ||M|| = sup_{||x|| ≤ 1} ||Mx||,

where ||x|| and ||Mx|| refer to the Euclidean norm. Then

    || Σ_{N=N_1}^{N_2} (1/N!) A^N || ≤ Σ_{N=N_1}^{N_2} (1/N!) ||A||^N,

and the right side tends to 0 as N_1 and N_2 tend to infinity. Hence the series for e^A is Cauchy, entry-by-entry, and it must be convergent. This convergence is good enough to justify the manipulations in the remainder of the proof.

(a) e^X e^Y = (Σ_{r=0}^∞ (1/r!) X^r)(Σ_{s=0}^∞ (1/s!) Y^s)
            = Σ_{r,s} (1/(r! s!)) X^r Y^s
            = Σ_{N=0}^∞ (1/N!) Σ_{k=0}^N (N choose k) X^k Y^{N-k}
            = Σ_{N=0}^∞ (1/N!) (X+Y)^N        since X and Y commute
            = e^{X+Y}.

(b) Take Y = -X in (a), and use e^0 = I.

(c), (d) d/dt (e^{tX}) = d/dt Σ_{N=0}^∞ (t^N/N!) X^N = Σ_{N=1}^∞ (t^{N-1}/(N-1)!) X^N = X e^{tX}.

(e) If X is upper triangular, then so is e^X. Moreover det e^X depends only on the diagonal entries of e^X, which depend only on the diagonal entries of X. Thus det e^X = e^{Tr X} in this case. A general complex matrix Y is of the form g X g^{-1} with X upper triangular, and then we have

    det e^Y = det(g e^X g^{-1}) = det e^X = e^{Tr X} = e^{Tr Y}.

(f) This follows from standard facts about term-by-term differentiation of series of functions.

Let us return to G = SO(3) and its g given by (1.3). Then the exponential map actually carries g into G. In fact, if X is skew-symmetric, then

    (e^X)(e^X)^tr = e^X e^{X^tr} = e^X e^{-X} = e^0 = I,

by (a) in the proposition, and

    det e^X = e^{Tr X} = e^0 = 1,

by (e) in the proposition. Thus (c) says that, for each X in g, e^{tX} is a smooth curve in G that is I at t = 0. From (d) we see that e^{tX} has X as derivative at t = 0. Consequently for any X in g, there is a "best" smooth curve in G that is I at t = 0 and has derivative X at t = 0, namely e^{tX}. Moreover this family of curves varies nicely as X varies.

We shall see that the exponential map recaptures a number of local properties of G, even though g depends only on infinitesimal data near the identity.
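Proposition 1.1 and the computation above are easy to confirm numerically. The sketch below is only an illustration (NumPy and SciPy's expm are assumed, and the sample matrix is our own choice): it checks that exp carries a skew-symmetric X into SO(3), that det e^X = e^{Tr X}, and that the partial sums of the defining series converge to e^X.

```python
import numpy as np
from scipy.linalg import expm   # matrix exponential

# A skew-symmetric matrix, i.e., an element of g for G = SO(3).
X = np.array([[ 0.0, -0.3,  0.8],
              [ 0.3,  0.0, -0.5],
              [-0.8,  0.5,  0.0]])

eX = expm(X)

# exp X lies in SO(3): (e^X)(e^X)^tr = I and det e^X = 1.
print(np.allclose(eX @ eX.T, np.eye(3)))                         # True
print(np.isclose(np.linalg.det(eX), 1.0))                        # True

# Parts (a) and (e) of Proposition 1.1 on an example (X and 2X commute).
print(np.allclose(expm(X) @ expm(2 * X), expm(3 * X)))           # True
print(np.isclose(np.linalg.det(expm(X)), np.exp(np.trace(X))))   # det e^X = e^{Tr X}

# The partial sums of the series sum_N X^N / N! converge to expm(X).
partial, term = np.eye(3), np.eye(3)
for N in range(1, 30):
    term = term @ X / N            # term is now X^N / N!
    partial = partial + term
print(np.allclose(partial, eX))                                  # True
```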
3. Closed linear groups

The critical property of SO(3) for the developments in §1 is that SO(3) is closed topologically as a subset of GL(3,C) when GL(3,C) is regarded as an open subset of R^{18}. We shall refer to any closed subgroup of some GL(n,C) as a closed linear group.

The reason that SO(3) is closed is that it is defined by polynomial equations. The polynomials in question are the entry-by-entry relations in the 9 coordinates that amount to x x^tr = I and det x = 1. Here are some more examples of closed linear groups; they too are closed because they are defined by polynomial equations:

    SO(n)   = {x ∈ GL(n,R) | x x^tr = I and det x = 1}
    U(n)    = {x ∈ GL(n,C) | x x* = I}
    SU(n)   = {x ∈ U(n)    | det x = 1}
    SL(n,R) = {x ∈ GL(n,R) | det x = 1}
    SL(n,C) = {x ∈ GL(n,C) | det x = 1}.

Further examples may be found in pp. 4-6 of Knapp [1986].

Let us generalize the calculations in §1, seeing the extent to which they can be carried through for all closed linear groups. For any closed linear group G, we can speak of smooth curves in G, and we let g be the set of matrices c'(0) for all smooth curves c(t) in G such that c(0) = I. We cannot instantly write down any nonzero members of g in the way that we did in (1.1). But we can see just as in §1 that g is a real vector space. The same argument as in §1 shows that g is closed under the linear maps Ad(g), g ∈ G, given as in (1.4). And the same argument as in §1 shows that g is closed under the Lie bracket operation (1.7). Thus g is a Lie algebra of matrices in the sense that it is a real vector space of matrices that is closed under the bracket operation [X,Y] = XY - YX.

Let gl(n,R) and gl(n,C) be the spaces of all real and complex n-by-n matrices. For the five examples above, we find that the corresponding Lie algebras are given by

    so(n)   = {X ∈ gl(n,R) | X + X^tr = 0}
    u(n)    = {X ∈ gl(n,C) | X + X* = 0}
    su(n)   = {X ∈ gl(n,C) | X + X* = 0 and Tr X = 0}
    sl(n,R) = {X ∈ gl(n,R) | Tr X = 0}
    sl(n,C) = {X ∈ gl(n,C) | Tr X = 0}.

The Lie algebra g consists of the set of matrices X = c'(0) for smooth curves c(t) in G that pass through I at t = 0. As with SO(3), it will turn out that there is a "best" such curve, namely e^{tX}. Proposition 1.1 shows that e^{tX} is a smooth curve with the correct derivative at t = 0, but we need to see that e^{tX} lies in G. A little computation shows this to be the case for our five examples, and we establish it in general in Proposition 1.4 below.

Lemma 1.2. There exist an open cube U about 0 in gl(n,C) and an open neighborhood V of I in GL(n,C) such that exp : U → V is smooth (= C∞) and one-one onto, with a smooth inverse.

Proof. Proposition 1.1f says that exp is smooth. By the Inverse Function Theorem, it is enough to prove that the derivative matrix at X = 0 of X → exp X is nonsingular. Let {E_j} be the standard 2n^2-member basis of gl(n,C) over R, and let x_j be the corresponding coordinate function. Then

    d/dt x_j(exp tE_i) |_{t=0} = x_j(E_i e^{tE_i}) |_{t=0}        since x_j(·) is linear
                               = x_j(E_i) = δ_{ij}.

So the derivative matrix is the identity, and the lemma follows.

Lemma 1.3. If c(t) is a smooth curve in GL(n,C) with c(0) = I and c'(0) = X, then

    lim_{k→∞} c(t/k)^k = exp tX

for all t in the domain of c.

Remark. The prototype of this lemma is as follows: We identify the additive reals with a closed linear group by t → (e^t). One curve in this group is c(t) = 1 + t, and we have c(t/k)^k = (1 + t/k)^k, which is well known to converge to e^t.

Proof. Choose U and V as in Lemma 1.2, and choose ε > 0 so that c(t) is in V for all t with |t| < 2ε. Using the lemma, we can form a smooth curve

    Z(t) = exp^{-1} c(t)        for |t| < 2ε.

Now Z(0) = 0 clearly, and the Chain Rule gives

    Z'(0) = (exp'(0))^{-1}(c'(0)) = X.

Thus Taylor's formula gives Z(t) = tX + O(t^2), where O(t^2) is a term that is bounded for |t| ≤ ε and remains bounded near t = 0 when divided by t^2. Replacing t by t/k and regarding t as fixed gives

    Z(t/k) = (t/k)X + O(1/k^2).

Thus

    kZ(t/k) = tX + O(1/k)

for any |t| ≤ ε, and

    c(t/k)^k = (exp Z(t/k))^k = exp kZ(t/k) = exp(tX + O(1/k)).

Letting k tend to infinity and using the continuity of exp, we obtain the conclusion of the lemma for |t| ≤ ε.

For some other t in the domain of c, let us modify the above argument. First we choose and fix a positive integer N with |t/N| ≤ ε. Instead of replacing t by t/k, we replace t by t/(Nk + l) with 0 ≤ l ≤ N - 1. The above estimates apply to t/(Nk + l). Again regarding t as fixed, we obtain

    Z(t/(Nk + l)) = (t/(Nk + l))X + O(1/k^2).

Thus

    (Nk + l) Z(t/(Nk + l)) = tX + O(1/k),

and

    c(t/(Nk + l))^{Nk+l} = (exp Z(t/(Nk + l)))^{Nk+l} = exp(tX + O(1/k)).

Letting k tend to infinity, we obtain the conclusion of the lemma for this value of t.

Proposition 1.4. If G is a closed linear group and X is in its Lie algebra g, then exp X is in G. Consequently

    g = {X ∈ gl(n,C) | exp tX is in G for all real t}.

Proof. Let X be in g, and let c(t) be a smooth curve in G with c(0) = I and c'(0) = X. Then c(t/n)^n is in G for n ≥ 1, and so is the limit on n (since G is a closed set) if the limit exists. Lemma 1.3 says that the limit does exist for all t in the domain of c and is exp tX. Thus exp tX is in G for |t| small, and it follows by raising to powers that exp tX is in G for all real t. The rest is clear from Proposition 1.1c.

Actually exp maps g onto a neighborhood of I in G, and this fact accounts for the strong connection between g and G. To get at this fact, however, requires a digression.
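Lemma 1.3 and Proposition 1.4 can also be watched numerically. The sketch below is only an illustration (NumPy and SciPy assumed, with an arbitrarily chosen sample matrix): it takes the simplest curve c(t) = I + tX, for which c(0) = I and c'(0) = X, and observes c(t/k)^k approaching exp tX as k grows.

```python
import numpy as np
from scipy.linalg import expm

# Lemma 1.3 with the curve c(s) = I + sX, so that c(0) = I and c'(0) = X.
X = np.array([[ 0.0, 1.0],
              [-2.0, 0.5]])
t = 1.7

def c(s):
    return np.eye(2) + s * X

for k in (1, 10, 100, 10000):
    approx = np.linalg.matrix_power(c(t / k), k)
    error = np.max(np.abs(approx - expm(t * X)))
    print(k, error)        # the error decreases roughly like 1/k
```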
4. Manifolds and Lie groups In the terminology of advanced calculus, the difficulty with handling the exponential map as well as we would like for a group like
SO (3)
or one of the later examples is that
the group is defined implicitly, and we need to define it parametrically (at least locally) in order to work with it better.
Actually it is not so important to know a
p aramet rization concretely; we just have to know that one exists.
The parametric form is the one in the definition of
Lie group below. First we define the notion of "smooth manifold."
Other
kinds of manifolds are important for Lie groups, as well, but we shall not introduce them here.
In what follows we shall
16
I: LIE GROUPS AND LIE ALGEBRAS
use the terms "smooth" and Let
M
connected.
be a separable metric space, not necessarily An n-chart on
open subset of subset
"C°°" interchangeably.
cp(u)
M
and
cp
3Rn .
of
M
is a pair
a homeomorphism of
Two charts
(U1,cp1)
smoothly compatible if the mapping cp-^U-jfiUp) inverse.
u*
cover
Having a M
to
cpgocpT
cp2(U-, HlJp)
with U
M
an
(Up>cp2)
are
from the open set
is smooth and has a smooth
is called a
(U-,cp.)
for
C°° atlas.
C°° atlas allows us to define smooth functions IRn .
by referring matters back to
function
f : E -» B
on an open set
for each
x
there is some chart
in
such that- x
U
onto an open
and
A set of smoothly compatible n-charts
which the
on
Bn
in
(U,cp)
E
is in
U
and
f*cp~
E
of
For example, a M
is smooth if
(U,cp)
is smooth on
in the atlas cp(uHE) .
The compatibility of the charts makes it so that this smoothness persists for all charts about We say that
M
as above, when endowed with a
is a smooth manifold of dimension technical point here:
x.
n .
c°° atlas,
(There is a small
A different atlas that leads to the
same smooth functions on open sets is to yield the same smooth manifold.
To handle this, one can observe that the set
of all charts smoothly compatible with a maximal
C°° atlas.
Two
C°° atlas is a
C°° atlases lead to the same smooth
functions exactly if their corresponding maximal atlases are the same.
So, technically,
is endowed with a maximal
M
is a smooth manifold when it
C°° atlas.)
4. MANIFOLDS AND LIE GROUPS Examples.
(1)
Any open subset of
manifold in a natural way.
lRn
17 is a smooth
Only one chart is needed, and cp
can be taken as the identity map. (2)
The unit sphere
Sn
smooth manifold of dimension
3Rn
in n
can be made into a
by using two charts.
One is
n+i defined on
t^ = S n - { (0,..., 0,1)} , and the other is
defined on
U 2 = S n - { (0,..., 0,-1)} .
(3)
The closed linear group
SU(2) , defined in §3, is
a smooth manifold in a natural way since the formula SU(2)={(
a
_ £ ) | a e c , P ^ C , |a| 2 +|p| 2 = l}
-JB
a
identifies it with the sphere
S .
Every connected component of a smooth manifold is an open subset.
Since the underlying metric space is assumed
separable, there are at most countably many components. If f : U -> V f
u
and
V
are open subsets of smooth manifolds and
is a one-one smooth map with a smooth inverse, then
is called a diff eomorphism of
U
onto
V.
We come to the definition of "Lie group."
A separable
topological group is a separable metric space that is a group in such a way that multiplication and inversion are continuous.
A
Lie group
G
is a separable topological group
18
I: LIE GROUPS AND LIE ALGEBRAS
that is a smooth manifold in such a way that multiplication and inversion are smooth. from
Gx G
into
G,
(Her
ultiplication is a mapping
and we understand
GX G
to be a smooth
manifold whose charts are products of charts in the factors.) The term analytic group is used for a connected Lie group.
In a Lie group
G,
the identity component
G Q , which
we know to "be open, is an open closed subgroup and is an analytic group. Both
GL(n, 3R)
and
GL(n, C)
are Lie groups.
In fact,
each is an open subset of Euclidean space and is a metric space for free.
One chart suffices to give an atlas.
turns out to have two connected components, whereas is connected.
GL(n,IR) GL(n, C)
The formula for multiplication is given by
polynomials and is smooth; the formula for inversion is given by Cramer's rule, using determinants, and is smooth. GL(n,3R) and
GL(n,C)
triangular subgroup in
are Lie groups.
Thus
Similarly the upper
GL(n,3R) is a Lie group, and so is
the subgroup of the upper triangular group with ones on the diagonal. But for the most part, it is not so obvious how to exhibit a group as a Lie group.
It turns out that all closed
linear groups are Lie groups automatically.
This is a hard
result whose proof we give in the next section.
5. Closed linear groups as Lie groups A relatively easy case that gives some insight into why closed linear groups are Lie groups is the case of
5- CLOSED LINEAR GROUPS AS LIE GROUPS
G = SL(2,]R) = {(y This
G
^)
real
I
entries and
19
wz - xy = ij .
is the zero locus of the single polynomial P(w,x,y,z) = w z - x y - l ,
and the derivative of
is the l-by-4 matrix
P
P ! = (z which is nowhere
0
-y
-x
w) , P=0.
on the locus where
Thus the
Implicit Function Theorem allows us to solve locally about each point of
G
other three.
for one of the variables in terms of the
(We can see this explicitly here.
for example, we can solve for and
w = z
z
or
w:
About
I,
z = w~ (1+xy)
(1+xy) .)
To be concrete, let us fix attention on a neighborhood of
I . • The map
X
(™
J •* (w,x,y)
because its inverse is
defines a c h a r t . -> (x,y,z)
is a local homeomorphism
(w,x,y) •* \
V
,
x
1
).
w" (l+xy)
/
^+x^^
x
Thus it
A second choice f o r a c h a r t i s
with inverse
(x,y, z) -» ( z
j .
The
composition of the inverse of the first followed by the second is (w,x,y) » (w
x
V w
and is smooth.
Thus the charts are compatible.
this way for every point of
Arguing in
G , we obtain an atlas.
Let us check that multiplication is smooth near each point.
For
Ix I , we are to check on
20
I : LIE GROUPS AND LIE ALGEBRAS
1
This is smooth.
We see that the coordinates that arise are
compositions obtained from multiplication and our function produced "by the Implicit Function Theorem; thus -we did not need to make the explicit calculation to see the smoothness. By a similar argument, multiplication is smooth near all other points. And in similar fashion, inversion is smooth.
Thus
SL(2,IR) is a Lie group. If we try to proceed similarly with a general closed linear group
G , the first problem is that the group need not
b e defined by polynomial equations.
Even so, suitable entries
of the matrices do look promising as local coordinates for the group.
The difficulty is to decide what the open sets should
be that yield the charts.
To handle the difficulty, we shall
work with the exponential mapping rather than the matrix entry functions; the advantage of the exponential mapping is that Proposition 1.4 gives us a relationship between Theorem 1.5.
If
G
g
and
is a closed linear group, then
G. G
(with its relative metric) becomes a Lie group in a unique way such that
5. CLOSED LINEAR GROUPS AS LIE GROUPS (a)
the restrictions from
GL(n, C)
to
G
21
of the real and
imaginary parts of each entry function are smooth and (b)
cp : M -> GL(n, C)
whenever
smooth manifold
M
is a smooth function on a
such that
cp (M) ^ G , then
cp : M -> G
is smooth. Moreover the dimension of the Lie algebra dimension of the manifold
G .
open neighborhoods
0
U
of
in
exp : U -» V
(V,exp
is a compatible chart.
Remark.
equals the
And, In addition, there exist
such that )
g
g
and
V
of
1
in
G
is a homeomorphism onto and such that
Part (b) explains our definition of smooth curve
in §1 and §3Lemma 1.6.
Let
gl (n,C) = Q © b . in
a
and
neighborhood
U2
a
and
b
be real subspaces with
Then there exist open balls about
V
of
0 I
in
in
U^
about
0
b , as well as an open GL(n,C) , such that
(a,b) -> exp a exp b is a diffeomorphism from Proof. a
and
Let
U^x U 2
X nA., ... , X» r
onto
and
V.
Y-, j-, •. • , Y_ s
be bases of
b , and consider the map
(u 1 ,...,u r ,v 1 , ...,vg) -» exp" 1 {exp(Zu i X i )exp(S v^.Y^)} defined in an open neighborhood of
0 , with the result
written out as a linear combination of Y
.
(1-8)
X 1 , ... , X
, Y^ , ... ,
We shall apply the Inverse Function Theorem to see that
this map Is locally invertible.
Since
exp
is locally
22
I: LIE GROUPS AND LIE ALGEBRAS
invertible, the lemma will follow. Thus we are to compute the derivative matrix at (1.8).
0
of
In computing the partial derivatives, we can set all
variables but one equal to
0
before differentiating, and
then we see that the expression to be differentiated is linear.
The derivative matrix is thus seen to be the
identity, the Inverse Function Theorem applies, and the proof is complete. Proof of uniqueness in Theorem 1.5. T
G
are two versions of
i : G •*• GT
G
as a smooth manifold.
be the identity function.
t : G -> GL(n, C)
Suppose that
G
and
Let
The function
is smooth because smoothness of this map is
detected by smoothness of each real or imaginary part of an entry function, which is given as (a). smooth.
By ("b)>
t" : GT -> G
By the same argument,
Proof of existence in Theorem 1.5. subspace
«
of
gI(n,C)
such that
i : G -> Gf
is smooth.
Choose a real vector
gl(n,€) = g © s , and
apply Lemma 1.6 to this decomposition, obtaining balls and of
Up I
about in
0
in
GL(n, C)
U2 •
and
U-, x Up
For each integer
X
is in
V±
and
Y
k
in
U
l
and
exp Xfc exp Y^
Y
k
in
in
G.
V.
Let
2c
k , exp X exp Y
is in
Assume the contrary. X
(X,Y) -> exp X exp Y onto
(k+l)"1^
Then for every
(k-fl)" 1 ^
U-,
and an open neighborhood
with
(k+l)~n[J2 .
unless
Yk ^ 0
By Proposition 1.4,
is a
cannot be in
k > 1
V
be the radius
k > 1 , form the set
The claim is that for large if
«
such that
diffeomorphism from of
g
is
G
Y=0. we can find
and with
exp Xfe
is in
G.
5- CLOSED IZItfEAR GROUPS AS LIE GROUPS Thus
is in
exp Y k
choose an integer
G
for all such that
nfe
k.
Since
23
Y k ^ 0 , we can
e/2 < n^IlYjJl < e .
passing
to a subsequence if necessary, we may assume that converges, say to
Y . k
Then
Y
is in
8
and
Y ^ 0.
is in
G
and
G
is closed,
Let us show that
exp £ Y is in
G
for all integers
q
Write
Since
n
exp n k Y k = (exp Y k ) in
Then
with
q > 0.
-J= Y, -> 0 since Q
Since
k
with
p
O^r, I , lim exp m, Y v
q
K
2. Y * However, exp 2. Y * However, is closed, and thus
exp m Y exp m kk Y kk exp 2 Y
In other words, G
Also
JS.
exp —
Since
n,p = m 0 .
iv
(exp m k Y k ) (exp ^
is closed,
definition of
g , Y
exists and equals m k = (exp Y = (exp Y kk) is in G and is in G •
exp tY
is in
G
for all rational
exp tY
is in
G
for all real
is in
g .
Thus
Y
G t .
t.
By
is a nonzero member
g na , and this fact contradicts the directness of the sum
g ©8 . in
is
G.
and
of
exp Y
G
We conclude that for large if
X
is in
U-, and
Y
k,
is in
exp X exp Y (k+1) ~ Up
cannot be unless
Y = 0 . Changing notation, we have found open balls about in
0
in
gl (n, C)
morphism of in
G
g
and
8
such that U-,x u 2
only if
an open set in
(X,Y) -> exp X exp Y
onto
Y = 0. G
and an open neighborhood
V
and such that
U-j_ and V
exp
I
is a diffeoexp X exp Y
In view of Proposition 1.4,
such that
of
U2
is
VnG
is
is a homeomo rphi sm from U-j_
24
I:
onto
V0 G • We take
G .
LIE GROUPS AND LIE ALGEBRAS
(Vfl G , exp"" )
About the point
g
(L (Vfl G) , exp" °L~ ) g :
translation by
in
as a chart about the identity in G, we shall use also
as a chart, where
L_(x) = gx .
U
that
UT
and
exp" L
1
are open subsets of © exp : U -> U
T
is left
Let us show that this
system of charts is smoothly compatible.
Here
L
The picture is
g •
is smooth.
We are to check This is Just the
restriction to a lower-dimensional Euclidean space of the smooth map
exp" © L
T «exp : UX U~ -» $1 (n, C) , and hence it
is smooth.
Thus
is a smooth manifold.
G
The same reasoning shows that multiplication and inversion are smooth.
For example, near the identity, what
needs checking in order to see that multiplication is smooth is that, for a small open neighborhood
U
about
0
in
g,
(X,Y) € UX U -> exp"1(exp X exp Y) is smooth into map on
g .
But this map is a restriction of the same
(ux u 2 ) * (Ux U 2 ) > where we know it to be smooth.
6. HOMOMORPHISMS
25
In addition, this reasoning readily proves (a) and (b). Since
G
have
as a manifold has open subsets of
dim G = dim g . Corollary 1.7,
f : G -> M
If
G
is a closed linear group and
is a function into a smooth manifold that is the
GL(n, IR) or Proof. G
as charts, we
Thus the theorem is completely proved.
F : U -» M , where
restriction of a smooth map
of
g
into
GL(n, C)
and
G ^ U , then
We can write
U
is open in
f : G-» M
is smooth.
f = F©i , where t is the inclusion
GL(n, IR) or
GL(n, C) .
Then
%
is smooth by
Theorem 1.5a, and hence
f
is a composition of smooth maps.
Corollary 1.8.
G
is a closed linear group and
If
is its Lie algebra, then component Proof. G .
Tnus
exp g
GQ
generates the identity
Gn . By continuity, exp g c &
.
exp g
is a connected subset of
Theorem 1.5 says that
contains a neighborhood of of
g
I
in
GQ .
The smallest subgroup
containing a nonempty open set in
Corollary 1.9.
If
G
and
G1
exp g
GQ
is all of
GQ .
are closed linear groups
with the same Lie algebras (as sets of matrices), then the identity components of Proof.
G
and
G1
coincide.
We apply Corollary 1.8.
6. Homomorphi sms Suppose that
G
and
H
are closed linear groups.
We
26
I: LIE GROUPS AND LIE ALGEBRAS
shall be especially interested in the case that but -we do not make such an assumption for now*
H = GL(n,C) , Let 3 and t>
be the Lie algebras of G and H , and suppose that smooth homomorphism of G
into
H.
T is a
Our objective is to
associate to IT a map drr : 3 -» % . Before considering examples, let us comment on the case that
G or H
is the Lie group
IR . We can always regard
B
as a closed linear group, say as the set { (e )} of 1-by-l matrices.
We use this convention throughout.
Examples.
(1) Regard
u
by
t *-* (e ) .
in
§ , then
into
3R as a closed linear group, as
If H is any closed linear group and X is
t -> exp tx is a smooth homomorphism of JR
H. (2)
linear group within
GL(1, C) , and t -» e
homomorphism of 3R into (3)
S 1 = [z € C | |z| = 1 }
The circle group
is a closed
is a smooth
S .
The triangular group
G=
/l
0 \0
x z\ 1 y 0 1/
of real
matrices is a closed linear group, and the map that sends the indicated matrix into x into
is a smooth homomorphism of G
3R . We return to the general setting.
If X
in g is
given, let c(t) be a smooth curve in G with cT(0) = X .
(For example, we can take
the composition TT(C(O)) = I ,
t -> 7r(c(t))
and we define
c(o) = 1
and
c(t) = exp tX .) Then
is a smooth curve in H with d7r(X) = (TTOC) * (0) .
Let us see that this definition is indeDendent of the
6. HOMOMORPHISMS choice of value
I
c.
If
c, (t)
and
27
c 2 (t) both have starting
and starting derivative
X , then we compute
Oroc2)'(0) = = -gg 7r(c2(t)c1(t)~ )7r(c-L(t)) ._Q
since ir is a homomorphism
Thus it is enough to prove that the curve
c(t) = c2(t)c-.(t)
which has
(TTOC) T (o) = 0 .
c(o) = I
and
c'(0) = 0 ,
has
, We
now refer matters to local coordinates, using the exponential map.
(See Theorem 1.5.)
The local expression for 7r©c
is
exp" ©7TOC , which we write as exp
©TT©C = (exp
©7T©exp) (exp" ©c) ;
here the two factors on the right are local expressions for ir and
c.
has
(exp
Thus
However, the second factor is a curve in ©c)T(0) = 0 by the Chain Rule, since
(exp" ©7roc)T(0) = 0
"by the Chain Rule.
g , and it
c T (0) = 0.
Applying the
exponential map and using the Chain Rule once more, we see that
(iroc) T (0) = 0 .
Thus our definition is independent of
the choice of ' c . Now we can imitate some of the development of §1 and §3* First of all,
dir : g -» $
is linear.
In fact, let
c^(t) and
28
I : LIE GROUPS AND LIE ALGEBRAS correspond t o
Cg(t)
d
X and Y ,
and l e t
k
be i n
Then
JR.
/
and
d7T(X+Y) = £ = d7r(X) by the product rule for derivatives. Now let and
g be in G and let
c'(0) = X .
t =0 .
Then
gc(t)g" 1
c(t) in G have
has derivative
Ad(g)X at
Hence
ehr(Ad(g)X)
=
4 t^(gc(t)g"1) 1
1 = 0
= ^
ir(gMe(t)
) 7 r (
= ir(g)dTr(x)Tr(g)"1. If also
c(0) = 1
Y
1
(1.9)
is in g , then this formula says d7r(Ad(c(t))Y) = T(c(t))d1r(Y)T(c(t))-1.
Differentiating at
t = 0
and using the fact that
dir
is
l i n e a r , we obtain d7r[X,Y] = d7r(X)d7r(Y) - d7r(Y)d7r(X) .
(1-10)
The right hand side of (1.10) is the definition of [d7r(X),d7r(Y) ] . of
d?r ,
Thus (1.10), in the presence of the linearity
says that
dir is a Lie algebra homomorphism.
our smooth homomorphism
IT : G -> H
Thus
leads to a Lie algebra
6. HOMOMORPHISMS homomorphism
dir : g -» § .
Examples, c o n t i n u e d . d7r(l) = X . has
29
(To s e e t h i s ,
r
c (0) = 1 ,
(1)
If
ir(e ) = exp tX ,
we u s e t h e c u r v e f
and we compute
(TTOC) (o)
d7r(l) =
then
c(t) = e ,
which
from P r o p o s i t i o n
l.ld. (2)
If
ir^)
=
ext ,
If
/l irO \0
x 1 0
z\ y = 1/
then
(i)
as a 1-by-l
matrix. (3)
(e x ) ,
then
/o a c\ irlo 0 b = \0 0 0/
(a) .
The fundamental relation for dealing with homomorphisms is the formula that relates Theorem 1.10.
ir , d-rr , and the exponential map.
If 7r:G->H
i s a smooth homomorphism
between closed linear groups, then Proof. and
C
2^
Fix X ^e ^
7roexp = exp©d7r .
in the Lie algebra of G , and let c 1 (t)
e s m 0 0
^ curves of matrices (actually in H )
given by c x (t) = exp(t dir(x)) Then
and
c2(t) = 7r(exp tX) .
c1(o) = c2(o) = I and J L c^t) = dir(x)exp(t dT(X)) = d7r(X)c1(t)
by Proposition l.ld. •are C2^
Also
= m r ^(exp(t+h)x)| h=0 = -*- T(exp hx)ir(exp t x ) | h = 0
tx) =
30
I: LIE GROUPS AND LIE ALGEBRAS
by definition of c 1 (t)
and
d7r(x) .
c2(t)
j t h columns of both
Since the
solve the initial value problem for the
linear system of differential equations y(0) = ( j t h column of I) ,
§f = d7r(x)y,
the uniqueness theorem for systems of ordinary differential equations says
C
T("^)
= C
2^) •
^
e
"theorem follows by taking
t = 1 . Corollary 1.11.
Let
TT : G -> H
be a smooth homomorphism
between closed linear groups, and let
s
algebras.
x=I
If the map
x -> TT(X)
near
and
^
be the Lie
is referred to
local coordinates relative to the exponential maps of H , then the corresponding map is exactly is linear.
Hence
identity when
w
G
and
dir : g -> § , which
ctor is also the derivative of
T
at the
is referred to local coordinates by the
exponential maps. Proof. Y = exp
The map in local coordinates is
(Tr(exp(x)) near
is the same as linear.
X = 0, and Theorem 1.10 says this
Y = exp" (exp d7r(x)) = dir(x) , which is
A linear map is its own derivative, and the corollary
follows. Corollary 1.12.
If
T*
and
ir^ are smooth homo-
morphisms between two closed linear groups that
dir^ = d-n-g , then Proof.
For
X
in
TT 1 = ir^ on
G
GQ .
g , Theorem 1.10 gives
and
H
such
7. AN INTERESTING HOMOMORPHISM 77^ (exp X) = exp d ^ X ) By Corollary 1.8,
= exp d7T2(X) = 7r2(exp X) .
TT1 = ir2 on
Corollary 1.13.
Let
GQ .
ir : G -» H
be a smooth homomorphism
bet-ween two closed linear groups, and let corresponding Lie algebra homomorphism.
of
I
(a)
dir
onto implies
(b)
d-ir
one-one implies
in (c)
on
31
ir is onto
dir : 9 -» §
be the
Then GQ
TT is one-one in a neighborhood
G d-rr
one-one onto implies
ir is a local isomorphism
GQ. Proof.
Parts (a) and (b) are immediate from Theorems
1.10 and 1.5.
Part (c) carries with it a statement about
smoothness of
rr~ , which follows from Corollary 1.11 and the
Inverse Function Theorem.
7« An interesting homomorphism This section gives an optional example of a nontrivial smooth homomorphism and illustrates a number of the points in §6.
The map
IT will carry
SU(2)
into
SO(3) .
In §4 we gave particular charts to make spheres into manifolds.
If we specialize to the case of
S
and use
complex variables in the notation, then the statement is that X-, + ix o 2 = -i 2. l-x3
(1-11)
32
I : LIE GROUPS AND LIE ALGEBRAS
maps S CU{co}. w
2
"3 ( i n 1RJ ) one-one onto t h e extended complex p l a n e Meanwhile SU(2) a c t s on CU{»} by
= g(z) =
a
l
+
if
?_
- j3z + a
is
g = / * M
in
It is easy to check that this is a group action. We shall 2 reinterpret this action as an action on S by the identification (1-11) and shall see that each g in SU(2) 2 acts on S by the restriction of a linear transformation of 3R . It follows that each of these linear transformations is 2 an orthogonal transformation (since S is preserved); since 2 "3 we have a group action on S , we have a group action in IRJ This means that the map 7T : g in SU(2) — » effect of g on IP? is a group homomorphism of
SU(2) into
0(3) = {x | x x t r = 1} . Since
SU(2) is connected and ir is continuous, the image
must be contained in SO(3) • Let us construct
ir .
Afterward we shall compute d-rr
and examine what is happening. x + ix A l *X A 2 1 -Xo
TT
and we compute ( y ^ y ^ y * ) 2 2 2 x ^ + X p + X o = 1 , we have
Thus we write
az+6 - jSz + a
T7
^ terms of
v 4- iv ^1 + ^ 2 1 -y(x1,x2,x^) .
Since
7. AN INTERESTING HOMOMORPHISM 1-x-
Then
| z| 2 - 1 = 2x 3 /(l - Xg)
33
x
and | z| 2 + 1 = 2/(1 - x-)
Consequently
A similar computation solves for y, in terms of w , and we find W+W
l + lwl 2 ' To compute
y2
_
W- W
"^77Tm^'
y
3
y^ in terms of x.. , Xp , and x- , we write :
gz
+ 2Re(gzg) + l g | :
Thus
_ \v\f-l _ (lal2- |g|2)(lzl2-l) Iwl + 1 Iz| + 1
4
Ui 2 +1 = 2(Re To compute y 1
and
2/(l-x3) a| 2 -
yg , we write
+4Re(azg)
.
(1.12)
34
I : LIE GROUPS AND LIE ALGEBRAS
2
2 Re w
\ - gz + a] _ 2 Re[(qz+6)(-gz+q)]
l+|w|2
la-pzl'^l+lwl2)
l+NI*
_ 2 Re[(az+j3) (-gz'+a)]
=
2Re(a2z-/z)
M*
+
± - U\2
ITur
2 Re
2 2 = Re[a (x 1 + ix 2 ) - P ( x 1 ~ i x 2 ) ]
- 2x^ Re ajS
= x± Re(a 2 -jS 2 )
- 2x^ Re ap
- x 2 Im(a 2 +p 2 )
(1.13)
and
2 imf ^
2 Im w
2
\ - gz + a / _ 2 Im[ (az+p) (-gz+a) 1
l+|W|a
1+lwl^
|a-1
2 Im[ ( a z + g ) ( z|2 2 l m ( a 2 z - B 2 z)
2
= x± Im(a2 - p 2 )
x
\ )=
Hence
2
2
2
= Im[a (x 2 + ix 2 )
Combining (1.12),
l>|z]
- p ( x ] _ - i x 2 ) ] - 2x^ Im + x 2 Re(a 2 +p 2 )
- 2x^ Im
(1.13), and (1.14), we see that
/Re(a 2 -p 2 )
-Im(a 2 +p 2 )
-2 Re ap \ /x x
Im(a 2 -p 2 )
Re(a 2 +p 2 )
-2 Im ap ) ( xo \ .
\ 2 Re dp -2 im dp |a|2-|p|2 'a p\ irl-g - I i s the 3-by-3 matrix in (1.15)-
(1.15)
35
7. AN INTERESTING HOMOMORPHISM
We compute
d-rr on the basis
^ J) , (°
(^
by using the curves cos t
sin t
cos t
i sin t
-sin t
cos t
i sin t
cos t
So, for example, /cos 2t
-sin 2t
0^
= -^r- sin 2t
cos 2t
0
t=0
\
o
o
i, t=0
Similarly
d7T
V-l
It is clear that
d-rr maps
0
8u(2)
is connected (because
1
=
\^ 0
-2 0 So (3) -
one-one onto
S0(3)/S0(2) ^ s 2
S0(2)
are connected), Corollary 1.13 tells us that
SU(2)
onto
T
and maps
and is a diff eomorphism in a neighborhood
Actually
check directly that
0
0
S0(3)
of the identity.
0
(0
Since
SO (3)
0
CVJ
dir
i\
^\"0
everywhere two-to-one.
rr is not globally one-one; "we can _ T )= ^ *
^
e
k° momo:r T?ki sm
^
^s
36
I:
LIE GROUPS AND LIE ALGEBRAS 8, Representations
Let
G
be a topological group.
representation of
G
A finite-dimensional
is a homomorphism
IT of
G
into the
group of invertible linear maps of a finite-dimensional complex vector space map of
Gx V
When
G
into
V
V
into itself, such that the resulting is continuous. GL(n, C) , then
is given to us as a subgroup of
the identity map on representation of
G G
gives us what is called the standard
on
Cn.
It is reasonable to ask "why
one should study representations of a group that is already represented as matrices.
The answer is that representations
are often forced on us by some outside area of mathematics, and we have to deal with them. Examples. (1)
G = SO(n) , V = all polynomials in
n
real
variables with complex coefficients and all terms of degree N
(i.e., homogeneous of degree
N ),
and
x = P(g-1
ir(g)P
x. (2) z
G = U(n)
or
SU(n) , V = all polynomials in
l ' * *' ' z n 9 ~*1 ' " ' '^n
(3)
G = U(n)
or
hom
°geneous
of
degree
N , and
SU(n) , V = all members of the
8. REPRESENTATIONS A Cn
subspace
A basis over
37
of the exterior algebra of C
e. A .../v e. 1-L ik
of A C
with
(see §11,4).
is all alternating tensors
i, < ... < jL , where 1 k Cn
standard basis of
n
Cn
over
C.
{e.} 0
is the
If we define
77-(g) (e. A ... A e. ) = ge. A ...A ge. , 1 x x 1 \ l k then
extends to a linear map of A C n
ir(g)
ir is a representation.
into itself, and
(We shall see this a little more
systematically in Chapter II.) An invariant subspace for such a vector subspace such that
7r(g)U c u
this case we get representations of the obvious way.
tions
0
7T on
and V
g
in
for all G
on
U
g
in
and
v/U
in
ir is
Two finite-dimensional representa-
7rT
and
G; in
and if ir has no invariant sub spaces
V.
on VT 1
E : V -» V
an invertible linear for all
is a (complex)
A finite-dimensional representation
irreducible if • V ^ 0 other than
T
are equivalent if there is such that
E7r(g) = TTT (g)E
G.
Examples. (1)
G = SO(n) , V = all polynomials in n
variables homogeneous of degree 7T as before.
N
real
with complex coefficients,
Define the Laplacian operator by
This operator reduces degrees by two, but the operator E(P) = |x| AP maps
V
into V-
Moreover this operator
38
I: LIE GROUPS AND LIE ALGEBRAS
commutes with in
SO(n) .
image of
IT in the sense that
Eir(g) = 7r(g)E
E
are invariant sub spaces.
With some effort, one N
if
2. (2)
in
g
This commutativity implies that the kernel and
can show that the kernel is irreducible for each n>
for all
G = SU(n) ; V = homogeneous polynomials of degree
z^ , ... , z , "z^ , . . . , "z_ ; ir as before.
holomorphic polynomials (those with no subspace.
N
The subspace of
I's ) is an invariant
With some effort, one can show that this subspace
is irreducible. (3)
G = SU(2) , v = homogeneous holomorphic polynomials
of degree
N
in
z,
and
holomorphic polynomials in
z^, F z
VT =
as before,
of degree
N
with
\j3z + a E : V -> V !
Define !
and
7r
by
EP(z) = p(^) .
Then
E
exhibits IT
as equivalent.
Our interest will be in the case that particularly when
G
G
is a closed linear group.
is a Lie group, If
V
is a
finite-dimensional complex vector space, then we can identify V
with
where
Cn
and the invertible linear mappings with
n = dim V ;
GL(n, C) ,
all we have to do is choose a basis.
If we
choose a different basis, then the two identifications are related by an invertible linear mapping, which preserves all the Lie group structure of interest. representations on C on
V.
Thus a result about
gives us a result about representations
The connection between representation theory and Lie
8. REPRESENTATIONS
39
groups comes from the following theorem.
We state it for
closed linear groups, but it is valid for all Lie groups. Every finite-dimensional representation IT
Theorem 1.14.
of a closed linear group is a smooth mapping. Lemma 1.15.
If G
is a closed linear group, then there
exist arbitrarily small open neighborhoods in
G
such that
U5 V
U
and V
and each element of V
of I
has a unique
square root in U . Proof. open ball I
in
By Theorem 1.5 we can choose a sufficiently small B
about
0
G such that
in
5 and an open neighborhood
exp is a diffeomorphism of B
B T = iB. and U = exp B T .
Put
On U
the map
V of
onto
V.
x -» x , when
referred to local coordinates by the exponential map, is just and is a diffeomorphism of B ! onto B . Therefore the 2 map x -» x is a diff eomorphism of U = exp B' onto 21
V = exp B . Proof of Theorem 1.14. F i r s t suppose
Let
G = IR «—> { (e )} .
can choose open neighborhoods of
I
GL(n, C)
in
such t h a t
Choose
e > 0
Then we can write e
IT(e '
)
and
so t h a t 7r(e ) =
€
7r(e /
2
0
gl (n, C)
in
U is a ball,
k
) = exp (e/2 )Y
V is a
V have unique square roots in
v(e ) exp eY
i s in
V
for some
I t e r a t i n g t h i s fact, for a l l
and
exp : U -» Y
for Y in
exp ^Y are both square roots of
V and hence are equal.
be given.
By Lemmas 1.2 and 1.15 we
U of
diff eomorphism, and members of V.
ir : G -> GL(n, C)
k > 1.
|t|
< e .
e" U.
exp eY
Now
within
we see t h a t
Raising to powers
40
I : LIE GROUPS AND LIE ALGEBRAS
shows that
ir(e ) = exp tY
continuity of
ir(e ) =
ir ,
i s smooth "when
G,
let
the previous paragraph,
7r(exp
exp tY
for a l l r e a l
t .
t .
By
Thus
ir
G = 3R .
For general
t -> exp tY.
for a l l diadic r a t i o n a l
for some U
3_X1 ' "
X-, , — , X. t -> 7r(exp tX.)
Y.
e x p u X
in
d d^
gt(n,C) . =
be a b a s i s of
g.
By
i s of the form Since
7r ex
( P u-^X^)* • *Tr(exp u^X^)
= (exp u ^ ) • • • (exp u d Y d ) , the map (u 1 , . . . , u d ) -» Tr(exp ^ 1 X 1 * - - exp u d X d ) i s smooth from
3R
into
GL(n, C) .
I f we can show t h a t
(exp u 1 x 1 )*--(exp u d X d ) is locally a diffeomorphism into then
x -» TT(X)
G
(1-16)
near the origin of 3R ,
will be smooth near the identity, and the
theorem will follow. We shall apply the Inverse Function Theorem.
In local
coordinates, (1.16) is given by (u1,...,ud) -> exp"1!(exp u 1 X 1 )---(exp u d X d )} , and the derivative matrix of this at the origin is the identity (since we can set all but one u . equal to 0 before differentiating).
The Inverse Function Theorem applies, and
the proof is complete. As a consequence of this theorem, a representation
8. REPRESENTATIONS IT : G- -> GL(n, €) morphisin
4l
automatically gives us a Lie algebra homo-
d7r : g -> 3 I (n, €) .
To make the terminology match, we
say that a finite-dimensional representation algebra of matrices is a linear mapping of space
End c V
vector space
cp 9
of a Lie
into the vector
of linear maps of a finite-dimensional complex V
into itself such that )
=
(Hom(L A ,®L B ,,L c )(cp))(a'®b')
= f(HQm(L A ,®L Bt ,L c )cp)(a')(b') . This proves the proposition. By way of motivation for the tensor product of two Lie algebra representations, let G be a topological group, and let
a
and p be representations of G on finite-dimensional
complex vector spaces representation
V
and w , respectively.
tr of G on V ® P W
Define a
by means of (2.8) as
Tr(g) == a(g) ® p(g) . Relation (2.9) assures us that can use bases to see that indeed a representation.
w i s multiplicative, and we
ir i s continuous; thus Now l e t us think of
r
is
G as a closed
linear group (as in Chapter I) and differentiate to find We will be replacing
g
by
c(t)
and differentiating.
product rule will then give us two terms.
dir . The
Accordingly we make
5*-
II: REPRESENTATIONS AND TENSORS
the following definition. Let
3
be a Lie algebra over
be complex vector spaces, and let tions of 5
on V
ir = a ® p
product
B or C , let V a
and W
and p be representa-
and ¥ , respectively.
Define the tensor
on v®« W by
———————
i/
(a® p) (X) = a(X) ® I + I® p(X) . A little calculation with (2.9) shows that
(2.14)
a ® p is a
representation. Again by way of motivation, let us consider group actions on a
Hom c (V, W) .
Thus let
G be a topological group, and let
and p be representations of G on finite-dimensional
complex vector spaces representation
V and ¥ , respectively.
Define a
ir of G on Hom c (V, W) by -rr(g) = Hom(a(g)"1,p(g)) .
Referring to (2.10a), we see that Homc(V,W)
to i t s e l f .
7r(g)
i s a linear map from
Relation (2.11) shows that
multiplicative and explains the presence of a basis to see the continuity, and thus tion.
Concretely the formula is this:
Homc(V,W) ,
ir(g)
g~ .
is
We can use
TT is a representaIf
cp i s in
then 7r(g)(cp)(v) = p(g)(cp(a(g)-1v)) .
Passing to closed linear groups and differentiating, we are led to make the following definition. Let
g
be a Lie algebra over
be complex vector spaces, and let
IR or C , let V a
and p be
and W
2. TENSOR PRODUCT OF TWO REPRESENTATIONS r e p r e s e n t a t i o n s of IT = Hom(cj,p)
on
g
on
V and
Homc(V,W)
¥,
55
respectively.
Define
by
(Hom(a,p)(X))(cp)(v) =
p(X)(cp(v)) - c p ( a ( X ) v ) .
A little calculation with (2.11) shows that
Hom(a,p)
(2.15) is a
representation. A special case of interest is the case that and
p
write on
is the trivial representation on a* = Horn (a, 1)
V* .
(The use of
C.
¥
is
C
In this case we
for the resulting representation of g "1" here is a reminder that
is not necessarily zero;
Horn (a, 1)
"o" would be a more literal symbol
for the trivial representation.)
Formula (2.15) specializes
to cr*(X)(cp)(v) = -q>(ff(X)v) . The representation
a*
Proposition 2.3> over (over
(2.16)
is called the contragredient of a . Let A , B , and
C
be vector spaces
C , and suppose that representations of a Lie algebra g 3R or
C ) are given on each.
Then in the vector space
identity H o m c ( A ® c B , C ) a Hom c (A,Hom c (B,C)) of (2.12), the isomorphism is an equivalence of representations. Proof.
This is just a question of checking what happens
in (2.13) when we plug in the representation formulas of (2.14) and (2.15).
Dropping the names of all representations
from the notation, we have
56
II: REPRESENTATIONS AND TENSORS 5(38p)(a)(D) = (Xcp)(a®b) = X(cp (a® b)) -cp (X(a® b)) = X(cp(a®b)) -cp(a®Xb) -ep(Xa®b) = X(?(cp)(a)(b)) -?( V
universal property: Ax Bx C
C
Whenever
into a vector space
unique linear mapping
T
of
t
s a vec
"^
or s
P
ace
k.
A triple
over
k "with
having the following is a trilinear mapping of
U
over
k,
V
into
U
there exists a such that the
diagram V t
(fixed)/
( = triple tensor product) \ T
AX BX C
(2.17) > U
t commutes.
It is clear that there is at most one triple tensor
product up to canonical isomorphism.
We shall use triple
tensor products to establish an associativity formula for
3 - REPRESENTATIONS ON THE TENSOR ALGEBRA
57
ordinary tensor products. Proposition 2.4.
Proposition 2.4.  (a)  (A⊗_k B)⊗_k C and A⊗_k(B⊗_k C) are triple tensor products.
(b)  There exists a unique isomorphism Φ from left to right in

    (A⊗_k B)⊗_k C ≅ A⊗_k(B⊗_k C)        (2.18)

such that Φ((a⊗b)⊗c) = a⊗(b⊗c) for all a ∈ A, b ∈ B, and c ∈ C.

Proof.  (a)  Consider (A⊗_k B)⊗_k C.  Let t : A × B × C → U be trilinear.  For c in C, define t_c : A × B → U by t_c(a,b) = t(a,b,c).  Then t_c is bilinear and hence extends to a linear T_c : A⊗_k B → U.  Since t is trilinear, t_{c₁+c₂} = t_{c₁} + t_{c₂} and t_{xc} = x t_c for scalar x; thus uniqueness of the linear extension forces

    T_{c₁+c₂} = T_{c₁} + T_{c₂}   and   T_{xc} = x T_c.

Consequently t′ : (A⊗_k B) × C → U given by t′(d,c) = T_c(d) is bilinear and hence extends to a linear T : (A⊗_k B)⊗_k C → U.  This T proves existence of the linear extension of the given t.  Uniqueness is trivial, since the elements (a⊗b)⊗c generate (A⊗_k B)⊗_k C.  So (A⊗_k B)⊗_k C is a triple tensor product.  In similar fashion, A⊗_k(B⊗_k C) is a triple tensor product.

(b)  In (2.17), take V = (A⊗_k B)⊗_k C, U = A⊗_k(B⊗_k C), and t(a,b,c) = a⊗(b⊗c).  We have just seen in (a) that V is a triple tensor product with t_V(a,b,c) = (a⊗b)⊗c.  Thus there exists a linear T : V → U with T ∘ t_V = t.  This equation means T((a⊗b)⊗c) = a⊗(b⊗c).  Interchanging the roles of (A⊗_k B)⊗_k C and A⊗_k(B⊗_k C), we obtain a two-sided inverse for T.  Thus T will serve as Φ in (b), and existence is proved.  Uniqueness is trivial, since the elements (a⊗b)⊗c generate (A⊗_k B)⊗_k C.
Just as with Proposition 2.1, Proposition 2.4 carries along with it a certain naturality that we often need to invoke in applications.

Proposition 2.5.  Let A, B, C, A′, B′, C′ be vector spaces over k, and let L_A : A → A′, L_B : B → B′, and L_C : C → C′ be linear maps.  Then the isomorphism Φ of Proposition 2.4 is natural in the sense that the diagram

    (A⊗_k B)⊗_k C  ----Φ---->  A⊗_k(B⊗_k C)
         |                          |
    (L_A⊗L_B)⊗L_C              L_A⊗(L_B⊗L_C)
         v                          v
    (A′⊗_k B′)⊗_k C′ ----Φ----> A′⊗_k(B′⊗_k C′)

commutes.

Proof.  We have

    ((L_A⊗(L_B⊗L_C)) ∘ Φ)((a⊗b)⊗c) = (L_A⊗(L_B⊗L_C))(a⊗(b⊗c))
        = L_A a ⊗ (L_B b ⊗ L_C c)
        = Φ((L_A a ⊗ L_B b) ⊗ L_C c)
        = (Φ ∘ ((L_A⊗L_B)⊗L_C))((a⊗b)⊗c).

The proposition follows.
Proposition 2.3 showed that the vector space identity (2.12) is compatible with the passage to Lie algebra representations.  We now prove a corresponding compatibility for the identity (2.18).

Proposition 2.6.  Let A, B, and C be vector spaces over ℂ, and suppose that representations of a Lie algebra 𝔤 (over ℝ or ℂ) are given on each.  Then in the vector space identity

    (A⊗_C B)⊗_C C ≅ A⊗_C(B⊗_C C)

of (2.18), the isomorphism is an equivalence of representations.

Proof.  Let Φ be the isomorphism implementing (2.18), and let us drop the names of all representations from the notation.  Then we have

    Φ(X((a⊗b)⊗c)) = Φ(X(a⊗b)⊗c + (a⊗b)⊗Xc)
        = Φ((Xa⊗b)⊗c) + Φ((a⊗Xb)⊗c) + Φ((a⊗b)⊗Xc)
        = Xa⊗(b⊗c) + a⊗(Xb⊗c) + a⊗(b⊗Xc)
        = Xa⊗(b⊗c) + a⊗X(b⊗c)
        = X(a⊗(b⊗c))
        = X(Φ((a⊗b)⊗c)).        (2.19)

Since the elements (a⊗b)⊗c generate (A⊗_C B)⊗_C C, the equivalence follows.

Remark.  The Lie algebra representation on a triple tensor product may be motivated by group representations.  In a triple tensor product, a member of the group is to act in all three factors simultaneously.  The differentiated action is just (2.19) above, as a consequence of the product rule.

There is no difficulty in generalizing matters to n-fold tensor products by induction.
An n-fold tensor product is to be universal for n-multilinear maps.  It is clearly unique up to canonical isomorphism.  One such tensor product is the (n−1)-fold tensor product of the first n−1 spaces, tensored with the nth space.  Proposition 2.4b allows us to regroup parentheses (inductively) in any fashion we choose, and iterated application of Proposition 2.5 shows that we get a well defined notion of the tensor product of n linear maps.  Iterating Proposition 2.6 shows that representations of a Lie algebra 𝔤 on each factor, when the vector spaces are complex, lead to a well defined representation on the n-fold tensor product.

Fix a vector space E over the field k, and let T^(n)(E) be an n-fold tensor product of E with itself.  In the case n = 0, we let T^(0)(E) be the field k.  Define, initially as a vector space, T(E) to be the direct sum

    T(E) = ⊕_{n=0}^∞ T^(n)(E).        (2.20)
The elements that lie in one or another T^(n)(E) are called homogeneous.  We define a bilinear multiplication on homogeneous elements,

    T^(m)(E) × T^(n)(E) → T^(m+n)(E),

to be the restriction of the canonical isomorphism T^(m)(E)⊗_k T^(n)(E) → T^(m+n)(E).  This multiplication is associative because the restriction of the isomorphism to T^(l)(E) × (T^(m)(E) × T^(n)(E)) factors through the map

    T^(l)(E) × (T^(m)(E) × T^(n)(E)) → (T^(l)(E) × T^(m)(E)) × T^(n)(E)

given by (r,(s,t)) → ((r,s),t).  Thus T(E) becomes an associative algebra with identity and is known as the tensor algebra of E.  T(E) has the following universal mapping property.

Proposition 2.7.  The one-one linear mapping ι : E → T(E) with image T^(1)(E) has the following property:  Whenever ℓ : E → A is a linear map of E into an associative algebra A with identity, then there exists a unique associative algebra homomorphism L : T(E) → A such that L(1) = 1 and the diagram

    E --ι--> T(E)
      \        |
       ℓ       | L                               (2.21)
        \      v
         ----> A

commutes.
Proof.  Uniqueness is clear, since 1 and E generate T(E) as an algebra.  For existence we define L^(n) on T^(n)(E) to be the linear extension of the n-multilinear map

    (v₁, ..., vₙ) → ℓ(v₁)···ℓ(vₙ),

and we let L = Σ ⊕ L^(n) in obvious notation.  Let u₁⊗···⊗u_m be in T^(m)(E) and v₁⊗···⊗vₙ be in T^(n)(E).  Then we have

    L((u₁⊗···⊗u_m)(v₁⊗···⊗vₙ)) = L(u₁⊗···⊗u_m⊗v₁⊗···⊗vₙ)
        = ℓ(u₁)···ℓ(u_m)ℓ(v₁)···ℓ(vₙ)
        = L(u₁⊗···⊗u_m) L(v₁⊗···⊗vₙ).

Taking linear combinations, we see that L is a homomorphism.
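The multiplication in T(E) defined above can be made concrete by writing homogeneous tensors in coordinates.  The following sketch (not part of the text) represents elements of T^(m)(E) for E = k^N as flat coordinate vectors of length N^m, so that the canonical map T^(m)⊗T^(n) → T^(m+n) is just the Kronecker product of coordinate vectors.

```python
import numpy as np

rng = np.random.default_rng(4)
N = 3
s = rng.standard_normal(N * N)   # an element of T^(2)(E), E = k^3
t = rng.standard_normal(N)       # an element of T^(1)(E)
u = rng.standard_normal(N)       # another element of T^(1)(E)

# Multiplication T^(2)(E) x T^(1)(E) -> T^(3)(E): on coordinates it is np.kron.
print(np.kron(s, t).shape)       # (27,) = N^(2+1), homogeneous of degree 3

# Associativity of the multiplication, as asserted for T(E):
print(np.allclose(np.kron(np.kron(s, t), u), np.kron(s, np.kron(t, u))))   # True
```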
Now let us consider what happens with representations.  To orient ourselves, let us think about matters first in terms of groups.  If π is a representation of G on V, then we can put A = T(V) and ℓ = π(g) in (2.21) to obtain π~(g) defined on all of T(V).  The uniqueness of L in (2.21) allows us to see that π~(g) is multiplicative in g.  In fact, on T^(n)(V), π~(g) is nothing more than the n-fold tensor product π(g)⊗···⊗π(g).  Differentiating, we see that we should expect an action by 𝔤 involving n terms on T^(n)(V).

In fact, if we start from a Lie algebra representation π of 𝔤 on a complex vector space V, our remarks with n-fold tensor products give us canonically a representation π^(n) of 𝔤 on T^(n)(V).  (The representation on T^(0)(V) ≅ ℂ is to be trivial.)  It is clear that the formula is

    π^(n)(X)(v₁⊗···⊗vₙ) = Σ_{j=1}^n v₁⊗···⊗v_{j−1}⊗π(X)v_j⊗v_{j+1}⊗···⊗vₙ,        (2.22a)

and this agrees with what we would expect from differentiating a group action.  Using direct sums, we obtain a representation π~ of 𝔤 on the full tensor algebra T(V).
Proposition 2.8.  If π is a representation of the Lie algebra 𝔤 on V and if π~ is the corresponding representation on T(V), then

    π~(X)(v⊗w) = π~(X)v⊗w + v⊗π~(X)w        (2.22b)

for all X in 𝔤 and for all v and w in T(V).

Proof.  We may assume v is in T^(m)(V) and w is in T^(n)(V).  Then (2.22a), applied to v⊗w as an element of T^(m+n)(V), gives m terms in which π(X) acts in a factor of v and n terms in which π(X) acts in a factor of w; the first group of terms adds up to π^(m)(X)v⊗w, and similarly the second adds up to v⊗π^(n)(X)w.  Then it is clear that

    π^(m+n)(X)(v⊗w) = π^(m)(X)v⊗w + v⊗π^(n)(X)w,

and this is (2.22b).
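For n = 2 the action (2.22a) can be written with Kronecker products as π(X)⊗I + I⊗π(X), and the representation property can be checked numerically.  The sketch below is illustrative only; X and Y are arbitrary matrices standing in for π(X) and π(Y).

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
X, Y = rng.standard_normal((n, n)), rng.standard_normal((n, n))
bracket = X @ Y - Y @ X
I = np.eye(n)

def pi2(A):
    # Action of A on V (x) V from (2.22a)/(2.22b):
    # A.(v (x) w) = Av (x) w + v (x) Aw, i.e. the matrix A (x) I + I (x) A.
    return np.kron(A, I) + np.kron(I, A)

# pi2 is again a Lie algebra representation:
print(np.allclose(pi2(X) @ pi2(Y) - pi2(Y) @ pi2(X), pi2(bracket)))   # True
```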
4. Representations on exterior and symmetric algebras

Let E be a vector space over k, and let T(E) be its tensor algebra.  We begin by defining the exterior algebra Λ(E).  The elements of Λ(E) are to be all the alternating tensors (= skew-symmetric if k has characteristic ≠ 2), and so we want to force v⊗v to be 0.  Thus we define the exterior algebra by

    Λ(E) = T(E) / (two-sided ideal I generated by all v⊗v with v in T^(1)(E)).        (2.23)

This is an associative algebra with identity.

It is clear that the ideal I satisfies

    I = ⊕_{n=0}^∞ (I ∩ T^(n)(E)).

An ideal with this property is said to be graded.  Since I is graded,

    Λ(E) = ⊕_{n=0}^∞ T^(n)(E)/(I ∩ T^(n)(E)).

We write Λⁿ(E) for the nth summand on the right side, so that

    Λ(E) = ⊕_{n=0}^∞ Λⁿ(E).        (2.24)

Since I ∩ T^(1)(E) = 0, the map of E into first-order elements is one-one onto.  The product operation in Λ(E) is denoted ∧ rather than ⊗.  The image in Λⁿ(E) of v₁⊗···⊗vₙ in T^(n)(E) is thus denoted v₁∧···∧vₙ.  If a is in Λᵐ(E) and b is in Λⁿ(E), then a∧b is in Λ^{m+n}(E).  Moreover Λⁿ(E) is generated by elements v₁∧···∧vₙ with all v_j in Λ¹(E) ≅ E, since T^(n)(E) is generated by corresponding elements v₁⊗···⊗vₙ.  The defining relations for Λ(E) make v_i∧v_j = −v_j∧v_i for v_i and v_j in Λ¹(E), and it follows that

    a∧b = (−1)^{mn} b∧a   if a ∈ Λᵐ(E) and b ∈ Λⁿ(E).        (2.25)
Proposition 2.9.  (a)  Λⁿ(E) has the following universal property:  Let t be the map t(v₁,...,vₙ) = v₁∧···∧vₙ of E × ··· × E into Λⁿ(E).  If ℓ is any alternating n-multilinear map of E × ··· × E into a vector space U, then there exists a unique linear map L : Λⁿ(E) → U such that the diagram

    E × ··· × E --t--> Λⁿ(E)
          \              |
           ℓ             | L
            \            v
             ----------> U

commutes.

(b)  Λ(E) has the following universal property:  Let ι be the map that imbeds E as Λ¹(E) ⊆ Λ(E).  If ℓ is any linear map of E into an associative algebra A with identity such that ℓ(v)² = 0 for all v in E, then there exists a unique algebra homomorphism L : Λ(E) → A with L(1) = 1 such that the diagram

    E --ι--> Λ(E)
      \        |
       ℓ       | L
        \      v
         ----> A

commutes.
Proof.  In both cases uniqueness is trivial.  For existence we use the universal mapping properties of T^(n)(E) and T(E) to produce L on T^(n)(E) or T(E).  If we can show that L annihilates the appropriate subspace so as to descend to Λⁿ(E) or Λ(E), then the resulting map can be taken as L, and we are done.

For (a), we have L : T^(n)(E) → U, and we are to show L(T^(n)(E) ∩ I) = 0.  The ideal I is generated by all v⊗v with v in E.  A member of T^(n)(E) ∩ I is thus of the form Σ a_i⊗(v_i⊗v_i)⊗b_i with each term in T^(n)(E).  Each term here is a sum of pure tensors x₁⊗···⊗x_r⊗v⊗v⊗y₁⊗···⊗y_s with r+2+s = n.  Since ℓ is alternating, L vanishes on each such tensor, and thus L(T^(n)(E) ∩ I) = 0.

For (b), since I is an ideal, it is enough to check that L vanishes on the generators of I.  But L(v⊗v) = ℓ(v)² = 0, and thus L(I) = 0.

Corollary 2.10.  If E and F are vector spaces over k, then Hom_k(Λⁿ(E), F) is canonically isomorphic (via restriction to pure tensors) to the vector space of F-valued alternating n-multilinear functions on E × ··· × E.

Proof.  Restriction is linear and one-one.  It is onto by Proposition 2.9a.
Proposition 2.11.  Let E be a finite-dimensional vector space of dimension N.  Then

    (a)  Λⁿ(E) = 0 for n > N.
    (b)  dim Λⁿ(E) = (N choose n) for 0 ≤ n ≤ N.  If {u_i} is an ordered basis of E, then {u_{i₁}∧···∧u_{iₙ} | i₁ < ··· < iₙ} is a basis of Λⁿ(E).
    (c)  Λⁿ(E*) is canonically isomorphic to Λⁿ(E)* by

        (f₁∧···∧fₙ)(w₁∧···∧wₙ) = det{f_i(w_j)}.

Proof.  Let u₁, ..., u_N be an ordered basis of E.  The vectors u_{i₁}⊗···⊗u_{iₙ} are then a basis of T^(n)(E), and hence the corresponding u_{i₁}∧···∧u_{iₙ} span Λⁿ(E).  Using the skew symmetry in Λⁿ(E), we thus see that the elements u_{i₁}∧···∧u_{iₙ} with i₁ < ··· < iₙ span Λⁿ(E).  This proves (a).

Let u₁*, ..., u_N* be the dual basis of E*, fix r₁ < ··· < rₙ, and define

    ℓ(w₁, ..., wₙ) = det{u_{r_i}*(w_j)}

for w₁, ..., wₙ in E.  Then ℓ is alternating n-multilinear from E × ··· × E into k and extends by Proposition 2.9a to a linear L : Λⁿ(E) → k.  If k₁ < ··· < kₙ, then

    L(u_{k₁}∧···∧u_{kₙ}) = ℓ(u_{k₁}, ..., u_{kₙ}) = det{u_{r_i}*(u_{k_j})},

and the right side is 0 unless r₁ = k₁, ..., rₙ = kₙ, in which case it is 1.  This proves that the u_{k₁}∧···∧u_{kₙ} with k₁ < ··· < kₙ are linearly independent in Λⁿ(E).  Then (b) follows, since the number of increasing n-tuples from {1, ..., N} is (N choose n).

For (c) let f₁, ..., fₙ be in E*, and define

    ℓ_{f₁,...,fₙ}(w₁, ..., wₙ) = det{f_i(w_j)}.

Then ℓ_{f₁,...,fₙ} is alternating n-multilinear from E × ··· × E into k and extends by Proposition 2.9a to a linear L_{f₁,...,fₙ} : Λⁿ(E) → k.  Thus ℓ(f₁, ..., fₙ) = L_{f₁,...,fₙ} defines an alternating n-multilinear map of E* × ··· × E* into Λⁿ(E)*.  Its linear extension L maps Λⁿ(E*) into Λⁿ(E)*.  The argument in the previous paragraph shows that L maps the basis u_{r₁}*∧···∧u_{rₙ}* to the dual basis of the u_{r₁}∧···∧u_{rₙ}.  Hence L is an isomorphism.
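The dimension count in Proposition 2.11b can be checked numerically: the antisymmetrizing projection of T^(n)(E) has rank (N choose n).  The helper below is ours, not the text's, and the identification of Λⁿ(E) with the range of the projector is only an illustration.

```python
import numpy as np
from itertools import permutations
from math import comb, factorial

def antisymmetrizer(N, n):
    """Projection of E^(x)n (dimension N^n) onto the alternating tensors."""
    dim = N ** n
    P = np.zeros((dim, dim))
    idx = np.arange(dim).reshape((N,) * n)
    for sigma in permutations(range(n)):
        sign = np.linalg.det(np.eye(n)[list(sigma)])   # +1 or -1
        P[np.arange(dim), idx.transpose(sigma).reshape(-1)] += sign
    return P / factorial(n)

N, n = 5, 3
print(np.linalg.matrix_rank(antisymmetrizer(N, n)), comb(N, n))   # 10 10
```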
Now let us consider representations.  Let 𝔤 be a Lie algebra over ℝ or ℂ, and let π be a representation of 𝔤 on a complex vector space V.  Let π^(n) and π~ be the representations of 𝔤 on T^(n)(V) and T(V).  Let us see that T^(n)(V) ∩ I is an invariant subspace for π^(n).  It is enough to apply π~(X) to a typical generator of T^(n)(V) ∩ I, which we write in condensed form as

    a⊗v⊗v⊗b,   with a in T^(r)(V), v in V, b in T^(s)(V).        (2.26)

By Proposition 2.8,

    π~(X)(a⊗v⊗v⊗b) = π~(X)a⊗v⊗v⊗b + a⊗π(X)v⊗v⊗b + a⊗v⊗π(X)v⊗b + a⊗v⊗v⊗π~(X)b
        = π~(X)a⊗v⊗v⊗b + a⊗v⊗v⊗π~(X)b
          + a⊗(π(X)v+v)⊗(π(X)v+v)⊗b − a⊗π(X)v⊗π(X)v⊗b − a⊗v⊗v⊗b,

which is in the span of elements of the form (2.26).  Hence T^(n)(V) ∩ I is an invariant subspace for π^(n), and I is an invariant subspace for π~.  Thus we get well defined quotient representations, also denoted π^(n) and π~ for now, on Λⁿ(V) and Λ(V).  By Proposition 2.8, we have

    π~(X)(a∧b) = π~(X)a∧b + a∧π~(X)b.        (2.27)

The above discussion makes precise, on the level of 𝔤, the third example of representations given in §1.8.  On the group level, if we assume V is finite-dimensional, we obtain a representation π^(n) of G on T^(n)(V) from π on V, and this passes to the quotient as a representation π^(n) on Λⁿ(V), with

    π^(n)(g)(v₁∧···∧vₙ) = π(g)v₁∧···∧π(g)vₙ.

Its differentiated version is what we constructed above on the level of 𝔤.
We turn to the symmetric algebra S(E), E being a vector space over k.  The construction, results, and proofs are similar to those for Λ(E).  The elements of S(E) are to be all symmetric tensors, and so we want to force u⊗v = v⊗u.  Thus we define the symmetric algebra by

    S(E) = T(E) / (two-sided ideal J generated by all u⊗v − v⊗u with u, v in T^(1)(E)).        (2.28)

This is an associative algebra with identity.

It is clear that J is graded:

    J = ⊕_{n=0}^∞ (J ∩ T^(n)(E)).

Thus we can write

    S(E) = ⊕_{n=0}^∞ T^(n)(E)/(J ∩ T^(n)(E)).

We write Sⁿ(E) for the nth summand on the right side, so that

    S(E) = ⊕_{n=0}^∞ Sⁿ(E).

Since J ∩ T^(1)(E) = 0, the map of E into first-order elements is one-one onto.  The product operation in S(E) is written without a product sign.  The image in Sⁿ(E) of v₁⊗···⊗vₙ in T^(n)(E) is thus denoted v₁···vₙ.  If a is in Sᵐ(E) and b is in Sⁿ(E), then ab is in S^{m+n}(E).  Moreover Sⁿ(E) is generated by elements v₁···vₙ with all v_j in S¹(E) ≅ E, since T^(n)(E) is generated by corresponding elements v₁⊗···⊗vₙ.  The defining relations for S(E) make v_i v_j = v_j v_i for v_i and v_j in S¹(E), and it follows that

    S(E) is commutative.        (2.29)
Proposition 2.12.  (a)  Sⁿ(E) has the following universal property:  Let t be the map t(v₁,...,vₙ) = v₁···vₙ of E × ··· × E into Sⁿ(E).  If ℓ is any symmetric n-multilinear map of E × ··· × E into a vector space U, then there exists a unique linear map L : Sⁿ(E) → U such that the diagram

    E × ··· × E --t--> Sⁿ(E)
          \              |
           ℓ             | L
            \            v
             ----------> U

commutes.

(b)  S(E) has the following universal property:  Let ι be the map that imbeds E as S¹(E) ⊆ S(E).  If ℓ is any linear map of E into a commutative associative algebra A with identity, then there exists a unique algebra homomorphism L : S(E) → A with L(1) = 1 such that the diagram

    E --ι--> S(E)
      \        |
       ℓ       | L
        \      v
         ----> A

commutes.

Proof.  This is completely analogous to Proposition 2.9.

Corollary 2.13.  If E and F are vector spaces over k, then Hom_k(Sⁿ(E), F) is canonically isomorphic (via restriction to pure tensors) to the vector space of F-valued symmetric n-multilinear functions on E × ··· × E.

Proposition 2.14.  Let E be a finite-dimensional vector space of dimension N.  Then

    (a)  dim Sⁿ(E) = (n+N−1 choose N−1) for 0 ≤ n < ∞.  If {u_i} is an ordered basis of E, then {u₁^{j₁}···u_N^{j_N} | j₁+···+j_N = n} is a basis of Sⁿ(E).
    (b)  Sⁿ(E*) is canonically isomorphic to Sⁿ(E)* by

        (f₁···fₙ)(w₁···wₙ) = Σ_{σ∈𝔖ₙ} f₁(w_{σ(1)})···fₙ(w_{σ(n)}),

where 𝔖ₙ is the symmetric group on n letters.

Proof.  (a)  Since S(E) is commutative and since monomials span T^(n)(E), the indicated set spans Sⁿ(E).  To see its cardinality, we recognize that picking out N−1 objects from n+N−1 to label as dividers is a way of assigning exponents to the u_i's; thus the cardinality of the indicated set is (n+N−1 choose N−1).

Let us see independence.  The map Σ c_i u_i → Σ c_i X_i of E into the polynomial algebra k[X₁,...,X_N] is linear into a commutative algebra with identity.  Its extension via Proposition 2.12b maps our spanning set for Sⁿ(E) to distinct monomials in k[X₁,...,X_N], which are necessarily linearly independent.  Hence our spanning set is a basis.

(b)  This is proved in the same way as Proposition 2.11c, and Proposition 2.12a is the tool.

Remarks.  The proof of (a) suggests that S(E) might be just polynomials in disguise, but this suggestion is misleading.  The isomorphism with k[X₁,...,X_N] in (a) depended on choosing a basis.  The canonical isomorphism is between S(E*) and polynomials on E; details are left to the reader.

Finally let us consider representations.  Let 𝔤 be a Lie algebra over ℝ or ℂ, and let π be a representation of 𝔤 on a complex vector space V.  Form π^(n) and π~ on T^(n)(V) and T(V), respectively.  Just as with the exterior algebra, we find that T^(n)(V) ∩ J is an invariant subspace for π^(n), and hence J is an invariant subspace for π~.  Thus we get well defined quotient representations, also denoted π^(n) and π~ for now, on Sⁿ(V) and S(V).  By Proposition 2.8 we have

    π~(X)(ab) = (π~(X)a)b + a(π~(X)b).        (2.30)
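The dimension count of Proposition 2.14a can be checked the same way as for exterior powers: the symmetrizing projection of T^(n)(E) has rank (n+N−1 choose N−1).  As before, the helper is ours, introduced only for this illustration.

```python
import numpy as np
from itertools import permutations
from math import comb, factorial

def symmetrizer(N, n):
    """Projection of E^(x)n onto the symmetric tensors (no signs this time)."""
    dim = N ** n
    P = np.zeros((dim, dim))
    idx = np.arange(dim).reshape((N,) * n)
    for sigma in permutations(range(n)):
        P[np.arange(dim), idx.transpose(sigma).reshape(-1)] += 1.0
    return P / factorial(n)

N, n = 4, 3
print(np.linalg.matrix_rank(symmetrizer(N, n)), comb(N + n - 1, n))   # 20 20
```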
5. Extension of scalars - complexification

For the time being, extension of scalars will be a device for taking a real vector space and making from it a complex vector space of the same dimension.  If the original vector space is n-dimensional real column vectors, the new space may be regarded as n-dimensional complex column vectors.  In this case a real matrix gives a linear mapping between real vector spaces originally and then can be regarded as a linear mapping between the extensions to complex vector spaces.  We construct the new space without using a basis.

Let K be a field with k ⊆ K.  Then K is a vector space over k, and we can form E⊗_k K whenever E is a vector space over k.  The linear map e → e⊗1 imbeds E as a subspace of E⊗_k K (over the field k).  We now make E⊗_k K into a vector space over K by defining

    (mult by c on E⊗_k K) = 1 ⊗ (mult by c on K)   for c ∈ K.

We easily check that c, c₁, c₂ in K and u, v in E⊗_k K imply

    (i)    c₁(c₂v) = (c₁c₂)v
    (ii)   c(u+v) = cu + cv
    (iii)  (c₁+c₂)v = c₁v + c₂v
    (iv)   1v = v
    (v)    c(e⊗1) = ce⊗1   if c is in k and e is in E.

These properties show that E⊗_k K is a vector space over K and that its scalar multiplication, when restricted to scalars in k and the original imbedded E, agrees with scalar multiplication in E.

If we have a linear mapping L : E → F over k, then L⊗1 : E⊗_k K → F⊗_k K is linear over K, as follows from the identity

    (L⊗1) ∘ (1⊗(mult by c in K)) = (1⊗(mult by c in K)) ∘ (L⊗1).

The mapping L⊗1 is the one explained in the first paragraph of this section.

Another kind of extension of linear mappings is available when L : E → F is linear over k and F is already a vector space over K.  In this case, let ℓ(e,c) = cL(e) for e in E and c in K.  Then ℓ : E × K → F is bilinear over k and extends to a k-linear map L~ : E⊗_k K → F.  The map L~ reduces to L on the imbedded copy E⊗1 of E in E⊗_k K.  Moreover L~ is linear over K; to see this linearity, we just observe that

    L~(c₁(e⊗c₂)) = L~(e⊗c₁c₂) = c₁c₂L(e) = c₁L~(e⊗c₂).

The pure tensors e⊗c₂ generate E⊗_k K, and the K-linearity follows.

Let us now specialize matters to the case k = ℝ and K = ℂ.  We denote E⊗_ℝ ℂ by E^ℂ, calling it the complexification of E.

Suppose that 𝔤 is a Lie algebra over ℝ.  We shall make 𝔤^ℂ into a complex Lie algebra.  To obtain a complex Lie algebra, we need to define the bracket for two members of 𝔤^ℂ.  (This is very easy with bases, but it is not immediately clear that the resulting definition is independent of basis.)  Thus let us form the 4-multilinear map 𝔤 × ℂ × 𝔤 × ℂ → 𝔤^ℂ given by

    (x, r, y, s) → rs[x,y].

Extend it to a linear map 𝔤⊗_ℝ ℂ⊗_ℝ 𝔤⊗_ℝ ℂ → 𝔤^ℂ, and restrict the result to an ℝ-bilinear map (𝔤⊗_ℝ ℂ) × (𝔤⊗_ℝ ℂ) → 𝔤^ℂ.  Using suitable uniqueness conditions for extensions of multilinear maps, we readily check that this map is ℂ-bilinear from 𝔤^ℂ × 𝔤^ℂ to 𝔤^ℂ, that it is alternating, and that it satisfies the Jacobi identity.  Thus 𝔤^ℂ is a complex Lie algebra.

The ℝ-linear map ad X : 𝔤 → 𝔤 gives us the ℂ-linear map ad X⊗1 : 𝔤^ℂ → 𝔤^ℂ, which is nothing more than ad(X⊗1).  We shall write this simply as ad X.  The linear map ad : 𝔤^ℂ → End_ℂ 𝔤^ℂ carrying X to ad X is a representation of 𝔤^ℂ.

Suppose π is a representation of 𝔤 on a complex vector space V.  Then π : 𝔤 → End_ℂ V is real linear into a complex vector space, and we have constructed a corresponding complex linear extension from 𝔤^ℂ into End_ℂ V.  We denote this extension by π, too.  It is easy to check that the extended π is a representation of 𝔤^ℂ.  Thus a representation of 𝔤 automatically extends to a representation of 𝔤^ℂ on the same complex vector space.
6. Universal enveloping algebra

In this section we suppose that 𝔤 is a complex Lie algebra of finite dimension N.  (When we are studying the representation theory of a real Lie algebra 𝔤₀, 𝔤 will be 𝔤₀^ℂ.)  If we have a representation π of 𝔤 on a complex vector space V, then the investigation of invariant subspaces in principle involves writing down all iterates π(X₁)π(X₂)···π(Xₙ) for members of 𝔤, applying them to members of V, and comparing the results.  In the course of comparing results, one might be able to simplify an expression by using the identity π(X)π(Y) = π(Y)π(X) + π[X,Y].  This identity really has nothing to do with π, and our objective in this section will be to introduce an object where we can make such calculations without reference to π; to use an identity with the representation π, one simply applies π to both sides.

For a first approximation to what we want, we can use the tensor algebra T(𝔤).  The representation π is a linear map of 𝔤 into the associative algebra End_ℂ V and extends to an algebra homomorphism π~ : T(𝔤) → End_ℂ V with π~(1) = 1.  Then π(Xₙ)···π(X₁) can be replaced by π~(Xₙ⊗···⊗X₁).  The difficulty is that the tensor algebra does not take advantage of the Lie algebra structure of 𝔤 and does not force the identity π(X)π(Y) = π(Y)π(X) + π[X,Y] for all X, Y in 𝔤 and all π.  Thus instead of the tensor algebra, we use the following quotient of T(𝔤):

    U(𝔤) = T(𝔤) / (two-sided ideal generated by all X⊗Y − Y⊗X − [X,Y] with X, Y in T^(1)(𝔤)).        (2.31)

U(𝔤) is an associative algebra with identity and is known as the universal enveloping algebra of 𝔤.  Products in U(𝔤) are written without multiplication signs.

The canonical map 𝔤 → U(𝔤), given by imbedding 𝔤 into T^(1)(𝔤) and then passing to U(𝔤), is denoted ι.  Because of (2.31), ι satisfies

    ι[X,Y] = ι(X)ι(Y) − ι(Y)ι(X)   for X, Y in 𝔤.        (2.32)

U(𝔤) is harder to work with than Λ(𝔤) or S(𝔤) because the ideal in (2.31) is not graded, i.e., is not generated by homogeneous elements.  Thus, for example, it is not evident that the canonical map ι : 𝔤 → U(𝔤) is one-one.  However, when 𝔤 is abelian (i.e., when all brackets are 0), U(𝔤) reduces to S(𝔤), and we have a clear notion of what to expect in this case.  Even when 𝔤 is nonabelian, U(𝔤) and S(𝔤) are still related, and we shall make the relationship precise at the end of this section.  Let
U 1 1 ^)
be the image of
Z£=Q T ^ ( g )
passage to the quotient in (2.31)* dimensional subspace of Proposition 2.15* t : g -» U(g)
Then
u(g) , and U(g)
^(g)
under the is a finite-
u(g) = U~ =
and the canonical map
have the following universal mapping property:
Whenever A is a complex associative algebra with identity and ir : g -> A
is a linear mapping such that
7r(X)7r(Y) -7r(Y)7r(x) = TT[X,Y]
for all X and Y in g, (2.33)
then there exists a unique algebra homomorphism such that
•?(!) = 1
and
~ : U(g) -» A
78
II: REPRESENTATIONS AND TENSORS
u(s)
commutes. Proof. t (g)
Uniqueness follows from the fact that
generate
U(s) .
For existence, let
1
and
TT1 : T(g) -» A
be
the extension given "by the universal mapping property of T(s) .
To obtain
if, we are to show that
the ideal in (2.31).
TT-, annihilates
It is enough to consider
ir-, on a
typical generator of the ideal, where we have i r 1 ( i X ® t Y - i Y ® iX - t [X,Y]) = 7r 1 (tX)TT 1 (tY) -7r 1 (tY)7r 1 (tX) - T T ^ I [X, Y] ) = 7r(X)7r(Y) -7r(Y)7r(X) -7T[X,Y] = Corollary 2.16.
Representations of
correspondence with unital left ir -> ~
correspondence Remark. Proof.
apply Proposition 2.15 to
in
U(g)
U(g)
and
1
operates as
v
in
U(g) V .
g
TT : g -> End^ V , module under Conversely if
module, then we can define
implies that
modules (under the
TT is a representation of
becomes a unital left
stand in one-one
of Proposition 2.15).
Unital means that If
U(g)
g
0 .
1. on
V,
and then
V
uv = 7r(u)v V
we
for
is a unital left
TT(X)V = (tX)v , and (2.32)
rr is a representation of
g .
constructions are inverse to each other since
u
The two TTOI = -rr in
6. UNIVERSAL ENVELOPING ALGEBRA
79
Proposition 2.15. Theorem 2.17 ( P o i n c a r e - B i r k h o f f - W i t t ) . i s an ordered b a s i s of
with all
g ,
If
t h e n the monomials
J, y_ 0 , form a basis of U(g) .
canonical map
t : g -> U(g)
Lemma 2.18. a permutation of
Let
X1
, ... ,
In particular, the
is one-one.
Z-. , ... , Z
{1,...,p} .
"be in
g , and let a be
Then
is in I ? " 1 (s) . Proof.
Without loss of generality, let a
transposition of
j
with
j+1 .
Then the lemma follows from
by multiplying through on the left by the right by
be the
(t Z-,) • • • (t Z . ,)
and on
(i Z j + 2 ) • • • (i Z p ) -
For the remainder of the proof of the theorem, we shall use the following notation: I = (i1,
,i )
not, we write
of integers from
Y T = Y. •*-Y- . P 1 }
Lemma 2.19. length
Let Y^ = tX i . 1
Also
to
For any tuple
N , increasing or
i < I
means
The Y T , for all increasing tuples of
< p , generate
\P (g) .
80
II: REPRESENTATIONS AND TENSORS Proof.
If we use all tuples of length
< p , we
certainly have a set of generators, since the obvious preimages generate ^ k / D T^ '(g) .
Lemma 2.18 then implies that
the increasing tuples suffice. Proof of Theorem 2.17. C[z_,...,z_J , and let total degree "^
satisfy P
with
z^.
in
P
for a l l
J
as the union of its definitions on
ir will be a representation by (C ) and will
satisfy (2.34) by (A ). For
-» -n+i
X
TT(XJ) (7r(X i )z J ) +7r[X i ,X 0 .]z J
P^
T
in
i < I
is in
(TT(X, J )Z J )
zT
(2.34).
f o r
so as t o be compatible and t o
(A )
with
p
TT(X) : P
s h a l l d e f i n e l i n e a r maps i n d u c t i o n on
satisfying
ir
8l
p = 0,
Hence we will be done.
we define
^ ( X ^ ) ! = z^ .
Then (AQ) holds,
(B o ) is valid, and (CQ) is vacuous. Thus now assume that for all X (C
in
n) hold.
sequence
I
with
in such a way that (A
We are to define of p
and (C ) hold. to (A ).
g
TT(X) has been defined on
If
TT(X.)Z
P^ T
-j_), (B _ _ ^ ) , and for an increasing
elements in such a way that (A ) , (B ) , i ^ I , we make the definition according
Otherwise we can write in obvious notation
J < i , 3 < J , | j| = p - 1 .
== 7r(Xi)7r(X.)2T
I=(j,J)
We are forced to define
since 7r(X.)Zj is already defined
= 7r(XJ)^(Xi)zJ + T[X i ,X J ]Zj
^
= 7r(X. 5 )(z i z J +'w) +Tr[X i ,X ; j ]z J
with w i n P ^
= z . z i z J + 7T(X^.)^+7r[X i ,X^]z J
by (A )
(Cp) by
(Bp-1)
82
II: REPRESENTATIONS AND TENSORS
So we make this definition, and then (B ) holds. ir(X.)z_
has now "been defined in all cases on
Thus
P , and we
have to show that (c ) holds. Our construction above was made so that (C ) holds if J -X
in
T' '(s) •
Composing with passage to the
quotient, we obtain an antihomomorphism of Its kernel is an ideal.
T(g)
into
To show the map descends to
U(g) .
U(g) ,
it is enough to show that each generator X® Y - Y ® X - [X,Y] maps to
0.
But this element maps in
then maps to to
U(s) •
0
in
U.(g) .
T(g)
to itself and
Hence the transpose map descends
It is clearly of order two and thus is one-one
onto. The map V
u -» u
also as right
a left
U(g)
define
r
U(g)
allows us to regard left modules, and vice versa:
module into a right
vu = u
r
v
for
u
in
U(g)
U(g)
and
U(g)
modules
To convert
module, we Just v
in
V.
Conversion in the opposite direction is accomplished "by uv = vu
.
This conversion will allow us to use tensor
products over universal enveloping algebras in investigating representations.
We take up such tensor products in §8.
7. Symmetrization

Let 𝔤 be a finite-dimensional complex Lie algebra.  We observed in (2.36) that if 𝔤 = 𝔞⊕𝔟 as vector spaces and if 𝔞 and 𝔟 are Lie subalgebras, then their universal enveloping algebras satisfy U(𝔤) ≅ U(𝔞)⊗_ℂ U(𝔟).  In particular, the 𝔞 type terms and the 𝔟 type terms are canonical.  If 𝔞 and 𝔟 are merely vector subspaces of 𝔤, we can still choose a basis of 𝔤 compatible with 𝔤 = 𝔞⊕𝔟, but the linear span of the 𝔞 type monomials will perhaps depend on the choice of basis.

In this section we introduce a device to get around this problem.  The starting point is the remark in §6 that U(𝔤) reduces to the symmetric algebra S(𝔤) in the case that 𝔤 is abelian.  Lemma 2.18 indicates that U(𝔤) and S(𝔤) are rather similar even if 𝔤 is nonabelian, and we introduce a mapping of the one to the other that captures these similarities precisely.

For p ≥ 1, define a symmetric p-multilinear map S_p : 𝔤 × ··· × 𝔤 → U(𝔤) by

    S_p(X₁, ..., X_p) = (1/p!) Σ_{σ∈𝔖_p} (ιX_{σ(1)})···(ιX_{σ(p)}),

where 𝔖_p is the symmetric group on p letters.  (The coefficients 1/p! are traditional but are not important.)  By Proposition 2.12a we obtain a corresponding linear map, also denoted S_p, from S^p(𝔤) into U(𝔤).  The direct sum of these maps for p ≥ 0 (with S₀(1) = 1) is a linear map S : S(𝔤) → U(𝔤) such that

    S(X₁···X_p) = (1/p!) Σ_{σ∈𝔖_p} (ιX_{σ(1)})···(ιX_{σ(p)}).

The map S is called symmetrization.
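A small illustration of what symmetrization does in degree 2, evaluated in a representation: the monomial XY of S(𝔤) goes to (π(X)π(Y)+π(Y)π(X))/2, which differs from the unsymmetrized product π(X)π(Y) only by the lower-order term π([X,Y])/2.  The matrices below are the standard basis of 𝔰𝔩(2,ℂ); the sketch is not part of the text.

```python
import numpy as np

def symmetrize2(piX, piY):
    # Image of the degree-2 monomial X.Y of S(g) under symmetrization,
    # evaluated in a representation pi.
    return (piX @ piY + piY @ piX) / 2.0

h = np.array([[1., 0.], [0., -1.]])
e = np.array([[0., 1.], [0., 0.]])
f = np.array([[0., 0.], [1., 0.]])

S_ef = symmetrize2(e, f)
# The unsymmetrized product e f differs from S(ef) by (1/2)[e,f] = h/2,
# a term of lower degree; this is the phenomenon behind Proposition 2.22.
print(np.allclose(e @ f - S_ef, h / 2.0))   # True
```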
88
II: REPRESENTATIONS AND TENSORS Proposition 2.22.
isomorphism of Proof. ST
S(g)
onto
P
P
U(g) .
S (Sp(g))
The image
be the composition of
quotient
S
1
U (g )/U ~ (g) .
By Theorem 2.17, p
U (g)/!?"" (s) . §
Symmetrization is a vector space
3"
is in
U p (g) , and we let
followed by passage to the
By Lemma 2.18
maps a basis of
S p (g)
Let us see therefore that
to a basis of
S- is one-one.
carries a linear combination of monomials to
be the largest degree of such a monomial.
0,
Composing
If
let
p
§
on
the linear combination with the quotient
u p (g) - u p (g)/u p - 1 (g) , we obtain
3T
on the homogeneous part of degree
all the monomials of degree coefficient one-one into
0*
Hence
U p (g) .
§
p
p .
Thus
in the linear combination have
restricted to
Zp
Q
S q (g)
is
But these spaces have the same finite
dimension, again by Theorem 2.17, and thus this restriction of S
is also onto
onto
U p (g) .
Hence
§
S(g)
U(g) . The canonical decomposition of
when
is one-one from
Q
and
U(g)
from
g = Q ©b
b • are merely vector spaces is given in the
following proposition. Proposition 2.23. b
are subspaces of
g .
Suppose
g = aeb
Then the mapping
and suppose
a
and
(a,b) -> S(a)S(b)
1. SYMMETRIZATION of
S(a)®«S(b)
into
U(g)
89
is a vector space isomorphism
onto. Proof.
We argue as in Proposition 2.22 somewhat.
restrict the map to
p
We
m
Z ^ = Q S^a ) ® c S ~ (b ) , with image in
U p (s) , and follow it with passage to the quotient U P (3)/U P " (3) . one-one.
As in Proposition 2.22, this composition is
We then complete the argument as in Proposition
2.22, obtaining an isomorphism onto. Corollary 2.24. subalgebra. into
u(s)
Suppose
Then the mapping
3 = \ u£(p)
I of
is a Lie U(l)® /n S(p)
is a vector space isomorphism onto.
Proof.
The composition (k,p) + (§(k),p) -> S(k)S(p) ,
sending S(!)3S(*>) -> U(!)® c S(p) •* U(s) , is an isomorphism by Proposition 2.23, and the first map is an isomorphism by Proposition 2.22.
Therefore the second map is
an isomorphism, and the notation corresponds to the statement of the corollary when we write
u = S(k) .
We know that a representation a representation of (2.30) holds.
g
on
S(V)
IT of
g
on
V
leads to
such that the product rule
Applying this fact to the adjoint representa-
tion of
g
on
3 , we obtain a representation (also called
ad ) of
g
on
S(g)
such that
90
I I : REPRESENTATIONS AND TENSORS (ad X)(ab) =
for a l l
a
and
TD
in
a((ad X)b) + ((ad x)a)b
(2.37)
S(g) .
The c o n s t r u c t i o n t h a t gave us r e p r e s e n t a t i o n s on symmetric algebras can a l s o take the r e p r e s e n t a t i o n g i v e us a r e p r e s e n t a t i o n on
U(g)
t h e r e i s an e a s i e r "way t o proceed:
satisfying
ad
(2.37).
and But
We simply define
(ad X)u = Xu-uX for
X
in
3 and u
in
U(g) .
Clearly
(2.38) ad
extends the
usual definition on g , and it is easy to check that (2.37) holds.
To see that it is a representation, we compute
(ad X ad Y)u = X((ad Y)u) - ((ad Y)u)X = XYu - XuY - YuX + uYX . Thus (ad X ad Y - ad Y ad X)u = XYu + uYX - YXu - uXY = (XY-YX)u-u(XY - YX) = [X,Y]u-u[X,Y] = (ad[X,Y])u as required. Proposition 2.25. representations on
Symmetrization and the adjoint
S(g) and U(s) are related by (ad X ) o S = S c ad X
for
X
in
Remark.
g . Symmetrization of course does not respect
7. SYMMETRIZATION multiplication i n S(s)
anc
compensates some f o r t h a t Proof.
If
X
T"
# X
91
^ U(s) > and t h i s i d e n t i t y
failure. i s
D
a monomial i n S(g) ,
(ad X)S(X 1 ---X p ) = (ad X) £ S X , ( 1 ) • • - X ,
then (p)
= JT ZZ \ (i) * * ' \ (3-1) [X ' X a (J) ]Xc (j+i)' "*o (p) by (2.37) with (j)=k
(2.39)
Define
X±
if
i ^ k.
Then §(ad X)(X1---Xp) =
E
a Y with
1
y
pT ^
y
k
k
.
#vk
JS. . . v k
J Y a(l) *V(J-l) r k
Y
a(p)
with
•with Since (2.39) and (2.4o) match, t h e p r o p o s i t i o n f o l l o w s .
92
II: REPRESENTATIONS AND TENSORS 8. Tensor product over an algebra Let
R
only that
be an associative algebra over
R
C .
(This means
is a complex vector space with a bilinear
multiplication that is associative.)
In our applications,
R
at first will be a universal enveloping algebra, but later
R
•will be a more complicated algebra that need not have an identity. that
1
"When
R
acts as
does have an identity, we shall assume
1
in all
R
modules; in this case every
R
module is a complex vector space, by restriction of the multiplication to multiples of
1.
identity, we shall assume that an vector space and that the
R
"When R
R
does not have an
module is also a complex
action commutes with scalar
multiplication. For this section let us write is a left R
R
module.
module and
V^
^T
to indicate that
to indicate that R
In the situation
V
V
is a right
R
( V, W), let )
(2.4la)
denote the vector space of complex l i n e a r maps cp : v •*• ¥
that
are R-linear in the sense that f or a l l r e R , V€V.
cp(rv) = r(cp(v))
This definition works compatibly with (2.10).
Namely if
Ham-fVgjWg) 9
and
L± qp
is in is in
actually R-linear from definition
Hom(L^,Lp)
HomR(V1,W]L) , L 2
Hom R (W i ,V 2 ) , then
Y^_ into
W^ •
Hom(L1,L2)cp = Lgocp©!^,
(2.4lb) defined in
is in I^cpoL^
Thus under the
is
8. TENSOR PRODUCT OVER AN ALGEBRA is in S
Suppose that and that the left such a way that
Hom(D(HomR(¥1, Vg) jHorn^V.^ Wg)) .
R module
V
(rv)s = r(vs)
Ham-(V,W)
(2.42)
is another associative algebra over is also a right for all
C
S module in
r€R , s€S, v € V .
* T S to signify this situation.)
(In this section we write Then
93
"becomes a left
S
module under the
definition (sqp)(v) = cp(vs) . The only thing t h a t requires note i n t h i s a s s e r t i o n i s the fact that identity
scp
i s again R-linear, and t h i s comes down t o the
(rv)s =
r(vs) . (RRR, ^T) .
A special case of i n t e r e s t i s the s i t u a t i o n If
R has an i d e n t i t y , then the map
¥
that
veV.
(2.44b)
a r e R - l i n e a r i n t h e sense t h a t cp(vr) =
(cp(v))r
We still use (2.10) to define valid.
In the situation
module under
for a l l r e R ,
Hom(L 1 ,L 2 ) , and (2.42) remains
(SV^,WR) , H o n ^ ^ W )
is a right
S
I I : REPRESENTATIONS AND TENSORS
9h
(cps) (v) = cp(sv) . I n t h e s p e c i a l case
( R ,V^) ,
we o b t a i n an isomorphism jV) 2= v
of r i g h t
R modules "by the map
(2.45)
cp -» cp(l) ,
provided
R
has
an i d e n t i t y . Now we s h a l l make an analogous construction with tensor product in place of
Horn.
Let the s i t u a t i o n be
(v**,RW) .
Then
denotes the vector space quotient of generated by a l l
vr® w - v® rw .
the image in
V®"^
vr®w = v®rw
in
compatibly with HomR(V1,W1)
of
v®w
V®^ W. L-,®Lp
and if
L2
We s t i l l write
in
V^W.
i s in
v® w
for
Then we have
The definition of
i n (2.8).
i n i t i a l l y i s a member of
V®c W by the subspace
Namely i f
V®_W L-,
works
is in
Hom~(V2>W2) > then
L-,®L2
Homc(V1®c Vp» W-i®c Wp) > and i t
passes to the quotient i n the range t o become a member of Homc(V1®cV2,W1®:RW2) .
Since
(L2® L 2 )(v x r® v 2 - v x ® rv 2 ) =
L-L(v1r) ® L 2 (v 2 ) - L-Jv^ ® L 2 (rv 2 )
= L 1 (v 1 )r®L 2 (v 2 ) - L 1 ( v 1 ) ® rL 2 (v 2 ) =
0,
we see that L-L®!^ In the s i t u a t i o n
i s in
Honip^^j^ V2, W1®R W2) .
(S'VR,^)
,
V®RW
becomes a l e f t
(2.46) S
8. TENSOR PRODUCT OVER AN ALGEBRA
95
module under t h e d e f i n i t i o n s(v®w) =
sv® w .
(2.47a)
To make this precise, we define the left action by V8> W by means of (2.46) as L ® I , where in
V.
L is "left by s "
Formula (2.9) shows that this definition makes V ® _ W
into a left V®RW
s on
S module.
becomes a right
Similarly in the situation (V S module under the definition (v® w)s = v® ws .
Special cases of interest are when
(2.47b)
(V,^)
and
R has an identity, and then we obtain the respective
isomorphisms R®~V^V Ju
from the maps
and
V®^ R 2= V
(2.48)
i\
r® v -> rv and v ® r -> vr .
A more general situation where two-sided modules arise is when
R
is a subalgebra of S .
module, and HOHL R (S,V) module.
and S ® R V
Then
S
is a two-sided
make sense if V
R
is an R
Both constructions are sometimes referred to as
extension of scalars, and the notion of complexification in §5 is an instance of this for real associative algebras. The vector space
V®^!^ has a universal mapping
property, given as follows. Proposition 2.26.
in the situation
i : V x W -» V ® ^ W denote the map any complex vector space and
(V^,RW) , let
t (v,w) = v® w .
b : Vx W -> X
If X is
is R-bilinear in
96
II: REPRESENTATIONS AND TENSORS
the sense that b(vr,w) = b(v, rw)
for all r e R ,
vev,
then there exists a unique complex linear map
wew,
B : V ® R W -» X
such that
B VX W
> X
commutes. This kind of argument is by now routine, and we omit it. Our final three results concern associativity formulas. Proposition 2.27. modules
A , B , and
In the situation
(A R , R B S , C S )
for
C , the isomorphism
of (2.12) induces a vector space isomorphism Hom s (A® R B,C) 2= HomR(A,Homs(B,C))
(2.49)
that is natural in each variable. Remark.
"Naturality" refers to the commutative diagram
in Proposition 2.2, which remains commutative when we pass to the quotients and submodules here. Proof.
The map implementing (2.12) from left to right
was *(q>)(a)(b) = cp(a®b)
8. TENSOR PRODUCT OVER AN ALGEBRA
97
"with i n v e r s e Y(t)(a«b) = First we restrict that
$(cp)
$
•(a)(b) .
to cp's that descend to A ® ~ B and see K
i s R-linear:
§(cp)(ar)(b) = cp(ar®b) = qp(a® rb) = $ (cp) (a) ( r b ) = ((§ (cp) ( a ) ) r) ( b ) . So
$
induces a map of
HonufAjHom^B, C)) . $
to
cp's
S-linear
Homc(A£>
B,C)
into
Next we r e s t r i c t t h e induced v e r s i o n of
t h a t a r e S - l i n e a r and check t h a t
§(cp) h a s
values:
$ (cp)(a) (bs) = cp(a®bs) = cp ((a® b ) s ) = (9(a® b) ) s = s(5 (cp) ( a ) ( b ) ) . This gives us our map from left to right in (2.49). We construct a two-sided inverse in the same way from Defining
7($) on elements
? .
a ® b in A S> B defines it on
all of A®_,B as a result of Proposition 2.26.
The remaining
details for proving (2.49) are easy and are omitted. Proposition 2.28. modules
In the situation
(RA,SBR,SC) for
A , B , and C , the isomorphism c
c
c
€
€
c
B
, C))
of (2.3) and (2.12) induces a vector space isomorphism Homs(B®RA,C) 2= HomR(A,Homs(B,C)) t h a t i s natural in each variable.
(2.50)
98
II: REPRESENTATIONS AND TENSORS Proposition 2.29.
modules
A , B,
and
In the situation
(A R , R B S , S C)
for
C , the isomorphism
(A®CB) ® C C a A ®
C
(B8>c C)
of (2.18) induces a vector space isomorphism ( A ® R B ) ® S C sf A ® R ( B ® S C )
(2.51)
that is natural in each variable. Remarks.
Naturality is by Proposition 2.5.
(2.51), we use the map of (2.18) to send A ®-, (B®^ C) . we map
If
A®R(B®CC)
Q : B«> C -> B®« C into
into
check that it descends properly. similarly.
(A®~ B) ®.p C
into
is the quotient maiD, then
A®R(B®SC)
composition sends ( A ® C B ) ® C C
To obtain
by
I®Q.
The
A ® R ( B ® S C ) , and we
The inverse map is obtained
CHAPTER III

REPRESENTATIONS OF COMPACT GROUPS

1. Abstract theory

We defined representations of topological groups in §1.8.  In this chapter we shall assume that G is a compact topological group, and soon we shall specialize G further.  Although everything we do will be valid in some form for all compact Lie groups, we shall give statements and proofs only for the unitary groups U(n).  In this way we shall avoid the need for an extensive digression on roots and weights.

For this section, let G be a compact topological group.  Such a group has a unique Borel measure that is invariant under right and left translation and has total mass one.  We refer to this measure as normalized Haar measure and write it in integrals simply as dx.

A representation Φ on a Hilbert space V is unitary if Φ(x) is a unitary operator for all x in the group.  In this case, whenever U is an invariant subspace, so is the orthogonal complement U^⊥, since w ∈ U^⊥ and u ∈ U imply

    (Φ(x)w, u) = (w, Φ(x)⁻¹u) = 0,

because Φ(x)⁻¹u again lies in U.
(§(x)w,u) = (w,$(x)"1u) € (w,u) = 0 . Proposition 3>1>
If
I
is a representation of
99
G
on a
100
III: REPRESENTATIONS OF COMPACT GROUPS
finite-dimensional
V,
product such that Proof.
Let
§
then
V
admits a Hermitian inner
is unitary.
(•>•)
"be any Hermitian inner product on
V,
and define
= J ($(x)u,v) dx. G
It is straightforward to see that
has the required
properties. Corollary 3- 2finite-dimensional
If V,
?
is a representation of
then
$
irreducible representations. each
V.
G
on a
is the direct sum of
(That is,
V = V-,©
© V.
with
an invariant subspace on which § acts irreducibly.)
Proof.
Construct
invariant subspace
(•,•)
U ^ 0
orthogonal complement
as in Proposition 3-1*
Find an
of minimal dimension and take its
IT1 .
Then 1
Repeating the argument with
IX
IT1
is invariant.
and iterating, we obtain the
required decomposition. Proposition 3.3 (Schurfs Lemma). are irreducible representations of spaces
V
V T , respectively.
and
map such that
§ T (g)L = L$ (g)
one-one onto or Proof.
G If
for all
Suppose
$' and
$T
on finite-dimensional L: V -» V T
is a linear
g
then
in
G,
L
L = 0.
We see easily that
invariant subspaces of
V
and
ker L
and
image
L
are
V 1 , respectively, and then
the only possibilities are the ones listed.
is
1. ABSTRACT THEORY Corollary 3»^» representation of L : V -» V in
G,
Suppose G
$
is an irreducible unitary
on a finite-dimensional
is a linear map such that then
Proof.
L
V.
If
5 (g)L = L? (g)
for all
g
is scalar.
Let
A
be an eigenvalue of
By Proposition 3.3,
L. $ (g)
not one-one onto "but does commute -with G .
101
Then
L - XI
for all
g
is in
L - XI = 0 .
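The orthogonality relations of Corollary 3.5 below can be tested numerically for the standard 2-dimensional representation of SU(2), approximating the Haar integral by Monte Carlo averaging over Haar-random group elements.  The helper haar_su2 is ours (uniform unit quaternions give Haar measure on SU(2)); this is only an illustrative check, not part of the text.

```python
import numpy as np

rng = np.random.default_rng(2)

def haar_su2(size):
    """Haar-random SU(2) elements, via uniformly distributed unit quaternions."""
    q = rng.standard_normal((size, 4))
    q /= np.linalg.norm(q, axis=1, keepdims=True)
    a, b = q[:, 0] + 1j * q[:, 1], q[:, 2] + 1j * q[:, 3]
    U = np.empty((size, 2, 2), dtype=complex)
    U[:, 0, 0], U[:, 0, 1] = a, b
    U[:, 1, 0], U[:, 1, 1] = -np.conj(b), np.conj(a)
    return U

U = haar_su2(200_000)
# Corollary 3.5b for the standard representation (degree d = 2): the average of
# Phi_ij * conj(Phi_kl) over the group should be delta_ik delta_jl / 2.
same = np.mean(U[:, 0, 0] * np.conj(U[:, 0, 0]))    # expect about 1/2
cross = np.mean(U[:, 0, 0] * np.conj(U[:, 1, 1]))   # expect about 0
print(round(same.real, 2), round(abs(cross), 2))    # ~0.5  ~0.0
```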
Corollary 3.5 (Schur orthogonality).

(a)  Let Φ and Φ′ be inequivalent irreducible unitary representations on finite-dimensional spaces V and V′, respectively.  Then

    ∫_G (Φ(x)u, v) \overline{(Φ′(x)u′, v′)} dx = 0

for all u, v ∈ V and u′, v′ ∈ V′.

(b)  Let Φ be an irreducible unitary representation on a finite-dimensional V.  Then

    ∫_G (Φ(x)u₁, v₁) \overline{(Φ(x)u₂, v₂)} dx = (u₁,u₂)(v₂,v₁) / dim V

for all u₁, v₁, u₂, v₂ in V.

Proof.  For (a), let ℓ : V′ → V be linear and form

    L = ∫_G Φ(x) ℓ Φ′(x⁻¹) dx.

(This integration can be regarded as occurring for matrix-valued functions and is to be handled entry-by-entry.)  Then it follows that Φ(y)LΦ′(y⁻¹) = L, so that Φ(y)L = LΦ′(y) for all y in G.  By Proposition 3.3, L = 0.  Thus

    (Lv′, v) = 0.

Choose ℓ(w′) = (w′, u′)u, and (a) results.

For (b), we proceed in the same way, starting from ℓ : V → V and obtaining L = λI by Corollary 3.4.  Taking the trace of both sides, we find λ dim V = Tr L = Tr ℓ, so that λ = Tr ℓ / dim V.  Choosing ℓ(w) = (w, u₂)u₁, we obtain (b).
§
is a unitary representation of
coefficient of
$
is any function on
If
on which
acts, then the expressions of
basis are
a matrix
of the form
($(x)u,v) . $
{u^}
G
G,
is an orthonormal basis of the space
i..(x) = ($(x)u.,u.)
§ (x)
in this
and are matrix coefficients.
The linear span of these functions is independent of the basis
{ui} .
Let
$
be unitary on
space as
V
but with multiplication by
multiplication by
V.
Let
-i ; that is, if
V
J
be the same vector i and
respective multiplication-by-i maps, then an inner product in
V
by
$ (x) : V -> V
as maps
§ (x) : V -» V , so that
on
Moreover
$
is unitary on
the complex conjugate of of
$
on
7
$
$
on
V.
J~
are the
J— = - J .
(u,v)== = (v,u) v •
check that the linear maps
V .
replaced by
Define
It is easy to
remain complex linear
is a representation of V.
We call
$
on
G V
The matrix coefficients
are the complex conjugates of the matrix
1. ABSTRACT THEORY c o e f f i c i e n t s of If !
V ,
$
then
$
on
$!
and $&$
T
103
V.
are u n i t a r y r e p r e s e n t a t i o n s on T
i s u n i t a r y on
V®V
V and
w i t h r e s p e c t to t h e
i n n e r product defined by (u®uT,v®vT) =
(u,v)(uT,vf) .
Evidently the matrix coefficients of
? ® $T
are spanned "by
the products of matrix coefficients, one from
$
and one
T
from
$ . We can interpret Corollary 3*5 as follows.
Let
{§'a'}
be a maximal set of mutually inequivalent irreducible unitary finite-dimensional representations of
G.
For each
§^
,
choose an orthonormal basis for the underlying vector space and let
§£?)
be the matrix of
the functions L 2 (G) .
{?L .'(x)}. .
In fact, if
d ^
$^
(x)
in this basis.
Then
form an orthogonal set in denotes the degree of
$^a^
(i.e., the dimension of the underlying vector space), then { (d(a))1//2§(°f) (x)}. . ij
J-^
jj ot
is an orthonormal set in
L 2 (G) .
The Peter-Weyl Theorem below -will say (among other things) that this orthonormal set is an orthonormal basis. If
T
is a unitary representation on a finite-
dimensional space, its character is the function X T (x)
where
{u.}
= Tr T(x)
=Z
(T(X)U±,U±)
is an orthonormal basis.
,
(3.1)
This function is
independent of the basis and lies in the span of the matrix coefficients of
T .
104
III: REPRESENTATIONS OF COMPACT GROUPS To any representation
associate an operator
$ on a Hirbert space, we can
$ (f)
for any continuous
f : G -» C
"by
(*(f)u,v) = J f(x)(§(x)u,v) dx G or more directly by the vector-valued integral §(f)u = S f(x)$(x)udx. G Then
$ (f)
is linear in the
f variable, and we readily
check that §(f *h) = where
f*h refers to the convolution f *h(x) = J fixy^My) G
In addition, if
$
f* (x) = f (x~X)
and
and $ (f )*
f(f») = «(f)*,
(3.3)
is the adjoint of
We shall be especially interested in the operators where
(3.2)
is unitary, then
! U ( f ) ! l < llffii where
dy = J f(y)h(y"1x) dy . G
$ (f) .
$ (XT) ,
T is an irreducible representation. Corollary 3.6,
Characters of finite-dimensional
irreducible unitary representations
T
and T!
satisfy
X T (x) = X T (x"1) * XTT = 0
*x T = x T .
if T is not equivalent with T !
1. ABSTRACT THEORY Proof. the relation
105
The first equation follows by summing for (Tfxju^u.) = ( T ( X Ju.,^) .
i = J
The other two
equations are routine consequences of Schur orthogonality (Corollary 3-5). Theorem 3«7 (Peter-Weyl Theorem). (a) The linear span of all matrix coefficients for all finite-dimensional irreducible unitary representations of is dense in (b) If
G
L (G) . {$(ah
is a maximal set of mutually
inequivalent finite-dimensional irreducible unitary representations of
G
{ (d^ )1/'2$^ (x)} is a
and if
corresponding orthonormal set of matrix coefficients, then { (d( a )) 1 / 2 §.^ (x)} (c)
is an orthonormal basis of
L 2 (G) .
Every irreducible unitary representation of
G
is
finite-dimensional. (d)
Let
Hilbert space
$
be a unitary representation of
V .
Then
V
G
on a
is the orthogonal sum of finite-
dimensional irreducible invariant subspaces. (e)
Let
Hilbert space tion
T
of
$
be a unitary representation of
V .
G
on a
For each irreducible unitary representa-
G, let
E^, be the orthogonal projection on the
closure of the sum of all irreducible invariant subspaces of V
that are equivalent with
d § (x_) 9 where character of T . then
d
T .
Then
is the degree of Moreover if
E T 5 r , = K. t^r = 0 .
T
and
E T
is given by and X
is the
T * are inequivalent,
Finally every
v
in V
satisfies
106
III: REPRESENTATIONS OF COMPACT GROUPS v = I E^v , T
with the sum taken over a set of representatives
T
of all
equivalence classes of irreducible unitary representations of
G. Proof.
(a) Although this result is valid in general,
we prove it only for unitary groups in question is a subspace of
U(n) .
The linear span
C(G) , the space of continuous
complex-valued functions on
G,
and it is closed under
multiplication (because of tensor products) and conjugation (because of complex conjugate representations). Moreover it contains the constants (because of the trivial representation) and separates points (because of the standard representation on
C n ).
By the Stone-Weierstrass Theorem, it is uniformly
dense in
C(G) .
convergence, it is dense in (b)
o
Since uniform convergence implies
L
L (G) .
The linear span of the functions in question is the
linear span considered in (a). Thus (a) and general Hilbert space theory imply (b). (c)
This will follow from (d).
(d)
By Zorn's Lemma, choose a maximal orthogonal set of
finite-dimensional irreducible invariant subspaces. be the closure of the sum. suppose
U
is not all of
invariant sub space. If then
h
h
Fix
Let
U
Arguing by contradiction, we V.
Then
v j4 0
in
IT1
is a nonzero closed
1
IT .
is a linear combination of matrix coefficients,
lies in a finite-dimensional subspace
S
of
L (G)
1. ABSTRACT THEORY
107
that is invariant under left translation. a basis of this space
S .
Then
g €G
Let
h 1 , ...,h
be
implies
J h(x)?(x)vdx = J h(x)§(gx)vdx G G 1
G
x)$(x)vdX .
= ? C . J h.(x)§(x)vdX, j=l J G J
and hence the finite-dimensional subspace invariant subspace for
$ .
Z . C§(h.) v
is an
Consequently we "will obtain a
contradiction if we show that
$(h)v ^ 0
for some linear
combination of matrix coefficients. To construct
h,
continuous function
we first form
> 0
with
f N (l) = 1
vanishes off an open neighborhood i (f«)v
as
N
is in
1
IT
shrinks to
for every
{1} .
f(fj.)v, where
N
N.
of
such that 1
in
G.
f~ f.,
Then
Let us see that
In fact,
- v = 1 HfNlli1fN(x)i(x)vdx - v G
= J i!fN|!-1fN(x)[f(X)v - V]dX = J l|fNH-1fN(x)[?(x)v - v] dx, N
""
and so
= sup ]| ? (x) v - vR . X€N
is a
108
I I I : REPRESENTATIONS OF COMPACT GROUPS
Since the r i g h t side tends t o
0
as
N shrinks t o
{1} ,
(3-3) follows. I t follows t h a t such an N .
i s
§ (^TJ) V
Now choose
h
no
"t
°
f o r
some
N
•
F i x
by (a) so t h a t
l l ^ - h ! ^ < !!f N -h!I 2 < *U(fN)vfl/jivH .
(3.5)
Then
lU(f N )v-$(hH| = Hi (fN-h)vH < llf^-hUJvH
by (3-3) and (3-5).
Hence
!U(h)v]i > H*(f N H -lli(f N )v-?(h)v|| > lN(f N )v!| > 0. Thus
h
has the required property.
This proves (d) and
also (c). (e)
Put
adjoint of
sj = dT$(xT) .
^
By (3-3) and Corollary 3-6, the
i s given by
and E ^ , = dTdT ,$ (xT * ^ T ,) = 0
Thus for
for T and T ' inequivalent
"El is an orthogonal projection, and T
T
and T
Let
U
EjjElI t = ^v T5J
=
0
inequivalent.
be an irreducible finite-dimensional subspace of
on which $ U i s equivalent with T , and l e t b e an orthononnal b a s i s of U . I f $. . (x) = ( $ ( x ) u . , u . ) > V
then
1-
ABSTRACT THEORY'
m. X^ (x) = Z $.. (x) T =i l x l
and
§(x)u. = J
109
m Z I =i l
Hence Schur orthogonality gives
1 , iv
Thus
KI is the identity on every irreducible subspace of
type
T . For
u
in a space of type
TT
with
T!
inequivalent, we have "Elu = E ! E' T u = 0 . EjI
and T
Now let us apply
to a decomposition as in (d). All terms are then
annihilated except the ones of type on spaces of type Consequently
T ' with
T!
E^T vanishes
T , since
not equivalent with
ELT = K. and v = Z
K,v for all v
r .
in V .
This completes the proof of the theorem. The proof of (e) in the Peter-Weyl Theorem contains information even in the case that
$
representation on L (G) •
is an irreducible unitary
If T
representation and u^ , ... , TI^ the space on which
T
is the right regular
is.an orthonormal basis of
operates, then the span of a row of
matrix coefficients (T(X)U.,U. ) , J
X
i
fixed and
i s an invariant subspace of orthogonality the different orthogonal.
L (G)
J
moving,
of type
spaces, as
i
1 < j < d ,
T.
—
—
T
By Schur
varies, are
In the decomposition of (d) of the theorem, as
made specific in (b), these
d
spaces are the only ones of
110
III: REPRESENTATIONS OF COMPACT GROUPS
type
T , because the proof of (e) shows that
E^.
annihilates
the others. Thus in the case of the right regular representap p tion on L (G) , image K. has dimension d , with T occurring
linearly independent times.
dT
The conclusion of (e) in the Peter-Weyl Theorem implies that the number of occurrences of
T
in a decomposition (d)
is independent of the decomposition.
The number is obtained
as the quotient
We write
(dim image IL)/d
•
this quantity, calling it the multiplicity of Corollary 3>8. tions of suppose
G T
Let
on spaces
$
V
and and
is irreducible.
T V
[§:T] T
in
for $.
be unitary representa, respectively, and
Then
[$:T] = dim Hom G (V ? ,V T ) = dim Ham G (V T ,V*) , where the subscripts
!t !T
indicated actions by
G.
Proof. member of E V$
refer to linear maps respecting the
By SchurTs Lemma and the Peter-Weyl Theorem, any Hom G (v $ ,V T )
annihilates
(^V$)x .
Ihus write
as the orthogonal sum of irreducible subspaces
(d) of the theorem. of the theorem. V
G
to
V
T
Each
V
Thus for each
is equivalent with V
the space of
is at least one-dimensional.
G
maps from
It is at most one-
Then it follows that
[5:T] = dim Hom G (V f ,V T ) . Taking adjoints, we obtain
, by
VT , by (e)
a dimensional by SchurTs Lemma.
V
2. IRREDUCIBLE REPRESENTATIONS OF SU(2)
111
dim Hom G (V ? ,V T ) = dim Hom G (V T , V 1 ) . The corollary follows.
2. Irreducible representations of SU(2) ¥e know from §1.8 that every finite-dimensional representation of
SU(2)
is smooth and hence leads to a
representation of
«u (2) .
Since
«u(2) e i Su(2) = SI (2,C) , we obtain a complex-linear representation of
SI(2,C) .
The
invariant subspaces for these representations correspond, and thus an irreducible representation of
SU(2)
leads to an
irreducible complex-linear representation of
«I(2,C) •
We
shall now classify the latter and see that they all come from representations of
SU(2) .
that the representations polynomials in
z, , z 2
$
As a consequence we shall see of
SU(2)
on holomorphic
homogeneous of degree
n
(Example 2
in §1.8) exhaust the irreducible representations of
SU(2) ,
apart from equivalence. We shall make repeated use of the basis si (2,C)
over
h=
Vo
{h , e , f}
of
f=
Vi
o) •
These elements satisfy the bracket relations [h,e]=-2e,
[h,f]=-2f,
[e,f]=h.
(3-6)
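The relations (3.6), together with the explicit relations (1)-(4) of Theorem 3.9 below, can be realized directly as matrices.  The sketch below builds, for each n, the (n+1)-dimensional representation in the basis v₀, ..., vₙ and verifies the brackets numerically; it is an illustration, not part of the proof.

```python
import numpy as np

def sl2_irrep(n):
    """Matrices of the (n+1)-dimensional representation of Theorem 3.9,
    in the basis v_0, ..., v_n satisfying relations (1)-(4)."""
    dim = n + 1
    H = np.diag([float(n - 2 * i) for i in range(dim)])
    E = np.zeros((dim, dim)); F = np.zeros((dim, dim))
    for i in range(n):
        F[i + 1, i] = 1.0                   # pi(f) v_i = v_{i+1}
        E[i, i + 1] = (i + 1) * (n - i)     # pi(e) v_{i+1} = (i+1)(n-i) v_i
    return H, E, F

def bracket(A, B):
    return A @ B - B @ A

for n in range(1, 6):
    H, E, F = sl2_irrep(n)
    ok = (np.allclose(bracket(H, E), 2 * E) and
          np.allclose(bracket(H, F), -2 * F) and
          np.allclose(bracket(E, F), H))
    print(n + 1, ok)   # each dimension m = n+1 carries a representation of (3.6)
```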
112
III: REPRESENTATIONS OF COMPACT GROUPS Theorem 3*9»
For each integer
m > 1 , there exists up
to equivalence a unique irreducible complex-linear representation
IT of 3 1 (2,C)
there is a "basis
on a space
{v , ... , v
7r(h)vi =
(2)
7r(e)v0 = 0
(3)
Tr(t)v± = v i + 1
(4)
7r(e)vi = i ( n - i + l)v jL _ 1
m.
In V
(n-2i)vi
with
Moreover the representation
vn+1 = 0 with
v_± = 0 .
IT can be realized as the
version of the representation
holomorphic polynomials in Remark.
of dimension
-,} such that ("with n = m - 1 )
(1)
differentiated
V
z^ , z^
$
on
homogeneous of degree
Property (1) gives the eigenvalues of
n.
ir(h) .
Notice that the smallest eigenvalue i s the negative of the largest.
Therefore the largest i s
Proof of uniqueness.
Let
irreducible representation of dim V = m . with
Let
v 4 0
7r(h) v = Xv .
)
0.
IT be a complex-linear «I(2,C)
on
V -with
be an eigenvector for
Then
7r(e)v ,
ir(e) v , . . .
ir(h) ,
say
are also
eigenvectors because 7r(h)7r(e)v = Tr(e)7r(h)v +7r([h, e] )v = Tr(e)Av 4-2ir(e)v = (?w-2)7r(e)v . Since
A , X+2 , ~k-A , ...
independent while nonzero. find
vQ
in V
with
are distinct, these eigenvectors are By finite-dimensionality we can
( \ redefined and)
2. IRREDUCIBLE REPRESENTATIONS OF SU(2) (a)
v
(b)
7r(h)v0 = Av Q
(c)
7r(e)v0 = 0.
0
^ 0
v i = Trff^v
Define
.
Then
7r(h)v^ = ("X-2i)v. , "by the same
argument as above, and so there is a minim"um integer 7T ( f )
n+
\
= 0.
Then
v , ... , v n
(1)
7r(h)vi = (A-2i)v ±
(2)
Tr(e)v0 = 0
(3)
ir(f)vi = v ± + 1
We claim
with
irreducibility.
with
are independent and
It is enough to show
is stable under
7r(e)
because of the
In fact, we show
ir{e)v± = i ( ^ - i + l)v i-]L
with
v^± = 0.
We proceed by induction for (4), the case (2).
n
vn+1 = 0.
V = span{ v Q ,..., v n } .
span{ vQ,..., v }
{h)
113
Assume (4) for case
i .
To handle
i = 0
i + 1,
being
we write
= ir(e)ir(t)v± = 7r([e,f])vi + 7r(f)7r(e)vi = 7r(h)vi + 7r(f)7r(e)vjL
and the induction is complete. To finish the proof of uniqueness, we show
X = n.
have Tr Tr(h) = Tr(7r(e)7r(f) -Tr(f)ir(e)) = 0. Thus
£ i = 0 (A - 2i) = 0 ,
and we find
~K = n .
We
Proof of existence. Form the differentiated version $\varphi_n$ of the representation $\Phi_n$ on holomorphic polynomials in $z_1, z_2$ homogeneous of degree $n$. Here $\varphi_n$ is complex linear and has dimension $n+1$, so that $m = n+1$. The monomials $z_1^{\,n-j}z_2^{\,j}$ for $0 \le j \le n$ are eigenvectors of $\varphi_n(h)$ with eigenvalue $2j - n$, so that $\varphi_n(h)$ does have $n$ as an eigenvalue.

Arguing by contradiction, suppose $\varphi_n$ is reducible on its space $V$. Then we can find a chain of subspaces
$$0 = U_0 \subseteq U_1 \subseteq \dots \subseteq U_k = V$$
with $U_j \ne U_{j+1}$ and $k > 1$, such that $\varphi_n$ is irreducible on $U_{j+1}/U_j$ for all $j$. Since $\dim U_{j+1}/U_j \le n$, the uniqueness argument says $\varphi_n(h)$ on $U_{j+1}/U_j$ does not have eigenvalue $n$ for any $j$. Therefore $\varphi_n(h)$ on $V$ does not have eigenvalue $n$, contradiction. We conclude $\varphi_n$ is irreducible.
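Formulas (1)-(4) translate directly into matrices. The following sketch (the helper name sl2_irrep is ours, chosen for illustration) builds $\pi(h)$, $\pi(e)$, $\pi(f)$ in the basis $v_0, \dots, v_n$ and checks that they satisfy the bracket relations (3.6), so that they really do define a representation of $\mathfrak{sl}(2,\mathbb{C})$:

```python
import numpy as np

def sl2_irrep(n):
    """Matrices of pi(h), pi(e), pi(f) on the basis v_0, ..., v_n of Theorem 3.9."""
    dim = n + 1
    H = np.diag([n - 2 * i for i in range(dim)]).astype(complex)   # (1)
    E = np.zeros((dim, dim), dtype=complex)
    F = np.zeros((dim, dim), dtype=complex)
    for i in range(dim):
        if i + 1 <= n:
            F[i + 1, i] = 1.0                # (3): pi(f) v_i = v_{i+1}
        if i >= 1:
            E[i - 1, i] = i * (n - i + 1)    # (4): pi(e) v_i = i(n-i+1) v_{i-1}
    return H, E, F

H, E, F = sl2_irrep(4)
comm = lambda x, y: x @ y - y @ x
assert np.allclose(comm(H, E), 2 * E)
assert np.allclose(comm(H, F), -2 * F)
assert np.allclose(comm(E, F), H)
```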
3. Root space decomposition for U(n)

For most of the remainder of the chapter, we shall work only with unitary groups. Thus let $G = U(n)$, and let $\mathfrak{g}_0 = \mathfrak{u}(n)$ be its Lie algebra (the set of skew-Hermitian matrices). We can identify $\mathfrak{g} = \mathfrak{g}_0^{\mathbb{C}}$ with $\mathfrak{gl}(n,\mathbb{C})$. In working with $U(n)$, we shall exploit the many naturally occurring copies of SU(2) that lie within $U(n)$, and the device for using the SU(2)'s is the root space decomposition of $\mathfrak{g}$.

Let $\mathfrak{h}_0$ be the diagonal matrices in $\mathfrak{g}_0$ and $\mathfrak{h}$ the diagonal matrices in $\mathfrak{g}$. Define a matrix $E_{ij}$ to be $1$ in the $(i,j)^{\rm th}$ place and $0$ elsewhere, and define a linear functional $e_j$ in the dual space $\mathfrak{h}^*$ by $e_j(\operatorname{diag}(h_1,\dots,h_n)) = h_j$. For each $H$ in $\mathfrak{h}$, $\operatorname{ad} H$ is diagonalized by the basis of $\mathfrak{g}$ consisting of members of $\mathfrak{h}$ and the $E_{ij}$ for $i \ne j$. We have
$$(\operatorname{ad} H)E_{ij} = (e_i(H) - e_j(H))E_{ij}.$$
So $E_{ij}$ is a simultaneous eigenvector for all $\operatorname{ad} H$, with the eigenvalue $e_i(H) - e_j(H)$. In its dependence on $H$, the eigenvalue is linear. So the eigenvalue is a linear functional on $\mathfrak{h}$, namely $e_i - e_j$. The $(e_i - e_j)$'s, for $i \ne j$, are called roots. The set of roots is denoted $\Delta$. We have
$$\mathfrak{g} = \mathfrak{h} \oplus \sum_{i \ne j} \mathbb{C}E_{ij},$$
which we can rewrite as
$$\mathfrak{g} = \mathfrak{h} \oplus \sum_{\alpha \in \Delta} \mathfrak{g}_\alpha, \qquad (3.7)$$
where
$$\mathfrak{g}_\alpha = \mathfrak{g}_{e_i - e_j} = \{X \in \mathfrak{g} \mid (\operatorname{ad} H)X = (e_i - e_j)(H)X \text{ for all } H \in \mathfrak{h}\}.$$
The decomposition (3.7) is called the root-space decomposition of $\mathfrak{g}$ with respect to $\mathfrak{h}$.
The bracket relations are easy, relative to (3.7). If $\alpha$ and $\beta$ are roots, we can compute $[E_{ij}, E_{i'j'}]$ and see that
$$[\mathfrak{g}_\alpha, \mathfrak{g}_\beta] \subseteq \begin{cases} \mathfrak{g}_{\alpha+\beta} & \text{if } \alpha+\beta \text{ is a root} \\ 0 & \text{if } \alpha+\beta \text{ is not a root or } 0 \\ \mathfrak{h} & \text{if } \alpha+\beta = 0. \end{cases} \qquad (3.8)$$
In the last case, the exact formula is $[E_{ij}, E_{ji}] = E_{ii} - E_{jj}$.

All the roots are real on $\mathfrak{h}_{\mathbb{R}}$, the space of real diagonal matrices. We introduce an ordering on the roots and certain other linear functionals on $\mathfrak{h}^*$. The ordering will depend on the choice of an ordered basis $H_1, \dots, H_n$ for $\mathfrak{h}_{\mathbb{R}}$. For example, the elements $H_i = E_{ii}$, $1 \le i \le n$, form an ordered basis of $\mathfrak{h}_{\mathbb{R}}$. If $f$ is in $\mathfrak{h}^*$, we say $f$ is positive (relative to this ordered basis of $\mathfrak{h}_{\mathbb{R}}$) if $f$ is real on $\mathfrak{h}_{\mathbb{R}}$ and if
$$f(H_1) > 0, \quad\text{or}\quad f(H_1) = 0 \text{ and } f(H_2) > 0, \quad\text{or } \dots, \text{ or}\quad f(H_1) = \dots = f(H_{n-1}) = 0 \text{ and } f(H_n) > 0.$$
If $f$ is not $0$ but is real on each $H_i$, then exactly one of $f$ and $-f$ is positive. The positive elements are closed under addition and under multiplication by positive scalars. We shall say $f > g$ or $g < f$ if $f - g$ is positive. The resulting ordering on the members of $\mathfrak{h}^*$ is called the lexicographic ordering relative to the ordered basis $H_1, \dots, H_n$ of $\mathfrak{h}_{\mathbb{R}}$. We denote by $\Delta^+$ the set of positive roots. The lexicographic ordering obtained from the choice $H_i = E_{ii}$, $1 \le i \le n$, will be called the "standard lexicographic ordering," and the corresponding $\Delta^+$, which consists of all $e_i - e_j$ with $i < j$, will be called the "standard system of positive roots."

The trace form
$$B_0(X,Y) = \operatorname{Tr}(XY) \qquad (3.9)$$
is complex bilinear on $\mathfrak{g} \times \mathfrak{g}$, and its restriction to $\mathfrak{h}_{\mathbb{R}} \times \mathfrak{h}_{\mathbb{R}}$ is real-valued and positive definite. $B_0$ has the invariance properties
$$B_0((\operatorname{ad} X)Y, Z) = -B_0(Y, (\operatorname{ad} X)Z) \qquad (3.10a)$$
and
$$B_0(\operatorname{Ad}(g)X, \operatorname{Ad}(g)Y) = B_0(X,Y) \qquad (3.10b)$$
on $\mathfrak{g}$ and $G$, respectively. Note also that
$$B_0(E_{ij}, E_{ji}) = 1 \qquad \text{for all } i \text{ and } j. \qquad (3.11)$$
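For a concrete check of the root-space decomposition and of (3.11), one can verify numerically that each $E_{ij}$ is an eigenvector of $\operatorname{ad} H$ with eigenvalue $e_i(H) - e_j(H)$ and that $\operatorname{Tr}(E_{ij}E_{ji}) = 1$; a minimal sketch:

```python
import numpy as np

n = 3

def E(i, j):
    """Matrix unit E_{ij}, with indices starting from 1 as in the text."""
    m = np.zeros((n, n), dtype=complex)
    m[i - 1, j - 1] = 1.0
    return m

H = np.diag([2.0, -1.0, 5.0]).astype(complex)   # an element of h_R
ad = lambda X, Y: X @ Y - Y @ X

for i in range(1, n + 1):
    for j in range(1, n + 1):
        if i != j:
            # (ad H) E_ij = (e_i(H) - e_j(H)) E_ij, the root e_i - e_j
            assert np.allclose(ad(H, E(i, j)),
                               (H[i - 1, i - 1] - H[j - 1, j - 1]) * E(i, j))
            # B_0(E_ij, E_ji) = Tr(E_ij E_ji) = 1, as in (3.11)
            assert np.isclose(np.trace(E(i, j) @ E(j, i)), 1.0)
```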
4. Roots and weights for U(n)

Within $U(n)$, let $T$ be the diagonal subgroup. This subgroup is connected and is maximal abelian in $U(n)$; it is often referred to as a maximal torus or a Cartan subgroup. Correspondingly $\mathfrak{h}_0$ is often referred to as a Cartan subalgebra of $\mathfrak{g}_0$.

Let $\Phi$ be a representation of $G = U(n)$ on a finite-dimensional complex vector space $V$. By Proposition 3.1 we may assume that $\Phi$ is unitary. By Theorem 1.14, $\Phi$ is smooth; let $\varphi : \mathfrak{g} \to \operatorname{End}_{\mathbb{C}} V$ be its differentiated version. When we need to, we can extend $\varphi$ to $U(\mathfrak{g})$, calling the extension $\varphi$. If $X$ is in $\mathfrak{g}_0$, then differentiation at $t = 0$ of the identity
$$\Phi(\exp tX)\Phi(\exp tX)^* = I,$$
with $(\cdot)^*$ denoting adjoint, leads to the identity
$$\varphi(X) + \varphi(X)^* = 0.$$
Thus $\varphi(X)$ is skew-Hermitian for all $X$ in $\mathfrak{g}_0$. In particular, each $\varphi(H)$ for $H$ in $\mathfrak{h}_0$ is skew-Hermitian. Hence each $\varphi(H)$ for $H$ in $\mathfrak{h}_{\mathbb{R}}$ is Hermitian and is diagonable with real eigenvalues.

Let $H_1, \dots, H_n$ be any basis of $\mathfrak{h}_{\mathbb{R}}$, such as $H_i = E_{ii}$. These matrices commute, and thus the homomorphism property
$$\varphi[X,Y] = \varphi(X)\varphi(Y) - \varphi(Y)\varphi(X)$$
says that the $\varphi(H_i)$ commute. Therefore we can find a simultaneous eigenspace decomposition of $V$ under all the $\varphi(H_i)$. Since $\varphi$ is linear on $\mathfrak{h}$, this decomposition is a simultaneous decomposition for all of $\varphi(\mathfrak{h})$, and each eigenvalue is linear. A typical eigenvalue is $\lambda(H)$, $H \in \mathfrak{h}$, and the eigenspace is $V_\lambda$:
$$V_\lambda = \{v \in V \mid \varphi(H)v = \lambda(H)v \text{ for all } H \in \mathfrak{h}\}.$$
These eigenvalues, which are certain linear functionals on $\mathfrak{h}$ that are real on $\mathfrak{h}_{\mathbb{R}}$, are called the weights of the representation $\Phi$ or $\varphi$, the spaces $V_\lambda$ are called weight spaces, and the members of $V_\lambda$ are called weight vectors for the weight $\lambda$. There are only finitely many weights, and we have an orthogonal direct sum weight-space decomposition
$$V = \sum_{\text{weights}} V_\lambda. \qquad (3.12)$$
We give some examples below. Of special interest will be the highest weight; this is the weight that is largest in whatever lexicographic ordering we fix in §3. To be concrete, let us use the standard lexicographic ordering in the examples.
For G = SU(2) , we can imitate the theory
for u(n), using ,Q = { ( ^ _%)} . Put e^J _° w )=™. If z
$
l 'Z 2
is the representation on holomorphic polynomials in homogeneous of degree
n , the general element of the
representation space is a
n z l + a n-l z l~ l 2 2 + " -
+a
0z2'
120
III: REPRESENTATIONS OF COMPACT GROUPS Cz^? , Cz?
The weight spaces are respective weights weight is
ne^ .
z~ , ..., Cz^
-ne-L , -(n-2)e 1 , ... , n e 1 .
with The highest
A special feature of this example is that
each weight space has dimension one. Example 2.
For
G = U(n) , let
on holomorphic polynomials in degree
N .
$
be the representation
z, , ... , z
homogeneous of
Each monomial
z "
with
J, +... + j
= N
is a weight vector, and
if
h 1 , ... , h
are imaginary.
- ( ^ h ^ . ••+J n e n ) • Example 3-
Hence the weight is
The highest weight is
For
G = U(n) , let
is only one weight, and it is Example 4.
For
representation on basis of
A C .
If
$ (x) = (det x ) k .
There
k(e,+.. .+e ) .
G = U(n) , let n
-Ne n .
u^ , —
C n , then the vectors
§
"be the usual
, u
is the standard
h. ROOTS ATTD WEIGHTS FOR U(n) u. A u- A ... A u. 1
X
1
with
form a basis of weight vectors of e. +e. +. . .+e. 1 x 1 X2 k
weights being
Example 5.
For
representation cp
acts by
Ad
ad,
and the roots
.
i- < ... . irt
This is clear.
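The weights in Example 2 are easy to enumerate by machine, and ordinary tuple comparison in Python is exactly comparison in the standard lexicographic ordering of the coefficients against $H_1, \dots, H_n$. A small sketch confirming that the largest weight is $-Ne_n$:

```python
from itertools import combinations_with_replacement

n, N = 3, 2
weights = []
for mono in combinations_with_replacement(range(n), N):
    j = [mono.count(i) for i in range(n)]          # exponents of z_1, ..., z_n
    weights.append(tuple(-ji for ji in j))         # weight -(j_1 e_1 + ... + j_n e_n)

highest = max(weights)                             # lexicographically largest
assert highest == (0,) * (n - 1) + (-N,)           # the highest weight is -N e_n
```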
A linear functional satisfying the equivalent properties in the proposition is said to be analytically integral.

Corollary 3.12. If $\lambda$ in $\mathfrak{h}^*$ is analytically integral, then $\lambda$ satisfies the condition that $2\langle\lambda,\alpha\rangle/|\alpha|^2$ is an integer for every root $\alpha$.

Proof. Use (i) in the proposition, noting that $2/|\alpha|^2 = 1$.
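The arithmetic behind Corollary 3.12 is elementary: $|\alpha|^2 = 2$ for every root, so $2\langle\lambda,\alpha\rangle/|\alpha|^2 = k_i - k_j$ when $\lambda = \sum k_i e_i$ and $\alpha = e_i - e_j$. A minimal check:

```python
import numpy as np

n = 4
k = np.array([3, 1, 0, -2])     # an analytically integral lambda = sum k_i e_i
for i in range(n):
    for j in range(n):
        if i != j:
            alpha = np.eye(n)[i] - np.eye(n)[j]
            val = 2 * (k @ alpha) / (alpha @ alpha)
            assert val == k[i] - k[j]      # an integer
```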
5. Theorem of the Highest Weight for U(n)

We continue with $G = U(n)$ and with other notation as in §§3-4. A linear functional $\lambda$ on $\mathfrak{h}$ that is real on $\mathfrak{h}_{\mathbb{R}}$ is said to be dominant if $2\langle\lambda,\alpha\rangle/|\alpha|^2 \ge 0$ for every $\alpha$ in $\Delta^+$, the set of positive roots. If the lexicographic ordering is the standard one and if $\lambda = \sum k_i e_i$, this condition means $k_i \ge k_j$ whenever $i < j$.
Theorem 3.13 (Theorem of the Highest Weight). Fix a lexicographic ordering for $\mathfrak{h}^*$. Apart from equivalence, the irreducible representations $\Phi_\lambda$ of $G$ stand in one-one correspondence with the dominant, analytically integral linear functionals $\lambda$ on $\mathfrak{h}$, the correspondence being that $\lambda$ is the highest weight (largest weight in the ordering) of $\Phi_\lambda$. The highest weight $\lambda$ of $\Phi_\lambda$ has these properties:

(a) $\lambda$ depends only on $\Delta^+$ and not on the particular lexicographic ordering that yielded $\Delta^+$
(b) the weight space $V_\lambda$ for $\lambda$ is one-dimensional
(c) each $E_\alpha$, for $\alpha$ in $\Delta^+$, annihilates the members of $V_\lambda$, and the members of $V_\lambda$ are the only vectors with this property
(d) every weight of $\Phi_\lambda$ is of the form $\lambda - \sum_{\alpha\in\Delta^+} n_\alpha\,\alpha$ with the $n_\alpha$ integers $\ge 0$.
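Property (d) can be verified by hand on Example 4. The sketch below (the helper simple_root_coefficients is ours) writes each difference between the highest weight and a given weight of $\Lambda^k\mathbb{C}^n$ in terms of the simple positive roots $e_i - e_{i+1}$ and confirms that the coefficients are nonnegative integers:

```python
from itertools import combinations

n, k = 4, 2
weights = [tuple(1 if i in S else 0 for i in range(n))
           for S in combinations(range(n), k)]
highest = max(weights)     # lexicographically largest, namely e_1 + ... + e_k

def simple_root_coefficients(delta):
    """Coefficients c_i with delta = sum_i c_i (e_i - e_{i+1}), or None if impossible."""
    if sum(delta) != 0:
        return None
    partial, coeffs = 0, []
    for d in delta[:-1]:
        partial += d
        coeffs.append(partial)
    return coeffs

for w in weights:
    delta = tuple(h - wi for h, wi in zip(highest, w))
    coeffs = simple_root_coefficients(delta)
    assert coeffs is not None and all(c >= 0 for c in coeffs)
```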
Proof of existence of the correspondence. Let $\Phi$ be given. We apply the constructions of §§3-4 and let $\lambda$ be the highest weight. Then $\lambda$ is analytically integral, by Proposition 3.11.

If $\alpha$ is in $\Delta^+$, then $\lambda + \alpha$ exceeds $\lambda$ and cannot be a weight. Thus $E_\alpha \in \mathfrak{g}_\alpha$ and $v \in V_\lambda$ imply $\varphi(E_\alpha)v = 0$, by Proposition 3.10a. This proves the first part of (c).

Since $\Phi$ is irreducible, so is $\varphi$. Thus $\varphi(U(\mathfrak{g}))v = V$ for each $v \ne 0$ in $V_\lambda$. Let $\beta_1, \dots, \beta_m$ be an enumeration of $\Delta^+$, and let $H_i = E_{ii}$ be the basis of $\mathfrak{h}$. By the Birkhoff-Witt Theorem, the monomials
$$E_{-\beta_1}^{p_1}\cdots E_{-\beta_m}^{p_m}\, H_1^{q_1}\cdots H_n^{q_n}\, E_{\beta_1}^{r_1}\cdots E_{\beta_m}^{r_m} \qquad (3.16)$$
form a basis of $U(\mathfrak{g})$. Let us apply $\varphi$ of each of these monomials to some $v \ne 0$ in the highest weight space $V_\lambda$. The $E_\beta$'s give $0$ (by the previous paragraph), the $H$'s multiply by constants, and the $E_{-\beta}$'s push the weight down (by Proposition 3.10a). Consequently the only members of $V_\lambda$ that can be obtained by applying $\varphi$ of (3.16) to $v$ are the vectors $\mathbb{C}v$. Thus $V_\lambda$ is one-dimensional, and (b) is proved.

The effect of $\varphi$ of (3.16) applied to $v$ in $V_\lambda$ is to give a weight vector with weight
$$\lambda - \sum_{j=1}^{m} p_j\beta_j, \qquad (3.17)$$
and these vectors span $V$. Thus the weights (3.17) are the only weights of $\varphi$, and (d) follows. Also (d) implies (a).
To prove the second half of (c), let $v \ne 0$ satisfy $\varphi(E_\alpha)v = 0$ for all $\alpha \in \Delta^+$. Subtracting the component in $V_\lambda$, we may assume $v$ has $0$ component in $V_\lambda$. Let $\lambda'$ be the largest weight such that $v$ has a nonzero component in $V_{\lambda'}$, and let $v'$ be the component. Then $\varphi(E_\alpha)v' = 0$ for all $\alpha \in \Delta^+$, and $\varphi(\mathfrak{h})v' \subseteq \mathbb{C}v'$. Applying $\varphi$ of (3.16), we see that
$$V = \sum \mathbb{C}\,\varphi(E_{-\beta_1})^{p_1}\cdots\varphi(E_{-\beta_m})^{p_m}\,v'.$$
Every weight of vectors on the right is strictly lower than $\lambda$, and we have a contradiction to the fact that $\lambda$ occurs as a weight.

Finally we prove that $\lambda$ is dominant. Let $\alpha$ be in $\Delta^+$, and form $H'$, $E_\alpha$, and $E_{-\alpha}$ as in (3.15). These vectors span a subalgebra of $\mathfrak{g}$ isomorphic to $\mathfrak{sl}(2,\mathbb{C})$, and the isomorphism carries $H'$ to $h$. Let $v \ne 0$ be in $V_\lambda$. The subspace of $V$ spanned by all $\varphi(E_{-\alpha})^k v$ is stable under $\mathfrak{sl}(2,\mathbb{C})$, and the argument for (c) shows it is the same as the span of all $\varphi(U(\mathfrak{sl}(2,\mathbb{C})))v$. On these vectors, $\varphi(H')$ acts with eigenvalue $(\lambda - k\alpha)(H')$, and the largest eigenvalue of $\varphi(H')$ is therefore $\lambda(H') = 2\langle\lambda,\alpha\rangle/|\alpha|^2$. On the other hand, this subspace is a representation space for SU(2) and splits as the direct sum of irreducible representations of SU(2). By the remark following the statement of Theorem 3.9, the largest eigenvalue is $\ge 0$. Thus $2\langle\lambda,\alpha\rangle/|\alpha|^2$ is $\ge 0$, and $\lambda$ is dominant.
Proof that the correspondence is one-one. Let $\Phi$ and $\Phi'$ be irreducible on $V$ and $V'$, respectively, both with highest weight $\lambda$, and let $\varphi$ and $\varphi'$ be the corresponding representations of $U(\mathfrak{g})$. Let $v$ and $v'$ be nonzero highest weight vectors. Form $\Phi \oplus \Phi'$ on $V \oplus V'$. We claim that
$$S = (\varphi \oplus \varphi')(U(\mathfrak{g}))(v \oplus v')$$
is a proper invariant subspace of $V \oplus V'$.
$
(A+)o,
whose highest weight is
The weights of
W , and thus
A
$
Then
f
wA
wA
relative to
are closed under the operation of
is a weight of
highest weight relative to weight.
there exists an irreducible
A4" .
$.
We claim
In fact, let
A
is the
AT
be a
is a weight, and part (d) of the theorem
says wAT = wA -
Z n_a = wA -
Z n n w6 = w(A
w/3
Z n O B) .
w0
6. WEYL GROUP FOR U(n) Cancelling A
w , we see that
is the highest weight of
A1
A+ .
Then any member
is conjugate via
W
to a
in
dominant element. Proof. let
Let
A = Z K^e^
(A )
be the standard positive system, and
Q
be a member of are
that is real on
Then the
k^'s
W
is to permute the coefficients
to
A
real.
^*
The effect of applying a member of k^ .
Since
consists of all permutations, we can arrange for the end up nonincreasing, and the result will be For general +
(A )o
dominant.
A+ = w(A+)0.
A + , we start with Choose
$-..
w
A
+
(A ) Q
W k. Ts
to
dominant.
that may be assumed
by Proposition 3*1^ so that
We readily check that
wA
is
A+
dominant,
and the lemma follows. As an illustration of the use of this lemma, we prove the following proposition. Proposition 3*l6.
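Computationally the lemma amounts to sorting: a member of $W$ permutes the coefficients $k_i$, and sorting them into nonincreasing order produces the conjugate that is dominant for the standard positive system. A one-line sketch:

```python
def dominant_conjugate(k):
    """The W-conjugate of lambda = sum k_i e_i that is dominant for (Delta^+)_0."""
    return tuple(sorted(k, reverse=True))

assert dominant_conjugate((0, 2, -1)) == (2, 0, -1)
assert dominant_conjugate((1, 1, 5, -3)) == (5, 1, 1, -3)
```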
As an illustration of the use of this lemma, we prove the following proposition.

Proposition 3.16. Fix a positive system $\Delta^+$. Let $\Phi_\lambda$ be an irreducible representation of $G$ with highest weight $\lambda$, and let $\mu$ be any weight of $\Phi_\lambda$. Then $|\mu| \le |\lambda|$.

Proof. By the lemma, there is no loss of generality in assuming that $\mu$ is dominant. By Theorem 3.13d, we have
$$\mu = \lambda - \sum n_\alpha\,\alpha \qquad (3.20)$$
with all $n_\alpha \ge 0$. Taking the inner product of both sides with $\mu$, we obtain
$$|\mu|^2 = \langle\lambda,\mu\rangle - \sum n_\alpha\langle\alpha,\mu\rangle \le \langle\lambda,\mu\rangle,$$
the second inequality holding since $\mu$ is dominant. Taking the inner product of both sides of (3.20) with $\lambda$, we obtain
$$\langle\mu,\lambda\rangle = |\lambda|^2 - \sum n_\alpha\langle\alpha,\lambda\rangle \le |\lambda|^2,$$
the second inequality holding since $\lambda$ is dominant. Therefore $|\mu|^2 \le \langle\lambda,\mu\rangle \le |\lambda|^2$, and the proposition follows.
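Proposition 3.16 can be checked numerically on Example 2; a short sketch for $U(3)$ acting on polynomials of degree 4:

```python
import numpy as np
from itertools import combinations_with_replacement

n, N = 3, 4
weights = []
for mono in combinations_with_replacement(range(n), N):
    j = [mono.count(i) for i in range(n)]
    weights.append(-np.array(j, dtype=float))      # weight -(j_1, j_2, j_3)

lam = max(weights, key=tuple)                      # the highest weight, (0, 0, -N)
assert all(np.linalg.norm(mu) <= np.linalg.norm(lam) + 1e-12 for mu in weights)
```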
The transposition $(i\ j)$ in the symmetric group leads to a Weyl group element with a nice formula. The corresponding Weyl group element evidently sends $e_i - e_j$ to $e_j - e_i$.

With $\delta$ equal to half the sum of the positive roots, we compute
$$w\delta = \tfrac{1}{2}\,w\sum\{\alpha \mid \alpha > 0,\ w\alpha > 0\} + \tfrac{1}{2}\,w\sum\{\alpha \mid \alpha > 0,\ w\alpha < 0\}$$
$$= \tfrac{1}{2}\sum\{w\alpha \mid \alpha > 0,\ w\alpha > 0\} + \tfrac{1}{2}\sum\{w\alpha \mid \alpha > 0,\ w\alpha < 0\}$$
$$= \tfrac{1}{2}\sum\{\beta \mid w^{-1}\beta > 0,\ \beta > 0\} + \tfrac{1}{2}\sum\{\gamma \mid w^{-1}\gamma > 0,\ \gamma < 0\} \qquad\text{under } \beta = w\alpha,\ \gamma = w\alpha$$
$$= \tfrac{1}{2}\sum\{\beta \mid w^{-1}\beta > 0,\ \beta > 0\} - \tfrac{1}{2}\sum\{\beta \mid w^{-1}\beta < 0,\ \beta > 0\} \qquad\text{under } \beta = -\gamma.$$
Subtracting, we obtain
$$\delta - w\delta = \sum\{\beta \mid \beta > 0,\ w^{-1}\beta < 0\} = \sum_{\beta\in\Delta^+(w)}\beta.$$
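The identity $\delta - w\delta = \sum_{\beta\in\Delta^+(w)}\beta$ can be verified by brute force for $U(3)$, with $W$ realized as the symmetric group permuting $e_1, e_2, e_3$ (the helpers is_positive and act below are ours):

```python
import numpy as np
from itertools import permutations

n = 3
pos_roots = [np.eye(n)[i] - np.eye(n)[j] for i in range(n) for j in range(i + 1, n)]
delta = 0.5 * sum(pos_roots)

def is_positive(v):
    """Positivity in the standard lexicographic ordering."""
    for x in v:
        if abs(x) > 1e-12:
            return x > 0
    return False

def act(w, v):
    """w in S_n acts on weights by sending e_i to e_{w(i)}."""
    out = np.zeros_like(v)
    for i, wi in enumerate(w):
        out[wi] = v[i]
    return out

for w in permutations(range(n)):
    w_inv = tuple(np.argsort(w))
    lhs = delta - act(w, delta)
    rhs = sum((b for b in pos_roots if not is_positive(act(w_inv, b))), np.zeros(n))
    assert np.allclose(lhs, rhs)
```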
7* Analytic form of Borel-Weil Theorem for U(n) This section gives optional motivation for some of the algebraic constructions that occur later.
It assumes
knowledge of the definitions and elementary properties of holomorphic functions and complex manifolds.
Our objective
will be to give realizations of irreducible representations of U(n)
in terms of geometry and analysis.
shall work with
GL(n, C)
But for a while, we
in place of
U(n) ; GL(n, C) 2 complex manifold (as an open subset of C n ) , and
is a
multiplication and inversion are holomorphic. If we identify two nonzero members of
C
when one is a
complex scalar multiple of the other, then the resulting quotient space
CP11"
is called complex protective space and
is a compact complex manifold in a natural way. The group
G = GL(n, c)
acts transitively on
Cn-{o}
multiplication of matrices times column vectors, and the action is holomorphic. classes leading to
The action respects the equivalence
CP11"" , and hence
transitive holomorphic action on
31
G
CP " .
has a natural The isotropy
by
The isotropy subgroup of $G$ at the class of $(0,\dots,0,1)^t$ consists exactly of all $g$ such that
$$g\begin{pmatrix}0\\ \vdots\\ 0\\ 1\end{pmatrix} = \begin{pmatrix}0\\ \vdots\\ 0\\ \lambda\end{pmatrix} \qquad\text{for some } \lambda \ne 0,$$
that is, of all $g$ of the block form $\begin{pmatrix} A & 0 \\ b & \lambda \end{pmatrix}$ with $A$ of size $(n-1)\times(n-1)$ and $\lambda \ne 0$. This is a complex subgroup $Q$ of $G$. Our group action of $G$ on $\mathbb{C}P^{n-1}$ thus gives us a one-one holomorphic map of $G/Q$ onto $\mathbb{C}P^{n-1}$. One can verify that the inverse map is holomorphic. Thus $G/Q \cong \mathbb{C}P^{n-1}$ as complex manifolds on which $G$ operates.

Fix an integer $N \ge 0$. Then we have a holomorphic homomorphism $\chi : Q \to \mathbb{C}^\times$ given by
$$\chi(q) = \lambda^{-N} \qquad\text{for } q = \begin{pmatrix} A & 0 \\ b & \lambda \end{pmatrix}. \qquad (3.22)$$
This definition gives a holomorphic action of $Q$ on $\mathbb{C}$ by
$$q(z) = \chi(q)z.$$
We form the quotient space
$$G \times_Q \mathbb{C} = (G \times \mathbb{C})/\sim,$$
where $\sim$ is the equivalence relation $(gq, z) \sim (g, q(z))$ on $G \times \mathbb{C}$. The quotient space is a complex manifold, and $G$ acts holomorphically by $g(x,z) = (gx, z)$ for $g \in G$, $x \in G$, $z \in \mathbb{C}$. The space $G \times_Q \mathbb{C}$ fibers holomorphically over $G/Q$ with projection map $e : G \times_Q \mathbb{C} \to G/Q$ given by $e(g,z) = gQ$. (Actually the fibering is a fiber bundle, but this fact will not explicitly concern us. The bundle $G \times_Q \mathbb{C} \to G/Q$ is called the associated line bundle for $\chi$.)

We consider $C^\infty$ sections $\gamma : G/Q \to G \times_Q \mathbb{C}$; these are $C^\infty$ maps such that $e\circ\gamma$ is the identity on $G/Q$. Let us set up a one-one correspondence between the $C^\infty$ sections $\gamma$ and the $C^\infty$ maps $\varphi : G \to \mathbb{C}$ such that
$$\varphi(gq) = \chi(q)^{-1}\varphi(g) \qquad\text{for } q \in Q,\ g \in G. \qquad (3.23)$$
In fact, if we are given $\gamma$, then $\gamma$ must be of the form $\gamma(gQ) = (g, \varphi_\gamma(g))$, and the image must be in the same equivalence class if we replace $g$ by $gq$ with $q \in Q$. Unraveling matters, we obtain $\varphi_\gamma(g) = \chi(q)\varphi_\gamma(gq)$, and (3.23) follows. Conversely if $\varphi$ is given with (3.23) holding, we define $\gamma$ by $\gamma(gQ) = (g, \varphi(g))$ and obtain a $C^\infty$ section.

The group $G$ acts on the $C^\infty$ sections $\gamma$ by $(g\gamma)(x) = g(\gamma(g^{-1}x))$, and it acts compatibly on the above $C^\infty$ functions $\varphi$ by $(g\varphi)(x) = \varphi(g^{-1}x)$.
Examples. (1) Let $P\begin{pmatrix}z_1\\ \vdots\\ z_n\end{pmatrix}$ be a holomorphic polynomial homogeneous of degree $N$, with $N$ as in (3.22), and define $\varphi(g) = P(g(0,\dots,0,1)^t)$, i.e., $P$ applied to the last column of $g$. Thus $\varphi$ satisfies (3.23) and yields a $C^\infty$ section. Actually this $\varphi$ is holomorphic, and the corresponding section is therefore holomorphic.

(2) Let $\varphi$ be as in (1), and let $f : G/Q \to \mathbb{C}$ be any $C^\infty$ function. Define $\varphi_1(g) = f(gQ)\varphi(g)$. Then $\varphi_1$ satisfies (3.23) and yields a $C^\infty$ section.
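Assuming the description of $Q$ as the stabilizer of the line through $(0,\dots,0,1)^t$ and the formula $\chi(q) = \lambda^{-N}$ as in (3.22), the transformation law (3.23) for the $\varphi$ of Example 1 can be checked numerically; a sketch with a sample polynomial $P$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, N = 3, 2

def P(z):
    """A sample holomorphic polynomial, homogeneous of degree N = 2."""
    return z[0] * z[2] + z[1] ** 2

def phi(g):
    """phi(g) = P applied to the last column of g, as in Example 1."""
    return P(g[:, -1])

def chi(q):
    """chi(q) = (q_nn)^(-N), the formula assumed for (3.22)."""
    return q[-1, -1] ** (-N)

g = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
q = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
q[:-1, -1] = 0      # q carries the line through (0, ..., 0, 1)^t to itself, so q is in Q

# The transformation law (3.23): phi(g q) = chi(q)^(-1) phi(g)
assert np.isclose(phi(g @ q), chi(q) ** (-1) * phi(g))
```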
f : G/Q -» C be any Then cp1
C°° section.
For the line bundle
G x Q C -> CP31"
in (3.22), the only holomorphic sections are
the ones that arise as in Example 1 from holomorphic polynomials homogeneous of degree Proof.
N.
Let cp : G -» C be the holomorphic map satisfying
(3.23) that arises from a given holomorphic section. We z
define a function
P r
z1 Z
l
on
n / and let
C n -{0}
/ z1
P |—
n/
is well defined, w e suppose also that 0 \
as follows.
J = cp (g) .
We find
To see this
?.\ IZA
g T jQ =•••!. 1/ \Z n /
Then
142
III: REPRESENTATIONS OF COMPACT GROUPS g 1 = gq
Writing
and applying (3.23), we see that
cp(gT) = cp (g) , so that
P
is "well defined.
Moreover
cp(g)
shows Uiat
P
definition of of
w
T
s
is homogeneous of degre . N . P
Since the
can "be accomplished using a whole open set
at a time, we see that
P
is holomorphic on
c n -{o}. The homogeneity condition implies that near
0.
Hence
homogeneity and
P
P
is bounded
extends to be holomorphic on
€n •
The
C°° behavior on the unit sphere force
|P(z)l < C| Z | N and more generally
I *£>(*) I < C a U I N - | a | for any multi-index
a
and all
z
in
(3.24)
Cn-{0} .
we see from (3.24) that the holomorphic function vanishes at infinity.
Hence it is
(convergent) Taylor expansion of of degree
> N
equal to
0 , and
0. P P
If
|a| > N ,
B^P
Therefore the
about
0
has all terms
is a polynomial.
This
proves the result. In terms of representation theory, Proposition 3.20 says that the natural representation of
In terms of representation theory, Proposition 3.20 says that the natural representation of $GL(n,\mathbb{C})$ on holomorphic polynomials on $\mathbb{C}^n$ homogeneous of degree $N$ can be realized as the space of holomorphic sections of the line bundle over $\mathbb{C}P^{n-1}$ associated to the representation $\chi$ of $Q$ given in (3.22).

We can, of course, restrict from $GL(n,\mathbb{C})$ to $U(n)$. The representation of $U(n)$ that we are realizing concretely is irreducible and has highest weight $\lambda = -Ne_n$ (see Example 2 in §4). The concrete realization is built from $\chi$, which is determined by its holomorphicity and its restriction to $U(n-1) \times U(1)$; on this subgroup, $\chi$ is the irreducible representation with (highest) weight $\lambda = -Ne_n$.

It is of interest to obtain the above geometric realization of the holomorphic polynomial representation of $U(n)$ without explicit reference to $GL(n,\mathbb{C})$. Thus let $G_0 = U(n)$, $\mathfrak{g}_0 = \mathfrak{u}(n)$, $\mathfrak{g} = \mathfrak{gl}(n,\mathbb{C})$, and $L_0 = U(n-1)\times U(1)$. The fact that we want to exploit is that $L_0 = Q \cap G_0$. The $C^\infty$ map $G_0 \to G/Q$ given by $g_0 \mapsto g_0Q$ descends to a one-one $C^\infty$ map $G_0/L_0 \to G/Q$. Actually this map is onto. (The relevant fact here is that $G = G_0Q$, which follows from
$$G = G_0B_0, \qquad (3.25)$$
where $B_0$ is the subgroup of the lower triangular group with positive real numbers as diagonal entries. Identity (3.25) is a unique decomposition and is a group-theoretic formulation of the Gram-Schmidt orthogonalization process for $\mathbb{C}^n$. See Knapp [1986], §V.2.)
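Concretely, (3.25) is a QR factorization in reversed coordinates: applying Gram-Schmidt after reversing the order of the columns produces $g = g_0b_0$ with $g_0$ unitary and $b_0$ lower triangular with positive diagonal. A numerical sketch:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
g = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

J = np.eye(n)[::-1]                        # coordinate-reversal matrix
Q, R = np.linalg.qr(g @ J)                 # Gram-Schmidt on the reversed columns
ph = np.diag(R) / np.abs(np.diag(R))       # rephase so that R has positive diagonal
Q, R = Q * ph, np.conj(ph)[:, None] * R

g0 = Q @ J                                 # the unitary factor in U(n)
b0 = J @ R @ J                             # lower triangular, positive diagonal

assert np.allclose(g0 @ b0, g)                      # g = g0 b0, as in (3.25)
assert np.allclose(g0.conj().T @ g0, np.eye(n))     # g0 is unitary
assert np.allclose(b0, np.tril(b0))                 # b0 is lower triangular
assert np.all(np.diag(b0).real > 0)                 # with positive diagonal entries
```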
G Q / L Q ~* G/Q is
144
III: REPRESENTATIONS OF COMPACT GROUPS G^/LQ — G/Q
smooth, so that acts.
as smooth manifolds on which
Go
We can use the isomorphism to transport the complex
structure from
G/Q
to
G Q /L Q .
check for holomorphicity on
(Shortly "we shall see how to
G 0 /L Q
without referring to
G/Q.) To think of on
X
in real terms, we start "by defining
L n = U(n-l) x U(l) .
We extend it holomorphically to
GL(n-l,C)x GL(1,C) , and (3-22) -will still hold. extend it to
Q
X
Finally we
so as to remain a homomorphism, and (3-22)
will still hold. With
X
defined only on
L n , we can form 0 But this is nothing new, since the relation
GA x c. 0 LQ
shows that we have a diffeomorphism
Q
0 L~ 0
that respects the action by GxnC Q
G .
can be transported to
G A x T C , and we seek a way to 0 LQ
detect holomorphicity of sections reference to Let
Q.
g
c omplexif ication of
I .
X
acts on
q
, but
C°°(G0)
C
q
g = 31 (n, C)
without
0
be the Lie algebras of
We can regard
c omplexif ication of
then
y : G Q /L Q -> G Q X L
GL(n,C) .
s 0 , $ , I Q , and
G , L n , and
The complex structure on
GQ ,
as the
is larger than the
As in §11.6, if
X
is in
g ,
as a left-invariant vector field:
If $Z$ is in $\mathfrak{g}$, then $Z$ acts, too; we write $Z = X + iY$ with $X$ and $Y$ in $\mathfrak{g}_0$, and then $Z$ acts by
$$Zf = Xf + i(Yf). \qquad (3.26)$$
Left-invariant vector fields on $C^\infty(G)$ are quite another matter. If $Z$ is in $\mathfrak{g}$, then $Z$ acts on $C^\infty(G)$ by
$$ZF(g) = \frac{d}{dt}F(g\exp tZ)\Big|_{t=0}. \qquad (3.27)$$
On $C^\infty(G)$ we have $ZF = XF + (iY)F$ with $X$ and $Y$ in $\mathfrak{g}_0$, but we do not necessarily have $ZF = XF + i(YF)$. In fact, this kind of equality is related to the holomorphicity of $F$:
$$F \in C^\infty(G) \text{ is holomorphic if and only if } i(ZF) = (iZ)(F) \text{ for all } Z \in \mathfrak{g}. \qquad (3.28)$$
(To verify (3.28), we have only to realize that $\exp : \mathfrak{g} \to G$ gives us a chart about the identity in $G$ compatible with the complex structure, and thus the right side of (3.28) says that $F$ satisfies the Cauchy-Riemann equations.)
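Criterion (3.28) can be illustrated numerically, with a finite difference in place of the derivative in (3.27), by using $\det$ (holomorphic on $GL(n,\mathbb{C})$) and its complex conjugate (not holomorphic) as test functions; a sketch:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
n, t = 3, 1e-6

def left_invariant(Z, F, g):
    """Finite-difference version of (3.27): (ZF)(g) ~ d/dt F(g exp tZ) at t = 0."""
    return (F(g @ expm(t * Z)) - F(g)) / t

g = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
Z = np.array([[1.0 + 0.5j, 2.0, 0.0],
              [0.0, -1.0j, 1.0],
              [0.5, 0.0, 3.0]])            # any Z in g with nonzero trace will do

F_hol = lambda x: np.linalg.det(x)             # holomorphic
F_bad = lambda x: np.conj(np.linalg.det(x))    # not holomorphic

# i(ZF) = (iZ)F holds for the holomorphic test function and fails for the other.
assert np.isclose(1j * left_invariant(Z, F_hol, g),
                  left_invariant(1j * Z, F_hol, g), atol=1e-3)
assert not np.isclose(1j * left_invariant(Z, F_bad, g),
                      left_invariant(1j * Z, F_bad, g), atol=1e-3)
```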
Once again we use the roots of $\mathfrak{g}$ with respect to $\mathfrak{h}$. Let $\Delta^+$ denote the standard positive system.

Proposition 3.21. Let $\gamma : G_0/L_0 \to G_0 \times_{L_0} \mathbb{C}$ be a $C^\infty$ section defined on an open subset $U$ of $G_0/L_0$, let $\widetilde{U}$ be the inverse image of $U$ in $G_0$, and let $\varphi : G_0 \to \mathbb{C}$ be a $C^\infty$ function with domain $\widetilde{U}$ such that
$$\gamma(g_0L_0) = (g_0, \varphi(g_0)) \qquad (3.29)$$
and
$$\varphi(g_0\ell) = \chi(\ell)^{-1}\varphi(g_0) \qquad\text{for } \ell \in L_0.$$
Then $\gamma$ is a holomorphic