Part II — Representation Theory Based on lectures by S. Martin Notes taken by Dexter Chua
Lent 2016 These notes are not endorsed by the lecturers, and I have modified them (often significantly) after lectures. They are nowhere near accurate representations of what was actually lectured, and in particular, all errors are almost surely mine.
Linear Algebra, and Groups, Rings and Modules are essential.

Representations of finite groups
Representations of groups on vector spaces, matrix representations. Equivalence of representations. Invariant subspaces and submodules. Irreducibility and Schur's Lemma. Complete reducibility for finite groups. Irreducible representations of abelian groups.

Character theory
Determination of a representation by its character. The group algebra, conjugacy classes, and orthogonality relations. Regular representation. Permutation representations and their characters. Induced representations and the Frobenius reciprocity theorem. Mackey's theorem. Frobenius's Theorem. [12]

Arithmetic properties of characters
Divisibility of the order of the group by the degrees of its irreducible characters. Burnside's p^a q^b theorem. [2]

Tensor products
Tensor products of representations and products of characters. The character ring. Tensor, symmetric and exterior algebras. [3]

Representations of S^1 and SU(2)
The groups S^1, SU(2) and SO(3), their irreducible representations, complete reducibility. The Clebsch–Gordan formula. *Compact groups.* [4]

Further worked examples
The characters of one of GL_2(F_q), S_n or the Heisenberg group. [3]
Contents

0 Introduction
1 Group actions
2 Basic definitions
3 Complete reducibility and Maschke's theorem
4 Schur's lemma
5 Character theory
6 Proof of orthogonality
7 Permutation representations
8 Normal subgroups and lifting
9 Dual spaces and tensor products of representations
  9.1 Dual spaces
  9.2 Tensor products
  9.3 Powers of characters
  9.4 Characters of G × H
  9.5 Symmetric and exterior powers
  9.6 Tensor algebra
  9.7 Character ring
10 Induction and restriction
11 Frobenius groups
12 Mackey theory
13 Integrality in the group algebra
14 Burnside's theorem
15 Representations of compact groups
  15.1 Representations of SU(2)
  15.2 Representations of SO(3), SU(2) and U(2)
Index
0 Introduction
The course studies how groups act as groups of linear transformations on vector spaces. Hopefully, you understand all the words in this sentence. If so, this is a good start.

In our case, groups are usually either finite groups or topological compact groups (to be defined later). Topological compact groups are typically subgroups of the general linear group over some infinite field. It turns out the tools we have for finite groups often work well for these particular kinds of infinite groups. The vector spaces are always finite-dimensional, and usually over C.

Prerequisites of this course include knowledge of group theory (as much as the IB Groups, Rings and Modules course), linear algebra, and, optionally, geometry, Galois theory and metric and topological spaces. There is one lemma where we must use something from Galois theory, but if you don't know Galois theory, you can just assume the lemma to be true without bothering yourself too much about the proof.
1 Group actions
We start by reviewing some basic group theory and linear algebra.

Basic linear algebra

Notation. F always represents a field. Usually, we take F = C, but sometimes it can also be R or Q. These fields all have characteristic zero, and in this case, we call what we're doing ordinary representation theory. Sometimes, we will take F = F_p or F̄_p, the algebraic closure of F_p. This is called modular representation theory.

Notation. We write V for a vector space over F — this will always be finite-dimensional over F. We write GL(V) for the group of invertible linear maps θ : V → V. This is a group with the operation given by composition of maps, with the identity map as the identity (and inverse maps as inverses).

Notation. Let V be a finite-dimensional vector space over F. We write End(V) for the endomorphism algebra, the set of all linear maps V → V.

We recall a couple of facts from linear algebra: if dim_F V = n < ∞, we can choose a basis e_1, ..., e_n of V over F. So we can identify V with F^n. Then every invertible linear map θ ∈ GL(V) corresponds to a matrix A_θ = (a_ij) ∈ M_n(F) given by

    θ(e_j) = Σ_i a_ij e_i.
In fact, we have A_θ ∈ GL_n(F), the general linear group. It is easy to see the following:

Proposition. As groups, GL(V) ≅ GL_n(F), with the isomorphism given by θ ↦ A_θ.

Of course, picking a different basis of V gives a different isomorphism to GL_n(F), but we have the following fact:

Proposition. Matrices A_1, A_2 represent the same element of GL(V) with respect to different bases if and only if they are conjugate, namely there is some X ∈ GL_n(F) such that A_2 = X A_1 X^{-1}.

Recall that tr(A) = Σ_i a_ii, where A = (a_ij) ∈ M_n(F), is the trace of A. A nice property of the trace is that it doesn't notice conjugacy:

Proposition. tr(X A X^{-1}) = tr(A).

Hence we can define the trace of an operator tr(θ) = tr(A_θ), which is independent of our choice of basis. This is an important result. When we study representations, we will have matrices flying all over the place, which are scary. Instead, we often just look at the traces of these matrices. This reduces our problem of studying matrices to plain arithmetic.
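As a quick numerical sanity check (an illustration, not part of the notes), the conjugation invariance of the trace can be verified with numpy:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
# shift by a multiple of the identity so X is comfortably invertible
X = rng.standard_normal((4, 4)) + 4 * np.eye(4)

# tr(X A X^{-1}) = tr(A): the trace does not notice the change of basis
lhs = np.trace(X @ A @ np.linalg.inv(X))
rhs = np.trace(A)
assert abs(lhs - rhs) < 1e-8
```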
When we have too many matrices, we get confused. So we want to put a matrix into a form as simple as possible. One of the simplest forms a matrix can take is being diagonal. So we want to know something about diagonalizing matrices.

Proposition. Let α ∈ GL(V), where V is a finite-dimensional vector space over C and α^m = id for some positive integer m. Then α is diagonalizable.

This follows from the following more general fact:

Proposition. Let V be a finite-dimensional vector space over C, and α ∈ End(V), not necessarily invertible. Then α is diagonalizable if and only if there is a polynomial f with distinct linear factors such that f(α) = 0.

Indeed, we have x^m − 1 = Π_{j=0}^{m−1} (x − ω^j), where ω = e^{2πi/m}.

Instead of just one endomorphism, we can look at many endomorphisms.

Proposition. A finite family of individually diagonalizable endomorphisms of a vector space over C can be simultaneously diagonalized if and only if they commute.

Basic group theory

We will not review the definition of a group. Instead, we look at some of our favorite groups, since they will be handy examples later on.

Definition (Symmetric group S_n). The symmetric group S_n is the set of all permutations of X = {1, ..., n}, i.e. the set of all bijections X → X. We have |S_n| = n!.

Definition (Alternating group A_n). The alternating group A_n is the set of products of an even number of transpositions (i j) in S_n. We know |A_n| = n!/2. So this is a subgroup of index 2 and hence normal.

Definition (Cyclic group C_m). The cyclic group of order m, written C_m, is C_m = ⟨x : x^m = 1⟩. This also occurs naturally, as Z/mZ under addition, and as the group of mth roots of unity in C. We can view this as a subgroup of GL_1(C) ≅ C^×. Alternatively, this is the group of rotation symmetries of a regular m-gon in R^2, and can be viewed as a subgroup of GL_2(R).

Definition (Dihedral group D_{2m}). The dihedral group D_{2m} of order 2m is

    D_{2m} = ⟨x, y : x^m = y^2 = 1, yxy^{-1} = x^{-1}⟩.
This is the symmetry group of a regular m-gon. The x^i are the rotations and the x^i y are the reflections. For example, in D_8, x is rotation by π/2 and y is any reflection. This group can be viewed as a subgroup of GL_2(R), but since it also permutes the vertices, it can be viewed as a subgroup of S_m.
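As a sanity check (not part of the notes), the presentation of D_{2m} can be realized concretely inside GL_2(R) with numpy, taking x to be rotation by 2π/m and y a reflection:

```python
import numpy as np

def rot(t):
    """Rotation of R^2 by angle t."""
    return np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])

m = 4                                      # D_8, symmetries of a square
x = rot(2 * np.pi / m)                     # rotation by 2π/m
y = np.array([[1.0, 0.0], [0.0, -1.0]])    # reflection in the horizontal axis

# the defining relations x^m = y^2 = 1, y x y^{-1} = x^{-1}
assert np.allclose(np.linalg.matrix_power(x, m), np.eye(2))
assert np.allclose(y @ y, np.eye(2))
assert np.allclose(y @ x @ np.linalg.inv(y), np.linalg.inv(x))
```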
Definition (Quaternion group). The quaternion group is given by

    Q_8 = ⟨x, y : x^4 = 1, y^2 = x^2, yxy^{-1} = x^{-1}⟩.

This has order 8, and we write i = x, j = y, k = ij, −1 = i^2, with Q_8 = {±1, ±i, ±j, ±k}. We can view this as a subgroup of GL_2(C) via

    1 = [ 1 0 ; 0 1 ],  i = [ i 0 ; 0 −i ],  j = [ 0 1 ; −1 0 ],  k = [ 0 i ; i 0 ],

and −1, −i, −j, −k are the negatives of these matrices.

Definition (Conjugacy class). The conjugacy class of g ∈ G is

    ccl_G(g) = {xgx^{-1} : x ∈ G}.

Definition (Centralizer). The centralizer of g ∈ G is

    C_G(g) = {x ∈ G : xg = gx}.

Then by the orbit-stabilizer theorem, we have |ccl_G(g)| = |G : C_G(g)|.

Definition (Group action). Let G be a group and X a set. We say G acts on X if there is a map ∗ : G × X → X, written (g, x) ↦ g ∗ x = gx, such that

(i) 1x = x,
(ii) g(hx) = (gh)x.

The group action can also be characterized in terms of a homomorphism.

Lemma. Given an action of G on X, we obtain a homomorphism θ : G → Sym(X), where Sym(X) is the group of all permutations of X.

Proof. For g ∈ G, define θ(g) = θ_g ∈ Sym(X) as the function X → X given by x ↦ gx. This is indeed a permutation of X because θ_{g^{-1}} is an inverse. Moreover, for any g_1, g_2 ∈ G, we get θ_{g_1 g_2} = θ_{g_1} θ_{g_2}, since (g_1 g_2)x = g_1(g_2 x).

Definition (Permutation representation). The permutation representation of a group action of G on X is the homomorphism θ : G → Sym(X) obtained above.

In this course, X is often a finite-dimensional vector space over F (and we write it as V), and we want the action to satisfy some more properties. We will require the action to be linear, i.e. for all g ∈ G, v_1, v_2 ∈ V and λ ∈ F,

    g(v_1 + v_2) = gv_1 + gv_2,  g(λv_1) = λ(gv_1).
Alternatively, instead of asking for a map G → Sym(X), we ask for a map G → GL(V).
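The quaternion matrices above can be checked by machine. Here is a minimal numpy sketch (an illustration, not part of the notes) verifying the defining relations of Q_8, with x = i and y = j:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
i = np.array([[1j, 0], [0, -1j]])
j = np.array([[0, 1], [-1, 0]], dtype=complex)
k = i @ j  # k = ij

# defining relations of Q8: x^4 = 1, y^2 = x^2, y x y^{-1} = x^{-1}
assert np.allclose(np.linalg.matrix_power(i, 4), I2)
assert np.allclose(j @ j, i @ i)
assert np.allclose(j @ i @ np.linalg.inv(j), np.linalg.inv(i))

# the familiar quaternion identities i^2 = -1 and ijk = -1
assert np.allclose(i @ i, -I2) and np.allclose(i @ j @ k, -I2)
```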
2 Basic definitions
We now start doing representation theory. We boringly start by defining a representation. In fact, we will come up with several equivalent definitions of a representation. As always, G will be a finite group and F will be a field, usually C.

Definition (Representation). Let V be a finite-dimensional vector space over F. A (linear) representation of G on V is a group homomorphism

    ρ = ρ_V : G → GL(V).

We sometimes write ρ_g for ρ_V(g), so for each g ∈ G, ρ_g ∈ GL(V), and ρ_g ρ_h = ρ_{gh} and ρ_{g^{-1}} = (ρ_g)^{-1} for all g, h ∈ G.

Definition (Dimension or degree of representation). The dimension (or degree) of a representation ρ : G → GL(V) is dim_F(V).

Recall that ker ρ ⊴ G and G/ker ρ ≅ ρ(G) ≤ GL(V). In the very special case where ker ρ is trivial, we give it a name:

Definition (Faithful representation). A faithful representation is a representation ρ such that ker ρ = 1.

These are the representations where the identity is the only element that does nothing.

An alternative (and of course equivalent) definition of a representation is to observe that a linear representation is "the same" as a linear action of G.

Definition (Linear action). A group G acts linearly on a vector space V if it acts on V such that

    g(v_1 + v_2) = gv_1 + gv_2,  g(λv_1) = λ(gv_1)

for all g ∈ G, v_1, v_2 ∈ V and λ ∈ F. We call this a linear action.

Now if G acts linearly on V, the map G → GL(V) defined by g ↦ ρ_g, with ρ_g : v ↦ gv, is a representation in the previous sense. Conversely, given a representation ρ : G → GL(V), we have a linear action of G on V via gv = ρ(g)v. In other words, a representation is just a linear action.

Definition (G-space/G-module). If there is a linear action of G on V, we say V is a G-space or G-module.

Alternatively, we can define a G-space as a module over a (not so) cleverly picked ring.

Definition (Group algebra). The group algebra FG is defined to be the algebra (i.e. a vector space with a bilinear multiplication operation) of formal sums

    FG = { Σ_{g∈G} α_g g : α_g ∈ F }

with the obvious addition and multiplication.
Then we can regard FG as a ring, and a G-space is just an FG-module in the sense of IB Groups, Rings and Modules.

Definition (Matrix representation). R is a matrix representation of G of degree n if R is a homomorphism G → GL_n(F).

We can view this as a representation that acts on F^n. Since all finite-dimensional vector spaces are isomorphic to F^n for some n, every representation is equivalent to some matrix representation. In particular, given a linear representation ρ : G → GL(V) with dim V = n, we can get a matrix representation by fixing a basis B, and then defining the matrix representation G → GL_n(F) by g ↦ [ρ(g)]_B. Conversely, given a matrix representation R, we get a linear representation ρ in the obvious way — ρ : G → GL(F^n) by g ↦ ρ_g via ρ_g(v) = R_g v.

We have defined representations in four ways — as homomorphisms to GL(V), as linear actions, as FG-modules and as matrix representations. Now let's look at some examples.

Example (Trivial representation). Given any group G, take V = F (the one-dimensional space), and ρ : G → GL(V) by g ↦ (id : F → F) for all g. This is the trivial representation of G, and has degree 1.

Despite being called trivial, trivial representations are highly non-trivial in representation theory. The way they interact with other representations cannot be disregarded. This is a very important representation, despite looking silly.

Example. Let G = C_4 = ⟨x : x^4 = 1⟩. Let n = 2, and work over F = C. Then we can define a representation by picking a matrix A, and then defining R : x ↦ A. The action of the other elements follows directly by x^j ↦ A^j. Of course, we cannot choose A arbitrarily. We need A^4 = I_2, and this is easily seen to be the only restriction. So we have the following possibilities:

(i) A is diagonal: the diagonal entries can be chosen freely from {±1, ±i}. Since there are two diagonal entries, we have 16 choices.
(ii) A is not diagonal: then it is conjugate to a diagonal matrix, since A^4 = I_2. So we don't really get anything new.

What we would like to say above is that any matrix representation in which A is not diagonal is "equivalent" to one in which it is. To make this notion precise, we need to define what it means for representations to be equivalent, or "isomorphic". As usual, we will define the notion of a homomorphism of representations, and then an isomorphism is just an invertible homomorphism.

Definition (G-homomorphism/intertwine). Fix a group G and a field F. Let V, V′ be finite-dimensional vector spaces over F and ρ : G → GL(V) and ρ′ : G → GL(V′) be representations of G. The linear map ϕ : V → V′ is a G-homomorphism if

    ϕ ∘ ρ(g) = ρ′(g) ∘ ϕ    (∗)
for all g ∈ G. In other words, the following diagram commutes:

            ρ_g
       V ---------> V
       |            |
     ϕ |            | ϕ
       v            v
       V′ --------> V′
            ρ′_g
i.e. no matter which way we go from V (top left) to V′ (bottom right), we get the same map. We say ϕ intertwines ρ and ρ′. We write Hom_G(V, V′) for the F-space of all these maps.

Definition (G-isomorphism). A G-homomorphism ϕ is a G-isomorphism if ϕ is bijective.

Definition (Equivalent/isomorphic representations). Two representations ρ, ρ′ are equivalent or isomorphic if there is a G-isomorphism between them.

If ϕ is a G-isomorphism, then we can write (∗) as

    ρ′ = ϕ ρ ϕ^{-1}.    (†)
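As an illustration (a numerical sketch, not from the notes) of the earlier C_4 example: a non-diagonal matrix A with A^4 = I is conjugate, i.e. gives a G-isomorphic representation, to a diagonal one. Diagonalizing A exhibits the intertwining map:

```python
import numpy as np

# a non-diagonal matrix with A^4 = I: rotation by 90 degrees, defining a
# 2-dimensional matrix representation of C4 = <x> via x -> A
A = np.array([[0., -1.], [1., 0.]])
assert np.allclose(np.linalg.matrix_power(A, 4), np.eye(2))

# diagonalise: P^{-1} A P is diagonal with entries ±i, so this representation
# is equivalent to one of the diagonal representations
eigvals, P = np.linalg.eig(A)
D = np.linalg.inv(P) @ A.astype(complex) @ P
assert np.allclose(D, np.diag(eigvals))
assert np.allclose(sorted(eigvals, key=lambda z: z.imag), [-1j, 1j])
```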
Lemma. The relation of "being isomorphic" is an equivalence relation on the set of all linear representations of G over F.

This is an easy exercise left for the reader.

Lemma. If ρ, ρ′ are isomorphic representations, then they have the same dimension.

Proof. Trivial, since isomorphisms between vector spaces preserve dimension.

The converse is false.

Example. C_4 has four non-isomorphic one-dimensional representations: if ω = e^{2πi/4}, then we have the representations ρ_j(x^i) = ω^{ij} for 0 ≤ i ≤ 3, which are not equivalent for different j = 0, 1, 2, 3.

Our other formulations of representations give us other formulations of isomorphisms. Given a group G, a field F, a vector space V of dimension n, and a representation ρ : G → GL(V), we fix a basis B of V. Then we get a linear G-isomorphism ϕ : V → F^n by v ↦ [v]_B, i.e. by writing v as a column vector with respect to B. Then we get a representation ρ′ : G → GL(F^n) isomorphic to ρ. In other words, every representation is isomorphic to a matrix representation:

             ρ
       V ---------> V
       |            |
     ϕ |            | ϕ
       v            v
       F^n -------> F^n
             ρ′
Thus, in terms of matrix representations, the representations R : G → GL_n(F) and R′ : G → GL_n(F) are G-isomorphic if there exists some non-singular matrix X ∈ GL_n(F) such that R′(g) = X R(g) X^{-1} for all g.

Alternatively, in terms of linear G-actions, the actions of G on V and V′ are G-isomorphic if there is some isomorphism ϕ : V → V′ such that gϕ(v) = ϕ(gv) for all g ∈ G, v ∈ V. It is an easy check that this is just a reformulation of our previous definition.

Just as we have subgroups and subspaces, we have the notion of a subrepresentation.

Definition (G-subspace). Let ρ : G → GL(V) be a representation of G. We say W ≤ V is a G-subspace if it is a subspace that is ρ(G)-invariant, i.e. ρ_g(W) ≤ W for all g ∈ G.

Obviously, {0} and V are G-subspaces. These are the trivial G-subspaces.

Definition (Irreducible/simple representation). A representation ρ is irreducible or simple if there are no proper non-zero G-subspaces.

Example. Any 1-dimensional representation of G is necessarily irreducible, but the converse does not hold, or else life would be very boring. We will later see that D_8 has a two-dimensional irreducible complex representation.

Definition (Subrepresentation). If W is a G-subspace, then the corresponding map G → GL(W) given by g ↦ ρ(g)|_W gives us a new representation on W. This is a subrepresentation of ρ.

There is a nice way to characterize this in terms of matrices.

Lemma. Let ρ : G → GL(V) be a representation, and W be a G-subspace of V. If B = {v_1, ..., v_n} is a basis containing a basis B_1 = {v_1, ..., v_m} of W (with 0 < m < n), then the matrix of ρ(g) with respect to B has the block upper triangular form

    [ ∗ ∗ ]
    [ 0 ∗ ]

for each g ∈ G.

This follows directly from the definition. However, we do not like block triangular matrices. What we really like is block diagonal matrices, i.e. we want the top-right block to vanish.
There is no a priori reason why this has to be possible — it may be that we cannot find a G-invariant complement to W.
Definition ((In)decomposable representation). A representation ρ : G → GL(V) is decomposable if there are proper G-invariant subspaces U, W ≤ V with V = U ⊕ W. We say ρ is the direct sum ρ_U ⊕ ρ_W. If no such decomposition exists, we say that ρ is indecomposable.

It is clear that irreducibility implies indecomposability. The converse is not necessarily true. However, over a field of characteristic zero, it turns out irreducibility is the same as indecomposability for finite groups, as we will see in the next chapter.

Again, we can formulate this in terms of matrices.

Lemma. Let ρ : G → GL(V) be a decomposable representation with G-invariant decomposition V = U ⊕ W. Let B_1 = {u_1, ..., u_k} and B_2 = {w_1, ..., w_ℓ} be bases for U and W, and B = B_1 ∪ B_2 the corresponding basis for V. Then with respect to B, we have

    [ρ(g)]_B = [ [ρ_U(g)]_{B_1}  0 ]
               [ 0  [ρ_W(g)]_{B_2} ]

Example. Let G = D_6. Then every irreducible complex representation has dimension at most 2. To show this, let ρ : G → GL(V) be an irreducible G-representation. Let r ∈ G be a (non-identity) rotation and s ∈ G a reflection. These generate D_6. Take an eigenvector v of ρ(r), say ρ(r)v = λv, with λ ≠ 0 (since ρ(r) is invertible, it cannot have zero eigenvalues). Let W = ⟨v, ρ(s)v⟩ ≤ V be the space spanned by the two vectors. We now check this is fixed by ρ. Firstly, we have

    ρ(s)ρ(s)v = ρ(e)v = v ∈ W,

and

    ρ(r)ρ(s)v = ρ(s)ρ(r^{-1})v = λ^{-1} ρ(s)v ∈ W.

Also, ρ(r)v = λv ∈ W and ρ(s)v ∈ W. So W is G-invariant. Since V is irreducible, we must have W = V. So V has dimension at most 2.

The reverse operation of decomposition is taking direct sums.

Definition (Direct sum). Let ρ : G → GL(V) and ρ′ : G → GL(V′) be representations of G. Then the direct sum of ρ, ρ′ is the representation ρ ⊕ ρ′ : G → GL(V ⊕ V′) given by

    (ρ ⊕ ρ′)(g)(v + v′) = ρ(g)v + ρ′(g)v′.

In terms of matrices, for matrix representations R : G → GL_n(F) and R′ : G → GL_{n′}(F), define R ⊕ R′ : G → GL_{n+n′}(F) by

    (R ⊕ R′)(g) = [ R(g)  0 ]
                  [ 0  R′(g) ]

The direct sum was easy to define. It turns out we can also multiply two representations, known as tensor products. However, to do this, we need to know what the tensor product of two vector spaces is. We will not do this yet.
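A quick numpy sketch (an illustration, not from the notes) of the block-diagonal matrix form of the direct sum, using two of the one-dimensional representations of C_4:

```python
import numpy as np

def direct_sum(R_g, S_g):
    """Block-diagonal matrix (R ⊕ S)(g) built from matrices R(g), S(g)."""
    n, m = R_g.shape[0], S_g.shape[0]
    out = np.zeros((n + m, n + m), dtype=complex)
    out[:n, :n] = R_g
    out[n:, n:] = S_g
    return out

# two 1-dimensional representations of C4 = <x>: x -> i and x -> -1
R = {p: np.array([[1j ** p]]) for p in range(4)}
S = {p: np.array([[(-1 + 0j) ** p]]) for p in range(4)}
T = {p: direct_sum(R[p], S[p]) for p in range(4)}

# the direct sum is again a homomorphism: T(x^a) T(x^b) = T(x^{a+b})
for a in range(4):
    for b in range(4):
        assert np.allclose(T[a] @ T[b], T[(a + b) % 4])
```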
3 Complete reducibility and Maschke's theorem
In representation theory, we would like to decompose a representation into sums of irreducible representations. Unfortunately, this is not always possible. When we can, we say the representation is completely reducible.

Definition (Completely reducible/semisimple representation). A representation ρ : G → GL(V) is completely reducible or semisimple if it is the direct sum of irreducible representations.

Clearly, irreducible implies completely reducible. Not all representations are completely reducible. An example is to be found on example sheet 1. These are in fact not too hard to find. For example, there are representations of Z over C that are not completely reducible, and also a non-completely-reducible representation of C_p over F_p. However, it turns out we have the following theorem:

Theorem. Every finite-dimensional representation V of a finite group over a field of characteristic 0 is completely reducible, namely, V ≅ V_1 ⊕ · · · ⊕ V_r is a direct sum of irreducible representations.

By induction, it suffices to prove the following:

Theorem (Maschke's theorem). Let G be a finite group, and ρ : G → GL(V) a representation over a finite-dimensional vector space V over a field F with char F = 0. If W is a G-subspace of V, then there exists a G-subspace U of V such that V = W ⊕ U.

We will prove this many times.

Proof. From linear algebra, we know W has a complementary subspace. Let W′ be any vector subspace complement of W in V, i.e. V = W ⊕ W′ as vector spaces. Let q : V → W be the projection of V onto W along W′, i.e. if v = w + w′ with w ∈ W, w′ ∈ W′, then q(v) = w.

The clever bit is to take this q and tweak it a little bit. Define

    q̄ : v ↦ (1/|G|) Σ_{g∈G} ρ(g) q(ρ(g^{-1}) v).

This is in some sense an averaging operator, averaging over what ρ(g) does. Here we need the field to have characteristic zero so that 1/|G| is well-defined. In fact, this theorem holds as long as char F ∤ |G|.

For simplicity of expression, we drop the ρ's, and simply write

    q̄ : v ↦ (1/|G|) Σ_{g∈G} g q(g^{-1} v).

We first claim that q̄ has image in W. This is true since for v ∈ V, q(g^{-1}v) ∈ W, and gW ≤ W. So this is a little bit like a projection.
Next, we claim that for w ∈ W, we have q̄(w) = w. This follows from the fact that q itself fixes W. Since W is G-invariant, we have g^{-1}w ∈ W for all w ∈ W. So we get

    q̄(w) = (1/|G|) Σ_{g∈G} g q(g^{-1} w) = (1/|G|) Σ_{g∈G} g g^{-1} w = (1/|G|) Σ_{g∈G} w = w.

Putting these together, this tells us q̄ is a projection onto W.

Finally, we claim that for h ∈ G, we have h q̄(v) = q̄(hv), i.e. it is invariant under the G-action. This follows easily from the definition:

    h q̄(v) = h · (1/|G|) Σ_{g∈G} g q(g^{-1} v)
            = (1/|G|) Σ_{g∈G} hg q(g^{-1} v)
            = (1/|G|) Σ_{g∈G} (hg) q((hg)^{-1} hv).

We now put g′ = hg. Since h is invertible, summing over all g is the same as summing over all g′. So we get

    h q̄(v) = (1/|G|) Σ_{g′∈G} g′ q(g′^{-1} (hv)) = q̄(hv).
We are pretty much done. We finally show that ker q̄ is G-invariant: if v ∈ ker q̄ and h ∈ G, then q̄(hv) = h q̄(v) = 0, so hv ∈ ker q̄. Thus

    V = im q̄ ⊕ ker q̄ = W ⊕ ker q̄

is a G-subspace decomposition.

The crux of the whole proof is the definition of q̄. Once we have that, everything else follows easily. Yet, for the whole proof to work, we need 1/|G| to exist, which in particular means G must be a finite group. There is no obvious way to generalize this to infinite groups. So let's try a different proof.

The second proof uses inner products, and hence we must take F = C. This can be generalized to infinite compact groups, as we will later see. Recall the definition of an inner product:

Definition (Hermitian inner product). For V a complex vector space, ⟨ · , · ⟩ is a Hermitian inner product if

(i) ⟨v, w⟩ = ⟨w, v⟩* (the complex conjugate)  (Hermitian)
(ii) ⟨v, λ_1 w_1 + λ_2 w_2⟩ = λ_1⟨v, w_1⟩ + λ_2⟨v, w_2⟩  (sesquilinear — linear in the second argument)
(iii) ⟨v, v⟩ > 0 if v ≠ 0  (positive definite)

Definition (G-invariant inner product). An inner product ⟨ · , · ⟩ is in addition G-invariant if

    ⟨gv, gw⟩ = ⟨v, w⟩.
Proposition. Let W be a G-invariant subspace of V, and suppose V has a G-invariant inner product. Then W^⊥ is also G-invariant.

Proof. We have to show that for all v ∈ W^⊥ and g ∈ G, we have gv ∈ W^⊥. This is not hard. We know v ∈ W^⊥ if and only if ⟨v, w⟩ = 0 for all w ∈ W. Thus, using the definition of G-invariance, for v ∈ W^⊥, we know ⟨gv, gw⟩ = 0 for all g ∈ G, w ∈ W. Thus for all w′ ∈ W, pick w = g^{-1}w′ ∈ W, and this shows ⟨gv, w′⟩ = 0. Hence gv ∈ W^⊥.

Hence if there is a G-invariant inner product on any complex G-space V, then we get another proof of Maschke's theorem.

Theorem (Weyl's unitary trick). Let ρ be a complex representation of a finite group G on the complex vector space V. Then there is a G-invariant Hermitian inner product on V.

Recall that the unitary group is defined by

    U(V) = {f ∈ GL(V) : ⟨f(u), f(v)⟩ = ⟨u, v⟩ for all u, v ∈ V} ≅ {A ∈ GL_n(C) : AA† = I} = U(n).

Then we have an easy corollary:

Corollary. Every finite subgroup of GL_n(C) is conjugate to a subgroup of U(n).

Proof (of the theorem). We start by defining an arbitrary inner product on V: take a basis e_1, ..., e_n, define (e_i, e_j) = δ_ij, and extend it sesquilinearly. Define a new inner product

    ⟨v, w⟩ = (1/|G|) Σ_{g∈G} (gv, gw).
We now check this is sesquilinear, positive definite and G-invariant. Sesquilinearity and positive definiteness are easy. So we just check G-invariance: putting g′ = gh, we have

    ⟨hv, hw⟩ = (1/|G|) Σ_{g∈G} ((gh)v, (gh)w) = (1/|G|) Σ_{g′∈G} (g′v, g′w) = ⟨v, w⟩.

Note that this trick also works for real representations.

Again, we had to take the factor 1/|G|. To generalize this to compact groups, we will later replace the sum by an integral, and 1/|G| by a volume element. This is fine since (g′v, g′w) is a complex number and we know how to integrate complex numbers. This cannot be easily done in the case of q̄.
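The averaging here is easy to carry out numerically. The following numpy sketch (an illustration, not part of the notes) averages the standard inner product over a non-unitary copy of C_4 inside GL_2(C) and checks that the result is G-invariant, mirroring the construction above:

```python
import numpy as np

# a non-unitary copy of C4: conjugate the 90-degree rotation by a shear
X = np.array([[1., 2.], [0., 1.]])
A = X @ np.array([[0., -1.], [1., 0.]]) @ np.linalg.inv(X)
G = [np.linalg.matrix_power(A, p) for p in range(4)]

# average the standard inner product (v, w) = v† w over the group:
# <v, w> = (1/|G|) Σ_g (gv, gw) = v† M w  with  M = (1/|G|) Σ_g g† g
M = sum(g.conj().T @ g for g in G) / len(G)

# G-invariance: <hv, hw> = <v, w> for all h, i.e. h† M h = M,
# so every element of G is unitary with respect to the new inner product
for h in G:
    assert np.allclose(h.conj().T @ M @ h, M)
```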
Recall we defined the group algebra of G to be the F-vector space FG = ⟨e_g : g ∈ G⟩, i.e. its basis is in one-to-one correspondence with the elements of G. There is a linear G-action defined in the obvious way: for h ∈ G, we define

    h · Σ_g a_g e_g = Σ_g a_g e_{hg} = Σ_{g′} a_{h^{-1}g′} e_{g′}.
This gives a representation of G.

Definition (Regular representation and regular module). The regular representation of a group G, written ρ_reg, is the natural action of G on FG. FG is called the regular module.

It is a nice faithful representation of dimension |G|. Far more importantly, it turns out that every irreducible representation of G is a subrepresentation of the regular representation.

Proposition. Let ρ be an irreducible representation of the finite group G over a field of characteristic 0. Then ρ is isomorphic to a subrepresentation of ρ_reg.

This will also follow from a more general result using character theory later.

Proof. Take ρ : G → GL(V) irreducible, and pick our favorite 0 ≠ v ∈ V. Now define θ : FG → V by

    Σ_g a_g e_g ↦ Σ_g a_g (gv).
It is not hard to see this is a G-homomorphism. We are now going to exploit the fact that V is irreducible. Since im θ is a non-zero G-subspace of V, we must have im θ = V. Also, ker θ is a G-subspace of FG. Now let W be the G-complement of ker θ in FG, which exists by Maschke's theorem. Then W ≤ FG is a G-subspace and

    FG = ker θ ⊕ W.

Then the isomorphism theorem gives

    W ≅ FG/ker θ ≅ im θ = V.

More generally, G doesn't have to act only on the vector space generated by itself. If G acts on any set, we can turn that set into a space acted on by G.

Definition (Permutation representation). Let F be a field, and let G act on a set X. Let FX = ⟨e_x : x ∈ X⟩, with a G-action given by

    g · Σ_x a_x e_x = Σ_x a_x e_{gx}.

So we have a G-space FX. The representation G → GL(FX) is the corresponding permutation representation.
4 Schur's lemma
The topic of this chapter is Schur's lemma, an easy yet extremely useful lemma in representation theory.

Theorem (Schur's lemma).

(i) Assume V and W are irreducible G-spaces over a field F. Then any G-homomorphism θ : V → W is either zero or an isomorphism.
(ii) If F is algebraically closed, and V is an irreducible G-space, then any G-endomorphism V → V is a scalar multiple of the identity map ι_V.

Proof.

(i) Let θ : V → W be a G-homomorphism between irreducibles. Then ker θ is a G-subspace of V, and since V is irreducible, either ker θ = 0 or ker θ = V. Similarly, im θ is a G-subspace of W, and as W is irreducible, we must have im θ = 0 or im θ = W. Hence either ker θ = V, in which case θ = 0, or ker θ = 0 and im θ = W, i.e. θ is a bijection.
(ii) Since F is algebraically closed, θ has an eigenvalue λ. Then θ − λι_V is a singular G-endomorphism of V. So by (i), it must be the zero map. So θ = λι_V.

Recall that the F-space Hom_G(V, W) is the space of all G-homomorphisms V → W. If V = W, we write End_G(V) for the G-endomorphisms of V.

Corollary. If V, W are irreducible complex G-spaces, then

    dim_C Hom_G(V, W) = 1 if V, W are G-isomorphic, and 0 otherwise.

Proof. If V and W are not isomorphic, then the only possible map between them is the zero map, by Schur's lemma. Otherwise, suppose V ≅ W and let θ_1, θ_2 ∈ Hom_G(V, W) be both non-zero. By Schur's lemma, they are isomorphisms, and hence invertible. So θ_2^{-1}θ_1 ∈ End_G(V). Thus θ_2^{-1}θ_1 = λι_V for some λ ∈ C. Thus θ_1 = λθ_2.

We have another less obvious corollary.

Corollary. If G is a finite group and has a faithful complex irreducible representation, then its center Z(G) is cyclic.

This is a useful result — it allows us to transfer our representation-theoretic knowledge (the existence of a faithful complex irreducible representation) to group-theoretic knowledge (the center of the group being cyclic). This will become increasingly common in the future, and is a good thing, since representations are easy and groups are hard.
The converse, however, is not true. For this, see example sheet 1, question 10.
Proof. Let ρ : G → GL(V) be a faithful irreducible complex representation, and let z ∈ Z(G), so zg = gz for all g ∈ G. Hence φ_z : v ↦ zv is a G-endomorphism on V, so by Schur's lemma it is multiplication by a scalar µ_z, say. Thus zv = µ_z v for all v ∈ V. Then the map

    σ : Z(G) → C^×,  z ↦ µ_z

is a representation of Z(G). Since ρ is faithful, so is σ. So Z(G) ≅ {µ_z : z ∈ Z(G)} is isomorphic to a finite subgroup of C^×, hence cyclic.

Corollary. The irreducible complex representations of a finite abelian group G are all 1-dimensional.

Proof. We can use the fact that commuting diagonalizable matrices are simultaneously diagonalizable. Thus for every irreducible V, we can pick some v ∈ V that is an eigenvector for each g ∈ G. Thus ⟨v⟩ is a G-subspace. As V is irreducible, we must have V = ⟨v⟩.

Alternatively, we can prove this in a representation-theoretic way. Let V be an irreducible complex representation. For each g ∈ G, the map

    θ_g : V → V,  v ↦ gv

is a G-endomorphism of V, since it commutes with the action of the other group elements. Since V is irreducible, θ_g = λ_g ι_V for some λ_g ∈ C. Thus gv = λ_g v for any g. So any one-dimensional subspace ⟨v⟩ is a G-subspace, and as V is irreducible, we must have V = ⟨v⟩.

Note that this result fails over R. For example, C_3 has two irreducible real representations, one of dimension 1 and one of dimension 2.

We can do something else. Recall that every finite abelian group G is isomorphic to a product of cyclic groups. In fact, it can be written as a product of C_{p^α} for various primes p and α ≥ 1, and the factors are uniquely determined up to order. This you already know from IB Groups, Rings and Modules. You might be born knowing it — it's such a fundamental fact of nature.

Proposition. The finite abelian group G = C_{n_1} × · · · × C_{n_r} has precisely |G| irreducible representations over C.

This is not a coincidence. We will later show that the number of irreducible representations is the number of conjugacy classes of the group.
In an abelian group, each conjugacy class is a singleton, and hence the result.

Proof. Write G = ⟨x1⟩ × · · · × ⟨xr⟩, where |xj| = nj. Any irreducible representation ρ must be one-dimensional. So we have ρ : G → C×.
Let ρ(1, · · · , xj, · · · , 1) = λj. Since ρ is a homomorphism, we must have λj^{nj} = 1. Therefore λj is an nj-th root of unity. Now the values (λ1, · · · , λr) determine ρ completely, namely

ρ(x1^{j1}, · · · , xr^{jr}) = λ1^{j1} · · · λr^{jr}.

Also, whenever λi is an ni-th root of unity for each i, the above formula gives a well-defined representation. So there is a one-to-one correspondence

ρ ←→ (λ1, · · · , λr), with λj^{nj} = 1.

Since for each j there are nj many nj-th roots of unity, it follows that there are |G| = n1 · · · nr many choices of the λi. Thus the proposition.

Example.
(i) Consider G = C4 = ⟨x⟩. The four 1-dimensional irreducible representations are given by
        1     x     x^2    x^3
ρ1      1     1     1      1
ρ2      1     i     −1     −i
ρ3      1     −1    1      −1
ρ4      1     −i    −1     i
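This parametrization by roots of unity is easy to check numerically. A minimal sketch in Python (the names `chars` and `inner` are ours, not from the notes): row j of the table above is k 7→ i^{jk}, and the rows are orthonormal under the averaged inner product ⟨χ, χ′⟩ = (1/|G|) Σ_g conj(χ(g)) χ′(g), which the course introduces later.

```python
# The four irreducible characters of C4 = <x>: rho_j(x^k) = i^(j*k),
# where i = 1j is a primitive 4th root of unity.
chars = [[1j ** (j * k) for k in range(4)] for j in range(4)]
# chars[0] is the trivial character; chars[1] is the row with rho(x) = i, etc.

def inner(chi1, chi2):
    # <chi1, chi2> = (1/|G|) * sum over g of conj(chi1(g)) * chi2(g)
    return sum(a.conjugate() * b for a, b in zip(chi1, chi2)) / 4

# The four characters are orthonormal:
for j in range(4):
    for k in range(4):
        expected = 1 if j == k else 0
        assert abs(inner(chars[j], chars[k]) - expected) < 1e-12
```

Running the same loop for C2 × C2 with ±1 entries verifies the Klein four table below in the same way.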
(ii) Consider the Klein four group G = V4 = ⟨x1⟩ × ⟨x2⟩. The irreducible representations are
        1     x1    x2     x1x2
ρ1      1     1     1      1
ρ2      1     1     −1     −1
ρ3      1     −1    1      −1
ρ4      1     −1    −1     1
These are also known as character tables, and we will spend quite a lot of time computing these for non-abelian groups.

Note that there is no "natural" one-to-one correspondence between the elements of G and the representations of G (for G finite abelian). If we choose an isomorphism G ≅ Cn1 × · · · × Cnr, then we can identify the two sets, but this depends on the choice of the isomorphism (while the decomposition is unique, we can pick a different generator of, say, Cn1, and get a different isomorphism to the same decomposition).

Isotypical decompositions

Recall that we proved we can decompose any G-representation into a sum of irreducible representations. Is this decomposition unique? If it isn't, can we say anything about, say, the size of the irreducible representations, or the number of factors in the decomposition?

We know any diagonalizable endomorphism α : V → V of a vector space V gives us a vector space decomposition

V = ⊕_λ V(λ),
where V(λ) = {v ∈ V : α(v) = λv}. This is canonical in that it depends on α alone, and nothing else. If V is moreover a G-representation, how does this tie in to the decomposition of V into irreducible representations? Let's do an example.

Example. Consider G = D6 ≅ S3 = ⟨r, s : r^3 = s^2 = 1, rs = sr^{-1}⟩. We have previously seen that each irreducible representation has dimension at most 2. We spot at least three irreducible representations:

1    trivial          r 7→ 1,   s 7→ 1
S    sign             r 7→ 1,   s 7→ −1
W    2-dimensional

The last representation is the action of D6 on R^2 in the natural way, i.e. the rotations and reflections of the plane corresponding to the symmetries of the triangle. It is helpful to view this as a complex representation in order to make the matrices look nice. The 2-dimensional representation (ρ, W) is defined by W = C^2, where r and s act as

ρ(r) = (ω   0  )      ρ(s) = (0   1)
       (0   ω^2),            (1   0),

and ω = e^{2πi/3} is a primitive third root of unity.

We will now show that these are indeed all the irreducible representations, by decomposing any representation into a sum of these. So let's decompose an arbitrary representation. Let (ρ′, V) be any complex representation of G. Since ρ′(r)^3 = id, the map ρ′(r) is diagonalizable, with eigenvalues among 1, ω, ω^2. We diagonalize ρ′(r), and then V splits as a vector space into the eigenspaces

V = V(1) ⊕ V(ω) ⊕ V(ω^2).

Since srs^{-1} = r^{-1}, we know ρ′(s) preserves V(1) and interchanges V(ω) and V(ω^2). Now we decompose V(1) into ρ′(s)-eigenspaces, with eigenvalues ±1. Since r acts trivially on these eigenspaces, V(1) splits into a sum of copies of the irreducible representations 1 and S.

For the remaining part, choose a basis v1, · · · , vn of V(ω), and let vj′ = ρ′(s)vj. Then ρ′(s) acts on the two-dimensional space ⟨vj, vj′⟩ as (0 1; 1 0), while ρ′(r) acts as (ω 0; 0 ω^2). This means V(ω) ⊕ V(ω^2) decomposes into n copies of W.

We did this for D6 by brute force. How do we generalize this? We first have the following lemma:

Lemma. Let V, V1, V2 be G-vector spaces over F. Then
(i) HomG(V, V1 ⊕ V2) ≅ HomG(V, V1) ⊕ HomG(V, V2),
(ii) HomG(V1 ⊕ V2, V) ≅ HomG(V1, V) ⊕ HomG(V2, V).
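The defining relations of D6, and the conjugation relation that makes ρ′(s) swap the ω- and ω²-eigenspaces of ρ′(r), are easy to verify for the matrices above. A minimal numerical sketch (the helper names `mul` and `close` are ours):

```python
import cmath

# The 2-dimensional representation W of D6 = S3 described above,
# with omega a primitive cube root of unity.
w = cmath.exp(2j * cmath.pi / 3)
R = [[w, 0], [0, w ** 2]]   # rho(r)
S = [[0, 1], [1, 0]]        # rho(s)

def mul(A, B):
    # 2x2 complex matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def close(A, B, eps=1e-12):
    return all(abs(A[i][j] - B[i][j]) < eps for i in range(2) for j in range(2))

I2 = [[1, 0], [0, 1]]
Rinv = [[w ** 2, 0], [0, w]]  # rho(r)^{-1} = rho(r^2)

# Defining relations: r^3 = s^2 = 1 and s r s^{-1} = r^{-1} (note s^{-1} = s).
assert close(mul(R, mul(R, R)), I2)
assert close(mul(S, S), I2)
assert close(mul(S, mul(R, S)), Rinv)
```

The last assertion is exactly why conjugating by ρ(s) turns the ω-eigenspace of ρ(r) into the ω²-eigenspace.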
Proof. The proof is to write down the obvious homomorphisms and inverses. Define the projection map πi : V1 ⊕ V2 → Vi, the G-linear projection onto Vi. Then we can define the homomorphism

HomG(V, V1 ⊕ V2) → HomG(V, V1) ⊕ HomG(V, V2)
φ 7→ (π1 φ, π2 φ).

Then the map (ψ1, ψ2) 7→ ψ1 + ψ2 is an inverse. For the second part, we have the homomorphism φ 7→ (φ|V1, φ|V2), with inverse (ψ1, ψ2) 7→ ψ1 π1 + ψ2 π2.

Lemma. Let F be an algebraically closed field, and V be a representation of G. Suppose V = ⊕_{i=1}^{n} Vi is its decomposition into irreducible components. Then for each irreducible representation S of G,

|{j : Vj ≅ S}| = dim HomG(S, V).

This tells us we can count the multiplicity of S in V by looking at the homomorphisms.

Proof. We induct on n. If n = 0, then V is the zero space, and the result is trivial. If n = 1, then V itself is irreducible, and by Schur's lemma, dim HomG(S, V) = 1 if V ≅ S, and 0 otherwise. Otherwise, for n > 1, we have

V = (⊕_{i=1}^{n−1} Vi) ⊕ Vn.
By the previous lemma, we know

dim HomG(S, (⊕_{i=1}^{n−1} Vi) ⊕ Vn) = dim HomG(S, ⊕_{i=1}^{n−1} Vi) + dim HomG(S, Vn).
The result then follows by induction.

Definition (Canonical decomposition/decomposition into isotypical components). A decomposition of V as ⊕_j Wj, where each Wj is (isomorphic to) nj copies of the irreducible Sj (with Sj ≇ Si for i ≠ j), is the canonical decomposition or decomposition into isotypical components.

For an algebraically closed field F, we know we must have nj = dim HomG(Sj, V), and hence this decomposition is well-defined.

We've finally finished all the introductory stuff. The course now begins.
5 Character theory
In topology, we want to classify spaces. To do so, we come up with invariants of spaces, like the number of holes. Then we know that a torus is not the same as a sphere. Here, we want to attach invariants to a representation ρ of a finite group G on V.

One thing we might try is to look at the matrix coefficients of ρ(g). However, this is bad, since these are highly basis-dependent. They are not a true invariant. We need to do something better than that.

Let F = C, and G be a finite group. Let ρ = ρV : G → GL(V) be a representation of G. The clue is to look at the characteristic polynomial of the matrix. The coefficients are functions of the eigenvalues — on one extreme, the determinant is the product of all eigenvalues; on the other extreme, the trace is the sum of all of them. Surprisingly, it is the trace that works. We don't have to bother ourselves with the other coefficients.

Definition (Character). The character of a representation ρ : G → GL(V), written χρ = χV = χ, is defined as χ(g) = tr ρ(g). We say ρ affords the character χ. Alternatively, the character is tr R(g), where R(g) is any matrix representing ρ(g) with respect to any basis.

Definition (Degree of character). The degree of χV is dim V.

Thus, χ is a function G → C.

Definition (Linear character). We say χ is linear if dim V = 1, in which case χ is a homomorphism G → C× = GL1(C).

Various properties of representations are inherited by characters.

Definition (Irreducible character). A character χ is irreducible if ρ is irreducible.

Definition (Faithful character). A character χ is faithful if ρ is faithful.

Definition (Trivial/principal character). A character χ is trivial or principal if ρ is the trivial representation. We write χ = 1G.

χ is a complete invariant, in the sense that it determines ρ up to isomorphism. This is staggering. We reduce the whole matrix to a single number — the trace — and yet we have not lost any information at all!
We will prove this later, after we have developed some useful properties of characters.

Theorem.
(i) χV(1) = dim V.
(ii) χV is a class function, namely it is conjugation invariant, i.e. χV(hgh^{-1}) = χV(g) for all g, h ∈ G. Thus χV is constant on conjugacy classes.
(iii) χV(g^{-1}) = χ̄V(g).
(iv) For two representations V, W, we have χ_{V⊕W} = χV + χW.

These results, despite being rather easy to prove, are very useful, since they save us a lot of work when computing the characters of representations.

Proof.
(i) Obvious, since ρV(1) = idV.
(ii) Let Rg be the matrix representing g. Then

χ(hgh^{-1}) = tr(Rh Rg Rh^{-1}) = tr(Rg) = χ(g),

as we know from linear algebra.
(iii) Since g ∈ G has finite order, we know ρ(g) is represented by a diagonal matrix

Rg = diag(λ1, · · · , λn),

and χ(g) = Σ λi. Now g^{-1} is represented by

R_{g^{-1}} = diag(λ1^{-1}, · · · , λn^{-1}).

Noting that each λi is a root of unity, hence |λi| = 1, we know

χ(g^{-1}) = Σ λi^{-1} = Σ λ̄i = χ̄(g).
(iv) Suppose V = V1 ⊕ V2, with ρ : G → GL(V) splitting into ρi : G → GL(Vi). Pick a basis Bi for Vi, and let B = B1 ∪ B2. Then with respect to B,

[ρ(g)]B = ([ρ1(g)]B1      0     )
          (    0      [ρ2(g)]B2).

So χ(g) = tr(ρ(g)) = tr(ρ1(g)) + tr(ρ2(g)) = χ1(g) + χ2(g).

We will see later that if we take characters χ1, χ2 of G, then χ1χ2 is also a character of G. This uses the notion of tensor products, which we will do later.

Lemma. Let ρ : G → GL(V) be a complex representation affording the character χ. Then |χ(g)| ≤ χ(1), with equality if and only if ρ(g) = λI for some λ ∈ C, a root of unity. Moreover, χ(g) = χ(1) if and only if g ∈ ker ρ.
Proof. Fix g, and pick a basis of eigenvectors of ρ(g). Then the matrix of ρ(g) is diagonal, say ρ(g) = diag(λ1, · · · , λn). Hence

|χ(g)| = |Σ λi| ≤ Σ |λi| = Σ 1 = dim V = χ(1).

In the triangle inequality, we have equality if and only if all the λi's are equal, to λ, say. So ρ(g) = λI. Since all the λi's are roots of unity, so is λ. And if χ(g) = χ(1), then since ρ(g) = λI, taking the trace gives χ(g) = λχ(1). So λ = 1, i.e. ρ(g) = I. So g ∈ ker ρ.

The following lemma allows us to generate new characters from old.

Lemma.
(i) If χ is a complex (irreducible) character of G, then so is χ̄.
(ii) If χ is a complex (irreducible) character of G, then so is εχ for any linear (1-dimensional) character ε.

Proof.
(i) If R : G → GLn(C) is a complex matrix representation, then so is R̄ : G → GLn(C), where g 7→ R̄(g), the entrywise complex conjugate of R(g). The character of R̄ is χ̄.
(ii) Similarly, R′ : g 7→ ε(g)R(g) for g ∈ G is a representation with character εχ.

It is left as an exercise for the reader to check the details.

We now have to justify why we care about characters. We said χ was a complete invariant, in that two representations are isomorphic if and only if they have the same character. Before we prove this, we first need some definitions.

Definition (Space of class functions). Define the complex space of class functions of G to be

C(G) = {f : G → C : f(hgh^{-1}) = f(g) for all h, g ∈ G}.

This is a vector space under f1 + f2 : g 7→ f1(g) + f2(g) and λf : g 7→ λf(g).

Definition (Class number). The class number k = k(G) is the number of conjugacy classes of G.

We can list the conjugacy classes as C1, · · · , Ck. Without loss of generality, we let C1 = {1}. We choose g1, g2, · · · , gk to be representatives of the conjugacy classes. Note also that dim C(G) = k, since the characteristic functions δj of the classes form a basis, where

δj(g) = 1 if g ∈ Cj, and 0 if g ∉ Cj.
We define a Hermitian inner product on C(G) by

⟨f, f′⟩ = (1/|G|) Σ_{g∈G} f̄(g) f′(g)
        = (1/|G|) Σ_{j=1}^{k} |Cj| f̄(gj) f′(gj).

By the orbit-stabilizer theorem, we can write this as

        = Σ_{j=1}^{k} (1/|CG(gj)|) f̄(gj) f′(gj).

In particular, for characters,

⟨χ, χ′⟩ = Σ_{j=1}^{k} (1/|CG(gj)|) χ(gj^{-1}) χ′(gj).
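The equivalence of the element-wise and class-weighted formulas is easy to check on an example. A sketch in Python for D6, with the class data hard-coded (the values of χW below are the traces of the W-matrices given in the previous section; the variable names are ours):

```python
# chi_W on the six elements of D6, listed as 1, r, r^2, s, sr, sr^2
chi_W = [2, -1, -1, 0, 0, 0]

# (1) average over the whole group; chi_W is real-valued here,
# so conjugation can be dropped
ip_group = sum(x * x for x in chi_W) / 6

# (2) sum over the conjugacy classes {1}, {r, r^2}, {s, sr, sr^2},
# weighted by 1/|C_G(g_j)| with centralizer orders 6, 3, 2
values = [2, -1, 0]
cent = [6, 3, 2]
ip_classes = sum(v * v / c for v, c in zip(values, cent))

assert abs(ip_group - ip_classes) < 1e-12
```

Both formulas give 1, which (by the irreducibility criterion proved below) confirms that W is irreducible.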
It should be clear (especially using the original formula) that ⟨χ, χ′⟩ = ⟨χ′, χ⟩. So when restricted to characters, this is a real symmetric form.

The main result is the following theorem:

Theorem (Completeness of characters). The complex irreducible characters of G form an orthonormal basis of C(G). Namely,

(i) If ρ : G → GL(V) and ρ′ : G → GL(V′) are two complex irreducible representations affording characters χ, χ′ respectively, then

⟨χ, χ′⟩ = 1 if ρ and ρ′ are isomorphic representations, and 0 otherwise.

This is the (row) orthogonality of characters.

(ii) Each class function of G can be expressed as a linear combination of irreducible characters of G.

The proof is deferred. We first look at some corollaries.

Corollary. Complex representations of finite groups are characterised by their characters.

Proof. Let ρ : G → GL(V) afford the character χ. We know we can write ρ = m1 ρ1 ⊕ · · · ⊕ mk ρk, where ρ1, · · · , ρk are (distinct) irreducible and the mj ≥ 0 are the multiplicities. Then we have

χ = m1 χ1 + · · · + mk χk,

where χj is afforded by ρj. Then by orthogonality, mj = ⟨χ, χj⟩. So we can obtain the multiplicity of each ρj in ρ just by looking at the inner products of the characters.
This is not true for infinite groups. For example, if G = Z, then the representations

1 7→ (1  0)        1 7→ (1  1)
     (0  1),            (0  1)

are non-isomorphic, but both have character 2.

Corollary (Irreducibility criterion). If ρ : G → GL(V) is a complex representation of G affording the character χ, then ρ is irreducible if and only if ⟨χ, χ⟩ = 1.

Proof. If ρ is irreducible, then orthogonality says ⟨χ, χ⟩ = 1. For the other direction, suppose ⟨χ, χ⟩ = 1. We use complete reducibility to write

χ = Σ mj χj,

with the χj irreducible and mj ≥ 0 the multiplicities. Then by orthogonality,

⟨χ, χ⟩ = Σ mj^2.

But ⟨χ, χ⟩ = 1. So exactly one mj is 1, while the others are all zero, and χ = χj. So χ is irreducible.

Theorem. Let ρ1, · · · , ρk be the irreducible complex representations of G, and let their dimensions be n1, · · · , nk. Then

|G| = Σ ni^2.

Recall that for abelian groups, each irreducible representation has dimension 1, and there are |G| of them. So this is trivially satisfied.

Proof. Recall that ρreg : G → GL(CG), given by G acting on itself by multiplication, is the regular representation of G, of dimension |G|. Let its character be πreg, the regular character of G.

First note that πreg(1) = |G|, and πreg(h) = 0 if h ≠ 1. The first part is obvious, and the second is easy to see, since the matrix of ρreg(h) has only 0s along the diagonal.

Next, we decompose πreg as

πreg = Σ aj χj.

We now want to find the aj. Since πreg is real and supported at the identity, we have

aj = ⟨πreg, χj⟩ = (1/|G|) Σ_{g∈G} πreg(g) χj(g) = (1/|G|) · |G| χj(1) = χj(1).

Then we get

|G| = πreg(1) = Σ aj χj(1) = Σ χj(1)^2 = Σ nj^2.
From this proof, we also see that each irreducible representation is a subrepresentation of the regular representation.
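The decomposition of the regular character can be checked by hand for D6: each irreducible appears with multiplicity equal to its degree, and summing χj(1)·χj recovers the character that is |G| at the identity and 0 elsewhere. A sketch (the character values, read off as traces of the representations found earlier, are listed on class representatives 1, r, s):

```python
# Irreducible characters of D6 on class representatives 1, r, s
chi = {
    "trivial": [1, 1, 1],
    "sign":    [1, 1, -1],
    "W":       [2, -1, 0],
}

# pi_reg = sum of chi_j(1) * chi_j should be |G| at 1 and 0 elsewhere
pi_reg = [sum(c[0] * c[i] for c in chi.values()) for i in range(3)]
assert pi_reg == [6, 0, 0]

# |G| = sum of the squared degrees
assert sum(c[0] ** 2 for c in chi.values()) == 6
```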
Corollary. The number of irreducible characters of G (up to equivalence) is k, the number of conjugacy classes.

Proof. The irreducible characters and the characteristic functions of the conjugacy classes are both bases of C(G).

Corollary. Two elements g1, g2 are conjugate if and only if χ(g1) = χ(g2) for all irreducible characters χ of G.

Proof. If g1, g2 are conjugate, since characters are class functions, we must have χ(g1) = χ(g2). For the other direction, let δ be the characteristic function of the class of g1. Then since δ is a class function, we can write

δ = Σ mj χj,

where the χj are the irreducible characters of G. Then

δ(g2) = Σ mj χj(g2) = Σ mj χj(g1) = δ(g1) = 1.

So g2 is in the same conjugacy class as g1.

Before we move on to the proof of the completeness of characters, we first make a few more definitions.

Definition (Character table). The character table of G is the k × k matrix X = (χi(gj)), where χ1 = 1G, χ2, · · · , χk are the irreducible characters of G, and C1 = {1}, C2, · · · , Ck are the conjugacy classes, with gj ∈ Cj.

We seldom think of this as a matrix, but as an actual table, as we will see below. We will spend the next few lectures coming up with techniques to compute these character tables.

Example. Consider G = D6 ≅ S3 = ⟨r, s | r^3 = s^2 = 1, srs^{-1} = r^{-1}⟩. As in all computations, the first thing to do is to find the conjugacy classes. This is not too hard. We have

C1 = {1},
C2 = {s, sr, sr^2},
C3 = {r, r^{-1}}.
Alternatively, we can view this as S3 and use the fact that two elements are conjugate if and only if they have the same cycle type.

We have found the three irreducible representations: 1, the trivial representation; S, the sign; and W, the two-dimensional representation. In W, each reflection sr^j acts by a matrix with eigenvalues ±1. So the sum of the eigenvalues is 0, hence χW(sr^j) = 0. It is also not hard to see that

r^m 7→ (cos(2mπ/3)  −sin(2mπ/3))
       (sin(2mπ/3)   cos(2mπ/3)),

so χW(r^m) = 2cos(2mπ/3) = −1 for m = 1, 2.

Fortunately, after developing some theory, we will not need to find all the irreducible representations in order to compute the character table.
26
5
Character theory
II Representation Theory
        1     C2    C3
1       1     1     1
S       1     −1    1
χW      2     0     −1
We see that the sum of the squares of the first column is 1^2 + 1^2 + 2^2 = 6 = |D6|, as expected. We can also check that W is genuinely an irreducible representation: the centralizers of elements of C1, C2 and C3 have sizes 6, 2 and 3 respectively, so the inner product is

⟨χW, χW⟩ = 2^2/6 + 0^2/2 + (−1)^2/3 = 1,

as expected. So we now need to prove orthogonality.
6 Proof of orthogonality
We will do the proof in parts.

Theorem (Row orthogonality relations). If ρ : G → GL(V) and ρ′ : G → GL(V′) are two complex irreducible representations affording characters χ, χ′ respectively, then

⟨χ, χ′⟩ = 1 if ρ and ρ′ are isomorphic representations, and 0 otherwise.

Proof. We fix a basis of V and of V′. Write R(g), R′(g) for the matrices of ρ(g) and ρ′(g) with respect to these bases respectively. Then by definition, we have

⟨χ′, χ⟩ = (1/|G|) Σ_{g∈G} χ′(g^{-1}) χ(g)
        = (1/|G|) Σ_{g∈G} Σ_{i,j} R′(g^{-1})_{ii} R(g)_{jj},

where i runs over 1, · · · , dim V′ and j over 1, · · · , dim V.

For any linear map φ : V → V′, we define a new map by averaging over ρ′ and ρ:

φ̃ : V → V′,  v 7→ (1/|G|) Σ_{g∈G} ρ′(g^{-1}) φ ρ(g) v.

We first check φ̃ is a G-homomorphism — if h ∈ G, we need to show ρ′(h^{-1}) φ̃ ρ(h)(v) = φ̃(v). We have

ρ′(h^{-1}) φ̃ ρ(h)(v) = (1/|G|) Σ_{g∈G} ρ′((gh)^{-1}) φ ρ(gh) v
                     = (1/|G|) Σ_{g′∈G} ρ′(g′^{-1}) φ ρ(g′) v
                     = φ̃(v).

(i) We first consider the case where ρ and ρ′ are not isomorphic. Then by Schur's lemma, we must have φ̃ = 0 for any linear φ : V → V′. We now pick a very nice φ, where everything disappears. We let φ = ε_{αβ}, the operator having matrix E_{αβ}, with entries 0 everywhere except a 1 in the (α, β) position. Then ε̃_{αβ} = 0. So for each i, j, we have

(1/|G|) Σ_{g∈G} (R′(g^{-1}) E_{αβ} R(g))_{ij} = 0.

Using our choice of ε_{αβ}, we get

(1/|G|) Σ_{g∈G} R′(g^{-1})_{iα} R(g)_{βj} = 0
for all i, j. We now pick α = i and β = j. Then

(1/|G|) Σ_{g∈G} R′(g^{-1})_{ii} R(g)_{jj} = 0.

Summing this over all i and j gives ⟨χ′, χ⟩ = 0.

(ii) Now suppose ρ, ρ′ are isomorphic. So we might as well take χ = χ′, V = V′ and ρ = ρ′. If φ : V → V is linear, then φ̃ ∈ EndG(V). We first claim that tr φ̃ = tr φ. To see this, we have

tr φ̃ = (1/|G|) Σ_{g∈G} tr(ρ(g^{-1}) φ ρ(g)) = (1/|G|) Σ_{g∈G} tr φ = tr φ,

using the fact that trace is invariant under conjugation (and ρ(g^{-1}) = ρ(g)^{-1}, since ρ is a group homomorphism).

By Schur's lemma, we know φ̃ = λιV for some λ ∈ C (which depends on φ). Then if n = dim V, we get

λ = (1/n) tr φ.

Let φ = ε_{αβ}. Then tr φ = δ_{αβ}. Hence

ε̃_{αβ} = (1/|G|) Σ_g ρ(g^{-1}) ε_{αβ} ρ(g) = (1/n) δ_{αβ} ι.

In terms of matrices, taking the (i, j)-th entry gives

(1/|G|) Σ_g R(g^{-1})_{iα} R(g)_{βj} = (1/n) δ_{αβ} δ_{ij}.

We now put α = i and β = j. Then we are left with

(1/|G|) Σ_g R(g^{-1})_{ii} R(g)_{jj} = (1/n) δ_{ij}.

Summing over all i and j, we get ⟨χ, χ⟩ = 1.

After learning about tensor products and duals later on, one can give a shorter proof of this result as follows:

Alternative proof. Consider two representation spaces V and W. Then

⟨χW, χV⟩ =
(1/|G|) Σ_g χ̄W(g) χV(g) = (1/|G|) Σ_g χ_{V⊗W*}(g).
We notice that there is a natural isomorphism V ⊗ W* ≅ Hom(W, V), and the action of g on this space is by conjugation. Thus, a G-invariant element is just a G-homomorphism W → V. Thus, we can decompose Hom(W, V) = HomG(W, V) ⊕ U for some G-space U, where U has no non-zero G-invariant element. Hence in the decomposition of Hom(W, V) into irreducibles, we know there are exactly
dim HomG(W, V) copies of the trivial representation. By Schur's lemma, this number is 1 if V ≅ W, and 0 if V ≇ W. So it suffices to show that if χ is a non-trivial irreducible character, then

Σ_{g∈G} χ(g) = 0.
But if ρ affords χ, then any element in the image of Σ_{g∈G} ρ(g) is fixed by G. By irreducibility, the image must be trivial. So Σ_{g∈G} ρ(g) = 0, and hence Σ_{g∈G} χ(g) = 0.

What we have done is show the orthogonality of the rows. There is a similar relation for the columns:

Theorem (Column orthogonality relations). We have

Σ_{i=1}^{k} χ̄i(gj) χi(gℓ) = δ_{jℓ} |CG(gℓ)|.

This is analogous to the fact that if a matrix has orthonormal rows, then it also has orthonormal columns. We get the extra factor of |CG(gℓ)| because of the way we count. This has an easy corollary, which we have already shown previously using the regular representation:

Corollary. |G| = Σ_{i=1}^{k} χi(1)^2.
Proof of column orthogonality. Consider the character table X = (χi(gj)). We know

δ_{ij} = ⟨χi, χj⟩ = Σ_{ℓ=1}^{k} (1/|CG(gℓ)|) χ̄i(gℓ) χj(gℓ).

In matrix form, this says

X̄ D^{-1} X^T = I_{k×k},

where

D = diag(|CG(g1)|, · · · , |CG(gk)|).

Taking complex conjugates (D is real), we get X D^{-1} X̄^T = I. Since X is square, it follows that D^{-1} X̄^T is the inverse of X. So X̄^T X = D, which is exactly the theorem.

The proof requires that X is square, i.e. that there are k many irreducible representations. So we need to prove the last part of the completeness of characters.

Theorem. Each class function of G can be expressed as a linear combination of irreducible characters of G.
Proof. We list all the irreducible characters χ1, · · · , χℓ of G. Note that we do not yet know that the number of irreducibles is k — this is essentially what we have to prove here. We claim these generate C(G), the space of class functions.

Now recall that C(G) has an inner product. So it suffices to show that the orthogonal complement of the span ⟨χ1, · · · , χℓ⟩ in C(G) is trivial. To see this, assume f ∈ C(G) satisfies ⟨f, χj⟩ = 0 for all irreducible χj. Let ρ : G → GL(V) be an irreducible representation affording χ ∈ {χ1, · · · , χℓ}. Then ⟨f, χ⟩ = 0. Consider the function

φ = (1/|G|) Σ_g f̄(g) ρ(g) : V → V.

For any h ∈ G, we can compute

ρ(h)^{-1} φ ρ(h) = (1/|G|) Σ_g f̄(g) ρ(h^{-1}gh) = (1/|G|) Σ_g f̄(h^{-1}gh) ρ(h^{-1}gh) = φ,

using the fact that f̄ is a class function. So φ is a G-homomorphism. So as ρ is irreducible, Schur's lemma says it must be of the form λιV for some λ ∈ C. Now we take the trace, with n = dim V:

nλ = tr((1/|G|) Σ_g f̄(g) ρ(g)) = (1/|G|) Σ_g f̄(g) χ(g) = ⟨f, χ⟩ = 0.

So λ = 0, i.e. Σ_g f̄(g) ρ(g) = 0, the zero endomorphism on V. This is valid for any irreducible representation, and hence for every representation, by complete reducibility. In particular, take ρ = ρreg, where ρreg(g) : e1 7→ eg for each g ∈ G. Hence

Σ_g f̄(g) ρreg(g) : e1 7→ Σ_g f̄(g) eg.

Since this map is zero, it follows that Σ_g f̄(g) eg = 0. Since the eg's are linearly independent, we must have f̄(g) = 0, hence f(g) = 0, for all g ∈ G, i.e. f = 0.
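With both orthogonality relations now established, they are easy to sanity-check numerically on the D6 ≅ S3 character table computed in the previous section. A sketch in Python (X has rows for the trivial character, the sign and χW, and columns for the classes of 1, the reflections and the rotations; the names are ours):

```python
X = [
    [1, 1, 1],    # trivial
    [1, -1, 1],   # sign
    [2, 0, -1],   # chi_W
]
cent = [6, 2, 3]  # |C_G(g_j)| for the classes of 1, reflections, rotations
k = 3

# Row orthogonality: sum_j (1/cent[j]) * conj(X[a][j]) * X[b][j] = delta_ab.
# The table is real here, so conjugation can be dropped.
for a in range(k):
    for b in range(k):
        s = sum(X[a][j] * X[b][j] / cent[j] for j in range(k))
        assert abs(s - (1 if a == b else 0)) < 1e-12

# Column orthogonality: sum_i conj(X[i][j]) * X[i][l] = delta_jl * cent[l]
for j in range(k):
    for l in range(k):
        s = sum(X[i][j] * X[i][l] for i in range(k))
        assert abs(s - (cent[l] if j == l else 0)) < 1e-12
```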
7 Permutation representations
In this chapter, we are going to study permutation representations in a bit more detail, since we can find explicit formulas for their characters. This is particularly useful if we are dealing with the symmetric group.

Let G be a finite group acting on a finite set X = {x1, · · · , xn} (sometimes known as a G-set). We define CX to be the complex vector space with basis {e_{x1}, · · · , e_{xn}}. More explicitly,

CX = {Σ_j aj e_{xj} : aj ∈ C}.
There is a corresponding permutation representation

ρX : G → GL(CX), g 7→ ρX(g), where ρX(g) : e_{xj} 7→ e_{g xj},

extended linearly. We say ρX is the representation corresponding to the action of G on X.

This is a generalization of the group algebra — if we let G act on itself by left multiplication, then this is in fact the group algebra.

The corresponding matrices ρX(g) with respect to the basis {ex}_{x∈X} are permutation matrices — 0 everywhere except for one 1 in each row and column. In particular, (ρX(g))_{ij} = 1 precisely when g xj = xi. So the only non-zero diagonal entries of ρX(g) are those i such that xi is fixed by g. Thus we have

Definition (Permutation character). The permutation character πX is

πX(g) = |fix(g)| = |{x ∈ X : gx = x}|.

Lemma. πX always contains the trivial character 1G (when decomposed in the basis of irreducible characters). In particular, span{e_{x1} + · · · + e_{xn}} is a trivial G-subspace of CX, with G-invariant complement {Σ_x ax ex : Σ_x ax = 0}.

Lemma. ⟨πX, 1⟩, which is the multiplicity of 1G in πX, is the number of orbits of G on X.

Proof. We write X as the disjoint union of orbits, X = X1 ∪ · · · ∪ Xℓ. Then it is clear that the permutation representation on X is just the sum of the permutation representations on the Xi, i.e.

πX = πX1 + · · · + πXℓ,

where πXj is the permutation character of G on Xj. So to prove the lemma, it is enough to consider the case where the action is transitive, i.e. there is just one orbit.

So suppose G acts transitively on X. We want to show ⟨πX, 1⟩ = 1. By
definition, we have

⟨πX, 1⟩ = (1/|G|) Σ_g πX(g)
        = (1/|G|) |{(g, x) ∈ G × X : gx = x}|
        = (1/|G|) Σ_{x∈X} |Gx|,

where Gx is the stabilizer of x. By the orbit-stabilizer theorem, |Gx| |X| = |G|. So we can write this as

        = (1/|G|) Σ_{x∈X} |G|/|X|
        = (1/|G|) · |X| · |G|/|X|
        = 1.
So done.

Lemma. Let G act on the sets X1, X2. Then G acts on X1 × X2 by g(x1, x2) = (gx1, gx2). Then the character πX1×X2 = πX1 πX2, and so

⟨πX1, πX2⟩ = number of orbits of G on X1 × X2.

Proof. We know πX1×X2(g) is the number of pairs (x1, x2) ∈ X1 × X2 fixed by g. This is exactly the number of elements of X1 fixed by g times the number of elements of X2 fixed by g. So we have

πX1×X2(g) = πX1(g) πX2(g).

Then, using the fact that πX1, πX2 are real, we get

⟨πX1, πX2⟩ = (1/|G|) Σ_g πX1(g) πX2(g)
           = (1/|G|) Σ_g πX1(g) πX2(g) 1G(g)
           = ⟨πX1 πX2, 1⟩
           = ⟨πX1×X2, 1⟩.

So the previous lemma gives the desired result.

Recall that a 2-transitive action is an action that is transitive on pairs, i.e. it can send any pair to any other pair, as long as the elements in each pair are distinct (it can never send (x, x) to (x′, x″) with x′ ≠ x″, since g(x, x) = (gx, gx)). More precisely,
Definition (2-transitive). Let G act on X, with |X| > 2. Then G is 2-transitive on X if G has exactly two orbits on X × X, namely {(x, x) : x ∈ X} and {(x1, x2) : xi ∈ X, x1 ≠ x2}.

Then we have the following lemma:

Lemma. Let G act on X, with |X| > 2. Then πX = 1G + χ, with χ irreducible, if and only if G is 2-transitive on X.

Proof. We know πX = m1 1G + m2 χ2 + · · · + mℓ χℓ, with 1G, χ2, · · · , χℓ distinct irreducible characters and the mi ∈ N non-zero. Then by orthogonality,

⟨πX, πX⟩ = Σ_{i=1}^{ℓ} mi^2.

Since ⟨πX, πX⟩ is the number of orbits of G on X × X, we know G is 2-transitive on X if and only if ℓ = 2 and m1 = m2 = 1.

This is useful if we want to understand the symmetric group, since it is always 2-transitive (modulo the silly case of S1).

Example. Sn acting on X = {1, · · · , n} is 2-transitive for n ≥ 2. So πX = 1 + χ, with χ irreducible of degree n − 1 (since πX has degree n). This works similarly for An, whenever n ≥ 4.

This tells us we can find an irreducible character of Sn by finding πX and then subtracting 1G off.

Example. Consider G = S4. The conjugacy classes correspond to the different cycle types. We already know three characters — the trivial one, the sign, and πX − 1G.
                  1     3            8         6            6
                  1     (1 2)(3 4)   (1 2 3)   (1 2 3 4)    (1 2)
χ1 = trivial      1     1            1         1            1
χ2 = sign         1     1            1         −1           −1
χ3 = πX − 1G      3     −1           0         −1           1
χ4
χ5
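The claim that χ3 = πX − 1G is irreducible, and the 2-transitivity behind it, can be verified by brute force over S4. A sketch using the standard library (the names are ours):

```python
from itertools import permutations

G = list(permutations(range(4)))  # S4 acting on X = {0, 1, 2, 3}

def fix(g):
    # pi_X(g) = number of fixed points of g
    return sum(1 for i in range(4) if g[i] == i)

# <pi_X, pi_X> = number of orbits of S4 on X x X = 2, i.e. 2-transitivity
ip = sum(fix(g) ** 2 for g in G) / len(G)
assert ip == 2

# Hence pi_X = 1 + chi with chi irreducible: <chi, chi> = 1
ip_chi = sum((fix(g) - 1) ** 2 for g in G) / len(G)
assert ip_chi == 1
```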
We know that the product of a 1-dimensional irreducible character with an irreducible character is another irreducible character, as shown on the first example sheet. So we can take χ4 = χ3 χ2, and we get

                  1     3            8         6            6
                  1     (1 2)(3 4)   (1 2 3)   (1 2 3 4)    (1 2)
χ1 = trivial      1     1            1         1            1
χ2 = sign         1     1            1         −1           −1
χ3 = πX − 1G      3     −1           0         −1           1
χ4 = χ3 χ2        3     −1           0         1            −1
χ5
For the last representation, we can find its dimension by computing

24 − (1^2 + 1^2 + 3^2 + 3^2) = 4 = 2^2.

So it has dimension 2. To obtain the rest of χ5, we can use column orthogonality — for example, let the entry of χ5 in the second column be x. Then column orthogonality (between the first two columns) says

1 · 1 + 1 · 1 + 3 · (−1) + 3 · (−1) + 2x = 0.

So x = 2. In the end, we find
                  1     3            8         6            6
                  1     (1 2)(3 4)   (1 2 3)   (1 2 3 4)    (1 2)
χ1 = trivial      1     1            1         1            1
χ2 = sign         1     1            1         −1           −1
χ3 = πX − 1G      3     −1           0         −1           1
χ4 = χ3 χ2        3     −1           0         1            −1
χ5                2     2            −1        0            0
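As a sanity check, the completed table satisfies row orthogonality (weighted by the class sizes 1, 3, 8, 6, 6) and the sum-of-squared-degrees formula; a brute-force verification:

```python
# Rows: chi_1, ..., chi_5; columns: classes of 1, (12)(34), (123), (1234), (12)
X = [
    [1,  1,  1,  1,  1],
    [1,  1,  1, -1, -1],
    [3, -1,  0, -1,  1],
    [3, -1,  0,  1, -1],
    [2,  2, -1,  0,  0],
]
sizes = [1, 3, 8, 6, 6]
order = sum(sizes)  # |S4| = 24

# Row orthogonality: (1/|G|) sum_j |C_j| * X[a][j] * X[b][j] = delta_ab
# (the table is real, so conjugation can be dropped)
for a in range(5):
    for b in range(5):
        s = sum(sizes[j] * X[a][j] * X[b][j] for j in range(5)) / order
        assert s == (1 if a == b else 0)

# |G| = sum of the squared degrees
assert sum(row[0] ** 2 for row in X) == 24
```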
Alternatively, we have previously shown that

πreg = χ1 + χ2 + 3χ3 + 3χ4 + 2χ5.

By computing πreg directly, we can find χ5. Finally, we can also obtain χ5 by observing S4/V4 ≅ S3 and "lifting" characters. We will do this in the next chapter. In fact, this χ5 is the degree-2 irreducible representation of S3 ≅ D6 lifted up.

We can also do similar things with the alternating group. The issue, of course, is that the conjugacy classes of the symmetric group may split when we move to the alternating group. Suppose g ∈ An. Writing CSn(g) and CAn(g) for the centralizers of g, the sizes of its conjugacy classes are

|ccl_{Sn}(g)| = |Sn : CSn(g)|,  |ccl_{An}(g)| = |An : CAn(g)|.

We know CSn(g) ⊇ CAn(g) = CSn(g) ∩ An, but we need not have equality, since the elements needed to conjugate g to another element of its Sn-class might not lie in An. For example, consider σ = (1 2 3) ∈ A3. We have ccl_{A3}(σ) = {σ}, while ccl_{S3}(σ) = {σ, σ^{-1}}.

We know An is an index 2 subgroup of Sn. So CAn(g) ⊆ CSn(g) has index 1 or 2. If the index is 1, i.e. CAn(g) = CSn(g), then |ccl_{An}(g)| = (1/2)|ccl_{Sn}(g)|, and the Sn-class splits into two An-classes. Otherwise, the class sizes are equal, and the Sn-class remains a single An-class. A useful criterion for determining which case happens is the following:

Lemma. Let g ∈ An, n > 1. If g commutes with some odd permutation in Sn, then ccl_{Sn}(g) = ccl_{An}(g). Otherwise, ccl_{Sn}(g) splits into two conjugacy classes in An of equal size.

This is quite useful for doing the examples on the example sheet, but we will not go through any more examples here.
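The splitting criterion can be checked computationally. For instance, a 3-cycle in A4 commutes with no odd permutation, so its S4-class of size 8 should split into two A4-classes of size 4. A brute-force sketch with permutations as tuples (all helper names are ours):

```python
from itertools import permutations

def compose(g, h):
    # (g h)(i) = g(h(i)), permutations as tuples
    return tuple(g[h[i]] for i in range(len(h)))

def inverse(g):
    res = [0] * len(g)
    for i, gi in enumerate(g):
        res[gi] = i
    return tuple(res)

def sign(g):
    # parity via counting inversions
    inv = sum(1 for i in range(len(g)) for j in range(i + 1, len(g)) if g[i] > g[j])
    return 1 if inv % 2 == 0 else -1

S4 = list(permutations(range(4)))
A4 = [g for g in S4 if sign(g) == 1]

sigma = (1, 2, 0, 3)  # the 3-cycle (0 1 2)

# sigma commutes with no odd permutation ...
assert not any(compose(g, sigma) == compose(sigma, g) for g in S4 if sign(g) == -1)

# ... so its S4-class (size 8) splits into two A4-classes of size 4
cls_S4 = {compose(g, compose(sigma, inverse(g))) for g in S4}
cls_A4 = {compose(g, compose(sigma, inverse(g))) for g in A4}
assert len(cls_S4) == 8 and len(cls_A4) == 4
```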
8 Normal subgroups and lifting
The idea of this section is that there is some correspondence between the irreducible characters of G/N (with N ◁ G) and those of G itself. In most cases, G/N is a simpler group, and hence we can use this correspondence to find irreducible characters of G. The main result is the following lemma:

Lemma. Let N ◁ G. Let ρ̃ : G/N → GL(V) be a representation of G/N. Then the composition

ρ : G → G/N → GL(V)

of the natural projection with ρ̃
is a representation of G, where ρ(g) = ρ̃(gN). Moreover:
(i) ρ is irreducible if and only if ρ̃ is irreducible.
(ii) The corresponding characters satisfy χ(g) = χ̃(gN).
(iii) deg χ = deg χ̃.
(iv) The lifting operation χ̃ 7→ χ is a bijection

{irreducibles of G/N} ←→ {irreducibles of G with N in their kernel}.

We say χ̃ lifts to χ.

Proof. Since a representation of G is just a homomorphism G → GL(V), and the composition of homomorphisms is a homomorphism, it follows immediately that ρ as defined in the lemma is a representation.

(i) We can compute
⟨χ, χ⟩ = (1/|G|) Σ_{g∈G} |χ(g)|^2
       = (1/|G|) Σ_{gN∈G/N} Σ_{k∈N} |χ(gk)|^2
       = (1/|G|) Σ_{gN∈G/N} Σ_{k∈N} |χ̃(gN)|^2
       = (1/|G|) Σ_{gN∈G/N} |N| |χ̃(gN)|^2
       = (1/|G/N|) Σ_{gN∈G/N} |χ̃(gN)|^2
       = ⟨χ̃, χ̃⟩.

So ⟨χ, χ⟩ = 1 if and only if ⟨χ̃, χ̃⟩ = 1. So ρ is irreducible if and only if ρ̃ is irreducible.

(ii) We can directly compute χ(g) = tr ρ(g) = tr(ρ̃(gN)) = χ̃(gN) for all g ∈ G.
(iii) To see that χ and χ̃ have the same degree, we just notice that deg χ = χ(1) = χ̃(N) = deg χ̃. Alternatively, to show they have the same dimension, just note that ρ and ρ̃ map to the general linear group of the same vector space.

(iv) To show this is a bijection, suppose χ̃ is a character of G/N and χ is its lift to G. We need to show that the kernel of χ contains N. By definition, we know χ̃(N) = χ(1). Also, if k ∈ N, then χ(k) = χ̃(kN) = χ̃(N) = χ(1). So N ≤ ker χ.

Now let χ be a character of G with N ≤ ker χ. Suppose ρ : G → GL(V) affords χ. Define

ρ̃ : G/N → GL(V), gN 7→ ρ(g).

Of course, we need to check this is well-defined. If gN = g′N, then g^{-1}g′ ∈ N. So ρ(g) = ρ(g′), since N ≤ ker ρ. So this is indeed well-defined. It is also easy to see that ρ̃ is a homomorphism, hence a representation of G/N. Finally, if χ̃ is the character of ρ̃, then χ̃(gN) = χ(g) for all g ∈ G by definition. So χ̃ lifts to χ. It is clear that these two operations are inverses of each other.

There is one particular normal subgroup lying around that proves to be useful.

Lemma. Given a group G, the derived subgroup or commutator subgroup

G′ = ⟨[a, b] : a, b ∈ G⟩, where [a, b] = aba^{-1}b^{-1},

is the unique minimal normal subgroup of G such that G/G′ is abelian. So if G/N is abelian, then G′ ≤ N. Moreover, G has precisely ℓ = |G : G′| representations of dimension 1, all with kernel containing G′, and they are obtained by lifting from G/G′. In particular, by Lagrange's theorem, ℓ | |G|.

Proof. Consider [a, b] = aba^{-1}b^{-1} ∈ G′. Then for any h ∈ G, we have

h(aba^{-1}b^{-1})h^{-1} = (ha)b(ha)^{-1}b^{-1} · bhb^{-1}h^{-1} = [ha, b][b, h] ∈ G′.

So in general, given [a1, b1][a2, b2] · · · [an, bn] ∈ G′, we have

h[a1, b1][a2, b2] · · · [an, bn]h^{-1} = (h[a1, b1]h^{-1})(h[a2, b2]h^{-1}) · · · (h[an, bn]h^{-1}),

which is in G′. So G′ is a normal subgroup.

Let N ◁ G, and let g, h ∈ G.
Then [g, h] ∈ N if and only if ghg⁻¹h⁻¹ ∈ N, if and only if ghN = hgN, if and only if (gN)(hN) = (hN)(gN), using normality of N. Since G′ is generated by all the [g, h], we conclude that G′ ≤ N if and only if G/N is abelian.
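To make this concrete, here is a small computational sketch (my own illustration, not part of the notes): it computes the derived subgroup of S₄ by closing the set of commutators under products, and checks that S₄′ = A₄, so that S₄ has exactly ℓ = |S₄ : S₄′| = 2 one-dimensional representations, as the lemma predicts.

```python
from itertools import permutations

# Permutations of {0,1,2,3} as tuples; compose(p, q) means "p after q".
def compose(p, q):
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

G = list(permutations(range(4)))  # the symmetric group S4

# All commutators [a, b] = a b a^{-1} b^{-1} ...
commutators = {compose(compose(a, b), compose(inverse(a), inverse(b)))
               for a in G for b in G}

# ... closed under products to generate G'.
derived = set(commutators)
while True:
    new = {compose(x, y) for x in derived for y in derived} - derived
    if not new:
        break
    derived |= new

def sign(p):
    # Parity of p via inversion count.
    inv = sum(1 for i in range(len(p)) for j in range(i + 1, len(p))
              if p[i] > p[j])
    return 1 if inv % 2 == 0 else -1

A4 = {p for p in G if sign(p) == 1}
```

The quotient order len(G) // len(derived) is the number ℓ of linear characters of S₄: the trivial character and the sign.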
Since G/G′ is abelian, it has exactly ℓ = |G : G′| irreducible characters χ̃₁, …, χ̃ℓ, all of degree 1. The lifts of these to G also have degree 1, and by the previous lemma they are precisely the irreducible characters χᵢ of G such that G′ ≤ ker χᵢ. But any degree 1 character of G is a homomorphism χ : G → C×, hence satisfies χ(ghg⁻¹h⁻¹) = 1. So for any 1-dimensional character χ, we must have G′ ≤ ker χ. So the lifts χ₁, …, χℓ are exactly the 1-dimensional characters of G.

Example. Let G = Sₙ, with n ≥ 3. We claim that Sₙ′ = Aₙ. Since sgn(a) = sgn(a⁻¹), we know that [a, b] ∈ Aₙ for any a, b ∈ Sₙ. So Sₙ′ ≤ Aₙ (alternatively, since Sₙ/Aₙ ≅ C₂ is abelian, we must have Sₙ′ ≤ Aₙ). We now notice that

  (1 2 3) = (1 3 2)(1 2)(1 3 2)⁻¹(1 2)⁻¹ ∈ Sₙ′.

So by symmetry, all 3-cycles are in Sₙ′. Since the 3-cycles generate Aₙ, we must have Sₙ′ = Aₙ.

Since Sₙ/Sₙ′ = Sₙ/Aₙ ≅ C₂, we get ℓ = 2. So Sₙ has exactly two linear characters, namely the trivial character and the sign.

Lemma. G is not simple if and only if χ(g) = χ(1) for some irreducible character χ ≠ 1_G and some 1 ≠ g ∈ G. Moreover, any normal subgroup of G is the intersection of the kernels of some of the irreducible characters of G, i.e. N = ⋂ ker χᵢ. In other words, G is simple if and only if every non-trivial irreducible character has trivial kernel.

Proof. Suppose χ(g) = χ(1) for some non-trivial irreducible character χ, afforded by ρ. Then g ∈ ker ρ. So if g ≠ 1, then 1 ≠ ker ρ ◁ G, and ker ρ ≠ G since χ ≠ 1_G. So G cannot be simple.

Conversely, if 1 ≠ N ◁ G is a non-trivial proper normal subgroup, take a non-trivial irreducible character χ̃ of G/N, so χ̃ ≠ 1_{G/N}. Lift this to get an irreducible character χ of G, afforded by the representation ρ. Then N ≤ ker ρ ◁ G, so χ(g) = χ(1) for all g ∈ N.

Finally, let 1 ≠ N ◁ G. We claim that N is the intersection of the kernels of the lifts χ₁, …, χℓ of all the irreducible characters of G/N. Clearly, we have N ≤ ⋂ᵢ ker χᵢ. If g ∈ G \ N, then gN ≠ N, so χ̃(gN) ≠ χ̃(N) for some irreducible χ̃ of G/N, by column orthogonality for G/N. Lifting χ̃ to χ, we have χ(g) ≠ χ(1). So g does not lie in the intersection of the kernels.
9 Dual spaces and tensor products of representations
In this chapter, we will consider several linear algebra constructions, and see how we can use them to construct new representations and characters.
9.1 Dual spaces
Our objective in this section is to come up with "dual representations". Given a representation ρ : G → GL(V), we would like to turn the dual space V* = Hom(V, F) into a representation. So for each g, we want to produce a map ρ*(g) : V* → V*; given φ ∈ V*, we shall get ρ*(g)φ : V → F. Given v ∈ V, a natural guess for the value of (ρ*(g)φ)(v) might be

  (ρ*(g)φ)(v) = φ(ρ(g)v),

since this is how we can patch together φ, ρ, g and v. However, if we do this, we will not get ρ*(g)ρ*(h) = ρ*(gh). Instead, we obtain ρ*(g)ρ*(h) = ρ*(hg), which is the wrong way round. The right definition to use is actually the following:

Lemma. Let ρ : G → GL(V) be a representation over F, and let V* = Hom_F(V, F) be the dual space of V. Then V* is a G-space under

  (ρ*(g)φ)(v) = φ(ρ(g⁻¹)v).

This is the dual representation of ρ. Its character is χ_{ρ*}(g) = χ_ρ(g⁻¹).

Proof. We have to check that ρ* is a homomorphism. We check

  (ρ*(g₁)(ρ*(g₂)φ))(v) = (ρ*(g₂)φ)(ρ(g₁⁻¹)v)
                       = φ(ρ(g₂⁻¹)ρ(g₁⁻¹)v)
                       = φ(ρ((g₁g₂)⁻¹)v)
                       = (ρ*(g₁g₂)φ)(v).

To compute the character, fix g ∈ G, and let e₁, …, eₙ be a basis of V consisting of eigenvectors of ρ(g), say ρ(g)eⱼ = λⱼeⱼ. Let ε₁, …, εₙ be the dual basis. Then

  (ρ*(g)εⱼ)(eᵢ) = εⱼ(ρ(g⁻¹)eᵢ) = εⱼ(λᵢ⁻¹eᵢ) = λᵢ⁻¹δᵢⱼ = λⱼ⁻¹δᵢⱼ = λⱼ⁻¹εⱼ(eᵢ).

Thus ρ*(g)εⱼ = λⱼ⁻¹εⱼ, and so

  χ_{ρ*}(g) = Σⱼ λⱼ⁻¹ = χ_ρ(g⁻¹).
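As a sanity check on the matrix level (my own illustration, not part of the notes): if R is a matrix representation affording ρ, then g ↦ R(g⁻¹)ᵀ affords ρ*, since the matrix of a dual map is the transpose. The sketch below verifies, for the permutation representation of S₃ on C³, that this recipe is a homomorphism whose character is g ↦ χ(g⁻¹).

```python
import numpy as np
from itertools import permutations

def perm_matrix(p):
    # Permutation matrix sending basis vector e_i to e_{p(i)}.
    n = len(p)
    M = np.zeros((n, n))
    for i in range(n):
        M[p[i], i] = 1
    return M

def compose(p, q):
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

G = list(permutations(range(3)))
rho = {g: perm_matrix(g) for g in G}
# Dual representation: rho*(g) = rho(g^{-1})^T.
rho_dual = {g: rho[inverse(g)].T for g in G}

# rho* is again a homomorphism ...
hom_ok = all(np.allclose(rho_dual[compose(g, h)],
                         rho_dual[g] @ rho_dual[h])
             for g in G for h in G)
# ... and its character at g is the character of rho at g^{-1}.
char_ok = all(np.isclose(np.trace(rho_dual[g]),
                         np.trace(rho[inverse(g)]))
              for g in G)
```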
Definition (Self-dual representation). A representation ρ : G → GL(V) is self-dual if there is an isomorphism of G-spaces V ≅ V*.
Note that as (finite-dimensional) vector spaces, we always have V ≅ V*. However, what we require here is an isomorphism preserving the G-action, which does not necessarily exist. Over F = C, self-duality holds if and only if

  χ_ρ(g) = χ_{ρ*}(g) = χ_ρ(g⁻¹) = \overline{χ_ρ(g)} for all g ∈ G.

So a representation is self-dual if and only if its character is real-valued.

Example. All irreducible representations of Sₙ are self-dual, since every element of Sₙ is conjugate to its own inverse. This is not always true for Aₙ.
9.2 Tensor products
The next idea we will tackle is the concept of tensor products. We first introduce it as a linear algebra concept, and later see how representations fit into this framework. There are many ways to define the tensor product. The definition we take here is a rather hands-on construction of the space, which involves picking a basis; we will later describe some other ways to define it.

Definition (Tensor product). Let V, W be vector spaces over F, with dim V = m and dim W = n. Fix a basis v₁, …, v_m of V and a basis w₁, …, w_n of W. The tensor product space V ⊗ W = V ⊗_F W is the mn-dimensional vector space over F with basis given by the formal symbols

  {vᵢ ⊗ wⱼ : 1 ≤ i ≤ m, 1 ≤ j ≤ n}.

Thus

  V ⊗ W = { Σ_{i,j} λᵢⱼ vᵢ ⊗ wⱼ : λᵢⱼ ∈ F },

with the "obvious" addition and scalar multiplication. If v = Σ αᵢvᵢ ∈ V and w = Σ βⱼwⱼ ∈ W, we define

  v ⊗ w = Σ_{i,j} αᵢβⱼ (vᵢ ⊗ wⱼ).

Note that not all elements of V ⊗ W are of this form. Some are genuine combinations: for example, v₁ ⊗ w₁ + v₂ ⊗ w₂ cannot be written as a tensor product of an element of V with an element of W.

We can imagine the formula for the tensor of two elements as writing (Σ αᵢvᵢ) ⊗ (Σ βⱼwⱼ) and then expanding by letting ⊗ distribute over +.

Lemma.
(i) For v ∈ V, w ∈ W and λ ∈ F, we have (λv) ⊗ w = λ(v ⊗ w) = v ⊗ (λw).
(ii) If x, x₁, x₂ ∈ V and y, y₁, y₂ ∈ W, then

  (x₁ + x₂) ⊗ y = (x₁ ⊗ y) + (x₂ ⊗ y),
  x ⊗ (y₁ + y₂) = (x ⊗ y₁) + (x ⊗ y₂).

Proof.
(i) Let v = Σ αᵢvᵢ and w = Σ βⱼwⱼ. Then

  (λv) ⊗ w = Σ_{i,j} (λαᵢ)βⱼ vᵢ ⊗ wⱼ,
  λ(v ⊗ w) = λ Σ_{i,j} αᵢβⱼ vᵢ ⊗ wⱼ,
  v ⊗ (λw) = Σ_{i,j} αᵢ(λβⱼ) vᵢ ⊗ wⱼ,

and these three are obviously all equal.

(ii) Similar.

We can also define the tensor product using these properties: consider the space of formal linear combinations of symbols v ⊗ w for all v ∈ V, w ∈ W, and quotient out by the relations we had above. It can be shown that this produces the same space as the one constructed above.

Alternatively, we can define the tensor product by its universal property, which says that for any U, a bilinear map V × W → U is "naturally" equivalent to a linear map V ⊗ W → U. This intuitively makes sense, since the distributivity and linearity properties of ⊗ shown above are exactly the properties required of a bilinear map if we replace the ⊗ with a comma. It turns out this uniquely defines V ⊗ W, as long as we give a sufficiently precise meaning to "naturally". Concretely, in our explicit construction we have a canonical bilinear map

  φ : V × W → V ⊗ W,  (v, w) ↦ v ⊗ w.

Given a linear map V ⊗ W → U, we can compose it with φ : V × W → V ⊗ W to get a bilinear map V × W → U. Universality says that every bilinear map V × W → U arises this way, uniquely. In general, we say an equivalence between bilinear maps V × W → U and linear maps V ⊗ W → U is "natural" if it is mediated by such a universal map. Then we say a vector space X is the tensor product of V and W if there is a natural equivalence between bilinear maps V × W → U and linear maps X → U.

We will stick with our definition for concreteness, but we prove basis-independence here so that we feel happier:

Lemma. Let {e₁, …, e_m} be any other basis of V, and {f₁, …, f_n} be any other basis of W. Then {eᵢ ⊗ fⱼ : 1 ≤ i ≤ m, 1 ≤ j ≤ n} is a basis of V ⊗ W.
Proof. Writing

  v_k = Σᵢ α_{ik} eᵢ,  w_ℓ = Σⱼ β_{jℓ} fⱼ,

we have

  v_k ⊗ w_ℓ = Σ_{i,j} α_{ik} β_{jℓ} eᵢ ⊗ fⱼ.

Therefore {eᵢ ⊗ fⱼ} spans V ⊗ W. Moreover, there are mn of them, which is the dimension of V ⊗ W. Therefore they form a basis.

That's enough linear algebra. We shall start doing some representation theory.

Proposition. Let ρ : G → GL(V) and ρ′ : G → GL(V′) be representations of G. Define ρ ⊗ ρ′ : G → GL(V ⊗ V′) by

  (ρ ⊗ ρ′)(g) : Σ λᵢⱼ vᵢ ⊗ wⱼ ↦ Σ λᵢⱼ (ρ(g)vᵢ) ⊗ (ρ′(g)wⱼ).

Then ρ ⊗ ρ′ is a representation of G, with character

  χ_{ρ⊗ρ′}(g) = χ_ρ(g) χ_{ρ′}(g) for all g ∈ G.

As promised, the product of two characters of G is also a character of G. Just before we prove this, recall we showed in sheet 1 that if ρ is irreducible and ρ′ is irreducible of degree 1, then ρ′ρ = ρ ⊗ ρ′ is irreducible. However, if ρ′ is not of degree 1, then this is almost always false (or else we could produce arbitrarily many irreducible representations).

Proof. It is clear that (ρ ⊗ ρ′)(g) ∈ GL(V ⊗ V′) for all g ∈ G, and that ρ ⊗ ρ′ is a homomorphism G → GL(V ⊗ V′). To check the character is indeed as stated, let g ∈ G. Let v₁, …, v_m be a basis of V of eigenvectors of ρ(g), and let w₁, …, w_n be a basis of V′ of eigenvectors of ρ′(g), say

  ρ(g)vᵢ = λᵢvᵢ,  ρ′(g)wⱼ = μⱼwⱼ.

Then

  (ρ ⊗ ρ′)(g)(vᵢ ⊗ wⱼ) = ρ(g)vᵢ ⊗ ρ′(g)wⱼ = λᵢvᵢ ⊗ μⱼwⱼ = (λᵢμⱼ)(vᵢ ⊗ wⱼ).

So

  χ_{ρ⊗ρ′}(g) = Σ_{i,j} λᵢμⱼ = (Σᵢ λᵢ)(Σⱼ μⱼ) = χ_ρ(g) χ_{ρ′}(g).
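In matrix terms, (ρ ⊗ ρ′)(g) is the Kronecker product of the matrices ρ(g) and ρ′(g), and the identity tr(A ⊗ B) = tr(A) tr(B) is exactly the character formula. A quick numerical sketch (my own illustration, not from the notes; the matrices below are arbitrary stand-ins for ρ(g), ρ(h), ρ′(g), ρ′(h), since both identities hold for any matrices):

```python
import numpy as np

rng = np.random.default_rng(0)
A, A2 = rng.standard_normal((2, 3, 3))   # stand-ins for rho(g), rho(h)
B, B2 = rng.standard_normal((2, 2, 2))   # stand-ins for rho'(g), rho'(h)

# Character of the tensor product: tr(A (x) B) = tr(A) tr(B).
lhs = np.trace(np.kron(A, B))
rhs = np.trace(A) * np.trace(B)

# The Kronecker product also respects composition, so g -> kron(rho(g), rho'(g))
# is again a homomorphism (mixed-product property).
hom_ok = np.allclose(np.kron(A @ A2, B @ B2),
                     np.kron(A, B) @ np.kron(A2, B2))
```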
9.3 Powers of characters
We work over C. Take V′ = V, a G-space, and define

Notation. V^⊗2 = V ⊗ V.

We define τ : V^⊗2 → V^⊗2 by

  τ : Σ λᵢⱼ vᵢ ⊗ vⱼ ↦ Σ λᵢⱼ vⱼ ⊗ vᵢ.

This is a linear endomorphism of V^⊗2 such that τ² = 1. So its eigenvalues are ±1.

Definition (Symmetric and exterior square). We define the symmetric square and exterior square of V to be, respectively,

  S²V = {x ∈ V^⊗2 : τ(x) = x},
  Λ²V = {x ∈ V^⊗2 : τ(x) = −x}.

The exterior square is also known as the anti-symmetric square or wedge power.

Lemma. For any G-space V, the subspaces S²V and Λ²V are G-subspaces of V^⊗2, and

  V^⊗2 = S²V ⊕ Λ²V.

The space S²V has basis {vᵢvⱼ = vᵢ ⊗ vⱼ + vⱼ ⊗ vᵢ : 1 ≤ i ≤ j ≤ n}, while Λ²V has basis {vᵢ ∧ vⱼ = vᵢ ⊗ vⱼ − vⱼ ⊗ vᵢ : 1 ≤ i < j ≤ n}. Note that the inequality is strict in the second case, since vᵢ ⊗ vⱼ − vⱼ ⊗ vᵢ = 0 when i = j. Hence

  dim S²V = ½ n(n + 1),  dim Λ²V = ½ n(n − 1).

Sometimes a factor of ½ appears in front of the definition of the basis elements vᵢvⱼ and vᵢ ∧ vⱼ, i.e.

  vᵢvⱼ = ½(vᵢ ⊗ vⱼ + vⱼ ⊗ vᵢ),  vᵢ ∧ vⱼ = ½(vᵢ ⊗ vⱼ − vⱼ ⊗ vᵢ).

However, this is not important.

Proof. This is elementary linear algebra. For the decomposition of V^⊗2, given x ∈ V^⊗2, we can write it as

  x = ½(x + τ(x)) + ½(x − τ(x)),

where the first term lies in S²V and the second in Λ²V.

How is this useful for character calculations?
Lemma. Let ρ : G → GL(V) be a representation affording the character χ. Then χ² = χ_S + χ_Λ, where χ_S = S²χ is the character of G in the subrepresentation on S²V, and χ_Λ = Λ²χ the character of G in the subrepresentation on Λ²V. Moreover, for g ∈ G,

  χ_S(g) = ½(χ²(g) + χ(g²)),  χ_Λ(g) = ½(χ²(g) − χ(g²)).

The real content of the lemma is, of course, the formulas for χ_S and χ_Λ.

Proof. The fact that χ² = χ_S + χ_Λ is immediate from the decomposition of G-spaces. We now compute the characters χ_S and χ_Λ. For g ∈ G, we let v₁, …, vₙ be a basis of V of eigenvectors of ρ(g), say ρ(g)vᵢ = λᵢvᵢ. We'll be lazy and just write gvᵢ instead of ρ(g)vᵢ. Then, acting on Λ²V, we get

  g(vᵢ ∧ vⱼ) = λᵢλⱼ vᵢ ∧ vⱼ.

Thus, since χ(g) = Σ λᵢ and χ(g²) = Σ λᵢ²,

  χ_Λ(g) = Σ_{1≤i<j≤n} λᵢλⱼ = ½((Σᵢ λᵢ)² − Σᵢ λᵢ²) = ½(χ²(g) − χ(g²)).

Similarly, acting on S²V we get g(vᵢvⱼ) = λᵢλⱼ vᵢvⱼ for i ≤ j, so

  χ_S(g) = Σ_{1≤i≤j≤n} λᵢλⱼ = ½((Σᵢ λᵢ)² + Σᵢ λᵢ²) = ½(χ²(g) + χ(g²)).
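The formula for χ_Λ can be checked numerically: the matrix of g acting on Λ²V in the basis {vᵢ ∧ vⱼ : i < j} is the second compound matrix of M = ρ(g), and its trace should be ½(tr(M)² − tr(M²)); the trace on S²V is then forced by χ² = χ_S + χ_Λ. A sketch (my own illustration, not from the notes; M is an arbitrary stand-in for ρ(g), since these are trace identities valid for any square matrix):

```python
import numpy as np
from itertools import combinations

def exterior_square(M):
    # Matrix of the induced action on the wedge basis {e_i ^ e_j : i < j}:
    # M(e_k) ^ M(e_l) = sum over i<j of (M[i,k]M[j,l] - M[i,l]M[j,k]) e_i ^ e_j.
    n = M.shape[0]
    pairs = list(combinations(range(n), 2))
    E = np.zeros((len(pairs), len(pairs)))
    for a, (i, j) in enumerate(pairs):
        for b, (k, l) in enumerate(pairs):
            E[a, b] = M[i, k] * M[j, l] - M[i, l] * M[j, k]
    return E

rng = np.random.default_rng(1)
M = rng.standard_normal((4, 4))  # stand-in for rho(g)

chi, chi_sq = np.trace(M), np.trace(M @ M)   # "chi(g)" and "chi(g^2)"
lam_trace = np.trace(exterior_square(M))     # character on the exterior square
sym_trace = chi**2 - lam_trace               # forced by chi^2 = chi_S + chi_Lambda
```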
9.7 Character ring

… and also χ(1) > 0, we must have nᵢ = +1. So α = χᵢ.
Henceforth we don’t distinguish between a character and its negative, and often study generalized characters of inner product 1 rather than irreducible characters.
10 Induction and restriction
This is the last chapter on methods of calculating characters. Afterwards, we shall start applying these to do something useful. Throughout the chapter, we will take F = C.

Suppose H ≤ G. How do characters of G and H relate? It is easy to see that every representation of G can be restricted to give a representation of H. More surprisingly, given a representation of H, we can induce a representation of G. We first deal with the easy case of restriction.

Definition (Restriction). Let ρ : G → GL(V) be a representation affording χ. We can think of V as an H-space by restricting ρ's attention to h ∈ H. We get a representation Res^G_H ρ = ρ_H = ρ↓_H, the restriction of ρ to H. It affords the character Res^G_H χ = χ_H = χ↓_H.

It is a fact of life that the restriction of an irreducible character is in general not irreducible. For example, restricting any non-linear irreducible character to the trivial subgroup will clearly produce a reducible character.

Lemma. Let H ≤ G. If ψ is any irreducible character of H, then there exists an irreducible character χ of G such that ψ is a constituent of Res^G_H χ, i.e.

  ⟨Res^G_H χ, ψ⟩ ≠ 0.

Proof. We list the irreducible characters of G as χ₁, …, χ_k, and recall the regular character π_reg. On the one hand,

  ⟨Res^G_H π_reg, ψ⟩_H = (|G|/|H|) ψ(1) ≠ 0.

On the other hand, we also have

  ⟨Res^G_H π_reg, ψ⟩_H = Σ_{i=1}^k (deg χᵢ) ⟨Res^G_H χᵢ, ψ⟩.

Since this sum is non-zero, there must be some i such that ⟨Res^G_H χᵢ, ψ⟩ ≠ 0.

Lemma. Let χ be an irreducible character of G, and write

  Res^G_H χ = Σᵢ cᵢχᵢ,

with χᵢ the irreducible characters of H, and cᵢ non-negative integers. Then

  Σ cᵢ² ≤ |G : H|,

with equality if and only if χ(g) = 0 for all g ∈ G \ H.

This is useful only if the index is small, e.g. 2 or 3.
Proof. We have

  ⟨Res^G_H χ, Res^G_H χ⟩_H = Σᵢ cᵢ².

However, by definition, we also have

  ⟨Res^G_H χ, Res^G_H χ⟩_H = (1/|H|) Σ_{h∈H} |χ(h)|².

On the other hand, since χ is irreducible, we have

  1 = ⟨χ, χ⟩_G
    = (1/|G|) Σ_{g∈G} |χ(g)|²
    = (1/|G|) ( Σ_{h∈H} |χ(h)|² + Σ_{g∈G\H} |χ(g)|² )
    = (|H|/|G|) Σᵢ cᵢ² + (1/|G|) Σ_{g∈G\H} |χ(g)|²
    ≥ (|H|/|G|) Σᵢ cᵢ².

Since |G|/|H| = |G : H|, the result follows, with equality precisely when χ(g) = 0 for all g ∈ G \ H.

Now we go in the other direction: given a character of H, we want to induce a character of the larger group G. Perhaps rather counter-intuitively, it is easier for us to first construct the character, and then later find a representation that affords this character. We start with a slightly more general notion of an induced class function.

Definition (Induced class function). Let ψ ∈ C(H). We define the induced class function Ind^G_H ψ = ψ↑G = ψ^G by

  Ind^G_H ψ(g) = (1/|H|) Σ_{x∈G} ψ̊(x⁻¹gx),

where

  ψ̊(y) = ψ(y) if y ∈ H,  and ψ̊(y) = 0 if y ∉ H.
The first thing to check is, of course, that Ind^G_H ψ is indeed a class function.

Lemma. Let ψ ∈ C(H). Then Ind^G_H ψ ∈ C(G), and Ind^G_H ψ(1) = |G : H| ψ(1).

Proof. The fact that Ind^G_H ψ is a class function follows from direct inspection of the formula. Then we have

  Ind^G_H ψ(1) = (1/|H|) Σ_{x∈G} ψ̊(1) = (|G|/|H|) ψ(1) = |G : H| ψ(1).
As it stands, the formula is not terribly useful. So we need to find an alternative formula for it. If we have a subgroup, a sensible thing to do is to look at its cosets. We let n = |G : H|, and let 1 = t₁, …, tₙ be a left transversal of H in G, i.e. t₁H = H, …, tₙH are precisely the n left cosets of H in G.

Lemma. Given a left transversal t₁, …, tₙ of H, we have

  Ind^G_H ψ(g) = Σ_{i=1}^n ψ̊(tᵢ⁻¹gtᵢ).

Proof. We can express every x ∈ G as x = tᵢh for a unique i and h ∈ H. We then have

  ψ̊((tᵢh)⁻¹g(tᵢh)) = ψ̊(h⁻¹(tᵢ⁻¹gtᵢ)h) = ψ̊(tᵢ⁻¹gtᵢ),

since ψ is a class function of H, and h⁻¹(tᵢ⁻¹gtᵢ)h ∈ H if and only if tᵢ⁻¹gtᵢ ∈ H, as h ∈ H. Summing over x ∈ G and dividing by |H| gives the result.
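The two formulas for Ind^G_H ψ are easy to compare by brute force on a small example (my own illustration, not from the notes): G = S₃, H = ⟨(1 2)⟩, and ψ the degree-1 character of H with ψ((1 2)) = −1.

```python
from itertools import permutations

def compose(p, q):
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

e = (0, 1, 2)
G = list(permutations(range(3)))
H = [e, (1, 0, 2)]                 # the subgroup <(1 2)>
psi = {e: 1, (1, 0, 2): -1}        # a degree-1 character of H

def psi0(y):                       # psi extended by zero off H
    return psi.get(y, 0)

def ind_definition(g):
    # (1/|H|) * sum over all x in G of psi0(x^{-1} g x)
    return sum(psi0(compose(compose(inverse(x), g), x)) for x in G) / len(H)

# Build a left transversal: one representative per left coset tH.
transversal, seen = [], set()
for t in G:
    coset = frozenset(compose(t, h) for h in H)
    if coset not in seen:
        seen.add(coset)
        transversal.append(t)

def ind_transversal(g):
    return sum(psi0(compose(compose(inverse(t), g), t)) for t in transversal)

agree = all(ind_definition(g) == ind_transversal(g) for g in G)
```

As a check of the degree formula, Ind^G_H ψ(1) should be |G : H| ψ(1) = 3.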
This is still not too useful by itself. But we are building up to something useful.

Theorem (Frobenius reciprocity). Let ψ ∈ C(H) and φ ∈ C(G). Then

  ⟨Res^G_H φ, ψ⟩_H = ⟨φ, Ind^G_H ψ⟩_G.

Alternatively, we can write this as ⟨φ_H, ψ⟩ = ⟨φ, ψ^G⟩. If you are a category theorist, and view restriction and induction as functors, then this says restriction is a right adjoint to induction, and this is just a special case of the tensor-Hom adjunction. If you neither understand nor care about this, then this is a good sign [sic].

Proof. We have

  ⟨φ, ψ^G⟩ = (1/|G|) Σ_{g∈G} φ(g) \overline{ψ^G(g)}
           = (1/(|G||H|)) Σ_{x,g∈G} φ(g) \overline{ψ̊(x⁻¹gx)}.

We now write y = x⁻¹gx. For each fixed x, summing over g is the same as summing over y. Since φ is a G-class function, φ(g) = φ(y), and this becomes

  = (1/(|G||H|)) Σ_{x,y∈G} φ(y) \overline{ψ̊(y)}.

Now note that the inner sum is independent of x. So this becomes

  = (1/|H|) Σ_{y∈G} φ(y) \overline{ψ̊(y)}.

This only has contributions when y ∈ H, by definition of ψ̊. So

  = (1/|H|) Σ_{y∈H} φ(y) \overline{ψ(y)} = ⟨φ_H, ψ⟩_H.
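Frobenius reciprocity can likewise be verified numerically (my own illustration, not from the notes): take G = S₃, H = ⟨(1 2 3)⟩, φ the permutation character of G on 3 points, and ψ a non-trivial linear character of H.

```python
import cmath
from itertools import permutations

def compose(p, q):
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

G = list(permutations(range(3)))
c = (1, 2, 0)                               # the 3-cycle (1 2 3)
H = [(0, 1, 2), c, compose(c, c)]           # the subgroup <(1 2 3)>
w = cmath.exp(2j * cmath.pi / 3)
psi = {H[k]: w**k for k in range(3)}        # non-trivial character of H

# Permutation character: number of fixed points.
phi = {g: sum(1 for i in range(3) if g[i] == i) for g in G}

def inner(f1, f2, S):
    # <f1, f2> = (1/|S|) sum f1(s) * conjugate(f2(s))
    return sum(f1[s] * f2[s].conjugate() for s in S) / len(S)

def ind_psi(g):
    vals = [psi.get(compose(compose(inverse(x), g), x), 0) for x in G]
    return sum(vals) / len(H)

lhs = inner({h: phi[h] for h in H}, psi, H)       # <Res phi, psi>_H
rhs = inner(phi, {g: ind_psi(g) for g in G}, G)   # <phi, Ind psi>_G
```

Here both sides come out to 1, reflecting that Ind ψ is the 2-dimensional irreducible constituent of φ.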
Corollary. Let ψ be a character of H. Then Ind^G_H ψ is a character of G.

Proof. Let χ be an irreducible character of G. Then by Frobenius reciprocity,

  ⟨Ind^G_H ψ, χ⟩ = ⟨ψ, Res^G_H χ⟩.

Since ψ and Res^G_H χ are characters, the right-hand side lies in Z≥0. Hence Ind^G_H ψ is a linear combination of irreducible characters with non-negative integer coefficients, and is hence a character.
Recall that we denote the conjugacy class of g ∈ G by 𝒞_G(g), while the centralizer is C_G(g). If we take a conjugacy class 𝒞_G(g) in G and intersect it with H, the result 𝒞_G(g) ∩ H need not be a single conjugacy class of H, since the element needed to conjugate x, y ∈ 𝒞_G(g) ∩ H need not be present in H, as is familiar from the case of Aₙ ≤ Sₙ. However, we know that 𝒞_G(g) ∩ H is a union of conjugacy classes in H, since elements conjugate in H are also conjugate in G.

Proposition. Let ψ be a character of H ≤ G, and let g ∈ G. Write

  𝒞_G(g) ∩ H = ⋃_{i=1}^m 𝒞_H(xᵢ),

where the xᵢ are representatives of the H-conjugacy classes of elements of H conjugate to g. If m = 0, then Ind^G_H ψ(g) = 0. Otherwise,

  Ind^G_H ψ(g) = |C_G(g)| Σ_{i=1}^m ψ(xᵢ)/|C_H(xᵢ)|.
This is all just group theory. Note that some people will think this proof is excessive: everything shown is "obvious". In some sense it is. Some steps seem very obvious to certain people, but we are spelling out all the details so that everyone is happy.

Proof. If m = 0, then {x ∈ G : x⁻¹gx ∈ H} = ∅, so ψ̊(x⁻¹gx) = 0 for all x. So Ind^G_H ψ(g) = 0 by definition.

Now assume m > 0. We let

  Xᵢ = {x ∈ G : x⁻¹gx ∈ H and x⁻¹gx is conjugate in H to xᵢ}.

By definition of the xᵢ, we know the Xᵢ are pairwise disjoint, and their union is {x ∈ G : x⁻¹gx ∈ H}. Hence, by definition,

  Ind^G_H ψ(g) = (1/|H|) Σ_{x∈G} ψ̊(x⁻¹gx)
             = (1/|H|) Σ_{i=1}^m Σ_{x∈Xᵢ} ψ(x⁻¹gx)
             = (1/|H|) Σ_{i=1}^m Σ_{x∈Xᵢ} ψ(xᵢ)
             = Σ_{i=1}^m (|Xᵢ|/|H|) ψ(xᵢ),

using that ψ is a class function of H.
So we now have to show that in fact

  |Xᵢ|/|H| = |C_G(g)|/|C_H(xᵢ)|.

We fix some 1 ≤ i ≤ m. Choose some gᵢ ∈ G such that gᵢ⁻¹ggᵢ = xᵢ; this exists by definition of xᵢ. So for every c ∈ C_G(g) and h ∈ H, we have

  (cgᵢh)⁻¹g(cgᵢh) = h⁻¹gᵢ⁻¹c⁻¹gcgᵢh = h⁻¹gᵢ⁻¹ggᵢh = h⁻¹xᵢh,

using the fact that c commutes with g. Hence, by definition of Xᵢ, we know cgᵢh ∈ Xᵢ, i.e. C_G(g)gᵢH ⊆ Xᵢ.

Conversely, if x ∈ Xᵢ, then x⁻¹gx = h⁻¹xᵢh = h⁻¹(gᵢ⁻¹ggᵢ)h for some h ∈ H. Thus xh⁻¹gᵢ⁻¹ ∈ C_G(g), and so x ∈ C_G(g)gᵢh ⊆ C_G(g)gᵢH. So we conclude Xᵢ = C_G(g)gᵢH.

Thus, using some group theory magic (the double coset size formula below), which we shall not prove, we get

  |Xᵢ| = |C_G(g)gᵢH| = |C_G(g)||H| / |H ∩ gᵢ⁻¹C_G(g)gᵢ|.

Finally, we note gᵢ⁻¹C_G(g)gᵢ = C_G(gᵢ⁻¹ggᵢ) = C_G(xᵢ). Thus

  |Xᵢ| = |H||C_G(g)| / |H ∩ C_G(xᵢ)| = |H||C_G(g)| / |C_H(xᵢ)|.

Dividing by |H|, we get

  |Xᵢ|/|H| = |C_G(g)|/|C_H(xᵢ)|.

So done.

To clarify matters: if H, K ≤ G, then a double coset of H, K in G is a set

  HxK = {hxk : h ∈ H, k ∈ K}

for some x ∈ G. Facts about double cosets include:
(i) Two double cosets are either disjoint or equal.
(ii) Their sizes are given by

  |HxK| = |H||K| / |H ∩ xKx⁻¹| = |H||K| / |x⁻¹Hx ∩ K|.
Lemma. Let ψ = 1_H, the trivial character of H. Then Ind^G_H 1_H = π_X, the permutation character of G on the set X = G/H of left cosets of H.

Proof. We let n = |G : H|, and t₁, …, tₙ be representatives of the cosets. By definition, we know

  Ind^G_H 1_H(g) = Σ_{i=1}^n 1̊_H(tᵢ⁻¹gtᵢ)
               = |{i : tᵢ⁻¹gtᵢ ∈ H}|
               = |{i : g ∈ tᵢHtᵢ⁻¹}|.

But tᵢHtᵢ⁻¹ is the stabilizer in G of the coset tᵢH ∈ X. So this is equal to |fix_X(g)| = π_X(g).

By Frobenius reciprocity, we know

  ⟨π_X, 1_G⟩_G = ⟨Ind^G_H 1_H, 1_G⟩_G = ⟨1_H, 1_H⟩_H = 1.

So the multiplicity of 1_G in π_X is indeed 1, as we have shown before.

Example. Let H = C₄ = ⟨(1 2 3 4)⟩ ≤ G = S₄. The index is |S₄ : C₄| = 6. We consider the character of the induced representation Ind^G_H(α), where α is the faithful 1-dimensional representation of C₄ given by α((1 2 3 4)) = i. The character of α is:
         1    (1 2 3 4)    (1 3)(2 4)    (1 4 3 2)
  χ_α    1        i           −1            −i
What is the induced character in S₄? We know that

  Ind^G_H(α)(1) = |G : H| α(1) = 6.

So the degree of Ind^G_H(α) is 6. Also, the elements (1 2) and (1 2 3) are not conjugate to anything in C₄. So the character vanishes on their classes. For (1 2)(3 4), only one of its three S₄-conjugates lies in H, namely (1 3)(2 4). So, using the sizes of the centralizers, we obtain

  Ind^G_H α((1 2)(3 4)) = 8 · (−1)/4 = −2.

For (1 2 3 4), it is conjugate to 6 elements of S₄, two of which are in C₄, namely (1 2 3 4) and (1 4 3 2). So

  Ind^G_H α((1 2 3 4)) = 4 · (i/4 + (−i)/4) = 0.

So we get the following character table:
                 1    (1 2)   (1 2 3)   (1 2)(3 4)   (1 2 3 4)
  class size     1      6        8          3            6
  Ind^G_H(α)     6      0        0         −2            0
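The values in this table can be double-checked by brute force from the definition of the induced class function. A sketch, my own illustration rather than part of the notes:

```python
from itertools import permutations

def compose(p, q):
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

G = list(permutations(range(4)))
c = (1, 2, 3, 0)                       # the 4-cycle (1 2 3 4), 0-indexed

# H = <c> with alpha(c^k) = i^k.
H, alpha = [], {}
h, val = (0, 1, 2, 3), 1
for _ in range(4):
    H.append(h)
    alpha[h] = val
    h, val = compose(c, h), val * 1j

def ind_alpha(g):
    # (1/|H|) * sum over x in G of alpha0(x^{-1} g x)
    s = sum(alpha.get(compose(compose(inverse(x), g), x), 0) for x in G)
    return s / len(H)

# One representative of each conjugacy class of S4 (0-indexed):
reps = {
    "1": (0, 1, 2, 3),
    "(1 2)": (1, 0, 2, 3),
    "(1 2 3)": (1, 2, 0, 3),
    "(1 2)(3 4)": (1, 0, 3, 2),
    "(1 2 3 4)": c,
}
values = {name: ind_alpha(g) for name, g in reps.items()}
```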
Induced representations

We have constructed a class function, and shown it is indeed the character of some representation. However, we currently have no idea what this corresponding representation is. Here we will try to construct such a representation explicitly. This is not too enlightening, but it's nice to know it can be done. We will also need this explicit construction when proving Mackey's theorem later.

Let H ≤ G have index n, and let 1 = t₁, …, tₙ be a left transversal. Let W be an H-space. Define the vector space

  V = Ind^G_H W = W ⊕ (t₂ ⊗ W) ⊕ ⋯ ⊕ (tₙ ⊗ W),

where tᵢ ⊗ W = {tᵢ ⊗ w : w ∈ W}. So dim V = n dim W.

To define the G-action on V, let g ∈ G. Then for every i, there is a unique j such that tⱼ⁻¹gtᵢ ∈ H, i.e. gtᵢH = tⱼH. We define

  g(tᵢ ⊗ w) = tⱼ ⊗ ((tⱼ⁻¹gtᵢ)w).

We tend to drop the tensor signs, and just write g(tᵢw) = tⱼ(tⱼ⁻¹gtᵢw).

We claim this is a G-action. We have

  g₁(g₂tᵢw) = g₁(tⱼ(tⱼ⁻¹g₂tᵢ)w),

where j is the unique index such that g₂tᵢH = tⱼH. This is then equal to

  tₗ((tₗ⁻¹g₁tⱼ)(tⱼ⁻¹g₂tᵢ)w) = tₗ((tₗ⁻¹(g₁g₂)tᵢ)w) = (g₁g₂)(tᵢw),

where ℓ is the unique index such that g₁tⱼH = tₗH (and hence g₁g₂tᵢH = g₁tⱼH = tₗH).

Definition (Induced representation). Let H ≤ G have index n, and let 1 = t₁, …, tₙ be a left transversal. Let W be an H-space. Define the induced representation to be the vector space

  Ind^G_H W = W ⊕ (t₂ ⊗ W) ⊕ ⋯ ⊕ (tₙ ⊗ W),

with the G-action

  g : tᵢw ↦ tⱼ((tⱼ⁻¹gtᵢ)w),

where tⱼ is the unique element (among t₁, …, tₙ) such that tⱼ⁻¹gtᵢ ∈ H.

This has the "right" character. Suppose W has character ψ. Since g : tᵢw ↦ tⱼ((tⱼ⁻¹gtᵢ)w), the contribution of the summand tᵢ ⊗ W to the character is 0 unless j = i, i.e. unless tᵢ⁻¹gtᵢ ∈ H, in which case it contributes ψ(tᵢ⁻¹gtᵢ). Thus we get

  Ind^G_H ψ(g) = Σ_{i=1}^n ψ̊(tᵢ⁻¹gtᵢ).
Note that this construction is rather ugly. It could be made much nicer if we knew a bit more algebra: we can write the induced module simply as

  Ind^G_H W = FG ⊗_{FH} W,

where we view W as a left FH-module, and FG as an (FG, FH)-bimodule. Alternatively, this is the extension of scalars from FH to FG.
11 Frobenius groups
We will now use character theory to prove some major results in finite group theory. We first prove Frobenius' theorem here; later, we will prove Burnside's p^a q^b theorem. Frobenius' theorem has lots and lots of proofs, but all of them involve some sort of representation theory; it seems to be an unavoidable ingredient.

Theorem (Frobenius' theorem (1891)). Let G be a transitive permutation group on a finite set X, with |X| = n. Assume that each non-identity element of G fixes at most one element of X. Then the set of fixed-point-free elements ("derangements")

  K = {1} ∪ {g ∈ G : gα ≠ α for all α ∈ X}

is a normal subgroup of G with order n.

On the face of it, it is not clear K is even a subgroup at all. It turns out normality isn't really hard to prove; the hard part is indeed showing it is a subgroup. Note that we did not explicitly say G is finite, but these conditions imply G ≤ Sₙ, which forces G to be finite.

Proof. The idea of the proof is to construct a character Θ whose kernel is K. First note that by definition of K, we have

  G = K ∪ ⋃_{α∈X} G_α,

where G_α is, as usual, the stabilizer of α. Also, we know that G_α ∩ G_β = {1} if α ≠ β by assumption, and by definition of K, we have K ∩ G_α = {1} as well. Next note that all the G_α are conjugate: indeed, G is transitive, and gG_αg⁻¹ = G_{gα}. We set H = G_α for some arbitrary choice of α. Counting the elements of G in the decomposition above then gives

  |G| = |K| + |X|(|H| − 1).

On the other hand, by the orbit-stabilizer theorem, we know |G| = |X||H|. So it follows that |K| = |X| = n.

We first compute what induced characters look like.

Claim. Let ψ be a character of H. Then

  Ind^G_H ψ(g) = nψ(1) if g = 1;
               = ψ(g)  if g ∈ H \ {1};
               = 0     if g ∈ K \ {1}.

Since every element in G is either in K or conjugate to an element in H, this uniquely specifies what the induced character is.
This is a matter of computation. Since [G : H] = n, the case g = 1 follows immediately. Using the definition of the induced character, since no non-identity element of K is conjugate to any element of H, we know the induced character vanishes on K \ {1}. Finally, suppose g ∈ H \ {1}. Note that if x ∈ G, then xgx⁻¹ ∈ G_{xα}, so this lies in H if and only if x ∈ H. So we can write the induced character as

  Ind^G_H ψ(g) = (1/|H|) Σ_{x∈G} ψ̊(xgx⁻¹) = (1/|H|) Σ_{h∈H} ψ(hgh⁻¹) = ψ(g).
Claim. Let ψ be an irreducible character of H, and define

  θ = ψ^G − ψ(1)(1_H)^G + ψ(1)1_G.

Then θ is a character, and

  θ(g) = ψ(g) if g ∈ H;  θ(g) = ψ(1) if g ∈ K.

Note that we chose the coefficients exactly so that the final property of θ holds. This is a matter of computation:

                     1         h ∈ H \ {1}    K \ {1}
  ψ^G              nψ(1)          ψ(h)           0
  ψ(1)(1_H)^G      nψ(1)          ψ(1)           0
  ψ(1)1_G          ψ(1)           ψ(1)          ψ(1)
  θ                ψ(1)           ψ(h)          ψ(1)

The less obvious part is that θ is a character. From the way we wrote it, we already know it is a virtual character. We then compute the inner product

  ⟨θ, θ⟩_G = (1/|G|) Σ_{g∈G} |θ(g)|²
          = (1/|G|) ( Σ_{g∈K} |θ(g)|² + Σ_{g∈G\K} |θ(g)|² )
          = (1/|G|) ( n|ψ(1)|² + n Σ_{1≠h∈H} |ψ(h)|² )
          = (1/|G|) · n Σ_{h∈H} |ψ(h)|²
          = (1/|G|) · n|H| ⟨ψ, ψ⟩_H
          = 1,

since |G| = n|H| and ψ is irreducible. So either θ or −θ is a character. But θ(1) = ψ(1) > 0. So θ is a character. Finally, we have:
Claim. Let ψ₁, …, ψₜ be the irreducible characters of H, and let θᵢ be the corresponding characters of G constructed above. Set

  Θ = Σ_{i=1}^t θᵢ(1)θᵢ.

Then we have

  Θ(g) = |H| if g ∈ K;  Θ(g) = 0 if g ∉ K.

From this, it follows that the kernel of a representation affording Θ is K, and in particular K is a normal subgroup of G.

This is again a computation, using column orthogonality for H. For 1 ≠ h ∈ H, we have

  Θ(h) = Σ_{i=1}^t ψᵢ(1)ψᵢ(h) = 0,

and for any y ∈ K, we have

  Θ(y) = Σ_{i=1}^t ψᵢ(1)² = |H|.
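The smallest example is G = S₃ acting on X = {0, 1, 2}: the action is transitive and each non-identity element fixes at most one point, so the theorem predicts that the identity together with the derangements forms a normal subgroup of order 3 (here A₃). A sketch, my own illustration rather than part of the notes:

```python
from itertools import permutations

def compose(p, q):
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

G = list(permutations(range(3)))
e = (0, 1, 2)

def fixed(g):
    return [i for i in range(3) if g[i] == i]

# Hypothesis of Frobenius' theorem: non-identity elements fix at most one point.
hypothesis = all(len(fixed(g)) <= 1 for g in G if g != e)

# K = {1} together with the fixed-point-free elements (derangements).
K = {e} | {g for g in G if not fixed(g)}
closed = all(compose(a, b) in K for a in K for b in K)
normal = all(compose(compose(g, k), inverse(g)) in K for g in G for k in K)
```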
Definition (Frobenius group and Frobenius complement). A Frobenius group is a group G having a subgroup H such that H ∩ gHg⁻¹ = 1 for all g ∉ H. We say H is a Frobenius complement of G.

How does this relate to the previous theorem?

Proposition. The left action of any finite Frobenius group on the cosets of a Frobenius complement satisfies the hypotheses of Frobenius' theorem.

Definition (Frobenius kernel). The Frobenius kernel of a Frobenius group G is the subgroup K obtained from Frobenius' theorem.

Proof. Let G be a Frobenius group, having a complement H. Then the action of G on the cosets G/H is transitive. Furthermore, if 1 ≠ g ∈ G fixes both xH and yH, then we have g ∈ xHx⁻¹ ∩ yHy⁻¹. This implies H ∩ (y⁻¹x)H(y⁻¹x)⁻¹ ≠ 1, and hence xH = yH. So every non-identity element fixes at most one coset.

Note that J. Thompson proved (in his 1959 thesis) that any finite group having a fixed-point-free automorphism of prime order is nilpotent. This implies the Frobenius kernel K is nilpotent, which means K is a direct product of its Sylow subgroups.
12 Mackey theory
We work over C. We wish to describe the restriction to a subgroup K ≤ G of an induced representation Ind^G_H W, i.e. the representation Res^G_K Ind^G_H W. In general, K and H can be unrelated, but in many applications we have K = H, in which case we can characterize when Ind^G_H W is irreducible.

It is quite helpful to first look at a special case, where W = 1_H is the trivial representation. Then Ind^G_H 1_H is the permutation representation of G on G/H (the coset action on the left cosets of H in G). Recall that by the orbit-stabilizer theorem, if G is transitive on the set X and H = G_α for some α ∈ X, then the action of G on X is isomorphic to the action on G/H: the correspondence gα ↔ gH is a well-defined bijection which commutes with the G-action (i.e. x(gα) = (xg)α ↔ x(gH) = (xg)H).

We now consider the action of G on G/H and let K ≤ G. Then K also acts on G/H, and G/H splits into K-orbits. The K-orbit of gH consists precisely of the cosets kgH for k ∈ K. So it corresponds to the double coset

  KgH = {kgh : k ∈ K, h ∈ H}.

The set of double cosets K\G/H partitions G, and the number of double cosets is |K\G/H| = ⟨π_{G/K}, π_{G/H}⟩. We don't need this fact, but it is true. If it happens that K = H and H is normal, then we just have K\G/H = G/H.

What about stabilizers? Clearly, G_{gH} = gHg⁻¹. Thus, restricting to the action of K, we have K_{gH} = gHg⁻¹ ∩ K. We call H_g = K_{gH}. So by our correspondence above, the action of K on the orbit containing gH is isomorphic to the action of K on K/(gHg⁻¹ ∩ K) = K/H_g.

From this, and using the fact that Ind^G_H 1_H = C(G/H), we get the special case of Mackey's theorem:

Proposition. Let G be a finite group and H, K ≤ G. Let g₁, …, g_k be representatives of the double cosets K\G/H. Then

  Res^G_K Ind^G_H 1_H ≅ ⊕_{i=1}^k Ind^K_{gᵢHgᵢ⁻¹ ∩ K} 1.
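Double cosets are easy to compute directly. A sketch (my own illustration, not from the notes), with G = S₃, K = ⟨(1 2)⟩ and H = ⟨(1 3)⟩, checking that the double cosets partition G and that the size formula |KgH| = |K||H| / |g⁻¹Kg ∩ H| holds:

```python
from itertools import permutations

def compose(p, q):
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

G = list(permutations(range(3)))
e = (0, 1, 2)
K = [e, (1, 0, 2)]        # <(1 2)>
H = [e, (2, 1, 0)]        # <(1 3)>

def double_coset(g):
    return frozenset(compose(compose(k, g), h) for k in K for h in H)

double_cosets = {double_coset(g) for g in G}

# Distinct double cosets are disjoint, so their sizes sum to |G|.
partition_ok = sum(len(D) for D in double_cosets) == len(G)

def size_formula(g):
    # |KgH| = |K||H| / |g^{-1} K g  intersect  H|
    gKg = {compose(compose(inverse(g), k), g) for k in K}
    return len(K) * len(H) // len(gKg & set(H))

sizes_ok = all(len(double_coset(g)) == size_formula(g) for g in G)
```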
The general form of Mackey's theorem is the following:

Theorem (Mackey's restriction formula). In general, for K, H ≤ G, we let S = {1, g₁, …, g_r} be a set of double coset representatives, so that

  G = ⋃ KgᵢH.

We write H_g = gHg⁻¹ ∩ K ≤ K. We let (ρ, W) be a representation of H. For each g ∈ G, we define (ρ_g, W_g) to be the representation of H_g with the same underlying vector space W, but where H_g now acts by

  ρ_g(x) = ρ(g⁻¹xg),

where g⁻¹xg ∈ H by construction; this is clearly well-defined. Since H_g ≤ K, we obtain an induced representation Ind^K_{H_g} W_g. Let G be finite, H, K ≤ G, and W an H-space. Then

  Res^G_K Ind^G_H W = ⊕_{g∈S} Ind^K_{H_g} W_g.
We will defer the proof of this for a minute or two. We will first derive some corollaries, starting with the character version of the theorem.

Corollary. Let ψ be a character of a representation of H. Then
$$\operatorname{Res}^G_K \operatorname{Ind}^G_H \psi = \sum_{g \in S} \operatorname{Ind}^K_{H_g} \psi_g,$$
where ψ_g is the class function (and a character) on H_g given by ψ_g(x) = ψ(g^{-1}xg). These characters ψ_g are sometimes known as the conjugate characters. This follows immediately from Mackey's restriction formula.

Corollary (Mackey's irreducibility criterion). Let H ≤ G and W be an H-space. Then V = Ind^G_H W is irreducible if and only if

(i) W is irreducible; and

(ii) for each g ∈ S \ H, the two H_g-spaces W_g and Res^H_{H_g} W have no irreducible constituents in common, where H_g = gHg^{-1} ∩ H.

Note that the set S of representatives was chosen arbitrarily. So we may instead require (ii) to hold for all g ∈ G \ H. While there are then more things to check, it is sometimes convenient not to have to pick out an S explicitly. Note also that the only g ∈ S ∩ H is the identity, since for any h ∈ H, KhH = KH = K1H.

Proof. We use characters, and let W afford the character ψ. We take K = H in Mackey's restriction formula, so that H_g = gHg^{-1} ∩ H. Using Frobenius reciprocity, we can compute the inner product as
$$\begin{aligned}
\langle \operatorname{Ind}^G_H \psi, \operatorname{Ind}^G_H \psi \rangle_G
&= \langle \psi, \operatorname{Res}^G_H \operatorname{Ind}^G_H \psi \rangle_H \\
&= \sum_{g \in S} \langle \psi, \operatorname{Ind}^H_{H_g} \psi_g \rangle_H \\
&= \sum_{g \in S} \langle \operatorname{Res}^H_{H_g} \psi, \psi_g \rangle_{H_g} \\
&= \langle \psi, \psi \rangle + \sum_{g \in S \setminus H} \langle \operatorname{Res}^H_{H_g} \psi, \psi_g \rangle_{H_g}.
\end{aligned}$$
We can write this because if g = 1, then H_g = H and ψ_g = ψ. This is a sum of non-negative integers, since the inner products of characters always are. So Ind^G_H ψ is irreducible if and only if ⟨ψ, ψ⟩ = 1 and all the other terms in the sum are 0. In other words, W is an irreducible representation of H, and for all g ∉ H, W and W_g are disjoint representations of H_g.
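The criterion can be checked numerically in a small case. The following sketch (our own code, not from the notes) takes G = S3 and H = A3, a non-trivial character ψ of A3, verifies that ψ differs from its conjugate ψ_g for a transposition g, and confirms ⟨Ind ψ, Ind ψ⟩ = 1:

```python
from itertools import permutations
from cmath import exp, pi

def compose(p, q):               # (p∘q)(i) = p[q[i]]: apply q first, then p
    return tuple(p[q[i]] for i in range(3))

def inverse(p):
    inv = [0, 0, 0]
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

G = list(permutations(range(3)))
w = exp(2j * pi / 3)
# psi: a non-trivial character of A3 = {e, (0 1 2), (0 2 1)}.
psi = {(0, 1, 2): 1, (1, 2, 0): w, (2, 0, 1): w ** 2}
H = list(psi)

# psi is distinct from its conjugate psi_g for the transposition g = (0 1):
g = (1, 0, 2)
conj = lambda x: compose(inverse(g), compose(x, g))
assert any(abs(psi[conj(h)] - psi[h]) > 1e-9 for h in H)

# Induced character: Ind psi(x) = (1/|H|) sum over s in G with s^-1 x s in H.
def ind_psi(x):
    return sum(psi[compose(inverse(s), compose(x, s))]
               for s in G if compose(inverse(s), compose(x, s)) in psi) / len(H)

# Norm 1 means Ind psi is irreducible, as the criterion predicts.
norm = sum(abs(ind_psi(x)) ** 2 for x in G) / len(G)
print(round(norm, 6))   # 1.0
```

The induced character takes values (2, 0, −1) on the three classes of S3: it is the two-dimensional irreducible character.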
Corollary. Let H ⊴ G, and suppose ψ is an irreducible character of H. Then Ind^G_H ψ is irreducible if and only if ψ is distinct from all its conjugates ψ_g for g ∈ G \ H (where ψ_g(h) = ψ(g^{-1}hg) as before).

Proof. We take K = H ⊴ G. So the double cosets are just left cosets. Also, H_g = H for all g. Moreover, each W_g is irreducible since W is irreducible. So, by Mackey's irreducibility criterion, Ind^G_H W is irreducible precisely if W ≇ W_g for all g ∈ G \ H. This is equivalent to ψ ≠ ψ_g.

Note that again we could check the conditions on a set of representatives of (double) cosets. In fact, the isomorphism class of W_g (for g ∈ G) depends only on the coset gH.

We now prove Mackey's theorem.

Theorem (Mackey's restriction formula). In general, for K, H ≤ G, we let S = {1, g_1, ..., g_r} be a set of double coset representatives, so that
$$G = \bigcup_{g_i \in S} K g_i H.$$
We write H_g = gHg^{-1} ∩ K ≤ G. We let (ρ, W) be a representation of H. For each g ∈ G, we define (ρ_g, W_g) to be a representation of H_g, with the same underlying vector space W, but now the action of H_g is
$$\rho_g(x) = \rho(g^{-1} x g),$$
where h = g^{-1}xg ∈ H by construction. This is clearly well-defined. Since H_g ≤ K, we obtain an induced representation Ind^K_{H_g} W_g.

Let G be finite, H, K ≤ G, and W an H-space. Then
$$\operatorname{Res}^G_K \operatorname{Ind}^G_H W = \bigoplus_{g \in S} \operatorname{Ind}^K_{H_g} W_g.$$
This is possibly the hardest and most sophisticated proof in the course.

Proof. Write V = Ind^G_H W. Pick g ∈ G, so that KgH ∈ K\G/H. Given a left transversal T of H in G, we can obtain V explicitly as a direct sum
$$V = \bigoplus_{t \in T} t \otimes W.$$
The idea is to "coarsen" this direct sum decomposition using double coset representatives, by collecting together the t ⊗ W's with t ∈ KgH. We define
$$V(g) = \bigoplus_{t \in KgH \cap T} t \otimes W.$$
Now each V(g) is a K-space: given k ∈ K and t ⊗ w ∈ t ⊗ W, since t ∈ KgH, we have kt ∈ KgH. So there is some t' ∈ T such that ktH = t'H, and then t' ∈ ktH ⊆ KgH. So we can define
$$k \cdot (t \otimes w) = t' \otimes (\rho(t'^{-1} k t) w),$$
where t'^{-1}kt ∈ H. Viewing V as a K-space (forgetting its whole G-structure), we have
$$\operatorname{Res}^G_K V = \bigoplus_{g \in S} V(g).$$
The left hand side is what we want, but the right hand side looks absolutely nothing like Ind^K_{H_g} W_g. So we need to show
$$V(g) = \bigoplus_{t \in KgH \cap T} t \otimes W \cong \operatorname{Ind}^K_{H_g} W_g$$
as K-representations, for each g ∈ S.

Now for g fixed, each t ∈ KgH can be represented by some kgh, and by restricting to elements of the transversal T of H, we are really summing over the cosets kgH. The cosets kgH are in bijection with the cosets k(gHg^{-1}) in the obvious way. So we are actually summing over elements of K/(gHg^{-1} ∩ K) = K/H_g. So we write
$$V(g) = \bigoplus_{k \in K/H_g} (kg) \otimes W.$$
We claim that there is an isomorphism k ⊗ W_g ≅ (kg) ⊗ W. We define k ⊗ W_g → (kg) ⊗ W by k ⊗ w ↦ kg ⊗ w. This is an isomorphism of vector spaces almost by definition, so we only check that it is compatible with the actions. The action of x ∈ K on the left is given by
$$\rho_g(x)(k \otimes w) = k' \otimes (\rho_g(k'^{-1}xk)w) = k' \otimes (\rho(g^{-1}k'^{-1}xkg)w),$$
where k' ∈ K is such that k'^{-1}xk ∈ H_g, i.e. g^{-1}k'^{-1}xkg ∈ H. On the other hand,
$$\rho(x)(kg \otimes w) = k'' \otimes (\rho(k''^{-1}x(kg))w),$$
where k'' is such that k''^{-1}xkg ∈ H. Since there is a unique choice of k'' (after picking a particular transversal), and k'g works, we know this is equal to
$$k'g \otimes (\rho(g^{-1}k'^{-1}xkg)w).$$
So the actions agree, and we have an isomorphism. Then
$$V(g) = \bigoplus_{k \in K/H_g} k \otimes W_g = \operatorname{Ind}^K_{H_g} W_g,$$
as required.

Example. Let G = S_n and H = A_n. Consider σ ∈ S_n. Then its conjugacy class in S_n is determined by its cycle type. If the class of σ splits into two classes in A_n, and χ is an irreducible character of A_n that takes different values on the two classes, then by the irreducibility criterion, Ind^{S_n}_{A_n} χ is irreducible.
13 Integrality in the group algebra
The next big result we are going to have is that the degree of an irreducible character always divides the group order. This is not at all an obvious fact. To prove this, we make the cunning observation that character degrees and friends are not just regular numbers, but algebraic integers.

Definition (Algebraic integers). A complex number a ∈ C is an algebraic integer if a is a root of a monic polynomial with integer coefficients. Equivalently, a is such that the subring of C given by
$$\mathbb{Z}[a] = \{f(a) : f \in \mathbb{Z}[X]\}$$
is finitely generated. Equivalently, a is an eigenvalue of a matrix all of whose entries lie in Z.

We will quote some results about algebraic integers which we will not prove.

Proposition.

(i) The algebraic integers form a subring of C.

(ii) If a ∈ C is both an algebraic integer and rational, then a is in fact an integer.

(iii) Any subring of C which is a finitely generated Z-module consists of algebraic integers.

The strategy is to show that given a group G and an irreducible character χ, the value |G|/χ(1) is an algebraic integer. Since it is also a rational number, it must be an integer. So |G| is a multiple of χ(1).

We make the first easy step.

Proposition. If χ is a character of G and g ∈ G, then χ(g) is an algebraic integer.

Proof. We know χ(g) is a sum of nth roots of unity (where n is the order of g). Each root of unity is an algebraic integer, since it is by definition a root of x^n − 1. Since algebraic integers are closed under addition, the result follows.

We now have a look at the center of the group algebra CG. Recall that the group algebra CG of a finite group G is a complex vector space generated by the elements of G. More precisely,
$$\mathbb{C}G = \left\{ \sum_{g \in G} \alpha_g g : \alpha_g \in \mathbb{C} \right\}.$$
Borrowing the multiplication of G, this forms a ring, hence an algebra. We now list the conjugacy classes of G as {1} = C_1, ..., C_k.

Definition (Class sum). The class sum of a conjugacy class C_j of a group G is
$$C_j = \sum_{g \in C_j} g \in \mathbb{C}G.$$
The claim is now that C_j lives in the center of CG (note that the center of the group algebra is different from the group algebra of the center). Moreover, the class sums form a basis:

Proposition. The class sums C_1, ..., C_k form a basis of Z(CG). Moreover, there exist non-negative integers a_{ijℓ} (with 1 ≤ i, j, ℓ ≤ k) with
$$C_i C_j = \sum_{\ell=1}^{k} a_{ij\ell} C_\ell.$$
The content of this proposition is not that C_iC_j can be written as a sum of the C_ℓ's. That is automatic, because the C_ℓ's form a basis. The content is that the coefficients are actually non-negative integers, not arbitrary complex numbers.

Definition (Class algebra/structure constants). The constants a_{ijℓ} as defined above are the class algebra constants or structure constants.

Proof. It is clear from the definition that gC_jg^{-1} = C_j for all g ∈ G, so C_j commutes with every element of G, and hence with all of CG. So C_j ∈ Z(CG). Also, since the C_j's are produced from disjoint conjugacy classes, they are linearly independent.

Now suppose z ∈ Z(CG). So we can write
$$z = \sum_{g \in G} \alpha_g g.$$
By definition, this commutes with all elements of CG. So for all h ∈ G, we must have α_{h^{-1}gh} = α_g. So the function g ↦ α_g is constant on conjugacy classes of G, and we can write α_j = α_g for g ∈ C_j. Then
$$z = \sum_{j=1}^{k} \alpha_j C_j.$$
Finally, the center Z(CG) is an algebra. So
$$C_i C_j = \sum_{\ell=1}^{k} a_{ij\ell} C_\ell$$
for some complex numbers a_{ijℓ}, since the C_ℓ span Z(CG). The claim is that a_{ijℓ} ∈ Z_{≥0} for all i, j, ℓ. To see this, we fix g_ℓ ∈ C_ℓ. Then by the definition of multiplication in CG, we know
$$a_{ij\ell} = |\{(x, y) \in C_i \times C_j : xy = g_\ell\}|,$$
which is clearly a non-negative integer.

Let ρ : G → GL(V) be an irreducible representation of G over C affording the character χ. Extending linearly, we get a map ρ : CG → End V, an algebra homomorphism. In general, we have the following definition:

Definition (Representation of algebra). Let A be an algebra. A representation of A is a homomorphism of algebras ρ : A → End V.
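The counting description of the structure constants can be checked directly in a small case. The following sketch (ours, not from the notes) computes all a_{ijℓ} for G = S3 by brute-force counting and confirms they are non-negative integers:

```python
from itertools import permutations

def compose(p, q):               # apply q first, then p
    return tuple(p[q[i]] for i in range(3))

def sign(p):
    return (-1) ** sum(p[i] > p[j] for i in range(3) for j in range(i + 1, 3))

G = list(permutations(range(3)))
# Conjugacy classes of S3: identity, transpositions, 3-cycles.
classes = [
    [(0, 1, 2)],
    [p for p in G if sign(p) == -1],                     # 3 transpositions
    [p for p in G if sign(p) == 1 and p != (0, 1, 2)],   # 2 three-cycles
]

def a(i, j, l):
    g_l = classes[l][0]          # any representative gives the same count
    return sum(1 for x in classes[i] for y in classes[j]
               if compose(x, y) == g_l)

# e.g. (class sum of transpositions)^2 = 3*C_1 + 0*C_2 + 3*C_3:
print([a(1, 1, l) for l in range(3)])   # [3, 0, 3]
```

The counts are integers by construction; the point of the proposition is that these same integers are the coefficients of the C_ℓ in the product of class sums.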
Let z ∈ Z(CG). Then ρ(z) commutes with all ρ(g) for g ∈ G. Hence, by Schur's lemma, we can write ρ(z) = λ_z I for some λ_z ∈ C. We then obtain an algebra homomorphism
$$\omega_\rho = \omega_\chi = \omega : Z(\mathbb{C}G) \to \mathbb{C}, \quad z \mapsto \lambda_z.$$
By definition, we have ρ(C_i) = ω(C_i)I. Taking traces of both sides, we know
$$\sum_{g \in C_i} \chi(g) = \chi(1)\,\omega(C_i).$$
We also know the character is a class function. So we in fact get χ(1)ω(C_i) = |C_i|χ(g_i), where g_i ∈ C_i is a representative of C_i. So we get
$$\omega(C_i) = \frac{|C_i|\,\chi(g_i)}{\chi(1)}.$$
Why should we care about this? The thing we are looking at is in fact an algebraic integer.

Lemma. The values of
$$\omega_\chi(C_i) = \frac{|C_i|\,\chi(g_i)}{\chi(1)}$$
are algebraic integers. Note that all the time, we are requiring χ to be an irreducible character.

Proof. Using the fact that a_{ijℓ} ∈ Z_{≥0}, and the fact that ω_χ is an algebra homomorphism, we get
$$\omega_\chi(C_i)\,\omega_\chi(C_j) = \sum_{\ell=1}^{k} a_{ij\ell}\,\omega_\chi(C_\ell).$$
Thus the Z-span of {ω_χ(C_j) : 1 ≤ j ≤ k} is a subring of C, and it is finitely generated as a Z-module (by definition). So we know it consists of algebraic integers.

That's magic. In fact, we can compute the structure constants a_{ijℓ} from the character table: for all i, j, ℓ, we have
$$a_{ij\ell} = \frac{|G|}{|C_G(g_i)|\,|C_G(g_j)|} \sum_{s=1}^{k} \frac{\chi_s(g_i)\,\chi_s(g_j)\,\chi_s(g_\ell^{-1})}{\chi_s(1)},$$
where the sum runs over the irreducible characters of G. We will neither prove it nor use it. But if you want to try, the key idea is to use column orthogonality.

Finally, we get to the main result:
Theorem. The degree of any irreducible character of G divides |G|, i.e. χ_j(1) | |G| for each irreducible χ_j.

It is highly non-obvious why this should be true.

Proof. Let χ be an irreducible character. By orthogonality, we have
$$\frac{|G|}{\chi(1)} = \frac{1}{\chi(1)}\sum_{g \in G}\chi(g)\chi(g^{-1}) = \frac{1}{\chi(1)}\sum_{i=1}^{k}|C_i|\,\chi(g_i)\chi(g_i^{-1}) = \sum_{i=1}^{k}\frac{|C_i|\,\chi(g_i)}{\chi(1)}\,\chi(g_i^{-1}).$$
Now we notice
$$\frac{|C_i|\,\chi(g_i)}{\chi(1)}$$
is an algebraic integer, by the previous lemma. Also, χ(g_i^{-1}) is an algebraic integer. So the whole mess is an algebraic integer, since algebraic integers are closed under addition and multiplication. But we also know |G|/χ(1) is rational. So it must be an integer!

Example. Let G be a p-group. Then χ(1) is a power of p for each irreducible χ. If in particular |G| = p², then χ(1) = 1, p or p². But the sum of the squares of the degrees is |G| = p², and the trivial character already contributes 1. So we cannot have χ(1) = p or p². So χ(1) = 1 for all irreducible characters, and G is abelian.

Example. No simple group has an irreducible character of degree 2. Proof is left as an exercise for the reader.
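A quick numerical spot check of the theorem (ours), with degrees quoted from the well-known character tables of S4 and of the quaternion group Q8:

```python
# Degrees of the irreducible characters, from the standard tables.
for name, order, degrees in [("S4", 24, [1, 1, 2, 3, 3]),
                             ("Q8", 8, [1, 1, 1, 1, 2])]:
    assert sum(d * d for d in degrees) == order      # sum of squares = |G|
    assert all(order % d == 0 for d in degrees)      # each chi(1) divides |G|
    print(name, "ok")
```

Note that Q8 is a 2-group, so as the first example above predicts, all its degrees are powers of 2.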
14 Burnside's theorem
Finally, we get to Burnside's theorem.

Theorem (Burnside's p^a q^b theorem). Let p, q be primes, and let |G| = p^a q^b, where a, b ∈ Z_{≥0}, with a + b ≥ 2. Then G is not simple.

Note that if a + b = 1 or 0, then the group is trivially simple. In fact even more is true, namely that G is soluble, but this follows easily from the theorem by induction. This result is best possible in the sense that |A_5| = 60 = 2² · 3 · 5, and A_5 is simple. So we cannot allow three distinct prime factors (in fact, there are exactly 8 simple groups whose orders have exactly three distinct prime factors). Also, if a = 0 or b = 0, then G is a p-group, and p-groups have non-trivial center, so these cases trivially work.

Later, in 1963, Feit and Thompson proved that every group of odd order is soluble. The proof was 255 pages long. We will not prove this. In 1972, H. Bender found the first proof of Burnside's theorem without the use of representation theory, but the proof is much more complicated.

This theorem follows from two lemmas; one of them involves some Galois theory, and is hence non-examinable.

Lemma. Suppose
$$\alpha = \frac{1}{m}\sum_{j=1}^{m}\lambda_j$$
is an algebraic integer, where λ_j^n = 1 for all j and some n. Then either α = 0 or |α| = 1.

Proof (non-examinable). Observe α ∈ F = Q(ε), where ε = e^{2πi/n} (since λ_j ∈ F for all j). We let G = Gal(F/Q). Then
$$\{\beta \in F : \sigma(\beta) = \beta \text{ for all } \sigma \in G\} = \mathbb{Q}.$$
We define the "norm"
$$N(\alpha) = \prod_{\sigma \in G} \sigma(\alpha).$$
Then N(α) is fixed by every element σ ∈ G. So N(α) is rational. Now N(α) is an algebraic integer, since the Galois conjugates σ(α) of algebraic integers are algebraic integers. So in fact N(α) is an integer. But for each σ ∈ G, we know
$$|\sigma(\alpha)| = \left|\frac{1}{m}\sum_j \sigma(\lambda_j)\right| \leq 1,$$
since each σ(λ_j) is again a root of unity. So if α ≠ 0, then N(α) = ±1; since every factor has modulus at most 1, each must have modulus exactly 1, and in particular |α| = 1.

Lemma. Suppose χ is an irreducible character of G, and C is a conjugacy class in G such that χ(1) and |C| are coprime. Then for g ∈ C, we have |χ(g)| = χ(1) or χ(g) = 0.

Proof. Of course, we want to consider the quantity
$$\alpha = \frac{\chi(g)}{\chi(1)}.$$
Since χ(g) is a sum of deg χ = χ(1) many roots of unity, it suffices, by the previous lemma (with m = χ(1)), to show that α is an algebraic integer. By Bézout's theorem, there exist a, b ∈ Z such that
$$a\chi(1) + b|C| = 1.$$
So we can write
$$\alpha = \frac{\chi(g)}{\chi(1)} = a\chi(g) + b\,\frac{\chi(g)}{\chi(1)}\,|C|.$$
Since χ(g) and |C|χ(g)/χ(1) = ω_χ(C) are both algebraic integers, we know α is. Then α = 0 or |α| = 1, i.e. χ(g) = 0 or |χ(g)| = χ(1).
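A quick check of the lemma against the well-known character table of S3 (our sketch, not from the notes), with classes listed as (identity, transpositions, 3-cycles):

```python
from math import gcd

sizes = [1, 3, 2]                                   # conjugacy class sizes of S3
table = [[1, 1, 1], [1, -1, 1], [2, 0, -1]]         # trivial, sign, 2-dimensional

for chi in table:
    deg = chi[0]
    for size, value in zip(sizes, chi):
        if gcd(deg, size) == 1:
            # coprime case: |chi(g)| must be chi(1) or 0
            assert abs(value) in (deg, 0)
```

For the 2-dimensional character, the only coprime non-trivial class is the transpositions (size 3), where the value is indeed 0.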
Proposition. If in a finite group the number of elements in a conjugacy class C ≠ {1} is of (non-trivial) prime power order, then G is not non-abelian simple.

Proof. Suppose G is a non-abelian simple group, and let 1 ≠ g ∈ G lie in the conjugacy class C of size p^r. If χ ≠ 1_G is a non-trivial irreducible character of G, then either χ(1) and |C| = p^r are not coprime, in which case p | χ(1); or they are coprime, in which case |χ(g)| = χ(1) or χ(g) = 0.

However, it cannot be that |χ(g)| = χ(1). If so, then we must have ρ(g) = λI for some λ. So it commutes with everything, i.e. for all h, we have
$$\rho(gh) = \rho(g)\rho(h) = \rho(h)\rho(g) = \rho(hg).$$
Moreover, since G is simple, ρ must be faithful. So we must have gh = hg for all h. So Z(G) is non-trivial. This is a contradiction. So either p | χ(1) or χ(g) = 0.

By column orthogonality applied to the columns of g and 1, we get
$$0 = 1 + \sum_{\substack{\chi \neq 1 \text{ irreducible} \\ p \mid \chi(1)}} \chi(1)\chi(g),$$
where we have deleted the zero terms. So we get
$$-\frac{1}{p} = \sum_{\chi \neq 1} \frac{\chi(1)}{p}\,\chi(g).$$
But the right-hand side is an algebraic integer, while −1/p is rational and not an integer. This is a contradiction.

We can now prove Burnside's theorem.

Theorem (Burnside's p^a q^b theorem). Let p, q be primes, and let |G| = p^a q^b, where a, b ∈ Z_{≥0}, with a + b ≥ 2. Then G is not simple.

Proof. If a = 0 or b = 0, then G is a p-group (or q-group) of order at least p²; such a group has a non-trivial center, and hence is not simple (if Z(G) = G, then G is abelian of composite order, which is not simple either). Suppose a, b > 0, and suppose for contradiction that G is simple. Since |G| is not prime, G is non-abelian, so Z(G) = 1.

We let Q ∈ Syl_q(G). Since Q is a non-trivial q-group, Z(Q) is non-trivial. Hence there is some 1 ≠ g ∈ Z(Q). By definition of the center, Q ≤ C_G(g). Also, C_G(g) is not the whole of G, since Z(G) = 1. So the conjugacy class of g has size
$$|C| = [G : C_G(g)] = p^r$$
for some 0 < r ≤ a (the index is a power of p since Q ≤ C_G(g), and it is not 1 since C_G(g) ≠ G). This contradicts the previous proposition. So done.

This is all we've got to say about finite groups.
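As a closing sanity check on the proposition (our own sketch, not from the notes), we can compute the conjugacy class sizes of the simple group A5 and verify that no non-trivial class has prime power size. The two classes of size 12 are the split S5-class of 5-cycles from the earlier Mackey example:

```python
from itertools import permutations

def compose(p, q):
    return tuple(p[q[i]] for i in range(5))

def inverse(p):
    inv = [0] * 5
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

def sign(p):
    return (-1) ** sum(p[i] > p[j] for i in range(5) for j in range(i + 1, 5))

A5 = [p for p in permutations(range(5)) if sign(p) == 1]   # 60 elements

def conj_class(x):
    return {compose(g, compose(x, inverse(g))) for g in A5}

seen, sizes = set(), []
for x in A5:
    if x not in seen:
        c = conj_class(x)
        seen |= c
        sizes.append(len(c))

def is_prime_power(n):
    for p in range(2, n + 1):
        if n % p == 0:
            while n % p == 0:
                n //= p
            return n == 1
    return False

print(sorted(sizes))                                  # [1, 12, 12, 15, 20]
assert not any(is_prime_power(s) for s in sizes if s > 1)
```

So the proposition places no obstruction on A5 being simple, consistent with |A5| = 60 having three prime factors.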
15 Representations of compact groups
We now consider the case of infinite groups. It turns out infinite groups don't behave well if we don't impose additional structure. When talking about representations of infinite groups, things work well only if we impose a topological structure on the group, and require the representations to be continuous. Then we get nice results if we concentrate only on the groups that are compact. This is a generalization of our previous results, since we can give any finite group the discrete topology; then all representations are continuous, and the group is compact due to finiteness.

We start with the definition of a topological group.

Definition (Topological group). A topological group is a group G which is also a topological space, and for which multiplication G × G → G ((h, g) ↦ hg) and inversion G → G (g ↦ g^{-1}) are continuous maps.

Example. Any finite group G with the discrete topology is a topological group.

Example. The matrix groups GL_n(R) and GL_n(C) are topological groups (inheriting the topology of R^{n²} and C^{n²}).

However, topological groups are often too huge. We would like to concentrate on "small" topological groups.

Definition (Compact group). A topological group is a compact group if it is compact as a topological space.

There are some fairly nice examples of these.

Example. All finite groups with the discrete topology are compact.

Example. The group S¹ = {z ∈ C : |z| = 1} under multiplication is a compact group. This is known as the circle group, for mysterious reasons. Thus the torus S¹ × S¹ × ⋯ × S¹ is also a compact group.

Example. The orthogonal group O(n) ≤ GL_n(R) is compact, and so is SO(n) = {A ∈ O(n) : det A = 1}. These are compact since they are closed and bounded as subspaces of R^{n²}, as the entries can be at most 1 in magnitude. Note that SO(2) ≅ S¹, mapping a rotation by θ to e^{iθ}.

Example. The unitary group U(n) = {A ∈ GL_n(C) : AA^† = I} is compact, and so is SU(n) = {A ∈ U(n) : det A = 1}.
Note that we also have U(1) ≅ SO(2) ≅ S¹. These are not only isomorphisms of groups; the isomorphisms are also homeomorphisms of topological spaces.

The one we are going to spend most of our time on is
$$SU(2) = \{(z_1, z_2) \in \mathbb{C}^2 : |z_1|^2 + |z_2|^2 = 1\} \subseteq \mathbb{R}^4 = \mathbb{C}^2.$$
We can see this is homeomorphic to S³. It turns out that the only spheres admitting topological group structures are S⁰, S¹ and S³, but we will not prove this, since this is a course on algebra, not topology.

We now want to develop a representation theory of these compact groups, similar to what we've done for finite groups.
Definition (Representation of topological group). A representation of a topological group on a finite-dimensional vector space V is a continuous group homomorphism G → GL(V).

It is important that the group homomorphism is continuous. Note that for a general topological space X, a map ρ : X → GL(V) ≅ GL_n(C) is continuous if and only if each component map x ↦ ρ(x)_{ij} is continuous.

The key that makes this work is the compactness of the group and the continuity of the representations. If we throw them away, then things go very bad. To see this, we consider the simple example
$$S^1 = U(1) = \{g \in \mathbb{C}^\times : |g| = 1\} \cong \mathbb{R}/\mathbb{Z},$$
where the last isomorphism is an isomorphism of abelian groups, via the map x ↦ e^{2πix} : R/Z → S¹.

What happens if we just view S¹ as an abstract group, and not a topological group? We can view R as a vector space over Q, and this has a basis (a Hamel basis), say A ⊆ R; we may assume 1 ∈ A. As abelian groups, we then have
$$\mathbb{R} \cong \mathbb{Q} \oplus \bigoplus_{\alpha \in A \setminus \{1\}} \mathbb{Q}\alpha.$$
This induces an isomorphism
$$\mathbb{R}/\mathbb{Z} \cong \mathbb{Q}/\mathbb{Z} \oplus \bigoplus_{\alpha \in A \setminus \{1\}} \mathbb{Q}\alpha.$$
Thus, as an abstract group, S¹ has uncountably many irreducible representations: for each λ ∈ A \ {1}, there is a one-dimensional representation given by
$$\rho_\lambda(e^{2\pi i \mu}) = \begin{cases} 1 & \mu \notin \mathbb{Q}\lambda \\ e^{2\pi i \mu} & \mu \in \mathbb{Q}\lambda \end{cases}$$
We see that ρ_λ = ρ_{λ'} if and only if Qλ = Qλ'. Hence there are indeed uncountably many of these, and in fact there are even more irreducible representations we haven't listed. This is bad.

The idea is then to topologize S¹ as a subset of C, and study how it acts naturally on complex spaces in a continuous way. Then we get at most countably many representations. In fact, we can characterize all irreducible representations in a rather nice way:

Theorem. Every one-dimensional (continuous) representation of S¹ is of the form ρ : z ↦ z^n for some n ∈ Z.

This is a nice countable family of representations. To prove this, we need two lemmas from real analysis.
Lemma. If ψ : (R, +) → (R, +) is a continuous group homomorphism, then there exists c ∈ R such that ψ(x) = cx for all x ∈ R.

Proof. Given ψ : (R, +) → (R, +) continuous, we let c = ψ(1). We now claim that ψ(x) = cx. Since ψ is a homomorphism, for every n ∈ Z_{≥0} and x ∈ R, we know
$$\psi(nx) = \psi(x + \cdots + x) = \psi(x) + \cdots + \psi(x) = n\psi(x).$$
In particular, when x = 1, we know ψ(n) = cn. Also, ψ(−n) = −ψ(n) = −cn. Thus ψ(n) = cn for all n ∈ Z. We now put x = m/n ∈ Q. Then
$$n\psi(x) = \psi(nx) = \psi(m) = cm.$$
So we must have
$$\psi\!\left(\frac{m}{n}\right) = c\,\frac{m}{n}.$$
So we get ψ(q) = cq for all q ∈ Q. But Q is dense in R, and ψ is continuous. So we must have ψ(x) = cx for all x ∈ R.
Lemma. Continuous homomorphisms φ : (R, +) → S¹ are of the form φ(x) = e^{icx} for some c ∈ R.

Proof. Let ε : (R, +) → S¹ be defined by x ↦ e^{ix}. This homomorphism wraps the real line around S¹ with period 2π. We now claim that given any continuous function φ : R → S¹ such that φ(0) = 1, there exists a unique continuous lifting ψ : R → R such that
$$\varepsilon \circ \psi = \varphi, \qquad \psi(0) = 0.$$
The lifting is constructed by starting with ψ(0) = 0, and then extending a small interval at a time to get a continuous map R → R. We will not go into the details. Alternatively, this follows from the lifting criterion from IID Algebraic Topology.
We now claim that if in addition φ is a homomorphism, then so is its continuous lifting ψ. If this is true, then we can conclude that ψ(x) = cx for some c ∈ R, and hence φ(x) = e^{icx}.

To show that ψ is indeed a homomorphism, we have to show that ψ(x + y) = ψ(x) + ψ(y). By assumption, we know φ(x + y) = φ(x)φ(y). By the definition of ψ, this means
$$\varepsilon(\psi(x + y) - \psi(x) - \psi(y)) = 1.$$
We now look at our definition of ε to get
$$\psi(x + y) - \psi(x) - \psi(y) = 2k\pi$$
for some integer k ∈ Z, depending continuously on x and y. But k can only be an integer. So it must be constant. Now we pick our favourite x and y, namely x = y = 0, and find k = 0. So we get ψ(x + y) = ψ(x) + ψ(y). So ψ is a group homomorphism.

With these two lemmas, we can prove our characterization of the (one-dimensional) representations of S¹.

Theorem. Every one-dimensional (continuous) representation of S¹ is of the form ρ : z ↦ z^n for some n ∈ Z.

Proof. Let ρ : S¹ → C^× be a continuous representation. We first claim that ρ actually maps S¹ to S¹. Since S¹ is compact, the image ρ(S¹) is closed and bounded. Also, ρ(z^n) = ρ(z)^n for all n ∈ Z. Thus for each z ∈ S¹, if |ρ(z)| > 1, then the set of values ρ(z^n) is unbounded; similarly, if |ρ(z)| < 1, then the values ρ(z^{−n}) are unbounded. So we must have ρ(S¹) ⊆ S¹.

So we get a continuous homomorphism
$$\mathbb{R} \to S^1, \qquad x \mapsto \rho(e^{ix}).$$
So we know there is some c ∈ R such that ρ(e^{ix}) = e^{icx}. Now in particular,
$$1 = \rho(e^{2\pi i}) = e^{2\pi i c}.$$
This forces c ∈ Z. Putting n = c, we get ρ(z) = z^n.
That has taken nearly an hour, and we've used two not-so-trivial facts from analysis. So we actually had to work quite hard to get this. But the result is good: we have a complete list of representations, and we don't have to fiddle with things like characters.

Our next objective is to repeat this for SU(2), and this time we cannot just get away with doing some analysis. We have to do representation theory properly. So we study the general theory of representations on complex spaces.

In studying finite groups, we often took the "average" over the group via the operation $\frac{1}{|G|}\sum_{g \in G}(\cdot)$. Of course, we can't do this now, since the group is infinite. As we all know, the continuous analogue of summing is integration. Informally, we want to be able to write something like $\int_G \mathrm{d}g$. This is actually a measure, called the "Haar measure", and it always exists as long as the group is compact and Hausdorff. However, we will not need or use this general result, since we can construct it by hand for S¹ and SU(2).

Definition (Haar measure). Let G be a topological group, and let
$$\mathcal{C}(G) = \{f : G \to \mathbb{C} : f \text{ is continuous, } f(gxg^{-1}) = f(x) \text{ for all } g, x \in G\}.$$
A non-trivial linear functional ∫_G : C(G) → C, written as
$$\int_G f = \int_G f(g)\,\mathrm{d}g,$$
is called a Haar measure if

(i) it satisfies the normalization condition
$$\int_G 1\,\mathrm{d}g = 1,$$
so that the "total volume" is 1;

(ii) it is left and right translation invariant, i.e.
$$\int_G f(xg)\,\mathrm{d}g = \int_G f(g)\,\mathrm{d}g = \int_G f(gx)\,\mathrm{d}g$$
for all x ∈ G.

Example. For a finite group G, we can define
$$\int_G f(g)\,\mathrm{d}g = \frac{1}{|G|}\sum_{g \in G} f(g).$$
The division by the group order is what we need to make the normalization hold.

Example. Let G = S¹. Then we can have
$$\int_G f(g)\,\mathrm{d}g = \frac{1}{2\pi}\int_0^{2\pi} f(e^{i\theta})\,\mathrm{d}\theta.$$
Again, division by 2π is the normalization required.
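Both Haar measure axioms for S¹ can be sanity-checked numerically with a Riemann sum (our sketch; for the trigonometric polynomial below, a uniform Riemann sum is essentially exact):

```python
import cmath

N = 4096
def integrate(f):
    # uniform Riemann sum approximating (1/2pi) * integral over [0, 2pi)
    return sum(f(cmath.exp(2j * cmath.pi * k / N)) for k in range(N)) / N

f = lambda z: (z + z.conjugate()) ** 2 + 3       # i.e. 4 cos^2(theta) + 3
x = cmath.exp(1.23j)                             # an arbitrary rotation

total = integrate(lambda z: 1)                   # normalization
plain = integrate(f)                             # equals 2 + 3 = 5
left = integrate(lambda z: f(x * z))             # left translation by x

assert abs(total - 1) < 1e-9
assert abs(left - plain) < 1e-6
```

Right translation invariance is the same check, since S¹ is abelian.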
We have explicitly constructed the measures here, but there is a theorem that guarantees the existence of such a measure for sufficiently nice topological groups.

Theorem. Let G be a compact Hausdorff topological group. Then there exists a unique Haar measure on G.

From now on, the word "compact" will mean "compact and Hausdorff", so that the conditions of the above theorem hold. We will not prove this theorem, and just assume it to be true. If you don't like this, whenever you see "compact group", replace it with "compact group with Haar measure". We will explicitly construct it for SU(2) later, which is all we really care about.

Once we know the existence of such a thing, most of the things we know for finite groups hold for compact groups, as long as we replace the sums with the appropriate integrals.

Corollary (Weyl's unitary trick). Let G be a compact group. Then every representation (ρ, V) has a G-invariant Hermitian inner product.

Proof. As in the finite case, take any inner product ( · , · ) on V, then define a new inner product by
$$\langle v, w \rangle = \int_G (\rho(g)v, \rho(g)w)\,\mathrm{d}g.$$
Then this is a G-invariant inner product.

Theorem (Maschke's theorem). Let G be a compact group. Then every representation of G is completely reducible.

Proof. Given a representation (ρ, V), choose a G-invariant inner product. If V is not irreducible, let W ≤ V be a proper non-zero subrepresentation. Then W^⊥ is also G-invariant, and V = W ⊕ W^⊥. The result follows by induction.

We can use the Haar measure to endow C(G) with an inner product given by
$$\langle f, f' \rangle = \int_G \overline{f(g)}\,f'(g)\,\mathrm{d}g.$$
Definition (Character). If ρ : G → GL(V) is a representation, then the character χ_ρ = tr ρ is a continuous class function, since each component ρ(g)_{ij} is continuous.

So characters make sense, and we can ask if all the things about finite groups hold here. For example, we can ask about orthogonality. Schur's lemma also holds, with the same proof, since we didn't really need finiteness there. Using that, we can prove the following:

Theorem (Orthogonality). Let G be a compact group, and V and W be irreducible representations of G. Then
$$\langle \chi_V, \chi_W \rangle = \begin{cases} 1 & V \cong W \\ 0 & V \not\cong W \end{cases}$$
Now do irreducible characters form a basis for C(G)?

Example. We take the only (infinite) compact group we know about: G = S¹. We have found that the one-dimensional representations are ρ_n : z ↦ z^n for n ∈ Z. As S¹ is abelian, these are all the (continuous) irreducible representations: given any representation ρ, we can find a simultaneous eigenvector for all the ρ(g). The "character table of S¹" has rows χ_n indexed by Z, with χ_n(e^{iθ}) = e^{inθ}.

Now given any representation V of S¹, we can break V into a direct sum of 1-dimensional subrepresentations. So the character χ_V of V is of the form
$$\chi_V(z) = \sum_{n \in \mathbb{Z}} a_n z^n,$$
where a_n ∈ Z_{≥0}, and only finitely many a_n are non-zero (since we are assuming finite-dimensionality). In fact, a_n is the number of copies of ρ_n in the decomposition of V. We can find the value of a_n by computing
$$a_n = \langle \chi_n, \chi_V \rangle = \frac{1}{2\pi}\int_0^{2\pi} e^{-in\theta}\,\chi_V(e^{i\theta})\,\mathrm{d}\theta.$$
Hence we know
$$\chi_V(e^{i\theta}) = \sum_{n \in \mathbb{Z}}\left(\frac{1}{2\pi}\int_0^{2\pi}\chi_V(e^{i\theta'})\,e^{-in\theta'}\,\mathrm{d}\theta'\right)e^{in\theta}.$$
This is really just the Fourier decomposition of χ_V. This gives a decomposition of χ_V into irreducible characters, and the "Fourier mode" a_n is the number of times each irreducible character occurs in this decomposition.

It is possible to show that the χ_n form a complete orthonormal set in the Hilbert space L²(S¹) of square-integrable functions on S¹. This is the Peter-Weyl theorem, and is highly non-trivial.
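The Fourier recovery of multiplicities can be seen in a small numerical sketch (ours, not from the notes): take V = ρ_{−1} ⊕ 2ρ_0 ⊕ ρ_2 and recover the a_n from the integral formula, approximated by a Riemann sum:

```python
import cmath

# chi_V for V = rho_{-1} + 2*rho_0 + rho_2
chi = lambda z: z ** -1 + 2 + z ** 2

N = 4096
def coeff(n):
    # a_n = (1/2pi) * integral of e^{-in t} chi_V(e^{it}) dt, as a Riemann sum
    s = sum(cmath.exp(-2j * cmath.pi * n * k / N)
            * chi(cmath.exp(2j * cmath.pi * k / N)) for k in range(N))
    return s / N

mults = {n: round(coeff(n).real) for n in range(-3, 4)}
print(mults)   # {-3: 0, -2: 0, -1: 1, 0: 2, 1: 0, 2: 1, 3: 0}
```

Since χ_V is a Laurent polynomial of small degree, the discrete sum recovers the coefficients exactly up to floating point error.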
15.1 Representations of SU(2)
In the rest of the time, we would like to talk about G = SU(2). Recall that
$$SU(2) = \{A \in GL_2(\mathbb{C}) : A^\dagger A = I,\ \det A = 1\}.$$
Note that if
$$A = \begin{pmatrix} a & b \\ c & d \end{pmatrix} \in SU(2),$$
then since det A = 1, we have
$$A^{-1} = \begin{pmatrix} d & -b \\ -c & a \end{pmatrix}.$$
So we need d = ā and c = −b̄. Moreover, we have aā + bb̄ = 1. Hence we can write SU(2) explicitly as
$$G = \left\{\begin{pmatrix} a & b \\ -\bar b & \bar a \end{pmatrix} : a, b \in \mathbb{C},\ |a|^2 + |b|^2 = 1\right\}.$$
Topologically, we know G ≅ S³ ⊆ C² ≅ R⁴.

Instead of thinking about C² in the usual vector space way, we can think of it as a subalgebra of M₂(C) via
$$\mathbb{H} = \left\{\begin{pmatrix} z & w \\ -\bar w & \bar z \end{pmatrix} : w, z \in \mathbb{C}\right\} \leq M_2(\mathbb{C}).$$
This is known as Hamilton's quaternion algebra. Then H is a 4-dimensional Euclidean space (two real dimensions each from z and w), with a norm on H given by ‖A‖² = det A. We now see that SU(2) ⊆ H is exactly the unit sphere ‖A‖² = 1 in H. If A ∈ SU(2) and x ∈ H, then ‖Ax‖ = ‖x‖ since ‖A‖ = 1. So elements of G act as isometries on the space.

After normalization (by 1/(2π²), the volume of S³), the usual integration of functions on S³ defines a Haar measure on G. It is an exercise on the last example sheet to write this out explicitly.

We now look at conjugacy in G. We let
$$T = \left\{\begin{pmatrix} a & 0 \\ 0 & \bar a \end{pmatrix} : a \in \mathbb{C},\ |a|^2 = 1\right\} \cong S^1.$$
This is a maximal torus in G, and it plays a fundamental role, since we happen to know about S¹. We also have a favourite element
$$s = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} \in SU(2).$$
We now prove some easy linear algebra results about SU(2).

Lemma (SU(2) conjugacy classes).

(i) Let t ∈ T. Then sts^{-1} = t^{-1}.

(ii) s² = −I ∈ Z(SU(2)).

(iii) The normalizer is
$$N_G(T) = T \cup sT = \left\{\begin{pmatrix} a & 0 \\ 0 & \bar a \end{pmatrix}, \begin{pmatrix} 0 & a \\ -\bar a & 0 \end{pmatrix} : a \in \mathbb{C},\ |a| = 1\right\}.$$

(iv) Every conjugacy class C of SU(2) contains an element of T, i.e. C ∩ T ≠ ∅.

(v) In fact, C ∩ T = {t, t^{-1}} for some t ∈ T, and t = t^{-1} if and only if t = ±I, in which case C = {t}.
(vi) There is a bijection
$$\{\text{conjugacy classes in } SU(2)\} \longleftrightarrow [-1, 1],$$
given by
$$A \mapsto \tfrac{1}{2}\operatorname{tr} A.$$
We can see that if
$$A = \begin{pmatrix} \lambda & 0 \\ 0 & \bar\lambda \end{pmatrix},$$
then
$$\tfrac{1}{2}\operatorname{tr} A = \tfrac{1}{2}(\lambda + \bar\lambda) = \operatorname{Re}(\lambda).$$

Note that what (iv) and (v) really say is that matrices in SU(2) are diagonalizable (within SU(2)).

Proof.

(i) Write it out.

(ii) Write it out.

(iii) Direct verification.

(iv) It is well-known from linear algebra that every unitary matrix X has an orthonormal basis of eigenvectors, and hence is conjugate in U(2) to an element of T, say QXQ^† ∈ T. We now want to force Q into SU(2), i.e. make Q have determinant 1. We put δ = det Q. Since Q is unitary, i.e. QQ^† = I, we know |δ| = 1. So we let ε be a square root of δ, and define Q₁ = ε^{-1}Q. Then det Q₁ = 1 and Q₁XQ₁^† ∈ T.

(v) We let g ∈ G, and suppose g ∈ C. If g = ±I, then C ∩ T = {g}. Otherwise, g has two distinct eigenvalues λ, λ^{-1}; note that the two eigenvalues must be inverses of each other, since det g = 1. Then we know
$$C = \left\{h \begin{pmatrix} \lambda & 0 \\ 0 & \lambda^{-1} \end{pmatrix} h^{-1} : h \in G\right\}.$$
Thus we find
$$C \cap T = \left\{\begin{pmatrix} \lambda & 0 \\ 0 & \lambda^{-1} \end{pmatrix}, \begin{pmatrix} \lambda^{-1} & 0 \\ 0 & \lambda \end{pmatrix}\right\}.$$
This is true since eigenvalues are preserved by conjugation, so if
$$\begin{pmatrix} \mu & 0 \\ 0 & \mu^{-1} \end{pmatrix} \in C \cap T,$$
then {μ, μ^{-1}} = {λ, λ^{-1}}. Also, we can get the second matrix from the first by conjugating with s.
(vi) Consider the map
$$\tfrac{1}{2}\operatorname{tr} : \{\text{conjugacy classes}\} \to [-1, 1].$$
By (v), matrices are conjugate in G iff they have the same set of eigenvalues. Now
$$\tfrac{1}{2}\operatorname{tr}\begin{pmatrix} \lambda & 0 \\ 0 & \lambda^{-1} \end{pmatrix} = \tfrac{1}{2}(\lambda + \bar\lambda) = \operatorname{Re}(\lambda) = \cos\theta,$$
where λ = e^{iθ}. Hence the map is a surjection onto [−1, 1]. Now we have to show it is injective. This is also easy: if g and g' have the same image, i.e. ½ tr g = ½ tr g', then g and g' have the same characteristic polynomial, namely x² − (tr g)x + 1. Hence they have the same eigenvalues, and hence they are conjugate.

We now write, for t ∈ [−1, 1],
$$C_t = \left\{g \in G : \tfrac{1}{2}\operatorname{tr} g = t\right\}.$$
In particular, we have C₁ = {I} and C₋₁ = {−I}.
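Parts (iv)-(vi) can be sanity-checked numerically (our own sketch, using the quaternion parametrization above): the half-trace of a random SU(2) matrix is conjugation invariant, lies in [−1, 1], and determines eigenvalues on the unit circle:

```python
import random

def random_su2():
    # a uniform-ish unit quaternion gives (a, b) with |a|^2 + |b|^2 = 1
    v = [random.gauss(0, 1) for _ in range(4)]
    r = sum(t * t for t in v) ** 0.5
    a, b = complex(v[0], v[1]) / r, complex(v[2], v[3]) / r
    return [[a, b], [-b.conjugate(), a.conjugate()]]

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

random.seed(0)
A, P = random_su2(), random_su2()
Pinv = [[P[1][1], -P[0][1]], [-P[1][0], P[0][0]]]   # inverse, since det P = 1
B = mul(P, mul(A, Pinv))                             # a conjugate of A

half = lambda M: (M[0][0] + M[1][1]).real / 2
assert abs(half(A) - half(B)) < 1e-9                 # half-trace is a class function
assert -1 <= half(A) <= 1

# Eigenvalues solve x^2 - (tr A)x + 1 = 0, i.e. lambda = t +/- i*sqrt(1 - t^2):
t = half(A)
lam = complex(t, max(0.0, 1 - t * t) ** 0.5)
assert abs(abs(lam) - 1) < 1e-9                      # eigenvalues lie on S^1
```

The pair {λ, λ̄} here is exactly the intersection C ∩ T from (v), written in the torus coordinates.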
Proposition. For t ∈ (−1, 1), the class C_t ≅ S² as topological spaces.

This is not really needed, but is a nice thing to know.

Proof. Exercise!

We now move on to classify the irreducible representations of SU(2). We let V_n be the complex space of all homogeneous polynomials of degree n in the variables x, y, i.e.
$$V_n = \{r_0 x^n + r_1 x^{n-1}y + \cdots + r_n y^n : r_0, \cdots, r_n \in \mathbb{C}\}.$$
This is an (n + 1)-dimensional complex vector space with basis x^n, x^{n-1}y, ..., y^n.

We want to get an action of SU(2) on V_n. It is easier to think about the action of GL₂(C) in general, and then restrict to SU(2). We define the action ρ_n : GL₂(C) → GL(V_n) by the rule
$$\rho_n\begin{pmatrix} a & b \\ c & d \end{pmatrix} f(x, y) = f(ax + cy,\ bx + dy).$$
In other words, we have
$$\left(\rho_n\begin{pmatrix} a & b \\ c & d \end{pmatrix} f\right)\begin{pmatrix} x & y \end{pmatrix} = f\left(\begin{pmatrix} x & y \end{pmatrix}\begin{pmatrix} a & b \\ c & d \end{pmatrix}\right),$$
where the multiplication inside f is matrix multiplication.
Example. When n = 0, V₀ ≅ C and ρ₀ is the trivial representation. When n = 1, this is the natural two-dimensional representation, and
\[ \rho_1 \begin{pmatrix} a & b \\ c & d \end{pmatrix} \text{ has matrix } \begin{pmatrix} a & b \\ c & d \end{pmatrix} \]
with respect to the standard basis {x, y} of V₁ = C². More interestingly, when n = 2, we know
\[ \rho_2 \begin{pmatrix} a & b \\ c & d \end{pmatrix} \text{ has matrix } \begin{pmatrix} a^2 & ab & b^2 \\ 2ac & ad + bc & 2bd \\ c^2 & cd & d^2 \end{pmatrix} \]
with respect to the basis x², xy, y² of V₂ = C³. We obtain the matrix by computing, e.g.
\[ \rho_2(g)(x^2) = (ax + cy)^2 = a^2 x^2 + 2ac\,xy + c^2 y^2. \]
Now we know SU(2) ≤ GL₂(C). So we can view V_n as a representation of SU(2) by restriction. Now we've got some representations. The claim is that these are all the irreducibles. Before we prove that, we look at the characters of these representations.

Lemma. A continuous class function f : G → C is determined by its restriction to T, and f|_T is even, i.e.
\[ f \begin{pmatrix} \lambda & 0 \\ 0 & \lambda^{-1} \end{pmatrix} = f \begin{pmatrix} \lambda^{-1} & 0 \\ 0 & \lambda \end{pmatrix}. \]

Proof. Each conjugacy class in SU(2) meets T. So a class function is determined by its restriction to T. Evenness follows from the fact that the two matrices above are conjugate.

In particular, a character of a representation (ρ, V) restricts to an even function χ_ρ : S¹ → C.

Lemma. If χ is a character of a representation of SU(2), then its restriction χ|_T is a Laurent polynomial, i.e. a finite N-linear combination of the functions
\[ \begin{pmatrix} \lambda & 0 \\ 0 & \lambda^{-1} \end{pmatrix} \mapsto \lambda^n \]
for n ∈ Z.

Proof. If V is a representation of SU(2), then Res^{SU(2)}_T V is a representation of T, and its character Res^{SU(2)}_T χ is the restriction of χ_V to T. But every representation of T has its character of the given form. So done.
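Returning to the example above, the matrices of ρ₁ and ρ₂ can be reproduced symbolically. The following sketch (assuming sympy; `rho_matrix` is a name of ours) builds the matrix of ρₙ on the monomial basis directly from the defining rule f(x, y) ↦ f(ax + cy, bx + dy):

```python
import sympy as sp

x, y, a, b, c, d = sp.symbols('x y a b c d')

def rho_matrix(n, A, B, C, D):
    """Matrix of rho_n((A B; C D)) on the basis x^n, x^{n-1} y, ..., y^n,
    where (rho_n(g) f)(x, y) = f(A x + C y, B x + D y)."""
    basis = [x**(n - j) * y**j for j in range(n + 1)]
    cols = []
    for f in basis:
        # simultaneous substitution, so the new x does not feed into the new y
        img = sp.expand(f.subs({x: A*x + C*y, y: B*x + D*y}, simultaneous=True))
        cols.append([img.coeff(x, n - j).coeff(y, j) for j in range(n + 1)])
    return sp.Matrix(cols).T  # columns are the images of the basis monomials

print(rho_matrix(1, a, b, c, d))  # the matrix (a b; c d) from the example
print(rho_matrix(2, a, b, c, d))  # the 3x3 matrix from the example
```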
Notation. We write N[z, z⁻¹] for the set of all Laurent polynomials with natural-number coefficients, i.e.
\[ \mathbb{N}[z, z^{-1}] = \left\{ \sum a_n z^n : a_n \in \mathbb{N}, \text{ only finitely many } a_n \text{ non-zero} \right\}. \]
We further write
\[ \mathbb{N}[z, z^{-1}]^{\mathrm{ev}} = \{ f \in \mathbb{N}[z, z^{-1}] : f(z) = f(z^{-1}) \}. \]
Then by our lemma, for every continuous representation V of SU(2), the character χ_V lies in N[z, z⁻¹]^{ev} (identifying it with its restriction to T).

We now actually calculate the character χ_n of (ρ_n, V_n) as a representation of SU(2). We have χ_{V_n}(g) = tr ρ_n(g), where
\[ g \sim \begin{pmatrix} z & 0 \\ 0 & z^{-1} \end{pmatrix} \in T. \]
Then we have
\[ \rho_n \begin{pmatrix} z & 0 \\ 0 & z^{-1} \end{pmatrix} (x^i y^j) = (zx)^i (z^{-1} y)^j = z^{i-j} x^i y^j. \]
So each x^i y^j is an eigenvector of ρ_n(g), with eigenvalue z^{i−j}. So ρ_n(g) has matrix
\[ \begin{pmatrix} z^n & & & \\ & z^{n-2} & & \\ & & \ddots & \\ & & & z^{-n} \end{pmatrix} \]
with respect to the standard basis. Hence the character is just
\[ \chi_n \begin{pmatrix} z & 0 \\ 0 & z^{-1} \end{pmatrix} = z^n + z^{n-2} + \cdots + z^{-n} = \frac{z^{n+1} - z^{-(n+1)}}{z - z^{-1}}, \]
where the last expression is valid unless z = ±1.

We can now state the result we are aiming for:

Theorem. The representations ρ_n : SU(2) → GL(V_n) of dimension n + 1 are irreducible for n ∈ Z≥0.

Again, this will turn out to be a complete list of the irreducible representations (completeness is proven later), given in a really nice form. This is spectacular.

Proof. Let 0 ≠ W ≤ V_n be a G-invariant subspace, i.e. a subrepresentation of V_n. We will show that W = V_n. All we know about W is that it is non-zero. So we take some non-zero vector of W.

Claim. Let
\[ 0 \neq w = \sum_{j=0}^{n} r_j x^{n-j} y^j \in W. \]
Since this is non-zero, there is some i such that r_i ≠ 0. The claim is that x^{n−i} y^i ∈ W.
We argue by induction on the number of non-zero coefficients r_j. If there is only one non-zero coefficient, then we are already done, as w is a non-zero scalar multiple of x^{n−i} y^i. So assume there is more than one, and choose some i such that r_i ≠ 0. We pick z ∈ S¹ with z^n, z^{n−2}, · · · , z^{2−n}, z^{−n} all distinct in C. Now
\[ \rho_n \begin{pmatrix} z & 0 \\ 0 & z^{-1} \end{pmatrix} w = \sum_j r_j z^{n-2j} x^{n-j} y^j \in W. \]
Subtracting a multiple of w, we find
\[ \rho_n \begin{pmatrix} z & 0 \\ 0 & z^{-1} \end{pmatrix} w - z^{n-2i} w = \sum_j r_j (z^{n-2j} - z^{n-2i}) x^{n-j} y^j \in W. \]
We now look at the coefficient r_j(z^{n−2j} − z^{n−2i}). This is non-zero if and only if r_j is non-zero and j ≠ i. So we can use this to remove any single non-zero coefficient, and by induction we get x^{n−j} y^j ∈ W for all j such that r_j ≠ 0.

This gives us one basis vector inside W, and we need to get the rest.

Claim. W = V_n.

We now know that x^{n−i} y^i ∈ W for some i. We consider
\[ \rho_n \left( \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & -1 \\ 1 & 1 \end{pmatrix} \right) x^{n-i} y^i = \frac{1}{2^{n/2}} (x + y)^{n-i} (-x + y)^i \in W. \]
It is clear that the coefficient of x^n is non-zero. So we can use the previous claim to deduce x^n ∈ W. Finally, for general a, b ≠ 0, we apply
\[ \rho_n \begin{pmatrix} a & -\bar{b} \\ b & \bar{a} \end{pmatrix} x^n = (ax + by)^n \in W, \]
and every coefficient in this expansion is non-zero. So all the basis vectors are in W, and W = V_n.

This proof is surprisingly elementary. It does not use any technology at all. Alternatively, to prove irreducibility, we can identify
\[ C_{\cos\theta} = \left\{ A \in G : \tfrac{1}{2}\operatorname{tr} A = \cos\theta \right\} \]
with the 2-sphere {|Im a|² + |b|² = sin² θ} of radius sin θ. So if f is a class function on G, then f is constant on each class C_{cos θ}. It turns out we get
\[ \int_G f(g)\,\mathrm{d}g = \frac{1}{2\pi^2} \int_0^{2\pi} f\begin{pmatrix} e^{i\theta} & 0 \\ 0 & e^{-i\theta} \end{pmatrix} \cdot \frac{1}{2} \cdot 4\pi \sin^2\theta \,\mathrm{d}\theta. \]
This is the Weyl integration formula, which expresses integration against the Haar measure of SU(2). Then if χ_n is the character of V_n, we can use this to show that ⟨χ_n, χ_n⟩ = 1, and hence that χ_n is irreducible. We will not go into the details of this computation.

Theorem. Every finite-dimensional continuous irreducible representation of G is one of the ρ_n : G → GL(V_n) defined above.

Proof. Assume ρ_V : G → GL(V) is an irreducible representation affording a character χ_V ∈ N[z, z⁻¹]^{ev}. We will show that χ_V = χ_n for some n. Now
\[ \chi_0 = 1, \quad \chi_1 = z + z^{-1}, \quad \chi_2 = z^2 + 1 + z^{-2}, \quad \ldots \]
form a basis of Q[z, z⁻¹]^{ev}, which is an infinite-dimensional vector space over Q. Hence we can write
\[ \chi_V = \sum_n a_n \chi_n, \]
a finite sum, where a priori the a_n are only rational, and possibly negative. So we clear denominators, and move the summands with negative coefficients to the left-hand side. So we get
\[ m \chi_V + \sum_{i \in I} m_i \chi_i = \sum_{j \in J} n_j \chi_j, \]
with I, J disjoint finite subsets of N, and m, m_i, n_j ∈ N. We know the left- and right-hand sides are characters of representations of G. So we get
\[ m V \oplus \bigoplus_{i \in I} m_i V_i \cong \bigoplus_{j \in J} n_j V_j. \]
Since V is irreducible and the decomposition into irreducibles is unique, we must have V ≅ V_n for some n ∈ J.

We've now got a complete list of the irreducible representations of SU(2). So we can look at what happens when we take products. We know that for representations V, W of SU(2), if
\[ \operatorname{Res}^{\mathrm{SU}(2)}_T V \cong \operatorname{Res}^{\mathrm{SU}(2)}_T W, \]
then in fact V ≅ W. This gives us the following result:

Proposition. Let G = SU(2) or G = S¹, and let V, W be representations of G. Then
\[ \chi_{V \otimes W} = \chi_V \chi_W. \]
Proof. By the previous remark, it is enough to consider the case G = S¹. Suppose V and W have eigenbases e₁, · · · , e_n and f₁, · · · , f_m respectively, such that
\[ \rho(z) e_i = z^{n_i} e_i, \quad \rho(z) f_j = z^{m_j} f_j \]
for each i, j. Then
\[ \rho(z)(e_i \otimes f_j) = z^{n_i + m_j}\, e_i \otimes f_j. \]
Thus the character is
\[ \chi_{V \otimes W}(z) = \sum_{i,j} z^{n_i + m_j} = \left( \sum_i z^{n_i} \right) \left( \sum_j z^{m_j} \right) = \chi_V(z)\, \chi_W(z). \]
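Separately, the characters χₙ can be sanity-checked numerically against the Weyl integration formula from the irreducibility discussion: we should find ⟨χₙ, χₘ⟩ = δₙₘ. A sketch assuming numpy, with `chi` and `inner` being our own names:

```python
import numpy as np

def chi(n, theta):
    """Character of V_n at diag(e^{i theta}, e^{-i theta}):
    z^n + z^{n-2} + ... + z^{-n} with z = e^{i theta}."""
    z = np.exp(1j * theta)
    return sum(z ** (n - 2 * k) for k in range(n + 1)).real

def inner(n, m, steps=20000):
    """<chi_n, chi_m> via the Weyl integration formula for SU(2),
    approximated by a midpoint rule over theta in [0, 2 pi]."""
    theta = (np.arange(steps) + 0.5) * 2 * np.pi / steps
    weight = 4 * np.pi * np.sin(theta) ** 2 / 2   # the class-volume factor
    vals = chi(n, theta) * chi(m, theta) * weight
    return vals.sum() * (2 * np.pi / steps) / (2 * np.pi ** 2)

print(round(inner(3, 3)), round(inner(3, 5)))  # 1 0
```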
Example. We have
\[ \chi_{V_1 \otimes V_1}(z) = (z + z^{-1})^2 = z^2 + 2 + z^{-2} = \chi_{V_2} + \chi_{V_0}. \]
So we have
\[ V_1 \otimes V_1 \cong V_2 \oplus V_0. \]
Similarly, we can compute
\[ \chi_{V_1 \otimes V_2}(z) = (z^2 + 1 + z^{-2})(z + z^{-1}) = z^3 + 2z + 2z^{-1} + z^{-3} = \chi_{V_3} + \chi_{V_1}. \]
So we get
\[ V_1 \otimes V_2 \cong V_3 \oplus V_1. \]

Proposition (Clebsch-Gordan rule). For n, m ∈ N, we have
\[ V_n \otimes V_m \cong V_{n+m} \oplus V_{n+m-2} \oplus \cdots \oplus V_{|n-m|+2} \oplus V_{|n-m|}. \]

Proof. We just check this works for characters. Without loss of generality, we assume n ≥ m. We can compute
\[ (\chi_n \chi_m)(z) = \frac{z^{n+1} - z^{-n-1}}{z - z^{-1}} \left( z^m + z^{m-2} + \cdots + z^{-m} \right) = \sum_{j=0}^{m} \frac{z^{n+m+1-2j} - z^{2j-n-m-1}}{z - z^{-1}} = \sum_{j=0}^{m} \chi_{n+m-2j}(z). \]
Note that the condition n ≥ m ensures there are no cancellations in the sum.
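The character computation in this proof is easy to machine-check. A sketch assuming sympy, where `chi` and `clebsch_gordan` are names of ours:

```python
import sympy as sp

z = sp.symbols('z')

def chi(n):
    """Character of V_n restricted to the torus: z^n + z^(n-2) + ... + z^(-n)."""
    return sum(z ** (n - 2 * k) for k in range(n + 1))

def clebsch_gordan(n, m):
    """Character of the right-hand side of the Clebsch-Gordan rule."""
    return sum(chi(k) for k in range(abs(n - m), n + m + 1, 2))

# verify chi_n * chi_m = chi_{n+m} + chi_{n+m-2} + ... + chi_{|n-m|}
# for all small n, m
for n in range(5):
    for m in range(5):
        assert sp.expand(chi(n) * chi(m) - clebsch_gordan(n, m)) == 0

print("Clebsch-Gordan rule verified for n, m < 5")
```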
15.2 Representations of SO(3), SU(2) and U(2)

We've now got a complete list of the representations of S¹ and SU(2). We now see what we can deduce about some related groups. We will not give too many details about these groups, and the proofs are just sketches. Moreover, we will rely on the following facts, which we shall not prove:

Proposition. There are isomorphisms of topological groups:
(i) SO(3) ≅ SU(2)/{±I} = PSU(2)

(ii) SO(4) ≅ (SU(2) × SU(2))/{±(I, I)}

(iii) U(2) ≅ (U(1) × SU(2))/{±(I, I)}

All the maps are group isomorphisms, but in fact also homeomorphisms. To show this, we can use the fact that a continuous bijection from a compact space to a Hausdorff space is automatically a homeomorphism.

Assuming this is true, we obtain the following corollary:

Corollary. Every irreducible representation of SO(3) has the form
\[ \rho_{2m} : \mathrm{SO}(3) \to \mathrm{GL}(V_{2m}) \]
for some m ≥ 0, where the V_n are the irreducible representations of SU(2).

Proof. By lifting, irreducible representations of SO(3) correspond to irreducible representations of SU(2) on which −I acts trivially. But −I acts on V_n as −1 when n is odd, and as 1 when n is even, since
\[ \rho_n(-I) = \begin{pmatrix} (-1)^n & & & \\ & (-1)^{n-2} & & \\ & & \ddots & \\ & & & (-1)^{-n} \end{pmatrix} = (-1)^n I. \]

For the sake of completeness, we provide a (sketch) proof of the isomorphism SO(3) ≅ SU(2)/{±I}.

Proposition. SO(3) ≅ SU(2)/{±I}.

Proof sketch. Recall that SU(2) can be viewed as the sphere of unit norm quaternions in H ≅ R⁴. Let
\[ H_0 = \{ A \in H : \operatorname{tr} A = 0 \} \]
be the “pure” quaternions. This is a three-dimensional subspace of H, namely
\[ H_0 = \mathbb{R}\left\langle \begin{pmatrix} i & 0 \\ 0 & -i \end{pmatrix}, \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}, \begin{pmatrix} 0 & i \\ i & 0 \end{pmatrix} \right\rangle = \mathbb{R}\langle \mathbf{i}, \mathbf{j}, \mathbf{k} \rangle, \]
where R⟨· · ·⟩ denotes the R-span. This is equipped with the norm ‖A‖² = det A. This gives a nice 3-dimensional Euclidean space, and SU(2) acts as isometries on H₀ by conjugation, i.e.
\[ X \cdot A = XAX^{-1}, \]
giving a group homomorphism
\[ \varphi : \mathrm{SU}(2) \to \mathrm{O}(3), \]
and the kernel of this map is Z(SU(2)) = {±I}. We also know that SU(2) is compact, and O(3) is Hausdorff. Hence the continuous group isomorphism
\[ \bar\varphi : \mathrm{SU}(2)/\{\pm I\} \to \operatorname{im} \varphi \]
is a homeomorphism.

It remains to show that im φ = SO(3). We know SU(2) is connected, and det(φ(X)) is a continuous function that can only take the values 1 or −1. So det(φ(X)) is either always 1 or always −1. But det(φ(I)) = 1. So we know det(φ(X)) = 1 for all X. Hence im φ ≤ SO(3).

To show that equality indeed holds, we have to show that all possible rotations of H₀ are realized. We first show that all rotations in the (i, j)-plane are implemented by elements of the form a + bk, and similarly for any permutation of i, j, k. Since all such rotations generate SO(3), we are then done. Now consider
\[ \begin{pmatrix} e^{i\theta} & 0 \\ 0 & e^{-i\theta} \end{pmatrix} \begin{pmatrix} ai & b \\ -\bar{b} & -ai \end{pmatrix} \begin{pmatrix} e^{-i\theta} & 0 \\ 0 & e^{i\theta} \end{pmatrix} = \begin{pmatrix} ai & e^{2i\theta} b \\ -\bar{b} e^{-2i\theta} & -ai \end{pmatrix}. \]
So
\[ \begin{pmatrix} e^{i\theta} & 0 \\ 0 & e^{-i\theta} \end{pmatrix} \]
acts on R⟨i, j, k⟩ by a rotation in the (j, k)-plane through an angle 2θ. We can check that
\[ \begin{pmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{pmatrix}, \quad \begin{pmatrix} \cos\theta & i\sin\theta \\ i\sin\theta & \cos\theta \end{pmatrix} \]
act by rotations of 2θ in the (i, k)-plane and the (i, j)-plane respectively. So done.

We can adapt this proof to prove the other isomorphisms. However, it is slightly more difficult to get the irreducible representations, since it involves taking some direct products. We need a result about products G × H of two compact groups. Similar to the finite case, we get the complete list of irreducible representations by taking the tensor products V ⊗ W, where V and W range over the irreducibles of G and H independently. We will just assert the results:

Proposition. The complete list of irreducible representations of SO(4) is ρ_m ⊗ ρ_n, where m, n ≥ 0 and m ≡ n (mod 2).

Proposition. The complete list of irreducible representations of U(2) is det^{⊗m} ⊗ ρ_n, where m ∈ Z, n ≥ 0, and det is the obvious one-dimensional representation.
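The conjugation action in the sketch proof above can be checked numerically. Assuming numpy, with `coords` being our own helper that reads off (i, j, k)-coordinates via the trace pairing, this verifies that conjugating by diag(e^{iθ}, e^{−iθ}) fixes i and rotates the (j, k)-plane through 2θ:

```python
import numpy as np

# the quaternion basis i, j, k inside 2x2 complex matrices, as in the sketch
qi = np.array([[1j, 0], [0, -1j]])
qj = np.array([[0, 1], [-1, 0]], dtype=complex)
qk = np.array([[0, 1j], [1j, 0]])

def coords(A):
    """Coordinates of a pure quaternion A in the basis (i, j, k),
    using the pairing <A, B> = Re tr(A B†) / 2."""
    return np.array([np.trace(A @ B.conj().T).real / 2 for B in (qi, qj, qk)])

theta = 0.7
X = np.diag([np.exp(1j * theta), np.exp(-1j * theta)])

# conjugation by X fixes i and rotates the (j, k)-plane through 2*theta
assert np.allclose(coords(X @ qi @ X.conj().T), [1, 0, 0])
assert np.allclose(coords(X @ qj @ X.conj().T),
                   [0, np.cos(2 * theta), np.sin(2 * theta)])
assert np.allclose(coords(X @ qk @ X.conj().T),
                   [0, -np.sin(2 * theta), np.cos(2 * theta)])
```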
Index

2-transitive action, 34
C_g, 6
C_G, 6
C(G), 23
FG, 7
G-homomorphism, 8
G-invariant inner product, 13
G-isomorphism, 9
G-module, 7
G-space, 7
G-subspace, 10
k(G), 23
p^a q^b theorem, 70
R(G), 48
S² V, 43
S² χ, 44
Sⁿ V, 47
Tⁿ V, 46
V^{⊗2}, 43
V^{⊗n}, 46
Λ² V, 43
Λ² χ, 44
Λⁿ V, 47
χ_{S²}, 44
χ_{Λ²}, 44
afford, 21
algebraic integer, 65
alternating group, 5
Burnside's theorem, 70
canonical decomposition, 20
centralizer, 6
character, 21
character ring, 48
character table, 26
circle group, 71
class algebra constant, 66
class function, 21, 23
class number, 23
class sum, 65
Clebsch-Gordan rule, 85
column orthogonality, 30
commutator subgroup, 37
compact group, 71
completely reducible representation, 12
conjugacy class, 6
conjugate character, 62
cyclic group, 5
decomposable representation, 11
degree, 7
degree of character, 21
derived subgroup, 37
dihedral group, 5
dimension, 7
direct sum, 11
double coset, 54
dual representation, 39
equivalent representation, 9
exterior algebra, 48
exterior power, 47
exterior square, 43
faithful character, 21
faithful representation, 7
Frobenius complement, 60
Frobenius group, 60
Frobenius kernel, 60
Frobenius reciprocity, 52
Frobenius' theorem, 58
generalized character, 48
group action, 6
group algebra, 7
Haar measure, 75
Hamilton's quaternion algebra, 78
Hermitian inner product, 13
indecomposable representation, 11
induced class function, 51
induced representation, 56
inner product of class functions, 24
intertwine, 8
irreducible character, 21
irreducible representation, 10
isomorphic representation, 9
isotypical components, 20
left transversal, 52
lifting, 36
linear action, 7
linear character, 21
linear representation, 7
Mackey's restriction formula, 61
Maschke's theorem, 12
matrix representation, 8
maximal torus, 78
permutation character, 32
permutation representation, 6, 15, 32
principal character, 21
quaternion algebra, 78
quaternion group, 6
regular module, 15
regular representation, 15
representation, 7, 72
representation of algebra, 66
restriction, 50
row orthogonality, 28
Schur's lemma, 16
semisimple representation, 12
simple representation, 10
structure constant, 66
subrepresentation, 10
symmetric algebra, 48
symmetric group, 5
symmetric power, 47
symmetric square, 43
tensor algebra, 47
tensor product, 40
topological group, 71
trivial character, 21
trivial representation, 8
virtual character, 48
Weyl's unitary trick, 14