MATH 155R Algebraic Combinatorics Lecture Notes


MATH 155R Lecture Notes
Alec Sun
May 1, 2019

§1 Course Information This undergraduate course on Algebraic Combinatorics is taught by Professor Lauren Williams. The webpage is at http://www.math.harvard.edu/~williams/155.html. There will be a midterm exam as well as a final project.

Contents
1 Course Information
2 January 28, 2019
3 January 30, 2019
4 February 4, 2019
5 February 6, 2019
6 February 11, 2019
7 February 13, 2019
8 February 20, 2019
9 February 25, 2019
10 February 27, 2019
11 March 4, 2019
12 March 6, 2019
13 March 11, 2019
14 March 13, 2019
15 March 25, 2019
16 March 27, 2019
17 April 1, 2019
18 April 3, 2019
19 April 8, 2019
20 April 10, 2019
21 April 15, 2019
22 April 17, 2019
23 April 22, 2019
24 April 24, 2019
25 April 29, 2019
26 March 1, 2019


§2 January 28, 2019

Definition 2.1. A partition is a weakly decreasing sequence of positive integers λ = (λ1, λ2, . . . , λk), that is, λ1 ≥ · · · ≥ λk. We say that λ is a partition of n if ∑_{i=1}^{k} λi = n.

Definition 2.2. A Young diagram is a way to visualize a partition: we place λi boxes in the i-th row of the diagram. A standard tableau of shape λ is a filling of the Young diagram of λ with the distinct integers 1 to n such that the rows and the columns are (strictly) increasing.

Here is some motivation for these combinatorial objects. It turns out that the irreducible representations of Sn are in bijection with the partitions λ of n, so we can label the irreducibles by Vλ. Furthermore, dim Vλ is equal to the number of standard tableaux of shape λ. Here is a nice result that we will prove later on.

Definition 2.3. Given a Young diagram, the hook of a box b is hb = {b} ∪ {boxes to the right of or below b}. The hook length |hb| of b is the number of boxes in its hook.

Theorem 2.4 (Hook Length Formula). Let λ be a partition of n. Then the number of standard tableaux of shape λ, which is also dim Vλ, is
n! / ∏_{b ∈ Young diagram of λ} |hb|.

Remark 2.5. If you’re familiar with high school math contests, the Hook Length Formula snipes USAMO 2016 #2. (Which partition should you consider?)
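The formula is easy to test computationally. The following sketch is not part of the original notes (the function names are my own); it computes hook lengths directly from a partition and evaluates the formula.

```python
# A short sketch of the Hook Length Formula.  A partition is a weakly
# decreasing tuple; the hook of box (i, j) consists of the box itself,
# the boxes to its right, and the boxes below it.
from math import factorial, prod

def hook_length(shape, i, j):
    arm = shape[i] - j - 1                                          # boxes to the right
    leg = sum(1 for r in range(i + 1, len(shape)) if shape[r] > j)  # boxes below
    return 1 + arm + leg

def num_standard_tableaux(shape):
    n = sum(shape)
    hooks = prod(hook_length(shape, i, j)
                 for i in range(len(shape)) for j in range(shape[i]))
    return factorial(n) // hooks

print(num_standard_tableaux((2, 2)))   # 2, matching Example 2.6 below
print(num_standard_tableaux((3, 2)))   # 5
```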

Example 2.6. If λ = (2, 2), the standard tableaux of shape λ are

1 2    1 3
3 4    2 4

Indeed, the Hook Length Formula tells us that the number of such tableaux is 4!/(3 · 2 · 2 · 1) = 2.

Definition 2.7. Let X be a set and G be a group. We say that G acts (on the left) on X if there is a map G × X → X, denoted by (g, x) ↦ g · x, such that (gh) · x = g · (h · x) for all g, h ∈ G and x ∈ X, and e · x = x. We sometimes use the shorthand gx or g(x) for g · x.

Definition 2.8. Let G be a group. A representation of G is a homomorphism ρ : G → GL(V), where V is a finite-dimensional vector space over C and GL(V) is the group of invertible linear maps V → V.


Remark 2.9. If we choose a basis {e1 , . . . , en } for V, we can identify GL(V ) with GLn (C), the group of n × n invertible matrices. Representation theory is a way of turning questions about groups into questions we can answer using linear algebra. For the majority of the course it will be assumed that G is a finite group. What is the connection between representations and group actions? It turns out that every representation of a group gives rise to a group action. Definition 2.10. Consider a representation ρ : G → GL(V ). Then we can define an action of G on V via G×V →V (g, v) 7→ ρ(g)v. Define the degree or dimension of ρ to be dim V. Remark 2.11. When the map ρ is understood from context, we often use the shorthand notation g · v or gv instead of ρ(g)v.

Example 2.12 Let G be a group and V = C1 . Define ρ : G → GL(V ) by ρ(g) = I, where I denotes the identity. Equivalently, ρ : G → GL1 (C) ∼ = C∗ by ρ(g) = 1. Then for any v ∈ V, g · v = ρ(g)v = v. This is called the trivial representation of G.

Example 2.13. Let G be a group and X a set on which G acts on the left. Note that G permutes the elements of X: x1 ≠ x2 implies g(x1) ≠ g(x2). (If g(x1) = g(x2), act on the left by g⁻¹.) Any set X on which G acts gives rise to a permutation representation of G. Let V be a vector space with basis {ex : x ∈ X}. Let G act on V by letting g · ex = e_{gx} and extending linearly, namely
g · (∑_{x} ax ex) = ∑_{x} ax e_{gx}.
Here the coefficients ax are scalars in C. One particular example of a permutation representation is when X = G, which is called the regular representation of G. Because the number of basis vectors is |G|, the dimension of the regular representation is |G|.

Definition 2.14. A permutation of a finite set S is a bijection S → S. Denoting the set {1, 2, . . . , n} by the shorthand [n], define Sn to be the set of permutations of [n].

Remark 2.15. There are several different ways to notate permutations. Consider the permutation σ : 1 ↦ 2, 2 ↦ 1, 3 ↦ 6, 4 ↦ 5, 5 ↦ 3, 6 ↦ 4. The list notation of σ is (2, 1, 6, 5, 3, 4) (list where each element maps to, in order), and the cycle notation of σ is (12)(3645) (enumerate the cycles). Note that Sn is a group with identity e = (1, 2, . . . , n) = (1)(2) · · · (n) and operation given by composition.


Example 2.16. Suppose we want to find (12)(34) · (1234) in cycle notation. It is helpful to find where each element goes by acting on it in order from the right. For example, (1234) maps 1 to 2 and (12)(34) maps 2 to 1. Doing this for each element yields the result (1)(24)(3).

Definition 2.17. The transpositions of Sn are the elements of the form (ij). (Note that in cycle notation we sometimes omit the fixed points as a shorthand.) An adjacent transposition is one with j = i + 1, and we denote the n − 1 adjacent transpositions of Sn by si = (i, i + 1).

Proposition 2.18. The group Sn is generated by s1, s2, . . . , sn−1.

It turns out that Sn has two 1-dimensional representations:
1. The trivial representation. This representation is irreducible (it is 1-dimensional), and we will see later that it corresponds to the Young diagram (n). We can check that the hook length formula for the number of standard tableaux holds. (We know it must be 1 since there is exactly 1 way to order [n] in increasing order.)
2. The sign representation.

Definition 2.19. Let π ∈ Sn. Write π as a product of adjacent transpositions si1 si2 · · · sil and define the sign of π to be sgn(π) = (−1)^l.

Proposition 2.20. The sign is well-defined. Furthermore, sgn is a homomorphism Sn → {±1}.

Viewed as a homomorphism Sn → GL1(C), we get the sign representation on a 1-dimensional vector space over C, where for π ∈ Sn, π · v = sgn(π)v. We will see later that this irreducible representation corresponds to the Young diagram (1, 1, . . . , 1).

Definition 2.21. A G-linear map ϕ between two representations V and W of G is a vector space map ϕ : V → W that commutes with the action of G, namely ϕ(g · v) = g · ϕ(v) for all g ∈ G and v ∈ V.

Definition 2.22. A subrepresentation of a representation V (of G) is a vector subspace W of V such that W is also a representation of G. In other words, we need W to be invariant under G, that is, g · w ∈ W for all w ∈ W and g ∈ G.

Proposition 2.23. If ϕ : V → W is a G-linear map, then ker ϕ and im ϕ are subrepresentations of V and W, respectively.

Definition 2.24. A representation V is irreducible if there is no proper nonzero invariant subspace W of V.

Remark 2.25. All 1-dimensional representations are irreducible. In particular, the trivial and sign representations are.
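Cycle computations like Example 2.16 and the sign of Definition 2.19 are easy to automate. Below is a small sketch, not part of the notes (the helper names are mine); it composes two permutations acting on the right factor first and computes the sign from the cycle lengths.

```python
# Permutations of [n] are stored as dicts i -> sigma(i).
def from_cycles(n, cycles):
    sigma = {i: i for i in range(1, n + 1)}
    for cyc in cycles:
        for a, b in zip(cyc, cyc[1:] + cyc[:1]):
            sigma[a] = b
    return sigma

def compose(sigma, tau):
    """(sigma * tau)(i) = sigma(tau(i)): apply tau first, then sigma."""
    return {i: sigma[tau[i]] for i in tau}

def sign(sigma):
    seen, sgn = set(), 1
    for i in sigma:
        if i not in seen:
            j, length = i, 0
            while j not in seen:
                seen.add(j); j = sigma[j]; length += 1
            sgn *= (-1) ** (length - 1)   # a cycle of length l is a product of l - 1 transpositions
    return sgn

left = from_cycles(4, [(1, 2), (3, 4)])
right = from_cycles(4, [(1, 2, 3, 4)])
print(compose(left, right))   # {1: 1, 2: 4, 3: 3, 4: 2}, i.e. (1)(24)(3) as in Example 2.16
print(sign(right))            # -1, since (1234) = (12)(23)(34)
```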


§3 January 30, 2019

We begin by reviewing the definition of a representation. Let G be a group. A representation is a homomorphism ρ : G → GL(V) where V is a finite-dimensional vector space over C. Recall that given a representation ρ we can define an action of G on V via g · v = ρ(g)v for all g ∈ G and v ∈ V. We often write gv or g(v) for g · v. Despite this mild abuse of notation, we refer to the vector space V itself as a representation of G, or a G-module.

Example 3.1. The group G = S3 is the symmetric group on [3]. Let {e1, e2, e3} be a basis for V = C3. Let G act on V via g · ei = e_{g(i)}. We can encode this action by a homomorphism ρ : G → GL3(C). For example,

id ↦ [ 1 0 0 ]    (12) ↦ [ 0 1 0 ]    (123) ↦ [ 0 0 1 ]
     [ 0 1 0 ]           [ 1 0 0 ]            [ 1 0 0 ]
     [ 0 0 1 ]           [ 0 0 1 ]            [ 0 1 0 ]

There are ways to create new representations from old ones.

Definition 3.2. If V and W are representations of G, then so is V ⊕ W via the action g · (v, w) = (g · v, g · w).

Definition 3.3. Suppose V and W are representations of G, and consider bases {v1, . . . , vn} for V and {w1, . . . , wm} for W. Then V ⊗ W is a vector space with basis {vi ⊗ wj | 1 ≤ i ≤ n, 1 ≤ j ≤ m}, so elements of V ⊗ W are linear combinations of these basis elements. We define the action on V ⊗ W via g · (v ⊗ w) = (g · v) ⊗ (g · w).

One of the goals in representation theory is to classify all representations of a group. Since all representations can be written as direct sums of irreducible ones, it suffices to classify the irreducible ones. Recall that if V is a representation of G, a subrepresentation W of V is a subspace W ⊆ V that is G-invariant, that is, g · w ∈ W for all g ∈ G and w ∈ W. Recall that a representation V of G is irreducible if it has no proper nonzero subrepresentation W.

Definition 3.4. A representation V of G is indecomposable if it cannot be written as U ⊕ W where U, W are proper subrepresentations of V.

When the group G is finite, it turns out that irreducible and indecomposable representations are equivalent notions.

Theorem 3.5 (Maschke's Theorem). If W is a subrepresentation of a representation V of a finite group G, then there exists a complementary G-invariant subspace W′ of V such that V = W ⊕ W′.


Proof. Choose an arbitrary subspace U complementary (as a vector space) to W in V. The problem is that U may not be G-invariant. Let π0 : V → W be the projection given by the direct sum, that is, π0 : W ⊕ U → W, (w, u) ↦ w. Now the key idea is to average the map π0 over G. We define a new map
π(v) = ∑_{g∈G} g(π0(g⁻¹v)).

We can check that π : V → W is a G-linear map, namely g(π(v)) = π(gv). Furthermore, π|W is simply multiplication by |G|. By a problem on the homework, ker π is a G-invariant subspace and hence a representation of G. We can now let W 0 = ker π be the desired complementary invariant subspace. Since im π = W, we know that W 0 has the complementary dimension by the Rank-Nullity Theorem, which says that dim ker π = dim V − dim W. Remark 3.6. The above proof does not work if G is an infinite group since we are summing over all elements of G.
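To make the averaging trick concrete, here is a small numerical sketch (not from the notes) for the permutation representation of S3 on C³ and the invariant line W spanned by e1 + e2 + e3. It starts from the orthogonal projection onto W, which is one valid choice of π0; the library calls are standard NumPy.

```python
# Maschke-style averaging for the permutation representation of S3 on C^3.
import itertools
import numpy as np

def perm_matrix(sigma):
    """3x3 matrix of sigma acting by g . e_i = e_{g(i)} (sigma is a tuple: i -> sigma[i])."""
    M = np.zeros((3, 3))
    for i in range(3):
        M[sigma[i], i] = 1.0
    return M

group = [perm_matrix(s) for s in itertools.permutations(range(3))]

w = np.ones((3, 1)) / np.sqrt(3)   # basis of W = C<e1 + e2 + e3>
pi0 = w @ w.T                      # a projection onto W (here, the orthogonal one)

# average pi0 over the group:  pi(v) = sum_g g pi0(g^{-1} v)
pi = sum(g @ pi0 @ np.linalg.inv(g) for g in group)

# pi is G-linear: it commutes with every g
assert all(np.allclose(g @ pi, pi @ g) for g in group)
# pi restricted to W is multiplication by |G| = 6
assert np.allclose(pi @ w, 6 * w)
# ker(pi) is the invariant complement {z : z1 + z2 + z3 = 0}, e.g. it contains (1, -1, 0)
assert np.allclose(pi @ np.array([[1.0], [-1.0], [0.0]]), 0)
print("averaged projection:\n", pi)
```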

Corollary 3.7. Each representation of a finite group G is a direct sum of irreducible representations. (This property is known as complete reducibility or semisimplicity.)

Is the permutation representation we gave for S3 in a previous example irreducible? No, because the 1-dimensional subspace W = C⟨e1 + e2 + e3⟩ is G-invariant. By Maschke's Theorem, we know that there exists a complementary representation W′ = {(z1, z2, z3) ∈ C³ | z1 + z2 + z3 = 0}. It turns out that this representation, known as the standard representation, is irreducible.

Example 3.8. Now we choose a basis for W′ and write out the matrices ρ(π) for π ∈ S3. For example, let w1 = (1, −1, 0), w2 = (0, 1, −1). Then
(12) · w1 = (−1, 1, 0) = −w1
(12) · w2 = (1, 0, −1) = w1 + w2
so
ρ((12)) = [ −1 1 ]
          [  0 1 ]

Remark 3.9. For any n, Sn has a standard representation of dimension n − 1 given by
W = {(z1, . . . , zn) ∈ Cⁿ | z1 + · · · + zn = 0}


where Sn acts by permuting coordinates. It turns out that the standard representation is always irreducible. Recall that Young diagrams λ with n boxes are in bijection to irreducible representations Vλ of Sn . In general, λ = (n−1, 1) corresponds to the standard representation. We can again test the Hook Length Formula: the number of standard tableaux associated to λ = (n − 1, 1) is n − 1. How can we tell if two irreducible representations are distinct? This can be done via Schur’s Lemma. Lemma 3.10 (Schur’s Lemma) If V, W are irreducible representations of G and ϕ : V → W is a G-module homomorphism (a G-linear map), then: 1. ϕ is an isomorphism or ϕ = 0. 2. If V ∼ = W and ϕ 6= 0 then ϕ = λI for some λ ∈ C. Proof. 1. By a homework problem, ker ϕ, im ϕ are subrepresentations of V, W respectively. Since V is irreducible, ker ϕ = {0} or V. Similarly, im ϕ = {0} or W. If ker ϕ = {0} then im ϕ must be W, so ϕ is an isomorphism. Otherwise ker ϕ = V and ϕ = 0. 2. If V ∼ = W and ϕ 6= 0, then since C is algebraically closed, ϕ has an eigenvalue λ. Then the G-linear map ϕ − λI has nonzero kernel, so the kernel is a proper subrepresentation. By (1), the kernel must be V, which tells us that ϕ − λI = 0 =⇒ ϕ = λI.

Proposition 3.11 For any representation V of a finite group G, there exists a decomposition V = V1⊕a1 ⊕ · · · ⊕ Vk⊕ak where the Vi are distinct irreducible representations. Furthermore, the decomposition of V into such a direct sum of irreducibles is unique. Proof. If we have another decomposition V = W1⊕b1 ⊕ · · · ⊕ Wl⊕bl into irreducibles, then ϕ = id is a map of representations V → V. By Schur’s Lemma, ϕ must map the factor Vi into a factor Wj with Wj ∼ = Vi . Now we can “induct downwards.” The irreducibles of an abelian group G satisfy a very special property. Proposition 3.12 Every irreducible representation of an abelian group G is 1-dimensional. Proof. Let V be an irreducible representation of G. Choose h ∈ G. We can define a


G-linear map ρ : V → V by ρ(v) = hv. It is G-linear because g · ρ(v) = ghv = (gh)v = (hg)v = h(gv) = ρ(gv). By Schur’s Lemma, each h ∈ G must act by a scalar multiple of the identity map. Therefore, for any v ∈ V and h ∈ G, hv = λv for some scalar λ ∈ C, so v spans a 1dimensional subrepresentation of V, implying that the irreducible V is 1-dimensional. Remark 3.13. This means that irreducibles of an abelian group G are simply homomorphisms ρ : G → C∗ . Even though the study of irreducibles of abelian groups is rather simple, we can use our result to develop a theory regarding irreducibles of a general group G. Example 3.14 Consider S3 again, which is not abelian. It has an abelian subgroup U3 ∼ = Z/3Z generated by any 3-cycle, for example τ = (123). We can consider any representation W of S3 and think of it as a representation of U3 . Since U3 is abelian, we can decompose the representation W now of U3 into a direct sum of 1-dimensional irreducibles. Furthermore, each of these 1-dimensional irreducibles are spanned by an eigenvector of τ, and the eigenvalues must be third roots of unity because of the following fact: Fact 3.15. If g ∈ G has order n then ρ(g)n = ρ(g n ) = ρ(e) = I. In representation theory it can be very difficult to understand a representation. However, one thing that can be helpful is finding a basis with very nice property, known as a canonical basis. One way of finding such a basis is viewing the representation as a representation of an abelian subgroup and using the vectors that generate the 1dimensional irreducibles as part of this basis.


§4 February 4, 2019 Last time we saw that every irreducible representation of an abelian group G is 1dimensional, namely a homomorphism G → C∗ . We also saw the following lemma: Lemma 4.1 If g ∈ G such that g m = e and ρm : G → C∗ is a homomorphism, then ρ(g)m = 1. Thus far we have constructed 3 irreducible representations of G = S3 : the trivial representation, the sign representation, and the standard representation. Let τ be a 3-cycle, for example (123), and σ a transposition, for example (12). The group S3 is generated by τ and σ. We have G = hτ, σi with relations στ σ = τ 2 . Note that τ generates the abelian group U = {e, τ, τ 2 } ∼ = Z/3Z. Consider any irreducible representation W of S3 and let it be a representation of U. Fact 4.2. As a representation of U, W decomposes as a direct sum of 1-dimensional L representations W = Cvi . Fact 4.3. Each vi is eigenvector for τ with eigenvalue a cube root of unity. Therefore τ vi = vi , ωvi , or ω 2 vi where ω is a cube root of unity. How does σ act on each vi ? Lemma 4.4 If τ v = ω j v then τ (σv) = ω 2j (σv). Proof. We compute τ (σv) = σ(τ 2 v) = σ(ω 2j v) = ω 2j σv

L Let W be an irreducible representation of S3 . Write W = Cvi as above. Choose some vector v = vj and suppose τ v = ω i v. (It’s bad practice to use i for a exponent of a complex number.) Now we do a casework bash. 1. If ω i 6= 1, then ω 2i 6= ω i , so v and σv are linearly independent. Claim 4.5. hv, σvi is invariant under S3 . Proof. It is invariant under σ already because σ 2 = id. It is invariant under τ because v and σv are both eigenvectors of τ. This is a two-dimensional irreducible representation of S3 , and we can check that it is the standard representation. 2. If ω i = 1 then ω 2i = 1 so v, σv are both eigenvectors of τ with eigenvalue 1. a) If σv, v are dependent, then σ, τ both preserve hvi , so we get a 1-dimensional irreducible representation. We can see if it is the trivial or sign representation by checking whether ρ(σ) = 1 or −1, respectively. 10


b) Suppose that σv, v are linearly independent. Claim 4.6. v +σv spans a 1-dimensional representation isomorphic to the trivial representation, and v − σv spans a 1-dimensional representation isomorphic to the sign representation. Proof. Note that v + σv is a eigenvector for τ with eigenvalue 1 and an eigenvector for σ with eigenvalue 1. Also, v − σv is an eigenvector for τ with eigenvalue 1 and an eigenvector for σ with eigenvalue −1. In this case W is not actually irreducible. This example suggests that if we have a representation V of G, knowing the eigenvalues of each g ∈ G is helpful. Fact 4.7. If the eigenvalues of g (where g means ρ(g)) are {λi }, then the eigenvalues of g k are {λki }. Fact 4.8. If for each g ∈ G we know the sum of the eigenvalues (trace), then we can recover the eigenvalues of each g ∈ G. This is done via Newton sums and solving a degree n polynomial, where n is the dimension of the representation. Motivated by this fact, we introduce a definition: Definition 4.9. If ρ(g) : G → GL(V ) is a representation of G, the character χV is a complex-valued function on G defined by χV (g) = Tr(ρ(g)).

Example 4.10 (Standard Representation of S3). Let w1 = (1, −1, 0), w2 = (0, 1, −1). In the basis {w1, w2},

ρ(id) = [ 1 0 ]    ρ((12)) = [ −1 1 ]    ρ((123)) = [ 0 −1 ]
        [ 0 1 ]              [  0 1 ]               [ 1 −1 ]

χ(id) = 2,  χ((12)) = 0,  χ((123)) = −1

Definition 4.11. Given g ∈ G, the conjugacy class of g is {hgh−1 |h ∈ G}.  Note 4.12. Since Tr ABA−1 = Tr(B), χ(hgh−1 ) = χ(g). This means that χ is constant on conjugacy classes. Definition 4.13. A function G → C that is constant on conjugacy classes is called a class function. Fact 4.14. Conjugacy classes in Sn are determined by cycle structure. Therefore, conjugacy classes in Sn are specified by the list of cycle lengths, that is, partitions of n. 11


Example 4.15 For S3 , this would be (123) 7→ (3) (12)(3) 7→ (2, 1) (1)(2)(3) 7→ (1, 1, 1).

Example 4.16 (Character Table of S3). We can create a character table to record the character of each representation. The columns are indexed by conjugacy classes, each labeled by a representative element, and the rows are indexed by representations.

            e    (12)   (123)
Trivial     1     1       1
Sign        1    −1       1
Standard    2     0      −1

Note 4.17. χV (e) = dim V since the trace of the identity operator is the dimension.

Proposition 4.18 If V, W are representations of G, then χV ⊕W = χV + χW and χV ⊗W = χV χW . Proof. First prove that every g is diagonalizable using the fact that G is a finite group. Let {λi }, {µj } be eigenvalues of g on V, W, respectively. Choose bases of V, W consisting of the corresponding eigenvectors of g. The rest of the proof is left as an exercise.

Proposition 4.19. If V is the permutation representation associated to the action of a group G on a finite set X, then χV(g) = #{elements of X fixed by g}.

Proof. Each ρ(g) is a permutation matrix, and we get a 1 on the diagonal iff g fixes the corresponding element of X.

Here is a preview of next lecture.

Proposition 4.20. Consider the regular representation R of G with basis {vh | h ∈ G}, where g · vh = vgh. If we decompose R into irreducible representations, then
R = ⊕_i Vi^{⊕ai},
where each irreducible Vi of G occurs exactly ai = dim Vi times.


Example 4.21 When G is S3 then R = triv ⊕ sign ⊕ std ⊕ std since triv, sign both have dimension 1, and std has dimension 2.

Corollary 4.22. Let f^λ = #{standard tableaux of shape λ}. Then the irreducibles Vλ of Sn are in bijection with the partitions λ of n, and f^λ = dim Vλ. This means that
|Sn| = ∑_{λ⊢n} (f^λ)².
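This identity is easy to check by brute force for small n. The sketch below is not from the notes; it counts standard tableaux by the elementary "remove a corner box" recursion rather than by any formula, and compares the sum of squares with n!.

```python
from functools import lru_cache
from math import factorial

def partitions(n, largest=None):
    """All partitions of n as weakly decreasing tuples."""
    if n == 0:
        return [()]
    largest = n if largest is None else largest
    out = []
    for first in range(min(n, largest), 0, -1):
        out += [(first,) + rest for rest in partitions(n - first, first)]
    return out

@lru_cache(maxsize=None)
def f(shape):
    """Number of standard Young tableaux of the given shape."""
    if sum(shape) == 0:
        return 1
    total = 0
    for i in range(len(shape)):
        # row i ends in a removable corner iff the next row is strictly shorter
        if i == len(shape) - 1 or shape[i] > shape[i + 1]:
            smaller = list(shape)
            smaller[i] -= 1
            total += f(tuple(x for x in smaller if x > 0))
    return total

for n in range(1, 8):
    assert sum(f(lam) ** 2 for lam in partitions(n)) == factorial(n)
print("sum of (f^lambda)^2 equals n! for n = 1..7")
```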


§5 February 6, 2019

Recall that if ρ : G → GL(V) is a representation, the character χ is the function χ : G → C such that χ(g) = Tr(ρ(g)). The trace of a linear transformation can be defined without reference to a basis. However, if desired we can choose a basis for V to write ρ(g) as an n × n matrix. In the latter case we must verify that if we choose two different bases for V, we get the same trace. Under two different bases, we get two different matrices ρ(g) and ρ′(g) related by a change-of-basis matrix T, that is, Tρ(g)T⁻¹ = ρ′(g). Since Tr(TAT⁻¹) = Tr(A), we get Tr(ρ′(g)) = Tr(ρ(g)) as desired.

Remark 5.1. If the vector space V is a representation of G, we often write χV for its character, and we have χV(g) = Tr(g|V), where g|V refers to the matrix encoding of the action of g on V.

Recall that characters are constant on conjugacy classes of G: χ(ghg⁻¹) = χ(h). Recall also that conjugacy classes in Sn are determined by the cycle type of a permutation. The next results concerning characters will be stated without proof; refer to Sections 1.8 and 1.9 of Sagan's book for a reference.

Definition 5.2. Define Cclass(G) to be the set of all class functions on G, the functions which are constant on conjugacy classes, so that dim Cclass(G) = #{conjugacy classes}. Let
⟨α, β⟩ = (1/|G|) ∑_{g∈G} α(g) \overline{β(g)}
define an inner product on Cclass(G).

Theorem 5.3. Under this inner product, the characters of the irreducible representations of G are orthonormal:
⟨χV, χW⟩ = 1 if V ≅ W, and 0 if V ≇ W.

Corollary 5.4 The characters of irreducible representations of G are linearly independent, hence #{irreducible representations} ≤ #{conjugacy classes}.

Theorem 5.5 In fact this holds with equality, #{irreducible representations} = #{conjugacy classes}.


Example 5.6. Recall the character table for S3, where the top row records the size of each conjugacy class:

            e (×1)   (12) (×3)   (123) (×2)
Trivial       1          1           1
Sign          1         −1           1
Standard      2          0          −1

We compute
⟨χsign, χstd⟩ = (1/6)(1 · (1)(2) + 3 · (−1)(0) + 2 · (1)(−1)) = 0.
It is important to remember that each conjugacy class may contain several elements, since the inner product is a sum over group elements and not over conjugacy classes.
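The orthonormality of the whole table is just as quick to verify numerically. The following sketch is not part of the notes; since the character values of S3 are real, the complex conjugate in the inner product is omitted.

```python
# Check that the rows of the S3 character table are orthonormal under
# <a, b> = (1/|G|) * sum_g a(g) * conj(b(g)), summing class by class.
class_sizes = [1, 3, 2]                      # classes of e, (12), (123) in S3
chars = {
    "trivial":  [1,  1,  1],
    "sign":     [1, -1,  1],
    "standard": [2,  0, -1],
}

def inner(a, b, sizes=class_sizes, order=6):
    # a sum over group elements = a sum over classes weighted by class size
    return sum(s * x * y for s, x, y in zip(sizes, a, b)) / order

for name1, chi1 in chars.items():
    for name2, chi2 in chars.items():
        expected = 1 if name1 == name2 else 0
        assert inner(chi1, chi2) == expected
print("the three irreducible characters of S3 are orthonormal")
```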

Example 5.7. We can begin to construct the character table for S4:

          e    (12)   (123)   (1234)   (12)(34)
triv      1
sign      1
std       3
a
b

We use the facts χV(e) = dim V and |G| = ∑_{Vi irreducible} (dim Vi)².

Let us give a few more helpful facts about representations.

Lemma 5.8. If V and W are representations of G, then χV⊕W = χV + χW.

Proof. It suffices to prove that Tr(ρV⊕W(g)) = Tr(ρV(g)) + Tr(ρW(g)). But this follows from the fact that the matrix of g on V ⊕ W is the block-diagonal matrix
[ ρV(g)    0    ]
[   0    ρW(g)  ]


Corollary 5.9. Any representation is determined by its character.

Proof. If V ≅ V1^{⊕a1} ⊕ · · · ⊕ Vk^{⊕ak}, with the Vi pairwise distinct irreducible representations, then χV = a1χV1 + · · · + akχVk. No other non-isomorphic representation has the same character, since the χVi form a basis for the space of class functions.

Lemma 5.10. A representation V is irreducible if and only if ⟨χV, χV⟩ = 1.

Proof. Writing V ≅ V1^{⊕a1} ⊕ · · · ⊕ Vk^{⊕ak}, we compute
⟨χV, χV⟩ = ⟨a1χV1 + · · · + akχVk, a1χV1 + · · · + akχVk⟩ = a1² + · · · + ak².
This equals 1 if and only if ai = 1 for exactly one i and the remaining aj are 0, which corresponds to V being a single copy of an irreducible.

Corollary 5.11. The multiplicity ai of the irreducible Vi in V is ⟨χV, χVi⟩.

Proposition 5.12. Consider the regular representation R of G, the vector space with basis {vh | h ∈ G} and group action g · vh = vgh. Then decomposing R into irreducible representations gives
R = ⊕_{Vi irreducible} Vi^{⊕ dim Vi}.

Proof. Recall that
χR(g) = #{elements of G fixed by g} = |G| if g = e, and 0 if g ≠ e.
Write R = ⊕_{Vi irreducible} Vi^{⊕ai} for some multiplicities ai. Then we derive
ai = ⟨χR, χVi⟩ = (1/|G|) ∑_{g∈G} χR(g) \overline{χVi(g)} = (1/|G|) · χR(e)χVi(e) = dim Vi.


Corollary 5.13. |G| = ∑_{Vi irreducible} (dim Vi)².

Proof. Count dimensions, noting that dim R = |G|.

Definition 5.14. For a partition λ, let f^λ be the number of standard tableaux of shape λ.

Example 5.15. f^{(2,1)} = 2, since the standard tableaux of shape (2, 1) are

1 2    1 3
3      2

Remark 5.16. For Sn, we will prove later in the course that dim Vλ = f^λ, and hence the corollary states that
n! = ∑_{λ⊢n} (f^λ)².

It turns out that:

Theorem 5.17 (Robinson–Schensted–Knuth (RSK) Correspondence). There is a combinatorial bijection between permutations π ∈ Sn and pairs of standard tableaux of the same shape with n boxes. This shows that
n! = ∑_{λ⊢n} (f^λ)².

Remark 5.18. Knuth changed his name to “Ea” apparently because it related to some god-like being, and right before Y2K he changed his name again to “EaEa.” We will construct a map from permutations to pairs of standard tableau. We introduce the elementary operations of “insertion.” To insert element k into a standard tableau T, 1. Replace in row 1 of T the smallest element a > k by k, thus bumping a. If all elements in row 1 of T are strictly less than k, put k at the end of the row and stop. 2. If a has been bumped by k, insert a into the next row according to Step 1. 3. Continue similarly.


Example 5.19. Let

T = 1 3 5
    2 6
    7 8

and k = 4. In Step 1 we bump 5 and insert 4 into row 1 to get

1 3 4
2 6
7 8

In Step 2 we bump 6 and insert 5 into row 2 to get

1 3 4
2 5
7 8

In Step 3 we bump 7 and insert 6 into row 3 to get

1 3 4
2 5
6 8

In Step 4 we insert 7 into a new fourth row to get the final result

1 3 4
2 5
6 8
7

Definition 5.20 (RSK Algorithm). We associate to π ∈ Sn a pair (P, Q) of standard tableaux by inserting π(1), π(2), . . . , π(n) in sequence.
• The base case is P(0) = Q(0) = ∅.
• Suppose we have constructed (P(t), Q(t)).
  1. Let P(t + 1) equal the result of inserting π(t + 1) into P(t).
  2. Construct Q(t + 1) from Q(t) by putting t + 1 into the unique position such that Q(t + 1) and P(t + 1) have the same shape.
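Here is a short Python sketch of row insertion and of the map π ↦ (P, Q); it is not part of the notes, and the function names are my own. Tableaux are stored as lists of rows, and the output is checked against Example 5.21 below.

```python
import bisect

def row_insert(P, k):
    """Insert k into tableau P by row insertion; return the new tableau
    and the (row, column) of the box that was added."""
    P = [row[:] for row in P]
    r = 0
    while True:
        if r == len(P):                    # start a new row at the bottom
            P.append([k])
            return P, (r, 0)
        row = P[r]
        j = bisect.bisect_right(row, k)    # first entry strictly greater than k
        if j == len(row):                  # k is larger than everything: append
            row.append(k)
            return P, (r, j)
        row[j], k = k, row[j]              # bump that entry down to the next row
        r += 1

def rsk(pi):
    """Return the insertion tableau P and recording tableau Q of pi."""
    P, Q = [], []
    for t, x in enumerate(pi, start=1):
        P, (r, c) = row_insert(P, x)
        if r == len(Q):
            Q.append([])
        Q[r].append(t)                     # Q grows in the same box as P
        assert c == len(Q[r]) - 1
    return P, Q

P, Q = rsk([1, 4, 2, 7, 8, 3, 6, 5])       # the permutation of Example 5.21
print(P)   # [[1, 2, 3, 5], [4, 6, 8], [7]]
print(Q)   # [[1, 2, 4, 5], [3, 6, 7], [8]]
```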


Example 5.21. Let π = (1, 4, 2, 7, 8, 3, 6, 5). The algorithm produces the following sequence of pairs of standard tableaux (P(t), Q(t)), with the rows of each tableau separated by slashes:
1. P = 1, Q = 1
2. P = 1 4, Q = 1 2
3. P = 1 2 / 4, Q = 1 2 / 3
4. P = 1 2 7 / 4, Q = 1 2 4 / 3
5. P = 1 2 7 8 / 4, Q = 1 2 4 5 / 3
6. P = 1 2 3 8 / 4 7, Q = 1 2 4 5 / 3 6
7. P = 1 2 3 6 / 4 7 8, Q = 1 2 4 5 / 3 6 7
8. P = 1 2 3 5 / 4 6 8 / 7, Q = 1 2 4 5 / 3 6 7 / 8
Hence

P = 1 2 3 5    Q = 1 2 4 5
    4 6 8          3 6 7
    7              8

Theorem 5.22 The RSK algorithm gives a bijection between permutations π ∈ Sn and pairs (P, Q) of standard tableau with n boxes of the same shape.


§6 February 11, 2019 Remark 6.1. The result of the “class pace” survey is that no one person found the class pace perfect, but the average happens to be perfect. The correct answer to the question, rather amusingly, is “pretty slow.” Today we will review the RSK correspondence. Note 6.2. The algorithm associates to each permutation π ∈ Sn a pair (P, Q) of standard Young tableau (P, Q) of the same shape, by inserting elements step by step. At the first step P (0) = Q(0) = ∅. Suppose we have constructed (P (t), Q(t)). Then • Let P (t + 1) equal P (t) inserted with π(t + 1) via the insertion procedure defined last class. • Construct Q(t + 1) from Q(t) by putting t + 1 in the (unique) position for which the shape of Q(t + 1) is identical to that of P (t + 1).

Example 6.3. Suppose π = 4236517. Then (rows separated by slashes):
P(1) = 4, Q(1) = 1
P(2) = 2 / 4, Q(2) = 1 / 2
P(3) = 2 3 / 4, Q(3) = 1 3 / 2
P(4) = 2 3 6 / 4, Q(4) = 1 3 4 / 2
P(5) = 2 3 5 / 4 6, Q(5) = 1 3 4 / 2 5
P(6) = 1 3 5 / 2 6 / 4, Q(6) = 1 3 4 / 2 5 / 6
P(7) = 1 3 5 7 / 2 6 / 4, Q(7) = 1 3 4 7 / 2 5 / 6

Theorem 6.4 The RSK algorithm is a bijection between permutations π ∈ Sn and ordered pairs (P, Q) = (P (π), Q(π)) of standard Young tableau of the same shape. Proof. The proof is by reversing the algorithm. Note that our algorithm implicitly shows that the map π 7→ (P (π), Q(π)) is injective. We need only show that it is surjective by constructing the reverse of the insertion procedure. In the reverse insertion of a number k, we look for the biggest number smaller than k. The proof will be via example.


Example 6.5. Consider
P(7) = 1 3 5 7 / 2 6 / 4,  Q(7) = 1 3 4 7 / 2 5 / 6.
The largest entry of Q(7) is 7, at the end of the first row, so the last box created is the one at the end of the first row of P(7); since it lies in the first row, the number inserted at the most recent step was the 7 sitting there. The previous tableaux must have been
P(6) = 1 3 5 / 2 6 / 4,  Q(6) = 1 3 4 / 2 5 / 6.
The largest entry of Q(6) is 6, in the third row, so the box containing 4 in P(6) was created at step 6. Reversing the insertion, 4 was bumped into row 3 by the 2 in row 2, which in turn was bumped out of row 1 by a newly inserted 1. Hence the most recent letter of the permutation was 1, and the previous tableaux must have been
P(5) = 2 3 5 / 4 6,  Q(5) = 1 3 4 / 2 5.
We can continue similarly.

Example 6.6. S3 has 6 permutations. We can check that these are in correspondence with the pairs of standard tableaux (rows separated by slashes):
(1 2 3, 1 2 3)
(1 2 / 3, 1 2 / 3)
(1 2 / 3, 1 3 / 2)
(1 3 / 2, 1 2 / 3)
(1 3 / 2, 1 3 / 2)
(1 / 2 / 3, 1 / 2 / 3)

Definition 6.7. Given π = x1x2 · · · xn ∈ Sn, an increasing (respectively decreasing) subsequence of π is a subsequence xi1, xi2, . . . , xik with i1 < i2 < · · · < ik such that xi1 < xi2 < · · · < xik (respectively xi1 > xi2 > · · · > xik).


Example 6.8. If π = 4236517, then 4, 6, 7 is an increasing subsequence, as is 2, 3, 5, 7.

Remark 6.9. The longest increasing subsequence problem can be solved in O(n log n) time, where n is the length of the permutation, using dynamic programming with binary search. RSK also gives a remarkably efficient solution to this problem.

Theorem 6.10 (Schensted). Let π ∈ Sn. Let inc(π), dec(π) denote the lengths of the longest increasing and longest decreasing subsequences, respectively. Then inc(π) is the length of the first row of P(π) and dec(π) is the length of the first column of P(π).
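As a sanity check on Schensted's theorem (not in the notes), the sketch below computes inc(π) and dec(π) by the usual O(n log n) method and compares them with the running example π = 4236517, whose P(π) has first row 1 3 5 7 and first column 1, 2, 4.

```python
import bisect

def longest_increasing_length(seq):
    """Length of the longest strictly increasing subsequence."""
    tails = []                        # tails[j] = smallest possible last entry of an
    for x in seq:                     # increasing subsequence of length j + 1
        j = bisect.bisect_left(tails, x)
        if j == len(tails):
            tails.append(x)
        else:
            tails[j] = x
    return len(tails)

pi = [4, 2, 3, 6, 5, 1, 7]
inc = longest_increasing_length(pi)
dec = longest_increasing_length([-x for x in pi])   # decreasing = increasing after negation
print(inc, dec)   # 4 and 3: the first row of P(pi) has length 4, the first column length 3
```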

Example 6.11 We can verify the statement of the theorem for π = 4236517 by looking at the result of (P (π), Q(π)). The proof will be via a lemma. Lemma 6.12 If π = x1 x2 · · · xn and xk enters into P (k −1) in column j, then the longest increasing subsequence of π ending in xk has length j. Proof. The proof is by induction in k. The base case k = 1 is evident. Suppose the statement is true for values at most k − 1. We need to show two things. • For the existence of an increasing subsequence of length j, let y be the value in box (1, j − 1) of Pk−1 where the 1 refers to the row and j − 1 refers to the column. Since xk enters in column j, we must have y < xk . By induction, there exists an increasing subsequence σ of π ending in y of length j − 1. Now we can just insert xk onto the end of σ to create an increasing subsequence of length j. • Now we show that there is no increasing subsequence σ of length greater than j ending in xk . Remark 6.13. As we go from P (i) to P (i + 1) the value in a given box will never increase because it can only be bumped by a number less than it. Suppose we have π = x1 x2 · · · xn , and xk enters into P (k − 1) in column j. Assume for contradiction that there is an increasing subsequence σ b of π ending in xk with length j + 1. Write σ b= · · · xi xk | {z } increasing subsequence of length j

where i < k, xi < xk . We will get a contradiction due to the induction hypothesis if xi entered into P (i − 1) in a column whose index is less than j. Hence xi must have entered into P (i − 1) in a column whose index is at least j.


Let us first take the case in which the column index is exactly j. In P(i − 1) we had xi in position (1, j), and in P(k − 1) we have xk in position (1, j), with xi < xk. But this contradicts the fact that the value in a given box never increases.[1]

Corollary 6.14 The longest increasing subsequence in π is the length of the first row in P (π). Proof. It suffices to take the maximum of the longest increasing subsequences that end in a given element. The length of such a sequence is the column index at which that element entered. Because the shape of the tableau is fixed, this maximum is simply the length of the first row. This concludes the proof of the increasing part of Schensted’s Theorem. Definition 6.15. If T is a Young diagram or standard Young tableau, the transpose or conjugate T t is obtained by reflecting the tableau over the diagonal.

Example 6.16. The transpose of
T = 1 3 6 7 / 2 4 / 5
(rows separated by slashes) is
T^t = 1 2 5 / 3 4 / 6 / 7.
To prove the other part of Schensted, we need the following proposition.

Proposition 6.17 (Schensted (EaEa)). Let π^r be the reverse permutation of π, obtained by reading it from right to left in list notation. If P(π) = P, then P(π^r) = P^t.

[1] I was very confused about this proof as presented in class. If you are confused too, refer to the proof of Lemma 3.3.3 in Sagan's book.


Example 6.18. If π = 4236517, then π^r = 7156324. The sequence {P(t)} is (rows separated by slashes):
7
1 / 7
1 5 / 7
1 5 6 / 7
1 3 6 / 5 / 7
1 2 6 / 3 / 5 / 7
1 2 4 / 3 6 / 5 / 7

Remark 6.19. To conclude Schensted's Theorem, we note that
length of the first column of P(π) = length of the first row of P(π^r)
                                   = length of the longest increasing subsequence of π^r
                                   = length of the longest decreasing subsequence of π.

Example 6.20. If π = 4236517 then
P(π) = 1 3 5 7 / 2 6 / 4,  Q(π) = 1 3 4 7 / 2 5 / 6.

Definition 6.21. To define column insertion analogous to row insertion, we insert elements into the leftmost column instead of the topmost row. The actual procedure is entirely analogous to the row insertion procedure. We write cx (P ) for the tableau obtained by column insertion of x into P and rx (P ) for the tableau obtained by row insertion of x into P.


Example 6.22. If π = 4236517, column insertion leads to the following sequence {P′(t)} (rows separated by slashes):
4
2 4
2 4 / 3
2 4 / 3 / 6
2 4 / 3 6 / 5
1 2 4 / 3 6 / 5
1 2 4 / 3 6 / 5 / 7
Note that the resulting tableau is the transpose of the row insertion result P(π) = 1 3 5 7 / 2 6 / 4.

Note 6.23. Column insertion applied to a permutation π produces the transpose of the result of row insertion applied to the same permutation.

Lemma 6.24 For any partial tableau P and distinct elements x, y ∈ / P, we have cy rx (P ) = rx cy (P ), that is, row insertion and column insertion commute. Proof. The proof is rather technical and involves casework, so the reader is referred to Proposition 3.2.2 of Sagan’s book. We can now use this lemma to prove the earlier proposition. Proposition 6.25 (Schensted (EaEa)) Let π r be the reverse permuation of π, obtained by reading it from right to left in list notation. If P (π) = P, then P (π r ) = P t . Proof. By repeated applications of commutation of row insertion and column insertion,


we compute P (π r ) = rx1 · · · rxn−1 rxn (∅) = rx1 · · · rxn−1 cxn (∅) = cxn rx1 · · · rxn−1 (∅) = cxn rx1 · · · cxn−1 (∅) = cxn cxn−1 rx1 · · · rxn−2 (∅) = ··· = cxn cxn−1 · · · cx1 (∅) = P t.

There are many standard operations we can perform on permutations, and we can see how they interact with the RSK algorithm. For example, what happens if we apply RSK to π⁻¹?

Example 6.26. If π = 4236517, then π⁻¹ = 6231547. We get the following sequence {(P′(t), Q′(t))} (rows separated by slashes):
P′(1) = 6, Q′(1) = 1
P′(2) = 2 / 6, Q′(2) = 1 / 2
P′(3) = 2 3 / 6, Q′(3) = 1 3 / 2
P′(4) = 1 3 / 2 / 6, Q′(4) = 1 3 / 2 / 4
P′(5) = 1 3 5 / 2 / 6, Q′(5) = 1 3 5 / 2 / 4
P′(6) = 1 3 4 / 2 5 / 6, Q′(6) = 1 3 5 / 2 6 / 4
P′(7) = 1 3 4 7 / 2 5 / 6, Q′(7) = 1 3 5 7 / 2 6 / 4
The result is
P(π⁻¹) = 1 3 4 7 / 2 5 / 6 = Q(π),  Q(π⁻¹) = 1 3 5 7 / 2 6 / 4 = P(π).
Note that this procedure swaps P and Q! This is a special case of Schutzenberger's Theorem.


Theorem 6.27 (Schutzenberger) If π 7→ (P, Q) via the RSK algorithm then π −1 7→ (Q, P ) via the RSK algorithm.


§7 February 13, 2019 The instructor decides to draw a diagram of fields of mathematics since for many this is a first course on combinatorics. Thanks Harvard! Algebraic combinatorics is related to probability, representation theory, algebraic geometry, physics, and graph theory. Recall the theorem stated at the end of last class. Theorem 7.1 (Schutzenberger) If π 7→ (P, Q) via the RSK algorithm then π −1 7→ (Q, P ) via the RSK algorithm. Remember that P is the insertion tableau and Q is the recording tableau, so it is not at all clear that we have this symmetry when we consider the inverse permutation. One way to prove the theorem is to give a different (equivalent) RSK algorithm that is more symmetric between the two tableaux. Viennot, a graduate student of Schutzenberger, gave the shadow line construction for RSK. Definition 7.2 (Shadow Line Construction). The algorithm is to graph the permuation x1 x2 · · · xn in the Cartesian plane, putting a point with coordinates (i, xi ), ∀1 ≤ i ≤ n. Each point casts a shadow upon all points in the upper right quadrant with reference to the point. We call the first shadow line the outer broken line such that the union of the shadows is to the upper right of this broken line. The second shadow line is a broken line disjoint from the first shadow line that is the outermost line within the shadows cast by the first line. The k-th shadow line is defined similarly. Label the lines L1 , L2 , . . . in order. The shadow diagram is the union of shadow lines. Let xLi denote the x-coordinate of the vertical ray of Li on the left, and let yLi denote the y-coordinate of the horizontal ray of Li on the bottom.


Example 7.3. Consider π = 837269451. The RSK algorithm proceeds as follows (rows separated by slashes):
P(1) = 8, Q(1) = 1
P(2) = 3 / 8, Q(2) = 1 / 2
P(3) = 3 7 / 8, Q(3) = 1 3 / 2
P(4) = 2 7 / 3 / 8, Q(4) = 1 3 / 2 / 4
P(5) = 2 6 / 3 7 / 8, Q(5) = 1 3 / 2 5 / 4
P(6) = 2 6 9 / 3 7 / 8, Q(6) = 1 3 6 / 2 5 / 4
P(7) = 2 4 9 / 3 6 / 7 / 8, Q(7) = 1 3 6 / 2 5 / 4 / 7
P(8) = 2 4 5 / 3 6 9 / 7 / 8, Q(8) = 1 3 6 / 2 5 8 / 4 / 7
P(9) = 1 4 5 / 2 6 9 / 3 / 7 / 8, Q(9) = 1 3 6 / 2 5 8 / 4 / 7 / 9

Note that the xLi are exactly the first row of Q and the yLi are the first row of P . Also notice that the right-angle kinks on Lj are precisely those elements passing through cell (1, j) during the construction of P. For example, in cell (1, 1) of P, the numbers that pass through are 8, 3, 2, 1. In cell (1, 2), the numbers are 7, 6, 4. In cell (1, 3), the numbers are 9, 5. Note 7.4. We see that P1j = yLj and Q1j = xLj . In fact the right-angle kinks on line Lj are precisely those elements passing through cell (1, j) during the construction of P. Proposition 7.5 Consider a shadow diagram of π = x1 · · · xn . To construct the first row of Pk = P (x1 · · · xk ), we let i = #{shadow lines that the vertical line x = k intersects}. Then the first row of Pk is y1 · · · yi where yj is the y-coordinate of the lowest point of intersection with Lj .


Proof. The proof is by induction on k. The base case is k = 0, which is clear. Assume the result holds for x = k, and consider the line x = k + 1. By induction, the first row of Pk is y1 · · · yi . We do casework. 1. If xk+1 > yi , then in the shadow line diagram we have the point (k + 1, xk+1 ) that is above the shadow line Li . We will have a new vertical ray starting at (k + 1, xk+1 ), giving a new ray of the shadow line diagram. Considering the intersection of x = k + 1 with the shadow lines, we see that the values of y1 , . . . , yi do not change. There is one new intersection yi+1 = xk+1 . Hence the shadow line construction replaces y1 · · · yi 7→ y1 · · · yi xk+1 , which is precisely what RSK gives us in the top row of P, namely adds k + 1 to the end of the top row. 2. If xk+1 < yi , then we have y1 < · · · < yh−1 < xk+1 < yh < · · · yi for some h. By induction, the first row of Pk is y1 · · · yi . Intersecting with the vertical line x = k + 1, the lowest coordinate of Lh is now yh0 = xk+1 . All other y-values remain the same. Thus the shadow line construction replaces y1 y2 · · · yh · · · yi 7→ y1 y2 · · · xk+1 · · · yi . In the corresponding RSK algorithm, xk+1 replaces yh , so the shadow line construction replicates what happens in the first row of the RSK algorithm.

Note 7.6. At time k, the line x = k intersects one shadow line at a ray or line segment and all others at a single point. The ray corresponds with putting an element at the end of the first row in P, and the segment corresponds with bumping an element in P.

Corollary 7.7 If π 7→ (P, Q) under RSK and π has shadow lines {Lj } with xLj , yLj defined as before, then ∀j, Pij = yLj and Qij = xLj . Proof. It remains to show the statement for Q. According to RSK, we add an entry k to Q in cell (1, j), that is, Qij = k when xk is greater than all elements in the first row of Pk−1 . By Note 7.6, this happens exactly when the line x = k intersects the shadow line Lj in a vertical ray. In other words, Qij = k = xLj . We have discussed extensively the first row of P, Q in relation to the shadow line diagram. What in the shadow line diagram corresponds to the numbers inserted into the second row of P ? The bumped numbers are related to the right-angle kinks whose vertices point northeast. This gives us an idea of how to construct the second row of P. We should throw away the leftmost vertical lines to make new shadow lines starting with the northeast corners of the old ones. Formally, we consider all the northeast kinks and let these be the new points that are casting shadows on the upper-right quadrants in reference to these northeast kinks. We now construct the shadow lines in the same way, except with these northeast kinks playing the role of the original points (i, xi ). We call these the secondary shadow lines. 30


Corollary 7.8 If π 7→ (P, Q) under RSK and π has secondary shadows lines {L0j } with xL0i denoting the x-coordinates of the vertical rays of L0i , and yL0i denoting the y-coordinates of the horizontal rays of L0i . Then ∀j, P2j = yL0j and Q2j = xL0j . Remark 7.9. We can now iterate this construction to get subsequent rows of P and Q. We are now ready to prove the theorem of Schutzenberger. Theorem 7.10 (Schutzenberger) If π 7→ (P, Q) via the RSK algorithm then π −1 7→ (Q, P ) via the RSK algorithm. Proof. If π = x1 · · · xn , then in the shadow diagram we plot (i, xi ) ∀i. In the shadow line diagram for π −1 we plot (xi , i), ∀i. This symmetry, which is a reflection of all the points over the line y = x, switches the roles of P and Q in the shadow line diagram. The key idea today was a new description of RSK that was symmetric in P and Q. Next class we will introduce Specht modules. There is a topic called Knuth equivalence that we probably will not get to in the interest of time. As far as the instructor knows, there is no formula for the RSK correspondence of the composition of two transpositions.


§8 February 20, 2019

Today we will construct the irreducible representations of Sn. We have stated (quite a few times now) that these are in bijection with conjugacy classes of Sn, which are in bijection with partitions of n, namely Young diagrams with n boxes. We will give a new construction from 2004 that is a bit ad hoc. We saw that if we look at the regular representation of a group G, it breaks up into a direct sum of irreducible representations, and each irreducible representation appears as a summand. Thus to construct all the irreducibles of Sn, we should find them inside the regular representation.

Let T and T′ be numberings of a Young diagram with n boxes with numbers from 1, . . . , n, with repeats allowed. Sn acts on the set of numberings as follows: if σ ∈ Sn, then σ · T is the numbering in which σ(i) occupies the box that i used to occupy.

Example 8.1. If σ = (125)(34), then (rows separated by slashes)
σ · (2 3 / 1 5 / 4) = 5 4 / 2 1 / 3.

Definition 8.2. Fix a numbering T. The row group R(T) is the subgroup of Sn that preserves the rows of T. The column group C(T) is the subgroup of Sn that preserves the columns of T.

Example 8.3. For T = 2 3 6 / 1 5 / 4, we have R(T) = S{2,3,6} × S{1,5} × S{4}. Products of groups like this are called Young subgroups.

Note 8.4. Conjugation acts on row and column groups by
R(σ · T) = σR(T)σ⁻¹,  C(σ · T) = σC(T)σ⁻¹.

Definition 8.5. A tabloid is an equivalence class of numberings of Young diagrams, where two numberings are equivalent if the corresponding rows contain the same entries. We denote the equivalence class of T by {T}.


Example 8.6. The following two numberings are equivalent (rows separated by slashes):
1 4 7 / 3 6 / 2 5  ∼  4 7 1 / 6 3 / 5 2.
To represent an equivalence class pictorially, we can draw the Young diagram with no vertical bars separating entries in each row. This reflects the fact that numbers in the same row can be permuted by the row group.

Note 8.7. Sn acts on tabloids by σ · {T} = {σ · T}.

Definition 8.8. The group ring C[G] of a group G consists of all complex linear combinations
∑_{σ∈G} χσ σ
where χσ ∈ C. We note the following properties:
• C[G] is a vector space.
• C[G] is a ring with multiplication inherited from (G, ·).
• G acts on C[G]: for g ∈ G,
g · (∑_{σ∈G} χσ σ) = ∑_{σ∈G} χσ (gσ).

Definition 8.9. Let A = C[Sn]. Given a numbering T of a diagram with n boxes, define the element
aT = ∑_{p∈R(T)} p
of A, called the row stabilizer, and define
bT = ∑_{q∈C(T)} sgn(q) q.
Finally, define cT = aT bT. The elements aT, bT, cT are known as Young symmetrizers.

Example 8.10. Let T = 1 3 / 2 4 / 5 (rows separated by slashes). Then
aT = e + (13) + (24) + (13)(24),
bT = e − (34) − (12) − (15) − (25) + (125) + · · ·

On Homework 2 there is a problem:


Exercise 8.11. For p ∈ R(T) and q ∈ C(T), show that
p · aT = aT · p = aT,
q · bT = bT · q = sgn(q) bT.
Also show that
aT · aT = #R(T) aT,
bT · bT = #C(T) bT.
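These identities are easy to confirm by brute force. The sketch below is not part of the notes: it builds aT and bT for the tableau of Example 8.10 as formal sums in the group algebra (permutations stored as tuples on {0, . . . , 4}, elements as coefficient dictionaries) and checks the last two identities.

```python
from itertools import permutations

n = 5

def compose(p, q):
    """(p*q)(i) = p(q(i)): apply q first, then p."""
    return tuple(p[q[i]] for i in range(n))

def sign(p):
    visited, sgn = [False] * n, 1
    for i in range(n):
        if not visited[i]:
            j, length = i, 0
            while not visited[j]:
                visited[j], j, length = True, p[j], length + 1
            sgn *= (-1) ** (length - 1)
    return sgn

def subgroup_fixing_blocks(blocks):
    """All permutations preserving each block (a Young subgroup)."""
    return [p for p in permutations(range(n))
            if all(all(p[i] in b for i in b) for b in blocks)]

def alg_mult(x, y):
    out = {}
    for p, a in x.items():
        for q, b in y.items():
            r = compose(p, q)
            out[r] = out.get(r, 0) + a * b
    return {k: v for k, v in out.items() if v != 0}

# 0-indexed: rows {1,3},{2,4},{5} -> {0,2},{1,3},{4}; columns {1,2,5},{3,4} -> {0,1,4},{2,3}
R = subgroup_fixing_blocks([{0, 2}, {1, 3}, {4}])
C = subgroup_fixing_blocks([{0, 1, 4}, {2, 3}])
a_T = {p: 1 for p in R}
b_T = {q: sign(q) for q in C}

assert alg_mult(a_T, a_T) == {p: len(R) for p in R}            # a_T a_T = #R(T) a_T
assert alg_mult(b_T, b_T) == {q: len(C) * sign(q) for q in C}  # b_T b_T = #C(T) b_T
c_T = alg_mult(a_T, b_T)                                       # the Young symmetrizer
print(len(c_T), "terms in c_T")
```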

Example 8.12 Let 1 2 T = 3 . Then aT = e + (12) aT · aT = (e + (12))(e + (12)) = e + (12) + (12) + e = 2(e + (12)) = 2aT Here we have R(T ) = S{1,2} × S{3} = {e, (12)}. Definition 8.13. Let M λ be the vector space with basis consisting of the tabloids {T } of shape λ, where λ ` n. Sn acts on tabloids of shape λ, so Sn acts on M λ . Remark 8.14. The representation M λ of Sn is not in general irreducible, but it will contain an attractive irreducible representation called a Specht module. Definition 8.15. Given a numbering T of λ, define vT ∈ M λ by vT = bT · {T } X = sgn(q){q · T } q∈C(T )

A problem that will appear on the Homework 2 is: Exercise 8.16. Show that ∀T, ∀σ ∈ Sn , σ · vT = vσ·T . We are now ready to formally introduce Specht modules. Definition 8.17. The Specht module S λ is the subspace of M λ spanned by elements vT as T ranges over all numberings of λ. Remark 8.18. By the exercise above, S λ is preserved by Sn , so S λ is a representation of Sn .


Example 8.19. Let λ = (n), a single row, and T = 1 2 3 · · · n. Then R(T) = Sn, C(T) = {e}, and
bT = ∑_{q∈C(T)} sgn(q) q = e.
There is only one tabloid, so vT = {T} = 1 2 3 · · · n, and we have a 1-dimensional representation spanned by {T}. Sn acts trivially on vT, so we produce the trivial representation.


Example 8.20. Let λ = (1, 1, . . . , 1), a single column. Then R(T) = {e}, C(T) = Sn, and
bT = ∑_{q∈Sn} sgn(q) q.
Now we have n! tabloids, and
vT = bT · {T} = ∑_{q∈Sn} sgn(q) {q · T}.

For any other numbering T′, there exists σ such that {T′} = {σ · T}, so vT′ = vσ·T = σ · vT. Consider acting on vT by σ. Note that vT is a sum in which every tabloid appears with coefficient equal to the sign of the corresponding permutation, so the result of acting by σ is to multiply vT by sgn(σ). All of the vT are the same up to sign, so we have produced the sign representation of Sn.

In more generality, we want to show the following theorem:

Theorem 8.21. The S^λ are distinct irreducible representations of Sn.

We introduce the notion of partial orders on partitions.

Definition 8.22. The lexicographic order on partitions, denoted by µ ≤lex λ, means that the first index i for which µi ≠ λi, if it exists, satisfies µi < λi.

Definition 8.23. The dominance order on partitions, denoted µ ⊴ λ, means that for all i,
µ1 + · · · + µi ≤ λ1 + · · · + λi.

Example 8.24. We have (1, 1, 1) ⊲ (2, 1) ⊲ (3). But (2, 2, 2) and (3, 1, 1, 1) are not comparable in dominance order. Since there exist incomparable elements, the dominance order is not a total order.
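A dominance-order check is a one-liner; the sketch below (not from the notes) encodes Definition 8.23 and verifies the comparisons of Example 8.24.

```python
def dominates(lam, mu):
    """True if lam dominates mu (both weakly decreasing tuples summing to the same n)."""
    partial_l = partial_m = 0
    for i in range(max(len(lam), len(mu))):
        partial_l += lam[i] if i < len(lam) else 0
        partial_m += mu[i] if i < len(mu) else 0
        if partial_l < partial_m:
            return False
    return True

assert dominates((3,), (2, 1)) and dominates((2, 1), (1, 1, 1))
# (2,2,2) and (3,1,1,1) are incomparable:
assert not dominates((2, 2, 2), (3, 1, 1, 1)) and not dominates((3, 1, 1, 1), (2, 2, 2))
```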


Lemma 8.25. Let T and T′ be numberings of shapes λ and λ′. Assume that λ does not strictly dominate λ′. Then exactly one of the following holds:
1. There are two distinct integers that occur in the same row of T′ and the same column of T.
2. λ′ = λ and there are some p′ ∈ R(T′) and q ∈ C(T) such that p′T′ = qT.

Proof. Suppose that (1) is false. Then the entries of the first row of T′ occur in different columns of T, so there exists q1 ∈ C(T) for which these entries occur in the first row of q1 · T. The entries in the second row of T′ also occur in different columns of T, and hence in different columns of q1 · T, so there exists q2 ∈ C(q1 · T) = C(T), leaving the entries in the first row of T′ in place, such that the entries in the second row of T′ lie in the first two rows of q2 · (q1 · T). Continuing in this manner, we get q1, . . . , qk ∈ C(T) such that the entries in the first k rows of T′ occur in the first k rows of qk · qk−1 · · · q1 · T. Hence λ1 ≥ λ′1, and arguing similarly for the other rows, we get
λ1 ≥ λ′1
λ1 + λ2 ≥ λ′1 + λ′2
· · ·
λ1 + · · · + λk ≥ λ′1 + · · · + λ′k,
implying that λ′ ⊴ λ. But the assumption of the lemma tells us that λ does not strictly dominate λ′; hence λ′ = λ. Now let k be the number of rows of λ (equivalently of λ′) and q = qk · qk−1 · · · q1. Since λ = λ′, the numberings qT and T′ have the same entries in each row, so there exists p′ ∈ R(T′) for which p′T′ = qT.

The other direction is easy. If (1) is true, take two distinct integers lying in the same row of T′ and the same column of T. Any p′ ∈ R(T′) keeps them in the same row of p′T′, and any q ∈ C(T) keeps them in the same column of qT; since two entries in the same row of a numbering lie in different columns, it is impossible to have p′T′ = qT.


Example 8.26. Let (rows separated by slashes)
T′ = 2 4 9 / 6 1 3 / 5 7 8,  T = 5 1 8 / 9 7 4 / 6 2 3.
Then by our construction in the proof of the above lemma,
q1 · T = 9 2 4 / 5 1 8 / 6 7 3,
q2 · (q1 · T) = 9 2 4 / 6 1 3 / 5 7 8.


§9 February 25, 2019

Today we will prove that the modules we constructed last class are all irreducible and that they form a complete set of irreducible representations of Sn.

Example 9.1. Consider T = 2 3 6 / 1 5 / 4 (rows separated by slashes). We have
R(T) = S{2,3,6} × S{1,5} × S{4},  C(T) = S{1,2,4} × S{3,5} × S{6}.

Recall that a tabloid {T} is an equivalence class of numberings of a Young diagram, where two numberings are equivalent if the corresponding rows contain the same entries. Hence
T ∼ 6 2 3 / 5 1 / 4.
We can also denote the equivalence class pictorially by drawing a Young tableau with no vertical lines separating numbers in the same row.

Note 9.2. We saw that Sn acts on tabloids via σ · {T} = {σ · T}.

Note 9.3. The group ring C[G] of (G, ·) consists of all linear combinations
∑_{σ∈G} χσ σ
where χσ ∈ C. The group ring is both a vector space and a ring, and G acts on C[G]. We denote A = C[Sn].

Definition 9.4. Given a numbering T with n boxes, define
bT = ∑_{q∈C(T)} sgn(q) q ∈ A.

Definition 9.5. Let M λ be the vector space with basis consisting of all tabloids {T } of shape λ ` n. We see that M λ is an Sn -module. Definition 9.6. Given a numbering T of shape λ, define vT ∈ M λ by vT = bT {T }. The Specht module S λ is the subspace of M λ spanned by all vT ∈ M λ corresponding to numberings T of λ. Remark 9.7. Due to a problem on Homework 2, σ · vT = vσT , Sn acts on vT , implying that S λ is a representation of Sn .


Remark 9.8. The construction thus far is somewhat of a miracle given the choice of defining bT and using row equivalence classes. It seems like something one would pull out of a hat. Next lecture we will see a modern, more natural construction of Specht modules. Definition 9.9. The column word of a tableau is the word created by reading entries from bottom to top in each column, starting with the left column and moving right.

Example 9.10 The word created from 1 4 5 2 6 8 7 is 7216485, where 721 comes from the first column, 64 from the second, and 85 from the third. Definition 9.11. Define a total order on the set of numberings with n boxes by saying that T 0 > T if one of the following conditions holds: • shape(T 0 ) > shape(T ) in the lexicographic partial order. • shape(T 0 ) = shape(T ) and the largest entry that is in a different box in the two numberings occurs at an earlier index in the column word of T 0 than in T.

Example 9.12 We have 1 2 3 1 2 4 4 5 > 3 5 . To see this, note that the column words are 41523, 31524, respectively. The largest number 5 is in the same place. The next largest number 4 occurs in index 1 in the first word and 5 in the second word. We also have 1 2 4 3 5 > 2 1 5 3 4 because the column words are 31524, 21534. The numbers 5 and 4 occur in the same index in both words, and the number 3 occurs in index 1 in the first word and 4 in the second word.


Proposition 9.13 If T is a standard tableau then ∀p ∈ R(T ), ∀q ∈ C(T ), we have p·T ≥T because larger numbers get moved to the left in the word of p · T. Furthermore, q·T ≤T because larger numbers get moved to the right in the word of q · T.

Corollary 9.14 If T and T 0 are standard tableau with T 0 > T, then there is a pair of integers in the same row of T 0 and the same column column of T. This statement bears resemblance to a technical lemma from a previous lecture. Lemma 9.15 Let T, T 0 be numberings of shapes λ, λ0 , respectively. Assume that λ does not strictly dominate λ0 . Then exactly one of the following is true: 1. There are two distinct integers occurring in the same row of T 0 and the same column of T. 2. λ0 = λ and ∃p0 ∈ R(T 0 ), ∃q ∈ C(T ) such that p0 T 0 = qT. Note 9.16. Recall that λ dominates µ, denoted as λ D if ∀i, λ1 + · · · + λi ≥ µ1 + · · · + µi . We can use the lemma to give a proof of Lemma 9.14. Proof. Since T 0 > T, shape(T ) cannot dominate shape(T 0 ). If there is no such pair of integers as stated in the corollary, then the statement (2) in Lemma 9.15 is true, ∃p0 ∈ R(T 0 ), ∃q ∈ C(T ) such that p0 T 0 = qT. But then T 0 ≤ p0 T 0 = qT ≤ T, which contradicts T 0 > T. We conclude the existence of such a pair of integers. By definition, vT = bT {T } and S λ = span{vT }. How does bT act on tabloids other than the {T } from which it was defined? Answering this question will lead to a proof of irreducibility of the Specht module.


Lemma 9.17 Let T, T 0 be numberings of shapes λ, λ0 , respectively. Assume that λ does not strictly dominate λ0 . Then: 1. If there is a pair of integers in the same row of T 0 and the same column of T, then bT · {T 0 } = 0. 2. Otherwise, bT · {T 0 } = ±vT . Proof. 1. If there is such a pair of integers, let t be the transposition that switches them. Then bT · t = −bT since t ∈ C(T ) and sgn(t) = −1. Furthermore, t · {T 0 } = {T 0 } since T ∈ R(T 0 ). We conclude that bT · {T 0 } = bT · {t · {T 0 }} = −bT · {T 0 } =0 2. If there is no such pair of integers, then by Lemma 9.15 we know λ0 = λ and ∃p0 ∈ R(T 0 ), ∃q ∈ C(T ) such that p0 T 0 = qT. Then bT · {T 0 } = bT · {p0 · T 0 } = bT · {qT } = sgn(q)bT · T = sgn(q)vT where the second to last equality follows from a exercise on Homework 2.

Corollary 9.18 If T and T′ are standard tableaux with T′ > T, then bT · {T′} = 0.

Proof. From Corollary 9.14, we know that there exists a pair of integers in the same row of T′ and the same column of T. Now apply Lemma 9.17.

We now prove the main theorem that all of this theory has been developed for:

Theorem 9.19 The S^λ are distinct irreducible representations.

Proof. We know that ∀T, vT ≠ 0, so S^λ ≠ 0. Let T be any numbering of λ.

Claim 9.20. bT · M^λ = CvT ≠ 0.

This is true by Lemma 9.17, since each bT · {T′} is either 0 or ±vT.

Claim 9.21. bT · M^λ = bT · S^λ.


We show inclusion in both directions. By definition S^λ ⊆ M^λ, so bT · S^λ ⊆ bT · M^λ. For the other inclusion, consider a tabloid {T′} of shape λ. If bT · {T′} ≠ 0, then bT · {T′} = ±vT. But vT ∈ bT · S^λ since
bT · vT = bT · bT {T} = #C(T) bT {T} = #C(T) vT
from a problem in Homework 2. Therefore bT · M^λ ⊆ bT · S^λ. Furthermore, this shows that bT · S^λ = CvT ≠ 0.

Claim 9.22. bT · S^{λ′} = 0 if λ′ >lex λ.

This follows from
λ′ >lex λ =⇒ for any T′ of shape λ′, T′ > T =⇒ bT · {T′} = 0,
where we use Corollary 9.18 to get the last equality. We conclude that
bT · M^{λ′} = 0 =⇒ bT · S^{λ′} = 0.

Claim 9.23. The S^λ are pairwise distinct.

Given λ, λ′, assume that λ′ >lex λ. For a numbering T of λ, we know by the previous claims that bT · S^λ ≠ 0 and bT · S^{λ′} = 0.

Claim 9.24. Each S^λ is irreducible.

To show that S^λ is irreducible, assume that we can write S^λ = V ⊕ W as a direct sum of representations of Sn. Then
CvT = bT · S^λ = bT · (V ⊕ W) = (bT · V) ⊕ (bT · W).
The fact that CvT is 1-dimensional implies that one of bT · V, bT · W is 0 and the other is CvT. Without loss of generality, assume that CvT = bT · V. Then vT ∈ bT · V ⊆ V, so vT ∈ V.

Claim 9.25. S^λ = A · vT for any numbering T of λ.

Note that each σ ∈ Sn is in A. Since σ · vT = v_{σ·T}, we see that A · vT contains v_{T′} for every other numbering T′. Now we have
vT ∈ V =⇒ A · vT ⊆ V =⇒ S^λ ⊆ V =⇒ S^λ = V.
We conclude that W = 0, so S^λ is irreducible.

We have finally found distinct irreducible representations for each λ ⊢ n, meaning that we have found all irreducibles of Sn. Yay!


Proposition 9.26 The elements vT, as T varies over all standard tableaux of shape λ, form a basis for S^λ.

Example 9.27 For S^{(2,1)}, the basis is
{ v_{1 2 / 3}, v_{1 3 / 2} },
where the subscripts are the two standard tableaux of shape (2,1), written row by row.

Proof. We know that vT is a linear combination of tabloids in which the tabloid {T} occurs with coefficient 1; more precisely, the tabloids appearing are those of the form {q · T} for q ∈ C(T), with coefficients ±1. If T is a standard tableau, then q · T < T for every q ∈ C(T) with q ≠ e.

Claim 9.28. The vT are linearly independent.

Suppose that
Σ_{T standard tableau} χT vT = 0
for χT ∈ C. Look at the maximal T in the sum with nonzero coefficient. Since T is standard, {T} occurs with coefficient 1 in vT. Note that {T} cannot appear in any other v_{T′}, because all other T′ in the sum satisfy T′ < T, and the terms {q · T′} in v_{T′} satisfy q · T′ ≤ T′ < T. We conclude that {T} cannot cancel in the sum, a contradiction. Hence all χT are zero.

By the claim, we have dim S^λ ≥ f^λ, where f^λ = #{standard tableaux of shape λ}. But now we use the sum-of-squares identity for representations to write
n! = Σ_{λ⊢n} (dim S^λ)² ≥ Σ_{λ⊢n} (f^λ)² = n!,
where the last equality is due to the RSK bijection. We conclude that equality holds, implying dim S^λ = f^λ and that the elements vT form a basis of S^λ.
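The dimension count at the end of this proof is easy to verify by machine for small n. The following Python sketch (an added illustration, not part of the notes; all function names are mine) computes f^λ from the Hook Length Formula quoted earlier in the course and checks that Σ_{λ⊢n} (f^λ)² = n! for n = 6.

```python
from math import factorial

def partitions(n, largest=None):
    """Generate all partitions of n as weakly decreasing tuples."""
    largest = n if largest is None else largest
    if n == 0:
        yield ()
        return
    for first in range(min(n, largest), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

def hook_product(shape):
    """Product of the hook lengths over all boxes of the Young diagram."""
    conj = [sum(1 for part in shape if part > j) for j in range(shape[0])]
    prod = 1
    for i, row_len in enumerate(shape):
        for j in range(row_len):
            arm, leg = row_len - j - 1, conj[j] - i - 1
            prod *= arm + leg + 1
    return prod

def f(shape):
    """Number of standard tableaux of the given shape, by the hook length formula."""
    return factorial(sum(shape)) // hook_product(shape)

n = 6
assert sum(f(lam) ** 2 for lam in partitions(n)) == factorial(n)
print("sum of (f^lambda)^2 over lambda |- 6 equals 6! =", factorial(n))
```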


§10 February 27, 2019

What we are going to do today is explore a very different approach to producing the irreducible representations of Sn. It will be a more natural construction. Recall that the irreducibles of Sn are the Specht modules S^λ for λ ⊢ n, and that the basis of S^λ is {vT : T is a standard tableau of shape λ}. The construction of S^λ and the basis vectors was via vT = bT · {T} for tabloids {T}. The construction today is due to Okounkov-Vershik in 2005.²

We will first talk about some ingredients used to cook up the construction:

• There is an inclusion of groups S_{1} ⊆ S_{1,2} ⊆ S_{1,2,3} ⊆ · · · ⊆ S_{1,2,...,n}. Use the notation S_i = S_{1,2,...,i}. We will use these groups to find a canonical basis in each irreducible representation of Sn.

Remark 10.1. What do we mean by canonical basis? It is better to look for a basis with very nice properties. In our case, we will ask that the basis of each irreducible representation V of Sn is compatible with the restriction V|_{S_{n−1}}. In other words, if V|_{S_{n−1}} = U ⊕ W, then each basis element will lie in either U or W.

• There is a large commutative subalgebra of the group ring A = C[Sn]. This gives us a way to decompose each irreducible representation of Sn into 1-dimensional subspaces.

Today we will summarize the results of following an approach like this one. Let S_n^∨ denote the set of equivalence classes of irreducible representations of Sn; it acts as an indexing set for the irreducible representations. If λ ∈ S_n^∨, let V^λ be the Sn-representation corresponding to λ. Here we must realize that we have "forgotten" the Specht module construction, so that λ is treated simply as a symbol.

Definition 10.2. The branching graph or Bratteli diagram is a directed graph whose vertices are all the irreducible representations of Sn for all n, that is, the vertex set is ⋃_{n≥0} S_n^∨, and which has k edges from µ ∈ S_{n−1}^∨ to λ ∈ S_n^∨ whenever the restricted representation V^λ|_{S_{n−1}} contains k copies of V^µ.

Proposition 10.3 The branching graph is simple, meaning that the number of edges from µ to λ is either 0 or 1.

Definition 10.4. We write µ ↗ λ if µ → λ is connected by an edge. We write µ ⊂ λ if µ → λ is connected by a path, a sequence of edges. Finally, let ∅ denote the unique element of S_0^∨.

Note 10.5. Proposition 10.3 implies that for any λ ∈ S_n^∨, the decomposition
V^λ|_{S_{n−1}} = ⊕_{µ ∈ S_{n−1}^∨, µ ↗ λ} V^µ
is canonical.

² The paper is available on arXiv at the link https://arxiv.org/abs/math/0503040.


Remark 10.6. There is no ambiguity about which copy of V^µ we are talking about on the right hand side because there is at most one by Proposition 10.3.

Take each representation V^µ in
⊕_{µ ∈ S_{n−1}^∨, µ ↗ λ} V^µ
and restrict it to S_{n−2}, decomposing into irreducible representations for S_{n−2}. If we keep restricting in this manner, we will eventually get a canonical decomposition of V^λ into irreducible S_1-modules, namely 1-dimensional subspaces:
V^λ = ⊕_T V_T.
These 1-dimensional subspaces are indexed by all possible chains, or directed paths,
T = λ_0 ↗ λ_1 ↗ · · · ↗ λ_n,
where λ_i ∈ S_i^∨, λ_n = λ, and λ_0 = ∅. We can thus produce a basis by taking any vector in each of the 1-dimensional vector spaces.

Definition 10.7. Choose a unit vector u_T in each 1-dimensional V_T. We produce a basis {u_T} known as the Gelfand-Tsetlin basis.

In the case of the symmetric group branching graph, the Gelfand-Tsetlin basis is very nice.

Definition 10.8. A poset, short for partially ordered set, is a set with a partial order relation.

Definition 10.9. The poset of all partitions, ordered by containment of Young diagrams, is known as Young's lattice.

Remark 10.10. We usually depict posets using a Hasse diagram.

Example 10.11 In the Hasse diagram of Young's lattice, we place the empty partition at the bottom. Each partition branches upward into the partitions formed by adding one box to its Young diagram. In this way we form a diagram of all partitions that represents the partial order. The original notes draw Young's lattice up to 4 boxes here; the sketch below lists the same data.
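Since the figure did not survive the transcription, here is a short Python sketch (an illustration added here, not from the lecture; the function names are mine) that prints the data the figure displayed: the partitions with at most 4 boxes together with the partitions covering them in Young's lattice.

```python
def partitions(n, largest=None):
    """All partitions of n as weakly decreasing tuples."""
    largest = n if largest is None else largest
    if n == 0:
        yield ()
        return
    for first in range(min(n, largest), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

def covers(mu):
    """Partitions obtained from mu by adding one box, i.e. the covers of mu."""
    mu = list(mu)
    result = []
    for i in range(len(mu) + 1):
        new = mu[:i] + [(mu[i] if i < len(mu) else 0) + 1] + mu[i + 1:]
        if all(new[k] >= new[k + 1] for k in range(len(new) - 1)):
            result.append(tuple(part for part in new if part > 0))
    return result

for n in range(4):                     # covers reach partitions of up to 4 boxes
    for mu in partitions(n):
        print(mu, "is covered by", covers(mu))
```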


Note 10.12. The edges in the Hasse diagram denote cover relations in the poset: the relations x < y such that there is no z with x < z < y. We use the notation x ⋖ y for a cover relation.

Example 10.13 We have µ ⋖ λ whenever the Young diagram of λ is obtained from that of µ by adding a single box; the original notes illustrate this with a pair of Young diagrams.

Definition 10.14. A saturated chain is a sequence x_1 ⋖ x_2 ⋖ · · · ⋖ x_m.

Theorem 10.15 Let λ ⊢ n. Consider the Specht module S^λ, which is an irreducible representation of Sn. Then as a representation restricted to S_{n−1},
S^λ|_{S_{n−1}} = ⊕_{µ ⋖ λ} S^µ.

Example 10.16 As a consequence of Theorem 10.15 we have
S^{(3,2,1)}|_{S_5} = S^{(2,2,1)} ⊕ S^{(3,1,1)} ⊕ S^{(3,2)}.
A small sketch for computing such restrictions follows below.
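One can compute the right-hand side of Theorem 10.15 mechanically by listing the partitions obtained from λ by deleting one removable corner. The sketch below (my own illustration, not from the lecture; the function names are mine) reproduces the decomposition of Example 10.16.

```python
def corners(shape):
    """Row indices (0-indexed) of the removable corners of a partition."""
    return [i for i, part in enumerate(shape)
            if part > 0 and (i == len(shape) - 1 or shape[i + 1] < part)]

def restrict(shape):
    """Partitions mu with mu covered by shape: the summands of S^shape restricted to S_{n-1}."""
    result = []
    for i in corners(shape):
        mu = list(shape)
        mu[i] -= 1
        result.append(tuple(part for part in mu if part > 0))
    return result

print(restrict((3, 2, 1)))   # [(2, 2, 1), (3, 1, 1), (3, 2)]
```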

Note 10.17. The Gelfand-Tsetlin basis {u_T} of S^λ has one element for each chain T from ∅ to λ.


Example 10.18 Consider the chain in the Hasse diagram
∅ → (1) → (2) → (2,1) → (2,1,1).
If we number the boxes of the last element of the chain in accordance with the order in which they are added along the chain, we produce the standard tableau with rows 1 2 / 3 / 4. As another example, for the chain
∅ → (1) → (1,1) → (1,1,1) → (2,1,1),
we produce the standard tableau with rows 1 4 / 2 / 3.

What we have found in Example 10.18 is that chains from ∅ to λ are in bijection with standard tableaux of shape λ. We can summarize this in the following corollary of Theorem 10.3:

Corollary 10.19 The Gelfand-Tsetlin basis {u_T} of S^λ has one element for each standard tableau of shape λ.

To get a more explicit description of the Gelfand-Tsetlin basis, we can analyze each V^λ by using a large commutative subalgebra of C[Sn].

Definition 10.20. Let Z(n) denote the center of C[Sn], so
Z(n) = {x ∈ C[Sn] : xy = yx, ∀y ∈ C[Sn]}.
Define the Gelfand-Tsetlin algebra as
GT(n) = ⟨Z(1), Z(2), . . . , Z(n)⟩,
the subring of C[Sn] generated by Z(1), . . . , Z(n).


Proposition 10.21 The following are true: 1. GT(n) is a maximal commutative subalgebra of C[Sn ]. 2. v ∈ V λ is in the Gelfand-Tsetlin basis iff v is a common eigenvector of the elements of GT(n). 3. Each basis element is uniquely determined by eigenvalues of elements of GT(n). Definition 10.22. For i = 1, 2, . . . , n define the Jucys-Murphy elements Xi = (1 i) + (2 i) + · · · + (i − 1 i) ∈ C[Sn ], which are the sums of all transpositions of Sn that swap i with a number less than i. We see that Xi ∈ GT(n), ∀i ≤ n. In particular, the Xi commute with each other.
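The commutativity of the X_i can be checked directly in the group algebra. The sketch below (an added illustration, not from the notes; the representation of permutations in one-line notation and all function names are my own choices) verifies that X_i X_j = X_j X_i in C[S_4].

```python
N = 4
IDENT = tuple(range(N))

def transposition(i, j):
    """One-line notation for the transposition swapping positions i and j (0-indexed)."""
    p = list(IDENT)
    p[i], p[j] = p[j], p[i]
    return tuple(p)

def compose(p, q):
    """(p*q)(k) = p(q(k))."""
    return tuple(p[q[k]] for k in range(N))

def multiply(a, b):
    """Product in C[S_N]; elements are dicts mapping permutations to coefficients."""
    out = {}
    for p, cp in a.items():
        for q, cq in b.items():
            r = compose(p, q)
            out[r] = out.get(r, 0) + cp * cq
    return out

def jucys_murphy(i):
    """X_i = (1 i) + (2 i) + ... + (i-1 i), with 1-indexed letters."""
    return {transposition(k, i - 1): 1 for k in range(i - 1)}

X = {i: jucys_murphy(i) for i in range(2, N + 1)}
assert all(multiply(X[i], X[j]) == multiply(X[j], X[i]) for i in X for j in X)
print("The Jucys-Murphy elements of C[S_4] commute pairwise.")
```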

Proposition 10.23 The Gelfand-Tsetlin algebra GT(n) is generated by the X_i, so
GT(n) = ⟨X_1, X_2, . . . , X_n⟩.

We thus know that the basis elements {u_T : T is a standard tableau of shape λ ⊢ n} are all eigenvectors for each X_i ∈ GT(n). What are the eigenvalues?

Definition 10.24. Consider a Young diagram λ. Assign a number known as the content to each box as follows. We use positive integers to index the rows from top to bottom and the columns from left to right, and in each box we write the difference between its column index and its row index.

Example 10.25 Consider the Young diagram of shape (5, 4, 2). The contents of the boxes are
0 1 2 3 4
-1 0 1 2
-2 -1 .


Proposition 10.26 Let λ ` n. We know that V λ has basis {uT : T is a standard tableau of shape λ}. Consider a particular uT and let αi , ∀i ≤ n denote the eigenvalue such that Xi uT = αi uT . We write α(T ) = (α1 , α2 , . . . , αn ). Then α(T ) is obtained by reading the contents of the boxes of λ in order according to the labeling of the boxes in the standard tableau T.

Example 10.27 The Young diagram
λ = (2, 2)
has basis
{ u_{1 2 / 3 4}, u_{1 3 / 2 4} }.
The content of λ is
0 1
-1 0 .
Then
α( 1 2 / 3 4 ) = (0, 1, −1, 0).
If we use the shorthand
u = u_{1 2 / 3 4},
then
X_1 u = 0,
X_2 u = u,
X_3 u = −u,
X_4 u = 0.


Example 10.28 Consider the trivial representation of S_4, labelled by
λ = (4).
The basis of V^λ is {u}, where
u = u_{1 2 3 4},
and the content of λ is 0 1 2 3. Then
α( 1 2 3 4 ) = (0, 1, 2, 3)
and
X_1 u = 0, X_2 u = u, X_3 u = 2u, X_4 u = 3u.
If we write out the X_i, they are
X_1 = 0,
X_2 = (12),
X_3 = (13) + (23),
X_4 = (14) + (24) + (34).
Why is it that α( 1 2 3 4 ) = (0, 1, 2, 3)? It is because the representation is trivial, so that
X_i · u = (1 i) · u + (2 i) · u + · · · + (i − 1 i) · u = u + u + · · · + u   (i − 1 terms)

= (i − 1)u.

Note 10.29. Why isn't the Specht module construction known as "canonical"? It is because the basis of the Specht module, which is defined via the Young symmetrizers, does not have properties as nice as the construction we discussed today. In a sense, the instructor says, the whole argument for Specht modules was a little arbitrary.

We now summarize the content of today's lecture in the following remark.

Remark 10.30. To discover the standard tableaux, we:
• Use the fact that the restriction V|_{S_{n−1}} of an irreducible representation V of Sn has simple multiplicities, and inductively construct a "natural" basis {u_T}.
• Use the Gelfand-Tsetlin algebra GT(n), a maximal commutative subalgebra of C[Sn] that acts diagonally on {u_T}.


• Find nice generators {X_i : 1 ≤ i ≤ n} for GT(n) and compute their eigenvalues on {u_T}.
• Realize that the collection of eigenvalues that the {X_i} have on {u_T} can be described as the content vector of λ, read in accordance with the ordering of the labeled boxes in the standard tableau T.


§11 March 4, 2019 Recall from last class the Hasse diagram of Young’s lattice:

We know the following:
• The chains in Young's lattice from the root are in bijection with standard tableaux.
• The instructor plays a fill-in-the-blank game: Young's lattice is the ______ for the symmetric group. After two incorrect guesses, a hint is given that the answer starts with 'B'. Someone guesses 'basis', and then finally the answer is revealed: branching diagram!

Theorem 11.1 Let λ ⊢ n. Consider the irreducible representation S^λ of Sn. Then as S_{n−1}-representations,
S^λ|_{S_{n−1}} = ⊕_{µ ⋖ λ} S^µ.
Furthermore, for any irreducible S^λ of Sn this decomposition is canonical.

Remark 11.2. What is meant by canonical here? When we decompose S^λ|_{S_{n−1}} as a direct sum of irreducibles, the terms in
⊕_{µ ⋖ λ} S^µ
are distinct, since the partitions µ are distinct. Hence there is no ambiguity about which irreducible corresponds to which partition in the direct sum.


Example 11.3 We have
S^{(2)} ⊕ S^{(1,1)} = S^{(2,1)}|_{S_2}.

Take each S^µ in
⊕_{µ ⋖ λ} S^µ,
restrict it to S_{n−2}, and decompose into irreducibles of S_{n−2}. Continuing this process of restriction and decomposition, we will eventually get a canonical decomposition of S^λ into irreducible S_1-modules, that is, 1-dimensional subspaces. We write
S^λ = ⊕_T V_T,
where the direct sum is indexed by chains T from λ down to ∅ in Young's lattice.

Note 11.4. Recall from last lecture that chains T from ∅ to λ are in bijection with standard tableaux of shape λ: each box is labeled according to the order in which it is added along the chain.

Example 11.5 For the chain
∅ → (1) → (2) → (2,1),
the associated standard tableau is the one with rows 1 2 / 3. For the chain
∅ → (1) → (1,1) → (2,1),
the standard tableau is the one with rows 1 3 / 2.

Definition 11.6. Choose a vector u_T ∈ V_T for each T. We produce a basis {u_T : T is a standard tableau of shape λ}. This is known as the Gelfand-Tsetlin basis.
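The bijection of Note 11.4 is straightforward to implement: walk along the chain and label the box added at each step with the step number. The sketch below (not from the notes; the function name is mine) reproduces the two tableaux of Example 11.5.

```python
def chain_to_tableau(chain):
    """chain is a list of partitions starting at () and adding one box per step."""
    final = chain[-1]
    tableau = [[None] * part for part in final]
    for step in range(1, len(chain)):
        prev, cur = chain[step - 1], chain[step]
        prev = prev + (0,) * (len(cur) - len(prev))
        # the unique row that grew receives the new label in its last box
        row = next(i for i in range(len(cur)) if cur[i] == prev[i] + 1)
        tableau[row][cur[row] - 1] = step
    return tableau

print(chain_to_tableau([(), (1,), (2,), (2, 1)]))    # [[1, 2], [3]]
print(chain_to_tableau([(), (1,), (1, 1), (2, 1)]))  # [[1, 3], [2]]
```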


Example 11.7 Let λ be a Young diagram, say
λ = (4, 2, 1),
and let T be a standard tableau, say the one with rows
1 3 4 7
2 6
5 .
Recall that the content is a filling of λ whose entries are the differences between the column and row indices. For example,
cont(λ) =
0 1 2 3
-1 0
-2 .
We can notate the content according to the order of box positions in a standard tableau T as
α(T) = (a_1, . . . , a_n) = (0, −1, 1, 2, −2, 0, 3),
where a_1, a_2, . . . , a_n are the contents of the boxes containing the numbers 1, 2, . . . , n in T.

Definition 11.8. The Jucys-Murphy elements are defined as
X_i = (1 i) + (2 i) + · · · + (i − 1 i) ∈ C[Sn].
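For illustration (this sketch is mine, not from the lecture), the content vector α(T) can be computed by recording j − i for the box of T containing each of 1, . . . , n; it reproduces α(T) = (0, −1, 1, 2, −2, 0, 3) for the tableau of Example 11.7.

```python
def content_vector(tableau):
    """alpha(T): entry k is (column - row) of the box holding the number k+1."""
    n = sum(len(row) for row in tableau)
    alpha = [None] * n
    for i, row in enumerate(tableau):
        for j, entry in enumerate(row):
            alpha[entry - 1] = j - i
    return tuple(alpha)

T = [[1, 3, 4, 7], [2, 6], [5]]
print(content_vector(T))   # (0, -1, 1, 2, -2, 0, 3)
```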

Proposition 11.9 We have X_i u_T = a_i u_T, where X_i ∈ C[Sn] acts on the module containing u_T and a_i denotes the content of the box of T that contains the number i.

Example 11.10 In Example 11.7, we have
X_2 u_T = −u_T,
X_7 u_T = 3 u_T.

In a way this proposition is nice, but what we really want to know is how all elements of Sn act on the Gelfand-Tsetlin basis {u_T}.

Definition 11.11. Given a shape λ, let T^λ denote the standard tableau where the numbers 1, 2, . . . are placed in the boxes in order from left to right, from the first row to


the last row. For example,
T^{(4,2,1)} =
1 2 3 4
5 6
7 .
For any other tableau T of shape λ, define

l(T ) = l(σ) where σ · T = T λ and l(σ) is the minimal number of simple transpositions (i i + 1) used to compose σ.

Theorem 11.12 Let λ ⊢ n and let T be a standard tableau of shape λ. Suppose that α(T) = (a_1, . . . , a_n) is the content vector, and that T′ is another standard tableau such that T′ = (i i+1)T and l(T′) > l(T). If we notate the simple transpositions as s_i = (i i+1), then
s_i · u_T = u_{T′} + (1/(a_{i+1} − a_i)) u_T,
s_i · u_{T′} = (1 − 1/(a_{i+1} − a_i)²) u_T − (1/(a_{i+1} − a_i)) u_{T′}.
Also, if i, i+1 are in the same row of T, then
s_i · u_T = u_T,
and if i, i+1 are in the same column of T, then
s_i · u_T = −u_T.

Note 11.13. The statements in Theorem 11.12 may appear contradictory, but recall that in the first two formulas we must have that T′ = (i i+1)T is a standard tableau. The statements s_i · u_T = u_T and s_i · u_T = −u_T, corresponding to i, i+1 being in the same row and the same column of T respectively, require T′ = (i i+1)T to be non-standard, since we are swapping two entries in the same row or column.


Example 11.14 Consider
λ = (2, 1),
cont(λ) =
0 1
-1 ,
and T = T^λ, the tableau with rows 1 2 / 3. Then
l(T^λ) = 0,
α(T^λ) = (0, 1, −1).
For s_2 = (23), the first statement of Theorem 11.12 applies with T′ the tableau with rows 1 3 / 2 and l(T′) = 1, giving
s_2 · u_T = u_{T′} + (1/(a_3 − a_2)) u_T = u_{1 3 / 2} − (1/2) u_{1 2 / 3}.
For s_1 = (12), since 1, 2 are in the same row of T, Theorem 11.12 gives s_1 · u_T = u_T.

We highlight some of the ideas in the proof of Theorem 11.12, but we will not discuss details.

Definition 11.15. Let Z(n) denote the center of C[Sn], defined to be
{x ∈ C[Sn] : xy = yx, ∀y ∈ C[Sn]}.

Definition 11.16. Define the Gelfand-Tsetlin algebra as
GT(n) = ⟨Z(1), Z(2), . . . , Z(n)⟩,
the subalgebra of C[Sn] generated by Z(1), . . . , Z(n).
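As a sanity check on Theorem 11.12 (an added verification, not part of the notes), one can write down the matrices of s_1 and s_2 on the two-dimensional module for λ = (2,1) in the basis (u_{1 2/3}, u_{1 3/2}) using the formulas above, and confirm that they square to the identity and satisfy the braid relation, as they must for a representation of S_3.

```python
from fractions import Fraction as F

# Basis order: u_T with T = [1 2 / 3], then u_T' with T' = [1 3 / 2].
# alpha(T) = (0, 1, -1), so for i = 2 the coefficient is r = 1/(a_3 - a_2) = -1/2.
r = F(1, -2)

# s_1 = (1 2): 1 and 2 share a row of T (eigenvalue +1) and a column of T' (eigenvalue -1).
s1 = [[F(1), F(0)],
      [F(0), F(-1)]]

# s_2 = (2 3): columns are the images of u_T and u_T' given by Theorem 11.12.
s2 = [[r, 1 - r * r],
      [F(1), -r]]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

I = [[F(1), F(0)], [F(0), F(1)]]
assert mat_mul(s1, s1) == I
assert mat_mul(s2, s2) == I
s1s2 = mat_mul(s1, s2)
assert mat_mul(s1s2, mat_mul(s1s2, s1s2)) == I
print("s1^2 = s2^2 = (s1 s2)^3 = identity, so the formulas define a representation of S_3.")
```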

Proposition 11.17 The Gelfand-Tsetlin algebra GT(n) is commutative.

Proof. Each Z(i) commutes with everything in C[S_i], and Z(1), Z(2), . . . , Z(i − 1) are all contained in C[S_i], so Z(i) commutes with each of them.


Therefore the Z(i) all commute with each other, meaning that GT(n), which is generated by the Z(i), is also commutative.

Definition 11.18. Let S_n^∨ denote the indexing set for the irreducible representations of Sn. If λ ∈ S_n^∨, let V^λ denote the corresponding irreducible representation.

Remark 11.19. Recall that C[Sn], viewed as a representation of Sn, is the regular representation. The regular representation decomposes as
C[Sn] = ⊕_{λ ∈ S_n^∨} (V^λ)^{⊕ dim V^λ}.

Proposition 11.20 There is an isomorphism of algebras
C[Sn] ≅ ⊕_{λ ∈ S_n^∨} End(V^λ).

Proof. Let G = Sn. For any representation V^λ of G, there is a map G → Aut(V^λ). This map extends by linearity to a map C[G] → End(V^λ). Adding all such maps together over all λ yields a map
ϕ : C[G] → ⊕_{λ ∈ S_n^∨} End(V^λ).
Because the regular representation is faithful, meaning no two group elements act in exactly the same way, the map ϕ is injective. Both C[G] and ⊕_{λ ∈ S_n^∨} End(V^λ) have dimension
Σ_{λ ∈ S_n^∨} (dim V^λ)²,
so we conclude that ϕ is an isomorphism.

Note 11.21. Choosing the Gelfand-Tsetlin basis for each representation V^λ, we can identify
C[Sn] = ⊕_{λ ∈ S_n^∨} End(V^λ)
with the algebra of block-diagonal matrices, where the block associated to λ has size dim V^λ for each λ ∈ S_n^∨. (The original notes draw the block-diagonal matrix here.)


Since GT(n) ⊂ C[Sn], we can identify GT(n) with a subalgebra of these block-diagonal matrices. (The original notes draw the corresponding picture here.)

We now give a refined proposition. Proposition 11.22 The following statements are true: 1. Using the identification with block-diagonal matrices in Note 11.21, GT(n) is the algebra of diagonal matrices with respect to the Gelfand-Tsetlin basis in each V λ . 2. GT(n) is a maximal commutative subalgebra in C[Sn ]. 3. v ∈ V λ is in the Gelfand-Tsetlin basis iff v is a common eigenvector of the elements of GT(n). 4. Each basis element is uniquely determined by its eigenvalues on the elements of GT(n). Proof.

1. Define the matrix P_λ to be the block-diagonal matrix which is the identity matrix in the block associated with V^λ and the 0 matrix in every other block. In other words, P_λ is the projection
P_λ : ⊕_{µ ∈ S_n^∨} V^µ → V^λ.

We note that P_λ ∈ Z(n), since P_λ commutes with any other block-diagonal matrix. Given a chain T = (∅ = λ_0 ↗ λ_1 ↗ · · · ↗ λ_{n−1} ↗ λ_n = λ), we can similarly construct the matrix P_{λ_i} ∈ Z(i) for each λ_i. Here λ_{n−1} represents an irreducible representation in the decomposition of V^λ restricted to S_{n−1}, and within the block of V^λ the matrix P_{λ_{n−1}} has ones in a proper subset of the positions in which P_λ has ones, namely the positions associated with λ_{n−1} in the decomposition of V^λ. We can continue this process for λ_{n−2}, . . . , λ_0. Define
P_T = P_{λ_1} P_{λ_2} · · · P_{λ_n} ∈ GT(n),
which lies in GT(n) because P_{λ_i} ∈ Z(i). Since each P_{λ_i} is a 0-1 diagonal matrix in the Gelfand-Tsetlin basis, and the positions of the ones inside the block of V^λ are successively cut down as i decreases, the product P_T is a 0-1 diagonal matrix supported exactly on the position associated with the chain T.


Since these matrices P_T, written in the Gelfand-Tsetlin basis, have a single 1 in the position associated with u_T and zeros elsewhere, as we range over all T we conclude that all diagonal matrices are in GT(n).
2. The algebra of all diagonal matrices is a maximal commutative subalgebra of the algebra of matrices.
3. Clearly the Gelfand-Tsetlin basis can now be seen as the set of all common eigenvectors of the elements of GT(n).
4. Each basis vector is determined by the eigenvalues of the P_T by inspection.


§12 March 6, 2019

We will begin by making one comment about the Gelfand-Tsetlin basis in the form of an example.

Example 12.1 Consider the Specht module S^λ of S_5; the shape is drawn as a Young diagram in the original, and here we take λ = (3,2) for concreteness. If we restrict it to S_4, we get
S^{(3,2)}|_{S_4} = S^{(2,2)} ⊕ S^{(3,1)}.
If we restrict each of these to S_3, we get
S^{(2,1)} ⊕ ( S^{(2,1)} ⊕ S^{(3)} ).
If we restrict each of these to S_2, we get
( S^{(2)} ⊕ S^{(1,1)} ) ⊕ ( S^{(2)} ⊕ S^{(1,1)} ) ⊕ S^{(2)}.
Finally, restriction to S_1 yields
S^{(1)} ⊕ S^{(1)} ⊕ S^{(1)} ⊕ S^{(1)} ⊕ S^{(1)}.
If we draw lines between the modules, from the top row S^{(3,2)} down to the bottom row of copies of S^{(1)}, we recreate Young's lattice.

Remark 12.2. Are we losing information as we decompose down? Even though the modules at the last step are all S^{(1)}, each copy of S^{(1)} can be labeled by the chain that it came from and hence can be distinguished.


We will now return to a long-lost relative, the Hook Length Formula. Remark 12.3. Recall that given a Young diagram λ and a box v in λ, we denote Hv as the hook of v and hv = |Hv |.

Example 12.4 If λ is the Young diagram drawn in the original notes and v is the box in position (2, 3), then h_v = 6.

Definition 12.5. For λ ⊢ n, define

f^λ = #{standard Young tableaux of shape λ}.

Theorem 12.6 (Hook Length Formula) For λ ⊢ n, we have
f^λ = n! / ∏_{v∈λ} h_v.

Example 12.7 If λ is a shape with hook lengths 4, 3, 2, 1, 1, for instance λ = (3, 2), there are 5 standard tableaux, which one can list by hand, and indeed we have
f^λ = 5!/(4 · 3 · 2) = 5.
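The formula is easy to test by brute force for small shapes. The sketch below (my own, not from the notes; boxes are 0-indexed and the function names are mine) counts the standard tableaux of shape (3,2) directly and compares the count with n!/∏_v h_v.

```python
from math import factorial
from itertools import permutations

def hooks(shape):
    """Hook lengths of all boxes of the Young diagram, row by row."""
    conj = [sum(1 for part in shape if part > j) for j in range(shape[0])]
    return [(shape[i] - j - 1) + (conj[j] - i - 1) + 1
            for i in range(len(shape)) for j in range(shape[i])]

def count_standard(shape):
    """Brute-force count of standard tableaux by trying every filling."""
    cells = [(i, j) for i in range(len(shape)) for j in range(shape[i])]
    n = len(cells)
    count = 0
    for perm in permutations(range(1, n + 1)):
        filling = dict(zip(cells, perm))
        rows_ok = all(filling[(i, j)] < filling[(i, j + 1)]
                      for (i, j) in cells if (i, j + 1) in filling)
        cols_ok = all(filling[(i, j)] < filling[(i + 1, j)]
                      for (i, j) in cells if (i + 1, j) in filling)
        if rows_ok and cols_ok:
            count += 1
    return count

shape = (3, 2)
prod = 1
for h in hooks(shape):
    prod *= h
print(count_standard(shape), factorial(sum(shape)) // prod)   # 5 5
```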

We will give a probabilistic proof due to Greene, Nijenhuis, and Wilf. The only “probability” we will use is the fact that the sum of the probabilities in a sample space sum to 1. We will give an algorithm for constructing a standard tableau, with the special property that it will construct each standard tableau equally likely. Hence if we know the probability that any given tableau is produced using the algorithm, the number of tableaux is simply the reciprocal. Definition 12.8 (Hook Length Algorithm). Let λ ` n and define the inner corners of λ to be the boxes v such that there is no box to the bottom or to the right of v. Then perform the following algorithm: 1. Pick a box v ∈ λ from the uniform distribution on boxes. Hence each box is picked with probability 1/n.


2. While v is not an inner corner:
a) Pick a box v′ ∈ H_v − {v} from the uniform distribution over such boxes. Hence the probability that each such box is picked is 1/(h_v − 1).
b) Set v to be v′ and go back to the beginning of Step 2.
3. Now v is an inner corner. Assign the label n to v.
4. Go back to Step 1 with λ replaced by λ − {v} and n replaced by n − 1. Repeat Steps 1 to 3 until all cells are labeled.

Definition 12.9. Label the rows of λ by 1, 2, . . . , l_r from top to bottom and the columns by 1, 2, . . . , l_c from left to right. Position (i, j) refers to the box in row i and column j. Define h_{ij} to be the hook length of the (i, j) box.

Definition 12.10. We call the sequence of boxes which we see in one pass through Steps 1 to 3 of Algorithm 12.8 a trial. A sketch implementing the algorithm appears below.
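Here is a direct implementation of the hook walk (a sketch I have added; the function names are mine and boxes are 0-indexed in the code). Running it many times and tallying the output gives an empirical check that each standard tableau of the shape appears with roughly equal frequency, which is the content of Proposition 12.12 below.

```python
import random
from collections import Counter

def hook_cells(shape, i, j):
    """Cells in the hook of (i, j), excluding (i, j) itself."""
    col_len = sum(1 for part in shape if part > j)
    return [(i, jj) for jj in range(j + 1, shape[i])] + \
           [(ii, j) for ii in range(i + 1, col_len)]

def hook_walk_tableau(shape):
    """Definition 12.8: generate a random standard tableau of `shape`."""
    original = list(shape)
    current = list(shape)
    labels = {}
    for label in range(sum(shape), 0, -1):
        cells = [(i, j) for i in range(len(current)) for j in range(current[i])]
        i, j = random.choice(cells)            # Step 1: uniform starting box
        while True:                            # Step 2: walk within hooks
            hook = hook_cells(current, i, j)
            if not hook:                       # reached an inner corner
                break
            i, j = random.choice(hook)
        labels[(i, j)] = label                 # Step 3: place the largest remaining label
        current[i] -= 1                        # Step 4: shrink the diagram
        if current[i] == 0:
            current.pop()
    return tuple(tuple(labels[(i, j)] for j in range(r)) for i, r in enumerate(original))

# Empirical uniformity check for shape (3, 2), which has f = 5 standard tableaux.
counts = Counter(hook_walk_tableau((3, 2)) for _ in range(50000))
for tab, c in sorted(counts.items()):
    print(tab, c / 50000)    # each frequency should be close to 1/5
```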

Example 12.11 Suppose we start with the Young diagram drawn in the original notes. A trial could be the sequence of boxes
(1, 2) → (1, 4) → (3, 4) → (3, 5),
where (3, 5) is an inner corner. In the original, the trial is drawn by marking the visited boxes of the diagram with bullets.

Proposition 12.12 Algorithm 12.8 produces any given standard tableau P of shape λ with probability
∏_{v∈λ} h_v / n!.

Let (α, β) be an inner corner and let p(α, β) be the probability that a random trial terminates in cell (α, β). Let
P : (a, b) = (a_1, b_1) → (a_2, b_2) → · · · → (a_m, b_m) = (α, β)
denote a trial traversed by Algorithm 12.8 to get to an inner corner (α, β).


Example 12.13 In Example 12.11, note that
a_1 = a_2 = 1, a_3 = a_4 = 3, b_1 = 2, b_2 = b_3 = 4, b_4 = 5.

Definition 12.14. The vertical projection of a trial P is A = {a_1, a_2, . . . , a_m} and the horizontal projection is B = {b_1, b_2, . . . , b_m}. Let p(A, B | a, b) denote the probability that a random trial beginning at (a, b) has vertical and horizontal projections A and B, respectively.

Example 12.15 In Example 12.11, we have
A = {1, 3}, B = {2, 4, 5}.
Note that we could have used a different trial starting at (a, b) = (1, 2) to produce the same projections A, B. The other two trials that satisfy these conditions are
(1, 2) → (1, 4) → (1, 5) → (3, 5) and (1, 2) → (3, 2) → (3, 4) → (3, 5).


Lemma 12.16 Let P : (a, b) → · · · → (α, β) be a trial with vertical and horizontal projections A, B, respectively. Then
p(A, B | a, b) = ∏_{i∈A, i≠α} 1/(h_{iβ} − 1) · ∏_{j∈B, j≠β} 1/(h_{αj} − 1).

Proof. The trial starts at (a, b) = (a_1, b_1). To get vertical and horizontal projections A, B, the second box in the trial must be either (a_2, b_1) or (a_1, b_2). Hence
p(A, B | a, b) = p(A − {a_1}, B | a_2, b_1)/(h_{ab} − 1) + p(A, B − {b_1} | a_1, b_2)/(h_{ab} − 1),
where the first term corresponds to choosing (a_2, b_1) and then continuing, and the second to choosing (a_1, b_2) and then continuing. If we use induction on the length of the trial, we have
p(A − {a_1}, B | a_2, b_1) = (h_{a_1 β} − 1) · ∏_{i∈A, i≠α} 1/(h_{iβ} − 1) · ∏_{j∈B, j≠β} 1/(h_{αj} − 1),
p(A, B − {b_1} | a_1, b_2) = (h_{α b_1} − 1) · ∏_{i∈A, i≠α} 1/(h_{iβ} − 1) · ∏_{j∈B, j≠β} 1/(h_{αj} − 1).
Therefore
p(A, B | a, b) = (1/(h_{ab} − 1)) · ((h_{a_1 β} − 1) + (h_{α b_1} − 1)) · ∏_{i∈A, i≠α} 1/(h_{iβ} − 1) · ∏_{j∈B, j≠β} 1/(h_{αj} − 1)
= ∏_{i∈A, i≠α} 1/(h_{iβ} − 1) · ∏_{j∈B, j≠β} 1/(h_{αj} − 1).
In the last step we used the fact that
h_{ab} − 1 = (h_{a_1 β} − 1) + (h_{α b_1} − 1).
We now illustrate this fact with an example.


Example 12.17 Consider the Young diagram drawn in the original notes, with
(a, b) = (1, 2), (a_1, β) = (1, 4), (α, b_1) = (5, 2), (α, β) = (5, 4).
Then we can compute
h_{ab} − 1 = 9, h_{a_1 β} − 1 = 7, h_{α b_1} − 1 = 2.
Note that indeed h_{ab} − 1 = (h_{a_1 β} − 1) + (h_{α b_1} − 1).

Finally we finish the proof of the Hook Length Formula.

Definition 12.18. Define
ST(λ_1, . . . , λ_m) = #{standard tableaux of shape λ} if λ is a valid partition, and 0 otherwise,
and
F(λ_1, . . . , λ_m) = n! / ∏_{(i,j)∈λ} h_{ij} if λ_1 ≥ · · · ≥ λ_m, and 0 otherwise.

Proposition 12.19 We have ST(λ) = F(λ), implying the Hook Length Formula.

The proof will be via recurrences. In any standard tableau, the box labeled n must be an inner corner, so
ST(λ_1, . . . , λ_m) = Σ_{α=1}^{m} ST(λ_1, . . . , λ_{α−1}, λ_α − 1, λ_{α+1}, . . . , λ_m).


We will notate this as
ST = Σ_α ST_α.
To show ST(λ) = F(λ), it suffices to show that the right side satisfies the same recurrence as the left side, namely
F(λ_1, . . . , λ_m) = Σ_α F(λ_1, . . . , λ_{α−1}, λ_α − 1, λ_{α+1}, . . . , λ_m) = Σ_α F_α,
where we notate the sum Σ_α the same way as for ST. We will instead prove the equivalent probabilistic interpretation
1 = Σ_α F_α / F.

Recall that p(α, β) is the probability that a random trial ends in (α, β).

Theorem 12.20 Let (α, β) be an inner corner. Then
p(α, β) = F_α / F.
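Theorem 12.20 can be checked exactly for a small shape. The sketch below (an added illustration, not from the notes; boxes are 0-indexed and the shape (3,2) is just an example) computes p(α, β) by summing over all trials with the transition probabilities of Algorithm 12.8, and compares it with F_α/F.

```python
from fractions import Fraction
from functools import lru_cache
from math import factorial

SHAPE = (3, 2)
N = sum(SHAPE)

def hook_cells(i, j):
    """Boxes of the hook of (i, j) in SHAPE, excluding (i, j) itself."""
    col_len = sum(1 for part in SHAPE if part > j)
    return [(i, jj) for jj in range(j + 1, SHAPE[i])] + \
           [(ii, j) for ii in range(i + 1, col_len)]

@lru_cache(maxsize=None)
def end_prob(i, j, corner):
    """Probability that a trial currently at (i, j) terminates at `corner`."""
    cells = hook_cells(i, j)
    if not cells:                      # (i, j) is an inner corner
        return Fraction(int((i, j) == corner))
    return sum(end_prob(a, b, corner) for (a, b) in cells) / len(cells)

def p(corner):
    """Probability that a whole trial (uniform starting box) ends at `corner`."""
    boxes = [(i, j) for i in range(len(SHAPE)) for j in range(SHAPE[i])]
    return sum(end_prob(i, j, corner) for (i, j) in boxes) / N

def F(shape):
    """n!/(product of hook lengths), as in Definition 12.18."""
    shape = tuple(part for part in shape if part > 0)
    prod = 1
    for i in range(len(shape)):
        for j in range(shape[i]):
            col_len = sum(1 for part in shape if part > j)
            prod *= (shape[i] - j) + (col_len - i) - 1
    return Fraction(factorial(sum(shape)), prod)

for corner in [(0, 2), (1, 1)]:        # the two inner corners of (3, 2)
    smaller = list(SHAPE)
    smaller[corner[0]] -= 1
    print(corner, p(corner), F(tuple(smaller)) / F(SHAPE))   # 2/5 and 3/5 in both columns
```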

Proof. Look at the right hand side. If we look at F versus F_α, where we have removed the inner corner in row α, then using the inductive hypothesis
F(λ_1, . . . , λ_m) = n! / ∏_{(i,j)∈λ} h_{ij}
of the recurrence for valid partitions λ_1 ≥ · · · ≥ λ_m, we have
F_α / F = (1/n) · ∏_{1≤i<α} h_{iβ}/(h_{iβ} − 1) · ∏_{1≤j<β} h_{αj}/(h_{αj} − 1).