Quantum Mechanics: A Mathematical Introduction (Instructor Solution Manual, Solutions) 1009100505, 9781009100502


Quantum Mechanics: A Mathematical Introduction

Andrew Larkoski
SLAC National Accelerator Laboratory

Copyright 2023. This publication is in copyright. Subject to statutory exception and to the provisions of relevant collective licensing agreements, no reproduction of any part may take place without the written permission of Cambridge University Press.

Contact me at [email protected] for any errors, corrections, or suggestions.

Contents

2  Linear Algebra (Exercises 2.1–2.8)
3  Hilbert Space (Exercises 3.1–3.8)
4  Axioms of Quantum Mechanics and Their Consequences (Exercises 4.1–4.8)
5  Quantum Mechanical Example: The Infinite Square Well (Exercises 5.1–5.8)
6  Quantum Mechanical Example: The Harmonic Oscillator (Exercises 6.1–6.8)
7  Quantum Mechanical Example: The Free Particle (Exercises 7.1–7.8)
8  Rotations in Three Dimensions (Exercises 8.1–8.8)
9  The Hydrogen Atom (Exercises 9.1–9.8)
10 Approximation Techniques (Exercises 10.1–10.9)
11 The Path Integral (Exercises 11.1–11.8)
12 The Density Matrix (Exercises 12.1–12.9)

2 Linear Algebra

Exercises

2.1 (a) The 2 × 2 and 3 × 3 discrete derivative matrices can be read off from the general form provided there:

$$D_{2\times2} = \begin{pmatrix} 0 & \frac{1}{2\Delta x} \\ -\frac{1}{2\Delta x} & 0 \end{pmatrix}, \tag{2.1}$$

$$D_{3\times3} = \begin{pmatrix} 0 & \frac{1}{2\Delta x} & 0 \\ -\frac{1}{2\Delta x} & 0 & \frac{1}{2\Delta x} \\ 0 & -\frac{1}{2\Delta x} & 0 \end{pmatrix}. \tag{2.2}$$

(b) To calculate the eigenvalues, we construct the characteristic equation

$$\det(D_{2\times2} - \lambda I) = 0 = \lambda^2 + \frac{1}{4\Delta x^2}. \tag{2.3}$$

Then, the eigenvalues of the 2 × 2 derivative matrix are

$$\lambda = \pm\frac{i}{2\Delta x}. \tag{2.4}$$

For the 3 × 3 derivative matrix, the characteristic equation is

$$\det(D_{3\times3} - \lambda I) = 0 = \det\begin{pmatrix} -\lambda & \frac{1}{2\Delta x} & 0 \\ -\frac{1}{2\Delta x} & -\lambda & \frac{1}{2\Delta x} \\ 0 & -\frac{1}{2\Delta x} & -\lambda \end{pmatrix} = (-\lambda)\left(\lambda^2 + \frac{1}{4\Delta x^2}\right) - \frac{\lambda}{4\Delta x^2}. \tag{2.5}$$

One eigenvalue is clearly 0, and the other eigenvalues satisfy

$$\lambda^2 + \frac{1}{2\Delta x^2} = 0, \tag{2.6}$$

and so the 3 × 3 derivative matrix has eigenvalues

$$\lambda = 0, \ \pm\frac{i}{\sqrt{2}\,\Delta x}. \tag{2.7}$$

All non-zero eigenvalues are purely imaginary.
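These eigenvalues are easy to cross-check numerically. The following is a sketch, not part of the text: the grid spacing `dx` and the helper functions `det2` and `det3` are sample choices of mine, encoding the characteristic polynomials derived above.

```python
# Cross-check: lambda = ±i/(2 dx) are roots of the 2x2 characteristic
# polynomial, and lambda = 0, ±i/(sqrt(2) dx) are roots of the 3x3 one.
import cmath

dx = 0.5
a = 1 / (2 * dx)  # magnitude of the off-diagonal entries of D

def det2(lam):
    # det(D_2x2 - lam*I) = lam^2 + 1/(4 dx^2) for D_2x2 = [[0, a], [-a, 0]]
    return lam ** 2 + a ** 2

def det3(lam):
    # det(D_3x3 - lam*I) = -lam*(lam^2 + 1/(2 dx^2)) for tridiagonal D_3x3
    return -lam * (lam ** 2 + 2 * a ** 2)

for lam in (1j / (2 * dx), -1j / (2 * dx)):
    assert abs(det2(lam)) < 1e-12

for lam in (0, 1j / (cmath.sqrt(2) * dx), -1j / (cmath.sqrt(2) * dx)):
    assert abs(det3(lam)) < 1e-12
```

Both polynomials vanish at the claimed purely imaginary roots.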

(c) For the 2 × 2 derivative matrix, the eigenvector equation can be expressed as

$$\begin{pmatrix} 0 & \frac{1}{2\Delta x} \\ -\frac{1}{2\Delta x} & 0 \end{pmatrix}\begin{pmatrix} a \\ b \end{pmatrix} = \pm\frac{i}{2\Delta x}\begin{pmatrix} a \\ b \end{pmatrix}, \tag{2.8}$$

for some numbers a, b. This linear equation requires that

$$b = \pm i a, \tag{2.9}$$

and so the eigenvectors can be expressed as

$$\vec v_1 = a\begin{pmatrix} 1 \\ i \end{pmatrix}, \qquad \vec v_2 = a\begin{pmatrix} 1 \\ -i \end{pmatrix}. \tag{2.10}$$

To ensure that they are unit normalized, we require that

$$\vec v_1^{\,*}\cdot\vec v_1 = a^2\,\bigl(1\;\; -i\bigr)\begin{pmatrix} 1 \\ i \end{pmatrix} = 2a^2, \tag{2.11}$$

or that $a = 1/\sqrt2$ (ignoring a possible overall complex phase). Thus, the normalized eigenvectors are

$$\vec v_1 = \frac{1}{\sqrt2}\begin{pmatrix} 1 \\ i \end{pmatrix}, \qquad \vec v_2 = \frac{1}{\sqrt2}\begin{pmatrix} 1 \\ -i \end{pmatrix}. \tag{2.12}$$

Note that these are mutually orthogonal:

$$\vec v_1^{\,*}\cdot\vec v_2 = \frac12\,\bigl(1\;\; -i\bigr)\begin{pmatrix} 1 \\ -i \end{pmatrix} = \frac{1-1}{2} = 0. \tag{2.13}$$

For the 3 × 3 derivative matrix, we will first determine the eigenvector corresponding to the 0 eigenvalue, where

$$\begin{pmatrix} 0 & \frac{1}{2\Delta x} & 0 \\ -\frac{1}{2\Delta x} & 0 & \frac{1}{2\Delta x} \\ 0 & -\frac{1}{2\Delta x} & 0 \end{pmatrix}\begin{pmatrix} a \\ b \\ c \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}. \tag{2.14}$$

Performing the matrix multiplication, we find that

$$\frac{1}{2\Delta x}\begin{pmatrix} b \\ c-a \\ -b \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}. \tag{2.15}$$

This enforces that b = 0 and c = a. That is, the normalized eigenvector with 0 eigenvalue is (again, up to an overall complex phase)

$$\vec v_1 = \frac{1}{\sqrt2}\begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix}. \tag{2.16}$$

Next, the eigenvectors for the non-zero eigenvalues satisfy

$$\begin{pmatrix} 0 & \frac{1}{2\Delta x} & 0 \\ -\frac{1}{2\Delta x} & 0 & \frac{1}{2\Delta x} \\ 0 & -\frac{1}{2\Delta x} & 0 \end{pmatrix}\begin{pmatrix} a \\ b \\ c \end{pmatrix} = \pm\frac{i}{\sqrt2\,\Delta x}\begin{pmatrix} a \\ b \\ c \end{pmatrix}. \tag{2.17}$$

Performing the matrix multiplication, we find

$$\frac{1}{2\Delta x}\begin{pmatrix} b \\ c-a \\ -b \end{pmatrix} = \pm\frac{i}{\sqrt2\,\Delta x}\begin{pmatrix} a \\ b \\ c \end{pmatrix}, \tag{2.18}$$

or that

$$b = \pm\sqrt2\,i\,a, \qquad c = \pm\frac{i}{\sqrt2}\,b = -a. \tag{2.19}$$

Then, the other two eigenvectors are

$$\vec v_2 = a\begin{pmatrix} 1 \\ \sqrt2\,i \\ -1 \end{pmatrix}, \qquad \vec v_3 = a\begin{pmatrix} 1 \\ -\sqrt2\,i \\ -1 \end{pmatrix}. \tag{2.20}$$

The value of a can be determined by demanding that they are normalized:

$$\vec v_2^{\,*}\cdot\vec v_2 = a^2\,\bigl(1\;\; -\sqrt2\,i\;\; -1\bigr)\begin{pmatrix} 1 \\ \sqrt2\,i \\ -1 \end{pmatrix} = 4a^2, \tag{2.21}$$

or that a = 1/2. That is,

$$\vec v_2 = \frac12\begin{pmatrix} 1 \\ \sqrt2\,i \\ -1 \end{pmatrix}, \qquad \vec v_3 = \frac12\begin{pmatrix} 1 \\ -\sqrt2\,i \\ -1 \end{pmatrix}. \tag{2.22}$$

All three eigenvectors $\vec v_1, \vec v_2, \vec v_3$ are mutually orthogonal. For example,

$$\vec v_2^{\,*}\cdot\vec v_3 = \frac14\,\bigl(1\;\; -\sqrt2\,i\;\; -1\bigr)\begin{pmatrix} 1 \\ -\sqrt2\,i \\ -1 \end{pmatrix} = \frac{1-2+1}{4} = 0. \tag{2.23}$$

(d) To determine how the exponentiated matrix

$$M = e^{\Delta x D} = \sum_{n=0}^{\infty}\frac{\Delta x^n}{n!}D^n \tag{2.24}$$

acts on an eigenvector $\vec v$ with eigenvalue $\lambda$, let's use the Taylor-expanded form. Note that

$$D^n\vec v = \lambda^n\vec v, \tag{2.25}$$

and so

$$M\vec v = \sum_{n=0}^{\infty}\frac{\Delta x^n}{n!}D^n\vec v = \sum_{n=0}^{\infty}\frac{(\lambda\Delta x)^n}{n!}\vec v = e^{\lambda\Delta x}\vec v. \tag{2.26}$$

We can then just plug in the appropriate eigenvalues and eigenvectors.
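The relation $M\vec v = e^{\lambda\Delta x}\vec v$ can be confirmed numerically by summing the matrix exponential series directly. This is a sketch with my own sample choices (Δx = 1, a truncated 40-term series, and the helpers `matvec` and `exp_times`), not anything from the text.

```python
# Verify M v = exp(lambda * dx) v for the 2x2 derivative matrix, by summing
# the series sum_n (dx^n / n!) D^n v term by term.
import cmath

dx = 1.0
a = 1 / (2 * dx)
D = [[0, a], [-a, 0]]

def matvec(A, v):
    return [sum(A[i][j] * v[j] for j in range(2)) for i in range(2)]

def exp_times(v, terms=40):
    # truncated Taylor series: out accumulates sum_n (dx^n / n!) D^n v
    out = [0j, 0j]
    term = list(v)
    fact = 1.0
    for n in range(terms):
        if n > 0:
            term = matvec(D, term)  # term is now D^n v
            fact *= n               # fact is now n!
        out = [o + (dx ** n) / fact * t for o, t in zip(out, term)]
    return out

lam = 1j * a                                   # eigenvalue +i/(2 dx)
v = [1 / cmath.sqrt(2), 1j / cmath.sqrt(2)]    # normalized eigenvector v_1
lhs = exp_times(v)
rhs = [cmath.exp(lam * dx) * x for x in v]
assert all(abs(l - r) < 1e-10 for l, r in zip(lhs, rhs))
```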

(e) Now, we are asked to determine the matrix form of the exponentiated 2 × 2 and 3 × 3 derivative matrices. Let's start with the 2 × 2 matrix and note that we can write

$$M_{2\times2} = e^{\Delta x D_{2\times2}} = \sum_{n=0}^{\infty}\frac{\Delta x^n}{n!}D_{2\times2}^n = \sum_{n=0}^{\infty}\frac{1}{2^n n!}\begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}^n. \tag{2.27}$$

So the problem is reduced to establishing properties of the matrix $J = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}$. The first few powers of the matrix are

$$J^0 = I, \qquad J^1 = J, \qquad J^2 = -I. \tag{2.28–2.30}$$

This pattern continues, and it can be compactly expressed as

$$J^{2n} = (-1)^n I, \qquad J^{2n+1} = (-1)^n J, \tag{2.31}$$

for n = 0, 1, 2, .... Then, the 2 × 2 exponentiated matrix can be expressed as

$$M_{2\times2} = I\sum_{n=0}^{\infty}\frac{(-1)^n}{2^{2n}(2n)!} + J\sum_{n=0}^{\infty}\frac{(-1)^n}{2^{2n+1}(2n+1)!} = \cos\tfrac12\,I + \sin\tfrac12\,J = \begin{pmatrix} \cos\frac12 & \sin\frac12 \\ -\sin\frac12 & \cos\frac12 \end{pmatrix}. \tag{2.32}$$

A similar exercise can be repeated for the 3 × 3 derivative matrix. We want to evaluate the sum

$$M_{3\times3} = e^{\Delta x D_{3\times3}} = \sum_{n=0}^{\infty}\frac{1}{2^n n!}\begin{pmatrix} 0 & 1 & 0 \\ -1 & 0 & 1 \\ 0 & -1 & 0 \end{pmatrix}^n. \tag{2.33}$$

For brevity, write $K = \begin{pmatrix} 0 & 1 & 0 \\ -1 & 0 & 1 \\ 0 & -1 & 0 \end{pmatrix}$ and $A = \begin{pmatrix} 1 & 0 & 1 \\ 0 & 0 & 0 \\ 1 & 0 & 1 \end{pmatrix}$. Now, note that even powers of the matrix take the form

$$K^0 = I, \qquad K^2 = A - 2I, \qquad K^4 = -2A + 4I, \tag{2.34–2.36}$$

and so the general result takes the form

$$K^{2n} = (-2)^{n-1}A + (-2)^n I, \tag{2.37}$$

for n = 1, 2, .... Odd powers of the matrix take the form $K^1 = K$, $K^3 = -2K$, $K^5 = 4K$ (eqs. 2.38–2.40), with the general form

$$K^{2n+1} = (-2)^n K, \tag{2.41}$$

for n = 0, 1, 2, .... Putting these results together, the 3 × 3 exponentiated matrix is

$$M_{3\times3} = I + \sum_{n=1}^{\infty}\frac{(-2)^{n-1}A + (-2)^n I}{2^{2n}(2n)!} + K\sum_{n=0}^{\infty}\frac{(-2)^n}{2^{2n+1}(2n+1)!} = \frac{A}{2} + \frac{\cos\frac{1}{\sqrt2}}{2}\begin{pmatrix} 1 & 0 & -1 \\ 0 & 2 & 0 \\ -1 & 0 & 1 \end{pmatrix} + \frac{\sin\frac{1}{\sqrt2}}{\sqrt2}\begin{pmatrix} 0 & 1 & 0 \\ -1 & 0 & 1 \\ 0 & -1 & 0 \end{pmatrix}. \tag{2.42}$$

2.2 If we instead defined the derivative matrix through the standard asymmetric difference, the derivative matrix would take the form

$$D = \begin{pmatrix} \ddots & \vdots & \vdots & \vdots & \\ \cdots & -\frac{1}{\Delta x} & \frac{1}{\Delta x} & 0 & \cdots \\ \cdots & 0 & -\frac{1}{\Delta x} & \frac{1}{\Delta x} & \cdots \\ \cdots & 0 & 0 & -\frac{1}{\Delta x} & \cdots \\ & \vdots & \vdots & \vdots & \ddots \end{pmatrix}. \tag{2.43}$$

Such a matrix has no non-zero entries below the diagonal, so its characteristic equation is rather trivial, for any number of grid points:

$$\det(D_{n\times n} - \lambda I) = \left(-\frac{1}{\Delta x} - \lambda\right)^n = 0. \tag{2.44}$$

Thus, there is but a single eigenvalue, $\lambda = -1/\Delta x$.

2.3 (a) We are asked to express the quadratic polynomial p(x) as a linear combination of Legendre polynomials. So, we have

$$p(x) = ax^2 + bx + c = d_0 P_0(x) + d_1 P_1(x) + d_2 P_2(x) = d_0\frac{1}{\sqrt2} + d_1\sqrt{\frac32}\,x + d_2\sqrt{\frac58}\,(3x^2 - 1), \tag{2.45}$$

for some coefficients $d_0, d_1, d_2$. First, matching the coefficient of x², we must have

$$3\sqrt{\frac58}\,d_2 = a, \tag{2.46}$$

or that

$$d_2 = \frac13\sqrt{\frac85}\,a. \tag{2.47}$$

Next, matching coefficients of x, we have

$$\sqrt{\frac32}\,d_1 = b, \tag{2.48}$$

or that

$$d_1 = \sqrt{\frac23}\,b. \tag{2.49}$$

Finally, matching coefficients of x⁰, we have

$$d_0\frac{1}{\sqrt2} - d_2\sqrt{\frac58} = d_0\frac{1}{\sqrt2} - \frac{a}{3} = c, \tag{2.50}$$

or that

$$d_0 = \sqrt2\,c + \frac{\sqrt2}{3}\,a. \tag{2.51}$$

Then, we can express the polynomial as the linear combination

$$p(x) = \left(\sqrt2\,c + \frac{\sqrt2}{3}\,a\right)P_0(x) + \sqrt{\frac23}\,b\,P_1(x) + \frac13\sqrt{\frac85}\,a\,P_2(x), \tag{2.52}$$

or as the vector in the space of Legendre polynomials

$$p(x) = \begin{pmatrix} \sqrt2\,c + \frac{\sqrt2}{3}a \\[2pt] \sqrt{\frac23}\,b \\[2pt] \frac13\sqrt{\frac85}\,a \end{pmatrix}. \tag{2.53}$$

(b) Let's now act on this vector with the derivative matrix we constructed:

$$\frac{d}{dx}\,p(x) = \begin{pmatrix} 0 & \sqrt3 & 0 \\ 0 & 0 & \sqrt{15} \\ 0 & 0 & 0 \end{pmatrix}\begin{pmatrix} \sqrt2\,c + \frac{\sqrt2}{3}a \\[2pt] \sqrt{\frac23}\,b \\[2pt] \frac13\sqrt{\frac85}\,a \end{pmatrix} = \begin{pmatrix} \sqrt2\,b \\[2pt] \sqrt{\frac83}\,a \\[2pt] 0 \end{pmatrix}. \tag{2.54}$$

Then, re-interpreting this as a polynomial, we have that

$$\frac{d}{dx}\,p(x) = \sqrt2\,b\,P_0(x) + \sqrt{\frac83}\,a\,P_1(x) = b + 2ax, \tag{2.55}$$

which is indeed the derivative of $p(x) = ax^2 + bx + c$.

(c) Let's first construct the second derivative matrix through squaring the first derivative matrix:

$$\frac{d^2}{dx^2} = \begin{pmatrix} 0 & \sqrt3 & 0 \\ 0 & 0 & \sqrt{15} \\ 0 & 0 & 0 \end{pmatrix}\begin{pmatrix} 0 & \sqrt3 & 0 \\ 0 & 0 & \sqrt{15} \\ 0 & 0 & 0 \end{pmatrix} = \begin{pmatrix} 0 & 0 & 3\sqrt5 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}. \tag{2.56}$$

By contrast, the explicit matrix element in the ij position would be

$$\left(\frac{d^2}{dx^2}\right)_{ij} = \int_{-1}^{1} dx\, P_{i-1}(x)\,\frac{d^2}{dx^2}P_{j-1}(x). \tag{2.57}$$

For the second derivative to be non-zero, it must act on P₂(x), and so j = 3. Further, by orthogonality of the Legendre polynomials, the only non-zero value is i = 1. Then, the one non-zero element in the matrix is

$$\left(\frac{d^2}{dx^2}\right)_{13} = \int_{-1}^{1} dx\, P_0(x)\,\frac{d^2}{dx^2}P_2(x) = \int_{-1}^{1} dx\,\frac{1}{\sqrt2}\cdot 6\sqrt{\frac58} = 3\sqrt5, \tag{2.58}$$

which agrees exactly with just squaring the derivative matrix.

(d) To calculate the exponential of the derivative matrix, we need all of its powers. We have already calculated the first and second derivative matrices, but what about higher powers? With only the first three Legendre polynomials, it is easy to see that the third and higher derivative matrices are all 0:

$$\frac{d^3}{dx^3} = \begin{pmatrix} 0 & \sqrt3 & 0 \\ 0 & 0 & \sqrt{15} \\ 0 & 0 & 0 \end{pmatrix}\begin{pmatrix} 0 & 0 & 3\sqrt5 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix} = 0. \tag{2.59}$$

Thus, the Taylor expansion of the exponentiated derivative matrix terminates after a few terms:

$$M = e^{\Delta x\frac{d}{dx}} = I + \Delta x\begin{pmatrix} 0 & \sqrt3 & 0 \\ 0 & 0 & \sqrt{15} \\ 0 & 0 & 0 \end{pmatrix} + \frac{\Delta x^2}{2}\begin{pmatrix} 0 & 0 & 3\sqrt5 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}. \tag{2.60}$$

The action of this matrix on the polynomial, expressed as a vector in Legendre polynomial space, is

$$M\,p(x) = \begin{pmatrix} \sqrt2\,c + \frac{\sqrt2}{3}a \\[2pt] \sqrt{\frac23}\,b \\[2pt] \frac13\sqrt{\frac85}\,a \end{pmatrix} + \Delta x\begin{pmatrix} \sqrt2\,b \\[2pt] \sqrt{\frac83}\,a \\[2pt] 0 \end{pmatrix} + \Delta x^2\begin{pmatrix} \sqrt2\,a \\ 0 \\ 0 \end{pmatrix} = ax^2 + bx + c + (b + 2ax)\Delta x + a\Delta x^2 = a(x+\Delta x)^2 + b(x+\Delta x) + c, \tag{2.61}$$

which is indeed a translation of the polynomial, as expected.

2.4 (a) To determine if integration is linear, we need to verify the two properties. First, for two functions f(x) and g(x), anti-differentiation acts on their sum as

$$\int dx\,(f(x) + g(x)) = F(x) + G(x) + c, \tag{2.62}$$

where

$$\frac{d}{dx}F(x) = f(x), \qquad \frac{d}{dx}G(x) = g(x), \tag{2.63}$$

and c is an arbitrary constant. Linearity requires that this equal the sum of their anti-derivatives:

$$\int dx\, f(x) + \int dx\, g(x) = F(x) + G(x) + 2c. \tag{2.64}$$

Note that each integral picks up its own constant c. So, the only way that anti-differentiation can be linear is if the integration constant c = 0. Note also that definite integration, i.e., the area under a curve, is independent of the integration constant. Next, for linearity we must also demand that multiplication by a constant factors out of the integral. That is, for some constant c, we must have that

$$\int dx\, c\,f(x) = c\,F(x), \tag{2.65}$$

which is indeed true, because the derivative of c F(x) is c f(x) by the constant-multiple rule of differentiation.

(b) For two vectors $\vec f, \vec g$ related by the action of the differentiation matrix,

$$D\vec f = \vec g, \tag{2.66}$$

we would like to define the anti-differentiation operator A as

$$\vec f = D^{-1}\vec g = A\vec g. \tag{2.67}$$

However, this is only well-defined if the derivative operator is invertible; that is, if it has no 0 eigenvalues. For the 3 × 3 derivative matrix, we found that it had a 0 eigenvalue, and this property holds for higher-dimensional differentiation matrices. Thus, some other restrictions must be imposed for anti-differentiation to be well-defined as a matrix operator.

2.5 (a) As a differential equation, the eigenvalue equation for the operator Ŝ is

$$-ix\frac{d}{dx}f_\lambda(x) = \lambda f_\lambda(x), \tag{2.68}$$

for some eigenvalue λ and eigenfunction $f_\lambda$. This can be rearranged into

$$\frac{df_\lambda}{f_\lambda} = i\lambda\,\frac{dx}{x}, \tag{2.69}$$

and has solution

$$f_\lambda(x) = c\,x^{i\lambda} = c\,e^{i\lambda\log x}, \tag{2.70}$$

for some constant c. Note that the logarithm only makes sense if x ≥ 0, so we restrict the domain of the operator Ŝ to x ∈ [0, ∞). (This can be relaxed, but you have to define what you mean by the logarithm of a negative number carefully.) For the eigenfunctions to be bounded, we must require that λ is real-valued.

(b) Integration of the product of two eigenfunctions with eigenvalues λ₁ ≠ λ₂ yields

$$\int_0^\infty dx\, f_{\lambda_1}(x)^* f_{\lambda_2}(x) = \int_0^\infty dx\, x^{-i(\lambda_1-\lambda_2)} = \int_0^\infty dx\, e^{-i(\lambda_1-\lambda_2)\log x}, \tag{2.71}$$

where we restrict to the domain x ∈ [0, ∞), as mentioned above. Now, let's change variables to y = log x, so that y ∈ (−∞, ∞), and the integral becomes

$$\int_0^\infty dx\, e^{-i(\lambda_1-\lambda_2)\log x} = \int_{-\infty}^{\infty} dy\, e^y\, e^{-i(\lambda_1-\lambda_2)y}. \tag{2.72}$$

Note the extra factor $e^y$ in the integrand; this is the Jacobian of the change of variables $x = e^y$, for which $dx = e^y\,dy$. If this Jacobian were not there, then the integral would be exactly like the ones familiar from Fourier transforms. With it there, however, this integral is not defined. We will fix it in later chapters.

(c) A function g(x) with a Taylor expansion about 0 can be expressed as

$$g(x) = \sum_{n=0}^{\infty} a_n x^n, \tag{2.73}$$

for some coefficients $a_n$. If we act Ŝ on this function, we find

$$\hat S g(x) = -i\sum_{n=0}^{\infty} a_n\, x\frac{d}{dx}x^n = -i\sum_{n=0}^{\infty} n\,a_n x^n. \tag{2.74}$$

Therefore, if m powers of Ŝ act on g(x), it returns

$$\hat S^m g(x) = \sum_{n=0}^{\infty} a_n\left(-ix\frac{d}{dx}\right)^m x^n = \sum_{n=0}^{\infty}(-in)^m a_n x^n. \tag{2.75}$$

Now, the action of the exponentiated operator on g(x) is

$$e^{i\alpha\hat S}g(x) = \sum_{m=0}^{\infty}\frac{(i\alpha)^m}{m!}\hat S^m g(x) = \sum_{m=0}^{\infty}\sum_{n=0}^{\infty}\frac{(i\alpha)^m}{m!}(-in)^m a_n x^n = \sum_{n=0}^{\infty} a_n x^n\sum_{m=0}^{\infty}\frac{(n\alpha)^m}{m!} = \sum_{n=0}^{\infty} a_n e^{n\alpha}x^n = \sum_{n=0}^{\infty} a_n(e^\alpha x)^n = g(e^\alpha x). \tag{2.76}$$

That is, this operator rescales the coordinate x.

2.6 (a) For the matrix M, its characteristic equation is

$$\det(M - \lambda I) = \det\begin{pmatrix} a-\lambda & b \\ c & d-\lambda \end{pmatrix} = (a-\lambda)(d-\lambda) - bc = \lambda^2 - (a+d)\lambda + (ad - bc) = 0. \tag{2.77}$$

We can write this in a very nice way in terms of the trace and the determinant of M:

$$\det(M - \lambda I) = \lambda^2 - (\operatorname{tr}M)\,\lambda + \det M = 0. \tag{2.78}$$

(b) Solving for the eigenvalues, we find

$$\lambda = \frac{\operatorname{tr}M \pm \sqrt{(\operatorname{tr}M)^2 - 4\det M}}{2}. \tag{2.79}$$

So, the eigenvalues are real iff $(\operatorname{tr}M)^2 \geq 4\det M$, or, in terms of the matrix elements,

$$(a+d)^2 \geq 4(ad - bc). \tag{2.80}$$
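This reality criterion is simple to exercise numerically. The following sketch uses sample matrices of my own choosing (including a traceless, unit-determinant example) and a helper `eigenvalues` that solves the characteristic equation by the quadratic formula.

```python
# Solve lambda^2 - (tr M) lambda + det M = 0 directly and check the
# reality criterion (tr M)^2 >= 4 det M on sample 2x2 matrices.
import cmath

def eigenvalues(a, b, c, d):
    tr, det = a + d, a * d - b * c
    disc = cmath.sqrt(tr ** 2 - 4 * det)
    return (tr + disc) / 2, (tr - disc) / 2

# here (a+d)^2 = 4 >= 4(ad - bc) = -4: both eigenvalues real
for lam in eigenvalues(2, 1, 1, 0):
    assert abs(lam.imag) < 1e-12

# a traceless, unit-determinant matrix: eigenvalues ±i, not real
lams = eigenvalues(0, 1, -1, 0)
assert sorted(l.imag for l in lams) == [-1.0, 1.0]
assert all(abs(l.real) < 1e-12 for l in lams)
```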

(c) With the determinant equal to 1 and the trace equal to 0, the eigenvalues are

$$\lambda = \frac{0 \pm \sqrt{0-4}}{2} = \pm i, \tag{2.81}$$

which are clearly not real-valued.

2.7 (a) The other elements of the matrix M we need in the $\vec u$-vector basis are

$$\vec u_1^{\,\intercal}M\vec u_1 = \bigl(\cos\theta\;\;\sin\theta\bigr)\begin{pmatrix} M_{11} & M_{12} \\ M_{21} & M_{22} \end{pmatrix}\begin{pmatrix} \cos\theta \\ \sin\theta \end{pmatrix} = M_{11}\cos^2\theta + M_{21}\cos\theta\sin\theta + M_{12}\cos\theta\sin\theta + M_{22}\sin^2\theta, \tag{2.82}$$

$$\vec u_2^{\,\intercal}M\vec u_1 = \bigl(-\sin\theta\;\;\cos\theta\bigr)\begin{pmatrix} M_{11} & M_{12} \\ M_{21} & M_{22} \end{pmatrix}\begin{pmatrix} \cos\theta \\ \sin\theta \end{pmatrix} = -M_{11}\cos\theta\sin\theta + M_{21}\cos^2\theta - M_{12}\sin^2\theta + M_{22}\cos\theta\sin\theta, \tag{2.83}$$

$$\vec u_2^{\,\intercal}M\vec u_2 = \bigl(-\sin\theta\;\;\cos\theta\bigr)\begin{pmatrix} M_{11} & M_{12} \\ M_{21} & M_{22} \end{pmatrix}\begin{pmatrix} -\sin\theta \\ \cos\theta \end{pmatrix} = M_{11}\sin^2\theta - M_{21}\cos\theta\sin\theta - M_{12}\cos\theta\sin\theta + M_{22}\cos^2\theta. \tag{2.84}$$

(b) In Exercise 2.6, we showed that the characteristic equation for a 2 × 2 matrix can be written as

$$\det(M - \lambda I) = \lambda^2 - (\operatorname{tr}M)\,\lambda + \det M = 0, \tag{2.85}$$

where tr M is the sum of the diagonal elements of M. In the $\vec v$-vector basis, this characteristic equation is

$$\lambda^2 - (M_{11} + M_{22})\,\lambda + (M_{11}M_{22} - M_{12}M_{21}) = 0. \tag{2.86}$$

Now, in the $\vec u$-vector basis, we can calculate the trace and determinant. First, the trace:

$$\operatorname{tr}M = \vec u_1^{\,\intercal}M\vec u_1 + \vec u_2^{\,\intercal}M\vec u_2 = \left(M_{11}\cos^2\theta + M_{21}\cos\theta\sin\theta + M_{12}\cos\theta\sin\theta + M_{22}\sin^2\theta\right) + \left(M_{11}\sin^2\theta - M_{21}\cos\theta\sin\theta - M_{12}\cos\theta\sin\theta + M_{22}\cos^2\theta\right) = M_{11} + M_{22}, \tag{2.87}$$

exactly the same as in the $\vec v$ basis. The determinant is

$$\det M = \bigl(\vec u_1^{\,\intercal}M\vec u_1\bigr)\bigl(\vec u_2^{\,\intercal}M\vec u_2\bigr) - \bigl(\vec u_1^{\,\intercal}M\vec u_2\bigr)\bigl(\vec u_2^{\,\intercal}M\vec u_1\bigr). \tag{2.88}$$

Plugging in the explicit values for the matrix elements, one finds the identical value of the determinant in either the $\vec v$-vector or $\vec u$-vector basis. Therefore, the characteristic equation is independent of basis.

2.8 (a) Verifying orthonormality is straightforward. For m ≠ n, we have

$$\int_0^{2\pi} d\theta\, f_m(\theta)^* f_n(\theta) = \frac{1}{2\pi}\int_0^{2\pi} d\theta\, e^{i(n-m)\theta} = -\frac{i}{2\pi(n-m)}\left(e^{2\pi i(n-m)} - 1\right) = 0, \tag{2.89}$$

because n − m is still an integer. If instead n = m, we can use l'Hôpital's rule to find

$$\lim_{m\to n}\left[-\frac{i}{2\pi(n-m)}\left(e^{2\pi i(n-m)} - 1\right)\right] = \lim_{m\to n}\left[-i\,\frac{2\pi i}{2\pi}\,e^{2\pi i(n-m)}\right] = 1. \tag{2.90}$$

(b) The eigenvalue equation for the operator D̂ is

$$\hat D f_\lambda(\theta) = -i\frac{d}{d\theta}f_\lambda(\theta) = \lambda f_\lambda(\theta). \tag{2.91}$$

The solutions are

$$f_\lambda(\theta) = c\,e^{i\lambda\theta}, \tag{2.92}$$

for some constant c. These eigenfunctions must be single-valued on θ ∈ [0, 2π), and periodicity requires that

$$f_\lambda(\theta + 2\pi) = c\,e^{i\lambda(\theta+2\pi)} = f_\lambda(\theta) = c\,e^{i\lambda\theta}. \tag{2.93}$$

That is, we must require that λ is an integer, so that

$$e^{2\pi i\lambda} = 1. \tag{2.94}$$

Thus, the eigenfunctions of D̂ are exactly the orthonormal complex exponentials from part (a).

(c) We want to show that the sinusoidal form of the basis elements is also orthonormal. First, for normalization, the integral of the square of cosine is

$$\frac{1}{\pi}\int_0^{2\pi} d\theta\,\cos^2(n\theta) = \frac{1}{2\pi}\int_0^{2\pi} d\theta\,\bigl(1 + \cos(2n\theta)\bigr) = 1, \tag{2.95}$$

because the integral of cosine over a whole number of its periods is 0, and we used the double-angle formula. Essentially the same calculation follows for sine. For orthogonality, we will again just show one explicit calculation, the product of cosine and sine:

$$\frac{1}{\pi}\int_0^{2\pi} d\theta\,\cos(n\theta)\sin(m\theta) = \frac{1}{2\pi}\int_0^{2\pi} d\theta\,\bigl[\sin((n+m)\theta) - \sin((n-m)\theta)\bigr] = 0, \tag{2.96}$$

using the angle-addition formulas and noting again that the sum and difference of two integers is still an integer, and the integral of sine over a whole number of its periods is 0. Finally, the normalized basis element for n = 0 is just the constant $1/\sqrt{2\pi}$, as identified in the exponential form.

(d) We are asked to determine the matrix elements of D̂ in the sinusoidal basis. To do this, we sandwich the operator between the basis elements and integrate over the domain. First, note that the cos-cos and sin-sin matrix elements vanish. For example,

$$\frac{1}{\pi}\int_0^{2\pi} d\theta\,\cos(m\theta)\,\hat D\cos(n\theta) = \frac{in}{\pi}\int_0^{2\pi} d\theta\,\cos(m\theta)\sin(n\theta) = 0, \tag{2.97}$$

by orthogonality. The same argument holds for the sin-sin contribution. Therefore, the only non-zero matrix elements are mixed sin-cos, like

$$\frac{1}{\pi}\int_0^{2\pi} d\theta\,\cos(m\theta)\,\hat D\sin(n\theta) = -\frac{in}{\pi}\int_0^{2\pi} d\theta\,\cos(m\theta)\cos(n\theta) = -in\,\delta_{mn}. \tag{2.98}$$

For the opposite order, the matrix element is

$$\frac{1}{\pi}\int_0^{2\pi} d\theta\,\sin(m\theta)\,\hat D\cos(n\theta) = \frac{in}{\pi}\int_0^{2\pi} d\theta\,\sin(m\theta)\sin(n\theta) = in\,\delta_{mn}. \tag{2.99}$$
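The orthonormality integrals of this exercise lend themselves to a quick numerical spot-check. This sketch is my own (sample mode numbers m, n and the helper `integrate` are not from the text); it exploits the fact that an equal-spacing quadrature rule is essentially exact for trigonometric integrands over a full period.

```python
# Spot-check the orthonormality integrals of parts (a) and (c) with a
# midpoint-rule sum over [0, 2*pi].
import math

N = 4096
H = 2 * math.pi / N

def integrate(f):
    return sum(f((k + 0.5) * H) for k in range(N)) * H

m, n = 2, 3
# complex-exponential basis, m != n: real part of (1/2pi) int e^{i(n-m)t} dt = 0
assert abs(integrate(lambda t: math.cos((n - m) * t)) / (2 * math.pi)) < 1e-9
# sinusoidal basis: (1/pi) int cos(m t) cos(n t) dt = delta_mn
assert abs(integrate(lambda t: math.cos(m * t) * math.cos(n * t)) / math.pi) < 1e-9
assert abs(integrate(lambda t: math.cos(m * t) ** 2) / math.pi - 1) < 1e-9
# mixed element of part (d): (1/pi) int cos(m t) sin(n t) dt = 0
assert abs(integrate(lambda t: math.cos(m * t) * math.sin(n * t)) / math.pi) < 1e-9
```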

3 Hilbert Space

Exercises

3.1 (a) If the matrix M is normal, then MM† = M†M. The Hermitian conjugate of M is

$$M^\dagger = \begin{pmatrix} a^* & c^* \\ b^* & d^* \end{pmatrix}. \tag{3.1}$$

The product MM† is

$$MM^\dagger = \begin{pmatrix} a & b \\ c & d \end{pmatrix}\begin{pmatrix} a^* & c^* \\ b^* & d^* \end{pmatrix} = \begin{pmatrix} |a|^2 + |b|^2 & ac^* + bd^* \\ a^*c + b^*d & |c|^2 + |d|^2 \end{pmatrix}. \tag{3.2}$$

The opposite product is

$$M^\dagger M = \begin{pmatrix} a^* & c^* \\ b^* & d^* \end{pmatrix}\begin{pmatrix} a & b \\ c & d \end{pmatrix} = \begin{pmatrix} |a|^2 + |c|^2 & a^*b + c^*d \\ ab^* + cd^* & |b|^2 + |d|^2 \end{pmatrix}. \tag{3.3}$$

Then, for the matrix M to be normal, we must require that every element of MM† and M†M is equal. Therefore, we must enforce that

$$|b|^2 = |c|^2, \qquad ac^* + bd^* = a^*b + c^*d. \tag{3.4}$$

(b) A general upper triangular matrix takes the form

$$M = \begin{pmatrix} M_{11} & M_{12} & M_{13} & \cdots \\ 0 & M_{22} & M_{23} & \cdots \\ 0 & 0 & M_{33} & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{pmatrix}. \tag{3.5}$$

Its Hermitian conjugate is

$$M^\dagger = \begin{pmatrix} M_{11}^* & 0 & 0 & \cdots \\ M_{12}^* & M_{22}^* & 0 & \cdots \\ M_{13}^* & M_{23}^* & M_{33}^* & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{pmatrix}. \tag{3.6}$$

To show that this matrix is not normal, we will just focus on the 11 entry of the products with its Hermitian conjugate. Note that

$$\left(MM^\dagger\right)_{11} = |M_{11}|^2 + |M_{12}|^2 + |M_{13}|^2 + \cdots. \tag{3.7}$$

By contrast, the product of the matrices in the other order has

$$\left(M^\dagger M\right)_{11} = |M_{11}|^2. \tag{3.8}$$

Therefore, for a general upper triangular matrix it is not true that

$$|M_{11}|^2 = |M_{11}|^2 + |M_{12}|^2 + |M_{13}|^2 + \cdots, \tag{3.9}$$

and so upper triangular matrices are not normal (unless the off-diagonal elements all vanish).

3.2 (a) We want to calculate the exponentiated matrix

$$A = e^{i\phi\sigma_3} = \sum_{n=0}^{\infty}\frac{(i\phi)^n}{n!}\sigma_3^n. \tag{3.10}$$

This sum can be split into even and odd components, noting that

$$\sigma_3^2 = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} = I. \tag{3.11}$$

Thus, we have

$$\sum_{n=0}^{\infty}\frac{(i\phi)^n}{n!}\sigma_3^n = I\sum_{n=0}^{\infty}\frac{(i\phi)^{2n}}{(2n)!} + \sigma_3\sum_{n=0}^{\infty}\frac{(i\phi)^{2n+1}}{(2n+1)!} = I\cos\phi + i\sigma_3\sin\phi = \begin{pmatrix} \cos\phi + i\sin\phi & 0 \\ 0 & \cos\phi - i\sin\phi \end{pmatrix} = \begin{pmatrix} e^{i\phi} & 0 \\ 0 & e^{-i\phi} \end{pmatrix}. \tag{3.12}$$

(b) Now, we are asked to evaluate the exponentiated matrix

$$B = e^{i\phi(\sigma_1+\sigma_3)} = \sum_{n=0}^{\infty}\frac{(i\phi)^n}{n!}(\sigma_1+\sigma_3)^n. \tag{3.13}$$

Note that

$$(\sigma_1+\sigma_3)^2 = \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix} = \begin{pmatrix} 2 & 0 \\ 0 & 2 \end{pmatrix} = 2I. \tag{3.14}$$

Therefore, this sum again splits into even and odd parts, where

$$\sum_{n=0}^{\infty}\frac{(i\phi)^n}{n!}(\sigma_1+\sigma_3)^n = I\sum_{n=0}^{\infty}\frac{(i\phi)^{2n}2^n}{(2n)!} + (\sigma_1+\sigma_3)\sum_{n=0}^{\infty}\frac{(i\phi)^{2n+1}2^n}{(2n+1)!} = I\sum_{n=0}^{\infty}\frac{(i\sqrt2\,\phi)^{2n}}{(2n)!} + \frac{\sigma_1+\sigma_3}{\sqrt2}\sum_{n=0}^{\infty}\frac{(i\sqrt2\,\phi)^{2n+1}}{(2n+1)!} = I\cos\bigl(\sqrt2\,\phi\bigr) + i\,\frac{\sigma_1+\sigma_3}{\sqrt2}\sin\bigl(\sqrt2\,\phi\bigr). \tag{3.15}$$

This can be written in the usual matrix form, but we won't do that here.

3.3 (a) To verify that the rotation matrix is unitary, we just multiply it with its Hermitian conjugate:

$$MM^\dagger = \begin{pmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{pmatrix}\begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix} = \begin{pmatrix} \cos^2\theta + \sin^2\theta & -\cos\theta\sin\theta + \cos\theta\sin\theta \\ -\cos\theta\sin\theta + \cos\theta\sin\theta & \cos^2\theta + \sin^2\theta \end{pmatrix} = I, \tag{3.16}$$

and so M is indeed unitary.

(b) We are now asked to determine which of the Pauli matrices is exponentiated to generate this rotation matrix. First, note that each element of this rotation matrix is real, but the form of the exponent is iθσ_j, which is in general complex. To ensure that exclusively real-valued matrix elements are produced, the Pauli matrix must have only imaginary elements, and there is only one Pauli matrix for which that is true:

$$\sigma_2 = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}. \tag{3.17}$$

Then, its exponentiation is

$$e^{i\theta\sigma_2} = \sum_{n=0}^{\infty}\frac{(i\theta)^n}{n!}\sigma_2^n. \tag{3.18}$$

Note that the square of this Pauli matrix is

$$\sigma_2^2 = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}\begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} = I. \tag{3.19}$$

Then, the sum splits up into even and odd parts:

$$\sum_{n=0}^{\infty}\frac{(i\theta)^n}{n!}\sigma_2^n = I\sum_{n=0}^{\infty}\frac{(i\theta)^{2n}}{(2n)!} + \sigma_2\sum_{n=0}^{\infty}\frac{(i\theta)^{2n+1}}{(2n+1)!} = I\cos\theta + i\sigma_2\sin\theta = \begin{pmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{pmatrix}. \tag{3.20}$$

3.4 (a) The Hermitian conjugate of the vector $\vec u$ is

$$\vec u^\dagger = \bigl(e^{-i\phi_1}\cos u \;\; e^{-i\phi_2}\sin u\bigr). \tag{3.21}$$

Then, the inner product of $\vec u$ with itself is

$$\vec u^\dagger\vec u = \bigl(e^{-i\phi_1}\cos u \;\; e^{-i\phi_2}\sin u\bigr)\begin{pmatrix} e^{i\phi_1}\cos u \\ e^{i\phi_2}\sin u \end{pmatrix} = \cos^2 u + \sin^2 u = 1, \tag{3.22}$$

and so $\vec u$ is normalized. The same calculation follows for $\vec v$ with appropriate relabeling.

(b) Now, we would like to determine the unitary matrix U that maps $\vec u$ to $\vec v$. With the hint in the problem, we write

$$\vec v = U\vec u \quad\Rightarrow\quad \begin{pmatrix} e^{i\theta_1}\cos v \\ e^{i\theta_2}\sin v \end{pmatrix} = \begin{pmatrix} a & b \\ c & d \end{pmatrix}\begin{pmatrix} e^{i\phi_1}\cos u \\ e^{i\phi_2}\sin u \end{pmatrix} = \begin{pmatrix} a\,e^{i\phi_1}\cos u + b\,e^{i\phi_2}\sin u \\ c\,e^{i\phi_1}\cos u + d\,e^{i\phi_2}\sin u \end{pmatrix}. \tag{3.23}$$

Let's focus on the first entry. For this to equal the first entry of $\vec v$, we need the exponential phase factors to reduce to $e^{i\theta_1}$. So, we can set

$$a = e^{i(\theta_1-\phi_1)}|a|, \qquad b = e^{i(\theta_1-\phi_2)}|b|. \tag{3.24}$$

The exact same idea follows for the second entries, and so

$$c = e^{i(\theta_2-\phi_1)}|c|, \qquad d = e^{i(\theta_2-\phi_2)}|d|. \tag{3.25}$$

Inserting these expressions into the matrix product above, we then find the reduced linear equation

$$\begin{pmatrix} \cos v \\ \sin v \end{pmatrix} = \begin{pmatrix} |a|\cos u + |b|\sin u \\ |c|\cos u + |d|\sin u \end{pmatrix}. \tag{3.26}$$

This form might be recognizable from an angle-addition formula. For example, note that

$$\cos v = \cos(u+v)\cos u + \sin(u+v)\sin u. \tag{3.27}$$

Then, we can set

$$a = e^{i(\theta_1-\phi_1)}\cos(u+v), \qquad b = e^{i(\theta_1-\phi_2)}\sin(u+v). \tag{3.28}$$

Note also that

$$\sin v = \sin(u+v)\cos u - \cos(u+v)\sin u. \tag{3.29}$$

Then,

$$c = e^{i(\theta_2-\phi_1)}\sin(u+v), \qquad d = -e^{i(\theta_2-\phi_2)}\cos(u+v). \tag{3.30}$$

Putting it all together, the unitary matrix is

$$U = \begin{pmatrix} e^{i(\theta_1-\phi_1)}\cos(u+v) & e^{i(\theta_1-\phi_2)}\sin(u+v) \\ e^{i(\theta_2-\phi_1)}\sin(u+v) & -e^{i(\theta_2-\phi_2)}\cos(u+v) \end{pmatrix}. \tag{3.31}$$
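Before moving on, the claimed properties of this U can be checked with concrete numbers. The sample angles below are my own choices; this is a sketch, not part of the text.

```python
# With sample angles, check that U from (3.31) is unitary and maps the
# u-vector to the v-vector, U u = v.
import cmath, math

u, v = 0.3, 1.1
phi1, phi2, th1, th2 = 0.2, -0.7, 0.5, 1.3

U = [[cmath.exp(1j * (th1 - phi1)) * math.cos(u + v),
      cmath.exp(1j * (th1 - phi2)) * math.sin(u + v)],
     [cmath.exp(1j * (th2 - phi1)) * math.sin(u + v),
      -cmath.exp(1j * (th2 - phi2)) * math.cos(u + v)]]
uvec = [cmath.exp(1j * phi1) * math.cos(u), cmath.exp(1j * phi2) * math.sin(u)]
vvec = [cmath.exp(1j * th1) * math.cos(v), cmath.exp(1j * th2) * math.sin(v)]

# U u = v
for i in range(2):
    got = sum(U[i][j] * uvec[j] for j in range(2))
    assert abs(got - vvec[i]) < 1e-12

# U U^dagger = I
for i in range(2):
    for k in range(2):
        entry = sum(U[i][j] * U[k][j].conjugate() for j in range(2))
        assert abs(entry - (1 if i == k else 0)) < 1e-12
```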

(c) The determinant of this matrix is

$$\det U = \det\begin{pmatrix} e^{i(\theta_1-\phi_1)}\cos(u+v) & e^{i(\theta_1-\phi_2)}\sin(u+v) \\ e^{i(\theta_2-\phi_1)}\sin(u+v) & -e^{i(\theta_2-\phi_2)}\cos(u+v) \end{pmatrix} = -e^{i(\theta_1+\theta_2-\phi_1-\phi_2)}\bigl(\cos^2(u+v) + \sin^2(u+v)\bigr) = -e^{i(\theta_1+\theta_2-\phi_1-\phi_2)}. \tag{3.32}$$

This can never equal 0, and so the matrix U is non-degenerate.

3.5 By definition of living on the Hilbert space, both kets are normalized:

$$\langle v|v\rangle = 1, \qquad \langle u|u\rangle = 1. \tag{3.33}$$

Thus, the simplest operator that maps |u⟩ to |v⟩ is

$$U_{uv} = |v\rangle\langle u|. \tag{3.34}$$

Then, we have that

$$U_{uv}|u\rangle = |v\rangle\langle u|u\rangle = |v\rangle, \tag{3.35}$$

by the normalization of the kets. Note, however, that this operator is not unitary, for

$$U_{uv}U_{uv}^\dagger = |v\rangle\langle u|u\rangle\langle v| = |v\rangle\langle v| \neq I. \tag{3.36}$$

However, the fix to make the operator unitary is pretty simple: we just keep adding outer products of vectors that are mutually orthogonal and orthogonal to |u⟩ until they form a complete basis of the Hilbert space. Except for the fact that this basis includes |u⟩, it is otherwise very weakly constrained, so we won't write down an explicit form.

3.6 (a) The sum of the outer products of these vectors is

$$|v_1\rangle\langle v_1| + |v_2\rangle\langle v_2| = \begin{pmatrix} 1 \\ 0 \end{pmatrix}\bigl(1\;\;0\bigr) + \begin{pmatrix} e^{i\phi_1}\sin\theta \\ e^{i\phi_2}\cos\theta \end{pmatrix}\bigl(e^{-i\phi_1}\sin\theta\;\;e^{-i\phi_2}\cos\theta\bigr) = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} + \begin{pmatrix} \sin^2\theta & e^{i(\phi_1-\phi_2)}\sin\theta\cos\theta \\ e^{-i(\phi_1-\phi_2)}\sin\theta\cos\theta & \cos^2\theta \end{pmatrix}. \tag{3.37}$$

This clearly does not equal the identity matrix, and therefore does not satisfy the completeness relation, for general θ. However, it does equal the identity matrix if θ = 0, for which the off-diagonal entries vanish, and so there the result does not depend on the phases φ₁, φ₂.

(b) Recall that the first three Legendre polynomials are

$$P_0(x) = \frac{1}{\sqrt2}, \qquad P_1(x) = \sqrt{\frac32}\,x, \qquad P_2(x) = \sqrt{\frac58}\,(3x^2 - 1). \tag{3.38–3.40}$$

Then, the outer product of Legendre polynomials is

$$P_0(x)P_0(y) + P_1(x)P_1(y) + P_2(x)P_2(y) = \frac12 + \frac32\,xy + \frac58\,(3x^2-1)(3y^2-1). \tag{3.41}$$

(c) Now, we want to integrate this "identity matrix" against an arbitrary quadratic polynomial. What this will do is effectively take the matrix or dot product of the polynomial (as a vector) with the identity matrix, and so it should return exactly the polynomial. Let's see if this is true. To do this we need the integrals

$$\int_{-1}^1 dx\,(ax^2+bx+c)\,\frac12 = \frac{a}{3} + c, \tag{3.42}$$

$$\int_{-1}^1 dx\,(ax^2+bx+c)\,\frac32\,xy = by, \tag{3.43}$$

$$\int_{-1}^1 dx\,(ax^2+bx+c)\,\frac58\,(3x^2-1)(3y^2-1) = -\frac{a}{3} + ay^2. \tag{3.44}$$

Then, the sum of these terms is

$$\int_{-1}^1 dx\,(ax^2+bx+c)\bigl[P_0(x)P_0(y) + P_1(x)P_1(y) + P_2(x)P_2(y)\bigr] = ay^2 + by + c = p(y). \tag{3.45}$$

And so, indeed, the outer product of Legendre polynomials $P_0(x)P_0(y) + P_1(x)P_1(y) + P_2(x)P_2(y)$ acts as an identity matrix.

3.7 (a) We are asked to calculate the expectation value of the Hamiltonian Ĥ on the time-dependent state. First, the time-dependent state is

$$|\psi(t)\rangle = \alpha_1 e^{-i\frac{E_1 t}{\hbar}}|1\rangle + \alpha_2 e^{-i\frac{E_2 t}{\hbar}}|2\rangle. \tag{3.46}$$

Then, the action of the Hamiltonian on this linear combination of energy eigenstates is

$$\hat H|\psi(t)\rangle = \alpha_1 E_1 e^{-i\frac{E_1 t}{\hbar}}|1\rangle + \alpha_2 E_2 e^{-i\frac{E_2 t}{\hbar}}|2\rangle, \tag{3.47}$$

because Ĥ|1⟩ = E₁|1⟩, for example. Then, assuming orthonormality of the energy eigenstates, the expectation value is

$$\langle\psi(t)|\hat H|\psi(t)\rangle = \left(\alpha_1^* e^{i\frac{E_1 t}{\hbar}}\langle 1| + \alpha_2^* e^{i\frac{E_2 t}{\hbar}}\langle 2|\right)\left(\alpha_1 E_1 e^{-i\frac{E_1 t}{\hbar}}|1\rangle + \alpha_2 E_2 e^{-i\frac{E_2 t}{\hbar}}|2\rangle\right) = |\alpha_1|^2 E_1 + |\alpha_2|^2 E_2. \tag{3.48}$$

This is independent of time.

(b) In complete generality, we can express Ô as a linear combination of outer products:

$$\hat O = a|1\rangle\langle 1| + b|1\rangle\langle 2| + c|2\rangle\langle 1| + d|2\rangle\langle 2|, \tag{3.49}$$

for some complex numbers a, b, c, d. In this form, we can act it on |1⟩ to find

$$\hat O|1\rangle = a|1\rangle + c|2\rangle = |1\rangle - |2\rangle, \tag{3.50}$$

using orthonormality and the constraint provided in the problem. It is then clear that a = 1, c = −1. Instead, acting on |2⟩, we have

$$\hat O|2\rangle = b|1\rangle + d|2\rangle = -|1\rangle + |2\rangle. \tag{3.51}$$

We then see that b = −1, d = 1. It then follows that the operator is

$$\hat O = |1\rangle\langle 1| - |1\rangle\langle 2| - |2\rangle\langle 1| + |2\rangle\langle 2| = \begin{pmatrix} 1 & -1 \\ -1 & 1 \end{pmatrix}. \tag{3.52}$$

This is indeed a Hermitian operator, as all elements are real and the matrix is symmetric.

(c) Now, we want to determine the expectation value of the operator Ô on the time-dependent state. First, note that

$$\hat O|\psi(t)\rangle = \hat O\left(\alpha_1 e^{-i\frac{E_1 t}{\hbar}}|1\rangle + \alpha_2 e^{-i\frac{E_2 t}{\hbar}}|2\rangle\right) = \alpha_1 e^{-i\frac{E_1 t}{\hbar}}(|1\rangle - |2\rangle) + \alpha_2 e^{-i\frac{E_2 t}{\hbar}}(-|1\rangle + |2\rangle) = \left(\alpha_1 e^{-i\frac{E_1 t}{\hbar}} - \alpha_2 e^{-i\frac{E_2 t}{\hbar}}\right)|1\rangle + \left(\alpha_2 e^{-i\frac{E_2 t}{\hbar}} - \alpha_1 e^{-i\frac{E_1 t}{\hbar}}\right)|2\rangle. \tag{3.53}$$

It then follows that the expectation value is

$$\langle\psi(t)|\hat O|\psi(t)\rangle = \alpha_1^* e^{i\frac{E_1 t}{\hbar}}\left(\alpha_1 e^{-i\frac{E_1 t}{\hbar}} - \alpha_2 e^{-i\frac{E_2 t}{\hbar}}\right) + \alpha_2^* e^{i\frac{E_2 t}{\hbar}}\left(\alpha_2 e^{-i\frac{E_2 t}{\hbar}} - \alpha_1 e^{-i\frac{E_1 t}{\hbar}}\right) = |\alpha_1|^2 + |\alpha_2|^2 - \alpha_1^*\alpha_2\, e^{i\frac{(E_1-E_2)t}{\hbar}} - \alpha_1\alpha_2^*\, e^{-i\frac{(E_1-E_2)t}{\hbar}}. \tag{3.54}$$

By contrast to the expectation value of the Hamiltonian, the expectation value of Ô does depend on time. If the coefficients α₁, α₂ are real, then the expectation value simplifies to

$$\langle\psi(t)|\hat O|\psi(t)\rangle = |\alpha_1|^2 + |\alpha_2|^2 - \alpha_1\alpha_2\left(e^{i\frac{(E_1-E_2)t}{\hbar}} + e^{-i\frac{(E_1-E_2)t}{\hbar}}\right) = |\alpha_1|^2 + |\alpha_2|^2 - 2\alpha_1\alpha_2\cos\left(\frac{(E_1-E_2)t}{\hbar}\right). \tag{3.55}$$

3.8 (a) To determine the electron neutrino state after an elapsed time T, we just augment its expression with the time-evolution phase factors:

$$|\nu_e(T)\rangle = \cos\theta\, e^{-i\frac{E_1 T}{\hbar}}|\nu_1\rangle + \sin\theta\, e^{-i\frac{E_2 T}{\hbar}}|\nu_2\rangle. \tag{3.56}$$

(b) We can isolate the probability for measuring a muon neutrino by taking the inner product with the time-evolved electron neutrino state. The probability amplitude is

$$\langle\nu_\mu|\nu_e(T)\rangle = \bigl(-\sin\theta\,\langle\nu_1| + \cos\theta\,\langle\nu_2|\bigr)\left(\cos\theta\, e^{-i\frac{E_1 T}{\hbar}}|\nu_1\rangle + \sin\theta\, e^{-i\frac{E_2 T}{\hbar}}|\nu_2\rangle\right) = \sin\theta\cos\theta\left(e^{-i\frac{E_2 T}{\hbar}} - e^{-i\frac{E_1 T}{\hbar}}\right) \tag{3.57}$$

$$= e^{-i\frac{(E_1+E_2)T}{2\hbar}}\sin\theta\cos\theta\left(e^{i\frac{(E_1-E_2)T}{2\hbar}} - e^{-i\frac{(E_1-E_2)T}{2\hbar}}\right) = i\,e^{-i\frac{(E_1+E_2)T}{2\hbar}}\sin(2\theta)\sin\frac{(E_1-E_2)T}{2\hbar}.$$

Then, the probability for an electron neutrino to transform into a muon neutrino after time T is

$$P_\mu = |\langle\nu_\mu|\nu_e(T)\rangle|^2 = \sin^2(2\theta)\sin^2\frac{(E_1-E_2)T}{2\hbar}. \tag{3.58}$$

By unitarity of time evolution, the electron neutrino state stays normalized for all time, so the probability that the electron neutrino is observed as an electron neutrino is just 1 minus this:

$$P_e = 1 - P_\mu = 1 - \sin^2(2\theta)\sin^2\frac{(E_1-E_2)T}{2\hbar}. \tag{3.59}$$

(c) This is called neutrino oscillation because the probability to observe neutrinos of different flavors oscillates with the time over which the initial neutrino is allowed to evolve.
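The closed-form oscillation probability can be checked against the raw amplitude with sample numbers. This sketch is mine (mixing angle, energies, time, and ħ = 1 are arbitrary sample choices, not values from the text).

```python
# Compare |<nu_mu | nu_e(T)>|^2 computed from the amplitude with the
# closed form sin^2(2 theta) sin^2((E1 - E2) T / 2), using hbar = 1.
import cmath, math

theta, E1, E2, T = 0.6, 1.0, 2.5, 3.0

# amplitude sin(theta) cos(theta) (e^{-i E2 T} - e^{-i E1 T}), as in (3.57)
amp = math.sin(theta) * math.cos(theta) * (cmath.exp(-1j * E2 * T) - cmath.exp(-1j * E1 * T))
P_mu = abs(amp) ** 2
closed = math.sin(2 * theta) ** 2 * math.sin((E1 - E2) * T / 2) ** 2
assert abs(P_mu - closed) < 1e-12
assert 0.0 <= P_mu <= 1.0   # and P_e = 1 - P_mu stays in [0, 1] as well
```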

Axioms of Quantum Mechanics and Their Consequences

4

Exercises 4.1

This question has numerous interpretations. First, the variance of a nonHermitian operator isn’t even well-defined. To be interpretable as a property of a probability distribution, the variance should be a real number, but even the expecˆ is not real. So, in some sense this is tation value of a non-Hermitian operator ⟨C⟩ a non-starter because eigenvalues of non-Hermitian operators do not correspond to physical observables. However, a more useful way to interpret the uncertainty principle is right-toleft, as a consequence of a non-trivial commutation relation. If two operators Cˆ and Dˆ have a non-zero commutation relation, this means that it is impossible for them to be simultaneously diagonalizable. So, the eigenstates of Cˆ and Dˆ must be different if their commutation relation is non-zero. Correspondingly, if their eigenspace is distinct, then an eigenstate of one operator is necessarily a non-trivial linear combination of the others eigenstates. So there is still an “uncerˆ A tainty” in the decomposition of any vector in terms of eigenstates of Cˆ and D. probabilistic interpretation is lost (or at the very least not obvious), but a lack of commutation means that there is no basis in which both operators take a simple form.

4.2

(a)

Recall that the 3 × 3 derivative matrix we constructed earlier is 

0

1 D =  − 2∆x 0

1 2∆x

0 1 − 2∆x



0 1 2∆x

.

(4.1)

0

Then, because the momentum operator is related to the derivative operator as P = −i¯hD, we find 

0

1 P = −i¯h  − 2∆x 0

1 2∆x

0 1 − 2∆x

0 1 2∆x

 .

(4.2)

0

(b) The position operator X has eigenvalues of the positions on the grid and in position space, those are its diagonal entries: 22

Exercises

23



(c)

 0 0 . x2

0 x1 0

x0 X= 0 0

(4.3)

Now, let’s calculate the commutator of the position and momentum operators on this grid. First, we have the product 

x0 XP = −i¯h  0 0 

0 x1 0

0 x1 = −i¯h  − 2∆x 0

 0 0 1 0   − 2∆x x2 0  x0 0 2∆x x1  . 0 2∆x x2 0 − 2∆x

1 2∆x

0

0 1 − 2∆x

 

(4.4)

 0 0  x2

(4.5)

1 2∆x

0

The other order of the product is 

1 2∆x

0

1 PX = −i¯h  − 2∆x 0  0 x0 = −i¯h  − 2∆x 0

0



x0 1  0 2∆x 0 0  0 x2  . 2∆x 0

0 1 − 2∆x x1 2∆x

0 x1 − 2∆x

0 x1 0

Their commutator is therefore 

0

x1 [X, P] = −i¯h  − 2∆x 0  0 = i¯h  x1 −x0 2∆x

0

x0 2∆x

0 x2 − 2∆x x1 −x0 2∆x

0

x2 −x1 2∆x



0



0

 + i¯h  − x0 2∆x 0 0  0 x2 −x1  . 2∆x 0 x1 2∆x

x1 2∆x

0 x1 − 2∆x

0 x2 2∆x

 

0 (4.6)

Now, on the grid, note that x1 − x0 = x2 − x1 = ∆x, so this simplifies to  [X, P] = i¯h 

0 1 2

0

1 2

0 1 2

0 1 2

 .

(4.7)

0

This is a bit strange from the point of view of the canonical commutation relation, because we would expect that their commutator is [X, P] = i¯hI, proportional to the identity matrix. This is close, but this can make more sense when taking a limit of ∆x → 0.

4 Axioms of Quantum Mechanics and Their Consequences

24

(d) As ∆x → 0, these matrices and the corresponding commutator grow larger and larger, and we would anticipate that it takes the form   . . . .. . .. .. .. · · ·    ··· 0 1 0 ···  2   1 1  [X, P] = i¯h  (4.8)  ··· 2 0 2 ···  .  ··· 0 1 0 ···    2 .. .. .. .. . . . . . . .

4.5

(a)

As ∆x decreases, there are never any non-zero entries on the diagonal, but immediately off the diagonal are factors of 1/2. Relative to the size of the matrix, these factors of 1/2 get closer and closer to being on the diagonal. Continuous-dimensional spaces like position space are a bit strange because infinitesimals can be defined, and so as the grid spacing ∆x → 0, we expect that these off-diagonal entries merge and sum into the identity matrix. For the exponential to make sense, the quantity in the exponent must be unitless. Therefore, the units of the prefactor for the position operator must have dimensions of inverse length: "r # 2π c = L−1 . (4.9) h¯ Rearranging, the units of c are then [c] = [¯h]L−2 = MT −1 ,

(4.10)

that is, mass per unit time. Therefore, c quantifies a flow or current of mass of the system, in a similar way that electric current quantifies the flow of electric charge per unit time. ˆ we need to (b) To calculate the commutation relation of the operators Xˆ and P, use a test function f (x) to ensure that the manipulations we perform use the linearity of the derivative. We then have  q  q 2π c 2π d ˆ P] ˆ f (x) = ei h¯ x , e c¯h h¯ dx f (x) (4.11) [X, q i

=e

q i

=e

2π c x h¯ 2π c x h¯

q

e f

q

2π h d c¯h ¯ dx

2π h d

¯

q i

2π c x

f (x) − e c¯h dx e h¯ f (x)  q  ! q r i 2πh¯ c x+ 2πc h¯ 2π h¯ x+ −e f c

r x+

2π h¯ c

! .

On the second line, we have expressed the momentum and position operators in position space, and then on the third line, note that the exponentiated derivative operator is the translation operator. Further, note that the second remaining exponential factor simplifies: q i

e

2π c h¯

 q  x+ 2πc h¯

q i

=e

2π c x+2π i h¯

q i

=e

2π c x h¯

,

(4.12)

Exercises

25

because e2π i = 1. Therefore, these operators actually commute: ! ! r r q q 2π c x 2π c x 2 π h 2 π h ¯ ¯ i i ˆ P] ˆ f (x) = e h¯ f x + [X, − e h¯ f x + = 0 . (4.13) c c

(c)

If two operators commute, this means that they can be simultaneously diagonalized and have the same eigenstates. By the periodicity of the imaginary exponential, we have that, for example, the two operators r r 2π c 2π c xˆ , xˆ + 2π (4.14) h¯ h¯ ˆ The range of xˆ that produces unique values produce the same operator X. ˆ for X is then r 2π h¯ 0 ≤ xˆ < , (4.15) c where this inequality is understood as a relationship of eigenvalues. A similar relationship can be established for momentum p, ˆ where the unique range of eigenvalues is √ 0 ≤ pˆ < 2π c¯h . (4.16)

(d) The eigenvalue equation for the exponentiated momentum operator is ! r q 2π h d 2 π h ¯ ¯ (4.17) . Pˆ fλ (x) = λ fλ (x) = e c¯h dx fλ (x) = f x + c¯h The only way that a translated function can equal itself up to a multiplicative factor is if that function is periodic, and so can be expressed as a complex exponential. Let’s write fλ (x) = eibx ,

(4.18)

for some real constant b. This must satisfy  q  ib x+ 2πc h¯

λ eibx = e or that

q ib

λ =e

2π h¯ c

.

,

(4.19)

(4.20)

Through the property of periodicity of the imaginary exponential, we can limit b in the range r 2π c 0≤b< . (4.21) h¯ That is, eigenvalues of the operator Pˆ lie on the unit circle, as expected because Pˆ is a unitary operator.

4 Axioms of Quantum Mechanics and Their Consequences

26

ˆ P] ˆ = 0, the eigenstates of Xˆ are identical to the Because the commutator [X, ˆ eigenstates of P. (f) This may seem like we have created position and momentum operators that commute, and therefore have somehow skirted the Heisenberg uncertainty principle. However, as we observed in part (c), the fact that these exponentiated position and momentum operators are unitary means that in each the position or momentum is only defined as modulo 2π . Therefore, the uncertainty has migrated from the non-zero commutator into an uncertainty in ˆ for example, does not the operators themselves. A given eigenvalue for X, define a unique position, and in fact is consistent with a countable infinity of positions! First, let’s write down the relevant Taylor expansions for the exponentiated matrices, to third order. We have (e)

4.4

Aˆ 2 Aˆ 3 −i +··· , 2 6 3 2 ˆ ˆ B B ˆ −i +··· , eiB = I + iBˆ − 2 6 ˆ ˆ 2 ˆ ˆ 3 ( A ˆ B) ˆ i(A+ ˆ − + B) − i (A + B) + · · · . e = I + i(Aˆ + B) 2 6 ˆ

eiA = I + iAˆ −

(4.22) (4.23) (4.24)

ˆ the product of the exponentials is Then, to third order in the matrices Aˆ and B,    Aˆ 3 Aˆ 2 Bˆ 2 Bˆ 3 ˆ ˆ −i +··· I + iBˆ − −i +··· eiA eiB = I + iAˆ − (4.25) 2 6 2 6 ˆ2 ˆ3 ˆ2 ˆ3 ˆ − A − B − Aˆ Bˆ − i A − i B = I + i(Aˆ + B) 2 2 6 6 2 2 ˆ ˆ ˆ ˆ AB BA −i −i +··· . 2 2 Now, we can associate terms and complete squares and cubes. Note that −

Aˆ 2 Bˆ 2 ˆ ˆ Aˆ 2 Bˆ 2 Aˆ Bˆ Bˆ Aˆ Aˆ Bˆ Bˆ Aˆ − − AB = − − − − − + 2 2 2 2 2 2 2 2 ˆ 2 1 (Aˆ + B) ˆ B] ˆ . =− − [A, 2 2

(4.26)

Also, note that the expansion of the cubic binomial is ˆ 3 = Aˆ 3 + Bˆ 3 + Aˆ Bˆ 2 + Bˆ Aˆ 2 + Aˆ Bˆ Aˆ + Bˆ Aˆ Bˆ + Aˆ 2 Bˆ + Bˆ 2 Aˆ , (Aˆ + B)

(4.27)

as always being careful with the order of multiplication. With these results, the difference between the exponential matrices is ˆ

ˆ

ˆ

ˆ

eiA eiB − ei(A+B) = − −

ˆ B] ˆ [A, 2

 i 2Aˆ Bˆ 2 + 2Bˆ Aˆ 2 − Aˆ Bˆ Aˆ − Bˆ Aˆ Bˆ − Aˆ 2 Bˆ − Bˆ 2 Aˆ + · · · 6

(4.28)

Exercises

27

The terms in the parentheses at cubic order can be re-associated into 2Aˆ Bˆ 2 + 2Bˆ Aˆ 2 − Aˆ Bˆ Aˆ − Bˆ Aˆ Bˆ − Aˆ 2 Bˆ − Bˆ 2 Aˆ ˆ B] ˆ B] ˆ Bˆ 2 ] − [Aˆ 2 , B] ˆ Bˆ − [A, ˆ Aˆ + [A, ˆ . = [A,

(4.29)

Thus, at least through cubic order in the Taylor expansion, all of the terms that differ between the product of exponentiated matrices to the exponential of the sum of the matrices can be expressed in terms of commutators of matrices Aˆ and ˆ B] ˆ Note that if [A, ˆ = 0, then all of these residual differences vanish. B. 4.5 (a) If the operator Oˆ acts as ˆ O|n⟩ = |n + 1⟩ ,

(4.30)

then it can be expressed in outer product form as Oˆ =





|n + 1⟩⟨n| .

(4.31)

|n⟩⟨n + 1| .

(4.32)

n=−∞

Its Hermitian conjugate is then Oˆ † =





n=−∞

The product of these two operators is then ∞



∑ ∑

Oˆ † Oˆ =

m=−∞ n=−∞ ∞

=





|m⟩⟨m + 1|n + 1⟩⟨n| =



∑ ∑

|m⟩⟨n| δmn

m=−∞ n=−∞

|n⟩⟨n| .

(4.33)

n=−∞

By the assumed completeness of the winding number states, this final outer product is the identity matrix. Hence, Oˆ † Oˆ = I, which indeed proves that Oˆ is unitary. (b) As a unitary operator, the eigenvalue equation for Oˆ is ˆ ψ ⟩ = eiθ |ψ ⟩ . O|

(4.34)

We can express the eigenstate as a linear combination of winding states: ∞

|ψ ⟩ =



βn |n⟩ ,

(4.35)

n=−∞

and on them the hopping operator acts as ˆ ψ⟩ = O|





ˆ βn O|n⟩ =





βn |n + 1⟩ =

n=−∞

n=−∞





βn−1 |n⟩ .

(4.36)

n=−∞

Then, the eigenvalue equation becomes ∞



n=−∞

βn−1 |n⟩ =





n=−∞

βn eiθ |n⟩ .

(4.37)

4 Axioms of Quantum Mechanics and Their Consequences

28

For this to be an equality, by the orthogonality of the winding states, we must enforce that each coefficient is equal:

βn eiθ = βn−1 .

(4.38)

Up to an overall normalization, this is satisfied by the coefficients

βn = e−inθ .

(4.39)

Therefore, the eigenstate is ∞



|ψ ⟩ =

e−inθ |n⟩ .

(4.40)

n=−∞

(c)

The winding states are eigenstates of the Hamiltonian, so we can express the Hamiltonian in outer product form as ∞

Hˆ = ∑ |n|E0 |n⟩⟨n| .

(4.41)

−∞

With this form, we can just calculate the commutator directly. We have Hˆ Oˆ =





∑ ∑



m=−∞ n=−∞ ∞

=





∑ ∑

|m|E0 |m⟩⟨m|n + 1⟩⟨n| =

|m|E0 |m⟩⟨n| δm,n+1

m=−∞ n=−∞

|n + 1|E0 |n + 1⟩⟨n| .

(4.42)

n=−∞

The opposite order of the product is Oˆ Hˆ =





∑ ∑

m=−∞ n=−∞ ∞

=







∑ ∑

|n|E0 |m + 1⟩⟨m|n⟩⟨n| =

|n|E0 |m + 1⟩⟨n| δmn

m=−∞ n=−∞

|n|E0 |n + 1⟩⟨n| .

(4.43)

n=−∞

The difference between these operator products is ˆ = ˆ O] [H,





(|n + 1| − |n|) E0 |n + 1⟩⟨n|

(4.44)

n=−∞

= =



−1

n=0 ∞

n=−∞

−1

n=0

n=−∞

∑ (n + 1 − n) E0 |n + 1⟩⟨n| + ∑ ∑ E0 |n + 1⟩⟨n| − ∑

(−n − 1 + n) E0 |n + 1⟩⟨n|

E0 |n + 1⟩⟨n| .

Note that this is clearly non-zero. (d) Let’s just act the exponentiated Hamiltonian directly on the eigenstate of the hopping operator:

Exercises

29

ˆ Ht

e−i h¯ |ψ ⟩ =



e−inθ e−i h¯ |n⟩ =



e

n=−∞ ∞

=



ˆ Ht





  E t −in θ +sgn(n) h¯0

e−inθ e−i

|n|E0 t h¯

|n⟩

(4.45)

n=−∞

|n⟩ ,

n=−∞

where the sign function is sgn(n) = 4.6 (a)

n . |n|

(4.46)

First, let’s think about the Cauchy–Schwarz inequality again. The more mundane version of it is just a consequence of the properties of the vector dot product. For two real-valued vectors⃗u,⃗v, the only way that⃗u2⃗v 2 = (⃗u·⃗v)2 is if the vectors are proportional: ⃗v = α⃗u, for some constant α . Therefore, if the Cauchy–Schwarz inequality is saturated, the two vectors are proportional: (xˆ − ⟨x⟩)| ˆ ψ ⟩ = α ( pˆ − ⟨ p⟩)| ˆ ψ⟩ .

(4.47)

Using this, we can then further use the saturated Heisenberg uncertainty principle. The saturated uncertainty principle enforces the relationship that

σx2 σ p2 =

h¯ 2 . 4

(4.48)

Next, the variance in position, say, is just the square of the vector in the Cauchy–Schwarz inequality:

σx2 = ⟨ψ |(xˆ − ⟨x⟩) ˆ 2 |ψ ⟩ = |α |2 ⟨ψ |( pˆ − ⟨ p⟩) ˆ 2 |ψ ⟩ = |α |2 σ p2 .

(4.49)

With these two relationships, we can solve for both α in terms of σx . We have |α | =

2σx2 . h¯

(4.50)

ˆ p] ˆ = i¯h, we can replace the factor of i in α Finally, using the fact that [x, that was divided in the uncertainty principle (see Eq. 4.99 on page 74 of the textbook):

α =−

2iσx2 . h¯

(4.51)

Then, the vectors that saturate the Cauchy–Schwarz inequality satisfy (xˆ − ⟨x⟩)| ˆ ψ⟩ = −

2iσx2 ( pˆ − ⟨ p⟩)| ˆ ψ⟩ . h¯

(4.52)

(b) In position space, the relationship between the action of position and momentum on the saturated state is   2iσx2 d ˆ ψ (x) . (4.53) (x − ⟨x⟩) ˆ ψ (x) = − −i¯h − ⟨ p⟩ h¯ dx

4 Axioms of Quantum Mechanics and Their Consequences

30

This can be rearranged into a more familiar organization where   d ⟨ p⟩ ˆ x − ⟨x⟩ ˆ ψ (x) = 0 . −i + dx h¯ 2σx2

(4.54)

This is a linear, first-order, homogeneous differential equation whose solution takes an exponential form. We have (ignoring overall normalization)

ψ (x) = ei (c)

⟨ p⟩x ˆ h¯



e

(x−⟨x⟩) ˆ 2 4σx2

(4.55)

.

Now, we are asked to Fourier transform this solution to momentum space. One can put this expression into the usual Fourier integral, but we’ll do something different here. Using the fact that [x, ˆ p] ˆ = i¯h, we can instead express the position operator as a momentum derivative: xˆ = i¯h

d , d pˆ

(4.56)

and so the relationship between the vectors can be expressed in momentum space as   2iσx2 d i¯h − ⟨x⟩ ˆ ψ (p) = − (p − ⟨ p⟩) ˆ ψ (p) . (4.57) dp h¯ Because we are in momentum space, it makes more sense to use the momentum variance, so we replace

σx2 = finding

h¯ 2 , 4σ p2

(4.58)

  d i¯h i¯h ˆ ψ (p) . − ⟨x⟩ ˆ ψ (p) = − 2 (p − ⟨ p⟩) dp 2σ p

Again, putting everything to one side, we have ! d ⟨x⟩ ˆ p − ⟨ p⟩ ˆ ψ (p) = 0 . +i + dp h¯ 2σ p2

(4.59)

(4.60)

This is of essentially an identical form to the position space equation, so its solution is

ψ (p) = e−i 4.7

(a)

⟨x⟩p ˆ h¯



e

(p−⟨ p⟩) ˆ 2 4σ p2

.

(4.61)

If these two equations of motion are supposed to be equal, then we must have   dV (⟨x⟩) ˆ dV (x) ˆ = . (4.62) d xˆ d⟨x⟩ ˆ This is a particular instantiation of the relationship ⟨ f (x)⟩ ˆ = f (⟨x⟩) ˆ ,

(4.63)

Exercises

31

for some function f . This can only be satisfied for the expectation value on a general state if the function f is linear: (4.64)

f (x) = a + bx ,

for some constants a, b. This can be argued for a large class of functions by assuming it has a Taylor expansion about x = 0 and noting that it is not true in general that ⟨xˆn ⟩ = ⟨x⟩n ,

(4.65)

for n > 1. If this was true, then for example, the variance would always vanish. Thus, if the derivative of the potential must be linear, the potential must be quadratic, and so the potential takes the form V (x) ˆ = axˆ2 + bxˆ + c .

(4.66)

By an appropriate choice of coordinates and a zero energy point, this can always be reduced to the simple harmonic oscillator, which we will see in a couple of chapters. (b) For this power-law potential, the two derivatives are   dV (⟨x⟩) ˆ dV (x) ˆ 2n−1 = 2nk⟨x⟩ ˆ , = 2nk⟨xˆ2n−1 ⟩ . (4.67) d⟨x⟩ ˆ d xˆ Note that ⟨xˆ2n−1 ⟩ weights larger values of position more than does ⟨x⟩ ˆ 2n−1 and so we expect that   dV (x) ˆ ˆ dV (⟨x⟩) . (4.68) ≥ d⟨x⟩ d xˆ ˆ (c)

4.8 (a)

Now, we flip the interpretation and are asked to find the state |ψ ⟩ on which ⟨xˆ2n−1 ⟩ = ⟨x⟩ ˆ 2n−1 . This can only be true if there is a unique value of xˆ for which the state has a non-zero value; namely it is a position eigenstate. That is, |ψ ⟩ = |x⟩, which is also not in the Hilbert space. Therefore, for any state on the Hilbert space, it is actually impossible for ⟨xˆ2n−1 ⟩ = ⟨x⟩ ˆ 2n−1 . The integral of the exponential probability distribution is Z ∞

1=N

dx e−λ x =

0

N . λ

(4.69)

Therefore, the normalization constant N = λ . (b) The expectation value of x can be calculated through explicit integration, but we will do another approach here. Let’s instead take the derivative of the normalization of the probability distribution with respect to the parameter λ . We have d d 1=0= λ dλ dλ

Z ∞ 0

dx e−λ x =

Z ∞ 0

dx e−λ x − λ

Z ∞ 0

dx x e−λ x .

(4.70)

4 Axioms of Quantum Mechanics and Their Consequences

32

Note that the first integral is 1/λ and the second integral is exactly the expectation value ⟨x⟩. Therefore, we have that 0=

1 − ⟨x⟩ , λ

(4.71)

or that ⟨x⟩ = (c)

1 . λ

(4.72)

The second moment can be calculated in exactly the same way, just through taking the second derivative of the normalization integral. We have Z ∞

d2 d2 1=0= λ 2 dλ dλ 2 =−

Z ∞ 0

= −2

dx e−λ x =

0

dx x e−λ x −

Z ∞ 0

d dλ

Z ∞ 0

dx e−λ x −

dx x e−λ x + λ

Z ∞

d λ dλ

Z ∞

dx x e−λ x

0

dx x2 e−λ x

0

⟨x⟩ + ⟨x2 ⟩ . λ

(4.73)

Therefore, we find that ⟨x2 ⟩ =

2 . λ2

(4.74)

The variance of the exponential distribution is then

σ 2 = ⟨x2 ⟩ − ⟨x⟩2 =

1 , λ2

(4.75)

and then the standard deviation is

σ=

1 . λ

(4.76)

The area of the distribution within one standard deviation of the expectation value is

λ

Z

2 λ

0

dx e−λ x = −e−2 + 1 ≃ 0.864665 .

(4.77)

This is indeed greater than 1/2 and so most of the distribution is within 1 standard deviation. (d) Now, let’s calculate this generalized spread about the mean. We have ⟨(x − ⟨x⟩)n ⟩ = λ

Z ∞ 0

dx (x − ⟨x⟩)n e−λ x .

(4.78)

Let’s now make the change of variables y = x − ⟨x⟩, so that the integral becomes ⟨(x − ⟨x⟩)n ⟩ = λ

Z ∞

−⟨x⟩

dy yn e−λ (y+⟨x⟩) = λ e−1

Z ∞

− λ1

dy yn e−λ y .

(4.79)

Exercises

33

Now, using integration by parts, we have Z ∞ Z ∞ yn −λ y ∞ n n −λ y =− e dy y e + dy yn−1 e−λ y λ λ − λ1 λ −1/ −1/λ Z n ∞ n −n−1 e+ = (−1) λ dy yn−1 e−λ y . λ − λ1 That is, the nth moment is

(4.80)

Z

n −1 ∞ dy yn−1 e−λ y (4.81) λe λ − λ1 n = (−λ )−n + ⟨(x − ⟨x⟩)n−1 ⟩ . λ This defines a recursion relation that can be solved for any n knowing the boundary condition. Note that if n = 1, we have ⟨(x − ⟨x⟩)n ⟩ = (−λ )−n +

⟨(x − ⟨x⟩)1 ⟩ = 0 .

(4.82)

We can verify that this recursion relation is correct, or at least consistent, for n = 2. Then, ⟨(x − ⟨x⟩)2 ⟩ = σ 2 =

1 , λ2

(4.83)

and the recursion relation would predict ⟨(x − ⟨x⟩)2 ⟩ = (−λ )−2 + (e)

2 1 ⟨(x − ⟨x⟩)1 ⟩ = 2 , λ λ

(4.84)

exactly as expected. Finally, we are asked to calculate the median. The median is the point xmed where half of the integral lies to the left (and therefore half lies to the right): 1 =λ 2

Z x med 0

dx e−λ x = 1 − e−λ xmed .

(4.85)

We can then solve for xmed as log 2 . (4.86) λ Note that log 2 ≃ 0.693147, and so the median lies at a smaller value than the mean. xmed =

Quantum Mechanical Example: The Infinite Square Well

5

Exercises 5.1

(a)

We can write the state |ψ ⟩ as |ψ ⟩ =

|ψ1 ⟩ + 3|ψ2 ⟩ , α

(5.1)

for some normalization constant α . The absolute square of this state is 1 = ⟨ψ |ψ ⟩ =

⟨ψ1 | + 3⟨ψ2 | |ψ1 ⟩ + 3|ψ2 ⟩ ⟨ψ1 |ψ1 ⟩ + 9⟨ψ2 |ψ2 ⟩ 10 = = , α∗ α |α |2 |α |2 (5.2)

using the orthonormality of √the energy eigenstates of the infinite square well. Then, we can choose α = 10, and so the normalized state is |ψ ⟩ =

|ψ1 ⟩ + 3|ψ2 ⟩ √ . 10

(b) In position basis, the state takes the form r      1 2 πx 2π x ψ (x) = √ sin + 3 sin a a 10 a      1 2π x πx =√ + 3 sin . sin a a 5a

(5.3)

(5.4)

The expectation value of the position on this state is then Z a

dx ψ ∗ (x)x ψ (x) (5.5)           Z a 1 πx 2π x πx 2π x = dx sin + 3 sin x sin + 3 sin 5a 0 a a a a   1 16 =a − ≃ 0.3919a . 2 15π 2

⟨ψ |x| ˆ ψ⟩ =

(c)

0

The expectation value of the Hamiltonian is very easy to calculate on this state because it is expressed as a linear combination of energy eigenstates. We have 2 2 ˆ ˆ ψ ⟩ = (⟨ψ1 | + 3⟨ψ2 |) H (|ψ1 ⟩ + 3|ψ2 ⟩) = E1 + 9E2 = 37 π h¯ . (5.6) ⟨ψ |H| 10 10 10 2ma2

34

Exercises

35

(d) To include time dependence, we just need to augment the wavefunction with exponential phase factors with the appropriate energy eigenvalues. We have i E2 t 1 h −i E1 t ψ (x,t) = √ e h¯ ψ1 (x) + 3e−i h¯ ψ2 (x) . (5.7) 10 The expectation value of the energy or the Hamiltonian is identical when time dependence is included again because the wavefunction is expressed in terms of energy eigenstates. However, the expectation value of the position will change in time. To evaluate this, let’s first just look at the wavefunction times its complex conjugate: i E2 t 1 h i E1 t ∗ ψ ∗ (x,t)ψ (x,t) = √ e h¯ ψ1 (x) + 3ei h¯ ψ2∗ (x) (5.8) 10 i E2 t 1 h −i E1 t ×√ e h¯ ψ1 (x) + 3e−i h¯ ψ2 (x) 10  (E2 −E1 )t 1 |ψ1 (x)|2 + 9|ψ2 (x)|2 + 3e−i h¯ ψ1∗ (x)ψ2 (x) = 10  (E2 −E1 )t +3ei h¯ ψ1 (x)ψ2∗ (x) . We have chosen the energy eigenstate wavefunctions to be real-valued functions, and so, for example, ψ1∗ (x) = ψ1 (x), and so the square simplifies to     1 (E2 − E1 )t ψ ∗ (x,t)ψ (x,t) = ψ1 (x)2 + 9ψ2 (x)2 + 6 cos ψ1 (x)ψ2 (x) h¯ 10    (E − E )t 3 2 1 = ψ (x)2 − 1 − cos ψ1 (x)ψ2 (x) . (5.9) 5 h¯ That is, the expectation value of x is given temporal dependence by the interference of the energy eigenstates. Note that we already calculated the expectation value on the time-independent state ψ (x), so we will just focus on the modification here. That is, we write the time-dependent expectation value as ⟨x(t)⟩ ˆ = ⟨x⟩ ˆ + ∆⟨x(t)⟩ ˆ , where

   Z a 3 (E2 − E1 )t 1 − cos dx x ψ1 (x)ψ2 (x) 5 h¯ 0    16a (E2 − E1 )t . = 1 − cos h¯ 15π 2

∆⟨x(t)⟩ ˆ =−

(5.10)

(5.11)

Thus, the time-dependent expectation value of the position on this state is   a 16a (E2 − E1 )t ⟨x(t)⟩ ˆ = − . (5.12) cos 2 15π 2 h¯

5 Quantum Mechanical Example: The Infinite Square Well

36

(5.2)

We will first verify that this state is normalized. Note that px

px

e−i h¯ ei h¯ 1 ψ (x)ψ (x) = √ √ = , a a a ∗

(5.13)

independent of position x. Then, the nth moment of this distribution on the infinite square well is ⟨xˆn ⟩ =

Z a 0

dx ψ ∗ (x) xn ψ (x) =

1 a

Z a

dx xn =

0

an . n+1

(5.14)

The normalization corresponds to n = 0, for which we do indeed find the value 1, so this is unit normalized. The variance is the difference between n = 2 and the square of n = 1 moments:   1 a2 2 2 2 2 1 ˆ =a − 2 = . σx = ⟨xˆ ⟩ − ⟨x⟩ (5.15) 3 2 12 On the other hand, for the variance of momentum on this state, note that this state is an eigenstate of the momentum operator: px

p| ˆ ψ ⟩ = p|ψ ⟩



−i¯h

px

d ei h¯ ei h¯ √ =p√ . dx a a

(5.16)

As an eigenstate, the variance necessarily vanishes: σ p2 = 0. This would seem to imply that the Heisenberg uncertainty principle is violated, because

σx2 σ p2 = 0
0. If this vanishes at x = −π /2, then we must enforce

ψn (−π /2) = 0 = α e−i

π pn 2¯h

+ β ei

π pn 2¯h

(5.21)

.

That is,

β = −α e−i

π pn h¯

(5.22)

.

Then, the energy eigenstate wavefunction takes the form  pn x  pn (x+π ) . ψn (x) = α ei h¯ − e−i h¯

(5.23)

Next, if the wavefunction vanishes at the upper endpoint, where x = π /2, we must enforce  π pn    π pn 2π pn 3π pn . ψn (π /2) = 0 = α ei 2¯h − e−i 2¯h = α ei 2¯h 1 − e−i h¯ (5.24) Thus, the momentum eigenvalues are (5.25)

pn = n¯h , for n = 1, 2, 3, . . . . Then, the energy eigenstate wavefunction is    ψn (x) = α einx − e−in(x+π ) = α einx − (−1)n e−inx .

(5.26)

Finally, we fix α by demanding that this wavefunction is normalized on the well: Z π /2

1=

−π /2

= |α |2 = |α |2

dx ψn∗ (x)ψn (x)

Z π /2 −π /2

Z π /2

−π /2

dx e−inx − (−1)n einx

(5.27) 

einx − (−1)n e−inx



 dx 2 − (−1)n e2inx − (−1)n e−2inx = 2π |α |2 .

Then, the normalization constant can be chosen to be r 1 α= , 2π

(5.28)

and the normalized energy eigenstate wavefunction is  1 ψn (x) = √ einx − (−1)n e−inx . 2π

(5.29)

5 Quantum Mechanical Example: The Infinite Square Well

38

(b) To determine the time dependence of this initial wavefunction, we expand in energy eigenstates: |ψ ⟩ =

∑ βn |ψn ⟩ .

(5.30)

n=1

Taking the inner product with the bra ⟨ψm | isolates the coefficient βm : ∞

⟨ψm |ψ ⟩ =

∑ βn ⟨ψm |ψn ⟩ = βm .

(5.31)

n=1

So, we just need to calculate r Z

βn = ⟨ψn |ψ ⟩ =

π /4

−π /4

dx

1 2 ψn (x) = π π

Z π /4 −π /4

 dx einx − (−1)n e−inx . (5.32)

This vanishes if n is even. For odd n, we have

βn =

1 π

Z π /4

 4 nπ dx einx + e−inx = sin . πn 4 −π /4

(5.33)

Thus, the initial wavefunction can be expressed in the basis of energy eigenstates as r (2n−1)π 4 2 ∞ sin 4 ψ (x) = cos ((2n − 1)x) . (5.34) ∑ π π n=1 2n − 1 To determine the wavefunction at a later time, we simply multiply each term by the exponential energy phase factor, where e−i

En t h¯

= e−i

n2 h¯ t 2m

.

Thus, the wavefunction at a later time t is r (2n−1)2 h¯ t 4 2 ∞ e−i 2m (2n − 1)π ψ (x,t) = ∑ 2n − 1 sin 4 cos ((2n − 1)x) . π π n=1 (c)

The inner product ⟨χ |ψ ⟩ is √ (2n−1)π Z π /2 (2n−1)2 h¯ t sin 4 2 ∞ 4 dx cos ((2n − 1)x) ⟨χ |ψ ⟩ = 2 ∑ e−i 2m π n=1 2n − 1 −π /2 √ (2n−1)π (2n−1)2 h¯ t sin 8 2 ∞ 4 = 2 ∑ (−1)n e−i 2m . π n=1 (2n − 1)2

(d) We can then take the derivative and set t = 0 and then find √ 2 sin (2n−1)π 8 2 ∞ d⟨χ |ψ ⟩ ¯ n (2n − 1) h 4 = −i (−1) ∑ 2 dt π 2m (2n − 1)2 t=0 n=1 √ (2n − 1)π 4 2 h¯ ∞ (−1)n sin = −i 2 ∑ π m n=1 4

(5.35)

(5.36)

(5.37)

(5.38)

Exercises

39

√   4 2 h¯ 1 1 1 1 = −i 2 −√ + √ + √ − √ +··· . π m 2 2 2 2 This sum actually vanishes. The sine factor just oscillates between + √12 and

− √12 , and so the sum keeps canceling itself at higher terms in the series. From the Schrödinger equation, the time derivative is determined by the Hamiltonian, which, for the infinite square well, is just the squared momentum operator: pˆ2 Hˆ = 2m

(e)

so

i¯h

d ˆ ψ⟩ . |ψ ⟩ = H| dt

(5.39)

The momentum operator is a spatial derivative, and at t = 0, the initial wavefunction is piecewise constant over the well. This is then projected onto a uniform wavefunction on the well. The second derivatives are 0 almost everywhere, except at the points where the initial wavefunction changes value p from 0 to 2/π , but the values of the spatial second derivatives are opposite of one another. (Roughly, about the point x = −π /4, the wavefunction is “concave-up”, and around x = π /4, it is “concave-down.”) Once projected onto the uniform wavefunction, these second derivatives cancel, rendering the time-derivative at t = 0 0. Let’s first consider the expectation value of momentum on this state. Recall from Ehrenfest’s theorem that the time dependence of the expectation values are d⟨ p⟩ ˆ i ˆ = ⟨[H, p]⟩ ˆ . dt h¯

(5.40)

Because the Hamiltonian is purely dependent on momentum, Hˆ = pˆ2 /(2m), the commutator vanishes and ⟨ p⟩ ˆ is therefore constant in time. We had already argued that the expectation value of momentum at t = 0 was 0, and therefore this is true for all time: ⟨ p⟩ ˆ = 0. For the expectation value of position, note that the expectation value of the commutator is d pˆ2 h¯ ˆ x]⟩ ⟨[H, ˆ = i¯h⟨ψ | |ψ ⟩ = i ⟨ p⟩ ˆ = 0, d pˆ 2m m

(5.41)

where we used the expression of the position operator as a momentum derivative and recall that the expectation value of momentum was 0 on this state. Then, the expectation value of position is also constant in time, d⟨x⟩/dt ˆ = 0, and initially the wavefunction was symmetric about x = 0. Therefore, for all time, ⟨x⟩ ˆ is 0. 5.4 (a)

In position space, the momentum operator is of course pˆ = −i¯h

d . dx

(5.42)

5 Quantum Mechanical Example: The Infinite Square Well

40

Now, with position x = aϕ on the ring, the momentum operator is simply pˆ = −i

h¯ d , a dϕ

(5.43)

in angle space. To establish the Hermiticity of this representation of momentum, we calculate its matrix element ( p) ˆ i j in angle space. For some orthonormal basis {ψi }i on ϕ ∈ [0, 2π ), we have Z

h¯ 2π d d ϕ ψi∗ (ϕ ) ψ j (ϕ ) (5.44) a 0 dϕ   Z 2π h¯ d ∗ ∗ ∗ = −i ψi (0)ψ j (0) − ψi (2π )ψ j (2π ) − d ϕ ψ j (ϕ ) ψi (ϕ ) a dϕ 0 Z d ∗ h¯ 2π d ϕ ψ j (ϕ ) ψi (ϕ ) = ( p) ˆ ∗ji . =i a 0 dϕ

( p) ˆ i j = −i

On the second line, we used integration by parts, and by the periodicity of the states on the ring, note that

ψi∗ (0)ψ j (0) = ψi∗ (2π )ψ j (2π ) .

(5.45)

Therefore, this momentum operator is indeed Hermitian on the Hilbert space of the states on the ring. (b) To find the energy eigenstates on the ring, we first note that energy eigenstates are momentum eigenstates because there is no position-dependence of the potential. Therefore, we can write the eigenstate wavefunctions as

ψn (ϕ ) = α ei

apn ϕ h¯

(5.46)

,

for some momentum pn and normalization constant α . By periodicity, ψn (ϕ + 2π ) = ψn (ϕ ), and so we must enforce that ei

apn (ϕ +2π ) h¯

= ei

apn ϕ h¯

,

(5.47)

or that 2π ah¯ pn

= 2π n, for some integer n. Therefore, the allowed momentum eigenvalues are pn =

n¯h . a

(5.48)

The eigenenergies are then p2n n2 h¯ 2 = . (5.49) 2m 2ma2 Unlike for the infinite square well, the state with n = 0 is allowed on the ring because it is normalizable. Thus the minimal energy of the particle on the ring is actually 0, for which the eigenstate wavefunction is just a constant. Because the ring is of finite size, σϕ is always finite. However, on the ground state where n = 0, the momentum is clearly 0 and so σ p = 0, which would seem to invalidate the Heisenberg uncertainty principle. However, recall En =

(c)

Exercises

41

what the generalized uncertainty principle actually stated. In this case, we have ⟨[ϕˆ , p]⟩ ˆ σϕ σ p ≥ . (5.50) 2i

5.5 (a)

We have established that on the ground state p| ˆ ψ0 ⟩ = 0 and so ⟨ψ0 |[ϕˆ , p]| ˆ ψ0 ⟩ = 0. Thus, this is entirely consistent with the uncertainty principle on the ring, because a state of 0 energy is on the Hilbert space. So, the uncertainty principle isn’t so useful now. Recall that the energy eigenvalues of the infinite square well are n2 π 2 h¯ 2 . 2ma2 The Schwarzschild radius of an energy eigenstate is then En =

(5.51)

2GN 2GN n2 π 2 h¯ 2 En = 4 . (5.52) 4 c c 2ma2 If this equals the width of the well a, then the level n of the eigenstate is Rs =

n2 =

ma3 c4 . π 2 GN h¯ 2

(5.53)

(b) Rearranging the expression for the Schwarzschild radius, the energy when it is the size of the well is E=

ac4 . 2GN

(5.54)

In SI units, c = 3 × 108 m/s, GN = 6.67 × 10−11 m3 kg−1 s−2 , and we have said that the size of the well is a = 10−15 m. Then, this energy is E ≃ 6 × 1028 J ,

(c)

(5.55)

which is enormous! Everyday energies are maybe hundreds of Joules, so this is orders and orders of magnitude greater. With the mass of the pion given as mπ = 2.4 × 10−28 kg, the energy level n at which the infinite square well becomes a black hole is (using the result of part (a)) n2 ≃ 3 × 1038 ,

(5.56)

or that n ≃ 1.7 × 1019 . By contrast, the energy at which the pion is traveling at the speed of light would be equal to setting its kinetic energy equal to its bound state energy: n2 π 2 h¯ 2 1 mπ c2 = , 2 2mπ a2

(5.57)

mπ ca ≃ 0.23 . π h¯

(5.58)

or that n=

5 Quantum Mechanical Example: The Infinite Square Well

42

5.6

So, even the ground state of this infinite square well is relativistic. One thing to note is that clearly if the pion is traveling near the speed of light, its kinetic energy is not as simple as mπ v 2 /2. However, setting v = c establishes an energy at which the pion is definitely relativistic, and at which the non-relativistic expression for the energy is no longer applicable. (a) The matrix elements of the momentum operator are  h¯ mn ( p) ˆ mn = −2i 1 − (−1)m+n . (5.59) 2 2 a m −n The matrix elements of the squared momentum operator are found by matrix multiplication of the momentum operator with itself. We have ( pˆ2 )mn =



ˆ ml ( p) ˆ ln ∑ ( p)

(5.60)

l=1

 ln   4¯h2 ∞ ml  m+l l+n 1 − (−1) 1 − (−1) ∑ a2 l=1 m2 − l 2 l 2 − n2   4¯h2 mn ∞ l 2 1 − (−1)m+l 1 − (−1)l+n =− 2 ∑ . a (m2 − l 2 )(l 2 − n2 ) l=1 =−

(b) Note that terms in the sum vanish if either (−1)^{m+l} = 1 or (−1)^{l+n} = 1. So, to ensure that every term in the sum vanishes, we just need to force one of these relationships for every value of l. This can be accomplished if m and n differ by an odd number, call it k. If n = m + k with k odd, then either m + l is even and n + l is odd, or vice-versa, for every value of l. Thus,

(p̂²)_{m(m+k)} = 0 ,  (5.61)

if k is odd. If instead n = m + k and k is even, the sum can be expressed as

Σ_{l=1}^∞ l² (1 − (−1)^{m+l})(1 − (−1)^{l+m+k}) / [(m² − l²)(l² − (m+k)²)] = Σ_{l=1}^∞ 2l² (1 − (−1)^{m+l}) / [(m² − l²)(l² − n²)] .  (5.62)

This follows by distributing the product and noting that (−1)^{2(m+l)} = 1. Further, note that the value of 1 − (−1)^{m+l} is either 0 or 2, and so every non-zero term in the sum has the same sign, at least from this factor. Now, let's partial fraction the denominators. Note that

1/(m² − l²) = (1/2l) [1/(m − l) − 1/(m + l)] ,  (5.63)

and similar for the other denominator factor. Then, the sum becomes

Σ_{l=1}^∞ 2l² (1 − (−1)^{m+l}) / [(m² − l²)(l² − n²)] = (1/2) Σ_{l=1}^∞ (1 − (−1)^{m+l}) [1/(m − l) − 1/(m + l)][1/(n + l) − 1/(n − l)] .  (5.64)
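The closed-form matrix elements of Eq. (5.59) can be spot-checked against direct numerical integration of ⟨ψ_m|p̂|ψ_n⟩. The sketch below uses ℏ = a = 1, and the function names are illustrative, not from the text; both functions return the coefficient of i in the matrix element:

```python
import math

# Check (5.59): (p)_{mn} = -2i * m*n/(m^2 - n^2) * (1 - (-1)^{m+n}), hbar = a = 1,
# against the integral <psi_m|p|psi_n> = -i int_0^1 sqrt(2) sin(m pi x) d/dx[sqrt(2) sin(n pi x)] dx.
def p_formula(m, n):
    if m == n:
        return 0.0
    return -2.0 * m * n / (m**2 - n**2) * (1 - (-1)**(m + n))

def p_numeric(m, n, N=20000):
    # midpoint rule; returns the coefficient of i in -i * integral
    s = 0.0
    for k in range(N):
        x = (k + 0.5) / N
        s += 2.0 * math.sin(m * math.pi * x) * n * math.pi * math.cos(n * math.pi * x)
    return -s / N

for m, n in [(1, 2), (2, 3), (1, 4)]:
    print(m, n, p_formula(m, n), p_numeric(m, n))
```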

We can keep partial fractioning the product denominators when distributed. We have

1/[(m − l)(n + l)] = (1/(m + n)) [1/(m − l) + 1/(n + l)] ,  (5.65)
1/[(m − l)(n − l)] = (1/(n − m)) [1/(m − l) − 1/(n − l)] ,  (5.66)
1/[(m + l)(n + l)] = (1/(n − m)) [1/(m + l) − 1/(n + l)] ,  (5.67)
1/[(m + l)(n − l)] = (1/(m + n)) [1/(m + l) + 1/(n − l)] .  (5.68)

Using this, the product of denominators becomes

[1/(m − l) − 1/(m + l)][1/(n + l) − 1/(n − l)] = (2m/(m² − n²)) [1/(l + m) − 1/(l − m)] + (2n/(n² − m²)) [1/(l + n) − 1/(l − n)] .  (5.69)

Thus the expression of the squared momentum as an infinite sum breaks into two telescoping series:

Σ_{l=1}^∞ 2l² (1 − (−1)^{m+l}) / [(m² − l²)(l² − n²)] = (1/(m² − n²)) Σ_{l=1}^∞ (1 − (−1)^{m+l}) [m/(l + m) − m/(l − m) − n/(l + n) + n/(l − n)] .  (5.70)

The series that we need to evaluate is then of the form

Σ_{l=1}^∞ [1/(l + m) − 1/(l − m)] ,  (5.71)

where the values of l that are summed over are either just odd or just even, depending on m. If m is odd, then l is even and we can write l = 2k, and the sum takes the form

Σ_{k=1}^∞ (a_{2k+m} − a_{2k−m}) ,  (5.72)

where a_i = 1/i is the placeholder for the corresponding term in the sum. Note that by the telescoping nature, each first term a_{2k+m} cancels against the second term a_{2k′−m} with k′ = k + m, so every first term cancels and only the first m of the second terms survive. The only sum that remains is just of the second term, up to k = m:

Σ_{k=1}^∞ (a_{2k+m} − a_{2k−m}) = −Σ_{k=1}^m a_{2k−m} = −Σ_{k=1}^m 1/(2k − m) .  (5.73)

Let’s see what this evaluates to for some low values of m. If m = 1, we find 1

1 = −1 . k=1 2k − 1

−∑

(5.74)

5 Quantum Mechanical Example: The Infinite Square Well

44

If m = 3, we find 3

1 1 1 = 1−1− = − . 3 3 k=1 2k − 3

−∑

(5.75)

We could keep going to larger m, but this suggests that it evaluates to −1/m. Let's prove this with induction. We have already verified low m; let's now assume it is true for m and prove it is true for m + 2. We consider the sum

−Σ_{k=1}^{m+2} 1/(2k − m − 2) = −Σ_{k=1}^{m+1} 1/(2k − m − 2) − 1/(2(m + 2) − m − 2)  (5.76)
  = −Σ_{k=1}^{m+1} 1/(2(k − 1) − m) − 1/(m + 2)
  = −Σ_{k=0}^{m} 1/(2k − m) − 1/(m + 2)
  = 1/m − Σ_{k=1}^{m} 1/(2k − m) − 1/(m + 2) = −1/(m + 2) .

This completes the induction step and proves that

Σ_{l=1}^∞ [1/(l + m) − 1/(l − m)] = −1/m ,  (5.77)

if m is odd. Next, if m is even, then l must be odd in the sum and can be written as l = 2k − 1, and the sum takes the form

Σ_{k=1}^∞ (a_{2k−1+m} − a_{2k−1−m}) = −Σ_{k=1}^m a_{2k−1−m} = −Σ_{k=1}^m 1/(2k − 1 − m) ,  (5.78)

by the telescoping nature of the sum, as established earlier. Now, let's evaluate this again for some low m. First, for m = 2 we have

−Σ_{k=1}^{2} 1/(2k − 1 − 2) = 1 − 1 = 0 .  (5.79)

For m = 4, we have

−Σ_{k=1}^{4} 1/(2k − 1 − 4) = 1/3 + 1 − 1 − 1/3 = 0 .  (5.80)
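Both telescoping evaluations can be checked numerically against truncations of the original series (5.71): with l running over the opposite parity of m, the sum tends to −1/m for odd m and to 0 for even m. A minimal sketch:

```python
# Numerical check of the telescoping sums behind (5.77) and the even-m case:
# sum over l (opposite parity to m) of [1/(l+m) - 1/(l-m)].
def series(m, terms=100000):
    l = 2 if m % 2 == 1 else 1    # l has the opposite parity of m
    s = 0.0
    for _ in range(terms):
        s += 1.0 / (l + m) - 1.0 / (l - m)
        l += 2
    return s

for m in [1, 3, 5]:
    print(m, series(m))   # approaches -1/m
for m in [2, 4, 6]:
    print(m, series(m))   # approaches 0
```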

This suggests that this sum simply vanishes! Let's again prove this with induction, assuming it is true for m. The sum for m + 2 is

−Σ_{k=1}^{m+2} 1/(2k − 1 − m − 2) = −Σ_{k=1}^{m+1} 1/(2(k − 1) − 1 − m) − 1/(2(m + 2) − 1 − m − 2)  (5.81)
  = −Σ_{k=0}^{m} 1/(2k − 1 − m) − 1/(m + 1)
  = 1/(m + 1) − Σ_{k=1}^{m} 1/(2k − 1 − m) − 1/(m + 1) = 0 ,

which proves the induction step. Inputting these results into Eq. 5.70 proves that all off-diagonal terms in the matrix representation of p̂² vanish.

(c) Let's now consider just the (p̂²)₁₁ element. From our earlier partial fractioning, this term can be expressed as

(p̂²)₁₁ = (4ℏ²/a²) Σ_{k=1}^∞ [1/(2k − 1) + 1/(2k + 1)]²  (5.82)
  = (4ℏ²/a²) Σ_{k=1}^∞ [1/(2k − 1)² + 1/(2k + 1)² + 2/((2k − 1)(2k + 1))]
  = (4ℏ²/a²) [π²/8 + (π²/8 − 1) + Σ_{k=1}^∞ (1/(2k − 1) − 1/(2k + 1))]
  = (4ℏ²/a²) [π²/8 + π²/8 − 1 + 1]
  = π²ℏ²/a² .

In these expressions, we used our knowledge about telescoping series and the given value of the series presented in the exercise. This is indeed the value of the squared momentum in the first energy eigenstate.

(d) The solution to this problem is explicitly worked out in J. Prentis and B. Ty, "Matrix mechanics of the infinite square well and the equivalence proofs of Schrödinger and von Neumann," Am. J. Phys. 82, 583 (2014).

5.7 (a) Let's now calculate the matrix elements of the position operator in the energy eigenstate basis. We have

(x̂)_{mn} = ⟨ψ_m|x̂|ψ_n⟩ = (2/a) ∫₀ᵃ dx sin(mπx/a) x sin(nπx/a) = −(4mn/π²) (1 − (−1)^{m+n})/(m² − n²)² · a .  (5.83)

We have just presented the result from doing the integral, which is a standard exercise in integration by parts.

(b) To evaluate the commutator of position and momentum, we need to take their matrix product in two orders. First,

(x̂p̂)_{mn} = Σ_{l=1}^∞ (x̂)_{ml} (p̂)_{ln} = (8iℏ/π²) mn Σ_{l=1}^∞ l² (1 − (−1)^{m+l})(1 − (−1)^{l+n}) / [(m² − l²)² (l² − n²)] .  (5.84)


The opposite order of the product is

(p̂x̂)_{mn} = Σ_{l=1}^∞ (p̂)_{ml} (x̂)_{ln} = (8iℏ/π²) mn Σ_{l=1}^∞ l² (1 − (−1)^{m+l})(1 − (−1)^{l+n}) / [(m² − l²)(l² − n²)²] .  (5.85)

Their difference, the matrix elements of the commutator of position and momentum, is then

([x̂, p̂])_{mn} = (8iℏ/π²) mn Σ_{l=1}^∞ l² (2l² − m² − n²) (1 − (−1)^{m+l})(1 − (−1)^{l+n}) / [(m² − l²)² (n² − l²)²] .  (5.86)

Now, just like we observed for the square of momentum, all terms in this sum explicitly vanish if m and n are even and odd (or vice-versa). The sum is only non-zero if they differ by an even number. In this case, the numerator simplifies to

([x̂, p̂])_{mn} = (16iℏ/π²) mn Σ_{l=1}^∞ l² (2l² − m² − n²) (1 − (−1)^{m+l}) / [(m² − l²)² (n² − l²)²]  (5.87)
  = −(16iℏ/π²) mn Σ_{l=1}^∞ l² (1 − (−1)^{m+l}) / [(m² − l²)(n² − l²)] · [1/(m² − l²) + 1/(n² − l²)] .

From Exercise 5.6, we had shown that the denominator partial fractions into

1/[(m² − l²)(n² − l²)] = (1/(2l²(m² − n²))) [m (1/(l − m) − 1/(l + m)) − n (1/(l − n) − 1/(l + n))] .  (5.88)

The matrix element of the commutator simplifies to

([x̂, p̂])_{mn} = (8iℏ/π²) (mn/(m² − n²)) Σ_{l=1}^∞ (1 − (−1)^{m+l}) [m (1/(l − m) − 1/(l + m)) − n (1/(l − n) − 1/(l + n))] × [1/(l² − m²) + 1/(l² − n²)] .  (5.89)

(c) On the diagonal, the values of the commutator's matrix elements are

([x̂, p̂])_{nn} = (32iℏ/π²) n² Σ_{l=1}^∞ l² (1 − (−1)^{n+l}) / (l² − n²)³ .  (5.90)
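Equation (5.90) is easy to evaluate numerically; with ℏ = 1, the coefficient of i in each diagonal element rapidly approaches 1 (a sketch, with the truncation length an arbitrary choice):

```python
import math

# Truncated evaluation of (5.90) with hbar = 1: only l of opposite parity
# to n contributes (the parity factor is then 2), so iterate over those l.
def diag_comm(n, terms=2000):
    l = 2 if n % 2 == 1 else 1
    s = 0.0
    for _ in range(terms):
        s += 2.0 * l * l / (l * l - n * n)**3
        l += 2
    return 32.0 / math.pi**2 * n * n * s

for n in range(1, 8):
    print(n, diag_comm(n))   # each value close to 1, i.e. ([x,p])_{nn} ~ i*hbar
```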

Even just including the first 20 terms in the sum, the result is within 1% of iℏ for n = 1, 2, 3, 4, 5, 6, 7.

(d) Let's take the trace of the commutator as a general operator expression. We have

tr[x̂, p̂] = tr(x̂p̂) − tr(p̂x̂) = tr(x̂p̂) − tr(x̂p̂) = 0 ,  (5.91)

using the linearity and cyclicity of the trace. We had just shown that every diagonal element of the commutator is iℏ, and yet the sum of diagonal elements must vanish, which seems to be at odds with itself. Our intuition for the trace as the sum of diagonal elements of a matrix works well for finite-dimensional matrices, just like our intuition for finite sums. However, we know that infinite sums or series can have very surprising properties, and the trace of the commutator of position and momentum is an infinite sum. So while for any finite n the diagonal element is ([x̂, p̂])_{nn} = iℏ, the "infinite" diagonal element is an enormous negative number ensuring that the trace vanishes.

5.8 (a) The normalization of this wavefunction is

1 = N² ∫₀ᵃ dx x²(a − x)² = N² a⁵/30 .  (5.92)

Therefore, the normalized wavefunction is

ζ₁(x) = √(30/a⁵) x(a − x) .  (5.93)

(b) The overlap of this state with the ground state of the infinite square well is

⟨ζ₁|ψ₁⟩ = (√60/a³) ∫₀ᵃ dx x(a − x) sin(πx/a) = 8√15/π³ .  (5.94)

To evaluate this integral is a standard exercise in integration by parts, but we suppress details here. The fraction of ζ₁(x) that is described by the ground state of the infinite square well is then the square of this:

|⟨ζ₁|ψ₁⟩|² = 960/π⁶ ≃ 0.998555 .  (5.95)
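A quick numerical cross-check of Eqs. (5.94)-(5.95) with a = 1 (a midpoint-rule sketch; the grid size is arbitrary):

```python
import math

# Overlap <zeta_1|psi_1> = sqrt(60) * int_0^1 x(1-x) sin(pi x) dx, a = 1.
N = 20000
integral = sum((k + 0.5) / N * (1 - (k + 0.5) / N) * math.sin(math.pi * (k + 0.5) / N)
               for k in range(N)) / N
overlap = math.sqrt(60.0) * integral
print(overlap**2, 960.0 / math.pi**6)   # both ~0.998555
```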

As illustrated in the picture, these wavefunctions are very, very similar!

(c) We can rearrange the Hamiltonian's eigenvalue equation to solve for the potential. We then find

V(x) = (ℏ²/2m) (1/ζ₁(x)) d²ζ₁(x)/dx² + E₁ .  (5.96)

The term involving the second derivative of the wavefunction becomes

(1/ζ₁(x)) d²ζ₁(x)/dx² = (1/(x(a − x))) d²/dx² [x(a − x)] = −2/(x(a − x)) .  (5.97)

Therefore, the potential is

V(x) = −(ℏ²/m) (1/(x(a − x))) + E₁ .  (5.98)

By setting

E₁ = 4ℏ²/(ma²) ,  (5.99)

we can plot this potential and compare to the infinite square well. This is shown in Fig. 5.1.

Fig. 5.1  Comparison of the new potential (solid black) to the infinite square well. The new potential diverges at the points x = 0, a, which is also where the wavefunctions must vanish.

(d) The thing we know about the first excited state with respect to the ground state is that it is orthogonal. Further, the ground state is symmetric about the center of the well, and so to easily ensure that the first excited state is orthogonal, we just make it anti-symmetric on the well. So, we expect that the first excited state wavefunction takes the form

ζ2 (x) = Nx(a − x)(a − 2x) ,

(5.100)

for some normalization constant N (that we won't evaluate here). Plugging this into the eigenvalue equation for the Hamiltonian with the potential established above, the first excited state energy is

E₂ = −(ℏ²/2m) (1/ζ₂(x)) d²ζ₂(x)/dx² + V(x) = E₁ + (2ℏ²/m) (1/(x(a − x))) .  (5.101)

Note that this ζ₂(x) is actually not an eigenstate, because the "energy" E₂ for this potential is position dependent. However, we can estimate the excited-state energy by noting that the minimum of the position-dependent piece occurs at x = a/2, where

1/(x(a − x)) |_{x=a/2} = 4/a² .  (5.102)

So, the second energy eigenvalue is approximately

E₂ ≃ E₁ + 8ℏ²/(ma²) .  (5.103)

For comparison, in the infinite square well, note that

E₂ = E₁ + 3π²ℏ²/(2ma²) ≃ E₁ + 14.8 ℏ²/(ma²) .  (5.104)

6

Quantum Mechanical Example: The Harmonic Oscillator

Exercises

6.1 (a) The inner product of two coherent states, as represented by the action of the raising operator on the harmonic oscillator ground state, is

⟨ψ_λ|ψ_η⟩ = e^{−|λ|²/2} e^{−|η|²/2} ⟨ψ₀| e^{λ* â} e^{η â†} |ψ₀⟩ .  (6.1)

Now, in the basis of the raising operator, â is a derivative and so its exponential is the translation operator. That is,

e^{λ* â} e^{η â†} = e^{η (â† + λ*)} = e^{λ* η} e^{η â†} .  (6.2)

Then, the inner product of the two coherent states is

⟨ψ_λ|ψ_η⟩ = e^{−|λ|²/2} e^{−|η|²/2} e^{λ* η} ⟨ψ₀| e^{η â†} |ψ₀⟩ = e^{−|λ|²/2} e^{−|η|²/2} e^{λ* η} .  (6.3)

For any finite values of λ, η this is non-zero, and so two generic coherent states are not orthogonal to one another.

(b) Recall that the nth energy eigenstate of the harmonic oscillator can be expressed through the action of the raising operator on the ground state:

|ψ_n⟩ = ((â†)^n/√(n!)) |ψ₀⟩ .  (6.4)

A coherent state with eigenvalue λ can then be expanded in energy eigenstates as

|ψ_λ⟩ = e^{−|λ|²/2} e^{λ â†} |ψ₀⟩ = e^{−|λ|²/2} Σ_{n=0}^∞ (λ^n/n!) (â†)^n |ψ₀⟩ = e^{−|λ|²/2} Σ_{n=0}^∞ (λ^n/√(n!)) |ψ_n⟩ .  (6.5)

Then, the integral representing the sum over all coherent states is

∫_{−∞}^∞ ∫_{−∞}^∞ dRe(λ) dIm(λ) |ψ_λ⟩⟨ψ_λ| = ∫_{−∞}^∞ ∫_{−∞}^∞ dRe(λ) dIm(λ) e^{−|λ|²} Σ_{n,m=0}^∞ (λ^n (λ*)^m/√(n! m!)) |ψ_n⟩⟨ψ_m| .  (6.6)


To determine what this operator is in the basis of energy eigenstates, we need to do two things. First, we will establish that all off-diagonal elements vanish. At row n and column m, for n ≠ m, the element is

∫_{−∞}^∞ ∫_{−∞}^∞ dRe(λ) dIm(λ) e^{−|λ|²} λ^n (λ*)^m/√(n! m!) = (1/√(n! m!)) ∫₀^∞ d|λ| ∫₀^{2π} dφ e^{−|λ|²} |λ|^{n+m+1} e^{i(n−m)φ} ,  (6.7)

where we have expressed the complex number λ = |λ| e^{iφ} in polar coordinates on the right. The integral over the argument φ vanishes, and therefore indeed, the operator formed from the sum over all coherent states is diagonal in the basis of energy eigenstates of the harmonic oscillator. Now, we just need to determine what the diagonal entries are. Restricting to the element of the nth energy eigenstate, we have

∫_{−∞}^∞ ∫_{−∞}^∞ dRe(λ) dIm(λ) e^{−|λ|²} |λ|^{2n}/n! = (1/n!) ∫₀^∞ d|λ| |λ|^{2n+1} e^{−|λ|²} ∫₀^{2π} dφ = (2π/n!) ∫₀^∞ d|λ| |λ|^{2n+1} e^{−|λ|²} .  (6.8)

Now, we can change variables to |λ|² = x, for which the remaining integral becomes

∫₀^∞ d|λ| |λ|^{2n+1} e^{−|λ|²} = (1/2) ∫₀^∞ dx x^n e^{−x} = n!/2 .  (6.9)

Then, the nth diagonal entry of the coherent state sum is

∫_{−∞}^∞ ∫_{−∞}^∞ dRe(λ) dIm(λ) e^{−|λ|²} |λ|^{2n}/n! = (2π/n!)(n!/2) = π .  (6.10)
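The diagonal entries (6.8)-(6.10) can be confirmed by direct radial integration (a sketch; the grid size and radial cutoff are arbitrary choices):

```python
import math

# (2*pi/n!) * int_0^inf r^{2n+1} e^{-r^2} dr should equal pi for every n.
def diagonal_entry(n, R=10.0, N=50000):
    s = 0.0
    for k in range(N):
        r = (k + 0.5) * R / N
        s += r**(2 * n + 1) * math.exp(-r * r)
    s *= R / N
    return 2.0 * math.pi / math.factorial(n) * s

for n in range(4):
    print(n, diagonal_entry(n))   # each ~pi
```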

Thus, the sum over coherent states is overcomplete by a factor of π:

∫_{−∞}^∞ ∫_{−∞}^∞ dRe(λ) dIm(λ) |ψ_λ⟩⟨ψ_λ| = π 𝕀 .  (6.11)

6.2 (a)

Demanding that the ground state wavefunction is normalized, we have

1 = N² ∫_{−∞}^∞ dx e^{−mωx²/ℏ} = N² √(πℏ/(mω)) .  (6.12)

Therefore, the L²-normalized ground state wavefunction of the harmonic oscillator is

ψ₀(x) = (mω/(πℏ))^{1/4} e^{−mωx²/(2ℏ)} .  (6.13)

(b) We had determined that the first excited state wavefunction is

ψ₁(x) = N √(2mω/ℏ) x e^{−mωx²/(2ℏ)} .  (6.14)

The second excited state is defined through the action of the raising operator as

ψ₂(x) = ((â†)²/√2) ψ₀(x) = (â†/√2) ψ₁(x) = (1/√2)(−√(ℏ/(2mω)) d/dx + √(mω/(2ℏ)) x) N √(2mω/ℏ) x e^{−mωx²/(2ℏ)} = (N/√2) ((2mω/ℏ) x² − 1) e^{−mωx²/(2ℏ)} .  (6.15)

The third excited state is

ψ₃(x) = ((â†)³/√(3!)) ψ₀(x) = (â†/√3) ψ₂(x) = (N/√6) √(2mω/ℏ) ((2mω/ℏ) x³ − 3x) e^{−mωx²/(2ℏ)} .  (6.16)

6.3 (a)

The matrix elements of the raising operator â† in the basis of energy eigenstates are found in the usual way, from sandwiching with the eigenstates. That is,

(â†)_{mn} = ⟨ψ_m|â†|ψ_n⟩ = ⟨ψ₀| (â^m/√(m!)) â† ((â†)^n/√(n!)) |ψ₀⟩ .  (6.17)

As established in this chapter, this matrix element is non-zero only if m = n + 1, so that there are an equal number of raising and lowering operators. In this case, using â ~ d/dâ†, note that

⟨ψ₀| (â^{n+1}/√((n+1)!)) ((â†)^{n+1}/√(n!)) |ψ₀⟩ = (1/√(n!(n+1)!)) d^{n+1}(â†)^{n+1}/d(â†)^{n+1} = (n+1)!/√(n!(n+1)!) = √(n+1) .  (6.18)

Therefore, expressed as an outer product of the energy eigenstates, the raising operator is

â† = Σ_{n=0}^∞ √(n+1) |ψ_{n+1}⟩⟨ψ_n| .  (6.19)

As a matrix, this looks like

â† = ( 0   0   0   ⋯
       1   0   0   ⋯
       0   √2  0   ⋯
       ⋮   ⋮   ⋮   ⋱ ) .  (6.20)
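A truncated-matrix version of (6.19)-(6.20) is easy to build and probe. Note that the diagonal of the truncated commutator [â, â†] is 1 everywhere except the corner entry, a truncation artifact that forces the trace to vanish — precisely the infinite-dimensional subtlety discussed in Exercise 5.7(d). A minimal sketch:

```python
import math

# Build the N x N truncated raising operator per (6.19)-(6.20) and its adjoint.
N = 6
adag = [[math.sqrt(n + 1) if m == n + 1 else 0.0 for n in range(N)] for m in range(N)]
a = [[adag[n][m] for n in range(N)] for m in range(N)]   # transpose (real entries)

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(N)) for j in range(N)] for i in range(N)]

# Commutator [a, adag]: identity except the bottom-right corner.
comm = [[x - y for x, y in zip(r1, r2)] for r1, r2 in zip(matmul(a, adag), matmul(adag, a))]
print([comm[i][i] for i in range(N)])             # [1, 1, 1, 1, 1, -(N-1)]
print(sum(comm[i][i] for i in range(N)))          # trace is 0
```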


(b) Note that the lowering operator is just the Hermitian conjugate of the raising operator, and so

â = (â†)† = Σ_{n=0}^∞ √(n+1) |ψ_n⟩⟨ψ_{n+1}| .  (6.21)

We had already established the expressions for the position and momentum operators in terms of the raising and lowering operators, and so we can write:

x̂ = √(ℏ/(2mω)) (â + â†) = √(ℏ/(2mω)) Σ_{n=0}^∞ √(n+1) (|ψ_n⟩⟨ψ_{n+1}| + |ψ_{n+1}⟩⟨ψ_n|) ,  (6.22)

p̂ = i√(mℏω/2) (â† − â) = i√(mℏω/2) Σ_{n=0}^∞ √(n+1) (|ψ_{n+1}⟩⟨ψ_n| − |ψ_n⟩⟨ψ_{n+1}|) .  (6.23)

(c) The raising operator has non-zero entries exclusively just off the diagonal, so its determinant is 0. The determinant is basis-independent, so in any basis in which the raising operator is expressed, its determinant is 0.

(d) The general Laurent expansion of a function of the raising operator does not exist. Because its determinant is 0, the inverse of â† does not exist, so for g(â†) to exist, we must assume that it is analytic; i.e., that it has a Taylor expansion.

6.4 (a) Using the result of Eq. 6.118, we can note that the only effect of time evolution on a coherent state is to modify its eigenvalue under the lowering operator as λ → λe^{−iωt}. This remains a general complex number for all time, and one can simply replace the value of λ in the beginning of the analysis in section 6.4 to demonstrate that neither the variance of position nor that of momentum is modified by including time dependence. Therefore, we still find that

σ_x² = ℏ/(2mω) ,  σ_p² = mℏω/2 .  (6.24)

Further, the Heisenberg uncertainty principle is saturated for a coherent state for all time. (b) For a classical harmonic oscillator, if you start at rest from position x = ∆x, then the time-dependent position is sinusoidal: x(t) = ∆x cos(ω t) ,

(6.25)

by the definition of ω as the frequency of oscillation. Correspondingly, the classical momentum p(t) is p(t) = m

dx = −mω ∆x sin(ω t) . dt

(6.26)

Exercises

53

From the results of the beginning of section 6.4, the time-dependent expectation value of position on a coherent state is r h¯ ⟨x⟩ ˆ = (λ e−iω t + λ ∗ eiω t ) . (6.27) 2mω If we demand that at time t = 0 this is ∆x, we have that r h¯ ∆x = (λ + λ ∗ ) . 2mω

(6.28)

Correspondingly, the time-dependent expectation value of momentum is r m¯hω ∗ iω t ⟨ p⟩ ˆ =i (6.29) (λ e − λ e−iω t ) . 2 At time t = 0, we assume that the particle is at rest, and so we enforce that λ = λ ∗ , or that the eigenvalue of the lowering operator is initially real-valued. Then, r 2¯h ∆x = λ, (6.30) mω and

r ⟨x⟩ ˆ =

h¯ λ (e−iω t + eiω t ) = 2mω

r

2¯h λ cos(ω t) = ∆x cos(ω t) . mω

(6.31)

The expectation value of momentum is r √ m¯hω λ (eiω t − e−iω t ) = − 2m¯hωλ sin(ω t) = −mω ∆x sin(ω t) . ⟨ p⟩ ˆ =i 2 (6.32) (c)

These expectation values are identical to the classical oscillation! A coherent state of the raising operator would satisfy the eigenvalue equation aˆ† |χ ⟩ = η |χ ⟩ .

(6.33)

Assuming completeness of the energy eigenstates of the harmonic oscillator, this can be expanded as |χ ⟩ =





n=0

n=0

(aˆ† )n

∑ βn |ψn ⟩ = ∑ βn √n! |ψ0 ⟩ .

(6.34)

Then, the eigenvalue equation would imply that ∞ ∞ √ (aˆ† )n+1 √ | β ψ ⟩ = β n + 1| ψ ⟩ = n+1 ∑ βn η |ψn ⟩ . ∑ n n! 0 ∑ n n=0 n=0 n=0 ∞

(6.35)

By orthogonality of the energy eigenstates, we must enforce the recursive relationship: √ βn−1 n = ηβn . (6.36)

6 Quantum Mechanical Example: The Harmonic Oscillator

54

6.5

Further, note that there is no contribution to the ground state |ψ0 ⟩ on the left, so we must require that β0 = 0. However, with the recursion relation, this then implies that β1 = 0, β2 = 0, etc. So, there is actually no state on the Hilbert space of the harmonic oscillator that can satisfy the eigenvalue equation for the raising operator. The reason for this is essentially the same as why we must forbid negative eigenvalues of the Hamiltonian. If there was one eigenstate of the raising operator, then there must be an arbitrary number of them, with decreasingly negative expectation values of energy. (a) Let’s just evaluate the product Aˆ † Aˆ with the information given    i i √ pˆ +W (x) ˆ ˆ (6.37) Aˆ † Aˆ = − √ pˆ +W (x) 2m 2m pˆ2 i ˆ p] ˆ +W (x) ˆ 2. = + √ [W (x), 2m 2m Now, recall that [W (x), ˆ p] ˆ = i¯h

dW , d xˆ

(6.38)

so we have h¯ dW (x) ˆ pˆ2 pˆ2 Aˆ † Aˆ = −√ +W (x) ˆ 2 = Hˆ − E0 = +V (x) ˆ − E0 . 2m 2m 2m d xˆ

(6.39)

Here we used the fact that the Hamiltonian is the sum of the kinetic and potential energies. We can then rearrange and solve for the potential: ˆ h¯ dW (x) +W (x) ˆ 2 + E0 . V (x) ˆ = −√ 2m d xˆ

(6.40)

(b) Note that the commutator is ˆ Aˆ † ] = Aˆ Aˆ † − Aˆ † Aˆ . [A,

(6.41)

We already evaluated the second order of the product, and the first is    i i † ˆ ˆ AA = √ pˆ +W (x) ˆ − √ pˆ +W (x) ˆ (6.42) 2m 2m pˆ2 i = − √ [W (x), ˆ p] ˆ +W (x) ˆ2 2m 2m pˆ2 h¯ dW (x) ˆ = +√ +W (x) ˆ 2. 2m 2m d xˆ The commutator is then

r ˆ†

ˆ A ] = h¯ [A,

2 dW (x) ˆ . m d xˆ

(6.43)

Exercises

55

(c)

With these results, the anti-commutator is 2 pˆ2 h¯ dW (x) ˆ ˆ ˆ Aˆ † } = pˆ + √h¯ dW (x) {A, +W (x) ˆ 2+ −√ +W (x) ˆ2 2m 2m 2m d xˆ 2m d xˆ

=

pˆ2 + 2W (x) ˆ 2, m

(6.44)

which is the sum of squares. (d) The potential of the harmonic oscillator is V (x) ˆ =

mω 2 2 h¯ dW (x) ˆ xˆ = W (x) ˆ 2− √ + E0 . 2 2m d xˆ

(6.45)

Recall that the ground state energy of the harmonic oscillator is E0 =

h¯ ω . 2

(6.46)

Now, we make the ansatz that the superpotential is linear in x, ˆ where W (x) ˆ = α xˆ ,

(6.47)

for some constant α . Plugging this into the expression for the potential we have h¯ α mω 2 2 h¯ dW (x) ˆ h¯ ω + E0 = α 2 xˆ2 − √ + xˆ . = W (x) ˆ 2− √ 2 2 2m d xˆ 2m

(6.48)

Demanding that the constant terms cancel each other this sets α to be r m . α =ω (6.49) 2 Note that this is also consistent with the value of α from matching the quadratic terms. Further, note that with this superpotential, the operators Aˆ and Aˆ † for the harmonic oscillator are proportional to the raising and lowering operators aˆ and aˆ† . The anti-commutator of these operators is ˆ Aˆ † } = {A,

(e)

pˆ2 pˆ2 mω 2 2 + 2W (x) ˆ 2= +2 xˆ = 2Hˆ . m m 2

(6.50)

So, we didn’t need the anti-commutator for the harmonic oscillator because it is proportional to the Hamiltonian itself. The potential of the infinite square well is 0 in the well and so the superpotential satisfies: h¯ dW (x) ˆ +W (x) ˆ 2 + E0 = 0 . −√ 2m d xˆ

(6.51)

The ground state energy of the infinite square well is E0 =

π 2 h¯ 2 , 2ma2

(6.52)

6 Quantum Mechanical Example: The Harmonic Oscillator

56

for width a. The differential equation can be massaged as √  2m dW (x) ˆ W (x) ˆ 2 + E0 , = d xˆ h¯ or that dW = W 2 + E0

√ 2m d xˆ . h¯

Both sides can be integrated to find 1 W √ tan−1 √ = E0 E0

(6.53)

(6.54)

√ 2m xˆ + c0 , h¯

(6.55)

for some integration constant c0 . That is, the superpotential is √  p p 2mE0 xˆ + c0 E0 W (x) ˆ = E0 tan h¯   π h¯ π π h¯ =√ tan xˆ + c0 √ . a 2ma 2ma

(6.56)

To fix the integration constant, note that the the potential diverges at x = a, and so we demand that the superpotential diverges there, too. Tangent diverges where its argument is π /2, so setting xˆ = a (in position basis), we fix π π h¯ π (6.57) a + c0 √ = , a 2 2ma or that

r ma c0 = − . 2 h¯

(6.58)

Inserting this into the expression for the superpotential, we have π π h¯ π π h¯ π W (x) ˆ =√ xˆ − = −√ tan cot xˆ . a 2 a 2ma 2ma 6.6

(a)

(6.59)

Recall that the expression for the eigenstate of the phase operator in terms of the eigenstates of the number operator is |θ ⟩ = β0



∑ einθ |n⟩ .

(6.60)

n=0

We can determine the action of the number operator Nˆ on this state, noting that ˆ N|n⟩ = n|n⟩ .

(6.61)

That is, ˆ θ ⟩ = β0 N|





n=0

n=0

d

ˆ = β0 ∑ n einθ |n⟩ = −i |θ ⟩ . ∑ einθ N|n⟩ dθ

(6.62)

Exercises

57

Now, acting on the number operator eigenstate, we have   Z 2π Z 2π d ˆ ˆ θ⟩ = N|n⟩ = n|n⟩ = d θ cθ N| d θ cθ −i |θ ⟩ dθ 0 0   Z 2π dcθ = dθ i |θ ⟩ . dθ 0

(6.63)

On the right, we have used integration by parts and the Hermitivity of the ˆ so we know that the boundary terms cancel. Then, for number operator N, this to be an eigenstate, we must require the differential equation dcθ = ncθ , dθ

(6.64)

cθ = c0 e−inθ ,

(6.65)

i which has a solution

for some normalization constant c0 . Therefore, the number eigenstate expressed in terms of the eigenstates of the phase operator is |n⟩ = c0

Z 2π 0

d θ e−inθ |θ ⟩ .

(6.66)

(b) We just showed that, in the phase eigenstate basis, the number operator is a derivative. Therefore, we immediately know that the commutator is ˆ N] ˆ = i. [Θ,

6.7 (a)

(6.67)

Now, this is a bit quick because it ignores the n = 0 subtlety. On the state |0⟩, the phase θ is ill-defined, or, the representation of the number operator as a derivative breaks down. Specifically, on |0⟩, the phase can be anything and has no relationship to the properties of the number operator state. Correspondingly, the variances of the number and phase operators on this state are completely unrelated: |0⟩ is an eigenstate of the number operator and so has 0 variance, while it is flat or uniform in the eigenstates of the phase operator. As such the product of variances can be 0, and there exists no non-trivial uncertainty relation. To prove that f (aˆ† ) is not unitary, all we have to do is to show that there is one counterexample. For concreteness, let’s just take f (aˆ† ) = aˆ† and the state |ψ ⟩ = |ψ1 ⟩, the first excited state of the harmonic oscillator. Then, we have √ f (aˆ† )|ψ ⟩ = aˆ† |ψ1 ⟩ = (aˆ† )2 |ψ0 ⟩ = 2|ψ2 ⟩ , (6.68) √ a factor of 2 larger than the second excited state. A unitary operator maintains normalization, but in general aˆ† does not, as exhibited in this example. Therefore, a general operator of the form of an analytic function of the raising operator f (aˆ† ) is not unitary.

6 Quantum Mechanical Example: The Harmonic Oscillator

58

(b) We would like to construct a unitary operator exclusively from the raising and lowering operators that leaves the action on the ground state of the harmonic oscillator unchanged. To do this, we will start from the knowledge that we can always write a unitary operator Uˆ as the exponential of a Hermitian operator Tˆ : ˆ Uˆ = eiT .

(6.69)

Now, we would like to construct a Hermitian operator Tˆ from the raising and lowering operators such that its action on the ground state |ψ0 ⟩ exclusively is of the form of a Taylor series in raising operator aˆ† . This can easily be accomplished by expressing Tˆ as a sum of terms, each of which is a product of raising and lowering operators, with the lowering operators to the right: Tˆ =





  ∗ βmn (aˆ† )m aˆn + βmn (aˆ† )n aˆm .

(6.70)

m,n=0

Note that this operator is indeed Hermitian because Tˆ † = Tˆ , and when acting on the ground state we have Tˆ |ψ0 ⟩ =





  ∗ (aˆ† )n aˆm |ψ0 ⟩ βmn (aˆ† )m aˆn + βmn

m,n=0

" =



∑ βm0 (aˆ )

m=0

(c)

† m



+∑

(6.71)

# ∗ (aˆ† )n β0n

|ψ0 ⟩ .

n=0

Now, for the operator that generates coherent states from acting on the ground state. Again, the eigenvalue equation for the coherent state |χ ⟩ is a| ˆ χ ⟩ = λ |χ ⟩ .

(6.72)

Let’s now express the coherent state as a unitary operator Uˆ acting on the ground state, where ˆ ψ0 ⟩ = eiTˆ |ψ0 ⟩ , |χ ⟩ = U|

(6.73)

where Tˆ is a Hermitian operator. Then, the eigenvalue equation becomes ˆ

ae ˆ iT |ψ0 ⟩ =



∑ aˆ

n=0

∞ ∞ (iTˆ )n [a, ˆ (iTˆ )n ] (iTˆ )n |ψ0 ⟩ = ∑ |ψ0 ⟩ = ∑ λ |ψ0 ⟩ , n! n! n! n=1 n=0

(6.74)

where we have used a| ˆ ψ0 ⟩ = 0. Matching terms at each order in n, this then implies the recursion relation [a, ˆ (iTˆ )n+1 ] (iTˆ )n |ψ0 ⟩ = λ |ψ0 ⟩ . (n + 1)! n!

(6.75)

By linearity of all of the operators, this rearranges to [a, ˆ Tˆ n+1 ]|ψ0 ⟩ = −iλ (n + 1)Tˆ n |ψ0 ⟩ .

(6.76)

Exercises

59

The commutator therefore acts exactly like a derivative, which suggests that Tˆ is formed from the raising operator, as established earlier: Tˆ ∼ −iλ aˆ† ,

(6.77)

however, this is not Hermitian. This can be fixed easily, by including the appropriate factor of a: ˆ Tˆ = −iλ aˆ† + iλ ∗ aˆ .

(6.78)

We can prove the recursion relation with induction. First, it works if n = 0, for which  [a, ˆ Tˆ ]|ψ0 ⟩ = aˆ −iλ aˆ† + iλ ∗ aˆ |ψ0 ⟩ = −iλ |ψ0 ⟩ . (6.79) Now, assuming it is true for n − 1, for n we have    aˆTˆ n+1 |ψ0 ⟩ = [a, ˆ Tˆ ]T n + Tˆ aˆTˆ n |ψ0 ⟩ = −iλ Tˆ n + Tˆ (−iλ nT n−1 ) |ψ0 ⟩ = −iλ (n + 1)Tˆ n |ψ0 ⟩ , (6.80) which is what we wanted to prove. Therefore the unitarized operator that produces coherent states from the ground state is: † ∗ ˆ Uˆ = eiT = eλ aˆ −λ aˆ .

6.8 (a)

(6.81)

This is typically called the displacement operator. This slightly displaced state |ψϵ ⟩ is still normalized, and so we must enforce ⟨ψϵ |ψϵ ⟩ = 1 = (⟨ψ | + ϵ ⟨ϕ |)(|ψ ⟩ + ϵ |ϕ ⟩)

(6.82)

= ⟨ψ |ψ ⟩ + ϵ (⟨ψ |ϕ ⟩ + ⟨ϕ |ψ ⟩) + O(ϵ 2) = 1 + ϵ (⟨ψ |ϕ ⟩ + ⟨ϕ |ψ ⟩) + O(ϵ 2) . Therefore, for this equality to hold at least through linear order in ϵ , we must demand that |ψ ⟩ and |ϕ ⟩ are orthogonal: ⟨ψ |ϕ ⟩ = 0 .

(6.83)

(b) To take this derivative, we need to evaluate the variances on the state |ψϵ ⟩ through linear order in ϵ . Here, we will present the most general solutions with non-zero values for the expectation values. First, for two Hermitian ˆ B, ˆ we have (ignoring terms beyond linear order in ϵ ) operators A,   ˆ ψ ⟩ + ϵ (⟨ϕ |A| ˆ ψ ⟩ + ⟨ψ |A| ˆ ϕ ⟩) ˆ ψϵ ⟩⟨ψϵ |B| ˆ ψϵ ⟩ = ⟨ψ |A| (6.84) ⟨ψϵ |A|   ˆ ˆ ˆ × ⟨ψ |B|ψ ⟩ + ϵ (⟨ϕ |B|ψ ⟩ + ⟨ψ |B|ϕ ⟩) ˆ B⟩ ˆ ˆ ψ ⟩) , ˆ + 2 ϵ ⟨A⟩Re(⟨ ˆ ψ ⟩) + 2 ϵ ⟨B⟩Re(⟨ ˆ = ⟨A⟩⟨ ϕ |B| ϕ |A| ˆ ψ ⟩, for example. Then, in the derivative, the first term ˆ = ⟨ψ |A| where ⟨A⟩ cancels, leaving ˆ ψϵ ⟩⟨ψϵ |B| ˆ ψ ⟩⟨ψ |B| ˆ ψϵ ⟩ − ⟨ψ |A| ˆ ψ⟩ lim ⟨ψϵ |A|

ϵ →0

ˆ ψ ⟩) . ˆ ˆ ψ ⟩) + 2⟨B⟩Re(⟨ ˆ ϕ |B| ϕ |A| = 2⟨A⟩Re(⟨

(6.85)

6 Quantum Mechanical Example: The Harmonic Oscillator

60

(c)

Now, inserting this into the expression for the derivative and demanding it vanishes enforces the constraint for arbitrary |ϕ ⟩ that  ˆ Bˆ + ⟨B⟩ ˆ Aˆ |ψ ⟩ = λ |ψ ⟩ . (6.86) ⟨A⟩ We note that this is an eigenvalue equation because we had established that |ψ ⟩ and |ϕ ⟩ are orthogonal. So for the derivative to vanish, all we need is that the action of the Aˆ and Bˆ operators produces something proportional to |ψ ⟩ again. Now, we can insert the corresponding position and momentum operators in ˆ The operators are for Aˆ and B. Aˆ = (xˆ − ⟨x⟩) ˆ 2,

Bˆ = ( pˆ − ⟨ p⟩) ˆ 2.

(6.87)

Note also that ˆ = σx2 , ⟨A⟩

ˆ = σ p2 = ⟨B⟩

h¯ 2 , 4σx2

(6.88)

using the saturated uncertainty principle. The eigenvalue equation can then be expressed as  ˆ 2 + σx2 ( pˆ − ⟨ p⟩) ˆ 2 |ψ ⟩ = λ |ψ ⟩ . σ p2 (xˆ − ⟨x⟩) (6.89) The eigenvalue λ can be determined by using the saturated uncertainty principle, noting that  h¯ 2 ˆ 2 + σx2 ( pˆ − ⟨ p⟩) ˆ 2 |ψ ⟩ = 2σx2 σ p2 = = λ ⟨ψ |ψ ⟩ = λ . ⟨ψ | σ p2 (xˆ − ⟨x⟩) 2 (6.90) Then, the eigenvalue is h¯ 2 . (6.91) 2 This is a familiar equation that we might want to factorize. Let’s call

λ=

bˆ = σ p (xˆ − ⟨x⟩) ˆ + iσx ( pˆ − ⟨ p⟩) ˆ ,

bˆ † = σ p (xˆ − ⟨x⟩) ˆ − iσx ( pˆ − ⟨ p⟩) ˆ .

(6.92)

Now, note that the product bˆ † bˆ is bˆ † bˆ = [σ p (xˆ − ⟨x⟩) ˆ + iσx ( pˆ − ⟨ p⟩)] ˆ [σ p (xˆ − ⟨x⟩) ˆ − iσx ( pˆ − ⟨ p⟩)] ˆ

(6.93)

ˆ 2 + σx2 ( pˆ − ⟨ p⟩) ˆ 2 − iσx σ p [x, = σ p2 (xˆ − ⟨x⟩) ˆ p] ˆ 2 h¯ = σ p2 (xˆ − ⟨x⟩) ˆ 2 + σx2 ( pˆ − ⟨ p⟩) ˆ 2+ . 2

ˆ bˆ † are the familiar raising and Note that up to a rescaling, the operators b, lowering operators, just translated by their expectation values. Then, the eigenvalue equation can be expressed as   h¯ 2 h¯ 2 (6.94) |ψ ⟩ = |ψ ⟩ , bˆ † bˆ + 2 2

Exercises

61

ˆ ψ ⟩ = 0. That is, this eigenvalue equation is exactly the coherent or simply b| state equation, translated in space away from ⟨x⟩ ˆ = 0 and in momentum away from ⟨ p⟩ ˆ = 0. The solutions of this eigenvalue equation are therefore the coherent states with appropriate expectation values. (d) This becomes very clear when we consider ⟨x⟩ ˆ = ⟨ p⟩ ˆ = 0. In this case, the eigenvalue equation reduces to h¯ 2 |ψ ⟩ . (6.95) 2 This is exactly of the form of the eigenvalue equation for the Hamiltonian of the harmonic oscillator. We can verify this by dividing everything by 2mσx2 : !   2 σ p2 2 pˆ2 h¯ pˆ2 h¯ 2 2 x ˆ + | x ˆ + |ψ ⟩ = |ψ ⟩ . ψ ⟩ = (6.96) 2 4 2mσx 2m 8mσx 2m 4mσx2 (σ p2 xˆ2 + σx2 pˆ2 )|ψ ⟩ =

Now, fixing the coefficient of xˆ2 to be mω 2 /2 we have

σx2 =

h¯ . 2mω

(6.97)

Thus, the eigenvalue would be h¯ 2 h¯ ω = , 4mσx2 2 (e)

(6.98)

exactly the ground state energy of the harmonic oscillator. Non-zero values for the expectation of position and momentum can easily be calculated using the translation property of the variance, where we replace ⟨xˆ2 ⟩ → ⟨(xˆ − ⟨x⟩) ˆ 2 ⟩ and ⟨ pˆ2 ⟩ → ⟨( pˆ − ⟨ p⟩) ˆ 2 ⟩. The resulting differential equation is exactly that established for coherent states, i.e., those states that saturate the Heisenberg uncertainty principle.

7

Quantum Mechanical Example: The Free Particle

Exercises 7.1

(a) The lowering operator can be expressed in the momentum basis as
$$\hat a = \sqrt{\frac{m\omega}{2\hbar}}\,\hat x + \frac{i}{\sqrt{2m\hbar\omega}}\,\hat p = \frac{i}{\sqrt{2m\hbar\omega}}\,p + i\sqrt{\frac{m\hbar\omega}{2}}\,\frac{d}{dp}\,. \tag{7.1}$$
Acting on the wavefunction in momentum space, we have
$$\hat a\,g(p) = \left(\frac{i}{\sqrt{2m\hbar\omega}}\,p + i\sqrt{\frac{m\hbar\omega}{2}}\,\frac{d}{dp}\right)g(p) = \lambda\,g(p)\,. \tag{7.2}$$
This is a homogeneous differential equation that can be solved by separation, as
$$\frac{dg}{dp} = \left(-i\lambda\sqrt{\frac{2}{m\hbar\omega}} - \frac{p}{m\hbar\omega}\right)g\,, \tag{7.3}$$
or in differential form,
$$\frac{dg}{g} = \left(-i\lambda\sqrt{\frac{2}{m\hbar\omega}} - \frac{p}{m\hbar\omega}\right)dp\,. \tag{7.4}$$
This has a solution
$$g(p) = c\,\exp\left[-i\lambda\sqrt{\frac{2}{m\hbar\omega}}\,p - \frac{p^2}{2m\hbar\omega}\right]\,, \tag{7.5}$$
for some normalization constant $c$.

(b) The speed of the center-of-probability of the coherent state is
$$\frac{d}{dt}\langle\psi|\hat x|\psi\rangle = \frac{i}{\hbar}\,\langle\psi|[\hat H,\hat x]|\psi\rangle = \frac{\langle\psi|\hat p|\psi\rangle}{m}\,. \tag{7.6}$$
For the free particle, the Hamiltonian is purely a function of momentum, and so $\hat H$ and $\hat p$ commute for all time. So, all we need to note is that the momentum operator in terms of the raising and lowering operators is
$$\hat p = i\sqrt{\frac{m\hbar\omega}{2}}\,(\hat a^\dagger - \hat a)\,. \tag{7.7}$$
Then, on the coherent state,
$$\langle\psi|\hat p|\psi\rangle = i\sqrt{\frac{m\hbar\omega}{2}}\,\langle\psi|\hat a^\dagger - \hat a|\psi\rangle = -i\sqrt{\frac{m\hbar\omega}{2}}\,(\lambda - \lambda^*) = \sqrt{2m\hbar\omega}\,\operatorname{Im}(\lambda)\,. \tag{7.8}$$
Therefore, the speed of the center-of-probability on a free-particle coherent state is
$$\frac{d}{dt}\langle\psi|\hat x|\psi\rangle = \sqrt{\frac{2\hbar\omega}{m}}\,\operatorname{Im}(\lambda)\,. \tag{7.9}$$
Note that this is constant in time, so the acceleration is 0, as expected for a particle experiencing no force.

(c) In position space, an eigenstate of the raising operator would satisfy
$$\hat a^\dagger\psi(x) = \left(\sqrt{\frac{m\omega}{2\hbar}}\,\hat x - \frac{i}{\sqrt{2m\hbar\omega}}\,\hat p\right)\psi(x) = \left(\sqrt{\frac{m\omega}{2\hbar}}\,x - \sqrt{\frac{\hbar}{2m\omega}}\,\frac{d}{dx}\right)\psi(x) = \lambda\,\psi(x)\,. \tag{7.10}$$
The solution of this differential equation is
$$\psi(x) \propto \exp\left[\frac{m\omega}{2\hbar}\,x^2 - \sqrt{\frac{2m\omega}{\hbar}}\,\lambda\,x\right]\,. \tag{7.11}$$
This is not normalizable on $x\in(-\infty,\infty)$, and so no eigenstates of the raising operator live on the Hilbert space.

7.2

(a) The reflection and transmission amplitudes have divergences when their denominators vanish. This occurs when, for the reflection and transmission amplitudes respectively,
$$k^2 - mV_0 + ik\sqrt{k^2 - 2mV_0}\,\cot\left(\frac{a}{\hbar}\sqrt{k^2 - 2mV_0}\right) = 0\,,$$
$$k\sqrt{k^2 - 2mV_0}\,\cos\left(\frac{a}{\hbar}\sqrt{k^2 - 2mV_0}\right) - i(k^2 - mV_0)\sin\left(\frac{a}{\hbar}\sqrt{k^2 - 2mV_0}\right) = 0\,. \tag{7.12}$$
We can divide the expression for the transmission amplitude divergences by $-i\sin\left(\frac{a}{\hbar}\sqrt{k^2 - 2mV_0}\right)$, which produces
$$k^2 - mV_0 + ik\sqrt{k^2 - 2mV_0}\,\cot\left(\frac{a}{\hbar}\sqrt{k^2 - 2mV_0}\right) = 0\,, \tag{7.13}$$
exactly the expression for the location of the poles in the reflection amplitude.

(b) To more easily understand the locations of the poles in the amplitudes, we will replace $k = ip$, for some real-valued momentum $p$. Then, the location of the poles satisfies
$$-p^2 - mV_0 - p\sqrt{-p^2 - 2mV_0}\,\cot\left(\frac{a}{\hbar}\sqrt{-p^2 - 2mV_0}\right) = 0\,. \tag{7.14}$$


Further, bound states can only exist if $V_0 < 0$, so we can replace $V_0 = -|V_0|$, where
$$m|V_0| - p^2 - p\sqrt{2m|V_0| - p^2}\,\cot\left(\frac{a}{\hbar}\sqrt{2m|V_0| - p^2}\right) = 0\,. \tag{7.15}$$
That is,
$$\tan\left(\frac{a}{\hbar}\sqrt{2m|V_0| - p^2}\right) = \frac{p\sqrt{2m|V_0| - p^2}}{m|V_0| - p^2}\,. \tag{7.16}$$
The tangent diverges whenever its argument is an odd multiple of $\pi/2$, so we would expect in general that there are many momentum solutions $p$ to this equation. However, for a very shallow potential, $|V_0| \to 0$, it's less clear. To see what happens for a shallow potential, note that in this limit the argument of the tangent is also very small, so we can approximate
$$\lim_{|V_0|\to 0}\tan\left(\frac{a}{\hbar}\sqrt{2m|V_0| - p^2}\right) = \frac{a}{\hbar}\sqrt{2m|V_0| - p^2}\,. \tag{7.17}$$
Then, in this limit, the bound state solutions satisfy
$$\frac{a}{\hbar}\sqrt{2m|V_0| - p^2} = \frac{p\sqrt{2m|V_0| - p^2}}{m|V_0| - p^2}\,, \tag{7.18}$$
or that
$$m|V_0| - p^2 = \frac{\hbar}{a}\,p\,. \tag{7.19}$$
Now, this is just a quadratic equation so we know how to explicitly solve it, but I want to do something a bit different here. We are considering the limit in which $|V_0| \to 0$, and that can be imposed by rescaling $|V_0| \to \lambda|V_0|$, and then taking $\lambda \to 0$, but keeping $|V_0|$ fixed. We are also assuming that the mass $m$ and width $a$ are just finite parameters, so they do not scale with $\lambda$. However, the bound state momentum $p$ must scale with $\lambda$, so that the particle indeed stays in the well. A priori, we do not know this scaling, so we will just make the replacement $p \to \lambda^\beta p$, for some $\beta > 0$. With the introduction of $\lambda$, the equation becomes
$$m\lambda|V_0| - \lambda^{2\beta}p^2 = \frac{\hbar}{a}\,\lambda^\beta p\,. \tag{7.20}$$
Now, in the $\lambda \to 0$ limit, we can just keep terms that could possibly contribute at leading order. So, we ignore the $p^2$ term and have
$$m\lambda|V_0| = \frac{\hbar}{a}\,\lambda^\beta p\,, \tag{7.21}$$
which requires that $\beta = 1$. Then, the (magnitude of the) bound state momentum is
$$p = \frac{ma|V_0|}{\hbar}\,. \tag{7.22}$$
Regardless of the shallowness of the potential, this bound state always exists.
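The leading-order prediction $p = ma|V_0|/\hbar$ can be spot-checked by solving the exact condition (7.16) numerically. A minimal sketch by bisection, in units $\hbar = m = a = 1$; the depth $|V_0| = 10^{-3}$ is an arbitrary illustrative choice:

```python
import math

# Units hbar = m = a = 1; a shallow well, |V0| = 1e-3 (arbitrary illustration).
V0 = 1e-3

def f(p):
    # Exact bound-state condition (7.16), written as tan(u) - RHS = 0.
    u = math.sqrt(2 * V0 - p**2)
    return math.tan(u) - p * u / (V0 - p**2)

# Bisect between momenta bracketing the root; f changes sign near p = V0.
lo, hi = 0.5 * V0, 1.5 * V0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if f(lo) * f(mid) <= 0:
        hi = mid
    else:
        lo = mid

p_exact = 0.5 * (lo + hi)
p_approx = V0  # leading-order prediction p = m a |V0| / hbar
```

The bisected root agrees with the scaling-argument prediction to well under a percent for this shallow depth.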


(c) Now, if instead $|V_0| \to \infty$, we expect that there is a solution to the bound state equation whenever the tangent diverges. That is, whenever
$$\frac{a}{\hbar}\sqrt{2m|V_0| - p^2} = (2n-1)\,\frac{\pi}{2}\,, \tag{7.23}$$
for $n = 1, 2, 3, \dots$. This can be rearranged to produce
$$E_n \equiv |V_0| - \frac{p^2}{2m} = \frac{(2n-1)^2\pi^2\hbar^2}{8ma^2}\,. \tag{7.24}$$
These are exactly the odd energy eigenstates of the infinite square well.

7.3

(a) For a matrix $A$, construct the two Hermitian matrices
$$H_1 = \frac{A + A^\dagger}{2}\,, \qquad H_2 = \frac{A - A^\dagger}{2i}\,. \tag{7.25}$$

Then, note that we can reconstruct $A$ as
$$A = H_1 + iH_2\,, \tag{7.26}$$
exactly what we were asked to prove.

(b) Recall that the interaction matrix satisfies the optical theorem
$$\hat{\mathcal M}^\dagger\hat{\mathcal M} = 2\,\operatorname{Im}(\hat{\mathcal M})\,. \tag{7.27}$$
In terms of the Hermitian matrices $X$ and $Y$, this relationship is
$$\hat{\mathcal M}^\dagger\hat{\mathcal M} = X^2 + Y^2 = 2\,\operatorname{Im}(\hat{\mathcal M}) = 2Y\,. \tag{7.28}$$
Then, the eigenvalues satisfy
$$x_n^2 + y_n^2 - 2y_n = 0\,, \tag{7.29}$$
or that
$$x_n^2 + (y_n - 1)^2 = 1\,. \tag{7.30}$$
This is the equation for a circle centered at $(x_n, y_n) = (0, 1)$ with radius 1.

(c) The interaction matrix for the narrow potential is
$$\hat{\mathcal M} = -\frac{\frac{maV_0}{\hbar}}{p + i\frac{maV_0}{\hbar}}\begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}\,. \tag{7.31}$$

One eigenvalue of this matrix is simply 0, while the non-trivial eigenvalue $\lambda$ is
$$\lambda = -\frac{\frac{2maV_0}{\hbar}}{p + i\frac{maV_0}{\hbar}} = -\frac{\frac{2maV_0}{\hbar}\,p}{p^2 + \frac{m^2a^2V_0^2}{\hbar^2}} + i\,\frac{\frac{2m^2a^2V_0^2}{\hbar^2}}{p^2 + \frac{m^2a^2V_0^2}{\hbar^2}}\,. \tag{7.32}$$
Now, let's verify that these two eigenvalues lie on the Argand diagram. Indeed it is true that a zero eigenvalue satisfies $x_n^2 + (y_n-1)^2 = 1$, but only as a single point on the circle, where $(x_n, y_n) = (0, 0)$. For the non-trivial eigenvalue, we can write it in a more compact form where
$$\lambda = -\frac{2p_0\,p}{p^2 + p_0^2} + i\,\frac{2p_0^2}{p^2 + p_0^2}\,, \tag{7.33}$$
where
$$p_0 = \frac{maV_0}{\hbar}\,. \tag{7.34}$$
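That this eigenvalue lies on the unitarity circle $x^2 + (y-1)^2 = 1$ for every real momentum can be spot-checked numerically. A minimal sketch; $p_0 = 1$ is an arbitrary illustrative value:

```python
# Non-trivial eigenvalue of the narrow-well interaction matrix, eq. (7.32),
# written with p0 = m a V0 / hbar; p0 = 1 is an arbitrary illustrative value.
p0 = 1.0

def eigenvalue(p):
    return -2 * p0 / (p + 1j * p0)

# Every sampled real momentum lands on the Argand circle x^2 + (y - 1)^2 = 1.
points = [eigenvalue(-5 + 0.1 * k) for k in range(101)]
max_dev = max(abs(z.real**2 + (z.imag - 1) ** 2 - 1) for z in points)
```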

This eigenvalue can indeed take any value on the Argand circle for $p \in (-\infty, \infty)$.

7.4

(a) $L^2$-normalization for this wave packet in momentum space is
$$\int_{-\infty}^{\infty} dp\,|g(p)|^2 = \int_{-\infty}^{\infty} dp\,\frac{1}{\sqrt{\pi\sigma_p^2}}\,e^{-\frac{(p-p_0)^2}{\sigma_p^2}}\,. \tag{7.35}$$
To massage this integral, we can first translate $p$ by $p_0$, which has no effect on the bounds of integration:
$$\int_{-\infty}^{\infty} dp\,\frac{1}{\sqrt{\pi\sigma_p^2}}\,e^{-\frac{(p-p_0)^2}{\sigma_p^2}} = \int_{-\infty}^{\infty} dp\,\frac{1}{\sqrt{\pi\sigma_p^2}}\,e^{-\frac{p^2}{\sigma_p^2}} = 2\int_0^{\infty} dp\,\frac{1}{\sqrt{\pi\sigma_p^2}}\,e^{-\frac{p^2}{\sigma_p^2}}\,. \tag{7.36}$$
We then make the change of variables
$$x = \frac{p^2}{\sigma_p^2}\,, \tag{7.37}$$
and then the differential element is
$$dx = \frac{2p}{\sigma_p^2}\,dp = \frac{2\sqrt{x}}{\sigma_p}\,dp\,. \tag{7.38}$$
The integral is then
$$2\int_0^{\infty} dp\,\frac{1}{\sqrt{\pi\sigma_p^2}}\,e^{-\frac{p^2}{\sigma_p^2}} = \frac{1}{\sqrt{\pi}}\int_0^{\infty} dx\,x^{-1/2}\,e^{-x} = 1\,, \tag{7.39}$$
by the integrals presented in Appendix A of the textbook.
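The normalization (7.39) is also easy to confirm by direct numerical quadrature of $|g(p)|^2$, here with a simple midpoint rule; $\sigma_p = 0.7$ and $p_0 = 2.0$ are arbitrary illustrative values:

```python
import math

# Midpoint-rule check that the Gaussian wave packet |g(p)|^2 integrates to 1.
sigma_p, p0 = 0.7, 2.0  # arbitrary illustrative parameters

def g_squared(p):
    return math.exp(-((p - p0) ** 2) / sigma_p**2) / math.sqrt(math.pi * sigma_p**2)

n = 200000
lo, hi = p0 - 12 * sigma_p, p0 + 12 * sigma_p  # tails beyond 12 sigma are negligible
dp = (hi - lo) / n
total = sum(g_squared(lo + (k + 0.5) * dp) for k in range(n)) * dp
```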

(b) The position space wavefunction is
$$\psi(x) = \int_{-\infty}^{\infty}\frac{dp}{\sqrt{2\pi\hbar}}\,g(p)\,e^{i\frac{px}{\hbar}} = \frac{1}{\sqrt{2\pi\hbar}}\,\frac{1}{(\pi\sigma_p^2)^{1/4}}\int_{-\infty}^{\infty} dp\,e^{i\frac{p(x+x_0)}{\hbar}}\,e^{-\frac{(p-p_0)^2}{2\sigma_p^2}} \tag{7.40}$$
$$= \frac{1}{\sqrt{2\pi\hbar}}\,\frac{e^{-\frac{p_0^2}{2\sigma_p^2}}}{(\pi\sigma_p^2)^{1/4}}\int_{-\infty}^{\infty} dp\,e^{-\frac{1}{2\sigma_p^2}\left(p^2 - 2\left(p_0 + i\frac{(x+x_0)\sigma_p^2}{\hbar}\right)p\right)}\,.$$
We can then complete the square in the exponent, where
$$p^2 - 2\left(p_0 + i\frac{(x+x_0)\sigma_p^2}{\hbar}\right)p = \left(p - \left(p_0 + i\frac{(x+x_0)\sigma_p^2}{\hbar}\right)\right)^2 - \left(p_0 + i\frac{(x+x_0)\sigma_p^2}{\hbar}\right)^2 \tag{7.41}$$
$$= \left(p - \left(p_0 + i\frac{(x+x_0)\sigma_p^2}{\hbar}\right)\right)^2 - p_0^2 - 2ip_0\,\frac{(x+x_0)\sigma_p^2}{\hbar} + \frac{(x+x_0)^2\sigma_p^4}{\hbar^2}\,.$$
As earlier, we can translate the momentum $p$ freely, which does not affect the boundaries of integration. With this translation, we have
$$\psi(x) = \frac{1}{\sqrt{2\pi\hbar}}\,\frac{e^{i\frac{(x+x_0)p_0}{\hbar} - \frac{(x+x_0)^2\sigma_p^2}{2\hbar^2}}}{(\pi\sigma_p^2)^{1/4}}\int_{-\infty}^{\infty} dp\,e^{-\frac{p^2}{2\sigma_p^2}} = \sqrt{\frac{\sigma_p}{\sqrt{\pi}\,\hbar}}\,e^{i\frac{(x+x_0)p_0}{\hbar} - \frac{(x+x_0)^2\sigma_p^2}{2\hbar^2}}\,. \tag{7.42}$$

This is still of the form of a Gaussian function, centered at the position $x = -x_0$.

(c) The wave packet has a momentum of $p_0$, and so the velocity of the center of probability, where there is no potential, is
$$\frac{d\langle\hat x\rangle}{dt} = \frac{p_0}{m}\,, \tag{7.43}$$
simply from Ehrenfest's theorem.

(d) From Example 7.2, the transmitted and reflected wavefunctions in momentum space are

$$g_R(p) = \frac{-i\frac{maV_0}{\hbar}}{p + i\frac{maV_0}{\hbar}}\,\frac{e^{i\frac{px_0}{\hbar}}}{(\pi\sigma_p^2)^{1/4}}\,e^{-\frac{(p+p_0)^2}{2\sigma_p^2}}\,, \tag{7.44}$$
$$g_T(p) = \frac{p}{p + i\frac{maV_0}{\hbar}}\,\frac{e^{i\frac{px_0}{\hbar}}}{(\pi\sigma_p^2)^{1/4}}\,e^{-\frac{(p-p_0)^2}{2\sigma_p^2}}\,. \tag{7.45}$$
Let's focus on the reflected wavefunction first. Note that
$$g_R(p) = e^{\frac{maV_0x_0}{\hbar^2}}\,\frac{-i\frac{maV_0}{\hbar}}{p + i\frac{maV_0}{\hbar}}\,\frac{e^{i\left(p + i\frac{maV_0}{\hbar}\right)\frac{x_0}{\hbar}}}{(\pi\sigma_p^2)^{1/4}}\,e^{-\frac{(p+p_0)^2}{2\sigma_p^2}} = \frac{maV_0}{\hbar^2}\,e^{\frac{maV_0x_0}{\hbar^2}}\int dx_0'\,\frac{e^{i\left(p + i\frac{maV_0}{\hbar}\right)\frac{x_0'}{\hbar}}}{(\pi\sigma_p^2)^{1/4}}\,e^{-\frac{(p+p_0)^2}{2\sigma_p^2}}\,. \tag{7.46}$$
Then, using the result for the Fourier transform of the initial wave packet, we then have
$$\psi_R(x) = \frac{maV_0}{\hbar^2}\,e^{\frac{maV_0x_0}{\hbar^2}}\sqrt{\frac{\sigma_p}{\sqrt{\pi}\,\hbar}}\int dx_0'\,e^{-i\frac{(x+x_0')p_0}{\hbar} - \frac{maV_0x_0'}{\hbar^2} - \frac{(x+x_0')^2\sigma_p^2}{2\hbar^2}}\,. \tag{7.47}$$
The integral that remains cannot be expressed in terms of elementary functions, but this is sufficient for our needs here and in the next part. A similar technique can be applied to the transmitted wavefunction, but we suppress the details here.

(e) In the limit that $\sigma_p \to 0$, the reflected wavefunction becomes
$$\psi_R(x) \propto \frac{maV_0}{\hbar^2}\,e^{\frac{maV_0x_0}{\hbar^2}}\int dx_0'\,e^{-i\frac{(x+x_0')p_0}{\hbar} - \frac{maV_0x_0'}{\hbar^2}} \to \frac{-i\frac{maV_0}{\hbar}}{-p_0 + i\frac{maV_0}{\hbar}}\,e^{-i\frac{p_0x_0}{\hbar}}\,, \tag{7.48}$$
where we have only retained the $p_0$ dependence in the expressions. This is exactly of the same form that we derived from translating the wavefunctions directly in position space, for reflected momentum $p = -p_0$.

7.5

(a) To determine the differential equation that the translation operator satisfies, note that the Dyson-like series for it can be re-expressed in a recursive form:

$$\hat U(x, x+\Delta x) = 1 + \frac{i}{\hbar}\int_x^{x+\Delta x} dx'\,\hat p(x')\,\hat U(x, x') = 1 + \frac{i}{\hbar}\int_0^{\Delta x} dy\,\hat p(x+y)\,\hat U(x, x+y)\,. \tag{7.49}$$
Now, we can differentiate both sides with respect to $\Delta x$ and find
$$\frac{d}{d\Delta x}\hat U(x, x+\Delta x) = \frac{d}{d\Delta x}\,\frac{i}{\hbar}\int_0^{\Delta x} dy\,\hat p(x+y)\,\hat U(x, x+y) = \frac{i}{\hbar}\,\hat p(x+\Delta x)\,\hat U(x, x+\Delta x)\,. \tag{7.50}$$

(b) The translation operator acts to the right, and so to translate from $x_0 < 0$ to $x_1 > a$ on the step potential, we can factor it as
$$\hat U(x_0, x_1) = \hat U(a, x_1)\,\hat U(0, a)\,\hat U(x_0, 0) = e^{i\frac{(x_1-a)\hat p}{\hbar}}\,\hat U(0, a)\,e^{-i\frac{x_0\hat p}{\hbar}}\,. \tag{7.51}$$

Here, we have written the explicit form for the translation operators where the potential is 0. Where the potential is non-zero, for rightward translation ($p > 0$), we have
$$\hat U(0, a) = 1 + \frac{i}{\hbar}\int_0^a dx'\,\sqrt{2m(E-V_0)} + \left(\frac{i}{\hbar}\right)^2\int_0^a dx'\,\sqrt{2m(E-V_0)}\int_0^{x'} dx''\,\sqrt{2m(E-V_0)} + \cdots \tag{7.52}$$
$$= 1 + i\,\frac{a\sqrt{2m(E-V_0)}}{\hbar} + \frac{1}{2}\left(i\,\frac{a\sqrt{2m(E-V_0)}}{\hbar}\right)^2 + \cdots = e^{i\frac{a\sqrt{2m(E-V_0)}}{\hbar}}\,.$$
Then, with $p = \sqrt{2mE}$ where the potential is zero, the translation operator across the potential is
$$\hat U(x_0, x_1) = e^{i\frac{(x_1-a)\sqrt{2mE}}{\hbar}}\,e^{i\frac{a\sqrt{2m(E-V_0)}}{\hbar}}\,e^{-i\frac{x_0\sqrt{2mE}}{\hbar}}\,. \tag{7.53}$$
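The resummation in (7.52), where the nested position-ordered integrals build up the exponential phase, can be illustrated numerically: for a constant $p(x)$, the $n$th term is $(i\varphi)^n/n!$ and the partial sums converge to $e^{i\varphi}$. A minimal sketch; the value $\varphi = a\sqrt{2m(E-V_0)}/\hbar = 1.3$ is arbitrary:

```python
import cmath

# Partial sums of the Dyson-like series 1 + i*phi + (i*phi)^2/2! + ...
# For constant momentum, the nested integrals give exactly these terms.
phi = 1.3  # arbitrary illustrative value of a*sqrt(2m(E - V0))/hbar
term = 1.0 + 0.0j
partials = []
partial = 0.0 + 0.0j
for n in range(30):
    partial += term
    partials.append(partial)
    term *= 1j * phi / (n + 1)

exact = cmath.exp(1j * phi)
err = abs(partials[-1] - exact)
```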

(c) Taking this δ-function potential limit, we can simplify the translation operator in a few ways. First, for fixed and finite energy $E$, $aE \to 0$, and so
$$\lim_{a\to 0,\,V_0\to\infty}\hat U(x_0, x_1) = \lim_{a\to 0,\,V_0\to\infty} e^{i\frac{(x_1-x_0)\sqrt{2mE}}{\hbar}}\,e^{-\frac{\sqrt{a}\sqrt{2ma(V_0-E)}}{\hbar}} = e^{i\frac{(x_1-x_0)\sqrt{2mE}}{\hbar}}\,, \tag{7.54}$$
because $a^2V_0 \to 0$ when the product $aV_0$ is held fixed. This would seem to suggest that the δ-function potential has no effect on the translation operator of a free particle. However, we know this cannot be true from our analysis of the S-matrix for this system. So, let's try another route. Let's go back to the differential equation for the translation operator and actually take the second derivative:
$$\frac{d^2}{d\Delta x^2}\hat U(x, x+\Delta x) = -\frac{\hat p(x+\Delta x)^2}{\hbar^2}\,\hat U(x, x+\Delta x)\,. \tag{7.55}$$
We will consider $x = 0$ and $\Delta x = a$, for very small displacement $a$, so that we focus right on the region where the potential is. Right around the potential, the squared momentum for a fixed energy $E$ state is
$$\hat p^2(x+\Delta x) = 2m(E - V(x)) = 2m\left(E - aV_0\,\delta(x)\right) \to -2maV_0\,\delta(x)\,, \tag{7.56}$$
because a finite energy $E$ is smaller than the infinite potential where $x = 0$. Then, the differential equation for the translation operator is
$$\frac{d^2}{d\Delta x^2}\hat U(x, x+\Delta x) = \frac{2maV_0}{\hbar^2}\,\delta(x)\,\hat U(x, x+\Delta x)\,. \tag{7.57}$$
That is, we are looking for a function that produces a δ-function after two derivatives. Note that the Heaviside Θ-function, or the step function, has a derivative that is a δ-function:
$$\frac{d}{dx}\Theta(x) = \delta(x)\,. \tag{7.58}$$
The Heaviside Θ-function $\Theta(x)$ is 0 for $x < 0$ and 1 for $x > 0$, so it is finite everywhere, but discontinuous at $x = 0$. So, we just need to integrate once more. Note that the function (called a rectifier or ReLU)
$$\Upsilon(x) = \begin{cases} 0\,, & x < 0 \\ x\,, & x > 0 \end{cases} \tag{7.59}$$
has a second derivative that is a δ-function. Therefore, right around $x = 0$, the translation operator, when acting on a plane wave of fixed energy and momentum $p > 0$, takes the form
$$\hat U(0, x) = c_0 + c_1 x + \frac{2maV_0}{\hbar^2}\,\Upsilon(x)\,, \tag{7.60}$$
where $c_0, c_1$ are integration constants. They can be fit to match with the translation operator that acts on the free particle, but we won't do that here (and it was effectively done in Section 7.3.2 of the textbook).

7.6

(a) By definition, the Hausdorff length $L$ of any curve must be independent of the resolution $\Delta x$. So, we can take a derivative of both sides of the Hausdorff dimension definition and find
$$\frac{d}{d\Delta x}L = 0 = \Delta x^{D-1}\,\frac{dl}{d\Delta x} + l\,(D-1)\,\Delta x^{D-2}\,. \tag{7.61}$$

A smooth curve has a finite, fixed length as the resolution scale $\Delta x \to 0$, just from the assumption that the derivative of a smooth curve exists. Therefore, the length $l$ is independent of the resolution scale,
$$\frac{dl}{d\Delta x} = 0\,. \tag{7.62}$$
Then, the only way that the Hausdorff length can also be independent of resolution scale $\Delta x$ is if $D = 1$.

(b) Note that at the $n$th step of constructing the Koch snowflake, the resolution we need to see all of those kinks is
$$\Delta x = \left(\frac{1}{3}\right)^n\,. \tag{7.63}$$
The length $l$ at this level of resolution is then
$$l = \left(\frac{4}{3}\right)^n\,, \tag{7.64}$$
because at each level, the length increases by a factor of 4/3. Then, the Hausdorff length can be expressed as
$$L = \lim_{\Delta x\to 0} l\,(\Delta x)^{D-1} = \lim_{n\to\infty}\left(\frac{4}{3}\right)^n\left(\frac{1}{3}\right)^{n(D-1)}\,. \tag{7.65}$$
Now, the Hausdorff length is supposed to be independent of the resolution $\Delta x$, or the recursion level $n$, and so we can take a derivative:
$$\frac{d}{dn}L = 0 = \frac{d}{dn}\left[\left(\frac{4}{3}\right)^n\left(\frac{1}{3}\right)^{n(D-1)}\right] = \left(\frac{4}{3}\right)^n\left(\frac{1}{3}\right)^{n(D-1)}\left[\log\frac{4}{3} + (D-1)\log\frac{1}{3}\right]\,. \tag{7.66}$$
We can then immediately solve for the Hausdorff dimension $D$ and find
$$D = \frac{\log 4}{\log 3} \simeq 1.26186\,. \tag{7.67}$$
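Numerically, with $D = \log 4/\log 3$ the product $l\,(\Delta x)^{D-1} = (4/3)^n(1/3)^{n(D-1)}$ is indeed independent of the recursion level $n$, while any other trial dimension makes it run; a minimal sketch:

```python
import math

D = math.log(4) / math.log(3)  # Hausdorff dimension of the Koch curve

def hausdorff_length(n, dim):
    # l * (dx)^(dim - 1) at recursion level n of the Koch construction
    return (4 / 3) ** n * (1 / 3) ** (n * (dim - 1))

levels = [hausdorff_length(n, D) for n in range(1, 8)]       # constant at 1
drifting = [hausdorff_length(n, 1.0) for n in range(1, 8)]   # trial dim 1 grows
```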


(c) With this insight, let's determine the Hausdorff dimension of the trajectory of a quantum mechanical particle with 0 expectation value of momentum, $\langle\hat p\rangle = 0$. With this expectation value, the saturated Heisenberg uncertainty principle is
$$\langle\hat p^2\rangle\,\sigma_x^2 = \frac{\hbar^2}{4}\,. \tag{7.68}$$
Note that the variance of position $\sigma_x^2$ is a measure of the squared step size $\Delta x^2$ away from the mean position, so we set $\sigma_x^2 = \Delta x^2$. Further, the characteristic squared momentum depends on the spatial and temporal step sizes:
$$\langle\hat p^2\rangle = m^2\,\frac{\Delta x^2}{\Delta t^2}\,. \tag{7.69}$$
Then, the Heisenberg uncertainty principle becomes
$$m\,\frac{\Delta x}{\Delta t}\,\Delta x = \frac{\hbar}{2}\,. \tag{7.70}$$
Next, let's multiply and divide by the total number of steps $N$. We then have
$$m\,\frac{N\Delta x}{N\Delta t}\,\Delta x = m\,\frac{l}{T}\,\Delta x = \frac{\hbar}{2}\,, \tag{7.71}$$
using the relationship $T = N\Delta t$ and that the total, resolution-dependent length is $l = N\Delta x$. Thus, this can be rearranged into
$$\frac{T\hbar}{2m} \equiv L = l\,\Delta x\,. \tag{7.72}$$
Everything on the left is independent of resolution $\Delta x$, and so, comparing to the definition of the Hausdorff length, we find that the Hausdorff dimension of the trajectory of a quantum mechanical particle is $D = 2$. That is, the trajectory is an area-filling curve.

(d) With a non-zero expectation value of momentum, the variance is now
$$\sigma_p^2 = \langle\hat p^2\rangle - \langle\hat p\rangle^2 = \langle\hat p^2\rangle - p_0^2\,. \tag{7.73}$$
Just as earlier, the expectation value of the squared momentum is determined by the magnitude of the fluctuations:
$$\langle\hat p^2\rangle = m^2\,\frac{\Delta x^2}{\Delta t^2} = \frac{m^2l^2}{T^2}\,. \tag{7.74}$$
With the saturated Heisenberg uncertainty principle, note that the variance of momentum is
$$\sigma_p^2 = \frac{\hbar^2}{4\sigma_x^2} = \frac{\hbar^2}{4\Delta x^2}\,. \tag{7.75}$$
Then, we have that
$$\sigma_p^2 = \frac{\hbar^2}{4\Delta x^2} = \langle\hat p^2\rangle - p_0^2 = \frac{m^2l^2}{T^2} - p_0^2\,, \tag{7.76}$$


or solving for the total length $l$,
$$l = \frac{T}{m}\sqrt{p_0^2 + \frac{\hbar^2}{4\Delta x^2}}\,. \tag{7.77}$$
In the limit where $p_0 \ll \hbar/\Delta x$, this reduces to what we had found earlier, where
$$\lim_{p_0\ll\hbar/\Delta x} l = \frac{T\hbar}{2m}\,\frac{1}{\Delta x}\,, \tag{7.78}$$
for which the trajectory has dimension 2. However, in the opposite limit, where $p_0 \gg \hbar/\Delta x$, we have
$$\lim_{p_0\gg\hbar/\Delta x} l = \frac{Tp_0}{m}\,, \tag{7.79}$$
which is independent of resolution $\Delta x$. Thus, in the high-momentum/small-$\hbar$ limit, the Hausdorff dimension of the trajectory is 1, corresponding to a smooth curve, exactly as we would expect in the classical mechanics of a point particle.

(e) The formulation of this problem from free-particle wavefunctions is worked out in detail in: L. F. Abbott and M. B. Wise, "The dimension of a quantum mechanical path," Am. J. Phys. 49, 37–39 (1981).

7.7

(a) The S-matrix represents the probability amplitude for the scattering of a momentum eigenstate off of some potential. As probability amplitudes, when absolute squared and summed over all momenta, you must get a total probability of 1. A pole in the S-matrix at a real value of momentum would mean that the probability amplitude at that value of momentum diverges, or that there is infinite probability for that momentum eigenstate to scatter. This violates the axioms of probability, and so cannot exist. Note that this says nothing about poles at complex values of momentum, because complex values of momentum are not observable. Unitarity of the S-matrix is only a statement about the conservation of probability of allowed, observable values of momentum.

(b) Again, the S-matrix represents the probability amplitude for momentum eigenstates to scatter. Thus, when integrated against an arbitrary $L^2$-normalizable function $g(p)$, the result must be finite. Namely, the total probability for a wave packet $g(p)$ to scatter off of a localized potential and transmit, say, is
$$P_T = \int_{-\infty}^{\infty} dp\,|A_T|^2\,|g(p)|^2 < \infty\,, \tag{7.80}$$
where $A_T$ is the transmission amplitude entry in the S-matrix. Therefore, the S-matrix elements cannot affect the integrability of a wave packet, and therefore cannot scale like any positive power of momentum $p$ as $|p| \to \infty$. Thus, the worst that the S-matrix elements can scale at large momentum is as

a constant: $A_T \to$ constant as $|p| \to \infty$, for example. This result is consistent with what we observed with the δ potential.

7.8

(a) We can immediately write down most entries of this S-matrix because the infinite potential barrier prohibits transmission and any waves from the right. So, the S-"matrix" is just a single value, and as it must be unitary, it can be expressed as a point on the unit circle,
$$S = A_R = e^{i\phi}\,, \tag{7.81}$$
for some phase $\phi$ to be determined. The transmission amplitude is 0.

(b) We can determine this phase by our usual technique of matching across boundaries. We can express the momentum state in the region to the left of the potential in terms of two momentum eigenstates, with initial $p$ and reflected $p$:
$$\psi_I(x) = e^{i\frac{px}{\hbar}} + A_R\,e^{-i\frac{px}{\hbar}} = e^{i\frac{px}{\hbar}} + e^{-i\frac{px}{\hbar}+i\phi} = e^{i\frac{\phi}{2}}\left(e^{i\frac{px}{\hbar}-i\frac{\phi}{2}} + e^{-i\frac{px}{\hbar}+i\frac{\phi}{2}}\right)\,. \tag{7.82}$$
$A_R$ is the reflection amplitude, and we used our result from part (a) to write it as an effective phase. $e^{i\phi/2}$ is just some overall, $x$-independent phase, so it is irrelevant for fixed momentum scattering, and we can ignore the overall phase factor. That is, we can consider
$$\psi_I(x) = 2\cos\left(\frac{px}{\hbar} - \frac{\phi}{2}\right)\,. \tag{7.83}$$
In the region of finite potential, we can express the momentum state as

$$\psi_{II}(x) = \alpha\,e^{i\frac{x\sqrt{2m(E-V_0)}}{\hbar}} + \beta\,e^{-i\frac{x\sqrt{2m(E-V_0)}}{\hbar}}\,, \tag{7.84}$$
where the energy $E$ is set by the momentum in the region with no potential,
$$E = \frac{p^2}{2m}\,. \tag{7.85}$$
Because of the infinite potential barrier at $x = 0$, we must enforce that the momentum state vanish there (as usual, by Hermiticity of momentum). This then enforces that $\beta = -\alpha$, and so
$$\psi_{II}(x) = \alpha\left(e^{i\frac{x\sqrt{2m(E-V_0)}}{\hbar}} - e^{-i\frac{x\sqrt{2m(E-V_0)}}{\hbar}}\right) = 2i\alpha\sin\left(\frac{x\sqrt{2m(E-V_0)}}{\hbar}\right)\,. \tag{7.86}$$
Now, we can match the momentum states at $x = -a$:
$$\psi_I(-a) = \psi_{II}(-a) \quad\to\quad \cos\left(\frac{ap}{\hbar} + \frac{\phi}{2}\right) = -i\alpha\sin\left(\frac{a\sqrt{p^2 - 2mV_0}}{\hbar}\right)\,. \tag{7.87}$$


Next, their derivatives must also match:
$$\left.\frac{d\psi_I(x)}{dx}\right|_{x=-a} = \left.\frac{d\psi_{II}(x)}{dx}\right|_{x=-a} \quad\to\quad \frac{p}{\hbar}\sin\left(\frac{ap}{\hbar} + \frac{\phi}{2}\right) = i\alpha\,\frac{\sqrt{p^2 - 2mV_0}}{\hbar}\cos\left(\frac{a\sqrt{p^2 - 2mV_0}}{\hbar}\right)\,. \tag{7.88}$$
From matching the wave values at $x = -a$, we have that
$$\alpha = i\,\frac{\cos\left(\frac{ap}{\hbar} + \frac{\phi}{2}\right)}{\sin\left(\frac{a\sqrt{p^2 - 2mV_0}}{\hbar}\right)}\,. \tag{7.89}$$
Plugging this into the equation matching derivatives, we have
$$\tan\left(\frac{ap}{\hbar} + \frac{\phi}{2}\right) = -\frac{\sqrt{p^2 - 2mV_0}}{p}\cot\left(\frac{a\sqrt{p^2 - 2mV_0}}{\hbar}\right)\,. \tag{7.90}$$
This can easily be solved for the phase $\phi$, where
$$\phi = -\frac{2ap}{\hbar} - 2\tan^{-1}\left(\frac{\sqrt{p^2 - 2mV_0}}{p}\cot\left(\frac{a\sqrt{p^2 - 2mV_0}}{\hbar}\right)\right)\,. \tag{7.91}$$
There are some interesting limits to study. First, if $p \to 0$, then the argument of the arctangent diverges, corresponding to a value of $\pi/2$. Thus, the phase $\phi$ in this limit is
$$\lim_{p\to 0}\phi = -\pi\,, \tag{7.92}$$
exactly as expected for a wave on a rope, say, fixed to a wall.
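Both unitarity $|A_R| = 1$ and the limit (7.92) can be checked by solving the matching conditions numerically. A minimal sketch in units $\hbar = m = 1$ with $a = V_0 = 1$ (arbitrary illustrative values); the closed form for $A_R$ below is re-derived here from the matching at $x = -a$, not quoted from the text, and `cmath` handles both $E > V_0$ and $E < V_0$:

```python
import cmath

# Reflection amplitude off a step of height V0 backed by an infinite wall at x = 0.
# Matching psi_I = e^{ik1 x} + A_R e^{-ik1 x} onto psi_II = alpha sin(k2 x) at x = -a
# gives A_R = -e^{-2i k1 a} (r + i k1)/(r - i k1), with r = k2 cot(k2 a).
hbar = m = a = V0 = 1.0

def A_R(p):
    k1 = p / hbar
    E = p**2 / (2 * m)
    k2 = cmath.sqrt(2 * m * (E - V0)) / hbar  # imaginary below the step height
    r = k2 * cmath.cos(k2 * a) / cmath.sin(k2 * a)
    return -cmath.exp(-2j * k1 * a) * (r + 1j * k1) / (r - 1j * k1)

mods = [abs(A_R(p)) for p in (0.3, 1.0, 2.5, 7.0)]  # all should equal 1
low_p = A_R(1e-6)  # p -> 0: phase -> -pi, i.e. A_R -> -1
```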

8

Rotations in Three Dimensions

Exercises

8.1

(a) To show that the determinant of a rotation matrix is 1, we make the following observations. The determinant is basis-independent, and is just the product of the eigenvalues of the matrix. For the 2 × 2 matrix we are studying here, there are therefore two eigenvalues, $\lambda_1, \lambda_2$, for which
$$\det U = \lambda_1\lambda_2\,. \tag{8.1}$$
Now, we have expressed the unitary matrix $U$ in the form of an exponentiated Hermitian matrix, so we can extract the exponent by taking the logarithm:
$$\log U = \frac{i}{\hbar}\left(\theta_x\hat S_x + \theta_y\hat S_y + \theta_z\hat S_z\right)\,. \tag{8.2}$$
Correspondingly, taking the logarithm of the determinant returns
$$\log\det U = \log\lambda_1 + \log\lambda_2\,, \tag{8.3}$$
which is just the sum of the eigenvalues of the logarithm of $U$. The sum of the eigenvalues of a matrix is its trace, and therefore
$$\log\det U = \operatorname{tr}\log U = \operatorname{tr}\left[\frac{i}{\hbar}\left(\theta_x\hat S_x + \theta_y\hat S_y + \theta_z\hat S_z\right)\right]\,. \tag{8.4}$$
The spin operators $\hat S_i$ are proportional to the Pauli matrices, and all Pauli matrices have 0 trace:
$$\operatorname{tr}\sigma_i = 0\,, \tag{8.5}$$
for $i = 1, 2, 3$. Therefore, we also have that
$$\log\det U = 0\,, \tag{8.6}$$
or that $\det U = 1$, as we are asked to prove.

(b) We can show that these vectors are unit normalized by simply taking the inner product with their Hermitian conjugate. For example,
$$\vec v_1^{\,\dagger}\vec v_1 = \begin{pmatrix} e^{-i\xi_1}\cos\theta & e^{-i\xi_2}\sin\theta \end{pmatrix}\begin{pmatrix} e^{i\xi_1}\cos\theta \\ e^{i\xi_2}\sin\theta \end{pmatrix} = \cos^2\theta + \sin^2\theta = 1\,. \tag{8.7}$$


A similar calculation follows for $\vec v_2$. Next, we are asked to fix the phases $\xi_1, \xi_2, \xi_3, \xi_4$ so that $\vec v_1$ and $\vec v_2$ are orthogonal. That is,
$$0 = \vec v_1^{\,\dagger}\vec v_2 = \begin{pmatrix} e^{-i\xi_1}\cos\theta & e^{-i\xi_2}\sin\theta \end{pmatrix}\begin{pmatrix} -e^{i\xi_3}\sin\theta \\ e^{i\xi_4}\cos\theta \end{pmatrix} = \left(-e^{-i(\xi_1-\xi_3)} + e^{-i(\xi_2-\xi_4)}\right)\sin\theta\cos\theta\,. \tag{8.8}$$
To ensure that this vanishes, we then require that
$$\xi_1 - \xi_3 = \xi_2 - \xi_4 \mod 2\pi\,. \tag{8.9}$$
Therefore, with this assignment, the now-orthogonal vectors can be expressed as
$$\vec v_1 = \begin{pmatrix} e^{i\xi_1}\cos\theta \\ e^{i\xi_2}\sin\theta \end{pmatrix}\,, \qquad \vec v_2 = \begin{pmatrix} -e^{i\xi_3}\sin\theta \\ e^{i(\xi_2+\xi_3-\xi_1)}\cos\theta \end{pmatrix}\,. \tag{8.10}$$
Now, we can establish the unitarity of the matrix formed from the vectors $\vec v_1, \vec v_2$ by matrix multiplication. We multiply
$$U^\dagger U = \begin{pmatrix} \vec v_1^{\,\dagger} \\ \vec v_2^{\,\dagger} \end{pmatrix}\begin{pmatrix} \vec v_1 & \vec v_2 \end{pmatrix} = \begin{pmatrix} \vec v_1^{\,\dagger}\vec v_1 & \vec v_1^{\,\dagger}\vec v_2 \\ \vec v_2^{\,\dagger}\vec v_1 & \vec v_2^{\,\dagger}\vec v_2 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}\,, \tag{8.11}$$
so this is indeed a unitary matrix.

(c) In terms of the elements of the vectors we constructed above, the determinant of the matrix $U$ is
$$\det U = e^{i\xi_1}\cos\theta\,e^{i(\xi_2+\xi_3-\xi_1)}\cos\theta + e^{i\xi_2}\sin\theta\,e^{i\xi_3}\sin\theta = e^{i(\xi_2+\xi_3)}\,. \tag{8.12}$$
Therefore, we must enforce that
$$\xi_3 = -\xi_2 \mod 2\pi\,. \tag{8.13}$$
The vectors are then
$$\vec v_1 = \begin{pmatrix} e^{i\xi_1}\cos\theta \\ e^{i\xi_2}\sin\theta \end{pmatrix}\,, \qquad \vec v_2 = \begin{pmatrix} -e^{-i\xi_2}\sin\theta \\ e^{-i\xi_1}\cos\theta \end{pmatrix}\,. \tag{8.14}$$
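The final form (8.14) can be checked numerically: for any $\xi_1, \xi_2, \theta$, the matrix $U = (\vec v_1\ \vec v_2)$ satisfies $U^\dagger U = \mathbb{I}$ and $\det U = 1$. A minimal sketch with hand-rolled 2 × 2 complex arithmetic; the angle values are arbitrary:

```python
import cmath

xi1, xi2, theta = 0.7, -1.9, 0.4  # arbitrary illustrative angles
c, s = cmath.cos(theta), cmath.sin(theta)

# Columns v1, v2 of eq. (8.14)
U = [[cmath.exp(1j * xi1) * c, -cmath.exp(-1j * xi2) * s],
     [cmath.exp(1j * xi2) * s, cmath.exp(-1j * xi1) * c]]

# U^dagger U should be the identity
Ud = [[U[j][i].conjugate() for j in range(2)] for i in range(2)]
UdU = [[sum(Ud[i][k] * U[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

det = U[0][0] * U[1][1] - U[0][1] * U[1][0]  # should be exactly 1
```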

(d) Here, we will just establish the Cartesian components of the vector $\vec v_1$; a similar procedure follows for $\vec v_2$. The first component of $\vec v_1$ is
$$e^{i\xi_1}\cos\theta = \cos\xi_1\cos\theta + i\sin\xi_1\cos\theta = a_{11} + ib_{11}\,, \tag{8.15}$$
and so
$$a_{11} = \cos\xi_1\cos\theta\,, \qquad b_{11} = \sin\xi_1\cos\theta\,. \tag{8.16}$$
Similarly, the second component of $\vec v_1$ is
$$e^{i\xi_2}\sin\theta = \cos\xi_2\sin\theta + i\sin\xi_2\sin\theta = a_{21} + ib_{21}\,, \tag{8.17}$$
and so
$$a_{21} = \cos\xi_2\sin\theta\,, \qquad b_{21} = \sin\xi_2\sin\theta\,. \tag{8.18}$$


(e) Unit normalization of the vector $\vec v_1$ in this complex Cartesian expression implies that
$$\vec v_1^{\,\dagger}\vec v_1 = 1 = (a_{11} - ib_{11})(a_{11} + ib_{11}) + (a_{21} - ib_{21})(a_{21} + ib_{21}) = a_{11}^2 + b_{11}^2 + a_{21}^2 + b_{21}^2\,. \tag{8.19}$$
Thus, this describes all points in a four-dimensional space equidistant from the origin; i.e., a three-dimensional sphere $S^3$. The Hopf fibration expresses this three-sphere as a fiber bundle of circles over a two-sphere: $S^1 \hookrightarrow S^3 \to S^2$.

8.2

(a) Let's consider the nested commutator of elements of the Lie algebra of rotations,

$$[\hat L_i, [\hat L_j, \hat L_k]] = i\hbar\sum_{l=1}^{3}\epsilon_{jkl}\,[\hat L_i, \hat L_l] = -\hbar^2\sum_{l=1}^{3}\sum_{m=1}^{3}\epsilon_{jkl}\,\epsilon_{ilm}\,\hat L_m\,. \tag{8.20}$$
The other commutators are just cyclic permutations of this:
$$[\hat L_k, [\hat L_i, \hat L_j]] = -\hbar^2\sum_{l=1}^{3}\sum_{m=1}^{3}\epsilon_{ijl}\,\epsilon_{klm}\,\hat L_m\,, \qquad [\hat L_j, [\hat L_k, \hat L_i]] = -\hbar^2\sum_{l=1}^{3}\sum_{m=1}^{3}\epsilon_{kil}\,\epsilon_{jlm}\,\hat L_m\,. \tag{8.21}$$
The sum of these three nested commutators is
$$[\hat L_i, [\hat L_j, \hat L_k]] + [\hat L_k, [\hat L_i, \hat L_j]] + [\hat L_j, [\hat L_k, \hat L_i]] = -\hbar^2\sum_{l=1}^{3}\sum_{m=1}^{3}\left(\epsilon_{jkl}\,\epsilon_{ilm} + \epsilon_{ijl}\,\epsilon_{klm} + \epsilon_{kil}\,\epsilon_{jlm}\right)\hat L_m\,. \tag{8.22}$$
To evaluate this, we can pick specific values for $i, j, k$ and note that the result is completely invariant to permutations of $i, j, k$. First, if $i = j = k$, every term in the sum has a factor of $\epsilon_{iil} = 0$, and so we find 0. If $j = k$, but $i \neq j$, then the sum reduces to
$$[\hat L_i, [\hat L_j, \hat L_j]] + [\hat L_j, [\hat L_i, \hat L_j]] + [\hat L_j, [\hat L_j, \hat L_i]] = -\hbar^2\sum_{l=1}^{3}\sum_{m=1}^{3}\left(\epsilon_{jjl}\,\epsilon_{ilm} + \epsilon_{ijl}\,\epsilon_{jlm} + \epsilon_{jil}\,\epsilon_{jlm}\right)\hat L_m \tag{8.23}$$
$$= -\hbar^2\sum_{l=1}^{3}\sum_{m=1}^{3}\left(\epsilon_{ijl}\,\epsilon_{jlm} + \epsilon_{jil}\,\epsilon_{jlm}\right)\hat L_m = -\hbar^2\sum_{l=1}^{3}\sum_{m=1}^{3}\left(\epsilon_{ijl}\,\epsilon_{jlm} - \epsilon_{ijl}\,\epsilon_{jlm}\right)\hat L_m = 0\,,$$
because $\epsilon_{jil} = -\epsilon_{ijl}$, by total anti-symmetry. Finally, if all of $i, j, k$ are distinct, then the only way that $\epsilon_{jkl} \neq 0$ is if $l = i$. However, such a factor multiplies $\epsilon_{ilm} = \epsilon_{iim} = 0$, and similarly for all other terms. Therefore, any possible configuration of rotation operators satisfies the Jacobi identity.
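The index argument can also be verified by brute force on the defining (spin-1) representation, where $(L_i)_{jk} = -i\hbar\,\epsilon_{ijk}$ (here $\hbar = 1$). A minimal sketch with hand-rolled 3 × 3 matrix products:

```python
# Verify the Jacobi identity for the rotation generators (L_i)_{jk} = -i eps_{ijk},
# using plain nested-list matrices (hbar = 1).
def eps(i, j, k):
    return ((i - j) * (j - k) * (k - i)) // 2  # Levi-Civita for indices in {0,1,2}

L = [[[-1j * eps(i, j, k) for k in range(3)] for j in range(3)] for i in range(3)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)] for i in range(3)]

def comm(A, B):
    AB, BA = matmul(A, B), matmul(B, A)
    return [[AB[i][j] - BA[i][j] for j in range(3)] for i in range(3)]

def madd(*Ms):
    return [[sum(M[i][j] for M in Ms) for j in range(3)] for i in range(3)]

# Largest entry of the cyclic nested-commutator sum over all index choices:
max_entry = max(
    abs(madd(comm(L[i], comm(L[j], L[k])),
             comm(L[k], comm(L[i], L[j])),
             comm(L[j], comm(L[k], L[i])))[r][c])
    for i in range(3) for j in range(3) for k in range(3)
    for r in range(3) for c in range(3)
)
```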


(b) If the Lie bracket is the commutator, then the cyclic sum of the nested commutators is
$$[\hat A, [\hat B, \hat C]] + [\hat C, [\hat A, \hat B]] + [\hat B, [\hat C, \hat A]] = [\hat A, \hat B\hat C - \hat C\hat B] + [\hat C, \hat A\hat B - \hat B\hat A] + [\hat B, \hat C\hat A - \hat A\hat C] \tag{8.24}$$
$$= \hat A\hat B\hat C - \hat A\hat C\hat B - \hat B\hat C\hat A + \hat C\hat B\hat A + \hat C\hat A\hat B - \hat C\hat B\hat A - \hat A\hat B\hat C + \hat B\hat A\hat C + \hat B\hat C\hat A - \hat B\hat A\hat C - \hat C\hat A\hat B + \hat A\hat C\hat B = 0\,,$$
because all terms cancel pairwise.

8.3

First, note that the trace of the Casimir of a $D$-dimensional representation is
$$\operatorname{tr}(C_R\,\mathbb{I}_D) = D\,C_R = \operatorname{tr}\left[\left(\hat L_x^{(R)}\right)^2 + \left(\hat L_y^{(R)}\right)^2 + \left(\hat L_z^{(R)}\right)^2\right]\,. \tag{8.25}$$

(8.27)

For a representation of spin ℓ, the dimension D is (8.28)

D = 2ℓ + 1 , 8.4

which can be easily input into the expression for the Killing form. If a state |ψ ⟩ is a coherent state of the angular momentum lowering operator Lˆ − , then it satisfies Lˆ − |ψ ⟩ = λ |ψ ⟩ ,

(8.29)

for some complex-valued λ . Using the completeness of the states in the spin-ℓ representation of angular momentum, we can express |ψ ⟩ as the linear combination: |ψ ⟩





βm |ℓ, m⟩ ,

(8.30)

m=−ℓ

for z-component of angular momentum m and coefficients βm . Now, on this expression, the coherent state satisfies Lˆ − |ψ ⟩ =





m=−ℓ

βm Lˆ − |ℓ, m⟩ =





λ βm |ℓ, m⟩ .

(8.31)

m=−ℓ

By the definition of the lowering operator, the state Lˆ − |ℓ, m⟩ ∝ |ℓ, m − 1⟩, when m > −ℓ. Therefore, the highest state in the sum on the left is |ℓ, ℓ − 1⟩, while the

79

Exercises highest state on the right is |ℓ, ℓ⟩. Therefore, we must enforce that λ βℓ = 0. There are two possibilities. If we set βℓ = 0, then the recursive nature of this relationship sets βℓ−1 = βℓ−2 = · · · = β−ℓ+1 = 0. Thus, the coherent state would simply be the state with the lowest angular momentum eigenvalue, |ψ ⟩ = |ℓ, −ℓ⟩ with λ = 0. By contrast, if we fix λ = 0, then we again find this state. Therefore, the only coherent state of angular momentum is the state of the lowest value of z-component. Note that in the case of the harmonic oscillator coherent states, there is no upper limit to the energy eigenvalue, so there was no issue with the matching of states in the coherent state eigenvalue equation. The finite number of angular momentum states forbids the existence of coherent states. 8.5 (a) The Hamiltonian for a spin-1/2 particle immersed in a magnetic field was eB0 ˆ Hˆ = Sz . me

(8.32)

The commutator of the Hamiltonian and the spin operator Sˆx is then ˆ Sˆx ] = eB0 [Sˆz , Sˆx ] = i e¯hB0 Sˆy . [H, me me

(8.33)

Therefore, the uncertainty principle that the energy and x-component of angular momentum satisfy is ˆ Sˆx ]⟩ 2 e2 h¯ 2 B20 ⟨[H, = |⟨Sˆy ⟩|2 . σE2 σS2x ≥ (8.34) 2i 4m2e (b) When the expectation value of the y-component of angular momentum vanishes, ⟨Sˆy ⟩ = 0, the lower bound in the uncertainty principle is 0. Having an expectation value of 0 means that the state |ψ ⟩ is in a linear combination of up and down spin with coefficients of equal magnitude:  1 |ψ ⟩ = √ eiϕ | ↑y ⟩ + e−iϕ | ↓y ⟩ , 2

(8.35)

for some phase ϕ . In terms of eigenstates of Sˆz , we can express eigenstates of Sˆy as 1 | ↑y ⟩ = √ (| ↑⟩ + i| ↓⟩) , 2

1 | ↓y ⟩ = √ (| ↑⟩ − i| ↓⟩) . 2

(8.36)

The state |ψ ⟩ in terms of the eigenstates of Sˆz is then |ψ ⟩ = cos ϕ | ↑⟩ − sin ϕ | ↓⟩ .

(8.37)

The variance of the Hamiltonian on this state only vanishes if it is an eigenstate of Sˆz ; i.e., ϕ = 0, π /2, π , 3π /2. The variance of Sˆx vanishes when the magnitudes of the coefficients of this state are equal, corresponding to an eigenstate of Sˆx : ϕ = π /4, 3π /4, 5π /4, 7π /4. At all other values of ϕ , the lower bound in the uncertainty principle is 0 on this state, but the product of variances is non-zero.

8 Rotations in Three Dimensions

80

8.6

To prove the Schouten identity, we will work in the basis of eigenstates of the spin operator Sˆz . First, we can always rotate the z-axis to lie along the direction of one of the spins. So we can, without loss of generality, choose |ψ ⟩ = |↑⟩. For the other spinors, we can write |ρ ⟩ = a1 | ↑⟩ + b1 | ↓⟩ ,

(8.38)

|χ ⟩ = a2 | ↑⟩ + b2 | ↓⟩ ,

(8.39)

|η ⟩ = a3 | ↑⟩ + b3 | ↓⟩ ,

(8.40)

for some complex coefficients ai , bi . The two products of inner products of spinors on the left of the Schouten identity are then: ⟨ψ |ρ ⟩⟨χ |η ⟩ = a1 (a∗2 a3 + b∗2 b3 ) ,

⟨ψ |η ⟩⟨χ |ρ ⟩ = a3 (a∗2 a1 + b∗2 b1 ) .

(8.41)

Then, we need to determine how the Pauli matrix acts on spinors. Note that   0 1 iσ2 = . (8.42) −1 0 Then, the action of this matrix on the spinors is: iσ2 |η ⟩ = b3 | ↑⟩ − a3 | ↓⟩ ,

iσ2 |χ ⟩ = b2 | ↑⟩ − a2 | ↓⟩ .

(8.43)

Then, the final product of inner products is (⟨ρ |)∗ iσ2 |η ⟩⟨ψ |iσ2 (|χ ⟩)∗ = (a1 b3 − b1 a3 )b∗2 .

(8.44)

Putting this together with the other spinor product on the right side of the equation, we find ⟨ψ |η ⟩⟨χ |ρ ⟩ + (⟨ρ |)∗ iσ2 |η ⟩⟨ψ |iσ2 (|χ ⟩)∗ = a3 (a∗2 a1 + b∗2 b1 ) + (a1 b3 − b1 a3 )b∗2 = a1 (a∗2 a3 + b∗2 b3 ) = ⟨ψ |ρ ⟩⟨χ |η ⟩ , (8.45) 8.7

proving the Schouten identity. (a) We can take the trace of the commutator, multiplied by either operator Aˆ or ˆ Note that B.    ˆ A, ˆ B] ˆ = tr Aˆ 2 Bˆ − Aˆ Bˆ Aˆ = 0 = iα trAˆ 2 + iβ tr Aˆ Bˆ . (8.46) tr A[ The trace of the commutator is 0 by the cyclic property of the trace. By the ˆ the trace of their product is 0, and so this requires orthogonality of Aˆ and B, that iα trAˆ 2 = 0 .

(8.47)

By assumption, trAˆ 2 ̸= 0, and so we must fix α = 0. A similar analysis holds ˆ B], ˆ A, ˆ which requires that β = 0. Therefore, we must enforce for the trace of B[ ˆ ˆ that [A, B] = 0, or that a two-dimension Lie algebra is necessarily Abelian.

Exercises

81

(b) Consider the operator Cˆ = Bˆ + α Aˆ , ˆ The trace of this operator with Aˆ is for some constant A.     tr AˆCˆ = tr Aˆ Bˆ + α trAˆ 2 . If we demand that this vanishes, then α is fixed to be   tr Aˆ Bˆ α =− . trAˆ 2 ˆ ̸= 0, we can construct Thus, if tr[Aˆ B] Cˆ = Bˆ −

8.8 (a)

  tr Aˆ Bˆ Aˆ , trAˆ 2

(8.48)

(8.49)

(8.50)

(8.51)

ˆ = 0. and then tr[AˆC] The matrix that rotates about the z-axis is the simplest, because it is identical to the familiar two-dimensional rotation matrix, just with an added diagonal entry to represent that the z-component of the vector is unaffected:   cos θ − sin θ 0 Uz (θ ) =  sin θ (8.52) cos θ 0  . 0 0 1 The rotation matrices about the x and y axes are similar, just with components permuted:     1 0 0 cos θ 0 sin θ Ux (θ ) =  0 cos θ − sin θ  , Uy (θ ) =  0 1 0 . 0 sin θ cos θ − sin θ 0 cos θ (8.53)

(b) We will just construct the Lie algebra element $\hat{L}_x$ explicitly, and then just state the results for $\hat{L}_y$, $\hat{L}_z$. We have
$$U_x(\theta) = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\theta & -\sin\theta \\ 0 & \sin\theta & \cos\theta \end{pmatrix} = e^{i\frac{\theta \hat{L}_x}{\hbar}}\,. \qquad (8.54)$$
Taylor expanding the explicit matrix and the exponential each to linear order in $\theta$, we find
$$\mathbb{I} + i\frac{\theta \hat{L}_x}{\hbar} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & -\theta \\ 0 & \theta & 1 \end{pmatrix}, \qquad (8.55)$$
or that
$$\hat{L}_x = \hbar \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & i \\ 0 & -i & 0 \end{pmatrix}. \qquad (8.56)$$
Correspondingly, the operators $\hat{L}_y$ and $\hat{L}_z$ are
$$\hat{L}_y = \hbar \begin{pmatrix} 0 & 0 & -i \\ 0 & 0 & 0 \\ i & 0 & 0 \end{pmatrix}, \qquad \hat{L}_z = \hbar \begin{pmatrix} 0 & i & 0 \\ -i & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}. \qquad (8.57)$$
(c)

Let’s first construct the matrices that implement rotation by π /2 about the x and y axes. We have     1 0 0 0 0 1 Ux (π /2) =  0 0 −1  , Uy (π /2) =  0 1 0  . (8.58) 0 1 0 −1 0 0 If we first rotate about x and then y, the resulting vector is    0 0 1 1 0 0 Uy (π /2)Ux (π /2)⃗v =  0 1 0   0 0 −1   −1 0 0 0 1 0     0 0 1 0 0 =  0 1 0   −1  =  −1 −1 0 0 0 0

 0 0  1  .

Performing the rotation in the opposite way, we find     1 0 0 0 0 1 0 Ux (π /2)Uy (π /2)⃗v =  0 0 −1   0 1 0   0  0 1 0 −1 0 0 1      1 0 0 1 1 =  0 0 −1   0  =  0  . 0 1 0 0 0 The difference of the resulting vectors is then   −1 ⃗vyx −⃗vxy =  −1  . 0

(8.59)

(8.60)

(8.61)

(d) By contrast, we could have calculated the commutator of the rotation matrices. Note that their products in different orders are      0 0 1 1 0 0 0 1 0 Uy (π /2)Ux (π /2) =  0 1 0   0 0 −1  =  0 0 −1  , −1 0 0 0 1 0 −1 0 0 (8.62)      1 0 0 0 0 1 0 0 1 Ux (π /2)Uy (π /2) =  0 0 −1   0 1 0  =  1 0 0  . 0 1 0 −1 0 0 0 1 0

Exercises

83

Their commutator is thus



0 Uy (π /2)Ux (π /2) − Ux (π /2)Uy (π /2) =  −1 −1

1 0 −1

 −1 −1  . 0

The action of this matrix product difference on the vector ⃗v is then   0 1 −1 (Uy (π /2)Ux (π /2) − Ux (π /2)Uy (π /2))⃗v =  −1 0 −1   −1 −1 0   −1 =  −1  , 0 exactly as expected.

(8.63)

 0 0  1 (8.64)

9

The Hydrogen Atom

Exercises

9.1 (a) The potential of this hydrogenized atom is now
$$V(r) = -\frac{Ze^2}{4\pi\epsilon_0}\frac{1}{r}\,, \qquad (9.1)$$
so effectively this is just the hydrogen atom potential with the replacement $e^2 \to Ze^2$. Correspondingly, in the expression for the energy levels, we can make this replacement to determine the new, modified energy levels. Therefore, the energy eigenvalues of this atom are
$$E_n = -\frac{m_e Z^2 e^4}{2(4\pi\epsilon_0)^2\hbar^2}\frac{1}{n^2}\,, \qquad (9.2)$$
for $n = 1, 2, 3, \ldots$.

(b) To estimate the speed $v$ of the electron in the ground state, we note that the expectation value of the kinetic energy is
$$\left\langle \frac{\hat{p}_r^2}{2m_e} \right\rangle = E_1 + \frac{Ze^2}{4\pi\epsilon_0}\left\langle \frac{1}{\hat{r}} \right\rangle \equiv \frac{1}{2} m_e v^2\,. \qquad (9.3)$$
The expectation value of the inverse radius on the ground state is
$$\left\langle \frac{1}{\hat{r}} \right\rangle = \frac{4}{a_0^3}\int_0^\infty dr\, r\, e^{-2r/a_0} = \frac{1}{a_0} = \frac{Z m_e e^2}{4\pi\epsilon_0 \hbar^2}\,, \qquad (9.4)$$
the inverse of the new Bohr radius. Then, the expectation value of the kinetic energy in the ground state is
$$\left\langle \frac{\hat{p}_r^2}{2m_e} \right\rangle = -\frac{m_e Z^2 e^4}{2(4\pi\epsilon_0)^2\hbar^2} + \frac{Ze^2}{4\pi\epsilon_0}\,\frac{Z m_e e^2}{4\pi\epsilon_0 \hbar^2} = \frac{m_e Z^2 e^4}{2(4\pi\epsilon_0)^2\hbar^2}\,. \qquad (9.5)$$
The expectation value of the squared velocity of the ground state is then
$$v^2 = \frac{Z^2 e^4}{(4\pi\epsilon_0)^2\hbar^2}\,. \qquad (9.6)$$
Setting this equal to $v = c/2$, the atomic number when the speed is comparable to the speed of light is
$$Z = \frac{4\pi\epsilon_0 \hbar c}{2e^2} = \frac{1}{2\alpha} \simeq 69\,. \qquad (9.7)$$
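A one-line numerical check of this estimate, taking the measured value of the fine-structure constant as an input:

```python
# Fine-structure constant alpha ~ 1/137.036 (standard measured value).
alpha = 1 / 137.035999

Z = 1 / (2 * alpha)
print(Z, round(Z))  # 68.5..., rounding to 69
```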


(c) This element is thulium.

9.2 (a) The relativistic kinetic energy is
$$K = \sqrt{m_e^2 c^4 + |\vec{p}|^2 c^2} - m_e c^2 = m_e c^2\left(\sqrt{1 + \frac{|\vec{p}|^2}{m_e^2 c^2}} - 1\right) = m_e c^2\left(1 + \frac{|\vec{p}|^2}{2m_e^2 c^2} - \frac{|\vec{p}|^4}{8m_e^4 c^4} + \cdots - 1\right) = \frac{|\vec{p}|^2}{2m_e} - \frac{|\vec{p}|^4}{8m_e^3 c^2} + \cdots\,. \qquad (9.8)$$

So indeed the lowest order term in the small-velocity limit reproduces the familiar non-relativistic kinetic energy. The first relativistic correction is negative, and so relativistic corrections tend to decrease the kinetic energy as compared to the non-relativistic expression. To calculate the Poisson bracket of the angular momentum and the Hamiltonian, we recall the form of the Poisson bracket. For an individual component of the angular momentum vector Lk , the Poisson bracket is  3  ∂ H ∂ Lk ∂ H ∂ Lk (9.9) {H, Lk } = ∑ − . ∂ pl ∂ rl l=1 ∂ rl ∂ pl For this part, we will simply assume that the Hamiltonian takes the form 3

H=

p2

∑ 2mke +V (r) ,

(9.10)

k=1

p where r = x2 + y2 + z2 , the radial distance. The kth component of the angular momentum vector can be expressed as 3

Lk =



(9.11)

ϵ i jk ri p j .

i, j=1

Then, we can evaluate derivatives. We have

∂H ∂V rl ∂ V = = , ∂ rl ∂ rl r ∂r ∂ Lk = ∂ rl

∂H pl , = ∂ pl me ∂ Lk = ∂ pl

3

∑ ϵ l jk p j ,

j=1

(9.12)

3

∑ ϵ jlk r j .

j=1

Then, the Poisson bracket is {H, Lk } =

  rl r j ∂ V p j pl ϵ − ϵ ∑ jlk r ∂ r l jk me = 0 , j,l=1 3

(9.13)

because 3



j,l=1

3

ϵ jlk rl r j =



j,l=1

ϵ l jk p j pl = 0 ,

(9.14)


and positions and momenta commute amongst themselves. Therefore, angular momentum is conserved in a central potential. (b) For the Poisson bracket with the Laplace–Runge–Lenz vector, we need to specify the explicit form of the potential of the Hamiltonian, and not just the fact that it is only dependent on radial distance. Then, the spatial derivative of the Coulomb potential is

∂H ∂V rl ∂ V e2 rl ∂ 1 e2 rl . = = = =− ∂ rl ∂ rl r ∂r 4π ϵ 0 r ∂ r r 4π ϵ 0 r 3

(9.15)

We had already evaluated the derivatives of the Laplace–Runge–Lenz vector component Ak in the chapter, finding

∂ Ak e2 rk rl − r2 δkl |⃗p|2 δkl − pk pl = + , ∂ rl 4π ϵ 0 r3 me ∂ Ak 2rk pl − rl pk −⃗r ·⃗p δkl = , ∂ pl me

(9.16)

where we have inserted the explicit expression for the Hamiltonian. Then, the Poisson bracket of the Hamiltonian and this component of the Laplace– Runge–Lenz vector is {H, Ak } =

3



l=1

=−



1 m2e



 2rk pl − rl pk −⃗r ·⃗p δkl (9.17) me  2  pl e rk rl − r2 δkl |⃗p|2 δkl − pk pl + − me 4π ϵ 0 r3 me !

e2 rl 4π ϵ 0 r 3

3

|⃗p|2 pk − pk ∑ p2l l=1

e2

1 + 4π ϵ 0 me r3

"

3



rk rl pl − pk rl2



# −⃗r ·⃗p rk + r pk 2

l=1

= 0, where we have used, for example, the dot product ⃗r ·⃗p =

3

∑ rl pl .

(9.18)

l=1

(c)

Therefore, the Laplace–Runge–Lenz vector is also conserved in the hydrogen atom. Now onto the Poisson bracket of two components of the Laplace–Runge– Lenz vector. Using the derivatives established above, the Poisson bracket is


{Ai , A j } =

3





k=1

 −

e2 ri rk − r2 δik |⃗p|2 δik − pi pk + 4π ϵ 0 r3 me e2

r j rk

4π ϵ 0

− r2 δ

jk

r3

+

|⃗p|2 δ



jk − p j pk

me

3 1 e2 (ri p j − r j pi ) 3r2 − ∑ rk2 = 3 4 π ϵ 0 me r k=1

2r j pk − rk p j −⃗r ·⃗p δ jk me



!



(9.19)  2ri pk − rk pi −⃗r ·⃗p δik me

! 3 1 2 2 − 2 (ri p j − r j pi ) 3|⃗p| − 2 ∑ pk me k=1  2  2 2 |⃗p| e 1 2 = − (ri p j − r j pi ) − = − H(ri p j − r j pi ) , me 2me 4π ϵ 0 r me the Hamiltonian for the hydrogen atom. Now, all that remains is to interpret ri p j − r j pi . This looks like a component of angular momentum if i ̸= j, so we must enforce that. Note that 3

∑ ϵ i jk Lk =

k=1

3



3

ϵ i jk ϵ lmk rl pm =



 δil δ jm − δim δ jl rl pm

(9.20)

l,m=1

k,l,m=1

= ri p j − r j pi , where we used the result established in the chapter for the sum over a product of totally anti-symmetric objects. Using this result, we then find {Ai , A j } = −

9.3 (a)

3 2 H ∑ ϵ Lk , me k=1 i jk

(9.21)

exactly as claimed. Because angular momentum is conserved, the two particles’ trajectories must lie in a plane and must orbit with a common angular frequency ω . Then, the total energy of the system can be expressed as 1 1 n E = m1 r12 ω 2 + m2 r22 ω 2 + krn , 2 2 |n|

(9.22)

where r is the distance between the masses m1 and m2 and r1 , r2 is their respective distance from their common orbiting point. Note that r1 + r2 = r. Given this expression for the potential energy, we can also write down Newton’s second law. Note that the magnitude of the conservative force from the potential is F=

d U(r) = |n|krn−1 . dr

(9.23)

Then, the Newton’s second laws for both masses are |n|krn−1 = m1 r1 ω 2 ,

|n|krn−1 = m2 r2 ω 2 .

(9.24)


Then, we note that this enforces r2 =

m1 r1 , m2

(9.25)

and then with r1 + r2 = r, the distances to the common center are m2 m1 r1 = r, r2 = r. m1 + m2 m1 + m2

(9.26)

With these relationships, the total energy and Newton’s second laws enforce E=

1 m1 m2 2 2 n r ω + krn , 2 m1 + m2 |n|

|n|krn−1 =

m1 m2 rω 2 . m1 + m2

(9.27)

Note the appearance of the reduced mass in both expressions. Now, we will massage Newton’s second law to relate the kinetic and potential energies. First, we can multiply both sides by distance r: m1 m2 2 2 |n|krn = (9.28) r ω . m1 + m2 Then, the right side is nearly the kinetic energy K, up to a factor of 2: |n| n 1 m1 m2 2 2 kr = r ω = K. 2 2 m1 + m2

(9.29)

Next, the right side is the magnitude of the potential energy U up to a factor of |n|/2:
$$\frac{|n|}{2}\, k r^n = \frac{|n|}{2}\, U\,.$$

(9.30)

Therefore, the kinetic energy is related to the potential energy as K=

|n| U. 2

(9.31)
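This relation can be spot-checked numerically for a circular two-body orbit; the masses, coupling, and separation below are arbitrary choices, and $U$ denotes the magnitude of the potential energy:

```python
# Check K = (|n|/2) U on a circular orbit in the potential whose force
# magnitude is |n| k r^(n-1), for several power laws n.
m1, m2 = 1.7, 3.1          # arbitrary masses
mu = m1 * m2 / (m1 + m2)   # reduced mass
k, r = 0.9, 2.3            # arbitrary coupling and separation

for n in (-1, 1, 2, 6):
    # Newton's second law for the circular orbit fixes the angular frequency:
    # mu r omega^2 = |n| k r^(n-1).
    omega2 = abs(n) * k * r ** (n - 2) / mu
    K = 0.5 * mu * r**2 * omega2   # kinetic energy
    U = k * r**n                   # magnitude of the potential energy
    print(n, K / U)                # should equal |n| / 2 in each case
```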

(b) Recall that the time dependence of an expectation value of an operator Oˆ is d ˆ ⟨O⟩ = dt

i ˆ ˆ ⟨[H, O]⟩ . h¯

(9.32)

ˆ ψ ⟩ = E|ψ ⟩, the energy E, and so the expectation On an energy eigenstate, H| value of the commutator is ˆ = E⟨Oˆ − O⟩ ˆ = 0. ˆ O]⟩ ⟨[H,

(9.33)

Note that this argument is independent of what the operator Oˆ actually is. With this observation and the fact that the Hamiltonian can be expressed as kinetic plus potential operators Hˆ = Kˆ +V (ˆr) ,

(9.34)

ˆ = 0, ˆ O]⟩ ⟨[H,

(9.35)

we require that


or that ˆ = −⟨[V (ˆr), O]⟩ ˆ . ˆ O]⟩ ⟨[K,

(9.36)

ˆ In position Let’s first consider the commutator of the potential with O. space, the operator is

∂ ∂ ∂ Oˆ = −i¯hx − i¯hy − i¯hz . ∂x ∂y ∂z

(9.37)

The commutator is then ˆ (ˆr)] = −i¯h [O,V

  n ∂ ∂ ∂ k x +y +z (x2 + y2 + z2 )n/2 . |n| ∂x ∂y ∂z

(9.38)

Note that the derivative is x

n ∂ 2 (x + y2 + z2 )n/2 = nx2 (x2 + y2 + z2 ) 2 −1 , ∂x

and so the sum of the derivatives is   n ∂ ∂ ∂ x +y +z (x2 + y2 + z2 )n/2 = n(x2 + y2 + z2 ) 2 . ∂x ∂y ∂z

(9.39)

(9.40)

Therefore, the commutator is ˆ (ˆr)] = −i¯hnV (ˆr) . [O,V Now, for the kinetic energy. In momentum space, the operator is   ∂ ∂ ∂ Oˆ = 3i¯h + pˆx xˆ + pˆy yˆ + pˆz zˆ = i¯h 3 + px + py + pz . ∂ px ∂ py ∂ pz The commutator is then   ∂ ∂ ∂ ˆ K] ˆ = i h¯ [O, px + py + pz (p2x + p2y + p2z ) = 2i¯hKˆ . 2m ∂ px ∂ py ∂ pz

(9.41)

(9.42)

(9.43)

Note that the constant term in the operator Oˆ from re-ordering the position and momentum operators does not affect the commutator. Using these results, the expectation values of the commutators are related as ˆ = −⟨[V (ˆr), O]⟩ ˆ ˆ O]⟩ ⟨[K,



ˆ = −in¯h⟨V (ˆr)⟩ . −2i¯h⟨K⟩

(9.44)

That is, ˆ = n ⟨V (ˆr)⟩ , ⟨K⟩ 2 (c)

(9.45)

exactly the classical virial theorem, by Ehrenfest’s theorem. What makes the operator Oˆ useful for this task is the following observation. In position space, the operator contains a term of the form

∂ ∂ Oˆ ⊃ −i¯hx = −i¯h . ∂x ∂ log x

(9.46)


Thus, the job of the operator Oˆ is to count the powers of x that appear in the potential, for example. The fact that the potential and kinetic energies are related by a factor of n/2 is a reflection of the n powers of position in the potential and 2 powers of momentum in the kinetic energy. (d) On the ground state of the hydrogen atom, recall that the potential is V (ˆr) = −

e2 1 . 4π ϵ 0 rˆ

(9.47)

Its expectation value on the ground state wavefunction is ⟨V (ˆr)⟩ = −

e2 4 4π ϵ 0 a30

Z ∞ 0

dr r2

e2 1 1 −2r/a0 me e 4 =− e =− . r 4π ϵ 0 a0 (4π ϵ 0 )2 h¯ 2 (9.48)

Also, recall that the energy of the ground state is E0 = −

me e4 , 2(4π ϵ 0 )2 h¯ 2

(9.49)

and so the expectation value of the kinetic energy must be their difference ˆ = E0 − ⟨V (ˆr)⟩ = ⟨K⟩

me e4 . 2(4π ϵ 0 )2 h¯ 2

(9.50)

Thus, we see that ˆ = ⟨K⟩ (e)

1 |⟨V (ˆr)⟩| , 2

(9.51)

exactly as predicted classically for a 1/r potential and Ehrenfest’s theorem. For the harmonic oscillator, recall that its eigenenergies were   1 (9.52) En = n + h¯ ω , 2 for n = 0, 1, 2, . . . . The potential of the harmonic oscillator is V (x) ˆ =

2 h¯ ω † 2 mω 2 2 mω 2 h¯ aˆ† + aˆ = xˆ = aˆ + aˆ . 2 2 2mω 4

(9.53)

The expectation value on an energy eigenstate is then ⟨ψn |V (x)| ˆ ψn ⟩ = = = = =

2 h¯ ω (9.54) ⟨ψ0 |aˆn aˆ† + aˆ (aˆ† )n |ψ0 ⟩ 4n!  h¯ ω ⟨ψ0 |aˆn aˆ† aˆ + aˆaˆ† (aˆ† )n |ψ0 ⟩ 4n!  h¯ ω ⟨ψ0 |aˆn [aˆ† , a] ˆ + 2aˆaˆ† (aˆ† )n |ψ0 ⟩ 4n! h¯ ω h¯ ω ⟨ψ0 |aˆn+1 (aˆ† )n+1 |ψ0 ⟩ − ⟨ψ0 |aˆn (aˆ† )n |ψ0 ⟩ 2n! 4n!   h¯ ω h¯ ω h¯ ω 1 . (n + 1) − = n+ 2 4 2 2


This is exactly half of the energy eigenvalue, and so we find that, for the harmonic oscillator, ˆ = ⟨V (x)⟩ ⟨K⟩ ˆ ,

(9.55)

on an energy eigenstate, again, exactly as expected classically. This is another way of exhibiting that momentum and position in the harmonic oscillator are treated effectively identically.

9.4 For this question, we will just focus on the 0 angular momentum states. Then, in position space, the eigenvalue equation for the Hamiltonian is
$$\left(-\frac{\hbar^2}{m}\frac{1}{r}\frac{d}{dr} - \frac{\hbar^2}{2m}\frac{d^2}{dr^2} - \frac{k}{r^n}\right)\psi(r) = E\,\psi(r)\,, \qquad (9.56)$$
using the results from section 9.2 in the textbook for relating squared Cartesian momentum to the radial momentum.

ψ (r) ∝ rb e−r/a , for two constants a and b. The eigenvalue equation then implies that   h¯ 2 b(b + 1) 2(b + 1) 1 k − − + − n =E. 2m r2 ar a2 r

(9.57)

(9.58)

Immediately, it is clear that if n > 2, there is no way to make this relationship consistent; the k/rn term dominates as r → 0. If n = 2, we must enforce a cancellation of the two terms that scale like r−2 : −

h¯ 2 b(b + 1) = k . 2m

(9.59)

However, we must also enforce that the single term that scales like r−1 vanishes by itself, or that b = −1. These requirements are inconsistent, and so only for n < 2 can there possibly be bound states. 9.5 (a) In evaluating the ground state and first excited state of the hydrogen atom, for ℓ = 0, we had constructed specific forms for their ansatze. In particular, these wavefunctions took the form

ψ0 (r) ∝ e−r/a0 ,

ψ1 (r) ∝ (b0 + b1 r)e−r/a1 ,

(9.60)

for constants $a_0, a_1, b_0, b_1$. So, this suggests that for the $n$th energy eigenstate, the prefactor of the exponential suppression at large $r$ is an $n$th-order polynomial:
$$\psi_n(r) = e^{-r/a_n}\sum_{i=0}^n c_i r^i\,, \qquad (9.61)$$
for some constants $a_n, c_i$. In general, such a polynomial will have $n$ nodes, $n$ roots, as we expect for the $n$th excited state.
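This node counting is consistent with the standard closed form of the $\ell = 0$ radial solutions, whose polynomial factor is (up to normalization) a generalized Laguerre polynomial — a standard result assumed here, not derived in this solution. A degree-$k$ generalized Laguerre polynomial $L^{(1)}_k$ has exactly $k$ positive roots, which the sketch below verifies by counting sign changes on a grid:

```python
import numpy as np

def laguerre(k, alpha, x):
    """Evaluate the generalized Laguerre polynomial L^(alpha)_k(x)
    by the standard three-term recurrence."""
    Lm, L = np.ones_like(x), 1 + alpha - x
    if k == 0:
        return Lm
    for j in range(1, k):
        # (j+1) L_{j+1} = (2j+1+alpha-x) L_j - (j+alpha) L_{j-1}
        Lm, L = L, ((2 * j + 1 + alpha - x) * L - (j + alpha) * Lm) / (j + 1)
    return L

# All roots of L^(1)_k for k <= 5 lie well inside [0, 60].
x = np.linspace(1e-6, 60.0, 200001)
for k in range(1, 6):
    vals = laguerre(k, 1.0, x)
    nodes = np.count_nonzero(np.diff(np.sign(vals)))
    print(k, nodes)  # degree k, and k sign changes (nodes)
```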


(b) The operator that corresponds to the k component of the Laplace–Runge– Lenz vector is 3

Aˆ k =



ϵ i jk

i, j=1

pˆi Lˆ j − Lˆ i pˆ j e2 rˆk − . 2me 4π ϵ 0 rˆ

(9.62)

If this acts on a state of ℓ = 0 angular momentum, then Lˆ i |n, 0, 0⟩ = 0, so the operator simplifies to Aˆ k = −

3



i, j=1

ϵ i jk

Lˆ i pˆ j e2 rˆk − . 2me 4π ϵ 0 rˆ

(9.63)

Recall that the commutator of the momentum and angular momentum is 3

[Lˆ i , pˆ j ] = i¯h ∑ ϵ

k=1 i jk

(9.64)

pˆk ,

so we can switch the operators as 3

Lˆ i pˆ j = [Lˆ i , pˆ j ] + pˆ j Lˆ i = pˆ j Lˆ i + i¯h ∑ ϵ

l=1 i jl

pˆl ,

(9.65)

and on an ℓ = 0 state only the commutator survives. The Laplace–Runge– Lenz operator then reduces to 3 h¯ e2 rˆk h¯ e2 rˆk Aˆ k = −i ϵ ϵ pˆl − = −i pˆk − , ∑ 2me i, j,l=1 i jk i jl 4π ϵ 0 rˆ me 4π ϵ 0 rˆ

(c)

as expected. The first excited state wavefunction is again   1 r ψ2,0,0 (r) = q 1− e−r/2a0 . 2a 3 0 8π a0

(9.66)

(9.67)

Let’s first determine the ψ1,1,0 (⃗r) wavefunction. The operator Aˆ z is, in position space,   e2 zˆ h¯ 2 ∂ 1 z h¯ Aˆ z = −i pˆz − =− + . (9.68) me 4π ϵ 0 rˆ me ∂ z a0 r The derivative acting on the wavefunction is   z r 1 ∂ ∂r d ψ2,0,0 (r) = ψ2,0,0 (r) = − e−r/2a0 . ∂z ∂ z dr r 4a20 a0

(9.69)

Then, the wavefunction of interest is     z 1 1 z r r −r/2a0 − + ψ1,1,0 (⃗r) ∝ Aˆ z ψ2,0,0 (r) ∝ e 1 − e−r/2a0 r 4a20 a0 a0 r 2a0 z ∝ − e−r/2a0 . (9.70) a0


In these expressions, we have dropped overall factors because we only care about proportionality and can fix normalization later. This result also allows us to immediately write down the other wavefunctions immediately: x + iy −r/2a0 e , a0 x − iy −r/2a0 . ψ1,1,−1 (⃗r) ∝ (Aˆ x − iAˆ y )ψ2,0,0 (r) ∝ e a0

ψ1,1,1 (⃗r) ∝ (Aˆ x + iAˆ y )ψ2,0,0 (r) ∝

(9.71)

9.6 The largest possible energy of an emitted photon from hydrogen would be produced from the transition from a very high energy level, with $E_i \approx 0$, down to the ground state with $E_f = -13.6$ eV. The wavelength of a photon with energy 13.6 eV is
$$\lambda = \frac{hc}{E}\,. \qquad (9.72)$$
The value of Planck's constant times the speed of light is approximately
$$hc \approx 1240\ \mathrm{eV\cdot nm}\,, \qquad (9.73)$$
and so the wavelength of this most energetic light is
$$\lambda_{\infty\to 1} \approx \frac{1240}{13.6}\ \mathrm{nm} \approx 91\ \mathrm{nm}\,, \qquad (9.74)$$
a bit too small to be observed by eye. However, if instead hydrogen transitioned from a very high energy level to the first excited state, the wavelength would be four times this value, because the first excited state has an absolute value of energy that is four times less than the ground state. That is,
$$\lambda_{\infty\to 2} \approx 364\ \mathrm{nm}\,, \qquad (9.75)$$
which is on the bluer end of the visible spectrum. It is unlikely that the electron is in an extremely high energy state because any errant radiation will knock it out of its bound state. So, on the other end, the transition from the second to the first excited state would produce a photon with wavelength of
$$\lambda_{3\to 2} \approx \frac{1240}{\frac{13.6}{2^2} - \frac{13.6}{3^2}}\ \mathrm{nm} \approx 656\ \mathrm{nm}\,, \qquad (9.76)$$
on the redder end of the spectrum. Indeed, hydrogen's strongest spectral lines are red, with many lines at the blue end of the spectrum, but much weaker brightness.

9.7 (a) Recall that the Biot-Savart law is
$$\vec{B}(\vec{r}) = \frac{\mu_0}{4\pi}\oint \frac{I\, d\vec{\ell} \times (\vec{r} - \vec{\ell})}{|\vec{r} - \vec{\ell}|^3}\,, \qquad (9.77)$$

where ⃗ℓ is the position vector on the current, ⃗r is the position of interest, and I is the current. For a charged particle traveling around a circle, we are interested in the magnetic field at the center of the circle,⃗r = 0, to determine


the magnetic moment. As such, a point on the circle can be expressed as the vector ℓ = r(cos ω t, sin ω t) ,

(9.78)

where r is the radius and ω is the angular frequency of the orbit. The differential distance is d⃗ℓ = rω (− sin ω t, cos ω t) dt ,

(9.79)

which we note is orthogonal to the position vector ℓ. In each full orbit, an electric charge e travels around, and therefore the current is eω I= . (9.80) 2π With these results, the magnitude of the magnetic field at the center of the loop is

µ0 eω |⃗B(⃗0)| = 8π 2

I

r2 ω dt , r3

(9.81)

and the integral runs over the total period T = 2π /ω of the orbit. Then, the magnetic field is

µ0 eω 1 . |⃗B(⃗0)| = (9.82) 4π r Now, for a particle of mass me traveling with angular frequency ω around a circle of radius r, its angular momentum is L = me r 2 ω .

(9.83)

Then, the magnetic field, in terms of angular momentum, is

µ0 e 1 µ0 2 L= |⃗m| , |⃗B(⃗0)| = 3 4π me r 4π r 3

(9.84)

where ⃗m is the magnetic dipole moment. Thus, the magnetic dipole moment is e L, (9.85) |⃗m| = 2me and the gyromagnetic moment is easily read off, e |γclass | = . 2me

(9.86)

When the magnitude is removed, we pick up a minus sign because the electron is negatively charged. (b) We consider the external magnetic field along the z-axis as in the Zeeman effect example, ⃗B = B0 zˆ. This magnetic field only affects the energy through the z-component of angular momentum of the electron as it orbits the proton. The new energy operator Uˆ that represents this effect is eB0 ˆ Uˆ = −⃗m · ⃗B = Lz , 2me

(9.87)


where we assume for now that the g-factor is 1. On an eigenstate of hydrogen |n, ℓ, m⟩, this induces a change in the energy ∆E of ˆ ℓ, m⟩ = ∆E|n, ℓ, m⟩ = U|n,

9.8 (a)

eB0 m|n, ℓ, m⟩ . 2me

(9.88)

With an external magnetic field, the energy levels of hydrogen are no longer independent of m. Recall that the operators Tˆ and Sˆ are r r     1 ˆ me ˆ 1 ˆ me ˆ ˆ ˆ Ti = Li + − Ai , Si = Li − − Ai , (9.89) 2 2E 2 2E

when acting on an energy eigenstate of hydrogen. When acting on the ground state of hydrogen, |1, 0, 0⟩, this has 0 angular momentum, and so is annihilated by Lˆ i . Further, the Laplace–Runge–Lenz operator Aˆ i changes the angular momentum of the state without affecting its energy. However, the ground state of hydrogen is unique, with degeneracy 1, so there is no other state that Aˆ i can transform the ground state to. Therefore, the ground state is also annihilated by Aˆ i . Therefore, the ground state is annihilated by both Tˆi and Sˆi . (b) Note that the action of the raising and lowering angular momentum operators on a state |n, ℓ, m⟩ produce states Lˆ + |n, ℓ, m⟩ ∝ |n, ℓ, m + 1⟩ ,

Lˆ − |n, ℓ, m⟩ ∝ |n, ℓ, m − 1⟩ ,

(9.90)

assuming that m < ℓ and m > −ℓ. The action of the raising and lowering Laplace–Runge–Lenz operators on this state is Aˆ + |n, ℓ, m⟩ ∝ |n − 1, ℓ + 1, m + 1⟩ ,

Aˆ − |n, ℓ, m⟩ ∝ |n − 1, ℓ + 1, m − 1⟩ , (9.91)

assuming that ℓ < n. Therefore, the raising operators Tˆ+ and Sˆ+ produce a linear combination of states: Tˆ+ |n, ℓ, m⟩ = α |n, ℓ, m + 1⟩ + β |n − 1, ℓ + 1, m + 1⟩ , Sˆ+ |n, ℓ, m⟩ = γ |n, ℓ, m + 1⟩ + δ |n − 1, ℓ + 1, m + 1⟩ ,

(9.92)

for example, for some complex constants α , β , γ , δ . The lowering operators produce the corresponding linear combination of lowered states. What makes the Tˆ and Sˆ formalism powerful is that the states generated by, say, Tˆ+ and Sˆ+ are orthogonal. Note that, for example, (Sˆ+ )† = S− , and so † Sˆ+ |n, ℓ, m⟩ Tˆ+ |n, ℓ, m⟩ = ⟨n, ℓ, m|Sˆ− Tˆ+ |n, ℓ, m⟩ (9.93) = (⟨n, ℓ, m − 1|η + ⟨n + 1, ℓ − 1, m + 1|χ ) × (α |n, ℓ, m + 1⟩ + β |n − 1, ℓ + 1, m + 1⟩) = 0, where η , χ are other complex coefficients, because states with any different quantum numbers are orthogonal.


(c)

Doing the same exercise for Tˆz and Sˆz , the states that are produced are Tˆz |n, ℓ, m⟩ = α |n, ℓ, m⟩ + β |n − 1, ℓ + 1, m⟩ ,

(9.94)

and Sˆz produces the orthogonal linear combination. (d) To determine the form of the operators Tˆi and Sˆi in position space, we can first simplify the expression for the Laplace–Runge–Lenz operator Aˆ k . Noting that the angular momentum operator is 3

Lˆ k =



we note that 3



ϵ ( pˆi Lˆ j − Lˆ i pˆ j ) =

i, j=1 i jk

3



ϵ

i, j,l,m=1 i jk 3

=



ϵ

i, j=1 i jk

 ϵ

(9.95)

rˆi pˆ j ,

 lm j

pˆi rˆl pˆm − ϵ

lmi

rˆl pˆm pˆ j

(9.96)

(δim δkl − δil δkm ) pˆi rˆl pˆm

i,l,m=1 3

+



 δkm δ jl − δkl δ jm rˆl pˆm pˆ j

j,l,m=1 3

= ∑ pˆi rˆk pˆi − pˆi rˆi pˆk + rˆi pˆk pˆi − rˆk pˆ2i



i=1

3

= i¯h ∑ (1 − δik ) pˆk = −2i¯h pˆk . i=1

Therefore, the Laplace–Runge–Lenz operator is e2 rˆk h¯ , Aˆ k = −i pˆk − me 4π ϵ 0 rˆ

(9.97)

exactly what we had found for its action on the reduced space of ℓ = 0 states. The prefactor of this operator in the Tˆ or Sˆ operators is r me 4π ϵ 0 h¯ − (9.98) = (n + ℓ) . 2En+ℓ e2 Then, the Tˆz operator, for example, in position space is r   1 ˆ me ˆ ˆ Tz = Lz + − Az 2 2E    1 ∂ ∂ ∂ z = −i¯hx + i¯hy − (n + ℓ)¯h a0 + . 2 ∂y ∂x ∂z r

(9.99)

10

Approximation Techniques

Exercises 10.1 (a)

The complete Hamiltonian for this simple system is   E0 + ϵ ϵ ˆ . H= ϵ E1 + ϵ

(10.1)

Its eigenvalues λ satisfy the characteristic equation: (E0 + ϵ −λ )(E1 + ϵ −λ ) − ϵ 2 = 0 , or that E0 + E1 E1 − E0 λ= ± 2 2

s 1−4

ϵ2 +ϵ . (E1 − E0 )2

(10.2)

(10.3)

Taylor expanding to first order in epsilon, we find the eigenvalues

λ = E0 + ϵ , E1 + ϵ . (b)

(10.4)

From quantum mechanical perturbation theory, the first correction to the energies is the expectation value of the perturbation Hamiltonian on the unperturbed eigenstates. Because the unperturbed Hamiltonian is already diagonal, the ground state is   1 . (10.5) |ψ0 ⟩ = 0 The first correction to the ground state is then ∆E0 = ⟨ψ0 |Hˆ ′ |ψ0 ⟩ = ϵ .

(c)

We find the exact same correction for the first excited state energy by the symmetry of the matrix Hˆ ′ . Thus indeed, to lowest order the correction to the energies is to just increase them by ϵ , which agrees with the expansion of the exact result from part (a). √ The Taylor series of the function 1 − x converges for |x| < 1 and so the Taylor expansion of the square root in the exact result converges for 1>

97

(10.6)

4ϵ2 , (E1 − E0 )2

(10.7)

10 Approximation Techniques

98

or for E1 − E0 . (10.8) 2 That is, the perturbation had better be less than the difference in energy between the two states to honestly be a “perturbation.” The normalization of the wavefunction is ϵ
0, which makes r = 0 a hard boundary, like that of the infinite square well. No Maslov correction is needed for the infinite square well. Further, as r → ∞, the potential actually vanishes, and does not diverge like the harmonic oscillator. This significantly softens the suppression of the wavefunction at large radii compared to the infinite square well, again rendering a Maslov correction irrelevant. (d) With a potential of the form V (x) = k|x|α , the classical turning points correspond to (c)

En = k|x|α , or that

 xmin,max = ∓

En k

(10.75) 1/α .

(10.76)

Then, the quantization condition is 2 h¯

Z ( En )1/α k

dx′

0

  p 1 2m(En − kx′α ) = n + π, 2

(10.77)

by the symmetry about x = 0. Let’s now change variables to kx′α = En u , and so x′ =



En k

1/α

(10.78)

u1/α .

(10.79)

u α −1 du ,

(10.80)

The integration measure is 1 dx = α ′



En k

1/α

1

so the quantization becomes 2 h¯

Z ( En )1/α k 0

√   Z 1 2mEn En 1/α 1 du u α −1 (1 − u)1/2 α k 0 √  1/α 2 2mEn En Γ(1/α )Γ(3/2) = h¯ α k Γ(1/α + 3/2)   1 = n+ π, (10.81) 2

p 2 dx 2m(En − kx′α ) = h¯ ′

10 Approximation Techniques

108

where we have used the fact that the integral that remains is in the form of the Euler Beta function. First, for general α , the energies scale with n for large n like 1+ 1 α

En2

∝ n,

(10.82)

or that 2α

En ∝ n 2+α .

(10.83)

Note that if α = 2 we indeed reproduce the linear scaling of energies with level n. If instead α → ∞, we first note that the Gamma function has the limit lim Γ(1/α ) = α ,

α →∞

(10.84)

and so lim

α →∞

Γ(1/α )Γ(3/2) = 1. α Γ(1/α + 23 )

In this limit, the quantization condition becomes 2p 2mEn = nπ , h¯

(10.85)

(10.86)

and we have turned off the Maslov correction because α → ∞ is a singular limit. The solution of this equation is
$$E_n = \frac{n^2\pi^2\hbar^2}{2m\cdot 2^2}\,. \qquad (10.87)$$

As α → ∞, the potential diverges beyond |x| = 1 and is 0 for −1 < x < 1. Thus, this is an infinite square well with width a = 2, and the Bohr-Sommerfeld quantization condition predicts exactly that.

10.8 (a) The power method would predict that the ground state is approximately proportional to
$$|1\rangle \simeq \hat{H}^N |\chi\rangle\,.$$

(10.88)

To estimate the ground state energy, we take the expectation value of the Hamiltonian on this state and then ensure it is properly normalized. So, the ground state energy is bounded as E1 ≤

⟨χ |Hˆ N Hˆ Hˆ N |χ ⟩ ⟨χ |Hˆ 2N+1 |χ ⟩ = . ⟨χ |Hˆ N Hˆ N |χ ⟩ ⟨χ |Hˆ 2N |χ ⟩

(10.89)

From the given form of the state |χ ⟩, the action of the Hamiltonian is  2N+1 ∞ ∞ βn me e4 Hˆ 2N+1 |χ ⟩ = ∑ β Hˆ 2N+1 |n⟩ = − ∑ n4N+2 |n⟩ . 2 2h 2(4 π ϵ ) ¯ 0 n=1 n=1 (10.90)



Therefore, the upper bound of the ground state energy is |βn |2



∑n=1 n4N+2 me e 4 E1 ≤ − . 2 2 ∞ |βn |2 2(4π ϵ 0 ) h¯ ∑n=1 4N

(10.91)

n

(b)

Let’s now consider the state | χ ⟩ = β1



1

∑ n |n⟩ ,

(10.92)

n=1

where $\beta_1$ is a normalization constant. Its value can be determined by demanding that $1 = \langle\chi|\chi\rangle = |\beta_1|^2$



1

∑ n2 =

n=1

π2 |β1 |2 . 6

Thus, the properly normalized state is √ ∞ 6 1 |n⟩ . |χ ⟩ = ∑ π n=1 n

(10.93)

(10.94)

The expectation value of the Hamiltonian on this state is then ˆ χ⟩ = − ⟨χ |H|

me e4 6 ∞ 1 ∑ . 2(4π ϵ 0 )2 h¯ 2 π 2 n=1 n4

(10.95)

The value of the sum that remains is π 4 /90 and so the expectation value is   me e4 π2 me e 4 ˆ ⟨χ |H|χ ⟩ = − ≈ 0.65794 − (10.96) 2(4π ϵ 0 )2 h¯ 2 15 2(4π ϵ 0 )2 h¯ 2 (c)

which is indeed larger than the ground state energy, as expected. From the result of part (a), the estimate of the ground state energy after N applications of the power method on this state is E1 ≤ −

(d)

1 ∑∞ me e 4 ζ (4N + 4) me e 4 n=1 n4N+4 = − . 1 2 ∞ 2 ζ (4N + 2) 2 2 2(4π ϵ 0 ) h¯ ∑n=1 n4N+2 2(4π ϵ 0 ) h¯

(10.97)

As N → ∞, the difference between 4N + 4 and 4N + 2 is negligible, and the ratio of the zeta function terms becomes unity, thus reproducing the expected ground state energy. For N = 1, the ratio of the zeta function terms is

$$\frac{\zeta(8)}{\zeta(6)} \approx 0.98696\,. \qquad (10.98)$$
For N = 2, the ratio is
$$\frac{\zeta(12)}{\zeta(10)} \approx 0.999252\,, \qquad (10.99)$$
well within one part in a thousand of the true ground state energy.
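The convergence of the power-method bound can be checked directly by summing the zeta series numerically (a sketch; direct partial sums converge quickly for arguments this large):

```python
def zeta(s, terms=100000):
    # Direct partial sum of the Riemann zeta series; the truncation error
    # is utterly negligible for s >= 6.
    return sum(n ** -s for n in range(1, terms + 1))

# Ratio zeta(4N+4)/zeta(4N+2) controlling the power-method estimate.
for N in (1, 2, 3):
    print(N, zeta(4 * N + 4) / zeta(4 * N + 2))
```

The ratios approach 1 rapidly with N, reflecting the exponential convergence of the power method toward the true ground state energy.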

10 Approximation Techniques

110

10.9

(a)

Note that we can write the kinetic energy as  1/2 |⃗p|2 K = me c 2 1 + 2 2 − me c2 me c  1/2 |⃗p|2 |⃗p|4 = me c 2 1 + 2 2 − 4 4 + · · · − me c2 2me c 8me c =

(10.100)

|⃗p|2 |⃗p|4 − 3 2 +··· . 2me 8me c

This can then be trivially promoted to a Hermitian operator on Hilbert space, which takes the form in position space of  2  2  2 h¯ 2 ∂ ∂2 ∂2 h¯ 4 ∂ ∂2 ∂2 Kˆ = − + + − + + +··· . 2me ∂ x2 ∂ z2 ∂ z2 8m3e c2 ∂ x2 ∂ z2 ∂ z2 (10.101) (b) For this and the next part of this problem, we are going to cheat a little bit. First, we note that on the ground state, the speed of the electron is about v ≃ α c, less than 1/100th the speed of light. So, the relativistic correction will be very small and so we can treat it as a perturbation, even in the power method. To see how this works, we will expand the power method approximation to first order in the relativistic correction to the Hamiltonian. That is, ⟨ψ |(Hˆ 0 + Hˆ rel )3 |ψ ⟩ (10.102) ⟨ψ |(Hˆ 0 + Hˆ rel )2 |ψ ⟩ ⟨ψ |Hˆ 03 + Hˆ 02 Hˆ rel + Hˆ 0 Hˆ rel Hˆ 0 + Hˆ rel Hˆ 02 + · · · |ψ ⟩ = , ⟨ψ |Hˆ 2 + Hˆ 0 Hˆ rel + Hˆ rel Hˆ 0 + · · · |ψ ⟩

E0,rel ≃

0

where the ellipses hide operators that are quadratic or higher order in Hˆ rel . Using the fact that |ψ ⟩ is the ground state of the unperturbed Hamiltonian, Hˆ 0 |ψ ⟩ = E0 |ψ ⟩, this ratio simplifies E0,rel ≃

E03 + 3E02 ⟨ψ |Hˆ rel |ψ ⟩ + · · · ≃ E0 + ⟨ψ |Hˆ rel |ψ ⟩ + · · · , E02 + 2E0 ⟨ψ |Hˆ rel |ψ ⟩ + · · ·

(10.103)

from Taylor expanding the denominator. That is, the lowest-order correction to the ground state energy due to relativistic effects is the expectation value of the relativistic Hamiltonian on the unperturbed ground state. This is exactly what we derived from perturbation theory, but here established it from the power method. With this result, we can then move to calculating the expectation value. Recall that the ground state wavefunction of the hydrogen atom exclusively has radial dependence. On a function f of radius r, we had shown in the previous chapter that   2 ∂ ∂2 ∂2 2 d f d2 f + . (10.104) + + f (r) = 2 2 2 ∂x ∂z ∂z r dr dr2

Exercises

111

Now, the expectation value of the relativistic correction can be expressed as ! ! ˆ2 ˆ2 1 |⃗p| |⃗p| ˆ ⟨ψ |Hrel |ψ ⟩ = − ⟨ψ | |ψ ⟩ . (10.105) 2me c2 2me 2me In the expectation value is just the unperturbed kinetic energy, and on the unperturbed ground state, we can use the fact that ˆ2 |⃗p| e2 1 . = E0 + 2me 4π ϵ 0 rˆ

(10.106)

Then, the expectation value of the relativistic correction is  2 1 e2 1 ˆ ⟨ψ |Hrel |ψ ⟩ = − ⟨ψ | E0 + |ψ ⟩ (10.107) 2me c2 4π ϵ 0 rˆ   1 e4 1 e2 1 2 =− E + ψ | | ψ ⟩ + ψ | ψ ⟩ . | E ⟨ ⟨ 0 0 2me c2 2π ϵ 0 rˆ (4π ϵ 0 )2 rˆ2 On the ground state, the expectation values are 1 ⟨ψ | |ψ ⟩ = rˆ 1 ⟨ψ | 2 |ψ ⟩ = rˆ

Z

1 4 ∞ dr r e−2r/a0 = , 3 a0 a0 0 Z 4 ∞ 2 dr e−2r/a0 = 2 . a0 a30 0

Then, the expectation value of the relativistic correction is   1 e2 2e4 2 ⟨ψ |Hˆ rel |ψ ⟩ = − E + E + 0 0 2me c2 2π ϵ 0 a0 (4π ϵ 0 )2 a20 =−

(c)

(10.108)

(10.109)

E02 5E02 (1 − 4 + 8) = − , 2me c2 2me c2

where I have used the expression for the ground state energy. As observed in the previous part, the lowest-order expansion of the power method exactly reproduces the first-order correction to the ground state energy from perturbation theory.
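The ingredients of this result are easy to verify numerically: the ground-state expectation values of $1/\hat{r}$ and $1/\hat{r}^2$ by quadrature, and the overall size of the shift in eV (a sketch; the values $E_0 = -13.6$ eV and $m_e c^2 = 511$ keV are standard inputs):

```python
import numpy as np

def integral(f, r):
    # Simple trapezoidal rule.
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(r)))

a0 = 1.0                                         # units of the Bohr radius
r = np.linspace(1e-8, 60 * a0, 200001)
rho = (4 / a0**3) * r**2 * np.exp(-2 * r / a0)   # ground-state radial density

inv_r = integral(rho / r, r)
inv_r2 = integral(rho / r**2, r)
print(inv_r, inv_r2)   # close to 1/a0 and 2/a0**2

# Size of the leading relativistic shift, -5 E_0^2 / (2 m_e c^2), in eV.
E0, mec2 = -13.6, 511e3
print(-5 * E0**2 / (2 * mec2), "eV")  # a sub-milli-eV correction
```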

11

The Path Integral

Exercises 11.1

(a)

For a free particle that travels a distance $x$ in time $T$, its velocity is
$$v = \frac{x}{T}.\tag{11.1}$$
Further, the energy $E$ of the particle in terms of momentum is
$$E = \frac{p^2}{2m},\tag{11.2}$$
and so the difference is
$$px - ET = pvT - \frac{p^2}{2m}T = \frac{p^2}{m}T - \frac{p^2}{2m}T = \frac{p^2}{2m}T,\tag{11.3}$$
which is indeed the kinetic energy times the elapsed time: the classical action of a free particle.

(b) In terms of the angular velocity $\omega$, the elapsed angle over time $T$ is
$$\theta = \omega T,\tag{11.4}$$
and the angular momentum $L$ is
$$L = I\omega,\tag{11.5}$$
for moment of inertia $I$. The rotational kinetic energy $E$ is
$$E = \frac{1}{2}I\omega^2 = \frac{L^2}{2I},\tag{11.6}$$
and so the difference is
$$L\theta - ET = L\omega T - \frac{L^2}{2I}T = \frac{L^2}{I}T - \frac{L^2}{2I}T = \frac{L^2}{2I}T,\tag{11.7}$$
again the rotational kinetic energy times the elapsed time, or the classical action for a particle experiencing no torque.

(c) Assuming that the total energy is a constant value $E$, including time dependence is trivial:
$$\psi(x,t) \approx \exp\left[\frac{i}{\hbar}\int_{x_0}^{x}dx'\,\sqrt{2m(E-V(x'))} - i\frac{ET}{\hbar}\right]\psi(x_0),\tag{11.8}$$


where $T$ is the total elapsed time. We can change the integration variable to the time via $x' = vt = \frac{p}{m}t$, where $v$ is the velocity and $p$ is the momentum of the particle. In general, the momentum varies with position or time, but, as with the original WKB approximation, we assume that variation is small, so the integration measure is
$$dx' = \frac{p}{m}\,dt.\tag{11.9}$$
Then, the wavefunction is approximately
$$\psi(x,t) \approx \exp\left[\frac{i}{\hbar}\int_0^T dt\,\frac{2m(E-V)}{m} - i\frac{ET}{\hbar}\right]\psi(x_0),\tag{11.10}$$
because the momentum with fixed energy is $p = \sqrt{2m(E-V)}$. This can be rearranged to:
$$\psi(x,t) \approx \exp\left[\frac{i}{\hbar}\int_0^T dt\,\left[2(E-V)-E\right]\right]\psi(x_0)
= \exp\left[\frac{i}{\hbar}\int_0^T dt\,(E-2V)\right]\psi(x_0)
= \exp\left[\frac{i}{\hbar}\int_0^T dt\,(K-V)\right]\psi(x_0),\tag{11.11}$$
where we have used that the total energy is the sum of the kinetic and potential energies: $E = K+V$. Thus, what appears in the exponent is indeed the classical action, the time integral of the kinetic minus the potential energy. The general exact expression for the wavefunction at any later time requires the infinity of integrals of the path integral, but the WKB approximation assumes that the potential varies slowly enough that those infinity of integrals just produce an overall normalization, and only the first "step" of the path integral is needed.

11.2 (a) The Euler-Lagrange equation from a Lagrangian $L$ is
$$\frac{\partial L}{\partial x} - \frac{d}{dt}\frac{\partial L}{\partial \dot x} = 0.\tag{11.12}$$
For the harmonic oscillator's Lagrangian, the necessary derivatives are
$$\frac{\partial L}{\partial x} = -m\omega^2 x, \qquad -\frac{d}{dt}\frac{\partial L}{\partial \dot x} = -\frac{d}{dt}m\dot x = -m\ddot x.\tag{11.13}$$
Therefore, the Euler-Lagrange equation is
$$-m\omega^2 x - m\ddot x = 0,\tag{11.14}$$
as claimed.

(b) Simplifying the equation of motion, we have
$$\ddot x + \omega^2 x = 0,\tag{11.15}$$
for which the solutions are linear combinations of sines and cosines:
$$x(t) = a\cos(\omega t) + b\sin(\omega t),\tag{11.16}$$


where $a, b$ are integration constants. At time $t=0$, the position is $x(0) = x_i$ and so $a = x_i$. At $t=T$, the position is $x_f$, and so the constant $b$ is found from
$$x_f = x_i\cos(\omega T) + b\sin(\omega T),\tag{11.17}$$
and so
$$b = \frac{x_f - x_i\cos(\omega T)}{\sin(\omega T)}.\tag{11.18}$$
Thus, the general solution for the trajectory that satisfies the boundary conditions is
$$x(t) = x_i\cos(\omega t) + \frac{x_f - x_i\cos(\omega T)}{\sin(\omega T)}\sin(\omega t).\tag{11.19}$$

(c) The classical action is
$$S[x] = \frac{m}{2}\int_0^T dt\,\left(\dot x^2 - \omega^2x^2\right)
= \frac{m}{2}\int_0^T dt\,\left[\frac{d}{dt}(x\dot x) - x\ddot x - \omega^2x^2\right]
= \frac{m}{2}\int_0^T dt\,\left[\frac{d}{dt}(x\dot x) - x\left(\ddot x + \omega^2 x\right)\right].\tag{11.20}$$
In these expressions, we have used integration by parts to move one of the time derivatives. Then, $\ddot x + \omega^2 x = 0$ by the equations of motion and so the classical action is
$$S[x] = \frac{m}{2}\int_0^T dt\,\frac{d}{dt}(x\dot x) = \frac{m}{2}\left(x(T)\dot x(T) - x(0)\dot x(0)\right).\tag{11.21}$$
Note that the first time derivative of the trajectory is
$$\dot x(t) = -\omega x_i\sin(\omega t) + \omega\frac{x_f - x_i\cos(\omega T)}{\sin(\omega T)}\cos(\omega t).\tag{11.22}$$
At $t=0$ and $t=T$, we have
$$\dot x(0) = \omega\frac{x_f - x_i\cos(\omega T)}{\sin(\omega T)}, \qquad
\dot x(T) = -\omega x_i\sin(\omega T) + \omega\frac{x_f - x_i\cos(\omega T)}{\sin(\omega T)}\cos(\omega T).\tag{11.23}$$
Then, the value of the classical action is
$$S[x] = \frac{m}{2}\left[-\omega x_ix_f\sin(\omega T) + \omega x_f\frac{x_f - x_i\cos(\omega T)}{\sin(\omega T)}\cos(\omega T) - \omega x_i\frac{x_f - x_i\cos(\omega T)}{\sin(\omega T)}\right]\tag{11.24}$$
$$= \frac{m\omega}{2}\,\frac{(x_f^2+x_i^2)\cos(\omega T) - x_ix_f\left(1+\sin^2(\omega T)+\cos^2(\omega T)\right)}{\sin(\omega T)}
= \frac{m\omega}{2}\,\frac{(x_f^2+x_i^2)\cos(\omega T) - 2x_ix_f}{\sin(\omega T)}.$$
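The closed form (11.24) can be checked against a direct numerical integration of the Lagrangian along the classical trajectory (11.19). A minimal sketch; the parameter values are arbitrary choices of mine:

```python
import math

def classical_action_numeric(m, w, T, xi, xf, n=200000):
    # Integrate L = (m/2)(xdot^2 - w^2 x^2) along the classical path (11.19)
    b = (xf - xi*math.cos(w*T)) / math.sin(w*T)
    x  = lambda t: xi*math.cos(w*t) + b*math.sin(w*t)
    xd = lambda t: -w*xi*math.sin(w*t) + w*b*math.cos(w*t)
    dt = T / n
    # midpoint rule
    return sum(0.5*m*(xd(t)**2 - w**2*x(t)**2)*dt
               for t in (dt*(k + 0.5) for k in range(n)))

def classical_action_closed(m, w, T, xi, xf):
    # Eq. (11.24)
    return 0.5*m*w*((xf**2 + xi**2)*math.cos(w*T) - 2*xi*xf) / math.sin(w*T)

m, w, T, xi, xf = 1.3, 0.7, 2.1, 0.4, -1.1
print(classical_action_numeric(m, w, T, xi, xf))
print(classical_action_closed(m, w, T, xi, xf))
```

The two printed values agree to high accuracy, as they must for any parameters with $\sin(\omega T)\neq0$.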


11.3 (a) Note that we can rearrange the eigenvalue equation to
$$\left[\frac{d^2}{dt^2} + \left(\omega^2-\lambda\right)\right]f_\lambda(t) = 0,\tag{11.25}$$
for which the solutions are
$$f_\lambda(t) = a\cos\left(t\sqrt{\omega^2-\lambda}\right) + b\sin\left(t\sqrt{\omega^2-\lambda}\right),\tag{11.26}$$
for integration constants $a, b$. Demanding that $f_\lambda(0) = 0$ just sets $a = 0$, while $f_\lambda(T) = 0$ enforces that
$$T\sqrt{\omega^2-\lambda} = n\pi,\tag{11.27}$$
for $n = 1,2,3,\dots$. Rearranging, we have
$$\lambda = \omega^2 - \frac{n^2\pi^2}{T^2} = -\frac{n^2\pi^2}{T^2}\left(1 - \frac{\omega^2T^2}{n^2\pi^2}\right).\tag{11.28}$$

(b) With the eigenvalues, we can evaluate the determinant as the product of all eigenvalues. This is
$$\det\left(\frac{d^2}{dt^2}+\omega^2\right) = \det A = \prod_{n=1}^\infty\left[-\frac{n^2\pi^2}{T^2}\left(1-\frac{\omega^2T^2}{n^2\pi^2}\right)\right]
= \frac{\sin(\omega T)}{\omega T}\prod_{n=1}^\infty\left(-\frac{n^2\pi^2}{T^2}\right).\tag{11.29}$$

(c) Interestingly, this still involves an infinite product of increasingly divergent factors. This can be understood when comparing to the result we found with the methods introduced in section 11.5.4 of the textbook. We had found the determinant of the operator $A$ to be, for time step $\Delta t = T/N$,
$$\det A = \lim_{N\to\infty}(-1)^N\frac{N}{\Delta t^2}\frac{\sin(\omega T)}{\omega T} = \lim_{N\to\infty}\left(-\frac{N^3}{T^2}\right)\frac{\sin(\omega T)}{\omega T}.\tag{11.30}$$
If these two expressions are to agree, the remaining infinite product must be equal to the limit:
$$\prod_{n=1}^\infty\left(-\frac{n^2\pi^2}{T^2}\right) = \lim_{N\to\infty}\left(-\frac{N^3}{T^2}\right).\tag{11.31}$$
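The step in (11.29) that trades the product over $1-\omega^2T^2/n^2\pi^2$ for $\sin(\omega T)/\omega T$ uses Euler's product formula for the sine, which can be checked numerically with a truncated product (the truncation level is my choice):

```python
import math

def sine_product(x, N):
    # Truncated Euler product: sin(x)/x = prod_{n=1}^{inf} (1 - x^2 / (n^2 pi^2))
    p = 1.0
    for n in range(1, N + 1):
        p *= 1.0 - x*x / (n*n*math.pi*math.pi)
    return p

x = 1.7  # plays the role of omega*T
print(sine_product(x, 100000), math.sin(x)/x)
```

The truncated product converges to $\sin(x)/x$ like $1/N$, so the two printed values agree to several digits.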

Of course, the equality is only formal, because both sides are infinite.

11.4 (a) The harmonic oscillator path integral is
$$Z = e^{\frac{i}{\hbar}S[x_{\rm class}]}\sqrt{\frac{m}{2\pi i\hbar T}}\sqrt{\frac{\omega T}{\sin(\omega T)}}.\tag{11.32}$$
Taking the $\omega\to0$ limit of the right-most square-root factor produces
$$\lim_{\omega\to0}\sqrt{\frac{\omega T}{\sin(\omega T)}} = \lim_{\omega\to0}\sqrt{\frac{\omega T}{\omega T}} = 1.\tag{11.33}$$


Thus, the free-particle path integral is
$$Z = e^{\frac{i}{\hbar}S[x_{\rm class}]}\sqrt{\frac{m}{2\pi i\hbar T}},\tag{11.34}$$
in agreement with the example.

(b) The path integral expression from equation 11.79 is
$$Z = \sum_{n=1}^\infty e^{-i\frac{TE_n}{\hbar}}\,\psi_n(x_f)\,\psi_n^*(x_i).\tag{11.35}$$
Passing this to continuous momentum eigenstates, the free particle path integral becomes
$$Z = \int_{-\infty}^\infty \frac{dp}{2\pi\hbar}\, e^{-i\frac{Tp^2}{2m\hbar}}\, e^{i\frac{(x_f-x_i)p}{\hbar}},\tag{11.36}$$
using the classical energy for a free particle, $E = p^2/2m$. Note the factor of $2\pi\hbar$ in the measure; this is to ensure that the dimensions of the path integral are still inverse distance. Now, we will make the change of variables to
$$u^2 = i\frac{T}{2m\hbar}p^2,\tag{11.37}$$
or that
$$p = \sqrt{\frac{2m\hbar}{iT}}\,u.\tag{11.38}$$
The path integral then becomes
$$Z = \sqrt{\frac{m}{2\pi i\hbar T}}\frac{1}{\sqrt\pi}\int_{-\infty}^\infty du\, e^{-u^2 + i(x_f-x_i)\sqrt{\frac{2m}{i\hbar T}}\,u}\tag{11.39}$$
$$= \sqrt{\frac{m}{2\pi i\hbar T}}\frac{1}{\sqrt\pi}\int_{-\infty}^\infty du\, e^{-u^2 + i(x_f-x_i)\sqrt{\frac{2m}{i\hbar T}}\,u - (x_f-x_i)^2\frac{im}{2\hbar T} + (x_f-x_i)^2\frac{im}{2\hbar T}}$$
$$= \sqrt{\frac{m}{2\pi i\hbar T}}\,e^{i\frac{m}{2}\frac{(x_f-x_i)^2}{\hbar T}}\frac{1}{\sqrt\pi}\int_{-\infty}^\infty du\, e^{-u^2}
= \sqrt{\frac{m}{2\pi i\hbar T}}\,e^{i\frac{m}{2}\frac{(x_f-x_i)^2}{\hbar T}},$$
where we completed the square in the second line. This is exactly the free particle path integral that we calculated in the example.

11.5 Note that the path integral for the three-dimensional harmonic oscillator takes the form
$$Z_{3D} = \int[dx][dy][dz]\,\exp\left[\frac{i}{\hbar}\int dt\left(\frac{m}{2}\dot x^2+\frac{m}{2}\dot y^2+\frac{m}{2}\dot z^2-\frac{m\omega^2}{2}x^2-\frac{m\omega^2}{2}y^2-\frac{m\omega^2}{2}z^2\right)\right]\tag{11.40}$$
$$= \int[dx]\,\exp\left[\frac{i}{\hbar}\int dt\left(\frac{m}{2}\dot x^2-\frac{m\omega^2}{2}x^2\right)\right]
\int[dy]\,\exp\left[\frac{i}{\hbar}\int dt\left(\frac{m}{2}\dot y^2-\frac{m\omega^2}{2}y^2\right)\right]
\times\int[dz]\,\exp\left[\frac{i}{\hbar}\int dt\left(\frac{m}{2}\dot z^2-\frac{m\omega^2}{2}z^2\right)\right]$$


$$= \left(\int[dx]\,\exp\left[\frac{i}{\hbar}\int dt\left(\frac{m}{2}\dot x^2-\frac{m\omega^2}{2}x^2\right)\right]\right)^3 = Z_{1D}^3.$$
That is, the path integral of the three-dimensional harmonic oscillator is just the cube of the path integral for the one-dimensional harmonic oscillator, by the linearity of integration.

11.6 (a) For the free-particle action, we need the first time derivative of the trajectory. For this trajectory, we have

$$\dot x = \frac{2}{T}(x'-x_i)\,\Theta(T/2-t) + \frac{2}{T}(x_f-x')\,\Theta(T-t)\,\Theta(t-T/2),\tag{11.41}$$
where the Heaviside theta function $\Theta(x)$ is 1 if $x>0$ and 0 if $x<0$. The action of this trajectory is therefore
$$S[x] = \frac{m}{2}\int_0^T dt\left[\frac{4}{T^2}(x'-x_i)^2\,\Theta(T/2-t) + \frac{4}{T^2}(x_f-x')^2\,\Theta(T-t)\,\Theta(t-T/2)\right]
= \frac{m}{T}\left[(x'-x_i)^2 + (x_f-x')^2\right],\tag{11.42}$$
because the integrand is time independent (except for the Heaviside functions). This is just a parabola in the intermediate point $x'$ with a minimum where
$$\frac{d}{dx'}\left[(x'-x_i)^2 + (x_f-x')^2\right] = 0 = 2(x'-x_i) - 2(x_f-x'),\tag{11.43}$$
or that
$$x' = \frac{x_f+x_i}{2}.\tag{11.44}$$
At this point, the trajectory of the particle is a straight line for all time $0<t<T$, as expected for the physical, classical free particle.

(b) The time derivative of this trajectory is now

$$\dot x = \frac{1}{T}(x_f-x_i) + \frac{\pi A}{T}\cos\frac{\pi t}{T}.\tag{11.45}$$
The action of this trajectory is then
$$S[x] = \frac{m}{2}\int_0^T dt\left[\frac{1}{T}(x_f-x_i)+\frac{\pi A}{T}\cos\frac{\pi t}{T}\right]^2
= \frac{m}{2}\int_0^T dt\left[\frac{1}{T^2}(x_f-x_i)^2+\frac{\pi^2A^2}{T^2}\cos^2\frac{\pi t}{T}\right]
= \frac{m}{2}\left[\frac{1}{T}(x_f-x_i)^2+\frac{\pi^2A^2}{2T}\right],\tag{11.46}$$
where the cross term has vanished because $\int_0^T dt\,\cos(\pi t/T) = 0$. This is again a parabola, this time in the amplitude $A$, and it is clearly minimized when $A=0$, when there is no sinusoidal oscillation on the straight-line trajectory.
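The closed-form actions (11.42) and (11.46) can be checked by direct quadrature of the free-particle Lagrangian along the two model trajectories. A sketch with arbitrary parameter values of mine:

```python
import math

m, T, xi, xf = 1.0, 2.0, -0.3, 1.2
n = 100000
dt = T / n
ts = [dt*(k + 0.5) for k in range(n)]  # midpoints

def action(xdot):
    # Free-particle action S = (m/2) int_0^T xdot(t)^2 dt, midpoint rule
    return 0.5*m*sum(xdot(t)**2 for t in ts)*dt

def kinked(xp):
    # Velocity of the kinked path of part (a) with intermediate point xp at t = T/2
    return lambda t: 2*(xp - xi)/T if t < T/2 else 2*(xf - xp)/T

def wiggly(A):
    # Velocity of the straight line plus A*sin(pi t/T) of part (b)
    return lambda t: (xf - xi)/T + (math.pi*A/T)*math.cos(math.pi*t/T)

xp, A = 0.7, 0.5
print(action(kinked(xp)), m/T*((xp - xi)**2 + (xf - xp)**2))          # Eq. (11.42)
print(action(wiggly(A)), 0.5*m*((xf - xi)**2/T + math.pi**2*A**2/(2*T)))  # Eq. (11.46)
```

In each line the quadrature and the closed form agree, and sampling `xp` or `A` confirms the minima at $x'=(x_i+x_f)/2$ and $A=0$.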


(c) For this trajectory, the velocity is
$$\dot x = \frac{1}{T}(x_f-x_i) + \frac{n\pi}{T}(x_f-x_i)\cos\frac{n\pi t}{T}.\tag{11.47}$$
The action is then
$$S[x] = \frac{m}{2}\int_0^T dt\left[\frac{1}{T}(x_f-x_i)+\frac{n\pi}{T}(x_f-x_i)\cos\frac{n\pi t}{T}\right]^2
= \frac{m}{2}\int_0^T dt\left[\frac{1}{T^2}(x_f-x_i)^2+\frac{n^2\pi^2}{T^2}(x_f-x_i)^2\cos^2\frac{n\pi t}{T}\right]
= \frac{m}{2}\left[\frac{1}{T}(x_f-x_i)^2+\frac{n^2\pi^2}{2T}(x_f-x_i)^2\right],\tag{11.48}$$
where the cross term again integrates to zero: yet again a parabola, this time in the frequency $n$. And again, the action is minimized when $n=0$, eliminating the sinusoidal oscillation on top of the classical trajectory.

(d) Repeating the exercise for the harmonic oscillator with the same model trajectories is a very different analysis than for the free particle. The biggest difference is that there is no limit of the parameter in these model trajectories for which it reduces to the classical trajectory of the harmonic oscillator, so the trajectory that minimizes the action will be some approximation to the classical trajectory. Here, we will just present one of these calculations: the kinked trajectory from part (a). We have already calculated the time integral of the kinetic energy. The time integral of the potential energy in the action is

$$S[x] \supset -\frac{m\omega^2}{2}\int_0^T dt\left[\left(x_i+\frac{2t}{T}(x'-x_i)\right)^2\Theta(T/2-t) + \left(x'+\frac{2}{T}(t-T/2)(x_f-x')\right)^2\Theta(T-t)\,\Theta(t-T/2)\right]\tag{11.49}$$
$$= -\frac{m\omega^2T}{12}\left(x_f^2+x_i^2+(x_f+x_i)x'+2x'^2\right).$$
Then, the complete value of the action for this harmonic oscillator trajectory is
$$S[x] = \frac{m}{T}\left[(x'-x_i)^2+(x_f-x')^2\right] - \frac{m\omega^2T}{12}\left(x_f^2+x_i^2+(x_f+x_i)x'+2x'^2\right).\tag{11.50}$$
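The harmonic oscillator action (11.50) for the kinked trajectory, and the minimizing intermediate point (11.52) derived below, can be checked numerically. A sketch; the parameter values are mine and satisfy $\omega^2T^2 < 12$:

```python
m, w, T, xi, xf = 1.0, 0.9, 1.5, 0.2, 1.0

def S_kinked(xp, n=200000):
    # Full harmonic-oscillator action of the kinked path with midpoint xp (midpoint rule)
    dt = T / n
    S = 0.0
    for k in range(n):
        t = dt*(k + 0.5)
        if t < T/2:
            x, xd = xi + (2*t/T)*(xp - xi), 2*(xp - xi)/T
        else:
            x, xd = xp + (2/T)*(t - T/2)*(xf - xp), 2*(xf - xp)/T
        S += (0.5*m*xd**2 - 0.5*m*w**2*x**2)*dt
    return S

def S_closed(xp):
    # Eq. (11.50)
    return (m/T)*((xp - xi)**2 + (xf - xp)**2) \
         - (m*w**2*T/12)*(xf**2 + xi**2 + (xf + xi)*xp + 2*xp**2)

xp = 0.4
print(S_kinked(xp), S_closed(xp))

# Stationary point of Eq. (11.52): derivative of S_closed should vanish there
xs = (1 + (w*T)**2/24) / (1 - (w*T)**2/12) * (xf + xi)/2
eps = 1e-6
print((S_closed(xs + eps) - S_closed(xs - eps)) / (2*eps))
```

The second printed number is numerically zero, confirming that (11.52) is the stationary point of (11.50).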


This is still a parabola in the intermediate position $x'$. However, now its minimum is where
$$\frac{d}{dx'}\left[\frac{m}{T}\left((x'-x_i)^2+(x_f-x')^2\right) - \frac{m\omega^2T}{12}\left(x_f^2+x_i^2+(x_f+x_i)x'+2x'^2\right)\right]
= 0 = -\frac{2m}{T}(x_f+x_i-2x') - \frac{m\omega^2T}{12}(x_f+x_i+4x'),\tag{11.51}$$
or that
$$x' = \frac{1+\frac{\omega^2T^2}{24}}{1-\frac{\omega^2T^2}{12}}\,\frac{x_f+x_i}{2}.\tag{11.52}$$

What is interesting about this point is that it is always at a position larger than $(x_i+x_f)/2$ as long as the frequency is non-zero, $\omega>0$. Further, this trajectory is only a minimum of the action (for this parametrization) if the second derivative with respect to $x'$ is positive. The second derivative of the action is
$$\frac{d^2}{dx'^2}\left[\frac{m}{T}\left((x'-x_i)^2+(x_f-x')^2\right) - \frac{m\omega^2T}{12}\left(x_f^2+x_i^2+(x_f+x_i)x'+2x'^2\right)\right] = \frac{4m}{T} - \frac{m\omega^2T}{3} > 0.\tag{11.53}$$
This inequality can only be satisfied if
$$\omega^2 < \frac{12}{T^2}.\tag{11.54}$$
So, for sufficiently long times or high enough frequencies, this trajectory is not even a reasonable approximation to the classical trajectory.

Let's first consider the path integral of the particle that passes through the upper slit. We can use the formalism established in example 11.2 to do this. First, in time $\tau$, the particle travels from the origin to the upper slit, located at $\vec x_m = (d, s/2)$. The path integral for this is
$$Z_1 = e^{i\frac{m}{2}\frac{d^2+(s/2)^2}{\hbar\tau}}\sqrt{\frac{m}{2\pi i\hbar\tau}},\tag{11.55}$$

$$S_1^{(\alpha)} + S_2^{(\alpha)},\tag{12.13}$$
or that
$$\left((1-\epsilon)^\alpha + \epsilon^\alpha\right)^2 > (1-2\epsilon)^\alpha + 2\epsilon^\alpha.\tag{12.14}$$

Taking $\epsilon$ small (and importantly smaller than $\alpha-1$), we can Taylor expand in $\epsilon$ to yield the inequality
$$1 - 2\alpha\epsilon + \alpha(2\alpha-1)\epsilon^2 + \cdots > 1 - 2\alpha\epsilon + 2\alpha(\alpha-1)\epsilon^2 + \cdots,\tag{12.15}$$
where we assume that $\alpha>1$, which establishes that, for example, $\epsilon > \epsilon^\alpha$. Rearranging, this becomes
$$2\alpha - 1 > 2\alpha - 2,\tag{12.16}$$

which is indeed true. Therefore, for sufficiently small $\epsilon$, the Rényi entropy of the density matrix
$$\rho_{12} = (1-2\epsilon)\,|\uparrow_1\uparrow_2\rangle\langle\uparrow_1\uparrow_2| + \epsilon\,|\uparrow_1\downarrow_2\rangle\langle\uparrow_1\downarrow_2| + \epsilon\,|\downarrow_1\uparrow_2\rangle\langle\downarrow_1\uparrow_2|\tag{12.17}$$
violates subadditivity for all $\alpha>1$.
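The claimed violation can be spot-checked numerically for a representative case, e.g. $\alpha=2$ and $\epsilon=0.01$ (a sketch; the helper function and parameter values are mine):

```python
import math

def renyi(probs, alpha):
    # Renyi entropy S^(alpha) = log(sum_i p_i^alpha) / (1 - alpha), diagonal density matrix
    return math.log(sum(p**alpha for p in probs if p > 0)) / (1 - alpha)

eps, alpha = 0.01, 2.0
rho12 = [1 - 2*eps, eps, eps]        # Eq. (12.17): weights on |uu>, |ud>, |du>
rho1  = [1 - eps, eps]               # partial trace over spin 2
rho2  = [1 - eps, eps]               # partial trace over spin 1

S12 = renyi(rho12, alpha)
S1, S2 = renyi(rho1, alpha), renyi(rho2, alpha)
print(S12, S1 + S2, S12 > S1 + S2)
```

The output confirms $S_{12}^{(2)} > S_1^{(2)} + S_2^{(2)}$, i.e. subadditivity is violated for this state.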

12 The Density Matrix

(d) With an appropriate choice of basis, the most general density matrix of two spins is
$$\rho_{12} = a\,|\uparrow_1\uparrow_2\rangle\langle\uparrow_1\uparrow_2| + b\,|\uparrow_1\downarrow_2\rangle\langle\uparrow_1\downarrow_2| + c\,|\downarrow_1\uparrow_2\rangle\langle\downarrow_1\uparrow_2| + d\,|\downarrow_1\downarrow_2\rangle\langle\downarrow_1\downarrow_2|,\tag{12.18}$$
where $0\le a,b,c,d\le1$ and $a+b+c+d=1$. The Rényi entropy of this density matrix is
$$S_{12}^{(\alpha)} = \frac{1}{1-\alpha}\log\left(a^\alpha+b^\alpha+c^\alpha+d^\alpha\right).\tag{12.19}$$
The sum of the Rényi entropies of the partial traces is then
$$S_1^{(\alpha)} + S_2^{(\alpha)} = \frac{1}{1-\alpha}\left[\log\left((a+b)^\alpha+(c+d)^\alpha\right) + \log\left((a+c)^\alpha+(b+d)^\alpha\right)\right].\tag{12.20}$$

Violation of subadditivity means that
$$S_{12}^{(\alpha)} > S_1^{(\alpha)} + S_2^{(\alpha)},\tag{12.21}$$
or that, for $0<\alpha<1$,
$$a^\alpha+b^\alpha+c^\alpha+d^\alpha > \left((a+b)^\alpha+(c+d)^\alpha\right)\left((a+c)^\alpha+(b+d)^\alpha\right).\tag{12.22}$$
Let's now take $b=c=d=\epsilon$, and so the inequality becomes
$$(1-3\epsilon)^\alpha + 3\epsilon^\alpha > \left((1-2\epsilon)^\alpha + (2\epsilon)^\alpha\right)^2
= (1-2\epsilon)^{2\alpha} + (2\epsilon)^{2\alpha} + 2(2\epsilon)^\alpha(1-2\epsilon)^\alpha.\tag{12.23}$$

Now, let’s consider ϵ very small and so we can Taylor expand both sides of this inequality to lowest order in ϵ , assuming that α < 1 and 1 − α is much larger than ϵ . Taylor expanding, we find 1 − 3α ϵ +3 ϵ α > 1 − 4α ϵ +4α ϵ 2α +21+α ϵ α .

(12.24)

2α is clearly larger than α , so for small enough ϵ , we can ignore the ϵ 2α term. Rearranging, we have

α ϵ +(3 − 21+α ) ϵ α > 0 .

(12.25)
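The inequality (12.23) behind this expansion can be spot-checked numerically, e.g. at $\alpha = 1/2$ with small $\epsilon$ (a sketch; the helper function and parameter values are mine):

```python
import math

def renyi(probs, alpha):
    # Renyi entropy S^(alpha) = log(sum_i p_i^alpha) / (1 - alpha)
    return math.log(sum(p**alpha for p in probs)) / (1 - alpha)

eps, alpha = 1e-4, 0.5
rho12 = [1 - 3*eps, eps, eps, eps]   # b = c = d = eps in Eq. (12.18)
rho1  = [1 - 2*eps, 2*eps]           # partial trace over spin 2
rho2  = [1 - 2*eps, 2*eps]           # partial trace over spin 1

S12 = renyi(rho12, alpha)
S1, S2 = renyi(rho1, alpha), renyi(rho2, alpha)
print(S12, S1 + S2, S12 > S1 + S2)
```

The output shows $S_{12}^{(1/2)} > S_1^{(1/2)} + S_2^{(1/2)}$, a direct instance of the violation for $0<\alpha<1$.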

Note that the second term is positive as long as $3-2^{1+\alpha}>0$, or that
$$\alpha < \log_2\frac{3}{2}.$$

$$\cdots > \frac{\hbar^2}{2}.\tag{12.127}$$
This is satisfied if
$$2w(1-w) > \frac{1}{4},$$
or that
$$\frac{1}{2}\left(1-\frac{1}{\sqrt{2}}\right) < w < \frac{1}{2}\left(1+\frac{1}{\sqrt{2}}\right).$$

(c)