Mathematics for Physicists: Introductory Concepts and Methods Solutions to even-numbered problems ALEXANDER ALTLAND & JAN VON DELFT
January 9, 2019
Contents

Even-numbered solutions

SL  Problems: Linear Algebra
  S.L1  Mathematics before numbers: Sets and Maps; Groups; Fields
  S.L2  Vector spaces: Vector spaces: examples; Basis and dimension
  S.L3  Euclidean geometry: Normalization and orthogonality; Inner product spaces
  S.L4  Vector product: Algebraic formulation; Further properties of the vector product
  S.L5  Linear Maps: Linear maps; Matrix multiplication; The inverse of a matrix; General linear maps and matrices; Matrices describing coordinate changes
  S.L6  Determinants: Computing determinants
  S.L7  Matrix diagonalization: Matrix diagonalization; Functions of matrices
  S.L8  Unitarity and Hermiticity: Unitarity and orthogonality; Hermiticity and symmetry; Relation between Hermitian and unitary matrices
  S.L10 Multilinear algebra: Direct sum and direct product of vector spaces; Dual space; Tensors; Alternating forms; Wedge product; Inner derivative; Pullback; Metric structures

SC  Solutions: Calculus
  S.C1  Differentiation of one-dimensional functions: Definition of differentiability; Differentiation rules; Derivatives of selected functions
  S.C2  Integration of one-dimensional functions: One-dimensional integration; Integration rules; Practical remarks on one-dimensional integration
  S.C3  Partial differentiation: Partial derivative; Multiple partial derivatives; Chain rule for functions of several variables
  S.C4  Multi-dimensional integration: Cartesian area and volume integrals; Curvilinear area integrals; Curvilinear volume integrals; Curvilinear integration in arbitrary dimensions; Changes of variables in higher-dimensional integration
  S.C5  Taylor series: Complex Taylor series; Finite-order expansion; Solving equations by Taylor expansion; Higher-dimensional Taylor series
  S.C6  Fourier calculus: The δ-function; Fourier series; Fourier transform; Case study: Frequency comb for high-precision measurements
  S.C7  Differential equations: Separable differential equations; Linear first-order differential equations; Systems of linear first-order differential equations; Linear higher-order differential equations; General higher-order differential equations; Linearizing differential equations
  S.C8  Functional calculus: Euler-Lagrange equations
  S.C9  Calculus of complex functions: Holomorphic functions; Complex integration; Singularities; Residue theorem

SV  Solutions: Vector Calculus
  S.V1  Curves: Curve velocity; Curve length; Line integral
  S.V2  Curvilinear Coordinates: Cylindrical and spherical coordinates; Local coordinate bases and linear algebra
  S.V3  Fields: Definition of fields; Scalar fields; Extrema of functions with constraints; Gradient fields; Sources of vector fields; Circulation of vector fields; Practical aspects of three-dimensional vector calculus
  S.V4  Introductory concepts of differential geometry: Differentiable manifolds; Tangent space
  S.V5  Alternating differential forms: Cotangent space and differential one-forms; Pushforward and Pullback; Forms of higher degree; Integration of forms
  S.V6  Riemannian differential geometry: Definition of the metric on a manifold; Volume form and Hodge star
  S.V7  Differential forms and electrodynamics: Laws of electrodynamics II: Maxwell equations; Invariant formulation
SL
Problems: Linear Algebra
S.L1 Mathematics before numbers S.L1.1 Sets and Maps P
L1.1.2 Composition of maps
(a) Since 0² = 0, (±1)² = 1, (±2)² = 4, the image of S under A is T = A(S) = {0, 1, 4} .

(b) Since √0 = 0, √1 = 1, √4 = 2, the image of T under B is U = B(T) = {0, 1, 2} .

(c) The composite map C = B ∘ A is given by C : S → U, n ↦ C(n) = √(n²) = |n| .
(d) A, B and C are all surjective. B is injective and hence also bijective. A and C are not injective, since, e.g., the elements +2 and −2 have the same image under A, with A(±2) = 4, and similarly C(±2) = 2. Therefore, A and C are also not bijective.
S.L1.2 Groups P
L1.2.2 The groups of addition modulo 5 and rotations by multiples of 72 deg
(a) Composition table of (Z5, ⊕), addition modulo 5:

    ⊕ | 0 1 2 3 4
    --+----------
    0 | 0 1 2 3 4
    1 | 1 2 3 4 0
    2 | 2 3 4 0 1
    3 | 3 4 0 1 2
    4 | 4 0 1 2 3

The neutral element is 0. The inverse element of n ∈ Z5 is 5 − n.

(b) Composition table of (R72, •), rotations by multiples of 72 deg:

       •   |  r(0)   r(72)  r(144) r(216) r(288)
    -------+-------------------------------------
    r(0)   |  r(0)   r(72)  r(144) r(216) r(288)
    r(72)  |  r(72)  r(144) r(216) r(288) r(0)
    r(144) |  r(144) r(216) r(288) r(0)   r(72)
    r(216) |  r(216) r(288) r(0)   r(72)  r(144)
    r(288) |  r(288) r(0)   r(72)  r(144) r(216)

The neutral element is r(0). The inverse element of r(φ) is r(360 − φ).

(c) The groups (Z5, ⊕) and (R72, •) are isomorphic, because their group composition tables are identical if we identify the element n of Z5 with the element r(72n) of R72.

(d) The group (R360/n, •) of rotations by multiples of 360/n deg is isomorphic to the group (Zn, ⊕) of integer addition modulo n.
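As a quick numerical cross-check of (c), the following sketch (Python with numpy is assumed; it is an added illustration, not part of the original solution) builds rotations by multiples of 72 deg and verifies that n ↦ r(72n) respects the composition tables:

import numpy as np

def rot(phi_deg):
    """2x2 rotation matrix for an angle given in degrees."""
    phi = np.deg2rad(phi_deg)
    return np.array([[np.cos(phi), -np.sin(phi)],
                     [np.sin(phi),  np.cos(phi)]])

elements = list(range(5))                       # Z5 = {0, 1, 2, 3, 4}
rotations = {n: rot(72 * n) for n in elements}  # R72, labelled by n

for a in elements:
    for b in elements:
        z5_result = (a + b) % 5                       # composition in Z5
        rot_result = rotations[a] @ rotations[b]      # composition in R72
        # the product of the two rotations must be the rotation labelled by (a+b) mod 5
        assert np.allclose(rot_result, rotations[z5_result])

print("composition tables agree under n -> r(72 n)")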
P
L1.2.4 Group of discrete translations on a ring
(a) Consider the group axioms:

(i) Closure: by definition a, b ∈ Z ⇒ (a + b) mod N ∈ Z mod N. Thus: x, y ∈ G ⇒ ∃ nx, ny ∈ Z mod N : x = λnx, y = λny. It follows that T(x, y) = λ·(nx + ny) mod N ∈ λ·(Z mod N) = G. ✓

(ii) Associativity: The usual addition of integers is associative, m, n, l ∈ Z ⇒ (m + n) + l = m + (n + l), and this property remains true for addition modulo N. For x, y, z ∈ G we therefore have: T(T(x, y), z) = λ·((nx + ny) + nz)(mod N) = λ·(nx + (ny + nz))(mod N) = T(x, T(y, z)). ✓

(iii) Neutral element: The neutral element is 0 = λ·0 ∈ G: For all x ∈ G we have: T(x, 0) = λ·(nx + 0)(mod N) = x. ✓

(iv) Inverse element: The inverse element of n ∈ Z mod N is [N + (−n)] mod N ∈ Z mod N. Therefore the inverse element of x = λ·n ∈ G is given by −x ≡ λ·(N + (−n)) ∈ G, since T(x, −x) = λ·(n + (N + (−n)))(mod N) = λ·0(mod N) = 0. ✓

(v) Commutativity (for the group to be abelian): For all x, y ∈ G we have T(x, y) = λ·(nx + ny) mod N = λ·(ny + nx) mod N = T(y, x), since the usual addition of real numbers is commutative, and this property remains true for addition modulo N. ✓

Since (G, T) satisfies properties (i)-(v), it is an abelian group. ✓

(b) The group axioms of (T, ∘) follow directly from those of (G, T):

(i) Closure: Tx, Ty ∈ T ⇒ Tx ∘ Ty = T_{T(x,y)} ∈ T, since x, y ∈ G ⇒ T(x, y) ∈ G [see (a)]. ✓

(ii) Associativity: For Tx, Ty, Tz ∈ T we have: (Tx ∘ Ty) ∘ Tz = T_{T(x,y)} ∘ Tz = T_{T(T(x,y),z)} = T_{T(x,T(y,z))} = Tx ∘ T_{T(y,z)} = Tx ∘ (Ty ∘ Tz). ✓

(iii) Neutral element: The neutral element is T0 ∈ T: For all Tx ∈ T we have: Tx ∘ T0 = T_{T(x,0)} = T_{x+0} = Tx. ✓

(iv) Inverse element: The inverse element of Tx ∈ T is T−x ∈ T, where −x is the inverse element of x ∈ G with respect to T, since Tx ∘ T−x = T_{T(x,−x)} = T_{x+(−x)} = T0. ✓

(v) Commutativity (for the group to be abelian): For all x, y ∈ G we have Tx ∘ Ty = T_{T(x,y)} = T_{T(y,x)} = Ty ∘ Tx, since the composition rule T in G is commutative. ✓

Since (T, ∘) satisfies properties (i)-(v), it is an abelian group. ✓
L1.2.6 Decomposing permutations into sequences of pair permutations
P
(a) The permutation [132] is itself a pair permutation, as only the elements 2 and 3 are exchanged, hence its parity is odd.

(b) To obtain 123 ↦ 231 via pair permutations, we bring the 2 to the first slot, then the 3 to the second slot: 123 →[213] 213 →[321] 231, thus [231] = [321] ∘ [213], with even parity. Below we proceed similarly: we map the naturally-ordered string into the desired order one pair permutation at a time, moving from front to back:

(c) 1234 →[3214] 3214 →[1432] 3412  ⇒  [3412] = [1432] ∘ [3214] , even parity.

(d) 1234 →[3214] 3214 →[1432] 3412 →[2134] 3421  ⇒  [3421] = [2134] ∘ [1432] ∘ [3214] , odd parity.

(e) 12345 →[15342] 15342 →[13245] 15243 →[12435] 15234  ⇒  [15234] = [12435] ∘ [13245] ∘ [15342] , odd parity.

(f) 12345 →[32145] 32145 →[21345] 31245 →[15342] 31542  ⇒  [31542] = [15342] ∘ [21345] ∘ [32145] , odd parity.
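A small Python sketch (an added illustration, not part of the original solution) that encodes permutations in the one-line notation used above, composes them via (p ∘ q)(i) = p(q(i)), and re-checks the compositions and parities of (b)-(f):

def compose(p, q):
    """Composition (p o q)(i) = p(q(i)); permutations in one-line notation (tuples)."""
    return tuple(p[q[i] - 1] for i in range(len(p)))

def parity(p):
    """+1 for even, -1 for odd, from the number of inversions."""
    inv = sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])
    return +1 if inv % 2 == 0 else -1

# (b): [231] = [321] o [213], even
assert compose((3,2,1), (2,1,3)) == (2,3,1) and parity((2,3,1)) == +1
# (c): [3412] = [1432] o [3214], even
assert compose((1,4,3,2), (3,2,1,4)) == (3,4,1,2) and parity((3,4,1,2)) == +1
# (d): [3421] = [2134] o [1432] o [3214], odd
assert compose((2,1,3,4), compose((1,4,3,2), (3,2,1,4))) == (3,4,2,1) and parity((3,4,2,1)) == -1
# (e): [15234] = [12435] o [13245] o [15342], odd
assert compose((1,2,4,3,5), compose((1,3,2,4,5), (1,5,3,4,2))) == (1,5,2,3,4) and parity((1,5,2,3,4)) == -1
# (f): [31542] = [15342] o [21345] o [32145], odd
assert compose((1,5,3,4,2), compose((2,1,3,4,5), (3,2,1,4,5))) == (3,1,5,4,2) and parity((3,1,5,4,2)) == -1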
S.L1.3 Fields P
L1.3.2 Complex numbers – elementary computations
For z1 = 3 + ai and z2 = b − 2i (a, b ∈ R), we find [brackets give results for a = 2, b = 3]:

(a) z̄1 = 3 − ai   [3 − 2i]

(b) z1 − z2 = 3 − b + (a + 2)i   [4i]

(c) z1 z̄2 = (3 + ai)·(b + 2i) = −2a + 3b + i(6 + ab)   [5 + 12i]

(d) z̄1/z2 = (3 − ai)/(b − 2i) = (3 − ai)(b + 2i)/((b − 2i)(b + 2i)) = (2a + 3b + i(6 − ab))/(b² + 4)   [1]

(e) |z1| = √(9 + a²)   [√13]

(f) |b z1 − 3 z2| = |i(ab + 6)| = |ab + 6|   [12]

P

L1.3.4 Algebraic manipulations with complex numbers

(a) (z + i)² = (x + i(y + 1))² = x² − (y + 1)² + i·2x(y + 1) ,

(b) z/(z + 1) = (z/(z + 1))·((z̄ + 1)/(z̄ + 1)) = ((x + iy)(x + 1 − iy))/((x + 1 + iy)(x + 1 − iy))
    = (x(x + 1) + y² + i(y(x + 1) − xy))/((x + 1)² + y²) = (x(x + 1) + y² + iy)/((x + 1)² + y²) ,

(c) z̄/(z − i) = (z̄/(z − i))·((z̄ + i)/(z̄ + i)) = ((x − iy)(x + i(1 − y)))/((x + i(y − 1))(x − i(y − 1)))
    = (x² + y(1 − y) + i(x(1 − y) − yx))/(x² + (y − 1)²) = (x² + y(1 − y) + ix(1 − 2y))/(x² + (y − 1)²) .
L1.3.6 Multiplying complex numbers – geometrical interpretation
z1 = 1/√8 + i/√8   ↦ (1/√8, 1/√8) ,    ρ1 = √(1/8 + 1/8) = 1/2 ,   φ1 = arctan 1 = π/4
z2 = √3 − i         ↦ (√3, −1) ,        ρ2 = √(3 + 1) = 2 ,         φ2 = arctan(−1/√3) = 11π/6
z3 = z1 z2 = (1/√8 + i/√8)(√3 − i) = (√(3/8) + √(1/8)) + (√(3/8) − √(1/8)) i
                    ↦ (√(3/8) + √(1/8), √(3/8) − √(1/8)) ,
                                        ρ3 = √(3/8 + 1/8 + 3/8 + 1/8) = 1 ,   φ3 = arctan((√3 − 1)/(√3 + 1)) = π/12
z4 = 1/z1 = √8/(1 + i) = √8(1 − i)/((1 + i)(1 − i)) = √2 − √2 i   ↦ (√2, −√2) ,
                                        ρ4 = √(2 + 2) = 2 ,          φ4 = arctan(−1) = 7π/4
z5 = z̄1 = 1/√8 − i/√8   ↦ (1/√8, −1/√8) ,   ρ5 = √(1/8 + 1/8) = 1/2 ,   φ5 = arctan(−1) = 7π/4

As expected, we find:
ρ3 = ρ1 ρ2 ,   φ3 = φ1 + φ2 (mod 2π) ,
ρ4 = 1/ρ1 ,    φ4 = −φ1 (mod 2π) ,
ρ5 = ρ1 ,      φ5 = −φ1 (mod 2π) .

[Figure: the points z1, z2, z3 = z1 z2, z4 = 1/z1 and z5 = z̄1 in the complex plane, with the moduli ρi and angles φi listed above.]
S.L2 Vector spaces S.L2.3 Vector spaces: examples P
L2.3.2 Vector space axioms: complex numbers
First, we show that (C, +) forms an abelian group. Notation: zj = xj + iyj , j = 1, 2, 3. (i) Closure holds by definition: z1 + z2 ≡ (x1 + x2 ) + i(y1 + y2 ) ∈ C . X (ii)
Associativity:
(z1 + z2 ) + z3 = ((x1 + x2 ) + i(y1 + y2 )) + (x3 + iy3 ) = ((x1 + x2 ) + x3 ) + i((y1 + y2 ) + y3 ) = (x1 + (x2 + x3 )) + i(y1 + (y2 + y3 )) = z1 + (z2 + z3 ) . X
(iii) Neutral element: (iv) Additive inverse:
z + 0 = (x + 0) + i(y + 0) = x + iy = z . X with − z = (−x) + i(−y) is z + (−z) = (x − x) + i(y − y) = 0 . X
(v)
Commutativity:
z1 + z2 = (x1 + x2 ) + i(y1 + y2 ) = (x2 + x1 ) + i(y2 + y1 ) = z2 + z1 . X
Second, we show that multiplication by a real number, · , likewise has the properties required for (C, +, ·) to form a vector space. For λ, µ ∈ R, we have (vi) Closure: λz = λ(x + iy) = (λx) + i(λy) ∈ C, since λx, λy ∈ R
(vi) Multiplication of a sum of scalars and a complex number is distributive: (λ + µ) · z = (λ + µ)(x + iy) = (λ + µ)x + i(λ + µ)y = (λx + µx) + i(λy + µy) = λz + µz . X (vii) Multiplication of a scalar and a sum of complex numbers is distributive: λ(z1 + z2 ) = λ((x1 + x2 ) + i(y1 + y2 )) = λ(x1 + x2 ) + iλ(y1 + y2 ) = (λx1 + λx2 ) + i(λy1 + λy2 ) = λz1 + λz2 . X (viii) Multiplication of a product of scalars and a complex number is associative: λ · (µ · z) = λ · (µx + iµy) = (λµ)x + i(λµ)y = λµ(x + iy) = (λµ) · z . X (ix) Neutral element: 1 · z = 1 · (x + iy) = z . X Therefore, the triple (C, +, ·) represents an R-vector space. Remark: This R-vector space is two-dimensional (i.e. isomorphic to R2 ), since each complex number z = x + iy is represented by two real numbers, x and y. This fact is utilized when representing complex numbers as points in the complex plane, with coordinates x and y in the horizontal and vertical directions, respectively.
L2.3.4 Vector space of polynomials of degree n
P
(a) The definition of addition of polynomials and the usual addition rule in
R yield
pa (x) + pb (x) = a0 x0 + a1 x1 + . . . an xn + b0 x0 + b1 x1 + . . . bn xn = (a0 + b0 )x0 + (a1 + b1 )x1 + . . . (an + bn )xn = pa+b (x) , since a + b = (a0 + b0 , . . . , an + bn )T ∈ Rn+1 . Therefore pa
pb = pa+b . X
The definition for the multiplication of a polynomial with a scalar and the usual multiplication rule in R yield cpa (x) = c(a0 x0 + a1 x1 + . . . an xn ) = ca0 x0 + ca1 x1 + . . . can xn = pca (x) , since ca = (ca0 , . . . , can )T ∈ Rn+1 . Therefore c • pa = pca . X (b) We have to verify that all the axioms for a vector space are satisfied. First, (Pn , indeed has all the properties of an abelian group: (i)
)
Closure: adding two polynomials of degree n again yields a polynomial of degree at most n. X
(ii,v) Associativity and commutativity follow trivially from the corresponding properties of Rn+1 . For example, consider associativity: pa (pb pc ) = pa
pb+c = pa+(b+c) = p(a+b)+c = pa+b
pc = (pa pb ) pc . X
(iii) The neutral element is the null polynomial p0 , i.e. the polynomial whose coefficients are all equal to 0. X (iv) The additive inverse of pa is p−a . X
Moreover, multiplication of any polynomial with a scalar also has all the properties required for (Pn , , •) to be a vector space. Multiplication with a scalar c ∈ R satisfies closure, since c • pa = pca again yields a polynomial of degree n. X All the rules for multiplication by scalars follow directly from the corresponding properties of Rn+1 . X Each element pa ∈ Pn is uniquely identified by the element a ∈ Rn+1 – this identification is a bijection between Pn and Rn+1 , hence (Pn , , •) is isomorphic to Rn+1 and has dimension n + 1. X (c)
The bijection between Pn and Rn+1 associates the standard basis vectors in Rn+1 , namely ek = (0, . . . 1, . . . , 0)T (with a 1 at position k and 0 ≤ k ≤ n), with a basis in the vector space (Pn , , •), namely {pe0 , . . . , pen }, corresponding to the monomials {1, x, x2 , . . . , xn }, since pek (x) = xk . This statement corresponds to the obvious fact that every polynomial of degree n can be written as linear combination of monomials of degree ≤ n.
L2.3.6 Vector space with unusual composition rule – multiplication
P
(a) First, we show that (Va ,
) forms an abelian group.
(i)
Closure holds by definition. X
(ii)
Associativity:
(vx vy ) vz = vx+y−a vz = v(x+y−a)+z−a = vx+y+z−2a = vx+(y+z−a)−a = vx vy+z−a = vx (vy vz ). X
(iii) Neutral element:
vx va = vx+a−a = vx , ⇒ 0 = va . X
(iv) Additive inverse: vx v−x+2a = vx+(−x+2a)−a = va = 0, ⇒ −vx = v−x+2a . X (v)
Commutativity :
vx vy = vx+y−a = vy+x−a = vy vx . X
(b) To ensure that the triple (Va , , ·) forms an R-vector space, scalar multiplication, · , which by definition satisfies closure, also has to have the following four properties, each of which amounts to a condition on the form of f : (vi) Multiplication of a sum of scalars and a vector is distributive: (vii) Multiplication of a scalar and a sum of vectors is distributive: (viii) Multiplication of a product of scalars and a vector is associative: (ix) Neutral element:
(γ + λ) · vx = γ · vx
λ · vx
v(γ+λ)x+f (a,γ+λ) = vγx+f (a,γ)+λx+f (a,λ)−a f (a, γ + λ) = f (a, γ) + f (a, λ) − a λ · (vx + vy ) = λ · vx
(1a)
λ · vy
vλ(x+y−a)+f (a,λ) = vλx+f (a,λ)+λy+f (a,λ)−a −λa + f (a, λ) = f (a, λ) + f (a, λ) − a
(1b)
(γλ) · vx = γ · (λ · vx ) v(γλ)x+f (a,γλ) = vγ(λx+f (a,λ))+f (a,γ) f (a, γλ) = γf (a, λ) + f (a, γ)
(1c)
1 · vx = vx vx+f (a,1) = vx f (a, 1) = 0
(1d)
Evidently, the form of f is fully determined by the distributivity condition (vii), since
(1b) yields f (a, λ) = a(1 − λ) . It is easy to check that this form also satisfies the equations (1a), (1c) and (1d) resulting from the other three conditions (vi), (viii) and (ix). (c)
The above arguments hold, too, if a and x are elements of Rn, for any positive integer n. In other words, there is nothing special about the case n = 2 considered above.
S.L2.4 Basis and dimension P
L2.4.2 Linear independence
(a) The three vectors are linearly independent if and only if the only solution to the equation 0 = a1 v1 + a2 v2 + a3 v3 = a1
1 2 3
+ a2
2 4 6
+ a3
−1 −1 0
,
with
aj ∈ R,
(1)
is the trivial one, a1 = a2 = a3 = 0. The vector equation (1) yields a system of three equations, (i)-(iii), one for each of the three components of (1), which we solve as follows: (i)
1a1 + 2a2 − 1a3 = 0
(ii)-(i)
(ii)
2a1 + 4a2 − 1a3 = 0
(iv) in (i)
(iii)
3a1 + 6a2 + 0a3 = 0
(iv) in (iii)
⇒
⇒
⇒
1
(iv) a1 = −2a2 (v)
a3 = 0
(vi)
0=0
2
(ii) minus (i) yields (iv): a = −2a . Inserting (iv) into (i) yields a3 = 0. (iv) into (iii) yields no new information. There are thus infinitely many non-trivial solutions (one for every value of a1 ∈ R), hence v1 , v2 and v3 are not linearly independent. (b) The desired vector v02 = (x, y, z)T should be linearly independent from v1 and v3 , i.e. its components x, y and z should be chosen such that the equation 0 = a1 v1 + a2 v02 + a3 v3 has no non-trivial solution, i.e. that it implies a1 = a2 = a3 = 0: (i)
1a1 + xa2 − 1a3 = 0
(ii)-(i)
(ii)
2a1 + ya2 − 1a3 = 0
(iv) in (i)
(iii)
3a1 + za2 + 0a3 = 0
(iv) in (iii)
⇒
⇒
⇒
(vi)
choose x = 1 , then a2 = 0.
(v)
choose y = 0 , then a3 = 0.
(iv)
choose z = 0 , then a1 = 0.
(iii) yields (iv): 3a1 = −za2; to enforce a1 = 0 we choose z = 0. (iv) inserted into (ii) yields (v): a3 = ya2; to enforce a3 = 0 we choose y = 0. (iv,v) inserted into (i) yields xa2 = 0; to enforce a2 = 0, we choose x = 1. Thus v′2 = (1, 0, 0)ᵀ is a choice for which v1, v′2 and v3 are linearly independent. This choice is not unique – there are infinitely many alternatives; one of them, e.g., is v′2 = (2, 4, 1)ᵀ.
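A compact numerical check of both parts (a Python/numpy sketch, added here for illustration):

import numpy as np

v1 = np.array([1, 2, 3]); v2 = np.array([2, 4, 6]); v3 = np.array([-1, -1, 0])
v2_new = np.array([1, 0, 0])

print(np.linalg.matrix_rank(np.column_stack([v1, v2, v3])))       # -> 2: v1, v2, v3 are dependent
print(np.linalg.matrix_rank(np.column_stack([v1, v2_new, v3])))   # -> 3: v1, v2', v3 are independent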
P
L2.4.4 Einstein summation convention
(a) ai bi = a1 b1 + a2 b2 = 1 · (−1) + 2 · x = 2x − 1 .
(a)
(b) ai aj bi bj = (ai bi )(aj bj ) = (2x − 1)(2x − 1) = 4x2 − 4x + 1 . (c)
(a)
a1 aj b2 bj = a1 b2 aj bj = 1 · x · (2x − 1) = 2x2 − x .
S.L3 Euclidean geometry S.L3.2 Normalization and orthogonality P
L3.2.2 Angle, orthogonal decomposition
(a)
cos(^(a, b)) =
√ √ √ 2· 2+0·1+ 2·1 a·b 3 √ = √ = kakkbk 2 4+0+2· 2+1+1
⇒
^(a, b) = R
3 2
c=q−p=
(b) Vector from P to Q:
d=r−p=
Vector from P to R:
d⊥
d
0 13a
ˆ)ˆ dk = (d · c c=
(d · c)c (13a) · 2 = kck2 9+4
Comp. of d perp. to c:
d⊥ = d − dk =
d
0 13a
6 −a 4
3 2
−6 =a 9
S
=a
6 4
d⊥ · dk = a2 6 · (−6) + 4 · 9 = 0. X
Consistency check:
s = r − d⊥ =
Coordinates of S: (c) Distance from R to S: Distance from P to S:
−1 −6 −a = −1 + 13a 9
−1 + 6a −1 + 4a
√ √ RS = kd⊥ k = a 36 + 81 = a 117 √ √ P S = kdk k = a 36 + 16 = a 52
S.L3.3 Inner product spaces P
Q c
P
Comp. of d parallel to c:
π 6
L3.3.2 Unconventional inner product
All the defining properties of an inner product are satisfied: (i) Symmetric: hx, yi = x1 y1 + x1 y2 + x2 y1 + 3x2 y2 = y1 x1 + y1 x2 + y2 x1 + 3y2 x2 = hy, xi . X (ii,iii) Linear: hλx + y, zi = (λx1 + y1 )z1 + (λx1 + y1 )z2 + (λx2 + y2 )z1 + 3(λx2 + y2 )z2 = (λx1 z1 + λx1 z2 + λx2 z1 + 3λx2 z2 ) + (y1 z1 + y1 z2 + y2 z1 + 3y2 z2 )
= λ⟨x, z⟩ + ⟨y, z⟩ . ✓
(iv) Positive definite: ⟨x, x⟩ = x1x1 + x1x2 + x2x1 + 3x2x2 = (x1 + x2)² + 2x2² ≥ 0 . ✓ If ⟨x, x⟩ = 0, then x = (0, 0)ᵀ . ✓
P
L3.3.4 Projection onto an orthonormal basis

(a) ⟨e′1, e′1⟩ = (1/81)[4² + (−1)² + 8²] = 1 ,      ⟨e′1, e′2⟩ = (1/81)[4·(−7) + (−1)·4 + 8·4] = 0 ,
    ⟨e′2, e′2⟩ = (1/81)[(−7)² + 4² + 4²] = 1 ,       ⟨e′1, e′3⟩ = (1/81)[4·(−4) + (−1)·(−8) + 8·1] = 0 ,
    ⟨e′3, e′3⟩ = (1/81)[(−4)² + (−8)² + 1²] = 1 ,    ⟨e′2, e′3⟩ = (1/81)[(−4)·(−7) + 4·(−8) + 4·1] = 0 .

The three vectors are normalized and orthogonal to each other, ⟨e′i, e′j⟩ = δij, therefore they form an orthonormal basis of R³. ✓

(b) Since the vectors {e′1, e′2, e′3} form an orthonormal basis, the component wⁱ of the vector w = (1, 2, 3)ᵀ = e′i wⁱ with respect to this basis is given by the projection wⁱ = ⟨e′i, w⟩ (with e′i = e′ⁱ):

w¹ = ⟨e′1, w⟩ = (1/9)[4·1 + (−1)·2 + 8·3] = 26/9 ,
w² = ⟨e′2, w⟩ = (1/9)[(−7)·1 + 4·2 + 4·3] = 13/9 ,
w³ = ⟨e′3, w⟩ = (1/9)[(−4)·1 + (−8)·2 + 1·3] = −17/9 .
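A quick numerical cross-check (a Python/numpy sketch, not part of the original solution):

import numpy as np

e1 = np.array([ 4, -1,  8]) / 9   # e'_1
e2 = np.array([-7,  4,  4]) / 9   # e'_2
e3 = np.array([-4, -8,  1]) / 9   # e'_3
w  = np.array([ 1,  2,  3])

E = np.column_stack([e1, e2, e3])
assert np.allclose(E.T @ E, np.eye(3))   # orthonormality: <e'_i, e'_j> = delta_ij

coeffs = E.T @ w                         # projections w^i = <e'_i, w>
print(coeffs * 9)                        # -> [26. 13. -17.], i.e. 26/9, 13/9, -17/9
assert np.allclose(E @ coeffs, w)        # w = sum_i e'_i w^i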
L3.3.6 Gram-Schmidt orthonormalization
Strategy: iterative orthogonalization and normalization, starting from v1,⊥ = v1:

(a) Starting vector:     v1,⊥ = v1 = (0, 3, 0)ᵀ
    Normalizing v1,⊥:    e′1 = v1,⊥/‖v1,⊥‖ = (0, 1, 0)ᵀ
    Orthogonalizing v2:  v2,⊥ = v2 − e′1⟨e′1, v2⟩ = (1, −3, 0)ᵀ − (0, 1, 0)ᵀ(−3) = (1, 0, 0)ᵀ
    Normalizing v2,⊥:    e′2 = v2,⊥/‖v2,⊥‖ = (1, 0, 0)ᵀ
    Orthogonalizing v3:  v3,⊥ = v3 − e′1⟨e′1, v3⟩ − e′2⟨e′2, v3⟩ = (2, 4, −2)ᵀ − (0, 1, 0)ᵀ(4) − (1, 0, 0)ᵀ(2) = (0, 0, −2)ᵀ
    Normalizing v3,⊥:    e′3 = v3,⊥/‖v3,⊥‖ = (0, 0, −1)ᵀ

(b) Starting vector:     v1,⊥ = v1 = (−2, 0, 2)ᵀ
    Normalizing v1,⊥:    e′1 = v1,⊥/‖v1,⊥‖ = (1/√2)(−1, 0, 1)ᵀ
    Orthogonalizing v2:  v2,⊥ = v2 − e′1⟨e′1, v2⟩ = (2, 1, 0)ᵀ − (1/√2)(−1, 0, 1)ᵀ(−√2) = (1, 1, 1)ᵀ
    Normalizing v2,⊥:    e′2 = v2,⊥/‖v2,⊥‖ = (1/√3)(1, 1, 1)ᵀ
    Orthogonalizing v3:  v3,⊥ = v3 − e′1⟨e′1, v3⟩ − e′2⟨e′2, v3⟩ = (3, 6, 5)ᵀ − (1/√2)(−1, 0, 1)ᵀ(√2) − (1/√3)(1, 1, 1)ᵀ(14/√3) = (2/3)(−1, 2, −1)ᵀ
    Normalizing v3,⊥:    e′3 = v3,⊥/‖v3,⊥‖ = (1/√6)(−1, 2, −1)ᵀ
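A short Python/numpy sketch of the same Gram-Schmidt procedure (an added illustration, not from the original text), applied to the vectors of part (a):

import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize a list of linearly independent vectors, in order."""
    basis = []
    for v in vectors:
        v_perp = v - sum(e * np.dot(e, v) for e in basis)   # subtract projections onto earlier e'
        basis.append(v_perp / np.linalg.norm(v_perp))        # normalize
    return basis

v1, v2, v3 = np.array([0., 3, 0]), np.array([1., -3, 0]), np.array([2., 4, -2])
e1, e2, e3 = gram_schmidt([v1, v2, v3])
print(e1, e2, e3)   # e'_1 = (0,1,0), e'_2 = (1,0,0), e'_3 = (0,0,-1), as found above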
P
L3.3.8 Non-orthonormal basis vectors and metric
(a) To express the standard basis vectors as linear combinations of vˆ1 , vˆ2 and vˆ3 , we ˆi = v ˆ j aji , for i = 1, 2, 3. For e ˆ1 , e.g., it has the form solve the vector equation e
1
2
1
1
0 = 1 a11 + 0 a11 + 1 a11 , 0
2
1
0
implying the following three equations: (i)
2a11 + 1a21 + 1a31 = 1
(iv),(v)
(ii)
1a11 + 0a21 + 1a31 = 0
(ii)
(iii)
2a11 + 1a21 + 0a31 = 0
(iii)
⇒
⇒ ⇒
(vi)
a11 = −1 ,
(iv)
a31 = −a11 = 1 ,
(v)
a21 = −2a11 = 2 .
Proceeding similarly for e2 and e3 , we obtain the following representations of the standard basis vectors:
2
ˆ1 = −ˆ ˆ3 e v1 + 2ˆ v2 + v
ˆ 1 − 2ˆ v v2 − 0ˆ v3
2
1
0
0
1
1
0
X
= 1 − 2 0 + 0 1 = 1 , 2 2
ˆ1 − v ˆ2 − v ˆ3 v
1
X
1
ˆ3 = e
1
= − 1 + 2 0 + 1 = 0 , 2
ˆ2 = e
1
0
0
1
1
0
X
= 1 − 0 − 1 = 0 . 2
1
0
1
ˆ1, v ˆ 2 and v ˆ 3 form a basis, since they can represent the standard vectors The vectors v ˆ1 , e ˆ2 and e ˆ3 . [Note: since the basis {ˆ ˆ2, v ˆ 3 } is not orthonormal, the coefficients e v1 , v ˆ j , i.e. aji 6= hˆ ˆi iR3 .] aji are not given by the projection of ei onto the basis vectors v vj , e ˆ and y ˆ as column vectors in the standard basis of (b) A representation of the vectors x R3 can be found as follows:
1
2
3
ˆ =v ˆ1x + v ˆ2x + v ˆ3x , x x1 = 2, x2 = −5, x3 = 3
2
1
1
ˆ = 1 2 + 0 (−5) + 1 3 = ⇒ x 2
1
0
2 5 −1
,
1
2
ˆ =v ˆ1y + v ˆ2y + v ˆ3y , y y 1 = 4, y 2 = −1, y 3 = −2
Scalar product: (c)
(d)
1
1
5
ˆ = 1 4 + 0 (−1) + 1 (−2) = 2 . ⇒ y 2
ˆ iR3 = hˆ x, y
2
3
2 5 −1
0
1
7
5 ·
2 7
= 10 + 10 − 7 = 13 .
ˆ 1 iR3 = 9 , g11 = hˆ v1 , v
ˆ 2 iR3 = 4 , g12 = hˆ v1 , v
ˆ 3 iR3 = 3 , g13 = hˆ v1 , v
ˆ 1 iR3 = 4 , g21 = hˆ v2 , v
ˆ 2 iR3 = 2 , g22 = hˆ v2 , v
ˆ 3 iR3 = 1 , g23 = hˆ v2 , v
ˆ 1 iR3 = 3 , g31 = hˆ v3 , v
ˆ 2 iR3 = 1 , g32 = hˆ v3 , v
ˆ 3 iR3 = 2 . g33 = hˆ v3 , v
x1 = x1 g11 + x2 g21 + x3 g31 = 2 · 9 + (−5) · 4 + 3 · 3 = 7 , x2 = x1 g12 + x2 g22 + x3 g32 = 2 · 4 + (−5) · 2 + 3 · 1 = 1 , x3 = x1 g13 + x2 g23 + x3 g33 = 2 · 3 + (−5) · 1 + 3 · 2 = 7 . Expressed in terms of the covariant components xj (subscript index) and contravariant components y j (superscript index), the inner product takes the form: ˆ iR3 = hx, yig = xi gij y j = xj y j = x1 y 1 + x2 y 2 + x3 y 3 hˆ x, y = 7 · 4 + 1 · (−1) + 7 · (−2) = 28 − 1 − 14 = 13 . X
[= (b)]
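The metric of (c) and the contraction of (d) are easy to confirm numerically. The following Python/numpy sketch is an added cross-check; the basis vectors v̂1 = (2, 1, 2)ᵀ, v̂2 = (1, 0, 1)ᵀ, v̂3 = (1, 1, 0)ᵀ are read off from part (a):

import numpy as np

V = np.column_stack([[2, 1, 2], [1, 0, 1], [1, 1, 0]])   # columns are v1, v2, v3
g = V.T @ V                                              # metric g_ij = <v_i, v_j>
print(g)                     # -> [[9 4 3] [4 2 1] [3 1 2]]

x = np.array([2, -5, 3])     # contravariant components x^i of x in the basis {v_i}
y = np.array([4, -1, -2])    # contravariant components y^i of y

x_cov = g @ x                # covariant components x_j = x^i g_ij
print(x_cov)                 # -> [7 1 7]
print(x_cov @ y)             # x_j y^j = 13
print((V @ x) @ (V @ y))     # same inner product evaluated in the standard basis: 13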
S.L4 Vector product S.L4.2 Algebraic formulation P
L4.2.2 Elementary computations with vectors
Using a = (2, 1, 5)T and b = (−4, 3, 0)T , we obtain: (a)
kbk =
√
16 + 9 + 0 = 5
a − b = (2 − (−4), 1 − 3, 5 − 0)T = (6, −2, 5)T a · b = 2 · (−4) + 1 · 3 + 5 · 0 = −5 a×b=
(b)
ak =
2 1 5
! ×
−4 3 0
!
1·0−5·3 5 · (−4) − 2 · 0 2 · 3 − 1 · (−4)
!
=
−15 −20 10
!
=
a·b −5 −1 b= b= (−4, 3, 0)T 25 5 kbk2
a⊥ = a − ak = (2, 1, 5)T − (4/5, −3/5, 0)T = (6/5, 8/5, 5)T =
1 (6, 8, 25)T 5
(c)
−1 −1 b·b= · 25 = −5 = a · b X 5 5 1 a⊥ · b = (−24 + 24 + 0) = 0 X 5 ak · b =
−1 ak × b = 5
−4 3 0 6 8 25
×
!
1 a⊥ × b = 5
×
−4 3 0
!
!
−4 3 0
!
1 =− 5
3·0−0·3 0 · (−4) − (−4) · 0 (−4) · 4 − 3 · (−4)
0 − 75 −100 − 0 18 + 32
1 = 5
−15 −20 10
! =
0 0 0
! X
!
! =
=a×bX
As expected, we have: ak · b = a · b, a⊥ · b = 0, ak × b = 0 and a⊥ × b = a × b. X
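A numerical cross-check of this decomposition (a Python/numpy sketch, added for illustration):

import numpy as np

a = np.array([2., 1, 5])
b = np.array([-4., 3, 0])

a_par = (a @ b) / (b @ b) * b          # component of a parallel to b: (a.b/|b|^2) b
a_perp = a - a_par                     # component of a perpendicular to b

print(np.cross(a, b))                  # -> [-15. -20.  10.]
print(a_par)                           # -> (-1/5) b = [ 0.8 -0.6  0. ]
assert np.isclose(a_perp @ b, 0)                            # a_perp . b = 0
assert np.allclose(np.cross(a_par, b), 0)                   # a_par x b = 0
assert np.allclose(np.cross(a_perp, b), np.cross(a, b))     # a_perp x b = a x b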
P
L4.2.4 Levi-Civita tensor
(a) ai aj ij3 = bm bn mn2 is true , since both sides equal 0. Consider the l.h.s., for example. Writing out the sums over i and j explicitly, the only two terms for which all three indices on ij3 differ are ai aj ij3 = a1 a2 123 + a2 a1 213 = a1 a2 − a2 a1 = 0. More compactly, we may write the l.h.s. in vector notation as ai aj ij3 = [a × a]3 = 0. This vanishes, since it is the 3-component of the cross product of a vector with itself, which yields the null vector, a × a = 0. The r.h.s. vanishes for analogous reasons. For the remaining problems, we use the identity ijk mnk = δim δjn − δin δjm . To be able to apply it, it might be necessary to cyclicly rearrange indices on one of the Levi-Civita factors. (b)
1ik 23k = δ12 δi3 − δ13 δi2 = 0 .
(c)
2jk ki2 = 2jk i2k = δ2i δj2 − δ22 δij = δ2i δj2 − δij .
(d)
1ik k3j = 1ik 3jk = δ13 δij − δ1j δi3 = −δ1j δi3 .
S.L4.3 Further properties of the vector product P
L4.3.2 Lagrange identity (i)
(ii)
(a) (a × b)·(c × d) = (a × b)k (c × d)k = ai bj ij k cm dn mnk (iv)
(iii)
(v)
= ai bj cm dn (δim δj n − δin δj m ) = ai bj ci dj − ai bj cj di = (a·c)(b·d) − (a·d)(b·c)
Explanation: We (i) expressed the scalar product as a sum over the repeated index k; (ii) used the Levi-Civita representation of the cross product twice; (iii) performed the sum over the repeated index j in the product of two Levi-Civita tensors; (iv) performed the sums on the repeated indices m and n, exploiting the Kronecker-δs; and (v) identified the remaining sums on i and j as scalar products. We used horizontal brackets (‘contractions’) to indicate which repeated indices will be summed over in the next equation. (b)
ka × bk =
p
(a × b) · (a × b) =
q
a2 · b2 − (a · b)2 ,
a · b = kak · kbk cos φ
ka × bk = (c) Given:
q
kak2 · kbk2 (1 − cos2 φ) = kak · kbk | sin φ |
a = (2, 1, 0)T , b = (3, −1, 2)T , c = (3, 0, 2)T , d = (1, 3, −2)T
(a · c)(b · d) = 2 · 3 + 1 · 0 + 0 · 2 3 · 1 + (−1) · 3 + 2 · (−2) = 6 · (−4) = −24
(a · d)(b · c) = 2 · 1 + 1 · 3 + 0 · (−2) 3 · 3 + (−1) · 0 + 2 · 2 = 5 · 13 = 65
!
a×b=
2 1 0
!
c×d=
3 0 2
!
×
3 −1 2
!
×
1 3 −2
=
1 · 2 − 0 · (−1) 0·3−2·2 2 · (−1) − 1 · 3
=
0 · (−2) − 2 · 3 2 · 1 − 3 · (−2) 3·3−0·1
!
=
2 −4 −5
!
=
−6 8 9
!
!
(a × b)·(c × d) = 2 · (−6) + (−4) · 8 + (−5) · 9 = −89 = (a·c)(b·d) − (a·d)(b·c) X
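The identity of part (a) and the explicit numbers of part (c) can be verified directly (a Python/numpy sketch, added as a check):

import numpy as np

a = np.array([2., 1, 0]); b = np.array([3., -1, 2])
c = np.array([3., 0, 2]); d = np.array([1., 3, -2])

lhs = np.cross(a, b) @ np.cross(c, d)
rhs = (a @ c) * (b @ d) - (a @ d) * (b @ c)
print(lhs, rhs)          # -> -89.0 -89.0
assert np.isclose(lhs, rhs)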
P
L4.3.4 Scalar triple product
Let θ be the angle between v3 and e1 , then the parallelepiped is spanned by the unit vectors v1 = (cos φ2 , − sin φ2 , 0)T ,
v2 = (cos φ2 , sin φ2 , 0)T ,
v3 = (cos θ, 0, sin θ)T .
(The second component of the vector v3 is zero, since its projection into the e1 -e2 plane lies parallel to e1 , see figure.) Since each pair of vectors mutually encloses the angle φ, we have: cos φ = v1 ·v2 = cos2 φ2 − sin2 φ2 , cos φ = v2 ·v3 = cos θ cos φ2 , cos φ = v3 ·v1 = cos θ cos φ2 . The first equality holds by construction. The other two fix the value of the angle θ, which is given by cos θ = cos φ/ cos( φ2 ) . The volume of the parallelepiped thus is:
! 0 cos φ2 cos φ2 cos θ · 0 0 V (φ) =| [v1 × v2 ] · v3 | = − sin φ2 × sin φ2 · v3 = 2 cos φ2 sin φ2 sin θ 0 0 q p = sin φ sin θ = sin φ
1 − cos2 θ = sin φ
1 − cos2 φ/ cos2 ( φ2 ) .
Consistency checks: (i) For φ = 0 the three vectors are colinear (all parallel) and the volume is zero, V (0) = 0.X For φ = 2π the three vectors are coplanar (all in the same plane), and 3 with cos2 ( 2π ) = cos2 ( π3 ) the volume vanishes too, V ( 2π ) = 0.X A cube has φ = π2 and 3 3 unit volume, V ( π2 ) = 1.X Finally: V ( π3 ) =
√
3 2
q
1−
√ 2 1 2 / 23 2
S.L5 Linear Maps S.L5.1 Linear maps
=
√ 3 2
p2 3
=
√1 .X 2
P
L5.1.2 Checking linearity
The map is linear if F(v + w) = 2(v + w)^α is equal to F(v) + F(w) = 2v^α + 2w^α = 2(v^α + w^α). This requires α = 1 .
S.L5.3 Matrix multiplication P
L5.3.2 Matrix multiplication
2
0
3
−5
2
7
3
−3
7
2
4
0
PQ =
2
PR =
1
−1
0
2
1
0
3
−5
2
7
3
−3
7
2
4
0
−3
=
4
4
4
−4
−4
−4
6
−3
−1
4
4
4
−4
−1
0
−4
−4
6
2
1
6
−1
4
6
−1
4
4
4
−4
4
4
−4
−4
−4
6
−4
−4
6
RR =
2
8
10
1
=
.
2
0
−50
−14
26
−15
14
= −22
−43
28
14
−9
6
RQ =
5
−10
−1
6
0 27
10
−24
0
28
2
66
.
−8
.
16
−26
52
56
28
−24
−64
−36
36
=
.
L5.3.4 Spin-1 matrices
P
(a)
2
S =
1
0
2
=
1
0
1 0
1 0 1
0 1 0
0 1 0
1
1
0
1
+
0 2 0
2
1 0
1 0 1
2
1 0 1
1
+
0
−i
0
0
−i
0
i
0
−i
i
0
−i
i
0
i
0
2
0
1 0
−1
0 2
0
−1 0
1
1 +
0
0
0
2 =
0 0 0
0
1 +
0 0
1
0 0 −1
0
0 2 0
0 0 1
0
0 0
0
0
0 0
0
0 0 −1
= 2·1 .
0 0 2
This has the form S2 = s(s + 1)1, with s = 1 . X
(b)
[Sx , Sy ] = =
1
0
2
1
= [Sz , Sx ] = =
1 √ 2
1 √ 2
1 √ 2
1 √ 2
−i
0 0
0
i
0
i
0
−i
0
−i
0 0
0
i
i
−i −i
0
i
0
−i
0
i
0
i
−i
−
0 0
0
0
0
0
0
0
0 0
0
0 −
0
0
0
0 0
0
0
1 0
0
0 0
0 −1 0
0
0
−i
0
i
=
−
0 −
0
0
0
0 −2i
0
X
= iSz = ixyz Sz .
0
−i
0
0
i
0
−i
0 0 −1
0
i
0
0
0
0 0
0
i
i
0
1 0
1
0
0
1 0 1
0 0
0
1 √ 2
=
1 √ 2
0
X
= iSx = iyzx Sx .
0 0 −1
0 1 0 0
i
0 −
1 0 −1 0 0
0
0
=
0 1 0
0
0
i
0
0 1 0 0
0
0
1 0
1 0
1 0 1
0
1
1
0
2i
2
0 0 −i
1 0 1
0 0 −1
0
−i
0
0 0 0
1
−i
i
0 0 −1
i
0 −
0
1
0 0 0
−i
i
0 1 0
i
2
[Sy , Sz ] =
0
1 0
1 0 1
0
1 0
−1
0 1
0 −1 0
X
= iSy = izxy Sy .
X
Commutators are antisymmetric, hence [Sy , Sx ] = −[Sy , Sx ] = −iSz = iyxz Sz , and X
[Sx , Sx ] = 0 = ixxk Sk , etc. Clearly the spin-1 matrices do satisfy the SU(2) algebra.
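Since the matrix algebra above is hard to read in this extracted form, here is a Python/numpy sketch (added for illustration) that rebuilds the standard spin-1 matrices used in this problem and re-checks S² = 2·1 and the commutation relations:

import numpy as np

s = 1 / np.sqrt(2)
Sx = s * np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=complex)
Sy = s * np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]])
Sz = np.array([[1, 0, 0], [0, 0, 0], [0, 0, -1]], dtype=complex)

S2 = Sx @ Sx + Sy @ Sy + Sz @ Sz
assert np.allclose(S2, 2 * np.eye(3))        # S^2 = s(s+1) 1 with s = 1

def comm(A, B):
    return A @ B - B @ A

assert np.allclose(comm(Sx, Sy), 1j * Sz)    # [Sx, Sy] = i Sz
assert np.allclose(comm(Sy, Sz), 1j * Sx)    # [Sy, Sz] = i Sx
assert np.allclose(comm(Sz, Sx), 1j * Sy)    # [Sz, Sx] = i Sy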
P
L5.3.6 Matrix multiplication
(a)
(b)
a1
0
0
A=0
a2
0 ,
a3
0
0
(AB)ij =
X
Aik B kj =
0
B = 0
b2
0 ,
0
0
b3
X
k
b1
0
0
0
a1 b3
0
a2 b2
0
a3 b1
0
0
AB =
.
ai δ iN +1−k bk δ kj = δ iN +1−j ai bj .
k
AB =
0
...
0
a1
b1
0
...
0
0
...
0 . . .
...
a2
0 0 . .. 0
b2
...
0 . . .
=
0 . . .
. . . a2 bN −1
.
aN
..
... ...
0
0
..
.
. . . . . . bN
aN b1
.
a1 bN
0
..
0 0
...
...
.
0
S.L5.4 The inverse of a matrix P
L5.4.2 Gaussian elimination and matrix inversion
(a) For a = 13 we diagonalize A using Gaussian elimination (square brackets indicate which linear combinations of rows from the previous augmented matrix were used): [1] : 7
0
2
1
0
0
[2] : 0
5 −2
0
1
0
0
0
1
[3] : 2 −2
6
−→
[1] : 7
0
2
1
0
0
[2] : 0
5 −2
0
1
0
38 −2
0
7
7[3] − 2[1] : 0 −14
. [1] : 7 [2] : 0 14[2] + 5[3] : 0
0
2
1
5 −2
0
0 162 −10
0 1 14
0 0
[1] :
−→
[2] :
35
1 162 [3]
:
1 7 [1]
:
1 5 [2]
:
7
0
2
1
0
0
0
5 −2
0
1
0
5 − 81
7 81
0
0
1
35 162
. [1] − 2[3] : 7
0
91 81
0
14 − 81 − 35 81
[2] + 2[3] : 0
5
0 − 10 81
95 81
35 81
[3] : 0
0
5 1 − 81
7 81
35 162
−→
[3] :
1
0
0
13 81 2 − 81
19 81 7 81
0
1
0
0
0
5 1 − 81
2 5 − 81 − 81 7 81
35 162
We thus obtain:
A−1 =
1 162
26
−4
−4 −10 38
−10 14
14 35
⇒ x = A−1 b =
1 162
26
−4
−4 −10 38
−10 14
141 35
4 1
=
10 1 4 18 1
.
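The inverse and the solution vector can be confirmed numerically. The sketch below (Python/numpy, an added check) uses A = [[7, 0, 2], [0, 5, −2], [2, −2, 6]] and b = (4, 1, 1)ᵀ as read off from the elimination above:

import numpy as np

A = np.array([[7, 0, 2], [0, 5, -2], [2, -2, 6]], dtype=float)   # A for a = 1/3
b = np.array([4., 1, 1])

A_inv = np.linalg.inv(A)
print(np.round(162 * A_inv).astype(int))
# -> [[ 26  -4 -10]
#     [ -4  38  14]
#     [-10  14  35]]    i.e. A^{-1} = (1/162) * (matrix above)

x = A_inv @ b
print(18 * x)            # -> [10.  4.  1.], i.e. x = (1/18)(10, 4, 1)^T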
(b) The matrix A has no inverse if its determinant equals zero: 0 = det(A) = 5(8 − 3a)(5 + 3a) + 2(2 − 6a)(−4 + 6a) + 2(2 − 6a)(−4 + 6a) − 20 − (8 − 3a)(−4 + 6a)2 − (5 + 3a)(2 − 6a)2 = a2 (−45 − 144 − 144 − 288 − 180 + 72) + a(−75 + 120 + 96 + 48 + 48 + 384 + 120 − 12) + 200 − 32 − 20 − 128 − 20 = − 729a2 + 729a = 729a(1 − a) (c)
⇒
a= 0
a= 1 .
or
We first consider the case a = 0. Gaussian elimination yields: 1 [1] : 8
2
2
4
[2] : 2
5 −4
1
[3] : 2 −4
5
1 2 [1]
−→
:
[2] − [3] :
[1] : 4 −9
9
0
−→
0
[1] − 4[3] :
1
1 2
4
1 9 [2]
1
: 0
[3] − 2[2] : 0
18 −18 0
1
2
1 −1
0
0
0
0
0
The system is underdetermined. Its solution thus contains a free parameter, which we call x3 = λ. Then we obtain: [1] : 4
1
1 2
[2] : 0
1 −1 0
[3] : 0
0
−→
[1] − [3] : 4
1
0
2−λ
[2] + [3] : 0
1
0
λ
[3] : 0
0
1
λ
1 λ
1 4 ([1]
− [2]) : 1
0
0
1 2 (1−λ)
[2] : 0
1
0
λ
[3] : 0
0
1
λ
−→
For a = 0 there thus are infinitely many solutions, x = (1/2, 0, 0)ᵀ + λ(−1/2, 1, 1)ᵀ. They lie along a straight line in R³, parametrized by λ. We now consider the case a = 1. Gaussian elimination yields: [1] :
5 −4
[2] : −4
5
2 1
[3] :
2
8 1
2
[1] :
2 4
−→
5 −4
[2] − [1] : −9
2
1 4 ([1]
4
4[1] − [3] : 18 −18
5 −4
[2] : −9
0 15
1
− [2]) :
−→
0 −3
9
[3] + 2[2] :
2
0
2
4
9
0 −3
0
0
9
3
The third line stands for the equation 0x¹ + 0x² + 0x³ = 9, which is a logical contradiction. For a = 1, this system of equations thus has no solution .
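These three cases can be reproduced numerically. In the sketch below (Python/numpy, added for illustration) the matrix A(a) is reconstructed from the determinant expansion in (b), A(a) = [[8−3a, 2−6a, 2], [2−6a, 5, −4+6a], [2, −4+6a, 5+3a]], which reduces to the matrices used above for a = 1/3, 0, 1:

import numpy as np

def A(a):
    return np.array([[8 - 3*a, 2 - 6*a,  2       ],
                     [2 - 6*a, 5,        -4 + 6*a],
                     [2,       -4 + 6*a, 5 + 3*a ]], dtype=float)

b = np.array([4., 1, 1])

for a in (1/3, 0.0, 1.0):
    print(a, np.linalg.det(A(a)), 729 * a * (1 - a))   # det A = 729 a (1 - a)

# a = 0: rank(A) = rank([A|b]) = 2  -> infinitely many solutions (a line in R^3)
M0 = A(0.0)
print(np.linalg.matrix_rank(M0), np.linalg.matrix_rank(np.column_stack([M0, b])))   # -> 2 2
x = np.array([0.5, 0, 0]) + 0.7 * np.array([-0.5, 1, 1])   # any lambda works; 0.7 chosen arbitrarily
assert np.allclose(M0 @ x, b)

# a = 1: rank(A) = 2 < rank([A|b]) = 3  -> no solution
M1 = A(1.0)
print(np.linalg.matrix_rank(M1), np.linalg.matrix_rank(np.column_stack([M1, b])))   # -> 2 3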
L5.4.4 Matrix inversion
P
(a) The inverse of M2 = M2−1
=
m 0
1 m
1 m
− m12
0
1 m
.
Check:
m We compute the inverse of M3 = [1] : m [2] : [3] :
0 0
1 m 0
0 1 m
1 0 0
0 1 0
1
0 0 1 m
0 0
follows from the formula
−→
1 m 0 [1] −
1 m
− m12
0
1 m
0 1 m
a c
b d
m
1
0
m
1 ad−bc
= X
=
d −c
−b a
1
0
0
1
.
1 m ([2]
using Gaussian elimination: −
1 m ([2]
1 m [3])
−
: 1
0
1 m [3])
: 0
1
1 m [3]
: 0
0
1 m
− m12
1 m3
0
0
1 m
− m12
1
0
0
1 m
0
:
The right side of the augmented matrix gives the inverse matrix M3−1 :
1 m
− m12
M3−1 = 0
⇒
1 m
0
1 m3 − m12 1 m
0
1 m
− m12
. Check: 0
1 m
0
0
1 m3 − m12 1 m
m 1
0
1 0 0
X
m 1 = 0 1 0.
0
0 m
0
0 0 1
(b) The results for M2−1 and M3−1 have the following properties: the diagonal elements are 1 equal to m and elements along the first and second rows feature increasing powers of 1 and alternating signs. The checks performed in (a) illustrate why these properties m are needed. We thus formulate the following guess for the form of Mn−1 for a general n:
1
− m12
m
Mn−1
1 m3 − m12
1 m
0 . = .. . . .
···
n−2
1 mn−1
.
.
..
.
− m12
···
0
1 m
.
(−1)
. . .
..
···
0
(−1)n−1 m1n
..
0 ..
···
1 m
Now let us check our guess explicitly: does Mn−1 Mn = 1 hold?
1
− m12
m
Mn−1 Mn
0 . = .. . .
mm
=
−
.
···
0 1 m
0
1 m
..
.
1
1 m
1 m3 − m12
1 m2
m
..
.
···
(−1)n−1 m1n
···
n−2
..
.
..
.
···
0 . . . . . .
0 . . .
1 mm
0
...
...
mn−1
. . . − m12 1 m
0
− m12 + m13 m 1 1 m − m2 m
1 mm
(−1)
... ..
.
..
.
0
···
n−3
1 mn−2
+ (−1) . . . . . .
···
0
m
1
···
..
.
..
..
···
0
m
0 . . .
···
0
n−2
Alternative formulation using index notation:
X
(Mn−1 Mn )ij =
(Mn−1 )il (Mn )lj =
l
=
Xh
δlj
1 mn−1
.
1
X
(Mn−1 )ij
1 ml−i+1
m
1 0 ··· 0 . . X 0 1 . . .. = . . . . . .. .. . 0
m
0 ··· 0 1
( (c)
.
1 mm
0
0
1
1 + (−1)n−1 m1n m (−1)n−2 mn−1
... (−1)
0
1
m 0 .. · . 0
=
1 mj−i+1
(−1)i+j for j ≥ i ,
0
otherwise .
(−1)i+l (m δlj + δl+1,j )
l≥i m mj−i+1
j+i
(−1)
+ δl+1,j
1 mj−i
(−1)j−1+i
i
l≥i
=
X l≥i
(δlj − δl+1,j )
1 mj−i
(−1)j+i =
j+i 1 (0 − 0) mj−i (−1) = 0 for j < i , X (1 − 0)
1
(−1)2i = 1
for j = i , X
m0 (1 − 1) 1 (−1)j+i = 0 for j ≥ i . X j−i m
S.L5.5 General linear maps and matrices P
L5.5.2 Three-dimensional rotation matrices
(a) The figures show the action of a rotation about axis i on the basis vectors: e3
1
1
0
Rθ (e1 ) : 07→0, 0
17→cos θ,
0
1
Rθ (e2 ) : 07→
sin θ
0
cos θ
07→
Rθ (e3 ) : 07→ sin θ ,
17→
0
0
0
0
0
0
, e1
e2
θ
e3 e3
07→0. 1
e2 e2
e1
0
1
e3 θ
e3
cos θ ,
0
sin θ
e2
e1 e1
cos θ
1
− sin θ
θ
cos θ
0
0
θ
07→ − sin θ ,
, 17→1, 0
e3
0
1
0
cos θ
1
0
0
0
− sin θ
0
0
e1
θ
θ e2 e2 e1
For R : ej 7→ e0j = ei (Rθ )ij the image vector e0j yields column j of the rotation matrix:
1
0
Rθ (e1 ) = 0
0
cos θ − sin θ
0 sin θ
,
Rθ (e2 ) =
0
1
0
,
− sin θ 0 cos θ
cos θ
cos θ 0 sin θ
cos θ − sin θ 0
Rθ (e3 ) = sin θ
cos θ 0
0
0
.
1
(b) Using the formula (Rθ (n))ij = δij cos θ + ni nj (1 − cos θ) − ijk nk sin θ ,
(1)
we obtain for the rotation axes e1 = (1, 0, 0)T , e2 = (0, 1, 0)T and e3 = (0, 0, 1)T :
1 0cos θ
+ 0
1 0 0
(1)
Rθ (e1 ) = 0
1
0
0
− sin θ
0
sin θ
cos θ
1 0cos θ
+ 0
(1)
Rθ (e2 ) = 0
0 0 1
0 0
0
0
0 231 sin θ 321
0
cos θ
1 0 0
0 0(1 − cos θ) − 0
= 0
0 0 0
0 0 1
1 0 0
.X
0 0 0
(2)
0
1 0(1 − cos θ) − 0
0 0 0
0 132 0
0
312 0
0
sin θ
cos θ
=
0 − sin θ
(1)
.X
1
0
0
cos θ
1 0cos θ
+ 0
1 0 0
Rθ (e3 ) = 0
sin θ
0
0 0 0
0 0 1
(3)
0 0(1 − cos θ) − 213 0
0 0 1
cos θ
− sin θ
= sin θ
cos θ
0
0
B =
1 2
−1
1
0
0
0
1 0 − √
−1
0
1
0sin θ 0
(4)
sin π 0
For B = R π2 ( √12 (e3 −e1 ), the rotation axis is
0 0
.X
According to (a), we have A = Rπ (e3 ) =
(1)
1
cos π (c)
0
0
0
0 123
2
0
0 0 1
0
−231
−321
=
with cos
=
0 0 1
.
π 2
= 0, sin
π 2
= 1:
√ − 2
1 1 √ 2 2
−1 √ 2
0 √ − 2
−1
0
0 −1 0
0 0
0
−1
√1 (−1, 0, 1)T , 2
123
213 0
− sin π cos π 0
.
1
The action of A and B on the vector v = (1, 0, 1)T gives:
−1
Av =
0
0
−1
0
0
0 0
=
0
0
−1
1
1
ˆ v
1
ˆ Aˆv e3
.
1
e2 e1
√ − 2
Bv =
1 √2
−1 1 √ 2 0
0 √ − 2
−1
1
e3
ˆ v
√1 (e3 2
0
=
√
2
1 .
1
− e1 )
e2 B v
0
e1
(d) The rotation group in three dimensions is not commutative. Example: AB ≠ BA:
AB =
1 2
−1
0
0
−1
0
0 √ − 2
BA =
1 1 √ 2 2
0 √ − 2
−1
(e)
Tr(R) =
X i
√ − 2
0
1 √ 0 2
0 √ − 2
−1
1
−1 −1 √ 2 0 1 (1)
(R)ii =
0
Xh
−1 √ 2
=
−1 1 √ − 2 2 −1
1
0 −1 0
0
0
=
−1 1 √ − 2 2
1
1
√
2
0 √ − 2 √ √
1 √ − 2 1
0
−1 √ 2
2
1
2
δii cos θ + ni ni (1 − cos θ) − iik nk sin θ
i
= 3 cos θ + n2 (1 − cos θ) = 1 + 2 cos θ . X
i
.
,
(f)
Up to a sign, the angle θ is fixed by the trace of C ≡ AB: (e)
(d)
1 + 2 cos θ = Tr(C) =
1 [(−1) 2
⇒
+ 0 + 1] = 0 ,
1 2
− cos θ =
θ = ± 23 π .
⇒
,
Up to a sign, the components ni of the rotation axis are fixed by the diagonal elements of C: (1) (C)ii =
r 2
cos θ + (ni ) (1 − cos θ)
r
(d)
n1 = ±
− 12 + 12 1+ 12
⇒ (d)
r
n2 = ±
= 0,
ni = ± 0+ 12 1+ 12
(C)ii − cos θ 1 − cos θ
.
s (d)
= ± √13 ,
n3 = ±
1 + 21 2 1+ 12
= ±
q
2 3
.
Since the vector n and the angle θ are determined uniquely only up to a point reflection about the origin, the sign of one of the components ni can be chosen at will – let us here choose n2 positive, hence n2 = √13 . The signs of θ and n3 can now be determined from the non-diagonal elments of C. Since n1 = 0, we have (1) (C)1j = 0 + 0 − 1jk nk sin θ, thus: √ 1 (d) = (C)13 = n2 sin θ ⇒ sin θ = 21 3 ⇒ θ = 23 π , 2 (d) √1 = 2
(C)12 = −n3 sin θ
n3 = − √
⇒
1 = − 2 sin θ
q
2 3
.
S.L5.6 Matrices describing coordinate changes P
L5.6.2 Basis transformations in
E2
ˆj = v ˆ 0i T ij between old and new bases yields the transformation matrix (a) The relation v T: 1 0 ˆ v 5 1
ˆ1 = v
ˆ 02 ≡ v ˆ 01 T 11 + v ˆ 02 T 21 + 53 v
⇒
ˆ 2 = − 65 v ˆ 01 + 52 v ˆ 02 ≡ v ˆ 01 T 12 + v ˆ 02 T 22 v
T =
T 11
T 12
T 21
T 22
=
1 5
1
−6
3
2
.
ˆ j (T −1 )j i we can write the new basis in terms of the old: ˆ 0i = v (b) Using T −1 and v T
−1
=
1 det T
T 22
−T 12
−T 21
T 11
=
51 45
2
6
−3
1
=
ˆ 01 = v ˆ 1 (T −1 )11 + v ˆ 2 (T −1 )21 = v
1 ˆ v 2 1
ˆ2 . − 43 v
ˆ 02 = v ˆ 1 (T −1 )12 + v ˆ 2 (T −1 )22 = v
3 ˆ v 2 1
ˆ2 . + 41 v
1 4
2
6
−3
1
≡
(T −1 )11
(T −1 )12
(T −1 )21
(T −1 )22
.
ˆ 1 and v ˆ2 Alternatively, these relations can be derived by solving the equations for v ˆ 01 and v ˆ 02 . (This is equivalent to finding T −1 .) to give v (c)
i
ˆ =v ˆ j xj = v ˆ 0i x0 in the old and new bases, x = (x1 , x2 )T and The components of x i 0 01 02 T x = (x , x ) respectively, are related by x0 = T ij xj :
x=
2 − 21
,
0
x = Tx =
1 5
1 −6 3
2
2 − 12
=
1 1
ˆ = 2ˆ ˆ2 = v ˆ 01 + v ˆ 02 . , ⇒ x v1 − 21 v
ˆ=v ˆ 0i y 0i = v ˆ j y j in the new and old bases, y0 = (y 01 , y 02 )T and (d) The components of y 1 2 T y = (y , y ) respectively, are related by y j = (T −1 )j i y 0i : 0
y = (e)
3
,
−1
−1
y=T
1 4
0
y =
2 6
−3
0
=
1
−3 1
5 2
ˆ = −3ˆ ˆ 02 = 25 v ˆ2 . , ⇒ y v01 + v
The matrix representation A of the map Aˆ in the old basis describes its action on ˆ A
ˆ j 7→ v ˆ i Aij , yields column j of the old basis: the image of basis vector j, written as v A: 1 (ˆ v1 3
ˆ 1 7→ v
ˆ 1 A11 + v ˆ 2 A21 − 2ˆ v2 ) ≡ v
⇒ A=
ˆ2) ≡ v ˆ 1 A12 + v ˆ 2 A22 ˆ 2 7→ − 31 (4ˆ v1 − v v
A11
A12
A21
A22
1 3
=
1
−4
−2
−1
.
The basis transformation T now yields the matrix representation A0 of Aˆ in the new basis: 0
A = T AT
(f)
−1
=
1 −6
1 5
3
1 3
2
1 −4
1 4
−2 −1
2 6
1 3
=
−3 1
1
4
.
2 −1
ˆ A
ˆ 7→ z ˆ, the components of z ˆ are obtained by matrix multiplying the components For x ˆ in either the old or new basis: ˆ with the matrix representation of A, of x z = Ax =
1 3
1 −4
2
−2 −1
− 21
1 6
=
8
0
,
−7
0
1 3
0
z =Ax = 1 5
The results for z and z0 are consistent, T z =
1 −6 3
4
1
2 −1
1 6
2
1
8
−7
=
1
=
1 3
1 3
5 1
5
.
1
= z0 . X
ˆ1 = (1, 0)T and e ˆ2 = (g) The component representation of the standard basis of E2 is e T ˆ1 = e ˆ1 + e ˆ2 = (0, 1) . Once the old basis has been specified by making the choice v ˆ 2 = 2ˆ ˆ2 = (2, −1)T , that also fixes the new basis, as well as x ˆ and (1, 1)T and v e1 − e ˆ. z ˆ and z ˆ in the standard basis of The components of x ˆ 2 v − 12 v ˆ2 E2 can be computed via either the old or the new ˆ z basis. In the standard basis we obtain the following ˆ1 v ˆ1 2v representation: ˆ 01 v
ˆ j (T =v
−1 j
)
1
1 2
=
3 2
ˆ 02 = v ˆ j (T −1 )j 2 = v
j
ˆ=v ˆj x = 2 x
ˆ=v ˆj zj = z
8 6
1 1
−
1 1
1
−
1
1 1
1 2
+
2
3 4 1 4
2 7 6 −1
=
2
=
1
=
,
5 2
=
−1 5 2
ˆ 1 v
ˆ1 v
ˆ2 e
ˆ 2 v
.
5 4
ˆ1 e
−1
−1
2 5 4
−
−1
−1
2
ˆ= x
.
ˆ2 v
(h) i ˆ 0i x0 v
i
=1
ˆ=v ˆ 0i z 0 = , z
5 3
−1
+1
5 4
−1 5 4
+
1 3
2 5 4
=
2 5 4
1
.X
5 2
=
−1 5 2
.X
By comparing x̂ and ẑ we see that Â multiplies the ê1 direction by a factor −1, i.e. it represents a reflection about the vertical axis.
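A numerical confirmation of (e) and (f) (a Python/numpy sketch, added as a cross-check):

import numpy as np

T = np.array([[1, -6], [3, 2]]) / 5          # basis transformation, old -> new components
A = np.array([[1, -4], [-2, -1]]) / 3        # matrix of the map in the old basis
A_new = T @ A @ np.linalg.inv(T)
print(3 * A_new)                             # -> [[ 1.  4.] [ 2. -1.]], i.e. A' = (1/3)[[1,4],[2,-1]]

x = np.array([2, -0.5])                      # components of x in the old basis
z = A @ x                                    # z  = A  x   (old basis)
z_new = A_new @ (T @ x)                      # z' = A' x'  (new basis)
assert np.allclose(T @ z, z_new)             # both descriptions agree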
P
L5.6.4 Basis transformations and linear maps A
(a) The image of the transformation A on the standard basis, ej ↦ Aj ≡ ei Aij, gives the column vectors of the transformation matrix, A = (A1, A2, A3). For θ1 = −π/3 we will use the compact notation cos θ1 = 1/2 ≡ c and sin θ1 = −√3/2 ≡ s, which gives:
1
0
A = 0
0
c −s , s
0
−1
CA = AC =
AB =
−1
0
B = 0
4
0 ,
0
0
1
0
0
c −s ,
0
s
0
0
1
0 .
0
0
1
−2
0
0
0
4
0 ,
0
0
1
CB = BC =
c
0
0
0 4c −s , c
0 4s
2
BA = 0 0
0
0
4c −4s s
0
C=
0
2
0
c
2
6= AB,
c
which is what would have been expected, if one had visualized the transformations in space.
−1
(b)
(c)
y = CAx =
0
0
1
0
c −s1
0
s
c
−1
= c − s c+s
1
Since y = Dz, with D = CBA, we have z = D−1 y , with D−1 = A−1 B −1 C −1 :
1
D−1 = 0
0 0
c s 0
0 −s c
0
z=D
−1
1 2
− 21
y =
0
0 0 1 4
0C
−1
=0
0 1
0 0 c 4
0 − 4s
0
−1
1 2
0 0 c 4 s −4
sc − s= c
c+s
−1 0 0
s 0
1 0 =
c
0 1
0 1 2
c 4 (c
− s) + s(c + s)
− 4s (c − s) + c(c + s)
− 12
0 0 c 4 s −4
0 0
2
= c4
s2 4
+ +
s
,
c
1 2 3 4 sc 3 4 sc
+ s2
.
+ c2
(CA)
(d) On the one hand, we have e0j 7→ ej , with ej = e0i (CA)ij . Meaning that the image of the new basis vectors e0j under the map CA, written in the new basis, is given by the column vectors j of the Matrix CA. The components of which are (CA)ij . (If this (CA)−1
is not obvious, using the inverse transformation we can see that for ej 7→ e0j we have e0j = ei ((CA)−1 )ij . Therefore the image of the standard basis vectors ej under the transformation (CA)−1 , written in the standard basis, is given by the column vectors j of the Matrix (CA)−1 . It has components ((CA)−1 )ij .) On the other hand, we also know that the transformation matrix is defined by ej = e0i T ij . Therefore T = CA . Hence:
e3
ˆ3 e
−1
T = CA =
=
0
0
1
0
0
0
1
0 0
c
−s
0
0
1
s
c
0
−1
0
0
0
c
−s
0
s
c
ˆ2 e
π 3
ˆ1 e
π 3
e2
e1
(e)
We simplify as much as possible before performing matrix multiplications: −1 −1 −1 −1 −1 −1 −1 −1 z0 = T z = CAD−1 y = C AA | {zC } y = |CC {z } B y = B y | {z } B C y = C B =1 =1 C −1 B −1
0
0
z = 0
1 4
0 c − s
=
0
0
1
0
1 2
−1 c+s
− 12 c−s 4
c+s
1
0
y0 = T y = C |{z} AC Ax = |{z} C 2 A2 x = 0 0 CA =1
c0 s
0
−s0 1
0
c
1
0
1
= c0 − s0 c0 + s 0
1
, for which A² is a twofold rotation by θ1 = −π/3, or a single rotation by θ1′ ≡ −2π/3, with cos θ1′ = −1/2 ≡ c′ and sin θ1′ = −√3/2 ≡ s′.
Once again, we simplify before performing matrix multiplication: −1 −1 −1 D0 = T DT −1 = |{z} CA CB AA CB C −1 = ABC CC | {z } C = AC |{z} | {z } = ABC AC 1 BC 1
1
D0 = 0 0
0
0
2
0
0
c −s 0
4
0 C
s
0
1
c
0
2
= 0 0
0
0
−1
4c −s 0 4s
c
0
0
0
1
0
=
0
1
−2
0
0
0
4c −s
0
4s
c
Since in the basis {ei } the relation y = Dz holds, its form in the basis {v0i } is y0 = D0 z0 :
−2
Check: y0 = D0 z0 =
0
0
0
4c −s
0
4s
c
− 21 c−s 4
1
1
= c(c − s) − s(c + s) = c0 − s0 . X
c+s
s(c − s) + c(c + s)
c0 + s 0
S.L6 Determinants S.L6.2 Computing determinants P
L6.2.2 Computing determinants
(a) We expand the determinant along the third column, since it contains a zero:
1 det D = d 2
c 2 2
0
= −3(2 − 2c) + e(2 − cd) = c(6 − de) + 2(e − 3) .
3 e
(i) The result in the left box shows that det D = 0 for all e if c = 1, d = 2 . (ii) The result in the right box shows that det D = 0 for all c if e = 3, d = 2 . Yes, because the determinant vanishes whenever two columns or two rows are the same. The columns 1 and 2 are the same when a = 1 and b = 2 [case (i)]; and the rows 2 and 3 of the matrix are identical when c = 3 and b = 2 [case (ii)]. (b)
2 −1 −3 1
AB =
0
1
5 5
2
1
6 −2
6
= 8
4−6+6−2
2 − 6 − 24 − 2
0 + 6 − 10 − 10 0 + 6 + 40 − 10
−2 −2
=
−30
2
−14
.
36
det(AB) = 2 · 36 − (−14) · (−30) = 72 − 420 = −348 .
(c)
−1
a
b
c
d
=
BA =
2
1 ad − bc
1
6
8
−b
−c
a
4
⇒
,
(AB)
2
−1
−3
1
0
1
5
5
−2 −2
d
6
−2
−1 −1
7
12 = −4
0
12
10
46
−4
0
−4 −12
4+0
12 + 0
=
−1
=
−1 348
−2 + 1
36
30
14
2
−6+5
−6 + 6 −18 + 30
.
2+5
6 + 30
−4 + 0
2+8
6 + 40 −2 + 40
−4 + 0
2−2
6 − 10 −2 − 10
36 38
.
Since the second row of BA is proportional to the fourth, we have det(BA) = 0 . An explicit Laplace expansion of det(BA) along the indicated columns gives:
4 12 det(BA) = −4 −4
−1
−1
0
12
10
46
0
−4
46 column 1 = 12 −4 12 −10 +4 −4
12 12 36 4 36 column 2 = −(−1) −4 46 38 −10 12 38 −4 −4 −4 −12 −12 12 36 12 36 38 −4 +4 −4 −12 46 38 −12 −1 7 −1 7 36 −4 −12 −4 −12 12 36 −12 7
−1 12 −4
36 −12 7
= [−4800 + 0 + 4800] − 10[0 − 480 + 480] = 0 . Since det(BA) = 0, the matrix BA has no inverse . Since A and B are not square matrices, their determinants and inverses are not defined (even though matrices C or D do possibly exist with CA = 1 or AD = 1, etc.).
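For completeness, a Python/numpy sketch (added; the rectangular matrices A (2×4) and B (4×2) are reconstructed from the products written out above):

import numpy as np

A = np.array([[2, -1, -3, 1],
              [0,  1,  5, 5]])
B = np.array([[ 2,  1],
              [ 6,  6],
              [-2,  8],
              [-2, -2]])

print(np.linalg.det(A @ B))   # -> -348.0
print(np.linalg.det(B @ A))   # ->  0 (up to floating-point rounding); row 4 of BA is -1/3 times row 2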
S.L7 Matrix diagonalization S.L7.3 Matrix diagonalization P
L7.3.2 Matrix diagonalization
(a) We find the eigenvalues from the zeros of the characteristic polynomial:
= (4 − λ)(−5 − λ) + 18 −5 − λ
4 − λ
0 = det(A − λ1) = !
Char. polynomial:
−6
3
2
= λ + λ − 2 = (λ − 1)(λ + 2) λ1 = 1 , λ2 = −2 .
Eigenvalues:
X
X
Checks: λ1 + λ2 = −1 = Tr A = 4 − 5, λ1 λ2 = −2 = det A = 4·(−5) − 3·(−6). Eigenvectors: λ1 = 1 :
0 = (A − λ1 1)v1 =
λ2 = −2 :
0 = (A − λ2 1)v2 =
!
!
3
−6
3
−6
6
−6
3
−3
v1
⇒
v1 =
v2
⇒
v2 =
1 1 2
.
1 1
.
The similarity transformation T contains the eigenvectors as columns; we obtain its −1 1 d −b inverse using the inversion formula for 2 × 2 matrices, ( ac db ) = ad−bc −c a :
T = (v1 , v2 ) =
Sim. tr.:
T DT
Check:
−1
=
1 2
1
1
1 2
1
1 1 1
,
T
1·2
−1
=
1·(−2)
−2·(−1)
−2·2
2 −2
−1
=
.
2
4
−6
3
−5
X
= A.
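A quick eigenvalue check for part (a) (a Python/numpy sketch, added; A = [[4, −6], [3, −5]] as in the final check above):

import numpy as np

A = np.array([[4., -6.], [3., -5.]])
evals, evecs = np.linalg.eig(A)
print(evals)                                  # -> [ 1. -2.]  (possibly in another order)

# verify the eigenvectors quoted in the solution, v1 = (1, 1/2)^T and v2 = (1, 1)^T
v1 = np.array([1., 0.5]); v2 = np.array([1., 1.])
assert np.allclose(A @ v1, 1 * v1)
assert np.allclose(A @ v2, -2 * v2)

# diagonalization A = T D T^{-1}
T = np.column_stack([v1, v2]); D = np.diag([1., -2.])
assert np.allclose(T @ D @ np.linalg.inv(T), A)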
(b) The zeros of the characteristic polynomial yield the eigenvalues: Char. polynomial:
2 − i − λ
0 = det(A − λ1) = !
−1 + 2i − λ 1+i
2 + 2i
= (2 − i − λ)(−1 + 2i − λ) − (2 + 2i)(1 + i) = λ2 − (1 + i)λ + i = (λ − 1)(λ − i) λ1 = 1 , λ2 = i .
Eigenvalues: Checks:
X
λ1 + λ2 = 1 + i = Tr A = (2 − i) + (−1 + 2i), X
λ1 λ2 = i = det A = (2 − i)(−1 + 2i) − (2 + 2i)(1 + i) = 5i − 4i. Eigenvectors: 0 = (A − λ1 1)v1 =
λ1 = 1 :
λ2 = i :
0 = (A − λ2 1)v2 =
!
!
1−i
1+i
2 + 2i
−2 + 2i
2 − 2i
1+i
2 + 2i
−1 + i
v1
⇒
v1 =
1 i
.
v2
⇒
v2 =
1
2i
.
Explicitly: For the eigenvector v1 = (v 11 , v 21 )T we have (1 − i)v 11 + (1 + i)v 21 = 0, 1+i 1 v 1 = iv 11 . Since we are instructed to choose v 1j = 1, this implying v 21 = − 1−i T implies v1 = (1, i) . Similarly one finds v2 = (1, 2i)T . The similarity transformation T contains the eigenvectors as columns; its inverse follows via the inversion formula for 2 × 2 matrices, which holds also for complex matrices:
T = (v1 , v2 ) =
Sim-Tr.:
T DT
Check: (c)
−1
=
1
1
i
2i
1
1
i
2i
T −1 =
,
1·2
1·i
i·(−1)
i·(−i)
=
2
i
−1
−i
.
2−i
1+i
2 + 2i
−1 + 2i
X
= A.
We find the eigenvalues from the zeros of the characteristic polynomial. We compute the latter using Laplace’s rule, expanding along the first row because it contains a zero:
−1 − λ 0 = det(A − λ1) = 1 3
1 − λ 1 = (−1 − λ) −1 2−λ
1
!
0
1−λ −1
1 − 3 2 − λ
2 − λ
1
1
= (−1 − λ)[(1 − λ)(2 − λ) + 1] − [(2 − λ) − 3] = (−1 − λ)(1 − λ)(2 − λ) λ1 = −1 , λ2 = 1 , λ3 = 2 .
Eigenvalues:
X
λ1 + λ2 + λ3 = 2 = Tr A = −1 + 1 + 2,
Checks:
X
λ1 λ2 λ3 = 2 = det A = (−1)·(2 − (−1)) − 1·(2 − 3). Eigenvectors:
0
λ1 = −1 :
0 = (A − λ1 1)v1 = 1 !
2
1 v1
−2
λ2 = 1 :
.
0 −1
1
0
1 v2
3
−1
−3
1
0 = (A − λ3 1)v3 =
1
v1 =
3
0
0 = (A − λ2 1)v2 =
!
⇒
1
!
λ3 = 2 :
0
−1
3
1
⇒
1
v2 =
.
2 −1
1
0
1
1 −1
1 v3
3 −1
0
⇒
v3 = 3 . 2
The similarity transformation T that diagonalizes A contains the eigenvectors as columns; we obtain its inverse using Gaussian elimination (details not shown):
Sim. tr.:
T = (v1 , v2 , v3 ) =
1
1
0
2
3
−1 −1
Check:
T DT −1 =
1
1
1
0
2
3
−1 −1
2
1
−1·7
1 1·(−3) 6 2·2
Gauss
, −→ T −1 =
2 −1·(−3) 1·3 2·0
−1·1
1·(−3) = 2·2
1 −3 6 2
7 −3
1
3 −3 0
−1
1
0
1
1
1
3 −1
.
2
2
X
= A.
P
L7.3.4 Diagonalizing a matrix depending on two variables: B − E qubit ∆ ! Characteristic polynomial: 0 = det(H − E 1) = ∆ −B − E
(a)
= −(B − E)(B + E) − |∆|2 = −B 2 + E 2 − |∆|2 E1 = −X,
Eigenvalues:
X
0 = (H −E1 1)v1 =
0 = (H −E2 1)v2 =
!
p
B 2 + |∆|2 .
X
E1 E2 = − B 2 + |∆|2 = det H .
Checks: E1 + E2 = 0 = Tr H , Eigenvectors: !
X≡
E2 = +X,
B +X
∆
∆
−B +X
v1 ⇒
v1 = a1
B −X
∆
∆
−B −X
∆
v2 ⇒
B −X
v2 = a2
, |a1 | = p
1
, |a2 | = p
1
2X(X −B)
B +X
∆
2X(X +B)
(b) We now choose the phasepof a1 and a2 positive and real, ai = |ai |, write ∆ = eiφ |∆|, √ with |∆| = X 2 −B 2 = (X +B)(X −B) (as follows from the definition of X), and bring the eigenvectors into a form which reveals their behavior B → ±∞:
v1 =
1 √ 2
p p − 1−B/X 1+B/X 1 p p , v2 = √ . iφ iφ e
2
1+B/X
e
E2
1−B/X
E1
|∆| −|∆|
0
Since B/X → ±1 for B → ±∞, we have: |v 11 |2
|v 22 |2
=
1 (1−B/X) 2
⇒
|v 21 |2 = |v 12 |2 =
1 (1+B/X) 2
⇒
=
0 for B → ∞ 1 for B → −∞ 1 for B → ∞ 0 for B → −∞
1
|v 1 1 |2
0 1
|v 2 2 |2
| v 1 2 |2
1 2
−1
P
| v 2 1 |2
1 2
0
0
B/|∆|
1
L7.3.6 Degenerate eigenvalue problem
(a) The zeros of the characteristic polynomial yield the eigenvalues:
15 − λ 0 = det(A − λ1) = 6 −3 !
6 6−λ 6
2 2 6 = λ(−λ + 36λ − 324) = λ(λ − 18) 15 − λ −3
Eigenvalues: λ1 = 0 , λ2 = 18 , λ3 = 18 . The eigenvalue 18 is two-fold degenerate. X X Checks: λ1 + λ2 + λ3 = 36 = Tr A, λ1 λ2 λ3 = 0 = det A. Determination of the (normalized) eigenvector v1 of the non-degenerate eigenvalue λ1 : λ1 = 0 :
0 = (A − λ1 1)v1 = !
−3
15
6
6
6
6 v1
−3
6
15
Gauss
=⇒
v1 =
1 √ 6
1
−2 . 1
To find the eigenvector, we used Gaussian elimination and normalized the final result: v 11 v 21 v 31 15 6
−3
v 11 v 21 v 31
v 11 v 21 v 31 0
6
6
6
0
−3
6
15
0
1 3 [1]
⇒
1 18 (5[2]
− 2[2]) :
1
0 −1
0
0
[2] :
0
1
2
0
0
[2] − [1] :
0
0
0
0
:
5
2 −1
0
− 2[1]) :
0
1
2
0
1
2
1 32 (5[3]
+ [1]) :
1 5 ([1]
⇒
The augmented matrix on the right yields two relations between the components of the vector v1 = (v 11 , v 21 , v 31 )T , namely v 11 − v 31 = 0 and v 21 + 2v 31 = 0. Since the third row contains nothing but zeros, the eigenvector is determined only (as expected) up to an arbitrary prefactor a1 ∈ C: v1 = a1 (1, −2, 1)T . The normalization condition kv1 k = 1 implies a1 = ± √16 ; we here choose the positive sign (the negative one would have been equally legitimate). Determination of the eigenvectors v2,3 for the degenerate eigenvalue λ2,3 = 18:
0 = (A − λj 1)vj =
−3
6
6
−12
−3
6
!
λ2 = λ3 = 18 :
−3
6 vj . −3
All three rows are proportional to each other, [3] = [1] = −2[2]. In case one does not notice this immediately and uses Gaussian elimination, one is lead to the same conclusion: v 1j v 2j v 3j
v 1j v 2j v 3j −3
6 −3
6 −12 −3
6
0
1 3 [1]
⇒
−1 2 −1
:
0
6
0
([2] + 2[1]) :
0 0
0
0
−3
0
([3] − [1]) :
0 0
0
0
The augmented matrix on the right contains nothing but zeros in both its second and third rows. This is a direct consequence of the fact that on the left, the second and third rows are both proportional to the first. Since only one row is non-trivial, we obtain only one relation between the components of vj = (v 1j , v 2j , v 3j )T , namely −v 1j + 2v 2j − 1v 3j = 0. Therefore we may freely choose two components of vj , e.g. v 2j = a and v 3j = b, thus obtaining vj = (2a − b, a, b)T . From this we can construct two linearly independent eigenvectors, since a and b are arbitrary. For example, the choice a = b = 1 yields an eigenvector v2 = (1, 1, 1)T , and the choice a = 1, b = 2 an eigenvector v3 = (0, 1, 2)T linearly independent of v2 . As final step, we orthonormalize the eigenvectors. The eigenvectors of different eigenvalues are already orthogonal w.r.t. the real scalar product, hvj , v1 i = 0 for j = 2, 3. (As explained in chapter ??, this is a consequence of the fact that A = AT .) Thus it suffices to orthonormalize the degenerate eigenvectors v2 and v3 . We use the GramSchmidt procedure, e.g. with v02 = v2 /kv2 k = v03,⊥
=
v3 −v02 hv02 , v3 i
0 =
1 2
1 −√ 3
1 1 1
3 √ 3
T √1 (1, 1, 1) 3
−1 =
0 1
, ⇒
The vectors {v1 , v02 , v03 } form an orthonormal basis of
and
v03 =
v03,⊥ = kv03,⊥ k
1 √ 2
−1 0 1
.
R3 . We use them as columns
for T . Inverting the latter, e.g. using Gaussian elimination, we find that T −1 = T T :
Sim. tr.: T = (v1 , v02 , v03 ) =
Check:
T DT −1 =
1 √ 6 −2 √ 6 1 √ 6
1 √ 3 1 √ 3 1 √ 3
−1 √ 2
1 √ 3 1 √ 3 1 √ 3
1 √ 6 −2 √ 6 1 √ 6 −1 √ 2
0
, T −1 =
1 √ 2
1 √ 2
−2 0· √ 6
0· √16
18· √1
0
18·
3 −1 √ 2
18· √13 18·0
1 √ 6 1 √ 3 −1 √ 2
0· √16
−2 √ 6 1 √ 3
1 √ 6 1 √ 3 1 √ 2
0
vT 1
= v2T = T T .
18· √13 = 18·
0 0
v3T
6
6
6
−3
6
15
1 √ 2
15 6 −3
X
= A.
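A numerical confirmation for part (a) (a Python/numpy sketch, added as a check):

import numpy as np

A = np.array([[15., 6, -3], [6, 6, 6], [-3, 6, 15]])
evals, T = np.linalg.eigh(A)                       # symmetric matrix: use eigh
print(np.round(evals, 10))                         # -> [0. 18. 18.] (the 0 up to rounding)
assert np.allclose(T @ np.diag(evals) @ T.T, A)    # T is orthogonal, T^{-1} = T^T

# the orthonormal eigenvectors quoted in the solution also work:
v1 = np.array([1., -2, 1]) / np.sqrt(6)
v2 = np.array([1., 1, 1]) / np.sqrt(3)
v3 = np.array([-1., 0, 1]) / np.sqrt(2)
assert np.allclose(A @ v1, 0 * v1)
assert np.allclose(A @ v2, 18 * v2)
assert np.allclose(A @ v3, 18 * v3)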
The fact that T −1 = T T is no coincidence. As explained in chapter ??, this property follows from the orthonormality of the eigenvectors forming the columns of T . (b) The zeros of the characteristic polynomial yield the eigenvalues:
−1 − λ 0 ! 0 = det(A − λ1) = 0 −2i
0 7−λ 2 0
0 2 4−λ 0
2i 0 0 2−λ
= (−1 − λ)(2 − λ)[(7 − λ)(4 − λ) − 4] + (2i)2 [(7 − λ)(4 − λ) − 4] = [(7 − λ)(4 − λ) − 4][(−1 − λ)(2 − λ) − 4] = [λ2 − 11λ + 24][λ2 − λ − 6] where we expanded the determinant along columns or rows containing many zeros. This equation factorizes into two quadratic equations, whose solutions are: √ √ λ1,2 = 12 11 ± 121 − 96 = 12 (11 ± 5), λ3,4 = 21 1 ± 1 + 24 = 12 (1 ± 5). λ1 = 8 , λ2 = λ3 = 3 , λ4 = −2 . The eigenvalue 3 is two-fold degenerate. X
X
λ1 λ2 λ3 λ4 = −144 = det A.
Checks: λ1 + λ2 + λ3 + λ4 = 12 = Tr A,
Determination of the normalized eigenvectors:
λ1 = 8 :
−9 0
0
2i
0 = (A − λ1 1)v1 =
0 1
2
0
!
0 2 −4 −2i 0
λ4 = −2 :
λ2,3 = 3 :
0
v1
1
0
0
2i
0 = (A − λ4 1)v4 =
0
9
2
0
2
6
0
−2i
0
0
=⇒
v1 =
0
1 2 √ 5 1 0
v4
Gauss
=⇒
v4 =
1 √ 5
−2i
.
0 0
.
1
4
−4
0
0
2i
0 = (A − λj 1)vj =
0
4
2
0
2
1
0
−2i
0
0
!
0
Gauss
0 −6
!
0
vj
−1
For the degenerate eigenvalue, the first and fourth row of (A − λj 1) are proportional to each other, as are the second and third rows. Gaussian elimination hence leads to two rows that both contain nothing but zeros, thus the degenerate eigenvectors contain two free parameters. The system can be reduced to the form: 2v 2j + v 3j = 0
−2iv 1j − v 4j = 0 For example, we can choose two linearly independent vectors as follows: v2 = (1, 1, −2, −2i)T ,
v3 = (0, 1, −2, 0)T .
As final step, we orthonormalize the eigenvectors. The eigenvectors of different eigenvalues are already orthogonal w.r.t. the complex scalar product, hvj , vi i = 0 for j = 2, 3 and i = 1, 4. (As explained in chapter ??, this is a consequence of the fact that A = A† .) Thus it suffices to orthonormalize the degenerate eigenvectors v2 and v3 . 1 √ (0, 1, −2, 0)T 5
We use the Gram-Schmidt procedure, e.g. with v03 = v3 /kv3 k =
,
and
v2,⊥ = v2 −v03 hv03 , v2 i =
1
0
1
5 1 − 5−2 =
−2
−2i
1
v
0
2,⊥ 0 , ⇒ v2 = kv k = 2,⊥
0
√1 5
0
0
.
−2i
−2i
0
1
The vectors {v1 , v02 , v03 , v4 } form an orthonormal basis of C4 . We use them as columns for T . Inverting the latter, e.g. using Gaussian elimination, we find that T −1 = T † :
T = (v1 , v02 , v03 , v4 ) =
√1 5
0
1
0 −2i
2 1
0
1
0
0 −2
0 −2i
0
0
−1 † , T =T =
√1 5
0
2
1 0 2i
1
1
0
0
0
1
−2
2i
0
0
0
.
1
Check: T DT
−1
0
1
0
−2i
2 = 1
0
1
0
0
−2 0
0 −2i
8·0
8·2
8·1
8·0
3·0
3·0
0
1 3·1 5 3·0
3·1
3·(−2)
3·2i
1
2·2i
2·0
2·0
= 3·0
−1 0 0 2i 0 0
7 2 0 X 2 4 0
= A.
−2i 0 0 2
2·1
The fact that T −1 = T † is no coincidence. As explained in chapter ??, this property follows from the orthonormality of the eigenvectors forming the columns of T .
S.L7.4 Functions of matrices P
L7.4.2 Functions of matrices 0 a 0
(a) For A =
0 0
we have A2 =
b 0
0 0
0 0 0
0 0 0
ab 0 0
and A3 = 0, thus the Taylor series
for eA contains only three terms: eA = A0 + A + 12 A2 = (b) We seek eB , with B = bσ1 , σ1 = σ12 =
Therefore:
0
1
1
0
eB =
0
1
1
0
∞ X l=0
= 1,
1 l B l!
=
0 1
1 0
1 0 0
a 1 0
m=0
b 1
.
. The matrix σ1 has the following properties:
σ12m = (σ12 )m = 1, ∞ X
1 2 ab
1 b2m σ12m (2m)! |{z}
1
σ12m+1 = σ1 (σ12 )m = σ1 .
+
∞ X m=0
1 b2m+1 σ12m+1 (2m + 1)! | {z } σ1
= 1 cosh b + σ1 sinh b =
(c)
We seek eB , with B = b
0 1
1 0
cosh b
sinh b
sinh b
cosh b
.
. We begin by diagonalizing B:
0 = det(B − λ1) = λ2 − b2 ⇒ Eigenvalues: λ± = ±b . !
Char. Polynom:
X
X
λ+ λ− = −b2 = det B .
λ+ + λ− = 0 = Tr B ,
Checks:
Normalized Eigenvectors: 0 = (B − λ± 1)v± ⇒ v± =
1 √ 2
!
1 √ 2
T = (v+ , v− ) =
Similarity transf.: eB = T eD T −1 :
eB = T
=
1 2
eb
0
0 e−b
1 √ 2
1
1
1
1
1 2
=
1 −1
eb + e−b
eb − e−b
eb − e−b
eb + e−b
1
.
±1
1 √ 2
T −1 =
,
1 −1
1
1
1
1
.
1 −1 eb
eb
e−b −e−b
1 −1
=
cosh b
sinh b
sinh b
cosh b
.
This agrees with the result from (b). (d) We seek eC , with C = iθ Ω, Ω = nj Sj = Ω2 =
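The matrix exponential in (b) and (c) can be cross-checked with scipy (a sketch added for illustration; scipy.linalg.expm computes the matrix exponential numerically):

import numpy as np
from scipy.linalg import expm

b = 0.7                                        # any real value works; 0.7 chosen arbitrarily
sigma1 = np.array([[0., 1.], [1., 0.]])

lhs = expm(b * sigma1)
rhs = np.array([[np.cosh(b), np.sinh(b)],
                [np.sinh(b), np.cosh(b)]])
assert np.allclose(lhs, rhs)                   # exp(b sigma1) = 1 cosh(b) + sigma1 sinh(b)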
X
ni Si nj Sj =
ij
X
1 2
n3 n1 −in2 n1 +in2 −n3
ni nj 12 (Si Sj + Sj Si ) =
|
ij
{z
1δ 2 ij
}
1
, and
X
P j
n2j = 1.
n2i 41 1 =
1 4
1 .
i
Alternatively, by explicit matrix multiplication: Ω2 =
1 4
n23 +n21 +n22 (n1 +in2 )n3 −n3 (n1 +in2 )
n3 (n1 −in2 )+(n1 −in2 )(−n3 ) n23 +n21 +n22
=
1 4
1 0 0 1
=
1 4
1.
This implies: Ω2m = (Ω2 )m = ( 41 1)m = ( 12 )2m 1 and Ω2m+1 = Ω(Ω2 )m = Ω( 21 )2m . Hence: eC =
∞ X l=0
=
1 (iθ Ω)l l!
=
∞ X
1 (iθ)2m (2m)!
m=0
1 cos θ2 + 2Ω i sin θ2 =
Ω2m + |{z} (1 )2m 1 2
∞ X
1 (iθ)2m+1 (2m + 1)!
2Ω( 1 )2m+1 2
m=0
cos θ2 +in3 sin
θ 2
i(n1 −in2 ) sin
i(n1 +in2 ) sin
θ 2
θ −in3 2
cos
2m+1 Ω | {z }
sin
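The closed form just derived can be verified numerically. The following sketch is not part of the original solution; it assumes Python with NumPy/SciPy and an arbitrarily chosen unit vector n and angle θ:

import numpy as np
from scipy.linalg import expm

theta = 0.73
n = np.array([1.0, 2.0, -0.5]); n /= np.linalg.norm(n)   # arbitrary unit vector (assumption)
n1, n2, n3 = n
Omega = 0.5 * np.array([[n3, n1 - 1j*n2],
                        [n1 + 1j*n2, -n3]])               # Omega = n_j S_j

lhs = expm(1j * theta * Omega)                            # e^{i theta Omega}
rhs = np.cos(theta/2) * np.eye(2) + 2j * np.sin(theta/2) * Omega
print(np.allclose(lhs, rhs))                              # True: closed form matches expm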
S.L8 Unitarity and Hermiticity
S.L8.1 Unitarity and orthogonality
P
L8.1.2 Orthogonal and unitary matrices
(a)
AAT =
2 −1
0
3
0
0
2
0
13
0
0
−1
0
2
1
2
0
9
0
0
= 0
5
0
0
0
5
6= 1,
⇒
A is not orthogonal .
Since the column vectors of A are indeed orthogonal to one another but not normalized, AAT is diagonal but not equal to 1.
BB T =
†
CC =
1 9 1 2
1
2 −2
2
1
0
0
−2
2
1 2
2
1 = 0
1
0 =
2
1
2
−2
1
2
0
1
i
1
−1
−i
−i
−1
1
i
(b)
1 −2
kxk =
√
0
0
= 1,
⇒
=
1
3
0
0
1 2
−1
0
2
1
−1
6
−2
B is orthogonal .
C is unitary .
,
1
2
−2
2
1 2
2
1
2
1
1 3
=
−1
√ 46 ,
1, ⇒
−3
1
kak =
6 ,
=
1
2
1 3
0
a = Ax =
b = Bx =
0
kbk =
7
1 . 2
√ 6 .
B is an orthogonal matrix and preserves the norm. Therefore, ‖b‖ = ‖x‖. ✓ In contrast, A is not an orthogonal matrix. Thus, ‖a‖ ≠ ‖x‖. ✓ (c)
c = Cy = kyk =
p
12
1 √ 2
√
i
1
1
−1
−i
i
√ + i(−i) = 2 ,
2i
=
kck =
0
,
q√
√ √ 2i(−i) 2 + 0 = 2 .
C is a unitary matrix and preserves the norm. Therefore, kck = kyk. X
S.L8.2 Hermiticity and symmetry P
L8.2.2 Diagonalizing symmetric or hermitian matrices
The matrix A is symmetric for (a,b) and Hermitian for (c), hence the similarity transformation T containing the eigenvectors as columns can respectively be chosen orthogonal, T⁻¹ = Tᵀ, or unitary, T⁻¹ = T†. To ensure this, the eigenvectors must form an orthonormal system w.r.t. the real or complex scalar product, respectively. We hence choose ‖vj‖ = 1 for all eigenvectors, and recall that non-degenerate eigenvectors of symmetric or Hermitian matrices are guaranteed to be orthogonal. (a) The zeros of the characteristic polynomial yield the eigenvalues:
Char. polynomial:  0 = det(A − λ·1) = det [[−19/10 − λ, 3/10], [3/10, −11/10 − λ]]
= (−19/10 − λ)(−11/10 − λ) − 9/100 = λ² + 3λ + 2 = (λ + 1)(λ + 2)
Eigenvalues:  λ1 = −1 ,  λ2 = −2 .
Checks:  λ1 + λ2 = −3 = Tr A = (1/10)(−19 − 11) ✓,   λ1 λ2 = 2 = det A = (1/100)[(−19)·(−11) − 3·3] ✓.
Eigenvectors:
λ1 = −1 :  0 = (A − λ1·1)v1 = (1/10) [[−9, 3], [3, −1]] v1   ⇒  v1 = (1/√10) (1, 3)^T ,
λ2 = −2 :  0 = (A − λ2·1)v2 = (1/10) [[1, 3], [3, 9]] v2    ⇒  v2 = (1/√10) (3, −1)^T .
Sim. tr.:  T = (v1, v2) = (1/√10) [[1, 3], [3, −1]] ,   T⁻¹ = Tᵀ = (1/√10) [[1, 3], [3, −1]] .
Check:  T D T⁻¹ = (1/10) [[1, 3], [3, −1]] [[−1·1, −1·3], [−2·3, −2·(−1)]] = (1/10) [[−19, 3], [3, −11]] = A . ✓
(b) The zeros of the characteristic polynomial yield the eigenvalues:
Char. polynomial:  0 = det(A − λ·1) = det [[−λ, 1, 0], [1, −1−λ, 1], [0, 1, −λ]]
= −λ[(−1−λ)(−λ) − 1] + λ = −λ(λ² + λ − 2) = −λ(λ − 1)(λ + 2)
Eigenvalues:  λ1 = 0 ,  λ2 = 1 ,  λ3 = −2 .
Checks:  λ1 + λ2 + λ3 = −1 = Tr A ✓,   λ1 λ2 λ3 = 0 = det A ✓.
Eigenvectors:
λ1 = 0 :   0 = (A − λ1·1)v1 = [[0, 1, 0], [1, −1, 1], [0, 1, 0]] v1    ⇒  v1 = (1/√2) (1, 0, −1)^T ,
λ2 = 1 :   0 = (A − λ2·1)v2 = [[−1, 1, 0], [1, −2, 1], [0, 1, −1]] v2   ⇒  v2 = (1/√3) (1, 1, 1)^T ,
λ3 = −2 :  0 = (A − λ3·1)v3 = [[2, 1, 0], [1, 1, 1], [0, 1, 2]] v3     ⇒  v3 = (1/√6) (1, −2, 1)^T .
Sim. tr.:
T = (v1, v2, v3) = [[1/√2, 1/√3, 1/√6], [0, 1/√3, −2/√6], [−1/√2, 1/√3, 1/√6]] ,
T⁻¹ = Tᵀ = [[1/√2, 0, −1/√2], [1/√3, 1/√3, 1/√3], [1/√6, −2/√6, 1/√6]] .
Check:
T D T⁻¹ = T · [[0·(1/√2), 0·0, 0·(−1/√2)], [1·(1/√3), 1·(1/√3), 1·(1/√3)], [−2·(1/√6), −2·(−2/√6), −2·(1/√6)]]
= [[0, 1, 0], [1, −1, 1], [0, 1, 0]] = A . ✓
(c) The zeros of the characteristic polynomial yield the eigenvalues:
Char. polynomial:  0 = det(A − λ·1) = det [[1−λ, −i, 0], [i, 2−λ, i], [0, −i, 1−λ]]
= (λ − 1)[(λ − 2)(λ − 1) − 1] + i²(λ − 1) = (λ − 1)[(λ − 2)(λ − 1) − 2] = (λ − 1)λ(λ − 3)
Eigenvalues:  λ1 = 0 ,  λ2 = 1 ,  λ3 = 3 .
Checks:  λ1 + λ2 + λ3 = 4 = Tr A ✓,   λ1 λ2 λ3 = 0 = det A = 1·(2·1 − i(−i)) − (−i)(i·1) ✓.
Eigenvectors:
λ1 = 0 :  0 = (A − λ1·1)v1 = [[1, −i, 0], [i, 2, i], [0, −i, 1]] v1     ⇒  v1 = (1/√3) (i, 1, i)^T ,
λ2 = 1 :  0 = (A − λ2·1)v2 = [[0, −i, 0], [i, 1, i], [0, −i, 0]] v2     ⇒  v2 = (1/√2) (1, 0, −1)^T ,
λ3 = 3 :  0 = (A − λ3·1)v3 = [[−2, −i, 0], [i, −1, i], [0, −i, −2]] v3  ⇒  v3 = (1/√6) (1, 2i, 1)^T .
Sim. tr.:
T = (v1, v2, v3) = [[i/√3, 1/√2, 1/√6], [1/√3, 0, 2i/√6], [i/√3, −1/√2, 1/√6]] ,
T⁻¹ = T† = [[−i/√3, 1/√3, −i/√3], [1/√2, 0, −1/√2], [1/√6, −2i/√6, 1/√6]] .
Check:
T D T⁻¹ = T · [[0·(−i/√3), 0·(1/√3), 0·(−i/√3)], [1·(1/√2), 1·0, 1·(−1/√2)], [3·(1/√6), 3·(−2i/√6), 3·(1/√6)]]
= [[1, −i, 0], [i, 2, i], [0, −i, 1]] = A . ✓
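All three diagonalizations can be checked in a few lines. This is a supplementary numerical sketch, not part of the original solution; it assumes Python with NumPy, and the Hermitian matrix of part (c) is the one reconstructed above:

import numpy as np

A_a = np.array([[-19, 3], [3, -11]]) / 10
A_b = np.array([[0, 1, 0], [1, -1, 1], [0, 1, 0]])
A_c = np.array([[1, -1j, 0], [1j, 2, 1j], [0, -1j, 1]])

for A, expected in [(A_a, [-2, -1]), (A_b, [-2, 0, 1]), (A_c, [0, 1, 3])]:
    lam, T = np.linalg.eigh(A)                               # eigh: symmetric/Hermitian matrices
    print(np.allclose(lam, expected))                        # eigenvalues, ascending order
    print(np.allclose(T @ np.diag(lam) @ T.conj().T, A))     # T D T^dagger = A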
L8.2.4 Spin-1 matrices: eigenvalues and eigenvectors
For Sx we obtain:
Char. pol.:  0 = det(Sx − λ·1) = det [[−λ, 1/√2, 0], [1/√2, −λ, 1/√2], [0, 1/√2, −λ]] = −λ³ + λ/2 + λ/2 = −λ(λ² − 1)
Eigenvalues:  λx,1 = 1 ,  λx,2 = 0 ,  λx,3 = −1 .
Checks:  λx,1 + λx,2 + λx,3 = 0 = Tr Sx ✓,   λx,1 λx,2 λx,3 = 0 = det Sx ✓.
Eigenvectors vx,a:
0 = (Sx − λx,1·1)vx,1 = [[−1, 1/√2, 0], [1/√2, −1, 1/√2], [0, 1/√2, −1]] vx,1   ⇒  vx,1 = ½ (1, √2, 1)^T ,
0 = (Sx − λx,2·1)vx,2 = [[0, 1/√2, 0], [1/√2, 0, 1/√2], [0, 1/√2, 0]] vx,2      ⇒  vx,2 = (1/√2) (1, 0, −1)^T ,
0 = (Sx − λx,3·1)vx,3 = [[1, 1/√2, 0], [1/√2, 1, 1/√2], [0, 1/√2, 1]] vx,3      ⇒  vx,3 = ½ (1, −√2, 1)^T .

For Sy we obtain:
Char. pol.:  0 = det(Sy − λ·1) = det [[−λ, −i/√2, 0], [i/√2, −λ, −i/√2], [0, i/√2, −λ]] = −λ³ + λ/2 + λ/2 = −λ(λ² − 1)
Eigenvalues:  λy,1 = 1 ,  λy,2 = 0 ,  λy,3 = −1 .
Checks:  λy,1 + λy,2 + λy,3 = 0 = Tr Sy ✓,   λy,1 λy,2 λy,3 = 0 = det Sy ✓.
Eigenvectors vy,a:
0 = (Sy − λy,1·1)vy,1 = [[−1, −i/√2, 0], [i/√2, −1, −i/√2], [0, i/√2, −1]] vy,1   ⇒  vy,1 = ½ (1, √2 i, −1)^T ,
0 = (Sy − λy,2·1)vy,2 = [[0, −i/√2, 0], [i/√2, 0, −i/√2], [0, i/√2, 0]] vy,2      ⇒  vy,2 = (1/√2) (1, 0, 1)^T ,
0 = (Sy − λy,3·1)vy,3 = [[1, −i/√2, 0], [i/√2, 1, −i/√2], [0, i/√2, 1]] vy,3      ⇒  vy,3 = ½ (1, −√2 i, −1)^T .

For Sz we obtain:
Char. pol.:  0 = det(Sz − λ·1) = det [[1−λ, 0, 0], [0, −λ, 0], [0, 0, −1−λ]] = (1 − λ)λ(1 + λ)
Eigenvalues:  λz,1 = 1 ,  λz,2 = 0 ,  λz,3 = −1 .
Checks:  λz,1 + λz,2 + λz,3 = 0 = Tr Sz ✓,   λz,1 λz,2 λz,3 = 0 = det Sz ✓.
Eigenvectors vz,a:
0 = (Sz − λz,1·1)vz,1 = [[0, 0, 0], [0, −1, 0], [0, 0, −2]] vz,1   ⇒  vz,1 = (1, 0, 0)^T ,
0 = (Sz − λz,2·1)vz,2 = [[1, 0, 0], [0, 0, 0], [0, 0, −1]] vz,2    ⇒  vz,2 = (0, 1, 0)^T ,
0 = (Sz − λz,3·1)vz,3 = [[2, 0, 0], [0, 1, 0], [0, 0, 0]] vz,3     ⇒  vz,3 = (0, 0, 1)^T .
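A compact numerical confirmation of the spectra, given as a supplementary sketch (not part of the original solution; it assumes Python with NumPy and the standard spin-1 matrices used above):

import numpy as np

Sx = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]]) / np.sqrt(2)
Sy = np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]]) / np.sqrt(2)
Sz = np.diag([1.0, 0.0, -1.0])

for S in (Sx, Sy, Sz):
    lam, V = np.linalg.eigh(S)
    print(np.allclose(lam, [-1, 0, 1]))                      # each component has eigenvalues -1, 0, 1
    print(np.allclose(V @ np.diag(lam) @ V.conj().T, S))     # eigenvectors diagonalize S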
P
L8.2.6 Inertia tensor
(a) Point masses: m1 = 2/3 at r1 = (2, 2, −1)^T;  m2 = 3 at r2 = (1/3)(2, −1, 2)^T.
Ĩ_ij = Σ_a m_a (δ_ij r_a² − r_{ia} r_{ja})
⇒  Ĩ = Σ_a m_a [[r_a² − r_{1a}r_{1a}, −r_{1a}r_{2a}, −r_{1a}r_{3a}], [−r_{2a}r_{1a}, r_a² − r_{2a}r_{2a}, −r_{2a}r_{3a}], [−r_{3a}r_{1a}, −r_{3a}r_{2a}, r_a² − r_{3a}r_{3a}]]
= (2/3) [[9−4, −4, 2], [−4, 9−4, 2], [2, 2, 9−1]] + 3 [[1−4/9, 2/9, −4/9], [2/9, 1−1/9, 2/9], [−4/9, 2/9, 1−4/9]]
= [[5, −2, 0], [−2, 6, 2], [0, 2, 7]] .

(b) The zeros of the characteristic polynomial yield the moments of inertia (eigenvalues):
0 = det(Ĩ − λ·1) = det [[5−λ, −2, 0], [−2, 6−λ, 2], [0, 2, 7−λ]] = −λ³ + 18λ² − 99λ + 162
= (λ − 3)(−λ² + 15λ − 54) = −(λ − 3)(λ − 6)(λ − 9)
Moments of inertia:  λ1 = 3 ,  λ2 = 6 ,  λ3 = 9 .

(c) Determination of the (normalized) eigenvectors:
(i)   Eigenvalue λ1 = 3:  0 = (Ĩ − λ1·1)v1 = [[2, −2, 0], [−2, 3, 2], [0, 2, 4]] v1   ⇒ (Gauss)  v1 = a1 (2, 2, −1)^T ,  a1 = 1/3 .
(ii)  Eigenvalue λ2 = 6:  0 = (Ĩ − λ2·1)v2 = [[−1, −2, 0], [−2, 0, 2], [0, 2, 1]] v2   ⇒ (Gauss)  v2 = a2 (2, −1, 2)^T ,  a2 = 1/3 .
(iii) Eigenvalue λ3 = 9:  0 = (Ĩ − λ3·1)v3 = [[−4, −2, 0], [−2, −3, 2], [0, 2, −2]] v3  ⇒ (Gauss)  v3 = a3 (1, −2, −2)^T ,  a3 = 1/3 .
Construction of the similarity transformation: Since Ĩ is symmetric, we may choose T⁻¹ = Tᵀ, where the matrix T contains the orthonormal eigenvectors as columns:
T = (v1, v2, v3) = (1/3) [[2, 2, 1], [2, −1, −2], [−1, 2, −2]] ,   T⁻¹ = Tᵀ = (1/3) [[2, 2, −1], [2, −1, 2], [1, −2, −2]] .
For these matrices we have Tᵀ Ĩ T = [[3, 0, 0], [0, 6, 0], [0, 0, 9]] .
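The tensor and its principal moments can also be built directly from the two point masses. This is a supplementary numerical sketch (not part of the original solution; it assumes Python with NumPy):

import numpy as np

masses = [2/3, 3.0]
positions = [np.array([2.0, 2.0, -1.0]), np.array([2.0, -1.0, 2.0]) / 3]

I = np.zeros((3, 3))
for m, r in zip(masses, positions):
    I += m * (np.dot(r, r) * np.eye(3) - np.outer(r, r))     # I_ij = m (delta_ij r^2 - r_i r_j)

print(np.allclose(I, [[5, -2, 0], [-2, 6, 2], [0, 2, 7]]))   # matches the result of (a)
print(np.allclose(np.linalg.eigvalsh(I), [3, 6, 9]))         # principal moments of (b)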
S.L8.3 Relation between Hermitian and unitary matrices

L8.3.2 Exponential representation of a 3-dimensional rotation matrix

(a) We use the product decomposition Rθ(e_j) = lim_{m→∞} [R_{θ/m}(e_j)]^m. For m ≫ 1, θ/m ≪ 1, we have cos(θ/m) = 1 + O((θ/m)²) and sin(θ/m) = θ/m + O((θ/m)³). Hence
R_{θ/m}(e1) = [[1, 0, 0], [0, cos(θ/m), −sin(θ/m)], [0, sin(θ/m), cos(θ/m)]] = [[1, 0, 0], [0, 1, −θ/m], [0, θ/m, 1]] + O((θ/m)²) ≡ 1 + (θ/m)τ1 + O((θ/m)²) ,
R_{θ/m}(e2) = [[cos(θ/m), 0, sin(θ/m)], [0, 1, 0], [−sin(θ/m), 0, cos(θ/m)]] = [[1, 0, θ/m], [0, 1, 0], [−θ/m, 0, 1]] + O((θ/m)²) ≡ 1 + (θ/m)τ2 + O((θ/m)²) ,
R_{θ/m}(e3) = [[cos(θ/m), −sin(θ/m), 0], [sin(θ/m), cos(θ/m), 0], [0, 0, 1]] = [[1, −θ/m, 0], [θ/m, 1, 0], [0, 0, 1]] + O((θ/m)²) ≡ 1 + (θ/m)τ3 + O((θ/m)²) ,
where the three τ_i matrices have the following form:
τ1 = [[0, 0, 0], [0, 0, −1], [0, 1, 0]] ,   τ2 = [[0, 0, 1], [0, 0, 0], [−1, 0, 0]] ,   τ3 = [[0, −1, 0], [1, 0, 0], [0, 0, 0]] .
The identity lim_{m→∞}[1 + x/m]^m = e^x now yields an exponential representation of Rθ(e_i):
Rθ(e_i) = lim_{m→∞} [R_{θ/m}(e_i)]^m = lim_{m→∞} [1 + (θ/m)τ_i]^m = e^{θτ_i} .

(b) For θ/m ≪ 1, we use the approximation
R_{θ/m}(n) = R_{n1θ/m}(e1) R_{n2θ/m}(e2) R_{n3θ/m}(e3) + O((θ/m)²)
           = [1 + (n1θ/m)τ1][1 + (n2θ/m)τ2][1 + (n3θ/m)τ3] + O((θ/m)²) = 1 + (θ/m) n·τ ≡ 1 + (θ/m)Ω ,
where Ω ≡ n·τ = [[0, −n3, n2], [n3, 0, −n1], [−n2, n1, 0]] , with matrix elements Ω_ij = −ε_ijk n_k . Hence:
Rθ(n) = lim_{m→∞} [R_{θ/m}(n)]^m = e^{θ(n·τ)} = e^{θΩ} .

(c) We first compute Ω² and Ω³, using index notation:
Ω²_ij = Ω_ik Ω_kj = ε_ikl ε_kjm n_l n_m = (δ_lj δ_im − δ_lm δ_ij) n_l n_m = n_i n_j − δ_ij ,
Ω³_ij = (Ω²·Ω)_ij = (n_i n_k − δ_ik)(−ε_kjm n_m) = −n_i ε_jmk n_m n_k + ε_ijm n_m = −Ω_ij ,
where the first term vanishes since ε_jmk n_m n_k = (n × n)_j = 0. Hence Ω³ = −Ω. This implies Ω⁴ = −Ω², and more generally, Ω^l = −Ω^{l−2} for l ≥ 3.
Alternatively, the matrix multiplication can be performed explicitly. This yields:
Ω² = −[[n2² + n3², −n1 n2, −n1 n3], [−n1 n2, n1² + n3², −n2 n3], [−n1 n3, −n2 n3, n1² + n2²]]   ⇒   (Ω²)_ij = n_i n_j − δ_ij ,
Ω³ = Ω²·Ω = −[[0, −n3, n2], [n3, 0, −n1], [−n2, n1, 0]] = −Ω .

(d) Since Ω^{l≥3} can be expressed in terms of Ω and Ω², using
Ω^{2l} = (−1)^{l+1} Ω²  (l ≥ 1) ,    Ω^{2l+1} = (−1)^l Ω  (l ≥ 0) ,
the same is true for the Taylor series of e^{θΩ}. We split off the l = 0 term and group the remaining terms according to odd or even powers of Ω:
Rθ(n) = e^{θΩ} = Σ_{k=0}^∞ (θ^k/k!) Ω^k = 1 + Σ_{l=0}^∞ (θ^{2l+1}/(2l+1)!) Ω^{2l+1} + Σ_{l=1}^∞ (θ^{2l}/(2l)!) Ω^{2l}
      = 1 + Ω Σ_{l=0}^∞ (−1)^l θ^{2l+1}/(2l+1)! − Ω² Σ_{l=1}^∞ (−1)^l θ^{2l}/(2l)!
      = 1 + Ω sin θ − Ω²(cos θ − 1) .
Here we used Σ_{l=1}^∞ (−1)^l θ^{2l}/(2l)! = [Σ_{l=0}^∞ (−1)^l θ^{2l}/(2l)!] − 1 = cos θ − 1. By rearranging, the matrix elements of Rθ(n) are found to have the following form:
[Rθ(n)]_ij = δ_ij − ε_ijk n_k sin θ + (n_i n_j − δ_ij)(1 − cos θ) = δ_ij cos θ − ε_ijk n_k sin θ + n_i n_j (1 − cos θ) . ✓
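The equivalence of the exponential and the Rodrigues form can be checked numerically. This is a supplementary sketch, not part of the original solution; it assumes Python with NumPy/SciPy and an arbitrarily chosen axis n and angle θ:

import numpy as np
from scipy.linalg import expm

theta = 1.2
n = np.array([0.3, -0.6, 0.9]); n /= np.linalg.norm(n)   # arbitrary unit vector (assumption)

# Omega_ij = -eps_ijk n_k
Omega = np.array([[0, -n[2], n[1]],
                  [n[2], 0, -n[0]],
                  [-n[1], n[0], 0]])

R_exp = expm(theta * Omega)
R_rod = (np.eye(3) * np.cos(theta)
         + Omega * np.sin(theta)
         + np.outer(n, n) * (1 - np.cos(theta)))          # Rodrigues formula from (d)
print(np.allclose(R_exp, R_rod))                          # True
print(np.allclose(R_exp @ R_exp.T, np.eye(3)))            # R is orthogonal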
S.L10 Multilinear algebra S.L10.1 Direct sum and direct product of vector spaces P
L10.1.2 Direct sum and direct product
Given: two vectors in R², u = (−1, 2)^T, v = (1, −3)^T.
(a) au ⊕ 3v + 2v ⊕ au = (au, 3v) + (2v, au) = (au + 2v, 3v + au) = (−a + 2, 2a − 6, 3 − a, −9 + 2a)^T .
(b)
au ⊗ 3v − v ⊗ 2u = (−ae1 + 2ae2 ) ⊗ (3e1 − 9e2 ) − (e1 − 3e2 ) ⊗ (−2e1 + 4e2 ) = (−3a + 2)e1 ⊗ e1 + (9a − 4)e1 ⊗ e2 + (6a − 6)e2 ⊗ e1 + (−18a + 12)e2 ⊗ e2 .
S.L10.2 Dual space P
L10.2.2 Dual vector in R³*: temperature in a room
(a) Given x1 = (1, 1, 0)^T, x2 = (0, 1, 1)^T, x3 = (0, 0, 1)^T, the standard basis of R³ can be expressed as: e1 = x1 − x2 + x3 , e2 = x2 − x3 , e3 = x3 . Therefore
T1 = T (e1 ) = T (x1 ) − T (x2 ) + T (x3 ) = 2 − (1 + a) + a = 1, T2 = T (e2 ) = T (x2 ) − T (x3 ) = 1 + a − a = 1, T3 = T (e3 ) = T (x3 ) = a. Hence the temperature is specified by the dual vector T = (1, 1, a) . (b) The temperature at x4 = (3, 2, 1)T = 3e1 + 2e2 + e3 is T (x4 ) = 3T (e1 ) + 2T (e2 ) + T (e3 ) = 3 + 2 + a = 5 + a .
P
L10.2.4 Dual vectors in
R4∗
Expanding the given vectors as xj ≡ ei (xj )i ≡ ei X ij and y = ej y j , we have ej = xi (X −1 )ij and w(y) = w(ej )y j = w(xi )(X −1 )ij y j . The components of xi give the ith column of X:
1
−1
0
0
1
−1
0
0
1
−1
1
0
0
1
X=
0
1
1 −1 Gauss −1 ⇒ X = 2 −1 −1
−1
−1
0
w(y) = w(xi )(X
P
1
1
1
1
1
−1
1
1 1
1 a
1
1
1
1
1 −1 ) j y = (2, 1, 0, 1) 2 −1
1
1
−1
1
1 2
−1
−1
−1
−1 i
j
a = a + 6 .
1
2
1
L10.2.6 Basis transformation for vectors and dual vectors
Given a dual basis transformation e′^i = T^i_j e^j, the corresponding basis vectors transform as e′_j = e_i (T⁻¹)^i_j. The components of e′^i give the i-th row of T; the j-th column of T⁻¹ gives the components of e′_j. Therefore e′¹ = (1, 1, 0), e′² = (0, 1, 1), e′³ = (1, 0, 1) implies:
T = [[1, 1, 0], [0, 1, 1], [1, 0, 1]]   ⇒ (Gauss)   T⁻¹ = ½ [[1, −1, 1], [1, 1, −1], [−1, 1, 1]]
⇒  e′1 = ½ (1, 1, −1)^T ,   e′2 = ½ (−1, 1, 1)^T ,   e′3 = ½ (1, −1, 1)^T .
L10.2.8 Canonical map between vectors and dual vectors via metric: hexagonal lattice
(a) Given v± = ½ (1, ±√3)^T, the metric g_αβ and its inverse g^αβ are found to be
g_αβ = g(v_α, v_β) = [[1, −1/2], [−1/2, 1]] ,   g^αβ = [[4/3, 2/3], [2/3, 4/3]] .
Angle between v±:  cos θ = ⟨v+, v−⟩/(‖v+‖‖v−‖) = g_{+−}/√(g_{++} g_{−−}) = −1/2 ,  hence θ = 2π/3 . ✓
(b) Expressing the basis as v_α = e_j (T⁻¹)^j_α, the dual basis satisfying v^α(v_β) = δ^α_β is given by v^α = T^α_i e^i. Here we have
T⁻¹ = ½ [[1, 1], [√3, −√3]]   ⇒   T = [[1, 1/√3], [1, −1/√3]]   ⇒   v^± = (1, ±1/√3) .
(c) The metric of the dual space is given by
(g*)^αβ = ⟨v^α, v^β⟩ = [[4/3, 2/3], [2/3, 4/3]] .
It indeed agrees with the inverse metric, g^αβ, as it should.
Angle between v^±:  cos θ̃ = ⟨v^+, v^−⟩/(‖v^+‖‖v^−‖) = g*^{+−}/√(g*^{++} g*^{−−}) = (2/3)/(4/3) = 1/2 ,  hence θ̃ = π/3 . ✓
(d) The vectors x+ = 2v+ + v− and x− = v+ + 2v− have component representations (x¹₊, x²₊)^T = (2, 1)^T and (x¹₋, x²₋)^T = (1, 2)^T, respectively, w.r.t. the basis {v_α}. The component representation of their duals, x^± ≡ J(x±) = x^±_α v^α, w.r.t. the dual basis {v^α} can be obtained by index lowering, x^±_α = x^β_± g_βα :
x^+ = (2, 1) [[1, −1/2], [−1/2, 1]] = (3/2, 0) ,    x^− = (1, 2) [[1, −1/2], [−1/2, 1]] = (0, 3/2) .
(e)  g(x+, x−) = x^α_+ g_αβ x^β_− = (2, 1) [[1, −1/2], [−1/2, 1]] (1, 2)^T = (3/2, 0)·(1, 2)^T = 3/2 , ✓
     g*(x^+, x^−) = x^+_α g^αβ x^−_β = (3/2, 0) [[4/3, 2/3], [2/3, 4/3]] (0, 3/2)^T = (3/2)·(2/3)·(3/2) = 3/2 . ✓
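The same bookkeeping is easy to reproduce numerically. This is a supplementary sketch (not part of the original solution; it assumes Python with NumPy), with the v_α stored as the columns of a matrix V, so that g = Vᵀ V and the dual basis vectors are the rows of V⁻¹:

import numpy as np

V = 0.5 * np.array([[1, 1], [np.sqrt(3), -np.sqrt(3)]])    # columns: v_+, v_-
g = V.T @ V                                                # metric g_{alpha beta}
g_inv = np.linalg.inv(g)
V_dual = np.linalg.inv(V)                                  # rows: dual vectors v^+, v^-

print(np.allclose(g, [[1, -0.5], [-0.5, 1]]))
print(np.allclose(g_inv, [[4/3, 2/3], [2/3, 4/3]]))
print(np.allclose(V_dual, [[1, 1/np.sqrt(3)], [1, -1/np.sqrt(3)]]))

x_plus, x_minus = np.array([2, 1]), np.array([1, 2])       # components w.r.t. {v_alpha}
print(np.allclose(x_plus @ g, [1.5, 0]))                   # index lowering, part (d)
print(np.isclose(x_plus @ g @ x_minus, 1.5))               # g(x_+, x_-), part (e)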
S.L10.3 Tensors P
L10.3.2 Linear transformations in tensor spaces
(a) For A = [[0, a, 0], [−a, 0, 1], [0, −1, 0]], the relation e′_i = e_k A^k_i implies e′1 = −ae2 , e′2 = ae1 − e3 , e′3 = e2 :
t′ = e′1 ⊗ e′2 + 3e′2 ⊗ e′3 = −ae2 ⊗ (ae1 − e3) + 3(ae1 − e3) ⊗ e2
   = 3a e1 ⊗ e2 − a² e2 ⊗ e1 + a e2 ⊗ e3 − e3 ⊗ e2 .
(b) The components of t = e1 ⊗ e2 + 3e2 ⊗ e3 ≡ e_i ⊗ e_j t^{ij} are given, in matrix notation, by t^{ij} = [[0, 1, 0], [0, 0, 3], [0, 0, 0]]. The linear transformation maps them to t′^{kl} = A^k_i A^l_j t^{ij} = A^k_i t^{ij} (Aᵀ)_j^l, which we compute in the fashion of matrix multiplication:
t′ = [[0, a, 0], [−a, 0, 1], [0, −1, 0]] [[0, 1, 0], [0, 0, 3], [0, 0, 0]] [[0, −a, 0], [a, 0, −1], [0, 1, 0]] = [[0, 3a, 0], [−a², 0, a], [0, −3, 0]] .
Hence A maps t to t′ = e_k ⊗ e_l t′^{kl} = 3a e1 ⊗ e2 − a² e2 ⊗ e1 + a e2 ⊗ e3 − e3 ⊗ e2 .
(c)
We compute the components of t0 = ek ⊗ el t0
t
0 kl
−2
3
1
2
= 2
−1
22
3
−1
3
t0 =
P
3
1
1
3
2
a
2−2
−1
2
1
3
3
kl
−1
2
using t0kl = Aki tij (AT )j l :
4a − 12
= 2a − 4
3
2a
2a − 4 a a+4
2a
a + 4 . a + 12
(4a − 12)e1 ⊗ e1 + (2a − 4)e1 ⊗ e2 + 2a e1 ⊗ e3 + (2a − 4)e2 ⊗ e1 + a e2 ⊗ e2 + (a + 4)e2 ⊗ e3 + 2a e3 ⊗ e1 + (a + 4)e3 ⊗ e2 + (a + 12)e3 ⊗ e3 .
L10.3.4 Tensors in T 22 (V )
The action of t = 2e1 ⊗ e2 ⊗ e1 ⊗ e3 − e2 ⊗ e1 ⊗ e3 ⊗ e2 ∈ T 22 (R3 ) on the argument vectors u = 2e1 − ae2 + ae3 , v = e1 + be2 − 3e3 and dual vector w = 4e1 + 2ce2 − 5e3 yields: (a)
t(w, w; u, v) = 2w1 w2 u¹ v³ − w2 w1 u³ v² = 2·4·2c·2·(−3) − 2c·4·a·b = −96c − 8abc .
(b)
t( . , w; v, u) = 2e1 w2 v 1 u3 − e2 w1 v 3 u2 = 2e1 · 2c · 1 · a − e2 · 4 · (−3) · (−a) = 4ace1 − 12ae2 .
(c)
t(w, . ; u, . ) = 2w1 e2 u1 ⊗ e3 − w2 e1 u3 ⊗ e2 = 2 · 4 · e2 · 2 ⊗ e3 − 2c · e1 · a ⊗ e2 = 16e2 ⊗ e3 − 2ace1 ⊗ e2 .
S.L10.4 Alternating forms P
L10.4.2 Two-form in
R3
(a) φ = φi ijk ej ⊗ ek = φ1 (e2⊗ e3 −e3⊗ e2 ) +φ2 (e3⊗ e1 −e1⊗ e3 ) +φ3 (e1⊗ e2 −e2⊗ e1 ) . (b) φ(u, v) = φ1 (u2 v 3 − u3 v 2 ) + φ2 (u3 v 1 − u1 v 3 ) + φ3 (u1 v 2 − u2 v 1 ) . This is reminiscent of a triple product, φ(u, v) = φ˜ · (u × v), where φ˜ = (φ1 , φ2 , φ3 )T is the column vector built from the coefficients of φ. (c)
For (φ1 , φ2 , φ3 ) = (1, 4, 3), u = e1 + 2e2 , v = ae2 + 3e3 , we obtain φ(u, v) = 1 · (2 · 3 − 0 · a) + 4 · (0 · 0 − 1 · 3) + 3 · (1 · a − 2 · 0) = −6 + 3a .
S.L10.6 Wedge product P
L10.6.2 Alternating forms in Λ(R4 )
Given u = ae2 − e3 , v = be1 + 2e3 and w = ce3 − 5e4 in
R4 , we have :
(a) (e1 ∧ e3 )(u, v) = u1 v 3 − u3 v 1 = 0 · 2 − (−1) · b = b .
(b) (e1 ∧ e2 ∧ e4 )(u, v, w) = u1 v 2 w4 − u1 v 4 w2 + u2 v 4 w1 − u2 v 1 w4 + u4 v 1 w2 − u4 v 2 w1 = 0 · 0 · (−5) −0 · 0 · 0 +a · 0 · 0 −a · b · (−5) +0 · b · 0 −0 · 0 · 0 = 5ab .
P
L10.6.4 Wedge products in the Grassmann algebra Λ(Rn )
(a)
φ_A ∧ φ_A = Σ_{ij} e^i ∧ e^j = Σ_i e^i ∧ e^i + Σ_{i<j} (e^i ∧ e^j + e^j ∧ e^i) = 0 ,
since e^i ∧ e^i = 0 and e^i ∧ e^j = −e^j ∧ e^i .
C6.2.4 Sine Series
(a) For an odd function, we have f(x) = −f(−x). Therefore, f̃_k can be written as follows, via the substitution x → −x for the part of the integral where x ∈ [−L/2, 0]:
f̃_k = ∫_{−L/2}^{L/2} dx e^{−ikx} f(x) = ∫_{−L/2}^{0} dx e^{−ikx} f(x) + ∫_{0}^{L/2} dx e^{−ikx} f(x)
    = ∫_{0}^{L/2} dx [e^{−ikx} f(x) + e^{+ikx} f(−x)] = −2i ∫_{0}^{L/2} dx sin(kx) f(x) ,          (1)
where we used f(−x) = −f(x). Since sin(kx) is an odd function of k, we have f̃_k = −f̃_{−k}. It follows that f̃_0 = 0, and:
f(x) = (1/L) Σ_k e^{ikx} f̃_k = (1/L) Σ_{k>0} [e^{ikx} f̃_k + e^{−ikx} f̃_{−k}] = (1/L) Σ_{k>0} f̃_k 2i sin(kx)
     ≡ Σ_{k>0} b_k sin(kx) ,   with   b_k ≡ (2i/L) f̃_k = (4/L) ∫_{0}^{L/2} dx sin(kx) f(x) .       (2)

(b) The sine series coefficients are obtained via (2), where only terms with k > 0 occur:
b_k = (4/L) ∫_{0}^{L/2} dx sin(kx) f(x) = (4/L) ∫_{0}^{L/2} dx sin(kx) = −(4/(kL)) [cos(kL/2) − 1] .   (3)
In comparison, calculating the Fourier coefficients f̃_k is a bit more cumbersome:
k = 0 :   f̃_0 = ∫_{−L/2}^{L/2} dx e⁰ f(x) = ∫_{−L/2}^{0} dx (−1) + ∫_{0}^{L/2} dx 1 = 0 .            (4)
k ≠ 0 :   f̃_k = ∫_{−L/2}^{L/2} dx e^{−ikx} f(x) = ∫_{−L/2}^{0} dx e^{−ikx}(−1) + ∫_{0}^{L/2} dx e^{−ikx}
             = (1/ik)[2 − e^{+ikL/2} − e^{−ikL/2}] = −(2/ik)[cos(kL/2) − 1]   [= (L/2i) b_k ✓]        (5)
Now set 0 ≠ k = 2πn/L in (3), with n ∈ Z:
b_k = (2i/L) f̃_k = −(2/(πn))[cos(πn) − 1] = { 0  for 0 ≠ n = 2m ;   4/(π(2m+1))  for n = 2m+1 } ,
with m ∈ N₀. Therefore the sine series representation (2) of f(x) has the following form:
f(x) = Σ_{m≥0} (4/π) (1/(2m+1)) sin(2π(2m+1)x/L)
     = (4/π) [ sin(2πx/L) + (1/3) sin(6πx/L) + (1/5) sin(10πx/L) + ... ] .
The sketch shows the function f(x) and the approximation thereof that results from the first three terms in the sine series.
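How quickly the truncated sine series approaches the square wave can be seen numerically. This is a supplementary sketch (not part of the original solution; it assumes Python with NumPy and uses f(x) = sign(x) on (−L/2, L/2) with L = 2 as a concrete test case):

import numpy as np

L = 2.0
x = np.linspace(-L/2, L/2, 1001)
f = np.sign(x)

def sine_series(x, n_terms):
    s = np.zeros_like(x)
    for m in range(n_terms):
        k = 2 * np.pi * (2*m + 1) / L
        s += (4 / np.pi) / (2*m + 1) * np.sin(k * x)       # b_k sin(kx) from (3)
    return s

mask = (np.abs(x) > 0.1) & (np.abs(x) < 0.9)               # stay away from the jumps
for n_terms in (1, 3, 50):
    print(n_terms, np.max(np.abs(sine_series(x, n_terms) - f)[mask]))
# the error away from the discontinuities shrinks as more terms are kept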
S.C6.3 Fourier transform P
C6.3.2 Properties of Fourier transformations
(a) Fourier transform of f(x − a), rewritten using the substitution x̄ = x − a:
∫_{R²} d²x e^{−ik·x} f(x − a) = ∫_{R²} d²x̄ e^{−ik·(x̄+a)} f(x̄) = e^{−ik·a} ∫_{R²} d²x̄ e^{−ik·x̄} f(x̄) = e^{−ik·a} f̃(k) .
(b) Fourier transform of f(αx), rewritten using the substitution x̄ = αx:
∫_{R²} d²x e^{−ik·x} f(αx) = ∫_{R²} d²x̄ (1/|α|²) e^{−i(k/α)·x̄} f(x̄) = (1/|α|²) f̃(k/α) .
(c) Fourier transform of f(Rx), rewritten using the substitution x̄ = Rx:
∫_{R²} d²x e^{−ik·x} f(Rx) = (1/|det R|) ∫_{R²} d²x̄ e^{−ik·(R⁻¹x̄)} f(x̄) = ∫_{R²} d²x̄ e^{−i(R⁻¹Rk)·(R⁻¹x̄)} f(x̄)
   = ∫_{R²} d²x̄ e^{−i(Rk)·x̄} f(x̄) = f̃(Rk) .
We used the fact that for a rotation, the Jacobian determinant is |det R| = 1. For the second-last step we used the invariance of the scalar product under rotations, i.e. a·b = (R⁻¹a)·(R⁻¹b).
Remark: In d dimensions the identities proven here generalize to: (a) The Fourier transform of f(x − a) is e^{−ik·a} f̃(k). (b) The Fourier transform of f(αx) is f̃(k/α)/|α|^d. (c) The Fourier transform of f(Rx) is f̃(Rk).
P
C6.3.4 Convolution of Gauss peaks
(a) Normalized Gaussian:
g^{[σj]}(x) = (1/(√(2π) σj)) e^{−x²/(2σj²)} ,   ∫_{−∞}^{∞} dx g^{[σj]}(x) = 1 .        (1)
The convolution of two Gaussians is given by:
(g^{[σ1]} ∗ g^{[σ2]})(x) = ∫_{−∞}^{∞} dy g^{[σ1]}(x−y) g^{[σ2]}(y) = (1/(2πσ1σ2)) ∫_{−∞}^{∞} dy e^{−½[((x−y)/σ1)² + (y/σ2)²]} .   (2)
Completing the square in the exponent, with σ² = σ1² + σ2²,
((x−y)/σ1)² + (y/σ2)² = (σ²/(σ1²σ2²)) y² − 2xy/σ1² + x²/σ1²
   = (σ/(σ1σ2))² [y − (σ2²/σ²) x]² + (x²/σ1²)(1 − σ2²/σ²) ≡ (σȳ/(σ1σ2))² + x²/σ² .
Via the substitution ȳ = y − (σ2²/σ²)x, we obtain from (2):
(g^{[σ1]} ∗ g^{[σ2]})(x) = (1/(2πσ1σ2)) ∫_{−∞}^{∞} dȳ e^{−½(σȳ/(σ1σ2))²} e^{−½(x/σ)²}
   = (1/(2πσ1σ2)) √(2π) (σ1σ2/σ) e^{−½(x/σ)²} = (1/(√(2π)σ)) e^{−½(x/σ)²} = g^{[σ]}(x) . ✓   (3)

(b) The Fourier transform of a normalized Gaussian of width σj is an unnormalized Gaussian of width 1/σj, namely g̃^{[σj]}(k) = e^{−σj²k²/2}. From the convolution theorem, we know that the Fourier transform of the convolution (g^{[σ1]} ∗ g^{[σ2]})(x) is given by the product of the Fourier transforms of g^{[σ1]}(x) and g^{[σ2]}(x):
(g^{[σ1]} ∗ g^{[σ2]})~(k) = g̃^{[σ1]}(k) g̃^{[σ2]}(k) = e^{−σ1²k²/2} e^{−σ2²k²/2} = e^{−(σ1²+σ2²)k²/2} = e^{−σ²k²/2} ,   (4,5)
with σ = √(σ1² + σ2²). Therefore (g^{[σ1]} ∗ g^{[σ2]})~(k) is a Gaussian of width 1/σ. The inverse transformation is then a Gaussian of width σ, i.e. (g^{[σ1]} ∗ g^{[σ2]})(x) = g^{[σ]}(x).

(c)
One basic property of the Fourier transform, known as ’Fourier reciprocity’, is that a function of width σ2 (here g [σ2 ] (x)) will have a Fourier spectrum of width 1/σ2 (here g˜[σ2 ] (k)). If one convolves a different function (here g [σ1 ] (x)) with the peaked function ] ∗ g [σ2 ] (k), (g [σ2 ] (x)), then the width of the Fourier spectrum of the convolution, g [σ1^ is bounded by the width of g˜[σ2 ] (k), independent of the form of the Fourier spectrum of g˜[σ1 ] (k). This is due to the product of the form (4). In other words, the convolution of g [σ1 ] with g [σ2 ] eliminates all those Fourier modes of g [σ1 ] that are not also contained in g [σ2 ] . Convolution thus acts as a ’low pass filter’ that only permits Fourier
modes with small values of k (|k| ≲ 1/σ2, i.e. long wavelengths λ_k ≳ 2πσ2). Accordingly, the convolution g^{[σ1]} ∗ g^{[σ2]} contains no fine structure on intervals smaller than σ2 (since that would require Fourier modes with k-values greater than 1/σ2), and is therefore smoothed out. In the current example with Gaussian functions, the width of the convolution g^{[σ1]} ∗ g^{[σ2]}, namely σ = √(σ1² + σ2²), is indeed greater than both σ1 and σ2. The sketches show the functions for illustrative values of σ1 and σ2:
[Sketch: g^{[σ1]}(x), g^{[σ2]}(x), g^{[σ]}(x) for σ1 = 0.5, σ2 = 0.7 (left), and their Fourier transforms g̃^{[σ1]}(k), g̃^{[σ2]}(k), g̃^{[σ]}(k) (right).]
(d) The comb is a sum of shifted g^{[σ1]} functions, with
f^{[σ1]}(x) = Σ_{n=−5}^{5} g_n^{[σ1]}(x) ,   with   g_n^{[σ1]}(x) = g^{[σ1]}(x − nL) .          (6)
The results for parts (a) and (b) hold for each of these functions:
(g_n^{[σ1]} ∗ g^{[σ2]})(x) = ∫_{−∞}^{∞} dy g_n^{[σ1]}(x−y) g^{[σ2]}(y) = ∫_{−∞}^{∞} dy g^{[σ1]}(x−nL−y) g^{[σ2]}(y) = g^{[σ]}(x−nL) = g_n^{[σ]}(x) .   (7)
The convolution of the f^{[σ1]}-comb with g^{[σ2]} thus yields another comb of g^{[σ]} functions, i.e. an f^{[σ]}-comb:
F^{[σ2]}(x) = (f^{[σ1]} ∗ g^{[σ2]})(x) = Σ_{n=−5}^{5} (g_n^{[σ1]} ∗ g^{[σ2]})(x) = Σ_{n=−5}^{5} g_n^{[σ]}(x) = f^{[σ]}(x) .   (8)
(e)
In the convolved comb, F [σ2 ] = f [σ] , each peak has width σ. Whenever this width is of the same order of magnitude as the peak-to-peak distance L, the individual peaks are so wide that they can no longer be distinguished separately. However, since they all have the same height, their sum yields a plateau of constant height. Expressed more quantitatively, one may say that the distance over which a normalized [σ] [σ] Gaussian g [σ] (x) √ drops from maximum to half its height, g (xb )/g (0) = 1/2, is given by xb = 2 ln 2 σ ' 1.2σ. The peaks in the comb can no longer be distinguished separately when this distance is greater than half the distance to the neighbouring peak, xb & L/2, i.e. when σ & L/2.
(f)
To smooth out noise in a signal, it must be convoluted with a peaked function whose width σ2 is greater than the length scale, xnoise , characterizing the noise fluctuations, i.e. σ2 & xnoise . It would be disadvantageous to choose σ2 much greater than xnoise , since then some information stored in the actual signal (i.e. without noise) would be lost.
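The width-addition rule σ = √(σ1² + σ2²) from (a)/(b) is easy to confirm on a grid. This is a supplementary sketch (not part of the original solution; it assumes Python with NumPy and approximates the convolution integral by a discrete convolution):

import numpy as np

def gauss(x, sigma):
    return np.exp(-x**2 / (2 * sigma**2)) / (np.sqrt(2*np.pi) * sigma)

sigma1, sigma2 = 0.5, 0.7
x = np.linspace(-8, 8, 16001)          # symmetric grid so the 'same' alignment is exact
dx = x[1] - x[0]

conv = np.convolve(gauss(x, sigma1), gauss(x, sigma2), mode='same') * dx
sigma = np.sqrt(sigma1**2 + sigma2**2)
print(np.allclose(conv, gauss(x, sigma), atol=1e-4))   # numerical convolution equals g^[sigma]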
P
C6.3.6 Performing an infinite series using the convolution theorem
(a) Complex Fourier series:
f̃_{γ,n} = ∫_0^τ dt f_γ(t) e^{iω_n t} = f_γ(0) ∫_0^τ dt e^{(γ+iω_n)t} = f_γ(0) [e^{(γ+iω_n)t}/(γ+iω_n)]_0^τ
        = f_γ(0) (e^{γτ} e^{iω_nτ} − 1)/(γ+iω_n) = 1/(γ+iω_n) ,
since e^{iω_nτ} = e^{i2πn} = 1 and f_γ(0) = 1/(e^{γτ} − 1).

(b) Convolution theorem:  (f ∗ g)(t) = ∫_0^τ dt′ f(t − t′) g(t′)   ⇒   (f ∗ g)~_n = f̃_n g̃_n .
Observation:  1/(ω_n² + γ²) = −1/(iω_n + γ) · 1/(iω_n − γ) = −f̃_{γ,n} f̃_{−γ,n} .
Convolution theorem applied in reverse:
S(t) = Σ_{n=−∞}^{∞} e^{−iω_n t}/(ω_n² + γ²) = −Σ_{n=−∞}^{∞} e^{−iω_n t} f̃_{γ,n} f̃_{−γ,n} = −Σ_{n=−∞}^{∞} e^{−iω_n t} (f_γ ∗ f_{−γ})~_n   (1)
     = −τ (f_γ ∗ f_{−γ})(t) = −τ ∫_0^τ dt′ f_γ(t − t′) f_{−γ}(t′) .   (2)
(c)
For 0 < t < τ , the functions occurring in the convolution integral are defined as follows: 0
f−γ (t) = f−γ (0) e−γt
( 0
fγ (t − t ) =
fγ (0) eγ(t−t fγ (0) e
0
)
γ(t−t0 +τ )
for
t0 ∈ (0, τ )
for
t−t0 ∈ [0, τ )
0 < t0 < τ .
(I)
⇒ t−τ < t0 ≤ t ,
(II)
⇒
0
0
t−t ∈ (−τ, 0) ⇒
for
t < t < t+τ .
(III)
When t0 traverses the domain of integration (0, τ ), the f−γ (t ) function f−γ (t0 ) is described by a single formula, (I), t throughout the entire domain, whereas for fγ (t − t0 ) two cases have to be distinguished: since this function (I) 0 τ 2τ exhibits a discontinuity when its argument t − t0 passes −τ f γ ( t − t ) 0 the point 0, we need formula (II) for 0 < t ≤ t, but formula (III) for t < t0 < τ . [(III) is the ‘periodic continuation’ of (II), shifted by one period]. Shifting the domain of integration from (0, τ ) to (t, t + τ ) would not help: though the new domain would avoid the discont+τ t−τ t t tinuity of f , it would contain a discontinuity of g. We (II) (III) therefore split the integration domain into two subdomains, (0, τ ) = (0, t] ∪ (t, τ ): −
S(t) (2) = τ
ˆ
t
0 ˆ t
=
ˆ dt0 fγ (t − t0 )f−γ (t0 ) +
dt0 fγ (t − t0 )f−γ (t0 ) ˆ τ 0 0 0 γ(t−t0 ) −γt0 dt fγ (0)e f−γ (0)e + dt0 fγ (0)eγ(t−t +τ ) f−γ (0)e−γt
0
=
τ
(eγτ
t
|
{z
(II)
}|
1 − 1)(e−γτ − 1)
"
{z (I)
}
t
|
{z
(III)
#
t τ 0 0 eγ(t−2t ) eγ(t−2t +τ ) + −2γ 0 −2γ t
}|
{z (I)
}
= [e^{−γt} − e^{γt} + e^{γ(t−τ)} − e^{−γ(t−τ)}] / {(−2γ)[2 − e^{−γτ} − e^{γτ}]} = [−sinh(γt) + sinh(γ(t−τ))] / {(−2γ)[1 − cosh(γτ)]} .   (3)
Thus we conclude:
S(t) = Σ_{n=−∞}^{∞} e^{−iω_n t}/(ω_n² + γ²) = (τ/(2γ)) [sinh(γ(t − τ)) − sinh(γt)] / [1 − cosh(γτ)]   for 0 ≤ t ≤ τ .
C6.3.8 Poisson resummation formula for Gaussians
The Fourier transform of the function f(x) = e^{−(ax²+bx+c)} has the following form:
f̃(k) = ∫_{−∞}^{∞} dx e^{−ikx} f(x) = ∫_{−∞}^{∞} dx e^{−a[x² + (b+ik)x/a] − c}
      = ∫_{−∞}^{∞} dx e^{−a[x + (b+ik)/(2a)]²} e^{(b+ik)²/(4a) − c} = √(π/a) e^{(1/(4a))(b² + 2ibk − k²) − c} .
The Poisson summation formula, Σ_m f(m) = Σ_n f̃(2πn), then gives:
Σ_{m∈Z} e^{−(am² + bm + c)} = √(π/a) e^{b²/(4a) − c} Σ_{n∈Z} e^{−(1/a)(π²n² + iπnb)} .
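For concrete values of a, b, c the two sides can be compared numerically. This is a supplementary sketch (not part of the original solution; it assumes Python with NumPy, and the cutoff ±60 and the values a = 0.7, b = 0.4, c = 0.2 are arbitrary test choices):

import numpy as np

a, b, c = 0.7, 0.4, 0.2
m = np.arange(-60, 61)
lhs = np.sum(np.exp(-(a*m**2 + b*m + c)))

n = np.arange(-60, 61)
rhs = np.sqrt(np.pi/a) * np.exp(b**2/(4*a) - c) * np.sum(np.exp(-(np.pi**2*n**2 + 1j*np.pi*n*b)/a))
print(np.isclose(lhs, rhs.real))       # both sides agree
print(abs(rhs.imag) < 1e-12)           # imaginary parts cancel in the symmetric sum over n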
S.C6.4 Case study: Frequency comb for high-precision measurements (p. ??) A1: We insert the Fourier series for p(t) into the formula for the Fourier transform of p(t): ˆ ˆ ∞ X 1X ∞ p˜m δ(ω − ωm ) , p˜(ω) = dt eiωt e−iωm t p˜m = ωr dt eiωt p(t) = τ −∞ −∞ m
|
{z
m
}
2πδ(ω−ωm )
with ωr = 2π/τ . We now clearly see that p˜(ω) is a periodic frequency comb of δ functions, whose weights are fixed by the coefficients p˜m of the Fourier series. ´∞ A2: We insert the Fourier representation, f (t) = −∞ dω e−iωt f˜(ω), into the definition of 2π p(t) and then perform the substitution ω = yωr (implying ωτ = 2πy): X Xˆ ∞ dω −iω(t−nτ ) ˜ p(t) = f (t − nτ ) = e f (ω) 2π n ω=yωr
=
Xˆ n
n ∞
−∞
−∞
X
}
n
dy ei2πyn e−iyωr t τ1 f˜(yωr ) =
|
{z
≡F (y)
(Poisson) F˜ (2πn) =
X
F (m)
m
Here we have defined the function F (y) = e−iyωr t τ1 f˜(yωr ), with Fourier transform F˜ (k), and used the Poisson summation formula. (→ ??) Using ωm = mωr = 2πm/τ , we thus obtain: X X −iω t 1 X −imωr t ˜ ωm =mωr 1 p(t) = F (m) = e f (mωr ) = e m p˜m with p˜m = f˜(ωm ) . τ τ | {z } m
m
≡p ˜m
m
The middle term has the form of a discrete Fourier series, from which we can read off the discrete Fourier coefficients p˜m of p(t). They are clearly given by p˜m = f˜(ωm ), and correspond to the Fourier transform of f (t) evaluated at the discrete frequencies ωm . A3: From A1 and A2 we directly obtain the following form for the Fourier transform of p(t): A1
p˜(ω) = ωr
Fourier spectrum:
X
A2
p˜m δ(ω − ωm ) = ωr
X
m
f˜(ωm )δ(ω − ωm ) .
m
P
For a series of Gaussian functions, pG (t) = n fG (t − nτ ), the envelope of the frequency comb, f˜G (ω), has the form of a Gaussian, too (→ ??): ˆ
∞
2 1 2 2 1 − t dt eiωt √ e 2T 2 = e− 2 T ω . 2 2πT −∞
f˜G (ω) =
Envelope:
ω0 = 0
p˜(ω )
p(t)
ωm = mωr τ
t
T
ωr = 2π/τ
1/T
ω
˜ A4: The Fourier transform of E(t) = e−iωc t p(t), to be denoted E(ω), is the same as that of p(t), except that the frequency argument is shifted by ωc : ˆ
ˆ
∞
˜ E(ω) =
∞
dω 0 p˜(ω 0 ) 2π
dt eiωt E(t) = −∞
−∞
ˆ
∞
dt ei(ω−ωc −ω
0
)t
= p˜(ω − ωc )
−∞
|
{z
2πδ(ω−ωc −ω 0 )
}
X X m=n−N 2π A3 2π f˜(ωm )δ(ω − ωm − ωc ) = f˜(ωn−N )δ(ω − ωn − ωoff ) . = τ
τ
m
n
For the last step we used ωc = N ωr + ωoff and renamed the summation index, m = n − N , ˜ such that ωm + ωc = ωn + ωoff . Thus E(ω) forms an ‘offset-shifted’ frequency comb, whose peaks relative to the Fourier frequencies, ωn , are shifted by the offset frequency ωoff . The ‘center’ of the comb lies at the frequency where f˜(ωn−N ) is maximal, i.e. at n = N , with frequency ωN ' ωc . A5: We begin with the definition of the Fourier transform of pγ (t): ˆ ∞ ˆ ∞ X dt eiωt pγ (t) = dt eiωt Definition: p˜γ (ω) = f (t − nτ )e−|n|τ γ −∞ 0
t = t − nτ :
=
X
ˆ inτ ω −|n|τ γ
e
e
n
0
iωt0
dt e
f (t0 ) .
(1)
−∞
n
|
−∞ ∞
{z
≡S [γ,ωr ] (ω)
}|
{z
=f˜(ω)
}
The sum S [γ,ωr ] (ω) ≡
X n∈Z
einτ ω e−|n|τ γ
τ =2π/ωr
=
X n∈Z
ei2πnω/ωr e−2π|n|γ/ωr
(2)
has the same form as a damped sum over Fourier modes,
X
S [,L] (x) ≡
eikx−|k| =
X
ei2πnx/L e−2π|n|/L .
(3)
n∈Z
k∈ 2π Z L
The latter can be summed using geometric series in the variables e−2π(∓ix)/L : (→ ??) S [,L] (x) =
1+
e−4π/L
X [] 1 − e−4π/L 'L δLP (x − mL) . −2π/L − 2e cos(2πx/L) m∈Z
(4)
The result is a periodic sequence of peaks at the positions x ' mL, each with the form of [] a Lorentzian peak (LP), δLP (x) = x2/π for x, L. Using the association x 7→ ω, 7→ γ +2 and L 7→ ωr we obtain: (2,4)
S [γ,ωr ] (ω) = ωr
X
[γ]
m∈Z (1,5)
p˜γ (ω) =
and
ωr
δLP (ω − mωr )
X m∈Z
(5)
[γ] δLP (ω − ωm )f˜(ω) .
Thus the spectrum of a series of periodic pulses, truncated beyond |n| . 1/(τ γ), corresponds to a frequency comb with Lorentz-broadened peaks as teeth, each with width ' γ.
S.C7 Differential equations S.C7.2 Separable differential equations P
C7.2.2 Separation of variables
(a) Starting from the DEQ y′ = −x²/y³, separation of variables and integration yields:
dy/dx = −x²/y³   ⇒   ∫_{y(x0)}^{y(x)} dỹ ỹ³ = −∫_{x0}^{x} dx̃ x̃²   ⇒   ¼[y⁴(x) − y⁴(x0)] = −⅓[x³ − x0³] .
Initial condition (i):  y(0) = 1 :   ¼[y⁴ − 1] = −⅓x³   ⇒   y(x) = [−(4/3)x³ + 1]^{1/4} .
When taking the quartic root we choose the positive root, since the initial value y(0) = 1 is positive, and since the solution exists throughout the interval (−∞, (3/4)^{1/3}].
Initial condition (ii):  y(0) = −1 :   ¼[y⁴ − 1] = −⅓x³   ⇒   y(x) = −[−(4/3)x³ + 1]^{1/4} .
When taking the quartic root we choose the negative root, since the initial value y(0) = −1 is negative, and since the solution exists throughout the interval (−∞, (3/4)^{1/3}].
(b) (i) Let us first consider the solution with initial condition y(0) = 1. According to the DEQ the slope at x = 0 is y′ = 0, hence at the point where the solution curve cuts the y axis (at y = 1), it lies parallel to the x axis. At all other points (x ≠ 0) we have x² > 0; for y > 0 the slope y′ = −x²/y³ thus is negative there, and the solution curve decreases monotonically. As x increases past x = 0, the solution y(x) thus becomes ever smaller, and the slope ever larger in magnitude; it diverges at the point, say x0, where y = 0. For all x < x0 we have y(x) > 0. The asymptotic behavior for x → −∞ can be found using the power-law ansatz y(x) ∝ |x|^n. Inserted into the DEQ it yields |x|^{n−1} ∝ |x|^{2−3n}, hence n = 3/4. (ii) Analogous arguments hold for the solution with y(0) = −1. The solution is strictly negative, y(x) < 0. It increases monotonically (y′ ≥ 0). The slope vanishes only at x = 0, and diverges at the point where y = 0. For x → −∞ we have y(x) ∝ −|x|^{3/4}.
[Sketch: the two solution curves (i) and (ii), approaching ±|x|^{3/4} for x → −∞.]
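The closed-form solution of case (i) can be reproduced by direct numerical integration. This is a supplementary sketch (not part of the original solution; it assumes Python with SciPy, and the integration stops safely below the breakdown point x = (3/4)^{1/3}):

import numpy as np
from scipy.integrate import solve_ivp

def rhs(x, y):
    return -x**2 / y**3

x_end = 0.9 * (3/4)**(1/3)
sol = solve_ivp(rhs, (0.0, x_end), [1.0], dense_output=True, rtol=1e-10, atol=1e-12)

x = np.linspace(0, x_end, 200)
y_exact = (1 - 4*x**3/3)**0.25                        # solution (i) found above
print(np.allclose(sol.sol(x)[0], y_exact, atol=1e-6))  # numerical and analytic solutions agree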
C7.2.4 Separation of variables: bacterial culture with toxin
P
(a) The DEQ n˙ = γn − τ nT (t), γ, τ > 0, is separable. After separation of variables and integration, the number of bacteria n(t) for t ≥ 0 is given as follows: ˆ n ˆ t dn d˜ n n = n (γ − τ T (t)) ⇒ = ln = dt˜ γ − τ T (t˜) dt n ˜ n 0 n0 0 ˆ t ˆ t ⇒ n(t) = n0 exp dt˜ γ − τ T (t˜) = n0 exp γt − dt˜τ T (t˜) . 0
0
(b) Qualitative analysis of the differential equation n˙ = (γ − aτ t)n, for a > 0 and t ≥ 0: (i) For small times t < γ/(aτ ), we have n˙ > 0, i.e. the population n(t) increases with time. At t = γ/(aτ ), n˙ = 0, and a maximum in n(t) is attained. (iii) For large times t > γ/(aτ ), n˙ < 0 i.e. n(t) decreases. (iv) As n(t) becomes ever smaller, so does n(t), ˙ hence for t → ∞ they both vanish. For T (t) = at, the solution can be obtained explicitly: ˆt
ˆt dt˜ at˜ = 21 at2
dt˜ τ T (t˜) = 0
⇒
n(t)/n0
(c)
0
n(t) = n0 exp γt − 21 τ at2 , t ≥ 0 .
The explicit solution confirms the analysis in (b). Firstly, limt→∞ n(t) = 0. Furthermore:
n(t) ˙ = n0 (γ − aτ t) exp γt −
1 τ at2 2
⇒
1
0 0
n(t) ˙ > 0,
n(t) ˙ = 0, n(t) ˙ < 0, n(t) ˙ → 0,
1
for for for for
aτ t/γ
t < γ/(aτ ) , t = γ/(aτ ) , t > γ/(aτ ) , t → ∞.
(d) The population has shrunk to half its initial value when n(th ) = 12 n0 : = exp γth − 12 aτ t2h
1 2
⇒ γth − 21 aτ t2h = − ln 2 ⇒ t± =
1 aτ
γ±
p
γ 2 + 2aτ ln 2 .
| We require the positive solution th = t+ , i.e. th =
P
1 aτ
γ+
p
{z
}
>|γ|
γ 2 + 2aτ ln 2
.
C7.2.6 Substitution and separation of variables
(a) Given DEQ: y 0 (x) = f (ax + by(x) + c). Substitution: u(x) = ax + by(x) + c. The first derivative of the substitution gives: u0 (x) = a + by 0 (x) = a + bf (ax + by(x) + c) = a + bf (u) . (b) Separation of variables: ˆ u(x) ˆ x d˜ u du d˜ x ⇒ = a + bf (u) ⇒ = dx a + bf (˜ u) u0 x0
ˆ
u(x)
d˜ u u0
1 = x−x0 . a+bf (˜ u)
The equation in the box determines u(x) implicitly. (c)
Let u(x) = x + 3y + 5 and f (u) = eu , with initial condition y(0) = 1 i.e. u(0) = 8. ˆ
u
x−0= 8
ˆ
d˜ u = 1 + 3eu˜
u
d˜ u 8
e−˜u −˜ e u+3
ˆ
˜ v ˜=e−u
e−u
−
=
d˜ v e−8
e−u + 3 1 = − ln −8 v˜ + 3 e +3
.
u(x) = − ln e−x e−8 + 3 − 3 .
Solving for u(x):
x + 3y(x) + 5 = − ln e−x e−8 + 3 − 3 .
Substituting back:
y(x) = − 13 x + 5 + ln e−x e−8 + 3 − 3
Solving for y(x):
.
Check: Does this satisfy the given differential equation y 0 (x) = f (u(x))? 0
y (x) =
− 31
e−x e−8 + 3
1−
!
e−x (e−8 + 3) − 3
=
1 . e−x (e−8 + 3) − 3 −x
f (u) = eu(x) = ex+3y(x)+5 = ex+5−(x+5+ln[e
(e−8 +3)−3]) =
y(0) = − 31 5 + ln (e−8 + 3) − 3
Initial condition:
1 X = y 0 (x). e−x (e−8 +3)−3
= − 13 (5 − 8) = 1. X
dy (d) Alternative strategy, using direct separation of variables: dx = ex+5 e3y . ˆ y ˆ x y x ⇒ d˜ y e−3˜y = d˜ x ex˜+5 ⇒ − 31 e−3˜y 1 = ex˜+5 0 1
⇒
− 31
0
−3y
e
−3
−e
= ex+5 −e5 ⇒ y(x) = − 13 x+5 + ln e−x e−8 +3 −3
.
(e)
Let u(x) = a(x + y) + c and f (u) = u2 , with initial condition y(x0 ) = y0 i.e. u(x0 ) = x0 = a(x0 + y0 ) + c . ˆ x − x0 =
1 a
u
(x) u0
d˜ u = 1+u ˜2
1 a
[arctan(u) − arctan(u0 )] .
h
u(x) = tan ax −ax0 + arctan [a(x0 + y0 ) + c]
Solving for u(x):
|
{z
i
}
≡d0
= tan(ax + d0 ) . a(x + y(x)) + c = tan(ax + d0 ) .
Substituting back:
y(x) = − ac − x +
Solving for y(x):
1 a
tan(ax + d0 ) .
Check: Does the solution satisfy the given differential equation y 0 (x) = f (u(x))? f (u) = u2 (x) = (a(x + y(x)) + c)2 = (ax − c − ax + tan(ax + d0 ) + c)2 = tan2 (ax + d0 ) . X
y 0 (x) = −1 + a1 (1 + tan2 (ax + d0 )) · a = tan2 (ax + d0 ) = f (u) .
S.C7.3 Linear first-order differential equations P
C7.3.2 Inhomogeneous linear differential equation, variation of constants
(a) Homogeneous differential equation: x+tx ˙ = 0, mit x(0) = x0 . Separation of variables: ˆ
dx = −tx ⇒ dt
x
x0
d˜ x =− x ˜
ˆ
t
dt˜t˜ ⇒ ln 0
x x0
= − 12 t2 ⇒
1 2
xh (t) = x0 e− 2 t
.
1 2
(b) Ansatz for particular solution - variation of constants: xp = c(t)e− 2 t , with xp (0) = 0. Plugging this into the differential equation: 1 2
x˙ p + txp = e− 2 t 1 2
1 2
1 2
1 2
ce ˙ − 2 t − tce− 2 t + cte− 2 t = e− 2 t Particular solution: General solution: (c)
1 2
xp (t) = te− 2 t
⇒
c(t) ˙ =1 ⇒
c(t) = t .
. 1 2
1 2
1 2
x(t) = xh (t) + xp (t) = x0 e− 2 t + te− 2 t = (x0 +t)e− 2 t . 1 2
With the input xh (0) = 1, the homogeneous solution is: xh (t) = e− 2 t . In the same way as in (b), one finds for c˜(t) the differential equation c˜˙(t) = 1. With input c˜(0) = x0 , its solution is c˜(t) = x0 + t. 1 2
Thus the overall solution is: x(t) = c˜(t)xh (t) = (x0 + t)e− 2 t
, as in (b). X
S.C7.4 Systems of linear first-order differential equations
91
S.C7.4 Systems of linear first-order differential equations P
C7.4.2 Linear homogeneous differential equation with constant coefficients
Differential equation:
˙ x(t) = A · x(t) ,
General exponential ansatz:
x(t) =
X
vj eλj t cj ,
1 2
A=
with
3
−1
−1
3
.
Avj = λj vj .
with
j
0 = det(A − λ1) = ( 32 − λ)( 32 − λ) −
Characteristic polynomial:
1 4
= 41 (4λ2 − 12λ + 8) = (λ − 1)(λ − 2) Eigenvalues:
λ1 = 1
,
λ2 = 2 .
Checks:
λ1 + λ2 = 3 = Tr A = 21 (3 + 3) ,
X
1 2 2
X
λ1 λ2 = 2 = det A =
(3 · 3 − 1 · 1) .
Eigenvectors: λ1 = 1 :
0 = (A − λ1 1)v1 =
1 2
λ2 = 2 :
0 = (A − λ2 1)v2 =
1 2
1 −1 −1
v1
1
−1 −1
⇒ v1 =
1 √ 2
⇒ v2 =
1 √ 2
v2
−1 −1
1
.
1
1
.
−1
Similarity transformation for the diagonalization: 1 √ 2
T = (v1 , v2 ) =
1
1
1
−1
,
T
−1
=T
T
1 √ 2
=
1
1
1
−1
.
Determination of the initial condition (using xj (0) = v ji ci = T ji ci ⇒ ci = (T −1 )ij xj (0)): x(0) =
X
vi c
i
⇒
−1
c=T
· x(0) =
i
Solution:
λ1 t 1
c + v2 e
x(t) = v1 e
2
=
Check: Check:
2
x(0) = A·x=
3
2 2
1
−1
−1
et +
c =
−1
X
1
2
−2 2
1
1
−1
3
1
·
1
4 1·t √ e 2
=
+
1 √ 2
1 √ 2
4 −2
1
−1
·
≡
c1 c2
−2 2·t √ e 2
.
3 2
3
1
1
e2t .
=
1
−1
=
+
2 1 2
t
e +
2
1 √ 2
λ2 t 2
1 √ 2
et + X
e2t = x˙ .
−1 1
e2t =
1 2
4 4
et +
−4 4
e2t
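Equivalently, the solution is x(t) = e^{At} x(0); a quick numerical cross-check, given as a supplementary sketch (not part of the original solution; it assumes Python with NumPy/SciPy and the initial value reconstructed above):

import numpy as np
from scipy.linalg import expm

A = 0.5 * np.array([[3, -1], [-1, 3]])
x0 = np.array([1.0, 3.0])

for t in (0.0, 0.5, 1.3):
    x_exp = expm(A * t) @ x0                                              # propagator solution
    x_eig = np.array([2, 2]) * np.exp(t) + np.array([-1, 1]) * np.exp(2*t)  # eigen-decomposition result
    print(np.allclose(x_exp, x_eig))                                      # True for each t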
C7.4.4 System of linear differential equations with non-diagonalizable matrix: critically damped harmonic oscillator
2 (a) As explained in the statement of the problem, the 2nd-order DEQ, x ¨ +2γ x+γ ˙ x = 0, T can be written as a system of two first-order DEQs by using x ≡ (x, x) ˙ = (x, v)T :
v˙ = x ¨ = −γ 2 x − 2γv .
v = x, ˙
New variables:
x˙ v˙
Matrix form:
0 −γ 2
=
|{z}
|
˙ x
1 −2γ
x . v
{z
} |{z} x
A
x˙ = A · x .
Compact notation:
x0 = x(0) = (x(0), v(0))T .
Initial value:
Next we find the eigenvalues and eigenvectors of the matrix A: !
λ = −γ
Eigenvalue:
−γ 2
X
!
0 = (A − (−γ))v =
Eigenvector:
1 = λ (2γ + λ) + γ 2 = (λ + γ)2 . −2γ −λ
is doubly degenerate. X
λ · λ = γ 2 = det A
λ + λ = −2γ = Tr A,
Checks:
−λ
Char. polynomial: 0 = det(A − λ1|) =
γ 1 −γ 2 −γ
x (t) = ce−γt ,
v (t) = −cγe−γt .
Solution:
x(t) = ve−γt ,
Check:
x ¨ + 2γ x˙ + γx = c (−γ)2 + (−2γ)γ + γ 2 e−γt = 0 .
⇒
x x 1 , ⇒ v= =c , v v −γ
X
(b) Variation of constants: as ansatz for the second solution we use x2 (t) = c(t)x1 (t), where x1 (t) = e−γt is proportional to the solution found in (a). Then x ¨1 + 2γ x˙ 1 + γ 2 x1 = 0 ,
(1)
x˙ 1 + γx1 = 0 .
(2)
Strategy: Insert x2 = c x1 into the original DEQ to obtain a DEQ for c(t), and solve it: d2t x2 + 2γdt x2 + γ 2 x2 = 0 .
Requirement: d2t (c x1 )
Insert ansatz: Product rule: Regrouping:
+ 2γdt (c x1 ) + γ (c x1 ) = 0 .
(¨ c x1 + 2c˙ x˙ 1 + c x ¨1 ) + 2γ(c˙ x1 + c x˙ 1 ) + γ 2 (c x1 ) = 0 . c¨ x1 + 2c˙ (x˙ 1 + γx1 ) +c (¨ x1 + 2γ x˙ 1 + γ 2 x1 ) = 0 .
|
{z
(2)
}
|
{z
(1)
=0
⇒ DEQ for c(t): Solution to (4): Solution to (3):
(3)
2
}
=0
c¨ = 0 .
(4)
c(t) = c1 + c2 t .
(5) (5)
x2 (t) = c(t)x1 (t) = (c1 + c2 t)e−γt .
(6)
Eq. (6) is the general solution to a critically damped harmonic oscillator. For c2 = 0 and c1 = c, we obtain the solution found in (a). (c)
Required initial conditions for x(t) = (c1 + c2 t)e−γt : x(0) = 1 :
[c1 + c2 t] e−γt t=0 = 1,
⇒
c1 = 1 .
(7)
(7)
[c2 − γ(c1 + c2 t)] e−γt t=1 = 1,
x(1) ˙ =1:
⇒
c2 =
eγ + γ . 1−γ
(8)
(d) Alternatively, we find the solution to the critically damped case as the the limiting value Ω/γ → 1 of the solutions to the over- and under- damped cases:
p
For the over-damped harmonic oscillator, with γ > Ω, ≡ γ 2 − Ω2 , the general p solution has the following form, with γ± = −γ ± γ 2 − Ω2 = −γ ± :
x (t) = c+ eγ+ t + c− eγ− t = e−γt c+ et + c− e−t . A Taylor expansion for t 1 gives:
x(t) = e−γt c+ 1 + t + O (t)2
= e−γt c1 + c2 t + O (t)2
+ c− 1 − t + O (t)2
,
c1 = c+ + c− ,
with
c2 = (c+ − c− ) .
For t 1, we obtain the solution Eq. (6) to the critically damped harmonic oscillator. This agreement holds for times t 1/, i.e. the smaller the difference between γ and Ω, the smaller the , and hence the longer the time for which the agreement holds. p For the under-damped case with γ < Ω, ≡ Ω2 − γ 2 , we obtain in an analogous p manner, with γ± = −γ ± i Ω2 − γ 2 = −γ ± i:
x (t) = c+ eγ+ t + c− eγ− t = e−γt c+ eit + c− e−it
= e−γt c1 + c˜2 t + O (t)2
P
,
c1 = c+ + c− ,
with
c˜2 = i(c+ − c− ) .
C7.4.6 Coupled oscillations of three point masses
(a) Equations of motion:
1
x ¨
In matrix form:
2 x¨ = − k − m1
2
x ¨3
0
| Compact notation:
1 m1
− m11
0
2 m2
− m12
− m13
1 m3
{z
1 x
2 x , x3
}
≡A
¨ = −Ax . x
(1)
(b) Reduction to an eigenvalue problem: Ansatz for solution: Ansatz twice differentiated: insert (2), (3) in (1): Eigenvalue equation:
x(t) = v cos(ωt) . 2
¨ (t) = −ω v cos(ωt) . x
(2) (3)
−ω 2 v cos(ωt) = −Av cos(ωt) , Av = ω 2 v .
(4)
(c)
Setting m1 = m3 = m, m2 = 23 m, and k = mΩ2 gives: A = Ω2 2 (4) ω 1 Av = 2 v = λv , 2 Ω Ω
1 ·[eigenvalue equation (4)]: Ω2
Determination of the eigenvalues λj of the matrix
1−λ ! 0 = det(A−λ1) = − 32 0
3−λ −1
1
−1
0
3
0
−1
1
0
−1
0
(5)
1 A: Ω2
3 2
−
3 2
= (1 − λ)λ(λ − 4) .
λ3 = 4 .
(6)
− 23 v1
− 32
2
0
−1
0
−3
−1
0
λ3 = 4 :
λ2 = 1 ,
− 32
λ2 = 1 :
λ ≡ (ω/Ω)2 .
with
.
(A − λj 1) vj = 0 .
Eigenvectors vj : λ1 = 0 :
0
= (1 − λ) λ2 − 4λ + 3 − λ1 = 0 ,
0 − 23 1
h i 3 3 − 32 = (1−λ) (3−λ)(1−λ) − 2 − 2 (1−λ) 1−λ i
−1
h
Eigenvalues:
−1 3 −1
1 − 32 0
v1 =
v2 =
1 √ 2
1 0 −1
v3 =
1 √ 11
! .
(7)
!
− 23 v2
=0
⇒
−1
− 23 v3
0
−1
−3
p
λj Ω:
(5)
⇒
1 1 1
− 32
Eigenfrequencies ωj =
=0
1 √ 3
=0
(6)
ω1 = 0 ,
⇒
(6)
1 −3 1
.
(8)
! .
(9)
(6)
ω2 = Ω ,
ω3 = 2Ω .
Zero-eigenmode:
(2)
(7)
x1 (t) = v1 cos(ω1 t) =
1 √ 3
1
1 . 1
Symmetric eigenmode:
(2)
(8)
x2 (t) = v2 cos(ω2 t) =
1 √ 2
(2)
(9)
x3 (t) = v3 cos(ω3 t) =
1 √ 11
0 cos(Ωt)
.
−1
Third eigenmode:
1
1
−3 cos(2Ωt) . 1
(d) For the ‘zero-eigenmode’ x1 (t) (left sketch), all three masses are displaced by the same amount, i.e. the whole system has been shifted. Because no springs have been stretched or otherwise disturbed, there is no associated cost in energy, and the eigenfrequency is zero, ω1 = 0. This is in contrast to the other two modes (j = 2, 3). For the symmetric eigenmode, x2 (t) (middle sketch), the two outer masses move with the opposite phase, and the middle mass remains stationary. For the third eigenmode, x3 (t) (right sketch), the two outer masses move with the same phase as each other,
and with the opposite phase to the middle mass. The last has the larger amplitude since it is lighter. The sketches below show the positions of the masses from the point t = 0 onwards, and the fat arrows denote their velocities a short time (i.e a quarter period) later.
x11 0
x21
x12
x31
0
0
0
x22
x32
0
x13
0
x23
0
0
x33 0
x 3 ( t)
x2 (t)
x1 (t)
P
t
t
t
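The eigenfrequencies follow directly from the matrix A of part (c). This is a supplementary numerical sketch (not part of the original solution; it assumes Python with NumPy and sets Ω = 1):

import numpy as np

# A/Omega^2 for m1 = m3 = m, m2 = 2m/3, k = m Omega^2
A = np.array([[ 1.0, -1.0,  0.0],
              [-1.5,  3.0, -1.5],
              [ 0.0, -1.0,  1.0]])

lam, V = np.linalg.eig(A)                     # A is not symmetric, so use eig rather than eigh
order = np.argsort(lam.real)
lam, V = lam.real[order], V[:, order].real

print(np.allclose(lam, [0, 1, 4]))            # (omega/Omega)^2 = 0, 1, 4
print(np.allclose(np.sqrt(lam), [0, 1, 2]))   # eigenfrequencies 0, Omega, 2*Omega
# the columns of V are proportional to (1,1,1), (1,0,-1), (1,-3,1), as found above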
C7.4.8 Inhomogeneous linear differential equation of third order
(a) Reduction to a matrix equation: ... x − 6¨ x + 11x˙ − x = fA (t),
x(0) = 1,
x(0) ˙ = 0,
x ¨(0) = a ,
(1)
can be written as a first order matrix DEQ, using x ≡ (x, x, ˙ x ¨)T ≡ (x1 , x2 , x3 )T and ... 3 x = x˙ : New variables: x˙ = x˙ 1 = x2 ,
x˙ 1
Matrix form:
... x = x˙ 3 = 6x1 − 11x2 + 6x1 + fA (t) .
x ¨ = x˙ 2 = x3 ,
x1
0
1
0
x˙ 2 = 0
0
1 x2 + fA (t) 0 .
x˙ 3
6
| {z } ˙ x
Compact notation: Initial values:
|
−11
0
x3
6
{z
1
} | {z }
|
x
A
{z
b(t)
}
x˙ = A · x + b(t) .
(2) T
x0 = x(0) = (x(0), x(0), ˙ x ¨(0)) = (1, 0, a)
a = 2
T
T
−→
(1, 0, 2) .
(b) Homogeneous solution: We first determine the eigenvalues λj and the eigenvectors vj (j = 1, 2, 3) of A:
Char. polynomial:
−λ 0 = det(A − λ1) = 0 6
1
!
1 = −λ(λ(−6+λ)+11)+6 6−λ 0
−λ −11
= −(λ3 − 6λ2 + 11λ − 6) = −(λ − 1)(λ2 − 5λ + 6) = −(λ − 1)(λ − 2)(λ − 3) . Eigenvalues: Checks:
λ1 = 1,
λ2 = 2,
λ3 = 3 .
X
λ1 + λ2 + λ3 = 6 = Tr A,
Eigenvectors:
λ1 λ2 λ3 = 6 = det A = 0 + 6 · 1 · 1 1
0
−1
1 v1
6
−11
5
0
−1
0 = (A − λ1 1)v1 = !
(3) X
1
Gauss
⇒
v1 = 1 . 1
−2
1
0
−2
1 v2
6
−11
4
−3
1
0
0
−3
1 v3
6
−11
3
0 = (A − λ2 1)v1 = !
0 = (A − λ3 1)v3 = !
0
1
Gauss
⇒
v2 = 2 . 4
1
Gauss
⇒
v3 = 3 . 9
Since xj (t) = vj eλj t satisfy the homogeneous DEQ x˙ j = A · xj , the first component of xj (t), i.e. xj (t) = eλj t , satisfies the DEQ (1)|fA (t)=0 . Check that this is indeed the case: ?
(d3t − 6d2t + 11dt − 6)eλj t = 0 . λ1 = 1 :
(13 − 6 · 12 + 11 · 1 − 6) et = 0 . X
λ2 = 2 :
(23 − 6 · 22 + 11 · 2 − 6)e2t = 0 . X
λ3 = 3 :
(33 − 6 · 32 + 11 · 3 − 6)e3t = 0 . X
The most general form of the homogeneous solution is xh (t) = cj x (t). For a j h j 1 2 3 T given value x0 , the coefficient vector ch = (ch , ch , ch ) is fixed by xh (0) = P initial j v c = x , or in matrix notation, we have T ch = x0 , where the matrix T = {v ij } j 0 h j has the eigenvectors vj as columns, T = (v1 , v2 , v3 ):
P
1 T =
ch = T
−1
x0 ⇒
c1h c2h c3h
!
=
1 2 4
1 1
1 3 9
3 − 52 −3 4 1 − 32
, 1 2
Gauss
T −1 =
1
−1
0 a
1 2
=
− 25 4 − 32
3 −3 1
1 2
,
−1
(4)
1 2
3 + 12 a −3 − a 1 + 12 a
a = 2
−→
4 −5 2
.
The homogeneous solution to matrix DEQ (2)|b(t)=0 is therefore xh (t) =
X
cjh xj (t)
1 = (3 +
1 a) 2
1 1
j
1
t
e − (3 + a)
2 4
2t
e
1 + (1 +
1 a) 2
3 9
e3t ,
and the homogeneous solution of the initially considered third order DEQ, (1)|fA (t)=0 , is xh (t) = x1h (t) = (3 + 21 a)et − (3 + a)e2t + (1 + 21 a)e3t
a = 2
4et − 5e2t + 2e3t .
−→
Check that xh (t) has the required properties (not really necessary, since all relevant properties have already been checked above, but nevertheless instructive): (d3t − 6d2t + 11dt − 6) (3 + 21 )et − (3 + a)e2t + (1 + 21 a)e3t
X
= (3+ 21 a) (13 −6·12 +11·1−6) et − (3+a) (23 −6·22 +11·2−6) e2t + (1+ 12 a) (33 −6·32 +11·3−6) e3t = 0.
| Initial values:
{z 0
}
|
{z 0
}
{z
|
0
xh (0) = (3 + 21 a) − (3 + a) + (1 + 21 a) = 1 ,
X
x˙ h (0) = (3 + 12 a) − (3 + a) · 2 + (1 + 21 a) · 3 = 0 , x ¨h (0) = (3 +
1 a) 2
2
− (3 + a) · 2 + (1 +
1 a) 2
}
2
· 3 = a.
X X
(c)
Particular solution: The method of variation of constants soP j looks for a particular lution of the matrix DEQ (2) of the form xp (t) = c (t)xj (t), with cjp (t) chosen j p such that
X
c˙jp (t)xj (t) = b(t) .
(5)
j
A solution of (5) with cjp (0) = 0 (and therefore xp (0) = 0) is given by ˆ t ˜ dt˜˜bj (t˜) e−λj t , cjp (t) =
(6)
0
where the ˜bj (t)’s originate from the decomposition of b(t) = v ˜bj (t) into eigenj j i i ˜j ˜ vectors. In components, b (t) = v j b (t), and in matrix notation, b(t) = T b(t), −1 ˜ b(t) = T b(t):
P
!
˜ b1 (t) ˜ b2 (t) ˜ b3 (t)
(4)
− 25 4 − 32
3 −3 1
=
1 2
0
fA (t)
−1 1 2
= fA (t)
0 1
1 2
.
−1 1 2
For the given driving function, fA (t) = e−bt for t ≥ 0, and therefore we get:
!
c1 (t) c2 (t) c3 (t)
(6)
ˆ
˜ 1 −λ1 t 2e ˜ −λ2 t
1
1 2 λ1 +b
= − λ21+b
1 1 2 λ3 +b
!
t
˜
dt˜e−bt
=
−e ˜ 1 −λ3 t 2e
0
1 − e−(λ1 +b)t
. −(λ +b)t
1 − e−(λ2 +b)t 1−e
3
X
Check initial value: cjp (0) = 0. Check that (5) holds:
X
(5)?
c˙jp (t)xj (t) =
e
1
−(λ1 +b)t
1
1 eλ1 t −e−(λ2 +b)t 2 eλ2 t + e
2
1
j
−(λ3 +b)t
2
4
1
3 eλ3 t 9
0
X
= e−bt 0 = b(t) . 1
The required particular solution for t > 0 is therefore given by:
xp (t) =
X
(3)
cjp (t)vj eλj t =
t
−bt
[e − e ] 2(1 + b)
1
1 − [e
j
1
−bt
2t
−e ] (2 + b)
1 1 1 et − (2+b) e2t + 2(3+b) e3t −e−bt 2(1+b)
xp (t) = x1p (t) =
|
b = 4
−→
1 t e 10
− 16 e2t +
1 3t e 14
−
1 −4t e 210
1
1 −e ] 3 2(3 + b) 9
2 + [e 4
3t
−bt
1 1 1 − (2+b) + 2(3+b) 2(1+b)
{z
[b3 +11b+6b2 +6]−1
(7)
}
.
Check that xp (t) has the required properties (not really necessary, since all relevant properties have already been checked above, but nevertheless instructive): (d3t − 6d2t + 11dt − 6)xp (t) (7)
=
1 1 1 (13 −6·12 +11·1−6) et − (23 −6·22 +11·2−6) e2t + (33 −6·32 +11·3−6) e3t 2(1+b) | (2+b) | 2(3+b) | {z } {z } {z } 0
0
0
−
1 ((−b)3 −6·(−b)2 +11·(−b)−6)e−bt b3 + 11b + 6b2 + 6
xp (0) = 0,
Initial values:
12 −b2 2(1+b)
x ¨p (0) =
x˙ p (0) =
X −
22 −b2 (2+b)
1+b 2(1+b)
3−b 2
+
=
= e−bt . X
−
1−b 2
2+b (2+b)
−
+
2−b 1
3+b 2(3+b)
+
3−b 2
= 0, X
= 0.
X
S.C7.5 Linear higher-order differential equations P
C7.5.2 Green function of critically damped harmonic oscillator
(a) Below we will use the following properties of the δ function: First, dt Θ(t) = δ(t). Second, for an arbitrary function b(t), we may set δ(t)b(t) = δ(t)b(0). We verify the validity of the given ansatz for the Green function as follows: G(t) = Θ(t)qh (t).
Ansatz:
(1)
ˆ qh (t) = 0 , L(t)
Hom. solution satisfies
(2)
qh (0) = 0 ,
with initial values
dt qh (0) = 1 .
(3)
dt Θ(t)qh (t) = dt Θ(t) qh (t) + Θ(t)dt qh (t)
Therefore:
= δ(t)qh (t) + Θ(t)dt qh (t) , = δ(t) qh (0) +Θ(t)dt qh (t) .
(4)
| {z } (3)
=0
(4)
dt Θ(t)qh (t) = dt Θ(t)dt qh (t) = dt Θ(t) dt qh (t) + Θ(t)d2t qh (t)
2
= δ(t)dt qh (t) + Θ(t)d2t qh (t) , = δ(t) dt qh (0) +Θ(t)d2t qh (t) .
(5)
| {z } (3)
=1
(1)
ˆ G(t) = (d2t + 2Ωdt + Ω2 ) Θ(t)qh (t) ⇒ L(t)
h
(4,5)
i
= δ(t) + Θ(t) d2t + 2Ωdt + Ω2 qh (t) = δ(t) . X
|
{z
}
ˆ qh (t) (2) L(t) =0
(b) The general solution of the homogeneous equation (d2t + 2Ωdt + Ω2 )qh (t) = 0 has the form qh (t) = (c1 + c2 t)e−Ωt . The stated initial conditions (3) imply c1 = 0 and c2 = 1. Therefore qh (t) = te−Ωt , yielding the Green function: (1)
G(t) = Θ(t)te−Ωt . (c)
(6)
The Fourier integral can be performed via partial integration: ˆ ˜ G(ω) =
∞ iωt
dt e
(6)
ˆ
∞
G(t) =
u
v0 (iω−Ω)t part. int.
dt t e
−∞
=
0
hu
v
t e(iω−Ω)t
i∞ 0
iω − Ω
ˆ − 0
∞
u0
v
e(iω−Ω)t dt 1 (iω − Ω) (7)
= [0 − 0] −
∞ e(iω−Ω)t 0 (iω − Ω)2
=
1 . (iω − Ω)2
[e(iω−Ω)∞ = 0, since Ω > 0.]
(8)
(d) Consistency check: ˆ G(t) = δ(t) , with L(t) ˆ L(t) = d2t + dt 2Ω + Ω2 . ˜ ˜ ˜ L(−iω) G(ω) = 1, with L(−iω) = (−iω)2 −iω2Ω+Ω2 .
Defining eq. for G(t): Fourier transformed: ˜ (9) solved for G(ω): (e)
˜ G(ω) =
1 1 . = ˜ (−iω + Ω)2 L(−iω)
(9)
[= (8) X] .
ˆ q(t) = g(t), with g(t) = g0 sin(ω0 t) = g0 Im eiω0 t . L(t) (10) ˆ ∞ ˆ ∞ s=t−u Ansatz for solution: q(t) = du G (t − u) g(u) = ds G(s)g(t − s) .
Given DEQ:
−∞
−∞
This substitution simplifies the argument of G, which is advisable in the present context, where G(t) is a more ’complicated’ function than g(t). ˆ ∞ ˆ ∞ h i (6),(10) ds s e(−iω0 −Ω)s . q(t) = ds Θ(s) s e−Ωs g0 Im eiω0 (t−s) = g0 Im eiω0 t −∞
|0
(7)
{z
}
˜ = G(−ω 0)
Fortuitously the integral turns out to have the same form, (7), as that which arose ˜ when calculating G(ω); we can thus simply reuse the result (8):
(8)
q(t) = g0 Im
eiω0 t (iω0 + Ω)2
" = g0 Im
(11)
cos(ω0 t)+i sin(ω0 t) (−iω0 +Ω)2 (ω02 +Ω2 )2
# = g0
(Ω2 −ω02 ) sin(ω0 t)−2ω0 Ω cos(ω0 t) . (ω02 +Ω2 )2
(11) is the most convenient form, since proportional to eiω0 t , and may serve as final result. As a check, let us verify that it satisfies the given differential equation: ˆ q(t) (11) L(t) = (d2t + 2Ωdt + Ω2 ) g0 Im
eiω0 t (iω0 + Ω)2
= g0 Im (iω0 )2 + 2Ωiω0 + Ω2
eiω0 t = g0 Im eiω0 t = g0 sin(ω0 t) .X 2 (iω0 +Ω)
S.C7.6 General higher-order differential equations P
C7.6.2 Field lines of electric quadrupole field in two dimensions
Along a field line, r(t), we have r˙ k E(r), i.e. (x, ˙ z) ˙ T k T (x, −3z) . dz z˙ −3z = = dx x ˙ x ˆ z ˆ x d˜ z d˜ x Separation of variables: − = z ˜ z0 3˜ x0 x 1 z x Integrate: − ln = ln 3 z0 x0
2
1
Rearrange:
z z0
−1/3
=
x x0
⇒
z
DEQ for field lines:
0
−1
z = z0
x0 x
3
.
−2 −2
−1
0
x
1
2
The sketch shows field lines with x0 = ±2 and z0 ∈ ±{0.1, 0.2, 0.3, 0.4, 0.5}.
S.C7.7 Linearizing differential equations P
C7.7.2 Fixed points of a differential equation in one dimension
Differential equation:  ẋ = f(x) = tanh[5(x − 3)] tanh[5(x + 1)] sin(πx)
(a) The three factors of the function f(x) behave as follows: sin πx oscillates with amplitude 1, mean 0 and has zeroes for all x* ∈ Z. tanh(5y) has the form of a step, with the middle occurring at y = 0 and tanh(5y) ≃ ±1 for y ≳ ½ resp. y ≲ −½. Consequently, tanh[5(x − 3)] and tanh[5(x + 1)] have zeroes at x* = 3 and −1 respectively, and change the height of the extrema of the sine function only marginally. The function f(x) therefore has a zero at every integer number, and also changes sign at every zero, except for x* = 3 and −1, where the sign change of the tanh function compensates the change of sign of the sine function. These two points are also 'double zeroes' of f(x), with first derivative also equal to zero: f′(3) = f′(−1) = 0. The fixed points of the DEQ, defined by f(x*) = 0, are therefore x*_n ≡ n ∈ Z .
[Sketch: f(x) together with the factors tanh[5(x + 1)] and tanh[5(x − 3)]; the regions labelled I-IV around each zero indicate the sign of f on either side, as used in (b) and (c).]
(b)
The stability of a fixed point is determined by the sign of ẋ = f(x) directly left and right of the fixed point, i.e. at x = x* ∓ ε (with ε → 0+):
Left of x*:   ẋ = f(x* − ε) > 0 ⇒ x(t) increases ⇒ flows towards x* .   (I)
              ẋ = f(x* − ε) < 0 ⇒ x(t) decreases ⇒ flows away from x* .  (II)
Right of x*:  ẋ = f(x* + ε) > 0 ⇒ x(t) increases ⇒ flows away from x* .  (III)
              ẋ = f(x* + ε) < 0 ⇒ x(t) decreases ⇒ flows towards x* .    (IV)
(c) Via graphical analysis (see sketch) we find that x*_n is
- unstable for n even, when n ≤ −2 or n ≥ 4, and also for n = 1 (see II, III);
- stable for n odd, when n ≤ −3 or n ≥ 5, and also for n = 0 and 2 (see I, IV);
- semistable for n = −1 (see I, III) and for n = 3 (see II, IV).
P
C7.7.4 Stability analysis in three dimensions
(a) Determination of the fixed points:
ẋ = f(x) = (x¹⁰ − y²⁴, 1 − x, −3z − 3)^T   ⇒   fixed points:  f(x*) = 0   ⇒   x*₋ = (1, 1, −1)^T ,   x*₊ = (1, −1, −1)^T .
(b) Stability analysis: small deviations ησ = x − xσ∗ from the fixed point xσ∗ (with σ = ±) satisfy the linearized equation η˙ σ = Aσ ησ , where Aσ has the matrix elements ∂f i ∗ (Aσ )ij = ∂x j (xσ ):
∂f i = ∂xj
10x9
−24y 23
−1
0
0
0
0
0
(A+ )ij =
⇒
−3
(A− )ij =
10
24
0
= −1
0
0
x=x∗ +
∂f ∂xj
10
i
0
0
10
−24
= −1 0
x=x∗ −
,
−3
0
0
0 −3
0
0
0 .
0
0
σ24
Compact notation: Aσ = −1
∂f i ∂xj
−3
0
10−λ ! Eigenvalues of Aσ : 0 = det(Aσ −λ1) = −1 0
0 −3−λ
24σ
0
−λ 0
= (−3−λ) [(10−λ)(−λ) + 24σ] = −(3 + λ)(λ2 − 10λ + 24σ) For x∗+ :
λ+,1 = −3 ,
λ+,2 = 6 ,
x∗− :
λ−,1 = −3 ,
λ−,2 = 12 ,
For
λ+,3 = 4 , λ−,3 = −2 .
Neither of the fixed points have all of their eigenvalues negative, so both of them ∗ are unstable. The eigenvalue λ+,1 is negative however, therefore the fixed point x+ is stable with respect to deviations in the direction of the corresponding eigenvector v+,1 : 0 = (A+ − λ+,1 1)v+,1 = !
13 −1 0
24 3 0
0 0 0
0 v+,1 = 0
The associated characteristic timescale is τ+,1 = ∗ x−
two negative eigenvalues, thus the fixed point (0, 0, 1)T are τ−,1 =
and v−,3 = 1
|λ−,1 |
1 3
=
1 √ 5
⇒
v+,1 =
0 1
.
1 = 13 . The matrix A− has |λ+,1 | is stable in two directions: v−,1 =
(2, 1, 0)T . The corresponding characteristic timescales
and τ−,3 =
1
|λ−,3 |
=
1 2
.
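The linearization can be double-checked numerically. This is a supplementary sketch (not part of the original solution; it assumes Python with NumPy and uses the vector field and fixed points as reconstructed above):

import numpy as np

def jacobian(x, y, z):
    # derivatives of f = (x^10 - y^24, 1 - x, -3z - 3)
    return np.array([[10*x**9, -24*y**23, 0],
                     [-1, 0, 0],
                     [0, 0, -3]])

for label, (x, y, z) in [("x*_+", (1, -1, -1)), ("x*_-", (1, 1, -1))]:
    lam = np.sort(np.linalg.eigvals(jacobian(x, y, z)).real)
    print(label, lam)
# x*_+ : eigenvalues -3, 4, 6   (one negative direction -> stable only along v_{+,1})
# x*_- : eigenvalues -3, -2, 12 (two negative directions)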
S.C8 Functional calculus S.C8.3 Euler-Lagrange equations P
C8.3.2 Fermat’s principle
(a) The x-independence of L implies the conservation of H(y, y 0 ) = (∂y0 L)y 0 − L = h: Conserved quantity:
n0 1 h= c y
"
y0
p
1 + y0 2
# 0
y −
p
1+
y0 2
=−
n0 1 1 p c y 1 + y0 2
.
p dy = y 0 = ± r2 /y 2 − 1 , dx
Solve for y 0 :
r ≡ n0 /(hc) .
(b) This differential equation can be solved using separation of variables: ˆ ˆ y dy p = ± dx r2 − y2 −
p
r2 − y 2 = ±(x − x0 )
r2 = y 2 + (x − x0 )2 .
⇒
Here x0 is an integration constant. Thus light travels in circles with radius r = n0 /(hc).
P
C8.3.4 Geodesics on the unit sphere
(a) The curve speed, kdθ r(θ)k on S 2 is computed using the embedding S 2 ⊂ R3 : r(θ) ≡ r(y(θ)) = (cos φ(θ) sin θ, sin φ(θ) sin θ, cos θ)T dθ r(θ) = (−φ0 sin φ sin θ + cos φ cos θ, φ0 cos φ sin θ + sin φ cos θ, − sin θ)T kdθ r(θ)k2 = (−φ0 sin φ sin θ + cos φ cos θ)2 + (φ0 cos φ sin θ + sin φ cos θ)2 + (− sin θ)2 = cos2 θ(cos2 φ+sin2 φ)+sin2 θ+φ02 sin2 θ(cos2 φ+sin2 φ) = 1+sin2 θφ02 . (1) ´ Therefore the length functional on S 2 , L[r] = dθkdθ r(θ)k has the form ˆ p (1) L[r] = dθL(φ(θ), φ0 (θ), θ) with L(φ, φ0 , θ) = kdθ r(θ)k = 1 + sin2 θφ02 . (b) Since L is independent of φ, the Euler-Lagrange equation yields a conservation law:
" ⇒
dθ (∂φ0 L) = ∂φ L
dθ
sin2 θ φ0
p
#
1 + sin2 θφ02
=0 .
(2)
Any meridian, for which φ(θ) = const. and thus φ0 = 0, satisfies this equation. (c)
The Euler-Lagrange equation (2) can trivially be integrated, yielding a conserved 2 0 quantity, √ sin θ2φ 02 = d. Solving this relation for φ0 , we find a differential equation 1+sin θφ
for φ(θ): sin4 θφ02 = d2 (1 + sin2 θφ02 )
⇒
φ0 =
d √ 2 . sin θ sin θ − d2
(3)
(d) In spherical coordinates the equation for a plane, ax¹ + bx² + cx³ = 0, takes the form r(a cos φ sin θ + b sin φ sin θ + c cos θ) = 0. Now set r = 1 to obtain its intersection with the unit sphere, and rearrange it as follows: cot θ = −(a cos φ + b sin φ)/c ≡ (1/α) sin(φ − φ₀). Here α and φ₀ can be identified using sin(φ − φ₀) = sin φ cos φ₀ − cos φ sin φ₀, hence

(1/α) cos φ₀ = −b/c ,   (1/α) sin φ₀ = a/c   ⇒   φ₀ = −arctan(a/b) ,   α = c/√(a² + b²) .
In spherical coordinates, great circles are thus described by

sin(φ − φ₀) = α cot θ .   (4)

(e) Solving Eq. (4) for φ(θ) and differentiating the result we obtain:

φ(θ) = φ₀ + sin⁻¹[α cot θ] ,
φ'(θ) = ± α/√(1 − α² cot²θ) · (−1)/sin²θ = ∓ α / (sin θ √(sin²θ − α² cos²θ))
      = ∓ α / (sin θ √((1 + α²) sin²θ − α²)) = ∓ (α/√(1 + α²)) / (sin θ √(sin²θ − α²/(1 + α²)))
      = d / (sin θ √(sin²θ − d²)) ,   with d = ∓ α/√(1 + α²) .

Thus, great circles satisfy Eq. (3), establishing that they are geodesics of the unit sphere.
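The conservation law of Eq. (2) can also be checked numerically along a great circle. A small sketch, assuming Python with numpy, using the branch with φ' < 0 and arbitrary example values for α and φ₀:

    import numpy as np

    alpha = 0.7
    theta = np.linspace(1.0, 2.0, 5)     # range where |alpha * cot(theta)| < 1
    phi_p = -alpha / (np.sin(theta)**2 * np.sqrt(1 - alpha**2 / np.tan(theta)**2))
    d = np.sin(theta)**2 * phi_p / np.sqrt(1 + np.sin(theta)**2 * phi_p**2)
    print(d)   # all entries equal -alpha/sqrt(1 + alpha**2) ~ -0.5735, i.e. d is conserved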
S.C9 Calculus of complex functions
S.C9.1 Holomorphic functions
P C9.1.2 Cauchy-Riemann equations
(a)
f = u + iv , with u(x, y) = x3 − 3xy 2 , v(x, y) = 3x2 y − y 3 . ∂x u = 3x2 − 3y 2 = ∂y v. X
∂y u = −6xy = −∂x v. X
The partial derivatives are continuous for all (x, y)^T ∈ R² and satisfy the Cauchy-Riemann equations. Consequently, for all z ∈ C, f(x, y) is an analytic function of z = x + iy, namely: f(x, y) = (x + iy)³ = z³ .

(b)
f = u + iv , with u(x, y) = xy , v(x, y) = ½ y² .
∂x u = y = ∂y v . X
∂y u = x ≠ −∂x v = 0 . ✗
The Cauchy-Riemann equations are not satisfied. Consequently, f(x, y) is not an analytic function of z.

(c)
f = u + iv , with u(x, y) = x/(x² + y²) , v(x, y) = −y/(x² + y²) .
∂x u = (x² + y² − 2x²)/(x² + y²)² = (y² − x²)/(x² + y²)² ,   ∂y v = −(x² − y²)/(x² + y²)² = (y² − x²)/(x² + y²)² = ∂x u . X
∂y u = −2xy/(x² + y²)² = −∂x v . X
The partial derivatives are continuous for all (x, y)^T ∈ R²\(0, 0)^T, and satisfy the Cauchy-Riemann equations. Consequently, f(x, y) is an analytic function of z = x + iy for all z ∈ C\0, namely:

f(x, y) = (x − iy)/[(x + iy)(x − iy)] = z̄/(z z̄) = 1/z .
(d) f_± = u_± + i v_± , with u_±(x, y) = e^x (x cos y ± y sin y) , v_±(x, y) = e^x (x sin y ∓ y cos y) .

∂x u_± = e^x (x cos y ± y sin y + cos y) ,   ∂y v_± = e^x (x cos y ∓ cos y ± y sin y)   [≠ ∂x u_+ ✗ ; = ∂x u_− X]
∂y u_± = e^x (−x sin y ± sin y ± y cos y) ,  ∂x v_± = e^x (x sin y ∓ y cos y + sin y)   [≠ −∂y u_+ ✗ ; = −∂y u_− X]

The partial derivatives exist for all (x, y)^T ∈ R². The Cauchy-Riemann equations are fulfilled for f_−, but not for f_+. Therefore f_− is analytic in z = x + iy for all z ∈ C, but f_+ is not. Indeed f_− may be expressed in terms of z, whereas f_+ depends on both z and z̄:

f_±(x, y) = e^x (x cos y ± y sin y) + i e^x (x sin y ∓ y cos y) = (x ∓ iy) e^x (cos y + i sin y) = (x ∓ iy) e^x e^{iy}

⇒   f_+ = z̄ e^z ,   f_− = z e^z .
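The Cauchy-Riemann check for part (d) can be automated symbolically. A minimal sketch, assuming Python with sympy; the variable sign selects f_±:

    import sympy as sp

    x, y = sp.symbols('x y', real=True)
    for sign in (+1, -1):                      # +1 -> f_+,  -1 -> f_-
        u = sp.exp(x) * (x * sp.cos(y) + sign * y * sp.sin(y))
        v = sp.exp(x) * (x * sp.sin(y) - sign * y * sp.cos(y))
        cr1 = sp.simplify(sp.diff(u, x) - sp.diff(v, y))
        cr2 = sp.simplify(sp.diff(u, y) + sp.diff(v, x))
        print(sign, cr1 == 0 and cr2 == 0)     # False for f_+, True for f_-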
S.C9.2 Complex integration
P C9.2.2 Cauchy's theorem
Given: the function f (z) = (z − i)2 (analytic, no singularities); the points z0 = 0, z1 = 1 and z2 = i; and four integration contours, γi , parametrized by t ∈ (0, 1). Specifically, (a) the three straight lines γ1 : z(t) = t from z0 to z1 , γ2 : z(t) = 1 + (i − 1)t from z1 to z2 , and γ3 : z(t) = i(1 − t) from z2 to z0 ; and (b) the quarter-circle γ4 : z(t) = eitπ/2 from z1 to z2 .
The contour integrals of the function f(z) = (z − i)², with antiderivative F(z) = (1/3)(z − i)³, all have the same form:

I_γi = ∫_γi dz f(z) = ∫₀¹ dt (dz(t)/dt) f(z(t)) = ∫_{z(0)}^{z(1)} dγ F'(γ) = F(z(1)) − F(z(0)) .

(a) The integrals along the straight lines γ1, γ2 and γ3 give:

I_γ1 = ∫₀¹ dt (dz(t)/dt)(z(t) − i)² = ∫₀¹ dt (1)(t − i)² = [ (1/3)(t − i)³ ]₀¹ = (1/3)[(1 − i)³ − (−i)³] = −2/3 − i .

I_γ2 = ∫₀¹ dt (dz(t)/dt)(z(t) − i)² = ∫₀¹ dt (i − 1)[1 + (i − 1)t − i]² = [ (1/3)(1 + (i − 1)t − i)³ ]₀¹ = (1/3)[0³ − (1 − i)³] = 2/3 + i 2/3 .

I_γ3 = ∫₀¹ dt (dz(t)/dt)(z(t) − i)² = ∫₀¹ dt (−i)[i(1 − t) − i]² = [ (1/3)(i(1 − t) − i)³ ]₀¹ = (1/3)[(−i)³ − 0³] = (1/3) i .

I_γ1 + I_γ2 + I_γ3 = 0, as expected, since Cauchy's theorem states that the contour integral of an analytic function along any closed path (here γ1 ∪ γ2 ∪ γ3) is equal to zero.
(b) The path integral along the quarter-circle γ4 gives:

I_γ4 = ∫₀¹ dt (dz(t)/dt)(z(t) − i)² = ∫₀¹ dt (iπ/2) e^{iπt/2} (e^{iπt/2} − i)² = [ (1/3)(e^{iπt/2} − i)³ ]₀¹ = (1/3)[(i − i)³ − (1 − i)³] = 2/3 + i 2/3 .
Iγ4 = Iγ2 , as expected, because Cauchy’s theorem implies that the path integral of an analytic function between two points (here z1 and z2 ) is independent of the chosen path.
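The path independence can also be checked by brute-force numerical integration. A minimal sketch, assuming Python with numpy:

    import numpy as np

    f = lambda z: (z - 1j)**2
    t = np.linspace(0.0, 1.0, 2001)

    z2 = 1 + (1j - 1) * t                       # straight line gamma_2 from z1 = 1 to z2 = i
    z4 = np.exp(1j * np.pi * t / 2)             # quarter circle gamma_4 from 1 to i
    I2 = np.trapz(f(z2) * (1j - 1), t)
    I4 = np.trapz(f(z4) * 1j * (np.pi / 2) * np.exp(1j * np.pi * t / 2), t)
    print(I2, I4)                               # both approach 2/3 + 2i/3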
S.C9.3 Singularities
P C9.3.2 Laurent series, residues
The given functions are all of the form f(z) = g(z)/(z − z₀)^m, with g(z) analytic in a neighbourhood of z₀. We calculate the residues using

Res(f, z₀) = lim_{z→z₀} 1/(m−1)! d^{m−1}/dz^{m−1} [(z − z₀)^m f(z)] = lim_{z→z₀} 1/(m−1)! d^{m−1}/dz^{m−1} g(z) .   (1)
We can find the Laurent series using the Taylor series of g(z) about z₀. To this end we use, where possible, the known series representation for (b) the geometric series, (c) the logarithm, and (d) the trigonometric functions.

(a) f(z) = g(z)/(z − 2)³, with g(z) = 2z³ − 3z², has a single pole of order 3 at z₀ = 2.

Res(f, 2) = lim_{z→2} 1/2! d²/dz² (2z³ − 3z²) = lim_{z→2} 1/2! d/dz (6z² − 6z) = lim_{z→2} 1/2! (12z − 6) = 9 .

With g^(1)(z) = 6z² − 6z, g^(2)(z) = 12z − 6, g^(3)(z) = 12 and g^(n≥4)(z) = 0, the Taylor series for g(z) and the Laurent series for f(z) about z₀ = 2 then read as follows:

g(z) = Σ_{n=0}^{3} g^(n)(2)/n! (z − 2)^n = 4 + 12(z − 2) + (18/2!)(z − 2)² + (12/3!)(z − 2)³ .

f(z) = g(z)/(z − 2)³ = 4/(z − 2)³ + 12/(z − 2)² + 9/(z − 2) + 2 .

Consistency check: the coefficient of (z − 2)⁻¹, namely 9, matches Res(f, 2). X

(b) f(z) = 1/[(z − 1)(z − 3)] has two poles, each of order 1.
Res(f, 1) = lim_{z→1} (z − 1) f(z) = lim_{z→1} 1/(z − 3) = −1/2 .
Res(f, 3) = lim_{z→3} (z − 3) f(z) = lim_{z→3} 1/(z − 1) = 1/2 .

We determine the Laurent series using the known form of the geometric series.

Laurent series about z₀ = 1: We write f(z) = g(z)/(z − 1), with

g(z) = 1/(z − 3) = 1/[(z − 1) − 2] = −(1/2) · 1/[1 − (1/2)(z − 1)] = −(1/2) Σ_{n̄=0}^{∞} [(1/2)(z − 1)]^{n̄} .
f(z) = g(z)/(z − 1) = −Σ_{n̄=0}^{∞} (1/2)^{n̄+1} (z − 1)^{n̄−1}   [n = n̄ − 1]   = −Σ_{n=−1}^{∞} (1/2)^{n+2} (z − 1)^n .
Consistency check: the coefficient of (z − 1)⁻¹, namely −1/2, matches Res(f, 1). X

Laurent series about z₀ = 3: We write f(z) = g̃(z)/(z − 3), with

g̃(z) = 1/(z − 1) = 1/[(z − 3) + 2] = (1/2) · 1/[1 + (1/2)(z − 3)] = (1/2) Σ_{n̄=0}^{∞} [−(1/2)(z − 3)]^{n̄} .

f(z) = g̃(z)/(z − 3) = −Σ_{n̄=0}^{∞} (−1/2)^{n̄+1} (z − 3)^{n̄−1}   [n = n̄ − 1]   = −Σ_{n=−1}^{∞} (−1/2)^{n+2} (z − 3)^n .

Consistency check: the coefficient of (z − 3)⁻¹, namely 1/2, matches Res(f, 3). X

(c)
f(z) = g(z)/(z − 5)², with g(z) = ln z, has a single pole of order 2 at z₀ = 5.

Res(f, 5) = lim_{z→5} 1/1! d/dz ln z = lim_{z→5} 1/z = 1/5 .

We determine the Laurent series using the Taylor series for the logarithm:

g(z) = ln z = ln(5 + z − 5) = ln 5 + ln[1 + (1/5)(z − 5)] = ln 5 − Σ_{n̄=0}^{∞} 1/(n̄+1) [−(1/5)(z − 5)]^{n̄+1} .

f(z) = g(z)/(z − 5)² = ln 5/(z − 5)² − Σ_{n=−1}^{∞} 1/(n+2) (−1/5)^{n+2} (z − 5)^n .

Consistency check: the coefficient of (z − 5)⁻¹, namely 1/5, matches Res(f, 5). X
(d) f(z) = g(z)/(z − i)^m, with g(z) = e^{πz}, has a pole of order m at z₀ = i.

Res(f, i) = lim_{z→i} 1/(m−1)! d^{m−1}/dz^{m−1} e^{πz} = lim_{z→i} π^{m−1}/(m−1)! e^{πz} = −π^{m−1}/(m−1)! .

We determine the Laurent series using the Taylor series for the exponential function:

g(z) = e^{πz} = e^{πi} e^{π(z−i)} = −Σ_{n̄=0}^{∞} 1/n̄! [π(z − i)]^{n̄} .

f(z) = g(z)/(z − i)^m = −Σ_{n̄=0}^{∞} π^{n̄}/n̄! (z − i)^{n̄−m}   [n = n̄ − m]   = −Σ_{n=−m}^{∞} π^{n+m}/(n+m)! (z − i)^n .

Consistency check: the coefficient of (z − i)⁻¹, namely −π^{m−1}/(m−1)!, matches Res(f, i). X
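These residues can be reproduced with a computer algebra system. A minimal sketch, assuming Python with sympy (whose residue function implements exactly the limit formula (1)):

    import sympy as sp

    z = sp.symbols('z')
    print(sp.residue((2*z**3 - 3*z**2) / (z - 2)**3, z, 2))      # 9
    print(sp.residue(1 / ((z - 1) * (z - 3)), z, 1))             # -1/2
    print(sp.residue(sp.log(z) / (z - 5)**2, z, 5))              # 1/5
    for m in (1, 2, 3):
        res = sp.residue(sp.exp(sp.pi * z) / (z - sp.I)**m, z, sp.I)
        print(m, sp.simplify(res + sp.pi**(m - 1) / sp.factorial(m - 1)))   # 0 each time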
S.C9.4 Residue theorem
P C9.4.2 Circular contours, residue theorem
(a) The function f has a pole of order 1 at za = a, and a pole of order 2 at z1 = −1: f (z) =
4z 4z = . (z − a)(z + 1)2 (z − za )(z − z1 )2
The corresponding residues read:
Res(f, z_a) = lim_{z→z_a} (z − z_a) f(z) = 4z_a/(z_a − z_1)² = 4a/(a+1)² .

Res(f, z_1) = lim_{z→z_1} d/dz [(z − z_1)² f(z)] = lim_{z→z_1} d/dz [4z/(z − z_a)] = 4[(z_1 − z_a) − z_1]/(z_1 − z_a)² = −4a/(a+1)² .

The circular contour γ1 encloses the pole at z_a; γ2 encloses the pole at z_1; and γ3 encloses both poles. Consequently, we have:

(b) I_γ1 = ∮_γ1 dz f(z) = 2πi Res(f, z_a) = 8πai/(a+1)² .
(c) I_γ2 = ∮_γ2 dz f(z) = −2πi Res(f, z_1) = 8πai/(a+1)² .
(d) I_γ3 = ∮_γ3 dz f(z) = 2πi [Res(f, z_a) + Res(f, z_1)] = 0 .

That I_γ3 = 0 can be seen without calculation: the radius of γ3 can be expanded to ∞ without crossing any poles, and since lim_{|z|→∞} z f(z) = 0 we have lim_{R→∞} ∮ dz f(z) = 0.
P C9.4.4 Integrating by closing contour and using residue theorem
Both integrals are of the form I = R dz f (z), with lim|z|→∞ zf (z) = 0, and can therefore be calculated by closing the contour with a semicircle with radius → ∞ in the upper or lower half-planes: ˆ ∞ ˆ ˆ I= dx f (x) = dz f (z) = dz f (z) . −∞
(a) The integrand has three poles of order 1, at z_± = ±i|b| in the upper/lower half-plane, and at z_a = ia in the upper half-plane (because a > 0):

f(z) = z/[(z² + b²)(z − ia)] = z/[(z − i|b|)(z + i|b|)(z − ia)] = z/[(z − z_+)(z − z_−)(z − z_a)] .

The corresponding residues are:

Res(f, z_±) = lim_{z→z_±} (z − z_±) f(z) = z_±/[(z_± − z_∓)(z_± − z_a)] = 1/[2i(±|b| − a)] .
Res(f, z_a) = lim_{z→z_a} (z − z_a) f(z) = z_a/[(z_a − z_+)(z_a − z_−)] = a/[i(a − |b|)(a + |b|)] .
Res(f, z± ) = lim (z − z± )f (z) = z→z±
If the integration contour is closed in the upper half-plane, then two poles contribute
to the result, namely z_+ and z_a; if it is closed in the lower half-plane, then only one pole contributes, namely z_−. In both cases, the result is the same:

I = ∫ dz f(z) = 2πi [Res(f, z_+) + Res(f, z_a)] = 2πi [ 1/(2i(|b| − a)) + a/(i(a − |b|)(a + |b|)) ] = π/(a + |b|) .

I = ∫ dz f(z) = −2πi Res(f, z_−) = −2πi/(2i(−|b| − a)) = π/(a + |b|) .
(b) The integrand has a single pole of order 1 at z_a = ia, which lies in the upper half-plane (because a > 0), as well as a pole of order 2 at z_b = −ib, which lies in the lower/upper half-plane, depending on whether b > 0 or b < 0, respectively:

f(z) = z/[(z + ib)²(z − ia)] = z/[(z − z_b)²(z − z_a)] .

The corresponding residues are:

Res(f, z_a) = lim_{z→z_a} (z − z_a) f(z) = z_a/(z_a − z_b)² = −ia/(a + b)² .
Res(f, z_b) = lim_{z→z_b} d/dz [(z − z_b)² f(z)] = lim_{z→z_b} d/dz [z/(z − z_a)] = lim_{z→z_b} (−z_a)/(z − z_a)² = ia/(b + a)² .

If the integration contour is closed in the upper half-plane, then for b > 0 only one pole contributes, namely z_a, and for b < 0 both poles contribute:

I = ∫ dz f(z) = 2πi Res(f, z_a) = 2πa/(b + a)²   for b > 0 ,
I = ∫ dz f(z) = 2πi [Res(f, z_a) + Res(f, z_b)] = 0   for b < 0 .

If the integration contour is closed in the lower half-plane, then for b > 0 only one pole contributes, namely z_b, and for b < 0 no poles contribute. The result is the same as for the upper half-plane:

I = ∫ dz f(z) = −2πi Res(f, z_b) = 2πa/(b + a)²   for b > 0 ,
I = ∫ dz f(z) = 0   for b < 0 .
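The closed-form result of part (a) can be checked against direct numerical integration along the real axis. A minimal sketch, assuming Python with numpy and arbitrary example values a = 1.3, b = 0.7; the truncated integration range limits the accuracy to a few parts in 10³:

    import numpy as np

    a, b = 1.3, 0.7
    x = np.linspace(-2000.0, 2000.0, 2_000_001)
    f = x / ((x**2 + b**2) * (x - 1j * a))             # integrand of part (a)
    print(np.trapz(f, x).real, np.pi / (a + abs(b)))   # both ~ 1.5708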
P C9.4.6 Various integration contours, residue theorem
(a) The function f (z) has two poles of order one at z0± = ± 21 i, and p two poles of order 2 at za± = a ± 12 4a2 − 4(a2 + 41 ) = a ± 12 i: f (z) =
1 z 2 − 2az + a2 +
1 2 (4z 2 4
+ 1)
Imz z0+ z0−
za+ za−
Rez
1 = . 2 (z − za+ )(z − za− ) 4(z − z0+ )(z − z0− )
The corresponding residues are:
1 Res(f, z0± ) = lim (z − z0± )f (z) = 2 ± z→z0 (z0± )2 − 2az0± + a2 + 41 4(z0± − z0∓ ) 2a ∓ i(a2 − 1) 1 = . = 2 4a2 (a2 + 1)2 − 41 ∓ ia + a2 + 41 4(±i) Res(f, za± ) = lim
± z→za
=− =−
d d 1 (z − za± )2 f (z) = lim ± dz (z − z ∓ )2 (4z 2 + 1) dz z→za a
h
i
2 (za±
za∓ )3 (4(za± )2
−
2 (±i)3 4a(a
± i)
−
+ 1)
−
(za±
8(a ±
1 i) 2
−
8za± ∓ 2 za ) (4(za± )2
2 =
(±i)2 4a(a ± i)
+ 1)2
−2a ∓ i(2a4 + 5a2 + 1) . 4a2 (a2 + 1)2
The circular contour γ1 encloses the poles z0± , and γ2 encloses the poles z0− and za− . Consequently, we have: ‰ 2πi dz f (z) = 2πi Res(f, z0+ )+Res(f, z0− ) = . (b) Iγ1 = a(a2 +1)2 γ1
z1 z2 γ2
(c)
γ1
dz f (z) = −2πi Res(f, z0− )+Res(f, za− ) =
Iγ2 =
γ2
2
π(a + 3) . (a2 +1)2
´ Because lim|z|→∞ zf (z) = 0, the integral dz f (z) along a circular contour with radius → ∞ gives no contribution. This fact will be used in the following 3 subquestions.
γ3
(d) The circular contour γ3 encloses all four poles (since (a + 12 )2 > a2 + 41 ). Thus, the radius can be expanded to ∞ without crossing any poles. We therefore conclude Iγ3 = 0 . The same result may be obtained by summing the residues of all poles:
z3
dz f (z) = 2πi Res(f, z0− ) + Res(f, z0+ ) + Res(f, za− ) + Res(f, za+ ) = 0 .
Iγ3 =
γ3
(e)
(f)
The straight contour γ4 along the real axis can be calculated by closing the contour with a semicircle of radius → ∞ in the upper or lower half-planes. We choose the lower, because we may then use the fact that encloses the same poles as γ2 = Iγ2 . and is traversed in the same direction: Iγ4 = I The straight contour γ5 along x = 13 a can be closed by a semicircle with radius → ∞ in the left or right half-planes (where Re(z) < 0 or > 0). We choose the left half-plane, because we may use the fact that
encloses the same poles as γ1 , and is
traversed in the same direction: Iγ5 = I
= Iγ1 .
γ4
γ5
P C9.4.8 Inverse Fourier transform via contour closure: Green function of damped harmonic oscillator
(a) The Green function may be written as

G(t) = ∫_{−∞}^{∞} dω/(2π) e^{−iωt} G̃(ω) = ∫_{−∞}^{∞} dz f(z) ,   f(z) = −e^{−izt}/[2π(z² + 2iγz − Ω²)] .

[Sketch: pole positions in the complex z-plane for the three cases Ω > γ, Ω = γ, Ω < γ.]

(i) Ω > γ: two poles of order 1, at z_± = −iγ ± Ω_r, with Ω_r = √(Ω² − γ²), and

Res(f, z_±) = lim_{z→z_±} (z − z_±) f(z) = −e^{−i z_± t}/[2π(z_± − z_∓)] = −e^{−γt} e^{∓iΩ_r t}/[±2π(2Ω_r)] .   (1)

(ii) Ω = γ: one pole of order 2, at z₀ = −iγ, and

Res(f, z₀) = lim_{z→z₀} d/dz [(z − z₀)² f(z)] = lim_{z→z₀} d/dz (−e^{−izt}/2π) = i t e^{−γt}/2π .   (2)

(iii) Ω < γ: two poles of order 1, at z̃_± = −iγ ± iγ_r, with γ_r = √(γ² − Ω²), and

Res(f, z̃_±) = lim_{z→z̃_±} (z − z̃_±) f(z) = −e^{−i z̃_± t}/[2π(z̃_± − z̃_∓)] = −e^{−γt} e^{±γ_r t}/[±2π(2iγ_r)] .   (3)
In every case the poles lie exclusively in the lower half-plane. (b) We wish to close the integration contour with a circle of radius → ∞. Using the parametrization z = Reiφ we have e−izt ∝ etR sin φ . To ensure that the integrand vanishes in the limit R → ∞, we choose semicircles with φ > 0 or φ < 0 respecand tively, depending on whether t < 0 or t > 0. This yields the contours respectively. They enclose either none or all of the poles, and so we have that: ) ´ G(t < 0) = dz f (z) = 0 P ´ ⇒ G(t) ∝ Θ(t) . dz f (z) = −2πi Poles Res(f, zPole ) G(t > 0) = For positive times we thus obtain the following results:
(i) Ω > γ:  G(t) = −2πi [Res(f, z_+) + Res(f, z_−)] = i e^{−γt} (e^{−iΩ_r t} − e^{iΩ_r t})/(2Ω_r) = e^{−γt} sin(Ω_r t)/Ω_r   [using (1)].
(ii) Ω = γ:  G(t) = −2πi Res(f, z₀) = t e^{−γt}   [using (2)].
(iii) Ω < γ: G(t) = −2πi [Res(f, z̃_+) + Res(f, z̃_−)] = e^{−γt} (e^{γ_r t} − e^{−γ_r t})/(2γ_r) = e^{−γt} sinh(γ_r t)/γ_r   [using (3)].
Summary:

G(t) = Θ(t) e^{−γt} sin(Ω_r t)/Ω_r   for Ω > γ ,
G(t) = Θ(t) t e^{−γt}                for Ω = γ ,
G(t) = Θ(t) e^{−γt} sinh(γ_r t)/γ_r  for Ω < γ .

… a. This describes part of a hyperbola, shown as solid line in the figure, for a = 2.
(c) ṙ(t) · r(t) = 2t(−e^{−2t²} + a² e^{2t²}) ; this vanishes when t = 0 , or for a² = e^{−4t²}, i.e. when t² = −(1/4) ln a²  ⇒  t = ±(ln a^{−1/2})^{1/2} .
S.V1.3 Curve length
P V1.3.2 Curve length
For the curve r(t) = (t⁴, t⁶, t⁶)^T, the curve velocity is ṙ(t) = (4t³, 6t⁵, 6t⁵)^T, with ‖ṙ(t)‖ = √(16t⁶ + 2·36t¹⁰) = 2t³ √(4 + 18t⁴). The curve length is

L[γ] = ∫₀^τ dt ‖ṙ(t)‖ = ∫₀^τ dt 2t³ √(4 + 18t⁴) = (1/36)(2/3) [ (4 + 18t⁴)^{3/2} ]₀^τ = (1/54) [ (4 + 18τ⁴)^{3/2} − 8 ] .
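The closed form can be cross-checked by numerical quadrature. A minimal sketch, assuming Python with scipy and an arbitrary example value τ = 1.7:

    import numpy as np
    from scipy.integrate import quad

    tau = 1.7
    speed = lambda t: 2 * t**3 * np.sqrt(4 + 18 * t**4)
    numeric, _ = quad(speed, 0, tau)
    closed = ((4 + 18 * tau**4)**1.5 - 8) / 54
    print(numeric, closed)      # the two values agree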
P V1.3.4 Natural parametrization of a curve
(b) r(t) = e^{ct} (cos ωt, sin ωt)^T = e^{ct} (C, S)^T .
Compact notation: C = cos ωt, S = sin ωt, with C² + S² = 1.
C S
+ ωect
y
2
−S C
= ect
cC − ωS cS + ωC
˙ kr(t)k = ect (cC − ωS)2 + (cS + ωC)2
-3
-2
-1
1
x
2
3
-1 -2
-3
1/2 1/2
p
= ect c2 + ω 2 = ect (c2 + ω 2 )(C 2 + S 2 ) + 2cω(−CS + SC) ˆ t ˆ t p 1p 2 ˙ du kr(u)k = c2 + ω 2 s(t) = du ecu = c + ω 2 ect − 1 c 0 0
(c)
1 t(s) = ln c
(d)
cs √ +1 c2 + ω 2
rL (s) = r(t(s)) = ect(s) ˜ = cos with C
h
ω c
ln √
drL (s) c = √ ds c2 + ω 2
(e)
cs
c2 +ω 2
˜ C S˜
[Inverse function of (c)]
cos ωt(s) sin ωt(s)
+1
i
√
=
, S˜ = sin
h
ω c
cs +1 + √ c2 + ω 2
cs +1 c2 + ω 2
ln √
cs c2 +ω 2
+1
˜ C S˜
i
˜ 2 + S˜2 = 1. , and C
cs −S˜ ω ˜ c √c2 + ω 2 + 1 C
"
˜ − ω S˜ cC ˜ cS˜ + ω C
−1 √
c c2 + ω 2
dr(t(s)) dr(t) dt(s) check: = = ds dt t=t(s) ds
√
drL (s) c2 + ω 2 2 2 1/2 ˜ ˜ ˜ ˜
= √ 1 √ = 1 X (c C − ω S) + (c S + ω C) =
ds c2 + ω 2 c2 + ω 2 1 = √ 2 c + ω2
X
#
As expected, for the natural parametrization we have: ||velocity|| = 1.X
S.V1.4 Line integral
P V1.4.2 Line integrals in Cartesian coordinates
Strategy for the line integral ∫_γ dr · F = ∫_I dt ṙ(t) · F(r(t)): find a parametrization r(t) of the curve, then determine ṙ(t), F(r(t)) and ṙ(t) · F(r(t)), then integrate. Given: r₀ ≡ (0, 0, 0)^T, r₁ ≡ (0, −2, 1)^T, F(r) = (x², z, y)^T.

(a) In the case γ_a = γ1 ∪ γ2 both curves are parametrized by t ∈ I, giving:

∫_γa dr · F = ∫_γ1 dr · F + ∫_γ2 dr · F = ∫_I dt { ṙ(t) · F(r(t))|_γ1 + ṙ(t) · F(r(t))|_γ2 } .
To parametrize the two lines γ1 and γ2 , with r2 ≡ (1, 1, 1)T and t ∈ I = (0, 1), it is advisable to use a linear interpolation for each: γ1 [r0 → r2 ] :
r(t) = r0 + t(r2 − r0 ) = t(1, 1, 1)T = (t, t, t)T ,
˙ r(t) = (1, 1, 1)T , F(r(t)) = x2 (t), z(t), y(t)
˙ r(t)·F(r(t))
γ1
T
= (t2 , t, t)T
= t2 + 2t
r(t) = r2 + t(r1 − r2 ) = (1, 1, 1)T + t(−1, −3, 0)T = (1 − t, 1 − 3t, 1)T
γ2 [r2 → r1 ] :
˙ r(t) = (−1, −3, 0)T F(r(t)) = x2 (t), z(t), y(t)
˙ r(t)·F(r(t)) ˆ
γa = γ1 ∪ γ2 :
γ2
T
= (1 − t)2 , 1, 1 − 3t
T
= −(1 − t)2 − 3 = −(t2 − 2t + 4) ˆ
1
dr·F = γa
dt
h
2
i
2
ˆ
1
t + 2t − (t − 2t + 4) =
0
dt(4t − 4) = −2 . 0
r(t) = (sin(πt), −2t1/2 , t2 )T
(b) γb :
˙ r(t) = (π cos(πt), −t−1/2 , 2t)T
T
F(r(t)) = x2 (t), z(t), y(t) = (sin2 (πt), t2 , −2t1/2 )T ˆ ˆ 1 ˆ 1 h i ˙ dr·F = dt r(t)·F(r(t)) = dt π sin2 (πt) cos(πt) − t3/2 − 4t3/2 γb
0
0
ˆ
1
=0−
dt 5t
3/2
= −2t
5/2
0
1 = −2 . 0
The first (trigonometric) integral gives 0, because the integrand in the interval (0, 1) is antisymmetric about the point 1/2. [Alternative: solve the integral using the substitution u = sin(πt)]. (c) The path γc is defined by the parabolic equation z(y) = y 2 + 32 y. Because the equation determines a parametrization of z in terms of y, it is advisable to use t = y as the parameter: r(t) = (0, t, t2 + 32 t)T , ˙ r(t) = (0, 1, 2t +
with t ∈ I = (0, −2)
3 T ) 2
T
F(r(t)) = x2 (t), z(t), y(t) = (0, t2 + 23 t, t)T ˆ ˆ −2 ˆ −2 h i 3 3 −2 3 ˙ dr · F = dt r(t) · F(r(t)) = = −2 . dt (t2 + t + 2t2 + t) = t3 + t2 2 2 2 0 γc 0 0
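All three paths can be checked by direct numerical integration. A minimal sketch, assuming Python with numpy, for paths (b) and (c):

    import numpy as np

    # (b) r(t) = (sin(pi t), -2 sqrt(t), t^2),  F = (x^2, z, y):  r_dot . F = pi sin^2 cos - 5 t^(3/2)
    t = np.linspace(0.0, 1.0, 200_001)
    rdot_F_b = np.pi * np.cos(np.pi * t) * np.sin(np.pi * t)**2 - 5 * t**1.5
    print(np.trapz(rdot_F_b, t))      # -> -2

    # (c) r(t) = (0, t, t^2 + 1.5 t),  t from 0 to -2:  r_dot . F = 3 t^2 + 3 t
    s = np.linspace(0.0, -2.0, 200_001)
    rdot_F_c = (s**2 + 1.5 * s) + (2 * s + 1.5) * s
    print(np.trapz(rdot_F_c, s))      # -> -2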
S.V2 Curvilinear Coordinates
S.V2.3 Cylindrical and spherical coordinates
P V2.3.2 Coordinate transformations

P1: (r, θ, φ) = (2, π/6, 2π/3) ,  (x, y, z) = (−1/2, √3/2, √3) ,  (ρ, φ, z) = (1, 2π/3, √3) .
x = r sin θ cos φ = 2 · 1/2 · (−1/2) = −1/2 ,  y = r sin θ sin φ = 2 · 1/2 · √3/2 = √3/2 ,
z = r cos θ = 2 · √3/2 = √3 ,  ρ = √(x² + y²) = √(1/4 + 3/4) = 1 .

P2: (ρ, φ, z) = (4, π/4, 2) ,  (x, y, z) = (2√2, 2√2, 2) ,  (r, θ, φ) = (2√5, 1.11, π/4) .
x = ρ cos φ = 4 · √2/2 = 2√2 ,  y = ρ sin φ = 4 · √2/2 = 2√2 ,
r = √(x² + y² + z²) = √(8 + 8 + 4) = 2√5 ,  θ = arccos(z/r) = arccos(1/√5) ≈ 1.11 (≈ 63°) .
P V2.3.4 Spherical coordinates: velocity, kinetic energy, angular momentum
In terms of the spherical coordinates y1 = r, y2 = θ, y3 = φ, the Cartesian coordinates are given by x1 = x = r sin θ cos φ, x2 = y = r sin θ sin φ, x3 = z = r cos θ. We also have exi · exj = δij . Position vector: r = x ex +y ey +z ez = r sin θ cos φ ex +r sin θ sin φ ey +r cos θ ez = rer . (a) Construction of the local basis vectors: vyi = ∂r/∂yi , vyi /vyi .
vyi = k∂r/∂yi k ,
eyi =
vr = sin θ cos φ ex + sin θ sin φ ey + cos θ ez vθ = r cos θ cos φ ex + r cos θ sin φ ey − r sin θ ez vφ = −r sin θ sin φ ex + r sin θ cos φ ey
h
vr = sin2 θ cos2 φ + sin2 θ sin2 φ + cos2 θ
h
i1/2
=1
vθ = r2 cos2 θ cos2 φ + r2 cos2 θ sin2 φ + r2 sin2 θ
i1/2
=r
h
vφ = r2 sin2 θ sin2 φ + r2 sin2 θ cos2 φ]1/2 = r sin θ vr = sin θ cos φ ex + sin θ sin φ ey + cos θ ez . vr vθ eθ = = cos θ cos φ ex + cos θ sin φ ey − sin θ ez . vθ vφ eφ = = − sin φ ex + cos φ ey . vφ er =
Normalization is guaranteed by construction: er · er = eθ · eθ = eφ · eφ = 1 . Orthogonality: er · eθ = sin θ cos θ(cos2 φ + sin2 φ) − cos θ sin θ = 0 , er · eφ = − sin θ cos φ sin φ + sin θ cos φ sin φ = 0 , eφ · eθ = − cos θ cos φ sin φ + cos θ sin φ cos φ = 0 . Hence: eyi · eyj = δij . X (b) Cross product: er × er = eθ × eθ = eφ × eφ = 0 . er × eθ = (sin θ cos φ ex + sin θ sin φ ey + cos θ ez ) × (cos θ cos φ ex + cos θ sin φ ey − sin θ ez ) = ex (sin θ sin φ(− sin θ) − cos θ cos θ sin φ) + ey (cos θ cos θ cos φ − (− sin θ sin θ cos φ)) + ez (sin θ cos φ cos θ sin φ − sin θ sin φ cos θ cos φ) = − sin φ ex + cos φ ey = eφ ,
eφ × er = (− sin φ ex + cos φ ey ) × (sin θ cos φ ex + sin θ sin φ ey + cos θ ez ) = ex (cos φ cos θ − 0) + ey (0 − (− sin φ) cos θ) + ez (− sin φ sin θ sin φ − cos φ sin θ cos φ) = cos θ cos φ ex + cos θ sin φ ey − sin θ ez = eθ , eθ × eφ = (cos θ cos φ ex + cos θ sin φ ey − sin θ ez ) × (− sin φ ex + cos φ ey ) = ex (0 − (− sin θ) cos φ) + ey (− sin θ(− sin φ) − 0) + ez (cos θ cos φ cos φ − cos θ sin φ(− sin φ)) = sin θ cos φ ex + sin θ sin φ ey + cos θ ez = er . Hence: eyi × eyj = εijk eyk . X
(d)
d ˙ θ r + φ∂ ˙ φ r (a) r(r, θ, φ) = r∂ ˙ r r + θ∂ = r˙ er + rθ˙ eθ + rφ˙ sin θ eφ . dt 2 T = 21 mv2 = 12 m r˙ er + rθ˙ eθ + rφ˙ sin θ eφ = 21 m r˙ 2 + r2 θ˙2 + r2 φ˙ 2 sin2 θ .
(e)
L = m(r × v) = m rer × (r˙ er + rθ˙ eθ + rφ˙ sin θ eφ )
(c)
v=
˙ r × eθ ) + r2 φ˙ sin θ(er × eφ ) = mr2 −φ˙ sin θ eθ + θe ˙ φ = m r2 θ(e
.

P V2.3.6 Line integral in Cartesian and spherical coordinates
(a) In Cartesian coordinates we parametrize the path as r(t) = a + t(b − a), t ∈ (0, 1): ˙ γ1 : r(t) = (1 − t, 0, t)T , r(t) = (−1, 0, 1)T , F(r(t)) = (0, 0, f z(t))T = (0, 0, f t)T . ˆ 1 ˆ 1 W [γ1 ] = dt r˙ · F(r(t)) = dt f t = 21 f . 0
0
(b) In spherical coordinates, r = rer . Along the curve γ2 the angle θ runs from π/2 to 0, with r = 1 and φ = 0. Thus γ2 can be parametrised using θ as curve parameter: γ2 : r(θ) = er |φ=0 = (sin θ, 0, cos θ)T ,
∂θ r(θ) = eθ |φ=0 = (cos θ, 0, − sin θ)T .
In spherical coordinates the vector field takes the form: F(r) = f zez = f cos θ(cos θ er − sin θ eθ )|φ=0 . ˆ 0 ˆ π/2 W [γ2 ] = dθ ∂θ r · F = −f dθ eθ · (cos θ cos θ er − cos θ sin θ eθ ) π/2
ˆ
=f 0
0 π 2
dθ cos θ sin θ = f
h
1 2
sin2 θ
iπ/2 0
=
1 f 2
.
Discussion: Since F is a gradient field (with F = ∇( 21 f z 2 )), the value of its line integral depends only on the starting point and endpoint of its path. These are the same for γ1 and γ2 , hence W [γ1 ] = W [γ2 ].
P V2.3.8 Line integrals in cylindrical coordinates: bathtub drain
(a) For ρ0 = 10ρd the soap bubble reaches the drain after a time td = τ ln(ρ0 /ρd ) = τ ln 10 ' 2.3τ . During this time the radius shrinks by a factor of ρ0 /ρd = 10, and it circles the z-axis n = td ω/(2π) times; for ω = 6π/τ we have n ' (2.3τ )(3/τ ) ' 7. r(t) = ρ(t)eρ (t) + z(t)ez (t),
(b)
ρ(t) = ρ0 e−t/τ ,
with
φ(t) = ωt,
y x z(t) = z0 e−t/τ,
˙ φ + ze v = r˙ = ρe ˙ ρ + ρφe ˙ z = −(ρ0 /τ )e−t/τ eρ + (ωρ0 )e−t/τ eφ − (z0 /τ )e−t/τ ez kv(t)k = e−t/τ
p
(ρ0 /τ )2 +(ωρ0 )2 +(z0 /τ )2
vd = kv(td )k = e− ln(ρ0 /ρd ) =
ρd p ˆ
τ
(ρ0 /τ )2 +(ωρ0 )2 +(z0 /τ )2
1+(τ ω)2 +(z0 /ρ0 )2 . ˆ
td
td
dt kv(t)k = (vd ρ0 /ρd )
L[γ] =
(c)
p
0
h
dt e−t/τ = −τ (vd ρ0 /ρd ) e−t/τ
itd 0
0
= τ (vd ρ0 /ρd ) (1 − ρd /ρ0 ) = τ vd (ρ0 /ρd − 1) . (d)
˙ ˙ F = −mgez , r(t) · F(r(t)) = r(t) · (−mgez ) = e−t/τ mgz0 /τ ˆ td ˆ td td mgz0 ˙ W [γ] = dt r(t) · F(r(t)) = dt e−t/τ = −mgz0 e−t/τ τ 0 0 0 = mgz0 (1 − ρd /ρ0 ) .
The final height of the bubble is given by z(td ) = z0 e−td /τ = z0 ρd /ρ0 , and therefore the change in height is ∆z = z(0) − z(td ) = z0 (1 − ρd /ρ0 ). Thus the work done by gravity is given by W [γ] = mg∆z, which is equal to the change in potential energy.
S.V2.5 Local coordinate bases and linear algebra
P V2.5.2 Jacobian determinant for spherical coordinates
(a) Spherical coordinates: x = (x, y, z)^T = (r sin θ cos φ, r sin θ sin φ, r cos θ)^T. Jacobi matrix for y ↦ x(y):

J = ∂x/∂y = ∂(x, y, z)/∂(r, θ, φ)
  = [ ∂x/∂r  ∂x/∂θ  ∂x/∂φ ;  ∂y/∂r  ∂y/∂θ  ∂y/∂φ ;  ∂z/∂r  ∂z/∂θ  ∂z/∂φ ]
  = [ sin θ cos φ   r cos θ cos φ   −r sin θ sin φ ;
      sin θ sin φ   r cos θ sin φ    r sin θ cos φ ;
      cos θ         −r sin θ         0             ] .
(b) Inverse transformation: T
y = (r, θ, φ) =
2
J
2 1/2
∂r ∂y ∂θ ∂y ∂φ ∂y
∂r
(x + y + z )
∂r −1
2
∂x ∂(r, θ, φ) ∂θ = = ∂x ∂(x, y, z) ∂φ ∂x
∂z ∂θ ∂z ∂φ ∂z
, arccos
z (x2 + y 2 + z 2 )1/2
, arctan
y x
.
=
y (x2 +y 2 +z 2 )1/2 yz (x2 +y 2 )1/2 (x2 +y 2 +z 2 ) x x2 +y 2
x (x2 +y 2 +z 2 )1/2 xz (x2 +y 2 )1/2 (x2 +y 2 +z 2 ) y − x2 +y 2
sin θ cos φ = r1 cos θ cos φ φ − r1 sin sin θ
0
sin θ sin φ cos θ sin φ
1 r
z (x2 +y 2 +z 2 )1/2 2 +y 2 )1/2 − (x x2 +y 2 +z 2
cos θ − r1 sin θ . 0
1 cos φ r sin θ
Check:
r cos θ cos φ
−r sin θ sin φ
J · J −1 = sin θ sin φ
r cos θ sin φ
r sin θ cos φ
−r sin θ
cos θ
(c)
sin θ cos φ
sin θ cos φ
r1 cos θ cos φ φ − r1 sin sin θ
0
sin θ sin φ 1 r
cos θ sin φ 1 cos φ r sin θ
cos θ
− r1 sin θ 0
1
0
0
= 0
1
0
0
0
1
= 1. X
The Jacobi determinant is obtained as follows:
det(J) = | sin θ cos φ   r cos θ cos φ   −r sin θ sin φ ;
           sin θ sin φ   r cos θ sin φ    r sin θ cos φ ;
           cos θ         −r sin θ         0             |
       = r² cos θ [cos θ sin θ cos²φ + cos θ sin θ sin²φ] + r² sin θ [sin²θ cos²φ + sin²θ sin²φ]
       = r² sin θ (cos²θ + sin²θ) = r² sin θ .

det(J⁻¹) = | sin θ cos φ            sin θ sin φ            cos θ ;
             (1/r) cos θ cos φ      (1/r) cos θ sin φ      −(1/r) sin θ ;
             −(1/r) sin φ/sin θ     (1/r) cos φ/sin θ      0 |
         = −(1/r²)(sin φ/sin θ) [−sin²θ sin φ − cos²θ sin φ] − (1/r²)(cos φ/sin θ) [−sin²θ cos φ − cos²θ cos φ]
         = 1/(r² sin θ) (sin²φ + cos²φ) = 1/(r² sin θ) .

Check: det(J) · det(J⁻¹) = 1 . X
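The Jacobian and its determinant can be verified symbolically. A minimal sketch, assuming Python with sympy:

    import sympy as sp

    r, th, ph = sp.symbols('r theta phi', positive=True)
    x = r * sp.sin(th) * sp.cos(ph)
    y = r * sp.sin(th) * sp.sin(ph)
    z = r * sp.cos(th)
    J = sp.Matrix([x, y, z]).jacobian([r, th, ph])
    print(sp.simplify(J.det()))                      # r**2*sin(theta)
    print(sp.simplify(J.det() * J.inv().det()))      # 1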
S.V3 Fields
S.V3.1 Definition of fields
P V3.1.2 Sketching a vector field
(a) The direction of the vector field u(r) = (cos x, 0)T is always parallel to ex , independent of r. For a fixed value of x the field has a fixed value, independent of y, depicted by arrows that all have the same length and direction. For a fixed value of y, the length and direction of the arrows change periodically with x, as cos(x). In particular, u = ex for x = n2π, u = −ex for x = (n + 12 )2π, and u = 0 for x = (n + 12 )π, with n ∈ Z.
T (b) The p vector field w(r) = (2y, −x) has norm kw(r)k = 1 4y 2 + x2 , thus the arrow length increases with increasing krk, and increases more quickly with increasing |y| than with increasing |x|. On the x-axis we have w(r) = −xey , 0 thus the vectors stand perpendicular to the x-axis, pointing upwards for x < 0 and downwards for x > 0. Analogously, on the y-axis we have w(r) = 2yex , thus the vectors stand -1 perpendicular to the y-axis, pointing to the left for y < 0, -1 0 1 and to the right for y > 0. On the diagonal x = y we have x w(r) = x(2, −1)T , thus for x > 0 (or x < 0) all arrows point with slope − 21 towards the bottom right (or the top left). Analogously for the other diagonal. Arrow directions between axes and diagonals follow by interpolation. In both figures the axis labels refer to the units used for r-arrows from the domain of the map, while the unit of length for arrows from the codomain has not been specified.
S.V3.2 Scalar fields
P V3.2.2 Gradient of a valley
(a) The gradient and total differential are given by:

∇h_r = (∂_x h, ∂_y h)^T = e^{xy} (y, x)^T .
dh_r(n) = (∂_x h) n_x + (∂_y h) n_y = (y n_x + x n_y) e^{xy} .

[Figure: contour plot of the valley profile h(x, y) = e^{xy} with gradient arrows.]

(b) The direction of the steepest increase of the slope is given by the gradient vector ∇h_r = e^{xy} (y, x)^T. It is parallel to the unit vector n̂_∥ = ∇h_r/‖∇h_r‖ = (y, x)^T/r .

(c) The contour lines at the point r are perpendicular to the gradient vector ∇h_r, therefore they run along the unit vectors n̂_⊥ = (−x, y)^T/r . (Verify that dh_r(n̂_⊥) = 0. This confirms that h remains unchanged along the direction of n̂_⊥.)
(d) The arrows with starting points r₁ = (1/√2)(−1, 1)^T, r₂ = (0, 1)^T and r₃ = (1/√2)(1, 1)^T depict the vectors ∇h_{r₁} = (1/√(2e))(1, −1)^T, ∇h_{r₂} = (1, 0)^T and ∇h_{r₃} = √(e/2)(1, 1)^T, respectively.

(e)
The contour line at a height of h(r) ≡ H(> 0) is described by the equation H ≡ exy . For a given value of x, we solve for y and we find that y = ln(H)/x .
(f)
Regions of the valley that are completely flat locally can be found using the equation ∇hr = 0. This condition is satisfied only when x = y = 0 and therefore at r = 0 . The height at this point is h(0) = 1 .
(g) For a given distance r = ‖r‖ from the origin, the slope of the valley is steepest where ‖∇h_r‖ = e^{xy} r is maximal. This happens when x = y = ±r/√2 . (If this is not obvious to you, then find the maximum of xy|_{y=√(r²−x²)} = x√(r² − x²).)

P V3.2.4 Gradient of f(r)
(a)

∇r = (∂_x, ∂_y, ∂_z)^T √(x² + y² + z²) = (1/r)(x, y, z)^T = r/r = r̂ .

∇r² = (∂_x, ∂_y, ∂_z)^T (x² + y² + z²) = (2x, 2y, 2z)^T = 2r .

(b)

∇ϕ = (∂_x, ∂_y, ∂_z)^T ϕ(r)   [chain rule]   = (d_r ϕ) (∂_x, ∂_y, ∂_z)^T r   [using (a)]   = ϕ'(r) r̂ .
Interpretation: Since the field ϕ(r) = ϕ(r) depends only on the radius, the direction of the gradient vector (i.e. the direction along which ϕ(r) changes the most) is parallel to ˆ r, i.e. radially outwards; and the magnitude of the gradient vector (which states the slope of ϕ in this direction) is given by the derivative of ϕ with respect to r.
S.V3.3 Extrema of functions with constraints
P V3.3.2 Minimal area of open box with specified volume
We seek the box side lengths, x, y and z, that minimize the area, A = 2xz + 2yz + xy, under the constraint that its volume takes a specified value, g(x, y, z) = xyz − V = 0. To this end we extremize the auxiliary function, F (x, y, z, λ) = A(x, y, z) − λg(x, y, z) = 2xz + 2yz + xy − λ(xyz − V ), with Lagrange multiplier λ, with respect to all its variables: !
0 = ∂x F = 2z + y − λyz !
0 = ∂y F = 2z + x − λxz !
0 = ∂z F = 2x + 2y − λxy !
0 = ∂λ F = xyz − V .
(1) (2) (3) (4)
We need to solve these four equations for x, y, z and λ. To eliminate z, we form the difference (1) − (2) = 0 to obtain (y − x) − λz(y − x) = (y − x) · (1 − λz) = 0. This equation has two solutions, (i) z = 1/λ and (ii) x = y. Solution (i), combined with (2), leads to 2/λ = 0, which is a contradiction. Thus we consider only solution (ii), x = y:
4x − λx2 = x(4 − λx) = 0
⇒ (2)
⇒
0 = 2z +
(4)
⇒
xyz =
(5,7)
⇒
⇒
(8,9)
⇒
− 4z
2
z=
2 λ
4 λ
x=y=
(6,7)
4 λ
2 λ
4 λ
⇒ ⇒
=
x= z=
32 ! = λ3
4 λ
(5)
2 λ
(6) 5/3
V,
⇒
λ=
2 V 1/3
.
(7)
= 22−5/3 V 1/3 = 21/2 V 1/3 ,
(8)
= 21−5/3 V 1/3 = 2−2/3 V 1/3 .
(9)
i
h
Aminimal = 2xz + 2yz + xy = 2·21/3 ·2−2/3 + 2·21/3 ·2−2/3 + 21/3 ·21/3 V 2/3 = 3·22/3 V 2/3 .
Note that the prefactor, 3·2^{2/3} ≈ 4.7622, is slightly smaller than the value 5 which would result from using a cubical box with x = y = z = V^{1/3}.
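The Lagrange-multiplier result can be confirmed by direct constrained minimization. A minimal sketch, assuming Python with scipy, for the arbitrary example value V = 3:

    import numpy as np
    from scipy.optimize import minimize

    V = 3.0
    area = lambda p: 2*p[0]*p[2] + 2*p[1]*p[2] + p[0]*p[1]
    cons = {'type': 'eq', 'fun': lambda p: p[0]*p[1]*p[2] - V}
    res = minimize(area, x0=[1.0, 1.0, 1.0], constraints=[cons],
                   bounds=[(1e-3, None)]*3, method='SLSQP')
    print(res.x)                                  # ~ (2**(1/3), 2**(1/3), 2**(-2/3)) * V**(1/3)
    print(res.fun, 3 * 2**(2/3) * V**(2/3))       # minimal area matches 3*2^(2/3)*V^(2/3)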
P V3.3.4 Maximal volume of box enclosed in ellipsoid
Let 2x, 2y and 2z denote the side lengths of the rectangular box. We seek to extremize its volume, V = 8xyz, subject to the constraint that the corner, P , with coordinates (x, y, z) 2 2 2 lies on the ellipsoid, g(x, y, z) = xa2 + yb2 + zc2 − 1 = 0. To this end we extremize the auxiliary function
F (x, y, z) = V (x, y, z) − λg(x, y, z) = 8xyz − λ
y2 z2 x2 + + −1 a2 b2 c2
,
with Lagrange multiplier λ, with respect to all its variables: x ! 0 = ∂x F = 8yz − 2λ 2 , a y ! 0 = ∂y F = 8xz − 2λ 2 , b z ! 0 = ∂z F = 8xy − 2λ 2 . c x2 y2 z2 ! 0 = ∂λ F = 2 + 2 + 2 − 1 . a b c
(1) (2) (3) (4)
One solution of Eqs. (1)-(3) is x = y = z = 0, but this point does not satisfy Eq. (4) (i.e. it does not lie on the ellipsoid). Hence, all coordinates of P are nonzero. Eq. (1) then implies λ = 4a²yz/x . Eqs. (2) and (3) then yield

8xz = 8a²y²z/(x b²)   and   8xy = 8a²yz²/(x c²)   ⇒   x²/a² = y²/b² = z²/c² .   (5)

Inserting (5) into (4) yields the coordinates of P, (x_p, y_p, z_p)^T = (1/√3)(a, b, c)^T, and a maximal volume of V_max = 8 x_p y_p z_p = (8/(3√3)) abc .
P V3.3.6 Entropy maximization subject to constraints, continued
We impose the three constraints via three Lagrange multipliers, defining the auxiliary function

F({p_j}) = S({p_j}) − λ₁ g₁({p_j}) − λ₂ g₂({p_j}) − λ₃ g₃({p_j})
         = −Σ_{j=1}^{M} p_j ln p_j − λ₁ (Σ_{j=1}^{M} p_j − 1) − λ₂ (Σ_{j=1}^{M} p_j E_j − E) − λ₃ (Σ_{j=1}^{M} p_j N_j − N) ,

and extremizing it w.r.t. all its variables:

0 =! ∂_{p_j} F = −ln p_j − 1 − λ₁ − λ₂ E_j − λ₃ N_j   ⇒   p_j = e^{−λ₁−1} e^{−λ₂ E_j − λ₃ N_j} .

It follows that p_j depends exponentially on both E_j and N_j. We now define p_j = e^{−β(E_j − μN_j)}/Z, with e^{λ₁+1} ≡ Z, λ₂ ≡ β and λ₃ ≡ −βμ. Then the condition Σ_j p_j = 1 implies that the normalization constant has the form Z = Σ_j e^{−β(E_j − μN_j)} .
S.V3.4 Gradient fields
P V3.4.2 Line integral of a vector field
Given the vector field u(r) = (x e^{yz}, y e^{xz}, z e^{xy})^T, a possible parametrization of the straight line γ from 0 = (0, 0, 0)^T to b = b(1, 2, 1)^T is r(t) = tb, with t ∈ (0, 1).

r(t) = (x(t), y(t), z(t))^T = t b = tb(1, 2, 1)^T ,   ṙ(t) = b(1, 2, 1)^T ,
u(r(t)) = (x(t) e^{y(t)z(t)}, y(t) e^{x(t)z(t)}, z(t) e^{x(t)y(t)})^T = (tb e^{2b²t²}, 2tb e^{b²t²}, tb e^{2b²t²})^T ,
ṙ(t) · u(r(t)) = tb² e^{2b²t²} + 4tb² e^{b²t²} + tb² e^{2b²t²} = b² [2t e^{2b²t²} + 4t e^{b²t²}] .

W[γ] = ∫_γ dr · u = ∫₀¹ dt ṙ(t) · u(r(t)) = ∫₀¹ dt b² [2t e^{2b²t²} + 4t e^{b²t²}]
     = b² [ 1/(2b²) e^{2b²t²} + 2/b² e^{b²t²} ]₀¹ = (1/2) e^{2b²} + 2 e^{b²} − 5/2 .

W[γ]|_{b²=ln 2} = (1/2) e^{2 ln 2} + 2 e^{ln 2} − 5/2 = (1/2)·2² + 2·2 − 5/2 = 7/2 .
u(r) is not a gradient field, because ∂_x u^y − ∂_y u^x = yz e^{xz} − xz e^{yz} ≠ 0. Hence the integral does depend on the path taken.
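The value W[γ] = 7/2 for b² = ln 2 can be reproduced numerically. A minimal sketch, assuming Python with numpy:

    import numpy as np

    b = np.sqrt(np.log(2.0))
    t = np.linspace(0.0, 1.0, 200_001)
    integrand = b**2 * (2*t*np.exp(2*b**2*t**2) + 4*t*np.exp(b**2*t**2))
    print(np.trapz(integrand, t))     # -> 3.5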
P V3.4.4 Line integral of vector field on non-simply connected domain

(a)
B=
1 (−yxn , xn+1 , 0)T (x2 + y 2 )2
∂y B z − ∂z B y = 0,
∂z B x − ∂x B z = 0
y
x
∂x B − ∂y B =
x2 + y 2 (n + 1)xn − 2xn+1 · 2x + x2 + y 2 xn − 2yxn · 2y (x2 + y 2 )3
= (b)
xn+2 (n + 1 − 4 + 1) + y 2 xn (n + 1 + 1 − 4) =0 (x2 + y 2 )3
r = R (cos t, sin t, 0)T , r˙ = R (− sin t, cos t, 0)T , t ∈ [0, 2π] 1 1 (−y(t)x2 (t), x3 (t), 0)T = 4 R3 (− sin(t) cos2 (t), cos3 (t), 0)T B= R (x2 (t)+y 2 (t))2 ˆ 2π ˆ 2π R ˙ W [γC ] = dt r(t) · B (r(t)) = dt sin2 (t) cos2 (t) + cos4 (t) R 0 0 ˆ 2π h i2π part. int. 1 2 t + 2 sin t cos t = π . = dt cos (t) = 2 0
0
(c)
n=2 .
if
W [γT ] =
W [γT ]
π
π for |a| < 1 , 0 for |a| > 1 .
Justification: For |a| < 1 the triangle γT encloses the z axis. Therefore the integration path can be deformed continuously from γT to the circle γC discussed in part (b), without leaving a non-simply connected domain U that has no intersection with the z axis but encircles it. Since ∂i B j = ∂j B i holds in the entire domain U , all closed line integrals in U have the same value, (b) hence W [γT ] = W [γC ] = π. For |a| > 1 the triangle does not encircle the z axis. Therefore the integration path can be deformed continuously to a point, without leaving a simply-connected domain U 0 that neither has an intersection with the z axis nor encircles it. Since ∂i B j = ∂j B i holds in the entire domain U 0 , all closed line integrals in U 0 have the same value, hence W [γT ] = 0.
−1
a
1
ez
ey γC
U
ex
γT
ez
ey ex
U γT
S.V3.5 Sources of vector fields
P V3.5.2 Divergence
(a)
∇ · u = ∂i ui = ∂x (xyz) + ∂y (z 2 y 2 ) + ∂z (z 3 y) = yz + 5z 2 y .
(b)
∇ · [(a · r) b] = ∂i (aj rj )bi = aj bi ∂i rj = aj bi δi j = ai bi = a · b .
P V3.5.4 Gauss's theorem – cuboid (Cartesian coordinates)
C is a cuboid with edge lengths a, b and c (see sketch). Let S1 to S6 be its 6 faces with normal vectors n1,2 = ±ex , n3,4 = ±ey , n5,6 = ±ez . ´ We seek the flux, Φ = S dS · u, of the vector field u(r) = ( 21 x2 + x2 y, 12 x2 y 2 , 0)T , through the cube’s surface, S ≡ ∂C ≡ S1 ∪S2 ∪· · ·∪S6 .
(a) Direct calculation of Φ = i=1 Φi , with Φi = S dS · u, yields i ˆ b ˆ c Φ1 + Φ2 = dy dz (n1 · u)x=a + (n2 · u)x=0 = 21 a2 (b + b2 )c . 0
0
|
{z
a2 ( 1 +y) 2
}
|
{z 0
}
n2 c
n4
n3 y a
x
´
P6
n5
n1 b n 6
ˆ
a
Φ3 + Φ4 =
c
dz (n3 · u)y=b + (n4 · u)y=0 = 61 a3 b2 c .
|
dx 0
0
{z
}
1 x 2 b2 2
}
{z
|
0
1 2 a bc(1 2
Analogously: Φ5 + Φ6 = 0, since n5 · u = n6 · u = 0. Thus Φ = ˆ
ˆ Gauss
dS · u =
(b) Alternatively, using Gauss’s theorem, Φ = ˆ
a
ˆ
b
ˆ
=
a
ˆ
S b
1 2
0
x2
ˆ
c
dx dy dz (x + 2xy + x2 y)
0
0
a b c 0
· y
1 2 a bc(1 2
0
· z
+b+
0
0
a 1
+ x2
0
dV ∇ · u, we obtain C
dx dy dz ∇ · u = 0
=
ˆ
c
Φ=
+ b + 31 ab) .
·
2
y2
0
b c 0
· z
0
+
1 3
x3
a 1 0
·
2
y2
b c 0
· z
0
X
1 ab) 3
= (a) .
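The flux obtained via Gauss's theorem can be confirmed by evaluating the volume integral of ∇ · u = x + 2xy + x²y symbolically. A minimal sketch, assuming Python with sympy:

    import sympy as sp

    x, y, z, a, b, c = sp.symbols('x y z a b c', positive=True)
    div_u = sp.diff(x**2/2 + x**2*y, x) + sp.diff(x**2*y**2/2, y)   # u_z = 0
    flux = sp.integrate(div_u, (x, 0, a), (y, 0, b), (z, 0, c))
    print(sp.simplify(flux - a**2*b*c*(sp.Rational(1, 2) + b/2 + a*b/6)))   # 0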
P V3.5.6 Computing volume of grooved ball using Gauss's theorem
(a) In spherical coordinates, r = er r, the surface element of the sphere with radius R is given by dS = er R2 sin θ dθdφ. We thus have dS · 31 r|r=R = 13 R3 sin θdθdφ and ˆ 2π ˆ π ˆ dS · u = dφ dθ 31 R3 sin θ = 34 πR3 . X V = S
0
0
(b) For a grooved ball with φ-dependent radius, r(φ), the surface can still be parametrized using spherical coordinates: S = {r = er r(φ) | θ ∈ (0, π), φ ∈ (0, 2π)} . The oriented surface element then takes the form
dS = dθdφ (∂θ r × ∂φ r) = dθdφ eθ r(φ) × eφ sin θ r(φ) + er dφ r(φ) = dθdφ er r2 (φ) sin θ − eφ r(φ)dφ r(φ) ,
and for 31 r = er 13 r the corresponding flux element is dΦ = dS· 13 r = dθdφ 13 r3 (φ) sin θ .
2/3
The volume of a grooved ball with φ-dependent radius, r(φ) = R 1 + sin(nφ) can thus be computed as ˆ ˆ π ˆ 2π V = dS · 31 r = dθ dφ 31 r3 (φ) sin θ S
= 23 R3 where we used: ˆ π dθ sin θ = 2 , 0
P
ˆ
0
,
0
2π
dφ 1 + 2 sin(nφ) + 2 sin2 (nφ) =
0
ˆ
ˆ
2π
2π
4 πR3 3
˜ φ=nφ
1 + 12 2
ˆ
2πn
dφ sin2 (nφ) =
dφ sin(nφ) = 0 , 0
0
0
˜ dφ n
,
sin2 φ˜ = π .
V3.5.8 Flux integral: flux of vector field through surface with cylindrical symmetry
We employ cylindrical coordinates. The surface SW can be parametrized as r(φ, z) = (e−az cos φ, e−az sin φ, z)T , and its surface element is given by
dSW = ∂φ r × ∂z r dφ dz =
−e−az sin φ e−az cos φ 0
−ae−az cos φ
e−az cos φ
× −ae−az sin φ dφ dz = e−az sin φ dφ dz. 1
ae−2az
The flux integral through the surface SW can be computed as follows: ˆ
ΦW
ˆ2π ˆ1 ˆ2π ˆ1 e−az cos φ e−az cos φ −2az −az sin φ · e−az sin φ = dφ dz (e = dSW ·u = dφ dz e −2aze−2az )
SW
0
ˆ
1
= 2π
ae−2az
0
ˆ
dz (1 − 2az)e−2az = 2π
−2z 1
dz 0
0
0
0
d ze−2az = 2π ze−2az dz h
i1
= 2πe−2a .
0
For SB , a disk lying in the xy-plane, the surface element is dSB = −ez ρdρ dφ (with negative sign, since the outward direction points downward). At z = 0 the vector field ´ u(r) is perpendicular to ez , implying ΦB = S dS · u = 0 . B For ST , a circular disc with radius e−a lying in the z = 1-plane, the surface element is dST = ez ρdρdφ (with positive sign, since here the outward direction points upward). ˆ
ˆ dST ·u =
ΦT = ST
ˆ
2π
e−a
dφ 0
0
0
ρ cos φ
dρ ρ 0 · ρ sin φ = −4π 1
−2
ˆ
e−a
dρ ρ = −2πe−2a .
0
For the total outward directed flux through S, we have: Φ = ΦW + ΦB + ΦT = 0 .
S.V3.6 Circulation of vector fields
P V3.6.2 Curl
(a)
∂y uz − ∂z uy
∂y (xyz 3 ) − ∂z (y 2 z 2 )
xz 3 − 2zy 2
∇ × u = ei ijk ∂j uk = ∂z ux − ∂x uz = ∂z (xyz) − ∂x (xyz 3 ) = ∂x uy − ∂y ux
(b)
xy − yz 3
∂x (y 2 z 2 ) − ∂y (xyz)
.
−xz
We use Cartesian coordinates and put all indices downstairs: ∇ × [(a · r) b] = ei ijk ∂j (al rl )bk = ei ijk al δjl bk = ei ijk aj bk = a × b .
P V3.6.4 Stokes's theorem – cuboid (Cartesian coordinates)
C is a cuboid with edge lengths a, b and c (see sketch). Let S1 to S6 be its 6 sides with normal vectors n1,2 = ±ex , n3,4 = ±ey , n5,6 = ±ez . 2 2 T 1 Given the vector ´ field w(r) = 2 (yz , −xz , 0) , we seek the flux of its curl, Φ = S dS · (∇ × w), through the surface S ≡ ∂C\top ≡ S1 ∪ S2 ∪ S3 ∪ S4 ∪ S6 .
z
∇ × w = (xz, yz, −z 2 )T
c
n4
n3 y
a x
(a) Direct calculation of Φ = Φ1 + Φ2 + Φ3 + Φ4 + Φ6 :
n 5 n2
n1
b n6
ˆ
b
c
dz n1 · (∇ × w)x=a + n2 · (∇ × w)x=0 = 12 abc2 .
|
dy
Φ1 + Φ 2 =
0
0
ˆ
ˆ
a
Φ3 + Φ 4 =
dx 0
0
{z
}
az
|
}
{z 0
c
dz n3 · (∇ × w)y=b + n4 · (∇ × w)y=0 = 12 abc2 . | {z } | {z } bz
0
Analogously: Φ6 = 0, since n6 · (∇ × w)z=0 = 0. Therefore Φ = abc2 . ˆ
Stokes
dS · (∇ × w) =
(b) Alternatively, using Stokes’s theorem: Φ =
dr · w .
S
∂S
z
The five outer faces S = S1 ∪ S2 ∪ S3 ∪ S4 ∪ S6 (coloured dark grey in the sketch) have the same boundary as the top face, S5 (light grey), hence ∂S = ∂S5 . Because of the outward/downward orientation of the surface S, the line integral around the top face must be performed in the clockwise direction. This follows from the right-hand rule applied to the normal vector of the side faces.
r0 a
S5
r3 n4
x
n2 r1
b
n1
c r2 n 3 y
n6
To calculate the line integral, we define r0 = (0, 0, c)T , r1 = (0, b, c)T , r2 = (a, b, c)T , r3 = (a, 0, c)T and parametrize the line segments by t, as follows: γ1 [r0 → r1 ] : t ∈ (0, b) γ2 [r1 → r2 ] : t ∈ (0, a) γ3 [r2 → r3 ] : t ∈ (0, b) γ4 [r3 → r0 ] : t ∈ (0, a)
r(t) = (0, t, c)T ,
˙ r(t) = (0, 1, 0)T ,
w r(t) = 21 (tc2 , 0, 0)T ,
˙ r(t) · w(r(t))
r(t) = (t, b, c)T ,
2
γ1
= 0.
˙ r(t) = (1, 0, 0)T , T
2
w r(t) = 21 (bc , −tc , 0) ,
˙ r(t) · w(r(t))
r(t) = (a, b − t, c)T ,
γ2
= 12 bc2 .
˙ r(t) = (0, −1, 0)T ,
w r(t) = 21 ((b − t)c2 , −ac2 , 0)T ,
˙ r(t) · w(r(t))
r(t) = (a − t, 0, c)T ,
γ3
= 21 ac2 .
˙ r(t) = (−1, 0, 0)T ,
w r(t) = 12 (0, −(a − t)c2 , 0)T ,
˙ r(t) · w(r(t))
γ4
= 0.
The line integral along the top border, ∂S = γ1 ∪ γ2 ∪ γ3 ∪ γ4 , thus yields: ˆ ˆ a ˆ b X Φ= dr · w = dt 12 bc2 + dt 12 ac2 = abc2 = (a) . ∂S
P
0
0
V3.6.6 Stokes’s theorem – cylinder (cylindrical coordinates)
(a) Direct calculation of the surface integral over the top face of the cylinder:
3
u in cylindrical coordinates : u =
Curl of u :
ρ z
∇ × u = er
− sin φ cos φ
= eφ ρ
0
ρ3 4ρ2 + e z z2 z
3
z
ˆ Flux through the top:
ˆ
ˆ
2π
dS · (∇ × u) =
ΦT =
dφ
T
T
4ρ3 R2 = 2π . aR2 a
dρ
0
(b) Alternatively, using Stokes’s theorem: ˆ ˆ Stokes: ΦT = dS · (∇ × u) =
R
0
dr · u . ∂T
The line integral around the top of the cylinder can be parametrized by r(φ) = (R cos φ, R sin φ, aR2 )T = Reρ + aR2 ez , with φ ∈ I = (0, 2π). γ[r(0) → r(2π)] :
r(φ) = eρ R + ez aR2 ,
˙ r(φ) = eφ R,
3
R R2 R ˙ = eφ , r(φ) · u(r(φ)) γ = 2 R a a a 2 ˆ 2π ˆ R R2 dφ = 2π dr · u = . ΦT = a a 0 ∂T
u r(φ) = eφ
S.V3.7 Practical aspects of three-dimensional vector calculus
P V3.7.2 Derivatives of curl of vector field
(a)
(iii)
(ii)
(i)
∇·(∇×u) = ∂i ijk ∂j uk = −∂i jik ∂j uk = −∂j ijk ∂i uk = −∂i ijk ∂j uk = −∇· (∇×u). For step (i) we used the anti-symmetry of the Levi-Civita symbol under exchange of indices; for (ii) we relabelled summation indices as i ↔ j; for (iii) we used Schwarz’s theorem to change the order of taking partial derivatives. We have thus shown that ∇ · (∇ × u) is equal to minus itself. Hence it must vanish, ∇ · (∇ × u) = 0 . X
(b) The second identity is a vector relation. We consider its ith component: [∇ × (∇ × u)]i = ijk ∂j (∇ × u)k =
ijk klm
∂j ∂l um = ∂j ∂i uj − ∂j ∂j ui
|{z}
| {z }
=δil δjm −δim δjl
=∂i ∂j
= ∂i (∇ · u) − ∇2 ui = ∇(∇ · u) − ∇2 u
i
X
The individual steps are completely analogous to those used to prove the Grassmann identity, a × (b × c) = b(a · c) − c(a · b) (→ ??) — with the important difference that the components of u must be written at the very right in each term (pulling them to the left would be a mistake, since ∂j ui 6= ui ∂j ). (c)
We consider the field u = (x2 yz, xy 2 z, xyz 2 )T .
∂y uz − ∂z uy
∂y (xyz 2 ) − ∂z (xy 2 z)
x(z 2 − y 2 )
∇×u = ∂z ux − ∂x uz = ∂z (x2 yz) − ∂x (xyz 2 ) = y(x2 − z 2 ) . ∂x (xy 2 z) − ∂y (x2 yz)
∂x uy − ∂y ux
∂x
z(y 2 − x2 )
x(z 2 − y 2 )
∇·(∇×u) = ∂y ·y(x2 − z 2 ) = (z 2 − y 2 ) + (x2 − z 2 ) + (y 2 − x2 ) = 0 . X ∂z
z(y 2 − x2 )
∇·u = ∂x ux + ∂x ux + ∂x ux = 6xyz .
x2 yz
∂x
6yz
2yz
4yz
∇(∇·u) − ∇2 u = ∂y 6xyz − (∂x2 +∂y2 +∂z2 )xy2 z =6xz −2xz = 4xz xyz 2
∂z
∂y z(y 2 − x2 ) − ∂z y(x2 − z 2 )
6xy
4yz
2xy
4xy
X ∇×(∇×u) = ∂z x(z 2 − y2 ) − ∂x z(y2 − x2 ) = 4zx = ∇(∇·u) − ∇2 u.
∂x y(x2 − z 2 ) − ∂y x(z 2 − y 2 )
4xy
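The identity ∇×(∇×u) = ∇(∇·u) − ∇²u for this specific field can also be verified symbolically. A minimal sketch, assuming Python with sympy:

    import sympy as sp

    x, y, z = sp.symbols('x y z')
    u = sp.Matrix([x**2*y*z, x*y**2*z, x*y*z**2])
    coords = (x, y, z)
    curl = lambda v: sp.Matrix([sp.diff(v[2], y) - sp.diff(v[1], z),
                                sp.diff(v[0], z) - sp.diff(v[2], x),
                                sp.diff(v[1], x) - sp.diff(v[0], y)])
    div_u = sum(sp.diff(u[i], coords[i]) for i in range(3))
    lhs = curl(curl(u))
    rhs = sp.Matrix([sp.diff(div_u, s) for s in coords]) \
          - sp.Matrix([sum(sp.diff(u[i], s, 2) for s in coords) for i in range(3)])
    print((lhs - rhs).applyfunc(sp.simplify))     # zero vector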
P V3.7.4 Nabla identities
∂x
0
∇f = ∂y z −1 cos z = −y−2 cos z .
(a)
−y −1 sin z
∂z
∇2 f = ∂x2 + ∂y2 + ∂z2 z −1 cos z = y −1 cos(z) 2y −2 − 1
.
∇ · u = ∂x ux + ∂y uy + ∂z uz = ∂x (−y) + ∂y x + ∂z z 2 = 2z .
∂y uz − ∂z uy
∂y z 2 − ∂z x
0
∇ × u = ∂z ux − ∂x uz = ∂z (−y) − ∂x z 2 = 0 . ∂x x − ∂y (−y)
∂x uy − ∂y ux
2
∇ · w = ∂x wx + ∂y wy + ∂z wz = ∂x x + ∂y 0 + ∂z 1 = 1 .
∂y wz − ∂z wy
∂y 1 − ∂z 0
∇ × w = ∂z wx − ∂x wz = ∂z x − ∂x 1 = 0 . ∂x wy − ∂y wx
∂x 0 − ∂y x
(b) Equation (i) is a scalar equation. In contrast, (ii) and (iii) are vector equations, which we will consider for a specific component, say i. (i)
∇ · (u × w) = ∂i (u × w)i = ∂i ijk uj wk = ijk wk ∂i uj + ijk uj ∂i wk = wk kij ∂i uj − uj jik ∂i wk = wk (∇ × u)k − uj (∇ × w)j = w · (∇ × u) − u · (∇ × w) X
(ii)
[∇ × (f u)]i = ijk ∂j f uk = f ijk ∂j uk + uk ijk ∂j f = f (∇ × u)i − ikj uk (∇f )j
= f (∇ × u) − (u × (∇f ))
i
X
(iii) [∇ × (u × w)]i = ijk ∂j (u × w)k = ijk ∂j klm ul wm ijk klm
=
(wm ∂j ul + ul ∂j wm ) = wj ∂j ui − wi ∂j uj + ui ∂j wj − uj ∂j wi
| {z }
=δil δjm −δim δjl
= (w · ∇) u − w (∇ · u) + u (∇ · w) − (u · ∇) w
(c)
In addition to the results from (a) we use u × w = (x, z²x + y, −x²)^T:

(i) ∇ · (u × w) = ∇ · (x, z²x + y, −x²)^T = 2 .
x
0
X
w · (∇ × u) − u · (∇ × w) = 0 · 0 − u · 0 = 2 = ∇ · (u × w) . 1
(ii)
− cos z
2
−y −2 z 2 cos z + xy −1 sin z
∇ × (f u) = ∇ × xy−1 cos z =
y −1 cos z
f (∇ × u) − u × (∇f ) =
0
=
−y
0 −y −1 sin z
z2
cos z
−xy −1 sin z + z 2 y −2 cos z
xy −1 sin z − y −2 z 2 cos z
=
− sin z
2y −1 cos z
x × −y −2 cos z
−
0 −1
−
0
0
2y
.
sin z
y −1 z 2 cos z
sin z
y −1 cos z
y −1 cos z
X
= ∇ × (f u) .
(iii)
x
∇ × (u × w) = ∇ × z 2 x + y = −x2
−2zx 2x
.
z2
(w · ∇) u − (u · ∇) w + u (∇ · w) − w (∇ · u)
= (x∂x +∂z )
−y
x
x −
−y∂x +x∂y +z 2 ∂z 0 +
z2
0
−y
= x− 2z
P
−y
2xz
0 +
x− 0
0
z2
=
2z
−2xz 2x
x
x (1) − 0 (2z) z2
1
−y
1
X = ∇ × (u × w) .
z2
V3.7.6 Gradient, divergence, curl, Laplace in spherical coordinates
(a)
(b) (c)
r(r, θ, φ) = (r sin θ cos φ, r sin θ sin φ, r cos θ)T . ∂r r ≡ er nr ,
with
nr = 1 ,
er = (sin θ cos φ, sin θ sin φ, cos θ)T .
∂θ r ≡ eθ nθ ,
with
nθ = r ,
eθ = (cos θ cos φ, cos θ sin φ, − sin θ)T .
∂φ r ≡ eφ nφ ,
with
nφ = r sin θ ,
eφ = (− sin φ, cos φ, 0)T .
1 1 1 1 1 ∂r f + eθ ∂θ f + eφ ∂φ f = er ∂r f + eθ ∂θ f + eφ ∂φ f . nr nθ nφ r r sin θ h i 1 ∇·u= ∂r nθ nφ ur + ∂θ nφ nr uθ + ∂φ nr nθ uφ nr nθ nφ h i 1 = 2 ∂r r2 sin θur + ∂θ r sin θuθ + ∂φ ruφ r sin θ ∇f = er
=
1 1 1 ∂r r2 ur + ∂θ sin θuθ + ∂φ uφ . r2 r sin θ r sin θ
(d)
(e)
1 1 ∂θ nφ uφ − ∂φ nθ uθ + eθ ∂φ nr ur − ∂r nφ uφ nθ nφ nφ nr h i 1 + eφ ∂r nθ uθ − ∂θ nr ur nr nθ h h i i 1 1 = er 2 ∂θ r sin θuφ − ∂φ ruθ + eθ ∂φ ur − ∂r r sin θuφ r sin θ r sin θ h i 1 θ r + eφ ∂r ru − ∂θ u r h i h i 1 1 1 = er ∂θ sin θuφ − ∂φ uθ + eθ ∂φ ur − ∂r ruφ r sin θ r sin θ h i 1 θ r + eφ ∂r ru − ∂θ u . r 2 ∇ f = ∇ · ∇f 1 1 1 1 1 = 2 ∂r r2 ∂r f + ∂θ sin θ ∂θ f + ∂φ ∂φ f r r sin θ r r sin θ r sin θ
∇ × u = er
i
h
i
h
1 1 1 ∂r r2 ∂r f + 2 ∂θ sin θ∂θ f + 2 ∂φ ∂φ f . r2 r sin θ r sin θ2 h i 1 D ≡ ∇×u = eη +η µ + η µ, ∂µ nν uν − ∂ν nµ uµ nµ nν ν =
(f)
|
{z
}
≡ Dη
ν
1 ∂η nµ nν Dη + η µ + η µ , nη nµ nν ν ν h i 1 1 ν ∇· ∇×u = ∂η nµ nν ∂µ nν u −∂ν nµ uµ +η µ + η µ nη nµ nν nµ nν ν ν h 1 ν µ η ν = ∂η ∂µ nν u − ∂η ∂ν nµ u + ∂µ ∂ν nη u − ∂µ ∂η nν u nη nµ nν ∇·D =
+ ∂ν ∂η nµ uµ − ∂ν ∂µ nη uη
=
i
1 (∂η ∂µ − ∂µ ∂η ) nν uν + (∂µ ∂ν − ∂ν ∂µ ) nη uη nη nµ v h
+ (∂ν ∂η − ∂η ∂ν ) nµ uµ = 0
i
[using Schwarz’s theorem] .
X
(g) In spherical coordinates the fields read f (r) = krk2 = r2 and u(r) = ez z = (er cos θ− eθ sin θ) r cos θ = er ur + eθ uθ + eφ uφ , with ur = r cos2 θ, uθ = r sin θ cos θ, uφ = 0. 1 1 ∇f = er ∂r f + eθ ∂θ f + eφ ∂φ f = er 2r . r r sin θ 1 1 1 ∇·u = 2 ∂r r2 ur + ∂θ sin θuθ + ∂φ uφ r r sin θ r sin θ 1 1 1 = 2 ∂r r3 cos2 θ − ∂θ sin2 θ cos θ + ∂φ 0 r r sin θ r sin θ = 3 cos2 θ − (2 cos2 θ − sin2 θ) = 1 . h i h h i i 1 1 1 1 ∇×u = er ∂θ sin θuφ − ∂φ uθ + eθ ∂φ ur − ∂r ruφ + eφ ∂r ruθ − ∂θ ur r sin θ r sin θ r
= er 0 + eθ 0 + eφ
1 ∂r −r2 sin θ cos θ − ∂θ (r cos2 θ) r h
i
1 = eφ (−2r sin θ cos θ + 2r cos θ sin θ) = 0 . r 1 1 1 1 2 ∇ f = 2 ∂r r2 ∂r f + 2 ∂θ sin θ∂θ f + 2 ∂φ ∂φ f = 2 ∂r (2r3 )+0+0 = 6 . 2 r r sin θ r sin θ r
P V3.7.8 Gradient, divergence, curl (cylindrical coordinates)

(a) Cartesian: r = (x, y, z)^T .
∂x f ∂y f ∂z f
∇f =
2zx 2zy x2 + y 2
! =
! .
∇ · u = ∂i ui = ∂x (zx) + ∂y (zy) + ∂z (0) = 2z . ∂y uz − ∂z uy ∂z ux − ∂x uz ∂x uy − ∂y ux
∇×u=
−y x 0
! =
! .
∇2 f = ∇ · (∇f ) = ∂i ∂i f = (∂x2 + ∂y2 + ∂z2 )f = 2z + 2z = 4z . r = (ρ, φ, z)T .
(b) Cylindrical:
f (ρ, φ, z) = zρ2 , u(ρ, φ, z) = eρ ρz,
uρ = ρz,
⇒
uφ = 0,
uz = 0 .
1 eρ ∂ρ + eφ ∂φ + ez ∂z zρ2 = eρ 2ρz + ez ρ2 . ρ 1 1 1 ∇ · u = ∂ρ (ρuρ ) + ∂φ uφ + ∂z uz = ∂ρ (zρ2 ) = 2z . ρ ρ ρ ∇f =
∇ × u = eρ
1 ∂φ uz − ∂z uφ ρ
+ eφ (∂z uρ − ∂ρ uz ) + ez
1 ∂ρ ρuφ − ∂φ uρ ρ
= eρ 0 + eφ ∂z (ρz) + ez 0 = eφ ρ . ∇2 f = ∇ · (∇f ) =
1 1 1 ∂ρ (ρ∂ρ f ) + 2 ∂φ2 f + ∂z2 f = ∂ρ 2ρ2 z = 4z . ρ ρ ρ
For the scalar ∇ · u, the agreement between the Cartesian and cylindrical results is obvious. In contrast, for the vectors ∇f and ∇ × u, the Cartesian results first have to be converted to cylindrical coordinates (or vice-versa) to verify the equivalence. For example: x y 0
!
∇f =2z
P
2
+ x +y
2
0 0 1
!
−y x 0
!
X
2
= eρ 2zρ + ez ρ ;
∇×u=
X
= eφ ρ .
V3.7.10 Gauss’s theorem – wedge ring (spherical coordinates)
´ (a) We seek the outward flux, ΦW = ∂W dS · u, of the vector field u = er r2 through the surface of the wedge–ring, ∂W = ∂sphere + ∂wedge . The corresponding directed surface elements are dS = er dθdφR2 sin θ on the curved surface ∂sphere , and dS ∝ eθ
on the slanted surface ∂wedge . The latter does not contribute to the flux, since eθ ·u ∝ eθ · er = 0. ˆ ˆ 2π/3 ˆ 2π 2π/3 ΦW = dS · u = dθ sin θ dφ(R2 er ) · (R2 er ) = 2πR4 (− cos θ) ∂sphere
π/3
π/3
0
= 2πR4 2 21 = 2πR4 . (b) The components of u = er r2 in the local basis of spherical coordinates are ur = r2 , uθ = 0, uφ = 0, hence ∇ · u = r12 ∂r (r2 r2 ) = r12 4r3 = 4r. ˆ ˆ Gauss ΦW = dS · u = dV ∇ · u ∂W R
ˆ
ˆ
drr2
=
dφ π/3
0
0
(c)
W 2π/3
ˆ
2π
ˆ 2π/3
dθ sin θ4r = 2π(− cos θ)|π/3
R
dr4r3 = 2πR4 . 0
z
Method (i), computing the flux integral: the vector field w = −eθ cos θ is normal to the wedge–ring’s slanted surface, but tangential to its curved surface. Hence only the former contributes to the flux integral. The upper and π/3be parametrized as S± = lower slanted surfaces can r, r ∈ (0, R), φ ∈ (0, 2π), θ = 2π/3 , with surface elements
dS = ±drdφ∂r r × ∂φ r = ±drdφ(er × eφ r sin θ) = ∓eθ drdφ r sin θ. (The signs are chosen such that the normal vectors of the slanted surface point outward, ‘away from’ the wedge ring, see figure, which shows a cross section through ´ ˜W = the symmetry axis of the wedge ring.) Thus we compute the flux, Φ dS·w = ∂W ´ ´ dS · w + dS · w, as S+ S −
ˆ ˜W = Φ
ˆ
R
2π
0
=
h
dφ (−eθ sin θ)(−eθ cos θ)
drr 0 √ 1 2 R 2π2 23 12 2
θ=π/3
+ (eθ sin θ)(−eθ cos θ)
i θ=2π/3
√
=
3 πR2 2
.
For method (ii), using Gauss’s theorem, we first compute the divergence: 1 1 ∂θ (sin θ(− cos θ) = − cos θ2 − sin θ2 r sin θ r sin θ ˆ ˆ ˆ R ˆ 2π/3 ˆ Gauss = dS·w = dV ∇·w = dr r2 dθ sin θ
h
∇·w= ˜W Φ
∂W
= − 12 R2 2π
P
ˆ
W 2π/3
π/3
0
i
π/3
0
2π
dφ
(cos θ2 −sin θ2 ) r sin θ
2π/3 √ = πR2 12 2 23 = dθ cos 2θ = πR2 12 sin 2θ π/3
√
3 πR2 2
.X
P V3.7.12 Gauss's theorem – electrical field of a point charge (spherical coordinates)

(a) We use Cartesian coordinates, r = (x, y, z)^T, with r = √(x² + y² + z²), and write all indices downstairs. Denoting the components of r by x_j = x, y, z, we have ∂_j r = ∂r/∂x_j = x_j/r. The vector field E(r) = (Q/r³) r has Cartesian components E_j = R x_j, with R = Q/r³. It is advisable to begin by computing some partial derivatives:
∂j R = (∂r R)(∂j r) = (∂r R)(xj /r) = −3Rxj /r2 (1)
(1)
∇ · E = ∂j Ej = ∂j R xj + R ∂j xj = −3Rxj xj /r2 + 3R = [−3 + 3] R = 0 .
∇ × E = ∂i Ej εijk ek = ∂i (Rxj )εijk ek = (∂i R)xj + R(∂i xj ) εijk ek
= (∂r R)(xi /r)xj + Rδij εijk ek = 0
[from anti-symmetry of εijk ] .
(b) Now we repeat the calculations in spherical coordinates, r(r, θ, φ), with r > 0: E = er
Q , r2
Er =
⇒
Q , r2
E θ = 0,
Eφ = 0 .
1 1 1 1 X 1 ∂θ sin θE θ + ∂φ E φ =Q 2 ∂r r2 2 = 0 . ∂r r2 E r + 2 r r sin θ r sin θ r r 1 1 1 ∇ × E = er ∂θ sin θE φ − ∂φ E θ + eθ ∂φ E r − ∂r rE φ r sin θ r sin θ 1 θ r X + eφ ∂r rE − ∂θ E = 0 . r
∇·E=
The same results are obtained using Cartesian and spherical coordinates, but the latter more elegantly exploit the fact that E depends only on r and e_r.
Parametrization of the sphere S: r(φ) = er R(θ, φ) with φ ∈ (0, 2π), θ ∈ (0, π), and surface element dS = er dθ dφ sin θ R2 . ˆ
ˆ
S
ˆ
2π
dS · E =
ΦS =
π
dθ sin θ R2 Q
dφ 0
0
1 = 4πQ . R2
(d) From Gauss’s (mathematical) theorem, we see immediately that: ˆ ˆ (2) dV (∇ · E) = dS · E = 4πQ . V
(e)
(3)
S
On the one hand, it follows from (a) that ∇·E = 0 at all spatial points with r > 0, i.e. at all points except the origin. On the other hand, it follows from (d) that the volume integral of ∇ · E does not vanish in the volume V enclosed by the sphere S, but is rather equal to 4πQ. This appears paradoxical at first: how can the volume integral of a vector field yield a finite value if the field apparently vanishes everywhere? However, notice that the calculation from part (a) does not apply for the case r = 0, hence we have no reason to conclude that the field vanishes at the origin. The fact that the volume integral of ∇ · E yields a finite value, although the integrand vanishes except at r = 0, tells us that the integrand must be proportional to a three-dimensional δ-function, peaked at r = 0:

∇ · E = C δ^(3)(r) ,
with
δ (3) (r) = δ(x)δ(y)δ(z) .
The constant C can be determined as follows: ˆ ˆ (3) (4) 4πQ = dV (∇ · E) = C dV δ (3) (r) , V
(f)
(2)
|V
{z
=1
⇒
(4)
C = 4πQ .
}
Inserting the above result for C into Eq. (4) yields ∇ · E = 4πρ(r), with ρ(r) = Qδ (3) (r). This corresponds to Gauss’s (physical) law (one of the Maxwell equations), where ρ(r) describes the charge density of a point charge Q at the origin.
S.V4 Introductory concepts of differential geometry
S.V4.1 Differentiable manifolds
V4.1.2 Six-chart atlas for S 2 ⊂ R3
(a) The maps ri,± : Ui,± → Si,± act as follows: r1,± : r2,± : r3,± :
p
(x2 , x3 ) → (± 1p − (x2 )2 − (x3 )2 , x2 , x3 )T , 1 3 1 − (x1 )2 − (x3 )2 , x3 )T , (x , x ) → (x , ± 1p 1 2 1 2 (x , x ) → (x , x , ± 1 − (x1 )2 − (x2 )2 )T .
(b) S3,−;2,+ = S3,− ∩S2,+ is the quarter-sphere with x2 < 0, x3 > 0. It can be described by both r2,+ and r3,− . The transition function translating between these two description is: −1 r3,− ◦ r2,+ : U2,+ x3 0 ,
(x1 , x3 ) 7→ (x1 ,
U3,−
( x1 , x 3 )
x3
1 − (x1 )2 − (x3 )2 )T .
( x1 , x 2 ) 1 r3−,−
r2,+
x2
x1
p
U2,+
S2,+
S.V4.2 Tangent space P
V4.2.2 Tangent vectors on the sphere S 2
(a) The holonomic basis is generated by the curves yθ (t) = (θ + t, φ)T = y + teθ and yφ (t) = (θ, φ + t)T = y + teφ in U . The corresponding basis vectors are represented as: In U :
vθ = dt yθ (t) t=0 = eθ = (1, 0)T ,
vφ = dt yφ (t) t=0 = eφ = (0, 1)T . In
R3 :
∂r(y) = R(cos φ cos θ, sin φ cos θ, − sin θ)T , ∂θ ∂r(y) = = R(− sin φ sin θ, cos φ sin θ, 0)T . ∂φ
vθ = dt r(yθ (t)) t=0 =
vφ = dt r(yφ (t)) t=0
(b) The tangent vector to the spiral curve has the following representation: In U : In
R3
(a) ∂y dθ(t) ∂y dφ(t) + = eθ π+eφ 2πn = (π, 2πn)T . ∂θ dt ∂φ dt ∂r dθ(t) ∂r dφ(t) : (u1 , u2 , u3 )T = dt r(t) = + = vθ π + vφ 2πn ∂θ dt ∂φ dt (a) = R(cos φ cos θ, sin φ cos θ, − sin θ)T π + R(− sin φ sin θ, cos φ sin θ, 0)T (2πn) .
(uθ , uφ )T = dt y(t) =
(c)
Along the spiral, the function f (t) ≡ f (y(t)) has the form f (t) = r1 (t) − r2 (t) = R(cos φ − sin φ) sin θ
r=r(t)
.
Its directional derivative along the spiral, ∂u f = uj ∂j f , is (b)
∂u f = uθ ∂θ f + uφ ∂φ f = R π(cos φ − sin φ) cos θ − 2πn(sin φ + cos φ) sin θ
P
.
y=y(t)
V4.2.4 Holonomic basis for generalized polar coordinates
(a) Under the coordinate transformation y 7→ x(y), expressing Cartesian through gener∂xi alized polar coordinates, the holonomic basis vectors transform as ∂yj = ∂y j ∂xi ≡ (vyj )i ∂xi . With x1 = µa cos φ, x2 = µb sin φ, the Cartesian components of ∂µ and ∂φ thus read: ∂x1 ∂x2 , ∂µ ∂µ
T
= (a cos φ, b sin φ)T =
vφ ≡ ((vφ )1 , (vφ )2 )T =
∂x1 ∂x2 , ∂φ ∂φ
T
= (−µa sin φ, µb cos φ)T =
(b) The inverse relations read ∂xi = and φ = arctan
2
a
x x1 b
∂y j ∂xi
v2 ≡ ((v2 )µ , (v2 )φ )T =
JJ
=
∂x1 ∂µ ∂x2 ∂µ
∂x1 ∂φ ∂x2 ∂φ
bx1 a
2
− axb ,
T
.
∂yj ≡ (vxi )j ∂yj . With µ = [(x1 /a)2 + (x2 /b)2 ]1/2
∂µ , ∂φ ∂x1 ∂x1 ∂µ , ∂φ ∂x2 ∂x2
T T
= =
The Jacobi matrix is defined as J ij = (vxi )j : −1
,
, the generalized polar components of ∂1 and ∂2 thus read:
v1 ≡ ((v1 )µ , (v1 )φ )T =
(c)
(x1 ,x2 )T [(x1 /a)2 +(x2 /b)2 ]1/2
vµ ≡ ((vµ )1 , (vµ )2 )T =
! ∂µ ∂x1 ∂φ ∂x1
∂µ ∂x2 ∂φ ∂x2
−x2 /(ab) x1 , µa2 [(x1 /a)2 +(x2 /b)2 ]1/2
T
x1 /(ab) x2 , µb2 [(x1 /a)2 +(x2 /b)2 ]1/2
T
∂xi ∂y j
cos φ φ T , − sin a µa
=
sin φ cos φ T , µb b
= (vyj )i , with inverse (J −1 )j i =
=
=
a cos φ −µa sin φ b sin φ µb cos φ
cos φ a φ − sin µa
sin φ b cos φ µb
=
, .
∂y j ∂xi
=
1 0 .X 0 1
S.V5 Alternating differential forms S.V5.1 Cotangent space and differential one-forms P
V5.1.2 Differential of a function in Cartesian and spherical coordinates
(a) For f(x) = ½(x^1 x^1 + x^2 x^2), the coefficients of df = f_i dx^i are f_i = ∂f/∂x^i, hence
f_1 = x^1,   f_2 = x^2,   f_3 = 0   ⇒   df = x^1 dx^1 + x^2 dx^2.
(b) Using dx^i(∂_u) = dx^i(u^j ∂_j) = u^i, we obtain:
df(∂_u) = (x^1 dx^1 + x^2 dx^2)(x^1 ∂_1 − x^2 ∂_2 + ∂_3) = x^1 x^1 − x^2 x^2.
(c) In spherical coordinates, with x^1(y) = r cos φ sin θ, x^2(y) = r sin φ sin θ, the function f takes the form f(x(y)) = ½ r^2 (cos^2 φ sin^2 θ + sin^2 φ sin^2 θ) = ½ r^2 sin^2 θ. We obtain the coefficients of df = f_i(y) dy^i in spherical coordinates using f_i(y) = ∂f(y)/∂y^i:
f_r = ∂f/∂r = r sin^2 θ,   f_θ = ∂f/∂θ = r^2 sin θ cos θ = ½ r^2 sin 2θ,   f_φ = ∂f/∂φ = 0.
Thus, in spherical coordinates the form reads: df = r sin^2 θ dr + ½ r^2 sin 2θ dθ.
Alternatively, we can use the transformation rule for the coefficients of forms: under the transformation y ↦ x(y), expressing Cartesian through spherical coordinates, we have f_j(y) = f_i(x) J^i_j |_{x=x(y)}, with Jacobian matrix

J^i_j = ∂x^i/∂y^j = ( ∂x^1/∂r  ∂x^1/∂θ  ∂x^1/∂φ )   ( sin θ cos φ   r cos θ cos φ   −r sin θ sin φ )
                    ( ∂x^2/∂r  ∂x^2/∂θ  ∂x^2/∂φ ) = ( sin θ sin φ   r cos θ sin φ    r sin θ cos φ )
                    ( ∂x^3/∂r  ∂x^3/∂θ  ∂x^3/∂φ )   ( cos θ         −r sin θ         0             ).

With f_1(x(y)) = x^1(y) = r cos φ sin θ, f_2(x(y)) = x^2(y) = r sin φ sin θ, f_3 = 0, we obtain:
f_r(y) = [f_1(x) J^1_r + f_2(x) J^2_r + f_3(x) J^3_r]|_{x=x(y)} = r cos^2 φ sin^2 θ + r sin^2 φ sin^2 θ + 0 = r sin^2 θ, ✓
f_θ(y) = [f_1(x) J^1_θ + f_2(x) J^2_θ + f_3(x) J^3_θ]|_{x=x(y)} = r cos φ sin θ · r cos θ cos φ + r sin φ sin θ · r cos θ sin φ + 0 = ½ r^2 sin 2θ, ✓
f_φ(y) = [f_1(x) J^1_φ + f_2(x) J^2_φ + f_3(x) J^3_φ]|_{x=x(y)} = −r cos φ sin θ · r sin θ sin φ + r sin φ sin θ · r sin θ cos φ + 0 = 0. ✓
(d) Using dy^i(∂_{y^j}) = δ^i_j we obtain:
df(∂_u) = (r sin^2 θ dr + ½ r^2 sin 2θ dθ)(∂_r − ∂_θ + ∂_φ) = r sin^2 θ − ½ r^2 sin 2θ.
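Remark: the coefficients f_r, f_θ, f_φ can be cross-checked symbolically by differentiating f(x(y)) directly. The sketch below is illustrative only and assumes the Python library SymPy.

    import sympy as sp

    r, th, ph = sp.symbols('r theta phi', positive=True)

    # f = (x^1 x^1 + x^2 x^2)/2 with x^1 = r cos(phi) sin(theta), x^2 = r sin(phi) sin(theta)
    x1 = r * sp.cos(ph) * sp.sin(th)
    x2 = r * sp.sin(ph) * sp.sin(th)
    f = sp.trigsimp((x1**2 + x2**2) / 2)      # r**2*sin(theta)**2/2

    # Coefficients of df = f_i dy^i in spherical coordinates
    print(sp.diff(f, r))                      # r*sin(theta)**2
    print(sp.simplify(sp.diff(f, th)))        # r**2*sin(theta)*cos(theta) = (r**2/2)*sin(2*theta)
    print(sp.diff(f, ph))                     # 0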
S.V5.2 Pushforward and Pullback
P V5.2.2 Pushforward and pullback: hyperbolic coordinates
(a) For x^1 = ρ e^α, x^2 = ρ e^{−α}:

J^i_j = ∂x^i/∂y^j = ( ∂x^1/∂ρ  ∂x^1/∂α )   ( e^α      ρ e^α     )
                    ( ∂x^2/∂ρ  ∂x^2/∂α ) = ( e^{−α}   −ρ e^{−α} ).
(b) The pushforward of a general vector, ∂_u = ∂_{y^j} u^j, is given by F_* ∂_u = ∂_{x^i} J^i_j u^j:
F_* ∂_ρ = ∂_{x^i} J^i_ρ = ∂_{x^1} e^α + ∂_{x^2} e^{−α} = ∂_{x^1} √(x^1/x^2) + ∂_{x^2} √(x^2/x^1),
F_* ∂_α = ∂_{x^i} J^i_α = ∂_{x^1} ρ e^α − ∂_{x^2} ρ e^{−α} = ∂_{x^1} x^1 − ∂_{x^2} x^2,
where we used ρ = √(x^1 x^2), e^α = x^1/ρ = √(x^1/x^2), e^{−α} = √(x^2/x^1) to express the r.h.s. through x.
(c) The pullback of a general form, φ = φ_i dx^i, is given by F*φ = φ_i J^i_j dy^j:
F* dx^1 = J^1_j dy^j = e^α dρ + ρ e^α dα,   F* dx^2 = J^2_j dy^j = e^{−α} dρ − ρ e^{−α} dα.
(d) (i) Using the pushforward of the vectors (with F_* ρ∂_ρ = ∂_{x^1} x^1 + ∂_{x^2} x^2), we obtain (using (b)):
λ(F_* ρ∂_ρ) = (x^1 dx^1 + x^2 dx^2)(∂_{x^1} x^1 + ∂_{x^2} x^2) = (x^1)^2 + (x^2)^2,
λ(F_* ∂_α) = (x^1 dx^1 + x^2 dx^2)(∂_{x^1} x^1 − ∂_{x^2} x^2) = (x^1)^2 − (x^2)^2.
(ii) Using the pullback of the form λ = x^1 dx^1 + x^2 dx^2, which by (c) is
F*λ = ρ e^α (e^α dρ + ρ e^α dα) + ρ e^{−α} (e^{−α} dρ − ρ e^{−α} dα),
we obtain:
F*λ(ρ∂_ρ) = ρ^2 e^{2α} + ρ^2 e^{−2α} = 2ρ^2 cosh 2α = (x^1)^2 + (x^2)^2, ✓
F*λ(∂_α) = ρ^2 e^{2α} − ρ^2 e^{−2α} = 2ρ^2 sinh 2α = (x^1)^2 − (x^2)^2. ✓
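Remark: the consistency of pushforward and pullback can also be verified symbolically. The sketch below (not part of the original solution) assumes SymPy and checks F*λ(ρ∂_ρ) = (x^1)^2 + (x^2)^2 and F*λ(∂_α) = (x^1)^2 − (x^2)^2.

    import sympy as sp

    rho, alpha = sp.symbols('rho alpha', positive=True)

    # Hyperbolic coordinates: x^1 = rho*exp(alpha), x^2 = rho*exp(-alpha)
    x1 = rho * sp.exp(alpha)
    x2 = rho * sp.exp(-alpha)

    # Coefficients of F*lambda = lam_rho drho + lam_alpha dalpha for lambda = x^1 dx^1 + x^2 dx^2
    lam_rho   = x1 * sp.diff(x1, rho)   + x2 * sp.diff(x2, rho)
    lam_alpha = x1 * sp.diff(x1, alpha) + x2 * sp.diff(x2, alpha)

    print(sp.simplify(rho * lam_rho - (x1**2 + x2**2)))   # 0
    print(sp.simplify(lam_alpha   - (x1**2 - x2**2)))     # 0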
S.V5.3 Forms of higher degree
P V5.3.2 Wedge product in Cartesian and cylindrical coordinates
(a) λ ∧ η = (x^3 dx^1 + x^1 dx^2 + x^2 dx^3) ∧ (x^2 dx^1 + x^3 dx^2 + x^1 dx^3)
          = ((x^3)^2 − x^1 x^2) dx^1∧dx^2 + ((x^1)^2 − x^2 x^3) dx^2∧dx^3 + ((x^2)^2 − x^3 x^1) dx^3∧dx^1.
(b) The vectors ∂_u = x^1 ∂_1 + x^2 ∂_2 + x^3 ∂_3, ∂_v = ∂_1 + ∂_2 + ∂_3 have components (u^1, u^2, u^3)^T = (x^1, x^2, x^3)^T, (v^1, v^2, v^3)^T = (1, 1, 1)^T. Now use (dx^i∧dx^j)(∂_u, ∂_v) = u^i v^j − u^j v^i:
(dx^1∧dx^2)(∂_u, ∂_v) = x^1 − x^2,   (dx^2∧dx^3)(∂_u, ∂_v) = x^2 − x^3,   (dx^3∧dx^1)(∂_u, ∂_v) = x^3 − x^1,
⇒ (λ∧η)(∂_u, ∂_v) = ((x^3)^2 − x^1 x^2)(x^1 − x^2) + ((x^1)^2 − x^2 x^3)(x^2 − x^3) + ((x^2)^2 − x^3 x^1)(x^3 − x^1) = 0.
(c)
Under the coordinate transformation y ↦ x(y) expressing Cartesian through cylindrical coordinates, x^1 = ρ cos φ, x^2 = ρ sin φ, x^3 = z, we have
dx^{i_1} ∧ dx^{i_2} = J^{i_1}_{j_1} J^{i_2}_{j_2} dy^{j_1} ∧ dy^{j_2},   with Jacobian matrix

J^i_j = ∂x^i/∂y^j = ( c   −ρs   0 )
                    ( s    ρc   0 )
                    ( 0    0    1 ),

where we have used the shorthand c = cos φ, s = sin φ, and c^2 + s^2 = 1. Since J^3_ρ = J^3_φ = J^1_z = J^2_z = 0, many contributions drop out, and we are left with
dx^1∧dx^2 = (J^1_ρ J^2_φ − J^1_φ J^2_ρ) dρ∧dφ = ρ(c^2 + s^2) dρ∧dφ = ρ dρ∧dφ,
dx^2∧dx^3 = J^2_φ J^3_z dφ∧dz − J^2_ρ J^3_z dz∧dρ = ρc dφ∧dz − s dz∧dρ,
dx^3∧dx^1 = −J^3_z J^1_φ dφ∧dz + J^3_z J^1_ρ dz∧dρ = ρs dφ∧dz + c dz∧dρ.
Finally, in cylindrical coordinates, we have
x^1 x^2 = ρ^2 cs,   x^2 x^3 = zρs,   x^3 x^1 = zρc,   (x^1)^2 = ρ^2 c^2,   (x^2)^2 = ρ^2 s^2,   (x^3)^2 = z^2,
⇒ λ ∧ η = (z^2 − ρ^2 cs) ρ dρ∧dφ + [(ρ^2 c^2 − zρs) ρc + (ρ^2 s^2 − zρc) ρs] dφ∧dz + [−(ρ^2 c^2 − zρs) s + (ρ^2 s^2 − zρc) c] dz∧dρ   (using (a)).
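Remark: that (λ∧η)(∂_u, ∂_v) vanishes in part (b) can be double-checked with a few lines of SymPy; the snippet below is an illustration, not part of the original solution.

    import sympy as sp

    x1, x2, x3 = sp.symbols('x1 x2 x3')

    lam = [x3, x1, x2]          # lambda = x^3 dx^1 + x^1 dx^2 + x^2 dx^3
    eta = [x2, x3, x1]          # eta    = x^2 dx^1 + x^3 dx^2 + x^1 dx^3
    u   = [x1, x2, x3]          # components of d_u
    v   = [1, 1, 1]             # components of d_v

    # (lambda ^ eta)(u, v) = lambda(u) eta(v) - lambda(v) eta(u)
    val = sum(lam[i] * eta[j] * (u[i] * v[j] - u[j] * v[i])
              for i in range(3) for j in range(3))
    print(sp.simplify(val))     # 0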
P V5.3.4 Mercator projection: spherical area form
(a) Differentiating z^2(θ) = ln[tan(½(π − θ))], we obtain
dz^2/dθ = [1/tan(½(π−θ))] · [1/cos^2(½(π−θ))] · (−½) = −½ · 1/[sin(½(π−θ)) cos(½(π−θ))] = −1/sin(π−θ) = −1/sin θ. ✓
(b) z^2(π/2) = ln[tan(π/4)] = ln(1) = 0,   z^2(0) = ln[tan(π/2)] = ln(∞) = ∞,   z^2(π) = ln[tan(0)] = ln(0) = −∞.
(c) Mercator coordinates map circles of fixed θ (the parallels of latitude, having circumference 2π sin θ) onto lines of fixed z^2, all having the same length, 2π. This involves stretching them by a factor 1/sin θ, which becomes ever larger the closer they lie to the poles. In order to preserve angles, the z^2-coordinate has to be stretched correspondingly, by an amount which increases to infinity as θ approaches the poles.
(d)
sin θ = sin[π − 2 arctan(e^{z^2})] = sin[2 arctan(e^{z^2})] = 2 sin[arctan(e^{z^2})] cos[arctan(e^{z^2})]
      = 2 · (e^{z^2}/√(1 + e^{2z^2})) · (1/√(1 + e^{2z^2})) = 2/(e^{−z^2} + e^{z^2}) = sech(z^2),
cos θ = ±√(1 − sin^2 θ) = ±√(1 − sech^2(z^2)) = ±√[(cosh^2(z^2) − 1)/cosh^2(z^2)] = tanh(z^2).
The signs were chosen such that z^2 ≷ 0 when θ ≶ ½π. Differentiating cos θ = tanh(z^2) w.r.t. θ leads to
−sin θ = sech^2(z^2) dz^2/dθ   ⇒   dz^2/dθ = −sin θ/sech^2(z^2) = −1/sin θ. ✓
(e)
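Remark: the relations dz^2/dθ = −1/sin θ, sin θ = sech(z^2) and cos θ = tanh(z^2) can be confirmed symbolically; the following sketch assumes SymPy and is not part of the original solution.

    import sympy as sp

    th, z2 = sp.symbols('theta z2', real=True)

    # dz^2/dtheta for z^2(theta) = ln tan((pi - theta)/2); expected: -1/sin(theta)
    z2_of_th = sp.log(sp.tan((sp.pi - th) / 2))
    print(sp.simplify(sp.diff(z2_of_th, th) + 1 / sp.sin(th)))                             # expected: 0

    # theta(z^2) = pi - 2*atan(exp(z^2)); expected: sin(theta) = sech(z^2), cos(theta) = tanh(z^2)
    th_of_z2 = sp.pi - 2 * sp.atan(sp.exp(z2))
    print(sp.simplify((sp.expand_trig(sp.sin(th_of_z2)) - sp.sech(z2)).rewrite(sp.exp)))   # expected: 0
    print(sp.simplify((sp.expand_trig(sp.cos(th_of_z2)) - sp.tanh(z2)).rewrite(sp.exp)))   # expected: 0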
For the transformation θ(z) = π − 2 arctan(e^{z^2}), φ(z) = z^1, the Jacobi matrix of the map z ↦ y(z) is

J(z) = ∂y/∂z = ( ∂θ/∂z^1  ∂θ/∂z^2 )   ( 0   −sin θ(z^2) )   ( 0   −sech(z^2) )
               ( ∂φ/∂z^1  ∂φ/∂z^2 ) = ( 1    0          ) = ( 1    0         ),

J^{−1}(z) = ∂z/∂y = ( ∂z^1/∂θ  ∂z^1/∂φ )   ( 0          1 )
                    ( ∂z^2/∂θ  ∂z^2/∂φ ) = ( −1/sin θ   0 ).

Check:  J J^{−1} = ( 0   −sin θ ) ( 0          1 )   ( 1  0 )
                   ( 1    0     ) ( −1/sin θ   0 ) = ( 0  1 ). ✓

The corresponding determinants are det J = sech(z^2), det J^{−1} = 1/sin θ.
(f) The area form transforms as (using (e))
ω = sin θ dθ ∧ dφ = sin(θ(z)) det(∂y/∂z) dz^1 ∧ dz^2 = (sech(z^2))^2 dz^1 ∧ dz^2.
(g) Using the area form in spherical coordinates, we trivially obtain ω(∂_θ, ∂_φ) = sin θ dθ∧dφ (∂_θ, ∂_φ) = sin θ. To use it in Mercator coordinates, we first transform the vectors accordingly (using (e)):
∂_θ = (∂z^i/∂θ) ∂_{z^i} = −(1/sin θ) ∂_{z^2},   ∂_φ = (∂z^i/∂φ) ∂_{z^i} = ∂_{z^1}.
Note the minus sign for ∂_θ: it reflects the fact that increasing θ causes z^2 to decrease. Then (using (e), (f))
ω(∂_θ, ∂_φ) = ω(z) (dz^1∧dz^2)(−(1/sin θ) ∂_{z^2}, ∂_{z^1}) = ω(z)/sin θ = (sech(z^2))^2/sech(z^2) = sech(z^2) = sin θ. ✓
Thus ω(∂_θ, ∂_φ), the area of the surface element spanned by (∂_θ, ∂_φ), goes to zero exponentially as z^2 → ±∞, as expected for θ → 0, π, respectively.
P V5.3.6 Exterior derivative
(a) df = d(x^1 x^2 + x^2 x^3) = x^2 dx^1 + (x^1 + x^3) dx^2 + x^2 dx^3.
(b) dλ = d(x^1 x^2 dx^1 + x^2 x^3 dx^2 + x^1 x^2 dx^3)
       = x^1 dx^2∧dx^1 + x^2 dx^3∧dx^2 + (x^2 dx^1 + x^1 dx^2)∧dx^3
       = −x^1 dx^1∧dx^2 + x^2 dx^1∧dx^3 + (x^1 − x^2) dx^2∧dx^3.
(c) dω = d[x^1 x^2 x^3 (dx^1∧dx^2 + dx^2∧dx^3 + dx^1∧dx^3)]
       = (x^1 x^2 + x^2 x^3 − x^1 x^3) dx^1∧dx^2∧dx^3.
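Remark: the components of dλ can be cross-checked via the antisymmetrized partial derivatives (dλ)_{ij} = ∂_i λ_j − ∂_j λ_i. The SymPy sketch below is illustrative only, not part of the original solution.

    import sympy as sp

    x = sp.symbols('x1:4')                     # x[0], x[1], x[2] stand for x^1, x^2, x^3

    # Coefficients of lambda = x^1 x^2 dx^1 + x^2 x^3 dx^2 + x^1 x^2 dx^3
    lam = [x[0] * x[1], x[1] * x[2], x[0] * x[1]]

    # (d lambda)_{ij} = d_i lam_j - d_j lam_i, compared with the result above
    for i, j in [(0, 1), (0, 2), (1, 2)]:
        print(i + 1, j + 1, sp.simplify(sp.diff(lam[j], x[i]) - sp.diff(lam[i], x[j])))
    # expected: (1,2): -x1,  (1,3): x2,  (2,3): x1 - x2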
P V5.3.8 Pullback of spherical area form to Cartesian coordinates in R^3 (solid-angle form)

(a) The derivatives ∂y^j/∂x^i needed for the pullback (e.g. y*(dy^j) = (∂y^j/∂x^i) dx^i) are all elements of the Jacobi matrix for the transformation x ↦ y(x), defined as
r = √((x^1)^2 + (x^2)^2 + (x^3)^2),   θ = arccos(x^3/r),   φ = arctan(x^2/x^1).
Using the shorthand ρ = √((x^1)^2 + (x^2)^2), the Jacobi matrix is given by (see problem ??)

J = ∂y/∂x = ( ∂r/∂x^1  ∂r/∂x^2  ∂r/∂x^3 )   ( x^1/r            x^2/r            x^3/r   )
            ( ∂θ/∂x^1  ∂θ/∂x^2  ∂θ/∂x^3 ) = ( x^1 x^3/(ρr^2)   x^2 x^3/(ρr^2)   −ρ/r^2  )
            ( ∂φ/∂x^1  ∂φ/∂x^2  ∂φ/∂x^3 )   ( −x^2/ρ^2         x^1/ρ^2          0       ).
The pullback of ω to Cartesian coordinates is defined as
y*ω = y*(sin θ dθ ∧ dφ)
    = sin θ (∂θ/∂x^1 dx^1 + ∂θ/∂x^2 dx^2 + ∂θ/∂x^3 dx^3) ∧ (∂φ/∂x^1 dx^1 + ∂φ/∂x^2 dx^2 + ∂φ/∂x^3 dx^3)
    = sin θ [ (∂θ/∂x^1 ∂φ/∂x^2 − ∂θ/∂x^2 ∂φ/∂x^1) dx^1∧dx^2 + (∂θ/∂x^2 ∂φ/∂x^3 − ∂θ/∂x^3 ∂φ/∂x^2) dx^2∧dx^3 + (∂θ/∂x^3 ∂φ/∂x^1 − ∂θ/∂x^1 ∂φ/∂x^3) dx^3∧dx^1 ]
    = (ρ/r) [ ( (x^1 x^3/(ρr^2)) · (x^1/ρ^2) − (x^2 x^3/(ρr^2)) · (−x^2/ρ^2) ) dx^1∧dx^2
            + ( (x^2 x^3/(ρr^2)) · 0 − (−ρ/r^2) · (x^1/ρ^2) ) dx^2∧dx^3
            + ( (−ρ/r^2) · (−x^2/ρ^2) − (x^1 x^3/(ρr^2)) · 0 ) dx^3∧dx^1 ]
    = (1/r^3) [ x^3 dx^1∧dx^2 + x^1 dx^2∧dx^3 + x^2 dx^3∧dx^1 ]
    = ½ (1/r^3) ε_{ijk} x^i dx^j ∧ dx^k.
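Remark: the solid-angle form obtained above can be cross-checked symbolically. The sketch below (SymPy, illustrative only) recomputes the coefficients of y*ω directly from θ(x) and φ(x) and compares them with x^i/r^3.

    import sympy as sp

    x1, x2, x3 = sp.symbols('x1 x2 x3', positive=True)
    r = sp.sqrt(x1**2 + x2**2 + x3**2)

    th = sp.acos(x3 / r)          # theta(x)
    ph = sp.atan2(x2, x1)         # phi(x); has the same derivatives as arctan(x2/x1)
    sin_th = sp.sqrt(x1**2 + x2**2) / r

    def coeff(xi, xj):
        # coefficient of dx^i ^ dx^j in y*omega = sin(theta) dtheta ^ dphi
        return sp.simplify(sin_th * (sp.diff(th, xi) * sp.diff(ph, xj)
                                     - sp.diff(th, xj) * sp.diff(ph, xi)))

    print(sp.simplify(coeff(x1, x2) - x3 / r**3))   # 0
    print(sp.simplify(coeff(x2, x3) - x1 / r**3))   # 0
    print(sp.simplify(coeff(x3, x1) - x2 / r**3))   # 0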
(b) The pullback of κ = −cos θ dφ is given by
y*κ = −cos θ (∂φ/∂x^1 dx^1 + ∂φ/∂x^2 dx^2 + ∂φ/∂x^3 dx^3) = −(x^3/r) [ (−x^2/ρ^2) dx^1 + (x^1/ρ^2) dx^2 ].
Using ∂(1/r)/∂x^i = −x^i/r^3 and ∂(1/ρ)/∂x^i = −x^i/ρ^3 (i = 1, 2), the exterior derivative of y*κ is
d y*κ = [ ∂_2(x^3 x^2/(rρ^2)) dx^2 + ∂_3(x^3 x^2/(rρ^2)) dx^3 ] ∧ dx^1 + [ ∂_1(−x^3 x^1/(rρ^2)) dx^1 + ∂_3(−x^3 x^1/(rρ^2)) dx^3 ] ∧ dx^2
      = −(x^3/(rρ^2)) [1 − (x^2)^2 (1/r^2 + 2/ρ^2)] dx^1∧dx^2 + (x^2/(rρ^2)) [1 − (x^3)^2/r^2] dx^3∧dx^1
        −(x^3/(rρ^2)) [1 − (x^1)^2 (1/r^2 + 2/ρ^2)] dx^1∧dx^2 + (x^1/(rρ^2)) [1 − (x^3)^2/r^2] dx^2∧dx^3
      = (1/r^3) [ x^3 dx^1∧dx^2 + x^2 dx^3∧dx^1 + x^1 dx^2∧dx^3 ] = ½ (1/r^3) ε_{ijk} x^i dx^j ∧ dx^k. ✓
To obtain the last line we used
(1/ρ^2) [1 − (x^3)^2/r^2] = (1/ρ^2) (r^2 − (x^3)^2)/r^2 = (1/ρ^2) · ρ^2/r^2 = 1/r^2,
−(1/ρ^2) { [1 − (x^1)^2 (1/r^2 + 2/ρ^2)] + [1 − (x^2)^2 (1/r^2 + 2/ρ^2)] } = −(1/ρ^2) [2 − ρ^2/r^2 − 2] = 1/r^2.
P V5.3.10 Pullback of spherical area form from spherical to Mercator coordinates

With φ(z) = z^1 we have y*κ = −cos θ(z) (∂φ/∂z^i) dz^i = −cos θ(z) dz^1. Its exterior derivative is
d y*κ = −∂_{z^j} cos θ(z) dz^j ∧ dz^1 = [∂ cos θ(z)/∂z^2] dz^1∧dz^2 = [∂ tanh(z^2)/∂z^2] dz^1∧dz^2 = sech^2(z^2) dz^1∧dz^2.
Thus the weight function is ω(z) = sech^2(z^2). As expected from d y*κ = y*ω, it agrees with that found in problem ??.
S.V5.4 Integration of forms
P V5.4.2 Mercator coordinates: computing area of sphere
The area of the unit sphere is given by an integral over the entire Mercator coordinate domain:
A_{S^2} = ∫_{S^2} ω = ∫_U y*ω = ∫_U ω(z) dz^1∧dz^2 = ∫_0^{2π} dz^1 ∫_{−∞}^{∞} dz^2 ω(z)
        = 2π ∫_{−∞}^{∞} dz^2 (sech(z^2))^2 = 2π [tanh(z^2)]_{−∞}^{∞} = 4π. ✓
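Remark: the final integral can be checked with SymPy (illustrative sketch, not part of the original solution); since d tanh(z)/dz = sech^2(z), only the boundary values of tanh enter.

    import sympy as sp

    z2 = sp.symbols('z2', real=True)

    # antiderivative check: d/dz tanh(z) = sech(z)^2
    print(sp.simplify((sp.diff(sp.tanh(z2), z2) - sp.sech(z2)**2).rewrite(sp.exp)))   # 0

    # area of the unit sphere from the Mercator weight function
    area = 2 * sp.pi * (sp.limit(sp.tanh(z2), z2, sp.oo) - sp.limit(sp.tanh(z2), z2, -sp.oo))
    print(area)   # 4*pi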
P V5.4.4 Pullback of current form from Cartesian to polar coordinates
(a) We parametrize the cone using the map x: U → C ⊂ R^3,
y = (ρ, φ)^T ↦ x(y) = (x^1, x^2, x^3)^T = (ρ cos φ, ρ sin φ, h(1 − ρ/R))^T,
with U = (0, R) × (0, 2π), and compute the pullback of the current form as follows:
x*j = x*(j_0 dx^1∧dx^2) = j_0 (∂x^1/∂y^i)(∂x^2/∂y^j) dy^i∧dy^j = j_0 (∂x^1/∂ρ ∂x^2/∂φ − ∂x^1/∂φ ∂x^2/∂ρ) dρ∧dφ
    = j_0 [cos φ · ρ cos φ − (−ρ sin φ) sin φ] dρ∧dφ = j_0 ρ dρ∧dφ.
(b) The current through the slanted surface is obtained by integrating the pullback form over the coordinate domain parametrizing it, i.e. the base of the cone:
I_C = ∫_C j = ∫_U x*j = j_0 ∫_U ρ dρ∧dφ = j_0 ∫_0^{2π} dφ ∫_0^R dρ ρ = j_0 · ½R^2 · 2π = j_0 πR^2.
(c) The result is just j_0 times the area of a disk of radius R, corresponding to the projection of the slanted surface of the cone onto the x^1 x^2-plane.
To compute the current as I_C = ∫_C dS·j, with j = j_0 e_3, we need the projection, dS·e_3, of a surface element onto the direction of current flow. It is found as follows:
δS·e_3 = δρ δφ (∂x/∂ρ × ∂x/∂φ)·e_z = δρ δφ (cos φ, sin φ, −h/R)^T × (−ρ sin φ, ρ cos φ, 0)^T · (0, 0, 1)^T = δρ δφ ρ
⇒ I_C = ∫_C dS·j = ∫_0^{2π} dφ ∫_0^R dρ j_0 ρ = j_0 πR^2.
Remark: Note that the height of the cone does not enter the calculation at all. This reflects the fact that the current form is exact, being expressible as j = d(j_0 x^1 dx^2). Hence Stokes's theorem can be used to compute the current as I_C = ∫_M j = ∫_{∂M} j_0 x^1 dx^2, an integral around the boundary of M (see problem ZZ).
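A further illustrative remark (not from the book): the pullback x*j = j_0 ρ dρ∧dφ and the resulting current can be verified symbolically. The sketch below assumes SymPy; note that the height h indeed never appears.

    import sympy as sp

    rho, phi, R, j0 = sp.symbols('rho phi R j_0', positive=True)

    # The (x^1, x^2) components of the cone parametrization; x^3 = h(1 - rho/R) never enters
    x1 = rho * sp.cos(phi)
    x2 = rho * sp.sin(phi)

    # coefficient of drho ^ dphi in x*(j0 dx^1 ^ dx^2)
    pullback = j0 * (sp.diff(x1, rho) * sp.diff(x2, phi) - sp.diff(x1, phi) * sp.diff(x2, rho))
    print(sp.simplify(pullback))                                      # j_0*rho

    # integrate over U = (0, R) x (0, 2*pi)
    print(sp.integrate(pullback, (rho, 0, R), (phi, 0, 2 * sp.pi)))   # pi*R**2*j_0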
S.V6 Riemannian differential geometry
S.V6.1 Definition of the metric on a manifold
P V6.1.2 Standard metric of R^3 in spherical coordinates
Since the standard metric is diagonal, the transformed metric reads as g = (J^T J)_{kl} dy^k ⊗ dy^l. For the transformation y ↦ x(y), expressing Cartesian through spherical coordinates,
x^1 = r cos φ sin θ,   x^2 = r sin φ sin θ,   x^3 = r cos θ,
the Jacobi matrix has the form

J = ∂x/∂y = ( ∂x^1/∂r  ∂x^1/∂θ  ∂x^1/∂φ )   ( cos φ sin θ   r cos φ cos θ   −r sin φ sin θ )
            ( ∂x^2/∂r  ∂x^2/∂θ  ∂x^2/∂φ ) = ( sin φ sin θ   r sin φ cos θ    r cos φ sin θ )
            ( ∂x^3/∂r  ∂x^3/∂θ  ∂x^3/∂φ )   ( cos θ         −r sin θ         0             )

⇒ J^T J = (  cos φ sin θ     sin φ sin θ     cos θ    ) ( cos φ sin θ   r cos φ cos θ   −r sin φ sin θ )   ( 1   0     0            )
          (  r cos φ cos θ   r sin φ cos θ   −r sin θ ) ( sin φ sin θ   r sin φ cos θ    r cos φ sin θ ) = ( 0   r^2   0            )
          ( −r sin φ sin θ   r cos φ sin θ   0        ) ( cos θ         −r sin θ         0             )   ( 0   0     r^2 sin^2 θ  )

⇒ g = dr ⊗ dr + r^2 dθ ⊗ dθ + r^2 sin^2 θ dφ ⊗ dφ.

The metric is diagonal, hence spherical coordinates form an orthogonal coordinate system.
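Remark: J^T J can be recomputed in a few lines of SymPy (illustrative sketch, not part of the original solution).

    import sympy as sp

    r, th, ph = sp.symbols('r theta phi', positive=True)

    # Cartesian coordinates expressed through spherical coordinates
    x = sp.Matrix([r * sp.cos(ph) * sp.sin(th),
                   r * sp.sin(ph) * sp.sin(th),
                   r * sp.cos(th)])

    J = x.jacobian([r, th, ph])
    print(sp.simplify(J.T * J))    # diag(1, r**2, r**2*sin(theta)**2)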
P V6.1.4 Standard metric of unit sphere in Mercator coordinates

The metric transforms as g = g_ij(y) dy^i ⊗ dy^j = g_kl(z) dz^k ⊗ dz^l, with g_kl(z) = g_ij(y(z)) J^i_k J^j_l = (J^T g(y(z)) J)_{kl}. For the map z ↦ y(z), expressing spherical through Mercator coordinates, θ(z) = π − 2 arctan(e^{z^2}), φ(z) = z^1, with sin θ(z) = sech(z^2), the Jacobi matrix has the form (see problem ??)

J(z) = ∂y/∂z = ( ∂θ/∂z^1  ∂θ/∂z^2 )   ( 0   −sech(z^2) )
               ( ∂φ/∂z^1  ∂φ/∂z^2 ) = ( 1    0          )

⇒ J^T g J = ( 0            1 ) ( 1   0            ) ( 0   −sech(z^2) )   ( sech^2(z^2)   0           )
            ( −sech(z^2)   0 ) ( 0   sech^2(z^2) ) ( 1    0          ) = ( 0             sech^2(z^2) ) = sech^2(z^2) · 1,

where the middle factor is the sphere metric g(y(z)) = diag(1, sin^2 θ(z)) = diag(1, sech^2(z^2)),

⇒ g = s(z) (dz^1 ⊗ dz^1 + dz^2 ⊗ dz^2),   s(z) = sech^2(z^2).
In Mercator coordinates, the metric tensor is proportional to 1, hence they form an isotropic, orthogonal coordinate system. Angles are preserved when projected from the sphere to a Mercator map, and distances between nearby points on the latter are proportional to those on the globe, irrespective of their relative orientation. The scale factor, s(z), does depend on position, causing lengths to be stretched with increasing distance from the equator, but the local conversion from Mercator lengths to lengths on the sphere is easily accomplished by division through √s(z) = sech(z^2) = sin θ. Moreover, the scale factor treats the northern and southern hemispheres symmetrically. These features (angles are preserved, local distances are isotropic, north-south symmetry) make Mercator maps useful for navigation purposes.
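Remark: the same check in SymPy for the Mercator metric (illustrative only, not part of the original solution); the sphere metric diag(1, sin^2 θ) is entered with sin θ = sech(z^2) already substituted.

    import sympy as sp

    z2 = sp.symbols('z2', real=True)

    J = sp.Matrix([[0, -sp.sech(z2)],
                   [1,  0]])                      # Jacobian d(theta, phi)/d(z^1, z^2)
    g_sphere = sp.diag(1, sp.sech(z2)**2)         # metric of the unit sphere at theta(z)

    print(sp.simplify(J.T * g_sphere * J))        # sech(z2)**2 times the identity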
S.V6.2 Volume form and Hodge star
P V6.2.2 Hodge duals of all basis forms in spherical coordinates
For a diagonal metric, the Hodge star operation acts on basis forms in R^3 as
∗1 = (1/3!) √|g| ε_{ijk} dy^i∧dy^j∧dy^k = √|g| dy^1∧dy^2∧dy^3,
∗dy^i = (1/2!) √|g| g^{ii'} ε_{i'jk} dy^j∧dy^k = √|g| g^{ii} dy^j∧dy^k,
∗(dy^j∧dy^k) = (1/1!) √|g| g^{jj'} g^{kk'} ε_{j'k'i} dy^i = √|g| g^{jj} g^{kk} dy^i,
∗(dy^i∧dy^j∧dy^k) = (1/0!) √|g| g^{ii'} g^{jj'} g^{kk'} ε_{i'j'k'} = √|g| g^{ii} g^{jj} g^{kk} ε_{ijk} = ε_{ijk}/√|g|,
where on the right i, j, k are understood to be related cyclically and not summed over. For spherical coordinates, y = (r, θ, φ)^T, we have √|g| = r^2 sin θ and inverse metric g^{rr} = 1, g^{θθ} = r^{−2}, g^{φφ} = (r sin θ)^{−2}:
∗1 = √|g| dr∧dθ∧dφ = r^2 sin θ dr∧dθ∧dφ,
∗dr = √|g| g^{rr} dθ∧dφ = r^2 sin θ · 1 · dθ∧dφ = r^2 sin θ dθ∧dφ,
∗dθ = √|g| g^{θθ} dφ∧dr = r^2 sin θ · r^{−2} dφ∧dr = sin θ dφ∧dr,
∗dφ = √|g| g^{φφ} dr∧dθ = r^2 sin θ · (r sin θ)^{−2} dr∧dθ = (sin θ)^{−1} dr∧dθ,
∗(dr∧dθ) = √|g| g^{rr} g^{θθ} dφ = r^2 sin θ · 1 · r^{−2} dφ = sin θ dφ,
∗(dθ∧dφ) = √|g| g^{θθ} g^{φφ} dr = r^2 sin θ · r^{−2} · (r sin θ)^{−2} dr = (r^2 sin θ)^{−1} dr,
∗(dφ∧dr) = √|g| g^{φφ} g^{rr} dθ = r^2 sin θ · (r sin θ)^{−2} · 1 · dθ = (sin θ)^{−1} dθ,
∗(dr∧dθ∧dφ) = 1/√|g| = (r^2 sin θ)^{−1}.
From the above list, we see by inspection that ∗∗ acts as the identity map on all basis forms, e.g.
∗∗1 = ∗(r^2 sin θ dr∧dθ∧dφ) = r^2 sin θ · (r^2 sin θ)^{−1} = 1. ✓
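Remark: that ∗∗ = id on the basis forms amounts to multiplying the corresponding coefficients from the list above; e.g. for dr the two relevant factors multiply to 1. A minimal SymPy sketch (illustrative only):

    import sympy as sp

    r, th = sp.symbols('r theta', positive=True)

    sqrt_g = r**2 * sp.sin(th)                         # sqrt|g| in spherical coordinates
    g_rr, g_thth, g_phph = 1, r**-2, (r * sp.sin(th))**-2

    star_dr       = sqrt_g * g_rr                      # coefficient of dtheta^dphi in *dr
    star_dth_dphi = sqrt_g * g_thth * g_phph           # coefficient of dr in *(dtheta^dphi)

    print(sp.simplify(star_dr * star_dth_dphi))        # 1, i.e. **dr = dr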
S.V7 Differential forms and electrodynamics
S.V7.3 Laws of electrodynamics II: Maxwell equations
P V7.3.2 Inhomogeneous Maxwell equations: form-to-traditional transcription
In Λ(R^3), with metric g_ij = η_ij = −δ_ij, we have (→ ??)
J^{−1} dx^i = −e_i,   J^{−1} ∗(dx^j∧dx^k) = −ε_{jki} e_i,   ∗(dx^i∧dx^j∧dx^k) = −ε_{ijk}.   (1)
For the charge density form we have ρ = −∗ρ_s, and the relations between the forms H, D, j_s and their vector representations H, D, j read:
H = H_i dx^i,             H ≡ H^i e_i ≡ −J^{−1} H,        H^i = −H_l η^{li} = H_i.              (2)
D = ½ D_jk dx^j∧dx^k,     D ≡ D^i e_i ≡ −J^{−1} ∗D,       D^i = ½ D_jk ε_{jki}  (using (1)).    (3)
j_s = ½ j_jk dx^j∧dx^k,   j ≡ j^i e_i ≡ −J^{−1} ∗j_s,     j^i = ½ j_jk ε_{jki}  (using (1)).    (4)
(a) (i)
To convert the three-form equation d_s D = 4πρ_s to a scalar equation, we act on it with the Hodge star:
0 = ∗(d_s D − 4πρ_s) = (−∗ d_s ∗ J)(J^{−1} ∗D) − 4π ∗ρ_s = −∇·D + 4πρ,
since −∗ d_s ∗ J acts as the divergence, J^{−1} ∗D = −D, and −∗ρ_s = ρ.
(ii) Expressed in terms of components, this strategy reads as follows:
0 = ∗(d_s D − 4πρ_s) = ∗(½ ∂_i D_jk − (4π/3!) ρ ε_ijk) dx^i∧dx^j∧dx^k
  = −(½ ∂_i D_jk − (4π/3!) ρ ε_ijk) ε_ijk   (using (1))
  = −∂_i D^i + 4πρ = −∇·D + 4πρ   (using (3)).
(b) (i)
To convert the two-form equation d_s H − (1/c) ∂_t D = (4π/c) j_s to a vector field equation, we act on it with the two-form-to-vector conversion operation J^{−1}∗:
0 = J^{−1} ∗(d_s H − (1/c) ∂_t D − (4π/c) j_s)
  = (J^{−1} ∗ d_s J)(J^{−1} H) − (1/c) ∂_t (J^{−1} ∗D) − (4π/c) (J^{−1} ∗ j_s)
  = −∇×H + (1/c) ∂_t D + (4π/c) j,
since J^{−1} ∗ d_s J acts as the curl, J^{−1} H = −H, J^{−1} ∗D = −D and J^{−1} ∗ j_s = −j.
(ii) 0 = J^{−1} ∗(d_s H − (1/c) ∂_t D − (4π/c) j_s) = J^{−1} ∗(∂_j H_k − (1/c) ∂_t ½ D_jk − (4π/c) ½ j_jk) dx^j∧dx^k
  = −(∂_j H_k − (1/c) ∂_t ½ D_jk − (4π/c) ½ j_jk) ε_jki e_i   (using (1))
  = (−∂_j H^k ε_jki + (1/c) ∂_t D^i + (4π/c) j^i) e_i   (using (2)-(4))
  = −∇×H + (1/c) ∂_t D + (4π/c) j.
S.V7.4 Invariant formulation
P V7.4.2 Dual field-strength tensor

Expanding the dual field-strength tensor in components as G = H_i dx^0∧dx^i + ½ D_jk dx^j∧dx^k ≡ ½ G_µν dx^µ∧dx^ν, we identify G_0i = H_i = H^i and G_jk = D_jk = ε_jki D^i, hence

{G_µν} = (  0     H^1    H^2    H^3 )
         ( −H^1   0      D^3   −D^2 )
         ( −H^2  −D^3    0      D^1 )
         ( −H^3   D^2   −D^1    0   ).
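Remark: the matrix {G_µν} can be assembled programmatically from H and D, which makes the antisymmetry and the identification G_jk = ε_jki D^i easy to check. The snippet below assumes SymPy and is illustrative only.

    import sympy as sp

    H = sp.symbols('H1:4')
    D = sp.symbols('D1:4')

    G = sp.zeros(4, 4)
    for i in range(3):
        G[0, i + 1] = H[i]        # G_{0i} = H^i
        G[i + 1, 0] = -H[i]
    for j in range(3):
        for k in range(3):
            G[j + 1, k + 1] = sum(sp.LeviCivita(j + 1, k + 1, i + 1) * D[i]
                                  for i in range(3))   # G_{jk} = eps_{jki} D^i

    print(G)                       # reproduces the matrix displayed above
    print(sp.simplify(G + G.T))    # zero matrix: G is antisymmetric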
P V7.4.4 Homogeneous Maxwell equations: four-vector notation
(a) 0 = dG − 4πj = (½ ∂_α G_µν − (4π/3!) j_αµν) dx^α∧dx^µ∧dx^ν
      = (1/3!) (∂_α G_µν + ∂_µ G_να + ∂_ν G_αµ − 4π j_αµν) dx^α∧dx^µ∧dx^ν
⇒ 4π j_αµν = ∂_α G_µν + ∂_µ G_να + ∂_ν G_αµ.   (1)
(b) Using G_0i = −G_i0 = H^i, G_jk = ε_ijk D^i, j_0 = j_123, j_0jk = −ε_jki (1/c) j_s^i, we obtain:
α = 0, µ = j, ν = k:
  4π j_0jk = ∂_0 G_jk + ∂_j G_k0 + ∂_k G_0j
  −ε_ijk (4π/c) j_s^i = ε_ijk ∂_0 D^i − ∂_j H^k + ∂_k H^j.
  Contracting with −½ ε_jkl e_l gives   (4π/c) j = ∇×H − (1/c) ∂_t D. ✓
α = 1, µ = 2, ν = 3:
  0 = ∂_1 G_23 + ∂_2 G_31 + ∂_3 G_12 − 4π j_123 = ∂_1 D^1 + ∂_2 D^2 + ∂_3 D^3 − (4π/c) · cρ
  ⇒ 4πρ = ∇·D. ✓
(c) 0 = J^{−1} ∗(dG − 4πj) = J^{−1} ∗(∂_α ½ G_µν − (4π/3!) j_αµν) dx^α∧dx^µ∧dx^ν
      = J^{−1} (∂_α ½ G_µν − (4π/3!) j_αµν) g^{αα'} g^{µµ'} g^{νν'} ε_{α'µ'ν'β'} dx^{β'}
      = (∂_α ½ G_µν − (4π/3!) j_αµν) g^{αα'} g^{µµ'} g^{νν'} ε_{α'µ'ν'β'} g^{β'β} e_β
      = −(∂_α ½ G_µν − (4π/3!) j_αµν) ε^{αµνβ} e_β   (using (1)).
Using ε^{αµνβ} = ε^{αβµν} = −ε^{βαµν}, this can be expressed as
      ∂_α F^{αβ} = 4π j^β,   with   F^{αβ} = −½ ε^{αβµν} G_µν,   j^β = (1/3!) ε^{βαµν} j_αµν.   (2)
For the last step, we used the component representation of the relation F = ∗G,
F = ∗(½ G_µν dx^µ∧dx^ν) = ½ G_{µ'ν'} g^{µ'µ} g^{ν'ν} ε_{µναβ} dx^α∧dx^β, which implies F_αβ = ½ G^{µν} ε_{µναβ}.
Raising the indices of F and lowering those of G using a product of four g's produces a minus sign, hence F^{αβ} = −½ ε^{αβµν} G_µν.
(d) Starting from (2), we obtain, for fixed β:
∂_α ½ ε^{µναβ} G_µν = (4π/3!) j_αµν ε^{µναβ}.
Non-zero contributions arise only if all four indices differ from each other. Hence we obtain the following set of equations, with α, β, µ, ν all different:
4π ½ (j_αµν − j_ανµ) = ∂_α ½ (F_µν − F_νµ) + ∂_µ ½ (F_να − F_αν) + ∂_ν ½ (F_αµ − F_µα),
4π j_αµν = ∂_α F_µν + ∂_µ F_να + ∂_ν F_αµ. ✓