An Introduction to Elements of Multilinear Algebra



An Introduction to Elements of Multilinear Algebra
by ALI R. AMIR-MOEZ

MATHEMATICS SERIES NO. 5
Department of Mathematics
Texas Technological College
Lubbock, Texas

PRINTED BY THE TEXAS TECH PRESS
TEXAS TECHNOLOGICAL COLLEGE
LUBBOCK, TEXAS, U.S.A.

PREFACE

Usually multilinear algebra is a small part of linear algebra aiming at the theory of determinants. Tensor algebra, in many cases, is presented as a tool for its applications. Here we would like to study multilinear algebra, tensors and exterior algebra as a single subject.

Chapters one and two are reviews of linear spaces and linear transformations. Then we discuss dual spaces in chapter three and bilinear forms in chapter four. Chapter five treats basic ideas of multilinear forms. In chapter six exterior algebra is studied. Finally, in chapter seven the theory of determinants and basic ideas of permanents are discussed. Indeed, the existence of so many open questions in the theory of permanents suggests the need for a book of this sort.

Table of Contents

Preface

1. A Review of Vector Spaces
   1.1 Vector spaces
   1.2 Subspaces
   1.3 Theorem
   1.4 Linear dependence
   1.5 Theorem
   1.6 Bases
   1.7 Theorem
   1.8 Dimension
   1.9 Algebra of subspaces
   1.10 Theorem
   1.11 Unitary spaces

2. Linear Transformations
   2.1 Linear transformations
   2.2 Algebra of linear transformations
   2.3 Corollary
   2.4 Inverses
   2.5 Theorem
   2.6 Matrices
   2.7 Theorem
   2.8 Identity and zero matrices
   2.9 Transforms of vectors
   2.10 Change of bases
   2.11 Linear transformations on unitary spaces

3. Dual Spaces
   3.1 Linear functionals
   3.2 Theorem (dual spaces)
   3.3 A change of notation
   3.4 Theorem
   3.5 Theorem (dual bases)
   3.6 Theorem
   3.7 Corollary
   3.8 Dual space of V'
   3.9 Theorem
   3.10 Reflexivity of a vector space
   3.11 Annihilators
   3.12 Theorem
   3.13 Corollary

4. Direct Sums and Bilinear Forms
   4.1 Internal direct sums
   4.2 Theorem
   4.3 External direct sums
   4.4 Special subspaces of a direct sum
   4.5 Dimension of a direct sum
   4.6 Dual of an internal direct sum
   4.7 Theorem
   4.8 Corollary
   4.9 Bilinear forms
   4.10 The space of bilinear forms
   4.11 Theorem
   4.12 A basis in the space of bilinear forms
   4.13 Tensor product of two vector spaces
   4.14 A basis for the product space

5. Multilinear Forms
   5.1 Direct sums of vector spaces
   5.2 Multilinear forms
   5.3 Bases for spaces of multilinear forms
   5.4 Tensor products of vector spaces
   5.5 An inductive definition
   5.6 Symmetric groups
   5.7 Signum of a permutation
   5.8 Multilinear forms on a vector space
   5.9 Theorem
   5.10 Alternating forms
   5.11 Theorem
   5.12 Spaces of alternating forms

6. Tensors and Exterior Algebra
   6.1 Summation convention
   6.2 Change of bases in new notations
   6.3 Contravariant and covariant vectors
   6.4 Change of basis in the dual space
   6.5 Tensors
   6.6 Components of a tensor
   6.7 A glance at the algebra of tensors
   6.8 Transformation formulas
   6.9 Contraction
   6.10 A summary of tensor algebra
   6.11 Symmetric and skew-symmetric tensors
   6.12 Constructing skew-symmetric tensors
   6.13 Kronecker tensors
   6.14 Metric tensors
   6.15 Associated tensors
   6.16 Exterior algebra
   6.17 A basis for the exterior algebra

7. Tensor Products of Linear Transformations
   7.1 Dual of a linear transformation
   7.2 Properties of adjoint transformations
   7.3 Adjoint of A'
   7.4 The matrix of A' with respect to the dual basis
   7.5 The tensor products of two transformations
   7.6 Some properties of tensor products of transformations
   7.7 Kronecker products of matrices
   7.8 Tensor products of linear transformations
   7.9 The determinant of a linear transformation
   7.10 Some properties of determinants
   7.11 Non-singular transformations
   7.12 Determinant of a matrix
   7.13 Theorem
   7.14 Permanent of a matrix
   7.15 Induced inner products
   7.16 The completely symmetric operator
   7.17 Theorem
   7.18 A geometric interpretation of permanents

Bibliography

1. A Review of Vector Spaces

1.1 Vector spaces: A vector space $V$ over a field $F$ is a set whose elements are called vectors, and $V$ satisfies:

(i) $V$ is a commutative group under a binary operation called vector addition. The identity of this group is called the zero vector $0$. We shall denote vectors by Greek letters, and if $\alpha, \beta \in V$, then the sum of these vectors will be denoted by $\alpha + \beta$. Elements of $F$ will be denoted by small letters and will be called scalars.

(ii) For $\alpha \in V$ and $a \in F$, a vector $a\alpha$ is defined and $a\alpha \in V$. We call $a\alpha$ a scalar multiple of $\alpha$.

(iii) For $a, b \in F$ and $\alpha \in V$ it follows that $a(b\alpha) = (ab)\alpha$.

(iv) For $\alpha \in V$ we have $1\alpha = \alpha$, where $1$ is the identity of $F$.

(v) Let $a, b \in F$ and $\alpha, \beta \in V$. Then $a(\alpha + \beta) = a\alpha + a\beta$ and $(a + b)\alpha = a\alpha + b\alpha$.

1.2 Subspaces: A non-empty subset $S$ of a vector space $V$ over a field $F$ is called a subspace of $V$ if $S$ is a vector space over $F$ under the same vector addition and scalar multiplication as the ones in $V$. The reader may show that $S$ is a subspace of $V$ if and only if $a, b \in F$ and $\alpha, \beta \in S$ imply $a\alpha + b\beta \in S$.

1.3 Theorem: The intersection of two subspaces of a vector space is a subspace. The union of two subspaces is not necessarily a subspace. The proof is left as an exercise.

In general, one can prove that the intersection of any set of subspaces of a vector space is a subspace.

1.4 Linear dependence: Let $V$ be a vector space over a field $F$. Let $a_1, \ldots, a_k \in F$ and $\alpha_1, \ldots, \alpha_k \in V$. Then
$$\sum_{j=1}^{k} a_j \alpha_j$$
is called a linear combination of $\alpha_1, \ldots, \alpha_k$.

A set $\{\alpha_1, \ldots, \alpha_k\}$ of vectors is called linearly dependent if there exists a set of scalars, not all zero, such that
$$\sum_{j=1}^{k} a_j \alpha_j = 0.$$
If $\sum_{j=1}^{k} a_j \alpha_j = 0$ implies $a_j = 0$, $j = 1, \ldots, k$, then the set $\{\alpha_1, \ldots, \alpha_k\}$ is called linearly independent. Note that we defined linear dependence and independence only for finite sets of vectors. We shall not discuss other cases.
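As an illustrative aside (not part of the original text), linear dependence over $F = \mathbb{R}$ can be tested numerically: a set is dependent exactly when the matrix whose rows are the vectors has rank less than the number of vectors. A minimal sketch with NumPy, where the sample vectors are our own choice:

```python
import numpy as np

# Rows are the candidate vectors; the set is linearly dependent
# exactly when the rank falls short of the number of vectors.
vectors = np.array([
    [1.0, 2.0, 3.0],
    [2.0, 4.0, 6.0],   # twice the first row, so dependence is expected
    [0.0, 1.0, 1.0],
])

rank = np.linalg.matrix_rank(vectors)
print("dependent" if rank < len(vectors) else "independent")  # prints: dependent
```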

1.5 Theorem: The set of non-zero vectors $\{\alpha_1, \ldots, \alpha_n\}$ is linearly dependent if and only if some vector $\alpha_k$ is a linear combination of the other vectors of the set.

Proof: Let $\{\alpha_1, \ldots, \alpha_n\}$ be linearly dependent. Then there exists a set of scalars $\{a_1, \ldots, a_n\}$, not all zero, such that
$$a_1 \alpha_1 + \cdots + a_n \alpha_n = 0.$$
Let, for example, $a_k \neq 0$. Then
$$\alpha_k = -\frac{a_1}{a_k}\,\alpha_1 - \cdots - \frac{a_{k-1}}{a_k}\,\alpha_{k-1} - \frac{a_{k+1}}{a_k}\,\alpha_{k+1} - \cdots - \frac{a_n}{a_k}\,\alpha_n.$$
On the other hand, let
$$\alpha_k = b_1 \alpha_1 + \cdots + b_{k-1} \alpha_{k-1} + b_{k+1} \alpha_{k+1} + \cdots + b_n \alpha_n.$$
Then
$$b_1 \alpha_1 + \cdots + (-1)\alpha_k + \cdots + b_n \alpha_n = 0,$$
for which $(-1) \neq 0$. Thus the set $\{\alpha_1, \ldots, \alpha_n\}$ is linearly dependent.

1.6 Bases: A set $\chi$ of vectors of a vector space $V$ over a field $F$ is called a basis if $\chi$ is linearly independent and every element of $V$ is a linear combination of elements of $\chi = \{\alpha_1, \ldots, \alpha_n\}$. We shall only consider vector spaces with finite bases. In this case $V$ is called a finite-dimensional vector space.

One can easily show that every element of $V$ is uniquely represented as a linear combination of elements of $\chi$. This is done as follows. Let $\xi \in V$ and
$$\xi = x_1 \alpha_1 + \cdots + x_n \alpha_n,$$
and also
$$\xi = y_1 \alpha_1 + \cdots + y_n \alpha_n.$$
This implies that
$$(x_1 - y_1)\alpha_1 + \cdots + (x_n - y_n)\alpha_n = 0.$$
Since $\{\alpha_1, \ldots, \alpha_n\}$ is linearly independent, we must have
$$x_i - y_i = 0, \qquad i = 1, \ldots, n.$$
Thus $x_i = y_i$, $i = 1, \ldots, n$.
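To make the uniqueness of coordinates concrete (an illustrative sketch of ours, not in the original): with a basis of $\mathbb{R}^3$ chosen by us as the columns of a matrix, the coordinates of $\xi$ are the unique solution of a linear system.

```python
import numpy as np

# Columns of B are the basis vectors alpha_1, alpha_2, alpha_3 (our choice).
B = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
xi = np.array([2.0, 3.0, 1.0])

# The coordinates x solve B @ x = xi; the solution is unique because
# the columns of B are linearly independent.
x = np.linalg.solve(B, xi)
print(x, np.allclose(B @ x, xi))
```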

1.7 Theorem: Let $\{\alpha_1, \ldots, \alpha_k\}$ be a set of vectors in a vector space $V$. Let $\delta_1, \ldots, \delta_{k+1}$ be vectors such that each $\delta_i$ is a linear combination of $\alpha_1, \ldots, \alpha_k$, $i = 1, \ldots, k+1$. Then $\{\delta_1, \ldots, \delta_{k+1}\}$ is linearly dependent.

Proof: If $\delta_1 = 0$, then the set $\{\delta_j\}$ is linearly dependent. Suppose $\delta_1 \neq 0$. Then we may write
$$\delta_1 = a_{11}\alpha_1 + \cdots + a_{1k}\alpha_k,$$
where $a_{1i}$ is different from zero for some $1 \le i \le k$. Thus $\alpha_i$ is a linear combination of $\delta_1$ and the remaining $\alpha$'s. This implies that any linear combination of $\alpha_1, \ldots, \alpha_k$ is also a linear combination of $\delta_1, \alpha_1, \ldots, \alpha_{i-1}, \alpha_{i+1}, \ldots, \alpha_k$. In particular
$$\delta_2 = b_{21}\delta_1 + a_{21}\alpha_1 + \cdots + a_{2,i-1}\alpha_{i-1} + a_{2,i+1}\alpha_{i+1} + \cdots + a_{2k}\alpha_k.$$
If $a_{2j} = 0$, $j = 1, \ldots, i-1, i+1, \ldots, k$, then $\delta_2 = b_{21}\delta_1$, and the set $\{\delta_1, \ldots, \delta_{k+1}\}$ is linearly dependent. Let some $a_{2j} \neq 0$. This implies that $\alpha_j$ is a linear combination of $\delta_1, \delta_2$ and the $k-2$ vectors remaining of the original set $\{\alpha_j\}$. Again any vector which is a linear combination of $\alpha_1, \ldots, \alpha_k$ is a linear combination of the vectors $\delta_1, \delta_2$ and the $k-2$ vectors remaining of the set $\{\alpha_j\}$.

Continuing in this manner, eliminating $\alpha$'s and adding $\delta$'s one at a time, at each stage $m$ either every $a_{mn}$ is zero, and $\delta_m$ is a linear combination of $\delta_1, \ldots, \delta_{m-1}$, or some $a_{mn} \neq 0$, and $\alpha_n$ is a linear combination of $\delta_1, \ldots, \delta_m$ and $k-m$ remaining vectors of the original set $\{\alpha_1, \ldots, \alpha_k\}$. Thus either for some $m \le k$
$$\delta_m = c_1\delta_1 + \cdots + c_{m-1}\delta_{m-1},$$
or else $\delta_{k+1}$ is a linear combination of $\delta_1, \ldots, \delta_k$. Thus $\{\delta_1, \ldots, \delta_{k+1}\}$ is linearly dependent.

1.8 Dimension: Let $V$ be a finite-dimensional vector space over a field $F$. Then the number of elements in a basis for $V$ is the same as the number of elements in any other basis. This number is called the dimension of the space.

Proof: Let $\{\alpha_1, \ldots, \alpha_n\}$ be a basis for $V$ with $n$ elements and $\{\beta_1, \ldots, \beta_m\}$ be another basis with $m$ elements. Since $\{\alpha_1, \ldots, \alpha_n\}$ is a basis, every $\beta_i$, $i = 1, \ldots, m$, is a linear combination of $\alpha_1, \ldots, \alpha_n$. If $m > n$, then by 1.7 it follows that $\{\beta_1, \ldots, \beta_m\}$ is linearly dependent, which contradicts the fact that $\{\beta_1, \ldots, \beta_m\}$ is a basis. Thus $m \le n$. But since $\beta_1, \ldots, \beta_m$ is a basis, every $\alpha_i$, $i = 1, \ldots, n$, is a linear combination of $\beta_1, \ldots, \beta_m$. If $n > m$, then by 1.7 we conclude that $\{\alpha_1, \ldots, \alpha_n\}$ is linearly dependent, which contradicts the fact that $\{\alpha_1, \ldots, \alpha_n\}$ is a basis. Thus $m = n$.

A great amount of the algebra of vectors and vector spaces is independent of choices of bases. Thus many properties of vector spaces are studied without reference to a basis.

1.9 Algebra of subspaces: In 1.3 we have shown that the union of two subspaces is not necessarily a subspace. Here we would like to define addition for subspaces. Let $S$ and $T$ be two subspaces of a vector space $V$ over a field $F$. Then we define
$$S + T = \{\xi + \eta \mid \xi \in S \text{ and } \eta \in T\}.$$
Indeed, we have to show that this definition is well-defined, i.e., that $S + T$ is a subspace of $V$. We shall leave the proof as an exercise. One may note that $S \cap T$ does not have to be the zero vector for the definition of addition.

If $S \cap T = 0$ and $S + T = V$, then we say $S$ is a complement of $T$ in $V$. This implies that $T$ is also a complement of $S$ in $V$. One may also show that a given subspace $S$ does not have a unique complement. We leave the proof as an exercise.

1.10 Theorem: Let $S$ and $T$ be subspaces of a finite-dimensional space $V$. We shall use the notation $\dim S$ for the dimension of $S$. Let $\dim V = n$. Then

(i) $\dim S \le n$, $\dim T \le n$;

(ii) $\dim S + \dim T = \dim(S + T) + \dim(S \cap T)$.

The proof is left as an exercise.
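A numerical illustration of (ii) (our own sketch, with subspaces represented as column spaces of matrices we chose): $\dim(S+T)$ is the rank of the two matrices placed side by side, and $\dim(S \cap T)$ then follows from the formula.

```python
import numpy as np

# Subspaces of R^4 as column spaces (the generating vectors are our choice).
S = np.array([[1, 0], [0, 1], [0, 0], [0, 0]], dtype=float)
T = np.array([[0, 0], [1, 0], [0, 1], [0, 0]], dtype=float)

dim_S = np.linalg.matrix_rank(S)
dim_T = np.linalg.matrix_rank(T)
dim_sum = np.linalg.matrix_rank(np.hstack([S, T]))  # dim (S + T)

# By 1.10(ii): dim (S ∩ T) = dim S + dim T - dim (S + T)
print(dim_S + dim_T - dim_sum)  # prints 1: the intersection is a line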

1.11 Unitary spaces: Let $V$ be a vector space over the field $F$ of real or complex numbers. Then an inner product is, respectively, a real or complex valued function of ordered pairs of vectors $\xi, \eta \in V$, denoted by $(\xi, \eta)$.

4.7 Theorem: Let $U$ and $V$ be subspaces of $W$, and let $U^\circ$ and $V^\circ$ be respectively the annihilators of $U$ and $V$ [see 3.11]; $U^\circ$ and $V^\circ$ are subspaces of $W'$. If $W = U \oplus V$, then $W' = U^\circ \oplus V^\circ$.

Proof: First we show that $U^\circ \cap V^\circ = \{0'\}$. Let $\zeta' \in U^\circ \cap V^\circ$. Any $\zeta \in W$ can be written as $\zeta = \xi + \eta$, where $\xi \in U$ and $\eta \in V$. Then $\zeta'(\zeta) = \zeta'(\xi) + \zeta'(\eta) = 0$, since $\zeta'$ annihilates both $U$ and $V$. Thus $\zeta' = 0'$.

Now let $\zeta' \in W'$. We define a function $\xi^\circ$ by $\xi^\circ(\zeta) = \zeta'(\eta)$ and a function $\eta^\circ$ by $\eta^\circ(\zeta) = \zeta'(\xi)$, where $\zeta = \xi + \eta$ with $\xi \in U$ and $\eta \in V$. We shall show that $\xi^\circ$ is a linear functional on $W$. Let $\zeta_1, \zeta_2 \in W$ be such that $\zeta_1 = \xi_1 + \eta_1$ and $\zeta_2 = \xi_2 + \eta_2$, where $\xi_1, \xi_2 \in U$ and $\eta_1, \eta_2 \in V$, and let $a, b \in F$. Then $a\zeta_1 + b\zeta_2 = (a\xi_1 + b\xi_2) + (a\eta_1 + b\eta_2)$, so that
$$\xi^\circ(a\zeta_1 + b\zeta_2) = \zeta'(a\eta_1 + b\eta_2) = a\,\xi^\circ(\zeta_1) + b\,\xi^\circ(\zeta_2).$$
Similarly $\eta^\circ$ is a linear functional on $W$. For $\xi \in U$ we have $\xi = \xi + 0$, and thus $\xi^\circ(\xi) = \zeta'(0) = 0$; hence $\xi^\circ \in U^\circ$. Similarly $\eta^\circ \in V^\circ$. It is clear that for any $\zeta = \xi + \eta$ it follows that
$$\zeta'(\zeta) = \zeta'(\xi) + \zeta'(\eta) = \eta^\circ(\zeta) + \xi^\circ(\zeta),$$
so that $\zeta' = \xi^\circ + \eta^\circ$. Thus $W' = U^\circ + V^\circ$, and since $U^\circ \cap V^\circ = \{0'\}$, we have $W' = U^\circ \oplus V^\circ$.

6.5 Tensors: We shall define $V^2 = V^1 \otimes V^1$, and for a positive integer $r > 2$ we define $V^r = V^{r-1} \otimes V^1$. Now we let $V_1$ be the dual space of $V$, i.e., $V_1 = V'$. We define $V_2 = V_1 \otimes V_1$, and for a positive integer $s > 2$ we define $V_s = V_{s-1} \otimes V_1$. Finally we define $V^r_s = V^r \otimes V_s$, where $r$ and $s$ are positive integers.

An element of $V^r$ is called a contravariant tensor of order $r$, and an element of $V_s$ is called a covariant tensor of order $s$. Finally an element of $V^r_s$ is called a tensor of type $(r,s)$. Indeed, one can think of other possibilities, but they will not be of much interest.

6.6 Components of a tensor: Let $\{\alpha_1, \ldots, \alpha_n\}$ be a basis in $V^1 = V$ and let $\{\beta^1, \ldots, \beta^n\}$ be the dual basis in $V_1 = V'$. This induces bases in the vector spaces of tensor products of 6.5. We shall study these bases more carefully.

Let $\{\alpha_{i_1 \ldots i_r}\}$ be the induced basis in $V^r$. Then through the natural isomorphism one can identify this set with $\{\alpha_{i_1} \otimes \cdots \otimes \alpha_{i_r}\}$. Let $\tau \in V^r$. Then
$$\tau = t^{i_1 \ldots i_r}\, \alpha_{i_1} \otimes \cdots \otimes \alpha_{i_r},$$
and, of course, this sum is unique. Each $t^{i_1 \ldots i_r}$ is called a component of $\tau$ with respect to $\{\alpha_1, \ldots, \alpha_n\}$.

Let $\{\beta^{j_1 \ldots j_s}\}$ be the induced basis in $V_s$. Then this basis can be identified with $\{\beta^{j_1} \otimes \cdots \otimes \beta^{j_s}\}$. Thus each $\tau \in V_s$ is uniquely defined by
$$\tau = t_{j_1 \ldots j_s}\, \beta^{j_1} \otimes \cdots \otimes \beta^{j_s}.$$
Again each $t_{j_1 \ldots j_s}$ is called a component of $\tau$ with respect to $\{\alpha_1, \ldots, \alpha_n\}$.

More generally, let $\{\alpha^{j_1 \ldots j_s}_{i_1 \ldots i_r}\}$ be the induced basis in $V^r_s$. This set can be identified with $\{\alpha_{i_1} \otimes \cdots \otimes \alpha_{i_r} \otimes \beta^{j_1} \otimes \cdots \otimes \beta^{j_s}\}$ [see 5.5]. Then every $\tau \in V^r_s$ is uniquely determined by
$$\tau = t^{i_1 \ldots i_r}_{j_1 \ldots j_s}\, \alpha_{i_1} \otimes \cdots \otimes \alpha_{i_r} \otimes \beta^{j_1} \otimes \cdots \otimes \beta^{j_s}.$$
Each $t^{i_1 \ldots i_r}_{j_1 \ldots j_s}$ is called a component of $\tau$ with respect to $\{\alpha_1, \ldots, \alpha_n\}$.
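As an aside of ours (not the author's), with $V = \mathbb{R}^2$ and the standard basis, the components of a tensor in $V^1_1$ can be stored as an $n \times n$ array, and the tensor rebuilt as a sum of tensor products of basis elements; a sketch:

```python
import numpy as np

n = 2
alpha = np.eye(n)   # basis alpha_i as rows
beta = np.eye(n)    # dual basis beta^j as rows (standard basis is self-dual)

t = np.array([[1.0, 2.0],
              [3.0, 4.0]])  # components t^i_j, chosen arbitrarily

# Rebuild tau = t^i_j alpha_i (x) beta^j as an n x n array.
tau = np.einsum('ij,ia,jb->ab', t, alpha, beta)
print(np.allclose(tau, t))  # True: the components w.r.t. this basis are t^i_j
```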

6.7 A glance at the algebra of tensors: Indeed, a scalar multiple of a tensor can always be defined. Two tensors can be added if and only if they are of the same type; tensors of different types belong to different vector spaces and cannot be added. We leave the justification as an exercise.

The multiplication of any two tensors is always possible. Let $\rho$ be a tensor of type $(r,s)$ and $\sigma$ be a tensor of type $(p,q)$. Then it is clear that $\rho \otimes \sigma$ is a tensor of type $(r+p, s+q)$. Let $\rho$ have components $t^{i_1 \ldots i_r}_{j_1 \ldots j_s}$ and $\sigma$ have components $k^{i_{r+1} \ldots i_{r+p}}_{j_{s+1} \ldots j_{s+q}}$. Then $\rho \otimes \sigma$ has components
$$m^{i_1 \ldots i_{r+p}}_{j_1 \ldots j_{s+q}} = t^{i_1 \ldots i_r}_{j_1 \ldots j_s}\, k^{i_{r+1} \ldots i_{r+p}}_{j_{s+1} \ldots j_{s+q}}.$$
The reader may fill in the details.

Let {cx 1, ... •"n•} be a basis in

i

and (ci,) be the change of basis to {cx 1 , •···•"n' }.

the induced change of basis in v 1 is (d~ ) 1

[see 6.4].

Indeed,

This change

of basis induces changes of bases in vector spaces of tensors and thus provides formulas for transformation for components of each tensor.

We shall study these formulas as follows.

From the basis {cx 1 , •.• ,cxn} in V we obtain the basis

~ in V~ [see 6.6].

_Ijl .•• j~ - t i l ... i.:J Indeed, the basis {cx 1 , ... ,cxno} will introduce

another basis

57

~· r

in Vs.

i

r

Thus (ci,) will introduce the change of basis in Vs accord-

ing to

(1)

Now let

T

E

V~.

Then

T

Applying (1) to this equality we obtain

Since

linearly independent we obtain

(2)

Similarly, we can obtain

(3)

The equalities of (2) and (3) are called transformation formulas corresponding to the change of basis from !a 1 , ••• ,an} to !a 1 ,, .• • ,an' } •

58

Thus a set of scalars

s=~l···~rl, fh· ··Jsj

where each index runs over

one through n is the set of components of a tensor of the type (r,s) if and only if a change of basis in V induces the formulas (2) and (3) of transformations to this set. 6.9
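A numerical sketch of (2) for a type $(1,1)$ tensor (our illustration; the basis-change matrix is an arbitrary invertible choice, with $d = c^{-1}$ as in 6.4):

```python
import numpy as np

c = np.array([[2.0, 1.0],
              [1.0, 1.0]])      # change of basis (c^i_{i'}), invertible
d = np.linalg.inv(c)            # induced change in the dual space, d = c^{-1}

t = np.array([[1.0, 0.0],
              [5.0, 3.0]])      # components t^i_j in the old basis

# Formula (2) for type (1,1): t'^{i'}_{j'} = d^{i'}_i c^j_{j'} t^i_j
t_new = np.einsum('ai,jb,ij->ab', d, c, t)
print(np.allclose(t_new, d @ t @ c))  # True: the familiar similarity transform
```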

6.9 Contraction: Consider the product
$$\tau = \xi_1 \otimes \cdots \otimes \xi_r \otimes \eta^1 \otimes \cdots \otimes \eta^s,$$
where the $\xi$'s are contravariant and the $\eta$'s are covariant vectors. Let $p$ and $q$ be integers such that $1 \le p \le r$ and $1 \le q \le s$. Then we can obtain a tensor as
$$\sigma = \eta^q(\xi_p)\; \xi_1 \otimes \cdots \otimes \xi_{p-1} \otimes \xi_{p+1} \otimes \cdots \otimes \xi_r \otimes \eta^1 \otimes \cdots \otimes \eta^{q-1} \otimes \eta^{q+1} \otimes \cdots \otimes \eta^s,$$
which is of type $(r-1, s-1)$. This operation is called a contraction. Now let $t^{i_1 \ldots i_r}_{j_1 \ldots j_s}$ be a component of $\tau$. Then the operation of contraction amounts to setting $i_p = j_q$; indeed, in this case each component of $\tau$ becomes a sum on $i_p$. Thus we get a new set of scalars which are components of a new tensor of type $(r-1, s-1)$. This indeed must be justified by the standards of 6.8. Before we supply the justification we would like to point out that one has to take care of a few logical matters. Usually a tensor which is represented as a tensor product of vectors is called decomposable, and one has to worry about how a tensor can be expressed as a linear combination of such tensors. We shall not discuss the idea further.
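In components (an aside of ours), contraction is a trace over one upper and one lower index; with NumPy's einsum, for a type $(1,2)$ tensor contracted on $i_1 = j_1$:

```python
import numpy as np

t = np.random.default_rng(0).normal(size=(3, 3, 3))  # components t^i_{jk}, type (1,2)

# Contract the upper index with the first lower index: s_k = t^i_{ik}
s = np.einsum('iik->k', t)
print(s.shape)  # (3,): a type (0,1) tensor, as expected
```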

Now let $t^{i_1 \ldots i_r}_{j_1 \ldots j_s}$ be a component of a tensor $\tau$ with respect to the basis $\{\alpha_1, \ldots, \alpha_n\}$. Let $i_p = j_q$ in every component of $\tau$. Then we get a set of scalars of the form
$$s^{i_1 \ldots i_{p-1}\, i_{p+1} \ldots i_r}_{j_1 \ldots j_{q-1}\, j_{q+1} \ldots j_s} = t^{i_1 \ldots i_{p-1}\, h\, i_{p+1} \ldots i_r}_{j_1 \ldots j_{q-1}\, h\, j_{q+1} \ldots j_s}.$$
Note that the right hand side is a sum on $h$ [see 6.1]. Now we show that a change of basis in $V$ induces the tensor law of transformation on the new set. Let the change of basis in $V$ be given by $(c^i_{i'})$, with $(d^{i'}_i)$ the induced change in the dual space. By (2) of 6.8 we have

(1) $t^{i'_1 \ldots i'_r}_{j'_1 \ldots j'_s} = d^{i'_1}_{i_1} \cdots d^{i'_r}_{i_r}\, c^{j_1}_{j'_1} \cdots c^{j_s}_{j'_s}\, t^{i_1 \ldots i_r}_{j_1 \ldots j_s}$.

We let $i'_p = j'_q$ and sum. Then the factor $d^{i'_p}_{i_p} c^{j_q}_{j'_q}$, summed on $i'_p = j'_q$, gives $\delta^{j_q}_{i_p}$, since $(d^{i'}_i)$ is the inverse of $(c^i_{i'})$. Thus (1) will become
$$s^{i'_1 \ldots i'_{p-1}\, i'_{p+1} \ldots i'_r}_{j'_1 \ldots j'_{q-1}\, j'_{q+1} \ldots j'_s} = d^{i'_1}_{i_1} \cdots d^{i'_{p-1}}_{i_{p-1}}\, d^{i'_{p+1}}_{i_{p+1}} \cdots d^{i'_r}_{i_r}\, c^{j_1}_{j'_1} \cdots c^{j_{q-1}}_{j'_{q-1}}\, c^{j_{q+1}}_{j'_{q+1}} \cdots c^{j_s}_{j'_s}\, s^{i_1 \ldots i_{p-1}\, i_{p+1} \ldots i_r}_{j_1 \ldots j_{q-1}\, j_{q+1} \ldots j_s},$$
which is the transformation law for a tensor of type $(r-1, s-1)$. This completes the proof of the proposition.

6.10 A summary of tensor algebra: As we have seen so far, there are four fundamental operations in tensor algebra:

(i) addition of tensors of the same type;

(ii) multiplication of a tensor by a scalar;

(iii) tensor product of any two tensors; for example, the product of a tensor of type $(r,s)$ by a tensor of type $(p,q)$ will be a tensor of type $(r+p, s+q)$;

(iv) contraction of a tensor of type $(r,s)$ to a tensor of type $(r-1, s-1)$.

6.11 Symmetric and skew-symmetric tensors: We have already discussed symmetric and skew-symmetric multilinear forms on $V$ [see 5.9]. Sometimes partial symmetry is also discussed. Let $\tau \in V^r$. Indeed, there are different representations for $\tau$; one is that $\tau$ is an $r$-linear form on $V_1$. Thus we can consider $\tau(\xi_1, \ldots, \xi_r)$, where $\xi_1, \ldots, \xi_r \in V_1$. We can define a partial symmetry with respect to two arguments. We say $\tau$ is symmetric with respect to $\xi_p$ and $\xi_q$ if
$$\tau(\ldots, \xi_p, \ldots, \xi_q, \ldots) = \tau(\ldots, \xi_q, \ldots, \xi_p, \ldots).$$
Similarly, we say $\tau$ is skew-symmetric with respect to $\xi_p$ and $\xi_q$ if
$$\tau(\ldots, \xi_p, \ldots, \xi_q, \ldots) = -\tau(\ldots, \xi_q, \ldots, \xi_p, \ldots).$$
Indeed, the ideas of partial symmetry and partial skew-symmetry can be generalized. Here we only discuss complete symmetry, or simply symmetry, and skew-symmetry. Since this has been discussed in 5.9, 5.10, and 5.11, we shall not repeat it here.

The ideas of symmetry and skew-symmetry can be defined both for contravariant and covariant tensors. They can also be modified so as to be defined for tensors of type $(r,s)$. We shall leave these as exercises.

The ideas of symmetry and skew-symmetry induce the corresponding permutations on the indices of the components of a tensor. As an example, let $t^{i_1 \ldots i_r}$ be a component of $\tau \in V^r$, and let $p$ be a permutation on $(1, \ldots, r)$. Then:

(i) $\tau$ is symmetric if and only if $t^{i_1 \ldots i_r} = t^{i_{p(1)} \ldots i_{p(r)}}$;

(ii) $\tau$ is skew-symmetric if and only if $t^{i_1 \ldots i_r} = (\operatorname{sgn} p)\, t^{i_{p(1)} \ldots i_{p(r)}}$.

The proof is left as an exercise. The reader may also discuss symmetry and skew-symmetry for the components of a tensor of any type.
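A small computational check of (ii) (our sketch, not the author's): testing a component array for skew-symmetry under all permutations of its indices, and building a fully skew array by signed averaging over permutations.

```python
import numpy as np
from itertools import permutations

def sgn(p):
    # Parity of a permutation given as a tuple: each cycle of even
    # length contributes a factor of -1.
    s, seen = 1, set()
    for i in range(len(p)):
        j, length = i, 0
        while j not in seen:
            seen.add(j)
            j = p[j]
            length += 1
        s *= -1 if length > 0 and length % 2 == 0 else 1
    return s

def is_skew(t):
    r = t.ndim
    return all(np.allclose(np.transpose(t, p), sgn(p) * t)
               for p in permutations(range(r)))

a = np.random.default_rng(1).normal(size=(3, 3, 3))
t = sum(sgn(p) * np.transpose(a, p) for p in permutations(range(3)))
print(is_skew(t))  # True: full antisymmetrization yields a skew-symmetric array
```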

6.12 Constructing skew-symmetric tensors: First we introduce a set of symbols, which is called the e-system. Let $p$ be a permutation on $(1, 2, \ldots, r)$, and let $(i_1, \ldots, i_r)$ be an ordered $r$-tuple of integers with $1 \le i_j \le r$, $j = 1, \ldots, r$. Then either $(i_1, \ldots, i_r) = p(1, 2, \ldots, r)$ or not. Now we define $\epsilon(i_1, \ldots, i_r)$ as follows:
$$\epsilon(i_1, \ldots, i_r) = \begin{cases} \operatorname{sgn} p, & \text{if } (i_1, \ldots, i_r) = p(1, \ldots, r); \\ 0, & \text{otherwise.} \end{cases}$$
It is customary to write $\epsilon(i_1, \ldots, i_r) = \epsilon^{i_1 \ldots i_r}$, or $\epsilon_{i_1 \ldots i_r}$, when summations require so.

Now let $\tau$ be a tensor of type $(r,0)$. Then we define the skew part of $\tau$, denoted by $\operatorname{sk}\tau$, by
$$(\operatorname{sk}\tau)(\eta^1, \ldots, \eta^r) = \epsilon(i_1, \ldots, i_r)\, \tau(\eta^{i_1}, \ldots, \eta^{i_r}),$$
where the $\eta$'s are arbitrary covariant vectors. Contravariant tensors obtained this way are called multivectors.

Similarly, let $\tau$ be a tensor of type $(0,r)$. Then we define
$$(\operatorname{sk}\tau)(\lambda_1, \ldots, \lambda_r) = \epsilon(i_1, \ldots, i_r)\, \tau(\lambda_{i_1}, \ldots, \lambda_{i_r}),$$
where $\lambda_1, \ldots, \lambda_r$ are arbitrary contravariant vectors. Indeed, tensors obtained in this manner are covariant.
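The e-system for $r = 3$, and the skew construction applied to a component array, as a sketch in code (our illustration):

```python
import numpy as np
from itertools import permutations

def eps(r):
    # The e-system: eps[i1,...,ir] = sgn p if (i1,...,ir) is the
    # permutation p of (0,...,r-1), and 0 otherwise.
    e = np.zeros((r,) * r)
    for p in permutations(range(r)):
        inversions = sum(p[i] > p[j] for i in range(r) for j in range(i + 1, r))
        e[p] = (-1) ** inversions
    return e

e3 = eps(3)
print(e3[0, 1, 2], e3[1, 0, 2], e3[0, 0, 2])  # 1.0 -1.0 0.0

# Skewing a type (3,0) component array: the sum of signed index permutations.
t = np.random.default_rng(2).normal(size=(3, 3, 3))
sk_t = sum(e3[p] * np.transpose(t, p) for p in permutations(range(3)))
```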

6.13 Kronecker tensors: Indeed, one is familiar with the symbol $\delta^i_j$. One can prove that $\{\delta^i_j\}$, $i, j = 1, \ldots, n$, is the set of components of a tensor. Suppose the $\delta^i_j$ are components of $\delta$ with respect to $\{\alpha_1, \ldots, \alpha_n\}$. Let $(c^i_{i'})$ be the change of basis [see 6.2]. By 6.8 it follows that
$$\delta^{i'}_{j'} = d^{i'}_i\, c^j_{j'}\, \delta^i_j = d^{i'}_i\, c^i_{j'},$$
which is again the Kronecker symbol, since $(d^{i'}_i)$ is the inverse of $(c^i_{i'})$. This shows that, with respect to any basis, $\delta$ has the same components.

Now we shall define
$$\delta^{i_1 \ldots i_k}_{j_1 \ldots j_k} = \epsilon(i_1, \ldots, i_k)\, \epsilon(j_1, \ldots, j_k),$$
where the $\epsilon$'s are the same as in 6.12. One can easily show that $\{\delta^{i_1 \ldots i_k}_{j_1 \ldots j_k}\}$ is the set of components of a tensor. We shall leave the proof as an exercise. This tensor is called the generalized Kronecker $\delta$.

6.14 Metric tensors: Let $\gamma$ be a symmetric bilinear form on $V^1$ for which $\gamma(\xi, \eta) = 0$ for all $\eta$ implies $\xi = 0$. Indeed, $\gamma$ is a tensor of type $(0,2)$, or $\gamma \in V_2$. Then $\gamma$ is called a metric tensor for $V^1$.

To study $\gamma$ we shall study its components with respect to a basis in $V^1$. Let $\{\alpha_1, \ldots, \alpha_n\}$ be a basis in $V^1$. Let $g_{ij}$, $x^i$, $y^i$ be respectively the components of $\gamma$, $\xi$, and $\eta$ with respect to this basis. Then $\gamma(\xi, \eta) = 0$ in terms of components will be
$$g_{ij}\, x^i y^j = 0.$$
This equality is a bilinear form which can be written as

(1) $\begin{pmatrix} x^1 & \cdots & x^n \end{pmatrix} (g_{ij}) \begin{pmatrix} y^1 \\ \vdots \\ y^n \end{pmatrix} = 0$,

where the matrix $(g_{ij})$ is symmetric. According to the definition of $\gamma$, if the equality (1) holds for every $\eta$, then $\xi$ must be zero. This implies that the matrix is non-singular. Sometimes one may say that $\gamma$ is a non-singular tensor of type $(0,2)$, and define $\gamma^{-1}$ by its components $g^{ij}$ with respect to $\{\alpha_1, \ldots, \alpha_n\}$. Thus
$$g^{ih}\, g_{hj} = \delta^i_j.$$
Note that $\gamma^{-1}$ is a metric tensor on $V_1$. We may consider $\gamma$ as a linear transformation of $V^1$ into $V_1$. Since this transformation is non-singular, $\gamma$ establishes an isomorphism between $V^1$ and $V_1$. One may also consider $\gamma(\xi, \eta)$ as an inner product of $\xi$ and $\eta$. Indeed, when $(g_{ij})$ is the identity matrix, one will have the usual inner product in a Euclidean real space.
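A concrete metric tensor (our sketch): a symmetric non-singular $(g_{ij})$ defines an inner product $g_{ij} x^i y^j$, and its inverse matrix gives the components $g^{ij}$:

```python
import numpy as np

g = np.array([[2.0, 1.0],
              [1.0, 2.0]])          # symmetric, non-singular: a metric tensor
g_inv = np.linalg.inv(g)            # components g^{ij} of the inverse metric

x = np.array([1.0, 0.0])
y = np.array([0.0, 1.0])
print(np.einsum('ij,i,j->', g, x, y))      # gamma(xi, eta) = g_ij x^i y^j
print(np.allclose(g_inv @ g, np.eye(2)))   # g^{ih} g_{hj} = delta^i_j
```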

6.15 Associated tensors: Let $\gamma$ be the metric tensor defined in 6.14. Let $\{\alpha_1, \ldots, \alpha_n\}$ be the basis in $V^1$ and let $\{g_{ij}\}$ be the set of components of $\gamma$. Let $G$ be the linear transformation of $V^1$ onto $V_1$ defined by $\gamma$, as was done in 6.14. Let $\xi \in V^1$ and $\eta' \in V_1$ be such that $G\xi = \eta'$. Suppose $\xi = x^i \alpha_i$ and $\eta' = y_i \beta^i$. Then $G\xi = \eta'$ in terms of components will be
$$y_i = g_{ij}\, x^j.$$
On the other hand, $\gamma^{-1}$ has components $g^{ij}$. Thus we can write
$$x^i = g^{ij}\, y_j.$$

Now let us look at the problem in a more general form. Let $t^{i_1 \ldots i_r}_{j_1 \ldots j_s}$ be the components of a tensor $\tau \in V^r_s$. Then
$$g_{h i_p}\, t^{i_1 \ldots i_p \ldots i_r}_{j_1 \ldots j_s}$$
is the set of components of a tensor in $V^{r-1}_{s+1}$. This operation is called lowering an index. Similarly, let $\tau \in V^r_s$. Then
$$g^{h j_q}\, t^{i_1 \ldots i_r}_{j_1 \ldots j_q \ldots j_s}$$
gives a tensor in $V^{r+1}_{s-1}$. This operation is called raising an index.

One may lower or raise indices a finite number of times for a tensor $\tau$. The set of tensors obtained in this manner is called the set of tensors associated with $\tau$. We can show that the metric tensor and its inverse establish an isomorphism between any two spaces of associated tensors. We leave the proof as an exercise.
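Raising and lowering an index in components (our sketch), for a type $(1,1)$ tensor with the metric of the previous example:

```python
import numpy as np

g = np.array([[2.0, 1.0],
              [1.0, 2.0]])
g_inv = np.linalg.inv(g)

t = np.array([[1.0, 2.0],
              [0.0, 1.0]])                     # components t^i_j

t_low = np.einsum('hi,ij->hj', g, t)           # lower the upper index: t_{hj}
t_back = np.einsum('ih,hj->ij', g_inv, t_low)  # raise it again
print(np.allclose(t_back, t))  # True: g and g^{-1} are inverse operations
```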

6.16 Exterior algebra: Let $A_j$, $j = 2, \ldots, n$, be the space of skew-symmetric covariant tensors of order $j$, i.e., tensors of type $(0,j)$. Let the vector space of real numbers be $A_0$, and let $V' = A_1$. Then we define
$$W = A_0 \oplus A_1 \oplus \cdots \oplus A_n.$$
It is clear that
$$\dim A_j = \binom{n}{j}, \qquad j = 2, \ldots, n$$
[see 5.12], and $\dim A_1 = n = \binom{n}{1}$. Thus
$$\dim W = \sum_{j=0}^{n} \binom{n}{j} = 2^n.$$

We shall define a multiplication $\wedge$ on $W$ according to:

(i) Let $\tau, \rho_1, \rho_2 \in W$. Then $\tau \wedge (\rho_1 + \rho_2) = \tau \wedge \rho_1 + \tau \wedge \rho_2$.

(ii) Let $\tau_1, \tau_2, \rho \in W$. Then $(\tau_1 + \tau_2) \wedge \rho = \tau_1 \wedge \rho + \tau_2 \wedge \rho$.

(iii) If $\tau \in A_r$ and $\sigma \in A_s$, then we define $\tau \wedge \sigma = \operatorname{sk}(\tau \otimes \sigma)$ [see 6.12].

Indeed, there are a few discrepancies of logic that one has to clear up by natural isomorphisms. We shall leave them to the reader.

We observe that if $\tau \in A_p$ and $\sigma \in A_q$, then

(1) $\tau \wedge \sigma = (-1)^{pq}\, \sigma \wedge \tau$.

The proof is quite easy and will be left as an exercise.

6.17 A basis for the exterior algebra: Let $\{\alpha_1, \ldots, \alpha_n\}$ be a basis in $V^1$. Indeed, this induces a basis in $A_k$, $k = 1, \ldots, n$. In 5.12 we have studied the space of alternating $k$-linear forms on $V^1$; here the same idea applies to each $A_k$. Since the field of real numbers is considered, elements of $A_k$ are alternating $k$-linear forms on $V^1$. This implies that a basis for $A_k$ is
$$\{\alpha^{p_1} \wedge \cdots \wedge \alpha^{p_k}\}, \qquad p_1 < \cdots < p_k.$$
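Counting this basis (an aside of ours): the increasing $k$-tuples $p_1 < \cdots < p_k$ number $\binom{n}{k}$, confirming $\dim A_k$ and $\dim W = 2^n$:

```python
from itertools import combinations
from math import comb

n = 4
for k in range(n + 1):
    basis = list(combinations(range(1, n + 1), k))  # increasing index tuples
    assert len(basis) == comb(n, k)

total = sum(comb(n, k) for k in range(n + 1))
print(total == 2 ** n)  # True: dim W = 2^n
```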

To prove (v) we observe that
$$[(A_1 + A_2) \otimes B](\xi \otimes \eta) = (A_1 + A_2)\xi \otimes B\eta = (A_1\xi \otimes B\eta) + (A_2\xi \otimes B\eta) = [(A_1 \otimes B) + (A_2 \otimes B)](\xi \otimes \eta).$$
Therefore
$$(A_1 + A_2) \otimes B = (A_1 \otimes B) + (A_2 \otimes B).$$

Now to prove (vi) we observe that
$$[(A_1 \otimes B_1)(A_2 \otimes B_2)](\xi \otimes \eta) = (A_1 \otimes B_1)(A_2\xi \otimes B_2\eta) = A_1A_2\xi \otimes B_1B_2\eta = [(A_1A_2) \otimes (B_1B_2)](\xi \otimes \eta).$$
This proves (vi). Now if we let $A_1 = A$, $B_1 = I$, $A_2 = I$, $B_2 = B$, we obtain
$$A \otimes B = (A \otimes I)(I \otimes B).$$
We can also obtain
$$A \otimes B = (I \otimes B)(A \otimes I).$$

Finally, to prove (vii) let $A^{-1}$ and $B^{-1}$ exist. Then by (vi) we can write
$$(A^{-1} \otimes B^{-1})(A \otimes B) = (A^{-1}A) \otimes (B^{-1}B) = I \otimes I = I.$$
Therefore $A^{-1} \otimes B^{-1} = (A \otimes B)^{-1}$ [see 2.4].

Now let $(A \otimes B)^{-1}$ exist. This implies that $A \neq 0$ and $B \neq 0$; otherwise by (i) we would have $A \otimes B = 0$. Suppose that $A^{-1}$ does not exist. Since the space is finite-dimensional, there is a $\xi \neq 0$ with $A\xi = 0$, and since $B \neq 0$ there is an $\eta$ such that $B\eta \neq 0$. Then

(1) $A\xi \otimes B\eta = (A \otimes B)(\xi \otimes \eta) = 0$,

while $\xi \otimes \eta \neq 0$; this contradicts the existence of $(A \otimes B)^{-1}$. Thus $A^{-1}$ exists, and similarly $B^{-1}$ exists.
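For matrices, these properties can be checked with the Kronecker product (our sketch; np.kron realizes $A \otimes B$ in the induced basis):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.normal(size=(2, 2))
B = rng.normal(size=(3, 3))

lhs = np.linalg.inv(np.kron(A, B))                 # (A ⊗ B)^{-1}
rhs = np.kron(np.linalg.inv(A), np.linalg.inv(B))  # A^{-1} ⊗ B^{-1}
print(np.allclose(lhs, rhs))                       # True, illustrating (vii)
```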

Thus