LONDON MATHEMATICAL SOCIETY LECTURE NOTE SERIES Editor:

PROFESSOR G.C. SHEPHARD, University of East Anglia

This series publishes the records of lectures and seminars on advanced topics in mathematics held at universities throughout the world. For the most part, these are at postgraduate level either presenting new material or describing older material in a new way. Exceptionally, topics at the undergraduate level may be published if the treatment is sufficiently original. Prospective authors should contact the editor in the first instance.

Already published in this series

1. General cohomology theory and K-theory, PETER HILTON.
2. Numerical ranges of operators on normed spaces and of elements of normed algebras, F.F. BONSALL and J. DUNCAN.
3. Convex polytopes and the upper bound conjecture, P. McMULLEN and G.C. SHEPHARD.
4. Algebraic topology: A student's guide, J.F. ADAMS.
5. Commutative algebra, J.T. KNIGHT.
6. Finite groups of automorphisms, NORMAN BIGGS.
7. Introduction to combinatory logic, J.R. HINDLEY, B. LERCHER and J.P. SELDIN.
8. Integration and harmonic analysis on compact groups, R.E. EDWARDS.
9. Elliptic functions and elliptic curves, PATRICK DU VAL.
10. Numerical ranges II, F.F. BONSALL and J. DUNCAN.
11. New developments in topology, G. SEGAL (ed.).
12. Symposium on complex analysis, Canterbury, 1973, J. CLUNIE and W.K. HAYMAN (eds).
13. Combinatorics, Proceedings of the British combinatorial conference 1973, T.P. McDONOUGH and V.C. MAVRON (eds).
14. Analytic theory of abelian varieties, H.P.F. SWINNERTON-DYER.
15. An introduction to topological groups, P.J. HIGGINS.
16. Topics in finite groups, TERENCE M. GAGEN.
17. Differentiable germs and catastrophes, THEODOR BRÖCKER and L. LANDER.
18. A geometric approach to homology theory, S. BUONCRISTIANO, C.P. ROURKE and B.J. SANDERSON.
19. Graph theory, coding theory and block designs, P.J. CAMERON and J.H. VAN LINT.
20. Sheaf theory, B.R. TENNISON.
21. Automatic continuity of linear operators, ALLAN M. SINCLAIR.
22. Presentations of groups, D.L. JOHNSON.
23. Parallelisms of complete designs, PETER J. CAMERON.
24. The topology of Stiefel manifolds, I.M. JAMES.
25. Lie groups and compact groups, J.F. PRICE.
26. Transformation groups: Proceedings of the conference in the University of Newcastle upon Tyne, August 1976, CZES KOSNIOWSKI (ed.).

London Mathematical Society Lecture Note Series. 27

Skew Field Constructions

P.M. COHN
Bedford College, University of London

CAMBRIDGE UNIVERSITY PRESS
CAMBRIDGE · LONDON · NEW YORK · MELBOURNE

Published by the Syndics of the Cambridge University Press
The Pitt Building, Trumpington Street, Cambridge CB2 1RP
Bentley House, 200 Euston Road, London NW1 2DB
32 East 57th Street, New York, NY 10022, USA
296 Beaconsfield Parade, Middle Park, Melbourne 3206, Australia

© Cambridge University Press 1977

First published 1977

Printed in Great Britain at the University Press, Cambridge

Library of Congress Cataloguing in Publication Data

Cohn, Paul Moritz. Skew field constructions.

[...]

... λ: R → R_S. Given any S-inverting homomorphism f: R → R', we define f': R_S → R' by mapping a^λ to af (a ∈ R) and s' to (sf)^{-1}, which exists in R' by hypothesis. Any relation in R_S must be a consequence of relations in R and relations expressing that s' is the inverse of s^λ. All these relations still hold in R', so f' is well-defined and it is clearly a homomorphism. It is unique because its values on R^λ are prescribed, as well as on (S^λ)^{-1}, by the uniqueness of inverses. •

The ring R_S constructed here is called the universal S-inverting ring for the pair R, S. We have in fact a functor from pairs (R,S) consisting of a ring R and a subset S of R (with morphisms f: (R,S) → (R',S') the homomorphisms from R to R' which map S into S') to the category of rings and homomorphisms. All this is easily checked, but it provides no information about the structure of R_S. In particular we shall be interested in a normal form for the elements of R_S and an indication of the size of the kernel of λ, and here we shall need to make some simplifying assumptions.

Let us look at the commutative case first. To get a convenient expression for the elements of R_S we shall take S to be multiplicative, i.e. 1 ∈ S and a, b ∈ S ⇒ ab ∈ S. Then every element of R_S can be written as a quotient a/s, where a ∈ R, s ∈ S, and a/s = a'/s' if and only if as't = a'st for some t ∈ S. This is not exactly what one understands by a normal form, but it is sufficiently explicit to allow us to determine the kernel of λ, viz.

(1)   ker λ = {a ∈ R | at = 0 for some t ∈ S}.
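As a concrete illustration of the fraction calculus a/s = a'/s' and of formula (1), here is a minimal computational sketch (not from the text) for a commutative example; the choice R = Z/24Z, the multiplicative set S generated by 2, and the helper names are assumptions made only for this illustration.

```python
# A minimal sketch, not from the text: the commutative localization R_S for
# R = Z/24Z and S = {powers of 2 mod 24}, using the rule
#   a/s = a'/s'  iff  a*s'*t = a'*s*t for some t in S,
# and the kernel formula (1): ker(lambda) = {a in R : a*t = 0 for some t in S}.
N = 24
R = range(N)
S = sorted({pow(2, k, N) for k in range(1, 10)} | {1})   # multiplicative set, 1 in S

def equal_fractions(a, s, a2, s2):
    """a/s = a2/s2 in R_S iff a*s2*t == a2*s*t (mod N) for some t in S."""
    return any((a * s2 * t) % N == (a2 * s * t) % N for t in S)

# kernel of the canonical map lambda: R -> R_S, by formula (1)
ker = [a for a in R if any((a * t) % N == 0 for t in S)]
print("ker lambda =", ker)

# a maps to a/1; it goes to zero exactly when a lies in ker lambda
assert all(equal_fractions(a, 1, 0, 1) == (a in ker) for a in R)
```

Running it shows ker λ consisting of the multiples of 3, as (1) predicts, since 24 = 2³·3.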

Ore's idea consists in asking under what circumstances the elements of R_S have this form, when commutativity is not assumed. We must be able to express s^{-1}a (for a ∈ R, s ∈ S) as a_1/s_1, where a_1 ∈ R, s_1 ∈ S, and multiplying up, we find as_1 = sa_1. More precisely, we have as_1/1 = sa_1/1, whence as_1t = sa_1t for some t ∈ S. This is the well known Ore condition and it leads to the following result:

Theorem 1.2.2. Let R be a ring and S a subset such that

D.1   S is multiplicative,
D.2   for any a ∈ R, s ∈ S, sR ∩ aS ≠ ∅,
D.3   for any a ∈ R, s ∈ S, sa = 0 ⇒ at = 0 for some t ∈ S.

Then the universal S-inverting ring R_S may be constructed as follows: on R × S define the relation

(2)   (a,s) ~ (a',s')   whenever au = a'u', su = s'u' ∈ S for some u, u' ∈ R.

This is an equivalence on R × S and the quotient R × S/~ is R_S. In particular, the elements of R_S may be written as fractions a/s = as^{-1} and ker λ is given by (1).

The proof is a lengthy but straightforward verification, which may be left to the reader. It can be simplified a little by observing that the assertion may be treated as a result on semigroups; once the 'universal S-inverting semigroup' R_S has been constructed by the method of this theorem, it is easy to extend the ring structure to R_S.

A subset S of a ring R satisfying D.1-3 is called a right denominator set. When R is commutative, D.2-3 are automatic and may be omitted (in D.2 we may take sa = as ∈ sR ∩ aS, and in D.3, t = s). If R is entire, D.3 may be omitted, and if moreover S = R*, then D.2 reads aR ∩ bR ≠ 0 for a, b ≠ 0. This was the case actually treated by Ore [31] and R is then called a right Ore domain. Apparently these results were found independently by E. Noether, but not published. There have been many papers dealing with generalizations, e.g. Asano [49]; for a survey see Cohn [71'].

It is important to observe that the field of fractions of a right Ore domain is essentially unique. Let us first note that the construction is functorial. Thus, given a map between pairs f: (R,S) → (R',S'), i.e. a homomorphism f: R → R' such that Sf ⊆ S', we have the diagram shown, and by universality of R_S there is a unique map f_1: R_S → R'_{S'} such that the resulting square commutes. In particular, if f is an isomorphism, so is f_1.

[Diagram: horizontal maps λ: R → R_S and λ': R' → R'_{S'}, vertical maps f: R → R' and f_1: R_S → R'_{S'}, forming a commutative square.]

So far R, R' have been quite general; suppose now that R is a right Ore domain and K is any field of fractions of R, thus we have an embedding f: R → K. If S = R*, we have a homomorphism f_1: R_S → K, which we claim is injective. For if as^{-1} ∈ ker f_1, then 0 = (as^{-1})f_1 = (af)(sf)^{-1}, hence af = 0, and so a = 0, because f is injective. It follows that f_1 is an embedding; the image is a field containing R and hence equal to K, because K was a field of fractions. Thus f_1 is an isomorphism and we have proved

Proposition 1.2.3. The field of fractions of a right Ore domain is unique up to isomorphism. •

The result is of particular interest because it ceases to hold for more general rings; we shall soon meet rings which have several non-isomorphic fields of fractions.

1.3 Skew polynomial rings

Given a commutative field k, there are four important constructions involving an indeterminate that we can perform. We can form polynomials, rational functions, formal power series and formal Laurent series. These constructions are all well known in the commutative case, and the relations between the four rings so obtained may be summed up in the following commutative diagram:

      k(x)  ────────>  k((x))
       ^                 ^
       |                 |
      k[x]  ────────>  k[[x]]

Each can be generalized by taking the field k to be skew, and taking the indeterminate x to be central, but that is not the most general (nor the most useful) choice.

Starting from an entire ring A, let us ask for a ring R whose elements can all be uniquely expressed as polynomials

(1)   f = a_0 + xa_1 + ... + x^n a_n   (a_i ∈ A).

As usual we write deg f = n if a_n ≠ 0 in (1). The additive group of R is just a direct sum of copies of A (by the uniqueness of (1)). To multiply two elements, say f given by (1) and g = Σ x^j b_j, we have, by distributivity, fg = Σ x^i (a_i x^j) b_j, and so it will only be necessary to prescribe a_i x^j. To ensure that R is again entire, let us assume that

(2)   deg fg = deg f + deg g.

Then in particular, ax for any a ∈ A has degree at most 1, so

(3)   ax = xa^α + a^δ,

where a ↦ a^α, a ↦ a^δ are mappings of A into itself. This is already enough to fix the multiplication in R, for now we can work out ax^r by induction on r:

      ax^r = (xa^α + a^δ)x^{r-1} = [x²a^{α²} + x(a^{αδ} + a^{δα}) + a^{δ²}]x^{r-2} = ...

We derive some consequences from (3): (a + b)x = x(a + b)^α + (a + b)^δ and ax + bx = x(a^α + b^α) + a^δ + b^δ, hence

(4)   (a + b)^α = a^α + b^α,   (a + b)^δ = a^δ + b^δ;

further (ab)x = x(ab)^α + (ab)^δ and a(bx) = a(xb^α + b^δ) = xa^α b^α + a^δ b^α + ab^δ, so

(5)   (ab)^α = a^α b^α,   (ab)^δ = a^δ b^α + ab^δ.

Further, 1x = x1, therefore

(6)   1^α = 1,   1^δ = 0,

and since ax has degree 1 for a ≠ 0,

(7)   a^α = 0  ⇒  a = 0.

From (4)-(7) we see that α is an injective endomorphism of A and δ is an α-derivation of A, i.e. a mapping such that

(8)   (a + b)^δ = a^δ + b^δ,   (ab)^δ = a^δ b^α + ab^δ.

We note that (8) entails 1^δ = 0, by putting a = b = 1 in the second equation. Conversely, let A be an entire ring with an injective endomorphism α and an α-derivation δ. Then the set of all expressions (1) can be made into a ring by defining addition componentwise and multiplication by the commutation rule (3). The resulting ring R is again entire (because the degree on it satisfies (2)). It is called the skew polynomial ring in x over A (associated with α, δ) and is denoted by A[x;α,δ]. When δ = 0 we also write A[x;α] instead of A[x;α,0]; if moreover α = 1, we obtain the polynomial ring in a central indeterminate over A, also written A[x].
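To see the commutation rule (3) and the product formula fg = Σ x^i(a_i x^j)b_j at work, here is a minimal computational sketch (not from the text). It multiplies skew polynomials with coefficients in k(t) represented as sympy expressions; the helper names, the use of sympy and the two sample choices of α and δ are assumptions made for the illustration, and α, δ are taken to satisfy (5) and (8).

```python
# A minimal sketch, not from the text: multiplication in a skew polynomial
# ring A[x; alpha, delta] via the commutation rule (3):  a*x = x*a^alpha + a^delta.
# An element a_0 + x*a_1 + ... + x^n*a_n is stored as the list [a_0, ..., a_n],
# with coefficients (sympy expressions in t) written on the right, as in the text.
import sympy as sp

t = sp.symbols('t')

def push_right(a, j, alpha, delta):
    """Rewrite a*x^j in the form sum_k x^k c_k; returns {k: c_k}."""
    a = sp.S(a)
    if j == 0:
        return {0: a}
    head = push_right(alpha(a), j - 1, alpha, delta)   # from the term x*a^alpha
    tail = push_right(delta(a), j - 1, alpha, delta)   # from the term a^delta
    out = {k + 1: c for k, c in head.items()}
    for k, c in tail.items():
        out[k] = sp.expand(out.get(k, 0) + c)
    return out

def skew_mul(f, g, alpha, delta):
    """Product of f = sum x^i a_i and g = sum x^j b_j in A[x; alpha, delta]."""
    prod = {}
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            for k, c in push_right(a, j, alpha, delta).items():
                prod[i + k] = sp.expand(prod.get(i + k, 0) + c * sp.S(b))
    return [sp.simplify(prod.get(k, 0)) for k in range(max(prod) + 1)]

# alpha = identity, delta = d/dt:  a*x = x*a + a'  on k[t]
iden, ddt = (lambda a: a), (lambda a: sp.diff(a, t))
print(skew_mul([t, 1], [-t, 1], iden, ddt))        # (t + x)(-t + x) = (1 - t^2) + x^2

# alpha: f(t) |-> f(t^2), delta = 0
sq, zero = (lambda a: a.subs(t, t**2)), (lambda a: sp.S(0))
print(skew_mul([0, t], [0, t], sq, zero))          # (x*t)(x*t) = x^2 * t^3
```

The second example, with α: f(t) ↦ f(t²), is the kind of non-surjective endomorphism used below to embed free algebras in fields.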

In matrix notation the commutation rule (3) can be written

      a (x  1) = (x  1) ( a^α  0 )
                        ( a^δ  a )

and the conditions (4)-(6) may be summed up by saying that the mapping of A into the matrix ring A_2 defined by

(9)   a ↦ ( a^α  0 )
           ( a^δ  a )

is a ring homomorphism. More precisely, it is a homomorphism into the ring of lower 2 × 2 triangular matrices over A, which in a suggestive notation may be written

      ( A  0 )
      ( A  A ).
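That the map (9) is multiplicative is precisely the content of (5); as a worked step (in LaTeX, not from the text), the product of the images of a and b is the image of ab:

```latex
% Using (5): (ab)^alpha = a^alpha b^alpha and (ab)^delta = a^delta b^alpha + a b^delta.
\begin{pmatrix} a^{\alpha} & 0 \\ a^{\delta} & a \end{pmatrix}
\begin{pmatrix} b^{\alpha} & 0 \\ b^{\delta} & b \end{pmatrix}
=
\begin{pmatrix} a^{\alpha} b^{\alpha} & 0 \\ a^{\delta} b^{\alpha} + a b^{\delta} & ab \end{pmatrix}
=
\begin{pmatrix} (ab)^{\alpha} & 0 \\ (ab)^{\delta} & ab \end{pmatrix},
% while additivity of the map is (4).
```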

Suppose now that A is a right Ore domain with field of fractions K; then any injective endomorphism α of A extends to a unique endomorphism of K, again denoted by α. An α-derivation δ defines a homomorphism (9) which by functoriality extends to a homomorphism

      K → ( K  0 ),   say   u ↦ ( u^α  0 ).
          ( K  K )               ( u'   u )

Clearly u ↦ u' is an α-derivation on K extending δ, and we shall write u^δ instead of u'. Thus we have shown (using (9)) that any α-derivation of a right Ore domain extends to a unique α-derivation of the field of fractions. This remark will be useful later.

Let A be any ring with an endomorphism α; then for each c ∈ A the mapping

      δ_c : a ↦ ac − ca^α

is easily seen to be an α-derivation; it is called the inner α-derivation induced by c. A derivation which is not inner is called outer. The construction of the skew polynomial ring shows that any α-derivation δ on A may be made inner by going over to A[x;α,δ] (which can of course be defined even if A has zero-divisors and α fails to be injective). In this ring δ is inner, induced by x; moreover, if δ was already inner on A, say δ = δ_c, then on writing y = x − c we have A[x;α,δ] = A[y;α], as is easily checked.
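The verification that δ_c above is indeed an α-derivation is a one-line check against (8); as a worked step (in LaTeX, not from the text):

```latex
% delta_c(a) = ac - c a^alpha.  Check the product rule (8):
(ab)^{\delta_c} = abc - c\,(ab)^{\alpha} = abc - c\,a^{\alpha} b^{\alpha}
 = (ac - c a^{\alpha})\,b^{\alpha} + a\,(bc - c b^{\alpha})
 = a^{\delta_c} b^{\alpha} + a\, b^{\delta_c},
% since the cross terms a c b^alpha cancel; additivity is immediate.
```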

Let K be a field; then any endomorphism of K is injective, for its kernel is a proper ideal of K. Thus for any endomorphism α and any α-derivation δ, the skew polynomial ring R = K[x;α,δ] is entire. As in the commutative case we can show that this is a principal right ideal domain. For let a be a right ideal; if a ≠ 0, pick a monic polynomial f, say, of least degree in a. Then every g ∈ a can be written as g = fq + r, where deg r < deg f. Since r = g − fq ∈ a, it follows that r = 0 and so g ∈ fR. Thus a = fR and we have proved the first part of

Proposition 1.3.1. Any skew polynomial ring K[x;α,δ] over a field K is a principal right ideal domain. It is a principal left ideal domain if and only if α is an automorphism.

To prove the second part, assume first that α is an automorphism. Then if a^α = b, we have a = b^β, where β = α^{-1}, and the commutation relation (3) can be rewritten as b^β x = xb + b^{βδ}, i.e.

(10)   xb = b^β x − b^{βδ}.

Now it follows by symmetry that R is a principal left ideal domain. Conversely, assume that α is not an automorphism; then it is not surjective and so there exists c ∈ K, c ∉ K^α. We assert that Rx ∩ Rxc = 0, for if not, then for some f, g ∈ R*,

(11)   fx = gxc.

Comparing degrees we see that deg f = deg g = n, say. Let f = x^n a + ..., g = x^n b + ...; then by comparing highest terms in (11) we find a^α = b^α c, hence c = (b^{-1}a)^α, and this contradicts the choice of c. So we have shown that Rx ∩ Rxc = 0, i.e. R is not a left Ore domain. Now the assertion follows from the fact (proved below) that every left Noetherian domain is a left Ore domain. For if R were left principal, it would be left Noetherian and hence left Ore, but we have just seen that this is not so. •

We still have to prove the assertion made in the course of the proof about left Noetherian domains.

Translating to the right, we find that what we need is

Proposition 1.3.2. Any right Noetherian domain is right Ore.

Proof. Let R be a right Noetherian domain and a, b ∈ R*; then Σ_0^∞ a^i bR is finitely generated, hence for some n ≥ 1,

      a^n b = bc_0 + abc_1 + ... + a^{n-1}bc_{n-1}.

Since a^n b ≠ 0, not all the c_i vanish; let c_k be the first non-zero coefficient; then we can cancel a^k and obtain

      a^{n-k}b = bc_k + abc_{k+1} + ... + a^{n-1-k}bc_{n-1},

so bc_k ∈ aR, i.e. aR ∩ bR ≠ 0, and this is just the right Ore condition. •

little more:

then the elements a~ (n dependent over R.

=

0,1, ••• ) are right linearly in-

Fo~ if Laibci = 0, and ck is the first

non-zero coefficient, then as before we find that bck e: aR. Hence if aRn bR

=

0, we can find a free right ideal of

countable rank in R. Corollary.

This proves the

An integral domain is either a right Ore domain

or it contains free right ideals of countable rank. •

Prop. 1.3.1 shows that any skew polynomial ring K[x;a,o] over a field is right Noetherian, and hence right Ore by Prop. 1.3.2.

Therefore it has a unique field of fractions,

which we shall denote by K(x;a,o). There

~san

interesting application of the last Cor., due

to Jategaonkar [69'] and independently to Ko~evoi [70].

14

Proposition 1.3.3.

Let R be an entire ring with centre C.

Then R is either a left and right Ore domain or it contains a-free C-algebra of countable rank.

Proof.

Suppose that R is not right Ore; it will be enough

to find two elements x,y which are free, for then the elements 1

x Y form an infinite free generating set.

We choose x,y e: R*

such that xRrl yR = 0 and claim that the C-algebra generated by x and y is free.

If not, let f

=0

be a non-trivial re-

lation of least degree between x and y. (12)

a + xa + yb

This has the form

a, b e: R, a e: C.

0

Here a,b are not both zero, s1nce they have lower degree than Say, b f 0, then on multiplying (12) on the right by x we

f.

have ax+ xax + ybx = 0, i.e. ybx = x(-a-ax) e: xRrlyR, and ybx f 0, a contradiction, which shows x,y to be free over C. • The result can be used to embed a free algebra 1n a field (as both Jategaonkar and

Ko~evoi

have observed).

For let K

be a field with a non-surjective endomorphism a (e.g. the rational function field k(t) with endomorphism f(t) and form the skew polynomial ring R

=

K[x;a].

1--> f(t 2 ))

This 1s an

entire ring; by Prop. 1.3.1 it is not left Ore and so it contains a free C-algebra on two free generators, where C is the centre of R.

But R is right Ore and so it can be

embedded in a field; this then provides an embedding of the free algebra (of countable rank) in a field. In spite (or perhaps because) of its simplicity this construction is of limited use, because not every automorphism of C can be extended to an automorphism of the field of fractions, constructed here.

One application of this con-

struction, due to J.L. Fisher

[71]

is to show that C

has many different fields of fractions.

Let A= k[t] be the

usual (commutative) polynomial ring with endomorphism an: f(t)

1-->

f(tn), and consider the subring of R

=

A[x;an] 15

generated by x and y

=

xt over k, for n > 1.

in the image of k(t) under

Since t is not

n' it follows as in the proof of Prop. 1. 3 . 1 that Rx () Ry = 0, hence the subalgebra on x ct

and y over k is free, and for different n we clearly get distinct (i.e. non-isomorphic) embeddings, because x

-1

16

yx

tx

xt

n

= x(x-1 y) n = (yx-1 ) n x, so x -1 y = (yx-1 ) n •

2· Topological methods

2.1

Power series rings In the commutative case the familiar power series ring

k

[X]

may be regarded as the completion of the polynomial

ring with respect to the "x-adic topology", i.e. the topology It

obtained from the powers of the ideal generated by x.

~s

no problem to extend this concept to the ring k [ x; a J, but when The

there is a non-zero derivation we face a difficulty.

above topology may be described in terms of the order-function. o(f)

=r

if

n r r+l x ar + x ar+l+ • • • + x an

f

(a

r

+ 0).

Now it turns out that when 6 # 0, multiplication is not continuous in the x-adic topology, as the formula (1)

a.x

xa

a

+ a

6

shows, and any attempt to construct the completion directly will fail. y

=

One way out of this difficulty is to introduce

x -1 and rewrite (1) as a commutation formula for y.

We

then get (2)

ya

a 6 a y + ya y.

Owing to the inversion we now have to shift coefficients to the left (unless of course a happens to be an automorphism) and as (2) shows we cannot usually do this completely in the polynomial ring, but we can do it to any desired degree of

17

accuracy by applying (2) repeatedly: a 6a 2 o2 2 a y + a y + ya y

ya (3)

If 6 is locally nilpotent, i.e. for each a n such that

a

on

E

k there exists

; 0, this can be used as a commutation for-

mula in the skew polynomial ring. been studied by T .H.M. Smits

[68]

This kind of formula has (cf. also Cohn

[71"] ,Ch.O).

But in any case, in the power series ring we can pass to the limit in (3) and obtain the formula (4)

a oa 2 62a 3 a y +a y +a y + •.•

ya

The ring obtained in this way 1s clearly an integral domain and the set consisting of powers of y is a left denominator set, by (4), so we can

for~

the ring of fractions, which is

in effect the ring of (skew) formal Laurent series in y. If K is a field with a surjective derivation 6, then K(x;l,6) is a field with a surjective inner derivation, and we can also obtain a field with this property by taking skew Laurent series. -1 X

c

6

.

-1

Thus if the commutation formula 1s ex

6

c - c , then on writing [a,b] ; ab - ba, we have [x-l ,c]; and hence [ X-1 ,LX i

c.] 1

LXi [ X-1 ,c.

1

J;

LXi c 6.. 1

Since 6 is surjective, every element has this form and the assertion follows.

To get a surjective derivation we need

only take a function field in which every function 1s a derivative, or more algebraically, a differentially closed field (cf.e.g. Sacks [72], Shelah [72]).

This answers a

question first raised by Kaplansky [70]; he asked for a 18

field in which every element 1s a sum of commutators and B. Harris

[ss}·

answered this by constructing a field in which

every element is a commutator.

Such a field is the union of the ranges of its inner derivations, and it can also be constructed very simply as follows (Bokut' [63]): Given any field K_1, the rational function field L = K_1(x) in a central indeterminate x admits the derivation f ↦ f' (the usual derivative), and in the skew function field K_2 = L(y;1,') we have [x,y] = 1, hence [ax,y] = a for any a ∈ K_1. In this way we obtain a field K_2 ⊇ K_1 and every element of K_1 is a commutator in K_2. If we repeat this process we can form an ascending chain K_1 ⊆ K_2 ⊆ ... whose union is a field which is the union of the ranges of its inner derivations.

If we re-

peat this process we obtain a field K2 :::) K1 and every element of K1 1S a commutator in K2 . Thus ue can form an ascending chain K1 C K2 C .•• whose un10n 1S a field which 1S the un10n of the ranges of its inner derivations. In finite characteristic Lazerson [61] has constructed a field with a surjective inner derivation: Given any field k, of characteristic p I 0, adjoin commuting indeterminates k(x 1 ,x 2 , .. ) with derivation 8 such that 0

0

xi = xi-1 (i > 1), x 1 = 1. Put L = K(t;l,o) then o is induced by t, hence oq is an inner derivation induced by tq, for any q = pn , and it annihilates anything involving only x1, ••• ,xq-1' that [a, tq] surjective.

Thus, given a



L, there exists q

0 and so [ax ,tq] =a. q

=

pn such

This shows that 8 is

By taking ultra-products of such fields for

varying p we obtain fields of characteristic 0 with surjective derivations.

For other constructions see Cohn

[73'"]. There is an important generalization of the power ser1es method, to which we now turn.

Let G be a group and consider

the group algebra kG over a commutative field k. When is kG embeddable in a field? Clearly a necessary condition is that it should be entire, and for this it is necessary for G to be torsion free. For if u ∈ G is of order n, then

      (u − 1)(u^{n-1} + u^{n-2} + ... + u + 1) = 0.

In the abelian case this condition on G is also sufficient. For if G is torsion free abelian, it can be totally ordered (regard G as a Z-space, embed it in a Q-space and use a lexicographic ordering with respect to an ordered basis). When G is totally ordered, kG is clearly entire: let a = a_1 s_1 + ... + a_m s_m with a_i ∈ k, s_i ∈ G and s_1 < ... < s_m, and similarly b = b_1 t_1 + ... + b_n t_n (b_j ∈ k, t_j ∈ G, t_1 < ... < t_n); then ab = a_1 b_1 s_1 t_1 + ..., where the dots represent terms greater than s_1 t_1, hence ab ≠ 0. Thus we have

Proposition 2.1.1. Let G be an abelian group; then the group algebra kG (over any commutative field k) is embeddable in a field if and only if G is torsion free. •

with a. £ k, s. £ G and s 1 < ••• < sm' similarly mm 1. 1. b = b 1 t 1 + ••• + bhtn (bj E k, tj £ G, t 1 < ••• < tn)' then ab = a 1 b 1s 1 t 1 + ••• ,where the dots represent terms greater than s 1 t 1 , hence ab ~ 0. Thus we have Proposition 2.1.1. Let G be an abelian group, then the group algebra kG (over any commutative field k) is embeddable in a field if and only if G is torsion free. •

In the non-commutative case little is known; it is not even known whether kG is entire for any torsion free G. But Farkas and Snider [76] have recently proved that this

l.S

the case when G is polycyclic (i.e. soluble with maximum condition on subgroups); since kG is Noetherian in this case, it is then embeddable in a field.

In another direction

J. Lewin and T. Lewin [a] have shown (using methods of Magnus

and some results from Ch.4 below) that for any torsion free group G with a single defining relation the group algebra kG can be embedded in a field. It has long been known that Prop.2.1.1 can be generalized to non-abelian groups which are ordered.

In that case we can

form a kind of power series ring k((G)) which turns out to be a field.

This uas first proved by Hahn [07] for abelian

groups with an archimedean ordering and then generally by Mal'cev[48] and independently, Neumann [49'].

The result was

put in a general algebraic setting by Higman [52]; his proof (with some simplifications) is given in Cohn [65].

Let us

briefly describe the construction without entering into the details of the proof. Consider the k-space kG of all k-valued functions on G; it contains the group algebra as subspace, in fact a 20

= (a ) g

belongs to the group algebra precisely if its support D(a)

{g e G

is finite.

I

a

g

j: 0}

Now the multiplication on kG cannot in any

natural way be extended to kG; if a= (ag)' b = (bg)' then we should have

and there is no guarantee that the sum on the right is finite.

Let k((G)) be the subset of kG consisting of all

elements with well-ordered support (in the ordering on G). If a,b

k((G)), then the sum on the right of (5) is finite

E

for-each g, for the h such that ah f 0 form an ascending chain, so the h

-1

g (g fixed) form a descending chain and

hence only finitely many such bh-lg are non-zero.

More-

over, it is easily seen that ab s k((G)), so that the latter forms in fact a ring, with kG as subring.

The theorem

of Mal'cev and Neumann asserts that k((G)) is actually a Thus each element of k((G)) has the form Za u and

field. if u

0

u

s G

~s

the least element for which a

can write f =a

u

only of elements 0 1 + g + g

2

u

'f 0, then we 0

u (1- g), where g has support

• • cons~st~ng

0

>

1.

Now what has to be proved is that

+ ••• e k((G)); once that is established we

clearly have

f-1

=

(1 + g + g2 + ..• ) u

-1 -1 a 0

uo

In order to apply this result to embed free algebras in fields we use the fact that the free group can be totally ordered (cf. e.g. Fuchs [63] ).

Briefly the proof goes as

follows: Let F be. the free group on a set X, taken to be finite 21

for simplicity, and let b 1 ,b 2 , •.• be the sequence of basic commutators in X. Denote by yt(F) the tth term of the lower central series ofF and let b ,b , •.• ,b

be all the basic w commutators of weight < t, then as is well known, every ele1

ment a

£

2

F has a unique expression

(a.

l.

£

z)

(cf. e.g. M. Hall [59]), and we can therefore represent each a

£

F by an infinite product

Now write a> 1 whenever the first non-zero of a 1 ,a 2 , ••• is positive; this provides a total ordering of F. The same method works for free groups of infinite rank (taking the free generating set to be well-ordered). It is clear that k, the free k-algebra on X, can be embedded in kF, the group algebra of the free group on X, and since kF is embedded in the 'power series field' k((F)) just constructed, we have another embedding of kin a field. If instead of the free group F we take the free metabelian group G, i.e. the group defined by the law ((u,v),(w,x)) (where (x,y)

-1 -1

=x y

least when card (X)

=1

xy), we can still embed kin kG, at

= 2, the crucial case (Moufang [37]).

Moreover, G can again be ordered, so we have another embedding of k in a field, and these two embeddings of k in k((F)) and k((G)) are clearly distinct. Of course there are other simpler ways of finding nonisomorphic fields of fractions of k, e.g. that by Fisher described earlier.

We note that 1n the above construction

the free metabelian group cannot be replaced by a free 22

nilpotent group of any class, for the free semigroup on X cannot be kmbedded in any nilpotent group on X, by results of Mal' cev (cf. Lyapin [60]). As an application of the Mal'cev-Neumann construction we derive the one-sided principal ideal domains first obtained by Jategaonkar.

We have seen that a polynomial ring over a

field is a principal ideal domain, and it is clear that the condition is necessary, i.e. if a polynomial ring is principal, the coefficient ring must be a field.

This is true

even for skew polynomial rings relative to an automorphism, but for a non-surjective endomorphism it need not hold. precise conditions were determined by Jategaonkar Theorem 2.1.2. put R = A[x;S].

The

[69]:

Let A be a ring with an endomorphism Sand Then R is a principal right ideal domain i f

and only i f A is a principal right ideal domain and S maps

A* into U(A), the group of units of A. Proof.

If R is principal, so 1s A because it is a retract

of R, i.e. a subring which 1s also a homomorphic image (put x = 0).

Further, for any a

E

A* we have aR + xR = cR, where

c 1s the highest common left factor of a and x.

It follows

that c has degree 0 (as factor of a), sox= cf, where f has degree l, say f shows that ce

xd + e. 0 , c sd

= 1,

s

Now x = cxd + ce = xc d + ce, which . a un1"t. so c s 1s

Let au + xv = c,

then putting x = 0 we see that a is associated to c, hence a

s

. so a s 1s . a un1t, . as c l a1me . d 1s associated to c s , a un1t, . Conversely, if the given conditions hold, R is clearly entire.

Let a .be a right ideal in R.

When a

= O,

there is

nothing to prove; otherwise let n be the least degree of polynomials occurring 1n a .

The leading coefficients of

polynomials of degree n in a form with 0 a right ideal in A, generated by a say. Let f xna + • • • e: a , then aS e: U(A) n+l S . as highest coeff1c1ent. · · and hence fx x a + •.. has a un1t It follows that a contains a monic polynomial of degree n+l and so also of all higher degrees.

Now it is clear that

23

a

= fR,

hence R is a principal right ideal domain. •

This result shows that under favourable circumstances one may iterate the polynomial ring construction and still get a principal right ideal domain, and this suggests the following definitions.

By a J-skew polynomial ring one

understands a skew polynomial ring A[x;S] such that S is injective and satisfies Jategaonkar's condition: AS ~ U(A) U {0}. E.g. this condition holds whenever A is a field; what is of interest is that there are other cases.

It is easily seen

that any J-skew polynomial ring over A is entire if A is. Now let R be a ring and called a J-ring of type (a < T) such that

T

T

an ordinal number, then R is

if R has a chain of subrings R

a

(i)

R = U(R) U {0}

(ii)

Ra+l is a J-skew polynomial ring over Ra for all aa). - r

It ~s easily verified by induction that U(R ) a we have the Corollary.

U(R ), hence 0

Any J-ring (of any type T) is a principal right

ideal domain. •

It turns out that J-rings can be characterized as integra: 24

domains with Euclidean algorithm (generally transfinite) and unique remafnder (cf. Lenstra [74]). any '

there are J-rings of type T.

We shall see that for Such rings form a useful

source of counter-examples; they were constructed by Jategaonkar to provide examples of (i) a principal right ideal domain in which there are non-units with arbitrarily long factorizations (only rather special examples, in effect Jrings of type

2,

were known earlier, cf. Cohn

[67] ),

(ii) a

ring with left and right global dimensions differing by an arbitrary integer (the largest known difference had been 2 before, cf. Small

[65]),

(iii) a left but not right primi-

tive ring (such a ring was first constructed by Bergman

[56],

answering a question of Jacobson example is more direct.

[64],

but Jategaonkar's

See also Brungs

[69]

for some re-

markable properties of this construction). Skew polynomial rings over a field are J-rings of type l; J-rings of type 2 can be obtained by an ad hoc construction (Cohn

[67])

but beyond this the general case is no harder

than the finite case.

Moreover, one cannot use induction

directly, s1nce the coefficient ring depends essentially on the order type.

Jategaonkar uses an ingenious argument in-

volving ordinals; below is a direct proof based on the Mal'cevNeumann construction (cf. Cohn [71 11 ] ) . We observe that to achieve the form (6) we need a commutation rule of the form (S

=0

=> x

= 0.

Then the

ax is injective, and it 1s clearly right

K-linear, on a finite-dimensional K-space, hence it is sur-

= 1 for some b £ A. Now b 1s again a left non-zerodivisor: if bx = 0, then x = abx = 0. Hence there exists c c A such that be = 1, and so c = abc = a and this shows that ab = ba = 1, i.e. a is a unit. The rest is clear.• jective, and so ab

There is one important case where left and right degrees are the same. Theorem 3.1.2. centre.

Let K be a field of finite degree over its

Then the left and right degrees over any subfield

coincide.

Proof.

Let E be any subfield of K and denote the centre of

K by C.

By hypothesis K is a C-algebra of finite degree,

and it is clear that A subalgebra.

EC

= {Lx.y. 1 1

Jx.1 E E, y.1 E C} is a If we regard A as E-ring, we can choose a

basis of A as left E-space consisting of elements of C; then this will also be a right E-basis for A, hence

(2)

Now A 15 a c-algebra of finite degree, entire as subalgebra of K, hence A is a field.

By (1) 31

[K:C] .. [K:A]L [A:CJ Since [K:C] is finite, so is [A:C].

I f we divide by [A:CJ

and multiply by (2) we get (on using (1) again)

3.2

The Sweedler predual and the Jacobson-Bourbaki correspondence The aim of this section is to establish the Jacobson-

Bourbaki correspondence theorem, used later for Galois theory and also useful elsewhere.

We shall first prove the

Sweedler correspondence theorem on corings, see Sweedler [75]. We begin by explaining the notion of a coring. a ring.

Let A be

By an A-coring we understand an A-bimodule M to-

gether with A-bimodule

~ps ~:M

--> M ~AM, E:M --> A, such

that the following diagrams commute:

/I~

M ~ M l.E

Example.

Let

~:A-->

> M


im(U

~

V/V', the kernel of a V') ).

An element g

£(g)

=

1, 6(g)

~n

=

g

B is im(U'

~

V) +

a coring M is called grouplike if ~

g.

E.g. the standard B-coring B

has the standard grouplike 1 Proposition 3.2.1. grouplikes.

~

~

~AB

1.

Any coring map takes grouplikes to

Given a homomorphism ¢:A --> B, let C

=B

~AB

be the standard B-earing over A, then for any B-earing P there is a natural bijection (of sets)

where Hom denotes the set of B-earing maps and

{g E

Proof.

Pj

g grouplike and ga

E

A}.

The first part is clear, to prove the second we note

that under any coring map C --> P, 1 like centralizing A. Lai ~ b i

ag for all a

~

1 maps to a group-

Conversely, given g £ GA(P), the rule

j --> Laigb i defines a map C --> P which is easily

seen to be a coring map.• We shall often write gB/A for 1 ~ 1 in B ~AB. It is a natural question to ask if every B-coring is standard 33

Since B GA B ~s generated as B-bi-

over some subring.

module by the grouplike gB/A' we shall limit ourselves to corings generated by a single grouplike.

Then the answer

is 'yes', provided that B is a skew field: Proposition 3.2.2.

Let K be a skew field and M any K-coring.

Given a grouplike g c M, write D

= {x

K

£

I

xg

= gx},

then

Dis a subfield of K and the standard coring map (Prop. 3.2.1)

(1)

1-»

1 G 1

l';:K~K-·»M

is injective; i t is an isomorphism i f M

Proof.

Clearly D

~s

a field.

= La.

then there exists x

~

Suppose

~b.

~

~

1';

g

= KgK. is not injective,

0 such that

n

La.gb. 1 ~ ~

(2)

(a., b.

0

~

~

£

K),

and we assume that n, the number of terms in (2), is minimal. Moreover, 0

= s(La.gb.) = La. b., hence n > 1 and after multi~ ~ ~ ~ -1 plying by al we may take al = 1. Now by minimality a 1 = 1, a2, . , . , an are right D-independent and a2g # ga2, hence there

is a right K-space map G:M -» K such that G(a g - ga2) ~ 0. 2 Consider x' = Ln1e(a.g- ga.) ~b .. By the independence ~ ~ ~ of the bi we have x' ~ 0, but a 1 g- ga 1 0 and so x' has fewer than n terms.

c

Now let us chase x around the diagram

c ----»

M

j

j

~

where C

c

----»

= K ~K.

0.1 M ~ M ----~> K

~

34

L8(a.g)gb. l.

~

M

M

Going across and down we get 0, going down

and across we get EG(a.g)gb., hence (3)

~

o.

~

Now ~(x')

= EG(a.g1.

= E0(a.g)gb. 1. 1.

ga.) gb. 1.

1.

- E~(g)a.gb .. 1.1.

Here the first sum vanishes by (3) and the second by (2). Thus ~(x') diction.

= 0, but x'

~ 0

and x' has < n terms, a contra-

Therefore (1) is injective; clearly it is sur-

jective, and hence an isomorphism, whenever M = KgK.• Taking the case M = KgK, we get the Corollary.

Every K-coring over a field K, generated by a

grouplike is standard.

We can now prove the first correspondence theorem: Theorem 3.2.3 (Sweedler correspondence theorem). skew fields F ~ K, let M = K over F, write

C

~

F~D~K,

preserving bijection C ~>

cx:D

~:J where n:M

Proof.

1-> 1->

K be the standard K-coring

for the set of coideals in M and

set of fields D such that

D+

ker(M

J+

{x

E

I

V

for the

then there is an order-

V defined by

=K~

K

Given

K -> K

x. ~(gK/F)

~

=

K),

~ M/J.

As kernel of a coring map D+ is a coideal.

If L is

= ex} is clearly and it contains F. a subfield of K, so J is a subfield of K, + JD is the kernel of the natural aS = 1. Given D £ v, D ++ map K ~K -> K~ K and ~IF 1-> gK/D' Now x £ D 1 Q X < > X £ D. X Q 1 any K-bimodule and c

£

L, then {x

£

K I xc

+

~ex

1.

Given J e: C, put D = J+ and let n:M -> M/J be

the natural map, then c

=

n(~/F) centralizes D, by defini-

=

Moreover, c generates M/J, hence M/J K~ K by Prop. 3 . 2 . 2 , wh ere the homomorphisms of M correspond 1n + ++ this isomorphism, so J = D = J ·• In this correspondence F was any field, e.g. we can take tion of D.

35

it to be the prime subfield P of K, then we obtain a bijection between the coideals of K

~

K and all subfields of

K. To obtain a case of Jacobson-Bourbaki correspondence we need some facts on duality. we write HomA_(M,N),

Given A-bimodules M, N,

Hom_A(M~N), HomA,A(~~N)

for the set

of all left-A, right-A and A-bimodule homomorphisms.

Further

we put *M = HomA_(M,A), M* = Hom_A(M,A), *M*

= *Mn M* = HomA,A (M,A) .

E.g. if MA is free of finite rank, then *(M*)

=M,

as is

well known. Let M be an A-coring, then *M has a ring structure as follows: For f,g composition

E

*M, their product is defined as the

M

__g__> A.

Thus f.g: u I--~ I(uilui 2 ) if ~(u) = Iuil 9 uiZ' It is easily verified that this is indeed a ring structure. Simif

g

larly M* is defined as a ring: the composition of f,g is defined as

E

M*

f f.g:M ---> M 9A M g 9 1 > A 9A M = M - > A.

Here f.g maps u to I(ur1 ui 2 )f We note that for f,g s *M* both definitions reduce to Iu~ u~ thus both *M and M* 11 12' contain *M* as subring. Example.

Let C

Hom_B(B 9A B,B) C*

36

B 9A B

be a standard B-earing, then C*

Hom_A(B,Hom_B(B,B))

End_A(B),

=

Hom_A(B,B).

Thus

=

and similarly *C =End .

A-

(B), while *C* =End

latter is essentially the centralizer of

A,A

~(A)

(B)

The

'

in B.

It is clear that if f:M ~ N is a coring map, then f*: M* ---> N* and *f:*M ---> *N are ring homomorphisms.

E.g.,

for any B-earing M, e::M ---> B is a coring map, hence e:* :B ---> M*

1--->

is a ring homomorphism; explicitly, e:* :b corresponds to the map m

1--->

Ab£' i.e. b

e:(bm).

Now let K be any field and End(K) the ring of additive group endomorphisrns of K.

In End(K) we have the subrings

p(K), A(K) of right and left multiplications, and as is well known, these are each other's centralizers, thus p(K)=EndK_(K) A(K)=End_K(K).

K

Further, End(K) as subset of K can be regarded

as a topological space, the topology being induced by the K

product topology on K , taking K with the discrete topology. This is sometimes known as the topology of simple convergence; iff e: End(K), a typical neighbourhood off consists of all ~

e: End(K) such that for a given finite set c 1 , ••• ,cn £ K, c.f =c.~. In particular, this shows every centralizer to ~

.

~

be closed . We shall need one more auxiliary result: Proposition 3.2.4.

Let K be any field.

Given a subring F

of End (K) such that p (K) c;;;; F c;;;; End (K), define

D

{x e: K

I

{x e: K

I (xy)f

A centralizes F}

then (i) D = {x e: K

X

I

x.yf for all ye:K,fe:F}

xf = x.lf for all f

£

F}, (ii) the

centralizer of F in End(K) is A(D) and hence D is a subfield of K, (iii) p (K) c;;;; F ~ EndD- (K) •

Proof. (i) If xf

= x.lf,

then (xy)f

Xp f y

x.lp f y

=

x.yf 37

because p(K)C F.

(ii) Since p(K)C F, the centralizer of F

is contained in A(K); in fact the definition of D states that A f =fA , thus the centralizer is A(D). X

X

Now the rest

is clear. • Theorem 3.2.5 (Jacobson-Bourbaki).

Let K be a skew field

and End(K) its endomorphism ring, as topological ring.

Then

there is an order-reversing bijection between the subfields

D of K and the closed p(K)-subrings F of End(K), defined by the rules

(4)

D

1->

End0 _ (K) ,

F

1-> D

{x

g

K

I

xf = x.lf for f

E

F}.

Further, i f D and F correspond, then

whenever either side is flnite.

Proof.

Given F, defineD as in (4).

forK and for x y. 0

X

Let X be a left D-basis

define ox e End 0 _(K) by

E X

1

ify

0

if y "' x.

x,

Then the oX are right K-linearly independent, for if ~o X a X = 0 (a X E K), we can apply this toy EX to get a y = 0, so the relation was trivial. Moreover, if [K:D] 1 < ~, the

ox form a basis for End 0 _(K) as right K-space, because then f

= ~ox.xf,

(6)

as is easily checked.

[End0 _ (K) :K] R

if the right-hand side is finite.

38

Therefore we have

Now F ~ End 0 _ (K) , so

[F:K]R is finite if [K:D]L is, by (6). To show that

:

(7)

F = EndD- (K) ,

we use Jacobson's density theorem [56] to deduce that F, as p(K)-subring, is dense in EndD_(K) and being closed is therefore the whole of EndD_(K).

In the restricted but still im-

portant situation where [F:K]R

< ""•

(7) can be proved from

Prop. 3.2.2 by verifying that the map K

:K ~ K

1s injective.

F*,

-;:>

It is of interest that for any p(K)-subring F

the injectivity of K would follow from the density of F in Conversely, if

EndD- (K).

* (F*) -;:. * (K ~K)

= EndD- (K)

is surjective, then since F is dense in *(F*), it follows that F is dense in EndD_(K); thus the injectivity of of

th~

K

is a predual

density theorem.

To outline the proof, let [F :K]R < ""· n:F*

~ F*

->

(F

~)*,

~DW

Then the map

1->

(f 9 g

1->

(fwg)~ )

1s an isomorphism and F* has a coring structure given by _ 1

¢1--> (lF)¢, and 6:F* (mult)* > *(F*) = F, as rings. Now *K is an

~K F)* _n___> F*~

e::F* --> K,

(F

Moreover,

injective ring

homomorphism, hence Let g

=

K(g) e: F*.

~/D £

K

K =

~D

(*K)* is a surjective coring map. K be the standard grouplike and c =

Put D1 = {x e: K

cb = K(l 9 b) and be= K(b (cb)f

= bf,

(bc)f

I ~

xc =ex}, then D1 2D; if be: K, 1), hence for any f e: F,

= b.lf, so if be: D1 then bf

=

b.lf, i.e.

b e: D by Prop. 3.2.4 (i); this shows that D1 =D. Further, F* = KcK, hence K is an isomorphism by Prop. 3.2.2, therefore so is *K, i.e. F

= EndD-(K)

and [K:D]L= [F:K]R.

Conversely; given D, put F = EndD_(K) and define D1 {x

£

K I xf = x.l£ for all f e: F}, then

D 1~D

and by what we

(K), hence D1 =D. Finally if either Dlside of (5) is finite, then by (6) we have

have seen, F =End

39

l

as claimed. • Galois theory

3.3

Let K be a skew field and G the group of all its autocr + X,. morphisms. For any subfield E of K let E = {cr e: G for all x {x

£

K

I

X

+

£

x0

E} and for any subgroup H of G, put H

=x

for all a

£

H}.

Then it is clear that E+ is

a subgroup of G, H+ is a subfield of K and as in any Galois connexion,

H~H

++

and hence +++ H

+

E '

+

H .

Given a subfield E of K, we call E+ the Galois group of K/E, +

written Gal(K/E), and given a subgroup H of G, we call H the fixed field of H.

If E

H+ for some H, we shall say

that K/E is Galois. The object of Galois theory is to find which fields in K are of the formE the formE+.

= H+ and which subgroups of Gal(E/K) have

We recall that in the case of commutative

fields the finite Galois extensions are just the normal separable extensions, while every subgroup of Gal(K/E) has the form F+ for a suitable field F between K and E.

The

account which follows is based on Jacobson [56]. The commutative theory rests on two basic results: Dedekind's lemma.

Distinct homomorphisms of a field E into

a field Fare linearly independent over F.

40

If G is a group of automorphisms of a field

Artin's lemma.

E and F is the fixed field, then [E:F]

=

lei

whenever either

side is finite.

Our object is to find generalizations.

We begin with

Dedekind's lemma; here we have to define what we mean by the linear independence of homomorphisms over a skew field.

= Hom(K,L) for the

Given any skew fields K, L, we write H

set of all field homomorphisms from K to L.

Let HL be the

right L-space on the set H as basis and define HL as left K-space by the rule s

as Thus HL

sa , ~s

s

a e: K,

a (K,L)-bimodule, as

~s

element of HL defines a mapping K Es.A.: a 1--~ Ea

(1)

~

H

Hom(K,L).

easily checked. --~

Each

L as follows

s. 1

~

(a e:

A.

~

We observe that for a,S thus aSs

E

£

K, s s H, a.

K, s.~ e: H, A.~ e: L). Ss

a

sSs

~sas ~ ~

= ( aS )s ,

(a5)s and so the left K-module structure of HL

acts on K in the expected way.

Let N be the kernel of the

mapping from HL to Map(K,L) defined by (1), thus N consists s. 0 for all a. £ K. Write of all sums l:s.:\. such that l:a. l:\. 1

~

1

M = HL/N, then M is a (K,L)-bimodule whose elements have the form Es.A. (s. e: H,A. e: L), with Es.1 A.1 = 0 if and only if ~ 1 1 1 s.

=0

Ea 1A. 1

for all a e: K.

Each p e: L* defines an inner automorphism of L:

I : A 1---~ PAP

-1

fJ

,

and it 1s clear that for s s,t: K

-->

£

H, si

fJ

£

H.

Two homomorphisms

L are called equivalent if they differ by an inner

automorphism: t

= si].1 •

We note that for each s e: H, sL is a 41

(K,L)-submodule of HL which is simple as (K,L)-module, since it is already simple as L-module.

Two homomorphisms s,t

define isomorphic (K,L)-bimodules if and only if they are equivalent.

For if sL

then for all a E K,

= tL,

a.s~

say t corresponds to

that as~= ~at, hence s,t are equivalent. s,t are equivalent, then as~

= ~at

tracing our steps we find that sL to

s~

s~ (~

E

L*),

. s £'~nd s~nce as = sa , we

= s~.a t •

Conversely, if

for some ~ E L* and re-

= tL,

with t corresponding

in the isomorphism.

It follows that HL is a sum of simple (K,L)-bimodules, i.e. semisimple, hence so is the quotient M.

We recall that

a semisimple module is a direct sum of homogeneous components, where each homogeneous component is a direct sum of simple

[56]

modules of a given type (cf. Jacobson

or Cohn

[77]).

Now we have the following generalization of Dedekind's lemma. Theorem 3.3.l(i).

Given s,s 1 , ... ,sn

s.I

then s

(ii)

~

1

for some i and

~

= Hom(K,L), if

in M l='HL/N

s = l:s.A. 1

H

E

(A..

1

some~ E

E

L),

L*.

Givens E H, i f ~ 1 , ... ,~r E L are such that the

elements si

in M are linearly dependent over L, then the

~i

~i are linearly dependent over C (Ks), the centralizer of

Ks in L. Proof.

(i)

If s

=

l:s.A.., then the simple module sL lies ~ ~

in the same homogeneous component as somes., so sands. 1

generate isomorphic modules, i.e. s

=

s.I ~

~

(~ E

1

L*).

(ii) If the si~i are linearly dependent, take a relation of shortest length: p

I 1 si

~·1

A..

~

0

(A..

~

E

L) •

Then each A.i f 0 and by multiplying on the right by a suitable factor we may assume that 42

A. 1

= ~1 .

Apply this relation

to aS

E

(2)

0

K: s s -1 == L11. a. S ll· A.• 1

1

1

Next apply the relation to a and multiply by Ss on the right: 0

Lll.ct 1

s -1 s ll· A.S . 1

1

Taking the difference, we get "'p s -1 ('.as ~lll·Cl ll· 1\ p 1 1 1

s -1 ll·B ll·1 A.) 1 1

0.

The first coefficient 1s Alps - A1 Ss ; 0, hence by minimality -1

the others are also 0, so ll· A.S 11 E C (Ks). Now by (2), with cY. = i3

-1 l:]J •• ]J.

1

1

\.

1

s

s -1

-1

; S ll· A., i.e. ll· A. 11 11 = 1,

0,

and this is the required dependence relation over C(K 5 ) . • Corollary 1.

Let s 1 , ... ,sr be pairwise inequivalent iso-

morphisms between K and Land let A1 , ... ,At

E

L be linearly

independent over the centre of L, then the isomorphisms

s.I, are linearly independent over L. 1 /\. J

For if they were linearly dependent, then for some s

=

si

the siA. would be linearly dependent (because each si beJ

longs to a different homogeneous component).

C (K 5 ),

the A. are linearly dependent over J of L, which contradicts the hypothesis. •

So by (ii)

i.e. the centre

If sl' • •. ,sr are inequivalent isomorphisms between K and L, A.l, •.. ,At E L are linearly independent

Corollary 2.

over C, the centre of L and

43

(3)

(ex ••

s = L:s. I, ex ••

J

then

L),

E

1J

1 /\. 1J

s = siiA, for some i, where A= L:A}lj (ilj

Proof.

E

C).

By (i), s = siiA for some i and some A E L.

Thus

Is. I, ex •• 1 1\. 1J J

and by equating homogeneous components we can omit terms sk

r i.

with k

dent over 6.

J

E

c,

Now by Cor.l, A,A 1 , ... ,At are linearly depenbut A1 , ... ,At are independent, hence A n. a., J J

C••

Next we have to translate Artin's lemma.

Without using

= [G:E], where G, or

Dedekind's lemma the result is [E:F]

rather GE is regarded as right E-space.

We shall replace

G, a group of F-automorphisms of E, by F-linear transforma-

tions of FE.

(yx)s

Every such s E EndF_(E) satisfies yx s

X

E

E, y

This generalizes the rules (xy)s for s

E

G.

E

F.

s s

x y

y

s

= y satisfied

Given any skew field K, we consider the set

End(K) of additive group endomorphisms as a topological ring (3.2).

This set contains p(K), the ring of right multi-

plications as subring, and we recall the Jacobson-Bourbaki correspondence (p.38): There 1s an order-reversing bijection between the subfields D of K and the closed p(K)-subrings F of End(K) such that

whenever either side is finite. Given a group G of automorphisms of K, we have a right

44

K-space GK, and we need only show that this is a ring in order to be able to apply the preceding result. Proposition 3.3.2.

Thus we have

Let K be any skew field and G a group

of automorphisms of K, then GK is a p(K)-subring of End(K) and its closure GK is EndD_(K), where Dis the subset of K left fixed by G.

Proof.

In GK we have the rule ag

gag

(a

K, g

E

E

G).

Using this rule, we have

Since every element of GK is a sum of terms ga, it follows that GK is closed under products and contains 1, hence it is a ring, indeed a p(K)-ring, because p(K)~ GK. Jacobson-Bourbaki correspondence, GK

{x

E

K

I

a E

x

f

D

= x.l

a e: D

for all f

E

-

GK}.

EndD_(K), where D Thus if a

f f a = a.l for all f agf3 a.lgS for all g agf3 af3,




= a.

for all g

E

By the

E

GK,

E

G,

K, then

E

aE

K,

G. •

Combining this with Th.3.3.l, we get Proposition 3.3.3.

Let K be a skew field, G a group of

automorphisms of K, and D the fixed field of G.

Assume that

G contains every inner automorphism of K over D.

If E is a

subring of K such that D ~ E ~ K and

IE :D I1


K be a D-ring homomorphism.

Regarding E

and K as left D-spaces, we see that E is a subspace, so s can be extended to a D-space endomorphism of K, 1.e. an element of EndD_(K).

By Prop.3.3.2, EndD_(K) = GK and on any

finite-dimensional subspace s can be written as Ig.A. (g. 1 1 1 Ai e: K). In particular, since [E:D] 1 < oo, we have

E

45

G,

= Ig.A.

s

~

on E.

~

Now E is an entire D-ring of finite left degree, hence a field, and by Th.3.3.1 (i), s = g.I ~

v E K.

Applying this to a E D we have a

Thus I

v for some i and some

~s

v

hence g.I ~

v

s

an inner automorphism of K over D, and so I

E G,

v

EGis an automorphism which induces s.• Let K be a skew field with centre C and D a C-

Corollary.

subalgebra finite-dimensional over C, then any C-algebra homomorphism D --> K can be extended to an inner automorphism of K.

This follows by taking G to be the group of all inner automorphisms of K, then C is the fixed field and every inner automorphism belongs to G, so the proposition may be applied.• We now return to our initial task of finding which automorphism groups and subfields correspond under the Galois connexion.

We first deal with a condition which is obviously Let K be a field with centre

satisfied by all Galois groups.

C and let D be any subfield of K, then the centralizer of D in K is a subfield D' containing C.

Any non-zero element of

D' defines an inner automorphism of K which leaves D elementwise fixed and so belongs to the group D+ ; conversely, an ~nner

+

automorphism of K belongs to D only if it is induced

by an element of D'.

Thus we see that the a E K for which

+

Ia

~

~s

then a necessary condition for a group to be Galois and

D form together with 0 a subfield containing C.

we define: A group G of automorphisms of K is called an N-group (after E.Noether) if the set

A

46

{a

E

K

I

a

0 or I

a

E

G}

This

~s a C-subalgebra of K.

We shall call this the C-algebra

associated with G.

Clearly the associated C-algebra ~s necessarily a field. If G is any N-group with associated algebra A, and G is the 0 subgroup of inner automorphisrns I (a E A), then G is normal 1 a in G, for if x £ K, a £ A*, s £ S, then x s- las = 0( ax s-1 a -l)s = s ( s)-1 a x a = xi s• so a

s

-1

I

a

.s

I s· a

We define the reduced order of G as

With this notation we have Theorem 3.3.4.

Let K be any skew field, G anN-group of +

automorphisms of K and put D = G •

Then

(4)

whenever either side is finite, and when this is so, G++

Proof.

Suppose first that [K:D]L

(J.-B.)

[End 0 _(K):K]R

incongruent (mod G0

)

=m


. 1 , ... ,"At any elements of the =

associated C-algebra A that are linearly independent over

C.

Then the maps sir>.. are in End 0_(K) and by Th.3.3.1, J

Cor.l they are linearly independent over K; hence rt


. 1 , ••• ,"At a C-basis for A, then we know that the s.I, ~

over K.

are right linearly independent

1\.

J

We shall show that they form a basis for End 0 _(K). Lets£ G, then s = s 1 I"A say, where A£ A and so "A=

47

For any c e: K we have

l:A.ts. (ts. e: C) • J J

J

s

c

c

slr

>.cs 1 A-1 A= sl -1 l:A.~.c A J J

s -1 -1 n.c n .. A.fLA J

J

J J

l:c sl IA y., j J

where y. = J

A.~.A.

-1

spanG, e: K. This shows that the s.IA 1. •

J J

hence also GK.

J

Now GK has finite dimension over K and so

= EndD_(K) (Prop.3.3.2). [End 0 _ (K) : K] R = (K: D] L •

GK

=

GK

Next it is clear that D+ then s e: EndD_(K), hence s

++

G

It follows that !Gired ;2G.

= l:s.IA a ..• 1. • l.J

+

Conversely, if s e: D , By Th.3.3.1, Cor.

2, since s is an automorphism of K~ s = siiA for some i, where).= EA.ts.

(~.

JJ

s

E

J

e: C), but then I, e: G and s. e: G, hence 1\

0

1.

G ••

Here is an example, taken from Amitsur [54], to illustrate the need for introducing the reduced order. Let F be the Q-algebra generated by u, with defining relation u

2

+ u +

1

= 0. 2

F admits aQ-automorphism cr:u 1-> u. Put k = F[v;cr], then v 2 - 2 is central and irreducible, because the equation

has no rational solution. So we can form the skew field K = R/(v 2 - 2). Now the group generated by I has order 3, u but reduced order 2, for its fixed field is F and [K:F] = 2.

48

We note some consequences of Th.3.3.4. Corollary 1.

:If K/D is Galois (i.e. D

= G+

for some G), then

whenever either side is finite.

This follows from (4) by symmetry. • Corollary 2.

Let K be a skew field with centre C and A any

C-subalgebra of

K.

Then the centralizer

C-subalgebra of K and A"-:;;;_A.

(5)

A' of A is again a

Moreover,

[A:c]

whenever either side is finite, and when this is so,

A"

A.

=

Proof.

Suppose first that [K:A'] 1 is finite. Clearly A' 1s a subfield of K; let G be the group of inner automorphisms of K fixing A' and A1 the associated algebra, then A1 -:;;;_A and by Th.3.3.4, lclred

[A 1

:cJ

=

[K:A']L' hence A is then

finite-dimensional.

Thus we may assume that [A:C] is finite,

hence A is a field.

Now let G be the group of inner auto-

morphisms induced by A, then clearly G+ = A', and (5) follows from (4).

Moreover, (A')+= G, which means that

A" = A. • Corollary 3.

If K/D is Galois with group G and [K:D] 1 < oo,

then any p(K)-subring B of GK has the form HK, where H

=

GnB is an N-subgroup of G.

Proof.

Let H = GnB, then clearly HKc;::; B.

To prove equality

we note that HK and B are both K-bimodules contained in GK

= EndD_(K), by Prop.3.3.2.

Now GK is semisimple and

hence so is B; moreover every simple submodule M of B is isomorphic to a simple submodule of GK and hence 1s of the form M = uK, where ~u Replacing u by uy (y

= ua 8 £

for all a

£

K and some s

£

G.

K) if necessary, we may suppose that

49

l.u

and still a.u

1

ua.

s

(for somes E G).

Hence a..u: l.ua.s: a.s, i.e. u is an automorphism of K, viz. s, and since u

EndD-(K), s fixes D, i.e. s

E

E

G, so s

E

GnB:

H; this shows that B = HK. That H is a group follows because B is a centralizer (being finite-dimensional over K, by. Jacobson-Bourbaki). that H is an N-group, let I ~

0; we must show that I

to show that I hence Ia. E

E

B

a.

E

X

-1

-1

So if (i),

Conversely,

£

L be such that

E

K satisfies

ex,

ex is an automorphism of K,

we have a homomorphism

K[t;o]

--:>

L

given by t

1-->

x,

and the generator of the kernel has the form tna - 1, where 67

a satisfies (i), (ii). • If we drop the condition that wo

= w,

then wo

= wv

for

some v and it is not hard to write down conditions on v for an extension to exist.

One can also give conditions for an

outer cyclic extension of degree n if K merely contains a primitive dth root of 1, where d is a proper factor of n. As a consequence we have a form of Hilbert's theorem 90: If L/K is an outer cyclic extension of degree n

Corollary.

with generating automorphism a, and c

(4)

X

a

E

L, then the equation

XC

has a non-zero solution in L i f and only if a an-1 cc .•• c

(5)

1.

• f'les (5) . Cl ear 1y c ; a -l a 0 satls

Proo f .

holds, we have (erA )

n

c

Converse 1y, l'f (5)

; 1, where A denotes left multiplicac

tion by c, for the left-hand side maps x successively to a a a 02 a an 0 n-l x, ex, c x , .•. ,cc .•• c x ; x. Thus we have

1] ; 0.

x[(a;\ )n c

This has the form xp(a) [erA p(a); now xp(cr)

= 0

c

- 1] = 0 for some polynomial

can be considered as a differential

equation (with respect to D

=a -

1) of order n-1, so its

solution space has dimension~ n-1, hence there exists a such that ap(a) = b f o, and b(aA - 1) c so (4) holds for x = b-l. • In a similar way one can show that x 0 solution if and only if c

68

crn-1

a

... c c

1.

= o, i.e. cba =

E

L

b.

ex has a non-zero

We also note the following criterion for reducibility: Proposition 3.5.4.

Let L be a skew field with an automor-

phism o of order n and a primitive root of 1, w, in its Then for any a

centre.



L, tn - a is either irreducible

over L[t;a] or splits into factors of the same degree.

In

particular, i f n is prime, tn- a is a product of linear factors or irreducible according as

=

a

an-l

a XX

••• X

has a solution or not.

Proof.

Let p = p(t) be an irreducible left factor of tn- a,

then so is p(wvt), for v

= l, ••• ,n-1,

therefore tn- a=

p 1 p 2 ... prq' where p 1 = p(t) and pi= p(wvit). If r is chosen v as large as p0ssible, each p(w t) is a factor of p 1 p 2 ... pr; ~n fact this is their least common right multiple and so is unchanged by the substitution t

1-->

wt.

This means that it

~s a polynomial in tn, of positive degree, and a factor of tn

a.

Hence it must be tn - a, i.e. q

=

1 and we have

proved the first part. Now if p has degree d, then dJn, hence when n is prime, d = l or n, and now the last part follows from the identity

t

n

-

a

( t - b)(tn-l + t

n-2 an-1 2 n-1 b + •.. + b 0 b 0 .•• b 0 ) + n-l + bb 0

•••

b0

-a. •

We now turn to the case where n is a power of the charace char K. Observe that in this case (2) teristic, n = p p reduces to Dn

= 0.

Proposition 3.5.5. degree n

pe, p

Write LV

{x e L

Let L/K be an outer cyclic extension of

= char J

K, with automorphism a

x(v)

=D

+ 1,

0}, then each Lv is a right

K-space of dimension v and

69

CL

K

L,

n

L

By Th.3.5.1, [Lv:K]R ~ v

Proof.

equality.

v

v+ 1

and for v

=

l, ... ,n-1).

(v

=L'

n we have

We shall use induction on n-v, thus assume that

L 1 has a right K-basis a = l,a 1 , ... ,a. We claim that 0 \) v+ = a, then Ea.a 1• a1 ' ' ' ' ' ' a'\) are a K-basis for L\) • If Ea!a. 1 1 1 a E K and by the linear independence of a , .•. ,a 0

..

= •.•

a1

=a\!=

a,

thus al, .•.

,a~

are linearly independent;

they belong to Lv so this shows that Lv \)

Since L'

= Ln- 1 ,

Corollary 1. Corollary 2. by aPi (i

=

L~+l and

E

s

a (a

L) has a solution in

8

Ln_ 1 · •

The subspace L i is the subfield of L fixed

= O,l, •.. ,e).

P

This follows because DPi

(o - l)Pi

a Pi - 1..

Let us define the trace of a E L as tr a Then n-1

D

[Lv :K]R

we have

The equation x'

L if and only if a

we have

\)

(a _

(a - l)n (a - 1)

0 n-l

n

(J (J

=

n-1 (J\) E0 a .

- 1 - 1

n-1 v L:o a '

hence (6)

tr a

a

(n-1) ·

for any a E L.

This formula enables us to prove a normal basis theorem: Theorem 3.5.6.

=P

n

e

(p

The outer cyclic extension L/K of degree

= char

K) has a normal basis, and a tive if and only if tr a ~ a. For tr

a~ a
a(n-l)

~a


8

a i Ln-l'

L is primiThus for

any a i Ln_ 1 , a(v)E L , a(v)i L and hence a,a' , ... , (n-1) n-v n-v-1 a form a basis of L = L. • n

e

We shall only determine extensions of degree p, the case

P , e > 1, follows by repetition (for details see Amitsur 70

[54]).

We shall write/

xp - x; let us also recall the

Jacobson-Zassenhaus formula (Jacobson [62] ,p.l87(63)): xp + yp + .\ (x,y), where

~

is a sum of commutators in x and y.

It follows

that the expression V{x) defined by (7)

V(x)

(t + x)

=

p

- t

p

when evaluated in K[t;l,D] lS a polynomial in x,x I , ... ,x (p) ' since e.g. [x, t] = xt - tx = x'. We first prove an analogue of Prop.3.5.4. Proposition 3.5.7.

Let L be a field of characteristic p

with a derivation D such that oP

.

polynoaual

t

p

-

= 0.

For any a £ L, the

a is a product of linear factors over

L[t;l,D] or irreducible according as the equation V(x) + a where Vis as in

Proof.

0

(7), has a solution in Lor not.

Let h be a monic irreducible factor oft

p

-a, of

degree d say, then the polynomials h(t + v) (v = 0,1, .•• , p p-1) are factors of t - a, Their least common right mul-

p

.

tiple is of degree ~ p and is a factor of t - a, hence lt p must be t - a. Now all the h(t + v) are irreducible of the same degree, so djp and either d

(t +

p

or d

= 1.

If

O, then

V(b) + a

hence t

=p

b/ - t

V(b)

-a= (t +b)

p

=

-a,

(t + b)((t +b)

p-1

- 1) and so

tp - a splits into linear factors.

Conversely, if tp - a p has a linear factor t + b, then (t + b) - V(b) - a = 71

(t + b)h(t), hence V(b) +a hast+ bas factor. has degree 0 in t, so V(b) + a

=

But it

0. •

We can now prove an analogue of Th.3.5.3. Theorem 3.5.8.

A skew field K of characteristic p has an

outer cyclic extension of degree p i f and only if there is a derivation D in K such that (i) Dp is inner, induced by a

£

K with a

D

= 0,

but D is outer (ii) V(x) + a "'0 has no

solution in K.

When this holds, tp - a is right invariant irreducible

~n

R

= K[t;l,D] and L = R/(tp- a)R, with generating auto-

morphism

Proof.

Again (i),(ii) ensure that tp- a

~s central and

irreducible, so when they are satisfied we have an extension. Conversely, let L/K be an outer cyclic extension of degree (J

p, then by Prop.3.5.5, L has an element y such that y = y + 1. D Hence c 1-> c = cy - y..c induces a derivation on K and we have a homomorphism K[t;l,D] -> L with t 1-> y. Here yp . inner, induced by a, and a D = o, while = a £ K, so Dp ~s V(x) + a = 0 has no solution in K, by the irreducibility p of y - a over K. •

72

4 ·The ·general embedding

4.1

The category of epic R-fields and specializations We now come to the fourth method of embedding rings in

fields listed in the prologue.

It is quite general in that

it provides a criterion for arbitrary rings to be so embeddable, and also gives a survey over the different possible embeddings. Let R be any ring.

We shall be interested in R-rings

that are fields, R-fields for short.

If K is an R-field

which is generated (as a field) by the image of R, we call K an epic R-field.

In fact K is an epic R-field precisely

when the canonical map R --> K is an epimorphism in the category of rings.

Our object

~s

to make the epic R-fields

(for a given R) into a category and we must find the morphisms.

To take R-ring homomorphisms would be too restric-

tive, for if f:K -->Lis such a map between epic R-fields, then f is injective (because the kernel

~s

a proper ideal

of a field) and im f is a subfield of L containing the image of R, hence im f isomorphism.

L (because L was epic), so f must be an

To obtain a workable notion of morphism let

us define a local homomorphism between any rings A, B as a homomorphism f:A

0

--> B whose domain A 1s a subring of A 0

and which maps non-units to non-units. means that the non-units in A

0

hence A

0

is then a local ring.

understand a ring A

0

If B is a field, this

form an ideal, viz. ker f, Generally by a local ring we

in which the non-units form an ideal

111

the quotient ring A /m is then a field, called the residuea

73

class field of A .

Of course when we are dealing with R-

o

rings, a local homomorphism is understood to have a domain which includes the image of R. Let f be a local homomorphism between epic R-fields K,L. If its domain is K , then by what has been said, K is a 0

0

local ring with residue class field K0 /ker f; this is isomorphic to a subfield of L containing the image of R, and hence L. (1)

Thus

K /ker f 0

~

L.

Two local homomorphisms are said to be equivalent if their restrictions to the intersection of their domains agree and again define a local homomorphism.

This is easily veri-

fied to be an equivalence; an equivalence class of local homomorphisms from K to L is also called a specialization. It can be checked that the composition of specializations is again a specialization (i.e. composition of mappings, when defined, is compatible with the equivalence defined earlier), and so we obtain for each ring R, a category FR of epic Rfields and specializations. At first sight it looks as if there may be several specializations between a given pair of epic R-fields.

E.g. let R

k[x,y] be the commutative polynomial ring over a field, K

=

k(x,y) its field of fractions with the natural embedding and L

=

k with the homomorphism R --~ L given by x 1--~ 0,

Y 1--~ 0.

We obtain a specialization from K to L by de-

fining a homomorphism a:k[x,y] --> L in which xa

= ya = 0.

Let K0 be the localization of k[x,y] at the maximal ideal (x,y), then a can be extended in a natural way to K .

We

0

observe that there are local homomorphisms from K to L that are defined on larger local subrings than K

(we can

0

'specialize' rational functions ¢(x,y) so that x/y takes on a specified value ink), but all agree on K0 just one specialization from K to L. 74

,

so that there is

In fact this is a

general property: between any two epic R-fields there is at most onfi specialization.

This will become clear later.

Of course for some rings R there will be no R-fields at all, e.g. R

= 0,

or for a less trivial example, any simple

ring with zero-divisors, say a matrix ring over a field. For any map R - > K must be injective and this is impossible when K

~s

a field.

Even entire rings R without R-fields

exist, e.g. if R is any ring without invariant basis number (Leavitt [57], Cohn [66']); R may be chosen entire and any R-ring is again without invariant basis number and so cannot be a field. What can we say about R-fields

~n

the commutative case?

Let R be a commutative ring and K an epic R-field, then K is of course also commutative (being generated by a homomorphic image of R). R

--~

The kernel p of the natural mapping

K is a prime ideal and K can be constructed in two

ways from R and p .

Firstly we can form R/p, an integral

domain (because p is prime), and now K is obtained as the field of fractions of R/p.

Secondly, instead of putting

the elements in p equal to 0, we can make the elements outside p invertible, by forming the localization Rp• ~s

This

a local ring and its residue-class field is isomorphic

to K.

The situation can be illustrated by the accompanying

commutative diagram.

The two

triangles correspond to the two methods of constructing K. The route via the lower triangle is perhaps more familiar, but unfortunately it does not seem to generalize to the non-commutative case; we therefore turn to the upper triangle.

Even this cannot be used as it stands, for as we

have seen, the field of fractions need not be unique, which means that ~n general an epic R-field will not be determined by its kernel alone. 75

Thus to describe an epic R-field we need more than the elements which map to zero, we need the matrices which become singular.

Here we use the fact that for any square

matrix A over a field K (even skew) the following four conditions are equivalent:

A has no

left

inverse,

right A ~s a left zero-divisor.

right

A matrix A with these properties

~s

called singular, all

others (if square) are called non-singular.

Given any R-

field A:R --> K, by the singular kernel of K (or A) written Ker A, we understand the collection of all square matrices over R(of all orders) which map to singular matrices over

K.

Let P be the set of all such matrices, then we can de~

fine a localization case) as follows.

(analogous to Rp in the commutative

In 1.2 we met the notion of a universal

S-inverting ring; we shall need the corresponding construction when S is replaced by a set of matrices over R. Let Z be a set of matrices over R, possibly of different orders, but all square (this is to avoid pathologies, because we want to make the matrices in Z invertible). For every n x n matrix A in ~ we choose n 2 symbols a!. which we ~J

adjoin to R, with the defining relations (in matrix form) (2)

AA'

A'A

I,

where A'

(a!.). ~J

The resulting ring is denoted by RI and called the universal I-inverting ring.

Clearly the natural homomorphism A!R --> RI

is I-inverting, in the sense that all matrices in I map to invertible matrices over RI (an inverse being provided by (2)), and every I-inverting homomorphism f:R --> R' can be 76

factored uniquely by A (the universal mapping property). The proof is the same as for Prop.l.2.1. We can now describe the construction of an epic R-field in terms of its singular kernel.

Let K be an epic R-field,

P its singular kernel and I the complement of p l.n the set of all square matrices over R.

Thus I consists of all

square matrices over R which become invertible over K. Then the universal I-inverting ring RI is a local ring, with residue-class field K.

We shall soon see a proof of

this fact, but we note that it does not solve our problem yet.

For we would like to know when a collection of ma-

trices is a singular kernel, just as we can tell when a collection of elements of R is a prime ideal.

In fact we

shall be able to characterize singular kernels in much the sarue way in which kernels of R-fields in the commutative case are characterized as prime ideals. 4.2

The construction of epic R-fields A basic step in the construction of an R-field is the

description of its elements as components of the solution vector of a matrix equation. morphism f:R (Af)

-1

-->

Given a I-inverting homo-

R', the set of all entries of matrices

, where A£ I, is called the I-rational closure under

f of R in R'.

It is not hard to give conditions on I for

this I-rational closure to be a ring; in fact they correspond to the condition of being multiplicatively closed in the commutative case for sets of elements.

So we define a

set I of square matrices over a ring R to be multiplicative if it includes the 1 x 1 matrix 1 and for any A, B E E we C of the right size. In have ( A 0 B £ I for all matr1ces

c)

any homomorphism f:R

.

-->

R' the set of all matrices inis inver-

verted over R' is alwa~s mul~iplicati~e'[!o~)l tible and if A, B are 1.nvert1ble, so 1s 0 B , with inverse

77

The characterization of the rational closure, which is at the basis of all further development (Cohn [71"]) stems from the rationality criterion for formal power series due to Schutzenberger Theorem 4.2.1.

[62]

[68].

and Nivat

Let R be a ring and E a multiplicative set

of square matrices over R.

Given a E-inverting map f:R --> R',

the following conditions on x

£

R' are equivalent: R in R',

(a)

x lies in the E-rational closure under f of

(b)

x is a component of the solution of a matrix equation

Au + a

(1)

o,

where A

£

Ef, and a is a column

over Rf, x is a component of the solution of a matrix equation

(c)

Au

(2)

e,

where A

Ef and e is a column of the iden-

£

tity matrix. Moreover, the set of all these elements x is a subring of

R' containing Rf. Proof.
(a) and (2) is a special case of (1), so (c) => (b).

To prove (b) => (c) we note that if Au + a

= 0,

then

o, so when (b) holds, each component satisfies an equation of type (2) and so (c) holds. To prove that the rational closure is a ring containing Rf we use (b): For any c

78

£

Rf we obtain cas solution of

l.u- c

o.

Now let ul, vl be the first components of the solutions of Au + a = o, Bv + b = o, then ul - vl is the first component of the solution of =

-(UlJ

where A= (a 1 , ... ,an), u - u' . Further, first component of the solution of

1.s the

and the matrices of these systems lie in Zf, because Z is multiplicative.

This shows that the Z-rational closure con-

tains Rf and admits sums and products, hence it is a ring.• This theorem shows that every component of the rational closure can be obtained as some component u. of the solution 1. of a matrix equation Au

a.

Here A is called the denominator of u. and A., the matrix 1. 1. obtained by replacing the ith column of A by a, is called the numerator of u .. This usage is justified by Cramer's 1. rule, which states that when R is commutative,

u.

l.

det A. 1. det A

In the general case we no longer have this formula (because we do not have determinants), but we have the following substitute, still called Cramer's rule:

79

Proposition 4.2.2.

Let u. be the ith component of the solu~

tion of Au = a, where A is invertible, and let A. be the ~

matrix obtained by replacing the ith column of A by a, then

u. is a { l~ft ~

} { zero~divisor } i f and only i f · un~t

r~ght

(in the matrix ring).

Proof.

Take i

=

1 for simplicity and write again

(a,a 2 , ••• ,an)

then A1

A(Ul OJ u• I

= A [u1 0

~

U= (Ul) u' '

= (Au,Ae 2 , .•• ,Aen) =

0)I (1u• OJI ,

full matrix ring Rn) to

A. is one

Thus A1 is associated (in the

(~1 ~)

and now the result follows

because being a zero-divisor or unit is preserved by multiplying by a unit or bordering with I. • Anticipating a definition from 8.1 we may say that u 1 is stably associated to A1 . As an application let us show how to construct epic Rfields from their singular kernels. field, Let

P'

~:R -->

Let K be an epic R-

K the canonical map and

be the complement of

P in

P the

singular kernel.

the set of all square

matrices over R, then the universal P'-inverting ring, which should be written



is usually written

we write Rp in the commutative case), of

P it

follows that

~

~

(just as

From the definition

can be factored uniquely by A:R -->

to give a map a:Rp --> K. with residue class field K.

We claim that

Rp

is a local ring

This will follow if we show

that every element not in ker a is invertible.

For then ker a

~s

the unique maximal ideal of Rp and its residue class field is a subfield of K containing the image of R, hence equal to K, because K was an epic R-field. Let u 1 ~ . AA

equat~on

80

Rp

be the first component of the solution of an

u + a

= O,

where A is a matrix over R which be-

Rp

comes invertible over K, and define A1 as in Cramer's rule. -h · 1nvert1 · 'bl e, h ence AAa ~ · · If Ua1 4r 0 , .~en u a1 1s 1 ~A 1s 1nverl A tible, by Cramer's rule, but this means that A1 i P, so A1 is invertible over a unit in

Rp.

~and

so, again by Cramer's rule, u 1 is

as claimed.

This result shows that any given epic R-field K can be reconstructed from its singular kernel.

What we need now

1s a simple way of recognizing singular kernels - just as 1n the commutative case the kernels of epic R-fields are precisely the prime ideals of R.

For this we need to de-

velop an analogue of ideal theory in which the place of ideals is taken by certain sets of matrices. In the first place we must define the operations of addition and multiplication for matrices; they will not be the usual ones of course, but more like the addition and multiplication of determinants. Multiplication:

As the product of two square matrices

A, B (over any ring R) we take their diagonal sum A

[~ ~).

Note that over a field A

+B

+B

is singular if and

only if either A or B is. Addition is more complicated, just as the addition of determinants is not straightforward, and in fact the latter provides the clue.

Let A, B be two matrices which agree

in all entries except possibly the first column: A= Ca 1 , a 2 , ... ,an)' B

= (al,a 2 , ... ,an)' then the

determinantal sum

of A and B is defined as the matrix

Similarly one defines determinantal sums with respect to another column or with respect to a row.

Of course it must

be borne in mind that the determinantal sum need not be defined.

As notation we shall always use A V B, indicating 81

in words the relevant column or row, when this is necessary to prevent confusion. We observe that over a commutative ring, where determinants are defined, one has det(A V B)

= det A

+ det B, when-

ever the determinantal sum (for any row or column) is defined.

Likewise, over a skew field, if two of A, B, A V B

are singular, so is the third.

Over a general ring there

is no direct interpretation, but, and this is the point, whether the operation is defined depends only on the matrices involved and not on the ring. The third operation we need is the analogue of zero, in our case a matrix which becomes singular under any homomorphism into a field. divisors, for if AB

Here one cannot just take zero-

= 0,

where A,B # 0, it may still happen

that under a homomorphism A becomes invertible and B becomes zero.

But there are some matrices that always map

to singular ones, e.g. the zero matrix, and more generally a matrix of the form

(~~- ~:J

n x n matrix A non-full if A r x n and r


1.

particular, Cr (I - BA)AnBn

0 and choosing r = n+l we find that I - BA

= C~n =

0 ••

Consider the following example due to Bergman [74]. R be defined by 27 elements, arranged as 3 U, V with defining relations UV P)U.

VU

x

Let

3 matrices P,

2

= I, P = P, UP = (I -

If R could be mapped into a field, then P could be

transformed to diagonal form, with O's and l's on the main diagonal; since P is similar to I - P, there must be equal numbers of both, but this is impossible, because the order is odd.

It is easy to see that R is entire and Bergman

shows that it satisfies Klein's nilpotence condition, therefore it is weakly finite, in particular I is full.

But by

Cor.l of Th.4.B, the unit matrix (of a certain size) can be written as a determinantal sum of non-full matrices.

Thus

R does not satisfy (ii). The conditions of Th.4.3.1 are not easy to apply; there ~s just one case where they can be checked without diffi-

culty, namely for semifirs, which were introduced in 1.1: Once the basic properties of semifirs have been derived, such a verification takes less than a page (Cohn [71"], 87

p.283), but since we have not developed this background here, we omit a detailed proof: Theorem 4.C.

Every semifir has a universal field of

fractions, obtained as the universal ring inverting all the full matrices.

We shall denote the universal field of fractions of R by F(R).

To prove the result one only has to verify (i),(ii)

of Th.4.3.1.

Here (i) follows from a form of Sylvester's

law of nullity for semifirs, while (ii) is a relatively direct calculation, based on the dimension formula dim(U + V) + dim (U

n V)

dim U + dim V

for finitely generated submodules of free modules over a semifir. To apply Th.4.C we derive a further consequence which tells us when a homomorphism can be extended.

A homomor-

phism f:R --> S between any rings is called honest if it keeps full matrices full.

Any homomorphism keeps non-full

matrices non-full, hence any isomorphism and in particular any automorphism is honest.

An honest homomorphism must be

injective, for an element c is non-zero if and only if it is full, as 1 x 1 matrix.

But an injective homomorphism

need not be full; here is an example. Let R

= k

be a free k-algebra on four gen-

erators and define an endomorphism a over k by xil--> x 1yi, y. 1--> x 2y. (i = 1,2). ·It is easily checked that a is ~

~

injective; but it is not honest, as the equation

shows.

The right-hand side

~s

not full, but the matrix to

which a is applied is full, since it can be specialized to 88

the unit-matrix which is full. Theorem 4.3;3. sendfirs.

Let f:R --> S be a homomorphism between

Then f extends to a homomorphism (necessarily

unique) between their universal fields of fractions i f and only i f i t is honest.

In particular, every isomorphism

between R and S extends to a unique isomorphism between their universal fields of fractions.

Proof.

Denote by

,

'±' the set of all full matrices over R,

S respectively, then if f

1S

honest, f

c

'±', and so the

mapping R - > s - > s'±' 1S -inverting. Hence there is a unique homomorphism f 1 :R --> S'±' such that the diagram shown commutes, i.e. f can be extended (in just one way).

Conversely,

R

----------~>

R

if an extension of f exists, any

flj

full matrix A over R becomes invertible over R and is mapped

s

-----------;>

s'!'

to an invertible matrix over SV. f

But this is the image of A , which must therefore be full. Hence f is honest, as claimed.

The rest follows since an

isomorphism is always honest. • The notion of an 'honest map' is chiefly of use for semifirs, because here the non-full matrices constitute the unique least prime matrix ideal. To get an idea of the usefulness of Th.4.C and Th.4.3.3 we really need to know how extensive the class of semifirs is.

In the commutative case semifirs are just Bezout do-

malus.

Somewhat more familiar are the principal ideal do-

mains; they may also be characterized as the Noetherian Bezout domains.

Analogously the semifirs contain as sub-

class the firs (=free ideal rings), which by definition are rings in which every right ideal and every left ideal is free of unique rank.

This class is far more extensive

than the class of non-commutative principal ideal domains. To give some examples, firs include (i) free algebras over 89

a field, (ii) group algebras of free groups, (iii) free products of skew fields. in more detail in Ch.S.

These examples will be examined For the moment we only note that

for any field k and any set X, the free algebra k is a fir; this is most easily proved by the weak algorithm, a form of the Euclidean algorithm adapted for use in free algebras (cf. Cohn [71"] ,Ch.2). Earlier in 1.1 we quoted the result from (Cohn [69]) showing the existence of n-firs from (n+l)-term relations without interference.

This can be used to show that for

any n > 1 there exists an n-fir not embeddable in a field. -

We take 2(n+l)

2

elements a .. , b .. (i,j = 1, ... ,n+l) arranged ~J

~J

as matrices A

(a .. ), B =(b .. ) with relations (in matrix

form) AB = I.

These relations satisfy the non-interference

~J

~J

condition, so we have an n-fir, but BA

~

I (by an easy nor-

mal form argument), so the ring constructed is not weakly finite and therefore not embeddable in a field.

= 1, ••• , n+l,

2(n+l) (n+2) variables a'iA.'bAi (i n+2) with relations AB = I

n+ 1

, BA

=

I

n+ 2

It we take A.

= 1, ••• ,

, we get an n-fir

with no R-field. These rings are of interest in that they enable us to answer the following problem raised by Mal'cev [73]. saw

~n

As we

1.1, the class of entire rings embeddable in fields

can be defined by quasi-identities, and we note incidentally that the conditions implicit in Cor.l,2 of Th.4.2.3 are easily put into this form.

Now Mal'cev asks whether this

class can already be defined by a finite set of quasiidentities.

The answer is 'no' (as for semigroups) and

this may be seen as follows:

Suppose there is a finite set

of first-order sentences which expresses the fact that a ring is embeddable in a field.

On replacing them by their

conjunction we obtain a single sentence A say, which ~s necessary and sufficient for a ring to be embeddable in a field. 90

Now let

Fn

be the class of n-firs for which A is

false, then Fn is an elementary class (cf. e.g. Cohn [65]) since n-firs can be defined by an elementary sentence.

Now

nFn ~s the class of semifirs not satisfying A, but every semifir n Fn

~s

= ~.

embeddable in a field and so satisfies A, hence Thus we have a family of elementary model classes

with empty intersection; by the compactness theorem logic (cf. e.g. Cohn [65], Mal'cev [73])

Fn

~n

=~for some n,

i.e. there exists n such that every n-fir satisfies A and ~n

so is embeddable

a field.

But this contradicts our

earlier findings and we conclude (Cohn [74"]): Theorem 4.3.4.

The condition for a ring to be embeddable

in a field cannot be expressed in a finite set of sentences.

In intuitive (though imprecise) terms we can say that embeddability in a field requires n-term conditions for arbitrarily large n.

This

embeddability in a group.

~s

in interesting contrast with

If R is entire, so that R*

~s

a

semigroup, a sufficient condition for the embeddability of R* in a group can be expressed in terms of 2-term conditions, for we have the following result (Cohn [71]): Theorem

4.D.

Let

R be a 2-fir in which every non-unit is

a finite product of irreducibles (i.e. R is 'atomic'), then

R* is embeddable in a group. This makes the difference between embeddability

~n

a

group and in a field rather clear, and it provides a simple answer to another question of Matcev's, whether an entire ring R exists such that R*

~s

is not embeddable in a field.

embeddable in a group but R To get an example we need

only take 24 generators A= (aiA), B = (bAi) (i = 1,2,3, A= 1,2,3,4), such that AB = 1 3 , BA = I 4 . This is an atomic 2-fir (Cohn [69]), hence R* is embeddable in a group, but there are no R-fields.

Other examples, using similar prin-

ciples, were found by A.J. Bowtell [67] and A.A. Klein [67]. L.A. Bokut [69] also gives an example of such a ring; his construction is more complicated, but unlike the other cases, his example is of a semigroup algebra. 91

5· Coproducts of fields

5.1

The coproduct construction for groups and rings Let A be any category; we recall the definition of the

coproduct.

Given any family (A.) of objects in ~

object S with a family of maps a natural correspondence f

1-->

~.:A. --> ~

A,

and an

S, this defines

~

~.f from maps S ~

-->

X to

families of maps A. --> X, thus a mapping ~

A(S,X)

-->

TI A(A.,X). ~

When this mapping is a bijection, the object S with the maps UA .. ~



~

is called the coproduct of the A. and is written ~

From the definition it is easily seen to be unique

up to isomorphism, if it exists at all.

Thus for sets we

obtain the disjoint union, for abelian groups the direct sum, for general groups the free product, but we shall return to this case below. Often we need an elaboration of this idea.

Let K be a

fixed object in A and consider the comma category (K,A): its objects are arrows K --> A (A s Ob A) and its morphisms are commutative triangles

~A K

t

--------A'

This category has the initial object K _!_> K; it reduces

92

to

A when

A.

K is an initial object of

Now the coproduct

in (K,A) is ·called the coproduct over K.

E.g., for two

objects K --~ A, K --~ B, this is just their pushout. Consider coproducts over a fixed group K in the category of groups.

This means that we have a family of groups (G.) ~

and homomorphisms G.--~

a.: ~

~

~.

~

:K

G. and the coproduct C with maps

-->

~

Cis a sort of 'general pushout'.

Clearly any element of K mapped to 1 by any



~

must be

mapped to 1 by every a., so by modifying K and the G. we J

~

may as well assume that each ~i

is injective; this means

that K is embedded in G. via ~

~-·

If in this situation

~

all the a. are injective, ~

the coproduct is called faithful.

If moreover,

G.a. n G.a. = ~

J J

~

K~.

~

for all

i I j, the coproduct is called separating.

These defini-

tions apply quite generally for concrete categories (i.e. categories where the objects have an underlying set structure).

Now for groups we have the following basic result

(Schreier [27]): Theorem 5.1.1.

The coproduct of groups (over a fixed group)

is faithful and separating.

This is proved by writing down a normal form for the elements of the coproduct C.

= K~. be the image of

Let K.

~

~

K in Gi and choose a left transversal for Ki in Gi of the formS. U {1}, thus G. ~

~

=

K. u S.K.. ~

~

Then every element of

~

C can be written in just one way as

(1)

u 1u 2 ••• unc

(n

~

0; u 'V e:

s.

~

'V

'

i

v-1

;. i c e: K). v'

clear how to write any element of c in this form; to prove the uniqueness one defines a multiplication of the It

~s

93

expressions (1) (which consists in a set of rules reducing the formal product of two expressions to normal form), and verifies that one obtains a group in this way. way (v.d. Waerden

[48])

A quicker

is to define a group action on the

set of expressions (1) for each group G.: ~ i

i with the understanding that u

p

f i and cg

n

=

n

=

u c' in G.' ~

i and u cg n

P

u c' in G., ~ p

~s omitted if cg s K.

~

the second case, if uncg sKi).

(or in

Now it can be verified that

these group actions can be combined to give a C-action on the elements (1), and the conclusion follows because the expressions (1) are distinct. • We have given this proof in outline since it is very similar to the corresponding proof for the coproduct of fields, which we shall soon meet. The coproduct of groups over a given group K is usually called the free product of groups with amalgamated subgroup K.

It is also possible to define a notion of free product

of groups with different amalgamated subgroups, where we have a family of groups (G.) and subgroups H.. of G. such that H.. ~J

= H... J~

~

~J

~

This can again be constructed as a co-

product, amalgamating H.. with H.. , but it will in general ~J

J~

be neither faithful nor separating (cf. B.H. Neumann [54] and the references given there). Our aim in this section is to describe the coproduct of rings.

Thus let K be a fixed ring and consider K-rings;

it is easy to see that coproducts always exist.

We simply

take a presentation by generators and defining relations for each K-ring in our family and write all these presentations together. separating.

But the coproduct need not be faithful or

Before finding conditions for it to be so we

look at some examples. 94

1.

Let k be a commutative field, K

a

k[t.J, where A is

a central indeterminate, R In R

k(>.), s = k[>.,l-ll >.l-l = S, the coproduct of R and S over K, we have

K

~ = 1.~

A-l.A~ = O,

o].

so S is not faithfully repre-

sented.

2.

The inclusion Z C Q

an epimorphism, and Q ZQ

H

=

Q.

Hence the coproduct ~s faithful, but not separating. More generally, if R --~ S is any ring epimorphism, one finds that S ~

S

=S

(cf. Knight [70

J) .

When the coproduct of rings ~s faithful and separating we shall often call it the free product. when does the free product exist?

Thus the question is:

We begin by giving a

necessary condition; for simplicity we limit ourselves to two factors. Let R1 , R2 be K-rings which are faithful, i.e. the mapping K --> R.~ is injective. If the free product is to exist, we must have

Clearly this also holds with R1 and R2 interchanged, and more generally, for matrix equations. If we take all implications of a suitable form we can obtain necessary and sufficient conditions for free products to exist, but they will not be in a very explicit form.

Syntactical criteria of a

rather different form have been obtained by D.A. Bryars. We now come to a simple sufficient condition for the existence of free products which will be enough for all the applications we have in mind.

This states essentially that

the free product of a family {RA} of faithful K-rings exists provided that each quotient module RA/K is free as right Kmodule.

A direct proof, following the pattern of the proof

of Schreier's theorem 5.1.1, is quite straightforward.

How95

ever, we shall want to prove a little more and therefore follow a slightly different approach, which will place us 1n a position to obtain Bergman's coproduct theorem in the next section.

I am indebted to W. Dicks for the presentation

of these results. We shall want to consider the coproduct of a quite arbitrary family {RA} (A E A) of K-rings. venient to write K to write A,A 1 ,A \1,1-1',

= R0

It will be con-

where we assume that 0

i

A, and

etc. for the typical element of A and

0

etc. for the typical element of AU{O}.

Theorem 5.1.2.

Let R0 be any ring, {RAIA

E

A} a family of

= u R their coproduct. If the o R A 0 quotient modules R~/R0 are free as right R0 -modules, then

faithful

R -rings and R

the coproduct is faithful and separating, and for each ll E

AU {0}, R/R

ll

(hence also R itself) is free as right

R -module. ll

Further, for each 1-1 E AU ~ llll~

R -module and put M =

{O}, let M be a free right ll

$M

ll

R.

Then the canonical R ll

ll

module homomorphism M --> M is injective and the cokernel ll

is free as right R -module. ll

(Equivalently: M

M $ M' ll

ll'

where M' is free as right R -module). ll ll Proof. For each A e AletTA U {1} be a right R0 -basis of

RA and for each Mll.

ll e A

U {0} let S be a right R -basis of ll

ll

We shall denote the disjoint union of the TA by T,

that of the Sll by S and call the elements of TA or SA associated with the index A; the elements of S

are not

0

associated with any index. We begin by proving that M

= @M

basis consisting of all products (2)

ll

~ R

s e S, ti e T, n

has a right R 0

~

0,

where no two successive terms are associated with the same ~.

96

To establish this fact, denote the set of all such

formal products by U and let F be the free right R -module 0

on U.

We shall define a right R-module structure on F and

show that F

= M.

An element u ~ U is said to be associated with A if its

last factor (an element of S or T) is associated with A. The set of elements of U not associated with A is denoted by UA Fix A E A, then we may write F = S,R @ (U\S )R , A

o

11.

o

and we can give the first summand a right RA-structure by identifying it with MA; we note that this still holds if we replace A by 0.

Now consider the free right RA-module . We h ave an R0 -l1near map: UIt RA -->

. on UA as b as1s: UARA

(U\SA)R A

o

1-->

given by formal multiplication (u,t)

£ U , t £TAU {1}).

ut (u

This map is easily seen to be bi-

jective, so we have an RA-module structure on (U\SA)R0 and in this way F becomes an RA-module, for each A £ A; all the R -actions agree, so F is in fact ari R-module. 0

Moreover, for each M

-->

~ E AU {0}

there is an R -linear map ~

F which is injective, with a free complement, so

~

there is an R-linear map M --> F.

To show that this is an

isomorphism we construct its inverse.

Let f

the canonical R -linear map and define f:U ~

if

:M ~

-->

S E

~

--> M

be

M by S • ~

By R -linearity this extends to a map f:F --> M which is 0

clearly the desired inverse.

This proves the second part,

= RA it shows that the coproduct is faithful. To show that it is separating we take M0 = R0 , MA = 0 (A E A), then we find that M = R has an R0 -basis consisting

and applied to MA

of all products t 1 t 2 ••• tn (ti ~ T), where no two successive terms are associated with the same A. Thus if 1,2 £ A, 1 f 2, then the R0 -submodule R1 + R2 has as basis T1 u T2U {1}, hence R1 nR 2

R0 and this shows that the coproduct is

separating. • 97

When R

is a field (the case of main importance for us),

0

all the conditions are easily satisfied and we obtain the The free product of any family {RA} of non-zero

Corollary.

rings over a skew field exists and is left and right free over each RA • •

The free product is known to exist under more general conditions; instead of requiring RA/R0 to be free it is enough for it to be locally free (i.e. every finite subset is contained in a free submodule) or more generally, flat, i.e. each RA is faithfully flat (cf. Cohn [59]).

But

we observe that over a semifir, 'flat' means the same as 'locally free' (Cohn [71"], Th.l.4.4,p.56). Th.5.1.2 provides a means of finding the homological dimension of certain R-modules, for a coproduct R: Proposition 5.1.3. and M

=@

(Bergman

[74]}.

Let R be a skew field 0

R, then

M ll

sup (hdR M,,). ll

Proof.

ll

1-'

Clearly it suffices to show that hdR(Ml.l

~

=

R)

Now R is left free, hence left flat over R , ll

therefore -

~

R converts a projective R -resolution of )l

ll

to a projective R-resolution of Ml.l ~ R and so hdRMl.l ~ R

Mll


-

hdR M

ll ll

~

but the latter equals hdR M , because M and M ~ R differ ll ll

ll

ll

only by a free summand.• 5.2

Projective modules over coproducts over skew fields In this rather technical section we show that the pro-

jective modules of a coproduct R

=

RRA 0

98

over a skew field

R,

R

0

are of the form 9P

module.

~

~ R, where P~ is a projective R~ ~

[74],

The results are due to Bergman

tion ~s again due to Dicks.

the presenta-

We fix a skew field R and reo

tain the notation of the proof of Th.5.1.2. For any A

E ~

M = MA 9 UARA.

we obtained a direct sum representation Thus for any u

UA we have a mapping p\u:

E

M --~ R\, namely the coordinate in RA corresponding to the term u,

Similarly, since M = UR , we have for any u

E

0

mapping p

ou and for any

:M --~ R •

For convenience we write U for U,

0

~ E ~

X of M to be {u



Ua

0

U {0} we define the

U~,xp

~u

+0

~-support

for some x



X}.

of a subset The 0-

support will also be called the support. The elements of the basis U will be called monomials; the degree of a monomial is its length.

Let us well-order the

sets S and T arbitrarily and then well-order U by degree and monomials of the same degree lexicographically, reading from left to right. Next well-order 1\U {0}, making 0 the least element, and then well-order (A U {0}) x U first by the degree of the second factor, then (for a given degree) lexicographically from left to right.

Finally let H be the set of almost-

everywhere zero functions

(~

U {0}) x U

--~

N, well-ordered

lexicographically reading from highest to lowest in (~ U {0}) X

U. Consider any non-zero element x in M; the monomials in

the support of x will be called its terms.

The greatest such

monomial (in the ordering of U) is called the leading term and its degree is the degree of x, written deg x.

If all

terms of x of degree deg x are associated with \, x is called \-pure,

If x is not \-pure, the greatest element

~n

the

support belonging to UA is called the \-leading term (this should perhaps be called the non-\ leading term).

If x ~s

\-pure for some \, it is called pure, otherwise it is impure or also 0-pure, and its leading term is then called the 099

leading term.

With these preparations we can state the main result of this section. (Bergman

Theorem 5.2.1

[74]).

Let R

=

R RA

be a coproduct,

0

where R is a skew field, and for any family {MA}, where MA 0

is a free RA-module, put M

=~

R.

MA ~

If L is any sub-

A

module of M, then for each ~ module L

E

A U {0} there is an R -sub~

of L such that the canonical map @ L

1l

1l

~

R-"' L 1l

is an isomorphism.

Proof.

Fot each A

A let LA be the RA-submodule of L con-

E

sisting of all elements whose A.-support does not contain

By L

the A.-leading term of any (non )..-pure) element of L.

0

we denote the R -module consisting of all elements of L 0

whose support does not contain the leading term of any pure element of L.

We claim that the family {L } has the desired 1l

properties. To prove that EL R choose y

E

L, y

t

1l

=

L, assume that this is not so and •

EL R so as to minimize h(y) ~

E

H, where

1 if u is ~n the ~-support of y and y is ]l-pure,

{0

h(y) (]l,u)

otherwise. Suppose first that y is pure, say )..-pure.

Since y

t

LA,

some monomial u in the )..-support of y is the )..-leading term of some non A.-pure element x of L. therefore x

Clearly deg x < deg y,

ELllR; further, there exists c

E

E

RA such that

the )..-support of y - xc does not contain u, and y - xc is either )..-pure with less A.-support or of lower degree than y, hence h(y- xc) < h(y). and so y

E

EL R. 1l

It follows that y- xc EEL R

Next if y is impure, then since y

t

1l

L , 0

some monomial u in the support of y is the leading term of some pure element x of L. 100

It follows that h(x) < h(y),

therefore x

E

EL R, and again for some c

LL R.

E

the support

0

of y - xc does not contain u, so h(y y

R

E

ll

xc)


L is injective

we shall isolate the following evident properties of the family {L } : ll

For each ll e:: A U {0},

A • ll

All elements of L

are )1-pure.

11

For all ll 1 ,11 2 e:: A U {0}, B

111].12

The lll -support of L



which is also the

].1 1 -leading

].11

contains no monomial u

term of a (non

xa, where x e:: L112 , a e:: R and i f Jl_l

=

)1 1 -pure)

].1 2 , deg xa > deg

element

x as

well.

Given ].1 e:: A U {0}, we choose for each monomial u that lS the leading term of some element of L

an element q

ll

E

L

ll

having this leading term with coefficient 1, and denote the set of all such q's by Q ,

From the well-ordering of U and

ll

property A

ll

it follows that Q is an R -basis of L • ll

Let A e:: A and for each u of an element of L

£

A

U

choose an element q

0

\l

0

that is the ).-leading term L

E

0

having u as

\-leading term with coefficient 1, and denote the set of all such q's by Q0 A.

By property A0 every element of R0 has a

\-leading term, hence by the well-ordering Q0 A lS an R0 -basis of L0 for each A e:: A.

We shall call the elements of QA

"associated with \", those of Q0 A "not associated with\", and write Q

=

U (Q 0 Au QA).

Further, V will denote the union

of Q and the set of all products. 0

q

£

Q,

t.

£

l

T,

n ~ 1,

where no two successlve terms are associated with the same index; thus if q

£

Q0 A then n

~

1, and t 1

£

TA. 101

We shall show that the elements of V have distinct leading terms and hence are right R -independent. 0

By the same

argument as in the proof of Th.5.1.2 we conclude that

= VR0

~ L~ 6l F

L and the proof will be complete.

From the lexicographic ordering of U it follows that if the choice of q

Q was determined by the monomial u, then

E

Thus we are led to con-

qt 1 ••• tn has leading term ut 1 ••• tn, sider an equality of the form

= u't'1''' t'n

u t 1''' t m

in

u;

if m > n say, this reduces to an equality of the form

u'. Let q

E

L

~2

, q'

L

E

~1

correspond to u, u' respectively;

there are two cases: Case 1, m > n. support of q'

E

If 11 1 ··

L

11 1

:f 0, then t

m-n

contains ut 1 ••• t

E

m-n- 1

T

11 1

, so the p 1-

which is also the

~1-leading term of the non p 1-pure element qt 1 ••• tm-n-l'

and this contradicts B

fll~2



If ~l

= 0,

then since the

support of q' E L

contains ut 1 ••• t which is also the m-n leading term of the pure element qt 1 ••• tm-n' we again have a contradiction to B • o

~1~2

Case 2; m = n, then u = u' is associated with A, say. Then q and q' belong to Q U QA, where ll ~s the index Oll associated with t 1 = ti if m > o, and ll o, QOJl Qo if m = 0. By the construction of the Q's, if q :f q' • they cannot belong to the same set, say q E Q011 , q' £ QA. Then the support of q

E

L0 contains a monomial u which ~s also

the leading term of a pure element q' .1, where q' this contradicts B0 A. 102

E

LA, and

This shows that the elements of V have distinct leading terms, henoe Vis an R -basis of L, as we wished to show.• 0

Corollary

1

(Bergman

[74];.

Let {RA} be any family of non-

R0 is a skew field, and R

trivial R0 -rings, where

RRA,

=

o

then

when the right-hand side is positive, and r,gl.dim,R < 1 when all the RA have r.gl.dim. 0.

Proof.

By Prop.5.1.3, r.gl.dim.R

~

supA{r.gl.dim.RA}.

Now

let M be a submodule of a free right R-module F, then by Th.5.2.1, M ~

~

M

~

].J

R, where each M

~s

].J

F, which is projective as R -module, ].J

an R -submodule of ].J

Hence hdR M


S induces a monoid homomorphism P(R)

-->

P(S), defined by

[P] J-->

[P ~

R-module by pullback along f. tor from rings to monoids. that

s],

where S is a left

In this way

P becomes a func-

Our objective will be to show

P preserves coproducts over skew fields.

More pre-

cisely we have Theorem 5.3.1 (Bergman [74]).

Let R0 be a skew field, {RA}

a family of non-trivial R -rings and R their coproduct, then 0

the map

(1)

Pit )

P(R,_)

-> P(R)

0

induced in the category of monoids is an isomorphism.

Proof.

We first prove

~njectivity.

distinct elements ~[L ], ~[M l..l

l..l

J

Thus assume that two

of the left-hand side of (1)

have the same image, i.e. there is an isomorphism (2)

ex:

Ql L

l..l

& R

--~

ED M

]J

Ill R.

We shall retain the notation of the proofs of Th.5.1.2 and 5.2.1 for M = @ M

11

L

l..l

ct

in M.

6l

R and identify L with its image 11

Since each L

~s

11

finitely generated, we can asso-

ciate with the map a an element h

s H defined by the rule

ct

1 if u is in the ]J-support h (]J 0 U)

a.

{ 0 otherwise.

We may assume the pair I[L ~n

l..l

J f

I[M

11

J

and the isomorphism a

(2) to be chosen so as to minimize h • a

With this choice

we claim that the family {L } satisfies the conditions A ]J ]J.

104

B of the proof of Th.S.2.1, and hence L a = M • It will lllll2 ll ll be important for later applications that this can be done without using the finite generation of the M nor the proJl

jectivity of the L , M • ll

ll

If A fails then some x E L is not ll 1-pure. Let lll lll u E ulll be the ]1 1 -leading term of x and consider the restriction p' = p jL (where p :M --~ R is as defined on lll u lll u lll lll u lll p.99). The image of xis a non-zero element of R , hence 0

p~ u is surjective and so splits; thus L

1 where L' lll

=

ker p'

lJlU

!& xR



lll lll and u is clearly not in the ll 1-support

Now x must be ll 2-pure for some llz I lllo

Taking a

!& xR and for ll I lll'llz new symbol x, we define L' = L llz llz llz put L' L There is an obvious isomorphism!& L' ~ R --~ ].l

!&

L

Jl

ll

].l

~ R sending ~ to x, and if we follow this by a we ob-

tain an isomorphism S: E& L' where hB differs from ha

~

R

--~

M.

The first place

either Cll 1 ,u) or (ll 2 ,u'), where I 0, then (ll 1 ,u) > (ll 2 ,u') by considering

If llz llz degrees, and for llz

u E u'R

ll ~s

= 0

this inequality still holds, by the

Thus ha(ll 1 ,u) = 1 > 0 hS (lJ 1 ,u), which contradicts the minimality of h , because

ordering by first components.

a

L[L'] = Z[L] f z[M ]. ].l ].l ].l

Hence A holds for all ].l

J1 E

A U {0}.

contain a moIf B fails, let the ll 2-support of L lll J.llll2 nomial u which is also the ll 1-leading term, with coefficient 1, of a (non ll 1-pure) element xa, where x

E

L

, a c R and

llz

there is a L lll y - xa (yp) (yp c R ) , unique element y¢ of M of the form y¢ lll whose J.l 1 -support does not contain u. Indeed, p = p].l' u llz• deg xa > deg x.

For each y

E

1

L ->R ; we observe that this is R -linear. Let ¢:M -~ M be lll lll ]11 the R-linear map which leaves every element of Lll (ll f J1 1 ) 105

L to y~. We claim that ~ is an ]ll automorphism, with inverse e leaving LJl (Jl ~ Jl 1 ) fixed and fixed and which maps y

E

L toy+ xa(yp). If Jll ~ !lz' then x E L is ll1 llz fixed by ~ and e. If Jll llz' then deg xa > deg x, so u is

sending y

E

~

not 1n the Jl 1-support of x and xis again left fixed by

e.

and

It follows that

~

and

e

are mutually inverse.

Now

consider the isomorphism y: ~ L & R ~> M !_> M. The first ]l place where h and h differ is (Jl 1 ,u) and it is clear that a

y

This contradiction shows that B holds. Y a ll1Jl2 we can construct V as.in the proof of Th.5.2.1 • h


~ (P @ Q ) ~ ]l

such that In

]l

]l

= n; if we choose the n and a so as to minimise h

Jl Jl a then as in the first part of the proof Rnll = P $ Q hence P is f. ]l ]l ll' ]l itely generated. Thus (1) is surjective, and hence an isomorphism. This result enables us to derive several useful consequences without difficulty (cf, Cohn [63,64,68], Bergman [74]): Theorem 5o3o2o

The coproduct of a family of firs over a

skew field is a fir,

In particular, the coproduct of skew

fields (over a skew field) is a fir.

Proof. 106

By Th,5.2.1, Cor.l, the ideals of the coproduct are

projective, and by Th.5.3.1 all projectives are free of unique rank, hence all ideals are free of unique rank. • In 4.3 we saw that any semifir (and in particular, any fir) has a universal field of fractions in which all full matrices become invertible.

So we conclude that the coproduct of

fields EA over a field K has a universal field of fractions. This will be called the field coproduct or simply coproduct ~

of the EA, written E

K F.

EA or

~n

the case of two factors E,F,

Let us recall from Cohn

[71"], ch.l that a ring is

Morita-equivalent to a fir if and only if it is an n x n matrix ring over a fir, and observe that Th.5.2.l. Cor.l,2 and Th.5.3.2 are statements about categories of modules and hence Morita-invariant.

We deduce

[74]).

Theorem 5.3.3 (Bergman

Let R

0

be ann x n matrix ring

over a skew field, {RA} a fanlily of faithful R0 -rings and R their coproduct over

(i)

R0 • {

r.gl.dim.R

Then

supA{r.gl.dim.RA} i f this is :f= 01 0 or 1 otherwise;

(ii) every projective R-module has the form

~

P

11

~

R, where

is R -projective 1

P

1l

1-1

(iii) P(R) = p*) P(RA).• 0

If we exam~ne the proof of Th.5,3.l, we find that it shows rather more than is asserted. ~:

(3)

$

M

1l

~

R

--~ i

N

1l

~

Consider a homomorphism

R.

If ~ arises from a family of R -linear maps ~ :M --> N , we 'Y

shall call it induced.

(4)

11

11

11

11

Next we observe that ~

R

(M

llz

i

R )

llz

~

107

R,

because R

~1

~

R

= R~2

~

R

= R.

An isomorphism (3) arising

by a transfer of terms as in (4) is called a free transfer isomorphism. A second kind of isomorphism arises as follows. Let e:M

--> R

~1

~1

be an R -linear functional, extended to M ~1

so as to annihilate MV for~ f ~ 1 •

Given x s M , we have a ~2

map a(x):R --> M defined by 11--> x s M ~ R~ M. Clearly v2 a(x)e = 0 if ~l f v 2 and this holds even for v 1 = ~ 2 if we add the condition x

ker e.

£

Then for any a s R the map

ea(a)a(x):u 1--> xa(ue) is nilpotent, and so 8

= 1-

ea(a)a(x)

is an automorphism of M; such an automorphism will be called a transvection, v-based in case

~l

=

~ 2 =~and

a s

R~.

The proof of Th.5.3.2 shows that every surjection (3) where M is finitely generated can be obtained as a composite of a ~

finite number of free transfer isomorphisms, transvections and an induced surjection. Proposition 5.3.4.

Let R0 be a skew field and {RA} a family

R0 -rings, then their coproduct R is entire and any

of entire

unit in R is a product of units in the RA.

We shall call such a unit a monomial unit. Proof. R

Each unit u s R defines an automorphism x 1--> ux of

= R0

R.

~

Here R is free on a single generator; any free

transfer isomorphism just amounts to renaming the generator, while a surjection is a unit in some R • V

The only transvec-

tion is the identity map, since it must be v-based for some

v,

but R

v

is entire.

This proves the second part.

Now the

assertion about zero-divisors is the special case n

=1

of

the next result. Proposition 5.3.5. integer.

Let R

0

be a skew field and n a positive

Then the coproduct of any family of n-firs over

R is an n-fir. 0

n

Proof.

For any map R --> R the image can by Th.5.2.1 be

written as ~ M ~ R, so there is an induced surjection

v

a: ~ Rn~ ~ R --> ~ M ~ R, where Ln ~

108

v

v

= n.

Since M is a

v

submodule of the projective R -module R, M ~s free of ]1 )1 rank at most n and so Ell M & R is free of rank at most n. ]1 ]1 If we repeat this argument with an isomorphism, we see that the rank must be unique. • Bergman [74] determines the units and zero-divisors of coproducts over skew fields in a more general situation. We can also obtain the elements of a coproduct which are algebraic over the ground field.

Let {RA} be a family of

entire rings over a skew field R ; their coproduct R is a 0

free product and is entire, by Th.5.1.2, Cor. and Prop.5.3.4. If a

E

R is right algebraic over R , it satisfies an equation 0

o,

+ ••• +

y.

~

R , not all o.

E

0

Without loss of generality a # 0, and since R is entire, we

# 0; on dividing by the constant term we

may assume that y

= 1.

may take y 0 ab + 1 R.

n

Then a(a

0, so a(-b)

=1

n-1

y

o

+ ••• + y

n- 1

) + 1

= o,

i.e.

and it follows that a is a unit in

By Prop.5.3.4, it must be a monomial unit and we can

write it as a if deg u

>

= p -1 up,

E

RA for some A, or

1, the first and last monomial factors of u are

in different rings RA. sume that deg up (5)

where either u

n

u PYO + u

= deg

n-1

u + deg p; then

pyl + ••• + upy n- 1 + p

= r.deg

But deg(urp)

In the latter case we may also as-

= 0.

u + deg p, so the first term in (5) has

greater degree than the rest and we have a contradiction. So this case can be excluded, and we obtain Let {RA} be a family of entire rings over a

Corollary 1. skew field R

0

and R their coproduct.

Then any element of

R which is right (or left) algebraic over R0 is conjugate to an

ele~nt

in one of the factors. •

It should be observed that not every element conjugate 109

to an element of some RA that is algebraic over R0 need itself be algebraic over R • over R0

,

p

-1

For if a satisfies an equation

0

ap may not do so; only when R0 is in the centre

of each RA and each RA algebraic over R0 does the converse of Cor.l hold. When different subfields are amalgamated in different factors Th.5.3.2 and Prop.5.3.4 no longer hold, in fact R need not even be entire.

This follows from an example for

groups due to B.H. Neumann [54].

Let k be a commutative

field and form the fields

Kl

k(x,y) with defining relation y

K2

k(y,z) with defining relation z

K3

k(z,x) with defining relation

-1 -1

-1

X

xy

X

yz

y

zx

z

-1

• • -1 -1

These fields can of course be constructed as fields of fractions of skew polynomial rings, e.g. K1 = L(y;a), where L = k(x) with automorphism a:f(x) J--~ f(x- 1 ). We form the coproduct P of K1 , K2 , K3 with amalgamations K12 = k(y), K23 = k(z), K31 = k(x). To see that this is a free product, we first form k(s,n,s) determinates. f(s,n,s)x

~n

three commuting in-

Next adjoin x subject to xf(s,n,s

-1

),

X

2



Then adjoin y subject to f(x,n,s)Y

= yf(x-1 ,n,s),

y

2

n,

and finally adjoin z subject to f(x,y,s)z

= zf(x~,y

-1

.~)

z

2



It is easily verified that the resulting coproduct is a 110

free product.

If e.g. there is a relation between x and y,

we can write it as a polynomial in y: +

~

Conjugating by

y

n

+ a 1 (x

+ a

2

z ~

2

0

n

a.

a. (x).

~

~

we find

n-1

)y

+ ••• + a (x n

~

2

)

o,

and by the uniqueness of the minimal equation for y, a.(x

2

~)

~

a.~ (x), hence a.~ is independent of x, soy is algebraic over k, clearly a contradiction.

Thus P is a free product, but in

P, xyz is an element of order 2: -1

xyz

thus (xyz)

5.4

yx

2

z = yz

-1 -1 x

Z

-1 -1 -1 y

X

1; this shows that P is not even entire.

The tensor K-ring on a bimodule Let K be any field, then the free K-ring on a set X, K,

may be defined by the following universal mapping property: K is generated by X as a K-ring, and any mapping X --> R into a K-ring R such that the image of X centralizes K (i.e. ya

ay for all a

E

K and y in the image of X) can be extended

to a unique K-ring homomorphism of K into R.

The elements

of K can be uniquely written as

(x.e:X,a . •

• • •x .

1

r

1

11 • •

.l.r

e:K) •

As is easily seen, K may also be represented as a coproduct K

i!!:

w

X

K[x] ,

111

where x runs over X, and since each K[x] is a principal ideal domain (and hence a fir), it follows from Th.5.3.2 that K is a fir.

We shall now outline another way of

establishing this fact which will be useful when we come to consider a generalization of K needed later. Let K be any field and M a K-bimodule. (n factors),

MfilMCil ••• CilM thus M1

We put

= M and by convention, M0

= K.

It is clear that

Mr filMs~ Mr+s, hence we have a multiplication on the direct sum T (M) = EB ~,

which turns it into a K-ring. K-ring on M,

This ring is called the tensor

It is shown in Cohn [71"], Ch.2 that this ring

possesses a weak algorithm and hence is a fir.

We shall not

repeat the proof here, but note that it yields another proof that K is a fir.

Let M

~

EB K and denote by x the X

element corresponding to 1 s K in the factor indexed by x s X. Thus the general element of M has the form Ea x (a X

almost all are 0), and it is easily seen that T(M)

X



K,

= K.

Let E be a field with K as subfield, and put (1)

E KK.

By Th.5.3.2 this is again a fir, but we can also obtain this ring as the tensor E-ring of a module. E-subbimodule spanned by x s X.

In EK consider the

Its elements are of the form

Ea.xb. (a.,b. s E) and it is clear from the definition that ~ ~ ~ ~ this module is isomorphic to E

~

E.

Hence we see that EK

can be described as a tensor E-ring as follows:

112

Since this is a fir (hence a semifir) we see by Th.4.C that it has a ~niversal field of fractions.

= K, the ring

When E

~

We shall write

is just the tensor ring K in-

troduced earlier, so no confusion need arise if we regard K as an abbreviation for

~,

and correspondingly de-

note the universal field of fractions by K{X}. To elucidate the relation between EK{X} and K{X} we need a lerruna. Lemma 5.4.1.

Let R, S be semifirs over a field K, then

the inclusion map

R u S

--~

R

--~

(i)

R u S is honest, (ii) the map

F (R) uS is honest.

Proof. (i) Consider the injections R --;:. R u S

--~

F (R) u S.

Let A be full over R, then it is invertible over F(R), hence also over F(R) uS, and so 1s full over Ru S, as we had to show. (ii) By (i) any full matrix over R is full over Ru S, hence we have a homomorphism F (R) - > F(R u S) (Th.4. 3. 3) and it follows that we have a homomorphism F (R) u S -> F(R

~

S).

Thus we have mappings

R u S --> F (R) u

S - > F (R U S).

Now any full matrix over Ru Sis invertible over F(Ru S) and hence is full over F (R) U S. • Proposition

5.4.2.

Let

K ~ E be any fields, then

E {X} ~ E ~ K{X}.

Proof.

Put R

K, then we have to show that

RUNT LIBRARY CARNEGIE-MELLON U"IVERSIH .

n,.,..,.n,.,

C,"l

D~ •,t·· ·,, V-','Jt,\

1~?1.


F u L --;:. F

F (E u L) •

U

Any full matrix over E u L is invertible over F U F (E u L), and hence full over F LJ L.

Thus F (E LJ L) is embedded in

F (F LJ L) and by Prop. 5. 4. 2, this is the result claimed. • Suppose that we have fields K ~ E

~

F, K ~ K'

~

F, then

we have a natural map

and when this map is honest, we obtain an embedding of EK{X} in FK 1 {X}, but in general (3) need not even be injective.

Thus let x

E

X and c

E

K1 n E, then ex - xc maps

to 0 under (3), but it is not itself 0 unless c

E

K.

Thus

a necessary condition for (3) to be injective is that (4)

K'

n

E

K.

Later in 6.3 we shall see that when K is contained in the centre of E, (4) is also sufficient for (3) to be honest. 5.5

Subfields of field coproducts Although Schreier had discussed free products of groups

114

~n

1927, it was not until more than 20 years later that

significant applications were made, notably in the classic paper by Higman-Neumann-Neumann [49].

Their main result

was the following

5.A.

Theorem

Let

G be any group with two subgroups A, B

which are isomorphic, say f:A --> B is an isomorphism. Then G can be embedded in a group H containing also an element t such that

t

-1

at

af

for all a

£

A.

We observe that this would be trivial if f were an automorphism of the whole of G: then H would be the split extension of G by an infinite cycle inducing f.

But for

proper subgroups A, B the result is non-trivial and (at first) surprising. co~sequences

It has many interesting and important

for groups and it is natural to try and prove

an analogue for skew fields. ~n

the category of fields.

What one needs 1s a coproduct However, we shall not adopt a

categorical point of view: the morphisms in the category of fields are all monomorphisms and this strictive.

~s

somewhat re-

Over a fixed ring, it is true, we have defined

specializations, but it would be more cumbersome to define them without a ground ring, and not really helpful. In this section we shall prove an analogue of the HigmanNeumann-Neumann theorem using the field coproduct introduced

~n

5.3.

But we shall also need some auxiliary re-

sults on subfields of coproducts.

It will be convenient to

regard all our fields as algebras over a given commutative field k; this just amounts to requiring k to be contained in the centre of each field occurring.

The proof of the

next result is based on a suggestion by A. Macintyre. Theorem 5.5.1.

Let K be a field and A, B subfields of K,

isomorphic under a mapping f:A --> B, where K,A,B are k-

115

algebras and f is k-linear.

K can be embedded in a

Then

field L, again a k-algebra, in which A and B are conjugate by an inner automorphism inducing f, i.e. L contains t

f

0

such that

af Proof.

t

-1

at

for all a

E

A.

Define K as right A-module by the usual multiplica-

tion and as left A-module by a.u

(1)

a

(af)u

A, u

£

E

K.

Let us form the K-bimodule K NA K, with the usual multiplication by elements of K; if we abbreviate 1 consists of all sums

~u.tv. ~

(u.,v.

~

~

E

~

~

1 as t, this

K) with the defining

relations at

(2)

t.af

.. (a

E

A).

By the remarks in 5.4, the tensor K-ring T(K

~A

and so has a universal field of fractions L.

K) is a fir,

Thus we have

embedded K in a field L in which (2) holds. • Let K be a field with k as subfield of the centre, then K is said to be finitely homogeneous over k, if for any elements a 1 , ••• ,a ,b 1 , ••• ;;b E K such that the map a. I-!> b. n n l l defines an isomorphism k(a 1 , ••• ,an) ~ k(b 1 , ••• ,bn)' there exists t

£

K* such that t

Corollary 1.

-1

a.t =b. ~

~

(i = l,ooo,n).

Every field K (over a subfield k of its centre)

can be embedded in a field (again over k) which is finitely homogeneous.

Proof.

Given a's and b's such that a.

~

1-!>

b. defines an ~

isomorphism, we can by Th.S.S.l extend K to include an element t f 0 such that t

-1

a.t =b., and the least such ex~

~

tension has the same cardinal as K or is countable. 116

If we

do this for all pairs of finite sets in K which define isomorphisms we get a field K1 , still of the same cardinal as K or countable, such that any two finitely generated isomorphic subfields of K are conjugate in K1 •

We now repeat

this process, obtaining K2 and if we continue thus we get a tower of fields

Their union L 1s a field with the required properties, for if a 1 , ••• ,an, b 1 , ••• ,bn £ Land ai 1--~ bi defines an isomorphism, we can find K to contain all the a's and b's, hence r

they become conjugate inK

and a fortiori in L.• r+ 1 A finitely homogeneous field has the property that any

two elements with the same (or no) minimal equation over k are conjugate. Corollary 2.

Hence we obtain Every field K (over a subfield k of its centre)

can be embedded in a field L in which any two elements with the same minimal equation over k are conjugate, as are any two

transcendenta~

elements.•

Let K be any field, then the group of fractional linear transformations PGL 2 (K), consisting of all mappings

x 1--~ (ax+ b)(cx +d)

-1

is well known to be triply transitive, i.e. for any two triples of distinct elements of Koo

=

K U { oo } there is a

transformation mapping one into the other.

With the help

of Cor.l we can (as P.J. Cameron has observed) construct a field on which PGL 2 is 4-transitive. We need only take the field of rational functions in one variable over GF(2) and embed it in a finitely homogeneous field L.

Given any two

117

elements a,b of L different from 0 and 1, there is an inner automorphism a mapping a to b.

Now by the result quoted

above, PGL 2 is triply transitive on L and the stabilizer of {0,1, oo} is still 4-transitive (by conjugation), hence PGL 2 00

is 4-transitive. Later we shall need an analogue of Cor.2 for matrices instead of elements.

Let A

E~

n

(K), then A is said to be

transcendental over k if for every non-zero polynomial

f

E

k[t] the matrix f(A) is non-singular.

Clearly if A is

a transcendental matrix over k, then the field generated over k by A is a simple transcendental extension of k.

The

next lemma and its application to the proof of.Th.5.5.3 are due to G.M. Bergman (cf. Cohn [73"]). Lemma 5.5.2.

Given a field K (over k) and n

~

1, let E be

a subfield of linn (K).

If F1 , F 2 are subfields of E which are isomorphic under a map ¢:F 1 --~ F2 , then there is an extension field L of K such that x¢

for all x e: F1 ,

for some T e: GL (L). n

Proof.

By Th.5.5.1 E has an extension field E' with an

element T inducing ¢.

Consider R =linn (K)

jt E'; by Tho 5. 3. 3

this is hereditary and every projective R-module is a direct sum of copies of P

~

R, where P is a minimal projective for

Since Pn ~ R =lin n (K) ~ R ~ R, it follows (by Th.l.4.2 of Cohn [71"]) that R is ann x n matrix ring over a fir,

linn (K).

say R =linn (G), where G ~s a fir containing Ko

Let L be

the universal field of fractions of G, then L contains K andlinn(L) contains the element T inducing

¢·•

Let K be a field and suppose that~ (K) contains ison

morphic subfields F1 ,F 2 ,F 3 with isomorphisms f:F 1 --> F2 , g:F 2 --~ F3 say, and such that F1 , F2 lie in a common subfield oflinn(K), as do F2 , F 3 o Then by Lemma 5.5.2 we can 118

enlarge K to a field L and obtain a unit X such that conjugation by_·X induces f.

Now F 2 ,F 3 still lie in a common subfield of9n (L) and we can enlarge L to a field M to obn

tain a unit Y which induces the isomorphism g and F 3 •

between F2

Now XY induces the isomorphism fg:F 1 --> F 3 ; in

this way the scope of the lemma can be extended.

As a

result we can prove Theorem 5.5.3.

Let K be a field (over k) and n ~ 1.

Given

two n x n matrices A, B over K, both transcendental over k, there exists a field extension L of K containing a non-singular matrix

Proof.

T such that T- 1AT

B.

Since A is transcendental, k(A)

lS

a purely transcen-

dental extension of k, thus if u is a central indeterminate over K, we have k(A)

~

k(u) and similarly k(B)

~

k(u).

We

shall take F 1 ~ k(A), F 2 ~ k(u), F 3 ~ k(B). Let K((u)) be the field of formal Laurent series in u over K, then (3)

in (K((u)) ) ll9n (K) ((u)). n

n

Now F1 , F 2 are contained in the subfield k(A)((u)) of (3), while F2 , F 3 are contained in k(B)((u)).

We can therefore

apply Lemma 5.5.2 and the remark following it and obtain an extension field H of K((u)) such that9n (H) contains a unit n

T inducing the k-isomorphism k(A) ~ k(B) defined by A

[-->

B.•

Clearly we can repeat the process for other pairs of transcendental matrices until we obtain a field K1 ~ K in which any two transcendental matrices of the same order over K are similar.

If we repeat the construction for K1 we get

a chain of fields (over k):

whose union

lS

a field with the property that any two tran-

scendental matrices of the same order are similar, thus we 119

have the Let K be a field (over k) then there exists an

corollary.

extension field L of K (over k) such that any two matrices of the same order over L and both transcendental over k are similar over L. •

This means, for example, that over L any transcendental matrix A can be transformed to scalar (not merely diagonal) We need only choose a transcendental element a of L;

form.

clearly a is transcendental as n x n matrix, therefore T-lAT =a for some T

E

GL (L). n

We shall return to this topic

in 8.4. Our next objective is to show that every countable field can be embedded in a 2-generator field.

This corresponds to

a theorem of B.H. Neumann [54] for groups. some lemmas on field coproducts.

We shall need

First we examine a situa-

tion in which a subfield of a given field is a field coproduct.

5.5.4.

Lemma

P

=K

aH(x),

Let

where xis an indeterminate centralizing H.

Then the subfield

-i

X

Kx

i

Proof.

(i

E

K be a field with a subfield H and let G of P generated by the fields K.~

Z) is their field coproduct over H.

Take a family of copies of K indexed by Z , say {K.}, ~

denote by R their coproduct over H and by U the universal field of fractions of R, thus U is the field coproduct of the K. over H. ~

By the universal property of U it follows

that the subfield Q described in the lemma is an R-specialiFrom the universal property of P = K 0 H(x), . H this specialization will be an isomorphism whenever there zation of U.

is some K-field L containing an element y # 0 such that the i -i specialization from U to L which maps K. to y Ky is an ~

embedding.

Such an L is easily constructed: the mapping

K.~ --> K.~+ 1 ~s an automorphism of R which extends to an automorphism a say, of U. Now form the skew function field U(y;a); it has all the properties required of L.• 120

We shall also need a result on free sets in field coproducts.

~iven

a field over k, by a free set over k we

understand a subset Y such that the subfield generated by

Y is free, i.e. isomorphic to the universal field of fractions of the free algebra k. Lemma 5.5.5.

Let E be a field, generated over k by a family

{eA} of elements, and let U be the field freely generated by a family {uA} over k, then the elements uA + eA form a free set in the field coproduct U

Proof.

0

k

The field coproduct G

E.

=U

0

k

E has the following

universal property: given any E-field F and any family {fA} of elements ofF, there is a unique specialization from G to F (over E, with domain generated by E and the uA) which maps uA to fA.

In particular, there are specializations

from G to itself which map uA to uA + eA (respectively to uA - eA).

On composing these mappings (in either order)

we obtain the identity mapping, hence they are inverse to each other, and so are automorphisms.

It follows that the

uA + eA like the uA form a free set.• We can now achieve our objective, the embedding theorem mentioned earlier; the proof runs closely parallel to the group case. Theorem 5.5.6.

Let E be a field, countably generated over

a subfield k of its centre, then E can be embedded in a 2-generator field over k.

In essence the proof runs as follows: Suppose that E is g enerated by e 0

=

0 '1 e '2 e ' · · · ·· we construct an extension

field L generated by elements x,y,z over E satisfying

y

-i

xy

i

= z

-i

xz

i

+ e.

1

(i

0,1, ••• ).

Then L is in fact generated by x,y,z alone. If we now ad. f"~e 1d join t such that y = txt -1 z = t -1 xt, the resu1t1ng is generated by x and t. 121

To prove the theorem, let F1 be the free fiel~ o~ x,y -l l over k; it has a subfield U generated by u.l = y xy (i = 0,1, ••• ) freely, by Lemma 5.5.4, and similarly, let F 2 be the free field on x,z over k, with subfield V freely genera-i i . 9 • ted by vi = z xz (l = 0,1, ••• ). Form K = E k F1 ; thls has a subfield W generated by w. = u. +e. (i 0,1, ••• ), l l l freely by Lemma 5.5.5. We note that w0 = u 0 + e 0 x0 = x ' so K is generated over k by x,y and thew.l (i -> 1). Let L be the field coproduct of K and F 2 , amalgamating We note that w

Wand V along the isomorphism w. v .• l

X= V 0

0

l

and that L lS generated by x,y,z and the w.l or also

by x,y,z and the v., or simply by x,y,z.

Now L contains the

l

isomorphic subfields generated by x,y and by z,x respectively, -1 -1 hence we can adjoin t to L such that t xt z, t yt = x (by Th.5.5.1).

It follows that we have an extension of L

generated by x,t over k and it contains K. • As usual we have the Corollary.

Every field over k can be embedded in a field L

such that every countably generated subfield of L is contained in a 2-generator subfield of L.

Proof.

Let E be the given field and EA a typical countably

generated subfield (always over k), then there is a 2generator field LA containing EA, by the theorem.

Let MA

be the field coproduct of E and LA over EA; if we do this for each countably generated subfield of E we get a family {MA} of fields, all containing E.

Form their field co-

product E' amalgamating E, then in E' every countably generated subfield of E is contained in a 2-generator subfield of E', namely EA is contained in LA.

Now repeat the process

that led from E toE': E C E' C E" C • • • C Ew C E w+ 1 C ••• CE, v where Ea.

122

U {E 13 J 13 < a} at a limit ordinal

a, and where v

is the first uncountable ordinal.

Then Ev is a field in

which every countable subfield is contained in some Ea (a< v) and hence in some 2-generator subfield of Ea+l~

E". • At this point it ~s natural to ask whether there is a countable field, or one countably generated over k, containing a copy of every countable field (of a given characteristic).

As

~n

the case of groups, the answer is 'no';

this is shown by the following argument, for which I am indebted to A. Macintyre. For any field K, let S(K) be the set of isomorphism types of finitely generated subgroups of K*. countable, then so is S(K). that there are c

2x

0

If K is

Now D.B. Smith [70] has shown

isomorphism types of finitely gen-

erated orderable groups.

Further, every ordered group can

be embedded in a field of prescribed characteristic, by the methods of 2.1; hence every countable ordered group can be embedded in a countable field.

It follows that there are

c distinct sets S(K) asK runs over all countable fields of

any given characteristic.

Therefore these fields cannot all

be embedded in a 2-generator field. 5.6

Extensions with different left and right degrees

In Ch.3 we examined a particular kind of binomial extension.

Given a prime number p and a primitive pth root

of 1, w say, in our ground field k, let us take a field E with an endomorphism S and an S-derivation D such that

(1)

DS

wSD

and construct fields K and L as in Th.3.4.4.

We then have

an extension L/K of right degree p and its left degree will

s

be > p if we can show that K ~ K. [K:K 5]L = ""• then [L:K]L

=

oo

More generally, if

But some care is needed here: 123

it is not enough to take [E:E 5] 1 shall have KS

0, (because S is then an inner automorphism of L, with inverse c 1--~ tct- 1). Likewise one can =

show that [L:K] 1 son [56],Ch.6).

K if D

= oo, for whatever S is, we

=

=

[L:K]R whenever K is commutative (Jacob-

To obtain the required example, let p be a prime number, k any commutative field containing a primitive root of 1, w say, and let A be any set.

When k has characteristic p this

We form the free algebra F 2 kon a family of indeterminates indexed by AxN • Let is taken to mean that w

=

1.

A~J

E

=

F(F) be the universal field of fractions of F.

On F we

have an endomorphism S defined by

This is an honest endomorphism because F5 ~s a retract of F: Let T be the endomorphism defined by

if j > 1, if J Then ST

= 1

1.

(read from left to right), hence if AS is non-

full, then so is AST

= A,

i.e. S is honest, as claimed.

It

follows that S extends to an endomorphism of E, again denoted by S. Next let D be the S-derivation of F defined by

D x>..ij

xA i+l j"

This again extends to an S-derivation of E, still denoted by D.

We now form L

= E(t;S,D) and K = E(tP;sP,nP), as in

Th.3.4.4, then L/K is a binomial extension of right degree p. 124

To show that the left degree is > p it is enough to

s

prove that K f K.

This can be shown quite easily whatever

A, e.g. we/could take A to consist of one element.

But we

are then left with the task of finding whether [K:K5]L is finite or infinite.

It is almost certainly infinite, but

this is not easy to show when

[A[ = 1,

whereas it becomes

easy for infinite A. For any

~

£

A denote by E the subfield of E generated ~

=1

over k by all x, . . such that j > 1, or j /\1]

thus we take all x's except x . 1 (i e: N). E

~

].J-

xAll

ES £

for all

~.

and E (t) ~

s

~]._ ~ L

-

~

s

K •

EJ.l (t) if and only if A f J.l•

and A f

~;

It follows that We claim that

Assuming this for the

moment, we see that the xAll are left linearly independent over K5 , for if LaAxAll = O(aA £ K5 ), and some aJ.l f 0, then we could express x].Jll in terms of the xAll' A f ].J, and so x~ 11 £

EJ.l(t), which contradicts our assumption.

That xAll

£

EJ.l(t) for A f J.l is clear from the definition.

To show that x].Jll i E~(t), writeR

EJ.l[t] (for a fixed ].J)

and observe that for any a

ta 5 + aD, hence

at - a

D

(mod R),

and so by induction on n, at If x].Jll

E

E, at

E

n _

=a

Dn

(mod R).

EJ.l(t), we would have x].Jll

Then x~ 11 g _ f then

=0

(mod R), and if g

-1

ig

, where f,g

= Lt Si' where Si

E

R.

e: EJ.l,

(mod R)

Here we have multiplied a congruence mod R by elements of R, which is permissible.

Thus we have 0

S.,S e: R. ]._

125

This is an equation in E[t], more precisely in the subring E [ t] u k (clearly this subring is a coproduct) and by 1-l 1-l~ equating cofactors of the x . 1 we see that S = s.~ = o, i.e. JH

g

= Oo But this is a contradiction, for g as denominator of

x'J..lll cannot vanish.

This proves that x'J..lll i El-l(t) and it

follows that [L:K] 1 ~ 111.1. Given any infinite cardinal a, take a set A of cardinal in this case.

ILl = a,

hence [L:K] 1 = a Thus we have found an extension with right

a and k a countable field, then degree p and left degree a.

If instead of p we have a com-

posite integer n, pick a prime factor p of n and combine an extension of (left and right) degree n/p with the previous case.

Similarly if

a is

an infinite cardinal < a, we can

start with an extension of degree Theorem 5.6.1.

a.

Thus we have proved

Given any two cardinals

a,a

of which at

least one is infinite, there is a field extension L/K of left degree a and right degree

S, and of prescribed charac-

teristic. •

Whether the left and right degrees can be both finite and different remains open.

On the face of it this looks un-

likely, but it does not seem an easy problem to decide.

126

6 · Gen-era I skew field extensions

6.1

Presentations of skew fields We have already discussed skew field extensions (in Ch.J),

but they were usually of a rather special sort, of finite degree (at least on one side). sions.

We now turn to general exten-

Of course it is no longer true, as in the commutative

case, that every simple extension of infinite degree is free, in fact we shall need to define what we mean by a free extensian. To reach the appropriate definition, consider a finitely generated field extension E

= K(a 1 , •• o,an).

As before we

shall take all our fields to be k-algebras, where k is a commutative field.

This represents no loss of generality

(in fact a gain): if k is not present we can take the prime subfield to play the role of the ground field.

Given E as

above, we have a homomorphism of K-rings: ~

(1)

--> E,

x.~

X

1-»

a. .• ~

Here ~ is the coproduct Kk k; it ~s called the tensor K-ring on X over k. of (1); since E

~san

epic

Let

P be

~-field

the singular kernel

(being generated by

the a. over K), it is determined up to isomorphism by ~

M be a set of matrices generating

P.

Let

P as matrix ideal, then E

is already determined by M; we write

(2)

E

= ~ {X;M} 127

and call this a presentation of E.

In particular we call

the a. free over K if the presentation can be chosen with 1

M

~;

this just means that (1) is an honest map, i.e. that

P consists of all non-full matrices and no others.

From

Cohn [71"] Ch.2 we know that ~ is a fir and so has a universal field of fractions, written

~

{X} and called the

free K-field on X.

Given any set X and any set M of matrices over

~,

we

can ask: When does there exist a field with presentation ~{X;M}?

(3)

Let (M) be the matrix ideal of

~

generated by M, then

there are two possibilities: (i)

(M) is improper.

Then there is no field (3), in

fact there is no field over which all the matrices of M become singular.

Here there is no solution

because we do not allow the 1-elernent set as a field. (ii) (M) is proper.

Now there is always a field over

which the matrices of M become singular, possibly more than one.

The different such fields corres-

pond to the prime matrix ideals containing (M), and there is a universal one among them precisely when the radical I(M) is prime.

In particular, this is

so when (M) is prime, and that will be the only case in which the notation (3) will be used. Let E be a field with presentation (3); we shall say that E is finitely related when M can be chosen finite, and E 1s finitely presented if X and M can both be chosen finite.

for groups we have Theorem 6.1.1

128

A finitely related field can be expressed

As

as the field coproduct of a fintely presented and a free field.

Let E = ~{X;M}, where M is finite,

Proof.

Then the set

X' of elements of X occurring in matrices from M is finite. Let X" be the complement of X' in X, then we clearly have 0

E = E'

E", where E' =~{X' ;M}, E" =~{X"}.

Here E' is

finitely presented and E" is free.• In the special case when E/K has finite degree, the above construction can be a little simplified. is surjective, not merely epic.

In that case (1)

Moreover, instead of taking

the free algebra, we can incorporate the commutativity relations as follows.

Let u 1 , ••• ,un be a right K-basis of E, then since E is a K-bimodule, we have the equations

(4)

a.u. = L:u.p .. (o;) J ~ ~J

{a. e: K) '

where a 1--~ (p .. (a)) is a homomorphism from K to K. n

~J

Let

M be the free right K-space on u 1 , ••• ,un as basis; by the equations (4), M becomes a K-bimodule, which contains K as submodule (as we see by choosing our basis of E so that u1

=

1).

Let ¢K(M) be the filtered ring on this bimodule,

constructed as in 2.5 of Cohn [71"]; by Th.2.5.1, l.c., ¢K(M) has weak algorithm and hence is a fir.

Now E is ob-

tained as a homomorphic image of ¢K(M); so we need to look for ideals in ¢K(M) which as right K-spaces have finite codimension, in fact the kernel of (1) in this case is a complement of M

~n

¢K(M).

But it is not at all clear how

this would help in the classification of extensions of finite degree. In practice most of the presentations we shall meet are given by equations rather than singularities, but the latter are important in theoretical considerations, e,g, when we want to prove that an extension is free we must check that there are no matrix singularities.

We shall return to this 129

question in 8.1. A special case occurs when E (and hence also K) is of finite degree over k.

In that case the singularity of a

matrix can be expressed by the vanishing of a norm and hence it will be enough to consider equations.

Only in the case

of infinite extensions are the matrix singularities really needed. 6.2

Existentially closed skew fields Let k be a commutative field.

k one usually understands a field (i)

k

By an algebraic closure of

k

with the properties:

is algebraic over k,

(ii) every equation over k has a solution in k. It is well known that every commutative field has an algebraic closure, and that the latter is unique up to isomorphism (though not necessarily a unique isomorphism, thus the algebraic closure is not a functor).

When one tries to

perform an analogous construction for skew fields one soon finds that it is impossible to combine (i) and (ii).

In

fact (i) is rather restrictive, so we give it up altogether and concentrate on (ii).

Here it is convenient to separate

two problems, namely (a) which equations are soluble (in some extension) and (b) whether every soluble equation has a solution in the closure. Of these (a) is a difficult question to which we shall return later, and for the moment concentrate on (b). ~o

The assertion that an equation f(x 1 , ... ,xn) has a solution can be expressed as follows:

Any sentence of the form 3a 1 , ••• ,an P(a 1 , ••• ,an)' where P is an expression obtained from equations by negation, con130

junction and disjunction is called an existential sentence. By an existentially closed field, EC-field for short, we understand a field K (over k) such that any existential sentence which holds in some field extension of K, already holds in K.

Clearly such a sentence can always be expressed

as a finite conjunction of disjunctions of basic formulae, a basic formula being of the form f = g or its negation, where f,g are polynomials

~n

x 1 , ••• ,xr, 1.e. elements of

~.

r

Now the negation of f = g, i.e. f # g, can again be expressed as an equation, viz. (f - g)y = 1, where y is a new variable, and any disjunction of equations can be expressed as a single equation, since (fl = gl)v

v(fn =

gn) holds if and only if (f 1 - g1 ) ••• (fn- gn) = 0.

So

we have reduced the sentence to a finite set of equations, and we see that K is existentially closed if and only if any finite system of equations which is consistent (i.e. has a solution in some extension field) has a solution in K itself.

For example, k itself is existentially closed

over k precisely when k is algebraically closed, but for K

~

k it may be possible for K to be existentially closed

even when k is not algebraically closed.

In fact we shall

see that every field K can be embedded in an EC-field, but the latter will not be unique in any way. Instead of the vanishing of elements, i.e. equations, we may equally well talk about the singularity of matrices. For if A= (a .. ) ~s any n x n matrix, let us write sing(A) lJ for the existential sentence

3U 1'"""' Un' V 1'"""' Vn ( I: a lj u j

= 0 A •••• A I: a . u. = 0/1.

nJ J

and nonsing(A) for the sentence

131

3x .. (i,j ~J

=

l, ••• ,n)(La. x. ~\1

VJ

= o~J ..

=

(i,j

l, ••• ,n)).

It is clear that sing(A) asserts that (over a field) A is singular and nonsing(A) -, sing(A), (where•P means 'not P'). Proposition 6.2.1.

From this it is easy to deduce

A field K over k is an EC-field if and

only if any finite set of matrices over

~

which all be-

come singular for a certain set of values of the x's in some extension of K, already become singular for some set of values in K.

Proof.

The condition for existential closure concerns the

vanishing of a finite set of elements, i.e. the singulaFity of 1 x 1 matrices, and so is a special case of the second condition, which is therefore sufficient,

Conversely, when

K is existentially closed, and we are given matrices A1 , ..• ,A

r

which become singular in some extension, then sing

(A 1 )"'.,. "' sing(Ar) is consistent and hence has a solution inK. • It is almost trivial to show that every field K can be embedded

~n

an EC-field, by first constructing an extension

K1 in which a given finite consistent system of equations over K has a solution, and then repeating the process infinitely often.

However there is no guarantee that the EC-

field so obtained will contain solutions of every finite consistent system over K.

For this to hold we need to be

assured that any two consistent systems over K are jointly consistent,

Of course this follows from the existence of

field coproducts: if equations over K, say

~l' ~ 2 ~i

are two consistent systems of

has a solution inK., then any ~

field L containing both K1 and K2 will contain a solution of ~l A ~ 2 • For L we can take e.g. the field coproduct 132

K1

° K2 ;

more generally a class of algebras is said to

possess the amalgamation property if any two extensions B1 , B2 of an algebra A are contained ~n some algebra c. An example of a class not possessing the amalgamation property is the class of formally real fields. To construct an EC-field extension of K we take the family {CA} of all finite consistent systems of equations over K and for each A take an extension EA in which CA Put ~

has a solution.

=

~ EA, then every finite con-

sistent set of equations over K has a solution in K1 • we repeat this process, we obtain a tower

whose

un~on

If

L is again a field, of the same cardinal as K

or countable (if K was finite).

Any finite consistent set

of equations over L has its coefficients in some K. and so ~

has a solution

~n

K. 1 • ~+

Thus L is an EC-field and we have

proved Theorem 6.2.2.

Let K be any field (over k), then there

exists an EC-field L containing K, in which every finite consistent set of equations over K has a solution.

When

K is infinite, L can be chosen to have the same cardinal as K, while for finite K, L may be taken countable. •

If a is any infinite cardinal, we can similarly construct a-EC-fields containing a given field K, in which every consistent set of fewer than a equations has a solution.

However, the EC-fields constructed here are not in

any way unique; even a minimal EC-field containing a given field K need not be unique up to isomorphism, as will become clear later on.

Further, it will no longer be possible

to find an EC-field algebraic (in any sense) over K. Sometimes a stronger version of algebraic closure

~s

needed, in which the above property holds for all sentences, 133

not merely existential ones.

We shall not need this stronger

form, and therefore merely state the results without proof. Let A be an inductive class of algebras (of some sort), i.e. a class closed under isomorphisms and unions of chains. By an infinite forcing companion one upderstands a subclass C of A such that F.l

Every A-algebra can be embedded in a C-algebra,

F.2

Ann inclusion C C C between C-algebras is elemen"' 1- 2 tary,

F.3

C is maximal subject to F.l,2.

Q is elementary if for every sentence a(x) which holds in P, a(xf) holds in Q.) It can (Recall that a map f:P

--~

be shown that every inductive class has a unique forcing companion (cf. A. Robinson [71], G. Cherlin [72], J. Hirschfeld-W.Ho Wheeler [75]). to skew fields.

All this applies in particular

Here we also have the amalgamation pro-

perty; but an important difference between commutative and non-commutative fields is that algebraically closed commutative fields are axiomatisable (we can write down a set of first order sentences asserting that all equations have solutions); the corresponding statement for EC-fields is false.

This follows from the fact that the class of EC-

fields is not closed under ultrapowers (J. HirschfeldW.H. Wheeler [75]). Although EC-fields do not share all the good properties of algebraically closed fields, they have certain new features not present in the commutative case.

For example,

the property of being transcendental over the ground field can now be expressed as an elementary sentence: (1)

transc(x):

J y' z (xy

yx

2

A

2 x z

This sentence is due to Wheeler (loco). 134

2

zx ;\ xz "' zx ;\ y "' 0). It states that there

is an element z commuting with x 2 but not with x, hence 2 . conJugate . 2 k(x ) C k(x); secondl y x lS to x, so k ( x) k(x 2 ),

=

J,

in particular, [k (x) :k J = [k (x 2 ) :k 2

and so, because

k(x) C k(x), the degree must be infinite.

Conversely,

when x is transcendental, (1) can be satisfied in some extension, and hence in the EC-field. Another sentence characterizing transcendental elements, in the case of a perfect ground field, was obtained by Boffa-v.Praag

[12];

3y(xy-yx

it is 1).

Wheeler has generalized (1) to find (for each n

~

1) an

elementary formula transc (x1 , .•• ,x ), expressing the fact n n that x 1 , ••• ,xn commute pairwise and are algebraically independent over the ground field; as a consequence he is able to show that every EC-field contains a commutative algebraically closed subfield of infinite transcendence degree, To describe EC-fields in a little more detail we need two results from Hirschfeld-Wheeler

[75], see also Macintyre

[75]. Lemma 6.2.3 (Zig-zag lemma) EC-fields over k, then K

~

If K, L are two countable

L i f and only i f they have the

same family of finitely generated subfields.

Proof.

Clearly the condition is necessary.

Conversely, let

K,L be countable EC-fields having the same finitely generated subfields.

Let K = k(a 1 ,a 2 , ••• ), L = k(b 1 ,b 2 , ••• ); we shall construct finitely generated subfields Kn' Ln of K and L =:> respectively such that (i) Kn C:::: Kn+l' Ln C:::: Ln+l' ( ii) Knk(a 1 , ••• ,an)' Ln ~ k(b 1 , •• , ,b n ) , (iii) there is an isomorphism between Kn+l and Ln+l extending a given isomorphism between K and L . - Since K =UK, L = UL, it will foln n n n low that K = L, by taking the common extension in (iii). 135

Put K = L = k· if K , L are defined, with an isomoro o ' n n Phism ~'~'n :Kn --~ Ln' let K'n = Kn (a n+l ), then K'n is finitely generated, hence isomorphic to a subfield of L containing an isomorphic copy of L • n

By Th,5.5.1, Cor.l, L can be emr

bedded in a finitely homogeneous extension, but L is an EC-field and hence is itself finitely homogeneous. can apply an

~nner

Thus we

automorphism of L so as to map K' onto a n

subfield 1 1 containing L , in such a way that the restricn

n

homomorphism~

tion to K is the n

n



Let

~ 1 :K' -~

n

n

L' be the n

isomorphism so obtained.

Now put L 1 = L 1 (b +l) and find n+ n n an isomorphic copy of Ln+ 1 in K; this will contain a subfield isomorphic to K' and by applying a suitable inner automorn phism of K we obtain an isomorphism of L 1 with a subfield n+ -1 Kn+ 1 say of K, which when restricted to 1 n1 is (~') • Now n Kn+l' Ln+l satisfy (i)-(iii) and the result follows by induction. • For the -second result let us write, for any subset S of K, C(S) for the centralizer of S in K: Proposition 6.2.4.

Let K be an EC-field over k and let

a 1 , ••• ,ar,b E K, then

This means that the formula 'bE k(a 1 , ••• ar)', not at first sight elementary (and in fact not so in the commutative case), can be expressed as an elementary sentence in an EC-field:

Vx

(2)

(a.x ~

xa. (i ~

1, ••• ,r)

~

bx

xb).

Write A= k(a 1 , ••• ,ar)' then if bE A, (2) clearly holds; if b i A, then (2) is false in K A(x) and hence Proof.

X

also inK, because the latter is existentially closed.• Taking r = O, we obtain a result which is well known in 136

the special case when k is the prime subfield: Corollary

1.

The centre of an EC-field over k is k. •

EC-fields are in some way analogous to algebraically closed groups, which have been studied by B.H. Neumann [73]; the next result is analogous to a property proved by Neumann for groups: Proposition 6.2.5

An EC-field cannot be finitely generated

or finitely related.

Proof.

Given a 1 , ••• ,an 3x,y(a.x

xa. (i

~

~

£

K, the sentence l, ••• ,n) A

xy

f yx)

k k(y);

~s

consistent, for it holds in K(x)

~n

K itself, and by Prop. 6.2.3 this means that K contains

an element y generated.

t

k(a 1 , ••• ,an).

hence it holds

Hence K cannot be finitely

In a finitely related field which is not finitely

generated, infinitely many generators occur

~n

no relation

and so generate a free factor. If x is one of them, then the sentence jy(x = y 2 ) is not satisfied inK, though clearly consistent, and this contradicts the fact that K is an EC-field.

Hence K cannot be finitely related.•

As we saw in 5.5, there are continuum-many non-isomorphic finitely generated fields, hence no countable EC-field can contain them all, i.e. there are no countable universal ECfields (Hirschfeld-Wheeler [75]).

However, it is possible

to construct a countable EC-field containing all finitely presented fields:

we simply enumerate all finitely presented

fields K1 ,K2 , ••• over k, form their field product over k and take a countable EC-field containing this product, which exists by Th.6.2.2.

The result is a countable EC-field con-

taining each finitely presented field over k. Any EC-field has proper EC-subfields, thus there are no minimal EC-fields. Theorem 6.2.6.

This follows from

Let K be an EC-field over k and c any element

137

of K, then the centralizer C of c in K is an EC-field over

k(c). Proof. C.

It is clear that k(c) is contained in the centre of

Now let f

(3)

r

0

be any finite set of equations in x 1 , ••• ,xn over C which has a solution in some extension of Cover k(c).

This means

that the solution also satisfies x c - ex

(4)

n

n

o.

Hence the equations (3), (4) are consistent and so have a solution in K.

By (4) this means that we have found a solu-

tion of (3) inC, soC is an EC-field over k(c), as claimed.• By taking intersections we get EC-fields over k itself. Such a construction can also be obtained in a more straightforward fashion: Theorem 6.2.7.

Let K be an EC-field over k and let a

E

K

be transcendental over k, then there exists b E K such that

ba

(5)

=

2

a b

r 0.

If C is the centralizer in K of such a pair a,b, then C is again an EC-field over k, and the inclusion C

~

K is an

elementary embedding.

Proof. f(a)

Since a is transcendental over k, the mapping 1-> f (a2) is an endomorphism a of k(a), so the system

(5) has a solution in some extension of K, and hence in K

itself.

Now given a,b satisfying (5), let C be their cen-

tralizer in K and let (6)

138

f

r

0

be a consistent system of equations ;n . . th e

· bl es x 1 , Since K is an EC-field, this system has a

••• ,xn over C.

var~a

solution inK.

Let c 1 , ••• ,cs s C be the coefficients occurring in (6) and consider the system consisting of (6) and (7)

x.z

= yc. •

c.y

1

J

J

This system is consistent: we form first K(y) and with the

1-->

endomorphism a:f(y)

f(y 2 ) form K(y)(z;a).

Hence (7)

has a solution in K itself; let us denote this solution also by xi' y, z.

1-->

Then the mapping y

a, z

1-->

b defines an

isomorphism

for both sides are obtained by first adjoining a central indeterminate y and then forming the field of fractions of the skew polynomial ring with respect to the endomorphism f(y)

2

1-->

that t

-1

f(y ).

By homogeneity there exists t

c.t =c., t J

x!, then x! ~

~

-1

J

yt

C and x!

£

~

a, t ~sa

-1

zt =b.

£

Now put t

solution of (6).

K such -1

xl..t

This shows

C to be an EC-field. To prove that the inclusion C

~

K is an elementary em-

bedding we need only show that every finitely generated subfield of K can be embedded in C.

Let c 1 , ••• ,cs s K and consider the system (7), but without the equations involving x .• ~

tion in K.

This system is consistent and so has a soluSince k(y,z) ~ k(a,b) with y

there exists t put t

-1

c.t J

£

K such that yt

cj, then cj

E

= ta + 0,

1-->

a, z

zt • tb.

C and k(c 1 , ... ,c 3 )

1-->

b,

If we

= k(cl, ... ,c~),

hence the result. • When K is countable, it follows from the zig-zag lemma (6.2.3) that C Corollary.

~

K and we obtain the

Every countable EC-field has a proper subfield

139

isomorphic to itself. •

An important and useful result due to Wheeler (l.c.) is that every countable EC-field has outer automorphisms; the proof below is taken from Cohn [75].

){

Every countable EC-field has 2

Theorem 6.2.8.

°

distinct

automorphisms, and hence has outer automorphisms.

Proof.

Let K be generated over k by a 1 ,a 2 , ••• , where the

ai are chosen so that ani k(a 1 , ••• ,an_ 1 ); this is clearly possible.

By Prop. 6.2.4 there exists bn commuting with

a 1 , ••• ,an-l but not with an. phism induced by b

n

Let ~n be the inner automor-

and consider the formal product

for a given choice of exponents Ei defines an automorphism on K.

= 0,1. We claim that a

Its effect on k(a 1 , ••• ,an)

is E

6

n n '

for when i > n, S. leaves k(a 1 , ••• a) elementwise fixed. n

1

Thus it is an endomorphism which is in fact invertible since each

s. l

is.

Since the E. are independent and each l

choice gives a different automorphism, we have indeed 2 K 0 distinct automorphisms; of course there cannot be more than this number.

Now a countable field has at most countably

many inner automorphisms, hence K has outer automorphisms. • This proof is of course highly non-constructive; since EC-fields themselves are not given in any very explicit form, there seems little hope of actually finding a particular outer automorphism. An important but difficult question is: Which fields are embeddable in finitely presented fields?

It would be in-

teresting if some analogue of Higman's theorem could be 140

established.

This asserts that a finitely generated group

1s embeddable in a finitely presented group if and only if it is recursively presented (Higman [61]).

6.3

A specialization lemma In this section we digress somewhat to prove a technical

result which is sometimes useful: Lemma 6.3.1

Let K be a field with

(Specialization lemma)

centre C and assume (i) C is infinite and (ii) K has infinite degree over C.

Then any full matrix over KC is

non-singular for some set of values of X in K.

Some preparations are necessary for the proof.

In the

first place we shall need Amitsur's theorem on generalized polynomial identities.

Let A be a k-algebra, then by a

generalized polynomial identity (g.p.i.) one understands

a non-zero element p of pings X

-->

A.

~

which vanishes under all map-

Amitsur [65] proved that a primitive k-alge-

bra A satisfies a g.p.i. if and only if it is a dense ring of linear transformations over a skew field of finite degree over its centre and A contains a transformation of finite rank.

We shall be particularly concerned with the case

where A is itself a skew field; in this case Amitsur's theorem takes the following form: A skew field satisfies a generalized polynomial identity i f and only i f it is of finite degree over its centre.

For the proof we refer to Amitsur [65].

A second result is the inertia theorem (Bergman [67], Cohn [71 "]).

Let R be any ring and

A

a subring, then

A

is

said to be n-inert in R if for any families (a;\) of rows in Rn and (b ) of columns in~ such that a,b ~

A,~,

h

there exists P s GL (R) -1

~

E

A for all

such that on writing a~ = aAP,

n

= p b , each product a~b' lies trivially in A, 1n the ~ A ~ sense that for each 1 = l, ••• ,n, either a~. = 0 orb'. =0 /\1 ].11

b'

~

or both

1

aH

and b'. lie in A. JU

If A is n-inert in R for all

141

n, it is called totally inert in R. Inertia theorem.

Now we have the

~ is totally inert in ~.

The theorem is proved in Cohn

[71"] (p.l03f.)

for a wider

class of rings; however the proof given there is not complete.

We therefore give a proof below (which it is hoped

is complete).

In the proof we shall need the weak algorithm;

for this we refer to Cohn [71"], and in fact, the reader willing to accept the inertia theorem will need only the following corollary in which the notion of inertia does not appear. The embedding ~

Corollary (to the inertia theorem). -:> ~

is honest.

For let C be a full matrix over F full over " F

~,

say C

= AB,

=~

which is non-

where A is n x r, B is 1\

r x n and r < n.

By inertia we can find P e GL (F) such

that on writing A'

=

AP, B'

-1

r

= P B, the product of any row

of A' by any column of B' lies trivially in F.

Since C is

full, A', B' cannot have. all their entries in F.

If the

(1,1)-entry of A is not in F, say, then the first row of B' is zero, and on omitting the first column of A' and the first row of B1 we can diminish r.

By induction on r we

obtain a contradiction; this proves the corollary, starting from the theorem.• We shall prove the inertia theorem in the following (slightly more general) form.

The proof follows Cohn-Dicks

[76]. Inertia theorem. and

"R

Proof.

Let R be a graded ring with weak algorithm, 1\

its completion, then R is totally inert in R.

For any a

E

"R the

order o (a) of a is defined as the

minimum of the degrees of the homogeneous components of a, or oo if a=O. Let

m = ~x

142

£

R

I

o (x) > 0},

then by the weak algorithm R/m is a field; moreover completion' of R in the m-adic topology. for the completion of m.

Ris

We shall write

the

m

Now m as a free right ideal of R

has, by the weak algorithm, a homogeneous basis X say, and

" can be uniquely written as a any a e: m

=

~xa

X

, where the

summation is over all x e: X and all but a finite number of the a

£

X

R are

zero.

o(a) > 1 +

(1)

We note that

mi~{o(ax)};

further~

if a e: m, then all the a lie in R. X "'r Let A F, hence a mapping E(t) --> F(t) and so an E(t)-ring homomorphism

(5)

E(t)k(t) --> F(t).

Now let A be a full matrix over Ek, then A is invertible over F, hence invertible over F(t) and by (5) it is full over E(t)k(t), as we had to show.• Let E be a field with centre k, then E(t) has

Lemma 6.3.5.

147

the centre k(t).

Proof.

Every element of E(t) has the form ¢ We shall use induction on d(~)

f,g s E[t].

= =

fg

-1

, where

deg f + deg g

to prove that if ~ is in the centre of E(t), then¢ For d(¢)

=

0 the result holds by hypothesis.

we may assume deg f

~

. deg g, replac~ng

By the Euclidean algorithm, f

= qg

+

~

by

k(t).

E

If d(¢) > O, -l 'f necessary. ~ ~

r, where deg r < deg g,

with uniquely determined q,r s E[t],

Let us write u

-1

c

= c uc,

for any c s E*, then fg

-1

q + rg

-1

q

c + r c( g c)-1 •

Since ¢ is in the centre of E(t), we have q + rg

-1

q

c

+

c -1 . r (g ) , ~.e. c

(6)

q - q

c

= r

= deg

Now v(¢)

c( g c)-1 - rg-1

g - deg f is a valuation on E(t), and the

left-hand side of (6) has value ~ 0, unless qc the right-hand side has positive value. are 0, q d(fg

-1

c

=q

and rg

-1

q, while

Hence both sides

is in the centre, but d(rg

-1

)
E(t)k(t) is honest, by Lemma 6.3.4, hence any full matrix over Ek is full over F(t)C(t) and hence over Fc.• 6.4

The word problem for free fields

The word problem in a variety of algebras, e.g. groups, 1s the problem of deciding, for a given presentation of a group, when two expressions represent the same group element.

In the case of skew fields we again have a presenta-

tion, as explained in 6.1, and we can ask the same question,

149

but the word problem is now a relative one.

Generally we

have a coefficient field K and we need to know how K is given.

It may be that K itself is given by a presentation

with solvable word problem, and the algorithm which achieves this is then incorporated in the algorithm to be constructed; or more generally, we merely postulate that certain questions about K can be answered in a finite number of steps and use this fact to construct a relative algorithm. Our aim here will be to show how to solve the word problem in free fields, and of the two alternatives described above we shall take the second, thus our solution will not depend on the precise algorithm in K but merely that it exists.

In fact it is not enough to assume that K has a

solvable word problem; we need to assume that K is dependable over its centre: Given a field K which is a k-algebra, we shall call K dependable over k if there is an algorithm which for each finite family of expressions for elements of K, in a finite number of steps leads either to a linear dependence relation between the given elements over k or shows them to be linearly independent over k. When K is dependable over k, K and hence k has a solvable word problem, as we see by testing 1-element sets for linear dependence.

Let K have centre C; our task will be to solve

the word problem for the free field KC{X}.

For this it will

be necessary to assume K dependable over C; this assumption ~s

indispensable for we shall see that it holds whenever

KC{X} has a solvable word problem. There is another difficulty which needs to be briefly discussed.

As observed earlier we need to deal with expres-

sions of elements in a skew field and our problem will be

to decide when such an expression represents the zero element.

But in forming these expressions we may need to in-

vert non-zero elements, therefore we need to solve the word problem already in order to form meaningful expressions.

150

This problem could be overcome by allowing formal expressions such as (a ~ a)

-1

; but we shall be able to bypass it alto-

gether: instead of building up rational functions step by step, we can obtain them in a single step by solving suitable matrix equations, as explained in 4.2.

In fact we

have the following reduction theorem. Theorem 6.4.1.

Let R be a semifir and U its universal field

of fractions, then the word problem for U can be solved i f the set of full matrices over

R is recursive.

Any element u 1 of U is obtained as the first component of the solution of a matrix equation Proof.

Au + a

o,

and u 1 = 0 if and only if~ = (a,a 2 , ••• ,an) is non-full. By hypothesis there ~s an algorithm to decide whether A1 is full or not, and this provides the answer to our question.• We note that it is enough to assume that the set of full matrices over R is recursively enumerable, because its complement, the set of all non-full matrices is always recursively enumerable (in an enumerable ring). In the same way one can show that for an epic R-field K the word problem can be solved if the set of all matrices inverted over K is recursive. We now come to the main result to be proved

~n

this sec-

tion: Theorem 6.4.2

Let K be a field, dependable over its centre

C, then the free K-field on a set X over C, U

= KC

{X} has

a solvable word problem and is again dependable over C. Conversely, i f the word problem in U is solvable (for an infinite set X), then K is dependable over C.

To prove the theorem we shall at first assume that C is infinite and the degree [K:C] is infinite.

Of course this

must be understood in a constructive sense: given n > 0, we can in a finite number of steps find n distinct elements of

C and n elements of K linearly independent over C.

Like-

wise, any other results we use will need to be put in a constructive form, e.g. the specialization lemma, and Amitsur's theorem on which it was based.

Thus given f ∈ K_C⟨X⟩, f ≠ 0, there is a method (an 'oracle') for obtaining a set of arguments for which f is non-zero in a finite number of steps. To prove Th. 6.4.2 we must describe an algorithm which will enable us to decide when a matrix A over F = K_C⟨X⟩ is full.

First we observe that being full is unaffected by

elementary transformations and by taking the diagonal sum with a unit matrix. This allows us to reduce A to a matrix linear in the x_i ∈ X, the process of "linearization by enlargement" (sometimes called 'Higman's trick', cf. Higman [40]). To describe a typical case of this process, suppose that the (n,n)-entry of an n x n matrix has the form f + ab. On enlarging the matrix we can replace the term ab by separate terms a, b by applying elementary transformations, as follows:

\[
\begin{pmatrix} f+ab & 0\\ 0 & 1\end{pmatrix}\longrightarrow
\begin{pmatrix} f+ab & 0\\ b & 1\end{pmatrix}\longrightarrow
\begin{pmatrix} f & -a\\ b & 1\end{pmatrix}.
\]

Here only the last two entries in the last two rows are shown.
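To make the two elementary steps explicit (a routine verification added here; it is not part of the original text), they amount to multiplying the enlarged corner by elementary matrices:

\[
\begin{pmatrix} 1 & -a\\ 0 & 1\end{pmatrix}
\begin{pmatrix} f+ab & 0\\ 0 & 1\end{pmatrix}
\begin{pmatrix} 1 & 0\\ b & 1\end{pmatrix}
=\begin{pmatrix} f & -a\\ b & 1\end{pmatrix}.
\]

Since the outer factors are invertible, the two matrices at either end are full together; combined with the fact that taking the diagonal sum with 1 does not affect fullness, this is exactly the reduction used in the text, with the product ab split into the separate entries a and b.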

By repeated application we can therefore reduce A to the form

(1)   A' = A_0 + A_1,

where A_0 is homogeneous of degree 0 and A_1 homogeneous of degree 1 in the x's. Thus A_0 has entries in K and A_1 = Σ_i B_i x_i, where the B_i have entries in K; moreover A' is full if and only if A is. Suppose that A' is not full; then it will remain non-full when the x_i are replaced by 0, i.e. A_0 must then be singular over K. Thus if A_0 is non-singular, A' (and with it A) is necessarily full. We may therefore suppose that A_0 is singular, of rank

r < N say, where N is the order of A'.

By diagonal reduction over K (which leaves the fullness of A' unaffected) we can reduce A_0 to the form

\[
\begin{pmatrix} I & 0\\ 0 & 0\end{pmatrix}
\]

(cf. e.g. Cohn [71''], Ch. 8; clearly this is an effective process because K is dependable over C). Let us partition A_1 accordingly; then

\[
A' = \begin{pmatrix} I-P & Q\\ R & S\end{pmatrix},
\]

where P, Q, R, S are homogeneous of degree 1 (and the sign of P is chosen for convenience in what follows). Now pass to the completion F̂ of F; by the corollary to the inertia theorem A' is full over F if and only if it is full over F̂. The matrix I - P is invertible over F̂ and by elementary transformations we obtain

(2)
\[
\begin{pmatrix} I-P & Q\\ R & S\end{pmatrix}\longrightarrow
\begin{pmatrix} I & (I-P)^{-1}Q\\ 0 & S-R(I-P)^{-1}Q\end{pmatrix}.
\]
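In the 1 x 1 block case this last step is just the familiar reduction (an added illustration, not in the original): if p is invertible, then

\[
\begin{pmatrix} 1 & 0\\ -rp^{-1} & 1\end{pmatrix}
\begin{pmatrix} p & q\\ r & s\end{pmatrix}
=\begin{pmatrix} p & q\\ 0 & s-rp^{-1}q\end{pmatrix},
\]

and over a field such a matrix is non-singular precisely when s - rp^{-1}q ≠ 0. The passage from A' to the matrix in (2) is the block version of this, with p replaced by I - P.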

To find whether

(3)   S - R(I - P)^{-1}Q = 0,

we have to check that for each ν = 0, 1, ... the homogeneous terms of degree ν are 0. Now S - R(I - P)^{-1}Q = S - Σ_ν R P^ν Q, and on equating terms of a given degree we find that (3) is equivalent to

(4)   S = 0,   R P^ν Q = 0   (ν = 0, 1, ...).

These are equations of matrices over F and since the latter is embeddable in a field, we may regard (4) as equations over

a field.

In that case the equations (4) follow from the

same equations with ν < N.

Assuming this for now, we thus

have an algorithm for determining whether (3) holds.

When

this equation holds, the matrix on the right of (2) has at least one row of zeros and hence A' is then non-full.

If

(3) does not hold, then by Lemma 6.3.1 we can specialize the x's within K to values a_i such that I - P remains non-singular and S - R(I - P)^{-1}Q remains non-zero. Translating back to A', we find that on specializing to the a_i we obtain a matrix of rank > r. We now replace x_i by x_i + a_i and start again from (1). This time we have a matrix A_0 over K of rank greater than r. By repeating this process a finite number of times

(at most N times, where N is the order of A'), we can thus decide whether or not A' is full and this completes the proof in the case where C and [K:C] are infinite. We still need to prove that the equations (4) all follow from a finite subset; this is related to the well known fact that a nilpotent n

x

n matrix A over a field satisfies

A^n = 0.

Lemma 6.4.3. Let P be an n x n matrix, Q a matrix with n rows and R a matrix with n columns, all over a skew field K. If

(5)   R P^ν Q = 0   for ν = 0, 1, ..., n - 1,

then R P^ν Q = 0 for all ν.

Proof.

Let ^nK be the right K-space of columns with n components. The columns of Q span a subspace V_0 of ^nK, while the columns annihilated by the rows of R form a subspace W of ^nK, and since RQ = 0 by hypothesis, we have V_0 ⊆ W. Regarding P as an endomorphism of ^nK we may define a subspace V_ν of ^nK for ν > 0 inductively by the equations

\[
V_\nu = V_{\nu-1} + P V_{\nu-1}.
\]

Thus V_ν = V_0 + P V_0 + ... + P^ν V_0 and it follows that

(6)   V_0 ⊆ V_1 ⊆ ... ⊆ V_{n-1} ⊆ ... .

Moreover, by (5), V_{n-1} ⊆ W. Since ^nK has dimension n, the chain (6) contains at most n strict inclusions, so V_{n-1} = V_n = ...; hence P^ν V_0 ⊆ V_{n-1} ⊆ W for all ν, i.e. R P^ν Q = 0 for all ν. •

Next let K be dependable over C and adjoin a central indeterminate t; we claim that K(t) is then dependable over C' = C(t). Given u_1, ..., u_n ∈ K(t), we can write u_i = f_i g^{-1} with f_i, g ∈ K[t], and it is enough to test f_1, ..., f_n for linear dependence over C'. Consider the leading coefficients of f_1, ..., f_n; if they are linearly independent over C, then the f's are linearly independent over C'. Otherwise we can find i, 1 ≤ i ≤ n, elements a_{i+1}, ..., a_n ∈ C and positive integers ν_{i+1}, ..., ν_n such that f'_i = f_i - Σ_{j>i} f_j a_j t^{ν_j} has lower degree than f_i. Now the linear dependence over C' of f_1, ..., f_n is equivalent to that of f_1, ..., f_{i-1}, f'_i, f_{i+1}, ..., f_n, and here the sum of the degrees is smaller. Using induction on the sum of the degrees of the f's we obtain the result.
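As a toy instance of this reduction (an added example with hypothetical data): take f_1 = a t^2 and f_2 = b t + c in K[t], and suppose the leading coefficients satisfy a = bγ with γ ∈ C. Then

\[
f_1' = f_1 - f_2\,\gamma t = a t^2 - b\gamma t^2 - c\gamma t = -c\gamma t
\]

has lower degree than f_1, and f_1, f_2 are linearly dependent over C' = C(t) if and only if f_1', f_2 are, since f_1' differs from f_1 by a C'-multiple of f_2. Iterating such steps lowers the total degree until either the leading coefficients become independent over C or the question reduces to a dependence question over C for elements of K, which the dependability of K over C decides.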


Now return to Th.6.4.2.

If in that theorem C is finite

or more generally, there is no constructive process of obtaining infinitely many elements in C, we adjoin a central indeterminate t, so that K, C are replaced by K' = K(t), C' = C(t).

By what we have just seen, K' is dependable over

C' whenever K is so over C. Moreover, C' is infinite in the constructive sense (e.g. we can take 1, t, t^2, ...) and [K':C'] = [K:C]. It follows that the set of full matrices over K'_{C'}⟨X⟩ is recursive, hence by Th. 6.3.6 (or even the special case Lemma 6.3.4) the set of all full matrices over K_C⟨X⟩ is also recursive.

Hence the word problem in U is soluble, so Th.

6.4.2 continues to hold even when the centre of K is finite. There remains the case where K has finite degree over its centre, or where no process for constructing infinite linearly independent sets exists. Instead of adjoining a central indeterminate we now form a skew extension. Let K be a field with centre C and let σ be an automorphism of K leaving C elementwise fixed. We form the skew polynomial ring R = K[y;σ] and its field of fractions K' = K(y;σ). If no power of σ is an inner automorphism, then

To see this we embed K' in the field

of skew Laurent series K((y;cr)) (cf.2.1). in the centre then fy \)

all c

v

~

ca0

E

K, hence c

0

a \)

O, it follows that ,

i.e. f

= a0

E

C.

= yf, hence

= a \) c. a =0 \)

a0 \)

If f

=

Iy\la

\)

lies

= a \) and cf = fc for

Since cr\1 is not inner for except when v

=0

and a c 0

Clearly K' is of infinite degree

over its centre, e.g. the powers of y are linearly independent. Taking C'

is honest,

C in Th.6.3.6 we find that the embedding

Moreover if K is dependable over C (and if cr

is 'computable' in an obvious sense) then so is K'. 156

For

let u 1 , ••• un fi,g

E

K'; as before we write u.

E

1.

= f.g- 1 , where 1.

K[y;o] and it is again enough to test £1 , ••• ,fn for

linear dependence over

c.

This time we single out the f's

of maximal degree, £ 1 , ••• ,fr say. If their leading terms are linearly independent over C, then so are the f's. · , r Oh t erw~se 1 et f. = f. - E. £.a. (a. s C) have lower degree ~

~

~+1

J J

J

than f. and continue with £ 1 , ••• ,£!, ••• ,£ as before; again ~ ~ n this process ends after a finite number of steps. To complete the proof of Th.6.4.2 we shall need another Lemma which will also establish the second part of the theorem. Lemma 6.4.4.

Let K be a field (over k).

If for every finite

set Y the word problem for the free K-field on Y over k is soluble, then the free K-field on any set X over k is dependable over k.

Proof.

Let U

= ~ {X} be the free field; we may assume X

infinite by embedding U in a free K-field on an infinite set containing X (that such an embedding exists follows from Th.6.3.6 but is also easy to see directly). u 1 , ••• ,un

E

U, we have to determine whether the u's are

linearly independent over k. the case n

Given

We shall use induction on n,

= 1 being essentially the word problem for U.

We may assume u 1 1 0, and hence on dividing by u 1 we may suppose that u 1

=

1.

Only finitely many elements of X

occur in u 2 , ••• ,un, so we can find another element in X, y say.

Write u!

1.

= u.y 1.

- yu. and check 1.

are linearly dependent over k.

If so,

where a 2 , •• o,an E k and are not all zero, then u = satisfies yu = uy. Since u does not involve y, it that u represents an element a of k (which can be computed by suitably specializing the x's), and hence l.a- E~uiai

= 0 is a dependence relation over k. Conversely, if there is a dependence relation E~uiai = O, where not all the ai vanish, then not all of a 2 ,.o.,an can vanish (because u 1 = 157

+ 0),

and so Ln2-u!a. = 0 is a dependence relation between 1 1 u2, ' .•• ,un. ' The result now follows by induction on n.• We note that since K is a subfield of U, K is dependable

1

over C; thus the dependability of K is a consequence of the solubility of the word problem for U (on an infinite set). This completes the p:oof of Th.6.4.2 when [K:C] is infinite. When [K:C] is finite, but K has a (computable) automorphism over C, no power of which is inner, we can form the skew function field K'

K(y;cr).

Then K' 1s of infinite

degree over its centre C and K' is dependable over C, hence the set of all full matrices over K'c is recursive, and so is the set of all full matrices over K , because (7)

c

is honest.

This solves the word problem for U.

Finally if K has no automorphism cr of the required kind, we form K'

K(t) with a central indeterminate t; as we have

=

seen, the result is a field K' with centre C(t).

1-->

Now ref(t 2 ). This

peat the process with the endomorphism f(t) is computable (in any reasonable sense) and it induces an auto!

l

morphism of K(t,t 2 ,t 4 ,

••• ) ,

no power of which is inner, hence

we obtain a field K" of infinite degree over its centre C. From Th.6.3.6 taking K

=

Corollary.

we obtain the following special case by

C: Let k be any commutative field with soluble

word problem, then U

k {X} has soluble word problem.•

There still remains the word problem for a free field ~{X}

where k is not the exact centre of K.

This really

requires a sharper form of the specialization lemma, but we shall not pursue the matter here. 6.5

A skew field with unsolvable word problem As is to be expected, for general skew fields the word

problem is unsolvable; an example was given by Macintyre [73].

The account below is a (slightly simplified) version

of another example due to Macintyre. '

158

The idea is to take a

finitely presented group with unsolvable word problem and use these ~elations in the group algebra of the free group. We need a couple of preparatory lemmas. Lemma 6.5.1.

Let F

be the free group on x , ••• ,x and F 1 n y the free group on y y In the direct product F x F 1••••• n" X

X

y

H be the subgroup generated by the elements x.y.(i = 1, 1 1 ••• ,n) and elements ul, ••• ,um E Fx. Then H n Fx is the

let

normal subgroup of Fx generated by u1 , ••• ,um.

Proof. Let N be the normal subgroup of F generated by the · -1 x_l u (~ s l, ••• ,m). S1nce x. u x. = (x.y.) u (x.y.) E H, it v 1 v 1 1 1 ~ 1 1 is clear that N ~ H, hence N ~ H n F • To prove equality, X

consider the obvious homomorphism f:F

X

x F y

which maps u

~

--~

(F /N) x F

y

X

to 1 (p

= l, ••• ,m).

If wE H n F , then wf X

is a product of the (x.y.)f, and since the x's andy's comr 1

1

mute, we can write it as wf = [v(x)v(y)]f

v(xf)v(yf),

where v is a word 1n n symbols. so v(yf)

Since w

E

F , wf X

E

F /N and X

1, but the y.f are free, so v is the empty word 1

and wf = 1, hence w E ker f = N. • Let F , F , H, N be as above and consider G X

y

= FX

x

F • y

This group can be ordered: we order the factors as in 2.1 and then take the lexicographic order on G. form the power series field K

=

k((G)).

Hence we can

The power series

with support in H form a subfield L; we take a family of copies of K indexed by Z and form their coproduct amalgamating L.

The resulting ring is a fir, with universal

field of fractions D say.

If a is the shift automorphism,

we can form the field of fractions D(t;a) of the skew polynomial ring D[t;cr]. 159

With the above notation, let w e F , where

Lemma 6.5.2. F

xD(t;cr).

Proof.

X

C D; then

K

C G C K

o-

w e N if and only if wt = tw in

If w e N, then w e H by Lemma 6.5.1, hence w e L

and so wt

=

tw.

Conversely, if tw

= wt,

w is fixed under a and so lies

in the fixed field of cr, i.e. w e L. power series with support in H, so w

But L consists of all E

Hn F

X

= N, by Lemma

6.5.1..

Now let A be a finitely presented group with unsolvable word problem, say (1)

A

= u

1},

m

where u1 , ••• ,um are words in the x's. We shall construct a finitely presented field whose word problem incorporates that of A.

where (2)

Let

~

consists of the following equations: x.y. ~

tx.y. (i

J

~

u t ].l

tu

].l

(].l

~

1, ••• , n)

l, ••• ,m).

To see that this is meaningful, let P

PX be the free field

over k on x 1 , ••• ,xn and form

This is a fir and so has a universal field of fractions Q; moreover, Fx

x

Fy is naturally embedded in Q, in fact Q

~s

also the universal field of fractions of the group algebra 160

of F

X

over k

x F

y

over k.

by~.

In Q consider the subfield R generated

and let S be the field coproduct of copies of

Q indexed by Z amalgamating R. phism inS, we can form T

=

If cr is the shift automor-

S(t;cr); from its construction

this is essentially M (cf. Lemma 5.5.4).

By the universality

ofT we have a specialization from T to D(t;o).

We claim

that (3)

='l

w

£ N

D

defined by (f,a)

'

J->

f(a), where f(a) = f(a 1 , ••. ,am) ar1ses from f(x 1 , ••. ,xm) on replacing xi by ai £D. If we fix f £ F, we have a ~pping Dm - > D and if we fix a E Dm we get a mapping F --> D. morphism.

Clearly the latter mapping is a D-ring homo-

Let us write Hom(F,D) for the set of all D-ring

homomorphisms, then we have Lemma 7.1.1.

If Dis a k-algebra and F

m Hom(F ,D) a D •

Explicitly we have 4»

I-->

cp

cp

(x1 , ••• ,xm).

This follows immediately: we have seen that a

E

Dm defines 163

a homomorphism and conversely, each homomorphism



f,

where f:Dm - > D is the function on Dm defined by f.

Now

(2) is also a D-ring homomorphism if we regard the functions

from Dm to D as a ring under pointwise operations; this amounts to treating the right-hand side of (2) as a product of rings.

The image of F under 0 is written F; it is the

ring of polynomial functions in m variables on D.

The ker-

nel of 0 is just the set of all generalized polynomial ident~t~es

in m variables on D.

By identifying Dm with knm via a k-basis of D, we may view the ring of polynomial functions knm(= Dm) - > k as a

nm

; clearly G does not depend on Dm the choice of k-basis of D. Since the canonical map k ~ D Dm Dm . . . . b . --> D ~s ~nJect~ve, the su r~ng C of D generated by D central k-subalgebra G of D

and G is of the form G

~

D.

If moreover, k is infinite, G

is just the k-algebra of polynomials in mn commuting indeterminates, so C is the D-ring of polynomials in mn central Another description of C is given in

indeterminates. Theorem 7.1.2,

Let D be ann-dimensional central simple

k-algebra, then F

Dk may be expressed as the

free D-ring on mn D-centralizing indeterminates and C is the image of the evaluation map

Proof.

e.

We may regard F as the tensor D-ring on the D-bi-

module (D

164

=F

~

m

D) , and since D is central simple, the map

(4)

~:D ~ D

-->

Endk(D) ~ Dn, where (a G b)~:x

1s a D-bimodule isomorphism,

I->

axb

It follows that F 1s the tensor

D-ring on mn D-centralizing indeterminates, Now fix a k-basis u 1 , ... ,un of D and consider the dual k-basis ut, ..• ,u~ E Ho~(D,k) ~ Endk(D). For each~= l, ... ,n there exists v~

Ea~A

v~~ = u~.

F = D

ruinates v. , i = l, ••• ,m,

~

1~

(l:AblAu\, ... ,l:\bm\u\)

1->

D-ring generated by the algebra generated by

= l, •.. ,n.

Write~-

1~

= v. 8: 1~

bi].J' then it is clear that

~-

1W

.

the~-

1W

F

is the

Now G is by definition the k, hence C = GD = F as claimed.•

If we examine the role played by 8 we obtain Theorem 7 .1. 3.

If k is infini t:e and D is an n-dimensional

central simple k-algebra, then the evaluation map 8 can be expressed in t:he form

F

D ->

D [ E;.

1~

J

C

(i

~

nm

D

, where v.

1, ... , m; ~

1~

1->

E;.

1]1

1, ... , n)

hence the kernel of G is generated by the commutators of pairs of the

v.1~ ·•

The above account follows Procesi [68], with simplifications by Dicks; cf. also Gordon-Motzkin [65], who prove Th.7.1.2

when Dis a field.

For a more general treatment,

in the context of Azumaya algebras, see Procesi [73].

7.2

Rational identities The basic result on rational identities, again due to

Amitsur [66], states that there are no non-trivial rational 165

identities over a skew field which is infinite-dimensional over its centre and which has an infinite centre.

But it

is now more tricky to decide what constitutes a 'non-trivial' identity.

Here are some 'trivial' ones:

(x + y)

-1

= y -1 (x-1

+ y

[x -1 + (y-1 - x) -1]-1

-1 -1 -1 )

x

x- xyx

, (Hua's identity).

We shall give two proofs of Amitsur's result, one by Bergman [70] and one by the author, based on the results of 6.3 above (cf. Cohn

[72 ']).

Our first task is to find a means of expressing rational functions; here we shall follow Bergman [70,76].

Let D be

a skew field with centre k, then we can form D(t), the field of rational functions in a central indeterminate.

¢ s D(t) has the form ¢ in t, and we can set t

~

fg

=

u

then ¢(a) will be defined.

¢ = fg

-1

£

-1

Any

, where f,g are polynomials

k if u is such that g(u)

# 0;

Given ¢, we can choose f,g in

to be coprime, and then f,g will not both vanish

for any a

£

k.

Since we only had to avoid the zeros of g

~n

defining ¢(a) we see that ¢ is defined at all but finitely many points of k. We have to generalize this to the case of several noncentral variables.

Now we no longer have D(t) at our dis-

posal, and although we have seen in Ch.6 how to construct free fields, that construction will not be needed here. What we shall do is to build up formal expressions in x 1 , ··:•xm using+,-,~, '/.and elements of D.

The expressions

will be defined on a subset of Dm or more generally on Em, where E is a D-field. Let X be any set. a:X

--~

An X-ring is a ring R with a mapping

R; we write R or (R,a) to emphasize the mapping.

If R is a field we speak of an X-field; this is essentially 166

the same as a Z-field in our previous terminology. Given X.~ {x 1 , ••• ,xm} we write R(X) for the free abstract algebra on X with operations {0 1 ()-l + x } o' o' 1' 1 ' 2' 2 ' where subscripts indicate the arity of the operation. For each expression there is a unique way of building it up .

s~nce

no

.

relat~ons

are

.

~mposed,

thus e.g. (x-x)

-1

exists.

In contrast to 7.1 we now have a partial evaluation mapping (1)

R (X) x Rm

Thus any map a:X

R.

-:>

-->

R defines a map a of a subset of R(X)

into R, by the following rules: (i) if a

0 or 1, a a ... 0 or 1,

= xi'

(ii) if a (iii)

if a

=

= xia' ~ -b or b+c or be and ba, ca are defined, then x.a

aa= -ba or ba + ca or ba.ca. (iv) if a= b-land ba is defined and invertible in R, then a~

= (b~)- 1 •

Since a just extends a we can safely omit the bar.

In a

field invertible is the same as non-zero, hence we have Proposition 7.2.1.

Let X be a set, (D,a) an X-field and

as R(X), then aa is undefined i f and only i f a has a subexpression b

-1

, where ba

= o.•

With each a:X --> D we can associate a subset E(D) of R (X), the domain of a, consisting of expressions which can be evaluated for a.

Similarly with each f s R (X) we asso-

ciate its domain dom f, a subset of Dm consisting of the points at which f is defined; more generally we shall consider dom f in Em, where E is a D-field. is called non-degenerate on E.

If dom f ~

0, f

In this section we shall

mainly be dealing with the domains of functions f s R(X). Lemma 7.2.2.

Let D be a skew field which is an algebra

167

over an infinite field k.

If f,g are non-degenerate on a

D-field E then dam f ('dam g

Proof.

Let p

£

dom f, q

£

f

(/J.

dam g, write r

and consider f(r), g(r) e: E(t).

= tp

+ (1-t)q

Each is defined for all but

finitely many values of t in k, hence for some a e: k both are defined. • Given f,g e: R(X), let us put f - g if f,g are non-degenerate (on a given E) and f,g have the same value at each point of dam f n dam g. This is clearly an equivalence, the transitivity follows by Lemma 7.2.2.

If f,g are non-degen-

erate, so are f+g, f-g, fg; moreover they depend only on the classes of f,g not on f,g themselves, and if f f O, then f-l is defined. Theorem 7.2.3.

Thus we have Let D be a skew field with infinite centre

k and E a D-field which is also a k-algebra,

~hen

valence classes of rational functions from Em to

the equi-

E with co-

efficients in D form a skew field DE(X). •

If E is commutative, this reduces to D(X) and is independent of E.

In that case any element of D(X) can be

written as a quotient of two coprime polynomials, and this expression is essentially unique.

The dependence on E in

the general case will be examined below; now there is no such convenient normal form for the elements of DE(X). Even if we use Ch.6, a given element may satisfy more than one matrix equation Au

=

a and the relation between them

remains to be described (cf. Cohn [b]).

In terms of ex-

plicit rational expressions for f, it may be that different expressions have different domains and the resulting function is defined on the union of these domains.

Bergman

[70]

raises the question: "Whether there is always an expression for f having this whole set for domain of definition, a 'universal' expression for the rational function f." Generally the domains of functions form a basis for the open sets of a topology on Em, the rational topology on Em 168

(cf. also 8.5 below; the polynomial topology, a priori coarser, is the Zariski topology).

The closed sets are of

the form

V(P)

= {p

£

Em

I

f(p)

0 for all f £ P}

A subset S of Em is called irreducible if it is non-empty and not the union of two closed proper subsets.

Equiva-

lently: the intersection of non-empty open subsets of S is non-empty.

Thus Lemma 7.2.2 states that Em is irreducible

in the rational topology when the centre of E is infinite. A subset S of Em is called flat if p,q (1-a)q

£

S for infinitely many a

£

subset will then contain ap + (1

k.

£

S implies ap +

Of course a closed flat

a)q for all a

£

k.

Now

the proof of Lemma 7.2.2.gives us Lemma 7.2.4.

Any non-empty flat subset of Em is irreducible. •

An example of a flat closed subset is the space S defined by (2)

I:a.,x.b., = c ~ll

~

~ll

By Lemma 7.2.4, S is irreducible (if non-empty) and so as ln Th. 7.2.3 yields a skew field D5 (x) in x 1 , ••• ,xm satisfying (2), the function field of (2). In general it is not easy to decide whether a given set lS irreducible, e.g. xlx2 - x2xl = 0 for E :;2 D :;2 k. In the commutative case every closed set lS a finite union of irreducible closed sets, but this need not hold in general. It is clear that polynomially closed

~

rationally

closed; we want conditions for the converse to hold.

First

two remarks: (i)

Let S

C Em

be such that p

i S,

then there exists f

169

defined at p but not anywhere on S.

The degeneracy of

=g

f can only arise by inversion, so f

-1

where g is

non-degenerate on S and 0 at all points of S where defined, and g(p) (ii)

~

0.

=0

Any element of D(t) defined at t

can be expanded -1

in a power series. Let g = a - th say, then g a- 1 (1- tha-l)-l = ~a- 1 (tha-l)n. So we can build up any function in D(t) provided that it is defined at t Lemma 7.2.5.

= O.

Let D be a skew field which is a k-algebra,

where k is an infinite field, and let E be a D-field and If S ~ Dm is flat, then its closure in Em

a k-algebra.

is polynomially closed.

Thus for flat sets, rationally closed

polynomially

closed. Proof.

t

Let p

S; we have to find a polynomial over D which

is zero on S but not at p.

We know that there is a rational

s

function f, non-degenerate on Say f is defined at q For any x for t x



= 0,





and f

=

0 on

s

but f(p)

r o.

S.

Em consider f((l-t)q + tx); this is defined

so it is a well-defined element of E(t).

S, f is 0 by flatness, but for x

because it is non-zero for t

=

1.

=

If

p it is non-zero

In the power series ex-

pansion of f((l-t)q + tx), if we have to take the inverse of an expression h(t), the constant term h(O) is non-zero, because f(q) is defined, and h(O) does not involve the coordinates of x.

So the expansion f((l-t)q + tx) has coeffi-

cients which are polynomials in x; their coefficients are in D because q ~s

at least one

E

S ~ Dm.

These polynomials are 0 on S, but

non-zero at p, and this is the required

polynomial.• Corollary.

Let k,D,E be as before, and assume that D and E

satisfy the same generalized polynomial identities over k with coefficients in D, then DE(X) ~ DD(X) and for any

170

k-subfield C of D, CE(X) a CD(X).

Tqe rational closure of Dm ~n Em is polynomially

Proof.

closed by Lemma 7.2.5.

Now every g.p.~. · on Dm ho 1ds on Em, So the rat ~onal · Em , ~.e. · Dm ~s · d ense ~n · Em. ~ c 1 osure o f Dm ~s

The rest follows because CE(X) is the subfield of DE(X) generated by

c

and

x.•

Now the rational identities may be described as follows: Theorem

7.2.6. {Bergman [70]J.

Let

D be a field with centre

k then there exists a D-field E with infinite centre C ~ k such that [E:C]=oo, and for each m, any such E, C yield the same function field DE(X) (m =

Proof.

lxl).

The last part follows from Lemma 7.2.5; it only re-

If k is finite, adjoin t to get n1 = D(t). Now let F = D1 (u) with endomorphism a:f(u) 1---~ f(u 2 ) and formE= F(v;a). As in the proof of Th.6.3.6 we see that

ma~ns

to produce E,C.

the centre of E is k (t).• This theorem may be expressed by saying that for skew fields infinite over their centre (where the latter

~s

finite) there are no non-trivial rational identities.

inWe now

give another proof of this result using the methods of Ch.4 and Th.6.3.6. Theorem 7.2.7.

Let E be a field with centre C, let D be a

subfield of E and write k

D n C.

=

[E:C] =oo and (ii) C is infinite.

Assume further that

(i)

Then every element of

Dk{X} is non-degenerate on E. Proof.

We know that any p

£

Dk{X} can be obtained as a

component p = u 1 of the solution of an equation Au

= a,

where A is a full matrix over Dk, and p will be nondegenerate on E provided that A goes over into a non-singular matrix under some substitution X

---~

E.

By Th.6.3.6,

the mapping Dk ---~ EC is honest, hence A is full over 171

EC and by the specialization lemma 6.3.1, A can be specialized to a non-singular matrix over E, which is what was needed. • When D is finite-dimensional over its centre, there are of course non-trivial identities, but Amitsur [66] shows that they depend only on the degree (cf. also Bergman [70]). More precisely, if [ D:k ]

=

n

. d 2 and E ~s any

=

D with infinite centre C containing k, where [E:C] then DE(X) depends only on D,d,r, m

of (rd) 2 , •

extens~on

= lXI and not on E.

It

is shown that DE(X) has dimension (rd) 2 over its centre, hence these fields are different for different values of rd. Moreover for d 1 id 2 the field with d 1 is a specialization of that with d 2 (cf. Bergman [70]). 7.3

Specializations We now examine how rational identities change under speci-

alization.

Of course we must first define the appropriate

notion of specialization. ~s

A homomorphism between two rings

said to be local if it maps non-units to non-units.

Let

D, D' be fields, then a local homomorphism from a subring D1 of D to D' is also called a local homomorphism from D to D' with domain D1 • If ~:D ---~ D' is a local homomorphism with domain

n1 ,

then ker

~ ~s

the set of non-units of

n1 ,

hence D1 is a local ring with residue class field n1 /ker isomorphic to a subfield of D'.

~

Let (D,a), (D',a') be X-fields, then a local homomorphism ~:D ---~ D' whose domain contains Xa and such that a'

is called an X-specialization.

=

a~

Clearly this exists only if

the domain of a contains that of a'. To describe rational identities we shall need the notion of PI-degree. 2 d"

n -

.

~mens~onal

Let A be a commutative ring, then Wl (A) is over

.

~ts

n

centre (as free A-module) and it

satisfies the standard identity of degree 2n (Amitsur-Levitzki theorem):

172

Let R be any prime PI-ring; by Posner's theorem it has a ring of fractions Q which is simple Artinian and satisfies the same polynomial identities as R (cf. Jacobson [75] or Cohn [77]). Let Q be d 2-dimensional over its centre, then R satisfies

s2d

0 and no standard identity of lower degree.

We shall call d the PI-degree of R (and Q) and write d deg R.

=

PI-

If R is a prime ring satisfying no polynomial iden-

tity its PI-degree is said to be

~.

We shall also need the notion of generic matrix ring. k be a commutative field and m,d > 1.

Write k[T] for the

commutative polynomial ring over kin the family T commuting indeterminates, where i,j

Let

= l, ••• ,d, A

A = {x1J .. }

of

1, ••• , m.

Let k(T) be its field of fractions and consider the matrix rings

We have a canonical m-tuple of matrices X, I\

A

= (x1J .. ); the

k-algebra generated by these m matrices is written kd and is called the generic matrix ring of order d. the free k-algebra on X

It is

= {XA} in the variety of k-algebras

generated by d x d matrix rings over commutative k-algebras. Amitsur has shown that R

=

kd is entire (cf. e.g. Cohn

[77]: one has to find a field of PI-degree d and embed it in ffiRd(E), where Ed k).

As an entire PI-ring R is an Ore

domain; its field of fractions 1s written k{X}d; like kd it has PI-degree d, if m > 1.

Of course form= 1, kd

reduces to a polynomial ring in one variable; this is not of interest and we henceforth assume that m > 1. Let (D,a) be any X-field; we defined in 7.2 its domain E(D) as the subset of R(X) for which a is defined •. Let Z(D) be the subset of E(D) consisting of all functions which 173

vanish for a.

Any f

E

Z(D) is called a rational relation,

or k-rational relation if coefficients in k are allowed. Explicitly, we have fa

= 0 in D, but of course this pre-

supposes that fa is defined.

Now Amitsur's theorem on ra-

tional identities (7.2) may be expressed as follows:

Let

D be a field with infinite centre k, then there is an Xfield E over k such that the k-rational identities over D are the k-rational relations satisfied by X over E.

Thus we can

speak of E as the free X-field for this set of identities. Moreover, the structure of E depends only on k, m and the PI-degree of D: If PI-deg D

d, then E

=

k {X}d is the field of generic

matrices, If PI-deg D :oo, then E

k{X} is the free k-field on X.

In particular, two k-fields satisfy the same rational identities if and only if they have the same PI-degree.

For our

first theorem we need a result of Bergman-Small [75].

We

recall that a ring R is local if R/J(R) is a field (where

J(R) is the Jacobson radical of R); if R/J(R) is a full matrix ring over a field, R is said to be a matrix local ring. Theorem 7.A

(i)

(Bergman-Small [75];

If R is a prime PI-ring which is also local (or even matrix local) with maximal ideal m then PI-deg R/mdivides PI-deg R.

(ii) If R1 ~ R are PI-domains, then PI-deg R1 divides PI-deg R. We shall sketch the proof of (ii) only. PI-degrees of R1 , R.

174

Let d 1 ,d be the They are also· the PI-degrees of their

fields of fractions Q1 ,Q. Let k 1 , k be their centres; by enlarging we may assume that k :2 k. Now choose a maximal

C\

1

commutative subfield F 1 of Q1 and enlarge F 1 to a maximal commutative subfield F of Q, then [F 1 ·k · ·d es [F : k] , an d . 1J d lVl this means that d 1 1d. • With the help of this result we can describe the specializations between generic matrix rings, following Bergman [77]. Theorem 7.3.1.

c,d

1.

>

Let k be a commutative field, m > 1 and

Then the following conditions are equivalent:

E(k {X}c) ~ E(k {X}d), i.e. every rational identity in

(a)

PI-degree d is one for PI-degree c,

(b)

there is an X-specialization k {X}d --> k {X} ,

(c)

there is a surjective local homomorphism D --> D

c

c'

d

where D. is a division algebra over k of PI-degree i, l

I

c

(d)

Proof. let

~

(a)

D~ ~

then c

d.

(b)

~

(c) is clear.

To prove (c)

~

(d),

Dd be a local ring with residue class field Dc ,

= PI-deg

D c

(d)~

of Th. 7A.

I

PI-deg D'

(a):

d

PI-deg Dd by (i), (ii)

Let E be an infinite k-field.

Since cld, we can embed 9nc(E) in 9nd(E) by mapping a to diag(a,a, ••• ,a).

Then every rational identity in 9nd(E)

holds in 9n (E).

But these identities are just the rational

c

relations ink {X}d, k {X}c' hence E(k {X}c) ~ E(k {X}d)' i.e. (a). • 7.4

A special type of rational identity

As a consequence of Th.7.3.1 there are rational identities holding in PI-degree 3 but not in PI-degree 2.

We

shall now describe a particular example of such an identity which was found by Bergman [77].

From results in Bergman-

Small [75] (cf.7.3) it follows that there is no (x,y)175

specialization

Thus there must be a relation holding in PI-degree 3 but not 2, and we are looking for an explicit such relation. We shall need some preparatory lemmas; we put [x,Y]

=

XY-

YX. Lemma 7.4.1.

Let C be a commutative ring and X,Y Eftn 3 (C),

then

[x, [x, Y] 2]

(1)

(det [x, Y]). [x, [x, Yr 1

whenever [x,Y]-l is defined.

J,

For 2 x 2 matrices the left-

o.

hand side of (1) is

Put Z = [x,Y], then tr Z = 0, hence Z has the characteristic equation z 3 + pZ - q = O, where q = det z. Now

Proof.

multiply by z- 1 : z 2 + p - qZ-l

o; apply [x,-

=

. q [X,Z -1] = 0, ~.e. (1). For 2

X

2 matrices,

If we write Y'

=

z2

- q =

J:

hence [x,z 2]

o,

[x,z 2] =

o .•

[x,Y], the conclusion of the lemma can be

expressed as (2)

((YI) 2) I

for 2

((Y') -1) I

for 3 x 3 matrices.

x

2 matrices,

Here we have used the convention of writing ~ for a scalar a. Lemma 7.4.2,

a if u

av

Let X, Y Eftn 3 (C) and write b for the discriminant of the characteristic polynomial of X, then (3)

Proof. 176

det Y1 1 1

t;

det Y'.

First let X

o2 •

then~

(A. 2 - A. 3 )CA. 3 - A. 1 ),

Now an iterated commuta-

tor has the form n (A.l-A.2) yl2

0

(A.2-A.l)ny21 (A;-A.l)ny31 hence det Y(n)

0

(A.l-A.3) ny 13 (A.2-A.3)ny23 0

(A.3-A.2)ny32

n n n (Al-A2) (A.2-A3) (A3-Al) Y1zYz3Y31 +

(A.l-A.3)n(A.3-A.2)n(A.2-A.l)ny13Y32Y21

Now n

= 1,3 differ by a factor

6

2

=

~.

hence (3).

This

proves (3) for matrices over an algebraically closed field whenever A + 0; hence it holds identically. • Using (2) and (3) we can write down rational identities for 3

x 3

matrices, but most of them will hold for 2

matrices too.

x

3 matrices which fails when these com-

mutators are replaced by 0.

(4) Since

2

What we need is a relation between determinants

of commutators of 3 write Y'

x

For any X and Y let us again

[x,Y] and consider

det Y'det Y"(det(Y"-l)')(det(Y"'- 1 ) 1 ) . -1

is a derivation, det(Y )' -2 (det Y) det - Y' and (4) becomes I

a

det- y

-1

Y'Y

-1

(det Y')(det Y")(det Y")- 2 (det Y"')(det Y'")- 2 (det Yiv) = (det Y') (det Y")

-1

(det Y"')

-1

iv (det Y ) •

Applying Lemma 7.4.2, we get (5)

(det Y')(det Y")-lll-l(det Y')-l~(det Y")

1. 177

Thus we obtain Theorem 7 • 4 • 3 • (B ergman [76]) •

Let k be a commutative field

and n = 2 or 3; for X,Y £in (k) write Y' n

(YZ)' [(Y-l)']- 1 , so that by (2), o(Y')

[x,Y],

= det Y'

o(Y)

=

or 0 accord-

ing as n = 3 or 2, then there are rational identities

O(Y')o(Y") [Co(Y")- 1 )'] [Co(Y'")-l)

(6)

Proof.

']=~~

if n if n

3, 2.

By equating the left-hand side to 1 we get an iden-

tity in degree 3 but not in degree 2.

We know that this holds

if the left-hand side is defined, so we need only find X, Y for which the left-hand side is defined.

Let K be any ex-

tension of k with more than two elements and write S for the set of

matrices(~

~)

a,b

K* when n

£

00 0 a

0 0

0 0

when n

c

~)

= 2,

or

a,b,c

£

K*

= 3. Then S consists of invertible matrices and is

closed under inversion and commutation by diagonal matrices with distinct elements.

If we choose Y in S and X diagonal

with distinct entries, all terms lie in S and so (6) is defined. • 7.5

The rational meet of a family of X-rings We shall now make a closer study of specializations, fol-

lowing Bergman [76].

We shall find that for skew fields they

cannot be reduced to the situation involving only two fields, as in the commutative case.

We shall be concerned with two

basic notions: an essential term in a family of X-fields and the support relation. Given rings R1 ~ R2 we say that R1 is rationally closed 1n R2 if the inclusion is a local homomorphism. The intersection of a rationally closed family is again rationally 178

closed, so we can speak of the rational closure of X in R, which is the least rationally closed subring of R containing X.

If it is R, we call R a strict X-ring; e.g. Q (x,y) is a strict (x,y)-ring, SO l.S z[x,y,y- 1], but not z[x,y,xy- 1]. Generally, if l:: is the set of all matrices over z which are mapped to invertible matrices over R, then the rational closure of X in R is contained in the l::-rational closure of Z , in the sense of Ch.4, but the two may be distinct (if x,y,u,v

E:

X, the entries of(:

but not the former).

~)-llie

in the latter

We note that an epic Z -field,

briefly an epic X-field, is just a strict X-field.

A local homomorphism between X-fields ¢:D

--->

D' may be

described as a partial homomorphism from D to D' whose graph is rationally closed in D

x

D'; hence if there is any X-

specialization at all, the rational closure of X in D x D' is the unique least X-specializationo

So there is at most

one minimal X-specialization between two X-fields.

Our aim

is to study the rational closure of X in finite direct products; to do so we need to introduce the following basic concepts a Definition.

Let {Rs}S be a family of strict X-rings, then

their rational meet

rr s

R

S Rs

is the rational closure of X 1.n

o

s The rational meet can also be viewed as the product in the

category of strict X-ringso smaller is

S Rs'

in the sense that for T

~

S we have a pro-

~ Rs ---> ~ Rs"

E.g. whether n1 AD 2 is the graph of a specialization in one direction or the other jection

PsT:

We note: the bigger S is, the

depends on which projection maps are injective. Lemma 7o5olo

Let {Rs}S be any family of strict X-rings, then

179

For ~Rs is the set of all rational expressions evaluable in each R modulo the relation of having equal values in each s R: f - g D, (2)

Ker (D ) cj;, t

'

S

U .L T

t

Ker (D ) • S

For when (1) holds, take f En E(Ds)' f t E(Dt), then f contains a subexpression g-l such that gDt = 0 but gDs I 0 -1

for s f t.

Conversely, given such g, we find that g

be-

longs to the right but not the left-hand side of (1); the equivalence of (1) and (2) is clear.

Using this notion,

we can say when the rational meet reduces to a direct product: Proposition 7.5.2.

Let X be a set and {Ds}S a finite family

of epic X-fields, then the following are equivalent:

(a)

Each s is essential in S,

(b)

for each s

E

S, there exists es

E

~ E(Du) such that

e~t = 8st' (c)

AD

S

Proof. (a)

s

=lTD

S

~



(b).

Choose f

s

in Ds but not in Dt for t f s.

defined in all D and vanishing Then gt

order) vanishes on all D's except D • t

satisfies the required condition.

180

11 srt

t

f s (in any_ 1 Now es = gs (l:tgt) =

(b)~ (c).

By (b),~ Ds contains a set of central

idempotent~'es' which shows that~ Ds; ~ Rs for someRs~

D • Now AD 1s rationally closed inTI D , hence R 1s s s s s rationally closed in D and it contains X, so R = D • s

s

s

(c) ~ (a). Givens e S, choose g en E(Dt) such that gDs; 0 but gDt f 0 fort I s.• if E(D 1 ) ~ E(D 2 ), D1 An 2 is a local ring, the graph of a specialization n1 ---~ n2 • Illustration.

Similarly if E(D 1 ) then

n1 AD 2

=

D1 x

n1 hD 2 ;

Consider ~

E(D 2 ), while if neither inclusion holds,

n2 •

For more than two factors we shall

see that AD s is a semilocal ring, 1.e. a ring R such that R/J (R) is semisimple Artinian. Lemma 7.5.3.

Let f:R

---~

R' be a homomorphism such that

Rf rationally generates R', then f is local if and only if f is surjective and ker f Proof.

~.

~

J (R).

Rf is rationally closed because f is local and

it rationally generates R'

'

hence Rf

R' •

=

If

af

=

o, then

1 + ax maps to 1, a unit, hence 1 + ax is a unit, for any X

e R, but that means that a e J (R). Dt is surjective Our next question is:

answered by

Let X be a set, {Ds}S a family of epic

X-fields and t £ S, then the following are equivalent:

(b)

any relation defined in each D8 and holding in Dt holds in all Ds,

Note that by (c) there is a local homomorphism Dt --> ~ Ds. Proof.

e

£

R(X) represents an element in ker

Ps

t if and

only if it is 1n the left-hand side of (a) and it represents 0 if and only if it is in the right-hand side of (a); this just expresses (b). • When these conditions hold we say that t supports S (or also: D supports {D }5 ). t s

More generally, if t i S, we say

183

that t supports S if it supports S U {t} in the above sense. To gain an understanding of the support relation we begin by proving some trivial facts. Proposition 7.5.6. X-fields.

(i)

Let X be a set ~nd {Ds}S a family of epic

Then

Let t e S, U

~

S.

If t supports U and U contains an

element distinct from t, then t is essential for

UU{t}. If t supports U, then i t supports U U{t}.

(ii)

(iii) If t supports s. (i ~

E

If t supports U and for each u e U, u e

(iv)

supports Su, then t supports

Proof.

't

~s

~ si.

I), then i t supports

u s u u·

su

and u

inessential for S' means: any relation defined

in all D and holding in D also holds in some D , s s

s

t

~

t.

't supports S' means: any relation defined in all Ds and Dt and holding

~n

Dt holds in all Ds.

(ii) also follows.

To prove (iii), let f be defined in Dt

and f and Ds (s e S.) 1 so t supports USi. v

E

=

=0

in Dt' then f

0 in all D, s e S., s 1

(iv) Let f be defined in Dt and Dv where

Su' for all u e U.

hence f

Now (i) is clear and

Iff= 0 in Dt then f

=

0 in Du (u e U),

~ Su.•

0 in Dv (v e Su)' sot supports

If the D are commutative and t supports S, then s either S =~or Dt specializes to some Ds (s E S). More

Corollary.

precisely: t supports {s} i f and only i f there is a specialization Dt --~ Ds.

For if t supports S and S f

~'

then either t e S or t is

inessential for SU{t}; in the latter case there exists s ~ t in S such that D is a specialization of D • • 8

t

To clarify the relation between support and essential set we have the following lemma.

Note that by (i) above, a sup-

porting index is a special kind of inessential index.

184

Lemma 7.5.7.

Let X be a set, {D } a finite famdly of pairs s wise non-isbmorphic epic X-fields and Dt an epic X-field. Then the following are equivalent:

(a)

SU{t} is a minimal set in which

(b)

S is a minimal non-empty set supported by t.

Proof.

is inessential,

t

Let us write (a ),(b) for (a),(b) without the 0

0

Then (b 0 ) ~ (a 0 ) by (i) of Prop.7.5.6.

minimality clause.

To prove that (a) ~ (b ) we know by hypothesis that S is 0

minimal subject to~ E(Ds)

=

S~t} E(Ds).

By Prop. 7.5.4,

S is the set of essential indices of SU{t}, hence the projection Psu{t}S

is surjective.

If t does not support s,

D and u e: S such that a 0 t = 0 but s aDu 1- 0. Let us write a for a Du etc. Since the map u 1\ D --;:> TT D is surjective, there exists b E: 1\ D SU{ t} s s s SU{t} s

1\

there exists a e:

such that b Then e

=

u

a

SU{ t}

-1

u '

b

s

=0

for all s "' u, t, where s e: s.

. 1\ ab is m SU{t} Ds and has value 1 in Du and 0

everywhere else, for bs

=0

for s f t and at

1s a central idempotent and so s6?t} D8

=R

=

0.

x Du.

Thus e Now write

(S\{u})U{t}, then R C ~* Ds and R is rationally gener-

S*

ated by X and rationally closed, hence R

=

~* Ds.

Further,

Psu{t}S is a local homomorphism, so pS* s\{u}is too (we have to factor by Du), therefore by Prop.7.5.4, S\{u} includes all essential indices in S, which contradicts the minimality of s.

So Dt supports {Ds}S and (b 0 ) follows.

Thus we have (a)

~

(b ) and (b) 0

~

(a ); in an obvious 0

terminology, if S is a minimal a-set, it is a b-set.

Now

take a minimal b-subset S' of S; this is also an a-set contained in S, hence S'

=

S, i.e. S was a minimal b-set.

(a) ==> (b) and similarly (b) Corollary 1.

E (D ) t

~

Thus

(a). •

:::> rl 5 E (D s ) if and only if D t supports -

185

some non-empty subfamily of {Ds}S.

For the left-hand side expresses the fact that t is inessential in SU( t}.

Now pick SaC S minimal with this property and apply the lemma to obtain the desired conclusion.• A relation 't supports S' will be called non-trivial if

s#

~.{t}.

Corollary 2.

Each s s S is essential if and only i f there

are no non-trivial support relations in S. •

The essential relations are determined by the minimal essential relations, but there is no corresponding statement for support relations.

However, Cor.2 shows that

essential relations are determined by the support relations. Let us call a set S essential if each member is essential in it. Proposition 7.5.8.

Let {Ds}SU{t} be a finite family of

pairwise non-isomorphic epic X-fields, then the following conditions are equivalent: supports S and S is essential,

(a)

t

(b)

sGtt} Ds is a semilocal ring contained in Dt (via the projection map), with residue class fields D , s there exists a semilocal X-ring R ~ Dt with residue

(c)

class fields D (s s S), s (d)

Z (D t)

n

~

E (D s)

~ ~

Z (D s) and no E(D ) contains the

s

intersection of all the others.

Proof.

(a)=> (b) by Prop~7.5.4,5, (b)=> (c)='> (d) is

trivial and (d)=> (a) is also clear.• Corollary.

Let {D 8 }S be a finite family of pairwise non-

isomorphic epic X-fields,

t

s S and suppose that t supports

S and U is the subset of essential indices, then the map

~ 186

Ds -"'

uat} Ds is an isomorphism and

t

supports U • •

7.6

The support relation We shalY now give a complete description of all possible

support relations, using the work of Bergman-Small still following Bergman [76]).

[75]

(and

We shall need Th.6.8 of that

paper, which for our purpose may be stated as follows. Theorem 7.B

Let R be a prime PI-ring and p 1 a prime ideal

of R, then PI-deg R - PI-deg R/p 1 can be written as a sum of integers PI-deg R/p (allowing repetitions), where p ranges over the maximal ideals of

R.

Let us say that an integer n supports a set M of positive integers if for each m

E

M, n-m lies in the additive monoid

generated by the elements of M. subset of {1,2, ... ,n}.

Clearly M must then be a

The Bergman-Small theorem shows the:

truth of the following: If R is any prime PI-ring, then PI-deg R supports the set

{PI-deg R/p

Ip

prime in R}.

In what follows, X will be fixed, with more than one element, so that k {X}

n

has PI-degree n.

We shall write E(n)

=

E(k {X} ), Z(n) = Z(k {X} ) for brevity. n n Theorem 7.6.1. (Bergman [76]). Let n be a positive integer and M a finite (non-empty) set of positive integers.

Then

the following conditions are equivalent:

(a)

k {X}n supports {k {X}m

(b)

Z(n)

(c)

p

(d)

n

QE(m)

-

1, each

Let A be a

commutative k-algebra which is a semilocal principal ideal domain with just s non-zero prime ideals infinite residue class field K.

~

~1 ,

•••• ~s each with

= A/~. (e.g. let ~

K

-~ k be

an infinite field extension and take a suitable localization of

K[t]).

. mn (K.).

II ~

~

Then A/ J (A)

=

T[ K.' hence m (A) I J (IDl (A)) ;; ~

~

n

n

Now for each i,IDl n (K.) has a block diagonal subring ~

isomorphic to IDl m('~. l)(K.) x ••• x IDl (' ~ m ~,ri )(K.) ~

L.

~

say.

Hence (1)

Q

=ry..._..._ L~ ~

nm

(K.)

~n~

;;wz n (A)/J

(IDl n (A)),

where Q as a direct product of simple Artinian rings is semi188

Let R be the inverse image of Q in9R (A), by the

simple.

isomorphis~

(1), then J(R)

n J (9R (A)), hence R/J (R) ~

=

n

Q.

Since R/ J (R) is semisimple (Artinian), it follows that R is semilocal and PI-deg{R/max} = ~

R and p

=

Let ~ be a prime ideal in

M.

n A, then p is prime in A, so p 1s 0 or some :1 ]_.•

= 0, then A C R/~; write F for the field of

Suppose that p

fractions of A, then since A +9Rn ( J (A)) C R C 9Rn (A), we have RA* J (A)

=

9R n (A) A*

F 0,

~must

=

9R n (F) because A is a domain and

Hence RA* is simple with 0 as the only prime, so

be 0.

If p = :1., then K.]_ = A/:1.] _C- R/1l and since R ]_ R/~

is a finitely generated A-module, ated K.-module, hence Artinian. ]_ simple, and so (d+)

='l

(b), i.e. e

~

(b).

was maximal.

is a finitely gener-

It is also prime, hence Thus R satisfies (d+).

Assume that e lies 1n the left-hand side of

=0

~sa

rational identity holding in PI-degree

n and not degenerate in PI-degree m for any m to 3how that e

=

0 holds in each PI-degree m

as in (d+); this means that for each

prime~

E

E

M.

We have

M.

Let R be

f 0 of R there is

given a map ~:X --;:. R/\V such that ea.~ is defined in R/'ll to show that all the ea.~ are 0.

Since R is semilocal, by

the Chinese remainder theorem there exists a.:X --> R inducing all the a~, maximal~,

Now ea can be evaluated (mod~) for all

hence it can be evaluated in R.

ea = 0 and so ea~

=

Since e

E

Z(n),

0 as claimed. •

To give an illustration, we have 5

=2

+ 3.

Let A be a

local principal ideal domain with maximal ideal :1, then9R 5 (A) contains the subring

and we have a local homomorphism (K

= A/:1).

ms (A)

-->

m2 (K)

X

.m 3 (K)

This gives rise to a specialization of fields, 189

replace~

because when we

n

(K) by the generic matrix ring, we

get a field with the sarre identities as

~n (K)

o

If we combine Tho7.6ol with Prop. 7o5.8, we get Corollary lo

Let n be an integer and M a set of integers,

then the following conditions are equivalent: (a)

p: M A{ }k {X}. -» k {X}

U n

~

n is injective, with residue

class fields k {X} (mE M), m (b)

k {X}

has a semilocal subring with residue class

n

fields k {X}

m

(c)

(m

E

M),

M is a minimal set supported by n. •

Let n, M be as before, then the following are

Corollary 2. equivalent: (a)

k {X}

supports a non-empty subfamily of

n

{k {X}m (b)

I mE

M},

every rational identity holding in PI-degree n holds in some PI-degree m (m

(c)

M): Z (n)

n

QE (m) ~

~ Z (m)

o

there exists a prime ring of PI-degree n with ~

{PI-deg R/max}

(d)

E

M,

n supports a subset of

M.•

To describe the connexion between prirre ideals and the support relation we shall need a couple of auxiliary lemmas. Lemma. 7.6.2.

Let R be a ring,

~ 1 ,.oo'~m any ideals in R

and \Jl 1 , ••• ,\Jln any prime ideals such that~. s e: T. To

D5 , then p:R ---> R' is

If t,t' are as in (b),

is a maximal ideal of R containing

~t

then~t'

and hence ker p, so

its image under p is a maximal ideal of R' with the same residue class field.

But if R' has a residue class field

isomorphic to Dt' then t' s Ess(T) (b)=> (a).

~

T, so (b) holds.

Since imp rationally generates R', it is

enough to show that imp is rationally closed in R', i.e. the inclusion im p

~

R' is a local homomorphism.

Let

a s R be such that ap is invertible in R', then a i Ill t for all t s T.

Now consider those s e: Ess(S) for which a s

by (b), since s

9 ~t

= ker p.

i

T,~s 12~t for all t e: T.

~

s



"l

By Lemma 7.6.2 there exists be: R such that

b e: ker p and for any s e: Ess (S), b i a e:

Hence~s

~s;

But then a + b lies

~n

~

s

if and only if

no maximal ideal of R and so

is a unit, and (a + b)p = ap is likewise a unit in im p. • Note that (a) shows that the residue class rings of R' at maximal ideals are just the residue class rings of R at the maximal ideals containing ker p.

We can now express

the inclusion of prime ideals in terms of the support re-

191

lation.

We shall write Supp 8 (t) for the maximal subset of

S supported by t, i.e. the union of all subsets supported by t. Theorem 7.6.4. fields, R =

Let {Ds}S be a finite family of epic Xand~

1\ D

s

s

= ker(R --::> D ) , then

s

s

l11u

Supp 8 (u) = { v e: S

(2)

Proof.

s;;~)·

Isomorphic X-fields determine the same kernel in

R, so we may without loss of generality take the Ds to be pairwise non-isomorphic.

Fix u and let T be the right-hand

side of (2), i.e. the set of all s

E

S for

which~

then T satisfies (b) of Lemma 7.6.3, so R ---> jective.

u

D is sur-

T s ~~t which contains ~u by defini-

The kernel is

tion ofT, in fact since us T, we have the map

1\

:J~,

s -

n ~ T

s

=~ • u

D --::> D is injective, i.e. u supports T. s u follows that T t;;; SuppS (u), but clearly also Supp 8 (u)

Hence It

1\

T

~

T,

so Supp 8 (u) = T. • Corollary 1.

Ess (s)n

eg

Corollary 2.

In Lemma 7.6.3, (b) just states that T:;::;>

~ Supp 8 (t). • If T, T' ~ s, then ker PsT ~ ker pST' i f and

only ~'f u T Supp 8 (t)

u

:;::;>

T'

(i)

PsT is injective

(ii)

ker PsT ~ J (R)

In general R = ~s

1\

s

Supp 8 (t).

~ Supp 8 (t)



In particular,

= s,

~ Supp 8 (t) ~ Ess(S). •

D will have prime ideals not of the form s

e.g. if C is a commutative local domain, X is a rational

n1 are the field of fractions and the residue class field respectively, then D0 AD 1 = c, but C

generating set and D0

,

may have other primes (if C is not discrete). We recall that Prop. 7.5.2 asserted that and only if S is essential.

192

= 1T D if s s s More generally we can now say 1\D

s

s = s 1u

1\ D

s

s

us r

if the S. are disjoint support sets ~n S (i.e. for any t

E

~

S., ~

Supp 5 (t)k. Si). We know that non-isomorphic X-fields may have the same kernels, e.g. k~k[t] [x;a..J, where a. :f(t)

.

Here y

= xt and tx = xt

embeddings k

->

~

~

~

(cf. page 15f.).

1->

f(ti).

The resulting

Di are distinct fori= 2,3, ••• and

none is a specialization of the others. By contrast, if R is a right Ore X-ring, the R-fields may be determined by their kernels, e.g. R = kn.

If moreover, Dt

is commutative, then Supp 5 (t) = {u E S I t supports {u}}, i.e. the set of u such that Dt ---> Du is a specialization. For let C C

.

~s

a

=

S /'--...( ) D ; we have an injection C -> Dt, so up~s t.

commutat~ve

s

~ntegral

domain with Dt as field of

Let u E Supp 5 (t), then C/~ is an integral dou main with fields of fractions D Hence the localization u ~s a local ring Lu ~ Dt with residue class ring D , at~ u u fractions.

0

i.e. we have a specialization Dt 7. 7

---~

Du.

Examples Before constructing examples let us summarize the proper-

ties of supports.

This is most easily done by introducing

the notion of an abstract support system. stand a set S with a relation on S

By this we under-

P(S) written t

x

~

U and

called the support relation, with the following properties: S.l If t e S, Uh S, then t

s.z

if t

S. 3 i f t

a:

a:

si (i

cc

e I), then t

U and for each u

E

U t cc

U, u

~ a:

a:

UU{t},

si, S

u

of(/J, then t "' U S U U • u

If in S.2 we take the index set I to be empty, the hypothesis is vacuous, hence t

a:

(/J and by S.l, t

a:

{t} always.

A special case of the support relation is that where 193

t

a:

U t cx:·{u} for all u e: U.

This is completely determined by all pairs t,u with t

a:

{u}

and if we write t < u instead of t o: { u} we obtain a preordering of

s.

Conversely, every preorder on S leads to a

support relation in this way.

Thus preorders may be regarded

as a special case of support relations.

A support relation on S induces a support relation on any subset of

s.

If a support relation is such that

so: {t} and t

ex:

{s} imply s

t,

the relation is said to be separated.

E.g. the separated

preorders are just the partial orders.

Note that this is

not the same ass e: Supp 8 (t), t e: Supp 8 (s), which may well

hold for distinct s,t in a separated support relation. We now construct all possible separated support relations on a 3-element set.

There are 10 in all, 5 of them orders

(if we allow non-separated ones and do not identify 1somorphic ones, we get 53 support systems, 29 of them preorders). We list the 10 below, the orders first, with rising arrows to indicate specializations.

Examples. 3).

(a)

X=~, D.

1

=

Z/p. (i = 1,2, 1

(b) X= {x}, xl-:> 1,2,3 inQ,

(c)

k {X}., i=3,4,5. 1

2.

Here

n1

(D 1 AD 2 )

specializes to X

n2 , n1 Ao 2 Ao 3 =

DJ (a) X= {x,y,z}, Dl = Q(x,y)

(z 1--:> 0), n2 = Q(x) (y,z 1-:> 0), 0 3 = Q(z)(x,y 1-:> 0) (b) D1 = k {X} 4 , D2 = k {X} 2 , n3 = k {X} 3 • 3.

R = D1

Ao 2 Ao 3

semilocal domain C

(o 1 An 2 ) n(n 1 An 3 ) 194

~

n1

(a)

n1

(b)

D1 =

= Q,

D. = Z/p. (i = 1,2) 1

1

o3

k {X} 6 , D2 = k {X} 2 ,

k{X} 3 •

4.

R

= D1 ,.n 2...n3

is local ring with two mini-

mal prime ideals, subdirect product of

n1 ...n3 and n2..n3 • (a)

(x (b)

s.

n 1 = Q(x) (y

1-> n1

\->

n3 = Q (x,y = Q (xI-> 0),

0),

= Q (y)

0), n 2

1-> n2

0).

=Q(x 1-> p

prime), n3 = Z /p (x 1-> 0). R = n1 ... n2 ... n3 = n1 .. n3 • (a)

n1 • Q (x,y), n2 • Q(x) (y 1-> 0),

n3

""Q(x,y 1-.~ 0).

D2

= Q(x 1-->

0),

n1

(b)

"'Q(x),

n3 - Z/p.

In each case there are commutative examples.

For the re-

maining support systems (non-orders) we have of course only non-commutative examples.

In each case s

a:

T

~s

indicated

by drawing an arrow from s to a balloon enclosing T. indicate the partially ordered set of -!>

primes~.

~

=

We also

ker(AD.

J

in each case the lowest prime is 0.

6.

195

8.

Here n 1 a: {D 3 } follows from the relations

© r ~2

shown.

~1

!nl

n1

=k

{X} 3 ,

D2

=k

{X} 2 ,

n3

=k

{X} 1 •

9. 0

n1

=k

{ x,y

I

-1 2 (x y)

= yx-1 },

II! 3

D2 = k {x,y

n3

I

(x-ly)3

= yx-1

= k.

10.

n 1 = k { x,y } ,

n 2 = k { x,y

I

-1 2 (x y)

= yx-1

},

We conclude by expressing the support relation in terms of

.

singular kernels.

Let

P be any prime matrix ideal, then

P is defined as the set of matrices all of whose first order minors lie in P.

.

Clearly P ~ P and under a homomor-

phism into a field, if P represents the singular matrices,

P represents 196

the matrices of nullity at least two.

},

7.7.~.

Lemma

any n x n

~matrix

an~

prime matrix ideals P1 , •.. ,Pr in k, A tuP. can be extended to ann x n+l matrix

Given

~

which has rank n mod

P.

~

for each i.

Write A= (a 1 , ••• ,an); if a ~s another column, we put A*= (a,a 1 , ••• ,an) and we write A* t Pi to indicate

Proof.

that A* has rank n mod P.; this means that the square matrix ~

obtained by omitting some column of A* is not in P..

We

~

use induction on r; when r = 1, A has nullity 1 and we can make it non-singular by modifying a single column.

When

r > 1, we can by induction hypothesis adjoin a column to A to obtain ann x (n+l) matrix A. such that A. ~

If for some i, Ai

wise A.

1

E

P .• 1

~

i Pi, this will show that A1

t P.J

(j 1 i).

~UP j;

other-

Now form (ex E k).

For a I 0 this is not in P1 or P2 and it lies in any Pi (i > 2) for just one value of a. Avoiding these values (which we can do, because k is infinite) we get a matrix A* such that A*

t

Theorem 7.7.2.

Let {Ds}S be a family of epic X- fields and

uP 1.• • Now the support relation is described by p

s

the singular kernel of Ds' then Dt supports Ds (s £ S)

i f and only i f

In words: every matrix which becomes singular in Dt is either singular in each Ds or of nullity > 1 in some Ds • •

Proof.

Suppose that t supports Sand let A£ Pt' A

By the lemma we can find a column a such that (a,A)

i uP. s

i UPs. A*

Hence the equation A*u = 0 defines u = (u 0 ,u 1 , •••

u ) up to a scalar multiple in any D • n s 0 in all D , i.e. Since A E Pt' u 0 = 0 in Dt and so u 0 s 197

A

E

P for all s, hence (1) holds. s

Conversely, assume (1) and let f be defined in all Ds and f

=

0 in Dt.

We can find a denominator for f, say A,

with numerator A1 , then A1 E Pt, so either A1 E n Ps, ~.e. f = 0 in all D , or A1 £ P for some s. But then A £ P s s s and this contradicts the fact that A was a denominator. •

198

8 · Equations and singularities

8.1

Equations over skew fields In the commutative case there

~s

a well known theorem,

going back to Kronecker, which asserts that every polynomial equation of positive degree over a commutative field k has a solution in some extension field of k. One effect of this result has been to try to reduce any search for solutions to a single equation.

E.g. to find the eigenvalues of a

= 0.

matrix A we solve the equation det(xi-A)

In the general case no such simple theorem exists (so far!) and in any case we do not have a good determinant function (the determinant introduced by Dieudonn~ [43] is not really a polynomial but a rational function), so the above reduction is not open to us.

In fact we shall find

it more profitable to go from scalar equations to matrices. Our first problem is to write down the general equation ~n

one variable x over a skew field K.

We cannot allow x

to be central if we want to be able to substitute noncentral values of K, but some elements of K are bound to commute with x, e.g. 1, -1 etc. and it elements form a subfield k. ax

=

xa, then a must lie

~n

~s

Moreover, if a

k [x].

k, so that

Thus we have a field

K which is a k-algebra, and a polynomial = ~ = K~

E

the centre of K if arbitrary

substitutions of x are to be allowed. p of F

clear that these

~n

x is an element

Explicitly p has the form

199

wh ere a, b i, ••• , E K• Thus even a polynomial of quite low degree can already have a complicated form, and the problem of finding soluti~ns seems at first sight quite hopeless. (But we note that for polynomials in two variables over k, i.e. elements of k, an -extension field containing solutions has been found, by Makar-Limanov [75,77]).

A

little light can be shed on the problem by trying to generalize it. p

Instead of finding extensions L of K where

= 0 has a root, let us look for L such that a given matrix

A over F becomes singular.

We shall need some definitions;

let us recall that a K-ring is just a ring R with a homomorphism K --> R and a homomorphism f:R --> R' between Krings is a K-ring map if the triangle shown commutes. Let A be any square matrix over a K-ring R· we shall say that

'

A

is

proper i f there is a K-ring homo-

morphism of R into a K-field L

K----If

~R'

such that A maps to a singular matrix over L; otherwise A is said to be improper.

To elu-

cidate this concept we note Proposition 8.1.1.

Let K be any field and R a K-ring, then

an invertible matrix over

R is improper.

When

R is commuta-

tive, the converse holds: every improper matrix is invertible, but this does not hold generally.

Proof.

If A is invertible over R and f:R --> L is a K-ring

map then Af is again invertible, hence A is then improper. When R is commutative and A is not invertible, then det A is also a non-unit and so is contained in a maximal ideal m of R.

The natural map R --> R/ m is clearly a K-ring map

into a field and it maps det A to 0, so A becomes singular over R/m. To find a counter-example we can limit ourselves to 1 x 1 matrices, thus we must find a non-unit of R which maps to a unit under any homomorphism into a field. 200

Take a ring with

no R-fields, e.g. K2 ; any element c of K2 which a unit is improper but not invertible. • Our original problem was this:

~s

not 0 or

Does every polynomial p

in F "' ~ which is a nonunit have a zero in some K-field L? We note that p has the zero a ~n L if and only if the K-ring map F

--1>

L defined by x 1--~ a maps p to zero.

Thus the question now becomes:

Is every non-invertible

element of F proper? and it is natural to subsume this under the more general form: Problem 1.

Is every non-invertible matrix over

~

proper?

We shall find that this is certainly not true without restriction on k and K, and find a positive answer

~n

certain special cases, but the general problem is still open.

As a first condition we have the following result

(Cohn [76 ']). Theorem 8.1.2.

Let K be a field with centre k.

If every

non-constant equation over K has a solution in some extension, then k is algebraically closed in K.

Proof (*)

(2)

Let a

ax- xa

E

K be algebraic over k but not in k, then (metro-equation)

1

has a solution.

Let f be the minimal polynomial for a over

k, then by (2), 0

= f(a)x-xf(a) = f'(a),

formal derivative of f.

where f' is the

By the minimality of f it follows

that f' = 0, so a is not separable over k.

In particular,

this shows that k must be separably closed in K. purely inseparable over k, say aq

E

If a

k, where q "'pr, then

on writing D for the mapping x [--> ax - xa, we have

= o.

a

~s

D~ "'Daq

t

k, D ~ 0, say bD ~ 0 for some b E K. Now a a the equation xDq-l = b has a solution x in some extension, Since a

a

o

(*) I am indebted to P.v.Praag for simplifying my original proof.

201

but then x Dq o a

= bD # 0, a contradiction. a

Hence k must be

algebraically closed in K. • Thus for Problem 1 to have a positive solution it is necessary for k to be algebraically closed in K.

To sim-

plify Problem 1 we introduce a relation between matrices. Let

R

be any ring, then two matrices A, B over

R

are said

to be associated if there are invertible matrices U,V such that A

= UBV.

For simplicity we assume that R has invariant

basis number, so that all invertible matrices are square. Then it is clear that associated matrices have to be the same size, not necessarily square.

The following weaker equiva-

lence relation is often useful:

Two matrices A,B are said

to be stably associated if for some unit matrices, A + I is associated to B + I.

Clearly this is again an equivalence

relation (for the interpretation in terms of modules see Cohn [76]); stably associated matrices need not be the same size, but if an m x n matrix is stably associated to an r x s matrix then m-n number).

r-s (at least when R has invariant basis

The following result is an immediate consequence

of the definitions. Proposition 8.1.3.

Let R be a K-ring; i f a matrix A is in-

vertible, or proper, then any matrix stably associated to

A has the same property.•

We can now view the 'linearization by enlargement', already encountered in 6.4, in a different light: us that every matrix over

~

It tells

is stably associated to a

matrix linear in x: Theorem 8.1.4. then

Let A

= A(x)

be a square matrix over

A is stably associated to Bx + C, where B,C

some n.

Proof.

E

~ 1

K , for

n Moreover, i f A(O) is non-singular, we can take

C = I.

The first part follows as in 6.4.

If A(O) is nonsingular, then so is C, the result of putting x = 0 in Bx +

and Bx +Cis associated to C-lBx +I •• The linear matrix Bx + C obtained here is called the 202

c,

companion matrix for A(x).

(3)

To give an example, let

+ ••• + a n ,

and write P = an + p 1 x, then the first step is

where we have interchanged the rows.

If we continue in this

way, we obtain the matrix X

-1

0

0

0

0

X

-1

0

0

0

0

a

n

-1

an-1 •••

This is of the form xi - A, where A

E

K , and this matrix n

or also A itself is usually known as the companion matrix of p.

As we have seen, the general polynomial is more com-

plicated than p and its companion matrix, which always exists, by Th.8.1.4, need not be of the form xi - A.

More-

over, the companion matrix of a given polynomial or matrix 1s not generally unique, but it can be shown that if xi - A, xi - B are companions for the same matrix then A and B are similar over k (cf. Cohn [76]). An element p of

~

or more generally a matrix P whose

companion matrix can be put in the form xi - A is said to be non-singular at infinity.

finity.

E.g. (3) is non-singular at in-

Generally a matrix P with companion matrix Bx + C

is non-singular at infinity if B is non-singular; in any case the rank of B depends only on P and not on the choice 203

of the companion matrix, as is easily seen. also called the degree of P.

This rank is

For a polynomial of the form

(2) it clearly reduces to the usual degree, Let A

E~

n

(K), by a singular eigenvalue of A we under-

stand an element

~ E

K such that A -

~I

is singular.

We

can now state two conjectures whose proof would entail a positive solution of Problem 1. Conjecture 1.

Every square matrix over a field K has a

singular eigenvalue in some extension field of K. Conjecture 2,

Let K be a field which is a k-algebra and

assume that k is algebraically closed in K.

Then every

square matrix A over K has a non-zero singular eigenvalue in some extension of K unless A is triangularizable

over k.

If the answer were known, we could settle Problem 1 as follows. ~;

Let A be a non-invertible square matrix over

we have to show that A is proper, and by Th.8.1.4

and Prop"8.1.3 we can instead of A take its companion matrix Bx + C. ting x

If C is singular, this must be proper, by put-

0; otherwise we can take it in the form I - Bx.

If B has a non-zero singular eigenvalue S say, then I - BS-l is singular; otherwise by Conjecture 2, there exists E GL (k) such that P- 1 BP = T is triangular, hence

P

-1

P

n

(I - Bx)P

=I

- Tx (here we have used the fact that the

entries of P lie ink). ment a, then I - Ta

-1

If T has a non-zero diagonal ele-

is singular; hence all diagonal ele-

ments of T are 0 and so (Tx)n

0; it follows that I - Tx

is invertible, hence so is A, a contradiction. Before we can describe the special cases in which these conjectures can be settled we need to introduce another kind of eigenvalue, which always exists and which can be used to accomplish a transformation to Jordan canonical form.

204

8.2

Left and right eigenvalues of a matrix One of the main uses of eigenvalues in the commutative

case is to effect a reduction to diagonal form (when posLet A be a square matrix over a field K and sup-

sible).

pose that A is similar to a diagonal matrix D an).

=

diag(a 1 , ••• ,

Then there is a non-singular matrix U such that

Au

un.

If we denote the columns of U by u 1 , ••• ,un' this equation can also be written as Au.

~

u.a. ~

(i

~

l, ••• ,n).

This makes it clear that we have indeed an eigenvalue problem, but the a.l. need not be singular eigenvalues of A, since they do not in general commute with the components of u .. ~

Let K be any field and A

E

K ; an element a n

E

K is called

a right eigenvalue of A if there is a non-zero column vector u, called an eigenvector for a such that (1)

Au

ua.

Similarly a left eigenvalue of A is an element 6

E

K for

which there exists a non-zero row vector v, an eigenvector for 6, such that vA

=

eigenvalues of A

called the spectrum of A, spec A.

l.S

Let c e: K*• if Au

'

Sv.

= ua,

The set of all left and right then A.uc

= uac = uc.c-1 ac.

This

shows that the right eigenvalues of A consist of complete conjugacy classes; similarly for left eigenvalues. If -1 -1 -1 P E GL (K), then P AP.P u = P Au = p- 1ua, hence a is also n a right eigenvalue of P- 1AP. In other words, right (and left) eigenvalues are similarity invariants of A.

For sin-

gular eigenvalues this is not in general the case; in fact 205

it is easy to see that the three notions of eigenvalue coincide for elements in the centre of K ( 11 central" eigenvalues), but in general there is no very close relation between them.

Thus it is possible for a matrix to have a

right but no left eigenvalue (Cohn [73 11 ] ) , but as we shall see later, over an existentially closed field the notions of left and right eigenvalue coincide. G.M. Bergman has observed (in correspondence) that left and right eigenvalues are special cases of the following more general notion:

If A

£

K then n

~ £

K is called an

inner eigenvalue, more precisely an r-eigenvalue (where

1




ax, pb:x

1-->

xb,

then (2) may be written m.

(3)

and f(pb)

= pb :>. a

and by hypothesis f(\ ) is a unit a Now define the polynomial ~(s,t) in the

We note that :>..apb

= 0.

commuting variables s,t by cp(s,t)

f(s) - f(t)

=

s - t

then ¢(\ ,pb)(\

a

a

-

p )

b

f (\ ) • a

207

Since f(Aa) is a unit, Aa - pb has a two-sided inverse and it follows that (3) has a unique solution in M. • The significance of the lemma lie~ in (t:i:): GiveRn R,S,MS as in the lemma, the set of all matr~ces 0 s , r s , s s , m s M, is a ring under the usual matrix multiplication, and (2) shows that

(~ ~)

(: :)· (: :) (~ :)

· (a m)b · ·

'1 ar to a 'd'~agona 1' matr~x. · ~s s~m~ 0 Theorem 8.2.3. Let K be any field and A s K • ~.e.

n

Then spec A

cannot contain more than n conjugacy classes, and when it consists of exactly n classes, all except at most one algebraic over the centre of K, then A is similar to a diagonal matrix.

Proofo

We have seen that spec A consists of conjugacy

classeso

Let r be the number of classes containing right

eigenvalues and s the number of the remaining classes in spec A, then the space spanned by the columns corresponding to right eigenvalues is at least r-dimensional and the space of rows orthogonal to this is at least a-dimensional, hence r+s

~

n, and r+s is just the number of conjugacy classes in

spec A. Assume now that r+s

n; let a 1 , •• o,ar be inconjugate right eigenvalues and u1 ,oo.,ur the corresponding eigenvectors, while

=

s1 ,oo•,Ss

are the left eigenvalues not con-

jugate among themselves or to the a's, with corresponding eigenvectors v 1 , ••• ,v 5 • By Prop.8o2.1 the u's are right linearly independent, the v's are left linearly independent and v.u. 0 for all i,jo Write ul for then X r matrix J

~

consisting of the columns u 1 ,.o.,ur and matrix consisting of the rows v 1 , ••• ,vs.

v2

for the s x n Since the columns

of u1 are linearly independent, we can find an r x n matrix 208

V1 over K such that v1 u1 =I and similarly there ~san n x s matrix U2 such that v 2u2 =I. Put U = (u u ), V 1 2

(~; )-

then

vu The matrix on the right is clearly invertible and since one-sided ~nverses over a field are two-sided (i.e. a field is weakly finite, cf. Cohn

[71")), we have U(VU)-l =

v- 1,

so

VA 6 "v s s

It follows that VAV-l

=

(~ ~).

where a= diag(a 1 , ••• ,ar),

6 = diag(6 1 , ••• ,6s) and T s rKs. Now all the a's and 6's are inconjugate and all but at most one are algebraic over the centre of K, hence their minimal equations are distinct (cf.3.3).

If only right or only left eigenvalues occur, we

have diagonal form; otherwise let 61, ••• ,6s be algebraic, say.

Taking f to be the product of their minimal polynomials

we have f(S)

0 while f(a) is a unit.

can find X s rKs such that aX - XS matrix by

(~ ~)

=

By Lemma 8.2.2 we

T and transforming our

we reach diagonal form. •

The restriction on the eigenvalues, that there is to be only one transcendental conjugacy class (at most)

~s

not as

severe as appears at first sight, but is to be expected, since K can be extended so that all transcendental elements are conjugate (cf. 5.5). 209

8.3

Canonical forms for a single matrix over a skew field As before let K be a field which is a k-algebra; our

task is to find a canonical form for a matrix under similarity transformation.

The results are not quite as pre-

cise as in the commutative case, but come very close, the main difficulty being the classification of polynomials over K, i.e. elements of K[x]. Let A e K ; in 5.5 we called A transcendental over k if n

for any f e k[x]*, f(A) is non-singular.

If there is a non-

zero polynomial f over k such that f(A)

O, A is said to be

algebraic over k.

Of course when K

= k (the classical case)

every matrix is algebraic, by the Cayley-Hamilton theorem. In general a matrix is neither algebraic nor transcendental, e.g. diag(a,l), where a is transcendental over k, but we have the following reduction. Proposition 8.3.1.

Every matrix A over K is similar to the

diagonal sum of an algebraic and a transcendental matrix.

Proof.

We can interpret A as as an endomorphism of Kn;

clearly being algebraic or transcendental is a similarity invariant, and so may be regarded as a property of the endomorphism.

More precisely, V

= Kn can be regarded as a

(K,k[t])-bimodule, where tis a central indeterminate, with

= Av.

the action vt

Then the restriction of A to an A-

invariant subspace W of V is algebraic if and only if W is a k[t]-torsion module.

v,

Let V0 be the torsion submodule of

then V is a K-subspace, hence we can find a complement 0 of v in V: 0 (1)

v

v

0

~

vl.

Now A restricted to V is algebraic, while the transfor0 mation induced on V ~ V/V is transcendental. Hence if 1 0 we use a basis adapted to the decomposition (1), A takes the form 210

where A0 ~s algebraic and A1 is transcendental. By applying Lemma 8.2.2 we can reduce A' to 0 and so obtain the desired decomposition. • It is not hard to see that the algebraic and transcendental parts are in fact unique up to similarity, so that by this result, we need only consider algebraic or transcendental matrices. The transcendental part with.

~s

in many ways simpler to deal

For by suitably extending K, we can always transform

a transcendental matrix to scalar form, as we saw in 5.5. Of course over K itself we cannot expect such a good normal form.

We state the result as

Proposition 8.3.2.

Let K be existentially closed (over k).

Then any transcendental matrix A is similar to al, where a is any transcendental element of

K. •

To describe the algebraic part, let V space, with an algebraic endomorphism 8.

= Kn

as right K-

Writing R

= K[t]

with a central indeterminate t, consider V as right R-module by letting

~tic.

~

correspond to

~eic .• ~

If A is the matrix

of 8 relative to a basis of V, we shall call V the R-module associated to A.

It is clear that two matrices are similar

if and only if the associated R-modules are isomorphic. Now R is a principal ideal domain and every matrix over R is associated to a diagonal matrix (c£. e. g. Cohn Ch.8), thus there exist P,Q

£

[71"].

GL (R) such that n

where A. 1 is a total divisor of A. (i = 2, ••• ,n). This ~~ means that for each i there is an invariant element c of R 211

( i.e, cR = Rc) such that A.1-1 R -~ cR -~ A.R. 1

The A.1 are just

the invariant factors of ti - A and as right R-module V is isomorphic to the direct sum

We observe that this holds for any matrix A, algebraic or. not.

In fact we now see that A is algebraic if and only

if A divides a polynomial with coefficients in k. n

Let us

take k to be the precise centre of K, then a polynomial over K is invariant if and only if it is associated to a poly-

[71

nomial over k (Cohn

p.297).

11 ] ,

It follows that A is

algebraic if and only if A divides an invariant polynomial, n

i.e. (by definition) if and

on~y

if A is bounded. n

To find when A is transcendental we recall that an element of R is said to be totally unbounded if it has no bounded factor (apart from units).

Suppose that A has a bounded n

factor p say, then the R-module V has an element annihilated by p and hence by p*, where p* is the bound of p (i.e. the least invariant element divisible by p).

Now p*

=

p*(t) is

invariant and p*(A) is singular, so A cannot be transcendental. Conversely, if A is not transcendental, V has an element annihilated by an invariant polynomial, so some invariant factor

A. has a factor which is bounded, and hence A then has a n

1

bounded factor.

This proves most of

Proposition 8,3.3.

A

Let K be a field with centre k and let

Kn have invariant factors A1 , ••• ,An' Then (i) A is algebraic over k if and only if A is bounded, (ii) A is E

n

transcendental over k if and only i f A is totally un-

= ••• = An-1 = 1.

bounded, and then Al

n

Only the last part still needs proof:

Each A. (i < n) is 1 a total divisor of A , so there is an invariant element c such that A.1

I

c

I

n

An •

ding An is 1, hence Ai 212

But the only invariant element divi-

=1

(i

= l, ••• ,n-1).•

To obtain a normal form for algebraic matrices we need a result on tbe decomposition of cyclic modules over a principal ideal domain R.

We recall (cf. Cohn [71"] p.229) that a

cyclic R-module R/aR has a direct decomposition

where each qi is a product of pairwise stably associated bounded atoms, while atoms

~n

different q's are not stably

associated, and u is totally unbounded. result to (3) and observe that A. for i

If we apply this


We shall call a free over K if the map x isomorphism~

an

~

{x}

K(a).

a defines

Then we have the

n

Let a e E , then a is free over K i f and only i f

Corollary.

A + LAiai is singular only when A + LAixi is not full in ~.



Th. 8.5.1 also makes it clear how specializations in projective space should be defined. Each point of the projective space P n(E) ~s described by an (n+l)-tuple I; = and 1;,11 represent the same point if and only if n 11.>- for some A e E*. At first sight it is not clear ~

(I; , ••• ,!; )

o

I; .

~

=

how specialization in projective space is to be defined; instead of polynomials we would have to consider rational functions in the x. and (in contrast to the commutative ~

case) there is no simple way of getting rid of the denominators.

However, with Th. 8.5.1 in mind we can define

I; --> 11 (over K) if and only if n

E

0

A.~. ~

singular

~

~

EnA.n. singular, 0

~

1.

for any matrices A , ••• ,A over K. o n The condition of Th. 8.5.1 can still be simplified if we are specializing to a point in K: Theorem 8.5.2.

Let E/K be any extension, a e En, A e

~'

then a--> A i f and only i f for any matrices Al, • • • ,An over

K, I Proof.

EA.(a.1.

1.

).. ) is non-singular. 1

Assume that a-->

>-·• if

I -

LA. (x. ~

~

-

A.) becomes ~

singular for X = a, it must also be singular for X = A, but then it reduces to I, a contradiction, hence I - EA.~ (a.~ -

A.) is non-singular. 1.

Conversely, when the condition is satisfied, let the singular kernel of the map ~ show that under the map x

1-->

-->

K(a).

1- every matrix of

P be

We must

P becomes

singular, and here it is enough to test matrices of the 225

Thus let A + IA.a. be singular, we have to

form A + IA.x .• 1

show that C

1

1

=A

1

is also singular; note that C has + IA.A. 1 1

If C were non-singular, we could write

entries in K.

A+ EA.a. =A+ EA.A. + EA.(a.- A.) 11 11 11 1

=C+

EA. (a. - A.) 1

1

1

-1

C(I- IB.(a.- A.)), where B.= -C 1 1 1 1

A.• 1

Here the left-hand side is singular, and by hypothesis the right-hand side is non-singular, a contradiction, which shows that A+ EA.A. is in fact· singular. • 1

1

With the help of the specialization lemma 6.3.1 we again get a criterion for a point to be free: Corollary.

Let K be infinite-dimensional over its centre

k, where k is infinite, then for any a

£

En the following

conditions are equivalent: (a)

a is free over K,

(b)

every point of Kn is a specialization of a,

(c)

I - EA.(a.- A.) is non-singular for all A. over 1 1 1 1 K and all A. £ K. 1

Proof.

By Th. 8.5.2, (b)

To prove (b)

~

D

167

Z (D)

zero-set of a:X ---> D

173

kd

generic matrix ring

173

J (R)

Jacobson radical of R

174

rational meet

179

set of all m x n matrices over R

189

~n

Ess(S) set of essential suffixes

191

Supp 5 (t)

192

maximal subset of S supported by t

support relation

193

233

Bibliography

Numbers in italics refer to page numbers in the book. Amitsur, S.A. 48.

A generalization of a theorem on differential equations, Bull. Amer. Math. Soc. 54 (1948) 937-941

54.

Noncommutative cyclic fields, Duke Math. J. 21 (1954) 87-105

54

1 •

65

48, 65, 70f.

Differential polynomials and division algebras, Ann. of Math. 59 (1954) 245-278

55.

Finite subgroups of division rings, Trans. Amer. Math. Soc. 80 (1955) 361-386

58.

Commutative linear differential operators, Pacif. J. Math. 8 (1958) 1-10

65.

Generalized polynomial identities and pivotal monomials, Trans. Amer. Math. Soc. 114 (1965) 210-226

66.

141

Rational identities, and applications to algebra and geometry, J. Algebra 3 (1966) 304-359

165, 172

Asano, K. 49.

Uber die Quotientenbildung von Schiefringen, J. Math. Soc. Japan 1 (1949) 73-78

Bergman, G.M. 64

8

118, 147, 206

A ring primitive on the right but not on the left, Proc. Amer. Math. Soc. 15 (1964) 473-475

67.

25

Commuting elements in free algebras and related topics in ring theory, Thesis Harvard University 1967

234

141

70.

Skew fields of noncommutative rational functions, after Amitsur, Sem, Schutzenberger-Lentin-Nivat 1969/70 No, 16 (Paris 1970)

74.

166, 168, 171f.

Modules over coproducts of rings, Trans. Amer. Math. Soc. 200 (1974) 1-32

74'.

5, 87, 98ff., 103f., 106f., 109

Coproducts and some universal ring constructions, Trans. Amer. Math. Soc. 200 (1974) 33-88

76.

Rational relations and rational identities in division ringE I, II. J.A1gebra, 43(1976)252-66, 267-97

166, 175, 178, 1t

Bergman, G.M. and Small, L.W. 75.

PI-degrees and prime ideals, J. Algebra 33(1975) 435-462 174f., 187

Boffa, M. and v.Praag, P. 72.

Sur les corps generiques, c.R.Acad.Sci. Paris Ser.A 274 (1972) 1325-1327

135

Bokut', L.A. 63, On a problem of Kaplansky, Sibirsk. Zh. Mat. 4(1963) 69.

1184-1185 19 On Mal'cev's problem, Sibirsk.Zh.Mat. 10(1969) 965-1005 3,

Borevi~,

58.

Z.I.

On the fundamental theorem of Galois theory for skew fields, Leningrad Cos. Ped, Inst. U6. Zap.l66(1958) 221-226

Bortfeld, R. 59. Ein Satz zur Galoistheorie in Schiefkorpern, J. reine u.angew. Math. 201 (1959) 196-206

Bowtell, A.J. 67. On a question of Mal'cev, J. Algebra 6 (1967) 126-139 4, 91

235

Brauer, R. 49. On a theorem of H. Cartan, Bull. Amer.Math.Soc. 55 (1949) 619-620 Brungs, H.-H. 69. Generalized discrete valuation rings, Canad. J. Math. 21 (1969) 1404-1408 Bryars, D.A.

25

95

Burmistrovic, I.E. 63. On the embedding of rings

~:

'·nw fields, Sibirsk. Zh.

Mat. 4 (1963) 1235-1240

Cameron, P.J.

117

Cartan, H. 47. Theorie de Galois pour 1es corps non-commutatifs, Ann. Sci. E.N.S. 64 (1947) 59-77

Cherlin, G. 72,

The model companion of a class of structures, J. Symb. Logic 37 (1972) 546-556

134

Cohn, P.M. 59.

On the free product of associative rings, Math. Zeits. 71 (1959) 380-398

60.

98

On the free product of associative rings.II. The case of (skew) fields, Math. Zeits. 73 (1960) 433-456

61.

On the embedding of rings in skew fields, Proc. London Math. SOCo

61'.

28

Quadratic extensions of skew fields, Proc. London Math. Soc.

236

(3) 11 (1961) 511-530

(3) 11 (1961) 531-556

29, 56

62.

Eine Bemerkung uber die multiplikative Gruppe eines Korpers, Arch. Math. 13 (1962) 344-348

63.

Rings with a weak algorithm, Trans. Amer. Math.Soc. 109 (1963) 332-356

106

64.

Free ideal rings, J. Algebra 1 (1964) 47-69

65.

Universal Algebra, Harper and Row (New York, London 1965)

66.

106

4, 20, 91

On a class of binomial extensions, Ill. J. Math. 10 (1966) 418-424

66'.

Some remarks on the invariant basis property, Topology 5 (1966) 215-228

67.

Torsion modules over free ideal rings, Proc. London Math. Soc.

68.

75

(3) 17 (1967) 577-599

On the free product of associative rings III, J. Algebra 8 (1968) 376-383

69.

25

Dependence in rings.II.

106

The dependence number, Trans.

Amer. Math. Soc. 135 (1969) 267-279

71.

The embedding of firs in skew fields, Proc. London Math. Soc.

71'.

(3) 23 (1971) 193-213

91

Rings of fractions, Amer. Math. Monthly 78 (1971) 596615

71".

3, 90f.

5, 9

Free rings and their relations, No.2, LMS monographs, Academic Press (London, New York 1971)

18 1 25, 27,

54, 58, 78, 83, 86f., 90, 98, 107, 112, 118, 12Bf., 141f., 153, 209, 211-216, 221

71'". Un critere d'immersibilite d'un anneau dans un corps gauche, C.R.Acad. Sci. Paris, Ser. A, 272 (1971) 1442-1444 72.

Universal skew fields of fractions, Symposia Math. VIII (1972) 135-148

72'.

Generalized rational identities, Proc. Park City Conf. 1971 in Ring Theory (ed. R.Gordon) Acad. Press (New York 1972) 107-115

147, 166

237

72".

Skew fielJs of fractions, and the prime spectrum of a general ring, in Lectures on rings and modules, Lecture Notes in Math. No.246 (Springer, Berlin 1972) 1-71

72"'. Rings of fractions, Univ. of Alberta Lecture Notes 1972 73.

Free products of skew fields, J. Austral. Math. Soc. 16

73 1 •

The word problem for free fields, J. symb. Logic 38

(1973) 300-308 (1973) 309-314, correction and addendum ibid. 40 (1975) 69-74 73".

The similarity reduction of matrices over a skew field, Math. Zeits. 132 (1973) 151-163

73"

1 •

118, 206

The range of derivations on a skew field and the equation ax- xb

= c,

J. Indian Math. Soc. 37 (1973) 1-9

19

73iv. Skew field constructions, Carleton Math. Lecture Notes No.7 (Ottawa 1973) 74.

Progress in free associative algebras, Israel J. Math. 19 (1974) 109-151

74'.

Localization in semifirs, Bull. London Math. Soc. 6 (1974) 13-20

74".

The class of rings embeddable in skew fields, Bull. London Math. Soc. 6 (1974) 147-148

75.

Presentations of skew fields.

91

I. Existentially closed

skew fields and the Nullstellensatz, Math. Proc. Camb. Phil. Soc. 77 (1975) 7-19

76.

The Cayley-Hamilton theorem in skew fields, Houston J. Math. 2 (1976) 49-55

76'.

140

202f.

Equations dans les corps gauches, Bull. Soc. Math. Belg. 201, 223

77.

Algebra vol.2, J. Wiley (London, New York 1977)

42,

54, 162, 173, 230f.

(a) Zum Begriff der Spezialisierung tiber Schiefkorpern, to appear (b) The universal field of fractions of a semifir, to appear 168

238

(c) A construction of simple principal ideal domains, to appear 58 Cohn, P.M. and Dicks, W. 76.

Localization in sernifirs II, J. London Math.Soc. (2) 13 (1976) 411-418

142

Dauns, J. 70.

Embeddings

~n

division rings, Trans. Amer. Math. Soc.

150 (1970) 287-299 Dicks, W.

96, 99, 165

Dickson, L.E. Dieudonn~,

43.

28

54

J.

Les determinants sur un corps noncommutatif, Bull. Soc. Math. France 71(1943) 27-45

52.

199

Les extensions quadratiques des corps noncommutatifs et leurs applications, Acta Math. 87 (1952) 175-242

Doneddu, A. 71.

Etudes sur les extensions quadratiques des corps noncommutatifs, J. Algebra 18(1971) 529-540

72.

Structures geometriques d'extensions finies des corps non-commutatifs, J. Algebra 23(1972) 18-34

74.

Extensions pseudo-lineaires finies des corps non-commutatifs,J. Algebra 28(1974) 57-87

Faith, C.C. 58.

On conjugates

~n

division rings, Canad. J. Math. 10

(1958) 374-380 Farkas, D.R. and Snider, R.L. 239

76.

K and Noetherian group rings, J. Algebra 42 (1976) 0

192-198

20

Faudree, J.R. 66.

Subgroups of the multiplicative group of a division ring, Trans. Amer. Math. Soc. 124 (1966) 41-48

69.

Locally finite and solvable subgroups of skew fields, Proc. Amer. Math. Soc. 22 (1969) 407-413

Fisher, J.L. 71.

Embedding free algebras in skew fields, Proc. Amer. Math. Soc. 30 (1971) 453-458

74.

15

The poset of skew fields generated by a free algebra, Proc. Amer. Math. Soc. 42 (1974) 33-35

74'.

The category of epic R-fields, J. Algebra 28 (1974) 283-290

Fuchs, L. 63.

Partially ordered algebraic systems, Pergamon (Oxford

1963)

21

Gelfand, I.M. and Kirillov, A.A. 66.

Sur les corps lies aux algebres enveloppantes des algebres de Lie, Publ. Math. IHES No. 32 (1966) 5-19

Goldie, A.W.

14

Gordon, B. and Motzkin, T.S. 65.

On the zeros of polynomials over division rings, Trans. Amer. Math. Soc. 116 (1965) 218-226, Correction ibid.

122 (1966) 547

54, 165, 207

Hahn, H. 07.

Uber die nichtarchimedischen Grossensysteme, Wiss. Wien IIa 116 (1907) 601-655

240

20

s.-B.

Akad.

Hall, M. 59.

The theory of groups, Macmillan (New York 1959)

22

Harris, B. 58.

Commutators in division rings, Proc. Amer. Math. Soc. 9 (1958) 628-630

19

Herstein, I.N. 53.

Finite subgroups of division rings, Pacif. J. Math. 3 (1953) 121-126

56.

Conjugates in division rings, Proc. Amer. Math. Soc. 7 (1956) 1021-1022

68.

Non-commutative rings (Carus Monographs; J. Wiley 1968) 162

Herstein, I.N. and Scott, W.R. 63.

Subnormal subgroups of division rings, Canad. J. Math. 15 (1963) 80-83

Higman, G. 40.

The units of group rings, Proc. London Math. Soc. (2) 46 (1940) 231-248

52.

Ordering by divisibility ln abstract algebras, Proc. London Math. Soc.

61.

152

(3) 2 (1952) 326-336

20

Subgroups of finitely presented groups, Proc. Roy.Soc., Ser. A 262 (1961) 455-475

141

Higman, G., Neumann, B.H. and Neumann, H. 49.

Embedding theorems for groups, J. London Math. Soc. 24 (1949) 247-254

115

Hilbert, D. 1896. Grund1agen der Geometrie, Teubner (Stuttgart 1896, lOth ed. 1968) 241

Hirschfeld, J. and Wheeler, W.H. 75.

Forcing, Arithmetic, Division rings, Lecture Notes in Math. No. 454, Springer (Berlin 1975)

134f., 137

Hua, L.K. 49.

Some properties of a sfield, Proc. Nat. Acad. Sci. USA 35(1949) 533-537

50.

On the multiplicative group of a sfie1d, Science Record Acad. Sinica 3 (1950) 1-6

Huzurbazar, M.S. 60.

The multiplicative group of a division ring, Dok1ady Acad. Nauk SSSR 131 (1960) 1268-1271

= Soviet

Math.

Dok1ady 1(1960) 433-435

61.

On the theory of multiplicative groups of division rings, Doklady Akad. Nauk SSSR 137(1961) 42-44

=

Soviet Math. Doklady 2 (1961) 241-243

Ikeda, M. 62.

Schiefkorper unendlichen Ranges uber dem Zentrum, Osaka Math. J. 14(1962) 135-144

Jacobson, N. 40.

The fundamental theorem of Galois theory for quasifields, Ann. of Math. 41(1940) l-7

43.

Theory of rings, Amer. Math. Soc. (Providence 1943)

55.

A note on two-dimensional division ring extensions, Amer. J. Math. 77 (1955) 593-599

56.

Structure of rings, Amer. Math. Soc. (Providence 1956, 1964) 25, 32, 40, 42, 124, 162

62.

Lie algebras, Interscience (New York and London 1962) 71

242

75.

PI-algebras, an introduction, Lecture Notes in Math. No.:441 Springer (Berlin 1975)

173

Jategaonkar, A.V. 69.

A counter-example in homological algebra and ring theory, J. Algebra 12 (1969) 418-440 23, 27

69'.

Ore domains and free algebras, Bull. London Math. Soc. 1(1969) 45-46 14

Kaplansky, I. 51.

A theorem on division rings, Canad. J. Math. 3(1951) 290-292

70.

Problems ln the theory of rings revisited, Amer. Math. Monthly 77 (1970) 445-454

18

Klein, A.A. 67.

Rings nonembeddable in fields with multiplicative semlgroups embeddable in groups, J. Algebra 7(1967) 100-125 4, 91

69.

Necessary conditions for embedding rings into fields, Trans. Amer. Math. Soc. 137(1969) 141-151

70.

A note about two properties of matrix rings, Israel. J. Math. 8(1970) 90-92

70 1 •

5, 81

5

Three sets of conditions on rings, Proc. Amer. Math. Soc. 25(1970) 393-398

72.

A remark concerning embeddability of rings into fields, J. Algebra 21(1972) 271-274

5

Knight , J • T . 70. On epimorphisms of non-commutative rings, Proc. Camb. Phil. Soc. 68 (1970) 589-600

95

243

Koethe, G. 31.

Schiefkorper unendlichen Ranges uber dem Zentrum, Math. Ann.

lOS (1931) 15-39

Kosevoi, E.G. 70.

On certain associative algebras with transcendental relations, Algebra i Logika 9, No. 5 (1970) 520-529 14

Laugwitz, D. (a) Tullio Levi-Civita's work on non-archimedean structures, to appear in Levi-Civita memorial volume. Lazerson, E.E. 61.

Onto inner derivations in division rings, Bull. Amer. Math. Soc. 67(1961), cf. Zentralblatt f. Math. 104

(1964) 33-34

19

Leavitt, W.G. 57.

Modules without invariant basis number, Proc. Amer. Math. Soc. 8(1957) 322-328

75

Lenstra, jr., W.H. 74.

Lectures on Euclidean rings, Bielefeld 1974

25

Lewin, J. and Lewin, T. (a) An embedding of the group algebra of a torsion free one relator group in a field, to appear in J. Algebra 20

Likhtman, A.I. 63.

On the normal subgroups of the multiplicative group of a division ring, Doklady Akad. Nauk SSSR 152(1963) 812-815

244

= Soviet

Math. Doklady 4(1963) 1425

Lyapin, E.S. 60.

Semigroups (Moscow 1960, translated Amer. Math. soc. 1963) 23

Macintyre, A. 73.

115, 123, 158

The word problem for division rings, J. symb. Logic 38(1973) 428-436

158

(a) On algebraically closed division rings, Ann. Math. Logic

135

(b) Combinatorial problems for skew fields. I Analogue of Britton's lemma, and results of Adyan-Rabin type, to appear Magnus, W. 37.

20

Uber Beziehungen zwischen hoheren Kommutatoren, J. reine u. angew. Math. 177 (1937) 105-115

Makar-Limanov, L.G. 75.

On algebras with one relation, Uspekhi Mat. Nauk 30, No.2 (182)(1975) 217

77.

200

To appear in Algebra i Logika

200

Mal 'cev, A. I. 37.

On the immersion of an algebraic ring into a field, Math. Ann. 113(1937) 686-691

39. ·

1

Uber die Einbettung von assoziativen Systemen 1n Gruppen I, II (Russian, German summary), Mat. Sbornik N.S.

6 (48) (1939) 331-336 ibid. 8(50) (1940) 251-264

4

48.

On the embedding of group algebras in division algebras Doklady Akad. Nauk SSSR 60 (1948) 1499-1501

73.

Algebraic systems, Springer (Berlin 1973)

20 90f.

Moufang, R. 245

37.

Einige Untersuchungen uber geordnete Schiefkorper, J. reine u. angew. Math. 176(1937) 203-223

22

Nagahara, T. and Tominaga, H. 55.

A note on Galois theory of division rings of infinite degree, Proc. Japan Acad. 31(1955) 655-658

56.

On Galois theory of division rings I,II, Math. J. Okayama univ. 6 (1956) 1-21, ibid. 7 (1957) 169-172

Nakayama, T. 53.

On the commutativity of certain division rings, Canad. J. Math. 5 (1953) 242-244

Neumann, B.H. 49.

On ordered groups, Amer. J. Math. 71 (1949) 1-18

49'.

On ordered division rings, Trans. Amer. Math. Soc. 66 (1949) 202-252

54.

20

An essay on free products of groups with amalgamations, Phil. Trans. Roy. Soc. Ser. A 246 (1954) 503-554 94, 110, 120

73.

The isomorphism problem for algebraically closed groups, in Word problems ed. W.W. Boone et al. North Holland (Amsterdam 1973)

137

Neumann, H. 48.

Generalized free products with amalgamated subgroups. I. Definitions and general properties, Amer. J. Math. 70 (1948) 590-625

49.

Generalized free products. II.

The subgroups of gen-

eralized free products, Amer. J. Math. 71 (1949) 491-540 Nivat, M. 70. 246

Series rationnelles et algebriques en variables non

commutatives, Cours du DEA 1969/70

78

Niven, I. 41.

Equations in quaternions, Amer. Math. Monthly 48 (1941) 654-661

Noether, E.

8, 46

Ore, 0. 31.

Linear equations

~n

non-commutative fields, Ann. of

Math. 32(1931) 463-477

32.

8

Formale Theorie der linearen Differentialgleichungen, J. reine u. angew. Math. 167 (1932) 221-234, ibid.

168 (1932) 233-252 33.

Theory of non-commutative polynomials, Ann. of Math. 34 (1933) 48D-508

v.Praag, P. 71.

201

Groupes mu1tip1icatifs des corps, Bull. Soc. Math. Belg. 23(1971) 506-512

Procesi, C. 68.

Sulle identita delle algebre semp1ici, Rend. eire. Mat. Palermo ser 2, XVII (1968) 13-18

73.

163, 165

Rings with polynomial identities, (M. Dekker, New York

1973)

165

Richardson, A.R. 27. Equations over a division algebra, Mess. Math. 57 (1927) 1-6 Robinson, A. 71. On the notion of algebraic c1osedness for non-commutative groups and fields, J. symb. Logic 36 {1971) 441-444

134

247

Sacks, G. 72.

Saturated model theory, Benjamin (New York 1972)

18

Schenkman, E.V. 58.

Some remarks on the multiplicative group of a sfield, Proc. Amer. Math. Soc. 9 (1958) 231-235

61.

Roots of centre elements of division rings, J. London Math. Soc. 36 (1961) 393-398

Schreier, 0. 27.

Uber die Untergruppen der freien Gruppen, Abh. Math. Sem. Hamburg 5 (1927) 161-183

93

Schutzenberger, M.P. 62.

On a theorem of Jungen, Proc. Amer. Math. Soc. 13 (1962) 885-890

78

Scott, W.R. 57.

On the multiplicative group of a division ring, Proc. Amer. Math. Soc. 8 (1957) 303-305

Shelah, S. 73.

Differentially closed fields, Israel J. Math. 16 (1973) 314-328

18

Simbireva, E.P. 47.

On the theory of partially ordered groups, Mat. Sbornik, 20 (1947) 145-178

Sizer, W.S. 75.

Similarity of sets of matrices over a skew field, Thesis

(London University 1975) 77.

215

Triangularizing semigroups of matrices over a skew field, Linear Algebra and its applications, to appear 206

248

Small, L.W.

An example ~n Noetherian rings, Proc. Nat. Ac~d. Sci.

65.

USA 54 (1965) 1035-1036

25

Smith, D.B. 70.

On the number of finitely generated 0-groups, Pacif. J. Math. 35 (1970) 499-502

123

Smits, T.H.M. 68.

Skew polynomial rings, Indag. Math. 30 (1968) 209-224 18

Sweedler, M.E. 75.

The predual theorem to the Jacobson-Bourbaki correspondence, Trans. Amer. Math. Soc. 213 (1975) 391-406 32

Szele, T. 52.

On ordered skew fields, Proc. Amer. Math. Soc. 3 (1952) 410-413

Treur, J. 76.

A duality for skew field extensions, Thesis (Utrecht

1976) v.d. Waerden, B.L. 48.

Free products of groups, Amer. J. Math. 70 (1948) 527-528

II

94



Wahl1ng, H. 74. Bericht tiber Fastkorper, Jber. Deutsch. Math.-Ver. 76 (1974) 41-103 Wedderburn, J.H.M.

249

09.

A theorem on finite algebras, Trans. Amer. Math. Soc. 6 (1909) 349-352

Wiegmann, N.A. 55.

Some theorems on matrices with real quaternion elements, Canad. J. Math. 7 (1955) 191-201

217

Wolf, L.A. 36.

Similarity of matrices in which the elements are real quaternions, Bull. Amer. Math. Soc. 42 (1936) 737-743

250

Index

Algebraic

109, 210

Algebraic dependence

230

Amalgamation property

133

Artin's lemma

EC-field

131

Eigenvalue, central

206

41

Bezout domain

89

Binomial extension

61

inner

206

inverse

226

left, right

205

singular

204

Elementary divisor

213

Elementary mapping

134 73

Central eigenvalue

206

Central extension

61

Essential index

180

Coideal

33

Essential index set

186

203

Existentially closed field

131

Companion matrix Coproduct

92

Coring

32

Cramer's rule

79

Crossed product

54

Dedekind's lemma Degree

40 30, 99, 204

Denominator Denominator set

79 8

Epic R-field

Extension

30

Faithful coproduct

93

X- field

166

Field coproduct

107 1

Field of fractions

30

Finite extension

116

Finitely homogeneous

89

Dependable

150

Fir

Dependence relation

230

Flat subset

169

Derivation

11

Forcing companion

134

Determinantal sum

81

Free field

128

Dickson's theorem

54

Free transfer isomorphism

108

Differential equation

65

Free point

225 251

Full matrix

82

226

Level

28

Lie algebra Galois group

40

Galois group, outer

52

Generalized polynomial identity (g.p.i.) 141, 163 Generic matrix ring Group like

173

152, 202

Local homomorphism

73, 172

Local ring

73, 174 227

Locus

33

Hilbert's 'theorem 90'

68

Honest homomorphism

88

Hua's identity

166

Improper matrix

200

Inductive class

134

Inert, inertia theorem

Linearization by enlargement

14lf.

Infinite forcing companion

134

Inner derivation

12

Inner eigenvalue

Mal'cev-Neumann construction

20f. 82

Matrix ideal Matrix local ring

174

Metro equation

201 99

Monomial

108

Monomial unit Multiplicative set n-fir

7, 77 3

N-group

46

206

N-invariant

50

Invariant factor

212

Non-singular at

Inverse eigenvalue

226

Numerator

oo

203 79

S-inverting, E-inverting 6, 76 Irreducible subset

169

Order Order function

Jacobson-Bourbaki correspondence

Ore condition, domain 38

Jacobson-Zassenhaus formula 71 Jordan canonical form J-skew polynomial ring

17 7f.

Outer derivation

12

Outer Galois group

52

216 24 PI-algebra

Klein's nilpotence condition

142

PI-degree 5

Presentation Prime matrix ideal

162 172f. 128 82

Laurent series

18

Projection

179

Leading term

99

Proper matrix

200

252

Pseudolinear extension

56

Tensor K-ring

127

Pure

99

Totally unbounded

212

Pure extension

61

Trace

Quadratic extension

56

Quasi-identity, quasivariety

70

Transcendental

210

Transvection

108

Trivial, trivializable

2£.

4

6

Unit Rational closure

77, 179

Rational meet

179

Rational relation

174

Rational topology

168, 228£.

Reduced order

47

Regular ring

5

Residue class field

166

Semifir

Universal EC-field

137

Universal R-field

85

Universal S-inverting ring

7, 76

Weakly finite

87

Zigzag lemma

135

3

Semilocal ring

181

Separating coproduct

93

Singular eigenvalue

204

Singular kernel

76

Singularity support

229

Skew polynomial ring

11

74, 172, 224

Specialization lemma

141

Spectrum

205

Stably associated

202

Strict X-ring

179

Strongly regular ring

4£.

Support Support relation

108

73

X-ring

Specialization

Unit, monomial

99 184, 187

Sweedler correspondence

35 253