Computer Algebra: Systems and Algorithms for Algebraic Computation [2 ed.] 9780122042324, 0122042328


English · 313 pages · 1993


Preface

An example

The example which follows is a work session on the computer, using the software MACSYMA. The user's commands are entered in the lines beginning with (ci). Here we are dealing with a limited (or Taylor series) expansion, calculating an integral, and checking that the derivative of the result is indeed the initial function.

(c2) taylor( sinh(sin(x)) - sin(sinh(x)), x, 0, 15);

                 7     11            15
                x     x       5699 x
(d2)/T/         -- - ----  -  ----------  + . . .
                45   1575     1277025750

(c3) primitive : integrate( 1/(x**2 + 1)**4, x);

                              5       3
              5 atan(x)   15 x  + 40 x  + 33 x
(d3)          --------- + ----------------------------
                 16           6        4        2
                          48 x  + 144 x  + 144 x  + 48

(c4) derivative : diff(primitive, x);

                  4        2
              75 x  + 120 x  + 33
(d4)          ----------------------------
                  6        4        2
              48 x  + 144 x  + 144 x  + 48

                      5       3         5        3
                 (15 x  + 40 x  + 33 x) (288 x  + 576 x  + 288 x)        5
               - ------------------------------------------------ + -----------
                      6        4        2       2                        2
                 (48 x  + 144 x  + 144 x  + 48)                     16 (x  + 1)

(c5) factor(ratsimp(derivative));

                   1
(d5)           ---------
                 2     4
               (x  + 1)

All these calculations can, in theory, be done by a first-year student, but they are quite laborious, and it is difficult to do them with pencil and paper without making a mistake. It is not sufficiently well known that this type of calculation is as much within the range of computers as is numerical calculation. The purpose of this book is to demonstrate the existing possibilities, to show how to use them, to indicate the principles on which systems of non-numerical calculation are based, and to show the difficulties which the designers of these systems had to solve, difficulties with which the user is soon faced. But before we try to describe how the authors hope to carry out this ambitious design, we should first try to define the subject better.
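For readers without access to a computer algebra system, the check on the leading term of the expansion above can be sketched in ordinary Python (an illustrative re-computation, not part of the MACSYMA session): truncated power series are held as lists of exact `Fraction` coefficients, and sin and sinh of a series with zero constant term are built from their defining expansions.

```python
from fractions import Fraction
from math import factorial

N = 15  # truncation order, matching the MACSYMA call above

def mul(a, b):
    """Multiply two truncated power series (lists of coefficients of x^i)."""
    c = [Fraction(0)] * (N + 1)
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                if i + j <= N:
                    c[i + j] += ai * bj
    return c

def compose_odd(signs, u):
    """Sum of signs(k) * u^(2k+1) / (2k+1)! for a series u with u(0) = 0:
    this is sin(u) when signs(k) = (-1)^k, and sinh(u) when signs(k) = 1."""
    result = [Fraction(0)] * (N + 1)
    power = u[:]                         # current odd power, u^(2k+1)
    k = 0
    while 2 * k + 1 <= N:
        coeff = Fraction(signs(k), factorial(2 * k + 1))
        for i, pi in enumerate(power):
            result[i] += coeff * pi
        power = mul(power, mul(u, u))    # advance to u^(2k+3)
        k += 1
    return result

x = [Fraction(0), Fraction(1)] + [Fraction(0)] * (N - 1)   # the series "x"
sin_x  = compose_odd(lambda k: (-1) ** k, x)
sinh_x = compose_odd(lambda k: 1, x)

diff = [a - b for a, b in
        zip(compose_odd(lambda k: 1, sin_x),            # sinh(sin x)
            compose_odd(lambda k: (-1) ** k, sinh_x))]  # sin(sinh x)

print(diff[7])   # → 1/45
```

All coefficients below degree 7 cancel exactly, and the degree-7 coefficient is 1/45; the remaining entries of `diff` can be compared directly with the MACSYMA output above.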

Scientific calculation and algebraic calculation

From the start of electronic calculation, one of the main uses of computers has been for numerical calculation. Very soon, applications to management began to dominate the scene, as far as the volume of calculation assigned to them is concerned. Nevertheless, scientific applications are still the most prestigious, especially if we look at the performance required of the computer: the most powerful computers are usually reserved for scientific calculation.

This concept of "scientific calculation" conceals an ambiguity, which it is important to note: before computers appeared on the scene, a calculation usually consisted of a mixture of numerical calculation and what we shall call "algebraic calculation", that is, calculation with mathematical formulae. The only examples of purely numerical calculation seem to have been the feats of calculating prodigies such as Inaudi: the authors of tables, especially of logarithms, did indeed carry out enormous numerical calculations, but these were preceded by a restatement of the algebraic formulae and methods which was essential if the work was to be within the bounds of what is humanly possible. For example, the famous large calculations of the 19th century include a large proportion of formula manipulation. The best known is certainly Le Verrier's calculation of the orbit of Neptune, which started from the disturbances of the orbit of Uranus, and which led to the discovery of Neptune. The most impressive calculation with pencil and paper is also in the field of astronomy: Delaunay took 10 years to calculate the orbit of the moon, and another 10 years to check it. The result is not numerical, because it consists for the most part of a formula which by itself occupies all 128 pages of Chapter 4 of his book.


The ambiguity mentioned above is the following: when computers came on the scene, numerical calculation was made very much easier, and it became commonplace to do enormous calculations, which in some cases made it possible to avoid laborious algebraic manipulations. The result was that, for the public at large and even for most scientists, numerical calculation and scientific calculation have become synonymous. When the title or list of subjects of a congress or conference includes the words "scientific calculation", we can generally be sure that it is a question of numerical calculation only, even if computer algebra is also in the list of subjects. However, numerical calculation does not rule out algebraic calculation: writing the most trivial numerical program requires a restatement of the formulae on which the algorithm is based. And the power of computers does not solve everything: calculating the development of the atmosphere with the precision needed by meteorologists for a forecast 48 hours ahead would require much more than 48 hours on the most powerful computer at present available (CRAY). Multiplying that power by 10 would make the situation only marginally better, for at the same time the theory would be improved and the mesh size would have to be halved (to distinguish, for example, between the weather in Paris and that in Orleans), and this alone would multiply by 8 the number of numerical calculations needed.

Computer Algebra

Thus algebraic calculation has not lost its relevance. However, it is most frequently done with pencil and paper, even though the first software intended to automate it is already quite old [Kahrimanian, 1953; Nolan, 1953]. It was very quickly seen that such software for helping algebraic calculation would have to be a complete system, which included a method with a very special structure for representing non-numerical data, a language making it possible to manipulate them, and a library of effective functions for carrying out the necessary basic algebraic operations. And so there appeared systems which run very well, and the most widely used of these are described or referred to in this book.

Writing, developing and even using these systems gradually merged into an autonomous scientific discipline, obviously based on computer science. But the objectives are in the field of artificial intelligence, even if the methods are moving further and further away from it. Moreover, the algorithms used bring into play less and less elementary mathematical tools. And so this discipline seems to form a frontier between several fields, and this adds both to its richness and, from the research aspect, to its difficulty.

The name of this discipline long hesitated between "symbolic and algebraic calculation" and "symbolic and algebraic manipulations", and finally settled down as "Computer Algebra" in English, and "Calcul Formel", abbreviated to CALSYF, in French. At the same time, societies were formed to bring together the research workers and the users of this discipline: the world-wide organisation is SIGSAM(1), a group of the ACM(2), which organises the SYMSAC and EUROSAM congresses and publishes the SIGSAM Bulletin. The European group is called SAME(3) and organises the congresses called EUROCAM, EUROSAM, ..., the proceedings of which are referred to in the bibliography. For French research workers there is the body of the CNRS (Centre national de la recherche scientifique), the GRECO(4) of Computer Algebra. And since 1985 a specialist review, the Journal of Symbolic Computation, has been published.

Computer Algebra systems

There are very many computer algebra systems, but scarcely more than ten are up to date, general and fairly widely available. In this book the authors have chosen four as being especially representative:
- MACSYMA, the most developed.
- REDUCE, the most widely available of all the large systems.
- DERIVE, the system most easily available on micro-computers, which means that it is very widely used, but at the same time means that it is mainly used by beginners.
- AXIOM, a very new system, with (currently) restricted availability, but which, because of its completely different structure, can overcome some limitations the other systems share, and can be the prototype for the next generation of Computer Algebra systems.

The systems mentioned here, and most of those not mentioned, are very user-friendly; admittedly, the superficial syntax of the language is not the same; admittedly, the library of available functions varies from some dozens to more than a thousand; admittedly, the internal structure of the system varies considerably; but they all share the following properties:
- The programming is mainly interactive: the user does not, in theory, know either the form or the size of his results, and must therefore be able to intervene at any time.
- Most of the data worked on are mathematical expressions, of which at least the external representation is the one everyone is accustomed to.
- The language used is "ALGOL-like".
- The implementation language is often LISP; in any case, the data are list structures and tree structures, and memory management is dynamic, with automatic recovery of the available space.

(1) Special Interest Group in Symbolic and Algebraic Manipulations.
(2) Association for Computing Machinery.
(3) Symbolic and Algebraic Manipulations in Europe.
(4) Groupe de REcherches COordonnées.

Thus, once one of the Computer Algebra systems has been mastered, changing to a different one does not usually present the programmer with many problems, far fewer in any case than changing the system used at the same time as changing computer. That is why the authors thought it better to write a general introduction to Computer Algebra than a manual for a particular system. And so the introductory chapter, "How to use a Computer Algebra system", is written in MACSYMA; the commands and the results are extremely readable and easily understood. But REDUCE has been chosen for the detailed description of a system, because it is more widely available. The other chapters do not usually refer to a particular system.
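The last two properties can be made concrete with a deliberately toy sketch (in Python rather than LISP, and not taken from any of the systems named here): an expression is a tree built from nested tuples, and a few lines of tree-walking code suffice to differentiate it.

```python
# A toy illustration of expressions living in list/tree structures: an
# expression is an int, a variable name (string), or a tuple
# (operator, operand, operand).

def d(e, x):
    """Differentiate expression tree e with respect to variable x."""
    if isinstance(e, int):
        return 0
    if isinstance(e, str):
        return 1 if e == x else 0
    op, a, b = e
    if op == '+':
        return ('+', d(a, x), d(b, x))
    if op == '*':                       # product rule
        return ('+', ('*', d(a, x), b), ('*', a, d(b, x)))
    if op == '^':                       # b assumed a constant integer exponent
        return ('*', ('*', b, ('^', a, b - 1)), d(a, x))
    raise ValueError(op)

# d/dx of (x + 1)^4, returned as a raw, unsimplified tree:
expr = ('^', ('+', 'x', 1), 4)
print(d(expr, 'x'))
```

The result comes back as another tree, still containing factors such as `('+', 1, 0)`: a real system spends much of its effort on exactly the simplification step this sketch omits.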

Accessibility of Computer Algebra systems

Computer Algebra systems can generally be used only on fairly large machines whose operating system works in virtual memory: to work without too much difficulty, a megabyte of main memory seems to be the smallest amount suitable. The most economical system is obviously DERIVE; the most costly in working memory is AXIOM, for which more than 10 megabytes of work space are necessary at present. The one which needs most disk space is probably MACSYMA, whose compiled code is getting on for megabytes, to which must be added several megabytes of on-line documentation. At present the availability of the systems is as follows: MACSYMA runs on VAX and some personal workstations (Symbolics, Sun), and even on 386-based PCs. REDUCE can be used on almost all the main models of large machines or of mini-computers, even though in some cases only the old versions are available (Cyber, IBM series 360, 370, ..., MULTICS, all the UNIX systems, ...). DERIVE can be used on almost every micro-computer. As for AXIOM, it only works on IBM's VM/CMS or RS/6000 systems.

Using Computer Algebra systems

For a beginner, the Computer Algebra languages are among the simplest to use. In fact, at first he only needs to know some functions which allow him to rewrite the problem in question in a form very similar to its mathematical form. Even if the rewriting is clumsy or incorrect, the interactivity means that after some playing around he can quickly find results unobtainable with pencil and paper. And for many applications that is sufficient. In a programming language such as FORTRAN, the syntactical subtleties require a long apprenticeship, whereas the working principles of the compiler can be totally ignored; but here the user must very quickly grasp "how it works", especially how the data are represented and managed.


In fact, although it is usually difficult to foretell the time a calculation will take and what its size will be, a knowledge of the working principles can give an idea of their order of magnitude and, if need be, help to optimise them. These estimates are in fact essential: for most algebraic calculations the results are quasi-instantaneous, and all goes well. But if this is not so, the increases in the time and memory space required are usually exponential. So the feasibility of a given calculation is not always obvious, and it is stupid to commit large resources when a failure can be predicted. Moreover, an intelligent modelling of the problem, and a program adapted to the structure of the data, may make an otherwise impossible problem easy. Thus, in Chapter 1, two programs for calculating the largest coefficient of a polynomial are given; the text of the one which seems most natural to someone used to FORTRAN is considerably more complicated than the other, but above all it needs 10 times more time for a polynomial of degree 30. If we move on to a polynomial of degree 1000, about 30 times greater, the running time of the second program is in theory multiplied by 30, whereas that of the first is multiplied by a factor of the order of 1000, which gives a running-time ratio of 300 between the two programs, and real times of about one minute and five hours. Acquiring an efficient programming style, and an ability to foresee the size of a calculation, is therefore much more important here than in numerical calculation, where the increase is generally linear. Unfortunately, this is for many a matter of experience, which is hard to pass on by way of a manual. However, familiarity makes it easier to acquire this experience; and that is why such a large part of the book is devoted to describing the mathematical workings of Computer Algebra systems.
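The Chapter 1 programs are written in MACSYMA; the phenomenon itself can be sketched in Python (a hypothetical analogue, with the cost model stated in the comments): if fetching the coefficient of x^i costs a walk over the whole polynomial, as it can on a list-structured representation, then a loop over all degrees is quadratic, while a single traversal of the stored structure is linear.

```python
# The polynomial is stored as a list of coefficients, poly[i] being the
# coefficient of x^i.

def coefficient(poly, i):
    """Indexed access in the FORTRAN style; on a list-structured
    representation the system may have to walk the polynomial: O(n)."""
    for j, c in enumerate(poly):
        if j == i:
            return c
    return 0

def largest_indexed(poly):
    """Loop over every degree and fetch each coefficient: O(n^2) overall."""
    biggest = 0
    for i in range(len(poly)):
        biggest = max(biggest, abs(coefficient(poly, i)))
    return biggest

def largest_traversal(poly):
    """One pass over the stored structure itself: O(n)."""
    return max(abs(c) for c in poly)

poly = [(-1) ** i * i for i in range(31)]   # a degree-30 test polynomial
assert largest_indexed(poly) == largest_traversal(poly) == 30
```

Both functions return the same answer; only their growth rates differ, which is exactly the degree-30 versus degree-1000 contrast described above.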

Plan of the book

The book can be divided into four parts. The first (Chapter 1) is an introduction to Computer Algebra with the help of annotated examples, most of which are written in MACSYMA. They can usually be understood on a first reading. Nevertheless, it is a very useful exercise to program these examples, or similar ones, in MACSYMA or in some other system. This obviously requires a knowledge of the corresponding user's manual.

The second part, which consists of Chapters 2, 3 and 4, describes the workings of Computer Algebra systems. Besides being interesting in itself, the description of the problems which had to be solved, and of the principles behind the chosen solutions, is essential for helping the user to get round the problems of "combinatorial explosion" with which he will be confronted.

Chapter 5 makes up the third part and presents two Computer Algebra problems which are still at the research stage, even though the software described is beginning to be widely used. They are very good examples, which require all the power and all the possibilities of Computer Algebra systems to solve difficult natural problems; they are good illustrations both of the possibilities and of the difficulties inherent in this discipline.

Finally, a detailed presentation of REDUCE, as an appendix, enables this book to serve as a manual, at least for the most widely used of these systems. There is also a detailed bibliography, which includes many more references than appear explicitly in the text, so that the reader has access to the methods, results and algorithms for which there was no room in the book.

In conclusion, let us hope that this book, which till now has no parallel anywhere in the world, will help to increase the use of this tool, so powerful and so little known: Computer Algebra.

Daniel Lazard
Professor at the P. and M. Curie University (Paris VI)

ACKNOWLEDGEMENTS

The authors would like to thank all those who read the various drafts of this book, in particular Mrs. H. Davenport, J. Della Dora, D. Duval, M. Giusti, D. Lazard and J.C. Smith. This book was typeset with the TeX system. The author of TeX, D.E. Knuth, is responsible for the typographical excellence of the book, but obviously not for the errors. The presentation owes much to the TeX experts: C. Goutorbe at the C.I.C.G.* and C.E. Thompson of the University of Cambridge.

For the second edition, the authors would like to thank everyone who commented on the first edition.

* The French original was prepared on the hardware of the C.I.C.G. (Inter-University Computer Centre of Grenoble).


Foreword

It is now over thirty years since the computer was first used to perform algebraic calculations. The resulting programs could differentiate very simple expressions, a far cry from the impressive array of sophisticated calculations possible with today's algebraic computation programs. Until recently, however, such programs have only been accessible to those with access to large main-frame computers. This has limited both the use and the appreciation of algebraic computation techniques by the majority of potential users. Fortunately, the personal computer revolution is changing this; readily available machines are now appearing that can run existing algebra programs with impressive ease. As a result, interest in computer algebra is growing at a rapid pace.

Although programs exist for performing the esoteric algebraic computations of interest to professional mathematicians, in this book we are mainly concerned with the constructive methods used by physical scientists and engineers. These include such mundane things as polynomial and series manipulation, as well as more sophisticated techniques like analytic integration, polynomial factorization and the analytic solution of differential equations. In addition, a growing number of scientists are finding that these programs are also useful for generating numerical code from the resulting expressions.

Algebraic computation programs have already been applied to a large number of different areas in science and engineering. The most extensive use has occurred in the fields where the algebraic calculations necessary are extremely tedious and time-consuming, such as general relativity, celestial mechanics and quantum chromodynamics. Before the advent of computer algebra, such hand calculations took many months to complete and were error prone. Personal workstations can now perform much larger calculations without error in a matter of minutes.

Given the potential of computer algebra in scientific problem solving, it is clear that a text book is needed to acquaint users with the available techniques. There are books concerned with the use of specific algebraic computation programs, as well as those oriented towards the computer scientists who study the field. However, this is the first general text to appear that provides the practising scientist with the information needed to make optimal use of the available programs. The appearance of an English version of the original French text also makes it accessible to a wider class of readers than was previously possible.

In addition to being a general text on the subject, this book also includes an annex describing the use of one particular algebra system, namely REDUCE, a system I first designed in the late 1960s. The version described here has undergone many changes and extensions since those days, and is now in use by thousands of scientists and engineers throughout the world on machines ranging in power from the IBM PC to the Cray X-MP. I am sure that they will welcome this useful exposition of the system.

Anthony C. Hearn
The RAND Corporation
December 1987

ACKNOWLEDGEMENTS

The translators would like to thank J.P. Fitch and J.C. Smith for their helpful comments on this translation, as well as the reader R. Cutler at Academic Press. This translation was also prepared with TeX, and C.E. Thompson again provided much useful advice. The data transfers involved in preparing this translation would have been impossible without the helpful advice and support of the University of Cambridge Computing Service.

For the re-printing, R.J. Bradford and M.A.H. MacCallum signalled a few misprints, which have been corrected.

The second edition owes much to the helpful comments of many, notably A.H.M. Levelt.

Contents

Preface                                                       v
Foreword to English translation                               xiii
1 How to use a Computer Algebra system                        1
  1.1 Introduction                                            1
  1.2 Features of Computer Algebra systems                    2
  1.3 Syntax of the associated languages                      3
  1.4 Areas covered by existing systems                       3
  1.5 Computer Algebra by example                             4
    1.5.1 Simple operations on numbers                        4
    1.5.2 Polynomials and rational fractions                  10
    1.5.3 Matrix calculation                                  18
    1.5.4 Differentiation - Taylor series                     28
    1.5.5 Simplification of formulae                          34
    1.5.6 Integration                                         42
    1.5.7 Ordinary differential equations                     48
  1.6 MACSYMA's possibilities in Algebra                      56
    1.6.1 General Possibilities                               56
    1.6.2 The division of the circle into 17 equal parts      60
  1.7 Availability of MACSYMA                                 70
  1.8 Other systems                                           70
  1.9 AXIOM                                                   71
2 The problem of data representation                          75
  2.1 Representations of integers                             75
  2.2 Representations of fractions                            77
  2.3 Representations of polynomials                          78
    2.3.1 Canonical and normal representations                79

    2.3.2 Dense and sparse representations                    81
    2.3.3 The g.c.d.                                          84
  2.4 Polynomials in several variables                        86
  2.5 Representations of rational functions                   88
  2.6 Representations of algebraic functions                  91
    2.6.1 The simple radicals                                 91
    2.6.2 Nested radicals                                     92
    2.6.3 General algebraic functions                         93
    2.6.4 Primitive elements                                  95
    2.6.5 Dynamic evaluation of algebraic functions           96
  2.7 Representations of transcendentals                      97
  2.8 Representations of matrices                             100
    2.8.1 Dense matrices                                      100
    2.8.2 Bareiss' algorithm                                  103
    2.8.3 Sparse matrices                                     104
  2.9 Representations of series                               105
    2.9.1 Taylor series: simple method                        105
    2.9.2 Taylor series: Norman's method                      108
    2.9.3 Other series                                        109
3 Polynomial simplification                                   111
  3.1 Simplification of polynomial equations                  111
    3.1.1 Reductions of polynomials                           112
    3.1.2 Standard (Gröbner) bases                            113
    3.1.3 Solution of a system of polynomials                 115
    3.1.4 Buchberger's algorithm                              117
    3.1.5 Relationship to other methods                       120
  3.2 Simplification of real polynomial systems               122
    3.2.1 The case of R1                                      122
      3.2.1.1 Isolation of roots                              124
      3.2.1.2 Real algebraic numbers                          128
    3.2.2 The general case (some definitions)                 129
      3.2.2.1 Decomposition of Rn                             130
    3.2.3 Cylindrical decompositions                          131
      3.2.3.1 The case of R2                                  133
      3.2.3.2 The general case                                134
    3.2.4 Applications of cylindrical decomposition           135
      3.2.4.1 Quantifier elimination                          135
      3.2.4.2 Robotics                                        137

4 Advanced algorithms                                         141
  4.1 Modular methods                                         141
    4.1.1 g.c.d. in one variable                              141
      4.1.1.1 The modular - integer relationship              143
      4.1.1.2 Calculation of the g.c.d.                       145
      4.1.1.3 Cost of this algorithm                          147
    4.1.2 g.c.d. in several variables                         150
      4.1.2.1 Bad reduction                                   153
      4.1.2.2 The algorithm                                   154
      4.1.2.3 Cost of this algorithm                          156
    4.1.3 Other applications of modular methods               157
      4.1.3.1 Resultant calculation                           157
      4.1.3.2 Calculating determinants                        158
      4.1.3.3 Inverse of a matrix                             158
      4.1.3.4 Other applications                              160
  4.2 p-adic methods                                          161
    4.2.1 Factorisation of polynomials in one variable        161
      4.2.1.1 Berlekamp's algorithm                           161
      4.2.1.2 The modular - integer relationship              164
    4.2.2 Hensel's Lemma - linear version                     167
      4.2.2.1 Hensel's Lemma - quadratic version              168
      4.2.2.2 Hensel's Lemma - refinement of the inverses     170
      4.2.2.3 The factorisation algorithm                     171
      4.2.2.4 The leading coefficient                         174
    4.2.3 Factorisation in several variables                  175
      4.2.3.1 The algorithm                                   176
      4.2.3.2 The leading coefficient                         178
    4.2.4 Other applications of p-adic methods                179
      4.2.4.1 g.c.d. by a p-adic algorithm                    179
5 Formal integration and differential equations               183
  5.1 Formal integration                                      183
    5.1.1 Introduction                                        183
    5.1.2 Integration of rational functions                   185
      5.1.2.1 The naive method                                185
      5.1.2.2 Hermite's method                                187
      5.1.2.3 Horowitz-Ostrogradski method                    188
      5.1.2.4 The logarithmic part                            188
    5.1.3 The integration of more complicated functions       190
    5.1.4 Integration of logarithmic functions                192
      5.1.4.1 The decomposition lemma                         192
      5.1.4.2 The polynomial part                             193
      5.1.4.3 The rational and logarithmic part               195
    5.1.5 Integration of exponential functions                196
      5.1.5.1 The decomposition lemma                         197
      5.1.5.2 The generalised polynomial part                 199
      5.1.5.3 The rational and logarithmic part               199
    5.1.6 Integration of mixed functions                      200
    5.1.7 Integration of algebraic functions                  203
    5.1.8 Integration of non-elementary functions             204
  5.2 Algebraic solutions of o.d.e.s                          204
    5.2.1 First order equations                               205
      5.2.1.1 Risch's problem                                 205
      5.2.1.2 A theorem of Davenport                          207
    5.2.2 Second order equations                              208
    5.2.3 General order equations                             210
      5.2.3.1 Homogeneous equations                           210
      5.2.3.2 Inhomogeneous equations                         210
  5.3 Asymptotic solutions of o.d.e.s                         212
    5.3.1 Motivation and history                              212
    5.3.2 Classification of singularities                     213
    5.3.3 A program for solving o.d.e.s at a regular singularity  216
    5.3.4 The general structure of the program                218
    5.3.5 Some examples treated by "DESIR"                    221
      5.3.5.1 Examples with Bessel's equation                 221
      5.3.5.2 Another example                                 223
Appendix. Algebraic background                                229
  A.1 Square-free decomposition                               229
  A.2 The extended Euclidean algorithm                        231
  A.3 Partial fractions                                       232
  A.4 The resultant                                           233
  A.5 Chinese remainder theorem                               236
    A.5.1 Case of integers                                    237
    A.5.2 Case of polynomials                                 238
  A.6 Sylvester's identity                                    239
  A.7 Termination of Buchberger's algorithm                   242

Annex. REDUCE: a Computer Algebra system                      245
  R.1 Introduction                                            245
    R.1.1 Examples of interactive use                         246
  R.2 Syntax of REDUCE                                        248
    R.2.1 Syntactic elements                                  248
    R.2.2 Expressions                                         249
      R.2.2.1 Different types of expressions                  249
      R.2.2.2 Simplification of expressions                   250
      R.2.2.3 List expressions                                252
    R.2.3 Declarations                                        253
      R.2.3.1 Plain variables                                 253
      R.2.3.2 Asymptotic declarations                         254
      R.2.3.3 Array declarations                              255
      R.2.3.4 Operator declarations                           255
      R.2.3.5 Procedure declarations                          257
    R.2.4 Commands                                            257
      R.2.4.1 Assignment                                      257
      R.2.4.2 Instruction group                               258
      R.2.4.3 Conditional statement                           258
      R.2.4.4 Iteration statement                             258
      R.2.4.5 Blocks                                          259
      R.2.4.6 Local value assignment                          259
  R.3 Built-in facilities                                     259
    R.3.1 Prefix operators                                    259
      R.3.1.1 Numeric operators                               259
      R.3.1.2 Mathematical operators                          260
      R.3.1.3 Differentiation                                 260
      R.3.1.4 Integration                                     260
      R.3.1.5 Factorisation                                   261
      R.3.1.6 Resultants                                      261
      R.3.1.7 Solution of systems of equations                262
    R.3.2 Manipulation of expressions                         263
      R.3.2.1 Output of expressions                           263
      R.3.2.2 Parts of expressions                            265
    R.3.3 Substitution                                        267
      R.3.3.1 Local substitution                              267
      R.3.3.2 Global substitution                             268
  R.4 Matrix algebra                                          271
  R.5 IRENA                                                   271
  R.6 Conclusion                                              274
Bibliography                                                  275
Index                                                         293
1. How to use a Computer Algebra system

1.1 INTRODUCTION

The idea of doing algebraic calculation on computers is not new. From 1960 on, numerous programs have appeared on the market which were intended to show that, in the scientific field, one can go beyond the purely numerical area usually attributed to computers. The language LISP dates from this period and opened the way to the first spectacular demonstrations of the following possibilities: formal integration and proofs of theorems. In the same vein, some people very quickly saw that they could ask the machine to carry out algebraic operations which, though tedious, were useful for later numerical calculation: the expansion of polynomial expressions, formal differentiation, etc. The appearance of time-sharing systems contributed a great deal to the generalisation of programs for algebraic calculation, and they have gradually become genuine sub-systems which can be used in a language close to the usual algorithmic languages. But as the subject is much more open than numerical calculation or management, there has never been any standardisation, as there has been for those languages. That is one of the reasons why these possibilities have for so long been ignored by the scientific and industrial world. For a long time, too, the use of these systems was limited to a certain type of machine, and obviously this limitation has not helped to spread knowledge of them.

However, recently things have been changing: the language LISP is finding favour again, and its numerous dialects and variations have not prevented a system such as REDUCE from being available on a great number of computers and workstations. As of now, MACSYMA is commercially available on a certain number of computers and is no longer considered to be an inaccessible system. More recently, there have appeared some systems for micro-computers which, albeit limited, at least give us some feeling for the subject: muMATH and its replacement DERIVE fall into this category. More recently still, the appearance of competing systems such as MAPLE and MATHEMATICA (written in C) and AXIOM (formerly SCRATCHPAD, and available on IBM computers) is certainly contributing to the greater understanding and use of computer algebra.

1.2 FEATURES OF COMPUTER ALGEBRA SYSTEMS

The most intuitive, although somewhat restrictive, approach to Computer Algebra systems is to say that they are made for the manipulation of everyday scientific and engineering formulae. A mathematical formula which is described in one of the usual languages (FORTRAN, PASCAL, BASIC, ...) can only be evaluated numerically, once the variables and parameters have themselves been given numerical values. In a language which allows algebraic manipulations, the same formula can also be evaluated numerically, but above all it can be the object of formal transformations: differentiation, development in series, various expansions, even integration. For example, one can, with just one command, ask for the decomposition into partial fractions of

    (x^2 - 5) / (x (x - 1)^4),

which is

    - 5/x + 5/(x - 1) - 5/(x - 1)^2 + 6/(x - 1)^3 - 4/(x - 1)^4

and fairly easy to do by hand. But it is nice to do by machine the decomposition of

    (x + a^2) / (x (x - b) (x^2 + c)),

which is a rather more complicated form:

    - a^2/(b c x) + (a^2 + b)/((b c + b^3)(x - b))
                  - ((c - a^2 b) x + (b c + a^2 c))/((c^2 + b^2 c)(x^2 + c)).
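Both decompositions can be checked with a present-day system; here is a sketch using the Python library SymPy (not one of the systems discussed in the text):

```python
from sympy import symbols, apart, together, simplify

x = symbols('x')

# Partial fraction decomposition of the first example above.
expr = (x**2 - 5) / (x * (x - 1)**4)
decomp = apart(expr, x)
print(decomp)

# Recombining the partial fractions must give back the original.
assert simplify(together(decomp) - expr) == 0
```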

As a general rule, Computer Algebra systems are drawn up to meet the two following requirements:
- To provide a set of basic pre-programmed commands which will hand over to the machine the wearisome calculations which occur in a process run completely by the user: this is the role of an advanced desk-top machine.
- To offer a programming language which lets us define higher-level commands or procedures to enlarge the original set of commands.


We see that there is no fundamental conflict with the aims of languages such as LISP or APL: the same conversational aspect, the same possibilities for defining procedures or functions, and thus the same possibilities for adding to the existing libraries on a given subject. Naturally, when the algorithm is suitable, it is perfectly possible to exploit algebraic calculation as a batch job. That depends only on the computing environment, and it is up to the user to see that all the data are defined at the time of execution.

1.3 SYNTAX OF THE ASSOCIATED LANGUAGES

Although there are some differences between the systems, syntax is not the main problem of Computer Algebra. In general a few hours and a little practice are perfectly adequate. The syntax can be said to be largely that of PASCAL. Of course there is an assignment instruction, the idea of calling functions (commands), a fairly rich set of control structures (if, do, while, repeat, etc.), possibilities for declaring procedures... In brief, all the arsenal of programming languages required for writing algorithms.

1.4 AREAS COVERED BY EXISTING SYSTEMS

If it is to be useful and to keep the user's interest, a Computer Algebra system must cover, by means of a rich set of commands, those areas which immediately come to mind when one puts aside the limitations of hardware and of traditional programming languages. The following tools are part of the necessary equipment:
- Operations on integers, on rational, real and complex numbers with unlimited accuracy. Chapter 2 shows the importance of this.
- Operations on polynomials in one or more variables and on rational fractions. In short, the obvious rational operations, calculating the g.c.d., factorising over the integers.
- Calculations on matrices with numerical and/or symbolic elements.
- Simple analysis: differentiation, expansion in series, Pade approximants, etc.
- Manipulation of formulae: various substitutions, selection of coefficients and of parts of formulae, numerical evaluation, pattern recognition, controlled simplifications, ...

Starting out from this common base, the systems may offer possibilities in specific areas, possibilities which to some extent characterise their degree of development. For example:
- Solution of equations.
- Formal integration.
- Calculation of limits.
- Tensor calculus.

In addition, users have an opportunity to add to the function library by passing on work which is sufficiently general to interest a group of people. A typical example is the SHARE library of MACSYMA.

1.5 COMPUTER ALGEBRA BY EXAMPLE

In what follows we have chosen to expound, using the MACSYMA system, a series of examples from some of the areas mentioned. Most of the examples can be repeated using other systems, especially REDUCE, which is described in the annex. It is obvious that we are not trying to replace the MACSYMA programmers' manual, and the reader will at times have to be satisfied with brief explanations given in the commentary. The examples are printed as they were given to the machine. The typography of the results has not been changed. The lines typed by the user are labelled (ci) and finish with ";" or "$". The latter symbol tells MACSYMA not to print the result. So time is saved if printing the result is not essential. The result of a command is labelled (di). The symbol % used in a command is the value associated with the previous command. The constants pi, e, i are written %pi, %e, %i. The assignment symbol is the colon. The running times quoted after certain commands are for the version of MACSYMA distributed on behalf of the U.S. Department of Energy, and installed on an IBM RISC-6000 (Model 970).

1.5.1 Simple operations on numbers

(c1) n0 : 123456789123456789 * 123456789 + 987654321123456789;
(d1)                15241579753086420873647310
(c2) n1 : 100! ;
(d2) 93326215443944152681699238856266700490715968264381621
46859296389521759999322991560894146397615651828625369
7920827223758251185210916864000000000000000000000000
(c3) 101! / % ;
(d3)                101

MACSYMA has predicates and functions whose meaning is obvious but the application of which can be very long. That is true of the predicate of primality primep or of the factorisation function factor.

(c4) primep(12);
(d4)                false
(c5) primep(216091);
(d5)                true
(c6) factor(2**63-1);
                     2
(d6)                7  73 127 337 92737 649657

Let us calculate one of the largest prime numbers known, but not print it, since it has 65050 decimal figures.

(c7) large_prime : 2**216091 - 1 $
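The same unlimited-precision integer arithmetic is built into several modern languages; a quick illustration in Python (not from the book):

```python
# Python integers are "bignums": exact arithmetic of unbounded size.
n0 = 123456789123456789 * 123456789 + 987654321123456789
print(n0)                       # the exact 26-digit result

large_prime = 2**216091 - 1     # the Mersenne prime discussed above
print(len(str(large_prime)))    # its number of decimal digits
```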

Here is an example of a loop with the instruction for. When the loop is finished, MACSYMA prints "done". That is, say, the (not very interesting) value corresponding to the for instruction.

(c8) for a:1 thru 5 do
       for b:1 thru 5 do
         if gcd(a,b)=1 then
           print("a=",a,", b=",b,", ",factor(b**6 + 3*a**6));
a= 1 , b= 1 ,  2^2
a= 1 , b= 2 ,  67
a= 1 , b= 3 ,  2^2 3 61
a= 1 , b= 4 ,  4099
a= 1 , b= 5 ,  2^2 3907
a= 2 , b= 1 ,  193
a= 2 , b= 3 ,  3 307
a= 2 , b= 5 ,  15817
a= 3 , b= 1 ,  2^2 547
a= 3 , b= 2 ,  2251
a= 3 , b= 4 ,  61 103
a= 3 , b= 5 ,  2^2 61 73
a= 4 , b= 1 ,  12289
a= 4 , b= 3 ,  3 4339
a= 4 , b= 5 ,  103 271
a= 5 , b= 1 ,  2^2 11719
a= 5 , b= 2 ,  73 643
a= 5 , b= 3 ,  2^2 3 3967
a= 5 , b= 4 ,  50971
(d8)                done

Let us now find the first two odd prime numbers p such that

    2^(p-1) = 1  (mod p^2).

(c9) for p:3 step 2 thru 3601 do
       if primep(p) and (remainder(2**(p-1), p**2) = 1)
         then display(p);
p = 1093
p = 3511
(d9)                done
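The same search is immediate in any language with modular exponentiation; a sketch in Python (trial-division primality is enough at this size; not the book's code):

```python
def is_prime(n):
    """Naive trial division; fine for the small range searched here."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

# Odd primes p <= 3601 with 2^(p-1) = 1 (mod p^2): the Wieferich primes.
found = [p for p in range(3, 3602, 2)
         if is_prime(p) and pow(2, p - 1, p * p) == 1]
print(found)   # [1093, 3511]
```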

Among some equations of the form x^3 + px + q (p and q being relatively prime integers), let us find one whose discriminant is a perfect square over Z.

(c10) for n:2 thru 100 do
        for p:1 thru (n-1) do
          (q: n-p,
           if (gcd(p,q)=1 and integerp((4*p**3+27*q**2)**(1/2)))
             then display(p,q));
p = 1
q = 10
p = 13
q = 34
p = 33
q = 32
p = 69
q = 22
(d10)               done

Examples of calculations with rational numbers:

(c11) s:0 $
(c12) for i:1 thru 100 do s:s+1/i;
(d12)               done
(c13) s;
        14466636279520351160221518043104131447711
(d13)   -----------------------------------------
        2788815009188499086581352357412492142272
(c14) s: sum(1/i, i, 1, 100);
        14466636279520351160221518043104131447711
(d14)   -----------------------------------------
        2788815009188499086581352357412492142272
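Python's fractions module gives the same exact rational arithmetic (an illustration, not the book's code):

```python
from fractions import Fraction

# Exact harmonic number H_100 = 1 + 1/2 + ... + 1/100.
s = sum(Fraction(1, i) for i in range(1, 101))
print(s.numerator)
print(s.denominator)
print(float(s))   # about 5.187377517639621
```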

Obviously, MACSYMA can also calculate with real numbers, in single or double precision depending on the machine*:

(c15) 216091*0.69314718;
(d15)               149782.86727337999
(c16) a: 123.897 * 1234.6 /(1.0 -345.765e-03);
(d16)               233804.72796472211

But thanks to the function bfloat, one can go beyond the precision and the range of the numbers offered by the hardware. In MACSYMA, such large real numbers are written syntactically n.nnnnnnnb+-nn. The default precision is the value of the symbol fpprec.

(c17) fpprec;
(d17)               16
(c18) bfloat(%pi);
(d18)               3.141592653589793b0
(c19) big: 1.123424213423452515135b53*2.23423458725412745124523423544b64;
(d19)               2.50999323378944b117
(c20) fpprec : 36;
(d20)               36
(c21) bfloat(%pi);
(d21)               3.14159265358979323846264338327950288b0
(c22) big: 1.123424213423452515135b53*2.23423458725412745124523423544b64;
(d22)               2.50999323378944021829124202358948023b117
(c23) bfloat(s);
(d23)               5.18737751763962026080511767565825316b0

* The version of MACSYMA on the RISC-6000 uses double precision.

The following example shows the slow convergence of the series

    sum(1/n^2, n = 1 .. infinity)

to its sum pi^2/6.

(c24) s2: sum(1/i**2, i, 1, 200)$
(c25) float(%);
(d25)               1.6399465785675531
(c26) float(%pi**2/6),numer;
(d26)               1.6449340668482264

We can use this classic problem to demonstrate function definition in MACSYMA. We shall calculate pi by Machin's formula:

    pi/4 = 4 arctan(1/5) - arctan(1/239),

the expansion of which is:

    pi = 16 sum((-1)^p / ((2p+1) 5^(2p+1)), p = 0 .. infinity)
        - 4 sum((-1)^p / ((2p+1) 239^(2p+1)), p = 0 .. infinity).

The input parameter n of the function is the number of terms chosen in the first series. We know that the error in such a series is less than the first term omitted and has the same sign. The calculation of the second series is done in such a way that the error is less than the error in the first series.
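For comparison, the same two truncated series can be evaluated with exact rational arithmetic in Python (an illustration, not the book's MACSYMA code):

```python
from fractions import Fraction

def arctan_inv(x, terms):
    """Partial sum of arctan(1/x) = sum (-1)^p / ((2p+1) x^(2p+1))."""
    s = Fraction(0)
    for p in range(terms):
        s += Fraction((-1)**p, (2*p + 1) * x**(2*p + 1))
    return s

# Machin's formula: pi = 16 arctan(1/5) - 4 arctan(1/239).
pi_approx = 16 * arctan_inv(5, 10) - 4 * arctan_inv(239, 5)
print(float(pi_approx))   # 3.14159265358979...
```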


(c27) approx_pi(n) :=
  block([s1,s2,isgn,isgn1,err1,err2,err,i,j,sav],
    s1: 0, isgn: 1, sav: fpprec, fpprec: 3,
    for i:1 thru n do
      (j: 2*i - 1, s1: s1 + isgn/(j * 5**j), isgn: -isgn),
    err1: 16/((2*n+1)*5**(2*n+1)),
    isgn1: isgn, s1: 16*s1,
    s2: 0, isgn: 1,
    for i:1 step 1 while true do
      (j: 2*i - 1, err2: 1/(j * 239**j),
       if (4*err2 - err1)

                 Union("failed", UP(x,FRAC INT))
quotient(p1,p2,n) ==
  minimumDegree(p1) < minimumDegree(p2) => "failed"
  reste:UP(x,FRAC INT):=p1
  degp2:=minimumDegree(p2)
  coefp2:=coefficient(p2,degp2)
  quot:UP(x,FRAC INT):=0
  while degree(quot) < n repeat
    deg:=minimumDegree(reste)
    mon:=monomial(coefficient(reste,deg)/coefp2,deg-degp2)$UP(x,FRAC INT)
    reste:=reste-(mon*p2)
    quot:=quot+mon
  quot


The use of the expression $UP(x,FRAC INT) indicates that it is the function monomial from the type UP(x,FRAC INT), rather than, say, that from UP(y,FRAC INT), which should be used. This function can now be used: at the first use, the AXIOM system will compile it.

(3) ->quotient(1,1+x,8)
   Compiling function quotient with type (UnivariatePolynomial(x,
   Fraction Integer),UnivariatePolynomial(x,Fraction Integer),
   NonNegativeInteger) -> Union("failed",UnivariatePolynomial(x,
   Fraction Integer))

         8    7    6    5    4    3    2
   (3)  x  - x  + x  - x  + x  - x  + x  - x + 1
             Type: Union(UnivariatePolynomial(x,Fraction Integer),...)

(4) ->quotient(x**2-x+1,x**3-x-6/7,8)

        84778967  8   18089477  7   2286095  6   166061  5   8281  4
   (4)  -------- x  - -------- x  + ------- x  - ------ x  - ---- x
        10077696      1679616       279936       46656       7776

          4459  3   889  2   91     7
        + ---- x  - --- x  + -- x - -
          1296      216      36     6
             Type: Union(UnivariatePolynomial(x,Fraction Integer),...)
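The same division by increasing powers is easy to sketch outside AXIOM; here is a rough Python transcription (polynomials as dicts mapping exponent to coefficient; an illustration, not the book's code):

```python
from fractions import Fraction

def series_quotient(p1, p2, n):
    """Divide p1 by p2 in increasing powers, up to degree n.
    Returns None ('failed') if the minimum degree of p1 is below
    that of p2; assumes p1 and p2 are non-zero."""
    mindeg = lambda p: min(e for e, c in p.items() if c != 0)
    if mindeg(p1) < mindeg(p2):
        return None
    rest = dict(p1)
    d2 = mindeg(p2)
    c2 = p2[d2]
    quot = {}
    while not quot or max(quot) < n:
        d = mindeg(rest)
        coef = rest[d] / c2
        quot[d - d2] = coef
        # rest := rest - coef * x^(d-d2) * p2
        for e, c in p2.items():
            rest[e + d - d2] = rest.get(e + d - d2, Fraction(0)) - coef * c
        rest = {e: c for e, c in rest.items() if c != 0}
        if not rest:          # exact division: stop early
            break
    return quot

# 1/(1+x) to order 8: the alternating series 1 - x + x^2 - ... + x^8.
q = series_quotient({0: Fraction(1)}, {0: Fraction(1), 1: Fraction(1)}, 8)
print(sorted(q.items()))
```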

Now let us consider an algebraic extension, for which MACSYMA was not completely satisfactory. We will use the domain SimpleAlgebraicExtension, which takes as parameters a field, a polynomial structure over that field, and a polynomial belonging to that structure. This will suffice to define a new type, which we call ext1.

(1) ->ext1:=SAE(FRAC INT,UP(a,FRAC INT),a**2+a+1)

   (1)  SimpleAlgebraicExtension(Fraction Integer,
          UnivariatePolynomial(a,Fraction Integer),a*a+a+1)
             Type: Domain

(2) ->e:ext1:=convert(((3/4)*a**2-a+(7/4))::UP(a,FRAC INT))

          7
   (2)  - - a + 1
          4
             Type: SimpleAlgebraicExtension(Fraction Integer,
                   UnivariatePolynomial(a,Fraction Integer),a*a+a+1)

-- inverse of e:
(3) ->recip(e)

        28     44
   (3)  -- a + --
        93     93
             Type: Union(SimpleAlgebraicExtension(Fraction Integer,
                   UnivariatePolynomial(a,Fraction Integer),a*a+a+1),...)

(4) ->e**2

          105     33
   (4)  - --- a - --
           16     16
             Type: SimpleAlgebraicExtension(Fraction Integer,
                   UnivariatePolynomial(a,Fraction Integer),a*a+a+1)

(5) ->e:=convert((a**2-1)::UP(a,FRAC INT))

   (5)  - a - 2
             Type: SimpleAlgebraicExtension(Fraction Integer,
                   UnivariatePolynomial(a,Fraction Integer),a*a+a+1)

Now let us take two expanded polynomials p1 and p2, with coefficients in the field Q(a) which we called ext1: (x^2 - a^2)(x^2 + 3x + a) and x^2 - a^2. We notice that this time the system correctly simplifies the rational function p2/p1.

(6) ->p1:UP(x,ext1):=x**4+3*x**3+(2*a+1)*x**2+(3*a+3)*x-1

         4     3            2
   (6)  x  + 3x  + (2a + 1)x  + (3a + 3)x - 1
             Type: UnivariatePolynomial(x,SimpleAlgebraicExtension(Fraction
                   Integer,UnivariatePolynomial(a,Fraction Integer),a*a+a+1))

(7) ->p2:UP(x,ext1):=x**2+a+1

         2
   (7)  x  + a + 1
             Type: UnivariatePolynomial(x,SimpleAlgebraicExtension(Fraction
                   Integer,UnivariatePolynomial(a,Fraction Integer),a*a+a+1))

(8) ->p2/p1

             1
   (8)  -----------
         2
        x  + 3x + a
             Type: Fraction UnivariatePolynomial(x,SimpleAlgebraicExtension
                   (Fraction Integer,UnivariatePolynomial(a,Fraction
                   Integer),a*a+a+1))

Let the reader take heart! This example was deliberately complicated, in order to demonstrate the possibilities of AXIOM to mathematicians. All the examples we gave of MACSYMA can be transcribed almost verbatim, without the requirement to declare domains. As the most recent of the computer algebra systems offered to mathematicians and engineers, AXIOM is certain to have a considerable impact. As with the other systems, it is widespread use of the system which will guarantee its growth, and it is extremely encouraging to note that its authors have succeeded in making it available.

2. The problem of data representation

This chapter is devoted to a basic aspect of Computer Algebra: the representation of mathematical objects on the computer. In this chapter we shall consider questions of the form: "How are the data represented on the computer?"; "I can present my problem in several ways: which will be the most efficient?", etc. We shall not go into the very technical questions of REDUCE or MACSYMA, but we shall explain the general principles, and the representation traps into which the user may so easily fall.

2.1 REPRESENTATIONS OF INTEGERS

"God created the integers: the rest is the work of man," said the great mathematician Kronecker. Nevertheless, man has some problems with representing integers. Most computer languages treat them as a finite set, such as {-2^31, ..., 2^31 - 1}. It is very easy to say "my data are small, the answer will be small, so I shall not need large integers and this finite set will be enough", but this argument is completely false. Suppose we want to calculate the g.c.d. of the following two polynomials [Knuth, 1969; Brown, 1971]:

    A(x) = x^8 + x^6 - 3x^4 - 3x^3 + 8x^2 + 2x - 5,
    B(x) = 3x^6 + 5x^4 - 4x^2 - 9x + 21.
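A naive Euclidean algorithm over the rationals makes the growth of the intermediate coefficients directly visible; the following sketch (plain Python with exact fractions, dense coefficient lists; not the book's code) can be run on A and B:

```python
from fractions import Fraction

def poly_rem(a, b):
    """Remainder of a divided by b; coefficient lists, highest degree first."""
    while len(a) >= len(b):
        q = a[0] / b[0]
        # Subtract q * x^(deg a - deg b) * b; the leading term cancels.
        a = [a[i] - q * b[i] for i in range(1, len(b))] + a[len(b):]
    while a and a[0] == 0:
        a = a[1:]
    return a

A = [Fraction(c) for c in [1, 0, 1, 0, -3, -3, 8, 2, -5]]
B = [Fraction(c) for c in [3, 0, 5, 0, -4, -9, 21]]
while B:
    print(B[0])          # watch the leading coefficients grow
    A, B = B, poly_rem(A, B)
# A is now a non-zero constant: the two polynomials are relatively prime.
```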

Since we know Euclid's algorithm we can begin to calculate (we shall return to this example in the sub-section "The g.c.d."). The last integer calculated has 35 decimal digits, that is 117 bits, and as the result is an integer we find that the polynomials are relatively prime. However, the data are small (the biggest number is 21), and the result "relatively prime" was a yes/no answer and only needed 1 bit. This problem of intermediate expression swell is one of the biggest problems in Computer Algebra. Similar problems have been pointed out in integration [Davenport, 1981, pp. 165-172] and elsewhere. In the chapter "Advanced algorithms" we give an example of an algorithm for the g.c.d. of polynomials which does not have this defect.

So, it is necessary to deal with the true integers of mathematicians, whatever their size. Almost all Computer Algebra systems do this, and a trivial test of a Computer Algebra system is to calculate such a number. The representation of large integers (sometimes called "bignums") is fairly obvious: we choose a number N as base, just as the normal decimal representation uses 10 as base, and every number is represented by its sign (+ or -) and a succession of "digits" (that is, the integers between 0 and N-1) in this base. The most usual bases are the powers of 2 or of 10: powers of 10 make the input and the output of the numbers (that is, their conversion from or into a decimal representation) easier, whilst powers of 2 make several internal calculations more efficient, since multiplication by the base and division by the base can be done by a simple shift. Hence a system which is aimed at the printing of large numbers should choose a base which is a power of 10, whilst one which is aimed at calculating with such numbers, which are rarely printed, will more likely choose a base which is a power of 2. Normally it is worth choosing as large a base as possible (provided that the "digits" can be stored in one word). For example, on a computer with 32 bits (such as the large IBM 30XX, the Motorola 680X0, the Intel 386 and 486, and the vast majority of "RISC" systems), we can choose 10^9 as the decimal base, or 2^30 or 2^31 as the binary base. Usually 2^32 is not used, because it makes addition of numbers difficult: the carry (if there is one) cannot be stored in the word, but has to be retrieved in a way which is completely machine-dependent ("overflow indicator", etc.).

Once we have chosen such a representation, addition, subtraction and multiplication of these integers are, in principle at least, fairly easy: the same principles one learns in primary school for decimal numbers suffice. Nevertheless, the multiplication of numbers gives rise to a problem. The product of two numbers each contained in one word requires two words for storing it. Most computers contain an instruction to do this, but "high level" languages do not give access to this instruction (the only exception is BCPL [Richards and Whitby-Strevens, 1979], with its MULDIV operation). Therefore almost all the systems for large integers contain a function written in machine language to do this calculation. Division is much more complicated, since the method learnt at school is not an algorithm, as it requires us to guess a number for the quotient. Knuth [1981] describes this problem in detail, and gives an algorithm for this guesswork which is almost always right, can never give too small an answer, and can never give an answer more than one unit too big (in such an event, the divisor has to be added to the numerator to correct this error). To calculate the g.c.d. there is Euclid's algorithm and also several other ways which may be more efficient. Knuth [1981] treats this problem too.

But the fact that the systems can deal with large numbers does not mean that we should let the numbers increase without doing anything. If we have two numbers with n digits, adding them requires a time proportional to n, or in more formal language a time O(n). Multiplying them requires a time O(n^2)*. Calculating a g.c.d., which is fundamental in the calculation of rational numbers, requires O(n^3), or O(n^2) with a bit of care**. This implies that if the numbers become 10 times longer, the time is multiplied by 10, or by 100, or by 1000. So it is always worth reducing the size of these integers. For example, it is more efficient to integrate 1/x + 2/x^2 than to integrate 12371265/(249912457x) + 24742530/(249912457x^2).

When it comes to factorising, the position is much less obvious. The simple algorithm we all know, which consists of trying all the primes less than N^(1/2), requires a running time O(N^(1/2) log^2 N), where the factor of log^2 N comes from the size of the integers involved. If N is an integer with n digits, this becomes O(10^(n/2) n^2), which is an exponential time. Better algorithms do exist†, with a running time which increases more slowly than the exponentials, but more quickly than the polynomials, that is O(exp((n log n)^(1/2))), or even O(exp(n^(1/3) log^(2/3) n)). For integers of a realistic size, we find [Wunderlich, 1979] O(N^0.154), which is a little less than O(10^(n/6)). Macmillan and Davenport [1984] give a brief account of what is meant by "realistic". Much research is being done on this problem at present, largely because of its cryptographic interest. In general, we can say that the reflex "I have just calculated an integer; now I am going to factorise it" is very dangerous. We should also note that specialised factorisation software, often running on parallel computers, is always more efficient than the general-purpose computer algebra systems.

2.2 REPRESENTATIONS OF FRACTIONS

Fractions (that is, the rational numbers) are obviously almost always represented by their numerator and denominator. As a rule they must not be replaced by floating numbers, for this entails not only loss of precision, but completely false results‡. For example, the g.c.d. of x^3 - 8 and (1/3)x^2 - (4/3) is (1/3)x - (2/3), whereas the g.c.d. of x^3 - 8 and .333333x^2 - 1.33333 is .000001, because of truncation.

* In principle, O(n log n log log n) is enough [Aho et al., 1974, Chapter 8], but no computer algebra system uses this at the present time, for it is more like 20 n log n log log n.
** In principle, O(n log^2 n log log n) [Aho et al., 1974, Chapter 8], but again no system uses it.
† These algorithms are now implemented in the major computer algebra systems such as REDUCE 3.3, MAPLE V and AXIOM. This marks a clear improvement over the situation in 1986, when this book first appeared.

All calculations with rational numbers involve calculating g.c.d.s, and this can be very costly in time. Therefore, if we can avoid them, the calculation will usually go much better. For example, instead of the above calculation, we can calculate the g.c.d. of x^3 - 8 and x^2 - 4, and this can be calculated without any g.c.d. over the integers. The same is true of the relationship between polynomials and rational functions, and we shall show an algorithm (Bareiss' algorithm) which does not need recourse to fractions for elimination in matrices.

The algorithms of addition, multiplication, etc. of fractions are fairly easy, but nevertheless there are possible improvements. Let us consider for example multiplication:

    (a/b) (c/d) = p/q.

The most obvious way is to calculate p = ac, q = bd, and then to remove the g.c.d. But, supposing that a/b and c/d are already reduced, we can conclude that

    gcd(p, q) = gcd(a, d) gcd(b, c).

Then it is more efficient to calculate the two g.c.d.s on the right than the g.c.d. on the left, for the data are smaller. It is the same for addition:

    a/b + c/d = p/q.

Here we can calculate p = ad + bc and q = bd, but it is more efficient to calculate q = bd/gcd(b, d) and

    p = a (d/gcd(b, d)) + c (b/gcd(b, d)).
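These shortcuts are a few lines in any language with integer g.c.d.s; a sketch in Python (math.gcd; an illustration, not the book's code):

```python
from math import gcd
from fractions import Fraction   # only used to cross-check the results

def mul(a, b, c, d):
    """(a/b) * (c/d) with the small-g.c.d. trick; inputs already reduced."""
    g1, g2 = gcd(a, d), gcd(b, c)
    return (a // g1) * (c // g2), (b // g2) * (d // g1)

def add(a, b, c, d):
    """(a/b) + (c/d), computing q = b*d/gcd(b,d) first."""
    g = gcd(b, d)
    p = a * (d // g) + c * (b // g)
    q = (b // g) * d
    # p and q may still share a factor, but a smaller one than with p=ad+bc.
    g2 = gcd(p, q)
    return p // g2, q // g2

print(mul(2, 15, 5, 4))   # (1, 6)
print(add(1, 6, 3, 4))    # (11, 12)
```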

It is still necessary to take out the g.c.d. of p and q, but this method yields p and q smaller than does the simpler method.

2.3 REPRESENTATIONS OF POLYNOMIALS

Now we shall consider the fundamental calculation of all Computer Algebra systems, that which distinguishes them from other systems: polynomial calculation. We must stress that the adjective "polynomial" applies to programmed calculations and not necessarily to the types of data to which the calculations apply. For example, the calculation

    (x - y)(x + y) = x^2 - y^2

is a polynomial calculation, but

    (cos a - sin b)(cos a + sin b) = cos^2 a - sin^2 b

is also one: in fact we have here the same calculation, with the variable x replaced by cos a and the variable y replaced by sin b.

‡ For example, the SMP system had several problems arising from the use of such a representation for rational numbers.

All Computer Algebra systems manipulate polynomials in several variables (normally an indefinite number*). We can add, subtract, multiply and divide them (at least if the division is without remainder; see later), but in fact the interesting calculation is simplification. It is not very interesting to write the product of (x + 1) and (x - 1) as (x + 1)(x - 1): to write it as x^2 - 1 is much more useful. "Simplification" is a word with many meanings. It is obvious that x - 1 is "simpler" than (x^2 - 1)/(x + 1), but is x^999 - x^998 + x^997 - ... - 1 "simpler" than (x^1000 - 1)/(x + 1)? To deal with this somewhat nebulous idea, let us define exactly two related ideas which we shall need.

* CAMAL requires their number to be declared, but this constraint is one of the reasons for the speed of CAMAL.

2.3.1 Canonical and normal representations

A representation of mathematical objects (polynomials, in the present case, but the definition is more general) is called canonical if two different representations always correspond to two different objects. In more formal terms, we say a correspondence f between a class O of objects and a class R of representations is a representation of O by R if each element of O corresponds to one or more elements of R (otherwise it is not represented) and each element of R corresponds to one and only one element of O (otherwise we do not know which element of O is represented). The representation is canonical if f is bijective. Thus we can decide whether two elements of O are equal by verifying that their representations are equal.

If O has the structure of a monoid (and almost every class in Computer Algebra is at least a monoid), we can define another concept. A representation is called normal if zero has only one representation. (One may ask "Why zero?" The reason is that zero is not legitimate as a second parameter for division, and therefore one has to test before dividing. If there were other excluded values, the definition of "normal" would be more complicated.) Every canonical representation is normal, but the converse is false (as we shall see later). A normal representation over a group gives us an algorithm to determine whether two elements a and b of O are equal: it is sufficient to see whether a - b is zero or not. This algorithm is, of course, less efficient than that for canonical representations, where it is sufficient to see whether the representations of a and b are equal.

So we can say that a simplification should yield a representation which would be at least normal, and if possible canonical. But we want much more, and it is now that subjectivity comes into play. We want the simplified representation to be "regular", and this would exclude a representation such as the following (where we have defined A = x^2 + x, and we then demand some variations of A):

    object        representation
    A             x(x + 1)
    A + 1         (x^3 - 1)/(x - 1)
    A - x         x^2
    A + x + 1     (x + 1)^2

Brown's storage method [1969] raises this question of regularity. Given a normal representation, he proposes to construct a canonical representation. All the expressions which have already been calculated are stored, a_1, ..., a_n. When a new expression b is calculated, for all the a_i we test whether b is equal to this a_i (by testing whether a_i - b is zero). If b = a_i, we replace b by a_i; otherwise a_{n+1} becomes b, which is a new expression. This method of storing yields a canonical representation which, however, is not at all regular, because it depends entirely on the order in which the expressions appear. Moreover, it is not efficient, for we have to store all the expressions and compare each result calculated with all the other stored expressions.

Also, we want the representation to be "natural" (which today at least excludes the use of Roman numerals), and, at least in general, "compact" (which forbids the representation of integers in unary, so that 7 becomes 1111111). Fortunately there are many representations with these properties, and every Computer Algebra system has (at least!) one. Most of the systems (in particular REDUCE) always simplify (that is, put into canonical form); MACSYMA is an exception and only simplifies on demand. For polynomials in one variable, these representations are, mathematically speaking, fairly obvious: every power of x appears only once (at most), and therefore the equality of polynomials comes down to the problem of equality of coefficients.
anoni al form): MACSYMA is an ex eption and only simpli es on demand. For polynomials in one variable, these representations are, mathemati ally speaking, fairly obvious: every power of x appears only on e (at most), and therefore the equality of polynomials omes down to the problem of equality of oeÆ ients.

Computer Algebra

81

2.3.2 Dense and sparse representations

Now we have to distinguish between two types of representation: dense or sparse. Every Computer Algebra system has its own representation (or its own representations) for polynomials and the details only begin to be interesting when we have to tinker with the internal representation, but the distin tion dense/sparse is important. A representation is alled sparse if the zero terms are not expli itly represented. Conversely, a representation is alled dense if all the terms (or at least those between the monomial of highest degree and the monomial of lowest degree) are represented | zero or non-zero. Normal mathemati al representation is sparse: we write 3x2 + 1 instead of 3x2 + 0x + 1. The most obvious omputerised representation is the representation of a polynomial a0 + a1 x +    + an xn by an array of its oeÆ ients [a0 ; a1 ; : : : ; an ℄. All the ai , zero or non-zero, are represented, therefore the representation is dense. In this representation, the addition of two polynomials of degree n involves n + 1 (that is O(n)) additions of oeÆ ients, whereas multipli ation by the nave method involves O(n2 ) multipli ations of oef ients (one an do it in O(n log n) [Aho et al., 1974, Chapter 8℄, but no system uses this at present | see Probst and Alagar [1982℄ for an appli ation and a dis ussion of the ost). A less obvious representation, from the omputing point of view, but

loser to the mathemati al representation, is a sparse representation: we store the exponent and the oeÆ ient, that is the pair (i; ai ), for all the non-zero terms ai xi . Thus 3x2 + 1 an be represented as ((2; 3); (0; 1)). This representation is fairly diÆ ult in FORTRAN, it is true, but it is more natural in a language su h as LISP (the language preferred by most workers in Computer Algebra). We stress the fa t that the exponents must be ordered (normally in de reasing order), for otherwise ((2; 3); (0; 1)) and ((0; 1); (2; 3)) would be two di erent representations of the same polynomial. To prove that this representation is not very ompli ated, at least not in LISP, we give pro edures for the addition and multipli ation of polynomials with this representation. Readers who are not familiar with LISP

an skip these de nitions without losing the thread. We shall use a representation in whi h the CAR of a polynomial is a term, whilst the CDR is another polynomial: the initial polynomial less the term de ned by the CAR. A term is a CONS, where the CAR is the exponent and the CDR is the oeÆ ient. Thus the LISP stru ture of the polynomial 3x2 +1 is ((2 . 3) (0 . 1)), and the list NIL represents the polynomial 0. In this representation, we must note that the number 1 does not have the same representation as the polynomial 1 (that is ((0 . 1))), and that the polynomial 0 is represented di erently from the other numeri al polynomials.


(DE ADD-POLY (A B)
  (COND ((NULL A) B)
        ((NULL B) A)
        ((GREATERP (CAAR A) (CAAR B))
         (CONS (CAR A) (ADD-POLY (CDR A) B)))
        ((GREATERP (CAAR B) (CAAR A))
         (CONS (CAR B) (ADD-POLY A (CDR B))))
        ((ZEROP (PLUS (CDAR A) (CDAR B)))
         ; We must not construct a zero term
         (ADD-POLY (CDR A) (CDR B)))
        (T (CONS (CONS (CAAR A) (PLUS (CDAR A) (CDAR B)))
                 (ADD-POLY (CDR A) (CDR B))))))

(DE MULTIPLY-POLY (A B)
  (COND ((OR (NULL A) (NULL B)) NIL)
        ; If a = a0+a1 and b = b0+b1, then ab =
        ; a0*b0 + a0*b1 + a1*b
        (T (CONS (CONS (PLUS (CAAR A) (CAAR B))
                       (TIMES (CDAR A) (CDAR B)))
                 (ADD-POLY (MULTIPLY-POLY (LIST (CAR A)) (CDR B))
                           (MULTIPLY-POLY (CDR A) B))))))
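For readers who prefer a modern language to LISP, here is a rough transliteration in Python (a sketch, not from the book): dense polynomials are coefficient lists [a_0, ..., a_n], sparse ones are lists of (exponent, coefficient) pairs in decreasing order of exponent, and sparse_add/sparse_mul mirror ADD-POLY and MULTIPLY-POLY.

```python
# Sketch: dense and sparse polynomial arithmetic (integer coefficients).

def dense_add(a, b):
    """O(n) coefficient additions; a, b are [a0, a1, ..., an]."""
    n = max(len(a), len(b))
    a, b = a + [0] * (n - len(a)), b + [0] * (n - len(b))
    return [x + y for x, y in zip(a, b)]

def dense_mul(a, b):
    """The naive O(n^2) method."""
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

def sparse_add(a, b):
    """Merge of (exponent, coefficient) lists, as in ADD-POLY;
    a zero term is never constructed."""
    if not a: return b
    if not b: return a
    (ea, ca), (eb, cb) = a[0], b[0]
    if ea > eb: return [a[0]] + sparse_add(a[1:], b)
    if eb > ea: return [b[0]] + sparse_add(a, b[1:])
    rest = sparse_add(a[1:], b[1:])
    return rest if ca + cb == 0 else [(ea, ca + cb)] + rest

def sparse_mul(a, b):
    """As in MULTIPLY-POLY: a*b = a0*b0 + a0*b1 + a1*b."""
    if not a or not b: return []
    (ea, ca), (eb, cb) = a[0], b[0]
    return [(ea + eb, ca * cb)] + sparse_add(sparse_mul(a[:1], b[1:]),
                                             sparse_mul(a[1:], b))
```

For example, sparse_mul([(1, 1), (0, 1)], [(1, 1), (0, -1)]) gives [(2, 1), (0, -1)], i.e. (x + 1)(x - 1) = x^2 - 1; the zero x term is never stored.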

If A has m terms and B has n terms, the calculating time (that is, the number of LISP operations) for ADD-POLY is bounded by O(m + n), and that for MULTIPLY-POLY by O(m^2 n) ((m(m+3)/2 - 1)n to be exact). There are multiplication algorithms which are more efficient than this one: roughly speaking, we ought to sort the terms of the product so that they appear in decreasing order, and the use of ADD-POLY corresponds to a sorting algorithm by insertion. Of course, the use of a better sorting method (such as "quicksort") offers a more efficient multiplication algorithm, say O(mn log m) [Johnson, 1974]. But most systems use an algorithm similar to the procedure given above.

There is a technical difficulty with this procedure, discovered recently by Abbott et al. [1987]. We shall explain this difficulty in order to illustrate the problems which can arise in the translation of mathematical formulae into Computer Algebra systems. In MULTIPLY-POLY, we add a0*b1 to a1*b. The order in which these two objects are calculated is important. Obviously, this can change neither the results nor the time taken. But it can change the memory space used during the calculations. If a and b are dense of degree n, the order which first calculates a0*b1 has to store all these intermediate results before the recursion finishes. Therefore the memory space needed is O(n^2) words, for there are n results of length between 1 and n. The other order, a1*b calculated before a0*b1, is clearly more efficient, for the space used at any moment does not exceed O(n). This is not a purely theoretical remark: Abbott et al. were able to factorise x^1155 - 1 with REDUCE in 2 megabytes of memory, but they could not remultiply the factors without running out of memory.

In any case, one might think that the algorithms which deal with sparse representation are less efficient than those which deal with dense representation: m^2 n instead of mn for elementary multiplication, or mn log m instead of max(m, n) log max(m, n) for advanced multiplication. It also looks as though there is a waste of memory, for the dense representation requires n + 2 words of memory to store a polynomial of degree n (n + 1 coefficients and n itself), whilst sparse representation requires at least 2n, and even 4n with the natural method in LISP. In fact, for completely dense polynomials, this comparison is fair. But the majority of polynomials one finds are not dense. For example, a dense method requires a million multiplications to check

    (x^1000 + 1)(x^1000 - 1) = x^2000 - 1,

whereas a sparse method only requires four, for it is the same calculation as

    (x + 1)(x - 1) = x^2 - 1.

When it is a question of polynomials in several variables, all realistic polynomials ought to be sparse: for example, a dense representation of a^5 b^5 c^5 d^5 e^5 contains 7776 terms. Therefore, the calculating time, at least for addition and multiplication, is a function of the number of terms rather than of the degree. We are familiar with the rules bounding the degree of a result as a function of the degrees of the data, but the rules for the number of terms are different. Suppose that A and B are two polynomials, of degrees n_A and n_B, with m_A and m_B terms. Then the primitive operations of addition, subtraction, multiplication, division, calculation of the g.c.d. and substitution of a polynomial for the variable of another one satisfy the bounds below:

    Operation        Degree of result    Number of terms in result
    A + B            max(n_A, n_B)       m_A + m_B
    A - B            max(n_A, n_B)       m_A + m_B
    A * B            n_A + n_B           m_A * m_B
    A / B            n_A - n_B           n_A - n_B + 1
    g.c.d.(A, B)     min(n_A, n_B)       ≤ min(n_A, n_B) + 1
    subst(x = A, B)  n_A * n_B           n_A * n_B + 1
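The multiplication row of this table, and the earlier remark about (x^1000 + 1)(x^1000 - 1), can be checked with a few lines of Python (a sketch, not from the book), using a dictionary from exponents to coefficients:

```python
# Sketch: sparse multiplication; the result has at most mA * mB terms.
def sparse_mul(a, b):
    """a, b map exponent -> coefficient; zero terms are never stored."""
    out = {}
    for ea, ca in a.items():
        for eb, cb in b.items():
            c = out.get(ea + eb, 0) + ca * cb
            if c == 0:
                out.pop(ea + eb, None)
            else:
                out[ea + eb] = c
    return out

# 4 coefficient multiplications in all...
p = sparse_mul({1000: 1, 0: 1}, {1000: 1, 0: -1})
# ...and the product x^2000 - 1 has just 2 terms, not 2001
assert p == {2000: 1, 0: -1}
```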

The bound we give for the number of terms in the result of a division is a function of the degrees, and does not depend on the number of terms in the data. This may seem strange, but if we think of (x^n - 1)/(x - 1) = x^(n-1) + x^(n-2) + ... + 1, we see that two terms in each polynomial can produce a polynomial with an arbitrarily large number of terms. For the g.c.d., the problem of limiting (non-trivially) the number of terms is quite difficult. Coppersmith and Davenport [1991] have shown that, whatever n > 1 and ε > 0 are chosen, there is a polynomial p such that p^n has at most ε times as many terms as p. In particular, if we take n = 2 and write p = gcd(p^2, (p^2)'), we get a g.c.d. which has 1/ε times as many terms as the inputs. For substitution, it is still possible for the result to be completely dense, even though the data are sparse.

From this we deduce a very important observation: there may be a great difference between the time taken for calculating a problem in terms of x and the same problem in terms of x - 1 (or any other substitution). To show what this remark means in practice, let us consider the matrix

        ( 1 + 2x      x + x^4     x + x^9   )
    A = ( x + x^4     1 + 2x^4    x^4 + x^9 )
        ( x + x^9     x^4 + x^9   1 + 2x^9  ).

We can calculate A^2 in .42 seconds (REDUCE on a microcomputer Motorola 68000), but if we rewrite it in terms of y = x - 1, this calculating time becomes 5.26 seconds, more than 10 times more. This problem comes up likewise in factorisation, as we shall see in Chapter 4.

2.3.3 The g.c.d.

As we shall see later, calculations with rational fractions involve calculating the g.c.d. of the numerator and denominator of fractions. These calculations are less obvious than one might think, and to illustrate this remark let us consider the g.c.d. of the following two polynomials (this analysis is mostly taken from Brown [1971]):

    A(x) = x^8 + x^6 - 3x^4 - 3x^3 + 8x^2 + 2x - 6,
    B(x) = 3x^6 + 5x^4 - 4x^2 - 9x + 21.

The first elimination gives A - (x^2/3 - 2/9)B, that is

    -(5/9)x^4 + (1/9)x^2 - 1/3,

and the subsequent eliminations give

    -(117/25)x^2 - 9x + 441/25,

    (233150/19773)x - 102500/6591,

and, finally,

    -1288744821/543589225.

It is obvious that these calculations on polynomials with rational coefficients require several g.c.d. calculations on integers, and that the integers in these calculations are not always small. We can eliminate these g.c.d. calculations by working all the time with polynomials with integer coefficients, and this gives polynomial remainder sequences or p.r.s. Instead of dividing A by B in Q, we can multiply A by a power (that is, with exponent the difference of the degrees plus one) of the leading coefficient of B, so that this multiple of A can be divided by B over Z. Thus we deduce the following sequence:

    -15x^4 + 3x^2 - 9,
    15795x^2 + 30375x - 59535,
    1254542875143750x - 1654608338437500

and

    12593338795500743100931141992187500.

These sequences are called Euclidean sequences (even though Euclid did not know about polynomials as algebraic objects). In general, the coefficients of such a sequence undergo an exponential increase in their length (as functions of the degrees of the given polynomials). Obviously, we can simplify by the common factors of these sequences (this algorithm is the algorithm of primitive sequences), but this involves the calculation of the g.c.d., which puts us back into the morass from which we have just emerged. Fortunately, it is possible to choose the coefficients by which we multiply A before dividing it by B etc., so that the increase of the coefficients is only linear. This idea, due independently to Brown and to Collins, is called the sequence of sub-resultant polynomials (sub-resultant p.r.s.). This algorithm is described by Brown [1971] and by Loos [1982], but it is important enough to be described here, even though we omit the proofs.

Suppose that the data are two polynomials F_1 and F_2 with integer coefficients. We determine their g.c.d. with the help of a sequence of polynomials F_3, .... Let δ_i be the difference between the degrees of F_i and F_(i+1), and f_i the leading coefficient of F_i. Then the remainder from the division of f_i^(δ_(i-1)+1) F_(i-1) by F_i is always a polynomial with integer coefficients. Let us call it β_(i+1) F_(i+1), where β_(i+1) is to be determined. The algorithm called "Euclidean" corresponds to the choice of β_(i+1) = 1. If we put

    β_3 = (-1)^(δ_1+1),    β_i = -f_(i-2) ψ_i^(δ_(i-2)),

where the ψ_i are defined by

    ψ_3 = -1,    ψ_i = (-f_(i-2))^(δ_(i-3)) ψ_(i-1)^(1-δ_(i-3)),

then (by the Sub-resultant Theorem [Loos, 1982]) the F_i are always polynomials with integer coefficients, and the increase in length of the coefficients is only linear. For the same problem as before, we get the sequence:

    F_3 = 15x^4 - 3x^2 + 9,
    F_4 = 65x^2 + 125x - 245,
    F_5 = 9326x - 12300,
    F_6 = 260708,

and the growth is clearly less great than for the previous sequences. This algorithm is the best method known for calculating the g.c.d., of all those based on Euclid's algorithm applied to polynomials with integer coefficients. In the chapter "Advanced algorithms" we shall see that if we go beyond these limits, it is possible to find better algorithms for this calculation.

2.4 POLYNOMIALS IN SEVERAL VARIABLES

There is a new difficulty as soon as we begin to calculate with polynomials in several variables: do we write x + y or y + x? Mathematically, we are dealing with the same object, and therefore if we want a canonical representation, we ought to write only one of these expressions. But which one? To be able to distinguish between them, we have to introduce a new idea, that of an order among the variables. We have already used, implicitly, the idea of an order among the powers of a variable, since we write x^3 + x^2 + x^1 + x^0 instead of x^2 + x^0 + x^3 + x^1, but this order seems "natural" (Norman [1982] has written a system which does not order the monomials according to this arrangement, but this gives rise to many unsolved problems). The idea of an order among the variables may seem artificial, but it is as necessary as the other.

When we have decided on an order, such as "x is more important than y", we know that x + y is the canonical representation, but that y + x is not. When it is a question of more complicated monomials (we recall that a monomial is a product of powers of variables, such as x^2 y^3 or x^1 y^2 z^3), there are various ways of extending this order. The most common ways are the following:
(a) We can decide that the main variable (the one which is in front of all the others in our order) is to determine the order as exactly as possible, and that we will only consider the powers of other variables if the powers of the first variable are equal. This system is called lexicographic, for it is the same system as the one used in a dictionary, where one looks first at the first letter of the two words, and it is only if they are the same that one looks at the second etc. In this system, the polynomial (x + y)^2 + x + y + 1 is written x^2 + 2xy + x + y^2 + y + 1 (if x is more "principal" than y).
(b) We can decide that the total degree of the monomial (that is, the sum of the powers of the variables) is the most important thing, and that the terms of total degree 2 ought to appear before all the terms of total degree 1 etc. We use the previous method to distinguish between terms of the same total degree. This system is called total degree, or, more precisely, total degree, then lexicographic. In this system, the polynomial (x + y)^2 + x + y + 1 is written x^2 + 2xy + y^2 + x + y + 1 (if x is more "principal" than y).
(c) Instead of the lexicographic method, we can use the opposite. For a polynomial in one variable, this is the same as the increasing order of the powers, and this gives rise to difficulties (at least from the naive point of view) in division and when calculating g.c.d.s. But this system has advantages when linked to the total degree method, as total degree, then inverse lexicographic. In this system, the polynomial (x + y)^2 + x + y + 1 is written y^2 + 2xy + x^2 + y + x + 1 (if x is more "principal" than y).
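These orders are easy to express as sort keys. The following sketch in Python (not from the book) represents the monomials of (x + y)^2 + x + y + 1 by their exponent vectors (e_x, e_y) and reproduces the three printed forms:

```python
# Sketch: the three monomial orders as sort keys on exponent vectors.
def lex_key(e):
    return e                    # compare powers of the main variable first

def tdeg_lex_key(e):
    return (sum(e), e)          # total degree first, then lexicographic

def tdeg_invlex_key(e):
    # total degree first; then the *smallest* power of the main
    # variable wins, which negating the exponents achieves
    return (sum(e), tuple(-x for x in e))

def ordered(monomials, key):
    return sorted(monomials, key=key, reverse=True)   # biggest first

# monomials of (x + y)^2 + x + y + 1 as (e_x, e_y)
m = [(2, 0), (1, 1), (0, 2), (1, 0), (0, 1), (0, 0)]
# (a) lex:              x^2, 2xy, x, y^2, y, 1
# (b) tdeg then lex:    x^2, 2xy, y^2, x, y, 1
# (c) tdeg then invlex: y^2, 2xy, x^2, y, x, 1
```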
The reader might think that the systems (b) and (c) are equivalent, but with the order of the variables inverted. That is true for the case of two variables, but not for more than two. To show the difference, let us look at the expansion of (x + y + z)^3. First of all, in the order total degree then lexicographic (x before y before z), we get

    x^3 + 3x^2 y + 3x^2 z + 3xy^2 + 6xyz + 3xz^2 + y^3 + 3y^2 z + 3yz^2 + z^3.

In the order total degree, then inverse lexicographic (z before y before x), we get

    x^3 + 3x^2 y + 3xy^2 + y^3 + 3x^2 z + 6xyz + 3y^2 z + 3xz^2 + 3yz^2 + z^3.


In this order, we have taken all the terms which do not involve z before attacking the others, whereas the first order chose all the terms containing x^2 (even x^2 z) before attacking the others.

In fact, deciding which of these systems is the best for a particular calculation is not always very obvious, and we do not have any good criteria for choosing (but see Buchberger [1981]). Most of the existing systems use a lexicographic order, but each system has its own particular methods. Lexicographic representation has an interesting consequence: as all the terms with x^n are grouped together (supposing that x is the main variable), we can therefore consider this polynomial as a polynomial in x, whose coefficients are polynomials in all the other variables. So the polynomial in case (a) can be written in the form x^2 + x(2y + 1) + (y^2 + y + 1). This form is called recursive, in contrast to the distributed form x^2 + 2xy + x + y^2 + y + 1. The recursive form is used by most systems.

This discussion provides us with some indications about the behaviour of systems. For example, in MACSYMA, the function INPART lets us choose one part of an expression, but it operates on the internal form, which is more or less inverse lexicographic recursive. Similarly, REDUCE has a function COEFF, which gives the coefficients of an expression, seen as a polynomial in one named variable. As the internal representation of REDUCE is lexicographic recursive, it is obvious that this function is much quicker when the named variable is the principal one. For example, if we take (w + x + y + z)^6, the time of COEFF varies between .50 seconds (w named) and .84 seconds (z named). In general, the calculation time (and the working memory) may vary greatly according to the order of the variables [Pearce and Hicks, 1981, 1982, 1983], but the reasons are not always very obvious. We do not know of any good general rules for choosing the order, for it is largely determined by the form desired for printing the results. We must also point out that changing the order is expensive, since all the results which have already been calculated have to be re-expressed in the new order.

These questions of order are not just internal details which concern only the programmers: they can influence the results obtained. For example, suppose we want to divide 2x - y by x + y. If x is the main variable, then the quotient is 2, and the remainder is -3y. But if y is the main variable, then the quotient is -1, and the remainder is 3x.

2.5 REPRESENTATIONS OF RATIONAL FUNCTIONS

Most al ulations use not only polynomials, but also fra tions of them, that is, rational fun tions. The same remarks whi h applied in the polynomial

ase apply here: the al ulation 1

os x + sin x 1 + = sin x os x

os x sin x

Computer Algebra

89

is a rational al ulation | in fa t it is the same al ulation as 1 1 a+b + = : b

a

ab

If we represent a rational fun tion by a polynomial (the numerator) divided by another polynomial (the denominator), we have of ourse a normal representation, for the fun tion represents zero if and only if its numerator is zero. With REDUCE, it is possible not to use this representation (OFF MCD), but we do not advise this, be ause we no longer know whether a formula represents zero or not, and this auses many problems with Gaussian elimination, for example in



1 1=(x 1) 1=(x + 1) 1=(x2 1)



:

If we want a anoni al representation, we have to do more than express in the form of the quotient of two polynomials. For example, the formulae (x 1)=(x + 1) and (x2 2x + 1)=(x2 1) represent the same element of Q(x), but they are two di erent formulae. Here we must say a few words about the di eren e between the elements of Q(x) and the fun tions of Q into itself whi h they represent. Seen as fun tions of Q into itself, f (x) = (x 1)=(x + 1) and g (x) = (x2 2x + 1)=(x2 1) are two di erent fun tions, for the rst is de ned at the value x = 1 (where it takes the value 0), whereas g (x) is not de ned, sin e it be omes 0=0. But this singularity is not \intrinsi ", be ause the fun tion de ned by

8 2 < )= :0

2x + 1 (x 6= 1) 1 (x = 1) is ontinuous, di erentiable et . at x = 1. In general, it is not very hard to

he k that, if f1 = p1 =q1 is a simpli ation of f = p=q (where p, q , p1 and q1 are polynomials), then at ea h value x0 of x where f is de ned, f1 (x0 ) is de ned and equal to f (x0 ), and, moreover, if f (x1 ) is not de ned, but f1 (x1 ) is de ned, then the fun tion (

x

g1 x

( )=

g x



2

x

( ) (x 6= x1 ) ( ) (x = x1 )

f x

f1 x

is ontinuous et . at x = x1 . Thus this kind of simpli ation of a formula only hanges the fun tion by eliminating su h singularities*. * The reader who is familiar with the theory of denotational semanti s

an express these ideas di erently by noting that f1 is higher (in the latti e of partial fun tions of Q into itself) than the fun tion f .

90

The problem of data representation

Seen algebrai ally, i.e. as elements of Q(x), the expressions (x 1)= (x + 1) and (x2 2x + 1)=(x2 1) are equal, sin e the di eren e between them is 0=(x2 1) = 0. Thus there is a slight distin tion between the elements of Q(x) and the fun tions of Q into itself, but, with the usual abuse of language, we all the elements of Q(x) rational fun tions. (Bourbaki : \: : : the abuses of language without whi h any mathemati al text threatens to be ome pedanti and even unreadable"). After this short digression, we return to the problem of a anoni al representation for rational fun tions, that is to say, the elements of Q(x). We have established that (x 1)=(x + 1) and (x2 2x + 1)=(x2 1) are two di erent representations of the same fun tion. The de nition of \ anoni al" implies the fa t that at least one of them is not anoni al. The natural

hoi e is to say that there must not be any divisor ommon to the numerator and the denominator. This implies that (x2 2x + 1)=(x2 1) is not

anoni al, for there is a g. .d. whi h is (x 1). If we remove this g. .d., we ome ba k to (x 1)=(x + 1). In general, we nd a representation with the degree of the numerator as small as possible (and the same for the denominator). If there were only one su h representation, it would be a good

hoi e for a anoni al representation. Unfortunately, the ondition that the degree of the numerator is to be minimised does not give us uniqueness, as the following examples show: 2x 1 4x 2 2x + 1 = = = 2x + 1 2x 1 4x 2

+ 1=2 = + 1=2

x x

x x

1=2 : 1=2

To resolve these ambiguities, most existing systems take into a

ount the following rules (for rational fun tions with oeÆ ients in Q): (1) no rational oeÆ ient in the expression (whi h eliminates the last two expressions); (2) no integer may divide both the numerator and the denominator of the expression (whi h eliminates (4x 2)=( 4x 2)); (3) the leading oeÆ ient of the denominator of the expression must be positive (whi h eliminates the se ond expression). There are several other possibilities, but these rules are the ones most used, and are suÆ ient to give a anoni al form for polynomials over Q. The

ase of more general oeÆ ient domains is explained by Davenport and Trager [1990℄. As we have already said in the ase of rational numbers, the fa t that one an al ulate with rational fra tions does not mean that one should al ulate with them. If one an nd an algorithm whi h avoids these al ulations, it will in general be more eÆ ient. We shall see later that Bareiss' algorithm is a variation of Gaussian elimination whi h does not need the use of fra tions.

Computer Algebra

91

p

2.6 REPRESENTATIONS OF ALGEBRAIC FUNCTIONS

We understand by algebrai a solution of a polynomial equation. 2 is an algebrai number, sin e it is both a number and a solution of the equation 2 2 = 0. This equation has two roots, but we shall not di erentiate between them in this se tion. In the next hapter we shall see how to di erentiate between the di erent real values whi h p satisfy the same equation, but this is unne essary for many appli ations. 3 x2 1 is an algebrai fun tion, be ause it is both a fun tion and a solution of the equation 3 x2 + 1 = 0. Almost all this se tion applies to both fun tions and numbers: in general we shall speak of algebrai fun tions, even though most of the examples are algebrai numbers. Every radi al is an algebrai expression, but the opposite is false, as the great mathemati ian Abel proved. More pre isely, the algebrai number whi h is a solution of 5 + + 1 = 0 annot be expressed in radi als. One an therefore distinguish p threep lasses of algebrai expressions: (1) the simple radi als, su h as 2 or 3 x2 1; (2) the simple or nested radi als, whi h in lude also expressions su h as p p p 1 + 2 or 3 2 + 3 x; (3) the general algebrai expressions, whi h also in lude expressions su h as the algebrai number de ned by 5 + + 1 = 0. Ea h lass is ontained in the subsequent ones.

p

q

2.6.1 The simple radi als

For the rst lass, the representation is fairly obvious: we onsider ea h radi al as a variable appearing in a polynomial or rational expression. Obviously, if is an n-th root, we only use the powers 0; : : : ; n 1 of , and we repla e the higher powers by lower powers. This representation is not anoni al for two di erent reasons. In the rst pla e there is a problem with rational fra tions whi hp ontain algebrai p expressions. Let us look for example at 1=( 2 1) and 2 + 1. These two expressions are not equal, but they represent the same number, for their di eren e is zero:

p1 2 1

p

p

p

( 2 + 1)( 2 1) 1 p ( 2 + 1) = p 2 1 2 1 p 2 1 ( 2) + 1 0 p = =p = 0: 2 1 2 1

One solution is to insist that p the roots appear only in the numerator: this forbids the representation 1=( 2 1). For every root , we an a hieve this by multiplying the numerator n and the denominator d of the expression

92

The problem of data representation

by a polynomial d1 , su h that the algebrai quantity does not appear in the produ t dd1 . In this ase,

n nd1 = d dd1 and is removed from the denominator. We an al ulate this polynomial d1 by applying the extended Eu lideanalgorithm (see the Appendix) to the pair d and p, where p is the polynomial de ning . This appli ation produ es two polynomials d1 and d2 su h that dd1 + pd2 = , where does not depend on , and then dd1 = . With this restri tion, we an be sure that we shall have a anoni al system, if the radi als form an independent system. But this representation p 1 p p be omes is not always very eÆ ient: for example p 2+ 3+ 5+ 7

ppp

22 3 5 7

ppp

ppp

p

34 2p 5p 7p 50 2 p3 7 + 135 p 7+ p 62 2 3 5 133 5 145 3 + 185 2 : 215

Be ause of this growth, we often stop at the normal representation given by rational fra tions in the powers of less than n. But even with this restri tion to polynomials, there is another problem: that of the interdependen e of the radi als. A very p simple example of this kind of problem is that we must p not onstru tp 1, whi h is equal to 1. Similarly, we with 2, or with 8, but not with both of p an al ulate p them, for 8 = 2 2, and this representation and not p isp not anoni al p even normal. Similarly, among the radi als 2, 3 and 6, we an work with p any two p out of the three: if they are all present we get the relation p 2 3 = 6. All these examples are \obvious", and we might suppose that it is enough to make sure that all the numbers (or polynomials or rational fun tions) whi h are in the radi als are relatively prime, and that the radi als annot be simpli ed.pUnfortunately, things are a little more

ompli ated. Let us onsider = 4 4. One might suppose that there is no possible simpli ation, but in fa t 2 = 2 2. This is a onsequen e of the fa torisation x4 + 4 = (x2 2x + 2)(x2 + 2x + 2). Following p Capelli [1901℄, we an prove that this example (and variations on it su h as 4 4:34) is the only possible non-trivial simpli ation. Najid-Zejli [1984, 1985℄ has studied these questions, and has given an algorithm to de ide if there is a relation between non-nested radi als. 2.6.2 Nested radi als

The problems are more diÆ ult for the se ond lass, i.e. the nested radi als. The rst problem, that of the equivalen e between rational fra tions whi h

Computer Algebra

93

ontain radi als, has the same \solution" as before, and the same defe ts. The other problem, that of relations between the radi als, is still not solved (in a satisfa tory manner). There is a general solution, but it is the same solution as for general algebrai expressions, and we shall onsider it later. We annot prove that this problem is diÆ ult, but there are many surprising examples. We ite the identities

q

q

p

5+2 6+

q

r 16

q

p

qp

p

5

5

32=5

p

p

r

1=

p

10 29 =

p 5

(1)

p

2 6=2 3

x + x2

2 29 + 2 55 3

q

q

p p

9+4 2=1+2 2

2

+

(2)

x 1

p

22 + 2 5

r

r

1 3 + 5 25 25 r  p 1 1+ 53 = 5 25

27=5 =

p p

q

x+1

r

5

p

(3)

2

q

p

p

11 + 2 29 + 5 (4)

r 5

9 25

p5

2

3

p p

(112 + 70 2) + (46 + 34 2) 5 = (5 + 4 2) + (3 + 2) 5

(5) (6)

where we owe (1) to Davenport [1981℄, (2) and (3) to Zippel [1985℄, (4) to Shanks [1974℄, (5) to Ramanujan [1927℄ and (6) to Borodin et al. [1985℄. Re ently, Borodin et al. [1985℄ and Zippel [1985℄ have been studying this problem, and they have found algorithms whi h an solve several nested radi als, either by writing them in a non-nested way, or by proving that there is no simpli ation of this kind. But these algorithms are limited to a few spe ial ases, su h as square roots with two levels of nesting. Landau [1992a, 1992b℄ has improved the pro edure, but it is still not guaranteed to nd a minimal de-nesting. The general ase remains, as we said earlier, unresolved (in a satisfa tory way). 2.6.3 General algebrai fun tions

Now we look at algebrai fun tions whi h are de ned as being the roots of some polynomial. It is possible that there exists a representation in terms of radi als, but, as we have already stated, it is also possible that there is not one. So, let be an algebrai fun tion (or number) de ned as the root of the polynomial p( ). If the polynomial p is not irredu ible, even simple al ulations onfront us with many problems. For example, take p( ) = 2 3 + 2. Thus is an algebrai number of degree two. ( 1) and ( 2) are two numbers, a priori non-zero. But their produ t is

94

The problem of data representation

p( ), whi h redu es to zero. In fa t, this is a generalisation of the problem p we ame up against with simple radi als, where 1 was not a legitimate radi al. In this ase,pseen from this new angle, the problem is that the polynomial de ning 1, i.e. 2 1, is not irredu ible and has a fa tor 1. In the same way, our new polynomial has fa tors 1 and 2, and therefore one of the expressions ( 1) and ( 2) is zero. Even if the fa tors are not linear, any al ulation with the roots of a redu ible polynomial may result in an impasse, where two expressions, apparently non-zero, give zero when multiplied. The other diÆ ulty whi h may arise is the impossibility of dividing by a non-zero expression, for it may have a non-trivial g. .d. with the polynomial de ning . Therefore we require, mathemati ally speaking, that the polynomials de ning the algebrai numbers and fun tions be irredu ible, and most omputer algebra systems impose the same restri tion, or else do not guarantee the results if the polynomials are not irredu ible. An alternative philosophy is sket hed in se tion 2.6.5. All the examples of the pre eding sub-se tions, where the radi als were simpler than had been thought, p have in ommon the fa t that the polynomials were not irredu ible. 1 is only an integer, be ause the polynomial whi h seems ( 1)( + 1). p4 to de ne it, that is 2 1, fa torises into Similarly, 4 is not legitimate, for its polynomial, 4 + 4, fa torises into ( 2 2 + 2)( 2 + 2 + 2). The same is true for nested radi als. We onsider equation (1) of the p previous sub-se tion, where the polynomial de ning p 9 + 4 2 is 2 p(9 + p 4 2). This polynomial fa torises into ( (1 + 2 2))( + (1 + 2 2)), and therefore the nested radi al has a simpler form. If we want to treat only polynomials with integer oeÆ ients, we an say that is a root of 4 18 2 + 49 (whi h is the norm * of the polynomial already given). This polynomial also fa torises p into ( 2 2 7)( 2 + 2 7), and the roots of the rst fa tor are 1  2 2. 
Thus, a de nition su h as \ a root of p( )" is legitimate if and only if p is irredu ible. Here \legitimate" means that the use of polynomials in (of degree less than that of p) gives a anoni al representation, and that the use of su h rational fun tions (whi h do not ontain this root in the denominator) also gives a anoni al representation. When several roots appear, obviously all the polynomials de ning them must be irredu ible. But we need more than that, in the sense that they must be irredu ible, not only separately, but also together. Consider, for example, a root of 5 1, and a root of 5 + 5 4 + 10 3 + 10 2 + 4 1. These two polynomials are irredu ible, when viewed as

p

* These norms an be al ulated using resultants | see the Appendix.

Computer Algebra

95

polynomials with integer oeÆ ients. But if we take the polynomial de ning as a polynomial whose oeÆ ients may depend on , the situation is very di erent. In fa t this polynomial fa torises into ( + 1) ( + 3 ( +4)+ 2( 2 +3 +6)+ ( 3 +2 2 +3 +4)+ 4 + 3 + 2 + ): 4

This fa torisation should not surprise us, be ause is only 1, as the linear fa tor above proves. (The other fa tor orresponds to the fa t that 5 1 has ve roots, and an be expressed in terms of a di erent root from the root hosen as .) Thus, to verify that a system of roots i of polynomials pi is legitimate, we have to fa torise p1 as a polynomial with integer oeÆ ients, then p2 as a polynomial with oeÆ ients in Z[ 1 ℄, then p3 as a polynomial with

oeÆ ients in Z[ 1 ; 2 ℄, then : : : . If all the pi are polynomials with integer

oeÆ ients, the order among the pi does not hange the result. If a pi depends on an j , pj has to be fa torised before pi . Note that, in fa t, this pro ess an easily be very expensive, for fa torisations over algebrai elds are very hard to arry out. Abbott [1988℄ quotes more than 40 minutes to fa torise the polynomial

x9 + 9x8 + 36x7 + 69x6 + 36x5 99x4 303x3 450x2 342x 226 over the extension generated by , a root of

9 15 6 87 3

125 = 0:

2.6.4 Primitive elements

Instead of studying several algebrai numbers (or fun tions), su h as p

p

2 and 3, we an always* go ba k to the ase of a single algebrai number (or of a single algebrai fun tion), in terms of whi h all the others an be expressed. This quantity is alled a primitive element for the eld generated by the given quantities, or, more simply, a primitive element for the quantities themselves. For example, of the polynomial 4 p p p the number de nedp as a root 3 2 10 +1 is 2+ 3, and in terms of , 2 = ( 9 )=2 and 3 = (11 * Given that we are working in a eld with hara teristi zero, that is an extension of the integers. This theorem does not hold if we are working in an extension of the integers modulo p, but we are not interested in this at present.


The problem of data representation

θ is therefore a primitive element for the field Q[√2, √3]. These

primitive elements can be calculated from the polynomials which define the given quantities, by using the resultant (see the appendix "Algebraic background"). For example, θ is defined by the resultant (with respect to y) of y^2 - 3 and (x - y)^2 - 2. We can already see that the relation between θ and √2 and √3 is not obvious. The primitive elements are often very complicated. Najid-Zejli [1985] noted that the primitive element γ corresponding to two roots α and β of the polynomial x^4 + 2x^3 + 5 is (at least if we calculate with the well-known algorithms [Trager, 1976]) a root of

    γ^12 + 18γ^11 + 132γ^10 + 504γ^9 + 991γ^8 + 372γ^7 - 3028γ^6 - 6720γ^5
        + 11435γ^4 + 91650γ^3 + 185400γ^2 + 194400γ + 164525.

This polynomial is itself discouraging enough, but, in addition, the expressions for α and β in terms of γ require numbers with fourteen digits. When we are dealing with a primitive element for three of the roots (which is at the same time a primitive element for all the roots), the corresponding polynomial has coefficients of more than 200 digits. We can conclude that, although primitive elements are fairly useful theoretically, they are too difficult to calculate and to use in practice.

2.6.5 Dynamic evaluation of algebraic functions

This section presents an alternative philosophy of working with algebraic numbers and functions, originally presented by Della Dora et al. [1985] and Dicrescenzo and Duval [1985], and later refined by Dicrescenzo and Duval [1988]. The idea is that, rather than insisting a priori that the defining polynomials be irreducible, we proceed on the optimistic assumption that they are irreducible, and perhaps we will discover some factorisations a posteriori. Many arithmetic operations (addition, subtraction, multiplication) can take place normally. Division may not be possible, because we may discover an uninvertible element. For example, if we assume that the polynomial x^2 - 1 is irreducible, and let α be a root of this polynomial, we can divide by α + 2, because

    (x + 2) · (-(x - 2)/3) + (x^2 - 1) · (1/3) = 1

(calculated by applying the extended Euclidean algorithm to x + 2 and x^2 - 1), and, since α^2 - 1 = 0, this means that

    (α + 2) · (-(α - 2)/3) = 1,

Computer Algebra


so dividing by α + 2 is the same as multiplying by (2 - α)/3. However, we will come to grief if we attempt to divide by α - 1, since applying the extended Euclidean algorithm to x - 1 and x^2 - 1 will discover that the two are not relatively prime, and in fact we have a factor x - 1 of x^2 - 1. At this point, the calculation must branch, depending on what α is, or what it may be: if α is a root of x - 1, then we are trying to divide by zero, and we must think again (if we were doing Gaussian elimination, we would choose a different pivot); else α is a root of x + 1, and the calculation is well-founded, since α - 1 is now known to be -2. Of course, a realistic example would be substantially more complicated, and one might have to make several splittings of the original polynomial, ending up with some kind of decision tree. Furthermore, a practical application might be able to discard some branches; for example, we might know that α was real, and so be able to discard branches where α had to be complex (see section 3.2.1 for methods of making this decision).

2.7 REPRESENTATIONS OF TRANSCENDENTALS

Transcendental functions group together several classes of functions, each with its own special rules. In general, a function such as sin x is represented by a structure such as (SIN X) in LISP, or a "record" in PASCAL. Numbers such as sin 1 or sin π can also be represented in this way. Thus, this structure is regarded as a variable and can appear in polynomials or rational functions. In REDUCE, for example, such a structure is called a "kernel". We have already seen that polynomial or rational calculations

can apply to variables of this kind. The great problem then is the simplification of these variables, and their relation to one another. We know a lot of rules of simplification, such as

    sin(x + y) = sin x cos y + cos x sin y            (1)
    sin x cos y = (sin(x + y) + sin(x - y))/2         (2)
    log(xy) = log x + log y                           (3)
    log exp x = x                                     (4)
    exp(x + y) = exp x exp y                          (5)
    sin π = 0.                                        (6)

Most Computer Algebra systems let the user define rules of this kind, which the system will take into account. For example, in REDUCE we can express the rules (1)-(6) by:


FOR ALL X,Y LET SIN(X+Y) = SIN(X)*COS(Y)+COS(X)*SIN(Y);
FOR ALL X,Y LET SIN(X)*COS(Y) = (SIN(X+Y)+SIN(X-Y))/2;
FOR ALL X,Y LET LOG(X*Y) = LOG(X) + LOG(Y);
FOR ALL X LET LOG(EXP(X)) = X;
FOR ALL X,Y LET EXP(X+Y) = EXP(X)*EXP(Y);
LET SIN(PI)=0;
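Outside REDUCE, the mechanics of such rules can be imitated by a toy rewriter over nested tuples; the Python sketch below (representation and names ours) applies rule (3), log(xy) = log x + log y, bottom-up to a fixed point:

```python
# Toy rewrite engine: expressions are nested tuples, e.g.
# ('log', ('*', 'a', 'b')).  One rule, applied to a fixed point:
#   log(X*Y) -> log(X) + log(Y)     (rule (3) of the list above)
def rewrite(e):
    if not isinstance(e, tuple):
        return e
    e = tuple(rewrite(arg) for arg in e)      # rewrite subterms first
    if e[0] == 'log' and isinstance(e[1], tuple) and e[1][0] == '*':
        _, (_, x, y) = e
        return ('+', rewrite(('log', x)), rewrite(('log', y)))
    return e

expr = ('log', ('*', ('*', 'a', 'b'), 'c'))
print(rewrite(expr))
# ('+', ('+', ('log', 'a'), ('log', 'b')), ('log', 'c'))
```

Installing rule (2) as well as rule (1) in such a rewriter would make it loop on sin(a + b), exactly the danger discussed below for REDUCE.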

There is a difference between the last rule and the others: the last one applies to a special number, whilst the others apply to any possible value of X or Y, which is expressed by the preamble FOR ALL. Nevertheless, there are still some pitfalls in this area of rules (or, more exactly, rewrite rules). Firstly, we see that rule (1) implies that sin 2x = 2 sin x cos x, whereas its expression in REDUCE does not have the same effect, for REDUCE does not see that 2x = x + x. Therefore a better translation of the first rule would be

FOR ALL X,Y LET SIN(X+Y) = SIN(X)*COS(Y)+COS(X)*SIN(Y);
FOR ALL X LET SIN(2*X) = 2*SIN(X)*COS(X);

but even this is not enough (because of sin 3x etc.), and we have to add a rule such as

FOR ALL X,N SUCH THAT NUMBERP N AND N>1 LET SIN(N*X) = SIN((N-1)*X)*COS(X) + COS((N-1)*X)*SIN(X);

with corresponding rules for cos. Secondly, rules (1) and (2) are mutual inverses. If we ask REDUCE to apply both, it loops* when we enter sin(a + b), for rule (1) rewrites it in the form sin a cos b + cos a sin b, and the first term of this is rewritten by rule (2) in the form (sin(a + b) + sin(a - b))/2, which contains the original term, to which rule (1) can be applied again. Thirdly, this simplification by rewrite rules can be very expensive. Every rewrite needs a resimplification (from the polynomial point of view) of the expression we are trying to simplify. Moreover, the simplification we have just done may give rise to other simplifications. Fourthly, we are not certain that we have given all the necessary rules. Because of the interactive nature of Computer Algebra systems, this is not always a serious problem, but often we want all the trigonometric functions to be linearised, or all the logarithms to be independent etc. All these problems can be linked to the fact that we are using a general method, that is the method of rewrite rules, to solve a problem which is
* In principle. In fact, the present version of REDUCE notices that it has applied more rules than a system limit allows, stops and gives an error message.



clearly less general, such as the simplification of logarithmic or trigonometric functions. If we have a function of which we know nothing but some rules, the approach by rewrite rules is the only possible one. There is a difference between knowing some possible simplifications (which may be rewrite rules), and knowing not only these simplifications, but also that they are the only possible ones. For example, we are familiar with the rules:

    log(fg) = log f + log g;
    exp(f + g) = exp f · exp g;                       (1)
    exp log f = log exp f = f;

but are they the only possible simplifications? There are theorems which can describe precisely the possible simplifications. The first, and the one which covers the most important case, is Risch's structure theorem [Risch, 1979; Rosenlicht, 1976], which says, in effect, that the rules (1) are the only possible simplifications for functions generated by the operators exp and log. Although we are referring to fairly recent literature, the theorem (or, more exactly, the underlying principles) has been known since Liouville, but it is only Computer Algebra which requires such theorems to be explicit.

Structure Theorem. Let K be a field of constants and θ1, … , θn algebraic, exponential (when we write θi = ui = exp vi) or logarithmic (when we write θi = vi = log ui) functions, with each θi defined over K(x, θ1, … , θi-1), and with K(x, θ1, … , θn) having K as its field of constants.
(a) In these conditions, a θi which is an exponential (θi' = vi' θi) is transcendental over K(x, θ1, … , θi-1) if, and only if, vi cannot be expressed as c + Σ_{j=1}^{i-1} nj vj, where c belongs to K and the nj are rational numbers.
(b) Similarly, a θi which is a logarithm (θi' = ui'/ui) is transcendental over K(x, θ1, … , θi-1) if, and only if, no power ui^n of ui can be expressed as c Π_{j=1}^{i-1} uj^{nj}, where c belongs to K and n and the nj are integers (with n ≠ 0).

This theorem may seem fairly complicated, but it can be re-expressed informally in a much simpler form:
(a) An exponential function is independent of the exponentials and logarithms which have already been introduced if, and only if, its argument cannot be expressed as a linear combination (with rational coefficients) of the logarithms and arguments of the exponentials which we have already introduced. Such a combination would mean that the new exponential is a product of powers of the exponentials and of the arguments of the logarithms already introduced.



(b) A logarithmic function is independent of the exponentials and logarithms already introduced if, and only if, its argument cannot be expressed as a product (with rational exponents) of the exponentials and of the arguments of the logarithms already introduced. Such a product would mean that the new logarithm is a linear combination (with rational coefficients) of the logarithms and arguments of the exponentials already introduced.
This theorem only applies to functions. The position for numbers defined by exponentials and logarithms is much less clear. It is conjectured (the Ax-Schanuel conjecture [Ax, 1971]) that this theorem continues to hold, but we have no idea how to prove it. We do not even know whether e (= exp 1) and π (= (1/i) log(-1)) are independent or not.

2.8 REPRESENTATIONS OF MATRICES

There are two styles of matrix calculations, which can be called implicit and explicit. An example of implicit calculation is provided by the mathematical expression "Let A and B be two square matrices of the same dimension". Here we have not defined exactly the dimension of the matrices and we have said nothing of their elements. On the other hand, in explicit calculation, we define exactly all the elements of the matrix, which may be not only numbers, but also polynomials, rational functions, or any symbolic objects. In fact, in implicit calculation A and B are variables. But the polynomial or rational calculation we have already seen does not apply to such variables, for they are non-commutative variables: for example, AB may be different from BA. Several Computer Algebra systems let the user work with such variables. In MACSYMA, for example, we saw in the examples in Chapter 1 that there are two different operators for multiplication: A * B for commutative multiplication and A . B for non-commutative multiplication. Thus A * B - B * A becomes 0, but A . B - B . A remains unchanged. In REDUCE (see also the description of REDUCE's operators in the Annex), the possibilities are similar, but the manner of expressing them is different. From the moment we declare NONCOM M, the kernels which begin with M (such as M(1) or M(A,B)) will be non-commutative kernels, and will not commute with other non-commutative kernels, but will commute with ordinary (commutative) kernels. We can use rewrite rules to say that some non-commutative objects satisfy certain constraints. In the remainder of this section, we shall deal with explicit matrix calculation. Here, as with polynomials, there is the distinction dense/sparse.

2.8.1 Dense matrices

The obvious way of representing an explicit matrix is as an array of the elements of the matrix (if the implementation language does not have arrays, which is the case with several dialects of LISP, we can use vectors of vectors, or even lists of lists, but a representation by lists is less efficient). If the signs < … > signify a vector, then the matrix

    ( a  b  c )
    ( d  e  f )
    ( g  h  i )

will be represented by < < a b c > < d e f > < g h i > >. For dense matrices, this method works quite well, and most Computer Algebra systems use it. The algorithms for the addition and multiplication of these matrices are the same as for numerical matrices, and imply that one can add two matrices of size n in O(n^2) operations, and multiply them in O(n^3) operations. As with numerical matrices, there are "non-obvious" algorithms for multiplication, which are, asymptotically, more efficient than the usual algorithms, such as those of Strassen [1969] (which gives us an algorithm of

complexity O(n^{log2 7})), Winograd [1968] and Coppersmith and Winograd [1982]. Asymptotically, the fastest known methods are O(n^{2.376}) (see Coppersmith and Winograd [1990]). For numerical matrices, these methods are only quicker from n > 20 on, and are at most 18% quicker when n = 100 [Brent, 1970]. It is probable that the same conclusions hold to a large extent for the matrices of Computer Algebra. We do not know of any system which uses these "fast" methods. When it comes to inversion, and problems associated with it such as the solution of linear systems and finding the determinant, the algorithms of numerical calculation are not easy to apply, for the difficulties which arise are very different in Computer Algebra and numerical calculation. In the first place, Computer Algebra does not have any problem of numerical stability, and therefore every non-zero element is a good pivot for Gaussian elimination. In this respect, Computer Algebra is simpler than numerical calculation. On the other hand, there is a big problem with the growth of the data, whether they be intermediate or final. For example, if we take the generic matrix of size three, that is

al ulation. On the other hand, there is a big problem with the growth of the data, whether they be intermediate or nal. For example, if we take the generi matrix of size three, that is

0 

a

1 A

b

d

e

f

g

h

i

bdi

+ bf g + dh

;

its determinant is aei

af h

eg;



and its inverse is

    1/(aei - afh - bdi + bfg + cdh - ceg) ×
        ( ei - fh     -bi + ch     bf - ce  )
        ( -di + fg    ai - cg      -af + cd )
        ( dh - eg     -ah + bg     ae - bd  ).
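The six terms of this determinant (and the n! terms of the general case) come straight from the permutation expansion that the text later attributes to Cramer's method; a small stdlib Python check (names ours):

```python
from itertools import permutations

# Determinant of a generic n x n matrix by the permutation (Leibniz)
# expansion; entries are single-letter symbols, terms are sorted strings.
def generic_det_terms(entries):
    n = len(entries)
    terms = {}
    for perm in permutations(range(n)):
        inv = sum(1 for i in range(n) for j in range(i + 1, n)
                  if perm[i] > perm[j])          # parity of the permutation
        term = ''.join(sorted(entries[i][perm[i]] for i in range(n)))
        terms[term] = terms.get(term, 0) + (-1 if inv % 2 else 1)
    return terms

m3 = [list('abc'), list('def'), list('ghi')]
assert generic_det_terms(m3) == {'aei': 1, 'afh': -1, 'bdi': -1,
                                 'bfg': 1, 'cdh': 1, 'ceg': -1}
```

Running the same function on a 4 × 4 generic matrix yields the 24 terms quoted next.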

For the generic matrix of size four, the determinant is

    afkp - aflo - agjp + agln + ahjo - ahkn - bekp + belo + bgip - bglm
    - bhio + bhkm + cejp - celn - cfip + cflm + chin - chjm - dejo + dekn
    + dfio - dfkm - dgin + dgjm,

and the inverse is too large to be printed here. Therefore, in general, the data swell enormously if one inverts generic matrices, and the same is true for determinants of generic matrices, or for solutions of generic linear systems. Such results may appear as intermediate data in a calculation, the final result of which is small, for example in the calculation of the determinant, equal to 0, of a matrix of the form

    ( M  M )
    ( M  M ),

where M is a generic matrix.
The other big problem, especially for the calculation of determinants, is that of division. According to Cramer's rule, the determinant of a matrix is a sum (possibly with negations) of products of the elements of the matrix. Thus, if the elements belong to a ring (such as the integers or the polynomials), the determinant also belongs to it. But the elimination method requires several divisions. These divisions may not be possible (for example one cannot divide by 5 or 2 in the ring of integers modulo 10, but nevertheless the matrix

    ( 5  2 )
    ( 2  5 )

has a well defined determinant, that is, 1). Even if the divisions are possible, they imply that we have to calculate with fractions, which is very expensive because of the necessary g.c.d.s (which are often non-trivial). Bareiss [1968] has described a cunning variation of Gaussian elimination, in which each division in the ring has to give a result in the ring, and not a fraction*. This method (described in the next sub-section) is very often used in Computer Algebra, if the ring of the elements allows division (more precisely, if the ring is integral). Otherwise, there is a method found by Sasaki and Murao [1981, 1982] where one adds several new variables to the ring, and where one keeps only a few terms in these variables. Cramer's method, which writes the determinant of a matrix of size n as the sum of n! products of n elements of the matrix, is very inefficient
* The ideas behind this algorithm go back to Dodgson [1866], better known as "Lewis Carroll".



numerically: the number of operations is O(n · n!), instead of O(n^3) for Gaussian elimination. But, in Computer Algebra, the cost of an operation depends on the size of the data involved. In this expansion, each intermediate calculation is done on data which are (at least if there is no cancellation) smaller than the result. It seems that in the case of a matrix of polynomials in several variables (these polynomials have to be sparse, otherwise the cost would be enormous), Cramer's method is clearly much faster than any method based on Gaussian elimination. For the case of a matrix of integers or of polynomials in one variable, Bareiss' method seems to be the most efficient.

2.8.2 Bareiss' algorithm

In fact, Bareiss has produced a whole family of methods for elimination without fractions, that is, where all the divisions needed are exact. These methods answer the problem stated in the section "Representations of fractions", the problem of finding an algorithm which does not require calculations with fractions. The simplest method, called "one step", which has in fact been known since Jordan, is based on a generalisation of Sylvester's identity. Let a^(k)_{i,j} be the determinant

    | a_{1,1}  a_{1,2}  ...  a_{1,k}  a_{1,j} |
    | a_{2,1}  a_{2,2}  ...  a_{2,k}  a_{2,j} |
    |   ...      ...    ...    ...      ...   |
    | a_{k,1}  a_{k,2}  ...  a_{k,k}  a_{k,j} |
    | a_{i,1}  a_{i,2}  ...  a_{i,k}  a_{i,j} |

In particular, the determinant of the matrix of size n (whose elements are the a_{i,j}) is a^(n-1)_{n,n}. The basic identity is the following:

    a^(k)_{i,j} = ( 1 / a^(k-2)_{k-1,k-1} ) · | a^(k-1)_{k,k}  a^(k-1)_{k,j} |
                                              | a^(k-1)_{i,k}  a^(k-1)_{i,j} |

In other words, after an elimination, we can be certain of being able to divide by the pivot of the preceding elimination. These identities are demonstrated in the Appendix. To demonstrate Bareiss' method, let us consider a generic matrix of size three, that is:

    ( b_{1,1}  b_{1,2}  b_{1,3} )
    ( b_{2,1}  b_{2,2}  b_{2,3} )
    ( b_{3,1}  b_{3,2}  b_{3,3} ).



After elimination by the first row (without division), we have the matrix

    ( b_{1,1}   b_{1,2}                            b_{1,3}                          )
    (   0       b_{2,2}b_{1,1} - b_{2,1}b_{1,2}    b_{2,3}b_{1,1} - b_{2,1}b_{1,3}  )
    (   0       b_{3,2}b_{1,1} - b_{3,1}b_{1,2}    b_{3,3}b_{1,1} - b_{3,1}b_{1,3}  ).

A second elimination gives us the matrix

    ( b_{1,1}   b_{1,2}                            b_{1,3}                          )
    (   0       b_{2,2}b_{1,1} - b_{2,1}b_{1,2}    b_{2,3}b_{1,1} - b_{2,1}b_{1,3}  )
    (   0         0                                d                                ),

where

    d = b_{1,1} (b_{3,3}b_{2,2}b_{1,1} - b_{3,3}b_{2,1}b_{1,2} + b_{3,2}b_{2,1}b_{1,3}
                 - b_{3,2}b_{2,3}b_{1,1} + b_{3,1}b_{2,3}b_{1,2} - b_{3,1}b_{2,2}b_{1,3}),

and it is obvious that b_{1,1} divides all the elements of the third row. The general identity is

    a^(k)_{i,j} = ( 1 / (a^(l-1)_{l,l})^(k-l) ) · | a^(l)_{l+1,l+1}  ...  a^(l)_{l+1,k}  a^(l)_{l+1,j} |
                                                  |      ...         ...       ...           ...      |
                                                  | a^(l)_{k,l+1}    ...  a^(l)_{k,k}    a^(l)_{k,j}  |
                                                  | a^(l)_{i,l+1}    ...  a^(l)_{i,k}    a^(l)_{i,j}  |

of which the identity we have already quoted is the special case l = k - 1.
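The one-step method is short to implement; a Python sketch for integer matrices (implementation ours, following the identity above: each condensation step divides exactly by the previous pivot, so all intermediate values stay in the ring):

```python
def bareiss_det(m):
    """Determinant of an integer matrix by one-step fraction-free
    (Bareiss) elimination; every division performed is exact."""
    a = [row[:] for row in m]        # work on a copy
    n = len(a)
    prev_pivot = 1                   # pivot of the "previous" step, by convention
    sign = 1
    for k in range(n - 1):
        if a[k][k] == 0:             # need a non-zero pivot: swap rows
            for r in range(k + 1, n):
                if a[r][k] != 0:
                    a[k], a[r] = a[r], a[k]
                    sign = -sign
                    break
            else:
                return 0             # whole column is zero
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                # 2 x 2 condensation determinant, then exact division
                a[i][j] = (a[k][k] * a[i][j] - a[i][k] * a[k][j]) // prev_pivot
            a[i][k] = 0
        prev_pivot = a[k][k]
    return sign * a[n - 1][n - 1]

print(bareiss_det([[3, 1, 4], [1, 5, 9], [2, 6, 5]]))  # -90
```

Note that no `Fraction` objects ever appear: the growth of intermediate entries is controlled because each entry is itself a minor of the original matrix.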

2.8.3 Sparse matrices

When we are dealing with sparse matrices, Computer Algebra comes close to the methods of numerical calculation. We often use a representation where each row of the matrix is represented by a list of the non-zero elements of the row, each one stored with an indication of its column. We can also use a method of storing by columns, and there are several more complicated methods. In this case, addition is fairly simple, but multiplication is more difficult, for we want to traverse the matrix on the left row-by-row, but the matrix on the right column-by-column. For the determinant, the general idea is Cramer's expansion, but there are several tricks for choosing the best direction for the expansion, and for using results already calculated, instead of calculating them again. Smit [1981] shows some of these techniques. The inverse of a sparse matrix is usually dense, and therefore should not be calculated. Thus, there are three general ways of solving the system Ax = b of linear equations. In the first place, there is a formula (due apparently to Laplace), which expresses the elements x_i of the solution in terms of the b_j and of the A_{i,j}, which are the minors of A, that is the



determinants of the matrix obtained by striking out the i-th column and the j-th row of A. In fact

    x_i = Σ_j A_{i,j} b_j.

These minors can be calculated by Cramer's rule, and the chances of re-using intermediate results of a calculation in subsequent calculations are good. Smit [1981] describes several tricks which improve this algorithm. But one defect which must be noted is that memory has to be used to store the reusable results. A second way is to use Gaussian elimination (or one of the variants described for the case of a dense matrix). We can choose the rows (or columns) to be eliminated according to the number of non-zero elements they contain (and, among those which have the same number, one can try to minimise the creation of new elements, and to maximise the superimposition of non-zero elements in the addition of rows or columns). This "intelligent elimination" is fairly easy to write*, but, in general, the matrix becomes less and less sparse during these operations, and the calculating time remains at O(n^3). This method was used by Coppersmith and Davenport [1985] to solve a system of 1061 equations in 739 variables, and this swell did indeed appear. Very recently, several authors have adapted the iteration methods from numerical calculation, such as the method of Lanczos and the method of conjugate gradients, to Computer Algebra. These methods seem asymptotically more worth-while than Gaussian elimination, although, for a small problem, or even for the problem of Coppersmith and Davenport, they are less rapid. Coppersmith et al. [1986] discuss these methods.

2.9 REPRESENTATIONS OF SERIES

Computer Algebra is not limited to finite objects, such as polynomials. It is possible to deal with several types of infinite series. Obviously, the computer can deal with only a finite number of objects, that is, the first terms of the series.

2.9.1 Taylor series: simple method

These series are very useful for several applications, especially when it is a question of a non-linear problem, which becomes linear when we suppress some small quantities. Here one can hope that the solution can be represented by a Taylor series, and that a small number of terms suffices for the applications (e.g. numerical evaluation).
* The author has done it in less than a hundred lines in the language SCRATCHPAD-II [Jenks, 1984], now AXIOM.



Very often these series can be calculated by the method called successive approximation. Let us take for example the equation y^2 = 1 + ε, where y is an unknown and ε is small, and let us look for an expression of y as a Taylor series with respect to ε. Let us write y_n for the series up to the term c_n ε^n, with y_0 = 1 (or -1, but we shall expand the first solution). We can calculate y_{n+1}, starting from y_n, by the following method:

    1 + ε = y_{n+1}^2 + O(ε^{n+2})
          = (y_n + c_{n+1} ε^{n+1})^2 + O(ε^{n+2})
          = y_n^2 + 2 y_n c_{n+1} ε^{n+1} + O(ε^{n+2})
          = y_n^2 + 2 y_0 c_{n+1} ε^{n+1} + O(ε^{n+2}).

If d_{n+1} is the coefficient of ε^{n+1} in 1 + ε - y_n^2, this formula implies that c_{n+1} = d_{n+1}/2y_0. This gives a fairly simple REDUCE program for the evaluation of y, as far as ε^10 for example (using E for ε):

ARRAY TEMP(20);
Y:=1;
FOR N:=1:10 DO <<
   COEFF(1+E-Y**2, E, TEMP);
   Y:=Y+TEMP(N)*E**N/2 >>;

This program has the disadvantage that it explicitly calculates all the terms of y^2, even those which contribute nothing to the result. REDUCE has a mechanism for avoiding these calculations: the use of WEIGHT and WTLEVEL. The details are given in the annex on REDUCE, but here we shall give the algorithm rewritten to use this mechanism.

WEIGHT E=1;
Y:=1;
FOR N:=1:10 DO <<
   WTLEVEL N;
   COEFF(1+E-Y**2, E, TEMP);
   Y:=Y+TEMP(N)*E**N/2 >>;
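The same successive-approximation loop is easy to reproduce with exact rational arithmetic; a Python sketch (names ours):

```python
from fractions import Fraction

def sqrt_one_plus_eps(order):
    """Taylor coefficients of the solution of y^2 = 1 + e with y(0) = 1,
    by successive approximation: c_n = d_n / (2 y_0), where d_n is the
    coefficient of e^n in 1 + e - y_{n-1}^2."""
    c = [Fraction(1)]                       # y_0 = 1
    for n in range(1, order + 1):
        # coefficient of e^n in the square of the series found so far
        y2 = sum(c[i] * c[n - i]
                 for i in range(n + 1) if i < len(c) and n - i < len(c))
        d = (1 if n == 1 else 0) - y2
        c.append(d / 2)                     # divide by 2*y_0
    return c

coeffs = sqrt_one_plus_eps(4)
assert coeffs == [1, Fraction(1, 2), Fraction(-1, 8),
                  Fraction(1, 16), Fraction(-5, 128)]
```

The coefficients agree with the binomial series of the square root of 1 + ε: 1, 1/2, -1/8, 1/16, -5/128, …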

In this case, there are more direct methods, such as the binomial formula, or direct programming which only calculates the term in ε^n of y^2, but they are rather specialised. This method of successive approximation can be applied to many other problems; Fitch [1985] gives several examples. Series can be manipulated in the same way as the polynomials; in fact most Computer Algebra systems do not make any distinction between these two. In general, the precision (that is the highest power of ε, in the case we have just dealt with) of a result is the minimum of the precisions of the data. For example, in

    Σ_{i=0}^{n} a_i ε^i + Σ_{j=0}^{m} b_j ε^j = Σ_{i=0}^{min(m,n)} (a_i + b_i) ε^i,



the terms of the result with i > min(m, n) cannot be determined purely as a function of the data: we have to know more terms a_i or b_j than those given. In particular, if all the initial data have the same precision, the result has also the same precision; we do not get the accumulation of errors that we find in numerical calculation. Nevertheless, there is a possible loss of precision in some cases. For example, if we divide one series by another which does not begin with a term with exponent zero, such as the following series:

    ( Σ_{i=1}^{n} a_i ε^i ) / ( Σ_{j=1}^{m} b_j ε^j ) = Σ_{i=0}^{min(m-1,n-1)} c_i ε^i,

we find that there is less precision. (We note that c_0 = a_1/b_1; it is more complicated to calculate the other c_i, but the method of successive approximation is often used in this calculation.) This can also happen in the case of calculating a square root:

    √( Σ_{i=0}^{n} a_i ε^i ) = Σ_{j=0}^{n} b_j ε^j

if a_0 ≠ 0 (in this case, b_0 = √a_0, and the other b_i can be determined, starting out from this value, by the method of successive approximation). On the other hand, if a_0 = 0, the series has a very different form. If a_1 ≠ 0, the series cannot be written as a series in ε, but it needs terms in √ε; it is then a Puiseux series. If a_0 = a_1 = 0, but a_2 ≠ 0, then the series is still a Taylor series:

    √( Σ_{i=2}^{n} a_i ε^i ) = Σ_{j=1}^{n-1} b_j ε^j.

Here there is indeed a loss of precision, for the coefficient b_n is not determined by the given coefficients a_i, but requires a knowledge of a_{n+1} for its determination. We must take care to avoid these losses in precision, for simply using polynomial manipulation does not alert us to them. But the situation is much less complicated than in numerical calculation. There is no gradual loss of precision, such as that generated by numerical rounding. The circumstances which give rise to these losses are well defined, and one can program so that they are reported. If necessary, one can work to a higher precision and check that the results are the same, but this is a solution of last resort.
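These precision rules can be made explicit by carrying the truncation order alongside the coefficients; a Python sketch (conventions ours: a series is a pair of a coefficient dictionary and the index of the last trusted term):

```python
# A truncated series is (coeffs, order): coefficients of e^i for i <= order.
def add(s, t):
    (a, n), (b, m) = s, t
    order = min(n, m)                       # precision of a sum
    c = {i: a.get(i, 0) + b.get(i, 0) for i in range(order + 1)}
    return c, order

def mul(s, t):
    (a, n), (b, m) = s, t
    # a factor whose lowest term is e^l shifts the other operand's
    # uncertainty up by l, so the product may gain precision
    la = min((i for i, v in a.items() if v), default=0)
    lb = min((i for i, v in b.items() if v), default=0)
    order = min(n + lb, m + la)
    c = {}
    for i, u in a.items():
        for j, v in b.items():
            if i + j <= order:
                c[i + j] = c.get(i + j, 0) + u * v
    return c, order

s = ({0: 1, 1: 1, 2: 1}, 2)                 # 1 + e + e^2 + O(e^3)
t = ({1: 1}, 3)                             # e + O(e^4)
print(add(s, t)[1], mul(s, t)[1])           # 2 3
```

Adding a series known to order 2 and one known to order 3 yields order min(2, 3) = 2, while multiplying by a series whose lowest term is ε gains an order; division by such a series loses one, matching the loss just described.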



2.9.2 Taylor series: Norman's method

There are applications in which the losses of precision described in the last paragraph occur very frequently. Moreover, the simple method requires all the terms of the intermediate results to be calculated before the first term of the final result appears. Norman [1975] therefore suggested the following method: instead of calculating all the terms c_0, … , c_n of a series before using them, we state the general rule which expresses c_i in terms of these data, and leave to the system the task of calculating each c_i at the moment when it is needed. For normal operations, these general rules are not very complicated, as the following table shows (an upper case letter indicates a series, and the corresponding lower case letter indicates the coefficients of the series):

    C = A + B      c_i = a_i + b_i
    C = A - B      c_i = a_i - b_i
    C = A · B      c_i = Σ_{j=0}^{i} a_j b_{i-j}
    C = A / B      c_i = ( a_i - Σ_{j=0}^{i-1} c_j b_{i-j} ) / b_0        (D)

(The last equation only holds in the case b_0 ≠ 0; the general equation is a bit more complicated.) Norman has also shown that any function defined by a linear differential equation gives rise to a similar equation for the coefficients of the Taylor series. These rules have been implemented by Norman [1975] in SCRATCHPAD-I [Griesmer et al., 1975], a system which automatically expands the values given by these rules. It is enough to ask for the value of c_5, for example, for all the necessary calculations to be done. Moreover, the system stores the values already calculated, instead of calculating them again. This last point is very important for the efficiency of this method. Let us take, for example, the case of a division (equation (D) above). The calculation of each c_i requires i additions (or subtractions), i multiplications and one division, and this gives us (n + 1)^2 operations for calculating c_0, … , c_n. But, if we suppose that the c_i are not stored, the cost is very different. The calculation of c_0 requires a division. The calculation of c_1 requires a multiplication, a subtraction and a division, but also involves the calculation of c_0, which means the cost of one addition (or subtraction), one multiplication and two divisions, which we call [A = 1, M = 1, D = 2]. The calculation of c_2 needs two additions/subtractions, two multiplications and one division, plus the calculation of c_0 and c_1, which costs [A = 3, M = 3, D = 4], a cost which is already higher than the storage method, for c_0 has



been calculated twice. The calculation of c_3 requires [A = 3, M = 3, D = 1] plus the calculation of c_0, c_1, c_2, which costs [A = 7, M = 7, D = 8]. Similarly, the calculating cost for c_4 is [A = 15, M = 15, D = 16], and the general formula for c_n is [A = M = 2^n - 1, D = 2^n]. The situation would have been much worse if the data a_i and b_i had required similar calculations before being used. But such a system of recursive calculation can be implemented in other Computer Algebra systems: Davenport [1981] has constructed a sub-system in REDUCE which expands Puiseux series (that is, series with fractional exponents) of algebraic functions. The internal representation of a series is a list of the coefficients which have already been calculated, and to this is added the general rule for calculating other coefficients. For example, if T1 and T2 are two of these representations, the representation corresponding to T1 × T2, with two coefficients c0 and c1 already calculated, would be*:

(((0 . c0) (1 . c1)) TIMES T1 T2)

and the command to calculate c2 will change this structure into

(((0 . c0) (1 . c1) (2 . c2)) TIMES T1 T2),

with perhaps some expansion of T1 and T2 if other terms of these series have been called for. This idea of working with an infinite object of which only a finite part is expanded is quite close to the idea of lazy evaluation which is used in computer science. It is very easy (perhaps one hundred lines [Ehrhard, 1986]) to implement Taylor series in this way in lazy evaluation languages, such as "Lazy ML" [Mauny, 1985]. AXIOM has adopted this philosophy for the majority of its infinite objects, not just for series.

2.9.3 Other series

There are several kinds of series besides Taylor series (and its variants such as the Laurent or Puiseux series). A family of series which is very useful in several areas is that of Fourier series, that is

    f = a_0 + Σ_{i=1}^{n} (a_i cos it + b_i sin it).

The simple function sin t (or cos t) represents the solution of y'' + y = 0, and several perturbations of this equation can be represented by Fourier series, which are often calculated by the method of successive approximation. Fitch [1985] gives some examples.
* For technical reasons, Davenport uses special symbols, such as TAYLORTIMES instead of TIMES.



For Computer Algebra systems based on polynomial calculation, it is more difficult to treat this series than it is to treat Taylor series. The reason is that the product of two base terms is no longer a base term: ε^i · ε^j = ε^{i+j}, but cos it · cos jt ≠ cos(i + j)t. Of course, it is possible to re-express this product in terms of base functions, by the rewrite rules given in the section "Representations of transcendentals", but (for the reasons given in that section) this may become quite costly. If we have to treat large series of this kind, it is more efficient to use a representation of these series in which the multiplication can be done directly, for example a vector of the coefficients of the series. There are systems in which a representation of this kind is used automatically for Fourier series; CAMAL [Fitch, 1974] is one example of this. The reader may notice that the relation between truncation and multiplication of these series is not as clear as it was for Taylor series. If we use the notation ⌊…⌋_n to mean that an expression has been truncated after the term of index n (for example, ε^n or cos nt), we see that

    ⌊ ⌊Σ_{i=0}^∞ a_i x^i⌋_n × ⌊Σ_{i=0}^∞ b_i x^i⌋_n ⌋_n = ⌊ (Σ_{i=0}^∞ a_i x^i) × (Σ_{i=0}^∞ b_i x^i) ⌋_n,        (T)

but that

    ⌊ ⌊Σ_{i=0}^∞ a_i cos it⌋_n × ⌊Σ_{i=0}^∞ b_i cos it⌋_n ⌋_n ≠ ⌊ (Σ_{i=0}^∞ a_i cos it) × (Σ_{i=0}^∞ b_i cos it) ⌋_n.        (F)

For this reason, the coefficients of our Fourier series must tend to zero in a controlled fashion, for example a_i = O(ε^i) where ε is a quantity which is considered small. In this case, the equation (F) becomes a true equality. The reader can easily find other series, such as Poisson series, which can be treated in the same way. Section 5.3 illustrates some of these questions at greater length.
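The contrast between (T) and (F) is easy to reproduce in a few lines of Python (an illustrative sketch, not code from any system mentioned in the text). Taylor series are dicts mapping degree to coefficient; cosine series are dicts mapping harmonic to coefficient, multiplied with the product-to-sum rule cos it × cos jt = (cos(i+j)t + cos|i−j|t)/2.

```python
from itertools import product

def taylor_mul(f, g):
    """Product of two Taylor (power) series given as {degree: coeff} dicts."""
    h = {}
    for (i, a), (j, b) in product(f.items(), g.items()):
        h[i + j] = h.get(i + j, 0) + a * b
    return {k: c for k, c in h.items() if c != 0}

def fourier_mul(f, g):
    """Product of two cosine series {harmonic: coeff}, using
    cos(it) * cos(jt) = (cos((i+j)t) + cos(|i-j|t)) / 2."""
    h = {}
    for (i, a), (j, b) in product(f.items(), g.items()):
        for k in (i + j, abs(i - j)):
            h[k] = h.get(k, 0) + a * b / 2
    return {k: c for k, c in h.items() if c != 0}

def trunc(f, n):
    """Discard every term of index greater than n."""
    return {k: c for k, c in f.items() if k <= n}

# Taylor: truncating the operands first loses nothing (equation (T)).
f = {3: 1.0}                                # x^3
assert trunc(taylor_mul(f, f), 2) == trunc(taylor_mul(trunc(f, 2), trunc(f, 2)), 2)

# Fourier: high harmonics feed back into low ones, so (F) is an inequality.
g = {3: 1.0}                                # cos 3t
full = trunc(fourier_mul(g, g), 2)          # cos^2(3t) = 1/2 + cos(6t)/2, so {0: 0.5}
pre = trunc(fourier_mul(trunc(g, 2), trunc(g, 2)), 2)   # cos 3t was discarded: {}
assert full != pre
```

The failing case is exactly the one the text describes: the harmonics i, j > n of the two factors produce a harmonic |i − j| ≤ n that truncating the operands has already thrown away.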

3. Polynomial simplification

3.1 SIMPLIFICATION OF POLYNOMIAL EQUATIONS

Very often, when studying some situation, we find that it is defined by a system of polynomial equations. For example, a position of the mechanical structure made up of two segments, of lengths 1 and 2 respectively, and joined at one point, is defined by nine variables (three points, each with three co-ordinates) subject to the relations:

    (x1 − x2)² + (y1 − y2)² + (z1 − z2)² = 1,
    (x1 − x3)² + (y1 − y3)² + (z1 − z3)² = 4;        (1)

or perhaps to the relations:

    (x1 − x2)² + (y1 − y2)² + (z1 − z2)² = 1,
    (2x1 − x2 − x3)(x3 − x2) + (2y1 − y2 − y3)(y3 − y2) + (2z1 − z2 − z3)(z3 − z2) = −3;        (1')

(where we have replaced the second equation by the difference between the two equations); or likewise to the relations:

    (2x1 − x2 − x3)(x3 − x2) + (2y1 − y2 − y3)(y3 − y2) + (2z1 − z2 − z3)(z3 − z2) = −3,
    (x1 − x3)² + (y1 − y3)² + (z1 − z3)² = 4;        (1'')

or many other variants. The fundamental question of this section can be expressed thus: which finite list of relations must we use? Every calculation about the movement of this object must take these relations into account, or, more formally, take place in the ideal generated by these generators.

Definition. The ideal* generated by a family of generators consists of the set of linear combinations of these generators, with polynomial coefficients.

Definition. Two polynomials f and g are equivalent with respect to an ideal if their difference belongs to the ideal.

If we regard the generators of the ideal as polynomials with value zero, this definition states that the polynomials do not differ. Therefore, the questions for this section are:
(1) How do we define an ideal?
(2) How do we decide whether two polynomials are equivalent with respect to a given ideal?

3.1.1 Reductions of polynomials

Obviously, there are several systems of generators for one ideal. We can always add any linear combination of the generators, or suppress one of them if it is a linear combination of the others. Is there a system of generators which is simple? Naturally, this question requires us to be precise about the idea of "simple". That depends on the order over the monomials of our polynomials (see the section "Polynomials in several variables"). Let us consider polynomials in the variables x1, x2, ..., xn, the coefficients of which belong to a field k (that is, we can add, multiply and divide them, etc.). Let < be an order over the monomials which satisfies the following two conditions:
(a) If a < b, then for every monomial c, we have ac < bc.
(b) For all monomials a and b with b ≠ 1, we have a < ab.
The three orders "lexicographic", "total degree then lexicographic" and "total degree then inverse lexicographic" satisfy these criteria. Let us suppose that every (non-zero) polynomial is written in decreasing order (according to <), that is, p = a1 X1 + a2 X2 + ... + ak Xk, where the ai are non-zero coefficients and the Xi are monomials with Xi > Xi+1 for every i. We call X1 the principal monomial and a1 X1 the principal term of the polynomial. Let G be a finite set of polynomials, and < a fixed order, satisfying the above conditions.

Definition. A polynomial f is reduced with respect to G if no principal monomial of an element of G divides the principal monomial of f.

In other words, no combination f − h gi of f and an element gi of G can have a principal monomial less (for the order <) than that of f.

Definition. G is a standard basis (or Gröbner basis) of the ideal it generates if every polynomial of this ideal reduces to zero with respect to G.

For example, consider the ideal generated by

    g1 = x³yz − xz²,
    g2 = xy²z − xyz,
    g3 = x²y² − z.

A standard basis of this ideal (with respect to the order "total degree, then lexicographic", with x > y > z) is formed from g2 and g3 and from the following three polynomials:

    g4 = x²yz − z²,
    g5 = yz² − z²,
    g6 = x²z² − z³.
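Reduction is short to program once monomials are represented as exponent tuples (a sketch in our own conventions, not code from any of the systems named in this book). With the basis {g2, ..., g6} above, the polynomial x²y²z reduces to z²:

```python
from fractions import Fraction

# Monomials are exponent tuples (i, j, k) for x^i y^j z^k;
# polynomials are dicts {monomial: coefficient}.

def leading_monomial(p):
    # principal monomial for "total degree, then lexicographic"
    return max(p, key=lambda m: (sum(m), m))

def divides(m, n):
    return all(mi <= ni for mi, ni in zip(m, n))

def reduce_once(f, g):
    """One reduction step: cancel f's principal monomial against g's.
    Assumes the principal monomial of g divides that of f."""
    mf, mg = leading_monomial(f), leading_monomial(g)
    quot = tuple(a - b for a, b in zip(mf, mg))
    coeff = f[mf] / g[mg]
    h = dict(f)
    for m, c in g.items():
        mm = tuple(a + b for a, b in zip(quot, m))
        h[mm] = h.get(mm, Fraction(0)) - coeff * c
        if h[mm] == 0:
            del h[mm]
    return h

def reduce_full(f, G):
    """Reduce f with respect to G until no principal monomial of an
    element of G divides the principal monomial of f (or f is zero)."""
    while f:
        lm = leading_monomial(f)
        g = next((g for g in G if divides(leading_monomial(g), lm)), None)
        if g is None:
            return f
        f = reduce_once(f, g)
    return f

one = Fraction(1)
G = [
    {(1, 2, 1): one, (1, 1, 1): -one},   # g2 = x y^2 z - x y z
    {(2, 2, 0): one, (0, 0, 1): -one},   # g3 = x^2 y^2 - z
    {(2, 1, 1): one, (0, 0, 2): -one},   # g4 = x^2 y z - z^2
    {(0, 1, 2): one, (0, 0, 2): -one},   # g5 = y z^2 - z^2
    {(2, 0, 2): one, (0, 0, 3): -one},   # g6 = x^2 z^2 - z^3
]
# x^2 y^2 z reduces to z^2, which is reduced with respect to G:
assert reduce_full({(2, 2, 1): one}, G) == {(0, 0, 2): one}
```

Whichever applicable gi is chosen at each step, the final reduced form here is the same; in general that uniqueness is exactly what a standard basis guarantees.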

(A standard basis may very well contain more polynomials than the initial basis; we shall return to this remark at the end of the next section.) It is now obvious that there are two possibilities: either z = 0, or z ≠ 0. If z = 0, these polynomials reduce to x²y², therefore x = 0 or y = 0. If z ≠ 0, then these polynomials reduce to y = 1 and x² = z. Therefore, the set of common zeros of G consists of two straight lines (x = z = 0 and y = z = 0) and a parabola (y = 1, x² = z). We now state some theorems on standard bases. We shall not prove them: the reader who is interested in the proofs should consult the papers by Buchberger [1970, 1976a, b, 1979, 1981, 1985].

Theorem 1. Every ideal has a standard basis with respect to any order (which satisfies the conditions (a) and (b) above).

Theorem 2. Two ideals are equal if and only if they have the same reduced standard bases (with respect to the same order).

By reduced basis, we mean a basis, each polynomial of which is completely reduced with respect to all the others. This restriction to reduced bases is necessary to eliminate trivialities such as {x − 1, (x − 1)²}, which is a different basis from {x − 1}, but only because it is not reduced. In fact, this theorem gives a canonical representation for the ideals (as soon as we have fixed an order and a canonical representation for the coefficients).

Theorem 3. A system of polynomial equations is inconsistent (it can never be satisfied, even if we add algebraic extensions, for example over the complex numbers C) if and only if the corresponding standard basis (with respect to any order satisfying the conditions (a) and (b) above) contains a constant.

3.1.3 Solution of a system of polynomials

Theorem 4. A system of polynomial equations has a finite number* of solutions over C if and only if each variable appears alone (such as z^n) in one of the principal terms of the corresponding standard basis (with respect to any order satisfying the conditions (a) and (b) above).

If this basis is computed with respect to a lexicographic order, we can determine all the solutions by the following method. Lexicographic orders are not always the easiest to compute with, but Faugère et al. [1989] have shown that it is comparatively inexpensive to transform any zero-dimensional basis into a lexicographic one. Let us suppose that the variables are x1, x2, ..., xn, with x1 > x2 > ... > xn. The variable xn appears alone in the principal term of a generator of the standard basis. But all the other terms of this generator are less (in the sense of the lexicographic order) than this principal term, and therefore cannot contain any of the variables x1, ..., x_{n-1}: this generator is a polynomial in xn alone, whose real roots we can isolate and substitute into the other generators.

When the number of solutions is infinite, the principal terms alone do not determine the dimension of the set of solutions. Consider, for example, the two ideals (with x > y > z)

    I1 = ⟨x² − 1, (x − 1)y, (x − 1)z, yz⟩ and I2 = ⟨x² − 1, (x − 1)y, (x − 1)z⟩.

In both, x appears by itself, but neither y nor z does. Now I1 has dimension one, for its solutions are the two straight lines x = 1, z = 0 and x = 1, y = 0 (together with the isolated point x = −1, y = z = 0), whereas I2 has dimension 2, for its solutions are the plane x = 1 and the isolated point x = −1, y = z = 0.

3.1.4 Buchberger's algorithm

Theorem 1 of the last section but one tells us that every ideal has a standard basis, but how do we calculate it? Similarly, how can we decide whether a basis is standard or not? In this section we shall explain Buchberger's algorithm [1970], with which we can solve these problems. We suppose that we have chosen once and for all an order on the monomials, which satisfies conditions (a) and (b) of the section "Reductions of polynomials".

Definition. Let f and g be two non-zero polynomials, with principal terms fp and gp. Let h be the l.c.m. of fp and gp. The S-polynomial of f and g, S(f, g), is defined by

    S(f, g) = (h/fp) f − (h/gp) g.

The l.c.m. of two terms or monomials is simply the product of all the variables, each to a power which is the maximum of its powers in the two monomials. h/gp and h/fp are monomials, therefore S(f, g) is a linear combination with polynomial coefficients of f and g, and belongs to the ideal generated by f and g. Moreover, the principal terms of the two components of S(f, g) are equal (to h), and therefore cancel each other. We note too that S(f, f) = 0 and that S(g, f) = −S(f, g).
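The definition translates directly into a few lines of Python (an illustrative sketch; the dict-of-exponent-tuples representation is our own convention). Applied to g2 = xy²z − xyz and g3 = x²y² − z from the running example, it reproduces S(g2, g3) = −x²yz + z²:

```python
from fractions import Fraction

# A polynomial is a dict {exponent tuple: coefficient}; the order is
# "total degree, then lexicographic".

def lm(p):
    """Principal monomial."""
    return max(p, key=lambda m: (sum(m), m))

def sub(p, q):
    r = dict(p)
    for m, c in q.items():
        r[m] = r.get(m, Fraction(0)) - c
        if r[m] == 0:
            del r[m]
    return r

def term_mul(m, c, p):
    """Multiply p by the term c * x^m."""
    return {tuple(a + b for a, b in zip(m, mo)): c * co for mo, co in p.items()}

def spoly(f, g):
    """S(f, g) = (h/fp) f - (h/gp) g, where h = lcm of the principal monomials."""
    mf, mg = lm(f), lm(g)
    h = tuple(max(a, b) for a, b in zip(mf, mg))
    sf = term_mul(tuple(a - b for a, b in zip(h, mf)), 1 / f[mf], f)
    sg = term_mul(tuple(a - b for a, b in zip(h, mg)), 1 / g[mg], g)
    return sub(sf, sg)

one = Fraction(1)
g2 = {(1, 2, 1): one, (1, 1, 1): -one}   # x y^2 z - x y z
g3 = {(2, 2, 0): one, (0, 0, 1): -one}   # x^2 y^2 - z
# S(g2, g3) = x g2 - z g3 = -x^2 y z + z^2:
assert spoly(g2, g3) == {(2, 1, 1): -one, (0, 0, 2): one}
```

As the text notes, the two principal terms (both x²y²z here) cancel, which is what makes the S-polynomial the right object to test.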

Theorem 5. A basis G is a standard basis if, and only if, for every pair of polynomials f and g of G, S(f, g) reduces to zero with respect to G.

This theorem gives us a criterion for deciding whether a basis is standard or not: it is enough to calculate all the S-polynomials and to check that they do reduce to zero. But if we do not have a standard basis, it is precisely because one of these S-polynomials (say S(f, g)) does not reduce to zero. Then, as its reduction is a linear combination of the elements of G, we can add it to G without changing the ideal generated. After this addition, S(f, g) reduces to zero, but there are new S-polynomials to be considered. It is a remarkable fact, which we again owe to Buchberger [1970], that this process always comes to an end (and therefore gives a standard basis of the ideal). A proof of this fact is given in the Appendix, along with a formal presentation of the algorithm. Let us apply this algorithm to the example {g1, g2, g3} of the previous section. The S-polynomials to be considered are S(g1, g2), S(g1, g3) and

S(g2, g3). The principal terms of g2 = xy²z − xyz and g3 = x²y² − z are xy²z and x²y², whose l.c.m. is x²y²z. Therefore

    S(g2, g3) = x g2 − z g3 = (x²y²z − x²yz) − (x²y²z − z²) = −x²yz + z².

This polynomial is non-zero and reduced with respect to G, and therefore G is not a standard basis. Therefore we can add this polynomial (or, to make the calculations more readable, its negative) to G; call it g4. Now G consists of

    g1 = x³yz − xz²,
    g2 = xy²z − xyz,
    g3 = x²y² − z,
    g4 = x²yz − z²,

and the S-polynomials to be considered are S(g1, g2), S(g1, g3), S(g1, g4), S(g2, g4) and S(g3, g4). Fortunately, we can make a simplification, by observing that g1 = x g4, and therefore the ideal generated by G does not change if we suppress g1. This simplification leaves us with two S-polynomials to consider: S(g2, g4) and S(g3, g4).

    S(g2, g4) = x g2 − y g4 = −x²yz + yz²,

and this last polynomial can be reduced (by adding g4), which gives us yz² − z². As it is not zero, the basis is not standard, and we must enlarge G by adding this new generator, which we call g5. Now G consists of

    g2 = xy²z − xyz,
    g3 = x²y² − z,
    g4 = x²yz − z²,
    g5 = yz² − z²,

and the S-polynomials to be considered are S(g3, g4), S(g2, g5), S(g3, g5) and S(g4, g5).

    S(g3, g4) = z g3 − y g4 = −z² + yz²,

and this last one can be reduced to zero (by subtracting g5) (in fact, this reduction follows from Buchberger's third criterion, which we shall describe later).

    S(g2, g5) = z g2 − xy g5 = −xyz² + xyz² = 0.

    S(g4, g5) = z g4 − x² g5 = −z³ + x²z² = x²z² − z³,

where the last rewriting arranges the monomials in decreasing order (with respect to <).

In other words:

    p1 := p / gcd(p, p');
    for all i do p1 := p1 / gcd(p1, qi);

and keep only those roots of p where p1 changes sign in the isolating interval.

[3] Reduce the intervals so that each interval defining a root of p1 does not contain a root of one of the qi, i.e. so that the isolating intervals of p and of the qi are disjoint.
[4] For each root of p1, isolated between a and b, check that all the qi are positive. It is enough to check that qi(a) > 0, for we have ensured that qi has no root between a and b, and cannot therefore change sign.
This algorithm is purely rational and does not require any calculation with floating-point numbers. It requires an isolating algorithm, which we shall now describe.

3.2.1.1 Isolation of roots

In this section, we shall consider a polynomial p(x) = Σ_{i=0}^n a_i x^i, with integer coefficients. This latter limitation is not really a limitation, for the roots of a polynomial do not change if we clear denominators. Without loss of generality, we may suppose that p has no multiple factors (that is, that p and p' are relatively prime), for the factors do not change the roots, but only their multiplicities. This is a problem which has been studied by many famous mathematicians, for example, Descartes in the seventeenth century, Fourier, Sturm and Hermite in the nineteenth century, and by Computer Algebraists since the birth of the subject. Amongst recent papers, we mention those by Collins and Loos [1982] and by Davenport [1985b]: we give a summary of the latter (omitting the proofs).

Definition. The Sturm sequence associated with the polynomial p is a sequence of polynomials with rational coefficients p0(x), p1(x), ..., pk(x) (where k ≤ n and pk is constant), defined by the following equations:

    p0(x) = p(x),
    p1(x) = p'(x),
    pi(x) = −remainder(p_{i-2}(x), p_{i-1}(x)),        (1)

where "remainder" means the remainder from the division of one polynomial of Q[x] by another. In fact, we only need the sign of an evaluation of the elements of a Sturm sequence; so it is appropriate to clear its denominators, and to treat the sequence as a sequence of elements of Z[x]. This is very similar to the calculation of a g.c.d., and the elements of a Sturm sequence are (to within a sign, a subtlety which it is easy to overlook) the successive terms of the application of Euclid's algorithm to p and p'; and this is why we always end up with a constant, for p and p' were supposed relatively prime. Sturm sequences can be calculated by all the methods for calculating the g.c.d.:
• naively (but we strongly recommend that this is not used; see section 2.3.3 for an example of the growth of the integers involved);

• by the method of sub-resultants (see the same section), but we have to make certain that the factors suppressed in this method do not change the sign of the polynomial;
• by the modular method (described in the next chapter), but here there is a serious problem with the treatment of signs, and this is quite hard to solve.
Sturm sequences are interesting because of their relationship with the real roots of p, and this relationship is clarified in the following definition and theorem.

Definition. Let y be a real number which is not a root of p. The variation at the point y of the Sturm sequence associated with p, written V(y), is the number of variations of sign in the sequence p0(y), p1(y), ..., pk(y).

For example, if the values of the elements of the sequence are 1, −2, −1, 0 and 2, the variation is 2, for the sign changes between 1 and −2, and between −1 and 2 (the zeros are always ignored).

Sturm's Theorem. If a and b are two points, which are not real roots of p, such that a < b, then the number of real roots of p in the interval (a, b) is V(a) − V(b).

It is possible to take the values −∞ and ∞ for a and b, by taking for V(∞) the number of variations of sign between the leading coefficients of the elements of the Sturm sequence, and for V(−∞) the same number after changing the signs of the leading coefficients of the elements of odd degree. In particular, the number of real roots is a function of the signs of the leading coefficients. For this theorem to give us an algorithm for finding the real roots of a polynomial, we have to bound the roots of this polynomial, so that we can start our search from a finite interval. There are three such bounds, which we cite here.
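Sturm's theorem is short to implement exactly over the rationals (a sketch using Python's Fraction type; the function names are ours, and polynomials are coefficient lists, highest degree first). For x³ − 3x − 1, which has three real roots, V(−2) − V(2) = 3:

```python
from fractions import Fraction

def poly_rem(p, q):
    """Remainder of p divided by q."""
    p = list(p)
    while len(p) >= len(q):
        c = p[0] / q[0]
        for i in range(len(q)):
            p[i] -= c * q[i]
        p.pop(0)                      # the leading coefficient is now zero
    return p

def sturm_sequence(p):
    """p0 = p, p1 = p', p_i = -remainder(p_{i-2}, p_{i-1})."""
    n = len(p) - 1
    seq = [p, [c * (n - i) for i, c in enumerate(p[:-1])]]
    while len(seq[-1]) > 1:
        r = [-c for c in poly_rem(seq[-2], seq[-1])]
        while r and r[0] == 0:        # strip leading zeros of the remainder
            r.pop(0)
        seq.append(r)
    return seq

def value(p, x):
    v = Fraction(0)
    for c in p:
        v = v * x + c
    return v

def variations(seq, x):
    """V(x): sign variations in the sequence, zeros ignored."""
    signs = [v for v in (value(p, x) for p in seq) if v != 0]
    return sum(1 for a, b in zip(signs, signs[1:]) if (a > 0) != (b > 0))

# x^3 - 3x - 1 has three real roots, all of them in (-2, 2):
p = [Fraction(c) for c in (1, 0, -3, -1)]
s = sturm_sequence(p)
assert variations(s, Fraction(-2)) - variations(s, Fraction(2)) == 3
```

Every number manipulated is rational, so, as the text stresses, the count is exact: no floating-point arithmetic is involved.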

Proposition 1 [Cauchy, 1829; Mignotte, 1992, Theorem 4.3]. Let α be a root of p. Then

    |α| ≤ 1 + max( |a_{n-1}/a_n|, |a_{n-2}/a_n|, ..., |a_0/a_n| ).

This bound is invariant if we multiply the polynomial by a constant, but it behaves badly under the transformation x → x/2, which only changes the roots by a factor 2, but may change the bound by a factor 2^n. The next two bounds do not have this defect.

Proposition 2 [Cauchy, 1829; Mignotte, 1992, Theorem 4.3]. Let α be a root of p. Then

    |α| ≤ max( |n a_{n-1}/a_n|, |n a_{n-2}/a_n|^{1/2}, |n a_{n-3}/a_n|^{1/3}, ..., |n a_0/a_n|^{1/n} ).

Proposition 3 [Knuth, 1981]. Let α be a root of p. Then

    |α| ≤ 2 max( |a_{n-1}/a_n|, |a_{n-2}/a_n|^{1/2}, |a_{n-3}/a_n|^{1/3}, ..., |a_0/a_n|^{1/n} ).
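The bounds of Propositions 1 and 3 are one line each (an illustrative sketch; coefficient lists run from a_n down to a_0, and the function names are ours):

```python
from fractions import Fraction

def cauchy_bound(p):
    """Proposition 1: |root| <= 1 + max |a_i / a_n| (exact, as a Fraction)."""
    an = abs(p[0])
    return 1 + max(Fraction(abs(c), an) for c in p[1:])

def knuth_bound(p):
    """Proposition 3: |root| <= 2 max_i |a_{n-i} / a_n|^{1/i} (as a float)."""
    an = abs(p[0])
    return 2 * max((abs(c) / an) ** (1 / (i + 1)) for i, c in enumerate(p[1:]))

# x^2 - 3x + 2 = (x - 1)(x - 2): both bounds dominate the largest root, 2.
assert cauchy_bound([1, -3, 2]) == 4
assert knuth_bound([1, -3, 2]) >= 2
```

Note that the Cauchy bound can be kept as an exact rational, which matters for the isolating algorithm below; the fractional powers in Proposition 3 force an approximation (any rational number at least as large will do).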

Each of these propositions also gives us a bound for the minimal absolute value of a root of a polynomial (supposing that the constant coefficient is not zero): we replace x by 1/x in the polynomial, or, in other words, we look for the largest root of a_0 x^n + a_1 x^{n-1} + ... + a_{n-1} x + a_n, the reciprocal of which is the smallest root of a_n x^n + a_{n-1} x^{n-1} + ... + a_1 x + a_0. Each of these propositions can be used to find more accurate bounds on the largest or smallest root, by working harder with the coefficients [Davenport and Mignotte, 1990].

These bounds give us the isolating algorithm displayed below. Result is a list of intervals (that is, of pairs of rational numbers), each of which contains only one root. Work is a list of structures, containing two rational numbers defining an interval with several roots, and the values of the variation of the Sturm sequence at these two points. We suppose that Maximal_Bound and Minimal_Bound return rational numbers. In this case, all the numbers manipulated by this algorithm are rational, and the calculations can be carried out without any loss of accuracy.

Isolating algorithm for real roots

    S := {p0, ..., pk} := Sturm(p);
    N := Variations(S, −∞) − Variations(S, ∞);
    if N = 0 then return ∅;
    if N = 1 then return {[−∞, ∞]};
    M := Maximal_Bound(p);
    Result := ∅;
    Work := {[−M, M, Variations(S, −M), Variations(S, M)]};
    while Work ≠ ∅ do
        [a, b, Va, Vb] := element_of(Work);
        Work := Work \ {[a, b, Va, Vb]};
        M := (a + b)/2;
        if p(M) = 0 then
            Result := Result ∪ {[M, M]};
            ε := Minimal_Bound(subst(x = x + M, p)/x);
            V+ := Variation(S, M + ε);  V− := V+ + 1;
            if Va = V− + 1 then Result := Result ∪ {[a, M − ε]};
            if Va > V− + 1 then Work := Work ∪ {[a, M − ε, Va, V−]};
            if Vb = V+ − 1 then Result := Result ∪ {[M + ε, b]};
            if Vb < V+ − 1 then Work := Work ∪ {[M + ε, b, V+, Vb]};
        else
            V := Variation(S, M);
            if Va = V + 1 then Result := Result ∪ {[a, M]};
            if Va > V + 1 then Work := Work ∪ {[a, M, Va, V]};
            if Vb = V − 1 then Result := Result ∪ {[M, b]};
            if Vb < V − 1 then Work := Work ∪ {[M, b, V, Vb]};
    return Result;

We stress the reliability of this algorithm, because the roots of a polynomial, especially the real roots, are very unstable as a function of the coefficients. A very good example of this instability is given by Wilkinson [1959], who considers the polynomial

    W(x) = (x + 1)(x + 2) ... (x + 20) = x^20 + 210 x^19 + ... + 20!.

This polynomial looks completely normal: the roots are well separated from one another, etc. The leading coefficients of the elements of its Sturm sequence are all positive, and we deduce that V(∞) = 0, V(−∞) = 20, and that the polynomial has 20 real roots. Let us consider, as Wilkinson did, a very small perturbation of this polynomial, that is

    W̃(x) = W(x) + 2^{-23} x^19 = x^20 + (210 + 2^{-23}) x^19 + ... + 20!

(the number 2^{-23} is chosen to change the last bit out of 32 in the coefficient of x^19). It seems "obvious" that W̃ and W are so close that W̃ must also have 20 real roots, but that is completely false. In fact, W̃ has only ten real roots, and the imaginary parts of the other roots are fairly big: between 0.8 and 3. The signs of the leading coefficients of the Sturm sequence of W̃ become negative from the eighth polynomial on, and V(−∞) = 15, whereas V(∞) = 5.
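The bisection loop at the heart of the algorithm can be sketched compactly (an illustration only: it omits the infinite endpoints, the root-at-midpoint case and the Minimal_Bound refinement of the full algorithm, and its helper names are ours):

```python
from fractions import Fraction

def poly_rem(p, q):
    p = list(p)
    while len(p) >= len(q):
        c = p[0] / q[0]
        for i in range(len(q)):
            p[i] -= c * q[i]
        p.pop(0)
    return p

def sturm_sequence(p):
    n = len(p) - 1
    seq = [p, [c * (n - i) for i, c in enumerate(p[:-1])]]
    while len(seq[-1]) > 1:
        r = [-c for c in poly_rem(seq[-2], seq[-1])]
        while r and r[0] == 0:
            r.pop(0)
        seq.append(r)
    return seq

def variations(seq, x):
    signs = []
    for q in seq:
        v = Fraction(0)
        for c in q:
            v = v * x + c
        if v != 0:
            signs.append(v)
    return sum(1 for a, b in zip(signs, signs[1:]) if (a > 0) != (b > 0))

def isolate(p, a, b):
    """Intervals each containing exactly one real root of p, obtained by
    bisecting any interval whose variation difference exceeds one."""
    s = sturm_sequence(p)
    work, result = [(Fraction(a), Fraction(b))], []
    while work:
        a, b = work.pop()
        n = variations(s, a) - variations(s, b)
        if n == 1:
            result.append((a, b))
        elif n > 1:
            m = (a + b) / 2
            work += [(a, m), (m, b)]
    return sorted(result)

p = [Fraction(c) for c in (1, 0, -3, -1)]   # x^3 - 3x - 1
intervals = isolate(p, -2, 2)
assert len(intervals) == 3                  # one interval per real root
```

All endpoints stay rational, so the procedure inherits the reliability the text insists on; what it costs is the (possibly many) bisections needed to separate close roots.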


This reliability is all the more important because the interesting questions in mechanics, CAD or robotics are often about the existence or not of real roots, or the value of a parameter for which the roots of interest begin to exist. It looks as if Computer Algebra is the only tool capable of answering these questions. However, we must be aware that these calculations can be fairly expensive. For example, we have already seen that the resultants and the coefficients which appear in the Sturm sequence may be large. The fact that the roots of a polynomial may be very close to one another implies that many bisections may be needed before separating two roots, and that the numerators and denominators which occur in the rational numbers may become quite big. All this may considerably increase the running time for the algorithm in question.

Theorem [Davenport, 1985b]. The running time of this algorithm is bounded by O(n^6 (log n + log Σ a_i²)³).

This bound is somewhat pessimistic, and it seems that the average time is more like O(n^4) [Heindel, 1971]. Other methods, which may be more efficient, are described by Collins and Loos [1982].

3.2.1.2 Real algebraic numbers

The previous chapter dealt with the representation of algebraic numbers from a purely algebraic point of view, and we did not distinguish between the different roots of an irreducible polynomial. That point of view is appropriate for many applications, such as integration. But we now have available the tools necessary for dealing with algebraic numbers in a more numerical way.

Definition. A representation of a real algebraic number consists of: a square-free polynomial p with integral coefficients; and an isolating interval [a, b] of one of its roots.

We have not insisted on the polynomial being irreducible, for we have all the information needed to answer questions such as "Is this algebraic number a root of this other polynomial q?". In fact, the solution of this question is fairly simple: "yes" if and only if the g.c.d. of the two polynomials has a root in the interval [a, b], and one can test this by checking whether the g.c.d. changes sign between a and b (the g.c.d. cannot have multiple roots). As we have already said, by starting from such a representation, we can produce rational approximations (and therefore approximations in floating-point numbers) of arbitrary precision. There are other representations possible for real algebraic numbers: for example, a real algebraic number α is uniquely defined by a polynomial

p such that p(α) = 0 and by the signs of p'(α), p''(α), and so on. This representation is used by Coste and Roy [1988].
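The g.c.d. test described above can be sketched directly (illustrative code: the pair (p, [a, b]) follows the definition, the function names are ours, and polynomials are coefficient lists, highest degree first). For α = √2, represented by x² − 2 on [1, 2]:

```python
from fractions import Fraction

def poly_gcd(p, q):
    """Monic g.c.d. of two polynomials, by Euclid's algorithm over Q[x]."""
    p, q = list(map(Fraction, p)), list(map(Fraction, q))
    while q:
        while len(p) >= len(q):       # remainder of p by q
            c = p[0] / q[0]
            for i in range(len(q)):
                p[i] -= c * q[i]
            p.pop(0)
        while p and p[0] == 0:
            p.pop(0)
        p, q = q, p
    return [c / p[0] for c in p]

def value(p, x):
    v = Fraction(0)
    for c in p:
        v = v * x + c
    return v

def is_root(alpha, q):
    """alpha = (p, (a, b)): square-free p with a single root in [a, b].
    alpha is a root of q iff gcd(p, q) changes sign between a and b."""
    p, (a, b) = alpha
    g = poly_gcd(p, q)
    return value(g, Fraction(a)) * value(g, Fraction(b)) < 0

alpha = ([1, 0, -2], (1, 2))                 # sqrt(2)
assert is_root(alpha, [1, 0, 0, 0, -4])      # x^4 - 4 has sqrt(2) as a root
assert not is_root(alpha, [1, 0, -3])        # x^2 - 3 does not
```

The g.c.d. of a square-free polynomial is itself square-free, so, as the text remarks, a sign change at the endpoints is a complete test: no root can hide without one.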

3.2.2 The general case (some definitions)

Before we can deal with this case, we need several definitions which are, in some sense, generalisations of the ideas of "root" and of "interval".

Definition. A semi-algebraic component of R^n is a set of points satisfying a family of polynomial equalities and inequalities:

    { (x1, ..., xn) : p1(x1, ..., xn) = ... = pk(x1, ..., xn) = 0,
      q1(x1, ..., xn) > 0, ..., ql(x1, ..., xn) > 0 }.

This is, obviously, the natural generalisation of the situation in R¹, which we have just looked at. It is worth noting that this definition of "component" is an algebraic definition, not a topological one. For example, the polynomial

    p(x, y) = (x² + y²) ((x − 3)² + y² − 1) = 0

defines one semi-algebraic component, but two topological components of different dimensions: {p(x, y) = 0 ∩ x < 1} is the point (x = 0, y = 0), but {p(x, y) = 0 ∩ x > 1} is the circle of radius 1 about (3, 0), of dimension 1. In fact, it is more natural to look, not only at the components, but also at the objects constructed by set operations starting from components.

Definition. A semi-algebraic variety is either a semi-algebraic component, or one of the sets A ∪ B, A ∩ B and A \ B, where A and B are two semi-algebraic varieties.

This definition characterises the sets of R^n which can be described in terms of polynomial equalities and inequalities; for example

    { (x, y, z) : x² + y² + z² > 0 or (x ≠ 0 and y² − z ≤ 0) }

can be written as a semi-algebraic variety in the form

    A ∪ ((B ∪ C) ∩ (D ∪ E)),

where

    A = { (x, y, z) : x² + y² + z² > 0 },
    B = { (x, y, z) : x > 0 },
    C = { (x, y, z) : x < 0 },
    D = { (x, y, z) : y² − z < 0 },
    E = { (x, y, z) : y² − z = 0 }.


We can test whether a point (with rational or algebraic co-ordinates, in the sense defined in the previous section) belongs to such a variety, by checking whether the point belongs to each component, and by applying the Boolean laws corresponding to the construction of the variety. We can do this more economically: if the variety is of the form A ∪ B, there is no need to test the components of B if the point belongs to A. But if we start from such a description, it is very hard to understand the structure of a semi-algebraic variety, even if we can determine whether some points belong to the variety. Here are some questions which can be asked about a variety:
(1) Is it empty?
(2) What is its dimension?
(3) Is it connected?
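The membership test, with its short-circuiting of unions, can be sketched directly on the example above (components become Boolean predicates; this is an illustration for rational points, not a general implementation):

```python
# The components of the variety A U ((B U C) n (D U E)) from the example.
A = lambda x, y, z: x * x + y * y + z * z > 0
B = lambda x, y, z: x > 0
C = lambda x, y, z: x < 0
D = lambda x, y, z: y * y - z < 0
E = lambda x, y, z: y * y - z == 0

def in_variety(pt):
    # Python's "or" short-circuits: the components of the second operand
    # of the union are only tested when the point is not already in A.
    return A(*pt) or ((B(*pt) or C(*pt)) and (D(*pt) or E(*pt)))

assert in_variety((1, 0, 0))        # satisfies A
assert not in_variety((0, 0, 0))    # fails A, and fails x != 0
```

As the text warns, a test like this answers pointwise questions only; it gives no grip on emptiness, dimension or connectedness of the variety as a whole.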

3.2.2.1 Decomposition of R^n

In this section, we shall look at a method for decomposing R^n which allows us to analyse the structure of a semi-algebraic variety.

Definition. A decomposition of R^n is the representation of R^n as the union of a finite number of disjoint and connected semi-algebraic components.

By requiring the components to be disjoint and connected, a decomposition is already rather more manageable than an arbitrary semi-algebraic variety. It is often useful to know one point in each component of a decomposition.

Definition. A decomposition is pointed if, for every component, a point with rational or algebraic co-ordinates belonging to this component is associated with it.

The points given in the components form a set of representatives of R^n, with a point in each component. Before we can use such a set, we must know that it really is representative.

Definition. A decomposition of R^n is invariant for the signs of a family of polynomials ri(x1, ..., xn) if, over each component of the decomposition, each polynomial is:
• always positive, or
• always negative, or
• always zero.
For example, the decomposition of R¹ as

    (−∞, −2) ∪ {−2} ∪ (−2, 1) ∪ {1} ∪ (1, ∞)

(which can be written more formally as

    {x : −x − 2 > 0} ∪ {x : x + 2 = 0} ∪ {x : x + 2 > 0 and 1 − x > 0}
        ∪ {x : x − 1 = 0} ∪ {x : x − 1 > 0},

according to the formal definition of a decomposition) is invariant for the sign of the polynomial (x + 2)(x − 1).
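For a product of linear factors with known roots, such a pointed sign-invariant decomposition of R¹ is easy to construct (a sketch; we take the sorted roots as given rather than isolating them, and the sample points chosen are one possibility among many):

```python
from fractions import Fraction

def decompose_R1(roots):
    """Sample points, one per component of the decomposition of R^1
    induced by the given roots: alternating open intervals and points."""
    roots = sorted(Fraction(r) for r in roots)
    samples = [roots[0] - 1]                  # a point of (-inf, first root)
    for a, b in zip(roots, roots[1:]):
        samples.extend([a, (a + b) / 2])      # the root, then the gap after it
    samples.extend([roots[-1], roots[-1] + 1])
    return samples

def sign(v):
    return (v > 0) - (v < 0)

p = lambda x: (x + 2) * (x - 1)
# Components (-inf,-2), {-2}, (-2,1), {1}, (1,inf): the sign of p on each is
assert [sign(p(x)) for x in decompose_R1([-2, 1])] == [1, 0, -1, 0, 1]
```

The sample points are exactly the "marks" of a pointed decomposition: evaluating any polynomial of the family at them reveals its sign over the whole component.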

Proposition. Let V be a semi-algebraic variety, and D a decomposition of R^n invariant for the signs of any polynomials occurring in the definition of V (we shall abbreviate this to "invariant for the signs of V"). For every component C of D, either C ∩ V = ∅ or C ⊆ V.

In fact, a pointed decomposition which is invariant for the signs of V gives us a lot of information about the variety V. Starting from such a decomposition, we can test whether the variety is empty or not, by testing whether each of the points marking the decomposition belongs to the variety or not. If the variety contains a point of R^n, then the proposition implies that it contains the whole of the component it is in, and therefore also the point which marks it.

3.2.3 Cylindrical decompositions

Although the idea of decomposition is useful, it is not sufficiently constructive to enable us to compute with any decomposition. In fact, the previous section is not really part of Computer Algebra, because the ideas in it are not completely algorithmic. In this section we shall define a type of decomposition which is much more constructive, and we shall sketch an algorithm for calculating it. We suppose that we have chosen a system of co-ordinates x1, ..., xn of R^n.

Definition. A decomposition D of R^n, that is R^n = E1 ∪ ... ∪ EN, is cylindrical if n = 0 (the trivial case) or if:
(a) R^{n-1} has a cylindrical decomposition D', which we can write R^{n-1} = F1 ∪ ... ∪ Fm; and
(b) for each component Ei of D, there is a component Fj of D' such that Ei is written in one of the following forms:

    { (x1, ..., xn) : (x1, ..., x_{n-1}) ∈ Fj and xn < fk(x1, ..., x_{n-1}) }        (1)
    { (x1, ..., xn) : (x1, ..., x_{n-1}) ∈ Fj and xn = fk(x1, ..., x_{n-1}) }        (2)
    { (x1, ..., xn) : (x1, ..., x_{n-1}) ∈ Fj and fk(x1, ..., x_{n-1}) < xn < fk'(x1, ..., x_{n-1}) }        (3)
    { (x1, ..., xn) : (x1, ..., x_{n-1}) ∈ Fj and xn > fk(x1, ..., x_{n-1}) }        (4)

where the fk are the solutions of polynomial equations with rational coefficients; for example, "fk(x1, ..., x_{n-1}) is the third real root of the polynomial p(x1, ..., x_{n-1}, z), considered as a polynomial in z".

The idea behind this definition, introduced by Collins [1975], is quite simple: R^{n-1} is divided according to the decomposition D', and above each component Fj we consider the cylinder* of all the points (x1, ..., xn) with (x1, ..., x_{n-1}) belonging to Fj. We require this cylinder to be intersected by a finite number N of surfaces, which are single-valued with respect to (x1, ..., x_{n-1}), i.e. which can be written in the form xn = fk(x1, ..., x_{n-1}). Moreover, we require that these surfaces do not touch, and a fortiori that they do not cross. So it is possible to arrange the surfaces in increasing order of xn for a point (x1, ..., x_{n-1}) of a given Fj, and this order is the same for every point of Fj. Thus the cylinder is divided into N surfaces of type (2), of the same dimension as that of Fj; N − 1 "slices" between two surfaces, given by equations of type (3) and of dimension equal to that of Fj plus one; and two semi-infinite "slices", given by equations of type (1) and (4), likewise of dimension one plus that of Fj.

A cylindrical decomposition of R¹, invariant for the signs of a family of polynomials, is quite easy to calculate: we have to know about (in the sense of isolating them) all the roots of the polynomials, and the decomposition is made up of these roots and of the intervals between them.

Theorem [Collins, 1975]. There is an algorithm which calculates a cylindrical decomposition of R^n invariant for the signs of a family P of polynomials with integer coefficients. If the family P contains m polynomials, of degree less than or equal to d and with coefficients bounded by 2^H (length bounded by H), the time taken to finish this algorithm is bounded by

    (2d)^(2^(2n+8)) m^(2^(2n+6)) H³.        (5)

The principle behind this algorithm is quite simple: it is based on the following plan for n > 1, and we have already seen how to solve this problem in the case n = 1.
[1] Calculate a family Q of polynomials in the variables x1, ..., x_{n-1}, such that a decomposition of R^{n-1} invariant for their signs can serve as a basis for the decomposition of R^n invariant for the signs of P.
[2] Calculate, by recursion, a decomposition D' of R^{n-1} invariant for the signs of Q.
[3] For each component F of D', calculate the decomposition of the cylinder above F induced by the polynomials of P.

* More precisely, the generalised cylinder. If Fj is a disc, we have a true cylinder.

Stage [1] is the most difficult: we shall come back to its implementation later. Stage [2] is a recursion, and stage [3] is not very difficult. If Q is well chosen, each element of P (seen as a polynomial in xn) has a constant number of real roots over each component of D', and the surfaces defined by these roots do not intersect. The decomposition of each cylinder induced by these polynomials is therefore cylindrical, and we have found the desired cylindrical decomposition. It is clear that all the conceptual difficulty in this algorithm arises from the choice of Q. This choice requires a good knowledge of analysis, and of real geometry, which we cannot give here for the general case.

3.2.3.1 The case of R²

For the sake of simplicity, we shall restrict ourselves for the present to the case n = 2, writing x and y instead of x1 and x2. We begin with the family P, and we have to determine a family Q of polynomials in the variable x, such that a decomposition of R¹ invariant for the signs of Q can serve as the foundation for a cylindrical decomposition of R². We may suppose that the elements of P are square-free and relatively prime. What are the restrictions which the components Fj of this decomposition of R¹ have to satisfy?
(1) Over each component, each polynomial of P has a constant number of real roots.
(2) Over each component, the surfaces (in fact, in R² they are only curves) given by the roots of the elements of P do not intersect.
For the first restriction, there are two situations in which the number of real (and finite) roots of a polynomial pi(x, y) (considered as a polynomial in y) could change: a real finite root could become infinite, or it could become complex. For the second situation, we must note that the complex roots of a polynomial with real coefficients always come in pairs, and that therefore one real root cannot disappear on its own. There must be a multiple root, for a particular value x0 of x, for two roots of a polynomial equation pi(x, y) = 0 to be able to vanish. The critical values of x which have to appear in the decomposition of R¹ are therefore the values x0 of x for which one of the polynomials of P has a multiple root. If pi(y) has a multiple root, gcd(pi, dpi/dy) is not trivial (see section A.1 in the Appendix). In other words, for this value x0 of x, we have Res_y(pi(x0, y), dpi(x0, y)/dy) = 0 (see section A.4). But this resultant is a polynomial in x, written Disc_y(pi). The critical values are therefore the roots of Disc_y(pi), for each element pi of P.

We still have to deal with the possibility that a finite root becomes infinite when the value of x changes. If we write the equation p(y) = 0 in the form a_n(x) y^n + ... + a_0(x) = 0, where the ai are polynomials in x, and therefore always have finite values, it is obvious that a_n has to vanish to give a root

134 Polynomial simplification

of p(y) which tends to infinity. That can also be deduced from the bounds on the values of roots of polynomials given in the last sub-section.

There are two possibilities for the second restriction also. The two curves which intersect may be roots of the same element or of two different elements of P. In the first case, this element of P has a multiple root where two of its roots intersect; and the case of multiple roots has already been dealt with in the last paragraph. In the second case, the corresponding polynomials must, at the value x_0 of x where these two curves intersect, have a common root, the value of y at the point of intersection. In other words, if p_i and p_j are these polynomials, their g.c.d. is not trivial, and Res_y(p_i(x_0, y), p_j(x_0, y)) = 0. The hypothesis that the elements of P have no multiple factors implies that the polynomial Res_y(p_i, p_j) is not identically zero, and therefore x_0 is determined as a root of the polynomial Res_y(p_i, p_j). If we use the notation lc(p) to denote the leading coefficient of a polynomial p, we have proved the following result:

Proposition. If P = {p_1, ..., p_n} is a family of square-free polynomials relatively prime in pairs, it suffices to take for Q the following family:

{lc_y(p_i) : 1 ≤ i ≤ n} ∪ {Disc_y(p_i) : 1 ≤ i ≤ n} ∪ {Res_y(p_i, p_j) : 1 ≤ i < j ≤ n}.
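The family Q of the proposition can be produced mechanically by a computer algebra system. A sketch using sympy, on a small hypothetical family P (the circle and line below are our choice of example, not the text's):

```python
from sympy import symbols, LC, discriminant, resultant

x, y = symbols('x y')

# A hypothetical family P: square-free and relatively prime in pairs.
p1 = y**2 + x**2 - 1    # the unit circle
p2 = y - x              # a line through the origin

# Q = leading coefficients, discriminants and pairwise resultants, all in y.
Q = [LC(p1, y), LC(p2, y),
     discriminant(p1, y), discriminant(p2, y),
     resultant(p1, p2, y)]
print(Q)   # leading coefficients, 4 - 4*x**2, and 2*x**2 - 1
```

The real roots of Disc_y(p1) = 4 - 4x^2 (namely x = ±1) and of Res_y(p1, p2) = 2x^2 - 1 (namely x = ±1/√2) are exactly the critical values at which the decomposition of R^1 must break.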

The reader will notice a great increase in the size of the data involved in this reduction to R^1. If the family P consists of m polynomials of degree bounded by d, then Q consists of O(m^2) polynomials of degree bounded by O(d^2). Moreover, the coefficients of the elements of Q will be fairly large: they are bounded by (d + 1)^{2d} B^{2d}, where B is a bound for the coefficients of the elements of P.

3.2.3.2 The general case

The analysis we gave for the case n = 2 was rather special to that case. The main principles of the general case are the same, but there are many more possibilities to be considered. The reader should consult the articles of Collins [1975] or of Arnon et al. [1984], or the recent results of McCallum [1985a, b]. He has proved that normally* it is enough to take the resultants and the discriminants, as we did in the case of R^2, and also all the coefficients.

Theorem [McCallum, 1985a, b]. With the same hypotheses as Collins' theorem, it is possible to calculate a decomposition in time bounded by

(2d)^{n·2^{n+7}} m^{2^n} H^3.     (6)

* If the primitive part of each polynomial only cancels identically as a polynomial in x_n at a finite number of points of R^{n-1}.

Computer Algebra

135

This result has been slightly improved [Davenport, 1985b], to give

(2d)^{n·2^{n+5}} m^{2^n} H^3.     (7)

These growths all behave doubly exponentially with respect to n, and we may ask whether this is really necessary. After all, the problems do not seem to be very complicated. Unfortunately, this behaviour is intrinsic to the problem of calculating a cylindrical decomposition, because of the following result.

Theorem [Davenport and Heintz, 1988]. There are examples where the space needed to write a cylindrical decomposition, and a fortiori the calculating time, is doubly exponential in n.

More exactly, if n is of the form 10r + 2, there is a family of 12r linear equations and two equations of degree four, such that the family can be written in O(n) symbols, but such that a cylindrical decomposition of R^n invariant for its signs needs a space

2^{2^{2r+1}} = 2^{2^{(n+3)/5}}.     (8)

The equations of Davenport and Heintz are specialised, and the result only holds for one choice of co-ordinates, that is of the order of elimination of the variables. In other co-ordinates, the decomposition may be very much simpler. But this proves that the calculation can be very expensive, and the examples in the next section show that this high cost features largely in the applications.

3.2.4 Applications of cylindrical decomposition

There are as many applications as there are polynomial systems with real solutions: we can mention only two of these applications.

3.2.4.1 Quantifier elimination

We recall that a quantifier is one of the two symbols ∀ (for all) and ∃ (there exist). Quantifiers are used in the form ∀x p(x), which means "for all x, the proposition p(x) is true". All the quantified variables in this section are quantified over the real numbers; more formally, we are interested in the first order theory of R. We will use the familiar logical signs ∧ (and), ∨ (or, in the inclusive sense) and ¬ (logical inversion). This theory contains expressions of the following types, where the x_i are variables, the p_i are polynomials, and A and B are, recursively, expressions of this "theory":

p_i(x_i, x_j, ...) = 0;   p_i(x_i, x_j, ...) > 0;   A ∧ B;   A ∨ B;   ¬A;   ∃x_i A;   ∀x_i A.


For example, the expression

∀x  ax^4 + bx^3 + cx^2 + dx + e > 0     (9)

is an expression of this theory.

Definition. The problem of quantifier elimination is that of finding an algorithm which, given a logical expression containing quantifiers, finds an equivalent expression which does not contain any.

We can, for example, ask for an expression equivalent to (9), that is an expression in the variables a, b, c, d and e which is true if and only if the polynomial is always positive. Tarski [1951] proved that most problems in elementary algebra and in Euclidean geometry can be solved by eliminating quantifiers, and he gave an algorithm for doing it. But his algorithm was completely impractical. Collins [1975] has used cylindrical decomposition to eliminate quantifiers. The idea is quite simple, although the details are somewhat technical, chiefly because of the possibility of having several quantifiers referring to variables with the same name. So we shall give some examples, rather than a complete algorithm.

Let us consider for example in expression (9) the variables a, ..., e as co-ordinates x_1, ..., x_5 (their order is unimportant); the quantified variable x must be x_6 (here the order is important). Let us make a cylindrical decomposition D of R^6 invariant for the sign of the polynomial of (9). For each component F_i of the cylindrical decomposition of R^5 on which D is based, there is a finite number of components E_{i,j} of D, each with its typical point z_{i,j}. If the polynomial of (9) is positive at the typical point z_{i,j}, it is positive throughout the component E_{i,j}. If the polynomial is positive for all the typical points above F_i, it is positive in all the cylinder above F_i. It follows that the semi-algebraic variety of R^5 which contains exactly the solutions of (9) is

⋃ { F_i : ∀j  a_{i,j} x_{i,j}^4 + b_{i,j} x_{i,j}^3 + c_{i,j} x_{i,j}^2 + d_{i,j} x_{i,j} + e_{i,j} > 0 },

where we have written the typical point z_{i,j} as (a_{i,j}, b_{i,j}, c_{i,j}, d_{i,j}, e_{i,j}, x_{i,j}). This solution was given by Arnon [1985]. However, none of the above implies that this solution is the simplest solution, nor that this procedure is the best method for finding a solution. This problem was solved with pencil and paper by Mignotte [1986] for the polynomial x^4 + px^2 + qx + r, and by Lazard [1988] in the general case (except that he used MACSYMA to expand the necessary resultants). Both found solutions simpler than that obtained algorithmically.
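The flavour of quantifier elimination can be seen on an instance small enough to do by hand, the quadratic analogue of (9): ∀x x^2 + bx + c > 0 eliminates to the quantifier-free condition b^2 - 4c < 0. A sketch cross-checking this equivalence (the grid and the code are illustrative assumptions, not the cylindrical algorithm itself):

```python
from fractions import Fraction

def forall_x_positive(b, c):
    # For a monic quadratic the global minimum over R is attained at
    # x = -b/2, so deciding "for all x: x^2 + bx + c > 0" only needs
    # that one point, evaluated exactly in rational arithmetic.
    x = Fraction(-b, 2)
    return x * x + b * x + c > 0

def eliminated(b, c):
    # The quantifier-free equivalent: the discriminant is negative.
    return b * b - 4 * c < 0

# The two predicates agree on every integer pair (b, c) in a grid.
assert all(forall_x_positive(b, c) == eliminated(b, c)
           for b in range(-10, 11) for c in range(-10, 11))
```

The eliminated form mentions only the free variables b and c, which is exactly what the definition above demands.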


Another problem of this kind is the ellipse problem: what are the conditions on a, b, c and d for the ellipse

{ (x, y) : (x - c)^2 / a^2 + (y - d)^2 / b^2 ≤ 1 }

to be completely contained in the circle x^2 + y^2 = 1. Here there are two quantifiers, for the problem can be rewritten in the form

∀x ∀y ( ((x - c)^2 / a^2 + (y - d)^2 / b^2 ≤ 1) ⇒ (x^2 + y^2 ≤ 1) ).

This problem has still not been solved in a completely automatic way, although Arnon and Smith [1983] have solved the special case d = 0. Mignotte [1986] has also solved this case with pencil and paper, and Lazard [1988] has solved the general case, using MACSYMA to expand a polynomial T of degree 12 with 105 terms, which features in the solution. The solution he gives has the form of the condition T ≤ 0 conjoined with several further polynomial inequalities in a, b, c and d (among them conditions such as c^2 + (b + |d|)^2 ≤ 1, together with a case distinction between a > b and b > a), which, as we can see, is not very simple, especially because of the size of T. We can deduce from these two examples that this tool is very general, but that it seems rather expensive, at least with the existing implementations.

3.2.4.2 Robotics

One of the problems of robotics is the problem of robot motion planning, that is of deciding how the robot can move from one place to another without touching the walls, without damaging itself on other robots etc. There are, it is true, many other problems, but we shall consider only this one. For the sake of simplicity, we limit ourselves to the case of one robot, and this is difficult enough; the case of several robots is studied in Schwartz and Sharir [1983b]. We suppose that all the obstacles are fixed, and are defined by polynomial equations, and that the robot too is given by polynomial equations.

If we choose a certain number of points on the robot, we can fix exactly the position of each of its parts. For example, if the robot is rigid, it is sufficient to choose three points which are not collinear. If we need k points, the position of the robot is determined by a point in R^{3k}. But there are constraints between these points; for example, two points of the same rigid part of the robot are always at the same distance from one another. Thus our points in R^{3k} have to satisfy a system of polynomial equations (and, possibly, inequalities). The obstacles define several polynomial inequalities, which have to be satisfied for the position of the robot to be "legitimate", for example, that it does not cross a wall. The laws of physics may introduce several inequalities, for example, the robot must be in a state of equilibrium under the effect of gravity. All these equations and inequalities give us a semi-algebraic variety in R^{3k}. In principle, the problem of motion planning is fairly simple: are the departure position and the desired position connected in the variety? In other words, is there a path in the variety which links the departure position to the desired position? If there is one, then it gives the desired movement; if not, the problem has no solution. This idea has been studied by Schwartz and Sharir [1983a], where they explained how to find the path. Starting from a cylindrical decomposition, the problem breaks down into two stages.
- The geometric aspect: which components are next to one another (we need only consider the components in the variety of legitimate positions).
- The combinatorial aspect: given the graph of the neighbourhood relations between the components, find the paths in this graph which join the component of the departure position and the component of the desired position.
The reader is advised to consult their articles for the details. Unfortunately, this algorithm scarcely seems practicable.

One of the simplest problems of motion planning is the following. Given in R^2 a right-angled "corridor" of width unity, say (x > 0 ∧ 0 ≤ y ≤ 1) ∪ (y > 0 ∧ 0 ≤ x ≤ 1), can we get a rod of length three (and of infinitesimal width) to negotiate it? The answer is "no", because the biggest rod which can negotiate it is of length 2√2. Davenport [1986] tried to solve this problem by Schwartz and Sharir's method. The semi-algebraic variety is in R^4 (for we take the two extremities of the rod as points of reference), and is defined by an equation, requiring that these two points be always at a distance three from one another, and by eight inequalities, two for each wall of the corridor. To find a cylindrical decomposition of R^4, we have to determine one of R^3, then one of R^2 and finally one of R^1. For R^1, we find 184 (irreducible) polynomials, of total degree 801, and maximum degree 18. This reduction took ten


minutes cpu-time with REDUCE on a DEC 20-60. These polynomials have 375 real roots, which can be isolated (in the sense of isolating the roots of each polynomial) in five minutes. But we have to isolate all the roots of all the polynomials, and Davenport was unable (with the technology of 1986) to do the calculation, because of the growth in the numerators and denominators of the rational numbers involved.
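The value 2√2 quoted for the longest rod is easy to corroborate numerically; a brute-force sketch, independent of the cylindrical machinery (the discretisation is our assumption):

```python
import math

def longest_rod(steps=100000):
    # A rod pivoting round the inner corner at angle t, touching both
    # outer walls of the unit-width corridor, has length
    # 1/sin(t) + 1/cos(t); the longest rod that can negotiate the
    # corridor is the minimum of this length over 0 < t < pi/2.
    best = float("inf")
    for k in range(1, steps):
        t = (math.pi / 2) * k / steps
        best = min(best, 1 / math.sin(t) + 1 / math.cos(t))
    return best

print(longest_rod())   # about 2.8284, i.e. 2*sqrt(2): comfortably below 3
```

The minimum is attained at t = π/4, giving exactly 2√2, which is why the length-three rod cannot get round the corner.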


4. Advanced algorithms

Computer Algebra, in general, manipulates quite familiar mathematical objects, and the manipulations are, in principle, fairly simple. But as we have already seen, even for calculating the g.c.d. of two polynomials with integer coefficients, naive algorithms are extremely costly. We therefore introduced methods, such as sub-resultant sequences of polynomials, which seem to be much more efficient. We said:

This algorithm is the best method known for calculating the g.c.d., of all those based on Euclid's algorithm applied to polynomials with integer coefficients.

In the chapter "Advanced algorithms" we shall see that if we go beyond these limits, it is possible to find better algorithms for this calculation.

4.1 MODULAR METHODS

In this section, we shall describe the first (historically speaking) family of non-obvious algorithms, and we shall start with its greatest success: the g.c.d.

4.1.1 g.c.d. in one variable

This section is set in the following context: A and B are two polynomials, belonging to the ring Z[x], whose g.c.d. we want to calculate. The restriction to integer coefficients does not present any problem, for we can always multiply the polynomials by an integer so as to eliminate any denominator. A slight variation of the classic example of Knuth [1969] (see also Brown [1971]), where this calculation by naive methods becomes very costly, is:

A(x) = x^8 + x^6 - 3x^4 - 3x^3 + x^2 + 2x - 5,
B(x) = 3x^6 + 5x^4 - 4x^2 - 9x + 21

(see the calculations in the section "The g.c.d." of Chapter 2). Let us suppose that these two polynomials have a common factor P, that is a polynomial (of non-zero degree) which divides A and B. Then there is a polynomial Q such that A = PQ. This equation still holds if we take each coefficient as an integer modulo 5. If we write P_5 to signify the polynomial P considered as a polynomial with coefficients modulo 5, this equation implies that P_5 divides A_5. Similarly, P_5 divides B_5, and therefore it is a common factor* of A_5 and B_5. But calculating the g.c.d. of A_5 and B_5 is fairly easy:

A_5(x) = x^8 + x^6 + 2x^4 + 2x^3 + x^2 + 2x,
B_5(x) = 3x^6 + x^2 + x + 1,
C_5(x) = remainder(A_5(x), B_5(x)) = A_5(x) + 3(x^2 + 1)B_5(x) = 2x^2 + 3,
D_5(x) = remainder(B_5(x), C_5(x)) = B_5(x) + (x^4 + x^2 + 3)C_5(x) = x,
E_5(x) = remainder(C_5(x), D_5(x)) = C_5(x) + 3xD_5(x) = 3.

Thus A_5 and B_5 are relatively prime, which implies that P_5 = 1. As the leading coefficient of P has to be one, we deduce that P = 1.

The concept of modular methods is inspired by this calculation, where there is no possibility of intermediate expression swell, for the integers modulo 5 are bounded (by 4). Obviously, there is no need to use the integers modulo 5: any prime number will suffice (we chose 5 because the calculation does not work modulo 2 and 3, for reasons we shall explain later). In this example, the result was that the polynomials are relatively prime. This raises several questions about generalising this calculation to an algorithm

capable of calculating the g.c.d. of any pair of polynomials:
(1) how do we calculate a non-trivial g.c.d.?
(2) what do we do if the modular g.c.d. is not the modular image of the g.c.d. (as in the example in the footnote*)?
(3) how much does this method cost?
Before we can answer these questions, we have to be able to bound the coefficients of the g.c.d. of two polynomials.

Theorem (Landau-Mignotte inequality). Let Q = Σ_{i=0}^{q} b_i x^i be a divisor of the polynomial P = Σ_{i=0}^{p} a_i x^i (where the a_i and b_i are integers). Then

Σ_{i=0}^{q} |b_i|  ≤  2^q |b_q / a_p| √(Σ_{i=0}^{p} a_i^2).

See the paper by Landau [1905], and those by Mignotte [1974, 1982].

* Note that we cannot deduce that P_5 = gcd(A_5, B_5): a counter-example is A = x - 3, B = x + 2, where P = 1, but A_5 = B_5 = x + 2, and so gcd(A_5, B_5) = x + 2, whereas P_5 = 1.
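The right-hand side of the inequality is cheap to evaluate; a small numerical sketch (the sample polynomial and divisor below are hypothetical illustrations, not data from the text):

```python
import math

def lm_bound(a, q, b_q=1):
    # 2^q * |b_q / a_p| * sqrt(sum a_i^2): the Landau-Mignotte bound on
    # sum(|b_i|) for a degree-q divisor with leading coefficient b_q of
    # the polynomial with integer coefficients a = [a_p, ..., a_0].
    return 2**q * abs(b_q / a[0]) * math.sqrt(sum(c * c for c in a))

# Illustration: P = (x+1)^2 (x-1) = x^3 + x^2 - x - 1 has the divisor
# Q = (x+1)^2 = x^2 + 2x + 1, whose coefficient sum is 1 + 2 + 1 = 4.
P = [1, 1, -1, -1]
Q = [1, 2, 1]
assert sum(abs(c) for c in Q) <= lm_bound(P, q=2, b_q=Q[0])
print(lm_bound(P, q=2, b_q=1))   # 8.0
```

Note that the bound depends only on P and on the degree of the divisor, which is what makes it usable before the divisor is known.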

Corollary 1. Every coefficient of the g.c.d. of A = Σ_{i=0}^{a} a_i x^i and B = Σ_{i=0}^{b} b_i x^i (with a_i and b_i integers) is bounded by

2^{min(a,b)} gcd(a_a, b_b) min( (1/|a_a|) √(Σ_{i=0}^{a} a_i^2), (1/|b_b|) √(Σ_{i=0}^{b} b_i^2) ).

Proof. The g.c.d. is a factor of A and of B, the degree of which is, at most, the minimum of the degrees of the two polynomials. Moreover, the leading coefficient of the g.c.d. has to divide the two leading coefficients of A and B, and therefore has to divide their g.c.d.

A slight variation of this corollary is provided by the following result.

Corollary 2. Every coefficient of the g.c.d. of A = Σ_{i=0}^{a} a_i x^i and B = Σ_{i=0}^{b} b_i x^i (where a_i and b_i are integers) is bounded by

2^{min(a,b)} gcd(a_0, b_0) min( (1/|a_0|) √(Σ_{i=0}^{a} a_i^2), (1/|b_0|) √(Σ_{i=0}^{b} b_i^2) ).
If =PP i i i is a divisor of , then ^ = P i i i is a divisor of ^ = i i i , and onversely. Therefore, the last orollary

an be applied to ^ and ^ , and this yields the bound stated. It may seem strange that the oeÆ ients of a g. .d. of two polynomials

an be greater than the oeÆ ients of the polynomials themselves. One example whi h shows this is the following (due to Davenport and Trager): = + 1 = ( + 1) ( 1); = + + + 1 = ( + 1) ( + 1); g d( ) = + 2 + 1 = ( + 1) This example an be generalised, as say = +3 +2 2 3 1 = ( + 1) ( 1); = + 3 + 3 + 2 + 3 + 3 + 1 = ( + 1) ( + 1); g d( ) = + 4 + 6 + 4 + 1 = ( + 1) Proof.

C

=0 x

A

a

=0

A

x

B

x

A; B

B

A; B

4.1.1.1

5

x

6

x

4

x

C

=0

x

B

A

A

A

x

x

x

4

5

x

3

x

3

2

x

4

3

x

2

3

4

x

2

2

x

x

2

x

2

x

3

x

x

x

x

2

x

x

x

x

x

2

x

x

:

x

2

x

The modular { integer relationship

x

x

x

4

x

4

4

2

x

x

:

In this sub-section, we answer the question raised above: what do we do if the modular g.c.d. is not the modular image of the g.c.d. calculated over the integers?

Lemma 1. If p does not divide the leading coefficient of gcd(A, B), the degree of gcd(A_p, B_p) is greater than or equal to that of gcd(A, B).

Proof. Since gcd(A, B) divides A, (gcd(A, B))_p divides A_p. Similarly, it divides B_p, and therefore it divides gcd(A_p, B_p). This implies that the degree of gcd(A_p, B_p) is greater than or equal to that of gcd(A, B)_p. But the degree of gcd(A, B)_p is equal to that of gcd(A, B), for the leading coefficient of gcd(A, B) does not cancel when it is reduced modulo p.

This lemma is not very easy to use on its own, for it supposes that we know the g.c.d. (or at least its leading coefficient) before we are able to check whether the modular reduction has the same degree. But this leading coefficient has to divide the two leading coefficients of A and B, and this gives a formulation which is easier to use.

Corollary. If p does not divide the leading coefficients of A and of B (it may divide one, but not both), then the degree of gcd(A_p, B_p) is greater than or equal to that of gcd(A, B).

As the g.c.d. is the only polynomial (to within an integer multiple) of its degree which divides A and B, we can test the correctness of our calculations of the g.c.d.: if the result has the degree of gcd(A_p, B_p) (where p satisfies the hypothesis of this corollary) and if it divides A and B, then it is the g.c.d. (to within an integer multiple). It is quite possible that we could find a gcd(A_p, B_p) of too high a degree. For example, in the case we have already cited of

A(x) = x^8 + x^6 - 3x^4 - 3x^3 + x^2 + 2x - 5,
B(x) = 3x^6 + 5x^4 - 4x^2 - 9x + 21,

gcd(A_2, B_2) = x + 1 (it is obvious that x + 1 divides the two polynomials modulo 2, because the sum of the coefficients of each polynomial is even). The following lemma shows that this possibility can only arise for a finite number of p.

Lemma 2. Let C = gcd(A, B). If p satisfies the condition of the corollary, and if p does not divide Res_x(A/C, B/C), then gcd(A_p, B_p) = C_p.

Proof. A/C and B/C are relatively prime, for otherwise C would not be the g.c.d. of A and B. By the corollary, C_p does not vanish. Therefore

gcd(A_p, B_p) = C_p gcd(A_p/C_p, B_p/C_p).

For the lemma to be false, the last g.c.d. has to be non-trivial. This implies that the resultant Res_x(A_p/C_p, B_p/C_p) vanishes, by proposition 1 of section A.4 in the Appendix. This resultant is the determinant of a Sylvester matrix, and |M_p| = (|M|)_p, for the determinant is only a sum of products of the coefficients. In the present case, this amounts to saying that Res_x(A/C, B/C)_p vanishes, that is that p divides Res_x(A/C, B/C). But the hypotheses of the lemma exclude this possibility.

Definition. If gcd(A_p, B_p) = gcd(A, B)_p, we say that the reduction of this problem is good, or that p is of good reduction. If not, we say that p is of bad reduction.

This lemma implies, in particular, that there are only a finite number of values of p such that gcd(A_p, B_p) does not have the same degree as that of gcd(A, B), that is the p which divide the g.c.d. of the leading coefficients and the p which divide the resultant of the lemma (the resultant is non-zero, and therefore has only a finite number of divisors). In particular, if A and B are relatively prime, we can always find a p such that A_p and B_p are relatively prime.

4.1.1.2 Calculation of the g.c.d.

In this section we answer the question posed earlier: how do we calculate a non-trivial g.c.d.? One obvious method is to use the Landau-Mignotte inequality, which can determine an M such that all the coefficients of the g.c.d. are bounded by M, and to calculate modulo a prime number greater than 2M. This method translates into the following algorithm:

M := Landau_Mignotte_bound(A, B);
do p := find_large_prime(2M);
   if degree_remainder(p, A) or degree_remainder(p, B)
     then C := modular_gcd(A, B, p);
          if divide(C, A) and divide(C, B) then return C;
forever;

(where the algorithm Landau_Mignotte_bound applies the corollaries of their inequality, the algorithm find_large_prime returns a prime number greater than its argument (a different number each time), the algorithm degree_remainder verifies that the reduction modulo p does not change the degree, that is that p does not divide the leading coefficient, the algorithm modular_gcd applies Euclid's algorithm modulo p, and the algorithm divide verifies that the polynomials divide over the integers). In fact, it is not necessary to test that p does not divide the leading coefficients: the Landau-Mignotte bound (corollary 1) implies that p is greater than one of the leading coefficients of A and B.

The drawback to this method is that it requires lengthy calculations modulo p, which may be a fairly large integer. So we suggest a method

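The modular_gcd building block (Euclid's algorithm over Z/pZ) is short to write out; a minimal sketch on coefficient lists, highest degree first (the list representation and helper names are our assumptions):

```python
def poly_mod(a, p):
    # Reduce a coefficient list (highest degree first) modulo p,
    # stripping leading zeros.
    a = [c % p for c in a]
    while a and a[0] == 0:
        a.pop(0)
    return a

def poly_rem(a, b, p):
    # Remainder of a divided by b over Z/pZ (p prime, b non-zero).
    inv = pow(b[0], -1, p)            # inverse of the leading coefficient
    a = a[:]
    while len(a) >= len(b):
        q = a[0] * inv % p
        for i in range(len(b)):
            a[i] = (a[i] - q * b[i]) % p
        a.pop(0)                      # a[0] is now zero
        while a and a[0] == 0:
            a.pop(0)
    return a

def modular_gcd(a, b, p):
    a, b = poly_mod(a, p), poly_mod(b, p)
    while b:
        a, b = b, poly_rem(a, b, p)
    return a

# Knuth's example from the start of the chapter.
A = [1, 0, 1, 0, -3, -3, 1, 2, -5]    # x^8 + x^6 - 3x^4 - 3x^3 + x^2 + 2x - 5
B = [3, 0, 5, 0, -4, -9, 21]          # 3x^6 + 5x^4 - 4x^2 - 9x + 21
print(modular_gcd(A, B, 5))           # [3]: a unit, so A_5 and B_5 are coprime
print(modular_gcd(A, B, 2))           # [1, 1]: x + 1, the bad reduction at p = 2
```

Running it modulo 5 reproduces the hand computation of section 4.1.1, and modulo 2 it exhibits the spurious factor x + 1 discussed under bad reduction.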

which uses several small primes and the Chinese remainder theorem (see the appendix). If we calculate C_p and C_q, where C is the desired g.c.d., and p and q are two primes, then this theorem calculates C_{pq} for us. We must point out that this theorem is applied in its integer form, to each coefficient of C separately. There is no question of using the polynomial version, even though we are trying to recover a polynomial. This method translates into the following algorithm:

M := Landau_Mignotte_bound(A, B);
Avoid := gcd(lc(A), lc(B));
E0: p := find_prime(Avoid);
    C := modular_gcd(A, B, p);
E1: if degree(C) = 0 then return 1;
    Known := p;
    Result := C;
    while Known <= 2M do
        p := find_prime(Avoid);
        C := modular_gcd(A, B, p);
        if degree(C) < degree(Result) then go to E1;
        if degree(C) = degree(Result)
          then Result := CRT(Result, Known, C, p);
               Known := Known * p;
    if divide(Result, A) and divide(Result, B) then return Result;
    go to E0;

(where "lc" denotes the leading coefficient of a polynomial, the sub-algorithms of the last algorithm have the same meaning here, the algorithm find_prime returns a prime which does not divide its argument (a different prime each time), and the algorithm CRT applies the Chinese remainder theorem to each coefficient of the two polynomials Result (modulo Known) and C (modulo p), representing the integers modulo M between -M/2 and M/2). The two go to* in this algorithm correspond to the two ways of detecting that all the chosen primes were of bad reduction:
- either we find a p (satisfying the corollary of lemma 1) such that the degree of the g.c.d. modulo p is smaller than the degrees already calculated (and therefore they come from bad reductions);

* which could be replaced by loops, or other tools of structured programming, if one desired.



- or we get to the end of the calculations with a result which seems good, but which does not divide A and B, since all the reductions have been bad (an unlikely possibility).
If the first reduction was good, no go to would be executed.

This algorithm is open to several improvements. It is not necessary for the p to be genuinely prime. It suffices if the infinite set of the p has an infinite number of prime factors, for we know that there is only a finite number of values with bad reduction. If p is not a prime, it is possible that the algorithm modular_gcd finds a number which is not zero, but which cannot be inverted modulo p, such as 2 (mod 6). This discovery leads to a factorisation of the modulus, and we can therefore work modulo the two factors separately. The line

Known := Known * p;

should of course be replaced by

Known := lcm(Known, p);

and the implementation of the Chinese remainder theorem should be capable of dealing with moduli which are not relatively prime.

It is not absolutely necessary to go as far as Known > 2M. The Landau-Mignotte bounds are often far too pessimistic, although Mignotte [1981] proved that they cannot, in general, be reduced. It is possible to test at each stage whether we have found a common divisor, for we know that this method can never find a polynomial of too small a degree. But these tests can be very costly, and a compromise often used is to test after those stages in which the value does not change, that is to replace the line

then Result := CRT(Result, Known, C, p);

by the lines

then Previous := Result;
     Result := CRT(Result, Known, C, p);
     if Previous = Result and divide(Result, A) and divide(Result, B)
       then return Result;
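The integer, coefficient-wise combination performed by CRT, with the symmetric representation, can be sketched as follows (one possible implementation; the function name is ours):

```python
def crt_int(c, m, d, p):
    # Combine x ≡ c (mod m) with x ≡ d (mod p), where gcd(m, p) = 1,
    # returning the unique solution in the symmetric range (-mp/2, mp/2].
    f = pow(m, -1, p)                  # f*m ≡ 1 (mod p)
    e = (c + (d - c) * f % p * m) % (m * p)
    return e - m * p if e > m * p // 2 else e

print(crt_int(2, 5, 3, 7))   # 17: the unique value ≡ 2 (mod 5) and ≡ 3 (mod 7)
print(crt_int(4, 5, 6, 7))   # -1 in the symmetric representation modulo 35
```

Since e - c is a multiple of m, and e ≡ c + (d - c) = d modulo p, the value e does reduce correctly modulo both moduli; the symmetric range is what lets negative g.c.d. coefficients be recovered.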

4.1.1.3 Cost of this algorithm

We now analyse the basic algorithm, as we have written it, without the improvements outlined at the end of the last paragraph. We add the following hypotheses:

(1) All the primes have already been calculated (this hypothesis is not very realistic, but we have outlined a method which can use numbers which are not prime, and in practice this search for primes is not very expensive);
(2) Each prime p is sufficiently small for all the calculations modulo p to be done in time O(1); that is, the numbers modulo p can be dealt with directly on the computer, for one word is enough to store them;
(3) The polynomials A and B are of degree less than or equal to n, and satisfy the bound

√(Σ_{i=0}^{n} a_i^2), √(Σ_{i=0}^{n} b_i^2) ≤ H

(which would be satisfied if all the coefficients were less than H/√(n + 1), and which implies that all the coefficients are less than H).

The coefficients of the g.c.d. are bounded by 2^n H, according to the Landau-Mignotte inequality. If N_1 is the number of p with good reduction we need, the product of all these p must be greater than 2^{n+1} H, and this implies that N_1 < (n + 1) + log_2 H (where we use the inequality that every prime is at least 2: a more delicate analysis would bring in analytic number theory without improving the order obtained for the running time). Moreover, there are the p of bad reduction: let us suppose that there are N_2 of them. They must divide the resultant of A/C and B/C. These two polynomials are factors of A and B, and therefore their coefficients (in fact, the sum of the absolute values of the coefficients) are bounded by 2^n H. Their resultant is, therefore, bounded by (2^n H)^{2n}. This implies N_2 < n(n + log_2 H). We can deduce a better bound N_2 < n log_2 H, if we note that "bad reduction" is equivalent to saying that one of the minors of Sylvester's matrix is divisible by p, even though it is not zero (see Loos [1982]).

The big while loop is therefore executed at most N_1 + N_2 times: probably fewer, for we have supposed that the resultant had all its factors distinct and small, and that we meet all its factors before finding enough p of good reduction. The expensive operations of this loop are the modular g.c.d. and the application of the Chinese remainder theorem. For the modular g.c.d., we first have to reduce the coefficients of A and B modulo p (O(n log_2 H) operations) and then to calculate the g.c.d. (O(n^2) operations). The Chinese remainder theorem is applied to all the coefficients of the supposed g.c.d. We write C for the g.c.d. modulo p, and D for the g.c.d. modulo M, the product of all the p already calculated (written Known in the algorithm). To prove the theorem we calculate integers f and g such that fM + gp = 1, and prove that e = c + (d - c)fM is the value modulo

Mp which reduces to c modulo M and d modulo p. Calculating f and* g requires O(log_2 M) operations (for p is supposed small, by hypothesis 2). f is bounded by p, and d - c can be calculated modulo p (for the result is modulo Mp). The multiplication (d - c)fM is the multiplication of a large number M by a small number, and requires O(log_2 M) operations. As we have at most n + 1 coefficients to calculate, the total cost of a cycle of the loop is O(n(n + log_2 H) + n log_2 M) operations. As M < 2^{n+1} H, this can be simplified to O(n(n + log_2 H)). Thus we deduce a total cost of O(n(n + log_2 H)^2) operations for the loop.

The last stage of this algorithm is checking the result, by dividing A and B by the result, and this checks that we have found a common divisor of A and B. The most expensive case is the division of a polynomial of degree n by a polynomial of degree n/2, and this requires n^2/4 operations by naive methods. As these operations were performed on integers of size (number of digits) bounded** by n log_2 H, we find that the total cost of these divisions is O(n^4 log_2^2 H) operations. This is somewhat annoying, since this implies that the verification is considerably more costly than the calculation itself. We can use "efficient" methods (based on the Fast Fourier Transform), and this gives us

O(n^3 log_2 H log_2 n log_2(n log_2 H) log_2 log_2(n log_2 H))

operations, but this notation conceals a fairly large constant, and the cost remains higher than that for the loop. Brown [1971] therefore proposed continuing the loop until we are sure of having exhausted all the p of bad reduction, that is adding to the condition of the while the phrase

or ∏ p < Resultant_Bound

where the product extends over all the p already provided by find_prime, and the variable Resultant_Bound is given the value of a bound for the

* Although we do not use g in this calculation, the extended Euclidean algorithm calculates it automatically. It need only be calculated once, rather than once per coefficient. This does not alter the asymptotic cost, but it is quite important in practice, for the constant concealed by the notation O is considerably larger for the extended Euclidean algorithm than that for multiplication.

** If we find an integer of the quotient greater than this bound, this implies that the division is not exact, and therefore we can terminate the divide operation immediately. This remark is quite important in practice, for a division of this kind which fails may generate huge coefficients: for example the division of x^100 by x - 10 gives a remainder 10^100. See Abbott et al. [1985] for a discussion of the cost of the divide operation.

Advanced algorithms

resultant of A and B, and this also bounds all the minors, as the cancellation of one of them was the condition of bad reduction. With this addition, we know that we have chosen a p of good reduction, and that the final result is correct. This addition is of theoretical use, reducing the calculating time to O(n^3 (log2 H)^3), but implementers prefer the algorithm in the form in which we stated it. In practice, we can suppose that the algorithm does not find many p of bad reduction. If, in addition, we suppose that the coefficients of the g.c.d. are less than H, and if we add a termination test (such as that outlined in the preceding paragraph), we arrive at a cost of O(n log2 H (n + log2 H)). In any case this algorithm is asymptotically more efficient than the sub-resultant algorithm (O(n^4 (log2 H)^4) according to Loos [1982]). This efficiency is borne out in practice.
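The shape of this modular loop can be sketched in a few lines of Python. This is an illustration only, not the book's algorithm: the primes are fixed by hand instead of coming from a find_prime routine, and the final check is a direct comparison rather than trial division.

```python
def polyrem_mod(f, g, p):
    """Remainder of f by g over GF(p); coefficient lists, low degree first."""
    f = [c % p for c in f]
    g = [c % p for c in g]
    while g and g[-1] == 0:
        g.pop()
    inv = pow(g[-1], -1, p)            # inverse of the leading coefficient
    while True:
        while f and f[-1] == 0:
            f.pop()
        if len(f) < len(g):
            return f
        k, q = len(f) - len(g), f[-1] * inv % p
        for i in range(len(g)):
            f[i + k] = (f[i + k] - q * g[i]) % p

def gcd_mod(f, g, p):
    """Monic g.c.d. of f and g over GF(p) by Euclid's algorithm."""
    while any(c % p for c in g):
        f, g = g, polyrem_mod(f, g, p)
    f = [c % p for c in f]
    while f[-1] == 0:
        f.pop()
    inv = pow(f[-1], -1, p)
    return [c * inv % p for c in f]

def crt_int(c1, m1, c2, m2):
    """x with x = c1 (mod m1) and x = c2 (mod m2), 0 <= x < m1*m2."""
    return c1 + m1 * ((c2 - c1) * pow(m1, -1, m2) % m2)

def sym(c, m):
    """Symmetric representative of c modulo m."""
    return c - m if c > m // 2 else c

# A = (x - 1)(x + 2), B = (x - 1)(x + 3): the true g.c.d. is x - 1.
A, B = [-2, 1, 1], [-3, 2, 1]
g5 = gcd_mod(A, B, 5)                  # image of the g.c.d. modulo 5
g7 = gcd_mod(A, B, 7)                  # image modulo 7
g = [sym(crt_int(c5, 5, c7, 7), 35) for c5, c7 in zip(g5, g7)]
assert g == [-1, 1]                    # x - 1 recovered from the two images
```

Both primes here happen to be of good reduction; a prime of bad reduction would reveal itself exactly as in the text, by a modular g.c.d. of too high a degree.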

4.1.2 g.c.d. in several variables

The previous section showed the use of modular calculations for avoiding intermediate expression swell. When we calculate modulo p, we have a guarantee that the integers do not exceed p - 1 (or p/2 if we are using a symmetric representation). This method gives an algorithm for calculating g.c.d.s in one variable which is more efficient than the sub-resultant polynomial sequences algorithm. Can it be generalised to polynomials in several variables? Before we can answer this question, we must settle some details. The most obvious algorithm for calculating the g.c.d. in several variables x1, ..., xn is to convert Euclid's algorithm into Q(x1, ..., x_(n-1))[xn]. But this has one big theoretical drawback (and several practical ones): it calculates only the dependence of the g.c.d. on xn. For example, if it is applied to

    A(x1, x2) = (x1 - 1)x2 + (x1 - 1),
    B(x1, x2) = (x1 - 1)x2 + (-x1 + 1),

the answer would be that they are relatively prime, even though they have a factor of x1 - 1 in common.

Definition. Let R be an integral domain, and p ∈ R[x], with coefficients a0, ..., ak, such that p = ∑_{i=0}^{k} ai x^i. The content of p, written cont(p), is the g.c.d. of all its coefficients. If the content of a polynomial is one, the polynomial is called primitive. The primitive part of a polynomial p, written pp(p), is defined by pp(p) = p/cont(p).

It is easy to deduce that the primitive part of a polynomial is primitive. The following result is quite well known in the mathematical literature, and it demonstrates the great importance of this idea. In the section on factorisation, we shall present this lemma in a slightly different form.
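These definitions are easy to exercise in Python. The sketch below (illustrative only, not code from the book) uses dense integer coefficient lists and checks the multiplicativity of the content on a small example.

```python
from math import gcd
from functools import reduce

def content(p):
    """g.c.d. of the coefficients of p (a list of integers)."""
    return reduce(gcd, (abs(c) for c in p))

def pp(p):
    """Primitive part: p divided by its content."""
    c = content(p)
    return [ci // c for ci in p]

def polymul(p, q):
    """Product of two coefficient lists (low degree first)."""
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

p = [6, 4, 2]       # 2x^2 + 4x + 6: content 2
q = [9, 3]          # 3x + 9: content 3
assert content(p) == 2 and content(q) == 3
# cont(pq) = cont(p) cont(q), and pp(pq) = pp(p) pp(q)
assert content(polymul(p, q)) == 6
assert pp(polymul(p, q)) == polymul(pp(p), pp(q))
```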

Computer Algebra

Gauss's Lemma. Let p and q be two polynomials of R[x]. Then

    cont(pq) = cont(p) cont(q)

(and therefore pp(pq) = pp(p) pp(q)).

Corollary. Let p and q be two polynomials of R[x]. Then

    cont gcd(p, q) = gcd(cont(p), cont(q));
    pp gcd(p, q) = gcd(pp(p), pp(q)).

Thus, given two polynomials in several variables, we can call one variable the "main variable" (written x), and we can calculate the g.c.d. by multiplying the g.c.d. of the primitive parts (found by Euclid's algorithm, taking care that the result is primitive) by the g.c.d. of the contents. This second g.c.d., as well as the g.c.d.s needed to calculate the contents, do not involve the variable x, and can therefore be done recursively. The context of this section is as follows: A and B are two polynomials belonging to the ring Z[x1, ..., xr] whose g.c.d. we want to calculate. Restricting ourselves to polynomials and to integer coefficients does not cause any problems, for we can always multiply the polynomials by an integer to eliminate a numerical denominator, and by a polynomial in several xi to eliminate a polynomial denominator. Therefore, the procedure in the last paragraph can be expressed in terms of two recursive algorithms:

proc gcd(A, B, r);
    Ac := content(A, r);  Ap := A/Ac;
    Bc := content(B, r);  Bp := B/Bc;
    return pp(Euclid(Ap, Bp, r), xr) * gcd(Ac, Bc, r - 1);

proc content(A, r);
    Result := coeff(A, xr, 0);
    i := 1;
    while Result ≠ 1 and i ≤ degree(A, xr) do
        Result := gcd(Result, coeff(A, xr, i), r - 1);
        i := i + 1;
    return Result;

where the operators degree and coeff extract the indicated components from their first parameter, considered as a polynomial in the second, and the algorithm Euclid applies Euclid's algorithm to its first two parameters, considered as polynomials in one variable, namely the third parameter. Here we have chosen to suppress the content of A and B before applying Euclid's algorithm; this choice does not alter the result, but it makes the Euclid


parameters smaller. The algorithm content stops as soon as it finds that the content is one: this can save useless calculations. The problems with this algorithm are many.
(a) The intermediate expression swell in Euclid's algorithm. Even if we use the method of sub-resultant polynomial remainder sequences, the intermediate degrees will grow considerably.
(b) The number of recursive calls. If we want to calculate the g.c.d. of two polynomials in two variables, of degree ten in each one, so that the g.c.d. is of degree five in each variable, we need: 20 calls to calculate the two contents; 1 call to calculate the g.c.d. of the two contents; 5 calls to calculate the content of the result of Euclid; that is 26 recursive calls. The case of three variables therefore needs 26 calls of the algorithm gcd in two variables, each needing about 26 calls of gcd in one variable.
(c) Moreover, all the integers appearing as coefficients can become very large.
These drawbacks are not purely hypothetical. To illustrate the enormous swell which occurs we give a very small example, chosen so that the results will be small enough to be printed, in two variables x and y (x being the main variable):

= (y 2 = (y 2

1)x2 2 y + 1)x

y

(y 2 2)x + (y 2 + y + 1); (y 2 + 2)x + (y 2 + y + 2):

The rst elimination gives C

= (2y 2 + 4y )x + (2y 4 + y 2

3y + 1);

and the se ond gives D

= 4y 10 + 4y 9

4y 8 + 8y 7

7y 6 + y 5

4y 4 + 7y 3

14y 2 + 3y

1:

Here we see a growth in the size of the integers, and a very significant growth in the power of y. Note that a generic problem of this degree can result in a polynomial of degree 32 in y. As we saw in the last section, modular calculation can avoid this growth of integer coefficients. But we have to be more subtle to get round the other problems of growth of the powers of non-principal variables, and of recursive calls. Let us suppose that the two polynomials A and B of our example have a common factor, that is a polynomial P (of non-zero degree) which divides A and B. Then there exists a polynomial Q such that A = PQ. This equation is still true if we evaluate every polynomial at the value y = 2.


If we write P_{y=2} to signify the polynomial P evaluated at the value y = 2, this equation implies that P_{y=2} divides A_{y=2}. Similarly, P_{y=2} divides B_{y=2}, and is therefore a common factor* of A_{y=2} and B_{y=2}. But the calculation of the g.c.d. of A_{y=2} and B_{y=2} is quite simple:

    A_{y=2}(x) = x^2 - 2x + 7,
    B_{y=2}(x) = 3x^2 - 6x + 8,
    remainder(A_{y=2}(x), B_{y=2}(x)) = 29.

Thus A_{y=2} and B_{y=2} are relatively prime, and this implies that P_{y=2} = 1. Since the leading coefficient of P has to divide the g.c.d. of the leading coefficients of A and B, which is one, we deduce that P = 1. This calculation is quite like the calculation modulo 5 which we did in the last section, and can be generalised in the same way. The reader who is interested in seeing the mathematics in the most general setting should consult the article by Lauer [1982], which gives a general formalisation of modular calculation.

4.1.2.1 Bad reduction

We have defined a number p as being of bad reduction for the calculation of the g.c.d. of A and B if

    gcd(A, B)_p ≠ gcd(A_p, B_p).

We have also remarked that, if p is not of bad reduction, and if one of the leading coefficients of A and B is not divisible by p, then the g.c.d. modulo p has the same degree as the true g.c.d. These remarks lead to the following definition.

Definition. Let A and B be two polynomials of R[y][x], where R is an integral domain. The element r of R is of good reduction for the calculation of the g.c.d. of A and B if

    gcd(A, B)_{y=r} = gcd(A_{y=r}, B_{y=r}).

Otherwise, r is of bad reduction.

Proposition. r is of bad reduction if and only if y - r divides

    Res_x(A / gcd(A, B), B / gcd(A, B)).

* As in the previous section, it must be observed that we cannot deduce that P_{y=2} = gcd(A_{y=2}, B_{y=2}): a counter-example is A = yx + 2, B = 2x + y, where P = 1, but A_{y=2} = B_{y=2} = 2x + 2, and thus gcd(A_{y=2}, B_{y=2}) = x + 1, whereas P_{y=2} = 1.
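The counter-example in the footnote is easy to check mechanically. A Python sketch (an illustration over Q, not the book's code; coefficient lists are low degree first):

```python
from fractions import Fraction

def polyrem(f, g):
    """Remainder of f by g over Q."""
    f = [Fraction(c) for c in f]
    g = [Fraction(c) for c in g]
    while f and len(f) >= len(g):
        k, q = len(f) - len(g), f[-1] / g[-1]
        for i in range(len(g)):
            f[i + k] -= q * g[i]
        while f and f[-1] == 0:
            f.pop()
    return f

def polygcd(f, g):
    """Monic g.c.d. over Q by Euclid's algorithm."""
    while any(g):
        f, g = g, polyrem(f, g)
    return [c / f[-1] for c in f]

# A = y*x + 2 and B = 2*x + y both evaluate at y = 2 to 2x + 2,
# so the evaluated g.c.d. is x + 1 even though gcd(A, B) = 1.
assert polygcd([2, 2], [2, 2]) == [1, 1]
# At y = 3 the images [2,3] and [3,2] are coprime: the g.c.d. is constant.
assert len(polygcd([2, 3], [3, 2])) == 1
```

So y = 2 is a value of bad reduction for this pair, exactly as the resultant criterion of the proposition predicts.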


Proposition. If r is of good reduction and y - r does not divide the two leading coefficients (with respect to x) of A and B, then gcd(A, B) and gcd(A_{y=r}, B_{y=r}) have the same degree with respect to x.

The proofs of these propositions are similar to those of the previous section. A key result of that section was the Landau-Mignotte inequality, which is rather difficult to prove, and rather surprising, for the obvious conjecture, that the coefficients of a factor must be smaller than the coefficients of the original polynomial, is false. Abbott [1988] quotes the surprising example of

    x^41 - x^40 - x^39 + x^36 + x^35 - x^33 + x^32 - x^30 - x^27 + x^23 + x^22
      - x^21 - x^20 + x^19 + x^18 - x^14 - x^11 + x^9 - x^8 + x^6 + x^5 - x^2 - x + 1,

which has a factor of

    x^33 + 7x^32 + 27x^31 + 76x^30 + 174x^29 + 343x^28 + 603x^27 + 968x^26
      + 1442x^25 + 2016x^24 + 2667x^23 + 3359x^22 + 4046x^21 + 4677x^20
      + 5202x^19 + 5578x^18 + 5774x^17 + 5774x^16 + 5578x^15 + 5202x^14
      + 4677x^13 + 4046x^12 + 3359x^11 + 2667x^10 + 2016x^9 + 1442x^8 + 968x^7
      + 603x^6 + 343x^5 + 174x^4 + 76x^3 + 27x^2 + 7x + 1.

In the present case, things are very much simpler.

Proposition. If C is a factor of A, then the degree (with respect to y) of C is less than or equal to that of A.

4.1.2.2 The algorithm

Armed with these results, we can calculate the g.c.d. of two polynomials in n variables by a recursive method, where r is the index of the leading variable (which will not change) and s is the index of the second variable, which will be evaluated. The algorithm is given below. The initial call has to be done with s = r - 1. When s drops to zero, our polynomial is in one variable, the recursion is terminated, and we call a gcd_simple procedure (which may well be the modular g.c.d. procedure of the last section) to do this calculation. In this algorithm, we have supposed that the random algorithm returns a random integer (a different number each time), and the CRT algorithm applies the Chinese remainder theorem (polynomial version) to each coefficient (with respect to xr) of the two polynomials Res (modulo Known) and C (modulo xs - v). The two do ... while loops, which are identical, choose a value v which satisfies the condition that xs - v does not divide the two leading coefficients of A and B.


Algorithm of modular g.c.d.

proc gcd(A, B, r, s);
    if s = 0 then return gcd_simple(A, B, xr);
    M := 1 + min(degree(A, xs), degree(B, xs));
E0: do
        v := random();
        Av := subs(xs, v, A);
        Bv := subs(xs, v, B);
    while degree(Av, xr) ≠ degree(A, xr) and degree(Bv, xr) ≠ degree(B, xr);
    C := gcd(Av, Bv, r, s - 1);
E1: Known := xs - v;
    n := 1;
    Res := C;
    while n ≤ M do
        do
            v := random();
            Av := subs(xs, v, A);
            Bv := subs(xs, v, B);
        while degree(Av, xr) ≠ degree(A, xr) and degree(Bv, xr) ≠ degree(B, xr);
        C := gcd(Av, Bv, r, s - 1);
        if degree(C, xr) < degree(Res, xr) then go to E1;
        if degree(C, xr) = degree(Res, xr) then
            Res := CRT(Res, Known, C, xs - v);
            Known := Known * (xs - v);
            n := n + 1;
    if divide(Res, A) and divide(Res, B) then return Res;
    go to E0;
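The CRT operation used here amounts to ordinary incremental (Newton-style) interpolation: evaluate Res at the new point v, scale the mismatch by Known, and add. A Python sketch of that single step (an illustration, not the book's routine; coefficient lists over Q, low degree first):

```python
from fractions import Fraction

def ev(p, x):
    """Evaluate a coefficient list at x by Horner's rule."""
    r = Fraction(0)
    for c in reversed(p):
        r = r * x + c
    return r

def crt_step(res, known, c_val, v):
    """Make res (already correct modulo the product `known` of linear
    factors) also take the value c_val at x = v, i.e. become correct
    modulo known * (x - v)."""
    scale = (Fraction(c_val) - ev(res, v)) / ev(known, v)
    out = [Fraction(c) for c in res] + [Fraction(0)] * (len(known) - len(res))
    for i, k in enumerate(known):
        out[i] += scale * k
    return out

def mul_linear(p, v):
    """Multiply the coefficient list p by (x - v)."""
    q = [Fraction(0)] + list(p)
    for i in range(len(p)):
        q[i] -= v * p[i]
    return q

# Reconstruct p(x) = x^2 + 1 from its values at v = 0, 1, 2.
res = [Fraction(1)]                     # matches p(0) = 1
known = [Fraction(0), Fraction(1)]      # modulus so far: x - 0
for v, val in [(1, 2), (2, 5)]:
    res = crt_step(res, known, val, v)
    known = mul_linear(known, v)
assert res == [Fraction(1), Fraction(0), Fraction(1)]
```

In the algorithm proper this step is applied to each coefficient of the partial g.c.d. with respect to xr, with values taken from the modular images C.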

As in the modular g.c.d. algorithm of the last section, the two go to statements in this algorithm correspond to two ways of detecting whether all the chosen values were of bad reduction: either we find a v such that the degree of the g.c.d. after the evaluation xs = v is smaller than the degrees already calculated (and that they therefore came from bad reductions); or else we get to the end of the calculation with a result which looks good but which does not divide A and B because all the reductions have been bad (a rather unlikely possibility). If the first reduction was good, no go to would be carried out. This algorithm can be improved upon in the same way as the algorithm


for the case of one variable, by replacing the line

    then Res := CRT(Res, Known, C, xs - v);

by the lines

    then Previous := Res;
         Res := CRT(Res, Known, C, xs - v);
         if Previous = Res and divide(Res, A) and divide(Res, B)
             then return Res;

4.1.2.3 Cost of this algorithm

We now analyse the naive algorithm as it is written, without the improvement described at the end of the last paragraph. We use the following notation: r means the total number of variables (thus, after r - 1 recursions, we shall call the algorithm gcd_simple), the degrees with respect to each variable are bounded by n, and the lengths of all the coefficients are bounded by d. We need at most n + 1 values of good reduction. We can leave out of account the small do ... while loops, for in total they repeat at most n times, for a repetition means that the value of v returned by random is a common root of the leading coefficients of A and B, whereas a polynomial of degree n cannot have more than n roots. If N is the number of values of xs of bad reduction we find, then the big loop is gone through N + n + 1 times. It is then necessary to bound N, which is obviously less than or equal to the number of v of bad reduction. Let C be the true g.c.d. of A and B. Thus, A/C and B/C are relatively prime, and their resultant (see the Appendix) with respect to xr is non-zero. If v is of bad reduction, A_{xs=v} and B_{xs=v} have a g.c.d. greater than C_{xs=v}, i.e. (A/C)_{xs=v} and (B/C)_{xs=v} are not relatively prime, and therefore their resultant with respect to xr must be zero. But if the leading coefficients do not cancel,

    Res_{xr}(A/C, B/C)_{xs=v} = Res_{xr}((A/C)_{xs=v}, (B/C)_{xs=v}).

(If one of the leading coefficients (for example, that of A) cancels, the situation is somewhat more complicated. The resultant on the right is the determinant of a Sylvester matrix of dimension smaller than that on the left. In this case, the resultant on the right, multiplied by a power of the other leading coefficient, is equal to the resultant on the left. In any case,


one resultant cancels if and only if the other cancels.) This implies that v is a root of Res_{xr}(A/C, B/C). This is a polynomial of degree at most 2n^2, and therefore N ≤ 2n^2. In this loop, the most expensive operations are the recursive call to the algorithm gcd and the application of the Chinese remainder theorem. For the latter, we have to calculate at most n + 1 coefficients of a polynomial in xr, by applying the theorem to Known and xs - v. Known is a product of linear polynomials, and the section on the Chinese remainder theorem in the appendix "Algebraic background" gives an extremely simple formulation in this case. In fact, we only have to evaluate Res at the value v, to multiply the difference between it and C by a rational number and by Known, and to add this product to Res. The multiplication (the most expensive operation) requires a running time O(n^(2s-1) d^2) for each calculated coefficient with respect to xr, that is, a total time of O(n^(2s) d^2). Thus, if we denote by gcd(s) the running time of the algorithm gcd with fourth parameter equal to s, the cost of one repetition of the loop is gcd(s-1) + O(n^(2s) d^2). The total cost therefore becomes

    (2n^2 + n + 1) (gcd(s - 1) + O(n^(2s) d^2)).

The results of the last section, which dealt with the case of one variable, imply that gcd(0) = O(n^3 d^3). An induction argument proves that gcd(s) = O(n^(2s+3) d^3). The initial value of s is r - 1, and therefore our running time is bounded by O(n^(2r+1) d^3). This is rather pessimistic, and Loos [1982] cites a conjecture of Collins that on average the time is rather O(n^(r+1) d^2).

4.1.3 Other applications of modular methods

Modular methods have many other applications than g.c.d.s, although this was the first one. Some of these applications also have the idea of bad reduction; for others the reductions are always good. Obviously, the algorithm is much simpler if the reductions are always good. In general, we illustrate these applications by reduction modulo p, where p is a prime number, as we did in the first section. If the problems involve several variables, we can evaluate the variables at certain values, as we did for the g.c.d. in several variables. The generalisation is usually fairly obvious, and can be looked up in the references cited.

4.1.3.1 Resultant calculation

Resultants are defined in the Appendix. As this calculation is very closely linked to that of the g.c.d., we should not be surprised by the existence of a link between the two. In fact, as the resultant of two polynomials A and B is a polynomial in the coefficients of A and B, we see (at least if p does


not divide the leading coefficients) that

    Res_x(A, B)_p = Res_x(A_p, B_p).

As this equation always holds, every p is of good reduction. We exclude those p which divide the leading coefficients in order to avoid the complications that arise when the degrees of A and B change. The Appendix gives bounds on the size of a resultant, and they tell us how many p are needed to guarantee that we have found the resultant. Collins [1971] discusses this algorithm, and it turns out to be the most efficient method known for this calculation.

4.1.3.2 Calculating determinants

The resultant is only the determinant of the Sylvester matrix, and we may hope that the modular methods will let us calculate general determinants. This is in fact possible: all we have to do is to calculate the determinant modulo a sufficient number of p, and apply the Chinese remainder theorem. We have to determine the number of p required; in other words, we must bound the determinant. Let us suppose that the determinant is of dimension n, and that the entries are a_{i,j}.

Theorem.

    |det(a_{i,j})| ≤ n! (max_i max_j |a_{i,j}|)^n.

Proof. The determinant consists of the sum of n! terms, of which each is the product of n elements of the matrix.
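This bound is already enough to drive the modular calculation. A Python sketch (the primes are fixed illustrative choices; this is not the book's implementation):

```python
from math import factorial

def det_mod(M, p):
    """Determinant modulo a prime p, by Gaussian elimination over GF(p)."""
    M = [[x % p for x in row] for row in M]
    n, det = len(M), 1
    for c in range(n):
        piv = next((r for r in range(c, n) if M[r][c]), None)
        if piv is None:
            return 0
        if piv != c:                      # row swap changes the sign
            M[c], M[piv] = M[piv], M[c]
            det = -det % p
        det = det * M[c][c] % p
        inv = pow(M[c][c], -1, p)
        for r in range(c + 1, n):
            f = M[r][c] * inv % p
            for j in range(c, n):
                M[r][j] = (M[r][j] - f * M[c][j]) % p
    return det

def det_by_crt(M, primes):
    """Combine modular determinants by the Chinese remainder theorem,
    stopping once the modulus exceeds twice the n! * max^n bound."""
    n = len(M)
    bound = factorial(n) * max(abs(a) for row in M for a in row) ** n
    x, m = 0, 1
    for p in primes:
        t = (det_mod(M, p) - x) * pow(m, -1, p) % p
        x, m = x + m * t, m * p
        if m > 2 * bound:
            break
    return x - m if x > m // 2 else x     # symmetric representative

M = [[2, -3, 1], [5, 0, 4], [-1, 7, 6]]
assert det_by_crt(M, [10007, 10009, 10037]) == 81
```

Hadamard's bound below is usually far sharper than n! max^n, and an implementation would use it instead to cut down the number of primes.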

Theorem (Hadamard's bound [1893]).

    |det(a_{i,j})|^2 ≤ ∏_{i=1}^{n} ∑_{j=1}^{n} |a_{i,j}|^2.

4.1.3.3 Inverse of a matrix

This problem is rather more complicated, for the elements of the inverse of a matrix with integer coefficients are not necessarily integers. Moreover, it is possible that the matrix can be inverted over the integers, but that it is singular modulo p, and therefore bad reduction can occur. If inversion does not work modulo p, this implies that p divides the determinant, and, if the matrix is not truly singular, there are only finitely many such p. Therefore, if we avoid such p, inversion is possible. There are two solutions to the problem set by the fact that the elements of the inverse are not necessarily integers. The first is to note that they are always rational numbers whose


denominators divide the determinant. Therefore, if we multiply the element modulo p by the determinant (which we calculate once and for all) modulo p, we get back to the numerator modulo p, which is used to find the true solution. The other solution is important, because it can be applied to other problems where the desired solutions are fractions. Wang [1981] suggested this method in the context of the search for decompositions into partial fractions, but it can be applied more generally. First we need some algebraic background. Let M be an integer (in practice, M will be the product of all the primes of good reduction we have used), and a/b a rational number such that b and M are relatively prime (otherwise, one of the reductions was not good). If we apply the extended Euclidean algorithm (see the Appendix) to b and M, we find integers c and d such that bc + Md = 1; in other words bc ≡ 1 (mod M). We can extend the idea to a rational congruence by saying

    a/b ≡ ac (mod M).

Moreover, if a and b are less than √(M/2), this representation is unique, for a/b ≡ a'/b' implies ab' ≡ a'b. For example, if M = 9, we can represent all the fractions with numerator and denominator less than or equal to 2, as we see in the following table:

    Modulo 9    Fraction
       0        0/1
       1        1/1 = 2/2
       2        2/1
       3        (none)
       4        -1/2
       5        1/2
       6        (none)
       7        -2/1
       8        -1/1 = -2/2

We have proved the second clause of the following principle.

Principle of Modular Calculation. An integer less than M/2 has a unique image n modulo M. Similarly, a rational number whose numerator and denominator are less than √(M/2) has a unique image modulo M.

In the case of integers, given the image n, calculating an integer is fairly simple: we take n or n - M, according to whether n < M/2 or n > M/2. For rational numbers, this "reconstruction" is not so obvious.


Wang [1981] proposed, and Wang et al. [1982] justified, using the extended Euclidean algorithm (many other authors made this discovery at about the same time). Each integer calculated in Euclid's algorithm is a linear combination of the two data. We apply the algorithm to n and M, to give a decreasing series of integers a_i = b_i n + c_i M. In other words, n ≡ a_i / b_i (mod M). It is very remarkable that if there is a fraction a/b equivalent to n with a and b less than √(M/2), this algorithm will find it, and it will be given by the first element of the series with a_i < √(M/2). We can adapt Euclid's algorithm to do this calculation.

q := M; r := n;
Q := 0; R := 1;
while r ≠ 0 do
    t := remainder(q, r);
    T := Q - ⌊q/r⌋ R;
    q := r; r := t;
    Q := R; R := T;
    if r < √(M/2) then return r/R;
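In Python this half-extended Euclidean search looks as follows. It is a sketch: it returns the first small numerator together with its cofactor, and omits the check that the denominator is also below the bound.

```python
from math import isqrt

def rational_reconstruct(n, M):
    """Find (a, b) with n ≡ a/b (mod M) and a <= sqrt(M/2), by stopping
    the extended Euclidean algorithm on M and n as soon as the remainder
    becomes small (Wang's method)."""
    if n % M == 0:
        return 0, 1
    bound = isqrt(M // 2)
    q, r, Q, R = M, n % M, 0, 1
    while r:
        if r <= bound:
            return r, R                # invariant: r ≡ R * n (mod M)
        quo = q // r
        q, r, Q, R = r, q - quo * r, R, Q - quo * R
    raise ValueError("no small fraction equivalent to n modulo M")

# The M = 9 table above: 5 -> 1/2, 4 -> -1/2, 7 -> -2/1.
assert rational_reconstruct(5, 9) == (1, 2)
assert rational_reconstruct(4, 9) == (1, -2)    # i.e. -1/2
assert rational_reconstruct(7, 9) == (2, -1)    # i.e. -2/1
```

Residues with no small equivalent fraction (3 and 6 in the table) make the function raise, which in the modular algorithm would signal that more primes are needed.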
(1) α - 1 > β. In this case, the terms of degree α + λ - 1 must cancel out the terms of degree δ, therefore λ = δ - α + 1.
(2) α - 1 < β. In this case, the terms of degree β + λ must cancel out the terms of degree δ, therefore λ = δ - β.
(3) α - 1 = β. In this case, the terms of degree α + λ - 1 on the left may cancel. If not, the previous analysis still holds, and λ = δ - α + 1. To analyse the cancellation, we write Y = ∑_{i=0}^{λ} y_i x^i, R = ∑_{i=0}^{α} r_i x^i and S = ∑_{i=0}^{β} s_i x^i. The coefficients of the terms of degree λ + α - 1 are λ r_α y_λ and s_β y_λ. The cancellation is equivalent to λ = -s_β / r_α. We have proved the following result:

Lemma [Risch, 1969]. λ ≤ max(min(δ - β, δ - α + 1), -s_β / r_α), where the last term is included only when β + 1 = α, and only when it gives rise to a positive integer.
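The bound of the lemma is trivial to evaluate; a small Python sketch (the parameter names are ours, not the book's):

```python
from fractions import Fraction

def degree_bound(alpha, beta, delta, r_alpha, s_beta):
    """Risch's bound for the degree of Y.  The extra candidate
    -s_beta/r_alpha only counts when beta + 1 = alpha and it is a
    positive integer."""
    cands = [min(delta - beta, delta - alpha + 1)]
    if beta + 1 == alpha:
        extra = Fraction(-s_beta, r_alpha)
        if extra > 0 and extra.denominator == 1:
            cands.append(int(extra))
    return max(cands)

# Cancellation case: alpha = 2, beta = 1, delta = 5, r_alpha = 1, s_beta = -7
assert degree_bound(2, 1, 5, 1, -7) == 7
# No cancellation possible when beta + 1 != alpha
assert degree_bound(3, 1, 5, 1, -7) == 3
```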

Determining the coefficients y_i of Y is a problem of linear algebra. In fact, the system of equations is triangular, and is easily solved.

5.2.1.2 A theorem of Davenport

Nevertheless, the transformation of the differential equation into equation (1) gives a very interesting result.

Theorem [Davenport, 1985a]. Let A be a class of functions, containing f and g, and let us suppose that the equation y' + fy = g has an elementary solution over A. Then:
either exp(∫f) is algebraic over A, and in this case the theory of integration ought to be able to determine y;
or y belongs to A.

Proof. If exp(∫f) is not algebraic over A, then this function is transcendental over A. We put B = A(∫f). Either exp(∫f) is algebraic over B, or it is transcendental over B. The only way in which exp(∫f) can be algebraic over B is for ∫f to be a sum of logarithms with rational coefficients. But in this case, exp(∫f) is algebraic over A.

* The reader may notice that this analysis is very similar to the analysis of the denominator. This similarity is not the result of pure chance: in fact the amount by which the degree of Y is greater than that of D is the multiplicity of x̂ in the denominator of y, after carrying out the transformation x̂ = 1/x. Davenport [1984a, b] analyses this point.

Formal integration and differential equations

The only other possibility to be considered is that exp(∫f) is transcendental over B. We supposed y elementary over A, and a fortiori, y elementary over B. Now the quotient of two elementary functions is an elementary function, and thus

    y exp(∫f) = ∫ g exp(∫f)

is elementary over B. But the decomposition lemma for integrals of exponential functions implies that this integral has the form h exp(∫f), where h belongs to B. Then y ∈ B, and y is the solution to a problem of Risch: in fact to Risch's problem for the original equation y' + fy = g. But f and g belong to A, and therefore the solution of Risch's problem belongs to the same class.

In fact, this theorem asserts that there are only two ways of calculating an elementary solution of a first order differential equation: either we find a solution to the homogeneous problem, or we find a solution in the same class as the coefficients. Moreover, it suffices to look for the homogeneous algebraic solutions over the class defined by the coefficients.

5.2.2 Second order equations

Every first order equation has, as we have seen, a solution which can be expressed in terms of integrals and of exponentials, and the interesting problem about these equations is to determine whether or not the solution is elementary. For second order equations, it is no longer obvious (or true) that every solution can be expressed in terms of integrals and of exponentials. To be able to state exact theorems, we need a precise definition of this concept.

Definition. Let K be a field of functions. The function θ is a Liouvillian generator over K if:
(a) θ is algebraic over K, that is, if θ satisfies a polynomial equation with coefficients in K;
(b) θ is an exponential over K, that is, if there is an η in K such that θ' = η'θ, which is an algebraic way of saying that θ = exp η;
(c) θ is an integral over K, that is, if there is an η in K such that θ' = η, which is an algebraic way of saying that θ = ∫η.

Definition. Let K be a field of functions. An over-field K(θ1, ..., θn) of K is called a field of Liouvillian functions over K if each θi is a Liouvillian generator over K(θ1, ..., θ_{i-1}). A function is Liouvillian over K if it belongs to a Liouvillian field of functions over K.

If we omit K, we imply C(x): the field of the rational functions.


This definition is more general than the definition of "elementary", for every logarithm is an integral. As we have already said, this includes the trigonometrical functions. We use the notation K(liou) for the class of Liouvillian functions over K.

Theorem [Kovacic 1977, 1986]. There is an algorithm which, given a second order linear differential equation y'' + ay' + by = 0 with a and b rational functions in x, either finds two Liouvillian solutions such that each solution is a linear combination with constant coefficients of these solutions, or proves that there is no Liouvillian solution (except zero).

This theorem and the resulting algorithm are quite complicated, and we cannot give all the details here. It is known that the transformation z = e^{∫a/2} y reduces this equation to z'' + (b - a^2/4 - a'/2)z = 0. Kovacic has proved that this equation can be solved in four different ways.

(1) There is a solution of the form e^{∫f}, where f is a rational function. In this case, the differential operator factorises, and we get a first order equation, the solutions of which are always Liouvillian.
(2) The first case is not satisfied, but there is a solution of the form e^{∫f}, where f satisfies a quadratic equation with rational functions as coefficients. In this case, the differential operator factorises and we get a first order equation, the solutions of which are always Liouvillian.
(3) The first two cases are not satisfied, but there is a non-zero Liouvillian solution. In this case, every solution is an algebraic function.
(4) The non-zero solutions are not Liouvillian.

An example of the use of Kovacic's algorithm is the equation

    y'' = ((4x^6 - 8x^5 + 12x^4 + 4x^3 + 7x^2 - 5x + 1) / (4x^4)) y,

for which one solution is

    y = e^{-(3/2) log x + log(x^2 - 1) + x^2/2 - x - 1/x}
      = x^{-3/2} (x^2 - 1) e^{x^2/2 - x - 1/x}.

Kovacic's method proves also that Bessel's equation, which can be written as

    y'' = ((4n^2 - 1) / (4x^2) - 1) y,


only has elementary solutions when 2n is an odd integer, and finds solutions when n meets this requirement. This algorithm has been implemented in MACSYMA, and seems to work quite well [Saunders, 1981]. In the case of second order equations, the same methods which we used for first order equations can be used for solving equations with a non-zero right-hand side (see Davenport [1985a]), but in fact the results are much more general, and will be dealt with in the next section.

5.2.3 General order equations

The situation for general order equations is, in theory, as well understood as for second order equations, even if the practical details of the algorithms are not as clear.

5.2.3.1 Homogeneous equations

We shall first consider generalisations of Kovacic's algorithm.

Theorem [Singer, 1981]. There is an algorithm which, given a linear differential equation of any order, the coefficients of which are rational or algebraic functions, either finds a Liouvillian solution, or proves that there is none.

If this algorithm finds a Liouvillian solution, the differential operator (which can be supposed of order n) factorises into an operator of order one, for which the solution found is a solution, and an operator of order n - 1, the coefficients of which are likewise rational or algebraic functions. We can apply the algorithm recursively to the latter to find all the Liouvillian solutions. This algorithm is more general than that of Kovacic for second order equations, not only because of the generalisation of order, but also because it allows algebraic functions as coefficients. However, no-one has implemented this algorithm and it seems to be quite complicated and lengthy. Singer and Ulmer [1992] have found a Kovacic-like algorithm for the case of third-order equations. Singer [1985] has also found an algorithm like the previous one, but it looks for solutions which can be expressed in terms of solutions of second order equations and Liouvillian functions. Here again the algorithm seems quite lengthy.

5.2.3.2 Inhomogeneous equations

Here we look at generalisations of Davenport's theorem stated for linear first order equations. There are essentially three possibilities:
(1) There is an elementary solution;


(2) There is a Liouvillian solution, which is not elementary;
(3) There is no Liouvillian solution.
The following theorem lets us distinguish between the first case and the other two by looking for algebraic solutions to the homogeneous equation.

Theorem 1 [Singer and Davenport, 1985; Davenport and Singer, 1986]. Let A be a class of functions containing the coefficients of a linear differential operator L, let g be an element of A, and let us suppose that the equation L(y) = g has an elementary solution over A. Then:
either L(y) = 0 has an algebraic solution over A;
or y belongs to A.

This theorem is a corollary of the following result, which lets us distinguish between the first two cases and the last one, by looking for Liouvillian solutions (of a special kind) over A.

Theorem 2 [Singer and Davenport, 1985; Davenport and Singer, 1986]. Let A be a class of functions, which contains the coefficients of a linear differential operator L, let g be an element of A, and let us suppose that the equation L(y) = g has a Liouvillian solution over A. Then:
case 1) either L(y) = 0 has a solution e^{∫z} with z algebraic over A;
case 2) or y belongs to A.

Moreover, there is an algorithm for the case when A is a field of algebraic functions, which decides on the case in theorem 2, and which reduces the problem of theorem 1 to a problem in algebraic geometry. This algorithm is based on Singer's algorithm (or Kovacic's, as the case may be) for homogeneous equations. But this is quite a new subject, and there are many problems still to be solved and algorithms to be implemented and improved. Solutions in finite form to a differential equation are very useful, if there are any. But there are no such solutions to most differential equations in physics. That is why the following section is important, for we study a method for determining solutions in series to these equations.

212 Formal integration and differential equations

5.3 ASYMPTOTIC SOLUTIONS OF O.D.E.S

5.3.1 Motivation and history

The aim of this part of the book is to describe some recent developments* in the algorithmic methods needed for the "solution" of linear differential equations. Note that here "solution" means "solution in series". We shall only consider equations of the form:

    a_n(x) y^(n) + a_{n-1}(x) y^(n-1) + ... + a_0(x) y = 0        (1)

where it is always supposed that the a_i are polynomials with complex coefficients (we shall discuss this hypothesis later), with no common factor. Of course, differential equations such as (1) have been the subject of innumerable studies. Ever since the first papers by Gauss in 1812 and those of Kummer (1834), most great mathematicians have worked on solutions to these equations in C. We must mention the papers of Riemann (1857), Weierstrass (1856), Cauchy (1835-1840), before passing on to the fundamental work of Fuchs (1865), Frobenius (1873), Poincaré (1881), Birkhoff (1909), to name only the most important ones. Today these studies have been taken up again by P. Deligne (1976), B. Malgrange (1980) and J.P. Ramis (1981) from the theoretical standpoint. Why this interest in equations such as (1)?

There are many answers:
1) obvious theoretical interest,
2) enormous practical interest; we quote just a few applications of linear differential equations: solution by separation of variables of problems with partial derivatives, solution of eigenvalue problems (Sturm-Liouville problems), generation of numerous special functions, etc.
What can we hope to contribute to such a branch of mathematics?

* This research is directed by J. Della Dora in the Computer Algebra group of the Laboratory LMC at Grenoble, with the help of A. Barkatou, C. Dicrescenzo, A. Hilali, F. Richard-Jung, E. Tournier, A. Wazner, H. Zejli-Najid. The work is carried out in close collaboration with D. Duval, currently at the University of Limoges, with the University of Strasbourg (J.P. Ramis, J. Thomann), and with the Fourier Institute in Grenoble (B. Malgrange).


Firstly, to be able for the first time to generate without error the algebraic solutions to these equations. The result is extremely important in its own right as we shall show later, for it enables us to deal with theoretical problems previously inaccessible. A second application, which we shall not consider here, is to provide a "generator" of special functions (numerical values, precise index characteristics ...). This second set of problems alone requires an enormous amount of work, reviewing all that has already been done in this field. Finally, we have to consider that this subject is a test case: if software for solving ordinary differential equations algebraically were found, it would open the door to work on software for solving partial differential equations. The future is rich in problems.

5.3.2 Classification of singularities

Let us consider an equation such as (1). The singularities of the solutions of this equation are localised at the zeros of a_n(x) = 0, or possibly at infinity. By a translation such as x → x − α we can reduce the problem, for zeros at a finite distance, to a singularity at the origin (the substitution x → 1/x similarly reduces the singularity at infinity to one at zero). Classically, we suppose that (1) has a singularity at zero. This way of proceeding is in fact erroneous. For we know that, away from the singularities, the solutions are analytic functions and therefore very regular. Now, choosing a root of a polynomial leads, whatever numerical method is used (Newton, ...), to an approximation to this root and, by a numerical translation, we shall place ourselves at a regular point, and therefore we shall lose all our information about the singularity. We cannot proceed thus. We outline briefly a way of getting out of this problem (a way which of course requires great changes of attitude by the calculator). First of all for the sake of simplicity we can suppose that the coefficients of a_n are rational numbers (therefore a_n belongs to Q[x]) and furthermore that they are integers. Here the roots of a_n are not determined by numerical values (which would of course be only approximations in most cases), but simply by the computational rule "a_n(α) = 0", which is true for all the roots of a_n. Then performing the translation x → x − α has a very precise meaning, which we illustrate by an example. The new operator we obtain has its coefficients in (Q[x]/a_n)[y]. When we look at the differential operator L(y) = (x² + 2)y'' + y we see that its coefficients belong to Z[x]. Let α denote a root of x² + 2; therefore α satisfies α² = −2 and that is the only information we can use.
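This style of calculation, in which α is known only through the rewriting rule α² = −2, is easy to mimic on a machine. The following sketch (ours, in plain Python, and not part of any of the systems described in this book) stores an element a + bα of Q(α) as a pair (a, b), and substitutes z + α into x² + 2 by Horner's rule:

```python
from fractions import Fraction

# Elements of Q(alpha), with alpha**2 = -2, stored as pairs (a, b) = a + b*alpha.
def add(u, v):
    return (u[0] + v[0], u[1] + v[1])

def mul(u, v):
    a, b = u
    c, d = v
    # (a + b*alpha)(c + d*alpha) = ac + (ad + bc)*alpha + bd*alpha**2,
    # and alpha**2 is rewritten as -2: the only fact we know about alpha.
    return (a * c - 2 * b * d, a * d + b * c)

ZERO = (Fraction(0), Fraction(0))
ALPHA = (Fraction(0), Fraction(1))

def times_z_plus_alpha(p):
    # multiply a polynomial in z (coefficient list, low degree first) by (z + alpha)
    out = [ZERO] * (len(p) + 1)
    for i, c in enumerate(p):
        out[i + 1] = add(out[i + 1], c)      # c * z
        out[i] = add(out[i], mul(c, ALPHA))  # c * alpha
    return out

def substitute(coeffs_high_to_low):
    # evaluate p(z + alpha) by Horner's rule
    res = [ZERO]
    for c in coeffs_high_to_low:
        res = times_z_plus_alpha(res)
        res[0] = add(res[0], c)
    return res

# p(x) = x**2 + 2, coefficients written highest degree first
one, zero, two = Fraction(1), Fraction(0), Fraction(2)
p_shifted = substitute([(one, zero), (zero, zero), (two, zero)])
```

The resulting coefficient list reads 0 + 2αz + z², that is x² + 2 = z² + 2αz, exactly the translated coefficient obtained by hand in the text.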


Let us perform the translation z = x − α; then

    z² = x² − 2αx + α² = x² − 2αx − 2

and, since 2αz = 2αx − 2α² = 2αx + 4,

    z² + 2αz = x² + 2,

and the equation becomes

    (z² + 2αz) y'' + y = L(y),

that is:

    z (z + 2α) y'' + y = L(y).

We have indeed performed a translation which reduces the singularity to zero. It is worth noting that there is no need to repeat this calculation for the other root. We can therefore suppose from now on that the origin is a root of a_n, that is a_n(0) = 0. We have already supposed that the a_i have no common factor, therefore there is at least one index j such that a_j(0) differs from 0. To classify the singularities, we need another concept, that of

Newton's polygon of a differential operator.

We give an example to explain this. Let the operator be

    L = x⁵δ⁴ + x³δ³ + 2x³δ² + δ + 1,    where we write δ = d/dx.

If x^i δ^j is a differential monomial belonging to L, we associate to it the point in Z²: (j, i − j). Therefore we ought to consider the points in Z²:

    (4, 1), (3, 0), (2, 1), (1, −1), (0, 0).


S will denote the set of points of Z² associated with the monomials of L. Then, for a point (a, b) thus found, we consider the quadrant:

    Qt(a, b) = {(x, y) ∈ R² ; x ≤ a and y ≥ b}.

We define

    QT = ∪_{(a,b) ∈ S} Qt(a, b).

Newton's polygon is the frontier of the convex hull of QT.
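The construction of the polygon is easily mechanised: each monomial x^i δ^j contributes the point (j, i − j), and only the lower frontier of the convex hull matters. A sketch (ours) for the example operator above, using Andrew's monotone-chain hull:

```python
from fractions import Fraction

# monomials c * x**i * delta**j of L = x**5 d**4 + x**3 d**3 + 2 x**3 d**2 + d + 1,
# stored as (i, j); each contributes the point (j, i - j) in Z**2
monomials = [(5, 4), (3, 3), (3, 2), (0, 1), (0, 0)]
points = [(j, i - j) for (i, j) in monomials]

def cross(o, a, b):
    # z-component of (a - o) x (b - o): > 0 for a left turn
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def lower_hull(pts):
    # lower part of the convex hull: the finite edges of the Newton polygon
    pts = sorted(set(pts))
    hull = []
    for p in pts:
        while len(hull) >= 2 and cross(hull[-2], hull[-1], p) <= 0:
            hull.pop()
        hull.append(p)
    return hull

hull = lower_hull(points)
slopes = [Fraction(q[1] - p[1], q[0] - p[0]) for p, q in zip(hull, hull[1:])]
```

For this operator the hull vertices are (0, 0), (1, −1), (3, 0), (4, 1); the non-negative slopes 1/2 and 1 are not zero, so 0 is an irregular singularity by the criterion of Fuchs stated in the text.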


Then we can state Fuchs' characterisation:

We shall say that 0 is a regular singularity for L if the Newton polygon of L has a single slope and if this slope is zero. Otherwise we shall say that the singularity is irregular.

Note that in the classic development [Ince, 1956], other criteria are used to define the nature of the singularities. Our definition has the advantage of greater simplicity. It is easy to see that if 0 is a regular singularity, we can write L in canonical form

    L = xⁿ R₀(x) δⁿ + x^{n−1} R₁(x) δ^{n−1} + ... + R_n(x),

where R₀(0) is non-zero and the R_i are polynomials.

5.3.3 A program for solving o.d.e.s at a regular singularity

We shall explain this case in more detail, since it is simpler than the general case and illustrates some of the difficulties encountered. The approach outlined here follows a classic algorithm of Frobenius (other algorithms are possible). (1) We look at the action of L on a symbolic power x^λ, and see that

    L(x^λ) = x^λ f(x, λ)    where    f(x, λ) = Σ_{j=0}^{m} f_j(λ) x^j        (1)


with f_j a polynomial in the variable λ, and m a function of the degrees of R₀, ..., R_n. (2) We can look formally for solutions of the form

    x^λ Σ_{j=0}^{+∞} g_j x^j    with g₀ ≠ 0.

The linearity of L lets us write

    L(x^λ Σ_{j=0}^{+∞} g_j x^j) = Σ_{j=0}^{+∞} g_j L(x^{λ+j}).

(3) Using (1) it is easily seen that

    L(x^λ Σ_{j=0}^{+∞} g_j x^j) = x^λ Σ_{j=0}^{+∞} g_j ( Σ_{i=0}^{k} f_i(λ+j) x^i ),

and therefore x^λ Σ g_j x^j will be a solution of L if and only if the following infinite system is satisfied:

    ( f₀(λ)      0             0        ...            ) ( g₀ )
    ( f₁(λ)      f₀(λ+1)       0        ...            ) ( g₁ )
    (   ⋮          ⋮           ⋱                       ) (  ⋮ )  =  0.
    ( f_k(λ)     f_{k−1}(λ+1)  ...      f₀(λ+k)  ...   ) ( g_k )
    (   ⋮          ⋮           ⋮          ⋮      ⋱     ) (  ⋮ )

This infinite system can be broken down into two parts:
- the initial conditions part

    f₀(λ) g₀ = 0
    f₁(λ) g₀ + f₀(λ+1) g₁ = 0
    f₂(λ) g₀ + f₁(λ+1) g₁ + f₀(λ+2) g₂ = 0
    .........................
    f_{k−1}(λ) g₀ + ... + f₀(λ+k−1) g_{k−1} = 0

- the other part is in fact a linear recurrence equation


    f_k(λ+n) g_n + .............. + f₀(λ+k+n) g_{n+k} = 0    for n ≥ 0.
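To make the recurrence concrete, consider the Bessel equation of order zero, x²y'' + xy' + x²y = 0 (it reappears in section 5.3.5.1). Here L(x^λ) = x^λ(λ² + x²), so f₀(λ) = λ², f₂(λ) = 1 and all other f_j are zero; with the root λ = 0 of f₀, the recurrence reduces to g_j = −g_{j−2}/j². A sketch of this computation in exact arithmetic (the choice of equation is ours, not part of the general algorithm):

```python
from fractions import Fraction

def frobenius_bessel0(nterms, lam=0):
    # run f0(lam + j) g_j + f2(lam + j - 2) g_{j-2} = 0
    # for x**2 y'' + x y' + x**2 y = 0, where f0(l) = l**2 and f2(l) = 1
    g = [Fraction(0)] * nterms
    g[0] = Fraction(1)          # g0 is free, and must be nonzero
    for j in range(2, nterms):
        g[j] = -g[j - 2] / Fraction((lam + j) ** 2)
    return g

g = frobenius_bessel0(12)
```

With 12 terms this reproduces the series 1 − x²/4 + x⁴/64 − x⁶/2304 + x⁸/147456 − x¹⁰/14745600, the first solution printed by DESIR in section 5.3.5.1.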

We see that the g_i can be expressed algebraically as rational fractions in λ, say for example:

    g₁ = − f₁(λ) g₀ / f₀(λ+1).

It is important to note here that the g_i will be obtained by exact calculations. The only difficulty is the first equation:

    f₀(λ) g₀ = 0.

Since g₀ cannot be set equal to zero, the only solution is to choose λ such that f₀(λ) = 0, but here we come up against a problem we have already encountered, made worse by the difficulty which arises from the fact that f₀ is an element of (Q[x]/(a_n))[y], and we have therefore to make a further extension of (Q[x]/(a_n))[y] and work in it. We shall not go any deeper into this delicate question, but shall conclude with the remark that certain denominators of the g_i can be zero. The Frobenius program can deal with the second difficulty mentioned, thanks to the work of Dicrescenzo and Duval [1985].

5.3.4 The general structure of the program

The program DESIR, presented in detail by Tournier [1987], is made up of the following basic modules: FROBENIUS, which we have just outlined, lets us deal with the case of regular singularities; NEWTON, which lets us deal with the case of irregular singularities. We state here the basic idea of the method, starting from the following simple example: let the differential operator be

    L = x²δ + φ(x),    φ ∈ k[x],    φ(0) ≠ 0.

We construct the Newton polygon associated with L:


We see that in this case the Newton polygon has a slope λ = 1. It is well known that we can find n linearly independent algebraic solutions u₁, u₂, ..., u_n of L(u) = 0 which are written

    u_i = e^{Q_i(x)} v(x),

where Q_i(x) is a polynomial in x^{−1/q_i}, with q_i ∈ Z. In our example, the solutions will therefore be of the form:

    u = e^{a/x^λ} v(x)    with λ = 1.

With the operator L and with the slope λ = 1 we associate a new operator L_a obtained by the change of variable

    u = e^{a/x} v(x),

which gives

    L(e^{a/x} v(x)) = x² (−(a/x²) e^{a/x} v(x) + e^{a/x} δv(x)) + φ(x) e^{a/x} v(x)
                    = e^{a/x} (x²δ + (φ(x) − a)) v(x);

therefore we make the operator

    L_a = x²δ + (φ(x) − a)

correspond to L.
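The identity L(e^{a/x} v) = e^{a/x}(x²δ + (φ(x) − a))v can also be checked numerically. The sketch below (ours) uses dual numbers to differentiate exactly at a point; the particular v, φ and a are arbitrary test choices, not dictated by the method:

```python
import math

class Dual:
    """Forward-mode dual number val + eps*der: arithmetic carries derivatives."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.der + o.der)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val, self.der * o.val + self.val * o.der)
    __rmul__ = __mul__
    def __truediv__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val / o.val,
                    (self.der * o.val - self.val * o.der) / (o.val * o.val))
    def __rtruediv__(self, o):  # number / Dual
        return Dual(o) / self
    def exp(self):
        e = math.exp(self.val)
        return Dual(e, e * self.der)

a = 2.0
phi = lambda x: 5.0 + x              # arbitrary phi with phi(0) != 0
v = lambda x: x * x * x + 1.0        # arbitrary smooth v
vprime = lambda x: 3.0 * x * x       # its derivative

x0 = 0.7
x = Dual(x0, 1.0)
F = (a / x).exp() * v(x)             # F = e^(a/x) v(x); F.der is F'(x0)

lhs = x0**2 * F.der + phi(x0) * F.val        # (x**2 d + phi) applied to e^(a/x) v
rhs = math.exp(a / x0) * (x0**2 * vprime(x0) + (phi(x0) - a) * v(x0))
```

The two sides agree to rounding error, as the hand calculation above predicts.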

The term of L_a corresponding to the point s₀ of the polygon is the polynomial P(a) = φ(0) − a. We call the equation P(a) = 0 the characteristic equation of L_a. Therefore if we give a the value of the root of the characteristic equation, say:

    a = φ(0),


the point s₀ disappears, and the Newton polygon becomes one with a single, zero slope: a Newton polygon which characterises a regular singularity, whose solutions we know how to determine by the Frobenius method. Therefore the general solution will be of the form:

    e^{a/x^λ} w(x),

where w(x) is a solution given by the Frobenius algorithm, and a is a root of the characteristic equation. The example given here is that of the simple generic case, that is the case where a is a simple root of the characteristic equation, and where we get a zero slope after just one change of variable. A complete account would have to consider the nature of the roots (simple, multiple, differing from one another by an integer) of the characteristic equation. For a detailed study of these various cases, we refer the reader to Tournier [1987], and to section 5.3.5, where we give several examples of DESIR.

The other modules

Associated with the two modules FROBENIUS and NEWTON, which deal with differential equations, is a module which treats the case of differential systems [Hilali, 1987]:

    Ẋ = AX,

where A is meromorphic at the origin. There are also four fundamental modules, viz:

1) an arithmetical module which makes it possible to manipulate the various algebraic extensions required [Dicrescenzo and Duval, 1984, 1985], [Duval, 1987];
2) a module for numerical resummation, which makes it possible to calculate special functions [Ramis and Thomann, 1980];
3) a module for graphical visualisation [Richard, 1988];
4) a module for treating linear difference equations [Della Dora and Tournier, 1984, 1986], [Tournier, 1987].

These researches have enabled the production of the DESIR software, of which we present various examples in the following section.

5.3.5 Some examples treated by "DESIR"

DESIR is a program developed under the REDUCE system. The examples given below have been implemented on an IBM RS/6000. In these examples there are:
- the call which is made by the command: desir();
- the interactive dialogue for introducing data;
- printing solutions of the differential equation with the number of terms required;
- verification of the correctness of this solution by putting it back into the given equation; the answer is then an expression, all of whose terms are of degree higher than that of the solution.

5.3.5.1 Examples with Bessel's equation

1: load desir;
loading ...

DESIR : solutions formelles d'equations differentielles
lineaires homogenes au voisinage de zero, point singulier
regulier ou irregulier, ou point regulier

Version 3.1 - Mai 1991
Appel par desir();


2: on time;
Time: 300 ms
3: desir();

ATTENTION : chaque donnee doit etre suivie de ; ou de $

L'equation est de la forme

a(0)(x)d~0 + a(1)(x)d~1 + .... + a(n)(x)d~n = 0

ordre de l'equation ?
3: 2;
Donner les coefficients a(j)(x), j = 0..n
3: x**2;x;x**2;

a(0) = X**2
a(1) = X
a(2) = X**2

correction ? ( oui; / non; )
3: non;
transformation ? (oui;/non;)
3: non;
nombre de termes desires pour la solution ?
3: 10;

LES 2 SOLUTIONS CALCULEES SONT LES SUIVANTES

============== SOLUTION No 1 ==============

 - (X**10/14745600 - X**8/147456 + X**6/2304 - X**4/64 + X**2/4 - 1)

============== SOLUTION No 2 ==============

 - (LOG(X)*X**10/14745600 - LOG(X)*X**8/147456 + LOG(X)*X**6/2304
    - LOG(X)*X**4/64 + LOG(X)*X**2/4 - LOG(X) - 137*X**10/884736000
    + 25*X**8/1769472 - 11*X**6/13824 + 3*X**4/128 - X**2/4)

Time: 1670 ms
4: solvalide(first ws,1,10);

La solution numero 1 est

 (X**10 - 100*X**8 + 6400*X**6 - 230400*X**4 + 3686400*X**2 - 14745600)/14745600

La partie reguliere du reste est de l'ordre de x**(11)
Si on reporte cette solution dans l'equation, le terme significatif
du reste est :

 - X**12/14745600

Time: 390 ms

5.3.5.2 Another example

1: load desir;
loading ...

DESIR : solutions formelles d'equations differentielles
lineaires homogenes au voisinage de zero, point singulier
regulier ou irregulier, ou point regulier

Version 3.1 - Mai 1991
Appel par desir();

2: on time;
Time: 270 ms
3: desir();

***** INTRODUCTION DES DONNEES *****

ATTENTION : chaque donnee doit etre suivie de ; ou de $

L'equation est de la forme

a(0)(x)d~0 + a(1)(x)d~1 + .... + a(n)(x)d~n = 0

ordre de l'equation ?
3: 4;
Donner les coefficients a(j)(x), j = 0..n
3: x+1; 2*x**2*(x+1); x**4; 5*x**7/2; x**10;

a(0) = X + 1
a(1) = 2*X**2*(X + 1)
a(2) = X**4
a(3) = 5*X**7/2
a(4) = X**10

correction ? ( oui; / non; )
3: non;
transformation ? (oui;/non;)
3: non;
nombre de termes desires pour la solution ?
3: 5;

LES 4 SOLUTIONS CALCULEES SONT LES SUIVANTES

============== SOLUTION No 1 ==============

 - E**((SQRT(X)*SQRT(6) + 1)/X)*X**(-4)
   *(1348195877*SQRT(X)*X**2/(3072*SQRT(6))
     + 1330595*SQRT(X)*X/(96*SQRT(6)) + 173*SQRT(X)/(2*SQRT(6))
     - 174069763*X**2/4608 - 9173*X/16 - 1)

============== SOLUTION No 2 ==============

 E**(( - SQRT(X)*SQRT(6) + 1)/X)*X**(-4)
   *(1348195877*SQRT(X)*X**2/(3072*SQRT(6))
     + 1330595*SQRT(X)*X/(96*SQRT(6)) + 173*SQRT(X)/(2*SQRT(6))
     + 174069763*X**2/4608 + 9173*X/16 + 1)

============== SOLUTION No 3 ==============

 X**(13/27)*E**(( - 32*X + 3)/(12*X**2))*X**14
   *(3275412370921168749152*X**5/12709329141645
     + 90412648939865456*X**4/10460353203 + 10833178373456*X**3/43046721
     + 353835104*X**2/59049 + 25336*X/243 + 1)

============== SOLUTION No 4 ==============

 - X**(1/54)*E**((2*X + 3)/(3*X**2))*X**10
   *(4550251662719798533*X**5/1626794130130560
     - 863316799848061*X**4/1338925209984 + 48578095525*X**3/344373768
     - 7318955*X**2/236196 + 1333*X/243 - 1)

Time: 431780 ms plus GC time: 6350 ms
4: solvalide(first ws,1,2);

La solution numero 1 est

 - E**((SQRT(X)*SQRT(6) + 1)/X)
   *(4044587631*SQRT(X)*X**2 + 127737120*SQRT(X)*X + 797184*SQRT(X)
     - 348139526*SQRT(6)*X**2 - 5283648*SQRT(6)*X - 9216*SQRT(6))
   /(9216*SQRT(6)*X**4)

La partie reguliere du reste est de l'ordre de x**( - 5/2)
Si on reporte cette solution dans l'equation, le terme significatif
du reste est :

 135986861435*SQRT(X)*E**((SQRT(X)*SQRT(6) + 1)/X)/(12288*SQRT(6))
 - E**((SQRT(X)*SQRT(6) + 1)/X)*X
   *(3275045050305*SQRT(X)*SQRT(6)*X**3 + 4209434611080*SQRT(X)*SQRT(6)*X**2
     - 466934674845*SQRT(X)*SQRT(6)*X - 1053600588472*SQRT(X)*SQRT(6)
     + 3822135311295*X**4 + 14323961393910*X**3 + 5789417431041*X**2
     - 4240522163934*X - 1631842337220)
   /(147456*SQRT(X)*SQRT(6))

Time: 162220 ms plus GC time: 5690 ms


Appendix. Algebraic background

A.1 SQUARE-FREE DECOMPOSITION

In this section we consider a polynomial p of R[x], where R is an integral ring of zero characteristic* (for example, the ring Z of integers). It is possible that p has multiple factors, that is that there is a polynomial q such that q² divides p (perhaps q³ or a higher power divides p, but in this case it is still true that q² divides p). Obviously, we can find all the multiple factors by doing a complete factorisation of p, but there is a very much simpler way, which we now describe. It suffices to consider monic p, that is with 1 as leading coefficient. Suppose that p is factorised into a product of linear factors:

    p = ∏_{i=1}^{n} (x − a_i)^{n_i},

where the a_i can be algebraic quantities over R (but we only do this factorisation for the proof). The derivative of p, which is calculated purely algebraically starting from the coefficients of p, is then



    p′ = Σ_{i=1}^{n} ( n_i (x − a_i)^{n_i − 1} ∏_{j=1, j≠i}^{n} (x − a_j)^{n_j} ).

* We make this hypothesis in order to exclude the case of x^p + 1 over the field with p elements: this polynomial is irreducible but its derivative is zero. This and similar cases are not very difficult to handle; see, for example, Appendix 3 of Davenport [1981].


It is obvious that, for every i, (x − a_i)^{n_i − 1} divides p and p′. Moreover, every polynomial which divides p is a product of the (x − a_i), to a power less than or equal to n_i. Therefore the g.c.d. of p and p′ is almost determined: it is the product of the (x − a_i), to a power which is n_i − 1 or n_i. But this power cannot be n_i, for (x − a_i)^{n_i} divides all the terms of p′ except one, and cannot therefore divide p′. So we have proved that

    gcd(p, p′) = ∏_{i=1}^{n} (x − a_i)^{n_i − 1}.

Thus, p / gcd(p, p′) = ∏_{i=1}^{n} (x − a_i). For the sake of brevity let us call this object q. Then

    gcd(q, gcd(p, p′)) = ∏_{i=1, n_i > 1}^{n} (x − a_i),

from which we deduce

    q / gcd(q, gcd(p, p′)) = ∏_{i=1, n_i = 1}^{n} (x − a_i).

The lefthand side of this equation is the product of all the non-multiple factors of p, and we have shown how to calculate it using only the operations of derivation, (exact) division and g.c.d. These operations do not make us leave R[x], which implies that this product can be calculated in R[x], and by a fairly efficient method. In addition, we have calculated as an intermediary result gcd(p, p′), which has the same factors as the multiple factors of p, but with their powers reduced by one. If the same calculation we have just applied to p is applied to gcd(p, p′), we are calculating all the factors of gcd(p, p′) of multiplicity one, that is the factors of p of multiplicity two. And similarly for the factors of multiplicity three, etc. That is, by quite simple calculations in R[x], we can break down p in the form ∏ p_i^i, where p_i is the product of all the factors of p of multiplicity i. In this decomposition of p, each p_i is itself without multiple factors, and the p_i are relatively prime. This decomposition is called the square-free decomposition of p (also called "quadrat-frei"). The method we have shown may seem so effective that no improvement is possible, but Yun [1976, 1977] has found a cunning variation, which shows that, asymptotically, the cost of calculating a square-free decomposition of p is of the same order of magnitude as the cost of calculating gcd(p, p′). We see that all these calculations are based on the possibility of our being able to calculate the derivative of p. Calculating a square-free decomposition of an integer is very much more complicated: it is almost as expensive as factorising it.

A.2 THE EXTENDED EUCLIDEAN ALGORITHM

According to Euclid (see the historical notes in Knuth [1973]), we can calculate the g.c.d. of two integers q and r by the following method:

    if |q| < |r| then t := q; q := r; r := t;
    while r ≠ 0 do t := remainder(q, r);
                   q := r; r := t;
    return q;

(where we have used indentation to indicate the structure of the program, rather than begin ... end, and the function "remainder" calculates the remainder from the division q/r). In fact, the same algorithm holds for the polynomials in one variable over a field: it suffices to replace the meaning "absolute value" of the signs | | by the meaning "degree". This algorithm can do more than the simple calculation of the g.c.d. At each stage, the values of the symbols q and r are linear combinations of the initial values. Thus, if we follow these linear combinations at each stage, we can determine not only the g.c.d., but also its representation as a linear combination of the data. Therefore we can rewrite Euclid's algorithm as follows, where the brackets [...] are used to indicate pairs of numbers and Q and R are the pairs which give the representation of the present values of q and r in terms of the initial values. The assignment to T may seem obscure, but the definition of remainder gives t the value q − ⌊q/r⌋·r, where ⌊q/r⌋ is the quotient of q over r. The first value this algorithm returns is the g.c.d. of q and r (say p), and the second is a pair [a, b] such that p = aq + br. This equality is called Bezout's identity, and the algorithm we have just explained which calculated it is called the extended Euclidean algorithm. We said that Euclid's algorithm applies equally to polynomials in one variable. The same is true of the extended Euclidean algorithm. But there is a little snag here. Let us suppose that q and r are two polynomials with integer coefficients. Euclid's algorithm calculates their g.c.d., which is also a polynomial with integer coefficients, but it is possible that the intermediary


Extended Euclidean Algorithm

    if |q| < |r| then t := q; q := r; r := t;
                      Q := [0, 1]; R := [1, 0];
                 else Q := [1, 0]; R := [0, 1];
    while r ≠ 0 do t := remainder(q, r);
                   T := Q − ⌊q/r⌋ R;
                   q := r; r := t;
                   Q := R; R := T;
    return q and Q;

values are polynomials with rational coefficients. For example, if q = x² − 1 and r = 2x² + 4x + 2, then the quotient ⌊q/r⌋ is 1/2. After this division, our variables have the following values:

    q : 2x² + 4x + 2        Q : [0, 1]
    r : −2x − 2             R : [1, −1/2]

The remainder from the second division is zero, and we get the g.c.d. −2x − 2, which can be expressed as

    1 (x² − 1) − (1/2) (2x² + 4x + 2).

Perhaps it is more natural to make the g.c.d. monic, which then gives us as g.c.d. x + 1, which is expressible as

    −(1/2) (x² − 1) + (1/4) (2x² + 4x + 2).
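The integer version of the algorithm transcribes directly into a program; a brief sketch (ours) in Python, following the pseudo-code above:

```python
def extended_euclid(q, r):
    """Return (p, a, b) with p = gcd(q, r) and p = a*q + b*r (Bezout's identity)."""
    Q, R = [1, 0], [0, 1]      # current q and r as combinations of the inputs
    if abs(q) < abs(r):
        q, r = r, q
        Q, R = R, Q
    while r != 0:
        quotient = q // r      # for polynomials this would be the polynomial quotient
        q, r = r, q - quotient * r
        Q, R = R, [Q[0] - quotient * R[0], Q[1] - quotient * R[1]]
    return q, Q[0], Q[1]

p, a, b = extended_euclid(240, 46)
```

For instance extended_euclid(240, 46) returns the g.c.d. 2 together with a pair satisfying a·240 + b·46 = 2.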

A.3 PARTIAL FRACTIONS

Two fractions (of numbers or of polynomials in one variable) can be added:

    a/p + b/q = (aq/gcd(p, q) + bp/gcd(p, q)) / lcm(p, q).


Is it possible to transform a fraction in the other direction, that is, can we rewrite c/pq in the form (a/p) + (b/q)? Such a decomposition of c/pq is called a decomposition into partial fractions. This is only possible if p and q are relatively prime, for otherwise the denominator of (a/p) + (b/q) would be lcm(p, q), which differs from pq. If p and q are relatively prime, the g.c.d. is 1. By the theorem of the preceding paragraph, there are two (computable!) integers (or polynomials, depending on the case) P and Q with Pp + Qq = 1. Then

    c/pq = c(Pp + Qq)/pq = cQq/pq + cPp/pq = cQ/p + cP/q,

and we have arrived at the desired form. In practice, c/pq is often a proper fraction, that is the absolute value (if it is a question of numbers, the degree if it is a question of polynomials) of c is less than that of pq. In that case, we want the partial fractions to be proper, which is not generally guaranteed by our method. But it suffices to replace cQ by the remainder after division of it by p, and cP by the remainder after its division by q. This procedure obviously extends to the case of a more complicated denominator, always with the proviso that all the factors are relatively prime. It suffices to take out the factors successively, for example,

    c/(p₁ p₂ ... p_n) = a₁/p₁ + b₁/(p₂ ... p_n)
                      = a₁/p₁ + a₂/p₂ + b₂/(p₃ ... p_n)
                      = ... = a₁/p₁ + a₂/p₂ + ... + a_n/p_n.

In fact, there are more efficient methods than this, which separate the p_i in a more balanced fashion [Abdali et al., 1977].
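For integers the whole procedure, including the reduction to proper fractions, fits in a few lines; a sketch (ours):

```python
from fractions import Fraction

def egcd(m, n):
    """Return (g, s, t) with g = gcd(m, n) = s*m + t*n."""
    if n == 0:
        return m, 1, 0
    g, s, t = egcd(n, m % n)
    return g, t, s - (m // n) * t

def partial_fractions(c, p, q):
    """Write c/(p*q) as w + a/p + b/q with 0 <= a < p and 0 <= b < q."""
    g, P, Q = egcd(p, q)           # P*p + Q*q = 1 when p, q are coprime
    assert g == 1, "p and q must be relatively prime"
    a, b = c * Q, c * P            # c/(pq) = cQ/p + cP/q
    # make both fractions proper, collecting the integer parts in w
    wa, a = divmod(a, p)
    wb, b = divmod(b, q)
    return wa + wb, a, b

w, a, b = partial_fractions(1, 3, 5)
```

For example 1/15 decomposes as −1 + 2/3 + 2/5, which the last line computes.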

A.4 THE RESULTANT

It quite often happens that we have to consider whether two polynomials, which are usually relatively prime, can have a common factor in certain special cases. The basic algebraic tool for solving this problem is called the resultant. In this section we shall define this object and we shall give some properties of it. We take the case of two polynomials f and g in one variable x and with coefficients in a ring R. We write f = Σ_{i=0}^{n} a_i x^i and g = Σ_{i=0}^{m} b_i x^i.


Definition. The Sylvester matrix of f and g is the matrix

    ( a_n  a_{n−1}   ...    a_1    a_0     0     ...    0  )
    (  0     a_n   a_{n−1}  ...    a_1    a_0    ...    0  )
    (  ⋮      ⋱       ⋱             ⋱      ⋱      ⋱     ⋮  )
    (  0     ...      0     a_n  a_{n−1}  ...   a_1   a_0  )
    ( b_m  b_{m−1}   ...    b_1    b_0     0     ...    0  )
    (  0     b_m   b_{m−1}  ...    b_1    b_0    ...    0  )
    (  ⋮      ⋱       ⋱             ⋱      ⋱      ⋱     ⋮  )
    (  0     ...      0     b_m  b_{m−1}  ...   b_1   b_0  )

where there are m lines constructed with the a_i, and n lines constructed with the b_i.

Definition. The resultant of f and g, written Res(f, g), or Res_x(f, g) if there has to be a variable, is the determinant of this matrix.

Well-known properties of determinants imply that the resultant belongs to R, and that Res(f, g) and Res(g, f) are equal, to within a sign. We must note that, although the resultant is defined by a determinant, this is not the best way of calculating it. Because of the special structure of the Sylvester matrix, we can consider Euclid's algorithm as Gaussian elimination in this matrix (hence the connection between the resultant and the g.c.d.). One can also consider the sub-resultant method as an application of the Sylvester identity (described in section 2.8.2 "Bareiss's algorithm") to this elimination. It is not very difficult to adapt advanced methods (such as the method of sub-resultants described in section 2.3.3, or the modular and p-adic methods described in chapter 4) to the calculation of the resultant. Collins [1971] and Loos [1982] discuss this problem. We now give a version of Euclid's algorithm for calculating the resultant. We denote by lc(p) the leading coefficient of the polynomial p(x), by degree(p) its degree, and by remainder(p, q) the remainder from the division of p(x) by q(x). We give the algorithm in a recursive form.

Algorithm resultant;
    Input f, g; Output r;
    n := degree(f); m := degree(g);
    if n > m then r := (−1)^{nm} resultant(g, f)
    else a_n := lc(f);
         if n = 0 then r := a_n^m
         else h := remainder(g, f);
              if h = 0 then r := 0
              else p := degree(h);
                   r := a_n^{m−p} resultant(f, h);
    return r;

We write h = Σ_{i=0}^{p} c_i x^i, with c_i = 0 for i > p. This algorithm does indeed give the resultant of f and g for, when n ≤ m and n ≠ 0, the polynomials x^i h (for 0 ≤ i < n) are linear combinations of the x^i g and the x^j f (for 0 ≤ j < m), and therefore we are not changing the determinant of the Sylvester matrix of f and g by replacing the b_i by the c_i for 0 ≤ i < m. Now this new matrix has the form

    ( A  ⋆ )
    ( 0  B )

where A is a triangular matrix with determinant a_n^{m−p} and B is the Sylvester matrix of f and h. From this algorithm we immediately get

Proposition 1. Res(f, g) = 0 if and only if f and g have a factor in common.

It is now easy to prove the following propositions:

Proposition 2. If the α_i are the roots of f, then

    Res(f, g) = a_n^m ∏_{i=1}^{n} g(α_i).

Proposition 3. If the β_i are the roots of g, then

    Res(f, g) = (−1)^{mn} b_m^n ∏_{i=1}^{m} f(β_i).

Proposition 4. Res(f, g) = a_n^m b_m^n ∏_{i=1}^{n} ∏_{j=1}^{m} (α_i − β_j).


Proof. [Duval, 1987]. We write the right hand sides of the three propositions as

    R₂(f, g) = a_n^m ∏_{i=1}^{n} g(α_i),
    R₃(f, g) = (−1)^{mn} b_m^n ∏_{i=1}^{m} f(β_i),
    R₄(f, g) = a_n^m b_m^n ∏_{i=1}^{n} ∏_{j=1}^{m} (α_i − β_j).

It is clear that R₂(f, g) and R₃(f, g) are equal to R₄(f, g). The three propositions are proved simultaneously, by induction on the integer min(n, m). If f and g are swapped, their resultant is multiplied by (−1)^{nm}, and so is R₄(f, g). We can therefore suppose that n ≤ m. Moreover R₂(f, g) is equal to a_n^m when n = 0, as is the resultant of f and g, and R₄(f, g) is zero when n > 0 and h = 0, as is the resultant. It only remains to consider the case when m ≥ n > 0 and h ≠ 0. But then R₂(f, g) = a_n^{m−p} R₂(f, h), for g(α_i) = h(α_i) for each root α_i of f, and the algorithm shows that Res(f, g) = a_n^{m−p} Res(f, h), from which we get the desired result.

Definition. The discriminant of f, Disc(f) or Disc_x(f), is

    a_n^{2n−2} ∏_{i=1}^{n} ∏_{j=1, j≠i}^{n} (α_i − α_j).

Moreover Disc(f) ∈ R.

Proposition 5. Res(f, f′) = a_n Disc(f).

Whichever way they are calculated, the resultants are often quite large. For example, if the a_i and b_i are integers, bounded by A and B respectively, the resultant is less than (n + 1)^{m/2} (m + 1)^{n/2} A^m B^n, but it is very often of this order of magnitude. Similarly, if the a_i and b_i are polynomials of degree α and β respectively, the degree of the resultant is bounded by mα + nβ. An example of this swell is the use of resultants to calculate primitive elements; see section 2.6.4.
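The recursive algorithm given above transcribes directly into code; a sketch (ours) over the rationals, with a nonzero polynomial stored as its list of coefficients from degree 0 upwards:

```python
from fractions import Fraction

def norm(p):
    # strip zero coefficients at the high-degree end
    while p and p[-1] == 0:
        p.pop()
    return p

def remainder(p, q):
    # remainder of the division of p by q over the rationals
    p = norm([Fraction(c) for c in p])
    q = [Fraction(c) for c in q]
    while p and len(p) >= len(q):
        c = p[-1] / q[-1]
        d = len(p) - len(q)
        for i, qc in enumerate(q):
            p[i + d] -= c * qc
        norm(p)          # the leading term was cancelled exactly
    return p

def resultant(f, g):
    f = norm([Fraction(c) for c in f])
    g = norm([Fraction(c) for c in g])
    n, m = len(f) - 1, len(g) - 1
    if n > m:
        return Fraction(-1) ** (n * m) * resultant(g, f)
    an = f[-1]
    if n == 0:
        return an ** m
    h = remainder(g, f)
    if not h:
        return Fraction(0)
    p = len(h) - 1
    return an ** (m - p) * resultant(f, h)
```

As a check against Proposition 2, Res(x² − 1, x − 3) = g(1)·g(−1) = (1 − 3)(−1 − 3) = 8, and two equal polynomials give resultant 0 (Proposition 1).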

A.5 CHINESE REMAINDER THEOREM

In this section we shall treat this well-known theorem algorithmically. There are two cases in which we shall use this theorem: that of integers and that of polynomials. Of course, these cases can be regrouped in a more abstract setting (Euclidean domains), but we leave this generality to the pure mathematicians. We shall deal first of all with the case of integers, and then (more briefly) with that of polynomials.


A.5.1 Case of integers

It is simpler to deal first with the case where the two moduli are relatively prime, and then to go on to the general case.

Chinese remainder theorem (first case). Let M and N be two relatively prime integers. For every pair (a, b) of integers, there is an integer c such that x ≡ a (mod M) and x ≡ b (mod N) if and only if x ≡ c (mod MN).

Proof. By the theory of section A.2, there are two integers f and g such that fM + gN = 1 (here we are making the hypothesis that M and N are relatively prime). Let c = a + (b - a)fM. If x ≡ c (mod MN), then x ≡ c (mod M). But c ≡ a (mod M), and therefore x ≡ a (mod M). Moreover,

    c = a + (b - a)fM = a + (b - a)(1 - gN) ≡ a + (b - a) (mod N) = b,

and therefore x ≡ c (mod MN) implies x ≡ b (mod N). In the other direction, let us suppose that x ≡ a (mod M) and that x ≡ b (mod N). We have shown that c ≡ a (mod M) and c ≡ b (mod N), therefore we must have x ≡ c (mod M) and x ≡ c (mod N). But the statement that x ≡ c (mod M) is equivalent to asserting that "M divides x - c". Similarly N divides x - c, and therefore MN divides it. But this is equivalent to "x ≡ c (mod MN)".

Corollary. Let N_1, ..., N_n be integers, pairwise relatively prime. For every set a_1, ..., a_n of integers there is an integer c such that, for every i, x ≡ a_i (mod N_i) if and only if x ≡ c (mod ∏_{i=1}^{n} N_i).
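The proof is constructive: extended Euclid supplies f and g with fM + gN = 1, and then c = a + (b - a)fM; the corollary follows by folding the moduli in one at a time. A minimal Python sketch (the function names are ours):

```python
def ext_gcd(m, n):
    """Return (g, f, h) with f*m + h*n == g == gcd(m, n)."""
    if n == 0:
        return m, 1, 0
    g, f, h = ext_gcd(n, m % n)
    return g, h, f - (m // n) * h

def crt2(a, m, b, n):
    """Solve x ≡ a (mod m), x ≡ b (mod n) for relatively prime m, n,
    via c = a + (b - a)*f*m as in the proof above."""
    g, f, _ = ext_gcd(m, n)
    assert g == 1, "moduli must be relatively prime"
    return (a + (b - a) * f * m) % (m * n)

def crt(residues, moduli):
    """The corollary: fold crt2 over pairwise relatively prime moduli."""
    x, m = residues[0], moduli[0]
    for b, n in zip(residues[1:], moduli[1:]):
        x, m = crt2(x, m, b, n), m * n
    return x

print(crt([2, 3, 2], [3, 5, 7]))   # 23, the classic example
```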

The same methods can be applied to the case when M and N are not relatively prime. Although this generalisation is rarely used, we present it here, because it is often misunderstood.

(Generalised) Chinese remainder theorem. Let M and N be two integers. For every pair (a, b) of integers: either a ≢ b (mod gcd(M, N)), in which case it is impossible to satisfy the equations x ≡ a (mod M) and x ≡ b (mod N); or there is an integer c such that x ≡ a (mod M) and x ≡ b (mod N) if and only if x ≡ c (mod lcm(M, N)).

Proof. The first case, a ≢ b (mod gcd(M, N)), is quite simple. If x ≡ a (mod M), then x ≡ a (mod gcd(M, N)). Similarly, if x ≡ b (mod N), then x ≡ b (mod gcd(M, N)). But these two equations are contradictory.

In the other case, we factorise M and N into products of prime numbers: M = ∏_{i∈P} p_i^{m_i} and N = ∏_{i∈P} p_i^{n_i}. We now divide the set P of indices into two parts*:

    Q = {i : m_i < n_i};    R = {i : m_i ≥ n_i}.

Let M̂ = ∏_{i∈R} p_i^{m_i} and N̂ = ∏_{i∈Q} p_i^{n_i}, such that

    M̂ N̂ = ∏_{i∈P} p_i^{max(m_i, n_i)} = lcm(M, N).

This theorem (the case when the moduli are relatively prime) can be applied to the equations x ≡ a (mod M̂) and x ≡ b (mod N̂), in order to deduce that there is a c such that this system is equivalent to the equation x ≡ c (mod M̂N̂). We have to prove that this equation implies, not only that x ≡ a (mod M̂), but also that x ≡ a (mod M) (and the same for N). Let us therefore suppose that x ≡ c (mod M̂N̂) is satisfied. This implies that x ≡ b (mod N̂), and therefore the same equation modulo all the factors of N̂, in particular modulo M̄ = ∏_{i∈Q} p_i^{m_i} (we recall that m_i < n_i for i ∈ Q). M̄ is also a factor of M, and therefore divides gcd(M, N). Since a ≡ b (mod gcd(M, N)), we deduce that a ≡ b (mod M̄). We already know that x ≡ b (mod M̄), and therefore x ≡ a (mod M̄). From this and from x ≡ a (mod M̂), we deduce x ≡ a (mod M̂M̄). But

    M̂ M̄ = ∏_{i∈R} p_i^{m_i} · ∏_{i∈Q} p_i^{m_i} = M,

and therefore the equation modulo M which has to be satisfied really is satisfied. The same is true for N, and therefore the theorem is proved.

A.5.2 Case of polynomials

In this section we repeat the theory discussed in the last section, for the case of polynomials in one variable (say x) with coefficients belonging to a field K. For the sake of simplicity, we suppose that each modulus is a monic polynomial. The process of calculating the remainder of a polynomial with respect to a monic polynomial is very simple if the polynomial is of degree one (that is, if it is of the form x - A): the remainder is simply the value of the polynomial at the point A.

* The distinction between < and ≥ is not very important, provided the two sets are disjoint and contain every element of P.

Chinese remainder theorem (first case). Let M and N be two relatively prime monic polynomials. For every pair (a, b) of polynomials there is a polynomial c such that X ≡ a (mod M) and X ≡ b (mod N) if and only if X ≡ c (mod MN).

Proof. The theory of section A.2 holds also for the present case, and therefore the proof follows the same lines without any changes. There are two polynomials f and g such that fM + gN = 1, and we put c = a + (b - a)fM.

Corollary. Let N_1, ..., N_n be polynomials relatively prime in pairs. For every set a_1, ..., a_n of polynomials, there is one polynomial c such that X ≡ a_i (mod N_i) if and only if X ≡ c (mod ∏_{i=1}^{n} N_i).

Chinese remainder theorem (generalised). Let M and N be two polynomials. For every pair (a, b) of polynomials: either a ≢ b (mod gcd(M, N)), in which case it is impossible to satisfy the equations X ≡ a (mod M) and X ≡ b (mod N); or there is a polynomial c such that X ≡ a (mod M) and X ≡ b (mod N) if and only if X ≡ c (mod lcm(M, N)).

In the case when all the polynomials are linear, these methods are equivalent to the Lagrange interpolation, which determines the polynomial p which has the values a_1, ..., a_n at the points N_1, ..., N_n, that is, modulo the polynomials (x - N_1), ..., (x - N_n).

As we are using this case in modular calculation, it is useful to prove that this algorithm is especially simple. Let us suppose that a is a polynomial over K, and b a value belonging to K, and that the equations to be solved are X ≡ a (mod M) and X ≡ b (mod x - v) (with M and x - v relatively prime, that is M_{x=v} ≠ 0). Let V be an element of K such that the remainder from dividing VM by x - v is 1, that is V = (M_{x=v})^{-1}. Then the solution is

    X ≡ Z = a + M V (b - a_{x=v})   (mod M(x - v)).

It is clear that Z ≡ a (mod M), and the choice of V ensures that Z ≡ b (mod x - v). If, in addition, M is of the form ∏(x - v_i), V has the special value ∏(v - v_i)^{-1}, and can be calculated in K.
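This special case is exactly incremental (Newton-style) interpolation, and the update Z = a + MV(b - a_{x=v}) translates directly into code. A sketch with dense coefficient lists, lowest degree first (the representation and names are our choice, not the book's):

```python
from fractions import Fraction

def peval(p, x):
    """Evaluate p (coefficient list, lowest degree first) at x."""
    acc = Fraction(0)
    for c in reversed(p):
        acc = acc * x + c
    return acc

def pmul(p, q):
    r = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def padd(p, q):
    r = [Fraction(0)] * max(len(p), len(q))
    for i, a in enumerate(p):
        r[i] += a
    for i, b in enumerate(q):
        r[i] += b
    return r

def interpolate(points):
    """Incremental interpolation: after each step a(v_i) = b_i, with
    the update Z = a + M*V*(b - a(v)), M = prod (x - v_i), V = 1/M(v)."""
    (v0, b0), rest = points[0], points[1:]
    a = [Fraction(b0)]
    m = [-Fraction(v0), Fraction(1)]          # M = x - v_0
    for v, b in rest:
        V = 1 / peval(m, v)                   # M(v) != 0: moduli coprime
        a = padd(a, pmul(m, [V * (b - peval(a, v))]))
        m = pmul(m, [-Fraction(v), Fraction(1)])
    return a

print(interpolate([(0, 1), (1, 2), (2, 5)]))  # the polynomial 1 + x^2
```

Note that peval(m, v) is exactly the product ∏(v - v_i) of the text, so V is computed entirely in K.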

A.6 SYLVESTER'S IDENTITY

Sylvester's identity is fundamental for the method of fraction-free elimination for matrices, due to Bareiss and described in section 2.8.2. This identity can be stated as

    a_{i,j}^{(k)} = ( a_{k,k}^{(k-1)} a_{i,j}^{(k-1)} - a_{i,k}^{(k-1)} a_{k,j}^{(k-1)} ) / a_{k-1,k-1}^{(k-2)},

where a_{i,j}^{(k)} is the determinant

    | a_{1,1}  a_{1,2}  ...  a_{1,k}  a_{1,j} |
    | a_{2,1}  a_{2,2}  ...  a_{2,k}  a_{2,j} |
    |   ...      ...           ...      ...   |
    | a_{k,1}  a_{k,2}  ...  a_{k,k}  a_{k,j} |
    | a_{i,1}  a_{i,2}  ...  a_{i,k}  a_{i,j} |.

To motivate this identity, let us first consider the fraction-free calculation of the determinant

        | a  b  c |
    Δ = | d  e  f | .
        | g  h  i |

Fraction-free elimination of d and g gives us the equation

            | a     b        c     |
    a^2 Δ = | 0  ae - bd  af - cd | ,
            | 0  ah - bg  ai - cg |

which we can also write as

            | a      b           c      |
    a^2 Δ = | 0  |a b; d e|  |a c; d f| | ,
            | 0  |a b; g h|  |a c; g i| |

where |p q; r s| denotes the 2×2 determinant ps - qr. The elimination of |a b; g h| gives us

    a^2 |a b; d e| Δ = | a      b           c      |
                       | 0  |a b; d e|  |a c; d f| | ,
                       | 0      0           X      |

where X = |a b; d e| |a c; g i| - |a b; g h| |a c; d f|.

Since this last matrix is triangular, we can deduce that

    a^2 |a b; d e| Δ = a |a b; d e| ( |a b; d e| |a c; g i| - |a b; g h| |a c; d f| ),

and so that

    a Δ = | |a b; d e|  |a c; d f| |
          | |a b; g h|  |a c; g i| | .
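The two elimination steps above generalise to Bareiss's fraction-free algorithm: every cross-multiplication step is divided by the previous pivot, and Sylvester's identity guarantees that the division is exact. A Python sketch over the integers (the naming and the simple pivoting rule are ours):

```python
def bareiss_det(m):
    """Fraction-free (Bareiss) determinant of an integer matrix.
    Every division below is exact, by Sylvester's identity."""
    a = [row[:] for row in m]
    n = len(a)
    prev = 1                       # plays the role of a^{(k-2)}_{k-1,k-1}
    for k in range(n - 1):
        if a[k][k] == 0:           # pivot: swap in a usable row
            for r in range(k + 1, n):
                if a[r][k] != 0:
                    a[k], a[r] = a[r], a[k]
                    for c in range(n):
                        a[k][c] = -a[k][c]   # negate to preserve the sign
                    break
            else:
                return 0
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                a[i][j] = (a[k][k] * a[i][j] - a[i][k] * a[k][j]) // prev
            a[i][k] = 0
        prev = a[k][k]
    return a[n - 1][n - 1]

print(bareiss_det([[1, 2, 3], [4, 5, 6], [7, 8, 10]]))   # -3
```

All intermediate entries stay integers, which is the whole point of the method.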

As Δ and the right-hand side are polynomials in a, ..., i, the right-hand side must be divisible by a. As the reader can easily imagine, this analysis has a two-fold generalisation: firstly, we can show that any element of a matrix after two elimination steps is divisible by a, and secondly, we can show that after k steps, every element is divisible by the principal determinant of size k - 1.

A more formal proof of the generalised Sylvester identity was given by Bareiss [1968]. Let M be a square matrix of size n, with generic entries a_{i,j}. Write a block decomposition of this M after the kth row and column, as

    M = ( A  B )
        ( C  D ) ,

where A is a square matrix of size k. As the entries are generic, A cannot be singular, so we can write

    M = ( A  B ) = ( A  0 ) ( I  A^{-1} B        )
        ( C  D )   ( C  I ) ( 0  D - C A^{-1} B ) .

Taking determinants of both sides, we find

    |M| = |A| |D - C A^{-1} B|.                      (1)

Multiplying this equation by |A|^{n-k-1} gives

    |A|^{n-k-1} |M| = | |A| (D - C A^{-1} B) |,      (2)

since the matrix D - C A^{-1} B has dimension n - k. Recall that we defined a_{i,j}^{(k)} as the determinant

    | a_{1,1}  a_{1,2}  ...  a_{1,k}  a_{1,j} |
    | a_{2,1}  a_{2,2}  ...  a_{2,k}  a_{2,j} |
    |   ...      ...           ...      ...   |
    | a_{k,1}  a_{k,2}  ...  a_{k,k}  a_{k,j} |
    | a_{i,1}  a_{i,2}  ...  a_{i,k}  a_{i,j} | ,


which is the matrix A to which we have added the ith row and jth column of M. Applying the identity (2) to this determinant rather than to M, we find that

    a_{i,j}^{(k)} = |A| ( a_{i,j} - Σ_{r=1}^{k} Σ_{s=1}^{k} a_{i,r} (A^{-1})_{r,s} a_{s,j} ).

The right-hand side of this is the (i, j)th element of the determinant in (2), hence we deduce that

    |M| |A|^{n-k-1} = | a_{i,j}^{(k)} | :

Sylvester's identity.

A.7 TERMINATION OF BUCHBERGER'S ALGORITHM

In this section we will show, using the algebraic properties of ideals, why Buchberger's algorithm (section 3.1.4) terminates. First, we will give the algorithm, assuming, for simplicity, that all polynomial coefficients come from a field.

Algorithm Buchberger's, naïve version
Input:  B = {g_1, ..., g_k}; B is the basis of an ideal I;
Output: G = {g_1, ..., g_n}; G is a standard basis of the same ideal;

    G := B;
    n := k;
    P := {(i, j) : 1 ≤ i < j ≤ k};   P is a set of pairs of integers
    while P is non-empty do
        Choose (i, j) ∈ P;
        Q := S(g_i, g_j);
        P := P \ {(i, j)};
        Reduce Q with respect to G;
        if Q ≠ 0 then
(*)         P := P ∪ {(i, n + 1) : 1 ≤ i ≤ n};
            g_{n+1} := Q;
            G := G ∪ {Q};
            n := n + 1;
    return G;

The problem of showing termination reduces to the problem of showing that the lines following the then, marked (*), can only be executed a finite number of times, since every other calculation is finite as long as P stays finite.
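The algorithm can be made concrete for, say, two variables with a lex order, representing a polynomial as a dictionary from exponent pairs to rational coefficients. The sketch below is ours, not the book's: it top-reduces each S-polynomial and, when the result is non-zero, appends it together with the new pairs of line (*).

```python
from fractions import Fraction

def lt(p):
    """Leading (monomial, coefficient) of a non-zero polynomial under
    lex order; a polynomial is a dict {exponent tuple: coefficient}."""
    m = max(p)
    return m, p[m]

def sub_scaled(p, q, coef, shift):
    """Return p - coef * x^shift * q, dropping zero coefficients."""
    r = dict(p)
    for m, c in q.items():
        mm = tuple(a + b for a, b in zip(m, shift))
        r[mm] = r.get(mm, Fraction(0)) - coef * c
        if r[mm] == 0:
            del r[mm]
    return r

def reduce_poly(p, G):
    """Top-reduce p by G while its leading term is divisible
    (the step 'Reduce Q with respect to G')."""
    p = dict(p)
    changed = True
    while p and changed:
        changed = False
        m, c = lt(p)
        for g in G:
            gm, gc = lt(g)
            if all(a >= b for a, b in zip(m, gm)):
                shift = tuple(a - b for a, b in zip(m, gm))
                p = sub_scaled(p, g, c / gc, shift)
                changed = True
                break
    return p

def s_poly(f, g):
    """S-polynomial S(f, g)."""
    fm, fc = lt(f)
    gm, gc = lt(g)
    l = tuple(max(a, b) for a, b in zip(fm, gm))
    t = sub_scaled({}, f, Fraction(-1) / fc, tuple(a - b for a, b in zip(l, fm)))
    return sub_scaled(t, g, Fraction(1) / gc, tuple(a - b for a, b in zip(l, gm)))

def buchberger(F):
    G = [dict(f) for f in F]
    pairs = [(i, j) for i in range(len(G)) for j in range(i + 1, len(G))]
    while pairs:
        i, j = pairs.pop()
        q = reduce_poly(s_poly(G[i], G[j]), G)
        if q:                                   # the line marked (*)
            pairs += [(k, len(G)) for k in range(len(G))]
            G.append(q)
    return G

# x^2 and x*y + y^2 (lex, x > y): the completed basis also contains y^3
F = [{(2, 0): Fraction(1)}, {(1, 1): Fraction(1), (0, 2): Fraction(1)}]
G = buchberger(F)
print(len(G))   # 3
```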


We establish the following notation.
T: if Q is a polynomial, T(Q) is the leading monomial of Q. For a set E of polynomials, T(E) is the set of leading monomials of the polynomials in E.
I: for a set E of polynomials, I(E) is the ideal generated by the polynomials of the set E.
In this notation, the fact that Buchberger's algorithm is correct is due to Theorem 5 of section 3.1.4 (a basis G is a standard basis if, and only if, every S-polynomial S(f, g) reduces to zero with respect to G, for all f, g ∈ G) and the fact that I(G) does not change, because Q is a linear combination of polynomials from the set G.
Now let us consider I(T(G)). Every monomial of every polynomial in this ideal is divisible by (at least) one member of T(G), because I(T(G)) is an ideal generated by monomials. But, after line (*), T(Q) is not divisible by any monomial of T(G), because Q is reduced with respect to G. So T(Q) ∉ I(T(G)). Hence

    I(T(G ∪ {Q})) ⊋ I(T(G)):

in other words, every time we execute the line (*), the ideal I(T(G)) increases strictly. But it is impossible (by the Hilbert Basis Theorem) to have an infinite sequence of strictly increasing ideals. So, the lines after (*) can only be executed a finite number of times, which is what we had to show.


Annex. REDUCE: a Computer Algebra system

R.1 INTRODUCTION

Over the last twenty years substantial progress has been made throughout the field of Computer Algebra, and many software systems have been produced (see also Chapter 1). Among the major Computer Algebra systems in service today, we have chosen to give here a detailed exposition of REDUCE. Why REDUCE, when it can be argued that the most complete system today is MACSYMA? Firstly, REDUCE occupies a privileged position: it is, and has been for a long time, the most widely distributed system, and is installed in well over 1000 sites, on a wide spectrum of computers*. Secondly, its PASCAL-like syntax makes it easy to learn and simple to use. This system is the brain-child of A.C. Hearn, and the first version appeared in 1967. It has continued to develop with the far-flung collaboration of its community of "advanced" users, who have contributed many modules and facilities to the system, and will surely continue to do so. The book by MacCallum & Wright [1991] describes more applications and examples of REDUCE than this annex can reasonably contain.

The principal possibilities which this system offers are:
- Integer and rational "arbitrary precision" arithmetic
- Transparently integrated machine and "arbitrary precision" floating-point arithmetic

* To obtain the latest state of REDUCE distribution, send an electronic mail message containing the line send info-package to the address reduce-netlib@rand.org. A file giving the most up-to-date state will be returned to the sender. In particular, this file describes a portable C implementation of REDUCE.


- Polynomial algebra in one or several variables
  - g.c.d. computations
  - factorisation of polynomials with integer coefficients
- Matrix algebra, with polynomial or symbolic coefficients
  - Determinant calculations
  - Inverse computation
  - Solution of systems of linear equations
- Calculus
  - Differentiation
  - Integration
  - Limits
  - Laplace transforms
- Manipulation of expressions
  - Simplification
  - Substitution
  - Solution of equations and systems of equations (using Gröbner bases where applicable - see section 3.1)

REDUCE was, from its beginning, conceived of as an interactive system. A "program" is therefore a series of free-standing commands, each of which is interpreted and evaluated by the system in the order in which the commands are entered, with the result being displayed before the next command is entered. These commands may, of course, be function calls, loops, conditional statements etc.

R.1.1 Examples of interactive use

REDUCE commands end with ";" or "$": in the second case, the result is not printed. Failing to terminate a command is a common error for naïve users. REDUCE prompts for input by typing "1:" for the first input, and so on. The user's input follows this prompt in all the examples below*. These numbers can be used to refer to the results of previous computations - see the ws function later.

REDUCE 3.4, 15-Jul-91 ...

1: (x+y+z)**2;

 2                    2            2
X  + 2*X*Y + 2*X*Z + Y  + 2*Y*Z + Z

2: df((x+y)**3,x,2);

* In future examples, we shall omit the opening and closing stages of the dialogue. Note that the version number and date may change from time to time. Earlier versions may not support all the facilities described here.


6*(X + Y)

3: int(e**a*x,x);

 A  2
E *X
-----
  2

4: int(x*e**(a*x)/((a*x)**2+2*a*x+1),x);

      A*X
     E
-------------
  2
 A *(A*X + 1)

5: for i:=1:40 product i;

815915283247897734345611269596115894272000000000

6: matrix m;

7: m:=mat((a,b),(b,c))$

8: 1/m;

[     C          - B    ]
[----------  ---------- ]
[       2           2   ]
[A*C - B     A*C - B    ]
[                       ]
[   - B          A      ]
[----------  ---------- ]
[       2           2   ]
[A*C - B     A*C - B    ]

9: det(m);

         2
A*C - B

10: bye;
Quitting

These few examples illustrate the "algebraic" mode of REDUCE. There is another mode, "symbolic", which allows one to execute arbitrary LISP instructions directly. These modes correspond to the two data types "algebraic expression" and "S-expression of LISP". The REDUCE system is itself written in LISP, and can be seen as a hierarchy:


REDUCE    algebraic mode
RLISP     infix symbolic mode
LISP      prefix symbolic mode

It is possible to communicate between the various modes. But REDUCE is essentially designed to work in algebraic mode, and symbolic mode calculation requires a good knowledge of LISP and of the internal structure of REDUCE. Hence, in this annex, we will restrict ourselves to algebraic mode. RLISP is described by Davenport & Tournier [1983].

R.2 SYNTAX OF REDUCE

We concentrate on the syntactic elements of REDUCE in this section, and pass later to the various algebraic facilities offered.

R.2.1 Syntactic elements

A REDUCE program is, as we have already said, a sequence of simple statements evaluated in order. These statements can be declarations, commands or expressions. These expressions are composed of numbers, variables, operators, strings (delimited by "), reserved words and delimiters. While this syntactic structure is common to most high-level languages, we should note the following differences.

- The numbers
The integers of REDUCE are "infinite precision", and numbers which are not exact integers are normally represented as the quotient of two integers. It is possible to use floating-point numbers of any precision: with REDUCE using machine floating-point numbers or software "arbitrary precision" as appropriate.

- The variables
Each variable has a name and a type. The name must not be that of a reserved word, i.e. a keyword of the language. Most variables can take the default type, which is SCALAR. Such variables can be given any algebraic expression as value. In the absence of any such value, they have themselves as value. The reserved "variables" are:
E - the base of the natural logarithms;
I - the square root of -1 (but it can also be used as a loop counter);
INFINITY - used in limiting etc. as a special value;
NIL - a synonym of zero;
PI - the circular constant π;
T - a synonym for "true".


- The operators
These come in two types:
{ prefix
A number of prefix operators with special properties are incorporated in the system: a detailed list is given in section R.3.1. The user can also add new operators, with similar properties (see R.2.3.4).
{ infix
The built-in infix operators are:
WHERE := OR AND NOT NEQ = >= > <

<< <statement>; ...; <statement> >>

whose value is that of the last expression. Note the difference between <<...; Y>>, whose value is Y, and <<...; Y;>>, whose value is 0, that of the (null) expression at the end.

R.2.4.3 Conditional statement

IF <condition> THEN <statement>
IF <condition> THEN <statement> ELSE <statement>

The value obtained is that of the branch executed (or 0 if absent).

<lhs> => <rhs> when <condition>

where the <lhs>s may contain ordinary variables, which mean themselves, or variables preceded by ~, which stand for any expression that may be substituted for them (subject to any conditions imposed by the <condition>s). So the rule

{sin(~n*pi) => 0 when fixp n}

would affect sin(3*pi) but not sin(3*q), since ~n can be replaced by anything satisfying the fixp predicate, but pi stands for itself. In fact, only the first occurrence of a free variable need be preceded by ~. Rule lists can be assigned to variables, as in

trigreduce := {cos(~x)*cos(~y) => (cos(x+y)+cos(x-y))/2,
               cos(~x)*sin(~y) => (sin(x+y)-sin(x-y))/2,
               sin(~x)*sin(~y) => (cos(x-y)-cos(x+y))/2,
               cos(~x)**2 => (1+cos(2*x))/2,
               sin(~x)**2 => (1-cos(2*x))/2};

which produces a set of rules with much the same effect as MACSYMA's trigreduce function.

R.3.3.1 Local substitution

This can be performed by the SUB operator, whose syntax is

    SUB([<kernel>=<expression>,]* <expression>)

which yields the result of evaluating the last expression, and then replacing in parallel each kernel appearing as the left-hand side of an equation by the corresponding right-hand side. For example

* Rule lists are a new feature of REDUCE 3.4.

1: sub(x=x+1, y=cos(u), x*y+u*sin(x));

SIN(X + 1)*U + COS(U)*X + COS(U)

2: z:=a*(x-y)**2+2*b*x+log(y);

                 2                2
Z := LOG(Y) + A*X  - 2*A*X*Y + A*Y  + 2*B*X

3: zp:=sub(x=y, y=u+1, z);

                    2                        2
ZP := LOG(U + 1) + A*U  - 2*A*U*Y + 2*A*U + A*Y  - 2*A*Y + A + 2*B*Y

4: % But z is not altered
4: z;

             2                2
LOG(Y) + A*X  - 2*A*X*Y + A*Y  + 2*B*X

A related operator is the infix operator WHERE, which has the syntax

    <expression> WHERE <rule>[,<rule>]*

where a <rule> can be a single equation <kernel> = <expression>, or a single rule, or a named rule set. In algebraic mode, WHERE provides more functionality than SUB, and should probably now be used instead of SUB. An example of using the trigreduce rules defined earlier is

3: (cos a + sin b)^3;

      3            2                        2         3
COS(A)  + 3*COS(A) *SIN(B) + 3*COS(A)*SIN(B)  + SIN(B)

4: ws where trigreduce;

( - 3*COS(A - 2*B) - 3*COS(A + 2*B) + COS(3*A) + 9*COS(A)
 - 3*SIN(2*A - B) + 3*SIN(2*A + B) - SIN(3*B) + 9*SIN(B))/4

R.3.3.2 Global substitution

These substitutions, introduced by LET statements, take place in all expressions containing the left-hand side of the substitution rule, from the moment when the rule is introduced until it is superseded, or cancelled via the CLEAR command.


1: operator f,g;

2: let g(x)=x**2,
2:     f(x,y)=x*cos(y);

3: df(g(u),u);  % our rule only affected g(x)

DF(G(U),U)

4: f(x,y);

COS(Y)*X

5: f(a,b);      % our rule only affected f(x,y)

F(A,B)

If we wish to substitute for expressions containing arbitrary variables, rather than particular named variables, then we must preface the LET statement by FOR ALL preludes.

6: for all x,y let f(x,y)=log(x)-y;

7: f(a,b);

LOG(A) - B

The formal syntax is

    [FOR ALL <variable>[,<variable>]*] LET <rule>[,<rule>]*

where a rule is defined as

    <lhs> = <expression>

Various kinds of lhs are possible: the simplest being a kernel. In this case, the kernel is always replaced by the right-hand side. If the lhs is of the form a**n (where n is an explicit integer)*, then the rule applies to all powers of a of exponent at least n. If the lhs is of the form a*b (where a and b are kernels or powers of kernels or products of powers of kernels), then a*b will be replaced, but not a or b separately. If the lhs is of the form a+b (where a and b are kernels or powers of kernels or products of powers of kernels), then a+b will be replaced, and the first of a and b in REDUCE's ordering (say a) will be implicitly treated as (a+b)-b, and the a+b replaced. It is also possible to place conditions, by means of SUCH THAT clauses, on the variables involved in a FOR ALL prelude, as in

* One has to be more careful with other forms of exponents. For example, the rule let a**b=0; causes a**(2*b); to simplify to 0, since the internal form is essentially (a^b)^2, while a**(b*d); is not simplified, since its internal form is essentially a^(b*d).


1: operator laplace,factorial;

2: for all x,n such that fixp n and n>0 let
2:    laplace(x**n,x)=factorial(n)*t**(-n-1);

3: laplace(y**2,y);

FACTORIAL(2)
------------
      3
     T

4: laplace(y**(-2),y);

         1
LAPLACE(----,Y)
          2
         Y

A rule can be cleared with the CLEAR command. To clear a rule entered with a prelude, it is necessary to use the same prelude, with the same variable names. We have already seen that CLEAR also clears WEIGHT declarations, and in fact it resets many other properties.

Rule sets can also be used globally, using the LETRULES statement to declare that a particular rule set is to be used from now on, as in

letrules trigreduce;

These can be cancelled by the CLEARRULES statement. Unlike SUB and WHERE, LET rules are applied repeatedly until no further substitutions are possible. While this is often useful (e.g. in linearising trigonometric functions - see Section 2.7 in Chapter 2 and the trigreduce list), it does mean that one should not reference the left-hand side of a rule in the right-hand side.

1: let x=x+1;

2: x;

***** Simplification recursion too deep

3: let y=z;

4: y;

Z

5: let z=y;

6: y;

***** Binding stack overflow, restarting...
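The failure modes in this session follow from the fixpoint semantics: a LET rule is reapplied until the expression stops changing, so a rule whose right-hand side mentions its own left-hand side never converges. A toy Python model of this evaluation loop (the expression representation and the iteration cap are our assumptions, not how REDUCE is actually implemented):

```python
def apply_once(expr, rules):
    """One pass of substitution over a nested tuple/str expression."""
    if isinstance(expr, str):
        return rules.get(expr, expr)
    if isinstance(expr, tuple):      # e.g. ('+', 'x', 1)
        return tuple(apply_once(e, rules) for e in expr)
    return expr

def rewrite(expr, rules, limit=100):
    """Apply rules repeatedly until a fixpoint, as LET does; 'limit'
    guards against rules such as x -> x + 1 that never converge."""
    for _ in range(limit):
        new = apply_once(expr, rules)
        if new == expr:
            return expr
        expr = new
    raise RecursionError("simplification recursion too deep")

print(rewrite('y', {'y': 'z'}))              # z
try:
    rewrite('x', {'x': ('+', 'x', 1)})       # the analogue of let x=x+1
except RecursionError as e:
    print(e)                                 # simplification recursion too deep
```

The second call grows the expression on every pass and trips the guard, just as REDUCE's own recursion check does above.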


R.4 MATRIX ALGEBRA

The MATRIX declaration lets one declare that variables will take matrix values, instead of the ordinary scalar ones. Matrices can be entered via the MAT operator, as in

1: matrix m;

2: m:=mat((a,b),(c,d));

     [A  B]
M := [    ]
     [C  D]

REDUCE can compute the inverses of matrices, their determinants, transposes and traces, as in

3: m**(-1);

[     D          - B     ]
[----------  ----------- ]
[A*D - B*C    A*D - B*C  ]
[                        ]
[   - C           A      ]
[----------  ----------- ]
[A*D - B*C    A*D - B*C  ]

4: det(m);

A*D - B*C

5: tp(m);

[A  C]
[    ]
[B  D]

6: trace m;

A + D

R.5 IRENA

IRENA, the Interface between REduce and NAg, is a separate package, available from the Numerical Algorithms Group Ltd.*, which interfaces the interactive symbolic facilities of REDUCE with the batch numerical facilities of the NAG Fortran Library. Full details are contained in the IRENA documentation, and the rationale behind IRENA is presented by Dewar [1989] and Dewar and Richardson [1990].

To give a flavour of the capabilities of IRENA, suppose that we wish to calculate the integral

    ∫₀¹ 4/(1 + x²) dx.

In this case, we could integrate it

* Wilkinson House, Jordan Hill Road, Oxford, England, OX2 8DR.


symbolically, and substitute the end-points, since we can assure ourselves that there is no singularity in the region of integration. However, this is certainly not always possible, while the NAG library contains many excellent numerical integration routines, the use of which generally requires the writing of a complete Fortran programme. D01AJF is a suitable routine for this application. With IRENA, we can call for this program to be written and run interactively for us.

1: d01ajf(f(x)=4/(1+x^2),range=[0:1]);

For an index to the following list, type `0;'. The values of its entries may be accessed by their names or by typing `1;', `2;' etc.

{INTEGRAL,ABSOLUTE_ERROR_ESTIMATE,INTERVALS,A_LIST,B_LIST,E_LIST,
 R_LIST}

The result is a list of all the REDUCE variables which have been given values by this call. We can inspect them, to determine the results, error conditions, and so on.

2: 1;

3.14159265359

3: 2;

1.74563730979E-14
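The value returned by D01AJF can be checked independently, since ∫₀¹ 4/(1+x²) dx = π exactly. The sketch below uses a fixed composite Simpson's rule in plain Python rather than NAG's adaptive quadrature:

```python
import math

def simpson(f, a, b, n=1000):
    """Composite Simpson's rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += f(a + i * h) * (4 if i % 2 else 2)
    return s * h / 3

approx = simpson(lambda x: 4 / (1 + x * x), 0, 1)
print(approx)            # close to pi = 3.14159265358979...
```

For this smooth integrand the rule's O(h^4) error makes the result agree with π to well beyond the digits printed by IRENA above.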

For a more complicated example, consider solving the boundary value problem in the interval [0, 10] of

    y1' = y2
    y2' = y3
    y3' = - y1*y3 - 2*(1 - y2²)*ε,

where ε is a continuation parameter, subject to the boundary conditions

    y1(0) = 0
    y2(0) = 0
    y2(10) - 1 = 0.

D02RAF is an appropriate routine, but calling it directly requires the specification of subroutines to calculate
- the differential equations f_i;
- the boundary conditions g_i;
- the Jacobian of the differential equations ∂f_i/∂y_j;


- the Jacobian of the differential equations with respect to the continuation parameter ∂f_i/∂ε;
- the Jacobian of the boundary conditions ∂g_i/∂y_j;
- the Jacobian of the boundary conditions with respect to the continuation parameter ∂g_i/∂ε.
Including these subroutine parameters, D02RAF will need 24 parameters. IRENA can call it as follows:

1: d02raf(range=[0:10],
1:        fcn1(x,y1,y2,y3,eps)= y2,
1:        fcn2(x,y1,y2,y3,eps)= y3,
1:        fcn3(x,y1,y2,y3,eps)= - y1*y3 - 2*(1 - y2*y2)*eps,
1:        gbeg1(yl1,yl2,yl3,eps)= yl1,
1:        gbeg2(yl1,yl2,yl3,eps)= yl2,
1:        gend1(yr1,yr2,yr3,eps)= yr2 - 1)$

For an index to the following list, type `0;'. The values of its entries may be accessed by their names or by typing `1;', `2;' etc.

{SIZE_OF_MESH_USED,MESH,SOLUTION,ABSOLUTE_ERROR_ESTIMATES,DELEPS}

2: print!-precision 5;

12

3: 1;

33

4: tp solution;

[   0.0        0.0         1.6872       ]
[                                       ]
[0.0032142   0.10155       1.5626       ]
[                                       ]
[0.012532    0.19536       1.4398       ]
........................................
[ 8.8776        1     - 0.0000000067002 ]
[                                       ]
[ 9.5026        1       0.0000000035386 ]

We see that IRENA has produced a 33-point mesh, and we have given part of the solution (using REDUCE's tp operator to get the transpose of the solution so that it is legible), using the print!-precision feature of REDUCE to print only digits which may be significant. A detailed examination of ABSOLUTE_ERROR_ESTIMATES would be needed to know more about the accuracy. It would also be possible to use IRENA in its "prompting" mode, where the user need only type d02raf(); and IRENA will prompt for the necessary values (beginning, in this case, with the number of differential


equations, and then asking for the functions, the number of initial boundary conditions, and so on).

R.6 CONCLUSION

REDUCE is a powerful Computer Algebra system available on many machines, from the IBM PC to the largest main-frames, but the elementary facilities are easy to use. Full details (and many options and variations that we have not had the space to present here) are contained in the manual.

Bibliography

This book has only given a brief outline of Computer Algebra. Before giving a list of all the work referred to in this book, we shall mention several books and periodicals of general interest, which may add to the reader's knowledge of this subject.

The mathematics and algorithms for treating integers and dense polynomials have been very carefully dealt with by Knuth [1981]. The collection edited by Buchberger et al. [1982] and Mignotte's book [1992] describe several mathematical aspects of Computer Algebra, and state many of the proofs we have omitted. These two books are essential for those who want to create their own system. Various uses of computer algebra are presented in the collection edited by Cohen [1993].

Several articles on Computer Algebra and its applications have appeared in the proceedings of conferences on Computer Algebra. The conferences in North America are organised by SIGSAM (the Special Interest Group on Symbolic and Algebraic Computation of the Association for Computing Machinery) and their proceedings are published by the ACM, with the title SYMSAC, such as SYMSAC 76 (Yorktown Heights), SYMSAC 81 (Snowbird) and SYMSAC 86 (Waterloo), and more recently with the title ISSAC, such as ISSAC 89 (Portland), ISSAC 90 (Tokyo), ISSAC 91 (Bonn) and ISSAC 92 (Berkeley). The proceedings of the European conferences have been published since 1979 by Springer-Verlag in their series Lecture Notes in Computer Science: the table below gives the details.

For each system of Computer Algebra the best reference book for the user is, of course, its manual. But these manuals are often quite long, and it is hard to get an overall view of the possibilities and limitations of these systems. Thus, the Journal of Symbolic Computation (published by Academic Press) contains descriptions of systems, presentations of new applications of Computer Algebra, as well as research articles in this field.


Conference      Place               LNCS
EUROSAM 79      Marseilles          72
EUROCAM 82      Marseilles          144
EUROCAL 83      Kingston-on-Thames  162
EUROSAM 84      Cambridge           174
EUROCAL 85      Linz                203 and 204
EUROCAL 87      Leipzig             378
ISSAC 88        Rome                358

The other journal of particular interest to students of Computer Algebra is the SIGSAM Bulletin, published by SIGSAM, which we have already mentioned. It is a very informal journal, which contains, as well as scientific articles, problems, announcements of conferences, research reports, new versions of systems etc. In France, there is a Computer Algebra community (GRECO, PRC, ...), which organises a conference every year. The proceedings are published along with other interesting articles in the review CALSYF, edited by M. Mignotte at the University of Strasbourg. With this one can follow French work in this field.

[Abbott, 1988] Abbott,J.A., Factorisation of Polynomials over Algebraic Number Fields. Ph.D. Thesis, University of Bath, 1988.
[Abbott et al., 1985] Abbott,J.A., Bradford,R.J. & Davenport,J.H., A Remark on Factorisation. SIGSAM Bulletin 19 (1985) 2, pp. 31-33, 37.
[Abbott et al., 1987] Abbott,J.A., Bradford,R.J. & Davenport,J.H., A Remark on Sparse Polynomial Multiplication. Privately circulated 1987, reprinted as Bath Computer Science Technical Report 92-67.
[Abdali et al., 1977] Abdali,S.K., Caviness,B.F. & Pridor,A., Modular Polynomial Arithmetic in Partial Fraction Decomposition. Proc. 1977 MACSYMA Users' Conference, NASA publication CP-2012, National Technical Information Service, Springfield, Virginia, pp. 253-261.
[Aho et al., 1974] Aho,A.V., Hopcroft,J.E. & Ullman,J.D., The Design and Analysis of Computer Algorithms. Addison-Wesley, 1974. MR 54 (1977) #1706.
[Allen, 1978] Allen,J.R., The Anatomy of LISP. McGraw-Hill, New York, 1978.
[Arnon, 1985] Arnon,D.S., On Mechanical Quantifier Elimination for Elementary Algebra and Geometry: Solution of a Nontrivial Problem. Proc. EUROCAL 85, Vol. 2 [Springer Lecture Notes in Computer Science Vol. 204, Springer-Verlag, 1985] pp. 270-271.
[Arnon & Smith, 1983] Arnon,D.S. & Smith,S.F., Towards Mechanical Solution of the Kahan Ellipse Problem I. Proc. EUROCAL 83 [Springer Lecture Notes in Computer Science 162, Springer-Verlag, Berlin, Heidelberg, New York, 1983], pp. 36-44.
[Arnon et al., 1984a] Arnon,D.S., Collins,G.E. & McCallum,S., Cylindrical Algebraic Decomposition I: The Basic Algorithm. SIAM J. Comp. 13 (1984) pp. 865-877.
[Arnon et al., 1984b] Arnon,D.S., Collins,G.E. & McCallum,S., Cylindrical Algebraic Decomposition II: An Adjacency Algorithm for the Plane. SIAM J. Comp. 13 (1984) pp. 878-889.
[Ax, 1971] Ax,J., On Schanuel's Conjectures. Ann. Math. 93 (1971) pp. 252-268.
[Backelin & Fröberg, 1991] Backelin,J. & Fröberg,R., How we proved that there are exactly 924 cyclic 7-roots. Proc. ISSAC 1991 (ACM, New York) pp. 103-111.
[Bareiss, 1968] Bareiss,E.H., Sylvester's Identity and Multistep Integer-preserving Gaussian Elimination. Math. Comp. 22 (1968) pp. 565-578. Zbl. 187,97.
[Bateman & Danielopoulos, 1981] Bateman,S.O.,Jr., & Danielopoulos,S.D., Computerised Analytic Solutions of Second Order Differential Equations. Computer J. 24 (1981) pp. 180-183. Zbl. 456.68043. MR 82m:68075.
[Berlekamp, 1967] Berlekamp,E.R., Factoring Polynomials over Finite Fields. Bell System Tech. J. 46 (1967) pp. 1853-1859.
[Borodin et al., 1985] Borodin,A., Fagin,R., Hopcroft,J.E. & Tompa,M., Decreasing the Nesting Depth of an Expression Involving Square Roots. J. Symbolic Comp. 1 (1985), pp. 169-188.
[Brent, 1970] Brent,R.P., Algorithms for Matrix Multiplication. Report CS 157, Computer Science Department, Stanford University, March 1970. (The results are described by Knuth [1981], p. 482.)
[Bronstein, 1987] Bronstein,M., Integration of Elementary Functions. Ph.D. Thesis, Univ. California Berkeley, 17 April 1987.
[Bronstein, 1990] Bronstein,M., Integration of Elementary Functions. J. Symbolic Comp. 9 (1990), pp. 117-173.
[Brown, 1969] Brown,W.S., Rational Exponential Expressions and a Conjecture concerning π and e. Amer. Math. Monthly 76 (1969) pp. 28-34.


[Brown, 1971] Brown,W.S., On Euclid's Algorithm and the Computation of Polynomial Greatest Common Divisors. J. ACM 18 (1971) pp. 478–504. MR 46 (1973) #6570.
[Buchberger, 1970] Buchberger,B., Ein algorithmisches Kriterium für die Lösbarkeit eines algebraischen Gleichungssystems. Aequationes Mathematicae 4 (1970) pp. 374–383. (An algorithmic criterion for the solubility of an algebraic system of equations.)
[Buchberger, 1976a] Buchberger,B., Theoretical Basis for the Reduction of Polynomials to Canonical Forms. SIGSAM Bulletin 39 (Aug. 1976) pp. 19–29.
[Buchberger, 1976b] Buchberger,B., Some Properties of Gröbner-Bases for Polynomial Ideals. SIGSAM Bulletin 40 (Nov. 1976) pp. 19–24.
[Buchberger, 1979] Buchberger,B., A Criterion for Detecting Unnecessary Reductions in the Construction of Groebner Bases. Proceedings of the 1979 European Symposium on Symbolic and Algebraic Computation [Springer Lecture Notes in Computer Science 72, Springer-Verlag, Berlin, Heidelberg, New York, 1979], pp. 3–21. Zbl. 417.68029. MR 82e:14004.
[Buchberger, 1981] Buchberger,B., H-Bases and Gröbner-Bases for Polynomial Ideals. CAMP-Linz publication 81-2.0, University of Linz, Feb. 1981.
[Buchberger, 1983] Buchberger,B., A Note on the Complexity of Computing Gröbner-Bases. Proc. EUROCAL 83 [Springer Lecture Notes in Computer Science 162, Springer-Verlag, Berlin, Heidelberg, New York, 1983], pp. 137–145.
[Buchberger, 1985] Buchberger,B., A Survey on the Method of Groebner bases for Solving Problems in Connection with Systems of Multivariate Polynomials. Proc. 2nd RIKEN Symposium Symbolic & Algebraic Computation (ed. N. Inada & T. Soma), World Scientific Publ., 1985, pp. 69–83.
[Buchberger et al., 1982] Buchberger,B., Collins,G.E. & Loos,R. (editors), Symbolic & Algebraic Computation. Computing Supplementum 4, Springer-Verlag, Wien, New York, 1982.
[Capelli, 1901] Capelli,A., Sulla riduttibilità della funzione x^n - A in campo qualunque di razionalità. Math. Ann. 54 (1901) pp. 602–603. (On the reducibility of the function x^n - A in some rational field.)
[Cauchy, 1829] Cauchy,A.-L., Exercises de Mathématiques, Quatrième Année. De Bure Frères, Paris, 1829. Œuvres, Ser. II, Vol. IX, Gauthier-Villars, Paris, 1891.

[Char et al., 1985] Char,B.W., Geddes,K.O., Gonnet,G.H. & Watt,S.M., Maple User's Guide. Watcom Publications, Waterloo, 1985.
[Chazy, 1953] Chazy,J., Mécanique céleste. Presses Universitaires de France, 1953.
[Cherry, 1983] Cherry,G.W., Algorithms for Integrating Elementary Functions in Terms of Logarithmic Integrals and Error Functions. Ph.D. Thesis, Univ. Delaware, August 1983.
[Cherry, 1985] Cherry,G.W., Integration in Finite Terms with Special Functions: the Error Function. J. Symbolic Comp. 1 (1985) pp. 283–302.
[Cherry & Caviness, 1984] Cherry,G.W. & Caviness,B.F., Integration in Finite Terms with Special Functions: A Progress Report. Proc. EUROSAM 84 [Springer Lecture Notes in Computer Science 174, Springer-Verlag, Berlin, Heidelberg, New York, Tokyo, 1984], pp. 351–359.
[Chou & Collins, 1982] Chou,T.-W.J. & Collins,G.E., Algorithms for the solution of systems of linear Diophantine equations. SIAM J. Comp. 11 (1982) pp. 687–708. Zbl. 498.65022. MR 84e:10020.
[Cohen, 1993] Cohen,A.M. (ed.), Computer Algebra in Industry: Problem Solving in Practice. Wiley, 1993.
[Collins, 1971] Collins,G.E., The Calculation of Multivariate Polynomial Resultants. J. ACM 18 (1971) pp. 515–532.
[Collins, 1975] Collins,G.E., Quantifier Elimination for Real Closed Fields by Cylindrical Algebraic Decomposition. Proc. 2nd GI Conf. Automata Theory and Formal Languages (Springer Lecture Notes in Computer Science 33), pp. 134–183. MR 55 (1977) #771.
[Collins & Loos, 1982] Collins,G.E. & Loos,R., Real Zeros of Polynomials. Symbolic & Algebraic Computation (Computing Supplementum 4) (ed. B. Buchberger, G.E. Collins & R. Loos), Springer-Verlag, Wien, New York, 1982, pp. 83–94.
[Coppersmith & Davenport, 1985] Coppersmith,D. & Davenport,J.H., An Application of Factoring. J. Symbolic Comp. 1 (1985) pp. 241–243.
[Coppersmith & Davenport, 1991] Coppersmith,D. & Davenport,J.H., Polynomials whose Powers are Sparse. Acta Arithmetica 58 (1991) pp. 79–87.
[Coppersmith & Winograd, 1982] Coppersmith,D. & Winograd,S., On the Asymptotic Complexity of Matrix Multiplication. SIAM J. Comp. 11 (1982) pp. 472–492. Zbl. 486.68030. MR 83j:68047b.


[Coppersmith & Winograd, 1990] Coppersmith,D. & Winograd,S., Matrix Multiplication via Arithmetic Progressions. J. Symbolic Comp. 9 (1990) pp. 251–280. An extended abstract appeared in Proc. 19th ACM Symp. on Theory of Computing (STOC 1987), pp. 1–6.
[Coppersmith et al., 1986] Coppersmith,D., Odlyzko,A.M. & Schroeppel,R., Discrete Logarithms in GF(p). Algorithmica 1 (1986) pp. 1–15.
[Coste & Roy, 1988] Coste,M. & Roy,M.F., Thom's Lemma, the Coding of Real Algebraic Numbers and the Computation of the Topology of Semi-Algebraic Sets. J. Symbolic Comp. 5 (1988) pp. 121–129. Also Algorithms in Real Algebraic Geometry (ed. D.S. Arnon and B. Buchberger), Academic Press, London, 1988.
[Cox et al., 1992] Cox,D., Little,J. & O'Shea,D., Ideals, Varieties and Algorithms: an introduction to Computational Algebraic Geometry and Commutative Algebra. Springer-Verlag, 1992.
[Coxeter, 1961] Coxeter,H.S.M., Introduction to Geometry. Wiley, New York, 1961.
[Czapor et al., 1992] Czapor,S.R., Geddes,K.O. & Labahn,G., Algorithms for Computer Algebra. Kluwer Academic Publishers, 1992.
[Davenport, 1981] Davenport,J.H., On the Integration of Algebraic Functions. Springer Lecture Notes in Computer Science 102, Springer-Verlag, Berlin, Heidelberg, New York, 1981. Zbl. 471.14009. MR 84k:14024.
[Davenport, 1982] Davenport,J.H., On the Parallel Risch Algorithm (I). Proc. EUROCAM '82 [Springer Lecture Notes in Computer Science 144, Springer-Verlag, Berlin, Heidelberg, New York, 1982], pp. 144–157. MR 84b:12033.
[Davenport, 1984a] Davenport,J.H., Intégration algorithmique des fonctions élémentairement transcendantes sur une courbe algébrique. Annales de l'Institut Fourier 34 (1984) pp. 271–276. Zbl. 506.34002.
[Davenport, 1984b] Davenport,J.H., y' + fy = g. Proc. EUROSAM 84 [Springer Lecture Notes in Computer Science 174, Springer-Verlag, Berlin, Heidelberg, New York, Tokyo, 1984], pp. 341–350.
[Davenport, 1985a] Davenport,J.H., Closed Form Solutions of Ordinary Differential Equations. Proc. 2nd RIKEN Symposium Symbolic & Algebraic Computation (ed. N. Inada & T. Soma), World Scientific Publ., 1985, pp. 183–195.
[Davenport, 1985b] Davenport,J.H., Computer Algebra for Cylindrical Algebraic Decomposition. TRITA-NA-8511, NADA, KTH, Stockholm, Sept. 1985.


[Davenport, 1985c] Davenport,J.H., On the Risch Differential Equation Problem. SIAM J. Comp. 15 (1986) pp. 903–918.
[Davenport, 1986] Davenport,J.H., On a "Piano Movers" Problem. SIGSAM Bulletin 20 (1986) 1&2 pp. 15–17.
[Davenport & Heintz, 1987] Davenport,J.H. & Heintz,J., Real Quantifier Elimination is Doubly Exponential. J. Symbolic Comp. 5 (1988) pp. 29–35.
[Davenport & Mignotte, 1990] Davenport,J.H. & Mignotte,M., On Finding the Largest Root of a Polynomial. Modélisation Mathématique et Analyse Numérique 24 (1990) pp. 693–696.
[Davenport & Singer, 1986] Davenport,J.H. & Singer,M.F., Elementary and Liouvillian Solutions of Linear Differential Equations. J. Symbolic Comp. 2 (1986) pp. 237–260.
[Davenport & Trager, 1990] Davenport,J.H. & Trager,B.M., Scratchpad's View of Algebra I: Basic Commutative Algebra. Proc. DISCO '90 (Springer Lecture Notes in Computer Science Vol. 429, ed. A. Miola) pp. 40–54.
[Delaunay, 1860] Delaunay,Ch., Théorie du mouvement de la lune. Extract from the Comptes Rendus de l'Académie des Sciences, Vol. LI.
[Della Dora & Tournier, 1981] Della Dora,J. & Tournier,E., Formal Solutions of Differential Equations in the Neighbourhood of Singular Points (Regular and Irregular). Proceedings of the 1981 ACM Symposium on Symbolic and Algebraic Computation, ACM Inc., New York, 1981, pp. 25–29.
[Della Dora & Tournier, 1984] Della Dora,J. & Tournier,E., Homogeneous Linear Differential Equations (Frobenius-Boole Method). Proc. EUROSAM 84 [Springer Lecture Notes in Computer Science 174, Springer-Verlag, Berlin, Heidelberg, New York, Tokyo, 1984], pp. 1–12.
[Della Dora & Tournier, 1986] Della Dora,J. & Tournier,E., Formal solutions of linear difference equations: method of Pincherle-Ramis. Proc. SYMSAC 86 (ACM, New York, 1986) pp. 192–196.
[Della Dora et al., 1982] Della Dora,J., Dicrescenzo,C. & Tournier,E., An Algorithm to Obtain Formal Solutions of a Linear Homogeneous Differential Equation at an Irregular Singular Point. Proc. EUROCAM 82 [Springer Lecture Notes in Computer Science 144, Springer-Verlag, Berlin, Heidelberg, New York, 1982], pp. 273–280. MR 84c:65094.
[Della Dora et al., 1985] Della Dora,J., Dicrescenzo,C. & Duval,D., About a new Method for Computing in Algebraic Number Fields. Proc.


EUROCAL 85, Vol. 2 [Springer Lecture Notes in Computer Science 204, Springer-Verlag, Berlin, Heidelberg, New York, Tokyo, 1985], pp. 289–290.
[Dewar, 1989] Dewar,M.C., IRENA - An Integrated Symbolic and Numeric Computation Environment. Proc. ISSAC '89 (ed. G.H. Gonnet), ACM, New York, 1989, pp. 171–179.
[Dewar & Richardson, 1990] Dewar,M.C. & Richardson,M.G., Reconciling Symbolic and Numeric Computation in a Practical Setting. Proc. DISCO '90 (Springer Lecture Notes in Computer Science Vol. 429, ed. A. Miola) pp. 195–204.
[Dicrescenzo & Duval, 1984] Dicrescenzo,C. & Duval,D., Computations on Curves. Proc. EUROSAM 84 [Springer Lecture Notes in Computer Science 174, Springer-Verlag, Berlin, Heidelberg, New York, Tokyo, 1984], pp. 100–107.
[Dicrescenzo & Duval, 1985] Dicrescenzo,C. & Duval,D., Algebraic Computation on Algebraic Numbers. Computers and Computing (ed. P. Chenin, C. Dicrescenzo, F. Robert), Masson and Wiley, 1985, pp. 54–61.
[Dicrescenzo & Duval, 1988] Dicrescenzo,C. & Duval,D., Algebraic Extensions and Algebraic Closure in SCRATCHPAD II. Proc. ISSAC 88 (Springer Lecture Notes in Computer Science 358, Springer-Verlag, Berlin-Heidelberg-New York-Tokyo, 1989), pp. 440–446.
[Dodgson, 1866] Dodgson,C.L., Condensation of determinants, being a new and brief method for computing their algebraic value. Proc. Roy. Soc. Ser. A 15 (1866) pp. 150–155.
[Dubreuil, 1963] Dubreuil,P., Algèbre. Gauthier-Villars, Paris, 1963.
[Duval, 1987] Duval,D., Diverses questions relatives au calcul formel avec des nombres algébriques. Thèse d'État de l'Université de Grenoble, April 1987.
[Ehrhard, 1986] Ehrhard,T., Personal communication, March 1986.
[Faugère et al., 1989] Faugère,J.C., Gianni,P., Lazard,D. & Mora,T., Efficient Computation of Zero-Dimensional Gröbner Bases by Change of Ordering. Technical Report LITP 89-52.
[Fitch, 1974] Fitch,J.P., CAMAL Users' Manual. University of Cambridge Computer Laboratory, 1974.
[Fitch, 1985] Fitch,J.P., Solving Algebraic Problems with REDUCE. J. Symbolic Comp. 1 (1985), pp. 211–227.
[Gaal, 1972] Gaal,L., Classical Galois theory. Chelsea, 1972.


[Gebauer & Kredel, 1984] Gebauer,P. & Kredel,H., Note on "Solution of a General System of Equations". SIGSAM Bulletin 18 (1984) 3, pp. 5–6.
[Gianni, 1989] Gianni,P., Properties of Gröbner bases under specializations. Proc. EUROCAL 87 (Springer Lecture Notes in Computer Science 378, Springer-Verlag, Berlin-Heidelberg-etc., 1989), pp. 293–297.
[Giusti, 1984] Giusti,M., Some Effectivity Problems in Polynomial Ideal Theory. Proc. EUROSAM 84 [Springer Lecture Notes in Computer Science 174, Springer-Verlag, Berlin, Heidelberg, New York, Tokyo, 1984], pp. 159–171.
[Gregory & Krishnamurthy, 1984] Gregory,R.T. & Krishnamurthy,E.V., Methods and Applications of Error-free Computation. Springer-Verlag, New York, 1984. CR 8504-0270.
[Griesmer et al., 1975] Griesmer,J.H., Jenks,R.D. & Yun,D.Y.Y., SCRATCHPAD User's Manual. IBM Research Publication RA70, June 1975.
[Hadamard, 1893] Hadamard,J., Résolution d'une Question Relative aux Déterminants. Bull. des Sci. Math. (2) 17 (1893) pp. 240–246. Œuvres, CNRS, Paris, 1968, Vol. I, pp. 239–245.
[Hearn, 1987] Hearn,A.C., REDUCE-3 User's Manual, version 3.3. Rand Corporation Publication CP78 (7/87), 1987.
[Heindel, 1971] Heindel,L.E., Integer Arithmetic Algorithm for Polynomial Real Zero Determination. J. ACM 18 (1971) pp. 535–548.
[Hermite, 1872] Hermite,C., Sur l'intégration des fractions rationnelles. Nouvelles Annales de Mathématiques, 2 Sér., 11 (1872) pp. 145–148. Ann. Scientifiques de l'École Normale Supérieure, 2 Sér., 1 (1872) pp. 215–218.
[Hilali, 1982] Hilali,A., Contribution à l'étude de points singuliers des systèmes différentiels linéaires. Thèse de troisième cycle, IMAG, Grenoble, 1982.
[Hilali, 1983] Hilali,A., Characterization of a Linear Differential System with a Regular Singularity. Proc. EUROCAL 83 [Springer Lecture Notes in Computer Science 162, Springer-Verlag, Berlin, Heidelberg, New York, 1983], pp. 68–77.
[Hilali, 1987] Hilali,A., Solutions formelles de systèmes différentiels linéaires au voisinage de points singuliers. Thèse d'État, Université I de Grenoble, June 1987.


[Horowitz, 1969] Horowitz,E., Algorithm for Symbolic Integration of Rational Functions. Ph.D. Thesis, Univ. of Wisconsin, November 1969.
[Horowitz, 1971] Horowitz,E., Algorithms for Partial Fraction Decomposition and Rational Function Integration. Proc. Second Symposium on Symbolic and Algebraic Manipulation, ACM Inc., 1971, pp. 441–457.
[Ince, 1953] Ince,E.L., Ordinary Differential Equations. Dover Publications, 1953.
[Jenks, 1984] Jenks,R.D., A Primer: 11 Keys to New SCRATCHPAD. Proc. EUROSAM 84 [Springer Lecture Notes in Computer Science 174, Springer-Verlag, Berlin, Heidelberg, New York, Tokyo, 1984], pp. 123–147.
[Jenks & Sutor, 1992] Jenks,R.D. & Sutor,R.S., AXIOM: The Scientific Computation System. Springer-Verlag, New York, 1992.
[Johnson, 1974] Johnson,S.C., Sparse Polynomial Arithmetic. Proc. EUROSAM 74 (SIGSAM Bulletin Vol. 8, No. 3, Aug. 1974, pp. 63–71).
[Kahrimanian, 1953] Kahrimanian,H.G., Analytic differentiation by a digital computer. M.A. Thesis, Temple U., Philadelphia, Pennsylvania, May 1953.
[Kalkbrener, 1989] Kalkbrener,M., Solving systems of algebraic equations by using Gröbner bases. Proc. EUROCAL 87 (Springer Lecture Notes in Computer Science 378, Springer-Verlag, Berlin-Heidelberg-etc., 1989), pp. 282–292.
[Kaltofen et al., 1981] Kaltofen,E., Musser,D.R. & Saunders,B.D., A Generalized Class of Polynomials that are Hard to Factor. Proceedings of the 1981 ACM Symposium on Symbolic and Algebraic Computation, ACM Inc., New York, 1981, pp. 188–194. Zbl. 477.68041.
[Kaltofen et al., 1983] Kaltofen,E., Musser,D.R. & Saunders,B.D., A Generalized Class of Polynomials that are Hard to Factor. SIAM J. Comp. 12 (1983) pp. 473–483. CR 8504-0367 (Vol. 25 (1985) p. 235). MR 85a:12001.
[Knuth, 1969] Knuth,D.E., The Art of Computer Programming, Vol. II, Semi-numerical Algorithms. Addison-Wesley, 1969.
[Knuth, 1973] Knuth,D.E., The Art of Computer Programming, Vol. I, Fundamental Algorithms, 2nd Edn. Addison-Wesley, 1973.
[Knuth, 1981] Knuth,D.E., The Art of Computer Programming, Vol. II, Semi-numerical Algorithms, 2nd Edn. Addison-Wesley, 1981. MR 83i:68003.


[Kovacic, 1977] Kovacic,J.J., An Algorithm for solving Second Order Linear Homogeneous Differential Equations. Preprint, Brooklyn College, City University of New York.
[Kovacic, 1986] Kovacic,J.J., An Algorithm for solving Second Order Linear Homogeneous Differential Equations. J. Symbolic Comp. 2 (1986) pp. 3–43.
[Krishnamurthy, 1985] Krishnamurthy,E.V., Error-free Polynomial Matrix Computations. Springer-Verlag, New York, 1985.
[Landau, 1905] Landau,E., Sur Quelques Théorèmes de M. Petrovitch Relatifs aux Zéros des Fonctions Analytiques. Bull. Soc. Math. France 33 (1905) pp. 251–261.
[Landau, 1992a] Landau,S., A Note on "Zippel Denesting". J. Symbolic Comp. 13 (1992), pp. 41–45.
[Landau, 1992b] Landau,S., Simplification of Nested Radicals. SIAM J. Comp. 21 (1992) pp. 85–110.
[Lang, 1965] Lang,S., Algebra. Addison-Wesley, Reading, Mass., 1965.
[Lauer, 1982] Lauer,M., Computing by Homomorphic Images. Symbolic & Algebraic Computation (Computing Supplementum 4) (ed. B. Buchberger, G.E. Collins & R. Loos), Springer-Verlag, Wien, New York, 1982, pp. 139–168.
[Lauer, 1983] Lauer,M., Generalized p-adic Constructions. SIAM J. Comp. 12 (1983) pp. 395–410. Zbl. 513.68035.
[Lazard, 1983] Lazard,D., Gröbner Bases, Gaussian Elimination and Resolution of Systems of Algebraic Equations. Proc. EUROCAL 83 [Springer Lecture Notes in Computer Science 162, Springer-Verlag, Berlin, Heidelberg, New York, 1983], pp. 146–157.
[Lazard, 1988] Lazard,D., Quantifier Elimination: Optimal Solution for 2 Classical Examples. J. Symbolic Comp. 5 (1988) pp. 261–266.
[Lazard, 1992] Lazard,D., Solving Zero-dimensional Algebraic Systems. J. Symbolic Comp. 13 (1992) pp. 117–131.
[Lenstra et al., 1982] Lenstra,A.K., Lenstra,H.W.,Jr. & Lovász,L., Factoring Polynomials with Rational Coefficients. Math. Ann. 261 (1982) pp. 515–534. Zbl. 488.12001. MR 84a:12002.
[Lipson, 1976] Lipson,J.D., Newton's Method: a great Algebraic Algorithm. Proceedings of the 1976 ACM Symposium on Symbolic and Algebraic Computation, ACM Inc., New York, 1976, pp. 260–270. Zbl. 454.65035.
[Loos, 1982] Loos,R., Generalized Polynomial Remainder Sequences. Symbolic & Algebraic Computation (Computing Supplementum 4)


(ed. B. Buchberger, G.E. Collins & R. Loos), Springer-Verlag, Wien, New York, 1982, pp. 115–137.
[McCallum, 1985a] McCallum,S., An Improved Projection Algorithm for Cylindrical Algebraic Decomposition. Computer Science Tech. Report 548, Univ. Wisconsin at Madison, 1985.
[McCallum, 1985b] McCallum,S., An Improved Projection Algorithm for Cylindrical Algebraic Decomposition. Proc. EUROCAL 85, Vol. 2 [Springer Lecture Notes in Computer Science 204, Springer-Verlag, Berlin, Heidelberg, New York, Tokyo, 1985], pp. 277–278.
[MacCallum & Wright, 1991] MacCallum,M.A.H. & Wright,F.J., Algebraic Computing with Reduce. Proc. First Brazilian School on Computer Algebra (ed. M.J. Rebouças & W.L. Roque), Oxford University Press, 1991.
[McCarthy et al., 1965] McCarthy,J., Abrahams,P.W., Edwards,W., Hart,T.P. & Levin,M., The LISP 1.5 Programmers Manual. M.I.T. Press, 1965.
[Macmillan & Davenport, 1984] Macmillan,R.J. & Davenport,J.H., Factoring Medium-Sized Integers. Computer J. 27 (1984) pp. 83–84.
[Marti et al., 1978] Marti,J.B., Hearn,A.C., Griss,M.L. & Griss,C., The Standard LISP Report. Report UCP-60, University of Utah, Jan. 1978. SIGSAM Bulletin 14 (1980), 1, pp. 23–43.
[Mauny, 1985] Mauny,M., Thèse de troisième cycle, Paris VII, Sept. 1985.
[Mayr & Mayer, 1982] Mayr,E. & Mayer,A., The Complexity of the Word Problem for Commutative Semi-groups and Polynomial Ideals. Adv. in Math. 46 (1982) pp. 305–329.
[Mignotte, 1974] Mignotte,M., An Inequality about Factors of Polynomials. Math. Comp. 28 (1974) pp. 1153–1157. Zbl. 299.12101.
[Mignotte, 1981] Mignotte,M., Some Inequalities About Univariate Polynomials. Proceedings of the 1981 ACM Symposium on Symbolic and Algebraic Computation, ACM Inc., New York, 1981, pp. 195–199. Zbl. 477.68037.
[Mignotte, 1982] Mignotte,M., Some Useful Bounds. Symbolic & Algebraic Computation (Computing Supplementum 4) (ed. B. Buchberger, G.E. Collins & R. Loos), Springer-Verlag, Wien, New York, 1982, pp. 259–263. Zbl. 498.12019.
[Mignotte, 1986] Mignotte,M., Computer versus Paper and Pencil. CALSYF 4, pp. 63–69.
[Mignotte, 1992] Mignotte,M., Mathematics for Computer Algebra. Springer-Verlag, Berlin-Heidelberg, 1992. Translation of Mathématiques pour le Calcul Formel (PUF, Paris, 1989).


[Möller & Mora, 1984] Möller,H.M. & Mora,T., Upper and Lower Bounds for the Degree of Groebner Bases. Proc. EUROSAM 84 [Springer Lecture Notes in Computer Science 174, Springer-Verlag, Berlin, Heidelberg, New York, Tokyo, 1984], pp. 172–183.
[Moore & Norman, 1981] Moore,P.M.A. & Norman,A.C., Implementing a Polynomial Factorization and GCD Package. Proceedings of the 1981 ACM Symposium on Symbolic and Algebraic Computation, ACM Inc., New York, 1981, pp. 109–116.
[Moses, 1966] Moses,J., Solution of a System of Polynomial Equations by Elimination. Comm. ACM 9 (1966) pp. 634–637.
[Moses, 1967] Moses,J., Symbolic Integration. Ph.D. Thesis & Project MAC TR-47, M.I.T., 1967.
[Moses, 1971a] Moses,J., Algebraic Simplification: A Guide for the Perplexed. Comm. ACM 14 (1971) pp. 527–537.
[Moses, 1971b] Moses,J., Symbolic Integration, the stormy decade. Comm. ACM 14 (1971) pp. 548–560.
[Musser, 1978] Musser,D.R., On the Efficiency of a Polynomial Irreducibility Test. J. ACM 25 (1978) pp. 271–282. MR 80m:68040.
[Najid-Zejli, 1984] Najid-Zejli,H., Computation in Radical Extensions. Proc. EUROSAM 84 [Springer Lecture Notes in Computer Science 174, Springer-Verlag, Berlin, Heidelberg, New York, Tokyo, 1984], pp. 115–122.
[Najid-Zejli, 1985] Najid-Zejli,H., Extensions algébriques: cas général et cas des radicaux. Thèse de troisième cycle, IMAG, Grenoble, 25.6.85.
[Nolan, 1953] Nolan,J., Analytic differentiation on a digital computer. M.A. Thesis, Math. Dept., M.I.T., Cambridge, Massachusetts, May 1953.
[Norman, 1975] Norman,A.C., Computing with Formal Power Series. ACM Transactions on Mathematical Software 1 (1975) pp. 346–356. Zbl. 315.65044.
[Norman, 1982] Norman,A.C., The Development of a Vector-based Algebra System. Proc. EUROCAM 82 [Springer Lecture Notes in Computer Science 144, Springer-Verlag, Berlin, Heidelberg, New York, 1982], pp. 237–248.
[Norman & Davenport, 1979] Norman,A.C. & Davenport,J.H., Symbolic Integration - the Dust Settles? Proceedings of the 1979 European Symposium on Symbolic and Algebraic Computation [Springer Lecture Notes in Computer Science 72, Springer-Verlag, Berlin, Heidelberg, New York, 1979], pp. 398–407. Zbl. 399.68056. MR 82b:68031.


[Norman & Moore, 1977] Norman,A.C. & Moore,P.M.A., Implementing the new Risch Integration Algorithm. Proc. Symp. advanced computing methods in theoretical physics, Marseilles, 1977, pp. 99–110.
[Ostrowski, 1946] Ostrowski,A.M., Sur l'intégrabilité élémentaire de quelques classes d'expressions. Comm. Math. Helvet. 18 (1946) pp. 283–308.
[Pearce & Hicks, 1981] Pearce,P.D. & Hicks,R.J., The Optimization of User Programs for an Algebraic Manipulation System. Proceedings of the 1981 ACM Symposium on Symbolic and Algebraic Computation, ACM Inc., New York, 1981, pp. 131–136.
[Pearce & Hicks, 1982] Pearce,P.D. & Hicks,R.J., The Application of Algebraic Optimisation Techniques to Algebraic Mode Programs for REDUCE. SIGSAM Bulletin 15 (1981/2) 4, pp. 15–22.
[Pearce & Hicks, 1983] Pearce,P.D. & Hicks,R.J., Data Structures and Execution Times of Algebraic Mode Programs for REDUCE. SIGSAM Bulletin 17 (1983) 1, pp. 31–37.
[Probst & Alagar, 1982] Probst,D. & Alagar,V.S., An Adaptive Hybrid Algorithm for Multiplying Dense Polynomials. Proc. EUROCAM 82 [Springer Lecture Notes in Computer Science 144, Springer-Verlag, Berlin, Heidelberg, New York, 1982], pp. 16–23.
[Puiseux, 1850] Puiseux,M.V., Recherches sur les fonctions algébriques. J. Math. Pures et Appliquées 15 (1850) pp. 365–480.
[Ramanujan, 1927] Ramanujan,S., Problems and Solutions. In Collected Works (ed. G.H. Hardy, P.V. Seshu Aiyar & B.M. Wilson), C.U.P., 1927.
[Ramis & Thomann, 1980] Ramis,J.P. & Thomann,J., Remarques sur l'utilisation numérique des séries de factorielles. Séminaire d'analyse numérique, Strasbourg, No. 364, 1980.
[Ramis & Thomann, 1981] Ramis,J.P. & Thomann,J., Some Comments about the Numerical Utilization of Factorial Series Methods in the Study of Critical Phenomena. Springer-Verlag, Berlin, Heidelberg, New York, 1981.
[Richard, 1988] Richard,F., Représentations graphiques de solutions d'équations différentielles dans le champ complexe. Thèse de doctorat de l'Université de Strasbourg, September 1988.
[Richards & Whitby-Strevens, 1979] Richards,M. & Whitby-Strevens,C., BCPL, The Language and its Compiler. C.U.P., 1979. Zbl. 467.68004.


[Richardson, 1968] Richardson,D., Some Unsolvable Problems Involving Elementary Functions of a Real Variable. J. Symbolic Logic 33 (1968), pp. 511–520.
[Risch, 1969] Risch,R.H., The Problem of Integration in Finite Terms. Trans. A.M.S. 139 (1969) pp. 167–189. Zbl. 184,67. MR 38 (1969) #5759.
[Risch, 1979] Risch,R.H., Algebraic Properties of the Elementary Functions of Analysis. Amer. J. Math. 101 (1979) pp. 743–759. MR 81b:12029.
[Rosenlicht, 1976] Rosenlicht,M., On Liouville's Theory of Elementary Functions. Pacific J. Math. 65 (1976), pp. 485–492.
[Rothstein, 1976] Rothstein,M., Aspects of Symbolic Integration and Simplification of Exponential and Primitive Functions. Ph.D. Thesis, Univ. of Wisconsin, Madison, 1976. (Xerox University Microfilms 77-8809.)
[Sasaki & Murao, 1981] Sasaki,T. & Murao,H., Efficient Gaussian Elimination Method for Symbolic Determinants and Linear Systems. Proceedings of the 1981 ACM Symposium on Symbolic and Algebraic Computation, ACM Inc., New York, 1981, pp. 155–159. Zbl. 486.68023.
[Sasaki & Murao, 1982] Sasaki,T. & Murao,H., Efficient Gaussian Elimination Method for Symbolic Determinants and Linear Systems. ACM Transactions on Mathematical Software 8 (1982) pp. 277–289. Zbl. 491.65014. CR 40,106 (Vol. 24 (1983) p. 103).
[Saunders, 1981] Saunders,B.D., An Implementation of Kovacic's Algorithm for Solving Second Order Linear Homogeneous Differential Equations. Proceedings of the 1981 ACM Symposium on Symbolic and Algebraic Computation, ACM Inc., New York, 1981, pp. 105–108. Zbl. 486.68023.
[Schwartz & Sharir, 1983a] Schwartz,J.T. & Sharir,M., On the "Piano Movers" Problem II. General Techniques for Computing Topological Properties of Real Algebraic Manifolds. Advances Appl. Math. 4 (1983) pp. 298–351.
[Schwartz & Sharir, 1983b] Schwartz,J.T. & Sharir,M., On the "Piano Movers" Problem II. Coordinating the Motion of Several Independent Bodies: The Special Case of Circular Bodies Moving Amidst Polygonal Barriers. Int. J. Robot. Res. 2 (1983) pp. 46–75.
[Singer, 1981] Singer,M.F., Liouvillian Solutions of n-th Order Homogeneous Linear Differential Equations. Amer. J. Math. 103 (1981) pp. 661–682. Zbl. 477.12016. MR 82i:12008.


[Singer, 1985] Singer,M.F., Solving Homogeneous Linear Differential Equations in Terms of Second Order Linear Differential Equations. Amer. J. Math. 107 (1985) pp. 663–696.
[Singer & Davenport, 1985] Singer,M.F. & Davenport,J.H., Elementary and Liouvillian Solutions of Linear Differential Equations. Proc. EUROCAL 85, Vol. 2 [Springer Lecture Notes in Computer Science 204, Springer-Verlag, Berlin, Heidelberg, New York, Tokyo, 1985], pp. 595–596.
[Singer & Ulmer, 1992] Singer,M.F. & Ulmer,F., Liouvillian Solutions of Third Order Linear Differential Equations: New Bounds and Necessary Conditions. Proc. ISSAC 92 (ed. P.S. Wang) pp. 57–62.
[Singer et al., 1981] Singer,M.F., Saunders,B.D. & Caviness,B.F., An Extension of Liouville's Theorem on Integration in Finite Terms. Proceedings of the 1981 ACM Symposium on Symbolic and Algebraic Computation, ACM Inc., New York, 1981, pp. 23–24. Zbl. 482.12008.
[Singer et al., 1985] Singer,M.F., Saunders,B.D. & Caviness,B.F., An Extension of Liouville's Theorem on Integration in Finite Terms. SIAM J. Comp. 14 (1985) pp. 966–990.
[Slagle, 1961] Slagle,J., A Heuristic Program that Solves Symbolic Integration Problems in Freshman Calculus. Ph.D. Dissertation, Harvard U., Cambridge, Mass., May 1961.
[Smit, 1981] Smit,J., A Cancellation Free Algorithm, with Factoring Capabilities, for the Efficient Solution of Large Sparse Sets of Equations. Proceedings of the 1981 ACM Symposium on Symbolic and Algebraic Computation, ACM Inc., New York, 1981, pp. 146–154.
[Stewart, 1989] Stewart,I.N., Galois Theory (2nd edition). Chapman & Hall, 1989.
[Strassen, 1969] Strassen,V., Gaussian Elimination is not Optimal. Numer. Math. 13 (1969) pp. 354–356.
[Tarski, 1951] Tarski,A., A Decision Method for Elementary Algebra and Geometry. 2nd ed., Univ. California Press, Berkeley, 1951. MR 10 #499.
[Tournier, 1987] Tournier,E., Solutions formelles d'équations différentielles: Le logiciel de calcul formel DESIR. Thèse d'État, Université I de Grenoble, April 1987.
[Trager, 1976] Trager,B.M., Algebraic Factoring and Rational Function Integration. Proceedings of the 1976 ACM Symposium on Symbolic and Algebraic Computation, ACM Inc., New York, 1976, pp. 219–226. Zbl. 498.12005.


[Trager, 1985] Trager,B.M., On the Integration of Algebraic Functions. Ph.D. Thesis, Dept. of Electrical Engineering & Computer Science, M.I.T., August 1985.
[Viry, 1982] Viry,G., Factorisation des polynômes à plusieurs variables. RAIRO Inform. Théor. 12 (1979) pp. 209–223.
[van der Waerden, 1949] van der Waerden,B.L., Modern Algebra. Frederick Ungar, New York, 1949.
[Wang, 1978] Wang,P.S., An Improved Multivariable Polynomial Factorising Algorithm. Math. Comp. 32 (1978) pp. 1215–1231. Zbl. 388.10035. MR 58 (1979) #27887b.
[Wang, 1980] Wang,P.S., The EEZ-GCD Algorithm. SIGSAM Bulletin 14 (1980) 2 pp. 50–60. Zbl. 445.68026.
[Wang, 1981] Wang,P.S., A p-adic Algorithm for Univariate Partial Fractions. Proceedings of the 1981 ACM Symposium on Symbolic and Algebraic Computation, ACM Inc., New York, 1981, pp. 212–217. Zbl. 486.68026.
[Wang, 1983] Wang,P.S., Early Detection of True Factors in Univariate Polynomial Factorization. Proc. EUROCAL 83 [Springer Lecture Notes in Computer Science 162, Springer-Verlag, Berlin, Heidelberg, New York, 1983], pp. 225–235.
[Wang et al., 1982] Wang,P.S., Guy,M.J.T. & Davenport,J.H., p-adic Reconstruction of Rational Numbers. SIGSAM Bulletin 16 (1982) pp. 2–3.
[Wasow, 1965] Wasow,W., Asymptotic Methods for Ordinary Differential Equations. Kreiger Publ. Co., New York, 1965.
[Watanabe, 1976] Watanabe,S., Formula Manipulation Solving Linear ODEs II. Publ. RIMS Kyoto Univ. 11 (1976) pp. 297–337.
[Watanabe, 1981] Watanabe,S., A Technique for Solving Ordinary Differential Equations Using Riemann's p-functions. Proceedings of the 1981 ACM Symposium on Symbolic and Algebraic Computation, ACM Inc., New York, 1981, pp. 36–43. Zbl. 493.34002.
[Wilkinson, 1959] Wilkinson,J.H., The Evaluation of the Zeros of Ill-conditioned Polynomials. Num. Math. 1 (1959) pp. 150–166, 167–180.
[Winograd, 1968] Winograd,S., A New Algorithm for Inner Product. IEEE Trans. Computers C-17 (1968) pp. 693–694.
[Winston & Horn, 1981] Winston,P.H. & Horn,B.K.P., LISP. Addison-Wesley, 1981. (The 2nd ed., 1984, is written for Common LISP.)


[Wunderlich, 1979] Wunderlich,M.C., A Running-Time Analysis of Brillhart's Continued Fraction Factoring Algorithm. In: Number Theory Carbondale 1979 (ed. M.B. Nathanson) [Springer Lecture Notes in Mathematics 751, Springer-Verlag, Berlin, Heidelberg, New York, 1979], pp. 328–342.
[Yun, 1974] Yun,D.Y.Y., The Hensel Lemma in Algebraic Manipulation. Ph.D. Thesis & Project MAC TR-138, M.I.T., 1974. [Reprinted Garland Publishing Co., New York, 1980.]
[Yun, 1976] Yun,D.Y.Y., On Square-free Decomposition Algorithms. Proceedings of the 1976 ACM Symposium on Symbolic and Algebraic Computation, ACM Inc., New York, 1976, pp. 26–35. Zbl. 498.13006.
[Yun, 1977] Yun,D.Y.Y., On the Equivalence of Polynomial Gcd and Squarefree Factorization Algorithms. Proc. 1977 MACSYMA Users' Conference, NASA publication CP-2012, National Technical Information Service, Springfield, Virginia, pp. 65–70.
[Zassenhaus, 1969] Zassenhaus,H., On Hensel Factorization. J. Number Theory 1 (1969) pp. 291–311. MR 39 (1970) #4120.
[Zippel, 1979] Zippel,R.E., Probabilistic Algorithms for Sparse Polynomials. Proceedings of the 1979 European Symposium on Symbolic and Algebraic Computation [Springer Lecture Notes in Computer Science 72, Springer-Verlag, Berlin, Heidelberg, New York, 1979], pp. 216–226.
[Zippel, 1985] Zippel,R.E., Simplification of Expressions Involving Radicals. J. Symbolic Comp. 1 (1985), pp. 189–210.

Index

ABS 259.
ACOS 260.
ACOSH 260.
addition of fractions 78.
  of polynomials 81.
algebraic 91, 128.
ALGEBRAIC 257.
algebraic integers 56.
algorithm of Berlekamp 161.
  Buchberger 117, 242.
  Euclid 124, 150.
  extended 92, 149, 167, 231.
  Hensel 168, 171.
  Zassenhaus 165.
ALLFAC 264.
allroots 28.
ANTISYMMETRIC 256.
APPEND 253.
ARRAY 255.
ASIN 260.
ASINH 260.
Assignment 257.
ATAN 260.
ATANH 260.
Ax-Schanuel Conjecture 100.
AXIOM 2, 42, 70, 72, 73.
bad reduction 145, 153.
  zeros 176.
Bareiss, method of 103.
base of an integer 76.
BEGIN 259.
Berlekamp, algorithm of 161.
Bezout's identity 231.
bignum 76.
Blocks 259.
boolean expressions 249.
bound of Hadamard 158.
Buchberger, algorithm of 117, 242.
  criterion of 119.
CAMAL 79, 110.
canonical 79.
Cauchy, Inequality of 125-126.
CEILING 260.
characteristic polynomial 22.
Chinese remainder theorem 237-239.
Church-Rosser property 113.
CLEAR 254, 268-270.
COEFF 265.
COEFFN 265.
COLLECT 258.
combinatorial explosion 166, 178.
Common LISP 291.
completely reduced 113.
component, semi-algebraic 129.
composed 39.
composition 32.
CONS 253.
content 150.
COS 260.
COSH 260.
COT 260.
Cramer, method of 102-104.
criterion of Buchberger 119.
current result 263.
cylindrical decomposition 131.
Davenport's theorem 203, 207.
Decomposition Lemma 192, 197.
decomposition 130.
  cylindrical 131.
  square-free 229.
DEG 266.
DEN 266.
denominator 89.
denominators, common 251.
dense 81, 100.
DEPEND 253, 256, 260.
dependency 253.
DERIVE 2, 70.
DESIR 218.
determinant 22, 25, 101, 158.
DF 260.
differentiation 28-32.
DILOG 260.
discriminant 134, 236.
distributed 88.
DIV 264.
division 76.
DO 258.
elementary 191.
elimination of quantifiers 135.
ellipse problem 137.
ELSE 258.
END 259.
equational expressions 249.
equivalent 112.
ERF 260.
Euclid, algorithm of 124, 150.
  extended algorithm of 92, 149, 167, 231.
Euclidean sequences 85.
EVENP 249.
EXP 250, 260.
expansion of expressions 250.
EXPINT 260.
exponential 99, 196.
extended Euclidean algorithm 92, 149, 167, 231.
EZGCD 250.
FACTOR 264.
factorisation 4, 34.
  of integers 77.
  of polynomials 161.
FACTORIZE 261.
Fermat, little theorem of 162.
FIRST 253.
FIXP 249.
floating-point numbers 7, 78.
FLOOR 260.
FOR 258.
FOR ALL 269.
Formal differentiation 183.
  integration 183.
FORT 265.
Fourier, series of 109.
Fredholm equation 46.
FREEOF 249.
FROBENIUS 218.
g.c.d. 13.
  of polynomials 75, 78, 83-90, 102, 151-154, 179.
Gauss, elimination of 102-105.
  Lemma of 151, 161.
GCD 250.
generalised polynomial 197.
generator of an ideal 112.
Gianni 116.
good reduction 145, 153.
GOTO 259.
greatest common divisors see g.c.d.
Gröbner basis 113.
group of instructions 258.
Hadamard, bound of 158.
Hearn 245.
Hensel, Lemma of 167.
  linear algorithm of 168.
  quadratic algorithm of 169-171.
Hermite, method of 187.
Hilbert matrix 19.
  Irreducibility theorem 176.
Horowitz, method of 188.
ideal 112.
IF 258.
inequalities 122.
Inequality of Cauchy 125, 126.
  Knuth 126.
  Landau-Mignotte 142.
INFIX 255.
INT 260.
INTEGER 253, 257.
integer expressions 249.
integration 43.
  problem 184.
intermediate expression swell 75.
INTSTR 264.
inverse of a matrix 158.
IRENA 271.
isolation of a root 123.
JOIN 258.
Kalkbrener 116.
kernel 97, 249.
Knuth, Inequality of 126.
Kovacic's theorem 209.
Landau-Mignotte Inequality 142.
Laplace transform 43.
lattice 89.
lazy 249.
  evaluation 109.
LCM 251.
LCOF 266.
leading coefficient 177.
least common multiples 251.
Lemma, Decomposition 192, 197.
  Gauss's 151, 161.
  Hensel's 167.
LENGTH 253.
LET 260, 268.
lexicographic 87.
LINEAR 255.
linear equations 160.
Liouville's Principle 191.
Liouvillian 208.
LISP 1-3, 40, 70.
LIST 264.
LOG 260.
logarithm 100, 192.
LTERM 266.
MACSYMA 1-73, 80, 88, 100, 161, 210, 266.
MAINVAR 266.
map 38.
MAPLE 2, 70.
MAT 271.
MATHEMATICA 2, 70-71.
matrix 18-27.
  Hilbert 19.
  inverse of 158.
  Sylvester's 234.
MAX 259.
MCD 251.
Mechain's formula 8.
memory space 82.
method, Bareiss' 103.
  Cramer's 102-104.
  Hermite's 187.
  Horowitz's 188.
  Norman's 108.
  Ostrogradski's 188.
  Ostrowski's 196.
MIN 259.
Modular Calculation, Principle of 159.
  g.c.d. 145, 154.
Mora, example of 120.
motion planning 137.
multiplication of fractions 78.
  polynomials 81.
muMATH 1, 70.
Musser, criterion of 173.
NAT 264.
nested radical 92.
NEWTON 218.
Newton, iteration of 169.
NODEPEND 254.
non-commutative variables 100.
NONCOM 100, 256.
norm 94.
normal 79.
Norman, method of 108.
NUM 266.
NUMBERP 249.
numerator 89.
OPERATOR 255.
order 86.
ORDER 263.
ordered 249.
ORDP 249.
Ostrogradski, method of 188.
Ostrowski's method 196.
p-adic numbers 170.
p.r.s. 85.
parasitic factors 176.
  solutions 121.
PART 266-267.
partial fractions 159, 232.
planning, robot motion 137.
polynomial remainder sequences 85.
PRECEDENCE 255.
precision 7, 106.
PRECISION 251.
PRIMEP 250.
primitive 150.
  element 95.
  part 150.
  sequences 85.
Principle of Modular Calculation 159.
probabilistic 161.
PROCEDURE 257.
PRODUCT 258.
Puiseux, series 107-109.
quantifiers, elimination of 135.
radcan 36.
radical 91.
RAT 264.
ratdenomdivide 36.
ratexpand 36.
rational functions 88.
  numbers 77.
RATPRI 264.
recursive 11, 88.
REDUCE 1, 42, 70, 80-89, 97-100, 106, 109, 139, 245-274.
reduced 112.
  basis 114.
REDUCT 266.
reduction 113, 145.
remainder theorem, Chinese 239.
REMFAC 264.
repeated elimination 121.
representation 79.
REST 253.
resultant 14, 96, 121, 134, 144, 157, 234, 261.
RETURN 259.
REVERSE 253.
REVPRI 264.
rewrite rules 98.
Risch's problem 205.
  theorem 202.
RLISP 70.
robot motion planning 137.
robotics 137.
root, isolated 123.
ROUND 260.
ROUNDED 251.
rounded 260.
rule list 267.
S-polynomial 117.
SCALAR 248, 253.
SCRATCHPAD 2, 71 see AXIOM.
SECOND 253.
semi-algebraic component 129.
  variety 129.
sequence, Sturm 124.
  sub-resultant polynomial 85.
series 105.
  Fourier 109.
  Puiseux 107.
  Taylor 28, 105.
SHARE 4.
simplification 28, 36, 39, 79, 184, 250.
SIN 260.
Singer's theorem 210.
SINH 260.
SOLVE 262.
sparse 81, 104, 160, 181.
SQRT 260.
square-free decomposition 229.
standard basis 113.
STRUCTR 265.
Structure, theorem of 99.
Sturm sequence 124.
  Theorem of 125.
SUB 267.
sub-resultant 85.
  Theorem 86.
substitution 29, 84, 267.
successive approximation 106.
SUCH THAT 269.
SUM 258.
Sylvester, identity of 103.
  matrix 234.
SYMBOLIC 257.
SYMMETRIC 256.
TAN 260.
TANH 260.
Taylor series 28, 105.
THEN 258.
Theorem, Chinese remainder 237-239.
  Davenport's 203, 207.
  Fermat, little 162.
  Gianni-Kalkbrener 116.
  Hilbert irreducibility 176.
  Kovacic's 209.
  Risch's 202.
  Singer's 210.
  Structure 99.
  Sturm's 125.
  Sub-resultant 86.
THIRD 253.
total degree 87.
Transcendental functions 97.
transform of Laplace 43.
unlimited accuracy 3.
UNTIL 259.
variation 125.
variety, semi-algebraic 129.
WEIGHT 106, 254, 270.
WHERE 259, 268.
WHILE 259.
Wilkinson, polynomial of 126.
WRITE 263.
WS 263.
WTLEVEL 106, 254.
Zassenhaus, Algorithm of 165.
zero-dimensional 115.