Springer Series in Computational Mathematics 55
John C. Butcher
B-Series Algebraic Analysis of Numerical Methods
Springer Series in Computational Mathematics Volume 55
Series Editors Randolph E. Bank, Department of Mathematics, University of California, San Diego, La Jolla, CA, USA Wolfgang Hackbusch, Max-Planck-Institut für Mathematik in den Naturwissenschaften, Leipzig, Germany Josef Stoer, Institut für Mathematik, University of Würzburg, Würzburg, Germany Richard S. Varga, Kent State University, Kent, OH, USA Harry Yserentant, Institut für Mathematik, Technische Universität Berlin, Berlin, Germany
This is a numerical analysis series in which high-level monographs are published. The series is being developed with the aim of including more publications that are closer to applications. Several volumes in the series are linked to mathematical software.
More information about this series is available at http://www.springer.com/series/797
John C. Butcher Department of Mathematics University of Auckland Auckland, New Zealand
ISSN 0179-3632 ISSN 2198-3712 (electronic) Springer Series in Computational Mathematics ISBN 978-3-030-70955-6 ISBN 978-3-030-70956-3 (eBook) https://doi.org/10.1007/978-3-030-70956-3 Mathematics Subject Classification: 65L05, 65L06, 65L20 © Springer Nature Switzerland AG 2021 This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. This Springer imprint is published by the registered company Springer Nature Switzerland AG The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Foreword
In Spring 1965 I took part in a scientific meeting in Vienna, where I gave a talk on Lie derivatives and Lie series, research done together with Hans Knapp under the supervision of Wolfgang Gröbner at the University of Innsbruck. Right after, in the same session, was a talk by Dietmar Sommer from Aachen on a sensation: implicit Runge–Kutta methods of any order, together with impressive numerical results and trajectory plots. Back in Innsbruck, the paper in question was quickly found in volume 18 of Mathematics of Computation from 1964, but not at all quickly understood. Definition, Definition, so-called "trees" were expressions composed of brackets and τ's, Lemma, Proof, Lemma, Proof, Theorem, Proof, etc. Apparently, another paper of John Butcher's was required to stand a chance of understanding it, but that paper was in the Journal of the Australian Mathematical Society, a journal almost completely unknown in Austria, at a time long before the internet when even access to a Xerox copying machine required permission from the Rektorat. John told me later that this paper had previously been refused publication in Numerische Mathematik for its lack of practical interest: no Algol code, no impressive numerical results with thousands of digits, and no rigorous error bounds. These two papers of Butcher brought elegance and order into the theory of Runge–Kutta methods. Earlier, these methods were very popular for practical applications; one of the first computers, ENIAC (Electronic Numerical Integrator and Computer), containing only a few "registers", had mainly been built for solving differential equations using a Runge–Kutta method. The very first program which G. Dahlquist wrote for the first Swedish computer was a Runge–Kutta code. But not much theoretical progress had been achieved in these methods after Kutta's paper from 1901. For example, the question whether a fifth order method can be found with only five stages remained open for half a century, until Butcher proved that no such method exists. The next sensation came when, "at the Dundee Conference in 1969, a paper by J. Butcher was read which contained a surprising result" (H. J. Stetter, 1971). The idea of combining
different fourth order methods with five stages in such a way that their composition leads to a fifth order numerical result culminated in Butcher's algebraic theory of integration methods, first accessible as a preprint with beautiful Māori design, then published in an accessible journal, but which was "admirantur plures quam intelligant" ("more admired than understood", A. Taquet, 1672). Fortunately, in the academic year 1968/69 I had the chance to deliver a first year Analysis course in Innsbruck, where a couple of very brilliant students continued to participate in a subsequent seminar on Numerical Analysis, above all Ernst Hairer, in whose hands this Māori-design preprint eventually arrived. Many months later, Ernst suddenly came to me and said: "Iatz hab i's verstandn" ("Now I have understood it"). But to push all these Runge–Kutta and generalized Runge–Kutta spaces into a brain that had worked for years on Lie series and Taylor series was another adventure. The best procedure was finally to bring the algebraic structures directly into the series themselves. So we arrived at the composition of B-series.

Gerhard Wanner
Preface
The term "B-series", also known as "Butcher series", was introduced by Ernst Hairer and Gerhard Wanner [52] (1974). In 1970, I was invited to visit the University of Innsbruck to give a series of lectures to a very talented audience, which included Ernst and Gerhard. At that time, my 1972 paper [14] had not been published, but a preprint was available. A few years later the important Hairer–Wanner paper [52] appeared. "B-series" refers to a special type of Taylor series associated with initial-value problems

y'(x) = f(y(x)),   y(x0) = y0,

and the need to approximate y(x0 + h), with h a specified "stepsize", using Runge–Kutta and other numerical methods. The formal Taylor series of the solution about the initial point x0 is a sum of terms containing two factors: (i) a factor related to a specific initial value problem; and (ii) a coefficient factor which is the same for each initial value problem. If, instead of the exact solution to an initial value problem, it is required to find the Taylor series for an approximate solution, calculated by a specific Runge–Kutta method, the terms in (i) are unchanged, but the coefficients in (ii) are replaced by a different sequence of coefficients, characteristic of the particular Runge–Kutta method. This factorization effectively divides mathematical questions about initial value problems and approximate solutions into two components: analytical questions about f, and essentially algebraic questions concerning coefficient sequences. An important point to note is that, in the various Taylor series, the terms in the sequences are best thought of not in terms of indices 0, 1, 2, 3, 4, ..., but in terms of graph-theoretic indices: the empty tree ∅, followed by the sequence of all rooted trees. The significance of trees in mathematics was pointed out by Arthur Cayley [28] (1857), and the name "tree", referring to these objects, is usually attributed to him.
The use of trees in the analysis of Runge–Kutta methods seems to have been due to S. Gill [45] (1951) and then to R. H. Merson [72] (1957). The present author has also developed these ideas [7, 14] (1963, 1972), leading to the use of group and other algebraic structures in the analysis of B-series coefficients. The Butcher group, referred to in this volume as the "B-group", is central to this theory, and is related to algebraic structures with applications in Physics and Geometry – see [4] (Brouder, 2000).

Chapter 1 is a broad and elementary introduction to differential equation systems and numerical methods for their solution. It also contains an introduction to some of the topics included in later chapters. Chapter 2 is concerned with trees and related graphical structures. B-series, with further properties, especially those associated with compositions of series, are introduced in Chapter 3. Properties of the B-group are explored in Chapter 4. This chapter is also concerned with "integration methods" in the sense of [14]. Integration methods were intended as a unifying theory that includes Runge–Kutta methods, with a finite number of stages, and continuous stage Runge–Kutta methods, such as in the kernel of the Picard–Lindelöf theorem – see for example [30] (Coddington and Levinson, 1957). Chapter 5 deals with Runge–Kutta methods, with an emphasis on the B-series analysis of these methods. Multivalue methods are the subject of Chapter 6. This includes linear multistep methods and so-called "general linear methods". In these general linear methods, multistage and multivalue aspects of numerical methods fit together in a balanced way. In the final Chapter 7, the B-series approach is applied to limited aspects of the burgeoning subject of Geometric Integration.

In addition to exercises scattered throughout the book, especially in the early chapters, a number of substantial "projects" are added at the end of each chapter. Unlike the exercises, the projects are open-ended and of a more challenging nature, and no answers are given for them. Throughout the volume, a number of algorithms have been included. As far as I am aware, the first B-series algorithm was composed by Jim Verner and myself on 1 January 1970. A shared interest in related algorithms has been maintained between us to this day.

Amongst the many people who have taken an interest in this work, I would like to mention four people who have read all or part of the text and given me detailed advice. I express my gratitude for this valuable help to Adrian Hill, Yuto Miyatake, Helmut Podhaisky and Shun Sato. Special thanks to Tommaso Buvoli, Valentin Dallerit, Anita Kean and Helmut Podhaisky, who are kindly working with me on the support page.

Support page

A support page for this book is being developed at jcbutcher.com/B-series-book. Amongst other information, the Algorithms in the book will be re-presented as procedures or functions in one or more standard languages. The support page will also contain some informal essays on some of the broad topics of the book.
Contents

1 Differential equations, numerical methods and algebraic analysis
  1.1 Introduction
  1.2 Differential equations
  1.3 Examples of differential equations
  1.4 The Euler method
  1.5 Runge–Kutta methods
  1.6 Multivalue methods
  1.7 B-series analysis of numerical methods

2 Trees and forests
  2.1 Introduction to trees, graphs and forests
  2.2 Rooted trees and unrooted (free) trees
  2.3 Forests and trees
  2.4 Tree and forest spaces
  2.5 Functions of trees
  2.6 Trees, partitions and evolutions
  2.7 Trees and stumps
  2.8 Subtrees, supertrees and prunings
  2.9 Antipodes of trees and forests

3 B-series and algebraic analysis
  3.1 Introduction
  3.2 Autonomous formulation and mappings
  3.3 Fréchet derivatives and Taylor series
  3.4 Elementary differentials and B-series
  3.5 B-series for flow_h and implicit_h
  3.6 Elementary weights and the order of Runge–Kutta methods
  3.7 Elementary differentials based on Kronecker products
  3.8 Attainable values of elementary weights and differentials
  3.9 Composition of B-series

4 Algebraic analysis and integration methods
  4.1 Introduction
  4.2 Integration methods
  4.3 Equivalence and reducibility of Runge–Kutta methods
  4.4 Equivalence and reducibility of integration methods
  4.5 Compositions of Runge–Kutta methods
  4.6 Compositions of integration methods
  4.7 The B-group and subgroups
  4.8 Linear operators on B* and B0

5 B-series and Runge–Kutta methods
  5.1 Introduction
  5.2 Order analysis for scalar problems
  5.3 Stability of Runge–Kutta methods
  5.4 Explicit Runge–Kutta methods
  5.5 Attainable order of explicit methods
  5.6 Implicit Runge–Kutta methods
  5.7 Effective order methods

6 B-series and multivalue methods
  6.1 Introduction
  6.2 Survey of linear multistep methods
  6.3 Motivations for general linear methods
  6.4 Formulation of general linear methods
  6.5 Order of general linear methods
  6.6 An algorithm for determining order

7 B-series and geometric integration
  7.1 Introduction
  7.2 Hamiltonian and related problems
  7.3 Canonical and symplectic Runge–Kutta methods
  7.4 G-symplectic methods
  7.5 Derivation of a fourth order method
  7.6 Construction of a sixth order method
  7.7 Implementation
  7.8 Numerical simulations
  7.9 Energy preserving methods

Answers to the exercises
References
Index
Chapter 1
Differential equations, numerical methods and algebraic analysis
1.1 Introduction

Differential equations and numerical methods

Ordinary differential equations are at the heart of mathematical modelling. Typically, ordinary differential equation systems arise as initial value problems

y'(x) = f(x, y(x)),   y(x0) = y0 ∈ R^N,

or, if f does not depend directly on x,

y'(x) = f(y(x)),   y(x0) = y0 ∈ R^N.   (1.1 a)
The purpose of an equation like this is to describe the behaviour of a physical or other system and, at the same time, to predict future values of the time-dependent variable y(x), whose components represent quantities of scientific interest. It is often more convenient, in specific situations, to formulate (1.1 a) in different styles. For example, the components of y(x) might represent differently named variables, and the formulation should express this. In other situations the problem being modelled might be more naturally represented using a system of second, or higher, order differential equations. However, we will usually use (1.1 a) as a standard form for a differential system. Given x > x0, the flow of (1.1 a) is the solution to this initial value problem evaluated at x. This is sometimes written as e^((x−x0)f) y0, but our preference will be to write it as flow_(x−x0) y0, where the nature of f is taken for granted. The predictive power of differential equations is used throughout science, even when solutions cannot be obtained analytically, and this underlines the need for numerical methods. This usually means that we need to approximate flow_h y0 to obtain a usable value of y(x0 + h). This can be repeated computationally to obtain, in turn, y(x0 + h), y(x0 + 2h), y(x0 + 3h), .... Although many methods for carrying out the approximation to the flow are known, we will emphasize Runge–Kutta methods, because these consist of approximating
the solution at x0 + nh, n = 1, 2, 3, ..., step by step. As an example of these methods, choose one of the famous methods of Runge [82] (Runge, 1895), where the mapping R_h y0 = y1 is defined by

y1 = y0 + h f(y0 + (1/2) h f(y0)).   (1.1 b)
Accuracy of numerical approximations

Accuracy of numerical methods will be approached, in this volume, through a study of the formal Taylor expansions of the solution, and of numerical approximations to the solution. The flavour of the questions that arise is both combinatorial and algebraic, because of the common structure of many of the formal expansions. For the problem (1.1 a), we will need to compare the mappings flow_h and, for a particular Runge–Kutta method, the mapping RK_h. This leads us to consider the difference flow_h y0 − RK_h y0. If it were possible to expand this expression in a Taylor series, then it would be possible to seek methods for which the terms are zero up to some required power of h, say up to terms in h^p. It would then be possible to estimate the asymptotic accuracy of the error as O(h^(p+1)). This would be only for a single step, but this theory, if it were feasible, would also give a guide to the global accuracy.
Taylor expansions and trees

Remarkably, flow_h y0 and RK_h y0 have closely related Taylor expansions, and one of the first aims of this book is to enunciate and analyse these expansions. The first step, in this formulation, is to make use of the graphs known as rooted trees, or arborescences, but referred to throughout this book simply as trees. The formal introduction of trees will take place in Chapter 2 but, in the meantime, we will introduce these objects by illustrative diagrams:

[diagrams of the first rooted trees: the single vertex, the order 2 tree, the two order 3 trees and the four order 4 trees]
The set of all trees will be denoted by T. If t denotes an arbitrary tree, then |t| is the "order", or number of vertices, and σ(t) is the symmetry of t. The symmetry is a positive integer indicating how repetitive a tree diagram is. The formal statement will be given in Definition 2.5A (p. 58). The common form for flow_h y0 and RK_h y0 is

y0 + Σ_t Ψ(t) (1/σ(t)) F(t) h^|t|,   (1.1 c)

where, for a given tree t, F(t) depends only on the differential equation being solved and Ψ(t) depends only on the mapping, flow_h or RK_h.
The formulation of various Taylor expansions, given by (1.1 c), is the essential idea behind the theory of B-series, and is the central motivation for this book. We will illustrate the use of this result, using three numerical methods from the present chapter, together with the flow itself. The methods are the Euler method, Euler_h, (1.4 a), the Runge–Kutta method Runge-I_h:

y1 = y0 + (1/2) h f(y0) + (1/2) h f(y0 + h f(y0)),

and Runge-II_h, given by (1.1 b). Alternative formulations of these Runge–Kutta methods in Section 1.5 (p. 19) are, for Runge-I_h, (1.5 c) and, for Runge-II_h, (1.5 d). The coefficients, that is the values of Ψ(t), for |t| ≤ 3, written here in terms of the trees τ, [τ], [τ,τ] and [[τ]], are

Mapping      Ψ(τ)   Ψ([τ])   Ψ([τ,τ])   Ψ([[τ]])
flow_h        1      1/2      1/3        1/6
Euler_h       1      0        0          0
Runge-I_h     1      1/2      1/2        0
Runge-II_h    1      1/2      1/4        0
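As an illustration (not code from the book), a table like this can be generated for any Runge–Kutta tableau from the elementary weights of the four trees; here the Octave arrays A, b, c describe Runge-II_h, and the bracket notation for trees is as above:

% Elementary weights Psi(t) for the trees of orders 1 to 3,
% evaluated for the midpoint-rule tableau of Runge-II_h.
A = [0 0; 1/2 0];  b = [0 1];  c = A * ones(2, 1);
Psi = [b * ones(2, 1), b * c, b * (c .^ 2), b * (A * c)]
% Result: [1, 1/2, 1/4, 0]; the flow has [1, 1/2, 1/3, 1/6],
% so agreement stops after the order 2 tree.

Comparing each row of such output with the flow row reproduces the order statements made below.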
Independently of the choice of the differential equation system being solved, we can now state the orders of the three methods under consideration. Because the same entry is given for the single first order tree, each of the three numerical methods is at least first order as an approximation to the exact solution. Furthermore, the two Runge methods have order two but not three, as we see from the agreement with the flow for the order 2 tree, but not for the two order 3 trees. Also, from the table entries, we see that the Euler method does not have an order greater than one.

Fréchet derivatives and gradients

In the formulation and analysis of both initial value problems, and numerical methods for solving them, it will be necessary to introduce various structures involving partial derivatives. In particular, the first Fréchet derivative, also known as the Jacobian matrix, with elements

f'(y) = [ ∂f^1/∂y^1   ∂f^1/∂y^2   ...   ∂f^1/∂y^N ]
        [ ∂f^2/∂y^1   ∂f^2/∂y^2   ...   ∂f^2/∂y^N ]
        [    ...          ...     ...       ...   ]
        [ ∂f^N/∂y^1   ∂f^N/∂y^2   ...   ∂f^N/∂y^N ]

Similarly, the Fréchet derivative of a scalar-valued function has the form

H'(y) = [ ∂H/∂y^1   ∂H/∂y^2   ...   ∂H/∂y^N ].
This is closely related to the gradient ∇(H) = H'(y)^T, which arises in many specific problems and classes of problems.
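Although the partial derivatives above are exact objects, a finite difference approximation is often all that is needed in experiments; the following sketch (an illustration with assumed names, not code from the book) approximates f'(y) column by column:

function J = jacobian_fd(f, y)
  % Forward-difference approximation to the Frechet derivative f'(y);
  % f maps column vectors in R^N to column vectors in R^N.
  N = length(y);  J = zeros(N, N);  fy = f(y);  delta = sqrt(eps);
  for j = 1:N
    e = zeros(N, 1);  e(j) = delta;
    J(:, j) = (f(y + e) - fy) / delta;
  end
endfunction

Applied to a scalar-valued H, the same construction returns the row vector H'(y), and its transpose is the gradient.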
Chapter outline

In Section 1.2, a review of differential equations is presented. This is followed in Section 1.3 by examples of differential equations. The Euler and Taylor series methods are introduced in Section 1.4, followed by Runge–Kutta methods (Section 1.5) and multivalue methods (Section 1.6). Finally, a preliminary introduction to B-series is presented in Section 1.7.
1.2 Differential equations

An ordinary differential equation is expressed in the form

dy/dx = f(x, y(x)),   f : R × R^N → R^N,   (1.2 a)

or, written in terms of individual components,

dy^1/dx = f^1(x, y^1(x), y^2(x), ..., y^N(x)),
dy^2/dx = f^2(x, y^1(x), y^2(x), ..., y^N(x)),
...
dy^N/dx = f^N(x, y^1(x), y^2(x), ..., y^N(x)).   (1.2 b)

This can be formulated as an autonomous problem

dy/dx = f(y(x)),   f : R^N → R^N,   (1.2 c)

by increasing N if necessary and introducing a new dependent variable y^0 which is forced to always equal x. The autonomous form of (1.2 b) becomes

dy^0/dx = 1,
dy^1/dx = f^1(y^0(x), y^1(x), y^2(x), ..., y^N(x)),
dy^2/dx = f^2(y^0(x), y^1(x), y^2(x), ..., y^N(x)),
...
dy^N/dx = f^N(y^0(x), y^1(x), y^2(x), ..., y^N(x)).
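This reduction is entirely mechanical, as the following sketch suggests (assumed names, not code from the book):

function g = autonomize(f)
  % Given f(x, y) for a non-autonomous problem, return g for the
  % equivalent autonomous problem with state Y = [x; y], in which
  % the added first component satisfies (y^0)' = 1.
  g = @(Y) [1; f(Y(1), Y(2:end))];
endfunction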
Initial value problems

A subsidiary condition

y(x0) = y0,   x0 ∈ R,   y0 ∈ R^N,   (1.2 d)

is an initial value, and an initial value problem consists of the pair of equations (1.2 a), (1.2 d) or the pair (1.2 c), (1.2 d). Initial value problems have applications in applied mathematics, engineering, physics and other sciences, and finding reliable and efficient numerical methods for their solution is of vital importance.

Exercise 1 Reformulate the initial value problem

u''(x) + 3u'(x) = 2u(x) + v(x) + cos(x),   u(1) = 2,   u'(1) = −2,
v''(x) + u'(x) − v'(x) = u(x) + v(x)^2 + sin(x),   v(1) = 1,   v'(1) = 4,

in the form y'(x) = f(y(x)), y(x0) = y0, where y^0 = x, y^1 = u, y^2 = u', y^3 = v, y^4 = v'.
Scalar problems

If N = 1, we obtain a scalar initial value problem

y'(x) = f(x, y(x)),   y(x0) = y0 ∈ R.   (1.2 e)

Scalar problems are useful models for more general problems, because of their simplicity and ease of analysis. However, this simplicity can lead to spurious conclusions. A specific case is the early analysis of Runge–Kutta order conditions [82] (Runge, 1895), [56] (Heun, 1900), [66] (Kutta, 1901), [77] (Nyström, 1925), in which, above order 4, the order conditions derived using (1.2 e) give an incomplete set.

Complex variables

Sometimes it is convenient to write a differential equation using complex variables:

dz/dt = f(t, z(t)),   f : R × C^N → C^N.

For example, the system

dx/dt = 2x + 3 cos(t),   x(0) = 1,
dy/dt = 2y + sin(t),   y(0) = 0,

can be written succinctly as

dz/dt = 2z + 2 exp(it) + exp(−it),   z(0) = 1,   (1.2 f)

with z(t) = x(t) + iy(t).
Exercise 2 Find the values of A, B, C such that z = A exp(2t) + B exp(it) + C exp(−it) is the solution to (1.2 f).

Exercise 3 Write the solution to Exercise 2 in terms of the real and imaginary components.
Well-posed problems

An initial value problem is well-posed if it has a solution, this solution is unique, and the solution depends continuously on the initial value. In this discussion we will confine ourselves to autonomous problems.

Definition 1.2A A function f : R^N → R^N satisfies a Lipschitz condition if there exists a constant L > 0 (the Lipschitz constant) such that

||f(y) − f(z)|| ≤ L ||y − z||,   y, z ∈ R^N.

Given an initial value problem

y'(x) = f(y(x)),   y(x0) = y0,

where f satisfies a Lipschitz condition with constant L, we find by integration that for x̄ ≥ x0,

y(x̄) = y0 + ∫_{x0}^{x̄} f(y(x)) dx.   (1.2 g)

If x ∈ I := [x0, x̄], and ||y|| denotes sup_{x∈I} ||y(x)||, we can construct a sequence of approximations y^[k], k = 0, 1, ..., to (1.2 g), from

y^[0](x) = y0,
y^[k](x) = y0 + ∫_{x0}^{x} f(y^[k−1](x)) dx,   k = 1, 2, ....

If r := |x̄ − x0| L < 1, we obtain the estimates

||y^[1] − y^[0]|| ≤ |x̄ − x0| ||f(y0)||,   (1.2 h)
||y^[k+1] − y^[k]|| ≤ r ||y^[k] − y^[k−1]|| ≤ r^k |x̄ − x0| ||f(y0)||.   (1.2 i)

This shows that the sequence y^[k], k = 0, 1, ..., is convergent. Denote the limit by y. It can be verified that the conditions for well-posedness are satisfied. By adding (1.2 h) and (1.2 i), with k = 1, 2, ..., we see that every member of the sequence satisfies

||y^[k] − y^[0]|| ≤ (1/(1−r)) |x̄ − x0| ||f(y0)||.
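The iteration above can also be carried out numerically; a minimal sketch on a grid, with the integral in (1.2 g) replaced by trapezoidal quadrature and assumed test data (L = 1, r = 1/2), is:

% Picard iteration for y' = y, y(0) = 1, on [0, 1/2].
f = @(y) y;  x = linspace(0, 0.5, 201);
y = ones(size(x));                  % y^[0]
for k = 1:8
  y = 1 + cumtrapz(x, f(y));        % y^[k], following (1.2 g)
end
% The iterates converge to the exact solution exp(x).

Each pass through the loop reproduces one step of the contraction argument, with the contraction factor r = 1/2 visible in the decreasing corrections.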
To overcome the restriction |x̄ − x0| L < 1, a sequence of x values can be inserted between x0 and x̄, sufficiently close together to obtain convergent sequences in each subinterval in turn. While a Lipschitz condition is very convenient to use in applications, it is not a realistic assumption, because many well-posed problems do not satisfy it. It is perhaps better to use the property given in the following.

Definition 1.2B A function f : R^N → R^N satisfies a local Lipschitz condition if there exists a constant L (the Lipschitz constant) and a positive real R (the influence radius) such that

||f(y) − f(z)|| ≤ L ||y − z||,   y, z ∈ R^N,   ||y − z|| ≤ 2R.

If f satisfies the conditions of Definition 1.2B, then for a given y0 ∈ R^N, define a disc D by D = {y ∈ R^N : ||y − y0|| ≤ R} and a function f̃ by

f̃(y) = f(y),                                  y ∈ D,
f̃(y) = f(y0 + R (y − y0)/||y − y0||),          y ∉ D.
Exercise 4 Show that f̃ satisfies a Lipschitz condition with Lipschitz constant L.
The first numerical methods

The method of Euler [42] (Euler, Collected Works, 1913), proposed in the eighteenth century, is regarded as the foundation of numerical time-stepping methods for the solution of differential equations. We will refer to it here as the "explicit Euler" method, to distinguish it from the closely related "implicit Euler" method. Given a problem

y'(x) = f(x, y(x)),   y(x0) = y0,

we can try to approximate the solution at a nearby point x1 = x0 + h by the formula

y(x1) ≈ y1 := y0 + h f(x0, y0).

This is illustrated in the one-dimensional case by the diagram on the left (Explicit Euler).
[Diagrams: Explicit Euler and Implicit Euler; each shows one step of size h from (x0, y0) to (x1, y1) against the exact solution y(x)]
According to this diagram, y1 − y0 is calculated as the area of the rectangle with width h and height f(x0, y0). This is not the correct answer, for which h should be multiplied by the average value of f(x, y(x)) over the step, but it is often close enough to give useful results for small h. In the diagram on the right (Implicit Euler), the value of y1 − y0 is h multiplied by f(x1, y1), which is not known explicitly but can be evaluated by iteration in the formula

y1 = y0 + h f(x1, y1).

We will return to the Euler method in Section 1.4.
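The two diagrams translate into very little code; a sketch of one step of each method for an autonomous f, with the implicit equation solved by the iteration just described (assumed test data, not code from the book):

% One explicit and one implicit Euler step for y' = f(y).
f = @(y) -y;  y0 = 1;  h = 0.1;
y1_explicit = y0 + h * f(y0);
y1 = y0;                    % iterate y1 = y0 + h f(y1)
for n = 1:30
  y1 = y0 + h * f(y1);
end
y1_implicit = y1;

The iteration converges here because h times the Lipschitz constant of f is 0.1 < 1; Section 1.4 returns to this point.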
1.3 Examples of differential equations

Linear problems

Exponential growth and decay

dy/dx = λy.

If λ > 0, the solution represents exponential growth and, if λ < 0, the solution represents exponential decay. The two cases can be combined into a single system

d/dx [y^1; y^2] = [y^1; −y^2].

This can also be written

y' = [0 1; −1 0] ∇(y^1 y^2)

and is an example of a Poisson problem

y'(x) = S ∇(H(y)),   (1.3 a)

where S is a skew-symmetric matrix. For such problems H(y(x)) has a constant value, because

d H(y(x))/dx = (∂H/∂y) S (∂H/∂y)^T = 0.
It is an important aim in numerical analysis to preserve this invariance property in computational results.

A four-dimensional linear problem

The problem y' = My, with

M = [−2  1  0  0;
      1 −2  1  0;
      0  1 −2  1;
      0  0  1 −2],

is a trivial special case of the discretized diffusion equation on an interval domain. A transformation M → M̃ = T^(−1) M T, where

M̃ = [−2  1  0  0;        T = [1  0  0  1;
      1 −1  0  0;             0  1  1  0;
      0  0 −3  1;             0  1 −1  0;
      0  0  1 −2],            1  0  0 −1],

partitions the problem into symmetric and anti-symmetric components. Also write ỹ = T^(−1) y, ỹ0 = T^(−1) y0, so that the partitioned initial value problem becomes

ỹ' = M̃ ỹ,   ỹ(x0) = ỹ0.
Making this transformation converts the problem into two separate two-dimensional problems, which can be solved independently and the results recombined.

Harmonic oscillator and simple pendulum

The harmonic oscillator:

d/dx [y^1; y^2] = [y^2; −y^1].

This equation can be recast in scalar complex form by introducing a new variable z = y^1 + iy^2. It then becomes

dz/dx = −iz.

The harmonic oscillator can also be written in the form (1.3 a), with H(y) = (1/2)((y^1)^2 + (y^2)^2).

The simple pendulum:

d/dx [y^1; y^2] = [y^2; −sin(y^1)].
This problem is not linear but, if y(0) is sufficiently small, the simple pendulum is a reasonable approximation to a linear problem, because sin(y^1) ≈ y^1. It also has the form of (1.3 a) with H(y) = (1/2)(y^2)^2 − cos(y^1).

Stiff problems

Many problems arising in scientific modelling have a special property known as "stiffness", which makes numerical solution by classical methods very difficult. An early reference is [35] (Curtiss, Hirschfelder, 1952). For a contemporary study of stiff problems, and numerical methods for their solution, see [53] (Hairer, Nørsett, Wanner, 1993) and [86] (Söderlind, Jay, Calvo, 2015). When attempting to determine the most appropriate stepsize to use with a particular method, and a particular problem, many considerations come into play. The first is the requirement that the truncation error is sufficiently small to match the requirements of the physical application, and the second is that the numerical results are not corrupted unduly by unstable behaviour. To illustrate this idea, consider the use of the Euler method (see Section 1.4 (p. 14)), applied to the three-dimensional problem

d/dx [y^1; y^2; y^3] = [−y^2 + 0.40001 (y^3)^2; y^1; −100 y^3],   [y^1(0); y^2(0); y^3(0)] = [0.998; 0.00001; 1],   (1.3 b)

with exact solution
[y^1(x); y^2(x); y^3(x)] = [cos(x) − 0.002 exp(−200x); sin(x) + 0.00001 exp(−200x); exp(−100x)].

A solution by the Euler method consists of computing approximations

y1 ≈ y(x0 + h),   y2 ≈ y(x0 + 2h),   y3 ≈ y(x0 + 3h),   ...,

using yn = F(yn−1), n = 1, 2, ..., where

F(u) = [u^1 + h(−u^2 + 0.40001 (u^3)^2); u^2 + h u^1; (1 − 100h) u^3].
For sequences like this, stability, for the third component, depends on the condition 1 − 100h ≥ −1 being satisfied, so that h ≤ 0.02. If this condition is not satisfied, unstable behaviour of y^3 will feed into the first two components and the computed
results cannot be relied on. However, if the initial value for y^3 were zero, and this component never drifted from this value, there would be no such restriction on obtaining reliable answers.
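The bound h ≤ 0.02 is easy to observe experimentally; a sketch applying the map F above with three stepsizes (a free-standing illustration, not code from the book):

% Explicit Euler on (1.3 b); h > 0.02 makes y^3 unstable.
F = @(u, h) [u(1) + h * (-u(2) + 0.40001 * u(3)^2);
             u(2) + h * u(1);
             (1 - 100 * h) * u(3)];
for h = [0.025, 0.02, 0.01]
  y = [0.998; 0.00001; 1];
  for n = 1:round(1 / h)
    y = F(y, h);
  end
  printf("h = %.3f  y^3(1) = %g\n", h, y(3));
end

For h = 0.025 the factor 1 − 100h = −1.5 is amplified at every step, and the unstable third component eventually corrupts the first component through the coupling term.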
n−1 ), and show that there is no restriction on positive h to yield stable results. such that yn = F(y
Test problems A historical problem The following one-dimensional non-autonomous problem was used by Runge and others to verify the behaviour of some early Runge–Kutta methods: dy y−x = , y(0) = 1. (1.3 c) dx y+x A parametric solution t → y(t), x(t) := y1 (t), y2 (t) can be found from the system d dt
y1
=
y2
1 −1 1
y1 y2
1
,
y1 (0)
y2 (0)
=
1
0
and, by writing z = y1 + iy2 , we obtain dz = (1 + i)z, z(0) = 1, dt with solution z = exp (1 + i)t , so that, reverting to the original notation, y(t) = exp(t) cos(t), x(t) = exp(t) sin(t). The solution on 0, exp( 12 π) corresponds to t ∈ 0, 12 π and is shown in the diagram t = π/4
y 1 t =0
0
t = π/2 0
x
exp(π/2)
12
1 Differential equations, numerical methods and algebraic analysis
A problem from DETEST One of the pioneering developments in the history of numerical methods for differential equations is the use of standardized test problems. These have been useful in identifying reliable and accurate software. This problem from the DETEST set [57] (Hull, Enright, Fellen, Sedgwick, 1972) is an interesting example. dy = cos(x)y, dx
y(0) = 1.
The exact solution, given by y = exp sin(x) , is shown in the diagram exp(1)
y 1 exp(−1) π/2
π
3π/2
x
2π
The Prothero–Robinson problem The problem of Prothero and Robinson [79] (1974), y (x) = g (x) + L y − g(x) , where g(x) is a known function, was introduced as a model for studying the behaviour of numerical methods applied to stiff problems. A special case is y = cos(x) − 10 y − sin(x) ,
y(0) = 0,
with general solution y(x) = C exp(−10x) + sin(x), where C = 0 when y(0) = 0.
A problem with discontinuous derivatives The two-dimensional “diamond problem”, as we will name it, is defined to have piecewise constant derivative values which change from quadrant to quadrant as follows
1.3 Examples of differential equations
⎧ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎨
dy = dx ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎩
13
−1 1
−1 −1 1 −1 1 1
,
y1 > 0,
y2 ≥ 0,
,
y1 ≤ 0,
y2 > 0,
,
y1 < 0,
y2 ≤ 0,
,
y1 ≥ 0,
y2 < 0.
Using the initial value y = [ 1 0 ]T , the orbit, with period 4, is as in the diagram: 1 0
1
This problem is interesting as a numerical test because of the non-smoothness of the orbit as it moves from one quadrant to the next.

The Kepler problem

d/dx [y^1; y^2; y^3; y^4] = [y^3; y^4; −y^1/r^3; −y^2/r^3],   (1.3 d)

where r = ((y^1)^2 + (y^2)^2)^(1/2). The Kepler problem satisfies conservation of energy, H' = 0, where

H(x) = (1/2)((y^3)^2 + (y^4)^2) − r^(−1),

and also conservation of angular momentum, A' = 0, where A(x) = y^1 y^4 − y^2 y^3.

Exercise 6 Show that H(x) is invariant.
Exercise 7 Show that A(x) is invariant.
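For the numerical experiments below it is convenient to have the right-hand side and both invariants available in code; a sketch with assumed names (not code from the book):

function dy = kepler(y)
  % Right-hand side of the Kepler problem (1.3 d).
  r3 = (y(1)^2 + y(2)^2)^(3/2);
  dy = [y(3); y(4); -y(1) / r3; -y(2) / r3];
endfunction

function [H, A] = kepler_invariants(y)
  % Energy H and angular momentum A; constant along exact solutions.
  r = sqrt(y(1)^2 + y(2)^2);
  H = 0.5 * (y(3)^2 + y(4)^2) - 1 / r;
  A = y(1) * y(4) - y(2) * y(3);
endfunction

Monitoring H and A along a numerical solution gives a convenient measure of the quality of a method, a theme taken up again in Chapter 7.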
1.4 The Euler method

The explicit Euler method as a Taylor series method

Given a differential equation and an initial value,

y'(x) = f(x, y),   y(x0) = y0,

the Taylor series formula is a possible approach to finding an approximation to y(x0 + h):

y(x0 + h) ≈ y(x0) + h y'(x0) + (1/2!) h^2 y''(x0) + ... + (1/p!) h^p y^(p)(x0).
If y is a sufficiently smooth function, then we would expect the error in this approximation to be O(h^(p+1)). When p = 1, this reduces to the Euler method. This is very convenient to use, because both y(x0) = y0 and y'(x0) = f(x0, y0) are known in advance. However, for p = 2, we would need the value of y''(x0), which can be found from the chain rule:

y''(x) = (d/dx) f(x, y(x)) = ∂f/∂x + (∂f/∂y)(dy/dx) = f_x + f_y f,

where the subscripts in f_x and f_y denote partial derivatives and, for brevity, the arguments have been suppressed. Restoring the arguments, we can write

y''(x0) = f_x(x0, y0) + f_y(x0, y0) f(x0, y0).

The increasingly complicated expressions for y^(3), y^(4), ..., have been worked out at least to order 6 [59] (Huťa, 1956), and they are summarized here to order 4:

y' = f,
y'' = f_x + f_y f,
y^(3) = f_xx + 2 f_xy f + f_yy f^2 + f_x f_y + f_y^2 f,
y^(4) = f_xxx + 3 f_xxy f + 3 f_xy f_x + 5 f_xy f_y f + 3 f_xyy f^2 + f_y f_xx + 3 f_x f_yy f + f_y^2 f_x + f_y^3 f + 4 f_y f^2 f_yy + f^3 f_yyy.

We will return to the evaluation of higher derivatives, in the case of an autonomous system, in Section 1.7 (p. 33).

Exercise 8 Given the differential equation y' = y + sin(x), find y^(n) for n ≤ 7.
The explicit Euler method

The Euler method produces the result

yk = yk−1 + h f(xk−1, yk−1),   k = 1, 2, ....   (1.4 a)
In this introduction, it will be assumed that h is constant. Now consider a numerical method of the form

yk = yk−1 + h Ψ(xk−1, yk−1),   (1.4 b)

used in the same way as the Euler method.

Definition 1.4A The method defined by (1.4 b) is convergent if, for a problem defined by f(x, y), y(x0) = y0, with the solution at x approximated by Yn using n steps with h = (x − x0)/n,

lim_{n→∞} Yn = y(x).
Theorem 1.4B The Euler method is convergent.

This result from [36] (Dahlquist, 1956), with an exposition in the classic textbook [55] (Henrici, 1962), is also presented in the more recent books [50] (Hairer, Nørsett, Wanner, 1993) and [20] (Butcher, 2016).

Variable stepsize

The standard formulation of a one-step method is based on a single input y0, and its purpose is to calculate a single output y1. However, it is also possible to consider the input as being the pair [y0, hf0], with f0 = f(y0). In this case the output would be a pair [y1, hf1]. Apart from the inconvenience of passing additional data between steps, the two formulations are identical. However, the two-input approach has an advantage if the Euler method is required to be executed as a variable stepsize method, as in the Octave function (1.4 c). As we will see in Section 1.5, the Runge–Kutta method (1.5 c) has order 2. This would mean that half the difference between the result computed by Euler, and the result computed by this particular Runge–Kutta method, could be used as an error estimator for the Euler result, because y0 + (1/2)hf0 + (1/2)hf1 is identical to the result computed by (1.5 c). This is the basis for the function represented in (1.4 c). Note that this estimation does not require additional f evaluations.

function [yout, hfyout, hout] = Euler(y, hfy, tolerance)
  yout = y + hfy;
  hfyout = h * f(yout);
  error = 0.5 * norm(hfy - hfyout);
  r = sqrt(tolerance / error);
  hout = r * h;
  hfyout = r * hfyout;
endfunction
   (1.4 c)

Exercise 9 Discuss the imperfections in (1.4 c).
The implicit Euler method

As we saw in Section 1.3, through the problem (1.3 b), there are sometimes advantages in using the implicit version of (1.4 a), in the form

yk = yk−1 + h f(xk, yk),   k = 1, 2, ....   (1.4 d)

This method also reappears as an example of the implicit theta Runge–Kutta method (1.5 g) with θ = 1. In the calculation of yk in (1.4 d), we need to solve an algebraic equation Y − h f(Y) = C, where C = yk−1. If f satisfies a Lipschitz condition with |h| L < 1, then it is possible to use functional iteration. That is, Y can be found numerically as the limit of the sequence Y^[0], Y^[1], Y^[2], ..., where

Y^[0] = C,
Y^[n] = C + h f(Y^[n−1]),   n = 1, 2, ....

To obtain rapid convergence, this simple iterative system can be replaced by the quadratically-convergent Newton scheme:

Y^[0] = C,
Y^[n] = Y^[n−1] − (I − h f'(Y^[n−1]))^(−1) (Y^[n−1] − C − h f(Y^[n−1])),   n = 1, 2, ....
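A sketch of the Newton scheme, with the Jacobian supplied as a function handle fprime (assumed names, not code from the book):

function Y = implicit_euler_solve(f, fprime, C, h)
  % Solve Y - h f(Y) = C by Newton's method, starting from Y = C.
  Y = C;  I = eye(length(C));
  for n = 1:10
    residual = Y - h * f(Y) - C;
    if norm(residual) < 1e-12, break; end
    Y = Y - (I - h * fprime(Y)) \ residual;
  end
endfunction

Each iteration solves one linear system with the matrix I − h f'(Y^[n−1]), exactly as in the displayed formula.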
Experiments with the explicit Euler method

The Kepler problem

The Kepler problem (1.3 d), with initial value y0 = [1, 0, 0, 1]^T, has a circular orbit solution with period 2π. To see how well the Euler method is able to solve this problem over a single orbit, a constant stepsize h = 2π/n is used over n steps in each of the cases n = 1000 × 2^k, k = 0, 1, ..., 5. As a typical case, n = 2000 is shown in the following diagrams, where the first and second components are shown in the left-hand diagram, and the third and fourth components on the right.

[Diagrams: the computed (y^1, y^2) orbit (left) and (y^3, y^4) orbit (right) for n = 2000]
To assess the accuracy, in each of the six cases, it is convenient to calculate ||yn − y0||_2. For example, if n = 1000, then

yn = [1.015572, −0.358194, 0.319112, 0.907596]^T,
yn − y0 = [0.015572, −0.358194, 0.319112, −0.092404]^T,
||yn − y0||_2 = 0.488791.

This single result gives only limited information on the accuracy to be expected from the Euler method when carrying out this type of calculation. It will be more interesting to use the sequence of six n values, n = 1000, 2000, ..., 32000, with corresponding stepsizes h = 2π/1000, 2π/2000, ..., 2π/32000, displayed in a single diagram. As we might expect, the additional work as n doubles repeatedly gives systematic improvements. To illustrate the behaviour of this calculation for increasingly high values of n, and increasingly low values of h, the following diagram is presented.

[Diagram: error against h on a log–log scale (h from 10^(−3.5) to 10^(−2), error from about 10^(−2) to 10^(−0.5)), with a reference triangle of slope 1]
The triangle shown beside the main line suggests that the slope is close to 1. The slope of lines relating error to stepsize is of great importance, since it predicts the behaviour that could be expected for extremely small h. For example, if we needed 10^(−6) accuracy, this figure suggests that we would need a stepsize of about 10^(−8), and this would require a very large number of steps and therefore an unreasonable amount of computer time. If, on the other hand, the slope were 2 or greater, we would obtain much better performance for small h.

Experiments with diamond

In the case of the diamond problem, it is possible to evaluate the accumulated error in a single orbit, evaluated using the Euler method. If n, the number of steps to be evaluated, is a multiple of 4, there is zero error. We will consider the case n = 4m + k, with m + k ≥ 4, where 1 ≤ k ≤ 3. Because the period is 4, the stepsize is h = 4/n. In the first quadrant, m + 1 steps move the solution to the second quadrant, and a further m + 1 advance the solution to the interface with the third quadrant. It then takes m + 2 steps to move to the fourth quadrant. This leaves 4m + k − 2(m + 1) − (m + 2) = m − (4 − k) steps to move within the fourth quadrant. The final position, relative to the initial point, is then
((m+1)/(n/4)) [−1; 1] + ((m+1)/(n/4)) [−1; −1] + ((m+2)/(n/4)) [1; −1] + ((m+k−4)/(n/4)) [1; 1] = (4/n) [k−4; k−6].

Computer simulations for this calculation can be misleading because of round-off error.
An example of Taylor series From the many choices available to test the Taylor series method, we will look at the initial value problem y(0) = 1. (1.4 e) y = x2 + y2 , In [55] (Henrici, 1962), this problem was used to illustrate the disadvantages of Taylor series methods, because of rapid growth of the complexity of the formulae for y , y(3) , . . . . This was in the relatively early days of digital computing, and the situation has now changed because of the feasibility of evaluating Taylor terms automatically. But going back to hand calculations, the higher-derivatives do indeed blossom in complexity, as we see from the first few members of the sequence y = x2 + y2 , y = 2x + 2x2 y + 2y3 , y(3) = 2 + 4xy + 2x4 + 8x2 y2 + 6y4 , y(4) = 4y + 12x3 + 20xy2 + 20x4 y + 40x2 y3 + 24y5 . Recursive computation of derivatives Although we will not discuss the systematic evaluation of higher derivatives for a general problem, we can at least find a simple recursion for the example problem (1.4 e), based on the formula y(n) = ∂∂x y(n−1) + ∂∂y y(n−1) f . This gives the sequence of formulae y(1) = x2 + (y(0) )2 , y(2) = 2x + 2y(0) y(1) , y(3) = 2 + 2y(0) y(2) + 2(y(1) )2 , y(4) = 2y(0) y(3) + 6y(1) y(2) , y(5) = 2y(0) y(4) + 8y(1) y(3) + 6(y(2) )2 , y(6) = 2y(0) y(5) + 10y(1) y(4) + 20y(2) y(3) , y(7) = 2y(0) y(6) + 12y(1) y(5) + 30y(2) y(4) + 20(y(3) )2 ,
1.5 Runge–Kutta methods
19
y 1.0
0.8 p=4 p=3
0.6
p=2 0.4
p=1
0.2
0.0 0.0
0.2
0.4
0.6
0.8
1.0
x
Figure 1 Taylor series approximations of orders p = 1, 2, 3, 4 for y = x2 + y2 , y(0.2) = 0.3
and the general result y(n) =
n−1
∑
i=0
n−1 i
y(i) y(n−1−i) ,
n ≥ 4.
To demonstrate how well the Taylor series works for this example problem, Figure 1 is presented.
1.5 Runge–Kutta methods One of the most widely used families of methods for approximating the solutions of differential equations is the Runge–Kutta family. In one of these methods, a sequence of n steps is taken from an initial point, x0 , to obtain an approximation to the solution at x0 + nh, where h is the “stepsize”. Each step has the same form and we will consider only the first. Write the input approximation as y0 ≈ y(x0 ). The method involves first obtaining approximations Yi ≈ y(x0 + hci ), i = 1, 2, . . . , s, where c1 , c2 , . . . , cs are the stage abscissae. Write Fi = f (Yi ) for each stage so that Fi ≈ y (x0 + hci ). The actual approximations used for the stage values take the form Yi = y0 + h ∑ ai j Fj , j 0,
denotes the Heavyside function. This can be regarded as the continuous analogue of the s-stage Runge–Kutta method with the coefficient matrix given by
1.5 Runge–Kutta methods
25
⎧ ⎪ ⎪ ⎨0, ai j =
i < j,
1 2s , ⎪ ⎪ ⎩1, s
i = j, i > j.
It is possible to place these two methods on a common basis by introducing an “index set” I [14] (Butcher, 1972) which, in these examples, could be [0, 1] or {1, 2, 3, . . . , s}. Adapting Runge–Kutta terminology slightly, the stage value function becomes a bounded mapping I → RN and the coefficient matrix A becomes a bounded linear operator on the space of bounded mappings I → R to this same space. The final component of a Runge–Kutta method specification, that is the row vector bT , becomes a linear functional on the bounded mappings I → R. More details will be presented in Chapter 4. Even though energy-preserving Runge–Kutta methods, with finite I, do not exist, the following method, the “Average Vector Field” method [80] (Quispel, McLaren, 2008) ) does satisfy this requirement [29] (Celledoni et al, 2009). y 1 = y0 + h
1 0
f (1 − η)y0 + ηy1 d η.
For this method we have I = [0, 1], 1
A(ξ )φ = ξ bT φ =
0 1
0
φ (η) d η,
φ (η) d η.
Methods based on the index set [0, 1] are referred to as “Continuous stage Runge– Kutta methods”.
Equivalence classes of Runge–Kutta methods The two Runge–Kutta methods 0 1 2
1 2
1
0 0
0 1 1
, 0
1 2
1 2
0
1
are equivalent in the sense that they give identical results because the third stage of the method on the left is evaluated and never used. This is an example of Dahlquist– Jeltsch equivalence [39] (Dahlquist, Jeltsch, 2006). Similarly the two implicit methods
26
1 Differential equations, numerical methods and algebraic analysis 1 2 1 2
√ − 16 3 √ + 16 3
1 4 √ 1 1 + 4 6 3 1 2
1 4
√ − 16 3
1 2 1 2
,
1 4 1 2
√ + 16 3 √ − 16 3
1 4 √ 1 1 − 4 6 3 1 2
1 4
√ + 16 3 1 4 1 2
are equivalent because they are the same method with their stages numbered in a different order. Another example of an equivalent pair of methods, is 1 3 1 3
1 1
1 3 1 − 12 1 2 1 4 3 8
1 12 1 2 1 4 1 2 3 8
1 − 12
0
1 12 1 2 1 8 1 3
− 16 − 14
1 3
,
1 8 1 − 12
1
5 12 3 4 3 4
1 − 12 1 4 1 4
.
Suppose Y1∗ , Y2∗ are the solutions computed using the method on the right. Then Y1 = Y2 = Y1∗ and Y3 = Y4 = Y2∗ satisfy the stage conditions for the method on the left. Hence, the outputs for each of the methods are equal to the same result 1 h f (Y4 ) y1 = y0 + 38 h f (Y1 ) + 38 h f (Y2 ) + 13 h f (Y3 ) − 12 1 = y0 + 38 h f (Y1∗ ) + 38 h f (Y1∗ ) + 13 h f (Y2∗ ) − 12 h f (Y2∗ )
= y0 + 34 h f (Y1∗ ) + 14 h f (Y2∗ ). This is an example of Hundsdorfer–Spijker reducibility [58] (Hundsdorfer, Spijker, 1981).
Experiments with Runge–Kutta methods The advantages of high order methods As methods of higher and higher order are used, the cost also increases because the number of f evaluations increases with the number of stages. But using a high order method is usually an advantage over a low order method if sufficient precision is required. We will illustrate this in Figure 2, where a single half-orbit of the Kepler problem with zero eccentricity is solved using four Runge–Kutta methods ranging from the order 1 Euler method to the methods (1.5 c), (1.5 e) and (1.5 f). The orders of the methods are attached to the plots of their error versus h behaviours on a log-log scale. Also shown are triangles showing the exact slopes for comparison.
Figure 2 Error behaviour for Runge–Kutta methods with orders p = 1, 2, 3, 4, for the Kepler problem with zero eccentricity on the time interval [0, π]
Methods for stiff problems

The aim in stiff methods is to avoid undue restriction on stepsize for stability reasons, but at the same time to avoid excessive computational cost. In this brief introduction we will compare two methods from the points of view of stepsize restriction, accuracy and cost. The methods are the third order explicit method (1.5 e) and the implicit Radau IIA method (1.5 h). In each case the problem (1.3 b) (p. 10) was solved with output at x = 1, taking n steps with n ranging from 1 to 51200. The dependence of the computational error on n, and therefore on h = 1/n, is shown in Figure 3, where the method used in each result is attached to the curve. Note that the error in the computation is only for a representative component, y^1. From the figure we see that the output for the explicit method is useless unless h < 0.02, approximately. This is a direct consequence of the stiffness of the problem. But for the implicit Radau IIA method, there is no constraint on the stepsize except that imposed by the need to obtain sufficient accuracy. Because the computational cost is much greater for the implicit method, many scientists and engineers are willing to use explicit methods in spite of their unstable behaviour and the need to use small stepsizes.
Figure 3 Errors for the stiff problem (1.3 b), solved by an explicit and an implicit method
1.6 Multivalue methods

Linear multistep methods

Instead of calculating a number of stages in working from yn−1 to yn, a linear multistep method makes use of past information evaluated in previous steps. That is, yn is found from

yn = a1 yn−1 + ... + ak yn−k + h b1 f(yn−1) + ... + h bk f(yn−k).   (1.6 a)

In this terminology we will always assume that |ak| + |bk| > 0 because, if this were not the case, k could be replaced by a lower positive integer. With this understanding, we refer to this as a k-step method. The "explicit case" (1.6 a) is generalized in (1.6 c) below. In the k-step method (1.6 a), the quantities ai, bi, i = 1, 2, ..., k, are numbers chosen to obtain suitable numerical properties of the method. It is convenient to introduce polynomials ρ, σ defined by

ρ(w) = w^k − a1 w^(k−1) − ... − ak,
σ(w) = b1 w^(k−1) + ... + bk,   (1.6 b)
so that the method can be referred to as (ρ, σ) [36] (Dahlquist, 1956). The class of methods in this formulation can be extended slightly by adding a term h b0 f(yn) to the right-hand side of (1.6 a) or, equivalently, a term b0 w^k to the expression for σ(w). Computationally, this means that yn is defined implicitly as the
solution to the equation

yn − h b0 f(yn) = a1 yn−1 + ... + ak yn−k + h b1 f(yn−1) + ... + h bk f(yn−k).

In this case, (1.6 b) is replaced by

ρ(w) = w^k − a1 w^(k−1) − ... − ak,
σ(w) = b0 w^k + b1 w^(k−1) + ... + bk.   (1.6 c)
The most well-known examples of (1.6 b) are the Adams–Bashforth methods [3] (Bashforth, Adams, 1883), for which ρ(w) = w^k − w^(k−1) and the coefficients in σ(w) are chosen to obtain order p = k. Similarly, the well-known Adams–Moulton methods [74] (Moulton, 1926) also have ρ(w) = w^k − w^(k−1) in (1.6 c), but the coefficients in σ(w) are chosen to obtain order p = k + 1.

Consistency, stability and convergence

Definition 1.6A A method (ρ, σ) is preconsistent if ρ(1) = 0. The method is consistent if it is preconsistent and also ρ'(1) = σ(1).

The significance of Definition 1.6A is that for the problem y'(x) = 0, y(0) = 1, if yn−i = 1, i = 1, 2, ..., k, then the value computed by the method in step number n is also equal to the correct value yn = 1 if and only if Σ_{i=1}^k ai = 1, which is equivalent to preconsistency. Furthermore, if the method is preconsistent and is used to solve y'(x) = 1, y(0) = 0, with the values yn−i = h(n − i), then the result computed in step n has the correct value yn = nh if and only if

nh = Σ_{i=1}^k h(n − i) ai + h Σ_{i=0}^k bi,

which is equivalent to the consistency condition, k − Σ_{i=1}^k (k − i) ai = Σ_{i=0}^k bi.

Definition 1.6B A method (ρ, σ) is stable if all solutions of the difference equation

yn = a1 yn−1 + ... + ak yn−k

are bounded.

Definition 1.6C A polynomial ρ satisfies the root condition if all zeros are in the closed unit disc and all multiple zeros are in the open unit disc.

The following result follows from the elementary theory of linear difference equations.

Theorem 1.6D A method (ρ, σ) is stable if and only if ρ satisfies the root condition.
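The root condition is easy to test numerically; a sketch, with ρ given by its coefficient vector (highest power first) and multiple zeros detected by a crude clustering tolerance (an assumption of this sketch):

function ok = root_condition(rho)
  % Zeros in the closed unit disc; zeros of modulus one simple.
  z = roots(rho);  tol = 1e-8;
  ok = all(abs(z) <= 1 + tol);
  on_circle = z(abs(abs(z) - 1) < tol);
  for i = 1:length(on_circle)
    if sum(abs(on_circle - on_circle(i)) < tol) > 1
      ok = false;
    end
  end
endfunction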
Exercise 14 Find the values of a1 and b1 for which the method (w^2 − a1 w + 1/2, b1 w + 1) is consistent. Is the resulting method stable?
Order of linear multistep methods

Dahlquist [36] (1956) has shown:

Theorem 1.6E Given ρ(1) = 0, the pair (ρ, σ) has order p if and only if

σ(1 + z) = (ρ(1 + z)/z) / (ln(1 + z)/z) + O(z^p),

where ln denotes the principal value, so that ln(1 + z)/z = 1 + O(z). For convenience in applications of this result, note that

z/ln(1 + z) = 1 + (1/2) z − (1/12) z^2 + (1/24) z^3 − (19/720) z^4 + (3/160) z^5 − (863/60480) z^6 + (275/24192) z^7 − (33953/3628800) z^8 + (8183/1036800) z^9 − (3250433/479001600) z^10 + O(z^11).
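Theorem 1.6E can be checked directly with truncated series, stored as coefficient vectors in z; a sketch for the two-step Adams–Bashforth method of the next subsection (assumed representation, not code from the book):

% Order of (rho, sigma) via Theorem 1.6E, carrying m series terms.
m = 8;
g = [1, 1/2, -1/12, 1/24, -19/720, 3/160, -863/60480, 275/24192];  % z/ln(1+z)
rz = [1, 1];                             % rho(1+z)/z = 1 + z
sz = [1, 3/2];                           % sigma(1+z) = 1 + (3/2) z
lhs = conv(rz, g);  lhs = lhs(1:m);
defect = lhs - [sz, zeros(1, m - length(sz))]
% First nonzero entry of defect is at z^2, confirming order p = 2.

Here defect evaluates to [0, 0, 5/12, ...], so the method has order exactly 2.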
Examples of linear multistep methods The Euler method can be defined by ρ(w) = w − 1, σ (w) = 1 and is the first member of the Adams–Bashforth family of methods [3] (Bashforth, Adams, 1883) The next member is defined by ρ(w) = w2 − w,
σ (w) = 32 w − 12 ,
because ρ(1+z) (1 + 12 z) + O(z2 ) z = (1 + z)(1 + 12 z) + O(z2 ) = 1 + 32 z (w = 1 + z) = 32 w − 12 ,
σ (1 + z) =
and has order 2 if correctly implemented. By this is meant the definition of y1 which is required, in addition to y0 , to enable later values of the sequence of y values to be computed. A simple choice is to define y1 by a second order Runge–Kutta method, such as (1.5 c) or (1.5 d). Exercise 15 Show that the order 3 Adams-Bashforth method is defined by ρ(w) = w3 − w2 , 4 5 2 σ (w) = 23 12 w − 3 w + 12 .
1.6 Multivalue methods
31
Adams–Moulton methods [74] (Moulton, 1926) are found in a similar way to Adams–Bashforth methods, except that σ (1 + z) is permitted to have a term in zk . For k = 2 and k = 3, we have in turn ρ(w) = w − 1 = z, ρ(w) = w2 − w = (1 + z)z, where we will always write w = 1 + z. The formulae for σ (w) are, respectively σ (w) = 1 + 12 z = 12 w + 12 , σ (w) = (1 + z)(1 +
(k = 2),
1 1 2 2 z − 12 z ) =
1+
3 5 2 2 z + 12 z
=
5 2 2 1 12 w + 3 w − 12 ,
(k = 3).
Exercise 16 Show that the order 4 Adams-Moulton method is defined by ρ(w) = w3 − w2 , 5 1 2 σ (w) = 38 w3 + 19 24 w − 24 w + 24 .
General linear methods Traditionally, practical numerical methods for differential equations are classified into Runge–Kutta methods and linear multistep methods. Combining these two families of methods into a single family gives methods characterized by two complexity parameters r, the number of quantities passed from step to step, and s, the number of stages. As for Runge–Kutta methods, the stages will be denoted by Y1 , Y2 , . . . ,Ys and the corresponding stage derivatives by F1 , F2 , [n−1] [n−1] . . . , Fs . The r components of input to step number n will be denoted by y1 , y2 , [n−1] [n] [n] [n] . . . , yr , and the output from this step by y1 , y2 , . . . , yr . These quantities are interrelated in terms of a partitioned (s + r) × (s + r) matrix A U B
V
using the equations s
r
[n−1]
Yi = h ∑ ai j Fj + ∑ ui j y j
,
[n]
,
j=1 s
j=1 r
[n−1]
yi = h ∑ bi j Fj + ∑ vi j y j j=1
Fi = f (Yi ),
i = 1, 2, . . . s, i = 1, 2, . . . r.
j=1
The essential part of these relations can be written more compactly as Y = h(A ⊗ I)F + (U ⊗ I)y[n−1] , y[n] = h(B ⊗ I)F + (V ⊗ I)y[n−1] , or, if no confusion is possible, as Y = hAF +Uy[n−1] , y[n] = hBF +V y[n−1] .
32
1 Differential equations, numerical methods and algebraic analysis
Consistency, stability and convergence Generalizing the ideas of consistency to general linear methods is complicated by the lack of a single obvious interpretation of the information passed between steps of the method. However, we will try to aim for an interpretation in which y[n−1] = uy(xn−1 ) + hvy (xn−1 ) + O(h2 ) for some u, v ∈ RN with the parameters chosen so that at the completion of the step, y[n] = uy(xn ) + hvy (xn ) + O(h2 ), and also so that the stage values satisfy Yi = y(xn−1 ) + O(h). We will explore the consequences of these assumptions by analysing the case n = 1. We find in turn 1y(x0 ) = hAy (x0 ) +U uy(x0 ) + hvy (x0 ) + O(h), U1 = 1,
u(y(x0 ) + hy (x0 ) = hB 1y (x0 ) +V uy(x0 ) + hvy (x0 ) + O(h2 ).
For Runge–Kutta methods, there is only a single input and accordingly, r = 1. For the method (1.5 f) the defining matrices are ⎡ ⎤ 0 0 0 0 1 ⎢ ⎥ ⎢ 1 0 0 0 1 ⎥ 4 ⎢ ⎥ A U ⎢ ⎥ = ⎢ 0 12 0 0 1 ⎥ . ⎢ ⎥ B V ⎢ 1 −2 2 0 1 ⎥ ⎣ ⎦ 1 1 2 0 1 6 3 6 By contrast, for a linear multistep method, s = 1. In the case of the order 3 Adams– Bashforth method, the defining matrices are ⎡ ⎤ 5 4 0 1 23 − 12 3 12 ⎥ ⎢ 5 ⎥ 23 4 ⎢ 0 1 − ⎢ 12 3 12 ⎥ ⎢ ⎥ A U =⎢ 1 0 0 0 0 ⎥ ⎢ ⎥. B V ⎢ ⎥ ⎢ 0 0 1 0 0 ⎥ ⎣ ⎦ 0
0
0
1
0
Moving away from traditional methods consider the method with r = 2, s = 3, with matrices ⎡ ⎤ 0 0 0 1 0 ⎢ ⎥ ⎥ 1 ⎢ ⎢ 2 0 0 1 1 ⎥ ⎢ ⎥ A U =⎢ (1.6 d) 0 1 0 1 0 ⎥ ⎢ ⎥. B V ⎢ 1 2 1 ⎥ ⎢ ⎥ ⎣ 6 3 6 1 0 ⎦ 1 1 3 0 0 4 −4 2
1.7 B-series analysis of numerical methods
33
For a person acquainted only with traditional Runge–Kutta and linear multistep methods, (1.6 d) might seem surprising. However, it is for the analysis of methods like this that the theory of B-series has a natural role. In particular, we note that if the [n] method is started in a suitable manner, then y1 ≈ y(xn ) to a similar accuracy as for the fourth order Runge–Kutta method. One possible starting scheme is based on the tableau 0
Rh =
1 2 1 2
1 2
0 − 14
.
1 2 1 8
1 8
Starting with the initial value y0 , the initial y[0] can be computed by [0]
y1 = y0 , [0]
y2 = R h y0 − y0 .
(1.6 e)
In Chapter 6, Section 6.4 (p. 225), the method (1.6 d), together with (1.6 e) as starting method, will be used as an illustrative example.
1.7 B-series analysis of numerical methods Higher derivative methods The Euler method was introduced in Section 1.4 (p. 14) as the first order case of the Taylor series method. The more sophisticated methods are attempts to improve this basic approximation method. The practical advantage of methods which require the evaluation of higher derivatives hinges on the relative cost of these evaluations compared with the cost of just the first derivative. But there are other reasons for obtaining formulae for higher derivatives in a systematic way; these are that this information is required for the analysis of so-called B-series. For a given autonomous problem, y (x) = f y(x) , y(x0 ) = y0 , y : R → R N , f : R N → RN , written in component by component form d yi i 1 2 N d x = f (y , y , . . . , y ),
i = 1, 2, . . . , N,
we will find a formula for the second derivative of yi . This can be obtained by the chain-rule followed by a substitution of the known first derivative of a generic
34
1 Differential equations, numerical methods and algebraic analysis
component f j . That is, d2 yi = d x2
N j=1 N
=
∂ f i dyj dx
∑ ∂yj
∂ fi
∑ ∂ y j f j.
j=1
This can be written in a more compact form by using subscripts to indicate partial y derivatives. That is, f ji := ∂ f i /∂ y j . A further simplification results by adopting the “summation convention”, in which repeated suffixes in expressions like f ji f j imply summation, without this being written explicitly. Hence, we can write d2 yi = f ji f j . d x2
Take this further and find formulae for the third and fourth derivatives d 3 yi i j k = f jk f f + f ji fkj f k , d x3 d 4 yi i i j k = f jk f j f k f + 3 f jk f f f + f ji fkj f k f + f ji fkj fk f . d x4
From the sequence of derivatives, evaluated at y0 , the Taylor series can be evaluated. In further developments, we will avoid the use of partial derivatives, in favour of i , . . . , we will use the total Fr´echet derivatives. That is, in place of the tensors f ji , f jk derivatives f , f , . . . . Evaluated at y0 , these will be denoted by f = f (y0 ), f = f (y0 ), f = f (y0 ), .. .. . .
Formal Taylor series The first few terms of the formal Taylor series for the solution at x = x0 + h are y(x0 + h) = y0 + hf + 12 h2 f f + 16 h3 f ff + 16 h3 f f f + · · ·
(1.7 a)
Application to the theta method The result computed by the theta method (1.5 g) (p. 22) has a Taylor expansion, with a resemblance to (1.7 a). That is, y1 = y0 + hf + θ h2 f f + 12 θ 2 h3 f ff + θ 2 h3 f f f + · · ·
(1.7 b)
1.7 B-series analysis of numerical methods
35
A comparison of (1.7 a) and (1.7 b) suggests that the error in approximating the exact solution by the theta method is O(h2 ) for θ = 12 and O(h3 ) for θ = 12 . Useful though this observation might be, it is just the start of the story. We want to be able to carry out straight-forward analyses of methods using this type of “B-series” expansion. We want to be able to do manipulations of B-series as symbolic counterparts to the computational equations defining the result, and the steps leading to this result, in a wide range of numerical methods. Elementary differentials and trees The expressions f, f f, f ff and f f f are examples of “elementary differentials” and, symbolically, they have a graph-theoretical analogue. Corresponding to f is an individual in a genealogical tree; corresponding to f is an individual with a link to a possible child. The term f f corresponds to this link having been made to the child represented by f. The bi-linear operator f corresponds to an individual with two possible links and in f ff these links are filled with copies of the child represented by f. Finally, in these preliminary remarks, f f f corresponds to a three generation family with the first f playing the role of grandparent, the second f playing the role of a parent, and the child of the grandparent; and the final operand f playing the role of grandchild and child, respectively, of the preceding f operators. The relationship between elementary differentials and trees can be illustrated in diagrams. f f f
f
f
f
f f
f
We can extend these ideas to trees and elementary differentials of arbitrary complexity, as shown in the diagram f f f f f f f f
f f
f (4) The elementary differential corresponding to this diagram can be written in a variety of ways. For instance one can insert spaces to emphasize the separation between the four operands of f (4) , or use power notation to indicate repeated operands and operators: f (4) f ff ff fff f f = f (4) f f f f f ff f f f = f (4) (f f)2 f f 2 f f f = f (4) (f f)2 f f 2 f f f
36
1 Differential equations, numerical methods and algebraic analysis
As further examples, we show the trees with four vertices, together with the corresponding elementary differentials:
f (3) f 3
f ff f
f f f 2
f f f f
Exercise 17 Find the trees corresponding to each of the elementary differentials: (a) f (f f)2 , (b) f (4) f 3 f f, (c) f f f 2 f f. Exercise 18 Find the elementary differentials corresponding to each of the trees: (a)
, (b)
, (c)
.
Summary of Chapter 1 and the way forward Summary Although this book is focussed on the algebraic analysis of numerical methods, a good background in both ordinary differential equations and numerical methods for their solution is essential. In this chapter a very basic survey of these important topics has been presented. That is, the fundamental theory of initial value problems is discussed, partly through a range of test problems. These problems arise from standard physical modelling, with the addition of a number of contrived and artificial problems. This is then followed by a brief look at the classical one-step and linear multistep methods, and an even briefer look at some all encompassing multivalue-multistage methods (“general linear methods”). Some of the methods are accompanied by numerical examples, underlining some of their properties. As a preview for later chapters, B-series are briefly introduced, along with trees and elementary differentials. The way forward The current chapter includes preliminary notes on some of the later chapters. This is indicated in the following diagram by a dotted line pointing to these specific chapters. A full line pointing between chapters indicates a stronger prerequisite. 5 1
2
3
7
4 6
1.7 B-series analysis of numerical methods
37
Teaching and study notes It is a good idea to supplement the reading of this chapter using some of the many books available on this subject. Those best known to the present author are Ascher, U.M. and Petzold, L.R. Computer Methods for Ordinary Differential Equations and Differential-Algebraic Equations (1998) [1] Butcher, J.C. Numerical Methods for Ordinary Differential Equations (2016) [20] Gear, C.W. The Numerical Integration of Ordinary Differential Equations (1967) [44] Hairer, E., Nørsett, S.P. and Wanner, G. Solving Ordinary Differential Equations I: Nonstiff Problems (1993) [50] Hairer, E. and Wanner G. Solving Ordinary Differential Equations II: Stiff and Differential-Algebraic Problems (1996) [53] Henrici, P. Discrete Variable Methods in Ordinary Differential Equations (1962) [55] Iserles, A. A First Course in the Numerical Analysis of Differential Equations (2008) [61] Lambert, J.D. Numerical Methods for Ordinary Differential Systems (1991) [67] Projects Project 1 Explore existence and uniqueness questions for problems satisfying a local Lipschitz condition. Project 2 Find numerical solutions, using a variety of methods, for the simple pendulum. Some questions to ask are (i) does the quality of the approximations deteriorate with increased initial energy? and (ii) how well preserved is the Hamiltonian.? Project 3
Learn all you can about fourth order Runge–Kutta methods.
Project 4
Read about predictor-corrector methods in [67] or some other text-book.
Chapter 2
Trees and forests
2.1 Introduction to trees, graphs and forests Trees, series and numerical methods It was pointed out by A. Cayley in 1857 [28] that trees, in the sense of this chapter, have an intimate link with the action of differential operators on operands which, in their turn, might also have something of the nature of operators. This is now extended to numerical methods through the order conditions and the use of B-series. As we saw in Chapter 1, the form of B-series, which provides the link between differential equations and numerical approximations, is given by (1.1 c) (p. 2). The present chapter aims to present a detailed background to the graph-theoretical and combinatorial aspects of this subject, for use in Chapter 3 and later chapters. In this introductory section, trees and forests will be introduced together with an appreciation of Cayley’s fundamental work.
Graphs and trees A graph in mathematics is a set of points (vertices) and a set of connections (edges) between some of the pairs of vertices. It is convenient to name or label each of the vertices and use their names in specifying the edges. If V is the set of vertices and E the set of edges, then the graph is referred to as (V, E). Basic terminology and observations A path is a sequence of vertices in which each successive pair is an edge. A graph is connected if there is a path from any vertex to any other vertex. A loop is a path from a vertex to itself, in which the other vertices comprising the path are distinct. © Springer Nature Switzerland AG 2021 J. C. Butcher, B-Series, Springer Series in Computational Mathematics 55, https://doi.org/10.1007/978-3-030-70956-3_2
39
40
2 Trees and forests
The order of a graph is the number of vertices. A tree is a connected graph with at least one vertex and with no loops. For a tree, the number of edges is one less than the number of vertices. The “empty tree” ∅, with V = E = 0, / is somtimes included, as an additional tree. The set of trees with a positive number of vertices will be denoted by T and the set of trees, with ∅ included by T# . That is, T # = T ∪ {∅}. The set of trees t with a positive number of vertices, not exceeding p, will be denoted by T p .
Examples of graphs In these examples, the standard notation, V for the set of vertices and E for the set of edges, is used. b a
V = {a, b, c}
E = {a, b}, {b, c}, {c, a}
(2.1 a)
V = {0, 1, 2, 3, 4}
E = {0, 1}, {0, 2}, {0, 3}, {0, 4}
(2.1 b)
V = {α, β , γ}
E = {β , γ}
(2.1 c)
c 3
2
0 4 1
α β
γ
The graph (2.1 a) is connected but contains a loop; hence it is not a tree. In contrast, (2.1 b) is connected and has no loops and is therefore a tree. Note that there are 5 vertices and 4 edges. The final example (2.1 c) has no loops but it is not connected and is therefore not a tree. Exercise 19 For a graph with a positive number of vertices, show that any two of the following three statements imply the third: (i) the graph is connected, (ii) there are no loops, (iii) the number of vertices is one more than the number of edges.
Rooted and unrooted trees In using trees in mathematics, it is sometimes natural to treat all vertices in an evenhanded way. To specify the structure of such a tree, the sets V and E are all that is needed. However, it is often more useful to specify a particular vertex as constituting the “root” r of the tree. That is, a triple of the form (V, E, r) is needed to fully specify a “rooted tree”. Throughout this volume, the single word “tree” will mean such a rooted tree. In contrast, a tree where no root is specified, is called a “free tree” or an “unrooted tree”. Throughout this volume, “free tree” and “unrooted tree” will be used interchangeably. The set of free trees will be denoted by U.
2.1 Introduction to trees, graphs and forests
41
There is a good reason for studying trees, both rooted and unrooted. Trees play a central role in the formulation of order conditions for Runge–Kutta and other numerical methods. In particular, elementary differentials, which are the building blocks of B-series, are indexed on the set of rooted trees. Furthermore, unrooted trees emerge as fundamental concepts in the theory of symplectic methods and their generalizations. Trees and forests A forest, as a collection of trees, possibly containing duplications, is a natural extension of the idea of trees. We will always include the empty forest in the set of forests and, for a non-empty forest, no account will be taken of the order in which the constituent trees are listed. Algebraically, the set of forests is a monoid (semi-group with identity) and is related to the set of trees via the B+ operation. The identity, that is the empty set of trees, is denoted by 1. Terminology A typical tree, such as t, t , t 1 , t2 , . . . , is a member of T. A typical unrooted tree, such as u, is a member of the set U. A typical forest, such as f, is a member of the set of all forests F. A weighted sum of trees, such as T, is a member of the tree space T. A weighted sum of forests, such as F, is a member of the forest space F.
Some examples Members of F (a)
1
(b) (c) Members of T (a) (b) (c)
−
2
+ 2
+3
5
Members of F (a)
+
+4
(b)
1+ +
(c)
5
+
42
2 Trees and forests
Diagrammatic representations In this book we will invariably use diagrams with the root at the lowest point and other vertices rising upwards. To represent free trees, we will use diagrams with no special point empahasized in any way. Examples: T: U:
Cayley, trees and differential equations The pioneering work of Cayley [28] (1857) directly connected trees with differential operators. In this discussion we adapt these ideas to the ordinary differential equation context. Consider the system d yi = f i = f i (y1 , y2 , . . . , yN ), dx
i = 1, 2, . . . , N.
(2.1 d)
Using elementary calculus and the summation convention, d2 yi = f ji f j , d x2
(2.1 e)
where subscript on f ji denotes partial differentiation (∂ y j )−1 . Take this further d3 yi i j k = f jk f f + f ji fkj f k , d x3 d4 yi i i j k = f jk f j f k f + 3 f jk f f f + f ji fkj f k f + f ji fkj fk f . d x4
(2.1 f) (2.1 g)
i , . . . into analogous statements about If we translate expressions like f i , f ji , f jk parent-child relationships involving people whose names are i, j, k, . . . , we see that the terms occurring in (2.1 d), (2.1 e), (2.1 f), (2.1 g) translate to kinship statements, such as f i translates to “i has no children”,
f ji f j translates to “i has one child named j, who has no children”, i f jk
j
k
f f translates to “i has two children named j and k”, each of whom has no children,
2.1 Introduction to trees, graphs and forests
43
f ji fkj f k translates to “i has a child j who has a child k” who has no children. We can write these “kinship statements” as labelled trees shown, respectively, in the following diagrams k j j j k i
i
i
i
Subtrees, supertrees and prunings For the two given trees, t =
,
t=
,
(2.1 h)
t is a subtree of t and t is a supertree of t , because t can be constructed by deleting some of the vertices and edges from t or, alternatively, t can be constructed from t by adding additional vertices and edges. The pruning associated with a tree and a possible supertree is the member of the forest space, written as t t , formed from the collection of possible ways t can be “pruned” to form t . For example, for the trees given by (2.1 h), . t t = In two further examples, t =
,
t=
t =
,
t=
t t = 2 ,
, ,
t t =
3
+
(2.1 i) ,
(2.1 j)
where the factor 2 in t t (2.1 i) is a consequence of the fact that t could equally well have been replaced by its mirror image. Note that t t in (2.1 j) has two terms indicating that the pruning can be carried out in two different ways.
Chapter outline Elementary properties of graphs, trees and forests are discussed in Sections 2.2 – 2.4. Functions of trees are introduced in Section 2.5 and partitions and evolution of trees in Section 2.6. The generalization known as the “stump” of a tree is introduced in Section 2.7. Prunings of trees, and subtrees and supertrees, to be introduced in Section 2.8, are essential precursors to the composition rule to be introduced in Chapter 3. This is followed in 2.9 by a consideration of antipodes of trees and forests.
44
2 Trees and forests
2.2 Rooted trees and unrooted (free) trees To illustrate the relationship between rooted and unrooted trees, consider u, given by (2.1 b) (p. 40) and reproduced in (2.2 a). If u = (V, E) and r ∈ V , the triple (V, E, r) is a rooted tree. In this case, if r = 0 we obtain t and if r = 1 we obtain t . Note that, because of symmetry, r = 2, 3, 4 we also obtain t . 3 0 4
u=2
(V, E)
= {0, 1, 2, 3, 4}, {0, 1}, {0, 2}, {0, 3}, {0, 4}
1 1 2 3 4 t=
2 t =
0 3
4 0 1
(V, E, r) = {0, 1, 2, 3, 4}, {0, 1}, {0, 2}, {0, 3}, {0, 4} , 0
(2.2 a)
(V, E, r) = {0, 1, 2, 3, 4}, {0, 1}, {0, 2}, {0, 3}, {0, 4} , 1
Even though free trees, that is, trees without a root specified, have been introduced as a basic structure, we will now consider rooted trees as being the more fundamental constructs, and we will define unrooted trees in terms of rooted trees. However, we will first introduce recursions to generate trees from the single tree with exactly one vertex.
Building blocks for trees Let τ denote the unique tree with only a single vertex. We will show how to write an arbitrary t ∈ T in terms of τ using additional operations. Definition 2.2A The order of the tree t, denoted by |t|, is the cardinality of the set of vertices. That is, if t = {V, E, r}, then |t| = card(V ).
Using the B+ operation If trees t1 , t 2 , . . . , t m are given, then t = [t 1 t 2 . . . t m ],
(2.2 b)
also written B+ (t1 , t 2 , . . . , t m ) because of the connection with Hopf algebras, will denote a tree formed by introducing a new vertex, which becomes the root of t, and m new edges from the root of t to each of the roots of ti , i = 1, 2, . . . , m. Expressed diagrammatically, we have
2.2 Rooted trees and unrooted (free) trees
t=
t1
45
t2
tm
If a particular tree is repeated, this is shown using a power notation. For example, [t31 t22 ] is identical to [t 1 t 1 t1 t 2 t2 ]. Similarly subscripts on a pair of brackets [m · · · ]m will indicate repetition of this bracket pair. For example, [2 t1 t2 t 3 · · · ]2 has the same meaning as [[t1 t 2 t3 · · · ]]. B+ induction Many results concerning trees can be proved by an induction principle which involves showing the result is true for t = τ and for t = [f], where the forest f is assumed to consist of trees with order less than |t|. Thus, effectively, B+ induction is induction on the value of |t|. Balanced parentheses By adjusting the terminology slightly we see that trees can be identified with balanced sequences of parentheses and forests by sequences of balanced parentheses sequences. To convert from the B+ terminology it is only necessary to replace each occurrence of τ by () and to replace each matching bracket pair [·] by the corresponding parenthesis pair (·). For example, τ → (); [τ] → (()); [τ 2 ] → (()()); [2 τ]2 → ((())); In the BNF notation, widely used in the theory of formal languages [75] (Naur, 1963), we can write ::= () ::= | Using the beta-product A second method of forming new trees from existing trees is to use the unsymmetrical product introduced in [14]. Given t 1 and t2 , the product t 1 ∗ t 2 is formed by introducing a new edge between the two roots and defining the root of the product to be the same as the root of t1 . Hence, if t = t 1 ∗ t2 then we have the diagram t2
t= t1
This operation will be used frequently throughout this book and will be given the distinctive name of “beta-product”.
46
2 Trees and forests
Using iterated beta-products Because the beta product is not associative, it has been customary to use parentheses to resolve ambiguities of precedence. For example, (τ ∗ τ) ∗ τ gives and τ ∗ (τ ∗ τ) gives . Ambiguity can also be avoided by the use of formal powers of the symbol ∗ so that t1 ∗2 t2 t3 = t 1 ∗ ∗t 2 t3 denotes (t1 ∗ t2 ) ∗ t 3 , whereas t1 ∗ t2 ∗ t3 will conventionally denote t1 ∗ (t2 ∗ t3 ).
Table 1 Trees to order 4 with B+ notation, beta products and Polish subscripts |t|
B+
beta
Polish
1
τ
τ
0
2
[τ]
τ ∗τ
10
3
[τ 2 ]
τ ∗2 τ 2
200
3
[2 τ]2
τ ∗τ ∗τ
110
4
[τ 3 ]
τ ∗3 τ 3
3000
4
[τ[τ]]
τ ∗2 ττ ∗ τ = τ ∗2 τ ∗ ττ
2010 = 2100
4
[2 τ 2 ]2
τ ∗ τ ∗2 τ 2
1200
4
[3 τ]3
τ ∗τ ∗τ ∗τ
1110
t
Using a Polish operator Finally we propose a “Polish” type of tree construction. Denote by τn the operation, of forming the tree [t 1 t2 · · · t n ], when acting on the sequence of operands t1 , t 2 , . . . , tn . This operation is written in Polish notation as τn t 1 t 2 · · · t n . From [τ] = τ1 τ we can for example write [τ[τ]] = τ2 ττ1 τ = τ2 τ0 τ1 τ0 , where τ0 is identified with τ. For compactness, this tree can be designated just by the subscripts; in this case 2010, or because the operation τ2 is symmetric, as 2100. These Polish codes have been added to Table 1.
2.2 Rooted trees and unrooted (free) trees
47
In the spirit of Polish notation, it is also convenient to introduce a prefix operator β which acts on a pair of trees to produce the beta-product of these trees. That is β t1 t2 := t 1 ∗ t2 . Examples to order 4 are τ = τ, β ττ = [τ], β β τττ = [τ 2 ], β τβ ττ = [2 τ]2 , β β β ττττ = [τ 3 ], β β τβ τττ = β β ττβ ττ = [τ[τ]], β τβ β τττ = [2 τ 2 ]2 , β τβ τβ ττ = [3 τ]3 . Other uses of Polish notation are introduced in Chapter 3, Section 3.3 (p. 106). Exercise 20 Write the following tree in a variety of notations such as (i) iterated use of the B+ operation, (ii) iterated use of the beta-product, (iii) Polish notation using a sequence of factors τ, τ1 , τ2 , . . . t=
An infix iterated beta operator It is sometimes convenient to replace ∗n t1 t2 · · · t n by t1 t 2 · · · t n . The operator operates on whatever trees lie to its right, and parentheses are typically needed to indicate a departure from strict Polish form. For example, (τm t1 t 2 · · · t n )t 1 t2 · · · t m = τm+n t1 t2 · · · t n t1 t 2 · · · t m = [t 1 t2 · · · t n t1 t 2 · · · t m ]. 2 Exercise 21 Write the tree τ2 τ3 τ 3 in B+ and beta-prodict forms.
Equivalence classes of trees If t1 = t ∗ t , t2 = t ∗ t for some t , t then we write t 1 ∼ t 2 . We extend this to an equivalence relation by defining t 1 ≈ t n if there exist t 2 , . . . , t n−1 such that t 1 ∼ t 2 ∼ t 3 · · · t n−1 ∼ t n .
48
2 Trees and forests
Definition 2.2B An unrooted tree u is an equivalence class of T. The set U is the partition of T into unrooted trees. As an example, we show how four specific trees, with order 6, are connected using the ∼ relation.
=
∗
∼
∗
=
=
∗
∼
∗
=
∗
=
∼
∗
=
Hence,
≈
≈
≈
This defines the corresponding unrooted tree in terms of its equivalence class = , , , .
S-trees and N-trees In the theory of symplectic integration, as in Chapter 7 (p. 247), unrooted trees have an important role [84]. In this theory a distinction has to be made between two types of trees. Definition 2.2C A tree t with the property that t ∼ t ∗ t , for some t , is said to be an S-tree. An equivalence class of S-trees is also referred to as an S-tree. An N-tree is a rooted or unrooted tree which is not an S-tree. The terminology “S-tree” is based on their superfluous role in the order theory for symplectic integrators. Hence, following the terminology in [84], S-trees will also be referred to as “superfluous trees”. Similarly N-trees are referred to as “nonsuperfluous trees.” Unrooted trees to order 5 For orders 1 and 2 there is only a single tree and hence, the equivalence classes contain only these single members. For order 3, the two trees are similar to each other and constitute the only similarity class of this order. For order 4, the 4 trees break into two similarity classes and, for order 5, they break into 3 classes. We will adopt the
2.2 Rooted trees and unrooted (free) trees
49
convention of drawing unrooted trees with all vertices spread out at approximately the same level so that there is no obvious root. The unrooted trees, separated into S-trees and N-trees, together with the corresponding rooted trees, up to order 5, are shown in Table 2.
Table 2 Rooted and unrooted trees (S-trees and N-trees) to order 5 U S
U
UN
T
Exercise 22 For the following unrooted tree, find the four corresponding trees, in the style of Table 2. Show how these trees are connected by a sequence of ∼ links between the four rooted trees which make up the unrooted tree
Exercise 23 As for Exercise 22, find the five corresponding trees for the following unrooted tree, in the style of Table 2, and show the ∼ connections
50
2 Trees and forests
Classifying trees by partitions For a tree t = [t 1 , t 2 , . . . , t m ], with |t| = p, the corresponding partition of t is the partition of p − 1 given by p − 1 = |t 1 | + |t2 | + · · · + |tm |. The number of components in this partition, that is, the value of m, will be referred to as the “order of the partition”. Given the value of p, the set of all partitions of p − 1 gives a convenient classification of the trees of the given order. For example, to identify the trees of order 5, and to list them in a systematic manner, first find the partitions of 4: 1 + 1 + 1 + 1,
1 + 1 + 2,
1 + 3,
2 + 2,
4
and then write down the required list of trees using this system. The result of this listing procedure, up to p = 6, is shown in Table 3.
2.3 Forests and trees A forest is a juxtaposition of trees. The set of forests F can be defined recursively by the generating statements 1 ∈ F, t f = f t ∈ F, t ∈ T, f ∈ F, f 1 f2 = f 2 f1 ∈ F, f 1 , f 2 ∈ F. The structure used in (2.2 b) can now be interpreted as a mapping from F to T f → [f] ∈ T
(2.3 a)
with the special case [1] = τ. We will sometimes use the terminology B+ for the mapping in (2.3 a) together with the inverse mapping B− . That is, B+ (f) = [f] ∈ T,
B− ([f]) = f ∈ F.
The order of a forest is an extension of Definition 2.2A (p. 44): Definition 2.3A The order of f ∈ F is defined by |1| = 0, |t 1 t2 · · · t m | = |t 1 | + |t2 | + · · · + |tm |.
2.3 Forests and trees
51 Table 3 Trees up to order 6, classified by partitions
order
partition
2
1
3
1+1 2
4
1+1+1 1+2
3 5
1+1+1+1 1+1+2
1+3 2+2
4 6
1+1+1+1+1 1+1+1+2
1+1+3 1+2+2
1+4
2+3
5
trees
52
2 Trees and forests
Theorem 2.3B Amongst elementary consequences of Definitions 2.2A and 2.3A, we have |t f| = |t| + |f|, |f1 f2 | = |f 1 | + |f2 |, |t 1 ∗ t2 | = |t 1 | + |t2 |, |B+ (f)| = 1 + |f|. In contrast to the width of a forest, we will also use the “span” of a forest, according to the recursive definition: Definition 2.3C The span of a forest f is defined by 1 = 0, f ∈ F,
ft = f + 1,
t ∈ T.
Miscellaneous tree terminology and metrics The root of a tree t will be denoted by root(t). Every other vertex is a member of the set nonroot(t). Because a tree is connected and has no loops, there is a unique path from root(t) to any v ∈ nonroot(t). The number of links in this path will be referred to as the “height” of v and the height of a tree is the maximum of the heights of its vertices. If the last link is the edge {v , v} then v is the “parent” of v and v is a “child” of v . If v has no children it is a “leaf” of t. If v is present in the path from the root to v then v is an “ancestor” of v and v is a “descendant” of v . The “dependants” of a vertex v are its descendants together with v itself. Every vertex, except the root, has a unique parent. The “width” of a tree is defined recursively by width(τ) = 1, m
width([t 1 t2 · · · t m ]) = ∑ width(t i ). i=1
For the tree
t=
f e
d b
c a
the various relationships are given for the 6 vertices a, b, . . . , f as follows
2.4 Tree and forest spaces
v
53
height(v) child(v) parent(v) descendant(v) ancestor(v)
a
0
{b, c}
0/
{b, c, d, e, f }
0/
b
1
0/
a
0/
{a}
c
1
{d, e}
a
{d, e, f }
{a}
d
2
0/
c
0/
{a, c}
e
2
{f}
c
{f}
{a, c}
f
3
0/
e
0/
{a, c, e}
For this tree, root(t) = a, nonroot(t) = {b, c, d, e, f }, height(t) = width(t) = 3.
2.4 Tree and forest spaces The tree space A formal linear combination of trees, including ∅, constitutes a member of the “tree space”. The sum, and multiplication by a scalar, are defined by a0 ∅ + ∑ a(t)t + b0 ∅ + ∑ b(t)t = (a0 + b0 )∅ + ∑ a(t) + b(t) t, t∈ T
t∈ T
t∈ T
c a0 ∅ + ∑ a(t)t = ca0 ∅ + ∑ ca(t)t. t∈ T
t∈ T
The forest space A formal linear combination of forests will constitute a member of the “forest space” F. That is, F ∈ F, for an expression of the form F
=
∑ a(f)f,
a : F → R.
f ∈F
If φ : R → R is given, with a Taylor expansion of the form φ (x) = a0 + a1 x + a2 x2 + · · · , then for a specific t ∈ T, F = φ (t) ∈ F is defined by F
= a0 1 + a1 t + a2 t 2 + · · · .
For example, (1 − t)−1 =
∞
∑ tn .
n=0
54
2 Trees and forests
Theorem 2.4A The sum of all forests is given by the formal product
∏ (1 − t)−1 = ∑ f.
(2.4 a)
f ∈F
t∈ T
The forest space as a ring A ring is one of the fundamental algebraic structures. A unitary commutative ring is a set R with two binary operations, known as addition and multiplication, with the following properties 1. (R, +) is an abelian group with identity 0. 2. (R, ·) is associative and commutative with identity 1. That is, it is a commutative semi-group with identity (or a commutative monoid). 3. The left-distributive rule holds: x(y + z) = xy + xz Without providing details, we assert that the forest space F is a unitary commutative ring.
Enumeration of trees and free trees Introduce the three generating functions A (x) = a1 x + a2 x2 + · · · , B(x) = b1 x + b2 x2 + · · · ,
(2.4 b)
C (x) = c1 x + c2 x + · · · , 2
where, for each n = 1, 2, . . . , an is the number of trees with order n, bn is the number of unrooted (free) trees with order n, cn is the number of non-superfluous unrooted trees with order n. Low order terms in A , B and C By inspection of Table 2 (p. 49), we see that a1 = 1, b1 = 1,
a2 = 1,
a3 = 2,
a4 = 4,
a5 = 9,
b2 = 1,
b3 = 1,
b4 = 2,
b5 = 3,
c1 = 1,
c2 = 0,
c3 = 1,
c4 = 1,
c5 = 3.
Hence we can insert the coefficients in (2.4 b): A (x) = x + x2 + 2x3 + 4x4 + 9x5 + · · · , B(x) = x + x2 + x3 + 2x4 + 3x5 + · · · , C (x) = x + x3 + x4 + 3x5 + · · · .
2.4 Tree and forest spaces
55
Let ξ be the surjection from F to the space of power series in some indeterminant x such that ξ (t) → x|t| , for any tree t. For example, ξ (1 − t) = 1 − x|t| . ξ is a homomorphism because, in additional to the required vector space properties,
ξ (ff ) = x|ff | = x|f|+|f | = x|f|| x|f | = ξ (f)ξ (f ). Theorem 2.4B The coefficients in A (x) are given by a1 + a2 x + a3 x2 + · · · = (1 − x)−a1 (1 − x2 )−a2 . . . Proof. Apply the operation ξ to each side of (2.4 a). The left-hand side maps to n −an and, because the number of trees of order n is the same as the ∏∞ n=1 (1 − x ) number of forests of order n − 1 (note the equivalence [f] ↔ t), the right-hand side of n−1 . (2.4 a) maps to ∑∞ n=1 an x Picking out the coefficients of the first few powers of x we obtain in turn a1 = 1, a2 = a1 = 1, a3 = 12 a1 (a1 + 1) + a2 = 2, a4 = 16 a1 (a1 + 1)(a1 + 2) + a1 a2 + a3 s = 4. The conclusions of Theorem 2.4B are used in Algorithm 1.
Algorithm 1 Find the number of trees of each order Input: p max Output: Ntrees[1 : p max] % % Calculate the number of trees of each order 1 : p max and place % the results in Ntrees, using Theorem 2.4B % 1 for i from 1 to p max do 2 Ntrees[i] ← 1 3 end for 4 for i from 2 to p max − 1 do 5 for j from p max step − 1 to 1 do 6 a ← Ntrees[i] 7 for k from 1 to floor(( j − 1)/i) do 8 Ntrees[ j] ← Ntrees[ j] + a ∗ Ntrees[ j − i ∗ k] 9 a ← a ∗ (Ntrees[i] + k)/(k + 1) 10 end for 11 end for 12 end for
56
2 Trees and forests
Let a n = ∑ni=1 ai denote the number of trees with order not exceeding n. Values of an and a n up to n = 10 are given below using a Julia realization of Algorithm 1 (p. 55). Further information is available in Table 4 (p. 58) n
1 2 3 4
5
6
7
8
9
10
11
12
an
1 1 2 4
9 20 48 115 286
a n
1 2 4 8 17 37 85 200 486 1205 3047 7813
719 1842 4766
denote the subset of T in which no vertex has exactly one descendant. That is, Exercise 24 Let T the first few members are = , , , , ... . T with order n, show that If an is the number of members of T ∞
1 + x + a3 x2 + a4 x3 + · · · = (1 − x)−1 ∏ (1 − xn )−an . n=3
Lemma 2.4C B(x) − C (x) = A (x2 ). Proof. Because each superfluous tree is made up by joining two copies of a rooted tree, b2n − c2n = an , b2n−1 − c2n−1 = 0.
Lemma 2.4D B(x) + C (x) = 2A (x) − A (x)2 . Proof. Let T ∈ T be the sum of all rooted trees and consider T
= 2 T − T ∗ T.
For any u ∈ UN , write the corresponding equivalence class of trees as a sequence t1 , t2 , . . . , tn , where ti−1 ∼ ti , i = 1, 2, . . . , n. Hence, the total of the coefficients of these trees is 2n, from the term 2T, minus 2n − 2 from the term −T ∗ T, making a total of 2. If u ∈ US , then the result is 2n − (2n − 1) = 1. Hence, ξ (T ) = B(x) + C (x), which equals ξ (2T − T ∗ t)S = 2A (x) − A (x)2 . To illustrate the proof of Lemma 2.4D, 2T − T ∗ T is evaluated, with only trees up to order 5 included. Note that trees of a higher order that appear in the manipulations are deleted. 2 + + + + + + + + + + + + + + + + − + + + + + + + ∗ + + + + + + +
2.4 Tree and forest spaces
57
=
+ + +
+ + +
+
+ +
+ + + + +
+
+ +
A final result consists of two terms; the first has a representative of each equivalence class, with superfluous trees included, and the second term contains a different representative of each class but with superfluous trees omitted. Finally the generating functions for members of U and US can be deduced. Theorem 2.4E
B(x) = A (x) − 12 (A (x)2 − A (x2 ), C (x) = A (x) − 12 (A (x)2 + A (x2 ).
Proof. Use Lemmas 2.4C and 2.4D. The calculation of the coefficients in B(x) and C (x) is shown in Algorithm 2. Results from Algorithms 1 (p. 55) and 2 are presented in Table 4. In this table, an = Ntrees, bn = Utrees, cn = UNtrees. Accumulated totals are denoted by a , b, c .
Algorithm 2 Find the number of unrooted and non-superfluous unrooted trees of each order Input: p max and Ntrees from Algorithm 1 (p. 55) Output: Utrees, UNtrees % % Calculate the number of unrooted trees of each order 1 : p max and place % the results in Utrees, and the number of non-superfluous % unrooted trees and place these in UNtrees, using the result of Theorem 2.4E % 1 for i from 1 to p max do 2 Xtrees[i] ← 0 3 Ytrees[i] ← 0 4 end for 5 for i from 1 to p max − 1 do 6 for j from 1 to p max − i do 7 Xtrees[i + j] ← Xtrees[i + j] + Ntrees[i] ∗ Ntrees[ j] 8 end for 9 end for 10 for i from 1 to floor(p max/2) do 11 Ytrees[2 ∗ i] ← Ntrees[i] 12 end for 13 for i from 1 to p max do 14 Utrees[i] ← Ntrees[i] − (Xtrees[i] − Ytrees[i])/2 15 UNtrees[i] ← Ntrees[i] − (Xtrees[i] + Ytrees[i])/2 16 end for
58
2 Trees and forests Table 4 Tree enumerations for rooted trees, unrooted trees, and non-superfluous unrooted trees, with accumulated totals.
n
1 2 3 4
5
9
10
11
12
13
14
15
an
1 1 2 4
9 20 48 115 286
6
7
8
719
1842
4766
12486
32973
87811
a n
1 2 4 8 17 37 85 200 486 1205
3047
7813
20299
53272
141083
bn
1 1 1 2
3
6 11
23
47
106
235
551
1301
3159
7741
bn
1 2 3 5
8 14 25
48
95
201
436
987
2288
5447
13188
cn
1 0 1 1
3
4 11
19
47
97
235
531
1301
3111
7741
c n
1 1 2 3
6 10 21
40
87
184
419
950
2251
5362
13103
2.5 Functions of trees A number of special functions on the set of trees have applications in Chapter 3. We will continue the present chapter by introducing these and, at the same time, establishing a standard ordering for the trees up to order 6. For a tree t defined as the graph (V, E, r), consider a permutation π of the members of V which acts also on the vertex pairs comprising E and on the root r. That is, v ∈ V maps to πv, an edge {v1 , v2 } maps to {πv1 , πv2 } and r maps to πr. In compact notation (V, E, r) maps to (πV, πE, πr). Because π is a permutation, πV = V . If also πE = E and πr = r, then π is said to be an automorphism of t. Define the group A(t) as the group of automorphisms of t. Finally, we have the definition Definition 2.5A The symmetry of t, written as σ (t), is the order of the group A(t). For example, consider the tree defined by V = {a, b, c, d, e, f , g}, E = {{a, b}, {a, c}, {a, d}, {d, e}, {d, f }, {d, g}}, r = a, and represented by the diagram e t=
b
g
f
c
d a
(2.5 a)
By inspection we see that the only automorphisms are the permutations for which a and d are invariant, the set {b, c} is invariant and the set {e, f , g} is invariant. Hence, σ (t) = 2!3! = 12.
2.5 Functions of trees
59
Theorem 2.5B The function σ satisfies the recursion σ (τ) = 1, m
t = [t k11 tk22 · · · t kmm ].
σ (t) = ∏ ki !σ (ti )ki ,
(2.5 b)
i=1
Proof. Write the set of vertices of t given in (2.5 b) in the form V = V0 ∪
km m ! !
Vi j ,
i=1 j=1
where V0 contains the label assigned to the root and Vi j contains the labels assigned to copy number j of ti . The permutations of the members of V which retain the connections, as represented by the set of all edges, consist of the compositions of two types of permutations. These are (1) the permutations within each of the ki copies of ti , for i = 1, 2, . . . , m, and (2) for each i = 1, 2, . . . , m, the permutations of the sets Ei j , j = 1, 2, . . . , ki , amongst the k j copies of t i . To contribute to the final result, (1) gives m ki a factor of ∏m i=1 ki ! and (2) gives a factor of ∏i=1 σ (t i ) . Another characterization of σ (t) is found by attaching an integer Si to each vertex kim
i of t. Let the forest of descendant trees from i be tki1i1 tki2i2 · · · t imii , where ti1 , ti2 , . . . , timi are distinct. Define (2.5 c) Si = ki1 !ki2 ! · · · kimi !. Corollary 2.5C With Si defined by (2.5 c), σ (t) = ∏ Si . i
Proof. The result follows by B+ induction. The factorial (sometimes referred to as the density) is defined as the product of the number of dependants of each vertex. For t given by (2.5 a), the numbers of dependants is shown in (2.5 d), giving t! = 28. 1 t=
1
1
1
1 4
.
7 It is convenient to give a formal definition in the form of a B+ recursion.
(2.5 d)
60
2 Trees and forests
Definition 2.5D The factorial of t, written as t!, is τ! = 1, m
t! = ([t 1 t2 · · · t m ])! = |t| ∏(t i !). i=1
A recursion based on the beta operation is sometimes useful. Theorem 2.5E The factorial function satisfies the following recursion, where t and t are any trees, τ! = 1, (t ∗ t )! =
t! t ! (|t| + |t |) . |t|
We next introduce three combinatorial functions which have important applications in Chapter 3. α For a given tree t with |t| = n, α(t) is the number of distinct ways of writing t in the form (V, E, r), where V is a permutation of {1, 2, . . . , n} and every vertex is assigned a lower label than each of its descendants.
(2.5 e)
β For a given tree t with |t| = n, β (t) is the number of distinct ways of writing t in the form (V, E, r), where V is a permutation of {1, 2, . . . , n}; there is no requirement to satisfy (2.5 e). β For a given tree t with |t| = n, β (t) is the number of distinct ways of writing t in the form (V, E, r), where V is a permutation of {1, 2, . . . , n}; where there is no requirement to satisfy (2.5 e) but the root is always labelled 1. For t given by (2.5 a), α(t) = 15. The labelled trees contributing to this total are 7 5 6 2 3 4 1
7 5 6 2 4 3 1
7 4 6 2 5 3 1
7 4 5 2 6 3 1
6 4 5 2 7 3 1
7 5 6 3 4 2 1
7 4 6 3 5 2 1
7 4 5 3 6 2 1
6 4 5 3 7 2 1
7 3 6 4 5 2 1
7 3 5 4 6 2 1
3 5 6 4 7 2 1
7 3 4 5 6 2 1
3 4 6 5 7 2 1
3 4 5 6 7 2 1
The value of β (t) is found to be 420 and β (t) to be 60. The general results are
2.5 Functions of trees
61
Theorem 2.5F |t|! , σ (t)t! |t|! , β (t) = σ (t) (|t| − 1)! . β (t) = σ (t)
α(t) =
(2.5 f) (2.5 g) (2.5 h)
Proof. The value of β (t) given by (2.5 g) is found by assigning labels to the vertices in all possible ways and then dividing by σ (t) to compensate for duplicated numbering of the same tree. The factor t! in the denominator of (2.5 f) compensates for the fact that each vertex v must receive the lowest label amongst the dependants of v. Similarly, (2.5 h) is found by dividing (2.5 g) by |t| because of the allocation of 1 to the root.
Standard numbering of trees We will need to make use of individual rooted trees throughout this volume and it will be convenient to assign standardized numbers. Denote tree number n in the sequence of numbered trees as tn . We will now describe the procedure for constructing the sequence of all trees in the standard order. Starting with t1 = τ, we generate trees of orders 2, 3, . . . recursively. For a given order p > 1 suppose all trees of lower order have been assigned sequence numbers. To generate the numbered trees of order p, write t = t ∗ tr , where r runs through the sequence numbers of previously assigned trees of order less than p. For each choice of r, runs, in order, through all integers such that |t | + |tr | = p. If t constructed by this process has not been assigned a sequence number already, then it is given the next number available, say n, and we write L(n) = and R(n) = r. However, if it is found that t = t ∗ tr is identical with an existing numbered tree, then t is regarded as a duplicate and no new number needs to be assigned. While the numbers are being assigned, it is also convenient to assign standard factors to each tree tn , in the form tL(n) and tR(n) say. This is done the first time each new tree arises and the same factors automatically apply to any duplicates. Table 5 (p. 62) illustrates how this procedure works for trees up to order 4. The values of t = t and t = tr are shown for each tree together with its sequence number. However, in the case that this is a duplicate, no diagram for the tree is given and no entries are given for L or R. In the numbering system based on these rules, the partition orders, for a given tree order, are automatically in decreasing order as was illustrated in Table 3 (p. 51). The standard numbering is shown to order 6 in Table 6 (p. 63), classified according to the partition order. For example, the numbered trees up to order 4 are
62
2 Trees and forests
Table 5 Illustration of tree numbering to order 4, showing the beta-product and unique factorization t ∗ t
L(t)
R(t)
t2
t1 ∗ t1
1
1
t3
t2 ∗ t1
2
1
t4
t1 ∗ t2
1
2
t5
t3 ∗ t1
3
1
t6
t4 ∗ t1
4
1
t6
t2 ∗ t2
t7
t1 ∗ t3
1
3
t8
t1 ∗ t4
1
4
t t1
t1 =
t2 =
t3 =
t4 =
t5 =
t6 =
t7 =
t8 =
A Julia realization of Algorithm 3 (p. 64), using p max = 5, gives the expected results: L[2] L[3] · · · L[17] 1 2 1 3 4 1 1 5 6 7 8 4 1 1 1 1 = 1 1 2 1 1 3 4 1 1 1 1 2 5 6 7 8 R[2] R[3] · · · R[17] ⎤ ⎡ 2 4 7 8 14 15 16 17 ⎡ ⎤ ⎥ ⎢ 3 6 11 12 prod[1, 1] prod[1, 2] · · · prod[1, 7] prod[1, 8] ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎢prod[2, 1] prod[2, 2] · · · prod[2, 7] ⎥ ⎢ 5 10 ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎥ ⎢ 6 13 .. ⎢ ⎥=⎢ ⎥, ⎢ ⎥ ⎢ 9 . ⎥ ⎢ ⎥ ⎢ ⎥ ⎢prod[7, 1] ⎥ ⎢10 ⎥ ⎣ ⎦ ⎢ ⎥ ⎦ ⎣ 11 prod[8, 1] 12 where the blank entries are not relevant to this algorithm if the maximum order is 5.
2.5 Functions of trees
63
Table 6 Standard numbering of trees to order 6 classified by the order of their partitions t1 = t2 = t3 = t4 =
t5 = t6 =
t7 =
t8 =
t9 = t10 =
t11 =
t12 =
t13 =
t14 =
t15 =
t16 =
t20 =
t21 =
t22 =
t23 =
t24 =
t25 =
t26 =
t27 =
t28 =
t29 =
t30 =
t31 =
t32 =
t33 =
t34 =
t17 =
t18 = t19 =
t35 =
t36 =
t37 =
64
2 Trees and forests Algorithm 3 Generate trees
Input: p max Output: order, first, last, L, R, prod % % order[1:last[p max]] is the vector of orders of each tree % first[1 : p max] is the vector of first serial numbers for each order % last[1 : p max] is the vector of last serial numbers for each order % L[2 : p max] is the vector of left factors for each tree % R[2 : p max] is the vector of right factors for each tree % prod is the array of products of trees and r such that order[] + order[r] ≤ p max % 1 first[1] ← 1 2 last[1] ← 1 3 order[1] ← 1 4 n←1 5 for p from 2 to p max do 6 first[p] ← n + 1 7 for r from 1 to last[p − 1] do 8 for from first[p − order[r]] to last[p − order[r]] do 9 if = 1 or r ≤ R[] then 10 n ← n+1 11 prod[, r] ← n 12 L[n] ← 13 R[n] ← r 14 order[n] ← p 15 else 16 prod[l, r] ← prod[prod[L[], r], R[]] 17 end if 18 end for 19 end for 20 last[p] ← n 21 end for
The standard numbering of the 37 trees to order 6 is shown in Table 7 (p. 66). Also shown are σ (t) and t! for each tree t.
Generation of various tree functions We will temporarily introduce Bminus to denote some aspects of the structure of a tree t. Let n = |t|, then Bminus(t) will denote a vector with dimension m := last[n − 1], where last[n] is the total number of trees with order up to n. Thus last is one of the results computed in Algorithm 3. The elements of Bminus(t) are the exponents k1 , k2 , . . . , km such that
t = tk11 tk22 · · · tkmm . We will present Algorithm 4 to generate values of Bminus together with the factorial and symmetry functions.
2.6 Trees, partitions and evolutions
65
Algorithm 4 Generate tree functions Input: order, first, last, L, R, prod, p max from Algorithm 3 Output: Bminus, factorial, symmetry % % Bminus[n][1 : order[n] − 1], n = 1 : lastp top, is the sequence of Bminus values % factorial[1 : last[p top] is the array of factorial values % symmetry[1 : last[p top] is the array of symmetry values % 1 Bminus[1] ← [] 2 factorial[1] ← 1 3 symmetry[1] ← 1 4 for n from 2 to last[p max] do 5 factorial[n] ← factorial[L[n]] ∗ factorial[R[n]] ∗ order[n]/order[L[n]] 6 Bminus[n] = pad(last[order[n]], Bminus[L[n]]) 7 Bminus[n][R[n]] ← Bminus[n][R[n]] + 1 8 symmetry[n] = symmetry[L[n]] ∗ symmetry[R[n]] ∗ Bminus[n][R[n]] 9 end for
A realization in Julia, up to order 5, gave the results symmetry = {1, 1, 2, 1, 6, 1, 2, 1, 24, 2, 2, 1, 2, 6, 1, 2, 1} factorial = {1, 2, 3, 6, 4, 8, 12, 24, 5, 10, 15, 30, 20, 20, 40, 60, 120} Bminus = {}, {1}, {2, 0}, {0, 1}, {3, 0, 0, 0}, {1, 1, 0, 0}, {0, 0, 1, 0}, {0, 0, 0, 1}, {4, 0, 0, 0, 0, 0, 0, 0}, {2, 1, 0, 0, 0, 0, 0, 0}, {1, 0, 1, 0, 0, 0, 0, 0}, {1, 0, 0, 1, 0, 0, 0, 0}, {0, 2, 0, 0, 0, 0, 0, 0}, {0, 0, 0, 0, 1, 0, 0, 0}, {0, 0, 0, 0, 0, 1, 0, 0}, {0, 0, 0, 0, 0, 0, 1, 0}, {0, 0, 0, 0, 0, 0, 0, 1}
2.6 Trees, partitions and evolutions For a tree t = [t 1 t2 · · · t m ], with |t| = n + 1, the set {|t1 |, |t2 |, . . . , |t m |} is a partition of n into m components, in the sense that |t1 | + |t2 | + · · · + |tm | = n. In this section, we will consider partitions of sets and numbers and their relationship with trees. Partitions have a central role in the study of B-series in Chapter 3, with specific applications in Section 3.5 (p. 117).
Partitions of numbers and sets Given a finite set S with cardinality n, it is convenient to distinguish between the partitions of S, denoted by P[S], and of n, denoted by P(n).
66
2 Trees and forests Table 7 Trees to order 6 showing σ (t) and t!
|t| 1
2
σ (t)
t t1
t2
τ
[τ]
1
1
t! 1
2
|t|
σ (t)
t!
6 t18
[τ 5 ]
120
6
6 t19
[τ 3 [τ]]
6
12
6 t20
[τ 2 [τ 2 ]]
4
18
6 t21 [τ 2 [2 τ]2 ]
2
36
t
3
t3
[τ 2 ]
3
3
6 t22
[τ[τ]2 ]
2
24
3
t4
[2 τ]2
1
6
6 t23
[τ[τ 3 ]]
6
24
6 t24 [τ[τ[τ]]]
1
48
4
t5
[τ 3 ]
6
4
6 t25 [τ[2 τ 2 ]2 ]
2
72
4
t6
[τ[τ]]
1
8
6 t26
[τ[3 τ]3 ]
1
144
4
t7
[2 τ 2 ]2
2
12
6 t27 [[τ][τ 2 ]]
2
36
4
t8
[3 τ]3
1
24
6 t28 [[τ][2 τ]2 ]
1
72
6 t29 [τ[2 τ 2 ]2 ]
24
30
[τ[3 τ]3 ]
2
60
[τ 4 ]
24
5
5 t10 [τ 2 [τ]]
2
10
6 t31 [[τ][τ 2 ]]
2
90
5 t11 [τ[τ 2 ]]
2
15
6 t32 [[τ][2 τ]2 ]
1
180
5 t12 [τ[2 τ]2 ]
1
30
6 t33
[2 [τ]2 ]2
2
120
5
t9
6 t30
5 t13
[[τ]2 ]
2
20
6 t34
[3 τ 3 ]3
6
120
5 t14
[2 τ 3 ]2
6
20
6 t35
[3 τ[τ]]3
1
240
1
40
6 t36
[4 τ 2 ]4
2
360
6 t37
[5 τ]5
1
720
5 t15 [2 τ[τ]]2
5 t16
[3 τ 2 ]3
2
60
5 t17
[4 τ]4
1
120
2.6 Trees, partitions and evolutions
67
Table 8 Systematic generation of trees to order 6 t1 t2 = t1 ∗ t1 t3 = t2 ∗ t1
t4 = t1 ∗ t2
t5 = t3 ∗ t1
t6 = t4 ∗ t1
t6 = t2 ∗ t2
t7 = t1 ∗ t3
t9 = t5 ∗ t1
t10 = t6 ∗ t1
t11 = t7 ∗ t1
t12 = t8 ∗ t1
t10 = t3 ∗ t2
t13 = t4 ∗ t2
t11 = t2 ∗ t3
t12 = t2 ∗ t4
t14 = t1 ∗ t5
t15 = t1 ∗ t6
t16 = t1 ∗ t7
t17 = t1 ∗ t8
t18 = t9 ∗ t1
t19 = t10 ∗ t1
t20 = t11 ∗ t1
t21 = t12 ∗ t1
t22 = t13 ∗ t1
t23 = t14 ∗ t1
t24 = t15 ∗ t1
t25 = t16 ∗ t1
t26 = t17 ∗ t1
t19 = t5 ∗ t2
t22 = t6 ∗ t2
t27 = t7 ∗ t2
t28 = t8 ∗ t2
t20 = t3 ∗ t3
t28 = t4 ∗ t3
t21 = t3 ∗ t4
t28 = t4 ∗ t4
t23 = t2 ∗ t5
t24 = t2 ∗ t6
t25 = t2 ∗ t7
t26 = t2 ∗ t8
t29 = t1 ∗ t9
t30 = t1 ∗ t10
t31 = t1 ∗ t11
t32 = t1 ∗ t12
t33 = t1 ∗ t13
t34 = t1 ∗ t14
t35 = t1 ∗ t15
t36 = t1 ∗ t16
t37 = t1 ∗ t17
t8 = t1 ∗ t4
68
2 Trees and forests
Definition 2.6A A partition P of a set S is a set of subsets P = {S1 , S2 , . . . , Sm }, " such that for i = j ∈ {1, 2, . . . , m}, Si ∩S j = 0/ and such that m i=1 Si = S. A partition p of a positive integer n is a set of integers p = {p1 , p2 , . . . , pm }, where repetitions are permitted, such that n = ∑m i=1 pi . If the components are permuted, it is regarded as the same partition. For example, if S = {1, 2, 3, 4}, then the members of P[S] are {{1}, {2}, {3}, {4}}, {{1}, {2}, {3, 4}} and 13 further partitions. We will, for reasons of brevity, write these in a compressed notation, starting with: 1+2+3+4, 1+2+34 . . . . The complete list is (line 1)
1+2+3+4,
(line 2)
1+2+34, 1+3+24, 1+4+23, 2+3+14, 2+4+13, 3+4+12,
(line 3)
1+234, 2+134, 3+124, 4+123,
(line 4)
12+34, 13+24, 14+23,
(line 5)
1234,
(2.6 a)
where the five lines correspond to the five members of P(4): (line 1)
1+1+1+1,
(line 2)
1+1+2,
(line 3)
1+3,
(line 4)
2+2,
(line 5)
4.
(2.6 b)
Definition 2.6B Let P = {S1 , S2 , . . . , Sm }, be a partition of a set S, then φ (P) is the corresponding partition of n = card(P), where φ (P) = {card(S1 ), card(S2 ), . . . , card(Sm )}. Furthermore, if p is a partition of the integer n then the p-weight is p-weight(p) := card(φ −1 (p)).
As examples of these ideas, φ (P) for any of the partitions in line i = 1, 2, 3, 4, 5 of (2.6 a) are given in the corresponding lines of (2.6 b). Furthermore, by comparing (2.6 a) and (2.6 b), we see that p-weight(1+1+1+1) = 1, p-weight(1+1+2) = 6, p-weight(1+3) = 4, p-weight(2+2) = 3, p-weight(4) = 1. In referring to a particular partition of n, it will often be convenient to specify the number of repetitions of each particular component. Thus we will write
2.6 Trees, partitions and evolutions
69
1n1 + 2n2 + · · · + mnm , as an alternative way of writing n
n
nm # $%1 & # $%2 & # $% & 1+1+···+1+2+2+···+2+···+m+m+···+m
to specify the number of occurrences of each part. In this notation, kn j may be omitted from the sum if n j = 0, for 1 ≤ j ≤ m. For example, the 7 partitions of 5 can be written in several alternative but equivalent ways, such as 1+1+1+1+1 = 15 , 1+1+1+2 = 13 +2, 1+1+3 = 12 +20 +3 = 12 +3, 1+2+2 = 1+22 , 1+4 = 1+20 +30 +4, 2+3 = 10 +2+3, 5 = 10 +20 +30 +40 +5. The value of p-weight Using this terminology, it is possible to find a formula for p-weight. Theorem 2.6C Let p = 1n1 + 2n2 + · · · + mnm ∈ P(n). Then p-weight(p) =
n!
. ni ∏m i=1 ni !(i!)
Proof. Given P ∈ P[S], with card(S) = n, let P=
ni m ! !
Si j ,
card(Si j ) = i,
i=1 j=1
where φ (P) = 1n1 + 2n2 + · · · + mnm ∈ P(n), The number of ways of assigning n ni labels to the sets Si j is given by the multinomial coefficient n!/∏m i=1 (i!) . However, for each i = 1, 2, . . . , m, the ni sets Si j , j = 1, 2, . . . , ni can be permuted without altering the set-partition. Hence, to compensate for the over-counting, it is necessary to divide by i!.
Exercise 25 Find the partitions of {1, 2, 3, 4, 5} such that φ (P) = 1 + 22 .
70
2 Trees and forests
Evolution of partitions of numbers and sets Evolution of P[S] for an embedded sequence of sets Let S1 ⊂ S2 ⊂ · · · be a sequence of sets, such that card(Sn ) = n, n = 1, 2, . . . Equivalently, given a sequence of distinct symbols s1 , s2 , . . . , define Sn = {s1 , s2 , . . . , sn }. In the examples given below, we will use the sequence starting with {1, 2, 3, 4, 5, . . . }. Definition 2.6D Given partitions P ∈ P[Sn−1 ], P ∈ P[Sn ], P is the parent of P and P is a child of P if (i) card(P) = card(P ) − 1 and P = P + {sn }, or (ii) P0 , P1 , P2 exist such that P = P0 ∪ {P1 }, P = P0 ∪ {P2 }, P2 = P1 ∪ {sn }. We will explore the evolutionary structure of the sequence of partitions P[Sn ], n = 1, 2, . . . , as we move from step to step and indicate with the notation P −→ P that P is a child of P. Given P ∈ P[Sn−1 ], the expression evolve(P) will denote the formal sum of the children of P. Furthermore, evolve acting on a formal sum of members of P[Sn−1 ] will denote m
evolve
∑ Pi
i=1
m
= ∑ evolve(Pi ). i=1
For the sequence, {1}, {1, 2}, {1, 2, 3}, {1, 2, 3, 4}, this structure is shown in Figure 4.
1+2+3+4 1+2+34 1+2+3
1+3+24 1+4+23
1+2 1+23
2+3+14 2+4+13 3+4+12
2+13
1
1+234 2+134
3+12 12
3+124 4+123 14+23
123
13+24 12+34 1234
Figure 4 Evolution of the partitions from P[{1}], through P[{1, 2}], P[{1, 2, 3}] to P[{1, 2, 3, 4}]
2.6 Trees, partitions and evolutions
71 1+1+1+1 1+1+1
1+1
1+1+2 1+2
1
1+3 2+2
2 3
4
Figure 5 Evolution of the partitions from P(1), through P(2), P(3) to P(4)
Evolution of P(n) for n = 1, 2, 3, 4 Definition 2.6E Given partitions p ∈ P(n − 1), p ∈ P(n), p is the parent of p and p is a child of p if (i) either p = p ∪ {1} or (ii) a set p0 , and an integer m > 1 exist such that p = p0 ∪ {m − 1}, p = p0 ∪ {m}. Given p ∈ P(n−1 ), the expression evolve(p) will denote the formal linear combination of the children of p, where the factor m is introduced in mp if p is a child of p in m different ways. Furthermore, evolve acting on a formal sum of members of P(n − 1) will denote evolve
m C p ∑ i i = ∑ Ci evolve(pi ). m
i=1
For example,
i=1
evolve(13 ) = (14 ) + 3(12 + 2), evolve(1 + 2) = (12 + 2) + (1 + 3) + (22 ), evolve(3) = (1 + 3) + (4),
compared with evolve(P) for some corresponding sets, as read off from Figure 4, evolve(1+2+3) = (1+2+3+4)+(1+2+34)+(1+3+24)+(1+4+23), evolve(1+23) = (1+234)+(1+4+23)+(14+23), evolve(123) = (4+123)+(1234). Corresponding to Figure 4 the evolutionary scheme for partitions of numbers is shown in Figure 2.6, where p −→ p indicates that p is a child of p.
72
2 Trees and forests
0/
1
1+1
1+1+1
2
1+2
3
Figure 6 Evolution of the partitions from P(0), through P(1), P(2) to P(3) together with associated trees from t1 to t8
As partitions evolve, related trees evolve at the same time. This will be illustrated in Figure 6, showing how the evolution from P(0) to P(3) is accompanied by trees with orders 1 to 4. Evolution of labelled trees We will look at the production of the α(t) copies of t as the result of an evolutionary process in which, in each step, one new vertex is added in every possible way to an existing tree. Denote each of the steps as the result of an operator evolve acting on an existing linear combination of trees. Examples of the action on low order trees are evolve(∅) = t1 , evolve(t1 ) = t2 , evolve(t2 ) = t3 + t4 , evolve(t3 ) = t5 + 2t6 , evolve(t4 ) = t6 + t7 + t8 , where the action on ∅ is conventional. These correspond to the diagrams ∅
→ → →
, , +
→
+ 2
→
+
, , +
.
Write evolven as the composition of n applications of evolve and we have
2.6 Trees, partitions and evolutions
73 3
2
4
1 3
2
1
2 1
1
4 3
2
1
4 2
3 1
3 2
4 1 3
3 2 1
4 2 1 4 3 2 1
Figure 7 Evolution of labelled trees up to order 4
Theorem 2.6F evolven (∅) =
∑ α(t)t
|t|=n
Up to order 5 we have evolve(∅) = t1 , evolve2 (∅) = t2 , evolve3 (∅) = t3 + t4 , evolve4 (∅) = t5 + 3t6 + t7 + t8 , evolve5 (∅) = t9 + 6t10 + 4t11 + 4t12 + 3t13 + t14 + 3t15 + t16 + t17 . It will now be shown diagrammatically how trees evolve by the addition of a further labelled vertex in each step. For order n, the labelled vertex n is attached to existing vertices in Sn−1 in all possible ways. This gives the evolution diagram shown in Figure 7. Strip off the labels and consolidate the identical unlabelled trees that remain. The result is shown in Figure 8 (p. 74). Recursions for evolve Theorem 2.6G evolve(t) satisfies the recursion ' t2 , evolve(t) = evolve(t ) ∗ t + t ∗ evolve(t ),
t = t1 , t = t ∗ t .
74
2 Trees and forests
3 6 4
2 3
4 3
2 3
Figure 8 Evolution of trees up to order 5
Proof. The result for evolve(τ) is clear. It remains to show that evolve(t ∗ t ) = evolve(t ) ∗ t + t ∗ evolve(t ). The first term is the result of adding an additional child to each vertex in t in turn and the second term is the result of adding the additional child to the vertices of t .
Theorem 2.6H evolve(t) satisfies the recursion ' [t1 ], t = t1 , evolve(t) = m t ∗ τ + ∑i=1 [t1 , . . . , evolve(t i ), . . . , tm ], t = [t 1 , t 2 , . . . , t m ]. Proof. To prove the second option, note that an additional child is added in turn to the root, resulting in the term t ∗ τ or, for each i, to one one of the vertices in ti , resulting in the term [t1 , . . . , evolve(t i ), . . . , tm ]. The contributions of R. H. Merson In 1957 a remarkable paper [72] appeared. This described the evolved combinations of trees, as illustrated here in Figure 8. As shown in Merson’s work, and further discussed in Chapter 3 of the present volume, these expressions give the Taylor
2.6 Trees, partitions and evolutions
75
series for the flow of a differential equation and, when compared with the series for a numerical approximation, enable the order conditions for Runge–Kutta method to be written down. Labelling systems for forests Using tuples Let tuple denote the set of all n-tuples of positive integers, for n = 1, 2, 3, . . . . That is tuple = (1), (2), (3), . . . , (1, 1), (1, 2), . . . , (2, 1), . . . , (1, 1, 1), (1, 1, 2), . . . . If x ∈ tuple is an n-tuple, denote by x− the (n − 1)-tuple formed from x by omitting the final integer from x. Conventionally, if n = 1, write x− = ( ) ∈ tuple. If (V, E) is connected then it is a labelled tree. The connected components combine together to comprise a forest. If (V, E) = (0, / 0) / then this is the empty tree ∅. Definition 2.6I For V ∈ tuple, a finite subset, define the graph (V, E) generated by V as the labelled graph for which E = (x− , x), x, x− ∈ V .
Lemma 2.6J Every forest corresponds to a graph generated by some finite subset of tuple. Proof. It will be enough to restrict the result to trees and we prove this case by beta induction. For τ we can choose V = {(1)}. Assume the result has been proved for t and t , and that these correspond to V and V , respectively. Without loss of generality, assume that the roots are (1) and (m), where m is chosen so that no member of V is of the form (1, m). We note that every member of V begins with the sequence (m, . . . ). Form V by prepending 1 to each member of V so that the members of V are of the form (1, m, . . . ). The tree t ∗ t corresponds to V ∪V .
Convenient shortened form for tuples For convenience a tuple such as (1, 2, 1, 1) will sometimes be written as 1211, if no confusion is possible. We illustrate the formation of the beta product of two trees labelled using tuples, in a diagram:
76
2 Trees and forests 1211
111
121
11
1211 122 111 31
∗
12
11
=
122
121
13
12
3
1
131
1
The universal forest
11
12
13
1
21
22 2
23
31 1 31 2 31 3 32 1 32 2 32 3 33 1 33 2 33 3
21 1 21 2 21 3 22 1 22 2 22 3 23 1 23 2 23 3
11 1 11 2 11 3 12 1 12 2 12 3 13 1 13 2 13 3
We can construct a connected graph with a denumerable set of vertices V = tuple, rather than a finite subset. If we restrict the members of tuple to be based on {1, 2, 3}, rather than all the positive integers, we obtain a graph shown partially in this diagram.
31
32
33
3
2.7 Trees and stumps We now consider a generalization of trees, known as “stumps”. These can be regarded as modified trees with some leaves removed, but the edges from these leaves to their parents retained. In the examples given here, a white disc represents the absence of a vertex. The number of white discs is the “valency”. Note that trees are stumps with zero valency.
Valency 0 Valency 1 Valency 2
Right multiplication by one or more additional stumps implies grafting to open valency positions.
2.7 Trees and stumps
77
Products of stumps Given two stumps s, s , the product ss has a non-trivial product if s is not the trivial stump and s has valency at least 1; that is, it is not a tree, the product is formed by grafting the root of s to the rightmost open valency in s. Two examples of grafting illustrate the significance of stump orientations
= = If the s1 is a tree, no contraction takes place.
Atomic stumps An atomic stump is a graph of the form
Note that no more than two generations can be present. If m of the children of the root are represented by black discs and n are represented by white discs, then the atomic stump is denoted by smn . The reason for the designation “atomic” is that every tree can be written as the product of atoms. This is illustrated up to trees of order 4: = s00
= s30
= s10
= s11 s10
=
= s20
= s01 s20
=
= s01 s10 =
= s01 s01 s10 =
Isomeric trees In the factorization of trees into products of atoms the factors are written in a specific order with each factor operating on later factors. However, if we interpret the atoms just as symbols that can commute with each other, we get a new equivalence relation:
78
2 Trees and forests
Definition 2.7A Two trees are isomeric if their atomic factors are the same. Nothing interesting happens up to order 4 but, for order 5, we find that
= s11 s01 s10 ∼ s01 s11 s10 = It is a simple exercise to find all isomeries of any particular order but, as far as the author knows, this has not been done above order 6. For orders 5 and 6, the isomers are
= s11 s01 s10
= s01 s11 s10
= s02 s10 s01 s10
= s01 s02 s10 s10
= s11 s01 s20
= s01 s11 s20
= s21 s01 s10
= s01 s21 s10
= s11 s01 s01 s10
= s01 s11 s01 s10
= s01 s01 s11 s10
Exercise 26 Find two isomers of s21 s01 s01 s10 .
Isomers and Runge–Kutta order conditions In Chapter 5, Section 5.2 (p. 183), structures related to stumps and isomeric trees will play a special role in distinguishing between the order conditions for Runge–Kutta methods intended for scalar problems, as distinct from more general high-dimensional systems.
2.7 Trees and stumps
79
The algebra of uni-valent stumps Let S denote the semi-group under multiplication of stumps with valency 1. The multiplication table for this semi-group, restricted to stumps of orders 1 and 2, is shown on the left diagram below. Associativity is easy to verify. τ1
τ2 τ
τ1 τ1
τ1
τ1 τ1
τ1 τ2 τ
τ1 τ1 τ1
τ2 τ
τ2 ττ1 τ2 ττ2 τ τ2 ττ1 τ1
τ1 τ1 τ1 τ1 τ1 τ1 τ1 τ2 τ τ1 τ1 τ1 τ1
Exercise 27 Extend the right table to include stumps to order 3.
Sums over S By taking free sums of members of S, we can construct an algebra of linear operators. Because of the important applications to the analysis of Rosenbrock and related methods, and to applications to exponential integrators, we will focus on members of this algebra of the form a0 1 + a1 τ1 + · · · + an τ12 + · · · .
(2.7 a)
Let φ (z) = a0 + a1 z + a2 z2 + · · · , then we will conventionally write (2.7 a) as φ (τ1 ). Important examples in applications are φ (z) = (1 − γz)−1 , φ (z) =
1+ 12 z , 1− 12 z
φ (z) =
exp(z)−1 . z
Taking this further, we can give a meaning to expressions such as φ (τ1 )t = a0 t + a1 [t] + · · · + an [n t]n + · · ·
(2.7 b)
This topic will be considered again, in the context of B-series and numerical applications, in Chapter 4, Section 4.8 (p. 174).
80
2 Trees and forests
Sub-atomic stumps The special stumps s0,n = τn , with order 1, have a special role in Chapter 5, Section 5.2 (p. 183). It is possible to write these in terms of an incomplete use of the beta product. That is, t∗ is a stump in which an additional valency is assigned to the root of t. This would mean that (t∗)t = t ∗ t . Write ∗n for an iterated use of this notation and we have τ∗n = s0,n = τn .
(2.7 c)
Definition 2.7B The objects occurring in (2.7 c) are sub-atomic stumps. Note the identity smn = τ ∗m+n τ m , which allows any tree written as a product of atomic stumps to be further factorized into sub-atomic stumps.
2.8 Subtrees, supertrees and prunings A subtree, of a given tree, is a second tree embedded within the given tree, and sharing the same root. The vertices, and their connecting edges, cut off from the original tree to expose the subtree, comprise the offcut. Since the offcut is a collection of trees, they will be interpreted as a forest. If t is a subtree of t, then t is said to be a “supertree” of t . The words “pruning” and “offcut” will be used interchangeably. For the two given trees, t =
,
t=
,
(2.8 a)
t is a subtree of t, and t is a supertree of t , because t can be constructed by deleting some of the vertices and edges from t or, alternatively, t can be constructed from t by adding additional vertices and edges. The member of the forest space, written as t t , is formed from the collection of possible ways t can be “pruned” to form t . For example, for the trees given by (2.8 a), , t t = 2
2.8 Subtrees, supertrees and prunings
81
where the factor 2 is a consequence of the fact that t could equally well have been replaced by its mirror image. These simple ideas can lead to ambiguities and the possibility of duplications. For example, the tree t = [[τ][τ 2 ]], regarded as a labelled tree, can have the offcut removed in three ways to reveal the subtree t = [2 τ]2 . This can be illustrated by using the orientations of the edges of t to distinguish them, as shown in the diagram
case
cut =
t showing cuts
t showing orientation
offcut
(i) (ii) (iii)
In this diagram, cases (i) and (ii) give the same t and the same prunings. Hence, a complete list of these for a given t would need to contain a duplicate of this case. Further, case (iii) needs to be included separately even though they both have the same subtree t . We can deal with cases like this by reverting to the use of labelled trees. That is, we can use the (V, E, r) characterization of trees. For the present example, we would identify t with the triple t = (V, E, r) := ({a, b, c, d, e, f }, {(a, b), (b, c), (b, d), (a, e), (e, f )}, a), as in the diagram d c b a and the three cases of t = (V1 , E1 , r) and f = (V2 , E2 ) are f
e
(V1 , E1 , r) = ({a, b, c}, {(a, b), (b, c)}, a), (V2 , E2 ) = ({d, e, f }, {(e, f )}), (ii) (V1 , E1 , r) = ({a, b, d}, {(a, b), (b, d)}, a), (V2 , E2 ) = ({c, e, f }, {(e, f )}), (i)
(iii) (V1 , E1 , r) = ({a, e, f }, {(a, e), (e, f )}, a), (V2 , E2 ) = ({b, c, d}, {(b, c), (b, d)}). In specifying a subgraph (V1 , E1 ) of (V, E), it will be enough to specify only V1 because the vertex pairs which comprise E1 are the same as for E except for the deletion of any pair containing a member of V V1 ; hence the members of E1 do not need to be stated. In this example, we can write = 2
+
82
2 Trees and forests
Main definition
Definition 2.8A Consider a labelled tree t = (V, E, r). A subgraph defined by V1 , with the property that parent(v) ∈ V1 whenever v ∈ V1 , is a subtree t of t. If V1 ⊂ V then t is a proper subtree. The corresponding offcut is the forest formed from V V1 . In using subtrees in later discussions, we will not give explicit labels for the elements of V , but we will take the existence of labels into account. Notations Notationally, t ≤ t will denote “t is a subtree of t” and t < t will denote “t is a proper subtree of t”; Also ∅ < t ≤ t and ∅ < t < t will have their obvious meanings. The offcuts corresponding to V V1 will be written t t . Using the Sweedler notation [89] (Underwood, 2011), adapted from co-algebra theory, a “pruning” consisting of an offcut-subtree pair will be written (t t ) ⊗ t . Definition 2.8B The function Δ (t) denotes the combination, with integer weights to indicate repetitions, of all prunings of t. That is, Δ (t) = t ⊗ ∅ + ∑ (t t ) ⊗ t . t ≤t
For example, = 1⊗ Δ +2
+ ⊗ ⊗ +
+2 ⊗ ⊗ +
+3 ⊗ ⊗ +
+ ⊗ ⊗ +
⊗
+
⊗ +
⊗∅
or, using the standard numbering established in Table 6 (p. 63), Δ (t27 ) = 1 ⊗ t27 + t1 ⊗ t11 + 2t1 ⊗ t13 + 3t21 ⊗ t6 + t2 ⊗ t7 + t31 ⊗ t3 + 2t1 t2 ⊗ t4 + t3 ⊗ t4 + t21 t2 ⊗ t2 + t1 t3 ⊗ t2 + t2 t3 ⊗ t1 + t27 ⊗ ∅. For later reference, Δ (t), for |t| ≤ 5, is given in Table 9. Subtrees in Polish notation If ∅ < t ≤ t, write
t = τm1 τm2 · · · τmk ,
k = |t |.
then we can write a representation of t using the operator , introduced in Section 2.2 (p. 47). This result is given without proof.
2.8 Subtrees, supertrees and prunings
83
Table 9 Δ (t), for |t| ≤ 5 Δ (t)
|t|
t
0
∅ 1⊗∅
1
t1 1 ⊗ t1 + t1 ⊗ ∅
2
t2 1 ⊗ t2 + t1 ⊗ t1 + t2 ⊗ ∅
3
t3 1 ⊗ t3 + 2t1 ⊗ t2 + t21 ⊗ t1 + t3 ⊗ ∅
3
t4 1 ⊗ t4 + t1 ⊗ t2 + t2 ⊗ t1 + t4 ⊗ ∅
4
t5 1 ⊗ t5 + 3t1 ⊗ t3 + 3t21 ⊗ t2 + t31 ⊗ t1 + t5 ⊗ ∅
4
t6 1 ⊗ t6 + t1 ⊗ t3 + t1 ⊗ t4 + t21 ⊗ t2 + t2 ⊗ t2 + t1 t2 ⊗ t1 + t6 ⊗ ∅
4
t7 1 ⊗ t7 + 2t1 ⊗ t4 + t21 ⊗ t2 + t3 ⊗ t1 + t7 ⊗ ∅
4
t8 1 ⊗ t8 + t1 ⊗ t4 + t2 ⊗ t2 + t4 ⊗ t1 + t8 ⊗ ∅
5
t9 1 ⊗ t9 + 4t1 ⊗ t5 + 6t21 ⊗ t3 + 4t31 ⊗ t2 + t41 ⊗ t1 + t9 ⊗ ∅
5 t10 1⊗t10 +2t1 ⊗t6 +t1 ⊗t5 +2t21 ⊗t3 +t21 ⊗t4 +t2 ⊗t3 +t31 ⊗t2 +2t1 t2 ⊗t2 +t21 t2 ⊗t1 +t10 ⊗∅ 5 t11 1 ⊗ t11 + 2t1 ⊗ t6 + t1 ⊗ t7 + t21 ⊗ t3 + 2t21 ⊗ t4 + t31 ⊗ t2 + t3 ⊗ t2 + t1 t3 ⊗ t1 + t11 ⊗ ∅ 5 t12 1 ⊗ t12 + t1 ⊗ t6 + t1 ⊗ t8 + t21 ⊗ t4 + t2 ⊗ t3 + t1 t2 ⊗ t2 + t4 ⊗ t2 + t1 t4 ⊗ t1 + t12 ⊗ ∅ 5 t13 1 ⊗ t13 + 2t1 ⊗ t6 + t21 ⊗ t3 + 2t2 ⊗ t4 + 2t1 t2 ⊗ t2 + t22 ⊗ t1 + t13 ⊗ ∅ 5 t14 1 ⊗ t14 + 3t1 ⊗ t7 + 3t21 ⊗ t4 + t31 ⊗ t2 + t5 ⊗ t1 + t14 ⊗ ∅ 5 t15 1 ⊗ t15 + t1 ⊗ t7 + t1 ⊗ t8 + t21 ⊗ t4 + t2 ⊗ t4 + t1 t2 ⊗ t2 + t6 ⊗ t1 + t15 ⊗ ∅ 5 t16 1 ⊗ t16 + 2t1 ⊗ t8 + t21 ⊗ t4 + t3 ⊗ t2 + t7 ⊗ t1 + t16 ⊗ ∅ 5 t17 1 ⊗ t17 + t1 ⊗ t8 + t2 ⊗ t4 + t4 ⊗ t2 + t8 ⊗ t1 + t17 ⊗ ∅
Theorem 2.8C Any tree t ≥ t can be written in the form t = (τm1 f1 )(τm2 f2 ) · · · (τmk fn ), where f 1 , f 2 , . . . , f k ∈ F. For example, consider 4
t =
2
4 3
2
≤
1 We have
3
= t.
(2.8 b)
1
t = (τ2 ∗ τ)(τ ∗ τ1 τ)(τ1 ∗ τ)(τ ∗ τ 2 ), t = τ2 ττ1 τ, f 1 f 2 f 3 f 4 = τ 4 τ1 τ =
(2.8 c) .
(2.8 d)
84
2 Trees and forests
Note that in (2.8 b), the labelled vertices in t and t, correspond to the the four factors τ2 , τ, τ1 , τ in (2.8 c). To convert t to t, additional trees are attached as follows: to label 1: an additional tree τ, to label 2: an additional tree τ1 τ, to label 3: an additional tree τ, to label 4: two additional trees, each of them τ. Combining these additional trees together, we obtain (2.8 d) as the forest t t = (τ)(τ1 τ)(τ 2 )(τ) = ττ1 ττ 2 τ = τ 4 τ1 τ. Starting with the example case we have already considered, the possible choices of f1 , f 2 , f 3 , f 4 , to convert t to t, are f1 = ,
f2 = ,
f3 = ,
f1 = ,
f2 = ,
f3 =
f1 = ,
f2 =
,
f4 = ,
f 2 = 1,
f 3 = 1,
f4 = ,
f1 = ,
f 2 = 1,
f3 = ,
f4 =
f1 = ,
f 2 = 1,
f3 =
,
,
,
f 4 = 1,
f3 = 1,
f1 =
,
f4 =
,
f 4 = 1.
Using the numbered tree notation, we have t t = t41 t2 + t1 t2 t3 + t31 t3 + t1 t11 + t1 t2 t4 + t3 t4 . Subtrees, supertrees and prunings have their principal applications in Chapter 3,Section 3.9 (p. 133).
Recursions The two tree-building mechanisms that have been introduced: t 1 , t 2 , . . . , t n → [t 1 t2 · · · t n ], t 1 , t 2 → t 1 ∗ t2 , have been used to define and evaluate a number of functions recursively. We will now apply this approach to Δ . Recursion for Δ using the beta-product For a given tree t, Δ (t) is a combination of terms of the form f ⊗ t with the proviso that for the term for which t = ∅, f = t. We need to extend the meaning of the beta-product to the individual terms of these types and extend the meaning further by treating the product as a bi-linear operator.
2.8 Subtrees, supertrees and prunings
85
Theorem 2.8D Given trees t1 , t2 , Δ (t 1 ∗ t2 ) = Δ (t 1 ) ∗ Δ (t2 ), where the product on the right is treated bi-linearly and specific term by term products are evaluated according to the rules (f1 ⊗ t1 ) ∗ (f2 ⊗ t2 ) = (f 1 f2 ) ⊗ (t1 ∗ t2 ), (t 1 ⊗ ∅) ∗ (f2 ⊗ t2 ) = 0,
(f 1 ⊗ t1 ) ∗ (t2 ⊗ ∅) = (f 1 t2 ) ⊗ t1 ,
(t 1 ⊗ ∅) ∗ (t2 ⊗ ∅) = (t 1 ∗ t2 ) ⊗ ∅.
Proof. If t = t 1 ⊗ t2 , then the prunings of t consist of terms of the form (t 1 t 1 )(t2 t 2 ) ⊗ (t1 ∗ t2 ), corresponding to t1 ≤ t1 , t 2 ≤ t2 , where t1 > ∅, together with (t1 ∗ t2 ) ⊗ ∅, corresponding to t = ∅. Now calculate Δ (t 1 ∗ t2 ) =
∑
∑ (t1
t 1 )(t2 t 2 ) ⊗ (t1 ∗ t2 ) + (t1 ∗ t2 ) ⊗ ∅
∅ 0, rewrite (4.6 c), (4.6 e) in the form ωi = ∏ b1 ψ j A2 dot j∈E(i) ω j , i ∈ Vk , i = r, Φk (t) =
j∈D(i)
b1 ψ j
∏
b2 dot j∈E(r) ω j , r ∈ Vk ,
j∈D(r)
so that Φk (t) =
s
∏ ∏
k (t) b1 ψ j Φ
j=1 j∈D(i)
k (t), = Ψ (t t )Φ k (t) is defined by the recursion where Φ ωi = A2 dot j∈E(i) ω j ,
i ∈ Vk ,
k (t) = b2 dot j∈E(r) ω j , Φ
r ∈ Vk ,
i = r,
k (t) = Ω (t ) and and it follows that Φ n
n
k=1
k=1
k (t) = Ψ (t) + ∑ Ψ (t t )Ω( t ). Φ(t) = Φ0 (t) + ∑ Ψ (t t )Φ
Comments on Theorem 4.6A For a specific example of t, the choices of V are illustrated in the upper line of (4.6 f). The 10 possible non-empty choices of V are given, with vertices shown as . The
4.7 The B-group and subgroups
165
remaining vertices of t are shown as . In addition to 1, 2, . . . , 10, the special case headed 0 corresponds to t = ∅. On the lower line of (4.6 f), t is shown together with t t . 0
1
2
3
4
5
6
7
8
9
10
(4.6 f)
Recall Theorem 3.9C (p. 139), which we restate Theorem 4.6B (Reprise of Theorem 3.9C)
Let a ∈ B, b ∈ B∗ . Then
(ab)(∅) = b(∅), (ab)(t) = b(∅)a(t) + ∑ b(t )a(t t ), t ≤t
t ∈ T.
(4.6 g)
Proof. Let p be a positive integer. By Theorem 3.8B (p. 130), there exist Runge–Kutta method tableaux M1 , M2 , with B-series coefficients, Ψ , Ω , respectively, such that Ψ = a + O p+1 , Ω = b + O p+1 . For M = M1 M2 , let Φ be the corresponding B-Series so that Φ = Ψ Ω . Use Theorem 4.6A so that (4.6 g) holds to order p. Since p is arbitrary, the result follows.
4.7 The B-group and subgroups Reinterpreting Theorem 4.6B in the case that b ∈ B, we obtain a binary operation on this set given by (ab)(t) = a(t) + ∑ b(t )a(t t ), t ≤t
t ∈ T,
(4.7 a)
or, written another way, as (ab)(t) =
∑
∅≤t ≤t
b(t )a(t t ),
t ∈ T,
(4.7 b)
Theorem 4.7A The set B equipped with the binary operation (a, b) → ab, given by (4.7 b), is a group.
166
4 Algebraic analysis and integration methods
Proof. We verify the three group axioms. (i) B is associative because (a(bc))(t) and ((ab)c)(t) are each equal to a(t t )b(t t )c(t )
∑
∅≤t ≤t ≤t
(ii) The identity element exists, given by 1(∅) = 1,
1(t) = 0,
t∈T
(iii) The inverse exists. For a ∈ B, a−1 = x is defined recursively by x(t) = −
∑
x(t )a(t t ),
t ∈ T,
a(t )x(t t ),
t ∈ T,
∅≤t 1, C(q) is impossible for explicit Runge– Kutta methods but it is possible if some limited form of implicitness is allowed, such as for the fourth order method 0 1 2
1 4
1 4
1
0
1
1 6
2 3
. 1 6
4.7 The B-group and subgroups
173
Explicit methods with order greater than 4 have stage order 2 or higher, except that this does not apply to the second stage. It is necessary to compensate for this anomaly by imposing additional conditions, such as b2 = ∑i bi ai2 = ∑i bi ci ai2 = ∑ bi ai j a j2 = 0. The D subgroups These subgroups have several important roles in numerical analysis. First, the subgroup D1 has a historic connection with the work of Kutta [66]. Every fourth order Runge–Kutta method with four stages is a member of D1 . Although this is not generally true for higher order explicit methods, it is standard practice, in searching for such methods, to consider only methods belonging this subgroup. Thus it plays the role of a “simplfying assumption”. Secondly, Ds contains the s-stage Gauss method. Finally, D defines canonical Runge–Kutta, which play a central role in the solution of Hamiltonian problems, as we will see in Chapter 7. Definition 4.7M D p ⊂ B, D pq ⊂ B and D ⊂ B are defined by a ∈ D p if a ∈ D pq if a∈D
if
a(t ∗ t ) + a(t ∗ t) = a(t)a(t ),
a(t ∗ t ) + a(t ∗ t) = a(t)a(t ), a(t ∗ t ) + a(t ∗ t) = a(t)a(t ),
t, t ∈ T,
|t| ≤ p,
|t| ≤ p, |t | ≤ q,
t, t ∈ T,
t, t ∈ T.
Theorem 4.7N Each of D pq , D p and D is a subgroup of B. Proof. It will be sufficient to prove the result in the case of D pq . Assume that a, b ∈ D pq , and evaluate (ab)(t ∗ t ): (ab)(t ∗ t ) = a(t ∗ t ) +
∑
x≤t,x ≤t
a(t x )a(t x )b( x ∗ x ) + ∑ a(t x )a(t )b( x ). x≤ t
Add the corresponding expression, with t and t interchanged, and we find (ab)(t ∗ t ) + (ab)(t ∗ t) = a(t)a(t ) +
∑
a(t x )a(t x )b( x )b( x )
x≤t,x ≤t
+ ∑ a(t x )a(t )b( x ) + x≤t
∑
a(t x )a(t)b( x )
x ≤t
= a(t) + ∑ a(t x )b( x ) a(t ) + x≤t
∑ a(t
x )b(x )
x ≤t
= (ab)(t)(ab)(t ). One of the aims of this chapter is to generalize these constructions to the full set of trees on which B is defined. A second aim is to interrelate B with the generalizations
174
4 Algebraic analysis and integration methods
of Runge–Kutta methods introduced in [14] as “integration methods” in which the usual A in (A, b, c) is replaced by a linear operator and b is replaced by a linear functional in a space of functions on a possibly infinite index set. For example, the index set {1, 2, . . . , s} could be replaced by the interval [0, 1]. The integration methods would thus include not only Runge–Kutta methods, but also the Picard construction Y (x0 + hξ ) = y(x0 ) + h y1 = y(x0 ) + h
ξ 0
f Y (x0 + hξ ) d ξ ,
1 0
f Y (x0 + hξ ) d ξ ,
where we see that A and bT correspond respectively to the operations ϕ → ϕ →
0
ϕ,
1 0
ϕ.
Discrete gradient methods also fall into the definition of integration methods.
4.8 Linear operators on B∗ and B0 B and its subspaces recalled
In B-series analysis, mappings defined in terms of the triple (y0 , f , h) are represented by (Bh y0 )a for a ∈ B∗ . The affine subspace B has a special role as the counterpart to central mappings which are within O(h) of id h . The members of B act as multipliers operating on the linear space B∗ and are typified by Runge–Kutta mappings. The linear subspace B∗0 corresponds to the space spanned by slopeh ◦ C h , where C h is a central mapping. This means that B∗0 is the span of BD. An extended set of linear operators on B∗0 In Chapter 2, Section 2.7 (p. 79), the set S of uni-valent stumps was introduced. In the case of τ1 ∈ S, power-series φ (τ1 ) = a0 1 + a1 τ1 + a2 τ12 + · · · were introduced. We will consider B-series ramifications of these expressions. Motivation By introducing linear operators, such as h f (y0 ), into the computation, a Runge– Kutta method can be converted to a Rosenbrock method or some other generalization. For example, the method Y1 = y0 ,
F1 = f (Y1 ),
(4.8 a)
Y2 = y0 + a21 hF1 ,
F2 = f (Y1 ),
(4.8 b)
L = h f (y0 + g1 hF1 + g2 hF2 ),
(4.8 c)
4.8 Linear operators on B∗ and B0
y1 = y0 + b1 hF1 + d1 hLF1 + b2 hF2 + d2 hLF2 ,
175
(4.8 d)
contains additional flexibility compared with a Runge–Kutta method. If (4.8 d) is replaced by y1 = y0 + b1 hF1 + d1 hφ (L)F1 + b2 hF2 + d2 hφ (L)F2 , numerical properties can be enhanced. We aim to include the use of L within the B-series formulation. The operator J Corresponding to h f (y0 ), we introduce the linear function J : B0 → B0 , satisfying Bh Jb = h f (y0 )Bh b, where b(∅) = 0. By evaluating term by term, we see that (Jb)([t]) = b(t) for t ∈ T, with (Jb)(t ) = 0 if t cannot be written as t = [t].We also need to find the B-series for operations of the form h f (y1 ) = h f (Bh a)y0 so that we can handle expressions such as L in (4.8 c). Theorem 4.8A
h f (Bh a)y0 (Bh b)y0 = BBh (aJ)(a−1 b) y0 .
(4.8 e)
b so that (4.8 e) becomes Proof. Let y1 = (Bh a)y0 , b = a b)y1 = BBh (J)( b) y1 . h f y1 (Bh b. which is (4.8 e) after the substitutions y0 → y1 , b →
Summary of Chapter 4 and the way forward Integration methods were introduced as a generalization of Runge–Kutta methods in which the index set I = {1, 2, . . . , s} is replaced by a more complicated alternative. Equivalence and reducibility of methods, with an emphasis on the Runge–Kutta case, were considered. Compositions of methods were introduced leading to the composition theorem for integration methods. A number of subgroups of B were introduced, many of which have a relationship with simplifying assumptions for Runge–Kutta methods.
The way forward Subgroups of B are used in the construction of the working numerical methods considered in Chapters 5 and 6. Continuous Runge–Kutta methods have, as a natural application, the energy-preserving methods of Chapter 7.
176
4 Algebraic analysis and integration methods
Teaching and study notes Possible supplementary reading includes Butcher, J.C. An algebraic theory of integration methods (1972) [14] Butcher, J.C. Numerical Methods for Ordinary Differential Equations (2016) [20] Hairer, E., Nørsett, S.P. and Wanner, G. Solving Ordinary Differential Equations I: Nonstiff Problems (1993) [50] Hairer, E. and Wanner, G. Multistep-multistage-multiderivative methods for ordinary differential equations (1973) [51] Hairer, E. and Wanner, G. On the Butcher group and general multi-value method (1974) [52] Projects Project 12 Develop the topic of reducibility in Section 4.4 further so that pre-reduced methods become fully reduced by eliminating unnecessary stages. Project 13
Investigate the conditions for order 4 for the method given by (4.8 a) – (4.8 d),
Chapter 5
B-series and Runge–Kutta methods
5.1 Introduction The aim of this chapter is to continue a selected study of Runge–Kutta methods. While the emphasis will necessarily be on the application of the B-series approach to order questions, we will also attempt to carry out a traditional analysis following in the footsteps of the pioneers of these methods. This will be based on the scalar test equation y (x) = f (x, y). In modern computing there is little interest in numerical methods which are applicable only to scalar problem and it comes as a cautionary tale that the derivation of these methods does not automatically yield a method that works more widely. This will be illustrated by deriving a family of methods for which order 5 is achieved for scalar problems, whereas only the order 4 conditions hold for a vector problem. For the derivation of practical methods, we need to use the key result: Theorem 5.1A (Reprise of Theorem 3.6C (p. 127)) For an initial value problem of arbitrary dimension, a Runge–Kutta method (A, bT , c) has order p if and only if 1 Φ(t) = t! ,
(5.1 a)
for all trees such that |t| ≤ p.
Chapter outline The theory of order for scalar problems is presented in Section 5.2. The stability of Runge–Kutta methods is surveyed in Section 5.3, followed by the derivation of explicit methods in Section 5.4. Order barriers will be introduced in Section 5.5 through the simplest case (that order p = s is impossible for explicit methods with p > 4). This is followed in Section 5.6 by a consideration of implicit methods. The generalization to effective (or conjugate) order is surveyed in Section 5.7. © Springer Nature Switzerland AG 2021 J. C. Butcher, B-Series, Springer Series in Computational Mathematics 55, https://doi.org/10.1007/978-3-030-70956-3_5
177
178
5 B-series and Runge–Kutta methods
5.2 Order analysis for scalar problems In contrast to the B-series approach to order conditions, it is also instructive to explore order conditions in the same way as the early pioneers. Hence, we will review the work in [82] (Runge, 1895), [56] (Heun, 1900), [66] (Kutta, 1901), who took as their starting point the scalar initial value problem y (x) = f (x, y),
y(x0 ) = y0 .
(5.2 a)
In this derivation of the order conditions, ∂x f := ∂ f /∂ x, ∂y f := ∂ f /∂ y, with similar notations for higher partial derivatives. We start with (5.2 a) and find the second derivative of y by the chain rule y = ∂x f + ( ∂y f ) f . Similarly, we find the third derivative 2
2
y(3) = ( ∂x f + ( ∂x ∂y f ) f ) + ∂y f ( ∂x f + ( ∂y f ) f ) + ( ∂x ∂y f ) f + ( ∂y f ) f 2 2
2
= ∂x f + 2( ∂x ∂y f ) f + ( ∂y f ) f 2 + ( ∂x f ∂y f ) f + ( ∂y f )2 f and carry on to find fourth and higher derivatives. By evaluating the y(n) at x = x0 , we find the Taylor expansions to use in (5.2 a). A more complicated calculation leads us to the detailed series (5.2 d) in the case of any particular Runge–Kutta method and hence to the determination of its order. Details of this line of enquiry will be followed below. The greatest achievement in this line of work was given in [59] (Hut’a, 1956), where sixth order methods involving 8 stages were derived. In all derivations of new methods up to the publication of this tour de force, a tacit assumption is made. This is that a method derived to have a specific order for a general scalar problem will have this same order for a coupled system of scalar problems; that is, it will have this order for a problem with N > 1. This unproven assumption is untrue and it becomes necessary to carry out the order analysis in a multi-dimensional setting.
Systematic derivation of Taylor series The evaluation of y(n) , n = 1, 2, . . . , 5, will now be carried out in a systematic manner. Let m m m−i n+i (5.2 b) Dmn = ∑ ( ∂x ∂y f ) f i . i i=0 We will also write D mn to denote Dmn evaluated at (x0 , y0 ).
5.2 Order analysis for scalar problems
179
Lemma 5.2A d d x Dmn
= Dm+1,n + mDm−1,n+1 D10 .
(5.2 c)
Proof. d m m m−i n+i ∑ i ( ∂x ∂y f ) f i d x i=0 m m m m m−i+1 n+i m−i n+i+1 = ∑ f ) f i+1 ( ∂x ( ∂x ∂y ∂y f ) f i + ∑ i i i=0 i=0 m m m−i n+i +∑ i( ∂x f )( ∂x ∂y ∂y f ) f i−1 i i=0 m+1 m m m−i+1 n+i = ∑ ( ∂y f ) f i + ∂x i i − 1 i=0 m m! m−i n+i +∑ i ( ∂x f )( ∂x ∂y ∂y f ) f i−1 i!(m − i)! i=0 m+1 m−1 m+1 m−1 m−i+1 n+i m−i−1 n+1+i = ∑ ( ∂x ( ∂x f )( ∂x ∂y f ) f i + m ∑ ∂y ∂y f ) f i i i i=0 i=0 = Dm+1,n + mDm−1,n+1 D10 .
Using Lemma 5.2A, we find in turn, y = D00 , y = D10 , y = D20 + D01 D10 , y(4) = D30 + 2D11 D10 + D11 D10 + D01 (D20 + D01 D10 ), = D30 + 3D11 D10 + D01 D20 + D201 D10 , y
(5)
(5.2 d)
= D40 + 3D21 D10 + 3(D21 + D02 D10 )D10 + 3D11 (D20 +D01 D10 ), +D11 D20+D01 (D30+2D11 D10 )+2D01 D11 D10+D201 (D20+D01 D10 ), = D40 + 6D21 D10 + 3D02 D10 D10 + 4D11 D20 + 7D11 D01 D10 , + D01 D30 + D201 D20 + D301 D10 .
To find the order conditions for a Runge–Kutta method, up to order 5, we need to systematically find the Taylor series for the stages, and finally for the output. In this analysis, we will assume that ∑sj=1 ai j = ci for all stages. For the stages it will be sufficient to work only to order 4 so that the scaled stage derivatives will include h5 terms.
180
5 B-series and Runge–Kutta methods
As a step towards finding the Taylor expansions of the stages and the output, we need to find the Taylor series for h f (Y ), for a given series Y = y0 + · · · . The following result does this for an arbitrary weighted series using the terms in (5.2 d).
Lemma 5.2B If D00 + a2 h2 D 10 + a3 h3 21 D 20 + a4 h3 D 01 D 10 Y = y0 + a1 hD + a5 h4 61 D 30 + a6 h4 D 11 D 10 + a7 h4 21 D 01 D 20 + a8 h4 D 201 D 10 + O(h5 ), then h f (x0 + ha1 ,Y ) = hT1 + h2 T2 + h3 T3 + h4 T4 + h5 T5 + O(h6 ), where T1 = D 00 , T2 = a1 D10 , T3 = 12 a21 D 20 + a2 D 01 D 10 , T4 = 16 a31 D 30 + a1 a2 D 11 D 10 + 12 a3 D 01 D 20 + a4 D 201 D 10 , 1 4 T5 = 24 a1 D 40 + 12 a21 a2 D 21 D 10 + a1 a3 D 11 D 20 + a1 a4 + a6 D 11 D 01 D 10 + 12 a22 D 02 D 210 + 16 a5 D 30 D 01 + 12 a7 D 201 D 20 + a8 D 301 D 10 . k
m
Proof. Throughout this proof, an expression of the form ∂x ∂y f is assumed to have been evaluated at (x0 , y0 ). Evaluate T1 , T2 , T3 , T4 : T1 = f (x0 , y0 ) = D00 , T2 = a1 ∂x f + a1 ( ∂y f ) f = a1 D 10 , 2
2
D00 + 12 a21 ( ∂y f )D D200 + a2 ( ∂y f )D D10 T3 = 12 a21 ∂x f + a21 ( ∂x ∂y )D = 12 a21 D 20 + a2 D 01 D 10 , 3
2
2
3
D10 + 12 a31 ( ∂x ∂y f )D D210 + 16 a31 ∂y f D 310 T4 = 16 a31 ∂x f + 12 a31 ( ∂x ∂y f )D 2
D10 + a1 a2 ( ∂y f )D D10 D 01 + a1 a2 ( ∂x ∂y f )D =
D20 + a4 ( ∂y f )D D01 D 10 + a3 ( ∂y f )D 2 1 3 6 a1 D 30 + a1 a2 D 11 D 10 + a3 D 01 D 20 + a4 D 01 D 10 .
The evaluation of the terms in T5 is similar and is omitted except, as examples, the terms in a1 a4 and a6 , which can be found from the simplified expression h f (x0 + a1 h, y0 + ha1 D 00 + h3 a4 D 01 D 10 + h5 a6 D 11 D 10 ). The two example terms are
5.2 Order analysis for scalar problems
181
Table 16 Data for Theorem 5.2C p
σ
T
φ
e
1
1
D 00
∑ bi
1
2
1
D 10
∑ bi ci
1 2
2
D 20
∑ bi c2i i
1 3
1
D 01 D 10
∑ bi ai j c j
1 6
6
D 30
∑ bi c3i
1 4
1
D 11 D 10
∑ bi ci ai j c j
1 8
2
D 01 D 20
∑ bi ai j c2j
1 12
1
D 201 D 10
∑ bi ai j a jk ck
1 24
24
D 40
∑ bi c4i
1 5
2
D 21 D 10
∑ bi c2i ai j c j
1 10
2
D 11 D 20
∑ bi ci ai j c2j
1 15
1
D 11 D 01 D 10
∑ bi (ci + c j )ai j a jk ck
7 120
2
D 02 D 210
∑ bi ai j c j aik ck
1 20
6
D 01 D 30
∑ bi ai j c33
1 20
2
D 201 D 20
∑ bi ai j a jk c2k
1 60
1
D 301 D 10
∑ bi ai j a jk ak c
1 120
t
3
4
5
2 a1 a4 h5 ∂x ∂y f D 01 D 10 + ∂y f f D 01 D 10 = h5 a1 a4 D 10 D 11 D 01 , and h5 a6 ( ∂y f D 11 D 10 ) = h5 a6 D 10 D 11 D 01 , which combine to give the single term D10 D 11 D 01 . h5 (a1 a4 + a6 )D For the stage values of a Runge–Kutta method, we have
182
5 B-series and Runge–Kutta methods s
Yi = y0 + ∑ ai j h f (x0 + hc j ,Y j ) j=1
= y0 + hci D 00 + O(h2 ), and then, to one further order, s
Yi = y0 + ∑ ai j h f (x0 + hc j , y0 + hc j D 00 ) + O(h3 ) j=1
= y0 + hci D 00 + h2 ∑ ai j c j D 10 + O(h3 ). j
A similar expression can be written down for the output from a step y1 = y0 + h ∑ bi D 00 + h2 ∑ bi ci D 10 + O(h3 ). i
i
A comparison with the exact solution, y0 + hy (x0 ) + 12 h2 y (x0 ) + O(h3 ), evaluated using (5.2 d) gives, as second order conditions,
∑ bi D 00 = D 00 , i
∑ bi ci D 10 = 12 D 10 . i
Theorem 5.2C In the statement of this result, the quantities p, T , σ , φ are given in Table 16 1. The Taylor expansion for the exact solution to the initial value problem y (x) = f (x, y),
y(x0 ) = y0 ,
(5.2 e)
to within O(h6 ), is y0 plus the sum of terms of the form e h p σ −1 T . 2. The Taylor expansion for the numerical solution y1 to (5.2 e), using a Runge– Kutta method (A, bT , c), to within O(h6 ), is y0 plus the sum of terms of the form φ h p σ −1 T . 3. The conditions to order 5 for the solution of (5.2 e) using (A, bT , c) are the equations of the form φ = e.
5.2 Order analysis for scalar problems
183
This analysis can be taken further in a straightforward and systematic way and is summarized, as far as order 5, in Theorem 5.2C. This theorem, for which the detailed proof is omitted, has to be read together with Table 16 (p. 181). To obtain a convenient comparison with the non-scalar case, the corresponding t, or more than a single t, in Theorem 5.1A (p. 177), are also shown in this table. Relation with isomeric trees Isomeric trees, introduced in Section 2.7 (p. 77), involve quantities smn which correspond to D mn in the present section. The isomers occur when the s factors are formally allowed to commute. Commutation of the D factors in the order analysis actually occurs because these are scalar quantities. However, if the same analysis were carried out in the RN setting, commutation would not occur because the D factors then become vectors, linear operators and multilinear operators. Hence, the trees comprising the isomeric classes would yield independent order conditions. In particular, for order 5, the only non-trivial class is D11 D 01 D 10 , D 01 D 11 D 10 } = {F(t12 ), F(t15 )}. {D
(5.2 f)
These give separate order conditions in the vector case because D 11 and D 01 no longer commute. This phenomenon will be illustrated by the construction of a method with ambiguous order.
Derivation of an ambiguous method We will now construct a method which has order 5 for a scalar problem but only order 4 for a vector based problem. This means that all the conditions Φ(ti ) = 1/ti ! are satisfied for i = 1, 2, . . . , 17 except for i = 12 and i = 15, for which the corresponding order conditions are replaced by Φ(t12 ) + Φ(t15 ) =
1 7 1 + = . t12 ! t15 ! 120
(5.2 g)
For convenience, we will refer to the order conditions as (O1), (O2), . . . , (O17), where (Oi) is the equation (Oi) Φ(ti ) = 1/ti !. That is, bT 1 = 1,
(O1)
1 2, 1 3,
(O2)
T
b c= bT c2 = .. .. . . bT A3 c =
1 120 .
(O3)
(O17)
184
5 B-series and Runge–Kutta methods
In construcing this method, it is convenient to introduce a vector d T defined as d T = bT A + bTC − bT , where C = diag(c), which satisfies the property d T cn−1 = 0,
n = 1, 2, 3, 4,
(5.2 h)
because d T cn−1 = bT Acn−1 + bT cn − bT cn−1 = 1/n(n + 1) + 1/(n + 1) − 1/n = 0. In the method to be constructed, some assumptions will be made. These are i−1
∑ ai j c j = 12 c2i ,
i = 2, 3,
(5.2 i)
j=1
c6 = 1,
(5.2 j)
b2 = b3 = 0.
(5.2 k)
From (5.2 j), (5.2 k), (O1), (O2), (O3), (O5), (O9), it follows that 6
∑ bi ci (ci − c4 )(ci − c5 )(1 − ci ) = 0,
implying and hence
i=1 1 120 (20c4 c5 − 10(c4 + c5 ) + 4) = 0 1 ( 12 − c4 )(c5 − 12 ) = 20 .
7 Choose the convenient values c4 = 14 , c5 = 10 together with c2 = value of b, from (O1), (O2), (O3), (O5), and d from (5.2 h) are
250 5 1 32 b= , 0 0 14 81 567 54
125 d = t 1 7 79 − 112 0 , 27 − 27
1 2
and c3 = 1. The
where t is a parameter, assumed to be non-zero. The third row of A can be found from d2 (− 12 c22 ) + d3 (a32 c2 − 12 c23 ) = 0, (5.2 l) because, from (O3) – (O8), d T (Ac − 12 c2 ) = bT A2 c + bTCAc − bT Ac − 12 bT Ac2 − 12 bT c3 + 12 bT c2 =
1 24
+ 18
− 16
1 − 24
− 18
+ 16
= 0.
From (5.2 l), it is found that a32 = 13 4 . The values of a42 , and a52 can be written in terms of the other elements of rows 4 and 5 of A and row 6 can be found in terms of the other rows. There are now four free parameters remaining: a43 , a53 , a54 and t, and four conditions that are not automatically satisfied. These (O10), O16), (O17) and (5.2 g). The solutions are given in the complete tableau, with t = −3/140,
5.2 Order analysis for scalar problems
185
0 1 2
1 4
1 2 − 94 9 64
7 10
63 625
1
1
13 4 5 32 259 2500
139 − 27 50 − 50 1 14
0
3 − 64
(5.2 m)
231 2500
252 625
− 21 50
56 25
5 2
0
32 81
250 567
5 54
Numerical tests on the ambiguous method
For these tests we use the test problem (1.3 c) (p. 11), written in two alternative formulations, one scalar and one vector-valued, using matching initial and final values. Let x 0 1 , t0 = exp( 10 π), x0 = t0 sin ln(t0 ) , y0 = t0 cos ln(t0 ) , z0 = y0 x 1 . t1 = exp( 12 π), x1 = t1 sin ln(t1 ) , y1 = t1 cos ln(t1 ) , z1 = y1 The scalar formulation, as an initial value problem with a specified output value is dy y−x = , dx y+x
y(x0 ) = y0 ,
and the vector-valued formulation is z1 z2 + z1 d −1 = z , dt z2 z2 − z1
y1 = y(x1 )
z(t0 ) = z0 ,
z1 = z(t1 ).
Numerical tests were made for each problem on the intervals [x0 , x1 ] and [t0 ,t1 ] respectively, using a sequence of stepsizes based on a total of n = 5, 10, 20, . . . , 5 × 26 steps, in each case. Shown below are the absolute value, or the norm in the vector case, of the error at the output point. These are denoted by errn . Also shown are errn/2 /errn . For fourth order behaviour, these ratios should be approximately 16 and for fifth order, they should be approximately 32. The results are given in the display
186
5 B-series and Runge–Kutta methods
errn/2 /errn
n
errn
5 × 20
4.3170 × 10−4
5 × 21
1.0906 × 10−5
5 × 23
2.8486 × 10−7
5 × 24
8.3007 × 10−9
5 × 25
2.5422 × 10−10
5 × 26
7.8960 × 10−12
errn
errn/2 /errn
9.4865 × 10−4 39.583
5.2577 × 10−5
18.043
38.286
3.4454 × 10−6
15.260
34.318
2.3100 × 10−7
14.915
32.651
1.5117 × 10−8
15.281
32.198
9.6908 × 10−10
15.599
As we see, the predictions are confirmed by the computed error ratios.
The first sixth order method In [59, 60] (Hut’a, 1966, 1967), the detailed conditions for a method with 8 stages to have order six were derived. The very intricate analysis in this work was combined with stage order conditions and other simplifying assumptions to yield methods with the required properties. In [8] (Butcher, 1963) it was shown that the simplifications had the effect of forcing the 31 conditions assumed by Hut’a to actually hold for the full set of 37 conditions required for applications to vector-valued problems. We will review this result starting with Table 17, which generalizes the pairing of two trees because they share an isomeric class. This generalization is taken to order 6 trees. Using Table 17, we can write down the scalar order conditions in the case of these isomeric classes. Φ(t12 ) + Φ(t15 ) = t 1 ! + t 1 ! , 12 15 Φ(t21 ) + Φ(t29 ) = t 1 ! + t 1 ! , 21 29 Φ(t25 ) + Φ(t31 ) = t 1 ! + t 1 ! , 31 25 Φ(t28 ) + Φ(t33 ) = t 1 ! + t 1 ! , 28 33 Φ(t26 ) + Φ(t32 ) + Φ(t35 ) = t 1 ! + t 1 ! + t 1 ! . 32 26 35 The single-tree isomeric classes provide 26 order conditions which, together with the 5 listed above, constitute the 31 conditions to obtain order 6 for scalar problems, as in [59] (Hut’a, 1966). Remarkably, Hut’a’s methods are actually order 6, even for high-dimensional problems. To verify this, it is only necessary to show that Φ(ti ) = t1! , i
i = 15, 29, 31, 32, 33, 35.
(5.2 n)
But each of these trees is of the form t = [t ] so that, according to the D(1) simplifying assumption, which holds for the Hut’a methods,
5.3 Stability of Runge–Kutta methods
187
Table 17 Trees arranged in isomeric classes, with corresponding order conditions
D 11 D 01 D 10
Φ(t12 ) = t 1 ! 12
D 01 D 11 D 10
Φ(t15 ) = t 1 ! 15
D 21 D 01 D 10
Φ(t21 ) = t 1 ! 21
D 01 D 21 D 10
Φ(t29 ) = t 1 ! 29
D 11 D 01 D 20
Φ(t25 ) = t 1 ! 25
D 01 D 11 D 20
Φ(t31 ) = t 1 ! 31
D 02 D 10 D 01 D 10
Φ(t28 ) = t 1 ! 28
D 01 D 02 D 10 D 10
Φ(t33 ) = t 1 ! 33
D 11 D 01 D 01 D 10
Φ(t26 ) = t 1 ! 26
D 01 D 11 D 01 D 10
Φ(t32 ) = t 1 ! 32
D 01 D 01 D 11 D 10
Φ(t35 ) = t 1 ! 35
1 1 Φ(t) − Φ(t ) + Φ(t ∗ τ) = t! − t1 ! + (t ∗τ)! .
Consequently, Φ(t) = 1/t! because Φ(t ) = 1/t ! and Φ(t ∗ τ) = 1/(t ∗ τ)!, in each of these cases listed in (5.2 n).
5.3 Stability of Runge–Kutta methods The stability function Given a method (A, bT , c), consider the result computed for the linear problem, y = qy, where q is a (possibly complex) constant. If z = hq, the output after a single step is y1 = R(z)y0 , where R(z) is the “stability function”, defined by Y = y0 1 + zAY, R(z)y0 = y0 + zbTY. From (5.3 a), Y = y0 (I − zA)−1 and from (5.3 b),
(5.3 a) (5.3 b)
188
5 B-series and Runge–Kutta methods
R(z) = 1 + zbT (I − zA)−1 1.
(5.3 c)
For an explicit s-stage method, it further follows that s
R(z) = 1 + ∑ Φ([n 1]n )zn .
(5.3 d)
n=1
Exercise 44 Verify (5.3 d) for an explicit s stage method.
If an explicit method has order p = s, it further follows that p
n R(z) = 1 + ∑ zn! .
(5.3 e)
n=1
Exercise 45 Verify (5.3 e) for an explicit method with p = s .
Finally, we find a convenient general formula for the stability function. See for example [53] (Hairer, Wanner, 1996). Lemma 5.3A The stability function for a Runge–Kutta method (A, bT , c) is equal to det I + z(1bT − A) R(z) = . (5.3 f) det(I − zA)
Proof. If a square non-singular matrix M is perturbed by a rank 1 matrix uvT , the determinant is modified according to det(M + uvT ) = det(M) + vT adj(M)u. It follows that det(M + uvT )/ det(M) = 1 + vT M −1 u. Substitute M = I − zA, uT = zbT , v = 1 and the result follows from (5.3 c).
The stability region and the stability interval The set of points in the complex plane for which |R(z)| ≤ 1 is the “stability region”. The interval I = [−r, 0], with maximal r, such that I lies in the stability region, is the stability “interval”. In the case of the explicit methods for which 1 ≤ p = s ≤ 4, the boundaries of these finite regions are as shown in the diagram:
5.4 Explicit Runge–Kutta methods
189
3i 4 3 2
1
−3
0
−3i‘
The stability intervals are p = 1 : I = [−2.000, 0] p = 2 : I = [−2.000, 0] p = 3 : I = [−2.513, 0] p = 4 : I = [−2.785, 0] The stability interval is an important attribute of a numerical method because, for a decaying exponential component of a problem, we want to avoid exponential growth of the corresponding numerical approximation. Exercise 46 Find the stability function for the implicit method 1 4
7 24
1 − 24
1
2 3
1 3
2 3
1 3
5.4 Explicit Runge–Kutta methods In this section we will review the derivation of the classical explicit methods in the full generality of multi-dimensional autonomous problems. That is, we will define the order of a method as given by Theorem 5.1A (p. 177)).
190
5 B-series and Runge–Kutta methods
Low orders Order 1 For a single stage there is only the Euler method but for s > 1 other options are possible such as 0 1
1 7 8
(5.4 a) 1 8
This so-called “Runge–Kutta–Chebyshev” method [90] (van der Houwen, Sommeijer, 1980) is characterized by its extended real interval of stability; that is, a high value of r for its stability interval I = [−r, 0]. For this method the stability function is R(z) = 1 + z + 18 z2 compared with RE (z) = 1 + z for the Euler method. The corresponding stability regions are shown in the following diagrams, with Euler on the left and (5.4 a) on the right.
i −2
0
i −8
0
−i
−i
The extended stability interval [−8, 0] is regarded as an advantage for the solution of mildly stiff problems. Order 2 From the order conditions b1 + b2 = 1, b2 c2 = 12 , the family of methods is found, where c2 = 0, 0 c2
.
c2 1−
1 2c2
This family, particularly the special cases c2 = the pioneering paper by Runge [82].
1 2c2 1 2
and c2 = 1, were made famous in
5.4 Explicit Runge–Kutta methods
191
Exercise 47 Derive an explicit Runge–Kutta method with s = p = 2 and b2 = 1.
Order 3 These methods are associated with the paper by Heun [56]. For a three stage method, with p = 3, we need to satisfy b1 + b2 + b3 = 1, b2 c2 + b3 c2 = 12 , b2 c22 + b3 c22 = 13 , b3 a32 c2 = 16 . There are four cases to consider (i) c2 = c3 , c2 = 0 = c3 , c2 = (ii) c3 = 23 , 0 = c2 = 23 , (iii) c2 = 23 , c3 = 0, b3 = 0, (iv) c2 = c3 = 23 . These are
(i)
= c3 ,
0 c2
c2
c3
(3c22 −3c2 +c3)c3 (3c2 −2)c2
c3 (c2 −c3 ) (3c2 −2)c2
6c2 c3 −3c2 −3c3 +2 6c2 c3
3c3 −2 6c2 (c3 −c2 )
0 c2 (ii)
2 3
2 3
c2 6c2 −2 9c2
2 9c2
1 4
0
, 3 4
0 (iii)
2 3
0
2 3 1 − 4b3 1 4
− b3
,
1 4b3 3 4
b3
0 (iv)
2 3 2 3
2 3 2 3
− 4b13 1 4
.
1 4b3 3 4
− b3
b3
, 2−3c2 6c3 (c3 −c2 )
192
5 B-series and Runge–Kutta methods
Examples from each case are 0
0 1 2
(i)
1 2
1 −1
2
1 6
2 3
,
1 3 2 3
(ii)
1 6
(iii)
0
2 3
1 4
0
, 3 4
0
0 2 3
1 3
2 3
0 −1
1
0
3 4
,
2 3 2 3
(iv)
1 4
2 3 2 3 3 8
0 1 4
. 3 8
Exercise 48 Derive an explicit Runge–Kutta method with s = p = 3, c2 = c3 , b2 = 0.
Order 4 To obtain an order 4 method, with s = 4 stages, the equations Φ(t) = 1/t!, |t| ≤ 4, must be satisfied. Write these as ui = vi , i = 1, 2, . . . , 8, where the vectors u and v are given by ⎤
⎡ b1 + b2 + b3 + b4
⎢ ⎢ b2 c2 + b3 c3 + b4 c4 ⎢ ⎢ ⎢ b2 c22 + b3 c23 + b4 c24 ⎢ ⎢ ⎢ ⎢ b3 a32 c2 + b4 a42 c2 + b4 a43 c3 u := ⎢ ⎢ ⎢ b2 c32 + b3 c33 + b4 c34 ⎢ ⎢ ⎢ b3 c3 a32 c2 + b4 c4 a42 c2 + b4 c4 a43 c3 ⎢ ⎢ ⎢ b3 a32 c22 + b4 a42 c22 + b4 a43 c23 ⎣ b4 a43 a32 c2
⎥ ⎥ ⎥ ⎥ ⎥ ⎥ ⎥ ⎥ ⎥ ⎥, ⎥ ⎥ ⎥ ⎥ ⎥ ⎥ ⎥ ⎥ ⎦
⎡ ⎢ ⎢ ⎢ ⎢ ⎢ ⎢ ⎢ ⎢ ⎢ v := ⎢ ⎢ ⎢ ⎢ ⎢ ⎢ ⎢ ⎢ ⎢ ⎣
1 1 2 1 3 1 6 1 4 1 8 1 12 1 24
⎤ ⎥ ⎥ ⎥ ⎥ ⎥ ⎥ ⎥ ⎥ ⎥ ⎥. ⎥ ⎥ ⎥ ⎥ ⎥ ⎥ ⎥ ⎥ ⎦
(5.4 b)
The value of c4 If c2 , c3 , c4 are parameters we might attempt to solve the system in three steps: (i) solve the linear system u2 = v2 , u3 = v3 , u5 = v5 , to obtain b2 , b3 , b3 , (ii) solve u4 = v4 , u6 = v6 , u7 = v7 to obtain a32 , a42 , a43 , (iii) substitute the solutions to steps (ii) and (iii) into u8 = v8 . This will give a condition on the c values. A short-circuit to this analysis is given in the following:
5.4 Explicit Runge–Kutta methods
193
Lemma 5.4A For any explicit Runge–Kutta method, with p = s = 4, c4 = 1. Proof. From the equations b4 a43 · (c3 − c2 )c3 , (u7 − c2 u4 ) = (u6 − c4 u4 ) = b3 (c3 − c4 ) · a32 c2 , u5 − (c2 + c4 )u3 + c2 c4 u2 = b3 (c3 − c4 ) · (c3 − c2 )c3 , b4 a43 · a32 c2 ,
u8 = it follows that
(u7 − c2 u4 )(u6 − c4 u4 ) − u5 − (c2 + c4 )u3 + c2 c4 u2 u8 = 0. Substitute ui = vi , with the result
(v7 − c2 v4 )(v6 − c4 v4 ) − v5 − (c2 + c4 )v3 + c2 c4 v2 v8 = 0,
which simplifies to c2 (c4 − 1) = 0. If c2 = 0 we deduce the contradiction u8 = 0 = v8 . Hence, c4 = 1.
Lemma 5.4B For any explict Runge–Kutta method, with p = s = 4, D(1) holds. That is, di = 0, where d j = ∑i> j bi ai j − b j (1 − c j ), j = 1, 2, 3, 4. Proof. From Lemma 5.4A, d4 = 0; from ∑4j=1 d j (c j − c2 )c j = 0, d3 = 0; from ∑4j=1 d j (c j = 0,d2 = 0; and from ∑4j=1 d j = 0, d1 = 0.
Reduced tableaux for order 4 For Runge–Kutta methods in which D(1) holds, the first column and last row of A can be omitted from the analysis and restored later when the derivation of the method is completed in other respects. Let bT = bT A and consider the reduced tableau c2 c3
,
a32 b2
b3
satisfying the reduced order conditions b2 c2 + b3 c3 = 16 , 1 b2 c22 + b3 c23 = 12 ,
194
5 B-series and Runge–Kutta methods
1 ba32 c2 = 24 .
Assuming c2 , c3 ∈ {0, 1} and c2 =
1 2
(to avoid b3 = 0), we find that
2c3 − 1 , 12c2 (c3 − c2 ) (1 − 2c2 ) b3 = , 12c3 (c3 − c2 ) c3 (c3 − c2 ) , a32 = 2c2 (1 − 2c2 ) b2 =
leading to the full tableau: 0 c2
c2
c3
c3 (3c2 −4c22 −c3 ) 2c2 (1−2c2 )
c3 (c3 −c2 ) 2c2 (1−2c2 )
1
a41
a42
a43
6c2 c3 −2c2 −2c3 +1 12c2 c3 )
2c3 −1 12c2 (1−c2 )(c3 −c2 )
(1−2c2 ) 12c3 (1−c3 )(c3 −c2 )
, (5.4 c) 6c2 c3 −4c2 −4c3 +3 12(1−c3 )(1−c2 ))
where a41 =
12c22 c23 − 12c22 c3 − 12c2 c23 + 4c22 + 15c2 c3 + 4c23 − 6c2 − 5c3 + 2 , 2c3 c2 (6c2 c3 − 4c2 − 4c3 + 3)
(1 − c2 )(−4c23 + c2 + 5c3 − 2) , 2c2 (c3 − c2 )(6c2 c3 − 4c2 − 4c3 + 3) (1 − 2c2 )(1 − c2 )(1 − c3 ) a43 = . c3 (c3 − c2 )(6c2 c3 − 4c2 − 4c3 + 3)
a42 =
This “general” case takes a simpler form if c2 and c3 are symmetrically placed in [0, 1]. Write c2 = 1 − c3 and (5.4 c) becomes 0 1 − c3
1 − c3
c3
c3 (1−2c3 ) 2(1−c3 )
c3 2(1−c3 )
1
(1−2c3 )(4−9c3 +6c23 ) 2(1−c3 )(−1+6c3 −6c23 )
c3 (1−2c3 ) 2(1−c3 )(−1+6c3 −6c23 )
1−c−3 1−6c3 +6c32
−1+6c3 −6c23 12c3 (1−c3 )
1 12c3 (1−c3 )
1 12c3 (1−c3 )
−1+6c3 −6c23 12c3 (1−c3 )
(5.4 d) Exercise 49 Derive an explicit Runge–Kutta method with s = p = 4, c2 = 13 , c3 = 34 .
5.4 Explicit Runge–Kutta methods
195
Kutta’s classification of fourth order methods In the famous 1901 paper by Kutta [66] the classical theory of Runge–Kutta methods, up to order 4, was effectively completed. Given the value c4 = 1, 5 families of methods were formulated, in addition to the general case. These are as follows, where the reduced tableau is shown in the first column and the full tableau in the second. In each case λ is a non-zero parameter. 0 λ λ 1 2
λ
1 2 1 8λ
1 − 8λ
1 2
1 −1 +
, 1 3
0
1 8λ 1 1 2λ − 2λ
2
0
2 3
1 6
(5.4 e) , 1 6
0 1 2 1 2
1 2 1 2
,
1 12λ
−λ
1 3
1 2 1 2
1 − 12λ
1
λ
1 12λ
(5.4 f)
0
1 − 6λ
1 6
2 3
,
6λ
− 2λ
1 6
2λ
0 1 1 2
1 8
,
0
1
1
1 2
3 8
1
1−
1 3
1 4λ
1 6
1 8 1 − 12λ 1 6
−λ
(5.4 g) 1 3λ 2 3
, λ
0 1 2 1 2
0
, 1 12λ 1 3
0 1
λ
1 2 1 − 12λ − 12 − 6λ 1 6
−λ
1 12λ 3 2
6λ
2 3
λ
(5.4 h) . 1 6
The contributions of S. Gill A four stage method usually requires a memory of size 4N +C to carry out a single step. In [45] (Gill, 1951), a new fourth order method was derived in which the
196
5 B-series and Runge–Kutta methods
memory requirements were reduced to 3N +C. However, the work of S. Gill in this paper has a wider significance. An important feature of Gill’s analysis, and derivation of the new method, was the use of elementary differentials written in tensor form f i,
f ji f j ,
f ji fkj f k ,
i j k f jk f f ,
...,
represented by the trees ,
,
,
,
....
The Gill Runge–Kutta method To motivate this discussion, we need to ask how many vectors are generated in each step of the calculation process, as stage values are evaluated and stage derivatives are then calculated. The calculations, and a list of the vectors needed at the end of each stage derivative calculation, are: Y1 = y0 , hF1 = h f (Y1 ), Y1 hF1 , Y2 = y0 + a21 hF1 , hF2 = h f (Y2 ), Y2 hF1 hF2 , Y3 = y0 + a31 hF1 + a32 hF2 , hF3 = h f (Y3 ), Y3 y0 +a41 hF1 +a42 hF2 y0 +b1 hF1 +b2 hF2 hF3 , Y4 = y0 + a41 hF1 + a42 hF2 + a43 hF3 , hF4 = h f (Y4 ), Y4 y0 + b1 hF1 + b2 hF2 + b3 hF3 hF4 . Note that, at the time h f (Y3 ) is about to be computed, hF1 and hF2 are still needed for the eventual calculation of Y4 and the output value y1 and these are shown, in the list of required vectors, as partial formulae for these quantities, which can be updated as soon as hF3 and hF4 become available. At the point in the process, immediately before hF3 is computed, we need to have values of the three vectors, Y3 = y0 + a31 hF1 + a32 hF2 , y0 + a41 hF1 + a42 hF2 and y0 + b1 hF1 + b2 hF2 . This information can be stored as just two vectors if ⎛⎡ ⎤⎞ 1 a31 a32 ⎜⎢ ⎥⎟ det ⎝⎣ 1 a41 a42 ⎦⎠ = 0. (5.4 i) 1 b1 b2 Gill’s derivation based on the Kutta class (5.4 f), for which (5.4 i) holds gives 72λ 2 − 24λ + 1 = 0, 1 √ λ = 16 ± 12 2. √ 1 Gill recommends λ = 16 − 12 2 based on the magnitude of the error coefficients.
leading to
5.4 Explicit Runge–Kutta methods
197
An alternative solution satisfying Gill’s criterion If the assumption c2 = 1 − c3 is made, as in (5.4 d), Gill’s criterion (5.4 i) gives 3c33 − 3c3 + 1 = 0, π , c3 = − √2 cos 18 3 c3 = √2 cos 5π 18 , 3 c3 = √2 cos 7π 18 .
with solutions
(5.4 j) (5.4 k) (5.4 l)
3
The first case (5.4 j) gives c2 > 1, c3 < 0 and this should be rejected. The second case (5.4 k) has elements of rather large magnitude in A and seems, on this basis, less desirable than the third case (5.4 l), for which the tableau is, to 10 decimal places, 0 0.6050691568 0.3949308432 1
0.6050691568 0.0685790216
0.3263518216
−0.5530334218
0.1581025768
1.3949308450
0.1512672890
0.3487327110
0.3487327110
. 0.1512672890
Fifth order and higher order methods The pattern which holds up to order 4, in which methods with order p exist with s = p stages, does not hold above order 4. It will be shown in Theorem 5.5A (p. 200) that, for p > 4, s > p is necessary. We will complete this survey of explicit Runge–Kutta methods by presenting a single fifth order method with 6 stages and referring to a famous method with p = 10. The role of simplifying assumptions The D(1) condition was necessary in the case of s = p = 4 and, at the same time, simplified the order requirements. The C(2) condition is not really possible because it would imply 12 c22 = a21 c1 = 0. This would mean that c2 = c1 and the first two stages compute the same value and could be combined into a single stage. Taking this argument further, we conclude that all stages are equivalent to a single stage and only order 1 is possible. But for order at least 5, it becomes very difficult to construct methods without assuming something related in some way to C(2). We can see this by evaluating the following expression on the assumption that the order 5 order conditions are satisfied. s
s−1
∑ bi ∑ ai j c j − 12 c2i
i=1
j=1
2
s
s−1
i=1
j=1
s s−1
s
i=1 j=1
i=1
= ∑ bi ∑ ai j c j aik ck − ∑ ∑ bi c2i ai j c j + 14 ∑ bi c4i =
1 20
= 0.
−
1 10
+
1 20
198
5 B-series and Runge–Kutta methods
If for example the C(2) requirement were satisfied for each stage except the second stage, it would be necessary that b2 = 0. It would also be necessary that
∑ bi ai2 = 0, ∑ bi ci ai2 = 0, ∑ bi ai j a j2 = 0,
(5.4 m)
otherwise it would be impossible for the following three pairs of conditions to hold simultaneously. ∑ bi ai j a jk ck = 241 , ∑ bi ai j c2j = 121 ,
∑ bi ci ai j a jk ck = 301 , 1 , ∑ bi ai j a jk ak c = 120
∑ bi ci ai j c2j = 151 , ∑ bi ai j a jk c2k = 601 .
If D(1) holds, a suitably modified form of C(2) does not require (5.4 m) but only ∑ bi (1 − ci )ai2 = 0, in addition to b2 = 0. These assumptions open a path to the construction of fifth order methods. Rewrite (5.4 b) with ui replaced by u i , where the bi are replaced by
i = 1, 2, 3, 4, bi = ∑ b j a ji , j>i
and the vi are replaced by the elements of the vector v T =
1 2
1 6
1 12
1 24
1 20
1 40
1 60
1 120
.
To enable us to focus on the parts of the tableau that are most significant, we use a reduced tableau of the form c3 c4 c5
We need to solve
a43 a53
a54
b3
b4
.
b5
b3 c3 + b4 c4 + b5 c5 = 16 , 1
b3 c23 + b4 c24 + b5 c25 = 12 , 3 3 3
b3 c + b4 c + b5 c = 1 , 3
4
5
b5 a54 c4 (c4 − c3 ) =
20 1 60
1 − 24 c3 .
These conditions give no information about a43 and a53 . We also need to take into account the relations based on the C(2) condition. These are a32 c2 = 12 c23 , a42 c2 = 12 c24 − a43 c3 ,
b5 a52 = − b3 a32 − b4 a42 , a53 c3 = 12 c25 − a52 c2 − a54 c4 .
5.5 Attainable order of explicit methods
199
We can solve these equations sequentially for a32 , a42 , a52 , and a53 , with a43 chosen arbitrarily, to complete the reduced tableau. In the special case c=
0
1 4
1 4
1 2
3 4
1
T
,
with the chosen value a43 = 1 we find the reduced tableau to be 1 4 1 2
1 0
9 16
4 15
1 15
, 4 45
leading to the full tableau 0 1 4 1 4 1 2 3 4
1
1 4 1 8
0
1 8 − 12
3 16 − 37
0
7 90
.
1
2 7
9 0 16 12 12 7 − 7
8 7
0
16 45
2 15
16 45
7 90
A tenth order method Using a combination of simplifying assumptions and variants of these assumptions, a 17 stage method of order 10 [47] (Hairer, 1978) has been constructed. It is not known if methods of this order exist with s < 17.
5.5 Attainable order of explicit methods As we have seen, it is possible to obtain order p with s = p stages for p ≤ 4. However, for p ≥ 5, methods only exist if s ≥ p + 1. Furthermore, for p ≥ 7, methods only exist if s ≥ p + 2. Before presenting these results, we make some preliminary remarks.
200
5 B-series and Runge–Kutta methods
Remarks It can always be assumed that c2 = 0 If a method existed with c2 = 0, the second stage will give a result identical to the first stage so that the second stage can be effectively eliminated. That is, the tableau 0 0
0
c3
a21
c4 .. .
a31 .. .
a32 .. .
..
b1
b2
···
, .
can be replaced by 0 c3
a21
c4 .. .
a31 + a32 .. .
..
b1 + b2
···
, .
with one less stage but the same order. Some low rank matrix products In the proofs given in this section, products of matrices U and V occur in which many terms cancel from UV because of zero elements in the final columns of U and the initial rows of V . Typically the rows of U have the form bT Am , with some of the A factors replaced by some other strictly lower triangular matrices, and with the columns of V of the form An c, again with some of the A factors replaced by strictly lower triangular matrices. From the specific structure of U and V , an upper bound on the rank of UV can be given.
Order bounds In this section, C = diag(c), dI = 1 − ci , D = I −C. Theorem 5.5A No (explicit) Runge–Kutta s-stage method exists with order p = s > 4. Proof. We will assume a method of this type exists and obtain a contradiction. Let
5.6 Implicit Runge–Kutta methods
201
⎡
⎤ bT A p−3
U =⎣
⎦, bT A p−4 (D − d4 I)
V = Ac (C − c2 I)c .
Since each of these matrices has rank 1, their product is singular. However, the product is given by ⎤ ⎡ c2 2 1 − p! p! (p−1)! ⎥ ⎢ ⎥ UV = ⎢ ⎦ ⎣ p−3 2(p−3)c2 2(p−3) c4 2c4 c2 c4 s! − (p−1)! p! − (p−1)! − (p−1)! + (p−2)! =
1
0
p−3
−d4
⎡ ⎢ ⎣
1 p! 1 (p−1)!
⎤
1 (p−1)! ⎥ 1 ⎦ 1 0 (p−2)!
2
−c2
.
Since the last two factors are non-singular, it follows that d4 = 0 so that c4 = 1. Repeat this argument with V unchanged but ⎤ ⎡ bT A p−3 ⎦, U =⎣ bT A p−5 (D − d5 I)A and it follows that c5 = 1. From c4 = c5 = 1, it follows that bT A p−5 (I − diag(c))A2 c is zero. However, by the order conditions, p−4
bT A p−5 (I − diag(c))A2 c = p! = 0. For further results on attainable order, see [11] (Butcher 1965), [16] (Butcher 1985), [20] (Butcher 2016).
5.6 Implicit Runge–Kutta methods The classical methods in which A is strictly lower triangular can be implemented explicitly. That is the stages can be evaluated in sequence, using only information already available so that the stage values, Yi , and the corresponding B-series, which will be denoted by ηi , are computed by Yi = y0 + h ∑ ai j f (Y j ),
i = 1, 2, . . . , s,
ηi = 1 + ∑ ai j η j D,
i = 1, 2, . . . , s.
j 0, i = 1, 2, . . . , s. Then for any dissipative problem, [yn , yn ]Q is non-increasing. Proof. Because M is positive indefinite, it is the sum of squares of linear forms and hence s
s
∑ ∑ mi j [Fi , Fj ]Q ≥ 0.
i=1 j=1
From Theorem 7.3A, [yn , yn ]Q − [yn−1 , yn−1 ]Q is the sum of three non-positive terms. The conservation case For problems satisfying [Y, F]Q = 0, where it is not necessary to assume that Q has any special properties other than symmetry, the stability condition becomes a conservation property of the differential equation. Theorem 7.3C Let (A, bT , c) be a Runge–Kutta method with M = 0. Then for a problem satisfying [Y, F]Q = 0, [yn , yn ]Q is constant. Proof. From Theorem 7.3A, [yn , yn ]Q − [yn−1 , yn−1 ]Q is the sum of three zero terms. For applications of this result to problems possessing quadratic invariants, see [32] (Cooper, 1987) and [68] (Lasagni. 1988). For applications to Hamiltonian problems see [83] (Sanz-Serna,1988). Order conditions The order conditions for Runge–Kutta methods have a remarkable property in the case of symplectic methods (see [84] (Sanz-Serna, Abia, 1991)). Rather than impose sufficiently many additional restrictions as to make canonical methods elusive, and difficult to construct, the conditions M = 0 actually lead to simplifications. To illustrate this effect, look at the usual conditions for order 4, where the underlying trees are also shown bT 1 = 1, (7.3 b) bT c = 12 , (7.3 c) bT c2 = 13 , (7.3 d) bT Ac = 16 , (7.3 e) bT c3 = 14 , (7.3 f) bT cAc = 18 , (7.3 g)
7.3 Canonical and symplectic Runge–Kutta methods
255
bT Ac2 = 2
T
b A c=
1 12 , 1 24 .
(7.3 h) (7.3 i)
Write M = 0 in the form diag(b)A + AT diag b = bbT
(7.3 j)
and form the inner product uT diag(b)Av + uT AT diag(b)v = uT bbT v, for various choices of u and v, to obtain the results u = 1, u = 1,
v = 1, v = c,
u = c, v = c, u = 1, u = 1,
v=c , 2
v = Ac,
2bT c = (bT 1)2 ,
yields 2
T
T
T
(7.3 k) T
yields
b c + b Ac = (b 1)(b c),
(7.3 l)
yields
2b cAc = (b c) ,
(7.3 m)
yields yields
T
T
3
T
T
2
2
T
T
2
T
T
2
b c + b Ac = (b 1)(b c ), T
T
b cAc + b A = (b 1)(b Ac).
(7.3 n) (7.3 o)
Starting from (7.3 b), we find in turn (7.3 k) (7.3 l) (7.3 m) (7.3 n) (7.3 o)
=⇒ =⇒ =⇒ =⇒ =⇒
(7.3 c), (7.3 d) + (7.3 e), (7.3 g), (7.3 f) + (7.3 h), (7.3 g) + (7.3 i).
In summary, instead of the 8 independent order conditions (7.3 b)–(7.3 i), it is only necessary to impose the three conditions (7.3 b), (7.3 d), (7.3 f) to obtain order 4, given that the method is canonical. To extend this approach to any order, consider a sequence of steps that could be taken to verify that all order conditions are satisfied.. 1. Show that the order 1 condition is satisfied. 2. For p = 2, . . . , up to the required order, show that the order condition for one tree within each non-superfluous class of order p is satisfied. 3. For p = 2, . . . , up to the required order, show that the order condition for one tree within each superfluous class of order p is satisfied. 4. Show that if the order condition for one tree within each order p class is satisfied then the same is true for all trees in the class. In the case of canonical methods, for which Theorem 7.3A holds, Steps 3 and 4 in this sequence are automatically satisfied and only Steps 1 and 2 needs to be verified.
256
7 B-series and geometric integration
Theorem 7.3D For a canonical Runge–Kutta method, of order p − 1, let t1 = t ∗ t , t 2 = t ∗ t where |t| + |t | = p. Then 1 1 Φ(t 1 ) − t1! + Φ(t2 ) − t1! = Φ(t)Φ(t ) − t! . (7.3 p) t ! 1
2
Proof. To show that Φ(t1 ) + Φ(t2 ) = Φ(t)Φ(t ), write Φ(t) = bT φ , Φ(t ) = bT φ , so that Φ(t1 ) = bT φ Aφ = φ T diag(b)Aφ , Φ(t 2 ) = bT φ Aφ = φ T AT diag(b)φ . From (7.3 j), it follows that φ T diag(b)A + AT diag(b) = φ T bbT φ = Φ(t)Φ(t ). To show that (t1 !)−1 + (t2 !)−1 = (t!)−1 (t !)−1 , use the recursions t1 ! = so that
t!t !|t 1 | |t | ,
1 1 1 1 t 1 ! + t 2 ! = t! t !
t2 ! =
t !t!|t 2 | |t| ,
t ! t! t1 ! + t2 !
1 1 = t! . t !
Theorem 7.3E For a canonical Runge–Kutta method, the number of independent conditions for order p is equal to the number of non-superfluous free trees of order up to p. Proof. From (7.3 p), we deduce that the order conditions for t 1 and t2 are equivalent and hence only one condition is required for each non-superfluous free tree. In the case of a superfluous tree t1 = t 2 = t ∗ t, (7.3 j) implies 2Φ(t 1 ) = 2(t 1 !)−1 .
Particular methods Gauss methods For the classical Gauss methods it was shown in [15] (Butcher, 1975) that M is positive indefinite and therefore that the method is algebraically stable. But in the present context, these methods are symplectic because M = 0. Theorem 7.3F Let (A, bT , c) be the Gauss method with s stages,.then diag(b)A + AT diag(b) = bbT . Proof. Let V denote the Vandermonde matrix with (i, j) element equal to cij−1 . From the order conditions for the trees [τ i−1 [τ j−1 ]], the product V T diag(b)AV has (i, j)
7.3 Canonical and symplectic Runge–Kutta methods
257
element equal to 1/ j(i + j). Hence, the (i, j) element of V T (diag(b)A + AT diag(b) − bbT )V is equal to 1/ j(i + j) + 1/i(i + j) − 1/i j = 0. Because V is non-singular, the result follows.
Diagonally implicit methods
Methods in which A is lower triangular are canonical only if they have the form 1 2 b1 b1 + 12 b2
1 2 b1
b1
1 2 b2
.. .
.. .
.. .
..
b1 + b2 + · · · + 21 bs
b1
b2
···
1 2 bs
b1
b2
···
bs
.
.
This can also be looked at as the product of a sequence of s scaled copies of the implicit mid-point rule method. That is, the product method 1 2 b1
1 2 b2
1 2 b1
b1
1 2 b2
···
b2
1 2 bs
1 2 bs
bs
.
For consistency, which will guarantee order 2, we must have ∑si=1 bi = 1. To obtain order 3, we must have ∑si=1 b3i = 0 and, assuming this holds, order 4 is also possible if bT is symmetric, in the sense that bi = bs+1−i . The simplest case of order 4 can then be found with b3 = b1 and satisfying 2b1 + b2 = 1,
(7.3 q)
2b31 + b32
(7.3 r)
= 0.
√ From (7.3 r), b2 = − 3 2 b1 and from (7.3 q) we then find
bT =
1√ 2− 3 2
√ − 3√2 2− 3 2
1√ 2− 3 2
.
This gives the method [34] (Creutz, Gocksch, 1989), [88] (Suzuki, 1990), [92] (Yoshida, 1990)
258
7 B-series and geometric integration
1√ 4−2 3 2
1√ 4−2 3 2
1 2
1√ 2− 3 2
√ 3 2 3−2 √ 4−2 3 2
1√ 2− 3 2 1√ 2− 3 2
√ − 3√2 4−2 3 2 √ − 3√2 2− 3 2 √ − 3√2 2− 3 2
1√ 4−2 3 2
.
(7.3 s)
1√ 2− 3 2
Many similar schemes exist of which the following is particularly convenient and efficient [88] (Suzuki, 1990)
1√ 4− 3 4
bT =
1√ 4− 3 4
√ − 3√4 4− 3 4
1√ 4− 3 4
1√ 4− 3 4
.
Block diagonally implicit Nesting of known methods to obtain higher orders is possible using block diagonal structures. For example, if (A, bT , c) is a symmetric canonical method with order 4, then the composition of three methods forming the product θc
θA θb
T
·
(1−2θ )c
(1− 2θ )A (1−2θ )b
T
·
θc
θA θ bT
,
√ where θ = (2 − 5 2)−1 , will be canonical and have order 6. For example, the method (A, bT , c) could be the 2-stage Gauss method or the method (7.3 s).
Order with processing In [69] (L´opez-Marcos, Skeel, Sanz-Serna, 1996), it was proposed to precede a sequence of symplectic Runge–Kutta steps with a “processing step”, which can have its effects reversed at the conclusion of the integration steps. This makes it possible to obtain adequate accuracy with an inexpensive integrator. This can be seen as an application of effective order, or conjugate order [13] (Butcher, 1969). Let ξ denote the B-series for the input to each step so that the order conditions become η = A(ηD) + 1ξ , (7.3 t) Eξ = bT (ηD) + ξ + O p+1 , where η is the stage B-series vector. For classical order, ξ = 1.
7.3 Canonical and symplectic Runge–Kutta methods
259
Conformability and weak conformability The conformability and weak conformability conditions refer to the starting method (that is, the processor). To obtain the highest possible order, the values of ξ for any pair of equivalent trees need to be related in a special way. Definition 7.3G The starting method ξ is conformable of order p if, for t, t , such that |t| + |t ] ≤ p − 1, ξ (t ∗ t ) + ξ (t ∗ t) = ξ (t)ξ (t ).
Definition 7.3H The starting method ξ is weakly conformable of order p if, for t, t , such that |t| + |t ] ≤ p, (Eξ )(t ∗ t ) + (Eξ )(t ∗ t) − (Eξ )(t)(Eξ )(t ) = ξ (t ∗ t ) + ξ (t ∗ t) − ξ (t)ξ (t ).
(7.3 u)
We now present a series of results interconnecting the two levels of conformability and order. Write O to mean that a method has order p relative to a specific choice of ξ , WC to mean that ξ is weakly conformable, C to mean that ξ is conformable and P to mean that if the order condition holds for a tree in each non-superfluous class, then the order is p. The results can be summarized in the diagram. O =⇒ WC ⇐⇒ C =⇒ P.
(7.3 v)
Theorem 7.3I Let (A, bT , c) be a canonical Runge–Kutta method with order p relative to the starting method ξ . Then ξ is weakly conformable of order p. Proof. Write (7.3 t) in the form (Eξ )(t) − ξ (t) = bT (ηD)(t),
|t| ≤ p,
and substitute t → t ∗ t , noting that bT (ηD)(t ∗ t ) = (ηD)(t)T diag(b)η(t ). This gives (Eξ )(t ∗ t )−ξ (t ∗ t ) = (ηD)(t)T diag(b)η(t ) = (ηD)(t)T diag(b) A(ηD)(t ) + 1ξ (t ) = (ηD)(t)T diag(b)A(ηD)(t ) + bT (ηD)(t)ξ (t ). Add a copy of this equation, with t and t interchanged, and the result is
260
7 B-series and geometric integration
(Eξ )(t ∗ t ) − ξ (t ∗ t )+(Eξ )(t ∗ t) − ξ (t ∗ t) = (ηD)(t)T diag(b)A(ηD)(t )+(ηD)(t )T diag(b)A(ηD)(t) + bT (ηD)(t)ξ (t ) + bT (ηD)(t )ξ (t) = (ηD)(t)T diag(b)A + AT diag(b) (ηD)(t ) + bT (ηD)(t)ξ (t ) + bT (ηD)(t )ξ (t) = (ηD)(t)T bbT (ηD)(t ) + bT (ηD)(t)ξ (t ) + bT (ηD)(t )ξ (t) = bT (ηD)(t) + ξ (t) bT (ηD)(t ) + ξ (t ) − ξ (t)ξ (t ) = (Eξ )(t)(Eξ )(t ) − ξ (t)ξ (t ), which is equivalent to (7.3 u). Before showing the equivalence of conformability and weak conformability, we establish a utility definition and a utility lemma. Definition 7.3J Let t = [t 1 t2 · · · t m τ n ], where ti = τ, i = 1, 2, . . . , m. Then the bushiness of t is defined by bush(t) = n.
Lemma 7.3K For ξ ∈ B and t, t ∈ T, (Eξ )(t ∗ t ) + (Eξ )(t ∗ t) − (Eξ )(t)(Eξ )(t) = ∑ E(t x )E(t x ) ξ ( x ∗ x ) + ξ ( x ∗ x ) − ξ ( x )ξ ( x ) .
(7.3 w)
x≤t,x ≤t
Proof. The subtrees of t ∗ t are of the form x ∗ x and x and hence (Eξ )(t ∗ t ) =
∑
≤t
E(t x )E(t x )ξ ( x ∗ x )
x ≤t, x + E(t ) ∑ E(t x )ξ ( x ) + E(t ∗ t ). x≤t Using this and the same formula, with t and t interchanged, we find (Eξ )(t ∗ t ) + (Eξ )(t ∗ t) − (Eξ )(t)(Eξ )(t) = ∑ E(t x )E(t x ) ξ ( x ∗ x ) x ≤t, x ≤t + ξ ( x ∗ x ) + E(t ) ∑ E(t x )ξ ( x ) x≤t + E(t) ∑ E(t x )ξ ( x ) + E(t ∗ t ) + E(t ∗ t) x ≤t − ∑ E(t x )ξ ( x ) + E(t) ∑ E(t x )ξ ( x ) + E(t ) . x≤t x ≤t
7.3 Canonical and symplectic Runge–Kutta methods
261
Noting that E(t ∗ t) + E(t ∗ t) = E(t)E(t ), we see that this reduces to the result of the lemma. We now have: Theorem 7.3L The starting method ξ is weakly conformable of order p, if and only if it is conformable of order p. Proof. The ‘if’ part of the proof follows from Lemma 7.3K because, if ξ is conformable of order p, all terms on the right of (7.3 w) are zero. To prove the only if result by induction, assume that ξ is conformable of order p − 1, so that it is only necessary to show that for | x |+| x | = p−1, ξ ( x ∗ x )+ξ ( x ∗ x )−ξ ( x )ξ ( x ) = 0. Without loss of generality, assume bush( x ) ≥ bush( x ). Note that bush( x ) ≤ p − 3, corresponding to x = [τ p−3 ], x = τ. We will carry out induction on K = p−3, p−4, . . . , 0. For each K consider all x , x pairs such that bush( x ) = K. Define t = x ∗ τ, t = x , and substitute into (7.3 w). All terms on the right-hand side vanish because they correspond to a higher value of K and the single term corresponding to the current value of K. Hence we have (K + 1) ξ ( x ∗ x ) + ξ ( x ∗ x ) − ξ ( x )ξ ( x ) = 0.
Theorem 7.3M Let (A, bT , c) be a canonical Runge–Kutta method such that, for each non-superfluous free tree, at least one of the trees has order p relative to a conformable starting method ξ , then all trees have order p relative to ξ . Proof. Use an induction argument, so that the result can be assumed for all trees up to order p − 1. It remains to show that if (Eξ )(t ∗ t ) − ξ (t ∗ t ) − bT (ηD)(t ∗ t ) = 0, then (Eξ )(t ∗ t) − ξ (t ∗ t) − bT (ηD)(t ∗ t) = 0. Add these expressions and use the fact that bT (ηD)(t ∗ t ) + bT (ηD)(t ∗ t) = (Eξ )(t)(Eξ )(t )ξ (t)ξ (t ). It is found that (Eξ )(t ∗ t ) − ξ (t ∗ t ) − bT (ηD)(t ∗ t ) + (Eξ )(t ∗ t ) − ξ (t ∗ t) − bT (ηD)(t ∗ t) = (Eξ )(t ∗ t ) − ξ (t ∗ t ) + (Eξ )(t ∗ t ) − ξ (t ∗ t)(Eξ )(t)(Eξ )(t ) + ξ (t)ξ (t ) = 0.
262
7 B-series and geometric integration
7.4 G-symplectic methods The multivalue form of the matrix M, and an identity For a general linear method,
the partitioned matrix
M=
A
U
B
V
,
DA + AT D − BT GB
DU − BT GV
U T D −V T GB
G −V T GV
(7.4 a)
was introduced in [6] (Burrage, Butcher, 1980) to characterize quadratic stability for multivalue methods and it has a similar role in the general linear case as the matrix (7.3 a) with the same name. The matrix G appearing in M has a similar role as in the definition of G-stability [38]. In the general linear case, G is used to construct the quadratic form r [n] [n] [y[n] , y[n] ]G⊗Q := ∑ gi j yi , y j Q , i, j=1
whose behaviour, as n increases, can be used to study non-linear stability ([6]) or conservation. The quadratic identity for multivalue methods The result given in Theorem 7.3A has a natural extension to the G-symplectic case Theorem 7.4A [y[n] , y[n] ]G⊗Q = [y[n−1] , y[n−1] ]G⊗Q + h[Y, F]D⊗Q + h[F,Y ]D⊗Q − [v, v]M⊗Q , where
v=
hF
y[n−1]
.
Proof. Rewrite (7.4 a) in the form ; < T ; < D 0 0 BT A G BV = + A U − M. D0 + 0 0G UT VT Apply the linear operation X → [v, Xv]Q to each term in (7.4 a) and the result follows.
7.4 G-symplectic methods
263
Non-linear stability As for Runge–Kutta methods, we will consider problems for which [Y, f (Y )] ≤ 0 with the aim of achieving stable behaviour. Definition 7.4B A general linear method (A,U, B,V ) for which there exist D, a non-negative diagonal matrix, and G a positive semi-definite symmetric matrix is algebraically stable if M, given by (7.4 a), is positive semi-definite.
Theorem 7.4C For a problem for which [Y, f (Y )]Q is non-positive for positive indefinite Q, a numerical solution y[n] , found from an algebraically stable Runge–Kutta method, has the property that [y[n] , y[n] ]G⊗Q is non increasing for n = 0, 1, 2, . . . . Proof. This follows from Theorem 7.4A.
Conservation properties We now turn our attention to problems for which [Y, f (Y )]Q = 0 and methods for which M = 0, where M is given by (7.4 a). That is, we are considering methods covered by the following definition: Definition 7.4D A general linear method (A,U, B,V ) for which there exist D, a non-negative diagonal matrix and G such that DA + AT D = B∗ GB, DU = B∗ GV,
(7.4 b) (7.4 c)
∗
G = V GV. is G-symplectic. In this definition we have allowed for complex coefficients in U, B and V , by writing Hermitian transposes. Theorem 7.4E Let (A,U, B,V ) denote a G-symplectic method. Then for a problem for which [Y, f (Y )]Q = 0, [y[n] , y[n] ]Q is constant for n = 0, 1, 2, . . . . Proof. The result follows from the identity in Theorem 7.4A, with [Y, F]D⊗Q and M deleted. Two methods based on Gaussian quadrature The two methods P and N were introduced in [21] and differ only in the sign of which appears in the coefficients. For P we have the defining matrices
√
3
264
7 B-series and geometric integration
⎡
A
U
B
V
√ 3+ 3 ⎢ 6 ⎢ √ ⎢− 3 ⎢ 3
=⎢ ⎢ ⎢ ⎣
0
1
√ 3+ 3 6
1
1 2 1 −2
1 2 1 2
⎤
√ − 3+26 3 √ 3+2 3 6
1
0
0
−1
⎥ ⎥ ⎥ ⎥ ⎥. ⎥ ⎥ ⎦
This method can be verified to be G-symplectic with √ 3
G = diag(1, 3+26
).
D = diag( 12 , 12 ).
Dealing with parasitism Parasitism, and methods for overcoming its deleterious effects. were discussed in [21] (Butcher, Habib, Hill, Norton, 2014). The stability function for P is
⎤
⎡
V + zB(I − zA)−1U = V + zBU + O(z2 ) = ⎣
1+z
0
√
0 −1 − 3+26
3
z
⎦ + O(z2 ).
For a high-dimensional problem, z represents the value of an eigenvalue of the Jacobian of f at points in the step being taken. In general, it is not possible to guarantee that the real parts of these eigenvalues will not be positive and hence the method cannot be guaranteed to be stable. In numerical experiments with the simple pendulum, unstable behaviour does occur both for P and N. This is manifested by the loss of apparently bounded behaviour of the deviation of the Hamiltonian from its initial value. The onset depends on the initial amplitude of the pendulum swings and also appears later for N, compared with P. This behaviour is illustrated, in the case of P, in Figure 13. This shows the deviation of H from its initial value for the simple pendulum problem with p = 0 and two different values of q0 .
Conformability properties for general linear methods We will extend Definitions 7.3G (p. 259) and 7.3H to the multivalue case. For Runge– Kutta methods, the need for these concepts only arose for methods with processing but, in the more general case, they are always needed because, even if the principal input might have a trivial starting method, the supplementary components will not. Recall (7.3 v) (p. 259) which applies, suitably interpreted, also to G-symplectic methods.
7.4 G-symplectic methods
265
H − H0 2 × 10−11 1 × 10−11 5 × 10−12
0 0.01
0.1
1
10
102
x
103
H − H0 5 × 10−11 2 × 10−11 1 × 10−11 5 × 10−12 −5 × 10−12 −1 × 10−11 Figure 13 Variation in the Hamiltonian in attempts to solve the simple pendulum problem using method P with h = 0.01 and 105 time steps, with initial value y0 = [1.5, 0]T (upper figure) and y0 = [2, 0]T (lower figure)
Definition 7.4F The starting method ξ is conformable of order p if, for t, t , such that |t| + |t ] ≤ p − 1, ξ1 (t ∗ t ) + ξ1 (t ∗ t) = ξ (t)T Gξ (t ).
Definition 7.4G The starting method ξ is weakly conformable of order p if, for t, t , such that |t| + |t ] ≤ p, (Eξ )1 (t ∗ t )+(Eξ )1 (t ∗ t)−(Eξ )(t)T G(Eξ )(t ) = ξ1 (t ∗ t )+ξ1 (t ∗ t)−ξ (t)T Gξ (t ).
Theorem 7.4H The starting method ξ is conformable of order p if and only if ξ is weakly conformable of order p.
Theorem 7.4I Let (A,U, B,V ) be a G-symplectic method with order p relative to the starting method ξ . Then ξ is weakly conformable of order p.
266
7 B-series and geometric integration
Theorem 7.4J Let (A,U, B,V ) be a G-symplectic method with order at least p − 1 relative to a starting method ξ , which is conformable of order p. Then the method satisfies the order condition for t ∗ t , where |t ∗ t | = p, if and only if it satisfies the order condition for t ∗ t. Theorems 7.4H, 7.4I, 7.4J are proved in [24] (Butcher, Imran, 2015).
7.5 Derivation of a fourth order method The method G4123 The method G4123, with pqrs = 4123, was derived in [24] (Butcher, Imran, 2015). We will consider methods with a partitioned coefficient matrix ⎡ ⎤
A 1 U ⎢ ⎥ A U T ⎥, =⎢ b 1 0 ⎣ ⎦ B V B 0 V with the eigenvalues of V distinct from 1 but lying on the unit circle. It will be found that, with s = 3 and r = 2, fourth order G-symplectic methods exist such that A is lower-triangular with only a single non-zero diagonal element, and such that the parasitic growth factors are zero. A suitable ansatz is ⎡ ⎤ 1 2 0 0 1−gx1 2 b1 (1 + gx1 ) ⎢ ⎥ 1 ⎢ 0 1−gx2 ⎥ ⎢ b1 (1 + gx1 x2 ) 2 b2 (1 + gx22 ) ⎥ ⎢ ⎥ A U ⎢ = ⎢ b1 (1 + gx1 x3 ) b2 (1 + gx2 x3 ) 1 b3 (1 + gx2 ) 1−gx3 ⎥ ⎥ , (7.5 a) 3 2 B V ⎢ ⎥ ⎢ ⎥ b b b 1 0 ⎣ ⎦ 1 2 3 b2 x2 b3 x3 0 −1 b 1 x1 based on D = diag(b1 , b2 , b3 ),
G = diag(1, g).
For efficiency, we will attempt to obtain order 4 with a11 = a22 = 0. We achieve this by choosing g = −1 (for simplicity, noting that g cannot be positive), together with x1 = 1, x2 = −1. Substitute into (7.5 a) to obtain the simplified coefficient matrices ⎡ ⎤ 0 0 0 1 1 ⎢ ⎥ ⎢ 0 0 1 −1 ⎥ 2b1 ⎢ ⎥ A U ⎢ ⎥ = ⎢ b1 (1 − x3 ) b2 (1 + x3 ) 21 b3 (1 − x32 ) 1 x3 ⎥ . ⎢ ⎥ B V ⎢ ⎥ b1 b2 b3 1 0 ⎦ ⎣ b1 −b2 b3 x3 0 −1
7.5 Derivation of a fourth order method
267
Table 19 Solution and verification of (7.5 b)
∅ ξ
1
0
1 − 32
7 − 4320
ζ
0
1 4
1 − 16
49 − 960
η1
1
1 4
3 − 32
η2
1
5 12
η3
1
η1 D
149 8640
0
0
0
0
13 − 384
2543 57600
193 7680
619 34560
163 69120
91 − 1728
287 − 17280
2543 57600
193 7680
619 34560
163 69120
19 96
787 8640
197 − 17280
1943 − 57600
313 − 7680
5497 − 103680
557 − 41472
11 20
37 160
1147 8640
739 17280
377 6400
2489 172800
15487 − 345600
0
1
1 4
1 16
3 − 32
η2 D
0
1
5 12
25 144
19 96
125 1728
η3 D
0
1
11 20
121 400
37 160
Eξ
1
1
15 32
1163 4320
1319 8640
Eζ
0
1 4
3 16
71 960
11 384
313 12800
91 − 1728
287 − 17280
95 1152
787 8640
197 − 17280
1331 8000
407 3200
1147 8640
739 17280
109 720
3 32
187 2160
187 4320
1001 − 34560
1457 − 69120
1 64
2677 − 57600
3 − 128
73 − 2560
It was shown in [24] (Butcher, Imran, 2015) how the free parameters and the starting vectors can be chosen to achieve order 4 accuracy and also to ensure that parasitic growth factors are zero. The method parameters, are ⎡ ⎤ 0 0 0 1 1 ⎢ ⎥ ⎡ ⎤ ⎢ 2 1 0 1 −1 ⎥ ⎥ ⎢ 3 0 ⎢ ⎥ ⎢ 4 ⎥ A U 1 1 ⎥ ⎢ 2−3 ⎢ 5 ⎥ 1 − = ⎢ 5 10 2 c = ⎢ 12 ⎥, 5 ⎥, ⎢ ⎥ ⎣ ⎦ B V ⎢ 1 − 3 25 1 0 ⎥ 11 ⎢ 3 8 24 ⎥ 20 ⎣ ⎦ 3 1 5 0 −1 3 8 − 24 and they were chosen to satisfy the order conditions
ζ , η = AηD + 1ξ + U T Eξ = b ηD + ξ + O5 ,
+ V ζ + O5 . Eζ = BηD
(7.5 b)
The values of the starting methods, ξ and ζ , and the stage values and derivatives, η and ηD, are given in Table 19, together with a verification that the conditions are satisfied. It was assumed from the start, without loss of generality, that ξ1 and ξ5 , ξ6 , ξ7 , ξ8 are zero. Note that the entries for Eξ and Eζ in (7.5 b) are identical, to within O5 and these lines are the final steps of the order verification.
268
7 B-series and geometric integration
Implementation questions Starting and finishing methods for ξ We will write S h and F h for the mappings corresponding to the starting and finishing [0] methods, respectively, for y1 . Because each of the mappings is only required to be correct to within O4 , with the proviso that F h ◦ S h = id + O5 , we will first construct a Runge–Kutta tableau with only three stages which gives the B-series ξ −1 + O4 from which a corresponding approximation to ξ can be found cheaply to within O5 . Calculate the coefficients of ξ −1 for the first 4 trees and write down the order conditions for the required tableau b1 + b2 + b3 = ξ −1 ( ) = 0, b2 c2 + b3 c3 = ξ −1 ( ) = b2 c22 + b3 c23 = ξ −1 ( ) = b3 a32 c2 = ξ −1 ( ) =
1 32 , 7 4320 , 149 − 8640 .
A possible solution to this system is 0 1 2
1
1 2 28 − 121
149 121
391 − 4320
16 135
. 121 − 4320
If the finishing method for the first component is given by F h , then S h can be approximated by S h = 3id − F h − F h ◦ (2id − F h ).
Let a=
1
0
a2
a3
a4
a5
a6
a7
a8
be the B-series coefficients for F h , for ∅ . . . t8 , so that the corresponding coefficient vector for (2id − F h ) is
b = 1 0 −a2 −a3 −a4 −a5 −a6 −a7 −a8 . The series for F h ◦ (2id − F h ) is found to be
ba = 1 0 0 0 0 0 with the final result
1 0 −a2
−a3
−a4
−a5
−a22
a22 − a6
0
−a22
−a7
,
a22 − a8
,
7.5 Derivation of a fourth order method
269
which is identical to the series for a−1 , corresponding to S h . [0]
Now consider the starting method for y2 . This can be found using a generalized Runge–Kutta method with order conditions Φ(t) = ζ (t) for |t| ≤ 4 and coefficient of y0 equal to zero. A suitable tableau for this starter is 0 − 14
− 14
9319973 − 14 − 11609760
6417533 11609760
1 4
− 7417 6432
0
9025 6432
− 34
887 − 5536
0
0
0
28 675
− 1583059879 5775779700
. − 3265 5536
43875218 57757797
67 173 − 450 − 1350
[0]
Exercise 58 Find an alternative starting method for y2 , using a generalized five stage Runge–Kutta method with a42 = a52 = a53 = 0, and with c2 = c3 = − 13 , c4 = − 23 , c5 = −1. Exercise 59 If a four stage generalized Runge–Kutta method is used for the starting method for [0] y2 and c2 = 15924 14305 , what is c4 ?
Initial approximation for Y3 Because the first two stages of the method are explicit, the most important implementation question is the evaluation of the third stage. We will consider only the Newton method for this evaluation and we will need to find the most accurate method for obtaining an initial estimate to commence the iterations. Information available when the first two stage derivatives have been computed in, [0[] [0[] for example, the first step of the solution, includes y1 , y2 , hF1 and hF2 and we will need to obtain a useful approximation to F3 . In terms of B-series coefficients we have ξ (∅) = 1,
ξ ( ) = 0,
1 ξ ( ) = − 32 ,
7 ξ ( ) = − 4320 ,
ζ (∅) = 0,
ζ ( ) =
1 ζ ( ) = − 16 ,
49 ζ ( ) = − 960 ,
1 4,
η1 D(∅) = 0, η1 D( ) = 1, η1 D( ) =
1 4,
η1 D( ) =
1 16 ,
η2 D(∅) = 0, η2 D( ) = 1, η2 D( ) =
5 12 , 37 160 ,
η2 D( ) =
25 144 , 1147 8640 .
η3 (∅) = 1,
η3 ( ) =
11 20 ,
η3 ( ) =
A short calculation suggests that the approximation
− 65 η1 D + 32 η2 D η ≈ ξ +η
η3 ( ) =
270
7 B-series and geometric integration
Table 20 Trees to order 6, grouped together as free trees with superfluency, symmetry and possible deletion if the C(2) condition holds order
serial number
1
1
1
2
2
1
3
3
2
4
4
2
5
2
6
2
7
4
8
3
9
2
X
10
4
X
X
11
4
X
X
12
2
X
X
13
5
X
X
14
3
X
X
5
6
free tree
tree count
superfluous
symmetric
X
X
C(2)
X X X
X
X
X X
X
X
is exact, based on just 0, / , and . Accordingly, the approximation [0]
[0]
Y3 ≈ y1 + y2 − 65 hF1 + 32 hF2 is suggested to initialize the iterative computation of Y3 .
7.6 Construction of a sixth order method This discussion is based on [25] (Butcher, Imran, Podhaisky, 2017). Of the 37 trees up to order 6, which contribute to the order requirements, these can be immediately reduced to 14 because of the role played by the equivalences which define free trees. Some of these can immediately be discarded because of superfluency. If the method is symmetric then further trees become candidates for deletion from the set of required order conditions [23] (Butcher, Hill, Norton, 2016). If it is possible to impose the C(2) condition, further deletions are possible. These simplifications are summarized in Table 20.
7.6 Construction of a sixth order method
271
Design requirements Time-reversal symmetry Methods with time-reversal symmetry were considered in [23] (Butcher, Hill, Norton, 2016). This property is an important attribute of numerical schemes for the longterm integration of mechanical problems. Furthermore, the symmetric general linear methods perform well over long time intervals. We can define a general linear method to be symmetric in a similar fashion to a Runge–Kutta method. A general linear method is symmetric if it is equal to its adjoint general linear method, where the adjoint general linear method takes the stepsize with opposite sign. However, symmetry in general linear methods is not as simple as for Runge–Kutta methods, because the output approximations contain the matrix V , which is multiplied by the input approximations, and it is possible that the inverse matrix V −1 is not equal to V . For this reason, an involution matrix L is introduced, such that L2 = I and LV −1 L = V . We also introduce the stage reversing permutation P defined as Pi j = δi,s+1− j for i, j = 1, . . . , s. In particular, because of time-reversal symmetry, trees with even order can be ignored because the corresponding conditions will be automatically satisfied. Definition 7.6A A method (A,U, B,V ) is time-reversal symmetric with respect to the involution L if (7.6 a) A + PAP = UV −1 B, V LBP = B, (7.6 b) PULV = U, (7.6 c) (LV )2 = I.
(7.6 d)
From results in [23], it follows that, for a method with this property, with starting method Sh , it can be assumed that Sh = LS−h . Methods which are both G-symplectic and symmetric have many advantages, and some of these were derived in [23]. For methods with lower-triangular A, the two properties are closely related. Theorem 7.6B Let (A,U, B,V ) be a method with the properties 1. A is lower triangular, 2. The method is G-symplectic, 3. (7.6 c) is satisfied, then (7.6 a), (7.6 b) and (7.6 d) are satisfied. This result is proved in [20] (Butcher, 2016).
272
7 B-series and geometric integration
Structure of the method G6245 The method, which will be referred to as G6245, because pqrs = 6245, was originally derived in [25]. It achieves order 6 by combining symmetry, C(2) with Gsymplecticity. Symmetry requirements An arbitrary choice is made to define V = diag(1, i, −i, −1), G=
U= where
⎡
(7.6 e)
diag(1, − 12 , − 12 , 1), 1 2 (−β
1
α1
⎢ ⎢ α2 ⎢ ⎢ α := ⎢ 0 ⎢ ⎢−α ⎣ 2 −α1
Also write bT = b1 b2
− iα)
⎤
⎡
⎥ ⎥ ⎥ ⎥ ⎥, ⎥ ⎥ ⎦
b3 b2
1 2 (−β
β1
+ iα)
⎤
−γ ⎡
,
γ1
⎤
⎢ ⎥ ⎢ ⎥ ⎢ β2 ⎥ ⎢ γ2 ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥ β := ⎢ β3 ⎥ , γ := ⎢ γ3 ⎥ . ⎢ ⎥ ⎢ ⎥ ⎢ β ⎥ ⎢ γ ⎥ ⎣ 2 ⎦ ⎣ 2 ⎦ β1 γ1 b1 , D = diag(b). From (7.4 c), we deduce ⎡
bT
⎢ ⎢ (α T + iβ T )D B=⎢ ⎢ (α T − iβ T )D ⎣ γ TD
⎤ ⎥ ⎥ ⎥. ⎥ ⎦
Define the 5 × 5 symmetric matrix W with elements wi j = αi α j + βi β j − γi γ j , i, j = 1, 2, . . . , 5, which can be written W = αα T + β β T − γγ T .
(7.6 f)
Because of the symmetries and anti-symmetries in α, β , γ, it follows that W has the form ⎡ ⎤ w11 w21 w31 w41 w51 ⎢ ⎥ ⎢ w21 w22 w32 w42 w41 ⎥ ⎢ ⎥ ⎢ ⎥ W = ⎢ w31 w32 w33 w32 w31 ⎥ . (7.6 g) ⎢ ⎥ ⎢ w ⎥ ⎣ 41 w42 w32 w22 w21 ⎦ w51 w41 w31 w21 w11
7.6 Construction of a sixth order method
273
From (7.4 b) (p. 263), assuming A is lower triangular, the elements of this matrix are found to be ⎧ ⎪ ⎪ b j (1 − αi α j − βi β j + γi γ j ) = 12 b j (1 − wi j ), j < i, ⎪ ⎨ ai j = 12 b j (1 − αi α j − βi β j + γi γ j ) = b j (1 − wi j ), j = i, (7.6 h) ⎪ ⎪ ⎪ ⎩ 0, j > i. Symmetry also requires Pc + c = 1 and bT P = bT and we choose the abscissae vector as
c = 0 12 (1 − t) 12 12 (1 + t) 1
and the vector bT = b1 b2 b3 b2 b1 such that bT 1 = 1, bT c2 = 13 , bT c4 = 15 .
The choice of t = 1 − 2c2 The choice of t must yield a negative coefficient amongst b1 , b2 , b3 to ensure that the parasitism growth factors can be eliminated. It is found that this is possible in three cases Case 1:
0 < t2
0, by the Weierstrass approximation theorem there exists a polyno (ξ , η) = Π (ξ , η) + E(ξ , η), with |E(ξ , η)| ≤ ε mial in two variables Π such that Ψ for ξ , η ∈ [0, 1]. Without loss of generality (because Π (ξ , η) can be replaced by Π (ξ , η) + Π (η, ξ ) /2), assume that Π (ξ , η) = Π (η, ξ ). Let Π (ξ , η) =
1
ξ
ξ2
···
ξ n−1
T M
1
η
η2
···
η n−1
,
where M is an n × n symmetric matrix. From standard decomposition results for symmetric matrices, there exists an m × n matrix N and a diagonal m × m matrix D, such that M = N T DN. It then follows that Π (ξ , η) = ∑m i=1 di ϖi (ξ )ϖi (η), where the polynomial ϖi has coefficients given by row number i in N. We can now write m
(ξ , η) = ∑ di ϖi (ξ )ϖi (η) + E(ξ , η) Ψ i=1
and we obtain H(y1 ) − H(y0 ) 1 d
=
dξ
0
1
H(Yξ ) d ξ
d Y dξ dξ ξ 0 1 T 1 (ξ , η)S∇Hη d η d ξ Ψ ∇Hξ h = H (Yξ )
=
0
=
0
1 0
T 1 m ∇Hξ h ∑ di ϖi (ξ )ϖi (η)+E(ξ , η) S∇Hη d η d ξ. 0
i=1
The coefficient of hdi in (7.9 k) is
(7.9 k)
288
7 B-series and geometric integration
1 0
∇Hξ
1
= 0
T
1
0
ϖi (ξ )ϖi (η)S∇Hη d η d ξ
1 ϖi (ξ )∇Hξ d ξ S ϖi (η)∇Hη d η , 0
which vanishes because of skew-symmetry of S. Hence, 3 3 3H(y1 ) − H(y0 )3 3 3 1 1 3 3 T 3 E(ξ , η) ∇Hξ S∇Hη d η d ξ 33 ≤ h3 ≤ εh
0
0
0
0
1 1 3 3
3 T 3 3 ∇Hξ S∇Hη 3 d η d ξ .
Because this can be made arbitrarily small, H(y1 ) = H(y0 ).
A fourth order method We will construct an energy preserving method based on a polynomial Ψ (ξ , η) = 2aξ 2 η + 2bξ η + bξ 2 + (1 − a − 2b)ξ , where the coefficients are chosen subject to the symmetry of ∂Ψ /∂ ξ and the consis5 tency condition Φ(t1 ) = 01 Ψ (1, η) d η = 1. Evaluation of the remaining elementary differentials up to order 4 give Φ(t2 ) = 12 , Φ(t3 ) = 13 , 1 1 a + 36 (a + b)2 , Φ(t4 ) = 14 − 36 Φ(t5 ) = 14 , 1 1 1 a + 360 (a + b)(6a + 5b) − 360 (a + b)3 , Φ(t6 ) = 16 − 72 1 1 1 a + 180 (a + b)(6a + 5b) − 180 (a + b)3 , Φ(t7 ) = 16 − 36 1 1 a + 36 (a + b)2 , Φ(t8 ) = 18 − 36
and to obtain order 4, by requiring that Φ(t) = 1/t!, up to this order we need to satisfy (a + b)2 − a = 3, −(a + b)3 + (a + b)(6a + 5b) − 5a = 15, with solution −a = b = 3.
7.9 Energy preserving methods
289
Summary of Chapter 7 Although it has not been possible to survey all aspects of the burgeoning subject of Geometric Integration, symplectic Runge–Kutta methods and their generalization to general linear methods are introduced to the extent that their main properties are studied and explained. It is perhaps surprising that G-symplectic methods perform well over millions of time steps, even though, according to [40], they will eventually fail. In Section 7.9, energy preserving methods were introduced, based on integration methods, in the sense of Chapter 4, also known as Continuous Stage Runge–Kutta methods.
Teaching and study notes The following books and articles are essential reading, and provide a starting point for further studies on Geometric Integration. Cohen, D. and Hairer, E. Linear energy-preserving integrators for Poisson systems, (2011) [31] Hairer, E. Energy-preserving variant of collocation methods (2010) [48] Hairer, E., Lubich, C. and Wanner, G. Geometric Numerical Integration: StructurePreserving Algorithms for Ordinary Differential Equations, (2006) [49] Iserles, A., Munthe-Kaas, H.Z., Nørsett, S.P. and Zanna, A. Lie-group methods, (2000) [62] Miyatake, Y. An energy-preserving exponentially-fitted continuous stage Runge– Kutta method (2014) [73] McLachlan, R. and Quispel, G. Six lectures on the geometric integration of ODEs (2001) [71] Sanz-Serna, J.M. and Calvo, M.P. Numerical Hamiltonian Problems, (1994) [85] Projects Project 26
Derive a method similar to G6245 but with c2 =
1 10 .
Project 27 Consider the consequence of replacing (7.6 e) (p. 272) by V = diag(1, exp(iθ ), exp(−iθ ), −1), for 0 < θ < π, in the G6245 method.
Answers to the exercises
Chapter 1 Exercise 1 (p. 5) The function f and the components of y0 are f 0 = 1,
y00 = 1,
f 1 = y2 ,
y10 = 2,
f = 2y − 3y + y + cos(y ), 2
1
2
3
y20 = −2,
0
f 3 = y4 ,
y30 = 1,
f = y − y + (y ) + y + sin(y ), 4
1
2
3 2
4
0
y40 = 4.
Exercise 2 (p. 6) Substitute z = A exp(2t) + B exp(it) +C exp(−it) into d z/ dt − 2z − 2i exp(iz) − i exp(−iz) and obtain (2A − 2A) exp(2t) + (iB − 2B − 2) exp(it) + (−iC − 2C − 1) exp(−it). This is zero for all t iff B = − 45 = 25 i and C = − 25 + 15 i. Add the condition z(0) = 1 to obtain 1 A + B +C = 1. Hence, A = 11 5 + 5 i.
Exercise 3 (p. 6) The real and imaginary components are x = y = 15 exp(2t) − 25 cos(t) − 15 sin(t).
11 5
exp(2t) − 65 cos(t) + 35 sin(t),
© Springer Nature Switzerland AG 2021 J. C. Butcher, B-Series, Springer Series in Computational Mathematics 55, https://doi.org/10.1007/978-3-030-70956-3
291
292
Answers to the exercises
Exercise 4 (p. 7) Given y, z ∈ RN , let y = y0 +
R (y − y0 ), max(y − y0 , R)
z = y0 +
R (z − y0 ), max(z − y0 , R)
where y and z are shown in three cases, relative to {y : y − y0 ≤ R}, y y
y
y
z
z
z
z y0
y0
y0
In each case the Lipschitz condition follows from f(y) − f(z) ≤ f ( y) − f ( z) ≤ L y − z ≤ Ly − z.
Exercise 5 (p. 11) ⎡ ⎢ ⎢
=⎢ F(u) ⎢ ⎢ ⎣
u1 −hu2 1+h2
+ (1+h0.40001h 2 )(1+100h)2
u2 +hu1 1+h2
0.40001h2 (1+h2 )(1+100h)2
+
u3 1+100h
⎤ ⎥ ⎥ ⎥ ⎥. ⎥ ⎦
Stability is guaranteed by the power-boundedness of the matrix ⎤ ⎡ 1 h 2 − 1+h2 1+h ⎥ ⎢ ⎦, ⎣ h 1+h2
z
1 1+h2
and the boundedness of (1 + 100h)−n for positive integral n.
Exercise 6 (p. 13) 1/2 so that In this and the following answer, r := (y1 )2 + (y2 )2 ) H(x) = 12 (y3 )2 + (y4 )2 − r−1 , ∂ r−1 /∂ y1 = −y1 /r3 ∂ r−1 /∂ y2 = −y2 /r3 . We now find H = (∂ H/∂ y1 )(y1 ) + (∂ H/∂ y2 )(y2 ) + (∂ H/∂ y3 )(y3 ) + (∂ H/∂ y4 )(y3 ) = −(y1 /r3 )y3 − (y2 /r3 )y4 + y3 y1 /r3 + y4 y2 /r3 = 0.
Exercise 7 (p. 13) A = (∂ A/∂ y1 )(y1 ) + (∂ A/∂ y2 )(y2 ) + (∂ A/∂ y3 )(y3 ) + (∂ A/∂ y4 )(y3 ) = y4 y3 − y3 y4 + y2 y1 /r3 − y1 y2 /r3 = 0.
Answers to the exercises
293
Exercise 8 (p. 14) Evaluate in turn y = y + sin(x), y = y + cos(x) = y + sin(x) + cos(x), y(3) = y − sin(x) = y + cos(x), y(4) = y(3) − cos(x) = y, y(5) = y(4) + sin(x) = y + sin(x), y(6) = y(5) + cos(x) = y + sin(x) + cos(x), y(7) = y(6) − sin(x) = y + cos(x).
Exercise 9 (p. 15) (a) It is possible that the result error vanishes so that the evaluation of r fails because of the zero division. (b) Even if error is non-zero but small, the value of r might be very large, resulting in an unreasonably large value of yout. In practical solvers, the value of the stepsize ratio is not allowed to exceed some heuristic bound such as 2. (c) Similarly a very small value of r needs to be avoided and a heuristic lower bound, such as 0.5 is imposed in practical solvers.
Exercise 10 (p. 18) For 2 orbits with n steps, h = 8/n. The number of steps in successive quadrants are m + 1, m + 1, m + 2, m + 2, m + 3, m + 3, m + 4, m + k − 16, giving a final position 1 1 2m+4 −1 2m+4 −1 2m+6 2m+k−14 + + + n/8 n/8 −1 n/8 −1 n/8 1 1 8m + 9k − 128 1 , =n 8(k − 20) which is
8 n
k − 16
k − 20
from the starting point.
Exercise 11 (p. 21) y(x0 + h) − y(x0 ) − hF2 = y(x0 + h) − y(x0 ) − hy (x0 + 12 h) + y (x0 + 12 h) − F2 = hy (x0 ) + 12 h2 y (x0 ) + 16 h3 y(3) (x0 ) − hy (x0 ) − 12 h2 y (x0 ) − 18 h3 y(3) (x0 ) + h 18 h2 fy (x0 , y0 )y (x0 ) + O(h4 ) =
4 1 3 (3) 1 3 24 h y (x0 ) + 8 h f y (x0 , y0 )y (x0 ) + O(h ).
294
Answers to the exercises
Exercise 12 (p. 21) y(x0 + 13 h) −Y2 = O(h2 ),hy (x0 + 13 h) − hF2 = O(h3 ), y(x0 + 23 h) −Y3 = O(h3 ),hy (x0 + 23 h) − hF3 = O(h4 ), y(x0 + h) − y1 = y(x0 + h) − y0 − 14 hy (x0 ) − 34 hy (x0 + 23 h) + O(h4 ) = O(h4 ). Exercise 13 (p. 21) 1 3 1 4 (3) 1 4 2 h Jy (x0 ), Δ3 := 192 h Jy + 64 h J y (x0 ), In this answer J := fy (x0 , y0 ), Δ2 := 32 3 1 1 1 1 2 y(x0 + 4 h)−Y2 = y(x0 + 4 h)−y0 − 4 hy (x0 ) = 32 h y (x0 )+O(h ),
hy (x0 + 14 h)−hF2 = Δ2 +O(h4 ), y(x0 + 12 h)−Y3 = y(x0 + 12 h)−y0 − 12 hy (x0 + 14 h)+ 12 Δ2 +O(h4 ) 4 1 3 (3) 1 3 192 h y + 64 h Jy (x0 )+O(h ), hy (x0 + 12 h)−hF3 = Δ3 +O(h5 ), y(x0 +h)−Y4 = y(x0 +h)−hy0 +2hy (x0 + 14 h) −2hy (x0 + 12 h)−2Δ2 +O(h4 )
=
1 3 (3) 1 3 = − 48 h y − 16 h Jy (x0 )+O(h4 ) hy (x0 +h)−hF4
= −4Δ3 +O(h4 ),
y(x0 +h)−y1 = y(x0 +h)−y0 − 16 hy (x0 )− 23 hy (x0 + 12 h) − 16 hy (x0 +h)+O(h5 ) = O(h5 ). Exercise 14 (p. 30) The preconsisitency condition is ρ(1) = 32 − a1 = 0, implying a1 = 32 . The consistency condition then becomes ρ (1) − σ (1) = (2 − 32 ) − (b1 + 1) = 0, implying b1 = − 12 . The method (w2 − 32 w + 12 , − 12 w + 1) is stable because the roots of ρ(w) = 0 are 1 and 12 . Exercise 15 (p. 30) Using the relation w = 1 + z and writing every series in z only to z2 terms, we have ρ(1 + z)/z = (w3 − w2 )/(w − 1) = w2 = 1 + 2z + z2 , 1 2 σ (1 + z) = (1 + 2z + z2 )(1 + 12 z − 12 z ) 5 = 1 + 12 z + 23 12 =
23 2 4 5 12 w − 3 w + 12 .
Exercise 16 (p. 31) Use the relation w = 1 + z and write every series up to terms in z3 . ρ(1 + z)/z = (1 + z)2 ; 1 2 1 3 σ (1 + z) = (1 + 2z + z2 )(1 + 12 z − 12 z + 24 z ) 2 3 3 = 1 + 52 z + 23 12 z + 8 z 2 5 1 = 38 w3 + 19 24 w − 24 w + 24 .
Answers to the exercises
295
Exercise 17 (p. 36)
(a)
, (b)
, (c)
.
Exercise 18 (p. 36) (a) f ff f 2 , (b) f f ff f f, (c) f f(f f)2 .
Chapter 2 Exercise 19 (p. 40) The result uses induction on n = #V . For n = 1 there are no edges and each of the statements is true. For n > 1, the result is assumed for #V = n − 1. Add an additional vertex and an additional edge is also required to maintain connectivity without creating a loop. However, any additional edge will produce a loop. Exercise 20 (p. 47) t = [[τ 2 ][2 τ 2 ]2 = (τ ∗ ((τ ∗ τ) ∗ τ)) ∗ (τ ∗ ((τ ∗ τ) ∗ τ)) = τ2 τ2 τ 2 τ1 τ2 τ 2 . Exercise 21 (p. 47) [[τ 3 ]2 ], τ ∗ ((τ ∗ τ) ∗ τ) ∗ (τ ∗ τ ∗ τ). Exercise 22 (p. 49) The four trees, with the ∼ links shown symbolically, are
t33 = t1 ∗ t13 ∼ t13 ∗ t1 =t22 = t6 ∗ t2 ∼ t2 ∗ t6 =t24 = t15 ∗ t1 ∼ t1 ∗ t15 =t35 Exercise 23 (p. 49) The five trees, with the ∼ links shown symbolically, are
t32 = t1 ∗ t6 ∼ t6 ∗ t1 =t21 = t3 ∗ t4 ∼ t4 ∗ t3 =t27
= t7 ∗ t2 ∼ t2 ∗ t7 =t25 = t16 ∗ t1 ∼ t1 ∗ t16 =t35 Exercise 24 (p. 56) In the factors on the left of (2.4 a), the factor (1 − [τ])−1 must be removed because no descendants of any vertexcan contain .
296
Answers to the exercises
Exercise 25 (p. 69) First calculate p-weight(1 + 22 ) = 5!/1!2!2!2 = 15. The 15 results are 1 + 23 + 45, 1 + 24 + 35, 1 + 25 + 34, 2 + 13 + 45, 2 + 14 + 35, 2 + 15 + 34, 3 + 12 + 45, 3 + 14 + 25, 3 + 15 + 24, 4 + 12 + 35, 4 + 13 + 25, 4 + 15 + 23, 5 + 12 + 34, 5 + 13 + 24, 5 + 14 + 23.
Exercise 26 (p. 78) s01 s21 s01 s10 and s01 s01 s21 s10 .
Exercise 27 (p. 79) τ1
1
τ2 τ
τ 1 τ1
τ3 ττ
τ2 ττ1
τ1 τ2 τ
τ 1 τ1 τ1
1
1
τ1
τ2 τ
τ 1 τ1
τ3 ττ
τ2 ττ1
τ1 τ2 τ
τ 1 τ1 τ1
τ1
τ1
τ1 τ1
τ1 τ2 τ
τ1 τ1 τ1
τ1 τ3 ττ
τ1 τ2 ττ1
τ1 τ1 τ2 τ
τ1 τ1 τ1 τ1
τ2 τ
τ2 τ
τ2 ττ1
τ2 ττ2 τ
τ2 ττ1 τ1
τ2 ττ3 ττ
τ2 ττ2 ττ1
τ2 ττ1 τ2 τ
τ2 ττ1 τ1 τ1
τ1 τ1
τ1 τ1
τ1 τ1 τ1
τ1 τ1 τ2 τ
τ1 τ1 τ1 τ1
τ1 τ1 τ3 ττ
τ1 τ1 τ2 ττ1
τ1 τ1 τ1 τ2 τ
τ1 τ1 τ1 τ1 τ1
τ3 ττ
τ3 ττ
τ3 τττ1
τ3 τττ2 τ
τ3 τττ1 τ1
τ3 τττ3 ττ
τ3 τττ2 ττ1
τ3 τττ1 τ2 τ
τ3 τττ1 τ1 τ1
τ2 ττ1 τ2 ττ1 τ2 ττ1 τ1 τ2 ττ1 τ2 τ τ2 ττ1 τ1 τ1 τ2 ττ1 τ3 ττ τ2 ττ1 τ2 ττ1 τ2 ττ1 τ1 τ2 τ τ2 ττ1 τ1 τ1 τ1 τ1 τ2 τ τ1 τ2 τ τ1 τ2 ττ1 τ1 τ2 ττ2 τ τ1 τ2 ττ1 τ1 τ1 τ2 ττ3 ττ τ1 τ2 ττ2 ττ1 τ1 τ2 ττ1 τ2 τ τ1 τ2 ττ1 τ1 τ1 τ1 τ1 τ1 τ1 τ1 τ1 τ1 τ1 τ1 τ1 τ1 τ1 τ1 τ2 τ τ1 τ1 τ1 τ1 τ1 τ1 τ1 τ1 τ3 ττ τ1 τ1 τ1 τ2 ττ1 τ1 τ1 τ1 τ1 τ2 τ τ1 τ1 τ1 τ1 τ1 τ1
Exercise 28 (p. 87) Use the recursion t n = t n−1 ∗ τ starting with Δ (t 0 ) = Δ (τ) = 1 ⊗ τ + τ ⊗ ∅. By Theorem 2.8D, Δ (t n ) = Δ (t n−1 ) ∗ Δ (τ) n−1 n−1−i = ∑n−1 ⊗ t i + t n−1 ⊗ ∅ ∗ 1 ⊗ τ + τ ⊗ ∅ i τ i=0 n−1 n−1 n−1−i ⊗ t i ∗ 1 ⊗ τ + τ ⊗ ∅ + (t n−1 ⊗ ∅) ∗ 1 ⊗ τ + τ ⊗ ∅ = ∑i=0 i τ n−1 n−1−i n−1 n−1−i ⊗ t i ∗(1 ⊗ τ)+ ∑n−1 ⊗ t i ∗(τ ⊗ ∅)+(t n−1 ⊗ ∅)∗(τ ⊗ ∅) = ∑n−1 i τ i τ i=0 i=0 = ∑n−1 i=0 = ∑ni=0
n−1 n−1−i τ ⊗t i
n n−i τ ⊗t i
n−1 n−1 n−i ⊗ t i + (t n ⊗ ∅) i+1 + ∑i=0 i τ
i + (t n ⊗ ∅).
Exercise 29 (p. 87) Write Δ (t n ) = Dn + t n ⊗ ∅, with D0 = 1 ⊗ τ. To find Dn , Δ (t n ) = (1 ⊗ τ + τ ⊗ ∅) ∗ (Dn−1 + t n−1 ⊗ ∅) = (1 ⊗ τ) ∗ Dn−1 + t n−1 ⊗ τ + t n ⊗ ∅, and it follows that Dn = (1 ⊗ τ) ∗ Dn−1 + t n−1 ⊗ τ. It can be verified by induction that Dn = ∑n−1 i=1 t n−i ⊗ t i so that Δ (t n ) =
n−1
∑ tI ⊗ tn−i + tn ⊗ ∅.
i=1
Answers to the exercises
297
Exercise 30 (p. 90) Denote the vertices of t = [τ n ] by 0, 1, 2, . . . , n, where 0 is the root. The partitions of t are (a) n + 1 singleton vertices, (b) n − i singleton vertices and an additional tree [τ i ], i = 1, 2, . . . n − 1, and (c) the one element partition t. The signed partition contributed by (a) is (−1)n+1 τ n+1 , the signed partitions contributed by (b), with 1 ≤ i ≤ n − 1, are ni copies of −(−1)n−i [τ i ]τ n−i , and (c) contributes −[τ n ].
Exercise 31 (p. 90) The partitions of [3 τ]3 are
and the signed partitions, term by term, and then totalled, are τ 4 − τ 2 [τ] − τ 2 [τ] − τ 2 [τ] + τ[2 τ]2 + [τ]2 + τ[2 τ]2 − [3 τ]3 = τ 4 − 3τ 2 [τ] + 2τ[2 τ]2 + [τ]2 − [3 τ]3 .
Chapter 3 Exercise 32 (p. 105) Write the solution in the form y1 = y0 + a1 hF1 + a2 h2 F2 + 12 a3 h3 F3 + a4 F4 so that y1 = y0 + h f ( 12 (y0 + y1 )) implies a1 hF1 + a2 h2 F2 + 12 a3 h3 F3 + a4 F4 = hF1 + 12 a1 h2 F2 + 18 a21 h3 F3 + 12 a2 F4 + O(h3 ). By comparing coefficients, it follows that a1 = 1, a2 = 12 , a3 = a4 = 14 .
Exercise 33 (p. 105) = y0 ,
Y1 = y0 hF1 = h f (Y1 ) Y2 = y0 +
1 2 hF1
hF2 = h f (Y2 )
=
hE1 ,
= y0 + 12 hE1 , =
hE1
y1 = y0 + hF2 = y0 + hE1 giving a result identical with flow h to within O(h3 ).
1 3 + 14 h2 E2 + 24 h E3 , 1 3 + 14 h2 E2 + 24 h E3 ,
298
Answers to the exercises
Exercise 34 (p. 105) Write the output from flow h as y1 and derive the coefficients a1 , a2 , a3 , a4 in the following lines y1 = y0 + ha1 f + h2 a2 f f + 12 h3 a3 f ff + h3 a4 f f f + O(h3 ), h f (y1 ) = hf + ha1 f f + 12 h3 a21 f ff + h3 a2 f f f + O(h3 ),
h(d / d h)y1 = ha1 f + ha2 f f +
3 3 1 3 2 h a3 f ff + h a4 f f f + O(h ).
(1) (2)
Compare the coefficients in (1) and (2) to find a1 = 1, a2 = 12 , a3 = 13 , a4 = 16 . Finally substitute into (1) to give h f (y1 ) = hf + hf f + 12 h3 f ff + h3 12 f f f + O(h3 ).
Exercise 35 (p. 117) Let t = [t 1 t 2 · · · t n ]. Then (ED)(∅) = 0 = |∅|/∅!, (ED)(τ) = 1 = |τ|/τ!,
=
n
(ED)(t) = ∏ E(t i ) = 1 i=1
n
=
∏ ti ! = |t| i=1
n
|t| ∏ t i ! = |t|/t!. i=1
Exercise 36 (p. 118) Differentiate y (4) = f (3) y y y + 3f y y + f y (3) , to obtain y (5) = f (4) y y y y + 3f (3) y y y + 3 f (3) y y y + f y y + f y y (3) + f y y (3) + f y (4) = f (4) y y y y + 6f (3) y y y + 4f y y (3) + 3f y y + f y (4) .
Exercise 37 (p. 142) λ (a, t6 ) = a1 (a2 t1 + a1 t2 + t4 ) + (a2 t1 + a1 t2 + t4 ) ∗ (t1 ) = a1 a2 t1 + a21 t2 + a1 t4 + a2 t2 + a1 t3 + t6 = a1 a2 t1 + (a21 + a2 )t2 + a1 t4 + a1 t3 + t6 .
Exercise 38 (p. 142) λ (a, t6 ) = a2 (a1 t1 + t2 ) + (a1 t1 + t2 ) ∗ (a1 t1 + t2 ) = a1 a2 t1 + a2 t2 + a21 t2 + a1 t3 + a1 t4 + t6 = a1 a2 t1 + (a21 + a2 )t2 + a1 t4 + a1 t3 + t6 .
Answers to the exercises
299
Chapter 4 Exercise 39 (p. 155) ϕξ (τ) =
ξ 0
ξ
dξ,
= ξ,
ϕξ ([τ]) =
1 3 ξ , 3 0 ξ 1 ϕξ ([τ 3 ]) = ξ 3 d ξ , = ξ 4, 4 0 ξ 1 3 1 4 ϕξ ([[τ 2 ]]) = ξ dξ, = ξ , 12 0 2 ϕξ ([τ 2 ]) =
ξ2 dξ,
ϕξ ([[τ]] =
=
ϕξ ([τ[τ]]) = ϕξ ([[[τ]]]) =
ξ 0
ξ 0
ξ 0
ξ 0
1 2 ξ , 2 1 2 1 ξ d ξ , = ξ 3, 2 6 1 3 1 4 ξ dξ, = ξ , 2 8 1 3 1 4 ξ dξ, = ξ . 6 24
ξ dξ,
=
To find the Φ(t), substitute ξ = 1. The results are Φ(τ) = 1, Φ([τ]) = 12 , Φ([τ 2 ]) = 13 , 1 1 Φ([[τ]]) = 16 , Φ([τ 3 ]) = 14 , Φ([τ[τ]]) = 18 , Φ([[τ 2 ]]) = 12 , Φ([[[τ]]]) = 24 . Exercise 40 (p. 155) ϕξ (τ) = ξ
1 0
1
dξ,
= ξ,
ϕξ ([τ]) = ξ
1 ξ, 3 0 1 1 ϕξ ([τ 3 ]) = ξ ξ3 dξ, = ξ, 4 0 1 1 1 2 ϕξ ([[τ ]]) = ξ ξ dξ, = ξ, 4 0 2 ϕξ ([τ 2 ]) = ξ
ϕξ ([[τ]] = ξ
ξ2 dξ, =
ϕξ ([τ[τ]]) = ξ ϕξ ([[[τ]]]) = ξ
1 0
1 0
1 0
1 0
1 ξ, 2 1 1 ξ dξ, = ξ, 2 4 1 2 1 ξ dξ, = ξ, 2 6 1 1 ξ dξ, = ξ. 4 8
ξ dξ,
=
To find the Φ(t), substitute ξ = 1. The results are Φ(τ) = 1, Φ([τ]) = 12 , Φ([τ 2 ]) = 13 , Φ([[τ]]) = 14 , Φ([τ 3 ]) = 14 , Φ([τ[τ]]) = 16 , Φ([[τ 2 ]]) = 14 , Φ([[[τ]]]) = 18 . Exercise 41 (p. 158) It is observed that the stages can be reducd using P1 = {1, 4}. P2 = {2, 3}, giving the tableau 1 2
1 2
0
2 3
1 3
1 3
1
0
.
Only the first reduced stage is essential, and we get the final result 1 2
1 2
.
1
Exercise 42 (p. 169) The given set is a subgroup because β1 12 β 2 β3 12 β3 α1 12 α 2 α3 12 α3 = α1 + β1 21 (α1 + β1 )2 α1 β1 (α1 + β1 ) + α3 + β3 12 α1 β1 (α1 + β1 ) + α3 + β3 .
300
Answers to the exercises
Exercise 43 (p. 169) The H4 is a subgroup because (ab)1 = a1 + b1 , (ab)2 = a2 + a1 b1 + b2 = (a1 + b1 )2 = (ab)21 . To be a normal subgroup, x must exist such that xa = ab. This is solved by writing x1 = b1 , x2 = b2 , with xi , i = 3, 4, . . . , found recursively.
Chapter 5 Exercise 44 (p. 188) Expand (I − zA)−1 as a geometric series noting that As = 0. This gives 1 + ∑sn=1 bT An−1 = 1 + ∑sn=1 Φ([n 1]n )zn .
Exercise 45 (p. 188) Since p = s, Φ([n 1]n ) = 1/[n 1]n ! and it is only ncessary to verify by induction that [n 1]n ! = n!.
Exercise 46 (p. 189) Use (5.3 f).
R(z) =
> det(I + z(1bT − A)) = det(I − zA)
1 + 38 z 38 z 0 1
det ⎛⎡
7 z 1 − 24
det ⎝⎣
1 24 z 1 − 13 z
− 23 z
Exercise 47 (p. 191) 0 1 2
.
1 2
0
1
Exercise 48 (p. 192) 0 2 3
2 3
2 3
1 3
1 3
1 4
0
. 3 4
? ⎤⎞ = ⎦⎠
1 + 38 z 1 − 58 z + 18 z2
.
Answers to the exercises
301
Exercise 49 (p. 194) 0 1 3
1 3
3 4
− 21 32
45 32
1
7 3
− 12 5
16 15
1 9
9 20
16 45
. 1 12
Exercise 50 (p. 203) 1 2
√ 1 − 10 15 1 2
1 2
√ 1 + 10 15
5 36
√ 1 + 24 15 √ 5 1 36 + 20 15
2 9
5 36
5 18
√ 1 − 30 15 2 9
2 9
√ 1 + 30 15 4 9
√ 1 − 20 15 √ 5 1 36 − 24 15 5 36
5 36
.
5 18
Exercise 51 (p. 203) Pn (1) = 1, for all n. Therefore, Ps (1) − Ps−1 (1) = 0, for s ≥ 1. Exercise 52 (p. 203) √ 1 The zeros of P3 − P2 = 20x3 − 36x2 + 18x − 2 are 25 ∓ 10 6 and 1. Solve linear equations for A and T b . The final tableau is √ √ √ √ 2 1 11 7 37 169 2 1 − 225 + 75 6 5 − 10 6 45 − 360 6 225 − 1800 6 √ √ √ √ 2 1 37 169 11 7 2 1 − 225 − 75 6 5 + 10 6 225 + 1800 6 45 + 360 6 . √ √ 4 1 4 1 1 1 9 − 36 6 9 + 36 6 9 √ √ 4 1 4 1 1 9 − 36 6 9 + 36 6 9
Exercise 53 (p. 208) From the equations in (5.7 c), it follows that ∑i j bi (1 − ci )ai j c j (c j − c3 ) = left-hand side is zero, c3 = 25 .
1 60
1 − 24 c3 . Since the
Chapter 6 Exercise 54 (p. 216) In each case, z is in the stability region if the difference equation (1 − b0 z)yk = ∑ki=1 (ai + bi z)yk−i has only bounded solutions.
302
Answers to the exercises
Exercise 55 (p. 218) The characteristic polynomial of M is found to be w(w − 1)(w − 240μ+361 ). The zeros of this 121 | < 1 for μ ∈ (− 241 , −1). , which satisfies |w polynomial are 0, 1, w , where w = 240μ+361 121 120 Exercise 56 (p. 229) 1 −1 T= . 0 1 Exercise 57 (p. 235) (θ5 − (c2 + c4 )θ3 + c2 c4 θ2 )θ8 − (θ6 − c4 θ4 )(θ7 − c4 θ2 ) = 0 simplifies to c2 (1 + 2c4 ) = 0.
Chapter 7 Exercise 58 (p. 269) 0 − 13
− 13
− 13
896237 950913
− 1213208 950913
− 23 − 15257 23193
0
−1 − 4736 3591
0
4537 0 − 12800
.
205 − 23193
0
17759035623 15529062400
1145 3591
89068851 7731 − 3105812480 − 12800
1197 12800
Exercise 59 (p. 269) The matrix bT (c − c4 )
bT A
(c − c2 )c Ac
⎡ =⎣ ⎡ =⎣ ⎡ =⎣
bT (c − c4 )(c − c2 )c
bT (c − c4 )Ac
bT A(c − c2 )c
bT A2 c
bT (c − c4 )(c − c2 )c
bT (c − c4 )Ac
bT A(c − c2 )c
bT A2 c
Because t13 = [t22 ], the result is H f ff f.
⎦
ζ7 − c2 ζ4
ζ8
with the result c4 = − 10331 17166 .
Exercise 60 (p. 282)
⎤
ζ6 − c4 ζ4
2574900c2 c4 + 1453965c2 + 967440c4 + 688748 = 0. Substitute c2 =
⎦
ζ5 − (c2 + c4 )ζ3 + c2 c4 ζ 2
has rank 1 and its determinant is zero. This simplifies to 15924 14305 ,
⎤
⎤ ⎦
References
1. Ascher, U.M. and Petzold, L.R.: Computer Methods for Ordinary Differential Equations and Differential-Algebraic Equations, SIAM (1998) 2. Azamov, A,A. and Bekimov, M.A.: An approximation algorithm for quadratic dynamic systems based on N. Chomsky’s grammar for Taylor’s formula, Proc. Steklov Inst. Math. 293, S1–S5 (2016) 3. Bashforth, F. and Adams, J.C.: An Attempt to Test the Theories of Capillary Action by Comparing the Theoretical and Measured Forms of Drops of Fluid, with an Explanation of the Method of Integration Employed in Constructing the Tables which Give the Theoretical Forms of Such Drops. Cambridge University Press, Cambridge (1883) 4. Brouder, C.: Runge–Kutta methods and renormalization, Eur. Phys. J. C. 12, 521–534 (2000) 5. Burrage, K.: A special family of RungeKutta methods for solving stiff differential equations, BIT 18 22–41 (1978) 6. Burrage K. and Butcher, J.C.: Non-linear stability of a general class of differential equation methods. BIT 20, 185–203 (1980) 7. Butcher, J.C.: Coefficients for the study of Runge–Kutta integration processes. J. Austral. Math. Soc. 3, 185–201 (1963) 8. Butcher, J.C.: On the integration processes of A. Hut’a, J. Austral. Math. Soc. 3, 202–206 (1963) 9. Butcher, J.C.: Implicit RungeKutta processes, Math. Comp. 18, 50–64 (1964) 10. Butcher, J.C.: A modified multistep method for the numerical integration of ordinary differential equations. J. Assoc. Comput. Mach. 12, 124–135 (1965) 11. Butcher, J.C.: On the attainable order of Runge–Kutta methods, Math. Comp. 19, 408–417 (1965) 12. Butcher, J.C.: On the convergence of numerical solutions to ordinary differential equations, Math. Comp. 20, 1–10 (1966) 13. Butcher, J.C.: The effective order of Runge–Kutta methods, Lecture Notes in Math. 109, 133–139 (1969) 14. Butcher, J.C.: An algebraic theory of integration methods. Math. Comp. 26, 79–106 (1972) 15. Butcher, J.C.: A stability property of implicit Runge–Kutta methods. BIT 15, 358–381 (1975) 16. Butcher, J.C.: The nonexistence of ten-stage eighth order explicit Runge–Kutta methods, BIT 25, 521–540 (1985) 17. Butcher, J.C.: The Numerical Analysis of Ordinary Differential Equations, Runge–Kutta and General Linear Methods, John Wiley & Sons Ltd, Chichester (1987) 18. Butcher, J.C.: An introduction to “Almost Runge–Kutta” methods. Appl. Numer. Math. 24, 331–342 (1997) 19. Butcher, J.C.: General linear methods, Acta Numerica 15,157–256 (2006) 20. Butcher, J.C.: Numerical Methods for Ordinary Differential Equations (Third Edition), John Wiley & Sons, Chichester (2016)
© Springer Nature Switzerland AG 2021 J. C. Butcher, B-Series, Springer Series in Computational Mathematics 55, https://doi.org/10.1007/978-3-030-70956-3
303
304
Index
A (generating function), 54
algebra
    co-, 82
    Hopf, 44, 88
    linear, 95, 97
α (combinatorial), 60, 61
antipode, 43, 88, 91, 145
    B+ recursion, 93
    involution property, 88, 91
arborescence, 2
atom, 77
autonomous, 4, 24
B (generating function), 54
B, B∗, B0, 113
B-group = Butcher group, 165
B+, 44–47, 50, 59, 86
B-series, 3, 33, 41, 65, 100, 110, 113, 125
    central, 134
    composition, 100, 133
    fractional powers, 147
    inverse, 145
balanced parentheses, 45
β (combinatorial), 60, 61
beta-product, 45, 47, 84, 85
    iterated, 46
    recursion, 92
BNF (formal languages), 45
C (generating function), 54
combinatorics, 39
composition, 43, 59, 72
conformability, 259
    weak, 259
conjugacy test, 241
conjugate order, 219
conservation of angular momentum, 13, 247
conservation of energy, 13, 247
consistency, 29, 32
convergence, 29, 32
covariance, 101
D ∈ B0, 116
Dahlquist barrier, 216
    second, 217
Δ (Sweedler notation), 82
DETEST, 12
differential equation, 1, 4
    linear, 8
distributive, 54
E ∈ B, 115
edge, 39
effective order, 219
elementary differential, 35, 41, 100, 103, 110, 112–114, 127, 131
    attainable value, 129
    perturbed, 137
elementary weight, 100, 124, 126
    attainable value, 129
energy-preserving, 25
equivalence class, 48
evolution, 65
evolve, 70
exponential growth and decay, 8
exponential integrator, 79
flow, 1
forest, 39, 41, 50, 75
    antipode, 90
    labelled, 75
    space, 53
    span ·, 52
    universal, 76
formal languages, 45
formal Taylor series, 34
Fréchet derivative, 3, 34, 100, 106
generating function, 54
∇ (gradient), 3, 4
graph, 2, 39
    connected, 39, 40
    loop in, 39
    order, 40
    path in, 39
graph theory, 39
group, 58
    abelian, 54
Hamiltonian function, 250
Heaviside function, 24
index set, 24
initial value, 5
invariant subspace, 240
involution, 91
isomer, 77
J (operator on B0), 175
Jacobian matrix, 3
Kronecker product, 100, 127
λ, 141, 142
Λ, 95, 145
Lipschitz condition, 6
    local, 7
    constant, 6
loop, 39
mapping
    central, 102, 113, 134
    composition, 133
    differential, 102
    Euler, 105, 115
    flow, 105, 111, 115, 117, 122, 123
    flow-slope, 105, 111
    id, 105, 115
    implicit, 105, 110, 111, 117, 121, 123, 124
    mid-point, 105
    Runge-I, 105, 111
    Runge-II, 105, 111
    slope, 105, 115, 116
Merson, 74
method
    Adams–Bashforth, 29, 30, 211, 224
    Adams–Moulton, 29, 211
    adjoint, 23
    Almost Runge–Kutta, 227
    average vector field, 25, 153, 248, 285
    backward difference, 211
    backward difference representation, 230
    continuous Runge–Kutta, 152
    cyclic composite, 211, 218, 226
    energy preserving, 281
    Euler, 3, 14, 30, 33
        explicit, 7
        implicit, 7, 22
    G-symplectic, 248, 262, 263, 266, 270
        G4123, 266, 277, 278
        G6245, 272, 276, 277, 279
    Gauss, 23
    general linear, 31, 212
        algebraically stable, 263
        conformability, 264
        consistency, 222
        convergence, 223
        finishing, 232
        order, 231
        order algorithm, 239
        pre-consistency, 222
        reducibility, 152
        stability, 222
        starting, 232
        time-reversal symmetric, 271
        weak conformability, 265
    implicit mid-point, 22
    integration, 151, 152
        A-equivalent stages, 159
        composition, 163
        elementary weight, 154, 163
        ϕ-equivalent stages, 159
        reduced, 158
    inverse, 23
    inverting, 161
    linear multistep, 28, 213, 217, 224
        A-stability, 215
        consistency, 213
        convergence, 214
        formulation, 220
        order, 214
        stability, 214, 215
    multivalue, 28
    Nordsieck representation, 230
    numerical, 1
        accuracy of, 2
    off-step, 217, 226
    off-step predictor, 211
    one-leg, 216
    Picard, 153
    predictor-corrector, 211
    pseudo Runge–Kutta, 212
    re-use, 212, 219, 225
    Rosenbrock, 79
    Runge–Kutta, 1, 19, 75, 132, 148, 154, 170, 172, 224, 235
        ambiguous order, 183
        block diagonally implicit, 258
        canonical, 247, 252, 259
        Chebyshev, 190
        composition, 152, 160
        continuous stage, 25
        cyclic composite, 220
        diagonally implicit, 257
        effective order, 205, 219
        elementary weight, 170
        equivalence, 25, 155
        equivalent, 25, 26
        explicit, 189
        Gauss, 202, 203, 256
        Gill, 195
        Hairer, 199
        Hammer and Hollingsworth, 202
        high order, 197
        Hut'a, 186
        implementable, 203
        implicit, 22, 201
        Kutta's classification, 195
        low order, 190
        order, 21, 78, 99, 124, 177
        order barrier, 199
        order conditions (symplectic), 254
        P-equivalence, 156
        ϕ-equivalence, 156
        pre-reduced, 157
        processing, 247
        Radau IIA, 27, 202, 203
        reduced, 158
        reduced tableau, 193
        reducibility, 151
        reducible, 155
        scalar, 178
        simplifying assumptions, 197
        singly-implicit, 202
        stability, 177, 187
        stability function, 187
        stage-order, 172
        symplectic, 41, 247, 252
        tableau, 21
        transformed, 204
    starting, 213, 235, 238
    Taylor series, 14, 18, 33
    theta, 16, 22, 34, 35
    transformation, 228
    underlying one-step, 213, 240
moments of inertia, 250
monoid, 41
    commutative, 54
multi-dimensional, 107, 108
multinomial coefficient, 69
multiplicative function, 94
Newton iteration, 23, 145
1 + O^{p+1} = order-defining subgroup, 152, 166
operator
    bilinear, 103, 106, 108
    linear, 106, 107
order with processing, 258
parasitism, 264
partition
    evolution, 70
    of number, 65
    of set, 65
Picard–Lindelöf theorem, 153
Polish notation, 46, 82, 106, 108
polynomial
    Laguerre, 205
    Legendre, 203
preconsistency, 29, 32
problem, 100, 101
    diamond, 12, 17
    Euler rigid body, 250
    Hamiltonian, 248, 250
    harmonic oscillator, 9
    Hénon–Heiles, 277
    initial value, 1, 5, 99
        autonomous, 110
    Kepler, 13, 16, 26
    Poisson, 8, 250
    Prothero–Robinson, 12
    Runge, 11
    scalar, 5
    simple pendulum, 9, 277
    stiff, 10
    symplectic, 252
    variational, 252
    well-posed, 6
pruning, 82, 84, 94
p-weight, 68, 69
quadratic identity, 253
quadratic invariant, 248
question, 100, 101
    easy set, 105
recursion, 44, 84
ρ (polynomial), 28, 213
ring, 54
    commutative, 54
root condition, 29
semi-group, 41
σ (polynomial), 28, 213
σ (symmetry), 58
stability, 29, 32
stability interval, 188
stability region, 188
stage, 19, 22, 24, 25, 28, 31
stepsize, 19
Stone–Weierstrass theorem, 129
stump, 76
    atomic, 77
    product, 77
    sub-atomic, 80
    uni-valent, 79
subtree, 84, 94
supertree, 84
Sweedler notation, 82, 95, 139
symbolic matrix, 95
    antipode, 96
symbolic vector, 95
symplectic flow, 252
Taylor series, 2, 21, 100, 103, 107–112
trapezoidal rule, 20
tree, 2, 35, 39, 44, 50
    antipode, 88
    automorphism, 58
    Cayley, 42
    density, 59
    empty, 40, 75
    enumeration, 54
    evolution, 72
    t! (factorial of t), 59
    free, 40, 44
    height, 52
    isomeric, 77, 183
    kinship, 42
    labelled, 81
    leaf, 52
    N-, 48
    offcut, 80, 81
    |t| (order of t), 2, 44
    partition, 50
        order, 50
    pruning, 43, 80
    root, 40, 52
    rooted, 2, 40, 41, 44
    S-, 48
    space, 53
    standard numbering, 61
    sub, 43, 80, 87
    subtree, 43
    super, 43, 80, 87
    superfluous, 48
    supertree, 43
    symmetry, 2, 58, 113
    τ, 44
    unrooted, 40, 41, 44, 48
    width, 52
truncation error, 21
tuple, 75
variable stepsize, 15
vertex, 39
    ancestor, 52
    child, 52
    dependant, 52
    descendant, 52
    parent, 52
    valency, 76