Introduction to Mathematical Methods of Analytical Mechanics
To Françoise
Series Editor Roger Prud’homme
Introduction to Mathematical Methods of Analytical Mechanics
Henri Gouin
First published 2020 in Great Britain and the United States by ISTE Press Ltd and Elsevier Ltd
Apart from any fair dealing for the purposes of research or private study, or criticism or review, as permitted under the Copyright, Designs and Patents Act 1988, this publication may only be reproduced, stored or transmitted, in any form or by any means, with the prior permission in writing of the publishers, or in the case of reprographic reproduction in accordance with the terms and licenses issued by the CLA. Enquiries concerning reproduction outside these terms should be sent to the publishers at the undermentioned address:

ISTE Press Ltd
27-37 St George's Road
London SW19 4EU
UK

Elsevier Ltd
The Boulevard, Langford Lane
Kidlington, Oxford OX5 1GB
UK
www.iste.co.uk
www.elsevier.com
Notices

Knowledge and best practice in this field are constantly changing. As new research and experience broaden our understanding, changes in research methods, professional practices, or medical treatment may become necessary. Practitioners and researchers must always rely on their own experience and knowledge in evaluating and using any information, methods, compounds, or experiments described herein. In using such information or methods they should be mindful of their own safety and the safety of others, including parties for whom they have a professional responsibility. To the fullest extent of the law, neither the Publisher nor the authors, contributors, or editors assume any liability for any injury and/or damage to persons or property as a matter of products liability, negligence or otherwise, or from any use or operation of any methods, products, instructions, or ideas contained in the material herein.

For information on all our publications visit our website at http://store.elsevier.com/

© ISTE Press Ltd 2020
The rights of Henri Gouin to be identified as the author of this work have been asserted by him in accordance with the Copyright, Designs and Patents Act 1988.

British Library Cataloguing-in-Publication Data
A CIP record for this book is available from the British Library

Library of Congress Cataloging in Publication Data
A catalog record for this book is available from the Library of Congress

ISBN 978-1-78548-315-8

Printed and bound in the UK and US
Preface
The objective of this book is to offer an overview of the geometric methods of the calculus of variations and of how they can be applied in analytical mechanics. This is followed by a study of the properties of the spaces associated with mechanical systems with a finite number of degrees of freedom.

The book was inspired, in part, by methods proposed by Pierre Casal, a former Professor at the Faculty of Sciences of Marseilles University. These methods were taken up again and developed during a course that I taught to students in the third year of the Applied Mathematics Bachelor's program. The mathematical tools used throughout the book are those of elementary algebra, analysis and differential geometry. The book does not require mathematical tools beyond the scope of a third-year university student (readers may refer, among others, to the works of Queysane (1971), Couty and Ezra (1980), Martin (1967) and Brousse (1968)).

Part 1

In the first part of the book, we present the geometric methods used in the calculus of variations. Free extrema, and extrema subject to integral or non-integral constraints, are studied; these make it possible to introduce the concept of the Lagrange multiplier. An initial study of the Hamilton equations (it should be noted that these equations are likely to have been first written by Huygens, the symbol H corresponding to his initials), associated with the concept of a generating function, follows from these methods. The search for the geodesics of surfaces is also a natural application that uses differential geometry. These methods of differential geometry can be extended to the calculation of the variation of curvilinear integrals. Two forms of variations can be made explicit and discussed: the first form uses the concept of variation of a vectorial derivative; the second form is similar to finding the optical path followed by light in a medium with
a variable refractive index. This leads to the study of Descartes' laws and of isoperimetric problems. Noether's theorem, groups of invariance associated with differential equations and the concept of a Lie group are natural extensions of these calculations. Thus, Fermat's principle, associated with the optical path followed by light, leads to finding first integrals. The tools used are related to tensor calculus, which brings in the structure of a vector space and its dual space.

Part 2

The second part of the book presents the application of the tools discussed earlier to the mechanics of material systems with a finite number of degrees of freedom. After briefly reviewing the d'Alembert principle, we introduce the concept of the Lagrangian, defined in space-time by the homogeneous Lagrangian. We find that the first integrals in mechanics are associated with Noether's theorem. The reintroduction of partial results leads to the Maupertuis principle and to the introduction of Riemannian geometry in the case of the conservation of energy; this is the foundation of de Broglie's wave mechanics. The integration methods for the equations of mechanics are analyzed using the Jacobi method and its application in the important case of Liouville integrability. This part continues with the concepts of angle variables and action variables for periodic and multi-periodic motions, introduced by Delaunay, which are especially useful in celestial mechanics. The second part concludes with the study of the spaces of analytical mechanics, including the various phase spaces. The concepts of dynamic variables, Lie brackets, Poisson brackets and Lie algebras are natural developments of this study. We then briefly return to first integrals when studying the Poisson bracket of two dynamic variables.

Canonical transformations, which conserve the form of the mechanical equations in different dynamic variables, lead to the concept of the symplectic scalar product, which is the first step in the study of symplectic geometry.

Part 3

The third part presents some simple applications of differential equations to mechanical systems. The concept of flow in phase space leads to the Liouville theorem, which is essential in statistical mechanics. It corresponds to the conservation of volume in this space and can be interpreted using the Poincaré recurrence theorem. The small motions of mechanical systems are analyzed using the specific case of the Weierstrass discussion. The equilibrium positions of the systems associated with autonomous differential equations bring us to the concepts of Lyapunov stability and asymptotic stability. Sufficient conditions for stability are presented in the context of the Lejeune Dirichlet theorem. The concept of linearization of differential equations in the neighborhood of an equilibrium position makes it possible to study the small oscillations of dynamic Lagrangian systems and their frequencies, as well as the disturbances of these systems. This part ends with a discussion
on the stability of periodic systems and the topology of phase spaces, with the extension of the concepts of Lyapunov stability and asymptotic stability, and the new concept of strong stability in the neighborhood of an equilibrium position. The Hill and Mathieu equations allow us to apply these concepts.

At the end of the book, the reader will find a collection of exercises on geometrical and mechanical applications, followed by bibliographical references and an index.

I would like to thank Françoise for reviewing the proofs and helping me with her valuable comments.

Henri Gouin
October 2019
Mathematicians, Physicists and Astronomers Cited in this Book
Jean le Rond d'Alembert (1717–1783), French mathematician, physicist and philosopher.
Vladimir Igorevitch Arnold (1937–2010), Russian mathematician.
Louis de Broglie (1892–1987), French mathematician and physicist.
Charles-Eugène Delaunay (1816–1872), French mathematician.
René Descartes (1596–1650), French mathematician, physicist and philosopher.
Pierre de Fermat (1601–1665), French mathematician.
William Rowan Hamilton (1805–1865), Irish mathematician, physicist and astronomer.
George Hill (1838–1914), American mathematician and astronomer.
Christiaan Huygens (1629–1695), Dutch mathematician.
Carl Jacobi (1804–1851), German mathematician.
Johann Lejeune Dirichlet (1805–1859), German mathematician.
Sophus Lie (1842–1899), Norwegian mathematician.
Joseph Liouville (1809–1882), French mathematician.
Aleksandr Mikhailovitch Lyapunov (1857–1918), Russian mathematician.
Émile Mathieu (1835–1890), French mathematician.
Pierre de Maupertuis (1698–1759), French mathematician, physicist, astronomer and naturalist.
Amalie Emmy Noether (1882–1939), German mathematician.
Henri Poincaré (1854–1912), French mathematician, physicist and philosopher.
Siméon Denis Poisson (1781–1840), French mathematician and physicist.
Bernhard Riemann (1826–1866), German mathematician.
Karl Weierstrass (1815–1897), German mathematician.
Important Notations
$^{\top}$ : transpose operation in a vector space

$\boldsymbol{x}, \boldsymbol{Q}, \dots$ : vectors in a vector space, represented in bold italics

$\boldsymbol{x}^{\top}, \boldsymbol{Q}^{\top}, \dots$ : linear (transposed) forms of the vectors of a vector space

$[x_1, \dots, x_n]^{\top}$, $[q_1, \dots, q_n]^{\top}$ : $n$-tuples of $\mathbb{R}^n$ written as columns of the elements of the vectors $\boldsymbol{x}$, $\boldsymbol{Q}$ in the canonical basis

$[x_1, \dots, x_n]$, $[q_1, \dots, q_n]$ : $n$-tuples written as rows, the linear forms of components of $\boldsymbol{x}^{\top}$, $\boldsymbol{Q}^{\top}$, elements of $\mathbb{R}^{n*}$, where $\mathbb{R}^{n*}$ denotes the dual of $\mathbb{R}^n$

$\dfrac{\partial V}{\partial \boldsymbol{x}}$ : linear mapping defined by the relation $dV = \dfrac{\partial V}{\partial \boldsymbol{x}}\, d\boldsymbol{x}$ and represented by the matrix
$$\begin{bmatrix} \dfrac{\partial v_1(x_1, \dots, x_n)}{\partial x_1} & \cdots & \dfrac{\partial v_1(x_1, \dots, x_n)}{\partial x_n} \\ \vdots & & \vdots \\ \dfrac{\partial v_n(x_1, \dots, x_n)}{\partial x_1} & \cdots & \dfrac{\partial v_n(x_1, \dots, x_n)}{\partial x_n} \end{bmatrix},$$
where $V$, a function of $\boldsymbol{x}$, is represented by the column matrix $[v_1, \dots, v_n]^{\top}$

$\dot{a} = \dfrac{da}{dt}$ : derivative of $a$ with respect to time, in Newtonian notation; similarly, $\dot{\boldsymbol{Q}} = \dfrac{d\boldsymbol{Q}}{dt}$

$\mathbf{1}$ : identity tensor in a vector space

$\operatorname{grad}$ : gradient in a vector space

$\operatorname{rot}$ : rotational (curl) in $\mathbb{R}^3$
1 Elementary Methods to the Calculus of Variations
The calculus of variations comprises all the methods that make it possible to solve extremum problems. Numerous problems in physics can be solved using variational methods. In mechanics, for example, an equilibrium position is one where the potential of the forces applied to the system under consideration is an extremum. In optics, the optical path followed by light is an extremum. In capillarity, the surface area of bubbles and drops of given volume is minimal. We will see that a non-dissipative motion is one that makes the Hamiltonian action an extremum.

The problems that we study are introduced in different forms:

– In numerical form: the unknown consists of a set of scalars or functions. In this case, the calculus of variations is carried out using elementary differential calculus. This is the case for $n$ scalars $\boldsymbol{x} = (x_1, \dots, x_n)$, an element of $\mathbb{R}^n$, or for functions of the form $t \in [t_1, t_2] \subset \mathbb{R} \longrightarrow \varphi(t) \in \mathbb{R}^n$.

– In geometric form: the unknown is represented by a set of points, curves or surfaces.

1.1. First free extremum problems

The unknown is composed of $n$ scalars $\boldsymbol{x} = (x_1, \dots, x_n) \in \mathbb{R}^n$. We wish to determine the values of $\boldsymbol{x}$ such that $a = G(\boldsymbol{x})$ is an extremum, where $G$, assumed to be differentiable, is a mapping from $\mathbb{R}^n$ to $\mathbb{R}$. The reasoning used is as follows:
let us write $d\boldsymbol{x} = [dx_1, \dots, dx_n]^{\top}$, an $n$-tuple of $\mathbb{R}^n$, which we name the variation of $\boldsymbol{x}$. We can derive the value of the differential of $G(\boldsymbol{x})$,
$$da = \frac{\partial G}{\partial x_1}(x_1, \dots, x_n)\, dx_1 + \dots + \frac{\partial G}{\partial x_n}(x_1, \dots, x_n)\, dx_n,$$
which can be written in matrix form as
$$da = \left[ \frac{\partial G}{\partial x_1}(x_1, \dots, x_n), \dots, \frac{\partial G}{\partial x_n}(x_1, \dots, x_n) \right] \begin{bmatrix} dx_1 \\ \vdots \\ dx_n \end{bmatrix}.$$

The column matrix $[dx_1, \dots, dx_n]^{\top}$ represents a vector of $\mathbb{R}^n$ given by its elements in the canonical basis of $\mathbb{R}^n$, $\boldsymbol{e}_1 = [1, 0, \dots, 0]^{\top}, \dots, \boldsymbol{e}_n = [0, \dots, 0, 1]^{\top}$. Vectors will not be written with an arrow, but in bold italic letters. The row matrix
$$\left[ \frac{\partial G}{\partial x_1}(x_1, \dots, x_n), \dots, \frac{\partial G}{\partial x_n}(x_1, \dots, x_n) \right]$$
represents a linear form, an element of the dual space of $\mathbb{R}^n$, denoted by $\mathbb{R}^{n*}$ (the dual space is also designated by $L(\mathbb{R}^n, \mathbb{R})$). This linear form is expressed, in the dual basis $\boldsymbol{e}_1^{\top} = [1, 0, \dots, 0], \dots, \boldsymbol{e}_n^{\top} = [0, \dots, 0, 1]$ of the basis $\boldsymbol{e}_1, \dots, \boldsymbol{e}_n$, which satisfies $\boldsymbol{e}_i^{\top} \boldsymbol{e}_j = \delta_{ij}$, where $\delta_{ij}$ is the Kronecker symbol, by a row matrix with $n$ columns. Here, $^{\top}$ designates the transposition in $\mathbb{R}^n$, assumed to be Euclidean. We have
$$G'(\boldsymbol{x}) = \frac{\partial G}{\partial x_1}(x_1, \dots, x_n)\, \boldsymbol{e}_1^{\top} + \dots + \frac{\partial G}{\partial x_n}(x_1, \dots, x_n)\, \boldsymbol{e}_n^{\top}$$
and
$$d\boldsymbol{x} = dx_1\, \boldsymbol{e}_1 + \dots + dx_n\, \boldsymbol{e}_n,$$
and it is possible to write $da = G'(\boldsymbol{x})\, d\boldsymbol{x}$, which is called the variation of $a$. For reasons that will be understood later in the book, we write $\delta \boldsymbol{x}$ and $\delta a$ instead of $d\boldsymbol{x}$ and $da$, respectively.

DEFINITION 1.1.– $a$ is an extremum at $\boldsymbol{x}$ if and only if, for any variation $\delta \boldsymbol{x}$, the variation $\delta a$ is zero. This assertion can be written as
$$\forall\, \delta \boldsymbol{x}, \quad \delta a \equiv G'(\boldsymbol{x})\, \delta \boldsymbol{x} = 0,$$
and, as a result, $G'(\boldsymbol{x}) = 0$, where $G'(\boldsymbol{x})$ is a linear form of $\mathbb{R}^{n*}$, which can be developed as
$$\frac{\partial G}{\partial x_1}(x_1, \dots, x_n) = 0, \quad \dots, \quad \frac{\partial G}{\partial x_n}(x_1, \dots, x_n) = 0.$$

The definition is a common one in the calculus of variations. It is nevertheless preferable to speak of a stationary point rather than an extremum, as the simple example below shows.

EXAMPLE 1.1.– Consider the mapping $(x, y) \in O \subset \mathbb{R}^2 \longrightarrow f(x, y) \in \mathbb{R}$, where $O$ is an open set of $\mathbb{R}^2$. It is assumed that $f(x, y)$ has an extremum at $(x_0, y_0) \in O$. Without loss of generality, it can be assumed that $(x_0, y_0) = (0, 0)$ and $f(x_0, y_0) = 0$. The point $(0, 0)$ corresponds to a maximum of $f(x, y)$ if and only if there exists $r > 0$ such that for any $(x, y)$ satisfying $0 \le x^2 + y^2 \le r^2$, we have $f(x, y) \le 0$. Similarly, the point $(0, 0)$ corresponds to a minimum of $f(x, y)$ if and only if there exists $r > 0$ such that for any $(x, y)$ satisfying $0 \le x^2 + y^2 \le r^2$, we have $f(x, y) \ge 0$.

Assume that $f$ belongs to the $C^2$ class in the neighborhood of $(0, 0)$; then $f(x, 0)$ and $f(0, y)$ have extrema at $(0, 0)$. Hence, $f'_x(0, 0) = 0$ and $f'_y(0, 0) = 0$. If these conditions are satisfied, the second-order Taylor–Young formula implies
$$f(x, y) = \frac{1}{2} \left( \alpha\, x^2 + 2\, \beta\, x y + \gamma\, y^2 \right) + \left( x^2 + y^2 \right) \varepsilon(x, y),$$
where $\lim \varepsilon(x, y) = 0$ when $(x, y) \longrightarrow (0, 0)$, with
$$\alpha = \frac{\partial^2 f}{\partial x^2}(0, 0), \quad \beta = \frac{\partial^2 f}{\partial x\, \partial y}(0, 0), \quad \gamma = \frac{\partial^2 f}{\partial y^2}(0, 0).$$
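The stationarity conditions and the coefficients of the Taylor–Young expansion can be checked with a short symbolic computation. The following sketch is an illustration added here, not part of the text; the function $f(x, y) = x^2 - y^2$ is a hypothetical example whose stationary point at the origin is not an extremum.

```python
# Symbolic check of the second-order Taylor-Young data at a stationary point.
# The function f(x, y) = x**2 - y**2 is a hypothetical example (not from the
# text): (0, 0) is stationary, but the quadratic form is indefinite.
import sympy as sp

x, y = sp.symbols('x y')
f = x**2 - y**2

# First derivatives vanish at (0, 0): the stationarity condition.
fx = sp.diff(f, x).subs({x: 0, y: 0})
fy = sp.diff(f, y).subs({x: 0, y: 0})

# Coefficients of the quadratic form (1/2)(alpha x^2 + 2 beta x y + gamma y^2).
alpha = sp.diff(f, x, 2).subs({x: 0, y: 0})
beta = sp.diff(f, x, y).subs({x: 0, y: 0})
gamma = sp.diff(f, y, 2).subs({x: 0, y: 0})

discriminant = beta**2 - alpha * gamma   # > 0 here: a saddle, not an extremum
print(fx, fy, alpha, beta, gamma, discriminant)
```

Running this confirms that both first derivatives vanish while the quadratic form is indefinite, so the origin is a stationary point without being an extremum.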
Consequences: if $\beta^2 - \alpha\gamma < 0$, we have a minimum when $\alpha > 0$ and a maximum when $\alpha < 0$; if $\beta^2 - \alpha\gamma > 0$, the quadratic form $\alpha x^2 + 2\beta xy + \gamma y^2$ is not definite, and there is no extremum of $f(x, y)$, but a saddle point. If $\beta^2 - \alpha\gamma = 0$, we cannot arrive at any conclusion; for example, in the case where all second derivatives are zero at $(0, 0)$ and a third derivative is non-zero, there is neither a maximum nor a minimum. It is nonetheless important to note that the definition of an extremum is associated with the condition $G'(\boldsymbol{x}) = 0$, which in all cases is called the stationarity condition.

EXAMPLE 1.2.– Consider three points $A$, $B$ and $C$ in a plane. We wish to find a point $M$ of the plane such that $l(M) \equiv MA + MB + MC$ is of minimum length. Given that $l(M)$ is continuous and greater than zero, the length $l(M)$ admits a lower bound. Since it is possible to restrict ourselves to a compact domain of the plane (such as the domain bounded by a disk of sufficiently large radius), the minimum is attained. Let us calculate the variation $\delta l(M)$:
$$\delta l(M) \equiv \delta MA + \delta MB + \delta MC.$$
We first compute $\delta MA$; let us carry out the calculation using orthonormal axes with origin $A$; for $M$ with coordinates $x$ and $y$, $r = MA = \sqrt{x^2 + y^2}$. We derive
$$\delta MA = \frac{x}{\sqrt{x^2 + y^2}}\, \delta x + \frac{y}{\sqrt{x^2 + y^2}}\, \delta y = (\operatorname{grad} r)^{\top}\, \delta M, \quad \text{where} \quad \operatorname{grad} r = \boldsymbol{u}_A \quad \text{with} \quad \boldsymbol{u}_A = \frac{AM}{\lVert AM \rVert}.$$
The bipoint $AM$ corresponds to the vector with tail $A$ and head $M$. The computation is carried out in the same way for the points $B$ and $C$, and we obtain
$$\delta l(M) = 0 \iff \boldsymbol{u}_A + \boldsymbol{u}_B + \boldsymbol{u}_C = \boldsymbol{0}.$$
The point $M$ then belongs to the arc from which the segment $AB$ is seen at the angle $\frac{2\pi}{3}$. The same holds for $BC$ and for $AC$. In order for the arcs to have a common point, it is essential that $|\widehat{ABC}| < \frac{2\pi}{3}$; no angle of triangle $ABC$ can be larger than $\frac{2\pi}{3}$. The process for finding the point $M$ is represented in Figure 1.1.

In the case where one of the vertices of triangle $ABC$ has an angle greater than $\frac{2\pi}{3}$, the point $M$ cannot be found at any point of the plane other than $A$, $B$ or $C$. Indeed, we know that an extremum exists, and $A$, $B$ and $C$ are the points at which $l(M)$ is not differentiable; at these points, the calculation given above is not valid. It must also be noted that there exists a single solution, and this single solution is a minimum.
Figure 1.1. Representation of the triangle $ABC$ in the case where no vertex has an angle greater than or equal to $\frac{2\pi}{3}$. The point $M$ corresponds to the minimum of the distance $MA + MB + MC$. For a color version of the figures in this chapter, see www.iste.co.uk/gouin/mechanics.zip
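The stationarity condition $\boldsymbol{u}_A + \boldsymbol{u}_B + \boldsymbol{u}_C = \boldsymbol{0}$ of Example 1.2 can be verified numerically. The sketch below is an illustration added here, not part of the text; it uses an equilateral triangle, a hypothetical configuration for which the minimizing point $M$ is known to be the centroid.

```python
# Numeric check of u_A + u_B + u_C = 0 at the point M minimizing MA + MB + MC.
# For an equilateral triangle, M is the centroid (a known special case,
# used here as a hypothetical test configuration).
import numpy as np

A = np.array([0.0, 0.0])
B = np.array([1.0, 0.0])
C = np.array([0.5, np.sqrt(3) / 2])
M = (A + B + C) / 3          # centroid = minimizing point for this triangle

def unit(P, M):
    """Unit vector u_P = PM / ||PM||, pointing from the vertex P towards M."""
    v = M - P
    return v / np.linalg.norm(v)

s = unit(A, M) + unit(B, M) + unit(C, M)
print(np.linalg.norm(s))     # ~ 0: the three unit vectors cancel
```

The three unit vectors are pairwise separated by the angle $\frac{2\pi}{3}$, so their sum vanishes, in agreement with the geometric discussion above.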
1.2. First constrained extremum problem – Lagrange multipliers

1.2.1. Example of Lagrange multiplier

THEOREM 1.1.– Let $A$ be a linear mapping from $\mathbb{R}^n$ to $\mathbb{R}^q$ and let $B$ be a linear mapping from $\mathbb{R}^n$ to $\mathbb{R}^p$. The statement

for every vector $\boldsymbol{V}$ of $\mathbb{R}^n$, $B\boldsymbol{V} = 0$ implies $A\boldsymbol{V} = 0$

is equivalent to: there exists a linear mapping $\Lambda$ from $\mathbb{R}^p$ to $\mathbb{R}^q$, called the Lagrange multiplier, such that $A = \Lambda B$.

PROOF.– As the property is an equivalence, it must be established as a necessary and sufficient condition.

$\Leftarrow$ If $A = \Lambda B$ and $\boldsymbol{V} \in \operatorname{Ker} B$, then $A\boldsymbol{V} = \Lambda B \boldsymbol{V} = 0$, i.e. $\boldsymbol{V} \in \operatorname{Ker} A$.

$\Rightarrow$ Reciprocally, assume that $\operatorname{Ker} B \subset \operatorname{Ker} A$, $\operatorname{Ker} B$ being a vector subspace of $\mathbb{R}^n$. There exists a complementary vector subspace $E$ of $\operatorname{Ker} B$ such that $\mathbb{R}^n = \operatorname{Ker} B \oplus E$. Thus, $B|_E$ is an isomorphism from $E$ to $\operatorname{Im} B \subset \mathbb{R}^p$ and is invertible; $(B|_E)^{-1}$ is a linear mapping from $\operatorname{Im} B$ to $E$. Let us write $\Lambda_1 = A\, (B|_E)^{-1}$, a mapping from $\operatorname{Im} B$ to $\mathbb{R}^q$. We can write $\mathbb{R}^p = \operatorname{Im} B \oplus E'$, where $E'$ is a complementary vector subspace of $\operatorname{Im} B$ in $\mathbb{R}^p$. Let $\Lambda_2$ be an arbitrary linear mapping over $E'$ (we can take, for example, the zero mapping). Thus, $\forall\, \boldsymbol{V} \in \mathbb{R}^p$, $\boldsymbol{V} = \boldsymbol{V}_1 + \boldsymbol{V}_2$ with $\boldsymbol{V}_1 \in \operatorname{Im} B$ and $\boldsymbol{V}_2 \in E'$, and this decomposition is unique. Let us define $\Lambda$ as the linear mapping from $\mathbb{R}^p$ to $\mathbb{R}^q$ such that $\Lambda(\boldsymbol{V}) = \Lambda_1(\boldsymbol{V}_1) + \Lambda_2(\boldsymbol{V}_2)$. Then $A = \Lambda B$. Indeed, any vector $\boldsymbol{W}$ of $\mathbb{R}^n$ can be written $\boldsymbol{W} = \boldsymbol{W}_1 + \boldsymbol{W}_2$, where $\boldsymbol{W}_1 \in \operatorname{Ker} B$ and $\boldsymbol{W}_2 \in E$, the decomposition being unique. On the one hand, $\Lambda B(\boldsymbol{W}_1 + \boldsymbol{W}_2) = A\, (B|_E)^{-1} B(\boldsymbol{W}_2) = A(\boldsymbol{W}_2)$, since $(B|_E)^{-1} B$ is the identity over $E$ and $B(\boldsymbol{W}_1) = 0$. On the other hand, $A(\boldsymbol{W}) = A(\boldsymbol{W}_1) + A(\boldsymbol{W}_2)$, and $\operatorname{Ker} B \subset \operatorname{Ker} A$ implies $A(\boldsymbol{W}_1) = 0$, so that $A(\boldsymbol{W}) = A(\boldsymbol{W}_2)$. Finally, for any vector $\boldsymbol{W}$ of $\mathbb{R}^n$, $A(\boldsymbol{W}) = \Lambda B(\boldsymbol{W})$, i.e. $A = \Lambda B$. $\square$
1.2.2. Application to the constrained extremum problem

Let $G$ be a differentiable mapping from $\mathbb{R}^n$ to $\mathbb{R}$ and $F$ another differentiable mapping from $\mathbb{R}^n$ to $\mathbb{R}^p$. We wish to find the values of the element $\boldsymbol{x}$ of $\mathbb{R}^n$ such that $a = G(\boldsymbol{x})$ is an extremum, knowing that $\boldsymbol{x}$ satisfies the condition $F(\boldsymbol{x}) = 0$ (called a constraint). We must write $\delta a = 0$, not for all $\delta \boldsymbol{x}$, but for all $\delta \boldsymbol{x}$ verifying the condition $F'(\boldsymbol{x})\, \delta \boldsymbol{x} = 0$. We write
$$\forall\, \delta \boldsymbol{x} \in \mathbb{R}^n, \quad F'(\boldsymbol{x})\, \delta \boldsymbol{x} = 0 \;\Longrightarrow\; G'(\boldsymbol{x})\, \delta \boldsymbol{x} = 0, \quad \text{with} \quad F(\boldsymbol{x}) = 0.$$

In the bases of $\mathbb{R}^n$ and $\mathbb{R}^p$, let us write $F$ using column matrices:
$$\boldsymbol{x} = [x_1, \dots, x_n]^{\top} \in \mathbb{R}^n \;\longrightarrow\; F(\boldsymbol{x}) = [f_1(x_1, \dots, x_n), \dots, f_p(x_1, \dots, x_n)]^{\top} \in \mathbb{R}^p,$$
and
$$F(\boldsymbol{x}) = 0 \iff \begin{cases} f_1(x_1, \dots, x_n) = 0 \\ \quad \vdots \\ f_p(x_1, \dots, x_n) = 0. \end{cases}$$

The Jacobian matrix of the derivative of $F(\boldsymbol{x})$ is
$$F'(\boldsymbol{x}) = \begin{bmatrix} \dfrac{\partial f_1(x_1, \dots, x_n)}{\partial x_1} & \cdots & \dfrac{\partial f_1(x_1, \dots, x_n)}{\partial x_n} \\ \vdots & & \vdots \\ \dfrac{\partial f_p(x_1, \dots, x_n)}{\partial x_1} & \cdots & \dfrac{\partial f_p(x_1, \dots, x_n)}{\partial x_n} \end{bmatrix}.$$
The condition $F'(\boldsymbol{x})\, \delta \boldsymbol{x} = 0$ is written in matrix form as
$$\begin{bmatrix} \dfrac{\partial f_1}{\partial x_1} & \cdots & \dfrac{\partial f_1}{\partial x_n} \\ \vdots & & \vdots \\ \dfrac{\partial f_p}{\partial x_1} & \cdots & \dfrac{\partial f_p}{\partial x_n} \end{bmatrix} \begin{bmatrix} \delta x_1 \\ \vdots \\ \delta x_n \end{bmatrix} = \begin{bmatrix} 0 \\ \vdots \\ 0 \end{bmatrix},$$
where the final column is made up of $p$ lines. The condition for the extremum of $G$ can then be written in matrix form as
$$\left[ \frac{\partial G}{\partial x_1}(x_1, \dots, x_n), \dots, \frac{\partial G}{\partial x_n}(x_1, \dots, x_n) \right] \begin{bmatrix} \delta x_1 \\ \vdots \\ \delta x_n \end{bmatrix} = 0.$$

Applying Theorem 1.1, there exists a Lagrange multiplier $\Lambda$, a linear mapping from $\mathbb{R}^p$ to $\mathbb{R}$, such that
$$G'(\boldsymbol{x}) = \Lambda\, F'(\boldsymbol{x}) \quad \text{with} \quad F(\boldsymbol{x}) = 0.$$
In the basis of $\mathbb{R}^p$, we write $\Lambda = [\lambda_1, \dots, \lambda_p]$, where $\lambda_i \in \mathbb{R}$, $i \in \{1, \dots, p\}$.

The following three properties are equivalent:

(A) $\forall\, \delta \boldsymbol{x} \in \mathbb{R}^n$, $F'(\boldsymbol{x})\, \delta \boldsymbol{x} = 0 \Longrightarrow G'(\boldsymbol{x})\, \delta \boldsymbol{x} = 0$, with $F(\boldsymbol{x}) = 0$;

(B) $G'(\boldsymbol{x}) = \Lambda\, F'(\boldsymbol{x})$ with $F(\boldsymbol{x}) = 0$;

(C) $\forall\, \delta \boldsymbol{x} \in \mathbb{R}^n$ and $\forall\, \delta \Lambda \in L(\mathbb{R}^p, \mathbb{R})$, $\delta\, [\, G(\boldsymbol{x}) - \Lambda\, F(\boldsymbol{x})\, ] = 0$.

We have demonstrated that (A) is equivalent to (B). Property (C) shows that finding a constrained extremum is the same as finding a free extremum, but with respect to $n + p$ variables, the position of the extremum being made up of the elements $\boldsymbol{x}$ and $\Lambda$: the introduction of a Lagrange multiplier frees the constraint. This property results from Theorem 1.1, and we can write $G(\boldsymbol{x}) - \Lambda\, F(\boldsymbol{x}) = b$, where $b$ is the new function to be made stationary.
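Property (C), the freeing of the constraint, can be tried on a small symbolic example. The data below are hypothetical (not from the text): $G(x, y) = x^2 + y^2$ with the single constraint $F(x, y) = x + y - 1 = 0$, so $n = 2$ and $p = 1$.

```python
# Freeing the constraint with a multiplier, as in property (C): make
# b = G(x) - lambda F(x) stationary in the n + p variables (x, y, lambda).
# Hypothetical data (not from the text): G = x^2 + y^2, F = x + y - 1.
import sympy as sp

x, y, lam = sp.symbols('x y lambda')
G = x**2 + y**2
F = x + y - 1
b = G - lam * F

# Stationarity of b with respect to x, y and lambda (the latter gives F = 0).
sols = sp.solve([sp.diff(b, x), sp.diff(b, y), sp.diff(b, lam)],
                [x, y, lam], dict=True)
print(sols)
```

The free stationarity conditions $2x - \lambda = 0$, $2y - \lambda = 0$ and $x + y - 1 = 0$ give $x = y = \frac{1}{2}$ and $\lambda = 1$, the constrained minimum of $G$.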
1.3. The fundamental lemma of the calculus of variations

In the case where the unknown is an element of a functional space, we are truly dealing with the calculus of variations, which is an extension of differential calculus. Indeed, in the preceding sections, we have used the following lemma:

Given $A$ belonging to $L(\mathbb{R}^n, \mathbb{R})$, if $A\boldsymbol{V} = 0$ for every $\boldsymbol{V}$ belonging to $\mathbb{R}^n$, then $A = 0$.

It is possible to propose the following generalization. Let us consider $E$, the set of $p$-times differentiable functions defined by $t \in [t_0, t_1] \subset \mathbb{R} \longrightarrow \psi(t) \in \mathbb{R}^n$. If $p = 0$, the functions are simply continuous. We may add the conditions $\psi(t_0) = 0$ and $\psi(t_1) = 0$. The set $E$ has the structure of a real vector space. Let $\varphi$ be a mapping
$$t \in [t_0, t_1] \subset \mathbb{R} \longrightarrow \varphi(t) \in L(\mathbb{R}^n, \mathbb{R}) \equiv \mathbb{R}^{n*};$$
$\varphi$, defined from $\mathbb{R}$ to the dual of $\mathbb{R}^n$, is assumed to be continuous over $[t_0, t_1]$. Let us write
$$G(\psi) = \int_{t_0}^{t_1} \varphi(t)\, \psi(t)\, dt.$$
This mapping defined over $E$ is linear, since for all scalars $\lambda_1, \lambda_2$ and all functions $\psi_1, \psi_2$, $G(\lambda_1 \psi_1 + \lambda_2 \psi_2) = \lambda_1 G(\psi_1) + \lambda_2 G(\psi_2)$.

LEMMA 1.1.– Let $\varphi$ be a continuous mapping from $[t_0, t_1]$ to $L(\mathbb{R}^n, \mathbb{R})$ and $\psi$ a continuous mapping from $[t_0, t_1]$ to $\mathbb{R}^n$; then,
$$\text{for any } \psi, \quad \int_{t_0}^{t_1} \varphi(t)\, \psi(t)\, dt = 0 \quad \text{implies} \quad \varphi = 0.$$

PROOF.– It is enough to demonstrate the lemma for $n = 1$. Let us assume, by reductio ad absurdum, that there exists $t_2 \in\, ]t_0, t_1[$ such that $\varphi(t_2) \neq 0$ (e.g. $\varphi(t_2)$ strictly positive). Since $\varphi$ is continuous, there exists $[t_2, t_3]$ included in $]t_0, t_1[$ such that, for $t$ belonging to $[t_2, t_3]$, $\varphi(t) > 0$. By choosing $\psi(t) = [(t - t_2)(t_3 - t)]^{p+1}$ for $t \in [t_2, t_3]$ and $\psi(t) = 0$ for $t \notin [t_2, t_3]$, we would have
$$\int_{t_0}^{t_1} \varphi(t)\, \psi(t)\, dt = \int_{t_2}^{t_3} \varphi(t)\, [(t - t_2)(t_3 - t)]^{p+1}\, dt > 0,$$
which leads to a contradiction. This lemma can easily be extended to the case where $t_2$ takes the value of one of the limits $t_0$ or $t_1$, and it can be applied in the case where $\psi(t_0) = \psi(t_1) = 0$ (as was the case in the proof). $\square$

LEMMA 1.2.– du Bois-Reymond's lemma. $\varphi$ being a continuous mapping from $[t_0, t_1]$ to $L(\mathbb{R}^n, \mathbb{R})$ and $\psi$ being a mapping with continuous derivative from $[t_0, t_1]$ to $\mathbb{R}^n$ such that $\psi(t_0) = \psi(t_1) = 0$; then,
$$\text{for any } \psi, \quad \int_{t_0}^{t_1} \varphi(t)\, \psi'(t)\, dt = 0 \quad \text{implies} \quad \varphi = C,$$
where $C$ is a constant linear mapping from $\mathbb{R}^n$ to $\mathbb{R}$. Here, $\psi$ belongs to $D^1[t_0, t_1]$.

PROOF.– It is enough to demonstrate the lemma for $n = 1$; it can easily be extended to $n > 1$. Let $c$ be the real number defined by $\int_{t_0}^{t_1} (\varphi(t) - c)\, dt = 0$. Let us write
$$\psi(t) = \int_{t_0}^{t} (\varphi(u) - c)\, du.$$
We have $\psi \in D^1[t_0, t_1]$ with $\psi(t_0) = \psi(t_1) = 0$. According to the hypotheses,
$$\int_{t_0}^{t_1} (\varphi(t) - c)\, \psi'(t)\, dt = \int_{t_0}^{t_1} \varphi(t)\, \psi'(t)\, dt - c\, \big(\psi(t_1) - \psi(t_0)\big) = 0.$$
However, $\psi'(t) = \varphi(t) - c$; hence, $\int_{t_0}^{t_1} (\varphi(t) - c)^2\, dt = 0$, which implies $\varphi(t) = c$. $\square$

Let us note that we have written $G(\psi) = \int_{t_0}^{t_1} \varphi(t)\, \psi'(t)\, dt$; $G$ is a linear mapping from $E$ to $\mathbb{R}$; consequently, $G$ is a linear functional of $E$, or an element of the dual vector space $E^*$. The parallel with section 1.2 is complete.

1.4. Extremum of a free functional

Let $G$ be a continuously differentiable mapping from $\Omega \times [t_0, t_1]$ to $\mathbb{R}$, with $\Omega \subset \mathbb{R}^n$. This mapping is denoted by $(\boldsymbol{Q}, t) \in \Omega \times [t_0, t_1] \longrightarrow G(\boldsymbol{Q}, t) \in \mathbb{R}$, and we say that $G$ is a generating function. Let $\varphi$ be a continuous mapping from $[t_0, t_1]$ to $\mathbb{R}^n$. We posit
$$a = \int_{t_0}^{t_1} G(\varphi(t), t)\, dt,$$
and we write $a = G(\varphi)$; thus, $G$ is a functional of the $\varphi$-functions and belongs to $A(A(\mathbb{R}, \mathbb{R}^n), \mathbb{R})$. Let $\psi$ be another continuous mapping from $[t_0, t_1]$ to $\mathbb{R}^n$. For a real scalar $x$, $\varphi + x \psi$ is a continuous mapping from $[t_0, t_1]$ to $\mathbb{R}^n$ and
$$G(\varphi + x \psi) = \int_{t_0}^{t_1} G\big(\varphi(t) + x \psi(t), t\big)\, dt.$$

For the given $\varphi$ and $\psi$, we write $g(x) = G(\varphi + x \psi)$; thus, $g$ is a mapping from $\mathbb{R}$ to $\mathbb{R}$. For the given $\varphi$, $\psi$ and $t$, we write $f(x) = G\big(\varphi(t) + x\, \psi(t), t\big)$. The function $f$ is differentiable at $x = 0$, i.e.
$$f(x) = f(0) + x f'(0) + x\, \varepsilon(x), \quad \text{where} \quad \lim_{x \to 0} \varepsilon(x) = 0.$$
Consequently, as $\dfrac{\partial G}{\partial \boldsymbol{Q}}\big(\varphi(t), t\big)$ is a linear mapping from $\mathbb{R}^n$ to $\mathbb{R}$,
$$f(x) = G\big(\varphi(t) + x \psi(t), t\big) = G\big(\varphi(t), t\big) + x\, \frac{\partial G}{\partial \boldsymbol{Q}}\big(\varphi(t), t\big)\, \psi(t) + x\, \lVert \psi(t) \rVert\, \varepsilon\big(x \psi(t)\big),$$
where $\lim_{x \to 0} \varepsilon\big(x \psi(t)\big) = 0$, and, through integration,
$$\int_{t_0}^{t_1} G\big(\varphi(t) + x \psi(t), t\big)\, dt = \int_{t_0}^{t_1} G\big(\varphi(t), t\big)\, dt + x \int_{t_0}^{t_1} \frac{\partial G}{\partial \boldsymbol{Q}}\big(\varphi(t), t\big)\, \psi(t)\, dt + x\, \varepsilon_1(x),$$
where $\varepsilon_1(x) = \int_{t_0}^{t_1} \lVert \psi(t) \rVert\, \varepsilon\big(x \psi(t)\big)\, dt$ with $\lim_{x \to 0} \varepsilon_1(x) = 0$. Thus,
$$G(\varphi + x \psi) = G(\varphi) + x \int_{t_0}^{t_1} \frac{\partial G}{\partial \boldsymbol{Q}}\big(\varphi(t), t\big)\, \psi(t)\, dt + x\, \varepsilon_1(x).$$

COROLLARY 1.1.–
$$g'(0) = \int_{t_0}^{t_1} \frac{\partial G}{\partial \boldsymbol{Q}}\big(\varphi(t), t\big)\, \psi(t)\, dt,$$
which corresponds to the derivative of $g$ at $0$.
DEFINITION 1.2.– $a = \int_{t_0}^{t_1} G\big(\varphi(t), t\big)\, dt$ is an extremum for the continuous function $\varphi$ from $[t_0, t_1]$ to $\mathbb{R}^n$ if and only if, for any continuous mapping $\psi$ from $[t_0, t_1]$ to $\mathbb{R}^n$,
$$g'(0) \equiv \int_{t_0}^{t_1} \frac{\partial G}{\partial \boldsymbol{Q}}\big(\varphi(t), t\big)\, \psi(t)\, dt = 0.$$

We write $\delta a = g'(0)$, which is called the variation of $a$ relative to the continuous functions $\varphi$ from $[t_0, t_1]$ to $\mathbb{R}^n$. When there is no ambiguity, we write
$$a = \int_{t_0}^{t_1} G(\boldsymbol{Q}, t)\, dt,$$
and, by analogy with the previous sections, $\psi(t)$ is denoted by $\delta \boldsymbol{Q}(t)$ or, more simply, $\delta \boldsymbol{Q}$. We write
$$g'(0) = \delta a = \int_{t_0}^{t_1} \frac{\partial G}{\partial \boldsymbol{Q}}(\boldsymbol{Q}, t)\, \delta \boldsymbol{Q}\, dt.$$

From Lemma 1.1 and Corollary 1.1, it follows that:

COROLLARY 1.2.– $a = \int_{t_0}^{t_1} G(\boldsymbol{Q}, t)\, dt$ is an extremum for the continuous function $\boldsymbol{Q}$ from $[t_0, t_1]$ to $\mathbb{R}^n$ if
$$\frac{\partial G(\boldsymbol{Q}, t)}{\partial \boldsymbol{Q}} = 0.$$

When there is no ambiguity, we write $\dfrac{\partial G}{\partial \boldsymbol{Q}}$ for $\dfrac{\partial G(\boldsymbol{Q}, t)}{\partial \boldsymbol{Q}}$.
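Corollary 1.2 says that, for a functional without derivatives of $\boldsymbol{Q}$, the stationarity condition is pointwise. The discretized sketch below is an illustration added here, not part of the text; the integrand $G(Q, t) = (Q - \sin t)^2$ is a hypothetical example whose pointwise condition gives $Q(t) = \sin t$.

```python
# Discretized illustration of Corollary 1.2 on a hypothetical integrand (not
# from the text): with G(Q, t) = (Q - sin t)**2, the pointwise condition
# dG/dQ = 2 (Q - sin t) = 0 gives Q(t) = sin t, and perturbations increase a.
import numpy as np

t = np.linspace(0.0, 1.0, 2001)

def a(Q):
    """Trapezoidal approximation of a = integral of (Q - sin t)^2 over [0, 1]."""
    y = (Q - np.sin(t))**2
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(t))

Q_star = np.sin(t)                # solves dG/dQ = 0 at every t
delta_Q = 0.1 * t * (1.0 - t)     # an arbitrary admissible variation

print(a(Q_star), a(Q_star + delta_Q))
```

The value of $a$ is zero at the stationary function and strictly positive for the perturbed one, so the pointwise stationary solution is here a minimum.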
1.5. Extremum for a constrained functional

The function $t \in [t_0, t_1] \longrightarrow \varphi(t) \in \mathbb{R}^n$ we are looking for may have to satisfy certain relationships called constraints. We come across two types of constraints.

1.5.1. First type: integral constraint

DEFINITION 1.3.– An integral constraint is a relationship of the form
$$\int_{t_0}^{t_1} F(\varphi(t), t)\, dt = b, \qquad [1.1]$$
where $b$ is a given real number and $F$ is a continuously differentiable mapping from an open set $\Omega$ of $\mathbb{R}^n \times [t_0, t_1]$ to $\mathbb{R}$.
We can generalize the definition to a mapping $F$ from $\mathbb{R}^n \times [t_0, t_1]$ to $\mathbb{R}^p$; the generalization corresponds to $p$ integral constraints. Finding the mappings $\varphi$ which make $\int_{t_0}^{t_1} G(\varphi(t), t)\, dt$ an extremum and which are subject to condition [1.1] is expressed in the shortened form: for any continuous $\psi$, $\psi \in A(\mathbb{R}, \mathbb{R}^n)$,
$$\delta b \equiv \int_{t_0}^{t_1} \frac{\partial F}{\partial \boldsymbol{Q}}(\varphi(t), t)\, \psi(t)\, dt = 0 \;\Longrightarrow\; \delta a \equiv \int_{t_0}^{t_1} \frac{\partial G}{\partial \boldsymbol{Q}}(\varphi(t), t)\, \psi(t)\, dt = 0.$$
We can simply write: $\delta b = 0 \Longrightarrow \delta a = 0$.

THEOREM 1.2.– With $\delta b$ and $\delta a$ being two linear functionals, with values in $\mathbb{R}$, defined over the continuous mappings $\psi$ from $\mathbb{R}$ to $\mathbb{R}^n$,
$$\delta b = 0 \;\Longrightarrow\; \delta a = 0$$
is equivalent to the existence of a Lagrange multiplier, i.e. a linear mapping $\lambda$ from $\mathbb{R}$ to $\mathbb{R}$, such that $\delta a = \lambda\, \delta b$. We can write
$$\int_{t_0}^{t_1} \frac{\partial G}{\partial \boldsymbol{Q}}(\varphi(t), t)\, \psi(t)\, dt = \lambda \int_{t_0}^{t_1} \frac{\partial F}{\partial \boldsymbol{Q}}(\varphi(t), t)\, \psi(t)\, dt.$$
Thus, $\lambda$ is a real constant. We are brought back to the search for the free extremum of
$$a - \lambda\, b = c \quad \text{with} \quad c = \int_{t_0}^{t_1} \left[\, G(\varphi(t), t) - \lambda\, F(\varphi(t), t)\, \right] dt.$$

Lemma 1.1 of the calculus of variations then implies the following theorem.

THEOREM 1.3.– The continuous functions $\varphi$ from $[t_0, t_1]$ to $\mathbb{R}^n$ satisfying relationship [1.1] and making the integral $a$ an extremum simultaneously satisfy the two relations
$$\frac{\partial G}{\partial \boldsymbol{Q}}(\varphi(t), t) = \lambda\, \frac{\partial F}{\partial \boldsymbol{Q}}(\varphi(t), t) \quad \text{and} \quad \int_{t_0}^{t_1} F(\varphi(t), t)\, dt = b.$$
We also use this result in the simplified form
$$\frac{\partial G}{\partial \boldsymbol{Q}}(\boldsymbol{Q}(t), t) = \lambda\, \frac{\partial F}{\partial \boldsymbol{Q}}(\boldsymbol{Q}(t), t) \quad \text{and} \quad \int_{t_0}^{t_1} F(\boldsymbol{Q}(t), t)\, dt = b.$$
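Theorem 1.3 can be tried on a small symbolic example. The data below are hypothetical (not from the text): make $a = \int_0^1 Q^2\, dt$ stationary under the integral constraint $\int_0^1 Q\, dt = b$; the pointwise condition $2Q = \lambda$ forces $Q$ to be constant, and the constraint then fixes both $Q$ and $\lambda$.

```python
# Theorem 1.3 on a hypothetical example (not from the text): stationarize
# a = integral of Q**2 over [0, 1] under the constraint integral of Q = b.
# The condition dG/dQ = lambda dF/dQ reads 2 Q = lambda, so Q is constant.
import sympy as sp

t, Q, lam, b = sp.symbols('t Q lambda b')
G = Q**2        # integrand of a
F = Q           # integrand of the constraint

# Pointwise multiplier condition: dG/dQ - lambda dF/dQ = 0  =>  Q = lambda / 2.
Q_sol = sp.solve(sp.diff(G, Q) - lam * sp.diff(F, Q), Q)[0]

# Enforce the integral constraint for this constant Q over [0, 1].
lam_sol = sp.solve(sp.integrate(Q_sol, (t, 0, 1)) - b, lam)[0]
print(Q_sol.subs(lam, lam_sol), lam_sol)
```

The result is $Q = b$ (the constant mean value) with $\lambda = 2b$, consistent with the two relations of the theorem.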
These properties generalize, to functionals with real values, the results obtained in section 1.4.

PROOF.– We demonstrate the property in the case of a mapping $\varphi$ from $\mathbb{R}$ to $\mathbb{R}$; the demonstration is easily generalized to mappings with values in $\mathbb{R}^n$. Let us write
$$a = \int_{t_0}^{t_1} g(x(t), t)\, dt \quad \text{and} \quad b = \int_{t_0}^{t_1} f(x(t), t)\, dt,$$
where $f$ and $g$ are two continuously differentiable mappings from $\mathbb{R}^2$ to $\mathbb{R}$ and $x(t)$ denotes a continuous mapping from $\mathbb{R}$ to $\mathbb{R}$. Thus,
$$\delta a = \int_{t_0}^{t_1} \frac{\partial g}{\partial x}(x(t), t)\, \delta x\, dt \quad \text{and} \quad \delta b = \int_{t_0}^{t_1} \frac{\partial f}{\partial x}(x(t), t)\, \delta x\, dt.$$
Let us write
$$J(\delta x) = \int_{t_0}^{t_1} \frac{\partial g}{\partial x}(x(t), t)\, \delta x\, dt \quad \text{and} \quad K(\delta x) = \int_{t_0}^{t_1} \frac{\partial f}{\partial x}(x(t), t)\, \delta x\, dt.$$
The terms $J$ and $K$ are two linear mappings defined over the $\delta x$ mappings from $\mathbb{R}$ to $\mathbb{R}$. Thus,
$$\forall\, \delta x \in C^0(\mathbb{R}, \mathbb{R}), \quad K(\delta x) = 0 \;\Longrightarrow\; J(\delta x) = 0.$$

Let $\delta x_1$ be a given continuous mapping from $\mathbb{R}$ to $\mathbb{R}$ such that $K(\delta x_1) \neq 0$, and let $\delta x$ be any other continuous mapping from $\mathbb{R}$ to $\mathbb{R}$. Let us write $\delta x_2 = \delta x - \mu\, \delta x_1$, where $\mu \in \mathbb{R}$ is chosen such that $K(\delta x_2) = 0$: the condition $K(\delta x - \mu\, \delta x_1) = 0$ gives $\mu = K(\delta x)/K(\delta x_1) \in \mathbb{R}$. According to the hypothesis, we obtain $K(\delta x_2) = 0 \Longrightarrow J(\delta x_2) = 0$. From this result, it can be derived that
$$J(\delta x_2) = J(\delta x) - \mu\, J(\delta x_1) = 0 \;\Longrightarrow\; J(\delta x) - K(\delta x)\, \frac{J(\delta x_1)}{K(\delta x_1)} = 0.$$
Let us write $\lambda = J(\delta x_1)/K(\delta x_1)$; $\lambda$ is a scalar determined by $\delta x_1$. Hence, $J(\delta x) = \lambda\, K(\delta x)$. $\square$
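The construction in the proof can be checked on a discretized example. The data below are hypothetical (not from the text): the integrands are chosen so that $\partial g/\partial x = 2\, \partial f/\partial x$ pointwise, which makes the implication $K(\delta x) = 0 \Rightarrow J(\delta x) = 0$ hold with multiplier $\lambda = 2$.

```python
# Discretized version of the proof's construction (hypothetical data, not
# from the text): when dg/dx = 2 df/dx pointwise, J = 2 K, and the ratio
# lambda = J(dx1)/K(dx1) recovers 2 for any dx1 with K(dx1) != 0.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 1001)
df = np.cos(t)          # stands for (df/dx)(x(t), t)
dg = 2.0 * df           # stands for (dg/dx)(x(t), t): proportional integrands

def functional(w, dx):
    """Trapezoidal approximation of the integral of w(t) * dx(t) dt."""
    y = w * dx
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(t))

dx1 = np.ones_like(t)                    # a variation with K(dx1) != 0
lam = functional(dg, dx1) / functional(df, dx1)

dx = rng.standard_normal(t.size)         # an arbitrary variation
print(lam, np.isclose(functional(dg, dx), lam * functional(df, dx)))
```

The multiplier computed from the single variation $\delta x_1$ also relates $J$ and $K$ for an arbitrary variation, as the proof asserts.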
COROLLARY 1.3.– The three following propositions are equivalent:

(A) $\forall\, \delta x$, $K(\delta x) = 0 \Longrightarrow J(\delta x) = 0$;

(B) $\exists\, \lambda \in \mathbb{R}$ such that $\forall\, \delta x \in C^0(\mathbb{R}, \mathbb{R})$, $J(\delta x) - \lambda\, K(\delta x) = 0$;

(C) $\exists\, \lambda \in \mathbb{R}$, a constant scalar called the Lagrange multiplier, such that $J = \lambda\, K$. We also write $\delta a - \lambda\, \delta b = 0$.

1.5.2. Second type: distributed constraint

Let $F$ be a mapping from $\Omega \times [t_0, t_1]$ to $\mathbb{R}$, where $\Omega$ is an open set of $\mathbb{R}^n$ and $[t_0, t_1]$ a segment of $\mathbb{R}$. We wish to find the extrema of $a = \int_{t_0}^{t_1} G(\boldsymbol{Q}(t), t)\, dt$ such that, for any value $t$ of $[t_0, t_1]$, we have $F(\boldsymbol{Q}(t), t) = 0$. The constraint $F(\boldsymbol{Q}(t), t) = 0$ is called a distributed constraint. We demonstrate that we are led to introduce a multiplier denoted by $\Lambda$, a function defined for each value of $t$ belonging to $[t_0, t_1]$, i.e. a $\Lambda$-multiplier function of $t$.

THEOREM 1.4.– The three following propositions are equivalent:

(A) there exists a continuous mapping $\boldsymbol{Q}$ from $[t_0, t_1]$ to $\Omega$ satisfying $F(\boldsymbol{Q}(t), t) = 0$ such that, for any continuous mapping $\delta \boldsymbol{Q}$ from $[t_0, t_1]$ to $\mathbb{R}^n$ and for any $t$ of $[t_0, t_1]$,
$$\frac{\partial F}{\partial \boldsymbol{Q}}(\boldsymbol{Q}(t), t)\, \delta \boldsymbol{Q} = 0 \;\Longrightarrow\; \delta \int_{t_0}^{t_1} G(\boldsymbol{Q}(t), t)\, dt = 0;$$

(B) there exist a continuous mapping $\boldsymbol{Q}$ from $[t_0, t_1]$ to $\Omega$ and a continuous mapping $\Lambda$ from $[t_0, t_1]$ to $\mathbb{R}$ such that, for any continuous mapping $\delta \boldsymbol{Q}$ from $[t_0, t_1]$ to $\mathbb{R}^n$ and any continuous mapping $\delta \Lambda$ from $[t_0, t_1]$ to $\mathbb{R}$,
$$\delta \int_{t_0}^{t_1} \left[\, G(\boldsymbol{Q}(t), t) - \Lambda(t)\, F(\boldsymbol{Q}(t), t)\, \right] dt = 0;$$

(C) there exist a continuous mapping $\boldsymbol{Q}$ from $[t_0, t_1]$ to $\Omega$ and a continuous mapping $\Lambda$ from $[t_0, t_1]$ to $\mathbb{R}$ such that
$$\frac{\partial G}{\partial \boldsymbol{Q}}(\boldsymbol{Q}(t), t) - \Lambda(t)\, \frac{\partial F}{\partial \boldsymbol{Q}}(\boldsymbol{Q}(t), t) = 0 \quad \text{and} \quad F(\boldsymbol{Q}(t), t) = 0.$$
Elementary Methods to the Calculus of Variations
PROOF.– Let us demonstrate the equivalence of (A), (B) and (C).

(B) is equivalent to (C). Writing that the integral
\[
\int_{t_0}^{t_1} \bigl( G(Q(t),t) - \Lambda(t)\, F(Q(t),t) \bigr)\, dt
\]
is extremal with respect to both $Q$ and $\Lambda$ yields, by the fundamental lemma, exactly the two conditions of (C).

(C) implies (A). Indeed,
\[
\delta \int_{t_0}^{t_1} G(Q(t),t)\, dt = \int_{t_0}^{t_1} \frac{\partial G}{\partial Q}(Q(t),t)\, \delta Q(t)\, dt.
\]
Let $\delta Q(t)$ be such that $\dfrac{\partial F}{\partial Q}(Q(t),t)\, \delta Q(t) = 0$; then, by (C), $\dfrac{\partial G}{\partial Q}(Q(t),t)\, \delta Q(t) = \Lambda(t)\, \dfrac{\partial F}{\partial Q}(Q(t),t)\, \delta Q(t) = 0$, and $\delta \int_{t_0}^{t_1} G(Q(t),t)\, dt = 0$.
(A) implies (C). In order to simplify the demonstration, it is assumed that the mapping $F$ has real values; the demonstration can easily be extended to the vector-valued case. Given $Q(t)$ satisfying $F(Q(t),t) = 0$ and making $\int_{t_0}^{t_1} G(Q(t),t)\, dt$ an extremum, the mapping $Q(t)$ also makes the integral
\[
b = \int_{t_0}^{t_1} \bigl( G(Q(t),t) - \Lambda(t)\, F(Q(t),t) \bigr)\, dt
\]
an extremum. This is true for any mapping $\Lambda$ from $[t_0,t_1]$ to $\mathbb{R}$. Let us calculate $\delta b$ for any variation $\delta Q(t) = [\delta q_1(t), \ldots, \delta q_n(t)]$:
\[
\delta b = \int_{t_0}^{t_1} \left[ \left( \frac{\partial G}{\partial q_1} - \Lambda \frac{\partial F}{\partial q_1} \right) \delta q_1 + \left( \frac{\partial G}{\partial q_2} - \Lambda \frac{\partial F}{\partial q_2} \right) \delta q_2 + \cdots + \left( \frac{\partial G}{\partial q_n} - \Lambda \frac{\partial F}{\partial q_n} \right) \delta q_n \right] dt.
\]
We can write $\delta b = 0$ for any $\delta q_1, \delta q_2, \ldots, \delta q_n$ satisfying
\[
\frac{\partial F}{\partial q_1}\, \delta q_1 + \cdots + \frac{\partial F}{\partial q_n}\, \delta q_n = 0. \tag{1.2}
\]
Let us assume that $\dfrac{\partial F}{\partial q_1}$ does not vanish between $t_0$ and $t_1$. The function $\Lambda$, which was arbitrary until now, is then given by the relation
\[
\frac{\partial G}{\partial q_1} - \Lambda\, \frac{\partial F}{\partial q_1} = 0, \tag{1.3}
\]
and we get
\[
\delta b = \int_{t_0}^{t_1} \left[ \left( \frac{\partial G}{\partial q_2} - \Lambda \frac{\partial F}{\partial q_2} \right) \delta q_2 + \cdots + \left( \frac{\partial G}{\partial q_n} - \Lambda \frac{\partial F}{\partial q_n} \right) \delta q_n \right] dt,
\]
which must be zero for any values of $\delta q_2, \ldots, \delta q_n$; these are no longer constrained by relation [1.2], since $\delta q_1$ is derived from that relation. The fundamental lemma of the calculus of variations gives: for any $i$ belonging to $\{2, \ldots, n\}$,
\[
\frac{\partial G}{\partial q_i} - \Lambda\, \frac{\partial F}{\partial q_i} = 0, \tag{1.4}
\]
and relations [1.3] and [1.4] imply (C).

Let us note that if, for a value $t_2$ in the interval $[t_0,t_1]$, $\dfrac{\partial F}{\partial q_1} = 0$, we choose $\Lambda$ in a neighborhood of $t_2$ satisfying the condition $\dfrac{\partial G}{\partial q_p} - \Lambda\, \dfrac{\partial F}{\partial q_p} = 0$, where at $t_2$, $\dfrac{\partial F}{\partial q_p} \neq 0$.

The only case that would prevent the existence of $\Lambda$ for a particular value $t_2$ in the interval $[t_0,t_1]$ would be when $\dfrac{\partial F}{\partial Q}(Q(t_2),t_2) = 0$. This case corresponds to a singular point of $F$; we then say that the constraint is singular at $t_2$.

In the form of meta-language, the results that represent the equivalent conditions for a distributed constraint are summarized as:

(A) $\exists\, Q \in C^0([t_0,t_1], \mathbb{R}^n)$ with $F(Q,t) = 0$ such that $\forall\, \delta Q$,
\[
\frac{\partial F}{\partial Q}\, \delta Q = 0 \implies \delta \int_{t_0}^{t_1} G(Q(t),t)\, dt = 0;
\]

(B) $\exists\, Q \in C^0([t_0,t_1], \mathbb{R}^n)$, $\exists\, \Lambda \in C^0([t_0,t_1], \mathbb{R})$ such that $\forall\, \delta Q$, $\forall\, \delta\Lambda$,
\[
\delta \int_{t_0}^{t_1} \bigl( G(Q,t) - \Lambda\, F(Q,t) \bigr)\, dt = 0;
\]

(C) $\exists\, Q \in C^0([t_0,t_1], \mathbb{R}^n)$, $\exists\, \Lambda \in C^0([t_0,t_1], \mathbb{R})$ such that
\[
\frac{\partial G}{\partial Q} - \Lambda\, \frac{\partial F}{\partial Q} = 0 \quad\text{and}\quad F(Q(t),t) = 0.
\]
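Theorem 1.4 can be illustrated numerically in the simplest situation, where $G$ does not depend on $Q'$: the extremal then minimizes $G(Q,t)$ pointwise under $F(Q,t) = 0$, and the multiplier $\Lambda(t)$ genuinely varies with $t$. The functions $G$ and $F$ below are invented for the illustration:

```python
import numpy as np

ts = np.linspace(0.0, 2*np.pi, 200)

def constrained_min(t):
    # minimize G = (q1 - sin t)^2 + q2^2 subject to F = q1 + q2 - 1 = 0
    # (eliminating q2 = 1 - q1 and setting dG/dq1 = 0 gives q1 = (1 + sin t)/2)
    q1 = (1.0 + np.sin(t)) / 2.0
    return np.array([q1, 1.0 - q1])

for t in ts:
    q1, q2 = constrained_min(t)
    dG = np.array([2.0*(q1 - np.sin(t)), 2.0*q2])  # dG/dQ at the extremum
    dF = np.array([1.0, 1.0])                      # dF/dQ
    lam = dG[0] / dF[0]                            # Lambda(t), read off the first component
    assert np.allclose(dG - lam*dF, 0.0, atol=1e-12)   # dG/dQ = Lambda(t) dF/dQ
    assert np.isclose(lam, 1.0 - np.sin(t))            # the multiplier is a function of t
```

Here $\Lambda(t) = 1 - \sin t$: unlike the integral constraint of section 1.5.1, the multiplier is not a constant but a function defined at each $t$.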
1.6. More general problem of the calculus of variations

1.6.1. First extension of the earlier results

Let $G$ be a continuously differentiable mapping from $\Omega \times \cdots \times \Omega \times [t_0,t_1]$ to $\mathbb{R}$ (where $\Omega$, an open set of $\mathbb{R}^n$, is repeated $q$ times), and let $\varphi$ be a mapping from $[t_0,t_1]$ to $\Omega$, $p$ times differentiable ($p > q$; if $p = q$, the $p$th derivative is continuous). The set of functions $\varphi$ is called $(\mathcal{F})$. Let us write
\[
a = \int_{t_0}^{t_1} G\bigl( \varphi(t), \varphi'(t), \ldots, \varphi^{(q)}(t), t \bigr)\, dt.
\]
We wish to find, among the functions of $(\mathcal{F})$, those which make $a$ an extremum. The mappings $\varphi$ of $(\mathcal{F})$ can be subject to additional constraints of the types studied in the previous section:

(L1) an integral constraint of the form
\[
\int_{t_0}^{t_1} F\bigl( \varphi(t), \varphi'(t), \ldots, \varphi^{(r)}(t), t \bigr)\, dt = b, \quad\text{where } r \leq p.
\]
Such a constraint corresponds to a constant Lagrange multiplier; if the constraint is scalar (i.e. if $b$ is scalar), then the Lagrange multiplier is a real number $\lambda$.

(L2) a distributed constraint of the form
\[
H\bigl( \varphi(t), \varphi'(t), \ldots, \varphi^{(r)}(t), t \bigr) = 0, \quad\text{where } r \leq p.
\]
The functions $F$ and $H$ satisfy the same regularity conditions as $G$. If $H$ has scalar values, we associate the constraint (L2) with a Lagrange multiplier $\lambda$, which is a mapping from $\mathbb{R}$ to $\mathbb{R}$.

Furthermore, $\varphi(t)$ can be subject to additional conditions of the type $\varphi(t_0) = \varphi(t_1) = 0$, and even to conditions on the derivatives of $\varphi(t)$ at $t_0$ and $t_1$. We do not study the most general case, but the reasoning used remains the same.

1.6.2. Important example

Based on the hypotheses of section 1.6.1, we propose to study the variations of $\mathcal{G}(\varphi) = \int_{t_0}^{t_1} G(\varphi(t), \varphi'(t), t)\, dt$, which can be written as
\[
a = \int_{t_0}^{t_1} G(Q, Q', t)\, dt.
\]
Consider the mapping $\varphi(t) + x\,\psi(t)$, where $\varphi$ and $\psi$ are two differentiable mappings of the real variable with real values. Thus,
\[
\mathcal{G}(\varphi + x\,\psi) = \int_{t_0}^{t_1} G\bigl( \varphi(t) + x\,\psi(t),\ \varphi'(t) + x\,\psi'(t),\ t \bigr)\, dt.
\]
As in section 1.5, we denote, for given $\varphi$ and $\psi$, $f(x) = \mathcal{G}(\varphi + x\,\psi)$ and $\delta a = f'(0)$. Then,
\[
\delta a = \int_{t_0}^{t_1} \left[ \frac{\partial G}{\partial Q}(\varphi(t), \varphi'(t), t)\, \psi(t) + \frac{\partial G}{\partial Q'}(\varphi(t), \varphi'(t), t)\, \psi'(t) \right] dt.
\]
Since
\[
\frac{\partial G}{\partial Q'}(\varphi(t), \varphi'(t), t)\, \psi'(t) = \frac{d}{dt}\left[ \frac{\partial G}{\partial Q'}(\varphi(t), \varphi'(t), t)\, \psi(t) \right] - \frac{d}{dt}\left[ \frac{\partial G}{\partial Q'}(\varphi(t), \varphi'(t), t) \right] \psi(t),
\]
we can derive
\[
\delta a = \left[ \frac{\partial G}{\partial Q'}(\varphi(t), \varphi'(t), t)\, \psi(t) \right]_{t_0}^{t_1} + \int_{t_0}^{t_1} \left[ \frac{\partial G}{\partial Q}(\varphi(t), \varphi'(t), t) - \frac{d}{dt} \frac{\partial G}{\partial Q'}(\varphi(t), \varphi'(t), t) \right] \psi(t)\, dt. \tag{1.5}
\]

– In the particular case where $\psi(t_0) = \psi(t_1) = 0$, corresponding to null function values at the extremities of the segment $[t_0,t_1]$, we can derive
\[
\delta a = \int_{t_0}^{t_1} \left[ \frac{\partial G}{\partial Q}(\varphi(t), \varphi'(t), t) - \frac{d}{dt} \frac{\partial G}{\partial Q'}(\varphi(t), \varphi'(t), t) \right] \psi(t)\, dt,
\]
and the fundamental lemma of the calculus of variations implies
\[
\frac{d}{dt} \frac{\partial G}{\partial Q'}(\varphi(t), \varphi'(t), t) - \frac{\partial G}{\partial Q}(\varphi(t), \varphi'(t), t) = 0. \tag{1.6}
\]

– In the most general case where $\psi(t_0)$ and $\psi(t_1)$ are arbitrary, we can first choose them to be zero at $t_0$ and $t_1$, and we arrive again at equation [1.6]. Consequently, by carrying this result into equation [1.5], for any $\psi(t_0)$ and $\psi(t_1)$,
\[
\delta a = \left[ \frac{\partial G}{\partial Q'}(\varphi(t), \varphi'(t), t)\, \psi(t) \right]_{t_0}^{t_1} = 0,
\]
and we obtain
\[
\frac{\partial G}{\partial Q'}(\varphi(t_0), \varphi'(t_0), t_0) = 0 \quad\text{and}\quad \frac{\partial G}{\partial Q'}(\varphi(t_1), \varphi'(t_1), t_1) = 0,
\]
corresponding to the fact that $\delta a$ is a linear functional of both $\psi(t)$ and of $\psi(t_0)$, $\psi(t_1)$.

We simply write
\[
a = \int_{t_0}^{t_1} G(Q, Q', t)\, dt,
\]
hence,
\[
\delta a = \int_{t_0}^{t_1} \left( \frac{\partial G}{\partial Q}\, \delta Q + \frac{\partial G}{\partial Q'}\, \delta Q' \right) dt \quad\text{with}\quad \delta Q' = (\delta Q)',
\]
which implies
\[
\delta a = \left[ \frac{\partial G}{\partial Q'}\, \delta Q \right]_{t_0}^{t_1} + \int_{t_0}^{t_1} \left( \frac{\partial G}{\partial Q} - \frac{d}{dt}\frac{\partial G}{\partial Q'} \right) \delta Q\, dt.
\]
Thus, $\delta a$ is the sum of a linear functional of $\delta Q$ and a linear function of $\delta Q_0$ and $\delta Q_1$. We can derive the equations known as the Euler equations:
\[
\frac{d}{dt}\left( \frac{\partial G}{\partial Q'} \right) - \frac{\partial G}{\partial Q} = 0
\quad\text{or}\quad
\frac{d}{dt}\left( \frac{\partial G}{\partial q_i'} \right) - \frac{\partial G}{\partial q_i} = 0 \ \text{ for } i \in \{1, \ldots, n\}. \tag{1.7}
\]
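The Euler equations [1.7] can be checked numerically on a one-dimensional example. With the invented Lagrangian $G = q'^2/2 - q$ (a unit mass in a uniform force field), the Euler equation reads $q'' + 1 = 0$, satisfied by any parabola $q = -t^2/2 + at + b$:

```python
import numpy as np

# hypothetical Lagrangian G(q, q', t) = q'^2/2 - q
def G_q(q, qp):  return -1.0     # dG/dq
def G_qp(q, qp): return qp       # dG/dq'

q  = lambda t: -0.5*t**2 + 3.0*t + 1.0   # candidate extremal: q'' = -1
qp = lambda t: -t + 3.0

h = 1e-5
for t in np.linspace(0.0, 2.0, 21):
    # d/dt (dG/dq') by central difference, compared with dG/dq
    ddt = (G_qp(q(t+h), qp(t+h)) - G_qp(q(t-h), qp(t-h))) / (2*h)
    residual = ddt - G_q(q(t), qp(t))    # left-hand side of equation [1.7]
    assert abs(residual) < 1e-7
```

The residual of equation [1.7] vanishes along the parabola, up to the finite-difference error.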
The method used can be summarized in the following way: with $Q$ being the solution to the variational problem, $\delta Q$ is a variation of $Q$ and this results in the variation $\delta a$. Through integration by parts, we apply the fundamental lemma of the calculus of variations, which gives the equation satisfied by $Q$.

1.6.3. First application: the Hamilton principle

Consider a conservative mechanical system whose position is determined, with respect to a Galilean frame of reference, by $n$ independent parameters. The system is said to have $n$ degrees of freedom. The constraints that limit the position of the system are assumed to be perfect. The motions are represented by $C^2$-class mappings $\varphi(t)$ from $t \in [t_0,t_1]$ to $\mathbb{R}^n$.
PRINCIPLE 1.1.– The Hamilton principle: the motion of the mechanical system between the instants $t_0$ and $t_1$, associated with the initial position $Q_0 = \varphi(t_0)$ and the final position $Q_1 = \varphi(t_1)$, makes
\[
a = \int_{t_0}^{t_1} G(\varphi(t), \varphi'(t), t)\, dt
\]
an extremum.
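The Hamilton principle can be illustrated numerically. For the Lagrangian $G = q'^2/2 - q^2/2$ of a unit harmonic oscillator (an invented one-degree-of-freedom example), the action is stationary at the true motion $q(t) = \sin t$, and, on an interval shorter than $\pi$, it is in fact minimal:

```python
import numpy as np

Tend, N = 1.0, 2000
t = np.linspace(0.0, Tend, N + 1)
q_star = np.sin(t)                      # exact motion of the unit harmonic oscillator
eta = np.sin(np.pi * t / Tend)          # a variation vanishing at both endpoints

def action(q):
    # discrete Hamiltonian action for G = q'^2/2 - q^2/2 (kinetic minus potential)
    dt = t[1] - t[0]
    qdot = np.diff(q) / dt
    qmid = 0.5 * (q[1:] + q[:-1])
    return np.sum(0.5*qdot**2 - 0.5*qmid**2) * dt

h = 1e-4
dS = (action(q_star + h*eta) - action(q_star - h*eta)) / (2*h)
assert abs(dS) < 1e-4                               # the action is stationary at q*
assert action(q_star + 0.1*eta) > action(q_star)    # and a minimum here, since Tend < pi
```

Perturbing the true motion by an admissible variation leaves the action unchanged to first order, which is the discrete counterpart of the principle.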
In the Hamilton principle, $G$ is called the Lagrangian and its value is the difference between the system's kinetic energy $T$ and potential energy $W$: $G \equiv T - W$. The integral $a$ is called the Hamiltonian action of the system.

Observations

– We have assumed that the possible positions of the system are represented by $n$ independent parameters. It is the explicit solution of the constraint equations that allows us to obtain this result.

– The constraints are perfect; the system is said to be conservative, or without dissipation.

– The equilibrium positions of the system are specific motions of the system: they are the motions for which the potential energy is an extremum. This is the particular case where the kinetic energy, which is non-negative, is zero.

– The functions $\varphi$ do not have the structure of a vector space. Indeed, if $\varphi_1$ and $\varphi_2$ belong to $(\mathcal{F})$, where $(\mathcal{F})$ is the set of $C^2$ functions such that $\varphi(t_0) = Q_0 \neq 0$ and $\varphi(t_1) = Q_1$, then $\varphi_1(t_0) + \varphi_2(t_0) = 2Q_0$. We must then carry out the following modification: the functions $\psi$, which can also be denoted by $\delta Q$, are the differences between two functions of $(\mathcal{F})$. The set so defined is denoted by $(\mathcal{E})$; it is the vector space of the $C^2$ functions that are zero at $t_0$ and at $t_1$. For any real value $x$, for all functions $\varphi$ of $(\mathcal{F})$ and $\psi$ of $(\mathcal{E})$, $\varphi + x\,\psi$ belongs to $(\mathcal{F})$, since $\varphi(t_0) + x\,\psi(t_0) = \varphi(t_0) = Q_0$ and $\varphi(t_1) + x\,\psi(t_1) = \varphi(t_1) = Q_1$. Therefore,
\[
\delta a = \frac{\partial G}{\partial Q'}(\varphi(t_1), \varphi'(t_1), t_1)\, \psi(t_1) - \frac{\partial G}{\partial Q'}(\varphi(t_0), \varphi'(t_0), t_0)\, \psi(t_0)
+ \int_{t_0}^{t_1} \left[ \frac{\partial G}{\partial Q}(\varphi(t), \varphi'(t), t) - \frac{d}{dt}\frac{\partial G}{\partial Q'}(\varphi(t), \varphi'(t), t) \right] \psi(t)\, dt,
\]
and the Hamilton principle makes it possible to write
\[
\frac{d}{dt}\frac{\partial G}{\partial Q'}(\varphi(t), \varphi'(t), t) - \frac{\partial G}{\partial Q}(\varphi(t), \varphi'(t), t) = 0. \tag{1.8}
\]
We obtain the Euler equation of motion. If $T = T(\varphi(t), \varphi'(t), t)$ and $W = W(\varphi(t), t)$, equations [1.8] can be written as
\[
\frac{d}{dt}\left( \frac{\partial T}{\partial Q'} \right) - \frac{\partial T}{\partial Q} = -\frac{\partial W}{\partial Q}, \tag{1.9}
\]
which are called the Lagrange equations.

Observations

– $\dfrac{\partial G}{\partial Q} - \dfrac{d}{dt}\dfrac{\partial G}{\partial Q'}$ is the value at $t$ of a linear form of $\mathbb{R}^n$ represented by a row matrix (1 row and $n$ columns)
\[
\left[ \frac{\partial G}{\partial q_1} - \frac{d}{dt}\left( \frac{\partial G}{\partial q_1'} \right),\ \ldots,\ \frac{\partial G}{\partial q_n} - \frac{d}{dt}\left( \frac{\partial G}{\partial q_n'} \right) \right].
\]

– The Lagrange equations are equivalent to $n$ scalar equations
\[
\frac{d}{dt}\frac{\partial T}{\partial q_1'} - \frac{\partial T}{\partial q_1} = -\frac{\partial W}{\partial q_1}, \quad \ldots, \quad \frac{d}{dt}\frac{\partial T}{\partial q_n'} - \frac{\partial T}{\partial q_n} = -\frac{\partial W}{\partial q_n}.
\]
Following Newtonian notation, if $t$ is time, we then write $q_i' = \dot{q}_i$.

1.6.4. Second application: geodesics of surfaces

Consider an affine three-dimensional Euclidean space denoted by $E^3$. A point $M$ of this space is represented, with respect to a right-handed orthonormal frame $O\,\boldsymbol{ijk}$, by its three coordinates $(x,y,z)$. We also denote the bipoint $\boldsymbol{OM}$ as
\[
M = \begin{bmatrix} x \\ y \\ z \end{bmatrix}.
\]
The equation of a surface $(S)$ takes the form $F(M) \equiv F(x,y,z) = 0$, where $F$ is a differentiable mapping from $\mathbb{R}^3$ to $\mathbb{R}$. We wish to determine the curves of $(S)$, with fixed extremities $A$ and $B$, whose length is an extremum. We assume that the curves are represented by differentiable mappings from $\mathbb{R}$ to $E^3$. One of these curves, which we call $(C)$, is a geodesic of the surface joining $A$ to $B$. This geodesic is represented by the mapping
\[
u \in [u_0,u_1] \longrightarrow M = f(u) \in E^3 \quad\text{with}\quad F(f(u)) = 0.
\]
Let $s$ denote a curvilinear abscissa of $(C)$; then $dM = \boldsymbol{T}\, ds$, where $\boldsymbol{T}$ denotes the unit vector tangent to $(C)$¹. Thus, $(ds)^2 = (dM)^2$; $dM = f'(u)\, du$, which we also write as $dM = M_u'\, du$. We obtain $(dM)^2 = M_u'^{\,T} M_u'\, (du)^2 = \bigl( x_u'^2 + y_u'^2 + z_u'^2 \bigr)\, (du)^2$, where $M_u'$ is the vector of components $x_u', y_u', z_u'$, and $M_u'^{\,T}$ is the row matrix $[x_u', y_u', z_u']$.

Let us write $G(M_u') = M_u'^{\,T} M_u'$; for an appropriate choice of orientation of $(C)$, $ds = \sqrt{G(M_u')}\, du$. This leads to finding the curves $(C)$ which make
\[
L = \int_{u_0}^{u_1} \sqrt{G(M_u')}\, du
\]
an extremum, where $M(u_0) = A$, $M(u_1) = B$ and $F(M(u)) = 0$. According to section 1.5.2, associated with the existence of a distributed multiplier, the problem is equivalent to the following: there exists a $C^1$ mapping $u \in [u_0,u_1] \longrightarrow M = f(u) \in E^3$, with $f(u_0) = A$ and $f(u_1) = B$, and a $C^0$ mapping $u \in [u_0,u_1] \longrightarrow \lambda(u)$, such that
\[
a = \int_{u_0}^{u_1} \Bigl( \sqrt{G(M_u')} - \lambda(u)\, F(M(u)) \Bigr)\, du
\]
is an extremum for $M = f(u)$ and $\lambda(u)$. Furthermore,
\[
d\sqrt{G(M_u')} = \frac{dG(M_u')}{2\sqrt{G(M_u')}} \quad\text{and}\quad dG(M_u') = d\bigl( M_u'^{\,T} M_u' \bigr) = 2\, M_u'^{\,T}\, dM_u',
\]
from which we derive
\[
d\sqrt{G(M_u')} = \frac{M_u'^{\,T}}{\sqrt{G(M_u')}}\, dM_u',
\]

¹ It is recalled that we write vectors without an arrow but in bold italic.
and implies
\[
\frac{\partial \sqrt{G(M_u')}}{\partial M_u'} = \frac{M_u'^{\,T}}{\sqrt{G(M_u')}} = \left( \frac{dM}{du} \Big/ \frac{ds}{du} \right)^{T} = \left( \frac{dM}{ds} \right)^{T} = \boldsymbol{T}^{T}. \tag{1.10}
\]
Thus,
\[
\delta a = \int_{u_0}^{u_1} \left( \frac{\partial \sqrt{G(M_u')}}{\partial M_u'}\, \delta M_u' - \lambda(u)\, \frac{\partial F(M(u))}{\partial M}\, \delta M - F(M(u))\, \delta\lambda \right) du,
\]
hence, through integration by parts,
\[
\delta a = \left[ \frac{\partial \sqrt{G(M_u')}}{\partial M_u'}\, \delta M \right]_{u_0}^{u_1}
- \int_{u_0}^{u_1} \left[ \left( \frac{d}{du} \frac{\partial \sqrt{G(M_u')}}{\partial M_u'} + \lambda(u)\, \frac{\partial F(M(u))}{\partial M} \right) \delta M + F(M(u))\, \delta\lambda \right] du.
\]
Furthermore, $\delta M(u_0) = \delta M(u_1) = 0$ and the first term is zero. We can deduce two possibilities:

– Let us choose $\delta M = 0$. For any $\delta\lambda$, $\delta a = 0$. The fundamental lemma of the calculus of variations implies $F(M(u)) = 0$, and we recover the fact that the curve $(C)$ belongs to the surface $(S)$.

– Let us choose $\delta\lambda = 0$. For any $\delta M$, $\delta a = 0$. The fundamental lemma of the calculus of variations implies
\[
\frac{d}{du} \frac{\partial \sqrt{G(M_u')}}{\partial M_u'} + \lambda(u)\, \frac{\partial F(M(u))}{\partial M} = 0.
\]
Taking into account equation [1.10], we obtain
\[
\frac{d\boldsymbol{T}^{T}}{du} + \lambda(u)\, \frac{\partial F(M(u))}{\partial M} = 0.
\]
Noting that $\left( \dfrac{\partial F(M(u))}{\partial M} \right)^{T} = \operatorname{grad} F$, and that Frenet's first formula is written as $\dfrac{d\boldsymbol{T}}{ds} = \dfrac{\boldsymbol{N}}{\rho}$, where $\boldsymbol{N}$ is the principal normal and $\rho$ is the algebraic radius of curvature of $(C)$ at $M$, we obtain
\[
\frac{\boldsymbol{N}}{\rho} + \lambda(u)\, \frac{du}{ds}\, \operatorname{grad} F = 0.
\]
Let us write $\mu(u) = -\lambda(u)\, \rho\, \dfrac{du}{ds}$; we obtain
\[
\boldsymbol{N} = \mu(u)\, \operatorname{grad} F. \tag{1.11}
\]
As the osculating plane of $(C)$ at $M$ is the plane $M\,\boldsymbol{T}\,\boldsymbol{N}$, we obtain the following theorem.

THEOREM 1.5.– The curves of a surface connecting two points $A$ and $B$, and whose length is an extremum, are those for which the osculating plane is normal to the surface. Such curves are called geodesics of the surface (see Figure 1.2).
Figure 1.2. Geodesic curve of the surface F connecting the points A with B. The plane (M , T , N ) represents the osculating plane of the curve
EXAMPLE 1.3.– Geodesics of a sphere. Let $(S)$ be a sphere with center $O$ and radius $R_o$. At a point $M$ of the sphere, $\boldsymbol{OM}$ is normal to $(S)$ and parallel to $\boldsymbol{N}$: the condition [1.11] is satisfied. Furthermore,
\[
\boldsymbol{T} = \frac{d\boldsymbol{OM}}{ds} \quad\text{and}\quad \frac{d\boldsymbol{T}}{ds} = \frac{\boldsymbol{N}}{\rho};
\]
thus, $\dfrac{d^2\boldsymbol{OM}}{ds^2}$ is parallel to $\boldsymbol{OM}$. The condition [1.11] is equivalent to
\[
\frac{d}{ds}\left( \frac{d\boldsymbol{OM}}{ds} \times \boldsymbol{OM} \right) = 0 \iff \frac{d\boldsymbol{OM}}{ds} \times \boldsymbol{OM} = \boldsymbol{K},
\]
where $\boldsymbol{K}$ is a constant vector and $\times$ denotes the vector product. $\boldsymbol{OM}$ is orthogonal to $\boldsymbol{K}$, and $M$ belongs to the diametral plane of the sphere $(S)$ orthogonal to $\boldsymbol{K}$: the geodesics of the sphere are portions of its great circles.
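The conclusion of Example 1.3 can be verified numerically on an explicitly parametrized great circle (the radius and the orthonormal pair `u`, `v` below are arbitrary choices):

```python
import numpy as np

R = 2.0
u = np.array([1.0, 0.0, 0.0])
v = np.array([0.0, 0.6, 0.8])    # unit vector orthogonal to u

OM = lambda s: R*(np.cos(s/R)*u + np.sin(s/R)*v)   # great circle, arc length s
T  = lambda s: -np.sin(s/R)*u + np.cos(s/R)*v      # dOM/ds, unit tangent

ss = np.linspace(0.0, 6.0, 25)
Ks = [np.cross(T(s), OM(s)) for s in ss]
for K in Ks[1:]:
    assert np.allclose(K, Ks[0], atol=1e-9)        # dOM/ds x OM is a constant vector K

for s in ss:
    acc = -(1.0/R)*(np.cos(s/R)*u + np.sin(s/R)*v) # d^2 OM / ds^2
    assert np.allclose(np.cross(acc, OM(s)), 0.0, atol=1e-9)  # parallel to OM
```

Both characterizations of the text hold along the whole curve: the second derivative stays parallel to $\boldsymbol{OM}$, and the vector $\boldsymbol{K}$ is constant, so the curve lies in the diametral plane orthogonal to $\boldsymbol{K}$.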
2 Variation of Curvilinear Integral
2.1. Geometrization of variational problems

As we have seen in Chapter 1, a mapping from an interval $[t_0,t_1]$ of $\mathbb{R}$ to $\mathbb{R}^n$, i.e. made up of $n$ real functions,
\[
t \in I \equiv [t_0,t_1] \longrightarrow M = \varphi(t) \in \mathbb{R}^n,
\]
represents a curve $(C)$ of $\mathbb{R}^n$ associated with the parameter $t$. The mapping $\varphi$ is differentiable over $[t_0,t_1]$ as many times as necessary. The search for $n$ real functions leads to the search for a curve in an $n$-dimensional space; consequently, it is possible to transform a variational problem into a geometric problem. Indeed, it is when the problem statement brings in only quantities attached to the curve (i.e. independent of the parameterization) that the problem is a geometric one, and the problems of the calculus of variations are then stated in the form of finding a curve of $\mathbb{R}^n$ that realizes an extremum condition.

Consider the differentiable mapping
\[
(t,u) \in I \times J \longrightarrow \psi(t,u) \in \mathbb{R}^n,
\]
where $J \equiv [u_0,u_1]$ is an interval of $\mathbb{R}$ containing $0$, such that $\psi(t,0) = \varphi(t)$. For each value of $u$ belonging to $J$, the mapping $\psi$ defines a curve $(C_u)$, such that for $u = 0$, $(C_0) = (C)$. We have thus defined two main families of curves:
\[
(C_u),\ u \in J:\quad t \in I \longrightarrow \psi(t,u) \in \mathbb{R}^n,
\]
\[
(\Gamma_t),\ t \in I:\quad u \in J \longrightarrow \psi(t,u) \in \mathbb{R}^n.
\]
In the particular case where $n = 3$, the mapping $\psi$ defines a surface $(S)$ of $\mathbb{R}^3$. The curves $(C_u)$ and $(\Gamma_t)$ are coordinate lines of the surface $(S)$. Each point $M(t,u)$ of the surface $(S)$ is generally determined by the intersection of two coordinate lines. The curves $(C_u)$ of the first family are the deformed shapes of $(C_0) = (C)$ through the deformations associated with the second family. The mapping $\psi$ is differentiable, and we write
\[
dM = \frac{\partial M(t,u)}{\partial t}\, dt + \frac{\partial M(t,u)}{\partial u}\, du,
\]
representing
\[
d\psi = \frac{\partial \psi(t,u)}{\partial t}\, dt + \frac{\partial \psi(t,u)}{\partial u}\, du.
\]
The vector $dM$ belongs to the plane tangent to the surface $(S)$ at $M(t,u)$, for which $\dfrac{\partial \psi(t,u)}{\partial t}$ and $\dfrac{\partial \psi(t,u)}{\partial u}$ constitute a tangent-plane basis. Given the requirements of the problem, we simply change the notations (see Figure 2.1):
\[
dM = \frac{\partial \psi(t,0)}{\partial t}\, dt \quad\text{and}\quad \delta M = \frac{\partial \psi(t,0)}{\partial u}\, du.
\]

Figure 2.1. The coordinate curves of the surface (S) associated with two position parameters, u and t. For a color version of the figures in this chapter, see www.iste.co.uk/gouin/mechanics.zip

The vector $dM$ is the value of the differential of $\psi$ for $du = 0$, taken at a point of $(C)$, i.e. for $u = 0$. The vector $\delta M$ is the value of the differential of $\psi$ for $dt = 0$, taken at a point of $(C)$, i.e. for $u = 0$. We have the important relation
\[
d\delta M = \delta dM,
\]
corresponding to the interchange of the two partial derivatives of the $C^2$ mapping $\psi$. Thus, $\delta M$ will be called the deformation vector field of the curve $(C)$ associated with the family $\{(C_u),\ u \in J\}$.

2.2. First form of curvilinear integral

The vector space $\mathbb{R}^n$ is assumed to be Euclidean. Given a differentiable vector field of $\mathbb{R}^n$,
\[
M \in \mathbb{R}^n \longrightarrow \boldsymbol{V} = \Phi(M) \in \mathbb{R}^n,
\]
we consider a curve $(C)$ connecting two points $A$ and $B$. The curve $(C)$ is assumed to be differentiable.

DEFINITION 2.1.– We call circulation of the vector field $\boldsymbol{V}$ the curvilinear integral
\[
a = \int_{(C)} \boldsymbol{V}^{T}\, dM,
\]
where $dM = \varphi'(t)\, dt$, with $t \in I \equiv [t_0,t_1] \longrightarrow M = \varphi(t) \in \mathbb{R}^n$ being the representation of the curve $(C)$.

Indeed, we only need a form field $M \in \mathbb{R}^n \longrightarrow \boldsymbol{V}^{T} = \Phi^{T}(M)$. This is the case for a Euclidean space: the metric induces a transposition, denoted by $^{T}$, which associates a linear form with each vector. Any form field $\boldsymbol{V}^{T}$ is associated with an integral $a$, which can be written as $a = \mathcal{G}(C)$.

We propose to study the concept of variation of $a$ for the deformation field $\psi$. To each curve $(C_u)$, $u \in J$, represented by $t \in I \longrightarrow M(t,u) \equiv \psi(t,u) \in \mathbb{R}^n$ with $\psi(t,0) = \varphi(t)$, is associated the scalar $\mathcal{G}(C_u)$, which is the circulation of the form field $\Phi^{T}(M) \equiv \boldsymbol{V}^{T}(M(t,u))$ along the curve $(C_u)$ (Figure 2.2):
\[
a = \mathcal{G}(C_u) = \int_{(C_u)} \boldsymbol{V}^{T}(M(t,u))\, \frac{\partial M(t,u)}{\partial t}\, dt.
\]
Figure 2.2. Diagram of the calculus of the curvilinear integral a along the curve Cu
Thus, for every curve $(C_u)$, we define a scalar $a = f(u)$. We write $\delta a = f'(0)\, du$; this expression is analogous to the calculation carried out in Chapter 1. To calculate $da$, the mapping $\psi(t,u)$ must be twice continuously differentiable:
\[
da = \int_{t_0}^{t_1} \left( \frac{\partial \boldsymbol{V}^{T}(M(t,u))}{\partial u}\, du\ \frac{\partial M(t,u)}{\partial t} + \boldsymbol{V}^{T}(M(t,u))\, \frac{\partial^2 M(t,u)}{\partial t\, \partial u}\, du \right) dt.
\]
Through integration by parts, we get
\[
da = \left[ \boldsymbol{V}^{T}(M(t,u))\, \frac{\partial M(t,u)}{\partial u}\, du \right]_{t_0}^{t_1}
+ \int_{t_0}^{t_1} \frac{\partial \boldsymbol{V}^{T}(M(t,u))}{\partial u}\, du\ \frac{\partial M(t,u)}{\partial t}\, dt
- \int_{t_0}^{t_1} \frac{\partial \boldsymbol{V}^{T}(M(t,u))}{\partial t}\, \frac{\partial M(t,u)}{\partial u}\, du\, dt.
\]
For $u = 0$, $da = \delta a = f'(0)\, du$, and we obtain
\[
\delta a = \left[ \boldsymbol{V}^{T}(M(t,0))\, \frac{\partial M(t,0)}{\partial u}\, du \right]_{t_0}^{t_1}
+ \int_{t_0}^{t_1} \frac{\partial \boldsymbol{V}^{T}(M(t,0))}{\partial u}\, du\ \frac{\partial M(t,0)}{\partial t}\, dt
- \int_{t_0}^{t_1} \frac{\partial \boldsymbol{V}^{T}(M(t,0))}{\partial t}\, \frac{\partial M(t,0)}{\partial u}\, du\, dt.
\]
But $\dfrac{\partial M(t,0)}{\partial t}\, dt = dM$ and $\dfrac{\partial M(t,0)}{\partial u}\, du = \delta M$, so
\[
\frac{\partial \boldsymbol{V}^{T}(M(t,0))}{\partial t}\, dt = \frac{\partial \boldsymbol{V}^{T}}{\partial M}\, \frac{\partial M(t,0)}{\partial t}\, dt = \frac{\partial \boldsymbol{V}^{T}}{\partial M}\, dM,
\]
denoted by $d\boldsymbol{V}^{T}$, and
\[
\frac{\partial \boldsymbol{V}^{T}(M(t,0))}{\partial u}\, du = \frac{\partial \boldsymbol{V}^{T}}{\partial M}\, \frac{\partial M(t,0)}{\partial u}\, du = \frac{\partial \boldsymbol{V}^{T}}{\partial M}\, \delta M,
\]
denoted by $\delta \boldsymbol{V}^{T}$. From the diagram of the deformation of $(C_u)$, shown in Figure 2.3, we deduce the relation that gives the variation of the curvilinear integral in the form
\[
\delta a = \bigl[ \boldsymbol{V}^{T} \delta M \bigr]_{A}^{B} + \int_{AB} \bigl( \delta \boldsymbol{V}^{T}\, dM - d\boldsymbol{V}^{T}\, \delta M \bigr),
\]
or
\[
\delta a = \boldsymbol{V}^{T}(B)\, \delta B - \boldsymbol{V}^{T}(A)\, \delta A + \int_{AB} \bigl( \delta \boldsymbol{V}^{T}\, dM - d\boldsymbol{V}^{T}\, \delta M \bigr). \tag{2.1}
\]

Figure 2.3. Variation of the curve (C0) to (Cu) by a vector field δM

Let us explain the calculations. If
\[
\boldsymbol{V} \equiv \boldsymbol{V}(M) \quad\text{with}\quad \boldsymbol{V} = \begin{bmatrix} V_1 \\ V_2 \\ V_3 \end{bmatrix} \quad\text{and}\quad M = \begin{bmatrix} x \\ y \\ z \end{bmatrix},
\]
then
\[
\begin{bmatrix} dV_1 \\ dV_2 \\ dV_3 \end{bmatrix}
=
\begin{bmatrix}
\dfrac{\partial V_1}{\partial x} & \dfrac{\partial V_1}{\partial y} & \dfrac{\partial V_1}{\partial z} \\[2mm]
\dfrac{\partial V_2}{\partial x} & \dfrac{\partial V_2}{\partial y} & \dfrac{\partial V_2}{\partial z} \\[2mm]
\dfrac{\partial V_3}{\partial x} & \dfrac{\partial V_3}{\partial y} & \dfrac{\partial V_3}{\partial z}
\end{bmatrix}
\begin{bmatrix} dx \\ dy \\ dz \end{bmatrix}.
\]
We write
\[
d\boldsymbol{V} = \frac{\partial \boldsymbol{V}}{\partial M}\, dM \quad\text{and}\quad \delta \boldsymbol{V} = \frac{\partial \boldsymbol{V}}{\partial M}\, \delta M.
\]
Hence,
\[
\delta a = \boldsymbol{V}^{T}(B)\, \delta B - \boldsymbol{V}^{T}(A)\, \delta A + \int_{AB} \left( \delta M^{T} \left( \frac{\partial \boldsymbol{V}}{\partial M} \right)^{T} dM - dM^{T} \left( \frac{\partial \boldsymbol{V}}{\partial M} \right)^{T} \delta M \right),
\]
or
\[
\delta a = \boldsymbol{V}^{T}(B)\, \delta B - \boldsymbol{V}^{T}(A)\, \delta A + \int_{AB} \boldsymbol{W}^{T}\, dM,
\quad\text{with}\quad
\boldsymbol{W} = \left( \frac{\partial \boldsymbol{V}}{\partial M} - \left( \frac{\partial \boldsymbol{V}}{\partial M} \right)^{T} \right) \delta M.
\]
In the case of the space $\mathbb{R}^3$,
\[
\frac{\partial \boldsymbol{V}}{\partial M} - \left( \frac{\partial \boldsymbol{V}}{\partial M} \right)^{T}
=
\begin{bmatrix} 0 & -\xi & \eta \\ \xi & 0 & -\zeta \\ -\eta & \zeta & 0 \end{bmatrix},
\]
denoted by $i(\operatorname{rot} \boldsymbol{V})$, where $(\zeta, \eta, \xi)$ are the components of $\operatorname{rot} \boldsymbol{V}$. Then, $i(\operatorname{rot} \boldsymbol{V})\, \delta M = \operatorname{rot} \boldsymbol{V} \times \delta M$ and
\[
\delta a = \boldsymbol{V}^{T}(B)\, \delta B - \boldsymbol{V}^{T}(A)\, \delta A + \int_{AB} \bigl( \operatorname{rot} \boldsymbol{V},\ \delta M,\ dM \bigr).
\]
This form of writing, restricted to a three-dimensional Euclidean space, cannot be generalized to other dimensions.
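The identity $i(\operatorname{rot}\boldsymbol{V})\,\delta M = \operatorname{rot}\boldsymbol{V} \times \delta M$ used above can be checked numerically; the field `V` below is an arbitrary invented example, and the Jacobian matrix is approximated by central differences:

```python
import numpy as np

def V(M):                                   # invented smooth vector field on R^3
    x, y, z = M
    return np.array([y*z, x*z - y**2, x*y + z])

def jacobian(f, M, h=1e-6):                 # central-difference Jacobian dV/dM
    J = np.zeros((3, 3))
    for jcol in range(3):
        e = np.zeros(3); e[jcol] = h
        J[:, jcol] = (f(M + e) - f(M - e)) / (2*h)
    return J

M  = np.array([0.3, -1.2, 0.7])
dM = np.array([0.1, 0.4, -0.2])

J = jacobian(V, M)
rotV = np.array([J[2,1]-J[1,2], J[0,2]-J[2,0], J[1,0]-J[0,1]])
lhs = (J - J.T) @ dM                        # the antisymmetric part applied to dM
rhs = np.cross(rotV, dM)                    # rot V x dM
assert np.allclose(lhs, rhs, atol=1e-6)
```

The antisymmetric part of the Jacobian of $\boldsymbol{V}$ therefore acts on any vector exactly as the vector product with $\operatorname{rot}\boldsymbol{V}$, which is what turns the integrand into the mixed product $(\operatorname{rot}\boldsymbol{V},\ \delta M,\ dM)$.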
2.3. Second form of curvilinear integrals

A curvilinear integral often takes the form
\[
a = \int_{AB} n\, ds, \tag{2.2}
\]
where $n$ is a differentiable mapping from $\mathbb{R}^p$ to $\mathbb{R}$,
\[
M \in \mathbb{R}^p \longrightarrow n(M) \in \mathbb{R},
\]
and $s$ is the curvilinear abscissa of the curve $\widehat{AB}$. We have $dM = \boldsymbol{T}\, ds$ with $\boldsymbol{T}^{T}\boldsymbol{T} = 1$, where $\boldsymbol{T}$ is the unit vector tangent to the arc $\widehat{AB}$, oriented in the direction of increasing $s$. The relations demonstrated in the previous section make it possible to calculate the variation of $a$. Indeed, let us write $\boldsymbol{V} = n(M)\, \boldsymbol{T}$, from which it follows that
\[
a = \int_{AB} n(M)\, \boldsymbol{T}^{T}\, dM \equiv \int_{AB} \boldsymbol{V}^{T}\, dM.
\]
Hence,
\[
\delta a = n(B)\, \boldsymbol{T}_B^{T}\, \delta B - n(A)\, \boldsymbol{T}_A^{T}\, \delta A + \int_{AB} \Bigl( \delta\bigl( n(M)\, \boldsymbol{T}^{T} \bigr)\, dM - d\bigl( n(M)\, \boldsymbol{T}^{T} \bigr)\, \delta M \Bigr).
\]
This relation is always satisfied, because if $M(t,u)$ represents a deformation of $(C) = (C_0)$, then
\[
a = \int_{A}^{B} n(M(t,u))\, \boldsymbol{T}^{T}(t,u)\, \frac{\partial M(t,u)}{\partial t}\, dt,
\]
and for $u = 0$ we have
\[
\delta \boldsymbol{V} = \frac{\partial \boldsymbol{V}(t,0)}{\partial u}\, du, \qquad d\boldsymbol{V} = \frac{\partial \boldsymbol{V}(t,0)}{\partial t}\, dt,
\]
and $\delta(n\,\boldsymbol{T}^{T})\, dM = n\, \delta\boldsymbol{T}^{T}\, dM + \delta n\, \boldsymbol{T}^{T}\, dM$. But $\boldsymbol{T}^{T}\boldsymbol{T} = 1$ implies $\delta\boldsymbol{T}^{T}\, \boldsymbol{T} = 0$, and thus $\delta\boldsymbol{T}^{T}\, dM = 0$ and $\delta(n\,\boldsymbol{T}^{T})\, dM = \delta n\, ds$. Furthermore, $d(n\,\boldsymbol{T}^{T}) = \dfrac{d(n\,\boldsymbol{T}^{T})}{ds}\, ds$. Finally,
\[
\delta a = n(B)\, \boldsymbol{T}_B^{T}\, \delta B - n(A)\, \boldsymbol{T}_A^{T}\, \delta A + \int_{AB} \left( \delta n - \delta M^{T}\, \frac{d(n\,\boldsymbol{T})}{ds} \right) ds.
\]
Since
\[
\delta n = \frac{\partial n}{\partial M}\, \delta M = \delta M^{T}\, \operatorname{grad} n,
\]
we can write¹
\[
\delta a = \bigl[ n\, \boldsymbol{T}^{T}\, \delta M \bigr]_{A}^{B} + \int_{AB} \left( \operatorname{grad} n - \frac{d(n\,\boldsymbol{T})}{ds} \right)^{T} \delta M\, ds,
\]
which gives, along the extremals,
\[
\operatorname{grad} n - \frac{d(n\,\boldsymbol{T})}{ds} = 0.
\]
This is the most commonly used relation; however, we can also make it explicit in another form:
\[
\frac{d(n\,\boldsymbol{T})}{ds} = n\, \frac{d\boldsymbol{T}}{ds} + \boldsymbol{T}\, \boldsymbol{T}^{T}\, \operatorname{grad} n = n\, \frac{\boldsymbol{N}}{R} + \boldsymbol{T}\, \boldsymbol{T}^{T}\, \operatorname{grad} n,
\]
because
\[
\frac{dn}{ds} = \frac{\partial n}{\partial M}\, \frac{dM}{ds} = (\operatorname{grad} n)^{T}\, \boldsymbol{T} = \boldsymbol{T}^{T}\, \operatorname{grad} n,
\]
where $\boldsymbol{N}$ is the principal normal vector to the curve $\widehat{AB}$ at $M$ and $R$ is the radius of curvature. So, we obtain
\[
\delta a = \bigl[ n\, \boldsymbol{T}^{T}\, \delta M \bigr]_{A}^{B} + \int_{AB} \delta M^{T} \left( (1 - \boldsymbol{T}\,\boldsymbol{T}^{T})\, \operatorname{grad} n - n\, \frac{\boldsymbol{N}}{R} \right) ds, \tag{2.3}
\]
where $1$ denotes the identity.

REMARK 2.1.– The term $\boldsymbol{T}\,\boldsymbol{T}^{T}$ is represented in orthonormal axes by the $p \times p$ matrix
\[
\begin{bmatrix} T_1 \\ \vdots \\ T_p \end{bmatrix} \bigl[ T_1, \cdots, T_p \bigr],
\]

¹ In orthonormal axes, $\dfrac{\partial n}{\partial M} = \left[ \dfrac{\partial n}{\partial x_1}, \ldots, \dfrac{\partial n}{\partial x_p} \right] = (\operatorname{grad} n)^{T}$.
which stands for the projection on the tangent to the curve $\widehat{AB}$ at $M$. Indeed, if $\boldsymbol{W}$ is a vector of $\mathbb{R}^p$, $\boldsymbol{T}\,\boldsymbol{T}^{T}\,\boldsymbol{W} = \boldsymbol{T}\,(\boldsymbol{T}^{T}\boldsymbol{W})$, and $1 - \boldsymbol{T}\,\boldsymbol{T}^{T}$ represents the projection on the plane normal to the curve $\widehat{AB}$ at $M$, i.e. normal to the vector $\boldsymbol{T}$ at $M$.

REMARK 2.2.– Instead of using the fundamental formula [2.1] demonstrated in section 2.2, we could carry out a more direct calculation of the variation of the curvilinear integral [2.2]. Let $M(t,u)$ be the deformation defined earlier. Then,
\[
\delta a = \int_{AB} \bigl( \delta n(M)\, ds + n(M)\, \delta ds \bigr).
\]
Since $dM = \boldsymbol{T}\, ds$, we have $ds = \boldsymbol{T}^{T}\, dM$. According to the Schwarz theorem, $\delta dM = d\delta M$, and consequently
\[
\delta ds = \delta\bigl( \boldsymbol{T}^{T}\, dM \bigr) = \delta\boldsymbol{T}^{T}\, dM + \boldsymbol{T}^{T}\, \delta dM = \delta\boldsymbol{T}^{T}\, \boldsymbol{T}\, ds + \boldsymbol{T}^{T}\, d\delta M.
\]
But $\boldsymbol{T}^{T}\boldsymbol{T} = 1$ implies $\delta\boldsymbol{T}^{T}\, \boldsymbol{T} = 0$, and $\delta ds = \boldsymbol{T}^{T}\, d\delta M = \boldsymbol{T}^{T}\, \dfrac{d\delta M}{ds}\, ds$. It follows that
\[
\delta a = \int_{AB} \left( \frac{\partial n}{\partial M}\, \delta M + n(M)\, \boldsymbol{T}^{T}\, \frac{d\delta M}{ds} \right) ds
= \int_{AB} \left( (\operatorname{grad} n)^{T}\, \delta M + \frac{d\bigl( n\, \boldsymbol{T}^{T}\, \delta M \bigr)}{ds} - \frac{d(n\,\boldsymbol{T}^{T})}{ds}\, \delta M \right) ds
\]
\[
= \bigl[ n\, \boldsymbol{T}^{T}\, \delta M \bigr]_{A}^{B} + \int_{AB} \left( \operatorname{grad} n - \frac{d(n\,\boldsymbol{T})}{ds} \right)^{T} \delta M\, ds,
\]
which corresponds to result [2.3].

2.4. Generalization and variation of derivative

It may happen that the curvilinear integral corresponds not only to the circulation of a vector field defined at each point of the space, but also to the circulation of intrinsic quantities defined along the curve, such as $\boldsymbol{N}$, $\boldsymbol{T}$, $R$. Relation [2.1] of section 2.2 can be applied without any change. However, it is useful to establish a preliminary lemma concerning the variation of a derivative.
LEMMA 2.1.– Variation of derivative. Let $x \in \mathbb{R}^n \longrightarrow A = F(x) \in \mathbb{R}^p$ be a twice continuously differentiable mapping. Then,
\[
\delta\left( \frac{\partial A}{\partial x} \right) = \frac{\partial\, \delta A}{\partial x} - \frac{\partial A}{\partial x}\, \frac{\partial\, \delta x}{\partial x}, \tag{2.4}
\]
where $\dfrac{\partial A}{\partial x}$ denotes the tangent linear mapping of $F(x)$.
PROOF.– The expression $dA = F'(x)\, dx$ is written in matrix form with
\[
dx = \begin{bmatrix} dx_1 \\ \vdots \\ dx_n \end{bmatrix} \in \mathbb{R}^n
\quad\text{and}\quad
F'(x) = \left[ \frac{\partial F}{\partial x_1}, \cdots, \frac{\partial F}{\partial x_n} \right],
\]
a matrix with $p$ rows and $n$ columns.

We can choose for $dx$ a mapping $H(x)$ from $\mathbb{R}^n$ to $\mathbb{R}^n$, i.e. $dA = F'(x)\, H(x)$; for the sake of convenience, we call $H(x)$ the variation of $x$ and denote it by $\delta x$. Thus, $x \in \mathbb{R}^n \longrightarrow \delta x \in \mathbb{R}^n$ is a vector field of $\mathbb{R}^n$, and similarly we write $\delta A = F'(x)\, \delta x = G(x)$. Let us calculate $dG(x)$; we have
\[
dG(x) = d\,\delta A = dF'(x)\, \delta x + F'(x)\, d\,\delta x. \tag{2.5}
\]
Furthermore, $dF'(x) = F''(x)\, dx$, where $F''(x)$ is a bilinear operator field. Thus,
\[
\bigl( F''(x)\, dx \bigr)\, \delta x = \sum_{i,j} \frac{\partial^2 F}{\partial x_i\, \partial x_j}\, dx_i\, \delta x_j.
\]
As the mapping $F$ is twice continuously differentiable, $\dfrac{\partial^2 F}{\partial x_i\, \partial x_j} = \dfrac{\partial^2 F}{\partial x_j\, \partial x_i}$ and $F''(x)$ is a symmetric bilinear operator:
\[
\bigl( F''(x)\, dx \bigr)\, \delta x = \bigl( F''(x)\, \delta x \bigr)\, dx.
\]
Relation [2.5] becomes
\[
d\,\delta A = \bigl( F''(x)\, \delta x \bigr)\, dx + F'(x)\, \frac{\partial\, \delta x}{\partial x}\, dx.
\]
Furthermore, $d\,\delta A = \dfrac{\partial\, \delta A}{\partial x}\, dx$, where $\dfrac{\partial\, \delta A}{\partial x}$ is the tangent linear operator of the field $G(x) = \delta A$, assumed to be differentiable if $H(x) = \delta x$ is also differentiable. Hence,
\[
F''(x)\, \delta x + F'(x)\, \frac{\partial\, \delta x}{\partial x} = \frac{\partial\, \delta A}{\partial x},
\quad\text{or}\quad
\delta F'(x) + F'(x)\, \frac{\partial\, \delta x}{\partial x} = \frac{\partial\, \delta A}{\partial x}.
\]
Furthermore, $F'(x) = \dfrac{\partial A}{\partial x}$, and we obtain
\[
\delta\left( \frac{\partial A}{\partial x} \right) + \frac{\partial A}{\partial x}\, \frac{\partial\, \delta x}{\partial x} = \frac{\partial\, \delta A}{\partial x},
\]
and we get relation [2.4].

EXAMPLE 2.1.– Determine the variations of
\[
a = \int_{(C)} \boldsymbol{V}^{T}(\boldsymbol{T})\, dM,
\]
where $\boldsymbol{V}$ is a function of the unit tangent vector along the curve $(C)$. The calculations are as follows:
\[
\delta a = \bigl[ \boldsymbol{V}^{T}\, \delta M \bigr]_{A}^{B} + \int_{AB} \bigl( \delta \boldsymbol{V}^{T}\, dM - d\boldsymbol{V}^{T}\, \delta M \bigr)
\quad\text{with}\quad
\delta \boldsymbol{V} = \frac{\partial \boldsymbol{V}}{\partial \boldsymbol{T}}\, \delta \boldsymbol{T}.
\]
We must calculate $\delta \boldsymbol{T}$ as a function of $\delta M$. According to Lemma 2.1,
\[
\boldsymbol{T} = \frac{dM}{ds} \implies \delta \boldsymbol{T} = \delta\left( \frac{dM}{ds} \right) = \frac{d\,\delta M}{ds} - \frac{dM}{ds}\, \frac{d\,\delta s}{ds}.
\]
According to the Schwarz theorem, $d\delta s = \delta ds$ and $\delta dM = d\delta M$; hence,
\[
d\,\delta s = \delta\bigl( \boldsymbol{T}^{T}\, dM \bigr) = \delta\boldsymbol{T}^{T}\, dM + \boldsymbol{T}^{T}\, \delta dM = \boldsymbol{T}^{T}\, d\,\delta M,
\]
because $\delta\boldsymbol{T}^{T}\, dM = \delta\boldsymbol{T}^{T}\, \boldsymbol{T}\, ds = 0$. Therefore,
\[
\delta \boldsymbol{T} = \frac{d\,\delta M}{ds} - \frac{dM}{ds}\, \frac{d\,\delta s}{ds} = \bigl( 1 - \boldsymbol{T}\,\boldsymbol{T}^{T} \bigr)\, \frac{d\,\delta M}{ds}.
\]
Furthermore,
\[
\delta \boldsymbol{V} = \frac{\partial \boldsymbol{V}}{\partial \boldsymbol{T}}\, \delta \boldsymbol{T} \implies \delta \boldsymbol{V} = \frac{\partial \boldsymbol{V}}{\partial \boldsymbol{T}}\, \bigl( 1 - \boldsymbol{T}\,\boldsymbol{T}^{T} \bigr)\, \frac{d\,\delta M}{ds}.
\]
Hence,
\[
\delta \boldsymbol{V}^{T}\, dM = \frac{(d\,\delta M)^{T}}{ds}\, \bigl( 1 - \boldsymbol{T}\,\boldsymbol{T}^{T} \bigr) \left( \frac{\partial \boldsymbol{V}}{\partial \boldsymbol{T}} \right)^{T} \boldsymbol{T}\, ds
= (d\,\delta M)^{T}\, \bigl( 1 - \boldsymbol{T}\,\boldsymbol{T}^{T} \bigr) \left( \frac{\partial \boldsymbol{V}}{\partial \boldsymbol{T}} \right)^{T} \boldsymbol{T}.
\]
We obtain
\[
\delta a = \bigl[ \boldsymbol{V}^{T}\, \delta M \bigr]_{A}^{B} + \int_{AB} \left( (d\,\delta M)^{T}\, \bigl( 1 - \boldsymbol{T}\,\boldsymbol{T}^{T} \bigr) \left( \frac{\partial \boldsymbol{V}}{\partial \boldsymbol{T}} \right)^{T} \boldsymbol{T} - d\boldsymbol{V}^{T}\, \delta M \right),
\]
and through integration by parts,
\[
\delta a = \left[ \boldsymbol{V}^{T}\, \delta M + \delta M^{T}\, \bigl( 1 - \boldsymbol{T}\,\boldsymbol{T}^{T} \bigr) \left( \frac{\partial \boldsymbol{V}}{\partial \boldsymbol{T}} \right)^{T} \boldsymbol{T} \right]_{A}^{B}
- \int_{AB} \delta M^{T}\, d\left( \boldsymbol{V} + \bigl( 1 - \boldsymbol{T}\,\boldsymbol{T}^{T} \bigr) \left( \frac{\partial \boldsymbol{V}}{\partial \boldsymbol{T}} \right)^{T} \boldsymbol{T} \right),
\]
or
\[
\delta a = \left[ \left( \boldsymbol{V}^{T} + \boldsymbol{T}^{T}\, \frac{\partial \boldsymbol{V}}{\partial \boldsymbol{T}}\, \bigl( 1 - \boldsymbol{T}\,\boldsymbol{T}^{T} \bigr) \right) \delta M \right]_{A}^{B}
- \int_{AB} \delta M^{T}\, \frac{d}{ds}\left( \boldsymbol{V} + \bigl( 1 - \boldsymbol{T}\,\boldsymbol{T}^{T} \bigr) \left( \frac{\partial \boldsymbol{V}}{\partial \boldsymbol{T}} \right)^{T} \boldsymbol{T} \right) ds,
\]
which fits with the case $\boldsymbol{V}(\boldsymbol{T}) = \boldsymbol{T}$, corresponding to a constant scalar $n$.

2.5. First application: studying the optical path of light

2.5.1. Fermat's principle

The notations used are those of section 1.6.4 in Chapter 1. In the real three-dimensional Euclidean space $E^3$, corresponding to the usual physical space and referred to the direct orthonormal frame $O\,\boldsymbol{ijk}$, we consider a differentiable mapping
\[
M \in E^3 \longrightarrow n(M) \in \mathbb{R},
\]
called the refractive index of the medium.

PRINCIPLE 2.1.– In a medium whose refractive index is $n(M)$, the trajectory of the ray of light going from point $A$ to point $B$ is, among the curves connecting $A$ and $B$, the curve which makes
\[
a = \int_{(C)} n(M)\, ds \equiv \int_{(C)} n(M)\, \boldsymbol{T}^{T}\, dM
\]
extremal. The trajectory is called the optical path.

REMARK 2.3.– Given a curve $(C)$ defined by the differentiable mapping $t \in I \longrightarrow \varphi(t) \equiv M(t) \in \mathbb{R}^3$, and given a vector field defined at any point along the curve $(C)$, $M \in (C) \longrightarrow \boldsymbol{H}(M)$, a ruled surface is defined by the mapping (see Figure 2.4)
\[
(t,u) \in I \times \mathbb{R} \longrightarrow \psi(t,u) = M(t) + u\, \boldsymbol{H}(M(t)).
\]
We note that $\psi(t,0) = \varphi(t)$. So we have defined a deformation field $\psi$ such that
\[
\delta M = \left. \frac{\partial \psi(t,u)}{\partial u} \right|_{u=0} du = \boldsymbol{H}(M(t))\, du, \quad\text{and we can choose } du = 1;
\]
\[
dM = \left. \frac{\partial \psi(t,u)}{\partial t} \right|_{u=0} dt = \varphi'(t)\, dt, \quad\text{and we can choose } dt = 1.
\]
Thus, any arbitrary differentiable field $\boldsymbol{H}(M)$ defined at any point of $(C)$ can be considered as a field $\delta M$ associated with a deformation $\psi$ of $(C)$.

Figure 2.4. Variation of the curve (C) generating a ruled surface formed by the lines carried by the vector field H(M(t))

Let us return to Fermat's principle. The single-parameter families associated with a variation of $(C)$ are made up of curves connecting $A$ and $B$. That yields $\delta A = 0$ and $\delta B = 0$, and for any $\delta M$ we can write
\[
\delta a = \int_{(C)} \delta M^{T} \left( (1 - \boldsymbol{T}\,\boldsymbol{T}^{T})\, \operatorname{grad} n - n\, \frac{\boldsymbol{N}}{R} \right) ds = 0.
\]
The field $\delta M$ can be chosen as any continuous field of vectors that are null at $A$ and $B$. According to the lemma of Chapter 1, section 1.1 on the calculus of variations, Fermat's principle implies
\[
(1 - \boldsymbol{T}\,\boldsymbol{T}^{T})\, \operatorname{grad} n - n\, \frac{\boldsymbol{N}}{R} = 0.
\]
Consider the Frenet frame $M\,\boldsymbol{T}\,\boldsymbol{N}\,\boldsymbol{B}$; $(1 - \boldsymbol{T}\,\boldsymbol{T}^{T})$ is the projection on the normal plane of the Frenet frame (see Figure 2.5). So, the projection of $\operatorname{grad} n$ is
carried by the principal normal direction to the optical path. Consequently, $\operatorname{grad} n$ lies in the plane $M\,\boldsymbol{T}\,\boldsymbol{N}$, i.e. it belongs to the osculating plane of the optical path at $M$. Multiplying the above relation by $\boldsymbol{N}^{T}$, we deduce
\[
\frac{R}{n}\, \boldsymbol{N}^{T}\, \operatorname{grad} n = 1.
\]

Figure 2.5. Frenet frame M T N B of the curve (C) at M

Let $D$ be the point such that $\boldsymbol{MD} = R\,\boldsymbol{N}$, let $K$ be such that $\boldsymbol{MK} = \dfrac{\operatorname{grad} n}{n}$, let $I$ be the orthogonal projection of $K$ on $MD$, and let $H$ be the orthogonal projection of $D$ on $MK$. In the plane $MDH$, we have
\[
\frac{MI}{MH} = \frac{MK}{MD}, \quad\text{or}\quad MI \cdot MD = MK \cdot MH.
\]
Furthermore, $MD = R$ and $MI = \boldsymbol{N}^{T}\, \dfrac{\operatorname{grad} n}{n}$, which yields $MI \cdot MD = 1$. Consequently,
\[
MK = \frac{\| \operatorname{grad} n \|}{n} \implies MH = \frac{n}{\| \operatorname{grad} n \|}.
\]
The center of curvature of the optical path at $M$ can thus be found in the plane $(\Pi)$ perpendicular to the vector $\operatorname{grad} n$ and at a distance $MH$ such that
\[
MH = \frac{n}{\| \operatorname{grad} n \|}.
\]
2.5.2. Descartes' laws

These are the laws that govern the optical path through a diopter. Consider a surface $(S)$ in $E^3$ which separates the physical space into two regions $E_1$ and $E_2$ whose refractive indices are $n_1$ and $n_2$, respectively. These refractive indices are assumed to be differentiable, and for $M \in (S)$, $n_1(M) \neq n_2(M)$. It is assumed that $n_1(M_1)$, $M_1 \in E_1$, and $n_2(M_2)$, $M_2 \in E_2$, admit limits when $M_1$ and $M_2$ tend towards any point $M$ of $(S)$. We say that $(S)$ constitutes a diopter. We write
\[
[n(M)] = \lim_{M_1 \to M \in (S)} n_1(M_1) - \lim_{M_2 \to M \in (S)} n_2(M_2).
\]
Since $A$ and $B$ are the two ends of the optical path $(C)$, let us write
\[
a = \int_{AD} n_1(M)\, ds + \int_{DB} n_2(M)\, ds,
\]
where $D = (C) \cap (S)$. Let us assume that the optical path $(C)$ is differentiable outside the diopter $(S)$; then,
\[
\delta a = n_1(D)\, \boldsymbol{T}_1^{T}(D)\, \delta D - n_1(A)\, \boldsymbol{T}^{T}(A)\, \delta A
+ \int_{AD} \delta M^{T} \left( (1 - \boldsymbol{T}\,\boldsymbol{T}^{T})\, \operatorname{grad} n_1 - n_1\, \frac{\boldsymbol{N}}{R} \right) ds
\]
\[
\qquad + n_2(B)\, \boldsymbol{T}^{T}(B)\, \delta B - n_2(D)\, \boldsymbol{T}_2^{T}(D)\, \delta D
+ \int_{DB} \delta M^{T} \left( (1 - \boldsymbol{T}\,\boldsymbol{T}^{T})\, \operatorname{grad} n_2 - n_2\, \frac{\boldsymbol{N}}{R} \right) ds, \tag{2.6}
\]
with $\delta A = 0$ and $\delta B = 0$, where $\boldsymbol{T}_1(D)$ and $\boldsymbol{T}_2(D)$ denote the limits on $(S)$ of the vector $\boldsymbol{T}$ tangent to $(C)$ in $E_1$ and $E_2$, respectively.

Let us consider a field $\delta M$ satisfying $\delta D = 0$; then,
\[
\delta a = \int_{AD} \delta M^{T} \left( (1 - \boldsymbol{T}\,\boldsymbol{T}^{T})\, \operatorname{grad} n_1 - n_1\, \frac{\boldsymbol{N}}{R} \right) ds
+ \int_{DB} \delta M^{T} \left( (1 - \boldsymbol{T}\,\boldsymbol{T}^{T})\, \operatorname{grad} n_2 - n_2\, \frac{\boldsymbol{N}}{R} \right) ds.
\]
Thus, in each of the two regions $E_1$ and $E_2$,
\[
(1 - \boldsymbol{T}\,\boldsymbol{T}^{T})\, \operatorname{grad} n_i - n_i\, \frac{\boldsymbol{N}}{R} = 0, \quad\text{where } i = 1, 2. \tag{2.7}
\]
By returning to relation [2.6] and taking into account equation [2.7], for the optical path and for any vector field $\delta M$ with $\delta A = 0$ and $\delta B = 0$, we obtain
\[
\delta a = \bigl[ n_1(D)\, \boldsymbol{T}_1^{T}(D) - n_2(D)\, \boldsymbol{T}_2^{T}(D) \bigr]\, \delta D.
\]
Thus, for any vector $\delta D$, Fermat's principle implies
\[
\delta a = \bigl[ n_1(D)\, \boldsymbol{T}_1^{T}(D) - n_2(D)\, \boldsymbol{T}_2^{T}(D) \bigr]\, \delta D = 0.
\]
Since $D$ belongs to $(S)$, we deduce that $\boldsymbol{N}_S^{T}(D)\, \delta D = 0$, where $\boldsymbol{N}_S$ denotes the normal to the surface $(S)$ at $D$. Consequently,
\[
\boldsymbol{N}_S^{T}\, \delta D = 0 \implies \bigl[ n_1(D)\, \boldsymbol{T}_1^{T}(D) - n_2(D)\, \boldsymbol{T}_2^{T}(D) \bigr]\, \delta D = 0.
\]
This property corresponds to the existence of a real-scalar Lagrange multiplier $\lambda$ such that
\[
n_1(D)\, \boldsymbol{T}_1(D) - n_2(D)\, \boldsymbol{T}_2(D) = \lambda\, \boldsymbol{N}_S(D), \quad\text{i.e., at } D,\quad n_1\, \boldsymbol{T}_1 - n_2\, \boldsymbol{T}_2 = \lambda\, \boldsymbol{N}_S.
\]
The vectors $\boldsymbol{T}_1$, $\boldsymbol{T}_2$ and $\boldsymbol{N}_S$ therefore belong to the same plane $(\Pi)$. Let $\boldsymbol{u}$ denote the axis of the tangent plane to $(S)$ intersecting $(\Pi)$, such that $(\boldsymbol{u}, \boldsymbol{N}_S) = \pi/2$. By writing $i_p = (\boldsymbol{N}_S, \boldsymbol{T}_p)$, $p \in \{1,2\}$, we obtain $n_1 \sin i_1 = n_2 \sin i_2$. We can state Descartes' two laws, depicted in Figure 2.6:

a) the incident and refracted parts of the optical path lie in the same plane, the plane of incidence;

b) $n_1 \sin i_1 = n_2 \sin i_2$.
Figure 2.6. The diopter (S) separates the two media E1 and E2 . The tangents T 1 , T 2 and the normal to the diopter N S at M belong to the same plane (Π) normal to the surface (S)
2.6. Second application: the problem of isoperimeters Consider a differentiable closed plane curve (C), without double point, encircling a quarrable domain (D) (see Figure 2.7). The plane is assumed to be plunged into the ´ oriented Euclidean space E 3 , and the length of the curve (C) is L = (C) ds, where s denotes the˜positively oriented curvilinear abscissa of (C). The area of the domain (D) is A = (D) dσ, where dσ denotes the area measurement in the plane.
Figure 2.7. Quarrable domain (D) bounded by the curve (C) without double point, whose origin and extremity coincide
We deduce

L = ∮_{(C)} T dM and A = ∬_{(D)} n n dσ,

where n denotes the unit vector orthogonal to the Euclidean plane containing the curve (C). In orthonormal axes i, j of the plane, the vector n = i × j ≡ k completes the orthonormal trihedron in space and forms the right-handed orthonormal coordinate system of E³. There exists a vector field V ≡ X(x, y, z) i + Y(x, y, z) j + Z(x, y, z) n such that

rot V = [ ∂Z/∂y − ∂Y/∂z , ∂X/∂z − ∂Z/∂x , ∂Y/∂x − ∂X/∂y ]ᵀ = [0, 0, 1]ᵀ.   [2.8]
For the solution of this system of partial differential equations [2.8], we choose X = −y/2, Y = x/2 and Z = 0, and we can write

A = ∬_{(D)} (rot V) n dσ.

The area A represents the flux of rot V through the domain (D) oriented by the vector n, and according to Stokes' theorem,

∬_{(D)} (rot V) n dσ = ∮_{(C)} V dM,

with dM = [dx, dy, dz]ᵀ and

∮_{(C)} V dM = ½ ∮_{(C)} (x dy − y dx).
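For a polygonal curve, the boundary formula ½ ∮ (x dy − y dx) reduces to the classical shoelace sum. The small check below is a sketch (the rectangle is an arbitrary test figure, not from the text):

```python
# Shoelace formula: oriented area of a closed polygon from its vertices,
# as the discrete form of (1/2) ∮ (x dy − y dx).
def shoelace_area(pts):
    """Oriented area; positive for counterclockwise vertex order."""
    s = 0.0
    for (x0, y0), (x1, y1) in zip(pts, pts[1:] + pts[:1]):
        s += x0 * y1 - y0 * x1   # discrete x dy − y dx along one edge
    return 0.5 * s

rectangle = [(0, 0), (2, 0), (2, 1), (0, 1)]   # area 2, counterclockwise
```

Reversing the vertex order reverses the orientation and the sign of the result, which is why the formula gives the *oriented* area.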
This is a well-known result giving the oriented area bounded by a closed plane curve. The following two problems are equivalent:
(A) determining the curves of extremal length surrounding a domain with a given area;
(B) determining the curves with a given length surrounding a domain of extremal area.

Problem (A) is equivalent to finding the extremals of

L − λ ( ∮_{(C)} V dM − A ),

where λ is a given real. These are the extremals of

∮_{(C)} T dM − λ ∮_{(C)} V dM.
Problem (B) is equivalent to finding the extremals of

A − μ ( ∮_{(C)} T dM − L ),

where μ is a given real. These are the extremals of

∮_{(C)} V dM − μ ∮_{(C)} T dM.

Since

∮_{(C)} T dM − λ ∮_{(C)} V dM = −λ ( ∮_{(C)} V dM − μ ∮_{(C)} T dM ) with μ = 1/λ,

that proves the equivalence. Let us write

a = ∮_{(C)} (T − λV) dM.
Since the curve is closed, A ≡ B and the terms integrated by parts in equation [2.1] are zero:

δa = ∮_{(C)} [ δ(T − λV) dM − d(T − λV) δM ].

Let us note that while V is a vector field, this is not the case for T, which is defined only along the curve (C). So, we can consider ∂V/∂M but not ∂T/∂M. Nonetheless, δT dM = δT T ds = 0, and we deduce

δa = ∮_{(C)} [ λ (−δV dM + dV δM) − dT δM ].

Let us note that

−δV dM + dV δM = −δM (∂V/∂M)ᵀ dM + dM (∂V/∂M)ᵀ δM = δM [ ∂V/∂M − (∂V/∂M)ᵀ ] dM,

because ∂V/∂M − (∂V/∂M)ᵀ is an antisymmetric matrix. Since dM = T ds and dT = (N/R) ds, we finally obtain

δa = ∮_{(C)} δM { λ [ ∂V/∂M − (∂V/∂M)ᵀ ] T − N/R } ds.

For any vector field M ∈ R³ → δM, δa = 0, and Lemma 1.1 in section 1.3 of Chapter 1 (the fundamental lemma of the calculus of variations) implies

λ [ ∂V/∂M − (∂V/∂M)ᵀ ] T − c N = 0, with c ≡ 1/R.   [2.9]

Since

∂V/∂M − (∂V/∂M)ᵀ = ⎡ 0 −1 0 ; 1 0 0 ; 0 0 0 ⎤, then [ ∂V/∂M − (∂V/∂M)ᵀ ] T = n × T = N,

equation [2.9] writes λ N − c N = 0 and c = λ. The curvature c of (C) is constant: the required plane curves are circles.
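The extremal property of circles can be observed numerically through the isoperimetric ratio 4πA/L², which equals 1 for a circle and is smaller for every other closed curve. The family of curves r(θ) = 1 + ε cos 3θ below is an arbitrary illustrative choice:

```python
import math

# For a closed curve r(θ) = 1 + eps*cos(3θ): length L = ∮ sqrt(r² + r'²) dθ
# and area A = (1/2) ∮ r² dθ, computed by the rectangle rule (which is
# spectrally accurate for smooth periodic integrands).
def length_and_area(eps, n=20000):
    L = A = 0.0
    dth = 2 * math.pi / n
    for i in range(n):
        th = i * dth
        r = 1 + eps * math.cos(3 * th)
        dr = -3 * eps * math.sin(3 * th)
        L += math.hypot(r, dr) * dth   # ds = sqrt(r² + r'²) dθ
        A += 0.5 * r * r * dth         # dA = (1/2) r² dθ
    return L, A

def iso_ratio(eps):
    """Isoperimetric ratio 4πA/L²: equals 1 only for the circle (eps = 0)."""
    L, A = length_and_area(eps)
    return 4 * math.pi * A / (L * L)
```

As the perturbation ε grows, the ratio drops strictly below 1, consistent with the circle being the extremal curve.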
3 The Noether Theorem
3.1. Additional results on differential equations

DEFINITION 3.1.– Let (t, x₁, ..., x_n) ∈ O ⊂ Rⁿ⁺¹ → f(t, x₁, ..., x_n) ∈ R be a mapping defined on an open set O of Rⁿ⁺¹. The differential equation written as

x⁽ⁿ⁾ = f(t, x, x′, ..., x⁽ⁿ⁻¹⁾)   [3.1]

is called the normal form of the nth-order differential equation. The mapping t ∈ I ⊂ R → x = g(t) ∈ R, where I is an interval of R, assumed to be n times differentiable and also denoted by x, is said to be a solution of [3.1] if and only if for any t,

g⁽ⁿ⁾(t) = f(t, g(t), g′(t), ..., g⁽ⁿ⁻¹⁾(t)).

For the sake of simplicity, we will write x(t) for g(t). Let us note that if we write x₁ = x, x₂ = x′, ..., x_n = x⁽ⁿ⁻¹⁾, then the differential equation [3.1] can be written as

x₁′ = x₂
x₂′ = x₃
⋮
x_{n−1}′ = x_n
x_n′ = f(t, x₁, ..., x_n)   [3.2]
In a coordinate system with origin O, a point with coordinates x₁, ..., x_n in Rⁿ is denoted by M; the vector M (or the bipoint OM) is represented by

M = [x₁, ..., x_n]ᵀ.

We can write

ψ(t, M) = [x₂, ..., x_n, f(t, x₁, ..., x_n)]ᵀ,

ψ being a mapping from Rⁿ⁺¹ to Rⁿ. Equation [3.2] can be written as

dM/dt = ψ(t, M).   [3.3]

Any system of differential equations can be written in the form [3.3]. Let us recall that if x(t) is a solution of the differential equation [3.3] defined over the interval I, its restriction to any subinterval of I is also a solution of [3.3]. Conversely, there may exist solutions of [3.3] that extend x(t). If the only solution extending x(t) is x(t) itself, we say that x(t) is a maximal solution of [3.3]. This concept extends easily to a system of differential equations.

DEFINITION 3.2.– Let (t, M) → ψ(t, M) denote a mapping from Rⁿ⁺¹ to Rⁿ and f₁(t, x₁, ..., x_n), ..., f_n(t, x₁, ..., x_n) denote the values of the component mappings constituting ψ, i.e.

(t, M) ≡ (t, x₁, ..., x_n) → ψ(t, M) = [f₁(t, x₁, ..., x_n), ..., f_n(t, x₁, ..., x_n)]ᵀ.

The vector equation [3.3] is called a system of first-order differential equations.

THEOREM 3.1.– Let U be an open set of R × Rⁿ and (t, M) ∈ U → ψ(t, M) be a continuous mapping with components f₁, ..., f_n. It is assumed that the ∂fᵢ/∂x_j, where i, j ∈ {1, ..., n}, exist and are continuous in U. For any point (t₀, x₁₀, ..., x_n0) in U, there exists a maximal solution (x₁(t), ..., x_n(t)) such that x₁(t₀) = x₁₀, ..., x_n(t₀) = x_n0.
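The reduction [3.1] → [3.2]/[3.3] is easy to exercise numerically. As a sketch (the equation x″ = −x and the Runge–Kutta integrator are arbitrary illustrative choices, not from the text), the second-order equation becomes a first-order system M′ = ψ(t, M) with M = (x₁, x₂) = (x, x′):

```python
# ψ(t, M) for x'' = f(t, x, x') with f = -x: (x1' = x2, x2' = f(t, x1, x2)).
def psi(t, M):
    x1, x2 = M
    return (x2, -x1)

def rk4(psi, M, t0, t1, n=1000):
    """Classical fourth-order Runge-Kutta integration of dM/dt = psi(t, M)."""
    h = (t1 - t0) / n
    t = t0
    for _ in range(n):
        k1 = psi(t, M)
        k2 = psi(t + h/2, tuple(m + h/2*k for m, k in zip(M, k1)))
        k3 = psi(t + h/2, tuple(m + h/2*k for m, k in zip(M, k2)))
        k4 = psi(t + h, tuple(m + h*k for m, k in zip(M, k3)))
        M = tuple(m + h/6*(a + 2*b + 2*c + d)
                  for m, a, b, c, d in zip(M, k1, k2, k3, k4))
        t += h
    return M

# The maximal solution through (t0, x, x') = (0, 1, 0) is x(t) = cos t.
x_end, _ = rk4(psi, (1.0, 0.0), 0.0, 1.0)
```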
REMARK 3.1.– We are brought back to u = 0 by writing u = t − t₀, and equation [3.3] is written in the form

dM/du = ψ(u, M).

In the specific case where ψ is independent of u, the equation is said to be autonomous¹. Going forward, we will only consider the case where ψ is independent of u and we write

dM/du = W(M).   [3.4]

¹ Any differential equation of form [3.3] can be considered as an autonomous equation. Indeed, consider the system of differential equations dP/du = φ(u, P), where P = [x₁, ..., x_n]ᵀ. Writing u = t, then du/dt = 1 and dP/dt = dP/du, that is, du/dt = 1 and dP/dt = φ(u, P). With M = [u, Pᵀ]ᵀ and W(M) = [1, φ(u, P)ᵀ]ᵀ, W is a vector field of Rⁿ⁺¹ and dM/dt = W(M).

3.2. One-parameter groups and Lie groups

Let M₀ = [x₁₀, ..., x_n0]ᵀ be a point in Rⁿ. The solution of [3.4] that takes the value M₀ for u = 0 is denoted by M = T_u(M₀). For u = 0, we have M₀ = T₀(M₀); thus, T₀ = 1, where 1 is the identity.
THEOREM 3.2.– The transformations T_u, u ∈ I ⊂ R, associated with differential equation [3.4] and satisfying the conditions of Theorem 3.1 satisfy T_{u₁} ∘ T_{u₂} = T_{u₁+u₂} and (T_u)⁻¹ = T_{−u}. The transformations T_u, u ∈ I ⊂ R, form a group called a one-parameter group. We say that lim_{u→0} T_u = 1 if and only if for any M ∈ Rⁿ, lim_{u→0} T_u(M) = M.

DEFINITION 3.3.– A Lie group or continuous one-parameter group is a group {T_u, u ∈ I ⊂ R} of transformations of Rⁿ such that

T_{u₁} ∘ T_{u₂} = T_{u₁+u₂}, (T_u)⁻¹ = T_{−u} and lim_{u→0} T_u = 1.

Consequently, T₀ = 1.

EXAMPLE 3.1.– Consider the transformation of the plane

x = x₀ cos u − y₀ sin u
y = x₀ sin u + y₀ cos u,

thus

dx/du = −y, dy/du = x.

The mapping

M = [x, y]ᵀ ∈ R² → W(x, y) = [−y, x]ᵀ ∈ R²

is such that dM/du = W(M). Thus,

[x, y]ᵀ = T_u [x₀, y₀]ᵀ, where T_u = ⎡ cos u −sin u ; sin u cos u ⎤

represents the rotation of angle u. Indeed, we have T_{u₁} ∘ T_{u₂} = T_{u₁+u₂}, [T_u]⁻¹ = T_{−u} and lim_{u→0} T_u = 1.
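The rotation group of Example 3.1 can be verified numerically; the point M₀ and the parameter values below are arbitrary illustrative choices:

```python
import math

# Tu: rotation of angle u in the plane.
def T(u, M):
    x, y = M
    return (x * math.cos(u) - y * math.sin(u),
            x * math.sin(u) + y * math.cos(u))

M0 = (1.0, 2.0)

# Group law: T_{u1} ∘ T_{u2} = T_{u1+u2}.
a = T(0.3, T(0.5, M0))
b = T(0.8, M0)

# Infinitesimal generator: (T_h(M0) − M0)/h → W(M0) = (−y0, x0) as h → 0.
h = 1e-6
W = tuple((th - m) / h for th, m in zip(T(h, M0), M0))
```

The finite-difference quotient recovers W(x, y) = (−y, x) at M₀, i.e. the generator of the rotation group.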
DEFINITION 3.4.– A Lie group {T_u, u ∈ I ⊂ R} is said to be differentiable if and only if for any M₀ in Rⁿ,

[ T_{u+h}(M₀) − T_u(M₀) ] / h

admits a limit when h tends to 0 (h ≠ 0). We write

dT_u(M₀)/du ≡ T_u′(M₀) = lim_{h→0} [ T_{u+h}(M₀) − T_u(M₀) ] / h.

REMARK 3.2.– Write M = T_u(M₀). Then,

[ T_{u+h}(M₀) − T_u(M₀) ] / h = (T_h − 1)(M) / h.

The existence of a limit when h tends to zero can be denoted

T₀′(M) = lim_{h→0} (T_h − 1)(M) / h, or (dT_u/du)|_{u=0} ≡ T₀′.

Then, dT_u(M₀)/du = T₀′ ∘ T_u(M₀), and consequently,

dM/du = dT_u(M₀)/du = T₀′ ∘ T_u(M₀) = T₀′(M).

Let us denote W(M) = T₀′(M), so W(M) is the value at M of a vector field of Rⁿ, which is independent of u.

DEFINITION 3.5.– The vector field M ∈ Rⁿ → W(M) = T₀′(M) ∈ Rⁿ is called an infinitesimal generator of the differentiable Lie group {T_u, u ∈ I ⊂ R}.

EXAMPLE 3.2.– Let x = λ x₀, y = y₀/λ be a transformation of R²; then dx/dλ = x₀ and dy/dλ = −y₀/λ². Consequently, for λ = 1 (and not for λ = 0),

dx/dλ = x, dy/dλ = −y

is the system of differential equations associated with the transformation. Let us note that the Lie group is not associated with λ = 0. Indeed, by writing λ = e^u, we obtain

x = e^u x₀, y = e^{−u} y₀,

which is a Lie group, and dx/du = e^u x₀ = x, dy/du = −e^{−u} y₀ = −y.

3.3. Invariant integral under a Lie group
On the one hand, let M ∈ Rⁿ → V(M) ∈ Rⁿ be a field of linear forms. Any curve (C), with extremities A and B and also denoted AB, is associated with the curvilinear integral

a = ∫_{(C)} V(M) dM.   [3.5]

Expression [3.5] defines a functional of the curve (C) that is denoted by a = G(C). On the other hand, let {T_u, u ∈ J ⊂ R} be a differentiable Lie group defined by the infinitesimal generator M ∈ Rⁿ → W = ψ(M) ∈ Rⁿ. A curve (C_u) = T_u(C) is associated with each value u in the interval J. We obtain the image of (C) by T_u such that the point T_u(M) belonging to (C_u) is associated with each point M of (C):

t ∈ I ⊂ R → M(t) ∈ (C) → T_u(M(t)) = M_u(t) ∈ (C_u).

Let us recall that W = dM/du. The construction defines a two-parameter family

(t, u) ∈ I × J → M_u(t) ∈ (C_u).

DEFINITION 3.6.– The integral a = ∫_{(C)} V(M) dM is said to be invariant under the differentiable Lie group {T_u, u ∈ J ⊂ R} if and only if for any u ∈ J and for any curve (C), G(T_u(C)) = G(C). We also say that the differentiable Lie group {T_u, u ∈ J ⊂ R} keeps a invariant. A curve that makes a extremal is said to be an extremal curve.
THEOREM 3.3.– Noether's theorem. Given a curvilinear integral a = ∫_{(C)} V(M) dM invariant under the differentiable Lie group {T_u, u ∈ J ⊂ R} with infinitesimal generator W(M), then, along the extremals of a, we have the relation

V(M) W(M) = c₁,

where c₁ is a real constant.

Remarks
a) The scalar V W defined at each point M of (C) is constant all along an extremal curve; V W corresponds to a conservation law and is called a first integral.
b) The differentiable Lie group {T_u, u ∈ J ⊂ R} makes it possible to define a deformation at every point of (C): indeed, if we write δM = W(M) du = T₀′(M) du, then, according to the calculation carried out in section 2.2, this corresponds to a two-parameter family we denoted by ψ.

PROOF.– Consider a variation of the curve (C). In Chapter 2, we saw that

δa = ∫_{(C)} ( δV dM − dV δM ) + V(B) δB − V(A) δA,

where A and B are the extremities of (C). This result is independent of the chosen extremities on (C): if we replace (C) by (C′), a curve contained in (C) with extremities A′ and B′, we obtain the same result. Let us choose:
a) δM = W(M) du, where W(M) is the infinitesimal generator of the Lie group {T_u, u ∈ J ⊂ R};
b) an extremal curve (C) of a.

On the one hand, since a is invariant under the Lie group, δM = W(M) du implies δa = 0. The curve (C) is an extremal of a; thus, ∫_{(C)} ( δV dM − dV δM ) = 0. On the other hand, we have δB = W(B) du and δA = W(A) du. Consequently,

V(B) W(B) du − V(A) W(A) du = 0, then V(B) W(B) − V(A) W(A) = 0.

Points A and B do not play any particular role in the proof, and consequently, V(M) W(M) is constant along the extremal curve (C).
3.4. Further examination of Fermat's principle

The geometric conditions of the study are those used in section 2.5.1. From origin O, the radius vector of the current point M is denoted by r. The refractive index is supposed to depend only on r, and we thus obtain

a = ∫_{AB} n(r) ds ≡ ∫_{r₀}^{r₁} n(r) T dM,

where r₀, r₁ correspond to the radius vectors of points A and B, respectively. The integral a is invariant under the rotation of angle θ₀ around origin O. Indeed,

ds = √(1 + r² θ′²) dr, where θ = θ(r),

is invariant under the transformation (r, θ) → (r, θ + θ₀), which, in the orthonormal system O i j, is represented by the rotation

x = x₀ cos θ₀ − y₀ sin θ₀
y = x₀ sin θ₀ + y₀ cos θ₀,

which implies

dM/dθ₀ = W(M) with W(M) = [−y, x]ᵀ.

Along the extremals of a, the expression n(r) T W is constant. In Figure 3.1, at point M, v being the angle between the tangent to the curve and the radius vector, we obtain

T W = ‖T‖ ‖W‖ cos(π/2 − v) = r sin v.

Consequently, n(r) r sin v = c₂. Note: it is possible to obtain the same result by using the Euler equations of Chapter 2.
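The conservation of n(r) r sin v along a ray can be checked by integrating the ray equation d(nT)/ds = grad n numerically. The index profile n(r) = 2 + r and the initial data below are arbitrary illustrative choices, not from the text:

```python
import math

def n(r):
    return 2.0 + r

def dn_dr(r):
    return 1.0

# The Noether invariant n(r)·r·sin v equals n(r)·(x ty − y tx) for a unit
# tangent (tx, ty) at the point (x, y).
def invariant(x, y, tx, ty):
    return n(math.hypot(x, y)) * (x * ty - y * tx)

def step(x, y, tx, ty, h):
    """One Euler step of dM/ds = T, dT/ds = (grad n − (grad n · T) T)/n."""
    r = math.hypot(x, y)
    gx, gy = dn_dr(r) * x / r, dn_dr(r) * y / r   # grad n = n'(r) * r̂
    dot = gx * tx + gy * ty
    nr = n(r)
    x, y = x + h * tx, y + h * ty
    tx, ty = tx + h * (gx - dot * tx) / nr, ty + h * (gy - dot * ty) / nr
    norm = math.hypot(tx, ty)                     # keep T a unit vector
    return x, y, tx / norm, ty / norm

x, y, tx, ty = 1.0, 0.0, 0.0, 1.0                 # ray starting tangentially
I0 = invariant(x, y, tx, ty)
for _ in range(20000):
    x, y, tx, ty = step(x, y, tx, ty, 5e-5)
drift = invariant(x, y, tx, ty) - I0
```

Up to the first-order integration error, the invariant stays constant along the numerical ray, which is the content of n(r) r sin v = c₂.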
Figure 3.1. Geometric interpretation of Fermat’s principle. The angle made by tangent T to the curve at M with radius vector OM is denoted by v. For a color version of this figure, see www.iste.co.uk/gouin/mechanics.zip
4 The Methods of Analytical Mechanics
4.1. D'Alembert's principle

Although the principle can be studied independently of the methods of analytical mechanics, we show that it gives the same results, i.e. the Lagrange equations of motion. The reader may find more details in Mécanique Générale by J. Pérès or in Mechanics by P. Germain.

4.1.1. Concept of virtual displacement

The classical physical space has three dimensions. Any material system is studied with respect to a reference coordinate system (R₀) with origin O. The concept of time, denoted by t, is assumed to be known. Consider a material system (S) whose position is determined by n parameters denoted by q₁, ..., q_n, or more simply Q with Q = [q₁, ..., q_n]. With respect to the coordinate system (R₀), each point M of system (S) is represented by the twice-differentiable vector mapping

(Q, t) ∈ O × [t₁, t₂] ⊂ Rⁿ⁺¹ → OM(Q, t) ∈ R³,

where O is an open set in Rⁿ. We deduce

dOM = (∂OM/∂Q) dQ + (∂OM/∂t) dt.
The motion of system (S) with respect to (R₀) is given by a mapping ϕ of class C² from R to Rⁿ,

t ∈ I ⊂ R → Q = ϕ(t) ∈ Rⁿ,   [4.1]

where I denotes an open interval of R. The vector V denotes the velocity of point M with respect to (R₀). Thus,

V = (∂OM/∂Q) Q̇ + ∂OM/∂t,   [4.2]

where dQ/dt is also denoted Q̇.

DEFINITION 4.1.– At instant t, the virtual displacement of M at position Q is the value of the differential of OM(Q, t) taken for dt = 0. At fixed time t, it is a differential form, and it is common to use the prefix δ to denote a virtual displacement:

δOM = (∂OM/∂Q) δQ.
4.1.2. Concept of constraints

In a mechanical system, the constraints limit the positions and the motions. They are of two types.

DEFINITION 4.2.– A holonomic constraint is a relation of the form

F(Q, t) = 0,   [4.3]

where F is a differentiable mapping from Rⁿ to R^p with p < n. In coordinates, we write

f_j(q₁, ..., q_n, t) = 0, j ∈ {1, ..., p}.

The p relations are assumed to be independent.

DEFINITION 4.3.– A virtual displacement δQ is compatible with constraint [4.3] if and only if (∂F/∂Q) δQ = 0.
Such a constraint reduces the number of independent parameters. Given that the p constraints are independent, it is possible to use the implicit function theorem to obtain n − p independent parameters: the system has n − p degrees of freedom. Let us note that a relation of the form f_j(q₁, ..., q_n, t) ≥ 0 is said to be a unilateral relation. The inequality does not reduce the number of independent parameters but only limits the positions of the material system.

DEFINITION 4.4.– A non-holonomic constraint is a non-integrable relation of the form

A(Q, t) Q̇ + B(Q, t) = 0,   [4.4]

where A is a mapping from Rⁿ⁺¹ to R^s and B is a vector field of R^s defined at (Q, t) ∈ Rⁿ⁺¹. A(Q, t) is represented by a matrix with n columns and s rows, and B(Q, t) by a column of s rows. Relation [4.4] can be expressed as

a_{i1}(q₁, ..., q_n, t) q̇₁ + ··· + a_{in}(q₁, ..., q_n, t) q̇_n + b_i(q₁, ..., q_n, t) = 0, i ∈ {1, ..., s}.

If relation [4.4] is integrable, we are back to the case of relation [4.3]. The matrix A(Q, t) is of rank s with s < n.

DEFINITION 4.5.– A virtual displacement δQ is compatible with constraint [4.4] if and only if A(Q, t) δQ = 0.

4.1.3. The Lagrange formulae

Relation [4.2] yields

∂V/∂Q̇ = ∂OM/∂Q, hence ½ ∂(V V)/∂Q̇ = V (∂OM/∂Q).

By deriving with respect to time and taking into account d/dt (∂OM/∂Q) = ∂V/∂Q, we obtain

a (∂OM/∂Q) = d/dt [ ½ ∂(V V)/∂Q̇ ] − ½ ∂(V V)/∂Q.   [4.5]
where a = dV/dt denotes the acceleration of M. Relation [4.5] is called the Lagrange kinematic formula.

DEFINITION 4.6.– The scalar m a δOM is called the virtual work of the inertial forces of point M with mass m.

DEFINITION 4.7.– With the motion t ∈ I ⊂ R → Q = ϕ(t) ∈ O ⊂ Rⁿ is associated the scalar T(ϕ(t), ϕ̇(t), t), also denoted by T(Q, Q̇, t). T is a mapping from O × O × I to R, called the kinetic energy of system (S), defined by

T = ½ ∫_{(S)} V(M)² dm ≡ ½ ∫_{(S)} V V dm.   [4.6]
The kinetic energy is usually expressed as a second-degree polynomial with respect to the variables q̇₁, ..., q̇_n, whose coefficients are functions of the parameters q₁, ..., q_n and t.

THEOREM 4.1.– The virtual work of the inertial forces of (S) is

∫_{(S)} a δOM dm = [ d/dt (∂T/∂Q̇) − ∂T/∂Q ] δQ,

where dm denotes the mass measure of material system (S).

The proof of the theorem can be quickly deduced from mass conservation and relations [4.5] and [4.6]. We also assume that system (S) has a potential energy represented by the differentiable mapping (t, Q) ∈ I × O ⊂ R × Rⁿ → W(Q, t) ∈ R, where O denotes an open set of Rⁿ. The virtual work of the forces deriving from the potential energy W(Q, t) is

δτ = −(∂W(Q, t)/∂Q) δQ.

If the forces do not derive from a potential, it is still assumed that the virtual work of the forces is a linear form of the virtual displacement, i.e.

δτ = R(Q, t) δQ with R(Q, t) = [r₁(q₁, ..., q_n, t), ..., r_n(q₁, ..., q_n, t)].
PRINCIPLE 4.1.– d'Alembert's principle or the principle of virtual work. There exists a coordinate system (R₀) called a Galilean system and an absolute chronology such that, for this coordinate system and this chronology, for any material system and any virtual displacement compatible with the constraints, the virtual work of the forces applied to the system is equal to the virtual work of the inertial forces.

Given the holonomic and non-holonomic constraints, the principle can be expressed as: for all δQ ∈ Rⁿ satisfying

(∂F(Q, t)/∂Q) δQ = 0 and A(Q, t) δQ = 0,

we have

[ d/dt (∂T(Q, Q̇, t)/∂Q̇) − ∂T(Q, Q̇, t)/∂Q − R(Q, t) ] δQ = 0.

This result is equivalent to the following: there exist Lagrange multipliers Λ and Ξ such that

d/dt (∂T(Q, Q̇, t)/∂Q̇) − ∂T(Q, Q̇, t)/∂Q = R(Q, t) + Λ (∂F/∂Q)(Q, t) + Ξ A(Q, t).   [4.7]

In coordinates, we have

d/dt (∂T/∂q̇_i) − ∂T/∂q_i = r_i + Σ_{j=1}^{p} λ_j ∂f_j/∂q_i + Σ_{k=1}^{s} μ_k a_{k,i}, i ∈ {1, ..., n},

with

Λ = [λ₁, ..., λ_p], Ξ = [μ₁, ..., μ_s] and p + s < n.

These equations are the n Lagrange equations in the general case of holonomic and non-holonomic constraints. In order for the system to have as many equations as unknowns, we must add the p + s constraints

F(Q, t) = 0 and A(Q, t) Q̇ + B(Q, t) = 0.
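A hedged worked example of [4.7], not taken from the text: a mass m at M = (x, y) subject to gravity −g along y and to the holonomic constraint f(x, y) = x² + y² − l² = 0 (a pendulum of length l). With x = l sin θ, y = −l cos θ, the θ-equation is θ″ = −(g/l) sin θ, and eliminating θ gives the multiplier λ = −m((g/l) cos θ + θ′²)/2. The code verifies the two Cartesian Lagrange equations m ẍ = 2λx and m ÿ = −mg + 2λy on a sampled state (m, l, g and the state are arbitrary):

```python
import math

m, l, g = 2.0, 1.5, 9.81

def residuals(theta, theta_dot):
    # Dynamics of the generalized coordinate and the claimed multiplier.
    theta_dd = -(g / l) * math.sin(theta)
    lam = -m * ((g / l) * math.cos(theta) + theta_dot**2) / 2.0
    # Cartesian position and accelerations on the constraint circle.
    x, y = l * math.sin(theta), -l * math.cos(theta)
    x_dd = l * (math.cos(theta) * theta_dd - math.sin(theta) * theta_dot**2)
    y_dd = l * (math.sin(theta) * theta_dd + math.cos(theta) * theta_dot**2)
    # Residuals of m x'' = λ ∂f/∂x and m y'' = -m g + λ ∂f/∂y.
    rx = m * x_dd - 2.0 * lam * x
    ry = m * y_dd + m * g - 2.0 * lam * y
    return rx, ry

rx, ry = residuals(0.7, 1.3)
```

Both residuals vanish identically, confirming that the multiplier term Λ ∂F/∂Q exactly supplies the constraint reaction (here, the string tension).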
4.2. Back to analytical mechanics

Let us consider a system (S) with n degrees of freedom and holonomic constraints. The position of the system is determined by an appropriate choice of n independent parameters Q = [q₁, ..., q_n]. In this book, we consider such systems with forces deriving from a potential.

The Lagrangian associated with system (S) is the differentiable mapping G from the 2n + 1 variables (Q, Q̇, t) ∈ R²ⁿ⁺¹ to R such that

G(Q, Q̇, t) = T(Q, Q̇, t) − W(Q, t).   [4.8]

The simpler writing G = T − W is generally used. Let us recall the Hamilton principle stated in Chapter 1, section 1.6.3. We consider a Galilean coordinate system (R₀), called a fixed coordinate system, for which the positions of system (S) at instants t₀ and t₁ are denoted by Q₀ and Q₁, respectively. The motion of system (S) between instants t₀ and t₁, corresponding to positions Q₀ and Q₁, is given by the mapping t ∈ I ⊂ R → Q = ϕ(t) ∈ Rⁿ satisfying ϕ(t₀) = Q₀ and ϕ(t₁) = Q₁ and making extremal the Hamilton action

a = ∫_{t₀}^{t₁} G(Q, Q̇, t) dt.

We have seen in Chapter 1, section 1.6.3, that the extremals of a satisfy the Euler equations, called Lagrange equations in mechanics:

d/dt (∂G/∂q̇_i) − ∂G/∂q_i = 0, i ∈ {1, ..., n}.   [4.9]

As a function of kinetic and potential energy, we obtain

d/dt (∂T/∂q̇_i) − ∂T/∂q_i = −∂W/∂q_i, i ∈ {1, ..., n}.
They are the Lagrange equations obtained in [4.7], section 4.1.3, or [1.9] in Chapter 1.

4.3. The vibrating strings

An application of the Hamilton principle is the vibrating string. Let us consider an elastic wire that constitutes a string whose ends are fixed at O and A. The string is first stretched to a length l₀, and ρ denotes the linear mass of the homogeneous wire. In the orthonormal axes Oxy, the unstretched string occupies the position denoted by M₀(x, 0), where x ∈ [0, l₀]. It is hypothesized that the position of the string varies, with respect to its initial position, only by its ordinate; that is, each point of the string moves from its initial position along a line parallel to Oy. At instant t, the string occupies the position M(x, y), where x ∈ [0, l₀] and y = f(x, t). Consequently, the position of the string is represented by the value y(x, t) (see Figure 4.1). The kinetic energy of the string is

T = ½ ∫_O^A ẏ² dm = ½ ∫₀^{l₀} ρ ẏ² dx with ẏ = (∂y/∂t)(x, t),

where the mass measure dm is equal to ρ dx and ρ is constant. It is supposed that the potential energy of the string is proportional to its stretching relative to the initial unstretched position:

W = k² (l(t) − l₀),

where l(t) is the length of the string at instant t.
Figure 4.1. A string connects two fixed points O and A. The position of the string is represented at each instant by the curve y = f (x, t), also simply denoted y = y(x, t). For a color version of the figures in this chapter, see www.iste.co.uk/gouin/mechanics.zip
The coefficient k² is positive and

l(t) = ∫₀^{l₀} √(1 + y′²) dx, where y′ = (∂y/∂x)(x, t).

The value of the Lagrangian is

T − W = ½ ∫₀^{l₀} [ ρ ẏ² − 2k² ( √(1 + y′²) − 1 ) ] dx,

and between instants t₀ and t₁, the Hamilton action is

a = ½ ∫_{t₀}^{t₁} ∫₀^{l₀} [ ρ ẏ² − 2k² ( √(1 + y′²) − 1 ) ] dx dt.
Because y(0, t) = y(l₀, t) = 0, and y(x, t₀) and y(x, t₁) are given, the Hamilton principle implies that for any δy such that δy(0, t) = δy(l₀, t) = 0 and δy(x, t₀) = δy(x, t₁) = 0, we have δa = 0, where

δa = ∫_{t₀}^{t₁} ∫₀^{l₀} [ ρ ẏ δẏ − k² y′ δy′ / √(1 + y′²) ] dx dt.
An integration by parts of δa, taking into account

ẏ δẏ = d/dt (ẏ δy) − ÿ δy

and

y′ δy′ / √(1 + y′²) = d/dx ( y′ δy / √(1 + y′²) ) − δy d/dx ( y′ / √(1 + y′²) ),

can be written as

δa = ρ ∫₀^{l₀} ( [ ẏ δy ]_{t₀}^{t₁} − ∫_{t₀}^{t₁} ÿ δy dt ) dx − k² ∫_{t₀}^{t₁} ( [ y′ δy / √(1 + y′²) ]₀^{l₀} − ∫₀^{l₀} d/dx ( y′ / √(1 + y′²) ) δy dx ) dt.

The terms within the square brackets [ ] are zero; therefore,

δa = ∫_{t₀}^{t₁} ∫₀^{l₀} [ −ρ ÿ + k² d/dx ( y′ / √(1 + y′²) ) ] δy dx dt.
If, for any δy, we get δa = 0, then we obtain the differential equation

−ρ ÿ + k² d/dx ( y′ / √(1 + y′²) ) = 0.   [4.10]

Let us develop the second term in [4.10]:

d/dx ( y′ / √(1 + y′²) ) = y″ / √(1 + y′²) − y′² y″ / (1 + y′²)^{3/2} = y″ / (1 + y′²)^{3/2}.

When we assume |y′| ≪ 1, this term can be expanded in the form

y″ / (1 + y′²)^{3/2} = y″ ( 1 − (3/2) y′² + ··· ).

We are in the case of infinitesimal motions of the vibrating string. Therefore, the linearized equation of motion of the vibrating string writes

ρ ÿ − k² y″ = 0.   [4.11]
4.3.1. First study of the solutions of equation [4.11]

We search for solutions in a form that separates space and time, y = f(x) cos ωt. For any t, we obtain in [4.11]

cos ωt ( ρ ω² f(x) + k² f″(x) ) = 0 ⟹ f″(x) + ρ (ω²/k²) f(x) = 0.

Then, searching for f(x) in the form f(x) = e^{rx}, where r is a constant scalar, we obtain

r² + ρ ω²/k² = 0, or r = ±iα with α > 0 and α² = ρ ω²/k²,

and consequently, f(x) = A cos(αx) + B sin(αx).
The ends of the string are fixed. Thus, y(0, t) = y(l₀, t) = 0, and we obtain f(0) = 0 ⟹ A = 0 and f(l₀) = 0 ⟹ B sin(α l₀) = 0, i.e. α l₀ = nπ. If the solutions are not identically zero (corresponding to the unstretched state of the string), we have an infinite number of values corresponding to

α_n = nπ/l₀ or ω_n = nπk/(l₀ √ρ).

The different values of n correspond to the harmonics of the same note of the vibrating string. The note is associated with k and l₀. The value of B is arbitrary and characterizes the note's intensity. The general solution of equation [4.11] is

y(x, t) = Σ_{p≥1} B_p sin(pπx/l₀) cos(pπk t/(l₀ √ρ)).
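The separated solutions can be checked directly: for y_p(x, t) = sin(pπx/l₀) cos(ω_p t) with ω_p = pπk/(l₀√ρ), we have ÿ = −ω_p² y and y″ = −(pπ/l₀)² y, so ρ ÿ − k² y″ vanishes identically. A small numerical sketch (the values of ρ, k, l₀ are arbitrary):

```python
import math

rho, k, l0 = 0.8, 2.0, 3.0

def mode_residual(p, x, t):
    """Residual of ρ ÿ − k² y'' for the p-th separated mode, at (x, t)."""
    alpha = p * math.pi / l0
    omega = alpha * k / math.sqrt(rho)   # ω_p = p π k / (l0 √ρ)
    y = math.sin(alpha * x) * math.cos(omega * t)
    y_tt = -omega**2 * y
    y_xx = -alpha**2 * y
    return rho * y_tt - k**2 * y_xx

res = max(abs(mode_residual(p, 1.1, 0.4)) for p in range(1, 6))
```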
4.3.2. Second study of the solutions of equation [4.11]

We search for the solutions of the vibrating string by using the Cauchy problem and the method of wave propagation. Equation [4.11] takes the form of a partial differential equation,

(1/v²) ∂²y/∂t² − ∂²y/∂x² = 0,   [4.12]

where v² = k²/ρ. The unknown function y depends on x and t. To find the general solution of [4.12], let us change variables:

ζ = x + vt, η = x − vt ⟺ x = (ζ + η)/2, t = (ζ − η)/(2v).

Since we have the relations

∂/∂x = ∂/∂ζ + ∂/∂η and ∂/∂t = v ( ∂/∂ζ − ∂/∂η ),

we obtain

∂²/∂x² = ∂²/∂ζ² + 2 ∂²/∂ζ∂η + ∂²/∂η² and ∂²/∂t² = v² ( ∂²/∂ζ² − 2 ∂²/∂ζ∂η + ∂²/∂η² ),

and

(1/v²) ∂²y/∂t² − ∂²y/∂x² = −4 ∂²y/∂ζ∂η.

Using the new variables ζ and η, the partial differential equation [4.12] becomes

∂²y/∂ζ∂η = 0.   [4.13]
The general solution of [4.13] is

y(ζ, η) = f(ζ) + g(η) ⟹ y(x, t) = f(x + vt) + g(x − vt),

where f and g are two arbitrary, twice continuously differentiable functions. At point x and at instant t, the function g(x − vt) takes the value that it would take at the point x − vt at instant 0: g(x − vt) is the translation of g(x) by vt. The function y(x, t) = g(x − vt) represents a wave that propagates towards the right with velocity v; the function y(x, t) = f(x + vt) represents a wave that propagates towards the left with velocity v. Any wave of the vibrating string is made up of the superposition of these two waves.

PROPOSITION 4.1.– The Cauchy problem. The Cauchy problem is as follows: we search for a solution of the vibrating string equation corresponding to given initial conditions. The form of the string at the initial instant is y₀(x) = y(x, 0); the velocities along the string at the initial instant are ẏ₀(x) = (∂y/∂t)(x, 0). Then,

y₀(x) = f(x) + g(x) and ẏ₀(x) = v ( f′(x) − g′(x) ).   [4.14]

The second equation of [4.14] implies

∫₀ˣ ẏ₀(ζ) dζ = v ( f(x) − g(x) ) + c, where c is a constant.

Hence,

f(x) = ½ [ y₀(x) + (1/v) ∫₀ˣ ẏ₀(ζ) dζ − c/v ],
g(x) = ½ [ y₀(x) − (1/v) ∫₀ˣ ẏ₀(ζ) dζ + c/v ].
The constant c in f cancels with the constant c in g, and we obtain the solution of the Cauchy problem:

y(x, t) = ½ [ y₀(x + vt) + y₀(x − vt) ] + (1/2v) ∫_{x−vt}^{x+vt} ẏ₀(ζ) dζ.   [4.15]
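Formula [4.15] can be exercised on concrete initial data. As a sketch (the data y₀ = sin, ẏ₀ = cos and the speed v are arbitrary choices), the integral of ẏ₀ is available in closed form, and the initial conditions can be checked by finite differences:

```python
import math

v = 2.0

def dalembert(x, t):
    """Cauchy solution [4.15] for y0(x) = sin x, ẏ0(x) = cos x."""
    y0 = lambda s: math.sin(s)
    int_y0dot = lambda s: math.sin(s)   # antiderivative of ẏ0 = cos
    return (0.5 * (y0(x + v*t) + y0(x - v*t))
            + (int_y0dot(x + v*t) - int_y0dot(x - v*t)) / (2*v))

# Check y(x, 0) = y0(x) and ∂y/∂t(x, 0) = ẏ0(x) at a sample point.
x0, h = 0.7, 1e-6
ic_pos = dalembert(x0, 0.0) - math.sin(x0)
ic_vel = (dalembert(x0, h) - dalembert(x0, -h)) / (2*h) - math.cos(x0)
```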
EXAMPLE 4.1.– Vibrating string with one fixed end. The solutions are given by [4.15], where y(x, t) is defined from [0, l₀] × [t₀, +∞[ to R. Indeed, any solution can be extended in a unique manner to t ∈ ]−∞, +∞[. For a fixed end point O and for any t, we have y(0, t) = 0. Thus, f(vt) + g(−vt) = 0, and for any s, g(s) = −f(−s). Finally,

y(x, t) = f(x + vt) − f(vt − x),

where f is an arbitrary function. At fixed t, the extended function y(x, t) is an odd function of x and is the superposition of two waves y₁(x, t) and y₂(x, t) propagating in opposite directions and satisfying, whatever t, the relation y₁(−x, t) = −y₂(x, t). There is a reflection of waves at point O with a change of sign of the amplitude of the wave's oscillation.

EXAMPLE 4.2.– Vibrating string with two fixed ends. The solutions are given by [4.15], where y(x, t) is defined from [0, l₀] × [t₀, +∞[ to R. We obtain

f(vt) + g(−vt) = 0 and f(l₀ + vt) + g(l₀ − vt) = 0 ⟹ f(l₀ + s) = f(−l₀ + s).

The function f is periodic with period 2l₀ and, as in the previous example, the solution can be extended to a function y such that

y(−x, t) = −y(x, t), y(x + 2l₀, t) = y(x, t) and y(x, t + 2l₀/v) = y(x, t).

These relations imply y(0, t) = y(l₀, t) = 0.
4.4. Homogeneous Lagrangian. Expression in space time

The motion of system (S) relative to the coordinate system (R₀) is represented by the mapping [4.1]. The motion can also be expressed in parametric representation by writing t = g(τ), where g is a continuously differentiable bijective function from an interval I to an interval J of R. Thus,

Q = ψ(τ), t = g(τ).   [4.16]

The parametric representation [4.16] is the representation of the trajectory of system (S) in space time. Writing

X = [Q, t]ᵀ,

then X′ = dX/dτ = [Q′, t′]ᵀ represents the column of derivatives with respect to parameter τ. By noting that Q̇ = Q′/t′, the Hamilton action is written as

a = ∫_{t₀}^{t₁} G(Q, Q′/t′, t) dt = ∫_{τ₀}^{τ₁} G(Q, Q′/t′, t) t′ dτ, where τ₀ = g⁻¹(t₀), τ₁ = g⁻¹(t₁).

We write L(X, X′) = G(Q, Q′/t′, t) t′. Thus, L is called the homogeneous Lagrangian of system (S). Since L is a homogeneous function of X′ of degree 1, the Euler identity yields

L = (∂L(X, X′)/∂X′) X′, or L(X, X′) = (∂L/∂q₁′) q₁′ + ··· + (∂L/∂q_n′) q_n′ + (∂L/∂t′) t′.

We obtain

a = ∫_{τ₀}^{τ₁} (∂L(X, X′)/∂X′) X′ dτ.
Let us write dX = X′ dτ and

Y = ∂L(X, X′)/∂X′, with ∂L/∂X′ = [ ∂L/∂X₁′, ..., ∂L/∂X_{n+1}′ ];

then,

a = ∫_{τ₀}^{τ₁} Y X′ dτ = ∫_{X₀}^{X₁} Y dX, with X(τ₀) = X₀ and X(τ₁) = X₁.

The vector Y is called the energy-momentum vector of system (S). The Hamilton action is the integral, or circulation, in space time of the energy-momentum vector. Let us write

Y = [P, h] and dX = [dQ, dt]ᵀ;

thus, Y dX = P dQ + h dt. The covector P is associated with the momentum, and the scalar h is associated with the energy. Calculating P and h,

Y = [P, h] = ∂L(X, X′)/∂X′ = [ ∂L/∂Q′, ∂L/∂t′ ].

Since L(X, X′) = G(Q, Q′/t′, t) t′ and Q̇ = Q′/t′, we obtain ∂Q̇/∂Q′ = 1/t′, and consequently,

P = ∂L/∂Q′ = t′ (∂G/∂Q̇) (∂Q̇/∂Q′) = ∂G/∂Q̇,

h = ∂L/∂t′ = G − t′ (∂G/∂Q̇) (Q′/t′²) = G − (∂G/∂Q̇) Q̇,

hence we obtain the system

P = ∂G/∂Q̇ = ∂T/∂Q̇,
h = G − (∂G/∂Q̇) Q̇.   [4.17]
REMARK 4.1.– The kinetic energy is generally a second-degree polynomial with respect to the derivatives of the parameters q̇₁, ..., q̇_n (Q̇ = [q̇₁, ..., q̇_n]). We write T = T₂ + T₁ + T₀, where T₂ is homogeneous of degree 2 in Q̇, T₁ is homogeneous of degree 1 in Q̇, and T₀ is independent of Q̇. Thus, according to the Euler identity,

(∂T/∂Q̇) Q̇ = 2 T₂ + T₁,

and we obtain h = T − W − 2T₂ − T₁, or h = −(T₂ − T₀ + W). For example, the kinetic energy of a material point can be written in orthonormal axes as T = ½ m(ẋ² + ẏ² + ż²); hence P = [mẋ, mẏ, mż] and h = −(T + W).

Since the kinetic energy is a second-degree polynomial in the q̇_i (i ∈ {1, ..., n}), the term ∂T/∂Q̇ is a linear form whose n components ∂T/∂q̇_i are first-degree polynomials in the q̇_i. This means that P = (∂T/∂Q̇)(Q, Q̇, t) is a system of n linear equations with respect to [q̇₁, ..., q̇_n] whose coefficients depend on Q = [q₁, ..., q_n]. Moreover, P = [p₁, ..., p_n] are the components of the momentum vector. Solving the system P = (∂T/∂Q̇)(Q, Q̇, t) makes it possible to write

Q̇ = [q̇₁, ..., q̇_n]ᵀ = [f₁(Q, P, t), ..., f_n(Q, P, t)]ᵀ, that is, Q̇ = F(Q, P, t).
The resolution is only possible if the system is a Cramer system; this is assumed in the following sections. The term $\dot{Q}$ is then expressed as a function of $P$, $Q$ and $t$. Consequently,
\[
h = G(Q, \dot{Q}, t) - \frac{\partial G(Q, \dot{Q}, t)}{\partial \dot{Q}}\,\dot{Q},
\]
and we obtain an expression of the form
\[
h + H(Q, P, t) = 0, \quad \text{with} \quad H = P\,\dot{Q} - G,
\tag{4.18}
\]
where $\dot{Q}$ is expressed as a function of $Q$, $P$ and $t$. The new term $H(Q, P, t)$ is called the Hamiltonian of the mechanical system. If $T$ is a homogeneous second-degree polynomial in $\dot{q}_1, \cdots, \dot{q}_n$, the value of $H$ represents the total energy of system $(S)$.

4.5. The Hamilton equations

4.5.1. First method using Lagrange equations

The Lagrangian $G$ of the system $(S)$ is defined by
\[
G(Q, \dot{Q}, t) = T(Q, \dot{Q}, t) - W(Q, t)
\]
and the momentum $P$ is defined by [4.17]; let us change the variables as
\[
(Q, \dot{Q}, t) \longrightarrow (Q, P, t).
\]
In order to simplify the notation, we write $G(Q, \dot{Q}, t)$ in the simplified form $G$ and $H(Q, P, t)$ in the simplified form $H$. From [4.18], we deduce, for the Hamiltonian $H$,
\[
dH = d\bigl( P\,\dot{Q} - G \bigr) = dP\,\dot{Q} + P\,d\dot{Q} - \frac{\partial G}{\partial \dot{Q}}\,d\dot{Q} - \frac{\partial G}{\partial Q}\,dQ - \frac{\partial G}{\partial t}\,dt;
\]
therefore,
\[
P = \frac{\partial G}{\partial \dot{Q}}
\quad \Longrightarrow \quad
dH = -\frac{\partial G}{\partial Q}\,dQ + \dot{Q}\,dP - \frac{\partial G}{\partial t}\,dt.
\]
By construction, the Hamiltonian $H$ is a function of the $2n+1$ variables $q_1, \cdots, q_n, p_1, \cdots, p_n, t$ and, by identification of the components of $dH$, we obtain
\[
\begin{cases}
\dfrac{\partial H}{\partial P} = \dot{Q}, \\[2mm]
\dfrac{\partial H}{\partial Q} = -\dfrac{\partial G}{\partial Q}, \\[2mm]
\dfrac{\partial H}{\partial t} = -\dfrac{\partial G}{\partial t}.
\end{cases}
\]
PROPOSITION 4.2.– The Hamilton equations of motion.
The Hamilton principle yields the Lagrange equations [4.9] for the motion of system $(S)$. The momentum is defined by [4.17],
\[
P = \frac{\partial G}{\partial \dot{Q}},
\quad \text{and} \quad
\frac{d}{dt}\!\left( \frac{\partial G}{\partial \dot{Q}} \right) - \frac{\partial G}{\partial Q} = 0
\quad \Longrightarrow \quad
\dot{P} = \frac{\partial G}{\partial Q}.
\]
Hence, the system of Hamilton equations is
\[
\begin{cases}
\dfrac{dQ}{dt} = \dfrac{\partial H}{\partial P}, \\[2mm]
\dfrac{dP}{dt} = -\dfrac{\partial H}{\partial Q}.
\end{cases}
\tag{4.19}
\]
– The Hamilton equations are associated with the change of variables, which replaces the $n$ second-order differential equations of Lagrange by $2n$ first-order differential equations.
– Since
\[
\frac{dH}{dt} = \frac{\partial H}{\partial Q}\,\dot{Q} + \frac{\partial H}{\partial P}\,\dot{P} + \frac{\partial H}{\partial t},
\]
and by taking into account the Hamilton equations [4.19], we obtain
\[
\frac{\partial H}{\partial P}\,\dot{P} + \frac{\partial H}{\partial Q}\,\dot{Q} = \dot{Q}\,\dot{P} - \dot{P}\,\dot{Q} = 0,
\]
and for any motion of the system $(S)$,
\[
\frac{dH}{dt} = \frac{\partial H}{\partial t}.
\]
When $H$ or $G$ is explicitly independent of time,
\[
\frac{\partial H}{\partial t} = 0, \quad \text{then} \quad \frac{dH}{dt} = 0, \quad \text{hence} \quad H(Q, P, t) = H_0,
\]
where $H_0$ is a constant that only depends on the initial conditions of the motion of system $(S)$.
– When $H$ or $G$ is explicitly independent of time and the kinetic energy is a second-degree polynomial with respect to the derivatives of the parameters, $T_2 - T_0 + W = H_0$. Indeed, if $G$ is independent of $t$, the homogeneous Lagrangian $L$ defined in section 4.2 is explicitly independent of $t$, and for any real $u$, $L(Q, t, Q', t') = L(Q, t+u, Q', t')$. The group of translations $\{T_u,\ u \in I \subset \mathbb{R}\}$ associated with the transformation
\[
T_u \begin{bmatrix} Q \\ t \end{bmatrix} = \begin{bmatrix} Q \\ t+u \end{bmatrix}
\]
has the infinitesimal operator $W = \begin{bmatrix} 0 \\ 1 \end{bmatrix}$, and since $a = \displaystyle\int_{X_0}^{X_1} Y\,dX$ is invariant under the Lie group $\{T_u,\ u \in J \subset \mathbb{R}\}$, the Noether theorem implies, along the extremal curve representing the motion of system $(S)$, $Y\,W = h_1$. However, $Y = [\,P, h\,]$; then $h = h_1$, where $h_1$, the value of $-H$, is constant.

4.5.2. Second method using the Hamilton principle

The Hamilton principle leads to a search for curves $(C)$ in space-time that make extremal the integral
\[
a = \int_{(C)} L(X, X')\,d\tau.
\]
We previously wrote
\[
a = \int_{(C)} Y\,dX = \int_{(C)} P\,dQ + h\,dt,
\]
with $h + H(Q, P, t) = 0$, which represents a constraint between $Q$, $P$, $t$ and $h$. Due to $X = \begin{bmatrix} Q \\ t \end{bmatrix}$ and $Y = [\,P, h\,]$, the constraint is written $F(X, Y) = 0$. We will now find a curve $(C)$ and a vector field $Y$ satisfying $F(X, Y) = 0$, which make extremal the curvilinear integral $\int_{(C)} Y\,dX$.
The problem was solved in Chapter 3 as follows: find a curve $\widehat{AB}$ with the given ends $A$ and $B$ and a vector field $V$ defined along $\widehat{AB}$, subject to the constraint $F(M, V) = 0$, where $M$ is the current point on $\widehat{AB}$, and such that $\int_{\widehat{AB}} V\,dM$ is an extremum.
The solution is associated with the construction of a distributed Lagrange multiplier corresponding to the constraint, written as
\[
\exists\, s \in I \subset \mathbb{R} \longrightarrow \bigl( M(s), V(s), \lambda(s) \bigr) \ \text{such that, } \forall\,(\delta M, \delta V, \delta\lambda),
\quad
\delta \int_{\widehat{AB}} \bigl[\, V\,dM - \lambda(s)\,F(M, V)\,ds \,\bigr] = 0.
\]
We deduce, $\forall\,(\delta M, \delta V)$,
\[
\int_{\widehat{AB}} \left[\, \delta V\,dM - dV\,\delta M - \lambda(s) \left( \frac{\partial F}{\partial V}\,\delta V + \frac{\partial F}{\partial M}\,\delta M \right) ds \,\right] = 0,
\]
with $F(M, V) = 0$, and we obtain
\[
dM = \lambda(s)\,\frac{\partial F}{\partial V}\,ds
\quad \text{and} \quad
dV = -\lambda(s)\,\frac{\partial F}{\partial M}\,ds.
\]
In the notation of the homogeneous Lagrangian, we get
\[
dX = \lambda(\tau)\,\frac{\partial F}{\partial Y}\,d\tau
\quad \text{and} \quad
dY = -\lambda(\tau)\,\frac{\partial F}{\partial X}\,d\tau.
\tag{4.20}
\]
Moreover, $F(X, Y) = h + H(Q, P, t)$ with $X = \begin{bmatrix} Q \\ t \end{bmatrix}$ and $Y = [\,P, h\,]$, and [4.20] is equivalent to
\[
\begin{cases}
dQ = \mu\,\dfrac{\partial H}{\partial P}, \\[2mm]
dt = \mu,
\end{cases}
\quad \text{and} \quad
\begin{cases}
dP = -\mu\,\dfrac{\partial H}{\partial Q}, \\[2mm]
dh = -\mu\,\dfrac{\partial H}{\partial t},
\end{cases}
\quad \text{with} \quad \mu = \lambda\,ds.
\]
By eliminating $\mu$ and by taking the constraint into account, we obtain the system
\[
\begin{cases}
\dfrac{dQ}{dt} = \dfrac{\partial H}{\partial P}, \\[2mm]
\dfrac{dP}{dt} = -\dfrac{\partial H}{\partial Q}, \\[2mm]
\dfrac{dh}{dt} + \dfrac{\partial H}{\partial t} = 0,
\end{cases}
\quad \text{with} \quad h + H(Q, P, t) = 0.
\tag{4.21}
\]
Let us note that $\dfrac{dh}{dt} + \dfrac{\partial H}{\partial t} = 0$ results from the first two equations and the constraint $h + H(Q, P, t) = 0$. Conversely, if we note that $H = T_2 - T_0 + W$ and that $W$ is defined up to a constant, the constraint $h + H(Q, P, t) = 0$ results from the three equations of system [4.21]; it is similar for $H$. In the same way,
\[
dF(X, Y) = \frac{\partial F}{\partial X}\,dX + \frac{\partial F}{\partial Y}\,dY
= \mu \left( \frac{\partial F}{\partial X}\,\frac{\partial F}{\partial Y} - \frac{\partial F}{\partial Y}\,\frac{\partial F}{\partial X} \right) = 0;
\]
therefore, $F(X, Y) = c_3$, the scalar $c_3$ being a constant that can be chosen to be zero.
CONCLUSION 4.1.– Let us summarize the previous steps. For a system $(S)$ with $n$ independent parameters $Q = \begin{bmatrix} q_1 \\ \vdots \\ q_n \end{bmatrix}$:
a) we calculate $T(Q, \dot{Q}, t)$ and $W(Q, t)$, and we obtain $G(Q, \dot{Q}, t) = T(Q, \dot{Q}, t) - W(Q, t)$;
b) we write $P = \dfrac{\partial G(Q, \dot{Q}, t)}{\partial \dot{Q}}$, the value of the momentum vector; the $n$ terms $p_1, \cdots, p_n$ are the conjugate variables of $q_1, \cdots, q_n$;
c) we deduce $\dot{Q}$ as a function of $P$ and $Q$;
d) we calculate $-H(Q, P, t) = G(Q, \dot{Q}, t) - \dfrac{\partial G(Q, \dot{Q}, t)}{\partial \dot{Q}}\,\dot{Q}$, with $\dot{Q}$ expressed as a function of $Q$ and $P$.
If $T - W$ is explicitly independent of time, we obtain the cases $H = T + W$ or $H = T_2 - T_0 + W$. Knowing the Hamiltonian $H$, we can deduce the $2n$ independent first-order Hamilton equations.
EXAMPLE 4.3.– For a material point and relative to an orthonormal coordinate system, the kinetic energy and potential energy are
\[
T = \frac{1}{2}\,m \left( \dot{x}^2 + \dot{y}^2 + \dot{z}^2 \right)
\quad \text{and} \quad
W = W(x, y, z);
\]
thus, $G = T - W$ and
\[
\begin{cases}
p_x = m\dot{x}, \\
p_y = m\dot{y}, \\
p_z = m\dot{z},
\end{cases}
\quad \text{hence} \quad
H = \frac{1}{2m} \left( p_x^2 + p_y^2 + p_z^2 \right) + W(x, y, z),
\]
and the Hamilton equations are
\[
\begin{cases}
\dot{x} = \dfrac{p_x}{m}, \\[1mm]
\dot{y} = \dfrac{p_y}{m}, \\[1mm]
\dot{z} = \dfrac{p_z}{m},
\end{cases}
\quad \text{and} \quad
\begin{cases}
\dot{p}_x = -\dfrac{\partial W}{\partial x}, \\[1mm]
\dot{p}_y = -\dfrac{\partial W}{\partial y}, \\[1mm]
\dot{p}_z = -\dfrac{\partial W}{\partial z}.
\end{cases}
\]
We also have the Painlevé theorem,
\[
\frac{\partial H}{\partial t} = 0 \quad \Longrightarrow \quad H = H_0, \quad \text{or} \quad T + W = H_0.
\]
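Example 4.3's canonical system can also be integrated numerically. The sketch below (not from the text) assumes a harmonic potential $W = \frac{1}{2}k(x^2+y^2+z^2)$ and checks the Painlevé first integral $H = H_0$ along the computed motion:

```python
import numpy as np

# Numerical sketch of Example 4.3 (illustration only): integrate the Hamilton
# equations dq/dt = p/m, dp/dt = -grad W for an assumed harmonic potential
# W = 0.5*k*(x^2 + y^2 + z^2), and check the Painlevé first integral H = H0.
m, k = 1.0, 4.0

def rhs(state):
    q, p = state[:3], state[3:]
    return np.concatenate([p / m, -k * q])   # -grad W = -k q here

def rk4_step(state, dt):
    k1 = rhs(state)
    k2 = rhs(state + 0.5 * dt * k1)
    k3 = rhs(state + 0.5 * dt * k2)
    k4 = rhs(state + dt * k3)
    return state + dt / 6.0 * (k1 + 2*k2 + 2*k3 + k4)

def hamiltonian(state):
    q, p = state[:3], state[3:]
    return p @ p / (2 * m) + 0.5 * k * (q @ q)

state = np.array([1.0, 0.0, 0.0, 0.0, 1.0, 0.0])   # (x, y, z, px, py, pz)
H0 = hamiltonian(state)
for _ in range(2000):
    state = rk4_step(state, 1e-3)
assert abs(hamiltonian(state) - H0) < 1e-8          # H is a first integral
```

The $2n = 6$ first-order equations replace the three second-order Lagrange equations, which is what makes this one-step integration loop possible.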
4.6. First integral by using the Noether theorem

4.6.1. Secondary parameters

Let $G = G(q_1, \cdots, q_n, \dot{q}_1, \cdots, \dot{q}_n, t)$ denote the Lagrangian of system $(S)$ with respect to a Galilean frame $(R_0)$.
DEFINITION 4.8.– The parameter $q_i$, where $i \in \{1, \cdots, n\}$, is said to be a secondary parameter if and only if $\dfrac{\partial G}{\partial q_i} = 0$, i.e. $G(q_1, \cdots, q_n, \dot{q}_1, \cdots, \dot{q}_n, t)$ is independent of the variable $q_i$.
In section 4.5.1 we saw that $\dfrac{\partial H(Q, P, t)}{\partial Q} = -\dfrac{\partial G(Q, \dot{Q}, t)}{\partial Q}$. So, the definition is equivalent to the Hamiltonian being independent of $q_i$. The $i$-th Lagrange equation writes
\[
\frac{d}{dt}\!\left( \frac{\partial T}{\partial \dot{q}_i} \right) - \frac{\partial G}{\partial q_i} = 0,
\quad \text{hence} \quad
\frac{d}{dt}\!\left( \frac{\partial T}{\partial \dot{q}_i} \right) = 0.
\]
There exists a constant $c_i$ such that
\[
\frac{\partial T}{\partial \dot{q}_i} = c_i
\quad \Longleftrightarrow \quad
p_i = c_i.
\]
The conjugate variable of a secondary parameter is constant.

4.6.2. Back to the Noether theorem

Assume the parameter $q_1$ is a secondary parameter. Let $\{T_u,\ u \in J \subset \mathbb{R}\}$ be the group of translations
\[
[q_1, q_2, \cdots, q_n, t] \longrightarrow [q_1 + u, q_2, \cdots, q_n, t].
\]
The transformation $T_u$ is associated with the infinitesimal operator
\[
W = \begin{bmatrix} 1 \\ 0 \\ \vdots \\ 0 \end{bmatrix}.
\]
For the motions of system $(S)$, we get $Y\,W = c_1$, then $p_1 = c_1$.
If $t$ is a secondary parameter, $\dfrac{\partial H}{\partial t} = 0$ and therefore $h$ is constant.
THEOREM 4.2.– The Painlevé theorem. If the constraints are independent of time and $t$ is a secondary parameter, then for $T = T_2$, $H = T + W$ is constant; for $T = T_2 + T_1 + T_0$, $H = T_2 - T_0 + W$ is constant.
EXAMPLE 4.4.– Consider a system with $n$ independent parameters $q_1, \cdots, q_n$ such that $t$ and $q_1$ are secondary parameters and the possible constraints are independent of time. We consider the transformation
\[
T(u, v) : [q_1, q_2, \cdots, q_n, t] \longrightarrow [q_1 + u, q_2, \cdots, q_n, t + v].
\]
When $T = T_2 + T_1\,\dot{q}_1 + T_0\,\dot{q}_1^2$, the transformation makes $G$ invariant. From the invariance relative to $\{T(u, v),\ u \text{ and } v \in \mathbb{R}\}$, we deduce the invariance relative to $\{T_u = T(u, 0),\ u \in \mathbb{R}\}$ and relative to $\{T_v = T(0, v),\ v \in \mathbb{R}\}$, and consequently the first integrals $p_1 = c_1$ and $h = h_0$.
Let us impose the rheonomic constraint $q_1 = \omega t$. The transformation
\[
T_v : [q_1, q_2, \cdots, q_n, t] \longrightarrow [q_1 + \omega v, q_2, \cdots, q_n, t + v]
\]
makes the Hamilton action invariant. Hence, $\omega\,p_1 + h = c_1$, where $c_1$ is the constant associated with $Y\,W = c_1$, which comes from the Noether theorem applied to the generator vector $W = [\,\omega, 0, \cdots, 0, 1\,]$. Let us note that $p_1 = \dfrac{\partial T}{\partial \dot{q}_1} = T_1 + 2\,T_0\,\dot{q}_1 = T_1 + 2\,T_0\,\omega$ for $\dot{q}_1 = \omega$. The scalars $\omega\,p_1$ and $h$ are associated with the first integrals. Therefore,
\[
\omega\,(T_1 + 2\,T_0\,\omega) - (T + W) = h_0
\quad \Longrightarrow \quad
T_2 - T_0\,\omega^2 + W = -h_0,
\]
with $h_0$ constant, which is the expression of the Painlevé theorem.
Application: spherical pendulum. Consider a point $P$ with mass $m$ on a sphere $(S)$ of center $O$, with unit radius, and axis denoted by $Oz$ giving the direction of the ascending vertical. The kinetic energy in spherical coordinates (Figure 4.2) is
\[
T = \frac{1}{2}\,m \left( \dot{\theta}^2 + \dot{\varphi}^2 \sin^2\theta \right)
\]
and the potential energy is $W = k^2 \cos\theta$.
The parameter $\varphi$ is a secondary parameter; therefore, $\dfrac{\partial T}{\partial \dot{\varphi}} = m\,\alpha$, i.e. $\dot{\varphi}\,\sin^2\theta = \alpha$, where $\alpha$ is constant. The time $t$ is a secondary parameter; therefore, $T + W = h_0$, where $h_0$ is constant,
\[
\frac{1}{2}\,m \left( \dot{\theta}^2 + \dot{\varphi}^2 \sin^2\theta \right) + k^2 \cos\theta = h_0.
\]
Figure 4.2. Spherical pendulum: frictionless motion of point P placed on the sphere (S) in the gravitational field
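The pendulum's two first integrals can be checked numerically. The sketch below (not from the text, with assumed values $m = 1$, $k^2 = 1$, $\alpha = 0.3$) integrates the reduced $\theta$-equation obtained by eliminating $\dot{\varphi}$ through $\dot{\varphi}\sin^2\theta = \alpha$, and verifies that $\frac{1}{2}m(\dot{\theta}^2 + \alpha^2/\sin^2\theta) + k^2\cos\theta$ stays constant:

```python
import numpy as np

# Numerical sketch (illustration only): with phi' sin^2(theta) = alpha, the
# theta-motion of the spherical pendulum obeys
#   theta'' = alpha^2 cos(theta)/sin^3(theta) + (k^2/m) sin(theta),
# and the energy (1/2) m (theta'^2 + alpha^2/sin^2 theta) + k^2 cos(theta)
# is conserved.  Parameter values are assumptions.
m, k2, alpha = 1.0, 1.0, 0.3

def rhs(s):
    th, om = s
    return np.array([om, alpha**2 * np.cos(th) / np.sin(th)**3
                         + (k2 / m) * np.sin(th)])

def rk4(s, dt):
    k1 = rhs(s); k2_ = rhs(s + 0.5*dt*k1)
    k3 = rhs(s + 0.5*dt*k2_); k4 = rhs(s + dt*k3)
    return s + dt/6*(k1 + 2*k2_ + 2*k3 + k4)

def energy(s):
    th, om = s
    return 0.5*m*(om**2 + alpha**2/np.sin(th)**2) + k2*np.cos(th)

s = np.array([1.0, 0.0])     # theta(0) = 1 rad, theta'(0) = 0
h0 = energy(s)
for _ in range(5000):
    s = rk4(s, 1e-3)
assert abs(energy(s) - h0) < 1e-5   # first integral conserved
```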
Now, let us consider the case where we impose the rheonomic constraint $\varphi = \omega t$, corresponding to an imposed rotation around axis $Oz$. Thus,
\[
T = \frac{1}{2}\,m \left( \dot{\theta}^2 + \omega^2 \sin^2\theta \right)
\quad \text{and} \quad
W = k^2 \cos\theta.
\]
Therefore, $T_2 - T_0 + W = h_1$, where $h_1$ is constant. Then,
\[
\frac{1}{2}\,m \left( \dot{\theta}^2 - \omega^2 \sin^2\theta \right) + k^2 \cos\theta = h_1.
\]
The equation of energy in the first case differs from Painlevé's equation in the second case.
Now, let us consider a system with Lagrangian
\[
G_1 = \frac{1}{2}\,m \left( \dot{\theta}^2 + \frac{\alpha^2}{\sin^2\theta} \right) - k^2 \cos\theta
\quad \text{with} \quad \dot{\varphi}\,\sin^2\theta = \alpha.
\]
We note that the condition $\dot{\varphi}\,\sin^2\theta = \alpha$ corresponds to the first integral obtained in the first case, but the study is completely different. Time $t$ is a secondary parameter, and we get Painlevé's first integral
\[
\frac{1}{2}\,m \left( \dot{\theta}^2 - \frac{\alpha^2}{\sin^2\theta} \right) + k^2 \cos\theta = h_2, \quad \text{where } h_2 \text{ is constant}.
\tag{4.22}
\]
This expression differs from the energy theorem obtained when the parameters $\theta$ and $\varphi$ are independent,
\[
\frac{1}{2}\,m \left( \dot{\theta}^2 + \frac{\alpha^2}{\sin^2\theta} \right) + k^2 \cos\theta = h_0, \quad \text{where } h_0 \text{ is constant}.
\tag{4.23}
\]
Indeed, equation [4.22] is associated with the study of the extremals of the integral
\[
a_1 = \int_{t_0}^{t_1} G_1(\theta, \dot{\theta}, \dot{\varphi})\,dt,
\]
submitted to the constraint
\[
\dot{\varphi} = \frac{\alpha}{\sin^2\theta}
\quad \text{or} \quad
\varphi(t_1) - \varphi(t_0) = \int_{t_0}^{t_1} \frac{\alpha}{\sin^2\theta}\,dt.
\]
We have written the expression of the first integral in the Lagrangian expression. For example, when $H$ is independent of $q_1$, we obtained the first integral $p_1 = c_1$, which is written, by solving relative to $\dot{q}_1$, in the form
\[
\dot{q}_1 = f(c_1, q_2, \cdots, q_n, \dot{q}_2, \cdots, \dot{q}_n, t).
\tag{4.24}
\]
By bringing the result into $G$, we obtain
\[
G_1(q_2, \cdots, q_n, \dot{q}_2, \cdots, \dot{q}_n, t) = G\bigl( q_2, \cdots, q_n, f(c_1, q_2, \cdots, q_n, \dot{q}_2, \cdots, \dot{q}_n, t), \dot{q}_2, \cdots, \dot{q}_n, t \bigr).
\]
Even if we take into account the partial result $\dot{q}_1 = f(c_1, q_2, \cdots, q_n, \dot{q}_2, \cdots, \dot{q}_n, t)$, the study associated with system $(S_1)$ corresponding to $G_1$ is not equivalent to the study associated with the system $(S)$ corresponding to $G$. This observation leads to the problem of the re-injection of a partial result into the Lagrangian.
4.7. Re-injection of a partial result

What modification of the Lagrangian must be made when we know the expression of a linear first integral of the motion? Consider a material system with $n$ independent parameters $q_1, \cdots, q_n$, with $q_1$ being a secondary parameter. The Lagrangian expression is $G = G(q_2, \cdots, q_n, \dot{q}_1, \cdots, \dot{q}_n, t)$. The expression of the first integral corresponding to $q_1$ is $\dfrac{\partial T}{\partial \dot{q}_1} = \alpha$. Let
\[
Q_0 = \begin{bmatrix} q_1(t_0) \\ \vdots \\ q_n(t_0) \end{bmatrix}
\quad \text{and} \quad
Q_1 = \begin{bmatrix} q_1(t_1) \\ \vdots \\ q_n(t_1) \end{bmatrix}
\]
be the given ends of the motion of the system $(S)$. The equations of motion are associated with the extremal curves connecting $Q_0$ and $Q_1$ of the integral
\[
a = \int_{t_0}^{t_1} G(q_2, \cdots, q_n, \dot{q}_1, \cdots, \dot{q}_n, t)\,dt.
\]
They are the same curves that make extremal the integral
\[
a_1 = \int_{t_0}^{t_1} \bigl[\, G(q_2, \cdots, q_n, \dot{q}_1, \cdots, \dot{q}_n, t) - \alpha\,\dot{q}_1 \,\bigr]\,dt,
\]
where $\alpha$ is constant. Indeed, writing $a_1 = a - \alpha\,q_1(t_1) + \alpha\,q_1(t_0)$, the scalars $a_1$ and $a$ differ by a constant. Let us choose the value of $\alpha$ given by the first integral $\dfrac{\partial T}{\partial \dot{q}_1} = \alpha$. Then,
\[
\delta a_1 = \int_{t_0}^{t_1} \left[ \left( \frac{\partial G}{\partial \dot{q}_1} - \alpha \right) \delta\dot{q}_1 + \sum_{j=2}^{n} \left( \frac{\partial G}{\partial \dot{q}_j}\,\delta\dot{q}_j + \frac{\partial G}{\partial q_j}\,\delta q_j \right) \right] dt
= \sum_{j=2}^{n} \int_{t_0}^{t_1} \left( -\frac{d}{dt}\frac{\partial G}{\partial \dot{q}_j} + \frac{\partial G}{\partial q_j} \right) \delta q_j\,dt
+ \left[ \sum_{j=2}^{n} \frac{\partial G}{\partial \dot{q}_j}\,\delta q_j \right]_{\widetilde{Q}(t_0)}^{\widetilde{Q}(t_1)},
\]
where $\widetilde{Q} = \begin{bmatrix} q_2 \\ \vdots \\ q_n \end{bmatrix}$ is fixed at $t_0$ and $t_1$, and $Q = \begin{bmatrix} q_1 \\ \widetilde{Q} \end{bmatrix}$. Thus, $\left[ \displaystyle\sum_{j=2}^{n} \frac{\partial G}{\partial \dot{q}_j}\,\delta q_j \right]_{\widetilde{Q}(t_0)}^{\widetilde{Q}(t_1)}$ is zero and
\[
\delta a_1 = \sum_{j=2}^{n} \int_{t_0}^{t_1} \left( -\frac{d}{dt}\frac{\partial G}{\partial \dot{q}_j} + \frac{\partial G}{\partial q_j} \right) \delta q_j\,dt.
\]
Therefore, the equations of motion are
\[
-\frac{d}{dt}\!\left( \frac{\partial G}{\partial \dot{q}_j} \right) + \frac{\partial G}{\partial q_j} = 0 \quad \text{for} \quad j \in \{2, \cdots, n\},
\]
to which we add the condition $\dfrac{\partial G}{\partial \dot{q}_1} = \alpha$. They are the equations of motion of $(S)$.
CONCLUSION 4.2.– If $G_1 = G(q_2, \cdots, q_n, \dot{q}_1, \cdots, \dot{q}_n, t) - \alpha\,\dot{q}_1$ with $\dfrac{\partial G}{\partial \dot{q}_1} = \alpha$, Hamilton's principle is equivalent to a Hamilton's principle for the $n-1$ parameters $q_2, \cdots, q_n$, where $q_2, \cdots, q_n$ are fixed at $t_0$ and $t_1$.
EXAMPLE 4.5.– We consider the spherical pendulum described in section 4.6.2. The parameter $\varphi$ is a secondary one, and $p_\varphi = m\,\dot{\varphi}\,\sin^2\theta = m\,\alpha$. We consider
\[
G_1 = G - m\,\alpha\,\dot{\varphi} = \frac{1}{2}\,m \left( \dot{\theta}^2 + \dot{\varphi}^2 \sin^2\theta \right) - k^2 \cos\theta - m\,\alpha\,\dot{\varphi}.
\]
By re-injecting the first integral $\dot{\varphi}\,\sin^2\theta = \alpha$ into $G_1$, we obtain a Lagrangian depending only on the parameter $\theta$,
\[
G_1 = \frac{1}{2}\,m \left( \dot{\theta}^2 - \frac{\alpha^2}{\sin^2\theta} \right) - k^2 \cos\theta,
\]
corresponding to a system with kinetic and potential energies
\[
T = \frac{1}{2}\,m\,\dot{\theta}^2
\quad \text{and} \quad
W = k^2 \cos\theta + \frac{m\,\alpha^2}{2 \sin^2\theta}.
\]
Then, we obtain the energy first integral
\[
\frac{1}{2}\,m \left( \dot{\theta}^2 + \frac{\alpha^2}{\sin^2\theta} \right) + k^2 \cos\theta = h_0,
\]
or, by taking $p_\varphi$ into account,
\[
\frac{1}{2}\,m \left( \dot{\theta}^2 + \dot{\varphi}^2 \sin^2\theta \right) + k^2 \cos\theta = h_0,
\]
which is the expression obtained in [4.23].
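Example 4.5 can be cross-checked symbolically; the sympy sketch below (not part of the text) verifies that the $\theta$-equation derived from the re-injected Lagrangian $G_1$ coincides with the $\theta$-equation of the full spherical pendulum once the first integral $\dot{\varphi}\sin^2\theta = \alpha$ is substituted:

```python
import sympy as sp

# Symbolic sketch (illustration only): the Lagrange equation in theta from
#   G1 = (m/2)(theta'^2 - alpha^2/sin^2 theta) - k^2 cos(theta)
# matches the theta-equation of the full Lagrangian with phi' eliminated
# AFTER differentiation, which is the point of the re-injection procedure.
t = sp.symbols('t')
m, k, alpha = sp.symbols('m k alpha', positive=True)
th = sp.Function('theta')(t)
phps = sp.symbols('phps')   # placeholder for the value of phi'

# theta-equation of the full Lagrangian, phi' kept as a parameter
G = sp.Rational(1, 2)*m*(th.diff(t)**2 + phps**2*sp.sin(th)**2) - k**2*sp.cos(th)
eq_full = (sp.diff(G.diff(th.diff(t)), t) - G.diff(th)).subs(phps, alpha/sp.sin(th)**2)

# theta-equation of the reduced Lagrangian of section 4.7
G1 = sp.Rational(1, 2)*m*(th.diff(t)**2 - alpha**2/sp.sin(th)**2) - k**2*sp.cos(th)
eq_red = sp.diff(G1.diff(th.diff(t)), t) - G1.diff(th)

assert sp.simplify(eq_full - eq_red) == 0
```

Substituting $\dot{\varphi} = \alpha/\sin^2\theta$ directly into $G$ without the $-m\alpha\dot{\varphi}$ correction would flip the sign of the $\alpha^2$ term and fail this check, which is exactly the trap discussed at the end of section 4.6.2.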
4.8. The Maupertuis principle

Let us use the results from section 4.7 when $t$ is a secondary parameter. Consider a system $(S)$ whose Lagrangian $G = G(Q, \dot{Q})$ is explicitly independent of time $t$. The homogeneous Lagrangian is
\[
L(X, X') = G\!\left( Q, \frac{Q'}{t'} \right) t',
\]
where $X = \begin{bmatrix} Q \\ t \end{bmatrix}$ is a function of the parameter $\tau$. The system is associated with Painlevé's first integral, which has the dimension of an energy,
\[
h = \frac{\partial L}{\partial t'} = -E, \quad \text{with} \quad \frac{\partial L}{\partial t'} = G - \frac{\partial G}{\partial \dot{Q}}\,\dot{Q},
\]
or $H(Q, P) = E$.
Since $\dfrac{\partial L}{\partial t'} = -E$ corresponds to the linear first integral associated with the secondary parameter $t$, according to section 4.7, searching for the extremals of the Hamilton action
\[
a = \int_{X_0}^{X_1} L(X, X')\,d\tau,
\quad \text{where} \quad
X_0 = \begin{bmatrix} Q(t_0) \\ t_0 \end{bmatrix}
\ \text{and} \
X_1 = \begin{bmatrix} Q(t_1) \\ t_1 \end{bmatrix},
\]
is equivalent to searching for the extremals of
\[
a_1 = \int_{X_0}^{X_1} \bigl[\, L(X, X') - (-E)\,t' \,\bigr]\,d\tau
\quad \text{with} \quad \frac{\partial L}{\partial t'} = -E,
\]
or equivalently,
\[
a_1 = \int_{t_0}^{t_1} \bigl[\, G(Q, \dot{Q}) + E \,\bigr]\,dt
\quad \text{with} \quad \frac{\partial G}{\partial \dot{Q}}\,\dot{Q} - G = E.
\]
Therefore, since $P = \dfrac{\partial G}{\partial \dot{Q}}$,
\[
G + E = \frac{\partial G}{\partial \dot{Q}}\,\dot{Q}
\quad \Longrightarrow \quad
(G + E)\,dt = P\,dQ.
\]
In an $n$-dimensional space, we are brought back to the extremals of
\[
a_1 = \int_{Q_0}^{Q_1} P\,dQ
\quad \text{with} \quad H(Q, P) = E.
\]
THEOREM 4.3.– The Maupertuis principle. The trajectories of a system whose Lagrangian (or Hamiltonian) is explicitly independent of time are obtained by searching for the extremal curves of the integral
\[
a_1 = \int_{Q_0}^{Q_1} P\,dQ,
\]
where $Q_0$ and $Q_1$ are the ends of the curves, with $H(Q, P) = E$, where $E$ is the constant energy along the trajectories.

4.8.1. First application: case of a material point

A material point, with mass $m$ in the three-dimensional Euclidean space $E^3$, admits a kinetic energy $T$ and a potential energy $W$. The time is a secondary parameter. We have the energy first integral $T + W = E$. Since $G + E = 2\,T$, we obtain
\[
a_1 = \int_{t_0}^{t_1} 2\,T\,dt,
\]
and the Maupertuis principle implies that the trajectories make extremal the time integral of the kinetic energy.
From $2\,T = 2\,(E - W)$ and $T = \dfrac{1}{2}\,m \left( \dfrac{ds}{dt} \right)^2$, with $s$ denoting the positively oriented curvilinear abscissa of the trajectory, we obtain
\[
\frac{1}{2}\,m \left( \frac{ds}{dt} \right)^2 = E - W
\quad \Longrightarrow \quad
\frac{ds}{dt} = \sqrt{\frac{2\,(E - W)}{m}}.
\]
Hence,
\[
a_1 = \int_{s_0}^{s_1} \sqrt{2m\,(E - W)}\,ds,
\quad \text{where} \quad s_0 = s(t_0) \ \text{and} \ s_1 = s(t_1).
\]
If a refractive index $n$ at any point $M$ of $E^3$ is defined by the relation $n(M) = \sqrt{2m\,(E - W)}$, we are brought back to the integral $a_1 = \displaystyle\int_{s_0}^{s_1} n(M)\,ds$.
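The optical analogy can be illustrated numerically. In the sketch below (not from the text; the step shape and parameter values are assumptions), a point mass crosses a smooth potential step: since the momentum component parallel to the step is conserved, the trajectory refracts according to a mechanical Snell law $n_1 \sin\theta_1 = n_2 \sin\theta_2$ with $n = \sqrt{2m(E - W)}$:

```python
import numpy as np

# Sketch (illustration only): a point mass crossing a smooth potential step
# W(y) = W0 * sigmoid(y / 0.05) bends like a light ray of index
# n = sqrt(2 m (E - W)).  Shapes and values below are assumptions.
m, W0 = 1.0, 0.6

def W(y):
    return W0 / (1.0 + np.exp(-y / 0.05))

def dW(y):
    s = 1.0 / (1.0 + np.exp(-y / 0.05))
    return W0 * s * (1.0 - s) / 0.05

def rhs(z):
    x, y, px, py = z
    return np.array([px / m, py / m, 0.0, -dW(y)])

def rk4(z, dt):
    k1 = rhs(z); k2 = rhs(z + 0.5*dt*k1)
    k3 = rhs(z + 0.5*dt*k2); k4 = rhs(z + dt*k3)
    return z + dt/6*(k1 + 2*k2 + 2*k3 + k4)

z = np.array([0.0, -1.0, 0.6, 1.2])       # start below the step, oblique
E = (z[2]**2 + z[3]**2) / (2*m) + W(z[1])
for _ in range(6000):
    z = rk4(z, 5e-4)

# mechanical Snell law: n sin(theta) is conserved across the step
n1, n2 = np.sqrt(2*m*E), np.sqrt(2*m*(E - W0))
sin1 = 0.6 / n1                            # initial sin(theta)
sin2 = z[2] / np.hypot(z[2], z[3])         # final sin(theta)
assert z[1] > 0.8                          # the step has been crossed
assert abs(n1*sin1 - n2*sin2) < 1e-4
```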
COROLLARY 4.1.– The trajectories of point $M$, with mass $m$ and potential energy $W(M)$, are the optical paths associated with the refractive index $n(M) = \sqrt{2m\,(E - W)}$, where $E$ is the constant total energy of point $M$. It can be noted that $W(M) = E - \dfrac{n^2(M)}{2m}$.
COROLLARY 4.2.– To a given optical medium with refractive index $n(M)$ and to the optical paths in the medium, we can associate a material point $M$ of mass $m$, total energy $E$ and potential energy $W(M) = E - \dfrac{n^2(M)}{2m}$, such that the optical paths correspond to the trajectories of point $M$.
The corollary is the starting point of Louis de Broglie's theory connecting wave mechanics and geometric optics (De Broglie 1924). It can be noted that the writing of the Maupertuis principle corresponds to finding the extremals of $a_1 = \displaystyle\int_{Q_0}^{Q_1} P\,dQ$ with $\dfrac{1}{2}\,\dfrac{P\,P^{\top}}{m} + W = E$, $Q$ being the column of the Cartesian coordinates of $M$. Thus, $P\,P^{\top} = 2m\,(E - W)$, or $P\,P^{\top} = n^2(M)$, the scalar field $n(M)$ being the refractive index. The extremal curves satisfy the differential condition written in Part 1, Chapter 2, in the form
\[
\frac{d(n\,T)}{ds} = \operatorname{grad} n,
\]
$T$ denoting the unit tangent vector to the trajectory.

4.8.2. Second application: introduction to Riemannian geometry

Let $(S)$ be a system with $n$ independent parameters and a kinetic energy $T$ homogeneous of second degree with respect to $\dot{q}_1, \cdots, \dot{q}_n$. We can write $2\,T = \dot{Q}^{\top} A\,\dot{Q}$, where $A$ is a symmetric matrix depending only on $Q$ and $t$. Then,
\[
2\,T = \sum_{i,j} a_{ij}\,\dot{q}_i\,\dot{q}_j
\quad \text{with} \quad a_{ij} = a_{ji}, \quad i, j \in \{1, \cdots, n\}.
\]
The forces applied to $(S)$ derive from the potential energy $W = W(Q, t) = W(q_1, \cdots, q_n, t)$. If the Lagrangian is explicitly independent of $t$, the matrix $A$ depends only on $Q$ and $W = W(q_1, \cdots, q_n)$. The energy $T + W = E$ is a first integral of the motion. In the space of parameters $(q_1, \cdots, q_n)$, the trajectories of system $(S)$ are the extremals of
\[
a_1 = \int_{Q_0}^{Q_1} (G + E)\,dt = \int_{t_0}^{t_1} 2\,T\,dt = \int_{t_0}^{t_1} 2\,(E - W)\,dt.
\]
For an appropriate orientation of the trajectories in the space of parameters,
\[
\dot{Q}^{\top} A\,\dot{Q} = 2\,(E - W)
\ \Longrightarrow\
dQ^{\top} A\,dQ = 2\,(E - W)\,(dt)^2
\ \Longrightarrow\
dt = \sqrt{\frac{dQ^{\top} A\,dQ}{2\,(E - W)}};
\]
hence,
\[
a_1 = \int_{Q_0}^{Q_1} \sqrt{2\,(E - W)\,\bigl( dQ^{\top} A\,dQ \bigr)}.
\]
The mapping $Q \in \mathbb{R}^n \longrightarrow B(Q) = 2\,\bigl( E - W(Q) \bigr)\,A(Q)$ represents a field of symmetric matrices whose coefficients are defined at any point $Q$ by $b_{ij} = 2\,(E - W)\,a_{ij}$, $i, j \in \{1, \cdots, n\}$. The matrix field is associated with a field of positive definite quadratic forms.
For example, for $2\,\bigl( E - W(Q) \bigr)\,A(Q) = I$, the identity, we associate $a_1 = \displaystyle\int_{Q_0}^{Q_1} \sqrt{dQ^{\top}\,dQ}$, which represents the length of the trajectory in the space of parameters. The space of parameters becomes an $n$-dimensional Euclidean vectorial space.
DEFINITION 4.9.– A Riemann space is an $n$-dimensional vectorial space provided with a field of positive definite quadratic forms. At any point, the matrix $B$ represents the quadratic form in the vectorial space basis. If we consider a vector field $Q \in \mathbb{R}^n \longrightarrow V(Q)$, we call the scalar square at $Q$ of the vector field $V$ the quantity $V(Q)^{\top} B(Q)\,V(Q)$.
If, in the previous study, we denote $d\sigma = \sqrt{2\,(E - W)\,dQ^{\top} A\,dQ}$, we obtain
\[
a_1 = \int_{Q_0}^{Q_1} d\sigma.
\]
We say that the trajectory curves of system $(S)$ are the length extremals for a Riemann space whose metric is associated with both the kinetic energy and the potential energy.
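The identification $a_1 = \int d\sigma$ can be cross-checked numerically (an illustration, not from the text): along an actual motion, the Riemannian length $\int d\sigma$ equals the action $\int 2T\,dt$. The sketch uses a planar harmonic oscillator with $A = m\,I$ and assumed unit parameters:

```python
import numpy as np

# Sketch (illustration only): along an actual motion, the Maupertuis length
#   integral of sqrt(2 (E - W) dQ^T A dQ)
# equals the action  integral of 2 T dt.  Planar harmonic oscillator,
# A = m * identity, assumed m = k = 1.
m, k = 1.0, 1.0
w = np.sqrt(k / m)
t = np.linspace(0.0, 2.0, 100001)
x, y = np.cos(w * t), 0.5 * np.sin(w * t)           # an exact trajectory
vx, vy = -w * np.sin(w * t), 0.5 * w * np.cos(w * t)

T = 0.5 * m * (vx**2 + vy**2)
Wpot = 0.5 * k * (x**2 + y**2)
E = T[0] + Wpot[0]                                   # conserved energy

# action integral of 2T dt (trapezoidal rule)
action = np.sum(0.5 * (2*T[1:] + 2*T[:-1]) * np.diff(t))

# Riemannian length: d(sigma) = sqrt(2 (E - W) dQ^T A dQ) with A = m I
ds2 = m * (np.diff(x)**2 + np.diff(y)**2)
Wmid = 0.5 * (Wpot[1:] + Wpot[:-1])
length = np.sum(np.sqrt(2.0 * (E - Wmid) * ds2))

assert abs(action - length) < 1e-6 * action
```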
5 Jacobi’s Integration Method
5.1. Canonical transformations

The Hamilton equations
\[
\begin{cases}
\dfrac{dQ}{dt} = \dfrac{\partial H}{\partial P}, \\[2mm]
\dfrac{dP}{dt} = -\dfrac{\partial H}{\partial Q},
\end{cases}
\tag{5.1}
\]
can be obtained by writing that
\[
a = \int_{(Q_0, t_0)}^{(Q_1, t_1)} P\,dQ + h\,dt
\]
is extremum, where $(Q_0, t_0)$ and $(Q_1, t_1)$ are given and $h$, $P$, $Q$ and $t$ are related through the relation $h + H(P, Q, t) = 0$. Equations [5.1] are called the canonical form of the motion equations. We search for a change of variables in $\mathbb{R}^{2n}$,
\[
(P, Q, t) \longrightarrow (A, B, t),
\]
represented by
\[
\begin{cases}
A = A(P, Q, t), \\
B = B(P, Q, t),
\end{cases}
\quad \text{or equivalently} \quad
\begin{cases}
P = \mathcal{P}(A, B, t), \\
Q = \mathcal{Q}(A, B, t),
\end{cases}
\tag{5.2}
\]
which can be made explicit in the form
\[
\begin{cases}
a_i = A_i(p_1, \cdots, p_n, q_1, \cdots, q_n, t), \\
b_i = B_i(p_1, \cdots, p_n, q_1, \cdots, q_n, t),
\end{cases}
\quad \text{with} \quad i \in \{1, \cdots, n\}.
\]
Generally, the change of variables does not retain the canonical form of equations [5.1].
DEFINITION 5.1.– The change of variables – called the Jacobi transformation – is said to be canonical if and only if equations [5.1] are transformed as
\[
\begin{cases}
\dfrac{dB}{dt} = \dfrac{\partial K}{\partial A}, \\[2mm]
\dfrac{dA}{dt} = -\dfrac{\partial K}{\partial B},
\end{cases}
\tag{5.3}
\]
where $K(A, B, t)$ replaces the Hamiltonian $H(P, Q, t)$ and represents the new Hamiltonian as a function of $(A, B, t)$.
Equations [5.3] express that
\[
b = \int_{(B_0, t_0)}^{(B_1, t_1)} A\,dB - K(A, B, t)\,dt
\]
is extremum on a curve $(C)$ connecting points $B_0 = B(t_0)$ and $B_1 = B(t_1)$. The change of variables is canonical if $a$ and $b$ are simultaneously extremal. This is the case when $a - b$ is constant. So, a sufficient condition is that the differential form $P\,dQ - H\,dt - (A\,dB - K\,dt)$ is exact, i.e. that there exists a function $S$ such that
\[
dS = P\,dQ - H\,dt - (A\,dB - K\,dt).
\tag{5.4}
\]
If $\det \left( \dfrac{\partial^2 S}{\partial q_i\,\partial b_j} \right) \neq 0$, the function $S$ is called the generating function of the canonical transformation.
Given relation [5.4], the generating function $S$ satisfies the conditions
\[
\begin{cases}
P = \dfrac{\partial S(Q, B, t)}{\partial Q}, \\[2mm]
-A = \dfrac{\partial S(Q, B, t)}{\partial B}, \\[2mm]
K - H = \dfrac{\partial S(Q, B, t)}{\partial t},
\end{cases}
\quad \text{or} \quad
\begin{cases}
p_i = \dfrac{\partial S(q_1, \cdots, q_n, b_1, \cdots, b_n, t)}{\partial q_i}, \\[2mm]
-a_i = \dfrac{\partial S(q_1, \cdots, q_n, b_1, \cdots, b_n, t)}{\partial b_i}, \\[2mm]
K(a_1, \cdots, a_n, b_1, \cdots, b_n, t) - H(p_1, \cdots, p_n, q_1, \cdots, q_n, t) = \dfrac{\partial S(q_1, \cdots, q_n, b_1, \cdots, b_n, t)}{\partial t},
\end{cases}
\tag{5.5}
\]
with $i \in \{1, \cdots, n\}$. The equations giving $p_1, \cdots, p_n$ in system [5.5] constitute an implicit system of $n$ equations, where $b_1, \cdots, b_n$ are functions of $p_1, \cdots, p_n, q_1, \cdots, q_n$ and $t$:
\[
p_1 = \frac{\partial S(q_1, \cdots, q_n, b_1, \cdots, b_n, t)}{\partial q_1},
\quad \cdots, \quad
p_n = \frac{\partial S(q_1, \cdots, q_n, b_1, \cdots, b_n, t)}{\partial q_n}.
\]
Since $\det \left( \dfrac{\partial^2 S}{\partial q_i\,\partial b_j} \right) \neq 0$, the theorem of implicit functions makes it possible to deduce $B = \begin{bmatrix} b_1 \\ \vdots \\ b_n \end{bmatrix}$ in the form $B = B(P, Q, t)$. Then, $B$ being obtained, we can obtain
\[
A = -\frac{\partial S(Q, B(P, Q, t), t)}{\partial B},
\]
where $A = A(P, Q, t)$ corresponds to the change of variables [5.2], which becomes canonical. In the new variables $A$ and $B$, the expression of $S$ implies
\[
K(A, B, t) = \frac{\partial S}{\partial t}\bigl( \mathcal{Q}(A, B, t), B, t \bigr) + H\bigl( \mathcal{P}(A, B, t), \mathcal{Q}(A, B, t), t \bigr),
\tag{5.6}
\]
and it is possible to write the system of the Hamilton equations [5.3] with the new variables.

5.2. The Jacobi method

We search for generating functions such that $K \equiv 0$. The equations of motion in the new variables are
\[
\frac{dA}{dt} = 0 \quad \text{and} \quad \frac{dB}{dt} = 0,
\quad \text{or} \quad
A = A_0 \ \text{and} \ B = B_0,
\tag{5.7}
\]
where $A_0$ and $B_0$ are constant vectors, i.e. $a_1, \cdots, a_n, b_1, \cdots, b_n$ are constants depending on the initial conditions. Given relation [5.6], the equation giving the generating function $S = S(Q, B, t)$ can be written as
\[
\frac{\partial S}{\partial t} + H\!\left( \frac{\partial S}{\partial Q}, Q, t \right) = 0,
\tag{5.8}
\]
where $H$ is the Hamiltonian. This first-order partial differential equation is called the Jacobi equation. It is written
\[
\frac{\partial S}{\partial t} + H\!\left( \frac{\partial S}{\partial q_1}(q_1, \cdots, q_n, b_1, \cdots, b_n, t), \cdots, \frac{\partial S}{\partial q_n}(q_1, \cdots, q_n, b_1, \cdots, b_n, t), q_1, \cdots, q_n, t \right) = 0.
\]
The equations of motion are in the form [5.7] for any solution $S = S(q_1, \cdots, q_n, b_1, \cdots, b_n, t)$ of equation [5.8] depending on $n$ arbitrary constants denoted by $b_1, \cdots, b_n$ and such that $\det \left( \dfrac{\partial^2 S}{\partial q_i\,\partial b_j} \right) \neq 0$; the change of canonical variables is associated with the system
\[
\begin{cases}
P = \dfrac{\partial S(Q, B, t)}{\partial Q}, \\[2mm]
-A = \dfrac{\partial S(Q, B, t)}{\partial B},
\end{cases}
\]
which expresses the equations of motion in variables $P$ and $Q$ as functions of the $2n$ integration constants $[a_1, \cdots, a_n] = A$ and $[b_1, \cdots, b_n] = B$, by solving the system in the form
\[
Q = \mathcal{Q}(A, B, t) \quad \text{and} \quad P = \mathcal{P}(A, B, t).
\]
$Q = \mathcal{Q}(A, B, t)$ represents the equations of motion in the form $q_i = q_i(a_1, \cdots, a_n, b_1, \cdots, b_n, t)$; $P = \mathcal{P}(A, B, t)$ represents the momentum covector obtained after replacing $Q$ as a function of $A$, $B$ and $t$.
REMARK 5.1.– If $S$ is a solution of [5.8], then $S + \lambda$, where $\lambda \in \mathbb{R}$, is also a solution of [5.8]. Nevertheless, $\lambda$ cannot be taken as one of the parameters $b_1, \cdots, b_n$, since $\dfrac{\partial (S + \lambda)}{\partial \lambda} = 1$ and, $\forall\, i \in \{1, \cdots, n\}$, $\dfrac{\partial^2 (S + \lambda)}{\partial \lambda\,\partial q_i} = 0$; in this case, $\det \left( \dfrac{\partial^2 (S + \lambda)}{\partial q_i\,\partial b_j} \right) = 0$ when $\lambda$ is taken as one of the parameters $b_j$.
EXAMPLE 5.1.– One-dimensional harmonic oscillator. Consider a point $M$ with unit mass, on the axis $Oi$, attracted by the point $O$ proportionally to the distance. We denote $q = \overline{OM} \cdot i$. The kinetic energy and the potential energy are
\[
T = \frac{1}{2}\,\dot{q}^2
\quad \text{and} \quad
W = \frac{1}{2}\,\omega^2 q^2.
\]
Consequently,
\[
a = \frac{1}{2} \int_{t_0}^{t_1} \left( \dot{q}^2 - \omega^2 q^2 \right) dt,
\quad \text{hence} \quad
\begin{cases}
p = \dot{q}, \\[1mm]
h = -\dfrac{1}{2}\,\dot{q}^2 - \dfrac{1}{2}\,\omega^2 q^2 = -H(p, q, t).
\end{cases}
\]
We also deduce this relation from $H = T + W$ and $H(p, q, t) = \dfrac{1}{2}\left( p^2 + \omega^2 q^2 \right)$. The Jacobi equation becomes
\[
\frac{\partial S}{\partial t} + \frac{1}{2} \left( \frac{\partial S}{\partial q} \right)^2 + \frac{1}{2}\,\omega^2 q^2 = 0.
\tag{5.9}
\]
We search for the solutions of [5.9] in the form $S = \beta\,t + S_1(q)$, where $\beta$ is a real constant. Thus,
\[
\beta + \frac{1}{2}\,S_1'(q)^2 + \frac{1}{2}\,\omega^2 q^2 = 0,
\]
and a solution is
\[
S_1 = \int_{q_0}^{q} \sqrt{-2\beta - \omega^2 q^2}\,dq,
\]
which yields
\[
S = \beta\,t + \int_{q_0}^{q} \sqrt{-2\beta - \omega^2 q^2}\,dq.
\]
We verify that we have obtained a solution depending on one parameter $\beta$. Then,
\[
\frac{\partial S}{\partial \beta} = -\alpha,
\]
where $\alpha$ is constant, expresses the equation of motion in the form
\[
t + \alpha - \int_{q_0}^{q} \frac{dq}{\sqrt{-2\beta - \omega^2 q^2}} = 0.
\]
Let us denote $a^2 \omega^2 = -2\beta$; then
\[
\omega\,(t + \alpha) = \int_{q_0}^{q} \frac{dq}{\sqrt{a^2 - q^2}}.
\]
By writing $\alpha = -t_0$, we obtain $q = a \cos \omega\,(t - t_0)$.

5.2.1. One position parameter is secondary

The parameter $q_1$ is assumed to be secondary. Then, $\dfrac{\partial H}{\partial q_1} = 0$. We search for $S$ in the form
\[
S = \beta_1\,q_1 + S_1(q_2, \cdots, q_n, t, \beta_1).
\]
We deduce
\[
H(p_1, \cdots, p_n, q_2, \cdots, q_n, t) = H\!\left( \beta_1, \frac{\partial S_1}{\partial q_2}, \cdots, \frac{\partial S_1}{\partial q_n}, q_2, \cdots, q_n, t \right),
\]
where $\beta_1$ is constant and $S_1$ verifies
\[
\frac{\partial S_1}{\partial t} + H\!\left( \beta_1, \frac{\partial S_1}{\partial q_2}, \cdots, \frac{\partial S_1}{\partial q_n}, q_2, \cdots, q_n, t \right) = 0,
\]
which is the new form of Jacobi's equation.
5.2.2. Time is a secondary parameter

This is the case when $\dfrac{\partial H}{\partial t} = 0$, or $H = H(P, Q)$. We search for the generating function $S$ in the form
\[
S = -E\,t + S_1(q_1, \cdots, q_n, E).
\]
The Jacobi equation becomes
\[
H\!\left( \frac{\partial S_1}{\partial q_1}, \cdots, \frac{\partial S_1}{\partial q_n}, q_1, \cdots, q_n \right) = E,
\]
where $E$ is the Painlevé energy of the system. Let $S_1 = S_1(q_1, \cdots, q_n, E, \beta_2, \cdots, \beta_n)$ be a solution depending on the $n-1$ parameters $\beta_2, \cdots, \beta_n$. The equations of motion become
\[
\frac{\partial S}{\partial E} = -t_0
\quad \text{or} \quad
t - t_0 = \frac{\partial S_1}{\partial E},
\qquad
\frac{\partial S}{\partial \beta_i} + \alpha_i = 0
\quad \text{with} \quad i \in \{2, \cdots, n\},
\]
where $t_0$ and the $\alpha_i$ are constant. The first relation expresses the law of time on the trajectory. The $n-1$ other equations are independent of time and connect the parameters $q_1, \cdots, q_n$; they represent the equations of the trajectory.

5.3. The material point in various systems of representation

In three-dimensional space, consider the motion of a material point with mass $m$, subject to a force field independent of time and deriving from a potential $W$. We consider forms of $W$ for which we can integrate the Jacobi equation with separated position parameters.

5.3.1. Case of Cartesian coordinates

The kinetic energy and the Hamiltonian are
\[
T = \frac{1}{2}\,m \left( \dot{x}^2 + \dot{y}^2 + \dot{z}^2 \right)
\quad \text{and} \quad
H = \frac{1}{2m} \left( p^2 + q^2 + r^2 \right) + W(x, y, z),
\]
where $p = m\dot{x}$, $q = m\dot{y}$, $r = m\dot{z}$ denote the conjugate variables of $x$, $y$, $z$, respectively. Taking into account that $t$ is a secondary parameter, the generating function is $S = -E\,t + F(x, y, z)$ and the Jacobi equation can be written as
\[
\frac{1}{2m} \left[ \left( \frac{\partial F}{\partial x} \right)^2 + \left( \frac{\partial F}{\partial y} \right)^2 + \left( \frac{\partial F}{\partial z} \right)^2 \right] + W(x, y, z) = E.
\]
In order for the variables $x$, $y$, $z$ to be separated, we must have $W(x, y, z) = W_1(x) + W_2(y) + W_3(z)$. By writing $S = -E\,t + F_1(x) + F_2(y) + F_3(z)$, we obtain
\[
\left[ \left( \frac{dF_1}{dx} \right)^2 + 2m\,W_1(x) \right]
+ \left[ \left( \frac{dF_2}{dy} \right)^2 + 2m\,W_2(y) \right]
+ \left[ \left( \frac{dF_3}{dz} \right)^2 + 2m\,W_3(z) \right] = 2mE.
\]
Since each of the three terms within the square brackets depends on only one variable, we can choose each term constant,
\[
\left( \frac{dF_1}{dx} \right)^2 + 2m\,W_1(x) = \alpha,
\qquad
\left( \frac{dF_2}{dy} \right)^2 + 2m\,W_2(y) = \beta,
\qquad
\left( \frac{dF_3}{dz} \right)^2 + 2m\,W_3(z) = \gamma,
\]
with $\alpha + \beta + \gamma = 2mE$. Each of the three expressions is integrable, and
\[
S = -E\,t + \int_{x} \sqrt{\alpha - 2m\,W_1(x)}\,dx + \int_{y} \sqrt{\beta - 2m\,W_2(y)}\,dy + \int_{z} \sqrt{\gamma - 2m\,W_3(z)}\,dz.
\]
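The separated solution can be verified symbolically for a concrete separable potential; the harmonic choice of the $W_i$ below is an assumption for the sketch:

```python
import sympy as sp

# Sketch (illustration only): for a separable potential (assumed 3D harmonic
# form), the separated generating function satisfies the Jacobi equation
#   (1/2m)(F1'^2 + F2'^2 + F3'^2) + W = E
# once alpha + beta + gamma = 2 m E.
x, y, z, m, k, alpha, beta, gamma = sp.symbols(
    'x y z m k alpha beta gamma', positive=True)
W1, W2, W3 = k*x**2/2, k*y**2/2, k*z**2/2     # assumed separated W_i
F1p = sp.sqrt(alpha - 2*m*W1)                 # dF1/dx, etc.
F2p = sp.sqrt(beta - 2*m*W2)
F3p = sp.sqrt(gamma - 2*m*W3)
E = (alpha + beta + gamma) / (2*m)
lhs = (F1p**2 + F2p**2 + F3p**2) / (2*m) + (W1 + W2 + W3)
assert sp.simplify(lhs - E) == 0
```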
5.3.2. Case of cylindrical representation

The kinetic energy and the Hamiltonian are
\[
T = \frac{1}{2}\,m \left( \dot{\rho}^2 + \rho^2 \dot{\theta}^2 + \dot{z}^2 \right)
\quad \text{and} \quad
H = \frac{1}{2m} \left( p^2 + \frac{q^2}{\rho^2} + r^2 \right) + W(\rho, \theta, z),
\]
where $p = m\dot{\rho}$, $q = m\rho^2\dot{\theta}$, $r = m\dot{z}$ denote the conjugate variables of $\rho$, $\theta$, $z$, respectively. Taking into account that $t$ is a secondary parameter, the generating function is $S = -E\,t + F(\rho, \theta, z)$ and the Jacobi equation writes
\[
\frac{1}{2m} \left[ \left( \frac{\partial F}{\partial \rho} \right)^2 + \frac{1}{\rho^2} \left( \frac{\partial F}{\partial \theta} \right)^2 + \left( \frac{\partial F}{\partial z} \right)^2 \right] + W(\rho, \theta, z) = E.
\]
If we choose $W$ in the form $W(\rho, \theta, z) = W_1(z) + W_2(\rho, \theta)$, we write $S = -E\,t + F_1(z) + G(\rho, \theta)$ and the Jacobi equation can be separated into two equations,
\[
\left( \frac{dF_1}{dz} \right)^2 + 2m\,W_1(z) = 2m\,\alpha
\]
and
\[
\left( \frac{\partial G}{\partial \rho} \right)^2 + \frac{1}{\rho^2} \left( \frac{\partial G}{\partial \theta} \right)^2 + 2m\,W_2(\rho, \theta) = 2m\,(E - \alpha),
\]
where $\alpha$ is constant. The first equation can be integrated. Multiplying the second equation by $\rho^2$, we obtain
\[
\rho^2 \left( \frac{\partial G}{\partial \rho} \right)^2 + \left( \frac{\partial G}{\partial \theta} \right)^2 + 2m\,\rho^2\,W_2(\rho, \theta) - 2m\,(E - \alpha)\,\rho^2 = 0.
\]
If we write $W_2(\rho, \theta) = h(\rho) + \dfrac{g(\theta)}{\rho^2}$, the second equation can be separated into
\[
\rho^2 \left( \frac{\partial G}{\partial \rho} \right)^2 + 2m\,\rho^2\,h(\rho) - 2m\,(E - \alpha)\,\rho^2 = 2m\,\beta
\]
and
\[
\left( \frac{\partial G}{\partial \theta} \right)^2 + 2m\,g(\theta) = -2m\,\beta,
\]
where $\beta$ is constant. Hence,
\[
W(\rho, \theta, z) = W_1(z) + h(\rho) + \frac{1}{\rho^2}\,g(\theta),
\]
and by separating the three variables $z$, $\theta$, $\rho$, we obtain the generating Jacobi function
\[
S = -E\,t + \sqrt{2m} \left[ \int_{z} \sqrt{\alpha - W_1(z)}\,dz + \int_{\theta} \sqrt{-\bigl(\beta + g(\theta)\bigr)}\,d\theta + \int_{\rho} \sqrt{E - \alpha + \frac{\beta}{\rho^2} - h(\rho)}\,d\rho \right].
\]

5.3.3. Case of spherical representation

The kinetic energy and the Hamiltonian are
\[
T = \frac{1}{2}\,m \left( \dot{r}^2 + r^2 \dot{\theta}^2 + r^2 \dot{\varphi}^2 \sin^2\theta \right)
\quad \text{and} \quad
H = \frac{1}{2m} \left( p_r^2 + \frac{p_\theta^2}{r^2} + \frac{p_\varphi^2}{r^2 \sin^2\theta} \right) + W(r, \theta, \varphi),
\]
where $p_r = m\dot{r}$, $p_\theta = m r^2 \dot{\theta}$, $p_\varphi = m r^2 \dot{\varphi}\,\sin^2\theta$ denote the conjugate variables of $r$, $\theta$, $\varphi$. Taking into account that $t$ is a secondary parameter, the generating function is of the form $S = -E\,t + F(r, \theta, \varphi)$ and the Jacobi equation can be written as
\[
\frac{1}{2m} \left[ r^2 \left( \frac{\partial F}{\partial r} \right)^2 + \left( \frac{\partial F}{\partial \theta} \right)^2 + \frac{1}{\sin^2\theta} \left( \frac{\partial F}{\partial \varphi} \right)^2 \right] + r^2\,W(r, \theta, \varphi) = E\,r^2.
\]
If we choose $W$ such that $r^2\,W(r, \theta, \varphi) = W_1(r) + W_2(\theta, \varphi)$, the generating function can be written as $S = -E\,t + F_1(r) + G(\theta, \varphi)$ and the Jacobi equation can be separated into two equations,
\[
r^2 \left( \frac{dF_1}{dr} \right)^2 + 2m\,W_1(r) - 2m\,E\,r^2 = 2m\,\alpha
\]
and
\[
\sin^2\theta \left( \frac{\partial G}{\partial \theta} \right)^2 + \left( \frac{\partial G}{\partial \varphi} \right)^2 + 2m\,\sin^2\theta\,W_2(\theta, \varphi) = -2m\,\alpha\,\sin^2\theta,
\]
where $\alpha$ is constant. The first equation can be integrated. If we write $W_2(\theta, \varphi) = g(\theta) + \dfrac{1}{\sin^2\theta}\,h(\varphi)$, then $G(\theta, \varphi) = F_2(\theta) + F_3(\varphi)$ with
\[
\left( \frac{dF_2}{d\theta} \right)^2 = 2m \left[ \frac{\beta}{\sin^2\theta} - \alpha - g(\theta) \right]
\quad \text{and} \quad
\left( \frac{dF_3}{d\varphi} \right)^2 = 2m \left[ -\beta - h(\varphi) \right],
\]
and we obtain the generating function which separates the three variables $r$, $\theta$, $\varphi$,
\[
S = -E\,t + \sqrt{2m} \int_{r} \sqrt{\frac{\alpha - W_1(r)}{r^2} + E}\,dr
+ \sqrt{2m} \int_{\theta} \sqrt{\frac{\beta}{\sin^2\theta} - \alpha - g(\theta)}\,d\theta
+ \sqrt{2m} \int_{\varphi} \sqrt{-\beta - h(\varphi)}\,d\varphi.
\]
As shown in section 5.2, we obtain the time law and the trajectories in the three cases. We can also consider more complex cases, such as parabolic or elliptical coordinates¹.

5.4. Case of the Liouville integrability

DEFINITION 5.2.– A system $(S)$, with $n$ independent parameters, has a kinetic energy of the form
\[
T = \frac{1}{2}\,(U_1 + \cdots + U_n) \left( V_1\,\dot{q}_1^2 + \cdots + V_n\,\dot{q}_n^2 \right),
\]
1 Parabolic coordinates (ζ, η, φ): based on the cylindrical representation, we write z = ½(ζ − η) and ρ = √(ζη). We obtain

W(ζ, η, φ) = f(φ)/(ζη) + (g(ζ) + h(η))/(ζ + η).

The surfaces ζ = Cte and η = Cte are paraboloids of revolution around axis Oz.

Elliptical coordinates (ξ, μ, φ): based on the cylindrical representation, we write ρ = ℓ√((ξ² − 1)(1 − μ²)) and z = ℓ ξ μ, where ℓ is called the transformation parameter. We obtain

W(ξ, μ, φ) = f(φ)/(ℓ²(ξ² − 1)(1 − μ²)) + g(ξ)/(ℓ²(ξ² − μ²)) + h(μ)/(ℓ²(ξ² − μ²)).
The surfaces ξ = Cte are a family of ellipsoids. The surfaces μ = Cte are a family of hyperboloids.
and a potential energy of the form

W = (W₁ + ⋯ + W_n)/(U₁ + ⋯ + U_n),

where U_i, V_i and W_i, with i ∈ {1,⋯,n}, depend only on the variable q_i. The kinetic energy and potential energy are explicitly independent of time. Thus, t is a secondary parameter and H = T + W = E. This is a case with separation of variables. The expressions for the conjugate variables p_i are

p_i = (U₁ + ⋯ + U_n) V_i q̇_i with i ∈ {1,⋯,n}.

We obtain

H = T + W = [ ½ (p₁²/V₁ + ⋯ + p_n²/V_n) + W₁ + ⋯ + W_n ] / (U₁ + ⋯ + U_n).

We search for solutions of the Jacobi equation in the form

S = −E t + S₁(q₁) + ⋯ + S_n(q_n).   [5.10]

As t is a secondary parameter, the Jacobi equation becomes

( Σ_{i=1}^{n} [ S_i′²/V_i + W_i ] ) / ( Σ_{i=1}^{n} U_i ) = 2E,

or

Σ_{i=1}^{n} ( S_i′²/V_i + W_i − 2E U_i ) = 0   [5.11]

and equation [5.11] is equivalent to

S_i′²/V_i + W_i − 2E U_i = β_i where i ∈ {1,⋯,n}, with Σ_{i=1}^{n} β_i = 0.

Consequently, S_i′ = √( V_i (β_i + 2E U_i − W_i) ) and, by integration,

S = −E t + Σ_{i=1}^{n} S_i(q_i) with S_i(q_i) = ∫_{q_i0}^{q_i} √( V_i (β_i + 2E U_i − W_i) ) dq_i,

where β_n = −Σ_{i=1}^{n−1} β_i. Then,

t − t₀ = Σ_{i=1}^{n} ∂S_i/∂E

represents the law of time on the trajectory. The n − 1 equations

∂S/∂β_i + α_i = 0 ⟺ ∂S_i/∂β_i − ∂S_n/∂β_n + α_i = 0, where i ∈ {1,⋯,n−1},

represent the equations of the trajectory.

5.5. A specific change of canonical variables

If the Hamiltonian is independent of time, we search for a change of the canonical variables (P, Q, t) → (A, B, t) brought about by a generating function of the form F(Q, A), which does not depend on time t, and such that the new Hamiltonian depends only on A, i.e. K = K(A). Equation [5.4],

Pᵀ dQ − H dt − (Aᵀ dB − K dt) = dS,

can be written as

Pᵀ dQ + Bᵀ dA + (K − H) dt = d(S + Aᵀ B).

Let us write F = S + Aᵀ B; then

Pᵀ dQ + Bᵀ dA + (K − H) dt = dF
and we deduce

Pᵀ = ∂F(Q, A)/∂Q, Bᵀ = ∂F(Q, A)/∂A, K − H = ∂F(Q, A)/∂t.

Since ∂F(Q, A)/∂t = 0, we obtain K(A) = H(P, Q), and the new Hamilton equations are written as

dB/dt = ∂K/∂A = K′(A), dA/dt = −∂K/∂B = 0.

We obtain A = A₀ and Ḃ = K′(A₀) = Ḃ₀, where A₀ and Ḃ₀ are constant vectors. Finally,

B = Ḃ₀ t + B₀ and K(A) = K(A₀) = E,   [5.12]

where B₀ is a constant vector and E is the total energy of the system. The equation that gives the generating function F is

H(∂F/∂Q, Q) = E.   [5.13]

A complete solution of equation [5.13] must take into account not only the parameter E, which we denote by α₁, but also, according to Remark 5.1, the n − 1 non-additive parameters α₂, ⋯, α_n. Thus, A = [α₁, ⋯, α_n]ᵀ and the solution of the partial differential equation [5.13] is of the form F(Q, A). From equations [5.12] and [5.13], we deduce the equations of motion,

Ḃ₀ t + B₀ = ∂F(Q, A₀)/∂A.
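Equation [5.13] can be checked on the one-dimensional harmonic oscillator. A minimal symbolic sketch (assuming sympy is available; the complete integral F is represented only through its derivative ∂F/∂q):

```python
import sympy as sp

q = sp.symbols('q', real=True)
m, k, E = sp.symbols('m k E', positive=True)

# For H = p^2/(2m) + k q^2/2, a complete integral F(q, E) of [5.13]
# satisfies dF/dq = sqrt(2m(E - k q^2/2)); check H(dF/dq, q) = E.
dFdq = sp.sqrt(2*m*(E - k*q**2/2))
H = dFdq**2/(2*m) + k*q**2/2
assert sp.simplify(H - E) == 0
```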
5.6. Multi-periodic systems. Action variables

Let us consider a Liouville dynamical system. We can choose the function F as a sum of functions each depending only on a single parameter q_i, i ∈ {1,⋯,n}:

F(Q, A) = F₁(q₁, A) + ⋯ + F_n(q_n, A).

We assume that the system is multi-periodic:

∀ i ∈ {1,⋯,n}, ∃ T_i ∈ R such that ∀ t ∈ R, q_i(t + T_i) = q_i(t) + 2π,

and we search for a new Hamiltonian in the form K(A). This is a case studied by Delaunay in his work on celestial mechanics (later developments of this work can be found in (Béletski 1986)). Let us write

J_i = ∮_{T_i} p_i dq_i = ∮_{T_i} (∂F_i/∂q_i) dq_i, i ∈ {1,⋯,n},

where ∮_{T_i} denotes the integration over the period T_i associated with the motion of parameter q_i. We write J = [J₁, ⋯, J_n]ᵀ. With J substituting A, we search for B and K(J) such that

Pᵀ dQ + Bᵀ dJ + [K(J) − H(P, Q)] dt = dF.

The new Hamilton equations are

dB/dt = ∂K/∂J ≡ K′(J), dJ/dt = −∂K/∂B = 0,   [5.14]

and the equations of motion are given by

Ḃ₀ t + B₀ = ∂F(Q, J₀)/∂J,

where J = J₀, with B₀ and Ḃ₀ constant vectors.
The vector components J_i, i ∈ {1,⋯,n}, are called action variables because ∮_{T_i} p_i dq_i has the dimension of time × energy, i.e. the dimension of action. The constants β_i, i ∈ {1,⋯,n}, components of B, are called angular variables; they have no physical dimension since F has the dimension of action and, consequently, B = ∂F(Q, J)/∂J is dimensionless.

Over a period T_i, we write Δβ_i = ∮_{T_i} (∂β_i/∂q_i) dq_i. It can be deduced that

Δβ_i = ∮_{T_i} ∂²F/(∂q_i ∂J_i) dq_i = (∂/∂J_i) ∮_{T_i} (∂F/∂q_i) dq_i = (∂/∂J_i) ∮_{T_i} p_i dq_i = ∂J_i/∂J_i = 1.

Because of equation [5.14], dβ_i/dt is constant in motion and

Δβ_i = ∮_{T_i} (dβ_i/dt) dt = (dβ_i/dt) ∮_{T_i} dt = (dβ_i/dt) T_i.

Thus, (dβ_i/dt) T_i = 1 and we denote

ν_i = dβ_i/dt = 1/T_i,

where ν_i represents the frequency relative to parameter q_i, and equation [5.14] implies

ν_i = ∂K/∂J_i.
EXAMPLE 5.2.– Back to the one-dimensional harmonic oscillator. This is the case considered in Example 5.1, where the kinetic and potential energies are

T = ½ m q̇² and W = ½ k q², with k > 0.

The Hamiltonian is

H = (1/2m) (∂F/∂q)² + ½ k q² = E,

where F is the generating function and E is the total energy of the system. Since there is only one parameter, there exists a single action variable denoted by J,

J = ∮_T p dq = ∮_T (∂F/∂q) dq = √(mk) ∮_T √(2E/k − q²) dq.

By writing q = √(2E/k) sin θ, we obtain

J = √(mk) ∫₀^{2π} (2E/k) cos²θ dθ = 2π E √(m/k).

We deduce K = E = (J/2π) √(k/m) and the frequency ν = ∂K/∂J = (1/2π) √(k/m).
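The closed-form action J = 2πE√(m/k) can be checked against a direct numerical quadrature of ∮ p dq. A sketch with illustrative parameter values (numpy and scipy assumed):

```python
import numpy as np
from scipy.integrate import quad

m, k, E = 2.0, 3.0, 1.5            # arbitrary positive values for the check
qmax = np.sqrt(2*E/k)              # turning point of the oscillator

# One full cycle sweeps q from -qmax to qmax and back: J = 2 * integral
p = lambda q: np.sqrt(max(2*m*E - m*k*q**2, 0.0))
J, _ = quad(p, -qmax, qmax)
J *= 2.0

assert np.isclose(J, 2*np.pi*E*np.sqrt(m/k), rtol=1e-6)
# frequency from K(J) = (J/2*pi) sqrt(k/m)
nu = np.sqrt(k/m)/(2*np.pi)
```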
EXAMPLE 5.3.– The three-dimensional harmonic oscillator. Consider a three-dimensional harmonic oscillator with unequal forces along the three orthogonal axes. The kinetic and potential energies of the system are

T = ½ m (q̇₁² + q̇₂² + q̇₃²) and W = ½ (k₁ q₁² + k₂ q₂² + k₃ q₃²),

where k₁, k₂, k₃ are positive constants. We get

H = ½ (p₁²/m + k₁ q₁²) + ½ (p₂²/m + k₂ q₂²) + ½ (p₃²/m + k₃ q₃²)

and the generating function F satisfies

(∂F₁/∂q₁)² + m k₁ q₁² + (∂F₂/∂q₂)² + m k₂ q₂² + (∂F₃/∂q₃)² + m k₃ q₃² = 2m E,

which is a case of separation of variables; we can deduce

(∂F_i/∂q_i)² + m k_i q_i² = 2m α_i, i ∈ {1, 2, 3}, with Σ_{i=1}^{3} α_i = E.

We have

J_i = ∮_{T_i} p_i dq_i = ∮_{T_i} (∂F_i/∂q_i) dq_i = ∮_{T_i} √(2m α_i − m k_i q_i²) dq_i, i ∈ {1, 2, 3}.

Using the same calculus as in Example 5.2, we obtain

J_i = 2π α_i √(m/k_i), i ∈ {1, 2, 3}.

Hence,

J₁ √k₁ + J₂ √k₂ + J₃ √k₃ = 2π √m E = 2π √m K,

and the frequencies of the three oscillations on the three orthogonal axes are

ν_i = ∂K/∂J_i = (1/2π) √(k_i/m), i ∈ {1, 2, 3}.
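The three frequencies follow from differentiating K(J); a short symbolic sketch (sympy assumed) solving K from the relation above and checking ν_i = ∂K/∂J_i:

```python
import sympy as sp

m = sp.symbols('m', positive=True)
k1, k2, k3 = sp.symbols('k1 k2 k3', positive=True)
J1, J2, J3 = sp.symbols('J1 J2 J3', positive=True)

# K(J) solved from  J1*sqrt(k1) + J2*sqrt(k2) + J3*sqrt(k3) = 2*pi*sqrt(m)*K
K = (J1*sp.sqrt(k1) + J2*sp.sqrt(k2) + J3*sp.sqrt(k3))/(2*sp.pi*sp.sqrt(m))

for J, k in [(J1, k1), (J2, k2), (J3, k3)]:
    nu = sp.diff(K, J)
    assert sp.simplify(nu - sp.sqrt(k/m)/(2*sp.pi)) == 0
```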
6 Spaces of Mechanics – Poisson Brackets
6.1. Spaces in analytical mechanics

The calculations used in the previous chapters have led to different equivalent configurations:

a) The n-dimensional configuration space. The space has parameters Q = [q₁, ⋯, q_n]ᵀ, data G(Q, Q̇, t) and unknown Q(t). The motion is described by the Lagrange equations

d/dt (∂G/∂Q̇) − ∂G/∂Q = 0.

b) The (n+1)-dimensional configuration space-time. The space has parameters X = [Q; t], data L(X, X′) and unknown X(τ). The equations of motion are

d/dτ (∂L/∂X′) − ∂L/∂X = 0.

c) The 2n-dimensional phase space. The space has parameters Q = [q₁, ⋯, q_n]ᵀ and P = [p₁, ⋯, p_n]ᵀ, data H(P, Q, t) and unknowns Q(t), P(t). The equations of motion are the Hamilton equations

Q̇ = ∂H/∂P, Ṗ = −∂H/∂Q.

d) The (2n+2)-dimensional homogeneous phase space. The space has parameters Z = [X; Y], where X = [Q; t] and Y = [P; h]. There exists a relation between X and Y (or between h, t, P, Q). Thus, the parameters belong to a hypersurface called the surface of states.

e) The (2n+1)-dimensional state space. The space has parameters Z = Z(τ), where Z satisfies the relation φ(Z) = 0, equivalent to F(X, Y) = 0 (or h + H(P, Q, t) = 0). The equations of motion are

dX/dτ = ∂F/∂Y, dY/dτ = −∂F/∂X.   [6.1]

If we denote φ(Z) ≡ F(X, Y), then ∂φ/∂Z = [∂F/∂X, ∂F/∂Y]. The Hamilton equations in the state space – whose surface equation is F(X, Y) = 0 – can be written in the form of equation [6.1].

Let us write 0_n for the n × n zero matrix and 1_n for the n × n identity matrix. We denote

İ_{2n} = [[0_n, 1_n], [−1_n, 0_n]];

therefore, İ_{2n}ᵀ = −İ_{2n} and İ_{2n}² = −1_{2n}.

The matrix İ_{2n} is called the 2n-dimensional symplectic matrix. System [6.1] is equivalent to

dZ/dτ = İ_{2n+2} (∂φ/∂Z)ᵀ.   [6.2]
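The two algebraic properties of the symplectic matrix, and the fact that İ turns the gradient of H into the Hamiltonian vector field, can be verified numerically. A sketch for n = 2 with an illustrative quadratic Hamiltonian (numpy assumed):

```python
import numpy as np

n = 2
I2n = np.block([[np.zeros((n, n)), np.eye(n)],
                [-np.eye(n),       np.zeros((n, n))]])

# antisymmetry and square equal to minus the identity
assert np.array_equal(I2n.T, -I2n)
assert np.array_equal(I2n @ I2n, -np.eye(2*n))

# For H = (|Q|^2 + |P|^2)/2 the gradient w.r.t. (Q, P) is (Q, P) itself,
# and I2n applied to it gives (P, -Q), i.e. the Hamilton equations.
z = np.array([1.0, -0.5, 0.3, 2.0])       # illustrative point (q1,q2,p1,p2)
zdot = I2n @ z
assert np.allclose(zdot, np.concatenate([z[n:], -z[:n]]))
```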
Matrix İ_{2n+2} corresponds to the symplectic matrix of the (2n+2)-dimensional homogeneous phase space. Let us also note that

[dQ/dt; dP/dt] = İ_{2n} [∂H/∂Q; ∂H/∂P].

NOTATION.– We denote Z₁ = [X₁; Y₁] and Z₂ = [X₂; Y₂]; then

Z₁ᵀ İ_{2n+2} Z₂ = [X₁ᵀ, Y₁ᵀ] [[0_{n+1}, 1_{n+1}], [−1_{n+1}, 0_{n+1}]] [X₂; Y₂].

In the absence of ambiguity, we remove the indices n, n+1, 2n, 2n+2 for the respective phase spaces. We deduce

Z₁ᵀ İ Z₂ = X₁ᵀ Y₂ − Y₁ᵀ X₂ = −Z₂ᵀ İ Z₁.

DEFINITION 6.1.– The term Z₁ᵀ İ Z₂ is called the symplectic scalar product of the vectors Z₁ and Z₂, defined on a vector space of even dimension. The expression Z₁ᵀ İ Z₂ is the symplectic scalar product in the basis of E^{2n+2}. We also write σ(Z₁)(Z₂) = Z₁ᵀ İ Z₂, and for any couple of vectors V₁ and V₂ we get

σ(V₁)(V₂) = −σ(V₂)(V₁).

REMARK 6.1.– Since Zᵀ İ Z = 0 for any vector Z, we deduce

(∂φ/∂Z) İ (∂φ/∂Z)ᵀ = 0, which can be written (∂φ/∂Z)(dZ/dτ) = 0.

The vector (∂φ/∂Z)ᵀ represents grad φ in the E^{2n+2} space. The vector İ grad φ is normal to grad φ and is a tangent vector to the trajectory of the motion in the state space (see Figure 6.1).
Figure 6.1. Trajectory of Z in the space state. The vector I˙ gradϕ is tangent to the trajectory represented in the state space. For a color version of this figure, see www.iste.co.uk/gouin/mechanics.zip
6.2. Dynamical variables – Poisson brackets

A function of the state of system (S) is called a dynamical variable. In the various configurations, a dynamical variable u is a function of the parameters:

a) in the phase space, u = u(P, Q, t);
b) in the homogeneous phase space, u = u(P, Q, h, t) ≡ u(X, Y);
c) in the state space, u = u(Z).

6.2.1. Evolution equation of a dynamical variable

The dynamical variable is assumed to be differentiable.

Case (a): u = u(P, Q, t) ⟹ du/dt = ∂u/∂t + (∂u/∂P)(dP/dt) + (∂u/∂Q)(dQ/dt).

The term du/dt denotes the material derivative, or the derivative along the motion of system (S). We obtain

du/dt = ∂u/∂t − (∂u/∂P)(∂H/∂Q) + (∂u/∂Q)(∂H/∂P).

Case (b): u = u(X, Y) ⟹ du/dτ = (∂u/∂X)(dX/dτ) + (∂u/∂Y)(dY/dτ).

Along the motion of system (S), we deduce

du/dτ = (∂u/∂X)(∂F/∂Y)ᵀ − (∂u/∂Y)(∂F/∂X)ᵀ ⟺ du/dτ = [∂u/∂X, ∂u/∂Y] İ [∂F/∂X, ∂F/∂Y]ᵀ.

DEFINITION 6.2.– The scalar

[u, H]_{(Q,P)} = (∂u/∂Q)(∂H/∂P) − (∂u/∂P)(∂H/∂Q)

is called the Poisson bracket of u and H relative to the variables Q and P. Its expanded form is

[u, H]_{(Q,P)} = Σ_{i=1}^{n} ( ∂u/∂q_i ∂H/∂p_i − ∂u/∂p_i ∂H/∂q_i ).

Consequently, we get

du/dt = ∂u/∂t + [u, H]_{(Q,P)}.

By denoting R = [Q; P], we obtain

[u, H]_{(Q,P)} = (∂u/∂R) İ (∂H/∂R)ᵀ.

Case (c):

du/dτ = (∂u/∂Z) İ (∂φ/∂Z)ᵀ = [u, F]_{(X,Y)},

where, depending on the cases, İ has dimension 2n or 2n+2. We can also write du/dτ = (∂u/∂Z)(dZ/dτ).
REMARK 6.2.– We conclude that q̇_i = [q_i, H]_{(Q,P)} and ṗ_i = [p_i, H]_{(Q,P)}.

6.2.2. First integral

DEFINITION 6.3.– A dynamical variable u is a first integral if and only if u is constant along each trajectory of the motion.

REMARK 6.3.– The first integral u implies du/dt ≡ [u, F]_{(X,Y)} = 0. In the case where ∂u/∂t = 0, u is a first integral if and only if [u, H]_{(Q,P)} = 0.

6.3. Poisson bracket of two dynamical variables

DEFINITION 6.4.– Given two differentiable dynamical variables denoted by u and v,

(X, Y) ∈ R^{2n} → u(X, Y) ∈ R and (X, Y) ∈ R^{2n} → v(X, Y) ∈ R,

the scalar denoted by [u, v]_{(X,Y)} and defined by

[u, v]_{(X,Y)} = (∂u/∂X)(∂v/∂Y)ᵀ − (∂u/∂Y)(∂v/∂X)ᵀ,

where

∂u/∂X = [∂u/∂x₁, ⋯, ∂u/∂x_n], ∂u/∂Y = [∂u/∂y₁, ⋯, ∂u/∂y_n],
∂v/∂X = [∂v/∂x₁, ⋯, ∂v/∂x_n], ∂v/∂Y = [∂v/∂y₁, ⋯, ∂v/∂y_n],

is called the Poisson bracket of u and v relative to X and Y. In expanded form, with X = [x₁, ⋯, x_n]ᵀ and Y = [y₁, ⋯, y_n]ᵀ, it is

[u, v]_{(X,Y)} = Σ_{i=1}^{n} ( ∂u/∂x_i ∂v/∂y_i − ∂u/∂y_i ∂v/∂x_i ).
In the following, the Poisson bracket is denoted by [u, v] without referring to the variables X and Y, where (X, Y) denotes a point in the phase space. The results are identical for Poisson brackets taken relative to the couple (P, Q). The relations obtained are not specific to the study of material systems; in the general mathematical case, we speak of Lie brackets, which constitute a Lie algebra.

6.3.1. Properties of Poisson brackets

THEOREM 6.1.– A Poisson bracket (or a Lie bracket) is an antisymmetric bilinear mapping.

– The Poisson bracket is bilinear. For u₁, u₂, v mappings from R^{2n} to R and for real λ,

[u₁ + u₂, v] = [u₁, v] + [u₂, v] and [λu, v] = λ[u, v].

For u, v₁, v₂ mappings from R^{2n} to R and for real λ,

[u, v₁ + v₂] = [u, v₁] + [u, v₂] and [u, λv] = λ[u, v].

– The Poisson bracket is antisymmetric: [u, v] = −[v, u]. Consequently, [u, u] = 0.

REMARK 6.4.– For all dynamical variables u, v, w, we have

[uv, w] = u[v, w] + v[u, w].

This property immediately results from the properties of the partial derivatives associated with the dynamical variables u, v, w. Furthermore, for ∂/∂α denoting a partial derivative associated with one of the variables defining u and v,

∂/∂α [u, v] = [∂u/∂α, v] + [u, ∂v/∂α].

DEFINITION 6.5.– Derivation. A mapping d from A(R^{2n}, R) to A(R^{2n}, R) is a derivation if and only if it satisfies the three following properties:

d(u + v) = du + dv;
d(uv) = (du) v + u (dv);
u being a constant mapping, du = 0.
THEOREM 6.2.– The Jacobi identity. Given three twice-differentiable mappings u, v, w of A(R^{2n}, R), we have the relation

[[u, v], w] + [[v, w], u] + [[w, u], v] = 0.

PROOF (of the Jacobi identity).–

a) Let us write D_u v = [u, v]. Then, D_u is a derivation. Indeed,

D_u(v + w) = [u, v + w] = [u, v] + [u, w] = D_u v + D_u w,
D_u(vw) = [u, vw] = v[u, w] + w[u, v] = v D_u w + w D_u v,
and if v = a with a constant, then D_u a = [u, a] = 0.

b) Let d denote the derivation D_w; then d[u, v] = [du, v] + [u, dv]. Indeed,

[u, v] = (∂u/∂X)(∂v/∂Y)ᵀ − (∂u/∂Y)(∂v/∂X)ᵀ

and we apply d to [u, v].

c) D_w[u, v] = [w, [u, v]]. Furthermore, according to (b),

D_w[u, v] = [D_w u, v] + [u, D_w v] = [[w, u], v] + [u, [w, v]].

But the Poisson bracket is antisymmetric; then

−[[u, v], w] = [[w, u], v] + [[v, w], u],

or

[[u, v], w] + [[v, w], u] + [[w, u], v] = 0,

which demonstrates the Jacobi identity.
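The identity can also be checked by direct computation on explicit dynamical variables; a sketch in the phase plane R⁴ with illustrative functions u, v, w (sympy assumed):

```python
import sympy as sp

x1, x2, y1, y2 = sp.symbols('x1 x2 y1 y2')
X, Y = [x1, x2], [y1, y2]

def pb(u, v):
    # Poisson bracket relative to (X, Y)
    return sum(sp.diff(u, x)*sp.diff(v, y) - sp.diff(u, y)*sp.diff(v, x)
               for x, y in zip(X, Y))

u = x1**2*y2 + sp.sin(x2)
v = y1*y2 + x1*x2
w = sp.exp(x1)*y1

jacobi = pb(pb(u, v), w) + pb(pb(v, w), u) + pb(pb(w, u), v)
assert sp.expand(jacobi) == 0
```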
DEFINITION 6.6.– Lie algebra. Since Poisson brackets (or Lie brackets) are antisymmetric bilinear mappings satisfying the Jacobi identity, the dynamical variables have a Lie algebra structure.

REMARK 6.5.– The Jacobi relation can be interpreted in the form

D_{[u,v]} w = D_u D_v w − D_v D_u w;

indeed,

[[u, v], w] = [u, [v, w]] − [v, [u, w]].

We deduce that D_u D_v − D_v D_u is a derivation, and the Poisson bracket of two derivations is a derivation.

THEOREM 6.3.– The non-zero Poisson bracket of two first integrals is a first integral.

This is a consequence of the Jacobi identity. Indeed, u being a first integral is equivalent to [u, F] = 0, and v being a first integral is equivalent to [v, F] = 0. We deduce

[[u, v], F] = [u, [v, F]] − [v, [u, F]] = 0.

Thus, [[u, v], F] = 0; if [u, v] ≠ 0, then [u, v] constitutes a first integral.

6.3.2. Application to the Noether theorem

Let {T_α, α ∈ I₁ ⊂ R} be a Lie group whose infinitesimal operator is the vector field Φ(X) defined on the configuration space E^{n+1}. It is assumed that the integral a = ∫_{(C)} Yᵀ dX is invariant by the Lie group. Along the curves (C) where a is extremal, u = Yᵀ Φ(X) is constant and u is a first integral. The differential equation associated with {T_α, α ∈ I₁ ⊂ R} is

dX/dα = Φ(X).

Similarly, let {T_β, β ∈ I₂ ⊂ R} be a Lie group with infinitesimal operator Ψ(X) making the integral a invariant. The differential equation associated with {T_β, β ∈ I₂ ⊂ R} is

dX/dβ = Ψ(X).
Along each curve (C), v = Yᵀ Ψ(X) is a first integral. According to Theorem 6.3, [u, v] is a first integral of the extremal curves of a. Furthermore,

∂u/∂X = Yᵀ Φ′(X), (∂u/∂Y)ᵀ = Φ(X), ∂v/∂X = Yᵀ Ψ′(X), (∂v/∂Y)ᵀ = Ψ(X),

which yields

[u, v] = (∂u/∂X)(∂v/∂Y)ᵀ − (∂u/∂Y)(∂v/∂X)ᵀ = Yᵀ [ Φ′(X) Ψ(X) − Ψ′(X) Φ(X) ],

where Φ′(X) is represented by the Jacobian matrix {∂φ_i/∂X_j} with i, j ∈ {1,⋯,n+1}; the same holds for Ψ′(X). Let us write

Ξ(X) = Φ′(X) Ψ(X) − Ψ′(X) Φ(X).

Thus, [u, v] = Yᵀ Ξ(X) and the vector field Ξ(X) is the infinitesimal operator of the Lie group associated with the differential equation dX/dγ = Ξ(X). This group is denoted {T_γ, γ ∈ I ⊂ R}.

EXAMPLE 6.1.– Let {T_α, α ∈ I₁ ⊂ R} be the group of rotations around axis Oz of the orthonormal frame Oxyz. In Cartesian coordinates,

Φ(X) = [−y, x, 0]ᵀ.

Let {T_β, β ∈ I₂ ⊂ R} be the group of rotations around axis Oy of the orthonormal frame Oxyz. In Cartesian coordinates,

Ψ(X) = [z, 0, −x]ᵀ.

We deduce

Φ′(X) = [[0, −1, 0], [1, 0, 0], [0, 0, 0]] and Ψ′(X) = [[0, 0, 1], [0, 0, 0], [−1, 0, 0]].

Hence,

Ξ(X) = Φ′(X) Ψ(X) − Ψ′(X) Φ(X) = [0, z, −y]ᵀ.

This is the infinitesimal operator associated with the group of rotations around axis Ox. Similarly, if the invariance under rotation around axis Ox and the invariance under rotation around axis Oy correspond to two first integrals, we can deduce a first integral associated with the group of rotations around axis Oz.

Let us note that the Poisson bracket of two first integrals does not always imply a new first integral.

EXAMPLE 6.2.– Let {T_α, α ∈ I₁ ⊂ R} be the group of translations in direction Ox of the orthonormal frame Oxyz. Then, in Cartesian coordinates,

Φ(X) = [1, 0, 0]ᵀ and Φ′(X) = 0.

Let {T_β, β ∈ I₂ ⊂ R} be the group of translations in direction Oy of the orthonormal frame Oxyz. Then, in Cartesian coordinates,

Ψ(X) = [0, 1, 0]ᵀ and Ψ′(X) = 0.

Hence,

Ξ(X) = Φ′(X) Ψ(X) − Ψ′(X) Φ(X) = [0, 0, 0]ᵀ.
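Both computations of Ξ can be checked numerically with finite-difference Jacobians; a sketch (numpy assumed, test point chosen arbitrarily):

```python
import numpy as np

# Infinitesimal generators as functions of X = (x, y, z)
rot_z = lambda X: np.array([-X[1], X[0], 0.0])    # rotation around Oz
rot_y = lambda X: np.array([X[2], 0.0, -X[0]])    # rotation around Oy

def xi(phi, psi, X, eps=1e-6):
    # Xi(X) = Phi'(X) Psi(X) - Psi'(X) Phi(X), Jacobians by central differences
    n = len(X)
    Jphi = np.column_stack([(phi(X + eps*e) - phi(X - eps*e))/(2*eps)
                            for e in np.eye(n)])
    Jpsi = np.column_stack([(psi(X + eps*e) - psi(X - eps*e))/(2*eps)
                            for e in np.eye(n)])
    return Jphi @ psi(X) - Jpsi @ phi(X)

X = np.array([1.0, 2.0, 3.0])
# rotations around Oz and Oy combine into the rotation around Ox: (0, z, -y)
assert np.allclose(xi(rot_z, rot_y, X), [0.0, X[2], -X[1]], atol=1e-6)

# two translations commute: Xi = 0
trans_x = lambda X: np.array([1.0, 0.0, 0.0])
trans_y = lambda X: np.array([0.0, 1.0, 0.0])
assert np.allclose(xi(trans_x, trans_y, X), 0.0, atol=1e-6)
```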
The invariance by translation with respect to Ox and the invariance by translation with respect to Oy do not imply an additional first integral.

6.4. Canonical transformations

In the homogeneous phase space, the equations of motion are in the form [6.2],

dZ/dτ = İ (∂φ/∂Z)ᵀ,

where İ denotes İ_{2n+2} and φ(Z) = 0 denotes the constraint h + H(P, Q, t) = 0. Consider the change of variables

Z ∈ E^{2n+2} → Z̃ = ψ(Z) ∈ E^{2n+2}.

The condition φ(Z) = 0 is written in the new variables as

φ̃(Z̃) ≡ φ(ψ⁻¹(Z̃)) = 0.

The linear mapping ∂Z̃/∂Z = ∂ψ(Z)/∂Z is locally invertible. In the new variables, we can write

dZ̃/dτ = (∂Z̃/∂Z)(dZ/dτ)

and, taking into account [6.2] and ∂φ/∂Z = (∂φ̃/∂Z̃)(∂Z̃/∂Z), we obtain

dZ̃/dτ = (∂Z̃/∂Z) İ (∂Z̃/∂Z)ᵀ (∂φ̃/∂Z̃)ᵀ.

Let us denote

P = (∂Z̃/∂Z) İ (∂Z̃/∂Z)ᵀ.

We call P the Poisson matrix of the change of variables. The change of variables is canonical if and only if

dZ̃/dτ = İ (∂φ̃/∂Z̃)ᵀ.

Hence,

THEOREM 6.4.– The change of variables Z ∈ E^{2n+2} → Z̃ = ψ(Z) ∈ E^{2n+2} is canonical if and only if P = İ.
6.4.1. Calculus of the Poisson matrix

Since Z = [X; Y] and Z̃ = [X̃; Ỹ], we have

∂Z̃/∂Z = [[∂X̃/∂X, ∂X̃/∂Y], [∂Ỹ/∂X, ∂Ỹ/∂Y]]

and

(∂Z̃/∂Z)ᵀ = [[(∂X̃/∂X)ᵀ, (∂Ỹ/∂X)ᵀ], [(∂X̃/∂Y)ᵀ, (∂Ỹ/∂Y)ᵀ]].

We obtain

(∂Z̃/∂Z) İ (∂Z̃/∂Z)ᵀ =
[[ (∂X̃/∂X)(∂X̃/∂Y)ᵀ − (∂X̃/∂Y)(∂X̃/∂X)ᵀ , (∂X̃/∂X)(∂Ỹ/∂Y)ᵀ − (∂X̃/∂Y)(∂Ỹ/∂X)ᵀ ],
[ (∂Ỹ/∂X)(∂X̃/∂Y)ᵀ − (∂Ỹ/∂Y)(∂X̃/∂X)ᵀ , (∂Ỹ/∂X)(∂Ỹ/∂Y)ᵀ − (∂Ỹ/∂Y)(∂Ỹ/∂X)ᵀ ]].

The change of variables is canonical if and only if

(∂X̃/∂X)(∂X̃/∂Y)ᵀ − (∂X̃/∂Y)(∂X̃/∂X)ᵀ = 0,
(∂X̃/∂X)(∂Ỹ/∂Y)ᵀ − (∂X̃/∂Y)(∂Ỹ/∂X)ᵀ = 1,
(∂Ỹ/∂X)(∂Ỹ/∂Y)ᵀ − (∂Ỹ/∂Y)(∂Ỹ/∂X)ᵀ = 0.

Let us note that the second condition is equivalent to

(∂Ỹ/∂X)(∂X̃/∂Y)ᵀ − (∂Ỹ/∂Y)(∂X̃/∂X)ᵀ = −1.

For a calculus in coordinates, we obtain results that depend on the chosen basis,

X = [x₁, ⋯, x_{n+1}]ᵀ, X̃ = [x̃₁, ⋯, x̃_{n+1}]ᵀ, Y = [y₁, ⋯, y_{n+1}]ᵀ, Ỹ = [ỹ₁, ⋯, ỹ_{n+1}]ᵀ.

Then

∂X̃/∂X = {∂x̃_i/∂x_h}, where i is the row index and h the column index;
(∂X̃/∂Y)ᵀ = {∂x̃_j/∂y_h}, where j is the row index and h the column index;
(∂X̃/∂X)(∂X̃/∂Y)ᵀ = {a_ij}, where i is the row index and j the column index, with

a_ij = Σ_{h=1}^{n+1} (∂x̃_i/∂x_h)(∂x̃_j/∂y_h).

Thus,

(∂X̃/∂X)(∂X̃/∂Y)ᵀ − (∂X̃/∂Y)(∂X̃/∂X)ᵀ

has for generic element

Σ_{h=1}^{n+1} ( ∂x̃_i/∂x_h ∂x̃_j/∂y_h − ∂x̃_i/∂y_h ∂x̃_j/∂x_h ),

where i is the row index and j the column index, which is the Poisson bracket [x̃_i, x̃_j]_{(X,Y)}. Thus, for i, j ∈ {1,⋯,n+1}, the first condition reads [x̃_i, x̃_j]_{(X,Y)} = 0. Similarly, for i, j ∈ {1,⋯,n+1}, [ỹ_i, ỹ_j]_{(X,Y)} = 0. Finally, for i, j ∈ {1,⋯,n+1}, [x̃_i, ỹ_j]_{(X,Y)} = δ_ij, where δ_ij = 0 for i ≠ j and δ_ij = 1 for i = j.

COROLLARY 6.1.– In coordinates, the change of variables Z ∈ E^{2n+2} → Z̃ = ψ(Z) ∈ E^{2n+2} is canonical if and only if, for all i, j ∈ {1,⋯,n+1},

[x̃_i, x̃_j]_{(X,Y)} = 0, [ỹ_i, ỹ_j]_{(X,Y)} = 0, [x̃_i, ỹ_j]_{(X,Y)} = δ_ij.
∂x ˜ ∂ y˜ ∂ x ˜ ∂ y˜ α−1 cos βy (βxα cos βy) − = αx ∂x ∂y ∂y ∂x
+ (βxα sin βy) αxα−1 sin βy = 1, or, α β x2α−1 = 1. The condition must be satisfied for all values of x. Hence, 2α − 1 = 0 and α β = 1
⇐⇒
α=
1 and β = 2. 2
128
Introduction to Mathematical Methods of Analytical Mechanics
Finally, x ˜=
√ √ x cos 2y and y˜ = x sin 2y.
R EMARK 6.6.– Let us use the previous results to verify that the change of variables associated with a generating function of Jacobi is canonical. We have written that the ´ extremals of a = (C) Y dX where F (X, Y ) = 0 are identical to the extremals of ´ ˜ where F˜ X, ˜ Y˜ = 0. b = (C) Y˜ dX P ROOF.– For the proof, it is enough that there exists a generating function of Jacobi S such that ˜ = Y dX − Y˜ dX ˜ with F (X, Y ) = 0. dS X, X We deduce ⎞ ⎞ ⎛ ⎛ ˜ ˜ ∂S X, X ∂S X, X ⎠ with F X, ∂S ⎠ and Y˜ = − ⎝ = 0. Y =⎝ ˜ ∂X ∂X ∂X Therefore, ⎛
⎞ ⎞ ˜ ∂S X, X ⎜ ⎠ ⎟ F ⎝X, ⎝ ⎠=0 ∂X ⎛
represents the Jacobi partial differential equation in homogeneous variables. ˜ ∂Z Let us calculate I˙ ∂Z
˜ ∂Z ∂Z
. We have
⎧ ∂ ∂S ∂ ∂S ⎪ ⎪ ˜ dY = dX + dX ⎪ ⎪ ˜ ∂X ⎨ ∂X ∂X ∂X ⎪ ⎪ ∂S ∂ ∂S ∂ ⎪ ⎪ ˜ ⎩ dY˜ = − dX − dX, ˜ ˜ ∂X ˜ ∂X ∂ X ∂X which can be written ⎧ ˜ ⎨ dY = A dX + B dX ⎩ ˜ ˜ dY = −B dX − C dX
with A = A and C = C .
[6.3]
Spaces of Mechanics – Poisson Brackets
129
Indeed,
∂ ∂X
∂S ∂X
=
∂2S ∂xi ∂xj
⎡ ∂S ⎢ ∂x1 ⎢ ∂S .. because =⎢ . ⎢ ∂X ⎣ ∂S ∂xn+1
⎤ ⎥ ⎥ ∂2S ∂2S ⎥ and = . ⎥ ∂xi ∂xj ∂xj ∂xi ⎦
Therefore, A is a symmetric matrix. This is the same for C. Furthermore,
∂ B= ˜ ∂X
∂S ∂X
=
∂2S ∂x ˜i ∂xj
∂2S ∂x ˜i ∂xj
,
= 0, where i is the column index and j is the row index, 2 ∂S ∂ S ∂ , where i is the = the matrix B is invertible. Furthermore, ˜ ∂X ∂ X ∂xi ∂ x ˜j row index and j is the column index, and Because det
∂ B= ˜ ∂X
∂S ∂X
∂ B = ∂X
=⇒
From system [6.3], ⎧ ˜ = −B−1 A dX + B−1 dY ⎨ dX
⎩ ˜ dY = CB−1 A − B dX − CB−1 dY and ⎡ ˜ ∂Z =⎣ ∂Z
−B−1 A −1
CB
B−1
A−B
−CB
−1
⎤ ⎦,
∂S ˜ ∂X
.
130
Introduction to Mathematical Methods of Analytical Mechanics
hence ˜ ∂Z I˙ ∂Z
˜ ∂Z ∂Z
⎡
=⎣
−B−1 A
B−1
−1
CB
−1
⎤ ⎦
A − B −CB ⎤ ⎡
−AB−1 0 1 ⎢ AB−1 C − B ⎥ ⎥ ⎢ × ⎦ ⎣ −1 0 −1 −1 −B C B ⎡ ⎤ 0 ⎢ B−1 B ⎥ ⎥ = I. ˙ =⎢ ⎣ ⎦ −B−1 B
0
Consequently, the change of variables associated with a generating function S is canonical. R EMARK 6.7.– If the change of variables ˜ = ψ (Z) ∈ E 2n+2 Z ∈ E 2n+2 → Z is canonical, the change of variables ˜ ∈ E 2n+2 → Z = ψ −1 Z ˜ ∈ E 2n+2 Z is also canonical. Indeed, ˜ ˜ ∂Z ˙I ∂ Z = I˙ ∂Z ∂Z
⇐⇒
∂Z ˙ I ˜ ∂Z
∂Z ˜ ∂Z
˙ = I.
E XAMPLE 6.4.– Consider the Jacobi transformation S = a x2 cotg x ˜ of two variables x and x ˜. Let us verify that S is associated with a canonical change of variables, i.e. we obtain y and y˜ such that dS = y dx − y˜ d˜ x. We should have ⎧ ∂S ⎪ ⎪ ˜ ⎪ ⎨ y = ∂x = 2ax cotg x ⎪ 2 ⎪ ⎪ ⎩ y˜ = − ∂S = ax ∂x ˜ sin2 x ˜
=⇒
x2 =
⎧ ! ⎪ y˜ ⎪ ⎪ ˜ ⎪ ⎨ x = ε a sin x
y˜ ˜ or sin2 x ! ⎪ a ⎪ ⎪ y˜ ⎪ ⎩ y = ε2a cos x ˜ a
Spaces of Mechanics – Poisson Brackets
131
with ε = ±1. Indeed, [x, y](˜x,˜y)
! y˜ ˜ ∂x ∂y ∂x ∂ y˜ 2a cos x = − = cos x ˜√ √ ∂x ˜ ∂ y˜ ∂ y˜ ∂ x ˜ a a 2 y˜ ! y˜ sin x ˜ −2a − √ √ sin x ˜ = 1. a 2 a y˜
6.5. Remark on the symplectic scalar product The scalar σ (dZ) (δZ) = dZ I˙ δZ is called the symplectic scalar product of vectors dZ and δZ. If we write
dX δX dZ = and δZ = , dY δY we obtain σ (dZ) (δZ) = [dX , dY ] I˙ = [dX , dY ]
δX δY
0 1 −1 0
δX δY
= [−dY , dX ]
δX , δY
or σ (dZ) (δZ) = dX δY − dY δX = δY dX − δX dY . This scalar product is the origin of the symplectic geometry. T HEOREM 6.5.– The symplectic scalar product is invariant under a canonical change of variables. ˜ = ψ (Z) ∈ E 2n+2 being a canonical change of P ROOF.– Term Z ∈ E 2n+2 → Z variables, ˜ δZ ˜ = dZ σ dZ where L =
"
˜ ∂Z ∂Z
˜ ∂Z ∂Z
˜ ∂Z . I˙ ∂Z
# ˜ ∂ Z δZ = dZ L δZ I˙ ∂Z
132
Introduction to Mathematical Methods of Analytical Mechanics
Thus, since I˙ 2 = −1, L
−1
=
˜ ∂Z ∂Z
−1
˙ −1
I
˜ ∂Z ∂Z
−1 =
∂Z ˙ ∂Z −I ˜ ˜ ∂Z ∂Z
∂Z ˙ Since the change of variables is canonical, I ˜ ∂Z ˙ Consequently, L−1 = −I˙ and L = I. ˜ δZ ˜ = dZ I˙ δZ = σ (dZ) (δZ) . σ dZ
∂Z ˜ ∂Z
˙ therefore = I,
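The canonical character of the Jacobi transformation of Example 6.4 can be checked symbolically. A sketch (sympy assumed; the ε = +1 branch is taken, with `ytv` standing for the new variable ỹ):

```python
import sympy as sp

a = sp.symbols('a', positive=True)
x, xt = sp.symbols('x xt', positive=True)

S = a*x**2*sp.cot(xt)              # generating function S(x, x~)
y = sp.diff(S, x)                  # y  =  dS/dx = 2 a x cot x~
yt = -sp.diff(S, xt)               # y~ = -dS/dx~ = a x^2 / sin^2 x~

# invert:  x = sqrt(y~/a) sin x~,  then express y on (x~, y~)
ytv = sp.symbols('ytv', positive=True)
x_of = sp.sqrt(ytv/a)*sp.sin(xt)
y_of = y.subs(x, x_of)

# canonical condition [x, y]_(x~, y~) = 1
bracket = sp.diff(x_of, xt)*sp.diff(y_of, ytv) - sp.diff(x_of, ytv)*sp.diff(y_of, xt)
assert sp.simplify(bracket) == 1
```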
7 Properties of Phase Space
We consider the case where the Hamiltonian does not explicitly depend on time, H = H (Q, P ) . We call such a system the Lagrangian dynamical system. 7.1. Flow of a dynamical system The phase space of a dynamical system was defined as the 2n-dimensional space of coordinates (Q, P ) ≡ (q1 , · · · , qn , p1 , · · · , pn ). In the case where n = 1, the space is a phase plane. The right-hand sides of Hamilton equations define a vector field: each point (Q, P ) of the phase space is associated with a 2n-component vector
∂H ∂H ∂H ∂H − ,··· ,− , ,··· , ∂q1 ∂qn ∂p1 ∂pn
and we assume that solutions of the Hamilton equations ⎧ ∂H ⎪ ˙ ⎪ ⎪ ⎨ Q = ∂P ⎪ ∂H ⎪ ⎪ ⎩ P˙ = − ∂Q are defined over the time axis.
[7.1]
136
Introduction to Mathematical Methods of Analytical Mechanics
D EFINITION 7.1.– The flow associated with system [7.1] of the Hamilton equations is the mapping gt : (Q(0), P (0)) −→ (Q(t), P (t)) , where Q(t) and P (t) are solutions of the Hamilton equations that take the values Q(0) and P (0), respectively, at instant t = 0. We have seen that gt mappings constitute a Lie group. E XAMPLE 7.1.– In the phase plane, the equation of motion is written as x ¨ = f (x).
[7.2]
The kinetic energy and the potential energy are 1 T = x˙ 2 2
ˆ and W = −
x x0
f (ζ) dζ.
We denote y = x˙ the conjugate variable of x. A solution of differential equation [7.2] is a solution of the system
y˙ = f (x) x˙ = y.
[7.3]
The system of motion equations gives a vector field of the phase plane, which is called the phase velocity vector field. Similarly, in the general case of system [7.1], the right-hand side of the equations represents a vector field of R2n also called the phase velocity vector field. A solution of system [7.3] in the form t ∈ R −→ Ψ(t) ∈ R2 is a representation of the motion in the phase space. At time t, the velocity is the phase velocity vector. The image obtained by mapping Ψ is the phase curve defined by parametric representation
x = ψ(t) ˙ y = ψ(t).
Example: Harmonic oscillator.
Properties of Phase Space
137
The motions of a one-dimensional harmonic oscillator are solutions of the adimensional second-order differential equation x ¨ + x = 0,
associated with Hamilton’s equations
x˙ = y y˙ = −x
where x denotes the position of the oscillator and T =
1 2 x˙ , 2
W =
1 2 x 2
with
1 2 1 2 x˙ + x = E 2 2
or x2 + y 2 = 2E.
The trajectories in the phase plane are sets of constant energy E. They are concentric circles centered at the origin of the coordinate system (see Figure 7.1). At point M (x, y) in the phase plane, the components of the phase velocity vector are (−y, x). This vector is perpendicular to the radius vector of the circle. The motion in the phase plane is a uniform rotation around O where any set with constant energy represents a phase curve in the form
x = r0 cos (t0 − t) y = r0 sin (t0 − t) .
Figure 7.1. Circle representing a trajectory of a harmonic oscillator in a phase plane. Phase velocity vectors are tangent to the trajectory. For a color version of the figures in this chapter, see www.iste.co.uk/gouin/mechanics.zip
138
Introduction to Mathematical Methods of Analytical Mechanics
7.2. The Liouville theorem 7.2.1. Preliminary L EMMA 7.1.– Given a matrix A = {aij } , i, j ∈ {1, · · · , n} and the mapping t ∈ R −→ det (1 + A t) ∈ R, where 1 is the identity of Rn ; then, det (1 + A t) = 1 + Tr (A) t + t ε(t) where lim ε(t) = 0 t→0
and Tr (A) =
n
aii denotes the trace of A.
i=1
Indeed, det (1 + A t) is an n-degree polynomial of t at the most, whose constant n term is 1 and where the coefficient of t is aii . i=1
Consequences Consider the first-order differential equation ¨ = f (x) x
[7.4]
where x = (x1 , · · · , xn ) is an n-component term of Rn . Equation [7.4] is called autonomous; f = (f1 , · · · , fn ) denotes the n functions from Rn to R assumed to be of class C 2 . Equation [7.4] is equivalent to the system ⎧ dx1 ⎪ ⎪ = f1 (x1 , · · · , xn ) ⎪ ⎪ ⎨ dt .. . ⎪ ⎪ ⎪ dx n ⎪ ⎩ = fn (x1 , · · · , xn ) dt Then, x ∈ Rn −→ f (x) ∈ Rn is the infinitesimal generator of the Lie group {Gt , t ∈ R}, where t is the parameter of the differential equation [7.4]. The Lie group makes it possible to define the trajectory of [7.4]. Indeed, thanks to the definition of the infinitesimal generator, Gt (x) − x = f (x), t→0 t lim
Properties of Phase Space
139
or Gt (x) = x + t f (x) + t α(x, t) with for given x, lim α(x, t) = 0. t→0
As the mapping f is assumed to be twice-continuously differentiable, theorems relative to systems of differential equations enable us to write Gt (x) = x + t f (x) + t2 ε(x, t)
where lim ε(x, t) = 0, t→0
and for given t and for any compact Δ of Rn , ε(x, t) and vector and linear mapping, respectively.
∂ε(x, t) are the bounded ∂x
For any value of t, the associated group Gt defines the transformation of the space of parameters. x = (x1 , · · · , xn ) −→ Gt (x). Consider a measurable domain D0 in the space of parameters. Its image by the mapping Gt is denoted by Dt ≡ Gt (D0 ). Let V0 be the volume of D0 and Vt be the volume of Dt , ˆ ˆ V0 = dx and Vt = dx. D0
Dt
The determinant defines the measure dx of the volume. Variable changes in multiple integrals imply
ˆ
∂Gt (x)
Vt =
dx.
det ∂x D0 Assuming that x belongs to a compact of Rn , ∂Gt (x) ∂ε(x) ∂f (x) =1+t + t2 (x, t) ∂x ∂x ∂x and from Lemma 7.1, we deduce ∂f (x) ∂Gt (x) = 1 + t Tr + t2 ε1 (x, t) where det ∂x ∂x For an infinitesimal number t, we get det
∂Gt (x) ∂x
>0
lim ε1 (x, t) = 0.
t→0
140
Introduction to Mathematical Methods of Analytical Mechanics
and

Vt = ∫_{D0} [1 + t Tr(∂f(x)/∂x) + t² ε1(x, t)] dx.

The volume Vt is differentiable at t = 0, and its derivative is

V′(0) = ∫_{D0} Tr(∂f(x)/∂x) dx.

Similarly, we have

V′(t)|_{t=t0} = ∫_{D_{t0}} Tr(∂f(x)/∂x) dx.
Indeed, the group {Gt, t ∈ R} is additive, and we can replace 0 by t − t0 with Gt−t0 ∘ Gt0 = Gt. Furthermore, ∂f(x)/∂x is the n × n Jacobian matrix

∂f(x)/∂x = [ ∂f1/∂x1 · · · ∂f1/∂xn ; ⋮ ; ∂fn/∂x1 · · · ∂fn/∂xn ],

with entries ∂fi(x1, · · · , xn)/∂xj,
and the divergence of the vector field x ∈ R^n → f(x) ∈ R^n is

Tr(∂f(x)/∂x) = Σ_{i=1}^n ∂fi(x1, · · · , xn)/∂xi.

Consequently,

Tr(∂f(x)/∂x) = div f(x) and V′(t)|_{t=t0} = ∫_{D_{t0}} div f(x) dx.
THEOREM 7.1.– The transformation Gt conserves volume in the phase space if and only if div f(x) = 0.

In a two-dimensional space with div f(x) = 0, we can schematize the deformation from D0 to Dt by the domain occupied by Arnold's cat. The volume V0 of the cat at instant t0 and the volume Vt at instant t are equal (see Figure 7.2).
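Theorem 7.1 can be tested numerically. In the sketch below (an added illustration; the field, step size and domain are arbitrary choices), a small disk is transported by the divergence-free pendulum field ẋ = y, ẏ = −sin x, each vertex being advanced with a Runge–Kutta step, and the areas are compared with the shoelace formula.

```python
import math

def pendulum_rhs(x, y):
    # divergence-free vector field: xdot = y, ydot = -sin x
    return y, -math.sin(x)

def rk4_step(x, y, dt):
    k1 = pendulum_rhs(x, y)
    k2 = pendulum_rhs(x + dt/2*k1[0], y + dt/2*k1[1])
    k3 = pendulum_rhs(x + dt/2*k2[0], y + dt/2*k2[1])
    k4 = pendulum_rhs(x + dt*k3[0], y + dt*k3[1])
    return (x + dt/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
            y + dt/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))

def shoelace(pts):
    # unsigned area of a polygon
    n = len(pts)
    return 0.5 * abs(sum(pts[i][0]*pts[(i+1) % n][1] - pts[(i+1) % n][0]*pts[i][1]
                         for i in range(n)))

# D0: a small disk (playing the role of the cat), discretized as a polygon
N = 400
d0 = [(1.0 + 0.2*math.cos(2*math.pi*k/N), 0.2*math.sin(2*math.pi*k/N))
      for k in range(N)]

# transport every vertex with the flow G_t up to t = 2.0
dt, steps = 0.01, 200
dT = d0
for _ in range(steps):
    dT = [rk4_step(x, y, dt) for (x, y) in dT]

print(shoelace(d0), shoelace(dT))  # the areas agree although the domain deforms
```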
Figure 7.2. Deformation of Arnold’s cat in the phase plane using the mapping Gt , where D0 denotes the initial position and Dt denotes the position at the instant t
THEOREM 7.2.– The Liouville theorem. The flow Gt with div f(x) = 0 conserves volume in the phase space: we can write vol[Gt(D0)] = vol D0.

7.2.2. Application to mechanical systems

The Hamiltonian is independent of time, and the space of the 2n parameters constitutes the phase space. The equations of motion are given by the Hamilton equations [7.1]. The infinitesimal operator of the differential system is

f(Q, P) = [ ∂H/∂P ; −∂H/∂Q ].

Assuming that H is of class C²,

div f(Q, P) = Σ_{i=1}^n ( ∂/∂qi (∂H/∂pi) − ∂/∂pi (∂H/∂qi) ) = 0.
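This cancellation can be verified with central finite differences; the Hamiltonian H(q, p) = p²/2 − cos q used below is an illustrative choice, not one prescribed by the text.

```python
import math

def H(q, p):
    # illustrative pendulum-type Hamiltonian
    return p*p/2 - math.cos(q)

h = 1e-4  # finite-difference step

def f(q, p):
    # Hamiltonian vector field (dH/dp, -dH/dq), by central differences
    dH_dp = (H(q, p + h) - H(q, p - h)) / (2*h)
    dH_dq = (H(q + h, p) - H(q - h, p)) / (2*h)
    return dH_dp, -dH_dq

def div_f(q, p):
    # divergence d(f1)/dq + d(f2)/dp, again by central differences
    dfq = (f(q + h, p)[0] - f(q - h, p)[0]) / (2*h)
    dfp = (f(q, p + h)[1] - f(q, p - h)[1]) / (2*h)
    return dfq + dfp

for q, p in [(0.3, 0.7), (1.2, -0.4), (-2.0, 1.5)]:
    print(div_f(q, p))  # ~0 up to round-off, at every phase point
```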
R EMARK 7.1.– In the case where H depends on time, there is no longer any conservation of volume in the phase space. Any variation of volume can be interpreted as an energy dissipation. E XAMPLE 7.2.– Examples of Liouville’s theorem application. Example 7.2a: For the oscillations of a harmonic oscillator, we have
f(x, y) = (y, −x)* and div f(x, y) = 0.
The mapping Gt is a rigid displacement (a rotation) of the phase plane, which conserves the volume. The domain Dt is represented in Figure 7.3.
Figure 7.3. Rotation from D0 to Dt . The shape of domain Dt is unchanged by transformation Gt
Example 7.2b: The equation of the adimensional simple pendulum is ẍ + sin x = 0, which implies the energy equation

½ ẋ² = cos x + E.

Let us write y² = 2 cos x + 2E, where y = ẋ. The curves take the form represented in Figure 7.4. The bubble is transformed into a lunula of the same area.
Figure 7.4. Deformation from D0 to Dt by the mapping Gt . The domain Dt is strongly deformed by deformation Gt , but its area remains constant
Example 7.2c: The equation of the adimensional inverted pendulum is

ẍ = x ⟺ { ẋ = y ; ẏ = x }.

We deduce y² − x² = 2E. The trajectories are equilateral hyperbolas

x = λ e^t + μ e^{−t}, y = λ e^t − μ e^{−t}.    [7.5]

For t = 0, the initial conditions are x0 = λ + μ and y0 = λ − μ, and finally,

x = ((x0 + y0)/2) e^t + ((x0 − y0)/2) e^{−t}, y = ((x0 + y0)/2) e^t − ((x0 − y0)/2) e^{−t}.

We can determine the image of the domain x² + (y − 1)² ≤ 1 and verify that its area is conserved by flow [7.5].

Example 7.2d: Consider the differential system ẋ = x, ẏ = −y. The system verifies div V = 0, where V = (x, −y)*. We obtain the equilateral hyperbolas

x = x0 e^t, y = y0 e^{−t};    [7.6]

consequently, x y = x0 y0.
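For Example 7.2d, both the invariant x y = x0 y0 and the conservation of area under flow [7.6] can be confirmed directly:

```python
import math

# flow [7.6]: x = x0*exp(t), y = y0*exp(-t)
def G(t, x0, y0):
    return x0 * math.exp(t), y0 * math.exp(-t)

x0, y0, t = 0.8, 0.3, 1.7
x, y = G(t, x0, y0)
print(x * y, x0 * y0)        # the product x*y is constant along trajectories

# image of the unit disk x0^2 + y0^2 <= 1: an ellipse with semi-axes
# a = e^t and b = e^-t, of area pi*a*b = pi, the area of the disk
a, b = math.exp(t), math.exp(-t)
print(math.pi * a * b, math.pi)
```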
In the axes Ox and Oy, the images of the disk x0² + y0² ≤ 1 by [7.6] are the elliptical domains of equation x² e^{−2t} + y² e^{2t} ≤ 1. These domains have the same area as x0² + y0² ≤ 1.

THEOREM 7.3.– In the phase space, a canonical change of variables keeps volumes unchanged.

We have seen that a canonical change of variables

Z ∈ E^{2n+2} → Z̃ = ψ(Z) ∈ E^{2n+2}

satisfies the property

(∂Z̃/∂Z)* İ (∂Z̃/∂Z) = İ.

We deduce

det(∂Z̃/∂Z)* det İ det(∂Z̃/∂Z) = det İ ⟹ (det(∂Z̃/∂Z))² = 1 ⟹ |det(∂Z̃/∂Z)| = 1,

and the volume is conserved. This property shows that in the mechanical phase space, the concept of volume is intrinsic. This result is the starting point of statistical mechanics.

7.3. The Poincaré recurrence theorem

7.3.1. The recurrence theorem

THEOREM 7.4.– Consider an affine Euclidean space E^n. Given a bijective, volume-preserving mapping g of E^n that conserves a closed bounded domain D = g(D) ⊂ E^n, and an open set U ⊂ D with non-zero finite volume, there exists a point x ∈ U and a positive integer p such that g^p(x) ∈ U.

In the theorem, it is enough to write g(D) ⊂ D. The point x returns to the domain U after the transformation g has been iterated p times.

PROOF.– Sets U, g(U), · · · , g^k(U), · · · denote the successive transformations of U by g. If we assume that, for all i, j ∈ N with i ≠ j, g^i(U) ∩ g^j(U) = ∅, then vol(D) is infinite, because vol(g^k(U)) = vol(U) and, for any k, vol(D) ≥ k vol(U). Thus, there exist positive integers k and l (k > l) such that g^k(U) ∩ g^l(U) ≠ ∅. Since the mapping g is bijective, g^{−l}(g^k(U) ∩ g^l(U)) = g^{k−l}(U) ∩ U ≠ ∅. Let us write y = g^{k−l}(x), an element of g^{k−l}(U) ∩ U with x ∈ U, which proves the theorem with p = k − l.
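Theorem 7.4 can be illustrated numerically with a rotation of the circle (the setting of Example 7.3 below); the angle √2, incommensurable with 2π, is an arbitrary choice.

```python
import math

alpha = math.sqrt(2)  # rotation angle, incommensurable with 2*pi

def circle_dist(a, b):
    # distance between two points measured along the circle arc
    d = abs(a - b) % (2 * math.pi)
    return min(d, 2 * math.pi - d)

x0 = 0.0
eps = 1e-3
x, n_return = x0, None
for n in range(1, 10**6):
    x = (x + alpha) % (2 * math.pi)
    if circle_dist(x, x0) < eps:
        n_return = n        # some iterate g^n(x0) returns within eps of x0
        break
print(n_return, circle_dist(x, x0))
```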
EXAMPLE 7.3.– Let D be a circle of center O and radius r in the Euclidean plane and g be the rotation of center O and angle α. If α = 2mπ/n, where m and n are two positive integers, then g^n is the identity and the Poincaré theorem is verified. If α is incommensurable with π, the Poincaré theorem implies that for any open arc of length ε of the circle, there exist x0 and n0 such that d(g^{n0}(x0), x0) < ε, where d measures the distance between two points along the circle arc. The result does not depend on x0. Let us denote by Rθ0 the rotation such that Rθ0 = g^{n0}; then d(Rθ0(x0), x0) < ε. Any point x of the circle can be written x = Rθ(x0); consequently, d(Rθ Rθ0(x0), Rθ(x0)) < ε. Hence, d(Rθ0(x), x) < ε, i.e. d(g^{n0}(x), x) < ε. Here, g^{n0}(x) ≠ x since α is incommensurable with π. Consequently, for any point x of D, the set of points {g^n(x), n ∈ N} is dense on D.

7.3.2. Case of mechanics

Given the differential equation

ẋ = f(x), where x ∈ R^n,
[7.7]
where f is continuously differentiable, we consider a closed bounded domain D in the phase space. The one-parameter group associated with [7.7] is {Gt, t ∈ R}. At time t, the mapping Gt is uniformly Lipschitzian, with a first-order derivative ∂Gt/∂x of elements {aij(x, t) ∈ R, i, j ∈ {1, · · · , n}}. Then,

‖Gt(x′) − Gt(x)‖ ≤ M ‖x′ − x‖,    [7.8]
with M = sup |aij(x, t)|, where ‖ · ‖ represents the Euclidean norm and x and x′ belong to D. Theorem 7.4 can be rewritten as follows.

THEOREM 7.5.– The Poincaré recurrence theorem. For equation [7.7] representing a mechanical system whose phase variables remain in a closed bounded domain D of R^n, each point of the phase space returns to a neighborhood of its initial position.

The meaning of the theorem: the initial instant is t = 0. Given x ∈ D and t0 > 0, for any ball O centered at x with strictly positive radius ε, there exists t1 > t0 such that Gt1(x) ∈ O. The proof is as follows: for any x′ ∈ D,

‖Gt1(x) − x‖ ≤ ‖Gt1(x) − Gt1(x′)‖ + ‖Gt1(x′) − x′‖ + ‖x′ − x‖.
Given x and the ball O1 of center x and radius r = inf(ε/4, ε/(2(M + 1))), there exist t1 (corresponding to t0 increased by the integer p given by Theorem 7.4) and x′ ∈ O1 such that Gt1(x′) ∈ O1. Then,

‖Gt1(x′) − x′‖ ≤ ε/2 and (M + 1) ‖x − x′‖ ≤ ε/2 ⟹ ‖Gt1(x) − x‖ ≤ ε.
We similarly demonstrate that any point in the phase space returns an infinite number of times to the neighborhood of its initial position. This theorem is one of the few general conclusions for motions given by an equation of the form

ẍ = − ∂W/∂x
when condition [7.8] is verified and the solutions belong to a bounded domain.

EXAMPLE 7.4.– Consider a point M with mass m whose initial position M(0) is taken inside an axisymmetric half-bowl of axis Oz, the ascending vertical. Its position at the instant t is M(t). We have

T + W = E ⟺ ½ m(ẋ² + ẏ² + ż²) + mgz = E.
In the conjugate variables p, q, r,

(1/2m)(p² + q² + r²) + mgz = E.

For z ≥ 0, the triplet (p, q, r) is bounded. On the regular surface f(x, y, z) = 0 of the half-bowl, x, y and z are bounded. The material point (experimentally, a small marble rolling without friction inside the half-bowl) comes back infinitely often, as closely as we wish, to both its initial position and its initial velocity (see Figure 7.5).

REMARK 7.2.– A paradoxical consequence of the Poincaré recurrence theorem: consider a closed receptacle made up of two chambers separated by a partition wall. The first chamber is filled with gas; the second chamber is empty. The gas molecules are subject to laws of forces deriving from a potential. We open the partition wall separating the two chambers. After a time T, the gas molecules, which have spread across both chambers, will again congregate in the first chamber. The paradox is resolved by the fact that the time T taken for all the gas molecules to return to the first chamber is greater than the existence time of the universe.
Figure 7.5. Point M subject to gravity rolls without friction inside an axisymmetric half-bowl
8 Oscillations and Small Motions of Mechanical Systems
8.1. Preliminary remarks

The Cauchy problem for differential equations does not always lead to easy questions. This fact can be demonstrated by two simple examples.

EXAMPLE 8.1.– The differential equation y′ − y = 0, associated with the function x ∈ R → y(x) ∈ R, has the general solution y = λ e^x with λ ∈ R. Let us choose y(0) = 0 as the initial condition at x = 0; thus, λ = 0, and the solution is y(x) = 0. Let us now choose as initial condition y(0) = ε at x = 0 with 0 < ε ≪ 1; then, the solution is y(x) = ε e^x. Although the initial conditions differ very slightly, the distance between the two solutions goes to infinity when x goes to infinity.
EXAMPLE 8.2.– The differential equation

y″ + ω² y = 0 with ω ∈ R+∗,

associated with the function x ∈ R → y(x) ∈ R, has the general solution y = a cos(ωx) + b sin(ωx) with a ∈ R and b ∈ R. Let us choose y(0) = 0 and y′(0) = 0 as initial conditions at x = 0; the solution is y(x) = 0. Let us now choose y(0) = ε, where 0 < ε ≪ 1, and y′(0) = 0 as initial conditions at x = 0; then, the solution is y(x) = ε cos(ωx). The distance between the two solutions is of ε-order.

Now, let us disturb the value of ω into (1 + ε)ω, with 0 < ε ≪ 1. Let us choose y(0) = 1 and y′(0) = 0 as initial conditions at x = 0; the solutions are y(x) = cos(ωx) and y(x) = cos((1 + ε)ωx), respectively. Note that cos((1 + ε)ωx) − cos(ωx) is not of ε-order. To prove this property, let us choose ωx = kπ/2, where k ∈ 2N + 1 is the odd integer nearest to 1/ε. Then cos(ωx) = 0, while (1 + ε)ωx = kπ/2 + εkπ/2 is close to (k + 1)π/2, a multiple of π; we obtain

|cos((1 + ε)ωx) − cos(ωx)| > 1/2.

The linear differential equations are the easiest to solve, so the theory of linear oscillations is an important part of mechanics. However, in many problems the differential equations are not linear, but their linearization often provides a satisfactory approximate solution. While this is not true for all problems, the study of linearized equations is an important step in solving non-linearized problems. It would be informative to consult the examples given in Arnold's book, "Ordinary Differential Equations" (Arnold 1992).
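The choice of k in Example 8.2 can be verified numerically (the values ω = 1 and ε = 0.01 below are arbitrary):

```python
import math

# With k the odd integer nearest to 1/eps and x = k*pi/(2*omega),
# the perturbed and unperturbed solutions differ by much more than eps.
omega, eps = 1.0, 0.01
k = round(1 / eps)
if k % 2 == 0:
    k += 1                      # take the nearest odd integer
x = k * math.pi / (2 * omega)
diff = abs(math.cos((1 + eps) * omega * x) - math.cos(omega * x))
print(k, diff)                  # diff is close to 1, far larger than 1/2
```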
8.2. The Weierstrass discussion

8.2.1. Introduction

In many problems, the study of the motion of material systems leads to the differential equation

q̈ = g(q),
[8.1]
where q is a real function of the real variable t, and g is a real function of the variable q which depends on the integration constants given by linear first integrals. In mechanics, this is a consequence of the so-called regular integration case. The discussion of the differential equation [8.1] is called the Weierstrass discussion. The discussion ends with the concept of the stability of equilibrium positions.

Equation [8.1] implies q̈ q̇ = g(q) q̇. The function g is assumed continuous; we obtain

q̇² = 2 ∫_{q0}^{q} g(u) du + c,    [8.2]
where the constant c = q̇0² is obtained at the initial instant t = t0 corresponding to the position q = q0. We deduce the fundamental equation

q̇² = f(q)    [8.3]

where f(q) = 2 ∫_{q0}^{q} g(u) du + c. The C¹-function f generally depends on the initial conditions and on the total energy yielded by the Painlevé theorem.

REMARK 8.1.– Equation [8.3] is equivalent to
q̇ (q̈ − g(q)) = 0 ⟺ q̇ = 0 or q̈ = ½ f′(q).
Equation [8.3] introduces the solutions

q̇ = 0 ⟺ q(t) = q0,
where q0 is a constant. In order for the function q(t) = q0 to be a solution of [8.1], it is necessary and sufficient that f′(q0) = 0. Simultaneously, we must have

f(q0) = 0 and f′(q0) = 0.    [8.4]

The motion q(t) = q0, i.e. the equilibrium position for the parameter q0, must satisfy system [8.4]. The equilibrium positions of the parameter q0 correspond to the multiple roots of f(q) = 0. Nevertheless, in order for q0 to be an equilibrium position, it is necessary and sufficient that f′(q0) = 0; the additional relation f(q0) = 0 gives the value of the integration constant c of equation [8.2]. Let us also remark that if p = q̇, we obtain the differential system

q̇ = p, ṗ = ½ f′(q),

whose equilibrium positions correspond to p = 0 and f′(q) = 0; equivalently, f(q) = 0 and f′(q) = 0.

8.2.2. Discussion of fundamental equation [8.3]

The variations of q only occur in intervals where f(q) ≥ 0. Then, equation [8.3] becomes

dt = ε dq/√f(q), where ε = ±1.
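The criterion [8.4] can be checked on the pendulum of Example 7.2b, for which equation [8.3] reads q̇² = 2 cos q + 2E; the energy values below are illustrative choices.

```python
import math

# f(q) = 2*cos(q) + 2*E for the adimensional pendulum
def f(q, E):
    return 2 * math.cos(q) + 2 * E

def fp(q):
    # f'(q), independent of E
    return -2 * math.sin(q)

# E = 1 (separatrix energy): q = pi is a double root -> equilibrium position
print(f(math.pi, 1.0), fp(math.pi))        # both ~0: system [8.4] holds

# E = 0: q = pi/2 is a simple root -> a turning point, not an equilibrium
print(f(math.pi/2, 0.0), fp(math.pi/2))    # f ~ 0 but f' = -2
```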
The choice of ε depends on the direction of motion at the initial instant t0. Assume that at instant t0, q̇0 > 0; then q̇0 = √f(q0), where q(t0) = q0 yields the value of c in equation [8.2]. The motion is given by

t − t0 = ∫_{q0}^{q} du/√f(u).    [8.5]
The relation remains valid for all values of q greater than q0 as long as q does not reach a zero of the equation f(u) = 0.
So, two cases are considered.

First case: for any q greater than q0, the function f is strictly positive. The motion follows time law [8.5]. Then, along the motion, t is a strictly increasing function of q; conversely, q is a strictly increasing function of t. The position "q = +∞" is reached in a finite or an infinite time depending on the nature of the integral ∫_{q0}^{+∞} du/√f(u).

– If the integral is divergent, the position "q = +∞" is reached in an infinite time and the study is ended. The motion is the strictly increasing mapping t ∈ [t0, +∞[ → q(t) ∈ [q0, +∞[.

– If the integral is convergent, the position "q = +∞" is reached in a finite time t1 such that

t1 − t0 = ∫_{q0}^{+∞} du/√f(u).
It is necessary to revisit the physical definition of the problem.

Second case: There exists q1, the first root of the equation f(u) = 0 greater than q0. Therefore, ∀ q ∈ [q0, q1[, f(q) > 0. The nature of the integral ∫_{q0}^{q1} du/√f(u) specifies whether the position q = q1 is reached in a finite or an infinite time.

Case (a): q1 is a multiple root of the equation f(u) = 0. The integral is divergent; the position q = q1 is reached in an infinite time and the problem is ended. Let us note that the position q = q1 corresponds to the definition of an equilibrium position of equation [8.3].

Case (b): q1 is a simple root of the equation f(u) = 0. The integral is convergent; the position q = q1 is reached at time

t1 = t0 + ∫_{q0}^{q1} du/√f(u).
However, the position q = q1 is not a multiple root of the equation f(u) = 0 and consequently is not an equilibrium position of equation [8.3]. Since f is a continuous function, f(q) < 0 in a neighborhood of q1 with q > q1. Consequently, for t > t1, the motion is represented by the equation

q̇ = −√f(q),
which implies

t = t1 − ∫_{q1}^{q} du/√f(u), where q < q1.

For the position q = q1, q̇1 = 0 and q̈1 ≠ 0; from the variation of q(t) for t > t1, q̈1 = ½ f′(q1) < 0. The study of the motion is obtained in the same way as in the previous cases; it depends on the possible root for q < q1 of the equation f(q) = 0. The parameter q(t) takes the same positions as at times smaller than t1, but with velocities of opposite sign. The position q, where q < q1, is reached at two instants t1 − τ and t1 + τ that are symmetric with respect to t1, with

τ = ∫_{q}^{q1} du/√f(u).
The position q(t) is an even function of t − t1. All cases may occur again for t > t1:

– Case (c): Uninterrupted motion until "q = −∞".

– Case (d): Motion goes down to the root q2 of the equation f(u) = 0 (where q2 < q0). When q2 is a multiple root of the equation f(u) = 0, the position q = q2 is reached in an infinite time. When q2 is a simple root of the equation f(u) = 0, the position q = q2 is reached at the instant t2 such that

t2 = t1 − ∫_{q1}^{q2} du/√f(u) = t1 + ∫_{q2}^{q1} du/√f(u).

This case corresponds to q0 located between two simple roots of the equation f(u) = 0. The motion is periodic. The period of an oscillation is the time required to return to the position q0 with the initial velocity q̇0. The period T is

T = ∫_{q0}^{q1} du/√f(u) − ∫_{q1}^{q2} du/√f(u) + ∫_{q2}^{q0} du/√f(u) = 2 ∫_{q2}^{q1} du/√f(u).
The period T does not depend on q0 . The case q˙0 = 0 does not present difficulty. It is enough to consider the sign of f (q0 ) to deduce the direction of the variation of q(t).
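The period formula can be evaluated by quadrature. The sketch below (added here as an illustration) applies it to the adimensional pendulum, for which f(u) = 2 cos u − 2 cos a with simple roots ∓a; the substitution u = a sin θ removes the integrable endpoint singularity.

```python
import math

# T = 2 * integral_{-a}^{+a} du / sqrt(f(u)) with f(u) = 2*cos(u) - 2*cos(a)
def period(a, n=20000):
    # midpoint rule after the substitution u = a*sin(theta),
    # which makes the integrand smooth at the turning points
    total = 0.0
    for i in range(n):
        th = -math.pi/2 + (i + 0.5) * math.pi / n
        u = a * math.sin(th)
        total += a * math.cos(th) / math.sqrt(2*math.cos(u) - 2*math.cos(a))
    return 2 * total * math.pi / n

print(period(0.1))   # close to the small-oscillation period 2*pi
print(period(2.0))   # a larger amplitude gives a longer period
```

For small amplitude a, the quadrature reproduces the classical correction T ≈ 2π(1 + a²/16).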
8.2.3. Graphical interpretation

We represent the analysis using graphs.

a) f(q) = 0 has no root (for all q, f(q) > 0). The asymptotic directions of q(t) are obtained as the limit of √f(q) when q goes to +∞ or −∞ (see Figure 8.1).

b) f(q) = 0 has a simple root q1 (see Figure 8.1).

c) f(q) = 0 has a multiple root q1 (see Figure 8.2).

d) f(q) = 0 has two simple roots q1 and q2. The motion is oscillatory of period T = 2 ∫_{q2}^{q1} du/√f(u) (see Figure 8.2).

e) f(q) = 0 has a simple root q1 and a multiple root q2 (see Figure 8.3).

f) f(q) = 0 has two multiple roots q1 and q2 (see Figure 8.3).
Figure 8.1. Weierstrass discussion for cases a and b: The curves plotted on the left-hand side represent the function q → f(q) for q belonging to a domain where f(q) > 0. The curves plotted on the right-hand side correspond to the motions associated with the initial condition q(t0) = q0 and with the values q̇0 = ±√f(q0). For a color version of the figures in this chapter, see www.iste.co.uk/gouin/mechanics.zip
Figure 8.2. The Weierstrass discussion for cases c and d. The description of the curves is the same as in Figure 8.1
Figure 8.3. Weierstrass discussion in the cases e and f. The description of the curves is the same as in Figure 8.1
8.2.4. Study of equilibrium positions

Assume that q(t) = q0 is an equilibrium position. The equation associated with the equilibrium position satisfies q̇² = f(q) with f(q0) = 0 and f′(q0) = 0. The second-order equation of motion

q̈ = ½ f′(q)    [8.6]

enables us to write the fundamental equation for any other motion in the form q̇² = f(q) − h, where h is a constant depending on the initial conditions, which is zero when q(t) = q0. The constant h is a perturbation of equation [8.6] with respect to the equilibrium position. It corresponds to the idea that "small values" of h are associated with motions "neighboring" q(t) = q0.

DEFINITION 8.1.– The definition is only applicable to the case of equation [8.3]: the position q0 is a stable equilibrium position if and only if

∀ ε ∈ R+∗, ∃ η ∈ R+∗, ∀ h ∈ R, |h| ≤ η ⟹ ∀ t ≥ t0, |q(t) − q0| ≤ ε.
An equilibrium is said to be unstable if it is not stable. Assuming that f is developed near q0 to second order by the Taylor–Lagrange formula, with F(q) = f(q) − h, there exists θ ∈ ]0, 1[ such that

F(q) = F(q0) + (q − q0) F′(q0) + ((q − q0)²/2) F″(q0 + θ(q − q0)).

From F(q0) = −h, F′(q0) = f′(q0) = 0 and F″(q) = f″(q), we obtain

F(q) = −h + ((q − q0)²/2) f″(q0 + θ(q − q0)).

The function f″ is continuous at q0 and it is assumed that f″(q0) ≠ 0. In this case, the study of stability becomes a graphical study (see Figure 8.4). A more precise study, in the sense of Lyapunov, is done in later sections; however, this graphical representation is instructive as an analysis of stability.
– First case: f″(q0) < 0. For h > 0, there is no physical motion near q0. Since F is continuous, for h < 0 the amplitude of the motion is infinitesimal with respect to h; the equilibrium is stable.

– Second case: f″(q0) > 0. For h > 0, the motion is associated with the arcs AC and BD. For h < 0, we obtain the whole curve; the equilibrium is unstable.

– Case f″(q0) = 0: The function f is assumed to be sufficiently regular; let p be the order of the first non-zero derivative of f at q0. This derivative is assumed to be continuous in the neighborhood of q0. The Taylor–Lagrange development to the pth order yields

F(q) = −h + ((q − q0)^p / p!) f^(p)(q0 + θ(q − q0)).
Figure 8.4. Small motions around an equilibrium position q = q0 following the sign of the concavity of curve f (q)
The following graphical discussion can be deduced: If p is even, we deduce a graphical discussion analogous to that in Figure 8.4. Only the contact order of the function f with the q axis is modified. For f (p) (q0 ) < 0, the equilibrium is stable. For f (p) (q0 ) > 0, the equilibrium is unstable. If p is odd, the study near q = q0 is shown on the graph in Figure 8.5; the equilibrium is unstable.
Figure 8.5. Graph of f (q) near q = q0 when p is odd
8.2.5. Small motions near an equilibrium position

We are in the simplest case of a stable equilibrium position: f(q0) = 0, f′(q0) = 0, f″(q0) < 0. The general equation of motion associated with the parameter q is [8.1], written in the form

q̈ = ½ f′(q).

The finite-increments formula applied to f′ implies that there exists θ ∈ ]0, 1[ such that f′(q) = f′(q0) + (q − q0) f″(q0 + θ(q − q0)). Therefore,

q̈ − ((q − q0)/2) f″(q0 + θ(q − q0)) = 0.

We call the linearized equation of equation [8.1] near the equilibrium position q = q0 the second-order differential equation with constant coefficients

q̈ − ((q − q0)/2) f″(q0) = 0
or

ü + ω0² u = 0 with u = q − q0 and ω0² = −f″(q0)/2.    [8.7]

The motion is periodic, with period

T0 = 2π/ω0 = 2π √(2/(−f″(q0))).    [8.8]
The period T0 of the linearized equation [8.7] is called the period of small oscillations.

REMARK 8.2.– It is possible to get the result from equation [8.3]:

u̇² = f″(q0 + θu) u²/2 − h.

The corresponding linearized equation is

u̇² = f″(q0) u²/2 − h.

According to the study in section 8.2.2, the period of oscillations is

2 ∫_{−u1}^{+u1} du/√((−f″(q0)/2)(u1² − u²)) = 2 √(2/(−f″(q0))) [arcsin(u/u1)]_{−u1}^{+u1}, where u1 = √(2h/f″(q0)),
which allows us to get the period T0 . E XAMPLE 8.3.– The simple pendulum. A point M with mass m moves without friction along a fixed, vertical hoop with radius R. Axes O ij are orthonormal and Oi denotes the descending vertical; we write OM = R u, u being a unit vector; let p be the unit vector directly perpendicular to u and θ be the oriented angle i, u (see Figure 8.6). Then, v = R θ˙ p is the velocity of M and m a = m R θ¨ p − m R θ˙ 2 u is the impulsion or acceleration quantity. The point M is submitted to its weight mg i and to the perfect reaction N u normal to the hoop. The equation of motion is m g i + N u = m R θ¨ p − m R θ˙2 u.
Figure 8.6. Diagram of the simple pendulum. The point M moves without friction along a vertical circle whose center is O and radius is R
By projecting on p and u, we obtain, respectively,

mRθ̈ + mg sin θ = 0 and −mRθ̇² − mg cos θ = N.

The first equation can be written θ̈ + (g/R) sin θ = 0, whose linearized equation near the stable equilibrium position θ = 0 is θ̈ + (g/R) θ = 0. The small-oscillation period is T = 2π √(R/g). So, the previous graphical discussion can be used.
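The small-oscillation period can be compared with a direct simulation of θ̈ + (g/R) sin θ = 0; the values g = 9.81 and R = 1 below are illustrative assumptions.

```python
import math

g, R = 9.81, 1.0   # assumed illustrative values

def rhs(th, w):
    # phase-space form: theta' = w, w' = -(g/R)*sin(theta)
    return w, -(g / R) * math.sin(th)

def simulate_period(th0, dt=1e-4):
    # release from rest at amplitude th0; measure the time between two
    # successive downward zero crossings of theta (one full period)
    th, w, t = th0, 0.0, 0.0
    crossings = []
    while len(crossings) < 2:
        k1 = rhs(th, w)
        k2 = rhs(th + dt/2*k1[0], w + dt/2*k1[1])
        k3 = rhs(th + dt/2*k2[0], w + dt/2*k2[1])
        k4 = rhs(th + dt*k3[0], w + dt*k3[1])
        th_new = th + dt/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        w_new = w + dt/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
        t += dt
        if th > 0 >= th_new:
            crossings.append(t)
        th, w = th_new, w_new
    return crossings[1] - crossings[0]

T0 = 2 * math.pi * math.sqrt(R / g)
print(simulate_period(0.05), T0)   # nearly equal for a small amplitude
print(simulate_period(2.5), T0)    # visibly longer for a large amplitude
```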
REMARK 8.3.– In section 8.2.4, we assumed that solutions of the linearized equation are linearized solutions of the initial equation. This property is not always true: indeed, the limited expansion of the derivative is not always the derivative of the limited expansion. In the case of the simple pendulum, it is possible to use mathematical tools to prove that the period of oscillations goes to the period of small oscillations when h goes to zero. The theory of small motions is related to the stability of dynamical systems. The problem is still partially open; for simple differential equations, the question does not always have an answer. We will study the question in more detail.
8.3. Equilibrium position of an autonomous differential equation

DEFINITION 8.2.– A differential equation is autonomous when written in the form

dx/dt = f(x),    [8.9]

where x belongs to an open set O of R^n and f is a continuous mapping from R^n to R^n. Equation [8.9] represents a system of first-order differential equations with infinitesimal operator f(x).

DEFINITION 8.3.– The position x0 belonging to the open set O is an equilibrium position of differential equation [8.9] if and only if, for all t, x(t) = x0 satisfies differential equation [8.9], x0 being a constant vector.

Thus, x(t) = x0 is an equilibrium position if and only if f(x0) = 0. In mechanics, the equilibrium positions correspond to positions whose phase velocity is zero. Let us review the well-known cases.

Case of Lagrangian dynamical systems: Consider a mechanical system of n independent parameters Q = [q1, · · · , qn]*, where Q belongs to the open set O. The system admits a Lagrangian

L(Q, Q̇) = T(Q, Q̇) − W(Q)

where T(Q, Q̇) and W(Q) denote the kinetic and potential energies, respectively. Since

T(Q, Q̇) = ½ Q̇* A Q̇ = ½ Σ_{i,j} aij(q1, · · · , qn) q̇i q̇j,

where A = {aij}, with i, j ∈ {1, · · · , n}, is an invertible symmetric matrix associated with a positive definite quadratic form, differentiable on O, and

W(Q) = W(q1, · · · , qn)
is also differentiable on O. Let us recall that, as pj = ∂T/∂q̇j is the polar half-form of T, the conjugate variables of q1, · · · , qn are

pj = Σ_{i=1}^n aij q̇i, or P = A Q̇.
1 −1 P A P 2
where A−1 is a positive definite quadratic form with respect to P . The equations of motion are the n Lagrange equations d ∂L ∂L = . dt ∂ Q˙ ∂Q ∂L ∂T ∂W ∂L and = − , we obtain a system equivalent to ∂Q ∂Q ∂Q ∂ Q˙ the system of Hamilton equations,
Since P =
⎧ ∂W ∂T ⎪ ⎪ ˙ − P = ⎨ ∂Q ∂Q ⎪ ⎪ ⎩ ˙ Q = A−1 P
[8.10]
The equilibrium positions of system [8.10] satisfy A−1 P = 0; thus, P = 0 ∂T ∂W and − = 0. Since Q˙ = A−1 P , P = 0 is equivalent to Q˙ = 0. It can be ∂Q ∂Q ˙ that verified, by differentiating the quadratic form giving T as a function of Q and Q, ˙ ∂T (Q, Q) the associated linear form is zero. ∂Q Therefore, the equilibrium equations satisfy P = 0 and
∂W (Q) = 0. ∂Q
A critical point of potential energy verifies
∂W (Q) = 0, and we obtain the ∂Q
theorem: T HEOREM 8.1.– Equilibrium positions of a Lagrangian dynamical system are given by {P 0 = 0, Q0 }, where Q0 is a critical point of potential energy W .
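Theorem 8.1 can be checked on a concrete example; the pendulum-like potential W below is an illustrative assumption, and the gradient is evaluated by central differences.

```python
import math

# Illustrative potential W(q1, q2) = -cos(q1) + q2**2/2:
# equilibria of the Lagrangian system are the critical points of W.
def W(q1, q2):
    return -math.cos(q1) + q2*q2/2

h = 1e-6
def grad_W(q1, q2):
    return ((W(q1 + h, q2) - W(q1 - h, q2)) / (2*h),
            (W(q1, q2 + h) - W(q1, q2 - h)) / (2*h))

for q1, q2 in [(0.0, 0.0), (math.pi, 0.0)]:
    print((q1, q2), grad_W(q1, q2))   # both gradients vanish: equilibria

# (0, 0) is a strict local minimum of W (hence Lyapunov stable by the
# Lejeune Dirichlet theorem below); (pi, 0) is a maximum in q1
print(W(0.0, 0.0) < W(0.3, 0.0) and W(0.0, 0.0) < W(0.0, 0.3))  # prints True
```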
8.4. Stability of equilibrium positions of an autonomous differential equation

Our aim is to find motions whose initial conditions are close to those of an equilibrium position. Time t0 is the initial instant.

DEFINITION 8.4.– The Lyapunov stability. An equilibrium position x0 of system [8.9] is Lyapunov stable if and only if for any neighborhood V of x0, there exists a neighborhood W of x0 such that, for any initial condition x10 in W, the corresponding solution x1(t), t ≥ t0, of differential equation [8.9] belongs to the neighborhood V.

It can be written that x0 is a Lyapunov-stable equilibrium position of system [8.9] if and only if

∀ ε ∈ R+∗, ∃ η ∈ R+∗, ∀ x10, ‖x10 − x0‖ ≤ η ⟹ ∀ t ≥ t0, ‖x1(t) − x0‖ ≤ ε,

where x1(t) denotes the solution of system [8.9] such that x1(t0) = x10. Let us note that in the Weierstrass discussion, we gave a weaker definition of stability associated only with Q and not with (Q, P), which represents x in the system of differential equations [8.9].

DEFINITION 8.5.– The asymptotic stability. An equilibrium position x0 of system [8.9] is asymptotically stable if and only if x0 is Lyapunov stable and there exists a neighborhood V of x0 such that for any initial condition x10 in V, the associated solution x1(t) of system [8.9] satisfies

lim_{t→+∞} x1(t) = x0.

So, the position x0 is asymptotically stable if and only if x0 is Lyapunov stable and lim_{t→+∞} ‖x1(t) − x0‖ = 0.

Let us note that the definitions require the existence of a norm in the phase space. Nonetheless, neither the Lyapunov stability nor the asymptotic stability depends on the choice of the norm, because all norms are equivalent in R^n.

8.5. A necessary condition of stability

LEMMA 8.1.– Given a positive definite quadratic form

Σ_{i,j=1}^n bij xi xj.
– For any strictly positive real ε1, there exists a strictly positive real η1 such that

Σ_{i,j=1}^n bij xi xj ≤ η1 ⟹ sup_i |xi| ≤ ε1.

– For any strictly positive real ε2, there exists a strictly positive real η2 such that

sup_i |xi| ≤ η2 ⟹ Σ_{i,j=1}^n bij xi xj ≤ ε2.
PROOF.– There are two parts in the lemma. The proof of the second property results from the continuity, at (0, · · · , 0), of a second-degree polynomial of the n variables xi, i ∈ {1, · · · , n}.

Proof of the first property: Let xi = Σ_{j=1}^n αij yj, i ∈ {1, · · · , n}, be a change of coordinates (det{αij} ≠ 0) such that, in the new basis,

Σ_{i,j=1}^n bij xi xj = Σ_{i=1}^n λi² yi².
All the λi, i ∈ {1, · · · , n}, are non-zero. Let us denote λ² = inf_i λi²; then,

Σ_{i=1}^n λi² yi² ≤ η1 ⟹ ∀ i ∈ {1, · · · , n}, yi² ≤ η1/λi² ⟹ ∀ i ∈ {1, · · · , n}, |yi| ≤ √η1/|λ| ⟹ sup_i |yi| ≤ √η1/|λ|.

Furthermore,

xi = Σ_{j=1}^n αij yj ⟹ |xi| ≤ sup_j |yj| · Σ_{j=1}^n |αij|.
Let us write Mi = Σ_{j=1}^n |αij|; because det{αij} ≠ 0, Mi ≠ 0. We denote M = sup_i Mi; then,

|xi| ≤ M sup_j |yj| ⟹ sup_i |xi| ≤ M sup_j |yj| ⟹ sup_i |xi| ≤ M √η1/|λ|.

Let us choose η1 such that M √η1/|λ| ≤ ε1, i.e. η1 ≤ ε1² λ²/M²; then,

∀ ε1 ∈ R+∗, ∃ η1 ∈ R+∗ such that Σ_{i,j=1}^n bij xi xj ≤ η1 ⟹ sup_i |xi| ≤ ε1,

which demonstrates the first property.
THEOREM 8.2.– The Lejeune Dirichlet theorem. Given a Lagrangian dynamical system depending on n parameters Q = [q1, · · · , qn]*, we assume that:

– The kinetic energy T is bounded below by a positive definite quadratic form, i.e. when P = [p1, · · · , pn]* denotes the conjugate variables of Q and

T = ½ Σ_{i,j=1}^n αij(q1, · · · , qn) pi pj,

there exists a positive definite quadratic form with constant coefficients bij such that, for any Q,

½ Σ_{i,j=1}^n αij(q1, · · · , qn) pi pj ≥ Σ_{i,j=1}^n bij pi pj.

– The potential energy W has a strict local minimum at Q0 = [q10, · · · , qn0]*.

Then, (P = 0, Q = Q0) is a Lyapunov-stable equilibrium position.

PROOF.– We are in the case where the energy E is a first integral and Q0 is a critical point of the potential energy W. By translating in the space of parameters, we may assume Q0 = 0 and W(0) = 0. We write

E(P, Q) = T(P, Q) + W(Q).
Then, E(P, Q) = E(P0, Q0), where (P0, Q0) denotes the initial conditions. We denote

‖Q‖ = sup_i |qi| and ‖P‖ = sup_i |pi|.

Preliminary 1: At 0, the potential W(Q) has a strict local minimum. Therefore, there exists a strictly positive real A such that for ‖Q‖ ≤ A, W(Q) has an absolute minimum at 0; this property is conserved by decreasing the value of A.

Preliminary 2: Let M = inf_{‖Q‖=A} W(Q). Then M > 0, otherwise W(Q) would have no strict minimum at 0. Given (P0, Q0) such that ‖Q0‖ ≤ A and E(P0, Q0) < M; with T(P, Q) being positive, and for all t, E(P, Q) < M, we deduce W(Q) < M. Hence, for any t ≥ t0, ‖Q‖ ≤ A. Indeed, if not, there would exist t1 such that ‖Q(t1)‖ > A and, according to the intermediate value theorem on [t0, t1], there would exist t2 such that ‖Q(t2)‖ = A, and we would get W(Q(t2)) ≥ M, which is impossible.

Preliminary 3: E will be assumed to be smaller than M. Thus, for any Q such that ‖Q‖ ≤ A, we have

∀ ε ∈ R+∗, ∃ η1 ∈ R+∗ such that ∀ Q, W(Q) ≤ η1 ⟹ ‖Q‖ ≤ ε.

Otherwise, ∃ ε1 ∈ R+∗, ∀ n ∈ N, ∃ Qn such that W(Qn) ≤ 1/n and ‖Qn‖ > ε1. Because ‖Qn‖ ≤ A, the sequence Qn satisfies lim_{n→∞} W(Qn) = 0. Let Qc be the limit of a convergent extracted sub-sequence. We have ‖Qc‖ ≤ A and W(Qc) = 0 with ‖Qc‖ ≥ ε1; this proves that the minimum is not strict, a contradiction.

Consequences: Using Lemma 8.1 relative to the minorant quadratic form,

∀ ε ∈ R+∗, ∃ η1 ∈ R+∗ such that T(P, Q) ≤ η1 ⟹ ‖P‖ ≤ ε.
Using Preliminary 3,
$$\forall\, \varepsilon \in \mathbb{R}^{+*},\ \exists\, \eta_2 \in \mathbb{R}^{+*} \ \text{such that}\ W(Q) \le \eta_2 \Longrightarrow \|Q\| \le \varepsilon.$$
Let us write $\eta = \inf(\eta_1, \eta_2)$; then
$$E(P, Q) \le \eta \;\Longrightarrow\; \|P\| \le \varepsilon \ \text{and}\ \|Q\| \le \varepsilon.$$
By continuity of $W(Q)$, there exists $\nu_1 \in \mathbb{R}^{+*}$ such that
$$\|Q_0\| \le \nu_1 \;\Longrightarrow\; W(Q_0) \le \frac{\eta}{2},$$
and because the $\alpha_{ij}$, $i, j \in \{1,\dots,n\}$, are continuous at $Q = 0$, the kinetic energy $T(P, Q)$ is continuous with respect to $(P, Q)$; $\nu_1$ being chosen sufficiently small, there exists $\nu_2 \in \mathbb{R}^{+*}$ such that
$$\|Q_0\| \le \nu_1 \ \text{and}\ \|P_0\| \le \nu_2 \;\Longrightarrow\; T(P_0, Q_0) \le \frac{\eta}{2}.$$
The two implications are simultaneously satisfied. Let us write $\nu = \inf(\nu_1, \nu_2)$; then $\forall\, \varepsilon \in \mathbb{R}^{+*}$, $\exists\, \eta \in \mathbb{R}^{+*}$ and $\exists\, \nu \in \mathbb{R}^{+*}$ such that
$$\|Q_0\| \le \nu \ \text{and}\ \|P_0\| \le \nu \;\Longrightarrow\; T(P_0, Q_0) \le \frac{\eta}{2} \ \text{and}\ W(Q_0) \le \frac{\eta}{2}.$$
Therefore,
$$E(P, Q) \le \eta \;\Longrightarrow\; \|P\| \le \varepsilon \ \text{and}\ \|Q\| \le \varepsilon,$$
which demonstrates the Lyapunov stability at $(0, 0)$.
REMARK 8.4.– It is important to note that the position $(P = 0, Q_0)$ is not asymptotically stable. Indeed, if the position were asymptotically stable, the total energy $E(P, Q)$ would go to $0$ as $t$ goes to infinity, whereas here the energy is constant along the motion and non-zero except at the position $(P = 0, Q_0)$. Let us note that, for a system with one degree of freedom, an equilibrium position $q_0$ which is not a strict local minimum of $W$ is unstable in the sense of Lyapunov. It seems natural to conjecture that a dynamical system with $n$ degrees of freedom ($n > 1$) whose equilibrium position is not a strict local minimum of $W$ is always unstable.
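The Lejeune Dirichlet mechanism can be illustrated numerically. The sketch below (an illustration under assumed data, not part of the original text) takes the pendulum energy $E = p^2/2 - \cos q$, whose potential has a strict minimum at $q = 0$, perturbs the equilibrium slightly and integrates with a symplectic scheme: the motion stays small for a long time while the energy stays nearly constant, so nothing decays toward the equilibrium.

```python
import math

def step(q, p, h):
    # One leapfrog (velocity Verlet) step for the pendulum q' = p, p' = -sin q;
    # the scheme is symplectic, so the energy error stays bounded over long times.
    p -= 0.5 * h * math.sin(q)
    q += h * p
    p -= 0.5 * h * math.sin(q)
    return q, p

def energy(q, p):
    # Total energy E = p**2/2 - cos q, a first integral of the exact flow.
    return 0.5 * p * p - math.cos(q)

q, p = 0.1, 0.0            # small perturbation of the equilibrium (q, p) = (0, 0)
e0 = energy(q, p)
max_q = 0.0
for _ in range(200_000):   # integrate up to t = 2000 with h = 0.01
    q, p = step(q, p, 0.01)
    max_q = max(max_q, abs(q))
```

`max_q` stays close to the initial amplitude (Lyapunov stability), while the conserved energy shows the orbit never spirals into $(0, 0)$, matching the remark above.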
8.6. Linearization of a differential equation

8.6.1. Preliminary

An autonomous differential equation has the form
$$\frac{dx}{dt} = f(x) \qquad [8.11]$$
where
$$f(x) = \begin{bmatrix} f_1(x_1,\dots,x_n) \\ \vdots \\ f_n(x_1,\dots,x_n) \end{bmatrix} \quad\text{and}\quad x = \begin{bmatrix} x_1 \\ \vdots \\ x_n \end{bmatrix}.$$
We assume that equation [8.11] admits an equilibrium position at $x_0$ and that the mapping $f$ is continuously differentiable at $x_0$. Making a translation in the space of parameters, we assume that $x_0 = 0$, so that $f(0) = 0$. In the regular case, there exists a linear mapping from $\mathbb{R}^n$ to $\mathbb{R}^n$, represented by the matrix $A = \dfrac{\partial f}{\partial x}(0)$, such that
$$f(x) = \frac{\partial f}{\partial x}(0)\, x + \|x\|\, R(x), \quad\text{where}\quad \lim_{x \to 0} R(x) = 0.$$
The matrix $A = \{a_{ij}\}$, $i, j \in \{1,\dots,n\}$, with $a_{ij} = \dfrac{\partial f_i}{\partial x_j}(0,\dots,0)$, represents this mapping.
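In computations, the matrix $A = \partial f/\partial x(0)$ is often approximated by central finite differences. A minimal sketch (the mapping `f` below is an invented example with $f(0) = 0$, not from the text):

```python
import math

def f(x):
    # Example mapping: f1 = -x1 + x2**2, f2 = sin(x1) - 2*x2; equilibrium at (0, 0).
    return [-x[0] + x[1] ** 2, math.sin(x[0]) - 2.0 * x[1]]

def jacobian_at_zero(f, n, h=1e-6):
    # a_ij = (f_i(h e_j) - f_i(-h e_j)) / (2h): central differences at the origin.
    A = [[0.0] * n for _ in range(n)]
    for j in range(n):
        e_plus = [h if k == j else 0.0 for k in range(n)]
        e_minus = [-h if k == j else 0.0 for k in range(n)]
        fp, fm = f(e_plus), f(e_minus)
        for i in range(n):
            A[i][j] = (fp[i] - fm[i]) / (2.0 * h)
    return A

A = jacobian_at_zero(f, 2)   # the exact linearization here is [[-1, 0], [1, -2]]
```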
DEFINITION 8.6.– The differential equation with constant coefficients
$$\frac{dx}{dt} = A\, x \qquad [8.12]$$
is called the linearized differential equation of equation [8.11] in the neighborhood of the equilibrium position $x = 0$. This differential equation is made up of a system of $n$ first-order scalar differential equations.

REMARK 8.5.– As the tangent linear mapping, $A$, and hence the linearization, is an intrinsic operation, i.e. independent of the chosen system of coordinates.

Since $A$ has constant coefficients, equation [8.12] has the general solution
$$x(t) = e^{(t - t_0)A}\, x(t_0),$$
where $x(t_0)$ denotes the $n$ initial conditions at $t_0$ and
$$e^{(t - t_0)A} = 1 + (t - t_0)\, A + \frac{(t - t_0)^2 A^2}{2!} + \dots + \frac{(t - t_0)^n A^n}{n!} + \dots$$
is a convergent matrix series.

THEOREM 8.3.– If the eigenvalues $\lambda_i$, $i \in \{1,\dots,n\}$, of $A$ have strictly negative real parts, the equilibrium position $x = 0$ of differential equation [8.12] is asymptotically stable.

PROOF.– Two cases are possible.

First case: the matrix $A$ is diagonalizable. In axes associated with eigenvectors of the complexification of the vector space, we can write the general solution of the system, using scalar components, in the form $x_i(t) = e^{\lambda_i (t - t_0)}\, x_i(t_0)$, where $x_i(t_0)$ is the initial condition associated with component number $i$. Then,
$$|x_i(t)| \le e^{\mathrm{Re}(\lambda_i)(t - t_0)}\, |x_i(t_0)|,$$
where $\mathrm{Re}(\lambda_i)$ denotes the real part of $\lambda_i$. For $\mathrm{Re}(\lambda_i) < 0$, we immediately obtain $\lim_{t \to \infty} x_i(t) = 0$ and the solution $x_i = 0$ is asymptotically stable. Let us note that if $\mathrm{Re}(\lambda_i) = 0$, the solution $x_i = 0$ is Lyapunov stable.

Second case: the matrix $A$ is not diagonalizable. Any $n \times n$ complex matrix is similar to a matrix of the form
$$J = \begin{bmatrix} J_1 & 0 & \cdots & 0 \\ 0 & J_2 & \ddots & \vdots \\ \vdots & \ddots & \ddots & 0 \\ 0 & \cdots & 0 & J_p \end{bmatrix},$$
where $p < n$ and the $J_i$, $i \in \{1,\dots,p\}$, are square matrices of the form
$$J_i = \begin{bmatrix} \lambda_i & 1 & \cdots & 0 \\ 0 & \lambda_i & \ddots & \vdots \\ \vdots & \ddots & \ddots & 1 \\ 0 & \cdots & 0 & \lambda_i \end{bmatrix}.$$
Since $\lambda_i$, $i \in \{1,\dots,p\}$, are eigenvalues of $A$, system [8.12] is constituted of $p$ autonomous systems of the form
$$\frac{dy_i}{dt} = J_i\, y_i \quad\text{where}\quad y_i = \begin{bmatrix} y_1 \\ \vdots \\ y_{r_i} \end{bmatrix} \quad\text{with}\quad \sum_{i=1}^{p} r_i = n. \qquad [8.13]$$
We can write $J_i = \lambda_i\, 1 + N_i$, where
$$N_i = \begin{bmatrix} 0 & 1 & & 0 \\ \vdots & \ddots & \ddots & \\ & & \ddots & 1 \\ 0 & \cdots & & 0 \end{bmatrix}, \quad\text{hence}\quad (N_i)^{r_i} = 0.$$
Therefore, $N_i$ is nilpotent. The general solution associated with the partial system [8.13] is
$$y_i(t) = e^{\lambda_i (t - t_0)}\, e^{(t - t_0) N_i}\, y_i(t_0) = e^{\lambda_i (t - t_0)} \Big( \sum_{j=0}^{r_i - 1} \frac{(t - t_0)^j}{j!}\, N_i^j \Big)\, y_i(t_0).$$
The components of $y_i(t)$ are of the form $e^{\lambda_i (t - t_0)}\, P(t)$, where $P(t)$ is a polynomial in $t$ of degree less than $n$. For $\mathrm{Re}(\lambda_i) < 0$, the solution has the limit $0$ when $t$ goes to infinity and the position $0$ is asymptotically stable.

8.6.2. Application to Lagrangian dynamical systems

A Lagrangian dynamical system has:

– a kinetic energy
$$T = \frac{1}{2}\sum_{i,j=1}^{n} a_{ij}(q_1,\dots,q_n)\, \dot q_i\, \dot q_j = \frac{1}{2}\, \dot Q^* A\, \dot Q,$$
or, in conjugate variables,
$$T = \frac{1}{2}\sum_{i,j=1}^{n} a^{ij}(q_1,\dots,q_n)\, p_i\, p_j = \frac{1}{2}\, P^* (A^*)^{-1} P \quad\text{with}\quad \big\{a^{ij},\ i, j \in \{1,\dots,n\}\big\} = (A^*)^{-1};
$$
– a potential energy $W = W(q_1,\dots,q_n)$.

The position $Q = 0$ is assumed to be the equilibrium position; therefore $\dfrac{\partial W}{\partial Q}(0) = 0$, i.e. $Q = 0$ is a singular point of the potential energy $W$. The system of the equations of motion is the system of Hamilton equations
$$\dot P = -\Big(\frac{\partial H}{\partial Q}\Big)^* \quad\text{and}\quad \dot Q = \Big(\frac{\partial H}{\partial P}\Big)^*, \quad\text{where}\quad H(P, Q) = T + W. \qquad [8.14]$$
Since $(P = 0, Q = 0)$ is a solution of
$$\frac{\partial H}{\partial Q}(P, Q) = 0 \quad\text{and}\quad \frac{\partial H}{\partial P}(P, Q) = 0,$$
the Taylor development of the right-hand side of [8.14] in the neighborhood of $(0, 0)$ has no constant terms. The linearized system of [8.14] has linear terms in $P$ and $Q$. These terms come from the quadratic terms of the second-order development of $H(P, Q)$ at $(0, 0)$.

THEOREM 8.4.– To linearize a Lagrangian dynamical system in the neighborhood of the equilibrium position $Q = 0$, we consider the system with the kinetic energy
$$T_2 = \frac{1}{2}\sum_{i,j=1}^{n} \alpha_{ij}\, \dot q_i\, \dot q_j \quad\text{with}\quad \alpha_{ij} = a_{ij}(0,\dots,0),\ i, j \in \{1,\dots,n\},$$
and the potential energy
$$W_2 = \frac{1}{2}\sum_{i,j=1}^{n} b_{ij}\, q_i\, q_j \quad\text{with}\quad b_{ij} = \frac{\partial^2 W}{\partial q_i\, \partial q_j}(0,\dots,0),\ i, j \in \{1,\dots,n\}.$$
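Theorem 8.4 translates into a small numerical recipe: evaluate $a_{ij}$ at $0$ and approximate the Hessian of $W$ at $0$ by finite differences. A sketch for a one-degree-of-freedom example (the pendulum-like choice $a(q) = 1$, $W(q) = 1 - \cos q$ is an assumption for illustration):

```python
import math

def a(q):
    return 1.0                 # kinetic coefficient a(q), constant in this example

def W(q):
    return 1.0 - math.cos(q)   # potential with a strict minimum at q = 0

h = 1e-4
alpha = a(0.0)                              # alpha = a(0)
b = (W(h) - 2.0 * W(0.0) + W(-h)) / h**2    # b = W''(0) by central differences

omega2 = b / alpha   # squared frequency of the linearized oscillator q'' + omega2*q = 0
```

For this example the exact values are $\alpha = 1$ and $b = W''(0) = 1$, so the linearized frequency is $\omega = 1$.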
EXAMPLE 8.4.– Consider the one-degree-of-freedom system
$$T = \frac{1}{2}\, a(q)\, \dot q^2 \quad\text{and}\quad W = W(q).$$
The position $q = 0$ is assumed to be a stable equilibrium position; it is enough that $\dfrac{\partial W}{\partial q}(0) = 0$ and $\dfrac{\partial^2 W}{\partial q^2}(0) > 0$. We replace the expression for the potential energy by its approximating parabola:
$$T_2 = \frac{1}{2}\, a(0)\, \dot q^2 \quad\text{and}\quad W_2 = \frac{1}{2}\, b(0)\, q^2, \quad\text{where}\quad b(0) = \frac{\partial^2 W}{\partial q^2}(0).$$
The linearized system is
$$\ddot q + \omega^2 q = 0 \quad\text{where}\quad \omega^2 = \frac{b(0)}{a(0)}.$$
Its solution is
$$q = A \sin[\omega (t - t_0)] \quad\text{and}\quad p = A\, a(0)\, \omega \cos[\omega (t - t_0)].$$
In the phase space, the approximated trajectory is represented by the ellipse
$$q^2 + \frac{p^2}{\omega^2\, a(0)^2} = A^2.$$
The fundamental question is: what is the nature of the linearized solution? Is it a neighboring solution of the nonlinearized motion?

EXERCISE 8.1.– Study the period of "small oscillations" in the neighborhood of a stable equilibrium position of a weighted point in motion along a rigid wire situated in a vertical plane.

The equation of the wire in the vertical plane $Oxy$ is of the form $y = y(x)$. It is assumed that the stable equilibrium position corresponds to $x = 0$. The kinetic energy and potential energy are
$$T = \frac{1}{2}\, m\, \big(1 + y'(x)^2\big)\, \dot x^2 \quad\text{and}\quad W = m\, g\, y(x), \quad\text{with}\quad y'(0) = 0 \ \text{and}\ y''(0) > 0.$$
For the sake of simplicity, we choose the units such that the mass $m = 1$ and the acceleration due to gravity $g = 1$. We are back to the previous example, for which
$$T_2 = \frac{1}{2}\, \dot x^2 \quad\text{and}\quad W_2 = \frac{1}{2}\, \omega^2 x^2 \quad\text{with}\quad \omega^2 = y''(0).$$
The period of the small oscillations is $T = \dfrac{2\pi}{\omega}$.
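A numerical sketch of Exercise 8.1 (the wire $y(x) = x^2/2$ is an illustrative choice, giving $y''(0) = 1$ and a predicted small-oscillation period of $2\pi$): the Lagrange equation for this wire is $(1 + x^2)\ddot x + x\dot x^2 + x = 0$, and timing a quarter oscillation of the full nonlinear motion recovers the linearized period for small amplitudes.

```python
import math

def accel(x, v):
    # Bead on the wire y(x) = x**2/2 with m = g = 1:
    # (1 + x**2) x'' + x v**2 + x = 0, since y' = x and y'' = 1.
    return -(x * v * v + x) / (1.0 + x * x)

def period(amplitude, h=1e-4):
    # Release at rest at x = amplitude; a quarter period elapses when x reaches 0.
    x, v, t = amplitude, 0.0, 0.0
    while x > 0.0:
        # One classical RK4 step for the system (x, v).
        k1x, k1v = v, accel(x, v)
        k2x, k2v = v + 0.5*h*k1v, accel(x + 0.5*h*k1x, v + 0.5*h*k1v)
        k3x, k3v = v + 0.5*h*k2v, accel(x + 0.5*h*k2x, v + 0.5*h*k2v)
        k4x, k4v = v + h*k3v, accel(x + h*k3x, v + h*k3v)
        x += h * (k1x + 2*k2x + 2*k3x + k4x) / 6.0
        v += h * (k1v + 2*k2v + 2*k3v + k4v) / 6.0
        t += h
    return 4.0 * t

T_small = period(0.01)   # close to the linearized period 2*pi
```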
8.6.3. Small oscillations of a Lagrangian dynamical system

DEFINITION 8.7.– In the neighborhood of a stable equilibrium position $Q_0$, the motions of the linearized system of motion equations are called small oscillations.

The linearized kinetic energy and potential energy of a Lagrangian dynamical system in the neighborhood of the equilibrium position $Q_0 = 0$ are
$$T_2 = \frac{1}{2}\, \dot Q^* A\, \dot Q \quad\text{and}\quad W_2 = \frac{1}{2}\, Q^* B\, Q,$$
where $A$ is associated with a positive definite quadratic form. The Lagrange equations of motion are
$$\frac{d}{dt}\big(A\, \dot Q\big) + B\, Q = 0. \qquad [8.15]$$
In an orthonormal basis of the quadratic form $X^* A\, X$, the two quadratic forms $X^* A\, X$ and $X^* B\, X$ can be written
$$X^* A\, X = Y^* Y \quad\text{and}\quad X^* B\, X = \sum_{i=1}^{n} \lambda_i\, Y_i^2, \quad\text{where}\quad X = \begin{bmatrix} X_1 \\ \vdots \\ X_n \end{bmatrix},\ Y = \begin{bmatrix} Y_1 \\ \vdots \\ Y_n \end{bmatrix},$$
and $Y = C\, X$ denotes the change of coordinates with passage matrix $C$ associated with the new orthonormal basis with respect to $X^* A\, X$. Let us write $Q_1 = C\, Q$; then $\dot Q_1 = C\, \dot Q$ and
$$T_2 = \frac{1}{2}\sum_{i=1}^{n} \dot q_{1i}^2 \quad\text{and}\quad W_2 = \frac{1}{2}\sum_{i=1}^{n} \lambda_i\, q_{1i}^2.$$
The values $\lambda_i$, $i \in \{1,\dots,n\}$, associated with the quadratic form $B$ for the $A$-metric are the eigenvalues of $B$ with respect to $A$.

THEOREM 8.5.– The eigenvalues of $B$ with respect to $A$ are the solutions of the characteristic equation $\det(B - \lambda A) = 0$, and all roots of the characteristic equation are real.

PROOF.– In the new basis, "$\tilde{\ }$" denotes the transformed tensors:
$$\tilde B = (C^{-1})^*\, B\, C^{-1} = \begin{bmatrix} \lambda_1 & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & \lambda_n \end{bmatrix} \quad\text{and}\quad 1 = (C^{-1})^*\, A\, C^{-1}.$$
The eigenvectors of $\tilde B$ corresponding to the eigenvalues $\lambda_1,\dots,\lambda_n$ satisfy $(\tilde B - \lambda\, 1)\, \tilde V = 0$, where $\tilde V \neq 0$; the vectors $\tilde V$ are the new eigenvectors. Let us write $V = C^{-1} \tilde V$; then $C^* (\tilde B - \lambda\, 1)\, C\, V = 0$ and, consequently, $(B - \lambda A)\, V = 0$. Thus, $V$ is the eigenvector of $B$ with respect to $A$ associated with the eigenvalue $\lambda \in \{\lambda_1,\dots,\lambda_n\}$ if and only if $\det(B - \lambda A) = 0$. The eigenvalues are real because the potential energy is real in any basis and for all values of $q_{1i}^2$, $i \in \{1,\dots,n\}$.
In the new coordinate system, the Lagrange equations are written as
$$\frac{d^2 Q_1}{dt^2} + \tilde B\, Q_1 = 0,$$
i.e., in coordinates, $\ddot q_{1i} + \lambda_i\, q_{1i} = 0$ for $i \in \{1,\dots,n\}$. We avoid using the additional index $1$ and write
$$\ddot q_i + \lambda_i\, q_i = 0, \quad\text{for}\ i \in \{1,\dots,n\}. \qquad [8.16]$$

COROLLARY 8.1.– The small oscillations of a Lagrangian dynamical system are obtained by considering the Cartesian product of the small oscillations of the $n$ one-dimensional differential equations of the system. These differential equations correspond to the eigenvectors, which are orthogonal in the sense of the scalar product defined by the quadratic form associated with the linearized kinetic energy.

For each scalar differential equation, we can distinguish three cases:

a) The eigenvalue $\lambda_i = \omega_i^2 > 0$: the general solution is $q_i = c_i \cos[\omega_i (t - t_0)]$. The potential energy admits a strict minimum in the direction $q_i = 0$. The position $q_i = 0$ is stable for the nonlinearized equation.

b) The eigenvalue $\lambda_i = 0$: the general solution is $q_i = c_{i1}\, t + c_{i2}$. The position $q_i = 0$ is an unstable equilibrium position of the linearized equation. We cannot draw any conclusions for the nonlinearized system.

c) The eigenvalue $\lambda_i = -k_i^2 < 0$: the general solution is $q_i = c_i \cosh[k_i (t - t_0)]$. The position $q_i = 0$ is unstable for the linearized equation. It can be shown that it is also unstable for the nonlinearized equation.
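The three cases can be checked on the closed-form solutions of the scalar equation $\ddot q + \lambda q = 0$ (a sketch; the initial data $q(0) = c$, $\dot q(0) = 0$ are an illustrative choice):

```python
import math

def solution(lam, t, c=1.0):
    # Solution of q'' + lam*q = 0 with q(0) = c, q'(0) = 0.
    if lam > 0:
        return c * math.cos(math.sqrt(lam) * t)     # case a): bounded oscillation
    if lam == 0:
        return c    # case b): with these data q is constant; c1*t + c2 in general
    return c * math.cosh(math.sqrt(-lam) * t)       # case c): exponential growth

# Case a) stays bounded by the initial amplitude; case c) blows up.
q_stable = max(abs(solution(4.0, 0.1 * k)) for k in range(1000))
q_unstable = abs(solution(-4.0, 10.0))
```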
COROLLARY 8.2.– With all eigenvalues $\lambda_i$, $i \in \{1,\dots,n\}$, strictly positive, system [8.15] has the small oscillations
$$Q(t) = \sum_{i=1}^{n} \big(c_{i1} \cos \omega_i t + c_{i2} \sin \omega_i t\big)\, V_i, \qquad [8.17]$$
where $\omega_i^2 = \lambda_i$ and the $V_i$ are the eigenvectors associated with the eigenvalues $\lambda_i$, corresponding to $B\, V_i = \lambda_i\, A\, V_i$. Indeed, the differential equation associated with the coordinate $i$ in the basis $V_i$, $i \in \{1,\dots,n\}$, is $\ddot q_i + \lambda_i\, q_i = 0$, whose general solution is
$$c_{i1} \cos \omega_i t + c_{i2} \sin \omega_i t. \qquad [8.18]$$
The oscillations given by [8.17] are the Cartesian product of the motion $q_i(t)$ associated with the direction $V_i$ and of the $n - 1$ trivial motions $q_j(t) = 0$, $j \neq i$, associated with the $n - 1$ other directions $V_j$. A motion [8.18] is called an eigenoscillation, with eigenfrequency $\omega_i$, of the Lagrangian dynamical system. The number of linearly independent eigenoscillations is equal to Sylvester's inertia index of the quadratic form $X^* B\, X$; according to the Sylvester theorem, this index does not depend on the chosen axes. In the case where all eigenvalues are strictly positive, all small oscillations of [8.15] are sums of eigenoscillations. As the frequencies $\omega_i$, $i \in \{1,\dots,n\}$, are not necessarily commensurable, this sum is not necessarily periodic; this is the case of the well-known Lissajous curves.

CONCLUSION 8.1.– When all the eigenvalues are strictly positive, in order to find the small oscillations of a Lagrangian dynamical system associated with [8.15], we project the initial conditions of the motion on the system of eigenvectors $V_k$, $k \in \{1,\dots,n\}$. We are back to $n$ scalar differential equations of the form [8.16]. In practice, we look for solutions of the motion in the complex form $Q(t) = e^{i\omega t}\, V$ and, by substituting $Q(t)$ in the Lagrange equations [8.15], we obtain
$$(B - \omega^2 A)\, V = 0$$
and we get the $n$ eigenvalues $\lambda_k$, $k \in \{1,\dots,n\}$. When all $\lambda_k$ are strictly positive, $\omega_k^2 = \lambda_k$, $k \in \{1,\dots,n\}$, and, for the $n$ eigenvectors $V_k$ that constitute an orthogonal system in the $A$-sense, we can write
$$Q(t) = \mathrm{Re}\Big\{\sum_{k=1}^{n} C_k\, e^{i\omega_k (t - t_k)}\, V_k\Big\}.$$
The result is valid even for multiple eigenvalues.

8.7. Behavior of eigenfrequencies

8.7.1. Preliminaries

A surface of equation $Q^* B\, Q = 1$, where $Q \in \mathbb{R}^n$ and $B$ is a positive definite symmetric matrix, is called an ellipsoid of $\mathbb{R}^n$. The lengths of the half-axes of an ellipsoid $E$ are denoted by $a_i$, $i \in \{1,\dots,n\}$, with $a_1 \ge \dots \ge a_n$.

THEOREM 8.6.– Each section of the ellipsoid $E$ by a $k$-dimensional hyperplane $H_k$ (where $k < n$) has its smallest half-axis of length shorter than or equal to $a_k$, and
$$a_k = \max_{H_k}\ \min_{x \in H_k \cap E} \|x\|.$$
PROOF.– Let $H_{n-k+1}$ be the vector subspace of $\mathbb{R}^n$ generated by the directions of the half-axes associated with $a_k,\dots,a_n$ of the ellipsoid. The subspace dimension is $n - k + 1$. It has common points with $H_k$, since $(n - k + 1) + k = n + 1 > n$. Let $x$ be an intersection point of the two vector subspaces, corresponding to one direction of the intersection that meets $E$. Since $x \in H_{n-k+1}$, we have $\|x\| \le a_k$. As $\|x\|$ is greater than or equal to the length of the smallest half-axis of the ellipsoid $E \cap H_k$, this smallest half-axis is not greater than $a_k$. Furthermore, the upper bound, corresponding to the largest of the smallest half-axes of the ellipsoids $E \cap H_k$, is reached for the subspace generated by the half-axes associated with $a_1,\dots,a_k$, where $a_1 \ge \dots \ge a_k$, and has the value $a_k$.

8.7.2. Behavior of eigenfrequencies with the system rigidity

Let us consider a Lagrangian dynamical system whose kinetic energy and potential energy are
$$T = \frac{1}{2}\, \dot Q^* A\, \dot Q \quad\text{and}\quad W = \frac{1}{2}\, Q^* B\, Q.$$
It is assumed that, for all $\dot Q \neq 0$, the kinetic energy $T$ is positive and that, for all $Q \neq 0$, the potential energy is also positive. The eigenvalues of $B$ with respect to $A$ are then real and positive, and the system has $n$ periodic oscillations.
DEFINITION 8.8.– For a given kinetic energy $T = \frac{1}{2}\, \dot Q^* A\, \dot Q$, the Lagrangian dynamical system $S'$ with potential energy $W' = \frac{1}{2}\, Q^* B'\, Q$ is said to be more rigid than the Lagrangian dynamical system $S$ with potential energy $W = \frac{1}{2}\, Q^* B\, Q$ if and only if
$$Q^* B'\, Q \ \ge\ Q^* B\, Q.$$

THEOREM 8.7.– When the rigidity of a Lagrangian dynamical system increases, all eigenfrequencies increase, i.e. if $\omega_1 \le \dots \le \omega_n$ are the eigenfrequencies of system $S$ and $\omega'_1 \le \dots \le \omega'_n$ are the eigenfrequencies of the Lagrangian dynamical system $S'$, then $S'$ being more rigid than $S$ implies
$$\omega_1 \le \omega'_1,\ \omega_2 \le \omega'_2,\ \dots,\ \omega_n \le \omega'_n.$$
If we consider the Euclidean structure defined by the kinetic energy, it may be assumed that $A = 1$. We consider the ellipsoid $E$ in $\mathbb{R}^n$ defined by $Q^* B\, Q = 1$ and the ellipsoid $E'$ in $\mathbb{R}^n$ defined by $Q^* B'\, Q = 1$.

LEMMA 8.2.– If the Lagrangian dynamical system $S'$ is more rigid than the Lagrangian dynamical system $S$, then $E'$ is contained inside $E$.

Indeed, $Q^* B'\, Q \ge Q^* B\, Q$; if $Q^* B\, Q = 1$, then $Q^* B'\, Q \ge 1$, so the points $Q$ of the ellipsoid $E$ lie on or outside the ellipsoid $E'$, and $E' \subset E$.

LEMMA 8.3.– The lengths of the half-axes of the ellipsoid $E$ are inversely proportional to the eigenfrequencies, i.e. $a_i = \dfrac{1}{\omega_i}$, $i \in \{1,\dots,n\}$.

With the kinetic energy being $T = \frac{1}{2}\, \dot Q^* \dot Q$, in an orthonormal basis the matrix $B$ is diagonal and
$$\sum_{i=1}^{n} \lambda_i\, q_i^2 = 1 \;\Longrightarrow\; \forall i,\ \lambda_i\, a_i^2 = 1 \;\Longleftrightarrow\; a_i = \frac{1}{\sqrt{\lambda_i}} = \frac{1}{\omega_i}.$$
Theorem 8.7 is equivalent to:

THEOREM 8.8.– If the ellipsoid $E$, with half-axes $a_1,\dots,a_n$, where $a_1 \ge \dots \ge a_n$, contains the ellipsoid $E'$ with the same center and with half-axes $a'_1,\dots,a'_n$, where $a'_1 \ge \dots \ge a'_n$, then the half-axes of the ellipsoid $E'$ are shorter than the half-axes of the ellipsoid $E$, i.e.
$$a_1 \ge a'_1,\ a_2 \ge a'_2,\ \dots,\ a_n \ge a'_n.$$
PROOF.– Indeed, the smallest half-axis of the section of the ellipsoid $E'$ by a $k$-dimensional hyperplane $H_k$ is not greater than the smallest half-axis of the section made in the ellipsoid $E$ by the same hyperplane, since $E' \subset E$. Thus,
$$\min_{x \in H_k \cap E'} \|x\| \ \le\ \min_{x \in H_k \cap E} \|x\|.$$
According to Theorem 8.6,
$$\min_{x \in H_k \cap E'} \|x\| \ \le\ \max_{H_k}\ \min_{x \in H_k \cap E} \|x\| = a_k \;\Longrightarrow\; \max_{H_k}\ \min_{x \in H_k \cap E'} \|x\| \le a_k.$$
Furthermore, according to the conclusion of Theorem 8.6, $a'_k \le a_k$ because
$$a'_k = \max_{H_k}\ \min_{x \in H_k \cap E'} \|x\|.$$
REMARK 8.6.– Because the potential energy is a positive definite quadratic form, we can repeat section 8.7.2 with the roles of "kinetic energy" and "potential energy" exchanged. If the potential energy of a Lagrangian dynamical system is unchanged, then an increase in kinetic energy (for instance, by increasing the masses) implies a decrease of each eigenfrequency. Let us note that a cracked bell has a smaller rigidity than the corresponding intact bell; consequently, the sounds of the cracked bell are deeper than the sounds of the intact bell.

8.7.3. The frequencies' behavior when parameters are linked by constraints

Consider the Lagrangian dynamical system $S_n$ with $n$ degrees of freedom. The kinetic energy and potential energy are
$$T = \frac{1}{2}\, \dot Q^* \dot Q \quad\text{and}\quad W = \frac{1}{2}\, Q^* B\, Q, \quad\text{where}\quad Q \in \mathbb{R}^n,$$
which correspond to periodic small oscillations around the stable equilibrium position $0 \in \mathbb{R}^n$. Let $H_{n-1}$ denote an $(n-1)$-dimensional vector subspace of $\mathbb{R}^n$. Consider the Lagrangian dynamical system $S_{n-1}$, with $n - 1$ degrees of freedom, associated with $H_{n-1}$ by setting to zero a linear combination of the parameters $(q_1,\dots,q_n)$. Such a combination corresponds to the linearized part, in the neighborhood of $(0,\dots,0)$, of a holonomic constraint relating $(q_1,\dots,q_n)$. The kinetic and potential energies of $S_{n-1}$ are equal to the restrictions of $T$ and $W$ to $H_{n-1}$. System $S_{n-1}$ is deduced from the initial system $S_n$ submitted to the linear constraint. Assume that $S_n$ has $n$ eigenfrequencies $\omega_1,\dots,\omega_n$ such that $\omega_1 \le \dots \le \omega_n$, and that $S_{n-1}$ has $n - 1$ eigenfrequencies $\omega'_1,\dots,\omega'_{n-1}$ such that $\omega'_1 \le \dots \le \omega'_{n-1}$.
THEOREM 8.9.– The eigenfrequencies of the Lagrangian dynamical system $S_{n-1}$ associated with a holonomic constraint separate the eigenfrequencies of the initial Lagrangian dynamical system $S_n$ as follows:
$$\omega_1 \le \omega'_1 \le \omega_2 \le \omega'_2 \le \dots \le \omega'_{n-1} \le \omega_n.$$
Because of Lemma 8.3, this theorem is equivalent to:

THEOREM 8.10.– Consider the section $E_{n-1}$ of the ellipsoid $E_n$, of half-axes' lengths $a_1,\dots,a_n$, where $a_1 \ge \dots \ge a_n$, by an $(n-1)$-dimensional hyperplane. Then the half-axes $a'_1,\dots,a'_{n-1}$, where $a'_1 \ge \dots \ge a'_{n-1}$, of the section $E_{n-1}$ separate the half-axes of $E_n$ as
$$a_1 \ge a'_1 \ge a_2 \ge a'_2 \ge \dots \ge a'_{n-1} \ge a_n.$$
PROOF.– Indeed, let $H_k$ be a $k$-dimensional hyperplane included in $H_{n-1}$. Since $H_k \cap E_{n-1} = H_k \cap E_n$, we get
$$a'_k = \max_{H_k \subset H_{n-1}}\ \min_{x \in H_k \cap E_{n-1}} \|x\| = \max_{H_k \subset H_{n-1}}\ \min_{x \in H_k \cap E_n} \|x\|.$$
However, with $H_k$ being any $k$-dimensional hyperplane, the unrestricted maximum considers more hyperplanes $H_k$, so
$$\max_{H_k \subset H_{n-1}}\ \min_{x \in H_k \cap E_n} \|x\| \ \le\ \max_{H_k}\ \min_{x \in H_k \cap E_n} \|x\|.$$
Furthermore, according to Theorem 8.6,
$$\max_{H_k}\ \min_{x \in H_k \cap E_n} \|x\| = a_k.$$
Therefore $a_k \ge a'_k$. We now have to prove $a'_k \ge a_{k+1}$.

Let us intersect the hyperplane $H_{n-1}$ containing $E_{n-1}$ with a $(k+1)$-dimensional hyperplane $H_{k+1}$. The section has $k + 1$ or $k$ dimensions, depending on whether $H_{n-1}$ contains $H_{k+1}$ or not. The shortest half-axis of the ellipsoid $E_{n-1} \cap H_{k+1}$ is greater than or equal to the shortest half-axis of the ellipsoid $E_n \cap H_{k+1}$. We have
$$\min_{x \in H_{k+1} \cap E_{n-1}} \|x\| \ \ge\ \min_{x \in H_{k+1} \cap E_n} \|x\|,$$
which implies
$$\max_{H_{k+1}}\ \min_{x \in H_{k+1} \cap E_{n-1}} \|x\| \ \ge\ \max_{H_{k+1}}\ \min_{x \in H_{k+1} \cap E_n} \|x\| = a_{k+1}. \qquad [8.19]$$
Let us estimate $\max_{H_{k+1}} \min_{x \in H_{k+1} \cap E_{n-1}} \|x\|$.

– Case (a): $H_{k+1} \not\subset H_{n-1}$. Then $H_k$ is the trace of $H_{k+1}$ on $H_{n-1}$; $H_k$ is an arbitrary hyperplane included in $H_{n-1}$ and we get
$$\min_{x \in H_{k+1} \cap E_{n-1}} \|x\| = \min_{x \in H_k \cap E_{n-1}} \|x\|, \quad\text{and then}\quad \max_{H_k \subset H_{n-1}}\ \min_{x \in H_k \cap E_{n-1}} \|x\| = a'_k.$$
– Case (b): $H_{k+1} \subset H_{n-1}$. Then
$$\max_{H_{k+1} \subset H_{n-1}}\ \min_{x \in H_{k+1} \cap E_{n-1}} \|x\| = a'_{k+1}.$$
Taking the supremum over the two cases yields
$$\max_{H_{k+1}}\ \min_{x \in H_{k+1} \cap E_{n-1}} \|x\| = \sup(a'_k,\, a'_{k+1}) = a'_k.$$
The smallest half-axis of the ellipsoid $E_{n-1} \cap H_{k+1}$ is greater than or equal to the smallest half-axis of the ellipsoid $E_n \cap H_{k+1}$; then $a'_k \ge a_{k+1}$, and we finally obtain relation [8.19]:
$$a_k \ge a'_k \ge a_{k+1},$$
which proves the theorem.
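Theorem 8.9 can also be verified numerically. In the sketch below (illustrative data: $A = 1$, $B = \mathrm{diag}(1, 4, 9)$, and the constraint $q_1 + q_2 + q_3 = 0$), the form $B$ restricted to an orthonormal basis of the constraint plane is a $2 \times 2$ matrix whose eigenvalues interlace those of $B$:

```python
import math

# B = diag(1, 4, 9) with A = identity; constraint plane q1 + q2 + q3 = 0.
lams = [1.0, 4.0, 9.0]
s2, s6 = math.sqrt(2.0), math.sqrt(6.0)
u = [1/s2, -1/s2, 0.0]            # orthonormal basis (u, v) of the plane
v = [1/s6, 1/s6, -2/s6]

def form(x, y):
    # Bilinear form x^T B y for B = diag(lams).
    return sum(l * a * b for l, a, b in zip(lams, x, y))

M = [[form(u, u), form(u, v)], [form(v, u), form(v, v)]]
tr = M[0][0] + M[1][1]
det = M[0][0]*M[1][1] - M[0][1]*M[1][0]
d = math.sqrt(tr*tr - 4.0*det)
mu1, mu2 = (tr - d)/2.0, (tr + d)/2.0   # eigenvalues of the constrained system
```

For these data the constrained eigenvalues fall between the unconstrained ones: $1 \le \mu_1 \le 4 \le \mu_2 \le 9$.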
8.8. Perturbed equation associated with a linear differential equation

A linear mapping from $\mathbb{R}^n$ to $\mathbb{R}^n$ is represented by a matrix $A$, and $x$ is an element of $\mathbb{R}^n$.
DEFINITION 8.9.– The perturbed equation of the linear differential equation
$$\frac{dx}{dt} = A\, x, \qquad [8.20]$$
in the neighborhood of the position $x = 0$, is the differential equation
$$\frac{dx}{dt} = A\, x + p(x), \qquad [8.21]$$
where $p$ denotes a $C^1$-mapping from $\mathbb{R}^n$ to $\mathbb{R}^n$ such that $p(0) = 0$ and $\dfrac{\partial p}{\partial x}(0) = 0$.

EXAMPLE 8.5.– Consider the scalar differential equation
$$\frac{dx}{dt} = -x + x^2. \qquad [8.22]$$
The associated linearized differential equation is
$$\frac{dx}{dt} = -x. \qquad [8.23]$$
– The general solution of [8.23] is $\varphi(t) = a\, e^{-t}$, where $a$ is a real constant. The solution with initial value $x_0$ for $t = 0$ is $\varphi_{x_0}(t) = x_0\, e^{-t}$.

– The general solution of [8.22] satisfies $\dfrac{x - 1}{x} = \lambda\, e^t$, where $\lambda$ is a real constant. In the case $x \in\, ]0, 1[$, $x - 1 = \lambda\, x\, e^t$ with $\lambda < 0$, and $x = \dfrac{1}{1 - \lambda\, e^t}$. Taking $x_0 \in\, ]0, 1[$ as the initial position at $t = 0$, $x_0 = \dfrac{1}{1 - \lambda}$ and the solution is
$$\psi_{x_0}(t) = \frac{1}{1 + \Big(\dfrac{1}{x_0} - 1\Big)\, e^t} \equiv \frac{x_0}{x_0 + (1 - x_0)\, e^t},$$
which belongs to $]0, 1[$ for $t > 0$.
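The two closed-form solutions of Example 8.5 can be compared directly (a sketch reproducing the formulas above):

```python
import math

def phi(x0, t):
    # Solution of the linearized equation x' = -x with initial value x0.
    return x0 * math.exp(-t)

def psi(x0, t):
    # Solution of x' = -x + x**2 for x0 in ]0, 1[.
    return x0 / (x0 + (1.0 - x0) * math.exp(t))

x0 = 0.5
deltas = [psi(x0, 0.1 * k) - phi(x0, 0.1 * k) for k in range(200)]
```

The differences stay between $0$ and $x_0$ and die out as $t$ grows, matching the estimate that follows.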
It must be noted that the solutions of [8.22] and [8.23] are asymptotically stable at $x = 0$. Let us calculate $\Delta_{x_0}(t) = \psi_{x_0}(t) - \varphi_{x_0}(t)$; then
$$\Delta_{x_0}(t) = \frac{x_0\, (1 - e^{-t})}{1 + \Big(\dfrac{1}{x_0} - 1\Big)\, e^t},$$
and, for $t > 0$, $0 \le \Delta_{x_0}(t) \le x_0$. Consequently, $\psi_{x_0}(t) - \varphi_{x_0}(t)$ is small with $x_0$ and goes to zero independently of $x_0$ when $t$ goes to infinity.

We can ask the following questions:

– For the same initial positions, what can we deduce for the solutions of differential equations [8.20] and [8.21]? Are they close, in a sense to be defined?

– If $x = 0$ is a stable equilibrium position of [8.20], is the same true for [8.21]?

Let us study an estimate that gives the difference between the solutions of the two differential equations, and look for a theorem that answers the above questions.

CONCLUSION 8.2.– Estimation, over a finite time interval, of the difference between the solutions of differential equations [8.20] and [8.21] for the same initial condition, in the neighborhood of a stable equilibrium position of the nonlinearized differential equation [8.21].

We choose the initial condition at $0$. Let $z(t)$ denote the difference between the two solutions $x(t)$ and $y(t)$ of differential equations [8.21] and [8.20], respectively; the initial condition for $z(t)$ is $z(0) = 0$. From [8.20] and [8.21], we obtain
$$\frac{dz}{dt} = A\, z + p(x) \quad\text{with}\quad z(0) = 0.$$
We deduce
$$\frac{d}{dt}\big(e^{-A t}\, z\big) = e^{-A t}\, p(x).$$
Applying the finite increment theorem to the $i$-th component of the solution, and noting that $z(0) = 0$: $\forall\, t \in \mathbb{R}^{+*}$, $\exists\, t_i \in\, ]0, t[$, $i \in \{1,\dots,n\}$,
such that
$$\big[e^{-A t}\, z(t)\big]_i = t\, \big[e^{-A t_i}\, p(x(t_i))\big]_i.$$
We deduce
$$[z(t)]_i = t\, \big[e^{A (t - t_i)}\, p(x(t_i))\big]_i, \quad\text{hence}\quad \|z(t)\| \le t\, \sup_{t_i} \big\|e^{A(t - t_i)}\big\|\, \|p(x(t_i))\|.$$
We compare the solutions over the time interval $[0, T]$. The term $\|e^{A t}\|$ being bounded over $[0, T]$,
$$\forall\, t \in [0, T],\ \exists\, m(T) \in \mathbb{R}^{+*} \ \text{such that}\ \|e^{A t}\| \le m(T).$$
Furthermore, $t - t_i \in [0, T]$ implies $\sup_{t_i} \|e^{A(t - t_i)}\| \le m(T)$. As $p$ is continuous,
$$\forall\, \varepsilon \in \mathbb{R}^{+*},\ \exists\, \eta_1 \in \mathbb{R}^{+*} \ \text{such that}\ \|x(t)\| \le \eta_1 \Longrightarrow \|p(x(t))\| \le \frac{\varepsilon}{m(T)}.$$
Since the position $0$ is Lyapunov stable,
$$\forall\, t > 0,\ \forall\, \eta \in \mathbb{R}^{+*},\ \exists\, \delta \in \mathbb{R}^{+*} \ \text{such that}\ \|x_0\| \le \delta \Longrightarrow \|x(t)\| \le \eta.$$
Let us choose $\eta = \eta_1$; then
$$\forall\, t \in [0, T],\ \forall\, \varepsilon \in \mathbb{R}^{+*},\ \exists\, \delta \in \mathbb{R}^{+*} \ \text{such that}\ \|x_0\| \le \delta \Longrightarrow \|z(t)\| \le \varepsilon\, t.$$
When the initial condition is close to a Lyapunov-stable equilibrium position of the nonlinearized equation, the solutions $x(t)$ and $y(t)$ of the two differential equations remain close over a given time interval. The quality of the approximation results from the size of $\varepsilon\, t$ and therefore from the size of the time interval. In the case of stronger regularity, we can answer the questions.

THEOREM 8.11.– Consider the differential equation
$$\frac{dx}{dt} = A\, x + p(x),$$
[8.24]
with $p(0) = 0$, where the eigenvalues of $A$ have strictly negative real parts. Moreover, for $x_1$ and $x_2$ in the closed ball of center $0$ and radius $r$,
$$\|p(x_1) - p(x_2)\| \le k(r)\, \|x_1 - x_2\|,$$
where the continuous mapping $k$, defined over $[0, a]$ with $a > 0$, has its values in $\mathbb{R}^{+*}$ and $\lim_{r \to 0} k(r) = 0$. Then $x = 0$ is an asymptotically stable equilibrium position of differential equation [8.24]$^1$.

PROOF.– $\psi_{x_0}(t)$ denotes the solution of equation [8.24] such that $\psi_{x_0}(0) = x_0$. Let $x_1$ and $x_2$ be close to the position $0$. We write $\Delta(t) = \psi_{x_2}(t) - \psi_{x_1}(t)$ and $\rho(t) = \|\Delta(t)\|^2$:
$$\rho(t) = \sum_{j=1}^{n} |\delta_j(t)|^2 = \sum_{j=1}^{n} \delta_j(t)\, \overline{\delta_j(t)},$$
where $\delta_j(t)$ represents the component number $j$ of $\Delta(t)$ (the complex conjugate of $\delta_j(t)$ is denoted by $\overline{\delta_j(t)}$). It can be verified that
$$\frac{d\rho(t)}{dt} = 2\, \mathrm{Re}\Big(\sum_{j=1}^{n} \frac{d\delta_j}{dt}\, \overline{\delta_j(t)}\Big) = 2\, \mathrm{Re}\Big(\Delta(t)^{*}\, \frac{d\Delta(t)}{dt}\Big).$$
According to differential equation [8.24],
$$\frac{d\Delta(t)}{dt} = \frac{d\psi_{x_2}(t)}{dt} - \frac{d\psi_{x_1}(t)}{dt} = A\, \Delta(t) + p(\psi_{x_2}(t)) - p(\psi_{x_1}(t)).$$
Hence,
$$\frac{d\rho}{dt} = 2\, \mathrm{Re}\big(\Delta(t)^{*} A\, \Delta(t)\big) + 2\, \mathrm{Re}\big(\Delta(t)^{*}\, [p(\psi_{x_2}(t)) - p(\psi_{x_1}(t))]\big).$$
According to the theorems relating to the triangulation of endomorphisms, for all $\eta \in \mathbb{R}^{+*}$ we can find a basis of $\mathbb{R}^n$ in which the mapping $A$ can be written
$$A = \begin{bmatrix} \lambda_1 & a_{1,1} & \cdots & a_{1,n-1} \\ 0 & \ddots & \ddots & \vdots \\ \vdots & \ddots & \ddots & a_{n-1,n-1} \\ 0 & \cdots & 0 & \lambda_n \end{bmatrix} \quad\text{where}\quad |a_{i,j}| \le \eta \ \text{for}\ i \le j.$$
186
Then,
$$2\, \mathrm{Re}\big(\Delta(t)^{*} A\, \Delta(t)\big) \le \mathrm{Re}\Big(\sum_{j=1}^{n} \lambda_j\, |\delta_j(t)|^2 + \sum_{i \le j} a_{i,j}\, \delta_i(t)\, \overline{\delta_j(t)}\Big).$$
We note that
$$\big|\delta_i(t)\, \overline{\delta_j(t)}\big| \le \dots$$

[…]

…for $|\mathrm{Tr}(G)| > 2$, the mapping $G$ is unstable.

PROOF.– Let $\lambda_1$ and $\lambda_2$ be the eigenvalues of the mapping $G$. They satisfy the characteristic equation
$$\lambda^2 - \mathrm{Tr}(G)\, \lambda + 1 = 0, \quad\text{with discriminant}\quad \Delta = [\mathrm{Tr}(G)]^2 - 4. \qquad [9.5]$$
First case: for $|\mathrm{Tr}(G)| < 2$, the roots $\lambda_1$ and $\lambda_2$ of [9.5] are complex conjugate, $\lambda_2 = \overline{\lambda_1}$ and $\lambda_1 \lambda_2 = 1$, or equivalently $\lambda_1 = e^{i\alpha}$ and $\lambda_2 = e^{-i\alpha}$. The associated diagonal matrix representing $G$ in the eigenvector basis is
$$G_1 = \begin{bmatrix} e^{i\alpha} & 0 \\ 0 & e^{-i\alpha} \end{bmatrix},$$
and, in a new basis of the two-dimensional complex vector space, we obtain the matrix representing $G$:
$$G_1 = P^{-1}\, G\, P, \quad\text{with}\quad G = \begin{bmatrix} \cos\alpha & \sin\alpha \\ -\sin\alpha & \cos\alpha \end{bmatrix},$$
where $P$ denotes the matrix of the change of coordinates. Furthermore, $G$ is stable. Indeed,
$$\forall\, \varepsilon \in \mathbb{R}^{+*},\ \exists\, \delta = \varepsilon,\ \forall x,\ \|x\| \le \varepsilon \Longrightarrow \forall n \in \mathbb{N},\ \|G^n(x)\| \le \varepsilon,$$
because $G^n$ is the rotation of angle $n\alpha$, which conserves distances to the fixed point $(0, 0)$.

Second case: for $|\mathrm{Tr}(G)| > 2$, the roots $\lambda_1$ and $\lambda_2$ of [9.5] are real with, for example, $\lambda_1 > 1$ and $\lambda_2 = \lambda_1^{-1}$. For an appropriate change-of-coordinates matrix $P$, the mapping $G$ is represented by the matrix
$$G_1 = P^{-1}\, G\, P \quad\text{where}\quad G_1 = \begin{bmatrix} \lambda_1 & 0 \\ 0 & \dfrac{1}{\lambda_1} \end{bmatrix}.$$
Furthermore, $G_1$ is unstable. Indeed, the instability is written as
$$\exists\, \varepsilon \in \mathbb{R}^{+*},\ \forall\, \delta \in \mathbb{R}^{+*},\ \exists\, x_1,\ \|x_1\| \le \delta,\ \text{and}\ \exists\, n_1 \in \mathbb{N} \ \text{such that}\ \|G_1^{n_1}(x_1)\| > \varepsilon, \qquad [9.6]$$
with
$$G_1^{n_1} = \begin{bmatrix} \lambda_1^{n_1} & 0 \\ 0 & \dfrac{1}{\lambda_1^{n_1}} \end{bmatrix}.$$
Let us choose $\varepsilon = 1$; for all $\delta$ belonging to $\mathbb{R}^{+*}$, let us choose $x_1$ with $\|x_1\| = \dfrac{\delta}{2}$ along the eigenvector associated with the eigenvalue $\lambda_1 > 1$, and $n_1$ such that $\lambda_1^{n_1}\, \dfrac{\delta}{2} \ge 1$, i.e.
$$n_1 \ge \frac{\ln \dfrac{2}{\delta}}{\ln \lambda_1}.$$
Then, condition [9.6] for instability is satisfied. In the case $|\mathrm{Tr}(G)| = 2$, we cannot draw any general conclusions.

9.4. Strong stability in periodic Hamiltonian systems

We consider the case of the periodic differential equation [9.4] when $\det G = 1$. The associated mechanical systems are said to be Hamiltonian.

DEFINITION 9.8.– Consider two $T$-periodic linear differential equations
$$\frac{dx}{dt} = f_1(t)\, x \quad\text{and}\quad \frac{dx}{dt} = f_2(t)\, x; \qquad [9.7]$$
the maximum of the distance between the scalars $f_1(t)$ and $f_2(t)$, when $t$ belongs to $[0, T]$, is called the distance between the two linear differential equations.
The Stability of Periodic Systems
193
DEFINITION 9.9.– The trivial solution $0$ of a $T$-periodic linear Hamiltonian system
$$\frac{dx}{dt} = f(t)\, x \qquad [9.8]$$
is strongly stable if and only if $0$ is Lyapunov stable and if there exists a neighborhood of $f(t)$ such that, in this neighborhood, the Hamiltonian systems are also Lyapunov stable.

It follows that, in the case of a plane (called the phase plane), we have the result:

THEOREM 9.5.– The trivial solution of differential equation [9.8] is strongly stable for $|\mathrm{Tr}(G)| < 2$.

Indeed, from the continuity of the trace of linear operators, it follows that if $|\mathrm{Tr}(G)| < 2$, the mapping $G'$ associated with a system close to $\dfrac{dx}{dt} = f(t)\, x$ is also such that $|\mathrm{Tr}(G')| < 2$.

9.5. Study of the Mathieu equation. Parametric resonance

The Mathieu equation is the second-order scalar differential equation
$$\ddot x = -\omega^2 (1 + \varepsilon \cos t)\, x,$$
where ω ∈ R+∗ is called the eigenfrequency of [9.9] and ε ∈ R+∗ is a small parameter (ε 1). Let us note: the fact that ε is positive is of no importance. This is a particular case of the Hill equation corresponding to the motion differential equation of pendulum whose length varies periodically with a small amplitude of 2π period 2π. The pendulum period is close to . The system of Hamilton equations ω associated with differential equation [9.9] is depicted by point (ω, ε) in the half-plane. Stable systems correspond to an open set in the half-plane such that |Tr(G)| < 2; unstable systems correspond to |Tr(G)| > 2; the limit of stability corresponds to |Tr(G)| = 2. T HEOREM 9.6.–All points on the axis ω corresponding to ε = 0, except points k ω = , k ∈ N , are associated with strong stable systems. 2 P ROOF.– Determine the matrix of the mapping G = gT for ε = 0 and T = 2π in the basis of the phase plane associated with x1 = x and x2 = x. ˙ Equation [9.9] becomes ⎧ ⎨ x˙ 1 = x2 , ⎩
x˙ 2 = −ω 2 x1 ,
which has the general solution

    x = x1 = c1 cos ωt + c2 sin ωt,
    ẋ = x2 = −c1 ω sin ωt + c2 ω cos ωt.

– The solution corresponding for t = 0 to the initial conditions associated with the first basis vector (1, 0)ᵀ, defined by x1 = x = 1 and x2 = ẋ = 0, is

    x = cos ωt,
    ẋ = −ω sin ωt.

– The solution corresponding for t = 0 to the initial conditions associated with the second basis vector (0, 1)ᵀ, defined by x1 = x = 0 and x2 = ẋ = 1, is

    x = (1/ω) sin ωt,
    ẋ = cos ωt.

The linear mapping G = gT associates with the vector (1, 0)ᵀ the vector

    (cos ωt, −ω sin ωt)ᵀ at t = 2π, i.e. (cos 2ωπ, −ω sin 2ωπ)ᵀ,

and associates with the vector (0, 1)ᵀ the vector

    ((1/ω) sin ωt, cos ωt)ᵀ at t = 2π, i.e. ((1/ω) sin 2ωπ, cos 2ωπ)ᵀ.

Hence, in the basis (1, 0)ᵀ, (0, 1)ᵀ,

    G = gT = ( cos 2ωπ       (1/ω) sin 2ωπ )
             ( −ω sin 2ωπ    cos 2ωπ       ).

We deduce |Tr(G)| = 2 |cos 2ωπ| and |Tr(G)| < 2 for ω ≠ k/2, k ∈ N∗.
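The proof can be checked numerically. The short Python sketch below (the RK4 integrator and function names are ours) integrates ẍ = −ω²x over one period from the two canonical initial conditions and recovers Tr G = 2 cos 2ωπ:

```python
import math

def monodromy_trace(omega, T=2 * math.pi, steps=20000):
    """Trace of the period map G for x'' = -omega^2 x: integrate over [0, T]
    by classical RK4 from (1, 0) and (0, 1); the columns of G are the images
    of the two canonical basis vectors, so Tr G = x(T)|_(1,0) + v(T)|_(0,1)."""
    def rhs(x, v):
        return v, -omega**2 * x
    def flow(x, v):
        h = T / steps
        for _ in range(steps):
            k1x, k1v = rhs(x, v)
            k2x, k2v = rhs(x + h/2*k1x, v + h/2*k1v)
            k3x, k3v = rhs(x + h/2*k2x, v + h/2*k2v)
            k4x, k4v = rhs(x + h*k3x, v + h*k3v)
            x += h/6*(k1x + 2*k2x + 2*k3x + k4x)
            v += h/6*(k1v + 2*k2v + 2*k3v + k4v)
        return x, v
    x1, _ = flow(1.0, 0.0)     # first column of G
    _, v2 = flow(0.0, 1.0)     # second column of G
    return x1 + v2

# Agrees with Tr G = 2 cos 2*pi*omega; |Tr G| < 2 except at omega = k/2
for omega in (0.3, 0.75, 1.2):
    assert abs(monodromy_trace(omega) - 2 * math.cos(2 * math.pi * omega)) < 1e-6
```

The same numerical monodromy computation applies verbatim to ε ≠ 0 by replacing the right-hand side with −ω²(1 + ε cos t) x, which is how the trace of G is usually obtained in practice.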
It is possible to prove that the domain of instability of the Mathieu equation in the half-plane (ω, ε) touches the axis ω at the points ω = k/2, k ∈ N∗. The corresponding
values of k are called the parametric resonance values. At these points of the half-plane (ω, ε), the linear systems may become unstable. This fact was experimentally observed for a frictionless swing, which becomes unstable if the user changes the distance of the mass center from its fixed pivot – even if it is a very small change – with a periodic variation; the variation is associated with the values of k. The parametric resonance is strongest at k = 1, corresponding to the first point on the axis ω. In fact, in real material systems, friction intervenes and the equation to consider is no longer the Mathieu equation but an equation of the form

    ẍ + χẋ + ω²(1 + ε cos t) x = 0,

where χ is a small friction parameter. As χ becomes smaller, the domain of instability comes closer to the axis ω. Experimentally, the parametric resonance only manifests for k = 1, k = 2 and, much more rarely, for k = 3. Indeed, stability problems are difficult to study, as it is rarely possible to make G = gT explicit. Most often, we determine the trace of G by a numerical integration of the equation of motion over the interval [0, T]. In the particular case where f(t) is close to a constant, it is possible to use simple reasoning. This is the case we now consider.

9.6. A completely integrable case of the Hill equation

Let us determine the domain of stability of the dynamical system represented by the Hill equation:

    ẍ = −f²(t) x,    [9.10]
where

    f(t) = ω + ε for 0 < t ≤ π,
    f(t) = ω − ε for π < t ≤ 2π,

and f(t + 2π) = f(t) for all t ∈ R+∗, with ω ∈ R+∗ and ε ∈ R+∗ such that ε ≪ inf(1, ω). Differential equation [9.10] can be written in the matrix form Ẋ = F(t) X, where F(t) is linear:

    ẋ1 = x2,
    ẋ2 = −f²(t) x1,

with

    F(t) = ( 0        1 )
           ( −f²(t)   0 ).

The mapping G corresponding to the flow over the period T = 2π is linear. Let us write g = gT ∘ g(T/2)⁻¹. The mapping g is linear because gT and g(T/2) are linear. The same holds for gt, associated with the flow at the instant t when t = 0 is the origin of time. With the coefficient ω1 = ω + ε being constant, differential equation [9.10]
is integrable on ]0, π]; similarly, with the coefficient ω2 = ω − ε being constant, the differential equation is integrable on ]π, 2π].

Let us determine g(T/2) and g in the canonical basis (1, 0)ᵀ, (0, 1)ᵀ of the phase plane, corresponding to {x = 1, ẋ = 0} and {x = 0, ẋ = 1}, respectively.

Computation of g(T/2) on [0, π]

Differential equation [9.10] yields

    x = b1 cos ω1t + b2 sin ω1t  and  ẋ = −b1ω1 sin ω1t + b2ω1 cos ω1t.

a) For t = 0, {x = 1, ẋ = 0} implies b1 = 1, b2 = 0; hence,

    x = cos ω1t,  ẋ = −ω1 sin ω1t.

For t = π,

    x = cos ω1π,  ẋ = −ω1 sin ω1π.

b) For t = 0, {x = 0, ẋ = 1} implies b1 = 0, b2 = 1/ω1; hence,

    x = (1/ω1) sin ω1t,  ẋ = cos ω1t.

For t = π,

    x = (1/ω1) sin ω1π,  ẋ = cos ω1π.
Finally,

    g(T/2) = ( cos ω1π       (1/ω1) sin ω1π )
             ( −ω1 sin ω1π   cos ω1π        ).

By writing c1 = cos ω1π and s1 = sin ω1π, we obtain

    g(T/2) = ( c1       s1/ω1 )
             ( −ω1 s1   c1    ).

Computation of g on [π, 2π]

Differential equation [9.10] yields

    x = d1 cos ω2t + d2 sin ω2t  and  ẋ = −d1ω2 sin ω2t + d2ω2 cos ω2t.

a) For t = π, {x = 1, ẋ = 0} implies d1 = cos ω2π, d2 = sin ω2π; hence,

    x = cos ω2π cos ω2t + sin ω2π sin ω2t,
    ẋ = −ω2 cos ω2π sin ω2t + ω2 sin ω2π cos ω2t.

For t = 2π,

    x = cos ω2π,  ẋ = −ω2 sin ω2π.

b) For t = π, {x = 0, ẋ = 1} implies d1 = −(1/ω2) sin ω2π, d2 = (1/ω2) cos ω2π; hence,

    x = −(1/ω2) sin ω2π cos ω2t + (1/ω2) cos ω2π sin ω2t,
    ẋ = sin ω2π sin ω2t + cos ω2π cos ω2t.
For t = 2π,

    x = (1/ω2) sin ω2π,  ẋ = cos ω2π.

Therefore, g can be written as

    g = ( c2       s2/ω2 )
        ( −ω2 s2   c2    )  with c2 = cos ω2π and s2 = sin ω2π,

and

    gT = g ∘ g(T/2) = ( c2       s2/ω2 ) ( c1       s1/ω1 )
                      ( −ω2 s2   c2    ) ( −ω1 s1   c1    ).

Finally, we obtain

    |Tr(gT)| = |Tr G| = |2 c1c2 − (ω2/ω1 + ω1/ω2) s1s2|.

We compare |Tr G| and 2, with Tr G being an implicit function of ω and ε.

Study of Tr G

We have

    2 c1c2 = 2 cos π(ω + ε) cos π(ω − ε) = cos 2πε + cos 2πω,
    2 s1s2 = cos 2πε − cos 2πω.

Furthermore,

    ω1/ω2 + ω2/ω1 = (ω1² + ω2²)/(ω1ω2) = ((ω + ε)² + (ω − ε)²)/(ω² − ε²) = 2 (1 + 2ε²/ω² + O(ε⁴)).

Let us note

    Δ = 2ε²/ω² + O(ε⁴),
then,

    Tr G = cos 2πε + cos 2πω − (1 + Δ)(cos 2πε − cos 2πω)
         = −Δ cos 2πε + (2 + Δ) cos 2πω.

The boundaries of stability correspond to

    −Δ cos 2πε + (2 + Δ) cos 2πω = ±2.

Two cases are deduced.

First case:

    cos 2πω = (2 + Δ cos 2πε)/(2 + Δ).
Since cos 2πω ≈ 1 and ω is of the form ω = k + α, with |α| ≪ 1, k ∈ N,

    cos 2πω = cos 2πα = 1 − 2π²α² + O(α⁴).

Furthermore,

    cos 2πω = 1 − Δ(1 − cos 2πε)/(2 + Δ).

Therefore, 2π²α² + O(α⁴) = Δπ²ε² + O(ε⁴), with ω − k = ± ε²/ω + o(ε²), which implies

    ω = k ± ε²/k + o(ε²).    [9.11]
The boundary stability curves represented by equations [9.11] are approximately small arcs of parabolas. Indeed, for small ε, the domain of instability is very thin (thickness of order ε²).

Second case:

    cos 2πω = (−2 + Δ cos 2πε)/(2 + Δ).
Since cos 2πω = −1 + o(ε) and ω is of the form ω = k + 1/2 + α, with |α| ≪ 1, k ∈ N,

    cos 2πω = cos(2πα + π) = −1 + 2π²α² + o(α²),

or

    cos 2πω = −1 + Δ(1 + cos 2πε)/(2 + Δ).

Therefore, 2π²α² + o(α²) = Δ, with ω − k − 1/2 = ± ε/(πω) + o(ε), which implies

    ω = k + 1/2 ± ε/(π(k + 1/2)) + o(ε).    [9.12]
Figure 9.1. The graph on the left-hand side corresponds to the perfect (frictionless) case. The graph on the right-hand side corresponds to the case with friction. The colored zones represent the domains of instability due to small periodic variations of frequency ω. For a color version of this figure, see www.iste.co.uk/gouin/mechanics.zip
The stability boundary curves represented by equations [9.12] are half-lines passing through the points ω = k + 1/2. The interiors of the colored zones, whose thickness is of order ε, are domains of instability: see the graph on the left-hand side of Figure 9.1. In the case of friction, the arcs of curves are deformed and do not reach the axis ω. For the domains of instability, we obtain (this has not been demonstrated here) the interiors of the colored zones in the graph on the right-hand side of Figure 9.1.
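The closed form obtained for Tr G lends itself to a direct numerical check. The sketch below (the function name is ours) evaluates it and confirms stability away from the resonances ω = k/2 and instability inside the thin zones touching the axis:

```python
import math

def hill_trace(omega, eps):
    """Closed-form Tr G for the piecewise-constant Hill equation:
    f(t) = omega + eps on ]0, pi], f(t) = omega - eps on ]pi, 2 pi]."""
    w1, w2 = omega + eps, omega - eps
    c1, s1 = math.cos(math.pi * w1), math.sin(math.pi * w1)
    c2, s2 = math.cos(math.pi * w2), math.sin(math.pi * w2)
    return 2 * c1 * c2 - (w1 / w2 + w2 / w1) * s1 * s2

# |Tr G| < 2 away from omega = k/2 (stability); |Tr G| > 2 inside the
# instability zones touching the axis at the parametric resonance values
assert abs(hill_trace(0.75, 0.1)) < 2    # away from resonance: stable
assert abs(hill_trace(1.0, 0.1)) > 2     # omega = 1: thin parabolic zone
assert abs(hill_trace(0.5, 0.1)) > 2     # omega = 1/2: zone of thickness ~ eps
```

Scanning ω at fixed small ε with this function reproduces the alternating pattern of Figure 9.1: narrow unstable intervals of width of order ε² around the integers and of order ε around the half-integers.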
10 Problems and Exercises
Exercise 1

At equilibrium, determine the plane curves of length L of a homogeneous, flexible and inextensible wire connecting two given points A, B and subject to gravity acceleration.

Exercise 2

Prove that a necessary and sufficient condition for a curve to be traced on a sphere is that the radius of curvature R and the radius of torsion τ are related by the relation

    d/ds (τ dR/ds) + R/τ = 0.
Exercise 3

Study of the equilibrium interface of an incompressible liquid between two rectangular vertical flat plates. A knowledge of the concept of fluid capillarity is necessary for understanding this exercise. The interfacial section with an orthogonal plane at one extremity is represented by an unknown function y = y(x). It is assumed that the plate length is very large with respect to b − a (see the following figure).

1) Calculate the volume V of liquid in the parallelepipedic domain given in the figure.

2) Calculate the total potential energy due to gravity Wp in the parallelepipedic domain; Wc1, the capillary energy in the form σ1 S, where σ1 is a constant that represents the liquid–wall superficial tension and S is the area of the contact surface of the liquid with the walls along the plate length; and Wc2, the capillary energy in the form σ2 Σ, where
σ2 is a constant representing the liquid–air superficial tension and Σ is the area of the liquid interface along the plate length.

3) Find three relations relating x, y, σ1 and σ2.

4) a) Prove that the connecting angles ϕ of the curved interface at points A and B are such that

    sin ϕA = − sin ϕB = σ1/σ2.

b) Prove that the curve y = y(x) satisfies the Jurin equation written as

    ρ g (y − y0) = σ2/R,

where ρ denotes the constant density of the liquid, g denotes the acceleration due to gravity, R denotes the local radius of curvature of the curve y = y(x) and y0 is an integration constant.
Exercise 4

The notations of Exercise 3 remain valid for the case of a liquid contained between two coaxial vertical circular cylinders made of the same material, with radii r0 and r1, where r0 < r1, and closed at the bottom by a horizontal disk.

1) Calculate the volume V of liquid in the domain between the two cylinders.

2) Calculate the total potential energy due to gravity Wp of the liquid domain; Wc1, the capillary energy of the form σ1 S, where σ1 is a constant representing the liquid–wall superficial tension and S is the area of the contact surface of the liquid with the walls; and Wc2, the capillary energy of the form σ2 Σ, where σ2 is a constant representing the liquid–air superficial tension and Σ is the area of the liquid–air interface (called the meniscus).
3) Prove that the meridian curve z = z(r), with r0 ≤ r ≤ r1, of the meniscus satisfies the equation called Jurin's equation,

    ρ g (z − z0) = σ2 (1/R1 + 1/R2),

where ρ denotes the constant density of the liquid, g denotes the acceleration due to gravity, z0 is an integration constant, and R1 and R2 are the principal radii of curvature of the surface along the meridian curve.

4) Prove that the connecting angles ϕ between the meridian of the meniscus and the walls verify

    sin ϕr0 = − sin ϕr1 = σ1/σ2.
Exercise 5

A surface (S) has equation F(M) = 0. Among all curves on (S) connecting two given points A and B, we look for the curves whose length is extremal (they are called geodesics of the surface).

1) In a parametric representation, the curve is written as

    M = f(u), u ∈ [α, β], with A = f(α), B = f(β).

At any point, we denote by T the unit vector of the oriented tangent to the curve, N its principal unit normal and R its radius of curvature. Prove that

    N/R + λ(u) (du/ds) grad F = 0,

and conclude that one characteristic property of the geodesics of the surface (S) is that the osculating plane of the geodesic is normal to (S).

2) Geodesics of the sphere:

a) prove that the geodesics of the sphere are parts of great circles,

b) obtain the result again by finding the extremum of the integral L = ∫_A^B ds.

3) Study the geodesics of circular cylinders.

4) Study the geodesics of the cone of revolution:

a) by writing the equation of the cone in orthonormal axes as r = az, where (r, θ, z) is the cylindrical representation,

b) by writing the equation of the cone in orthonormal axes as θ = θ(z).
5) Study of the geodesics of a surface expressed as a function of two parameters (u, v).

a) Prove that the first fundamental form of the surface is ds² = E du² + 2F du dv + G dv². Write ds² if the coordinate lines are orthogonal.

b) Let ds² = (du² + dv²)/u² be the first fundamental form of a surface (S), expressed as a function of the parameters (u, v), where u is assumed to be positive. The point M(u, v) of (S) corresponds to the point P whose coordinates in an auxiliary plane (Π) are (u, v). Determine the geodesics of the surface (S) and their images in the plane (Π).

Exercise 6

Let F(M) = 0 be the equation of a surface (S) and let A and B be two given points on the surface. We recall that a necessary and sufficient condition for a curve on this surface, connecting A and B and written in a parametric representation as

    M = f(u), u ∈ [α, β],
with A = f(α), B = f(β), to be of minimum length is

    N/R + λ(u) (du/ds) grad F = 0,

where N denotes the principal unit normal to the curve, R is its radius of curvature, s is the curvilinear abscissa and λ(u) is an appropriate scalar field.

1) Since one of the characteristic properties of a helix is that its tangent vector makes a constant angle with a given fixed direction, prove that the geodesics on the cylinder with a circular base and the axis Oz are helices. Is it possible to forecast this result?

2) Prove that the geodesics on a surface of revolution generally satisfy

    r² dθ/ds = k,

where k is a constant. Obtain the result of 1) again.
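As a numerical illustration of the relation r² dθ/ds = k (not part of the exercise; the parametrization and names below are ours), one can evaluate it along a geodesic of a cone of revolution, whose geodesics unroll to straight lines in the developed plane:

```python
import math

def clairaut_value(alpha, d, t):
    """On the cone r = z tan(alpha), a geodesic develops to the straight line
    x = d in the unrolled plane; with polar (rho, phi) there and
    theta = phi / sin(alpha), return r^2 * dtheta/ds at arc-length
    parameter t along the line x = d, y = t (so ds = dt)."""
    sa = math.sin(alpha)
    rho2 = d * d + t * t          # rho^2 along the developed line
    dphi_dt = d / rho2            # phi = atan(t / d)
    r2 = rho2 * sa * sa           # r = rho * sin(alpha)
    return r2 * dphi_dt / sa      # dtheta/ds = (dphi/ds) / sin(alpha)

alpha, d = 0.6, 2.0
values = [clairaut_value(alpha, d, t) for t in (-3.0, 0.0, 1.0, 5.0)]
# r^2 dtheta/ds equals the same constant k = d sin(alpha) at every point
assert max(values) - min(values) < 1e-12
```

The constant k = d sin α is the distance of the developed line to the cone vertex, scaled by sin α, in agreement with the general statement for surfaces of revolution.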
Exercise 7

Let (C) be a plane curve connecting the given points A and B. The plane orthonormal frame is Oxz. Consider the surface generated by (C) when Oxz rotates around Oz.

1) Determine (C) such that the area of the generated surface is minimal. Find the result by using:

a) the analytical calculus,

b) the geometrical calculus.

2) Determine (C) such that, when the generated area is given, the volume resulting from the rotation of (C) is maximum.

Exercise 8

Consider a = ∫_(C) V · dM, where V is a differentiable vector field in Rⁿ and (C) is a differentiable curve connecting two given points A and B. The relation that gives the variation of a curvilinear integral is

    δa = [V · δM]_A^B + ∫_AB (δV · dM − dV · δM).

In the Euclidean plane, determine the differential equation of the curves (C) connecting A and B so that

    a = ∫_(C) n ds

is maximal, where n is a field that associates with each point M the scalar n = φ(M). Give in geometric form the differential equation that relates grad n, the oriented unit tangent, the unit normal, and the algebraic radius of curvature of the extremal curves. Determine the equations of the extremal curves when φ(M) = e^y, where y is the ordinate of M in a system of orthonormal axes.

Exercise 9

Equilibrium shape of a wire on a surface of revolution.

A. Analytical method. An inextensible homogeneous wire (C) of linear mass ρ, connecting two given points A and B, is located without friction on the surface (S) of
revolution with a vertical axis. The cylindrical representation of the surface of revolution is r = f(z), where the direction z is the vertical ascendant. The acceleration due to gravity is denoted by g. The curve (C) is defined by θ = θ(z).

1) Prove that the length element of the wire is

    ds = √(1 + f′²(z) + f²(z) θ′²) dz.

2) Write the differential equation satisfied by the wire on the surface (S) as a function of z, θ′, f, f′.

3) Instead of writing θ = θ(z) for the representation of (C), we choose z = z(θ). Write the differential equation satisfied by the wire on the surface (S) as a function of z, z′, f, f′.

4) Study the equilibrium shape of a wire on a vertical circular cylinder.

B. Geometrical method.

1) The wire connecting A and B, which may coincide, is located on a surface (S) (not necessarily a surface of revolution) of equation F(M) = 0. Prove that the differential equation of the curve (C) is

    (z − h) dT/ds − (1 − T T) grad(z − h) − μ grad F = 0,

where h and μ are constants and T is the unit tangent vector to the curve (C).

2) The surface (S) is a surface of revolution around the axis Oz. A point M on the surface has coordinates (r, θ, z) in the moving system (u, w, k) of the cylindrical representation of the axis Oz (OM = r u + z k).

a) Write the components of T in the (u, w, k) frame. Deduce the matrix 1 − T T and the components of dT/ds in the basis (u, w, k).

b) The coordinate lines of the surface are defined by z = cte (called "parallels") and θ = cte (called "meridians").

i) Determine the components of the unit vectors v1 and v2 tangent to the coordinate lines in the basis (u, w, k).

ii) Determine the components of the unit normal n to the surface (S) in the basis (u, w, k).

3) From 2a), 2b) and 1), deduce the three scalar equations satisfied by the curve (C).
Integrate the simplest of the equations to obtain the result derived from method A, i.e. the equation

    (z − h) r² dθ/ds = Cte.
Exercise 10

An inextensible, homogeneous and flexible wire is located on the surface of a cone of revolution with vertical downward axis Oz. The acceleration due to gravity is denoted by g. The cylindrical representation is (r, θ, z), and O denotes the vertex of the cone. The equation of the cone is

    r = z tan α,  z > 0,

where α denotes the half-angle of the cone. We will study two cases:

– case a: the wire connects the given points (z0, θ0, r0) and (z1, θ1, r1) on the cone;

– case b: the wire is a closed curve rotating only once around the cone axis; the highest and lowest points of the wire are at levels z0 and z1, respectively.

1) By writing that the potential of the wire with a given length is minimum, obtain the differential equation that relates z and θ for the equilibrium shape. Prove that the equation can be written as r (h − z) cos i = k, where h and k are constants and i is the angle between the tangent to the wire and the corresponding parallel at the point associated with (r, z, θ). This equation gives θ as an integral function of z. For case a, write the two conditions that determine the constants h and k.

2) For case b, prove that there exists an equilibrium position for which the wire is a horizontal circle. Assuming that there is another solution, calculate h and k as functions of z0 and z1 and write the conditions that determine z0 and z1. Discuss the existence of such a solution according to the values of the angle α.

Exercise 11

Consider the half-plane of Oxy, where x > 0. We search for the extremals of the curvilinear integral

    ∫_AB ds/xᵐ.
1) Prove that along an extremal, we have the relation xᵐ = aᵐ sin α, where α is the angle made by the tangent of the extremal curve with the x axis and a is a constant. We choose 0 ≤ α ≤ π. What is the nature of a?

2) If m ≠ 0, the parametric equations of the extremals are written as

    x = a (sin α)^r,  y = y0 + r ∫_{π/2}^{α} x dα,  with r = 1/m.

What are the extremal curves obtained for m = 0?

3) Write the integral giving y for m = 1, m = 1/2, m = −1, m = −1/2. What is the nature of the corresponding curves?

4) Prove that, for a given m, the extremals are deduced from one of them by a translation parallel to Oy and a homothety of center O. Prove that each curve has an axis of symmetry.

5) Prove that if m > 0, the concavity of the extremal curve is towards x < 0. Prove that there exists an extremal curve connecting any two given points A and B in the half-plane x > 0.

6) Prove that if m < 0, then no extremal or two extremal curves (exceptionally, a single extremal curve) connect any two given points A and B in the half-plane x > 0.

Exercise 12

Consider the motions of a point P moving in the half-plane x > 0 from a given point A at instant t = 0 to another given point B at instant t = T > 0.

1) Determine the motion such that
    I = ∫_0^T ((a x)² − v²) dt

is extremal, where v is the velocity of P and a is a constant with the dimension of inverse time, chosen to be equal to 1. Such a motion is called the extremal motion. To this aim, it is possible to take t and y as unknown functions of x. The extremum of I is expressed by two first integrals. By eliminating y′, we can express t as a function of x; we will write x = a/cosh u and y as a function of u.
2) Consider the extremal motions between points A and B such that A has coordinates x = 1 and y = −1, and B has coordinates x = 1 and y = 1.

a) What are the trajectories of these motions?

b) For which motions is the time lapse minimum? What is the corresponding trajectory and time lapse? (Note that the minimum time lapse is not zero.)

3) Prove that the trajectory of an extremal motion with minimum time lapse is an extremal of ∫ ds/x. What are the corresponding trajectories?

Exercise 13

We search for the extremals y = f(x) of

    a = ∫_{x0}^{x1} x y y′ dx.
1) Write the Euler equation satisfied by the extremals; it is denoted by (1).

2) A parametric representation of the extremals is (x(t), y(t)), and the conjugate variables are p and q; write the corresponding Hamiltonian H(p, q, x, y) and the Hamilton equations.

3) Note that the integral a is invariant by the transformation

    Tt(x, y) = (t x, y/t).

Deduce the first integral u = p x − q y, denoted by (2), and write the first-order differential equation (3) in the form F(x, y, y′, u) = 0 verified by the extremals.

4) By using H, verify whether u is a first integral. Prove that (3) is a consequence of (1).

5) Write the Jacobi equation of the extremals and find a solution in the form S(x, y) = S1(x) + S2(y). Deduce the extremals in the form y = f(x), denoted by (4).

6) From solution (4), prove that v = p/x² is another first integral. What is the corresponding invariance group? Can this invariance be directly obtained from the integral a? Is it possible to deduce other linear first integrals from u and v? Examine their invariance group.

7) In the plane, can two points of coordinates (x0, y0) and (x1, y1) always be connected by an extremal curve? Calculate the corresponding value of a. Is the extremum of a a maximum or a minimum value?
Exercise 14

A. The brachistochrone – analytical method

A point M of mass 1, without initial velocity, describes without friction a curve connecting the origin O and a given point A situated in a vertical plane. We search for the curve (C) for which the time lapse is minimum. The curve is described in orthonormal axes by y = y(x), where y is the coordinate corresponding to the vertical ascendant direction. The acceleration due to gravity is denoted by g.

1) After using the energy conservation theorem, prove that the time lapse to go from O(0, 0) to A(a, y(a)), with y(a) < 0, is

    τ = ∫_0^a √((1 + y′²)/(−2gy)) dx = ∫_0^a f(y, y′) dx.

2) Prove that y(x) verifies the differential equation

    f − y′ ∂f/∂y′ = c,

where c is a constant.

3) Integrate this equation. Characterize the obtained curves.

B. The brachistochrone – geometrical method

1) Prove that
A
τ= O
T dM , v
where T is the unit tangent to the curve (C) and v is the velocity of the point M . 2) δτ being zero for any δM , prove that −
ρ N + (1 − T T ) grad v
1 = 0, v
where N is the principal unit normal to the curve (C) and ρ is the curvature at M . What does tensor (1 − T T ) mean?
Problems and Exercises
213
Deduce equation 1 = 1, R v N grad v
where R denotes the radius of curvature of (C) at M . 3) From expression of v used in I, prove that y = y(x) satisfies the second-order differential equation independent of x, −2yy” = 1 + y 2 . 4) By writing z(y) = y , get the result obtained in I. Exercise 15 A point M with mass m moves without friction along a curve (C) which connects a point A with a vertical plane (Π). The orthogonal projection of A on (Π) is denoted by H; we write AH = j 0 , where j 0 is unit vector, and i0 = j 0 ∧ k0 , where k0 denotes the vertical ascendant unit vector. The acceleration due to gravity is denoted by g. We write AM .k0 = z. Let s be a ds curvilinear abscissa of (C) such that = v > 0, and T be the unit vector tangent to dt dM = vT. (C) such that dt 1) By using energy conservation E, where E > 0, prove that the time lapse τ to reach plane (Π) satisfies
    √(2E/m) τ = ∫_(C) (T · dM)/√(1 − hz),

where the constant h is a function of g and E.

2) Determine the conditions verified by (C) such that τ is minimal. Deduce that

    (T ∧ k0)/√(1 − hz) = C,

where C is a constant vector.
3) It is assumed that C = c j0 and ϕ = (j0, T). From R = ds/dϕ, where R is the radius of curvature of (C), give the curve (C) for which the time τ is minimal. What point in the plane (Π) is the curve extremity? Give the result as a function of the initial conditions.
Exercise 16

Consider the system of differential equations

    dx/du = 2x,
    dy/du = 3.

1) Prove the system has a unique solution, denoted by M = Tu(M0), for a given initial condition M0.

2) Prove that the mappings Tu form a Lie group.

3) Determine the infinitesimal displacement associated with Tu.

4) Study the same questions for the system of differential equations

    dx/du = y,
    dy/du = x.
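A small numerical sketch (not part of the exercise; the explicit solution below is computed by us, not given in the text) illustrates the one-parameter group property T_{u+v} = T_u ∘ T_v for the first system:

```python
import math

def T(u, m):
    """Flow of dx/du = 2x, dy/du = 3 from m = (x0, y0):
    explicitly, x(u) = x0 * exp(2u) and y(u) = y0 + 3u."""
    x0, y0 = m
    return (x0 * math.exp(2 * u), y0 + 3 * u)

m0 = (1.5, -2.0)
u, v = 0.4, 1.1
a = T(u + v, m0)
b = T(u, T(v, m0))
# Group law T_{u+v} = T_u o T_v and identity T_0
assert all(abs(p - q) < 1e-12 for p, q in zip(a, b))
assert T(0.0, m0) == m0
```

The associated infinitesimal displacement is obtained by differentiating at u = 0, which recovers the right-hand side (2x, 3) of the system.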
Exercise 17

Let A be a C¹ mapping, X ∈ Rⁿ ⟶ Q ∈ Rᵖ. The linear tangent mapping at X is denoted by ∂Q/∂X. With the generating function G(Q, Q̇), with action

    a = ∫_{t0}^{t1} G(Q, Q̇) dt,

is associated the generating function

    Γ(X, Ẋ) = G(A(X), (∂Q/∂X) Ẋ),

with action

    b = ∫_{t0}^{t1} Γ(X, Ẋ) dt.
1) a) Express δb as a function of δX and δẊ; express δb as a function of δQ and δQ̇. Deduce ∂Γ/∂X and ∂Γ/∂Ẋ as functions of ∂G/∂Q and ∂G/∂Q̇.

b) Let (C) be a curve defined by the mapping t ∈ I ⟶ X(t), where I is an interval of R. Prove that (C) is an extremal curve of a if and only if (γ) = A(C) is an extremal curve of b.

c) If A is a diffeomorphism, prove that (γ) is an extremal curve of b if and only if (C) = A⁻¹(γ) is an extremal curve of a.

2) Consider a single-parameter group of diffeomorphisms {Tu, u ∈ R} such that Qu = Tu(Q) and Q̇u = (∂Qu/∂Q) Q̇. It is assumed that for any u in an interval J, the generating function G satisfies G(Qu, Q̇u) = G(Q, Q̇). Let W be the infinitesimal displacement associated with Tu. Prove that the quantity (∂G/∂Q̇) · W is constant over the extremals of the action a associated with G.

Exercise 18

A point M1 with mass m1 moves without friction on a horizontal plane (Π). An inextensible wire passing through a hole of (Π) connects M1 with a point M2 of mass m2. We study the motions of the system when M2 moves along the vertical upward axis Oz, where g is the acceleration due to gravity. We denote OM2 = z and (r, θ) the polar representation of M1 in (Π).

1) Write the Lagrange equations of motion. Analyze the terms of these equations and obtain a relation of the form ṙ² = f(r).

2) Discuss the motion. Find the relation that exists between the initial conditions when M2 remains fixed.

Exercise 19

A mechanical system has two degrees of freedom. Its kinetic energy is

    2T = (ẋ² + ẏ²)/y².

There is no potential energy.

1) With t and x being two secondary variables, write two first integrals and deduce the motion of the system (i.e. calculate x and y as functions of t).
2) The conjugate variables of x and y are denoted by p and q, respectively. Calculate the Hamiltonian of the system. Write the Hamilton equations and, by integration, obtain the results of 1).

3) Write the Jacobi equation and obtain the motion.

4) Writing that 2T is invariant by the homothety (x, y) → (αx, αy), where α is a constant scalar, obtain a third first integral of motion and prove that we obtain the trajectories of the motion without integrating.

Exercise 20

Consider two circles with the same axis Oz. We look for different methods to determine the surface of revolution supported by the two circles and whose area is minimum. The surface is given by its meridian curve. We use the semi-polar representation (r, θ, z).

1) Using the Euler method, find the meridian curve in the form z = f(r): determine the differential equation (E) verified by the meridian curve and integrate it.

2) For the representation z = z(α), r = r(α) of the meridian curve, write the Hamilton and Jacobi equations and obtain the equation (E) again.

3) Prove that the meridian curves are extremals of an integral of the form

    ∫_(C) n ds,
where n = g(r) has to be made explicit.

4) Obtain equation (E) by using Noether's theorem and give a geometric property of the center of curvature of the meridian curves thanks to the variation of curvilinear integrals.

Exercise 21

A mechanical system has two degrees of freedom. Its position is identified by two parameters x and y, and its kinetic energy is

    2T = y² ẋ² + x² ẏ².

There is no potential energy. We call p and q the conjugate variables of x and y, respectively.

1) What are the Lagrangian L(x, y, ẋ, ẏ) and the Hamiltonian H(x, y, p, q) of the system? Deduce the Lagrange equations (L), the Hamilton equations (H) and the Jacobi equation (J).
2) Prove that the Lagrangian is invariant by the transformations Tα depending on a parameter α and defined by

    Tα(x, y) = (x e^α, y e^−α).

From Noether's theorem, deduce a linear first integral in p and q (this first integral u is written in the form u = ap + bq, where a and b are functions of x and y). By using either the (L) equations or the (H) equations, prove that the dynamical variable u is a first integral.

3) Verify whether the dynamical variable

    v = (p/y) cos Log(x/y) + (q/x) sin Log(x/y)

is another first integral. Deduce a third first integral w.

4) Using these three first integrals and without carrying out any integration, find the equations of the trajectories.

5) Find a complete integral of the Jacobi equation when the generating function is written in the form

    S(x, y, t, E, c) = −Et + f(ξ) + g(η),  with ξ = xy, η = x/y,

where f and g are two functions to be determined. The complete integral depends on a parameter c, which is chosen as the value of the first integral u. Deduce the equations of motion. Obtain the equations of the trajectories again.

Exercise 22

A mechanical system has two degrees of freedom. Its position is identified by two positive parameters x and y. Its kinetic energy is

    2T = (y/x) ẋ² + (x/y) ẏ².

There is no potential energy. We call p, q and h the conjugate variables of x, y and t, respectively.

1) What is the Hamiltonian H(x, y, p, q) of the mechanical system? Write the Lagrange equations (L), the Hamilton equations (H) and the Jacobi equation (J).
2) Write and justify the relation existing between x, y, p, q expressing the first energy integral. The value of the energy is denoted by E.

3) Prove that the Hamiltonian action is invariant by the group of transformations Tα depending on a parameter α and defined by

    Tα(x, y, t) = (x e^α, y e^−α, t).

From Noether's theorem, deduce a linear first integral (1) in p and q (this first integral is written as u = a p + b q, with a and b being functions of x and y). Verify whether this dynamical variable is a first integral by using either the (L) equations or the (H) equations.

4) Consider the group of transformations

    Tλ(x, y, t) = (λx, λy, λⁿ t).

For what value of the constant n is this group also an invariance group? Find another first integral, which we denote by (2). Verify whether this dynamical variable is a first integral by using either the (L) equations or the (H) equations. Prove that there exists an instant t0 where the expression p x + q y is zero. The value of p x at this instant t0 is denoted by c. Express the values of the first integrals (1) and (2) as functions of E, t0 and c.

5) From the three first integrals and without integration, deduce a relation (3) between x, y, t and E, t0, c.

6) We denote θ = (1/2) Log(x/y); using the (H) equations and the first integrals as functions of t, E, t0 and c, calculate the quantity dθ/dt. Deduce t as a function of θ (this relation will be denoted by (4)).

7) From (3) and (4), deduce the equation of the trajectories relating x and y (this equation will be denoted by (5)).

8) By taking a generating function of the form

    S(x, y, t, E, c) = −Et + f(ξ) + g(η),  with ξ = xy and η = x/y,

where f and g are functions to be determined, find a complete integral of the Jacobi equation; this integral depends on a parameter c. Deduce the equations of motion and the relations (3) and (5).
Problems and Exercises
219
Exercise 23
A mechanical system has two degrees of freedom. Its position is defined by parameters ρ and θ, and its kinetic energy is 2T = ρ² (ρ̇² + ρ² θ̇²). There is no potential energy. We call p and q the conjugate variables of ρ and θ, respectively.
1) What is the Hamiltonian H(ρ, θ, p, q)? Write the Lagrange equations and the Hamilton equations. Write the Jacobi equation, for which a complete integral will be given, and deduce the equations of motion. In an auxiliary plane, the parameters ρ and θ denote the polar representation of the point M; verify that the trajectories of M are conics and give the law of motion on the trajectories.
2) From the invariance by rotation of the Lagrangian around the origin in the polar representation, deduce a first integral (this first integral may also be directly obtained using the Lagrange or Hamilton equations).
3) Prove that the dynamical variable u = (1/ρ²)(ρ p cos 2θ − q sin 2θ) is another first integral.
4) From the two first integrals, deduce a third first integral independent of the previous ones.
5) By eliminating p and q between the three first integrals, obtain the trajectories of M again.
6) By using the Maupertuis principle, obtain the trajectories of M again.
Exercise 24
A mechanical system has two degrees of freedom. Its position is given by parameters x and y, which are strictly positive. The kinetic energy is
2T = (x + y) (ẋ²/x + ẏ²/y),
and the time-independent potential of the forces applied to the mechanical system is denoted by W(x, y). We call p and q the conjugate variables of x and y.
1) What is the Hamiltonian H(x, y, p, q)?
2) What is the most general form the potential W(x, y) must have to obtain a separation of variables? In other words, the Jacobi partial differential equation will have a complete integral given by a generating function of the form S = ϕ(t) + f(x) + g(y).
3) Now, we assume W(x, y) = 0. What is the Hamiltonian? Write the Hamilton equations.
4) Using the Jacobi method, write the equations of motion. The equation expressing the time law is denoted by (I), the equation giving the trajectories is denoted by (II).
5) Prove that the dynamical variable α = x (p² − 2H) is a first integral.
6) Find the function b(x, y) of the two variables x and y, satisfying b(1, 1) = 1 and such that w = (p − q) b(x, y) is a first integral. Deduce that v = α/w is another first integral.
7) As the Poisson bracket of two first integrals is a first integral, deduce the first integral [v, w]. We denote u = (p x − q y)/(x + y). Prove that u is a first integral. Calculate u² + v² as a function of H and, from u, deduce the value of [u, v].
8) By eliminating p and q between the three first integrals u, v, w, obtain the expression (II) for the trajectories.
9) Prove that the transformation
Tλ (x, y, t) = (λ x, λ y, λ^n t)
keeps the Hamilton action invariant for a value of n to be evaluated. Deduce a first integral and obtain (I) again.
10) Let G1 be the invariance group corresponding to the first integral u, given by (x, y) = T_X^1 (x0, y0), T_X^1 being a transformation of the group G1.
By integrating the differential equation associated with G1, prove that if X is the parameter of the group, we have
x − y = x0 − y0 + X,   x y = x0 y0.
11) Same question for the first integral v, with T_Y^2 ∈ G2, G2 being the invariance group corresponding to v and Y the parameter. Prove that
x − y = x0 − y0,   2 √(x y) = 2 √(x0 y0) + Y.
12) Let us consider the change of variables (X, Y) → (x, y) defined by
(x, y) = T_X^1 ∘ T_Y^2 (0, 0).
Let P and Q be the conjugate variables of X and Y, respectively. Calculate the new Hamiltonian associated with X, Y.
Exercise 25
Consider a point M with mass m moving on a surface of revolution (S) with axis Ok0, where O is the origin. The acceleration due to gravity is denoted by g. The constraints are perfect and independent of time, and the point M is not subject to other forces. We choose a cylindrical representation such that OM = r er + z k0. The angle between the moving vector er and a fixed vector i0 orthogonal to k0 is denoted by θ.
1) Calculate the kinetic energy and the potential energy of the point M.
2) Write the differential equations of motion and deduce first integrals by using: a) the Lagrange method, b) the Hamilton method, c) Noether's method.
3) Use the method of re-injecting a partial result; what conclusion can you deduce?
Exercise 26
A point M with unit mass is attracted by a fixed center O by a force deriving from the potential W = −k²/(2 r⁴), where k is a given constant and r is the distance OM. As the trajectory is planar and the motion has central acceleration, we choose the polar representation (r, θ) to determine the position of M. The total energy of the material point is denoted by E and C is the constant of areas. The conjugate variables of r and θ are denoted by pr and pθ.
1) Calculate the Hamiltonian and write the Hamilton equations.
2) Deduce two first integrals. Write the equation of the trajectory as θ − θ0 = ∫_{r0}^{r} F(r) dr, F(r) being given as a function of E, C and k. Write the equation for the time law as t − t0 = ∫_{r0}^{r} G(r) dr, G(r) being given as a function of E, C and k. We do not integrate these two equations.
3) Study of the specific case of motion when the total energy E is zero.
a) Prove that F(r) = C/√(k² − C² r²). Deduce the trajectories and identify the nature of the curves.
b) Prove that G(r) = 1/√(k²/r⁴ − C²/r²). Deduce the time law.
4) Using the Maupertuis principle, define the trajectories as extremals of an integral of the form ∫_{s0}^{s1} n ds. Express n as a function of E and r. Obtain the trajectories when E = 0 (we study the extremals of a = ∫_{θ0}^{θ1} (√(r′² + r²)/r²) dθ).
5) Write the Lagrange equations of motion and deduce that it is possible to have motions where r remains constant. What are the corresponding initial conditions? What is the relation between E and C? Exercise 27 Consider a system (S) with two degrees of freedom. The position parameters are denoted by x and y and the associated conjugate variables are denoted by p and q, respectively.
1) System (S) is invariant under the group of transformations Tα defined by (x, y) = Tα (x0, y0), such that
x = (x0 + tan α)/(1 − x0 tan α),   y = y0 cos α − x0 sin α.
a) Verify that the transformations form a Lie group.
b) Calculate the infinitesimal displacement W1(x, y) of the group.
c) Give the first integral we can deduce from the group.
2) Let W2(x, y) = (1/y, 0) be a second infinitesimal displacement.
a) Express the associated Lie group.
b) Give the second first integral deduced from this group.
3) By using Poisson brackets, deduce a third first integral. Prove that this first integral expresses the invariance relative to a third group, after having indicated the corresponding infinitesimal displacement.
4) Then, deduce the trajectories of the motions of (S).
Exercise 28
A point of unit mass whose position is defined by the spherical representation (r, θ, ϕ) is subject to a force deriving from the potential W such that
W = f(r) + g(θ)/r² + h(ϕ)/(r² sin²θ),
where f, g and h are regular real functions of a real variable.
1) Form the Jacobi partial differential equation and give the solutions in the form of integrals. Deduce, in integral form, the system (E) of the three equations of motion (these integrals give the time law and trajectories).
2) Obtain system (E) again by using the Lagrange equations and the Hamilton equations.
3) When f, g and h are identically zero, calculate (E) and verify that the motion is what we expected.
Exercise 29
A system has three degrees of freedom. Its position is given by the dimensionless parameters x, y and z. Its kinetic energy T and potential energy W are given, respectively, by
2T = A (2 ẋ² + ẏ² cos²x + ż² sin²x),   W = A ω²/(4 sin² 2x),
where A is a constant with the dimension of a moment of inertia and ω is a constant with the dimension of an angular velocity.
1) Find the motion of the system when the initial conditions are
x0 = π/4,   ẋ0 = α ω,   ẏ0 = β ω,   ż0 = γ ω,
where α, β, γ are three real constants. Write the equations of motion and verify that it is possible to express ẏ and ż as functions of u = cos 2x, and obtain u as a function of t.
2) Prove that x lies between 0 and π/2. For what values of α, β, γ will we have a motion for which x remains constant?
3) The system is subject to the perfect constraint ẏ + ż = ϕ(x), where ϕ(x) is a continuous function of x. Prove that the equations of motion allow us to calculate ẏ and ż as functions of x. Then, obtain a differential equation for x. Express x, y, z as functions of time when ϕ(x) = 0.
Exercise 30
The change of variables (p, q) → (r, s) is defined by
r = α p + β q,   s = β p + α q.
What relation must α and β satisfy for the change of variables to be canonical? Deduce the generating function F(q, s).
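For one degree of freedom, a linear map of this kind is canonical exactly when the fundamental Poisson bracket of the new variables equals 1. Taking r as the new coordinate and s as the new momentum, {r, s}_{q,p} = β² − α², so the condition reads β² − α² = 1 (the opposite pairing gives α² − β² = 1). A minimal numeric sketch, where the sample value of u is an arbitrary assumption:

```python
import math

def poisson_bracket_rs(alpha, beta):
    # r = alpha*p + beta*q and s = beta*p + alpha*q, so the bracket
    # {r, s} = dr/dq * ds/dp - dr/dp * ds/dq is constant over the phase plane:
    dr_dq, dr_dp = beta, alpha
    ds_dq, ds_dp = alpha, beta
    return dr_dq * ds_dp - dr_dp * ds_dq   # = beta**2 - alpha**2

# The map is canonical when {r, s} = 1, i.e. beta^2 - alpha^2 = 1;
# for instance alpha = sinh(u), beta = cosh(u) for any real u.
u = 0.7
val = poisson_bracket_rs(math.sinh(u), math.cosh(u))
```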
Exercise 31
Consider a mechanical system with one degree of freedom whose phase is defined by the conjugate variables q and p, and the change of variables
s = Ln(sin p/q),   r = q cot p.
1) Prove that the change of variables is canonical.
2) Calculate the generating function F(s, p) of the change of variables when s and p are independent.
3) G(q, r) is the generating function when q and r are independent. What is the relation between F and G? Deduce the function G.
Exercise 32
Consider a mechanical system with one degree of freedom whose phase is defined by the conjugate variables q and p, and the change of variables
s = p^α sin βq,   r = p^α cos βq.
1) For which values of α and β is the change of variables canonical?
2) Calculate the different generating functions S(q, s), F(q, r), G(p, r), H(p, s).
Exercise 33
Consider the canonical change of variables (q, p) → (s, r) associated with the generating function S = −(s²/2) cot 2q. What is the canonical change of variables?
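Question 1 of Exercise 31 can be verified by hand or numerically: the bracket {s, r}_{q,p} evaluates to 1/sin²p − cot²p = 1 at every point of the phase plane. A short finite-difference sketch, where the sample points are arbitrary assumptions:

```python
import math

def s_of(q, p):
    return math.log(math.sin(p) / q)

def r_of(q, p):
    return q / math.tan(p)

def poisson_bracket(f, g, q, p, h=1e-6):
    # Central finite-difference approximation of {f, g} = f_q g_p - f_p g_q.
    f_q = (f(q + h, p) - f(q - h, p)) / (2 * h)
    f_p = (f(q, p + h) - f(q, p - h)) / (2 * h)
    g_q = (g(q + h, p) - g(q - h, p)) / (2 * h)
    g_p = (g(q, p + h) - g(q, p - h)) / (2 * h)
    return f_q * g_p - f_p * g_q

# Sample points with q > 0 and 0 < p < pi so that log(sin p / q) is defined.
samples = [(0.5, 0.7), (1.3, 1.1), (2.0, 0.4)]
brackets = [poisson_bracket(s_of, r_of, q, p) for q, p in samples]
```

A bracket identically equal to 1 is exactly the canonicity criterion for one degree of freedom.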
Exercise 34
Consider the change of variables (q, p) → (s, r) with
r = 2 (1 + √q cos p) √q sin p,   s = Ln(1 + √q cos p).
1) Prove that the change of variables is canonical.
2) Calculate the generating functions F(p, s) and S(q, s) of the canonical changes of variables.
Exercise 35
Consider the planar motion of a point M of unit mass, attracted by two fixed points with forces in inverse-square distance. The Cartesian coordinates of the point M are given by (x, y) for the orthonormal axes with origin O, the midpoint of the segment that connects the two fixed points on the axis Ox. We denote by q1 and q2 the elliptical coordinates of the point M defined by
x = cos q2 cosh q1,   y = sin q2 sinh q1.
1) Prove that the coordinate lines are ellipses and hyperbolas, respectively, whose foci F and F′ are to be determined.
2) Calculate the kinetic energy of the point M in the form T(q1, q2, q̇1, q̇2) and the potential energy in the form W(q1, q2).
3) Write the Jacobi partial differential equation and, by using a simple integration, give the generating function S. Deduce the time law and the equation of the trajectory of the point M.
4) If H = E, find a canonical change of variables associated with a generating function F(Q, R) such that the new Hamiltonian is of the form K = K(R) (we will obtain a system decomposable in the sense of Liouville). Write the new Hamilton equations, the Jacobi equation and the equations of motion.
5) We must recall what a multi-periodic, dynamically decomposable system in the sense of Liouville is. Define the action variables, the angular variables and the frequencies with respect to q1 and q2.
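The claim in question 1 can be checked directly: for fixed q1 the points (cos q2 cosh q1, sin q2 sinh q1) satisfy x²/cosh²q1 + y²/sinh²q1 = 1, an ellipse with foci F = (1, 0) and F′ = (−1, 0) since cosh²q1 − sinh²q1 = 1. A numerical sketch verifying that the sum of distances to the foci is constant along a coordinate line (the value of q1 is an arbitrary assumption):

```python
import math

def point(q1, q2):
    # Elliptical coordinates of Exercise 35.
    return (math.cos(q2) * math.cosh(q1), math.sin(q2) * math.sinh(q1))

def focal_sum(q1, q2):
    x, y = point(q1, q2)
    d1 = math.hypot(x - 1.0, y)   # distance to F  = ( 1, 0)
    d2 = math.hypot(x + 1.0, y)   # distance to F' = (-1, 0)
    return d1 + d2

q1 = 0.8                          # a sample coordinate line (assumption)
sums = [focal_sum(q1, 0.1 * k) for k in range(20)]
spread = max(sums) - min(sums)    # vanishes: ellipse with 2a = 2 cosh(q1)
```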
Exercise 36
This exercise relates to the motion of planets using action variables and angular variables. A planet P of mass m reduced to its center is attracted by another planet O, assumed to be fixed and also reduced to its center, with a force in inverse-square distance (the force value is −K/r²).
A. The plane of the orbit of P is assumed to be known, and in this plane, the coordinates of P are in polar representation (r, θ).
1) Calculate the kinetic energy, the potential energy and the Hamiltonian of P.
2) Write the Jacobi equation. Deduce the law of areas, the time law and the trajectory equation of P.
3) When the trajectory is closed (r1 ≤ r ≤ r2), the motion is periodic. Write the action variables Jr and Jθ in integral form. Develop Jθ.
4) Prove that the calculation of Jr is the same as the calculation of
∫_{r1}^{r2} √(E (r − r1)(r − r2)) dr/r,
which we denote by ∫_{r1}^{r2} f(r) dr. What must the sign of E be?
5) The last integral can be calculated using simple calculus, but it is quicker to use the residue method (after having determined the zeros and the pole of f(r), choose an appropriate path and calculate the residue at the pole). Prove that
Jr = −2π √(2m) α + (π K/√(−E)) √(2m).
Deduce the value of E as a function of Jr + Jθ (verify its sign).
6) By using the angular variables, calculate the frequencies of the motion and deduce the common period.
B. We are now in three-dimensional space. Although we do not ignore the fact that the trajectory is planar, we use the spherical representation (r, ϕ, θ). Study all the questions of (A) again and prove that
E = −2m π² K²/(Jr + Jϕ + Jθ)².
Prove that the three frequencies are equal.
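The final formula can be cross-checked numerically in the planar case, with the actions taken as loop integrals J = ∮ p dq so that Jθ = 2π pθ. The sketch below is a numerical illustration under assumed sample data (m = K = 1, E = −0.3, pθ = 0.5); the substitution r = rm − rh cos u removes the square-root endpoint singularities so a plain midpoint rule converges quickly.

```python
import math

# Sample Kepler data (assumptions): m = K = 1, E = -0.3, p_theta = 0.5.
m, K, E, ptheta = 1.0, 1.0, -0.3, 0.5

# Turning points: roots of 2mE r^2 + 2mK r - ptheta^2 = 0.
a, b, c = 2 * m * E, 2 * m * K, -ptheta**2
disc = math.sqrt(b * b - 4 * a * c)
r1, r2 = (-b + disc) / (2 * a), (-b - disc) / (2 * a)   # a < 0, so r1 < r2
rm, rh = (r1 + r2) / 2, (r2 - r1) / 2

# J_r = 2 * Integral_{r1}^{r2} sqrt(-2mE (r - r1)(r2 - r)) dr / r.
# With r = rm - rh*cos(u), the integrand becomes smooth on [0, pi].
n = 2000
coef = math.sqrt(-2 * m * E)
total = 0.0
for k in range(n):                       # composite midpoint rule
    u = (k + 0.5) * math.pi / n
    r = rm - rh * math.cos(u)
    total += coef * rh**2 * math.sin(u)**2 / r
J_r = 2 * total * math.pi / n
J_theta = 2 * math.pi * ptheta

E_from_actions = -2 * m * math.pi**2 * K**2 / (J_r + J_theta)**2
```

The recovered energy agrees with the input E, consistent with E depending only on the sum Jr + Jθ.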
Exercise 37
Consider the differential system
dx1/dt = 2 x2,   dx2/dt = x1 + x2.
1) Calculate its solutions. Determine the equilibrium position. What is the character of stability of the equilibrium position? What can you say about the transformation g^t associated with the differential system?
2) Consider the square of the phase plane: |xi| ≤ 1, where i ∈ {1, 2}. Determine its image at the instant t = 1. What is the geometrical nature of this image?
Exercise 38
1) Study the character of the stability of the equilibrium position of the differential equation ẍ + sin x = 0. Is the volume conserved in the phase space? Study the possible asymptotic stability.
2) Same question for ẍ + x³ = 0.
Exercise 39
Consider the differential system
dx/dt = − sin y,   dy/dt = sin x + sin y,
where x and y are two real functions defined over [0, 2π[.
1) Determine the equilibrium points of the differential system.
2) Then, write the matrix that represents the tangent linear mapping of the motion at these points.
3) Study the stability of the equilibrium points.
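For Exercise 37 the stability analysis reduces to the eigenvalues of the constant matrix A = [[0, 2], [1, 1]]: tr A = 1 and det A = −2, so λ² − λ − 2 = 0 gives λ = 2 and λ = −1, a saddle, hence an unstable equilibrium at the origin (and, since tr A ≠ 0, the flow g^t does not preserve area). A minimal sketch of that computation:

```python
import math

# Linear system x' = A x with A = [[0, 2], [1, 1]] (Exercise 37).
a11, a12, a21, a22 = 0.0, 2.0, 1.0, 1.0
tr = a11 + a22
det = a11 * a22 - a12 * a21

# Eigenvalues from lambda^2 - tr*lambda + det = 0.
disc = math.sqrt(tr**2 - 4 * det)
lam1, lam2 = (tr + disc) / 2, (tr - disc) / 2

# One positive and one negative real eigenvalue: a saddle point,
# so the equilibrium (0, 0) is unstable.
is_saddle = det < 0
```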
Exercise 40
Frame R0 = (O, i0, j0, k0) is an orthonormal coordinate system, where k0 represents the direction of the ascendant vertical. A point M of unit mass, whose coordinates are (x, y, z), moves without friction on a surface z = f(x, y). The point M is subjected to the acceleration due to gravity g.
1) Write the kinetic energy and the potential energy of the point M as functions of x, y, f(x, y) and their time derivatives.
2) What conditions must f satisfy such that O is a stable equilibrium position?
3) We write
p(x0, y0) = ∂f/∂x (x0, y0),   q(x0, y0) = ∂f/∂y (x0, y0),
r(x0, y0) = ∂²f/∂x² (x0, y0),   s(x0, y0) = ∂²f/∂x∂y (x0, y0),   t(x0, y0) = ∂²f/∂y² (x0, y0).
We recall that the principal radii of curvature of the surface at the point (x0, y0) are R1 and R2 such that
(1/2) (1/R1 + 1/R2) = ((1 + p²) t + (1 + q²) r − 2 p q s)/(2 (1 + p² + q²)^{3/2}),
1/(R1 R2) = (r t − s²)/(1 + p² + q²)².
Calculate the eigenfrequencies of the system as functions of R1 and R2. Deduce the most general representation of the small motions near the equilibrium position O.
Exercise 41
1) Write the kinetic energy, potential energy and Lagrange equations of a spherical pendulum consisting of a material point of mass m, subjected to gravity forces and in frictionless contact with a sphere of center O and radius R. Consider two parameters denoted by θ and ψ corresponding to the Euler angles (θ is the angle of the direction OM with the ascending vertical axis Ok and ψ is the angle of the projection of OM with the axis Oi of the horizontal plane).
2) We enforce the constraint ψ̇ = ω, where ω is constant. Give the associated equation of motion.
3) What is the corresponding equilibrium position? From the linearized equation, study the small motions near this equilibrium position. What is the period of small motions?
Exercise 42
A double pendulum consists of two material points M1 and M2 which have the same mass m. The point M1 is connected to a fixed point O by a wire of length ℓ; the point M2 is connected to the point M1 by another wire of the same length. The angles that OM1 and M1M2 make with the vertical descending axis Ox are denoted by u and v, respectively.
1) Calculate the kinetic energy of the system.
2) The acceleration due to gravity is denoted by g. Calculate the potential energy of the system. For what values of u and v can we have a stable equilibrium position? The values associated with this position are denoted by u0 and v0.
3) We plan to study the small motions near the stable equilibrium position. We write u = u0 + α and v = v0 + β.
a) Write the linearized kinetic energy T and the linearized potential energy W.
b) Prove that the associated equations of motion form a second-order differential system with constant coefficients.
c) Find the small motions in the form
α = A cos ω (t − τ),
β = B cos ω (t − τ ).
Determine the two eigenfrequencies ω1 and ω2 and obtain the most general motion of the system.
Exercise 43
This exercise is a study of the longitudinal vibrations of a linear triatomic molecule. At equilibrium, the molecule consists of two atoms of mass m situated at A and C, symmetrically located on both sides of an atom of mass M situated at B. The three atoms are aligned and, at equilibrium, the distances AB and BC are equal to b. The intra-atomic potential is symbolized by the action of two springs that symmetrically connect the three atoms with the same spring constant k.
1) Let x_i^0 (i = 1, 2, 3) be the respective equilibrium positions of the three atoms. Using the coordinates η_i = x_i − x_i^0 with respect to the equilibrium positions, write the kinetic energy and the potential energy of the molecule.
2) Deduce the eigenfrequencies ω of small oscillations. What does the value ω = 0 mean?
3) It is assumed that the center of mass of the molecule remains fixed at the origin O.
a) What is the relation between m, M, x1, x2, x3?
b) Study the eigenfrequencies of the molecule and conclude.
4) We consider the frame of eigen-directions associated with the eigenfrequencies α, β, γ. In this frame, the kinetic energy and the potential energy are
T = (m/2) Σ_i Q̇_i²,   W = (m/2) Σ_i w_i² Q_i².
Study the normal modes corresponding to the different eigenfrequencies. How do we obtain longitudinal vibrations without rigid translation of the molecule?
Exercise 44
A mechanical system subject to gravity g = 1 consists of two identical simple pendulums of lengths ℓ1 = ℓ2 = 1 and masses m1 = m2 = 1. The pendulums are connected by a spring without mass whose length, when the spring is unstressed, is equal to the distance ℓ = 1 between their two points of attachment on a horizontal axis. The angles made by the two pendulums with respect to the descendant vertical axis are denoted by θ1 and θ2, respectively.
1) Calculate the kinetic energy of the system.
2) Let k be the spring coefficient. Calculate the potential energy of the system. Determine the equilibrium position. Calculate the linearized kinetic energy and potential energy near the equilibrium position.
3) Determine the two eigenfrequencies of the system in small motion near the stable equilibrium position and give the most general linearized motion of the system.
4) To which pendulum motion does each eigenfrequency correspond?
5) It is assumed that k ≪ 1 and that at the initial instant t = 0, θ1 = θ2 = 0 and θ̇1 = v, θ̇2 = 0. Prove that after a time T, the first pendulum will have no velocity, all the energy being concentrated in the second pendulum.
Exercise 45
On the horizontal axis Oi0, an unstressed spring of length 4a and spring coefficient k is fixed at both ends. Three material points A, B, C of masses m, 4m/3 and m, respectively, are attached to the spring at equal distances a from one another and at distance a from the spring ends. The oscillation of the spring occurs on the axis Oi0 (then
gravity is not taken into account). The system produces small oscillations. We wish to study the motion of the mechanical system.
1) Calculate the kinetic energy of the system as a function of x, y, z, ẋ, ẏ, ż, with OA = x i0, OB = y i0 and OC = z i0.
2) Calculate the potential energy of the system. We denote k = 2m n².
3) Calculate the eigenfrequencies of the system.
4) Write the Lagrange equations and stable solutions, and then obtain the eigenfrequencies of the system again. Represent the positions of the system for each eigenfrequency. Is it possible for the motion of the system to be periodic?
5) Write the normal coordinates α, β, γ of the system for which the kinetic energy and potential energy may be written in these new coordinates as
T = (m/2) Σ_i Q̇_i²,   W = (m/2) Σ_i w_i² Q_i².
6) Write the equations of motion when initial conditions are x − a = y − 2a = z − 3a = 0,
x˙ = u, y˙ = z˙ = 0.
This corresponds to the initial position at rest, with a small impulse m u exerted on the first material point.
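The eigenfrequencies of Exercise 45 can be obtained numerically as the eigenvalues of M⁻¹K. The sketch below is a hedged illustration under one modeling assumption (each of the four spring segments wall–A–B–C–wall is treated as an independent spring of constant k); under that assumption the values come out as ω² = k/(2m), 2k/m, 3k/m, i.e. n², 4n², 6n² with k = 2m n².

```python
import numpy as np

# Masses m, 4m/3, m and four identical spring segments of constant k
# (segment model is an assumption; m = k = 1 for illustration).
m, k = 1.0, 1.0
M = np.diag([m, 4.0 * m / 3.0, m])
# Stiffness matrix of W = (k/2) [u^2 + (v - u)^2 + (w - v)^2 + w^2],
# with u, v, w the displacements of A, B, C from equilibrium:
K = k * np.array([[ 2.0, -1.0,  0.0],
                  [-1.0,  2.0, -1.0],
                  [ 0.0, -1.0,  2.0]])

# Small-oscillation eigenfrequencies: omega^2 are the eigenvalues of M^{-1} K.
omega2 = np.sort(np.linalg.eigvals(np.linalg.inv(M) @ K).real)
```

With k = 2m n² this reproduces ω = n, 2n and n√6, one frequency per normal mode.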
11 Solutions to Problems and Exercises
Solution to Exercise 1
For a curve of the form y = y(x), the potential energy is
W = ∫_a^b ρ g y √(1 + y′²) dx,
where ρ is the linear mass of the wire, g is the acceleration due to gravity, and a and b are the abscissas of A and B, respectively. The length of the wire is
L = ∫_a^b √(1 + y′²) dx.
We are back to the search for extremals of
a = ∫_a^b G(y, y′) dx   with   G(y, y′) = (y − μ) √(1 + y′²),
where μ is a constant Lagrange multiplier. With the stationarity relation being
G − y′ ∂G/∂y′ = C,
where C is a constant, we obtain by integration
y − μ = C cosh((x − x0)/C).
Knowing a, b and L makes it possible to determine the constants μ, C and x0 that represent a catenary.
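In practice these constants are found numerically. For the symmetric case b = −a (so x0 = 0), the length condition reduces to L = 2C sinh(b/C), which can be solved for C by bisection; a minimal sketch, where the sample half-span and length are assumptions:

```python
import math

def catenary_C(b, L, lo=1e-6, hi=1e6, tol=1e-12):
    """Solve L = 2*C*sinh(b/C) for C by bisection (symmetric span [-b, b])."""
    # g(C) = 2*C*sinh(b/C) - L decreases toward 2b - L as C grows and blows
    # up as C -> 0+, so a root exists whenever L > 2b (wire longer than span).
    g = lambda C: 2.0 * C * math.sinh(b / C) - L
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) > 0.0:      # wire still too long: increase C
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

b, L = 1.0, 3.0               # half-span and wire length (assumptions)
C = catenary_C(b, L)
length_check = 2.0 * C * math.sinh(b / C)
```

Given C, the remaining constant μ follows from the endpoint ordinates y(a) and y(b).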
Solution to Exercise 2
The curve belongs to a sphere with center A if and only if, for any point M on the curve,
AM² = a² (a constant) ⇐⇒ AM · T = 0.
So, point A is in the plane normal to the curve. By denoting another fixed point by O, we have
OA = OM + MA ⇐⇒ OA = OM + λ(s) N + μ(s) B,
where N and B denote the principal unit normal and the unit binormal to the curve. As the center A is fixed:
dOA/ds = T + λ′(s) N + μ′(s) B + λ(s) (−T/R + B/τ) − μ(s) N/τ = 0.
Thus,
λ = R,   dR/ds = μ/τ,   μ′(s) = −R/τ.
By eliminating μ, we obtain
d/ds (τ dR/ds) + R/τ = 0.
Let us note that, from the values of λ and μ, we can find the position of the fixed point A in the plane normal to the curve at M.
Solution to Exercise 3
1) We obtain V = ∫_a^b y(x) dx.
2) We have
W_total = σ1 [y(a) + y(b) + b − a] + σ2 ∫_a^b √(1 + y′²) dx + (1/2) ρ g ∫_a^b y² dx.
3) The problem is to make a = W_total − λ V extremal, where λ is a constant Lagrange multiplier:
δa = [σ1 + σ2 y′_b/√(1 + y′_b²)] δy_b + [σ1 − σ2 y′_a/√(1 + y′_a²)] δy_a + ∫_a^b [ρ g (y − μ) − σ2 d/dx (y′/√(1 + y′²))] δy dx,
where ρ g μ = λ.
4) a) Let us write y′ = tan ϕ, where ϕ denotes the angle of the tangent of the meniscus with the horizontal axis Ox. Since sin ϕ = y′/√(1 + y′²), we obtain
sin ϕ_A = − sin ϕ_B = σ1/σ2.
b) Moreover,
ρ g (y − y0) = σ2 d(sin ϕ)/dx = σ2 cos ϕ dϕ/dx = σ2 dϕ/ds,
where s is the curvilinear abscissa of the meniscus. We deduce the Jurin law:
ρ g (y − y0) = σ2/R.
Solution to Exercise 4
1) In the cylindrical representation, the meniscus can be written as z = z(r), the bottom being the horizontal wall closing the cylindrical tube. Thus,
V = 2π ∫_{r0}^{r1} r z(r) dr.
2) We obtain
W_total/(2π) = σ1 [r0 z(r0) + r1 z(r1)] + σ2 ∫_{r0}^{r1} r √(1 + z′²) dr + (1/2) ∫_{r0}^{r1} ρ g r z² dr.
3) The problem is to make a = W_total − λ V extremal, where λ is a constant Lagrange multiplier. We obtain
δa/(2π) = [r1 σ1 + σ2 r1 z′_1/√(1 + z′_1²)] δz1 + [r0 σ1 − σ2 r0 z′_0/√(1 + z′_0²)] δz0 + ∫_{r0}^{r1} [ρ g r (z − z_m) − σ2 d/dr (r z′/√(1 + z′²))] δz dr,
where z_m replaces the Lagrange multiplier. We obtain
ρ g r (z − z_m) − σ2 d/dr (r z′/√(1 + z′²)) = 0.
Let ϕ be the angle of the tangent to the meridian of the meniscus with the classical radius axis Ou. We have
dϕ/dr = (dϕ/ds)(ds/dr) = (dϕ/ds)(1/cos ϕ),
where s denotes the curvilinear abscissa of the meridian of the meniscus, and
σ2 (sin ϕ/r + dϕ/ds) = ρ g (z − z_m) ⇐⇒ σ2 (1/R1 + 1/R2) = ρ g (z − z_m).
This relation is valid in the case of a cylindrical tube. If the meniscus is approximated by a spherical cap with radius R,
2 σ2/R = ρ g (z − z_m).
4) Since sin ϕ = z′/√(1 + z′²), we obtain
sin ϕ_{r0} = − sin ϕ_{r1} = σ1/σ2.
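The spherical-cap relation above gives the classical capillary-rise estimate z − z_m = 2σ2/(ρ g R). A small numerical illustration, with water-like sample data that are assumptions, not values from the text:

```python
# Capillary rise from the spherical-cap relation rho * g * h = 2 * sigma2 / R.
# Sample water-like data (assumptions).
sigma2 = 0.072     # surface tension, N/m
rho = 1000.0       # density, kg/m^3
g = 9.81           # gravity, m/s^2

def capillary_rise(R):
    """Height z - z_m for a tube of radius R (meters)."""
    return 2.0 * sigma2 / (rho * g * R)

rise_mm = 1000.0 * capillary_rise(1e-3)   # rise in mm for a 1 mm tube
```

The 1/R dependence explains why the rise is conspicuous only in narrow tubes.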
Solution to Exercise 5
1) Refer to Part 1, Chapter 2.
2) a) N and grad F are collinear, and for the sphere grad F is collinear with OM; hence N = ν OM. Since N is collinear with d²OM/ds², we obtain (d²OM/ds²) ∧ OM = 0. There exists a vector C such that (dOM/ds) ∧ OM = C, and M
belongs to the plane containing O and perpendicular to C. Geodesics are parts of the great circles of the sphere.
b) In the spherical representation (θ, ϕ, R), the curvilinear abscissa ds is such that ds² = R² (dϕ² + cos²ϕ dθ²), hence:
L = ∫_A^B G(θ, θ′, ϕ) dϕ   where   G(θ, θ′, ϕ) = R √(1 + θ′² cos²ϕ).
So, the extremum condition is
d/dϕ (∂G/∂θ′) = 0 ⇐⇒ θ′ cos²ϕ/√(1 + θ′² cos²ϕ) = k,
where k is a constant. We obtain λ tan ϕ = sin(θ − θ0), where λ = k/√(1 − k²) and θ0 is a constant. By changing the spherical representation into Cartesian coordinates, we verify that this relation is the equation of a plane containing O.
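The final claim can be verified numerically: on the unit sphere, λ tan ϕ = sin(θ − θ0) means that the Cartesian point (cos ϕ cos θ, cos ϕ sin θ, sin ϕ) satisfies sin θ0 · x − cos θ0 · y + λ z = 0, a plane through O, so the extremal is an arc of a great circle. A short sketch, with sample λ and θ0 chosen arbitrarily:

```python
import math

lam, theta0 = 0.8, 0.3            # sample constants (assumptions)

def residual(theta):
    # Extremal relation: lam * tan(phi) = sin(theta - theta0).
    phi = math.atan(math.sin(theta - theta0) / lam)
    x = math.cos(phi) * math.cos(theta)
    y = math.cos(phi) * math.sin(theta)
    z = math.sin(phi)
    # Plane through the origin predicted by the relation:
    return math.sin(theta0) * x - math.cos(theta0) * y + lam * z

residuals = [abs(residual(0.2 * k)) for k in range(30)]
```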
3) In the cylindrical representation, the equation of a cylinder of revolution is x = R cos θ, y = R sin θ, z. The length of a curve traced on the cylinder is
L = ∫_A^B √(R² + z′²(θ)) dθ.
We can deduce the equation of the extremals of L,
d/dθ (z′(θ)/√(R² + z′²(θ))) = 0 ⇐⇒ z′(θ)/√(R² + z′²(θ)) = k.
Thus, z′(θ) = h ⟹ z = h(θ − θ0) is the equation of the cylindrical helices, where k, h and θ0 are constants.
4) a) For r = a z, we have
L = ∫_α^β G(z, z′) dθ,   where   G(z, z′) = √((1 + a²) z′² + a² z²),
which implies the Euler equation in integrated form:
a² z²/√((1 + a²) z′² + a² z²) = C,
where C is a constant.
b) In the form θ = θ(z), we obtain:
L = ∫_a^b G(θ′, z) dz,   where   G(θ′, z) = √((1 + a²) + a² z² θ′²),
which implies the Euler equation in integrated form:
a² z² θ′/√((1 + a²) + a² z² θ′²) = K,
where K is a constant. This differential equation has the solution
K/(a z) = cos(a (θ − θ0)/√(1 + a²)).
Note that case (a) leads to the same representation.
5) a) The first fundamental form ds² = E du² + 2F du dv + G dv² corresponds to E = (M′_u)², F = M′_u · M′_v and G = (M′_v)². If the coordinate lines are orthogonal, then F = 0.
b) For ds² = (du² + dv²)/u², L = ∫_{u0}^{u1} G(u, v′_u) du, where G(u, v′_u) = √(1 + v′_u²)/u. By integration, the associated Euler equation is
v′_u = K u √(1 + v′_u²),
which yields
v = v0 − √(1 − K² u²)/K,
where K and v0 are constants. The images in plane (Π) are circles of equations:
u² + (v − v0)² = 1/K².
Solution to Exercise 6 1) N is collinear with grad F and gives the direction of the normal to the surface. The normal meets and is perpendicular to the axis Oz. Thus, N is perpendicular to Oz, a property that characterizes helices. This result could be predicted: the cylinder is a surface that can be developed into a plane, and the development of helices gives straight lines in the plane. 2) For any surface of revolution written as z = f (r), θ = g(r), we have
ds² = (1 + f′²(r)) dr² + r² dθ².
The Euler equation associated with L = ∫_{s0}^{s1} ds is integrated as
r² g′(r)/√(1 + f′²(r) + r² g′(r)²) = K ⇐⇒ r² dθ/ds = K,
where K is a constant. For a cylinder r = R, R is a constant, and we obtain K(s − s0) = θ − θ0, which represents the helices.
Solution to Exercise 7
1) a) The curve is defined by the representation z = z(x). We obtain
S = 2π ∫_A^B G(z, z′, x) dx   where   G(z, z′, x) = x √(1 + z′²).
The integrated Euler equation yields
dz/dx = a/√(x² − a²),
where a is a constant. We obtain
z − z0 = a Argch(x/a),
where z0 is another constant.
b) Note that it is possible to use a geometric method by writing
S = 2π ∫_A^B x ds = 2π ∫_A^B x T · dM.
2) The volume is V = ∫_{z0}^{z1} π x² dz, and we are led to find the extremals of a = S − 2λ V, where λ is a constant Lagrange multiplier. We obtain the integrated Euler equation:
x² − 2λ x/√(1 + x′²) = C   with x = x(z),
where C is a constant. We obtain arcs of circles of equations:
(z − z0)² + x² = 4 λ².
Solution to Exercise 8
The calculations are based on an analytical or geometric review of the study of the optical path of light presented in this book, followed by the case where the optical index is n = e^y.
Solution to Exercise 9
A. 1) The result comes from (ds)² = (dz)² + (dr)² + r² (dθ)², written in the representation r = f(z) and θ = θ(z):
L = ∫_{z0}^{z1} √(1 + f′²(z) + f²(z) θ′²) dz.
2) The potential energy W of the wire is
W = ∫_A^B ρ g z ds = ∫_{z0}^{z1} ρ g z √(1 + f′²(z) + f²(z) θ′²) dz.
By writing the extremals of W − λ L, where λ is a constant Lagrange multiplier, we obtain
(z − λ) r² dθ/ds = C,
where C is a constant.
3) We obtain
(z − λ) f²(z)/√((1 + f′²) z′² + f²) = C.
4) The equilibrium curve on the circular cylinder of radius R is
z − λ = (K/R) cosh(R² (θ − θ0)/K),
where K is a constant. These are catenaries, or chainettes, on the cylinder.
B. 1) We have
L = ∫_(C) T · dM   and   W = ∫_(C) ρ g z T · dM.
The extremals of a = W − λ L, where λ is a constant Lagrange multiplier, lead to the variation of curvilinear integrals:
(z − h) dT/ds − (1 − T T) grad(z − h) − μ grad F = 0,
where F(M) = 0 is the surface equation.
2) a) We obtain
T = (r′(z) u + r θ′(z) v + k)/√(1 + f′²(z) + f²(z) θ′²)
and
(z − h) dT/ds − (1 − T T) grad(z − h) − μ n = 0,
where n is the unit normal to surface (S). In the basis (u, v, k), the matrix of projection is
1 − T T =
[ 1 − r′² (dz/ds)²        −r r′ (dθ/ds)(dz/ds)   −r′ (dz/ds)²        ]
[ −r r′ (dθ/ds)(dz/ds)    1 − r² (dθ/ds)²        −r (dθ/ds)(dz/ds)   ]
[ −r′ (dz/ds)²            −r (dθ/ds)(dz/ds)      1 − (dz/ds)²        ]
and
dT/ds = ( r″ (dz/ds)² + r′ d²z/ds² − r (dθ/ds)²,   2 r′ (dθ/ds)(dz/ds) + r d²θ/ds²,   d²z/ds² ).
b) i) v1 = (0, 1, 0)   and   v2 = (1/√(1 + r′²)) (r′, 0, 1).
ii) The unit normal to (S) is n = (1/√(1 + r′²)) (1, 0, −r′).
3) Let us consider the simplest of the three relations already given in (1). The second component of the vectorial relation given in (1) becomes:
−2 (z − h) r′ (dθ/ds)(dz/ds) − r (z − h) d²θ/ds² − r (dθ/ds)(dz/ds) = 0.
By multiplying by r and integrating, we obtain:
(z − h) r² dθ/ds = Cte.
Solution to Exercise 10 Exercise 10 is a particular case of the problem given in Exercise 9. We can refer to the solution to Exercise 9 and consider the case of a cone of revolution. Solution to Exercise 11 ˆ ˆ ds T dM = is unchanged by translation parallel to 1) The integral xm xm AB AB the axis Oy. The translations are represented as Tα
x x = . y y+α
0 They form a Lie group of infinitesimal displacement W = , and Noether’s 1 theorem is applicable: T W = Cte xm
⇐⇒
1 sin α = Cte xm
⇐⇒
xm = am sin α,
where a is a constant.
2) From $x = a \sin^{1/m}\alpha$ and $dy = \operatorname{tg}\alpha\,dx$, we deduce $y = y_0 + \dfrac{1}{m}\displaystyle\int_{\pi/2}^{\alpha} x\,d\alpha$, where $y_0$ is the value of $y$ for $\alpha = \dfrac{\pi}{2}$. For $m = 0$, $\sin\alpha = \mathrm{Cte}$. The curves are lines of slope $\alpha$.
3) For $m = 1$, we obtain $x = a\sin\alpha$, $y - y_0 = -a\cos\alpha$, hence $x^2 + (y - y_0)^2 = a^2$, corresponding to arcs of circles. For $m = \dfrac{1}{2}$, we obtain arcs of cycloids:
\[ x = \frac{a}{2}\,(1 - \cos 2\alpha), \qquad y - y_0 = \frac{a}{2}\,(2\alpha - \sin 2\alpha). \]
For $m = -1$, we obtain $x = a\cosh\dfrac{y - y_0}{a}$, which is the equation of a catenary. For $m = -\dfrac{1}{2}$, we obtain arcs of parabolas $x - a = \dfrac{1}{4a}\,(y - y_0)^2$.
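These first integrals can be checked symbolically. As a sketch (using sympy, with $\alpha$ the tangent angle so that $\sin\alpha = dy/ds$), the catenary of the case $m = -1$ indeed keeps $\sin\alpha / x^{m} = x\sin\alpha$ constant:

```python
import sympy as sp

a, y = sp.symbols('a y', positive=True)

# Catenary of the case m = -1 (taking y0 = 0): x = a*cosh(y/a).
x = a * sp.cosh(y / a)

# With y as parameter, sin(alpha) = dy/ds = 1/sqrt(1 + (dx/dy)**2).
sin_alpha = 1 / sp.sqrt(1 + sp.diff(x, y)**2)

# Noether invariant for m = -1: sin(alpha)/x**m = x*sin(alpha).
invariant = x * sin_alpha

# Evaluate along the curve: the invariant stays equal to a (here a = 2).
values = [invariant.subs({a: 2, y: v}).evalf() for v in (0.3, 1.7, 5.1)]
print(values)
```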
4) For a given $m$,
\[ \left\{ \begin{array}{l} x = a \sin^{1/m}\alpha, \\[1mm] y = y_0 + \dfrac{a}{m} \displaystyle\int_{\pi/2}^{\alpha} \sin^{1/m} u \, du. \end{array} \right. \]
The curves are deduced from one of the above by the composition of a translation relative to $Oy$ and a homothety of center $O$ and ratio $a$. The change of $\alpha$ into $\pi - \alpha$ leaves $x$ unchanged and changes $y - y_0$ into its opposite; we obtain a symmetry relative to the line $y = y_0$.
5) The previous curves are such that $y''x' - x''y' < 0$ and the concavity is turned towards $x < 0$. The remaining questions can be easily solved.
Solution to Exercise 12
1) We choose $t = t(x)$ and $y = y(x)$. Therefore,
\[ I = \int_{t^{-1}(0)}^{t^{-1}(T)} G(t, t', y, y')\,dx, \quad\text{where}\quad G(t, t', y, y') = \sqrt{x^2\,t'^2 - 1 - y'^2}. \]
The Euler equations are:
\[ \frac{d}{dx}\left(\frac{\partial G}{\partial t'}\right) = 0 \iff \frac{\partial G}{\partial t'} = \mathrm{Cte} \quad\text{and}\quad \frac{d}{dx}\left(\frac{\partial G}{\partial y'}\right) = 0 \iff \frac{\partial G}{\partial y'} = \mathrm{Cte}, \]
or
\[ \frac{x^2\,t'}{\sqrt{x^2\,t'^2 - 1 - y'^2}} = \alpha \quad\text{and}\quad \frac{y'}{\sqrt{x^2\,t'^2 - 1 - y'^2}} = \beta, \]
where $\alpha$ and $\beta$ are constants. By eliminating $y'$ between the two first integrals, we obtain
\[ x^2\,t'^2\left(1 - \frac{x^2}{a^2}\right) = 1 \quad\text{with}\quad a^2 = \frac{\alpha^2}{1 + \beta^2}. \]
We finally obtain
\[ x = \frac{a}{\cosh(t - t_0)}, \qquad y = y_0 + b \tanh(t - t_0). \]
2) a) The previous results yield
\[ x = \frac{\cosh\frac{T}{2}}{\cosh\left(t - \frac{T}{2}\right)}, \qquad y = \frac{\tanh\left(t - \frac{T}{2}\right)}{\tanh\frac{T}{2}}, \]
which can be written as
\[ \frac{x^2}{a^2} + (a^2 - 1)\,\frac{y^2}{a^2} = 1, \quad\text{where}\quad a = \cosh\frac{T}{2}, \]
corresponding to an ellipse.
b) The time displacement is minimum for $x^2 - v^2 = 0$. Since
\[ x^2 - v^2 = \frac{\cosh^2\frac{T}{2}\left(\sinh^2\frac{T}{2} - 1\right)}{\sinh^2\frac{T}{2}\,\cosh^4\left(t - \frac{T}{2}\right)}, \]
we must have $T > 2\,\mathrm{Argsh}\,1$. The limit value corresponding to the time lapse is $T = 2\,\mathrm{Argsh}\,1$, for which $a = \sqrt{2}$ and the trajectory is the circle $x^2 + y^2 = 2$.
3) Since $v^2 = \left(\dfrac{ds}{dt}\right)^2$, $I = \displaystyle\int_0^T \sqrt{x^2\,(dt)^2 - (ds)^2}$. For $x^2\,(dt)^2 = (ds)^2$, we obtain $dt = \dfrac{ds}{x}$ and $T = \displaystyle\int_0^T dt = \int_{s_0}^{s_1} \frac{ds}{x}$. The trajectories have been studied previously. In the representation $y = f(x)$, they are those associated with the extremals of $\displaystyle\int_{x_0}^{x_1} \frac{\sqrt{1 + f'^2(x)}}{x}\,dx$.
Solution to Exercise 13
1) Derivatives being taken with respect to variable $x$, the Euler equation can be written as
\[ 2\,y\,y' - x\,y\,y'' - 2\,x\,y'^2 = 0. \]
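As a check, the curves $y^3 = c_1 x^3 + c_2$ (the extremals obtained in question 5) satisfy this Euler equation; a short sympy sketch:

```python
import sympy as sp

x, c1, c2 = sp.symbols('x c1 c2', positive=True)

# Candidate extremal: y^3 = c1*x^3 + c2, i.e. y = (c1*x^3 + c2)**(1/3).
y = (c1*x**3 + c2) ** sp.Rational(1, 3)

# Euler equation of question 1: 2*y*y' - x*y*y'' - 2*x*y'**2 = 0.
residual = 2*y*sp.diff(y, x) - x*y*sp.diff(y, x, 2) - 2*x*sp.diff(y, x)**2
print(sp.simplify(residual))  # -> 0
```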
2) The Hamiltonian is $4\,p\,q - x^2 y^2$ and yields the Hamilton equations
\[ \frac{dx}{d\tau} = 4\,q, \qquad \frac{dy}{d\tau} = 4\,p, \qquad \frac{dp}{d\tau} = 2\,x\,y^2, \qquad \frac{dq}{d\tau} = 2\,x^2 y. \]
3) $a$ is invariant by the transformation $T_t \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} t\,x \\ y/t \end{bmatrix}$. However, as the identity element of the group is obtained for $t = 1$, we choose a Lie group with the identity element at $0$ by writing $t = e^{\varepsilon}$. We obtain the infinitesimal displacement
\[ W = \frac{dM}{d\varepsilon}\bigg|_{\varepsilon = 0} = \begin{bmatrix} x \\ -y \end{bmatrix} \]
and the first integral $u = [p, q]\,W = p\,x - q\,y$. From $p = \dfrac{1}{2}\,x\,y\,\sqrt{\dfrac{y_\tau}{x_\tau}}$, $q = \dfrac{1}{2}\,x\,y\,\sqrt{\dfrac{x_\tau}{y_\tau}}$ and from the first integral, we deduce
\[ 2\,u\,\sqrt{y'} - x^2\,y\,y' + x\,y^2 = 0. \]
4) We have $[u, H]_{(x,y,p,q)} = 0$ and as (1) is verified, (3) is therefore verified.
5) The Jacobi equation $\dfrac{\partial S}{\partial t} + H = 0$ is integrated in the form
\[ S(x, y) = \frac{1}{6}\left(k\,x^3 + \frac{y^3}{k}\right), \]
where $k$ is a constant. We obtain the extremals in the form $y^3 = \lambda^2 x^3 + \mu$, where $\lambda$ and $\mu$ are constants. $\lambda$ is a constant along the motion.
6) From (4), it can be immediately verified that $v = \dfrac{p}{x^2}$ is a first integral. From the expression $\dfrac{1}{x^2}\,p + 0\,q = \mathrm{Cte}$, we deduce $W = \dfrac{dM}{dr} = \begin{bmatrix} 1/x^2 \\ 0 \end{bmatrix}$.
Consequently,
\[ \left\{ \begin{array}{l} \dfrac{dx}{dr} = \dfrac{1}{x^2}, \\[2mm] \dfrac{dy}{dr} = 0 \end{array} \right. \implies T_s \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} \sqrt[3]{x^3 + s} \\ y \end{bmatrix}, \quad\text{where } s = 3\,r \text{ and } \left\{ \begin{array}{l} x^3 = x_0^3 + s, \\ y = y_0. \end{array} \right. \]
By writing $X = x^3$ and $Y = y^3$, we obtain $a = \varepsilon \displaystyle\int_{X_0}^{X_1} \sqrt{Y'_X}\,dX$, where $\varepsilon = \pm 1$, and this integral is invariant by the transformation $(X \longrightarrow X + s,\; Y \longrightarrow Y)$. The Poisson bracket $[u, v]_{(x,y,p,q)} = 3\,\dfrac{p}{x^2}$ does not give any additional first integral; therefore, another method must be found. From $u = p\,x - q\,y$ and $v = \dfrac{p}{x^2}$, we obtain $q\,y = v\,x^3 - u$, which is of the form $w\,y^3$ since $y^3 = \lambda^2 x^3 + \mu$. We obtain $w = \dfrac{q}{y^2}$. By comparison with previous calculations, this result corresponds to the invariance group
\[ T_k \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} x \\ \sqrt[3]{y^3 + k} \end{bmatrix}. \]
7) The extremal curves are $y^3 = b\,x^3 + c$, where $b$ and $c$ are constants. In order for $(x_0, y_0)$ and $(x_1, y_1)$ to be connected, we must have
\[ b = \frac{y_1^3 - y_0^3}{x_1^3 - x_0^3}, \qquad c = \frac{y_0^3\,x_1^3 - y_1^3\,x_0^3}{x_1^3 - x_0^3}, \]
which requires $x_0 \neq x_1$. If $x_0 = x_1$ and $y_0 \neq y_1$, then there is no solution; otherwise $y_0 = y_1$ would correspond to the same point. It is also essential that $y' \geq 0$, i.e. $b \geq 0$. The corresponding value of $a$ is obtained by the integral $a = \displaystyle\int_{x_0}^{x_1} x\,y\,\sqrt{y'}\;dx$ for $y^3 = b\,x^3 + c$. We obtain
\[ a = \frac{\sqrt{(y_1^3 - y_0^3)(x_1^3 - x_0^3)}}{3}. \]
Solution to Exercise 14
A. 1) From the conservation of energy, the choice $v = 0$ for $y = 0$ yields $m\,v^2 + 2\,m\,g\,y = 0$. We obtain
\[ \left(1 + y'^2\right)\left(\frac{dx}{dt}\right)^2 = -\,2\,g\,y. \]
Hence,
\[ \tau = \int_0^a \sqrt{\frac{1 + y'^2}{-\,2\,g\,y}}\;dx = \int_0^a f(y, y')\,dx. \]
2) In order for the time lapse $\tau$ to be minimum, we must have the Euler equation
\[ \frac{\partial f}{\partial y} - \frac{d}{dx}\left(\frac{\partial f}{\partial y'}\right) = 0, \]
which can be written as
\[ \frac{d}{dx}\left(f - y'\,\frac{\partial f}{\partial y'}\right) = 0 \iff f - y'\,\frac{\partial f}{\partial y'} = c. \]
3) From the expression of $f$ given in (1), we deduce
\[ \frac{1}{\sqrt{1 + y'^2}\,\sqrt{-y}} = K, \]
where $K$ is a constant, which can be integrated as
\[ \left\{ \begin{array}{l} x - x_0 = \dfrac{1}{2K^2}\,[\,2\varphi + \sin 2\varphi\,], \\[2mm] y = -\,\dfrac{1}{2K^2}\,[\,1 + \cos 2\varphi\,], \end{array} \right. \]
which represent cycloids.
B. 1) Since $v = \dfrac{ds}{dt}$ and $\tau = \displaystyle\int_A^B dt$, we deduce
\[ \tau = \int_O^A \frac{T\,dM}{v}. \]
2) The classic relation with respect to the curvilinear integrals with $n = \dfrac{1}{v}$ leads to the relation:
\[ -\,\frac{N}{R\,v} + \left(1 - T\,T^{\top}\right)\operatorname{grad}\frac{1}{v} = 0. \]
When multiplied by $N$, we obtain the relation
\[ \frac{1}{R\,v} = N \cdot \operatorname{grad}\frac{1}{v}. \]
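As a check on part A, a short sympy sketch confirms that the parametrized cycloid of (A.3) keeps $f - y'\,\partial f/\partial y'$ constant, in the equivalent form $(1 + y'^2)(-y) = 1/K^2$:

```python
import sympy as sp

phi, K = sp.symbols('phi K', positive=True)

# Cycloid of A.3: x - x0 = (2*phi + sin(2*phi))/(2*K**2),
#                 y      = -(1 + cos(2*phi))/(2*K**2).
x = (2*phi + sp.sin(2*phi)) / (2*K**2)
y = -(1 + sp.cos(2*phi)) / (2*K**2)

# Slope along the curve: y' = (dy/dphi)/(dx/dphi).
yp = sp.diff(y, phi) / sp.diff(x, phi)

# First integral of question A.2: (1 + y'**2)*(-y) is constant, equal to 1/K**2.
fi = sp.simplify((1 + yp**2) * (-y))
print(fi)
```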
3) From $v = \sqrt{-\,2\,g\,y}$ and $N = \dfrac{1}{\sqrt{1 + y'^2}} \begin{bmatrix} -y' \\ 1 \end{bmatrix}$, we deduce the differential equation:
\[ 1 + y'^2 + 2\,y\,y'' = 0. \]
4) A classic integration of this differential equation by denoting $z(y) = y'$ makes it possible to obtain the cycloid of (A.3) again.
Solution to Exercise 15
1) We have
\[ \frac{1}{2}\,m\left(\frac{ds}{dt}\right)^2 + m\,g\,z = E \quad\text{where}\quad E > 0. \]
We deduce
\[ \frac{ds}{dt} = \sqrt{\frac{2E}{m}}\,\sqrt{1 - h\,z} \quad\text{where}\quad h = \frac{m\,g}{E}, \]
and since $ds = T\,dM$,
\[ \tau = \sqrt{\frac{m}{2E}} \int_{(C)} \frac{T\,dM}{\sqrt{1 - h\,z}}. \]
2) We denote $n(M) = \dfrac{1}{\sqrt{1 - h\,z}}$ and we obtain
\[ \tau = \sqrt{\frac{m}{2E}} \int_{(C)} n\,T\,dM. \]
Extremals of $\tau$ satisfy the geometric condition
\[ \frac{d\,nT}{ds} = \operatorname{grad} n. \]
Since $\operatorname{grad} n$ is collinear with $k_0$, we obtain
\[ \frac{d\,nT}{ds} \wedge k_0 = 0 \iff \frac{d}{ds}\,(n\,T \wedge k_0) = 0 \iff \frac{T \wedge k_0}{\sqrt{1 - h\,z}} = C, \]
where $C$ is a constant vector.
3) For $C = c\,j_0$, we obtain
\[ \frac{T \wedge k_0}{\sqrt{1 - h\,z}} = c\,j_0 \implies T \cdot \frac{T \wedge k_0}{\sqrt{1 - h\,z}} = c\,T \cdot j_0 \implies c\,\varphi = 0 \implies \varphi = 0. \]
Thus, $\dfrac{1}{R} = 0$. The curve $(C)$ is the line segment $AH$. Point $M$ meets the plane $(\Pi)$ at point $H$ and
\[ \tau = \sqrt{\frac{m}{2E}} \int_{(C)} ds \implies \tau = \sqrt{\frac{m}{2E}}\;AH. \]
Solution to Exercise 16
1) We immediately obtain
\[ \left\{ \begin{array}{l} x = \lambda\,e^{2u}, \\ y = 3\,(u - u_0). \end{array} \right. \]
For $u = 0$, $x_0 = \lambda$ and $y_0 = -\,3\,u_0$; the solution is
\[ \left\{ \begin{array}{l} x = x_0\,e^{2u}, \\ y = 3u + y_0, \end{array} \right. \quad\text{or}\quad M = T_u(M_0) \quad\text{with}\quad M = \begin{bmatrix} x \\ y \end{bmatrix} \ \text{and}\ M_0 = \begin{bmatrix} x_0 \\ y_0 \end{bmatrix}. \]
2) The transformations $T_u$ form a Lie group because
\[ T_{u'} \circ T_u = T_{u + u'}, \qquad T_{u'} \circ T_u \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} x\,e^{2u + 2u'} \\ y + 3u + 3u' \end{bmatrix}, \qquad (T_u)^{-1} = T_{-u} \quad\text{with}\quad T_{-u} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} x\,e^{-2u} \\ y - 3u \end{bmatrix}. \]
Furthermore, $T_u$ goes to the identity when $u$ goes to $0$.
3) When $u$ goes to $0$, $W = \dfrac{dM}{du} = \begin{bmatrix} 2x \\ 3 \end{bmatrix}$.
4) These can be resolved in the same way as the previous questions.
Solution to Exercise 17
1) a)
\[ \delta\alpha = \frac{\partial\Gamma}{\partial X}\,\delta X + \frac{\partial\Gamma}{\partial \dot X}\,\delta \dot X = \frac{\partial G}{\partial Q}\,\delta Q + \frac{\partial G}{\partial \dot Q}\,\delta \dot Q. \]
We have $Q = A(X)$,
hence
\[ \delta Q = \frac{\partial Q}{\partial X}\,\delta X \quad\text{and}\quad \delta \dot Q = \frac{d}{dt}\left(\frac{\partial Q}{\partial X}\right)\delta X + \frac{\partial Q}{\partial X}\,\delta \dot X. \]
Hence,
\[ \delta\alpha = \left[\frac{\partial G}{\partial Q}\,\frac{\partial Q}{\partial X} + \frac{\partial G}{\partial \dot Q}\,\frac{d}{dt}\left(\frac{\partial Q}{\partial X}\right)\right]\delta X + \frac{\partial G}{\partial \dot Q}\,\frac{\partial Q}{\partial X}\,\delta \dot X. \]
Therefore,
\[ \frac{\partial\Gamma}{\partial X} = \frac{\partial G}{\partial Q}\,\frac{\partial Q}{\partial X} + \frac{\partial G}{\partial \dot Q}\,\frac{d}{dt}\left(\frac{\partial Q}{\partial X}\right) \quad\text{and}\quad \frac{\partial\Gamma}{\partial \dot X} = \frac{\partial G}{\partial \dot Q}\,\frac{\partial Q}{\partial X}. \]
b) Without moving the ends of $(\gamma)$ and $(C)$, let us calculate the variation of $b$:
\[ \delta b = \int_{(\gamma)} \left(\frac{\partial\Gamma}{\partial X}\,\delta X + \frac{\partial\Gamma}{\partial \dot X}\,\delta \dot X\right) dt = \int_{(\gamma)} \left\{\left[\frac{\partial G}{\partial Q}\,\frac{\partial Q}{\partial X} + \frac{\partial G}{\partial \dot Q}\,\frac{d}{dt}\left(\frac{\partial Q}{\partial X}\right)\right]\delta X + \frac{\partial G}{\partial \dot Q}\,\frac{\partial Q}{\partial X}\,\delta \dot X\right\} dt. \]
By integrating by parts, without variations of the extremities of $(\gamma)$,
\[ \delta b = \int_{(\gamma)} \left[\frac{\partial G}{\partial Q}\,\frac{\partial Q}{\partial X} + \frac{\partial G}{\partial \dot Q}\,\frac{d}{dt}\left(\frac{\partial Q}{\partial X}\right) - \frac{d}{dt}\left(\frac{\partial G}{\partial \dot Q}\,\frac{\partial Q}{\partial X}\right)\right]\delta X\,dt. \]
Finally,
\[ \delta b = \int_{(\gamma)} \left[\frac{\partial G}{\partial Q} - \frac{d}{dt}\left(\frac{\partial G}{\partial \dot Q}\right)\right]\frac{\partial Q}{\partial X}\,\delta X\,dt = \int_{(C)} \left[\frac{\partial G}{\partial Q} - \frac{d}{dt}\left(\frac{\partial G}{\partial \dot Q}\right)\right]\delta Q\,dt = \delta a. \]
Thus, if $(C)$ is an extremal curve of $a$, then $(\gamma) = A(C)$ is an extremal curve of $b$.
c) If $A$ is a diffeomorphism, the mapping $A^{-1}$ makes it possible to prove the reciprocal property of (b), and $(\gamma)$ is an extremal curve of $b$ if and only if $(C) = A^{-1}(\gamma)$ is an extremal curve of $a$.
2) We have
\[ \delta a = \int_{(C)} \left[\frac{\partial G}{\partial Q} - \frac{d}{dt}\left(\frac{\partial G}{\partial \dot Q}\right)\right]\delta Q\,dt + \left[\frac{\partial G}{\partial \dot Q}\,\delta Q\right]_A^B, \]
where $A$ and $B$ are the extremities of $(C)$. However, this result is independent of the end points chosen on $(C)$: if we replace $(C)$ with $(C')$, a curve contained in $(C)$ with ends $A'$ and $B'$, we obtain the same result. Let us choose:
a) $\delta Q = W(Q)\,du$, where $W(Q)$ is the infinitesimal displacement of the group $\{T_u,\ u \in \mathbb{R}\}$;
b) a curve $(C)$, extremal of $a$.
On the one hand, $\delta Q = W(Q)\,du$ implies $\delta a = 0$, since $a$ is invariant by the group $\{T_u,\ u \in \mathbb{R}\}$. The curve $(C)$ is an extremal of $a$; therefore,
\[ \frac{\partial G}{\partial Q} - \frac{d}{dt}\left(\frac{\partial G}{\partial \dot Q}\right) = 0. \]
On the other hand, it can be deduced from $\delta B = W(B)\,du$ and $\delta A = W(A)\,du$ that
\[ \frac{\partial G}{\partial \dot Q}(B)\,W(B)\,du - \frac{\partial G}{\partial \dot Q}(A)\,W(A)\,du = 0, \quad\text{then}\quad \frac{\partial G}{\partial \dot Q}(B)\,W(B) - \frac{\partial G}{\partial \dot Q}(A)\,W(A) = 0, \]
where $\dfrac{\partial G}{\partial \dot Q}(A)$ and $\dfrac{\partial G}{\partial \dot Q}(B)$ are the values of $\dfrac{\partial G}{\partial \dot Q}$ at $A$ and $B$, respectively. Points $A$ and $B$ play no particular role in the proof, and $\dfrac{\partial G}{\partial \dot Q}\,W$ is independent of the point chosen on the extremal curve $(C)$.
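The invariance argument of (2) can be illustrated numerically. A minimal sketch (not from the book, with hypothetical data): the Lagrangian $G = (\dot x^2 + \dot y^2)/2 - (x^2 + y^2)/2$ is invariant under rotations, whose infinitesimal displacement is $W = (-y, x)$; along any extremal, $(\partial G/\partial \dot Q)\,W = -\dot x\,y + \dot y\,x$ must be constant:

```python
import math

# G = (xd**2 + yd**2)/2 - (x**2 + y**2)/2 (2D harmonic oscillator),
# invariant under rotations; infinitesimal displacement W = (-y, x).
# Extremals solve xdd = -x, ydd = -y; take one explicit extremal:
def extremal(t):
    x, y = math.cos(t), math.sin(t) + 0.5*math.cos(t)
    xd, yd = -math.sin(t), math.cos(t) - 0.5*math.sin(t)
    return x, y, xd, yd

# Noether quantity (dG/dQdot).W = xd*(-y) + yd*x along the extremal:
vals = []
for t in (0.0, 0.7, 2.3, 5.0):
    x, y, xd, yd = extremal(t)
    vals.append(xd*(-y) + yd*x)
print(vals)  # constant (here 1.0 at every instant)
```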
Solution to Exercise 18
1) The Lagrangian is
\[ G = \frac{1}{2}\,m_2\,\dot r^2 + \frac{1}{2}\,m_1\left(\dot r^2 + r^2\,\dot\theta^2\right) - m_2\,g\,r. \]
From the Lagrange equations, we obtain
\[ r^2\,\dot\theta = C \quad\text{and}\quad \dot r^2 = A - 2B\,r - \frac{C}{r^2}, \]
where $A$, $B$ and $C$ are constants depending on the initial conditions and on $m_1$, $m_2$, $g$.
2) If $A < 2\sqrt{CB} + B$, no motion is possible. If $A = 2\sqrt{CB} + B$, the equilibrium position $r_0$ is stable. We deduce that $\dot\theta = \mathrm{Cte}$, and we obtain a circular and uniform motion for $M_1$. If $A > 2\sqrt{CB} + B$, we obtain a periodic oscillation between positions $r_1$ and $r_2$, and $\theta$ always varies in the same orientation.
Solution to Exercise 19
1) From the conservation of energy, we obtain $2T = 2E$, where $E$ is a constant:
\[ \dot x^2 + \dot y^2 = 2E\,y^2. \]
The Lagrange equation with respect to $x$ gives $\dot x = p_0\,y^2$, where $p_0$ is a constant. Since the Lagrangian is invariant by translation along the $x$-axis, we obtain this first integral again. Since the Lagrangian is invariant in time, we have conservation of energy. Taking into account the previous first integral, we deduce
\[ p_0^2\,y^4 + \dot y^2 = 2E\,y^2. \]
This equation becomes
\[ \frac{du}{u\sqrt{1 - u^2}} = \varepsilon\,\sqrt{2E}\,dt \quad\text{where}\quad u = \frac{p_0}{\sqrt{2E}}\,y, \quad\text{with } \varepsilon = \pm 1. \]
By writing $u = \sin\varphi$, we obtain
\[ \frac{d\varphi}{\sin\varphi} = \varepsilon\,\sqrt{2E}\,dt, \]
which can be integrated in the form
\[ y = \frac{\sqrt{2E}}{p_0}\,\sin\varphi. \]
Finally,
\[ y = 2\,\frac{\sqrt{2E}}{p_0}\,\frac{k\,e^{\varepsilon\sqrt{2E}\,t}}{1 + k^2\,e^{2\varepsilon\sqrt{2E}\,t}}, \]
where $k$ is a constant, and
\[ x - x_0 = -\,2\,\varepsilon\,\frac{\sqrt{2E}}{p_0}\,\frac{1}{1 + k^2\,e^{2\varepsilon\sqrt{2E}\,t}}, \]
where $x_0$ is another constant.
2) The conjugate variables are $p = \dfrac{\dot x}{y^2}$, $q = \dfrac{\dot y}{y^2}$ and they yield the Hamiltonian
\[ H = \frac{1}{2}\left(p^2 + q^2\right) y^2. \]
The Hamilton equations can be written in the form
\[ \frac{dx}{p\,y^2} = \frac{dy}{q\,y^2} = -\,\frac{dp}{0} = -\,\frac{dq}{(p^2 + q^2)\,y} = \frac{dh}{0} = dt, \]
with the convention that the numerator is zero when the denominator is zero. We deduce $p = p_0$, $h = -E$ and $(p_0^2 + q^2)\,y^2 = 2E$, and as in (1), we obtain the variables $x$ and $y$ as functions of $t$. From $p\,dx + q\,dy + y\,dq = 0$ and since $p = p_0$, we obtain $p_0\,(x - \beta) + q\,y = 0$. Since $q^2 y^2 = 2E - p_0^2\,y^2$, we obtain
\[ (x - \beta)^2 + y^2 = \frac{2E}{p_0^2}. \]
The trajectories are circles centered on the axis $Ox$.
3) Since $H = \dfrac{1}{2}\left(p^2 + q^2\right) y^2$, $x$ and $t$ are secondary variables. We search for a generating function in the form $S = -E\,t + p_0\,x + S_1(y)$, which satisfies the partial
differential equation
\[ \frac{\partial S}{\partial t} + \frac{1}{2}\left[\left(\frac{\partial S}{\partial x}\right)^2 + \left(\frac{\partial S}{\partial y}\right)^2\right] y^2 = 0. \]
We obtain
\[ \left(\frac{dS_1}{dy}\right)^2 = \frac{2E}{y^2} - p_0^2, \]
and hence, $S = -E\,t + p_0\,x + \varepsilon \displaystyle\int \sqrt{\frac{2E}{y^2} - p_0^2}\;dy$. The equations of motion are
\[ \frac{\partial S}{\partial E} = -\,t_0 \quad\text{and}\quad \frac{\partial S}{\partial p_0} = \beta. \]
We obtain
\[ \frac{\partial S}{\partial E} = -\,t + \varepsilon \int \frac{dy}{y^2\sqrt{\dfrac{2E}{y^2} - p_0^2}} = -\,t_0, \]
and consequently,
\[ \varepsilon\,\sqrt{2E}\,(t - t_0) = \int \frac{\sqrt{2E}\;dy}{y^2\sqrt{\dfrac{2E}{y^2} - p_0^2}}, \]
which gives $y$ as a function of time. Furthermore,
\[ \frac{\partial S}{\partial p_0} = x - \varepsilon \int \frac{p_0\,dy}{\sqrt{\dfrac{2E}{y^2} - p_0^2}} = \beta, \]
and consequently,
\[ x - \beta = -\,\frac{\varepsilon}{p_0}\,\sqrt{2E - p_0^2\,y^2}, \]
which is the expression obtained in (2).
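These circular trajectories can also be confirmed numerically. A minimal sketch (a hand-rolled RK4 step, with arbitrarily chosen initial data) integrates the Hamilton equations of $H = \frac{1}{2}(p^2 + q^2)\,y^2$ and monitors $(x - \beta)^2 + y^2$:

```python
# Hamilton equations for H = (p**2 + q**2)*y**2/2:
#   dx/dt = p*y**2, dy/dt = q*y**2, dp/dt = 0, dq/dt = -(p**2 + q**2)*y.
def rhs(s):
    x, y, p, q = s
    return [p*y*y, q*y*y, 0.0, -(p*p + q*q)*y]

def rk4_step(s, h):
    k1 = rhs(s)
    k2 = rhs([v + 0.5*h*k for v, k in zip(s, k1)])
    k3 = rhs([v + 0.5*h*k for v, k in zip(s, k2)])
    k4 = rhs([v + h*k for v, k in zip(s, k3)])
    return [v + h*(a + 2*b + 2*c + d)/6
            for v, a, b, c, d in zip(s, k1, k2, k3, k4)]

# Initial data (chosen arbitrarily): x0 = 0, y0 = 1, p0 = 1, q0 = 0.5.
s = [0.0, 1.0, 1.0, 0.5]
beta = s[0] + s[3]*s[1]/s[2]         # beta = x0 + q0*y0/p0, from p0*(x-beta)+q*y = 0
radius2 = (s[0] - beta)**2 + s[1]**2  # should stay constant, equal to 2E/p0**2

for _ in range(2000):
    s = rk4_step(s, 0.001)
final_r2 = (s[0] - beta)**2 + s[1]**2
print(final_r2, radius2)  # both close to 1.25
```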
4) The transformation $T(x, y) = (\alpha\,x, \alpha\,y)$ keeps the Lagrangian invariant. We write $\alpha = e^u$, and the infinitesimal displacement is $W = \begin{bmatrix} x \\ y \end{bmatrix}$. According to Noether's theorem, $p\,x + q\,y = a$, where $a$ is a constant. However, $p = p_0$ and $\left(p^2 + q^2\right) y^2 = 2E$. So, we obtain
\[ p_0^2\,y^2 = 2E - (a - p_0\,x)^2 = 2E - p_0^2\,(x - \beta)^2 \quad\text{where}\quad a = p_0\,\beta. \]
Finally,
\[ (x - \beta)^2 + y^2 = \frac{2E}{p_0^2}, \]
which are the circles found in (2). The trajectories clearly depend on three constants associated with the initial conditions.
Solution to Exercise 20
1) Since the equation of the meridian is $z = f(r)$, the associated area can be written as
\[ S = \int_{r_0}^{r_1} 2\pi\,r\,\sqrt{1 + z'^2}\;dr = 2\pi \int_{r_0}^{r_1} G(z, z', r)\,dr \quad\text{with}\quad G(z, z', r) = r\,\sqrt{1 + z'^2}, \]
where $r_0$ and $r_1$ are the radii of the two circles. The Euler equation is
\[ \frac{d}{dr}\left(\frac{\partial G}{\partial z'}\right) - \frac{\partial G}{\partial z} = 0, \]
which is integrated as $\dfrac{\partial G}{\partial z'} = k$, where $k$ is a constant. Hence, $z'^2 = \dfrac{k^2}{r^2 - k^2}$. The integrals are catenaries of the form
\[ r = b \cosh \frac{z - z_0}{b}, \]
where $z_0$ and $b$ are two constants determined by the circles.
2) The conjugate variable of $z$ is $p = \dfrac{\partial G}{\partial z'} = \dfrac{r\,z'}{\sqrt{1 + z'^2}}$ and gives the Hamiltonian
\[ H = p\,z' - G = -\,\frac{r}{\sqrt{1 + z'^2}} \implies H = -\,\sqrt{r^2 - p^2}. \]
We deduce the Hamilton equations
\[ z' = \frac{\partial H}{\partial p} = \frac{p}{\sqrt{r^2 - p^2}} \quad\text{and}\quad p' = -\,\frac{\partial H}{\partial z} = 0 \implies p = k, \]
and obtain the equation in (1) again. Let us find $S = S_1(r) + S_2(z)$. The Jacobi equation is
\[ H\!\left(z, \frac{\partial S}{\partial z}, r\right) + \frac{\partial S}{\partial r} = 0 \implies r^2 - \left(\frac{dS_1}{dr}\right)^2 - \left(\frac{dS_2}{dz}\right)^2 = 0. \]
We deduce
\[ \frac{dS_2}{dz} = \beta \quad\text{and}\quad \frac{dS_1}{dr} - \sqrt{r^2 - \beta^2} = 0, \]
where $\beta$ is a constant. Finally, $S = \beta\,z + \displaystyle\int_{r_0}^{r} \sqrt{r^2 - \beta^2}\;dr$; therefore, the "motion equations" (meridian equations) are
\[ \frac{\partial S}{\partial \beta} + \gamma = 0 \implies z = \int_{r_0}^{r} \frac{\beta\,dr}{\sqrt{r^2 - \beta^2}} + \gamma, \]
where $\gamma$ is a constant, which corresponds to the result in (1) since $\dfrac{dz}{dr} = \dfrac{\beta}{\sqrt{r^2 - \beta^2}}$.
3) According to (1), the meridians are obtained as extremals of $a = \displaystyle\int_{(C)} n\,ds$ when $n = r$.
4) In the Euclidean plane $(O, i, k)$, $a = \displaystyle\int_{M_0}^{M_1} r\,T\,dM$. This integral is invariant by translation along the axis $Oz$. According to Noether's theorem,
\[ r\,T \begin{bmatrix} 0 \\ 1 \end{bmatrix} = e \implies r\,T \cdot k = e, \]
where $e$ is a constant and $T = \dfrac{1}{\sqrt{1 + z'^2}}\,i + \dfrac{z'}{\sqrt{1 + z'^2}}\,k$. So, the catenaries satisfy Fermat's equation, which is associated with the variation of curvilinear integrals: $\dfrac{d\,nT}{ds} = \operatorname{grad} n$.
Because $\operatorname{grad} n = \operatorname{grad} r = i$, we obtain:
\[ n\,\frac{dT}{ds} + \frac{dn}{ds}\,T = i. \]
The angle between the tangent to the meridian and $i$ (unit vector of the axis $r$) being denoted by $\varphi$, the relations $dr = ds\cos\varphi$, $dz = ds\sin\varphi$ imply
\[ r\,\frac{N}{R} + \cos\varphi\,T = i, \]
where $N$ is the unit normal to the meridian and $R$ is the radius of curvature. By projection of the relation on $N$, we obtain $\dfrac{r}{R} = \sin\varphi = \cos\left(\dfrac{\pi}{2} - \varphi\right)$.
Using a diagram, we can see that at any point $M$, the curvature center of the meridian is symmetric to the intersection point between the normal and the revolution axis $Ok$.
Solution to Exercise 21
1) $L = \dfrac{1}{2}\left(y^2\,\dot x^2 + x^2\,\dot y^2\right)$,
\[ H(x, y, p, q) = \frac{1}{2}\left(\frac{p^2}{y^2} + \frac{q^2}{x^2}\right). \]
Lagrange equations:
\[ 2\,y\,\dot x\,\dot y + \ddot x\,y^2 - x\,\dot y^2 = 0, \qquad 2\,x\,\dot x\,\dot y + \ddot y\,x^2 - y\,\dot x^2 = 0. \]
Hamilton equations:
\[ \dot x = \frac{p}{y^2}, \qquad \dot y = \frac{q}{x^2}, \qquad \dot p = \frac{q^2}{x^3}, \qquad \dot q = \frac{p^2}{y^3}. \]
Jacobi equation:
\[ S = -E\,t + S_1(x, y) \quad\text{and}\quad \frac{1}{y^2}\left(\frac{\partial S_1}{\partial x}\right)^2 + \frac{1}{x^2}\left(\frac{\partial S_1}{\partial y}\right)^2 = 2E. \]
2) u = x p − y q. By using the Hamilton equations, it can be verified that u˙ = 0.
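This can equally be checked with a symbolic Poisson bracket; a short sympy sketch (writing $H$ with the normalization $H = \frac{1}{2}(p^2/y^2 + q^2/x^2)$ — the bracket vanishes for any overall scaling of $H$):

```python
import sympy as sp

x, y, p, q = sp.symbols('x y p q')

H = (p**2/y**2 + q**2/x**2) / 2
u = x*p - y*q

# Poisson bracket [u, H] = u_x*H_p + u_y*H_q - u_p*H_x - u_q*H_y:
bracket = (sp.diff(u, x)*sp.diff(H, p) + sp.diff(u, y)*sp.diff(H, q)
           - sp.diff(u, p)*sp.diff(H, x) - sp.diff(u, q)*sp.diff(H, y))
print(sp.simplify(bracket))  # -> 0, so u is a first integral
```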
3) By using the Hamilton equations, it can be verified that $\dot v = 0$. By using Poisson brackets, we obtain:
\[ w = \frac{p}{y}\,\sin\!\left(\operatorname{Log}\frac{x}{y}\right) - \frac{q}{x}\,\cos\!\left(\operatorname{Log}\frac{x}{y}\right). \]
4) Equation of the trajectories:
\[ x\,y\left[A\cos\!\left(\operatorname{Log}\frac{x}{y}\right) + B\sin\!\left(\operatorname{Log}\frac{x}{y}\right)\right] = u, \]
where $u$, $A$, $B$ are constants.
5) New Jacobi equation:
\[ \xi^2\,f'(\xi)^2 + \eta^2\,g'(\eta)^2 = E\,\xi^2. \]
We deduce
\[ S = -E\,t + \frac{c}{2}\,\operatorname{Log}\eta + \int \sqrt{\frac{E}{2} - \frac{c^2}{4\,\xi^2}}\;d\xi, \]
and we can obtain the equations of the time law and of the trajectories.
Solution to Exercise 22
1) Hamiltonian:
\[ H(x, y, p, q) = \frac{1}{2}\left(\frac{x}{y}\,p^2 + \frac{y}{x}\,q^2\right). \]
Lagrange equations:
\[ \frac{d}{dt}\left(\frac{y}{x}\,\dot x\right) - \frac{1}{2}\left(\frac{\dot y^2}{y} - \frac{y\,\dot x^2}{x^2}\right) = 0, \qquad \frac{d}{dt}\left(\frac{x}{y}\,\dot y\right) - \frac{1}{2}\left(\frac{\dot x^2}{x} - \frac{x\,\dot y^2}{y^2}\right) = 0. \]
Hamilton equations:
\[ \dot x = \frac{x}{y}\,p, \qquad \dot y = \frac{y}{x}\,q, \qquad \dot p = -\,\frac{1}{2}\left(\frac{p^2}{y} - \frac{q^2\,y}{x^2}\right), \qquad \dot q = -\,\frac{1}{2}\left(\frac{q^2}{x} - \frac{p^2\,x}{y^2}\right). \]
Jacobi equation: noting that $t$ is a secondary parameter,
\[ S = -E\,t + S_1(x, y) \quad\text{and}\quad \frac{x}{y}\left(\frac{\partial S_1}{\partial x}\right)^2 + \frac{y}{x}\left(\frac{\partial S_1}{\partial y}\right)^2 = 2E. \]
2) $H$ is independent of $t$:
\[ \frac{1}{2}\left(\frac{x}{y}\,p^2 + \frac{y}{x}\,q^2\right) = E. \]
3) $T_\alpha$ is a Lie group with the infinitesimal displacement $\begin{bmatrix} x \\ -y \end{bmatrix}$; we deduce the first integral $u = p\,x - q\,y$. The Hamilton equations immediately imply $\dot u = 0$.
4) The value is $n = 2$. $T_\lambda$ is a Lie group of infinitesimal displacement $\begin{bmatrix} x \\ y \\ 2t \end{bmatrix}$; we deduce the first integral $v = p\,x + q\,y + 2\,h\,t$. The Hamilton equations imply $\dot v = 0$, i.e. $p\,x + q\,y + 2\,h\,t = 2c$, and since $h = -E$,
\[ p\,x + q\,y = 2E\,(t - t_0). \]
5) The relation is $E\,x\,y = c^2 + E^2\,(t - t_0)^2$.
6) We obtain
\[ \frac{d\theta}{dt} = \frac{c\,E}{c^2 + E^2\,(t - t_0)^2}, \qquad \operatorname{tg}\theta = \frac{E\,(t - t_0)}{c} + a, \]
where $a$ is a constant.
7) The equation of the trajectories is
\[ E\,x\,y\,\cos^2\!\left(\frac{1}{2}\operatorname{Log}\frac{x}{y}\right) = c^2. \]
8) We find
\[ f'(\xi) = \sqrt{\frac{E\,\xi - c^2}{\xi}}, \qquad g(\eta) = c\operatorname{Log}\eta, \]
with
\[ S = -E\,t + \int \sqrt{\frac{E\,\xi - c^2}{\xi}}\;d\xi + c\operatorname{Log}\eta. \]
With $\dfrac{\partial S}{\partial E} = -\,t_0$ and $\dfrac{\partial S}{\partial c}$ being constant, we obtain the time law and the trajectories of the motion.
Solution to Exercise 23
1) Hamiltonian:
\[ H(\rho, \theta, p, q) = \frac{1}{2}\left(\frac{p^2}{\rho^2} + \frac{q^2}{\rho^4}\right). \]
Lagrange equations:
\[ \rho^4\,\dot\theta = C, \qquad \rho^2\,\ddot\rho + \rho\,\dot\rho^2 - 2\,\rho^3\,\dot\theta^2 = 0. \]
Hamilton equations:
\[ \dot\rho = \frac{p}{\rho^2}, \qquad \dot\theta = \frac{q}{\rho^4}, \qquad \dot p = \frac{p^2}{\rho^3} + 2\,\frac{q^2}{\rho^5}, \qquad \dot q = 0. \]
Jacobi equation:
\[ \frac{\partial S}{\partial t} + \frac{1}{2}\left[\frac{1}{\rho^2}\left(\frac{\partial S}{\partial \rho}\right)^2 + \frac{1}{\rho^4}\left(\frac{\partial S}{\partial \theta}\right)^2\right] = 0, \quad\text{with}\quad S = -E\,t + C\,\theta + S_1(\rho). \]
We obtain
\[ S = -E\,t + C\,\theta + \varepsilon \int_{\rho_0}^{\rho} \frac{\sqrt{2E\,\rho^4 - C^2}}{\rho}\;d\rho. \]
Trajectories (equilateral hyperbolas):
\[ \rho^2 \cos\left[2\,(\theta - \theta_0)\right] = \frac{C}{\sqrt{2E}}. \]
Time law:
\[ t - t_0 = \frac{C\operatorname{tg}\left[2\,(\theta - \theta_0)\right]}{4E}. \]
2) The Lagrangian is invariant by the change of $\theta$ into $\theta + \alpha$; by using Noether's theorem, we obtain the first integral $q = C$.
3) We have $\dot u = 0$.
4) By using the Poisson brackets, we obtain the first integral
\[ w = \frac{p}{\rho}\,\sin 2\theta + \frac{q}{\rho^2}\,\cos 2\theta. \]
5) We find $\rho^2 \cos\left[2\,(\theta - \theta_0)\right] = \dfrac{v}{u} = \mathrm{Cte}$.
6) Noting that $t$ is a secondary parameter, we are back to the extremals of $a = \displaystyle\int \sqrt{2E}\,\rho\,ds$, where $s$ is the curvilinear abscissa of the trajectory, and by using Maupertuis' principle, we obtain $\rho^2 \cos\left[2\,(\theta - \theta_0)\right] = \dfrac{C}{\sqrt{2E}}$ again.
Solution to Exercise 24
1)
\[ 2H = \frac{x\,p^2 + y\,q^2}{x + y} + 2\,W(x, y). \]
2)
\[ W(x, y) = \frac{W_1(x) + W_2(y)}{x + y}. \]
3)
\[ H = \frac{1}{2}\,\frac{x\,p^2 + y\,q^2}{x + y}, \]
and the Hamilton equations are
\[ \dot x = \frac{x\,p}{x + y}, \qquad \dot y = \frac{y\,q}{x + y}, \qquad \dot p = -\,\frac{1}{2}\,\frac{y\,(p^2 - q^2)}{(x + y)^2}, \qquad \dot q = -\,\frac{1}{2}\,\frac{x\,(q^2 - p^2)}{(x + y)^2}. \]
Additionally, $H = E$ (constant total energy).
4) We write $S = -E\,t + f(x) + g(y)$, and we obtain $x\,f'^2(x) + y\,g'^2(y) - 2E\,(x + y) = 0$, which implies
\[ f(x) = \int \sqrt{2E + \frac{\alpha}{x}}\;dx \quad\text{and}\quad g(y) = \int \sqrt{2E - \frac{\alpha}{y}}\;dy, \]
where $E > 0$ and $\alpha$ is a constant. Hence, the time law is
\[ t - t_0 = \int \frac{dx}{\sqrt{2E + \dfrac{\alpha}{x}}} + \int \frac{dy}{\sqrt{2E - \dfrac{\alpha}{y}}}, \]
and the trajectories verify
\[ \int \frac{dx}{x\sqrt{2E + \dfrac{\alpha}{x}}} - \int \frac{dy}{y\sqrt{2E - \dfrac{\alpha}{y}}} = \beta, \]
where $\beta$ is a constant, and equivalently
\[ \operatorname{Argch}\left(\frac{4E\,x}{\alpha} + 1\right) - \operatorname{Argch}\left(\frac{4E\,y}{\alpha} - 1\right) = \mathrm{Cte}. \]
5) The Hamilton equations show that $\dot\alpha = 0$; therefore, $\alpha$ is a first integral.
6) Calculate $b(x, y)$ such that $\dot w = 0$, or such that the Poisson bracket $[w, H]$ with respect to $(x, y, p, q)$ is zero. We obtain $w = (p - q)\,b(x, y)$ and, taking into account that $b(1, 1) = 1$, we obtain $b(x, y) = \sqrt{x\,y}$. Similarly, $v = (p + q)\,\dfrac{\sqrt{x\,y}}{x + y}$.
7) Calculate $u = [v, w]$. We obtain $u = \dfrac{p\,x - q\,y}{x + y}$. We find $u^2 + v^2 = 2H$ and $[u, v] = 0$.
8) By eliminating $p$ and $q$ between the three first integrals $u$, $v$, $w$, we obtain
\[ u\,(x + y) = \frac{1}{2\sqrt{x\,y}}\left[v\,(x^2 - y^2) + w\,(x + y)\right]. \]
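The identity $u^2 + v^2 = 2H$ of question 7 reduces to a purely algebraic statement and can be verified symbolically; a short sympy sketch:

```python
import sympy as sp

x, y, p, q = sp.symbols('x y p q', positive=True)

H = (x*p**2 + y*q**2) / (2*(x + y))
u = (p*x - q*y) / (x + y)
v = (p + q) * sp.sqrt(x*y) / (x + y)

print(sp.simplify(u**2 + v**2 - 2*H))  # -> 0
```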
9) The Hamiltonian is invariant for $n = 2$. After writing $\lambda = e^u$, we obtain a Lie group of which the infinitesimal displacement is $W = \begin{bmatrix} x \\ y \\ 2t \end{bmatrix}$. Consequently, $p\,x + q\,y = 2E\,t$. Hence,
\[ t = \frac{1}{2E}\,(p\,x + q\,y) = \frac{1}{2E}\left[x\sqrt{2E + \frac{\alpha}{x}} + y\sqrt{2E - \frac{\alpha}{y}}\right], \]
which corresponds to an integral of (I) by taking into account (II).
10) The infinitesimal displacement of group $G_1$ is $W_1 = \begin{bmatrix} \dfrac{x}{x + y} \\[2mm] \dfrac{-y}{x + y} \\[2mm] 0 \end{bmatrix}$. As $X$ is the parameter of the group, we have
\[ \frac{dx}{x} = \frac{dX}{x + y}, \qquad \frac{dy}{-y} = \frac{dX}{x + y}, \]
which is integrated as
\[ x - y = x_0 - y_0 + X, \qquad x\,y = x_0\,y_0. \]
11) The infinitesimal displacement of group $G_2$ is $W_2 = \begin{bmatrix} \dfrac{\sqrt{x\,y}}{x + y} \\[2mm] \dfrac{\sqrt{x\,y}}{x + y} \\[2mm] 0 \end{bmatrix}$. As $Y$ is the parameter of the group, we have
\[ \frac{dx}{\sqrt{x\,y}} = \frac{dy}{\sqrt{x\,y}} = \frac{dY}{x + y}, \]
which is integrated as
\[ x - y = x_0 - y_0, \qquad 2\sqrt{x\,y} = 2\sqrt{x_0\,y_0} + Y. \]
12) The equations obtained in 10) and 11) make it possible to obtain the change of variables $(X, Y) \to (x, y)$ for $x_0 = 0$ and $y_0 = 0$:
\[ x - y = X, \qquad 2\sqrt{x\,y} = Y, \]
and we obtain the new Hamiltonian associated with $X$, $Y$.
Solution to Exercise 25
1) The equation of the meridian $r = f(z)$ reduces the number of parameters to two. In cylindrical coordinates, the parameters are $z$ and $\theta$. The kinetic energy and potential energy are
\[ T = \frac{1}{2}\,m\left[\left(1 + f'^2(z)\right)\dot z^2 + f^2(z)\,\dot\theta^2\right] \quad\text{and}\quad W = m\,g\,z. \]
2) a) The kinetic energy is second-degree homogeneous with respect to the derivatives of parameters, and T − W is independent of t. We deduce the first integral of energy T + W = Cte:
1 + f 2 (z) z˙ 2 + f 2 (z) θ˙2 + 2 g z = −2 h,
and the Lagrange equation for θ is d dt
∂T ∂ θ˙
=0
⇐⇒
f 2 (z) θ˙ = C
where C is a constant associated with the law of areas. b) The conjugate variables are p=
∂G = m 1 + f 2 (z) z˙ ∂ z˙
and q =
∂G = m f 2 (z) θ˙ ∂ θ˙
and the Hamiltonian is
1 p2 q2 2 H(p, q, z, θ, t) = + + 2m gz . 2 m 1 + f 2 (z) f 2 (z)
266
Introduction to Mathematical Methods of Analytical Mechanics
We deduce the Hamilton equations: dp 1 p2 f (z) f (z) 1 q 2 f (z) = −m g + + 2 dt m 1 + f (z) m f 3 (z) dq =0 dt
⇐⇒
and
m f 2 (z) θ˙ = C.
c) Hamiltonian action: ˆ
t1
ˆ ˙ t) dt = G(z, θ, z, ˙ θ,
t0
X1
Y dX
where
Y = [p, q, h],
X0
⎡ ⎤ z and X = ⎣ θ ⎦ t
⎤ ⎡ ⎤ ⎡ z z is invariant by translation group Tu such that Tu ⎣ θ ⎦ = ⎣ θ + u ⎦, where the t t ⎡ ⎤ 0 infinitesimal displacement vector is W θ = ⎣ 1 ⎦. By Noether’s theorem, we obtain 0 the first integral Y W θ ≡ q =⎡ C. ⎤ Similarly, ⎤Hamiltonian action is invariant by ⎡ z z translation group Tv such that Tv ⎣ θ ⎦ = ⎣ θ ⎦, where infinitesimal displacement t t+v ⎡ ⎤ 0 is W t = ⎣ 0 ⎦. We obtain the first integral Y W t ≡ h = −E, where E is a constant 1 corresponding to the T + W value. 3) The parameter θ is secondary and associated with θ˙ = new generating function G1 = G − m
C f 2 (z)
C ; we define the f 2 (z)
, which, after the reinjection of the first
integral q = m C, becomes
2 1 C2 C2 2 1 + f (z) z˙ + 2 . ˙ t) = G2 (z, z, − 2gz − 2 2 2 f (z) f (z) and we obtain the integrated Lagrange equation:
2 m C2 2 1 + f (z) z˙ + 2 − 2 g z − 2 = Cte. 2 f (z) Similarly, t is a secondary parameter and we obtain the new generating function G3 = G − (−E) t. Thanks to the Maupertuis principle, we return to the study of the
Solutions to Problems and Exercises
ˆ
ˆ
t1
extremals of 2 (E − W ) ds corresponding to the optical path of a= t0 where n= 2 (E − m g z).
267
s1
n ds, s0
Solution to Exercise 26 1) The Hamiltonian is deduced from T +W =
k2 1 2 (r˙ + r2 θ˙2 ) − 2 2 r4
˙ and the Hamilton equations can be written. with pr = r˙ and pθ = r2 θ, 2) The two first integrals are the law of areas r2 θ˙ = C and the conservation of energy k2 1 2 = E. (r˙ + r2 θ˙2 ) − 2 2 r4 We obtain ˆ θ − θ0 =
r
r0
C dr 2 r2 k C2 − 2 + 2E 4 r r
ˆ and t − t0 =
r r0
dr 2
k C2 − 2 + 2E 4 r r
corresponding to the equation of the trajectory and the time law, respectively. 3) For E = 0, we immediately obtain expressions for F (r) and G(r). a) The trajectory is r=
k cos (θ − θ0 ), C
corresponding to circles passing through O. b) The time law associated with point O is k2 t = 3 Arcsin C
Cr k
kr − 2C 2
1−
C 2 r2 . k2
,
268
Introduction to Mathematical Methods of Analytical Mechanics
π k2 When θ varies from π, time varies from T = 3 along the trajectory. C ˆ s1 k2 4) We obtain n ds with n = 2 E + 4 . For E = 0, we are back to the r s0 ˆ s1 √ 2k ds. Writing the trajectories in the form r = r(θ), we obtain a extremals of r2 s0 Euler equation independent of θ, which can be integrated as ˆ θ − θ0 =
r
√
r0
dr , − r2
a2
where a is a constant. We obtain circles again in the planar polar representation. 5) The Lagrange equations of motion are r2 θ˙ = C
and r¨ −
2k 2 C2 + = 0. r3 r5
√ k C3 2 and for initial condition θ˙ = C 2k 2 C4 corresponding to a uniform circular motion. We deduce E = 2 . 8k We obtain constant r for r0 =
Solution to Exercise 27 1) a) We verify that
x0 x0 x , = Tα ◦ T β = Tα+β y0 y0 y
T0 = 1,
Tα ◦ T−α = 1.
So, transformations (Tα )α form an abelian Lie group. b) For α = 0, x0 = x and y0 = y, we obtain ∂x = 1 + x2 ∂α |α=0
and
∂y = −x y. ∂α |α=0 ⎡
Consequently, the infinitesimal displacement is W 1 (x, y) = ⎣ c) From Noether’s theorem, the associated first integral is u = (1 + x2 )p − x y q.
1 + x2 −xy
⎤ ⎦.
Solutions to Problems and Exercises
269
2) a) We have the relations y dx =
dy = dα 0
(a zero denominator corresponds to a zero numerator), where the Lie group is associated with Tβ such that Tβ
x0 y0
x = y
⇐⇒
⎧ ⎨
x = x0 +
⎩y = y 0
b) The second first integral is v =
β y0 .
p . y
3) The Poisson bracket for u and v is w = [u, v] =
∂u ∂v ∂u ∂v ∂u ∂v ∂u ∂v + − − ∂x ∂p ∂y ∂q ∂p ∂x ∂q ∂y
=⇒
w=p
x − q. y
This is the third first integral, the infinitesimal displacement of which associated x with the third group of invariance is W 3 (x, y) =
y . We have the relations −1
y dx = −dy = dα. x Hence, y = y0 + α and Tα
x0 y0
x = y
x y0 = . The Lie group associated with Tα is x0 y0 − α
⇐⇒
⎧ x0 y0 ⎪ ⎨x = y − α 0 . ⎪ ⎩ y = y0 − α
We verify this is an abelian group. ⎧ (1 + x2 )p − x y q = a, ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎨p = b, 4) y ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎩ p x − q = c, y
270
Introduction to Mathematical Methods of Analytical Mechanics
where a, b and c are constants. By eliminating p and q between the three relations, we obtain the representation of equilateral hyperbolas: (cx + b) y = a.
Solution to Exercise 28 1) Jacobi equation: 2 2 2 ∂S ∂S ∂S 1 1 1 ∂S + 2 + 2 + ∂t 2 ∂r 2r ∂θ 2 r sin2 θ ∂ϕ + f (r) +
1 1 h(ϕ) = 0, g(θ) + 2 r2 r sin2 θ
justifying to write S in the form: S = −E t + S2 (r) + S3 (θ) + S4 (ϕ), we obtain
ˆ
S = −E t + ˆ +
2β sin2 θ
2α + 2E − 2 f (r) dr r2 ˆ − 2 g(θ) − 2 α dθ + −2 β − 2 h(ϕ) dϕ,
where α and β are constants. The equations for time law and trajectories are, respectively ˆ ⎧ ∂S dr ⎪ ⎪ = t0 , = −t + ⎪ ⎪ ∂E 2α ⎪ ⎪ ⎪ + 2E − 2 f (r) ⎪ ⎪ r2 ⎪ ⎪ ⎪ ⎪ ˆ ˆ ⎪ ⎪ ⎪ dθ dr ⎨ ∂S = = a, − ∂α 2β 2α 2 ⎪ − 2 g(θ) − 2 α r + 2E − 2 f (r) ⎪ ⎪ r2 ⎪ sin2 θ ⎪ ⎪ ⎪ ⎪ ˆ ˆ ⎪ ⎪ dϕ ∂S dθ ⎪ ⎪ − = b, = ⎪ ⎪ ⎪ ∂β −2 β − 2 h(ϕ) 2β ⎪ 2 ⎩ − 2 g(θ) − 2 α sin θ sin2 θ
Solutions to Problems and Exercises
271
where t0 , a and b are constants. Consequently, ⎧ 2 α ⎪ dr ⎪ ⎪ =2 + E − f (r) , ⎪ 2 ⎪ dt r ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎨ 2 β dθ 2 − α − g(θ) , = 4 ⎪ dt r sin2 θ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ 2 ⎪ ⎪ dϕ 2 ⎪ ⎪ (−β − h(ϕ)) . = 4 ⎩ dt r sin4 θ 2) Hamilton equations: From the Hamiltonian p2ϕ g(θ) h(ϕ) 1 2 p2θ p + + f (r) + 2 + 2 , + 2 H= 2 2 r r2 r r sin θ r sin2 θ we deduce ⎧ pθ 1 ⎪ ⎪ r˙ = pr , θ˙ = 2 , ϕ˙ = 2 pϕ ⎪ 2 ⎪ r r sin θ ⎪ ⎪ ⎪ ⎪ ⎪ ⎨ p2ϕ p2 2 g(θ) 2 h(ϕ) − f (r) + , p˙r = 3θ + 3 + 3 2 3 ⎪ r r r sin θ r sin2 θ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ p2 cos θ g (θ) 2 h(ϕ) h (ϕ) ⎪ ⎩ p˙θ = ϕ − cos θ, p ˙ . + = − ϕ r2 r2 sin3 θ r2 sin3 θ r2 sin2 θ We can add h˙ = 0, which corresponds to the conservation of energy. Finally, we obtain ⎧ 4 2 r ϕ˙ sin2 θ = −2 β − 2 h(ϕ), ⎪ ⎪ ⎪ ⎪ ⎪
⎪ ⎪ ⎨ 4 ˙2 β − g(θ) , r θ = 2 −α + sin2 θ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎩ r˙ 2 + 2 f (r) − 2 α = 2E. r2
272
Introduction to Mathematical Methods of Analytical Mechanics
Lagrange equations: ⎧ d 2 h (ϕ) ⎪ ⎪ r ϕ˙ sin2 θ = − 2 ⎪ ⎨ dt r sin2 θ
=⇒
⎪ ⎪ d 2 ˙ g (θ) 2 β cos θ ⎪ ⎩ r θ =− 2 − 2 dt r sin3 θ r
r2 ϕ˙ sin2 θ
=⇒
2
+ 2 h(ϕ) = −2 β,
2 r2 θ˙ = 2 −α +
β − g(θ) , sin2 θ
to which the conservation of energy is added. 3) For f = g = h = 0, we obtain the equation of a straight line on which the motion is uniform. Solution to Exercise 29 1) The Lagrange equation for y is d y˙ cos2 x = 0 dt
y˙ cos2 x =
βω . 2
z˙ sin2 x =
γω . 2
=⇒
The Lagrange equation for z is d z˙ sin2 x = 0 dt
=⇒
Instead of the Lagrange equation for x, we use the conservation of energy: 2 x˙ 2 + y˙ 2 cos2 x + z˙ 2 sin2 x +
ω2 = 2 h, 2 sin2 2x
where h is a constant. Taking into account the initial conditions and the first two integrals, we obtain 2 β 2 ω2 γ2 2 2 x˙ + + + 4 cos2 x sin2 x sin2 2x [11.1] 2 ω2 2 2 2 2ω = 1 + β + γ + 4α = k , 2 2 with k 2 = 1 + β 2 + γ 2 + 4 α2 . Multiplying by 2 sin2 2x (for 2x = n π) and denoting u = cos 2x, we obtain u˙ 2 + k 2 ω 2 u2 + (γ 2 − β 2 ) ω 2 u − 4 α2 ω = 0.
Solutions to Problems and Exercises
273
Deriving and simplifying by 2 u, ˙ we obtain u ¨ + k2 ω2 u +
γ2 − β2 2 ω =0 2
and by denoting v = u −
β2 − γ2 , we obtain 2 k2
v¨ + k 2 ω 2 v = 0. Hence, v = B cos k ω t + C sin k ω t and u = cos 2x =
β2 − γ2 + B cos k ω t + C sin k ω t. 2 k2
Taking into account the initial conditions, we obtain cos 2x =
2α β2 − γ2 (1 − cos k ω t) − sin k ω t. 2 k2 k
2) With respect to x, ˙ from equation [11.1], we have x˙ 2 = f (x), which can be represented in the form 4 x˙ 2 1 + 2 β 2 sin2 x + 2 γ 2 cos2 x . = k2 − 2 ω sin2 2x π The variations of f (x) prove that when x goes to 0 or , the right-hand side goes 2 π to −∞, but at the initial instant, we have x = and x˙ 2 = α2 ω 2 > 0. 4 π Thus, x˙ is zero and changes sign at x1 , where > x1 > 0, and at x2 , where 4 π π π! > x2 > . Consequently, x belongs to 0, . 2 4 2 The parameter x remains constant if, at the initial instant, x˙ = 0 and x ¨ = 0. The Lagrange equation associated with x is ω 2 cos 2x 2x ¨ − sin x cos x z˙ 2 − y˙ 2 = . sin3 2x
274
Introduction to Mathematical Methods of Analytical Mechanics
π π At the initial instant x = and x˙ 2 = α2 ω 2 and conditions x˙ 0 = 0, x = 4 4 imply α = 0 and sin 2x0 = 1, cos 2x0 = 0. Furthermore, from the differential equation obtained in(1) connecting u and u ¨, where u = cos 2x, the condition x ¨0 = 0 implies 12 γ 2 − β 2 ω 2 = 0 and γ = ±β. Finally, x remains constant for α = 0 and γ = ±β. 3) If the constraint y˙ + z˙ = ϕ(x), then we obtain the Lagrange equations d y˙ cos2 x = λ, dt
d z˙ sin2 x = λ, dt
where λ is a Lagrange multiplier. We deduce d y˙ cos2 x − z˙ sin2 x = 0 dt
=⇒
y˙ cos2 x − z˙ sin2 x = D,
where D is a constant. We note that the constraint is not independent of time. Taking into account the constraint y˙ cos2 x − [ϕ(x) − y] ˙ sin2 x = D, we deduce y˙ = D + ϕ(x) sin2 x
and similarly z˙ = −D + ϕ(x) cos2 x.
For x, the Lagrange equation is ω 2 cos 2x =0 2x ¨ − sin x cos x ϕ(x)2 cos2 x − sin2 x − 2 D ϕ(x) − sin3 2x which yields a differential equation in the form x˙ 2 = F (x) by integration. If ϕ(x) = 0, then the constraint is independent of time. From y˙ = D = β ω
and z˙ = −D = −β ω,
we deduce y = β ω t + y0
and z = −β ω t + z0 .
Hence, the conservation of energy can be written as 4 x˙ 2 + β 2 ω 2 +
ω2 = 2 sin2 2x
2 α2 + β 2 +
1 2
ω2
or 4 x˙ 2 sin2 2x − ω 2 1 + 4 α2 sin2 2x + ω 2 = 0.
Solutions to Problems and Exercises
275
Let us denote u = cos 2x; then, 2 x˙ sin 2x = −u˙ and sin2 2x = 1 − u2 . Hence, u˙ 2 + ω 2 1 + 4 α2 u2 − 4 α2 ω 2 = 0
=⇒
u ¨ + ω 2 1 + 4 α2 u = 0
and u = B cos k ω t + C sin k ω t with k 2 = 1 + 4 α2 .
For t = 0, x =
π , x˙ = α ω, 4
cos 2x = 0 = B Therefore, C = − cos 2x = −
and
2 x˙ sin 2x = 2 x˙ = 2 α ω = −C k ω.
2α . Finally, k
2α sin k ω t. k
Solution to Exercise 30 For a canonical change of variables, we must have α2 − β 2 = 1. The generating 1 function is F (q, s) = − (q − s)2 . 2β Solution to Exercise 31 1) The change of variables is canonical if and only if [r, s]p,q = 1. We verify that ∂r ∂s ∂r ∂s − = 1. ∂p ∂q ∂q ∂p 2) The generating function S satisfies dS = p dq − r ds. Furthermore, d(S − p q) = −r ds − q dp and F (s, p) = S − pq, with ⎧ ∂F ⎪ −s ⎪ ⎪ ⎨ ∂s = −r = −e cos p ⎪ ∂F ⎪ ⎪ ⎩ = −q = −e−s sin p ∂p
.
We obtain F = e−s cos p + Cte. Then, S = e−s cos [Arcsin (q es )] + q Arcsin (q es )]. 3) We have G = S + r s + Cte = F + p q + r s + Cte. Hence, q G(q, r) = r + q Arctg ( ) − r Ln q 2 + r2 + Cte. r
276
Introduction to Mathematical Methods of Analytical Mechanics
Solution to Exercise 32 Using calculations similar to those used in Exercise 31, we obtain 1) α =
1 and β = 1. 2
r2 tg 2q + Cte, 2 p r r + Cte, p − r2 − Arccos √ G(p, r) = − 2 2 p
2) S(q, s) = −
s2 cotg 2q + Cte, 2
F (r, q) =
s p H(p, s) = − p − r2 + Arcsin 2 2
s √ p
+ Cte.
Solution to Exercise 33

This is the converse problem: dS = p dq − r ds + (K − H) dt. S is independent of t, which implies that K = H. Therefore,

dS = p dq − r ds  and  p = ∂S/∂q, r = −∂S/∂s.

We deduce

p = s²/sin² 2q  and  r = s cotg 2q.

Finally,

r = √p cos 2q  and  s = √p sin 2q.
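A numerical sanity check of Exercise 33 (a sketch, not from the book): starting from S(q, s) = −(s²/2) cotg 2q, recover p = ∂S/∂q and r = −∂S/∂s by finite differences and verify the closed-form expressions above at a sample point.

```python
import math

# Exercise 33 check: p = s²/sin²2q, r = s cot 2q, r = sqrt(p) cos 2q,
# s = sqrt(p) sin 2q, all derived from S(q, s) = -(s²/2) cot 2q.
def S(q, s):
    return -(s**2 / 2) / math.tan(2 * q)

q, s, h = 0.4, 0.9, 1e-6
p = (S(q + h, s) - S(q - h, s)) / (2 * h)      # p = dS/dq
r = -(S(q, s + h) - S(q, s - h)) / (2 * h)     # r = -dS/ds

print(abs(p - s**2 / math.sin(2 * q)**2))      # ~0
print(abs(r - s / math.tan(2 * q)))            # ~0
print(abs(r - math.sqrt(p) * math.cos(2 * q))) # ~0
print(abs(s - math.sqrt(p) * math.sin(2 * q))) # ~0
```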
Solution to Exercise 34

The calculations of 1) and 2) are similar to those in Exercises 32 and 33. As a result, we get:

F(p, s) = −(e^s − 1)² tg p + Cte,

where F is obtained by writing S = F + p q.
Solutions to Problems and Exercises
277
Solution to Exercise 35

1) The elliptical coordinate q₁ = Cte corresponds to the ellipses of foci F and F′, center O and half-axes a = ch q₁, b = |sh q₁|. The foci are at a distance 1 from O. The coordinate q₂ = Cte corresponds to the hyperbolas of foci F and F′. According to the Poncelet theorem, the ellipses and hyperbolas are orthogonal.

2) The potentials associated with F and F′ are W_F = −μ/r and W_F′ = −μ′/r′, respectively, where r is the distance from M to F and r′ from M to F′. We deduce

2 T(q₁, q₂, q̇₁, q̇₂) = (sh² q₁ + sin² q₂)(q̇₁² + q̇₂²),
W(q₁, q₂) = −[(μ + μ′) ch q₁ + (μ − μ′) cos q₂]/(sh² q₁ + sin² q₂).

3) Consequently, 2 T = (U₁ + U₂)(V₁ q̇₁² + V₂ q̇₂²) with U₁ = sh² q₁, V₁ = 1, U₂ = sin² q₂, V₂ = 1, and W = (W₁ + W₂)/(U₁ + U₂) with W₁ = −(μ + μ′) ch q₁, W₂ = −(μ − μ′) cos q₂.

We will now consider Liouville's integrability, with

p₁ = (sh² q₁ + sin² q₂) q̇₁,  p₂ = (sh² q₁ + sin² q₂) q̇₂

and

2 H = (p₁² + p₂²)/(sh² q₁ + sin² q₂) − (2 α ch q₁ + 2 β cos q₂)/(sh² q₁ + sin² q₂),

where α = μ + μ′ and β = μ − μ′. Let us denote S = −E t + S₁(q₁) + S₂(q₂). The Jacobi equation is H + ∂S/∂t = 0 and

S₁′² + S₂′² − 2 α ch q₁ − 2 β cos q₂ = 2 E (sh² q₁ + sin² q₂),
which can be separated into two equations:

S₁′² = 2 α ch q₁ + 2 E sh² q₁ + k,  with S₁ = S₁(q₁, E, k),
S₂′² = 2 β cos q₂ + 2 E sin² q₂ − k,  with S₂ = S₂(q₂, E, k),

where k is a constant. We obtain S₁ and S₂, and deduce

∂S/∂E = Cte ⟹ t − t₀ = ∂S₁/∂E + ∂S₂/∂E, the time law,
∂S/∂k = Cte ⟹ ∂S₁/∂k + ∂S₂/∂k = 0, which gives the equation of the trajectory.

4) In the case of the conservation of energy, we obtain

P dQ − R dS + (K − H) dt = (∂F/∂t) dt + (∂F/∂Q) dQ + (∂F/∂R) dR − d(R S),

hence

∂F/∂Q = P,  ∂F/∂R = S,  K − H = ∂F/∂t = 0.

The new Hamilton equations are

Ṙ = −∂K/∂S = 0, then R = R₀,
Ṡ = ∂K/∂R |_{R=R₀} = Ṡ₀, then S = Ṡ₀ t + S₀.

Consequently, K(R₀) = H = E = β₁. So, the Jacobi equation can be written as

H(∂F/∂Q, Q) = β₁.

A solution includes β₁ and n − 1 non-additive parameters β₂, …, βₙ, i.e. F(Q, B), where B = [β₁, …, βₙ]ᵀ = B⁰. Since B = B⁰ and R = R₀, we can write B = G(R₀), with G = [g₁, …, gₙ]ᵀ, and the solution giving F(Q, G(R₀)).
The equations of motion are S = ∂F/∂R₀. Generally, we choose G = 1, where 1 is the identity, and we obtain the equations of motion of the type obtained in (4).

5) Refer to the book.

Solution to Exercise 36

A. 1) 2 T = m (ṙ² + r² θ̇²) and W = −k/r. The conjugate variables and the Hamiltonian are:

p_r = m ṙ,  p_θ = m r² θ̇  and  H = (1/2m)(p_r² + p_θ²/r²) − k/r.

2) Since t is a secondary parameter, we can write S in the form S = −E t + F(θ, r). We obtain the Jacobi equation:

(1/2m) [(∂F/∂r)² + (1/r²)(∂F/∂θ)²] − k/r = E

or

r² (∂F/∂r)² − 2 m k r − 2 m E r² + (∂F/∂θ)² = 0,

which allows us to write F(θ, r) = S₁(r) + S₂(θ) and yields

(∂S₁/∂r)² = 2 m (E + k/r − α²/r²),  S₂ = α √(2m) θ.

Hence,

S = −E t + √(2m) α θ + ε √(2m) ∫ √(E + k/r − α²/r²) dr,

where α is a constant and ε = ±1. This leads to the time law:

∂S/∂E = Cte ⟹ t − t₀ = ε √(m/2) ∫ dr/√(E + k/r − α²/r²)
and the equation of the trajectory:

∂S/∂α = √(2m) θ − ε √(2m) α ∫ dr/(r² √(E + k/r − α²/r²)) = α₀,

where t₀ and α₀ are constants. The law of areas is p_θ = Cte or r² θ̇ = C.

3) If the trajectory is closed, then the motion is periodic. We use the action variables with r₁ ≤ r ≤ r₂:

J_r = ε √(2m) ∮ √(E + k/r − α²/r²) dr  and  J_θ = α √(2m) ∮ dθ = 2 π α √(2m),

where the integral denoted by ∮ is taken over a period of the motion.

4) We denote f(z) = √(E + k/z − α²/z²); r₁ and r₂ are the roots of the polynomial E z² + k z − α², written as E (z − r₁)(z − r₂). Since r₁ ≤ r ≤ r₂, we must have E < 0 and

∫ from r₁ to r₂ of √(E (r − r₁)(r − r₂))/r dr = ∫ from r₁ to r₂ of f(r) dr.

5) We have r₁ r₂ = −α²/E > 0 and r₁ + r₂ = −k/E > 0; hence, 0 < r₁ < r₂. Consider the plane shown in Figure 11.1. The domain located between Γ, Γ₁ and Γ₂ is denoted by D. Then,

∫_Γ f(r) dr + ∫_Γ₁ f(r) dr = ∫_Γ₂ f(r) dr,

which implies

∫_Γ f(r) dr = J_r/√(2m).

We have ∫_Γ₁ f(r) dr = 2 i π R₀, where R₀ is the residue associated with the single pole z = 0: R₀ = lim_{z→0} z f(z) = −i α, because with the same determination for f(z), we have arg f(z) = 3π/2. Hence, J_r/√(2m) = −2 i π (R₀ + R_∞). Let us calculate R_∞: by the change of variable u = 1/z, we have f(z) dz = g(u) du, where
g(u) = −(√(−E)/u²) (1 + (k/E) u − (α²/E) u²)^(1/2). Hence, R_∞ = i k/(2 √(−E)), and finally,

J_r = −2 π √(2m) α + π k √(2m)/√(−E).
Figure 11.1. Calculation of Jr using the residue method
Consequently,

J_r + J_θ = π k √(2m)/√(−E),  then  2 m π² k²/(J_r + J_θ)² = −E.

6) The frequencies of the motion are:

ν_r = ν_θ = ∂E/∂J_r = ∂E/∂J_θ = 4 m π² k²/(J_r + J_θ)³

and the common period of the motion is

τ = 1/ν_r = 1/ν_θ = π k √(m/(−2 E³)).

B. In the three-dimensional space and spherical representation,

2 T = m (ṙ² + r² θ̇² + r² sin² θ φ̇²),  W = −k/r.
The conjugate variables and the Hamiltonian are:

p_r = m ṙ,  p_θ = m r² θ̇,  p_φ = m r² sin² θ φ̇,

and

H = (1/2m) (p_r² + p_θ²/r² + p_φ²/(r² sin² θ)) − k/r.

We deduce the Jacobi equation:

(1/2m) [(∂S/∂r)² + (1/r²)(∂S/∂θ)² + (1/(r² sin² θ))(∂S/∂φ)²] − k/r = E,

with

S = −E t + √(2m) ∫ √(E + k/r − α²/r²) dr + √(2m) ∫ √(α² − β²/sin² θ) dθ + √(2m) β φ,

where α and β are constants. We obtain the action variables:

J_φ = ∮ p_φ dφ = √(2m) β ∮ dφ = 2 √(2m) π β,
J_θ = ∮ p_θ dθ = √(2m) ∮ √(α² − β²/sin² θ) dθ,
J_r = ∮ p_r dr = √(2m) ∮ √(E + k/r − α²/r²) dr = √(2m) 2π (−α + k/(2 √(−E))).

To calculate J_θ, we express the kinetic energy both in the spherical and in the planar polar representations:

(2T)/m = ṙ² + r² θ̇² + r² sin² θ φ̇² = ṙ² + r² ψ̇²,

where ψ is the azimuthal angle of the planet P on its orbit (the angle denoted by θ in A). We deduce:

p_r ṙ + p_ψ ψ̇ = p_r ṙ + p_θ θ̇ + p_φ φ̇ ⟹ p_θ dθ = p_ψ dψ − p_φ dφ

and

J_θ = ∮ p_ψ dψ − ∮ p_φ dφ = √(2m) 2π (α − β).
Consequently,

√(2m) α = (J_θ + J_φ)/(2π) ⟹ J_r = ∮ √(2 m E + 2 m k/r − (J_θ + J_φ)²/(4 π² r²)) dr.

By a calculation similar to that of A,

J_r = −(J_θ + J_φ) + π k √(2m/(−E)),

hence

H ≡ E = −2 π² m k²/(J_r + J_θ + J_φ)²

and the common frequency is

ν = ∂H/∂J_r = ∂H/∂J_θ = ∂H/∂J_φ = 4 π² m k²/(J_r + J_θ + J_φ)³.
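A numerical check of the radial action integral (a sketch, not from the book; the constants m, k, α and E below are arbitrary test values). For E < 0, the integral J_r = 2 √(2m) ∫ from r₁ to r₂ of √(E + k/r − α²/r²) dr should agree with the closed form 2π √(2m) (−α + k/(2 √(−E))), and the energy formula E = −2 π² m k²/(J_r + J_θ)² should then hold with J_θ = 2π α √(2m).

```python
import math

# Exercise 36 check: quadrature of the radial action vs. its closed form.
m, k, alpha, E = 1.0, 1.0, 0.8, -0.3

disc = math.sqrt(k**2 + 4 * E * alpha**2)       # roots of E r² + k r - α² = 0
r1, r2 = sorted([(-k + disc) / (2 * E), (-k - disc) / (2 * E)])

# substitution r = c + d sin u removes the square-root endpoint singularity
c, d, N, total = (r1 + r2) / 2, (r2 - r1) / 2, 20000, 0.0
for i in range(N):
    u = -math.pi / 2 + (i + 0.5) * math.pi / N
    rr = c + d * math.sin(u)
    total += math.sqrt(max(E + k / rr - alpha**2 / rr**2, 0.0)) * d * math.cos(u)
Jr_num = 2 * math.sqrt(2 * m) * total * math.pi / N
Jr = 2 * math.pi * math.sqrt(2 * m) * (-alpha + k / (2 * math.sqrt(-E)))
Jtheta = 2 * math.pi * alpha * math.sqrt(2 * m)
print(Jr_num, Jr)                                        # agree closely
print(-2 * math.pi**2 * m * k**2 / (Jr + Jtheta)**2)     # recovers E
```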
Solution to Exercise 37

1) The solutions of the system are:

x₁ = λ e^(−t) + μ e^(2t),
x₂ = −(λ/2) e^(−t) + μ e^(2t),

where λ and μ are constants depending on initial conditions. The equilibrium position corresponds to x₁ = x₂ = 0. The equilibrium position is Lyapunov unstable because when t goes to infinity, x₁ and x₂ go to infinity. By writing the differential system

dx₁/dt = f₁(x₁, x₂),  dx₂/dt = f₂(x₁, x₂),

we obtain

∂f/∂x = [[0, 2], [1, 1]],  where x = [x₁, x₂]ᵀ and f = [f₁, f₂]ᵀ,
with Tr(∂f/∂x) ≡ div f(x) = 1. The divergence of f(x) is not zero, and the transformation g^t does not conserve the areas in the plane.

2) – At t = 0, x₁(0) = λ + μ and x₂(0) = μ − λ/2. Therefore, −1 ≤ λ + μ ≤ 1 and −1 ≤ μ − λ/2 ≤ 1.

– At t = 1,

x₁(1) = λ/e + μ e²,  x₂(1) = −λ/(2e) + μ e²,

therefore

λ = (2e/3)(x₁ − x₂),  μ = (x₁ + 2 x₂)/(3 e²).

We deduce the inequalities:

−1 ≤ (x₁/3)(1/e² + 2e) + (2 x₂/3)(1/e² − e) ≤ 1,
−1 ≤ (x₁/3)(1/e² − e) + (x₂/3)(2/e² + e) ≤ 1,

which represent, in the plane (x₁, x₂), the interior surface of the parallelogram delimited by four lines. It can be explicitly verified that g^t does not conserve areas.

Solution to Exercise 38

1) The equation can be written as

dx/dt = y,  dy/dt = −sin x.

The equilibrium position is obtained for x = 0. The study of linearized motions is obtained by considering f(x, y) = [y, −sin x]ᵀ, whose tangent linear map is

∂f/∂x = [[0, 1], [−cos x, 0]],  where x = [x, y]ᵀ.
We have Tr(∂f/∂x) = div f = 0. The volume is conserved in the phase space.
Furthermore,

∂f/∂x (0, 0) = [[0, 1], [−1, 0]],

whose eigenvalues are i and −i. The point (0, 0) is a stable equilibrium for the linearized system. However, since the real parts of the eigenvalues are zero, we cannot conclude that the same holds for the original system. Nonetheless, from a mechanical point of view, with the kinetic energy T = ½ ẋ² and the potential energy W = −cos x, the Lejeune–Dirichlet theorem allows us to conclude on an equilibrium which is Lyapunov stable. Due to the conservation of energy, the equilibrium cannot be asymptotically stable (the energy cannot go to zero when t goes to infinity).

2) The mechanical method concludes that there is a Lyapunov stable equilibrium at x = 0; however, x = 0 is not asymptotically stable. The "volume" is conserved in the phase space.

Solution to Exercise 39

1) The equilibrium points correspond to sin x = 0 and sin y = 0, i.e. to x = 0 and y = 0 (modulo π).

2) We limit ourselves to x = 0, y = 0; the reader can study the other cases. Denoting f(x, y) = −sin y and g(x, y) = sin x + sin y, we obtain at (0, 0)

∂F/∂X = [[∂f/∂x, ∂f/∂y], [∂g/∂x, ∂g/∂y]] at (x = 0, y = 0) = [[0, −1], [1, 1]],

where X = [x, y]ᵀ and F = [f, g]ᵀ.

3) Near the equilibrium position x = 0, y = 0, the linearized system is

dx/dt = −y,  dy/dt = x + y,

which implies that

d²x/dt² − dx/dt + x = 0  and  d²y/dt² − dy/dt + y = 0.

The solutions for x or y are λ e^(r₁ t) + μ e^(r₂ t), where r_j = (1 ± i√3)/2, j ∈ {1, 2}. The solutions of the linearized equation are Lyapunov unstable since the real parts of r₁ and r₂ are positive.
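A numerical illustration of this instability (a sketch, not from the book): the roots of λ² − λ + 1 = 0 indeed have real part 1/2, and a crude integration of the linearized system from a small perturbation exhibits the corresponding growth.

```python
import math, cmath

# Exercise 39: matrix [[0, -1], [1, 1]] has characteristic polynomial
# λ² - λ + 1 = 0 with roots (1 ± i√3)/2.
a, b, c = 1.0, -1.0, 1.0
d = cmath.sqrt(b * b - 4 * a * c)
root1, root2 = (-b + d) / (2 * a), (-b - d) / (2 * a)
print(root1, root2)                            # real parts 1/2 > 0

# Euler integration of dx/dt = -y, dy/dt = x + y up to t = 10
x, y, dt = 1e-3, 0.0, 1e-3
for _ in range(10000):
    x, y = x + dt * (-y), y + dt * (x + y)
print(math.hypot(x, y) / 1e-3)                 # the norm grows by a large factor
```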
Solution to Exercise 40

1) The kinetic energy is

T = ½ m (ẋ² + ẏ² + ż²) = ½ [ẋ² + ẏ² + (∂f/∂x ẋ + ∂f/∂y ẏ)²].

The potential energy is W = m g f(x, y) = g f(x, y) (the mass is taken equal to 1).

2) The point O belongs to the surface and f(0, 0) = 0. The Lejeune–Dirichlet theorem implies that O corresponds to a minimum of the potential W, and for (x, y) ≠ (0, 0), we have f(x, y) > 0 near (0, 0).

3) In order for O to be an extremum point, it is necessary that

p(0, 0) = ∂f/∂x (0, 0) = 0,  q(0, 0) = ∂f/∂y (0, 0) = 0.

The Taylor–Lagrange development of f(x, y) near (0, 0) is

f(x, y) − f(0, 0) = x ∂f/∂x(0, 0) + y ∂f/∂y(0, 0) + (1/2!) (x ∂/∂x + y ∂/∂y)^(2) f(0, 0) + (1/3!) (x ∂/∂x + y ∂/∂y)^(3) f(h, k),

where 0 < h < x and 0 < k < y. In this case,

f(x, y) = (1/2!) [x² ∂²f/∂x²(0, 0) + 2 x y ∂²f/∂x∂y(0, 0) + y² ∂²f/∂y²(0, 0)] + (1/3!) (x ∂/∂x + y ∂/∂y)^(3) f(h, k),

and at (0, 0),

f(x, y) − f(0, 0) > 0 ⟹ s² − r t < 0.
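A numerical aside (a sketch, with r, s, t chosen arbitrarily, not from the book): when r > 0, t > 0 and s² − r t < 0, the Hessian [[r, s], [s, t]] is positive definite, so both of its eigenvalues, and hence both squared eigenfrequencies computed below, are positive.

```python
import math

# Exercise 40: positive definiteness of [[r, s], [s, t]] when s² - rt < 0.
r, s, t = 2.0, 1.0, 3.0
assert s * s - r * t < 0

half_trace = (r + t) / 2
gap = math.sqrt(((r - t) / 2)**2 + s * s)   # eigenvalues of a symmetric 2x2
lam1, lam2 = half_trace - gap, half_trace + gap
print(lam1, lam2)                           # both positive
```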
We denote r(0, 0) = r, s(0, 0) = s, t(0, 0) = t; the linearization of the kinetic energy and the potential energy in the neighborhood of (0, 0) can be written as

T = ½ [ẋ, ẏ] [[1 + p², p q], [p q, 1 + q²]] [ẋ, ẏ]ᵀ

and

W = (g/2) [x² ∂²f/∂x²(0, 0) + 2 x y ∂²f/∂x∂y(0, 0) + y² ∂²f/∂y²(0, 0)] = (g/2) [x, y] [[r, s], [s, t]] [x, y]ᵀ.

We denote

A = [[1 + p², p q], [p q, 1 + q²]],  B = [[r, s], [s, t]].

The eigenfrequencies ω of the system are associated with the eigenvalues λ = ω² of B with respect to A; since at (0, 0), p = q = 0, the characteristic polynomial is

det(B − λ A) = λ² − λ g (r + t) + g² (r t − s²) = 0.

The sum of the roots of the polynomial is g (r + t), and the product of the roots is g² (r t − s²). Since

1/R₁ + 1/R₂ = r + t  and  1/(R₁ R₂) = r t − s²,

g/R₁ and g/R₂ are the roots of det(B − λ A) = 0. Since s² − r t < 0, the radii of curvature R₁ and R₂ have the same sign. We choose t and r to be positive; thus, 1/R₁ and 1/R₂ are positive, and we denote

ω₁ = √(g/R₁)  and  ω₂ = √(g/R₂),

which are the eigenfrequencies. In the basis of eigenvectors, the system is written as:

q̈₁ + ω₁² q₁ = 0,  q̈₂ + ω₂² q₂ = 0,

where q₁ and q₂ are the coordinates in this basis. We obtain

q₁ = A cos(√(g/R₁) t + φ₁),  q₂ = B cos(√(g/R₂) t + φ₂),

where A, B, φ₁ and φ₂ are constants. A general motion is a linear combination of eigenmotions, and the small motions near the stable equilibrium position O are written:

x = a₁ cos(√(g/R₁) t + φ₁) + b₁ cos(√(g/R₂) t + φ₂),
y = a₂ cos(√(g/R₁) t + φ₁) + b₂ cos(√(g/R₂) t + φ₂),

where a₁, b₁, a₂ and b₂ are constants.

Solution to Exercise 41

1) With respect to the orthonormal coordinate system O; i, j, k, the kinetic energy and potential energy are

T = ½ m R² (θ̇² + ψ̇² sin² θ),  W = m g R cos θ.
The Lagrange equations are

d/dt (m R² ψ̇ sin² θ) = 0 ⟺ ψ̇ sin² θ = k,

where k is a constant, and

d/dt (m R² θ̇) − m R² ψ̇² sin θ cos θ = m g R sin θ.

Taking into account the previous first integral, this can be written as:

θ̈ − k² cos θ/sin³ θ = (g/R) sin θ,

and gives the equation of the conservation of energy.

2) In this case,

T = ½ m R² (θ̇² + ω² sin² θ),  W = m g R cos θ,

and we obtain the single Lagrange equation:

θ̈ − ω² sin θ cos θ = (g/R) sin θ.

3) Equilibrium positions θ = θ₀ satisfy θ̇₀ = 0 and θ̈₀ = 0. We deduce θ₀ = 0 or π (modulo π), or

(g/R) + ω² cos θ₀ = 0.

a) If θ₀ = 0, the linearized Lagrange equation is

θ̈ − (ω² + g/R) θ = 0.

The position θ₀ = 0 is an unstable equilibrium position.

b) If θ₀ = π, we can write α = π − θ and the linearized Lagrange equation near α = 0 is

α̈ + (g/R − ω²) α = 0.

If g/R > ω², then the equilibrium position is stable and the small motions are periodic with period τ = 2π/√(g/R − ω²). If g/R < ω², then the equilibrium position is unstable. If g/R = ω², then α̈ = 0 and we obtain α = α̇₀ t + α₀; however, we can prove that the equilibrium position is stable.

c) If sin θ₀ ≠ 0, then (g/R) + ω² cos θ₀ = 0; therefore, it is necessary that g/R < ω². By writing α = θ − θ₀, we obtain the linearized Lagrange equation:

α̈ + ω² sin² θ₀ α = 0.
The equilibrium is stable and the period of the small motions is τ = 2π/(ω sin θ₀).

Solution to Exercise 42

1) The kinetic energy is

2 T = 2 m u̇² + m v̇² + 2 m u̇ v̇ cos(v − u).
2) The potential energy is W = −m g (2 cos u + cos v). We have

∂²W/∂u² = 2 m g cos u,  ∂²W/∂u∂v = 0,  ∂²W/∂v² = m g cos v.

The minimum of the potential energy is obtained for u = 0 and v = 0. According to the Lejeune–Dirichlet theorem, the corresponding position is a stable equilibrium.

3) a) The linearized kinetic energy T and linearized potential energy W near the position (u = α = 0, v = β = 0) are

2 T = 2 m α̇² + m β̇² + 2 m α̇ β̇,  W = m g (α² + β²/2) + Cte.

b) We deduce the linearized equations of motion:

2 α̈ + β̈ + 2 g α = 0,  α̈ + β̈ + g β = 0.

c) The small motions in the form

α = A cos ω (t − τ),  β = B cos ω (t − τ),

where τ is a constant, must be solutions of the differential system. We obtain

2 A (ω² − g) + B ω² = 0,  A ω² + B (ω² − g) = 0.

The system of linear equations in A and B must have solutions other than the null solution, and we obtain

2 (ω² − g)² − ω⁴ = 0, i.e. ω⁴ − 4 g ω² + 2 g² = 0,

which has the roots

ω₁ = ± √(√2 g/(√2 − 1))  with B = −√2 A

and

ω₂ = ± √(√2 g/(√2 + 1))  with B = √2 A.

The most general motion of the linearized system is

α = a₁ cos ω₁ (t − τ₁) + a₂ cos ω₂ (t − τ₂),
β = −a₁ √2 cos ω₁ (t − τ₁) + a₂ √2 cos ω₂ (t − τ₂),
with a₁ and a₂ being constants.

Solution to Exercise 43

1) The kinetic energy of the triatomic molecule is

T = ½ (m η̇₁² + M η̇₂² + m η̇₃²) = ½ [η̇₁, η̇₂, η̇₃] A [η̇₁, η̇₂, η̇₃]ᵀ,

where

A = [[m, 0, 0], [0, M, 0], [0, 0, m]].
The potential energy of the triatomic molecule is

W = (k/2) [(η₂ − η₁)² + (η₃ − η₂)²] = ½ [η₁, η₂, η₃] B [η₁, η₂, η₃]ᵀ,

where

B = [[k, −k, 0], [−k, 2k, −k], [0, −k, k]].

2) The eigenfrequencies are the roots of det(B − λ A) = 0:

det [[k − λm, −k, 0], [−k, 2k − λM, −k], [0, −k, k − λm]] = (k − λm) [m M λ² − λ k (M + 2m)] = 0.
We obtain λ₁ = k/m, λ₂ = 0 and λ₃ = (k/m)(1 + 2m/M). The roots λ₁ and λ₃ are positive and give the eigenfrequencies ω₁ = √(k/m) and ω₃ = √((k/m)(1 + 2m/M)).

The eigenvalue λ₂ = 0 does not correspond to an oscillatory motion. The equation of motion for the corresponding normal coordinate is β̈ = 0, associated with a rectilinear and uniform translation. The zero frequency corresponds to the fact that the molecule undergoes a rigid translation along its axis, without any change in the potential energy; since the recall force is zero, the associated frequency must be null. We hypothesized that the molecule has three degrees of freedom, but one of them is the degree of freedom of the rigid body.

3) a) If the center of mass of the molecule is fixed, then m (x₁ + x₃) + M x₂ = 0.

b) From relation (3), we obtain a system with only two degrees of freedom, and

T = ½ [η̇₁, η̇₃] A′ [η̇₁, η̇₃]ᵀ,  where A′ = [[m(1 + m/M), m²/M], [m²/M, m(1 + m/M)]].

Taking into account m (x₁⁰ + x₃⁰) + M x₂⁰ = 0,

W = ½ [η₁, η₃] B′ [η₁, η₃]ᵀ,

where

B′ = [[k((1 + m/M)² + (m/M)²), 2k(m/M)(1 + m/M)], [2k(m/M)(1 + m/M), k((1 + m/M)² + (m/M)²)]].

The roots of the characteristic polynomial det(B′ − λ A′) = 0 are λ₁ = k/m and λ₃ = (k/m)(1 + 2m/M). By fixing the mass center of the molecule, the zero frequency was eliminated by preventing the molecule from moving as a whole along its axis.

4) Normal modes:
For ω₂ = 0 and the normal coordinates (α, β, γ), the system is

k α − k β = 0,  −k α + 2 k β − k γ = 0,  −k β + k γ = 0  ⟹  α = β = γ.

Hence, the motion corresponds to a solid displacement from the position x₁⁰, x₂⁰, x₃⁰.

For ω₁ = √(k/m), the system is written as:

k β = 0  and  −k α + k β − k γ = 0  ⟹  β = 0 and α = −γ.

The atom B does not move and the other atoms A and C vibrate out of phase by π.

For ω₃ = √((k/m)(1 + 2m/M)), the system is written as:

(k − λ₃ m) α − k β = 0  and  k β − (k − λ₃ m) γ = 0  ⟹  α = γ.

The two atoms A and C vibrate with the same amplitude, while the central atom B oscillates, out of phase by π with respect to A and C, and with a different amplitude.

Solution to Exercise 44

1) The kinetic energy is

T = ½ (θ̇₁² + θ̇₂²).
2) The potential energy is

W = −(cos θ₁ + cos θ₂) + k a²/2,
where

a = √[(cos θ₂ − cos θ₁)² + (1 + sin θ₂ − sin θ₁)²] − 1

is the spring elongation. The linearized kinetic energy near θ₁ = θ₂ = 0 is

T = ½ [θ̇₁, θ̇₂] A [θ̇₁, θ̇₂]ᵀ,  where A = [[1, 0], [0, 1]].

The linearized potential energy is

W = −cos θ₁ − cos θ₂ + (k/2)(sin θ₂ − sin θ₁)²,
with

∂W/∂θ₁ = sin θ₁ − k cos θ₁ (sin θ₂ − sin θ₁)  and  ∂W/∂θ₂ = sin θ₂ + k cos θ₂ (sin θ₂ − sin θ₁).

These partial derivatives are zero for θ₁ = 0 and θ₂ = 0, corresponding to the equilibrium position of the system. The second derivatives of W for θ₁ = 0 and θ₂ = 0 are

∂²W/∂θ₁² = 1 + k,  ∂²W/∂θ₁∂θ₂ = −k,  ∂²W/∂θ₂² = 1 + k.

Hence,

W = ½ [θ₁, θ₂] B [θ₁, θ₂]ᵀ,  where B = [[1 + k, −k], [−k, 1 + k]].

3) The eigenvalues of B with respect to A satisfy:

det [[1 + k − λ, −k], [−k, 1 + k − λ]] = (1 − λ)(1 + 2 k − λ) = 0,
which gives λ₁ = 1 and λ₂ = 1 + 2 k. We obtain two unit eigenvectors:

(1/√2) [1, 1]ᵀ  and  (1/√2) [1, −1]ᵀ.

In the basis of unit eigenvectors, we denote

θ₁ = (q₁ + q₂)/√2  and  θ₂ = (q₁ − q₂)/√2,

and we obtain

T = ½ (q̇₁² + q̇₂²)  and  W = ½ (q₁² + (1 + 2 k) q₂²).

The eigenfrequencies satisfy ωᵢ² = λᵢ, i ∈ {1, 2}; hence, ω₁ = 1 and ω₂ = √(1 + 2 k). By using the new coordinates, the Lagrange equations are

q̈₁ + q₁ = 0  and  q̈₂ + (1 + 2 k) q₂ = 0,

whose solutions, back to the initial space, are

θ₁ = A cos (t − τ) + B cos √(1 + 2 k) (t − φ),
θ₂ = A cos (t − τ) − B cos √(1 + 2 k) (t − φ),

where τ and φ are constants.

4) For θ₁ = θ₂, i.e. following the direction associated with ω₁ = 1, we obtain q₂ = 0. The two pendulums are displaced with the eigenfrequency 1; the spring is not involved. For θ₁ = −θ₂, we obtain q₁ = 0. The pendulums are in phase opposition along the direction associated with ω₂ = √(1 + 2 k). The action of the spring increases the frequency.
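A numerical check of the eigenpairs (a sketch, with k chosen arbitrarily, not from the book): for B = [[1 + k, −k], [−k, 1 + k]] the eigenvalues must be λ₁ = 1 and λ₂ = 1 + 2k, with eigen-directions (1, 1)/√2 and (1, −1)/√2.

```python
import math

# Exercise 44: verify B v = λ v for the two announced eigenvectors.
k = 0.37
B = [[1 + k, -k], [-k, 1 + k]]

def mat_vec(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1], M[1][0] * v[0] + M[1][1] * v[1]]

v1 = [1 / math.sqrt(2), 1 / math.sqrt(2)]
v2 = [1 / math.sqrt(2), -1 / math.sqrt(2)]
w1, w2 = mat_vec(B, v1), mat_vec(B, v2)
print(w1[0] / v1[0], w1[1] / v1[1])   # both 1        (λ₁)
print(w2[0] / v2[0], w2[1] / v2[1])   # both 1 + 2k   (λ₂)
```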
5) At the initial instant t = 0, it is assumed that θ₁ = θ₂ = 0 and θ̇₁ = v, θ̇₂ = 0. In the space (q₁, q₂), at t = 0, we obtain q₁ = q₂ = 0. The oscillations become

θ₁ = (v/2) (sin t + sin ω₂t/ω₂)  and  θ₂ = (v/2) (sin t − sin ω₂t/ω₂).

If k ≪ 1, then ω₂ = √(1 + 2 k) ≈ 1 + k and 1/ω₂ ≈ 1 − k. We denote ε = (ω₂ − 1)/2 ≈ k/2 and Ω = (ω₂ + 1)/2; we obtain

θ₁ ≈ v cos εt sin Ωt  and  θ₂ ≈ −v sin εt cos Ωt,

where Ω ≈ 1 and the amplitude v cos εt is very slowly varying. For cos εt = 0, corresponding to t = π/(2ε), θ₁ = 0 and θ₂ will be maximum. For t = π/ε, θ₂ = 0 and θ₁ is maximum. Let us denote T = π/(2ε); we obtain the beating phenomenon. In order to observe this phenomenon, we advise the reader to represent the curves of θ₁(t) and θ₂(t) on the same graph.

Solution to Exercise 45

1) The kinetic energy is

T = ½ m (ẋ² + (4/3) ẏ² + ż²) = ½ [ẋ, ẏ, ż] A [ẋ, ẏ, ż]ᵀ,
where

A = [[m, 0, 0], [0, (4/3) m, 0], [0, 0, m]].

2) The potential energy is

W = 2 m n² (x² + y² + z² − x y − y z) = ½ [x, y, z] B [x, y, z]ᵀ,

where

B = 2 m n² [[2, −1, 0], [−1, 2, −1], [0, −1, 2]].
3) The eigenvalues of B with respect to A satisfy

m³ det [[4n² − λ, −2n², 0], [−2n², 4n² − (4/3)λ, −2n²], [0, −2n², 4n² − λ]] = 0,

i.e. (1/3)(4n² − λ)(4 λ² − 28 n² λ + 24 n⁴) = 0.
We obtain λ₁ = n², λ₂ = 4 n² and λ₃ = 6 n². We can deduce the eigenfrequencies of the system: ω₁ = n, ω₂ = 2 n and ω₃ = n √6.

4) The Lagrange equations of motion are

ẍ = −4 n² x + 2 n² y,
(4/3) ÿ = 2 n² x − 4 n² y + 2 n² z,
z̈ = 2 n² y − 4 n² z.

Search the values of ω such that x = A cos ω(t − τ), y = B cos ω(t − τ), z = C cos ω(t − τ), where A, B, C and τ are constants. The system of Lagrange equations (with the condition that the values of A, B, C are not simultaneously zero) implies that the determinant of B − ω² A is zero. The eigenvectors associated with the frequencies are:

ω² = n²: eigenvector [2, 3, 2]ᵀ,
ω² = 4 n²: eigenvector [1, 0, −1]ᵀ,
ω² = 6 n²: eigenvector [1, −1, 1]ᵀ.
We obtain the position of the system for each eigenfrequency: for ω² = n², the three masses M₁, M₂, M₃ are displaced on the same side; for ω² = 4 n², M₂ stays fixed while M₁ and M₃ move in opposite directions; for ω² = 6 n², M₁ and M₃ move on the same side and M₂ on the opposite side.
For a periodic motion, the eigenfrequencies must be commensurable. Since the eigenfrequencies have values ω₁ = n, ω₂ = 2 n and ω₃ = n √6, the third mode does not come into play.

5) In the basis of eigenvectors, the vector [x, y, z]ᵀ becomes [α, β, γ]ᵀ, with

x = 2α + β + γ,  y = 3α − γ,  z = 2α − β + γ

⟺

α = (x + 2 y + z)/10,  β = (x − z)/2,  γ = (3 x − 4 y + 3 z)/10.
Then, the kinetic energy and the potential energy are

T = (m/2) Σᵢ Q̇ᵢ²,  W = (m/2) Σᵢ ωᵢ² Qᵢ²,

with

Q₁ = 2 √5 α,  Q₂ = √2 β,  Q₃ = √(10/3) γ.
6) If the initial conditions are

x − a = y − 2a = z − 3a = 0,  ẋ = u, ẏ = ż = 0,

then we obtain the solution

x − a = (u/10) [(2/ω₁) sin ω₁t + (5/ω₂) sin ω₂t + (3/ω₃) sin ω₃t],
y − 2a = (3u/10) [(1/ω₁) sin ω₁t − (1/ω₃) sin ω₃t],
z − 3a = (u/10) [(2/ω₁) sin ω₁t − (5/ω₂) sin ω₂t + (3/ω₃) sin ω₃t].
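A numerical check of Exercise 45 (a sketch, taking m = n = 1): the three announced eigenpairs must satisfy the generalized eigenvalue problem B v = λ A v, with A = diag(m, 4m/3, m) and B = 2 m n² [[2, −1, 0], [−1, 2, −1], [0, −1, 2]].

```python
# Exercise 45: verify B v = λ A v for the three eigenpairs.
m, n = 1.0, 1.0
A = [[m, 0, 0], [0, 4 * m / 3, 0], [0, 0, m]]
B = [[4 * m * n**2, -2 * m * n**2, 0],
     [-2 * m * n**2, 4 * m * n**2, -2 * m * n**2],
     [0, -2 * m * n**2, 4 * m * n**2]]

def mat_vec(M, v):
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

pairs = [(n**2, [2, 3, 2]), (4 * n**2, [1, 0, -1]), (6 * n**2, [1, -1, 1])]
errors = []
for lam, v in pairs:
    Bv, Av = mat_vec(B, v), mat_vec(A, v)
    errors.append(max(abs(Bv[i] - lam * Av[i]) for i in range(3)))
print(errors)   # all ~0 (up to rounding)
```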
Index
A, C, D
action variables, 109, 110
calculus of variations, 1, 3, 5, 10, 14, 18–21, 25
canonical transformations, 95–98, 107
constrained extremum, 7–9
d'Alembert principle, 63, 67
dynamical variables, 116, 118–120

E, F, H
eigenfrequencies, 177, 179
equilibrium position, 151–153, 157–159, 161–164, 166, 168–170, 172, 173, 175, 179, 183–185
Fermat's principle, 58, 59
first integral, 83, 85, 87–91
flow of a dynamical system, 135
free extremum, 3, 9, 14
Hamilton equation, 78, 79, 83
Hill equation, 187, 188, 193, 195
homogeneous Lagrangian, 75, 80, 81, 90

I, J, L
invariant integral, 56–58
isoperimeters, 46
Jacobi method, 98
Lagrange equation, 63, 67–69, 78, 79, 84
Lagrange multipliers, 7, 9, 14, 16, 19
Lie group, 53–57
linearization of differential equations, 150, 169, 182, 183
Liouville case, 105, 109
Liouville theorem, 138, 141

M, N, O
Mathieu equation, 193–195
Maupertuis principle, 90–92
Noether theorem, 51
optical path of light, 41

P, S, V, W
parametric resonance, 193, 195
periodic systems, 188, 192
phase space, 135, 136, 140–142, 144–146
Poincaré recurrence theorem, 144–146
Poisson brackets, 113, 116–121, 123, 127
spaces in analytical mechanics, 113
stability, 151, 157–159, 161, 164, 168, 170–173, 175, 179, 183–195, 199, 200
symplectic scalar product, 115, 131
variation of curvilinear integral, 29, 33, 37
Weierstrass discussion, 151, 155, 156, 164