Nonlinear Dynamics: Non-Integrable Systems and Chaotic Dynamics 9783110439380, 9783110430585, 9783110430677, 9783110430592

The book provides a concise and rigorous introduction to the fundamentals of methods for solving the principal problems of nonlinear dynamics.


English Pages 299 [300] Year 2016


Table of contents:
The Authors’ Preface
Contents
1 Nonlinear Oscillations
1.1 Nonlinear Oscillations of a Conservative Single-Degree-of-Freedom System
1.1.1 Qualitative Description of Motion by the Phase Plane Method
1.2 Oscillations of a Mathematical Pendulum. Elliptic Functions
1.3 Small-Amplitude Oscillations of a Conservative Single-Degree-of-Freedom System
1.3.1 Straightforward Expansion
1.3.2 The Method of Multiple Scales
1.3.3 The Method of Averaging: The Van der Pol Equation
1.3.4 The Generalized Method of Averaging. The Krylov–Bogolyubov Approach
1.4 Forced Oscillations of an Anharmonic Oscillator
1.4.1 Straightforward Expansion
1.4.2 A Secondary Resonance at 9 ≈ ±3
1.4.3 A Primary Resonance: Amplitude–Frequency Response
1.5 Self-Oscillations: Limit Cycles
1.5.1 An Analytical Solution of the Van der Pol Equation for Small Nonlinearity Parameter Values
1.5.2 An approximate solution of the Van der Pol equation for large nonlinearity parameter values
1.6 External Synchronization of Self-Oscillating Systems
1.7 Parametric Resonance
1.7.1 The Floquet Theory
1.7.2 An Analytical Solution of the Mathieu Equation for Small Nonlinearity Parameter Values
2 Integrable Systems
2.1 Equations of Motion for a Rigid Body
2.1.1 Euler’s Angles
2.1.2 Euler’s Kinematic Equations
2.1.3 Moment of Inertia of a Rigid Body
2.1.4 Euler’s Dynamic Equations
2.1.5 S.V. Kovalevskaya’s Algorithm for Integrating Equations of Motion for a Rigid Body about a Fixed Point
2.2 The Painlevé Property for Differential Equations
2.2.1 A Brief Overview of the Analytic Theory of Differential Equations
2.2.2 A Modern Algorithm of Analysis of Integrable Systems
2.2.3 Integrability of the Generalized Henon–Heiles Model
2.2.4 The Linearization Method for Constructing Particular Solutions of a Nonlinear Model
2.3 Dynamics of Particles in the Toda Lattice: Integration by the Method of the Inverse Scattering Problem
2.3.1 Lax’s Representation
2.3.2 The Direct Scattering Problem
2.3.3 The inverse scattering transform
2.3.4 N-Soliton Solutions
2.3.5 The Inverse Scattering Problem and the Riemann Problem
2.3.6 Solitons as Elementary Excitations of Nonlinear Integrable Systems
2.3.7 The Darboux–Backlund Transformations
2.3.8 Multiplication of Integrable Equations: The modified Toda Lattice
3 Stability of Motion and Structural Stability
3.1 Stability of Motion
3.1.1 Stability of Fixed Points and Trajectories
3.1.2 Succession Mapping or the Poincare Map
3.1.3 Theorem about the Volume of a Phase Drop
3.1.4 Poincare–Bendixson Theorem and Topology of the Phase Plane
3.1.5 The Lyapunov Exponents
3.2 Structural Stability
3.2.1 Topological Reconstruction of the Phase Portrait
3.2.2 Coarse Systems
3.2.3 Cusp Catastrophe
3.2.4 Catastrophe Theory
4 Chaos in Conservative Systems
4.1 Determinism and Irreversibility
4.2 Simple Models with Unstable Dynamics
4.2.1 Homoclinic Structure
4.2.2 The Anosov Map
4.2.3 The Tent Map
4.2.4 The Bernoulli Shift
4.3 Dynamics of Hamiltonian Systems Close to Integrable
4.3.1 Perturbed Motion and Nonlinear Resonance
4.3.2 The Zaslavsky–Chirikov Map
4.3.3 Chaos and Kolmogorov–Arnold–Moser Theory
5 Chaos and Fractal Attractors in Dissipative Systems
5.1 On the Nature of Turbulence
5.2 Dynamics of the Lorenz Model
5.2.1 Dissipativity of the Lorenz Model
5.2.2 Boundedness of the Region of Stationary Motion
5.2.3 Stationary Points
5.2.4 The Lorenz Model’s Dynamic Regimes as a Result of Bifurcations
5.2.5 Motion on a Strange Attractor
5.2.6 Hypothesis About the Structure of a Strange Attractor
5.2.7 The Lorenz Model and the Tent Map
5.2.8 Lyapunov Exponents
5.3 Elements of Cantor Set Theory
5.3.1 Potential and Actual Infinity
5.3.2 Cantor’s Theorem and Cardinal Numbers
5.3.3 Cantor sets
5.4 Cantor Structure of Attractors in Two-Dimensional Mappings
5.4.1 The Henon Map
5.4.2 The Ikeda Map
5.4.3 An Analytical Theory of the Cantor Structure of Attractors
5.5 Mathematical Models of Fractal Structures
5.5.1 Massive Cantor Set
5.5.2 A binomial multiplicative process
5.5.3 The Spectrum of Fractal Dimensions
5.5.4 The Lyapunov Dimension
5.5.5 A Relationship Between the Mass Exponent and the Spectral Function
5.5.6 The Mass Exponent of the Multiplicative Binomial Process
5.5.7 A Multiplicative Binomial Process on a Fractal Carrier
5.5.8 A Temporal Data Sequence as a Source of Information About an Attractor
5.6 Universality and Scaling in the Dynamics of One-Dimensional Maps
5.6.1 General Regularities of a Period-Doubling Process
5.6.2 The Feigenbaum–Cvitanovic Equation
5.6.3 A Universal Regularity in the Arrangement of Cycles: A Universal Power Spectrum
5.7 Synchronization of Chaotic Oscillations
5.7.1 Synchronization in a System of Two Coupled Maps
5.7.2 Types and Criteria of Synchronization
Conclusion
References
Index

Alexander B. Borisov, Vladimir V. Zverev Nonlinear Dynamics

De Gruyter Studies in Mathematical Physics

Edited by
Michael Efroimsky, Bethesda, Maryland, USA
Leonard Gamberg, Reading, Pennsylvania, USA
Dmitry Gitman, São Paulo, Brazil
Alexander Lazarian, Madison, Wisconsin, USA
Boris Smirnov, Moscow, Russia

Volume 36

Alexander B. Borisov, Vladimir V. Zverev

Nonlinear Dynamics Non-Integrable Systems and Chaotic Dynamics

Physics and Astronomy Classification Scheme 2010
Primary: 05.45.-a; Secondary: 45.05.+x

Authors
Prof. Dr. Alexander B. Borisov, Institute of Metal Physics, S. Kovalevskoy Str. 18, Ekaterinburg 620990, Russian Federation, [email protected]
Prof. Dr. Vladimir V. Zverev, Ural Federal University, Inst. of Physics & Technology, 19 Mira street, Ekaterinburg 620002, Russian Federation, [email protected]

ISBN 978-3-11-043938-0
e-ISBN (PDF) 978-3-11-043058-5
e-ISBN (EPUB) 978-3-11-043067-7
Set-ISBN 978-3-11-043059-2
ISSN 2194-3532

Library of Congress Cataloging-in-Publication Data
A CIP catalog record for this book has been applied for at the Library of Congress.

Bibliographic information published by the Deutsche Nationalbibliothek
The Deutsche Nationalbibliothek lists this publication in the Deutsche Nationalbibliografie; detailed bibliographic data are available on the Internet at http://dnb.dnb.de.


© 2017 Walter de Gruyter GmbH, Berlin/Boston Typesetting: Integra Software Services Pvt. Ltd. Printing and binding: CPI books GmbH, Leck  Printed on acid-free paper Printed in Germany www.degruyter.com

Dedicated to the blessed memory of our loved ones who have passed away

The Authors’ Preface

The book is a rigorous and concise introduction to the basic problems and methods of modern nonlinear dynamics, and it compares favorably with existing textbooks on dynamics in its selection and presentation of the material. The need for this book arises from the following circumstances.

The creation of soliton theory and the theory of dynamical chaos in the 1960s–1980s radically changed the traditional understanding of how motion can occur in dynamical systems. In particular, it became clear that “integrable” and “nonintegrable” systems behave qualitatively differently and should be studied separately, through specific methods and approaches. In classical mechanics, for Newton’s equations to be solved, it is necessary to assign the initial positions and velocities of the material points; according to the principle of determinacy, this allows one to find the positions of the points at any later time. In practice, “nonintegrable” systems, whose motion can be both regular and chaotic, constitute an important subclass of deterministic nonlinear dynamical systems. Chaotic behavior exhibits itself in unstable dynamical systems, whose motion is extremely sensitive to changes in initial conditions and parameters. In real systems, the initial conditions and parameters are known only with some degree of accuracy, so the lack of stability imposes certain restrictions on predicting the system’s evolution over time. Owing to the development of computer simulation methods in the 1960s, the chaotic (strange) attractor was discovered: a nonstandard set having a self-similar (fractal) structure and containing the limiting trajectory of chaotic motion. This discovery triggered the rapid development of dynamical chaos theory in the following years.
At present, dynamical chaos theory has become a universal framework for describing the “turbulent” behavior of dynamical systems of any nature, not only physical but also biological, social, economic and so on. Deterministic motion in “integrable” systems, which possess a complete set of integrals of motion, is regular and, in general, multiperiodic. Being a subject of considerable importance from both a fundamental and an applied point of view, the modern theory of integrable systems receives much attention in this book. A remarkable feature of integrable systems with N degrees of freedom is that, possessing N integrals of motion, they can be solved explicitly. In integrable systems with an infinite number of degrees of freedom there may exist solitons: nonlinear localized excitations of a new type with unusual properties.

The content of the book is divided into two parts, which are inextricably coupled to each other. The first part covers the issues of regular dynamics and “integrable” systems. Chapter 1 provides the fundamentals of the theory of nonlinear oscillations. At the outset it focuses on the phase plane method, used for analyzing the qualitative features of the motion of a nonlinear system, and then defines and describes elliptic functions, which may still be unfamiliar to students. Since experts in nonlinear dynamics deal mostly with problems that cannot be solved exactly, they often have to resort to the methods of perturbation theory; these are also presented in the same chapter. Chapter 1 pays special attention to small-amplitude oscillations in basic models of nonlinear dynamics, such as forced oscillations of an anharmonic oscillator, self-oscillations, limit cycles and parametric resonance. The authors regard the approximate methods of nonlinear dynamics presented here (the method of multiple scales and the method of averaging) as the minimum tools necessary for practicing engineers and theoretical physicists.

Chapter 2 gives an introduction to the theory of integrable systems. Using the equations of motion for a rigid body as an example, we present the method proposed by S. Kovalevskaya for finding a new integral of motion. After a brief review of the analytic theory of differential equations and the Painlevé property, we acquaint the reader with the Weiss linearization method and the modern methods of M. Ablowitz, A. Ramani and H. Segur for verifying the integrability of the equations of nonlinear dynamics. To date, these methods and their generalizations remain effective ways to search for integrable systems. Finally, at the end of the second chapter, we offer a procedure for solving the integrable equations of dynamics by the method of the inverse scattering problem. Using the particle dynamics in the Toda lattice as an example, we describe in detail the necessary components of this powerful and very popular method: Lax’s representation, the Riemann problem and the Darboux–Backlund transformations. Solitons, a new type of localized excitations intensively studied in recent years by physicists and mathematicians in various fields of science and technology, are also discussed.

The second part of the book, consisting of three chapters, is about “nonintegrable” systems; here the theory of dynamical chaos occupies a special place.
Chapter 3 is devoted to the stability of motion and structural stability. The authors dwell on methods for analyzing the stability of motion: they use the approach of linearizing the equations of motion and focus on the concept of “phase fluid volume,” the Lyapunov exponents and the results of the Poincare–Bendixson theorem. The common approaches are illustrated by examples of dynamics on a plane and in three-dimensional space. It is shown that changes in the control parameters cause bifurcations, which are accompanied by topological rearrangements of the phase portraits. The concept of structural stability is considered using the example of a cusp catastrophe, and a brief introduction to catastrophe theory is given. Chapter 4 discusses the mechanisms by which irreversible behavior emerges in deterministic systems. It concerns a number of simple discrete-time dynamical models that evolve along unstable trajectories or demonstrate mixing. One such model, playing a special role in the theory of dynamic stochastization (randomization) of motion, describes the dynamics of a Hamiltonian system close to an integrable one; it also reveals a profound relationship between the instability of motion, mixing and nonlinear resonances. Chapter 5 deals with the chaotization of motion occurring in dissipative systems and accompanied by the appearance of attractors with fractal structure. Here the reader will find a short introduction to the theory of nonstandard sets of Cantor type.


This chapter sets out in sufficient detail the standard theory of multifractals as a working tool for describing chaotic attractors and introduces the concept of the spectrum of fractal dimensions. The issues of universality, scaling and the application of the renormalization-group approach to dynamical systems are considered for a discrete-time one-dimensional system whose dynamics manifests a sequence of period-doubling bifurcations. The phenomenon of synchronization of chaotic oscillations is also touched upon. Scientific results that have become recognized achievements find their place in monographs and textbooks, which are then read by many generations of young people. However, a dry and concise presentation of the material, as is customary in science, does not allow readers to get a feel for the atmosphere in which important discoveries were made; in the end, unfortunately, the spirit of an epoch is lost, and the dramas of ideological confrontation are forgotten. In an effort to break this tradition, at least partially, we have added historical comments, brief biographies and portraits of the researchers who made the most significant contributions to one question or another. And yet we must note with regret that the book’s orientation and scope do not allow us to describe the biographies and present the portraits of many other outstanding scientists who advanced the theory of nonlinear dynamics and integrable systems. The authors hope this book will be useful to a wide audience: beginners in theoretical physics and engineering physicists who are not yet familiar with nonlinear dynamics, as well as students embarking on the study of nonlinear phenomena, and graduate students in related disciplines.


1 Nonlinear Oscillations

It is probably no exaggeration to say that among the processes occurring both freely in nature and in engineering, oscillations, understood in the broadest sense of the word, in many respects occupy a prominent and often dominant place . . . . The development of the study of oscillations is inextricably tied up with the development of mathematical methods for their treatment.
Nikolai Dmitrievich Papaleksy

In the field of oscillations there takes place a sharply distinct interaction between physics and mathematics: the needs of physics influence the development of mathematical methods, and mathematics in turn feeds back into our physical knowledge.
A.A. Andronov, A.A. Vitt

At the dawn of its development, the theory of oscillations studied only the simplest forms of vibrations and used simple mathematical methods. In physics, most nonlinear problems were very often “linearized,” the nonlinear equations being reduced to linear ones by considering only small oscillations. This was chiefly because, by that time, researchers had at their disposal the well-developed mathematical apparatus of linear ordinary and partial differential equations with constant coefficients. “In spite of having yielded remarkable results, this linear approach to nonlinear problems not only left aside observable effects but lost the essence of the phenomenon examined when idealizing the real tasks. Such ‘linearization’ is always artificial, rarely useful, mostly teaches us nothing, and is sometimes actually harmful” [1]. Although by the late nineteenth century Jules Henri Poincare and A.M. Lyapunov had laid the mathematical foundations of the nonlinear oscillation theory and developed some of its methods (perturbation theory and the elimination of secular terms), by means of which some oscillatory systems and problems of celestial mechanics were investigated, the application of these methods often had an irregular pattern or took the form of mathematical tricks. At the beginning of the twentieth century, “. . . the main results obtained by Poincare and Lyapunov were not sufficiently utilized by physicists and engineers, who continued using the classical linear theory apparatus as a remedy for all diseases even when studying essentially nonlinear problems . . . . Alfred-Marie Lienard and Henri Cartan in France and representatives of the Mandelstam–Papaleksy school in the USSR were the first to apply rigorous methods in radio technology” [3]. At that time, the needs and challenges of rapidly developing fields of engineering, especially radio physics and radio technology, played a decisive role in the development of the theory of nonlinear oscillations.
The generation and reception of electromagnetic waves required the examination of a special kind of autonomous oscillatory systems (later called self-oscillating), in which a constant power source maintains oscillations of a definite form, frequency and power. Their characteristic feature was that the oscillations were independent of the initial conditions and could not be described by linear differential equations. Difficulties met in the transformation and stabilization of frequency and in the modulation and demodulation of oscillations stemmed from the fact that the mathematical apparatus lacked the resources to solve particular problems quantitatively and to understand the essence of the physical phenomena in these areas. Moreover, practice forced researchers to seek simple engineering calculation methods. Therefore, two parallel research areas arose in the study of nonlinear oscillatory systems: a theoretical one and a practical engineering one. Three branches, the qualitative (topological) theory of differential equations, Poincare’s small parameter method and Lyapunov’s stability theory, formed the mathematical foundation for the creation of the theory of nonlinear oscillations.

The theory of nonlinear oscillations explores periodic oscillatory motion described by nonlinear differential equations. Compared with the linear theory, it reflects the properties of vibrational systems more completely. It is important to note some of these properties.
– The superposition principle is not applicable to nonlinear systems: in contrast to linear systems, the general solution cannot be composed of independent particular solutions of the nonlinear differential equations. The superposition of two or more oscillatory motions, each of which exists in the nonlinear system, is not itself a motion of that system.
– If a force acting on a nonlinear system is expanded in a Fourier series, its action is not equal to the linear sum of the actions of each term of the series.
– In nonlinear systems, several stationary and periodic oscillatory motions may exist. In contrast to the intrinsic frequencies of linear systems, the natural frequencies of nonlinear systems do depend on the initial conditions.
– Free vibrations of linear systems are always damped in the presence of friction. In nonlinear systems, periodic oscillatory motion remains possible even in the presence of friction.
– In linear systems, the period of oscillations under the action of an external (disturbing) harmonic force in the steady-state regime coincides with the period of that force. In nonlinear systems, however, forced oscillations can occur with a period that is a multiple of the period of the external force, and the amplitude–frequency response has unusual properties.
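The failure of superposition is easy to see numerically. The sketch below is illustrative, not from the book: it integrates a Duffing-type oscillator x'' + x + eps*x^3 = F(t) (the model and all parameter values are arbitrary choices) with a hand-rolled fourth-order Runge–Kutta scheme and compares the response to the sum of two harmonic forces against the sum of the individual responses.

```python
import math

def rk4_step(f, y, t, dt):
    """One classical fourth-order Runge-Kutta step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + dt/2, [y[i] + dt/2*k1[i] for i in range(2)])
    k3 = f(t + dt/2, [y[i] + dt/2*k2[i] for i in range(2)])
    k4 = f(t + dt, [y[i] + dt*k3[i] for i in range(2)])
    return [y[i] + dt/6*(k1[i] + 2*k2[i] + 2*k3[i] + k4[i]) for i in range(2)]

def trajectory(eps, force, t_end=20.0, dt=0.002):
    """Sampled x(t) for x'' + x + eps*x**3 = force(t), x(0) = x'(0) = 0."""
    def rhs(t, y):
        return [y[1], -y[0] - eps*y[0]**3 + force(t)]
    y, t, xs = [0.0, 0.0], 0.0, []
    for _ in range(round(t_end/dt)):
        y = rk4_step(rhs, y, t, dt)
        t += dt
        xs.append(y[0])
    return xs

f1 = lambda t: 0.3*math.cos(1.2*t)
f2 = lambda t: 0.3*math.cos(0.7*t)
f12 = lambda t: f1(t) + f2(t)

def superposition_gap(eps):
    """Largest deviation between the response to f1 + f2 and the sum of responses."""
    xs, x1, x2 = trajectory(eps, f12), trajectory(eps, f1), trajectory(eps, f2)
    return max(abs(a - b - c) for a, b, c in zip(xs, x1, x2))

gaps = {eps: superposition_gap(eps) for eps in (0.0, 1.0)}
print(gaps)   # round-off level for eps = 0, clearly nonzero for eps = 1
```

For eps = 0 the scheme itself is linear in the forcing, so the two trajectories agree to round-off; for eps = 1 they differ visibly, which is exactly the failure of superposition described above.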

To date, despite intensive research by many generations of mathematicians and physicists, there is no complete, closed theory that would allow one to describe analytically the entire dynamics of nonlinear systems having even one degree of freedom. Nevertheless, it is the dynamics of single-degree-of-freedom systems that has been studied most thoroughly, and there are many well-developed general methods for computing solutions of weakly nonlinear systems with a small nonlinearity parameter. The present chapter gives an elementary introduction to the theory of nonlinear phenomena, acquaints the reader with the dynamics of conservative systems with one degree of freedom, and presents such asymptotic methods for solving nonlinear problems as the method of averaging and the method of multiple scales.

1.1 Nonlinear Oscillations of a Conservative Single-Degree-of-Freedom System

The study of free oscillations of conservative single-degree-of-freedom systems serves as an elementary introduction to the theory of nonlinear oscillations and as a simple way to learn the qualitative theory of nonlinear systems (the phase plane method). Consider the motion of a system with one degree of freedom. Such a system can be described by a real variable x, the system’s motion causing this quantity to change over time under the action of an applied force. Although the physical nature and the dimension of x(t) may be different, an equation of motion of a conservative system can always be reduced to the form

    \ddot{x} = f(x).    (1.1)

In mechanics, f(x) can be regarded as a force per unit mass. We write down eq. (1.1) as a system of two first-order differential equations:

    \dot{x} = y,    (1.2)

    \dot{y} = f(x) = -\frac{\partial U}{\partial x},    (1.3)

where U(x) = -\int f(x)\,dx is the potential energy of the system. In what follows, f(x) is assumed to be an analytic function on the entire range of x; that is to say, it can be expanded in a power series with a nonzero radius of convergence at every point of this interval. Let us find the dependence of the velocity on the coordinate:

    y(t) = y(x(t)).    (1.4)

By virtue of the fact that y = \dot{x} and

    \ddot{x} = \frac{d}{dt}\,\dot{x} = \frac{dy}{dx}\,\dot{x} = y\,\frac{dy}{dx} = f(x),

we obtain a differential equation for a phase (integral) trajectory,

    \frac{dy}{dx} = \frac{f(x)}{y},    (1.5)


along which the material point moves in the (x, y) phase plane. By Cauchy’s theorem, the solution y = y(x, c) of this equation is unique in a domain D of the phase plane, provided that the function P(x, y) = f(x)/y is single-valued and continuous in D together with its partial derivative \partial P(x, y)/\partial y. In accordance with the uniqueness theorem, only one phase trajectory passes through a given point, and phase trajectories do not intersect. Points at which these conditions fail are called singular (critical) points; several phase trajectories can pass through such a point. The points x_i (i = 1, 2, \ldots, n) where \dot{x} = \ddot{x} = 0, that is,

    y = 0, \qquad f(x_i) = -\left.\frac{\partial U}{\partial x}\right|_{x = x_i} = 0,

are referred to as equilibrium points. They are singular points, because the value of P(x, y) = f(x)/y at them is indeterminate and a phase trajectory may pass through them in different directions. From a physical point of view, a system with the initial conditions y = 0, x = x_i at t = 0 retains these velocity and position values at all times t.

The equations of motion (1.2) and (1.3) can be solved as follows. We multiply the first of them by \partial U/\partial x, the second by y, and add them term by term. This leads to

    \dot{y}\,y + \dot{x}\,\frac{\partial U}{\partial x} = \frac{d}{dt}\left(\frac{y^2}{2} + U(x)\right) = 0.    (1.6)

Hence it follows that the total energy per unit mass,

    E = \frac{y^2}{2} + U(x),    (1.7)

does not depend on time and is an integral of motion; systems possessing such a conserved energy are called conservative. A quantity conserved over time allows one to lower the order of the differential equation. From eq. (1.7) we arrive at a differential equation of motion for the conservative system:

    y = \dot{x} = \pm\sqrt{2(E - U(x))}.    (1.8)

Let us examine the motion with the initial conditions x(t = 0), y(t = 0); the total energy E is determined by them. Direct integration of eq. (1.8) does not give x as a function of t immediately, but we can find the inverse function t(x) by integrating the equation dt/dx = \pm\left[2(E - U(x))\right]^{-1/2}. The solution obtained depends on the two parameters E and x_0, and it is quite convenient to write it down as an implicit function

    F(t, x) = (t - t_0) - \varepsilon\int_{x_0}^{x}\frac{d\xi}{\sqrt{2(E - U(\xi))}} = 0, \qquad \varepsilon = \pm 1,    (1.9)

which is called a solution in quadratures.
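The conservation law (1.7) can be checked numerically. The following sketch is illustrative (the quartic potential and the initial condition are arbitrary choices, not from the book): it integrates the system (1.2)-(1.3) with a fourth-order Runge–Kutta scheme and monitors the drift of E = y^2/2 + U(x) along the trajectory.

```python
def rk4_step(f, s, t, dt):
    """One classical Runge-Kutta step for s' = f(t, s), s = (x, y)."""
    k1 = f(t, s)
    k2 = f(t + dt/2, [s[i] + dt/2*k1[i] for i in range(2)])
    k3 = f(t + dt/2, [s[i] + dt/2*k2[i] for i in range(2)])
    k4 = f(t + dt, [s[i] + dt*k3[i] for i in range(2)])
    return [s[i] + dt/6*(k1[i] + 2*k2[i] + 2*k3[i] + k4[i]) for i in range(2)]

U = lambda x: 0.5*x*x + 0.25*x**4       # potential energy per unit mass
force = lambda x: -x - x**3             # f(x) = -dU/dx, cf. eq. (1.3)
energy = lambda x, y: 0.5*y*y + U(x)    # the integral of motion (1.7)

rhs = lambda t, s: [s[1], force(s[0])]  # the system (1.2)-(1.3)
state, t, dt = [1.5, 0.0], 0.0, 0.001
E0 = energy(*state)
drift = 0.0
for _ in range(20000):                  # integrate up to t = 20
    state = rk4_step(rhs, state, t, dt)
    t += dt
    drift = max(drift, abs(energy(*state) - E0))
print(E0, drift)                        # the drift stays many orders below E0
```

The energy stays constant to the accuracy of the integrator, confirming that (1.7) is indeed a first integral of the motion.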


1.1.1 Qualitative Description of Motion by the Phase Plane Method

The behavior of a material point can be qualitatively described using the phase plane, without inverting the integral (1.9). The condition for the solutions to be real, E - U(x) > 0, splits the range of the variable x into a certain number of intervals (a_i(E), b_i(E)), i = 1, 2, \ldots, n. The endpoints of these intervals are either turning points, calculated from the condition E - U(x) = 0, or unbounded, a_i(E) \to -\infty, b_i(E) \to \infty. Consider the motion of the material point in one of the intervals [a, b] with x(0) \in (a, b), \dot{x}(0) < 0, where the quantity a may take both finite and infinite values and 2(E - U(x)) has no roots inside the interval. From eq. (1.9) we find the partial derivatives

    \frac{\partial F}{\partial t} = 1, \qquad \frac{\partial F}{\partial x} = \frac{1}{\sqrt{2(E - U(x))}}.

Obviously, they are continuous on the interval [a, x(0)]. It then follows from the theorem on the existence of a single-valued and continuous implicit function that relation (1.9) defines x as a single-valued function of time on this interval. Since \dot{x}(t) = -\sqrt{2(E - U(x))} < 0, x(t) is a monotonically decreasing function. Therefore, if a \to -\infty and 2(E - U(x)) is always positive (e.g., at E = E_6 as illustrated in Fig. 1.1), the divergence of the integral (1.9) implies that the motion of the material point is unbounded: \lim_{t \to \infty} x(t) \to -\infty. For a finite value of a there are two different types of motion. Suppose that the turning point does not coincide with an equilibrium point; for example, let the point a coincide with the point x_3 (see Fig. 1.1). Because the first term is nonzero in the expansion

    E - U(x) = -U'(a)(x - a) - \frac{1}{2}U''(a)(x - a)^2 - \cdots,    (1.10)

the integral in eq. (1.9) converges at x = a. Therefore, at some finite value t_1 we have x(t_1) = a.

[Figure 1.1: the potential energy U(x) with energy levels E₁–E₆ and the points x₁–x₈ on the x-axis.]


The resulting motion can continue at all later moments of time, since ẍ(t₁) ≠ 0 and

x(t₁ + ε) = x(t₁) + (1/2)ẍ(t₁)ε² + ⋯ ≠ x(t₁)  (ε small).

In the upper half-plane (y > 0), the phase point always moves to the right (an arrow specifies its direction), since x increases because ẋ > 0. In the bottom half-plane, the phase point travels to the left. The phase curve always intersects the abscissa (y = 0) at a nonsingular point at a right angle. To construct the phase curve, it is necessary, according to eq. (1.8), to compute the values of √(2(E − U(x))) consistently for all values of x in the permitted motion intervals (aᵢ(E), bᵢ(E)) and put them on the y-axis. In the vicinity of the turning points that do not coincide with the critical points, the phase trajectory is approximately described by a parabola,

y² = 2U′(b)(b − x)  (U′(b) > 0, x < b)  or  y² = −2U′(a)(x − a)  (U′(a) < 0, x > a).


The form of the phase trajectories in the vicinity of the equilibrium points x = xᵢ,

y = ±√(2(E − U(xᵢ)) − U″(xᵢ)(x − xᵢ)²),  (1.13)

depends strongly on the type of the potential energy extremum. In the vicinity of a potential minimum (U″(xᵢ) > 0), the level lines at E > U(xᵢ) form a family of deformed ellipses, which degenerate into an isolated equilibrium point when E = U(xᵢ). In the given case, these are the points x₄ and x₇ shown in Fig. 1.2. In the vicinity of a potential energy maximum, eq. (1.13) describes a set of hyperbolas, which at E = U(xᵢ) turn into straight lines passing through the equilibrium position (the points x₂ and x₆ in Fig. 1.2). As can be inferred from the picture, there are curves that divide the phase plane into parts whose phase trajectories have different forms, corresponding to the finite (bounded) and infinite (unbounded) motions, respectively. Such curves are called separatrices. The motions taking place near the potential energy maximum are those about an unstable equilibrium state. All of the phase points, except those moving along the two incoming branches of the separatrix, go away from the equilibrium position. By analogy with the contour lines of equal height on topographic maps, the singular points corresponding to the unstable equilibrium are called saddles (mountain passes). The above analysis is sufficient to examine the different types of motion. Let us construct, for example, the phase portrait of the material particle's motion with the potential energy shown in Fig. 1.2. When E = E₁ and x(t₀) > x₁, x(t₀) ≠ x₄, there is no motion. When the initial energy is E = E₁ and x(t₀) = x₄, the phase trajectory degenerates into the equilibrium point (0, x₄). The phase curves for the initial conditions x₂ < x(t₀) < x₆ and E₁ < E < E₅ are closed curves around the point (0, x₄) and describe a continuum of periodic motions in a bounded region of the phase space.
In the upper half-plane (ẋ > 0), the image point moves from left to right; in the bottom one it travels in the opposite direction. For values of E close to E₁, the oscillations near the point x = x₄ resemble harmonic ones, whose period does not depend on the amplitude. As the total energy increases, nonlinear oscillations arise, with their period (1.12) depending on the amplitude and diverging as E → E₅. When E = E₅, for the initial conditions x₆ < x(t₀) < x₈ there exists a phase trajectory approaching the equilibrium position (0, x₆) over an infinite time interval. Such a separatrix, being a frontier between the regions of finite and infinite motions, closes on the saddle. In addition, the corresponding phase curve with x₂ < x(t₀) < x₆ connects the unstable equilibrium positions. Expanding eq. (1.13) at E = U(x₆), we obtain exponential decay,

x → x₆ − c e^(−√(−U″(x₆)) t)  (t → ∞),  (1.14)

in approaching the unstable equilibrium point. The decay instead exhibits a power-law behavior,

x → x₆ − c t^(−2/(n−2))  (t → ∞),

if the multiplicity n of the zero of the function E − U(x) at x = x₆ is greater than 2. Graphs of the functions x(t) reflecting the motion along the separatrix are depicted in Fig. 1.3. The closed separatrix corresponds to a localized impulse; the unclosed one to a kink, that is, a localized transition from the value x₂ to x₆.

[Figure 1.3: x(t) along the closed separatrix (a localized impulse near x₆) and along the unclosed separatrix (a kink from x₂ to x₆).]

We have looked into the types of motion inherent in conservative nonlinear systems. Thus, the nonlinearity of the restoring force leads to the following:
– Periodic oscillations are anharmonic, with a period depending on the initial conditions.
– In the presence of unstable equilibrium positions, there are solutions of a special type: localized impulses and kinks.
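The amplitude dependence of the period can be seen numerically. The sketch below (an illustration, not the book's code) evaluates the period quadrature T(E) = 2∫dx/√(2(E − U)) between symmetric turning points for the pure quartic potential U = x⁴/4, where doubling the amplitude halves the period:

```python
import math

def period(U, E, b, n=20000):
    """Oscillation period: twice the travel time between the symmetric turning
    points -b and b. The substitution x = b*sin(theta) removes the
    inverse-square-root endpoint singularity of dx / sqrt(2(E - U(x)))."""
    h = math.pi / n
    total = 0.0
    for i in range(n):
        th = -0.5 * math.pi + (i + 0.5) * h
        total += b * math.cos(th) * h / math.sqrt(2.0 * (E - U(b * math.sin(th))))
    return 2.0 * total

# Pure quartic potential U = x^4/4 (amplitude b corresponds to E = b^4/4):
# the period is inversely proportional to the amplitude.
U = lambda x: 0.25 * x ** 4
T1 = period(U, 0.25, 1.0)   # amplitude 1
T2 = period(U, 4.0, 2.0)    # amplitude 2
print(T1, T2)
```

For the quartic oscillator the period at unit amplitude is about 7.4163, and T scales as 1/b, in contrast to the amplitude-independent period 2π of the harmonic oscillator.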

1.2 Oscillations of a Mathematical Pendulum. Elliptic Functions

The Pendulum has played an extremely important role in the history of physics.
Leonid Isaakovich Mandelstam

As with Pavlov's dog, a monument ought to be erected to the Pendulum.
Evgeni Akimovich Turov

The equation of motion of a pendulum,

φ̈ = −ω₀² sin φ,  (1.15)

is applied in various areas of theoretical physics and is a reference model for describing nonlinear oscillations. Its integral of motion,

φ̇²/2 + ω₀²(1 − cos φ) = E,  (1.16)

can be easily obtained from eq. (1.7). Figure 1.4 illustrates the phase portrait and the potential energy ω₀²(1 − cos φ) for the pendulum oscillations.


[Figure 1.4: the potential energy U(φ) = ω₀²(1 − cos φ), drawn for ω₀ = 1/2 so that 2ω₀² = 1/2, and the phase portrait of the pendulum for E = 1/4, 1/2, 3/4; the pattern is 2π-periodic in φ.]

The phase curve is periodic in φ with the period 2π. For values of E less than the potential energy maximum U_max = 2ω₀², there are nonlinear oscillations near the stable equilibrium position φ = 0. When E > 2ω₀², there are no oscillations: the motion is infinite. Such motion corresponds, for example, to initial conditions in which the pendulum is in the upper position (φ(0) = π) and, according to eq. (1.16), has the velocity φ̇ = ±√(2(E − 2ω₀²)). The separatrices, whose equations according to eq. (1.16) take the simple form at E = 2ω₀²,

φ̇ = ±2ω₀ cos(φ/2),  (1.17)

split the motions into oscillatory and infinite ones. Putting, for the oscillatory motions,

sin(φ/2) = k sin u,  E = 2k²ω₀²  (0 < k < 1),  (1.18)

we transform integral (1.16) into the form u̇² = ω₀²(1 − k² sin² u). As a result, separation of variables in this equation,

ω₀ dt = du/√(1 − k² sin² u),

yields the law of the pendulum motion as the dependence of time on the deviation angle

ω₀(t − t₀) = ∫₀ᵘ dv/√(1 − k² sin² v) = F(u, k).  (1.19)

The function F(u, k) is known as an incomplete elliptic integral of the first kind. The integral takes its name from its first appearance, in the calculation of the arc length of an ellipse. The arc length of an ellipse cannot be expressed via elementary transcendental functions such as the logarithm, arcsin and so on, but it can be represented as an infinite series. The values of the function F(u, k) are tabulated and can be found in many reference books. Finding the inverse function u = u(t) is called the inversion of integral (1.19); following Jacobi, this function is called the amplitude, the constant k being the modulus, and it has a special notation:

u = am(ω₀(t − t₀), k).  (1.20)

The sine and cosine of u are designated sn and cn:

sin u = sin am(ω₀(t − t₀), k) = sn(ω₀(t − t₀), k),
cos u = cos am(ω₀(t − t₀), k) = cn(ω₀(t − t₀), k).  (1.21)

The denominator of the integrand in eq. (1.19) is denoted dn:

dn(ω₀(t − t₀), k) = √(1 − k² sin² u) = √(1 − k² sn²(ω₀(t − t₀), k)).  (1.22)

Finally, when E = 2k²ω₀² (0 < k < 1), the solution of eq. (1.15) can be written in the form

sin(φ/2) = k sn(ω₀(t − t₀), k).  (1.23)
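A quick consistency check of (1.18) and (1.23) (an illustrative sketch, ω₀ = 1): integrating the pendulum numerically from the lowest position with energy E = 2k², the maximum of |sin(φ/2)| along the orbit should equal the modulus k:

```python
import math

def max_half_sine(k, dt=1e-3, steps=20000):
    """RK4 integration of phi'' = -sin(phi) (omega0 = 1) from the lowest
    position phi = 0 with phi' = 2k (total energy E = 2k^2, as in (1.18));
    returns the maximum of |sin(phi/2)| along the orbit."""
    f = lambda phi, v: (v, -math.sin(phi))
    phi, v, best = 0.0, 2.0 * k, 0.0
    for _ in range(steps):
        k1 = f(phi, v)
        k2 = f(phi + 0.5 * dt * k1[0], v + 0.5 * dt * k1[1])
        k3 = f(phi + 0.5 * dt * k2[0], v + 0.5 * dt * k2[1])
        k4 = f(phi + dt * k3[0], v + dt * k3[1])
        phi += dt * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6.0
        v += dt * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6.0
        best = max(best, abs(math.sin(0.5 * phi)))
    return best

r = max_half_sine(0.6)
print(r)  # at the turning point sin(phi_max/2) = k = 0.6
```

Since sn is bounded by 1, the right-hand side of (1.23) never exceeds k, and the bound is attained at the turning points.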

In the case of the infinite pendulum motion (E > 2ω₀²), we make the following change in eq. (1.16):

sin(φ/2) = sin u,  E = 2ω₀²/k²  (0 < k < 1).  (1.24)

The inversion of the integral

ω₀(t − t₀)/k = ∫₀ᵘ dv/√(1 − k² sin² v)

then gives the solution in elliptic functions:

sin(φ/2) = sn(ω₀(t − t₀)/k, k).  (1.25)


We now find the derivatives of the elliptic functions. For simplicity, let us choose ω₀ = 1, t₀ = 0. Differentiating relation (1.19) with respect to time leads to

du/dt = √(1 − k² sin² u).

Consequently,

(d/dt) sn(t, k) = (d/dt) sin u = cos u (du/dt) = cn(t, k) dn(t, k).  (1.26)

The derivatives of the functions cn and dn can be obtained by differentiating the relations

cn²(t, k) + sn²(t, k) = 1,  dn²(t, k) + k² sn²(t, k) = 1,  (1.27)

which gives

(d/dt) cn(t, k) = −sn(t, k) dn(t, k),
(d/dt) dn(t, k) = −k² sn(t, k) cn(t, k).  (1.28)

Tables of the elliptic functions can be found in specialized handbooks. Let us analyze the equation for the nonlinear oscillations,

φ̇ = ±2√(k² − sin²(φ/2)),  (1.29)

together with the graph of the pendulum's potential energy and relation (1.18); this allows us to trace the motion in time. Let the pendulum at the moment t = 0 be in the lowest position φ = 0. Then, according to eqs (1.18) and (1.19), sn(0, k) = 0 for ω₀ = 1, t₀ = 0. When the initial velocity of the pendulum is positive, both the deviation angle φ and the quantity u increase up to the moment in time t = K, at which

sin(φ/2) = k,  u = π/2.

Consequently, sn(K, k) = 1, where

K = F(π/2, k) = ∫₀^{π/2} dv/√(1 − k² sin² v).  (1.30)


In what follows, we use the fact that the function sn(u, k) decreases from 1 to 0 on the interval from K to 2K. On the interval from 2K to 4K, this function takes negative values: it decreases between 2K and 3K, with sn(3K, k) = −1, and increases back to zero between 3K and 4K. It follows that the function sn is odd and periodic with the period 4K:

sn(t + 4K, k) = sn(t, k).  (1.31)
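The complete integral K of (1.30) is easy to evaluate numerically (a sketch, not the book's code). K(0) = π/2 recovers the quarter period of the harmonic limit, and K grows with k, which is precisely the amplitude dependence of the pendulum period 4K:

```python
import math

def K(k, n=20000):
    """Complete elliptic integral K(k) = F(pi/2, k) of (1.30), midpoint rule."""
    h = 0.5 * math.pi / n
    return sum(h / math.sqrt(1.0 - (k * math.sin((i + 0.5) * h)) ** 2)
               for i in range(n))

print(K(0.0), K(0.6))  # K(0) = pi/2; K(0.6) is about 1.7508
```

With ω₀ = 1 the oscillation period is T = 4K(k), so for k = 0.6 it is already about 7.0 instead of the harmonic 2π.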

To better understand the properties of the elliptic functions and integrals, it is worth noting the following facts. When k < 1, the integrand in eq. (1.30) can be expanded in a convergent series, which in turn can be integrated term by term, giving

K(k) = (π/2)[1 + (1/2)² k² + ((1·3)/(2·4))² k⁴ + ((1·3·5)/(2·4·6))² k⁶ + ⋯].

For the same values of k, the functions sn u and cn u can be expanded in Fourier series:

sn u = (2π/(kK)) Σ_{n=1}^{∞} q^(n−1/2)/(1 − q^(2n−1)) sin((2n − 1)πu/(2K)),  (1.32)

cn u = (2π/(kK)) Σ_{n=1}^{∞} q^(n−1/2)/(1 + q^(2n−1)) cos((2n − 1)πu/(2K)),  (1.33)

where

q = exp(−πK′/K),  K′ = F(π/2, √(1 − k²)).  (1.34)

Graphs of the function sn u for three values of k are depicted in Fig. 1.5. It is seen that the period of the oscillations increases as k increases. The function sn(u, k) differs from sin u = sn(u, 0) in becoming flatter as k grows. When k = 1, the series (1.32) and (1.33) diverge, but integral (1.17) can be computed explicitly:

ω₀(t − t₀) = (1/2) ln[(1 + sin u)/(1 − sin u)].

Hence

sin(φ/2) = ±tanh(ω₀(t − t₀)),  (1.35)

and the phase point moves along the separatrix (see Fig. 1.4) from φ = 0 to φ = ±π as t increases from t = 0 to ∞. The solution for the velocity,

φ̇ = 2εω₀/cosh(ω₀(t − t₀)),  ε = ±1,  (1.36)


[Figure 1.5: sn(u, k) for k = 0, k = 0.4 and k = 0.99; the larger k, the longer and flatter the oscillation.]

exponentially decays as t → ±∞ and bears the name of a soliton. Its profile width is inversely proportional to ω₀ (Fig. 1.6).

[Figure 1.6: the soliton profile φ̇(t).]
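The separatrix solution (1.35)–(1.36) can be checked by finite differences (an illustrative sketch with an arbitrarily chosen ω₀):

```python
import math

w0 = 1.3  # an arbitrary frequency for the check
phi = lambda t: 2.0 * math.asin(math.tanh(w0 * t))  # separatrix solution (1.35)

h = 1e-4
for t in (-2.0, -0.5, 0.0, 0.7, 3.0):
    # central differences for the first and second derivatives
    d1 = (phi(t + h) - phi(t - h)) / (2 * h)
    d2 = (phi(t + h) - 2 * phi(t) + phi(t - h)) / h**2
    # (1.36): the angular velocity is the soliton 2*w0 / cosh(w0*t)
    assert abs(d1 - 2.0 * w0 / math.cosh(w0 * t)) < 1e-6
    # the pendulum equation (1.15) holds along the separatrix
    assert abs(d2 + w0**2 * math.sin(phi(t))) < 1e-4
print("separatrix checks passed")
```

The sech-shaped velocity pulse is the soliton profile of Fig. 1.6; changing w0 rescales its width as 1/ω₀, as stated above.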

1.3 Small-Amplitude Oscillations of a Conservative Single-Degree-of-Freedom System

Free (natural) oscillations are oscillations that occur in a physical system after an initial external influence, owing to the originally accumulated energy; they consist in the transfer of potential energy into kinetic energy and back. Free oscillations of single-degree-of-freedom conservative systems can be described by the universal equation (1.1), where in the general case the restoring force f(x) per unit mass is a nonlinear function. If x = 0 is an equilibrium state of the system, f(x) can be expanded in a Taylor series for small deviations from it:

f(x) = a₁x + a₂x² + a₃x³ + ⋯,  aₙ = (1/n!) (dⁿf/dxⁿ)|_{x=0},  (1.37)

where the parameter a₁ = −ω₀² defines the frequency of linear oscillations. Putting ω₀ = 1, we measure the dimensionless time t in periods of the free oscillations. Then eq. (1.1), in the first nonlinear approximation for an odd function f, takes the form

ẍ = −x − εx³.  (1.38)

It can serve as the simplest example of an anharmonic oscillator. The dimensionless parameter ε = a₃/a₁ is assumed to be small. For the pendulum,

f(φ) = −sin φ = −φ + φ³/3! − ⋯,

and the small parameter is ε = −1/3!. Although the exact solution of eq. (1.38) can be expressed in terms of the elliptic functions, it is useful to find an approximate analytical expression for the anharmonic oscillator. We first construct it using straightforward expansion.

1.3.1 Straightforward Expansion

One of the most widely used analytical methods for studying nonlinear systems is perturbation theory. It is applied to equations whose nonlinear terms come with a small parameter. The right-hand side of eq. (1.38) is an analytic function of ε and x; the approximate solution can therefore be sought through direct expansion of x(t) in increasing powers of the small parameter ε, that is, in the form of the series

x(t) = Σ_{n=0}^{∞} εⁿ xₙ(t).  (1.39)

Having substituted eq. (1.39) into the nonlinear differential equation, grouped the terms with the same powers of the small parameter, discarded the terms containing powers of ε greater than a certain N, and set the coefficient of each power of the parameter equal to zero, we obtain a system of N + 1 equations. This method, called straightforward expansion, makes it possible to determine sequentially the terms x₀, x₁, …, x_N of the series (1.39) by differentiation and integration, provided the general solution is known at ε = 0. When ε = 0, the general solution of eq. (1.38) can be written as

x₀ = a cos(t + β),  (1.40)


with arbitrary constants a and β. Expansion (1.39) is typical for perturbation theory. We assume that the influence of the small term εx³ (the perturbation) causes a change of both a and β over time and the emergence of other harmonics. Substitution of eq. (1.39) into eq. (1.38) and grouping of the coefficients of equal powers of ε yield

(ẍ₀ + x₀) + ε(ẍ₁ + x₁ + x₀³) + ⋯ = 0.

The resulting equation must hold for all values of ε. The powers εⁿ being independent for different n, the coefficient of each power must vanish. Finally, in each order we arrive at the differential equations for xₙ:

ẍₙ + xₙ = G(xₙ₋₁, xₙ₋₂, …, x₀).  (1.41)

The right-hand sides G(xₙ₋₁, xₙ₋₂, …, x₀) of these equations depend on xₘ with numbers m less than n. The homogeneous equation ẍₙ + xₙ = 0 having a solution of the type (1.40), eq. (1.41) can be solved without difficulty. We introduce degrees of smallness for quantities in the limit ε → 0. Let us write

f(ε) = O[g(ε)]  (ε → 0)  (1.42)

provided that there is a number A such that

lim_{ε→0} f(ε)/g(ε) = A,  0 < |A| < ∞,  (1.43)

and

f(ε) = o[g(ε)]  (1.44)

if

lim_{ε→0} f(ε)/g(ε) = 0.

The series (1.39) is, as a rule, a Poincaré asymptotic series, for which the following is valid:

x(t) = Σ_{m=0}^{n} εᵐ xₘ(t) + O(εⁿ⁺¹)  (ε → 0).  (1.45)

In other words, the error associated with truncating the series at the term n does not exceed the first discarded term, proportional to εⁿ⁺¹. For any fixed value of t, the


asymptotic series diverges: |x(t) − xₙ(t)| → ∞ (n → ∞). Successive approximations in n at first bring xₙ(t) closer to x(t), but after a certain value n₀(ε) the error begins to increase. In practice, it is sufficient to restrict oneself to computing a few terms of the series. The coefficient of ε in eq. (1.41) furnishes the equation

ẍ₁ = −x₁ − x₀³ = −x₁ − (a³/4)[3 cos(t + β) + cos(3t + 3β)],  (1.46)

which can be conveniently represented in the matrix form

ẏ = Ay + h,  (1.47)

where

y = (x₁, ẋ₁)ᵀ,  A = ( 0  1 ; −1  0 ),  h = (0, −x₀³)ᵀ.  (1.48)

Then, writing down eq. (1.47) as

d/dt (e^(−At) y) = e^(−At) h,

we immediately obtain its solution (with the initial condition y(t₀) = y₀) as the sum

y = y_g + y_p  (1.49)

of the general solution y_g of the homogeneous equation and a particular solution y_p of the inhomogeneous equation:

y_g = e^(A(t−t₀)) y₀,  y_p = e^(At) ∫_{t₀}^{t} e^(−As) h(s) ds.  (1.50)

It is easier to calculate the matrix

e^(At) = Σ_{k=0}^{∞} (At)ᵏ/k!

by expanding the matrix A spectrally, over the projection operators P₁, P₂ associated with its eigenvalues λ₁, λ₂:

A = λ₁P₁ + λ₂P₂,  P₁ = (A − λ₂I)/(λ₁ − λ₂),  P₂ = −(A − λ₁I)/(λ₁ − λ₂),  (1.51)


where I is the identity matrix and

P₁P₂ = P₂P₁ = 0,  P₁² = P₁,  P₂² = P₂.

The above relations imply that, using the simple formula [5]

e^(At) = e^(λ₁t) P₁ + e^(λ₂t) P₂,  (1.52)

the solution of eq. (1.49) can be written as

x₁ = a cos(t + β₁) − (3a³/8) t sin(t + β) + (a³/32) cos(3t + 3β).  (1.53)
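Formulas (1.51)–(1.52) are easy to verify numerically for the matrix A of (1.48), whose eigenvalues are ±i: the projection-operator expansion must reproduce the familiar rotation matrix (a sketch, not the book's code):

```python
import cmath, math

# A = [[0, 1], [-1, 0]] from (1.48); its eigenvalues are lam1 = i, lam2 = -i.
lam1, lam2 = 1j, -1j
I2 = [[1, 0], [0, 1]]
A = [[0, 1], [-1, 0]]

def lincomb(c1, X, c2, Y):
    """Elementwise c1*X + c2*Y for 2x2 matrices."""
    return [[c1 * X[i][j] + c2 * Y[i][j] for j in range(2)] for i in range(2)]

# Projection operators (1.51)
d = lam1 - lam2
P1 = lincomb(1 / d, A, -lam2 / d, I2)
P2 = lincomb(-1 / d, A, lam1 / d, I2)

def expAt(t):
    """e^{At} = e^{lam1 t} P1 + e^{lam2 t} P2, formula (1.52)."""
    return lincomb(cmath.exp(lam1 * t), P1, cmath.exp(lam2 * t), P2)

t = 0.8
M = expAt(t)
R = [[math.cos(t), math.sin(t)], [-math.sin(t), math.cos(t)]]  # expected rotation
err = max(abs(M[i][j] - R[i][j]) for i in range(2) for j in range(2))
print(err)
```

The imaginary parts of the two exponentials cancel pairwise, so e^(At) comes out real, as it must for a real matrix A.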

The summand on the right-hand side of eq. (1.46) proportional to sin(t + β), called the resonant term, acts as a driving force for the oscillator with the variable x₁(t). The summand in (1.53) proportional to time describes a resonance appearing as a consequence of the coincidence of the frequencies of the free oscillations x₁(t) and of the driving force. In the astronomical literature, such terms are referred to as mixed secular. An expansion of type (1.39) is uniformly valid over a time interval if each term remains a small correction to the previous one for all t in it. The correction εx₁(t), assumed above to be small, becomes comparable to the main term x₀(t) of the expansion when t ∼ O(1/ε). Therefore, expansion (1.53) holds only on the interval t ≪ 1/ε; for t ≳ 1/ε such an expansion is nonuniform (a "pedestrian" expansion). Several methods allow one to avoid secular terms in perturbation theory. The first that we consider is the method of multiple scales [12].
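The nonuniformity can be seen numerically (an illustrative sketch): the deviation of the true solution of (1.38) from the zeroth-order term x₀ = a cos t keeps growing with time, as the secular term predicts:

```python
import math

eps, a = 0.1, 1.0

def max_deviation(t_end, dt=1e-3):
    """RK4 for x'' = -x - eps*x^3 with x(0) = a, x'(0) = 0; returns the
    largest deviation |x(t) - a*cos(t)| from the unperturbed term x0
    observed up to t_end."""
    f = lambda x, v: (v, -x - eps * x ** 3)
    x, v, t, worst = a, 0.0, 0.0, 0.0
    for _ in range(int(round(t_end / dt))):
        k1 = f(x, v)
        k2 = f(x + 0.5 * dt * k1[0], v + 0.5 * dt * k1[1])
        k3 = f(x + 0.5 * dt * k2[0], v + 0.5 * dt * k2[1])
        k4 = f(x + dt * k3[0], v + dt * k3[1])
        x += dt * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6.0
        v += dt * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6.0
        t += dt
        worst = max(worst, abs(x - a * math.cos(t)))
    return worst

early, late = max_deviation(4.0), max_deviation(40.0)
print(early, late)  # the deviation grows with t, roughly like the phase drift
```

The growth comes almost entirely from the slow phase drift (3/8)εa²t rather than from an amplitude change, which is what the multiple-scales treatment below makes explicit.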

1.3.2 The Method of Multiple Scales

The method of multiple scales is so popular that it is being rediscovered about every six months. It is applicable in wide areas of physics, engineering and applied mathematics.
Ali Hasan Nayfeh

To identify the causes of the appearance of the secular terms, let us refer to the exact solution of eq. (1.38). It has the integral of motion

ẋ²/2 + x²/2 + εx⁴/4 = E,  (1.54)

with the potential energy U(x) = x²/2 + εx⁴/4. The graph of U(x) depends strongly on the sign of ε (see Fig. 1.7). When ε > 0, nonlinear oscillations exist for all values of the energy E; at ε < 0, however, nonlinear oscillations are observed only for E < 1/(4|ε|).


[Figure 1.7: the potential U(x) = x²/2 + εx⁴/4 for ε > 0 and ε < 0.]

To start with, we write out the exact solution of eq. (1.38). In doing so, it is convenient to relate the value of the energy E to the amplitude b of the nonlinear oscillations,

E = b²/2 + εb⁴/4,  (1.55)

assuming that the coordinate x equals b when the velocity ẋ = 0. We make the replacement

x(t) = b cos y(t₁),  t₁ = √(1 + εb²) t.  (1.56)

Then the equation

ẋ²/2 + x²/2 + εx⁴/4 = b²/2 + εb⁴/4  (1.57)

rearranges to the form

(dy(t₁)/dt₁)² = 1 − k² sin² y(t₁),  (1.58)

where

k² = εb²/(2(1 + εb²)).  (1.59)

In correspondence with eq. (1.19), the solution of eq. (1.58) is the function y(t₁) = am(t₁, k). Thus, x(t) can be expressed through the Jacobi elliptic functions:

x(t) = b cn(√(1 + εb²) t, k).  (1.60)

When expanding cn(√(1 + εb²) t, k) in powers of ε, we can see from formulas (1.32)–(1.34) that the functional dependence of x on t and ε cannot be split into separate dependences on these two arguments alone: x also depends on the products εt, ε²t, ε³t, …:

x = x(t, ε, εt, ε²t, ε³t, …).  (1.61)

It follows from eqs (1.32)–(1.34), (1.59) and (1.60) that, in the lowest order in ε,

x(t) = b cos((1 + (3/8)εb²) t) + O(ε),

so the nonlinearity changes the oscillation frequency. We introduce the auxiliary arguments

T₀ = t,  Tₙ = εⁿ t  (n = 1, 2, 3, …),  (1.62)

which represent different temporal scales. So, for ε = 1/10, changes on the scale of T₀ correspond to several periods of the oscillations, with T₁ corresponding to tens of periods, T₂ to hundreds of periods, and so on. In the method of multiple scales, an approximate solution is sought in the form

x = x₀(T₀, T₁, T₂, …) + εx₁(T₀, T₁, T₂, …) + ⋯,  (1.63)

where each term in the expansion is regarded as a function of the independent variables T₀, T₁, T₂, …. Technically, the calculations, which now involve partial differential equations, can be carried out without complications, as we will see. Using the rule for differentiating a composite function, we get

dx/dt = (∂/∂T₀ + ε ∂/∂T₁ + ε² ∂/∂T₂ + ⋯)(x₀ + εx₁ + ⋯),  (1.64)

d²x/dt² = (∂/∂T₀ + ε ∂/∂T₁ + ε² ∂/∂T₂ + ⋯)²(x₀ + εx₁ + ⋯).  (1.65)

Then eq. (1.38) takes the form of the partial differential equation

(∂²x₀/∂T₀² + x₀) + ε(∂²x₁/∂T₀² + x₁ + 2 ∂²x₀/∂T₀∂T₁ + x₀³) + O[ε²] = 0.  (1.66)

In the zero order in ε, the solution of this equation,

x₀ = a(T₁, T₂, …) cos(T₀ + β(T₁, T₂, …)),  (1.67)

is determined by the amplitude a(T₁, T₂, …) and the phase β(T₁, T₂, …). They are not constant but depend on the slow variables T₁, T₂, …; their form will be defined below by eliminating secular terms. Substituting eq. (1.67) into eq. (1.66) and setting the first order in ε equal to zero, we arrive at

∂²x₁/∂T₀² + x₁ = 2 (∂a/∂T₁) sin(T₀ + β) + 2a (∂β/∂T₁ − (3/8)a²) cos(T₀ + β) − (1/4)a³ cos(3T₀ + 3β).  (1.68)

Here we have used the trigonometric identity cos³α = (1/4)(3 cos α + cos 3α).

The terms on the right-hand side of eq. (1.68) proportional to sin(T₀ + β) and cos(T₀ + β) are responsible for the appearance of secular terms in x₁. To avoid the secular terms, the method of multiple scales requires us to set their coefficients equal to zero:

∂a/∂T₁ = 0,  ∂β/∂T₁ − (3/8)a² = 0.  (1.69)

When integrating eq. (1.68), one should then keep only the particular solution of the inhomogeneous equation,

x₁ = (1/32) a³ cos(3T₀ + 3β).  (1.70)

Equations (1.69) can be integrated elementarily:

a = a₀(T₂, …),  β = (3/8) a₀² T₁ + β₀(T₂, …).

They contain the quantities a₀(T₂, …) and β₀(T₂, …), depending on the slow variables T₂, T₃, …; the order of expansion (1.66) defines their explicit form. Finally, returning to the original independent variables, we have

x = a₀ cos(t + (3/8)ε t a₀² + β₀) + (1/32) ε a₀³ cos(3t + (9/8)ε t a₀² + 3β₀) + O[ε²t].  (1.71)

This formula coincides with eq. (1.53) if one uses the (for large t incorrect) expansion

cos(t + (3/8)ε t a₀² + β₀) ≈ cos(t + β₀) − (3/8)ε t a₀² sin(t + β₀).

It should be emphasized that for 0 < t < O(1/ε) the quantities a₀(T₂, …) and β₀(T₂, …) can be considered constant. On this interval, using the formula

x = a₀ cos(t + (3/8)ε t a₀² + β₀) + O[ε],  (1.72)


we can find an approximate analytical solution for the anharmonic oscillator, including the correct dependence of the frequency on the oscillation amplitude. Containing no secular terms and only a small correction, expansion (1.72) is uniform.
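A numerical check of the frequency 1 + (3/8)εb² (a sketch, not the book's code): the exact period of (1.38) at amplitude b follows from the turning-point quadrature, and for ε = 0.1, b = 1 it agrees with 2π/(1 + (3/8)εb²) up to a discrepancy of order ε²:

```python
import math

def duffing_period(eps, b, n=20000):
    """Exact period of x'' = -x - eps*x^3 at amplitude b from the
    turning-point quadrature. With x = b*sin(theta) the integrand is smooth:
    T = 2 * int_{-pi/2}^{pi/2} b dtheta / sqrt(b^2 + (eps*b^4/2)(1 + sin^2 theta))."""
    h = math.pi / n
    total = 0.0
    for i in range(n):
        s = math.sin(-0.5 * math.pi + (i + 0.5) * h)
        total += h * b / math.sqrt(b * b + 0.5 * eps * b ** 4 * (1.0 + s * s))
    return 2.0 * total

eps, b = 0.1, 1.0
T = duffing_period(eps, b)
T_ms = 2.0 * math.pi / (1.0 + 0.375 * eps * b * b)  # 2*pi / (1 + (3/8) eps b^2)
print(T, T_ms)
```

For ε > 0 the period is shorter than 2π, the hardening of the spring predicted by (1.72).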

1.3.3 The Method of Averaging: The Van der Pol Equation

We consider it more correct (also following the path of historical development) to examine the whole problem and apply certain approximations, guided chiefly by physical reasoning, and not to pay much attention to mathematical rigor.
Van der Pol

One can make a lot of objections, but mechanics cannot require the same rigor as a pure analysis.
Henri Poincaré

The averaging method was proposed by the Dutch physicist Van der Pol, one of the pioneers of the physical and mathematical theory of nonlinear oscillations. Exploring mostly equations of the type

ẍ + ω²x = εf(ẋ, x),  (1.73)

he suggested a method of "slowly varying coefficients." This method clarified several questions in the theory of nonlinear oscillations and correctly described, in the first approximation, the processes of self-excitation of oscillations in eq. (1.73). Despite the lack of rigor (which is what the epigraph to this section speaks about), the techniques used by Van der Pol and his colleagues turned out to be extremely fruitful and vital in the early development of the theory of nonlinear oscillations.

Balthasar Van der Pol (January 27, 1889 – October 6, 1959) was a Dutch physicist and mathematician. He was born in Utrecht and graduated from the University of Utrecht (1916); he then studied under John Ambrose Fleming in London and under Sir Joseph John Thomson at the Cavendish Laboratory in Cambridge (1916–1919). In 1922–1949 he conducted research in the electrical laboratory in Eindhoven. His main works in mathematics and radio physics concern the theory of oscillations, the theory of electrical circuits and radio wave propagation. In 1920, he derived the famous equation (the Van der Pol equation) that describes oscillations in a vacuum tube oscillator. To solve it, Van der Pol proposed the method of "slowly varying coefficients" (the Van der Pol method), which played an important role in the development of the theory of nonlinear oscillations. Later, Van der Pol extended this method to more general classes of differential equations. In 1935, he was awarded the gold medal of the Institute of Radio Engineers (now the IEEE) for outstanding achievements in radio physics. The asteroid 10443 bears his name.

Let us explain his method by the example of eq. (1.38): ẍ = −x − εx³. When ε = 0, its solution is x₀ = A cos(t + β), with ẋ₀ = −A sin(t + β). The averaging method starts with the method of variation of arbitrary constants. It is assumed that the solution of the equation has the form

x = A(t) cos(t + β(t)),  (1.74)

where the quantities A(t), β(t) are functions of time. The introduction of these two independent functions instead of the single x(t) requires imposing one additional condition, which is chosen in the same way as for the linear equation:

ẋ = −A(t) sin(t + β(t)).  (1.75)

Differentiating eq. (1.74) and taking account of eq. (1.75), we come to the first equation,

−A(t) sin(t + β(t))(1 + β̇(t)) + Ȧ(t) cos(t + β(t)) = −A(t) sin(t + β(t)).

Substitution of eqs (1.74) and (1.75) into eq. (1.38) produces the second equation,

−Ȧ(t) sin(t + β(t)) − A(t) cos(t + β(t))(1 + β̇(t)) = −A(t) cos(t + β(t)) − ε(A(t) cos(t + β(t)))³.

Thus, the original equation (1.38) is replaced by the two first-order equations

Ȧ = εA³ sin φ cos³ φ,  φ̇ = 1 + εA² cos⁴ φ,  (1.76)

for the quantities A(t) and φ = t + β(t). In these equations, the dependence of the new variables on ε stands out quite clearly: for small ε the functions A(t), β(t) change considerably more slowly (Ȧ ∼ O(ε), β̇ ∼ O(ε)) than φ(t). The variables A(t), β(t) are said to be slow variables, and the variable φ(t) is a fast variable. The averaging procedure eliminates the fast variables from the equations of perturbed motion. Since |sin φ cos³ φ| < 1, we expect the amplitude A to change little over a time equal to the period. Let us integrate


both sides of eq. (1.76) with respect to time from t to t + 2π. Seeing that the derivatives dA/dt and dβ/dt are of the order of O(ε), their changes over the interval (t, t + 2π) are also of the order of O(ε). Therefore, when integrating, we may treat the quantities A(t), β(t) on the right-hand sides as constant, the error being of the order of O(ε²). Integration of the left-hand sides gives rise to summands of the type A(t + 2π) − A(t), which in the same approximation may be replaced by 2π dA/dt. Thus, according to the averaging principle, in the first approximation the right-hand sides of eq. (1.76) may be replaced by their averages over the period, i.e., by their values averaged over φ ∈ [0, 2π]. "Such statements are often a fruitful source of mathematical theorems" [4]. Consequently,

Ȧ = εA³ (1/2π) ∫₀^{2π} sin φ cos³ φ dφ = 0,

φ̇ = 1 + εA² (1/2π) ∫₀^{2π} cos⁴ φ dφ = 1 + (3/8)εA².

In the long run, in the first approximation we have

x = A cos φ,  A = const,  φ = (1 + (3/8)εA²) t,  (1.77)

and eq. (1.77) coincides with solution (1.72), obtained by the method of multiple scales. In spite of offering the possibility of getting approximate equations quickly, the Van der Pol method of averaging contains no means of estimating the accuracy of the solutions obtained, no limits of applicability and no way of constructing higher-order approximations.
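The two averages used above reduce to elementary integrals; a short numerical sketch (illustrative, not from the book) confirms that the mean of sin φ cos³ φ vanishes while the mean of cos⁴ φ equals 3/8:

```python
import math

def cycle_average(f, n=10000):
    """(1/2pi) * integral of f(phi) over one period, midpoint rule
    (essentially exact for smooth 2pi-periodic integrands)."""
    h = 2.0 * math.pi / n
    return sum(f((i + 0.5) * h) for i in range(n)) * h / (2.0 * math.pi)

m1 = cycle_average(lambda p: math.sin(p) * math.cos(p) ** 3)  # -> 0
m2 = cycle_average(lambda p: math.cos(p) ** 4)                # -> 3/8
print(m1, m2)
```

The vanishing first average is why the amplitude A stays constant in the first approximation; the second one produces the frequency shift (3/8)εA² of (1.77).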

1.3.4 The Generalized Method of Averaging. The Krylov–Bogolyubov Approach

The present work includes new methods allowing one to study arbitrary motions arising as a small perturbation of any motion or equilibrium state. These methods are referred to here as methods of nonlinear mechanics, due to the fact that they have been created specifically to solve problems concerning nonlinear oscillations.
N.M. Krylov, N.N. Bogolyubov. An application of nonlinear mechanics to the theory of stationary oscillations

N.M. Krylov and N.N. Bogolyubov succeeded in looking at the subject matter from a whole new perspective. This enabled them to develop an asymptotic method whose first term coincides with the solution obtainable by the Van der Pol method, which has a heuristic nature. These works laid the foundation for a new line in the theory of asymptotic methods, deeply embedded in various fields of theoretical physics, astronomy, spacecraft dynamics, etc.
N.N. Moiseev. Asymptotic methods of nonlinear mechanics

The generalized method of averaging was proposed by N.M. Krylov and N.N. Bogolyubov [2, 3] and developed in different ways by many researchers.


Nikolai Mitrofanovich Krylov (November 16 (29), 1879, St. Petersburg, Russian Empire – May 11, 1955, Moscow, USSR) was a Soviet Russian mathematician, Full Member of the USSR Academy of Sciences from 1929. His main works concern interpolation, the approximate integration of the differential equations of mathematical physics, and nonlinear mechanics. From 1932, N.M. Krylov worked together with N.N. Bogolyubov and devoted much of his life to the problems of the theory of nonlinear oscillatory processes, where they succeeded in laying the foundations of nonlinear mechanics.

Nikolai Nikolaevich Bogolyubov (August 8 (21), 1909, Nizhny Novgorod – February 13, 1992, Moscow) was an outstanding Soviet mathematician and theoretical physicist, Full Member of the Russian Academy of Sciences (1991), of the USSR Academy of Sciences (1953) and of the Ukrainian Academy of Sciences (1948), and the founder of scientific schools in nonlinear mechanics and theoretical physics. From 1951 he was the director of the Laboratory of Theoretical Physics of the Joint Institute for Nuclear Research (JINR) in Dubna, and from 1965 to 1988 the director of JINR. N.N. Bogolyubov is the author of many fundamental studies in theoretical physics and mathematics. His main works in mathematics are devoted to the asymptotic methods of nonlinear mechanics, the calculus of variations, approximate methods of mathematical analysis, differential equations and mathematical physics, the theory of stability, the theory of dynamical systems and other areas. In theoretical physics, he contributed greatly to the classical statistical mechanics of imperfect systems and quantum statistics, the theory of superconductivity, quantum field theory and the physics of strong interactions.

The basic idea of the method is to change variables so as to eliminate the fast variables from the equations of perturbed motion up to a given degree of accuracy, eventually separating the evolution of the fast and slow variables.
To this end, the general solution of eq. (1.38) needs to be sought in the form of the asymptotic expansion


x = a cos ψ + ε x₁(a, ψ) + ε² x₂(a, ψ) + ⋯ + εᵐ xₘ(a, ψ) + ⋯ ,  (1.78)

where ψ = t + θ(t) and the quantities xₘ(a, ψ) are periodic functions of ψ with period 2π. The dynamics of a and ψ is given by the differential equations

ȧ = H(a, ε) = ε H₁(a) + ε² H₂(a) + ⋯ + εᵐ⁻¹ Hₘ₋₁(a) + ⋯ ;  (1.79)

ψ̇ = G(a, ε) = 1 + ε G₁(a) + ε² G₂(a) + ⋯ + εᵐ⁻¹ Gₘ₋₁(a) + ⋯ .  (1.80)

The right-hand sides of these equations contain no fast variables. The functions H(a, ε), G(a, ε), as well as the coefficients of their expansions, are not known beforehand; their explicit form is determined in the course of solving the problem. To determine them uniquely, it is necessary to impose the additional conditions

∫₀^{2π} xₘ(a, ψ) cos ψ dψ = ∫₀^{2π} xₘ(a, ψ) sin ψ dψ = 0  (m = 1, 2, ⋯),  (1.81)

according to which the functions xₘ(a, ψ) (m = 1, 2, ...) contain no terms proportional to cos ψ or sin ψ. This requirement eliminates secular terms from expansion (1.78). Substituting eqs (1.78)–(1.80) into eq. (1.38) and equating the coefficients of like powers of ε up to the order εᵐ⁺¹, we obtain a system of m equations defining an approximate solution for xₘ(a, ψ); the latter differs from the exact one by a magnitude of the order of εᵐ⁺¹. The imposed additional conditions (1.81) define the coefficients H₁, H₂, ..., Hₘ₋₁ and G₁, G₂, ..., Gₘ₋₁. To complete the description of the method, the autonomous equation (1.79) should be integrated by elementary techniques, and the evolution of the fast variable ψ is then determined by direct integration of eq. (1.80). The resulting formal expansion at step m can be used both as a computational formula and as a replacement of variables. It enables us to reduce the exact equation to an approximate one, which is convenient for further theoretical studies. So in the first order in ε, we have the equation

∂²x₁/∂ψ² + x₁ – 2H₁(a) sin ψ + (a³/4) cos 3ψ + (a/4)(3a² – 8G₁(a)) cos ψ = 0.  (1.82)

The additional conditions (1.81) fix the form of G₁(a) and H₁(a):

G₁(a) = 3a²/8,  H₁(a) = 0.  (1.83)

These explicit expressions guarantee the absence of secular terms. When choosing a, the general solution of the homogeneous equation (1.82) must be discarded. Finally, the result obtained in the lowest orders,

x = a cos ψ + ε (a³/32) cos 3ψ,  a = const,  ψ = (1 + (3/8) ε a²) t,  (1.84)

coincides with eq. (1.71).
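The first-order result (1.84) can be compared directly with a numerical integration. The sketch below assumes that eq. (1.38) is the conservative anharmonic oscillator ẍ + x + εx³ = 0, the form consistent with eqs (1.82)–(1.84); the parameter values are illustrative:

```python
import math

def rk4(rhs, y, t, dt):
    # one classical Runge-Kutta step for y' = rhs(t, y)
    k1 = rhs(t, y)
    k2 = rhs(t + dt/2, [a + dt/2*c for a, c in zip(y, k1)])
    k3 = rhs(t + dt/2, [a + dt/2*c for a, c in zip(y, k2)])
    k4 = rhs(t + dt, [a + dt*c for a, c in zip(y, k3)])
    return [a + dt/6*(p + 2*q + 2*r + s)
            for a, p, q, r, s in zip(y, k1, k2, k3, k4)]

eps, a = 0.05, 1.0
duffing = lambda t, y: [y[1], -y[0] - eps*y[0]**3]

# initial conditions taken from eq. (1.84) at t = 0, so that psi(0) = 0
y, t, dt = [a + eps*a**3/32, 0.0], 0.0, 0.001
err = 0.0
while t < 50.0:
    y = rk4(duffing, y, t, dt)
    t += dt
    psi = (1 + 3*eps*a**2/8)*t                       # phase from eq. (1.84)
    approx = a*math.cos(psi) + eps*a**3/32*math.cos(3*psi)
    err = max(err, abs(y[0] - approx))
print(err)
```

Over many periods the discrepancy stays small; it is dominated by the O(ε²) frequency correction that the first-order formula neglects.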


1.4 Forced Oscillations of an Anharmonic Oscillator

Mathematics is often unable to answer accurately the questions posed by the natural and engineering sciences. Many problems, despite having been solved with high precision, involved algorithms so sophisticated that they could not be applied in practice. Searching for the simplest and at the same time sufficiently accurate method of obtaining approximate solutions, including an estimate of the degree of accuracy of the approximation, often requires exceptional talent and profound knowledge. It is in this sense, obviously, that one should understand the words of the great Russian mathematician P.L. Chebyshev, as quoted once by the French geometer G. Darboux, that an approximate solution is better than an exact one. — N.M. Krylov

When a system is subjected to external perturbations, it executes forced oscillations; this is a distinguishing feature of the process. A perturbing force F(t) is most often approximated by such functions as a step function, an impulse function or a harmonic one. The equation of motion for an anharmonic oscillator in the presence of damping and an external force with amplitude F and frequency Ω has the form

m ẍ = –kx + bx³ – α ẋ + F cos Ω t.  (1.85)

We introduce the dimensionless variables u(ω₀t), μ, f through the relations

x(t) = u(ω₀ t),  F = f m ω₀²,  b = –ε m ω₀²,  α = 2εμ m ω₀,  Ω = ω ω₀,  ω₀ = √(k/m),

with ε > 0, μ > 0. Then eq. (1.85) takes the dimensionless form

ü + 2εμ u̇ + u + ε u³ = f cos ω t,  (1.87)

where the dot now denotes differentiation with respect to the dimensionless time ω₀t. For ε > 0, μ > 0 and in the absence of external forces, the total energy E(t) is positive and decreases in time, so that u(t) → 0 for any initial conditions. To date, eq. (1.87) for arbitrary values of the parameters is extremely difficult, if not impossible, to solve analytically. As shown below, analytical solutions for non-damped oscillations can be obtained in systems possessing either weak nonlinearity or weak damping, or certain relations between the parameters of the oscillations and the external force.
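The reduction of (1.85) to the dimensionless form can be checked numerically. A minimal sketch, under the substitutions reconstructed above (all parameter values are illustrative), integrates both equations and compares x(t) with u(ω₀t):

```python
import math

m, k = 2.0, 8.0                       # dimensional parameters of eq. (1.85)
eps, mu, f, om = 0.1, 0.3, 1.2, 2.0   # dimensionless parameters of eq. (1.87)
w0 = math.sqrt(k/m)
b, alpha = -eps*m*w0**2, 2*eps*mu*m*w0
F, Omega = f*m*w0**2, om*w0

def rk4(rhs, y, t, dt):
    # one classical Runge-Kutta step for y' = rhs(t, y)
    k1 = rhs(t, y)
    k2 = rhs(t + dt/2, [a + dt/2*c for a, c in zip(y, k1)])
    k3 = rhs(t + dt/2, [a + dt/2*c for a, c in zip(y, k2)])
    k4 = rhs(t + dt, [a + dt*c for a, c in zip(y, k3)])
    return [a + dt/6*(p + 2*q + 2*r + s)
            for a, p, q, r, s in zip(y, k1, k2, k3, k4)]

dim  = lambda t, y: [y[1], (-k*y[0] + b*y[0]**3 - alpha*y[1] + F*math.cos(Omega*t))/m]
ndim = lambda s, y: [y[1], -y[0] - eps*y[0]**3 - 2*eps*mu*y[1] + f*math.cos(om*s)]

x, u = [1.0, 0.0], [1.0, 0.0]         # x(0) = u(0), both velocities zero
dt, n = 0.0005, 20000                 # dimensional time step; dimensionless step is w0*dt
for i in range(n):
    x = rk4(dim, x, i*dt, dt)
    u = rk4(ndim, u, i*dt*w0, w0*dt)
diff = abs(x[0] - u[0])
print(diff)
```

If the scaling is consistent, the two trajectories coincide to integration accuracy.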

1.4.1 Straightforward Expansion

To start with, we find an approximate analytical solution of eq. (1.87) up to the first order in the parameter ε. For this purpose, we first examine the direct expansion

u = u₀ + ε u₁ + ε² u₂ + ⋯ .  (1.88)

Substituting it into eq. (1.87), we obtain the equations in the two lowest orders:

ü₀ + u₀ = f cos ω t,  (1.89)

ü₁ + u₁ = –2μ u̇₀ – u₀³.  (1.90)

The solution of eq. (1.89) has the form

u₀ = a cos(t + θ) + f/(1 – ω²) cos ω t,  (1.91)

where a and θ are constants. Further, eq. (1.90) implies the equation

ü₁ + u₁ = b₁ e^{it} + b₂ e^{3it} + b₃ e^{iωt} + b₄ e^{3iωt} + b₅ e^{i(ω+2)t} + b₆ e^{i(ω–2)t} + b₇ e^{i(2ω–1)t} + b₈ e^{i(2ω+1)t} + c.c.  (1.92)

with the constant quantities b_α (α = 1, ..., 8). Here, “c.c.” denotes the complex conjugate of the terms on the right-hand side. The partial solution of the inhomogeneous equation (1.92) is the sum of the partial solutions corresponding to each term on its right-hand side. Because the partial solution of the equation ü + u = e^{iνt}


can be written as

u = e^{iνt}/(1 – ν²),

the partial solution of eq. (1.92) contains both secular terms proportional to t and harmonic oscillations with the frequencies ν = ω ± 2, 2ω ± 1, 3ω, called combination frequencies. Their amplitudes are inversely proportional to ν² – 1. In subsequent orders of perturbation theory, combination oscillations with the frequencies mω + n appear, where m and n are integers. When one of the combination frequencies is close to the natural oscillation frequency, mω + n ≈ ±1, the corresponding harmonic may affect the oscillation amplitude dramatically owing to the presence of small denominators. It stands to reason that such resonance phenomena occur at the frequencies

ω = p/q,

where p and q are integers. The resonance at ω ≈ ±1, appearing in the first term of expansion (1.88), is referred to as primary. The resonances at

ω ≈ 0,  ω ≈ ±3,  ω ≈ 1/3  (1.93)

involved in the second term of expansion (1.88) are secondary.

1.4.2 A Secondary Resonance at ω ≈ ±3

Using the secondary resonance at ω ≈ ±3 as an example and the method of multiple scales, let us construct a first-order uniform expansion, containing neither secular terms nor small denominators, for an approximate oscillating solution of eq. (1.87). We introduce the detuning parameter σ = O(1):

ω = 3 + σ ε.  (1.94)

Then, we seek the approximate solution of eq. (1.87) in the form

u = u₀(T₀, T₁) + ε u₁(T₀, T₁) + ⋯ .  (1.95)

Here we have introduced the variables T₀ = t, T₁ = εt and omitted the dependence of u₀ and u₁ on the slower variables Tₙ (n ≥ 2). After substituting eq. (1.95) into eq. (1.87) and equating the coefficients of like powers of ε, we obtain


∂²u₀/∂T₀² + u₀ – f cos(ω T₀) = 0;  (1.96)

∂²u₁/∂T₀² + u₁ = –2 ∂²u₀/∂T₀∂T₁ – u₀³ – 2μ ∂u₀/∂T₀.  (1.97)

Because the general solution of eq. (1.96) can be represented as

u₀ = A(T₁) cos(t + θ(T₁)) + B cos ω t,  B = f/(1 – ω²),  (1.98)

the right-hand side of eq. (1.97) now includes terms of the same structure as those of eq. (1.92). However, relation (1.94) turns the function e^{i(ω–2)t}, which generated the small denominators for u₁ in eq. (1.92), into the function e^{i(1+σε)t} = e^{i(T₀+σT₁)}, which creates secular terms in eq. (1.97). We must therefore suppress the summands proportional to cos t and sin t that cause the secular terms for u₁. This line of reasoning leads to a system of two equations for the amplitude A(T₁) and the phase Φ(T₁) = σT₁ – 3θ(T₁):

Ȧ(T₁) = (1/8)(–8μ A(T₁) – 3A(T₁)² B sin Φ(T₁)),
Φ̇(T₁) = –(9/4)B² + σ – (9/8)A(T₁)² – (9/8) B A(T₁) cos Φ(T₁).

It should be noted that, in contrast to the free oscillations of an anharmonic oscillator, the evolution of the amplitude depends strongly on the behavior of the phase Φ(t). By integrating numerically, one can show that the solution of these equations tends to steady-state values A(T₁) → A, Φ(T₁) → Φ₀ satisfying

–8μA – 3A²B sin Φ₀ = 0,  (1.99)

–(9/4)B² + σ – (9/8)A² – (9/8)BA cos Φ₀ = 0.  (1.100)

Elimination of Φ₀ from these equations furnishes either the trivial case or the biquadratic equation

A⁴ + (3B² – (16/9)σ) A² + (4/81)(144μ² + (9B² – 4σ)²) = 0,  (1.101)

which represents the frequency-response equation for A(σ). A simple analysis shows that the real solutions

A² = (1/18)[16σ – 27B² ± 3√(–63B⁴ + 32B²σ – 256μ²)]  (1.102)



Figure 1.8

of this equation exist under the conditions

–63B⁴ + 32B²σ – 256μ² ≥ 0,  σ > (27/16)B².  (1.103)
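The roots (1.102) can be checked against eq. (1.101) directly. A short sketch, with sample values of μ, σ and B chosen only to satisfy conditions (1.103):

```python
import math

mu, sigma, B = 0.05, 1.0, 0.5          # sample values satisfying conditions (1.103)
disc = -63*B**4 + 32*B**2*sigma - 256*mu**2
assert disc >= 0 and sigma > 27*B**2/16

# the two branches of eq. (1.102)
roots = [(16*sigma - 27*B**2 + s*3*math.sqrt(disc))/18 for s in (+1, -1)]

def lhs(A2):
    # left-hand side of the frequency-response equation (1.101)
    return (A2**2 + (3*B**2 - 16*sigma/9)*A2
            + (4/81)*(144*mu**2 + (9*B**2 - 4*sigma)**2))

print([round(A2, 4) for A2 in roots],
      all(abs(lhs(A2)) < 1e-12 for A2 in roots))
```

Both roots are positive (as conditions (1.103) guarantee) and annihilate the left-hand side of (1.101) to round-off accuracy.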

Hence, in the interval

σ – √(σ² – 63μ²) ≤ 63 f²/(16 (1 – ω²)²) ≤ σ + √(σ² – 63μ²),  (1.104)

non-damped oscillations arise for the amplitude f of the driving force even in the presence of friction in the system. As a result, in the first approximation the solution is given by

u = A cos(t + θ(T₁)) + B cos ω t = A cos(t + εtσ/3 – Φ₀/3) + B cos ω t = A cos(ωt/3 – Φ₀/3) + f/(1 – ω²) cos ω t.  (1.105)

Figure 1.8 illustrates the findings of a numerical integration of eq. (1.87) for f = –8, ε = 0.02, σ = 3, μ = 1/3. These data meet condition (1.104), and the initial values are u(0) = 1.9, u̇(0) = 0. It is clear that the system exhibits non-damped oscillations, which are the superposition of two harmonic oscillations with the frequencies ω and ω/3.
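The persistence of these oscillations is easy to reproduce. The sketch below integrates eq. (1.87), taken in the dimensionless form ü + 2εμu̇ + u + εu³ = f cos ωt assumed above, with the parameter values quoted for Fig. 1.8:

```python
import math

eps, mu, sig, f = 0.02, 1/3, 3.0, -8.0
om = 3 + sig*eps                        # detuning relation, eq. (1.94)
rhs = lambda t, y: [y[1], -y[0] - eps*y[0]**3 - 2*eps*mu*y[1] + f*math.cos(om*t)]

def rk4(g, y, t, dt):
    # one classical Runge-Kutta step
    k1 = g(t, y)
    k2 = g(t + dt/2, [a + dt/2*c for a, c in zip(y, k1)])
    k3 = g(t + dt/2, [a + dt/2*c for a, c in zip(y, k2)])
    k4 = g(t + dt, [a + dt*c for a, c in zip(y, k3)])
    return [a + dt/6*(p + 2*q + 2*r + s)
            for a, p, q, r, s in zip(y, k1, k2, k3, k4)]

y, t, dt = [1.9, 0.0], 0.0, 0.001       # initial data quoted for Fig. 1.8
late = 0.0
while t < 200.0:
    y = rk4(rhs, y, t, dt)
    t += dt
    if t > 150.0:                       # amplitude long after the transient
        late = max(late, abs(y[0]))
print(late)                             # oscillations persist despite the damping
```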

1.4.3 A Primary Resonance: Amplitude–Frequency Response

Now we find a suitable uniform expansion in the case of the primary resonance for a small amplitude εf of the external force. After substituting the expression


u = u₀(T₀, T₁) + ε u₁(T₀, T₁) + ⋯  (1.106)

into the equation

ü + u + ε u³ + 2εμ u̇ – εf cos ω t = 0,  (1.107)

in the zeroth and first orders in ε we come up with the system of equations

∂²u₀/∂T₀² + u₀ = 0,  (1.108)

∂²u₁/∂T₀² + u₁ = –2 ∂²u₀/∂T₀∂T₁ – u₀³ – 2μ ∂u₀/∂T₀ + f cos(ω T₀).  (1.109)

The zero-order solution

u₀ = a(T₁) cos(t + θ(T₁))  (1.110)

produces resonance summands on the right-hand side of eq. (1.109), generating secular terms in u₁. In addition, the summand

f cos(ω T₀) = f cos(T₀ + σT₁) = f (cos T₀ cos(σT₁) – sin T₀ sin(σT₁))

at the frequency

ω = 1 + σε  (1.111)

adds further terms to the resonant ones on the right-hand side of eq. (1.109). The solution for u₁ is free of secular terms proportional to t provided that the coefficients of cos t and sin t on the right-hand side of eq. (1.109) are set equal to zero. We then arrive at a closed system of equations for a(T₁) and γ(T₁) = σT₁ – θ(T₁):

ȧ(T₁) = –μ a(T₁) + (f/2) sin γ(T₁),
a(T₁) γ̇(T₁) = σ a(T₁) – (3/8) a(T₁)³ + (f/2) cos γ(T₁).  (1.112)

By numerical integration one can show that the solution of eqs (1.112), as in the case of secondary resonances, tends to steady-state values a(T₁) → a, γ(T₁) → γ that satisfy the equations

–μa + (f/2) sin γ = 0,  (1.113)

σa – (3/8) a³ + (f/2) cos γ = 0,  γ = σT₁ – θ.  (1.114)

According to eq. (1.111), to the stationary values a, γ there correspond the main-approximation oscillations

u = a cos(t + θ(T₁)) = a cos(t + σT₁ – γ) = a cos(ωt – γ).  (1.115)


It would be advisable to recall that ω = Ω/ω₀; to pass from the dimensionless time to the real one, t must be replaced by ω₀t. Therefore, in the approximation at hand, the oscillator executes forced oscillations with the frequency Ω of the external force and a constant amplitude depending on the detuning. Let us explore this dependence. Eliminating γ from eqs (1.113) and (1.114), we come to the cubic equation with respect to a²

4μ²a² + a²(8σ – 3a²)²/16 = f²,  (1.116)

which defines the function a(σ) — the frequency-response equation of the system. Next, we discuss the stability of the stationary states. Let us denote

P(a(T₁), γ(T₁)) = –μ a(T₁) + (f/2) sin γ(T₁),
Q(a(T₁), γ(T₁)) = σ – (3/8) a(T₁)² + (f/(2a(T₁))) cos γ(T₁),  (1.117)

and write down the system of eqs (1.112) in the compact form

ȧ(T₁) = P(a(T₁), γ(T₁)),  γ̇(T₁) = Q(a(T₁), γ(T₁)).

To determine the stability of the stationary states, we consider the evolution of small perturbations a(T₁) = a + δa(T₁), γ(T₁) = γ + δγ(T₁). Linearizing the system about an equilibrium leads to a characteristic equation; if the quantity D defined below is positive, the roots of that equation are either real and negative or complex conjugate with negative real part, and the equilibrium states are stable. Equation (1.116) defines a resonance curve (frequency-response curve) in the plane (σ, a²). Because it has one or three roots, the function a(σ) is multivalued within a certain region of values of the detuning σ, and the resonance curves are more diverse in character than in the linear case. Let us look into the behavior of a(σ) at a fixed value of f. Differentiating eq. (1.116), we find the slope of the tangent

da/dσ = 8a(3a² – 8σ)/D,  D = 64μ² + (8σ – 3a²)(8σ – 9a²),  (1.124)

to the frequency-response curve. Now we construct the curve D = 0 in the plane (σ, a²); it does not intersect the line (see Fig. 1.9)

σ = 3a²/8.  (1.125)

The quantity D is less than zero (D < 0) inside the shaded area and greater than zero (D > 0) outside it. If the curve a(σ) enters the region D < 0, its derivative is positive up to the straight line σ = 3a²/8 and negative as σ increases further (see curves 1 and 2 in Fig. 1.9). As follows from eq. (1.124), the resonance curve intersects the curve D = 0 with a vertical tangent (da/dσ = ∞). In the limiting case a single resonance curve just touches the curve D = 0 at the point H, whose largest ordinate equals a² = 8μ/3. Since the point H lies on the curve D = 0, its coordinates are


Figure 1.9


σ = 3a²/4 = 2μ,  a² = 8μ/3.  (1.126)

The point H with these coordinates corresponds to the critical amplitude of the external force

f_k = 8μ^{3/2}/√3.  (1.127)

The amplitude is small for small values of f and σ, and the term 3a² on the left-hand side of eq. (1.116) can be neglected. Then we obtain the dependence

a² = f²/(4(σ² + μ²))  (1.128)

of the amplitude on the detuning, with a maximum at the point σ = 0 (curve 1 in Fig. 1.9). This dependence is typical of linear forced oscillations. When f < f_k, eq. (1.116) has one root, and the curve a(σ) deforms toward higher values of σ as f increases. When f > f_k, eq. (1.116) has three roots, and the character of the forced oscillations changes dramatically. As σ increases from –∞, the derivative da/dσ is positive up to the straight line σ = 3a²/8, where it vanishes. Further on, a² decreases as far as the curve D = 0, where the frequency-response curve has a vertical tangent. In the region D < 0 the derivative da/dσ again takes positive values up to the curve D = 0; with a further increase of σ, the amplitude drops to zero. At the point where line (1.125) and the resonance curve intersect, the amplitude reaches its maximum

a_max = f/(2μ)  (1.129)

at σ = 3f²/(32μ²). Figure 1.9 displays the frequency-response curves in the plane (σ, a²) for different f. As f grows (f > f_k), the nonlinearity causes the resonance peaks to bend to the right. The foregoing shows that there are frequency regions where to a fixed value of σ there correspond two or three values of the amplitude a (curves 3 and 4). On the branch DE of curve 4, the quantity D is less than zero (D < 0); according to the analysis carried out above, oscillations with the corresponding amplitudes are unstable. This gives rise to a new nonlinear phenomenon: a jump in the stationary amplitude when the lower overhanging part of the resonance curve is passed slowly. A further rise in frequency produces a decrease in the amplitude to the point D; the amplitude then changes abruptly from D to F, bypassing the unstable branch DE. A similar phenomenon is also observed as the frequency decreases while the amplitude grows along the curve CE: the system jumps from the point E to the point B and then follows the upper branch of the resonance curve. Note that the jumps occur only for the stationary amplitude. In real systems, the true amplitude


Figure 1.10

Figure 1.11

is nonstationary during the transition. The jump excites the natural oscillations, which eventually damp out, establishing the new stationary amplitude. The breakdown phenomenon is also observed at a fixed frequency Ω and a slowly varying amplitude f. The amplitude-breakdown phenomenon is illustrated by the numerical calculations in Figs 1.10 and 1.11. The maximum of resonance curve 4 (see Fig. 1.9) lies at σ = 4.593 for f = 7. As shown in Fig. 1.10, the system undergoes forced oscillations with the period of the driving force for the initial data u(0) = 0.3, u′(0) = 3.83 and the parameter values f = 7, μ = 1, ε = 0.02, σ = 4.593.


The amplitude a estimated from formula (1.116) is 3.5, in good agreement with the numerical data. Numerical calculations of eqs (1.107) and (1.111) with the slightly modified initial data u(0) = –0.7, u′(0) = 0.2 and the parameter values f = 7, μ = 1, ε = 0.02, σ = 4.65 (see Fig. 1.11) display the breakdown of the oscillation amplitude as the detuning increases.
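The multivaluedness behind the jump can be seen by counting the positive roots a² of eq. (1.116). A small sketch, using the parameter values above (μ = 1, f = 7 versus a sub-critical f = 2) and locating the roots by a sign scan:

```python
import math

mu = 1.0
f_k = 8*mu**1.5/math.sqrt(3)            # critical force amplitude, eq. (1.127)

def n_roots(f, sigma):
    # count the positive roots z = a^2 of eq. (1.116):
    # 4 mu^2 z + z (8 sigma - 3 z)^2 / 16 = f^2
    g = lambda z: 4*mu**2*z + z*(8*sigma - 3*z)**2/16 - f**2
    zs = [i*0.001 for i in range(1, 20001)]
    s = [g(z) > 0 for z in zs]
    return sum(s[i] != s[i + 1] for i in range(len(s) - 1))

sigma = 3*7.0**2/(32*mu**2)             # detuning of the resonance peak for f = 7
print(f_k, n_roots(7.0, sigma), n_roots(2.0, sigma))
```

Above f_k the response equation has three roots (the middle one is the unstable branch), below f_k only one, which is exactly the geometry of curves 1–4 in Fig. 1.9.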

1.5 Self-Oscillations: Limit Cycles

Of all the services that can be rendered to science, the introduction of new ideas is the greatest. — Sir Joseph John Thomson

Let us explore the motion of a mechanical system under the action of a conservative force F(x) and an arbitrary dissipative force G(ẋ):

ẍ = F(x) + G(ẋ).  (1.130)

We multiply both sides of this equation by ẋ and represent it in the form

d/dt [ẋ²/2 + U(x)] = G(ẋ) ẋ.

Since dissipative forces always act in the direction opposite to the velocity, G(ẋ)ẋ < 0, and the system's total energy E = ẋ²/2 + U(x) always diminishes. As the potential energy is bounded from below (U(x) ≠ –∞), the system tends to a state of equilibrium, ẋ(t → ∞) = 0, F(x(t → ∞)) = 0, and no periodic motions are possible. Suppose now that the system is subjected to an additional active force H(ẋ, x) that adds an amount of energy ΔE during each oscillation period, and let ΔE be the larger, the smaller the amplitude. Then near equilibrium, where the energy dissipated per period is less than ΔE, the oscillation amplitude tends to grow. If the amplitude is large at the initial time, the oscillations are damped, because ΔE is less than the amount of energy dissipated by the system. Therefore, in such systems a regime of periodic oscillations with constant amplitude may be established when ΔE equals the energy dissipated during one period. “As far back as fifty years ago A.A. Andronov named systems capable of generating such oscillations self-oscillating. He was the first to endow them with a clear mathematical content, having related self-oscillations to the Poincaré limit cycles” [6].


Alexander Alexandrovich Andronov (April 11, 1901, Moscow to October 31, 1952, Gorky) was an outstanding Soviet physicist (electrical engineering, radiophysics and applied mechanics), Full Member of the USSR Academy of Sciences and founder of the theory of nonlinear oscillations. He introduced the concept and the mathematical definition of self-oscillations and developed their theory, relating it to the qualitative theory of differential equations, topology and the general theory of stability of motion. A.A. Andronov laid the foundations of the theory of nonlinear oscillations and advanced the method of point mappings. As distinct from forced oscillations, self-oscillations are not caused by an external periodic force. Internal interactions, acting as a nonperiodic energy source, initiate the self-oscillations, which are regulated by the system's feedback. The simplest self-oscillating system is a clock, in which the spring tension (or a battery) is the source of energy, the balance wheel is the oscillatory system and the escapement provides the feedback. To the self-oscillatory regime in the phase plane there corresponds an isolated closed curve, a limit cycle. Poincaré was the first to introduce this concept. Jules Henri Poincaré (April 29, 1854 to July 17, 1912) was an eminent French mathematician, physicist, philosopher and theorist of science, head of the Paris Academy of Sciences from 1906 and of the French Academy from 1908. Poincaré is called one of the greatest mathematicians of all time, as well as the last universal mathematician, a person capable of embracing all the mathematical fields of his epoch.

A limit cycle is a closed integral curve, for example the circle in Fig. 1.12, which splits the whole phase plane into two areas, internal and external. In each of the areas, all the phase trajectories tend to the limit cycle over time; in its vicinity they are spirals winding onto the cycle (see Fig. 1.12). As a consequence, the amplitude and frequency of the self-oscillations are independent of the initial conditions and are defined only by the parameters of the system.


Figure 1.12

To get an idea of the typical methods for solving the equations of motion of oscillating systems with a small parameter, take the Van der Pol equation by way of example:

ẍ + (bx² – ε) ẋ + x = 0,  ε > 0,  b > 0.

Having made the replacement x(t) → cx(t), b → ε/c², we can write it in the form

ẍ + ε(x² – 1) ẋ + x = 0.  (1.131)

The change in the total energy of the system is

d/dt (ẋ²/2 + x²/2) = ε(1 – x²) ẋ².  (1.132)

It is not hard to analyze the Van der Pol equation qualitatively. For small oscillations, x² < 1, the coefficient of ẋ in eq. (1.131), which describes friction, is negative. This situation is responsible for “energy pumping” into the system at small x and leads to an increase in the oscillation amplitude. If x² becomes greater than unity, the friction becomes positive and causes the oscillation amplitude to decay. Owing to these two opposing influences, the growth of the oscillations gradually slows down, and the motion approaches without limit a regime of constant amplitude. The phase portrait in Fig. 1.12 illustrates such a self-oscillation regime. A regime that does not require an initial push is known as a soft excitation regime. In systems with hard excitation, however, the self-oscillations grow spontaneously only from some finite initial amplitude.

1.5.1 An Analytical Solution of the Van der Pol Equation for Small Nonlinearity Parameter Values

An approximate analytical solution of this equation can be found only for small or large values of the parameter ε. When ε = 0, the oscillations x = a cos(t + θ) occur with a constant amplitude a and phase θ. When 0 < ε ≪ 1, following the generalized method of averaging (Section 1.3.4), we seek the solution in the form x = a cos ψ + ε x₁(a, ψ) + ⋯, where the amplitude and phase satisfy

a′ = ε h₁(a) + ε² h₂(a),  (1.133)

ψ′ = 1 + ε g₁(a) + ε² g₂(a).  (1.134)

Substituting eqs (1.133) and (1.134) into eq. (1.131) and equating the first-power coefficients of ε, we obtain

∂²x₁/∂ψ² + x₁ – 2a g₁(a) cos ψ + (1/4)(4a – a³ – 8h₁(a)) sin ψ – (1/4) a³ sin 3ψ = 0.  (1.135)

Conditions (1.81) immediately yield

g₁(a) = 0,  h₁(a) = (1/8)(4a – a³).

Consequently, the solution of eq. (1.135) can be written as

x₁ = –(1/32) a³ sin 3ψ.  (1.136)

In a similar manner we find

h₂(a) = 0,  g₂(a) = (1/256)(–32 + 32a² – 7a⁴).

Then, in the second-order approximation, a and ψ vary according to the equations

a′ = (ε/8)(4a – a³),  (1.137)

ψ′ = 1 + (ε²/256)(–32 + 32a² – 7a⁴).  (1.138)

Since we are concerned with the solutions of eq. (1.137), whose right-hand side is a polynomial, it is convenient to represent their properties on the phase line a (Fig. 1.13), where the dots denote the roots of the polynomial on the right-hand side of eq. (1.137). These roots are equilibrium points. The arrows indicate the directions of change of a, determined by the sign of the polynomial in each region.


Figure 1.13

Figure 1.14

As seen from the figure, the roots a = 0, a = ±2 are fixed points (stationary solutions). The static regime a = 0 is unstable: any small initial perturbation excites oscillations with increasing amplitude, which tend to the limiting steady state a = ±2, while initial excitations with |a| > 2 decrease in amplitude to the stationary value |a| = 2. Multiplying eq. (1.137) by a and integrating the evolution of the amplitude by separation of variables, we find

a = 2a₀ e^{εt/2}/√(4 + a₀²(e^{εt} – 1)),  a₀ = a(t = 0).  (1.139)

It can be seen that if the initial value of the amplitude is zero, it remains zero for all times. However, the static regime is unstable, and any small random perturbation gives rise to oscillations whose amplitude tends to a = 2 for ε > 0 regardless of the initial conditions. The plots in Figs 1.14 and 1.15 present numerical calculations of eq. (1.131) at ε = 0.5 with the initial data x(0) = 4, x′(0) = 0 and x(0) = 0.5, x′(0) = 0, respectively. Comparing the figures, we see that the numerical calculations are in good agreement with the approximate solution

x = a cos[t (1 – ε²/16)] + O(ε).

(1.140)
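The amplitude law (1.139) is easy to confront with a direct integration of eq. (1.131). A minimal sketch (ε and a₀ are illustrative):

```python
import math

eps = 0.1
vdp = lambda t, y: [y[1], -y[0] - eps*(y[0]**2 - 1)*y[1]]

def rk4(rhs, y, t, dt):
    # one classical Runge-Kutta step
    k1 = rhs(t, y)
    k2 = rhs(t + dt/2, [a + dt/2*c for a, c in zip(y, k1)])
    k3 = rhs(t + dt/2, [a + dt/2*c for a, c in zip(y, k2)])
    k4 = rhs(t + dt, [a + dt*c for a, c in zip(y, k3)])
    return [a + dt/6*(p + 2*q + 2*r + s)
            for a, p, q, r, s in zip(y, k1, k2, k3, k4)]

a0 = 0.5
y, t, dt = [a0, 0.0], 0.0, 0.001       # x(0) = a0, x'(0) = 0
peak = 0.0
while t < 200.0:
    y = rk4(vdp, y, t, dt)
    t += dt
    if t > 180.0:                      # measure the settled amplitude
        peak = max(peak, abs(y[0]))

# envelope predicted by eq. (1.139)
a = lambda t: 2*a0*math.exp(eps*t/2)/math.sqrt(4 + a0**2*(math.exp(eps*t) - 1))
print(peak, a(200.0))
```

Both the numerical amplitude and the analytical envelope approach the limit-cycle value a = 2, independently of a₀.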


Figure 1.15

1.5.2 An approximate solution of the Van der Pol equation for large nonlinearity parameter values

Let us discuss the behavior of solutions of the Van der Pol equation for large values of the parameter ε. Written in the form

d/dt [ẋ + ε(–x + x³/3)] + x = 0,  (1.141)

it is equivalent to the system of two first-order equations for the variables y, x (the Lienard equations):

ẋ + ε(–x + x³/3) = ε y,  (1.142)

ε ẏ + x = 0.  (1.143)

From the equation for the phase trajectories,

dy/dx = ẏ/ẋ = 3x/(ε²(x³ – 3x – 3y)),  (1.144)

it follows for ε ≫ 1 that

dy/dx → 0,  ε → ∞.

Hence, y = c everywhere except in the neighborhood of the curve

y = –x + x³/3.  (1.145)
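The slow–fast picture described here can be reproduced by integrating the Lienard system (1.142)–(1.143) directly; the value of ε below is illustrative:

```python
# Lienard form of the Van der Pol equation, eqs (1.142)-(1.143)
eps = 10.0
lie = lambda t, y: [eps*(y[1] + y[0] - y[0]**3/3), -y[0]/eps]

def rk4(g, y, t, dt):
    # one classical Runge-Kutta step
    k1 = g(t, y)
    k2 = g(t + dt/2, [a + dt/2*c for a, c in zip(y, k1)])
    k3 = g(t + dt/2, [a + dt/2*c for a, c in zip(y, k2)])
    k4 = g(t + dt, [a + dt*c for a, c in zip(y, k3)])
    return [a + dt/6*(p + 2*q + 2*r + s)
            for a, p, q, r, s in zip(y, k1, k2, k3, k4)]

y, t, dt = [2.0, 0.0], 0.0, 0.0005
xmax, crossings, prev = 0.0, 0, y[0]
while t < 100.0:
    y = rk4(lie, y, t, dt)
    t += dt
    if t > 50.0:                       # on the limit cycle by now
        xmax = max(xmax, abs(y[0]))
        if prev * y[0] < 0:
            crossings += 1
    prev = y[0]
print(xmax, crossings)
```

The trajectory creeps along the branches of curve (1.145) and jumps rapidly between them: x swings between roughly –2 and 2, the characteristic relaxation oscillation.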



Figure 1.16

Thus, in the first approximation for large values of the parameter ε, the Van der Pol equation reduces to a system of two equations, one of which is eq. (1.143); the second is y = c everywhere except in the vicinity of curve (1.145). Let us construct this curve and examine the qualitative behavior of the phase point in the phase plane (y, x) (see Fig. 1.16). All points lying beyond the neighborhood of the curve belong to the horizontal field of directions of the phase trajectories. According to eq. (1.144), dy/dx is an odd function; therefore any point of the phase plane, for example the point P in Fig. 1.16, tends to curve (1.145). The system's dynamics along the curve is described by the first-order equation

ẏ = –x/ε = d/dt (–x + x³/3),

or

(–1 + X²) dX/dt₁ + X = 0,  (1.146)

where X(t₁) = x(t), t₁ = t/ε. If the phase point lies above curve (1.145), then, having reached the curve, it moves along it as far as the point B, because dX/dt₁ < 0 for X > 1 in accordance with eq. (1.146). The origin X = 0 is an unstable fixed point of this equation; it is easy to verify that for small X the solution behaves as X = X₀ e^{t₁}.

characterize positive and negative damping of the oscillations. The small parameters obey E₀ = O(ε), μ = O(ε), ε ≪ 1. When 2|σ| > |E| and the amplitude of the external force is small, the T₀-dependence of the self-oscillations x₀ = 2 cos[T₀ + θ(T₁)]


are characterized by two incommensurable periods. Such oscillations are called quasiperiodic. A different situation is observed when the inequality 2|σ| < |E| holds. Then the 2π-periodic right-hand side of eq. (1.158) vanishes at two fixed points (stationary phase values), one of which is unstable and the other stable. Therefore, when 2|σ| < |E| and T₁ → ∞, the solution of eq. (1.158),

θ(T₁) = –2 tan⁻¹[(E + √(E² – 4σ²) tanh(√(E² – 4σ²) T₁/8))/(2σ)],

tends to the stable fixed point

θ = –2 tan⁻¹[(E + √(E² – 4σ²))/(2σ)].

Finally, the self-oscillating system can be entrained by even a very weak external signal within the range 2|σ| < |E|. The system then oscillates at the frequency of the external field (frequency capture) with a constant initial phase depending on the detuning and on the small external field amplitude (phase capture). This regime is referred to as external synchronization. In general, an external force affects both the amplitude and the phase of the oscillations, so we need to investigate whether the solutions of eqs (1.156) and (1.157) with constant amplitude and phase are stable under small perturbations. The oscillations of system (1.151) are synchronized and described in the main approximation by expression (1.155). As in the case of forced oscillations, the equilibrium states are defined by the system of equations

4a – a³ – 4E cos θ = 0,  σ + (E sin θ)/a = 0.  (1.159)

They imply at once the expression for the constant phase

θ = –arcsin(σa/E)

and the algebraic equation

16σ²a² + a²(4 – a²)² = 16E².  (1.160)

We have just derived an equation that determines the amplitude of the oscillations in terms of the detuning σ and the amplitude of the external force. Now we find the stability boundary of the harmonic oscillations (1.155) with constant amplitude a and phase θ


and satisfying eq. (1.159). To this end, it is sufficient to explore perturbations of the equilibrium states of the form

a(T₁) = a + a₁ e^{λT₁},  θ(T₁) = θ + θ₁ e^{λT₁}  (1.161)

with small amplitudes a₁ and θ₁. Expanding eqs (1.156) and (1.157) in powers of a₁ and θ₁ and retaining only the terms linear in these variables, we obtain a system of two homogeneous equations; its compatibility condition leads to the characteristic equation for the parameter λ:

λ² + pλ + q = 0,  (1.162)

where

p = (1/2)(a² – 2),  q = (1/64)[(a² – 4)(3a² – 4) + 16σ²].  (1.163)

The amplitude curve (1.160) and the characteristic equation are invariant under the replacement a → –a, σ → –σ, E → –E. Consequently, it is convenient to carry out the subsequent analysis in terms of the variable

ρ = a².  (1.164)
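In terms of ρ, the structure of the discriminant of eq. (1.162) becomes transparent and can be checked numerically. A small sketch (the (ρ, σ) pairs are arbitrary illustrative values) verifying the identity D = p²/4 – q = (ρ² – 16σ²)/64 that follows from eq. (1.163), together with the Routh–Hurwitz conditions p > 0, q > 0 at two sample points:

```python
def pq(rho, sigma):
    # p and q of the characteristic equation (1.162), from eq. (1.163)
    p = (rho - 2)/2
    q = ((rho - 4)*(3*rho - 4) + 16*sigma**2)/64
    return p, q

# identity D = p^2/4 - q = (rho^2 - 16 sigma^2)/64, separating nodes from foci
for rho, sigma in [(4.0, 0.2), (1.0, 0.5), (2.5, 1.0)]:
    p, q = pq(rho, sigma)
    D = p*p/4 - q
    assert abs(D - (rho**2 - 16*sigma**2)/64) < 1e-12

p, q = pq(4.0, 0.2)       # near the free-running limit cycle, rho = 4
print(p > 0 and q > 0)    # stable equilibrium
p, q = pq(1.0, 0.5)       # rho < 2: inside the instability region (p < 0)
print(p > 0)
```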

For the equilibrium states to be stable, the conditions p > 0, q > 0 are necessary. Then the roots

λ₁,₂ = –p/2 ± √(p²/4 – q)

of eq. (1.162) are either real and negative, if the discriminant D = p²/4 – q = (1/64)(ρ² – 16σ²) is positive, or complex conjugate with negative real part if D < 0. The curves

(ρ – 4)(3ρ – 4) + 16σ² = 0,  ρ² – 16σ² = 0,  ρ = 2

are called separatrices. Figure 1.20 illustrates how they split the (ρ, σ)-plane into regions with different equilibrium states. The curve q = 0 is the ellipse

(3/16)(ρ – 8/3)² + σ² = 1/3,

(1.165)

centered at the point ρ = 8/3, σ = 0. The shaded area of the (ρ, σ)-plane below the straight line ρ = 2 and inside the ellipse is a region of instability of the harmonic


Figure 1.20 (regions of stability and instability in the (ρ, σ)-plane, with amplitude curves for E² = 1/4, 16/27, 32/27, 1, 64/27 and 4)

oscillations. It is easy to show that the equilibrium points there are saddles (inside the ellipse) or unstable nodes and foci. The rest of the plane contains equilibrium points that are stable nodes (ρ > 4|σ|) or stable foci (ρ < 4|σ|). Let us briefly discuss the synchronization regions and the geometric features of the amplitude curves (1.160) in terms of the variable ρ:

16σ²ρ + ρ(4 – ρ)² = 16E².

(1.166)
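The classification of equilibria above is easy to automate. The following sketch (our illustration, not from the book) evaluates p, q of eq. (1.163) and the discriminant at a given point of the (ρ, σ)-plane:

```python
# Classify an equilibrium from eqs (1.162)-(1.163):
# lambda^2 + p*lambda + q = 0 with
# p = (rho - 2)/2,  q = ((rho - 4)*(3*rho - 4) + 16*sigma**2)/64.

def classify(rho, sigma):
    p = (rho - 2.0) / 2.0
    q = ((rho - 4.0) * (3.0 * rho - 4.0) + 16.0 * sigma**2) / 64.0
    disc = p * p / 4.0 - q          # equals (rho**2 - 16*sigma**2)/64
    if q < 0:                        # inside the ellipse q = 0
        return "saddle"
    if p < 0:                        # below the line rho = 2
        return "unstable node" if disc > 0 else "unstable focus"
    return "stable node" if disc > 0 else "stable focus"

print(classify(4.0, 0.5))   # -> stable node (rho > 4|sigma|)
print(classify(2.5, 0.0))   # -> saddle (inside the ellipse)
```

The branch structure mirrors the text: q < 0 gives a saddle, p < 0 an unstable point, and the sign of the discriminant distinguishes nodes from foci.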

In the absence of the force, this equation has the trivial solution ρ = 0 with arbitrary values of σ, and the limit-cycle solution with ρ = 4 (a = 2), σ = 0. Synchronization differs from the forced oscillations of nonlinear oscillators in that the latter decay after the force ceases to act. For small values of the force, the amplitude curve consists of two branches. One branch comprises ovals, close to the ellipses 16σ² + (4 − ρ)² = 4E² (see Fig. 1.20), near the point ρ = 4, σ = 0 in the stable region; the other branch is described by the equation ρ(1 + σ²) = E² for small values of ρ in the unstable region. Since the terms on the left-hand side of eq. (1.166) are positive, the ovals expand as the force increases, crossing the upper part of the ellipse. The derivative of the amplitude curve,

∂σ/∂ρ = −(∂F(ρ, σ)/∂ρ)/(∂F(ρ, σ)/∂σ) = −[3ρ² + 16(1 + σ² − ρ)]/(32σρ),

given by the implicit function F(ρ, σ) = 16σ²ρ + ρ(4 − ρ)² − 16E², is a continuous and single-valued function everywhere except the point ρ = 4/3, σ = 0. At this point, the amplitude of the force is equal to E = 4/√27, and the derivative ∂σ/∂ρ is indeterminate because

∂F(ρ, σ)/∂ρ = ∂F(ρ, σ)/∂σ = 0.

As shown in Fig. 1.20, the two branches there merge into a nonclosed curve with a self-intersection point. With a further increase in the force, the amplitude curve is no longer closed. When E² = 1, it passes through the points ρ = 2, σ = ±1/2, where the straight lines ρ = ±4σ and ρ = 2 touch the ellipse. When E² = 32/27, the amplitude curve touches the ellipse at the points ρ = 8/3, σ = ±1/√3. The amplitude curve has a vertical slope at the points of contact with the ellipse and rapidly descends when |σ| > 1/√3. This statement follows from the expression for its derivative, written in various forms:

∂σ/∂ρ = −[(ρ − 4)(3ρ − 4) + 16σ²]/(32σρ) = −[8E² + (ρ − 4)ρ²]/(16ρ²σ).

The synchronization band Bσ of the oscillations is the interval between the points at which each amplitude curve intersects the boundary of the instability region (bold points in Fig. 1.20). When E² > 32/27, this interval is equal to √(2E² − 1); when 0 < E² < 32/27, the synchronization band Bσ varies from zero to 2/√3.

Figures 1.21 and 1.22 show the results of numerical integration of eq. (1.152) with the initial data x(0) = 1/3, x′(0) = 2 and the parameters

ε = 1/6,  E = 1,  σ = 1/3;
ε = 1/6,  E = 1,  σ = 2,

respectively.

Figure 1.21. The solution x(t) for σ = 1/3 (inside the synchronization band).

Figure 1.22. The solution x(t) for σ = 2 (outside the synchronization band).

It is seen that in the first case, when the value of σ lies within the synchronization band, the oscillations become stationary, with a constant amplitude and initial phase, after the transient regime is finished. Compared with the solutions of system (1.159), their numerical values differ by only a few percent. In the second case, the value of σ lies beyond the synchronization band, and we observe typical quasi-periodic oscillations. The relations

x0 = a(T1) cos[T0 + ψ(T1)] = cos(T0) a(T1) cos ψ(T1) − sin(T0) a(T1) sin ψ(T1),
dx0/dT0 = −a(T1) sin[T0 + ψ(T1)] = −sin(T0) a(T1) cos ψ(T1) − cos(T0) a(T1) sin ψ(T1)

can be regarded as a transformation of the original coordinates into the new coordinates

a(T1) cos ψ(T1),  −a(T1) sin ψ(T1)

in a frame that rotates counterclockwise with the frequency of the external force.
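For readers who want to reproduce runs like those of Figs 1.21 and 1.22, a minimal fixed-step RK4 integration can be sketched as follows. Equation (1.152) itself is not reproduced in this excerpt, so the right-hand side below assumes the standard weakly forced Van der Pol form x″ − ε(1 − x²)x′ + x = εE sin ωt with ω = 1 + εσ; treat it as an illustration under that assumption, not the book's exact equation.

```python
import math

def rk4(f, y, t, h):
    # one classical Runge-Kutta step for a 2-component state y = [x, x']
    k1 = f(t, y)
    k2 = f(t + h/2, [y[i] + h/2*k1[i] for i in range(2)])
    k3 = f(t + h/2, [y[i] + h/2*k2[i] for i in range(2)])
    k4 = f(t + h, [y[i] + h*k3[i] for i in range(2)])
    return [y[i] + h/6*(k1[i] + 2*k2[i] + 2*k3[i] + k4[i]) for i in range(2)]

def simulate(eps, E, sigma, t_end=120.0, h=0.01):
    omega = 1.0 + eps*sigma
    def f(t, y):
        x, v = y
        return [v, eps*(1 - x*x)*v - x + eps*E*math.sin(omega*t)]
    y, t, amps = [1.0/3.0, 2.0], 0.0, []
    while t < t_end:
        y = rk4(f, y, t, h)
        t += h
        if t > t_end - 20.0:        # record the late-time amplitude only
            amps.append(abs(y[0]))
    return max(amps)

# Inside the synchronization band (sigma = 1/3), the motion settles onto a
# bounded periodic orbit with amplitude close to the limit cycle a = 2.
print(simulate(1/6, 1.0, 1/3))
```

With σ = 2 instead, the same routine shows the quasi-periodic beating of Fig. 1.22.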

1.7 Parametric Resonance

Consider a pendulum of length l and mass m whose point of suspension oscillates vertically with amplitude A and frequency 2ω, so that the Cartesian coordinates of the bob are

x = l sin φ,  y = l cos φ + A cos 2ωt. (1.167)

Then, after excluding the total time derivative, the system's Lagrangian, equal to

L = (ml²/2) φ̇² + ml cos φ (g + 4Aω² cos 2ωt),

leads to the equation of motion

φ̈ + (g/l + (4Aω²/l) cos 2ωt) sin φ = 0. (1.168)

For small deviations (sin φ ≈ φ), we obtain a linear equation with periodic coefficients (the Mathieu equation):

φ̈ + ω0²(1 + 2ε cos 2ωt) φ = 0, (1.169)

where

ω0² = g/l,  ε = 2Aω²/(lω0²). (1.170)

The Mathieu equation is a special case of Hill's equation,

ẍ + ω²(t) x = 0,  ω(t + T) = ω(t), (1.171)

which describes the oscillations of systems with one degree of freedom in which an external action reduces to a periodic change of the system's parameters.

1.7.1 The Floquet Theory

It is not permissible to use questionable judgments as soon as we solve a definite problem, whether a problem of mathematics or of physics, posed definitely in terms of mathematics. It then becomes a matter of pure analysis and should be treated as such.
A.M. Lyapunov

This section deals with the general properties of Hill's equation. Floquet was the first to study it in mathematics, and Bloch in solid-state physics. Let x1(t) and x2(t) be two linearly independent solutions of eq. (1.171). Then the functions xi(t + T) (i = 1, 2) are also linearly independent solutions of this equation. It is worth recalling that a second-order equation has exactly two independent solutions. Note that a solution of eq. (1.171) is not always a periodic function x(t) = x(t + T). Generally, the functions xj(t + T) (j = 1, 2) are linear combinations of the xi(t) (i = 1, 2):

x1(t + T) = a11 x1(t) + a12 x2(t),
x2(t + T) = a21 x1(t) + a22 x2(t), (1.172)

with constant coefficients aij (i, j = 1, 2). If the linearly independent solutions xi(t) (i = 1, 2) satisfy the initial conditions

x1(0) = 1, ẋ1(0) = 0;  x2(0) = 0, ẋ2(0) = 1, (1.173)

we immediately get the coefficients

a11 = x1(T),  a21 = x2(T)

from eq. (1.172). These relations and

a12 = ẋ1(T),  a22 = ẋ2(T),

obtained by differentiating eq. (1.172), determine the coefficients aij once the functions x1(t), x2(t) are known. Equations (1.172) can be written in matrix form as

X(t + T) = A X(t), (1.174)


where

X = (x1, x2)ᵀ,  A = ( a11  a12 ; a21  a22 ).

We now transform eq. (1.174) to a simpler form. Let us put X(t) = B Y(t), where B is a constant nonsingular matrix and Y = (y1, y2)ᵀ is a column vector. Hence X(t + T) = B Y(t + T) = A X(t) = A B Y(t), so

Y(t + T) = B⁻¹AB Y(t). (1.175)

The matrix B can be chosen so that the matrix B⁻¹AB takes the diagonal form

B⁻¹AB = ( λ1  0 ; 0  λ2 ). (1.176)

The eigenvalues λ1, λ2 are defined as the roots of the equation

det(B⁻¹AB − λI) = det(A − λI) = λ² − (a11 + a22)λ + det A = 0, (1.177)

where I is the identity matrix. Given that det A = x1(T)ẋ2(T) − x2(T)ẋ1(T) is the time-independent Wronskian, equal to unity for the initial conditions (1.173), we arrive at the solution of eq. (1.177) in the form

λ1,2 = α ± √(α² − 1),  α = (a11 + a22)/2. (1.178)

We introduce the Lyapunov characteristic exponents

γi = (1/T) ln λi  (i = 1, 2) (1.179)

and rewrite eq. (1.175) as yi(t + T) = e^(γiT) yi(t). Thus, after multiplying by e^(−γi(t+T)), we have

yi(t + T) e^(−γi(t+T)) = e^(−γit) yi(t). (1.180)


This means that e^(−γit) yi(t) is a periodic function Fi(t) with period T. As a result, the general solution of the Hill equation for λ1 ≠ λ2 can be written in the form

x(t) = c1 y1(t) + c2 y2(t) = c1 e^(γ1t) F1(t) + c2 e^(−γ1t) F2(t). (1.181)

We have here taken into account the relation γ1 + γ2 = 0, a corollary of the equation λ1λ2 = 1. The general form of the solutions of the Hill equation depends on the value of the real parameter α. When |α| < 1, both roots λ1, λ2 are complex conjugate, |λ1| = |λ2| = 1, and γ1 = iΩ (Ω ∈ ℝ). Since x(t) is a real-valued function, the constants c1, c2 and the periodic functions F1, F2 involved in eq. (1.181) must be complex conjugate quantities. Let

c1 = (c/2) e^(iδ),  F1 = a(t) e^(iχ(t)),

where a, χ are real periodic functions with period T. Then the solution of the Mathieu equation is

φ(t) = c a(t) cos(δ + Ωt + χ(t)). (1.182)

It is a combination of functions with the periods T1 = T and T2 = 2π/Ω, and it describes non-periodic motion when these periods are incommensurable, and periodic motion in the opposite case.

Aleksandr Mikhailovich Lyapunov (05.25.1857, Yaroslavl – 11.03.1918, Odessa) was an outstanding Russian mathematician and engineer, a Corresponding Member of the St. Petersburg Academy of Sciences from 1900 and a Full Member from 1901, and a representative of the St. Petersburg mathematical school created by P.L. Chebyshev. He taught at the St. Petersburg, Kharkov and Kazan universities and was a Foreign Member of the Accademia dei Lincei, a Corresponding Member of the Paris Academy of Sciences, a Foreign Member of the Circolo Matematico di Palermo, and an honorary member of the Kharkov Mathematical Society and of other scientific societies. He was the creator of the theory of stability of motion, one of the most difficult problems of natural science, and the author of fundamental research on the figures of equilibrium of a rotating fluid. In his work "The general problem of the stability of motion" (1892), he proposed new, rigorous general methods for solving problems of stability of motion. One of these methods, based on the concept of the so-called Lyapunov function, provided important criteria of stability of solutions that are applicable in practice. The research methods introduced by Lyapunov are the underpinning
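The criterion (1.178) lends itself to a direct numerical test: integrate the two solutions with the initial conditions (1.173) over one period T = π/ω of the coefficient of the Mathieu equation (1.169), assemble the matrix A, and check whether |α| = |a11 + a22|/2 exceeds unity. The RK4 integrator and parameter values below are our illustration, not the book's:

```python
import math

# Numerical Floquet analysis of the Mathieu equation (1.169),
#   x'' + w0**2 * (1 + 2*eps*cos(2*w*t)) * x = 0,
# over one period T = pi/w of the periodic coefficient.

def alpha(w0, eps, w, steps=20000):
    T = math.pi / w
    h = T / steps
    def acc(t, x):
        return -w0**2 * (1.0 + 2.0*eps*math.cos(2.0*w*t)) * x
    def integrate(x, v):            # RK4 for x'' = acc(t, x)
        t = 0.0
        for _ in range(steps):
            k1x, k1v = v, acc(t, x)
            k2x, k2v = v + h/2*k1v, acc(t + h/2, x + h/2*k1x)
            k3x, k3v = v + h/2*k2v, acc(t + h/2, x + h/2*k2x)
            k4x, k4v = v + h*k3v, acc(t + h, x + h*k3x)
            x += h/6*(k1x + 2*k2x + 2*k3x + k4x)
            v += h/6*(k1v + 2*k2v + 2*k3v + k4v)
            t += h
        return x, v
    x1T, v1T = integrate(1.0, 0.0)  # a11 = x1(T), a12 = x1'(T)
    x2T, v2T = integrate(0.0, 1.0)  # a21 = x2(T), a22 = x2'(T)
    return 0.5 * (x1T + v2T)        # half-trace alpha of eq. (1.178)

# |alpha| > 1 inside the resonance tongue (w = 0.97 near w0 = 1, eps = 0.1),
# |alpha| < 1 outside it (w = 1.07, eps = 0.01).
print(abs(alpha(1.0, 0.1, 0.97)) > 1.0, abs(alpha(1.0, 0.01, 1.07)) < 1.0)
# -> True True
```

The same routine traces out the stability chart of the Mathieu equation when scanned over a grid of (ω, ε) values.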


of other parts of the theory of differential equations. A.M. Lyapunov also made a great contribution to mathematical physics, particularly to potential theory. He gave a simple and rigorous proof of the central limit theorem in probability theory, where he developed an original and extremely fruitful method of characteristic functions, which is widely used in the modern theory of probability.

In the case |α| > 1, the quantity γ is a real number. For γ > 0, the second term in eq. (1.181) decays rapidly and the amplitude x(t) grows fast with time:

x(t) ≈ c1 e^(γt) F1(t). (1.183)

This phenomenon is called parametric resonance.

For |α| = 1, the eigenvalues λ1 = λ2 = ±1 coincide and, as follows from the linear-algebra theorem on the reduction of a matrix to Jordan form, the matrix B⁻¹AB has the form

B⁻¹AB = ( λ1  0 ; 1  λ1 ),  λ1 = ±1. (1.184)

Then, instead of relations (1.175), one is led to

y1(t + T) = λ1 y1(t),
y2(t + T) = λ1 y2(t) + y1(t). (1.185)

The first equation implies that

y1(t) = e^(γ1t) F1(t),  γ1 = (1/T) ln λ1,  F1(t + T) = F1(t), (1.186)

and the second equation reduces to

e^(−γ1(t+T)) y2(t + T) = e^(−γ1t) y2(t) + e^(−γ1T) F1(t).

The latter's solution is

y2(t) = e^(γ1t) [ F2(t) + (t/(Tλ1)) F1(t) ],  F2(t) = F2(t + T). (1.187)

Therefore, when |α| = 1, one of the solutions of the Mathieu equation always describes motion growing in time. The second independent solution is periodic, with either the period T (λ1 = 1) or 2T (λ1 = −1). The values |α| = 1 form the boundary between the regions of stable and unstable solutions and are called transient values.

How does resonance in linear systems with forced oscillations differ from the parametric one? Under forcing, resonance occurs when the frequency of the perturbation coincides with the natural frequency of the system.


This gain is due to an influx of energy, and the resonance takes place regardless of whether the system is in equilibrium at the initial time. The oscillation amplitude increases proportionally to t (an arithmetic progression). Parametric resonance, by contrast, is characterized by instability of the equilibrium: any small perturbation makes the system oscillate, with many excitation frequencies, and the oscillation amplitude rises exponentially (a geometric progression).

To date, Hill's equation has been studied in detail and solved only for some special types of the function ω(t). At the same time, the conditions for parametric resonance to occur have been investigated in detail for the Mathieu equation (1.169). Its characteristic exponents γ depend only on the values of ω, ε and do not depend on the initial data, so for each pair of values ω, ε the stability region in the (ω, ε)-plane can be established; the appropriate charts are given in special monographs. Here we shall dwell upon the conditions for parametric resonance to arise at small ε. Substituting the straightforward expansion

φ(t) = φ0(t) + ε φ1(t) + ε² φ2(t) (1.188)

into eq. (1.169) and setting the coefficients of like powers of ε equal to zero, we get

φ̈0 + ω0² φ0 = 0, (1.189)
φ̈1 + ω0² φ1 + 2φ0 ω0² cos 2ωt = 0, (1.190)
φ̈2 + ω0² φ2 + 2φ1 ω0² cos 2ωt = 0. (1.191)

The general solution of eq. (1.189), φ0 = a cos(ω0t + β), after substitution into eq. (1.190), leads to the following expression for φ1:

φ1 = (aω0²/(4ω)) [ cos((2ω − ω0)t − β)/(ω − ω0) + cos((2ω + ω0)t + β)/(ω + ω0) ].

It contains small denominators at ω = ±ω0. When it is substituted into eq. (1.191), resonant summands appear in the expression for φ2; they lead to secular terms proportional to t. As a consequence, the presence of the secular terms in φ and the appearance of the small denominators make it impossible to use the direct expansion.


If one considers the higher-order terms in expansion (1.188), small denominators arise when

ω = 0,  ω = ±ω0/n  (n = 1, 2, 3, …).

Let us find an approximate expression for φ(t) at small ε by means of the method of multiple scales. We plug the expansion φ(t) = φ0(t, εt) + ε φ1(t, εt) + O(ε²) into eq. (1.169) and equate the coefficients of ε⁰ and ε¹. We come up with the following result:

∂²φ0/∂T0² + ω0² φ0(T0, T1) = 0; (1.192)

∂²φ1/∂T0² + ω0² φ1(T0, T1) + 2 ∂²φ0/∂T0∂T1 + 2ω0² cos(2ωT0) φ0(T0, T1) = 0. (1.193)

As before, we have designated t = T0, εt = T1. The substitution of the general solution of eq. (1.192),

φ0(T0, T1) = a(T1) cos(ω0T0 + β(T1)), (1.194)

into eq. (1.193) reduces it to the following form:

∂²φ1/∂T0² + ω0² φ1 + ω0² a(T1) cos[(2ω − ω0)T0 − β(T1)] + ω0² a(T1) cos[(2ω + ω0)T0 + β(T1)] − 2ω0 a′(T1) sin(ω0T0 + β(T1)) − 2ω0 a(T1) β′(T1) cos(ω0T0 + β(T1)) = 0. (1.195)

We analyze the region of frequencies ω near ω0, introducing the detuning parameter σ:

ω = ω0 + εσ. (1.196)

In this case, the left-hand side of eq. (1.195) contains, as resonant, the third term,

ω0² a(T1) cos[(2ω − ω0)T0 − β(T1)] = ω0² a(T1) cos[ω0T0 + 2σT1 − β(T1)],

as well as the fifth and sixth terms. Next, we expand the sum of these terms in the functions cos ω0T0 and sin ω0T0. In order to eliminate the secular terms from the solution


for φ1, we set the coefficients in front of these functions equal to zero. In the end, we have the equations

a′(T1) = −(1/2) ω0 a(T1) sin γ(T1),
γ′(T1) = 2σ − ω0 cos γ(T1), (1.197)

where

γ(T1) = 2[σT1 − β(T1)]. (1.198)

Hence, employing the variables

w1(T1) = a(T1) cos(γ(T1)/2),  w2(T1) = a(T1) sin(γ(T1)/2),

we can write down the above expressions as a system of linear differential equations:

w1′(T1) = −(σ + ω0/2) w2(T1),
w2′(T1) = (σ − ω0/2) w1(T1). (1.199)

The solutions of this system are sought in the form w1(T1) = v1 e^(sT1) and w2(T1) = v2 e^(sT1), which turns out to be possible under the conditions

v1 = −v2(2σ + ω0)/(2s),  s² = −σ² + (ω0/2)². (1.200)

Then simple transformations of the solution (1.194) yield

φ(t) = w1 cos ωt + w2 sin ωt + O(ε), (1.201)

where

w1 = −((σ + ω0/2)/s) (c1 e^(εst) − c2 e^(−εst)),
w2 = c1 e^(εst) + c2 e^(−εst). (1.202)

Here ω is given by eq. (1.196), and c1 and c2 are functions of T2, T3, …, which in our approximation are constants. The condition for parametric resonance is that s be real, which takes place within a narrow range of frequencies ω of width εω0. Indeed, condition (1.200), which requires −ω0/2 < σ < +ω0/2, together with eq. (1.196), specifies the frequency interval

ω0(1 − ε/2) < ω < ω0(1 + ε/2) (1.203)


to excite the parametric resonance. If s is an imaginary quantity, φ merely oscillates as t increases. Therefore, the equations

ω = ω0(1 ± ε/2) (1.204)

give the boundaries separating the stability and instability regions in the first approximation.

The presence of small damping narrows the instability region. An extra term 2εβφ̇ (β > 0) included in eq. (1.169) leads to the appearance of the factor e^(−2εβt) on the right-hand side of eq. (1.202). Therefore, the amplitude grows when s − 2β > 0 (s > 0), where s is determined by formula (1.200). In this case, to the resonance region there corresponds the condition

ω0 − ε√((ω0/2)² − 4β²) < ω < ω0 + ε√((ω0/2)² − 4β²), (1.205)

which narrows the parametric resonance region. Parametric resonance also occurs at the frequencies ω = ω0/n, where n is an integer; the width of the resonance region then decreases as εⁿω0.

Figures 1.24 and 1.25 illustrate the results of solving eq. (1.169) numerically with the initial conditions φ(0) = 0, φ′(0) = 0.01 and the parameters ω0 = 1, ε = 0.01, ω = 1.07 and ω0 = 1, ε = 0.1, ω = 0.97, respectively.

Figure 1.24. Beat-like oscillations φ(t) outside the resonance region (ω0 = 1, ε = 0.01, ω = 1.07).

Figure 1.25. Growing oscillations φ(t) inside the resonance region (ω0 = 1, ε = 0.1, ω = 0.97).

As seen from the first plot, outside the parametric resonance the solutions are beat-like oscillations. In the second case, conditions (1.203) are met, and the oscillations increase exponentially with time.
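The behaviour of Figs 1.24 and 1.25 is easy to reproduce directly from eq. (1.169); the fixed-step RK4 integrator below is our illustration:

```python
import math

# Direct integration of the Mathieu equation (1.169): bounded beats
# outside the resonance band (1.203), exponential growth inside it.

def integrate(w0, eps, w, t_end=100.0, h=0.005):
    def acc(t, x):
        return -w0**2 * (1.0 + 2.0*eps*math.cos(2.0*w*t)) * x
    x, v, t, peak = 0.0, 0.01, 0.0, 0.0      # phi(0) = 0, phi'(0) = 0.01
    while t < t_end:
        k1x, k1v = v, acc(t, x)
        k2x, k2v = v + h/2*k1v, acc(t + h/2, x + h/2*k1x)
        k3x, k3v = v + h/2*k2v, acc(t + h/2, x + h/2*k2x)
        k4x, k4v = v + h*k3v, acc(t + h, x + h*k3x)
        x += h/6*(k1x + 2*k2x + 2*k3x + k4x)
        v += h/6*(k1v + 2*k2v + 2*k3v + k4v)
        t += h
        peak = max(peak, abs(x))
    return peak

beats = integrate(1.0, 0.01, 1.07)   # outside the band: stays near 0.01
grow  = integrate(1.0, 0.1, 0.97)    # inside the band: parametric growth
print(beats < 0.05, grow > 0.1)      # -> True True
```

The growth rate inside the band is close to εs with s from eq. (1.200), here ε s = 0.1·√(0.25 − 0.09) = 0.04.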

2 Integrable Systems

A little man chirred at a blackboard
Like a small gray grasshopper,
Inviting us to wander around the worlds
Which not everyone can dream of.
The poem "A Theorist" by V.E. Zakharov

2.1 Equations of Motion for a Rigid Body

In mechanics, a rigid body is a system of material points with holonomic constraints represented by the equations

rij = cij, (2.1)

where rij is the distance between the points i and j, and the cij are constants. To describe the motion of a rigid body, it is convenient to split it mentally into small particles, each of which is a material point to which the laws of mechanics apply. The rigid body is thus regarded as a set of tightly linked material points, with the internal forces between the particles significantly stronger than the external forces applied to the body. A continuum description requires viewing the rigid body as a continuous medium composed of elementary volumes dV with density ρ and mass dm = ρ dV.

In spite of having an infinite number of material points, the rigid body as a whole is defined in space by six coordinates; that is, it has six degrees of freedom. Indeed, if a rigid body is fixed at a point O, it can only rotate around that point. After fixing the rigid body at a second point A, it can only rotate about the axis OA passing through these two points. Finally, having secured the rigid body at a third point B that does not lie on the axis OA, we fix every material point of the solid. Constancy of the distances imposes three holonomic constraints on the 3 · 3 = 9 coordinates of the points A, B, O and leaves only six free coordinates, or six degrees of freedom, of the rigid body.

It is convenient to introduce these six degrees of freedom as follows. We express the radius vector ρ(t) of a point M relative to a fixed coordinate system both through the radius vector R(t) of an arbitrary point O of the solid and through the radius vector r(t) = OM (see Fig. 2.1):

ρ(t) = R(t) + r(t). (2.2)

After picking the point O on the body, we account for three degrees of freedom, because O can be moved in the x, y and z directions; three further degrees of freedom then suffice to determine the motion of the body fixed at that point. Any point of the body can then move only over the surface of a sphere centered

at the point O; the radius of the sphere is the distance between the two points.

Figure 2.1. The fixed frame (x1′, x2′, x3′) at the point A, the moving frame (x1, x2, x3) at the point O, and the radius vectors ρ(t), R(t) and r(t) of a point M.

Let us construct an immobile system of coordinates (the space frame) k with the basis unit vectors e′k (k = 1, 2, 3) at the point O:

e′k · e′i = δik. (2.3)

Their directions coincide with the axes of the fixed system located at the point A (see Fig. 2.1). To describe the rigid body's rotation about the point O, it is convenient to introduce a "mobile" coordinate system (the body frame) K centered at that point, with the orthonormal basis ek(t) (k = 1, 2, 3):

ek(t) · ei(t) = δik. (2.4)

The new system is rigidly attached to the rigid body; it rotates with the latter about the point O. With respect to the two reference frames, the vector r(t) possesses the following properties:
(1) It has time-independent coordinates in the moving system K:

r(t) = xi ei(t). (2.5)

(2) It has time-dependent coordinates x1′(t) = x′(t), x2′(t) = y′(t), x3′(t) = z′(t) in the fixed system k:

r(t) = xi′(t) e′i. (2.6)

From now on, we use primes to designate the basis unit vectors and the coordinates of vectors in the fixed system k. Thus, the position and dynamics of any point of the solid are completely determined by assigning the coordinates of the vector r(t) in the stationary or the moving coordinate system at the point O, together with the coordinates of the vector R(t) in the fixed coordinate system at the point A. In addition, the orientation of the basis ei(t) with respect to the basis unit vectors e′i of the fixed coordinate system is specified by the time-dependent quantities Dik(t) (i, k = 1, 2, 3):

ei(t) = Dik(t) e′k. (2.7)

Substituting these relations into eq. (2.4), we immediately obtain the restriction

Dik(t) Dip(t) = δkp  (k, p = 1, 2, 3) (2.8)

on the matrix D. Therefore, the transformation inverse to eq. (2.7) is

e′k = Dik(t) ei(t). (2.9)

The nine quantities {Dik}, which specify the orientation of the moving axes relative to the fixed ones, are related by the six constraints (2.8), so this orientation is determined by three independent parameters. Equations (2.7) and (2.8) can be written as the matrix equations

e = D(t) e′, (2.10)
D(t) Dᵀ(t) = I, (2.11)

where e(t), e′ are the columns of the vectors ei(t) and e′k, respectively; D(t) is the matrix with elements {Dik(t)}; Dᵀ(t) is the transpose of the matrix D; and I is the identity matrix. A linear transformation of the type (2.9) that satisfies eq. (2.11) is called orthogonal, and the matrices D are orthogonal matrices. Consider their properties. It follows from eq. (2.11) that det(DDᵀ) = (det D)² = 1; in what follows, we consider only those transformations whose determinant equals 1. This condition implies both the relations

εijk Dis Djp Dkq = εspq, (2.12)
εspq Dis Djp Dkq = εijk (2.13)

and the corollaries easily derivable from them by means of eq. (2.8):

εija Dis Djp = εspq Daq, (2.14)
εspa Dis Djp = εijk Dka. (2.15)

Here we have introduced the absolutely antisymmetric quantities εijk (i, j, k = 1, 2, 3), which take three different values:
(a) 0, if any two indices coincide;
(b) +1, if the indices i, j, k are an even permutation of the numbers 1, 2, 3;
(c) −1, if the indices i, j, k are an odd permutation of the numbers 1, 2, 3.

In particular calculations, the identities

εijk = εkij = εjki,
εijk εksp = δis δjp − δip δjs, (2.16)
εijk εjkp = 2δip

become useful.
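These identities are finite statements over the indices 1, 2, 3 and can be confirmed by brute force; the short check below is our illustration:

```python
from itertools import product

# Brute-force verification of the epsilon identities (2.16).

def eps(i, j, k):
    # Levi-Civita symbol for i, j, k in {1, 2, 3}: 0, +1 or -1
    return (i - j) * (j - k) * (k - i) // 2

delta = lambda i, j: 1 if i == j else 0
idx = (1, 2, 3)

ok_cyclic = all(eps(i, j, k) == eps(k, i, j) == eps(j, k, i)
                for i, j, k in product(idx, repeat=3))
ok_pair = all(sum(eps(i, j, k) * eps(k, s, p) for k in idx)
              == delta(i, s) * delta(j, p) - delta(i, p) * delta(j, s)
              for i, j, s, p in product(idx, repeat=4))
ok_trace = all(sum(eps(i, j, k) * eps(j, k, p) for j in idx for k in idx)
               == 2 * delta(i, p)
               for i, p in product(idx, repeat=2))
print(ok_cyclic, ok_pair, ok_trace)   # -> True True True
```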

2.1.1 Euler's Angles

When describing the rotational degrees of freedom of a rigid body, it is convenient to use the Euler angles as independent parameters. To define them, we consider the orientation of the moving axes Oxyz relative to the fixed axes Ox′, Oy′ and Oz′ attached at the point O (see Fig. 2.2). The plane Ox′y′ and the plane Oxy (the latter is shaded in the figure) intersect in a line called the line of nodes (the axis Oξ in Fig. 2.2). The Euler angles are as follows:
(1) the self-rotation angle φ = ∠x′Oξ, measured from the axis Ox′;
(2) the nutation angle θ = ∠z′Oz, measured from the axis Oz′;
(3) the precession angle ψ = ∠xOξ, measured from the line of nodes.

Figure 2.2. The fixed axes Ox′y′z′, the moving axes Oxyz, the line of nodes Oξ and the Euler angles φ, θ, ψ.

As a result, the transition from the fixed reference frame to the mobile one can be brought about through an ordered sequence of three rotations, as shown in Figs. 2.3–2.5. First, we rotate around the axis Oz′ by the angle φ (Fig. 2.3). Next, we rotate the system around the axis Oξ, directed along the line of nodes, by the angle θ (Fig. 2.4). The final rotation occurs around the axis Oz by the angle ψ (Fig. 2.5).

Figures 2.3–2.5. The three successive rotations by the angles φ, θ and ψ.

All three Euler angles change independently of each other. They vary within the ranges 0 ≤ θ ≤ π, 0 ≤ φ ≤ 2π and 0 ≤ ψ ≤ 2π. The first transformation is a rotation by the angle φ; it converts the basis unit vectors (e′1, e′2, e′3) as follows:

e1 = cos φ e′1 + sin φ e′2,
e2 = −sin φ e′1 + cos φ e′2,
e3 = e′3,

or ei = (D1)ik e′k, where the matrix D1 has the form

D1 = ( cos φ  sin φ  0 ; −sin φ  cos φ  0 ; 0  0  1 ).

The second transformation is a rotation around the axis Oξ (the intermediate x-axis), with the matrix

D2 = ( 1  0  0 ; 0  cos θ  sin θ ; 0  −sin θ  cos θ ).

Finally, the last transformation is a rotation about the axis Oz, of the same form as the first one:

e1 = cos ψ e′1 + sin ψ e′2,
e2 = −sin ψ e′1 + cos ψ e′2,
e3 = e′3,

D3 = ( cos ψ  sin ψ  0 ; −sin ψ  cos ψ  0 ; 0  0  1 ).

Ultimately, the matrix D in eq. (2.7) is obtained by applying the rotations D1, D2, D3 in succession, that is, D = D3 D2 D1. Its final form can be written as

D(t) =
( cos ψ cos φ − cos θ sin φ sin ψ    cos ψ sin φ + cos θ cos φ sin ψ    sin ψ sin θ )
( −sin ψ cos φ − cos θ sin φ cos ψ   −sin ψ sin φ + cos θ cos φ cos ψ   cos ψ sin θ )
( sin θ sin φ                        −sin θ cos φ                       cos θ ). (2.17)
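The decomposition into three elementary rotations can be verified numerically. The sketch below (our code) multiplies D3, D2, D1 in the order that reproduces the matrix (2.17) and checks orthogonality:

```python
import math

def matmul(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def D1(phi):    # rotation about Oz' by phi
    c, s = math.cos(phi), math.sin(phi)
    return [[c, s, 0], [-s, c, 0], [0, 0, 1]]

def D2(theta):  # rotation about the line of nodes by theta
    c, s = math.cos(theta), math.sin(theta)
    return [[1, 0, 0], [0, c, s], [0, -s, c]]

def D3(psi):    # rotation about Oz by psi
    c, s = math.cos(psi), math.sin(psi)
    return [[c, s, 0], [-s, c, 0], [0, 0, 1]]

def D_euler(theta, phi, psi):   # the matrix (2.17)
    ct, st = math.cos(theta), math.sin(theta)
    cf, sf = math.cos(phi), math.sin(phi)
    cp, sp = math.cos(psi), math.sin(psi)
    return [[cp*cf - ct*sf*sp,  cp*sf + ct*cf*sp, sp*st],
            [-sp*cf - ct*sf*cp, -sp*sf + ct*cf*cp, cp*st],
            [st*sf, -st*cf, ct]]

theta, phi, psi = 0.7, 0.4, 1.1
D = matmul(D3(psi), matmul(D2(theta), D1(phi)))          # D = D3 D2 D1
ref = D_euler(theta, phi, psi)
err = max(abs(D[i][j] - ref[i][j]) for i in range(3) for j in range(3))
Dt = [[D[j][i] for j in range(3)] for i in range(3)]
DDt = matmul(D, Dt)
orth = max(abs(DDt[i][j] - (1 if i == j else 0))
           for i in range(3) for j in range(3))
print(err < 1e-12 and orth < 1e-12)   # -> True
```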

Writing the vector r(t) as r(t) = xi′(t) e′i = xi ei(t) = xi Dik(t) e′k in the mobile and fixed reference frames implies the relations between the coordinates of the vector r in these systems:

xi = Dik(t) xk′(t), (2.18)
xk′(t) = Dik(t) xi. (2.19)

Consequently, the matrix D(t) determines the rotational motion of the points of the rigid body, with xk′ being their coordinates in the fixed coordinate system. Let us look at the properties of the matrix D(t) under rotations of the coordinate systems. Denote by D(α) a rotation matrix with time-independent Euler angles, D(α) = D(θ1, φ1, ψ1). Then, as the coordinate system k is rotated, the basis e′i transforms into the basis ẽ′i:

ẽ′i = Dik(α) e′k. (2.20)

The matrix form of this relation is

ẽ′ = D(α) e′. (2.21)

Since the basis ek does not change when the fixed system of coordinates is rotated, the relations e = D(t) e′ = D̃(t) ẽ′ = D̃(t) D(α) e′ immediately imply the transformation law for the matrix D(t):

D̃(t) = D(t) Dᵀ(α). (2.22)

The product of the matrices D(t) and Dᵀ(α) is again a rotation matrix, with other Euler angles (θ̃(t), φ̃(t), ψ̃(t)). Therefore, eq. (2.22) establishes a nonlinear transformation law of the Euler angles under rotations of the fixed coordinate system. The orthogonal transformation of the mobile coordinate system,

ẽi = Dik(α) ek, (2.23)
x̃i = Dik(α) xk, (2.24)

through the matrix D(α), involved in the equations

ẽ = D(α) e = D(α) D(t) e′ = D̃(t) e′,

yields the transformation law for the matrix D(t) in the form

D̃(t) = D(α) D(t). (2.25)

This result differs from formula (2.22).

2.1.2 Euler's Kinematic Equations

To calculate the velocities of the material points of a rigid body executing pure rotational motion around a fixed point, it is necessary to differentiate eq. (2.5) with respect to time. Taking into account that the coordinates xi do not depend on time, while the evolution of the Euler angles in eq. (2.7) establishes the time dependence of the basis ei(t), we have

ṙ(t) = v(t) = xi dei(t)/dt = xi Ḋij(t) e′j = xi Ḋij(t) Dkj(t) ek(t) = xi (Ḋ(t)Dᵀ(t))ik ek(t). (2.26)

Differentiation of formula (2.11) with respect to time,

Ḋ(t)Dᵀ(t) + D(t)Ḋᵀ(t) = Ḋ(t)Dᵀ(t) + (Ḋ(t)Dᵀ(t))ᵀ = 0,

implies that the matrix Ḋ(t)Dᵀ(t) is antisymmetric. Using the completely antisymmetric unit tensor εspq, we can write its matrix elements as

Ḋij(t) Dkj(t) = εikn ωn(t). (2.27)

According to eq. (2.22), the quantity ωn(t) is invariant with respect to rotations of the fixed coordinate system and characterizes the rotational motion of the rigid body. Using the relation

εikn ek(t) = en(t) × ei(t), (2.28)

we can rewrite formula (2.26) in the following manner:

v(t) = xi (Ḋ(t)Dᵀ(t))ik ek(t) = xi εikn ωn(t) ek(t) = xi ωn(t) en(t) × ei(t). (2.29)

The vector quantity

ω(t) = ωn(t) en(t) (2.30)

is called the vector of the instantaneous angular velocity of the rigid body, and the resulting relation

v(t) = ω(t) × r(t) (2.31)

bears the name of Euler's formula. The direction of the vector ω(t) gives the position of the instantaneous axis of rotation at each moment. Its equation can be derived from the condition that, for a rigidly rotating body, the points lying on the instantaneous rotation axis have zero velocities. If the vectors ω(t) and r are parallel,

r = λ ω(t),

the expression ω(t) × r is zero, and the instantaneous axis of rotation is a straight line passing through the fixed point O. Once eq. (2.17) is substituted into eq. (2.27), we obtain formulae for the projections of the angular velocity vector on the axes of the moving coordinate system:

ω1 = φ̇ sin θ sin ψ + θ̇ cos ψ,
ω2 = φ̇ sin θ cos ψ − θ̇ sin ψ, (2.32)
ω3 = φ̇ cos θ + ψ̇.

Making use of these equations, it is also easy to find the projections ω′n(t) = ωi(t) Din(t) of the vector ω on the axes of the fixed reference frame:

ω′1 = ψ̇ sin θ sin φ + θ̇ cos φ,
ω′2 = −ψ̇ sin θ cos φ + θ̇ sin φ, (2.33)
ω′3 = ψ̇ cos θ + φ̇.

Equations (2.32) and (2.33) are called Euler's kinematic equations for a rigid body.
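Equations (2.27) and (2.32) can be cross-checked numerically: build D(t) from eq. (2.17) for a smooth motion of the Euler angles (the test functions below are our arbitrary choice), form the matrix Ḋ Dᵀ by central finite differences, and read off ω1 = (ḊDᵀ)23, ω2 = (ḊDᵀ)31, ω3 = (ḊDᵀ)12:

```python
import math

def Dmat(phi, theta, psi):
    # the Euler rotation matrix (2.17)
    ct, st = math.cos(theta), math.sin(theta)
    cf, sf = math.cos(phi), math.sin(phi)
    cp, sp = math.cos(psi), math.sin(psi)
    return [[cp*cf - ct*sf*sp,  cp*sf + ct*cf*sp, sp*st],
            [-sp*cf - ct*sf*cp, -sp*sf + ct*cf*cp, cp*st],
            [st*sf, -st*cf, ct]]

ang = lambda t: (0.3*t, 0.5 + 0.2*t, 1.3*t)      # phi(t), theta(t), psi(t)

def omega_numeric(t, dt=1e-6):
    Dp, Dm, D = Dmat(*ang(t + dt)), Dmat(*ang(t - dt)), Dmat(*ang(t))
    Ddot = [[(Dp[i][j] - Dm[i][j]) / (2*dt) for j in range(3)]
            for i in range(3)]
    Om = [[sum(Ddot[i][j] * D[k][j] for j in range(3)) for k in range(3)]
          for i in range(3)]                     # the matrix of eq. (2.27)
    return Om[1][2], Om[2][0], Om[0][1]          # omega_1, omega_2, omega_3

def omega_exact(t):
    phi, theta, psi = ang(t)
    dphi, dtheta, dpsi = 0.3, 0.2, 1.3           # the angle rates
    return (dphi*math.sin(theta)*math.sin(psi) + dtheta*math.cos(psi),
            dphi*math.sin(theta)*math.cos(psi) - dtheta*math.sin(psi),
            dphi*math.cos(theta) + dpsi)         # eqs (2.32)

num, ex = omega_numeric(0.8), omega_exact(0.8)
print(max(abs(n - e) for n, e in zip(num, ex)) < 1e-6)   # -> True
```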

2.1.3 Moment of Inertia of a Rigid Body

The velocity of an arbitrary material point M of the rigid body (see Fig. 2.1) is the time derivative of its radius vector ρ(t) in eq. (2.2). Differentiating relation (2.2) and taking eq. (2.31) into consideration, we find

ρ̇ = V + ω × r. (2.34)

The vector V is the velocity of the point O, the origin of the moving coordinate system, relative to the axes of the fixed system at the point A. Imagine the rigid body as a discrete system of material points. We designate the radius vectors of an arbitrary particle with the number α = 1, 2, …, N and mass mα as ρα and rα, respectively. Similarly to formula (2.2), these vectors are related by

ρα = R + rα. (2.35)

Then, according to eq. (2.34), the velocity ρ̇α of the particle mα is equal to

ρ̇α = V + ω × rα. (2.36)

As is well known, the momentum of a system consisting of N material points is the sum of the products of the particle masses and their velocities:

P = Σα mα ρ̇α = Σα mα (V + ω × rα).

By the definition of the center of mass,

Rc = Σi mi ri / Σi mi,

the momentum can be expressed through the mass M of the body and the radius vector Rc of the center of mass:

P = M (V + ω × Rc).

The angular momentum of the rigid body relative to the fixed point A is determined by the expression

MA = Σα ρα × mα ρ̇α,

which, after insertion of eqs (2.35) and (2.36), takes the form

MA = Σα (R + rα) × mα ρ̇α = R × P + Σα mα rα × (V + ω × rα)
   = R × P + M Rc × V + Σα mα rα × (ω × rα).

It is worth pointing out that if we place the   origin of the system K (the point O) in  c = 0 , the observable momentum and angular the center of mass of the rigid body R momentum in the fixed coordinate system are  = MV c, P A = R  ×P + M

N 

(2.37)

   × r! . m!r! × 9

(2.38)

!=1

Here V_c is the velocity of the center of mass, also called the translational velocity of the rigid body. The first summand on the right-hand side of eq. (2.38) is the moment of the total momentum, as if the whole mass of the system were concentrated at the center of mass. The second summand,

M = Σ_{α=1}^{N} m_α r_α × (ω × r_α),    (2.39)

might be called the intrinsic angular momentum of the rigid body, due to rotation only. Let us transform this expression by the formula for the vector triple product:

M = Σ_{α=1}^{N} m_α [ω (r_α · r_α) − r_α (ω · r_α)]

and compute its components in the moving coordinate system K, where the coordinates x_{αi} (i = 1, 2, 3) of the vector r_α are time independent. We have

M_i = Σ_α m_α ( ω_i Σ_{l=1}^{3} x_{αl}² − x_{αi} Σ_{l=1}^{3} x_{αl} ω_l ).    (2.40)

The quantities

I_{ik} = Σ_α m_α ( δ_{ik} Σ_{l=1}^{3} x_{αl}² − x_{αi} x_{αk} )    (2.41)


depend on the mass distribution within the rigid body and on its shape. They are referred to as moments of inertia. It is seen that the linear transformation with the matrix I = {I_{ik}} gives the angular momentum vector of the rotating body via the angular velocity:

M_i = I_{ik} ω_k    (2.42)

(here and below we omit the sign of summation over repeated indices). The moments of inertia also depend on the choice of the system K. It is important to mention that, for a coordinate system subject to orthogonal transformations, a set of three variables {a_1, a_2, a_3} transforming as ã_i = D_{ik}(α) a_k is called a vector, or a first-rank tensor. The nine transformed moments of inertia I_{ik} are determined as follows:

Ĩ_{ij} = D_{ik}(α) D_{jn}(α) I_{kn}.    (2.43)

The latter bear the name of a second-rank tensor. As is known from the course of linear algebra, any real symmetric matrix B has the representation B = D C Dᵀ, where D is an orthogonal matrix and C a real diagonal matrix. Therefore, the tensor I = {I_{ik}} calculated in a mobile basis can be reduced through the orthogonal transformation e → De to the diagonal form

          ⎛ I_1   0    0  ⎞
I_{ik} =  ⎜  0   I_2   0  ⎟ .    (2.44)
          ⎝  0    0   I_3 ⎠
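Definition (2.41), the linear map (2.42) and the diagonalization (2.44) are easy to check numerically for a cloud of point masses. A sketch using NumPy; the masses and coordinates below are arbitrary sample data:

```python
import numpy as np

def inertia_tensor(masses, coords):
    """I_ik = sum_a m_a (delta_ik sum_l x_al^2 - x_ai x_ak), eq. (2.41)."""
    I = np.zeros((3, 3))
    for m, r in zip(masses, coords):
        I += m * (np.eye(3) * np.dot(r, r) - np.outer(r, r))
    return I

masses = [1.0, 2.0, 1.5, 0.5]
coords = np.array([[1.0, 0.0, 0.0],
                   [0.0, 1.0, 1.0],
                   [1.0, 1.0, 0.0],
                   [0.0, 0.0, 2.0]])

I = inertia_tensor(masses, coords)

# eq. (2.42): the intrinsic angular momentum is M_i = I_ik omega_k
omega = np.array([0.3, -0.2, 0.5])
M = I @ omega

# eq. (2.44): I is real and symmetric, so its eigenvalues are the
# principal moments of inertia I_1, I_2, I_3
I1, I2, I3 = np.linalg.eigvalsh(I)
```

The same M must come out of the defining sum (2.39), M = Σ_α m_α r_α × (ω × r_α), which is exactly what the triple-product identity guarantees.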

The rigid-body axes coinciding with the new coordinate axes are called the principal axes of inertia of the body. The diagonal components I_1, I_2, I_3 of the tensor are the principal moments of inertia of the body. The kinetic energy of the rigid body,

T = Σ_α m_α v_α² / 2,

after plugging in expression (2.36) for the velocity of the particle with the number α, is equal to

T = Σ_α m_α (V + ω × r_α)² / 2 = Σ_α (m_α/2) V² + Σ_α [ (m_α/2)(ω × r_α)² + m_α V · (ω × r_α) ].


The second term on the right-hand side is expressed through the radius vector R_c of the center of mass,

Σ_α m_α V · (ω × r_α) = M V · (ω × R_c),

and is zero subject to choosing the point O as the center of inertia of the body. Finally, in the third term we open the square of the vector product and, using (2.16), transform it into the form

Σ_α (m_α/2)(ω × r_α)² = Σ_α (m_α/2) ε_{skj} ω_k x_{αj} ε_{sip} ω_i x_{αp}
  = Σ_α (m_α/2)(δ_{ki} δ_{jp} − δ_{kp} δ_{ji}) ω_k x_{αj} ω_i x_{αp}
  = (1/2) ω_i Σ_α m_α (δ_{ik} x_{αj} x_{αj} − x_{αk} x_{αi}) ω_k = (1/2) ω_i I_{ik} ω_k.

At length, if we choose the origin of the moving reference frame at the center of inertia, with the directions of its axes coinciding with the principal axes of inertia, we find a simple expression for the kinetic energy,

T = M V_c²/2 + (1/2)(I_1 ω_1² + I_2 ω_2² + I_3 ω_3²),    (2.45)

and the relations between the angular momentum components and the angular velocities:

M_1 = I_1 ω_1,    M_2 = I_2 ω_2,    M_3 = I_3 ω_3.    (2.46)

The first term in eq. (2.45) is the kinetic energy of the translational motion of the body with the velocity V_c of the center of mass; it has the same form as if the whole mass of the body were concentrated at its center of mass. The second term is the kinetic energy T_rot of rotational motion with the angular velocity ω around an axis passing through the center of inertia. A body in which all the principal moments of inertia are different is called an asymmetric top. It is worth noting that if the density of the rigid body is constant and its shape has a symmetry, the same symmetry is inherent in the principal axes of inertia and in the inertia tensor. For a body with the symmetry of an ellipsoid of revolution, two of the principal moments of inertia coincide; such a body is named a symmetrical top. Any body which has I_1 = I_2 = I_3 = I is referred to as a spherical top.

2.1.4 Euler’s Dynamic Equations Equations of motion of a rigid body can be derived from the Lagrangian L containing potential and kinetic energies. These forms of the energies take into account the dynamic Euler equation. The equations of motion split into three equations of translational motion of the center of mass and three equations of rotational motion around it.


In mechanics, many technical problems arise in which the rigid body is pinned at a point O not coinciding with the center of mass. To study the behavior of such an object, it is necessary to use another form of the equations of motion. For this purpose, we must resort to the general theorem of mechanics, which states that the rate of change of the angular momentum of an open system is equal to the total moment N of external torques acting on it:

dM/dt = Σ_{i=1}^{N} r_i × F_i^ext = N.    (2.47)

In a fixed inertial reference frame, these equations are not always convenient to analyze. Even in the absence of external forces F_i^ext, when the angular momentum in the fixed system k is preserved, M′_i = const (i = 1, 2, 3), the time dependence of the inertia tensor and the lack of a simple relation between the moment components and the angular velocity in this system complicate the equations. The simpler relationship (2.46) holds in the moving coordinate system, where the I_i are constant. To derive the equations in this system, we make use of the relation

M_i = D_{ik}(t) M′_k    (2.48)

between the projections of the moment vector in the K and k coordinate systems, as well as the formula

Ḋ_{ik} = ε_{isn} ω_n D_{sk},    (2.49)

which is a consequence of definition (2.27). Then, differentiating eq. (2.48) and employing eq. (2.47), we find

Ṁ_i = Ḋ_{ik}(t) M′_k + D_{ik}(t) Ṁ′_k = ε_{isn} ω_n D_{sk} M′_k + D_{ik}(t) N′_k = ε_{isn} ω_n M_s + D_{ik}(t) N′_k.    (2.50)

The quantities M_s are expressed in terms of the angular velocity components, and D_{ik}(t) N′_k are the components of the moment of the external force in the moving coordinate system. Then, with account of eq. (2.46), we arrive at a closed system of equations for ω:

I_1 ω̇_1 + (I_3 − I_2) ω_2 ω_3 = N_1,
I_2 ω̇_2 + (I_1 − I_3) ω_3 ω_1 = N_2,    (2.51)
I_3 ω̇_3 + (I_2 − I_1) ω_1 ω_2 = N_3.


These equations are called Euler’s dynamic equations. For the overall plan to be complete, we should also add three Euler’s kinematic equations (2.32). The system of six first-order equations (2.51) and (2.32) for 91 , 92 , 93 , >, (, 8 is sufficient to describe the dynamics of a rigid body with a fixed point. Suppose a rigid body to be under the influence of gravity F = Mg , where g is the acceleration of free fall, directed vertically downward. In the fixed coordinate system, the components of the force and momentum have the appearance Fi′ = –Mg$i3 ,

′ Ni′ = –%ik3 Mgxck ,

′ are the coordinates of the center of mass in the system k. In the moving where xck coordinate system, the momentum components are expressed via xck– the coordinates of the center of mass in this system, and the rotation matrix:

Ni = Dik Nk′ = –MgDik %kj3 Dsj xcs .

(2.52)

Recalling relations (2.15), we can write the above expression as

N_i = −M g ε_{isn} D_{n3} x_{cs}.    (2.53)

Due to the specifics of the problem, it is convenient to use the quantities D_{n3} as new dynamical variables. We denote

D_{n3} = γ_n,    x_{c1} = x_0,    x_{c2} = y_0,    x_{c3} = z_0    (2.54)

and substitute eq. (2.53) into eqs (2.51). As a result, we get the Euler equations for a rigid body rotating around a fixed point with the coordinates (x_0, y_0, z_0) in the system of the center of mass:

I_1 ω̇_1 + (I_3 − I_2) ω_2 ω_3 = −Mg(y_0 γ_3 − z_0 γ_2),
I_2 ω̇_2 + (I_1 − I_3) ω_3 ω_1 = −Mg(z_0 γ_1 − x_0 γ_3),    (2.55)
I_3 ω̇_3 + (I_2 − I_1) ω_1 ω_2 = −Mg(x_0 γ_2 − y_0 γ_1),

γ̇_i = ε_{isn} ω_n γ_s.    (2.56)

These equations form a first-order system for the six variables ω_i, γ_i (i = 1, 2, 3), related by the condition

γ_1² + γ_2² + γ_3² = 1.    (2.57)

Having solved this system, we can determine the Euler angles from the definitions

γ_1 = sin ψ sin θ,    γ_2 = cos ψ sin θ.

To find the third Euler angle φ(t), we should exploit one of Euler's kinematic equations.


2.1.5 S.V. Kovalevskaya’s Algorithm for Integrating Equations of Motion for a Rigid Body about a Fixed Point An imaginary state of quality distinguished from the actual by an element known as excellence; an attribute of the critic. The Devil’s Dictionary by Ambrose Bierce

Although the analytical method makes it possible to solve the equations of mechanics exactly only in rare cases, it gives us great advantages as compared with numerical calculations. Exact solutions allow us to investigate the dynamics of a system in the entire range of its parameters and under arbitrary initial conditions, and they are the foundation for various approximate methods of perturbation theory. According to Liouville's theorem, to obtain exact solutions of the equations of motion in quadratures, the system must have a sufficiently large number of integrals of motion. Integrable systems with N degrees of freedom are a special class of dynamical systems; a major problem in their detection is to find N integrals of motion.

The problem of integrating the Euler equations (2.55)–(2.57) in the gravitational field is one of the most remarkable ones in theoretical (nonlinear) physics. To date, its complete solution still has not been brought about. Nevertheless, the impressive results obtained in particular cases by Euler, Lagrange, Poisson, Kovalevskaya, Poincaré and also by many contemporary researchers have enriched mechanics and nonlinear dynamics with new ideas and techniques.

Let us put Mg = C and rewrite the system of equations (2.55)–(2.57) in the following way:

I_1 ω̇_1 + (I_3 − I_2) ω_2 ω_3 = −C(y_0 γ_3 − z_0 γ_2),
I_2 ω̇_2 + (I_1 − I_3) ω_3 ω_1 = −C(z_0 γ_1 − x_0 γ_3),    (2.58)
I_3 ω̇_3 + (I_2 − I_1) ω_1 ω_2 = −C(x_0 γ_2 − y_0 γ_1),

γ̇_1 = ω_3 γ_2 − ω_2 γ_3,
γ̇_2 = ω_1 γ_3 − ω_3 γ_1,    (2.59)
γ̇_3 = ω_2 γ_1 − ω_1 γ_2,

γ_1² + γ_2² + γ_3² = 1.    (2.60)

As a result, we get a system of six first-order equations (2.58) and (2.59) for the six variables ω_1, ω_2, ω_3, γ_1, γ_2, γ_3. It has three integrals of motion, one of which is eq. (2.60). The two other integrals of motion come from the law of conservation of the total energy E and the law of conservation of the projection M′_3 of the angular momentum onto the gravity vector in the fixed coordinate system. Taking into consideration eqs (2.45), (2.46), (2.48) and (2.53), we can write them as

E = (1/2)(I_1 ω_1² + I_2 ω_2² + I_3 ω_3²) − gM(x_0 γ_1 + y_0 γ_2 + z_0 γ_3),    (2.61)

M′_3 = I_1 ω_1 γ_1 + I_2 ω_2 γ_2 + I_3 ω_3 γ_3.    (2.62)
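The conservation laws can be verified numerically by integrating eqs (2.55)–(2.56). The sketch below (all parameter values are sample data of ours) checks the constraint (2.57) and the projection (2.62) along a Runge–Kutta trajectory; for the energy we take the kinetic part of (2.61) together with the potential term Mg(x_0γ_1 + y_0γ_2 + z_0γ_3), whose sign must be matched to the torque sign convention used in (2.55):

```python
import numpy as np

Iten = np.array([1.0, 2.0, 3.0])   # principal moments (sample values)
Mg = 1.5                           # weight C = Mg (sample value)
x0, y0, z0 = 0.2, -0.1, 0.3        # center-of-mass coordinates (sample values)

def rhs(s):
    """Right-hand side of eqs (2.55)-(2.56), s = (w1, w2, w3, g1, g2, g3)."""
    w, g = s[:3], s[3:]
    dw = np.array([
        (-(Iten[2] - Iten[1]) * w[1] * w[2] - Mg * (y0 * g[2] - z0 * g[1])) / Iten[0],
        (-(Iten[0] - Iten[2]) * w[2] * w[0] - Mg * (z0 * g[0] - x0 * g[2])) / Iten[1],
        (-(Iten[1] - Iten[0]) * w[0] * w[1] - Mg * (x0 * g[1] - y0 * g[0])) / Iten[2],
    ])
    dg = np.cross(g, w)            # gamma_dot_i = eps_isn w_n gamma_s
    return np.concatenate([dw, dg])

def rk4_step(s, h):
    k1 = rhs(s)
    k2 = rhs(s + h / 2 * k1)
    k3 = rhs(s + h / 2 * k2)
    k4 = rhs(s + h * k3)
    return s + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def conserved(s):
    w, g = s[:3], s[3:]
    E = 0.5 * np.dot(Iten, w**2) + Mg * (x0 * g[0] + y0 * g[1] + z0 * g[2])
    M3 = np.dot(Iten * w, g)       # eq. (2.62)
    return E, M3, np.dot(g, g)     # the last entry is the constraint (2.57)

s = np.array([0.4, -0.3, 0.5, 0.0, 0.0, 1.0])
start = conserved(s)
for _ in range(10000):
    s = rk4_step(s, 1e-3)
end = conserved(s)
```

All three quantities stay constant to the accuracy of the integrator.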


Because the system (2.58)–(2.60) in terms of Euler's angles is Hamiltonian, to integrate it, it is necessary, according to Liouville's theorem, to find one more integral of motion apart from eqs (2.61) and (2.62). In the absence of external forces,

x_0 = y_0 = z_0 = 0,    (2.63)

when all the projections of M′ are preserved, the system (2.58)–(2.60) was solved by Carl Gustav Jacobi by means of the theory of elliptic functions, created by himself. In addition, Joseph-Louis Lagrange described the integrable special case, studied later by Siméon-Denis Poisson, of an ellipsoid of revolution whose center of gravity lies on the axis of rotation:

I_1 = I_2,    x_0 = y_0 = 0.    (2.64)

Consequently, the system gets an additional integral of motion

ω_3 = const.    (2.65)

After these works, the theory of integrating the system (2.58)–(2.60) did not produce significant findings. Given the importance of this problem, in 1888 the Paris Academy of Sciences announced a contest for the Prix Bordin (a monetary award of about 3,000 francs) for the best way to explain the motion of a rigid body about a fixed point. Previously, over 50 years, this award had been issued in full only three times. Among the 15 works submitted for the competition was Sofia Kovalevskaya's memoir "On the Rotation of a Solid Body about a Fixed Point," under the motto "Say what you know, do what you must, come what may." The commission of the contest recognized her treatise as an "outstanding one, which contains the discovery of a new case." The author was awarded the full premium, increased to 5,000 francs.

Sofia Vasilyevna Kovalevskaya (3 January 1850 – 29 January 1891) was an outstanding Russian mathematician, writer and publicist: the world's first female professor of mathematics and the first female Corresponding Member of the St. Petersburg Academy of Sciences (1889).


An original idea suggested by S. Kovalevskaya for solving various problems is purely mathematical in nature; it has nothing specifically to do with mechanics. In the eighteenth century, the equations of mechanics came to be written in the form of differential equations, and the problem of integrating them became the most important one of mathematical analysis. By the late eighteenth century, some of the presently known methods for integrating differential equations had been found, and their general solutions were expressed in terms of the elementary functions well known at the time. In addition, beginning with Isaac Newton's works, when solving linear and nonlinear differential equations of the type

F(ẋ, ẍ, x, t) = 0,    (2.66)

powerful methods of local analysis, i.e., finding the solutions in the form of series

x(t) = Σ_{n=0}^{∞} a_n t^n

in powers of the independent variable t, were widely applied. Substitution of such a series into eq. (2.66) yielded the coefficients a_n. If linear differential equations with solutions in the form of series were important for various physical applications, a special name was assigned to them. Early in the nineteenth century, to look into linear differential equations with variable coefficients of the form

d²y(x)/dx² + p(x) dy(x)/dx + q(x) y(x) = 0,    (2.67)

new functions such as the Bessel function, the Legendre function, the hypergeometric function and so on were introduced. Beginning with Cauchy's works, it has proved extremely fruitful to use the theory of functions of a complex variable for studying these equations. The differential equations for the real function y(x) are considered in the complex domain: the independent variable x and the dependent variable y are replaced by z and y(z), respectively. For complex functions, it is natural to introduce the concept of the derivative y′(z) at the point z of the complex domain D as the limit

y′(z) = lim_{h→0} [y(z + h) − y(z)] / h,

with the complex number h tending to zero in an arbitrary manner. Functions having derivatives at every point of the domain D are called analytic in this domain and can be expanded in the convergent series

y(z) = Σ_{n=0}^{∞} w_n (z − z_0)^n,    (2.68)

with coefficients w_n, near any point z_0 ∈ D. Points belonging to the complex domain of the variable z where the function y(z) is not analytic are referred to as singular


points. If z_0 is an isolated singularity in the vicinity of which the function expansion has the form

y(z) = Σ_{n=−N}^{∞} w_n (z − z_0)^n,

the point z_0 is a pole of order N. The function y(z) is said to be meromorphic in the complex domain if it is analytic in this domain except for singular points that are poles. It has been proven that if the coefficients p(z) and q(z) in formula (2.67) have no singularities in the complex domain, the general solution is analytic in this domain. The behavior of the solutions changes dramatically near singular points of the functions p(z) and q(z), where an expansion of type (2.68) does not take place.

Before S. Kovalevskaya, all works on mechanics had regarded time as an independent variable taking only real values. She drew attention to the previously studied cases (2.63) and (2.64), where the solution of system (2.58)–(2.60) had been expressed in terms of elliptic functions, which are meromorphic functions in the complex plane of the variable t. In her work, she first began to consider the time t as an independent complex variable. She also set her mind on finding all the cases when the solutions of the system (2.58)–(2.60) are single-valued meromorphic functions in the entire complex domain of the variable t. In this case, the differential equations can be formally integrated using the series

ω_i = (1/t^{n_i}) Σ_{a=0}^{∞} ω_{ia} t^a    (i = 1, 2, 3),
                                                        (2.69)
γ_i = (1/t^{m_i}) Σ_{a=0}^{∞} γ_{ia} t^a    (i = 1, 2, 3),

where n_i, m_i are integers. Prior to this idea of S. Kovalevskaya, local analysis of solutions of nonlinear differential equations for complex values of time had not been used to search for integrals of motion. System (2.58) and (2.59) consists of six equations for six variables with a single constraint condition, so the series (2.69) represent the general solution provided that they contain five arbitrary constants. After inserting eq. (2.69) into eqs (2.58)–(2.60), we find that n_1 = n_2 = n_3 = 1, m_1 = m_2 = m_3 = 2 and the complex coefficients ω_{i0}, γ_{i0} satisfy the system of algebraic equations

− I_1 ω_{10} + (I_3 − I_2) ω_{20} ω_{30} = −C(y_0 γ_{30} − z_0 γ_{20}),
− I_2 ω_{20} + (I_1 − I_3) ω_{30} ω_{10} = −C(z_0 γ_{10} − x_0 γ_{30}),    (2.70)
− I_3 ω_{30} + (I_2 − I_1) ω_{20} ω_{10} = −C(x_0 γ_{20} − y_0 γ_{10}),


− 2γ_{10} = ω_{30} γ_{20} − ω_{20} γ_{30},
− 2γ_{20} = ω_{10} γ_{30} − ω_{30} γ_{10},
− 2γ_{30} = ω_{20} γ_{10} − ω_{10} γ_{20}.

The coefficients ω_{im}, γ_{im} (m > 0) can be determined through the system of linear equations

(m − 1) I_1 ω_{1m} + (I_3 − I_2)(ω_{2m} ω_{30} + ω_{20} ω_{3m}) + C(y_0 γ_{3m} − z_0 γ_{2m}) = P_m,
(m − 1) I_2 ω_{2m} + (I_1 − I_3)(ω_{3m} ω_{10} + ω_{30} ω_{1m}) + C(z_0 γ_{1m} − x_0 γ_{3m}) = Q_m,    (2.71)
(m − 1) I_3 ω_{3m} + (I_2 − I_1)(ω_{2m} ω_{10} + ω_{20} ω_{1m}) + C(x_0 γ_{2m} − y_0 γ_{1m}) = R_m,
(m − 2) γ_{im} − ε_{isn} (ω_{n0} γ_{sm} + ω_{nm} γ_{s0}) = T_{im}.

The right-hand sides of these equations are polynomials in the quantities ω_{in}, γ_{in} with index n < m. The system (2.70) and (2.71) does not always determine all the coefficients of the expansion. There are cases when the determinant of the left-hand side of eqs (2.71), which is linear with respect to ω_{im}, γ_{im}, vanishes for five positive integer values of m. If these five values of m also turn the right-hand sides of eqs (2.71) into zero, the five corresponding coefficients in expansion (2.69) are arbitrary, and such an expansion is the general solution.

At the outset, we consider the case I_1 = I_2. Then the coordinate axes can always be chosen so that y_0 = 0. Consequently, system (2.70) implies, after algebraic manipulations, the equation

ω_{30} γ_{30} = 0.

A simple analysis shows that eq. (2.70) has two systems of solutions:

1.  γ_{10} = − 2I_3/(C x_0),    γ_{20} = 2I_3 iε/(C x_0),    γ_{30} = 0,
    ω_{10} = i ω_{20},    ω_{20} = − 2I_3 z_0/((I_1 − 2I_3) x_0),    ω_{30} = −2iε,    ε² = 1;    (2.72)

2.  γ_{10} = − 2iI_3/(C(ix_0 + z_0)),    γ_{20} = 0,    γ_{30} = − 2I_3 ε/(C(ix_0 + z_0)),
    ω_{10} = 0,    ω_{20} = 2iε,    ω_{30} = 0,    ε² = 1.    (2.73)

For the solutions (2.72), the determinant D of the left-hand side of eqs (2.71) is equal to

D = I_3 I_1² (m − 2)(m + 1)(m − 3)(m − 4) [m − (2I_3 − I_1)/I_1] [m − 2(I_1 − I_3)/I_1]    (2.74)

and has five roots with m ≥ 0 provided that the quantities (2I_3 − I_1)/I_1 ≥ 0 and 2(I_1 − I_3)/I_1 ≥ 0 are nonnegative integers. This is possible only under the condition I_1 = 2I_3. For ω_{10} and


ω_{20} in eq. (2.72) not to become infinite, it is necessary to put z_0 = 0. Then the quantity ω_{20} is arbitrary, and ω_{10} is equal to iω_{20}. Direct calculations show that when

γ_{10} = − 2I_3/(C x_0),    γ_{20} = 2I_3 iε/(C x_0),    γ_{30} = 0,
ω_{10} = i ω_{20},    ω_{20} = − 2I_3 z_0/((I_1 − 2I_3) x_0),    ω_{30} = −2iε,    (2.75)

the determinant D is zero at m = 0, 1, 2, 3, 4. It is easy to verify that the right-hand sides of eqs (2.71) are also equal to zero for these values of m. Thus, the coefficients ω_{im}, γ_{im} contain five arbitrary constants at the specified values of m. A similar analysis can be applied to the system of solutions (2.73) and to moments of inertia unequal to each other. At last, we arrive at S. Kovalevskaya's theorem. It states that in the general case the equations (2.58)–(2.60) have no single-valued meromorphic solutions with five arbitrary constants in the complex plane of the variable t; there are, however, exceptions:

1. I_1 = I_2 = I_3,
2. x_0 = y_0 = z_0 = 0,
3. I_1 = I_2, x_0 = y_0 = 0,
4. I_1 = I_2 = 2I_3, z_0 = 0.

For the fourth case, which was previously unknown, the equations of motion are

ω̇_1 = ω_2 ω_3 / 2,
ω̇_2 = −(ω_3 ω_1 + C_1 γ_3) / 2,
ω̇_3 = C_1 γ_2,
γ̇_i = ε_{isn} ω_n γ_s,    C_1 = 2C x_0 / I_1.

S. Kovalevskaya found an integral of motion for this case. Because we have

d/dt (ω_1 + iω_2)² = −i(ω_1 + iω_2)² ω_3 − iC_1 (ω_1 + iω_2) γ_3,    (2.76)

d/dt (γ_1 + iγ_2) = −i(γ_1 + iγ_2) ω_3 + i(ω_1 + iω_2) γ_3,    (2.77)

after multiplying the second equation by C_1 and adding it to the first, we find that the quantity

K = (ω_1 + iω_2)² + C_1 (γ_1 + iγ_2)

satisfies the simple equation

K̇ = −iω_3 K.    (2.78)

Hence it follows that the product of K and the complex conjugate quantity K* is an integral of motion:

I = K K* = (ω_1² − ω_2² + C_1 γ_1)² + (2ω_1 ω_2 + C_1 γ_2)².    (2.79)

As was shown by S. Kovalevskaya, using this integral of motion, the integration of the system (2.76) and (2.77) in the fourth case can be reduced to the inversion of hyperelliptic integrals. This gives the complete solution of the problem in terms of theta functions of two variables [1–3].
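The constancy of (2.79) can be checked numerically. The sketch below integrates the Kovalevskaya-case equations given above (I_1 = I_2 = 2I_3, y_0 = z_0 = 0) with a fourth-order Runge–Kutta step; C_1 and the initial data are sample values of ours:

```python
import numpy as np

C1 = 1.3  # C1 = 2 C x0 / I1 (sample value)

def rhs(s):
    """Kovalevskaya case: w1' = w2 w3 / 2, w2' = -(w3 w1 + C1 g3)/2,
    w3' = C1 g2, together with g_i' = eps_isn w_n g_s."""
    w1, w2, w3, g1, g2, g3 = s
    return np.array([
        w2 * w3 / 2.0,
        -(w3 * w1 + C1 * g3) / 2.0,
        C1 * g2,
        w3 * g2 - w2 * g3,
        w1 * g3 - w3 * g1,
        w2 * g1 - w1 * g2,
    ])

def kovalevskaya(s):
    """I = K K* = (w1^2 - w2^2 + C1 g1)^2 + (2 w1 w2 + C1 g2)^2, eq. (2.79)."""
    w1, w2, w3, g1, g2, g3 = s
    return (w1**2 - w2**2 + C1 * g1)**2 + (2 * w1 * w2 + C1 * g2)**2

s = np.array([0.7, -0.2, 0.4, 0.1, 0.3, np.sqrt(0.90)])  # gamma is a unit vector
k_start = kovalevskaya(s)
h = 1e-3
for _ in range(10000):
    a = rhs(s)
    b = rhs(s + h / 2 * a)
    c = rhs(s + h / 2 * b)
    d = rhs(s + h * c)
    s = s + h / 6 * (a + 2 * b + 2 * c + d)
k_end = kovalevskaya(s)
```

Both the Kovalevskaya integral and the constraint γ² = 1 stay constant along the trajectory to integrator accuracy.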

2.2 The Painlevé Property for Differential Equations

Every solution breeds new problems.
Murphy's Law

2.2.1 A Brief Overview of the Analytic Theory of Differential Equations

At the beginning of this section, we give a quick overview of the analytic theory of differential equations [3–7, 24]. To study the equations in the complex domain of the variable t, the singular points of their solutions must be classified. The classification of the singularities is chiefly based on the number of values a function takes when going around the singular point. If the value of the function changes, the singular point t = t_0 is called a critical singularity; otherwise, it is referred to as noncritical. Next, we should distinguish the singular points that algebraic functions may have. For example, the linear equation

(t − b) ẋ = a x    (2.80)

has the solution

x = C (t − b)^a.    (2.81)

Let t = b + r e^{iφ}. When we move around the point b in the complex domain, the angle φ changes by 2π and the value of the function becomes x̄ = x e^{2πia} (see Fig. 2.6). Then the function at the point b
1. is an analytic function for positive integer values of a;
2. has a pole of order |a| for negative integer values of a;
3. has a critical algebraic singular point (branch point) for positive rational values of a = n_1/n_2, where n_1, n_2 are integers; in going around the singular point n_2 times, the function returns to its previous value;
4. has a critical pole for negative rational values of a = n_1/n_2.
Items 2–4 exhaust all algebraic singular points. For the classification of nonalgebraic singularities, we should here introduce the concept of the range of uncertainty. Let the singular point t_0 be the center of a circle of radius r. We designate


Figure 2.6: The circuit t = b + r e^{iφ} around the branch point b in the complex t-plane.

the set of values which the function x(t) takes inside the circle as B_r. As the radius r decreases unlimitedly, the set B_r tends to a limit set, called the range of uncertainty. If this domain consists of only one point, the singular point bears the name of a transcendental singularity. The function x = ln(t − a) is a solution of the equation ẋ = e^{−x}, and the point t = a is a transcendental singularity, or a logarithmic branch point. After inserting t = a + r e^{iφ}, the solution takes the form x = iφ + ln r. In the neighborhood of r = 0, it has an unlimited number of values tending to infinity as r → 0.

A singular point is called an essential singularity if the range of uncertainty admits more than one point. The point t = 0 of the solution x(t) = exp(1/t) of the equation

ẋ = −x ln² x    (2.82)

can be presented as an example of an essential singularity. The value of x(t) in the complex domain depends on the direction along which the variable t tends toward zero. For example, if t tends to zero taking positive real values, x(t) tends to infinity; if t takes only negative real values, then x(t) → 0 as t → 0. Poles and essential singularities are examples of noncritical singular points.

Lazarus Immanuel Fuchs was the first to divide the singular points of solutions of differential equations into two classes: movable and fixed. The singularities of solutions of differential equations in the complex domain are called fixed singular points if their position does not depend on the initial conditions; they are said to be movable singularities if they depend on the initial data. The differential equation

2 t ẋ x = 1    (2.83)


has the solution

x = √(ln(t/C)).    (2.84)

The points t = 0 and t = ∞ are transcendental fixed singular points of this solution. As seen from the expansion near the point t = C,

ln(t/C) = ln(1 + (t − C)/C) = (t − C)/C − (1/2)((t − C)/C)² + ⋯,

√(ln(t/C)) = √((t − C)/C) √(1 − (t − C)/(2C) + ⋯) = √(t − C) (1/√C)(1 − (t − C)/(4C) + ⋯),    (2.85)

and this point is a critical movable singular point.

After the works of S. Kovalevskaya, who was the pioneer in applying the analytic theory of differential equations to problems of mechanics, mathematicians in the late nineteenth century focused mainly on the classification of ordinary differential equations by the type of singularities of their solutions. Solutions of linear differential equations have only fixed singular points, which in turn can be determined from the singularities of the coefficients of these equations. Nonlinear differential equations, by contrast, do not always allow one to find the position of the movable singular points; in the general case, these are extremely difficult to find. Paul Painlevé, a French mathematician, and his school examined, using a new method, the class of second-order equations

ẍ = F(ẋ, x, t),    (2.86)

where F is a rational function of x and ẋ and an analytic function of t. They found just 50 equations whose solutions have no critical movable or fixed points. Of these, 44 equations could be integrated via the then-known functions. Solutions of the six remaining equations could be determined through new special functions, which are now called the Painlevé transcendents. The first and second Painlevé equations have the simple form

ẍ = 6x² + t,
ẍ = 2x³ + t x + α,    (2.87)

where α is an arbitrary parameter. Solutions of these 50 equations are characterized only by movable poles.
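The movable branch point of solution (2.84) can be confirmed symbolically. In the sketch below (SymPy), the residual of eq. (2.83) vanishes identically, while the location t = C of the branch point is fixed by the initial data and therefore moves with them:

```python
import sympy as sp

t, C = sp.symbols('t C', positive=True)

# solution (2.84) of eq. (2.83): 2 t x' x = 1
x = sp.sqrt(sp.log(t / C))
residual = sp.simplify(2 * t * sp.diff(x, t) * x - 1)
print(residual)  # -> 0

# the singular point sits where log(t/C) = 0, i.e. at t = C: a different
# initial condition means a different C and hence a displaced singularity
branch_point = sp.solve(sp.log(t / C), t)[0]
print(branch_point)  # -> C
```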


There are other definitions; we give some of them. An ordinary differential equation is said to possess the Painlevé property provided that its general solution has no critical movable or fixed points. A differential equation with the Painlevé property is called a Painlevé equation. Solutions of these equations have only movable poles.

About 40 years ago, in connection with the discovery of integrable partial differential equations (PDEs), there arose an urgent need for a simple analytical criterion for identifying integrable systems in field theory and nonlinear dynamics. The works of S. Kovalevskaya, P. Painlevé and others, as far as the PDEs are concerned, triggered a great deal of interest among mathematicians around the world: ". . . Their ideas have been found to play a central role in determining and understanding the integrability of dynamical systems [7]." A method for finding the integrals of motion consists in determining the singularities of solutions of differential equations in the complex domain of the variable t. Such an analysis depends on the properties of the differential equations and does not require the explicit form of the solutions. "This idea, in fact, is quite old and has its origin in the classical work of the great Russian mathematician Sofia Kovalevskaya" [7].

2.2.2 A Modern Algorithm of Analysis of Integrable Systems

Inside every large problem is a small problem struggling to get out.
Hoare's law of large problems

The assumption that integrable nonlinear equations must have the Painlevé property has proved extremely fruitful and effective. The works [8–12] underlie a modern algorithm for verifying the integrability of nonlinear equations. In what follows, we will use the term "Kovalevskaya–Ablowitz–Ramani–Segur algorithm," the KARS algorithm. According to this algorithm, the procedure to examine the integrability of the differential equation

d^n x/dt^n = F(x, dx/dt, d²x/dt², …, d^{n−1}x/dt^{n−1}, t),    (2.88)

where F is an analytic function of the variable t and a rational function of the remaining variables, is as follows. It is assumed that the solution of this equation has only poles at t = t_0, and near t_0 it can be expanded in a Laurent series

x = (1/(t − t_0)^m) Σ_{n=0}^{∞} x_n (t − t_0)^n    (2.89)

with constant coefficients x_n.


The first step within the algorithm is to determine the power m. In doing so, the expression

x = x_0 / τ^m,    τ = t − t_0,    (2.90)

needs to be inserted into eq. (2.88). Then, in the resulting equation, we should analyze the terms that are most singular as τ → 0. For example, after substituting eq. (2.90) into the equation of an anharmonic oscillator, either without damping,

ẍ = A x − C x³,    (2.91)

or with damping,

ẍ = A x − C x³ − B ẋ,    (2.92)

we get

(1 + m) m x_0 τ^{−2−m} + C x_0³ τ^{−3m} − A x_0 τ^{−m} = 0    (2.93)

or

(1 + m) m x_0 τ^{−2−m} + C x_0³ τ^{−3m} − A x_0 τ^{−m} − B m x_0 τ^{−1−m} = 0,    (2.94)

respectively. The most singular terms, often called the leading terms of the equation (here ẍ and x³), must cancel each other as τ → 0. Setting their exponents of the variable τ equal, we find from the equation −3m = −m − 2 that m = 1. Then 2x_0 + C x_0³ = 0 and

x_0 = (i√2 / √C) ε    (ε = ±1).    (2.95)

The value (or several values) of m determined at this step must be a positive integer. For fractional values of m, eq. (2.88) has movable critical singular points and does not possess the Painlevé property. In some cases, despite the lack of this property, the value of m may prompt a replacement of the function x(t) that transforms the initial equation into a Painlevé one. So, for the equation

ẋ + A x + C x³ = 0,

the value of m is 1/2. However, the replacement x(t) = √(y(t)) leads to another equation,

ẏ + 2A y + 2C y² = 0,

having the Painlevé property.

The second step of the procedure is responsible for finding the powers n in expansion (2.89), for each pair of values (m, x_0), at which the coefficients are arbitrary. These powers are called Fuchs's indices or resonances, but sometimes


the Kovalevskaya exponents. For example, inserting expansion (2.89) at m = 1 into eq. (2.91), we successively determine the coefficients x_n (n > 0):

x_1 = 0,    x_2 = − iA / (3√(2C)),    x_3 = 0.    (2.96)
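The leading-order balance that produced m = 1 and eq. (2.95) can be reproduced with a short symbolic computation; a sketch using SymPy (symbol names ours):

```python
import sympy as sp

m, x0 = sp.symbols('m x0')
C = sp.symbols('C', positive=True)

# first step of the KARS algorithm for eq. (2.91), x'' = A x - C x^3:
# the leading terms behave as tau**(-m - 2) and tau**(-3*m) (cf. eq. (2.93)),
# and they can cancel only if the exponents coincide
m_val = sp.solve(sp.Eq(-m - 2, -3 * m), m)[0]
print(m_val)  # -> 1

# with m = 1 the leading coefficients give 2 x0 + C x0^3 = 0, i.e. eq. (2.95)
roots = [r for r in sp.solve(2 * x0 + C * x0**3, x0) if r != 0]
```

Each nonzero root satisfies x_0² = −2/C, i.e. x_0 = ± i√2/√C, as in (2.95).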

For the coefficient x_4 we obtain the expression

3(2 + C x_0²) x_4 = x_2 (A − 3C x_0 x_2) − 6C x_0 x_1 x_3 − 3C x_1² x_2.    (2.97)

Both sides of this equation, in accordance with eqs (2.95) and (2.96), being equal to zero, the coefficient x_4 is arbitrary. As a result, the expansion (2.89) has two arbitrary constants, t_0 and x_4, and is the general solution of eq. (2.91). All in all, if m takes multiple values, the solution has several branches, each of which is to be examined separately. Equation (2.91) then possesses the Painlevé property provided that this property is inherent in each branch of the solutions.

A simple analysis of the resonances can be done without computing all the coefficients of the expansion, by substituting the expression

x0 (1 + γr (t – t0 )r ) (t – t0 )m

(r > 0)

(2.98)

into the leading terms of eq. (2.88). Setting the summands linear in γr equal to zero, one is led to the equation Q(r)γr = 0,

(2.99)

where Q (r) is a polynomial of order of n in r. Then the solution Q (r ) = 0

(2.100)

defines Fuchs’s indices. Now we illustrate this step again for eq. (2.91). Substituting eq. (2.98) into its leading terms and selecting the linear terms yield the expression (r – 2) (r – 1) + 3Cx02 = (r – 2) (r – 1) – 6 = 0, with the Kovalevskaya exponents equal to r = –1 and r = 4. It is worth noting that one of Fuchs’s indices is always “–1.” This value corresponds to the arbitrary choice of t0 . In fact, replacing t0 by t0 + % inside expression (2.89) and expanding in a series of %, we see that the term linear in % has the dependence (t – t0 )–m–1 . The roots of eq. (2.100) with Re r < 0 except for r = –1 indicate that the solutions are impossible to represent in the form (2.89). Such a situation requires employing the Conte–Fordy–Pickering algorithm to investigate further [13]. Any Fuchs’s indices with Re r > 0 and r, not being an integer, point out that the equation has a movable critical point and does not have the Painlevé property at t = t0 . However, the rational value of r may cause, as at the first step of the procedure, to replace x (t) by a new function. The general solution of eq. (2.88) must have n arbitrary constants but

2.2 The Painlevé Property for Differential Equations


if eq. (2.100) has $n - 1$ positive integer roots, solution (2.89) admits no movable algebraic singular points, but movable logarithmic ones are still possible. The latter can be ascertained within the third step of the algorithm. Let $r_1, r_2, \dots, r_{n-1}$ ($r_1 \le r_2 \le \dots \le r_{n-1}$) be Fuchs's indices for each pair of values $(m, x_0)$. Substituting the truncated expansion

$$x = \frac{1}{(t - t_0)^m}\sum_{i=0}^{r_s} x_i\,(t - t_0)^i, \qquad s = 1, 2, \dots, n - 1 \eqno(2.101)$$

into eq. (2.88) and setting the terms with the same powers of $(t - t_0)$ equal to zero, we arrive at the equations

$$Q(j)\,x_j = R_j\left(x_0, x_1, \dots, x_{j-1}\right) \qquad (j = 0, 1, 2, \dots, r_s)$$

determining the remaining coefficients. However, at $j = r_1$ the polynomial $Q(j)$ vanishes, and the coefficient $x_{r_1}$ is arbitrary only subject to the restriction

$$R_{r_1}\left(x_0, x_1, \dots, x_{r_1 - 1}\right) = 0. \eqno(2.102)$$
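The second-step resonance computation described above is easy to script. The sketch below (a minimal illustration, not taken from the book) builds the resonance polynomial $Q(r)$ for eq. (2.91) from its leading terms $\ddot{x} + Cx^3$ and solves eq. (2.100):

```python
import sympy as sp

r, C, x0 = sp.symbols('r C x0')

# substituting x = x0*(1 + g*(t - t0)**r)/(t - t0) into the leading terms
# and keeping the part linear in g gives Q(r) = (r - 1)(r - 2) + 3*C*x0**2
Q = (r - 1) * (r - 2) + 3 * C * x0**2

# leading-order balance of eq. (2.91): x0**2 = -2/C, hence 3*C*x0**2 = -6
Q = Q.subs(x0**2, -2 / C)

fuchs = sorted(sp.solve(sp.Eq(Q, 0), r))
print(fuchs)   # [-1, 4], the Kovalevskaya exponents quoted in the text
```

The root $r = -1$ reflects the arbitrariness of $t_0$, and $r = 4$ is the resonance at which $x_4$ was found to be arbitrary.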

In order to show how this works, we insert expansion (2.101) into eq. (2.92) and find

$$x_1 = \frac{B}{3Cx_0}, \qquad x_2 = -\frac{A + 3Cx_1^2}{3Cx_0}, \qquad x_3 = \frac{i\varepsilon B\left(9A - 2B^2\right)}{54\sqrt{2C}}.$$

Then for the coefficient $x_4$ the equation takes the form

$$\frac{i\varepsilon\sqrt{2}\,B^2}{27\sqrt{C}}\left(9A - 2B^2\right) + \left(6 + 3Cx_0^2\right)x_4 = 0. \eqno(2.103)$$

Taking into account eq. (2.95), the coefficient of $x_4$ vanishes; consequently, for $9A \ne 2B^2$ the restriction (2.102) is not fulfilled and the solution cannot be expanded in the form (2.89). In this case, the solution needs to be sought as

$$x = \frac{1}{(t - t_0)}\sum_{i=0}^{4} x_i\,(t - t_0)^i + b\,(t - t_0)^3\ln(t - t_0).$$

Then all the coefficients $x_0, x_1, x_2, x_3$ retain their values, the left-hand side of eq. (2.103) acquires the additional term $5b$, and the coefficient $x_4$ becomes arbitrary if

$$b = -\frac{i\varepsilon\sqrt{2}\,B^2\left(9A - 2B^2\right)}{135\sqrt{C}}.$$

However, continuing to calculate further coefficients, we are forced to introduce ever new logarithmic terms. As a result, eq. (2.92) has movable logarithmic branch points and does not have the Painlevé property. This property is possible only under the following condition:

$$A = \frac{2B^2}{9}. \eqno(2.104)$$

If the above condition is met, an integral of motion for eq. (2.92) can be found explicitly. After substituting $\dot{x}(t) = F(x(t))$, this equation acquires the form

$$F(x)\frac{dF(x)}{dx} = -\frac{2B^2}{9}\,x - Cx^3 - B\,F(x),$$

and its solution is expressed through a hypergeometric function with an arbitrary constant $E$:

$$E = -X\,F_{2,1}\!\left(\frac{1}{2}, \frac{3}{4}, \frac{3}{2}, -\frac{2X^2}{9C}\right) + \cdots, \qquad X = \frac{Bx + 3\dot{x}}{x^2}, \eqno(2.105)$$

where $F_{2,1}\!\left(\frac{1}{2}, \frac{3}{4}, \frac{3}{2}, -\frac{2X^2}{9C}\right)$ is the hypergeometric function. Such an integral of motion is not algebraic, because the hypergeometric function $F_{2,1}(a, b, c, x)$ is defined by the series

$$F_{2,1}(a, b, c, x) = 1 + \frac{ab}{c}\,\frac{x}{1!} + \frac{a(a+1)\,b(b+1)}{c(c+1)}\,\frac{x^2}{2!} + \cdots$$

converging in the range $|x| < 1$ provided that $c$ is not zero or a negative integer. If the coefficient $x_{r_1}$ is arbitrary, further analysis is necessary to calculate the coefficients $x_i$ ($r_1 < i \le r_{n-1}$). Lastly, if all the coefficients $x_{r_i}$ ($i = 1, 2, \dots, n-1$) are arbitrary, the differential equation has neither movable algebraic nor movable logarithmic critical points. In this case, it possesses the Painlevé property. In her work, S. Kovalevskaya suggested regarding this property as an integrability criterion. Therefore, this simple three-step procedure immediately leads us to a conclusion, though incomplete, concerning the integrability of the system.

To investigate a set of nonlinear differential equations

$$\frac{dx_a}{dt} = F_a(x_1, x_2, \dots, x_N, t), \qquad a = 1, 2, \dots, N,$$

the first step requires inserting the solution

$$x_a = \frac{x_{a0}}{(t - t_0)^{m_a}} \eqno(2.106)$$

into its leading terms. Here we identify all the pairs $(x_{a0}, m_a)$ with positive integer values of $m_a$. For each such pair, the solution of the above set should be sought in the form of a Laurent series


$$x_a = \frac{1}{(t - t_0)^{m_a}}\sum_{i=0}^{\infty} x_{ai}\,(t - t_0)^i. \eqno(2.107)$$

At the second step, for each pair $(x_{a0}, m_a)$ the expression

$$x_a = \frac{x_{a0}\left(1 + \gamma_a (t - t_0)^r\right)}{(t - t_0)^{m_a}} \qquad (r > 0,\; a = 1, 2, \dots, N) \eqno(2.108)$$

needs to be inserted into the leading terms of the system. Keeping there only the terms linear in $\gamma_a$, we obtain the equation

$$Q(r)\,\gamma = 0, \qquad \gamma = (\gamma_1, \gamma_2, \dots, \gamma_N),$$

where $Q(r)$ is an $N \times N$ matrix. Some of the quantities $\gamma_a$ are arbitrary, and therefore expansion (2.107) includes arbitrary constants, if the exponents $r$ satisfy the algebraic equation

$$\det Q(r) = (r + 1)\left(r^{N-1} + a_2 r^{N-2} + \cdots + a_{N-1}\right) = 0, \eqno(2.109)$$

with coefficients $a_n$ depending on the values of $x_{a0}$. Due to the arbitrary choice of $t_0$, one of the roots of this equation is always equal to "−1." If the other roots $r_1, r_2, \dots, r_{N-1}$ ($r_1 \le r_2 \le \dots \le r_{N-1}$) are positive integers, the third step is to check for singularities of the logarithmic type. For this purpose, we substitute the truncated expansion

$$x_a = \frac{1}{(t - t_0)^{m_a}}\sum_{i=0}^{r_s} x_{ai}\,(t - t_0)^i, \qquad s = 1, 2, \dots, N - 1 \eqno(2.110)$$

into the system of differential equations. Then, equating the terms of the same powers, we get the equations

$$Q(i)\,X_i = R_i, \qquad i = 1, 2, \dots, r_s, \eqno(2.111)$$

where $X_i$ is a column vector with the components $x_{1i}, x_{2i}, \dots, x_{Ni}$, and $R_i = (R_{1i}, R_{2i}, \dots, R_{Ni})$ is a vector depending on the coefficients $x_{aj}$, $0 \le j \le i - 1$. When $i < r_1$, this expression recurrently determines all the coefficients $X_i$. Because $\det Q(r_1) = 0$, at $i = r_1$ system (2.111) is solvable only subject to the consistency conditions

$$\det Q^{(k)}(r_1) = 0 \qquad (k = 1, 2, \dots, N - 1), \eqno(2.112)$$

where $Q^{(k)}(r_1)$ is the matrix $Q(r_1)$ with the column $k$ replaced by the column $R^T_{r_1}$.


If conditions (2.112) are met, the procedure is repeated up to checking the arbitrariness of the values of $X_{r_{N-1}}$. In the case of multiple roots of eq. (2.109), the number of arbitrary components of $X_{r_s}$ is equal to the multiplicity of the root $r_s$. Finally, the system of $N$ nonlinear equations has the Painlevé property if it has no movable singular points besides poles and if the expansion in the Laurent series (2.107), containing $N - 1$ arbitrary constants, holds true around arbitrary values of $t_0$. Such cases are rare enough. Moreover, the algorithm mentioned above offers only necessary conditions for treating the system at hand as a Painlevé-type system: even if the solutions of a system of nonlinear equations carry no movable algebraic or logarithmic singularities, movable essential singularities may still occur. The latter, however, is an extremely seldom case, and to date the KARS algorithm is the only effective way to search for integrable systems. It becomes all the more important under a certain generalization when it comes to looking into the integrability of PDEs.

2.2.3 Integrability of the Generalized Henon–Heiles Model Theories come and go, but examples remain. I.M. Gelfand.

To illustrate the application of the KARS algorithm, we examine the integrability of the equations [7]

$$\ddot{x} + Ax + 2Dxy = 0, \qquad \ddot{y} + By + Dx^2 - Cy^2 = 0 \eqno(2.113)$$

of the generalized Henon–Heiles model, which under certain values of the parameters $A, B, C, D$ describes the motion of a star in the mean field of a galaxy. We first find the greatest negative powers $m_1$ and $m_2$ in the expressions

$$x = \frac{x_0}{\tau^{m_1}}, \qquad y = \frac{y_0}{\tau^{m_2}}, \qquad \tau = t - t_0. \eqno(2.114)$$

Since the summands $\ddot{x}$ and $\ddot{y}$ are more singular than $x$ and $y$, respectively, it is sufficient to keep only the leading terms of the system within this and the next steps of the algorithm:

$$\ddot{x} + 2Dxy = 0, \qquad \ddot{y} + Dx^2 - Cy^2 = 0. \eqno(2.115)$$

After substituting expressions (2.114) into the leading terms, we obtain

$$2y_0 D\,\tau^{2 - m_2} + m_1(m_1 + 1) = 0, \qquad x_0^2 D\,\tau^{2(m_2 - m_1)} - y_0^2 C + y_0\,\tau^{m_2 - 2}\,m_2(m_2 + 1) = 0. \eqno(2.116)$$


A simple analysis shows that there are two possible branches of the solutions. For the first type of solutions,

$$m_1 = 2, \quad x_0 = \frac{3\varepsilon}{D}\sqrt{2 + \frac{1}{\lambda}} \quad (\varepsilon = \pm 1), \qquad m_2 = 2, \quad y_0 = -\frac{3}{D}, \eqno(2.117)$$

where we have introduced the notation $D = \lambda C$. The second type of solutions exists under the restriction $m_1 < 2$. Then the term proportional to $x_0^2$ disappears as $\tau \to 0$, and we arrive at

$$m_1 = \frac{-1 + \varepsilon\sqrt{1 - 48\lambda}}{2} \quad (\varepsilon = \pm 1), \qquad m_2 = 2, \quad y_0 = \frac{6\lambda}{D}, \eqno(2.118)$$

with an arbitrary value of the quantity $x_0$.

Let us first look at the solutions of the first type. To analyze Fuchs's indices, we plug the expressions

$$x = x_0\tau^{-2} + x_r\tau^{r-2}, \qquad y = y_0\tau^{-2} + y_r\tau^{r-2}$$

into eq. (2.115) and obtain, in the lowest order in $\tau$, the system of linear equations

$$6\varepsilon\sqrt{2 + \frac{1}{\lambda}}\;x_r + \left(r^2 - 5r + 6 + \frac{6}{\lambda}\right)y_r = 0, \qquad r(r - 5)\,x_r + 6\varepsilon\sqrt{2 + \frac{1}{\lambda}}\;y_r = 0. \eqno(2.119)$$

Either $x_r$ or $y_r$ takes an arbitrary value if the exponent $r$ satisfies the equation

$$r^4 - 10r^3 + \left(31 + \frac{6}{\lambda}\right)r^2 - 30\,\frac{1 + \lambda}{\lambda}\,r - 36\left(2 + \frac{1}{\lambda}\right) = 0,$$

whose roots are

$$r_1 = -1, \qquad r_2 = 6, \qquad r_{3,4} = \frac{5 \pm \sqrt{1 - 24\left(1 + \frac{1}{\lambda}\right)}}{2}.$$

The quantities $r_{3,4}$ must be positive integers. This condition imposes a restriction on the value of the parameter $\lambda$. It is easy to reveal that the permitted values of this parameter are
1. λ = −1, r = −1, 2, 3, 6;
2. λ = −1/2, r = −1, 0, 5, 6;
3. λ = −3/4, r = −1, 1, 4, 6.
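The three quoted root sets can be checked directly by solving the resonance quartic for each permitted value of λ. The sketch below is a quick illustration (λ is passed as an exact rational so that sympy returns exact integer roots):

```python
import sympy as sp

r = sp.symbols('r')

def fuchs_indices(lam):
    """Roots of the resonance quartic for the first branch (m1 = m2 = 2)."""
    quartic = (r**4 - 10 * r**3 + (31 + 6 / lam) * r**2
               - 30 * (1 + lam) / lam * r - 36 * (2 + 1 / lam))
    return sorted(sp.solve(sp.Eq(quartic, 0), r))

print(fuchs_indices(sp.Integer(-1)))        # [-1, 2, 3, 6]
print(fuchs_indices(sp.Rational(-1, 2)))    # [-1, 0, 5, 6]
print(fuchs_indices(sp.Rational(-3, 4)))    # [-1, 1, 4, 6]
```

For any other λ the pair $r_{3,4}$ fails to consist of positive integers, which is exactly the restriction used in the text.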


Let us discuss the first two cases. At λ = −1, inserting the expansions

$$x = \frac{1}{(t - t_0)^{m_1}}\sum_{i=0}^{6} x_i\,(t - t_0)^i, \qquad m_1 = 2; \eqno(2.120)$$

$$y = \frac{1}{(t - t_0)^{m_2}}\sum_{i=0}^{6} y_i\,(t - t_0)^i, \qquad m_2 = 2 \eqno(2.121)$$

into eq. (2.113), we get the restrictions $x_1 = y_1 = 0$ and the equations

$$\frac{A}{C} + 2\varepsilon x_2 - 2y_2 = 0, \qquad \frac{B}{C} + 2\varepsilon x_2 - 2y_2 = 0.$$

This means that either $x_2$ or $y_2$ takes an arbitrary value only if $A = B$. Further calculations show that the compatibility conditions are fulfilled at the $r = 3$ and $r = 6$ resonances. It is not hard to make sure that in this case the change of variables

$$u = \frac{x + y}{2}, \qquad v = \frac{x - y}{2}$$

splits the system of equations into two independent expressions

$$\ddot{u} + Au - 2Cu^2 = 0, \qquad \ddot{v} + Av + 2Cv^2 = 0,$$

each of which is integrated in elliptic functions. As a result, under the above restrictions the Henon–Heiles system has both the Painlevé property and the integrability property.

As follows from eqs (2.119), for λ = −1/2 the value of $x_0$ vanishes, which contradicts the resonance at $r = 0$, where $x_0$ should be arbitrary. For one of the coefficients $x_0$ or $y_0$ to be arbitrary, it is necessary to add logarithmic terms to expansions (2.120) and (2.121). However, this violates the Painlevé property and the necessary condition for the system's integrability. A detailed analysis shows that the consistency condition (2.112) is not fulfilled for λ = −3/4 either.

In the case of eq. (2.118), in virtue of the condition $m_1 < m_2$, at the second stage of the algorithm it is sufficient to analyze the system

$$\ddot{x} + 2Dxy = 0, \qquad \ddot{y} - Cy^2 = 0.$$

Having substituted the formulae

$$x = x_0\tau^{-m_1} + x_r\tau^{r - m_1}, \qquad y = y_0\tau^{-2} + y_r\tau^{r - 2}, \qquad m_1 = \frac{-1 + \sqrt{1 - 48\lambda}}{2}$$

into the above system,


we have the resonance findings

$$r = -1, \quad 0, \quad \sqrt{1 - 48\lambda}, \quad 6.$$

Since the quantity $\sqrt{1 - 48\lambda}$ must be a positive integer, the values of the parameter λ admissible for the system's integrability are
1. λ = −5/16, m₁ = 3/2, r = −1, 0, 4, 6;
2. λ = −1/6, m₁ = 1, r = −1, 0, 3, 6;
3. λ = −1/16, m₁ = 1/2, r = −1, 0, 2, 6.

Here it is necessary to draw attention to the rational values of the index $m_1$: they are admissible because after the replacement $x \to X^{1/2}$ or $x \to X^{3/2}$, eqs (2.113) remain rational functions of the variables $X, y$. Applying expansions of the types (2.120) and (2.121), the following should be emphasized for these cases:

1. The compatibility conditions (2.112) are not satisfied either at the resonance $r = 4$, where the value of $x_0$ needs to be determined, or at the resonance $r = 6$. Consequently, at λ = −5/16 the Henon–Heiles system is not integrable.

2. For λ = −1/6 the system possesses the Painlevé property for all values of the parameters $A, B, C$ and $D$. Moreover, we can find the additional algebraic integral $F$:

$$F = \frac{1}{12}x^2\left(C^2x^2 + 4\big(9A(4A - B) - Cy(6A - Cy)\big)\right) + (12A - 3B + 2Cy)\,\dot{x}^2 - 2Cx\dot{x}\dot{y}. \eqno(2.122)$$

3. For λ = −1/16 and $B = 16A$ the Henon–Heiles model is integrable and also has an additional algebraic integral $F$:

$$F = \frac{1}{4}\dot{x}^4 + \frac{1}{16}\dot{x}^2(8A - Cy)\,x^2 + \frac{1}{48}\dot{x}\dot{y}\,Cx^3 - \frac{x^4\left(-1152A^2 + C^2x^2 + 6Cy(-16A + Cy)\right)}{4608}. \eqno(2.123)$$

Thus, the KARS algorithm has enabled us to calculate the values of the parameters for which system (2.113) passes the Painlevé test, a necessary condition for the integrability of the system. To find the hidden integrals of motion of systems without symmetry, direct methods are often used. So, to determine an algebraic integral $I$ of a two-degrees-of-freedom system ($x$ and $y$), it is assumed to have the form

$$I = \sum_{i=0}^{n}\sum_{k=0}^{i} \dot{x}^k\,\dot{y}^{\,i-k}\,f_{i,k}(x, y).$$

Then the condition

$$\frac{dI}{dt} = 0 \eqno(2.124)$$


and the equations of motion immediately lead to an overdetermined system of differential equations for the unknown functions $f_{i,k}(x, y)$. Choosing different values of $n$ and applying the above procedure, one can find the integrals of motion of many systems. It is in this way that integrals (2.122) and (2.123) were deduced.

2.2.4 The Linearization Method for Constructing Particular Solutions of a Nonlinear Model

Even if the integrals of motion of an integrable system are known, constructing its exact general solution remains an open question. As the example of the damped anharmonic oscillator has shown, even when an integral of motion is available, the general solution may be extremely difficult to obtain. One of the ways to approach this problem is the linearization method proposed by Weiss et al. [8, 9]. In some cases, it allows one to find new properties of a nonlinear integrable system and to reduce the latter to a system of linear equations. In this method, a particular solution of eq. (2.88) is sought in the form of the truncated expansion

$$x(t) = \frac{1}{\phi(t)^m}\sum_{n=0}^{m} x_n(t)\,\phi(t)^n, \eqno(2.125)$$

where $\phi(t)$ and $x_n(t)$ are functions of $t$. The function $x(t)$ is assumed to be singular on a curve $\phi(t) = 0$ rather than at a point $t = t_0$. Furthermore, if the function $\phi(t)$ vanishes at a certain value $t = t_0$, this zero is assumed not to coincide with a zero of the functions $x_n(t)$. The positive integer $m$ is determined by analysis of the leading terms of the equation. The singular expansion (2.89) is recovered when the $x_n(t)$ are expanded in powers of $t - t_0$ and $\phi(t) = t - t_0$. Substituting eq. (2.125) into the equation of motion (2.88) yields an overdetermined system of equations for $\phi(t)$ and $x_n(t)$, with the equation for $x_m(t)$ coinciding with eq. (2.88) itself. This enables another particular solution of the equation to be found when one particular solution is already known. Such a mapping of one solution of a nonlinear equation into another is called a Backlund transformation.

As a short illustration, let us exploit the truncated expansion method to get particular solutions of eqs (2.91) and (2.92) of the anharmonic oscillator for the parameter values

$$A = a^2, \qquad C = -c^2. \eqno(2.126)$$

After making the substitution

$$x = \frac{x_0(t)}{\phi(t)} + x_1(t) \eqno(2.127)$$


into eq. (2.91) and equating the terms with the same powers of $\phi(t)$, we obtain an overdetermined system of equations. From the equation for the leading terms it immediately follows that

$$x_0(t) = \frac{\varepsilon\sqrt{2}}{c}\,\dot{\phi}(t), \qquad \varepsilon = \pm 1, \eqno(2.128)$$

and the three remaining equations acquire the form

$$\sqrt{2}\,c\,\varepsilon\,x_1(t)\,\dot{\phi}(t) + \ddot{\phi}(t) = 0, \eqno(2.129)$$

$$\left(a^2 - 3c^2 x_1(t)^2\right)\dot{\phi}(t) + \dddot{\phi}(t) = 0, \eqno(2.130)$$

$$a^2 x_1(t) - c^2 x_1(t)^3 + \ddot{x}_1(t) = 0. \eqno(2.131)$$

The compatibility condition of eqs (2.129) and (2.130) results in an additional first-order equation for $x_1(t)$:

$$a^2 - c^2 x_1(t)^2 - \varepsilon\sqrt{2}\,c\,\dot{x}_1(t) = 0. \eqno(2.132)$$

It is easy to see that the above equation is compatible with eq. (2.131). Moreover, it is a particular integral of motion of eq. (2.91). Indeed, the general integral

$$E = \frac{\dot{x}^2(t)}{2} + a^2\,\frac{x^2(t)}{2} - c^2\,\frac{x^4(t)}{4},$$

at the value $E = \frac{a^4}{4c^2}$ corresponding to the maximum of the potential energy, coincides with eq. (2.132). Ultimately, we take a known particular solution $x_1(t)$ of the anharmonic oscillator equation, the so-called seed solution, satisfying eq. (2.132). With the help of the linear equation (2.129) we find $\phi(t)$, and then formula (2.127) yields a particular solution of eq. (2.91). So, for the simplest particular solution

$$x_1(t) = \pm\frac{a}{c}, \eqno(2.133)$$

the kink-type solution presents itself immediately:

$$x(t) = \pm\frac{a}{c}\tanh\!\left(\frac{a\varepsilon\,(t - t_0)}{\sqrt{2}}\right), \eqno(2.134)$$

with $t_0$ the arbitrary constant of this equation. It is worth noting that for the equation

$$\frac{2B^2}{9}\,x(t) - c^2 x(t)^3 + B\dot{x}(t) + \ddot{x}(t) = 0 \eqno(2.135)$$

of the damped anharmonic oscillator under condition (2.104), the expression for $x_0(t)$ coincides with eq. (2.128), the equation for $x_1(t)$ coincides with eq. (2.135) itself, and the independent equations for the remaining variables are transformed to


$$\left(B + 3\sqrt{2}\,c\,\varepsilon\,x_1(t)\right)\dot{\phi}(t) + 3\ddot{\phi}(t) = 0, \eqno(2.136)$$

$$\sqrt{2}\,B\,x_1(t) + 3\varepsilon c\,x_1(t)^2 + 3\sqrt{2}\,\dot{x}_1(t) = 0. \eqno(2.137)$$

Let us take $x_1(t) = 0$ as a seed solution of eq. (2.135). Then we can compute $x(t)$ through quadratures:

$$x(t) = -\frac{\varepsilon\sqrt{2}\,B}{c\left(3 + B\,e^{\frac{B}{3}(t - t_0)}\right)}. \eqno(2.138)$$

The function $x(t)$ describes a kink from the equilibrium state $x(t) \to -\frac{\varepsilon\sqrt{2}B}{3c}$ ($t \to -\infty$, $B > 0$) to another equilibrium state, $x(t) \to 0$ as $t \to \infty$. These simple examples show how effective the linearization method is. However, the given procedure, as distinct from the KARS algorithm, does not provide a complete set of solutions. The reason is that expansion (2.125) is used in the vicinity of the curve $\phi(t) = 0$ instead of the KARS algorithm's local expansion (2.89) in the neighborhood of a point $t = t_0$ with arbitrary values of $t_0$. It is these additional restrictions that cause a loss of both integration constants and exact solutions.
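Both explicit solutions of this subsection can be verified symbolically. The sketch below uses illustrative numeric parameter values (a = 2, c = 3, B = 3, ε = 1, t₀ = 0) and the sign convention A = a², C = −c², under which the tanh kink solves eq. (2.91); it checks the residuals of the undamped and damped equations on the solutions (2.134) and (2.138), as well as the separatrix energy level E = a⁴/(4c²):

```python
import sympy as sp

t = sp.symbols('t', real=True)
a, c, B = 2, 3, 3                      # illustrative parameter values

# kink (2.134) in x'' + a^2 x - c^2 x^3 = 0 (eq. (2.91) with A = a^2, C = -c^2)
x = sp.Rational(a, c) * sp.tanh(a * t / sp.sqrt(2))
res1 = sp.simplify(sp.diff(x, t, 2) + a**2 * x - c**2 * x**3)

# its energy sits exactly on the separatrix level E = a^4/(4 c^2)
E = sp.diff(x, t)**2 / 2 + a**2 * x**2 / 2 - c**2 * x**4 / 4
res2 = sp.simplify(E - sp.Rational(a**4, 4 * c**2))

# kink (2.138) of the damped oscillator (2.135) with A = 2*B**2/9
xd = -sp.sqrt(2) * B / (c * (3 + B * sp.exp(B * t / 3)))
res3 = sp.simplify(sp.diff(xd, t, 2) + B * sp.diff(xd, t)
                   + sp.Rational(2 * B**2, 9) * xd - c**2 * xd**3)
print(res1, res2, res3)                # 0 0 0
```

All three residuals vanish identically, confirming that (2.134) and (2.138) are exact solutions rather than approximations.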

2.3 Dynamics of Particles in the Toda Lattice: Integration by the Method of the Inverse Scattering Problem

New systems generate new problems.
The main Murphy's Technology Law

The present section deals with the Toda lattice dynamics [16] to discuss the integration of nonlinear equations using the inverse scattering method. Consider a one-dimensional chain, or one-dimensional lattice, of particles located a distance $a$ apart and connected by nonlinear springs (see Fig. 2.7a). Let each particle be at rest, and let $y_n^0 = na$ ($-\infty < n < \infty$) be its coordinate. Now we assume that the particles shift from their equilibrium positions, so that the coordinate of particle $n$ becomes $y_n = y_n^0 + x_n$, where $x_n$ is the displacement of particle $n$ from its equilibrium position. Then, if $U(x_{n+1} - x_n)$ is the potential energy of the interaction between particles $n + 1$ and $n$, the equations of motion for particles of mass $m$ can be written as

$$m\ddot{x}_n = U'(x_{n+1} - x_n) - U'(x_n - x_{n-1}). \eqno(2.139)$$

(Figure 2.7)


For the harmonic potential $U(x) = bx^2/2$, the force $U'(x)$ is proportional to the relative displacement $x$, and the above expression acquires the form

$$m\ddot{x}_n = b\,(x_{n+1} - 2x_n + x_{n-1}). \eqno(2.140)$$

Since the closest atoms are bonded together, a vibrating particle or group of particles excites its neighbors, and the oscillations propagate through the lattice with a certain velocity. As is known, the process of transferring an excitation (in particular, an oscillatory one) through a medium from one point to another is called a wave. The simplest type of wave is a monochromatic wave, which is a solution of eq. (2.140):

$$x_n(t) = C(k)\,e^{i(kan + \omega t)} \eqno(2.141)$$

with frequency $\omega$, wave vector $k$ and an arbitrary constant amplitude $C(k)$. Here $\omega$ and $k$ obey the relation

$$\omega = \pm\omega_1(k), \qquad \omega_1(k) = 2\sqrt{\frac{b}{m}}\,\sin\frac{ka}{2}, \eqno(2.142)$$

called the dispersion relation. The wave vector $k$ relates the displacements of neighboring atoms,

$$\frac{x_{n+1}(t)}{x_n(t)} = e^{ika}.$$

Since the replacement $k \to k + \frac{2\pi}{a}$ does not change this ratio, the physically meaningful values of the wave vector $k$ are defined by the inequalities

$$0 \le k \le \frac{2\pi}{a} \qquad \text{or} \qquad -\frac{\pi}{a} \le k \le \frac{\pi}{a}.$$
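The dispersion relation (2.142) can be checked numerically: substituting the monochromatic wave (2.141) into eq. (2.140) and cancelling the common factor $C e^{i(kan + \omega t)}$ leaves $-m\omega^2 = b\left(e^{ika} - 2 + e^{-ika}\right)$, which the sketch below (with arbitrary illustrative values of m, b, a) verifies across the Brillouin zone:

```python
import numpy as np

m, b, a = 1.0, 2.0, 1.5                 # illustrative parameter values
k = np.linspace(-np.pi / a, np.pi / a, 201)

# dispersion relation (2.142)
omega = 2.0 * np.sqrt(b / m) * np.abs(np.sin(k * a / 2))

# plug x_n(t) = C exp(i(k a n + omega t)) into m x''_n = b(x_{n+1} - 2 x_n + x_{n-1});
# the common factor cancels, leaving the algebraic identity below
lhs = -m * omega**2
rhs = b * (np.exp(1j * k * a) - 2 + np.exp(-1j * k * a))
print(np.max(np.abs(lhs - rhs)))        # ~ 0 (machine precision)
```

The identity $e^{ika} - 2 + e^{-ika} = -4\sin^2(ka/2)$ is what produces the characteristic $\sin(ka/2)$ shape of the phonon branch.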

Expression (2.141) describes the oscillatory process in the system of interconnected particles (wave motion). This process is characterized by the propagation of the oscillation phase $\Phi = kan + \omega t$ through the lattice, and by energy and momentum transfer with the velocity $v = \pm\omega/k$. The wave period is $T = \frac{2\pi}{\omega}$, and the wavelength $\lambda = \frac{2\pi}{k}$ ($a \le \lambda < \infty$) is the distance between particles vibrating in phase.

The nonlinear interaction between the particles of the lattice is described by the Toda potential

$$U(r) = \frac{a}{b}\,e^{-br} + a\,r \qquad (b > 0), \eqno(2.154)$$

which is assumed to be harmonic only for small values of $b$ and finite values of $ab$. In such an exactly solvable model, called the Toda lattice, we can investigate the elementary excitations and illustrate fairly simply a nonlinear analogue of the Fourier transform, the inverse scattering method. After introducing the dimensionless variables

$$X = b\,x, \qquad T = \sqrt{\frac{ab}{m}}\;t, \eqno(2.155)$$

we can write down the equations of motion in such a lattice as

$$\ddot{x}_n = -\exp(x_n - x_{n+1}) + \exp(x_{n-1} - x_n). \eqno(2.156)$$


For the sake of simplicity, we have renamed $X \to x$, $T \to t$. With the Hamiltonian of the system

$$H = \sum_n\left(\frac{p_n^2}{2} + \exp(x_n - x_{n+1})\right), \eqno(2.157)$$

the displacements $x_n$ and the momenta $p_n = \dot{x}_n$ are canonically conjugate variables. We write eq. (2.156) as the system

$$\dot{a}_n = a_n\left(b_n - b_{n+1}\right), \qquad \dot{b}_n = 2\left(a_{n-1}^2 - a_n^2\right), \eqno(2.158)$$

whose simple quadratic nonlinearity is convenient for the further discussion. From now on, we have designated

$$a_n = \frac{1}{2}\exp\left(\frac{x_n - x_{n+1}}{2}\right), \qquad b_n = \frac{\dot{x}_n}{2}. \eqno(2.159)$$
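That the substitution (2.159) turns Newton's equations (2.156) into the quadratic system (2.158) can be confirmed symbolically. The sketch below works on a three-site window of the chain (the function names x0, x1, x2 are illustrative): the first equation of (2.158) holds identically by the chain rule, while the second reduces exactly to eq. (2.156):

```python
import sympy as sp

t = sp.symbols('t')
x0, x1, x2 = (sp.Function(f'x{i}')(t) for i in range(3))

# variables (2.159): a_n = exp((x_n - x_{n+1})/2)/2, b_n = xdot_n/2
a0 = sp.exp((x0 - x1) / 2) / 2
a1 = sp.exp((x1 - x2) / 2) / 2
b1, b2 = sp.diff(x1, t) / 2, sp.diff(x2, t) / 2

# a'_n = a_n (b_n - b_{n+1}) is an identity, independent of the dynamics
res_a = sp.simplify(sp.diff(a1, t) - a1 * (b1 - b2))

# b'_n = 2 (a_{n-1}^2 - a_n^2) differs from the identity precisely
# by half of the equation of motion (2.156)
eq_motion = sp.diff(x1, t, 2) + sp.exp(x1 - x2) - sp.exp(x0 - x1)  # (2.156), = 0
res_b = sp.simplify(sp.diff(b1, t) - 2 * (a0**2 - a1**2) - eq_motion / 2)
print(res_a, res_b)     # 0 0
```

So (2.158) holds exactly on solutions of (2.156), with no approximation involved.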

Next, we investigate the integrability of system (2.158) under the assumption that the velocities and the relative displacements of neighboring particles tend to zero at large distances:

$$x_n - x_{n+1} \to 0, \qquad a_n \to \frac{1}{2}, \qquad b_n \to 0 \qquad (n \to \pm\infty).$$

(2.160)

2.3.1 Lax’s Representation One of the effective methods to study integrable systems is the method of the inverse scattering problem (MISP). The idea of the technique is very simple. Let a dynamical system be described by the system of equations x¨ a = Fa (xa , x˙ a , t)

(a = 1, 2, . . . , N),

(2.161)

including the case $N \to \infty$. Suppose that for this system we have succeeded in finding a Hermitian matrix $L$ ($L^{+} = L$) and an anti-Hermitian matrix $A$ ($A^{+} = -A$), whose elements depend on $x_a$, $\dot{x}_a$, such that the desired equations are equivalent to the matrix equation

$$\frac{dL}{dt} = AL - LA.$$

(2.161)

This equation is called the Lax representation, and the matrices $L$ and $A$ the Lax pair. Since the matrix $A$ is anti-Hermitian, the matrix $U(t)$ defined by the equation

$$\frac{dU(t)}{dt} = A\,U(t),$$

U(t = 0) = I

(2.162)


is a unitary matrix ($U^{+} = U^{-1}$). Since

$$\frac{dU^{-1}(t)}{dt} = \frac{dU^{+}(t)}{dt} = -U^{+}(t)A = -U^{-1}(t)A,$$

the Lax equation is equivalent to the matrix equation

$$\frac{d}{dt}\left(U^{-1}(t)\,L(t)\,U(t)\right) = U^{-1}(t)\left[-A\,L(t) + \frac{dL(t)}{dt} + L(t)\,A\right]U(t) = 0.$$

Thus, the matrix $U^{-1}(t)L(t)U(t)$ is time independent, and the matrix $L(t)$ at any moment of time is connected with the matrix $L(0)$ at the initial moment by a unitary transformation. In other words, they are unitarily equivalent to each other:

$$L(t) = U(t)\,L(0)\,U^{-1}(t).$$

(2.163)

Let $J(t)$ and $\lambda(t)$ be an eigenvector and an eigenvalue of the matrix $L(t)$, respectively: $L(t)J(t) = \lambda(t)J(t)$. Then, comparing this expression with

$$U(t)L(0)J(0) = U(t)L(0)U^{-1}(t)\,U(t)J(0) = L(t)\,U(t)J(0) = \lambda(0)\,U(t)J(0),$$

we get the evolution law $J(t) = U(t)J(0)$ for the eigenvector, whence

$$\frac{dJ(t)}{dt} = A\,J(t).$$

(2.164)

In addition, the eigenvalues of the matrix L (t) are time independent (+ (t) = + (0) = +), that is, they are integrals of motion. Ultimately, if a system with many degrees of freedom permits the Lax representation, the integrals of motion are much easier to estimate through the characteristic equation, as we will see later, than by the direct method, as shown at the bottom of Section 2.2.3. To date, the MISP is said to be applicable to all integrable systems. As a rule, when known, the Lax pair makes it possible for Hamiltonian N-degrees-of-freedom systems to be described by the required number of the integrals of motion in involution. From the foregoing it follows that the eigenfunctions J (t) satisfy the two equations


$$L(t)\,J(t) = \lambda\,J(t), \eqno(2.165)$$

$$\frac{dJ(t)}{dt} = A(t)\,J(t). \eqno(2.166)$$

To check their compatibility, we differentiate eq. (2.165) with respect to time and make use of expression (2.166):

$$\frac{dL(t)}{dt}J(t) + L(t)\frac{dJ(t)}{dt} - \lambda\frac{dJ(t)}{dt} = \frac{dL(t)}{dt}J(t) + L(t)A(t)J(t) - A(t)\lambda J(t) = \left[\frac{dL(t)}{dt} + L(t)A(t) - A(t)L(t)\right]J(t) = 0.$$

As a result, the integrable equation (2.161) is the compatibility condition of the two linear equations (2.165) and (2.166) for the auxiliary vector function $J(t)$.

Consider the MISP as a case in point of integrating system (2.158). It is easy to verify that its equations guarantee the compatibility condition for the expressions

$$(LJ)_n = a_{n-1}J_{n-1} + a_nJ_{n+1} + b_nJ_n = \lambda J_n, \eqno(2.167)$$

$$\frac{dJ_n}{dt} = (AJ)_n = a_{n-1}J_{n-1} - a_nJ_{n+1} + CJ_n, \eqno(2.168)$$

where $C$ is an arbitrary constant.

Let us show that the periodic Toda chain is completely integrable. Imagine such a system as a chain of particles located around a circumference with the interaction potential (2.154) between nearest neighbors. To extract this system from eqs (2.158), we impose the periodic conditions $a_n = a_{n+N}$, $b_n = b_{n+N}$, $J_n = J_{n+N}$. Then the symmetric $N \times N$ matrix $L$ has the form

$$L = \begin{pmatrix}
b_1 & a_1 & 0 & 0 & \dots & 0 & a_N \\
a_1 & b_2 & a_2 & 0 & & 0 & 0 \\
0 & a_2 & b_3 & a_3 & & & 0 \\
 & & \ddots & \ddots & \ddots & & \\
 & & & & a_{N-2} & b_{N-1} & a_{N-1} \\
a_N & 0 & 0 & \dots & 0 & a_{N-1} & b_N
\end{pmatrix}.$$

Its eigenvalues $\lambda_1, \lambda_2, \dots, \lambda_N$ are real and satisfy the equation

$$\det\left(L - \lambda I\right) = \lambda^N + \sum_{i=0}^{N-1}\lambda^i\,F_i(a_n, b_n) = 0. \eqno(2.169)$$


The eigenvalues being time independent, the expansion coefficients $F_i(a_n, b_n)$ do not depend on time either and are thus integrals of motion. It is not hard to verify that among them are the chain's total momentum $P$ and the energy $H$ (2.157):

$$P = \sum_{n=1}^{N}p_n = 2\,\mathrm{Tr}\,L = 2\sum_{n=1}^{N}\lambda_n, \qquad H = \sum_{n=1}^{N}\left(\frac{p_n^2}{2} + \exp(x_n - x_{n+1})\right) = 2\,\mathrm{Tr}\,L^2 = 2\sum_{n=1}^{N}\lambda_n^2. \eqno(2.170)$$
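The isospectral character of flow (2.158) is easy to observe numerically. The sketch below is an illustration (random periodic initial data, a hand-rolled fourth-order Runge–Kutta step; none of it is taken from the book): it integrates eqs (2.158) for N = 4 and compares the spectrum of the periodic Lax matrix L before and after the evolution:

```python
import numpy as np

def lax_matrix(a, b):
    """Periodic Toda Lax matrix: b_n on the diagonal, a_n on the off-diagonals."""
    N = len(b)
    L = np.diag(b).astype(float)
    for n in range(N):
        L[n, (n + 1) % N] = a[n]
        L[(n + 1) % N, n] = a[n]
    return L

def flow(y, N):
    """Right-hand side of eqs (2.158) for y = (a_1..a_N, b_1..b_N)."""
    a, b = y[:N], y[N:]
    da = a * (b - np.roll(b, -1))            # a'_n = a_n (b_n - b_{n+1})
    db = 2.0 * (np.roll(a, 1)**2 - a**2)     # b'_n = 2 (a_{n-1}^2 - a_n^2)
    return np.concatenate([da, db])

N, dt, steps = 4, 1e-3, 5000
rng = np.random.default_rng(1)
y = np.concatenate([0.5 + 0.1 * rng.random(N), 0.2 * rng.random(N)])
eig0 = np.sort(np.linalg.eigvalsh(lax_matrix(y[:N], y[N:])))

for _ in range(steps):                        # classical RK4 integration
    k1 = flow(y, N)
    k2 = flow(y + 0.5 * dt * k1, N)
    k3 = flow(y + 0.5 * dt * k2, N)
    k4 = flow(y + dt * k3, N)
    y += dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

eig1 = np.sort(np.linalg.eigvalsh(lax_matrix(y[:N], y[N:])))
print(np.max(np.abs(eig1 - eig0)))   # tiny: the spectrum is conserved
```

Up to the integrator's truncation error, all N eigenvalues stay frozen while a_n, b_n themselves evolve, which is precisely the content of the Lax representation.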

Further, we can show that the integrals of motion $\lambda_i$ and $\lambda_j$ are in involution, that is, their Poisson bracket $\{\lambda_i, \lambda_j\}$ is zero. We write it down via the variables $a_n$, $b_n$:

$$\{\lambda_i, \lambda_j\} = \sum_{k=1}^{N}\left[\frac{\partial\lambda_i}{\partial x_k}\frac{\partial\lambda_j}{\partial p_k} - (i \leftrightarrow j)\right] = \frac{1}{4}\sum_{k=1}^{N}\left[\left(a_k\frac{\partial\lambda_i}{\partial a_k} - a_{k-1}\frac{\partial\lambda_i}{\partial a_{k-1}}\right)\frac{\partial\lambda_j}{\partial b_k} - (i \leftrightarrow j)\right]. \eqno(2.171)$$

Let $J(j)$ be the eigenvector column with the components $J_1(j), J_2(j), \dots, J_N(j)$ of the matrix $L$ having the eigenvalue $\lambda_j$, normalized to unity: $J^T(j)J(j) = \sum_{n=1}^{N}J_n(j)^2 = 1$. For the calculation of the derivatives $\frac{\partial\lambda_j}{\partial a_k}$, we differentiate with respect to $a_k$ the equation

$$\sum_{m=1}^{N} L_{nm}\,J_m(j) = \lambda_j\,J_n(j). \eqno(2.172)$$

Next, we multiply the result on the left by $J_n(j)$ and sum the expression obtained over the index $n$ running from 1 to $N$. Then, since $J^T(j)L = \lambda_j J^T(j)$, we get in a matrix formulation

$$J^T(j)\frac{\partial L}{\partial a_k}J(j) + J^T(j)L\frac{\partial J(j)}{\partial a_k} - \frac{\partial\lambda_j}{\partial a_k}J^T(j)J(j) - \lambda_j J^T(j)\frac{\partial J(j)}{\partial a_k} = J^T(j)\frac{\partial L}{\partial a_k}J(j) - \frac{\partial\lambda_j}{\partial a_k} = 0.$$

The form of the matrix $L$ in eq. (2.169) implies that

$$\frac{\partial\lambda_j}{\partial a_k} = 2J_k(j)\,J_{k+1}(j). \eqno(2.173)$$

Likewise, we obtain

$$\frac{\partial\lambda_j}{\partial b_k} = J_k(j)^2. \eqno(2.174)$$

In the long run, expression (2.171) takes the form


$$\{\lambda_i, \lambda_j\} = \frac{1}{2}\sum_{k=1}^{N} J_k(i)\,J_k(j)\left\{a_k J_{k+1}(i)J_k(j) - a_{k-1}J_{k-1}(i)J_k(j) - (i \leftrightarrow j)\right\}.$$

Note that for the components $J_k(i)$, $J_k(j)$ of any two eigenvectors of the matrix $L$ involved in eq. (2.167), the relation

$$(\lambda_i - \lambda_j)\,J_k(i)\,J_k(j) = -W_k(i, j) + W_{k-1}(i, j)$$

immediately follows, where we have denoted

$$W_k(i, j) = a_k\left(J_k(i)\,J_{k+1}(j) - J_{k+1}(i)\,J_k(j)\right).$$

Because $a_kJ_{k+1}(i)J_k(j) - a_{k-1}J_{k-1}(i)J_k(j) - (i \leftrightarrow j) = -W_k(i, j) - W_{k-1}(i, j)$, the Poisson bracket can be written as

$$\{\lambda_i, \lambda_j\} = \frac{1}{2(\lambda_i - \lambda_j)}\sum_{k=1}^{N}\left(W_{k-1}(i, j)^2 - W_k(i, j)^2\right) = \frac{W_0(i, j)^2 - W_N(i, j)^2}{2(\lambda_i - \lambda_j)}. \eqno(2.175)$$

It is equal to zero by virtue of the periodic boundary conditions, $W_0(i, j) = W_N(i, j)$. Thus, the periodic Toda lattice has $N$ independent integrals of motion in involution and, according to Liouville's theorem, it is integrable in quadratures. The corresponding solutions are expressed in terms of multidimensional theta functions, and the reader can get acquainted with them by consulting the special monographs (see, e.g., Ref. [16]).

2.3.2 The Direct Scattering Problem

One must draw the edge of the mind on the smallest and simplest things and dwell on them until we get used to seeing clearly and distinctly the truth in them.
Rene Descartes

Equation (2.167) with given coefficients $a_n$, $b_n$ under the boundary conditions (2.160) poses the problem of determining the eigenfunctions $J_n$ and the eigenvalues $\lambda$, called the spectrum of the matrix $L$. In this section, we focus on the properties of the functions $J_n$. Making allowance for conditions (2.160) and setting $C = 0$, for $n \to \pm\infty$ expressions (2.167) and (2.168) reduce to

$$\frac{1}{2}\left(J_{n-1} + J_{n+1}\right) = \lambda J_n, \eqno(2.176)$$

$$\frac{dJ_n}{dt} = \frac{1}{2}\left(J_{n-1} - J_{n+1}\right). \eqno(2.177)$$


The solution of these equations,

$$J_n \to C_1\,e^{i\omega t}z^n + C_2\,e^{-i\omega t}z^{-n}, \eqno(2.178)$$

with $C_1$, $C_2$ constant coefficients, defines the asymptotic behavior of the functions $J_n$. We have introduced here the notations

$$\lambda = \frac{1}{2}\left(z + z^{-1}\right), \qquad \omega = \frac{i}{2}\left(z - z^{-1}\right). \eqno(2.179)$$

The quantity $z$ is classified as a spectral parameter. Recall that Section 1.1 of this book splits all motions of a material point with a given potential energy into two classes: infinite motions, with unlimited values of the generalized coordinate as $t \to \pm\infty$, and finite ones, i.e., periodic oscillations in a potential well. For eq. (2.167) with a real eigenvalue $\lambda$ there are likewise two types of solutions, with a continuous and with a discrete spectrum. The first type describes the asymptotic behavior of the wave (2.178) with wave vector $k$ and frequency $\omega$:

$$z = e^{ik}, \qquad \lambda = \cos k, \qquad \omega = -\sin k \qquad (0 \le k \le 2\pi). \eqno(2.180)$$
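The parametrization (2.179) is easily verified: for any $z \ne 0$, the lattice function $J_n = e^{i\omega t}z^n$ satisfies both (2.176) and (2.177). A short numerical check (the sample values of z are illustrative, not from the book):

```python
import numpy as np

def check(z):
    """Max residual of eqs (2.176)-(2.177) for J_n = exp(i w t) z^n at t = 0."""
    lam = 0.5 * (z + 1 / z)          # eq. (2.179)
    omega = 0.5j * (z - 1 / z)
    n = np.arange(-5, 6)
    J = z**n
    # eq. (2.176): (J_{n-1} + J_{n+1})/2 = lam * J_n
    r1 = 0.5 * (z**(n - 1) + z**(n + 1)) - lam * J
    # eq. (2.177): dJ_n/dt = i*omega*J_n must equal (J_{n-1} - J_{n+1})/2
    r2 = 1j * omega * J - 0.5 * (z**(n - 1) - z**(n + 1))
    return max(np.max(np.abs(r1)), np.max(np.abs(r2)))

# continuous spectrum: z = e^{ik} on the unit circle
print(check(np.exp(1j * 0.7)))       # ~ 0 (machine precision)
# discrete-spectrum form (2.183): z = eps*exp(-k), here eps = -1, k = 0.4
print(check(-np.exp(-0.4)))          # ~ 0 (machine precision)
```

On the unit circle ($|z| = 1$) one recovers the oscillatory waves (2.180); for real $|z| < 1$ the same formulas produce the decaying bound-state asymptotics discussed next.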

Eigenfunctions with a discrete spectrum (bound states) are analogous to the periodic solutions for a material point. However, as distinct from nonlinear oscillations, they exist only at discrete values $\lambda_i$ ($i = 1, 2, \dots, N$). For each value $\lambda_i$ of the discrete spectrum, the eigenfunctions $J_n^i$ exhibit decreasing asymptotic behavior at infinity:

$$J_n^i \to c_2\,e^{E_i t}\,z_i^n \quad (n \to \infty), \eqno(2.181)$$

$$J_n^i \to c_1\,e^{-E_i t}\,z_i^{-n} \quad (n \to -\infty), \eqno(2.182)$$

where

$$z_i = \varepsilon e^{-k_i}, \qquad E_i = \varepsilon\sinh k_i, \qquad \lambda_i = \varepsilon\cosh k_i, \qquad \varepsilon = \pm 1, \qquad 0 < k_i < \infty. \eqno(2.183)$$

This kind of asymptotic behavior comes directly from eq. (2.178) after the substitution $z \to z_i$ and the selection of the bounded solutions.

Let us briefly discuss the direct problem of scattering theory for the operator $L$. For the continuous spectrum ($|z| = 1$), we choose as two fundamental solutions the functions $\varphi_n(z)$ and $\bar{\varphi}_n(z)$ with the asymptotic behavior

$$\varphi_n(z) \to z^n \quad (n \to \infty), \qquad \bar{\varphi}_n(z) \to z^{-n} \quad (n \to \infty), \eqno(2.184)$$

to represent the general solution of the finite-difference equation (2.167).


2 Integrable Systems

We also introduce a second pair of linearly independent functions $\psi_n(z)$ and $\bar\psi_n(z)$, defined by their asymptotics as $n \to -\infty$:
$$\psi_n(z) \to z^{-n}, \qquad \bar\psi_n(z) \to z^{n} \qquad (n \to -\infty). \qquad (2.185)$$

The above functions are discrete analogues of the Jost functions, and we will keep this name throughout the treatment of the direct problem of scattering theory. In addition, to write the asymptotics of the functions $\psi_n(z)$ and $\bar\psi_n(z)$ more compactly, we omit the multipliers $e^{i\omega t}$ and $e^{-i\omega t}$, respectively; we shall restore them when appropriate. Since in the given case there are only two independent solutions of eq. (2.176), the functions $\psi_n(z)$ and $\bar\psi_n(z)$ are linear combinations of $\varphi_n(z)$ and $\bar\varphi_n(z)$:
$$\begin{pmatrix} \psi_n(z) \\ \bar\psi_n(z) \end{pmatrix} = \begin{pmatrix} \alpha(z) & \beta(z) \\ \bar\beta(z) & \bar\alpha(z) \end{pmatrix} \begin{pmatrix} \bar\varphi_n(z) \\ \varphi_n(z) \end{pmatrix}, \qquad (2.186)$$
with the scattering coefficients $\alpha(z)$, $\bar\alpha(z)$, $\beta(z)$, $\bar\beta(z)$ being $n$-independent. Their calculation for the assigned quantities $a_n$, $b_n$ is one of the objectives of the direct scattering problem. Note that eq. (2.186) implies the relation
$$\frac{\psi_n(z)}{\alpha(z)} = \bar\varphi_n(z) + \frac{\beta(z)}{\alpha(z)}\,\varphi_n(z). \qquad (2.187)$$

We shall use it often in what follows. Let us find how the scattering coefficients $\alpha(z)$, $\beta(z)$ are connected with the asymptotic values of the functions introduced. As follows from eq. (2.167), the expression
$$W(u(z), v(z)) = a_k\left(u_k(z)v_{k+1}(z) - u_{k+1}(z)v_k(z)\right) \qquad (2.188)$$
for two arbitrary eigenfunctions $u_k(z)$, $v_k(z)$ of the matrix $L$ with the same eigenvalue does not depend on $k$. It is a discrete analogue of the Wronskian and equals zero if the functions $u(z)$ and $v(z)$ are linearly dependent. Then, calculating $W(\varphi, \bar\varphi)$ as $n \to \infty$ and $W(\psi, \bar\psi)$ as $n \to -\infty$, we get
$$W(\varphi(z), \bar\varphi(z)) = -\frac{z - z^{-1}}{2}, \qquad W(\psi(z), \bar\psi(z)) = \frac{z - z^{-1}}{2}.$$
Further, use of the relation
$$W(\psi(z), \bar\psi(z)) = W\!\left(\alpha\bar\varphi + \beta\varphi,\ \bar\beta\bar\varphi + \bar\alpha\varphi\right) = \left(\alpha(z)\bar\alpha(z) - \beta(z)\bar\beta(z)\right)W(\bar\varphi(z), \varphi(z))$$
and insertion of eq. (2.187) into $W(\psi(z), \varphi(z))$ and $W(\bar\varphi(z), \psi(z))$ yield the explicit expressions for $\alpha(z)$, $\beta(z)$:


$$\alpha(z) = \frac{2W(\psi(z), \varphi(z))}{z - z^{-1}}, \qquad \beta(z) = \frac{2W(\bar\varphi(z), \psi(z))}{z - z^{-1}}, \qquad (2.189)$$
and the restriction on the scattering coefficients
$$\alpha(z)\bar\alpha(z) - \beta(z)\bar\beta(z) = 1. \qquad (2.190)$$

Our next aim is to continue the study of the properties of the Jost functions. Let us prove that the function $\varphi_n(z)$ has a triangular representation
$$\varphi_n(z) = \sum_{m=n}^{\infty} K(n, m)\, z^m, \qquad (2.191)$$
where the coefficients $K(n, m)$ ($n \le m$) do not depend on $z$ and obey the boundary conditions
$$K(n, m) \to 0 \quad (n \ne m,\ m \to \infty), \qquad K(n, n) \to 1 \quad (n \to \infty). \qquad (2.192)$$

With $K(n, m)$ ($n \le m$) forming an infinite upper triangular matrix, the representation (2.191) is sometimes called the upper triangular representation. Substituting it into eq. (2.167) and equating the coefficients of equal powers of $z$, we come up with the equations
$$K(n, n) = 2a_{n-1} K(n-1, n-1), \qquad \frac{K(n, n+1)}{2K(n, n)} - \frac{K(n-1, n)}{2K(n-1, n-1)} = b_n, \qquad (2.193)$$
$$a_{n-1} K(n-1, m+1) + a_n K(n+1, m+1) + b_n K(n, m+1) = \frac{1}{2}\left(K(n, m) + K(n, m+2)\right) \quad (n \le m). \qquad (2.194)$$
The solutions of the difference equations (2.193) and (2.194) for $m = n$ with the boundary conditions (2.192) have the form
$$K(n, n) = e^{\frac{-x_n + x_\infty}{2}}, \qquad K(n, n+1) = -2K(n, n)\sum_{m=n+1}^{\infty} b_m, \qquad (2.195)$$

and so on. Since the displacements of particles in the lattice are determined only up to a constant, we choose $x_\infty = 0$. Putting successively $m = n, n+1, n+2, \ldots$ in eq. (2.194), we obtain the simple finite-difference equations
$$e^{\frac{x_{n-1}}{2}} K(n-1, m+1) - e^{\frac{x_n}{2}} K(n, m+2) = e^{\frac{x_n}{2}}\left(K(n, m) - 2a_n K(n+1, m+1) - 2b_n K(n, m+1)\right) \quad (n \le m),$$


with the right-hand sides being known at each step. It is easy to demonstrate that, together with the boundary conditions (2.192), these expressions determine the solutions uniquely. Consequently, we can write explicit formulas for them for small values of $m - n$. We have thus proved that the representation (2.191) holds at finite values of the total momentum,
$$\sum_{m=-\infty}^{\infty} 2b_m < \infty,$$
and at finite relative shifts at infinity, $\sum_{m=-\infty}^{\infty}\left(2a_m - 1\right) < \infty$ (the latter is equivalent to the condition $(x_n - x_{-n}) < \infty$ as $n \to \infty$). In a similar manner, we can show that the function $\psi_n(z)$ has the lower triangular representation
$$\psi_n(z) = \sum_{m=-\infty}^{n} \bar K(n, m)\, z^{-m}, \qquad \bar K(n, n) \to 1 \ (n \to -\infty), \qquad \bar K(n, m) \to 0 \ (m < n,\ m \to -\infty), \qquad (2.196)$$

where the coefficients $\bar K(n, m)$ satisfy the equations
$$\bar K(n, n) = \frac{\bar K(n-1, n-1)}{2a_{n-1}}, \qquad \frac{\bar K(n+1, n)}{2\bar K(n+1, n+1)} - \frac{\bar K(n, n-1)}{2\bar K(n, n)} = -b_n \qquad (2.197)$$
and so on. As we can see, the foregoing leads to the formula necessary for the further exposition,
$$\bar K(n, n) = e^{\frac{x_n - x_{-\infty}}{2}}, \qquad (2.198)$$
and the relation
$$\bar K(n, n) K(n, n) = g, \qquad g = e^{\frac{x_\infty - x_{-\infty}}{2}}, \qquad (2.199)$$

for the diagonal elements.
It is worth pointing out that when $|z| = 1$ the functions $\bar\varphi_n(z)$ and $\varphi_n(1/z)$ are subject to the same equation and the same boundary conditions. Therefore, by the uniqueness of solutions of finite-difference equations, the relations
$$\bar\varphi_n(z) = \varphi_n^*(z), \qquad \bar\psi_n(z) = \psi_n^*(z), \qquad \bar\alpha(z) = \alpha^*(z), \qquad \bar\beta(z) = \beta^*(z) \qquad (|z| = 1) \qquad (2.200)$$
are valid. Now we go over to the study of the analytic properties of the Jost functions. Consider the solutions of eq. (2.167) for complex values of the spectral parameter $z$.

Figure 2.8: The unit circle $|z| = 1$ (the contour $\Gamma$) in the complex $z$-plane, with the regions $D_+$ and $D_-$ and the domains of analyticity of the Jost functions and of the coefficient $\alpha(z)$.

In other words, we are now concerned with the region of nonphysical values of the wave vector. The representations (2.191) and (2.196) are valid for the continuous spectrum, $z = e^{ik}$; in the complex $z$-plane this is the unit circle $|z| = 1$ (Fig. 2.8). If the series (2.191) and (2.196) converge on it, they also converge inside the unit circle, for $|z| \le 1$. In this case, the functions $\varphi_n(z) z^{-n}$ and $\psi_n(z) z^{n}$ provide the analytic continuation from the unit circle (the scattering region) into its interior (the region $D_-$). Continuing the relations (2.200) into the complex plane, we obtain
$$\bar\varphi_n(z) = \varphi_n\!\left(\tfrac{1}{z}\right), \quad \bar\psi_n(z) = \psi_n\!\left(\tfrac{1}{z}\right), \quad \bar\alpha(z) = \alpha\!\left(\tfrac{1}{z}\right), \quad \bar\beta(z) = \beta\!\left(\tfrac{1}{z}\right) \qquad (|z| > 1).$$
In consequence, the functions $\bar\varphi_n(z) z^{n}$ and $\bar\psi_n(z) z^{-n}$ are analytic functions of the spectral parameter $z$ beyond the unit circle ($D_+$, $|z| \ge 1$). Formula (2.189) shows that $\alpha(z)$ is an analytic function for $|z| \le 1$. The analyticity regions of these functions are depicted in Fig. 2.8. The function $\alpha(z)$ may have zeros on the real axis. We denote them as $z_1, z_2, \ldots, z_N$ ($|z_i| < 1$). Since $W(\psi(z_i), \varphi(z_i)) = 0$ according to eq. (2.189), the function $\psi_n(z_i)$ is proportional to $\varphi_n(z_i)$ ($i = 1, 2, \ldots, N$):
$$\psi_n(z_i) = \gamma_i \varphi_n(z_i). \qquad (2.201)$$
Therefore, the asymptotic behavior of the functions $\varphi_n(z_i)$ and $\psi_n(z_i)$ as $n \to \infty$ is the same, $\psi_n(z_i) \to \gamma_i z_i^{n} \to 0$ ($n \to \infty$), while $\psi_n(z_i)$ decreases as $n \to -\infty$ as well, by the definition (2.185):


$$\psi_n(z_i) \to z_i^{-n} \to 0 \quad (n \to -\infty).$$
Thus, the zeros of the function $\alpha(z)$ correspond to the discrete eigenvalues of the matrix $L$ with real eigenvalues $\lambda_i = \left(z_i + z_i^{-1}\right)/2$. The quantities $\gamma_i$ are connected with the normalization of the discrete-spectrum eigenfunctions. To derive the necessary relations, we employ the two equations
$$a_{m-1}\psi_{m-1}(z) + a_m\psi_{m+1}(z) + (b_m - \lambda)\psi_m(z) = 0, \qquad (2.202)$$
$$a_{m-1}\varphi_{m-1}(z) + a_m\varphi_{m+1}(z) + (b_m - \lambda)\varphi_m(z) = 0 \qquad (2.203)$$

for the Jost functions $\psi_m(z)$ and $\varphi_m(z)$ with the eigenvalue $\lambda = \frac{1}{2}\left(z + z^{-1}\right)$. We take the quantity $b_m$ from eq. (2.203) and insert it into the expression obtained by differentiating eq. (2.202) with respect to $z$. Then we have
$$\frac{d\lambda}{dz}\,\psi_m(z)\varphi_m(z) = T_m(z) - T_{m-1}(z), \qquad (2.204)$$
where
$$T_n(z) = a_n\left(\frac{d\psi_{n+1}(z)}{dz}\,\varphi_n(z) - \frac{d\psi_n(z)}{dz}\,\varphi_{n+1}(z)\right).$$

We sum up (2.204) with respect to the index $m$ running from $-\infty$ to $n$ at $z = z_i$. Since $\frac{d\psi_n(z_i)}{dz} \to 0$ ($n \to -\infty$), eq. (2.204) can be converted to the form
$$\frac{d\lambda(z_i)}{dz}\sum_{m=-\infty}^{n} \psi_m(z_i)\varphi_m(z_i) = T_n(z_i). \qquad (2.205)$$
Further, differentiating eq. (2.203) with respect to $z$, substituting $b_m$ from eq. (2.202) and summing over the index $m$ from $n+1$ to $\infty$ at $z = z_i$ yield the following:
$$\frac{d\lambda(z_i)}{dz}\sum_{m=n+1}^{\infty} \psi_m(z_i)\varphi_m(z_i) = a_n\left(\psi_{n+1}(z_i)\frac{d\varphi_n(z_i)}{dz} - \psi_n(z_i)\frac{d\varphi_{n+1}(z_i)}{dz}\right). \qquad (2.206)$$
Now, adding eq. (2.205) to eq. (2.206), we arrive at the result
$$\frac{d\lambda(z_i)}{dz}\sum_{m=-\infty}^{\infty} \psi_m(z_i)\varphi_m(z_i) = a_n\left(\psi_{n+1}(z_i)\frac{d\varphi_n(z_i)}{dz} - \psi_n(z_i)\frac{d\varphi_{n+1}(z_i)}{dz} + \frac{d\psi_{n+1}(z_i)}{dz}\,\varphi_n(z_i) - \frac{d\psi_n(z_i)}{dz}\,\varphi_{n+1}(z_i)\right). \qquad (2.207)$$

As a consequence of the equation $\psi_n(z_i) = \gamma_i \varphi_n(z_i)$, the left-hand side of this relation is proportional to the norm of the vector function $\varphi(z_i)$, which we denote as
$$\sum_{m=-\infty}^{\infty} \varphi_m(z_i)\,\varphi_m(z_i) = \frac{1}{c_i^2}. \qquad (2.208)$$

Taking into account the definitions (2.188) and (2.189), the right-hand side of formula (2.207) is equal to the $z$-derivative of the function $-\alpha(z)\left(z - z^{-1}\right)/2$ at $z = z_i$. Finally, we deduce a simple formula relating the quantity $\gamma_i$ and the norm of the vector function $\varphi(z)$:
$$\alpha'(z_i) = \frac{d}{dz}\alpha(z)\Big|_{z=z_i} = -\frac{\gamma_i}{z_i c_i^2}. \qquad (2.209)$$
It also follows from it that the zeros of $\alpha(z)$ are simple and that the quantity $\alpha'(z_i)$ is real. The set of the real quantities $\gamma_i$ together with the discrete spectrum $z_i$ ($i = 1, 2, \ldots, N$) and the reflection coefficient $R(z) = \beta(z)/\alpha(z)$ is called the scattering data $S$:
$$S = \{R(z),\ \gamma_i,\ z_i\ (i = 1, 2, \ldots, N)\}. \qquad (2.210)$$

The calculation of these data for given coefficients $a_n$, $b_n$ of the operator $L$ is referred to as the direct scattering problem. It is relevant to recall that in the spectral problem (2.167) the quantities $a_n$, $b_n$ are time dependent. Hence, the eigenfunctions and the scattering data $S$ are also functions of time. Let us look at the evolution of the coefficients $\alpha(z, t)$, $\beta(z, t)$ involved in the equation
$$\psi_n(z, t) = \alpha(z, t)\bar\varphi_n(z, t) + \beta(z, t)\varphi_n(z, t). \qquad (2.211)$$
While studying the Jost functions, we have omitted the time factors $e^{\pm i\omega t}$ and defined the asymptotic behavior of $\psi_n(z)$ as $\psi_n(z) \to z^{-n}$ ($n \to -\infty$). Such a behavior is in good agreement with eq. (2.168) provided that the condition $C = -\frac{z - z^{-1}}{2}$ holds. Therefore, the evolution of the scattering data $S$ can be inferred from the equation
$$\frac{d}{dt}\psi_n = (A\psi)_n = a_{n-1}\psi_{n-1} - a_n\psi_{n+1} - \frac{z - z^{-1}}{2}\,\psi_n \qquad (2.212)$$
as follows. The key point is that the asymptotic behavior of $\psi_n$ as $n \to \pm\infty$ determines the scattering data $S$, subject to the boundary conditions (2.160) for $a_n(t)$. Let us show that the time dependence of the scattering data is defined by linear equations with constant coefficients:
$$\frac{d}{dt}\psi_n = \frac{1}{2}\left(\psi_{n-1} - \psi_{n+1}\right) - \frac{z - z^{-1}}{2}\,\psi_n \qquad (n \to \pm\infty). \qquad (2.213)$$


Substitution of the expression (2.211), $\psi_n(z, t) \to \alpha(z, t)z^{-n} + \beta(z, t)z^{n}$ as $n \to \infty$, into eq. (2.213) leads to the equation
$$\frac{d\psi_n(z, t)}{dt} \to \left(z^{-1} - z\right)\beta(z, t)z^{n} \qquad (n \to \infty).$$
Moreover, we have
$$\frac{d\psi_n(z, t)}{dt} \to \frac{d\alpha(z, t)}{dt}z^{-n} + \frac{d\beta(z, t)}{dt}z^{n} \qquad (n \to \infty).$$
Similarly, for the bound states ($z = z_i$), using $\psi_n(z_i, t) = \gamma_i(t)\varphi_n(z_i, t)$ in eq. (2.213) as $n \to \infty$, we come up with the findings
$$\frac{d\psi_n(z_i, t)}{dt} = \gamma_i(t)z_i^{n}\left(z_i^{-1} - z_i\right), \qquad \frac{d\psi_n(z_i, t)}{dt} = \frac{d\left(\gamma_i(t)\varphi_n(z_i, t)\right)}{dt} = \frac{d\gamma_i(t)}{dt}z_i^{n}.$$
In the end, the implicit mapping of the variables $a_n(t)$, $b_n(t)$ into the new dynamical variables $\alpha(t)$, $\beta(t)$ and $\gamma_i(t)$ reduces the nonlinear dynamics of the lattice to trivial linear equations with the solutions
$$\alpha(z, t) = \alpha(z, 0), \qquad \beta(z, t) = \beta(z, 0)\exp\left[\left(z^{-1} - z\right)t\right], \qquad \gamma_i(t) = \gamma_i(0)\exp\left[\left(z_i^{-1} - z_i\right)t\right]. \qquad (2.214)$$

If the initial conditions $a_n(0)$, $b_n(0)$ are known, the solutions of eqs (2.193)–(2.195), (2.197) and (2.198) enable the kernels $K(n,m)(t=0)$ and $\bar K(n,m)(t=0)$ to be found. After that, we can ascertain the Jost functions and the values of the scattering data at the initial moment of time. Consequently, their evolution (2.214) becomes clear, and the dynamics of particles in the Toda lattice can in principle be understood, provided that an approach to determine the quantities $a_n(t)$, $b_n(t)$ from the scattering data $S(t)$ is available. This method bears the name of the inverse spectral transformation, or the method of the inverse scattering problem.
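The triviality of the evolution (2.214) is easy to see in code. A minimal sketch (illustrative numbers; the helper name `evolve` is ours): since $z^{-1} - z$ is purely imaginary on $|z| = 1$, the modulus of $\beta$, and hence of the reflection coefficient, is conserved, while the bound-state constants $\gamma_i$ evolve exponentially:

```python
import cmath, math

# Evolution (2.214) of the scattering data: alpha(z) is frozen,
# beta(z, t) = beta(z, 0) exp[(1/z - z) t] on |z| = 1, and
# gamma_i(t) = gamma_i(0) exp[(1/z_i - z_i) t] for the bound states.
def evolve(beta0, z, gammas0, zis, t):
    beta_t = beta0 * cmath.exp((1.0 / z - z) * t)
    gammas_t = [g * math.exp((1.0 / zi - zi) * t) for g, zi in zip(gammas0, zis)]
    return beta_t, gammas_t

z = cmath.exp(0.3j)                  # a point of the continuous spectrum
z1 = math.exp(-0.7)                  # a bound state z_1 = e^{-k}, k = 0.7
beta_t, (gamma_t,) = evolve(0.5 + 0.0j, z, [1.0], [z1], t=2.0)
print(abs(beta_t))                   # still 0.5: |R(z, t)| is conserved
print(gamma_t)                       # gamma_1(0) * exp(2 * 2 sinh 0.7)
```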

2.3.3 The Inverse Scattering Transform

This problem is one of the most intriguing and instructive branches of mathematical physics. In developing itself, it reveals new and unexpected aspects and is far from being exhausted.
L.D. Faddeev

Inverse problems arise in many fields of science, where observed data serve as the source for recovering model parameter values. Examples of solving inverse problems


can be encountered in geophysics, astronomy, medical imaging, tomography, spectral analysis, nondestructive diagnostics and so on. In our case, when deriving the equations of the inverse scattering problem, the analytic properties of the functions $\psi_n(z)$ and $\varphi_n(z)$ play a fundamental role.
Let us multiply eq. (2.187) by $\frac{1}{2\pi i}z^{m-1}$ ($n \le m$) and integrate over the closed contour $|z| = 1$:
$$\frac{1}{2\pi i}\oint \frac{\psi_n(z)}{\alpha(z)}z^{m-1}dz = \frac{1}{2\pi i}\oint \bar\varphi_n(z)z^{m-1}dz + \frac{1}{2\pi i}\oint R(z)\varphi_n(z)z^{m-1}dz. \qquad (2.215)$$
For compactness of notation, we have omitted the time dependence here; the integration is performed in the counterclockwise direction. At the outset, we transform the right-hand side of this equation using the previously obtained expressions for the Jost functions:
$$\varphi_n(z) = \sum_{s=n}^{\infty} K(n, s)z^{s}, \qquad \bar\varphi_n(z) = \varphi_n\!\left(\tfrac{1}{z}\right) = \sum_{s=n}^{\infty} K(n, s)z^{-s}.$$
The first term takes the simple form
$$\frac{1}{2\pi i}\oint \bar\varphi_n(z)z^{m-1}dz = \sum_{s=n}^{\infty} K(n, s)\,\frac{1}{2\pi i}\oint z^{-s}z^{m-1}dz = K(n, m), \qquad (2.216)$$
because the integrals in the above formula are easily calculated after the substitution $z = e^{i\phi}$ ($0 \le \phi \le 2\pi$):
$$\frac{1}{2\pi i}\oint z^{-s}z^{m-1}dz = \frac{1}{2\pi}\int_0^{2\pi} \exp\left[i(m - s)\phi\right]d\phi = \delta_{ms}.$$

The term with the reflection coefficient $R(z)$ reduces to
$$\frac{1}{2\pi i}\oint R(z)z^{m-1}\varphi_n(z)dz = \sum_{s=n}^{\infty} K(n, s)\,\frac{1}{2\pi i}\oint R(z)z^{s+m-1}dz = \sum_{s=n}^{\infty} K(n, s)F_w(s + m), \qquad (2.217)$$
where the function $F_w(m)$ is determined by the contribution of the continuous spectrum:
$$F_w(m) = \frac{1}{2\pi i}\oint R(z)z^{m-1}dz. \qquad (2.218)$$
It is worth drawing attention to a distinction in the simplification of the two sides: the right-hand side is simplified, in effect, by the Fourier transform, while to simplify the left-hand side we must use the analyticity properties of the


Jost functions. It should be borne in mind that the function $\frac{\psi_n(z)}{\alpha(z)}z^{m-1}$ is analytic inside the circle $|z| < 1$, except for the poles $z_1, z_2, \ldots, z_N$ ($|z_i| < 1$), the zeros of the function $\alpha(z)$, and the pole at $z = 0$, which appears from the expansion (2.196). Cauchy's theorem says that the left-hand side of eq. (2.215) is equal to the sum of the residues (Res) of the function $\frac{\psi_n(z)}{\alpha(z)}z^{m-1}$ at these poles:
$$\frac{1}{2\pi i}\oint \frac{\psi_n(z)}{\alpha(z)}z^{m-1}dz = \mathrm{Res}\left[\frac{\psi_n(z)}{\alpha(z)}z^{m-1}\right]_{z=0} + \sum_{i=1}^{N} \mathrm{Res}\left[\frac{\psi_n(z)}{\alpha(z)}z^{m-1}\right]_{z=z_i}. \qquad (2.219)$$

Find first the value of $\alpha(z)$ at $z = 0$. Given eqs (2.195) and (2.198) in the limit $z \to 0$ and the substitution of the expansions (2.191) and (2.196) into the definition (2.189) of $\alpha(z)$, we arrive at
$$\alpha(0) = 2a_n K(n, n)\bar K(n+1, n+1) = K(n, n)\bar K(n, n) = g$$
and
$$\mathrm{Res}\left[\frac{\psi_n(z)}{\alpha(z)}z^{m-1}\right]_{z=0} = \frac{\bar K(n, n)\,\delta_{nm}}{g} = \frac{\delta_{nm}}{K(n, n)}.$$
To calculate the residues of the second summand in eq. (2.219), we make use of the expansion
$$\alpha(z) = \frac{d\alpha(z_i)}{dz}(z - z_i) + \cdots = \alpha'(z_i)(z - z_i) + \cdots$$
in the neighborhood of the point $z = z_i$. As a consequence, we obtain the intermediate result
$$\mathrm{Res}\left[\frac{\psi_n(z)}{\alpha(z)}z^{m-1}\right]_{z=z_i} = \frac{\psi_n(z_i)}{\alpha'(z_i)}z_i^{m-1}.$$
According to eqs (2.201) and (2.209), the last expression can be written as
$$\frac{\psi_n(z_i)}{\alpha'(z_i)}z_i^{m-1} = \frac{\gamma_i\varphi_n(z_i)}{\alpha'(z_i)}z_i^{m-1} = \frac{\gamma_i z_i^{m-1}}{\alpha'(z_i)}\sum_{s=n}^{\infty} K(n, s)z_i^{s} = -c_i^2\sum_{s=n}^{\infty} K(n, s)z_i^{s+m}.$$
In the long run, gathering all the terms and restoring the time dependence, we give eq. (2.215) its final form:
$$K(n, m, t) + \sum_{s=n}^{\infty} K(n, s, t)F(s + m, t) - \frac{\delta_{nm}}{K(n, n, t)} = 0 \qquad (n \le m), \qquad (2.220)$$

where
$$F(s + m, t) = \frac{1}{2\pi i}\oint R(z, t=0)\exp\left[\left(z^{-1} - z\right)t\right]z^{s+m-1}dz + \sum_{i=1}^{N} c_i(t=0)^2\exp\left[\left(z_i^{-1} - z_i\right)t\right]z_i^{s+m}. \qquad (2.221)$$

This equation is a discrete analogue of the Gelfand–Levitan–Marchenko equation. The diligent reader will meet it in modified versions of the inverse scattering problem concerned with integrating nonlinear PDEs. It represents a system of linear equations
$$G(n, m, t) + F(n + m, t) + \sum_{s=n+1}^{\infty} G(n, s, t)F(s + m, t) = 0 \qquad (n < m) \qquad (2.222)$$
for the off-diagonal elements
$$G(n, s) = \frac{K(n, s)}{K(n, n)} \qquad (n < s).$$

The solution of this system, via eq. (2.220) for $n = m$, determines the diagonal elements:
$$\frac{1}{K(n, n, t)^2} = 1 + F(2n, t) + \sum_{s=n+1}^{\infty} G(n, s, t)F(s + n, t). \qquad (2.223)$$

These equations are the essence of the method of the inverse scattering problem (MISP). It includes three steps. As is seen from the foregoing, the unambiguous nontrivial change of the variables $\{a_n(t), b_n(t)\}$ for the scattering data turns the Toda lattice equations into a trivial system of ordinary differential equations. In the first step (the direct scattering problem), knowing the initial values $a_n(t=0)$, $b_n(t=0)$ ($-\infty \le n \le \infty$), we calculate the scattering data $S(t=0)$ (2.210). The second step makes it possible to follow how the scattering data $S(t)$ evolve, eq. (2.214). At last, the third step (the inverse scattering problem) solves eq. (2.222) to determine the matrix $G(n, s, t)$ and, consequently, the quantities $K(n, n, t)$. Finally, we can use the formula
$$2a_n(t) = e^{\frac{x_n(t) - x_{n+1}(t)}{2}} = \frac{K(n+1, n+1, t)}{K(n, n, t)} \qquad (2.224)$$
to compute the change in the deformation, and the displacements of the lattice particles
$$x_n(t) = -2\ln K(n, n, t) + \mathrm{const}. \qquad (2.225)$$
The scheme below illustrates these steps:
$$\{x_n(t=0),\ p_n(t=0)\} \xrightarrow{\ \mathrm{I}\ } S(t=0) \xrightarrow{\ \mathrm{II}\ } S(t) \xrightarrow{\ \mathrm{III}\ } x_n(t).$$
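Steps I–III can be exercised numerically in the reflectionless case. The sketch below (our own parameter choices; $R = 0$, $N = 1$, kernel $F(m) = c^2 z^m$ at a fixed instant) solves a truncated version of the linear system (2.222), forms $1/K(n,n)^2$ from (2.223), and compares $x_n = -2\ln K(n,n)$ with the closed one-soliton form $x_n = \ln[B(n-1)/B(n)]$, $B(n) = 1 + c^2 z^{2(n+1)}/(1-z^2)$, which appears below in the discussion of solitons:

```python
import numpy as np

# Reflectionless (R = 0, N = 1) discrete Gelfand-Levitan-Marchenko
# equations (2.222)-(2.223), solved as a truncated linear system.
# k and c2 are illustrative; z = e^{-k} is the bound-state parameter.
k, c2 = 0.7, 1.3
z = np.exp(-k)

F = lambda m: c2 * z ** m                        # kernel (2.221) with R = 0
B = lambda n: 1.0 + c2 * z ** (2 * (n + 1)) / (1.0 - z ** 2)

def x_from_glm(n, M=80):
    s = np.arange(n + 1, n + M + 1)              # truncate at s, m <= n + M
    A = np.eye(M) + F(s[None, :] + s[:, None])   # rows: m, columns: s
    G = np.linalg.solve(A, -F(n + s))            # eq. (2.222)
    inv_K2 = 1.0 + F(2 * n) + G @ F(s + n)       # eq. (2.223)
    return np.log(inv_K2)                        # x_n = -2 ln K(n, n)

for n in (-3, 0, 5):
    print(n, x_from_glm(n), np.log(B(n - 1) / B(n)))   # the two columns agree
```

The matrix $A$ is the identity plus a rank-one term, so the truncated system is well conditioned and the agreement is limited only by the truncation $z^{2M}$.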


All the stages are associated with solving linear problems, which is the main advantage of the procedure. The method presented allows us not only to examine the evolution of the lattice particles, but also to obtain a set of particular solutions. Let us look first at the case of infinitely small initial displacements and velocities in the lattice, that is, $x_n(t=0) \sim O(\epsilon)$, $b_n(t=0) \sim O(\epsilon)$, $\epsilon \ll 1$ … Consider a one-soliton solution with $z_1 = \varepsilon e^{-k}$ ($\varepsilon = \pm 1$, $k > 0$). We denote
$$\frac{c_1(t=0)^2\exp\left[\left(z_1^{-1} - z_1\right)t\right]}{1 - z_1^2} = \exp\left[2\varepsilon\sinh k\,(t - t_0)\right]$$
and write down $B(n)$ in the form
$$B(n) = 1 + c_1(t)^2\frac{z_1^{2(n+1)}}{1 - z_1^2} = 1 + \exp\left[-2k(n+1) + 2\varepsilon\sinh k\,(t - t_0)\right].$$

Then the displacements of the lattice particles
$$x_n = \ln\frac{1 + \exp\left[-2kn + 2\varepsilon\sinh k\,(t - t_0)\right]}{1 + \exp\left[-2k(n+1) + 2\varepsilon\sinh k\,(t - t_0)\right]} + \mathrm{const} \qquad (2.236)$$
are nothing but a localized solitary wave moving with the constant velocity
$$v = \frac{\sinh k}{k} > 1$$

in the positive ($\varepsilon = 1$) or negative ($\varepsilon = -1$) direction without changing its shape. Entering the dimensionless variables (2.155), we have chosen the distance between the particles in the lattice to be unity. Then the coordinate of the particle with the index $n$ is equal to $y_n$. Due to the presence of the localized wave, its coordinate is $y_n + x_n$, and the difference $y_n - y_{n-1}$ between the positions of neighboring particles changes by

Figure 2.9: Profile of the local deformation $x_n - x_{n-1}$ of the one-soliton solution ($k = 0.7$); the minimum depth equals $-2\ln\cosh k$.

$1 + x_n - x_{n-1}$. The local deformation of the lattice has a simple form:
$$x_n - x_{n-1} = -\ln\frac{B(n)B(n-2)}{B(n-1)^2} = -\ln\left[1 + \frac{\sinh^2 k}{\cosh^2\left(-kn + \varepsilon\sinh k\,(t - t_0)\right)}\right]. \qquad (2.237)$$

Because $(x_n - x_{n-1}) < 0$, the soliton represents a localized compression wave. Its structure for $k = 0.7$ is shown in Fig. 2.9. With decreasing width $d \sim \frac{1}{k}$, both the velocity and the amplitude of the wave increase. The solitary wave always propagates faster than the low-amplitude linear waves. Note at last that the last formula in dimensional variables can be written as
$$x_n - x_{n-1} = -\frac{1}{b}\ln\left[1 + \frac{\sinh^2 k}{\cosh^2\left(-kn + \varepsilon\sinh k\left(\sqrt{\dfrac{ab}{m}}\,t - t_0\right)\right)}\right].$$
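The properties of the one-soliton profile (2.237), the equality of its $B(n)$ and $\cosh$ forms, the minimum depth $-2\ln\cosh k$ marked in Fig. 2.9, and the supersonic speed, can all be verified directly (a sketch with illustrative values of $k$, $\varepsilon$, $t$, $t_0$):

```python
import math

# One-soliton deformation (2.237): check the identity between the two
# forms, the level -2 ln cosh k of the deepest compression (Fig. 2.9),
# and the speed sinh(k)/k > 1.  Parameter values are illustrative.
k, eps, t0, t = 0.7, 1, 0.0, 3.0

def B(n):
    return 1.0 + math.exp(-2 * k * (n + 1) + 2 * eps * math.sinh(k) * (t - t0))

def via_B(n):          # -ln[B(n) B(n-2) / B(n-1)^2]
    return -math.log(B(n) * B(n - 2) / B(n - 1) ** 2)

def via_cosh(n):       # right-hand side of eq. (2.237)
    c = math.cosh(-k * n + eps * math.sinh(k) * (t - t0))
    return -math.log(1.0 + math.sinh(k) ** 2 / c ** 2)

profile = [via_B(n) for n in range(-30, 31)]
assert all(abs(via_B(n) - via_cosh(n)) < 1e-9 for n in range(-30, 31))
print(min(profile), -2 * math.log(math.cosh(k)), math.sinh(k) / k)
```

The minimum of the sampled profile lies just above $-2\ln\cosh k$ (the continuous minimum is attained between lattice sites), and $\sinh k/k > 1$ confirms the supersonic character of the wave.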

It should be emphasized that this formula cannot be derived from perturbation theory. Such solutions are essentially nonlinear formations and bear the name of solitons. In general, a soliton is a solitary wave propagating through a physical medium with its shape and propagation velocity unchanged. Its name comes from the English word “solitary”; “-on” is a typical ending for terms of this sort (e.g., an electron, a photon, etc.) and indicates the similarity with a particle. However, the soliton dynamics in integrable systems has important features, one of which we illustrate by the example of a two-soliton solution. The solution depends on four parameters $c_i(t=0)$, $z_i = \varepsilon_i\exp[-k_i]$ ($\varepsilon_i = \pm 1$, $k_i > 0$, $i = 1, 2$), and the determinant of the matrix $B(n)$ has the structure

$$\det B(n) = 1 + \frac{c_1(t)^2 z_1^{2(n+1)}}{1 - z_1^2} + \frac{c_2(t)^2 z_2^{2(n+1)}}{1 - z_2^2} + \frac{(z_1 - z_2)^2}{(1 - z_1 z_2)^2}\,\frac{c_1(t)^2 z_1^{2(n+1)}}{1 - z_1^2}\,\frac{c_2(t)^2 z_2^{2(n+1)}}{1 - z_2^2}. \qquad (2.238)$$
Use of the notations
$$c_i(t)^2 = \left(1 - z_i^2\right)\exp\left[\left(z_i^{-1} - z_i\right)t + 2\phi_i\right], \qquad \eta_i(n) = -2(n+1)k_i + 2\varepsilon_i\sinh(k_i)\,t + 2\phi_i, \qquad i = 1, 2,$$
gives a more compact form for this formula:
$$\det B(n) = 1 + e^{\eta_1} + e^{\eta_2} + \frac{(z_1 - z_2)^2}{(1 - z_1 z_2)^2}\,e^{\eta_1 + \eta_2}. \qquad (2.239)$$

The latter represents a nonlinear superposition of two solitons. For definiteness, we put $k_1 > k_2$, $\varepsilon_1 = \varepsilon_2 = 1$. Let us dwell upon the structure of such a wave in a coordinate system moving with the velocity of the first soliton, $v_1 = \sinh k_1/k_1$. Then $\eta_1(n) = \mathrm{const}$ and we have
$$\eta_2(n) = k_2\left[-2t\left(\frac{\sinh k_1}{k_1} - \frac{\sinh k_2}{k_2}\right) + \frac{\eta_1(n)}{k_1} - \frac{2\phi_1}{k_1} + \frac{2\phi_2}{k_2}\right], \qquad \frac{\sinh k_1}{k_1} - \frac{\sinh k_2}{k_2} > 0.$$

As $t \to -\infty$ the quantity $\eta_2(n)$ tends to infinity, the first and second terms of (2.239) are small, and we have
$$\det B(n) \approx \left(1 + \frac{(z_1 - z_2)^2}{(1 - z_1 z_2)^2}\,e^{\eta_1(n)}\right)e^{\eta_2(n)}.$$

As $t \to \infty$ the function $\eta_2(n)$ tends to $-\infty$ and $\det B(n) \approx 1 + e^{\eta_1}$. It means that in the moving coordinate system, where $\eta_1(n) = \mathrm{const}$, as $t \to \pm\infty$ the deformation can be found from
$$x_n - x_{n-1} = -\ln\left[1 + \frac{\sinh^2 k_1}{\cosh^2\left(\left(\eta_1(n) + A_\pm\right)/2 + k_1\right)}\right], \qquad (2.240)$$
where $A_- = \ln\frac{(z_1 - z_2)^2}{(1 - z_1 z_2)^2}$, $A_+ = 0$. Similarly, in a coordinate system moving with the velocity of the second soliton, $v_2 = \sinh k_2/k_2$, as $t \to \pm\infty$ the deformation exhibits the asymptotic behavior
$$x_n - x_{n-1} = -\ln\left[1 + \frac{\sinh^2 k_2}{\cosh^2\left(\left(\eta_2(n) + B_\pm\right)/2 + k_2\right)}\right], \qquad (2.241)$$

Figure 2.10: Snapshots, at regular time intervals, of the collision of two solitons moving in the same direction ($k_1 = 1.5$, $k_2 = 0.7$); the deformation minima equal $-2\ln\cosh k_1$ and $-2\ln\cosh k_2$.

where $B_- = 0$, $B_+ = A_-$. Thus, for $\varepsilon_1 = \varepsilon_2 = 1$ the two-soliton solution describes the collision of two solitons with the velocities $v_1$ and $v_2$. As $t \to -\infty$ they are spatially separated, with the faster soliton lying to the left of the slower one because $\sinh k_1/k_1 > \sinh k_2/k_2$. The asymptotics of the two-soliton solution in this limit is the sum of the right-hand sides of the formulas (2.240) and (2.241). With the passage of time the faster soliton overtakes the slower one. As they approach each other, the pattern of spatially separated solitons gets lost. However, the faster soliton eventually leaves the slower one behind, and as $t \to +\infty$ they become spatially separated again. Having interacted, the faster soliton suffers a change in phase and shifts forward by $-A_-/(2k_1) > 0$ as compared with the position it would occupy in the absence of the second soliton. At the same time, the slower soliton goes backward by the magnitude $|A_-|/(2k_2)$. Calculated from the exact solutions, the scenario of motion of the two solitons with the parameters $k_1 = 1.5$, $k_2 = 0.7$ at regular time intervals is depicted in Fig. 2.10. During the collision the number of particles and the amplitudes and velocities of the solitons do not change; the scattering process only causes the solitons to change in phase by $A_-$. Since $\ln\frac{(z_1 - z_2)^2}{(1 - z_1 z_2)^2} < 0$, due to the interaction the faster soliton shifts forward and the slower one backward.
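The signs of the phase shifts are easy to confirm numerically. In the sketch below (parameters of Fig. 2.10; the factor $1/2$ comes from the argument $(\eta_1 + A_\pm)/2$ in (2.240)) the quantity $A_-$ is negative, so the faster soliton is displaced forward and the slower one backward:

```python
import math

# Phase shift acquired in a two-soliton collision:
# A_minus = ln[(z1 - z2)^2 / (1 - z1 z2)^2], z_i = exp(-k_i).
k1, k2 = 1.5, 0.7
z1, z2 = math.exp(-k1), math.exp(-k2)
A_minus = math.log((z1 - z2) ** 2 / (1 - z1 * z2) ** 2)
shift_fast = -A_minus / (2 * k1)     # > 0: the faster soliton jumps forward
shift_slow = A_minus / (2 * k2)      # < 0: the slower soliton falls back
print(A_minus, shift_fast, shift_slow)
```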

Figure 2.11: Snapshots, at regular time intervals, of the collision of two solitons moving toward each other ($k_1 = 1.5$, $k_2 = 0.7$).

In the case of $\varepsilon_1\varepsilon_2 = -1$, the solitons move towards each other, and if the distance between them is greater than their characteristic dimensions, the solution has two sharp minima. Figure 2.11 gives an understanding of the motion of the solitons with the parameters $k_1 = 1.5$, $k_2 = 0.7$ at regular intervals. As the solitons approach each other, the amplitude of the faster soliton decreases while that of the slower one grows. Once the amplitudes equalize, the solitons trade amplitudes and velocities and then diverge: the initial amplitude of the slower soliton comes up to the amplitude of the faster one. In the general case the $N$-soliton solution (2.235) describes the scattering of $N$ solitons. The total phase increment of a soliton is then equal to the sum of the phases acquired in the pairwise collisions with the other solitons. The solitons behave like particles (particle-like waves) because, having interacted with each other, they are not destroyed and preserve their structure unchanged. The outlined picture of the soliton motion is common to all integrable systems.


2.3.5 The Inverse Scattering Problem and the Riemann Problem

An idea which can be used only once is a trick. If one can use it more than once it becomes a method.
G. Polya, G. Szego

The algorithm of the inverse scattering problem presented in the previous section rests on the finite-difference equations (2.222) and (2.223). Below we give this method another fruitful formulation, in which the Riemann problem plays a central role. The relation (2.187), written as
$$\frac{\psi_n(z)}{\alpha(z)} = \bar\varphi_n(z) + R(z)\varphi_n(z), \qquad (2.242)$$

is a particular example of the typical setting of the Riemann problem. In our case, the problem is formulated as follows. We are given a closed contour dividing the complex plane into an internal part $D_+$ and an external part $D_-$ (Fig. 2.8). We wish to find two functions: $\frac{\psi_n(z)}{\alpha(z)}$, analytic in the region $D_+$ except at the points $z = z_i$ ($i = 1, 2, \ldots, N$), where it has simple poles; and $\bar\varphi_n(z)$, analytic in the region $D_-$ with a given behavior as $z \to \infty$. On the contour $\Gamma$ they satisfy the condition (2.242). We now show that this problem can be solved through a certain system of singular integral equations.
First, we shall provide the reader with some necessary information about special integrals of complex function theory [24]. Let $\Gamma$ be a closed contour in the plane of the complex variable $z$; $D_+$ is the region inside the contour, and $D_-$ is the region complementary to $D_+ + \Gamma$. Recall that if $f(z)$ is an analytic function in the region $D_+$ and continuous in $D_+ + \Gamma$, its values in $D_+$ can be calculated from its boundary values by the famous Cauchy formula
$$\frac{1}{2\pi i}\oint_\Gamma \frac{f(t)\,dt}{t - z} = \begin{cases} f(z), & z \in D_+, \\ 0, & z \in D_-. \end{cases} \qquad (2.243)$$

If $f(z)$ is an analytic function in the region $D_-$, continuous in $D_- + \Gamma$, and $f(\infty) \ne \infty$, then
$$\frac{1}{2\pi i}\oint_\Gamma \frac{f(t)\,dt}{t - z} = \begin{cases} 0, & z \in D_+, \\ -f(z) + f(\infty), & z \in D_-. \end{cases} \qquad (2.244)$$
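Formulas (2.243) and (2.244) are easy to illustrate numerically: for a function analytic in the whole plane, the trapezoidal rule on the unit circle reproduces $f(z)$ inside the contour and $0$ outside with spectral accuracy (the choice $f(z) = z^2$ and the node count are ours):

```python
import cmath

# Cauchy formula (2.243) on the unit circle, evaluated by the
# trapezoidal rule with N equally spaced nodes (illustrative N).
def cauchy_integral(f, z, N=4000):
    total = 0.0 + 0.0j
    for m in range(N):
        t = cmath.exp(2j * cmath.pi * m / N)
        dt = 2j * cmath.pi * t / N           # dt = i t dphi
        total += f(t) / (t - z) * dt
    return total / (2j * cmath.pi)

f = lambda t: t ** 2
print(cauchy_integral(f, 0.3 + 0.2j))        # close to f(z) = 0.05 + 0.12j
print(cauchy_integral(f, 1.7))               # close to 0 (z outside)
```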

We choose the direction of traversal of the contour so that the region $D_+$ lies to the left. For integrals of Cauchy type, the integrand in eq. (2.243) is analytic in $z$ only inside the contour $\Gamma$. At points lying on the contour, the integrand ceases to be analytic and becomes infinite. The integral
$$\lim_{z \to z_0}\oint_\Gamma \frac{f(t)\,dt}{t - z}, \qquad z_0 \in \Gamma,$$

Figure 2.12: The contour $\Gamma$ with the points $z_a$, $z_b$ and a small circular arc $\Gamma_1$ of radius $r$ around the point $z_0 \in \Gamma$ (its endpoints $z_1$ and $z_2$ lying on the contour), used in the definition of the principal value.

is called special, or singular. It acquires a sense through a special limiting transition. We draw a circular arc $\Gamma_1$ of radius $r$ around $z_0$; the integral over $\Gamma_1$ is then easy to estimate:
$$\lim_{r \to 0}\int_{\Gamma_1} \frac{f(t)\,dt}{t - z_0} = i f(z_0)\int d\phi = i\pi f(z_0). \qquad (2.247)$$

The limit of the sum of the first two integrals in eq. (2.246) as $r \to 0$ is called the principal value of the singular integral and is denoted by $P$:
$$\lim_{r \to 0}\left(\int_{z_a}^{z_1} \frac{f(t)\,dt}{t - z_0} + \int_{z_2}^{z_b} \frac{f(t)\,dt}{t - z_0}\right) = P\int_{z_a}^{z_b} \frac{f(t)\,dt}{t - z_0}.$$

If the contour is closed, then $z_a = z_b$ and we arrive at the final formula
$$\lim_{z \to z_0}\oint_\Gamma \frac{f(t)\,dt}{t - z} = i\pi f(z_0) + P\oint_\Gamma \frac{f(t)\,dt}{t - z_0}, \qquad z \in D_+, \quad z_0 \in \Gamma. \qquad (2.248)$$


In a similar way we can prove the formula
$$\lim_{z \to z_0}\oint_\Gamma \frac{f(t)\,dt}{t - z} = -i\pi f(z_0) + P\oint_\Gamma \frac{f(t)\,dt}{t - z_0}, \qquad z \in D_-, \quad z_0 \in \Gamma. \qquad (2.249)$$

These formulas are referred to as the Sokhotski formulas. In what follows, to employ them it is convenient to pass from the Jost functions to functions with constant values as $z \to \infty$ and at $z = 0$. We introduce new functions $\Psi_n(z)$, $\Phi_n(z)$ and $\bar\Phi_n(z)$ as follows:
$$\psi_n(z) = z^{-n}e^{-\frac{x_n(t)}{2}}\Psi_n(z), \qquad \bar\varphi_n(z) = z^{-n}e^{-\frac{x_n(t)}{2}}\bar\Phi_n(z), \qquad \varphi_n(z) = z^{n}e^{-\frac{x_n(t)}{2}}\Phi_n(z). \qquad (2.250)$$

Then $\Psi_n(z)$ has no pole at $z = 0$, and $\bar\Phi_n(z) \to 1$ as $z \to \infty$. Now we rewrite relation (2.242) in terms of the new functions:
$$\frac{\Psi_n(z)}{\alpha(z)} = \bar\Phi_n(z) + R(z)\Phi_n(z)z^{2n}, \qquad z \in \Gamma. \qquad (2.251)$$
According to the Cauchy theorem, we have
$$\bar\Phi_n(z) = 1 - \frac{1}{2\pi i}\oint_\Gamma \frac{\bar\Phi_n(z')\,dz'}{z' - z}, \qquad z \in D_-, \qquad (2.252)$$
$$\frac{\Psi_n(z)}{\alpha(z)} = \frac{1}{2\pi i}\oint_{\bar\Gamma} \frac{\Psi_n(z')\,dz'}{\alpha(z')(z' - z)}, \qquad z \in D_+. \qquad (2.253)$$

Since, when traversing the contour $\Gamma$, the region $D_-$ remains on the right, the minus sign stands before the integral on the right-hand side of eq. (2.252). The contour $\bar\Gamma$ encloses both the contour $\Gamma$ and circles of infinitesimal radius around the points $z = z_i$ ($i = 1, 2, \ldots, N$), connected by cuts with the contour (Fig. 2.13). The edges $\Gamma_+$, $\Gamma_-$ of one of the cuts are shown in Fig. 2.13. It is clear that the integral over the sum of all the edges of the cuts is equal to zero. Then
$$\frac{1}{2\pi i}\oint_{\bar\Gamma} \frac{\Psi_n(z')\,dz'}{\alpha(z')(z' - z)} = \frac{1}{2\pi i}\oint_{\Gamma} \frac{\Psi_n(z')\,dz'}{\alpha(z')(z' - z)} - \sum_{i=1}^{N}\mathrm{Res}\left[\frac{\Psi_n(z')}{\alpha(z')(z' - z)}\right]_{z'=z_i},$$
where, according to eqs (2.201), (2.209) and (2.250), we have
$$\mathrm{Res}\left[\frac{\Psi_n(z')}{\alpha(z')(z' - z)}\right]_{z'=z_i} = \frac{\Psi_n(z_i)}{\alpha'(z_i)(z_i - z)} = \frac{\gamma_i\Phi_n(z_i)z_i^{2n}}{\alpha'(z_i)(z_i - z)} = -\frac{c_i^2\Phi_n(z_i)z_i^{2n+1}}{z_i - z}. \qquad (2.254)$$

Figure 2.13: The contour $\bar\Gamma$: the contour $\Gamma$, small circles around the points $z_1, \ldots, z_N$, and the cut edges $\Gamma_+$, $\Gamma_-$ connecting them.

At last, the final expression is
$$\frac{\Psi_n(z)}{\alpha(z)} = \frac{1}{2\pi i}\oint_\Gamma \frac{\Psi_n(z')\,dz'}{\alpha(z')(z' - z)} + \sum_{i=1}^{N}\frac{c_i^2\Phi_n(z_i)z_i^{2n+1}}{z_i - z}. \qquad (2.255)$$

Let us find another integral representation for the eigenfunctions $\Psi_n(z)$ and $\bar\Phi_n(z)$. We insert the relation
$$\bar\Phi_n(z) = \frac{\Psi_n(z)}{\alpha(z)} - R(z)\Phi_n(z)z^{2n} \qquad (z \in \Gamma)$$
into the right-hand side of eq. (2.252). The right part of the obtained expression contains the integral
$$-\frac{1}{2\pi i}\oint_\Gamma \frac{\Psi_n(z')\,dz'}{\alpha(z')(z' - z)}, \qquad z \in D_-. \qquad (2.256)$$
It is equal to the sum of the residues only at the points $z = z_i$ ($i = 1, 2, \ldots, N$), because $\frac{\Psi_n(z')}{z' - z}$ is a meromorphic function in the region $D_+$. As a result, the function $\bar\Phi_n(z)$ has the integral representation
$$\bar\Phi_n(z) = 1 + \sum_{i=1}^{N}\frac{c_i^2\Phi_n(z_i)z_i^{2n+1}}{z_i - z} + \frac{1}{2\pi i}\oint_\Gamma \frac{R(z')z'^{2n}\Phi_n(z')\,dz'}{z' - z}, \qquad z \in D_-. \qquad (2.257)$$

From the foregoing, it becomes clear that all the Jost functions, for all values of the spectral parameter, are determined only by the wave functions $\Phi_n(z)$ in the scattering region and by the bound states. Putting $z = z_k^{-1}$ in eq. (2.257) and given the relation $\bar\Phi_n\!\left(z_k^{-1}\right) = \Phi_n(z_k)$, we can derive a closed system of equations for the wave functions. Then
$$\Phi_n(z_k) = 1 + \sum_{i=1}^{N}\frac{c_i^2\Phi_n(z_i)z_i^{2n+1}}{z_i - z_k^{-1}} + \frac{1}{2\pi i}\oint_\Gamma \frac{R(z')z'^{2n}\Phi_n(z')\,dz'}{z' - z_k^{-1}}. \qquad (2.258)$$


Since the function $\Phi_n(z)$ is continuous in the region $D_- + \Gamma$, in the limit $z \to \Gamma$ eq. (2.249) gives
$$\bar\Phi_n(z) = 1 + \sum_{i=1}^{N}\frac{c_i^2\Phi_n(z_i)z_i^{2n+1}}{z_i - z} - \frac{1}{2}R(z)z^{2n}\Phi_n(z) + \frac{1}{2\pi i}\,P\!\oint_\Gamma \frac{R(z')z'^{2n}\Phi_n(z')\,dz'}{z' - z}, \qquad z \in \Gamma. \qquad (2.259)$$

The system of singular equations (2.258) and (2.259) is the set of basic equations formulating the MISP with the use of the Riemann problem. Solving the Riemann problem with the prescribed time dependence of $c_i$ and $R(z')$, we are capable of finding the Jost functions at any time. Note that the function $\Psi_n(z)$ satisfies the equation
$$z\left(\Psi_{n-1}(z) - \Psi_n(z)\right) + \frac{-\Psi_n(z) + 4a_n^2\Psi_{n+1}(z)}{z} + 2b_n\Psi_n(z) = 0.$$
After substituting the asymptotic expansion $\Psi_n(z) \approx \Psi_n(0) + z\Psi_n'(0) + \cdots$ into it and collecting the singular terms as $z \to 0$, we arrive at the formula for calculating $x_n$:
$$x_n = \ln\Psi_n(z = 0) + \mathrm{const}. \qquad (2.260)$$

The above approach, in a more general formulation, is known as the "dressing method" [13–15]. It allows one to construct solutions of integrable evolution equations, including PDEs, starting from a given particular solution. Formula (2.256) enables us to demonstrate the equivalence of the two formulations of the inverse scattering method. Plugging the representations for Ī_n(z) and I_n(z) in the form

\bar I_n(z) = 1 + \sum_{m=n+1}^{\infty} \Gamma_{nm}\, z^{-m+n}, \qquad I_n(z) = 1 + \sum_{m=n+1}^{\infty} \Gamma_{nm}\, z^{m-n}, \qquad \Gamma_{nm} = e^{x_n/2} K(n, m) = \frac{K(n, m)}{K(n, n)}

into it and equating the coefficients of like inverse powers of the parameter z, we deduce the Gelfand–Levitan–Marchenko equation (2.222). Without doubt, the main advantage of eqs (2.258) and (2.259) is a simple and effective procedure for deriving multi-soliton solutions. For further discussion we present the explicit form of the function Ī_n(z) for a one-soliton solution. From eqs (2.258) and (2.259), when N = 1 and R(z) = 0, it immediately follows that

\bar I_n(z) = \frac{(z - z_1)\, e^{t(z_1 - z_1^{-1})} + z_1^{2n+1}(-1 + z_1 z)}{(z - z_1)\left(e^{t(z_1 - z_1^{-1})} + z_1^{2n+2}\right)}.   (2.261)

2.3 Dynamics of Particles in the Toda Lattice


2.3.6 Solitons as Elementary Excitations of Nonlinear Integrable Systems

In the present section, we show that the Toda lattice equations have an infinite set of integrals of motion and that any excitation in the lattice is a superposition of noninteracting nonlinear waves and solitons. At the outset, we look into the structure of the integrals of motion. The quantity Tr(L^n), as a polynomial in a_n, b_n, is an integral of motion. This assertion follows directly from the matrix equation (2.161) and the cyclic property of the trace:

\frac{d}{dt}\,\mathrm{Tr}\, L^n = n\,\mathrm{Tr}\!\left[(AL - LA)\, L^{n-1}\right] = 0.

In integrable models, the connection between Tr(L^n) and the scattering data is given by the so-called trace formulas; the interested reader can become familiar with them in the monograph [24]. The explicit form of the first integrals of motion and their expression in terms of the scattering data are simpler to find as follows. The function a(z) (2.189) does not depend on time and is a functional of a_n, b_n. Consequently, when it is expanded in powers of z^{-1} or z, its coefficients give conserved quantities. Letting n → −∞ in eq. (2.189) and using the triangular representation (2.191), we single out the first terms of the expansion:

a(z) = K(n, n) + zK(n, n+1) + z^2\left(K(n, n+2) - K(n, n+1) + K(n, n)\right) + O(z^3), \qquad (n \to -\infty).   (2.262)
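The conservation of Tr(L^n) can be checked numerically. The following sketch (not from the book; it assumes a periodic closure of the lattice) integrates the Toda equations (2.275) in the variables a_n, b_n with a fourth-order Runge-Kutta scheme and monitors Tr L^2 and Tr L^3:

```python
# Sketch (not from the book): periodic Toda lattice,
# eqs (2.275):  da_n/dt = a_n (b_n - b_{n+1}),  db_n/dt = 2 (a_{n-1}^2 - a_n^2).
# The traces Tr L^2 and Tr L^3 of the Lax matrix should stay constant in time.
import math
import random

N = 8
random.seed(1)
a = [0.5 * math.exp(0.3 * random.uniform(-1, 1)) for _ in range(N)]
b = [0.3 * random.uniform(-1, 1) for _ in range(N)]

def rhs(a, b):
    da = [a[n] * (b[n] - b[(n + 1) % N]) for n in range(N)]
    db = [2.0 * (a[(n - 1) % N] ** 2 - a[n] ** 2) for n in range(N)]
    return da, db

def step(a, b, h):                      # classical 4th-order Runge-Kutta
    k1a, k1b = rhs(a, b)
    k2a, k2b = rhs([x + 0.5 * h * k for x, k in zip(a, k1a)],
                   [x + 0.5 * h * k for x, k in zip(b, k1b)])
    k3a, k3b = rhs([x + 0.5 * h * k for x, k in zip(a, k2a)],
                   [x + 0.5 * h * k for x, k in zip(b, k2b)])
    k4a, k4b = rhs([x + h * k for x, k in zip(a, k3a)],
                   [x + h * k for x, k in zip(b, k3b)])
    a = [x + h / 6 * (p + 2 * q + 2 * r + s)
         for x, p, q, r, s in zip(a, k1a, k2a, k3a, k4a)]
    b = [x + h / 6 * (p + 2 * q + 2 * r + s)
         for x, p, q, r, s in zip(b, k1b, k2b, k3b, k4b)]
    return a, b

def trace_power(a, b, p):               # Tr L^p for the tridiagonal Lax matrix
    L = [[0.0] * N for _ in range(N)]
    for n in range(N):
        L[n][n] = b[n]
        L[n][(n + 1) % N] += a[n]
        L[(n + 1) % N][n] += a[n]
    M = L
    for _ in range(p - 1):
        M = [[sum(M[i][k] * L[k][j] for k in range(N)) for j in range(N)]
             for i in range(N)]
    return sum(M[i][i] for i in range(N))

I2_0, I3_0 = trace_power(a, b, 2), trace_power(a, b, 3)
for _ in range(2000):                   # integrate up to t = 2
    a, b = step(a, b, 0.001)
drift = max(abs(trace_power(a, b, 2) - I2_0), abs(trace_power(a, b, 3) - I3_0))
```

The drift of both traces stays at the level of the integrator's error, as expected for integrals of motion.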

The function a(z) can be expressed via the scattering data. Recall that a(z) is analytic in the region D^+ and has a finite number of zeros at z = z_i (i = 1, 2, . . . , N), while the function a(1/z) is analytic in the region D^-. In addition, according to eq. (2.190), the relations

a(z)\,\bar a(z) = a(z)\, a\!\left(\frac{1}{z}\right) = a(z)\, a^*(z) = \left(1 - |R(z)|^2\right)^{-1}   (2.263)

are fulfilled on the contour A. We first show that the function a(z) in the region D^+ is completely determined by the modulus of the reflection coefficient on the contour A and by the discrete spectrum. The function

a_1(z) = \prod_{i=1}^{N} \frac{z z_i - 1}{z - z_i}\; a(z)   (2.264)

is also analytic in the region D^+ and has no zeros in it. In this case, |a(z)| = |a_1(z)| on the contour A. According to the Cauchy theorem (2.243) and (2.244), for the functions ln a_1(z) and ln a_1(1/z) the following relations hold true:


\ln a_1(z) = \frac{1}{2\pi i}\oint_A \frac{\ln a_1(z')\,dz'}{z' - z} \qquad (z \in D^+),

0 = \frac{1}{2\pi i}\oint_A \frac{\ln a_1(1/z')\,dz'}{z' - z} \qquad (z \in D^+).

Adding them together, we obtain the expression

\ln a_1(z) = \frac{1}{2\pi i}\oint_A \frac{\ln\left[a_1(z')\, a_1(1/z')\right] dz'}{z' - z} = \frac{1}{2\pi}\int_{-\pi}^{\pi} \frac{\ln\left[a_1(e^{ik})\, a_1(e^{-ik})\right]}{1 - z e^{ik}}\, dk
= \frac{1}{2\pi}\int_{0}^{\pi} \ln\left[a_1(e^{ik})\, a_1(e^{-ik})\right]\left[\frac{1}{1 - z e^{ik}} + \frac{1}{1 - z e^{-ik}}\right] dk, \qquad z \in D^+,   (2.265)

which allows one to find the function a_1(z) in the region D^+ from its values on the boundary of this region. Hence one is led to the useful formula

\ln a(z) = -\frac{1}{2\pi}\int_{0}^{\pi} \ln\left(1 - |R(e^{ik})|^2\right)\left[\frac{1}{1 - z e^{ik}} + \frac{1}{1 - z e^{-ik}}\right] dk + \sum_{i=1}^{N} \ln\frac{z - z_i}{z z_i - 1}, \qquad (z \in D^+),   (2.266)

according to which the function a(z) is recovered from the reflection coefficient and the discrete spectrum, that is, from the scattering data. Note that the expansion of ln a(z) in accordance with eqs (2.194), (2.195) and (2.262) has the form

\ln\frac{a(z)}{a(0)} = -zP - z^2 H + O(z^3).   (2.267)

Here P is the total momentum:

P = 2\sum_n b_n = \sum_n \dot x_n,   (2.268)

and H is the system's Hamiltonian:

H = \sum_n \left[(4a_n^2 - 1) + 2b_n^2\right] = \sum_n \left[\frac{\dot x_n^2}{2} + \exp(x_n - x_{n+1}) - 1\right].   (2.269)
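The equality of the two forms of H in (2.269), and of the two forms of P in (2.268), is a per-site algebraic identity once a_n and b_n are expressed through the coordinates. A small sketch checks it on random data; the substitution a_n = (1/2) exp[(x_n − x_{n+1})/2], b_n = ẋ_n/2 is an assumption of this sketch, chosen to be consistent with the formulas above:

```python
# Sketch: per-site check of the identity behind (2.268)-(2.269). The
# substitution a_n = (1/2) exp[(x_n - x_{n+1})/2], b_n = (dx_n/dt)/2 is an
# assumption consistent with the formulas above.
import math
import random

random.seed(2)
x  = [random.uniform(-1, 1) for _ in range(7)]
xd = [random.uniform(-1, 1) for _ in range(7)]      # velocities dx_n/dt
lhs = rhs = momentum_lhs = momentum_rhs = 0.0
for n in range(6):
    a_n = 0.5 * math.exp(0.5 * (x[n] - x[n + 1]))
    b_n = 0.5 * xd[n]
    lhs += (4 * a_n ** 2 - 1) + 2 * b_n ** 2        # first form of H
    rhs += xd[n] ** 2 / 2 + math.exp(x[n] - x[n + 1]) - 1   # second form
    momentum_lhs += 2 * b_n
    momentum_rhs += xd[n]
mismatch = max(abs(lhs - rhs), abs(momentum_lhs - momentum_rhs))
```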

Comparing the expansions in z in formulas (2.266) and (2.267), we arrive at the following results:

P = \frac{1}{\pi}\int_{0}^{\pi} \ln\left(1 - |R(e^{ik})|^2\right)\cos k\, dk + 2\sum_{i=1}^{N} \varepsilon \sinh k_i,   (2.270)

E = \frac{1}{\pi}\int_{0}^{\pi} \ln\left(1 - |R(e^{ik})|^2\right)\cos 2k\, dk + \sum_{i=1}^{N} \sinh 2k_i,   (2.271)

where ε = ±1 and k_i > 0. These formulas express the classical integrals of motion in terms of the new variables, the scattering data. In these variables, the Hamiltonian of the system is a continual set of noninteracting nonlinear waves with the density ln(1 − |R(e^{ik})|²), the momenta cos k and the energies cos 2k, together with N solitons with the momenta 2ε sinh k_i and the energies sinh 2k_i (i = 1, 2, . . . , N). The solitons interact neither with each other nor with the nonlinear waves. If a nonlinear system admits such a separation of variables in the Hamiltonian, the solitons in it are "true solitons." Solitons, like nonlinear waves, are elementary excitations in nonlinear systems with dispersion. The dispersion law for the solitons,

E = \varepsilon P \sqrt{1 + \frac{P^2}{4}},   (2.272)

coincides for small momenta with the dispersion relation for long-wavelength phonons, and at large momenta with the expression for the energy of a free particle with mass m = 1.
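The trace formulas can be illustrated on a reflectionless one-soliton solution (cf. eq. (2.237)). The form used below, a_n² = ¼(1 + sinh²k/cosh²θ_n) and b_n = (sinh k/2)(tanh θ_n − tanh θ_{n−1}) with θ_n = kn − t sinh k, is an assumption of this sketch. For it, the lattice sums (2.268) and (2.269) should give the soliton momentum P = 2 sinh k and energy E = sinh 2k, in agreement with the dispersion law (2.272):

```python
# Sketch: trace formulas for a single reflectionless soliton. The explicit
# one-soliton profile used here is an assumption of this check.
import math

k, t = 0.8, 0.37                        # soliton parameter and arbitrary time
sh = math.sinh(k)
theta = lambda n: k * n - t * sh
P = E = 0.0
for n in range(-400, 401):
    a2 = 0.25 * (1.0 + sh ** 2 / math.cosh(theta(n)) ** 2)       # a_n^2
    b = 0.5 * sh * (math.tanh(theta(n)) - math.tanh(theta(n - 1)))
    P += 2 * b                                                   # eq. (2.268)
    E += (4 * a2 - 1) + 2 * b ** 2                               # eq. (2.269)
disp = P * math.sqrt(1 + P ** 2 / 4)    # dispersion law (2.272), eps = +1
```

The sums reproduce P = 2 sinh k and E = sinh 2k, and E agrees with the value given by the dispersion law.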

2.3.7 The Darboux–Backlund Transformations

As a rule, the Cauchy problem can be solved only for a limited class of initial conditions. In general, determining the analytic structure of the eigenfunctions even for simple initial conditions is a daunting task; as a consequence, only the asymptotic behavior of solutions for large times can be investigated. For this reason, simpler tools are now widely used: the so-called direct methods of soliton theory, such as the Darboux and Backlund transformations and Hirota's method. These are rightfully considered a miracle of soliton mathematics. They do not solve the Cauchy problem for integrable equations, but they allow one to construct a set of particular solutions, starting from an exact solution taken as a "seed." The Darboux transformation [17] is applicable to all integrable equations and is an effective method for solving nonlinear integrable equations exactly. In our case, knowing a particular solution (a_n, b_n, J_n) of eqs (2.158), (2.167) and (2.168), we have the possibility of obtaining new solutions (ā_n, b̄_n, J̄_n) of this system. Consider the two systems of equations

(LJ)_n = a_{n-1}J_{n-1} + a_n J_{n+1} + b_n J_n = \lambda J_n, \qquad \frac{d}{dt}J_n = (AJ)_n = a_{n-1}J_{n-1} - a_n J_{n+1},

(2.273)


(\bar L \bar J)_n = \bar a_{n-1}\bar J_{n-1} + \bar a_n \bar J_{n+1} + \bar b_n \bar J_n = \lambda \bar J_n, \qquad \frac{d}{dt}\bar J_n = (\bar A \bar J)_n = \bar a_{n-1}\bar J_{n-1} - \bar a_n \bar J_{n+1}.

(2.274)

The quantities ā_n, a_n, b̄_n, b_n are different solutions of the Toda lattice equations

\dot a_n = a_n(b_n - b_{n+1}), \qquad \dot b_n = 2(a_{n-1}^2 - a_n^2),

(2.275)

\dot{\bar a}_n = \bar a_n(\bar b_n - \bar b_{n+1}), \qquad \dot{\bar b}_n = 2(\bar a_{n-1}^2 - \bar a_n^2).

(2.276)

Under the Darboux transformations, the wave functions J and J̄ are related by the linear transformation

\bar J = DJ,

(2.277)

where the matrix D depends, in the general case, on (ā_n, a_n, b̄_n, b_n) and on λ, λ being the spectral parameter. The equations

\bar L D = DL,

(2.278)

\frac{dD}{dt} = \bar A D - DA

(2.279)

defining the matrix D appear immediately after the substitution of eq. (2.277) into eq. (2.274). Differentiating expression (2.278) and eliminating dD/dt gives the compatibility condition

\left(\frac{d\bar L}{dt} + [\bar L, \bar A]\right) D = D\left(\frac{dL}{dt} + [L, A]\right)

for eqs (2.278) and (2.279). It holds if the quantities ā_n, a_n, b̄_n, b_n satisfy eqs (2.275) and (2.276). Let us construct the simplest Darboux transformation for systems (2.273)–(2.277). We put

\bar J_n = T_n J_n + B_n J_{n+1}.

(2.280)

Then, according to eqs (2.273) and (2.274), the undetermined coefficients T_n, B_n, independent of the spectral parameter, are governed by a system of equations. One of these has the form

\frac{T_{n-1}B_{n-1}}{a_{n-1}} = \frac{T_n B_n}{a_n}.

(2.281)


Consequently, the quantity T_nB_n/a_n does not depend on n. Since the function J̄_n is determined only up to a constant factor, we can put T_nB_n = a_n for T_n ≠ 0. As a result, the remaining system of equations can be written as

\bar a_n = a_{n+1}\frac{B_n}{B_{n+1}},   (2.282)

\bar b_n = b_n + B_n^2 - B_{n-1}^2, \qquad \dot B_n = B_n\left(-B_{n-1}^2 + B_n^2 + \frac{\dot a_n}{a_n}\right), \qquad -B_{n-1}^2 - \frac{a_n^2}{B_n^2} + b_n = \alpha,   (2.283)

with an arbitrary constant α. The last equation is readily apparent from the relation

-B_{n-1}^2 - \frac{a_n^2}{B_n^2} + b_n = -B_n^2 - \frac{a_{n+1}^2}{B_{n+1}^2} + b_{n+1}.

Thus, these formulas determine the Darboux transformation, which maps the class of solutions of the Toda lattice equations into itself. When the solution a_n, b_n is known, expression (2.283) allows us to find the quantities B_n, which in turn, according to eq. (2.282), ascertain the new solution ā_n, b̄_n. In what follows, this Darboux transformation is called an A-transformation. To illustrate its effectiveness, we set

a_n = \frac{1}{2}, \qquad b_n = 0

for all n. Then the replacement B_n^2 = G_n converts system (2.283) into

\dot G_n = \frac{1}{2}\left(1 + 4\alpha G_n + 4G_n^2\right),   (2.284)

-\frac{1}{4G_n} - G_{n-1} = \alpha.   (2.285)

Solving the first equation, we get

G_n = \frac{1}{2}\left[-\alpha + \sqrt{1 - \alpha^2}\,\tan\!\left(\sqrt{1 - \alpha^2}\,(t + 4C_n)\right)\right]   (2.286)

with constants C_n. To construct the soliton solution, it is required to put α > 1 and choose the parameterization α = cosh k. Then we have

G_n = -\frac{1}{2}\left[\cosh k + \sinh k\,\tanh\!\left(\sinh k\,(t + 4C_n)\right)\right],

and eq. (2.285) establishes the dependence of the constants C_n on n:


C_n = C_{n-1} - \frac{k}{4\sinh k},

whence

C_n = -\frac{nk}{4\sinh k} - \frac{t_0}{4}.
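That G_n, continued to α = cosh k > 1 and with the constants C_n just found, satisfies both the differential equation (2.284) and the recursion (2.285) can be verified directly; a minimal sketch:

```python
# Sketch: direct check that G_n with alpha = cosh k and the constants C_n
# found above satisfies the ODE (2.284) and the recursion (2.285).
import math

k, t0, t = 0.9, 0.2, 1.3
alpha, sh = math.cosh(k), math.sinh(k)

def C(n):
    return -n * k / (4 * sh) - t0 / 4

def G(n):
    return -0.5 * (alpha + sh * math.tanh(sh * (t + 4 * C(n))))

def Gdot(n):                            # exact time derivative of G(n)
    return -0.5 * sh ** 2 / math.cosh(sh * (t + 4 * C(n))) ** 2

ode_res = max(abs(Gdot(n) - 0.5 * (1 + 4 * alpha * G(n) + 4 * G(n) ** 2))
              for n in range(-3, 4))
rec_res = max(abs(-1 / (4 * G(n)) - G(n - 1) - alpha) for n in range(-3, 4))
```

Both residuals vanish to machine precision, for any n and t.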

At last, we arrive at the final formula

\bar a_n = \frac{1}{2}\exp\frac{\bar x_n - \bar x_{n+1}}{2} = \frac{1}{2}\left[1 + \frac{\sinh^2 k}{\cosh^2\!\left(nk - (t - t_0)\sinh k\right)}\right]^{1/2},
\bar b_n = \frac{\dot{\bar x}_n}{2} = \frac{\sinh k}{2}\left\{\tanh\!\left[kn - (t - t_0)\sinh k\right] - \tanh\!\left[k(n-1) - (t - t_0)\sinh k\right]\right\}.   (2.287)

This solution depends on two arbitrary constants, t_0 and k, and coincides with the one-soliton solution (2.237). Applying the procedure described above with a new parameter α′ to this solution results in a two-soliton solution, and so on; eventually, the N-th step yields an N-soliton solution depending on 2N arbitrary constants. As straightforward calculations show, for T_n = 0 transformation (2.280) takes the form

\bar a_n = a_{n+1}, \qquad \bar b_n = b_{n+1}, \qquad \bar J_n = J_{n+1}.   (2.288)
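A quick numerical sanity check (a sketch, not from the book) that the one-soliton formulas (2.287) do satisfy the Toda lattice equations (2.275), using the exact time derivatives of ā_n² and b̄_n:

```python
# Sketch: the one-soliton formulas (2.287) satisfy the Toda lattice
# equations (2.275), checked with exact time derivatives.
import math

k, t, t0 = 0.7, 0.5, -0.3
sh = math.sinh(k)
th = lambda n: k * n - (t - t0) * sh
a2 = lambda n: 0.25 * (1 + sh ** 2 / math.cosh(th(n)) ** 2)          # abar_n^2
bb = lambda n: 0.5 * sh * (math.tanh(th(n)) - math.tanh(th(n - 1)))  # bbar_n
a2dot = lambda n: 0.5 * sh ** 3 * math.tanh(th(n)) / math.cosh(th(n)) ** 2
bbdot = lambda n: -0.5 * sh ** 2 * (1 / math.cosh(th(n)) ** 2
                                    - 1 / math.cosh(th(n - 1)) ** 2)
# d(a^2)/dt = 2 a^2 (b_n - b_{n+1})  and  db_n/dt = 2 (a_{n-1}^2 - a_n^2)
res_a = max(abs(a2dot(n) - 2 * a2(n) * (bb(n) - bb(n + 1)))
            for n in range(-5, 6))
res_b = max(abs(bbdot(n) - 2 * (a2(n - 1) - a2(n))) for n in range(-5, 6))
```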

It follows immediately from the discrete translational symmetry of the L–A pair and of the equations of motion of particles in the Toda lattice. For further investigation, we call this simple but important transformation the Darboux B-transformation. We now briefly discuss the Backlund transformation (BT), which also allows one to obtain complex solutions from simple ones. We get it from the Darboux A-transformation by putting

B_n = \exp[Q_n].

(2.289)

Then the first equation of eq. (2.282) implies that

Q_n = \frac{1}{2}\left[\bar x_n - x_{n+1} - C\right].

(2.290)

The quantity Q_n is determined up to an inessential constant C. For the sake of convenience, we choose the constant C equal to ln 2. Substituting eqs (2.289) and (2.290) into eqs (2.282) and (2.283), we form the set of differential relations

V_1 \equiv -\dot x_n + \exp[-x_n + \bar x_{n-1}] + \exp[x_n - \bar x_n] + 2\alpha = 0,   (2.291)
V_2 \equiv -\dot{\bar x}_n + \exp[x_n - \bar x_n] + \exp[-x_{n+1} + \bar x_n] + 2\alpha = 0.   (2.292)


In the terminology adopted for dynamical systems, relations of the type

R_1(F_n(t), U_n(t), \dot F_n(t), \dot U_n(t), \alpha) = 0, \qquad R_2(F_n(t), U_n(t), \dot F_n(t), \dot U_n(t), \alpha) = 0

are referred to as a Backlund transformation of nonlinear equations. It should be strongly emphasized, however, that these relations impose restrictions on the functions F_n, U_n and their time derivatives: namely, if the quantities F_n(t) satisfy the nonlinear equation E_1(F_n) = 0, the variables U_n(t) also satisfy, in the general case, another nonlinear equation E_2(U_n) = 0. By direct calculation it is easy to show that

\frac{d}{dt}V_1 = E(x_n) \equiv \ddot x_n + \exp[x_n - x_{n+1}] - \exp[x_{n-1} - x_n] = 0,   (2.293)
\frac{d}{dt}V_2 = E(\bar x_n) \equiv \ddot{\bar x}_n + \exp[\bar x_n - \bar x_{n+1}] - \exp[\bar x_{n-1} - \bar x_n] = 0.   (2.294)

Because x_n and x̄_n satisfy the same equation, such an ambiguous solution mapping is called a Backlund auto-transformation. From the point of view of classical mechanics, it means that these transformations are a nontrivial generalization of canonical transformations. When a seed solution is assigned, the Backlund transformation reduces the problem of finding new solutions to solving the system (2.293) and (2.294). In our exposition, the BTs have been secured by the L–A pair. Historically, BTs were first found for many integrable equations, and only later was the inverse scattering problem studied.

2.3.8 Multiplication of Integrable Equations: The Modified Toda Lattice

If you do not understand a particular word in a piece of technical writing, ignore it. The piece will make perfect sense without it.
Mr. Cooper's law

This section covers the technique of multiplication of integrable equations, using the example of the Toda lattice, in order to consistently derive another, "modified" integrable equation and its L–A pair from a known Lax pair. Such a scheme is a discrete version of the multiplication method [20–22] for nonlinear evolutionary partial differential equations in space-times of different dimensions. The Darboux and Backlund transformations play a key role in constructing the new equations and their conjugate L–A pairs.


As noted in the previous section, the importance of the Darboux transformation lies in the possibility of finding the function J_n for a known solution x_n with the help of eqs (2.273), (2.280), (2.282), (2.283) and (2.288), and of obtaining a new solution x̄_n in a purely algebraic manner. Consider now another problem. Let the quantities a_n, b_n satisfy expressions (2.275). Find the equations for the auxiliary functions J_n. For this purpose, we need to express these quantities via the function J_n and its derivative. The simplest way to do this is to use an auxiliary function F_n(t):

J_n = e^{\frac{t}{2}(-z + z^{-1})}\, e^{-\frac{x_n}{2}}\, F_n(t).

(2.295)

Employing it and resorting to eqs (2.167) and (2.168), we can build the following equations for F_n:

F_{n-1} + 4a_n^2 F_{n+1} - 2(\lambda - b_n)F_n = 0,   (2.296)
2\dot F_n - 2b_n F_n + 4a_n^2 F_{n+1} - F_{n-1} = 0.   (2.297)

Then the Darboux A-transformation acquires the form

\bar a_n = a_{n+1}\,\frac{\sqrt{F_n F_{n+2}}}{F_{n+1}}, \qquad \bar b_n = b_n + \frac{F_{n-1}}{2F_n} - \frac{F_n}{2F_{n+1}}

and the Darboux B-transformation is

\bar a_n = a_{n+1}, \qquad \bar b_n = b_{n+1}, \qquad \bar F_n = F_{n+1}

(2.299)

for eqs (2.275), (2.296) and (2.297), where λ = ½(z + z^{-1}). Let us deduce equations for the functions F_n. From eqs (2.296) and (2.297) it is easy to obtain the explicit expressions for a_n, b_n:

a_n^2 = \frac{\lambda F_n - \dot F_n}{4F_{n+1}}, \qquad b_n = \frac{\lambda}{2} + \frac{\dot F_n - F_{n-1}}{2F_n}

(2.300)

in terms of the eigenfunctions F_n and their derivatives Ḟ_n just introduced. After substituting these relations into eq. (2.275), we arrive at the new equation

(-\ddot F_n F_n + \dot F_n^2)\, F_{n+1} + (F_n^2 - F_{n-1}F_{n+1})(\dot F_n - \lambda F_n) = 0,

(2.301)

with the parameter λ, for the function F_n(t). It should be noted that, due to the presence of the term linear in Ḟ_n, this expression is formally a dissipative system. Since the L–A pair is represented by linear homogeneous equations, the resulting expression is homogeneous with respect to the quantities F_n(t) and their derivatives. In the variables

F_n = \exp[r_n], \qquad A_n = \frac{1}{2}\exp\frac{r_n - r_{n+1}}{2}, \qquad B_n = \frac{\dot r_n}{2},

it takes the compact form

\dot A_n = A_n(B_n - B_{n+1}), \qquad \dot B_n = 2(A_{n-1}^2 - A_n^2)(\lambda - 2B_n).

(2.302)

Let us find an L–A pair for system (2.302). For the next Darboux transformation ā_n → ã_n, b̄_n → b̃_n, formulas (2.297) and (2.298) also hold true if we make the replacements a_n → ā_n, b_n → b̄_n, F_n → F̄_n, λ → λ^{(1)}. Therefore, we have

\bar a_n^2 = \frac{\lambda^{(1)}\bar F_n - \dot{\bar F}_n}{4\bar F_{n+1}}, \qquad \bar b_n = \frac{\lambda^{(1)}}{2} + \frac{\dot{\bar F}_n - \bar F_{n-1}}{2\bar F_n},

(2.303)

and substitution of these expressions into eq. (2.276) yields the nonlinear equation for F̄_n:

(-\ddot{\bar F}_n \bar F_n + \dot{\bar F}_n^2)\,\bar F_{n+1} + (\bar F_n^2 - \bar F_{n-1}\bar F_{n+1})(\dot{\bar F}_n - \lambda^{(1)}\bar F_n) = 0.

(2.304)

Formulas (2.298)–(2.300) and (2.303) give a set of relations between the quantities F_n, F̄_n and their derivatives for the Darboux A-transformation:

\frac{F_n(\lambda F_{n+1} - \dot F_{n+1})}{F_{n+1}^2} - \frac{\lambda^{(1)}\bar F_n - \dot{\bar F}_n}{\bar F_{n+1}} = 0,

-\lambda + \lambda^{(1)} + \frac{F_n}{F_{n+1}} - \frac{\dot F_n}{F_n} - \frac{\bar F_{n-1}}{\bar F_n} + \frac{\dot{\bar F}_n}{\bar F_n} = 0

(2.305)

and

\frac{\lambda F_n - \dot F_n}{F_{n+1}} - \frac{\lambda^{(1)}\bar F_{n-1} - \dot{\bar F}_{n-1}}{\bar F_n} = 0,

(\lambda - \lambda^{(1)}) + \frac{\dot F_n - F_{n-1}}{F_n} - \frac{\dot{\bar F}_{n-1} - \bar F_{n-2}}{\bar F_{n-1}} = 0

(2.306)

for the Darboux B-transformation. Straightforward calculations show that both eqs (2.305) and (2.306) are BTs between eqs (2.301) and (2.304). Furthermore, a sequence of, for example, A-transformations can be written as a chain of equations

\frac{F_n^{(i)}\left(\lambda^{(i)} F_{n+1}^{(i)} - \dot F_{n+1}^{(i)}\right)}{\left(F_{n+1}^{(i)}\right)^2} - \frac{\lambda^{(i+1)} F_n^{(i+1)} - \dot F_n^{(i+1)}}{F_{n+1}^{(i+1)}} = 0,

-\lambda^{(i)} + \lambda^{(i+1)} + \frac{F_n^{(i)}}{F_{n+1}^{(i)}} - \frac{\dot F_n^{(i)}}{F_n^{(i)}} - \frac{F_{n-1}^{(i+1)}}{F_n^{(i+1)}} + \frac{\dot F_n^{(i+1)}}{F_n^{(i+1)}} = 0, \qquad i = 1, 2, \ldots,


where λ^{(0)} = λ, F_n^{(0)} = F_n, F_n^{(1)} = F̄_n and so on. The solutions of such chains, called "dressing chains" [18, 19], allow one to find new solutions of the Toda lattice equations with the help of the formulas

\left(a_n^{(i)}\right)^2 = \frac{\lambda^{(i)} F_n^{(i)} - \dot F_n^{(i)}}{4F_{n+1}^{(i)}}, \qquad b_n^{(i)} = \frac{\lambda^{(i)}}{2} + \frac{\dot F_n^{(i)} - F_{n-1}^{(i)}}{2F_n^{(i)}}.

In addition, it is worth pointing out that the "dressing chains" for integrable systems admit periodic closure in the discrete index i, which leads to nontrivial integrable equations [18, 19]. Let us proceed, finally, to the procedure of multiplying integrable equations using the Darboux B-transformations. To construct an L–A pair associated with eq. (2.301), it is most convenient to apply relations (2.306). To do this, it is necessary to find a change of variables {F_n, F̄_n} → {F_n, V_n} (−∞ < n < ∞) such that the above relations reduce to an L–A pair in the space of the new auxiliary variable V_n. Such a change can be made in many ways. However, to extend the multiplication procedure, we have to bring the BTs (2.306) to a form in which the functions A_n and B_n may be expressed through the quantity V_n and its derivatives. Equation (2.306) makes it clear that such a transformation requires the term with Ḟ_n to be eliminated in one of the expressions, and this requirement leads to the replacement

\bar F_{n-1} = F_n V_n.

(2.307)

Then the L–A pair for eq. (2.301) or (2.302) is

\dot V_n = \frac{F_{n-1}}{F_n}(V_{n-1} - V_n) + (\lambda - \lambda^{(1)})V_n = 4A_{n-1}^2(V_{n-1} - V_n) + (\lambda - \lambda^{(1)})V_n,   (2.308)

\frac{F_{n-1}}{2F_n}(-V_{n-1} + V_n) + V_n\,\frac{(2\lambda^{(1)} - \lambda)F_n - \dot F_n}{2F_n} + \frac{V_{n+1}}{2}\left(-\lambda + \frac{\dot F_n}{F_n}\right)
= \lambda^{(1)} V_n + 2A_{n-1}^2(-V_{n-1} + V_n) + B_n(-V_n + V_{n+1}) - \frac{\lambda}{2}(V_n + V_{n+1}) = 0.   (2.309)

By direct computation it is easy to verify that the compatibility condition of eqs (2.308) and (2.309) with the arbitrary parameter λ^{(1)} = ½(z′ + 1/z′) is equivalent to eq. (2.301) or (2.302). Thus, the sequence of transformations from the initial system to the new one includes the following steps. At the first step, using the L–A pair, we find a representation of the variables {a_n, b_n} in terms of the auxiliary quantities F_n, their derivatives and the spectral parameter z. In the theory of integrable PDEs, such a representation is known as the Miura transformation. Then, solving the system of differential equations for


{a_n, b_n}, we obtain a new nonlinear equation for the quantity F_n with the parameter λ. According to the accepted terminology, integrable equations secured by the Miura transformation are called modified. Therefore, eq. (2.301) is the first modified Toda equation. At the second step, employing the Darboux transformation for the L–A pair, we form two systems of equations for F_n and F_n′ that are the BTs of the resulting expression with the parameters λ and λ^{(1)}. Finally, the third step requires a certain ingenuity to make an appropriate change of variables {F_n, F̄_n} → {F_n, V_n} so that we could get the L–A pair of the new equation with the spectral parameter λ^{(1)}. If we succeed in carrying out this scheme for the original equation, we will be able to derive the next integrable equation. Thus, in the case under consideration, eqs (2.308) and (2.309) give us the possibility to express A_n and B_n through V_n and its derivative, and after substituting these particular formulas into eq. (2.303), we are able to obtain the nonlinear differential equation for the variable V_n: the second modified Toda equation. Further, using the Darboux transformation

\bar A_n = A_{n+1}, \qquad \bar B_n = B_{n+1}, \qquad \bar V_n = V_{n+1}

for this equation, we can find the BT and present the L–A pair. Omitting the cumbersome equations for the latter, we can write the second modified Toda equation in the variables V_n = exp[W_n] as

\ddot W_n = \frac{\left(1 - \exp[W_{n-1} - 2W_n + W_{n+1}]\right)\left[(\lambda - \lambda^{(1)})^2 - \dot W_n^2\right]}{\left(-1 + \exp[W_{n-1} - W_n]\right)\left(1 - \exp[W_{n+1} - W_n]\right)}.

(2.310)

If the third step can be implemented, the multiplication procedure allows a new integrable equation and its associated Lax pair to be obtained. To conclude this section, we briefly discuss the one-soliton solutions of eq. (2.301). Through relation (2.295) it is simple enough to find the soliton solutions of the first modified Toda equation. To the soliton (2.236) in the Toda lattice there corresponds, according to eq. (2.261), the formula

F_n = e^{\frac{t}{2}(z - z^{-1})}\,\frac{z^{-n}\left(e^{tK}(z - z_1) + z_1^{1+2n}(-1 + z z_1)\right)}{(z - z_1)\left(e^{tK} + z_1^{2+2n}\right)},

(2.311)

with the parameters

z_1 = \varepsilon e^{-k} \quad (\varepsilon = \pm 1,\; k > 0), \qquad K = z_1 - z_1^{-1}.

(2.312)

Here z is a parameter of eq. (2.301), and z_1 is the parameter of the solution. In the general case, due to the presence of the factor z^{-n}, the solution diverges for arbitrary real values of the parameter z when n → ∞ or n → −∞. As a consequence, it makes sense only for complex F_n (|z| = 1) or for real F_n (z = ε, z = 1/z_1). To the last value there corresponds the wave function Ī_n(z = 1/z_1) (2.261) of


a bound state. Then, putting z = 1/z_1 = e^{k} (k > 0) in eq. (2.311), we arrive at the solution of eq. (2.301) in the form of a solitary wave:

F_n = \frac{1}{2}\,\frac{e^k}{\cosh\left[k(n+1) - t\sinh k\right]}.

When z = 1 there is also no divergent factor in eq. (2.311), and

F_n = 1 - \frac{1 + e^k}{1 + \exp\left[2k(n+1) - 2t\sinh k\right]}   (2.313)

again represents a solitary wave.
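The first solitary wave above can be checked against eq. (2.301) directly. The sketch below evaluates the residual of (2.301) with exact derivatives of F_n = e^k/(2 cosh[k(n+1) − t sinh k]); the value λ = ½(z + 1/z) = cosh k, corresponding to z = 1/z_1, is an assumption of this check:

```python
# Sketch: residual of eq. (2.301),
#   (-F''_n F_n + (F'_n)^2) F_{n+1} + (F_n^2 - F_{n-1}F_{n+1})(F'_n - lam F_n) = 0,
# for the solitary wave F_n = e^k / (2 cosh[k(n+1) - t sinh k]), lam = cosh k.
import math

k, t = 0.6, 0.9
sh, lam, A = math.sinh(k), math.cosh(k), 0.5 * math.exp(k)
u = lambda n: k * (n + 1) - t * sh
F = lambda n: A / math.cosh(u(n))
Fd = lambda n: A * sh * math.tanh(u(n)) / math.cosh(u(n))            # dF/dt
Fdd = lambda n: -A * sh ** 2 * (1 / math.cosh(u(n)) ** 2
                                - math.tanh(u(n)) ** 2) / math.cosh(u(n))
res = max(abs((-Fdd(n) * F(n) + Fd(n) ** 2) * F(n + 1)
              + (F(n) ** 2 - F(n - 1) * F(n + 1)) * (Fd(n) - lam * F(n)))
          for n in range(-5, 6))
```

The residual vanishes to machine precision for all n and t.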

3 Stability of Motion and Structural Stability

When your tired consciousness is losing its mental equilibrium, when the stairsteps are slipping away under your feet, like a deck, when your nightly loneliness spits up on mankind, you can turn over in your mind to reflect on the eternity, and to doubt the purity.
Joseph Brodsky's "Loneliness"

3.1 Stability of Motion

The theory of stability of motion studies the effect of disturbing factors on the motion of a material system. These disturbing factors are forces that, owing to their smallness in comparison with the main forces, are left aside when describing the motion; they are usually unknown.
Ioel Malkin [1]

3.1.1 Stability of Fixed Points and Trajectories

When it comes to stability, we most often have in mind static stability. The pyramid of Cheops and the stone arches of Stonehenge are examples of stable man-made structures. In contrast, a house of cards and a sand castle are symbols of fragility and instability. Typically, a massive smooth ball resting at the lowest point of a smooth relief surface under the influence of gravity directed downward serves as an example of a stable mechanical system (Fig. 3.1). The same ball placed at the highest point is in an unstable position: any weak impact makes it leave the top. At the same time, a ball lying on a horizontal plane demonstrates "neutral" behavior, because all of its positions are equivalent to each other. To better understand the issue at hand, consider again the mathematical pendulum. Recall that we have already deduced its law of motion in analytical form in Section 1.2. Now the pendulum is assumed to be a point mass attached to one end of a rigid rod of negligible mass, whose other end is secured to a hinge. The pendulum, like the ball lying on the relief surface, has a stable state when its potential energy is minimal. When turned by 180 degrees, the pendulum goes over into an unstable state with high potential energy. Both states are stationary (fixed) points; that is, if the kinetic energy is zero, the pendulum can remain in each of these states indefinitely. One can analyze the features of the pendulum motion in more detail by considering the family of phase trajectories shown in Fig. 3.2 ((a) for the frictionless pendulum, (b) for the pendulum with friction). The points (φ, φ̇) = (±2πn, 0), n = 0, ±1, ±2, . . ., are stable stationary points. Their vicinities contain typical trajectory families: a center in the absence of friction, and a stable focus or a stable node for a system with friction (e.g., to the point (0, 0) there correspond a center in Fig. 3.2(a) and a stable focus in Fig. 3.2(b)).
The saddle-type configurations can be seen in the vicinities of

Figure 3.1

Figure 3.2 (phase portraits of the pendulum in the (φ, φ̇) plane, with φ running from −π to π: (a) the frictionless pendulum; (b) the pendulum with friction)

the unstable fixed points (φ, φ̇) = (π ± 2πn, 0). In Figs 3.2(a) and (b), special separating trajectories, called separatrices, are depicted as dashed lines. Linear analysis shows that the deviation δφ = φ − φ₀ near the unstable steady-state point (φ₀, 0) is equal to a linear combination of two exponential functions. One of these


grows without bound as t → ∞, while the other is infinitesimal. As for points lying on the separatrices, one of the two functions enters the linear combination with zero coefficient. Sometimes the phase points (φ ± 2πn, φ̇) are assumed to represent one and the same state of the pendulum. In this case, it is sufficient to let φ vary from 0 to 2π; then the lateral surface of a cylinder containing the stable fixed point (0, 0) and the unstable stationary point (π, 0) plays the role of the phase space. Analyzing the structure of the phase plane of the mathematical pendulum, one can notice that in the neighborhood of any stable stationary point the phase trajectories are not only in close proximity to it but also in mutual proximity to each other. In the case of the frictionless pendulum, the nature of this proximity is the ability to choose initial conditions in such a way that phase points belonging to different trajectories are never separated from each other by a distance greater than a pre-specified value. In the case of the pendulum with friction, a stronger statement holds true: the phase points on different trajectories tend to a common limiting point, coinciding with the stationary point, as t → ∞. When observing the vicinity of the unstable (saddle) fixed point, it makes sense to distinguish between the phase points' "runaway" from the fixed point and their mutual "divergence." Phase points belonging to different phase trajectories can also diverge from each other, except for those that lie on the two separatrices tending toward the saddle point as t → ∞. This line of reasoning suggests that the approach based on the mutual behavior of phase points lying on different trajectories is the more general one. The standard apparatus of mathematical analysis now makes it possible to give a rigorous definition of stability for general dynamical systems.
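The fixed-point types discussed above can be read off from the eigenvalues of the linearized system. A minimal sketch (an assumption of this sketch: the pendulum model φ̈ + γφ̇ + sin φ = 0, which is not spelled out in this form in the text):

```python
# Sketch: fixed-point classification of the pendulum via the eigenvalues of
# the linearization. Model (an assumption): phi'' + gamma*phi' + sin(phi) = 0.
import cmath
import math

def eigenvalues(phi_star, gamma):
    # Jacobian of (phi' = v, v' = -sin(phi) - gamma*v) at (phi_star, 0) is
    # [[0, 1], [-cos(phi_star), -gamma]]; its eigenvalues solve
    # s^2 + gamma*s + cos(phi_star) = 0.
    disc = cmath.sqrt(gamma ** 2 - 4 * math.cos(phi_star))
    return (-gamma + disc) / 2, (-gamma - disc) / 2

center = eigenvalues(0.0, 0.0)       # frictionless bottom: +-i, a center
focus  = eigenvalues(0.0, 0.2)       # with friction: Re < 0, a stable focus
saddle = eigenvalues(math.pi, 0.2)   # inverted position: real, opposite signs
```

A purely imaginary pair gives a center, a complex pair with negative real parts a stable focus, and a real pair of opposite signs a saddle, exactly as in Fig. 3.2.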
It should be stressed that a fundamental contribution to the theory of stability of motion was made by A. M. Lyapunov [2]. We present a number of definitions. Let the motion of an n-dimensional system be described by a normal system of ordinary differential equations in the form

\frac{d\mathbf{x}}{dt} = \mathbf{f}(\mathbf{x}, t),

(3.1)

where t is time, x = (x₁, x₂, . . . , x_n) ∈ R^n is the real vector in the n-dimensional space of dynamical variables, and f ∈ R^n is the vector of right-hand sides. In most cases, it is assumed that the functions on the right-hand sides are continuous and comply with conditions under which the Cauchy problem always has a unique solution. To learn more about various formulations of the existence and uniqueness of solutions for the system of equations, the reader can refer to Ref. [3]. To present the solution x = X(t₀, x₀; t) of the system with the initial conditions x = x₀ at t = t₀ in a geometric way, it is necessary to build a set of points in the space G of vectors (t, x₁, x₂, . . . , x_n) ∈ R^{n+1}. As a result, we get a line called an integral curve. Each point of the space G lies on a unique integral curve, provided that the existence and uniqueness theorem for the system (3.1) is valid. The projection of the integral curve onto the subspace of the x-vectors is called a phase trajectory, and the subspace itself, the phase space. I would like


to point out that we have already introduced and used these concepts. A dynamical system is referred to as autonomous if the right-hand sides of the equations do not depend on time explicitly; otherwise, the system is non-autonomous. If the system is autonomous and the theorem on the existence and uniqueness of solutions is fulfilled, there is a unique phase trajectory passing through each point of the phase space. Note that a nonautonomous system can always be turned into an autonomous one by taking time as a coordinate (x_{n+1} ≡ t) and adding the (n+1)-th equation dx_{n+1}/dt = 1 to the system. Let us define the norm in the space of the x-vectors by means of the expression ||x|| = max(|x₁|, |x₂|, . . . , |x_n|) and use the metric associated with this norm, in which the value ||x₁ − x₂|| is the distance between two vectors.

Lyapunov Stability. The solution y = X(t₀, y₀; t) of the system (3.1) is stable if the following requirement is met: for any ε > 0, there exists a distance δ_ε such that ||x₀ − y₀|| < δ_ε implies ||X(t₀, x₀; t) − X(t₀, y₀; t)|| < ε for every t ∈ [t₀, ∞). In other words, if at t = t₀ the initial points lying on two trajectories are separated from one another by a distance not exceeding δ_ε in terms of the introduced metric, then at any subsequent moment of time t the distance between the points moving along the selected paths is not greater than ε. If one chooses a stationary point as y₀, then X(t₀, y₀; t) ≡ y₀ for every t ≥ t₀. It is easy to see that for two-dimensional systems (in particular, for the mathematical pendulum), the Lyapunov stable stationary points are those classified as a stable node, a stable focus and a center.

Orbital Stability. Let d(A, x) be the distance between the point x and the nearest point of a set A. We replace the proximity condition containing ε in the definition of stability by the condition d(A_y, X(t₀, x₀; t)) < ε. In so doing, we regard A_y as the set of all points of the trajectory starting from the point y₀. As a result, we obtain the definition of orbital stability. It reads as follows: the phase trajectories themselves, rather than the points lying on them, must be mutually close; a divergence of the phase points along the trajectories is permissible.

Lyapunov Asymptotic Stability. The solution y = X(t₀, y₀; t) is asymptotically stable if it is stable by the definition mentioned earlier and the limiting condition lim_{t→∞} ||X(t₀, x₀; t) − X(t₀, y₀; t)|| = 0 is fulfilled. In other words, points lying on different phase trajectories tend toward each other. Both a stable node and a stable focus possess the property of being asymptotically stable.

Instability. The solution y = X(t₀, y₀; t) is said to be unstable if the stability condition is violated. This means that for an arbitrarily small δ there is an x₀ such that, despite the fulfillment of the condition ||x₀ − y₀|| < δ, the trajectories starting from the points y₀ and x₀ diverge. An unstable focus, an unstable node and a

3.1 Stability of Motion

149

saddle can serve as examples of unstable fixed points. Linearized in the neighborhood of such a point, the system (3.1) has solutions which diverge exponentially as t → ∞. It should not be forgotten, however, that the linearization procedure supposes the deviations from the fixed point to be very small. Thus, upon leaving the small neighborhood of the unstable stationary point, the phase point no longer runs away from it exponentially; its motion obeys not the approximate linearized equation but the exact eq. (3.1). The postulate quoted here vividly reflects the attitude of scientists toward motion stability issues as formulated at the end of the nineteenth and the first half of the twentieth centuries [3]: "If it should turn out that arbitrarily small changes in the initial data are able to badly modify a solution, then the solution, owing to the inaccuracy of the initial data chosen by us, usually has no applied meaning and cannot describe the phenomenon under study even approximately." Under this paradigm, it seemed necessary to direct efforts to the study of stable processes: to the creation of engineering devices with stable driving modes, regarding every manifestation of instability as a "parasitic" and undesirable effect. This view changed only in the second half of the twentieth century, when unstable systems began to receive considerable research attention. One of the reasons for changing the direction of research interest was the need for adequate theories describing phenomena and processes in strongly nonlinear systems far away from states of minimum energy. That is to say, the expansion in small deviations from equilibrium could not be used in most cases. This led to the development of the interdisciplinary field of "nonlinear studies." Among these, investigations in the field of "nonlinear physics" occupied a central place.
This term can be applied to the physics of nuclear installations and facilities, plasma physics, aerodynamics and hydrodynamics, nonlinear optics and laser physics, accelerator physics, and many fields of solid-state physics. The list can be significantly extended. Methods and approaches of "nonlinear physics" became widespread in biology, ecology, economics, and even the social sciences. In all these cases, the basic ideas were derived from nonlinear dynamics, but issues concerning systems whose motion is unstable came to be the most important. It should be noted that substantial progress in the study of nonlinear systems has been made possible only owing to the emergence of computers. In the next chapters we will devote most of our attention to the dynamics of unstable systems. However, before considering the latter, we first dwell upon some general results and effective methods for analyzing such systems.

3.1.2 Succession Mapping or the Poincaré Map

From Fig. 3.2, it is observed that the phase points in the neighborhood of the saddle travel like particles near a scattering center, in consequence of which the initially

150

3 Stability of Motion and Structural Stability

Figure 3.3 (a), (b), (c)

close trajectories diverge. Mostly, this property is peculiar to the trajectories whose segments are located close to the incoming separatrix of the saddle, on opposite sides of it. The consideration of the various kinds of pendulum motion leads to the following conclusion: when a small segment of a phase trajectory lying in proximity to the separatrix is slightly perturbed, the position of the segments that follow it changes significantly. This situation makes the entire trajectory unstable in general. To understand whether a given pattern of motion is globally stable or not, it is sufficient to look, in turn, at relatively lengthy subsegments of its trajectory. The approach used for autonomous systems is to introduce a first return map (Poincaré map), as in Fig. 3.3 (a) and (b), that ties together the coordinates of the successive points at which the phase trajectory intersects a (hyper)surface nowhere tangent to the trajectory. For nonautonomous systems under the action of periodic external forces, the stroboscopic mapping plays the same role (Fig. 3.3 (c)). The figures show examples of sequences of the intersection points for different driving modes: (a) in the case of a stable focus, the sequence converges to a limiting point; (b) in the case of a limit cycle, all points of the sequence are the same. Denoting the intersected surface as G, for the succession map we can write the following expressions:

xN = X(t0, x0; tN) ∈ G,    xN+1 = X(tN, xN; tN+1 − tN),

where the moments of time tN are determined by the condition that the phase trajectory intersects the surface G. Similarly, for the stroboscopic mapping, we have

xN = X(t0, x0; NT) ∈ GN,    xN+1 = X(t0 + NT, xN; T);

here T is the period of the external force and GN is the surface intersected by the phase trajectories at the moments of time t = NT. In both cases, excluding the time information, we can describe the motion by a dynamic mapping xN+1 = F(xN) that connects the neighboring elements of the sequence of phase points.
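As an illustration of the stroboscopic mapping (our sketch; the system and parameter values are arbitrary choices, not taken from the book), consider a damped, periodically driven linear oscillator ẍ + γẋ + ω0²x = f0 cos(Ωt). Sampling the trajectory at t = NT with T = 2π/Ω yields a sequence that converges to a fixed point of the map, the analogue of panel (a) of Fig. 3.3:

```python
import math

def rk4_step(f, t, y, h):
    # One classical Runge-Kutta step for y' = f(t, y)
    k1 = f(t, y)
    k2 = f(t + h/2, [yi + h/2*ki for yi, ki in zip(y, k1)])
    k3 = f(t + h/2, [yi + h/2*ki for yi, ki in zip(y, k2)])
    k4 = f(t + h, [yi + h*ki for yi, ki in zip(y, k3)])
    return [yi + h/6*(a + 2*b + 2*c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

# Damped driven oscillator (illustrative parameters)
gamma, w0, f0, W = 0.4, 1.0, 1.0, 1.3
rhs = lambda t, y: [y[1], -gamma*y[1] - w0**2*y[0] + f0*math.cos(W*t)]

T = 2*math.pi/W          # stroboscopic sampling period
steps = 200              # integration steps per period
samples = []
t, y = 0.0, [1.0, 0.0]
for N in range(40):
    samples.append(tuple(y))          # x_N = X(t0, x0; N T)
    for _ in range(steps):
        y = rk4_step(rhs, t, y, T/steps)
        t += T/steps

# Distances between successive stroboscopic points shrink,
# so the sequence converges to the fixed point of the map
d = [math.dist(samples[N], samples[N+1]) for N in range(39)]
print(d[0], d[-1])
```

Because the free oscillation decays as exp(−γt/2), the stroboscopic sequence spirals into the fixed point that represents the periodic steady-state motion.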


3.1.3 Theorem about the Volume of a Phase Drop

In the previous chapters, we have dealt with mechanical dynamical systems, each of which can be assigned to one of two classes: conservative or dissipative systems. The former preserve their energy during motion. The latter lose energy through the action of a friction mechanism. The characteristic features of the dynamics inherent in each type of system are reflected in the structure of their phase portraits. In particular, only in the case of dissipative systems do phase points belonging to different paths evolve asymptotically toward limiting sets – attractors. The dimension of these sets is less than the dimension of the phase space. In the case of a harmonic oscillator with friction, stationary points such as a stable focus or a stable node in the phase plane are examples of attractors. It is easy to show that point attractors are also peculiar to nonlinear systems, e.g., to a mathematical pendulum with friction, as in Fig. 3.2 (b). In Section 1.5, we discussed the phase plane of the Van der Pol oscillator, which has an attractor of a different type – an attracting limit cycle. Apart from the attractors, the phase space of a dissipative system contains the basins of attraction of these attractors. These are the sets of points that lie on trajectories tending asymptotically to the attractors as t → ∞. The Liouville theorem about preservation of the phase-space volume of bounded domains (phase drops) is usually proved in courses on analytical mechanics. It is valid for conservative mechanical systems. Clearly, this theorem does not hold for dissipative systems, as the statement about conservation of phase volume contradicts the statement about the presence of an attractor. In other words, if all points of a certain region tend to a limiting point or curve, the volume of this region tends to zero. Following Ref. [4], let us find a general expression for the rate of change of the phase drop volume.
Suppose that, at time t, the phase points x fill a bounded region Dt – a phase drop. As time goes by, each point, obeying the law of motion (3.1), moves along its trajectory, so there is a mapping of Dt into some new region Dt+Δt. The region Dt can be associated with a numerical measure (volume). Avoiding strict definitions, this measure can be treated as length in a one-dimensional space, area in a two-dimensional space, volume in a three-dimensional phase space, etc. For small volume elements, the relation Δv(t + Δt) = det(J) Δv(t) is fulfilled, where J = [∂Xi(t, x; t + Δt)/∂xj] is the Jacobi matrix of the transformation relating the coordinates of the phase points x(t + Δt) and x(t). Integrating, we obtain an expression for the volume of the region Dt+Δt:

V(t + Δt) = ∫_Dt det(J) dv.    (3.2)

In the case of small Δt, the shift of the phase point can be represented in the form of the expansion

X(t, x; t + Δt) = x + Δt f(t, x) + O(Δt²).


Further, the determinant is

det(J) = det(E + Δt F + O(Δt²)) = 1 + Δt sp(F) + O(Δt²) = 1 + Δt div f(t, x) + O(Δt²),    (3.3)

where F = [∂fi(t, x)/∂xj]; E = [δij] is the identity matrix; the spur (trace) of the matrix F is written as the generalized divergence

div f(t, x) = Σ_{i=1}^{n} ∂fi/∂xi.

Using the expansion (3.3), it is not difficult to find the final expression for the rate of change of the phase volume:

dV(t)/dt = lim(Δt→0) [V(t + Δt) − V(t)]/Δt = ∫_Dt div f(t, x) dv.    (3.4)
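Relations (3.3) and (3.4) are easy to check numerically. The sketch below (our illustration; the system, point and parameter values are arbitrary choices) builds the Jacobi matrix J of the time-Δt flow map of the Van der Pol system ẋ = v, v̇ = (μ − x²)v − x by central finite differences and compares det(J) with 1 + Δt div f = 1 + Δt(μ − x²):

```python
import math

def flow(state, dt, n=100, mu=0.5):
    # Integrate the Van der Pol system with RK4 for a time dt
    def f(s):
        x, v = s
        return (v, (mu - x*x)*v - x)
    x = list(state)
    h = dt / n
    for _ in range(n):
        k1 = f(x); k2 = f([x[i] + h/2*k1[i] for i in range(2)])
        k3 = f([x[i] + h/2*k2[i] for i in range(2)])
        k4 = f([x[i] + h*k3[i] for i in range(2)])
        x = [x[i] + h/6*(k1[i] + 2*k2[i] + 2*k3[i] + k4[i]) for i in range(2)]
    return x

mu, dt, eps = 0.5, 1e-3, 1e-6
p = (1.0, 0.5)                      # an arbitrary phase point
# Jacobi matrix J of the map x(t) -> x(t + dt), by central differences
J = [[0.0, 0.0], [0.0, 0.0]]
for j in range(2):
    plus = list(p); plus[j] += eps
    minus = list(p); minus[j] -= eps
    fp, fm = flow(plus, dt), flow(minus, dt)
    for i in range(2):
        J[i][j] = (fp[i] - fm[i]) / (2*eps)

detJ = J[0][0]*J[1][1] - J[0][1]*J[1][0]
divf = mu - p[0]**2                 # div f = mu - x^2 for this system
print(detJ, 1 + dt*divf)            # agree up to O(dt^2)
```

The residual discrepancy is of order Δt², as predicted by (3.3).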

Consider various special cases.

Hamiltonian Systems. For a friction-free mechanical system described by the Hamilton equations

q̇i = ∂H/∂pi,    ṗi = −∂H/∂qi,

we have

div f = Σi [∂/∂qi (∂H/∂pi) + ∂/∂pi (−∂H/∂qi)] ≡ 0.

Thus, formula (3.4) implies the Liouville theorem: in the case of Hamiltonian dynamics the phase "liquid" is incompressible.

Linear and Nonlinear Oscillators. Let an oscillator with friction be described by the system of equations

ẋ = v,    v̇ = −γv − U′(x).

For a linear oscillator, U = ω0²x²/2. The divergence of the vector on the right-hand side of the equations of motion is then

div f = ∂v/∂x + ∂(−γv − U′(x))/∂v = −γ.
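By (3.4), a phase drop of the damped linear oscillator therefore shrinks as V(t) = V(0) e^(−γt). A small numerical experiment (our sketch; the parameter values are arbitrary) tracks the area of a triangle of three nearby phase points and compares it with this exponential law:

```python
import math

gamma, w0 = 0.3, 1.0
def f(s):
    # Damped linear oscillator: x' = v, v' = -gamma*v - w0^2*x
    x, v = s
    return (v, -gamma*v - w0*w0*x)

def rk4(s, h):
    k1 = f(s); k2 = f((s[0]+h/2*k1[0], s[1]+h/2*k1[1]))
    k3 = f((s[0]+h/2*k2[0], s[1]+h/2*k2[1]))
    k4 = f((s[0]+h*k3[0], s[1]+h*k3[1]))
    return (s[0]+h/6*(k1[0]+2*k2[0]+2*k3[0]+k4[0]),
            s[1]+h/6*(k1[1]+2*k2[1]+2*k3[1]+k4[1]))

def area(a, b, c):
    # Area of the phase-plane triangle spanned by points a, b, c
    return 0.5*abs((b[0]-a[0])*(c[1]-a[1]) - (c[0]-a[0])*(b[1]-a[1]))

# Three nearby points spanning a small phase drop
pts = [(1.0, 0.0), (1.01, 0.0), (1.0, 0.01)]
A0 = area(*pts)
t, h = 0.0, 0.001
while t < 5.0:
    pts = [rk4(p, h) for p in pts]
    t += h

print(area(*pts)/A0, math.exp(-gamma*t))   # both ≈ e^(−γt)
```

Since the system is linear, the triangle is mapped linearly and its area follows e^(−γt) exactly, up to the (tiny) integration error.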


It follows that in the conservative case (γ = 0) the phase fluid is incompressible. However, when there is friction in the system (γ > 0), the phase fluid is uniformly compressed at all points of the phase space.

The Van der Pol Oscillator. Some parts of the phase plane are responsible for stretching the phase liquid, others for its compression. Indeed, from the equations of motion

ẋ = v,    v̇ = (μ − x²)v − x,

it follows that

div f = ∂v/∂x + ∂[(μ − x²)v − x]/∂v = μ − x².

It can be shown that for μ > 0, the volume of a phase drop grows in the neighborhood of the origin. There the unstable stationary point (x, v) = (0, 0) pushes the phase points away, being a repeller. However, near the stable limit cycle that exists for μ > 0 (Section 1.5), the phase volume shrinks and the phase trajectories tend asymptotically to this cycle. We can conclude that formula (3.4) gives us a simple and convenient classification criterion: div f = 0 for conservative dynamical systems, while div f ≠ 0 for dissipative systems. The case when div f > 0 throughout the phase space is usually of no interest. In practice, we deal with dissipative systems when div f < 0 in the whole phase space or in part of it. Note that the characteristic structure of trajectories near the stationary points of dynamical systems of general type (not necessarily mechanical) bears a qualitative similarity to that of mechanical systems. On the other hand, if a dynamical system is nonmechanical and even nonphysical but, e.g., biological, ecological or economic, the very concept of "energy" may have nothing to do with the dynamic process. The previous criterion, which allows one to distinguish conservative systems from dissipative ones, is applicable in these cases as well.
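For this normalization of the Van der Pol equation, the averaging method of Section 1.3 predicts a limit cycle of amplitude 2√μ. The sketch below (our illustration; parameters and initial points are arbitrary) checks that trajectories started both inside and outside the cycle settle on it:

```python
import math

mu = 0.1
def f(s):
    # Van der Pol system in the book's normalization
    x, v = s
    return (v, (mu - x*x)*v - x)

def rk4(s, h):
    k1 = f(s); k2 = f((s[0]+h/2*k1[0], s[1]+h/2*k1[1]))
    k3 = f((s[0]+h/2*k2[0], s[1]+h/2*k2[1]))
    k4 = f((s[0]+h*k3[0], s[1]+h*k3[1]))
    return (s[0]+h/6*(k1[0]+2*k2[0]+2*k3[0]+k4[0]),
            s[1]+h/6*(k1[1]+2*k2[1]+2*k3[1]+k4[1]))

def final_amplitude(s):
    h = 0.01
    for _ in range(40000):      # t = 400: long enough to reach the cycle
        s = rk4(s, h)
    amp = 0.0
    for _ in range(700):        # track max |x| over ~ one period (T ≈ 2π)
        s = rk4(s, h)
        amp = max(amp, abs(s[0]))
    return amp

# One start inside the cycle, one far outside
a_in = final_amplitude((0.1, 0.0))
a_out = final_amplitude((4.0, 0.0))
print(a_in, a_out, 2*math.sqrt(mu))   # both amplitudes ≈ 2√μ
```

Both runs converge to the same oscillation amplitude, consistent with the repeller at the origin and the contraction near the cycle described above.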

3.1.4 Poincaré–Bendixson Theorem and Topology of the Phase Plane

Considering dynamics in the phase plane, we have shown that there are two types of attractors: point attractors (stable foci and stable nodes) and attractors in the guise of closed curves (stable limit cycles). The question naturally arises: are there attractors of other sorts? For two-dimensional systems, the answer is negative, as the following assertion is true.


Figure 3.4 (a), (b)

The Poincaré–Bendixson Theorem. Located in a bounded part of the phase plane, the phase portrait of a two-dimensional system has only two types of attracting sets (attractors) – a stable stationary point (a focus or a node) and a stable limit cycle.

Omitting the proof of this statement, we note that the two-dimensionality of the phase space leads to topological constraints. Since, by virtue of the theorem on existence and uniqueness of solutions, phase trajectories cannot self-intersect, their possible limiting behavior in the plane has a relatively simple character: they either run away to infinity or tend to a limiting point or a limit cycle. More complicated limiting behavior could occur if the phase point could "jump" over the existing spiral turns. Such a situation, as we demonstrate later, comes up in three- or more-dimensional phase spaces. Attention should be drawn to the fact that the global structure of the phase portrait of a two-dimensional system is composed, as a mosaic, from the structures of the vicinities of singular points and limit cycles. One may ask: which structures are allowed, and which are not? A convenient approach to classifying and designing the structures is based on finding the values of topological indices (invariants) [5]. Let us choose a closed contour lying in the phase plane, passing through no singular points and intersecting no limit cycles. Moving along this contour in a certain direction, at each of its points we build a vector tangent to the phase trajectory and monitor its behavior. Having completed the circuit, the vector turns by an angle 2πj; i.e., it makes j revolutions, where j is an integer. Here j > 0 if the direction of the vector's rotation coincides with the direction of the passage. Otherwise, we assign the minus sign to the number of revolutions. The number j thus found is called the Poincaré index.
If the contour encloses no singular points and cycles, then j = 0; the saddle has the index j = −1 (Fig. 3.4 (a)); the node, focus, center, and limit cycle have the index j = 1 (Fig. 3.4 (b) shows a stable focus). If several singular points are within the same contour, their Poincaré indices are added. So if the contour covers two saddle points, then j = −1 − 1 = −2. If the contour covers two saddles and a focus, then j = −1 − 1 + 1 = −1. It is not difficult to come up with configurations that can never exist. Thus, a region that contains two saddles and a focus (j = −1) cannot be enclosed by a limit cycle


(j = 1). However, a limit cycle can enclose a single focus (as in the case of the Van der Pol oscillator) or two foci and a saddle simultaneously.
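The index can be computed numerically as the winding number of the field vector along the contour. A minimal sketch (our illustration; the two vector fields are standard textbook examples, not taken from this section): the saddle field f = (x, −y) gives j = −1, while the rotation field f = (−y, x), whose origin is a center, gives j = +1.

```python
import math

def poincare_index(field, radius=1.0, samples=2000):
    """Winding number of the vector field along a circle around the
    origin (the contour must pass through no singular points)."""
    total = 0.0
    prev = None
    for k in range(samples + 1):
        t = 2*math.pi*k/samples
        fx, fy = field(radius*math.cos(t), radius*math.sin(t))
        ang = math.atan2(fy, fx)
        if prev is not None:
            d = ang - prev
            # unwrap the angle jump into (-pi, pi]
            while d <= -math.pi: d += 2*math.pi
            while d > math.pi: d -= 2*math.pi
            total += d
        prev = ang
    return round(total/(2*math.pi))

j_saddle = poincare_index(lambda x, y: (x, -y))    # saddle: j = -1
j_center = poincare_index(lambda x, y: (-y, x))    # center: j = +1
print(j_saddle, j_center)
```

Summing such indices over several enclosed singular points reproduces the addition rule stated above.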

3.1.5 The Lyapunov Exponents

Looking into stability, Lyapunov was ahead of his time. During his lifetime he had no disciples and followers, and for a long time after its creation, the Lyapunov stability theory was not only left undeveloped but hardly used at all. (From the publisher's foreword to a book by Nikolay Chetaev) [6].

Using the definitions mentioned earlier, the problem of motion stability can be solved by analyzing the motion along closely adjacent phase trajectories. To trace how points lying on neighboring trajectories converge or diverge, we need to integrate the equations of motion for the two trajectories x(t) and x∗(t), starting from points x0 and x∗0 separated by a very small distance. However, in many cases, it is convenient to integrate the equations for the basic trajectory x(t) together with the equations for the vector of deviations δx(t) = x∗(t) − x(t):

ẋ = f(x, t),    δẋ = f(x + δx, t) − f(x, t).

If the points of the near trajectories are close at the same moment in time, the vector of deviations is originally small. It also remains small for some time after. This allows the second system of equations to be linearized:

ẋ = f(x, t),    δẋ = A(x(t)) δx,    A = [∂fi/∂xj].    (3.5)

Here, the symbol A designates the Jacobi matrix. The latter depends on the parameters of the basic trajectory x(t) = X(t0, x0; t) and, consequently, on x0 and t. Next, suppose that the case is generic and the special conditions under which the matrix A is degenerate or equal to zero are not fulfilled. The solution of the equations for δx has the form of a linear transformation. Using the evolution matrix U, we can write it as

δx(δx0, t) = U(t0, x0; t) δx0,    δx0 = δx(t0).

When considered within rather small segments of the trajectories, the matrix A may be regarded as time-independent. According to the basic theory of systems of linear equations, we conclude that the vectors δx are linear combinations containing the functions t^s exp[(λ + iω)t]. Since, for λ ≠ 0, the most rapidly changing factor is the exponential exp[λt], the rate of convergence (divergence) of the phase points is determined by the term of the linear combination involving exp[λt] with the highest value of λ.


An expression for the length of the vector δx in the quadratic norm is

||δx|| = (δx^T ⋅ δx)^(1/2) = (Σ_{i=1}^{n} δxi²)^(1/2),

where the symbol T denotes transposition. The time dependence of the vector length is then given by the equation

||δx(δx0, t)|| = [(δx0)^T U^T(t) U(t) δx0]^(1/2).

To the symmetric matrix U^T(t)U(t) there correspond real eigenvalues Λr and mutually orthogonal eigenvectors δx0^(r) with numbers r = 1, . . . , n. Choosing the latter as initial vectors, we obtain

||δx(δx0^(r), t)|| = ||δx0^(r)|| ⋅ Λr^(1/2)(t).

The eigenvalues Λr(t) ∼ exp[2λr t], so that Λr^(1/2)(t) ∼ exp[λr t] describes the exponential convergence (divergence) of the trajectories. All the coefficients λr, averaged along the trajectory, form the spectrum of Lyapunov characteristic exponents. The Lyapunov exponents are usually numbered so that the inequality λ1 ≥ λ2 ≥ ⋯ ≥ λn is fulfilled. The general definition of the Lyapunov exponent can be written as

λr = lim̄(τ→∞) τ⁻¹ ln( ||δx(δx0^(r), t0 + τ)|| / ||δx0^(r)|| ),    (3.6)

where δx(δx0^(r), t0 + τ) is the solution of the system of equations for the deviations. This formula implies integrating simultaneously the equations for the basic trajectory and for the deviations. The bar above the symbol of the limiting transition means that the upper limit is taken. That is to say, (a) there are sequences τm → ∞ as m → ∞ for which limits, possibly different, exist in the usual sense; and (b) among the limiting values thus found, the largest must be chosen. Investigating the trajectory as a whole, it should be appreciated that, because A is time-dependent, the sets of eigenvalues {Λr} and eigenvectors {δx0^(r)} are different at various points of the trajectory. However, these quantities vary in a continuous manner as the phase point moves along the trajectory. Consequently, a contribution of each part of the trajectory to the exponent λr with a given number r can be found. Figure 3.5 illustrates a simple geometric interpretation of the Lyapunov exponents. Suppose that the phase drop at first looks like a ball of small radius ρ. Its center lies on the main path. During the motion, the phase drop becomes deformed, and its shape turns into an ellipsoid. The exponents λr characterize the law of change of the sizes of the semi-principal axes of the ellipsoid over time. Note that if the vector δx0 = x∗0 − x0 is directed along e∥, tangent to the phase trajectory, then both points x0 and x∗0 belong to one and the same path. Then, if τ is sufficiently small, the following estimate is true:


Figure 3.5: a spherical phase drop of radius ρ turns into an ellipsoid with semi-axes ρr = ρ exp[λr t]; λ1 = λmax.

||δx(δx0, t0 + τ)|| ≈ ||ẋ|| τ = ||f|| τ < M τ,

where M is a positive constant whose existence follows from the boundedness condition ||f|| < M. Such a character of the local change does not cause the quantity ||δx(t)|| to increase exponentially as t passes. As a consequence, the exponent λ∥ corresponding to the tangential direction is zero. Here we leave aside the case when the phase trajectory is a stationary point.

Calculation of the Maximum Lyapunov Exponent. An important signature of the instability of motion is the positivity of the maximum Lyapunov exponent (MLE). Positivity testifies that initially close phase trajectories diverge and a phase drop stretches. The MLE can be found numerically using the formula

λmax = lim̄(τ→∞) τ⁻¹ ln( ||δx(δx0, t0 + τ)|| / ||δx0|| ),    (3.7)

with "almost any" vector being suitable as the initial vector of the deviations δx0. Let us demonstrate that this is the case. An arbitrary vector can be expanded in a basis that includes the direction emax of the most rapid growth (slowest decay). Choosing the initial vector perpendicular to this direction is an unsuccessful variant. If the projection of the initial vector on emax is nonzero, it increases so fast that the other components become negligible over time. To put it otherwise, as time goes by, the deviation vector turns, tending toward the direction emax that corresponds to the MLE. A difficulty met in calculating the MLE for a trajectory of unstable motion on a computer is rather specific: the deviation vector length quickly becomes "too large" owing to its exponential growth, in consequence of which a register overflow happens. To avoid this unpleasant situation, the growth coefficients of the vector lengths should be computed over small time intervals, and only then


Figure 3.6: step-by-step renormalization of the deviation vector along the path x(0) → x(τ) → x(2τ): on each interval of length τ, the grown vector u_m is normalized to the unit vector v_m.

we need to sum up their logarithms. A commonly used algorithm for computing the MLE has the following form:

v_{m−1} = δx((m − 1)τ)  ⇒  u_m = δx(mτ),    v_m = u_m / ||u_m||,

λmax = lim(M→∞) (1/(Mτ)) Σ_{m=1}^{M} ln ||u_m||.    (3.8)

The arrow ⇒ here denotes the transition responsible for integrating simultaneously the equations for the point lying on the main path and the equations for the vector of deviations within the time interval [(m − 1)τ, mτ]. Each consecutive integration step takes the normalized vector as its initial vector, as shown in Fig. 3.6. Accordingly, we add together the logarithms of the lengths of the resulting vectors.

Calculation of the Whole Spectrum of Lyapunov Exponents. To classify the types of unstable dynamics more precisely, it is necessary to estimate the whole spectrum of Lyapunov exponents. For each point of the phase trajectory there are two distinguished directions: (a) the direction emax = e1 of the most rapid divergence (slowest convergence) of trajectories, and (b) the tangent direction e∥, for which λ∥ = 0. These directions are mutually orthogonal. To the other Lyapunov exponents there correspond directions orthogonal both to emax and to e∥. These exponents may have different signs. The phase drop diminishes its volume during the motion if λ1 + ⋯ + λn < 0 (dissipative systems) or leaves it unchanged if λ1 + ⋯ + λn = 0. The fact that an arbitrary deviation vector turns toward the direction emax during the motion complicates the estimation of the exponents other than the MLE. Let us discuss how to bypass this difficulty. The elements of the Jacobi matrix depend on the coordinates of the point on the main trajectory. In turn, these coordinates are time-dependent. Thus, the equations for the coordinates of the deviation vector are linear homogeneous differential equations with variable coefficients. Choosing the time τ sufficiently small, we can ensure that the coefficients do not change dramatically within each of the intervals Tm = [(m − 1)τ, mτ]. Now, performing calculations on Tm, we suppose the coefficients to be constant. Consequently, normalized by the condition ||δx_i((m − 1)τ)|| = 1 and selected in a special


way, the initial vectors δx^(i)((m − 1)τ) provide a way to find the local contributions to the exponents, λi = τ⁻¹ ln ||δx^(i)(mτ)||. In reality, the coefficients are time-dependent, which makes the vectors δx^(i)(mτ) turn toward the direction emax. This effect reveals itself only over large enough times t = mτ, m ≫ 1. Therefore, in order to neutralize the small rotations at each step m − 1 → m, it suffices to make adjustments at each integration step over Tm, m = 1, 2, . . . . For this purpose, the Gram–Schmidt orthogonalization is the commonly used procedure. Eventually, the formulae for calculating the Lyapunov exponents take the following form:

v_{m−1}^(r) = δx^(r)((m − 1)τ)  ⇒  w_m^(r) = δx^(r)(mτ),    r = 1, . . . , n,

u_m^(1) = w_m^(1),    u_m^(l) = w_m^(l) − Σ_{k=1}^{l−1} v_m^(k) [v_m^(k)]^T ⋅ w_m^(l),    l = 2, . . . , n,

v_m^(r) = u_m^(r) / ||u_m^(r)||,

λr = lim(M→∞) (1/(Mτ)) Σ_{m=1}^{M} ln ||u_m^(r)||.    (3.9)

The arrow ⇒ here denotes the procedure of integrating simultaneously the system of nonlinear equations for the phase point lying on the main path and the n systems of linear equations for the deviations. Each linear system has its own initial vector, which is one of the vectors of the orthogonal set.

Computation of Lyapunov Exponents for Mappings. When a system's dynamics is described by a mapping, e.g., by a Poincaré map, the procedure to calculate the Lyapunov exponents must be modified. The modification takes into account the fact that time now changes in discrete steps rather than in a continuous manner. Otherwise, the construction of the algorithm is quite similar to that considered earlier. In this case, the basic nonlinear mapping and the related linear mapping for the deviations should be iterated simultaneously:

xN+1 = F(xN),    δxN+1 = A(xN) δxN,    (3.10)

where A(xN) = [∂Fi/∂xj]|x=xN. Exact analytical expressions for the Lyapunov exponents can now be written as

λr = lim(N→∞) ln |νr(BN)|,    BN = [A(xN) A(xN−1) ⋯ A(x1)]^(1/N).    (3.11)

The symbol νr(BN), r = 1, . . . , n, denotes the eigenvalues of the matrix BN. As in the continuous-time case, a formula of this type is of little use for practical calculations. This is due to the fact that the matrix product has exponentially fast-growing eigenvalues with increasing N. To calculate the Lyapunov exponents for a mapping, it is more convenient to employ a step-by-step procedure, which consists in seeking the eigenvalues of the matrices corresponding to single iterations. The resulting algorithm differs from


that introduced for continuous systems only by replacing the numerical integration of differential equations by the iteration of the mapping.

Lyapunov Exponents and Dynamic Behavior Types. Consider the possible kinds of motion of dynamical systems with continuous time. To classify the types of dynamics, there is a principle based on writing down the sequence of the exponent signs, including the exponents that are equal to zero. The appropriate strings are referred to as signatures of the Lyapunov exponent spectrum. For asymptotically stable regular motion, the Lyapunov exponents are always nonpositive. Presented here is a list of the various special cases. If the phase space is two-dimensional, the following signatures and types of motion are possible:

< −, − > – an asymptotically stable stationary point;
< 0, − > – an asymptotically stable limit cycle;
< 0, 0 > – stable motion along a path; a phase drop preserves its volume.

Asymptotically stable motion is peculiar to dissipative systems (Σi λi < 0). Preservation of phase volume takes place in conservative systems (Σi λi = 0). Attention should be drawn to the fact that one of the exponents is zero for trajectories that are not stationary points. Note that the signature < 0, 0 > characterizes an autonomous conservative one-degree-of-freedom Hamiltonian system, which is integrable because its energy is conserved. For such a system (e.g., a frictionless pendulum), finite periodic motion is the characteristic type of motion. In the case of three-dimensional phase space, there are four possible types of motion:

< −, −, − > – an asymptotically stable stationary point;
< 0, −, − > – an asymptotically stable limit cycle;
< 0, 0, − > – an asymptotically stable two-dimensional torus;
< +, 0, − > – an attractor with an unstable phase trajectory.

Let us dwell upon the case < 0, 0, − >. Suppose the position of a point in the phase space to be defined by the values of two independent continuously varying parameters: x = x(θ1, θ2).
Varying the value θ1 while keeping θ2 = const, we obtain a curve. If the dependence on θ1 is periodic, the curve is closed (a cycle). The surface swept out by the cycle as it moves in space with changing θ2 looks like a deformed cylinder. If the θ2-dependence is also periodic, the surface represents a two-dimensional torus. With one of the Lyapunov exponents being negative, the phase point tends asymptotically to the surface of the torus as t → ∞. The doubly periodic motion on the surface of the torus can be described by the function x(ω1 t, ω2 t) (Fig. 3.7). For a rational frequency ratio (ω1/ω2 ∈ Q), the trajectory is closed on itself on the torus and the


Figure 3.7: doubly periodic motion on a two-dimensional torus with frequencies ω1 and ω2.

Figure 3.8: the neighborhood of a saddle-type phase trajectory x(t): the stable manifold W^s, the unstable manifold W^u, and the Poincaré cross section through the point O with the separatrices Γ^s and Γ^u.

motion is periodic. Otherwise, when ω1/ω2 ∉ Q, the trajectory never closes and is called an irrational winding of the torus. In the case of < +, 0, − >, we are dealing with a phase trajectory of saddle type. The structure of the neighborhood of such a trajectory is shown in Fig. 3.8. The main path x(t) is the intersection of a stable manifold W^s and an unstable manifold W^u. A trajectory in close proximity to the main one and belonging to W^s tends asymptotically toward x(t) as t → ∞; a trajectory belonging to W^u, on the contrary, moves farther and farther away from x(t). At the intersection with the Poincaré plane passing through O, there arises the pattern of the neighborhood of a saddle point – with the separatrices Γ^s and Γ^u of the saddle being the lines of intersection of the manifolds W^s and W^u with the Poincaré plane. One might assume that, as in the previous case, due to the compression the trajectory also tends asymptotically to the surface of a torus. This, however, is not the case, because motion on a torus, being two-dimensional, cannot be unstable. In fact, in the case at hand, the attractor is a self-similar fractal set. In the theory of dynamical systems, such attractors are called strange.
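The positive exponent of an unstable trajectory can be estimated with the renormalization algorithm (3.8). As a minimal self-test (our sketch, not an example from the book), take the linear saddle flow ẋ = y, ẏ = x; for a linear system the deviation equations coincide with the system itself, and λmax equals the largest eigenvalue, exactly 1:

```python
import math

def f(s):
    # Linear saddle flow: eigenvalues ±1, so the exact MLE is 1
    x, y = s
    return (y, x)

def rk4(s, h):
    k1 = f(s); k2 = f((s[0]+h/2*k1[0], s[1]+h/2*k1[1]))
    k3 = f((s[0]+h/2*k2[0], s[1]+h/2*k2[1]))
    k4 = f((s[0]+h*k3[0], s[1]+h*k3[1]))
    return (s[0]+h/6*(k1[0]+2*k2[0]+2*k3[0]+k4[0]),
            s[1]+h/6*(k1[1]+2*k2[1]+2*k3[1]+k4[1]))

# Algorithm (3.8): integrate the deviation for a time tau, take the log
# of its growth, renormalize, repeat; average the logs over the run.
h, steps, M = 0.01, 50, 40
tau = steps*h
v = (1.0, 0.0)                    # normalized initial deviation
log_sum = 0.0
for m in range(M):
    for _ in range(steps):
        v = rk4(v, h)             # deviation dynamics = the linear flow
    norm = math.hypot(*v)
    log_sum += math.log(norm)     # ln ||u_m||
    v = (v[0]/norm, v[1]/norm)    # v_m = u_m / ||u_m||

lam = log_sum/(M*tau)
print(lam)                        # ≈ 1 (the exact MLE)
```

The renormalization keeps the vector length of order one, so no register overflow occurs even though the accumulated growth over the whole run is e^20.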


In the n-dimensional phase space, we can speak of the following types of asymptotically stable motion:

< −, −, −, . . . , − > – a stationary point;
< 0, −, −, −, . . . , − > – a limit cycle;
< 0, 0, . . . , 0, −, −, . . . , − > – a limiting torus whose dimension equals the number of zero exponents.

In the case of unstable motion (signatures < +, +, . . . , +, 0, −, −, . . . , − >), generalized saddle-type phase trajectories are possible. In that case the stable and unstable manifolds can have dimension higher than one.
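For a one-dimensional mapping, the step-by-step procedure based on (3.10) reduces to averaging ln|F′(xN)| along the orbit. A sketch (our illustration, not from the book) for the logistic map xN+1 = 4xN(1 − xN), whose Lyapunov exponent is known to be ln 2:

```python
import math

F = lambda x: 4*x*(1 - x)          # the logistic map at r = 4 (chaotic)
dF = lambda x: 4 - 8*x             # its derivative: A(x_N) in (3.10)

x, log_sum, N = 0.2, 0.0, 100000
for _ in range(1000):              # discard a transient
    x = F(x)
for _ in range(N):
    log_sum += math.log(abs(dF(x)))
    x = F(x)

lam = log_sum/N
print(lam)                         # ≈ ln 2 ≈ 0.693
```

The positive exponent confirms that the orbit of this mapping is unstable, with nearby orbits separating, on average, by a factor of 2 per iteration.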

3.2 Structural Stability

3.2.1 Topological Reconstruction of the Phase Portrait

Whatever is the ultimate nature of reality (assuming that this expression has meaning), it is indisputable that our universe is not chaos. We perceive beings, objects, things to which we give names. These beings or things are forms or structures endowed with a degree of stability; they occupy some part of space and last over some period of time. (René Thom)

From a practical point of view, it is important to be able to predict how a dynamical system evolves as the control parameters involved in the problem are varied. Like a phase space, the space of the control parameters can be multidimensional. In most cases, it is assumed to be finite-dimensional. As a point travels smoothly through the space of the control parameters, upon reaching quite definite positions it may cause abrupt qualitative changes in the dynamics and rearrangements of the phase portrait (bifurcations). These exceptional positions form bifurcation sets. Usually, they are manifolds of various dimensions embedded in the parameter space. In the case of a one-dimensional parameter space, bifurcations take place at isolated bifurcation points on the parameter axis. The system's dynamics has its own qualitative features in each of the cases: (i) μ < μ0, (ii) μ = μ0 and (iii) μ > μ0, with μ and μ0 being the current and the bifurcation values of the control parameter, respectively. Consider a linear damped oscillator described by the equation

ẍ + γẋ + ω0²x = 0.

The value γ = 0 of the friction coefficient is bifurcational because the stationary point of the system is (i) an unstable focus for γ < 0, (ii) a center for γ = 0 and (iii) a stable focus for γ > 0. The corresponding types of dynamic behavior of the oscillator are (i) growing oscillations, (ii) oscillations with constant amplitude and (iii) damped oscillations.


Nonlinear dynamical systems typically exhibit more complex bifurcation behavior. In particular, when a stable focus turns into an unstable one and oscillations with growing amplitude arise, there is usually a nonlinear mechanism that limits the amplitude. Eventually, a stable oscillation mode is established; a stable limit cycle corresponds to such a mode. The transition from a fixed state to stable oscillations is usually called a bifurcation of the birth of a cycle, or the Andronov–Hopf bifurcation. To better understand the issue, let us discuss two systems with nonlinear friction.

A Normal (Supercritical) Bifurcation of the Birth of a Cycle. There are bifurcations at which the birth, disappearance and modification of structures containing not only singular points but also limit cycles take place. We refer again to the Van der Pol equation (Section 1.5)

ẍ − (λ − x²)ẋ + x = 0

to analyze how small oscillations nucleate and how a stationary regime of nonlinear oscillations is established. The solution of the equation should be sought as the real part of the product of a slowly varying amplitude A(t) and the rapidly oscillating factor exp(it) corresponding to the dynamics with no damping:

x(t) = A(t) exp(it) + A∗(t) exp(−it).

Here the symbol ∗ denotes complex conjugation. Note that the single unknown quantity x is now expressed in terms of two quantities: Re A and Im A. This circumstance creates an ambiguity in choosing the complex function A(t). To get rid of the ambiguity, we impose on the solution the additional condition Ȧ(t) exp(it) + Ȧ∗(t) exp(−it) = 0. Next, we substitute the chosen solution form into the equation and divide both sides by exp(it). Further, we apply the averaging method described in Section 1.3.
In doing so, we take into account the fact that the oscillating terms are of secondary importance: the slow dynamics of the amplitude A(t) "feels" only an average effect of the rapid oscillations. Let us average all the terms on the left-hand side of the equation over the rapid oscillations; that is, we integrate them over the oscillation period and divide the result by 2π. Eventually, we arrive at the shortened Van der Pol equation

Ȧ = ½(λ − |A|²)A. (3.12)

It can be solved exactly. Substituting A = √ρ(t) exp(−iφ(t)), we reduce eq. (3.12) to the system

ρ̇ = (λ − ρ)ρ, φ̇ = 0.

3 Stability of Motion and Structural Stability

Figure 3.9: Bifurcation of the birth of a cycle. (a) The supercritical case: the stationary value of ρ versus λ, with phase portraits (i) and (ii) below and above the bifurcation. (b) The subcritical case: the branches ρ+ and ρ− versus λ (ρ+ = μ/2 at λ = 0 and ρ± = μ/4 at λ = −μ²/8), with phase portraits (i), (ii) and (iii).

As can be seen, the phase φ does not change during the motion. Bernoulli's equation for the function ρ = |A|² has the solution

ρ(t) = λ[1 + (λ/ρ0 − 1) exp(−λt)]⁻¹, ρ0 = ρ(0).
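The closed-form Bernoulli solution can be cross-checked against direct integration of the truncated equation ρ̇ = (λ − ρ)ρ, writing λ for the control parameter and ρ = |A|². A small sketch (λ = 0.5 and ρ0 = 0.01 are arbitrary illustrative values):

```python
import math

def rho_exact(t, lam, rho0):
    """Closed-form solution rho(t) = lam / (1 + (lam/rho0 - 1) * exp(-lam*t))."""
    return lam / (1.0 + (lam / rho0 - 1.0) * math.exp(-lam * t))

def rho_numeric(t_end, lam, rho0, dt=1e-3):
    """4th-order Runge-Kutta for the truncated equation rho' = (lam - rho) * rho."""
    f = lambda r: (lam - r) * r
    r = rho0
    for _ in range(int(round(t_end / dt))):
        k1 = f(r)
        k2 = f(r + 0.5 * dt * k1)
        k3 = f(r + 0.5 * dt * k2)
        k4 = f(r + dt * k3)
        r += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
    return r

lam, rho0 = 0.5, 0.01
err = abs(rho_numeric(20.0, lam, rho0) - rho_exact(20.0, lam, rho0))
```

For λ > 0 the solution saturates at ρ = λ, the squared amplitude of the newborn limit cycle.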

If λ > 0 and ρ0 ≠ 0, an arbitrarily small initial oscillation amplitude grows until it reaches a certain limit. The phase portraits (i) and (ii), depicted on the left of Fig. 3.9(a), correspond to two particular values of λ. Such a scenario of bifurcation behavior is referred to as soft mode excitation of oscillations.

A Reverse (Subcritical) Bifurcation of the Birth of a Cycle. Let us consider an equation similar to the Van der Pol equation, but with a different kind of nonlinear damping term:

ẍ − (λ + μx² − x⁴)ẋ + x = 0. (3.13)
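The hard-excitation scenario hidden in eq. (3.13) can be probed by direct numerical integration. A sketch (plain RK4; λ = −0.05, μ = 1 and the initial amplitudes are illustrative choices, and the damping term is written with the sign convention that makes λ > 0 self-exciting):

```python
def simulate(x0, lam=-0.05, mu=1.0, t_end=400.0, dt=0.005):
    """RK4 integration of x'' = -x + (lam + mu*x**2 - x**4)*x',
    returning the peak |x| over roughly the final oscillation period."""
    def f(x, v):
        return v, -x + (lam + mu * x * x - x ** 4) * v
    x, v = x0, 0.0
    n = int(t_end / dt)
    tail = int(7.0 / dt)   # about one period (2*pi) plus a margin
    peak = 0.0
    for i in range(n):
        k1x, k1v = f(x, v)
        k2x, k2v = f(x + 0.5 * dt * k1x, v + 0.5 * dt * k1v)
        k3x, k3v = f(x + 0.5 * dt * k2x, v + 0.5 * dt * k2v)
        k4x, k4v = f(x + dt * k3x, v + dt * k3v)
        x += dt * (k1x + 2 * k2x + 2 * k3x + k4x) / 6.0
        v += dt * (k1v + 2 * k2v + 2 * k3v + k4v) / 6.0
        if i >= n - tail:
            peak = max(peak, abs(x))
    return peak

small = simulate(0.1)   # starts below the unstable cycle: decays to rest
large = simulate(1.5)   # starts above it: settles on the stable finite-amplitude cycle
```

For λ < 0 the rest state is locally stable, yet a sufficiently strong kick lands the system on a finite-amplitude oscillation: the two initial conditions end up on different attractors.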

(This equation arises in the theory of electronic generators of a special type.) As in the previous case, we shall seek a solution in the form of the real part of the product of the slowly varying amplitude A(t) and the rapidly oscillating factor exp(it) by introducing


an additional condition. Averaging over a period of the rapid oscillations yields the truncated equation

Ȧ = ½(λ + μ|A|² − 2|A|⁴)A.

As mentioned, the phase φ = arg A remains constant; the equation for ρ = |A|² now takes the form

ρ̇ = (λ + μρ − 2ρ²)ρ.

It is easy to see that there is always a fixed point ρ = 0 corresponding to an unexcited state. This point is stable when λ < 0. Under the condition −μ²/8 < λ < 0, there are two more stationary solutions

ρ± = ¼(μ ± √(μ² + 8λ)).

In the phase plane of the dynamical system (3.13), an unstable limit cycle corresponds to the solution ρ− and a stable limit cycle to the solution ρ+. For λ > 0, there is only one attractor, described by the solution ρ+. Let us look at the system's bifurcation behavior when changing the parameters (Fig. 3.9(b)). As λ increases slowly (from negative values), the system remains in the ground state up to λ = 0. However, if under an external influence the value of x varies abruptly, an oscillatory process can arise even at −μ²/8 < λ < 0. At λ = 0, a bifurcation takes place: the steady state loses stability. Then, after a short-term transient, finite-amplitude oscillations arise immediately (a hard mode of excitation of oscillations). If we now begin to diminish λ, the pattern of bifurcation behavior will be different. Down to the value λ = −μ²/8, the oscillatory process persists, although for λ < 0 a disruption of the oscillations is possible due to external influences. When λ passes through the value λ = −μ²/8, the disruption of the oscillations happens necessarily. After the short-term transient ends, the amplitude drops to zero in an almost stepwise fashion. A so-called hysteretic behavior of the nonlinear dynamical system is plain to see in Fig. 3.9(b). (The right-side view: arrows indicate a hysteresis cycle on the bifurcation diagram; the left-side view: phase portraits (i), (ii) and (iii) for three particular values of the parameter λ.)

3.2.2 Coarse Systems

My heart is in equilibrium only at the razor-edge.
Pierre Reverdy

It is safe to say that one of the universal concepts widely used in all fields of the exact sciences is the concept of the Fourier spectrum. This notion is an extremely useful tool


to describe mathematically most physical dynamic processes in a rather simple way. Oscillations in uncoupled or loosely coupled systems of harmonic oscillators can serve as an illustration of the processes mentioned. When a dynamic variable has the meaning of a small deviation from an equilibrium state at a local minimum of the potential energy, one can speak of harmonic oscillations. In this case, expanding the potential in powers of deviations from the extremum and keeping only the lowest-order (second) term in the expansion, we obtain a harmonic oscillator equation. Let us discuss in more detail the form of the power series expansion of the potential. Although the expansion near a minimum can start with any even power, we single out the quadratic case because it is typical. To elucidate this concept, which plays an important role in applied mathematics, we argue as follows. In creating mathematical models of the phenomena occurring in the real world, we know any functional dependence only approximately. In particular, the difference between a real and a model potential function can be an arbitrary function, but a small one if the model is sufficiently accurate for its intended use. In mathematics, there is a criterion that allows one to select practically reliable models. This criterion states that conclusions of a qualitative nature made for a model must remain unchanged when a particular function involved in the problem posed is slightly varied. It is generally believed that the "perturbation" procedure consists in adding a small function that does not spoil the differentiability properties. When perturbed, a function f(x) whose expansion near the extremum starts with x^(2n), n > 1, turns into a function with one or more quadratic extrema. Indeed, if the original function has a degenerate extremum, it satisfies the system of equations f^(k)(0) = 0, k = 1, . . . , 2n − 1. However, these conditions are violated by the disturbance.
But since the perturbation cannot eliminate the extremum itself, the conditions f′(x0) = 0 and f″(x0) ≠ 0 are fulfilled for the expansion in the vicinity of the shifted extremum x0. To put it otherwise, it is a quadratic extremum. The notion of typicality bears a similarity to the notion of structural stability (coarseness) of dynamical systems. This concept was first introduced by A.A. Andronov and L.S. Pontryagin [7]. In their book [8], A.A. Andronov et al., who contributed much to the theory of nonlinear dynamic processes, interpret this concept as follows:

In writing a differential equation, as we have already said, we never take into account, and cannot take into account, all factors without exception that somehow influence the behavior of the physical system considered. On the other hand, none of the factors can remain absolutely unchanged as the physical system moves. In setting up a particular physical problem, we attribute quite definite fixed values to the system's parameters. However, this makes sense only provided that small changes of the parameters do not affect the motion significantly. Suppose that the dynamical system at hand corresponds to a real physical problem. The right-hand sides of the differential equations of such a dynamical system always contain a certain number of parameters corresponding to the parameters of the physical problem. If the dynamical system is in good accordance with the properties of the physical problem when the parameters vary weakly, then, by virtue of the foregoing, it has to preserve those features that characterize the behavior of the physical model. First of all, for such dynamical systems, the qualitative structure of the partition into trajectories must be constant. If some qualitative features are due to


certain quantitative relations between the parameters involved in the differential equations describing the physical problem, these qualitative features disappear under an arbitrarily small change in the parameters. Clearly, such qualitative features, generally speaking, are impossible to observe in real systems. So, it would be natural to distinguish a class of dynamical systems whose topological structure of the phase trajectories is not altered by small changes in the differential equations. Such systems are called coarse.
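The typicality argument above can be made concrete for the degenerate potential f(x) = x⁴ perturbed by a small linear term εx: the perturbed function has a shifted, strictly quadratic minimum. A small numerical sketch (ε = 10⁻² and the Newton iteration are illustrative choices):

```python
def newton_min(eps, x=-1.0, iters=60):
    """Newton iteration on g(x) = f'(x) = 4*x**3 + eps for the perturbed
    potential f(x) = x**4 + eps*x (a degenerate minimum plus a small tilt)."""
    for _ in range(iters):
        x -= (4 * x ** 3 + eps) / (12 * x ** 2)
    return x

eps = 1e-2
x0 = newton_min(eps)        # the shifted extremum, x0 = -(eps/4)**(1/3)
curvature = 12 * x0 ** 2    # f''(x0) != 0: the extremum has become quadratic
```

At ε = 0 all derivatives up to the third vanish at the origin; any ε ≠ 0 destroys the degeneracy, leaving an ordinary quadratic minimum nearby.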

A dynamical system is structurally unstable (noncoarse) if the value of a controlling parameter is equal to a bifurcation value: indeed, a small "perturbation" of the parameter violates this equality. It can be assumed that either the new system formed after the "perturbation" will have no bifurcation, or the bifurcation point will shift and occupy a new position on the axis of the controlling parameter. If any "perturbation" merely displaces the bifurcation point, we can claim that the entire family of dynamical systems is coarse as the governing parameter runs over the range of admissible values. Thus, a structurally unstable dynamical system may be an element of a structurally stable single-parameter family. A more general statement is that a structurally unstable m-parameter family can be embedded in a structurally stable m′-parameter family with m′ > m. An example that will be mentioned later is intended to illustrate the last assertion.

3.2.3 Cusp Catastrophe

It is the last straw that breaks the camel's back.
Eastern proverb

Section 1.1 of the present book dealt with a common approach to analyzing nonlinear oscillations in single-degree-of-freedom systems. A particular case of such a system is a nonlinear oscillator with a double-well potential described by a fourth-order polynomial. We assume that, due to the presence of friction, the oscillations in such a system are damped. For the sake of convenience, the potential function is chosen in the form U(x) = x⁴ − Ax² + Bx. Varying the values of the parameters A and B, we obtain the required changes in the potential profile. To see the global picture of the bifurcation behavior, we need an equation for finding the stationary points: U′(x) = 4x³ − 2Ax + B = 0. The best way to study it is to construct a graphical image of the set of solutions. Plotting the parameters A and B along the abscissa and ordinate axes, we mark the values of x corresponding to the positions of the stationary points on the z-axis. The resulting surface is then projected onto the parameter plane. As can be seen from Fig. 3.10, the projection has characteristic elements: fold curves and a cusp point. To derive an equation for the fold curves, apart from the extremum condition 4x³ − 2Ax + B = 0, the condition for the existence of a multiple root, 12x² − 2A = 0, should be written down. Eliminating the variable x = ±(A/6)^(1/2), we arrive at the desired equation relating the parameter values only: B = ±(2A/3)^(3/2).
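The number of stationary points on either side of the fold curves can be read off the sign of the cubic discriminant of U′(x). A small sketch (A = 3 is an illustrative value):

```python
def n_equilibria(A, B, tol=1e-12):
    """Number of stationary points of U(x) = x**4 - A*x**2 + B*x, from the
    discriminant of U'(x)/4 = x**3 + p*x + q with p = -A/2, q = B/4."""
    p, q = -A / 2.0, B / 4.0
    disc = -4.0 * p ** 3 - 27.0 * q ** 2
    if disc > tol:
        return 3    # inside the cusp region: two minima and a maximum
    if disc < -tol:
        return 1    # outside the fold curves: a single minimum
    return 2        # on a fold curve: a degenerate (double) root

A = 3.0
B_fold = (2.0 * A / 3.0) ** 1.5   # the fold curve B = (2A/3)**(3/2)
```

Setting the discriminant to zero reproduces exactly the fold-curve equation B = ±(2A/3)^(3/2) derived above.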


Figure 3.10: The surface of stationary points x(A, B), its fold and cusp point, and the projection onto the (A, B) parameter plane: two fold curves meeting at the cusp point.

A qualitative picture of the oscillations is illustrated in Fig. 3.11. The two wells are responsible for two stable equilibrium states. The corresponding attracting fixed points (attractors) coincide with the potential minima. By varying the parameters, a bifurcation can be initiated that causes either the right or the left potential well, together with the corresponding stationary point, to disappear. Suppose the system to be in a state that loses its stability when the bifurcation occurs. Then, as the governing parameter passes through the bifurcation value, a transient process takes place, and the system comes to a new stable state. When the parameters vary rather slowly (adiabatically), the system state is said to track a certain potential minimum, at least as long as the latter exists. As can be seen from the figure, by choosing an appropriate shape of the potential, one can achieve both the transition from the right well to the left one (top) and the reverse transition (bottom). An example of a bifurcation where a small variation of the control parameter results in a significant change in the system state is the destruction of structures under an increasing static load. For this

Figure 3.11


reason, such a bifurcation is often called a catastrophe. In our example, this is a cusp catastrophe. Let us summarize the features of the considered family of dynamical systems as regards the presence or absence of the coarseness property (structural stability). The single dynamical system that corresponds to a certain point (A, B) in the plane of the controlling parameters is structurally unstable if the point lies on a fold curve or coincides with the cusp point; otherwise it is coarse. The single-parameter family of systems that corresponds to all possible points lying on a straight line A = A0 > 0 parallel to the B axis is coarse. At the same time, it contains two structurally unstable systems corresponding to the points of intersection of the line A = A0 with the fold curves. When slightly perturbed, the intersection points shift but still lie on the line. The single-parameter family of systems that corresponds to points on the B axis (A = 0) is structurally unstable because it includes the cusp point; the latter leaves the B axis when smoothly perturbed. The two-parameter family of systems that corresponds to all points of the plane of the controlling parameters is coarse. This is due to the fact that the fold curves and the cusp point, when shifted after a small "perturbation," remain within the parameter plane.
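The hysteresis described above can be reproduced by relaxing the overdamped dynamics ẋ = −U′(x) while sweeping B quasi-statically back and forth at fixed A. A rough sketch (A = 1, and the step sizes and relaxation times are illustrative choices; the jumps occur slightly beyond the fold values B = ±(2A/3)^(3/2) ≈ ±0.544 because relaxation slows near a fold):

```python
def sweep(B_values, x, A=1.0, dt=0.01, relax=500):
    """Overdamped relaxation x' = -U'(x) = -(4x**3 - 2*A*x + B); B is changed
    quasi-statically, so x tracks a stable equilibrium until it disappears."""
    branch = []
    for B in B_values:
        for _ in range(relax):
            x -= dt * (4 * x ** 3 - 2 * A * x + B)
        branch.append(x)
    return branch

Bs = [i / 100.0 for i in range(-100, 101)]        # B swept from -1 to 1
up = sweep(Bs, x=1.0)                             # forward sweep: starts in the right well
down = sweep(list(reversed(Bs)), x=-1.0)          # backward sweep: starts in the left well
B_jump_up = next(B for B, xx in zip(Bs, up) if xx < 0)
B_jump_down = next(B for B, xx in zip(reversed(Bs), down) if xx > 0)
```

The jump from the right well to the left one and the reverse jump occur at different values of B: the two sweeps trace a hysteresis loop.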

3.2.4 Catastrophe Theory

A generalization of the theory of systems with a quadratic potential is called catastrophe theory. In this book, the authors present some rigorous mathematical findings underlying the theory without proof but with comments [9]. Let us look at potential-type dynamical systems in motion, assuming that phase trajectories are attracted to stable stationary points located at the minima of the potential function. As in Section 3.2.3, we focus our attention on steady-state rearrangements occurring due to the appearance or disappearance of minima as the potential function varies. Consider a system of general form with r controlling parameters in an n-dimensional phase space. In this case, we have a family of potentials of the form U = U(x, t) = U(x1, . . . , xn; t1, . . . , tr), where x = (x1, . . . , xn) ∈ Rⁿ are dynamic variables (behavior parameters) and t = (t1, . . . , tr) ∈ Rʳ are control parameters. It is worth pointing out that the choice of a certain potential is made by assigning the vector t. We consider a change of coordinates in the space of dynamical variables, y = f(x), defined by the functions


y1 = f1(x1, . . . , xn), . . . , yn = fn(x1, . . . , xn).

Such a mapping of the space Rⁿ onto itself is called a diffeomorphism if it is invertible and determined by smooth functions, i.e., functions that are sufficiently many times differentiable. Diffeomorphic transformations correspond to smooth deformations without the birth of new singularities. Two smooth potentials U1(x) and U2(x) are said to be locally equivalent near zero provided that there are a local diffeomorphism f and a constant γ in a neighborhood of zero such that

U2(x) = U1(f(x)) + γ. (3.14)

This definition can be extended to potential families. The families U2(x, t) and U1(x, t) are considered equivalent near the point (x, t) = (0, 0) if there exist simultaneously:
1. a diffeomorphism s = g(t) in the space of control parameters;
2. a family of diffeomorphisms y = f(x, t), depending on the controlling parameters;
3. a smooth function γ(t) such that

U2(x, t) = U1(f(x, t), g(t)) + γ(t).

Formula (3.14) corresponds to the case when the number of controlling parameters is zero. Equivalent potential families form equivalence classes consisting of pairwise equivalent families. Discussing the concept of coarseness (structural stability), we have noted that the functions characterizing a dynamic system are not known to us exactly, but only up to small "perturbations" caused by inaccuracy of measurement, dispersion in the parameters of "identical" systems, slow changes in the parameters of a given system, etc. It is reasonable to require that these "perturbations" should not change the system's properties fundamentally. For this purpose, in formulating a mathematical model it suffices to assume that the "perturbation" is carried out by adding a small smooth function to the potential. It is intuitively clear that small smooth "modifications" are equivalent to some diffeomorphic transformations of the type described earlier. This implies, in particular, that if a small "perturbation" changes the potential, its original form can be restored by a specially chosen diffeomorphic transformation. The study of the qualitative features of the real system's dynamics can thus be reduced to the investigation of a system with a family of potentials belonging to a given equivalence class. No doubt the family having the simplest functional form is the most advantageous to study. Suppose that we have some combination of values of the dynamic variables and controlling parameters such that all first derivatives of the potential with respect to the dynamic variables vanish (the potential has a critical point)


∂U/∂x1 = ⋯ = ∂U/∂xn = 0.

Critical points of the potential correspond to stationary points of the dynamical system. The potential is assumed to correspond to zero values of the controlling parameters, t = 0, and to have a critical point with coordinates x = 0. This can be accomplished by choosing a suitable origin in the space of the x and t vectors. The critical point (x, t) = (0, 0) is called nondegenerate if the Hessian matrix

H = (∂²U/∂xi∂xj) (3.15)

at this point has nonzero determinant and, hence, rank equal to n. The following assertion holds true.

The Morse Lemma. If the critical point (x, t) = (0, 0) of the smooth family of potentials U(x, t) is nondegenerate, this family is equivalent to the family of the form

y1² + ⋯ + yn–l² − yn–l+1² − ⋯ − yn², (3.16)

where the fixed number l of summands with minus signs is called the index of the critical point (0, 0). The sense of this statement is that, if the original system is described by a smooth family of potentials with a nondegenerate critical point, or by another family resulting from small modifications of it, there exists a diffeomorphic transformation (y1, . . . , yn) = y = f(x, t) that brings every member of the family to the standard quadratic form (3.16). As can be seen, the system's behavior in the neighborhood of a nondegenerate critical point is trivial: the qualitatively different types of behavior correspond to forms that differ only in the number of positive and negative terms (the Morse l-saddle). Let us formulate another statement.

The Splitting Lemma. Let the Hessian matrix (3.15) have rank n − m at the critical point (x, t) = (0, 0) of the smooth family of potentials U(x, t). Then this family is equivalent to a family of the form

Ũ(y1(x, t), . . . , ym(x, t), t) ± ym+1² ± ⋯ ± yn².

From the splitting lemma it follows that, in the case of a degenerate critical point whose degeneracy degree is determined by the number m, called the co-rank, a diffeomorphic transformation y = f(x, t) can be found that allows the potential function to be split into two parts,


the former being the Morse saddle ±ym+1² ± ⋯ ± yn² and the latter some nontrivial function of the transformed variables y1, . . . , ym. The type and degree of degeneracy define the latter's form. If m = 1, the non-Morse part of the potential has the form Ũ(y1, t), with the expansion in powers of y1 for t = 0 beginning with y1^k, k > 2. In this simplest case, the following theorem is relevant.

Theorem on Reduction of a Function. Let a smooth function φ(x) have a singular point at zero, and

φ(0) = φ′(0) = φ″(0) = ⋯ = φ^(k–1)(0) = 0, φ^(k)(0) ≠ 0.

Then, using a smooth change of coordinates, it can be reduced to the form x^k for odd k or to the form ±x^k for even k; in the latter case, the sign coincides with the sign of the derivative φ^(k)(0). By combining the smooth change of coordinates mentioned in the theorem with the transformation y = f(x, t), we obtain a diffeomorphism that brings the initial potential to the form ±y1^k ± ym+1² ± ⋯ ± yn² in the vicinity of x = 0, on condition that t = 0. We, however, must solve a more general problem: to find reduced forms for families of potentials, having selected only those that are structurally stable. The analysis run in the previous section showed that the appearance of such forms is determined by the number of control parameters. Namely, in the absence of parameters, only quadratic, or Morse, singularities are stable. A structurally stable one-parameter family may contain y1³, and a two-parameter family, y1⁴. This list of reduced forms, often referred to simply as catastrophes, can be extended. Let the co-rank m be equal to one while the number of parameters varies from one to five. Then the appropriate catastrophes can be written out explicitly; their very expressive names are rooted in the literature and associated with the shape of the bifurcation surfaces:

"Fold" catastrophe – U = y1³ + t1y1 + M1;
"Cusp" catastrophe – U = ±(y1⁴ + t2y1² + t1y1) + M1;
"Swallowtail" catastrophe – U = y1⁵ + t3y1³ + t2y1² + t1y1 + M1;
"Butterfly" catastrophe – U = ±(y1⁶ + t4y1⁴ + t3y1³ + t2y1² + t1y1) + M1;
"Wigwam" catastrophe – U = y1⁷ + t5y1⁵ + t4y1⁴ + t3y1³ + t2y1² + t1y1 + M1.

In these expressions, M1 = ±y2² ± ⋯ ± yn² is the quadratic part of the reduced form. The number of parameters required for the family of potentials to be structurally stable is determined by the type of degeneracy of the critical point.
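For the Morse part M1 of these normal forms, the only invariant is the index l, the number of negative Hessian eigenvalues. A minimal sketch of reading off the index numerically (two variables for brevity; central-difference derivatives are an illustrative choice):

```python
def morse_index_2d(U, x=0.0, y=0.0, h=1e-4):
    """Morse index at a critical point: the number of negative eigenvalues of
    the Hessian, estimated here by central differences (2D case for brevity)."""
    uxx = (U(x + h, y) - 2 * U(x, y) + U(x - h, y)) / h ** 2
    uyy = (U(x, y + h) - 2 * U(x, y) + U(x, y - h)) / h ** 2
    uxy = (U(x + h, y + h) - U(x + h, y - h)
           - U(x - h, y + h) + U(x - h, y - h)) / (4 * h ** 2)
    # eigenvalues of the symmetric matrix [[uxx, uxy], [uxy, uyy]]
    tr = uxx + uyy
    s = (((uxx - uyy) / 2) ** 2 + uxy ** 2) ** 0.5
    eigs = (tr / 2 - s, tr / 2 + s)
    return sum(1 for e in eigs if e < 0)

i_min = morse_index_2d(lambda x, y: x ** 2 + y ** 2)      # minimum: l = 0
i_saddle = morse_index_2d(lambda x, y: x ** 2 - y ** 2)   # saddle:  l = 1
i_max = morse_index_2d(lambda x, y: -x ** 2 - y ** 2)     # maximum: l = 2
```

The three quadratic forms above are exactly the n = 2 instances of the Morse normal form (3.16) with l = 0, 1, 2.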
If there are more initial parameters than necessary, their number will decrease after the family is reduced to any of the standard forms of the catastrophes mentioned already. In particular, from


the Morse lemma it follows that, if the critical point is nondegenerate, then for any initial parameters the final expression – a Morse function – does not depend on the parameters at all. If m > 1, the non-Morse part is a polynomial depending on m variables. As an example, we present two catastrophes with m = 2, called the "elliptic umbilic" and the "hyperbolic umbilic," the former taken with the negative sign and the latter with the positive one:

U = y1²y2 ± y2³ + t3y1² + t2y2 + t1y1 + M2,

where M2 = ±y3² ± ⋯ ± yn². In the theory of singularities of differentiable mappings, common approaches to classifying structurally stable families of mappings – and, in particular, of potentials, which are mappings onto the real axis – were developed by H. Whitney, R. Thom, J.N. Mather, V.I. Arnold and other mathematicians. Arnold, in his brochure [10] that quickly became popular, pointed out that the rapid development of catastrophe theory was accompanied by certain costs, explained by the great desire of some scientists to spread its findings to fields of knowledge that are difficult to mathematize. Here is a quote from this book:

(Vladimir Igorevich Arnold (June 12, 1937, Odessa – June 3, 2010, Paris) was a Soviet and Russian mathematician who worked in topology, the theory of differential equations, singularity theory and theoretical mechanics.)

The first information about catastrophe theory appeared in the Western press around 1970. Magazines such as Newsweek reported a breakthrough in mathematics comparable only with Newton's invention of the differential and integral calculus . . . Among the published papers on the theory there are investigations of the stability of ships, the simulation of brain activity and mental disorders, prison riots, the behavior of stock-market players, the effect of alcohol on driving ability, and censorship policy towards erotic literature. In the early 1970s, catastrophe theory fast became a fashionable, well-publicized theory that, with the versatility of its claims, resembles the pseudoscientific theories of the last [nineteenth – Authors' note] century.

Along with the foregoing, it becomes clear that model systems for which structural stability (coarseness) admits a rigorous proof are the most valuable and should be studied first of all. Therefore, wherever a mathematical technique for describing such systems is applicable, the results of catastrophe theory are undoubtedly extremely important.

4 Chaos in Conservative Systems

Before the worlds there were: eternal Chronos and wise Chaos, whose pre-awaiting abyss yawns . . . From Chronos, Chaos took the essence of the Universe . . .
Hymns of Orpheus

Here I have . . . to speak once again on behalf of the broad global fraternity of practitioners of mechanics. We are all deeply conscious today that the enthusiasm of our forebears for the marvellous achievements of Newtonian mechanics led them to make generalizations in this area of predictability which, indeed, we may have generally tended to believe before 1960, but which we now recognize were false. We collectively wish to apologize for having misled the general educated public by spreading ideas about the determinism of systems satisfying Newton's laws of motion that, after 1960, were to be proved incorrect.
Sir James Lighthill

4.1 Determinism and Irreversibility

The discovery of the mechanical laws of motion by Isaac Newton gave rise to revolutionary changes in scientific methodology. As a matter of fact, a universal approach, currently called mathematical physics, to conceptualizing any processes of motion (change) occurring in nature was formulated. This approach offers the possibility of describing quite completely the state of moving objects through numerical quantities found by measurement. With the numerical data, various mathematical relations can be sought. A relation that allows one to accurately predict the outcome of an experiment is a form of representation of the objective laws of nature. If data relating to a preceding point in time enable us to predict results coming later, we have an objective cause-and-effect link. The laws of Newtonian mechanics, illustrating the approach formulated above, make us reckon that there are similar laws applicable to the description of a wide range of processes. Let us present this concept, which now bears the name of Laplacian determinism, in its most radical form, given by Pierre-Simon de Laplace himself [1]:

All events, even those which on account of their insignificance do not seem to follow the great laws of nature, are a result of it just as necessarily as the revolutions of the sun . . . Present events are connected with preceding ones by a tie based upon the evident principle that a thing cannot occur without a cause which produces it. This axiom, known by the name of the principle of sufficient reason, extends even to actions which are considered indifferent; the freest will is unable without a determinative motive to give them birth . . . We ought then to regard the present state of the universe as the effect of its anterior state and as the cause of the one which is to follow.
Given for one instant an intelligence which could comprehend all the forces by which nature is animated and the respective situation of the beings who compose it – an intelligence sufficiently vast to submit these data to analysis – it would embrace in the same formula the movements of the greatest bodies of the universe and those of the lightest atom; for it, nothing would be uncertain and the future, as the past, would be present to its eyes.


Running ahead of the story, it should be noticed that this concept has proved very fruitful. This is reflected in the creation both of the "non-classical" quantum and relativistic mechanics and of the generalized "mechanics" of the electromagnetic field – electrodynamics. All of modern physics actually rests on the predictive force of the appropriate equations. However, a definitive answer to the question of whether these equations are truly "fundamental" still eludes us. Understanding the reason for this situation requires elucidating some properties of the equations of mechanics. To describe cause-and-effect relationships, it suffices to find the parameters of motion at any subsequent moment of time t = t1, knowing the initial conditions at t = t0. The equations of mechanics, however, also allow one to solve the inverse problem – to find the parameters of motion at time t = t0 if we already know them at t = t1 – this being attained through the reversal of time. To clarify the situation, we present a simple example. Let us transfer mechanical momentum to each of a few billiard balls at the moment of time t0 by hitting them with a cue, and make a video recording of the subsequent movement of the balls. Next, before playback, we interchange the beginning and the end. If the recording time is short and friction forces can be neglected, we see a pattern of motion that could take place in reality. Imagine now that a gas filling a vessel consists of identical molecules. They suffer elastic collisions with each other and with the vessel walls. The system under consideration obeys the laws of mechanics and is conservative. Suppose the vessel to be originally divided by a partition into two parts, with all the molecules in one of them. After removal of the partition, the molecules will fill the entire vessel. Is the reverse process – a spontaneous growth of the concentration of the molecules in one part of the vessel – possible?
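The "film played backwards" argument can be reproduced in a few lines: integrate a conservative system with a time-reversible scheme, flip the velocity, and integrate again. A sketch (a harmonic oscillator stands in for the billiard ball; the step size and duration are illustrative choices):

```python
def verlet(x, v, force, dt, n):
    """Velocity-Verlet integration; the scheme is exactly time-reversible
    (up to floating-point roundoff)."""
    for _ in range(n):
        a = force(x)
        x += v * dt + 0.5 * a * dt * dt
        a_new = force(x)
        v += 0.5 * (a + a_new) * dt
    return x, v

force = lambda x: -x                          # conservative restoring force
x0, v0 = 1.0, 0.0
x1, v1 = verlet(x0, v0, force, 0.01, 2000)    # run the "film" forward
xr, vr = verlet(x1, -v1, force, 0.01, 2000)   # flip the velocity, run again
# (xr, -vr) reproduces (x0, v0): the reversed motion is an equally valid motion
```

Because the equations contain time only through second derivatives, the velocity-reversed trajectory retraces the original one exactly.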
Motion of such a character is not prohibited by the laws of mechanics but has never been observed. Typically, this is explained by the fact that, in a system with a large number of molecules, a markedly uneven spatial distribution is a highly improbable event. From this the conclusion suggests itself that, in order to describe mechanical systems consisting of a large number of particles, equations of a special type are needed, allowing unlikely processes to be excluded from consideration. One of the first equations of this type was the Boltzmann kinetic equation [2], derived in 1872. This equation describes motion as the evolution of probability distributions and can be applied to irreversible processes. It has been found that this equation is consistent with the phenomenological equations of thermodynamics, which, in turn, correctly describe the thermodynamic processes observed experimentally. In particular, the Boltzmann equation implies the existence of a state function increasing with the passage of time – the entropy. The questions of whether the replacement of the mechanical equations by kinetic ones is legitimate, and whether the latter can be deduced as an approximation from the mechanical equations, became a source of dramatic debate. Mark Kac in his book [3] managed to capture the atmosphere and essence of these discussions. A quote below from it bears evidence of this.
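The improbability of the reverse process is easy to see in a toy model of free expansion: noninteracting particles started in the left half of a box with reflecting walls spread out and, for all practical purposes, never reassemble. A sketch (the particle number, velocity distribution and random seed are arbitrary illustrative choices):

```python
import random

def left_fraction(t, n=2000, seed=1):
    """n ideal-gas particles start in the left half of [0, 1] with random
    velocities; the walls reflect. Returns the fraction in the left half at time t."""
    rng = random.Random(seed)
    count = 0
    for _ in range(n):
        x0 = 0.5 * rng.random()           # initial position in [0, 0.5)
        v = rng.uniform(-1.0, 1.0)
        # reflecting walls at 0 and 1: unfold the motion onto a circle of length 2
        x = (x0 + v * t) % 2.0
        if x > 1.0:
            x = 2.0 - x                   # fold back into [0, 1]
        count += x < 0.5
    return count / n

f0 = left_fraction(0.0)       # 1.0: all particles on the left
feq = left_fraction(1000.0)   # ~0.5: uniform filling of the vessel
```

Fluctuations around one half are of order 1/√n; a spontaneous return of all n particles to one half is not forbidden, merely astronomically unlikely.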


4 Chaos in Conservative Systems

. . . You probably know that Boltzmann, following people like Clausius and Meyer, tried to explain the behavior of gases on the basis of a mechanistic model. A gas was regarded as a system of a large number of particles, and the laws of mechanics were used to derive the equation of state and other things. The crowning achievement of the work of Boltzmann and of Maxwell is the derivation of the so-called H theorem . . . according to which H [Emphasis added] decreases, or at least does not increase in time. This was a remarkable achievement which particularly pleased Boltzmann because H clearly was some kind of an analog of the negative entropy. It was well known in classical thermodynamics that this function of state has the remarkable property that it never decreases. Here Boltzmann had managed to construct a mechanistic quantity which had a similar behavior. All was well until it was pointed out that this was clearly untenable, since it was in contradiction with mechanics. The objections were crystallized in two paradoxes. One was the reversibility paradox of Loschmidt (around 1876) and the second was the recurrence paradox due to Zermelo and Poincaré. That came a little later, after 1900, I think. The reversibility paradox is the simpler of the two and in a way more basic. It simply says the following: All equations of mechanics are time-reversible. This means that if you perform the transformation t → –t (that is, replace time by minus time) then the equations do not change. This is simply due to the fact that in mechanics all derivatives with respect to time are of second order. There is no way to distinguish the equations written with time going forward and time going backward. Using a more philosophically appealing terminology, there is no mechanical experiment that will tell you which way time is flowing.
Consequently, said Loschmidt, something must be wrong, because this H presumably is a quantity which can be derived from a mechanistic description of the system. But now, if I change t to minus t, the quantity, instead of decreasing, will increase. Thus from a purely reversible model we draw an irreversible conclusion, and something clearly is wrong. It's interesting that in mathematics one contradiction is enough; it is even enough to think that something is wrong, to have doubt. But in physics, you must have several paradoxes before people will believe that something is wrong. So, to provide another one, Zermelo (who at the turn of the century was also interested in the logical foundations of this subject) recalled a theorem of Poincaré. It's a very famous and a very beautiful theorem. It says that any conservative, closed, dynamical system is such that (I will first state it very loosely) if you start from somewhere, then unless you are extremely unlucky in choosing your starting point, you are bound to come back arbitrarily close to the starting point. To put it another way: conservative dynamical systems with finite energy are quasi-periodic. That is, states tend to recur. Consequently, if this quantity H were a mechanical quantity then it would have to oscillate. Starting from a certain value, it would eventually have to come back arbitrarily near to that value. This would certainly contradict the result that H changes in only one direction.

Ludwig Boltzmann (February 20, 1844, Vienna – September 5, 1906, Duino) was an Austrian theoretical physicist, one of the founders of statistical mechanics and molecular-kinetic theory. He obtained a formula for the equilibrium distribution of an ideal gas, formulated the ergodic hypothesis, derived an equation describing kinetic processes, and proved the H-theorem.

4.1 Determinism and Irreversibility


Questions of how to give scientific credence to kinetic equations were in the focus of theoretical physicists throughout the twentieth century. Even today, however, the understanding of many aspects of this problem cannot be considered final. To provide an insight into the modern view of things, we refer the reader to the paper of Oscar Lanford [4], published in 1976, that is, more than a century after the publication of the Boltzmann equation. The author notes that all issues related to the validity of the Boltzmann equation need to be carefully investigated. Knowing the conceptual basis of this equation is not a goal in itself: the equation is actually a prototype of the mathematical structure around which a theory of nonstationary processes in large systems is constructed. In a shortened, or "macroscopic", description, only the partial information found in a macroscopic experiment is taken into account. These macroscopic characteristics are related to others, less accessible to measurement, through the exact equations of motion. Thus, the values of the macroscopic characteristics alone in the present do not uniquely determine their values in the future; that is, the macroscopic characteristics do not evolve in time autonomously. However, one can hope that if the number of particles tends to infinity (in which case a microscopic description is extremely complicated), the influence of the microscopic characteristics on the macroscopic ones is, in the probabilistic sense, approximated by a function depending only on these macroscopic quantities. Therefore, in the limit, the macroscopic quantities themselves determine their evolution in time. As can be seen from the aforesaid, the reasoning concerning the transition from a microscopic to a macroscopic description includes an element of a probabilistic nature. This is really the case; however, the exploitation of the probabilistic approach here is much more subtle than it seems at first glance.
In particular, a point of view is developed there that gives an accurate description of the time evolution of almost all initial states, instead of one in which the Boltzmann equation holds true only on average. Leaving aside the specifics of the methods of statistical physics, we shall focus our attention on the justification of the statistical approach itself. It is extremely important to understand whether there are prerequisites in the mechanical motion itself for rejecting strict determinism and passing to the language of probability theory. One can trace how views in this field were changing by referring to textbooks written by leading experts. Thus, L.D. Landau and E.M. Lifshitz, in the fifth volume of their Course of Theoretical Physics [5], first published in 1938, wrote as follows:

In principle, we can obtain complete information concerning the motion of a mechanical system by constructing and integrating the equations of motion of the system, which are equal in number to its degrees of freedom. But if we are concerned with a system which, though it obeys the laws of classical mechanics, has a very large number of degrees of freedom, the actual application of the methods of mechanics involves the necessity of setting up and solving the same number of differential equations, which in general is impracticable. It should be emphasised that, even if we could integrate these equations in a general form, it would be completely impossible to substitute in the general solution the initial conditions for the velocities and co-ordinates of the particles, if only because of the amount of time and paper that would be needed.



At first sight we might conclude from this that, as the number of particles increases, so also must the complexity and intricacy of the properties of the mechanical system, and that no trace of regularity can be found in the behavior of a macroscopic body. This is not so, however, and we shall see below that, when the number of particles is very large, new types of regularity appear. These statistical laws resulting from the very presence of a large number of particles forming the body cannot in any way be reduced to purely mechanical laws. One of their distinctive features is that they cease to have meaning when applied to mechanical systems with a small number of degrees of freedom. Thus, although the motion of systems with a very large number of degrees of freedom obeys the same laws of mechanics as that of systems consisting of a small number of particles, the existence of many degrees of freedom results in laws of a different kind.

Thus, the cited book chiefly emphasizes the presence of a large number of degrees of freedom in the systems studied by the methods of statistical physics. This circumstance forces a researcher to go over to a "shortened description", taking into consideration the temporal evolution of only a very small part of the variables required for a complete description of the mechanical motion. In other words, the introduction of the probabilistic description is a "fee" paid for the shortcomings of our analytical and computational capabilities.

The book by Resibois and de Leener [6] contains a fresher approach to these issues. They write that numerical experiments clearly showed that the individual trajectories of particles are of no essential significance for studying a system's macroscopic properties. Indeed, these trajectories are extremely complicated to describe; moreover, they are extremely unstable with regard to the smallest changes of the initial conditions. For this reason, the study of individual trajectories does not give substantial scientific information. However, instead of scrutinizing the individual motion of each particle, we may study the system's global properties, which depend on the particles being present in large numbers. These properties are usually quite stable and hardly depend on the accuracy of the initial conditions. During the writing of this book (1972–1974), computers already existed, and the method of numerical simulation in molecular dynamics had been developed. For this reason, the features of the mechanical motion itself, particularly its instability, gained more of the researchers' attention. However, the description of mechanical motion involving a large number of particles still remains a complex task. Next we show that if the phase space dimension of a dynamical system is larger than 2, the system can exhibit unstable motion regimes, "reinforcing" arbitrarily small fluctuations.
To adequately describe these types of motion, it is necessary to use statistical and probabilistic methods; in this case, the passage to the probabilistic description is in no way connected with the presence of a large number of degrees of freedom. Thus, in brief, the theory of dynamical systems with unstable motion (the theory of dynamical stochasticity, or dynamical chaos) creates the necessity of revising certain assertions underlying statistical physics. The main question to be finally clarified is whether the motion occurring in the objectively existing world, not related to our perception, is reversible or irreversible.



Numerous publications by Ilya Prigogine, a Nobel Prize laureate in chemistry, who became the de facto founder of a new paradigm in this field, contain a criticism of the traditional views. A combination of subtle analysis of these problems with an expressive and polemical style of presentation is a characteristic feature of his books. Here are two excerpts from his paper [7]:

It is customary to consider the basic laws of nature as deterministic and time reversible. This is indeed true for classical dynamics which corresponds to point transformations as well as for quantum mechanics which involves the spectral theory of operators. However everywhere around us we observe irreversible processes in which past and future play different roles and time symmetry is broken. This is precisely the content of the second law of thermodynamics. Since Boltzmann this observation has led to unending controversies . . . . Our main reason to consider the problem of irreversibility from a more fundamental point of view came from our work on non equilibrium physics. It is well known that far from equilibrium new space time structures may occur (the "dissipative" structures) . . . . The distance from equilibrium and therefore the arrow of time plays an essential constructive role. Moreover, bifurcations far from equilibrium involve probabilistic processes in contradiction with the deterministic character of traditional laws of physics. Also irreversibility appears not only in connection with thermodynamics but also on many other levels of observation such as radioactivity, spontaneous emission, not to mention the black hole entropy. It is therefore difficult to deny to irreversibility a fundamental dynamical character. Our program was therefore to reformulate the microscopic laws of physics to include probability and time symmetry breaking for well defined classes of systems. . . . We believe that this program has now been largely realized . . .

Ilya Prigogine (January 25, 1917, Moscow – May 28, 2003, Brussels) was a Belgian and American physicist and chemist of Russian origin, a Nobel Prize laureate in chemistry in 1977. His main works concern nonequilibrium thermodynamics and the statistical mechanics of irreversible processes. He proved the theorem of minimum entropy production in an open system and showed the possibility of the formation of dissipative structures.

If we assume that the source of stochasticity in the dynamics is small fluctuations amplified by unstable motion, we need to specify the source of these fluctuations. However, a detailed discussion of these issues requires considering quantum mechanical effects, which is beyond the scope of this book. Without delving into this topic too deeply, we restrict ourselves to simple qualitative reasoning. Let there be, at an initial time, 10,000 atoms of a radioactive isotope with a half-life of 1 s. We will then observe the decay of about 5,000, 2,500, 1,250, etc., atoms during the first, second, third and subsequent seconds.



Having plotted the time dependence of the number of particles, one can see that it is close to an exponential function. However, the experimental points only lie close to the exact exponential curve: there is data scatter due to the random nature of the decay process. The inherent quantum fluctuations are a source of quantum noise and occur whenever spontaneous transitions take place.
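This simple arithmetic (10,000 → ~5,000 → ~2,500 → ...) and the scatter around the exponential are easy to reproduce with a Monte Carlo sketch; the function names and the per-step decay probability of 1/2 below are our own illustrative choices, not taken from the text:

```python
import random

def simulate_decay(n0, steps, p=0.5, seed=1):
    """Monte Carlo decay: each surviving atom decays with probability p per time step
    (p = 1/2 corresponds to a half-life of one step)."""
    rng = random.Random(seed)
    counts = [n0]
    n = n0
    for _ in range(steps):
        n = sum(1 for _ in range(n) if rng.random() >= p)  # count the survivors
        counts.append(n)
    return counts

counts = simulate_decay(10_000, 6)
# counts follows roughly 10000 * (1/2)**k, with random scatter around the exponential
```

Plotting `counts` against the step number shows exactly the picture described above: a nearly exponential decline with points scattered about the ideal curve.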

4.2 Simple Models with Unstable Dynamics

4.2.1 Homoclinic Structure
Stochastization of motion in nonlinear dynamical systems can be caused by the formation of homoclinic structures, discovered by Poincaré. Here we restrict ourselves to a qualitative discussion of this issue. Let a dynamical system in a three-dimensional phase space have a saddle limit cycle C, the latter being the line of intersection of a two-dimensional stable manifold W s and a two-dimensional unstable manifold W u (Fig. 3.8 would be useful to refresh the memory). Phase points in the vicinity of the limit cycle C perform a cyclic movement along trajectories close to it but, in the general case, not closed. Cutting the set of trajectories (non-tangentially) by some plane G, we can define the Poincaré map (Fig. 4.1). The intersection of the plane G with the cycle C produces a saddle point O. In the figure, the lines of intersection of G with W s and W u – the separatrices of O – are denoted Γ s and Γ u. A phase point belonging to one of the manifolds (W s or W u) lies on a phase trajectory that is entirely contained in that manifold. For this reason, if W s and W u intersect not only in C but also in another line A, the latter is also a phase trajectory. Such a trajectory is called homoclinic (the point of intersection of this trajectory with the plane G is denoted H). Since A lies in W s, the trajectory A asymptotically tends to the cycle C as t → +∞, "winding" around it. Its points of intersection with G form a sequence that converges to the saddle point O and belongs to the stable separatrix. Likewise, A lying in W u implies that A tends asymptotically to the cycle C as t → –∞ as well. As a result, there arises another sequence of intersection points with G, convergent to O (in the corresponding limit) and belonging to the unstable separatrix. All the intersection points of the homoclinic trajectory A with the plane G belong to both

Figure 4.1. The Poincaré section G: the saddle point O with separatrices Γ s and Γ u, and the homoclinic point H.



W s and W u; therefore, each of them is a point of intersection of the separatrices. This is possible owing to the fact that the separatrices form a complex structure with a countable set of intersection points, called homoclinic. Consider the phase trajectories that are close to A and pass through the points belonging to the shaded areas in the figure. These trajectories carry out a succession mapping of the shaded areas into each other. It is easy to see that if one puts a phase drop into any of the areas (here a phase drop means a simply connected domain in the plane), it will be deformed under iteration as follows: compression in a certain direction is accompanied by stretching in another direction, linearly independent of the first.

We now show that complex motion takes place inside the set of points in the vicinity of the homoclinic point H. Such motion can be called mixing. In using this term, we proceed from the idea that the set of phase points forms a phase liquid, which is transformed in a continuous manner under the mapping. Confining ourselves to the iterative dynamics generated by the Poincaré mapping F, we argue as follows. Suppose we have chosen a neighborhood S of the point O (Fig. 4.2). Having performed N1 iterations, we get the image F (N1)(S) as a narrow band along Γ u, a part of which appears in the neighborhood of H for a sufficiently large N1. Now, keeping in mind the same neighborhood S, we iterate N2 times using the inverse Poincaré map, which corresponds to step-iterative motion in the negative direction of the time axis. The image F (–N2)(S) ≡ [F (N2)]–1(S) looks like a narrow band along Γ s and, if N2 is sufficiently large, it also includes a portion of the neighborhood of H. If F (N1)(S) ∩ F (–N2)(S) ≠ ∅, the transformation F (N1+N2) maps a certain set of points Σ ⊂ F (N1)(S) ∩ F (–N2)(S), which belong to the neighborhood of H. In this case, close points must diverge. Since the set G is bounded, mixing must occur. Perturbed by arbitrarily weak fluctuations, such motion becomes indistinguishable from random motion.

Figure 4.2. The images F (N1)(S) and F (–N2)(S) of a neighborhood S of the point O: narrow bands stretched along Γ u and Γ s that overlap near the homoclinic point H.



4.2.2 The Anosov Map
Consider another map generating mixing in a phase space. We define a mapping Y explicitly by the formulas

p_{N+1} = p_N + x_N (mod 1),
x_{N+1} = p_N + 2x_N (mod 1).

It can be assumed that this mapping, defining a dynamical system with discrete time, is a succession mapping for a system with continuous time (although such an assumption is not necessary). The system described by the mapping Y, often referred to in the literature as the "cat map", is treated as a dynamical Anosov system. It is worth making a remark concerning the name "cat map": it appeared in the literature after the publication of a book illustration similar to that shown in Fig. 4.3. As can be seen, the mapping is built from a linear transformation by adding the operation mod 1, which here can be understood as the removal of the integer part: b mod 1 = b – [b]. It is not hard to verify that the mapping Y preserves area, because the determinant of the transformation matrix is equal to unity. The transformed domain being periodic in each coordinate, it is assumed to be the surface of a torus.

The mapping Y is classified as hyperbolic. This means that the neighborhood of any point on the torus is transformed like a neighborhood of a saddle point: a phase drop (in our case, any small area on the torus) stretches along one direction and compresses along the other. The stretch and compression coefficients are easy to find by solving the eigenvalue problem for the two-dimensional matrix that defines the mapping. These coefficients are

k1 = (3 + √5)/2,  k2 = (3 – √5)/2.

We can check once more that the phase volume remains the same: k1 ⋅ k2 = 1. Since the slopes of the stretch/compression directions are irrational numbers, in the limit N → ∞ the image of a phase drop densely covers the whole torus

Figure 4.3. Successive images of a region S (the cat's portrait) under the mapping: S, Y(S), Y (2)(S).



surface. This implies that the image of any phase drop, obtained by performing a sufficiently large number of iterations, has points in common with any arbitrarily small ε-neighborhood of any point on the torus. It is rigorously proved that the dynamics generated by the mapping Y is ergodic, that is, time averages can be computed as averages (with some distribution function) over the phase space.

Let us now consider the question of the reversibility of the iterative dynamics. Since the matrix that defines the mapping Y is non-degenerate, the exact mapping Y is reversible. Hence, having Y (N)(S) – the result of an N-fold iteration of the direct transformation applied to any configuration of phase points S (e.g., the portrait of a cat) – we can reconstruct the original by applying the inverse transformation N times. However, it is clear that if small random fluctuations distort the fine structure of the image Y (N)(S), the original fails to regain its former shape after the reverse transformations. We conclude that the impact of random fluctuations on the mixing dynamics is that a forgetting of the initial conditions occurs, which leads to irreversibility.
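A minimal numerical sketch of the mapping Y (the function names are ours) checks two of the statements above: the stretch and compression coefficients multiply to unity, and, in the absence of noise, N inverse steps exactly undo N direct steps (up to floating-point error):

```python
import math

def cat(p, x):
    """One step of the mapping Y: p' = p + x (mod 1), x' = p + 2x (mod 1)."""
    return (p + x) % 1.0, (p + 2 * x) % 1.0

def cat_inv(p, x):
    """Inverse step, from the inverse matrix [[2, -1], [-1, 1]] (det = 1)."""
    return (2 * p - x) % 1.0, (-p + x) % 1.0

# stretch/compression coefficients: eigenvalues of the matrix [[1, 1], [1, 2]]
k1 = (3 + math.sqrt(5)) / 2
k2 = (3 - math.sqrt(5)) / 2

p, x = 0.2, 0.7
for _ in range(10):
    p, x = cat(p, x)        # forward iteration stretches and wraps the phase drop
for _ in range(10):
    p, x = cat_inv(p, x)    # the exact inverse reconstructs the original point
```

Adding even a tiny random perturbation inside the forward loop destroys this reconstruction, which is precisely the mechanism of "forgetting" the initial conditions discussed above.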

4.2.3 The Tent Map
Mappings that, like the Anosov map above, realize mixing are often referred to as baker's transformations. The transformations generated by such mappings are named after the kneading operation that bakers apply to dough: the dough is cut in half, the two halves are stacked on one another and compressed, and the operation is repeated until the dough becomes homogeneous. Along with multidimensional mappings, which can simultaneously be mixing, reversible, and phase-volume preserving, it is useful to examine simple one-dimensional maps of a segment of the real axis into itself. In spite of not being reversible, the mixing one-dimensional mappings are simple enough for their properties to be characterized by analytical methods. This gives the possibility of understanding the nature of the mixing process more deeply. Consider the piecewise linear tent map defined by the following equations:

x_{N+1} = f_Δ(x_N),  f_Δ(x) = { 2rx, 0 < x ≤ 1/2;  2r(1 – x), 1/2 < x < 1.  (4.1)

The variable N plays the role of discrete time. The dynamics generated by this mapping can be visualized by the graphical construction shown in Fig. 4.4. Having selected an initial value x0 on the abscissa, we draw a vertical line to its intersection with the graph of f_Δ(x) and then a horizontal line to the ordinate axis. As a result, we obtain the point on the ordinate axis with coordinate x1 = f_Δ(x0).

Figure 4.4. Graphical iteration of the tent map: (a) r < 1/2, (b) r > 1/2, (c) the initial distance δ1 doubles at each iteration, becoming δ2 after two iterations.

Now we perform the same steps in reverse order, using the bisector of the first quadrant as the graph of a function: the point x1 is carried back to the abscissa. Repeating the construction, we find the sequence x0 → x1 → x2 → ⋅⋅⋅. The polygonal chain constructed in this way is called the Lamerey staircase. In describing discrete dynamics, this piecewise linear curve plays the same role as the phase trajectory does for an ordinary continuous-time dynamical system.

The positions of the stationary points are easy to determine graphically: they are the points where the graph of the function defining the mapping intersects the bisector. If r < 1/2, there is only one stationary point, x = 0, and it is asymptotically stable. Any staircase-shaped trajectory tends to it as N → ∞ (Fig. 4.4a). If r > 1/2, there are two stationary points, both unstable. The iterated mapping then generates an irregular (chaotic) sequence of points, as is easily seen by computation (Fig. 4.4b).

For r > 1/2, the dynamics has an important feature: paths coming out of arbitrarily close initial points move apart. Figure 4.4c illustrates that the initial distance δ1 between the phase points doubles at each iteration, turning into δ2 = 2²δ1 after two iterations. This indicates that the motion is unstable as a whole: mixing and a loss of information about the position of the initial point take place. Below we introduce some notions that give the above statements a strict mathematical sense.

4.2.3.1 Lyapunov Exponent
Generating two iteration sequences that start from two close initial points, we can follow the distance between points with the same numbers. If Δx0 = x0(2) – x0(1) is a small quantity, the distance Δx_N changes by a certain factor at each iteration N → N + 1; in other words, it is multiplied by a certain stretch/compression coefficient.
From this it follows that during iteration the distance between points of the initially close paths varies according to a geometric progression (exponentially):

|f (N)(x0 + ε) – f (N)(x0)| ≈ ε e^{Nλ(x0)},

where f (N)(x) = f(f(⋅⋅⋅f(x)⋅⋅⋅)). This equality is not exact because the conversion factor is not constant, but it qualitatively reflects the behavior of the initial deviation.


In Section 3.1, we showed that a special numerical characteristic – the Lyapunov exponent – can be introduced as a measure of the rate of exponential divergence (convergence) of nearby trajectories. In our simple case of a one-dimensional mapping, it is determined by the formula

λ(x0) = lim_{N→∞} (1/N) lim_{ε→0} ln [ |f (N)(x0 + ε) – f (N)(x0)| / ε ] = lim_{N→∞} (1/N) ln |df (N)(x0)/dx0|.

Applying the rule for differentiating a composite function, we can write

df (N)(x0)/dx0 = df(f(⋅⋅⋅f(x0)⋅⋅⋅))/dx0 = ∏_{k=0}^{N–1} df(x_k)/dx_k,   x_k = f (k)(x0).

It follows that

λ(x0) = lim_{N→∞} (1/N) ln ∏_{k=0}^{N–1} |df(x_k)/dx_k| = lim_{N→∞} (1/N) Σ_{k=0}^{N–1} ln |df(x_k)/dx_k|.

For the tent mapping, the derivative is equal to the angular coefficient of the corresponding line segment of the graph, and the moduli of these coefficients coincide. Thus

df_Δ(x)/dx = ±2r  ⇒  λ(x0) = ln(2r),
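This result can be checked numerically. In the sketch below (the function names and parameter values are our own choices), λ is estimated from the divergence of two nearby orbits of the tent map, with the separation renormalized at every step; steps at which the pair straddles the kink at x = 1/2 are skipped, since there the linear stretching estimate does not apply:

```python
import math

def tent(x, r):
    """Tent map (4.1): f(x) = 2rx for x <= 1/2, 2r(1 - x) for x > 1/2."""
    return 2 * r * x if x <= 0.5 else 2 * r * (1 - x)

def lyapunov_estimate(r, x0=0.2371, n=5000, d0=1e-9):
    """Average the per-step logarithmic stretching of a small separation d0."""
    x, total, counted = x0, 0.0, 0
    for _ in range(n):
        y = x + d0                       # restart a companion orbit nearby
        if (x - 0.5) * (y - 0.5) > 0:    # both points on the same linear branch
            total += math.log(abs(tent(y, r) - tent(x, r)) / d0)
            counted += 1
        x = tent(x, r)
    return total / counted

r = 0.9
lam = lyapunov_estimate(r)   # theory: lambda = ln(2r), since |f'(x)| = 2r everywhere
```

The estimate agrees with ln(2r) because the tent map's slope has the same modulus on both branches; for a map with a varying derivative the same two-orbit procedure would average the local stretching factors along the orbit.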

so that λ(x0) > 0 for r > 1/2.

4.2.3.2 Invariant Distribution
In statistical physics and thermodynamics, a fundamental role is played by the hypothesis that motion at the molecular level possesses the ergodic property: a phase point visits a given region of the phase space with a certain frequency, which allows time averages to be replaced by averages calculated with a distribution function. We show that the dynamics generated by the one-dimensional tent mapping is (at least in one particular case) ergodic. Consider the mapping x_{N+1} = f(x_N) and represent the average of a function g(x) over discrete time as an average over the coordinate:

⟨g⟩ = lim_{N→∞} (1/N) Σ_{k=0}^{N–1} g(x_k) = ∫₀¹ [ lim_{N→∞} (1/N) Σ_{k=0}^{N–1} δ(x – f (k)(x0)) ] g(x) dx.

This writes the time average (so far only formally) as an averaging procedure containing a distribution function:

⟨g⟩ = ∫₀¹ ρ(x) g(x) dx,   ρ(x) = lim_{N→∞} (1/N) Σ_{k=0}^{N–1} δ(x – f (k)(x0)).



This does not yet prove ergodicity, because ρ(x) so defined is not an ordinary probability density function but is expressed in terms of the generalized function δ(x). However, we will later obtain an equation for ρ(x) that has an ordinary solution as well. By direct substitution, it is not hard to verify that the function ρ(x) defined by the above expression satisfies the Frobenius–Perron equation:

ρ(y) = ∫₀¹ δ(y – f(x)) ρ(x) dx.  (4.2)

The meaning of this equation is easy to understand if one recalls the formula from probability theory that relates the distribution functions of random variables x and y, one of which is a function of the other: y = f(x). To derive this formula, it suffices to write the average of, say, yⁿ in two ways, averaging either over y or over x:

∫₀¹ yⁿ ρ̃(y) dy = ∫₀¹ [f(x)]ⁿ ρ(x) dx = ∫₀¹ yⁿ ( ∫₀¹ δ(y – f(x)) ρ(x) dx ) dy.

Here we assume that x ∈ [0, 1] and y ∈ [0, 1]. As can be seen, functionally related random variables and their probability densities are connected by the equation

y = f(x)  ⇔  ρ̃(y) = ∫₀¹ δ(y – f(x)) ρ(x) dx.  (4.3)

Comparing eq. (4.2) with this integral relation, in which ρ̃ and ρ are now identified, we arrive at the conclusion: the Frobenius–Perron equation means that the function ρ(x) is invariant under the action of the mapping y = f(x).

We show that the tent mapping has an invariant distribution function (and hence is ergodic) for r = 1. Substituting function (4.1) with r = 1 into the Frobenius–Perron equation, the latter can be written in the simple form

ρ(x) = (1/2) [ ρ(x/2) + ρ(1 – x/2) ].

By direct substitution, it is easy to convince oneself that this equation has the solution ρ(x) ≡ 1. We can demonstrate that this solution is the only one. To this end, we write down the transformation defined by the right-hand side of eq. (4.3) for the function (4.1) with r = 1 and apply it n times to an arbitrary initial distribution ρ₀(x):

ρ_n(x) = (1/2ⁿ) Σ_{j=1}^{2^{n–1}} [ ρ₀( x/2ⁿ + (j – 1)/2^{n–1} ) + ρ₀( j/2^{n–1} – x/2ⁿ ) ].  (4.4)



Passing to the limit n → ∞, we obtain an invariant (a "fixed point") of the mapping ρ_n(x) → ρ_{n+1}(x), that is, a solution of the Frobenius–Perron equation. Next, as can readily be observed, the limiting transition turns the right-hand side of eq. (4.4) into the integral sums of definite integrals. Assuming the function ρ₀(x) to be normalized to unity, we have

ρ(x) = lim_{n→∞} ρ_n(x) = (1/2) [ ∫₀¹ ρ₀(y) dy + ∫₀¹ ρ₀(y) dy ] = 1.

Ultimately, we have shown that the tent mapping (4.1) for r = 1 is ergodic in the following sense: the iterative sequence xN , N = 1, 2 . . . , generated by this mapping is such that the result of averaging over the “time” N coincides with the result of averaging with a uniform distribution.
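This convergence is easy to observe numerically. The sketch below (our own construction, not from the text) applies the Frobenius–Perron operator of the r = 1 tent map, (Tρ)(x) = (1/2)[ρ(x/2) + ρ(1 – x/2)], n times to an arbitrary smooth initial density and checks that ρ_n approaches the uniform density ρ ≡ 1:

```python
def fp_iterate(rho, n):
    """Return T^n(rho) as a nested closure; each evaluation costs 2**n calls of rho."""
    if n == 0:
        return rho
    prev = fp_iterate(rho, n - 1)
    # Frobenius-Perron operator of the r = 1 tent map
    return lambda x: 0.5 * (prev(x / 2) + prev(1 - x / 2))

rho0 = lambda x: 6 * x * (1 - x)   # an arbitrary initial density, normalized to 1
rho_n = fp_iterate(rho0, 15)       # by eq. (4.4), rho_n -> 1 as n grows
values = [rho_n(x) for x in (0.1, 0.3, 0.5, 0.7, 0.9)]
```

With n = 15 the values already agree with the uniform density to a few parts in 10⁴, in line with the Riemann-sum argument leading to the limit above; applying the operator once to ρ ≡ 1 returns ρ ≡ 1 exactly, confirming invariance.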

4.2.4 The Bernoulli Shift
Let us get acquainted with a one-dimensional discontinuous piecewise linear map, called the Bernoulli shift:

x_{N+1} = f_B(x_N),  f_B(x) = { 2x, 0 < x ≤ 1/2;  2x – 1, 1/2 < x < 1.  (4.5)

The graph of the function defining this map is shown in Fig. 4.5a. Using a computer, it is easy to verify that the tent map and the Bernoulli shift generate iterative sequences similar in type. Let us explain the geometric meaning of these maps. In both cases the transformation is as follows: the interval [0, 1] is divided in half, each half is stretched to twice its length, and the resulting segments are superimposed on one another. The difference is that the tent map folds the segment at the middle, so that one half is turned over, whereas the Bernoulli shift cuts the segment at the middle and superimposes the halves on each other with no overturn (Fig. 4.5b).

The Bernoulli shift admits the following interpretation. Assign to each point of the interval [0, 1] a real number written in binary form with zero integer part. We choose one such number and act on it with the Bernoulli transformation. Note that the interval [0, 1/2) contains the points whose first digit after the "binary point", a1, is zero, while on [1/2, 1] this digit is unity. In accordance with definition (4.5), a number of either type is first multiplied by 2, which corresponds to a single shift of the "binary point" to the right. After the multiplication, numbers with a1 = 0 have zero to the left of the "binary point"; numbers with a1 = 1 have unity there, which is removed by subtracting 1. As a result,


4 Chaos in Conservative Systems

Figure 4.5: (a) the graph of the function f_B(x); (b) two-fold stretching of the halves of the interval and their superimposing (with no overturn).

the conversion of any number x₀ is as follows: the position to the right-hand side of the "binary point" is emptied, after which the chain a₂a₃… moves one step leftward:

\[
x_0 = \sum_{r=1}^{\infty} a_r 2^{-r} = (0.\,a_1 a_2 a_3 \cdots)_2,
\]
\[
x_1 = f_B(x_0) =
\begin{cases}
2x_0, & a_1 = 0, \\
2x_0 - 1, & a_1 = 1
\end{cases}
\;=\; (0.\,a_2 a_3 a_4 \cdots)_2.
\]

Let the bit positions a₁a₂⋯a_{k−1} of the binary numbers x₀ and x₀′ be filled identically, and only the position a_k carry different digits. Choosing k large enough, we can make these numbers as close to each other as we need. However, k iterative steps yield numbers that differ by ∼1. We have thus demonstrated that, in the case of unstable motion, the phenomenon of "diverging trajectories" is deeply connected with the properties of the set of real numbers. In number theory, the following result is proved: the binary expansion of "almost every" irrational number in the interval [0, 1] (i.e., of every number outside a special set of measure zero) includes any finite sequence of zeros and ones an infinite number of times. It is not difficult to conclude that the numbers of the iterative sequence x₁, x₂, …, generated by the Bernoulli shift (provided that x₀ is a typical irrational number) visit an arbitrarily small neighborhood of any point of the segment [0, 1] an arbitrarily large number of times. It follows that the iterative dynamics generated by the Bernoulli shift has the ergodicity and mixing properties. Let us look at these findings from several different points of view.

From the Perspective of "Strict" Mathematics. Dealing with either continuous dynamical systems described by differential equations or discrete systems described by mappings, we remain within a deterministic framework: given initial conditions uniquely determine the position of the phase point at any time. In the case of mechanical motion in conservative systems, the differential equations and their corresponding mappings describe reversible dynamics.
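The divergence of initially close points under the Bernoulli shift is easy to see directly. The following Python sketch (illustrative, not part of the original text; exact rational arithmetic is used so that no rounding intrudes) takes two numbers agreeing in their first 19 binary digits and tracks the gap between their orbits:

```python
from fractions import Fraction

def bernoulli_shift(x):
    """One step of the Bernoulli shift x -> 2x mod 1, in exact arithmetic."""
    y = 2 * x
    return y - 1 if y >= 1 else y

# Two points agreeing in the first 19 binary digits and differing in the 20th.
x, xp = Fraction(1, 3), Fraction(1, 3) + Fraction(1, 2**20)

diffs = {}
for n in range(20):
    if n in (0, 10, 19):
        diffs[n] = abs(x - xp)
    x, xp = bernoulli_shift(x), bernoulli_shift(xp)

for n, d in diffs.items():
    print(n, float(d))   # the gap roughly doubles each step, reaching 0.5 at n = 19
```

Twenty iterations turn a difference of 2⁻²⁰ into a difference of order unity, exactly as the bit-shift picture predicts.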


In Terms of Describing Physical Processes. Systems whose motion provokes a rapid divergence of initially close trajectories act as "amplifiers" of inherent fluctuations and therefore, strictly speaking, exhibit random behavior. In this context, experimental results are impossible to reproduce, because the law of motion of the random disturbances is unknown and unrepeatable.

As Regards Computational Mathematics. If an initial condition is assigned as a number of limited accuracy, then after some time a computer gives results that are determined by the features of its internal arithmetic. In effect, an "amplification of fluctuations" caused by the limited accuracy of the calculations occurs. Since the set of machine-representable numbers is finite, the computed sequence is always eventually periodic.
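The last remark can be seen in practice. In binary floating point, doubling is exact and the "mod 1" subtraction loses one stored bit per step, so the computed orbit of the Bernoulli shift degenerates. A short Python sketch (the starting value 0.1 is illustrative):

```python
# Iterate x -> 2x mod 1 in double precision.  Doubling shifts the 53-bit
# significand left, so the orbit must hit exactly 0 once all stored bits
# have been shifted out -- for x0 = 0.1 this happens after 55 steps.
x = 0.1   # not exactly representable in binary
n = 0
while x != 0.0:
    y = 2.0 * x
    x = y - 1.0 if y >= 1.0 else y
    n += 1
print(n)
```

A chaotic map thus collapses, on a computer, to an eventually periodic (here, eventually zero) orbit: the computed iterates differ from the exact ones after a few dozen steps.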

4.3 Dynamics of Hamiltonian Systems Close to Integrable

4.3.1 Perturbed Motion and Nonlinear Resonance

Let us refer to mechanical systems whose motion is described by Hamilton's equations. As shown in Section 3.1, Hamiltonian dynamics preserves the phase volume. It is worth reemphasizing that an n-degrees-of-freedom Hamiltonian system that is describable by a set of 2n first-order equations of motion, has n functionally independent integrals of motion F₁, …, F_n ({F_i, F_j} = 0, where {⋅, ⋅} is the Poisson bracket and i, j = 1, …, n) and performs finite motion can, at least in principle, be integrated in quadratures. Such a system is called integrable and admits a description in terms of n pairs of canonically conjugate "action-angle" variables. Both the Hamiltonian and the integrals of motion can be expressed through the actions (generalized momenta) I_j:

\[
H = H(F_1, \ldots, F_n) = H(I_1, \ldots, I_n), \qquad
F_j = F_j(I_1, \ldots, I_n), \qquad j = 1, \ldots, n.
\]

Note that the system considered is assumed to be autonomous. The equations of motion and their solutions have the form

\[
\dot{I}_j = -\partial H/\partial \theta_j = 0, \quad I_j = \mathrm{const}
\;\Rightarrow\;
\dot{\theta}_j = \partial H/\partial I_j = \omega_j(I_1, \ldots, I_n), \quad
\theta_j = \omega_j(I_1, \ldots, I_n)\, t + \theta_j^{0}.
\]

Thus, in the general case, the motion is conditionally periodic (quasi-periodic), and the phase trajectories lie on the surfaces of n-dimensional invariant tori in the 2n-dimensional phase space. Note that although a phase trajectory cannot leave the surface of a specific torus, no limiting attracting tori arise in the given case: any point not belonging to this torus is an element of another torus. Particular cases of such dynamical systems are discussed in Chapter 2. The laws of motion of integrable systems are too "simple" for dynamic chaotization to be observed. The latter occurs when the integrability conditions are violated.
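The quasi-periodic character of the angle variables can be probed numerically. The Python sketch below (the frequencies, time step and sample count are illustrative assumptions) samples an orbit θ_j(t) = ω_j t on a 2-torus with an irrational frequency ratio and records how close it returns to its starting point:

```python
import math

# Quasiperiodic angle dynamics theta_j(t) = omega_j * t (mod 2*pi) on a
# 2-torus with an (assumed) irrational frequency ratio omega2/omega1.
w1, w2 = 1.0, math.sqrt(2.0)
two_pi = 2.0 * math.pi

def circle_dist(phi):
    """Distance from the angle phi to 0 on the circle of circumference 2*pi."""
    r = phi % two_pi
    return min(r, two_pi - r)

closest = float("inf")
for n in range(1, 200001):
    t = 0.5 * n
    d = math.hypot(circle_dist(w1 * t), circle_dist(w2 * t))
    closest = min(closest, d)
print(closest)  # the orbit returns arbitrarily close to its start, yet never exactly
```

The minimal return distance keeps decreasing as the observation time grows: the winding never closes, yet it fills the torus densely.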


Keeping in mind the typical problems that emerge in various mathematical and physical applications, and for the sake of the greatest simplicity, we restrict ourselves to the case when the main system is integrable but loses this property when perturbed. In what follows, we are interested in the question: how does the perturbation affect the nature of the motion, and can it cause both the destruction of the invariant tori and the chaotization (stochastization) of the dynamics? As follows from the Poincaré–Bendixson theorem, nontrivial dynamic behavior becomes possible if the phase space dimension is larger than 2. For Hamiltonian systems, whose phase space dimension equals twice the number of degrees of freedom, there is a very simple conceptual model: it includes two degrees of freedom and a trajectory lying on the surface of a two-dimensional torus in four-dimensional space. However, to simplify the problem, we can consider a single-degree-of-freedom system under the unidirectional impact of another system of the same type. Then, in fact, we arrive at the problem of the motion of a nonlinear oscillator driven by a periodic external force. Let the perturbation be weak. The Hamiltonian can then be written as (the Chirikov model)

\[
H = H_0(I) + \varepsilon V(I, \theta, t), \qquad V(I, \theta, t) = V(I, \theta, t + T),
\]

ε ≪ 1, V_{1,−1} = |V_{1,−1}| e^{iφ} = V̄₀ e^{iφ}. For a more detailed analysis of the system, we make a number of simplifying approximations. Let I = I₀ be the value of the action for which the resonance condition is satisfied exactly: ω(I₀) = ω_ext. We expand the right-hand sides of the Hamiltonian and eqs (4.10) in powers of the deviation δI = I − I₀. In addition, we make the replacement V̄₀(I) → V̄₀(I₀) = V₀ and omit the perturbation term on the right-hand side of the equation for the angular variable. Now expressions (4.10) take the form

\[
H = H_0(I_0) + \omega(I_0)\,\delta I + \tfrac{1}{2}\omega_0'\,\delta I^2 + \varepsilon V_0 \cos\psi,
\qquad
\dot{\delta I} = \varepsilon V_0 \sin\psi, \quad \dot{\psi} = \omega_0'\,\delta I,
\]

where ω₀′ = dω(I)/dI at I = I₀. Combining the obtained equations of motion, it is easy to verify that in the end we get a pendulum equation:

\[
\ddot{\psi} - \varepsilon V_0 \omega_0' \sin\psi = 0.
\]

Note that the replacement of variables (I, θ) → (I, ψ), ψ = θ − ω_ext t + φ, has the meaning of a transition to a rotating coordinate system in the phase plane. With this in mind, let us analyze the phase portrait of the system (Fig. 4.6a). The dotted line represents an unperturbed trajectory, where I = I₀ = const, θ = ω(I₀)t and ψ = const: in the variables (I, θ), the phase point rotates at a constant speed; in the variables (I, ψ), it is at rest. "Switching on" the interaction leads to the appearance of two fixed points, the center c and the saddle s, as well as separatrices, shown by the bold curve. Phase points of the region lying between the inner and outer separatrices and containing c belong to the "captured" trajectories. Their motion (in the rotating frame) is accompanied by a periodic (reciprocating) change of the angle ψ. Phase points lying inside the region that contains the point o and is beyond the areas enclosed by the separatrices belong to the "flyby" trajectories. The

Figure 4.6: (a) the phase portrait of a nonlinear resonance in the rotating frame, with the center c, the saddle s, the point o and the separatrices; (b) a "necklace" of cells formed by the separatrices.

angle ψ monotonically increases as these phase points move. As can be seen from the figure, this fragment of the phase portrait lies in the phase plane with the coordinate system (I, θ). Because θ = ψ + ω_ext t − φ, it rotates with the constant angular velocity ω_ext. Note that the trajectories and separatrices depicted are not to scale: if ε is small, the separatrices are located close to the dotted line. The longest distance δI_max between them in the radial direction is far less than the dotted circle's radius I₀. Let us estimate the value of δI_max. The phase curves and separatrices in Fig. 4.6a are level curves of the energy surface. The latter is defined by the Hamiltonian written in the rotating coordinate system:

\[
\bar{H}(\delta I, \psi) = H - \omega_{\mathrm{ext}}\,\delta I
= \tfrac{1}{2}\omega_0'\,\delta I^2 + \varepsilon V_0 \cos\psi.
\tag{4.11}
\]

When moving along the separatrix, the kinetic energy vanishes where the potential energy is maximal (cos ψ = 1). This condition is satisfied if the total energy is H̄_sep = εV₀. Equating this energy value to the right-hand side of eq. (4.11) with cos ψ = −1 (the potential energy is minimal), we obtain the estimate

\[
\delta I_{\max} \sim 2 \left( \varepsilon V_0 / |\omega_0'| \right)^{1/2}.
\tag{4.12}
\]
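Estimate (4.12) can be checked against a direct integration of the resonance equations δİ = εV₀ sin ψ, ψ̇ = ω₀′ δI. In the Python sketch below the parameter values εV₀ = 0.01 and ω₀′ = 1 are assumed for illustration; a trajectory launched just inside the separatrix should swing out to |δI| ≈ 2(εV₀/|ω₀′|)^{1/2}:

```python
import math

# Assumed parameters: eps*V0 = 0.01, omega0' = 1 (illustrative values).
epsV0, w0p = 0.01, 1.0

# Start near the saddle (psi = 0), just inside the separatrix.
dI, psi = 0.0, 0.01
dt, peak = 0.01, 0.0
for _ in range(200000):         # symplectic (semi-implicit) Euler steps
    dI += dt * epsV0 * math.sin(psi)    # dI'  = eps*V0 * sin(psi)
    psi += dt * w0p * dI                # psi' = omega0' * dI
    peak = max(peak, abs(dI))

print(peak, 2.0 * math.sqrt(epsV0 / w0p))  # numerical peak vs estimate (4.12)
```

The recorded peak agrees closely with the analytical half-width of the resonance cell.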

We have considered the simplest case, when equality (4.9) is satisfied for the minimum values of the integer constants. Nonlinear resonance can also occur under more general conditions. Suppose, for example, that l₀ = −1 but k₀ is not equal to unity. Then we can again reduce the equations of motion to the pendulum equation, putting ψ = k₀θ − ω_ext t + φ. Now it can be shown that near some circles, which are the lines of cross sections of invariant tori, there appear "necklaces" of cells formed by the separatrices. This structure includes new centers and saddle points (Fig. 4.6b). Each of the cells contains inside itself closed contours that are cross sections of the invariant tori generated by the first-order nonlinear resonance. Along some of these contours, new "necklaces" corresponding to the second-order nonlinear resonances can form.


The above process continues indefinitely, creating a hierarchical, approximately self-similar structure. It is a kind of fractal. Next, we look at some examples based on numerical results obtained using a universal mapping.

4.3.2 The Zaslavsky–Chirikov Map

The description of the structure of the phase portrait given in the previous section is not complete. If the magnitude of the disturbance exceeds a certain threshold value, regular motion, in which the trajectories lie on tori, is replaced by chaotic motion. Numerical calculations show that the regions of chaotic dynamics emerge near the separatrices (stochastic separatrix destruction).

George Moiseevich Zaslavsky (May 31, 1935, Odessa – November 25, 2008, New York) was a Soviet and Russian physicist. His major works are devoted to plasma physics, the theory of dynamical systems, dynamical chaos, and the application of fractional integro-differentiation to physical processes.

Boris Valerianovich Chirikov (June 6, 1928, Orel – February 12, 2008, Novosibirsk, Russia) was a Soviet and Russian physicist, the founder of the theory of dynamical chaos in classical and quantum Hamiltonian systems; he also worked on the foundations of statistical physics.

In order to give a qualitative explanation of this phenomenon, let us look into the behavior of the phase point near the separatrix. If the energy of the system is such that the point moves exactly along the separatrix, its velocity in the neighborhood of the saddle falls to zero. In other words, it takes an infinite time for the phase point to reach the position of the saddle point. If the trajectory is close to the separatrix, the motion of the phase point can be divided into two stages:

(a) The phase point moves along the separatrix far from the saddle point; here δI is virtually unchanged (the adiabatic stage).
(b) The phase point moves near the saddle point; here δI changes significantly (the nonadiabatic stage).

The above observation laid the foundation for an approximate approach developed by G.M. Zaslavsky, B.V. Chirikov and their collaborators. They suggested regarding the time interval during which the motion is not adiabatic as infinitesimal. In this case, to build a mathematical model, it is necessary to take a Hamiltonian in which the disturbance is periodically time dependent and contains a sum of δ-functions. With the perturbation "switched on" at regular intervals, we can write the Hamiltonian in the form

\[
H = H_0(I) + \varepsilon T\, V(I, \theta) \sum_{k=-\infty}^{\infty} \delta(t - kT).
\]

The corresponding equations of motion are

\[
\dot{I} = -\varepsilon T \frac{\partial V}{\partial \theta} \sum_{k=-\infty}^{\infty} \delta(t - kT),
\qquad
\dot{\theta} = \omega(I) + \varepsilon T \frac{\partial V}{\partial I} \sum_{k=-\infty}^{\infty} \delta(t - kT).
\tag{4.13}
\]

We integrate these equations as follows. Let the δ-shaped "kicks" occur at the moments of time t₀ and t₀ + T. Within the interval (t₀ + 0, t₀ + T − 0), when the perturbation is switched off, the action does not change and the angle grows by a fixed increment:

\[
I(t_0 + T - 0) = I(t_0 + 0) = \bar{I},
\qquad
\theta(t_0 + T - 0) = \theta(t_0 + 0) + \omega(\bar{I})\, T.
\]

The change of the variables due to the perturbation is described by the relations

\[
I(t_0 + 0) - I(t_0 - 0) = \int_{t_0 - 0}^{t_0 + 0} \dot{I}\, dt
= -\varepsilon T \frac{\partial V(I, \theta)}{\partial \theta},
\qquad
\theta(t_0 + 0) - \theta(t_0 - 0) = \int_{t_0 - 0}^{t_0 + 0} \dot{\theta}\, dt
= \varepsilon T \frac{\partial V(I, \theta)}{\partial I}.
\]

Joining the solutions, we obtain an evolution map:

\[
I_{N+1} = I_N - \varepsilon T \frac{\partial V(I_N, \theta_N)}{\partial \theta_N},
\qquad
\theta_{N+1} = \theta_N + T\omega(I_{N+1}) + \varepsilon T \frac{\partial V(I_N, \theta_N)}{\partial I_N}.
\]


We simplify the expressions obtained, assuming that the perturbation does not depend on the action, that is, V = V(θ):

\[
I_{N+1} = I_N - \varepsilon I_0\, g(\theta_N),
\qquad
\theta_{N+1} = \theta_N + T\omega(I_{N+1}),
\qquad
I_0\, g(\theta) = T \frac{\partial V(\theta)}{\partial \theta}.
\]

Finally, taking the perturbation to contain only the first harmonic, g(θ) = sin θ, we obtain the map in its simplest form:

\[
I_{N+1} = I_N - \varepsilon I_0 \sin\theta_N,
\qquad
\theta_{N+1} = \theta_N + T\omega(I_N) - W \sin\theta_N,
\tag{4.14}
\]

where W = εI₀T dω(I)/dI. Thus, we have deduced a two-dimensional map that is called the universal Zaslavsky–Chirikov map.
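A minimal implementation of map (4.14) takes only a few lines. In the Python sketch below the linear frequency law ω(I) = I and all parameter values are illustrative assumptions:

```python
import math

def zc_map(I, theta, eps, I0=1.0, T=1.0, omega=lambda I: I, domega=1.0):
    """One step of the Zaslavsky-Chirikov map (4.14).

    The frequency law omega(I) = I and its derivative domega = 1 are
    illustrative model assumptions, not fixed by the derivation.
    """
    W = eps * I0 * T * domega
    I_new = I - eps * I0 * math.sin(theta)
    theta_new = (theta + T * omega(I) - W * math.sin(theta)) % (2 * math.pi)
    return I_new, theta_new

I, th = 0.3, 1.0
for N in range(3):
    I, th = zc_map(I, th, eps=0.5)
    print(N, I, th)
```

Iterating `zc_map` from different initial points reproduces, qualitatively, the regimes discussed in the next subsection: for small ε the action stays near its initial value, while for larger ε it wanders.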

4.3.3 Chaos and Kolmogorov–Arnold–Moser Theory

The map found in the previous section produces mixing, which gives rise to chaotization of motion. The cause of dynamical chaotization (stochastization) in the given model can be attributed to the overlap of nonlinear resonances of various orders. In order to pass to the "language" of resonances, it is sufficient to expand the sum of the δ-functions on the right-hand sides of eqs (4.13) into a Fourier series:

\[
T \sum_{k=-\infty}^{\infty} \delta(t - kT) = \sum_{m=-\infty}^{\infty} \exp(i m \omega_{\mathrm{ext}} t),
\qquad \omega_{\mathrm{ext}} = 2\pi/T.
\]

The usefulness of this approach is that it allows formulating a criterion for chaotization (the Chirikov criterion) as a condition on the parameters. The following reasoning underlies this criterion. The presence of an infinite number of harmonics implies numerous resonances. We write down the equation ω(I_m) = mω_ext, which allows one to find the values of the action I_m for which the conditions of nonlinear resonance are fulfilled. To estimate the density of the resonances on the action axis, we introduce the parameter ΔI = I_{m+1} − I_m, the distance between the nearest resonances. Under the assumption that they are sufficiently far apart (in their location on the action axis) and do not overlap each other, the nonlinear resonances do not destroy regular motion. It should be emphasized that this assumption is well confirmed by numerical calculations. With the growth of ε, the size of each of the separatrix cells increases, being approximately equal to the estimate (4.12). We introduce the parameter K = δI_max/ΔI, where the numerator is given by eq. (4.12) and the denominator is the mean distance between the resonances. Instabilities causing the chaotization of motion take place


Figure 4.7

under the resonance-overlapping condition K ≥ 1. A stochastic layer appears instead of the destroyed separatrix (Fig. 4.7). In the case of a two-dimensional phase space, this layer is located between the invariant tori and occupies a well-defined region in the plane. In spaces of higher dimension, however, the stochastic layers are combined into a stochastic web. All this bears evidence of a violation of the conditions for regular motion. In this case, the phase trajectory, not tied to a particular torus, visits remote areas of the space: there is a stochastic Arnold diffusion. To obtain numerical results, we can take the Zaslavsky–Chirikov map in its simplest form:

\[
y_{N+1} = y_N + a \sin x_N, \qquad x_{N+1} = x_N + y_{N+1}.
\tag{4.15}
\]
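The sensitivity to initial conditions in map (4.15) can be seen in a few lines. The following Python sketch (the value a = 2 and a starting point near the saddle at the origin are illustrative assumptions) iterates two orbits separated initially by 10⁻¹⁰:

```python
import math

def step(x, y, a):
    """One iteration of map (4.15), taken modulo 2*pi in both variables."""
    y = (y + a * math.sin(x)) % (2 * math.pi)
    x = (x + y) % (2 * math.pi)
    return x, y

a = 2.0                          # well above the resonance-overlap threshold
x1, y1 = 0.1, 0.0                # an (assumed) point in the stochastic layer
x2, y2 = 0.1 + 1e-10, 0.0        # its close neighbour
for _ in range(100):
    x1, y1 = step(x1, y1, a)
    x2, y2 = step(x2, y2, a)

d = math.hypot(x1 - x2, y1 - y2)
print(d)   # the 1e-10 separation has grown by many orders of magnitude
```

For this value of a, the tiny initial separation reaches macroscopic size within a hundred iterations, whereas on a regular orbit (small a) it would grow only linearly.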

This mapping is deduced from eq. (4.14) if one puts Tω(I) ≈ I. It is convenient to assume that periodic boundary conditions are imposed on the variables x and y, so that the torus surface "rolled up" from the square [0, 2π] × [0, 2π] plays the role of the phase space. Suppose we have a stroboscopic map for a dynamical system: the variable values are determined at regular time intervals T equal to the period of the external force. Then the limit cycle with period T corresponds to a stationary point of the map; the limit cycle with period 2T, to two different points; and so on. If the motion is periodic and its period is rationally related to T, the phase plane of the map contains a finite number of points. If these periods are related irrationally (quasiperiodic motion), the iterated map causes the points to lie densely on a curve. Such a curve is the cross section of an invariant torus (in fact, we see the cut of an irrational winding of the torus). Note that numerical simulation cannot distinguish an irrational ratio of these periods from a rational one whose irreducible fraction has a very large numerator and denominator. As far as Hamiltonian systems are concerned, there are no attractors or repellers; therefore, the invariant tori are nested into each other. Let us now discuss the numerical results. Figure 4.8 shows phase portraits obtained by iterating the map (4.15) for different values of a. Each continuous or broken line is a cross section of an invariant torus with winding and responsible for

Figure 4.8: phase portraits generated by the map (4.15) for a = 0.1, 0.3, 0.5, 0.7, 1.0, 1.5, 2.0 and 4.0.

the periodic (quasiperiodic) motion regime. For a = 0.1, such lines cover a large part of the phase plane; there are also narrow "necklaces" of cells bounded by the separatrices. These necklaces are generated by the nonlinear resonances. With the growth of a, the number of necklaces increases. For a = 0.3–0.5, we can see narrow stochastic layers along some separatrices. For a = 1, the stochastic motion already occupies a significant region, and the structure of the regular motion is greatly complicated. This is explained by the fact that the nonlinear resonances of various orders generate a self-similar fractal structure. Finally, for a = 1.5–4.0, the phase portrait becomes populated with localized "islands" of regular motion in a "sea" of stochastic motion. As a grows, the region of regular motion shrinks, with some of the islands disappearing and their total number going down. In 1950–1970, A.N. Kolmogorov, V.I. Arnold and J. Moser (KAM) established a number of rigorous laws pertaining to the dynamics of weakly perturbed integrable systems. We present some of them here without seeking rigorous language and omitting the proofs; the latter can be found in many textbooks, in the sections on the KAM theory and the KAM theorem.
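The gradual chaotization with growing a that is visible in Fig. 4.8 can be quantified by an estimate of the largest Lyapunov exponent obtained from the tangent map of (4.15). The Python sketch below is a crude illustration; the starting orbit, the iteration count and the parameter list are assumptions:

```python
import math

def lyapunov_estimate(a, x=1.0, y=1.0, n=5000):
    """Crude largest-Lyapunov-exponent estimate for map (4.15) via the
    tangent map; the orbit (x, y) and the length n are illustrative."""
    dx, dy = 1.0, 0.0
    s = 0.0
    for _ in range(n):
        # Tangent map of  y' = y + a sin x,  x' = x + y'
        ddy = dy + a * math.cos(x) * dx
        ddx = dx + ddy
        norm = math.hypot(ddx, ddy)
        s += math.log(norm)
        dx, dy = ddx / norm, ddy / norm
        # Advance the orbit itself
        y = (y + a * math.sin(x)) % (2 * math.pi)
        x = (x + y) % (2 * math.pi)
    return s / n

for a in (0.1, 0.5, 1.0, 2.0, 4.0):
    print(a, lyapunov_estimate(a))
```

The estimate is near zero for a = 0.1 (regular motion) and clearly positive for a = 2–4, signaling exponential divergence of nearby trajectories.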


Based on the results obtained in Section 4.3.1, it can be concluded that a resonant trajectory, that is, a trajectory of the unperturbed motion satisfying one of the resonance conditions (4.9), corresponds to a well-defined initial condition. It can be shown that the subset of resonant initial conditions and the corresponding resonant trajectories is dense in the full set of trajectories but has measure zero. To illustrate this assertion, note that the subset of rational points on the real number axis has similar properties: an arbitrarily small neighborhood of any real point contains such points (the density property), but the total length (measure) of the set of rational points is zero. If ε becomes nonzero but remains small enough, each point corresponding to a resonant initial condition turns into a small region. Trajectories starting inside such a region exhibit irregular (chaotic) motion. Under the perturbation, the other trajectories undergo no qualitative changes and are only slightly deformed. From the KAM theory it follows that the measure of the union of the irregular-motion regions is not zero, but it can be made arbitrarily small by choosing a sufficiently small value of ε. Thus, for small ε, most of the trajectories of regular motion are preserved, without actually losing their shape. Such a pattern of motion can be observed in the above numerical calculations (Fig. 4.8, a = 0.1–0.5).

Andrei Nikolaevich Kolmogorov (April 12, 1903, Tambov – October 20, 1987, Moscow) was one of the greatest mathematicians of the twentieth century, who obtained fundamental results in various areas of mathematics. In particular, he published the invariant-torus theorem, later generalized by Arnold and Moser.

The general theory deals with an unperturbed integrable n-degrees-of-freedom Hamiltonian system whose nonresonant trajectories are irrational windings of n-tori. In other words, this is quasiperiodic motion with functionally independent frequencies ω₁, …, ω_n. The perturbation must be a smooth function, that is, it must have derivatives of sufficiently high order. An important application of chaos theory and the KAM theory is celestial mechanics. The masses of the solar system planets are about 10⁻³ of the mass of the Sun. For this reason, the interplanetary gravitational interactions can be regarded as a "small perturbation" of the motion of each planet around the Sun. Can the appearance of resonances lead to any "fast" (on the timescale of planetary motion, of course) changes


in the orbits, to collisions of the planets and other dramatic events in cosmic space? Although the current state of the solar system does not portend disaster, the aftermath of the interplanetary interactions is observable. For example, it is found that the angular frequencies of rotation of Jupiter and Saturn are close to the resonance 2ω_J − 5ω_S ≈ 0. This results in a perturbation of the planetary motion with a period of about 800 years. It is also known that the planet–planet interactions give rise to periodic changes in the eccentricities of the planets' orbits. It is assumed that this is the cause of the glaciations recurring throughout the history of the Earth with a period of a few tens of thousands of years.

5 Chaos and Fractal Attractors in Dissipative Systems

A little bug's walking down the street,
Dressed in a trendy jacket;
An icon shines on it, with another bug painted on,
Also in a jacket, with another icon hanging on it,
And in that icon there's yet another bug…
Andrey Usachev

The fractal approach is both effective and "natural". Not only should it not be resisted, but one ought to wonder how one could have gone so long without it.
Benoit Mandelbrot

5.1 On the Nature of Turbulence

There are several fundamentally important examples of unstable motion in physical systems. Among these is molecular chaos in the motion of an ensemble of particles interacting with each other and moving in accordance with the laws of classical mechanics. Another well-known example is turbulent motion in a fluid. The latter represents unstable dissipative motion describable by macroscopic variables. As used herein, the term "turbulence" stands for unstable macroscopic motion in any system. An important consequence of the presence of dissipation is the formation of coherent dissipative structures in such systems. These are spatial and temporal inhomogeneities that emerge and persist in quite definite forms. Klimontovich, in his book [1] from which the quote below is taken, proposed to distinguish three types of motion.

This is, firstly, chaotic thermal motion. In this case, the averaged macroscopic parameters are constant, and the existence of fluctuations characterizes the "molecular" structure of the system. The fluctuations of the macroscopic characteristics are small and in many cases can be neglected (not, however, in the case of the Brownian motion of small particles in a liquid).

The second group comprises laminar flows, or laminar space-time dissipative structures. They arise against the background of thermal motion and are characterized by a small number of macroscopic degrees of freedom. The role of fluctuations here is especially important in the vicinity of the critical points of transition from one dissipative structure to another (nonequilibrium phase transitions).

The third group contains turbulent motions, which are characterized by a large number of macroscopic degrees of freedom. Turbulent motions are extremely diverse and can arise at all levels of description, from kinetic to diffusion or diffusion-reaction. They are characterized by a large number of spatial and temporal scales. Coherent space-time structures may stand out against the small-scale turbulence.


Despite the successes of the theory of nonlinear systems, a complete understanding of all the features of turbulent motion has not yet been achieved. The main results in this field pertain to the genesis of turbulence and to motion near the turbulence threshold. The first model to explain the phenomenon of turbulence on a qualitative level was proposed by Landau in 1944 [2]. He supposed that complicated irregular motion can arise as a result of a large (but finite) number of cycle-birth bifurcations (see Section 3.2.1). In accordance with this model, after k bifurcations a regime of quasiperiodic oscillations with frequencies ω₁, ω₂, …, ω_k is established; an attractor in the form of a k-dimensional torus corresponds to it. If k is large, the motion is rather complicated. However, there is no exponential divergence of nearby trajectories, and the motion is not sensitive to initial conditions.

Edward Norton Lorenz (May 23, 1917, West Hartford, USA – April 16, 2008, Cambridge, USA) was an American mathematician, an expert in the field of theoretical meteorology and a founder of the theory of dynamical chaos, who first described the structure of a strange attractor.

Modeling turbulence by unstable motion developed rapidly after the appearance of Lorenz's 1963 paper [3], devoted to problems of atmospheric dynamics. His work significantly stimulated further scientific investigation of complex nonperiodic regimes of motion in low-dimensional nonlinear dissipative dynamical systems, and it later came to be regarded as a classic treatise in this field. The research has led to a fundamental and, in a sense, revolutionary change in our understanding of how a finite-dimensional dynamical system can behave. Ultimately, there appeared the theory of dynamic stochastization of motion (dynamic chaos). A fundamentally important circumstance that made it possible for this research to advance successfully was the creation of computers, which enabled the differential equations describing atmospheric dynamics to be integrated numerically. Apart from the above, the development of numerical methods was stimulated by important practical problems of high complexity, in particular those associated with weather prediction. In his lecture delivered at a meeting of the American Mathematical Society in 2008 [4], the famous physicist Freeman John Dyson shared his memories of the nascent stage of computer technology. He noted that the American mathematician John von Neumann played a special role in advancing it. We give here long quotations from his speech.


John von Neumann (December 28, 1903, Budapest, Hungary – February 8, 1957, Washington, USA) was an American mathematician who made important contributions to quantum physics, quantum logic, functional analysis, set theory, computer science, economics and other branches of science. He was one of the founders of modern computer architecture.

Von Neumann made fundamental contributions to several other fields, especially to game theory and to the design of digital computers. For the last ten years of his life, he was deeply involved with computers. He was so strongly interested in computers that he decided not only to study their design but to build one with real hardware and software and use it for doing science. I have vivid memories of the early days of von Neumann’s computer project at the Institute for Advanced Study in Princeton. At that time he had two main scientific interests, hydrogen bombs and meteorology. He used his computer during the night for doing hydrogen bomb calculations and during the day for meteorology. Most of the people hanging around the computer building in daytime were meteorologists. Their leader was Jule Charney. Charney was a real meteorologist, properly humble in dealing with the inscrutable mysteries of the weather, and skeptical of the ability of the computer to solve the mysteries. John von Neumann was less humble and less skeptical. I heard von Neumann give a lecture about the aims of his project. He spoke, as he always did, with great confidence. He said, “The computer will enable us to divide the atmosphere at any moment into stable regions and unstable regions. Stable regions we can predict. Unstable regions we can control.” Von Neumann believed that any unstable region could be pushed by a judiciously applied small perturbation so that it would move in any desired direction. The small perturbation would be applied by a fleet of airplanes carrying smoke generators, to absorb sunlight and raise or lower temperatures at places where the perturbation would be most effective. In particular, we could stop an incipient hurricane by identifying the position of an instability early enough, and then cooling that patch of air before it started to rise and form a vortex. 
Von Neumann, speaking in 1950, said it would take only ten years to build computers powerful enough to diagnose accurately the stable and unstable regions of the atmosphere. Then, once we had accurate diagnosis, it would take only a short time for us to have control. He expected that practical control of the weather would be a routine operation within the decade of the 1960s.

A subsequent remark made by F. Dyson enables us to understand in what way the results obtained by Lorenz were actually new.

Von Neumann, of course, was wrong. He was wrong because he did not know about chaos. We now know that when the motion of the atmosphere is locally unstable, it is very often chaotic. The word "chaotic" means that motions that start close together diverge exponentially from each other as time goes on. When the motion is chaotic, it is unpredictable, and a small perturbation does not move it into a stable motion that can be predicted. A small perturbation will usually move it into another chaotic motion that is equally unpredictable. So von Neumann's strategy for controlling the weather fails.


The dynamics of radiophysical systems similar to turbulent dynamics was examined by the English mathematician Mary Cartwright in connection with an important applied project: the development of the first radars. The need to study turbulent regimes of radio-wave generation was caused by the demand to use the microwave range. Mary Cartwright's contribution to the science of chaos turned out, unfortunately, to be almost forgotten, as Freeman Dyson noted in Ref. [4].

Mary Cartwright (December 17, 1900, Northamptonshire – April 3, 1998, Cambridge, UK) was an English mathematician, one of the founders of the theory of chaos. In 1947, she was elected to the Royal Society. In 1948, she headed Girton College, Cambridge. In 1961–1963 she was president of the London Mathematical Society, and in 1968 she became the winner of the De Morgan Medal, its highest award. In 1969, she was elevated to the rank of Dame Commander of the Order of the British Empire.

Edward Lorenz discovered in 1963 that the solutions of the equations of meteorology are often chaotic. That was six years after von Neumann died. Lorenz was a meteorologist and is generally regarded as the discoverer of chaos. He discovered the phenomena of chaos in the meteorological context and gave them their modern names. But in fact I had heard the mathematician Mary Cartwright, who died in 1998 at the age of 97, describe the same phenomena in a lecture in Cambridge in 1943, twenty years before Lorenz discovered them. She called the phenomena by different names, but they were the same phenomena. She discovered them in the solutions of the Van der Pol equation, which describes the oscillations of a nonlinear amplifier [5]. The Van der Pol equation was important in World War II because nonlinear amplifiers fed power to the transmitters in early radar systems. The transmitters behaved erratically, and the Air Force blamed the manufacturers for making defective amplifiers. Mary Cartwright was asked to look into the problem. She showed that the manufacturers were not to blame. She showed that the Van der Pol equation was to blame. The solutions of the Van der Pol equation have precisely the chaotic behavior that the Air Force was complaining about. I heard all about chaos from Mary Cartwright seven years before I heard von Neumann talk about weather control, but I was not far-sighted enough to make the connection. It never entered my head that the erratic behavior of the Van der Pol equation might have something to do with meteorology.

Now we will focus on the Lorenz model.

5.2 Dynamics of the Lorenz Model

The stomp of your foot, on one mouse, could start an earthquake, the effects of which could shake our earth and destinies down through Time, to their very foundations. With the death of that one caveman, a billion others yet unborn are throttled in the womb. Perhaps Rome never rises on its seven hills. Perhaps Europe is forever a dark forest, and only Asia waxes healthy and teeming. Step on a mouse and you crush the pyramids. Step on a mouse and you leave your print, like a Grand Canyon, across eternity.
Ray Bradbury, A Sound of Thunder

The Lorenz model is one of the most important base (reference) models in nonlinear dynamics. Numerical analysis of this model has led to the discovery of the Lorenz attractor. The latter is a representative of the class of strange attractors: bounded and connected attracting sets with a complicated self-similar (fractal) structure. They are neither points nor closed curves nor surfaces. Below we briefly summarize the results obtained by Edward Norton Lorenz and other authors.

Lord Rayleigh studied a flow in a horizontal liquid layer when a constant temperature difference ΔT is maintained between the upper (cooler) and lower boundaries. In such a system, a static state with no fluid motion is possible at a small temperature difference ΔT. However, as ΔT grows, the static state loses stability (convective instability). As a result, a regular structure of convective rolls emerges, as shown in Fig. 5.1 (the Rayleigh–Bénard cells). With a further increase of ΔT, one can observe both different types of relatively simple regular motion and chaotic motion, which eventually gives way to fully developed turbulence.

In his paper [3], Lorenz investigated a solution of the Navier–Stokes equation describing the fluid motion, together with the heat transfer equation, choosing parameter values at which the instability occurs. The fluid was assumed to be weakly compressible. Therefore, the temperature dependence of the fluid density could be taken into account only in the term that includes the gravity force (the Boussinesq approximation). The velocity and temperature fields were expanded in Fourier series, which resulted in an infinite chain of coupled ordinary differential equations. Previous numerical simulations had shown that, for some values of the parameters, the amplitudes of the highest Fourier harmonics approach zero, so the chain can be "broken off." We omit the detailed derivation of the equations by the procedures described above, as too cumbersome, and go straight to the resulting equations. The standard form of the Lorenz equations is

ẋ = σ(y − x),
ẏ = rx − y − xz,
ż = −bz + xy.

Figure 5.1 Convective rolls in the liquid layer (cooling at the top, heating at the bottom; g denotes gravity).


Thus, we have a normal system of three differential equations whose right-hand sides are polynomial nonlinear functions. Lorenz was the first to show that the dynamic behavior of the system becomes unusual and interesting to explore in the vicinity of the parameter values σ = 10, b = 8/3, r = 28. Let us discuss the properties of the Lorenz model.
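The book gives no program text, but the system above is straightforward to integrate numerically. The following minimal sketch (Python with NumPy is assumed; the classical fourth-order Runge–Kutta scheme and the step size are our choices, not the authors') generates a trajectory at the standard parameter values:

```python
import numpy as np

def lorenz(state, sigma=10.0, r=28.0, b=8.0/3.0):
    """Right-hand side of the Lorenz equations."""
    x, y, z = state
    return np.array([sigma * (y - x),
                     r * x - y - x * z,
                     -b * z + x * y])

def rk4(f, state, dt, n_steps):
    """Classical fourth-order Runge-Kutta integrator."""
    traj = np.empty((n_steps + 1, 3))
    traj[0] = state
    for i in range(n_steps):
        k1 = f(state)
        k2 = f(state + 0.5 * dt * k1)
        k3 = f(state + 0.5 * dt * k2)
        k4 = f(state + dt * k3)
        state = state + (dt / 6.0) * (k1 + 2*k2 + 2*k3 + k4)
        traj[i + 1] = state
    return traj

# integrate up to t = 50 from an arbitrary initial condition
traj = rk4(lorenz, np.array([0.0, 10.0, 0.0]), dt=0.01, n_steps=5000)
print(traj[-1])   # the endpoint depends sensitively on dt and the initial state
```

Plotting `traj` in three dimensions reproduces the familiar two-lobed picture discussed below.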

5.2.1 Dissipativity of the Lorenz Model

We demonstrate that, in accordance with the criterion derived from the generalized Liouville theorem (see Section 3.1.3), the Lorenz model belongs to the dissipative dynamical systems. For the above parameter values, we have

div f = ∂/∂x [σ(y − x)] + ∂/∂y [rx − y − xz] + ∂/∂z [−bz + xy] = −σ − 1 − b < 0.

Substituting the value of the divergence into the Liouville formula, we obtain

dV(t)/dt = ∫ div f dx = −k V(t) → V(t) = V(0) e^(−kt),

where k = σ + 1 + b ≈ 13.7, so that exp(−k) ≈ 10^(−6). Note that the degree of compression of the phase volume per unit time is very high.
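The fact that the divergence is constant over the whole phase space can be checked numerically. The sketch below (our illustration, not from the book) estimates the trace of the Jacobian by central differences at several random points:

```python
import numpy as np

sigma, r, b = 10.0, 28.0, 8.0/3.0

def f(s):
    """Right-hand side of the Lorenz equations."""
    x, y, z = s
    return np.array([sigma*(y - x), r*x - y - x*z, -b*z + x*y])

def divergence(s, h=1e-6):
    """Trace of the Jacobian of f, estimated by central differences."""
    div = 0.0
    for i in range(3):
        e = np.zeros(3)
        e[i] = h
        div += (f(s + e)[i] - f(s - e)[i]) / (2 * h)
    return div

rng = np.random.default_rng(0)
for s in rng.uniform(-20, 20, size=(5, 3)):
    print(divergence(s))   # the same value -(sigma + 1 + b) at every point
```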

5.2.2 Boundedness of the Region of Stationary Motion

We show that, for any initial condition, the point moving along the phase trajectory enters, after some time, a bounded region in the vicinity of the origin, which it never leaves afterwards. From this it follows, in particular, that the limit set (the attractor) in the phase space of the Lorenz system is a bounded set. We make the replacement w = r − z, multiply both sides of each equation by the factor indicated to its right, and then sum up the results:

ẋ = σ(y − x),      × σ⁻¹x,
ẏ = −y + xw,       × y,
ẇ = −bw + br − xy, × w.

Finally, we get the equality

(1/2) d/dt (σ⁻¹x² + y² + w²) = σ⁻¹x ⋅ ẋ + y ⋅ ẏ + w ⋅ ẇ =
= σ⁻¹x ⋅ [σ(y − x)] + y ⋅ [−y + xw] + w ⋅ [−bw + br − xy] =
= −(x − y/2)² − (3/4)y² − b(w − r/2)² + (1/4)br².


Using this relation, it is easy to prove that if the inequality

(x − y/2)² + (3/4)y² + b(w − r/2)² > (1/4)br²

is fulfilled, then the inequality

d/dt (σ⁻¹x² + y² + w²) < 0

is valid too. Let us explain the geometric meaning of each of the inequalities. The first inequality, taken as a condition, means that the phase point lies outside an ellipsoid; let this ellipsoid be designated S₀. We choose a phase point satisfying this condition. Note that we can always choose a number C > 0 such that this point lies on the surface of the ellipsoid H_C described by the equation σ⁻¹x² + y² + w² = C² and centered at the origin. The second inequality contains the time derivative. It can be interpreted as follows: as it travels, the phase point passes from one ellipsoid to another, H_C → H_C′, in such a way that C′ < C, that is, the new ellipsoid is embedded in the original one. In other words, all points lying on the surface of H_C are starting points of trajectories that consist of interior points of H_C, provided only that H_C contains S₀. It is clear that, by taking a large enough C = C₀, we can always choose the ellipsoid H_C₀ so that the ellipsoid S₀ is embedded in it. As a result, the phase point moving along any path sooner or later becomes an interior point of H_C₀ and remains such at all subsequent moments of time.
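The trapping-region argument can be probed numerically: the quantity V = σ⁻¹x² + y² + w², with w = r − z, should decrease from a distant initial condition and then stay bounded. A sketch under these assumptions (parameters, integrator, and starting point are our choices):

```python
import numpy as np

sigma, r, b = 10.0, 28.0, 8.0/3.0

def f(s):
    x, y, z = s
    return np.array([sigma*(y - x), r*x - y - x*z, -b*z + x*y])

def V(s):
    """Lyapunov-type function in the shifted variable w = r - z."""
    x, y, z = s
    w = r - z
    return x*x/sigma + y*y + w*w

# start far from the origin and integrate with RK4
s = np.array([50.0, -50.0, 100.0])
dt = 0.001
vals = []
for _ in range(50000):            # up to t = 50
    k1 = f(s)
    k2 = f(s + 0.5*dt*k1)
    k3 = f(s + 0.5*dt*k2)
    k4 = f(s + dt*k3)
    s = s + (dt/6.0) * (k1 + 2*k2 + 2*k3 + k4)
    vals.append(V(s))
vals = np.array(vals)
print(vals[0], vals[-1])   # V drops sharply and then stays bounded
```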

5.2.3 Stationary Points

Setting the right-hand sides of the Lorenz equations equal to zero, we find the coordinates of the stationary points:

σ(y − x) = 0,  rx − y − xz = 0,  −bz + xy = 0  →  y = x → x(r − 1 − z) = 0,

that is, either x = 0 or z = r − 1. Thus, there are three stationary points: one is at the origin; the other two are located symmetrically:

O: x = y = z = 0,
O₁,₂: x = y = ±√(b(r − 1)),  z = r − 1.

Linearizing the Lorenz equations in the vicinity of the stationary points and performing the exponential substitution δx, δy, δz ∼ exp(λt), we find the respective characteristic equations. For the point O, the values of λ can be found explicitly; for the other two points, we obtain a cubic equation, which can be solved numerically. We write down the final formulas:

δẋ = σ(δy − δx),
δẏ = rδx − δy − x₀δz − z₀δx,
δż = −bδz + x₀δy + y₀δx,

O: (λ + b)[λ² + (σ + 1)λ + σ(1 − r)] = 0,
λ₁ = −b,  λ₂,₃ = −(σ + 1)/2 ± (1/2)√((σ + 1)² + 4σ(r − 1)),

O₁,₂: λ³ + (σ + b + 1)λ² + b(σ + r)λ + 2σb(r − 1) = 0.

Note that the bifurcation parameter values for the points O₁,₂, at which they lose stability, can be found analytically. For each of these points, it is fairly easy to show numerically that the loss of stability occurs as a transition of a stable focus into an unstable one. During this transition the real part of λ changes sign, passing through zero. It is clear that in this case the bifurcation value of λ is purely imaginary. We substitute λ = iω into the characteristic equation and separate it into real and imaginary parts. Eliminating ω and solving the resulting equation with respect to r, we deduce a formula for the bifurcation value of this parameter:

−iω³ − (σ + b + 1)ω² + ib(σ + r)ω + 2σb(r − 1) = 0 →
→ r* = σ(σ + b + 3)/(σ − b − 1) ≈ 24.74.
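The stability analysis above is easy to reproduce with a numerical eigenvalue solver. The sketch below (our illustration; NumPy assumed) builds the Jacobian of the Lorenz system at O and O₁ and evaluates the bifurcation value r*:

```python
import numpy as np

sigma, b = 10.0, 8.0/3.0

def jacobian(x0, y0, z0, r):
    """Jacobian of the Lorenz right-hand side at the point (x0, y0, z0)."""
    return np.array([[-sigma,  sigma,  0.0],
                     [r - z0,  -1.0,  -x0],
                     [y0,       x0,   -b]])

r = 28.0
xs = np.sqrt(b * (r - 1))                      # coordinate of O1
eig_O  = np.linalg.eigvals(jacobian(0.0, 0.0, 0.0, r))
eig_O1 = np.linalg.eigvals(jacobian(xs, xs, r - 1, r))
print(eig_O)    # one positive real eigenvalue: O is a saddle for r > 1
print(eig_O1)   # a complex pair with positive real part, since r > r*

r_star = sigma * (sigma + b + 3) / (sigma - b - 1)
print(r_star)   # the analytic bifurcation value, about 24.74
```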

5.2.4 The Lorenz Model's Dynamic Regimes as a Result of Bifurcations

Consider now the dynamical picture in the Lorenz model, relying on computer-based simulation results. Assuming that σ = 10, b = 8/3, we will vary the parameter r. Figure 5.2 shows typical behaviors of the phase trajectories.
– For r < 1, the point O is a stable attracting point (this case is not illustrated in Fig. 5.2). The point loses stability when r passes through the bifurcation value 1. It turns from a node into a saddle: there appear separatrix directions Γ₁ and Γ₂, along which the phase point "runs away" from the point O (dim Wᵘ = 1). If the value of r exceeds 1 only slightly, the corresponding trajectories are attracted to the other two stable stationary points (nodes or foci): Γ₁ → O₁ and Γ₂ → O₂ (Fig. 5.2a).
– The value r = 13.927 is a bifurcation value (Fig. 5.2b). For this r, the separatrices Γ₁,₂ of the saddle O return to O along the stable direction that coincides with the upward-directed vertical axis Oz. In the process, two separatrix loops from a saddle to a saddle are formed. The points O₁,₂ remain stable foci, each with its own basin of attraction.
– For r > 13.927, unstable (repelling) limit cycles come into being in the vicinity of the points O₁,₂. They manifest themselves by not permitting the trajectories running along the separatrix directions to be attracted to the respective nearby stable foci. The trajectories are attracted to the foci O₁,₂ in a crisscross manner, being deflected in the vicinity of the cycles. This configuration is shown in Fig. 5.2c. In the notation introduced in Fig. 5.2a, the attraction occurs as follows: Γ₁ → O₂, Γ₂ → O₁.
– For r = 24.06, a bifurcation takes place. It results in the separatrix trajectories Γ₁,₂ being repelled from the unstable cycles. In this case, they cease to be attracted to the points O₁,₂ at all. Instead, as t → ∞ they approach asymptotically a new type of attractor, mentioned earlier: a strange attractor, which is neither a stationary point nor a limit cycle. The word strange is used here as a mathematical term and is usually written without quotation marks. The points O₁,₂ still remain stable foci with local basins of attraction (Fig. 5.2d).
– For r = 24.74, the unstable limit cycles collapse onto the stable foci O₁,₂. As a result, the latter become unstable foci. To illustrate this situation, it suffices to redraw the spiral of the trajectory in the neighborhood of O₁,₂ in Fig. 5.2d so that attraction is replaced by repulsion. For r > 24.74, the strange attractor becomes the only attracting set.

Figure 5.2 Phase trajectories in the Lorenz model (panels (a)–(d); the labels O, O₁,₂, and Γ₁,₂ mark the stationary points and the separatrices).
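The succession of regimes can be probed numerically by integrating from a fixed initial condition at different values of r. A sketch under our own choices of integrator, step size, and initial point:

```python
import numpy as np

sigma, b = 10.0, 8.0/3.0

def step(s, r, dt=0.01):
    """One RK4 step of the Lorenz system."""
    def f(s):
        x, y, z = s
        return np.array([sigma*(y - x), r*x - y - x*z, -b*z + x*y])
    k1 = f(s)
    k2 = f(s + 0.5*dt*k1)
    k3 = f(s + 0.5*dt*k2)
    k4 = f(s + dt*k3)
    return s + (dt/6.0) * (k1 + 2*k2 + 2*k3 + k4)

def settle(r, n=40000):
    """Integrate up to t = 400 and return the final state."""
    s = np.array([1.0, 1.0, 1.0])
    for _ in range(n):
        s = step(s, r)
    return s

# r < 1: the origin attracts
print(settle(0.5))             # close to (0, 0, 0)

# 1 < r < r*: a stable focus O1 or O2 attracts
s = settle(15.0)
print(s, np.sqrt(b * (15 - 1)))  # |x| = |y| = sqrt(b(r-1)), z = r - 1
```

For r = 28 the same experiment never settles: the state keeps wandering over the strange attractor.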

5.2.5 Motion on a Strange Attractor

A phase trajectory near the strange attractor can be constructed by numerical simulation. A particular trajectory of this kind, corresponding to the parameter values σ = 10, b = 8/3, r = 34 and the initial conditions x₀ = z₀ = 0, y₀ = 10, is shown in Fig. 5.3. For the same values, Fig. 5.4 presents graphs of the time dependence of the individual coordinates of the phase point.

Figure 5.3 The phase trajectory on a strange attractor.

We now give a qualitative description of the key features of motion on the attractor. We observe essentially aperiodic motion: the phase point spirals outward, getting farther and farther from the center of the spiral, and then passes to another spiral. The centers of the spiral trajectories almost coincide with the points O₁,₂. The number of turns in each spiral fragment, the sizes of these fragments, the moments of "skipping" of the phase point from one spiral to the other, and the other parameters of the motion vary in an irregular manner. The time dependence of each coordinate of the phase point is an aggregate of fragments, each an oscillation with increasing amplitude. However, there is no simple pattern in how and when one vibrational fragment finishes and the next one begins. The skippings seem to occur randomly, even though we deal with a solution of deterministic equations of motion and leave aside the influence of random factors in the computer-based simulation. Yet another peculiarity of the motion on the strange attractor is the strong sensitivity of the motion pattern to initial conditions. That is, an arbitrarily small change in the initial position of the phase trajectory is sufficient for the alternation sequence of the vibrational fragments to become completely different. Thus, as in the previous chapter, we deal with unstable motion.

Figure 5.4 Time dependences of the coordinates of the phase point.

5.2.6 Hypothesis About the Structure of a Strange Attractor

The question of how to describe a strange attractor using the concepts of geometry is of interest. From the fact that the Lorenz system is dissipative, so that a drop of phase volume must tend to zero as t → ∞, it follows that the dimension d of the attracting set must be less than 3. In three-dimensional space there are standard submanifolds satisfying this requirement: a surface (d = 2), a curve (d = 1), or a point (d = 0). Proceeding from analysis of the numerical results, it can be assumed that there are two partially "glued" surfaces, which contain the spiral trajectories. The passage from one surface to the other occurs in the lower region common to both surfaces. To illustrate the hypothesis pictorially, we borrow a figure similar to Fig. 5.5 from the work of Lorenz [3]. The darkened line is the boundary of the region within which the glued sheets of the attractor lie. Thin lines correspond to different constant values of the coordinate x. Where the sheets come apart, each line splits into two lines: an unbroken one and a dashed one. Branch points are marked with small black circles.

Figure 5.5 Schematic representation of the strange attractor.

However, this simple model contradicts the theorem of existence and uniqueness of the solution for an autonomous system of differential equations: by the theorem, the trajectories cannot intersect. The contradiction can be removed by assuming that instead of a surface we have a quasi-surface consisting of an infinite number of closely adjacent quasi-parallel surfaces, which collectively form a "sheet" of finite thickness (a structure like a head of cabbage). Finding itself on the "sheet" once again, the phase trajectory actually lands on another of the surfaces making up the "sheet." Consequently, the trajectories do not intersect, and the above contradiction is removed. Below we show that the cross section of the "sheet" has a structure well known in mathematics as the Cantor set, which has a fractional dimension. Two questions now await an answer. First, we should explain the essentially irregular (chaotic) nature of the dynamics in the Lorenz model and the sensitivity to initial conditions. Second, some approaches need to be found to describe the structure of the attractor.

5.2.7 The Lorenz Model and the Tent Map

Lorenz [3] proposed a novel approach to studying the dynamics of his model. He constructed a one-dimensional mapping z_{N+1} = f(z_N) connecting the neighboring elements of the sequence of maximal values of the z-coordinate of the phase point (Fig. 5.6). It turns out that the sequence of points built by computer simulation belongs to a "curve," which can be approximated by a continuous function. Its graph has a tent-like shape. In fact, the set of points is not a curve but a narrow band.

Figure 5.6 The tent map for the maximum values of the z coordinate.

Using this mapping, we can, qualitatively of course, simulate the dynamics of the Lorenz model over large times. The obvious analogy between the mapping found here and the tent map discussed in Section 4.2.3 allows us to conclude that the dynamics is unstable and the motion is chaotic.
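Lorenz's construction can be reproduced by collecting the successive local maxima of z(t) along a numerically integrated trajectory and pairing each maximum with the next. A sketch under our own choices of parameters and integrator:

```python
import numpy as np

sigma, r, b = 10.0, 28.0, 8.0/3.0

def f(s):
    x, y, z = s
    return np.array([sigma*(y - x), r*x - y - x*z, -b*z + x*y])

def rk4_step(s, dt=0.005):
    k1 = f(s)
    k2 = f(s + 0.5*dt*k1)
    k3 = f(s + 0.5*dt*k2)
    k4 = f(s + dt*k3)
    return s + (dt/6.0) * (k1 + 2*k2 + 2*k3 + k4)

# collect successive local maxima of z(t) along the trajectory
s = np.array([0.0, 1.0, 0.0])
z_prev2 = z_prev = None
maxima = []
for _ in range(200000):              # up to t = 1000
    s = rk4_step(s)
    z = s[2]
    if z_prev2 is not None and z_prev > z_prev2 and z_prev > z:
        maxima.append(z_prev)
    z_prev2, z_prev = z_prev, z
maxima = np.array(maxima[50:])       # drop the transient
pairs = np.column_stack([maxima[:-1], maxima[1:]])   # (z_N, z_{N+1})
print(pairs[:5])   # the points fall on a narrow tent-shaped band
```

Scattering the columns of `pairs` against each other reproduces the tent-like graph of Fig. 5.6.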

5.2.8 Lyapunov Exponents

That the motion on the Lorenz attractor is unstable can be verified by direct computation of the Lyapunov exponents. Figure 5.7 illustrates the time dependences of the sums Σ_{m=1}^{M} ln|ū_m^{(r)}| involved in the last formula in eq. (3.9); t = Mτ. The mere fact that these dependences are linear attests to the existence of the Lyapunov exponents. The figure shows the results obtained for the parameter values r = 28, σ = 10, b = 8/3. The Lyapunov exponents calculated numerically are λ₁ = 0.897, λ₂ = 0, λ₃ = −14.563. It is easy to verify that their sum is equal to the phase volume compression coefficient: λ₁ + λ₂ + λ₃ = −σ − 1 − b = −13.666. Because one of the exponents is positive, the motion on the attractor is unstable.

Figure 5.7 Lyapunov exponents are determined by the slopes of the lines.
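The largest exponent λ₁ can be estimated with the classical two-trajectory renormalization procedure (due to Benettin and co-workers; the step sizes, renormalization interval, and perturbation size below are our assumptions, not taken from the book):

```python
import numpy as np

sigma, r, b = 10.0, 28.0, 8.0/3.0

def f(s):
    x, y, z = s
    return np.array([sigma*(y - x), r*x - y - x*z, -b*z + x*y])

def rk4_step(s, dt):
    k1 = f(s)
    k2 = f(s + 0.5*dt*k1)
    k3 = f(s + 0.5*dt*k2)
    k4 = f(s + dt*k3)
    return s + (dt/6.0) * (k1 + 2*k2 + 2*k3 + k4)

dt, tau_steps = 0.01, 100            # renormalize every tau = 1 time unit
s = np.array([0.0, 10.0, 0.0])
for _ in range(2000):                # relax onto the attractor first
    s = rk4_step(s, dt)

d0 = 1e-8
p = s + np.array([d0, 0.0, 0.0])     # small initial perturbation
log_sum, M = 0.0, 400
for _ in range(M):
    for _ in range(tau_steps):
        s, p = rk4_step(s, dt), rk4_step(p, dt)
    d = np.linalg.norm(p - s)
    log_sum += np.log(d / d0)
    p = s + (p - s) * (d0 / d)       # rescale the perturbation back to d0
lam1 = log_sum / (M * tau_steps * dt)
print(lam1)   # close to the tabulated value 0.897
```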


5.3 Elements of Cantor Set Theory

It has been said that figures rule the world; maybe. I am quite sure that it is figures which show us whether it is being ruled well or badly.
Johann Wolfgang von Goethe

. . . the harsh and inflexible isolation of mathematics makes it little suited to satisfying the interests of the uninitiated, directed only to the most common questions . . .
Felix Klein

Strange attractors are nonstandard sets whose properties cannot be described completely by conventional methods of mathematical analysis. For this reason we must devote some time to the division of mathematics that is commonly employed as a foundation for studying collections of different mathematical objects: set theory. The most profound and meaningful part of this theory deals with infinite sets. In particular, infinite point sets embedded in the real n-dimensional space Rⁿ are of concern to us. In mathematics, the concept of infinity was treated in different ways at different times. Mathematics and the applied disciplines may be said to have developed as the understanding of the nature of the infinite deepened. David Hilbert, the German mathematician, speaking at the Congress of Mathematicians in 1925, noted in this regard that ". . . the final clarification of the essence of the infinite goes beyond the narrow interests of the special sciences and, moreover, . . . has become necessary for the honor of the human mind itself. For a long time no other question has excited human thought so deeply as that of the infinite; the infinite has acted on the mind as motivating and fruitful as hardly any other idea; however, no other concept needs clarification as much as infinity" [6]. Georg Ferdinand Ludwig Philipp Cantor, a German mathematician, achieved outstanding success in advancing the theory of infinite sets. He introduced into mathematics the concepts of actual infinity and of the cardinality of a set (cardinal number). We consider these issues to the extent required to explain the fractal structure of strange attractors.

5.3.1 Potential and Actual Infinity The history of science raises before us an exciting picture of penetration of human genius into the deepest mysteries of the world, the greatest manifestation of the human intellect and examples of struggle in the name of truth. Mstislav Keldysh God exists since mathematics is consistent, and the devil exists since we cannot prove the consistency. Morris Kline


An exposition of Cantor's set theory would be incomplete if we did not pay due attention to its conceptual apparatus and the history of its formation. Cantor's times were marked by an "ideological struggle" in which leading mathematicians took part, and this history may be said to be instructive. Here are excerpts from the popular book by Vilenkin [7]. It acquaints us with a range of issues associated with the concept of "infinity" and contains a brief historical digression.

It would not be an exaggeration to say that all of mathematics derives from the concept of infinity. In mathematics, as a rule, we are not interested in individual objects (numbers, geometric figures), but in whole classes of such objects: all natural numbers, all triangles, and so on. But such a collection consists of an infinite number of individual objects. For this reason mathematicians and philosophers have always been interested in the concept of infinity. This interest arose at the very moment when it became clear that each natural number has a successor, i.e., that the number sequence is infinite. However, even the first attempts to cope with infinity led to numerous paradoxes. For example, the Greek philosopher Zeno used the concept of infinity to prove that motion was impossible! Indeed, he said, for an arrow to reach its target it must first cover half the distance to the target. But before it can cover this half, it has to cover a fourth, an eighth, etc. Since this process of halving never ends (here infinity crops up!), the arrow never leaves the bow. He proved in an identical fashion that swift Achilles never overtakes the slow tortoise. Because of these paradoxes and sophisms, the ancient Greek mathematicians refused to have anything to do with the notion of infinity and excluded it from their mathematical arguments. It should be said that the topic of infinity sometimes triggered sharp and heated debate. The famous Greek philosopher Plato treated the atomistic theory of Democritus with such intransigence that he destroyed all manuscripts of this author wherever he could find them. Before the invention of printing, this method of ideological struggle was quite effective. . . . All in all, the ancient Greeks carefully masked their application of methods using the notion of infinity, so in the 16th–17th centuries European mathematicians had to rediscover these methods.

In the Middle Ages the problem of infinity was of interest mainly in connection with arguments about whether the set of angels who could sit on the head of a pin was infinite or not. A wider use of the notion of infinity began in the 17th century, when mathematical analysis was founded. Concepts such as "infinitely large quantity" and "infinitely small quantity" were used in mathematical reasoning at every step. However, sets containing infinitely many elements were not studied at this time; what were studied were quantities which varied in such a way as eventually to become larger than any given number. Such quantities were called "potentially infinitely large," meaning that they could become as large as you please . . . It was only in the middle of the 19th century that the study of infinite sets, consisting of an infinitely large number of elements, began with the analysis of the concept of infinity. The founders of the mathematical theory of infinite sets were the Czech savant B. Bolzano (unfortunately, his main work was not published until many years after his death in 1848) and the German mathematician Georg Cantor. These famous scientists . . . turned the theory of sets into an important part of mathematics.

5.3 Elements of Cantor Set Theory

215

In the Cantor era and before him, the concept of infinity was associated with the unboundedness of a process. By the term "an infinite quantity," scientists of that time meant a variable capable of unlimited growth. For example, the second postulate of Euclid reads: any straight line segment can be extended indefinitely in a straight line. At the same time it does not claim that there exists a static object such as an endless straight line. Differentials of the first and higher orders and sums of infinite series were understood in a similar manner. The symbols ±∞ were reckoned to be abbreviated notations for limiting expressions and were not treated as separate full-fledged objects. Cantor named infinity understood in this way potential infinity. In his theoretical constructions, Cantor began to use another type of infinity, which he called completed (actual) infinity. With it, for example, all the natural numbers or the entire set of points on a line segment could be regarded as a single completed object. According to Cantor, infinity is ". . . a quantity which, on the one hand, is not unchangeable, but rather fixed and definite in all of its parts, is in truth a constant, and on the other hand at the same time exceeds in its magnitude any finite quantity of the same kind" [8]. Georg Cantor (German: Georg Ferdinand Philipp Cantor; March 3, 1845, St. Petersburg – January 6, 1918, Halle) was a German mathematician, the creator of set theory and of the theory of transfinite numbers.

Ever since Aristotle, philosophers and mathematicians had rejected the concept of a completed infinity because of the logical paradoxes it entails. One of them is Galileo's paradox. In his last work, Two New Sciences, Galileo gave two seemingly contradictory judgments about the natural numbers. The first of these is: some numbers are perfect squares, that is, squares of other integers, while the remaining numbers do not possess this property; thus, the perfect squares should be "fewer" than all the numbers. The second proposition is formulated as follows: for each natural number there exists its perfect square, and vice versa, for each perfect square there is an integer square root; therefore, the perfect squares and the natural numbers should be "equally numerous." After this reflection, Galileo concluded that statements about an equal number of objects can be valid only for finite sets. For his theory, Cantor invented techniques for comparing infinite sets by establishing a one-to-one correspondence between their elements. Then, setting up the above


correspondence between a set and its subset, he could stop regarding such situations as incorrect and inadmissible. In this context, Dauben [9] notes that Cantor ". . . gave mathematical content to the idea of actual infinity. In so doing he laid the groundwork for abstract set theory and made significant contributions to the foundations of the calculus and to the analysis of the continuum of real numbers. Cantor's most remarkable achievement was to show, in a mathematically rigorous way, that the concept of infinity is not an undifferentiated one. Not all infinite sets are the same size, and consequently, infinite sets can be compared with one another." For example, the set of points on a straight line and the set of all rational numbers are both infinite; Cantor succeeded in proving that the power of the first set exceeds the power of the second. In fact, Cantor took Galileo's paradox and turned it into a means of quantitative comparison of infinite sets. The novelty and, at the same time, the paradoxical nature of Cantor's ideas initially met with dramatic antagonism, which is often noted in reviews of the history of mathematics. Here are two more quotes from Dauben's article [9]:

But so shocking and counter-intuitive were Cantor's ideas at first that the eminent French mathematician, Henri Poincare, condemned Cantor's theory of transfinite numbers as a "disease" from which he was certain mathematics would one day be cured. Leopold Kronecker, one of Cantor's teachers and among the most prominent members of the German mathematics establishment, even attacked Cantor personally, calling him a "scientific charlatan", a "renegade", and a "corrupter of youth."

In 1831, Carl Friedrich Gauss expressed his attitude toward completed infinities in words that Cantor once called too categorical. In a letter to Heinrich Schumacher, Gauss wrote: "As for your proof, I protest against the use of infinite magnitude as something accomplished, which is never permissible in mathematics. Infinity is merely a figure of speech, the true meaning being a limit." A few words should be said about the scientific endeavor of Bernard Bolzano, Cantor's predecessor in the creation of set theory. Bolzano's major work was published many years after his death. Even before Cantor, he began the study of infinite sets, defending, following Leibniz, the objectivity of the actually infinite. He came close to introducing the concept of the power of a set and to defining an infinite set as a set whose elements can be put in one-to-one correspondence with a proper part of it. Bolzano pointed out the general character of one-to-one correspondence: the connection between the two sequences of numbers {1, 2, 3, 4 . . . } and {1, 4, 9, 16 . . . } discovered by Galileo and, according to Bolzano, still unnoticed by mathematicians. Cantor greatly appreciated the work of the Czech mathematician, while also noting its flaws. He wrote: "Maybe Bolzano is the only author who operates, to some extent, with infinite numbers; at least he repeatedly addresses them. . . To express himself in terms of definitely infinite numbers, he lacks both a general understanding of set and an exact understanding of quantity. . . It is true that both of the latter are encountered in some texts in imperfect form . . . " [8].


Now then, it is time to get familiar with the main statements of Cantor’s set theory [10].

5.3.2 Cantor's Theorem and Cardinal Numbers

Often a good new name may become a creator.
Henri Poincare

The great metanarrative (a universal system of concepts, signs, symbols, metaphors . . . aimed at creating a single type of description – Emphasis added) of Georg Cantor is the set theory, almost single-handedly created by him over about fifteen years; it resembles a work of art more than a scientific theory.
Yuri Manin [11]

Given two sets A and B with k elements each, we can set up a correspondence between each element of one set and exactly one well-defined element of the other set. The procedure of establishing a pairwise matching turns into the usual numbering procedure if one takes the aggregate of numbers {1, 2, 3, . . . , k} as one of the sets. Without requiring that the elements be looked through “to the end,” we can restrict ourselves to specifying a way of establishing the correspondence. In doing so, it is convenient to represent it as a function f that assigns to each element a ∈ A the element b = f(a) ∈ B. Formulated thus, the method of establishing a one-to-one correspondence is easily generalized to infinite sets. For example, the function f(k) = 2 ⋅ k, k = 1, 2, . . . , determines a way of numbering the even numbers by the natural numbers. A one-to-one correspondence between the points of the real axis and the open unit interval can be set up through the arctangent function:

y = π⁻¹ ⋅ arctan(x) + 1/2 ∈ (0, 1),    x ∈ (–∞, ∞).
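Both correspondences can be checked numerically. This is a minimal sketch (the helper name g is ours); the shift by 1/2 makes the image of the arctangent map exactly the interval (0, 1):

```python
import math

# One-to-one correspondences ("equivalences") from the text, checked on samples.
# f(k) = 2k matches the natural numbers with the even numbers.
evens = {k: 2 * k for k in range(1, 11)}

# g(x) = arctan(x)/pi + 1/2 maps the whole real axis into the open interval (0, 1);
# it is strictly increasing, hence one-to-one.
def g(x):
    return math.atan(x) / math.pi + 0.5

samples = [-1e6, -10.0, 0.0, 10.0, 1e6]
values = [g(x) for x in samples]
assert all(0 < v < 1 for v in values)          # image lies inside (0, 1)
assert values == sorted(values)                 # monotone, hence injective
assert len(set(evens.values())) == len(evens)   # distinct naturals -> distinct evens
print(values)
```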

We call two sets whose elements are in a one-to-one correspondence equivalent, denoting this relation by a special sign: A ∼ B. The aggregate of pairwise-equivalent sets forms an equivalence class. In the case of finite sets, each equivalence class consists of sets with a given number of elements. It is convenient to introduce a function m(A) = k that puts a k-element set A into correspondence with the number of its elements. It can be noticed that for any two finite sets, one of the relations m(A) = m(B) or m(A) < m(B) (up to interchanging A and B) is valid. The first case corresponds to A ∼ B. In the second case, there is a subset B′ of the set B such that A ∼ B′. In what follows, we use the designations B′ ⊂ B for a subset and b ∈ B for an element. Going over to infinite sets, we introduce the function m(A) whose values are special quantities indexing types of sets – cardinal numbers (cardinalities), or set powers. We assume that all equivalent sets have the same cardinal numbers, that is, A ∼ B implies m(A) = m(B). Besides, on the condition that the cardinal


5 Chaos and Fractal Attractors in Dissipative Systems

numbers are different, it can be claimed that one of them is smaller than the other; thereby we introduce a linear ordering on the set of cardinal numbers. We define the comparison rule as follows: m(A) < m(B) if A ≁ B, but there is B′ ⊂ B such that A ∼ B′. It remains to show that infinite sets having different cardinal numbers can exist. Let us prove the theorem of Cantor, from which it follows that the set of cardinal numbers itself has an infinite number of elements.

Cantor’s Theorem. For a nonempty set M, let 𝔐 be the set whose elements are all possible subsets of M. Then m(𝔐) > m(M).

Proof: We first show that there exists a subset 𝔐₁ ⊂ 𝔐 such that 𝔐₁ ∼ M. In fact, among the subsets of M there are, in particular, the one-element subsets; each of them, being (as a subset) an element of 𝔐, can be put in one-to-one correspondence with an element of M. We now show that 𝔐 is not equivalent to M. We use the method of contradiction. Assume that 𝔐 ∼ M. This means that for any element a ∈ M, there is a subset A ⊂ M such that the one-to-one correspondence a ↔ A takes place. However, this assumption leads to an inherent contradiction. Note that for each pair a ↔ A, one of two statements holds: either a ∈ A or a ∉ A. Forming 𝔐, we must go through all possible ways of combining elements into subsets. Among them is this one: the set X consists of those elements that lie outside their corresponding subsets (“corresponding” in the sense of the matching rule a ↔ A above). According to our assumption that 𝔐 ∼ M, there is an element x ∈ M such that x ↔ X. We try to answer the question: Does X contain x? It is easy to see that there is no correct answer: if yes, then no; if no, then yes. Thereby the premise is not true, and 𝔐 is not equivalent to M. Furthermore, since 𝔐 ⊃ 𝔐₁ ∼ M, we have m(𝔐) > m(M).
∎

By repeating the procedure M → 𝔐 many times to construct sets of ever larger cardinality, we can obtain an increasing sequence of cardinal numbers and the corresponding equivalence classes of sets. In the mathematical literature, one encounters the symbolic notation m(𝔐) = 2^m(M), which can be elucidated as follows. The set of subsets of a finite set with n elements splits into classes of subsets with k elements, where n ≥ k ≥ 0, with each class containing

N_k = n! / (k! (n – k)!)

elements. Summing over the classes and using the binomial formula, we find the total number of subsets:

N = Σ_{k=0}^{n} [n! / (k! (n – k)!)] ⋅ 1ᵏ ⋅ 1ⁿ⁻ᵏ = (1 + 1)ⁿ = 2ⁿ.

Since the concept of a cardinal number generalizes the concept of the number of elements of a finite set, it is natural to extend the formula N = 2ⁿ to the case of infinite sets. Let us demonstrate that, when applied to any infinite sequence, the procedure M → 𝔐 generates a set equivalent to the set of points of the unit interval [0, 1] on the real line. Such a set of points is referred to as a continuous set, or continuum. Also, recall that an infinite sequence is a set equivalent to the set of positive whole numbers; such sets are called countable.

Theorem About Countable Sets and a Continuum. The set of all subsets of a countable set is equivalent to a continuum.

Proof: For definiteness, we take the set of natural numbers itself as the countable set. We choose a subset M of this set. To achieve greater clarity, let us imagine that we have written down the elements as three sequences located one below the other: the first row is the natural numbers; the second row is the elements of M (if M includes a number, we put this number into the cell; if not, we put an asterisk); finally, the lowermost row contains unities beneath the numbers and zeros beneath the asterisks.

N      1  2  3  4  5  6  7  8  9  . . .
M      1  2  *  4  5  *  *  8  9  . . .
{ei}   1  1  0  1  1  0  0  1  1  . . .

Now, using the elements of the lower sequence, we construct a new number as a binary fraction:

x = (0.e1 e2 e3 . . .)₂ = e1 ⋅ 2⁻¹ + e2 ⋅ 2⁻² + e3 ⋅ 2⁻³ + ⋯ .

Obviously, this number belongs to the interval [0, 1]. Running over all subsets M, we can thus obtain all the numbers corresponding to the points of this interval. Thus, we have established a correspondence between the set of all subsets of a countable set and the set of binary fractions representing the numbers lying on the interval [0, 1].
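The encoding used in the proof can be sketched numerically. This is an illustrative helper (the name subset_to_x and the truncation to n digits are ours; a float can only approximate the real number):

```python
# A subset M of {1, 2, 3, ...} is encoded by its indicator digits e_i,
# which are then read as a binary fraction in [0, 1].
def subset_to_x(M, n=30):
    digits = [1 if i in M else 0 for i in range(1, n + 1)]
    return sum(e * 2.0 ** (-i) for i, e in enumerate(digits, start=1))

# The subset from the table in the text: {1, 2, 4, 5, 8, 9}
M = {1, 2, 4, 5, 8, 9}
x = subset_to_x(M)
# digits 110110011 -> 1/2 + 1/4 + 1/16 + 1/32 + 1/256 + 1/512
assert abs(x - (0.5 + 0.25 + 0.0625 + 0.03125 + 1/256 + 1/512)) < 1e-12
print(x)
```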


Note that the proof is not yet complete, since the set of binary fractions is not in one-to-one correspondence with the set of points of the interval [0, 1]: some points of this segment are represented by two fractions. For example, to the middle of the segment there correspond two binary fractions: (0.1(0))₂ = (0.0(1))₂. The former has zero in the period, the latter has unity in the period. Note that an analogous equality holds for decimal fractions with zero and the digit 9 in the period: (0.5(0))₁₀ = (0.4(9))₁₀. It can be shown that, despite this ambiguity, the statement of the theorem remains valid; we omit the corresponding proof here. ∎

The cardinal numbers (powers) of countable and continuous sets are often designated by the symbols ℵ₀ and ℵ, where ℵ (“aleph”) is the first letter of the Semitic alphabets. Thus, the result of the theorem just proved can be written as ℵ = 2^ℵ₀. We have shown that the points of the unit interval cannot be uniquely matched with the natural numbers; in other words, these points cannot be enumerated. It may seem that this is because the real points fill the interval densely, that is, there are no gaps between them: if a1 < a2, there always exists a3 such that a1 < a3 < a2. However, it would be wrong to attribute the uncountability of the continuum [0, 1] to density! Indeed, the set of points lying on the interval [0, 1] is the union of the subsets of rational and irrational numbers. But only the irrational points taken together represent a continuum; the subset of rational points is countable. This result is quite surprising, but it can be proved as follows.

Theorem About Rational Numbers. The set of rational numbers is countable.

Proof: We make an infinite table whose columns and rows are enumerated by the positive integers.
We put the rational number apq = p/q into the cell belonging to the pth row and the qth column, where p, q = 1, 2, 3, . . . (see Fig. 5.8). Using only the set of natural numbers, we can enumerate the rational numbers. One way to do this is to pass step by step from one cell to another, starting with a11 and moving along the

a11  a12  a13  a14
a21  a22  a23  a24
a31  a32  a33  a34
a41  a42  a43  a44

Figure 5.8 Sequential numbering of table cells.


diagonal segments, each of which is defined by the condition p + q = n, n = 2, 3, . . . . The transition to the (n + 1)th segment is carried out after going through the segment labeled n. Upon reaching a cell, we reduce the fraction found there. The resulting irreducible fraction receives the next label only if it coincides with none of the already enumerated fractions. In the end, a one-to-one correspondence between the set of rational numbers and the set of natural numbers is set up. ∎

Not only segments of the real axis, but also domains in finite-dimensional real spaces Rⁿ have the power of the continuum. We prove the appropriate theorem, taking the domain in the shape of a unit cube.

Theorem About Multidimensional Continua.

The set of points belonging to the unit cube

{(x1, x2, . . . , xn): 0 ≤ x1 ≤ 1, 0 ≤ x2 ≤ 1, . . . , 0 ≤ xn ≤ 1} ⊂ Rⁿ

has the cardinality of the continuum.

Proof: We represent each coordinate of a point belonging to the cube in the form of an infinite decimal fraction

xi = (0.ai1 ai2 ai3 . . . aij . . .)₁₀,    i = 1, 2, . . . , n,

where aik is the decimal digit of the coordinate xi in the kth position to the right of the decimal point. We put the sequences of decimal digits into the rows of the table shown in Fig. 5.8, filling the first n rows. We run through the cells of the table in the manner described in the preceding theorem. Finding a decimal digit in the next (nonempty) cell, we append it on the right to the decimal fraction y = (0.b1 b2 . . . bj . . .)₁₀. If a cell contains no digit, the fraction remains unchanged. For example, after mixing the digits up, the point on the plane with the coordinates x1 = 1/6 = 0.166666 . . . , x2 = 0.123789 . . . corresponds to the point with the coordinate z = 0.116263676869 . . . on the interval [0, 1] of a straight line. In his letter to Georg Cantor, Richard Dedekind asked him “to pay attention to the following drawback in the proof: if one makes the decimal expansion of a fraction unambiguous, as is necessary for an unambiguous representation of points, writing down, say, 0.2 as 0.19999 . . . , then, for example, the point 0.210608040 . . . has no inverse image in the square, for (0.2000 . . . , 0.1684 . . . ) should be taken as its prototype, which is unacceptable” [9]. This gap in the proof was immediately eliminated by Cantor through the continued fraction expansion of an irrational number. ∎
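The digit-mixing step for n = 2 can be sketched as follows (the helper name interleave is ours, not from the book):

```python
# Interleave the decimal digits of two coordinates into a single number in [0, 1]:
# b1 = a11, b2 = a21, b3 = a12, b4 = a22, ...
def interleave(d1, d2):
    """d1, d2: strings of decimal digits after the point."""
    return "0." + "".join(a + b for a, b in zip(d1, d2))

# The example from the text: x1 = 1/6 = 0.166666..., x2 = 0.123789...
z = interleave("166666", "123789")
assert z == "0.116263676869"
print(z)
```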


This result is hard to accept on the basis of an intuitive understanding of mathematics; nevertheless, it is true. Cantor wrote to Dedekind: “I see it but I don’t believe it!” [8] In particular, it follows from this theorem that it is possible to establish a one-to-one correspondence between the points of the unit square [0, 1] ⊗ [0, 1] and the points of the segment [0, 1], and the way of establishing such a match is not unique. A simple proof of this correspondence, different from the above, was suggested by J. König. Given two infinite decimal fractions that represent a point on the plane, we mix up not single digits but digit groups, each including the digits up to and including the nearest nonzero one. Then the point with the coordinates (x1 = 0.130606789 . . . , x2 = 0.100123075 . . .), after splitting the fractions into the digit groups

x1 = 0.1|3|06|06|7|8|9 . . . ,    x2 = 0.1|001|2|3|07|5|9 . . .

and mixing the groups up, biuniquely corresponds to the point y = 0.113001062063 . . . within the interval [0, 1]. By a similar method, we can also prove the theorem about multidimensional continua. In 1911, L.E.J. Brouwer, a Dutch mathematician and philosopher, proved Dedekind’s hypothesis that a one-to-one correspondence between manifolds of different dimensions is everywhere discontinuous.
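The diagonal enumeration of the rationals described earlier in this subsection can be sketched directly (the helper name enumerate_rationals is ours):

```python
from fractions import Fraction

# Walk the table a_pq = p/q along the diagonals p + q = n, labelling each
# reduced fraction only the first time it appears.
def enumerate_rationals(n_max):
    labelled = []
    seen = set()
    for n in range(2, n_max + 1):      # diagonal p + q = n
        for p in range(1, n):
            q = n - p
            r = Fraction(p, q)         # reduction happens automatically
            if r not in seen:
                seen.add(r)
                labelled.append(r)
    return labelled

first = enumerate_rationals(5)
# diagonals: 1/1; 1/2, 2/1; 1/3, 2/2 (duplicate, skipped), 3/1; 1/4, 2/3, 3/2, 4/1
assert first[:3] == [Fraction(1, 1), Fraction(1, 2), Fraction(2, 1)]
assert len(first) == 9
assert len(first) == len(set(first))   # each rational labelled exactly once
print(first)
```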

5.3.3 Cantor Sets

Mathematics has a threefold purpose. It must provide an instrument for the study of nature. But this is not all: it has a philosophical purpose and, I daresay, an aesthetic purpose.
Henri Poincaré, The Value of Science

Now, we need basic information about Cantor sets [8]. Initially, it suffices to consider the triadic Cantor set. The latter is conveniently regarded as a subset of the set of real numbers x ∈ [0, 1] obtained by performing an infinite sequence of steps.

5.3.3.1 The Triadic Cantor Set Construction
Remove the middle third (1/3, 2/3) from the segment [0, 1]; next, remove the middle thirds (1/9, 2/9) and (7/9, 8/9) from the remaining two segments [0, 1/3] and [2/3, 1]; then remove the middle thirds from the remaining four segments [0, 1/9], [2/9, 1/3], [2/3, 7/9], [8/9, 1]; and so on. The procedure described above is shown in Fig. 5.9.


Figure 5.9 Successive stages of construction of the Cantor set.
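The construction can be sketched numerically. This is a minimal illustration (the helper name cantor_stage is ours): starting from [0, 1], every interval is replaced by its two outer thirds, dropping the open middle third.

```python
def cantor_stage(intervals):
    out = []
    for a, b in intervals:
        third = (b - a) / 3.0
        out.append((a, a + third))
        out.append((b - third, b))
    return out

stages = [[(0.0, 1.0)]]
for _ in range(3):
    stages.append(cantor_stage(stages[-1]))

# After n steps there are 2^n segments, each of length 3^(-n).
assert len(stages[3]) == 8
assert abs(stages[3][0][1] - 1/27) < 1e-12
print(stages[2])  # the four segments [0,1/9], [2/9,1/3], [2/3,7/9], [8/9,1]
```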

5.3.3.2 Self-Similarity of the Cantor Set
The subset of the Cantor set consisting of the points of this set lying on the segment [0, 1/3] turns, under a threefold stretching, into the full Cantor set. In a similar way, we can get the entire Cantor set from its subsets embedded in the segments [2/3, 1], [0, 1/9], [2/9, 1/3], and so on, by a stretching supplemented with displacements. The self-similarity property is also called scale invariance (scaling).

5.3.3.3 Length (Measure) of the Cantor Set
A natural measure for a set that is a subset of the unit interval is length. It is easy to show that, for the triadic Cantor set, this characteristic equals zero. For this, it is enough to sum the lengths of all the middle thirds removed during the construction process:

1/3 + 2 ⋅ (1/9) + 4 ⋅ (1/27) + ⋯ = (1/3) Σ_{i=0}^{∞} (2/3)ⁱ = (1/3) (1 – 2/3)⁻¹ = 1.

Thus, the length of what is left is 1 – 1 = 0. Note that the Cantor set, as a set of measure zero, is similar in this sense to a finite or countable sequence of points.

5.3.3.4 Power of the Cantor Set
It may seem that, since the Cantor set, like a numeric sequence, is a set of isolated (separated by intervals) points of the real axis, it must be countable. We show that this assertion is false.

Cantor’s Power Set Theorem.

The Cantor set has the power of the continuum.

Proof: It suffices to prove that there is a one-to-one correspondence between the elements of this set and the points of the unit interval. We represent the numbers x ∈ [0, 1] in ternary notation, that is, in the form of ternary fractions written using only the digits 0, 1 and 2. It is easy to notice that, in constructing the Cantor set, the points corresponding to ternary fractions containing the digit 1 are removed. The remaining points of the Cantor set correspond to all possible ternary fractions containing only the digits 0 and 2. Each such fraction corresponds to a unique, always existing binary fraction obtained from the ternary one by replacing the digit 2 → 1. The set of these fractions is in one-to-one correspondence with the set of points of the unit interval [0, 1], whose points can be represented by numbers in binary notation. ∎
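The map used in the proof can be sketched for a finite digit string (the helper name ternary_02_to_binary is ours; floats truncate the fractions):

```python
# A ternary fraction with digits 0 and 2 (a Cantor-set point) goes to the
# binary fraction obtained by replacing every digit 2 with 1.
def ternary_02_to_binary(digits):
    assert set(digits) <= {"0", "2"}
    x = sum(int(d) * 3.0 ** (-i) for i, d in enumerate(digits, 1))        # Cantor point
    y = sum((int(d) // 2) * 2.0 ** (-i) for i, d in enumerate(digits, 1)) # its image
    return x, y

x, y = ternary_02_to_binary("2020")
# (0.2020)_3 = 2/3 + 2/27 ; (0.1010)_2 = 1/2 + 1/8
assert abs(x - (2/3 + 2/27)) < 1e-12
assert abs(y - (1/2 + 1/8)) < 1e-12
print(x, y)
```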


5.3.3.5 The Fractional Dimension of the Cantor Set
Considering different submanifolds of finite-dimensional spaces, we can usually determine their dimension easily. In doing so, we relate the dimension to the number of independent parameters required for running through all points of the subset. However, there is no intuitive solution to the question of the dimension of some nonstandard sets, including the Cantor set. To find the dimension of such a set, we formulate a procedure using minimum coverings of the manifold with open spheres. We define the open sphere of radius ε centered at the point x0 = (x10, x20, . . . , xn0) ∈ Rⁿ as the subset

O_ε(x0) = {x ∈ Rⁿ : (Σ_{i=1}^{n} (xi – xi0)²)^{1/2} < ε} ⊂ Rⁿ.

This is a segment in R¹, a circle in R², a sphere (in the usual sense) in R³ and so on. An arbitrary covering of the set G ⊂ Rⁿ is a union of a finite number of O_ε-spheres containing G as a subset:

G ⊂ ∪_{i=1}^{N} O_ε(xi).

A minimum covering is a covering with the smallest (for a given value of ε) number of spheres. Consider the relationship between the number N_ε of the minimum-covering spheres and the radius ε, varying ε. It is not difficult to see that N_ε ⋅ ε ≈ C (and hence N_ε ≈ C ⋅ ε⁻¹) for a segment (a set of dimension d = 1); N_ε ⋅ ε² ≈ C and N_ε ≈ C ⋅ ε⁻² for an element of a surface (a set of dimension d = 2); N_ε ≈ C ⋅ ε⁻³ for an element of volume (a set of dimension d = 3); and so on. Summing up these expressions, we can derive a formula relating the number of elements of the covering and the dimension of the covered set:

N_ε ≈ C ⋅ ε⁻ᵈ,    d = lim_{ε→0} ln N_ε / ln(ε⁻¹).

With some simplifications, we have described the procedure of introducing the Hausdorff–Besicovitch dimension. We shall return to this concept below. Using this formula, we determine the dimension of the triadic Cantor set. Assigning values of ε that are powers of one-third, we find

ε = 1/3 → N_ε = 2,    ε = 1/9 → N_ε = 4,    ε = 1/27 → N_ε = 8.

Based on the easily guessed pattern, we calculate the dimension, which in this case is fractional:


d = lim_{ε→0} ln N_ε / ln(ε⁻¹) = lim_{i→∞} ln(2ⁱ) / ln(3ⁱ) = ln 2 / ln 3 = 0.63 . . .

Note that there are other definitions of dimension applicable to sets with a complex structure. We construct coverings of the set G ⊂ Rⁿ using closed spheres. The definition of a closed sphere comes from the definition of the open sphere O_ε given above by replacing the condition < ε with ≤ ε. The number n is taken as the dimension of the set (in this case called topological) if, for sufficiently small ε, every covering necessarily contains points belonging simultaneously to n + 1 elements of the covering, while points common to n + 2 or more elements need not exist. It is not hard to show that for standard subsets of R³ the following statements are valid: n = 0 for a system of isolated points, n = 1 for a line, n = 2 for a plane domain, n = 3 for a spatial domain. The topological dimension of the triadic Cantor set is zero. Various properties of other sets of fractional dimension, also known as fractal sets or fractals, will be discussed in more detail later.
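The covering estimate above can be verified with a simple box count. This is an illustrative sketch (the helper names cantor_midpoints and count_boxes are ours): we cover a stage-n pre-fractal with boxes of size ε = 3⁻ⁱ and recover d = ln N_ε / ln(1/ε) = ln 2 / ln 3.

```python
import math

# Midpoints of the 2^n segments of the nth construction stage; midpoints avoid
# the ambiguity of endpoints that lie exactly on box boundaries.
def cantor_midpoints(n):
    intervals = [(0.0, 1.0)]
    for _ in range(n):
        intervals = [seg for a, b in intervals
                     for seg in ((a, a + (b - a) / 3), (b - (b - a) / 3, b))]
    return [(a + b) / 2 for a, b in intervals]

def count_boxes(points, eps):
    return len({math.floor(p / eps) for p in points})

pts = cantor_midpoints(8)
for i in (1, 2, 3):
    eps = 3.0 ** (-i)
    n_eps = count_boxes(pts, eps)
    assert n_eps == 2 ** i  # N_eps = 2, 4, 8 for eps = 1/3, 1/9, 1/27
    d = math.log(n_eps) / math.log(1.0 / eps)
    assert abs(d - math.log(2) / math.log(3)) < 1e-9
print(math.log(2) / math.log(3))
```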

5.4 Cantor Structure of Attractors in Two-Dimensional Mappings

Beyond a certain point there is no return. This point has to be reached.
Franz Kafka, Reflections on the Right Path

In Section 3.1.2, we described, for autonomous systems, a general approach to passing from dynamical systems with continuous time to mappings. It consists in introducing the Poincaré map, which relates the coordinates of successive intersection points of the phase trajectory with some surface. In contrast to the mapping zN+1 = f(zN), the Poincaré map for the Lorenz model allows an accurate description over large times. However, because the phase-volume compression occurs very fast, this mapping is hard to visualize. For this reason, Michel Henon [12] took an artificially constructed quadratic mapping as the object of study; it has the same qualitative features as the Poincaré map for the Lorenz model. The Henon map is one of the first sufficiently completely investigated mappings with a strange attractor.

5.4.1 The Henon Map
Consider a sequence of three transformations: an inflection stretching T′, similar to the transformation carried out by the one-dimensional tent map, which preserves reversibility and phase volume; a compression T′′ along one of the axes (it reduces the phase volume); and a mirror reflection T′′′ about the axis x = y (Fig. 5.10):

T′: x′ = x, y′ = y + 1 – ax²;    T′′: x′′ = bx′, y′′ = y′;    T′′′: x′′′ = y′′, y′′′ = x′′.


Figure 5.10 The deformations caused by the transformations T′, T′′ and T′′′.

We numerically iterate the resulting mapping

T = T′′′ T′′ T′:    xN+1 = yN + 1 – a xN²,    yN+1 = b xN

for the parameter values a = 1.4, b = 0.3. For convenience, we put the starting point at the origin (see Fig. 5.11). Plotting the points produced by each iteration, we observe the image of the attractor gradually emerging as an endless set of

Figure 5.11 The Henon attractor.


quasiparallel curves. The domains highlighted in Fig. 5.11a–c are sequentially shown on an enlarged scale in Fig. 5.11b–d. Analyzing the graphic image of the attractor, we reveal the presence of both self-similarity and a Cantor structure in cross sections perpendicular to the curves. The Henon map is one-to-one (reversible) and contracting. This follows from the fact that the Jacobian of the transformation is constant and less than unity in absolute value:

| ∂xN+1/∂xN   ∂xN+1/∂yN |
| ∂yN+1/∂xN   ∂yN+1/∂yN | = –b.

The inverse mapping can be found explicitly:

T⁻¹:    xN = b⁻¹ yN+1,    yN = xN+1 – 1 + a b⁻² yN+1².
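These statements can be checked numerically. This is a minimal sketch with the parameter values used above (a = 1.4, b = 0.3; the helper names are ours): iterate the map from the origin and verify that the explicit inverse undoes one step.

```python
a, b = 1.4, 0.3

def henon(x, y):
    return y + 1.0 - a * x * x, b * x

def henon_inv(x1, y1):
    # x_N = y_{N+1}/b ; y_N = x_{N+1} - 1 + a * (y_{N+1}/b)^2
    x = y1 / b
    return x, x1 - 1.0 + a * (y1 / b) ** 2

x, y = 0.0, 0.0
for _ in range(1000):          # transient: the orbit settles onto the attractor
    x, y = henon(x, y)

x1, y1 = henon(x, y)
x0, y0 = henon_inv(x1, y1)
assert abs(x0 - x) < 1e-9 and abs(y0 - y) < 1e-9   # T^(-1) T = identity
assert abs(x) < 2 and abs(y) < 2                   # orbit stays bounded
print(x, y)
```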

5.4.2 The Ikeda Map
Another example of a contracting and mixing reversible mapping is the Ikeda map [13]. It is convenient to define it as a mapping of the complex plane onto itself (see Fig. 5.12):

I:    zN+1 = A + B zN exp(i|zN|² + iφ₀),    zN = aN + i bN ∈ C.

The Ikeda map describes the conversion of the slowly varying amplitude of a light signal passing through a ring resonator with a nonlinear optically active medium. Figure 5.12a illustrates how a phase droplet, initially having the shape of a circle, is modified during the first and second applications of the mapping. A typical multispiral strange attractor of the Ikeda map is displayed in Fig. 5.12b.

Figure 5.12 Successive deformations of a circle (a) and a multi-spiral strange attractor (b).

It as well as the


attractor of Henon has the appearance of an endless set of quasiparallel curves. The Henon and Ikeda maps correspond to different types of “baker’s transformations”: in the first case, the baker folds the dough sheet in half before rolling it, while in the second case, he gives it the shape of a spiral “roll.” Below we consider the mathematical apparatus required for describing the self-similar structure of strange attractors.
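An iteration of the Ikeda-type map in the form given above can be sketched as follows; the parameter values A, B, φ₀ below are illustrative choices, not taken from the book.

```python
import cmath

# z_{N+1} = A + B z_N exp(i|z_N|^2 + i phi0); |B| < 1 makes the map contracting
# (the area contraction factor per step is B^2).
A, B, phi0 = 1.0, 0.9, 0.4

def ikeda(z):
    return A + B * z * cmath.exp(1j * abs(z) ** 2 + 1j * phi0)

z = 0.1 + 0.1j
orbit = []
for _ in range(2000):
    z = ikeda(z)
    orbit.append(z)

# |z_{N+1}| <= |A| + B|z_N|, so the orbit remains in a bounded region.
assert all(abs(w) < abs(A) + 10 for w in orbit[100:])
print(orbit[-1])
```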

5.4.3 An Analytical Theory of the Cantor Structure of Attractors
The Cantor structure of the attractor in the Henon model can be described analytically by calculating a function for the attractor shape as an expansion in powers of a small parameter. We use the method described in Refs [14, 15], applying it to the Henon model. Performing the substitution zN = yN/b, we can write the Henon map in the form

T:    xN+1 = 1 – a zN+1² + b zN,    zN+1 = xN.

This mapping is a special case of the mapping

xN+1 = g(ε h(xN, zN), zN+1),    zN+1 = f(xN, zN),

if one puts f(x, z) = x, h(x, z) = z, g(x, z) = 1 + x – a z² and ε = b. We assume that the shape of the attractor is described by the function x = E(z). This function must satisfy an equation whose meaning is as follows: if a point lying on the attractor is subjected to the mapping, the resulting point must also belong to the attractor:

E(f(E(z), z)) = g(ε h(E(z), z), f(E(z), z)).

We expand the function E(z) in powers of the small parameter ε:

x = E(z) = Σ_{k=0}^{∞} εᵏ Ek(z).

Any of the coefficients Ek(z) can be found explicitly. If we restrict ourselves to the zeroth-order approximation, the attractor is approximated by a curve, and instead of a two-dimensional mapping we have a one-dimensional mapping defined on this curve:

x = E0(z),    E0(z) = g(0, z),    zN+1 = φ(zN),    φ(z) = f(g(0, z), z).


The coefficients of the first- and second-order terms of the expansion have the form

E1(φ(z)) = g^(1,0)(0, φ(z)) h(E0(z), z),

E2(φ(z)) = (1/2) g^(2,0)(0, φ(z)) (h(E0(z), z))² + {g^(1,1)(0, φ(z)) f^(0,1)(E0(z), z) h(E0(z), z) + g^(1,0)(0, φ(z)) h^(1,0)(E0(z), z) – E1′(φ(z)) f^(1,0)(E0(z), z)} E1(z),

E1′(φ(z)) = {g^(1,1)(0, φ(z)) h(E0(z), z) + g^(1,0)(0, φ(z)) ⋅ {h^(0,1)(E0(z), z) + h^(1,0)(E0(z), z) E0′(z)}}/φ′(z)

(the prime denotes differentiation with respect to z). By differentiating the explicit expression for E1(z), we can calculate the derivative E1′(z). The theoretical formulas describe the structure of the attractor well far away from the turning points of the curve, where φ′(z) = 0, because one of these formulas contains φ′ in the denominator. In the zeroth-order approximation, the “curve” of the attractor is described by the equation x = E0(z); as can be seen, it represents part of a parabola. In any higher-order approximation, which retains the terms up to a certain order εᵏ, the “curve” is described parametrically. As an example, we write the parametric equations of the first (top line) and second (bottom line) orders:

z = φ(ξ),    x = E0(φ(ξ)) + ε E1(φ(ξ)),
z = φ(φ(ξ)),    x = E0(φ(φ(ξ))) + ε E1(φ(φ(ξ))) + ε² E2(φ(φ(ξ))).

The function φ(z) is not monotonic, so its inverse is not unique. Thus, a “graph” of the attractor in any order in ε is given by a single-valued function of the parameter. However, in the Cartesian coordinates z0x, it is the graph of a multivalued function, with the degree of multivaluedness increasing as the number of terms retained in the expansion in powers of ε grows. In the limit, we get infinite multivaluedness, which corresponds to the appearance of the Cantor set structure. Figure 5.13 compares the analytical and numerical results.
In constructing the graphic image of the Henon attractor, we use formulas that keep terms of different orders in powers of ε. In the zeroth approximation, we obtain a parabola whose fragments are marked in Fig. 5.13a,b by the index (α). The curve (β), corresponding to the first approximation, consists of two almost parabolic segments located on opposite sides of the parabola (α) at distances of order ε. In the second approximation, we get the curve (γ), consisting of two pairs of segments, with each pair located near a certain segment of the curve (β). The deviation of either segment of such a pair from the nearest segment of the curve (β) is proportional to ε². Continuing to construct the curves corresponding to the cubic and subsequent approximations, we finally arrive at the set of quasiparallel curves with the Cantor


Figure 5.13 Analytical approximation and numerical results for the Henon attractor.

structure in cross section. The numerical results obtained by iterating the Henon map are plotted in Fig. 5.13b. The bold segments of the line (δ) practically coincide with the segments of the curve (β).

5.5 Mathematical Models of Fractal Structures

Every equation halves the number of readers.
Stephen Hawking

A numerical analysis of the Lorenz attractor and of chaotic (strange) attractors in other models shows that these are nonstandard subsets of points of the phase space, with a structure similar to that of the triadic Cantor set discussed earlier. For example, the structure of a typical chaotic attractor in a three-dimensional phase space is as follows: traveling within a two-dimensional subset of points, we do not go beyond the attractor; however, shifting in the direction perpendicular to this “leaf,” we run along a curve of which only the points of a Cantor subset belong to the attractor. The attractor thus represents a layered, onion-like structure with a Cantor distribution of the layers. An attractor in a higher-dimensional space can be arranged in a more intricate way; its dimension (roughly equal to the number of positive Lyapunov exponents) can vary significantly. The fact that the attractor has a Cantor structure means that it is locally self-similar: subsets belonging to it are related to each other by scaling transformations. Nonstandard sets of this type, known as fractals, are used as


mathematical models in many fields of the natural sciences. Currently, a general theory of fractals is being successfully developed to study real physical objects, in particular chaotic attractors. An important part of this theory is the aggregate of mathematical techniques for solving key practical problems such as the quantitative description of fractals, the comparison of theoretical, numerical and experimental results, and the construction of different models. Next, we focus our attention on these issues, omitting specifics of dynamical system theory.

5.5.1 Massive Cantor Set
Given the attractor only as a set of points in phase space, we actually exclude the character of motion on the attractor from consideration. Meanwhile, in the steady state, the phase point visits different areas of the attractor with different frequencies. This can be taken into account by assigning a numeric distribution of the visit frequency on the attractor; after appropriate normalization, we obtain the probability distribution of the visits. The procedure described is a special case of the procedure of introducing a measure on a fractal set. It will be described in greater detail later. The science of fractals has established many useful patterns by examining the simplest models, which accurately reflect the properties of real systems on a qualitative level. Next, we consider a number of such models, keeping in mind the necessity of developing the mathematical apparatus. Let us build the simplest fractal with a given measure (“mass”) on it, generalizing to some extent the procedure of constructing the triadic Cantor set. Suppose a rod of unit length l0 = 1 has mass μ0 = 1. We break the rod into two parts of mass 1/2 each and shorten both by forging to the length 1/3. In the next step, we again cut each of the resulting pieces in half and subject them to forging. As a result, we get four rods with masses 1/4 and lengths 1/9. Repeating the procedure n times, we get a pre-fractal of the nth generation. It consists of N = 2ⁿ rods having masses μi = 2⁻ⁿ and lengths li = 3⁻ⁿ, where i = 1, . . . , N (see Fig. 5.14).

Figure 5.14 Successive stages of construction of the massive Cantor set.

We establish


a connection between the mass and the length and watch how the density ρ = μ/l behaves. It is easy to show that

μi = liᵅ,    ρi = μi/li = liᵅ⁻¹ → ∞ as li → 0,    α = ln 2/ln 3 < 1.

Thus, if we examine the fractal properties using a tool of limited resolving power, we see the mass fragmented into elements with certain “densities.” However, on increasing the resolution, we see more elements (finer crushing), and the density of each element becomes larger. The exact distribution of the density is singular and is described by a scaling factor (a singularity indicator) α = ln 2/ln 3 < 1. In the mathematical literature, this quantity bears the name of the Lipschitz–Hölder exponent. The relation between the mass and the length indicates the fractal nature of the distribution: where for a normal linear distribution μ = l¹ an integer exponent should stand, we have a fractional number. Note that although α has the same value as the fractional dimension introduced earlier for the carrier of the mass distribution – the triadic Cantor set – this quantity has a different meaning.
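The mass–length scaling can be verified directly from the construction; a minimal numerical check (variable names ours):

```python
import math

# At generation n, each of the 2^n rods has mass 2^(-n) and length 3^(-n),
# so mu_i = l_i^alpha with alpha = ln 2 / ln 3, and the density diverges.
alpha = math.log(2) / math.log(3)

for n in range(1, 12):
    mu, length = 2.0 ** (-n), 3.0 ** (-n)
    assert abs(mu - length ** alpha) < 1e-12      # mu_i = l_i^alpha
    rho = mu / length                              # density rho_i = (3/2)^n
    assert abs(rho - 1.5 ** n) < 1e-6              # grows without bound

print(alpha)
```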

5.5.2 A Binomial Multiplicative Process

The Besicovitch style is architectural. He builds out of simple elements a delicate and complicated architectural structure, usually with a hierarchical plan, and then, when the building is finished, the completed structure leads by simple arguments to an unexpected conclusion. Every Besicovitch proof is a work of art, as carefully constructed as a Bach fugue.
Freeman Dyson

A distinguishing feature of the model presented above is that the heterogeneity of the mass distribution on the fractal set is a consequence of the presence of "voids" in the carrier of the set itself. Let us show that a heterogeneous self-similar fractal can be built on a dense continuous carrier – the unit interval. To this end, we take a certain quantity to be uniformly distributed on the interval [0, 1]. We call it a population. Breaking the interval into two halves, we change the character of the distribution by placing the population fraction π_0 = p (0 < p < 1) on the left half and the fraction π_1 = 1 − p on the right one. During the next step, we do analogous redistributions for each half. As a result, we obtain four cells with the distribution

Ω_2 = {μ_i}_{i=0}^{2^2−1} = {π_0 π_0, π_0 π_1, π_1 π_0, π_1 π_1}.

In the third generation, we have eight cells:

Ω_3 = {μ_i}_{i=0}^{2^3−1} = {π_0 π_0 π_0, π_0 π_0 π_1, π_0 π_1 π_0, π_0 π_1 π_1, π_1 π_0 π_0, π_1 π_0 π_1, π_1 π_1 π_0, π_1 π_1 π_1}.

5.5 Mathematical Models of Fractal Structures


Continuing the process of partitions, we obtain a greater number of shorter segments, each of which contains a progressively smaller fraction of the initial population. Now, it is useful to discuss the properties of the function of x that results from summing the distribution from the left end of the interval to an arbitrary point x:

M(x) = Σ_{i=0}^{[x⋅2^n]} μ_i.

Only the contributions of the cells falling into the interval [0, x] are summed here. The graph of the function M(x) is displayed in Fig. 5.15. Plotting the graph, we have used data of the pre-fractal distribution resulting from the eleventh step of the multiplicative process for p = 0.25. It can be shown that if one compresses the entire graph horizontally by half and then compresses it vertically by multiplying the values of M(x) by p, the result will coincide with the part of the graph over the interval [0, 1/2] on the x-axis (a property of scale invariance). If the population distribution built above is assumed to be a probability distribution density, the function M(x) plays the role of the distribution function. This function assigns to a segment [x′, x′′] its measure μ([x′, x′′]) = M(x′′) − M(x′). The scale invariance property enables a part of the graph of the function to be turned into the entire graph using affine transformations:

M(x) = p M(2x) for 0 ≤ x < 1/2;   M(x) = p + (1 − p) M(2x − 1) for 1/2 ≤ x ≤ 1.
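A short sketch (with p = 0.25, as for Fig. 5.15) that builds the pre-fractal measure and checks the affine relation M(x) = p M(2x) at dyadic points of [0, 1/2]:

```python
p, n = 0.25, 12
N = 2**n

# mu[i]: measure of the ith cell; each binary digit of i contributes a
# factor p (digit 0, left half) or 1-p (digit 1, right half)
mu = [1.0]
for _ in range(n):
    mu = [m * f for m in mu for f in (p, 1.0 - p)]

def M(x):
    """Distribution function: total measure of the cells left of x."""
    return sum(mu[:int(x * N)])

for x in (0.125, 0.25, 0.375, 0.5):
    assert abs(M(x) - p * M(2 * x)) < 1e-12
print(M(0.5))  # total measure of the left half equals p
```

At non-dyadic points the equality holds only in the limit n → ∞; the pre-fractal obeys it exactly on the dyadic grid of its own generation.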

Figure 5.15 Self-similar structure of the distribution function.

Constructing the fractal population distribution, at the nth step (or, as they say, in the nth generation), we obtain N = 2^n cells. The latter can be enumerated by the index i = 0, . . . , N − 1. The length of each cell is equal to ε = 2^−n, while the population

fraction attributable to the cell is μ_i = π_0^k π_1^(n−k). It is easy to notice a regularity: k is the number of zeros in the n-digit binary fraction representing the number x = i/2^n (the left end of the corresponding segment). The number of cells with the same population fraction (measure) μ_i = π_0^k π_1^(n−k) is equal to the number of different ways of placing zeros and ones in an n-digit binary number with k zeros. This is the number of combinations of n taken k at a time, calculated by the standard formula of combinatorial analysis. Introducing the notation ξ = k/n, we write down the number of the cells and the measure of a cell as functions of ξ:

N_n(ξ) = C(n, ξn) = n! / ((ξn)! ([1 − ξ]n)!),

μ_ξ = B^n(ξ),   B(ξ) = π_0^ξ π_1^(1−ξ) = p^ξ (1 − p)^(1−ξ).
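The combinatorial count can be confirmed numerically; a small sketch for n = 10, with the binary addresses of the cells standing in for the segments:

```python
import math
from collections import Counter

n = 10
# k = number of zero digits in the n-digit binary address of cell i;
# every cell with the same k carries the same measure p^k (1-p)^(n-k)
zeros = Counter(n - bin(i).count("1") for i in range(2**n))

for k in range(n + 1):
    assert zeros[k] == math.comb(n, k)
print(zeros[3], math.comb(10, 3))  # -> 120 120
```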

We combine the segments with the same measure μ_ξ and designate the obtained subset of points of the unit interval as G_n(ξ). Thus, we have broken the unit interval into disjoint subsets:

∪_{ξ∈[0,1]} G_n(ξ) = [0, 1];   G_n(ξ) ∩ G_n(ξ′) = ∅ for ξ ≠ ξ′.

In the limit n → ∞, we have G_n(ξ) → G(ξ), where G(ξ) is a fractal set. Let us find its fractal dimension. For this purpose, we employ the previously mentioned procedure of finding the Hausdorff–Besicovitch dimension; however, now we define it more strictly.
Abram Besicovitch (January 11, 1891, Berdyansk – November 2, 1970, Cambridge, UK) was a Russian and British mathematician. His main results are related to the theory of almost periodic functions, measure and integration theory, and various questions of the theory of functions.

The Hausdorff–Besicovitch Fractal Dimension. In a simplified form, the procedure looks as follows: the set G, included in a Euclidean space, is covered by ε-spheres of the appropriate dimension (by segments on the real axis, by circles in the plane, by spheres in three-dimensional space, etc.). In doing so, the covering is constructed so that the smallest number of spheres is used (the minimum covering). Next,


the number ε^d, where d is a parameter, needs to be assigned to each sphere. Then these values are summed over the covering. If the spheres are identical, ε^d should be multiplied by the number of the spheres N(ε). The number D is referred to as the Hausdorff–Besicovitch (fractal) dimension if, as ε → 0,

Σ_G ε^d = N(ε) ε^d → 0 for d > D,   and → ∞ for d < D.

If the limit N(ε) ε^D → const exists as ε → 0, we can write a formula for the direct estimation of the fractal dimension:

D = − lim_{ε→0} (ln N(ε)/ln ε).

We apply this procedure to the set G(ξ) at a fixed ξ. The segments of length ε_n = 2^−n are convenient to use as elements of the covering. Then ε → 0 as n → ∞, and the dimension is equal to

D(ξ) = − lim_{n→∞} [ln N_n(ξ) / ln 2^−n].

We apply this procedure to the set G(. ) at a fixed . . The segments of the length $n = 2–n are convenient to use as elements of the covering. Then $ → 0 if n → ∞, and the dimension is equal to 2   D(. ) = – lim ln Nn (. ) ln 2–n . n→∞

Applying Stirling’s formula n! ≈ √(2π) n^(n+1/2) e^−n, which holds for large n, we find

N_n(ξ) = C(n, ξn) ≈ exp{−n(ξ ln ξ + (1 − ξ) ln(1 − ξ))} / √(2πn ξ(1 − ξ)).

Forming the expression

ln N_n(ξ) / ln 2^−n = [− ln √(2πn ξ(1 − ξ)) − n(ξ ln ξ + (1 − ξ) ln(1 − ξ))] / (−n ln 2),

we see that for n → ∞ the first term in the numerator of the right side gives no contribution, because its growth rate ∼ ln n is slower than the growth rate of the second term ∼ n. The result is

D(ξ) = f(ξ) = − lim_{n→∞} ln N_n(ξ)/ln 2^−n = − [ξ ln ξ + (1 − ξ) ln(1 − ξ)] / ln 2.
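The limit can be checked against exact binomial coefficients; a sketch comparing the finite-n estimate with the entropy formula:

```python
import math

def f_exact(xi):
    """Limiting dimension of G(xi): the binary entropy of xi."""
    return -(xi * math.log(xi) + (1 - xi) * math.log(1 - xi)) / math.log(2)

def f_finite(xi, n):
    """Finite-n estimate -ln N_n(xi) / ln 2**-n with N_n(xi) = C(n, xi*n)."""
    k = round(xi * n)
    return math.log(math.comb(n, k)) / (n * math.log(2))

xi = 0.3
for n in (100, 1000, 10000):
    print(n, f_finite(xi, n))   # slowly approaches the limit
print("limit:", f_exact(xi))    # ~0.8813
```

The slow convergence reflects the discarded ∼ ln n term in the numerator above.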

Thus, having presented the unit interval as a union of nonoverlapping fractal sets, we have shown that each of them has its own fractal (Hausdorff) dimension. Such structures are called multifractals.
Considering the singular measure on a fractal by the example of the massive Cantor set, we have ascertained that the mass and length of an elementary segment are related by a power law with a fractional exponent (the Lipschitz–Hölder exponent). A similar connection also takes place for the elements of the fractal distribution at hand. But now the value of the exponent depends on which of the sets G(ξ) the


segment belongs to; in other words, it is a function of ξ. Assuming that α is defined by the relation

M(x(ξ) + ε) − M(x(ξ)) = μ_ξ = ε^α,

and expressing μ_ξ and ε through n,

μ_ξ = B^n(ξ) = [p^ξ (1 − p)^(1−ξ)]^n,   ε = 2^−n,

we find

α(ξ) = ln μ_ξ / ln ε = n ln B(ξ) / (−n ln 2) = − [ξ ln p + (1 − ξ) ln(1 − p)] / ln 2.

It can be shown that

α_min ≤ α ≤ α_max,   α_min = α(0) = −ln(1 − p)/ln 2,   α_max = α(1) = −ln p/ln 2.

Given that α and ξ are related by a functional dependence, we can turn any function of ξ into a function of α via a change of variables. In addition, the index α can be used for indexing the fractal subsets of the unit interval:

ξ ⇒ α;   f̃(ξ) ⇒ f(α) ≡ f̃(ξ(α));   G(ξ) ⇒ G(α),   ∪_α G(α) = [0, 1].

Thus, the fractal distribution produced by the binomial process can be characterized by the multifractal spectrum function f(α) that establishes a connection between the Hausdorff–Besicovitch dimension and the Lipschitz–Hölder exponent.
Felix Hausdorff (November 8, 1868, Breslau – January 26, 1942, Bonn) was a German mathematician, one of the founders of topology. He introduced and studied the notion of a Hausdorff space and the Hausdorff dimension. He contributed to set theory, functional analysis, the theory of topological groups, and number theory.

The binomial multiplicative process transforms the originally uniform measure distribution into a nonuniform one. It is of interest to determine the degree of nonuniformity attained in the limit n → ∞. To this end, we estimate the measure fraction attributable to each of the fractal subsets in the asymptotics of large n. Using the earlier found


Figure 5.16 The graph of f(α) and the experimental data.

approximate expression for N_n(ξ), as well as the exact expression for μ_ξ, we can get the following result:

M(G_n(ξ)) = N_n(ξ) μ_ξ ≈ [1/√(2πn p(1 − p))] exp{−n (ξ − p)^2 / (2p(1 − p))}.

For n ≫ 1, the graph of this distribution has a sharp peak in the neighborhood of the point ξ = p. Thus, virtually all of the measure is concentrated on the subsets with |ξ − p| < σ, where σ = [p(1 − p)/n]^(1/2) → 0 as n → ∞.
The comparison of the theoretical and experimental results shows that the curve f(α) for the binomial multiplicative process (after fitting a value of p) can accurately describe the multifractal spectrum for one-dimensional sections of the dissipation field in fully developed turbulent flows. Figure 5.16, taken from the paper [16], represents the graph of the function f(α) (solid line) that describes the binomial multiplicative process with p = 0.7; squares depict the multifractal spectrum for one-dimensional sections of the dissipation field in a fully developed turbulent flow. This spectrum is obtained from experimental data.
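The concentration of the measure near ξ = p can be seen already at moderate n; a small sketch (p = 0.25):

```python
import math

p, n = 0.25, 400
sigma = math.sqrt(p * (1 - p) / n)

# N_n(xi) * mu_xi is the binomial probability of k zeros among n digits;
# sum it over the cells with |xi - p| < 3*sigma
total = 0.0
for k in range(n + 1):
    if abs(k / n - p) < 3 * sigma:
        total += math.comb(n, k) * p**k * (1 - p)**(n - k)

print(total)  # almost all of the measure, ~0.997
```

As n grows, the same 3σ window shrinks as n^(−1/2) while still capturing essentially the whole measure.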

5.5.3 The Spectrum of Fractal Dimensions

Having considered a particular example of a fractal distribution, we have ascertained that knowledge of a single fractal dimension does not provide a complete description of it. For this, it is necessary to find the spectrum (distribution) of dimensions whose components are the dimensions of the individual fractal subsets. This assertion extends to other fractal distributions, particularly chaotic attractors.
Suppose P phase points are distributed in some limited domain of phase space. We cover the domain by cells (by ε-balls or ε-cubes) and assume that P_i points fall into the ith cell. The quantity μ_i = P_i/P can be regarded as the probability of a point falling into the ith cell. The method of seeking μ_i is easy to generalize to the case when the


Figure 5.17 Covering the three-dimensional attractor with cubic elements.

points comprise a phase trajectory. To this end, we should take the ratio of the length of the part of the trajectory falling into the ith cell to the total length of the path. Strictly speaking, the path length should be considered as tending to infinity; in practice, however, we usually have to deal with a finite but large enough trajectory segment, found by numerical simulation. Figure 5.17 shows the covering of a chaotic attractor in three-dimensional phase space.
Define the quantities τ(q), usually called mass exponents, by the relation

Σ_{i=1}^{N(ε)} μ_i^q ε^d = N(q, ε) ε^d → 0 for d > τ(q),   and → ∞ for d < τ(q),   as ε → 0,

where q is a real parameter; the summation is over all N(ε) cells within the covering that contain points of the fractal set. In general, the mass exponent τ(q) depends on the chosen value of the order of the moment q. The fractal probability distribution (the probability measure) is characterized by the aggregate of the exponents at different q. Varying the values of q, we can learn what different power laws the dependence μ_i ∼ ε^α may obey. If the limit N(q, ε) ε^τ(q) → const exists as ε → 0, we can write:

τ(q) = − lim_{ε→0} (ln N(q, ε)/ln ε).

It is easy to see that for q = 0 the mass exponent is equal to the Hausdorff–Besicovitch dimension found earlier. In this particular case, information about the probability distribution turns out to be lost, because we can register only the presence or absence of points of the fractal set in a cell. (One can say that we see a “gray” fractal as “black and white.”) In other words, τ(0) is the dimension of the carrier of the fractal distribution.


In order for the mass exponent at any q to have the meaning of a fractal dimension, we should make the renormalization

D(q) = τ(q)/(1 − q).

This formula is the definition of the spectrum of Renyi dimensions. Let us make sure that the above renormalization is needed. For this, we find the spectrum of the Renyi dimensions for a fractal set on which the probabilities are distributed uniformly, i.e. μ_i = μ holds for any i = 1, . . . , N(ε). From the normalization condition it follows that μ_i = μ = 1/N(ε), hence

N(q, ε) ε^d = Σ_i μ^q ε^d = N(ε)^(1−q) ε^d,

so that, with N(ε) ∼ ε^−D(0), we get τ(q) = (1 − q) D(0) and D(q) = τ(q)/(1 − q) = D(0) for all q, as it should be for a uniform distribution.

5.5.4 The Lyapunov Dimension

Consider a two-dimensional map F with Lyapunov exponents λ_1 > 0 and λ_2 < 0, acting on a small domain K containing a fragment of the attractor. The map stretches the domain exp(λ_1) times in the direction of the vector e_1 (λ_1 > 0) and compresses it exp|λ_2| = exp(−λ_2) times in the direction of the vector e_2 (λ_2 < 0; the condition λ_1 + λ_2 < 0 must be satisfied for dissipative systems). Assume the domain K to be so small that the stretching and compression directions specified by the vectors e_1 and e_2 are the same at all points. From the general concepts of the structure of the attractor it follows that the attractor is a standard set (a dense continuum) in the direction of the vector e_1 and a Cantor fractal in the direction of the vector e_2. Apply the mapping F to the domain K successively q times and cover the resulting domain

K̄ = F^(q)(K) = F(. . . F(K) . . .)   (q applications of F)

by squares with sides ε exp(−q|λ_2|). Since the domain K, and hence any “void” separating the elements of the Cantor set, as well as the side of a covering square, is compressed the same number of times, the number of squares in any “column” lined up along the compression direction e_2 remains the same. The number of squares in each “row” lined up along the stretching direction e_1 increases exp(qλ_1) × exp(q|λ_2|) times. It is clear that we have multiplied the stretching coefficient

Figure 5.18 Local transformation of the chaotic attractor caused by the mapping F.

of the covered domain by the compression coefficient of the covering square, given that there are no “voids” in the direction e_1. Thus, we get

N_K̄(ε e^(−q|λ_2|)) ≈ e^(q(λ_1+|λ_2|)) N_K(ε).

Note that, taking λ_1 and λ_2 to be the Lyapunov exponents averaged over the attractor, we can extend the above relation to complete coverings of the entire attractor. Furthermore, since the attractor is an invariant of the succession mapping F, the numbers of covering elements on both sides of the equality can be considered as belonging to the same covered set – the complete attractor A. Now we make the replacement K, K̄ → A and assume that N_A(ε) ∼ const ⋅ ε^−D. Solving the equality for the dimension, we obtain an expression for the Lyapunov dimension:

D_L = 1 + λ_1/|λ_2|.

The procedure of deriving the formula for D_L is illustrated by Fig. 5.18. The latter displays a small fragment of the chaotic attractor, representing a direct product of a segment and a Cantor set (the left part is before compression, the right one is after compression). White squares correspond to the “voids” in the Cantor structure. For a given size of the covering elements, we “feel” no smaller voids.
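A trivial helper makes the estimate concrete. The exponent values below are only illustrative (of the order of those usually reported for the Henon attractor) and are assumed here, not derived:

```python
def lyapunov_dimension(l1, l2):
    """Lyapunov (Kaplan-Yorke) dimension D_L = 1 + l1/|l2| for a
    two-dimensional dissipative map with l1 > 0 > l2 and l1 + l2 < 0."""
    assert l1 > 0 > l2 and l1 + l2 < 0
    return 1.0 + l1 / abs(l2)

# assumed illustrative values, close to those reported for the Henon map
print(lyapunov_dimension(0.42, -1.62))  # ~1.26
```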

5.5.5 A Relationship Between the Mass Exponent and the Spectral Function

This section deals with the measure distribution on a fractal set in a sufficiently general form. The set is assumed to be represented as the union of subsets with certain fractal dimensions. We introduce the function f(α) for the set. Here α is the Lipschitz–Hölder exponent used to index the subsets:

G = ∪_α G(α),   G(α) ⇔ μ_α ∼ ε^α.

By calculating the mass exponent τ(q), we can demonstrate that there is a one-to-one relationship between the functions f(α) and τ(q), with each subset G(α) having the


dimension f(α). This means that to cover the elements of the set G(α) within the interval [α, α + dα], we need

n(α, ε) dα = ρ(α) ε^(−f(α)) dα

segments of length ε. Here ρ(α) dα is the number of the sets lying between G(α) and G(α + dα). In the definition of the mass exponent, we replace the summation by integration over α. Then we obtain

Σ_{i=1}^{N} μ_i^q ε^d = N(q, ε) ε^d = ∫ ρ(α) ε^(−f(α)) (ε^α)^q ε^d dα → 0 for d > τ(q),   and → ∞ for d < τ(q),   as ε → 0.

In the limit ε → 0, we arrive at an integral whose value can be estimated by the method of asymptotic estimates. Suppose, for example, that ε = 2^−n. Then the integrand contains the cofactor

ε^(−f(α)+qα+d) = 2^(−n(−f(α)+qα+d)) = e^(−n ln 2 (−f(α)+qα+d)),

which increases (decreases) exponentially as n grows. In this case, the value of the integral is estimated by the value of the integrand taken at an extremum of the exponent:

∫ ρ(α) ε^(−f(α)+αq+d) dα ∼ ε^(−f(α(q))+α(q)q+d).

This evaluation method is based on the fact that for small ε (i.e., at large n) the value of the integral is determined by the contribution of a small neighborhood of the extremum point α = α(q) of the function −f(α) + αq + d. The position of the extremum can be found by solving the equation

d/dα (−f(α) + αq) = 0.

In the case at hand, for ε → 0, the exponential cofactor acquires the property of a delta function, “removing” the integral at the point α = α(q). This evaluation method is called Laplace’s method and is a special case of the saddle-point method (the method of steepest descent). A rigorous description of these methods can be found in Ref. [17].
In the definition of the mass exponent τ(q), we replace the integral over α by its asymptotic estimate, and then we immediately find

τ(q) = f(α(q)) − q α(q).

It is worth noting that the equation q = f′(α), which comes from the extremum condition, defines the function α(q) implicitly. Thus, knowing the function f(α), we can


always find τ(q). To express f(α) through τ(q), we differentiate both sides of the expression for τ(q) with respect to q and take into account the extremum condition. Next, we express f(α) through τ(q) and α, and then substitute −dτ/dq for α. As a result, we have

dτ/dq = (df/dα)(dα/dq) − α − q (dα/dq) = (df/dα − q)(dα/dq) − α = −α,

f = τ + qα = τ − q (dτ/dq).

Hence we obtain the system of equations defining the function parametrically:

f(α(q)) = τ(q) − q (dτ(q)/dq),   α(q) = −dτ(q)/dq.

At the point at which the function f(α) has a maximum, its derivative vanishes. The extremum condition for the mass exponent implies that q = 0 at this point. On the other hand, substituting q = 0 into the left equation of the parametric system, we get f_max = τ(0) = D(0). As can be seen from the foregoing, the maximum value of the spectral function f(α) is equal to the dimension of the carrier of the fractal distribution.

5.5.6 The Mass Exponent of the Multiplicative Binomial Process

Now we consider a particular case when the spectrum of mass exponents and the spectrum of Renyi dimensions can be found exactly. Although in the general theory we used the variables ξ and α, here it is more convenient to use the variable k = nξ and take the limit ε → 0 as n → ∞. The binomial process is described by the formulas

N_n(ξ) = C(n, k),   μ_ξ = p^k (1 − p)^(n−k),   ε = 2^−n,   ξ = k/n.

Employing these formulas, we can find:

τ(q) = − lim_{ε→0} ln N(q, ε)/ln ε = − lim_{n→∞} ln[Σ_ξ N_n(ξ) μ_ξ^q] / ln 2^−n
     = − lim_{n→∞} ln[Σ_{k=0}^{n} C(n, k) (p^k (1 − p)^(n−k))^q] / ln 2^−n
     = ln(p^q + (1 − p)^q) / ln 2.

Note that τ(0) = D(0) = 1. This result has a simple explanation: as we have already stated, the dimension D(0) “feels” no probability distribution and is equal to the dimension of the carrier. In the case at hand, the carrier is the unit interval – a standard set of dimension 1.
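The closed form lends itself to a direct numerical check of the whole chain τ(q) → α(q) → f(α); a sketch for p = 0.25, with the derivative of τ taken by a central difference:

```python
import math

p = 0.25

def tau(q):
    """Mass exponent of the binomial process: ln(p^q + (1-p)^q) / ln 2."""
    return math.log(p**q + (1 - p)**q) / math.log(2)

def renyi_D(q):
    return tau(q) / (1 - q)

def alpha_f(q, h=1e-6):
    """Parametric point (alpha(q), f(alpha(q))) via alpha = -dtau/dq."""
    a = -(tau(q + h) - tau(q - h)) / (2 * h)
    return a, tau(q) + q * a

assert abs(tau(0.0) - 1.0) < 1e-12   # carrier dimension D(0) = 1
a0, f0 = alpha_f(0.0)
assert abs(f0 - 1.0) < 1e-6          # maximum of f(alpha) equals tau(0)
print(renyi_D(2.0), alpha_f(2.0))
```

The point (α(0), f(α(0))) is the top of the multifractal spectrum, and α stays between α_min = −ln(1 − p)/ln 2 and α_max = −ln p/ln 2.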


5.5.7 A Multiplicative Binomial Process on a Fractal Carrier

Previously, we have dealt with two models of fractal measures: (a) the uniform distribution on a carrier that is the Cantor fractal; (b) the irregular (fractal) distribution on the unit interval. Turning to the problem of describing the structure of chaotic (strange) attractors, we can assume that mixed models would be more useful, because the attractor itself is a set of Cantor type, and the motion of the phase point generates the visit frequency distribution on it.
Consider a particular example of a mixed model. Let the unit interval be partitioned into three segments of lengths l_0 = 0.25, l_1 = 0.35 and l_2 = 0.4. Then we remove the middle segment. Further, we divide the remaining side segments in the same proportion and remove their middle segments, and so on. Suppose that the left segment resulting from the first division acquires the fraction π_0 = 0.6 of the original measure, and the right segment acquires the fraction π_2 = 0.4; at subsequent steps, the measure is redistributed in the same proportion (see Fig. 5.19).
In each generation of constructing the fractal distribution, the unit interval turns out to be broken into small segments. For some of them the measure is distinct from zero, for the others it is zero. Since the lengths of these segments are different, the definition of the Renyi dimensions given earlier is inconvenient. Note, however, that there exists an upper bound ε_n on the lengths of all the segments of the division corresponding to the nth generation, and the upper bounds can be chosen so that ε_n → 0 as n → ∞. Using this, we can formulate a generalized definition of the mass exponent τ(q). As in the case above, the carrier (now the Cantor set) can be represented as a union of disjoint fractal subsets. In the nth generation, one of these sets G_n(ξ) is a union of N_n(ξ) segments, each of which has length L_ξ and bears the fraction μ_ξ of the measure:

N_n(ξ) = C(n, ξn),   L_ξ = (l_0^ξ l_2^(1−ξ))^n,   μ_ξ = (π_0^ξ π_2^(1−ξ))^n,   ξ = k/n.

Figure 5.19 Successive stages of construction of a distribution on a Cantor fractal.


Define the mass exponents by the new relation

Σ_{ξ=0}^{1} N_n(ξ) μ_ξ^q L_ξ^d → 0 for d > τ(q),   and → ∞ for d < τ(q),   as n → ∞ (ε_n → 0),

where the summation is actually performed over k = 0, . . . , n. An explicit expression for the sum is derived by convolving the binomial expansion and has the form

Σ_{ξ=0}^{1} N_n(ξ) μ_ξ^q L_ξ^d = Σ_{k=0}^{n} C(n, k) (π_0^k π_2^(n−k))^q (l_0^k l_2^(n−k))^d = (π_0^q l_0^d + π_2^q l_2^d)^n.

Obviously, the resulting expression tends to a constant only if the sum in parentheses is equal to unity. Thus, the mass exponent function is a solution of the equation

π_0^q l_0^τ(q) + π_2^q l_2^τ(q) = 1.

As before, the spectrum of dimensions is determined by the expression D(q) = τ(q)/(1 − q).
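The equation is transcendental in τ but monotone (since l_0, l_2 < 1), so a simple bisection solves it; a sketch with the numbers of this example:

```python
l0, l2 = 0.25, 0.4
pi0, pi2 = 0.6, 0.4

def tau_of_q(q, lo=-20.0, hi=20.0):
    """Solve pi0^q l0^t + pi2^q l2^t = 1 for t by bisection; the left
    side decreases monotonically in t because l0, l2 < 1."""
    g = lambda t: pi0**q * l0**t + pi2**q * l2**t - 1.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# q = 0: tau(0) is the fractal dimension of the carrier, l0^D + l2^D = 1
D0 = tau_of_q(0.0)
assert abs(l0**D0 + l2**D0 - 1.0) < 1e-10
# q = 1: the normalization pi0 + pi2 = 1 forces tau(1) = 0
assert abs(tau_of_q(1.0)) < 1e-9
print(D0)  # ~0.61
```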

5.5.8 A Temporal Data Sequence as a Source of Information About an Attractor

Previously, to compute the characteristics of irregular motion, we applied methods that assume that all data on a phase trajectory are known. This is true both for dynamics with continuous time and for the iterative dynamics of a mapping, where a sequence of phase points plays the role of the phase trajectory. Only numerical simulation gives a complete description of a phase trajectory and an attractor. However, in order to compare theoretical and experimental results, it is necessary to be able to calculate the characteristics of the attractor using experimental data. In this case, a complete description is hardly possible. For example, studying turbulent motion in a fluid, we deal with an infinite-dimensional system and do not even know an upper bound on the dimension of the attractor. In other words, how many independent variables must be monitored in order to obtain data sufficient for an adequate description of the dynamic process? The problem of describing the nonlinear dynamics can nevertheless be solved by the Packard–Takens method. The method is based on the assumption that the nature of irregular motion in a system with a high-dimensional phase space is completely determined by motion on a low-dimensional chaotic attractor. It has been shown that in this case, information about the characteristics of the attractor can be obtained from observations of the temporal behavior of a single dynamic variable x_1.


In 1980–1981, N. Packard, F. Takens and R. Mane gave a rigorous justification of this method. Here we omit it and consider a simple illustrative example. Suppose there is a two-dimensional dynamical system ẋ = f(x), where x = (x_1, x_2). Integrating both sides of one of the equations over time, we can obtain the relation

x_1(t + τ) = ∫_t^(t+τ) f_1(x_1(t′), x_2(t′)) dt′ + x_1(t) ≈ τ f_1(x_1(t), x_2(t)) + x_1(t).

The approximate equality holds only for sufficiently small τ. It can be argued that if we follow the behavior of the pair (x_1(t), x_1(t + τ)), we obtain information equivalent to that which we would have watching the behavior of (x_1(t), x_2(t)). Indeed, the transition (x_1(t), x_1(t + τ)) → (x_1(t), x_2(t)) amounts to carrying out some nondegenerate two-dimensional functional transformation (mapping). Under such a transformation, the Lyapunov exponents and the fractal dimension do not change their values. Pursuing this idea and intending to analyze the dynamics generated by motion on an attractor of large enough dimension, we have to observe the behavior of the components of the vector (x_1(t), x_1(t + τ), . . . , x_1(t + mτ)), having chosen m and τ in some way. An essential requirement is to introduce a large enough dimension m of the auxiliary (reconstructed) phase space. For this, at least two conditions must be met: (a) the dimension m needs to be greater than the dimension of the attractor; (b) the dimension m needs to be large enough so that the mapping of variables from the subspace into the reconstructed m-dimensional space is nondegenerate. It should be borne in mind that the variables are related by nonlinear equations, which are responsible for the chaotic dynamics. A rigorous justification of the possibility of recreating the chaotic attractor in the new (reconstructed) phase space is based on the embedding theorem. Below we present its content without seeking mathematical rigor and without explaining the terms.
The Embedding Theorem. Let A ⊂ R^n be a compact subset. An embedding of A into the space R^m is a mapping that establishes a one-to-one correspondence between the points of the pre-image and the image of the subset A. The theorem states that any typical smooth mapping Φ : R^n → R^m provides an embedding of A into R^m if the condition m ≥ 2m_A + 1 is met, where m_A is the dimension of the subset A.
In particular, if A is a fractal set, m_A = D_A is a fractional dimension. In order to select m, we need to know at least an approximate value of D_A. If there is no a priori information about the attractor dimension, we should carry out trial calculations of the dimension at different m. From a practical standpoint, given data on the dynamics of a system, the correlation dimension D_C, equal to the Renyi dimension for q = 2, is the simplest to calculate. Note that the equality D_C = D(q = 2) follows


from simple arguments. Suppose that the data about a fractal set represent an aggregate of vectors which define the positions of individual points belonging to the fractal, {x_i}, i = 1, . . . , M. It is worth emphasizing that this kind of data can be obtained experimentally. Having formed a covering with ε-spheres, we denote the relative fraction of the points falling into a certain ε-sphere as μ_k. Here the relative fraction μ_k is equivalent to the probability of a point falling into the ε-sphere. Then μ_k^2 = μ_k ⋅ μ_k is the probability of a pair of points falling into the sphere, such “fallings” being regarded as independent events. This implies that for large values of M

Σ_{i=1}^{N(ε)} μ_i^2 ∼ (1/M^2) ⋅ {the number of pairs for which |x_i − x_j| < ε}.

The normalized number of such pairs is the correlation integral

C(ε) = (1/M^2) Σ_{i,j=1}^{M} Θ(ε − |x_i − x_j|),

where Θ is the Heaviside step function. As a consequence, the correlation dimension is

D(q = 2) = lim_{ε→0} [(1/ln ε) ln(Σ_{k=1}^{N(ε)} μ_k^2)] = lim_{ε→0} ln C(ε)/ln ε = D_C.
The presence of a linear relationship between ln C(ε) and ln ε is the criterion that the considered set of points is a fractal and the dimension D_C exists. Moreover, the graph of this dependence must look like a straight line with slope D_C. On the contrary, lack of a linear relationship indicates that there is no fractal structure! This may be because either the irregular motion is unrelated to the presence of a strange attractor, or the selection of the dimension m is wrong.
The attractor reconstruction method using signal delays works quite well. This is seen from Fig. 5.20a, which represents the dependence of the logarithm of the correlation integral on the logarithm of the cell size. Such curves are inherent in the Lorenz model. The lower curve is obtained by processing a sequence of values of the true phase trajectory x(t_i) = (x(t_i), y(t_i), z(t_i)); the upper one is plotted through consecutive values of one coordinate, taken at different times with a constant step τ: u(t_i) = (x(t_i), x(t_i + τ), x(t_i + 2τ)). These curves are well approximated by straight

(b) log2C(δ)

0

(c) 0 log C(δ) 2

(α)

–10

–10

–20

–20

–14 (1)

log2(δ/δ0) 1 Figure 5.20

5

9

13

log2C(δ)

–4

log2(δ/δ0) 0

10

20

–24

(2)

0

(β)

(3)

10

log2(δ/δ0) 20

The graphs of dependences of ln C on the logarithm of the cell size in different cases.


lines. The fact that they are parallel means that the correlation dimensions computed in the two different ways are the same. The linear nature of the dependence between the logarithms is violated if ε is very small or too large. Let us turn to the dependences shown in Fig. 5.20b, c, which refer to the Henon map. For large ε, the self-similarity law is violated since ε becomes comparable to the size of the attractor (the appropriate place on the graph is marked by α). If ε is small, the number of pairs of points close together to the required degree is also small; therefore, fluctuations begin to make themselves felt. The role of the random component of the signal can be explained by artificially introducing a noise addition. Figure 5.20c displays the area marked by β, where the graphs are plotted for the cases: (1) there is no noise; (2) the noise amplitude is 5 ⋅ 10^−3; (3) the noise amplitude is 5 ⋅ 10^−2. As can be seen, the noise destroys the self-similarity at certain scales, wherefore the dependence of the variables ceases to be linear.
From a practical point of view, we need to know whether there is a range of scales in which ln C(ε) is a linear function. If not, the formally calculated value of D_C (even when it is not an integer) cannot be regarded as evidence of the presence of a fractal structure. The answer to this question can be obtained graphically, and the value of D_C itself is easy to determine from the slope. Processing the data, we must strive to get a graph with a linear segment whose slope is independent of the choice of τ and m. For this purpose, we should vary the delay time τ and increase the dimension m when required. In the case of a successful solution of this problem, it can be claimed that there is motion on an attractor having the fractal dimension D_C.
Figure 5.21 shows the curves that allow the correlation dimension of the attractor to be estimated from the results of experimental observation of Rayleigh–Benard convection. As can be noticed, for m = 4, 5, . . . , 8, each curve has a pronounced linear segment, with all slopes being equal. When m = 3, the angle of

Figure 5.21 The graphs of the dependence of C(δ) on the cell size δ for different values of m.


inclination is slightly different, although the graph also has a linear segment. Thus, one should take m > 3. Analysis of the dependence of the correlation integral on the scaling parameter can contribute to a better understanding of qualitative features of the dynamics of the system under study. For example, the presence of two linear segments with different slopes indicates that different scales correspond to different types of self-similarity.

5.6 Universality and Scaling in the Dynamics of One-Dimensional Maps

All depends, then, on finding out these easier problems, and on solving them by means of devices as perfect as possible and of concepts capable of generalization.
David Hilbert

So then always that knowledge is worthiest . . . which considereth the simple forms or differences of things, which are few in number, and the degrees and co-ordinations whereof make all this variety.
Francis Bacon

The study of the dynamical patterns of systems that demonstrate the transition to chaos through a cascade of period-doubling bifurcations has led to the discovery of deep mathematical laws associated with the scale invariance (scaling) property of the solutions. Below, we briefly review the results obtained by Feigenbaum [18–20] and discuss their significance (see also [21–24]). The sequence of period doublings can be observed in many nonlinear dynamical systems, both in numerical simulation and experimentally. Let X_i, i = 1, 2, . . . , be the sequence of intersection points of a phase trajectory passing through the Poincare section plane. The period-doubling bifurcation process in dynamical systems happens as follows: at a certain value of the control parameter r = r_1, the trajectory of a limit cycle with period T splits into two close trajectories, which form a limit cycle with period 2T. In the Poincare section plane, during the first period doubling a stable fixed point turns into two close points, which are visited alternately by the point moving along the phase trajectory. In other words, the sequence of points X_1, X_1, X_1, X_1, . . . is transformed into a sequence of points X_1, X_2, X_1, X_2, . . .; the latter is called a 2-cycle of the Poincare map. Further, the next splitting occurs at r = r_2, whereby we obtain a limit cycle with period 4T for the phase trajectory; this cycle corresponds to a 4-cycle of the Poincare map. At r = r_3, there arise a limit cycle of period 8T and an 8-cycle. The trajectory of the limit cycle of period 2^n T intersects the plane of the Poincare section at 2^n points that are visited in turn: X_1, X_2, . . . , X_{2^n}, X_1, X_2, . . . (a 2^n-cycle of the Poincare map). The sequence of the r parameter values corresponding to the period-doubling bifurcations converges to a limit, r_n → r_∞; when r > r_∞, there is chaotic motion.


5 Chaos and Fractal Attractors in Dissipative Systems

Figure 5.22 A bifurcation diagram (a) and a graph of the dependence of the maximum Lyapunov exponent λ_max on the parameter r (b). In the chaotic motion area, the exponent λ_max becomes positive.

A typical graph plotted from data obtained by numerical simulation or experimentally looks like the one depicted in Fig. 5.22a. Each value of the parameter r corresponds to a point on the horizontal axis, through which a vertical line is drawn. The dots lying on this line have ordinates equal to the projections of the points X_1, X_2, X_3, X_4, . . . onto some axis. In the case of a limit cycle with period 2^n T, the number of dots with different ordinates is 2^n; in the case of chaos, there are infinitely many such dots. In the regular motion area, when r < r_∞, the plotted dots form a system of branching curves. In the chaotic motion area, when r > r_∞, a complex distribution of points of fractal type takes place. The graph in Fig. 5.22a and other similar graphs are called bifurcation diagrams. Figure 5.22b displays a typical dependence of the maximum Lyapunov exponent λ_max on the parameter r. It is seen that the exponent vanishes at each period-doubling point r = r_n, since the cycle with period 2^{n−1} T loses stability there.
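The way the data for such a diagram are assembled can be sketched as follows (an added illustration, not part of the original text): for each r one discards a transient and then counts the distinct attractor points.

```python
# Collect bifurcation-diagram data for the logistic map x -> r x (1 - x).
def attractor_points(r, transient=1000, keep=64):
    x = 0.5
    for _ in range(transient):       # let the trajectory settle on the attractor
        x = r * x * (1 - x)
    seen = set()
    for _ in range(keep):
        x = r * x * (1 - x)
        seen.add(round(x, 6))        # rounding merges numerically equal points
    return sorted(seen)

print(len(attractor_points(2.9)))    # 1: a stable fixed point
print(len(attractor_points(3.2)))    # 2: a 2-cycle (period 2T)
print(len(attractor_points(3.5)))    # 4: a 4-cycle (period 4T)
```

Sweeping r over a grid and plotting the returned points against r reproduces the branching structure of Fig. 5.22a.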

5.6.1 General Regularities of a Period-Doubling Process

In order to understand the nature of the period-doubling phenomenon, it suffices to consider the simplest dynamic model: a one-dimensional map with mixing. In this case, the point sequences generated by iterating the map play the same role as the sequences of points of intersection of the continuous phase trajectory


with the Poincare section plane. As previously noted, the system is mixing if the function that defines the map is not monotonic. For simplicity, we can choose a function having a single extremum. Let us consider the typical situation when only the first derivative vanishes at the extremum point. Finally, when performing calculations and plotting graphs, it is convenient to use a certain well-defined function; as such, we can take the simplest function of the required form, a quadratic one. Following this line of reasoning, we arrive at the logistic map, which can be written as

X_{N+1} = f(X_N, r),  f(x, r) = r x (1 − x),

where x and r are real variables; 0 ≤ x ≤ 1, 0 ≤ r ≤ 4. In Section 5.4.3, it was shown that the parabolic one-dimensional map may in some cases be regarded as an approximation to a two-dimensional Poincare map; these maps share the qualitative features of the dynamics. The dynamics of the simplest piecewise linear one-dimensional maps was already discussed in Sections 4.2.3 and 4.2.4. To study the logistic map, we will further employ the Lamerey staircase and Lyapunov exponent methods set forth in those sections. In the case of the logistic map, it is easy to ascertain that the iterative process is governed by simple rules. We begin with the issue of the existence of a stable fixed point: X_0 = f(X_0, r). When r < 1, the logistic map has the single fixed point X_0 = 0. When r > 1, a second fixed point X_00 = 1 − r^{−1} appears (Fig. 5.23). Introducing the map for a deviation from the fixed point and linearizing it:

δx_{N+1} = f(X_0 + δx_N, r) − f(X_0, r) = f'_x(X_0, r) δx_N + o(δx_N),

we find the stability criterion:

|δx_{N+1}| / |δx_N| ≈ |f'_x(X_0, r)| < 1.

This inequality has a simple geometric meaning: a fixed point is stable if the slope angle of the tangent line to the graph of the function f(x, r) at this point lies between −π/4 and π/4.

Figure 5.23 Bifurcation at r = 1: (a) the only stable stationary point X_0 = 0 when r < 1; (b) two fixed points, the unstable X_0 = 0 and the stable X_00, when r > 1.


Figure 5.24 To a 2-cycle of the map f (a) there correspond two fixed points of the map f^{(2)} (b); to a 4-cycle of the map f there correspond two 2-cycles of the map f^{(2)} (c) and four fixed points of the map f^{(4)} (d).

The point X_0 = 0 is stable when r < 1; the point X_00 is stable when 1 < r < 3. When r > r_1 = 3, both fixed points are unstable, and a stable 2-cycle arises. Figure 5.24a shows that the limit “trajectory” of the 2-cycle has the form of a square. Each of the points X_1, X_2 contained in it is a stable fixed point of the map (Fig. 5.24b):

X_{N+1} = f^{(2)}(X_N, r),  f^{(2)}(X, r) ≡ f(f(X, r), r).
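These statements are easy to check numerically. The sketch below is an added illustration (function names are ours, not from the text); it verifies, for r = 3.2, that X_00 has lost stability and that the 2-cycle elements are fixed points of f^{(2)}:

```python
import math

def f(x, r):
    return r * x * (1 - x)

def f2(x, r):                       # the second-iterate map f^(2)
    return f(f(x, r), r)

r = 3.2                             # slightly above r1 = 3
x00 = 1 - 1 / r                     # the fixed point X00 = 1 - 1/r
print(abs(r * (1 - 2 * x00)))       # |f'(X00)| = 1.2 > 1: X00 is unstable

# The 2-cycle elements satisfy f2(x) = x but f(x) != x; dividing the quartic
# f2(x) - x by the quadratic f(x) - x leaves x^2 - (1 + 1/r)x + (1 + 1/r)/r = 0.
b = 1 + 1 / r
d = math.sqrt(b * b - 4 * b / r)
x1, x2 = (b - d) / 2, (b + d) / 2
print(abs(f2(x1, r) - x1) < 1e-9)   # True: X1 is a fixed point of f^(2)
print(abs(f(x1, r) - x2) < 1e-9)    # True: f swaps X1 and X2, a 2-cycle of f
```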

With a further increase of the parameter r, at the value r = r_2 = 3.45 . . . , the 2-cycle loses stability and a 4-cycle is born. Each of its points X_1, X_2, X_3, X_4 is a fixed point of the map (Fig. 5.24d):

X_{N+1} = f^{(4)}(X_N, r),  f^{(4)}(X, r) ≡ f^{(2)}(f^{(2)}(X, r), r).

Moreover, the elements of the 4-cycle can be grouped into two pairs, each of which is a 2-cycle of the map X_{N+1} = f^{(2)}(X_N, r) (Fig. 5.24c). It is clear that the procedure for obtaining the 2^n-cycles can be continued to larger values of n. By direct differentiation, it is easy to show that for the function of the 2^n-map f^{(2^n)}(X, r), defined by the recurrence relation

f^{(2^n)}(X, r) ≡ f^{(2^{n−1})}(f^{(2^{n−1})}(X, r), r),

the derivative is the same at all points of the 2^n-cycle:

[f^{(2^n)}]'_x(X_i, r) = f'_x(X_1, r) f'_x(X_2, r) ⋯ f'_x(X_{2^n}, r),  i = 1, 2, 3, . . . , 2^n.

The values of the parameter r corresponding to the bifurcation points can be found by successively solving functional equations. Denote by r_n the parameter value at which the


bifurcation of the birth of the 2^n-cycle occurs. This cycle is stable as the parameter changes within r_n < r < r_{n+1}. Since any element of the 2^n-cycle is a stationary point of the 2^n-map, we have an equation for finding these elements:

f^{(2^n)}(x, r) = x  ⇒  X_1(r), X_2(r), . . . , X_{2^n}(r).

It is worth noting that the position of each element is determined by the value of r. When analyzing the graphs, it can be noticed that as the parameter r runs over the region of existence of the 2^n-cycle, the derivative of the 2^n-map, taken at any point of the 2^n-cycle, changes from 1 to −1:

r_n → r → r_{n+1},  1 → ∂f^{(2^n)}(x, r)/∂x |_{x=X_1(r)} → −1.

Hence, solving the equation

∂f^{(2^n)}(x, r)/∂x |_{x=X_1(r)} = −1,

where X_1(r) is the first element of the 2^n-cycle, we find r_{n+1} as the parameter value at which the 2^n-cycle loses stability and the 2^{n+1}-cycle is born. Figure 5.25 illustrates how the graph of the 2-map deforms as the parameter passes through the value at which the 2-cycle is born. It is seen that the loss of stability of the stationary point is followed by the birth of two new stable stationary points. The law of convergence of the sequence of parameter values {r_n} to a limit point was investigated by Feigenbaum [18–20]. He showed that, for large n, the bifurcation values of the parameter r for the logistic map obey the law of a geometric progression:

(r_n − r_{n−1}) / (r_{n+1} − r_n) → δ = 4.669 . . .  as n → ∞.

A natural desire to find a similar law for other maps has led to an unexpected result: it turned out that not only the nature of the convergence but also the value of the constant δ is the same for all maps having a quadratic maximum. Thus, the quantity δ, called the Feigenbaum constant, is a fundamental mathematical constant like the numbers π and e = exp(1). The reasons for this state of affairs are explained by the theory of universality, or the scale invariance (scaling) theory, presented by Feigenbaum in his works.
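For the 2-cycle of the logistic map, the stability-loss condition stated above can be checked in closed form; the short added sketch below (our own helper, not from the text) uses the fact that the 2-cycle elements are roots of x² − (1 + 1/r)x + (1 + 1/r)/r = 0, so their sum and product are known without solving for the roots:

```python
import math

def two_cycle_multiplier(r):
    """Multiplier of the 2-cycle of x -> r x (1 - x): the product of the
    derivatives r(1 - 2x) over the cycle, expressed through the sum b and
    product c of its two elements."""
    b = 1 + 1 / r          # X1 + X2
    c = b / r              # X1 * X2
    return r * r * (1 - 2 * b + 4 * c)

# The multiplier runs from 1 (birth of the cycle at r1 = 3) down to -1,
# reached at r2 = 1 + sqrt(6) = 3.449..., where the 4-cycle is born.
print(round(two_cycle_multiplier(3.0), 6))               # 1.0
print(round(two_cycle_multiplier(1 + math.sqrt(6)), 6))  # -1.0
```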

When n >> 1, for most of these operators the corresponding function is close to g. This furnishes the second approximation: we replace this function, too, everywhere by the limiting one, g. The result is

T^n F(x, r) ≈ g(x) + (r − r_∞) L_g^n δF(x, r_∞).

If n is large, then to analyze expressions of the type L_g^n f(x) it is advisable to expand the function f(x) over the eigenfunctions of the operator L_g. This is because, for large values of n, the behavior of L_g^n f(x) is determined by the contribution of the eigenfunction with the largest eigenvalue. In our case, using the explicit expression (5.8) for the linear operator, as well as g(x) and α, which are known approximately, we can solve the eigenfunction problem numerically. It turns out that among the eigenvalues of the operator L_g there is exactly one greater than unity, and it is nondegenerate:

L_g φ(x) = δ φ(x),  δ > 1;  δF(x, r_∞) = c_1 φ(x) + ⋯ .

Here we have designated the largest eigenvalue as δ. Retaining for n >> 1 only the contribution of the eigenfunction corresponding to the largest eigenvalue, and substituting r = R_n, we obtain

T^n F(x, R_n) ≈ g(x) + (R_n − r_∞) δ^n c_1 φ(x).  (5.11)


We now plug x = 0 into both sides of the approximate equality (5.11). Further, we represent the left-hand side of this equality through the left expression involved in (5.10). Taking into account the above and the right-hand equation in eq. (5.2), we find that the expression on the left of the equal sign in eq. (5.11) vanishes. Thus, we get

T^n F(0, R_n) = (−α)^n F^{(2^n)}(0, R_n) = 0,  g(0) = 1

⇒ (R_n − r_∞) δ^n ≈ −[c_1 φ(0)]^{−1}.

Hence it immediately follows that R_n ≈ r_∞ − const ⋅ δ^{−n}. We have demonstrated that the Feigenbaum constant δ, which determines the law of convergence of the sequence of points R_n corresponding to the supercycles to the limit value r_∞, coincides with the largest eigenvalue of the operator L_g. Utilizing eq. (5.11), we can find a more general relation. We noted earlier that every point of the 2^n-cycle is a stable stationary point of the 2^n-map. This statement holds true within the stability region of the 2^n-cycle: r_n < r < r_{n+1}. In this range, the quantity

μ = ∂F^{(2^n)}(X_1, r)/∂X_1 = ∏_{k=1}^{2^n} ∂F(X_k, r)/∂X_k

(a multiplier) is equal to the slope of the tangent to the graph of F^{(2^n)}(x, r) at any point x = X_k of its intersection with the bisector. The value of this quantity varies from 1 to −1. Within the selected interval, any value of r can be parameterized by the number n and the value of μ. We denote the appropriate function by R_n(μ), assuming that the requirements r_n < R_n(μ) < r_{n+1}, R_n = R_n(0), r_n = R_n(1) are met. Inserting x = 0 and r = R_n(μ) into eq. (5.11), we arrive at

lim_{n→∞} (R_n(μ) − r_∞) / δ^{−n} = (g_{0,μ}(0) − g(0)) / (c_1 φ(0)),

where

g_{0,μ}(x) = lim_{n→∞} (−α)^n F^{(2^n)}( x/(−α)^n , R_n(μ) ).

This implies that the universality relation for the sequence R_n(μ) is fulfilled for an arbitrary value of μ:

R_n(μ) ≈ r_∞ − const ⋅ δ^{−n}.

In particular, for μ = 1 we have r_n ≈ r_∞ − const ⋅ δ^{−n}.
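The convergence law can be observed directly. The sketch below is an added numerical illustration (all names are ours): it locates the superstable parameters R_n of the logistic map, whose extremum lies at x = 1/2 and thus R_n solves f^{(2^n)}(1/2, R) = 1/2, and then estimates δ from the ratios of successive gaps.

```python
def orbit_with_derivative(r, steps):
    """Iterate x -> r x (1 - x) from x = 1/2, propagating dx/dr."""
    x, dx = 0.5, 0.0
    for _ in range(steps):
        x, dx = r * x * (1 - x), x * (1 - x) + r * (1 - 2 * x) * dx
    return x, dx

def superstable(n, guess):
    """Solve f^(2^n)(1/2, R) = 1/2 for R by Newton's method."""
    r = guess
    for _ in range(60):
        x, dx = orbit_with_derivative(r, 2 ** n)
        step = (x - 0.5) / dx
        r -= step
        if abs(step) < 1e-13:
            break
    return r

R = [2.0, 1 + 5 ** 0.5]      # R_0 and R_1 are known in closed form
for n in range(2, 11):
    # the scaling law itself supplies a good starting guess for Newton
    R.append(superstable(n, R[-1] + (R[-1] - R[-2]) / 4.7))

delta = (R[-2] - R[-3]) / (R[-1] - R[-2])
print(delta)                 # approaches 4.6692...
```

The same machinery, run with the stability-loss condition instead of superstability, would yield the bifurcation values r_n themselves.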


5.6.3 A Universal Regularity in the Arrangement of Cycles: A Universal Power Spectrum

When the number n increases by unity, the number of elements in the cycle doubles, so that the distances between the elements diminish. In this case, there is a complex pattern in the arrangement of the elements, similar to the arrangement of points of the Cantor set. This is easy to see by cutting the bifurcation diagram along vertical segments at the points r = R_n and collating the resulting sets of points (Fig. 5.29). Here we consider the law by which the distance between the nearest points of the 2^n-supercycle varies as n grows. We introduce the following notation: X_m, m = 0, 1, 2, . . . , 2^n − 1 for the elements of the 2^n-supercycle; X̄_m, m = 0, 1, 2, . . . , 2^{n+1} − 1 for the elements of the 2^{n+1}-supercycle. Also, we assign the number zero to the elements located at the extremum point of the function that defines the map: X_0 = X̄_0 = 0. The following relations are fulfilled:

F(X_m, R_n) = X_{m+1}, m = 0, . . . , 2^n − 2;  F(X_{2^n − 1}, R_n) = X_0;
F(X̄_m, R_{n+1}) = X̄_{m+1}, m = 0, . . . , 2^{n+1} − 2;  F(X̄_{2^{n+1} − 1}, R_{n+1}) = X̄_0.

Let us keep in mind that the elements of the 2^n-supercycle are stationary points of the 2^n-map:

F^{(2^n)}(X_m, R_n) = X_m.

The set of elements of the 2^n-cycle can be represented as a set of 2^{n−1} pairs so that each pair is a 2-cycle of the 2^{n−1}-map. For each element of the 2^n-cycle, the other element of its pair is the nearest one. Thus, we can calculate the distance between these elements for any supercycle:

d_n(m) = X_m − F^{(2^{n−1})}(X_m, R_n).

We now need a function whose values indicate by how much the distances between neighboring elements contract at the transition from the 2^n-supercycle to the 2^{n+1}-supercycle:

σ_n(m) = d_{n+1}(m)/d_n(m) = (X̄_m − F^{(2^n)}(X̄_m, R_{n+1})) / (X_m − F^{(2^{n−1})}(X_m, R_n)).  (5.12)

Figure 5.29 Arrangement of elements of the 2^n-supercycles.


In this case, the role of the argument is played by the index m, where m = 0, 1, . . . , 2^n − 1. As previously noted, there is a universal pattern of change in the distances d_n(0):

d_{n+1}/d_n ≈ −1/α,  n >> 1;  d_n ≡ d_n(0) = −F^{(2^{n−1})}(0, R_n).  (5.13)
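Relation (5.13) can be checked numerically. In the self-contained added sketch below (names are ours; for the logistic map the extremum x = 1/2 plays the role of the point 0 in the text, so d_n = F^{(2^{n−1})}(1/2, R_n) − 1/2), the superstable parameters R_n are found first by Newton's method:

```python
def superstable(n, guess):
    """Solve f^(2^n)(1/2, R) = 1/2 for R, iterating x and dx/dr together."""
    r = guess
    for _ in range(60):
        x, dx = 0.5, 0.0
        for _ in range(2 ** n):
            x, dx = r * x * (1 - x), x * (1 - x) + r * (1 - 2 * x) * dx
        r -= (x - 0.5) / dx
    return r

R = [2.0, 1 + 5 ** 0.5]      # R_0 and R_1 in closed form
for n in range(2, 10):
    R.append(superstable(n, R[-1] + (R[-1] - R[-2]) / 4.7))

def d(n):
    """d_n: displacement of the 2^(n-1)-th image of the extremum at R_n."""
    x = 0.5
    for _ in range(2 ** (n - 1)):
        x = R[n] * x * (1 - x)
    return x - 0.5

print(d(9) / d(8))           # approaches -1/alpha, with alpha = 2.5029...
```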

Thus, a particular value of the function (5.12) is known: σ_n(0) ≈ −1/α. We now show that σ_n(m + 2^n) = −σ_n(m). To this end, we calculate the numerator and denominator in eq. (5.12):

d_n(m + 2^n) = X_{m+2^n} − F^{(2^{n−1})}(X_{m+2^n}, R_n) = F^{(2^n)}(X_m, R_n) − F^{(2^{n−1})}(F^{(2^n)}(X_m, R_n), R_n) = X_m − F^{(2^{n−1})}(X_m, R_n) = d_n(m);

d_{n+1}(m + 2^n) = X̄_{m+2^n} − F^{(2^n)}(X̄_{m+2^n}, R_{n+1}) = F^{(2^n)}(X̄_m, R_{n+1}) − F^{(2^n)}(F^{(2^n)}(X̄_m, R_{n+1}), R_{n+1}) = F^{(2^n)}(X̄_m, R_{n+1}) − F^{(2^{n+1})}(X̄_m, R_{n+1}) = F^{(2^n)}(X̄_m, R_{n+1}) − X̄_m = −d_{n+1}(m).

As a consequence, we have

σ_n(m + 2^n) = d_{n+1}(m + 2^n) / d_n(m + 2^n) = [−d_{n+1}(m)] / d_n(m) = −σ_n(m).  (5.14)

Further, we seek the value of the function (5.12) for m = 2^{n−i}, i = 0, 1, . . . , n:

σ_n(2^{n−i}) = (X̄_{2^{n−i}} − F^{(2^n)}(X̄_{2^{n−i}}, R_{n+1})) / (X_{2^{n−i}} − F^{(2^{n−1})}(X_{2^{n−i}}, R_n))
= (F^{(2^{n−i})}(0, R_{n+1}) − F^{(2^n)}(F^{(2^{n−i})}(0, R_{n+1}), R_{n+1})) / (F^{(2^{n−i})}(0, R_n) − F^{(2^{n−1})}(F^{(2^{n−i})}(0, R_n), R_n)),

where R_{n+1} = R_{(n−i)+i+1} and R_n = R_{(n−i)+i}. Given that for large values of n − i the mapping functions are close to the universal ones, we can use the estimate

g_j(x) = lim_{k→∞} (−α)^k F^{(2^k)}((−α)^{−k} x, R_{k+j})  ⇒  F^{(2^k)}(x, R_{k+j}) ≈ (−α)^{−k} g_j((−α)^k x),  k >> 1.

Approximating the functions that define the map by the universal functions (here it is necessary to employ the index substitutions k → n − i and j → i or i + 1), we derive an approximate expression for the universal function (5.12):

σ_n(2^{n−i}) ≈ ((−α)^{−n+i} g_{i+1}((−α)^{n−i} ⋅ 0) − (−α)^{−n+i} g_{i+1}((−α)^{n−i} (−α)^{−n} g_1((−α)^n ⋅ 0))) / ((−α)^{−n+i} g_i((−α)^{n−i} ⋅ 0) − (−α)^{−n+i} g_i((−α)^{n−i} (−α)^{−n+1} g_1((−α)^{n−1} ⋅ 0)))
= (g_{i+1}(0) − g_{i+1}((−α)^{−i} g_1(0))) / (g_i(0) − g_i((−α)^{−i+1} g_1(0))).


Note that the right-hand side is independent of n. It is convenient to redefine the function (5.12), introducing its argument in a new way:

σ(ξ) = σ(m/2^{n+1}) ≡ σ_n(m).

When n >> 1, the variable ξ can be regarded as continuous. As ξ varies within the interval [0, 1), we run through the numbers of the elements of the 2^{n+1}- and 2^n-cycles once and twice, respectively. The initial (left) and redefined (right) functions have the particular values:

σ_n(0) = −1/α,  σ_n(0 + 2^n) = −σ_n(0) = 1/α;   σ(0) = −1/α,  σ(1/2) = −σ(0) = 1/α.  (5.15)

Note also that the equality σ_n(2^{n−i}) = σ(2^{−1−i}) holds. This relation allows finding the right-hand limit of the function σ(ξ) as its argument tends to zero. Using the fact that α > 1, so that (−α)^{−i} → 0 as i → ∞, we expand the function g(x) in a power series in a neighborhood of the extremum point x = 0:

g((−α)^{−i} g_1(0)) = g(0) + (1/2) g″(0) [(−α)^{−i} g_1(0)]² + o(α^{−2i}).

We have

σ(0⁺) = lim_{i→∞} σ(2^{−1−i}) = lim_{i→∞} (g_{i+1}(0) − g_{i+1}((−α)^{−i} g_1(0))) / (g_i(0) − g_i((−α)^{−i+1} g_1(0)))
= lim_{i→∞} ( (1/2) g″(0) [(−α)^{−i} g_1(0)]² ) / ( (1/2) g″(0) [(−α)^{−i+1} g_1(0)]² ) = 1/α².  (5.16)

Comparing the values σ(0) = −1/α and σ(0⁺) = 1/α², we conclude that the function σ(ξ) suffers a finite discontinuity at the point ξ = 0. Considering relations (5.14) and (5.15), we see that there is also a discontinuity at the point 1/2:

σ(1/2) = 1/α,  σ(1/2 + 0⁺) = −σ(0⁺) = −1/α².

The plot of the function in Fig. 5.30 is found numerically. A detailed analysis shows that the function has finite discontinuities at all dyadic rational points. However, the amplitude of the jumps at points different from 0, 1/4, 1/2, 3/4 and 1 is small. Assuming that a one-dimensional map is an approximate form of the Poincare map of a dynamical system, we can think that the iterative process reflects the

Figure 5.30 Graph of the fractal function σ^{−1}. It defines the distribution of the scale-transformation factors related to the transition from the 2^n-supercycle to the 2^{n+1}-supercycle in different parts of the phase trajectory.

change of the variables over time. The change in motion parameters when bifurcations occur leads, in particular, to a change in the frequency spectrum. Here we demonstrate that universal patterns manifest themselves in this case as well. Let the index t enumerate the elements of the 2^n-cycle and play the role of time, on which the value of “a signal” depends:

X^{[n]}(t) = F^{(t)}(0, R_n),  t = 1, 2, 3, . . . , 2^n ≡ T_n.

Now we consider the element that was previously the zeroth as the last element of the cycle. Given that the “signal” is periodic, X^{[n]}(t) = X^{[n]}(t + T_n), we represent it in the form of a discrete Fourier expansion:

X^{[n]}(t) = Σ_{k=0}^{2^n − 1} a_k^{[n]} exp(2πikt/T_n).

We write down the general expression for the frequencies:

2πikt/T_n = i ω_k^{[n]} t  ⇒  ω_k^{[n]} = 2πk/T_n.

It is easy to show that the period doubling causes new spectral components, subharmonics, to emerge. The corresponding spectral lines are arranged symmetrically in the intervals between the lines that existed before the doubling:

T_{n+1} = 2T_n = 2^{n+1}  ⇒  ω_k^{[n+1]} = (1/2) ω_k^{[n]}.


Performing the inverse Fourier transform, we find the coefficients that represent the iterative sequence of the 2^n-supercycle:

a_k^{[n]} = (1/2^n) Σ_{t=1}^{2^n} X^{[n]}(t) exp(−2πikt/T_n) ≈ ∫_0^{T_n} (dt/T_n) X^{[n]}(t) exp(−2πikt/T_n).

Here the sum is rewritten in the more usual form of an integral; this means that we have extended the definition of the function X^{[n]}(t) to a piecewise constant (step) function. Also, we can get a similar expression for the sequence of the 2^{n+1}-supercycle, reducing the integration to the period T_n = 2^n:

a_k^{[n+1]} = ∫_0^{T_{n+1}} (dt/T_{n+1}) X^{[n+1]}(t) exp(−2πikt/T_{n+1}) = ∫_0^{2T_n} (dt/2T_n) X^{[n+1]}(t) exp(−πikt/T_n)

= ∫_0^{T_n} (dt/2T_n) X^{[n+1]}(t) exp(−πikt/T_n) + ∫_{T_n}^{2T_n} (dt/2T_n) X^{[n+1]}(t) exp(−πikt/T_n)

= ∫_0^{T_n} (dt/2T_n) [ X^{[n+1]}(t) exp(−πikt/T_n) + X^{[n+1]}(t + T_n) exp(−πik(t + T_n)/T_n) ]

= ∫_0^{T_n} (dt/2T_n) [ X^{[n+1]}(t) + (−1)^k X^{[n+1]}(t + T_n) ] exp(−πikt/T_n).

Let us look first at the coefficients with even numbers:

a_{2k}^{[n+1]} = ∫_0^{T_n} (dt/2T_n) [ X^{[n+1]}(t) + X^{[n+1]}(t + T_n) ] exp(−2πikt/T_n).

Note that the elements of the 2^{n+1}-supercycle in the square brackets correspond to the nearest points:

X^{[n+1]}(t + T_n) = F^{(2^n)}(X^{[n+1]}(t), R_{n+1}),

the element X^{[n]}(t) of the 2^n-supercycle being located between these points. The proximity of these elements allows one to make the approximate replacement:

X^{[n+1]}(t) + X^{[n+1]}(t + T_n) ≈ 2X^{[n]}(t).


As a result,

a_{2k}^{[n+1]} ≈ ∫_0^{T_n} (dt/2T_n) ⋅ 2X^{[n]}(t) exp(−2πikt/T_n) = a_k^{[n]}.

Then, the expression for the coefficients with odd numbers has the form

a_{2k+1}^{[n+1]} = ∫_0^{T_n} (dt/2T_n) [ X^{[n+1]}(t) − X^{[n+1]}(t + T_n) ] exp(−πi(2k + 1)t/T_n).  (5.17)

Further, we express the difference between the quantities in the square brackets via the universal function introduced previously:

X^{[n+1]}(t) − X^{[n+1]}(t + T_n) = X^{[n+1]}(t) − F^{(2^n)}(X^{[n+1]}(t), R_{n+1}) = d^{[n+1]}(t) = σ(t/2T_n) d^{[n]}(t).  (5.18)

The function d^{[n]}(t) involved in the right-hand side of eq. (5.18) can be expressed in terms of the Fourier coefficients corresponding to the 2^n-supercycle:

d^{[n]}(t) = X^{[n]}(t) − X^{[n]}(t + T_{n−1}) = Σ_{k=0}^{2^n − 1} [1 − (−1)^k] a_k^{[n]} exp(2πikt/T_n) = 2 Σ_{k=0}^{2^{n−1} − 1} a_{2k+1}^{[n]} exp(2πi(2k + 1)t/T_n).  (5.19)

Now we substitute eq. (5.19) into eq. (5.18). Then, we substitute the resulting expression into eq. (5.17) and find a relation between the odd-numbered Fourier coefficients of the 2^{n+1}- and 2^n-supercycles:

a_{2k+1}^{[n+1]} = Σ_{q=0}^{2^{n−1} − 1} a_{2q+1}^{[n]} ∫_0^{T_n} (dt/T_n) σ(t/2T_n) exp{ (2πit/T_n) [ 2q + 1 − (2k + 1)/2 ] }.  (5.20)

In order to calculate the integral on the right-hand side of eq. (5.20), we use the simplest approximation of the function σ(ξ): suppose it to be equal to α^{−2} within the interval ξ ∈ (0, 1/4) and to α^{−1} within the interval ξ ∈ (1/4, 1/2). In this case, we have


∫_0^{T_n} (dt/T_n) σ(t/2T_n) exp(2πiνt/T_n) = ∫_0^1 dζ σ(ζ/2) e^{2πiνζ} ≈ (1/α²) ∫_0^{1/2} dζ e^{2πiνζ} + (1/α) ∫_{1/2}^1 dζ e^{2πiνζ}

= (1/2πiν) [ (1/α²)(e^{iπν} − 1) + (1/α)(e^{2πiν} − e^{iπν}) ],  ν = 2q + 1 − (2k + 1)/2.

Consequently, eq. (5.20) takes the form

a_{2k+1}^{[n+1]} = (1/2πi) Σ_{q=0}^{2^{n−1} − 1} a_{2q+1}^{[n]} ( α^{−1} + α^{−2} + i(−1)^k (α^{−1} − α^{−2}) ) / ( 2q + 1 − (2k + 1)/2 ).  (5.21)

Thus, we have succeeded in obtaining an approximate relationship between the amplitudes of the spectral components that involves the universal constant α. The main contribution to the sum is given by the terms with small denominators. The resulting estimate is that the coefficients are related to each other through the rough approximation

|a_K^{[n+1]}| ≈ μ^{−1} |a_{K/2}^{[n]}|.

However, a simple analytical estimation of the sum in eq. (5.21) does not allow us to compute the magnitude of the coefficient μ with acceptable accuracy. Therefore, here we give its value obtained by a more rigorous approach [26, 27]: μ = 22.2. Usually, in the literature, this value is given in decibels: 10 ⋅ lg μ = 13.5 dB. For both lumped-parameter and distributed systems, the scenario of transition to chaos through an infinite period-doubling sequence is realized often enough, both in numerical simulations and in real experiments. Every time, this indicates the emergence of a low-dimensional chaotic attractor in the system; that is, the initial stage of turbulent motion is observed. The fact that the intensities of subharmonics of adjacent orders are in a ratio of ∼13.5 dB is extra evidence of the period-doubling scenario (Fig. 5.31). There are other routes to chaos. For example, periodic oscillations can go over into chaos with intermittency, where relatively long phases of “laminar” (almost periodic) motion alternate with short “turbulent” phases of irregular motion. The universality theory for the laminar motion near the threshold of chaos with intermittency was developed in Ref. [28]. Also, in the works [29, 30], the reader can find universal patterns for another important scenario of transition to chaos, in which quasiperiodic motion on a two-dimensional torus is destroyed.
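As an added numerical illustration of the appearance of subharmonics (not part of the original text), one can compare the discrete spectra of two successive supercycles of the logistic map; the parameter values below are approximate superstable values R_2 and R_3, quoted here for illustration.

```python
import numpy as np

def cycle(r, period, transient=1000):
    """Settle onto the attracting cycle of x -> r x (1 - x), return one period."""
    x = 0.5
    for _ in range(transient):
        x = r * x * (1 - x)
    out = []
    for _ in range(period):
        x = r * x * (1 - x)
        out.append(x)
    return np.array(out)

a4 = np.fft.fft(cycle(3.4985617, 4)) / 4   # coefficients a_k^[n] of the 4-cycle
a8 = np.fft.fft(cycle(3.5546409, 8)) / 8   # coefficients a_k^[n+1] of the 8-cycle

# The even-numbered coefficients of the doubled cycle approximately reproduce
# the old spectrum, while the odd-numbered ones are the newly born subharmonics.
print(np.abs(a8[::2]) - np.abs(a4))        # small differences
print(np.abs(a8[1::2]))                    # nonzero subharmonic amplitudes
```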

Figure 5.31 The appearance of subharmonics as a result of period-doubling bifurcations in the dynamics of an oscillating circuit with a nonlinear element (experiment [25]). The intensity ratio of subharmonics of ∼10 dB is consistent with the theoretical value of 13.5 dB.

5.7 Synchronization of Chaotic Oscillations

In Chapter 4, we reviewed the theory of synchronization of an oscillating system under an external periodic force. The synchronization phenomenon consists in the oscillation frequency becoming equal to the frequency of the external signal (frequency capture) for certain values of the system's parameters. In the absence of synchronization, these frequencies are different, and there are “beats” at the difference frequency. At the moment of frequency capture, a saddle-node bifurcation occurs, as a result of which the motion on a torus turns into a limit cycle. Studying coupled nonlinear systems exhibiting chaotic behavior, we can observe various transformations of their joint dynamics as the coupling coefficient changes. If the coupling leads to close dynamical regimes in the subsystems, it is natural to assume that we are dealing with the phenomenon of synchronization. However, in this case, the synchronization criterion is difficult to specify. There are a few interpretations of the concept of chaos synchronization; they are listed below.
(i) Chaotic oscillations are transformed into harmonic ones under the influence of an external harmonic signal (suppression of chaos).
(ii) In two coupled systems exhibiting chaotic behavior, the chaotic oscillations are “in phase” above a certain threshold value of the coupling parameter (the time dependencies copy each other).
(iii) There is a functional (correlation) relationship between the instantaneous states of the coupled partial systems (generalized synchronization).

A fairly complete analysis of the effect of synchronization in dynamical systems exhibiting regular and chaotic dynamics can be found in Ref. [31]. Here, we confine ourselves to discussing the simplest case of synchronization of chaos in a discrete-time system.


5.7.1 Synchronization in a System of Two Coupled Maps

Let us look at a one-dimensional map X_{N+1} = f(X_N). A suitable choice of the function on the right-hand side (see Sections 4.2 and 5.6) yields the simplest dynamic system with chaotic behavior. If two such systems interact with each other, with the interaction bringing the values of the variables describing their states closer together and vanishing when they are equal, the conditions for the synchronization phenomenon to be observed are created. A system of two identical, symmetrically coupled maps can be described, for example, by the equations

X_{N+1} = (1 − ε) f(X_N) + ε f(Y_N),
Y_{N+1} = ε f(X_N) + (1 − ε) f(Y_N).

When ε = 0, the iterative processes X_N → X_{N+1} and Y_N → Y_{N+1} are independent. On the contrary, if ε = 1/2, then X_N = Y_N starting from the second step. This is the case of extremely strong coupling, in which the regular or chaotic dynamics of the system of maps is absolutely similar to the dynamics of a single map. In other words, there is complete synchronization for any type of dynamics. As the value of the coupling parameter ε increases within 0 < ε < 1/2, a sequence of bifurcations takes place, leading to a threshold value ε = ε_c; when ε > ε_c, there is complete synchronization. Images of attractors in the system of two coupled skew-tent maps

X_{N+1} = X_N / a for 0 < X_N ≤ a,  X_{N+1} = (1 − X_N)/(1 − a) for a < X_N ≤ 1
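The coupled-map scheme above can be sketched numerically. In this added illustration, f is the skew-tent map with a = 0.7, the coupling values are chosen for contrast, and the time-averaged mismatch of the two states serves as a synchronization diagnostic:

```python
import random

def f(x, a=0.7):
    """Skew-tent map: x/a on (0, a], (1 - x)/(1 - a) on (a, 1]."""
    return x / a if x <= a else (1 - x) / (1 - a)

def mismatch(eps, steps=2000, tail=500, seed=1):
    """Average |X_N - Y_N| over the last `tail` steps of the coupled iteration."""
    rng = random.Random(seed)
    x, y = rng.random(), rng.random()
    total = 0.0
    for i in range(steps):
        fx, fy = f(x), f(y)
        x, y = (1 - eps) * fx + eps * fy, eps * fx + (1 - eps) * fy
        if i >= steps - tail:
            total += abs(x - y)
    return total / tail

print(mismatch(0.0))    # no coupling: mismatch stays of order the attractor size
print(mismatch(0.45))   # strong coupling: complete synchronization, mismatch -> 0
```

Scanning eps between 0 and 1/2 with this diagnostic locates the synchronization threshold ε_c discussed in the text.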