Working with Dynamical Systems: A Toolbox for Scientists and Engineers (Series in Computational Biophysics) [1 ed.] 1138591718, 9781138591714

This book provides working tools for the study and design of nonlinear dynamical systems applicable in physics and engineering.


English, 250 pages, 2020


Table of contents:
Cover
Half Title
Title Page
Copyright Page
Dedication
Table of Contents
Preface
1 Whence Dynamical Systems
1.1 Classical Mechanics
1.1.1 Conservative Equations of Motion
1.1.2 Systems with One Degree of Freedom
1.1.3 Symmetries and Conservation Laws
1.1.4 Interacting Particles
1.1.5 Dissipative Motion
1.2 Chemical Kinetics
1.2.1 Mass Action
1.2.2 Adsorption and Catalysis
1.2.3 Autocatalysis and Self-Inhibition
1.2.4 Thermal Effects
1.3 Biological Models
1.3.1 Population Dynamics
1.3.2 Epidemiological Models
1.3.3 Neural and Genetic Networks
1.4 Electric Currents
1.4.1 Electric Circuits
1.4.2 Electrochemical Reactions
1.4.3 Membrane Transport
1.5 Spatially Extended Systems
1.5.1 From Time to Coordinate Dependence
1.5.2 Fourier Decomposition
1.6 Continuous vs. Discrete
1.6.1 Iterative Systems
1.6.2 From Continuous to Discrete
1.6.3 Poincaré Maps
2 Local Bifurcations
2.1 Bifurcation of Stationary States
2.1.1 Branches of Stationary States
2.1.2 Bifurcation Expansion
2.1.3 Fold and Transcritical Bifurcations
2.1.4 Cusp Singularity
2.1.5 Higher Singularities
2.2 Stability and Slow Dynamics
2.2.1 Linear Stability Analysis
2.2.2 Stable and Unstable Manifolds
2.2.3 Exchange of Stability
2.2.4 Amplitude Equations
2.3 Bifurcations of Periodic Orbits
2.3.1 Hopf Bifurcation
2.3.2 Derivation of the Amplitude Equation
2.3.3 Instabilities of Periodic Orbits
2.4 Example: Exothermic Reaction
2.4.1 Bifurcation of Stationary States
2.4.2 Hopf Bifurcation
2.4.3 Branches of Periodic Orbits
2.5 Example: Population Dynamics
2.5.1 Prey–Predator Models
2.5.2 Stability and Bifurcations
2.5.3 Periodic Orbits
3 Global Bifurcations
3.1 Topology of Bifurcations
3.1.1 More Ways to Create and Break Periodic Orbits
3.1.2 Bifurcations in a System with Three Stationary States
3.2 Global Bifurcations in the Exothermic Reaction
3.2.1 Basin Boundaries
3.2.2 Saddle-Loop Bifurcations
3.2.3 Sniper Bifurcation
3.3 Bifurcation at Double-Zero Eigenvalue
3.3.1 Locating a Double Zero
3.3.2 Quadratic Normal Form
3.3.3 Expansion in the Vicinity of Cusp Singularity
3.4 Almost Hamiltonian Dynamics
3.4.1 Weak Dissipation
3.4.2 Hopf and Saddle-Loop Bifurcations
3.4.3 Bifurcation Diagrams
3.4.4 Basin Boundaries
3.5 Systems with Separated Time Scales
3.5.1 Fast and Slow Variables
3.5.2 Van der Pol Oscillator
3.5.3 FitzHugh–Nagumo Equation
3.5.4 Canards
3.6 Venturing to Higher Dimensions
3.6.1 Dynamics Near Triple-Zero Eigenvalue
3.6.2 Double Hopf Bifurcation
3.6.3 Blue Sky Catastrophe
4 Chaotic, Forced, and Coupled Oscillators
4.1 Approaches to Hamiltonian Chaos
4.1.1 Hiding in Plain Sight
4.1.2 Resonances and Small Divisors
4.1.3 Example: Hénon-Heiles Model
4.1.4 Quantitative Measures of Chaos
4.2 Approaches to Dissipative Chaos
4.2.1 Distilling Turbulence into Simple Models
4.2.2 Chaotic Attractors
4.2.3 Period-Doubling Cascade
4.2.4 Strange, Chaotic, or Both?
4.3 Chaos Near a Homoclinic
4.3.1 Shilnikov's Snake
4.3.2 Complexity in Chaotic Models
4.3.3 Lorenz Model
4.4 Weakly Forced Oscillators
4.4.1 Phase Perturbations
4.4.2 Forced Harmonic Oscillator
4.4.3 Weakly Forced Hamiltonian System
4.5 Effects of Strong Forcing
4.5.1 Universal and Standard Mappings
4.5.2 Forced Dissipative Oscillators
4.5.3 Forced Relaxation Oscillator
4.6 Coupled Oscillators
4.6.1 Phase Dynamics
4.6.2 Coupled Pendulums
4.6.3 Coupled Relaxation Oscillators
4.6.4 Synchronization in Large Ensembles
5 Dynamical Systems in Space
5.1 Space-Dependent Equilibria
5.1.1 Basic Equations
5.1.2 Stationary Solution in One Dimension
5.1.3 Systems with Mass Conservation
5.2 Propagating Fronts
5.2.1 Advance into a Metastable State
5.2.2 Propagation into an Unstable State
5.2.3 Pushed Fronts
5.3 Separated Time and Length Scales
5.3.1 Two-Component Reaction–Diffusion System
5.3.2 Stationary and Mobile Fronts
5.3.3 Stationary and Mobile Bands
5.3.4 Wave Trains
5.4 Symmetry-Breaking Bifurcations
5.4.1 Amplitude Equations
5.4.2 Bifurcation Expansion
5.4.3 Interacting Modes
5.4.4 Plane Waves and their Stability
5.5 Resonant Interactions
5.5.1 Triplet Resonance
5.5.2 Stripes–Hexagons Competition
5.5.3 Standing Waves
5.6 Nonuniform Patterns
5.6.1 Propagation of a Stationary Pattern
5.6.2 Self-Induced Pinning
5.6.3 Propagating Wave Pattern
5.6.4 Nonuniform Wave Patterns
Bibliography
Online Files
Illustration Credits


Working with Dynamical Systems

Series in Computational Biophysics
Series Editor: Nikolay Dokholyan

Molecular Modeling at the Atomic Scale: Methods and Applications in Quantitative Biology, Ruhong Zhou
Coarse-Grained Modeling of Biomolecules, Garegin A. Papoian
Computational Approaches to Protein Dynamics: From Quantum to Coarse-Grained Methods, Monika Fuxreiter
Modeling the 3D Conformation of Genomes, Guido Tiana, Luca Giorgetti
Biomembrane Simulations: Computational Studies of Biological Membranes, Max L. Berkowitz
Working with Dynamical Systems: A Toolbox for Scientists and Engineers, LM Pismen

For more information about this series, please visit: www.crcpress.com/Series-in-Computational-Biophysics/book-series/CRCSERCOMBIO

Working with Dynamical Systems A Toolbox for Scientists and Engineers

Professor LM Pismen Department of Chemical Engineering, Technion - Israel Institute of Technology

First edition published 2021 by CRC Press, 6000 Broken Sound Parkway NW, Suite 300, Boca Raton, FL 33487-2742, and by CRC Press, 2 Park Square, Milton Park, Abingdon, Oxon, OX14 4RN

© 2021 Taylor & Francis Group, LLC. CRC Press is an imprint of Taylor & Francis Group, LLC.

Reasonable efforts have been made to publish reliable data and information, but the author and publisher cannot assume responsibility for the validity of all materials or the consequences of their use. The authors and publishers have attempted to trace the copyright holders of all material reproduced in this publication and apologize to copyright holders if permission to publish in this form has not been obtained. If any copyright material has not been acknowledged please write and let us know so we may rectify in any future reprint.

Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced, transmitted, or utilized in any form by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying, microfilming, and recording, or in any information storage or retrieval system, without written permission from the publishers.

For permission to photocopy or use material electronically from this work, access www.copyright.com or contact the Copyright Clearance Center, Inc. (CCC), 222 Rosewood Drive, Danvers, MA 01923, 978-750-8400. For works that are not available on CCC please contact [email protected]

Trademark notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation without intent to infringe.

ISBN: 9781138591714 (hbk)
ISBN: 9780429488856 (ebk)

Typeset in Computer Modern font by Cenveo Publisher Services

Visit the companion website/eResources: www.routledge.com/9781138591714

To my daughter Anne, my best success in teaching math


Preface

Why Dynamical Systems?

Things change in time. They tend to change gradually and continuously, though sometimes an abrupt transition occurs. Ordinary differential equations (ODE) are natural models of gradual change. One starts by assigning a set of variables. It should be possible to define them in such a way that the rate of change at any moment is determined in a unique way by their instantaneous values. Then the evolution is deterministic, and future states of the system can be predicted by integrating these equations starting from known initial conditions.

However, it is not that simple in the real world. Any model containing a finite number of variables is imperfect, and no values can ever be measured precisely. Therefore "solving" equations is not enough. Dynamical systems may exhibit complex temporal behavior, and sometimes, even though evolution remains deterministic, unreachable precision is required to predict the outcome of a disorderly process. Dynamical systems also serve as a tool for understanding still more complex spatially distributed systems, when a few dominant modes are sufficient for an adequate approximate description of spatio-temporal dynamics, or when a single spatial coordinate plays the role of time.

People may be interested in dynamical systems for different reasons. For the mathematician, it is a logical arena and a way to look into the abyss of chaos. For the physicist, it is a station on the way of inquiry into ever more complex dynamics. The engineer may pursue the opposite aim of taming the system and eliminating all dangerous quirks of unruly nonlinearities and feedback loops.

We do not look at a dynamical system as an abstract object. The first chapter of this book emphasizes the origins of the systems we study. We wish to see common features in systems of different origins, and to apply common methods of study without losing sight of possible complications and uncertainties related to their physical origin.

Why This Book?

Do we need another book on dynamical systems? Hasn't it been thoroughly explored and explained by the great mathematicians of the last century, from Henri Poincaré to Vladimir Arnold? Have not the books on the topic filled long meters of shelves in


academic libraries, now thinned out by the librarians to clear space for cozy nooks where the students play with their smartphones and laptops, while the faculty browse arxiv.org on huge desktop screens in their offices? I embarked on writing this book nevertheless, with some pushing by the editors. Nonlinearities and bifurcations have accompanied me throughout my variegated academic career, from chemical reactors to fluids to topological singularities and back to wetting and soft matter.

Dynamical systems is an old and well-studied topic. This book is essentially a 20th-century book spiced by 21st-century computer graphics. The 20th century was the summit of analytical insight into complex dynamics, but this insight has not yet penetrated undergraduate curricula, which still discuss the paradoxes of infinitesimals that perplexed the ancient Greeks and advance only as far as the linear equations of the 18th century. This leaves the students to play blindly with 21st-century computers when encountering nonlinear problems of real life. I am trying here to present advanced classical tools as a template for exploring the complexity hidden in the run-of-the-mill physical and engineering models briefly reviewed in the first chapter.

In the following two chapters, the analysis of nonlinear dynamical systems, with their plethora of local and global bifurcations among stationary and periodic attractors and repellers, is carried out in a way that mathematicians may characterize as formal, helped by the classical tools of perturbations, multiscaling, and asymptotic matching, before turning to numerics as the last resort. Chaos makes an appearance in the fourth chapter, where the dynamics of forced and interacting oscillators is discussed as well. The last chapter is devoted to extending the methods of dynamical systems to problems involving spatial patterns and waves. This is a door to a much wider world of dynamical patterns, which is beyond the scope of this book.
The presentation style is not suitable for lovers of theorems. I am trying to use plain English as far as possible in a text interspersed with formulas. The material develops from elementary to more advanced, but we have to stop when encountering more complicated and specialized topics. The accompanying Mathematica notebooks, demonstrating the application of general methods to specific problems that appear simple but hide inherent complexity, may serve both as illustrations of the general derivations and as templates for quantitative analysis of similar systems by the devoted reader.

Rules of Engagement

We intend to study phenomena in their simplest form, using dimensionless variables and parameters to get more widely applicable results with the least clutter. The material of this book is supposed to be self-contained: the reader should be able to reproduce all computations rather than rely on an outside authority. After some doubts, I decided against including the elementary exercises usually given in textbooks, which are mostly helpful to teachers of elementary courses for assigning tests but otherwise may be just a waste of time. The opposite extreme is the exercises in the famous course by Landau and Lifshitz, which are based on old journal publications. In our time,


journal publications are usually far more complicated and specialized, and are read only by those continuing the same or a related line of research. The best strategy for the actively engaged reader would be to reproduce the derivations in the text, applying them to similar problems, in the best case related to their own line of study or research. The online Mathematica notebooks complementing the text can be helpful in this respect. They are open and transparent; the user gets to understand the inner workings of the mathematical operations rather than press buttons and get results out of a black box. It is not essential to understand the programming: the user should be able to change program inputs, such as equations and boundary conditions, without attending to the panoply of commands. The notebooks may also be helpful, though to a lesser degree, to readers who do not have access to Mathematica, as they could still use the free Wolfram Player to follow the derivations in more detail than in the main text, and may be able to reproduce the computations in a more familiar or more accessible program. The notebooks are written in the latest version, Mathematica 12; the reader is warned that changes in programming in updated versions sometimes make some components of old notebooks incompatible. The logical structure of Mathematica 1 was a brilliant creation of the genius of Stephen Wolfram and his early collaborators, while further development went along the common path of adding more features of convenience, first of all fine graphics, the source of almost all illustrations in this book.

Bibliographical Notes

There is a limited number of references in this book, confined to specific problems treated in detail and to wider surveys. The most prominent names are mentioned, but I do not expect the reader to look into original publications in obscure journals, nor to engage in priority issues. Neither do I wish to endorse some monographs or review papers over others. The inquisitive reader interested in going further along their own line of inquiry will not be spared the adventure of googling the amazing maze of scientific literature or, more safely, restricting the search to more reliable sources in the Web of Science. Relevant references in the text can serve as a seed for this search.

L. Pismen
Haifa, Israel
May 2020

https://www.routledge.com/Working-with-Dynamical-Systems-A-Toolbox-for-Scientists-and-Engineers/Pismen/p/book/9781138591714

1 Whence Dynamical Systems

Chapter Overview

What is a dynamical system? A wide definition would be that it is any system developing in time. This definition is much too wide, as it encompasses spatially distributed systems with an infinite number of degrees of freedom. A working definition should be more restrictive. The core of the subject lies in initial-value problems for ordinary differential equations (ODE). Discrete evolving systems should be included as well, as they are intimately connected to continuous ones, as we shall see. A dynamical system defined in this way is a relatively simple mathematical object, but more complex phenomena taking place in spatially distributed systems may sometimes be reduced to simpler descriptions without losing their essential features, and the study of boundary-value ODE problems may also benefit from the techniques developed in the theory of dynamical systems.

The essential feature of the problems we are interested in, whether discrete or continuous, is nonlinearity. A linear system is bland and easily molded by external inputs. By contrast, a nonlinear system has a character of its own, tends to be attracted to certain states, and may exhibit mutable, and even irregular, behavior. It is the changes in behavior that we are mostly interested in.

This introductory chapter has a wide scope but just touches upon the problems to be discussed at length in the sequel. The material of different sections of this chapter might be very familiar (and therefore superfluous) to some readers but novel to others. Our aim here is to make clear that dynamical systems are not a mathematical abstraction but are ubiquitous in nature and technology. We will see that similar equations may come from entirely different backgrounds, and the essential difference may lie in their general properties rather than in their origin. What are these essential properties?
There is an apparently clear divide between conservative and dissipative systems, but this distinction is blurred in open systems interacting with their environment. Our attention is focused on dynamical systems of different origins (mechanical, chemical, and biological) that are likely to display interesting (complex, nontrivial) behavior. The tools for studying the various dynamic behaviors will be developed in the following chapters.


1.1 Classical Mechanics

1.1.1 Conservative Equations of Motion

Classical mechanics is the oldest – classical indeed – source of dynamical systems. Problems of classical mechanics are not a centerpiece of this book, but we will start with them anyway, since they provided methods and ideas for many other applications.

Lagrangian formalism. Classical mechanics deals with nondissipative motion of point particles. The state of a mechanical system at any given moment is determined by the positions of these particles $\mathbf{q}_n$, which are vectors that can be represented by components $q_n^j$ in a coordinate system fitting the geometry of the problem. To predict positions at future times, it is sufficient to know the instantaneous velocities $\mathbf{v}_n = d\mathbf{q}_n/dt \equiv \dot{\mathbf{q}}_n$. The index $n$ marks here the particles, and the index $j$, the vector components. One can arrange all variables in arrays $\boldsymbol{Q}, \dot{\boldsymbol{Q}}$; together, they span the $2N$-dimensional phase space of the system; $N$ is the number of degrees of freedom, equal to the number of particles multiplied by the number of spatial dimensions.¹

Derivation of the equations of motion starts from the Lagrangian $L(\boldsymbol{Q}, \dot{\boldsymbol{Q}})$:

$$L = \frac{1}{2}\sum_n m_n |\dot{\mathbf{q}}_n|^2 - V(\boldsymbol{Q}). \tag{1.1}$$

The first term is the kinetic energy, and the second term is the potential energy; $m_n$ is the mass of the $n$th particle. The equations of motion are obtained by varying the action integral $S = \int L\,dt$:

$$\frac{d}{dt}\frac{\partial L}{\partial \dot{\mathbf{q}}_n} - \frac{\partial L}{\partial \mathbf{q}_n} = 0. \tag{1.2}$$

This leads to the Newton law

$$m_n \ddot{\mathbf{q}}_n = \mathbf{F}_n(\boldsymbol{Q}), \tag{1.3}$$

where $\mathbf{F}_n = -\partial V/\partial \mathbf{q}_n$ is the force acting on the $n$th particle. The equations of motion imply conservation of energy:

$$E = \sum_n \dot{\mathbf{q}}_n \cdot \frac{\partial L}{\partial \dot{\mathbf{q}}_n} - L = \frac{1}{2}\sum_n m_n |\dot{\mathbf{q}}_n|^2 + V(\boldsymbol{Q}). \tag{1.4}$$

Trajectories of a conservative mechanical system lie on the manifold $E = \text{const}$ in the phase space $(\boldsymbol{Q}, \dot{\boldsymbol{Q}})$; the dimensionality of this manifold can be further reduced if there are additional conservation laws, which may follow from the symmetries of the system. Equilibria are extrema of the potential $V(\boldsymbol{Q})$, wherein all forces vanish.

¹ Arrays are printed in bold italics to distinguish them from vectors. The dot product may stand for either vector or array contraction, i.e., the sum over the respective components.
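Equations (1.3) and (1.4) can be checked numerically: integrate the Newton law for a single particle and watch the energy stay (nearly) constant along the trajectory. The book's companion code is in Mathematica; the sketch below is illustrative Python, not from the text, and the quartic potential V(q) = q⁴/4 is a hypothetical choice made only for the demo.

```python
# Illustrative sketch (assumed example): integrate m*q'' = F(q) = -V'(q)
# for one particle and check that the energy (1.4) is conserved.

def V(q):
    return 0.25 * q**4          # hypothetical quartic potential

def F(q):
    return -q**3                # F = -dV/dq

def energy(q, v, m=1.0):
    return 0.5 * m * v**2 + V(q)   # kinetic plus potential energy, eq. (1.4)

def verlet(q, v, dt, steps, m=1.0):
    """Velocity-Verlet: a standard symplectic scheme for conservative motion."""
    a = F(q) / m
    for _ in range(steps):
        q += v * dt + 0.5 * a * dt**2
        a_new = F(q) / m
        v += 0.5 * (a + a_new) * dt
        a = a_new
    return q, v

q0, v0 = 1.0, 0.0
E0 = energy(q0, v0)
q1, v1 = verlet(q0, v0, dt=1e-3, steps=20000)
drift = abs(energy(q1, v1) - E0)
print(f"initial E = {E0:.6f}, energy drift after 20000 steps = {drift:.2e}")
```

A symplectic scheme is used deliberately: it preserves the conservative structure of (1.3) up to a small bounded error, so the trajectory stays close to the manifold E = const discussed above.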


Hamiltonian formalism. The Hamiltonian formalism involves, instead of the velocities $\dot{\mathbf{q}}_n$, the momenta $\mathbf{p}_n = \partial L/\partial \dot{\mathbf{q}}_n$. The Hamiltonian function is defined as

$$H = \sum_n \mathbf{p}_n \cdot \dot{\mathbf{q}}_n - L. \tag{1.5}$$

The coordinates and momenta are conjugate variables, and the equations of motion equivalent to (1.3) are obtained as

$$\dot{\mathbf{q}}_n = \frac{\partial H}{\partial \mathbf{p}_n}, \qquad \dot{\mathbf{p}}_n = -\frac{\partial H}{\partial \mathbf{q}_n}, \tag{1.6}$$

or, in the component notation,

$$\dot{q}_n^j = \frac{\partial H}{\partial p_{nj}}, \qquad \dot{p}_{nj} = -\frac{\partial H}{\partial q_n^j}. \tag{1.7}$$

We have to keep in mind that the momentum is a covariant vector, unlike the contravariant position $\mathbf{q}$ and velocity $\mathbf{v} = \dot{\mathbf{q}}$ vectors. Components of a covariant vector are written with lower vector indices, to distinguish them from upper-index contravariant components. The straightforward relation $\mathbf{p} = m\mathbf{v}$ can be used in Cartesian coordinates only. Otherwise, this relation has to be written explicitly as $p_i = m g_{ij} v^j$, where $g_{ij}$ is the applicable metric tensor, and summation over repeating upper and lower indices is implied. For example, in the spherical coordinates $r, \theta, \phi$, the metric tensor is diagonal with the elements $\{1, r^2, r^2\sin^2\theta\}$.

Provided the Hamiltonian does not depend on time explicitly, its conservation follows directly from the equations of motion:

$$\frac{dH}{dt} = \sum_n \left( \frac{\partial H}{\partial \mathbf{q}_n} \cdot \frac{d\mathbf{q}_n}{dt} + \frac{\partial H}{\partial \mathbf{p}_n} \cdot \frac{d\mathbf{p}_n}{dt} \right) = 0. \tag{1.8}$$

The Hamiltonian is related to the action integral, defined by integrating the Lagrangian in time along a certain trajectory, so that $\dot{S} = L$. Using this in (1.5) yields

$$\delta S = \sum_n \mathbf{p}_n\, d\mathbf{q}_n - H\,dt. \tag{1.9}$$

According to this relation, the partial derivatives of the action with respect to the coordinates are equal to the respective momenta, $\partial S/\partial \mathbf{q}_n = \mathbf{p}_n$, while $\partial S/\partial t = -H$. This leads to the Hamilton–Jacobi equation

$$\frac{\partial S}{\partial t} + H(\boldsymbol{Q}, S_{\boldsymbol{Q}}) = 0, \tag{1.10}$$

where $S_{\boldsymbol{Q}}$ is the array of partial derivatives $\partial S/\partial \mathbf{q}_n$. The advantage of this form is that it includes only first-order derivatives; it will be particularly useful in Sect. 4.1.2.

CHAPTER 1. WHENCE DYNAMICAL SYSTEMS


1.1.2 Systems with One Degree of Freedom

Basic equations. The simplest case is one-dimensional (1D) motion of a single particle (N = 1). The Lagrangian and Hamiltonian, with the coterie of indices dropped, are

L = ½mq̇² − V(q),  H = p²/(2m) + V(q).  (1.11)

The respective equation of motion is

mq̈ = F(q),  F = −V′(q).  (1.12)

Generally, the force may vanish at several positions corresponding to stable equilibria q = qj(s) [minima of V(q) with F′(q) < 0] and unstable equilibria q = qj(u) [maxima of V(q) with F′(q) > 0]. Evidently, these equilibria must alternate. The dynamics can be visualized as frictionless motion of a massive point particle on a slide with the profile proportional to V(q) under the action of gravity.

Nonlinear oscillator. The simplest nontrivial example is a nonlinear oscillator with a symmetric double-well potential, which we shall write in a dimensionless form V(q) = ¼(1 − q²)². The mass m can be incorporated in the energy scale, and in this formulation there is no difference between the velocity q̇ and the momentum p (note, however, the basic distinction explained in Sect. 1.1.1). The orbits in the phase plane q, p are curves of constant energy levels:

E = p²/(2m) + V(q) = const.  (1.13)

One can see in Fig. 1.1 two kinds of phase space orbits separated by a figure-eight passing through the unstable equilibrium q = p = 0. This orbit, corresponding to E = E0 = 1/4, is called a homoclinic. It has an infinite period – we'll have much more to say on this in the following two chapters. At 0 < E < E0, the orbits are disconnected, and correspond to oscillations around one of the stable equilibria q = ±1. At higher energies, E > 1/4, the orbits lie outside the homoclinic and surround all three equilibria. All these orbits can be easily drawn by plotting the function p = ±√(2(E − V(q))) for some values of energy below and above E0.

Figure 1.1. (a): A symmetric double-well potential. (b): The phase plane of a nonlinear oscillator showing three kinds of orbits

Figure 1.2. The phase plane of a pendulum

Ideal pendulum. Another example – physical but as highly abstract as all textbook examples – is a pendulum with a massive bob tied to a fixed pivot by a weightless noninertial stiff rod of length l and moving without friction in a fixed plane in the field of gravity. There is a single position variable – the deviation angle θ from the rest position – and the potential is V(θ) = −mgl cos θ. The Lagrangian is

L = ½ml²θ̇² + mgl cos θ.  (1.14)

The parameters – the mass m and the acceleration of gravity g, as well as l – can be removed by choosing mgl as the energy scale and √(l/g) as the time scale. The dimensionless Lagrangian and Hamiltonian rewritten in these units are

L = ½θ̇² + cos θ,  H = ½p² − cos θ.  (1.15)

The respective dimensionless equation of motion is

θ̈ + sin θ = 0.  (1.16)

The equilibria are θ = nπ with any integer n, but those with n even are stable (minima of V) and those with n odd are unstable (maxima of V). The orbits in the phase plane θ, p = θ̇ (Fig. 1.2) are level curves E = const expressed, as before, by the function p = ±√(2(E + cos θ)). One can observe that there is no difference in positions with angles differing by 2π. There is, however, a qualitative difference between undulation, with the angle


oscillating around a stable equilibrium, and rotation, with the angle increasing or decreasing monotonically. The two kinds of trajectories are separated by an infinite-period orbit passing through the unstable equilibria, which corresponds to the level E = 1 (Fig. 1.2). This distinction would not be as clear if, say, the vertical position z = −l cos θ, rather than the angle, were chosen as a dynamic variable. On the other hand, since θ modulo 2π corresponds to the same spatial position, one could hesitate whether to call an infinite-period orbit a homoclinic (one starting and ending at the same equilibrium state) or a heteroclinic (one connecting two distinct equilibria).
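The two kinds of pendulum orbits can be checked numerically. The sketch below (not from the book; initial conditions are illustrative) integrates the dimensionless equation of motion (1.16) with a fourth-order Runge–Kutta scheme and verifies that the energy E = ½p² − cos θ is conserved, that an orbit with E < 1 stays bounded (undulation), and that one with E > 1 keeps turning (rotation):

```python
import math

def rk4_step(f, y, t, dt):
    # One classical fourth-order Runge-Kutta step for y' = f(t, y).
    k1 = f(t, y)
    k2 = f(t + dt/2, [a + dt/2*b for a, b in zip(y, k1)])
    k3 = f(t + dt/2, [a + dt/2*b for a, b in zip(y, k2)])
    k4 = f(t + dt, [a + dt*b for a, b in zip(y, k3)])
    return [a + dt/6*(b + 2*c + 2*d + e)
            for a, b, c, d, e in zip(y, k1, k2, k3, k4)]

def pendulum(t, y):
    theta, p = y
    return [p, -math.sin(theta)]          # (1.16): theta'' + sin(theta) = 0

def energy(y):
    return 0.5*y[1]**2 - math.cos(y[0])   # E = p^2/2 - cos(theta), cf. (1.15)

def integrate(y0, T=20.0, dt=1e-3):
    y, t, traj = list(y0), 0.0, [list(y0)]
    for _ in range(int(round(T/dt))):
        y = rk4_step(pendulum, y, t, dt)
        t += dt
        traj.append(list(y))
    return traj

lib = integrate([0.0, 1.0])   # E = -0.5 < 1: undulation around theta = 0
rot = integrate([0.0, 2.5])   # E = 2.125 > 1: rotation
drift_lib = abs(energy(lib[-1]) - energy(lib[0]))
drift_rot = abs(energy(rot[-1]) - energy(rot[0]))
max_angle_lib = max(abs(y[0]) for y in lib)   # bounded by arccos(0.5) ~ 1.047
final_angle_rot = rot[-1][0]                  # grows without bound
```

The undulating orbit never leaves the interval |θ| < arccos(−E), while the rotating one accumulates angle at a rate of at least √(2(E − 1)).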

1.1.3 Symmetries and Conservation Laws

Spherical pendulum. The above example can be slightly modified by removing the restriction of planar motion. This is a spherical pendulum (Fig. 1.3a); its motion should be viewed in spherical coordinates θ, φ. In this case, there are four phase variables, but the equations of motion remain simple, since the potential depends on the azimuthal angle θ only. The Lagrangian can be written in the dimensionless form scaled as in (1.15):

L = ½(θ̇² + φ̇² sin²θ) + cos θ.  (1.17)

Take note that sin²θ is the element of the metric tensor that has to be entered when the respective coordinates are multiplied. The equations of motion are more transparent when derived from the Hamiltonian

H = pθθ̇ + pφφ̇ − L = ½(pθ² + pφ²/sin²θ) − cos θ.  (1.18)

The Hamiltonian is independent of the polar angle φ; this symmetry results in conservation of the respective momentum pφ = ∂L/∂φ̇ = φ̇ sin²θ. Notably, this does not imply a constant angular velocity φ̇, as the rotation rate in the horizontal plane varies with the changing polar angle. The motion, combining azimuthal undulation with rotation around the symmetry axis, looks superficially rather complex, but remains predictable. Since the system is constrained by two conservation laws (energy and angular momentum), phase space trajectories cover, generally, a 2D surface in the 4D phase space.

Any continuous symmetry implies a conservation law (Noether's theorem). In particular, conservation of energy is a consequence of the symmetry to translations in time, whereby the Lagrangian does not depend on time explicitly. The conservation of the angular momentum follows from the symmetry to azimuthal translations.

Integrable systems. A symmetry may not be as obvious as that. It might be possible, however, to transform the coordinates and momenta to action–angle variables Jn, Θn, such that the Hamiltonian acquires a normal form dependent on Jn only.

Figure 1.3. (a): A spherical pendulum. (b): Coupled pendulums tied by an elastic spring. (c): A double pendulum. (d): Long exposure of a double pendulum exhibiting chaotic motion (tracked with an LED)

Then the equations of motion become

dΘn/dt = ∂H/∂Jn,  dJn/dt = 0.  (1.19)

Thus, the action variables are integrals of motion, while the angle variables rotate with a constant speed. When such a canonical transformation is possible, the system is integrable, and can be fully resolved analytically. The dynamics reduces to rotation around an N-dimensional torus (where N is the number of degrees of freedom). Since the rotation periods are, generally, incommensurate, the motion is quasiperiodic: the system never returns to the original state. This is also true in the simple case of a spherical pendulum. Integrable systems are nongeneric; they can be useful, however,


for studying more complex dynamics when conservation laws are slightly perturbed (see Sect. 4.1).
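Conservation of pφ and of the energy for the spherical pendulum can be verified numerically. The sketch below (illustrative initial conditions, not from the book) integrates the Euler–Lagrange equations following from (1.17) and checks that both quantities stay constant along the trajectory:

```python
import math

def rhs(t, y):
    # Euler-Lagrange equations of (1.17):
    #   theta'' = phi'^2 sin(theta) cos(theta) - sin(theta)
    #   d/dt (phi' sin^2 theta) = 0  =>  phi'' = -2 theta' phi' cos(theta)/sin(theta)
    theta, dtheta, phi, dphi = y
    s, c = math.sin(theta), math.cos(theta)
    return [dtheta, dphi*dphi*s*c - s, dphi, -2.0*dtheta*dphi*c/s]

def rk4_step(f, y, t, dt):
    k1 = f(t, y)
    k2 = f(t + dt/2, [a + dt/2*b for a, b in zip(y, k1)])
    k3 = f(t + dt/2, [a + dt/2*b for a, b in zip(y, k2)])
    k4 = f(t + dt, [a + dt*b for a, b in zip(y, k3)])
    return [a + dt/6*(b + 2*c + 2*d + e)
            for a, b, c, d, e in zip(y, k1, k2, k3, k4)]

def p_phi(y):                       # conserved azimuthal momentum
    return y[3]*math.sin(y[0])**2

def energy(y):                      # energy evaluated on the trajectory
    theta, dtheta, phi, dphi = y
    return 0.5*(dtheta**2 + (dphi*math.sin(theta))**2) - math.cos(theta)

y, t, dt = [1.0, 0.0, 0.0, 0.8], 0.0, 1e-3
p0, e0, min_sin = p_phi(y), energy(y), math.sin(1.0)
for _ in range(20000):              # integrate to t = 20
    y = rk4_step(rhs, y, t, dt)
    t += dt
    min_sin = min(min_sin, math.sin(y[0]))

p_drift = abs(p_phi(y) - p0)
e_drift = abs(energy(y) - e0)
```

The angular velocity φ̇ itself varies along the orbit, while pφ = φ̇ sin²θ stays fixed; the centrifugal barrier also keeps the trajectory away from the poles, so the division by sin θ above is safe for this initial condition.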

1.1.4 Interacting Particles

Coupled pendulums. Motion becomes far more complicated when interactions between particles are taken into account. In this case, coordinates of different particles in V(Q) cannot be separated, so that the force Fn = −∂V/∂qn acting on the nth particle depends on the other particles' positions. As an example, consider two coupled pendulums with rods of the same length l moving in parallel planes separated by the distance d, and with identical bobs tied by an elastic spring, which is unstrained in the rest position (Fig. 1.3b). The squared distance between the bobs is d² + 2l²[1 − cos(θ1 − θ2)], where θn are the deviation angles, and the Lagrangian, brought to the dimensionless form as in Sect. 1.1.2, is

L = Σn [½θ̇n² + cos θn] − (k/2)[1 − cos(θ1 − θ2)],  (1.20)

where k is the dimensionless elastic constant of the spring, which controls the strength of interaction. The dynamic behavior is influenced thereby in contradictory ways. At k → 0, the two pendulums are independent, while at k → ∞, they undulate synchronously as a single body. Thus, it is at intermediate strength of interaction that nontrivial dynamics is to be expected.

Elastic pendulum. A simpler device apt to behave chaotically is an elastic pendulum hung on a weightless spring rather than on a stiff rod. Here the pendulum and the spring are "interacting particles", and the dynamic variable additional to the deviation angle θ is the relative elongation of the spring x. The dimensionless Lagrangian is

L = ½[(1 + x)²θ̇² + ẋ²] + (1 + x) cos θ − (k/2)x².  (1.21)

Similar to the case of coupled pendulums, in the limit of a very large elastic constant the system reverts to a simple pendulum, but a very small elastic constant would allow the bob to drop indefinitely far from the pivot.

Double pendulum. Another example is a double pendulum with massive bobs and weightless stiff rods (Fig. 1.3c). There are four variables, the angles of both rods θn, n = 1, 2, and the corresponding velocities θ̇n or momenta pn, but no symmetries at hand to simplify the problem, even when the pendulums are identical. Assuming this is true, we can use the same units as in Sect. 1.1.2 to write the Lagrangian in the form (Landau and Lifshitz, 1960)

L = θ̇1² + ½θ̇2² + θ̇1θ̇2 cos(θ1 − θ2) + 2 cos θ1 + cos θ2.  (1.22)


On the road to chaos. In all the above cases, the source of complications lies in the interaction terms containing cos(θ1 − θ2) or both x and cos θ. The levels H = const are now (generically) 3D hypersurfaces, and the motion can be chaotic – and, for all practical purposes, unpredictable, even though the equations of motion remain, of course, fully deterministic. The entanglement is stronger in the case of a double pendulum, where not only the coordinates but also their derivatives or momenta appear in the same nonlinear term. A chaotic trajectory traced in the experiment is shown in Fig. 1.3d. Coupled and elastic pendulums are easier to analyze, since the weak interaction limit can be tested in these cases. Chaotic dynamics of these models can be analyzed in the same way as a more abstract example in Sect. 4.1.3.

Henri Poincaré understood this more than a century ago, before quantum uncertainty shocked science, but nobody paid attention. His motivation was a study of a three-body planetary interaction – a more realistic and practically important problem than frictionless mathematical pendulums with weightless rods or springs. Learned philosophers as well as common folk kept viewing classical mechanics as a model of determinism, even though a die and a roulette, tools of games of chance, are respectable macroscopic mechanical systems. The reason was that only an unrepresentative minority of mechanical systems – those that can be resolved analytically – could be studied and understood before computers became commonplace (more on this in Sect. 4.1.1).
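Sensitivity to initial conditions is easy to probe numerically. The sketch below (illustrative parameter values, not from the book) integrates the coupled pendulums (1.20) twice, from initial angles differing by 10⁻⁸, and watches the separation grow while the energy of each run stays conserved:

```python
import math

K = 0.5   # dimensionless spring constant in (1.20), an illustrative value

def rhs(t, y):
    # From (1.20): theta1'' = -sin(theta1) - (K/2) sin(theta1 - theta2), and symmetrically
    t1, p1, t2, p2 = y
    s = math.sin(t1 - t2)
    return [p1, -math.sin(t1) - 0.5*K*s, p2, -math.sin(t2) + 0.5*K*s]

def energy(y):
    t1, p1, t2, p2 = y
    return (0.5*(p1*p1 + p2*p2) - math.cos(t1) - math.cos(t2)
            + 0.5*K*(1.0 - math.cos(t1 - t2)))

def rk4_step(f, y, t, dt):
    k1 = f(t, y)
    k2 = f(t + dt/2, [a + dt/2*b for a, b in zip(y, k1)])
    k3 = f(t + dt/2, [a + dt/2*b for a, b in zip(y, k2)])
    k4 = f(t + dt, [a + dt*b for a, b in zip(y, k3)])
    return [a + dt/6*(b + 2*c + 2*d + e)
            for a, b, c, d, e in zip(y, k1, k2, k3, k4)]

eps = 1e-8
ya = [2.0, 0.0, -1.0, 0.0]            # large-amplitude initial state
yb = [2.0 + eps, 0.0, -1.0, 0.0]      # slightly perturbed copy
ea0, eb0 = energy(ya), energy(yb)
t, dt = 0.0, 1e-3
for _ in range(50000):                # integrate both copies to t = 50
    ya = rk4_step(rhs, ya, t, dt)
    yb = rk4_step(rhs, yb, t, dt)
    t += dt

separation = math.sqrt(sum((a - b)**2 for a, b in zip(ya, yb)))
drift_a = abs(energy(ya) - ea0)
drift_b = abs(energy(yb) - eb0)
```

At large amplitudes the separation typically grows by many orders of magnitude while each run keeps its energy: determinism and practical unpredictability coexist.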

1.1.5 Dissipative Motion

Ubiquity of dissipation. As conservation laws are woven into the canvas of physics, dissipative (nonconservative) models always appear as approximate or coarse-grained models describing the averaged behavior of complex systems. Dissipation may appear in a mechanical system as an additional term taking into account friction, which is, in fact, the result of disordered motion of a great number of microscopic particles lumped by a macroscopic model characterized by a few degrees of freedom. Similar mechanisms underlie dissipation due to resistance in electric circuits or losses in optical transmission lines. Mean-field models of thermodynamic or chemical origin are always dissipative, as they describe averaged dynamics characterized by a small number of thermodynamic variables. In fluid mechanics, dissipation is brought in by viscosity and diffusion, stemming, like friction, from disordered motion on a microscopic level. Dissipative models may also be derived from a mass, energy, or momentum conservation law, but contain sources or sinks that destroy conservation of the respective integrals.

Besides dissipation in the common sense of the word – loss of energy, momentum, etc. – nonconservative systems may include sources due to external pumping, or "antidissipation". Behavior of such systems, constrained neither by conservation laws nor by the boring laws of thermodynamics, is particularly rich and interesting; most dynamical systems we shall study belong to this class. From a formal point of view, conservative systems are nongeneric: a randomly generated set of equations will have no conservation laws.


Driven mechanical systems with friction. Dissipation of mechanical energy by friction makes the long-time dynamics trivial, since the system always evolves to a stable equilibrium (though not necessarily to the one corresponding to the lowest energy level). Losses can be, however, compensated by external energy supply. This, in essence, increases the dimensionality of the available phase space. The energy of the system is not conserved anymore and can be lost and gained at a variable rate in the course of evolution, so that the system is not constrained to a constant energy level. Adding to (1.12) a friction term proportional to the velocity and an external force F0(q, t) brings it to the form

mq̈ + kq̇ − F(q) = F0(q, t),  (1.23)

Figure 1.4. A pendulum on a rotating pivot

where k > 0 is a friction coefficient. As an example, consider a planar pendulum with a pivot rotating clockwise in the same vertical plane with the amplitude a and frequency ω (Fig. 1.4). The horizontal and vertical coordinates of its bob are

x = a cos ωt + l sin θ,  y = −(a sin ωt + l cos θ),  (1.24)

so that the kinetic energy is

(m/2)(ẋ² + ẏ²) = (m/2)[a²ω² + l²θ̇² − 2alωθ̇ sin(θ + ωt)].  (1.25)

This expression can be used to write down the Lagrangian, but the latter looks neater if the term linear in θ̇ is removed by extracting the time derivative of alω cos(θ + ωt), which does not affect the equation of motion obtained by varying the Lagrangian. The constant term a²ω² can be removed as well, and the θ-dependent part of the potential energy V(y) = −gy → gl cos θ remains the same as for a pendulum hanging on a fixed point. Transforming to the dimensionless variables as in Sect. 1.1.2, we write the Lagrangian in the form

L = ½θ̇² + γ sin(θ − ωt) + cos θ,  (1.26)

where γ = aω²/g. Varying this expression and adding friction yields the dimensionless equation of motion

θ̈ + kθ̇ + sin θ = γ cos(θ − ωt).  (1.27)

This equation can be further simplified by assuming the undulations to be small, θ ≪ 1. Then the angle in the driving term can be neglected, and the nonlinear term expanded to the lowest nonvanishing order, sin θ ≈ θ(1 − θ²/6). Removing also the numerical coefficient by setting θ = √6 q yields a version of the Duffing equation – the simplest model of a driven dissipative nonlinear oscillator:

q̈ + kq̇ − (1 − q²)q = γ cos ωt.  (1.28)

The same equation in a modified form describes the motion of a nonlinear spring with friction under the action of a periodic force. It comes there with the reverse sign of the linear term:

q̈ + kq̇ + ω0²(1 + αq²)q = γ cos ωt.  (1.29)

The parameters α, ω0 can be eliminated by rescaling q → q/√α, t → t/ω0, k → ω0k, γ → γω0²/√α, ω → ωω0. Only the sign of α is relevant: α = 0 corresponds to a linear spring, while with the positive sign, the spring strengthens with strain. The case α < 0, which corresponds to a weakening spring, is problematic, as the solutions then diverge when q escapes from the interval |q| < 1/√|α|. We will return to this equation, as one of the examples of chaos in forced dissipative systems, in Sect. 4.5.2.
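A quick numerical look at (1.28) is instructive. In the sketch below (the parameter values k = 0.3, γ = 0.5, ω = 1.2 are chosen for illustration and are not from the book), the driven trajectory neither settles to rest nor escapes the region of the two potential wells:

```python
import math

K, GAMMA, OMEGA = 0.3, 0.5, 1.2     # illustrative damping, forcing amplitude, frequency

def rhs(t, y):
    # (1.28): q'' + k q' - (1 - q^2) q = gamma cos(omega t)
    q, p = y
    return [p, (1.0 - q*q)*q - K*p + GAMMA*math.cos(OMEGA*t)]

def rk4_step(f, y, t, dt):
    k1 = f(t, y)
    k2 = f(t + dt/2, [a + dt/2*b for a, b in zip(y, k1)])
    k3 = f(t + dt/2, [a + dt/2*b for a, b in zip(y, k2)])
    k4 = f(t + dt, [a + dt*b for a, b in zip(y, k3)])
    return [a + dt/6*(b + 2*c + 2*d + e)
            for a, b, c, d, e in zip(y, k1, k2, k3, k4)]

y, t, dt = [0.5, 0.0], 0.0, 5e-3
max_q, late_max_p = 0.0, 0.0
for _ in range(40000):              # integrate to t = 200
    y = rk4_step(rhs, y, t, dt)
    t += dt
    max_q = max(max_q, abs(y[0]))
    if t > 150.0:                   # past the initial transient
        late_max_p = max(late_max_p, abs(y[1]))
```

Friction alone would bring the oscillator to rest in one of the wells; the periodic forcing keeps pumping energy in, so the late-time velocity never dies out.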

1.2 Chemical Kinetics

1.2.1 Mass Action

Stoichiometric equations. A different kind of dynamical system is based on the application of the mass action law lying at the foundation of chemical kinetics. Production rates of different species due to a chemical reaction are related by the stoichiometry of the reaction. Any chemical reaction is characterized by its stoichiometric equation, which has the general form

Σi νi Ai = 0,  (1.30)

where Ai are symbols of chemical species, and νi are stoichiometric coefficients defining the proportions between changes of the amounts of reactants and products in the course of the reaction. Stoichiometric coefficients can be multiplied by any number, but it is reasonable to choose the lowest possible set of integers. The signs are customarily chosen in such a way that stoichiometric coefficients of reactants are negative, and those of products are positive. However, the reaction may be reversible and proceed in either direction; then the choice depends on the net direction of chemical change. A more conventional notation is

Σi∈I− |νi| Ai ⇌ Σi∈I+ νi Ai,  (1.31)

where I− is the set of reactants and I+ is the set of products. It may be more suggestive when one of the reactants is also a reaction product. It happens in chain reactions, the most celebrated example of which – stepping outside chemistry, but retaining the same formalism – is the nuclear chain reaction multiplying the number of neutrons in explosive nuclear fission.


Chemical kinetics. The mass action law assumes that the reaction rate is proportional to the product of the reactant concentrations. This, indeed, should be true if reactant molecules, present in macroscopic amounts, are uniformly distributed in space, and the reaction might occur when they collide at random. The probability of the reaction upon collision is measured by the reaction rate constant k. Thus, the change of the concentration ci of a chemical species due to a reversible reaction defined by (1.31) is

ċi = νi ( k+ ∏i′∈I− ci′^|νi′| − k− ∏i′∈I+ ci′^νi′ ),  (1.32)

where k± are the rate constants of the direct and reverse reactions. This general formula is unnecessarily clumsy, since triple and higher-order collisions are highly unlikely, and the expression should not contain terms higher than quadratic if the stoichiometric equation reflects a true reaction mechanism. This, in fact, almost never happens. Chemical reactions, which require restructuring of stable molecules, proceed, as a rule, through a number of stages involving short-lived intermediate species. Some chemical stages rapidly reach equilibrium, which enforces certain algebraic relations among the concentrations of reacting species. Even when no partial equilibrium is reached, concentrations of intermediate species taking part in fast reactions will arrive at certain quasistationary values dependent on the instantaneous concentrations of long-lived species. Using either equilibrium or quasistationary relations in the kinetic equations of slower stages leads to nonpolynomial expressions for reaction rates even when all original equations are based on the mass action law. Some examples are given in the next subsection.

Reaction rate expressions, measured experimentally or deduced from a detailed mechanistic model after eliminating intermediate species, may have variegated forms involving some functions of the array of concentrations C:

ċi = νi f(C).  (1.33)

For a single reaction, one can eliminate the concentrations of all species except one with the help of the stoichiometric relations (ci − ci0)/νi = idem, where ci0 are initial concentrations. For a general reaction system, the number of independent equations equals the rank of the matrix of stoichiometric coefficients. This number defines the effective dimensionality of the phase space of the system.

Chemical reactors. Chemical reactions in a closed system (what is called a "batch" reactor) always proceed to a state of thermodynamic equilibrium in such a way that the free energy of the system decreases monotonically. This appears to rule out any nontrivial dynamic behavior. When Belousov observed, back in the mid-20th century, the famous oscillatory reaction, known now as the Belousov–Zhabotinsky (BZ) reaction, he was prevented from publishing this ostensibly rude violation of thermodynamic laws – or, in fact, of thermodynamic misconceptions, as was later realized. Oscillations


are indeed impossible close to equilibrium, but far from equilibrium these constraints are invalidated, and a monotonic decrease of the free energy contradicts oscillatory dynamics no more than a monotonic decrease of the energy of a not perfectly frictionless pendulum prevents it from undulating for hours before coming to rest at the equilibrium. A chemically reacting mixture can be sustained in a stationary nonequilibrium state, or in a persistent nonstationary, e.g., oscillatory, state when reactants are supplied and products removed from the system; energy supply or removal is needed as well when the reactions have a heat effect, as they usually do. Both chemical reactors and living cells operate in this way, whatever complex reactions might be involved.

The simplest type of chemical reactor is a well-stirred tank with one or more inlets and outlets. The reacting mixture within the tank, as well as in the outlet streams, has a uniform composition and temperature, so that the material balance equations become

ċi = νi f(C) + β(ci0 − ci),  (1.34)

where ci0 is the concentration in the inlet stream, and β is the inverse of the average residence time, i.e., the ratio of the volumetric flow rate to the reactor volume.

Another kind of industrial chemical reactor is the plug flow reactor (PFR). An ideal PFR is a flow-through reactor where each volume element of the inlet flow spends an identical time in the reactor. The flow elements entering the PFR at different moments of time do not mix, and therefore each one can be considered as an infinitesimal batch reactor. Thus, in a PFR operating in a stationary regime, the clock time is replaced by the residence time that has passed since the moment of entry of a particular flow element. The residence time t can be related to the current spatial position x by the differential relationship dx = v dt, where v is the local flow velocity. Although a batch reactor and a PFR are not at all similar physically, a stationary PFR is described by exactly the same equations as a batch reactor, and we shall consider both together. The similarity is lost when nonstationary operation of a PFR is considered, or when deviations of a real flow pattern in the reactor from the ideal model are taken into account.
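For a single first-order reaction, (1.34) reduces to a linear equation whose steady state balances reaction against throughflow. A minimal sketch (illustrative parameter values, not from the book):

```python
K_R, BETA, C_IN = 1.0, 0.5, 1.0      # illustrative rate constant, inverse residence time, inlet concentration

def dcdt(c):
    # (1.34) for a single reaction A1 -> products with nu1 = -1 and f(C) = k*c:
    return -K_R*c + BETA*(C_IN - c)

c, dt = 0.0, 1e-3
for _ in range(20000):               # integrate to t = 20 with an RK4 step
    k1 = dcdt(c)
    k2 = dcdt(c + dt/2*k1)
    k3 = dcdt(c + dt/2*k2)
    k4 = dcdt(c + dt*k3)
    c += dt/6*(k1 + 2*k2 + 2*k3 + k4)

c_steady = BETA*C_IN/(K_R + BETA)    # reaction balances throughflow
```

The stationary concentration lies strictly between zero (complete conversion) and the inlet value: the flow keeps the tank permanently away from the closed-system equilibrium.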

1.2.2 Adsorption and Catalysis

Quasistationary relations. Fast intermediate reaction stages are commonly encountered in catalytic or enzyme reactions. Any catalytic reaction is a multistep process that involves stages of adsorption, chemical transformation of the adsorbed species, and desorption of products. An enzyme is just a biochemical catalyst, attaching a reactant in order to change its structure and facilitate its transformation. Usually, the adsorption and desorption stages are much faster than the reaction stage, and can be assumed to be at a quasi-equilibrium corresponding to the local reaction conditions at a particular moment of time. A simple monomolecular reaction proceeds according to the following mechanism:

A1 + A0 ⇌ A1*,  A1* → P + A0,  (1.35)


where A0 is a free catalytic site or an enzyme molecule, P is a nonadsorbing product, and the asterisk denotes the adsorbed form of the reactant A1. The adsorption rate is proportional to the fraction of free sites c0 = 1 − c*, where c* is the fraction of occupied sites. Assuming adsorption equilibrium, the latter is determined by the algebraic relation

k+ c (1 − c*) = k− c*,  (1.36)

where k± are the adsorption and desorption rate constants (which should both far exceed the reaction rate constant k). This relation defines c*, which further determines the chemical reaction rate per active site, kc*, i.e., the rate of production of P. Since the intermediate adsorption stage is fast, the rate of consumption of A1 is the same, as can also be checked algebraically:

ċ = −kσKc/(1 + Kc),  (1.37)

where K = k+/k− is the adsorption equilibrium constant, and σ is the concentration of active sites. The reaction rate saturates at c ≫ K⁻¹ (Fig. 1.5a). This mechanism is known as Langmuir–Hinshelwood kinetics in heterogeneous catalysis and Michaelis–Menten kinetics in enzymatic reactions. A similar equation with a sharper saturation threshold (introduced on formal grounds), called the Hill equation, is often used in biochemical applications:

ċ = −kσ(Kc)ⁿ/(1 + (Kc)ⁿ).  (1.38)

Competitive and cooperative adsorption. More varied expressions are obtained when a second species competing for the same adsorption sites is added. Then the fraction of free sites is c0 = 1 − c1* − c2*, where ci* is the fraction of sites occupied by the respective species. The adsorption equilibrium relations analogous to (1.36) become

ki+ ci (1 − c1* − c2*) = ki− ci*  (i = 1, 2),  (1.39)

where ki± are the adsorption and desorption constants of the two species. The ratio

c1*/c2* = K1c1/(K2c2),  Ki = ki+/ki−,  (1.40)

dependent on the adsorption equilibrium constants Ki, is obtained when the two relations (1.39) are divided one by the other. Combining this with either of (1.39) yields

ci* = Kici/(1 + K1c1 + K2c2)  (i = 1, 2).  (1.41)

The resulting kinetic expression depends on the role of the second species. If it is just a passive inhibitor, (1.37) is modified to

ċ1 = −kσc1* = −kσK1c1/(1 + K1c1 + K2c2).  (1.42)
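Both the saturation of (1.37) and the inhibiting effect of the second species in (1.42) are easy to tabulate. A sketch with illustrative constants K1 = K2 = 1 (rates are normalized by kσ):

```python
def lh_rate(c, K=1.0):
    # Langmuir-Hinshelwood / Michaelis-Menten rate (1.37), normalized by k*sigma
    return K*c/(1.0 + K*c)

def inhibited_rate(c1, c2, K1=1.0, K2=1.0):
    # rate with a passive inhibitor (1.42), normalized by k*sigma
    return K1*c1/(1.0 + K1*c1 + K2*c2)

saturation = [lh_rate(c) for c in (1.0, 10.0, 100.0)]            # approaches 1 from below
inhibited = [inhibited_rate(1.0, c2) for c2 in (0.0, 1.0, 5.0)]  # falls as inhibitor is added
```

The first list climbs toward the saturation plateau as c grows past K⁻¹; the second shows the rate at fixed c1 dropping monotonically as the inhibitor occupies more sites.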


Figure 1.5. (a): Saturation of the chemical reaction rate: the Langmuir–Hinshelwood equation (n = 1) and the Hill equation with n = 2 and n = 4. (b): A typical dependence of the reaction rate of a bimolecular catalytic reaction on the concentration of a competing (blue curve) or cooperating (orange curve) reactant at a fixed level of the other concentration (made larger in the competitive case to make the scaling compatible).

Alternatively, A2 may take part together with A1 in a bimolecular reaction, so that the detailed mechanism is

Ai + A0 ⇌ Ai*  (i = 1, 2),  A1* + A2* → P + 2A0.  (1.43)

Then, according to the mass action law, the chemical reaction rate per active site is kc1*c2*, and (1.41) yields kinetic equations for the concentrations of both reactants:

ċi = −kσK1K2c1c2/(1 + K1c1 + K2c2)²  (i = 1, 2).  (1.44)

The important feature of this reaction rate expression is the possibility of its decreasing with the concentration of one of the reactants (Fig. 1.5b).

Interaction of adsorbed species may be cooperative rather than competitive. For example, a simplified model of glycolysis (biochemical oxidation of glucose) involves adsorption of the reactant by the adsorbed reaction product rather than by an unbound enzyme. The chemical scheme of the process is

A2 + A0 ⇌ A2*,  A1 + A2* ⇌ A1*,  A1* → A2 + A2*.  (1.45)

If the two adsorption stages are at equilibrium, the equations defining the fractions of sites ci* occupied by the reactant and the product are

K1c1c2* = c1*,  K2c2(1 − c1* − c2*) = c2*.  (1.46)

Using this to find ci*, we arrive at the overall reaction rate expression

ċi = −kσc1* = −kσK1K2c1c2/(1 + K2c2 + K1K2c1c2).  (1.47)

The contrast between both cases is clearly seen in Fig. 1.5b.
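The contrast between the competitive rate (1.44) and the cooperative rate (1.47) can be reproduced by tabulating both as functions of c1 at a fixed c2, in the spirit of Fig. 1.5b (illustrative constants K1 = K2 = 1; rates normalized by kσ):

```python
def competitive(c1, c2, K1=1.0, K2=1.0):
    # bimolecular rate (1.44), normalized by k*sigma
    return K1*K2*c1*c2/(1.0 + K1*c1 + K2*c2)**2

def cooperative(c1, c2, K1=1.0, K2=1.0):
    # glycolysis-type rate (1.47), normalized by k*sigma
    return K1*K2*c1*c2/(1.0 + K2*c2 + K1*K2*c1*c2)

c1_values = (0.5, 2.0, 20.0)
comp = [competitive(c1, 1.0) for c1 in c1_values]   # rises, then falls
coop = [cooperative(c1, 1.0) for c1 in c1_values]   # saturates monotonically
```

The competitive rate passes through a maximum and then decays as the strongly adsorbing reactant crowds out its partner; the cooperative rate only saturates.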


1.2.3 Autocatalysis and Self-Inhibition

Far from equilibrium, the function f(C) may have a form qualitatively different from the mass action law (1.32); in particular, the reaction may be slowed down by reactants (self-inhibition) or accelerated by reaction products (autocatalysis). Either property is essential for nontrivial dynamics capable of producing multiple stationary states and oscillations, as we shall further see. An extreme example of autocatalysis is a branched chain reaction, such as the one occurring in nuclear fission, where a neutron causing a heavy nucleus to break up generates, alongside splinter nuclei, more neutrons. Chemical chain reactions involving free radicals are common and had been studied way before nuclear fission was discovered. Branching should be moderated by removal of the autocatalytic agent through transport or other reactions if we want to observe long-time dynamics rather than having the system explode on us or somebody else.

An example of an autocatalytic system contained within a finite phase space volume is a celebrated artificial quasichemical system called, after its birthplace fused with its prominent feature, the Brusselator model. The dynamic equations are constructed as mass conservation equations describing two fictitious chemical reactions

2A1 + A2 → 3A1,  A1 → A2,  (1.48)

with kinetics obeying the mass action law and the reactant A1 supplied at a constant rate and removed at a rate proportional to its concentration. Scaling the concentrations and time in an ingenious way, one can bring the dynamic equations to the form

u̇ = 1 − (μ + 1)u + μu²v,  γv̇ = u − u²v,  (1.49)

where the only stationary solution u = v = 1 is independent of the two remaining dimensionless parameters μ, γ. This system, though rather unrealistic, is used as a convenient model of oscillations and pattern formation due to its algebraic simplicity. Other artificial models, some of which we shall encounter further in this book, have been constructed likewise to illustrate various aspects of oscillatory and chaotic dynamics.

Natural examples of self-inhibition are found in catalytic or enzymatic reactions. In the bimolecular reaction characterized by the kinetic expression (1.44), self-inhibition arises under conditions when strong adsorption of one of the reactants reduces the reaction rate by suppressing adsorption of the other reactant; thus, if K1c1 far exceeds both K2c2 and unity, the nonlinear expression in (1.44) is approximately proportional to c2/c1, so that the reaction rate decreases when the concentration of the strongly adsorbing reactant increases, as seen in Fig. 1.5b.
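Two properties of (1.49) can be checked directly: the state u = v = 1 is stationary for any μ, γ, and the dynamics never drives the (quasi)concentrations negative. A sketch (illustrative parameter values, not from the book):

```python
def brusselator(u, v, mu, gamma):
    # (1.49): du/dt = 1 - (mu + 1) u + mu u^2 v,  gamma dv/dt = u - u^2 v
    return 1.0 - (mu + 1.0)*u + mu*u*u*v, (u - u*u*v)/gamma

# u = v = 1 is stationary regardless of the parameters:
residuals = [brusselator(1.0, 1.0, mu, g) for mu in (0.5, 2.0, 5.0) for g in (0.5, 1.0)]
max_residual = max(abs(r) for pair in residuals for r in pair)

# integrate from a perturbed state (RK4); the trajectory stays in the positive quadrant
mu, gamma = 3.0, 1.0
u, v, dt = 1.2, 0.9, 1e-3
min_u, min_v = u, v
for _ in range(50000):              # to t = 50
    k1 = brusselator(u, v, mu, gamma)
    k2 = brusselator(u + dt/2*k1[0], v + dt/2*k1[1], mu, gamma)
    k3 = brusselator(u + dt/2*k2[0], v + dt/2*k2[1], mu, gamma)
    k4 = brusselator(u + dt*k3[0], v + dt*k3[1], mu, gamma)
    u += dt/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
    v += dt/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
    min_u, min_v = min(min_u, u), min(min_v, v)
```

Positivity is built into the structure of the equations: u̇ → 1 as u → 0, and v̇ > 0 at v = 0 whenever u > 0, so the boundary of the positive quadrant repels the flow.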

1.2.4 Thermal Effects

Arrhenius law. A common autocatalytic agent in chemical processes is heat. Heat released in an exothermic reaction increases the temperature of the reacting mixture,


which, in turn, accelerates the reaction. The ratio of the reaction heat to the heat capacity of the reacting mixture can be viewed as a "stoichiometric coefficient" of temperature. Denoting it as ν0, we can write, analogous to (1.33), the dynamic equation for temperature, complementing the kinetic equations of the chemical species:

Ṫ = ν0 f(C, T).  (1.50)

The temperature dependence in f(C), now extended to f(C, T), comes from the temperature dependence of the reaction rate constants, commonly expressed by the Arrhenius law, which assumes that only sufficiently energetic molecules exceeding a certain activation energy E may undergo chemical transformation. If energy and temperature are measured in the same units (setting the Boltzmann constant to unity), the fraction of such molecules in an ensemble obeying the Maxwell distribution is exp(−E/T). The activation energy usually far exceeds the thermal energy, so that this fraction is very small, and the reaction rate constant,

k = k0 exp(−E/T),  (1.51)

is a sharply increasing function of temperature, reaching saturation only at unrealistically high temperatures T ∼ E. For a stirred reactor, (1.50) is extended to the energy balance equation analogous to (1.34):

Ṫ = νT f(C, T) + βT (T0 − T).  (1.52)
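The sharpness of the Arrhenius dependence (1.51) is easy to illustrate numerically (a sketch; k0 = 1 and E = 10⁴ in Kelvin units are illustrative values, not from the book):

```python
import math

def arrhenius(T, k0=1.0, E=1.0e4):
    # (1.51) with energy and temperature measured in the same units
    return k0*math.exp(-E/T)

fraction = arrhenius(300.0)                # fraction of sufficiently energetic molecules
ratio = arrhenius(310.0)/arrhenius(300.0)  # acceleration caused by a 10 K increase
```

A mere 3% rise in the absolute temperature nearly triples the rate constant here, even though the reactive fraction itself remains minute; this disproportionate response is what makes heat such an effective autocatalytic agent.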

The constants βT, T0 may combine the effect of different ways of cooling and heating, either by adjusting the temperature of the inlet stream or by heat exchangers, or both.

A representative example. A standard example, which we shall work out in the sequel, is a monomolecular (first-order) exothermic reaction A1 → A2 with the simplest kinetic function f(c1) = k(T)c1 and ν1 = −1. Instead of the concentration, it is convenient to use the conversion ratio u = 1 − c1/c10. The inverse rate constant at a reference temperature, k(T0), serves as the time scale. The activation energy E is not fit to be used as a temperature scale, being, as mentioned above, too large. A more convenient scale suitable to a common temperature range is T0²/E. Defining the reduced dimensionless temperature as v = E(T − T0)/T0² brings (1.34), (1.52) to the dimensionless form

u̇ = exp[v/(1 + εv)] (1 − u) − μu,  (1.53)
γv̇ = ν exp[v/(1 + εv)] (1 − u) − μv,  (1.54)

containing four parameters:

μ = β1/k(T0),  ν = νT c10 E β1/(βT T0²),  γ = β1/βT,  ε = T0/E.  (1.55)

18

CHAPTER 1. WHENCE DYNAMICAL SYSTEMS

This system, as well as (1.44), generates multiple stationary states and oscillations. The parameter γ affects the dynamics only; if it is small, temperature is less inertial than conversion. The parameter ε ≪ 1 is not essential and can be set to zero, thereby approximating the Arrhenius temperature dependence by a simple exponent. This results in the loss of a nonphysical solution with v ∼ −ε⁻¹. The two essential parameters affecting stationary states are μ, the ratio of characteristic times of chemical reaction and reactant supply, and ν, expressing thermal effects. A slightly simplified system containing only these parameters still exhibits all essential generic features of two-component dynamical systems, and will be presented in Sects. 2.4 and 3.2 as a template for a detailed study of their behavior.
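As a quick numerical check of the reduced system (1.53)–(1.54), the sketch below integrates it with a simple Euler scheme in the simple-exponent approximation ε = 0. The parameter values μ = 2, ν = 1, γ = 0.5 are illustrative only, chosen so that the trajectory settles on a single stable stationary state.

```python
import math

# Dimensionless exothermic-reaction model (1.53)-(1.54) with eps = 0,
# so the Arrhenius factor is approximated by the simple exponent exp(v).
mu, nu, gamma = 2.0, 1.0, 0.5   # illustrative parameters, not from the text

def rhs(u, v):
    e = math.exp(v)                              # reaction-rate factor
    du = e * (1.0 - u) - mu * u                  # conversion balance (1.53)
    dv = (nu * e * (1.0 - u) - mu * v) / gamma   # heat balance (1.54)
    return du, dv

u, v = 0.0, 0.0                  # start from fresh, cold feed
dt = 0.001
for _ in range(40000):           # explicit Euler integration to t = 40
    du, dv = rhs(u, v)
    u += dt * du
    v += dt * dv

du, dv = rhs(u, v)               # residuals at the final point
```

At a stationary state, setting (1.53) and (1.54) to zero and eliminating the exponential factor gives v = νu, which serves as a consistency check on the computed state.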

1.3 Biological Models

1.3.1 Population Dynamics

Interacting populations. Equations of population dynamics are constructed, as in chemical kinetics, based on the mass action law. Birth, death, and interactions among populations can be presented, similar to (1.30), as "reactions" involving one, two, or more species, e.g.,

Ai → (νi + 1)Ai,  2Ai → 0,  (1.56)
Ai + Aj → (νi + 1)Ai + (νj + 1)Aj.  (1.57)

In the first "reaction", νi is the excess of the birth rate over the death rate for the ith species, both assumed to be linear in the population density ci. The second "reaction" in (1.56) expresses the effect of congestion. In (1.57), Ai is a predator and Aj is a prey when νi > 0 and νj < 0. If both νi and νj are positive, it expresses the effect of symbiosis, and when both are negative, the effect of competition.

Kinetic equations. A general equation system including linear growth or decay and quadratic interactions has the form

ċi = ci (ki + Σj kij cj).  (1.58)

These equations can be further extended to a general form, dependent on an array of variables,

ċi = ci f(C),  (1.59)

where f(C) is some function of the array C of population densities ci. This may look similar to the general kinetic expression (1.33), but the substantial difference is that ci = 0 always remains a solution, and the dynamics is restricted to ci ≥ 0. Another common generalization is introducing delays, so that past population densities enter (1.58) or (1.59).

Figure 1.6. (a) Logistic growth; inset: the growth rate (1.60). (b) Effect of the cubic growth rate. The solid curves, corresponding to c− = 0.5 (above) and c− = 0.2 (below) with initial concentrations above these values, approach the same limit c+ = 1, but the population decreases when the starting value is below c− (dashed curve at c− = 0.5).

A single population with a linear growth rate would either die away or grow exponentially to its Malthusian doom, depending on whether ν is negative or positive. The growth can be limited by adding a quadratic term expressing congestion effects. This leads to the logistic growth equation

ċ = kc(c+ − c),  (1.60)

where c+ is the maximum population. We can see in Fig. 1.6a that the population density, initially growing exponentially, saturates at the limiting value c+. The kinetic expression can be further modified by allowing for negative growth rates at population densities below some critical value c− (the Allee effect), leading to the cubic expression

ċ = kc(c − c−)(c+ − c).  (1.61)

The dynamics of population growth shown in Fig. 1.6b, if started at c slightly larger than c−, is not dissimilar to that of the logistic curve, but the population is extinguished if it drops below c−.

Lotka–Volterra system. A simple example of interaction of two populations is the Lotka–Volterra system involving an "autocatalytic" prey and a predator which dies away linearly but is supported by a quadratic interaction term. The general prey–predator model describes plant–herbivore or host–parasite dynamics as well. We write this system in a dimensionless form

u̇ = u(1 − v),  v̇ = μv(u − 1),  (1.62)

where u, v are scaled population densities, and the only parameter μ expresses the ratio of the predator death rate to the net growth rate of the prey. A peculiar feature of this
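The threshold behavior of the Allee model (1.61) is easy to reproduce numerically; the sketch below (illustrative values k = 1, c− = 0.2, c+ = 1) confirms extinction for populations starting below c− and saturation at c+ above it.

```python
# Allee-effect growth (1.61): dc/dt = k c (c - c_minus)(c_plus - c).
k, c_minus, c_plus = 1.0, 0.2, 1.0   # illustrative parameters

def evolve(c0, t_end=50.0, dt=0.01):
    c = c0
    for _ in range(int(t_end / dt)):   # explicit Euler integration
        c += dt * k * c * (c - c_minus) * (c_plus - c)
    return c

c_above = evolve(0.25)   # starts slightly above the threshold c_minus
c_below = evolve(0.15)   # starts slightly below the threshold
```

Both runs start near the threshold, so the initial evolution is slow in either direction; the fates nevertheless diverge completely, which is the hallmark of bistability.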


system is a Hamiltonian structure revealed by transforming to the new variables x = ln u, y = ln v. The transformed dynamic equations are derivable from the Hamiltonian H = μ(e^x − x) + e^y − y:

ẋ = 1 − e^y ≡ −∂H/∂y,  ẏ = μ(e^x − 1) ≡ ∂H/∂x.  (1.63)

Conservation of H necessitates oscillatory dynamics, with trajectories surrounding the only nontrivial equilibrium u = v = 1. The amplitude of oscillations increases for levels of H rising above its minimal value H0 = 1 + μ (see Fig. 1.7).

Figure 1.7. Trajectories of the Lotka–Volterra system.

Unlike problems of classical mechanics, there is no intrinsic reason for a conservation law to appear in population dynamics. The Hamiltonian structure of the Lotka–Volterra system is purely accidental, and breaks down when any change is introduced into the model to make it more realistic, e.g., accounting for congestion or competition, as in the models we will consider in detail in Sect. 2.5. Although the conservation law simplifies the analysis, it points, in essence, to a deficiency of this nongeneric model. All this said, the model had important practical consequences. It made it clear that eliminating a predator may not help the prey: when done at prey population densities below the unstable stationary state, it moves the system to higher levels of H and just increases the amplitude of oscillations. This is a reason why wolves, previously hunted to extinction, are being reintroduced in Europe and North America.
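Conservation of H along trajectories of (1.62) can be verified numerically. In the original variables the Hamiltonian reads H = μ(u − ln u) + v − ln v; the sketch below integrates the system with a fourth-order Runge–Kutta step for an illustrative value μ = 0.5 and monitors the drift of H.

```python
import math

mu = 0.5   # illustrative ratio of death to growth rates

def rhs(u, v):
    return u * (1.0 - v), mu * v * (u - 1.0)   # Lotka-Volterra (1.62)

def H(u, v):
    # Hamiltonian in the original variables: H = mu(u - ln u) + v - ln v
    return mu * (u - math.log(u)) + v - math.log(v)

def rk4_step(u, v, dt):
    k1 = rhs(u, v)
    k2 = rhs(u + 0.5*dt*k1[0], v + 0.5*dt*k1[1])
    k3 = rhs(u + 0.5*dt*k2[0], v + 0.5*dt*k2[1])
    k4 = rhs(u + dt*k3[0], v + dt*k3[1])
    u += dt * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0]) / 6
    v += dt * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1]) / 6
    return u, v

u, v = 2.0, 1.0          # initial point away from the equilibrium (1, 1)
h0 = H(u, v)
dt = 0.001
for _ in range(10000):   # integrate to t = 10
    u, v = rk4_step(u, v, dt)
drift = abs(H(u, v) - h0)
```

The drift stays at the level of the integration error, while H itself exceeds its minimal value H0 = 1 + μ, as it must away from the equilibrium.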

1.3.2 Epidemiological Models

A different scheme operates in models of infection and immune response. Viral infection can be described by the equivalent reaction scheme

A1 + A2 → A1 + A3,  A3 → νA1.  (1.64)

The first reaction is infection by virus A1 of a healthy cell A2, turning it into an infected cell A3. The second reaction is production of the virus by an infected cell. Adding here linear growth or death terms, we arrive at the dynamical system

ċ1 = νk13 c3 − k1 c1,
ċ2 = −k12 c1 c2 + k2 c2,  (1.65)
ċ3 = k12 c1 c2 − k3 c3.


This system reduces to the Lotka–Volterra model if infected cells are short-lived, so that the last equation reaches a quasistationary state.

A basic epidemiological model involves susceptible (A1), infected (A2), and immune (A3) populations and the binary interaction A1 + A2 → 2A2. Assuming that all populations give birth to susceptible individuals with the rate constant k0, and adding linear death (with the rate constants ki, k1 = k3 < k2), infection (with the rate constant k12), and immunization (with the rate constant k23) terms, we arrive at the system

ċ1 = k0 (c1 + c2 + c3) − k1 c1 − k12 c1 c2,
ċ2 = k12 c1 c2 − k23 c2 − k2 c2,  (1.66)
ċ3 = k23 c2 − k3 c3.

It is easy to integrate these kinetic equations, like the chemical ones, for a particular set of parameters, but we need the more advanced tools of bifurcation analysis to identify the various possible regimes as the parameters are varied.
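A minimal numerical experiment with (1.66), using purely illustrative rate constants, checks the structural property noted above for population models: the infection-free state c2 = 0 remains invariant, while an epidemic started with a few infected individuals keeps all densities nonnegative.

```python
# SIR-type model (1.66); all rate constants below are illustrative only.
k0, k1, k2, k3, k12, k23 = 0.02, 0.02, 0.1, 0.02, 0.5, 0.3

def step(c1, c2, c3, dt):
    # One explicit Euler step of (1.66)
    d1 = k0*(c1 + c2 + c3) - k1*c1 - k12*c1*c2
    d2 = k12*c1*c2 - k23*c2 - k2*c2
    d3 = k23*c2 - k3*c3
    return c1 + dt*d1, c2 + dt*d2, c3 + dt*d3

def run(c1, c2, c3, t_end=200.0, dt=0.01):
    history = []
    for _ in range(int(t_end/dt)):
        c1, c2, c3 = step(c1, c2, c3, dt)
        history.append((c1, c2, c3))
    return history

no_infection = run(1.0, 0.0, 0.0)   # c2 = 0 must stay exactly zero
epidemic = run(1.0, 0.01, 0.0)      # small initial infected fraction
```

The invariance of c2 = 0 holds exactly, step by step, because every term in the c2 equation is proportional to c2 itself.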

1.3.3 Neural and Genetic Networks

Interactions of various proteins and morphogens in living cells generally follow the rules of chemical kinetics, but expression of a gene is basically a logical operation: it is switched on or off when a certain concentration level of a particular combination of proteins is reached within the cell. In its turn, the production of proteins, which is dependent on the state of a particular gene, affects expression of other genes. Likewise, a neuron fires when a potential induced at its synapses by other neurons reaches a certain threshold. The term "neural network" applies as well to computational devices and algorithms based on the same principle. A number of neurons or genes interacting in this way form a neural or genetic network. These networks are unfathomably complex in living organisms, but complex dynamic behavior may be generated even by relatively small networks. A general equation system of a network of this kind can be written in the form

u̇i = μi f(Σj aij uj − ηi) − λi ui.  (1.67)

In the genetic context, ui denotes the concentration of the ith protein and λi its decay rate; f(x) is a monotonic function changing from 0 at x → −∞ to 1 at x → ∞; the simplest option is the Heaviside (step) function jumping from H(x) = 0 at x < 0 to H(x) = 1 at x > 0. The values ηi then denote the thresholds of the linear combinations of protein concentrations, characterized by the coefficients aij, above which the gene governing the production of the ith protein becomes active; μi denotes the production rate above this threshold. In the neural context, ui denotes the potential of the ith synapse and ηi the excitation threshold of a neuron affecting this synapse.


The coefficients aij can be of either sign, the positive ones stimulating and the negative ones repressing activation or expression. The sum sign in (1.67) should not be interpreted literally. It is more like a logical operation: in Wolpert's (2011) words, "it is rather like voting: the unit looks at all the activation and inhibition and their weightings, and decides who wins". In order to make a complex network more manageable, it can be separated into several "layers". Some ui may be considered as "inputs" subject to outside control, which can be exercised by affecting the respective thresholds or by adding source terms to their dynamic equations. Other ui are viewed as "outputs" available for observation; the rest constitute a "hidden" layer. The layered structure can be made more restrictive by allowing links leading from "lower" to "upper" layers (with the input at the bottom and the output on the top) but not in the opposite direction.
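The simplest instance of (1.67) — a single self-activating gene with a Heaviside response — already behaves as a bistable switch. The sketch below uses illustrative parameter values: above the threshold, expression locks on; below it, the protein decays away.

```python
# One-gene version of (1.67): du/dt = mu*H(a*u - eta) - lam*u,
# with H the Heaviside step function. All parameters are illustrative.
mu, lam, a, eta = 1.0, 1.0, 1.0, 0.5

def heaviside(x):
    return 1.0 if x > 0 else 0.0

def relax(u0, t_end=20.0, dt=0.01):
    u = u0
    for _ in range(int(t_end/dt)):   # explicit Euler integration
        u += dt * (mu * heaviside(a*u - eta) - lam * u)
    return u

on = relax(0.6)    # above threshold: gene stays expressed, u -> mu/lam = 1
off = relax(0.4)   # below threshold: expression decays, u -> 0
```

The two stable states u = 0 and u = μ/λ, separated by the threshold η/a, are the logical "off" and "on" levels of the gene.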

1.4 Electric Currents

1.4.1 Electric Circuits

Linear circuits. Electric circuits contain power sources, resistors, capacitors, and inductors. The variables are currents and voltages. The voltage drop VLR on a segment with the resistance R and inductance L, dependent both on the current I and its time derivative İ, is

VLR = RI + Lİ,  (1.68)

while the charge q on a capacitor with the capacitance C and the related voltage drop VC evolve as

q̇ = C V̇C = I.  (1.69)

For the elementary circuit shown in Fig. 1.8a, the sum VLR + VC equals the voltage V0 supplied by the power source. Taking q as the dynamic variable and combining (1.68), (1.69), one can write the dynamic equation

Lq̈ + Rq̇ + q/C = V0.  (1.70)

This equation has exactly the same structure as the equation of motion of a driven dissipative mechanical system with a single degree of freedom (1.23), with a linear function F(q) and the imposed voltage V0 playing the role of a constant external force.

Nonlinear circuits. For a long time, electrical engineers dealt with circuits described by linear equations like (1.70), extended to larger circuits with more degrees of freedom. This does not interest us: a linear system does not have a life of its own, it just reacts to external inputs. The most straightforward way to make the system nonlinear is to introduce some dependence of the capacitance on the charge. Choosing the simplest cubic function and applying harmonic driving V0 = γ cos ωt yields, after removing

Figure 1.8. (a): Elementary electric circuit. (b): Chua’s circuit containing a nonlinear negative resistance NR

superfluous parameters by scaling, the same Duffing equation (1.28) that we derived in a totally different way before. Another possibility is to introduce a nonlinear variable resistance. A vacuum tube may amplify the current, i.e., act as a negative resistance, at low charges and damp it at high ones. A driven circuit containing such a nonlinear resistor is modeled by the forced van der Pol equation

q̈ − k(1 − q²)q̇ + q = γ cos ωt.  (1.71)

Nonlinearities may enter in many other ways; they may be sustained by realistic physical mechanisms, e.g., the resistance increasing with the current due to heating, or may be introduced in a more sophisticated way by creating special combinations of elements with desired behavior, such as operational amplifiers, to be plugged into a circuit. A well-studied circuit with complex dynamics is Chua's circuit (Fig. 1.8b), containing a nonlinear negative resistance – an amplifier with a positive feedback. It is described by the system of three equations for the voltages V1, V2 across the capacitors C1, C2 and the electric current I through the inductor L:

V̇1 = α[V2 − V1 − f(V1)],
RC2 V̇2 = V1 − V2 − RI,  (1.72)
İ = −βV2.

The function f(V1) describes the electrical response of the nonlinear resistor, and α, β are constants. This circuit is known for generating chaotic dynamics (see Sect. 4.3.2). More complex coupled equation systems can be obtained for branching circuits with the help of Kirchhoff's laws following from conservation of energy and electric charge: the algebraic sum of currents should vanish at each junction, and the voltage drops around any closed contour should add up to zero. Interest in the dynamics of nonlinear electric circuits was at its peak in the pre-digital age, when analog computers – suitably constructed circuits where currents and voltages could be easily measured – were used as differential equation solvers.
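Chua's circuit is easy to explore numerically. The sketch below uses the standard dimensionless form of (1.72) (ẋ = α[y − x − f(x)], ẏ = x − y + z, ż = −βy) with the classic piecewise-linear resistor curve and the commonly quoted parameter values α = 9, β = 100/7, which produce the double-scroll chaotic attractor; these values and the scaling are not taken from the present text.

```python
# Dimensionless Chua system with the classic piecewise-linear resistor curve.
alpha, beta = 9.0, 100.0/7.0
m0, m1 = -8.0/7.0, -5.0/7.0          # inner and outer slopes of f

def f(x):
    # Piecewise-linear characteristic of the nonlinear resistor NR
    return m1*x + 0.5*(m0 - m1)*(abs(x + 1.0) - abs(x - 1.0))

def rhs(s):
    x, y, z = s
    return (alpha*(y - x - f(x)), x - y + z, -beta*y)

def rk4(s, dt):
    k1 = rhs(s)
    k2 = rhs(tuple(si + 0.5*dt*ki for si, ki in zip(s, k1)))
    k3 = rhs(tuple(si + 0.5*dt*ki for si, ki in zip(s, k2)))
    k4 = rhs(tuple(si + dt*ki for si, ki in zip(s, k3)))
    return tuple(si + dt*(a + 2*b + 2*c + d)/6
                 for si, a, b, c, d in zip(s, k1, k2, k3, k4))

s = (0.1, 0.0, 0.0)
dt = 0.005
xs = []
for n in range(10000):               # integrate to t = 50
    s = rk4(s, dt)
    if n >= 5000:                    # discard the transient
        xs.append(s[0])

x_max = max(abs(x) for x in xs)
```

The trajectory neither escapes to infinity nor settles at the origin: it keeps wandering over the bounded chaotic attractor.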

1.4.2 Electrochemical Reactions

Electrochemical cells. An electric circuit may include an electrochemical element where the passing current is accompanied or caused by chemical reactions on electrodes involving electron transfer. Electrochemical elements may consume power, like electrolysers, or produce it, like fuel cells. The stoichiometric scheme of an electrochemical reaction written in the form (1.30) or (1.31) includes, among the symbols Ai, ions and the electron, as well as electroneutral species.

Kinetics of electrochemical reactions. If a voltage difference is applied between two electrodes (anode and cathode) placed in an electrolyte solution (Fig. 1.9) but no electrochemical reaction takes place, no current passes through the electrolyte, and the entire voltage drop is concentrated in electric double layers formed in the immediate vicinity of the electrodes. The thickness of double layers usually lies in the nanometer range, and can become macroscopic only in nonconducting liquids containing a very small concentration of ions.

Figure 1.9. A basic electrochemical cell.

The activation barrier of an electrochemical reaction is overcome with the help of a surface overpotential Φ, i.e., the difference between the actual potential drop between the electrode and the electrolyte solution and its equilibrium value at zero current. Therefore, the reaction rate depends on Φ exponentially:

k = k0 e^(αΦ/T).  (1.73)

The electrode current per unit area is proportional to the electrochemical reaction rate:

I = νe k0 f(C) e^(αΦ/T),  (1.74)

where the "stoichiometric coefficient" νe = nF is the Faraday number F (i.e., the electron charge times the Avogadro number) multiplied by the number of electrons n produced as the result of the reaction. This relation may be applicable only in a limited range of Φ. As the overpotential increases, the physical conditions of the electrode surface may change. Most commonly, a protective film is formed, and, as a result, the electrode is passivated, i.e., the value of k0 drops abruptly. The current may increase exponentially with Φ again in the transpassive region.

Electrochemical circuits. The inverse of the derivative of the current–voltage relation, Re = (dI/dΦ)⁻¹, can be interpreted as the effective resistance of the electrode. The electrode has a substantial capacitance as well, which is inversely proportional to


the double-layer thickness. Thus, it can be modeled by a resistor and a capacitor connected in parallel. The dynamic equation for the overpotential Φ is then written as

γΦ̇ = −I(C, Φ) + (V0 − Φ)/(AR),  (1.75)

where γ is the double-layer capacitance per unit area, V0 is the externally applied voltage, and A is the electrode area. This equation can be solved together with dynamic equations of the type (1.32) for chemical species taking part in the reaction, with the rate constants k± dependent on the overpotential Φ. If the reactants are supplied to and removed from the electrochemical cell, the applicable equations are (1.34). For a single reaction, the chemical rates in (1.33) or (1.34) are related to the current per unit area I as f(C) = Iσ/νe, where σ = A/V is the electrode area per unit volume. These equations are, of course, applicable only when the electrolyte is well stirred, so that the concentrations are uniform everywhere except, perhaps, a diffusion layer adjacent to the electrode. In the latter case, the expression for the current or electrochemical reaction rate should be modified to account for diffusional limitations.

One can notice that (1.75) has the same form as (1.34) or (1.52), and the exponential dependence of the current on the overpotential is rather similar to the Arrhenius temperature dependence of the reaction rate of an endothermic reaction. Indeed, (1.75) can be reduced to the same dimensionless form (1.54), but with −v standing as the argument of the exponent. If the current increases with growing overpotential as in (1.74), the effective resistance is positive and, assuming the concentrations to be fixed, (1.75) has a single stable solution. Instabilities and steady state multiplicity may appear when the effective resistance is negative in some range of Φ, as happens in the passivation region. Otherwise, instabilities may be caused, as in chemical reactions, by autocatalysis or self-inhibition.

1.4.3 Membrane Transport

The performance of electrochemical cells and, in particular, of modern fuel cells also depends on the transport of reactants toward electrodes and the change of potential in the electrolyte. Ion transport and electrochemical reactions play as well a major role in cellular membrane transport and propagation of signals in neural networks. A membrane can be modeled as a capacitor in parallel with a resistor. The voltage drop across the membrane obeys the same relation (1.69) as in a standard circuit, but the current–voltage dependence I(V), determined by the ion transport within the membrane, may be far more involved. The ion transport combines diffusion, driven by the concentration gradient ∇ci of a particular ion, and migration, driven by the potential gradient ∇φ. The ion flux ji is defined, therefore, by the Nernst–Planck equation

ji = −D (∇ci + T⁻¹ nF ci ∇φ),  (1.76)


where n is the ion charge and F is the Faraday number. This equation already contains a spatial dependence of the concentration and potential, and the concentration distribution can be computed by solving the equation

ċi = −∇ · ji.  (1.77)

The same equation applies to the concentration distribution in the electrolyte separating the electrodes in an electrochemical cell, unless it is well mixed by convection, as in the stirred tank discussed in Sects. 1.2.1, 1.2.4. This is already a partial differential equation (PDE), which lies outside the scope of dynamical systems. The description is, however, simplified if the ions do not undergo any chemical transformations within the membrane, and stationary concentration levels ci± are maintained at both sides of the membrane, say, at x = ±L/2. The flux must then be constant and directed along the coordinate x across the membrane. Then (1.77), with the term containing the time derivative dropped, reduces to an ordinary differential equation (ODE) defining the ion concentration distribution within the membrane. It has to be complemented by an equation for φ, which depends also on other ions present and, possibly, on nonionic conduction mechanisms. The simplest case is a constant potential gradient, which will be achieved if the electric current is largely carried by a "background electrolyte" present at a constant concentration far exceeding ci, or by an efficient nonionic mechanism. Then ∇φ = φ′(x) = V/L, and (1.77) becomes

dci/dx + (nFV/LT) ci + j/D = 0.  (1.78)

This first-order equation is subject to two boundary conditions, and the flux j is computed to satisfy a solvability condition of this overdetermined problem:

j = (D/L)(nFV/T) [ci+ − ci− exp(nFV/T)] / [1 − exp(nFV/T)].  (1.79)

This flux can be entered into the equations describing chemical transformations in the volumes separated by the membrane, bringing them to a form similar to (1.34), but with the coefficients βi dependent on the potential V applied across the membrane. Outside chemical and electrical engineering, there is enduring interest in electrochemical processes in neural networks. Nerve axons can be viewed as elements of a sophisticated electric circuit, but chemical reactions and molecular transport play no lesser role in the activity of neural networks than electric currents.
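The flux formula (1.79) can be checked directly against its limits. At small voltage it must reduce to plain Fickian diffusion, and at large voltage migration dominates; the sketch below (arbitrary units, illustrative values, sign conventions as in the printed formula) verifies both, guarding the removable singularity at V = 0.

```python
import math

D, L = 1.0, 1.0              # diffusivity and membrane thickness (arbitrary units)
c_plus, c_minus = 1.0, 2.0   # concentrations maintained at the two sides

def flux(s):
    """Ion flux (1.79); s = nFV/T is the dimensionless voltage."""
    if abs(s) < 1e-12:
        # removable singularity at V = 0: pure diffusion
        return D/L * (c_minus - c_plus)
    return D/L * s * (c_plus - c_minus*math.exp(s)) / (1.0 - math.exp(s))

j_zero = flux(0.0)
j_small = flux(1e-6)
j_large = flux(20.0)   # strongly migration-dominated regime
```

For large positive s the formula tends to (D/L) s c−, i.e., the migration current carried at the concentration level of one side, while for s → 0 it reproduces the diffusive flux proportional to the concentration difference.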

1.5 Spatially Extended Systems

1.5.1 From Time to Coordinate Dependence

Real-life problems of physics, biology, and engineering involve systems varying in space as well as in time. Spatial dependence may be totally suppressed only in simplest cases,


when we deal with a small number of rigid bodies approximated as point particles, or when spatial dependence of concentrations or population densities is suppressed by intensive mixing or rapid motion in a habitat of a limited extent. Dynamics of spatially distributed systems is described by partial rather than ordinary differential equations and, in more complicated cases, by integral equations involving nonlocal interactions between different locations. We will not touch upon such complicated topics in the framework of this book, but restrict ourselves to simpler cases when a space-dependent problem can be either reduced to ODEs or replaced by a finite system of dynamic equations. In the simplest 1D setup, stationary solutions obey an ODE with time replaced by a spatial coordinate. Such "dynamical systems in space", to be considered in Ch. 5, differ in important aspects from time-dependent dynamical systems but retain helpful analogies. In this way, it is also possible to understand some basic spatially-dependent dynamical problems.

Spatially extended nonequilibrium systems are capable, in principle, of exhibiting extremely complex dynamics. With a great (virtually infinite) number of degrees of freedom present, one could expect chaotic behavior of unfathomable complexity to be the norm. However, this does not necessarily happen. More often than not, the latent complexity of behavior is not realized, and the prevailing state can be well characterized by a few properly chosen dynamic variables. This suggests that a dynamical system involving a limited number of variables might be sufficient to faithfully describe the dynamic behavior in lieu of a full partial differential equation system, at least approximately and within a certain range of parameters.

1.5.2 Fourier Decomposition

Fourier series. The choice of such a representative set of variables is by no means obvious. In the absence of other hints, one can expand the variables in a Fourier series, say,

u(x, t) = a0(t) + Σj aj(t) exp(i kj · x).  (1.80)

The amplitude aj(t) of the mode with a wave vector kj is complex, and if the variable u(x, t) is real, the Fourier expansion should also include the complex conjugate terms with the amplitudes aj*(t). When the expansion is used in the original system of equations, different modes couple through nonlinear terms, and, generally, no finite number of modes would ever suffice, as, given two modes with wave vectors ki, kj, their sum and difference will always be generated. The expansion can be, however, truncated, leaving only a small number of modes and neglecting the rest.

The resulting dynamical system with polynomial functions fi(A), dependent on the array A of amplitudes aj(t) of the chosen modes, may look like the "chemical" systems of Sect. 1.2.1 derived from the mass action law. The semblance is, however, superficial, since completely different principles restrict possible combinations. The product of the amplitudes aj, ak, . . . may appear in the equation for the amplitude ai if the respective


wave vectors add up to zero, ki + kj + kk + . . . = 0. This condition ensures that the modes in question are in resonance and, generally, makes the dynamics dependent on a combination of their phases.

Which modes are relevant? One can never be sure which, if any, features of the original system are preserved by such a bold action, but, with good luck, it may bring success. In this way, Lorenz (1963) constructed a three-variable dynamical system by retaining three modes of the Navier–Stokes and convective diffusion equations describing thermal convection in a fluid heated from below. The resulting system was not actually faithful to the original, but turned instead (after some 15 years of obscurity) into a celebrated paradigm of chaotic dynamics (see Sect. 4.3.3).

A more systematic (though empirical) approach to spectral truncation is offered by the Karhunen–Loève method used, in particular, for compressing images and discerning "coherent structures" in turbulent flow and complex chemically reacting systems. This method employs statistical analysis of measured or computed data to find an optimal combination of modes sufficient to reproduce the data in a faithful but economic way. This combination would, of course, change when the underlying system is modified or driven to a different regime.

It is far more appealing to be able to discern relevant modes analytically; this may, indeed, be possible under favorable conditions. The key is given by scale separation: some modes may evolve on far longer characteristic times than do the others. Fast modes relax to their quasistationary values dependent on the current values of slow modes, which remain virtually unchanged during this relaxation period. Following this, slow modes evolve on a longer time scale, while fast modes follow them quasistationarily, being bound by quasiequilibrium algebraic relations – much in the same way as rapidly reacting chemical species reach a partial equilibrium (Sect. 1.2.3).
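Lorenz's three-mode truncation is simple enough to integrate directly. The sketch below uses his classic parameter values (σ = 10, ρ = 28, β = 8/3) and illustrates the sensitive dependence on initial conditions anticipated in Sect. 4.3.3: two copies of the system, initially 10⁻⁸ apart, end up macroscopically separated while each stays on the bounded attractor.

```python
# Lorenz (1963) three-mode truncation with the classic parameter values.
sigma, rho, beta = 10.0, 28.0, 8.0/3.0

def rhs(s):
    x, y, z = s
    return (sigma*(y - x), x*(rho - z) - y, x*y - beta*z)

def rk4(s, dt):
    k1 = rhs(s)
    k2 = rhs(tuple(si + 0.5*dt*ki for si, ki in zip(s, k1)))
    k3 = rhs(tuple(si + 0.5*dt*ki for si, ki in zip(s, k2)))
    k4 = rhs(tuple(si + dt*ki for si, ki in zip(s, k3)))
    return tuple(si + dt*(a + 2*b + 2*c + d)/6
                 for si, a, b, c, d in zip(s, k1, k2, k3, k4))

a = (1.0, 1.0, 1.0)
b = (1.0 + 1e-8, 1.0, 1.0)    # perturbed by 1e-8 in the first mode
dt = 0.005
for _ in range(6000):          # integrate both copies to t = 30
    a, b = rk4(a, dt), rk4(b, dt)

separation = sum((ai - bi)**2 for ai, bi in zip(a, b))**0.5
bound = max(abs(c) for c in a)
```

The exponential divergence of initially close trajectories is precisely what makes long-term prediction impossible even for this three-variable caricature of convection.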
This is expressed by Haken’s (2004) slaving principle: “fast modes are slaved by slow modes”. Clearly, only slow modes are relevant for long-time evolution. Therefore, the original system, however complex it might be, can be reduced to a dynamical system for the amplitudes of slow modes. Exact derivation of such a system of amplitude equations is possible when the amplitudes of slow modes are small, so that the state of the system deviates only slightly from a known “trivial” homogeneous stationary state. We shall study some systems of this kind in Sect. 5.4. Dynamical systems derived through this approach can reproduce the dynamics of the original underlying system faithfully only within a limited range of parameters and within a limited region of the phase space, since slow modes would, generally, be modified or cease to be slow when either parameters or values of dependent variables change. Nevertheless, they may reproduce dynamics correctly in a qualitative sense even outside the domain where their derivation can be justified formally.

1.6 Continuous vs. Discrete

1.6.1 Iterative Systems

What is more complicated – continuous or discrete? The answer appears to be straightforward. Ancient Greeks had trouble with the notion of continuum, musing over Zeno's paradoxes of Achilles unable to overtake a tortoise or an arrow unable to move. Modern freshmen, introduced to calculus as small children looking at a speedometer in a family car, must still struggle with limit theorems invented to overcome the doubts of the ancients, and acquire an aversion to mathematics after passing calculus 101. We are accustomed to think in terms of a continuum, yet we are solving continuum problems using digital computers, which, by their nature, are capable of working with discrete entities only. After all, digital computing, translated literally, is just counting on fingers, which our computers have aplenty.

Discrete systems appear to be far simpler – yet a discrete dynamical system, such as the 1D iterative map

un+1 = f(un),  (1.81)

generates far more complicated dynamics than a 1D continuous dynamical system. This is an intrinsic reason for numerical instabilities plaguing digital computations. A continuous dynamical system is constrained topologically: trajectories may never intersect. This is why chaotic dynamics is impossible in less than three dimensions. A discrete system, free of this constraint, may be chaotic already in a single dimension (more on this in Sect. 4.2).

An iterative map may arise as a mathematical procedure, e.g., finding a root of an algebraic equation f(u) = 0 using Newton's method. In 1D, it is expressed by the iterative map

un+1 = un − f(un)/f′(un).  (1.82)

The method works because the iterations converge to an equilibrium, but this can never be warranted when the dimension is higher than one. This is why we need expert
Figure 1.10. Newton’s method. Iterations, shown by dashed lines, promptly converge to a root, but not necessarily to the closest one.


computer scientists fighting numerical instabilities. Even in 1D, the root may not be the one we are looking for if it is not unique (see Fig. 1.10).

In population dynamics, an iterative map may express a relation between numbers of individuals or population densities in successive generations. Such a relation, whether derived from a model or measured empirically, would reflect, in fact, the result of a very complex continuous process. Thus, a discrete dynamical system, if it is taken from the physical world rather than introduced as a mathematical abstraction, will usually be a result of a reduction of a continuous system. Examples of such reductions are given further in this section. Such a reduction, if it is possible, is always welcome, as it greatly facilitates computations. On the other hand, a reductive procedure hidden in the background explains the complexity of behavior of a discrete system, which reflects the complexity of a continuous system hidden underneath.
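Newton's method (1.82) and its disregard for the proximity of roots are easy to demonstrate. For the illustrative choice f(u) = u³ − u, with roots −1, 0, 1, the iteration started midway between the roots 0 and 1 lands on the distant root −1 (cf. Fig. 1.10).

```python
# Newton's method (1.82) for f(u) = u^3 - u, with roots -1, 0, 1.
def f(u):
    return u**3 - u

def fprime(u):
    return 3.0*u**2 - 1.0

u = 0.5                        # midway between the roots 0 and 1
for _ in range(50):
    u = u - f(u)/fprime(u)     # the iterative map (1.82)
```

Starting at u = 0.5, the nearest roots are 0 and 1 (both at distance 0.5), yet a single Newton step, following the tangent line, jumps all the way to u = −1, where the iteration then stays.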

1.6.2 From Continuous to Discrete

Reduction of a continuous dynamical system to a map is possible when an analytically predictable behavior is interrupted by discrete events. A simple example is a bouncing ball. As with other mechanical systems (Sect. 1.1.5), taking account of dissipation and compensating it by pumping energy makes the system both more realistic and more interesting. We consider therefore a ball bouncing with an imperfect restitution on a vibrating plate. We shall neglect air resistance, another dissipation mechanism, assuming it to be relatively weak. Between two consecutive impacts, the ball jumping off vertically at a moment tn with a velocity un decelerates in the gravity field, so that its velocity and vertical displacement are

u(t) = un − gt,  z(t) = un t − gt²/2.  (1.83)

(b)























 

















Figure 1.11. The bouncing velocity as a function of time. (a): The map (1.86) with α = 0.99, γ = 0.05; the solid line shows the change of the bouncing velocity on a quiescent plate. (b): The map (1.87) with α = 0.9, γ = 0.1.


The next impact occurs when z(t) coincides with the displacement h(t) = a sin ωt of the plate vibrating with the amplitude a and frequency ω. Solving the equation z(t) = h(t) involves algebraic complications that are unnecessary from a qualitative point of view; they are avoided if the amplitude is assumed to be small and the plate displacement is neglected. The time of the next impact is then approximated as

t_{n+1} = t_n + (2/g) u_n.   (1.84)

The ball returns with the velocity u(t_{n+1}) = −u_n to encounter the plate moving with the velocity h′(t_{n+1}) = aω cos ωt_{n+1}. The velocity of the ball bouncing off upon the next impact is

u_{n+1} = α u_n + aω cos ω( t_n + (2/g) u_n ),   (1.85)

where α < 1 is the restitution coefficient. Even though a is small, the frequency may be sufficiently large, so that aω may be comparable with u_n. Scaling time by 1/ω and velocity by g/(2ω), the 2D map (1.84), (1.85) is rewritten as

t_{n+1} = t_n + u_n,   u_{n+1} = α u_n + γ cos(t_n + u_n),   (1.86)

where γ = 2aω²/g. On average, the ball does not gain energy, though its bouncing velocity sometimes increases. As we see in Fig. 1.11a, eventually it stops even faster than on a quiescent plate. If, instead, the plate does not move but the ball gets a random push up every time it bounces, the map (1.86) is modified to

u_{n+1} = α u_n + γ ζ_{n+1},   (1.87)




where ζ_n is a random number uniformly distributed in the interval (0, 1). With this random energy input, the velocity approaches some average level and keeps fluctuating, as seen in Fig. 1.11b.
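Both maps are easy to iterate directly. The Python sketch below (an illustration, not the book's Mathematica notebook) iterates the map (1.86) with the parameters of Fig. 1.11a and the randomly kicked map (1.87) with those of Fig. 1.11b; for (1.87) the stationary mean velocity, γ·⟨ζ⟩/(1 − α) = 0.5 for these parameters, can be checked directly.

```python
import math
import random

def bounce_vibrating(t, u, alpha, gamma):
    """One step of the scaled map (1.86) for the ball on a vibrating plate."""
    return t + u, alpha * u + gamma * math.cos(t + u)

def bounce_random(u, alpha, gamma, rng):
    """One step of the randomly kicked map (1.87)."""
    return alpha * u + gamma * rng.random()

# Map (1.86) with the parameters of Fig. 1.11a; here |u| cannot exceed its
# initial value, since |u_next| <= 0.99 |u| + 0.05 <= 5.
t, u = 0.0, 5.0
for _ in range(20000):
    t, u = bounce_vibrating(t, u, 0.99, 0.05)
assert abs(u) <= 5.0 + 1e-9

# Map (1.87) with the parameters of Fig. 1.11b: averaging the stationary map
# gives the mean bouncing velocity gamma * <zeta> / (1 - alpha) = 0.5.
rng = random.Random(1)
u_r, samples = 5.0, []
for n in range(20000):
    u_r = bounce_random(u_r, 0.9, 0.1, rng)
    if n > 1000:                     # discard the initial transient
        samples.append(u_r)
mean_u = sum(samples) / len(samples)
print(abs(mean_u - 0.5) < 0.05)  # True: the velocity fluctuates about the mean
```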

1.6.3 Poincaré Maps


Figure 1.12. Construction of a Poincaré map

The most general way of converting a continuous dynamical system into a discrete map is constructing a Poincaré map. In an autonomous system, it may be possible to choose as a Poincaré section a manifold of dimension one less than that of the corresponding dynamical system, which is repeatedly crossed by all trajectories (see Fig. 1.12). A Poincaré map of a nonlinear dynamical system can


be constructed only numerically, but it gives great advantages for studying the dynamic behavior, such as bifurcations of periodic orbits (Ch. 3) and especially chaotic dynamics (Ch. 4). By marking successive intersections of trajectories with the Poincaré section, this mapping can be further used to construct a first-return map

u_{n+1} = f(u_n).   (1.88)

Finding a suitable Poincaré section is straightforward in the case of a periodically forced system. Such a system, e.g. (1.28), can be made autonomous by adding an angular variable φ = ωt and its dynamic equation φ̇ = ω. The dependence on φ is 2π-periodic, so that values differing by 2π can be identified, and a map is constructed by marking the values of the variables over the period T of the forcing input.
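For a periodically forced system this stroboscopic construction can be sketched numerically. The example below is illustrative only: it uses a hand-coded RK4 integrator and arbitrarily chosen parameters for a damped linear oscillator, sampling the state once per forcing period; since the system is linear and damped, the stroboscopic points converge to a fixed point of the first-return map.

```python
import math

def rk4_step(f, t, y, h):
    """One classical Runge-Kutta step for y' = f(t, y), y a list of floats."""
    k1 = f(t, y)
    k2 = f(t + h/2, [yi + h/2*ki for yi, ki in zip(y, k1)])
    k3 = f(t + h/2, [yi + h/2*ki for yi, ki in zip(y, k2)])
    k4 = f(t + h, [yi + h*ki for yi, ki in zip(y, k3)])
    return [yi + h/6*(a + 2*b + 2*c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

# Periodically forced, damped oscillator (illustrative parameters):
#   x'' + 0.2 x' + x = 0.5 cos(1.2 t)
omega = 1.2
def rhs(t, y):
    x, v = y
    return [v, -0.2*v - x + 0.5*math.cos(omega*t)]

T = 2*math.pi/omega          # forcing period = stroboscopic sampling interval
steps = 200                  # RK4 steps per period
y, t = [1.0, 0.0], 0.0
strobe = []
for period in range(60):
    for _ in range(steps):
        y = rk4_step(rhs, t, y, T/steps)
        t += T/steps
    strobe.append(tuple(y))

# The stroboscopic points converge to a fixed point of the first-return map.
d = math.dist(strobe[-1], strobe[-2])
print(d < 1e-8)  # True
```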

Further Reading

There are numerous textbooks and reviews treating in depth the topics touched upon in this chapter. They are good for satisfying your curiosity but are not necessary for following the further material. It will not hurt, nevertheless, to refresh, for example, vol. 1 of Landau and Lifshitz (1960) or another favorite book on classical mechanics, as well as basic texts on the other topics. On topics like chemical reactors, it is better not to dive into the chemical and engineering literature, but the book by Aris (1965), a mathematician who spent most of his career in a chemical engineering department, might be helpful. My favorite book on electrochemistry is that by my tutor Levich (1959). Murray (2003) describes many biological applications of dynamical systems, and ecological problems served May (1974b) as a springboard for studies of chaotic dynamics.

2 Local Bifurcations

Chapter Overview

This chapter contains basic (somewhat boring) recipes for building branches of stationary states, analyzing their stability and locating bifurcations at a simple zero eigenvalue, where these branches come together, and at an imaginary eigenvalue, where oscillatory states (periodic orbits) emerge. We first encounter here the notion of stable and unstable manifolds, which will play an important role in the analysis of global bifurcations and the transition to chaos in the following chapters. The derivation of amplitude equations (normal forms) near the fold, transcritical, cusp, and Hopf bifurcation points allows us to establish stability (or a lack of it) of branches of stationary or periodic solutions converging there. Algebraic calculations, which may be technically cumbersome, allow us to distinguish between supercritical and subcritical bifurcations. We briefly formulate the problem of stability of periodic orbits, but the analytical techniques of this chapter allow us to investigate it only in the vicinity of a Hopf bifurcation; this problem can be effectively treated only with the help of the global analysis in the following chapter. However, we can follow their simplest bifurcations numerically when investigating the large-amplitude behavior of an exothermic chemical reaction and of simple models of population dynamics in the accompanying Mathematica notebooks.

2.1 Bifurcation of Stationary States

2.1.1 Branches of Stationary States

A general dynamical system. Evolution of a dynamical system brings it, unless severely constrained by conservation laws, to some attractor where it eventually resides. In two phase-space dimensions, there are only two kinds of attractors: stable stationary or oscillatory states. These states may also be unstable – but even then they play an important role in separating the attraction basins of stable states. Stationary states are also called fixed points or equilibria. We will avoid the last term, common in the mathematical literature, since dynamical systems of interest commonly describe nonequilibrium systems that may be sustained in a stationary state by external fluxes. Oscillatory states are also commonly called periodic orbits or cycles.

Finding stationary states of a dynamical system by solving a system of algebraic equations may be a technically difficult task, in particular, when the dependence on some variables is expressed by exponentials or other transcendental functions. Sometimes it is convenient to find an implicit solution by solving the equations with respect to some parameter that enters them in a simple way (an example is found in Sect. 2.4.1). More generally, solutions can be found numerically. A general dynamical system can be written in the array form

u̇ = f(u; p),   (2.1)

where f(u) is an array of nonlinear functions of the array of variables u, which also depends on the array of parameters p. Its stationary solutions satisfy the system of algebraic equations

f(u; p) = 0.   (2.2)

Continuation procedure. Suppose that we know the location of a zero of the function array f(u; p) at a certain point p = p_0 in the parametric space, i.e., a solution u = u_0 satisfying f(u_0; p_0) = 0. This solution may be trivial at some values of parameters; for example, it is easy to solve (1.53), (1.54) at ν = 0. Near this point, (2.2) can be linearized in both variables and parameters, assuming the deviations u_1 = u − u_0, p_1 = p − p_0 to be small:

f_u u_1 + f_p p_1 = 0.   (2.3)

Here f_u, f_p are arrays of derivatives with respect to variables and parameters evaluated at the reference point u = u_0, p = p_0. The square array f_u is called a Jacobi matrix. If its inverse f_u^{−1} exists, (2.3) can be resolved to find the shift of the stationary solution corresponding to a parametric shift p_1:

u_1 = −f_u^{−1} f_p p_1.   (2.4)

The rectangular array f_u^{−1} f_p is called the parametric sensitivity matrix.


A continuous dependence on parameters can be used to construct a branch of stationary states, as long as the matrix f_u has an inverse. One can assume u_1 = du, p_1 = dp to be infinitesimally small and recast (2.4) as a differential continuation equation along some parametric path. For example, choosing a certain single parameter p in p yields

du/dp = −f_u^{−1} f_p,   (2.5)

where f_u, f_p are now evaluated at the current values u, p. Solving this equation yields a solution branch as a function of p. This branch terminates when the matrix f_u has a zero eigenvalue and, consequently, its determinant vanishes and the inverse does not exist. The continuation technique is used by the public-domain package AUTO to build up branches of stationary as well as oscillatory states. In its refined form, the capabilities of AUTO are extended to branches of spatially inhomogeneous solutions.
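A minimal numerical sketch of the continuation equation (2.5) for a single variable and a single parameter follows. The model function f(u; p) = p + u − u³ is an illustrative choice, not an equation from the text: an Euler predictor step along (2.5) is followed by Newton corrections at fixed p, and the branch is terminated when f_u approaches zero, signaling a fold.

```python
def f(u, p):   return p + u - u**3     # illustrative model, not from the text
def f_u(u, p): return 1.0 - 3.0*u**2   # "Jacobi matrix" (here 1x1)
def f_p(u, p): return 1.0

def continue_branch(u, p, dp, nmax=1000):
    """Euler predictor along (2.5) plus Newton corrections at fixed p;
    stop when f_u approaches zero, i.e., near a fold."""
    for _ in range(nmax):
        if abs(f_u(u, p)) < 0.2:            # determinant nearly vanishes
            break
        u += -f_p(u, p) / f_u(u, p) * dp    # predictor: du = -f_u^{-1} f_p dp
        p += dp
        for _ in range(5):                  # Newton corrector
            u -= f(u, p) / f_u(u, p)
    return u, p

# Start on the lower branch at p = -2 and continue towards the fold, which
# for this model sits at u = -1/sqrt(3), p = 2/(3 sqrt(3)) ~ 0.385.
u_end, p_end = continue_branch(-1.52, -2.0, 0.005)
print(round(p_end, 2))  # close to the fold location 0.385
```

A production code (such as AUTO) instead switches to arclength parametrization near the fold, so that the branch can be followed around it.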

2.1.2 Bifurcation Expansion

Bifurcation manifold. The continuation fails when the matrix f_u becomes singular. This is a bifurcation point where the branch of solutions terminates or intersects another branch. If there are P parameters, one can talk more precisely of a bifurcation manifold in the P-dimensional parametric space. Generically, this manifold is (P − 1)-dimensional, i.e., its codimension is one.

Let now p_0 be a point on the bifurcation manifold. In the vicinity of this point, the linear continuation fails, so that one can neither construct a stationary solution at values of parameters close to this point nor characterize its stability in the linear approximation. Nevertheless, it is still possible to construct solutions in the vicinity of bifurcation points, but we need to scale the deviations of both variables and parameters in a different way and expand the original system (2.2) to higher orders. One can anticipate that the effect of small parametric deviations will be different in the directions parallel and normal to the bifurcation manifold. It is therefore convenient, instead of fixing the relative orders of magnitude of parametric deviations beforehand, to expand both u and p in powers of a dummy small parameter ε:

u = u_0 + ε u_1 + ε² u_2 + … ,   p = p_0 + ε p_1 + ε² p_2 + …   (2.6)

The expansion of f(u; p) reads

f(u; p) = f(u_0; p_0) + ε (f_u u_1 + f_p p_1) + ε² ( f_u u_2 + f_p p_2 + (1/2) f_uu[u_1, u_1] + f_up[u_1, p_1] + (1/2) f_pp[p_1, p_1] ) + …   (2.7)

Higher derivatives f_uu, f_up, etc. are defined as arrays of a higher tensor rank: thus, the mth derivative acts upon m arrays written to the right of it in brackets. In the first order, (2.3) is recovered.

The determinant of a singular matrix f_u is zero. Since the determinant equals the product of the eigenvalues, this matrix must have


a zero eigenvalue. We suppose now that this eigenvalue is nondegenerate; then there is a unique eigenvector U satisfying f_u U = 0. We have to assume as well that the zero eigenvalue is isolated, i.e., that f_u has no other eigenvalues with an O(ε) real part.

Solvability conditions. By the Fredholm alternative, if a homogeneous linear equation Lu = 0 has a solution, the respective inhomogeneous equation Lu + F(u) = 0 is solvable only when the inhomogeneity is orthogonal to the eigenfunctions of the conjugate operator L^† with the zero eigenvalue. This principle is applicable to any linear operator. We will repeatedly rely on it for treating time- and space-dependent problems, and will clarify the meaning of the conjugation in due course; but for the time being we have an algebraic equation, where the linear operator is just a real square matrix f_u, and the conjugate operator is its transpose f_u^†, which has the eigenvector U^† with the zero eigenvalue satisfying U^† f_u = 0. The solvability condition is then U^† F(u) = 0. Less formally, we will say that it means that the inhomogeneity does not project on U. Since U, U^† can be multiplied by any number, we will further apply the normalization condition U^† · U = 1. In the particular case of (2.3), the solvability condition reads

κ_0^{(1)} ≡ U^† · f_p p_1 = 0.   (2.8)

In our notation, which will be useful further on, the subscripts denote the scaling of the respective parametric deviations from the bifurcation point. Parameters with the subscript zero are characteristic of a particular point on the bifurcation manifold; parameters with the subscript one depend on O(ε) parametric deviations, etc. The superscripts in parentheses (which will be omitted at a later stage) denote the order of the expansion; thus, κ_0^{(1)} depends on the position of the parametric point p_0 and is obtained in the first order of the expansion. Generically, these parameters differ from zero, but they may vanish on a manifold of a higher codimension that corresponds to a degenerate or a higher-order bifurcation.

The relation (2.8) defines the tangent hyperplane to the bifurcation manifold at the point p = p_0. Since no solutions of (2.4) exist unless (2.8) is satisfied, one can obtain O(ε) corrections to the solution u_0(p_0) only when the parameters are shifted within this plane. This will, however, bring us to an O(ε²) vicinity of another point on the bifurcation manifold. In a generic case, such a displacement along the bifurcation manifold is superfluous – we could have started the expansion from another point on the same manifold as well. Therefore, one can consider only parametric deviations transverse to the bifurcation manifold and set p_1 = 0, while restricting transverse displacements to a higher order p_2. Then (2.3) reduces to the homogeneous linear equation f_u u_1 = 0, which has a continuous family of solutions u_1 = aU with an arbitrary amplitude a.
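For a concrete matrix with a zero eigenvalue, the eigenvectors U and U^† and the solvability test can be computed numerically. The 2×2 matrix below is an arbitrary singular example, not taken from the book; the null vectors are extracted from the SVD.

```python
import numpy as np

# A singular "Jacobi matrix" (illustrative 2x2 example, not from the book)
fu = np.array([[1.0, 2.0],
               [3.0, 6.0]])          # det = 0: a zero eigenvalue

# Right eigenvector U with fu U = 0: null vector from the SVD
U = np.linalg.svd(fu)[2][-1]
# Left eigenvector U^t with U^t fu = 0: null vector of the transpose
Ud = np.linalg.svd(fu.T)[2][-1]
Ud = Ud / (Ud @ U)                   # normalization U^t . U = 1

assert np.allclose(fu @ U, 0) and np.allclose(Ud @ fu, 0)

# Solvability of fu u + F = 0 requires U^t . F = 0 (Fredholm alternative):
F_ok  = np.array([1.0, 3.0])         # does not project on U^t -> solvable
F_bad = np.array([1.0, 0.0])         # projects on U^t -> no solution
print(np.isclose(Ud @ F_ok, 0), np.isclose(Ud @ F_bad, 0))  # True False
```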

2.1.3 Fold and Transcritical Bifurcations

The second-order equation derived from (2.7) is

f_u u_2 + f_p p_2 + (1/2) a² f_uu[U, U] = 0.   (2.9)

This leads to the solvability condition of (2.9) with respect to u_2:

κ_0^{(2)} + κ_2^{(0)} a² = 0.   (2.10)

The parameters of this equation are

κ_0^{(2)} = U^† · f_p p_2,   κ_2^{(0)} = (1/2) U^† · f_uu[U, U].   (2.11)

Equation (2.10) defines a fold bifurcation (Fig. 2.1a). Two solutions exist when κ_0^{(2)} and κ_2^{(0)} have opposite signs, and none otherwise. Thus, a pair of solutions emerges or disappears when κ_0^{(2)} switches its sign with a shift of some parameter of the system.

Nongeneric situations may arise either because of intrinsic symmetries of the system or "accidentally" on a manifold of a higher codimension. It may happen that the matrix f_p vanishes identically. This would be the case when u_0 is a "trivial" solution that remains constant at all values of parameters (in which case, all higher derivatives with respect to parameters vanish as well). This happens, for example, in equations of population dynamics, such as (1.59), where trivial solutions with at least one population density vanishing exist at any values of parameters. Then (2.3) is satisfied identically, and first-order parametric deviations can be retained. The second-order equation derived from (2.7) with f_p, f_pp set to zero is then

f_u u_2 + a f_up[U, p_1] + (1/2) a² f_uu[U, U] = 0,   (2.12)

Figure 2.1. Dependence of the amplitude on the bifurcation parameter for elementary fold (a) and transcritical (b) bifurcations. Solid lines denote stable states, and dashed lines denote unstable states. The curves in magenta show splitting of the transcritical bifurcation due to a weak perturbation


where the array f_up is obtained by differentiating f_u with respect to the parameters. The solvability condition of this equation is

κ_1^{(1)} a + κ_2^{(0)} a² = 0,   κ_1^{(1)} = U^† · f_up[U, p_1],   κ_2^{(0)} = (1/2) U^† · f_uu[U, U].   (2.13)

This equation, describing a transcritical bifurcation, has two solutions on both sides of the bifurcation manifold (Fig. 2.1b); one of them is the trivial solution a = 0. In a generic case, a weak perturbation, adding to (2.13) a small constant term κ_0^{(2)}, splits the transcritical bifurcation into two folds, as shown in Fig. 2.1b.

2.1.4 Cusp Singularity

If κ_2^{(0)} = 0, the expansion should be continued to the next order. The coefficient κ_2^{(0)} may vanish identically because of the symmetry of the original problem to inversion of u. Otherwise, it can be equal to zero at certain values of the parameters of the problem. Generally, the two conditions, vanishing of both the determinant of f_u and κ_2^{(0)}, are satisfied simultaneously on a codimension-two manifold in the parametric space that corresponds to a cusp singularity.

In order to continue the expansion, deviations transverse to the bifurcation manifold have to be restricted by the second-order condition κ_0^{(2)} = 0, which is stronger than the first-order condition (2.8). First-order parametric deviations p_1 parallel to the bifurcation manifold, which are still allowed by (2.8), should be restricted by the condition κ_1^{(1)} = 0. If the array p contains two parameters only, the conditions κ_1^{(1)} = 0 and κ_0^{(1)} = 0 imply, in a nondegenerate case, that first-order parametric deviations should vanish identically. When more parameters are available, parametric deviations satisfying both these conditions are again superfluous, since they correspond just to gliding into a closer vicinity of another point on the codimension-two bifurcation manifold in a higher-dimensional parametric space. Further on, we will therefore set p_1 to zero identically. The third-order equation reads

f_u u_3 + f_p p_3 + a f_up[U, p_2] + a f_uu[U, u_2] + (1/6) a³ f_uuu[U, U, U] = 0.   (2.14)

Here f_uuu is the 4D array of third derivatives. The second-order function u_2 has to be found by solving (2.9). Only the solution of the inhomogeneous equation that does not project on the eigenvector U is relevant. It can be expressed as a quadratic form in a containing two suitable vectors:

u_2 = V_0(p_2) + a² V_2.   (2.15)

The solvability condition of (2.14) is then obtained in the form

κ_0^{(3)} + κ_1^{(2)} a + κ_3^{(0)} a³ = 0,   (2.16)


Figure 2.2. The cusp singularity. (a): The cusped region of multiple stationary states in the parametric plane κ_0, κ_1. (b)–(d): Dependence of the amplitude on a bifurcation parameter. (b): Asymmetric bifurcation. (c): Supercritical pitchfork bifurcation. (d): Subcritical pitchfork bifurcation described by a symmetric fifth-order normal form. Solid lines denote stable states, and dashed lines denote unstable states.

where

κ_0^{(3)} = U^† · f_p p_3,   κ_1^{(2)} = U^† · f_up[U, p_2] + U^† · f_uu[U, V_0],   κ_3^{(0)} = (1/6) U^† · f_uuu[U, U, U] + U^† · f_uu[U, V_2].   (2.17)

This equation presents the parametric unfolding of a cusp singularity.

The parameter κ_3^{(0)} must be negative to prevent escape to "infinity", i.e., to large deviations from the origin u = u_0. Since this parameter does not depend on deviations from the bifurcation manifold, the amplitude can be rescaled, a → a(−κ_3^{(0)})^{1/3}, to reduce the coefficient at the cubic term to −1. We will write the resulting cubic equation retaining the same symbols for the rescaled parameters but omitting the upper indices:

κ_0 + κ_1 a − a³ = 0.   (2.18)

This equation has three solutions in the cusped region

κ_1 > 0,   |κ_0| < 2(κ_1/3)^{3/2},


shown in Fig. 2.2a, and one otherwise. The solutions evolve with changing κ_1 as shown in Fig. 2.2b in the generic case κ_0 ≠ 0, and as shown in Fig. 2.2c in the symmetric case κ_0 = 0. In the latter case, the bifurcation is called a pitchfork, after what its diagram looks like.
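The root count of the cusp normal form (2.18) can be checked directly against the cusped-region criterion; the sketch below (with illustrative parameter values) counts the real roots of κ_0 + κ_1 a − a³ = 0.

```python
import numpy as np

def n_states(kappa0, kappa1):
    """Number of real roots of the cusp normal form kappa0 + kappa1*a - a^3 = 0."""
    roots = np.roots([-1.0, 0.0, kappa1, kappa0])  # coefficients, highest first
    return int(np.sum(np.abs(roots.imag) < 1e-9))

def inside_cusp(kappa0, kappa1):
    """The cusped region kappa1 > 0, |kappa0| < 2*(kappa1/3)**1.5 of Fig. 2.2a."""
    return kappa1 > 0 and abs(kappa0) < 2.0*(kappa1/3.0)**1.5

# Three stationary states inside the cusped region, one outside:
for k0, k1 in [(0.0, 1.0), (0.3, 1.0), (0.5, 1.0), (0.1, -1.0)]:
    assert (n_states(k0, k1) == 3) == inside_cusp(k0, k1)
print("root count matches the cusp criterion")
```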

2.1.5 Higher Singularities

Continuing to higher orders. The condition κ_3^{(0)} = 0 defines a singular bifurcation manifold of codimension three. Again, it is possible to reduce the order of magnitude of parametric deviations parallel to the bifurcation manifold to O(ε³) and of transverse deviations to O(ε⁴), retaining only O(ε²) deviations from the singular point in the hyperplane parallel to the codimension-two manifold of the cusp singularity. This leads, in the fourth order of the bifurcation expansion, to a quartic equation that represents the unfolding of the swallowtail singularity. In the symmetric case, when only odd powers of a can appear, the next highest singularity is of the fifth order, and its unfolding has the form

κ_1^{(4)} a + κ_3^{(2)} a³ + κ_5^{(0)} a⁵ = 0,   (2.19)

with the coefficients expressed through arrays of derivatives, now up to the fifth order, in a way similar to the above. An important particular case is κ_3^{(2)} > 0, κ_5^{(0)} < 0, representing the unfolding of a subcritical cusp bifurcation. The symmetric bifurcation diagram for this case is shown in Fig. 2.2d.

The procedure can be continued further if a sufficient number of free parameters is available. The generic fifth-order singularity, with its unfolding containing, unlike (2.19), also even terms, is called the butterfly. Even if names dry up before parameters do, one can go on to arrive, in a generic case, in the nth order at a normal form expressed by an nth-order polynomial

κ_n^{(0)} a^n + Σ_{j=2}^{n} κ_{n−j}^{(j)} a^{n−j} = 0.   (2.20)

Additional restrictions on parametric deviations have to be imposed to make the orders of magnitude of different terms compatible. Practical computation of the coefficients is a tedious task, but, since the problem is fully formalized, it can be handled by symbolic computation programs.

Rationale for higher-order expansions. One can ask what is the sense of such a procedure, which restricts us to an ever more diminutive slice of the parametric space where more and more stationary states and transitions among them crowd in. It forms the rational basis of Thom's (1975) rather overblown catastrophe theory. One can argue that all transitions that can be followed relatively easily with the help of a polynomial normal form can also be observed in the full system (2.2). Even if they might then be more technically difficult to follow, the advantage is that they will be directly related


to physical parameters, and one would be spared the derivation of the coefficients κ_j. On the other hand, locating a highest-order singularity identifies the maximum number of solutions that can be observed at a certain point in the parametric space, and (2.20) represents the complexity of transitions in (2.2) in a qualitative sense, even if it is literally valid as a rational approximation only very close to the singular bifurcation point. This said, one need not even bother to locate this point numerically, which might be quite an effort, but may be satisfied just by finding out the highest possible singularity that may exist in the system in question. Of course, one can never be sure that the highest order n is equal to the number of parameters of the problem. Even if this theoretical ceiling is achieved, the singular point may fall into an unphysical or inaccessible parametric domain, so that not all theoretically possible kinds of bifurcation diagrams may actually be observed. On the other hand, in special cases when (2.2) contains periodic functions of some variable(s), the number of distinct solutions might be infinite, but they would never be brought together at a singular bifurcation manifold.

Clearly, the maximum number of real roots of the general normal form (2.20) equals n. Following different parametric paths, one can build up variegated bifurcation diagrams more elaborate than those in Figs. 2.1, 2.2, but containing, in a generic case, the same basic elements. Already looking at Fig. 2.2b, we see that it contains the same fold bifurcation as in Fig. 2.1a, with the essential difference that the system may escape from the unstable branch to the upper stable branch rather than to infinity. Only the symmetric picture in Fig. 2.2c is topologically different. Generically, unfoldings of higher singularities contain folds inclined either backwards or forwards, which cannot be considered as qualitatively distinct.
Two such folds can combine into isolas, which are disconnected from other solution branches. This isolation can be, however, only relative, as isolas may appear in some 2D sections of the parametric space and be connected in other sections. All even-order singularities are asymmetric to flipping the sign, and the system would always be able to escape from some unstable branches to infinity or a faraway stationary state. Odd-order singularities, provided the highest-order coefficient κ_n^{(0)} is negative, are applicable to dynamical systems confined to a finite region of the phase space. The qualitatively new feature appearing in the unfolding of the butterfly singularity is the subcritical pitchfork bifurcation shown in Fig. 2.2d. This feature is, however, not generic, as it requires inversion symmetry. In other nongeneric cases, when, in the absence of symmetry, a persistent constant (usually trivial) solution exists, transcritical bifurcations appear, as in Fig. 2.1b.

2.2 Stability and Slow Dynamics

2.2.1 Linear Stability Analysis

Linearization. A stationary state u = u_0 is stable if the system returns to it following any infinitesimally small perturbation ũ = u − u_0. This is a common-sense definition


of stability that the physicist or the engineer would intuitively apply. Mathematicians distinguish between Lyapunov stability, implying that the trajectory lingers in the vicinity of a stationary or other persistent state, and asymptotic stability, allowing the trajectory to travel around the phase space for a while before returning home as a prodigal son. These definitions are clearly not equivalent, and our intuitive definition requires all trajectories both never to leave the vicinity of the state in question and to return to it asymptotically at long times (t → ∞).

Since perturbations are presumed to be arbitrarily small, it is sufficient to consider equations linearized in the vicinity of a stationary state in order to decide whether it is stable or not. This is, indeed, true generically, but nonlinear terms may be essential at particular values of parameters or in the presence of certain symmetries. The linearized general dynamical system (2.1) is

dũ/dt = f_u ũ,   (2.21)

where f_u is the Jacobi matrix introduced in Sect. 2.1.2, with all derivatives evaluated at u = u_0. In a generic case (we will come to exceptions below and explore their implications in Sect. 3.3), solutions of a linear system can be presented as a combination of exponents

ũ = Σ_i e^{λ_i t} v_i,   (2.22)

where λ_i are eigenvalues of f_u, and v_i are the corresponding eigenvectors satisfying

f_u v_i = λ_i v_i.   (2.23)

The eigenvalues λ_i are also called local Lyapunov exponents, as they define the divergence of trajectories of the dynamical system in the vicinity of the fixed point. Since the matrix f_u is real, all eigenvalues and eigenvectors must either be real or come in complex conjugate pairs. The form (2.22) is applicable if all eigenvalues are nondegenerate. This would be the case for a matrix f_u with its elements picked at random, in the absence of any specific symmetries. A degeneracy may occur at particular combinations of values of parameters. Thus, one of the eigenvalues may become degenerate with multiplicity two on a codimension-one manifold in the parametric space. Higher or additional degeneracies may occur on manifolds of a higher codimension. A nondegenerate matrix can be transformed to a diagonal form:

v^† f_u v = Λ,   (2.24)

where Λ is a diagonal matrix with the elements λ_i, v is a square matrix whose columns are the eigenvectors v_i, and v^† is a square matrix whose rows are the conjugate eigenvectors v_i^† satisfying

v_i^† f_u = λ_i v_i^†.   (2.25)


A degenerate matrix can be transformed to a Jordan normal form, where a 2 × 2 block with λ_i on the main diagonal and unity above it corresponds to a doubly degenerate eigenvalue. Analogously, a k × k block with unity elements immediately above the main diagonal corresponds to an eigenvalue with the degeneracy k. The solution (2.22) includes in this case a polynomial in t of the order k − 1 multiplying the respective exponent e^{λ_i t}.

Stability criteria. It is clear from either (2.22) or its modified form for a degenerate case that all eigenvalues must have negative real parts to ensure stability of the stationary solution. The eigenvalues can be found by solving the characteristic equation

Det(λI − f_u) = 0,   (2.26)

where I is the unity matrix. After computing the determinant, this equation reduces to a polynomial of the order n equal to the dimension of f_u:

Σ_{j=0}^{n} b_j λ^j = 0,   (2.27)

where b_n = 1 and b_0 is the Jacobian Det(f_u) if n is even, and the same with the inverted sign if n is odd. Finding all eigenvalues and eigenvectors is superfluous for linear stability analysis. Short of solving the characteristic equation, stability can be established with the help of the Routh–Hurwitz criterion, which requires only computing the Hurwitz determinants, the first of which is the coefficient b_0 in (2.27). Provided all coefficients of the dynamical system are real, (2.26) can be presented as the product of terms of the kind (λ − β_i) and (λ − γ_i)² + ω_i², where β_i are real and γ_i ± iω_i are complex eigenvalues. Provided all β_i are negative, b_0 > 0 independently of the signs of γ_i. Thus, b_0 > 0 is a necessary, but not sufficient, stability condition.

We will be mostly interested in finding points of marginal stability, where the leading eigenvalue with the largest real part crosses the imaginary axis in the complex plane. When this eigenvalue is real, this happens at a bifurcation manifold Det(f_u) = 0, and finding out the sign of the Jacobian is sufficient for establishing linear stability in the vicinity of a bifurcation of stationary states. The trailing eigenvalues, which still have negative real parts at the bifurcation manifold, are not interesting, at least from a qualitative point of view. Midway along a branch of stationary solutions, stability may be lost as a result of a pair of complex eigenvalues crossing the imaginary axis in the complex plane. This is a Hopf bifurcation, also called, more fairly, Andronov–Hopf or, still further extended in time and printed length, Poincaré–Andronov–Hopf. We will return to it in Sect. 2.3.1 (sticking to the established short name). The Hopf bifurcation can be located by computing the second Hurwitz determinant. In a two-variable system it reduces to the condition of zero trace of the Jacobi matrix f_u. In a generic three-variable system, the expansion of (2.26) at the Hopf bifurcation point must have the

form

Det(λI − f_u) = (λ² + ω²)(λ + β) = λ³ + βλ² + ω²λ + ω²β,   (2.28)

where −β is the remaining negative real eigenvalue. Consequently, the Hopf bifurcation condition can be formulated as vanishing of the second Hurwitz determinant b1 b2 − b0 . Computing this expression for the case when the leading complex eigenvalue has a small real part shows that b1 b2 − b0 > 0 is another necessary stability condition. If there are no more than three variables, stability is assured when the two Hurwitz determinants are positive.
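The Routh–Hurwitz test for a three-variable system can be verified against direct eigenvalue computation. The sketch below (using randomly generated Jacobi matrices purely as an illustration) checks the conditions b_0 > 0, b_2 > 0, b_1b_2 − b_0 > 0 for the monic characteristic polynomial λ³ + b_2λ² + b_1λ + b_0.

```python
import numpy as np

def hurwitz_stable_3(J):
    """Routh-Hurwitz test for a 3x3 Jacobi matrix: the characteristic
    polynomial lambda^3 + b2 lambda^2 + b1 lambda + b0 is stable iff
    b0 > 0, b2 > 0 and the second Hurwitz determinant b1*b2 - b0 > 0."""
    _, b2, b1, b0 = np.poly(J)        # monic coefficients, highest first
    return b0 > 0 and b2 > 0 and b1*b2 - b0 > 0

def eig_stable(J):
    """Direct check: all eigenvalues have negative real parts."""
    return bool(np.all(np.linalg.eigvals(J).real < 0))

rng = np.random.default_rng(0)
for _ in range(200):
    J = rng.normal(size=(3, 3))
    assert hurwitz_stable_3(J) == eig_stable(J)
print("Routh-Hurwitz agrees with direct eigenvalue computation")
```

At a Hopf point, b_1b_2 − b_0 passes through zero while b_0, b_2 stay positive, which is how the bifurcation can be located along a branch without computing eigenvalues.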

2.2.2 Stable and Unstable Manifolds

In order to understand the dynamic behavior in a qualitative way, we need not only to establish whether a certain stationary state is stable or not, but also to characterize the motion in its vicinity. If there are m eigenvalues with positive real parts and n − m eigenvalues with negative real parts, the stationary state is said to have an m-dimensional unstable and an (n − m)-dimensional stable manifold. If m = 0, the state in question is called an attractor, as it attracts trajectories in the phase space – at least those starting in its vicinity. All trajectories approach a stationary state along its stable manifold and depart from it along its unstable manifold. If the stationary point is unstable, so is the motion along its stable manifold: the trajectory departs from it when perturbed and is swept away from the vicinity of the unstable state. The motion along both stable and unstable manifolds can be either monotonic or oscillatory, depending on whether the eigenvalues are real or complex. In 2D phase space, there are only five possibilities. The stationary state is either a stable or an unstable node when both eigenvalues are real and, respectively, either

Figure 2.3. Trajectories in the vicinity of a node (left), focus (center) and saddle (right). For the cases when the eigenvalues are real, the direction of one of the eigenvectors is indicated by the dashed line; the other eigenvector is directed along the abscissa, and the corresponding eigenvalue is larger by its absolute value. Arrows pointing toward or away from the stationary point at the origin can be added when nodes and foci are, respectively, stable or unstable


negative or positive. The stationary state is either a stable or an unstable focus when both eigenvalues are complex with, respectively, either negative or positive real parts. For a stable node or focus, the dimension of the stable manifold is two and the dimension of the unstable manifold is zero; for an unstable node or focus, it is the other way around. The remaining possibility is a saddle, realized when the eigenvalues are real and have opposite signs; then the dimension of both stable and unstable manifolds is one. Typical trajectories in the vicinity of a stationary point for all these cases are sketched in Fig. 2.3. The dashed lines, which are, generally, not mutually perpendicular, show the direction of real eigenvectors. Arrows showing the direction of motion are omitted, and therefore the pictures for stable and unstable nodes and foci, which can be converted one into the other by reversing time, are identical. Note that trajectories are, generally, curvilinear even in the vicinity of a node, since evolution is faster in the direction of the eigenvector with the eigenvalue largest in absolute value. In 3D, the above pictures can be trivially extended to a stable or an unstable node-focus and a saddle-node with either a 1D stable and a 2D unstable manifold or the other way around. A nontrivial additional possibility is a saddle-focus with a positive real eigenvalue and complex eigenvalues with negative real parts, or a negative real eigenvalue and complex eigenvalues with positive real parts. We will return to this case, conducive to the emergence of chaotic dynamics, in Sect. 4.3.1. Trajectories departing along the unstable manifold and arriving along the stable manifold may connect far from the stationary state under study, where the linearized equations are no longer applicable. This question will be elucidated when studying global bifurcations in Sect. 3.1.

2.2.3

Exchange of Stability

Standard linear stability analysis, as described above, can be carried out analytically only in exceptionally simple cases. Numerical computations are easy, especially when using Mathematica or other high-level programming environments, but not very revealing. Rather than analyzing stability of a particular stationary state, one should follow the change of stability along a branch of solutions. This is both easier technically and gives far better insight into the global picture of the behavior of a physical system in a wide parametric range. When discussing the construction of branches of stationary states in Sect. 2.1.2, we assumed that one could find such values of parameters that a stationary solution would be unique and easily computable. In a properly formulated physical or engineering problem, this solution should also be stable. As parameters are shifted continuously along a branch of solutions, eigenvalues change continuously as well. Stability can be lost in two ways: either the largest real eigenvalue passes through zero, or the real part of a pair of complex conjugate eigenvalues becomes positive. The first possibility is the only one if the dynamical system has a gradient structure,

i.e., can be written in the form

u̇j = −∂V/∂uj,    (2.29)

derived from some potential V, which is monotonically decreasing with time:

V̇ = Σj (∂V/∂uj) u̇j = −Σj (∂V/∂uj)² ≤ 0.    (2.30)

This is, of course, a very special case, unless there is a single variable, so that V = −∫ f(u) du. With more than one variable, the gradient structure exists in the vicinity of thermodynamic equilibria. Small deviations from equilibrium obey the Onsager reciprocal relations, whereby the matrix ∂fi/∂uj = −∂²V/(∂ui ∂uj) is symmetric. A neural network described by the system (1.67) with a symmetric matrix of coefficients aij would also have a gradient structure. In a large network, there may be a great number of alternative equilibria, and the system will inevitably relax to one of them, though it would not necessarily reach the equilibrium with the lowest energy.

In nonequilibrium systems with variegated dynamic behavior, which are of main interest to us, the dynamics will be, as a rule, nongradient. Nevertheless, the leading eigenvalue with the largest real part may be real in many cases of interest. At a parametric point where this eigenvalue equals zero, the Jacobian determinant, which equals the product of all eigenvalues, vanishes. Thus, at this point the Jacobi matrix fu is singular, i.e., the point must lie on a bifurcation manifold (Sect. 2.1.2). Hence, loss of stability caused by the change of sign of a real eigenvalue can only occur when a bifurcation manifold is crossed.

Another simple argument of a topological nature can be used to establish stability of solutions on branches converging at a bifurcation point. Suppose there are just two attractors, at least one of them being a stable stationary state, and the other one, perhaps, removed to infinity. Trajectories starting from any point in the phase space must approach either attractor at t → ∞; thus, the entire phase space can be divided into two domains, which are the attraction basins of the two states. There should be a boundary between these domains, and the points lying exactly on this boundary are attracted to neither state.
Of course, an infinitesimal deviation from the domain boundary would send the trajectory to one of the alternative attractors, so that the motion along the boundary is unstable. An unstable trajectory, followed precisely, must, however, arrive somewhere. There are two possibilities: either the motion is persistent, so that there is a periodic orbit or a more complex unstable dynamic state lying on the basin boundary, or trajectories terminate at some stationary state, which must be unstable as well. A consequence is that stationary states belonging to two branches of solutions converging at either a fold or a transcritical bifurcation (Sect. 2.1.3) cannot be both stable. If we approach this point along a branch of stable states, the other branch existing at the same values of parameters must be unstable, as shown in Fig. 2.1.
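The gradient structure (2.29), (2.30) and the basin-boundary picture above can be checked numerically. The following sketch (our own minimal illustration, not from the book) uses the double-well potential V = (u1² − 1)²/4 + u2²/2, whose gradient flow has two attractors at u1 = ±1 separated by the stable manifold of the saddle at the origin (the line u1 = 0); V decreases monotonically along every trajectory.

```python
# Gradient flow u̇_j = -∂V/∂u_j for the double-well V = (u1²-1)²/4 + u2²/2.
# Illustrative sketch only; the potential is our choice, not the book's.

def grad_V(u):
    u1, u2 = u
    return [u1 * (u1 * u1 - 1.0), u2]          # ∇V

def V(u):
    u1, u2 = u
    return 0.25 * (u1 * u1 - 1.0) ** 2 + 0.5 * u2 * u2

def rk4_step(u, dt):
    def f(x):
        g = grad_V(x)
        return [-g[0], -g[1]]                  # u̇ = -∇V
    k1 = f(u)
    k2 = f([u[i] + 0.5 * dt * k1[i] for i in range(2)])
    k3 = f([u[i] + 0.5 * dt * k2[i] for i in range(2)])
    k4 = f([u[i] + dt * k3[i] for i in range(2)])
    return [u[i] + dt / 6.0 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
            for i in range(2)]

def relax(u0, steps=4000, dt=0.01):
    u, history = u0, [V(u0)]
    for _ in range(steps):
        u = rk4_step(u, dt)
        history.append(V(u))
    return u, history

uA, histA = relax([0.1, 1.0])    # starts right of the basin boundary u1 = 0
uB, histB = relax([-0.1, 1.0])   # starts left of it

# the potential is monotonically decreasing along both trajectories
assert all(b <= a + 1e-12 for a, b in zip(histA, histA[1:]))
print(round(uA[0], 3), round(uB[0], 3))   # → 1.0 -1.0
```

An infinitesimal shift across the line u1 = 0 switches the outcome between the two minima, exactly as described for the basin boundary in the text.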

2.2.4

Amplitude Equations

Extended time scales. More quantitatively, the dynamics in the vicinity of a bifurcation manifold is explored by introducing a hierarchy of time scales tk rescaled by the factor ε^{−k}, ε ≪ 1, and expanding the time derivative in a series of partial derivatives:

∂/∂t = Σ_{k=0}^{n} ε^k ∂k,    ∂k ≡ ∂/∂tk.    (2.31)

The hierarchy of equations approximating the dynamics near a reference point is obtained as in Sect. 2.1 in consecutive orders of the expansion of (2.1). In this way, the original dynamical system is reduced to a simple universal form that is characteristic of the bifurcation of a particular type and retains qualitative features of the dynamic behavior of the underlying system in an adjacent parametric domain. The reduced equations are called normal forms in the mathematical literature. The term amplitude equation, preferred by physicists, is often extended to include spatial dependence on an extended scale (see Sect. 5.4.1).

Saddle-node bifurcation. Repeating the procedure of Sect. 2.1.2, we see that the dependence on t0 has to be suppressed to ensure solvability of the first-order equation at a bifurcation point. The remaining slow time dependence is carried to the next order by the dependence of the amplitude on the next time scale t1. The second-order equation (2.9) is then complemented by the term −U ∂1a, and its solvability condition becomes

∂1a = κ0 + κ2a².    (2.32)

The coefficient at the time derivative is unity when the eigenvector is normalized, U† · U = 1, and we suppress from this point on all upper indices denoting the order of parametric deviations. It is clear from this equation that only one of the two stationary solutions a = ±(−κ0/κ2)^{1/2} is stable, confirming what we deduced qualitatively in Sect. 2.2.3. One of the emerging states is a node, and the other one is a saddle; hence the term “saddle-node bifurcation”, which conveys information about the dynamics and is therefore preferable to the term “fold bifurcation” used in Sect. 2.1.2. Some trajectories run away to ±∞, depending on the sign of κ2. From the practical point of view, “infinity” means in this context just large deviations from the reference point u = u0.
Since (2.32), by its derivation, is valid only while the amplitude is small, this may correspond, in fact, to a runaway to an alternative attractor far removed from the reference state u0. This attractor cannot be affected by small deviations from the bifurcation point, and keeps attracting some trajectories repelled by the unstable stationary state, but other trajectories, starting at |a| < (−κ0/κ2)^{1/2} or coming from ∓∞, are attracted, following the bifurcation, to the newly created stable stationary solution. The unstable state serves as the boundary between the attraction basins of the stable state and an alternative far-removed attractor. The dynamics would, generally, become fast (i.e., revert to the basic time scale t0) after the trajectory leaves the vicinity of the reference state.
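The behavior of the saddle-node normal form (2.32) can be illustrated by direct integration. In this sketch (our illustration, with the arbitrary choice κ0 = 1, κ2 = −1, so that the stationary states are a = ±1) the unstable state a = −1 acts as the basin boundary: trajectories starting above it converge to a = +1, while those starting below run away.

```python
# Euler integration of the saddle-node normal form ∂₁a = κ₀ + κ₂a²
# with κ₀ = 1, κ₂ = -1 (illustrative values, not from the book).
# Stationary states a = ±√(-κ₀/κ₂) = ±1; κ₂·a_s < 0 only for a_s = +1.

k0, k2 = 1.0, -1.0

def flow(a0, dt=0.001, steps=10000, cap=1e6):
    a = a0
    for _ in range(steps):
        a += dt * (k0 + k2 * a * a)
        if abs(a) > cap:                 # runaway toward an outside attractor
            return float('-inf') if a < 0 else float('inf')
    return a

print(flow(-0.9))   # inside the basin of a = +1: converges to 1
print(flow(0.0))    # likewise converges to 1
print(flow(-1.1))   # beyond the basin boundary a = -1: runaway to -inf
```

In the full underlying system the “runaway” would, as explained above, end at a far-removed attractor rather than at infinity.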


Transcritical bifurcation. In the same way, for a transcritical bifurcation, (2.13) is modified to

∂1a = κ1a + κ2a².    (2.33)

This equation describes the exchange of stability between the two solutions a = 0 and a = −κ1/κ2. The first (“trivial”) one is stable at κ1 < 0, and the second, at κ1 > 0, as can be easily checked by linearizing (2.33) in the vicinity of this solution. Thus, as the coefficient κ1 (which is the only one dependent on parametric deviations) increases past zero, stability is transferred from the “trivial” to the “nontrivial” solution. Also in this case, some trajectories escape to infinity, and the unstable state serves as the basin boundary.

Higher-order bifurcations. Continuing to higher singularities as in Sects. 2.1.4, 2.1.5, the time dependence is slowed down alongside restricting parametric deviations to higher orders. In a generic case, the amplitude equation obtained in the nth order contains the same nth-order polynomial as in (2.20) and the time derivative on the scale slowed down by the factor ε^{−(n−1)}:

∂_{n−1}a = κn a^n + Σ_{j=2}^{n} κ_{n−j} a^{n−j}.    (2.34)

The maximum number of distinct stationary states equals the order n of the polynomial on the r.h.s. In the absence of symmetries, one has to fix n − 1 parameters to arrive at the nth-order form. As in all dynamic equations of this section, the coefficient at the highest-order term depends only on the position of the reference point p0 on the bifurcation manifold, while all other coefficients depend on parametric deviations. By rescaling time, the coefficient at the highest-order term, κn, can be normalized to ±1. With n odd and κn = −1, the amplitude always remains finite, which ensures that the dynamics is contained in the vicinity of the reference point. In this case, there are at most (n+1)/2 stable and (n−1)/2 unstable stationary states. With n even, the maximum number of both stable and unstable stationary states is n/2, and there must be an additional attractor at a → ±∞, i.e., far removed from the reference point. Stable and unstable stationary states appear and disappear in pairs at fold (saddle-node) bifurcations. In particular, equation (2.34) with n = 3 and κ3 < 0, describing the dynamics in the vicinity of a cusp singularity, can be presented in the rescaled form

ȧ = κ0 + κ1a − a³,    (2.35)

where the dot now denotes the slow time derivative ∂/∂t2. At κ0 ≠ 0, a fold bifurcation takes place when the coefficient κ1 increases, creating a pair of stationary states, one stable and one unstable. One branch of solutions is not affected by the bifurcation, and solutions belonging to this branch remain stable (see Fig. 2.2b). With κ1 decreasing, the system residing initially on the lower branch of stable solutions jumps to the upper


branch; this is a first-order transition involving a finite change of the amplitude. A reverse transition to the lower branch of stable solutions at κ1 exceeding the critical value can take place only as a result of a finite perturbation, which would cross the unstable branch and bring the system within the attraction basin of the alternative stable state. In systems with inversion symmetry, κ0 must vanish, and the bifurcation diagram is symmetric, as in Fig. 2.2c. In this case, a second-order transition occurs at the pitchfork bifurcation point κ1 = 0. The trivial state a = 0 persists through the bifurcation point but becomes unstable. The two symmetric stable stationary states exist at κ1 > 0; they have infinitesimal amplitudes when they emerge at the bifurcation point. A symmetric fifth-order amplitude equation with κ3 > 0, κ5 = −1, and vanishing even-order coefficients, obtained on the further extended time scale t4,

ȧ = κ1a + κ3a³ − a⁵,    (2.36)

has the bifurcation diagram shown in Fig. 2.2d. The trivial state loses stability at κ1 > 0 as before, but the pitchfork bifurcation is now subcritical: a pair of nontrivial solutions emerging at this point is unstable and exists at κ1 < 0. An additional feature is a fold bifurcation at κ1 = −κ3²/4, where two pairs of stable and unstable nontrivial stationary states merge. As the bifurcation parameter κ1 increases, two stable stationary states appear in this case via a first-order transition at this point. A perturbation required to bring the system from the trivial state, which is still stable, to one of these states should bring the system beyond one of the branches of unstable steady states, which serve as boundaries between the attraction basins of the three stable states existing in the interval −κ3²/4 < κ1 < 0; the strength of the required perturbation decreases as the pitchfork bifurcation point is approached.

Containment at small amplitudes at odd n still does not mean that (2.34) faithfully reproduces the dynamics of the original system, even in a qualitative sense. Any amplitude equation derived in the vicinity of a bifurcation at zero eigenvalue has a gradient structure; thus, (2.34) can be written as

ȧ = −∂V/∂a,    V = −κn a^{n+1}/(n+1) − Σ_{j=2}^{n} κ_{n−j} a^{n−j+1}/(n−j+1).    (2.37)

The original equations may not possess gradient structure; thus, (2.34) may characterize the dynamics in the vicinity of the bifurcation point only, rather than globally, in the entire phase space, where more complicated dynamics may be observed.
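The hysteresis described above for the cusp normal form (2.35) can be demonstrated numerically. The following sketch (our illustration, with the arbitrary choice κ0 = 0.1) sweeps κ1 down and back up through the bistable region: on the down-sweep the state tracks the lower stable branch until it disappears at the fold κ1 = (27κ0²/4)^{1/3} ≈ 0.41 and jumps to the upper branch, where it remains on the return sweep.

```python
# Hysteresis sweep for ȧ = κ₀ + κ₁a - a³ at fixed κ₀ = 0.1 (illustrative
# values, not from the book): a first-order transition at the fold.

k0 = 0.1

def relax(a, k1, dt=0.01, steps=10000):
    # integrate long enough to settle on a stable stationary state
    for _ in range(steps):
        a += dt * (k0 + k1 * a - a ** 3)
    return a

k1_grid = [0.2 + 0.01 * i for i in range(61)]      # κ₁ from 0.2 to 0.8

down = []                                          # sweep κ₁ downward
a = relax(-1.0, k1_grid[-1])                       # start on the lower branch
for k1 in reversed(k1_grid):
    a = relax(a, k1)
    down.append(a)
down.reverse()                                     # align with k1_grid

up = []                                            # sweep κ₁ back upward
a = down[0]                                        # after the jump: upper branch
for k1 in k1_grid:
    a = relax(a, k1)
    up.append(a)

# at κ₁ = 0.6 the two sweeps disagree: bistability with hysteresis
print(round(down[40], 2), round(up[40], 2))
```

The reverse transition back to the lower branch never occurs on the up-sweep, in accordance with the text: it would require a finite perturbation across the unstable branch.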

2.3

Bifurcations of Periodic Orbits

2.3.1

Hopf Bifurcation

Approaching the bifurcation. At the Hopf bifurcation point, the parametric dependence of stationary solutions u = u0 (p) remains smooth; a linear correction can


be obtained by continuation, as in Sect. 2.1.1. In order to simplify derivations, one can eliminate this trivial parametric dependence by transforming to a new variable ũ = u − u0(p). The resulting dynamic system has the same form as (2.1) but has a persistent trivial solution ũ0 = 0. However, this way of eliminating the parametric dependence is usually inconvenient for practical computations, since analytical expressions of stationary solutions can hardly ever be obtained explicitly. It is more suitable to present the dynamical system in the form

Γ du/dt = f(u; p).    (2.38)

Then one can keep the entire set of parameters p fixed and vary only dynamic parameters entering the capacitance matrix Γ, which do not affect the fixed points; this is sufficient to locate a Hopf bifurcation point if it exists at all at this set of static parameters. Stability is determined by the eigenvalues of the Jacobi matrix J = Γ⁻¹fu. As parameters of the matrix Γ are shifted, a stationary solution of (2.38) may lose stability when the real part of a pair of complex conjugate eigenvalues becomes positive. At a Hopf bifurcation point, there is a pair of purely imaginary eigenvalues ±iω0. The respective eigenvectors are U, satisfying J U = iω0 U, and its complex conjugate Ū. Then the linearized equation

du1/dt = J u1    (2.39)

has an oscillatory solution

u1 = aU e^{iω0t} + āŪ e^{−iω0t}.    (2.40)

Unlike the bifurcation at zero eigenvalue, the amplitude a is complex; its modulus (also called the real amplitude) is ρ = |a|, and its phase is φ = arctan(Im a/Re a).

The amplitude equation. The symmetry of the slow mode determines the structure of the amplitude equation in a crucial way. At the Hopf bifurcation point, where the frequency ω0 differs from zero, the amplitude is complex, and even-order terms are forbidden by the symmetry to phase shifts. The lowest-order amplitude equation involving a single complex amplitude a should then be

ȧ = a(κ1 + κ3|a|²),    (2.41)

where, as in (2.35), the dot denotes the slow time derivative ∂/∂t2, but the coefficients κj are now complex, with κ1 = O(ε²) dependent on O(ε²) parametric deviations from the bifurcation point and κ3 = O(1) defined at the bifurcation point itself. It is often convenient to present this equation in polar form by setting a = ρe^{iθ}, where ρ = |a| is the real amplitude and θ is the phase. Separating the real and imaginary parts yields

ρ̇ = κ1r ρ + κ3r ρ³,    θ̇ = κ1i + κ3i ρ²,    (2.42)

Figure 2.4. Dependence of the real amplitude on a bifurcation parameter near a supercritical (a) and subcritical (b) Hopf bifurcation. The solid and dashed lines show the real amplitudes of stable and unstable solutions, respectively

where the parameters are denoted as κn = κnr + iκni. The dynamics is restricted to small amplitudes when the real part of κ3 is negative; then the Hopf bifurcation is supercritical. The bifurcation diagram, shown in Fig. 2.4a, has the same form as in Fig. 2.2c but with the part below the abscissa cut off. Negative values of ρ make sense as well, since changing the sign is equivalent to shifting the phase by π, but they represent the same solution. The stationary state a = 0 is stable at κ1r < 0, while at κ1r > 0 it is unstable, and the trajectories are attracted to the periodic orbit corresponding to the nontrivial solution of (2.41):

a = ρs e^{iω2t},    ρs² = −κ1r/κ3r,    ω2 = κ1i + κ3i ρs²,    (2.43)

where ω2 is an O(ε²) frequency shift. Rescaling the slow time variable t → |κ1r| t and the real amplitude ρ → √(−κ1r/κ3r) ρ brings (2.42) to the standard form

ρ̇ = ρ − ρ³,    θ̇ = μ − νρ²,    (2.44)

where μ = κ1i/|κ1r|, ν = κ3i/κ3r. Clearly, a periodic orbit, either stable or unstable, may exist, respectively, in the supercritical or subcritical region, only when κ1r and κ3r have opposite signs. The sign of κ1r is easily determined by stability of the stationary state in question, but computing κ3r is a technically cumbersome task. The general algorithm based, as in Sect. 2.2.4, on the multiscale expansion procedure will be described in Sect. 2.3.2, and examples of its implementation, based on symbolic computations in the accompanying Mathematica notebooks, in Sects. 2.4.2, 2.5.3. If κ3r > 0, the bifurcation is subcritical. An unstable periodic orbit then exists at κ1r < 0, i.e., under conditions when the trivial solution is stable. This orbit defines the boundary of the attraction basin of the trivial state. The higher-order amplitude


equation describing dynamics in the vicinity of a subcritical bifurcation should have the form

ȧ = κ1a + κ3|a|²a + κ5|a|⁴a.    (2.45)

In the standard case when the real parts of both κ1 and κ5 are negative and that of κ3 is positive, the bifurcation diagram, shown in Fig. 2.4b, is the same as in Fig. 2.2d with the lower symmetric half cut off. Since (2.45) retains the symmetry to phase shifts, all transitions depend only on the real parts of the coefficients κj. The only difference from solutions of (2.36) is that the saddle-node bifurcation creates a pair of periodic orbits (one stable and one unstable) rather than two symmetric pairs of stationary states.
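The convergence to the limit cycle in the supercritical case is easy to observe by integrating the rescaled polar equations (2.44) directly. The sketch below (our illustration; the values μ = 1, ν = 0.5 are arbitrary) shows that any initial condition with ρ > 0 approaches the periodic orbit ρ = 1.

```python
# Euler integration of the polar Hopf normal form (2.44):
# ρ̇ = ρ - ρ³, θ̇ = μ - νρ². Illustrative parameter values only.

def integrate(rho0, theta0, mu=1.0, nu=0.5, dt=0.001, steps=20000):
    rho, theta = rho0, theta0
    for _ in range(steps):
        rho += dt * (rho - rho ** 3)
        theta += dt * (mu - nu * rho * rho)
    return rho, theta

for rho0 in (0.05, 0.5, 2.0):
    rho, _ = integrate(rho0, 0.0)
    print(rho0, '->', round(rho, 4))     # all converge to ρ = 1
```

On the cycle the phase rotates at the constant rate μ − ν, which is the nonlinear frequency shift of the underlying oscillation.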

2.3.2

Derivation of the Amplitude Equation

Linear analysis. In the vicinity of a Hopf bifurcation point, we set Γ = Γ0 + εΓ1 + ⋯ and introduce a hierarchy of time scales tk as in Sect. 2.2.4. As before, we assume that the bifurcation point is isolated and J has no other eigenvalues with an O(ε) real part. The first-order equation

Lu1 ≡ (fu − Γ0∂0)u1 = 0    (2.46)

has a nontrivial oscillatory solution

u1 = a(t1, t2, …)Φ(t0) + c.c.,    Φ(t0) = e^{iω0t0} U,    (2.47)

with an arbitrary complex amplitude a(t1, t2, …) changing on an extended time scale; U is the eigenvector of Γ0⁻¹fu with the eigenvalue iω0:

Γ0⁻¹fu U = iω0 U.    (2.48)

The vector-function Φ(t0) and its complex conjugate Φ̄(t0) are the two eigenfunctions of the linear operator L defined by (2.46) with the zero eigenvalue. The operator L acts here in the space of 2π/ω0-periodic complex-valued vector-functions with the scalar product defined as

⟨u, v⟩ = (ω0/2π) ∫0^{2π/ω0} ū(t)·v(t) dt.    (2.49)

The adjoint operator is

L† = fu† + Γ0†∂0,    (2.50)

where the dagger sign at a matrix symbol denotes the transposed matrix. The eigenfunctions of L† are Φ†(t0) = e^{iω0t0} U† and its complex conjugate; U† is the eigenvector of the matrix (Γ0†)⁻¹fu† with the eigenvalue iω0; a suitable normalization condition is Ū†Γ0U = 1.

2.3. BIFURCATIONS OF PERIODIC ORBITS

53

The second-order equation is

Lu2 − (Γ0∂1 + Γ1∂0)u1 + ½ fuu[u1, u1] = 0.    (2.51)

The inhomogeneity of this equation contains both the principal harmonic e^{iω0t0}, contributed by the linear terms, and quadratic terms with zero and double frequency. The scalar products of the latter with the eigenfunction Φ†(t0) and its complex conjugate vanish, and the solvability condition of (2.51) is obtained in the form ∂1a = κ1a, where, in view of the above normalization condition,

κ1 = −iω0 Ū†Γ1U.    (2.52)

The amplitude should neither diverge nor decay with time, so the real part of κ1 should vanish. This condition defines a hyperplane in the parametric space of the capacitance matrix tangential to the Hopf bifurcation manifold, and the imaginary part of κ1 defines the respective frequency increment. Since we could have started from a shifted point, this step is superfluous and can be safely eliminated by setting Γ1 = 0. Then the solvability condition reduces to ∂1a = 0, so that the amplitude may evolve only on a still slower scale t2. The second-order function u2 has to be found by solving (2.51), now reduced to the form

(fu − Γ0∂0)u2 + |a|² fuu[U, Ū] + ½ (a² e^{2iω0t0} fuu[U, U] + c.c.) = 0.    (2.53)

The solution of this equation is sought in the form

u2 = |a|² V0 + (a² e^{2iω0t0} V2 + c.c.).    (2.54)

Using this ansatz in (2.53), we compute the real vector V0 and the complex vector V2:

V0 = −fu⁻¹ fuu[U, Ū],    V2 = −½ (fu − 2iω0Γ0)⁻¹ fuu[U, U].    (2.55)

The third-order equation is

Lu3 − (Γ0∂2 + Γ2∂0)u1 + fuu[u1, u2] + (1/6) fuuu[u1, u1, u1] = 0.    (2.56)

The amplitude equation is obtained as the solvability condition of this equation. Only the part of the inhomogeneity containing the principal harmonic contributes to the solvability condition, which should take the form coinciding with (2.41), where the dot now stands for ∂2. The coefficient at the cubic term is computed as

κ3 = Ū† (½ fuuu[U, U, Ū] + fuu[U, V0] + fuu[Ū, V2]).

Practical computations involve quite a lot of algebra; some examples are given in the accompanying Mathematica notebooks.

2.3.3

Instabilities of Periodic Orbits

Periodic orbits emerging at a Hopf bifurcation may, in turn, lose stability at secondary bifurcation points. Stability of a T-periodic orbit u0(t) = u0(t + T) to infinitesimal perturbations is determined by linearizing the underlying equation in the vicinity of this solution. Consider a general dynamical system (2.1) and look for solutions in the form u(t) = u0(t) + v(t), where v(t) is an infinitesimally small deviation from the periodic solution u0(t). The linearized system for v(t) is

∂t v = F(t)v.    (2.57)

The fundamental solution of (2.57) is a matrix A(t), such that v(t) = A(t)v(0) for any initial condition v(0). Stability of the T-periodic solution u0(t) is determined by the eigenvalues Λj of the monodromy matrix A(T), called Floquet multipliers. The periodic orbit is stable when all Λj do not exceed unity in absolute value. Alternatively, solutions of (2.57) can be sought in the form v(t) = e^{λj t} w(t), where w(t) is T-periodic and λj = T⁻¹ ln Λj are Floquet exponents, which should all have nonpositive real parts for a stable orbit. Among the Floquet multipliers, there is always one equal to unity, which reflects neutral stability to a shift along the periodic orbit. The respective eigenvector is v0 = u̇0(t). One can see, indeed, that the right-hand side of (2.57) with v = u̇0(t) equals the time derivative of f(u0(t)), and, therefore, integrating it over the period yields v0(T) − v0(0) = f(u0(T)) − f(u0(0)) = 0. Additional neutrally stable eigenvectors may exist in the presence of conservation laws. All other Floquet multipliers should lie within the unit circle in the complex plane when the periodic orbit is stable. Varieties of bifurcations. Stability may be lost in three distinct ways, depending on whether the value of the multiplier with the largest absolute value at the critical point of marginal stability is 1, −1, or a complex number Λ = e^{iω}. As in the case of stationary states, one can expect that the growth of perturbations in the leading mode would lead (provided it saturates at a certain finite amplitude) to a new orbit of a kind indicated by the instability mode. At Λ = 1 (i.e., λ = 0), this might be another periodic orbit; most commonly, this indicates a saddle-node bifurcation of periodic orbits, where two orbits, one stable and one unstable, collide and disappear. A transition at Λ = −1 (i.e., λT = iπ), sometimes called a flip bifurcation, indicates period doubling.
A complex Floquet multiplier on the unit circle signals the emergence of a new frequency. The expected new state is a periodic orbit with the period 2nT if ω/π = m/n is rational, or a quasiperiodic orbit covering a torus if ω/π is irrational. Quasiperiodic orbits are usually frequency-locked at larger amplitudes, turning into orbits with a finite, though enlarged, period. As a rule, Floquet exponents or multipliers and, consequently, locations of bifurcation points can be determined only numerically. In some special cases, analytical results can be attained using scale separation, as will be done in Sect. 3.4.3.
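A typical numerical procedure integrates the variational equation (2.57) along the orbit over one period to obtain the monodromy matrix A(T). The sketch below (our own illustration) does this for the Hopf normal form (2.44) rewritten in Cartesian coordinates with μ = 1, ν = 0, i.e., ẋ = x − y − x(x² + y²), ẏ = x + y − y(x² + y²), whose limit cycle is the unit circle with T = 2π and exact multipliers 1 and e^{−4π}.

```python
# Floquet multipliers via the monodromy matrix A(T) for a planar limit
# cycle (the normal form (2.44) in Cartesian form; illustrative example).

import math

def f(s):
    x, y = s
    r2 = x * x + y * y
    return [x - y - x * r2, x + y - y * r2]

def jac(s):                              # Jacobian F(t) evaluated on the orbit
    x, y = s
    return [[1 - 3 * x * x - y * y, -1 - 2 * x * y],
            [1 - 2 * x * y, 1 - x * x - 3 * y * y]]

def rhs(z):
    # z = [x, y, a11, a12, a21, a22]: orbit plus variational matrix, Ṁ = F(t)M
    s, M = z[:2], [[z[2], z[3]], [z[4], z[5]]]
    ds = f(s)
    Jm = jac(s)
    dM = [[sum(Jm[i][k] * M[k][j] for k in range(2)) for j in range(2)]
          for i in range(2)]
    return ds + [dM[0][0], dM[0][1], dM[1][0], dM[1][1]]

def rk4(z, dt):
    k1 = rhs(z)
    k2 = rhs([z[i] + 0.5 * dt * k1[i] for i in range(6)])
    k3 = rhs([z[i] + 0.5 * dt * k2[i] for i in range(6)])
    k4 = rhs([z[i] + dt * k3[i] for i in range(6)])
    return [z[i] + dt / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
            for i in range(6)]

T, n = 2 * math.pi, 20000
z = [1.0, 0.0, 1.0, 0.0, 0.0, 1.0]       # start on the cycle, A(0) = identity
for _ in range(n):
    z = rk4(z, T / n)

a11, a12, a21, a22 = z[2:]
tr, det = a11 + a22, a11 * a22 - a12 * a21
disc = math.sqrt(tr * tr / 4 - det)       # eigenvalues of the 2x2 monodromy
mults = sorted([tr / 2 + disc, tr / 2 - disc])
print(mults)     # ≈ [e^{-4π}, 1]: the trivial multiplier 1 and a stable one
```

The multiplier equal to unity recovers the neutral shift along the orbit; the second multiplier lies deep inside the unit circle, confirming stability.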


Linearization of the amplitude equation. The vicinity of a Hopf bifurcation is the only location where stability of periodic orbits can be determined easily, since, provided the Hopf bifurcation is isolated, as we have assumed, stability of solutions of the full underlying system is totally determined by stability of solutions of the amplitude equation. Indeed, all perturbations projecting on the eigenvectors of fu other than U, Ū should rapidly decay, since the real parts of the respective eigenvalues are negative. The remaining perturbations can be presented in the form

v = ã e^{iΩt} U + c.c.,    (2.58)

where ã is the perturbation of the nontrivial stationary solution (2.43) of the cubic equation (2.41) or of a respective solution of the quintic equation (2.45). Thus, stability analysis of the full underlying system reduces in the vicinity of a Hopf bifurcation to the far easier task of stability analysis of an applicable amplitude equation. In a standard way, the latter has to be linearized after setting a = as + ṽ, where ṽ is presumed to be infinitesimally small. Recall that the amplitude equation is actually a shorthand for a system of two equations for the amplitude a and its complex conjugate ā; accordingly, its linearization has the form (2.57) with v = {ṽ, ṽ̄} and the matrix F, which has, for the cubic equation (2.41), the form

F = [ κ1 + 2κ3ρs²   κ3ρs² e^{2iΩt} ; κ̄3ρs² e^{−2iΩt}   κ̄1 + 2κ̄3ρs² ].    (2.59)

This expression retains the periodic form of a general Floquet matrix in (2.57), though it excludes the basic oscillation frequency ω and only retains the nonlinear frequency shift Ω. A time-dependent form is inconvenient for the analysis, but (2.59) can be transformed to a constant matrix by substituting in (2.57) v = w e^{iΩt}. Using here (2.43) yields an utterly simple result:

F = −κ1r [ 1  1 ; 1  1 ].    (2.60)

The eigenvalues of this matrix, i.e., Floquet exponents, are 0 and −2κ1r; thus, as expected, the solution is stable when the bifurcation is supercritical. The zero eigenvalue corresponds to neutral stability to phase shifts. It is still more convenient to use the polar form of the amplitude equation (2.42), which makes it immediately clear that the system is neutrally stable to infinitesimal phase shifts, while stability to perturbations of the real amplitude is the same as in the amplitude equation in the vicinity of a cusp singularity (2.35) or (2.36).


2.4

Example: Exothermic Reaction

2.4.1

Bifurcation of Stationary States

Basic equations. A generic example, displaying both multiplicity of stationary states and periodic orbits and distinguished by strongly nonlinear dynamics, is the exothermic reaction model (1.53), (1.54) with ε = 0:

u̇ = e^v(1 − u) − μu,    (2.61)

γv̇ = ν e^v(1 − u) − μv.    (2.62)

The computations of this section are displayed in full detail in the accompanying Mathematica notebook Exothermic.nb. The stationary states of the system (2.61), (2.62) are determined quite easily. Since there is only one nonlinear term, appearing in both equations, the stationary system can be simplified using the linear relationship u = v/ν between the two variables: the conversion u and the reduced temperature v. The single remaining equation is

(ν − v)e^v − μv = 0.    (2.63)

Branches of solutions cannot be expressed explicitly but can be easily drawn by solving the stationary equation with respect to the parameter μ (Fig. 2.5a). The relation defining the extrema of (2.63) is obtained by differentiation:

(ν − v − 1)e^v − μ = 0.    (2.64)

Solving (2.63), (2.64) with respect to the parameters gives the equation of the bifurcation line in the parametric plane μ, ν in the implicit form:

μ = e^v/(v − 1),    ν = v²/(v − 1).    (2.65)

Figure 2.5. (a): Branches of solutions of (2.63). (b): The locus of fold bifurcation in the parametric plane μ, ν terminating in a cusp at ν = 4, μ = e²

The bifurcation locus in the parametric plane is shown in Fig. 2.5b. The two branches join at the cusp point μ = e², ν = 4, which corresponds to v = 2.

Bifurcation expansion at zero eigenvalue. Alternatively, in accordance with the general procedure, the condition of a fold bifurcation is obtained by equating to zero the determinant of the Jacobi matrix. Eliminating u and μ as before, this matrix is expressed as

fu = [ −(ν/v)e^v   e^v(1 − v/ν) ; −νe^v   −e^v(v² − (ν+1)v + ν)/v ].    (2.66)

The determinant of this matrix is indeed zero when ν is given by (2.65). The normalized eigenvectors of this matrix and of its transpose, used in the bifurcation expansion of Sect. 2.1.3, are

U = ((v−1)/v², 1),    U† = (−v², v),    U†·U = 1.    (2.67)

The amplitude equation is obtained in the form (2.32) with the coefficients

κ0 = ν2 e^v − μ2 v,    κ2 = −e^v(v − 2)/(2(v − 1)),    (2.68)

where μ2, ν2 are O(ε²) deviations of the respective parameters. Take note that v > 2 on the lower branch of the fold locus in Fig. 2.5b and 1 < v < 2 on the upper branch; therefore κ2 is negative on the lower and positive on the upper branch. The two stationary solutions of the amplitude equation, as = ±√(−κ0/κ2), exist when these coefficients have opposite signs. If, say, the fold locus is crossed by changing μ only, the multiplicity region corresponds therefore to μ2 > 0 on the upper and μ2 < 0 on the lower branch, in accordance with what we see in Fig. 2.5b. The linear stability criterion for the emerging solutions is κ2 as < 0. Accordingly, of the two solutions emerging when the upper branch is crossed, the one with as > 0, corresponding to the higher stationary state of the underlying system, is stable, while the one with as < 0 is unstable. Of the two solutions emerging when the lower branch is crossed, the one with as < 0, corresponding to the lower stationary state, is stable, while the one with as > 0 corresponds to the unstable intermediate state. The third-order amplitude equation applicable in the vicinity of the cusp singularity is obtained in the form (2.35) with the coefficients

κ0 = −μ3,    κ1 = ν2 e²/4,    κ3 = −e²/6.    (2.69)

CHAPTER 2. LOCAL BIFURCATIONS


The cusp singularity is entered along the common tangent to the two branches of the fold locus meeting at this point. The parametric deviations along this line are tied by the relation μ2 = ν2 e²/v. The third-order deviation μ3 represents the deviation from this line. The last coefficient is negative, which warrants that the dynamics is indeed confined to small amplitudes.
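The fold condition and the eigenvectors above can be checked numerically. A minimal sketch in Python (the book's own computations use the accompanying Mathematica notebook; the matrix below is the one displayed in (2.66)):

```python
def jac(v, nu):
    # Jacobi matrix (2.66) at a stationary state parametrized by v,
    # with u and mu eliminated through the stationary relations
    return [[-nu/(nu - v),   v/nu],
            [-nu*v/(nu - v), v - 1.0]]

residuals = []
for v in (1.5, 3.0, 5.0):
    nu = v*v/(v - 1.0)                  # fold locus (2.65)
    J = jac(v, nu)
    det = J[0][0]*J[1][1] - J[0][1]*J[1][0]
    U = ((v - 1.0)/v**2, 1.0)           # zero eigenvector of (2.67)
    Ud = (-v*v, v)                      # zero eigenvector of the transpose
    residuals += [det,
                  J[0][0]*U[0] + J[0][1]*U[1],
                  J[1][0]*U[0] + J[1][1]*U[1],
                  Ud[0]*U[0] + Ud[1]*U[1] - 1.0]
print(max(abs(r) for r in residuals))   # all residuals vanish up to rounding
```

The determinant and the two eigenvector conditions vanish identically along the locus (2.65), confirming the expansion's starting point.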

2.4.2 Hopf Bifurcation

The stationary states depend on the two parameters μ and ν only, as the parameter γ falls out. The dynamic significance of this parameter lies in the fact that the characteristic thermal response time is proportional to γ. As temperature in an exothermic reaction is an "autocatalytic" variable accelerating its own growth, oscillations are likely to appear at small γ, when reactant depletion is too slow to prevent thermal runaway.

The Hopf bifurcation condition is determined by the vanishing trace of the Jacobi matrix J = Γ⁻¹ f_u, where Γ is the diagonal matrix with the elements 1, γ. It is convenient to parametrize the locus of the Hopf bifurcation by the stationary value of v, which we denote in this section by the capital letter V to distinguish it from the respective variable, and to compute the value of the heat capacitance parameter γ corresponding to the bifurcation point at a given value of ν:

γ = (V − 1)(1 − V/ν).   (2.70)

Clearly, the Hopf bifurcation never occurs at V ≤ 1; the more restrictive lower bound is V > (1 − γ)⁻¹. This limiting value corresponds to the locus of the double zero eigenvalue, where the determinant of J vanishes alongside its trace. Also excluded is the nonphysical region V > ν, which corresponds to conversion exceeding unity. The additional condition is that the determinant of J, which is the product of the two eigenvalues ±iω, should be positive. This restricts possible values of V by the inequalities

ν > V > ν/2 + √(ν²/4 − ν)   or   ν/2 − √(ν²/4 − ν) > V > (1 − γ)⁻¹.   (2.71)

In the multiplicity region ν > 4, this condition excludes intermediate values where the stationary state is a saddle; V must lie within either of the two ranges corresponding, respectively, to the upper and lower solution branches. One can also check that, under the specified restrictions, γ is always less than 1. The values of γ at the Hopf bifurcation are plotted against ν and V in Fig. 2.6a. Fig. 2.6b shows the loci of the Hopf bifurcation in the parametric plane μ, ν for several chosen values of γ. At γ = 1/2, the Hopf line starts exactly at the cusp point. In the region within the loop of the Hopf curve outside the cusped region, there is a single stationary state, which is unstable. At γ < 1/2, the Hopf locus penetrates into the cusped region, as shown in the blowup in Fig. 2.6c. In the strip between the locus of the Hopf bifurcation and the lower boundary of the cusped region, there are

2.4. EXAMPLE: EXOTHERMIC REACTION


Figure 2.6. (a): The values of γ at the Hopf bifurcation as functions of ν and V (indicated at the respective contour lines). There are no Hopf bifurcations in the white triangle. (b): Loci of the Hopf bifurcation in the parametric plane μ, ν for several values of γ (from above to below in the right part: 0.3, 0.4, 0.5, and 0.6); the cusped region of multiple stationary states is also shown. (c): Blowup at γ = 0.35. (d): Blowup at γ = 0.582

one stable (lower) and two unstable stationary states: the upper state is an unstable focus, and the intermediate one is a saddle. Finally, in the small triangular region above the Hopf locus both the upper and lower stationary states are stable, and the intermediate one is a saddle.
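The Hopf loci of Fig. 2.6b can be traced by parametrizing them with the stationary temperature V: inverting (2.70) gives ν, after which μ follows from the stationary relation. A hedged Python sketch — the relation μ = e^V(ν − V)/V is the stationary-state condition of this model reconstructed from the cusp data (μ = e², ν = 4, V = 2), not an equation displayed in this section:

```python
from math import exp

def hopf_point(V, gamma):
    # invert (2.70) for nu at given V and gamma, then recover mu from the
    # assumed stationary relation mu = e^V (nu - V)/V
    nu = V*(V - 1.0)/(V - 1.0 - gamma)
    mu = exp(V)*(nu - V)/V
    return mu, nu

# at gamma = 1/2 the Hopf line starts exactly at the cusp mu = e^2, nu = 4
mu, nu = hopf_point(2.0, 0.5)
print(mu, nu)
```

Sweeping V over the admissible ranges (2.71) at fixed γ reproduces curves of the kind plotted in Fig. 2.6b.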


Figure 2.7. (a): The dependence of the frequency of orbits emerging at the Hopf bifurcation on γ and V , as indicated by the numbers at the contour lines. (b): The regions of the supercritical (darkly shaded) and subcritical (lightly shaded) Hopf bifurcation in the parametric plane γ, V

The intersection of the Hopf and fold lines bears no particular significance, since the Hopf bifurcation takes place on the branch of solutions that does not change when the fold line is crossed and a new pair of solutions (lower and intermediate) appears. The Hopf line continuing into the cusped region is the locus of the Hopf bifurcation on the upper solution branch; the lower state never suffers an oscillatory instability at this and larger values of γ. At γ > 1/2, both the lower and the upper solutions are stable in the immediate vicinity of the cusp. The region where there are no stable stationary states disappears altogether when the Hopf locus becomes tangent to the fold bifurcation locus at some point. For this particular set of parameters, this happens at γ = 0.582. The Hopf locus at this value of γ is shown on a large scale in Fig. 2.6d.

The dependence of the frequency of the orbit emerging at the Hopf bifurcation on γ and V is shown in Fig. 2.7a. The bounding curve on the right is the locus of the double zero eigenvalue where the frequency vanishes. A long algebraic computation along the lines of Sect. 2.3.2, detailed in the Mathematica notebook, arrives at the following expression for the real part of the coefficient at the cubic term in the amplitude


equation (2.41), which has to be split into parts to be displayed within the page width:

κ3r = e^V (6 + 12γ + Σ_{n=1..4} q_n Vⁿ) / { 2V (1 − γ)(1 + γ − V)[4(1 − γ)V + 3] },   (2.72)

q1 = 24γ² − 23(1 + γ),   q2 = 12γ³ − 37γ² − 6γ + 38,
q3 = −8γ³ + 2γ² + 35γ − 29,   q4 = 8(1 − γ)².   (2.73)

The regions of the supercritical (darkly shaded) and subcritical (lightly shaded) Hopf bifurcation in the parametric plane γ, V are shown in Fig. 2.7b. We see that the boundary following from this long formula is almost rectilinear, and the bifurcation is subcritical only at low stationary temperatures.
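The formula can be explored numerically. A quick consistency check is that the numerator of (2.72) vanishes on the double-zero locus V = (1 − γ)⁻¹, where the Hopf frequency itself vanishes. A Python sketch, transcribing (2.72)–(2.73) as printed:

```python
from math import exp

def q_coeffs(g):
    # the polynomial coefficients (2.73)
    return (24*g**2 - 23*(1 + g),
            12*g**3 - 37*g**2 - 6*g + 38,
            -8*g**3 + 2*g**2 + 35*g - 29,
            8*(1 - g)**2)

def kappa3r(g, V):
    # real part of the cubic coefficient, Eq. (2.72)
    s = 6 + 12*g + sum(q*V**(n + 1) for n, q in enumerate(q_coeffs(g)))
    return exp(V)*s/(2*V*(1 - g)*(1 + g - V)*(4*(1 - g)*V + 3))

# the numerator vanishes identically at the double-zero locus V = 1/(1 - gamma)
for g in (0.25, 0.4, 0.5):
    V = 1.0/(1.0 - g)
    s = 6 + 12*g + sum(q*V**(n + 1) for n, q in enumerate(q_coeffs(g)))
    print(g, s)   # zero up to rounding
```

Evaluating kappa3r on a grid in the (γ, V) plane and tracking its sign change reproduces the boundary between the shaded regions of Fig. 2.7b.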

2.4.3 Branches of Periodic Orbits

When there is a single unstable stationary state, the reactor must operate in the oscillatory regime. An example of the computation in the supercritical region from the accompanying Mathematica notebook Exothermic.nb is shown in Fig. 2.8a. The amplitude of the nested orbits increases in this case as the capacitance parameter γ decreases away from a supercritical Hopf bifurcation point.

Figure 2.8. (a): Stable periodic orbits in the phase plane (u, v) at ν = 3.5, μ = 5. The amplitude of the nested orbits increases as the capacitance parameter γ decreases from γ = 0.4 to γ = 0.15 with the decrement 0.05. (b): Stable (solid lines) and unstable (dashed lines) periodic orbits surrounding a stable fixed point at ν = 4, μ = 6 for the values of γ between the subcritical Hopf bifurcation and the saddle-node bifurcation of periodic orbits. In both plots, the stationary state is placed at the intersection of the coordinate axes

Figure 2.9. The dependence of the oscillation period on γ at ν = 3.5, μ = 5 following a supercritical Hopf bifurcation (a) and in the supercritical region γ < γ0 at ν = 4, μ = 6, near a subcritical bifurcation (b). The red dot marks the period 2π/ω0 at the Hopf bifurcation point

Already at γ = 0.3, nearly total conversion is reached in the hot phase of the cycle. The oscillation period, on the contrary, decreases in parallel with the growth of the amplitude. The dependence of the period on γ, shown in Fig. 2.9a, is almost linear.

Unstable orbits appearing at a subcritical Hopf bifurcation point can be obtained in the same way by reversing the time direction. This works in 2D but not in higher phase-space dimensions. Initial conditions should lie within the stable orbit to avoid runaway to infinity. Nested stable and unstable orbits are shown together in Fig. 2.8b. As γ increases from its value at the Hopf bifurcation point γ0 = 0.42665, the amplitude of the unstable orbits grows, while that of the stable orbits decreases, till they merge at the fold bifurcation of periodic orbits lying just above, at γ ≈ γ0 + 0.0026; no periodic orbits exist beyond this point. The period grows with the amplitude along this nested sequence, but it differs only slightly from that at the Hopf bifurcation point. Both the amplitude of these orbits and the interval of their existence are minuscule, as is the attraction basin of the stable steady state, which is bounded by the surrounding unstable orbit. In the supercritical region γ < γ0, where the stationary state becomes unstable, the oscillation period of stable orbits passes a maximum, as shown in Fig. 2.9b. We postpone till Sect. 3.2 the case when both multiple stationary states and periodic orbits exist and a plethora of global bifurcations is observed.

2.5 Example: Population Dynamics

2.5.1 Prey–Predator Models

The Hamiltonian structure of the Lotka–Volterra system (1.62) is lost when some realistic features are added; then, instead of a continuous family of closed orbits, only a particular periodic orbit may be selected. As


a simple example, we will consider in parallel two models differing from the classical one by accounting for prey congestion through an additional term setting the maximum prey population in the absence of predation to unity, and by modifying the predation term by the function Ψ modeling competition among predators. The two models to be analyzed here differ only by this function but, although they look similar, their dynamics differs in a substantial way. These extended prey–predator systems are described by two dimensionless equations, where both u and v are scaled by the prey's maximum population:

u̇ = u (1 − u − μvΨ),   γv̇ = −v (1 − μuΨ),   (2.74)

where, for the two models,

Ψ1(u) = 1/(ν + u),   Ψ2(u, v) = 1/(νv + u).   (2.75)

The parameter μ characterizes the predation rate, and the parameter ν affects its relative decrease due to competition among predators. The dynamics is controlled by the capacitance parameter γ, which can be interpreted as the ratio of the effective proliferation rate of the prey to that of the predator. In both cases, the stationary states can be determined explicitly by setting the right-hand sides of (2.74) to zero. Unlike a generic system, the persistent state with the predator extinct exists here: u = 1, v = 0. In model 1, there is also the trivial solution u = v = 0, but it is indefinite in model 2. The nontrivial state of model 1 is

u = ν/(μ − 1),   v = ν(μ − ν − 1)/(μ − 1)²,   (2.76)

which is positive and finite, provided μ > ν + 1. The nontrivial state of model 2 is

u = (1 + ν − μ)/ν,   v = (μ − 1)(1 + ν − μ)/ν².

(2.77)
Figure 2.10. Dependence of the variables u (upper curve) and v (lower curve) on μ at ν = 1 for models 1 (a) and 2 (b)


Its existence is limited by the inequalities 1 < μ < ν + 1. Typical plots of u, v vs. μ at ν = 1 are shown in Fig. 2.10; we can see there how dissimilar the two models are.
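The explicit stationary states (2.76) and (2.77) are easy to verify by substituting them back into the right-hand sides of (2.74); a Python sketch:

```python
def rhs1(u, v, mu, nu):
    # model 1: Psi = 1/(nu + u)
    psi = 1.0/(nu + u)
    return u*(1.0 - u - mu*v*psi), -v*(1.0 - mu*u*psi)

def rhs2(u, v, mu, nu):
    # model 2: Psi = 1/(nu*v + u)
    psi = 1.0/(nu*v + u)
    return u*(1.0 - u - mu*v*psi), -v*(1.0 - mu*u*psi)

def steady1(mu, nu):                 # Eq. (2.76); requires mu > nu + 1
    return nu/(mu - 1.0), nu*(mu - nu - 1.0)/(mu - 1.0)**2

def steady2(mu, nu):                 # Eq. (2.77); requires 1 < mu < nu + 1
    return (1.0 + nu - mu)/nu, (mu - 1.0)*(1.0 + nu - mu)/nu**2

u1, v1 = steady1(3.0, 1.0)
u2, v2 = steady2(1.5, 1.0)
print(rhs1(u1, v1, 3.0, 1.0), rhs2(u2, v2, 1.5, 1.0))   # both vanish
```

Both residuals vanish to machine precision, confirming the closed-form states within their respective existence ranges.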

2.5.2 Stability and Bifurcations

Computing the Jacobi matrix, it is easy to see that in both models the state with the predators extinct, u = 1, v = 0, is stable when the nontrivial state does not exist. As μ approaches the limit μ = ν + 1 from either side, the branch of nontrivial solutions smoothly joins the state with the maximum prey population and the predator extinct; thus, only one stable stationary state exists in both models.

In both models, the nontrivial solution can undergo a Hopf bifurcation, which takes place when the trace of the Jacobi matrix vanishes. The bifurcation condition in model 1 is

μ = (1 + ν)/(1 − ν),   ν < 1.   (2.78)

A nongeneric feature of this model is that the bifurcation locus does not depend on the dynamic parameter γ. For model 2, this peculiarity disappears, and the Hopf bifurcation condition is

γ = ν(μ − 1)/(μ² − μν − 1),   (2.79)

which is positive, within the existence range of the nontrivial state, provided μ(μ − ν) > 1. In model 1, the bifurcation is supercritical at γ > 1, i.e., when the predator's dynamics is slower, but becomes subcritical when the predator's dynamics accelerates. At a fixed γ < 1, the bifurcation becomes subcritical as competition among predators intensifies, so that ν increases toward the limiting value ν = 1. In model 2, the bifurcation is subcritical near the upper bound of the oscillation domain in Fig. 2.11b.

Typical periodic orbits in the supercritical region are shown in Fig. 2.13a,b. In model 1, the amplitude increases at fixed ν, γ as μ goes up from its critical value at the bifurcation point, and the period grows as well, as shown in the inset of Fig. 2.13a. In model 2, both the amplitude and the period increase at fixed ν, μ with γ above the bifurcation point, as shown in Fig. 2.13b. In the subcritical region, paired stable and unstable orbits, such as those shown in Fig. 2.13c,d, have a very small amplitude and exist in a tiny parametric domain near the Hopf bifurcation locus.
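The Hopf conditions (2.78) and (2.79) can be cross-checked against the bifurcation points quoted in the caption of Fig. 2.13; a Python sketch:

```python
def hopf_mu_model1(nu):
    # model 1, Eq. (2.78); requires nu < 1
    return (1.0 + nu)/(1.0 - nu)

def hopf_gamma_model2(mu, nu):
    # model 2, Eq. (2.79)
    return nu*(mu - 1.0)/(mu*(mu - nu) - 1.0)

print(hopf_mu_model1(0.7))           # close to 5.66667, cf. Fig. 2.13c
print(hopf_gamma_model2(3.35, 3.0))  # close to 40.8696, cf. Fig. 2.13d
print(hopf_gamma_model2(3.5, 3.0))   # 10, the lowest gamma of Fig. 2.13b
```

Both formulas reproduce the critical values μ ≈ 5.66667 and γ ≈ 40.87 quoted for the two models.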


Figure 2.12. The supercritical and subcritical domains, shaded dark and light, respectively, for models 1 (a) and 2 (b)

Figure 2.13. (a,b): Nested periodic orbits in the supercritical domain. Model 1 (a): the amplitude increases as μ goes up from 2 to 4.5 with the increment 0.5 at fixed ν = 0.3, γ = 0.5. The inset shows the dependence of the period on μ. Model 2 (b): the amplitude increases as γ goes up from 10 to 13 with the increment 0.5 at fixed ν = 3, μ = 3.5. The inset shows the dependence of the period on γ. (c,d): Stable (solid line) and unstable (dashed line) periodic orbits in the subcritical domain close to the Hopf bifurcation locus. The parameters: Model 1 (c) : ν = 0.7, γ = 0.2, μ = 5.666, close to the Hopf bifurcation point at μ ≈ 5.66667. Model 2 (d): ν = 3, μ = 3.35, γ = 40.86, close to the Hopf bifurcation point at γ ≈ 40.87


Further Reading

Thom (1975), dramatizing bifurcations as "catastrophes", is entertaining in places, but taking Arnold (1984) as an antidote is recommended. Anosov and Arnold (1988) is concise and rigorous. The mathematicians' approach to normal forms is well presented by Guckenheimer and Holmes (1983). A more detailed treatment of various bifurcations, dressed in theorems, is found in Golubitsky and Schaeffer (1985). The two alternative (or complementary) notions of stability go back to Poincaré (1885) and Lyapunov (1892). The classical source on oscillations is Andronov, Vitt, and Khaikin (1937, 1966). Kuznetsov (1995) is an advanced treatise coming closer to practical applications than the bulk of mathematical literature on the subject. The two prey–predator models are due, respectively, to May (1974a) and Kuang and Beretta (1998).

3 Global Bifurcations

Chapter Overview

Global analysis of a dynamical system aims at detecting all kinds of bifurcations that affect either the number of attractors or their basin boundaries. We start by approaching this problem in a qualitative way, looking for all possible transitions among stationary states and periodic orbits. This qualitative picture is supported by numerics, continuing the study of the exothermic chemical reaction in the accompanying Mathematica notebook. Following this, we explore ways to approach global dynamics analytically. The first is the study of dynamics in the vicinity of a degenerate bifurcation at the double-zero eigenvalue. The second is the use of singular perturbations in systems with separated time scales, in particular, almost Hamiltonian systems.

Global dynamics involves new kinds of transitions, in addition to the three generic (codimension-one) bifurcations that can be detected by local analysis. An unstable periodic orbit emerging at a subcritical Hopf bifurcation may undergo a saddle-loop bifurcation, which occurs when it collides with a saddle point, thereby forming an infinite-period homoclinic trajectory going out of the saddle point along its unstable manifold and returning along its stable manifold. A sniper bifurcation (not related to sharpshooters) occurs when a pair of stationary states – a saddle and a node – emerges on a periodic orbit; its period, as well as that of a homoclinic orbit, diverges at the bifurcation point. A less known (and rare) phenomenon is a canard (nothing to do with aquatic birds but an allusion to fake news).

Complete global analysis is possible for two-variable systems, which have only two kinds of attractors: fixed points and periodic orbits. What happens in higher dimensions? The last section is a brief excursion there, featuring three kinds of bifurcations that cannot occur in 2D, but we stop short of chaotic dynamics, the subject of the next chapter.
Fortunately for engineers, not all higher-dimensional systems realize the full potential of complexity, and their behavior largely remains benign.



3.1 Topology of Bifurcations

3.1.1 More Ways to Create and Break Periodic Orbits

The overall picture. We have seen in Sect. 2.3 that a periodic orbit, either stable or unstable, can emerge at a Hopf bifurcation and disappear when colliding with another orbit at a saddle-node bifurcation of periodic orbits (snp). At birth, it has a finite period and zero length; at death, both period and length are finite. Two more ways to open up periodic orbits involve saddle stationary states and take place through the extension of their period to infinity. These are the saddle-loop and sniper bifurcations.

The scheme of possible bifurcations involving a stable stationary state and a saddle is sketched in Fig. 3.1. The symbols at the nodes of this diagram correspond to the number of periodic orbits surrounding a stationary state; if this state or the outermost orbit is unstable, an overline is added. Some links, representing bifurcations, are familiar from the preceding chapter: the saddle-node bifurcation (sn), supercritical (h+) and subcritical (h−) Hopf bifurcations, and the saddle-node bifurcation of periodic orbits (snp). We postpone the discussion of bifurcations in a system with three stationary states till Sect. 3.1.2 and just denote here the pair of a saddle and an additional attractor by an x. The other two links are global bifurcations: saddle-loop (sl) and sniper (sni).

Figure 3.1. The scheme of bifurcations between stationary states and periodic orbits

Starting from a single stable stationary state 0, the link h+ brings us to the configuration 1, where the stationary state is unstable and is surrounded by a stable periodic orbit. The link h− from this configuration represents a subcritical Hopf bifurcation, leading to the configuration 2. This restores the stability of the stationary state and creates an unstable periodic orbit that forms the boundary between the attraction basins of the stationary state and the stable orbit. The link snp corresponds to the collision of the stable and unstable orbits, bringing us back from the configuration 2 to 0.
Other bifurcations, which first appear in this section, involve interaction with another stationary state – a saddle. A saddle-node pair of stationary states can emerge “out of thin air” via a saddle-node (sn) bifurcation, complementing a single stable stationary state in the configuration 0, 1, or 2. In the simplest case, it converts the


Figure 3.2. Parametric evolution of a periodic orbit through a saddle-loop (a) and sniper (b) bifurcations.

configuration 0 to 0x. A supercritical Hopf bifurcation changes this configuration to 1x. Otherwise, the same configuration can emerge from 1 when a saddle-node pair appears in a region of the phase space exterior to the periodic orbit. As the periodic orbit expands and its period grows, it collides with the saddle point at the saddle-loop (sl) bifurcation, turning into an infinite-period homoclinic trajectory. Following this, the orbit opens up, leaving the configuration ¯0x with the stationary state outside the loop remaining as the sole attractor. In a similar way, the configuration 2x, formed from 2 via sn, turns into ¯1x.

Saddle-loop bifurcation. The evolution through an sl bifurcation is shown in Fig. 3.2a. All lines are drawn in this picture without arrows, as they can be read either way. If, say, the unstable manifold of the saddle is directed to the upper right and lower left, and the stable one comes from the lower right and upper left, the right branch of the unstable manifold is attracted to the periodic orbit (traversed clockwise) in the left panel, and connects to the stable manifold, forming the homoclinic loop, in the middle panel. In the right panel, as the loop opens up, this branch of the unstable manifold takes a long roundabout route, following the shade of the disappeared cycle, toward a faraway attractor. Both branches of the saddle's stable manifold come from the upper left before the bifurcation, one of them taking a long route limiting the attraction basin of the periodic orbit. The lower branch originates in the unstable stationary state after the loop is broken.

The same picture can be viewed in another way when the orbit broken at the bifurcation is unstable and surrounds a stable stationary state. We denote this bifurcation as sl−. In the left panel, the trajectories escaping from the unstable orbit (also traversed clockwise) are attracted either inward to the stable stationary state or


outward to a faraway attractor. In the middle panel, the orbit converts to an unstable homoclinic loop. In the right panel, the loop opens, and the basin boundary between the two attractors is formed by the stable manifold of the saddle, as we already discussed in Sect. 2.2.3. If the periodic orbit is traversed anticlockwise, the stable and unstable manifolds of the saddle interchange.

Sniper bifurcation. A saddle-node pair may emerge in another way, on a stable orbit, after its period becomes infinite. This is the sniper bifurcation. It leaves the state that was previously enclosed by the periodic orbit unstable, turning 1 into ¯0x or 2 into ¯1x. The evolution through this bifurcation is shown in Fig. 3.2b, with the stable orbit in the left panel traversed clockwise. Take note that, immediately following the bifurcation, trajectories starting to the left of the saddle point in the right panel take a long roundabout route to the emerging attractor, following the contour of the disappeared cycle.

The unstable stationary states left after either saddle-loop or sniper bifurcations may subsequently annihilate with the saddle at what we denote as the sn− bifurcation, to distinguish it from the collision between a stable state and a saddle. This restores the original configuration 0 with a single stationary state.

Can a sniper bifurcation occur on an unstable orbit? The two emerging stationary states will then be a saddle and an unstable node. The same picture can now be read with the motion along the unstable orbit directed anticlockwise. Following the bifurcation, both branches of the stable manifold of the saddle, enclosing the attraction basin of the stable state, originate in the newly formed unstable node. This, in effect, changes nothing, as the attraction basin of the stable state does not open up.
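The divergence of the period at a sniper bifurcation is easy to see in the minimal model θ̇ = μ − sin θ (an illustration of the mechanism, not a system from this book): for μ > 1 the phase point rotates, and as μ → 1 a saddle-node pair is born on the cycle; the period diverges as 2π/√(μ² − 1). A Python check:

```python
from math import sin, pi, sqrt

def rotation_period(mu, dt=1e-4):
    # crude Euler integration of theta' = mu - sin(theta) over one turn
    th, t = 0.0, 0.0
    while th < 2.0*pi:
        th += dt*(mu - sin(th))
        t += dt
    return t

for mu in (1.5, 1.1, 1.01):
    print(mu, rotation_period(mu), 2.0*pi/sqrt(mu*mu - 1.0))
```

The numerical and analytic periods agree to within the integration error, and both grow without bound as μ approaches 1 from above — the slow "bottleneck" passage near the nascent saddle-node pair dominates the cycle.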

3.1.2

Bifurcations in a System with Three Stationary States

Configurations with three stationary states formed as a result of saddle-node or sniper bifurcations have already appeared in Fig. 3.1. The second stationary state can pass through the same cycle of supercritical and subcritical Hopf bifurcations shown in this figure. An additional element in the three-state picture is the formation of "large" orbits surrounding all three stationary states. This happens as a result of a saddle-loop bifurcation that we denote as sl0. The large orbit emerging (or broken) at this point may be either stable or unstable, which, generally, would happen on different segments of the sl0 locus separated by a neutral point, in the same way as h+ and h− are separated on the Hopf locus. Again, stable and unstable orbits can annihilate at the saddle-node bifurcation of large periodic orbits snp0.

We need to extend the code of Sect. 3.1.1 to describe the full variety of phase portraits in a system with three stationary states. Three digits are now required, equal to the number of periodic orbits enclosing, in this order, the lower and upper stationary states and all three stationary states together. As before, the first two digits are written with an overline when either the respective stationary state or the outermost periodic orbit is unstable. The third digit never carries this mark, since we restrict ourselves to dynamics confined to a finite region of the phase space. This code allows


Figure 3.3. Phase portraits of a system with three stationary states.

us to reconstruct the changes in phase portraits quite easily. The code digit changes by one at the Hopf and sl bifurcations, and by two at snp. Some phase portraits and their labels are shown in Fig. 3.3.

The scheme of the sl0 bifurcation involving two stable periodic orbits surrounding unstable stationary states is shown in Fig. 3.4. The left panel presents the starting configuration 110 with two stable periodic orbits surrounding the respective unstable stationary states. Assuming that both orbits are traversed anticlockwise, we can view the solid line as the stable manifold of the saddle point separating their attraction basins, and the dashed line as the unstable manifold that eventually approaches the nearby attractor. In the middle panel, the stable and unstable manifolds connect, forming a figure-eight (or rather figure-∞) homoclinic orbit. In the right panel, the homoclinic orbit transmutes into a large periodic orbit, which becomes the sole attractor in the configuration ¯0¯01. The stable manifold of the saddle point now originates at the unstable stationary states, and its unstable manifold is attracted to the newly formed orbit.

Figure 3.4. Saddle-loop bifurcation sl0 of a large periodic orbit surrounding all three stationary states. See the text for explanations


The remaining unstable stationary states may subsequently undergo subcritical Hopf bifurcations that change the configuration to ¯1¯11. As the unstable orbits expand, they may again coalesce into a figure-eight homoclinic orbit, which will now be unstable and will transmute following the bifurcation into a large unstable cycle in the configuration 002. The course of this bifurcation can be read from the same Fig. 3.4 by adding to the picture a stable large orbit that prevents the escape of trajectories to infinity and interchanging the roles of the stable and unstable manifolds of the saddle. Viewing it in this way, we recognize in the left panel unstable orbits enclosing the attraction basins of the two stationary states. In the right panel, the large unstable orbit separates the attraction basins of both stationary states from that of the large stable orbit (not shown in the picture). The dashed stable manifold of the saddle forms the boundary of the attraction basins of the two stationary states, and the unstable manifold, now shown by solid lines, terminates at the attractors. Further evolution through a snp0 bifurcation would eliminate both large orbits, restoring the state 000.

In the absence of inversion symmetry, both kinds of sl0 bifurcations are codimension-two events, since they involve simultaneous sl bifurcations on two periodic orbits; but once the large orbits exist, the two stationary states can undergo Hopf bifurcations, converting, e.g., ¯1¯11 → 0¯11 or 002 → 102. Another possibility is to convert a "small" orbit formed as a result of a Hopf bifurcation into a "large" orbit by creating a saddle-node pair in its interior, thereby turning the configuration 1 from Sect. 3.1.1 into ¯0¯01, with other configurations within the large orbit developing through subcritical Hopf bifurcations of the two unstable stationary states.

3.2 Global Bifurcations in the Exothermic Reaction

3.2.1 Basin Boundaries

The above qualitative picture can be illustrated by the study of the exothermic reaction model of Sect. 2.4, based on the computations detailed in the Mathematica notebook Exothermic.nb. We concentrate now on global bifurcations involving periodic orbits and multiple stationary states. We shall see that exploring a single set of static parameters, under conditions when multiple stationary states exist, makes it possible to obtain the various global bifurcations just by shifting the dynamic capacitance parameter γ. We choose ν = 4.5, μ = 9 as representative values. Of the three stationary states that exist at these values of the parameters, the middle one, with the conversion u ≈ 0.37 and the reduced temperature v ≈ 1.665, is a saddle; the higher one, with u ≈ 0.81, v ≈ 3.64, is a focus, stable at γ > γ+ ≈ 0.50456 and undergoing a supercritical Hopf bifurcation at the latter point; and the lower one, with u ≈ 0.30, v ≈ 1.35, is a stable node at higher values of γ, turning with decreasing γ into a focus and undergoing a subcritical Hopf bifurcation at γ− ≈ 0.24579.

Whenever there are coexisting stable stationary states and/or periodic orbits, there must exist a boundary separating their attraction basins. We start from a situation when both the higher and lower fixed points are stable. The saddle has one negative and
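The three stationary states quoted above are roots of the stationary condition; a Python sketch using bisection (the relation μ = e^v(ν − v)/v is the reconstruction of the stationary-state equation of Sect. 2.4 used in this commentary — an assumption, since the original equations are not restated here):

```python
from math import exp

NU, MU = 4.5, 9.0

def g(v):
    # stationary condition in the form g(v) = 0, assuming mu = e^v (nu - v)/v
    return exp(v)*(NU - v)/v - MU

def bisect(a, b, tol=1e-12):
    fa = g(a)
    while b - a > tol:
        m = 0.5*(a + b)
        if g(m)*fa > 0.0:
            a, fa = m, g(m)
        else:
            b = m
    return 0.5*(a + b)

roots = [bisect(*br) for br in ((1.0, 1.5), (1.5, 2.5), (3.0, 4.0))]
for v in roots:
    # conversion u = v/nu; gamma at a Hopf point would be (v-1)(1-v/nu), cf. (2.70)
    print(v, v/NU, (v - 1.0)*(1.0 - v/NU))
```

The roots reproduce v ≈ 1.35, 1.665, 3.64 and u ≈ 0.30, 0.37, 0.81, and the last column recovers γ− ≈ 0.2458 and γ+ ≈ 0.5046 for the lower and higher states.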

Figure 3.5. (a): The stable (dashed line) and unstable (solid line) manifolds of the saddle point in the exothermic system at ν = 4.5, μ = 9, γ = 0.6. The former one serves as the basin boundary of the two stable states. (b): The stable manifold of the saddle point forming the boundary between the stable state and a periodic orbit, and the unstable manifold approaching either attractor

one positive eigenvalue; the corresponding eigenvectors span the 1D stable and the unstable manifolds. The stable manifold of the saddle marks the boundary between the attraction basins of the two stable states. This boundary, such as the one shown by the dashed curve in Fig. 3.5a, can be drawn by integrating the dynamical system in reverse time. Its two branches are obtained by starting the integration from one of the points slightly shifted from the saddle in the opposite directions along the stable manifold. The two branches of the unstable manifold, shown by the solid lines in the same figure, terminate at one of the stable states, of which the higher one is, at the values of parameters used in this example, a focus, and the lower one, a node. As the capacitance parameter γ decreases, the higher state undergoes the Hopf bifurcation at γ = γ+ , turning it into an unstable focus and creating a stable periodic orbit. The basin boundary between the two attractors shown in Fig. 3.5b is drawn in the same way as before, but now it closely approaches at some point the unstable manifold of the saddle. The saddle-loop bifurcation takes place when the stable and unstable manifolds reconnect.
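The reverse-time construction can be sketched in Python: linearize at the saddle, verify the opposite-sign eigenvalues, and integrate backward from a small shift along the stable eigenvector to trace a branch of the basin boundary. The right-hand sides are the same assumed reconstruction of the model as above; the saddle temperature v ≈ 1.665 is the value quoted in the text:

```python
from math import exp, sqrt

NU, MU, GAMMA = 4.5, 9.0, 0.6
VS = 1.665                       # saddle temperature; u = VS/NU ~ 0.37

def rhs(u, v):
    r = exp(v)*(1.0 - u)/MU      # assumed model, as in Sect. 2.4
    return r - u, (NU*r - v)/GAMMA

def jacobian(u, v, h=1e-7):
    f0 = rhs(u, v)
    fu, fv = rhs(u + h, v), rhs(u, v + h)
    return [[(fu[0] - f0[0])/h, (fv[0] - f0[0])/h],
            [(fu[1] - f0[1])/h, (fv[1] - f0[1])/h]]

us = VS/NU
J = jacobian(us, VS)
tr = J[0][0] + J[1][1]
det = J[0][0]*J[1][1] - J[0][1]*J[1][0]
lam_u = 0.5*(tr + sqrt(tr*tr - 4.0*det))   # positive (unstable) eigenvalue
lam_s = 0.5*(tr - sqrt(tr*tr - 4.0*det))   # negative (stable) eigenvalue
print(lam_u, lam_s)                        # opposite signs: a saddle

# shift along the stable eigenvector and integrate in reverse time (Euler);
# the trajectory traces one branch of the basin boundary
ev = (J[0][1], lam_s - J[0][0])
norm = sqrt(ev[0]**2 + ev[1]**2)
u, v = us + 1e-3*ev[0]/norm, VS + 1e-3*ev[1]/norm
for _ in range(6000):                      # reverse time t = -6
    du, dv = rhs(u, v)
    u, v = u - 0.001*du, v - 0.001*dv
print(u, v)   # has moved away from the saddle along the boundary
```

In reverse time the stable direction becomes expanding, so the trajectory leaves the saddle along the stable manifold, exactly as the dashed curve of Fig. 3.5a is drawn.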

3.2.2 Saddle-Loop Bifurcations

Breaking stable orbits. As γ decreases further, the amplitude of oscillations grows, as seen in the sequence of nested orbits in Fig. 3.6, and the period increases as well,


Figure 3.6. Nested periodic orbits surrounding the unstable focus at γ going down from its value at the Hopf bifurcation point γ+ = 0.50456 with the decrement 0.005, terminating close to the saddle-loop bifurcation. The approximated infinite-period orbit is shown by the red line, and the trajectory going from the saddle to the single remaining attractor following the saddle-loop bifurcation is shown by the green line. Inset: the periods of the nested orbits

as shown in the inset of this figure, until the outermost orbit touches the saddle point, forming an infinite-period homoclinic orbit, shown by the red line in Fig. 3.6. This is the saddle-loop bifurcation point, beyond which oscillations disappear, and the lower stationary state remains the sole attractor. Beyond this bifurcation, the unstable manifold of the saddle approaching the lower stationary state, shown by the green line in Fig. 3.6, makes a wide loop, following the shade of the just broken homoclinic so closely that the two are not resolved in some segments of the plot. The system is highly sensitive to changes of γ in the vicinity of this bifurcation.

Breaking unstable orbits. Unstable periodic orbits may undergo a saddle-loop bifurcation as well. Here, too, the period grows with increasing amplitude and diverges as the orbit hits the saddle point, forming a homoclinic orbit; the latter now joins the two branches of the stable manifold of the saddle and is unstable. The sequence of unstable orbits, computed by integrating in reverse time at the same set of static parameters and several values of γ going up from the vicinity of the subcritical Hopf bifurcation, as well as an approximation of the homoclinic orbit at γ ≈ 0.26879, are shown in Fig. 3.7a. Although the fixed point enclosed by these orbits is stable, its attraction basin is minuscule, and it can be accessed only starting from initial conditions within the enclosing orbit. Beyond the saddle-loop bifurcation, the unstable cycle disappears, but the attraction basin is still limited by a curve built of two trajectories, also obtained by integrating in reverse time and distinguished by color in Fig. 3.7b, which start at the unstable higher fixed point and approach the saddle point from the two opposite stable directions. Trajectories starting outside the attraction basin of the lower stationary state are

3.2. GLOBAL BIFURCATIONS IN THE EXOTHERMIC REACTION
Figure 3.7. (a): Nested unstable periodic orbits surrounding the stable lower state at γ going up from the value γ = 0.25, close to the Hopf bifurcation point, with the increment 0.006, terminating close to the saddle-loop bifurcation. The approximated infinite-period orbit is colored brown. (b): The sequence of basin boundaries of the lower stationary state widening as γ increases from 0.27 to 0.32 with the 0.01 increment. (c): Nested large-amplitude stable periodic orbits surrounding all three fixed points, with the amplitude increasing as γ goes up from 0.15 to 0.325 with the increment 0.025. The red dashed line shows the basin boundary of the lower fixed point at γ = 0.325, which corresponds to the outermost unstable orbit. Inset: the periods of the nested orbits. (d): A roundabout trajectory illustrating excitable dynamics at γ just above the saddle-loop bifurcation on the large-amplitude branch. Inset: the reduced temperature v as a function of time on this trajectory

Figure 3.8. The dependence of existence intervals of different attractors on γ and the locations of Hopf (H) and saddle-loop (SL) bifurcations at ν = 4.5, μ = 9











attracted to a large-amplitude periodic orbit, which surrounds all three fixed points and is disconnected from the periodic orbits considered above. A nested set of such orbits is shown in Fig. 3.7c. The dashed curve in the same plot is the outer boundary of the attraction basin of the stable lower stationary state at the value of γ corresponding to the outermost orbit. The period, shown in the inset, increases with γ, as the sequence of orbits approaches the saddle-loop bifurcation at γ ≈ 0.328. Beyond this bifurcation, the lower fixed point is the sole attractor, but the dynamics is excitable, as demonstrated by the trajectory shown in Fig. 3.7d. Following a small perturbation bringing it slightly away from the fixed point, it goes around a large loop following the shade of the disappeared homoclinic orbit, before returning to the stable stationary state. The dependence v(t) shown in the inset makes it clear that the large loop with its sharp temperature changes is traveled fast, while a longer time is spent in the vicinity of the fixed point. The attractors at the chosen values of the static parameters at different values of γ, and their bifurcations – Hopf (H) and saddle-loop (SL) – are summarized in Fig. 3.8. The diagram does not show the saddle-node bifurcation of periodic orbits just to the right of the Hopf bifurcation point of the lower state, since the distance between the two points is indistinguishable on the scale of this diagram.
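The periods plotted in the insets of Figs. 3.6 and 3.7c can be measured by timing successive returns to a Poincaré section. A minimal sketch, using a van der Pol oscillator as an illustrative stand-in (the exothermic equations themselves are not reproduced in this section):

```python
import numpy as np
from scipy.integrate import solve_ivp

def vdp(t, z, mu=1.0):
    # van der Pol oscillator: a stable limit cycle for mu > 0
    x, y = z
    return [y, mu * (1 - x**2) * y - x]

# Event: upward crossing of the Poincare section x = 0
section = lambda t, z: z[0]
section.direction = 1

sol = solve_ivp(vdp, (0, 200), [0.5, 0.0], events=section, rtol=1e-9, atol=1e-9)
crossings = sol.t_events[0]
T = np.diff(crossings[-10:]).mean()   # average over the last few returns, transient gone
```

Repeating this while stepping a parameter toward a saddle-loop bifurcation exhibits the logarithmic divergence of the period as the orbit approaches the saddle.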

3.2.3

Sniper Bifurcation

An infinite-period saddle-node (sniper) bifurcation occurs when a periodic orbit passes through a stationary state at the moment of its saddle-node (fold) bifurcation. Unlike other global bifurcations, this one cannot be located by changing γ alone, since the values of the static parameters ν, μ have to be fixed on the fold bifurcation line. The bifurcation is of codimension one, but it is observed only on a limited segment of the fold line. It can be located most easily by changing μ. Other parameters have to be fixed in such a way that the system has only one stable stationary state and no periodic orbits in the multiplicity region, and a stable periodic orbit surrounding an unstable stationary state in the parametric region where the stationary state is unique. The transition takes place as

Figure 3.9. (a): A trajectory starting close to the saddle and terminating, after a long excursion, at the stationary state in the multiplicity region close to the sniper bifurcation. (b): The blowup of the segment of the same trajectory near the fixed points (solid line) and the segment of the periodic orbit closing up at the sniper bifurcation (dashed line)

μ crosses the fold line and the periodic orbit opens up as the saddle and node emerge. Suitable parametric values correspond to the region of excitable dynamics considered above. This dynamics can be viewed, indeed, as a phantom of the periodic orbit destroyed at the sniper bifurcation. We take as an example ν = 4.5, γ = 0.4. The value of μ at the bifurcation point is determined by the lower fold condition μ0 = 2e^{3/2} ≈ 8.96. The respective stationary values of the variables are v = 3/2, u = 1/3. The stationary states close to the bifurcation point can be computed with the help of the stationary amplitude equation at the fold bifurcation (2.10), κ0 + κ2 a² = 0, where (at ν fixed)

κ0 = μ0 − μ,   κ2 = v e^v (v − 2)/(2(v − 1)) = (3/4) e^{3/2}.   (3.1)

The stable lower stationary state is located under these conditions at

u = 1/3 − (2/(3√3)) √(μ − 2e^{3/2})/e^{3/4},   v = 3/2 − √3 √(μ − 2e^{3/2})/e^{3/4}.   (3.2)

Close to the fold in the multiplicity region, the trajectory starting close to the saddle and terminating, after a long excursion, at the stationary state is shown in Fig. 3.9a. Outside the multiplicity region, this trajectory closes into a periodic orbit, which is almost indistinguishable from the open trajectory at the plot's scale. The distinction between them can be seen in the blowup near the disappearing stationary states shown in Fig. 3.9b.
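The hallmark of the sniper bifurcation, the divergence of the period as the fold is approached, can be checked on the one-variable normal form θ̇ = μ − sin θ (an illustrative stand-in, not the exothermic model), whose exact period for μ > 1 is 2π/√(μ² − 1):

```python
import numpy as np
from scipy.integrate import solve_ivp

def period(mu):
    # time for the phase to complete one full turn of the periodic orbit
    hit = lambda t, th: th[0] - 2 * np.pi
    hit.terminal, hit.direction = True, 1
    sol = solve_ivp(lambda t, th: [mu - np.sin(th[0])], (0, 1e4), [0.0],
                    events=hit, rtol=1e-10, atol=1e-12)
    return sol.t_events[0][0]

# the numerical period tracks 2*pi/sqrt(mu**2 - 1), which diverges as mu -> 1+
for mu in (1.5, 1.1, 1.01):
    T_num, T_exact = period(mu), 2 * np.pi / np.sqrt(mu**2 - 1)
```

At μ = 1 the saddle and node are born on the orbit itself, and the period becomes infinite; just below the fold the "phantom" of the orbit survives as a slow passage through the bottleneck, which is the excitable dynamics described above.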


3.3

Bifurcation at Double-Zero Eigenvalue

3.3.1

Locating a Double Zero

We can seek out hubs of complexity where a variety of stationary and oscillatory states can be obtained by slightly shifting some parameters. Such hubs can be identified with bifurcations of a higher codimension. We have seen a simple example of a codimension-two bifurcation in the cusp singularity of Sect. 2.1.4. A codimension-two bifurcation that involves periodic orbits is found at a double-zero eigenvalue. It is also known as the Takens–Bogdanov (TB) bifurcation. More than one mode is necessary for the description of slow dynamics near a manifold in the parametric space where the underlying linearized system has more than one eigenvalue with a vanishing real part. The bifurcation manifold is of codimension two, and reaching it requires fixing two parameters of a generic dynamical system. This bifurcation can be realized as an intersection of the Hopf bifurcation locus, with the frequency vanishing at this point, with any bifurcation locus of stationary states. This singular bifurcation is particularly interesting because it is possible to study analytically in its vicinity global bifurcations that otherwise can be detected only numerically or with the help of separated time scales (see Sects. 3.4, 3.5). Since we are primarily interested here in the dynamic behavior that can be modified even without affecting stationary states, we consider the dynamical system in the form (2.38):

Γ du/dt = f(u; p).   (3.3)

The double degeneracy is attained at the intersection of the fold and Hopf bifurcation loci. It can be easily located at the fold locus, where the Jacobi matrix fu has a zero eigenvalue, by adjusting the capacitance matrix Γ = Γ0 in such a way that the zero eigenvalue of J = Γ0⁻¹ fu becomes doubly degenerate. In the absence of special symmetries, the double zero eigenvalue is degenerate geometrically as well as algebraically. Geometric degeneracy means that the matrix J has only one eigenvector U satisfying J U = 0. The basis spanned by this vector and the rest of the eigenvectors of J (which correspond to stable modes) should then be complemented by an additional vector V satisfying J V = U. The matrix J can be reduced in this basis to the normal Jordan form, where the cell

( 0 1 )
( 0 0 )

corresponds to the double zero eigenvalue. We also need the eigenvectors U†, V† of the transposed matrix J†, sometimes called left eigenvectors, which satisfy U†J = 0, V†J = U†. Evidently, U is the right eigenvector of both fu and J; another useful relation is fu V = Γ0 U. For other eigenvectors W_i of J, which have eigenvalues λ_i with negative real parts, fu W_i = λ_i Γ0 W_i. The relations between the eigenvectors are invariant to the transformations V → V + kU, V† → V† − kU† with an arbitrary scalar k. This gauge freedom can be used to arrange for the orthogonality conditions U†Γ0V = V†Γ0U = 0. The eigenvectors


can be subsequently normalized as U † Γ0 U = V † Γ0 V = 1. These relations are convenient for the forthcoming bifurcation expansion.
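A minimal numerical sketch of this Jordan chain (the 2×2 matrix below is an illustrative assumption): the generalized eigenvector V can be obtained by least squares, since J V = U is solvable whenever U lies in the range of J.

```python
import numpy as np

# Trace and determinant both vanish: a double zero eigenvalue,
# but rank(J) = 1, so only ONE eigenvector exists (geometric degeneracy)
J = np.array([[2.0, -1.0], [4.0, -2.0]])

U = np.array([1.0, 2.0])                   # right eigenvector: J U = 0
V = np.linalg.lstsq(J, U, rcond=None)[0]   # generalized vector: J V = U (defined up to k*U)

Ud = np.array([4.0, -2.0])                 # left eigenvector (up to normalization): Ud J = 0
```

The residual arbitrariness of V (the term k·U) is exactly the gauge freedom mentioned above, to be fixed by the orthogonality and normalization conditions.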

3.3.2

Quadratic Normal Form

Bifurcation expansion. As in Sect. 2.2.4, we introduce a hierarchy of time scales (2.31), and expand all variables, parametric deviations, the capacitance matrix Γ, and the nonlinear functions in powers of a dummy small parameter ε as in (2.6), (2.7). As before, the dependence on the fast time t0 has to be suppressed at the bifurcation point, and this requires restricting parametric deviations to the second order. Setting tentatively u1 = a1 U + a2 V, the linear equation obtained in the first order is

J u1 = a1 J U + a2 J V = a2 U = 0.   (3.4)

This means that only the amplitude a1 can be of O(ε), while the other amplitude a2 must be at most of O(ε²). Both amplitudes a1, a2 are allowed to vary on a slow time scale. Continuing the expansion, and adding second-order deviations of static parameters p2 and a first-order deviation of the capacitance matrix Γ1, we obtain in the second order

Γ0 ∂1 u1 + a1 Γ1 U = fu u2 + fp p2 + (1/2) a1² fuu[U, U].   (3.5)

The solvability condition of this inhomogeneous equation requires now the inhomogeneity to be orthogonal to both eigenvectors U† and V†. Keeping in mind the orthonormality conditions while computing both projections, we arrive at the amplitude equations

∂1 a1 = a2 + κ01 + κ11 a1 + κ21 a1²,
0 = κ02 + κ12 a1 + κ22 a1²,   (3.6)

containing the parameters

κ01 = U† fp p2,   κ11 = −U† Γ1 U,   κ21 = (1/2) U† fuu[U, U],
κ02 = V† fp p2,   κ12 = −V† Γ1 U,   κ22 = (1/2) V† fuu[U, U].   (3.7)

The system (3.6) is incomplete, as it lacks the dynamics of a2. This is a consequence of the scaling of this amplitude following from (3.4).

The scaling problem. In the mathematical literature (Guckenheimer and Holmes, 1983), the difference in the scaling of the two amplitudes is ignored. Both would be, indeed, of the same order of magnitude if there were no geometric degeneracy, but this is a nongeneric situation. On the other hand, it may happen that some coefficients of the bifurcation expansion are much larger than others, so that orders of magnitude mix


up when the dummy parameter is not excessively small. Adopting this attitude, but nevertheless omitting terms quadratic in a2, (3.6) is modified to

∂1 a1 = a2 + κ01 + κ11 a1 + κ21 a1² + κ121 a1 a2,
∂1 a2 = κ02 + κ12 a1 + κ22 a1² + κ122 a1 a2,   (3.8)

κ121 = U† fuu[U, V],   κ122 = V† fuu[U, V].   (3.9)

This equation system is further simplified by transforming the amplitudes as

a1 → x,   a2 → y + κ01 + κ11 x + κ21 x² + κ121 xy,   (3.10)

leading to what is viewed as a normal form, or a “universal unfolding”, of the double-zero singularity:

ẋ = y,   ẏ = ν0 + ν1 y + ν2 x² + ν3 xy,   (3.11)

where the slow time derivative is replaced by a dot, and the coefficients νi can be expressed through the more richly indexed κ's in (3.7) and (3.9). The coefficients ν0, ν1, related to κ01, κ02, depend on parametric deviations and can be varied to arrive at different dynamics. We leave this as an exercise for the reader, who can consult Guckenheimer and Holmes, 1983, where the case ν2 = ν3 = 1 is analyzed in detail. At ν1 = ν3 = 0, the system (3.11) is conservative, as it can be derived from the Hamiltonian H = (1/2)y² − V(x), V(x) = ν0 x + (1/3)ν2 x³, and when ν1, ν3 are small the perturbation technique of Sect. 3.4 comes in handy. It is possible, in particular, to trace here a saddle-loop global bifurcation sketched in Fig. 3.2. As we have noted in Sect. 2.2.4, the dynamics of even-order amplitude equations is not confined to small amplitudes, and some trajectories inevitably escape to infinity. We therefore continue to derive a third-order normal form, which both confines the dynamics to small amplitudes and alleviates the scaling problem.
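The conservative case ν1 = ν3 = 0 is easy to probe numerically; with the sample values ν0 = −1, ν2 = 1 (an illustrative choice) the fixed points sit at x = ±1, a center and a saddle, and H is conserved along orbits around the center:

```python
import numpy as np
from scipy.integrate import solve_ivp

nu0, nu2 = -1.0, 1.0
H = lambda x, y: 0.5 * y**2 - (nu0 * x + nu2 * x**3 / 3)

# conservative case nu1 = nu3 = 0 of the normal form (3.11)
sol = solve_ivp(lambda t, z: [z[1], nu0 + nu2 * z[0]**2],
                (0, 20), [-1.0, 0.5], rtol=1e-10, atol=1e-12)

# energy drift along the closed orbit around the center at x = -1
drift = abs(H(sol.y[0, -1], sol.y[1, -1]) - H(-1.0, 0.5))
```

Switching on small ν1, ν3 destroys the conservation and selects particular orbits, which is precisely the mechanism exploited in Sect. 3.4.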

3.3.3

Expansion in the Vicinity of Cusp Singularity

The third-order form. We continue to the third order to derive the dynamic equation for the double-zero singularity near the cusp point. The dynamics of a1 unfolds now on the O(ε⁻¹) rather than the O(ε⁻²) time scale of the amplitude equations at the cusp and Hopf bifurcations (2.16) and (2.41). The dynamics of a1 on this time scale is defined by the first equation (3.6), but the “ultraslow” time derivative ∂2 a1 may add finer features. The third-order equation is written as

Γ0 (∂1 u2 + ∂2 u1) + Γ1 U ∂1 a1 = fu u3 + fp p3 + a1 fup[U, p2] + a1 fuu[U, u2].   (3.12)

First of all, we need to evaluate the second-order function u2 by solving (3.5). Its part proportional to U is annulled by fu and should be omitted. The term a2 V yields, when acted upon by fu, a2 Γ0 U. If there are more than two dynamic variables, the remaining part, to be denoted as V2, contains only eigenvectors of J with negative real parts. Since the sum of the projections of the inhomogeneous terms on U and V


vanishes due to the solvability conditions (3.6), (3.5) is brought to the form confined to the nonsingular subspace:

f̃u V2 + fp p2 + (1/2) a1² fuu[U, U] − a1 Γ1 U = 0,   (3.13)

where f̃u denotes the respective part of fu, with the double-zero block excluded. Since f̃u is nonsingular, V2 can be evaluated as

V2 = f̃u⁻¹ ( a1 Γ1 U − fp p2 − (1/2) a1² fuu[U, U] ).   (3.14)

This brings (3.12) to a more explicit form

∂1 (a2 Γ0 V + a1 Γ1 U + Γ0 V2) + ∂2 a1 Γ0 U = fu u3 + fp p3 + a1 fup[U, p2] + a1 a2 fuu[U, V] + a1 fuu[U, V2].   (3.15)

This equation can be further rearranged by replacing ∂1 a1 with the help of (3.6):

∂1 a2 Γ0 V + (a2 + κ01 + κ11 a1 + κ21 a1²)(Γ1 U + Γ0 F) + ∂2 a1 Γ0 U
   = fu u3 + fp p3 + a1 fup[U, p2] + a1 a2 fuu[U, V] + a1 fuu[U, V2],   (3.16)

F = f̃u⁻¹ { Γ1 U − a1 fuu[U, U] }.   (3.17)

The solvability condition of (3.16), obtained by projecting its inhomogeneity on V†, yields now the dynamic equation of a2 in the form

∂1 a2 = μ0 + μ1 a1 + μ2 a2 + μ11 a1² + μ12 a1 a2 + μ3 a1³,   (3.18)

μ0 = V† { fp p3 − κ01 (Γ1 U + Γ0 f̃u⁻¹ Γ1 U) },
μ1 = V† { fup[U, p2] − fuu[U, f̃u⁻¹ fp p2] − κ11 (Γ1 U + Γ0 f̃u⁻¹ Γ1 U) },
μ2 = −V† (Γ1 U + Γ0 f̃u⁻¹ Γ1 U),   μ3 = (κ12/2) V† f̃u⁻¹ fuu[U, U],
μ11 = V† { fuu[U, f̃u⁻¹ Γ1 U] + (κ11/2) Γ0 f̃u⁻¹ fuu[U, U] − κ21 Γ1 U },
μ12 = V† { fuu[U, V] + Γ0 f̃u⁻¹ fuu[U, U] }.   (3.19)

Projecting in the same way on U† yields the dynamics of a1 on the O(ε⁻²) time scale, which may cause a slow drift from a stationary state or a periodic orbit established as a solution of (3.6), (3.18). Again, it is convenient to replace the variables similarly to (3.10):

a1 → x,   a2 → y + κ01 + κ11 x + κ21 x².   (3.20)


Replacing also the slow time derivative by a dot, the amplitude equation system becomes

ẋ = y,   ẏ = ν0 + ν1 x + ν2 y + ν11 x² + ν12 xy + ν3 x³,   (3.21)

ν0 = μ0,   ν1 = μ1 + κ01 μ2,   ν2 = μ2,   ν11 = μ11 + κ11 μ2,   ν12 = μ12,   ν3 = μ3 + κ21 μ2.   (3.22)

Finally, x can be rescaled to reduce the coefficient at the cubic term ν3, which must be negative to contain the dynamics at small amplitudes, to −1, and shifted by a constant to eliminate the term containing x². The required transformation is x → (−ν3)^{−1/3} x + (1/3) ν11 ν3^{−2/3}. The rescaling of y should fit that of x. The final form is

ẋ = y,   ẏ = b0 + b1 x + b2 y + b3 xy − x³,   (3.23)

b0 = ν0 − (1/3) ν1 ν11/ν3 + (2/27) ν11³/ν3²,
b1 = −ν1 ν3^{−1/3} + (1/3) ν11² ν3^{−4/3},
b2 = ν2 − (1/3) ν11 ν12/ν3,   b3 = −ν12 ν3^{−1/3}.   (3.24)

The rationale of the formalism. Is this long, bland derivation worth the trouble? The form (3.23) could have been written out of general considerations to account for the entire variety of behavior of a dynamical system having up to three stationary states, confined to a finite region of the 2D phase space, and capable of producing a full zoo of global bifurcations. If, however, you are studying a particular process, not necessarily restricted to two variables, it might be useful to find a parametric region where all possible stationary and oscillatory states and transitions among them are concentrated like devils on a needle head, and to see whether this point is physically accessible and all coefficients in the generic form are finite. This procedure becomes, however, trivial when applied to a two-variable system, and therefore we do not apply it to our favorite example of the exothermic reaction model. The form (3.23) is still algebraically cumbersome, and solving it numerically is not an attractive proposition. Its advantage is that it clearly separates a function f(x) that affects stationary states from a function y g(x) that controls dynamics. In the next section, we explore this feature with the help of an additional approximation to map all dynamic regimes and global bifurcations analytically. At this point, we can observe that the mathematicians' nonchalance about balancing the orders of magnitude has its merits. Even if a normal form cannot be derived from an underlying physical system through an ordered expansion in powers of a small parameter, it can be viewed as a model equation allowing us to reproduce dynamics in a semi-quantitative way, adding gravity to the qualitative descriptions of Sect. 3.1. The distinction between the roles of the two variables brings here a clear advantage: if you wish to increase the number of stationary states, raise the power of f(x); if you want the dynamics to be more variegated, take a polynomial g(x) of a higher order.
Unfortunately, these features cannot be manipulated as easily in actual physical models.

3.4

Almost Hamiltonian Dynamics

3.4.1

Weak Dissipation

Solutions at different energy levels. The key to extending the analytical study is to assume the coefficients at the terms containing y to be small, which reduces (3.23) to a perturbed Hamiltonian system with weakly dissipative corrections. The general form of such a system is

ẋ = y,   ẏ = f(x) + δ y g(x),   (3.25)

with a small parameter δ ≪ 1. The origin of the Hamiltonian formalism is in classical mechanics, as we discussed in Sect. 1.1.1, but it emerged, rather unexpectedly, in a completely different context in Sect. 1.3.1. Now we encounter it again in the normal forms that can be derived through the formal procedure of the preceding section from any underlying system without special symmetries. At δ = 0, (3.25) is a conservative system. The conserved quantity is the “energy”

E = (1/2) y² + V(x),   V(x) = − ∫ f(x) dx.   (3.26)

The stationary states are defined exclusively by the function f(x); the dependence of their locations and energies on the parameter κ0 is shown in Fig. 3.10. The “dynamic” variable y plays the role of the momentum p in (1.13), and the “static” variable x, the role of the coordinate q. The minima of the potential V(x) correspond to stable stationary states and the maxima, to saddle points. The system (3.25) at δ = 0 can be obtained by varying the action integral

S = ∫ [ (1/2) y² − V(x) ] dt.   (3.27)

(b) 

 -

  -

-





- 

-



-



 -

- -

Figure 3.10. The dependence of stationary states (a) and their energies (b) on the parameter κ0 .

Figure 3.11. Trajectories of a Hamiltonian system with two stable fixed points at E+ > E > E− (a), E0 > E > E+ (b), and E > E0 (c). The homoclinic loop is also shown in all panels

Trajectories in the phase plane spanned by x and y immediately follow from (3.26):

y² = 2[E − V(x)].   (3.28)

Trajectories at different levels of E are qualitatively different. We can see it in Fig. 1.1, but now we need to add asymmetry, so that the stationary state with the lowest energy, say, E−, is absolutely stable and the other one, with E+ > E−, is metastable. At E+ > E > E−, only one orbit exists, while at energies exceeding both E± but lower than the energy of the intermediate saddle fixed point, E = E0, the level E = const consists of two disconnected parts, each surrounding one of the stable stationary states only. At energies higher than E0, each trajectory surrounds all three stationary states, two stable ones and a saddle (Fig. 3.11). The divide between the two types is the homoclinic trajectory passing through the saddle, shaped as a figure-eight with its two loops surrounding each of the stable fixed points.

Slow evolution. At δ ≠ 0, the energy changes with time:

Ė = (∂E/∂x) ẋ + (∂E/∂y) ẏ = δ y² g(x).   (3.29)

When δ is small, the energy changes slowly, so that it remains nearly constant during an oscillation period T along a particular trajectory. Since the trajectories are invariant to the reversal of y, integrating over a half-orbit is sufficient, and the time integral can be reduced to a spatial integral:

Ė ≡ ΔE/T = (2δ/T) ∫₀^{T/2} y² g(x) dt = (2δ/T) ∫_{x−}^{x+} y(x) g(x) dx.   (3.30)

The integration limits in the last integral are the values of x at the turning points where y vanishes, i.e., the suitable roots of E = V(x). For a specific system, the computed “dissipation rate” ΔE/T can be expressed as a function of energy. Unlike mechanical systems with friction, dissipation may be


negative (it is then sometimes called “antidissipation”) – after all, we are studying here general equations that may originate from a system sustained far from equilibrium. The total dissipation would then vanish on a trajectory that satisfies the relation

J = ∫_{x−}^{x+} y(x) g(x) dx = √2 ∫_{x−}^{x+} g(x) √(E − V(x)) dx = 0.   (3.31)

This trajectory then becomes stationary and can be identified with a periodic orbit of a general dynamical system. We can observe that even arbitrarily weak dissipation brings about a qualitative change of dynamic behavior. A conservative system can move along any available trajectory, depending on its energy. On the contrary, a weakly dissipative system evolves to a certain energy level where the average change of energy vanishes. In order to construct a global bifurcation diagram, we have to focus attention on the values of parameters ensuring that certain special (nongeneric) trajectories, such as homoclinics, become stationary.
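The program implied by (3.31) can be sketched directly: choose sample functions f and g (the choices below are illustrative, not taken from the text), locate the energy at which the dissipation integral vanishes, and verify that a trajectory of (3.25) settles onto that level:

```python
import numpy as np
from scipy.integrate import quad, solve_ivp
from scipy.optimize import brentq

delta = 0.05                                   # weak dissipation
f = lambda x: x - x**3                         # centers at x = +1, -1; saddle at 0
g = lambda x: x**2 - 0.9                       # sample "dissipation" function
V = lambda x: -x**2 / 2 + x**4 / 4             # V = -int f dx

def J(E):
    # dissipation integral (3.31) over the half-orbit in the right-hand well
    s = np.sqrt(1 + 4 * E)
    xm, xp = np.sqrt(1 - s), np.sqrt(1 + s)    # turning points: roots of V(x) = E
    return quad(lambda x: g(x) * np.sqrt(np.maximum(2 * (E - V(x)), 0.0)), xm, xp)[0]

# energy of the periodic orbit: J vanishes between center (E = -1/4) and saddle (E = 0)
E_star = brentq(J, -0.24, -0.01)

# a trajectory of (3.25) drifts onto that energy level
rhs = lambda t, z: [z[1], f(z[0]) + delta * z[1] * g(z[0])]
sol = solve_ivp(rhs, (0, 2000), [0.2, 0.0], rtol=1e-8, atol=1e-8)
E_final = 0.5 * sol.y[1, -1]**2 + V(sol.y[0, -1])
```

For these sample functions the zero of J lies between the center energy −1/4 and the saddle energy 0, and the weakly dissipative trajectory relaxes onto the corresponding orbit regardless of its initial energy within the well.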

3.4.2

Hopf and Saddle-Loop Bifurcations

We return now to the normal form (3.23) with the cubic f(x) = κ0 + κ1 x − x³, coinciding with the unfolding of the cusp singularity (2.18). However, a linear function g(x) = b2 + b3 x in (3.23) is problematic, because it does not prevent escape to periodic orbits with arbitrarily large amplitudes. Therefore here, as in Sect. 3.3.2, we have to sacrifice the precise balance of orders of magnitude and add a quadratic term that would confine all trajectories to a finite domain. Thus, we set in the following g(x) = μ0 + μ1 x + μ2 x² with μ2 < 0. The most interesting parametric region is that where three stationary solutions exist, confined, as we already noted in Sect. 2.1.4, to the cusped region κ1 > 0, |κ0| < 2(κ1/3)^{3/2} shown in Fig. 2.2a. The locus of Hopf bifurcations on the branches of fixed points y = 0, x = x±s is determined exclusively by the function g(x). The only nonvanishing diagonal element of the Jacobi matrix is δ g(xs), and therefore g(xs) must vanish at the Hopf bifurcation point. This defines the bifurcation locus μ0 + μ1 xs + μ2 xs² = 0. This formula applies on stable solution branches of f(x) = 0, where |xs| > √(κ1/3). Is the Hopf bifurcation supercritical or subcritical? This depends on whether the energy of small orbits surrounding the stationary state is lower or higher than V(xs). The increment of the potential Ṽ(ξ) = V(x) − V(xs) along a small orbit is approximated in the leading order in small deviations ξ = x − xs as

Ṽ(ξ) = −(1/2) f′(xs) ξ² = (1/2)(3xs² − κ1) ξ².   (3.32)

The turning points of the orbits at small energy deviations E1 are approximated as ξ± = ±√(2E1/(3xs² − κ1)). Due to the symmetry, the increment of the integral multiplying μ1 vanishes, and the average dissipation (3.30) is evaluated simply as Ė = μ0 δ. Therefore the bifurcation is supercritical at μ0 > 0 and subcritical at μ0 < 0.


A homoclinic orbit corresponds to the energy level E = V(xs⁰) and, according to (3.28), is defined by the equation

y² = 2[V(xs⁰) − V(x)].   (3.33)

The turning points of the orbits are defined by quartic equations, so, unfortunately, we have to evaluate the stationarity condition (3.31) numerically. This has to be done separately for each loop to locate sl± bifurcations on the branches of periodic orbits surrounding the respective points, while the condition evaluated for the entire figure-eight defines the sl0 bifurcation on the branch of “large” orbits surrounding all three fixed points. Strictly speaking, the above derivation, assuming that the energy change during a single oscillation period is small, fails for an infinite-period homoclinic trajectory. Nevertheless, the stationarity condition for a homoclinic trajectory correctly indicates the location of the saddle-loop bifurcation in a weakly dissipative system of the type (3.25).

3.4.3

Bifurcation Diagrams

The overall picture with three fixed points. Bifurcation diagrams for representative values of parameters are constructed in the accompanying Mathematica notebook WeakDissipation.nb, which can be used as a template for other computations, including those with different functions f(x) and g(x). The bifurcation diagrams in the parametric plane μ0, μ1, including also the loci of saddle-node bifurcations of periodic orbits (snp), are shown in Fig. 3.12 for the representative values κ0 = −0.1, κ1 = 1, μ2 = −1 in the domain where three fixed points exist. The loci of Hopf and saddle-loop (sl) bifurcations are straight lines, and each fixed point is stable to the left of the respective Hopf locus. The snp0 locus, computed by requiring both the dissipation integral (3.31) and its derivative with respect to E to vanish, is drawn in this figure only for the “large” orbit. It emerges tangentially from the respective sl0 line at the codimension-two point nsl0, where the homoclinic orbit is neutrally stable. Locating this point precisely requires a special procedure, since the derivatives of the integrals ∫ xᵏ √(E − V(x)) dx in (3.31) diverge there. The phase portraits in each parametric domain are marked by the code explained in Sect. 3.1.2. We can see in Fig. 3.12a, drawn in an extended parametric domain, that the loci of Hopf and sl bifurcations for both the upper and the lower stationary states intersect, so that at positive and weakly negative values of μ0 the saddle-loop bifurcation occurs while the fixed point is still stable. Recalling the qualitative analysis in Sect. 3.1.1, this suggests that the Hopf bifurcation should be subcritical at these values of the parameters and the sl bifurcation takes place on the branch of unstable periodic orbits. The symbol ¯0 changes to ¯1 when the respective Hopf line is crossed from right to left. The unstable orbits break at the saddle-loop lines, changing the symbol ¯1 to 0.
A blowup near the intersection of the three sl lines is shown in Fig. 3.12b. A stable “large” orbit exists to the right of the line formed by the segment of the line sl0 to the left of the point nsl0 (where the homoclinic orbit is stable) and the

Figure 3.12. (a): Loci of Hopf (dashed lines) and saddle-loop (sl, solid lines) bifurcations in the parametric plane μ0 , μ1 at κ0 = −0.1, κ1 = 1, μ2 = −1, showing also the snp0 locus of “large orbits” (the curved black line) emerging tangentially from the respective sl0 line at the point nsl0 . (b): Blowup near the intersection of the three sl lines. The codes of the phase portraits are as in Fig. 3.3

snp0 locus. This orbit opens up on the sl0 line to the left of the point nsl0, while on the snp line it annihilates with an unstable orbit. The latter lies within the stable one but still outside the homoclinic loop, and opens up on the unstable segment of the sl0 locus to the right of the point nsl0.

Resolving the snp bifurcation. The parametric region near the intersection of the Hopf and sl lines is resolved on a larger scale in Fig. 3.13 where, to get a clearer picture, the difference Δμ1 between the values of μ1 for snp or sl and Hopf bifurcations is plotted against μ0, so that the locus of the Hopf bifurcation is mapped onto the axis Δμ1 = 0. The intersection point has no particular significance, but the dynamically important codimension-two points are nh, where the third-order coefficient in the expansion at the Hopf bifurcation vanishes, and nsl, where the homoclinic orbit is neutrally stable. These points do not generically coincide and should be connected by an snp branch. The snp loci (not discernible on the scale of Fig. 3.12a and off the range of Fig. 3.12b) can be computed by requiring both the dissipation integral (3.31) and its derivative with respect to E to vanish. The snp line approaches the other two tangentially, forming cusps at both codimension-two points. The fixed point is stable above the Hopf line (the axis Δμ1 = 0 in Fig. 3.13a), which is supercritical to the left and subcritical to the right of the point nh. An unstable periodic orbit surrounds this point in the region ¯100 on the right, confining its attraction basin to its close vicinity. As μ0 further increases, this orbit opens up

Figure 3.13. (a): Blowup of the loci of Hopf (h+ and h−), sl, and snp bifurcations in the parametric plane μ0, μ1 for the orbits surrounding the lower fixed point. (b): The loci of fold (vertical line), Hopf (dashed lines), and saddle-loop (solid lines) bifurcations (blue for the lower and orange for the higher state), and the snp bifurcation between the orbit surrounding the lower state and the “large” orbit (black line) in the parametric plane κ0, μ1 at κ1 = μ0 = 1

at the saddle-loop bifurcation locus to the right of the point nsl bordering the region 000. This, however, cannot happen to the left of the point nsl, where the homoclinic orbit is stable. What really happens at this line segment is the bifurcation of a stable periodic orbit surrounding both the fixed point and the unstable orbit that serves as the boundary of their attraction basins. Both attractors coexist with the stable upper state in the region 200. At the snp locus, the stable and unstable orbits annihilate, and in the region 000 the fixed point attracts trajectories originating beyond the homoclinic loop. In the region 100 there is a stable periodic orbit surrounding the unstable fixed point. To the left of the point nh, this orbit is formed as a result of a supercritical Hopf bifurcation, while to the right of this point it just continues from the region 200, being unaffected by the subcritical Hopf bifurcation annihilating the unstable orbit. The stable orbit opens up at the saddle-loop bifurcation, and in the region ¯000 all trajectories are deflected to a faraway attractor. The dynamics near the higher stationary state is similar, the only difference being that the stationary state is stable below the Hopf line.

Bifurcation diagrams with multiplicity transitions. If the absolute value of the parameter κ0 increases beyond the critical point |κc| = 2/3^{3/2}, the phase portraits 000, ¯000, 0¯00 transform to 0, the portraits ¯100, 0¯10, ¯001, 0¯01, ¯0¯01 to ¯1, and ¯002, 0¯02, ¯0¯02, ¯011, 1¯01 to 2. Since the saddle-node bifurcation is impossible in the regions 100 and 010, they have to terminate in a cusp at a double-zero point on the sn locus.

3.4. ALMOST HAMILTONIAN DYNAMICS


We now fix the value μ0 = 1 and explore transitions involving a change of the number of fixed points when κ0 passes the saddle-node locus. The Hopf bifurcation points for the two stable stationary states are defined, as before, by the condition g(xs) = 0. The Hopf bifurcation locus for the lower stationary state (the dashed blue line in Fig. 3.13b) smoothly continues across the saddle-node locus (black line), while that of the higher state (dashed orange line) terminates there, approaching it tangentially. The two Hopf lines converge tangentially at κ0 = 0, where the solution is symmetric. Homoclinic orbits must pass through the saddle, and therefore also have to terminate at the saddle-node bifurcation locus. The evolution of the homoclinics with changing κ0 is shown by solid lines in Fig. 3.13b: blue for the loop surrounding the lower state, and orange for the loop surrounding the upper state. The sniper bifurcation line cannot appear in this diagram, since at δ ≪ 1 the stable fixed points are foci rather than nodes, except in a close vicinity of the saddle-node locus.

3.4.4

Basin Boundaries



Figure 3.14. Rolled trajectories attracted to two alternative stable fixed points

In several parametric domains, there are multiple attractors (either stable stationary states or stable periodic orbits). Their attraction basins are separated either by stable manifolds of saddle points or by unstable periodic orbits. Returning to Fig. 3.3, we see two attractors with the stable manifold of a saddle point serving as the basin boundary in the configurations 000, 010, 020, and 0¯01. Notably, in the last case, this boundary has the form of a closed loop with both branches emerging from the unstable stationary state. In other cases, the bounding trajectories come from infinity. In the configurations 0¯10, ¯1¯01, and 0¯02, the basin of a stable stationary state is surrounded by an unstable cycle. In this case, as well as in the configuration 0¯01, the stable state with a limited basin can never be attained starting from faraway initial conditions. Three stable states, with two of them hidden within unstable cycles, are seen in the configurations ¯1¯11 and 0¯12; in the latter case, there is a double wrapping, one by an unstable cycle surrounding the upper stationary state and the other by a “large” unstable orbit separating both hidden attractors from the surrounding “large” stable orbit. In the configuration 020, one stable state is hidden within an unstable orbit, while the other two attractors, a stable stationary state and a stable orbit, are separated by the stable manifold of a saddle point. In the same way, two attractors are separated in the configurations 002 and 012, while both


are surrounded by a “large” unstable orbit. In the configuration 022 there are already four attractors, with both kinds of basin boundaries. In the case of almost Hamiltonian dynamics, the stable manifold of the saddle point forming a basin boundary is highly convoluted, so that it cannot be well resolved in the weak-dissipation approximation. Since all trajectories spiral down along the orbits of the corresponding conservative system, slowly approaching either an attractive orbit or a fixed point, the attraction basins are neatly rolled around each other, so that either the higher or the lower attractor might be reached starting from nearby points lying outside the homoclinic figure-eight (Fig. 3.14). To the left of the sl0–snp0 line in Fig. 3.12a, all trajectories are attracted to the higher state in the region ¯000 and to the lower state in the region 0¯00; in the regions ¯100 and 0¯10, outlying trajectories behave in the same way, but the trajectories lying within an unstable orbit spiral down to the nearby fixed point. In the region 000, both fixed points are attractors, with a convoluted basin boundary, as described above. When the snp0 line is crossed, the dynamics remains qualitatively the same within the emerging “large” unstable orbit, which now forms the basin boundary of the “old” attractors; outside this orbit, the trajectories are attracted to the “large” stable orbit. Following the breakdown of the unstable orbit at the unstable segment of the sl0 line, the attraction basins of the “large” orbit and the surviving stable fixed point become intertwined in the regions ¯001 and 0¯01, being separated by a convoluted stable manifold of the saddle point. The “large” orbit becomes the sole attractor only in the region ¯0¯01.

3.5 Systems with Separated Time Scales

3.5.1 Fast and Slow Variables

The analysis in the preceding section demonstrates that global behavior can be followed most easily when the characteristic time scales associated with the two variables are widely separated. However, this approach is not restricted to almost Hamiltonian dynamics. The general form of a two-variable system with separated scales is

ε ẋ = F(x, y),   ẏ = G(x, y)   (3.34)

with ε ≪ 1. The line F(x, y) = 0 is a slow manifold. The system spends most of the time on its attracting branches. Oscillations take place when no stable stationary points with G(x, y) = 0 lie on these branches, which must consist in this case of disconnected parts with their attraction basins separated by repelling branches of the slow manifold. The dynamics may still display subtle details that can be resolved only by singular perturbation techniques. Let x = x(y) be a solution of F(x, y) = 0. The slow variable glides along some attracting branch of the slow manifold following the equation

ẏ = G(x(y), y).   (3.35)


If the right-hand side of this equation has no zeroes on this branch, the system reaches a turning point, say, y = y*. Save for rare occasions when the system lingers in the vicinity of the turning point (see Sect. 3.5.4), this necessitates a fast jump to another branch. The applicable equation on the fast time scale is ẋ = F(x, y*). The fast stage is called a boundary layer, by analogy with hydrodynamic phenomena, where fast changes occur in thin layers, often adjacent to boundaries. The slow and fast solutions have to be matched, as shown in the examples below. The jump is followed by slow gliding on another branch of the slow manifold, and if no zeroes of G(x(y), y) with an alternative solution of F(x, y) = 0 are found there, another jump takes place. In the regime of relaxation oscillations, the system eventually returns to the original branch.
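The boundary-layer structure is easy to see numerically even on a toy example. The linear system below is ours, not the book's, chosen so that the slow manifold is simply x = y: after a fast transient of duration O(ε), the trajectory tracks the slow manifold to within O(ε).

```python
import numpy as np

eps = 0.01
def rhs(s):
    # toy slow-fast pair in the form (3.34): eps*x' = y - x (fast), y' = -y (slow);
    # the slow manifold F(x, y) = y - x = 0 is simply x = y
    x, y = s
    return np.array([(y - x) / eps, -y])

s = np.array([3.0, 1.0])              # start far off the slow manifold
dist0 = abs(s[0] - s[1])              # initial distance from the slow manifold
dt = 1e-3
for _ in range(2_000):                # RK4 integration to t = 2
    k1 = rhs(s); k2 = rhs(s + 0.5*dt*k1)
    k3 = rhs(s + 0.5*dt*k2); k4 = rhs(s + dt*k3)
    s = s + dt/6 * (k1 + 2*k2 + 2*k3 + k4)
dist_end = abs(s[0] - s[1])
print(dist0, dist_end)                # O(1) before the boundary layer, O(eps) after
```

The fast (boundary-layer) stage relaxes x toward the slow manifold; afterwards the dynamics is governed by the reduced slow equation (3.35) up to O(ε) corrections.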

3.5.2

Van der Pol Oscillator

A celebrated example is the unforced van der Pol oscillator, which has the form (3.25) with f(x) = −x, g(x) = μ(1 − x²):

ẋ = y,   ẏ = μy(1 − x²) − x.   (3.36)

At μ = 0, this is a conservative system oscillating around the circle x² + y² = 1, but as this parameter grows, the shape of the periodic orbit changes drastically, as seen in Fig. 3.15a. Already at μ = 5, the system undergoes periodic fast changes, as seen in the time plot in Fig. 3.15b. An alternative form is obtained by applying the Liénard transformation

v = x(1 − x²/3) − y/μ.   (3.37)

Denoting also ε = μ^{-2}, u = x, and rescaling time by the factor μ to fit the notation in (3.34), we reduce (3.36) to the form

ε u̇ = u(1 − u²/3) − v,   v̇ = u.   (3.38)
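The relaxation regime is easy to reproduce numerically. The sketch below (a hand-rolled RK4 integrator, not code from the book) integrates (3.36) at μ = 5 and checks that the extrema of x stay close to ±2, as the slow-manifold analysis of the next subsection predicts.

```python
import numpy as np

def vdp(s, mu):
    # (3.36): x' = y, y' = mu*y*(1 - x**2) - x
    x, y = s
    return np.array([y, mu * y * (1 - x**2) - x])

def rk4_step(f, s, dt, *args):
    k1 = f(s, *args); k2 = f(s + 0.5*dt*k1, *args)
    k3 = f(s + 0.5*dt*k2, *args); k4 = f(s + dt*k3, *args)
    return s + dt/6 * (k1 + 2*k2 + 2*k3 + k4)

dt, steps = 1e-3, 100_000
xs = np.empty(steps)
s = np.array([2.0, 0.0])
for i in range(steps):
    s = rk4_step(vdp, s, dt, 5.0)
    xs[i] = s[0]
amplitude = xs[steps // 2:].max()     # discard the transient half
print(round(amplitude, 2))            # close to 2
```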

Figure 3.15. (a): Evolution of the periodic orbits of the van der Pol oscillator with increasing parameter μ; its values are shown near the respective curves. (b): Relaxation oscillations at μ=5


In the limit ε → 0, the dynamics can be read directly from the plot of null isoclines, i.e., loci of zeros of the right-hand sides of the system. The null isocline of the slow equation (the slow manifold) is S-shaped, with the extrema at u = ±1, v = ±2/3, as seen in Fig. 3.16. The two branches of the slow manifold at |u| > 1 are attracting, and the middle branch is repelling. The other null isocline, u = 0, intersects the slow manifold at the single point u = v = 0, which is unstable, so that the regime of relaxation oscillations is the only possibility.

Figure 3.16. The slow manifold of (3.38). The arrows show the relaxation cycle at ε → 0

Slow evolution drives the slow variable v up on the right-hand branch and down on the left-hand branch of the S-shaped curve, and fast evolution sets on when the respective extremum is reached. The fast variable u jumps from ±1 to ∓2, which immediately explains why its extremal values in the oscillation cycle shown in Fig. 3.15b are ±2, and makes evident the shape of the relaxation cycle alternating gradual evolution with steep jumps.
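The geometry quoted above can be verified by elementary root-finding (a numpy sketch; the polynomial coefficients encode the slow manifold v = u − u³/3):

```python
import numpy as np

# Extrema of the slow manifold v = u - u**3/3: dv/du = 1 - u**2 = 0 at u = ±1,
# giving v = ±2/3.
u_ext = np.array([1.0, -1.0])
v_ext = u_ext - u_ext**3 / 3
print(v_ext)

# Landing point of the jump from (u, v) = (1, 2/3): solve u - u**3/3 = 2/3,
# i.e. u**3 - 3*u + 2 = 0; the departure point u = 1 is a double root, and the
# remaining root u = -2 is the jump target.
roots = np.roots([1.0, 0.0, -3.0, 2.0])
print(sorted(roots.real))
```

The double root at the departure point reflects the tangency at the turning point; the jump from the opposite extremum lands at u = +2 by symmetry.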




3.5.3

FitzHugh–Nagumo Equation

Another representative example is the FitzHugh–Nagumo (FN) model with a fast “activator” u and a slow “inhibitor” v. This equation system, with the addition of diffusional terms, has been brought forward as a dressed-down model of impulse propagation in neurons. We will return to the space-dependent problem in Sect. 5.3.1, but here we truncate it to a dynamical system with a small parameter ε:

ε u̇ = u(1 − u²/3) − v,   v̇ = μu − v − ν.   (3.39)

The slow manifold has exactly the same S-shape as in (3.38). A substantial difference is that the equation of the slow inhibitor v now has two parameters μ, ν, which enables changing the character of solutions by moving and rotating the corresponding straight-line null isocline. As before, the dynamics can be read in the limit ε → 0 directly from the plot of null isoclines. The fast variable u is slaved to v: it is attracted to either stable (descending) branch of the S-shaped curve and slides along it following the slow evolution. Now several possibilities exist, schematized in Fig. 3.17. Similar to the van der Pol equation, relaxation oscillations take place if there are no stable stationary states on either branch (Fig. 3.17a). This happens when all intersections of the two null isoclines lie on the ascending unstable branch. The necessary condition is that the slope of the straight line be higher than that of the line connecting the two extrema of the S-shaped curve, so that μ > 2/3. The values of ν are limited by the condition that

Figure 3.17. Null-isoclines of the FN system (3.39). (a) Oscillatory system; the grey line shows the periodic orbit. (b) Excitable system; the gray line shows a trajectory starting from the perturbed stationary state. (c) Bistable excitable system. (d) Bistable system

the intersection point should lie within the interval |u| < 1, |v| < 2/3, which requires |ν| < μ − 2/3. The orbit of relaxation oscillations always has a large amplitude; it cannot coexist with a stationary state, and, as long as it exists, it is insensitive to changes of μ and ν. Other possibilities exist when there are stable intersections of the two isoclines on the descending branches of the slow manifold. If such an intersection lies not far from the locus of oscillatory transition, the system is excitable. A typical disposition of null-isoclines is shown in Fig. 3.17b. The fixed point lying close to the turning point is stable, but a small finite perturbation displacing the phase point below the minimum of the S-shaped curve initiates transition to the alternative branch. The system returns to the stationary state only after a very long excursion, following a roundabout route along both branches, as seen in the figure. In the case shown in Fig. 3.17c, both stationary states are excitable, and the system may switch between them when triggered by a weak perturbation. A much stronger perturbation is needed to initiate a switch when the null-isoclines are arranged as in Fig. 3.17d.
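The oscillation criterion can be checked by locating the isocline intersections numerically (a sketch; the cubic below results from equating the two null isoclines of (3.39), and the parameter values are chosen here only for illustration):

```python
import numpy as np

def fn_intersections(mu, nu):
    # Intersections of v = u - u**3/3 with v = mu*u - nu reduce to
    # u**3 - 3*(1 - mu)*u - 3*nu = 0; return the real solutions
    r = np.roots([1.0, 0.0, -3.0 * (1.0 - mu), -3.0 * nu])
    return sorted(u.real for u in r if abs(u.imag) < 1e-9)

def oscillatory(mu, nu):
    # relaxation oscillations require every intersection to sit on the
    # repelling middle branch |u| < 1; the text's criterion is
    # mu > 2/3 together with |nu| < mu - 2/3
    return all(abs(u) < 1 for u in fn_intersections(mu, nu))

print(oscillatory(1.0, 0.1))   # True  (inside |nu| < mu - 2/3)
print(oscillatory(1.0, 0.5))   # False (a stable state sits on an outer branch)
```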


3.5.4

Canards

Crossing the parametric boundary of relaxation oscillations |ν| < μ − 2/3 represents a special kind of bifurcation. Normally, a transition to oscillatory dynamics would take place via a Hopf bifurcation generating a small-amplitude (either stable or unstable) periodic orbit in the vicinity of the bifurcation point. The transition to large-amplitude relaxation oscillations requires a different mechanism, and involves a special subtle phenomenon that we will now explore taking the FN system (3.39) as a testing ground. Jumping to an alternative branch is what normally happens at turning points, but a trajectory climbing up the slow manifold may instead be attracted to a nearby fixed point or a small-amplitude limit cycle, rather than jumping to an alternative stable branch, if it is directed to the “inner” side of the unstable branch of the slow manifold. In order to obtain an equation system applicable in the vicinity of the turning point, we set

u = 1 − εx,   v = 2/3 + ε²y,   μ = μ0 + εμ1 + ε²μ2,   μ0 = ν + 2/3.   (3.40)

Using this in (3.39), we arrive in the leading order of the expansion at the equation system

ẋ = y + x²,   ẏ = ε^{-1}(μ1 − μ0 x) − (μ1 x + y − μ2).   (3.41)

The roles of the two variables have now interchanged: it is now y that changes fast and is slaved to x, but the O(1) term in the second equation is important. Fast dynamics near a suitable point x = xs can be eliminated by setting μ1 = μ0 xs. The remaining system, evolving on a slow scale, has the stationary solution

xs = √(μ2/(μ0 − 1)),   ys = −xs² = −μ2/(μ0 − 1).   (3.42)

Stability of this fixed point is determined by the Jacobi matrix

J = ( 2xs              1
      −μ0(ε^{-1} + xs)   −1 ).   (3.43)

Its determinant is positive and large, and the trace vanishes, in the leading order, at xs = 1/2, or μ2 = (μ0 − 1)/4, where a Hopf bifurcation takes place. Following this bifurcation, a stable periodic orbit appears, but it exists only within a diminutive parametric domain. This is the canard phenomenon discovered by Benoît (1983). The term, sometimes rendered as “French duck” and even illustrated by a bird shape embedded into the S-shaped curve, should really be associated with a word for fake news in both English and French, going back to the venerated Parisian satirical weekly Le Canard enchaîné and the French slang word for a newspaper. It deserves the association with fake news or elusive birds because it is hard to catch, as indicated by the subtle adjustment of the parameters necessary to ensure the existence


of a fixed point. The eigenvalues and the corresponding eigenvectors of (3.43) are, in the leading order,

λ = ±i√(μ0/ε),   U = { ±i(μ0/ε)^{-1/2}, 1 }.   (3.44)

This indicates fast motion along the emerging orbit with the period of O(√ε). Our usual procedure of expansion in a small parameter near a bifurcation point cannot be implemented here in a straightforward way, since a small parameter is already present both in the basic problem and in the eigenvalues and eigenvectors of its linearization.
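A quick numerical cross-check of the leading-order stability result (a sketch; the matrix entries follow (3.43) with the small parameter ε restored where the source text is ambiguous, so treat the exact form as an assumption):

```python
import numpy as np

def jac(xs_, mu0, eps):
    # (3.43), assuming eps enters as eps**-1 in the lower-left entry:
    # J = [[2*xs, 1], [-mu0*(1/eps + xs), -1]]
    return np.array([[2.0 * xs_,                 1.0],
                     [-mu0 * (1.0 / eps + xs_), -1.0]])

eps, mu0 = 1e-4, 1.0
J = jac(0.5, mu0, eps)
tr = np.trace(J)                      # vanishes at xs = 1/2: the Hopf point
lam = np.linalg.eigvals(J)
freq_scaled = abs(lam[0].imag) * np.sqrt(eps / mu0)
print(tr, freq_scaled)                # trace 0; scaled frequency close to 1
```

The eigenfrequency indeed scales as √(μ0/ε), confirming the O(√ε) period of the emerging orbit.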

3.6 Venturing to Higher Dimensions

3.6.1 Dynamics Near Triple-Zero Eigenvalue

Three dimensions make a difference. Much more varied dynamics is possible in 3D. Canards thicken to tori nested within folded manifolds; they become generic and therefore easier to catch (Vo, 2017). But what is much more important is that trajectories may weave in complicated ways, which is impossible in 2D, where they cannot intersect; this makes chaotic dynamics possible, the subject of the next chapter. The new kind of fixed point, first appearing in 3D, is a saddle-focus with one real and two complex conjugate eigenvalues. When such a stationary state is unstable, chaos is likely to emerge (more on this in Sect. 4.3). We will not dwell on this here but explore possible advantages of scale separation methods (enabled by the presence of either a small parameter in a dynamical system or slowly evolving integrals of motion of a truncated system) in higher dimensions.

The vicinity of a bifurcation at a triple-zero eigenvalue is a natural place to look for different types of chaotic regimes and routes to chaos in dynamical systems. One can expect to encounter there a large variety of behavior characteristic of 3D motion, in the same way as all kinds of 2D phase portraits are observed near a bifurcation at the double-zero eigenvalue (Sect. 3.3). All this variety would be tightly packed into a narrow parametric patch near the singular point, and dynamics scaled down to small-amplitude motions more amenable to analytical study (Arnéodo et al., 1982; Pismen, 1987). This highly degenerate singular point is commonly found in chaotic models, and the behavior is expected to be highly complicated in its vicinity. If a triple-zero degeneracy is found, a 3D dynamical system can be reduced, by projecting on a suitable basis, to the form

u̇ = Lu + f(u),   (3.45)

where f(u) is a nonlinear array-function of the 3D array of variables u, and L is the normal Jordan block

L = ( 0 1 0
      0 0 1
      0 0 0 ).   (3.46)


A higher-dimensional system can also be reduced near a triple-zero degeneracy to a normal form in the way it has been done in the case of a double-zero degeneracy in Sect. 3.3.2, but we will not engage here in this kind of boring algebra. The three eigenvectors of L satisfy LU = 0, LV = U, LW = V. The three respective amplitudes, to be denoted x, y, z, are graded, as in Sect. 3.3.2, by their magnitude. If, say, x = O(ε) and time is extended by the factor of O(ε^{-1/2}), we have y = O(ε^{3/2}) and z = O(ε²).

Dynamics near the triple-zero singularity. Adopting this scaling and allowing for parametric deviations of a suitable order, we arrive, after collecting terms up to the third order in ε and transforming the variables in a way similar to that in Sect. 3.3.2, at the system of amplitude equations (Pismen, 1987):

ẋ = y,   ẏ = z,   ż = (α + x)y + ε^{1/2} G(x, y, z),
G(x, y, z) = κ0 + κ1 x + κ2 z + μ1 y² + μ2 xz − x³.   (3.47)

With the adopted scaling, in the last equation ż = O(ε^{5/2}), and so is the first term on the right-hand side, provided the coefficient α is contributed by O(ε^{3/2}) parametric deviations. The three nonlinear terms in the function G(x, y, z) are O(ε³). The coefficients κi are due to parametric deviations, respectively, of the O(ε³), O(ε²), and O(ε) order. The term quadratic in x is omitted, and the negative cubic term prevents trajectories from escaping to infinity.

Truncating (3.47) at ε → 0 and replacing ẏ → ẍ, ż → x⃛ leads to a traveling-wave form of the Korteweg–de Vries equation with the parameter α playing the role of the propagation velocity:

x⃛ − xẋ − αẋ = 0.   (3.48)

This equation has a continuous family of stationary states x = q, y = z = 0 with arbitrary q, and possesses two integrals of motion:

K(x, z) = z − αx − x²/2,   H(x, y, z) = y²/2 − xz + αx²/2 + x³/3.   (3.49)

The eigenvalues of (3.48) linearized near the branch of stationary points are 0 and ±√(q + α). At q > −α, the stationary point has a 1D stable and a 1D unstable manifold, in addition to the 1D center manifold directed along the locus of stationary points in the 3D phase space {x, y, z}. At q < −α, the stationary point has a 3D center manifold. To underline this distinction, we shall call stationary points at q > −α and q < −α, respectively, hyperbolic and elliptic. The branches of elliptic and hyperbolic points K(q, 0), H(q, 0, 0) meet in a cusp, as shown in Fig. 3.18. Since K(q, 0) is symmetric, and H(q, 0, 0) antisymmetric, with respect to the simultaneous reversal of the signs of α and q, the curves at negative α are mirror images of those shown.
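The two integrals of motion are easy to verify numerically by integrating the truncated system and monitoring K and H along the trajectory (a sketch with a hand-rolled RK4; the value of α and the initial point are chosen arbitrarily):

```python
import numpy as np

alpha = 1.0

def rhs(s):
    # truncated system (3.47) at eps -> 0: x' = y, y' = z, z' = (alpha + x)*y
    x, y, z = s
    return np.array([y, z, (alpha + x) * y])

def K(s):
    x, y, z = s
    return z - alpha * x - 0.5 * x**2

def H(s):
    x, y, z = s
    return 0.5 * y**2 - x * z + 0.5 * alpha * x**2 + x**3 / 3

s = np.array([-2.0, 0.1, 0.0])        # near the elliptic branch q < -alpha
K0, H0 = K(s), H(s)
dt = 1e-3
for _ in range(10_000):               # RK4 over t in [0, 10]
    k1 = rhs(s); k2 = rhs(s + 0.5*dt*k1)
    k3 = rhs(s + 0.5*dt*k2); k4 = rhs(s + dt*k3)
    s = s + dt/6 * (k1 + 2*k2 + 2*k3 + k4)
drift_K, drift_H = abs(K(s) - K0), abs(H(s) - H0)
print(drift_K, drift_H)               # both conserved to solver accuracy
```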

There is a continuous two-parametric family of orbits, parametrized by the integrals K, H, surrounding elliptic fixed points. A family with K = const and changing H is shown in Fig. 3.19a, and a family with H = const and changing K in Fig. 3.19b. Each family collapses to a point at one limit of its existence and, at the opposite end, expands to a homoclinic passing through a hyperbolic point, beyond which trajectories escape to infinity. At ε ≠ 0, both integrals of motion evolve on an extended time scale:

K̇ = ε^{1/2} G(x, y, z),   Ḣ = −xK̇.   (3.50)

Figure 3.18. Branches of elliptic and hyperbolic points at selected values of α > 0

As in Sect. 3.4, the drift of H, K can be computed by averaging over trajectories. Since the function G(x, y, z) has a number of parameters, there are many possibilities here – but no surprises. Since H, K evolve in 2D, it is impossible to detect chaotic motion while relying on scale separation in this system. Moreover, the slow dynamics is fragile, since the region where closed orbits exist is limited, and the integrals of motion may be driven out to the domain of runaway trajectories.

Figure 3.19. (a): The family of closed orbits with K = 0 and H increasing outward from −2.65 to −2.05 with the increment 0.15. (b): The family of closed orbits with H = −2 and K decreasing outward from 0.3 to 0 with the increment 0.1


3.6.2

Double Hopf Bifurcation

In 4D, two Hopf bifurcations may take place. In a degenerate case, they happen at the same set of parameters. Generically, this leads to quasiperiodic motion. A special case is a resonance between the two oscillations, which leads to their nonlinear interaction. In the cases of 1:2 and 1:3 resonance, this interaction arises already on the level of the lowest-order amplitude equations. Consider the two amplitudes presented in the polar form

a1 = ρ1 e^{iωt},   a2 = ρ2 e^{2iωt}.   (3.51)

Collecting the resonant terms up to the third order leads to the amplitude equations

ȧ1 = a1(κ1 − |a1|²) + μ1 ā1 a2,   ȧ2 = a2(κ2 − |a2|²) + μ2 a1²,   (3.52)

where the parameters μi are determined by the bifurcation expansion of an underlying dynamical system, and the parameters κi depend on parametric deviations from the bifurcation point. As usual, the coefficient at the cubic term, which should be negative at a supercritical bifurcation, is reduced to −1 by rescaling. The polar form of (3.52), obtained by setting ai = ρi e^{iθi} and separating the real and imaginary parts, reduces to just three, rather than four, equations, since the two phases θi enter only in the combination Θ = 2θ1 − θ2:

ρ̇1 = ρ1(κ1 − ρ1²) + μ1 ρ1 ρ2 cos Θ,   (3.53)
ρ̇2 = ρ2(κ2 − ρ2²) + μ2 ρ1² cos Θ,   (3.54)
Θ̇ = −(1/ρ2)(2μ1 ρ2² + μ2 ρ1²) sin Θ.   (3.55)

The generic fixed points of the last equation are Θ = 0 and Θ = π. Dynamics would be nontrivial when the stability of these states can interchange, which may happen when the coefficients μi have different signs. We explore this case and, to reduce clutter, set μ1 = −μ2 = μ, κi = 1, to arrive at the simplified form

ρ̇1 = ρ1(1 − ρ1²) + μ ρ1 ρ2 cos Θ,   (3.56)
ρ̇2 = ρ2(1 − ρ2²) − μ ρ1² cos Θ,   (3.57)
Θ̇ = −(μ/ρ2)(2ρ2² − ρ1²) sin Θ.   (3.58)

The dependence ρi (μ) in the case Θ = 0, obtained by solving (3.56), (3.57), which reduce to a cubic equation, is shown in Fig. 3.20a. There are two stable solutions with Θ equal to either 0 or π at μp < |μ| < μc , where μc ≈ 0.3254 is the location of the saddle-node bifurcation point and μp ≈ 0.2118. A single stable solution with Θ = 0 at μ > 0 or Θ = π at μ < 0 exists at |μ| < μp , and none at |μ| > μc . At |μ| > μc , dynamics is oscillatory. A typical example of the change of the variables with time during a single period is shown in Fig. 3.20b. Dynamics remains qualitatively the same at larger values of μ. Detailed derivations and stability analysis are given in the accompanying Mathematica notebook DoubleHopf.nb.
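The quoted value μc can be recovered directly from the stationary equations: eliminating ρ1 at Θ = 0 leaves a cubic in ρ2, and the saddle-node point is where that cubic acquires a double root (a sketch; the elimination step is ours, not spelled out in the text):

```python
import numpy as np

# Stationary states at Theta = 0: rho1**2 = 1 + mu*rho2 from (3.56) turns
# (3.57) into the cubic rho2**3 - (1 - mu**2)*rho2 + mu = 0.
# A double root appears where the discriminant 4*(1 - mu**2)**3 - 27*mu**2
# vanishes; locate that zero by bisection.
def disc(mu):
    return 4 * (1 - mu**2)**3 - 27 * mu**2

lo, hi = 0.2, 0.4                    # disc > 0 at lo, disc < 0 at hi
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if disc(mid) > 0:
        lo = mid
    else:
        hi = mid
mu_c = 0.5 * (lo + hi)
print(round(mu_c, 4))                # 0.3254, matching the value quoted in the text
```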



Figure 3.20. (a): Dependence on μ of the stationary values of ρ2 (blue curve for Θ = 0 and green curve for Θ = π) and ρ1 (red curve for either case). The solid lines show the stable, and the dashed lines, unstable solutions. (b): Oscillatory solution at μ = 0.5. The blue, orange, and green curves show the change, respectively, of ρ1, ρ2, Θ with time during a single period

3.6.3

Blue Sky Catastrophe

The latest kind of bifurcation of periodic orbits to be discovered is the blue sky catastrophe. Most commonly, a periodic orbit disappears either with a finite period and zero amplitude at the Hopf bifurcation, or with a finite period and amplitude at the saddle-node bifurcation, or with an infinite period and finite amplitude at the homoclinic bifurcation. At the blue sky catastrophe, both the period and the amplitude become infinite before the orbit collapses. The period grows as the orbit develops an infinite number of loops before the catastrophe. One cannot help thinking of the exponential growth of the economic output and complexity of our civilization, which could terminate in a collapse of this kind. A specific example of an equation system with a periodic orbit growing to a great length without losing stability on the way was reported by Gavrilov and Shilnikov (2000):

ẋ = x(2 + μ − 10(x² + y²)) + z² + y² + 2y,
ẏ = −z³ − (1 + y)(z² + y² + 2y) − 4x + μy,
ż = (1 + y)z² + x² − ε.   (3.59)

The orbit at μ = 0.456, ε = 0.0357, just before disappearing in the blue sky catastrophe, is shown in Fig. 3.21a. We see here the region of tightly packed loops, indistinguishable on the scale of the plot, and a large-amplitude excursion connecting the


Figure 3.21. (a): The orbit at μ = 0.456, ε = 0.0357. (b): The Poincaré map of the same orbit, showing values of z at the intersection with the plane x = 0

opposite ends of this coil. The structure of the orbit is better seen in the Poincar´e map in Fig. 3.21b. The map with the plane x = 0 taken as the Poincar´e section is almost rectilinear, with the values of y remaining almost constant, and therefore only the values of z are plotted. The displayed sequence corresponds to two revolutions of the orbit, with the tightening sequences well resolved but the connecting trajectory remaining outside of the plot.
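A Poincaré section of this kind is straightforward to compute by detecting sign changes of x between integration steps and interpolating the crossing. The sketch below (not the book's code) demonstrates the machinery on a toy rotation with a drifting z, where the returns are known exactly:

```python
import numpy as np

def poincare_z(rhs, s0, dt, steps, direction=1):
    # RK4 integration; record z at crossings of the plane x = 0 taken in a
    # fixed direction, locating each crossing by linear interpolation
    zs, s = [], np.array(s0, float)
    for _ in range(steps):
        k1 = rhs(s); k2 = rhs(s + 0.5*dt*k1)
        k3 = rhs(s + 0.5*dt*k2); k4 = rhs(s + dt*k3)
        s_new = s + dt/6 * (k1 + 2*k2 + 2*k3 + k4)
        if s[0] * s_new[0] < 0 and np.sign(s_new[0] - s[0]) == direction:
            w = -s[0] / (s_new[0] - s[0])        # fraction of the step to x = 0
            zs.append(s[2] + w * (s_new[2] - s[2]))
        s = s_new
    return zs

# Demo: uniform rotation in (x, y) with z growing linearly in time, so
# successive same-direction returns are one full turn (2*pi) apart
rhs = lambda s: np.array([-s[1], s[0], 0.1])
zs = poincare_z(rhs, [1.0, 0.0, 0.0], 1e-3, 40_000)
print(np.round(np.diff(zs), 3))       # uniform increments of 0.1 * 2*pi
```

The same routine, applied to (3.59), produces the nearly rectilinear map of Fig. 3.21b.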

Further Reading

The classical text of Andronov, Vitt, and Khaikin (1966) extends to the global bifurcations covered in this chapter. Later advanced surveys are Guckenheimer and Holmes (1983), Kuznetsov (1995), and Shilnikov et al. (2001); the latter book contains a detailed study of the various types of homoclinic bifurcations in 3D. Singular bifurcations, unfolding of the double-zero singularities, and almost Hamiltonian dynamics are discussed in detail by Pismen (1984, 1985). The canard phenomenon was discovered by Benoît et al. (1979). A rational analysis of the emergence of canards was given by Eckhaus (1983). Multiscale perturbation expansions in other contexts are described at length by Kevorkian and Cole (1996).

4 Chaotic, Forced, and Coupled Oscillators

Chapter Overview

In this chapter, we definitely step out to higher phase-space dimensions, and explore chaotic dynamics in autonomous systems, and then the dynamics of forced and interacting oscillators. We start with conservative systems, particularly prone to become chaotic, as was already understood by visionary mathematicians late in the 19th century. A model example will illustrate the transition to chaos and its quantitative characteristics, which are not easy to measure. We continue with exploring chaotic attractors in dissipative systems, distinguished by their complex geometric and dynamic structure. Computing Poincaré and next-return maps allows us to get more insight into the structure of attractors than nice pictures of trajectories. Sometimes the behavior can be imitated by simple model maps, which were widely studied when chaos became fashionable in the 1970s. Qualitative arguments explain the emergence of chaos near homoclinic orbits, illustrated and supported by ingeniously designed continuous models. Their dynamics may be imitated by discrete next-return maps, but these are apt to have a more complicated structure, and reveal more than a single way of transition to chaos. Further complexity is introduced by external forcing, which may lead to contradictory results: either turning regular oscillations chaotic, or synchronizing them with the forcing frequency. Interacting oscillators exert mutual forcing, which may also cause synchronization, of both coupled oscillators and their large ensembles.


4.1 Approaches to Hamiltonian Chaos

4.1.1 Hiding in Plain Sight

Ubiquity of chaos. Chaos is a fascinating phenomenon that prompted scientists and philosophers in the last third of the 20th century to rethink fundamental questions of determinism, predictability, and randomness. It had always been commonplace knowledge that complex phenomena, influenced by a great number of different factors, are unpredictable (at least from a practical point of view) and are amenable to statistical methods only. Statistics gives, however, little consolation when one has to live through a single decisive event, as gamblers, market traders, and weather forecasters know too well. Predicting the future has been a preoccupation of people of different walks of life, from clairvoyants to stock traders, using no less diverse methods. It was therefore a kind of cultural shock (at least for those who were able to comprehend it) that systems as simple as three-variable ODEs or even single-variable discrete mappings could be capable of unpredictable – chaotic – behavior. Deterministic chaos is not an exotic phenomenon at all and, probably, is met much more often than it is reported. The catch is that it may be hard to tell whether the observed irregular behavior is caused by extraneous noise, hidden variables, experimental errors – or by the intrinsic dynamics. The very nature of chaos is such that it looks superficially like a random process, while its internal structure, distinguishing deterministic chaos from the effects of random noise, is not easy to discern. Chaos is actually a very complicated structured pattern brought to such a baroque sophistication that its orderliness is not recognized anymore. Studying process dynamics in a wide range of parameters may give a hunch that the observed chaos is indeed deterministic if it is confined to a certain range of parameters and disappears elsewhere.
Of course, this would not yet qualify as a proof, since other spontaneously emerging dynamic features, like the appearance of metastable states, may be responsible for additional frequencies and an outward picture of disorder, as well as for enhanced sensitivity to background noise. Paradoxically, the more erratically a dynamic system behaves, the more precise a technique is required for its proper identification.

Determinism vs. randomness. Nondissipative Hamiltonian systems are the most susceptible to chaos. Why did it take so long to notice? Why was mechanical motion seen as the ultimate expression of determinism? Laplace's demon, as imagined by one of the great architects of classical mechanics, could compute the future in its entirety when all equations and initial conditions are known. This was supposed to have dire philosophical and theological implications for the problem of free will: are we responsible for our deeds if everything is preordained? At the birth of quantum mechanics, some physicists enjoyed and some deplored its probabilistic character. It was a misplaced concern. The wave function evolves deterministically and even smooths out classical instabilities. If particle trajectories become unpredictable in the quantum-mechanical description, it is just because the very notion of a trajectory has no sense in the quantum-mechanical framework. Albert Einstein famously remonstrated that God does not play dice. But what is playing dice, or any game of chance in an old-fashioned casino? It is a purely mechanical action perfectly obeying Newton's laws. Laplace's demon would definitely be able to break a casino. The point is in knowing the initial conditions, "all positions of all items of which nature is composed", to infinite precision, a truly demonic task. Even with no quantum uncertainty, "the future is hard to see", as a naive song says.

The opposite point of view, emphasizing randomness, prevailed in the statistical mechanics of Ludwig Boltzmann and Josiah Willard Gibbs, which came to center stage in the late 19th century. It dealt with very large ensembles of particles, and relied on the ergodic hypothesis, which assumes that over a long period of time the system visits all available states with equal probability. The hypothesis is indispensable because it makes it possible to identify time and ensemble averages. It supposes "molecular chaos" – and this was the traditional use of the word chaos before it was first applied (May, 1974b) to complex dynamics, which is neither predictable nor ergodic.

Chaos in the skies. Celestial motions, observable with great precision and unperturbed by Earthly uncertainties, by friction and other kinds of dissipation, were the natural playground of classical mechanics. Newton's theory perfectly explained the revolution of the planets around the Sun, but more subtle interactions involving moons and asteroids remained a challenge, even though there was no doubt that they obey the same laws. The three-body problem was particularly important, since it was connected with the fundamental question of the stability of the solar system. More than 800 papers relating to this problem were published between 1750 and 1900 (Barrow-Green, 1997), a great number for the time.
In a rough form, the problem requires solving nine second-order equations, but it can be reduced to a sixth-order system by finding twelve integrals of motion, and further simplified by assuming one body to be a small planetoid not influencing the other two. Henri Poincaré (1890) took part in, and won, the challenge announced by the King of Sweden to solve the three-body gravitational problem. After first arriving at an apparently rational solution based on a power series expansion (which was the one that actually won the prize), he understood that this series actually diverges, and that a three-body system behaves chaotically. Trajectories of two oscillators revolving around tori are closed if the ratio of their frequencies is rational, $\omega_1/\omega_2 = n/m$ with integer m, n. Otherwise, the motion is quasiperiodic, and, in the long run, trajectories uniformly cover the surfaces of the respective tori. Since the entirety of rational numbers forms a set of measure zero on the real line, the motion is expected to be generically ergodic. The problem is what happens when the motion around the tori is perturbed, even weakly. The grasp of this problem is what led Poincaré to the discovery of deterministic chaos in conservative systems. Almost concurrently, Jacques Hadamard (1898) found that trajectories of frictionless particles moving on a billiard with concave walls, or along geodesics on a surface of negative curvature, diverge exponentially from one another, which is a characteristic of chaotic motion.

The insights of Poincaré and Hadamard were not followed up for more than half a century, while physicists were brooding on quantum uncertainty. The next chaotic system, also failing to attract much attention, was Khaikin's "universal circuit", devised in the course of studies of oscillations in electrical networks (Khaikin, 1930; Andronov et al., 1937). Chaotic behavior was also detected by Mary Cartwright and John Littlewood (1945) in the dynamics of the forced van der Pol oscillator, another model inspired by electrical circuits, which we will consider among other problems of this kind in Sect. 4.5.2. Yet, deterministic chaos did not become popular until the last third of the 20th century.

4.1.2 Resonances and Small Divisors

Perturbations of integrable systems. The reason why classical mechanics was considered the bastion of determinism is that its great builders were solving, throughout the 18th and 19th centuries, only those problems that could be solved with the help of the elegant methods they had created, which means only a tiny and nonrepresentative minority of mechanical problems. Those problems that can be solved by classical methods are appropriately called integrable. Integrability is ensured when it is possible to find a canonical transformation to a normal form dependent on a set of action–angle variables, an array $J$ of generalized momenta $J_i$ and an array $\Theta$ of generalized coordinates $\theta_i$, such that the Hamiltonian becomes dependent on $J$ only. Then, in view of the Hamiltonian equations

\[ \dot J_i = -\frac{\partial H}{\partial \theta_i} = 0, \qquad \dot\theta_i = \frac{\partial H}{\partial J_i} = \omega_i(J_i), \tag{4.1} \]

the action variables remain constant in time, while the angles $\theta_i = \omega_i t$ rotate with the frequencies $\omega_i$. Each separated pair of variables moves on a torus. These tori do not interact, and the evolution is strictly deterministic. The question that led to the discovery of chaotic behavior is whether motion around the tori remains regular when the Hamiltonian of an integrable system is weakly perturbed. Adding a small perturbation, we arrive at the Hamiltonian presented as

\[ H = H_0(J^0) + \epsilon V(J^0, \Theta), \qquad \epsilon \ll 1. \tag{4.2} \]

This system remains integrable if it is possible to transform the perturbed Hamiltonian to a normal form $\tilde H(\tilde J)$ dependent on another set of action variables $\tilde J$. The original tori would be only weakly perturbed if this amounts to an $O(\epsilon)$ correction only, $\tilde J = J^0 + \epsilon J^1$.

Resonances. The new momenta can be expressed as partial derivatives of the perturbed action (see Sect. 1.1.1), expressed as

\[ S = J^0 \cdot \Theta + \epsilon \tilde S(\tilde J, \Theta). \tag{4.3} \]

Then the Hamiltonian (4.2) is expanded as

\[ H = H_0(J^0) + \epsilon \left[ \frac{\partial H}{\partial J^0} \cdot \frac{\partial \tilde S}{\partial \Theta} + V(J^0, \Theta) \right]. \tag{4.4} \]

Since the system is conservative, the value of the Hamiltonian, i.e., the energy $E$, should remain unchanged when expressed through the new momenta, and therefore the above bracketed expression should vanish. Using (4.1) and collecting the frequencies $\omega_i = \partial H/\partial J^0_i$ in the array $\omega$, we write the equation for $\tilde S$ as

\[ \omega \cdot \frac{\partial \tilde S}{\partial \Theta} = -V(J^0, \Theta). \tag{4.5} \]

This equation can be solved by taking into account that, by the nature of angular variables, both $\tilde S$ and $V(J^0, \Theta)$ should be periodic in $\theta_i$, and expanding them in Fourier series:

\[ \tilde S = \sum_{n} S_n(J^0) \exp(i\, n \cdot \Theta), \qquad V(J^0, \Theta) = \sum_{n} V_n(J^0) \exp(i\, n \cdot \Theta), \tag{4.6} \]

where the components of the array $n$ are integers $n_i$. Using this expansion in (4.5) yields the coefficients at the Fourier harmonics

\[ S_n = i \, \frac{V_n(J^0)}{n \cdot \omega}. \tag{4.7} \]

This expression becomes infinite in a resonant case, when the denominator vanishes. This "small divisors problem" causes the reduction to a normal form to fail, and toroidal motion to be strongly perturbed.

Dynamics near resonances. The KAM theory (Kolmogorov, 1954; Arnold, 1963; Moser, 1962) repairs this failure to a certain extent, asserting that a "majority" of tori remain unbroken as $\epsilon$ increases to a certain limit. Take note that the distinction between rational and irrational numbers is both abstract and paradoxical. On the one hand, rational numbers are "rare": the measure of their totality on the real line is zero. On the other hand, a rational number is found in an infinitesimal neighborhood of any irrational number, so one can always find an approximate resonance pushing up the ostensibly small correction $\tilde S$. Accordingly, regular and chaotic trajectories are closely interspersed.

In the case of an isolated resonance between two modes with a rational frequency ratio $\omega_1/\omega_2 = -n_2/n_1$, the singularity can be resolved by introducing a slowly evolving resonance phase $\phi = n_1\theta_1 + n_2\theta_2$. We will consider this method in more detail for the simpler case of an externally driven system (what is called 1½ degrees of freedom) in Sect. 4.4.3. Resolving a resonance in this way still does not eliminate chaos, which persists in stochastic layers that form in the vicinity of homoclinic points. Regular orbits densely cover toroidal surfaces of constant action, while chaos persists in "interstices" between the tori¹.

¹ It is hard to visualize 4D space, still more to represent it on a 2D page, so the tori are often drawn as if inserted one into the other.
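The severity of the small-divisor problem in (4.7) is easy to see numerically. The short Python sketch below (my illustration, not part of the book's notebooks) scans the denominators $n\cdot\omega$ for $\omega = (1, \omega_2)$, taking for $\omega_2$ the golden mean, the "most irrational" frequency ratio; even then, the smallest divisor reachable with integer components up to $N$ shrinks roughly like $1/N$:

```python
# For two frequencies omega = (1, omega2), a resonance means
# n1 + n2*omega2 = 0 with integers n1, n2. For irrational omega2 the divisor
# never vanishes, but it comes arbitrarily close as |n| grows.
import math

omega2 = (math.sqrt(5) - 1) / 2   # golden mean, ~0.618...

def smallest_divisor(N):
    """min |n1 + n2*omega2| over 1 <= n2 <= N and integer n1."""
    return min(abs(n2 * omega2 - round(n2 * omega2)) for n2 in range(1, N + 1))

for N in (10, 100, 1000):
    print(N, smallest_divisor(N))   # the minimal divisor shrinks roughly like 1/N
```

The minima are attained at the Fibonacci denominators of the golden mean's continued-fraction approximants, which is why this is the slowest possible approach to resonance.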


The reason for chaotic behavior near hyperbolic points, connected by homoclinic or heteroclinic orbits, is the instability of motion in their vicinity. Stable and unstable trajectories that arrive at or depart from such points behave in an utterly irregular way. While they are constrained by the non-self-intersection condition, they can intersect each other, forming other hyperbolic points (Fig. 4.1), and the latter's stable and unstable trajectories behave in the same way. This leads to an infinite number of intersections forming a chaotic maze.

Figure 4.1. A scheme of intersecting trajectories near hyperbolic points

Assuming just two frequencies and isolated resonances is still a gross idealization. Even in the case of a simple pendulum, the frequency of oscillations depends on the amplitude, and numerous resonances may occur as amplitudes change. Chaos arises when resonances overlap, which happens readily in systems with a large number of degrees of freedom N. Most theoretical studies do not go beyond N = 2, as in the examples to follow, and it is likely that there are no domains of stable motion at all at higher N. The overlap of chaotic layers in systems with a larger number of degrees of freedom is responsible for "Arnold diffusion" over a chaotic web, which, notably, causes instability of the solar system due to weak multibody interactions (Arnold, 1964). Thankfully, this instability develops on exceedingly long times, and is irrelevant from a practical point of view, since dissipative phenomena (for example, due to crashing asteroids and comets) have a much stronger impact.

4.1.3 Example: Hénon-Heiles Model

Model equations. The simplest nonintegrable autonomous conservative system should have at least two degrees of freedom. The four variables, generalized coordinates and momenta, evolve in this 4D space on a 3D manifold H = const, which is just the minimal dimension allowing trajectories to diverge. The first numerical study of a minimal conservative chaotic system of this kind, motivated by planetary dynamics, was carried out by Hénon and Heiles (1964). Their model includes two position variables $q_n$ and the respective momenta $p_n$, n = 1, 2, with the Hamiltonian

\[ H = \tfrac{1}{2}\left(p_1^2 + p_2^2 + q_1^2 + q_2^2\right) + q_1^2 q_2 - \tfrac{1}{3} q_2^3, \tag{4.8} \]

leading to the dynamic equations

\[ \dot q_n = p_n, \qquad \dot p_1 = -q_1(1 + 2q_2), \qquad \dot p_2 = -q_2 - q_1^2 + q_2^2. \tag{4.9} \]


Figure 4.2. Poincaré maps of the Hénon-Heiles model in the (q2, p2) plane, generated starting from a grid of initial conditions (color-coded as explained in the text) at H = 0.1 (a), 0.11 (b), 0.12 (c), 0.13 (d), 0.14 (e), 0.15 (f)


Trajectories can be drawn by straightforward numerical integration, but they do not give proper insight into the character of motion; a better way is to construct a Poincaré map, which traces successive intersection points (in the same direction) of the trajectories with a given surface, called a Poincaré section; this surface should be everywhere transversal to the flow generated by the ODE. We first encountered this device in Sect. 1.6.3 and have used it for simpler tasks in the Mathematica notebooks illustrating problems of Ch. 2 and Ch. 3.

Poincaré maps. We take as a Poincaré section the 3D manifold spanned by the q2, p2 plane and the level q1 = 0. The Poincaré maps shown in Fig. 4.2 are produced in the range of energies around the chaotic transition point, starting from a grid of initial conditions with the initial energy divided in variable proportions between the positional variable $q_1 = \sqrt{2H}\,\sin y$ and the respective momentum $p_1 = \sqrt{2H}\,\cos y$, with the "angle" y taking 10 equally spaced values between π/20 and π/2. The initial value of the other positional variable q2 is set to zero, and the respective momentum p2 is assigned to fit the total energy H. The dots marking the intersections with the Poincaré section in Fig. 4.2 are produced by 500 iterations for each set of initial conditions, and are color-coded in such a way that the color changes from red to yellow to green to blue to purple as more initial energy is concentrated in p2.

At a low energy level H = 0.1 (Fig. 4.2a), all trajectories appear to be quasiperiodic. As energy increases to H = 0.11 (Fig. 4.2b), the purple-blue trajectory initiated at y = 2π/5 is the first to become disordered. This energy level was considered to be close to the chaotic transition in the original publication. The important change is the appearance of homoclinic points that serve as hubs where chaos nucleates. The orbits with lower values of the "angle" y, i.e., with a higher fraction of initial energy in p1, are not well pronounced, while those with a larger y remain well ordered. As energy increases, more orbits become chaotic: blue ones at H = 0.12 (Fig. 4.2c), and greenish ones of different hues at H = 0.13 (Fig. 4.2d) and H = 0.14 (Fig. 4.2e). But even at the highest energy level H = 0.15 (Fig. 4.2f), close to the limiting value H = 1/6 (where a singularity is encountered and the computation breaks down), when almost the entire accessible phase plane is filled by chaotically distributed dots, there remains a white area not accessed by trajectories, and the orbit at y = π/2 with p1(0) = 0 remains quasiperiodic. These pictures, as well as longer orbits starting from the same initial conditions at different energy levels, and drawn in another Poincaré section, are generated in the accompanying Mathematica notebook HenonHeiles.nb.

Chaos in other abstract conservative models, including the models introduced in Sect. 1.1.4: the elastic pendulum (Anurag et al., 2020, is the latest reference) and the double pendulum (Stachowiak and Okada, 2006), has been extensively studied numerically, producing qualitatively similar Poincaré maps. More examples of Hamiltonian chaos will appear in Sect. 4.5.1 and Sect. 4.6.2.
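The construction of a Poincaré map can be sketched in a few dozen lines of Python (a stand-in for the book's Mathematica notebook HenonHeiles.nb; the hand-rolled RK4 integrator, step size, and run length are my choices, not the book's):

```python
# Integrate the Henon-Heiles equations (4.9) and record crossings of the
# section q1 = 0 in the direction p1 > 0, locating each crossing by linear
# interpolation between consecutive steps.
import math

def rhs(s):
    q1, q2, p1, p2 = s
    return (p1, p2, -q1 * (1.0 + 2.0 * q2), -q2 - q1 * q1 + q2 * q2)

def rk4_step(s, h):
    k1 = rhs(s)
    k2 = rhs(tuple(x + 0.5 * h * k for x, k in zip(s, k1)))
    k3 = rhs(tuple(x + 0.5 * h * k for x, k in zip(s, k2)))
    k4 = rhs(tuple(x + h * k for x, k in zip(s, k3)))
    return tuple(x + h * (a + 2 * b + 2 * c + d) / 6.0
                 for x, a, b, c, d in zip(s, k1, k2, k3, k4))

def energy(s):
    q1, q2, p1, p2 = s
    return 0.5 * (p1**2 + p2**2 + q1**2 + q2**2) + q1**2 * q2 - q2**3 / 3.0

def poincare(s, h=0.01, n_steps=50_000):
    """Collect (q2, p2) at upward crossings of q1 = 0; return points and final state."""
    pts = []
    for _ in range(n_steps):
        s_new = rk4_step(s, h)
        if s[0] < 0.0 <= s_new[0]:          # q1 crossed zero going up
            w = -s[0] / (s_new[0] - s[0])   # linear interpolation weight
            pts.append((s[1] + w * (s_new[1] - s[1]),
                        s[3] + w * (s_new[3] - s[3])))
        s = s_new
    return pts, s

# all of the initial energy H = 0.1 in the momentum p1 (the low-y limit of the grid)
H = 0.1
s0 = (0.0, 0.0, math.sqrt(2.0 * H), 0.0)
pts, s_end = poincare(s0)
print(len(pts), abs(energy(s_end) - H))
```

The drift of `energy(s_end)` away from H measures the numerical dissipation of the integrator; plotting the collected `pts` for a grid of initial conditions reproduces pictures of the kind shown in Fig. 4.2.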

4.1.4 Quantitative Measures of Chaos

Lyapunov exponents. Although computational results, such as shown in the pictures of the preceding section, do not advance beyond visual impressions, they clearly demonstrate that the transition to chaos in conservative systems is not clear-cut, and takes place at different values of governing parameters, depending on initial conditions. The dynamics is certainly not ergodic, as trajectories fail to cover the available phase space. The popular notion that dynamics is chaotic when it is generated by deterministic equations but is indistinguishable from a stochastic process is faulty not only because it is vague, but also because deterministic chaos is never truly random: it is organized in a way so sophisticated that its order cannot be easily deciphered. Still, these computations do not characterize dynamics in a quantitative way or provide a measure distinguishing quasiperiodicity from chaos.

The most natural signature of chaos is the divergence of trajectories starting from nearby points. This property is also connected with other measures, characterizing the proliferation of paths leading to a certain cell in the phase volume – ideally, infinitesimal, but practically, coarse-grained. These measures, entropies of a special kind, originate in statistical physics, and are important, in particular, for characterizing ergodicity or the lack thereof. Another signature of chaos is the dispersal of the occupied phase volume. According to the Liouville theorem, the phase volume of a conservative system is conserved, as the volume of an incompressible fluid is, however complicated the flow pattern. However, the boundary of this volume may become convoluted, wrinkled and folded, and eventually fractal. We will not dwell on these notions here but concentrate on Lyapunov exponents characterizing the instability of motion, which are also connected, though not in a straightforward way, to other characteristics of deterministic chaos. We have already encountered Lyapunov exponents in Sect.
2.2.1 as characteristics of the stability of a stationary state, defined as eigenvalues $\lambda_i$ of the associated Jacobi matrix. Floquet exponents (Sect. 2.3.3), introduced by a contemporary of Lyapunov, are extensions of the same concept to periodic orbits. However, their definition and computation become far more complicated when applied to irregular trajectories with widely variable local characteristics. If we take a small (eventually, infinitesimal) ball of an appropriate dimensionality surrounding a point in the phase space, dynamic evolution distorts it into an ellipsoid with axes proportional to $e^{\mathrm{Re}\,\lambda_i}$. In a conservative system, the volume cannot change, so, among the exponents, there should be those with both positive and negative real parts, and the enclosed volume will extend in some directions and contract in others. The extending and contracting directions rotate along the trajectory, so some global characteristic is called for.

Computation methods. The straightforward measure of divergence of nearby trajectories originating at some points $x$, $x + d(0)$, $|d| \ll 1$, would be the distance between them $|d(t)|$ accumulated over a long period of time:

\[ \lambda = \lim_{d(0)\to 0}\, \lim_{t\to\infty} \frac{1}{t} \int_0^t \ln\left|\frac{d(t)}{d(0)}\right| dt. \tag{4.10} \]

Such a computation is impossible to implement in practice, and inevitably some kind of discretization should be applied. The Jacobi matrix J can be straightforwardly defined by comparing the values of dynamic variables $x_i$ (both positions and velocities or momenta) at successive time steps as $J = \partial x_{i+1}/\partial x_i$, and local Lyapunov exponents can be defined as eigenvalues of this matrix. Producing a discretized version of (4.10) in this way is also not a straightforward task, since it would lead to large numerical errors when trajectories diverge considerably. This tendency is balanced by shifting points at each step to restore the original small distance. Benettin et al. (1976) computed Lyapunov exponents for the Hénon-Heiles model in this way. This computation is reproduced in the notebook HenonHeiles.nb for the same set of orbits at H = 0.13 as in Fig. 4.2d. The discretized version of (4.10), as applied to Poincaré maps, is

\[ \lambda = \lim_{n\to\infty}\, \lim_{d_0\to 0} \frac{1}{n} \sum_{i=1}^{n} \ln\frac{d_i}{d_0}. \tag{4.11} \]

Here the orbits are represented by a sequence of 2D vectors in the Poincaré section rather than 4D vectors on a corresponding trajectory; d0 is the standard distance between points, restored after each iteration, and di is the distance between the respective points after the ith step. The limits in the mathematical formula cannot be taken literally when confronting the realities of a numerical computation. The tests undertaken in the Mathematica notebook showed that increasing the number of iterations n as close as possible to the unattainable infinity is counterproductive, not just because of the exploding computation time but due to accumulating errors. Numerical errors grow as well when the separation of orbits becomes too small, and therefore a middle ground is preferred. It turned out that, when using the default integration routine built into Mathematica, a modest number of 1000 iterations with the standard separation of orbits d0 ≈ 0.0008 would be an optimal, if not perfect, choice.

Figure 4.3. (a): A spread of local Lyapunov exponents at H = 0.13, y = π/20. (b): Lyapunov exponents at H = 0.09 (red), H = 0.1 (yellowish-green), H = 0.11 (green), and H = 0.12 (blue) computed in the notebook HenonHeiles.nb

The local values of the Lyapunov exponents, measured by ln(di/d0), fluctuate wildly, as seen in Fig. 4.3a, taken for the relatively benign case of the entire initial energy concentrated in the positional variable q1. The results for the entire set of initial conditions are collected in Fig. 4.3b, where the computed Lyapunov exponents are plotted for several energy levels against the number k labeling the "angle" y = kπ/20 in the adopted definition of the initial conditions (Sect. 4.1.3). The values of λ drop abruptly for the orbits with y coming closer to π/2, which are the last to become disordered in Fig. 4.2, and steadily increase with energy. The blue line at H = 0.12 has an anomaly, dropping below all others at k = 6, apparently due to numerical errors, and the data at H = 0.13, not shown here, behave irregularly for the same reason. All this indicates imperfections of the method, as well as effects of numerical noise.

The computation is supposed to yield the largest Lyapunov exponent, but there is no guarantee that the perturbed trajectories followed in this way are the fastest to diverge. One can try different starting points for perturbed trajectories, at the price of greatly increased computation time (Sano and Sawada, 1985; Eckmann et al., 1986). In principle, this procedure can generate the full spectrum of Lyapunov exponents determined by the product of the local Jacobians along the trajectory. A more economical time-delay method, reconstructing the attractor dynamics from a single time series, was proposed by Rosenstein et al. (1993).
Their algorithm locates the nearest neighbor of each point on the trajectory, and each pair of neighbors is taken as initial conditions for different nearby trajectories. The largest Lyapunov exponent is then estimated as the mean rate of separation of the nearest neighbors. An alternative geometric method (Horwitz et al., 2007) does away with extracting Lyapunov exponents from a time series altogether, and instead relates chaotic behavior to the geometry of geodesics on a suitable Hamiltonian manifold.

Conservative systems are particularly prone to numerical errors. These are evident in numerical dissipation, a change of energy (in either direction) in the course of a computation. The errors can be reduced, at the cost of increased computation time, by using special integration routines, but never eliminated. I prefer to emphasize this point rather than trying to improve the results. After all, we are dealing here with models having no practical significance. The only practical application of the conservative equations of classical mechanics is in satellite navigation and in tracing the motion of planetoids, comets, and meteorites in our mildly chaotic solar system. This is where precision is needed and attained; the destiny of humankind will depend on precise predictions when the time comes to deflect a disastrous impact of a large meteorite or an asteroid like the one that put an end to the dinosaurs.
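For readers who want to experiment outside Mathematica, the renormalization idea behind (4.11) fits in a few lines of Python. As a stand-in for the Hénon-Heiles Poincaré map (my simplification, not the book's computation), the sketch below uses the fully chaotic logistic map, whose largest Lyapunov exponent is known exactly to be ln 2:

```python
# Follow two trajectories, restore their separation to d0 after every step,
# and average the logarithmic stretching factors ln(d_i/d_0), as in (4.11).
import math

def f(x):
    return 4.0 * x * (1.0 - x)   # fully chaotic logistic map

def lyapunov(x0, d0=1e-8, n=50_000, transient=100):
    x = x0
    for _ in range(transient):              # let the orbit settle
        x = f(x)
    total = 0.0
    for _ in range(n):
        y = x + d0 if x < 0.5 else x - d0   # companion point at distance d0
        x, y = f(x), f(y)
        d = max(abs(y - x), 1e-12)          # guard against an exact collision
        total += math.log(d / d0)           # local stretching ln(d_i/d_0)
    return total / n

lam = lyapunov(0.123)
print(lam)   # approaches ln 2 ~ 0.6931 as n grows
```

The same wild fluctuation of the local values ln(di/d0) noted in Fig. 4.3a is visible here if the individual terms are plotted instead of averaged; only the long-time mean settles down.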


4.2 Approaches to Dissipative Chaos

4.2.1 Distilling Turbulence into Simple Models

Complexity of the real world. Earthly phenomena are dissipative, unless distilled into idealized models, such as frictionless pendulums or inviscid fluids. Whatever we encounter in real life had been known to be unpredictable well before the term "chaos" entered the scientific literature. Fluid flow is a prime example of chaotic motion, turbulence, fascinating great painters (Fig. 4.4) but dismissed by scientists prior to the 20th century as "hopelessly complicated". The turning point was the understanding that turbulence can be treated statistically. It was first realized by Geoffrey Taylor (1935), whose name is borne by numerous hydrodynamic phenomena he brought to light. Addressing the most transparent idealized case of homogeneous isotropic turbulence, Taylor suggested measuring the correlation functions between velocity components at different times and different points in space.

Hydrodynamic turbulence involves motion on many different scales, seen in artists' renderings in Fig. 4.4, as well as in images of real flows. The fluid can be stirred on a large scale, and this energy input trickles down to shorter scales, where it is dissipated by viscosity. Based on this idea of an energy cascade, Andrey Kolmogorov (1941) derived, by elementary means of dimensional analysis, a simple power law $E \propto k^{-5/3}$ for the distribution of energy among modes with different wavenumbers k. Lev Landau (1944) and, later but independently, Eberhard Hopf (1948) envisaged the transition to turbulence as a gradual excitation of waves with different wavenumbers and frequencies. Various structures observed near the transition to turbulence were dismissed as a "crisis of turbulence", but turbulent structures persisting for a limited time in real flows, and described by high-dimensional dynamical systems, later attracted much attention (Sirovich, 1987).

Figure 4.4. (a): A drawing by Leonardo (fragment). (b): The Great Wave off Kanagawa by Hokusai


Continuous chaotic models. The attitude changed in the 1970s, when attention shifted to simple models displaying chaotic behavior. Ruelle and Takens (1971) undermined the Landau–Hopf scenario by proving that already after the second Hopf bifurcation oscillations become structurally unstable in a dynamical system, a far simpler object than a spatially distributed turbulent flow. Still earlier, but without being noticed for a time, Edward Lorenz (1963) reduced the complexity of turbulent thermal convection to a chaotic three-variable model, to be described in detail in Sect. 4.3.3. The principal roads to chaos in dynamical systems involve not repeated Hopf bifurcations but bifurcations of periodic orbits that cannot take place in 2D: period doubling, or flip, and a homoclinic bifurcation involving a saddle-focus stationary state. A 3D phase space is sufficient to allow for nonintersecting trajectories that may widely diverge when starting from neighboring points, so even the four variables necessary to arrange two Hopf bifurcations appear to be an overshoot. Otto Rössler (1976) designed several simple chaotic three-variable models bearing a semblance to equations of chemical kinetics; we will return to one of them in Sect. 4.3.2. Going one dimension up, Rössler (1979) called the observed dynamics "hyperchaos". Clinton Sprott (1994) went the opposite way, undertaking an intensive computerized search for chaotic models that would be algebraically simpler than those of Lorenz and Rössler, and came up with 19 three-variable systems with either five terms and two nonlinearities or six terms and one nonlinearity. Khaikin's circuit was reinvented in a simplified form as "Chua's circuit" (Matsumoto et al., 1984), and intensively explored in the course of the following decades.

Discrete chaotic models. This was still far from the ultimate simplification.
We have already seen in the preceding section that 2D Poincaré maps characterize dynamics better than 4D trajectories. The conclusive simplification step was constructing maps with desired properties, rather than deriving them from trajectories of dynamical systems, as we did in the preceding section. Since discrete mappings are not restricted by the requirements of continuity and nonintersection, a single dimension is enough, and computations can be carried out with minimal numerical errors and at lightning speed.

The simplest measure-preserving chaotic 1D maps are the baker's transformation, designed by Hopf (1937) but known to bakers since time immemorial, and the Bernoulli shift. The way a baker mixes dough is by stretching it and folding it upon itself, which leads to the triangular map in Fig. 4.5a. More sophisticated 2D versions of the baker's transformation may include different ways of folding and rotation. The Bernoulli shift would cut the stretched dough in half and superimpose the two pieces. In mathematical terms, it acts upon a point of the unit interval expressed as an infinite binary string, where the leading digit 0 or 1 means, respectively, that the point is located in the first or the second half of the interval. Following the transformation, the second digit of the original string will classify the position; thus, the mapping in Fig. 4.5b, described above in a way suitable to bakers, is equivalent to removing the first digit and shifting the remaining sequence to the left.

Figure 4.5. (a): Baker's transformation map. (b): Bernoulli shift. Thin lines show successive iterations

In both mappings, the unit interval is stretched twice over, and the second part is mapped back on the same interval by either bending or cutting (i.e., either inverting or preserving its direction). However close two points are originally, they will eventually be separated and sent to the two alternative half-intervals. In the case of the Bernoulli shift, if the difference in the original positions was in the nth digit, this will happen after the nth iteration. The most famous dissipative 1D transformation is the logistic map, to be discussed in Sect. 4.2.3. It was once a common occurrence for a lecturer on the subject of turbulence to start with one of Leonardo's drawings, like the one in Fig. 4.4a, before swiftly switching to the logistic map.
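The digit-shifting description of the Bernoulli shift is easy to verify with exact rational arithmetic. The Python sketch below (my illustration) takes two points that agree in their first nine binary digits and shows that, after nine iterations, the former tenth digit has become the leading one, sending the points to opposite halves of the interval:

```python
# The Bernoulli shift x -> 2x mod 1 deletes the leading binary digit.
# Exact Fraction arithmetic keeps the digits from being lost to rounding.
from fractions import Fraction

def shift(x):
    """Bernoulli shift (doubling map) on the unit interval."""
    x = 2 * x
    return x - 1 if x >= 1 else x

x = Fraction(1, 5)              # binary 0.0011 0011 0011...: the 10th digit is 0
y = x + Fraction(1, 2**10)      # same first 9 digits, 10th digit flipped to 1
for _ in range(9):
    x, y = shift(x), shift(y)

# the former 10th digit now leads: the points sit in opposite halves
print(x < Fraction(1, 2), y < Fraction(1, 2), y - x)   # True False 1/2
```

In floating-point arithmetic the same experiment would fail after about 50 iterations, since the doubling map shifts the mantissa out of a 52-bit double, which is itself a neat illustration of the finite horizon of any digital computation of chaotic dynamics.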

4.2.2 Chaotic Attractors

In dissipative, unlike conservative, systems, the phase volume shrinks. Stretching in a certain direction can, however, be compensated by strong shrinking of the phase space in other directions, so that the phase volume still tends to zero. Shrinking of the phase volume combined with local stretching leads to the formation of an attractor of a reduced, possibly fractal, dimension. This mechanism is explained qualitatively by the action of the horseshoe map (Smale, 1967). The unit square (Fig. 4.6a) is stretched vertically and shrunk horizontally in such ratios that its total area decreases. It is folded and mapped back on itself in such a way that the bent area remains outside the square (Fig. 4.6b). The part of the original square that is mapped back onto it following these transformations is restricted to two horizontal stripes (Fig. 4.6c), and its image is the two vertical stripes

4.2. APPROACHES TO DISSIPATIVE CHAOS


shown in Fig. 4.6b. The area that remains invariant under the transformation, i.e., is mapped upon itself, is confined to the four squares shown in Fig. 4.6d. The next iteration restricts the invariant area to the 16 small squares shown in Fig. 4.6e. Upon successive iterations, the invariant area shrinks further, fading from view in Fig. 4.6f, and turns eventually into a fractal object of zero area, parts of which, nevertheless, can be found in far-removed locations within the original unit square. The structure of the attractor, though topologically complex, is well defined, but predicting the location of the image of a certain point after a large number of iterations would require ever-increasing precision in defining the initial data. This kind of stretching and folding is similar to the baker's transformation extended to 2D, but accompanied by shrinking of the area, since otherwise no attractor could emerge. Nevertheless, neighboring points may become far separated within this low-dimensional attractor. Ruelle and Takens (1971) called this kind of attractor strange.

Figure 4.6. Smale horseshoe. See the text for explanations

Continuous dissipative dynamical systems implementing the same kind of stretching and folding, while contracting the phase volume, show the same kind of unpredictability: if initial conditions are measured or assigned with a finite precision, the uncertainty of the values of the dynamic variables grows with time in such a way that, after a finite time interval, the system may be found in a vicinity of any point within the attractor. As evolution of dissipative systems contracts the phase volume, it eventually shrinks to zero, and the dimension of the attractor must be less than the number of the dynamic variables. In the examples to follow, such as the Lorenz (Sect. 4.3.3) and Rössler (Sect. 4.3.2) models, the attractors of 3D continuous dynamical systems are almost 1D, which makes it possible to approximate them by 1D maps, even though their more complex internal structure is revealed under magnification.

4.2.3

Period-Doubling Cascade

Logistic map. The logistic map has arisen in the context of population dynamics of nonoverlapping generations. Unlike continuous processes of birth and death described


Figure 4.7. (a): The logistic map at μ = 3.1, after the period-doubling bifurcation; red lines show an iteration sequence. (b): The composite map f1 (x) at μ = 3.1. (c): The composite map f2 (x) at μ = 3.5, after the repeated period doubling. (d): The rescaled central segment of f2 (x) at the third period-doubling, compared to f (x) (red line) at the first period-doubling

by models we encountered in Sects. 1.3.1 and 2.5.1, in some cases, in particular among insects, population growth takes place at discrete intervals, and generations do not overlap. In this situation, the relation between the numbers or densities of successive generations is described by a discrete map, x_{n+1} = f(x_n). The logistic map,

x_{n+1} = μ x_n (1 − x_n),   (4.12)

belongs to this family. Robert May (1976) brought attention to its complex behavior and its evolution with the changing parameter μ.
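Iterating (4.12) directly shows the behavior discussed below; a minimal sketch in plain Python (the parameter values are our illustration choices): below μ = 3 the orbit settles on a fixed point, while at μ = 3.1, past the flip bifurcation, it alternates between two values.

```python
def logistic_orbit(mu, x=0.3, transient=2000, keep=8):
    """Iterate x_{n+1} = mu*x_n*(1 - x_n), discarding the transient."""
    for _ in range(transient):
        x = mu * x * (1.0 - x)
    orbit = []
    for _ in range(keep):
        x = mu * x * (1.0 - x)
        orbit.append(x)
    return orbit

fixed = logistic_orbit(2.8)   # converges to the fixed point x0 = 1 - 1/mu
pair = logistic_orbit(3.1)    # past the flip bifurcation: a period-two orbit
```

At μ = 2.8 all the retained iterates coincide with x_0 = 1 − 1/μ ≈ 0.643; at μ = 3.1 they alternate between the two values x_1^±.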


Figure 4.8. (a): Bifurcation diagram for the logistic map, showing the attractors for the respective values of the parameter μ. (b): Blowup in the chaotic region

What makes the logistic map important is that it reproduces the period-doubling route to chaos observed in many dynamical systems, and provides much easier means to explore its details. The first stages of the period-doubling cascade, taking place as the parameter μ increases, are illustrated in Fig. 4.7. At small values of μ, the logistic map has, besides the unstable trivial equilibrium, a single fixed point x = x_0, which is stable as long as the slope f′(x) is less than one in absolute value. At μ = 3, a flip bifurcation takes place: above it, f′(x_0) < −1, so that the fixed point is unstable, and the system approaches a period-two sequence, alternating between two values x = x_1^± upon each iteration (Fig. 4.7a). Both points are stable fixed points of the composite map f_1(x) = f(f(x)), while x_0 is an unstable fixed point of this map with f_1′(x_0) > 1 (Fig. 4.7b). As μ grows further, |f_1′(x_1^±)| increases, and a new period doubling takes place at μ ≈ 3.4495. Above this point, a period-four orbit emerges, where the four alternating values of x are stable fixed points of the composite map f_2(x) = f_1(f_1(x)) = f^4(x) (Fig. 4.7c).

Complexity and universality. This sequence of bifurcations continues ad infinitum. One of the stable fixed points of the composite map f_n(x) = f^{2^n}(x) approaches the maximum of the logistic map at x_m = 1/2 as n increases, and the form of the composite map in the vicinity of this point becomes indistinguishable from the rescaled original logistic map. This similarity is apparent already in the map f_2(x), as seen in Fig. 4.7d, where the rescaled central segment of f_2(x) at the period-eight bifurcation point μ ≈ 3.5441 is compared to f(x). On the basis of this self-similarity, Feigenbaum (1978) applied the renormalization group to prove universality of the period-doubling sequence. Asymptotically at n → ∞, the period-2^n bifurcation points μ_n approach the accumulation point μ_∞ ≈ 3.5699456 as μ_∞ − μ_n ∼ δ^{−n}, with the exponent δ ≈ 4.6692016. The chaotic attractor emerges at μ = μ_∞.


The attractor undergoes an infinite number of bifurcations beyond μ_∞. Figure 4.8a shows the points on the attractor at different values of μ up to the upper limit μ = 4. This picture still gives little idea of the true complexity of the behavior and can be enlarged to reveal finer structures at smaller scales. The sequence contains an infinite number of periodic "windows" and repeated cascades. A prominent feature is a rather wide period-three window seen in the blowup in Fig. 4.8b. Li and Yorke (1975) have proven that chaos is always present in some parametric region whenever a period-three orbit exists. The universality of the period-doubling sequence explains its wide applicability to a variety of dynamical systems, which was confirmed in numerous experiments and simulations, some of which we will consider in Sect. 4.3. As long as the map is continuous and has a maximum, its exact form is irrelevant, since only the vicinity of the maximum, which can always be approximated by a parabola, is relevant at late stages of the period-doubling cascade. However, though frequently encountered, the period-doubling route to chaos is in no way unique. A complicated system containing a number of parameters can enter the chaotic region in different ways along different parametric paths, e.g., proceeding first into the period-three region. A still wider variety of routes to chaos is possible for discontinuous maps and ODEs that can be approximated by maps of this kind, as we will see in Sect. 4.3.3.
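The period-three window is easy to land in numerically; a minimal sketch (the probe values μ = 3.835 inside the window and μ = 3.7 in the chaotic band are our choices, not quoted from the text):

```python
def min_period(mu, x=0.3, transient=100000, max_p=16, tol=1e-8):
    """Smallest p with |f^p(x) - x| < tol after the transient has died out,
    or None if no short cycle is detected."""
    for _ in range(transient):
        x = mu * x * (1.0 - x)
    y = x
    for p in range(1, max_p + 1):
        y = mu * y * (1.0 - y)
        if abs(y - x) < tol:
            return p
    return None

p_window = min_period(3.835)   # inside the period-three window
p_chaos = min_period(3.7)      # inside the chaotic band
```

The orbit in the window locks onto a 3-cycle, while at μ = 3.7 no short cycle closes to within the tolerance.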

4.2.4

Strange, Chaotic, or Both?

Dynamics vs. geometry. Grebogi et al. (1984) have undertaken to distinguish between the properties of an attractor, usually related, of being chaotic and strange. In their words, "chaotic refers to the dynamics on the attractor, while strange refers to the geometrical structure of the attractor". A natural question is how these properties overlap, and more precise definitions are needed to identify them. The authors classify an attractor as strange when it is "not a finite set of points and is not piecewise differentiable". Defining an object by what it is not rather than by what it is might be objectionable, and commonly (though less precisely) such a kind of attractor is identified as fractal, unlike the attractors we encountered in Chapters 2 and 3 – either fixed points or periodic orbits – or toroidal attractors of quasiperiodic motion. Smale's attractor is definitely strange in this sense, but its dynamics is not defined, although it is likely that a dynamical system implementing a similar kind of stretching and folding would be chaotic; we will see some examples in the next section. In agreement with the common notion that chaotic behavior is characterized by divergence of nearby trajectories, Grebogi et al. identify a chaotic attractor as "one for which typical orbits on the attractor have a positive Lyapunov exponent". The qualification "typical", though blurring mathematical precision, is in place: recall the examples in Sect. 4.1.3 (though in a different context) of orbits starting from different initial conditions that evolve in unlikely ways. Hamiltonian systems we have considered in the preceding section have no attractors, and are chaotic but not strange, according


to this definition, since they conserve phase volume, even though they are not ergodic, and are able to disperse the phase volume in such a way that its boundary becomes fractal. The opposite case of strange nonchaotic attractors (SNA) is more subtle. Heavy numerical work is needed to measure both the fractal dimension of the attractor and the Lyapunov exponents, and therefore these studies are commonly based on discrete maps rather than continuous dynamical systems. Grebogi et al. (1984) gave examples of SNA in circle maps of the type

x_{n+1} = f(x_n, θ_n),   θ_{n+1} = θ_n + 2πω (mod 2π).   (4.13)

Such maps commonly originate in forced systems (which we consider in detail in Sect. 4.5). SNAs appear to be typical when the forcing is quasiperiodic, so that the corresponding dynamical system has the form

θ̇ = ω_1,   ẋ = f(x, θ, φ),   φ̇ = ω_2   (4.14)

with incommensurate ω_1, ω_2. Jalnine and Kuznetsov (2016) found both chaotic and nonchaotic strange attractors in a model of coupled dissipative mechanical rotators:

θ̇ = ω,   φ̇ = ω − sin φ,   ω̇ = μ − ω − sin φ − sin θ − λ sin ρθ.   (4.15)

In the original publication, the model contained more parameters related to an actual physical device. Although this system is formally autonomous, it also contains two angular variables θ, φ, and incommensurate frequencies were built in by choosing the parameter ρ = (1 + √5)/2 – the golden ratio, which is known as the "most irrational" real number.

Authenticating chaos and strangeness. Unequivocal quantitative characteristics of both chaos and strangeness are not easy to obtain. We have already discussed in Sect. 4.1.4 the difficulties in measuring Lyapunov exponents, the touchstone of chaos. Gottwald and Melbourne (2004, 2005) devised the "0–1 test" distinguishing chaos from either (quasi)periodic dynamics or random noise. The discrete version of the test, based on a single sequence x_i, i = 1, 2, . . . , N, operates with the function

p_n = Σ_{i=1}^{n} x_i cos ic,   (4.16)

where c is a randomly chosen number. This function is used to compute the mean square displacement over a set of data,

M(n) = [1/(N − n)] Σ_{i=1}^{N−n} (p_{i+n} − p_i)².   (4.17)


The value

K = lim_{n→∞} ln M(n) / ln n   (4.18)

serves as the test of chaos: it is finite (in the most clear-cut case, unity) when the dynamics is chaotic and zero when it is not. The test is also computationally expensive, since reliable results are only obtained by analyzing a set of computations with different values of the parameter c. The authors demonstrated it by applying the test to the logistic map, where chaotic and periodic regimes are interspersed as the parameter μ is shifted by small increments (recall Fig. 4.7), and 100 randomly chosen values of c were used to make the distinction clear. The method also works with noisy data, and with continuous systems, which, however, implies a further increase of the bulk of computations. Gopal et al. (2013) applied this test also to distinguishing between SNA and either quasiperiodic or truly chaotic dynamics.

Measuring the fractal dimension, the benchmark of strangeness, is not a straightforward task either. The standard procedure of fractal geometry is "box counting", whereby the fractal dimension D is computed as

D = lim_{ε→0} ln N(ε) / ln(1/ε),   (4.19)

where N(ε) is the number of cubes of the appropriate phase-space dimension with the edge length ε that are needed to cover the attractor; this limit should not depend on the kind of grid. Applying this method to chaotic attractors is highly problematic, since, with a finite set of points in a data set, some cubes may contain several orbit points, while others would have none in a shorter series but would be hit by the chaotic orbit in a longer run. The characteristic more suitable for attractors of dynamical systems is the "generalized", or Rényi dimension (Rényi, 1970; Grassberger and Procaccia, 1983), which takes into account the fraction of time τ(C_i) that the trajectory spends in each cube C_i or, in the case of a discrete data set, the frequency with which this cube is visited. The precise definition implies the infinite overall time limit, but we know that infinity is never reached. The generalized dimension is computed then as

D_q = lim_{ε→0} ln Σ_i [τ(C_i)]^q / ln(1/ε).   (4.20)

The exponent q stands for the weight of the frequency characteristic, so that (4.19) is recovered at q = 0. This parameter makes the result dependent on free choice, which could be adjusted to particular aims. With the commonly used values q = 1 or 2, the result is called, respectively, the "information" or "correlation" dimension. The geometrical and dynamic measures are commonly related. Kaplan and Yorke (1979) introduced a "Lyapunov dimension" dependent on the number of positive Lyapunov exponents, claiming that it is equal to the information dimension for "typical" systems. This might be vaguely true, but the case of SNA certifies that nontypical systems do exist as well.
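The discrete 0–1 test (4.16)–(4.18) can be sketched in a few lines and cross-checked against the sign of the Lyapunov exponent, using the logistic map as a convenient data source; a minimal sketch in plain Python (the finite range of lags for the slope fit, the spread of values of c, and the parameter values are all our pragmatic choices, not from the text):

```python
import math

def logistic_series(mu, n, x=0.3, transient=500):
    """A data series from the logistic map after discarding the transient."""
    for _ in range(transient):
        x = mu * x * (1.0 - x)
    out = []
    for _ in range(n):
        x = mu * x * (1.0 - x)
        out.append(x)
    return out

def K_statistic(xs, c, lags=range(20, 151, 5)):
    """Slope of ln M(n) versus ln n, with p_n from (4.16) and M(n) from (4.17)."""
    p, s = [0.0], 0.0
    for i, x in enumerate(xs, start=1):
        s += x * math.cos(i * c)
        p.append(s)
    pts = []
    for lag in lags:
        m = sum((p[i + lag] - p[i]) ** 2 for i in range(1, len(xs) - lag + 1)) / (len(xs) - lag)
        pts.append((math.log(lag), math.log(m)))
    ax = sum(a for a, _ in pts) / len(pts)
    ay = sum(b for _, b in pts) / len(pts)
    return sum((a - ax) * (b - ay) for a, b in pts) / sum((a - ax) ** 2 for a, _ in pts)

def K_median(mu, cs=[0.7 + 0.1 * k for k in range(16)]):
    """Median of K over a spread of values of c, as the authors recommend."""
    xs = logistic_series(mu, 4000)
    ks = sorted(K_statistic(xs, c) for c in cs)
    return ks[len(ks) // 2]

def lyapunov(mu, n=100000, x=0.3, transient=1000):
    """Average of ln|f'(x)| = ln|mu(1 - 2x)| along the orbit."""
    for _ in range(transient):
        x = mu * x * (1.0 - x)
    s = 0.0
    for _ in range(n):
        x = mu * x * (1.0 - x)
        s += math.log(abs(mu * (1.0 - 2.0 * x)))
    return s / n

K_chaotic, K_regular = K_median(4.0), K_median(3.2)
lam_chaotic, lam_regular = lyapunov(4.0), lyapunov(3.2)
```

The two regimes separate cleanly: at μ = 4 the displacement grows diffusively, so K comes out near unity and λ = ln 2 > 0, while at μ = 3.2 the motion is periodic, K stays near zero, and λ < 0.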

4.3

Chaos Near a Homoclinic

4.3.1

Shilnikov's Snake

Motion in the vicinity of a homoclinic orbit is particularly sensitive to perturbations when it slows down as it passes close to a saddle point. In a three-variable nonlinear dynamical system, a saddle point may have either a 1D stable and a 2D unstable manifold or the other way around. A particularly interesting case is an unstable saddle-focus with the real eigenvalue and the real part of the complex conjugate pair having opposite signs. Shilnikov (1965) has shown that a horseshoe map and, consequently, a chaotic attractor can be generated by a three-variable nonlinear dynamical system when it has a homoclinic passing through a saddle-focus. If the complex pair of eigenvalues μ = γ ± iω has a negative real part γ, while the real eigenvalue λ is positive, a trajectory ejected along the unstable direction, lacking other attractors, returns and spirals in toward the fixed point as in Fig. 4.9a. If, on the contrary, γ > 0 while λ < 0, the same picture can be read in the opposite sense: a trajectory starting from the vicinity of the fixed point spirals out, but eventually returns along the stable direction.

Close to the fixed point, the dynamical system can be linearized. Suppose that the coordinate system spanning the 3D phase space has been transformed in such a way that the fixed point is placed at the origin, the eigenvector corresponding to the real eigenvalue is directed along the z-axis, and the 2D manifold spanned by the eigenvectors corresponding to the complex conjugate pair lies in the plane z = 0, which can be conveniently parametrized by the polar coordinates r, φ. Then the trajectory of the linearized system started at the moment t = 0 at a point with the coordinates r = r_0, z = z_0, φ = φ_0 can be presented as

z = z_0 e^{λt},   r = r_0 e^{γt},   φ = φ_0 + ωt.   (4.21)

In order to see how a phase volume is distorted close to the homoclinic, take a small cylinder 0 < z < Z ≪ 1, 0 < r < R ≪ 1 and consider the map upon the top of the cylinder z = Z, r < R of a narrow strip r = R, |φ| < φ_0, 0 < ζ_0 < z < ζ_1 ≪ Z on the rim, generated by flow with γ < 0, λ > 0. This strip, shaded in Fig. 4.9b, should be close to the entrance of the homoclinic trajectory into the vicinity of the fixed point. Eliminating time from (4.21) yields

r = R (Z/z)^{γ/λ},   φ = φ_0 + (ω/λ) ln(Z/z).   (4.22)

The image of any arc at the elevation z is a logarithmic spiral, and the mapping of the narrow section of the rim of the cylinder upon its top is a fattened spiral ("Shilnikov snake") seen in the same figure. The length of the spiral, as estimated by the total span of the angular variable, grows logarithmically without limit as z → 0, but the width shrinks much faster, proportionally to (z/Z)^{−γ/λ}, so that the area decreases.
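Equation (4.22) is just the linear flow (4.21) with the time of flight eliminated, which can be verified directly; a minimal sketch with arbitrarily chosen eigenvalues γ = −1, λ = 1/2, ω = 3 (illustration values, not from the text):

```python
import math

gam, lam, om = -1.0, 0.5, 3.0      # gamma < 0 < lambda, as in Fig. 4.9b
Z, R = 1.0, 1.0                    # top and rim of the small cylinder
z0, phi0 = 1e-4, 0.2               # starting point on the rim (r0 = R)

# Time of flight until the trajectory (4.21) reaches the top z = Z:
t = math.log(Z / z0) / lam

# Endpoint from the explicit solution (4.21)...
r_flow = R * math.exp(gam * t)
phi_flow = phi0 + om * t

# ...and directly from the time-eliminated map (4.22):
r_map = R * (Z / z0) ** (gam / lam)
phi_map = phi0 + (om / lam) * math.log(Z / z0)

# The angular span grows only logarithmically as z0 -> 0, while the
# radius shrinks as a power of z0, so the area of the "snake" decreases.
```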


Figure 4.9. (a): A typical trajectory close to the homoclinic. (b): Mapping of the shaded strip upon the top of the cylinder by a typical trajectory in the vicinity of the fixed point at γ < 0, λ > 0. (c): Mapping of the “snake” back upon the strip by global flow. (d): Mapping of the shaded sector upon the rim of the cylinder by flow with γ > 0, λ < 0

The "snake" image is mapped back upon the area encircling the fixed point by global flow, as seen in Fig. 4.9c. Inasmuch as the global flow is nonsingular (e.g., similar to that in Fig. 4.9a), it rotates and extends or shrinks the image but does not affect its topology. The transformation by the global flow can be assumed to be linear, as long as the image is small. The points of the segment mapped back upon it by the combined transformation due to the flow near the fixed point and global flow lie within the several strips seen in Fig. 4.9c. This invariant area is further cut, shrunk, and dispersed upon successive iterations, turning eventually into a fractal object of zero 2D volume. This is qualitatively similar to the Smale horseshoe map (Sect. 4.2.2) and should result likewise in the formation of a fractal chaotic attractor. The essential difference is that the number of strips increases without limit as the lower edge of the segment comes closer to the entrance point of the homoclinic trajectory. Thus, an infinite number of horseshoes are formed in the course of an infinite series of bifurcations as the homoclinic orbit is approached.

In the opposite case, γ > 0, λ < 0, one can construct in a similar way the map of the line z = Z, φ = 0, ρ < r < R, ρ ≪ R upon the rim of the cylinder generated by flow going down toward the fixed point and spiraling out:

z = Z (R/r)^{λ/γ},   φ = (ω/γ) ln(R/r).   (4.23)

The mapping is a helix, which again stretches without limit at r → 0. The mapping of the shaded sector in Fig. 4.9d upon the rim is a fattened helix, and the total area shrinks, as the length grows logarithmically, whereas the width shrinks according to the power law (r/R)^{−λ/γ}. The Smale horseshoe is constructed in this case in the same way as before. Shilnikov's snake can be realized in the unfolding of the degenerate Hopf-monotonic


bifurcation, which should have the general form

ȧ_1 = μ_0 + μ_1 a_1 − a_1³ + μ_2 |a_2|² + μ_3 a_1 |a_2|²,
ȧ_2 = a_2 (ν_0 + ν_1 a_1 + ν_2 a_1² − ν_3 |a_2|²),   (4.24)

where the amplitude a_1 and the parameters μ_i are real, while the amplitude a_2 and the parameters ν_i are complex. The trivial state is a saddle-focus when μ_0 and ν_0 have opposite signs, and runaway to infinity is prevented when the cubic terms are negative. Simpler ad hoc dynamical systems containing a saddle-focus are able to reproduce chaotic dynamics as well, but, lacking cubic terms, are fragile.

4.3.2

Complexity in Chaotic Models

Rössler's model. A simple dissipative system that generates global flow returning trajectories into the vicinity of a saddle-focus was devised by Rössler (1976):

ẋ = −y − z,   ẏ = x + μy,   ż = μ + (x − ν)z.   (4.25)

Two fixed points exist at ν > 2μ:

x_s^± = −μ y_s^± = μ z_s^± = (1/2)(ν ± √(ν² − 4μ²)).   (4.26)

The Jacobi matrix is

J = ( 0     −1    −1
      1      μ     0
      z_s    0    x_s − ν ).   (4.27)

At sufficiently small μ, both fixed points are saddle-foci. The "upper" one (marked by the plus sign) has a 1D unstable and a 2D stable manifold, and the "lower" one, the other way around. In the range of parameters explored in the accompanying Mathematica notebook Roessler.nb, the real eigenvalue is larger in absolute value than the real part of the complex pair for both stationary states. A nearly homoclinic orbit spiraling out of the lower fixed point and returning to it along its 1D stable manifold satisfies the conditions required to form Smale horseshoes. The Rössler model does not represent global dynamics realistically. Its trajectories escape to infinity from a large fraction of the phase space but may be trapped in the region adjacent to the lower fixed point. A periodic orbit found in this region at appropriate values of parameters undergoes a series of period-doubling bifurcations, terminating in a chaotic orbit, as shown in Fig. 4.10a–d. The Poincaré map generated by the chaotic Rössler attractor at ν = 5, with the plane x = 0 used as the Poincaré section, shown in Fig. 4.10e, is almost 1D, and so is the first-return map in Fig. 4.10f. When supplemented by a segment connecting it to the origin, it looks very much similar to the logistic map (Sect. 4.2.3).
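The fixed points (4.26) and the trapping of trajectories near the lower one can be checked with a few lines; a minimal sketch with a hand-rolled RK4 step, in plain Python instead of the accompanying Roessler.nb (the initial point and the run length are our choices):

```python
import math

mu, nu = 0.2, 5.0

def rhs(v):
    x, y, z = v
    return (-y - z, x + mu * y, mu + (x - nu) * z)

def rk4_step(v, h):
    k1 = rhs(v)
    k2 = rhs([v[i] + 0.5 * h * k1[i] for i in range(3)])
    k3 = rhs([v[i] + 0.5 * h * k2[i] for i in range(3)])
    k4 = rhs([v[i] + h * k3[i] for i in range(3)])
    return [v[i] + h / 6.0 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i]) for i in range(3)]

# Fixed points (4.26): x_s = -mu*y_s = mu*z_s = (nu +/- sqrt(nu^2 - 4 mu^2))/2
disc = math.sqrt(nu ** 2 - 4 * mu ** 2)
fixed = [(0.5 * (nu + s * disc), -0.5 * (nu + s * disc) / mu, 0.5 * (nu + s * disc) / mu)
         for s in (+1, -1)]

# The right-hand side vanishes at both points:
residuals = [max(abs(f) for f in rhs(p)) for p in fixed]

# A trajectory started near the origin stays trapped in the region
# adjacent to the lower fixed point instead of escaping to infinity.
v = [1.0, 1.0, 0.0]
for _ in range(20000):           # t = 200 with h = 0.01
    v = rk4_step(v, 0.01)
bounded = max(abs(c) for c in v) < 100.0
```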


Figure 4.10. (a)–(d): Trajectories of the Rössler system at μ = 0.2 and increasing ν: periodic orbit at ν = 2 (a); period-two orbit at ν = 3 (b); period-four orbit at ν = 4 (c); chaotic orbit at ν = 5 (d). (e): The Poincaré map in the chaotic region. (f): The next-return map in the chaotic region. The sign of the coordinate y is inverted, and the added segment is shown by the dashed line


Chua's circuit. The basic set-up of Chua's, as well as of Khaikin's, universal circuit includes a folded manifold, with one stationary state on each fold. The dynamics becomes interesting when these states are saddle-foci. Most studies of Chua's circuit, starting from Matsumoto et al. (1984), concentrated on a piecewise-linear geometry of the folds, but smooth geometry is more realistic. We consider the equations in the form

ẋ = α(y + μx − x³),   ẏ = x − y − z,   ż = −βy.   (4.28)

Unlike the Rössler system, (4.28) contains a cubic term preventing escape to infinity. At μ > 0, there are three stationary states, one trivial and two symmetric nontrivial ones, x = x_± = ±√μ, y = 0. The respective Jacobi matrices are

J_0 = ( αμ    α    0          J_± = ( −2αμ   α    0
         1   −1   −1                    1   −1   −1
         0   −β    0 ),                 0   −β    0 ).   (4.29)

The determinant Det(J_0) = −αβμ is negative when all parameters are positive, so that this state must be unstable, with a 1D unstable and a 2D stable manifold. The nontrivial states undergo a Hopf bifurcation at

β = β_c = α(2μ − 1)(2μα + 1),   α < 1/2.   (4.30)

At β < β_c, saddle-foci, conducive to chaotic dynamics, exist. The sequence in Fig. 4.11 shows a period-doubling cascade on a single fold at fixed α, μ and decreasing β, terminating in an apparently chaotic attractor seen in Fig. 4.12a (Messias et al., 2009). As the parameter β decreases a bit more, the two symmetric attractors on the two folds merge (Fig. 4.12b). The computations are reproduced in the accompanying Mathematica notebook ChuaCircuit.nb.

As usual, it is better to characterize chaotic dynamics by constructing Poincaré and next-return maps. Two of them, taken in the section x = √μ at the same values of β as in Fig. 4.12a,b, are presented in Fig. 4.12c,d. Already in the case of the one-fold chaos (Fig. 4.12c), the shape of the next-return map is more sophisticated than that of the logistic map or Rössler's system, with several extrema and a self-intersection. Although it is still almost 1D, it would be difficult to approximate it analytically. The points on the map spread chaotically in some region when the one-fold attractors merge (Fig. 4.12d). This spread is still more pronounced when the section x = 0 is chosen; this picture, and some more, are seen in the accompanying notebook. The shape of the next-return maps keeps changing as β decreases further, and not necessarily toward increasing complexity. Some of the maps are compact, others spread out. A period-three orbit is seen at β = 3.2. No wonder so many papers, some of them with full galleries of attractors, are written about this system. With some patience, a diagram similar to that in Fig. 4.8 can be constructed through a standard computer routine – but this would hardly bring more insight.
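The linear-stability statements above can be verified mechanically; a minimal sketch computing Det(J_0) and the Hurwitz combination for J_± (we assume the reconstructed third equation ż = −βy, the form consistent with the printed determinant −αβμ, and take the threshold formula (4.30) at face value; α and μ here are arbitrary positive test values):

```python
def det3(m):
    """Determinant of a 3x3 matrix."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

alpha, mu = 3.0, 1.0                                   # arbitrary positive test values
beta = alpha * (2 * mu - 1) * (2 * mu * alpha + 1)     # threshold (4.30)

J0 = [[alpha * mu, alpha, 0], [1, -1, -1], [0, -beta, 0]]
Jp = [[-2 * alpha * mu, alpha, 0], [1, -1, -1], [0, -beta, 0]]

# Det(J0) = -alpha*beta*mu, so the trivial state is unstable:
d0 = det3(J0)

# Characteristic polynomial of J_+/-: s^3 + a s^2 + b s + c, with
a = 2 * alpha * mu + 1                # minus the trace
b = 2 * alpha * mu - alpha - beta     # sum of principal 2x2 minors
c = -det3(Jp)                         # minus the determinant
# The Hurwitz combination a*b - c changes sign exactly at beta = beta_c:
hurwitz = a * b - c
```

Expanding a·b − c symbolically gives (2αμ + 1)(2αμ − α) − β, which vanishes precisely at the value quoted in (4.30).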


Figure 4.11. Orbits of the smooth Chua circuit model at α = 3, μ = 1/4. (a): The periodic orbit at β = 3.5. (b): The orbit at β = 3.4, following the period-doubling bifurcation. (c): The period-four orbit at β = 3.373. (d): The period-eight orbit at β = 3.371


Figure 4.12. Chaotic orbits of the smooth Chua circuit model at α = 3, μ = 1/4. (a): The chaotic attractor at β = 3.35 surrounding the upper stationary state. (b): Merged chaotic attractors at β = 3.33. (c,d): Next-return maps in the section x = √μ corresponding to (a,b)

4.3.3

Lorenz Model

The sequence of bifurcations. Historically, the first example of chaos in a dissipative system was given by Edward Lorenz (1963). The Lorenz system was originally derived as a qualitative model of thermal convection and included the amplitudes of just three


modes in the Fourier expansion of the equations for the flow and temperature fields:

ẋ = σ(y − x),   ẏ = ρx − y − xz,   ż = xy − βz.   (4.31)

This third-order system, though looking deceptively simple, retains the qualitative wealth of behavior of the original hydrodynamic equations, simulating the transition to turbulence at high Rayleigh numbers in the manner of a masterly caricature. At ρ = 1, the system undergoes a pitchfork bifurcation: the trivial stationary state, which is the only solution that exists below this value, becomes unstable, and two stable symmetric nontrivial stationary states emerge:

x_s = y_s = ±√(β(ρ − 1)),   z_s = ρ − 1.   (4.32)

This models the situation when the quiescent state of a convection cell becomes unstable; the two nontrivial states correspond to two opposite directions of circulation, which are totally equivalent but exclusive. At ρ > 1, the trivial equilibrium becomes a saddle with a 1D unstable and a 2D stable manifold. The nontrivial equilibria undergo a Hopf bifurcation at

ρ = σ(3 + σ + β)/(σ − 1 − β),   σ > 1 + β.   (4.33)

Beyond the Hopf bifurcation locus, the nontrivial fixed points are saddle-foci. We shall further fix the commonly adopted values β = 8/3, σ = 10 and follow the parametric evolution at increasing ρ. Some pictures of trajectories, reproduced in the accompanying Mathematica notebook Lorenz.nb, are collected in Fig. 4.13. One expects dynamic behavior to be interesting in the parametric region where all equilibria are unstable. In fact, the dynamics becomes nontrivial even at values of ρ below the Hopf bifurcation. A homoclinic orbit, comprising two symmetric loops going out of the trivial fixed point and returning after going around either nontrivial stationary state, as shown in Fig. 4.13a, exists at ρ ≈ 13.9265. Below this value, trajectories going out of the trivial equilibrium either way along the unstable direction (distinguished by color in the picture) approach the closest nontrivial equilibrium. Above this point, the trajectories turn around and converge to the nontrivial equilibrium of the opposite sign, as shown in Fig. 4.13b. The homoclinic butterfly.
This is a special kind of bifurcation, called a homoclinic butterfly (Shilnikov et al., 2001), with two branches of the stable manifold returning to the saddle tangentially to each other. Generically, this bifurcation is of codimension two, but it is of codimension one in this particular case because of the symmetry of the Lorenz system. As ρ increases further, a series of homoclinic bifurcations takes place, and trajectories may wander around for a long time, as shown in Fig. 4.13c, before coming to an equilibrium point. The principal distinction from the two models described above is that a cascade of homoclinic bifurcations rather than a period-doubling cascade leads to chaos. At some values of ρ, chaotic attractors may coexist with stable fixed points, and the difference between the pictures in Fig. 4.13c,d taken


Figure 4.13. Trajectories of the Lorenz system at β = 8/3, σ = 10 and ρ = 13.92 (a), ρ = 14 (b), ρ = 24.7 (c), and ρ = 28 (d). In (c,d), initial stages of the evolution are omitted
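The stationary solutions (4.32), the Hopf threshold (4.33), and the erratic switching between the two wings can all be checked directly; a minimal sketch with a hand-rolled RK4 step, in plain Python instead of the accompanying Lorenz.nb (the initial point and the run length are our choices):

```python
import math

sigma, beta, rho = 10.0, 8.0 / 3.0, 28.0

def rhs(v):
    x, y, z = v
    return (sigma * (y - x), rho * x - y - x * z, x * y - beta * z)

def rk4_step(v, h):
    k1 = rhs(v)
    k2 = rhs([v[i] + 0.5 * h * k1[i] for i in range(3)])
    k3 = rhs([v[i] + 0.5 * h * k2[i] for i in range(3)])
    k4 = rhs([v[i] + h * k3[i] for i in range(3)])
    return [v[i] + h / 6.0 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i]) for i in range(3)]

# Nontrivial stationary states (4.32) annihilate the right-hand side:
xs = math.sqrt(beta * (rho - 1.0))
residual = max(abs(f) for f in rhs((xs, xs, rho - 1.0)))

# Hopf threshold (4.33); for these sigma, beta it is approximately 24.7368:
rho_hopf = sigma * (3.0 + sigma + beta) / (sigma - 1.0 - beta)

# At rho = 28 the orbit switches erratically between the two wings,
# which shows up as repeated sign changes of x:
v, switches = [1.0, 1.0, 1.0], 0
for n in range(50000):            # t = 500 with h = 0.01
    prev = v[0]
    v = rk4_step(v, 0.01)
    if prev * v[0] < 0.0:
        switches += 1
```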

below and above the Hopf bifurcation is not striking; one can only take note of a more uniform color close to the stable stationary state in the former case. Above the Hopf bifurcation at ρ ≈ 24.7368, trajectories coming to the vicinity of the nontrivial equilibria spiral out, and switch erratically between circling either the positive or the negative fixed point. Nevertheless, the dynamics is not uniformly chaotic, and thorough studies (Sparrow, 1982) discern closed orbits and period-doubling bifurcations


within the turbulent region. Indeed, chaos depends on the parameters of the problem in a chaotic way, as it also does in other problems: different kinds of orbits crowd the parametric space as devils on a needle head. The shape of the attractors in Fig. 4.13 resembles a butterfly's wings. Perhaps this prompted Lorenz's famous question in his talk at a 1972 scientific conference: "Does the flap of a butterfly's wings in Brazil set off a tornado in Texas?" Being a meteorologist, Lorenz put forward the results of his numerical study of chaos in this apparently simple system as a proof of the unpredictability of weather. His colleagues were not impressed, and the work remained obscure for about 15 years until studies of chaotic behavior came into fashion.

Poincaré and next-return maps. A better picture of long-time chaotic behavior can be seen with the help of Poincaré and next-return maps. An appropriate Poincaré section is the plane z = z_s = ρ − 1. The two pictures in Fig. 4.14, constructed in the accompanying notebook, differ by the chosen direction of trajectories, either toward increasing or decreasing z. As in the case of the Rössler model, both Poincaré maps in the chaotic region are almost 1D, and even almost rectilinear, but they consist of two separated parts. We will further construct a next-return map based on the map in Fig. 4.14b. Upwardly directed trajectories never cross the interval between the projections of the two nontrivial fixed points on the x, y plane, and the map can be made more compact by measuring the positions relative to the nearest equilibrium point. The respective next-return map with the empty interval excluded is shown in


Figure 4.14. Poincaré maps of the Lorenz system in the chaotic region (β = 8/3, σ = 10, ρ = 28). The two pictures differ by the chosen direction of trajectories



Figure 4.15. (a): The next-return map of the Lorenz system in the chaotic region (β = 8/3, σ = 10, ρ = 28). The red curve shows the approximate map (4.34), and the vertical red lines show the locations of the discontinuities. (b): A chaotic sequence generated by the map (4.34)

Fig. 4.15a. It is very much different from both logistic and Rossler ¨ map, as it has a discontinuity corresponding to the point where the orbit switches between the two parts of the attractor encircling the two alternative saddle-foci. The map is well approximated by the function f (x) = x − [(γ n − xn )1/n − γ]sign x.

(4.34)

With the parameter values n = 4, γ = 4.726, this map, shown by the red curve in Fig. 4.15a, closely approximates the computed next-return map. All computations can be carried out very efficiently with the help of the approximate map (4.34). One can compute very long sequences in this way, such as the one in Fig. 4.15b, and study the statistics of the output. One can see that the number of consecutive points in either part of the attractor, dependent on the proximity of the "injection point" to xs, forms a randomly intermittent sequence, and build up statistics of this sequence. It is evident, in particular, in Fig. 4.15b that sequences of small shifts alternate with large jumps and switches to an alternative part of the attractor.

A model discontinuous map. The disadvantage of this approach is, of course, that one has to build up an approximate map anew for each combination of the parameters by integrating the original equations. However, essential features of the dynamics of the Lorenz system can be demonstrated with the help of a simple model map, much in the same way as the dynamics of the Rössler model is imitated by the logistic map.

CHAPTER 4. CHAOTIC, FORCED, AND COUPLED OSCILLATORS

A suitable map defined on the unit interval is the following discontinuous map, defined for positive x, γ, μ and antisymmetric with respect to the point x = 1/2:

f(x) = γ + μx²  at x < 1/2,  f(x) = 1 − γ − μ(1 − x)²  at x > 1/2.  (4.35)

The saddle-node bifurcation takes place in both regions at γμ = 1/4. The emerging pair of stationary states is

x = [1 ± √(1 − 4γμ)]/(2μ)  at x < 1/2,  (4.36)
x = 1 − [1 ± √(1 − 4γμ)]/(2μ)  at x > 1/2.  (4.37)

If there were no discontinuity, a saddle-node pair would emerge. Indeed, for the higher stationary state at x < 1/2, or the lower state at x > 1/2, the derivative of the map f′(x) exceeds unity, so that they are unstable, while their counterparts in both regions are stable. However, at γ < (2 − μ)/4, μ < 2, the unstable state falls into the "wrong" region, so that only a single stable fixed point exists in either region. Between this line and the saddle-node bifurcation locus, pairs of fixed points exist, and none are found at γμ > 1/4. The number of fixed points in either half-interval is shown in Fig. 4.16a.

The parametric evolution of the attractor (for trajectories originating just above x = 1/2) at γ = 0.08 and increasing values of μ is shown in Fig. 4.16b. This particular line γ = const passes through the areas with 1, 2, and 0 fixed points as μ increases. In the area 1, the trajectories are attracted to the nearest fixed point. As the border of the area 2 is crossed, the change in dynamics is not straightforward. The entering unstable fixed points serve in the subarea 2c as domain boundaries between a chaotic


Figure 4.16. (a): The bifurcation diagram of the map (4.35), showing the number of fixed points in each area. (b) Evolution with growing μ of the attractor attained from an initial condition x0 > 1/2 (γ = 0.08)



Figure 4.17. Successive iterations (a) and the resulting sequence of positions (c) in the case of a long chaotic transient (μ = 3.1, γ = 0.07). Successive iterations (b) and the resulting sequence of positions (d) in the case of transition to chaos through intermittency (μ = 3, γ = 1/12 + 0.001)

attractor and a stable fixed point lying in an alternative half-interval, similar to the behavior of the Lorenz system following the homoclinic connection. As μ increases further, the extent of the chaotic attractor grows before it collapses and the fixed point in the alternative half-interval becomes the only attractor in the subarea 2a. Finally, after the locus of the saddle-node bifurcation is crossed, the large-amplitude chaotic attractor prevails in the area 0. Long transients are observed near bifurcation loci. An example of a long chaotic transient preceding escape from the trapped region to the attraction basin of a stationary state through a small opening is shown in Fig. 4.17a,c. On the other hand, near the border of the chaotic region, just after the saddle-node bifurcation, some


trajectories may be trapped for a long time in the “shadow” of a former fixed point where dynamics slows down, even though the stable fixed point has already disappeared. The positions after successive iterations of the map are shown in Fig. 4.17b,d. This is known as the intermittency route to chaos (Manneville and Pomeau, 1979).
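The model map is reproduced by direct iteration. In the minimal Python sketch below, the quadratic branch is taken as γ + μx² — the sign that makes the map antisymmetric about x = 1/2 and consistent with the fixed points (4.36)–(4.37); this sign is an editorial reading of the garbled printed formula. Parameter values are those of Fig. 4.17a,c:

```python
def model_map(x, gamma=0.07, mu=3.1):
    """The discontinuous model map (4.35), antisymmetric about x = 1/2."""
    if x < 0.5:
        return gamma + mu * x * x
    return 1.0 - gamma - mu * (1.0 - x) ** 2

def orbit(x0, n, gamma=0.07, mu=3.1):
    xs = [x0]
    for _ in range(n):
        xs.append(model_map(xs[-1], gamma, mu))
    return xs

# the stable fixed point of the left branch, eq. (4.36) with the minus sign
fp = (1.0 - (1.0 - 4.0 * 0.07 * 3.1) ** 0.5) / (2.0 * 3.1)

xs = orbit(0.45, 2000)                         # an orbit in the wandering region
visits_left = sum(1 for x in xs if x < 0.5)    # switches between half-intervals
```

An orbit started near the discontinuity wanders between the two half-intervals before it is eventually captured by one of the stable fixed points, which is the transient pictured in Fig. 4.17a,c.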

4.4 Weakly Forced Oscillators

4.4.1 Phase Perturbations

Sensitivity of the phase. Oscillations may be perturbed by external forcing. For example, the pivot of a pendulum may vibrate or, in a far more complicated setting, biochemical oscillations may be affected by the circadian rhythm, or population dynamics by the change of seasons. The effects of perturbations on conservative and dissipative systems are substantially different. A typical conservative system has a continuum of oscillating solutions with the amplitude and period dependent on energy. External forcing pumps energy into the system and may strongly affect it even when it is weak, modifying the oscillation amplitude as well as the phase. This may have dire consequences: ordered oscillations may become chaotic or even, as energy keeps increasing, the system may be driven to breakdown. External energy input should be compensated by dissipation, which is always present in macroscopic physical systems, but even then nonlinearities are essential for stabilization. By contrast, the amplitude of a single nonlinear oscillator in a dissipative system operating far from equilibrium is rigidly fixed by the underlying dynamics, while the phase θ can be shifted arbitrarily. Weak external forcing just slightly perturbs a stable periodic orbit by shifting its phase. Only when the effect of the forcing is comparable to that of the internal dynamics may the system be driven to an alternative attractor or to a chaotic state.

Phase dynamics. The phase of an oscillatory system can always be defined in such a way that it grows uniformly in time, θ̇ = ω₀, where ω₀ = 2π/T is the frequency and T is the period. Formally, for a dynamical system in the general form u̇ = f(u), the phase equation can be derived from the relation

θ̇ = Σ_i (∂θ/∂u_i) f_i(u) = ω₀.  (4.38)

If the system is subject to a weak external perturbation εf̃(t), ε ≪ 1, the dynamic variables are also weakly perturbed by some vector εũ(t), and the phase equation becomes

θ̇ = ω₀ + ε Σ_{i,j} (∂θ/∂u_i) [(∂f_i/∂u_j) ũ_j(t) + f̃_j(t)] ≡ ω₀ + ε V(t, θ).  (4.39)


The phase dynamics can be followed stroboscopically by recording phase increments over the forcing period. This generates a circle map of the kind

θ_{n+1} = θ_n + Q(θ_n).  (4.40)
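The map (4.40) leaves Q(θ) unspecified; a standard concrete choice for illustration — an assumption here, not taken from the text — is Q(θ) = Ω̃ + K sin θ, the sine circle map, which already exhibits the frequency locking discussed below. A minimal Python sketch (the book's own computations use Mathematica):

```python
import math

def winding_number(Omega, K, n_iter=20000, n_skip=1000, theta0=0.0):
    """Iterate the sine circle map theta -> theta + Omega + K*sin(theta)
    without wrapping, and estimate the mean phase advance per step
    in units of 2*pi (the winding, or rotation, number)."""
    theta = theta0
    for _ in range(n_skip):                  # discard the transient
        theta += Omega + K * math.sin(theta)
    start = theta
    for _ in range(n_iter):
        theta += Omega + K * math.sin(theta)
    return (theta - start) / (2.0 * math.pi * n_iter)

w_locked = winding_number(Omega=0.3, K=0.5)  # |Omega| < K: phase locks
w_drift = winding_number(Omega=0.8, K=0.5)   # |Omega| > K: phase drifts
```

In the locked case the unwrapped phase converges to a fixed point, so the winding number vanishes; in the drifting case it advances at a nonzero mean rate.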

Such maps can be formally constructed even when the perturbation is not weak; they have a complex dynamics and have been extensively studied by mathematicians. The problem is that a strong perturbation affects not just the phase but also the amplitude of the underlying oscillations, and a circle map may be relevant for the description of a physical system only under very special conditions, mostly when the perturbations are pulse-like rather than continuous. Even a weak perturbation may have a strong effect when operating over many oscillation cycles, but then the phase evolution is better described by a differential equation on a slow time scale.

Resonances. The function V(t, θ) in (4.39) is 2π-periodic in both its arguments, and can be expanded in a double Fourier series:

V(t, θ) = Σ_{m,n} a_{mn} e^{i(mθ + nωt)} = Σ_{m,n} a_{mn} e^{i(mω₀ + nω)t}.  (4.41)

In the last expression, the phase is replaced by its unperturbed (zero-order) value θ₀ = ω₀t. Among the multitude of terms of this formal expansion, one can distinguish between almost resonant terms with mω₀ + nω = O(ε) ≪ 1 and other terms rapidly changing on the original time scale. The rapidly oscillating terms can cause only O(ε) phase perturbations and vanish upon averaging, so that only terms close to resonances affect the phase evolution over the long time scale τ = εt. The above formula only serves to demonstrate the concept; in practical computations, averaging over the period can be carried out without expanding in a Fourier series, as we shall see in the sequel.

In the simplest case, the prevailing forcing frequency ω is close to ω₀. Then only terms with m = −n in (4.41) need to be retained. The averaged forcing can then be expressed as a certain function q(φ) of the phase difference φ = θ − ωt. Replacing accordingly θ by φ in (4.39) yields the long-scale evolution equation

dφ/dτ = Ω + q(φ),  Ω = ε⁻¹(ω₀ − ω).  (4.42)

This expression can be generalized to higher-order resonances. If the ratio of the frequencies of the forcing and the oscillator is a rational number, ω/ω₀ = m/n, the same long-scale equation (4.42) can be obtained for the generalized resonance phase and frequency mismatch

φ_{mn} = mθ − nωt,  Ω = ε⁻¹(mω₀ − nω),  (4.43)

with the function q_{mn}(φ) dependent on m, n.

4.4.2 Forced Harmonic Oscillator

A model example. Practically, (4.38), (4.39) are hard to disentangle to arrive at the phase equation in an explicit form and to compute the functions V(t, θ) and q(φ). This task is greatly simplified when oscillations are close to harmonic. A ready example is provided by the amplitude equation derived in the vicinity of a Hopf bifurcation, where the oscillation amplitude is fixed by (2.43), while the phase grows linearly with time according to (2.42). We will further use the rescaled form (2.44) with the stationary amplitude ρ = 1 and the frequency ω₀ = μ − ν. Adding now a weak harmonic forcing with a different frequency ω, the amplitude equation takes the form

ȧ = a[(1 + iμ) − (1 + iν)|a|²] + ε e^{iωt},  (4.44)

or, after transforming to the polar form,

ρ̇ = (1 − ρ²)ρ + ε cos(ωt − θ),  (4.45)
θ̇ = μ − νρ² + ε ρ⁻¹ sin(ωt − θ).  (4.46)

The perturbed amplitude is ρ = 1 + ερ̃, and the equation of ρ̃ is obtained by linearizing (4.45):

dρ̃/dt = −2ρ̃ + cos(ω − ω₀)t.  (4.47)

Omitting an exponentially decaying term depending on initial conditions, it is integrated to

ρ̃(t) = [2 cos(ω − ω₀)t + (ω − ω₀) sin(ω − ω₀)t] / [4 + (ω − ω₀)²].  (4.48)

If the basic and perturbation frequencies are nearly resonant, so that mω₀ − nω = εΩ ≪ 1 with integer m, n, this expression reduces in the leading order to a function

Ω  1 with integer m, n, this expression reduces in the leading order to a function (a)

(b) / 



 













-

- 

 

+

-

Figure 4.18. (a) The synchronization interval of a forced harmonic oscillator. (b) Evolution of the resonance phase φ (modulo 2π) at χ = 1.1 (blue curve) and χ = 1.5 (orange curve)


of the slow time variable τ = εt:

ρ̃(τ) = ½ cos Ωτ + O(ε).  (4.49)

The phase perturbation is determined by both the amplitude perturbation and the imaginary part of the forcing. Using (4.49) in (4.46) and extracting the phase ωt of the external forcing yields in the leading order the equation of the resonance phase φ(τ) = mθ − nωt:

dφ/dτ = Ω − (ν cos Ωτ + sin Ωτ).  (4.50)

Taking note that the argument of the trigonometric functions coincides in the leading order with the resonance phase φ, we observe that this equation has the form (4.42) with

q(φ) = −(ν cos φ + sin φ) = −√(1 + ν²) cos(φ − α),  (4.51)

where α = arctan(1/ν).

Resonances. Rescaling the slow time variable by the factor √(1 + ν²) and omitting the constant phase shift α reduces (4.50) to the essential form

dφ/dτ = χ − cos φ  (4.52)

containing the parameter χ = Ω/√(1 + ν²) that determines the dynamics of the phase mismatch. At |χ| < 1, the evolution equation of φ has two fixed points φs = ±arccos χ (see Fig. 4.18a), of which the one with sin φs < 0 is stable. This stable stationary solution corresponds to the frequency-locked state with a constant resonance phase dependent on the parameters ν and Ω. The width of the synchronization interval grows with the perturbation strength and the nonlinear frequency shift and shrinks with the frequency mismatch; the two fixed points, separated in phase by 2 arccos χ, merge and vanish at |χ| = 1.

Figure 4.19. Arnold tongues of a harmonic oscillator

Outside the synchronization interval, the resonance phase rotates with a variable speed following (4.52). This equation can be integrated analytically to yield

φ + α = 2 arctan[√((χ − 1)/(χ + 1)) tan(½ √(χ² − 1) τ)].  (4.53)

The resonance phase φ (modulo 2π) evolves as shown in Fig. 4.18b. As the parameter χ increases, the resonance phase grows faster and more uniformly.
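Equation (4.52) is a one-liner to integrate numerically. The Python sketch below confirms locking at cos φ = χ for |χ| < 1 and, for |χ| > 1, a mean rotation rate √(χ² − 1), which follows from the rotation period implied by (4.53):

```python
import math

def integrate_phase(chi, tau_end=200.0, dt=1e-3, phi0=0.0):
    """Forward-Euler integration of dphi/dtau = chi - cos(phi), eq. (4.52)."""
    phi = phi0
    for _ in range(int(tau_end / dt)):
        phi += dt * (chi - math.cos(phi))
    return phi

phi_locked = integrate_phase(0.5)            # |chi| < 1: locks at cos(phi) = chi
mean_rate = integrate_phase(1.5) / 200.0     # |chi| > 1: rotation, ~ sqrt(chi^2 - 1)
```

In the locked case the trajectory ends on the stable fixed point (the root with sin φ < 0); in the rotating case the finite-time mean rate is within one turn of the analytic value √1.25 ≈ 1.118.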


The width of the synchronization interval is expressed through the generalized phase mismatch Ω in the same way for all resonances m, n as before, but recalling the explicit expression (4.43) shows that it narrows with n at a fixed forcing strength:

|ω − (m/n) ω₀| < ε √(1 + ν²)/n.  (4.54)

The triangles going up from the resonance points, as shown in Fig. 4.19, are called Arnold tongues. They narrow with increasing n, and become curvilinear at stronger forcing, when oscillations cease to be harmonic and the approximation we are using is no longer applicable. Dynamics becomes erratic, developing what is called "stochastic instability," when resonances overlap.

4.4.3 Weakly Forced Hamiltonian System

Phase dynamics of a forced conservative system is treated in a way similar to Sect. 4.1.2, the difference being that the source of perturbations is now external. A weakly perturbed Hamiltonian, expressed through the action–angle variables J, θ as in (4.1), can be written in the simplest case of a single degree of freedom as

H = H₀(J) + ε V(J, θ, t),  ε ≪ 1.  (4.55)

The Hamiltonian equations of motion (4.1) then transform to

J̇ = −ε ∂V/∂θ,  θ̇ = ω₀ + ε ∂V/∂J,  (4.56)

where ω₀ = ∂H₀/∂J is the unperturbed oscillation frequency. As in Sect. 4.4.1, the perturbation is expanded in a double Fourier series (4.41), and the change of the phase over a long time is of O(1) only when resonances satisfying mω₀ − nω ∼ O(ε) with m, n integer are present. In the case of a single resonance, retaining only the resonant harmonic, with an amplitude V₀(J) and a phase ψ, reduces the dynamic equations (4.56) to

J̇ = −ε m V₀ sin(mθ − nωt + ψ),  θ̇ = ω(J) + ε (∂V₀/∂J) cos(mθ − nωt + ψ).  (4.57)

Clearly, the action variable, similar to the real amplitude in the above example, changes by an O(ε) increment only, and can be expanded as J = J₀ + J̃. Accordingly, the frequency can be expanded as

ω(J) = ω₀ + ω′J̃,  ω′ = dω/dJ |_{J = J₀}.  (4.58)

Introducing, similar to (4.43), the resonance phase φ = mθ − nωt + ψ and the frequency mismatch Ω leads to the equation of φ

dφ/dt = εΩ + m [ω′J̃ + ε (∂V₀/∂J₀) cos φ].  (4.59)


Figure 4.20. Perturbed trajectories at 1:1 (a), 1:3 (b), and 1:5 (c) resonance. The unperturbed orbit is shown by the dashed circle

Eliminating terms of a higher order in ε brings the perturbation equations to the form

dJ̃/dt = −ε m V₀ sin φ,  dφ/dt = m ω′J̃.  (4.60)

These equations have a Hamiltonian form,

dJ̃/dt = −∂H̃/∂φ,  dφ/dt = ∂H̃/∂J̃,  (4.61)

derivable from the perturbed Hamiltonian

H̃ = ½ m ω′J̃² − ε m V₀ cos φ.  (4.62)

This chain of transformations has brought us back to the pendulum equation for the resonance phase:

φ̈ + κ sin φ = 0,  κ = ε m² V₀ ω′.  (4.63)

The dimensionless form (1.16) is recovered after rescaling time, and the basic frequency of small-amplitude oscillations is √κ. The perturbed orbits showing "chains of islands" bounded by homoclinics are shown in Fig. 4.20.

The story does not end here. The presence of hyperbolic points (which correspond to unstable equilibria of a pendulum) and connecting separatrices in (4.63), seen in Fig. 4.20, makes the evolution of the resonance phase in conservative systems substantially different from its steady advance in (4.52). In the vicinity of a hyperbolic point, oscillations of the resonance phase slow down, and higher-order terms, neglected in the above expansion, become substantial. A small perturbation is sufficient to switch between oscillation and rotation. Rather than joining smoothly, stable and unstable manifolds of hyperbolic points intersect infinitely many times (recall Fig. 4.1), each intersection being another homoclinic point, leading to a chaotic tangle.


This derivation could be repeated for an unforced system with two degrees of freedom, where the same pendulum equation can be derived for an appropriate resonance phase, and the same kind of tangle arises near hyperbolic points. This problem is treated in detail in the review by Chirikov (1979), as well as in the monograph by Lichtenberg and Lieberman (1983, 1992).

4.5 Effects of Strong Forcing

4.5.1 Universal and Standard Mappings

Stroboscopic map. As usual, chaotic dynamics is best revealed by Poincaré maps, and it is natural to construct them stroboscopically, by marking the change of the dynamic variables over the period T of the forcing. This is done in the most straightforward way when the perturbation is pulse-like. The Hamiltonian of a conservative system with one degree of freedom subject to pulses repeated with the period T = 2π/Ω is written in the form containing Dirac's δ-function:

H = H₀(p) + f(φ) Σ_n δ(Ωt/(2π) − n).  (4.64)

Between the kicks, a particle is in free motion, so that the momentum p is constant, and the coordinate changes between the kicks by ω(p)T. The dynamics is therefore reduced to the discrete map

p_{n+1} = p_n − f′(φ_n) T,  φ_{n+1} = φ_n + ω(p_n) T.  (4.65)

Kicked rotator. This "universal mapping" can be applied to cases when either the momentum or the coordinate, or both, are defined either on a line or on a circle, i.e., are either unbounded or periodic, and the kind of dynamics it generates strongly depends on the function f(φ). In the particular case ω(p) = p, f(φ) = κ cos φ, (4.65) with the unit period reduces to

p_{n+1} = p_n + κ sin φ_n,  φ_{n+1} = φ_n + p_n.  (4.66)

This mapping is called the standard, or Taylor–Chirikov, mapping. The idealized device implementing this mapping is a kicked rotator: a weightless stick rotating frictionlessly in a horizontal plane around a vertical axis located at one of its tips, and periodically kicked at the other tip. In the continuous limit at κ → 0, (4.66) reduces to the same equation of a nonlinear pendulum (4.63). While κ is small, trajectories starting from different initial conditions are periodic orbits with different rotation numbers r, as shown in Fig. 4.21a. The orbit with r equal to the golden ratio is broken first at κ ≈ 0.971635, dispersing into a chaotic cloud (Fig. 4.21b). Chaotic regions spread out as κ increases further (Fig. 4.21c). The



Figure 4.21. (a-c): Trajectories of the standard mapping in the plane φ (modulo 2π), p at κ = 0.6 (a), κ ≈ 0.971635 (b), and κ = 1.2 (c). (d-f) A single trajectory at κ = 2 under successive magnifications


developed chaos is structured in a sophisticated way, as seen in Fig. 4.21d-f, showing a single trajectory at κ = 2. The most densely covered region around p = φ = π is successively magnified in Fig. 4.21e,f. The last picture required half a million iterations. The points are color-coded in such a way that the color changes from red to yellow to green to blue to purple as iterations progress; the purple color prevails in dense regions as points overlap.

Kicked pendulum. A practical situation where a mapping of this kind may emerge is sampled-data proportional control under conditions when only a few variables can be monitored (Pismen, 1980). As a simple example, consider a pendulum which the controller, capable of observing at unit intervals only its position but not its velocity, aims to bring to its equilibrium position at φ = 0. Being ignorant of the pendulum's dynamics, the controller tries to correct the trajectory by pushing in what is conceived as a proper direction. The applied force is proportional to κ sin φ, exactly as in the standard mapping. Taking account of the dynamics of the pendulum between the kicks leads to a mapping more complicated than (4.66), and, rather than using the analytical solution of the pendulum equations expressed through elliptic functions, it is easier to integrate them numerically in the interval between kicks, as is done in the accompanying Mathematica notebook Forced.nb.

It is no surprise that this kind of control does not lead to stabilization. Rather than reducing the amplitude of oscillations, it adds energy to the system and distorts phase trajectories. As the control parameter increases, the pendulum reaches the hyperbolic point separating undulations from rotation, and a chaotic layer appears in its vicinity, as seen in Fig. 4.22.
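The standard mapping itself is cheap to iterate directly. A hedged Python sketch (note one assumption: the rotation step below uses the post-kick momentum, the usual area-preserving convention for kicked systems, whereas (4.66) is printed with p_n):

```python
import math

def standard_map_orbit(phi0, p0, kappa, n):
    """Iterate the standard mapping: kick the momentum, then rotate;
    the angle is wrapped to [0, 2*pi)."""
    phi, p = phi0, p0
    out = []
    for _ in range(n):
        p += kappa * math.sin(phi)
        phi = (phi + p) % (2.0 * math.pi)
        out.append((phi, p))
    return out

orbit_int = standard_map_orbit(0.3, 0.7, 0.0, 1000)     # kappa = 0: p conserved
orbit_chaos = standard_map_orbit(0.1, 0.1, 1.2, 10000)  # chaotic separatrix layer
```

At κ = 0 the momentum is an exact invariant (the integrable limit); at κ = 1.2, above the breakup of the golden KAM curve, an orbit started near the hyperbolic point wanders through a wide chaotic layer.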


Figure 4.22. (a): Mapping trajectories of the kicked pendulum at the initial energy E0 = 1/2, κ = 1.28, and control period π/2. (b): Magnification of the chaotic region


Although this example is rather artificial, chaos of this kind may be typical for regulation of complex systems where only a few variables can be monitored. Such a system (which may be anything from a chemical reactor to a national economy) often cannot be left alone for a period long enough to recognize its inherent dynamic features, but chaos induced by control smears the intrinsic pattern beyond recognition. This is a classical vicious circle, where ignorance begets chaos and chaos perpetuates the ignorance. Continuous control may not be as disastrous, but we have to be aware that regulation generally makes the system more complex by introducing additional variables. As the dimensionality of the phase space increases, so does that of an asymptotic orbit, and, in the worst case, ordered oscillations may give way to chaos.

4.5.2 Forced Dissipative Oscillators

Van der Pol and Duffing oscillators. Periodic forcing, which we treated as a weak perturbation in Sect. 4.4, is common in dissipative as well as conservative systems, and may also generate chaos as it strengthens. There are no insights into transitions between different kinds of dynamics in systems of this kind beyond numerical experiments, where the evolution is followed in a stroboscopic way by plotting the values of φ, φ̇ at time moments equal to multiples of the forcing period. The early example of chaotic behavior in a periodically forced dissipative system was the forced van der Pol oscillator (1.71) (Cartwright and Littlewood, 1945):

φ̈ = μφ̇(1 − φ²) − φ + β cos ωt.  (4.67)


Figure 4.23. (a): Stroboscopic map of the forced van der Pol oscillator at β = ω = 1, μ = 6. (b): Stroboscopic map of the Duffing equation at ω = 1, γ = 0.2, β = 0.3


Some examples of stroboscopic maps at β = ω = 1 are presented in the accompanying Mathematica notebook Forced.nb. At μ ≲ 2.6 the attractor is a single point; this gives way to a 1D attractor, which remains qualitatively more or less unchanged as μ grows, with continuous contours of about the same shape prevailing. A typical attractor within this interval is shown in Fig. 4.23a. At μ ≳ 7, the attractor breaks down into all kinds of orbits with integer periods. Here, as well as in other models of this kind, scanning the parametric space can uncover qualitative changes of behavior caused by tiny shifts of parameters. Another example of a forced dissipative oscillator is the Duffing equation (1.28) derived in Sect. 1.1.5, where, unlike (4.67), the dissipative term is linear, while the nonlinear term is coordinate-dependent:

φ̈ = γφ(1 − φ²) − φ̇ + β cos ωt.  (4.68)

An example of an apparently chaotic attractor from the same notebook is presented in Fig. 4.23b.

Parametric forcing. The dissipative parametrically forced pendulum has variegated dynamics, including a stable inverted equilibrium at φ = π, integer-period orbits, and chaos. It is reported that it may have up to four competing attractors at a certain set of parameters (Bartuccelli et al., 2001). The dynamic equation is

φ̈ = −γφ̇ − (ω² − β cos t) sin φ.  (4.69)

Some numerical tests at ω = ½(√5 − 1) (the golden ratio) and γ = 0.2 under different forcing strengths β are shown in Fig. 4.24. At β ≲ 0.28, φ = 0 is the stable equilibrium,


Figure 4.24. Stroboscopic maps of the dissipative parametrically forced pendulum at ω = ½(√5 − 1) and γ = 0.2, β = 0.42 (a); γ = 0.025, β = 0.42 (b)


but above this point a period-two orbit appears, which prevails till the hyperbolic point φ = π is approached at β ∼ 0.42 (Fig. 4.24a), which causes an apparently chaotic attractor to emerge. Beyond this point, undulations give way to rotation, but the form of the attractor changes only slightly when φ is shifted by π and plotted modulo 2π. Further evolution of the attractor with increasing forcing or weaker dissipation is erratic, as commonly happens in chaotic dynamics. For example, a robust chaotic attractor is observed at β = 0.42, γ = 0.025 (Fig. 4.24b), but when the dissipation rate is further lowered to γ = 0.02, a period-eight orbit of φ modulo 2π is observed, while rotation persists. More pictures are found in the notebook Forced.nb.
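Stroboscopic maps of this kind can be generated with any fixed-step integrator. A minimal Python sketch for the Duffing equation (4.68) at the parameter values of Fig. 4.23b (the book's maps are produced in the Mathematica notebook Forced.nb):

```python
import math

def duffing_strobe(gamma=0.2, beta=0.3, omega=1.0, n_periods=300, steps_per_period=400):
    """RK4 integration of phi'' = gamma*phi*(1 - phi^2) - phi' + beta*cos(omega*t),
    eq. (4.68), sampling (phi, phi') once per forcing period."""
    def rhs(t, phi, psi):
        return psi, gamma * phi * (1.0 - phi ** 2) - psi + beta * math.cos(omega * t)
    h = 2.0 * math.pi / (omega * steps_per_period)
    t, phi, psi = 0.0, 0.5, 0.0                 # an arbitrary initial condition
    points = []
    for _ in range(n_periods):
        for _ in range(steps_per_period):
            k1 = rhs(t, phi, psi)
            k2 = rhs(t + h / 2, phi + h / 2 * k1[0], psi + h / 2 * k1[1])
            k3 = rhs(t + h / 2, phi + h / 2 * k2[0], psi + h / 2 * k2[1])
            k4 = rhs(t + h, phi + h * k3[0], psi + h * k3[1])
            phi += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
            psi += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
            t += h
        points.append((phi, psi))
    return points

strobe = duffing_strobe()
```

The same loop, with the right-hand side swapped, serves for the van der Pol (4.67) or parametrically forced pendulum (4.69) maps.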

4.5.3 Forced Relaxation Oscillator

Relaxation oscillators. We have seen in Sect. 3.5.2 that trajectories of the unforced van der Pol oscillator become steeper, approaching relaxation oscillations, as the parameter μ grows, and the dynamics becomes transparent after the Liénard transformation (3.37) is applied. The extended form of the resulting system is the FitzHugh–Nagumo equation (3.39). Applying forcing to a relaxation oscillator gives us an opportunity to consider a case as far removed as possible from a harmonic oscillator near a Hopf bifurcation point of Sect. 4.4.2, and at the same time amenable to analytical study. Handling a cubic nonlinearity involves, however, cumbersome algebra caused by such a trivial matter as the necessity to compute roots of a cubic equation. Still more significant is the loss of time-scale separation between the fast and slow variables near the turning points, which leads to the canard phenomenon (Sect. 3.5.4) that becomes more robust and complicated when the dimensionality of the phase space is effectively increased by the applied forcing. We choose, therefore, a system with a piecewise-linear slow manifold, which is qualitatively similar but much easier to analyze:

u̇ = (1 + u)(u − v)(1 − u),  v̇ = μu − v + β cos ωt.  (4.70)

What makes this system convenient is that the two stable fixed points of the fast equation, u = ±1, are independent of the slow variable v. Relaxation oscillations of the unforced system take place at μ > 1, when the only stationary state of the system, u = v = 0, is unstable. As the slow variable gradually climbs up along the line u = 1 toward the intersection with its null isocline at v = μ, the branch loses stability at v = 1, whereupon the fast variable jumps to the alternative branch u = −1, where v descends toward the level v = −μ until the branch loses stability at v = −1 and u jumps back. The phase diagram in Fig. 4.25 is a kind of angular version of Fig. 3.16. The jump time can be neglected even in the forced system if the forcing strength satisfies β ≪ 1.

Forced piecewise-linear system. The solution of the unforced system on the branches u = ±1, starting from v = ∓1 at the jump points, is simple:

v±(t) = ±[μ − (μ + 1)e^{−t}].  (4.71)


Since the slow variable does not change appreciably during a fast jump, the oscillation period is twice the time needed to pass either branch, obtained by setting v = ±1 in (4.71):

T = 2 ln[(μ + 1)/(μ − 1)].  (4.72)

The phase equation (4.38) greatly simplifies on the branches u = ±1, where only the slow variable changes:

(±μ − v) dθ/dv = ω₀ = 2π/T.  (4.73)

The relation between v and θ is obtained by solving this equation on both branches. Using also (4.72), we obtain

θ±(v) = θ±⁰ + (2π/T) ln[(μ + 1)/(μ ∓ v)].  (4.74)
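The algebra in (4.71), (4.72), (4.74) invites a quick numerical sanity check. The Python sketch below (the book's computations use Mathematica) verifies that the branch solution starts at v = −1, reaches v = +1 after exactly half a period, and that the phase gains exactly π along the branch:

```python
import math

mu = 1.2                                        # any mu > 1 will do
T = 2.0 * math.log((mu + 1.0) / (mu - 1.0))     # oscillation period, eq. (4.72)

def v_plus(t):
    """Slow variable on the branch u = +1, eq. (4.71)."""
    return mu - (mu + 1.0) * math.exp(-t)

def theta_plus(v):
    """Phase on the branch u = +1, eq. (4.74), with theta_+^0 = 0."""
    return (2.0 * math.pi / T) * math.log((mu + 1.0) / (mu - v))

checks = (v_plus(0.0), v_plus(T / 2.0), theta_plus(1.0))   # -1, +1, pi
```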

With the boundary conditions θ₊(−1) = 0, θ₋(1) = π, this formula gives θ₊(1) = π = θ₋⁰, θ₋(−1) = 2π, so that the periodicity is retained. However, due to the utterly dissimilar dynamics of the relaxation oscillator and the harmonic forcing, the phase equation (4.39) cannot be reduced in this case to an explicit equation for the phase difference between the oscillator and the external forcing. It is advantageous, instead, to deduce the synchronization condition at weak forcing stroboscopically, by computing the shift of the phase of the forcing at a certain point of the relaxation oscillation cycle – most conveniently, at a jump point – due to changes of the oscillation period induced by weak forcing. For this purpose, we need to expand the slow variable as v + βṽ(t) and compute the perturbation ṽ(t) on the branches u = ±1 by solving

dṽ/dt = −ṽ + cos(ωt − ψ),  (4.75)

where ψ is the phase of the forcing at the moment of the jump from the negative to the positive branch, taken as t₊ = 0; the reverse jump then takes place in the unforced system at t₋ = T/2. Denoting ṽ±(t±) = ṽ±⁰, (4.75) is integrated on either branch to

ṽ±(t) = ṽ±⁰ e^{−(t−t±)} + [cos(ωt − φ) − cos(ωt± − φ)]/(1 + ω²),  (4.76)

where φ = ψ + arctan ω. The constants ṽ±⁰ should be obtained by matching the solutions on the two branches. We also need to take into account that the time taken to pass each branch changes by an O(β) increment as well. Taking the moment of the jump from the lower to the higher branch as zero and denoting the increments of the passage times of the branches u = ±1 as βt̃±, the perturbation at the upper jump point is determined by the condition v + βṽ = 1 at t = T/2 + βt̃₊. Using (4.71), (4.76) and expanding to the leading order in β yields

t̃₊ = −ṽ₊⁰/(μ + 1) − [cos(ωT/2 − φ) − cos φ]/[(μ − 1)(1 + ω²)].  (4.77)


The constant ṽ₋⁰ equals the value of ṽ₊(T/2) at the moment of the jump. In the leading order, there is no need to correct it by the change of the passage time, and it follows from (4.76) that

ṽ₋⁰ = ṽ₊⁰ (μ − 1)/(μ + 1) + [cos(ωT/2 − φ) − cos φ]/(1 + ω²).  (4.78)

Next, (4.76) is used to determine the increment of the passage time on the negative branch. Retaining, as before, the lowest order in β, we obtain the relation between ṽ₋⁰ and t̃₋ analogous to (4.77):

t̃₋ = ṽ₋⁰ (μ − 1)/(μ + 1)² − [cos(ωT − φ) − cos(ωT/2 − φ)]/[(μ + 1)(1 + ω²)].  (4.79)

The value β(ṽ₋(T) − ṽ₊⁰) is the increment of the slow variable at the lower jump point over the relaxation oscillation cycle, and βω(t̃₊ + t̃₋) is the increment of the period that determines the change of the phase of the forcing. The rate of change is slow, and therefore the evolution of ṽ₊⁰ (to be denoted now simply as v) and of the phase ψ, or its shifted value φ, over many cycles can be approximated by continuous equations on the slow time scale τ = βt/T. The explicit form of these equations, after some


Figure 4.25. (a) The phase plane of the relaxation oscillator (4.70) (μ = 1.2). The null isocline of the fast variable is shown by bold lines, and that of the slow variable v, by the dashed line. The arrows indicate fast jumps. (b) The dependence of the determinant of the Jacobi matrix of (4.80) for the two fixed points (4.81) on the frequency ratio λ = (ωT )/(2π)


algebra taken care of by Mathematica, comes out to be quite complicated:

dv/dτ = −4μv/(μ + 1)² + [1/(1 + ω²)] {cos(ωT − φ) − [2/(μ + 1)] cos(ωT/2 − φ) − [(μ − 1)/(μ + 1)] cos φ},

dφ/dτ = −4μv/(μ + 1)³ + [1/((μ + 1)(1 + ω²))] {cos(ωT − φ) − [(μ² + 4μ − 1)/(μ² − 1)] cos(ωT/2 − φ) + [4μ/(μ² − 1)] cos φ}.  (4.80)

The fixed points φs, vs of (4.80) satisfy

vs = (μ + 1)² [cos(ωT − φs) − cos φs] / [4μ(1 + ω²)],  cos(ωT/2 − φs) = cos φs.  (4.81)

The last relation can be resolved to

cos φₛ = ± sin(ωT/2)/√(2(1 − cos ωT/2)) = ± sin(ωT/2)/(2 sin(ωT/4)).    (4.82)

Stability of synchronized states. To check the stability of the fixed points, we need to compute the determinant and trace of the Jacobi matrix of (4.80). The determinant reduces to a positive number multiplied by a factor dependent on φₛ, equal for the two alternative values in (4.82) to

Δ(φₛ⁺) = 2 sin(ωT/8) cos(ωT/4),    Δ(φₛ⁻) = 2 sin(ωT/4) cos(3ωT/8).    (4.83)

Both expressions are drawn as functions of the frequency ratio λ = (ωT)/(2π) in Fig. 4.25b. The plot, which repeats itself with period eight, indicates that at most values of λ one of the solutions is unstable, while the other one may be stable. This is what happens, in particular, when λ is an odd integer, while Δ = 0 for both solutions when λ is an even integer. Δ is positive for both solutions at λ < 2/3 and 4 − 2/3 < λ < 4, and negative for both solutions at 4 < λ < 4 + 2/3, 8 − 2/3 < λ < 8, etc. The expression for the trace of the Jacobi matrix is less transparent, and depends on the frequencies of both the relaxation oscillator and the forcing rather than on their ratio only. The plots in Fig. 4.26 show the stability regions, shaded dark. This plot is aperiodic; the solutions tend to stabilize at higher ω and destabilize at higher λ. Inspecting the plots of both the determinant and the trace, we see that in some regions, e.g., at 4 < λ < 4 + 2/3 or 8 − 2/3 < λ < 8 and moderate ω, both solutions are unstable, while at λ < 2/3 and 4 − 2/3 < λ < 4 both are stable, and so are both solutions at sufficiently large ω. What serves as a basin boundary when both solutions are stable, and as an attractor when both are unstable? There are certainly more phase-locked states than we have so far detected! They can be found by considering fixed points resulting from the double, triple, etc. passage of the oscillation cycle – but we stop here and just contemplate the complexity hidden in a system that appears rather simple at first sight.
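The determinant factors can be tabulated to confirm the sign pattern quoted above; a minimal sketch (assuming the forms Δ(φₛ⁺) = 2 sin(ωT/8) cos(ωT/4) and Δ(φₛ⁻) = 2 sin(ωT/4) cos(3ωT/8), as in (4.83)):

```python
import math

def determinant_factors(lam):
    """Determinant factors of the Jacobi matrix for the two fixed points,
    as functions of the frequency ratio lam = omega*T/(2*pi); cf. (4.83)."""
    wT = 2 * math.pi * lam
    d_plus = 2 * math.sin(wT / 8) * math.cos(wT / 4)
    d_minus = 2 * math.sin(wT / 4) * math.cos(3 * wT / 8)
    return d_plus, d_minus

print(determinant_factors(0.5))   # both positive: both states may be stable
print(determinant_factors(4.3))   # both negative: both states unstable
print(determinant_factors(1.0))   # one factor vanishes at an odd integer
```

The pattern repeats with period eight in λ, in agreement with Fig. 4.25b.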


Figure 4.26. Levels of the trace of the Jacobi matrix of (4.80) for the two fixed points (4.81) in the parametric plane λ, ω. The stable region is shaded dark

Strong forcing. Dropping the presumption of weak forcing, the equation of the slow variable in (4.70) can be integrated on the positive branch to

v(t) = β[cos θ(t) + ω sin θ(t)]/(1 + ω²) − [1 + β/(1 + ω²)] e⁻ᵗ + μ(1 − e⁻ᵗ),    (4.84)

where time is counted from the moment of the jump at v = −1, and θ(t) = ωt is the phase shift of the forcing accumulated by the moment t. It is utterly inconvenient to use this expression to determine the moment of the reverse jump at v = 1, but it can be applied to estimate the critical forcing that suppresses relaxation oscillations by causing the slow variable to oscillate on a single branch. For this purpose, it is sufficient to estimate the maximum effect of the first term in (4.84). Its lower limit is reached when the phase shift during the total traverse of the branch is θm = π + arctan ω, with θm modulo 2π lying between π and 3π/2, where both trigonometric functions are negative. The minimum value of the term is then −β/√(1 + ω²). Using this in (4.84) makes it possible to invert this expression and determine the maximum arrival time tm at the jump point v = 1:

tm = ln [ (μ + 1 + β/(1 + ω²)) / (μ − 1 − β/√(1 + ω²)) ].    (4.85)

The jump point is never reached when the denominator vanishes, i.e., at β ≥ (μ − 1)√(1 + ω²). This is the upper estimate of the forcing strength, but it is likely that in the course of relaxation oscillations the phase shift during the total traverse of either branch, which keeps changing due to the difference between the frequencies of the forcing and relaxation oscillations, reaches the most effective value.
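The closed form (4.84) can be cross-checked against direct numerical integration; the sketch below assumes the positive-branch dynamics v̇ = μ − v + β cos ωt with v(0) = −1, which is what (4.84) solves:

```python
import math

def v_exact(t, mu, beta, om):
    # Closed-form branch solution, cf. (4.84), with theta(t) = om*t
    th = om * t
    return (beta * (math.cos(th) + om * math.sin(th)) / (1 + om**2)
            - (1 + beta / (1 + om**2)) * math.exp(-t)
            + mu * (1 - math.exp(-t)))

def v_rk4(t_end, mu, beta, om, h=1e-3):
    # Fourth-order Runge-Kutta integration of dv/dt = mu - v + beta*cos(om*t)
    f = lambda t, v: mu - v + beta * math.cos(om * t)
    t, v = 0.0, -1.0
    for _ in range(int(round(t_end / h))):
        k1 = f(t, v)
        k2 = f(t + h/2, v + h*k1/2)
        k3 = f(t + h/2, v + h*k2/2)
        k4 = f(t + h, v + h*k3)
        v += h * (k1 + 2*k2 + 2*k3 + k4) / 6
        t += h
    return v

mu, beta, om = 1.2, 0.1, 2.0
print(abs(v_rk4(1.0, mu, beta, om) - v_exact(1.0, mu, beta, om)))  # tiny
```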


4.6 Coupled Oscillators

4.6.1 Phase Dynamics

Coupled phases. We have seen in Sect. 4.4.1 how the steady growth of the phase is perturbed by external forcing. This can also be caused by interactions with another oscillator, and may induce the coupled oscillators to synchronize. The phase dynamics approach from Sect. 4.4.1 is equally applicable to weakly coupled oscillators, which may even be of a completely different nature, and the same general equation (4.39) can be written for both phases. Marking for brevity both phases, as well as the dynamic variables they depend upon, by ± signs and assuming the interaction to be weak, of order ε ≪ 1, we rewrite the phase equations as

θ̇± = ω± + Σᵢⱼ (∂θ±/∂uᵢ±)(∂fᵢ±/∂uⱼ±) ũⱼ±(θ₊, θ₋) ≡ ω± + εV±(θ₊, θ₋),    (4.86)

where ũⱼ± denote perturbations of the respective variables due to coupling. Again, the function V±(θ₊, θ₋) is 2π-periodic in both its arguments, and can be expanded in a double Fourier series similar to (4.41). Unfortunately, the universality of the phase equations is merely formal: as we have already noted, deriving such equations is practical only when oscillations are close to harmonic, and a numerical stroboscopic approach might be used instead in more complicated cases. In the most common case, the frequencies of the interacting oscillators are close to each other, and the interaction can be expressed as a function q(φ) of the phase difference φ = θ₊ − θ₋, so that, similar to (4.42), the evolution of the phase mismatch on the extended time scale τ = εt obeys the equation

dφ/dτ = Ω + q(φ),    Ω = (ω₊ − ω₋)/ε.    (4.87)

Resonances. By analogy with the example in Sect. 4.4.2, consider two weakly coupled harmonic oscillators

ȧ± = a±[(1 + iμ±) − (1 + iν±)|a±|²] + ε(a∓ − a±).    (4.88)

At ε → 0, both real amplitudes |a±| = ρ± are equal, so that in the leading order the interaction depends only on the phase difference. Setting ρ± = 1 + ερ̃±, the equations in the polar form reduce to

ρ̃˙± = −2ρ̃± + cos(θ∓ − θ±),    (4.89)

θ̇± = ω± + ε[sin(θ∓ − θ±) − 2ν±ρ̃±],    (4.90)

where ω± = μ± − ν±. The equations of ρ̃± are integrated in the same way as (4.47), leading, after dropping an exponentially decaying term and terms of a higher order in the frequency mismatch, to a function of the slow time variable τ = εt:

ρ̃±(τ) = ½ cos Ωτ.    (4.91)


Using this in (4.90) leads to the evolution equation of the phase difference φ = θ₊ − θ₋ in the slow time, similar to (4.50):

dφ/dτ = Ω − (ν₊ − ν₋) cos Ωτ − 2 sin φ.    (4.92)

Finally, (4.52) holds, with the only difference that the parameter χ is now defined as χ = Ω/(ν₊ − ν₋), and the oscillators are synchronized at |χ| < 1. However, the result cannot be extended to the case of the 1:2 and 1:3 resonances considered in Sect. 3.6.2, when the real amplitudes are coupled.
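Frequency locking of the coupled oscillators (4.88) is easy to observe directly. The sketch below (parameters are arbitrary; ν± = 0, so that ω± = μ±) integrates the pair of complex amplitude equations and records the phase difference, which stops drifting inside the locking range:

```python
import cmath

def rhs(ap, am, mup, mum, nup, num, eps):
    # The coupled amplitude equations (4.88)
    fp = ap * ((1 + 1j*mup) - (1 + 1j*nup) * abs(ap)**2) + eps * (am - ap)
    fm = am * ((1 + 1j*mum) - (1 + 1j*num) * abs(am)**2) + eps * (ap - am)
    return fp, fm

def phase_difference_history(t_end=300.0, h=0.01,
                             mup=0.1, mum=0.0, nup=0.0, num=0.0, eps=0.2):
    ap, am = 1.0 + 0j, cmath.exp(1j)      # arbitrary initial phases
    hist = []
    for i in range(int(t_end / h)):
        # RK4 step for the pair of complex ODEs
        k1 = rhs(ap, am, mup, mum, nup, num, eps)
        k2 = rhs(ap + h/2*k1[0], am + h/2*k1[1], mup, mum, nup, num, eps)
        k3 = rhs(ap + h/2*k2[0], am + h/2*k2[1], mup, mum, nup, num, eps)
        k4 = rhs(ap + h*k3[0], am + h*k3[1], mup, mum, nup, num, eps)
        ap += h/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        am += h/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
        if i % 100 == 0:
            hist.append(cmath.phase(ap * am.conjugate()))  # theta+ - theta-
    return hist

phis = phase_difference_history()
print(phis[-1] - phis[-2])   # ~0: the oscillators are phase-locked
```

Here the frequency mismatch ω₊ − ω₋ = 0.1 lies well inside the locking range set by the coupling ε = 0.2, so the phase difference settles to a constant; pushing the mismatch beyond the range makes it drift.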

4.6.2 Coupled Pendulums

Varieties of coupling. Systems with amplitude-dependent frequency, such as coupled pendulums, present a more challenging problem. Already in the 17th century, Christiaan Huygens (1673), famous for his wave theory of light, observed that two pendulum clocks synchronize in antiphase when placed on a common support and undulating in the same plane (Fig. 4.27a). Bennett et al. (2002) reproduced this experiment with the pendulums hung on a heavy support mounted on a low-friction wheeled cart. They found that synchronization would require matching the frequencies with an extremely high precision that Huygens would not have been able to achieve, so that his results “depended on both talent and luck”. A variety of models have been put forward to describe Huygens’ experiment (Kapitaniak et al., 2012); all of them contain at least three degrees of freedom. Simpler configurations of coupled pendulums with just two degrees of freedom and stronger interactions are more likely to behave chaotically than synchronize. Pendulums can be connected in a variety of ways, for example, by an elastic spring as in Fig. 1.3b or in a different arrangement, undulating in the same plane, or with

Figure 4.27. (a): The original experiment by Huygens featuring pendulums on a common support. (b): Pendulums hung on a torsionally elastic bar


pivots rather than bobs connected. Generally, the interaction between two pendulums, which we mark for brevity by ± signs, can be expressed as a certain function of the phases, f(θ₊, θ₋). This function has the simplest form in the case of pendulums hung on a horizontal bar (Fig. 4.27b) and undulating or rotating in parallel planes (Huynh et al., 2013). If the pivots are firmly fixed, the pendulums interact due to the resistance of the common bar to torsion, so that f(θ₊, θ₋) = ∓k(θ₊ − θ₋), where k is proportional to the torsional elastic constant. Presuming that the pendulums are identical, that only the bobs are massive, and that there is no dissipation, the dimensionless Hamiltonian is

H = ½(p₊² + p₋²) − cos θ₊ − cos θ₋ + (k/2)(θ₊ − θ₋)²,    (4.93)

leading to the dynamic equations

θ̈± + sin θ± ± k(θ₊ − θ₋) = 0.    (4.94)
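Energy conservation provides a quick numerical check of the Hamiltonian structure; a sketch (with the sign convention taken from the Hamiltonian (4.93)) integrating the pendulum pair with a fourth-order scheme:

```python
import math

def deriv(s, k):
    # Hamiltonian equations of (4.93): dtheta/dt = p, dp/dt = -sin(theta) -/+ k*(theta1 - theta2)
    t1, t2, p1, p2 = s
    return [p1, p2,
            -math.sin(t1) - k * (t1 - t2),
            -math.sin(t2) + k * (t1 - t2)]

def rk4_step(s, k, h):
    add = lambda a, b, c: [x + c * y for x, y in zip(a, b)]
    k1 = deriv(s, k)
    k2 = deriv(add(s, k1, h/2), k)
    k3 = deriv(add(s, k2, h/2), k)
    k4 = deriv(add(s, k3, h), k)
    return [x + h * (a + 2*b + 2*c + d) / 6
            for x, a, b, c, d in zip(s, k1, k2, k3, k4)]

def energy(s, k):
    t1, t2, p1, p2 = s
    return 0.5*(p1**2 + p2**2) - math.cos(t1) - math.cos(t2) + 0.5*k*(t1 - t2)**2

k, h = 0.3, 1e-3
s = [math.pi/2, 0.0, 0.0, 0.0]   # energy initially in the deflection of pendulum 1
e0 = energy(s, k)
for _ in range(20000):
    s = rk4_step(s, k, h)
print(abs(energy(s, k) - e0))    # stays tiny over t = 20
```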

The interaction function is somewhat more complicated in another commonly studied arrangement, in which the pendulums, connected by a spring, undulate in a common plane. In this case, the interaction term in the dimensionless Hamiltonian is

k[1 − cos(θ₊ − θ₋) + α(cos θ₊ − cos θ₋)],    (4.95)

where k is the dimensionless elastic constant of the spring and α is the ratio of the rod length to the separation distance between the pivots. A comprehensive study of


Figure 4.28. (a): A quasiperiodic trajectory of (4.94) at k = 0.3 starting at θ1 = π/2, θ2 = p1 = p2 = 0 with the initial energy concentrated in the torsional energy. (b): Quasiperiodic Poincaré maps at k = 0.3, H = 0. (c) The color code for the Poincaré maps (see the text for explanations)


Figure 4.29. Color-coded Poincaré maps of (4.94) at k = 0.3 for a grid of initial conditions as explained in the text. The energies are H = 0.4 (a, b), H = 1 (c), H = 1.5 (d, e), H = 2.5 (f). The initial conditions are θ1 = θ2 = π/2 and momenta of equal (a, c, e) or opposite (b, d, f) signs


chaotic dynamics in this system, including Poincaré maps at different energies and Lyapunov exponents, has been carried out by Szumiński and Woźniak (2020).

Torsional coupling. We consider in detail the representative case of torsional connection, based on the computations presented in the Mathematica notebook CoupledPendulums.nb. Two persistent periodic orbits with synphase and antiphase motion exist in this system, starting from the initial conditions θ± = 0 with the momenta p± = θ̇± of the same absolute value and, respectively, identical or opposite signs. Trajectories with antisymmetric initial values of the angles are also periodic in antiphase when a part of the energy is torsional. We study the dynamics at different energies by constructing Poincaré maps, taking as the Poincaré section the plane θ1 (modulo 2π when undulations give way to rotation), p1 at the level θ2 = 0 (modulo 2π). The synphase and antiphase trajectories starting from θ± = 0 project onto a single fixed point on this map. At sufficiently small energies, the respective points are surrounded by quasiperiodic trajectories. One of them is presented in Fig. 4.28a. The Poincaré maps shown in Fig. 4.28b, as well as in the following pictures, are produced by 2000 iterations for each trajectory, starting from θ± = π/2, and a grid of initial conditions with the initial energy divided in variable proportions between the momenta p1 = √(2H) sin y and p2 = √(2H) cos y, with the “angle” y taking nine equally spaced values between 0 and π/2. The dots marking the intersections with the Poincaré section are color-coded in such a way that the color changes from red to green to blue to purple as more of the initial energy is concentrated in p1. At H = 0 in Fig. 4.28b, starting from momenta of equal or opposite signs makes no difference, but the respective trajectories differ substantially at higher energies, as shown in Fig. 4.29.
At H = 0.4, more islands appear at some trajectories with symmetric (a) but not antisymmetric (b) signs. At H = 1, undulation switches to rotation in the symmetric case (c), and almost all tori break, while in the antisymmetric case undulation and quasiperiodic motion persist; only at H = 1.5 (d) can one see the first broken tori, while with symmetric initial conditions the motion is totally chaotic at this energy level (e). Only in the last picture, at H = 2.5 (f), does one see rotation with almost all tori broken for antisymmetric initial conditions. The angle in (c, e, f) is shown modulo 2π after being shifted by π to avoid breaking the picture.
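The Poincaré-map construction described above can be sketched in a few lines of generic code (an illustration, not the book's CoupledPendulums.nb notebook): integrate (4.94) and record (θ1, p1) at each upward crossing of θ2 = 0 (mod 2π).

```python
import math

TWO_PI = 2 * math.pi

def deriv(s, k):
    t1, t2, p1, p2 = s
    return [p1, p2, -math.sin(t1) - k*(t1 - t2), -math.sin(t2) + k*(t1 - t2)]

def rk4(s, k, h):
    add = lambda a, b, c: [x + c*y for x, y in zip(a, b)]
    k1 = deriv(s, k); k2 = deriv(add(s, k1, h/2), k)
    k3 = deriv(add(s, k2, h/2), k); k4 = deriv(add(s, k3, h), k)
    return [x + h*(a + 2*b + 2*c + d)/6 for x, a, b, c, d in zip(s, k1, k2, k3, k4)]

def poincare_points(s0, k=0.3, h=2e-3, t_end=200.0):
    """(theta1 mod 2*pi, p1) at upward crossings of the section theta2 = 0 (mod 2*pi)."""
    pts, s = [], list(s0)
    for _ in range(int(t_end / h)):
        s_new = rk4(s, k, h)
        r0 = math.remainder(s[1], TWO_PI)
        r1 = math.remainder(s_new[1], TWO_PI)
        if r0 < 0.0 <= r1 and s_new[3] > 0.0:   # upward crossing of the section
            pts.append((math.remainder(s_new[0], TWO_PI), s_new[2]))
        s = s_new
    return pts

H, y = 0.4, math.pi/6   # energy, and one of the grid "angles" splitting it
s0 = [math.pi/2, math.pi/2, math.sqrt(2*H)*math.sin(y), math.sqrt(2*H)*math.cos(y)]
print(len(poincare_points(s0)))   # a few dozen section points over t = 200
```

The initial condition places both angles at π/2, where the potential energy vanishes, so the whole energy H sits in the momenta, as in the grid described above.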

4.6.3 Coupled Relaxation Oscillators

A contrasting example is coupling of identical relaxation oscillators with interacting slow variables. As in Sect. 4.5.3, we consider a system with a piecewise-linear slow manifold:

u̇± = (1 + u±)(u± − v±)(1 − u±),    (4.96)

v̇± = μu± − v± ∓ k(v₊ − v₋).    (4.97)


The first question we address is the stability of a synchronized state. Suppose that v₊ is slightly ahead of v₋, so that at the moment the latter jumps on the branch u = 1, v₋ = −1, while v₊ = −1 + δ, 0 < δ ≪ 1. Setting v± = v + δṽ± and expanding (4.97), we obtain in the first order

ṽ˙± = −ṽ± ∓ k(ṽ₊ − ṽ₋).    (4.98)

Combining these equations yields the equation of the difference w̃ = ṽ₊ − ṽ₋:

w̃˙ = −(1 + 2k)w̃.    (4.99)

Clearly, the mismatch between the two slow variables decays exponentially as both progress along the positive branch. However, this changes when v₊ reaches 1 and jumps to the lower branch. Starting from this moment, and before the trailing variable jumps as well, the governing equations are

v̇₊ = −μ − v₊ − k(v₊ − v₋),    v̇₋ = μ − v₋ + k(v₊ − v₋).    (4.100)

Now they differ already in the zero order, and after subtracting we get for the zero-order mismatch w = v₊ − v₋

ẇ = −2μ − (1 + 2k)w,    (4.101)

so that the difference shrinks as v₊ decreases while v₋ still grows, before the latter reaches 1 and jumps to the lower branch as well after an O(δ) time increment. The mismatch keeps decreasing exponentially as both slow variables go down the lower branch. Near the lower jumping point, the sign of the constant term in (4.100) is reversed, so that, if μ is sufficiently large, w may increase during an O(δ) interval when v₊ is already on the upper branch while v₋ trails below, but the exponential decay of the mismatch along the entire circuit ensures that the synchronized state is restored. However, this is not a unique asymptotic solution, as synchronization may never be achieved at a high initial mismatch. Consider the extreme case when the variables are initially in antiphase: v±(0) = ±1. The dynamic equations (4.100), with v₊ decreasing on the lower and v₋ growing on the upper branch, are solved with these initial conditions as

v± = ± [e^(−(1+2k)t) (1 + 2k + μ) − μ]/(1 + 2k).    (4.102)

2 μ + 1 + 2k ln . 1 + 2k μ − 1 − 2k

(4.103)

The period grows logarithmically with decreasing μ, and diverges at the critical point μc = 1 + 2k. Below this limit, the jump is never reached; oscillations cease, and the stationary state v± = ∓μ/(1 + 2k) is reached. It is easy to see that this fixed point is stable.
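The closed forms (4.102) and (4.103) are easy to sanity-check: at the half-period the descending variable must arrive exactly at the jump point v = −1, and the period must diverge as μ approaches μc = 1 + 2k.

```python
import math

def v_plus(t, mu, k):
    # Antiphase solution (4.102) for the variable descending the lower branch
    return (math.exp(-(1 + 2*k) * t) * (1 + 2*k + mu) - mu) / (1 + 2*k)

def period(mu, k):
    # Antiphase period (4.103); requires mu > 1 + 2k
    return 2 / (1 + 2*k) * math.log((mu + 1 + 2*k) / (mu - 1 - 2*k))

mu, k = 2.0, 0.3
T = period(mu, k)
print(v_plus(T/2, mu, k))      # -> -1.0, the jump point
print(period(1.6 + 1e-6, k))   # diverges as mu -> 1 + 2k = 1.6
```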


The antiphase state is stable as well when it exists. Solving (4.100) with a perturbed initial condition v₋(0) = −1 + δ yields v₊ + v₋ = δe⁻ᵗ, so that the system relaxes exponentially to the antiphase state v₊ = −v₋.

4.6.4 Synchronization in Large Ensembles

Global vs. local interactions. Numerous examples of synchronization in large collectives of oscillators have been observed in a variety of settings: in the blinking lights of a swarm of fireflies or a large group of bikers, in gene-manipulated circadian clocks in bacterial colonies, or, on longer time scales, in the laying of eggs by nesting birds, perhaps the menstrual cycles of women living together, and, eventually, the synchronous emergence of cicadas in a 13- or 17-year cycle. Norbert Wiener (1948), the creator of cybernetics, speculated that the “pulling together of frequencies” of neuron oscillations plays an important part in the activity of the brain. Oscillations of the London Millennium “Wobbly” bridge were attributed to spontaneous synchronization of pedestrians’ steps, and many of us have witnessed the synchronization of clapping in a theater. This indicates that even intrinsically chaotic systems may synchronize under certain conditions. On the other hand, as we observed earlier in this section, even a pair of coupled oscillators may turn their ordered oscillations chaotic or stop oscillating altogether. Clearly, synchronization cannot be too common in large ensembles. Interactions in the above examples are of a very different nature, but they mostly affect the entire collective of oscillators united by a common location. The simplest assumption is that the interaction of a large number of oscillators is global, being independent of their relative positions. It is reasonable to assume in these cases that only the phase, rather than the amplitude, of oscillations is affected; otherwise, substantial energy transfer would be required.² The early biologically motivated theory by Winfree (1967) was stripped of all intrinsic mechanisms of interacting oscillators, representing them as “rotators” with frequencies ωᵢ that just advance their phases at a uniform rate.
The ensemble is described along these lines by the equation for the phases θᵢ of N oscillators,

θ̇ᵢ = ωᵢ + β(θᵢ) (1/N) Σⱼ f(θⱼ),    (4.104)

where the function β(θ) describes the phase sensitivity of a particular oscillator, and the function f(θ), the strength of its influence. In such a general but abstract formulation, the problem could hardly be treated quantitatively, and the original publication lacked mathematical formulas. The alternative to global connections are local interactions between nearest neighbors, propagating along a chain or spreading on a grid. Both global and local interactions may be governed by recognizable common mechanisms, and are ubiquitous in mechanical, chemical, and biological systems. Local interactions are set in space,

² Recall that amplitudes are perturbed, albeit weakly, even in a pair of coupled harmonic oscillators, as described in Sect. 4.6.1.


and, essentially, are discretized versions of dynamical systems in space, the subject of the next chapter. An intermediate case is interactions within a network propagating along the links between nodes, which may be far removed geometrically, as in airline charts, and still more in “small world” communication networks. This case is too removed from physics to be treated here.

Collective phase dynamics. The problem of global interactions was rationalized by Kuramoto (1984), who specified the phase sensitivities in (4.104) to be equal and the interactions to depend on the phase difference only:

θ̇ᵢ = ωᵢ + (β/N) Σⱼ f(θⱼ − θᵢ).    (4.105)

The phase dependence must be periodic, and the simplest and most natural choice is a harmonic function:

θ̇ᵢ = ωᵢ + (β/N) Σⱼ sin(θⱼ − θᵢ).    (4.106)

In the limit N → ∞, this relation can be simplified by introducing a complex parameter Z(t), which represents the mean field created by averaging the effect of the entire collective of interacting oscillators:

Z = ρe^{iΘ} = (1/N) Σⱼ e^{iθⱼ} → (1/2π) ∫₀^{2π} n(θ, t) e^{iθ} dθ.    (4.107)

The amplitude ρ, dependent on some function n(θ, t) that quantifies the distribution of frequencies in the entire ensemble, is expected to be stationary, while the averaged phase Θ = Ωt rotates with some constant frequency Ω, as yet unknown. Then (4.106) separates into an array of evolution equations of individual oscillators forced by the mean field:

θ̇ᵢ = ωᵢ + βρ sin(Θ − θᵢ),    (4.108)

or, after introducing the effective phase mismatch φᵢ = θᵢ − Ωt,

φ̇ᵢ = ωᵢ − Ω − βρ sin φᵢ.    (4.109)

This is still another appearance of the common formula of phase dynamics (4.42) or (4.52). As in those cases, oscillations synchronize when the frequency mismatch is sufficiently small at a given strength of interactions, so that (4.109) may evolve to a stationary solution

φᵢˢ = arcsin[(ωᵢ − Ω)/(βρ)]    if |ωᵢ − Ω|/(βρ) ≤ 1.    (4.110)

However, this is just the beginning of the story, since both the effective interaction strength, proportional to ρ, and the frequency mismatch, dependent on Ω, are unknown and have to be determined self-consistently. When the oscillators remain noncoherent, the mean field vanishes, so that there is no forcing. At least some mutually synchronized group must consolidate in order to entrain the rest.
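The emergence of the self-consistent mean field is readily observed numerically. The sketch below (a minimal illustration with arbitrarily chosen parameters) integrates (4.106) for N phases with Cauchy-distributed natural frequencies and compares the long-time order parameter ρ with the value √(1 − 2γ/β) obtained at the end of this section:

```python
import numpy as np

rng = np.random.default_rng(1)
N, beta, gamma = 2000, 0.5, 0.1
h, steps = 0.02, 6000

# Cauchy-distributed natural frequencies centered on zero; extreme outliers
# are clipped for numerical stability (they remain incoherent anyway)
omega = np.clip(gamma * np.tan(np.pi * (rng.random(N) - 0.5)), -50.0, 50.0)
theta = 2 * np.pi * rng.random(N)

rho_tail = []
for step in range(steps):
    Z = np.mean(np.exp(1j * theta))                    # mean field, cf. (4.107)
    # Euler step of (4.106), rewritten through the mean field as in (4.108)
    theta += h * (omega + beta * np.imag(Z * np.exp(-1j * theta)))
    if step >= steps - 1000:
        rho_tail.append(abs(Z))

rho_sim, rho_theory = np.mean(rho_tail), np.sqrt(1 - 2 * gamma / beta)
print(rho_sim, rho_theory)   # close to each other
```

The identity (1/N) Σⱼ sin(θⱼ − θᵢ) = Im(Z e^{−iθᵢ}) makes the update O(N) per step instead of O(N²).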


Entrainment by collective oscillation. In order to understand the emergence of the mean field, we consider, following Kuramoto (1984), a population of oscillators with a normalized distribution of natural frequencies f(ω). For simplicity, we assume this distribution to be symmetric, centered on some frequency ω₀. A more detailed characteristic is the joint distribution of natural frequencies and phase mismatches n(φ, ω) = nₛ(φ) + nₐ(φ, ω), split into two groups: the oscillators entrained by the mean field, nₛ, and the remaining incoherent oscillators, nₐ. For the first group, the phase mismatch is constant, and is related to their natural frequency by (4.110). The distribution nₛ(φ) is expressed therefore as

nₛ(φ) = f(ω) ω′(φ) = βρ f(Ω + βρ sin φ) cos φ,    |φ| ≤ π/2.    (4.111)

The phases of the incoherent oscillators keep rotating, and do not contribute to the mean field (4.107), which can therefore be expressed as

Z = ρe^{iΩt} = ∫_{−π/2}^{π/2} nₛ(φ) e^{i(φ+Ωt)} dφ = βρ e^{iΩt} ∫_{−π/2}^{π/2} e^{iφ} f(Ω + βρ sin φ) cos φ dφ.    (4.112)

The imaginary part of this equation,

0 = ∫_{−π/2}^{π/2} f(Ω + βρ sin φ) sin φ cos φ dφ,    (4.113)

is satisfied identically if Ω is at the center of a symmetric frequency distribution, and the real part,

1 = β ∫_{−π/2}^{π/2} f(Ω + βρ sin φ) cos²φ dφ,    (4.114)

defines the critical coupling strength β necessary for synchronization. An analytically treatable example is the Cauchy distribution of frequencies centered on Ω:

f(ω) = (γ/π)/[γ² + (Ω − ω)²].    (4.115)

Then (4.114) is integrated to

1 = [√(γ² + β²ρ²) − γ]/(βρ²).    (4.116)

This equation is solved by

ρ = √(1 − 2γ/β).    (4.117)

It follows that the critical interaction strength required for consolidation of a synchronized collective is β = 2γ, increasing with the spread of the distribution.
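The integration leading to (4.116) can be verified by direct quadrature: evaluating β∫cos²φ f(Ω + βρ sin φ)dφ, the real part of the mean-field balance, with the Cauchy distribution at ρ = √(1 − 2γ/β) should give exactly 1.

```python
import numpy as np

def mean_field_balance(rho, beta, gamma, n=20001):
    """beta * integral of f(Omega + beta*rho*sin(phi)) * cos(phi)^2 over |phi| < pi/2,
    with the Cauchy distribution f centered on Omega."""
    phi = np.linspace(-np.pi/2, np.pi/2, n)
    f = (gamma / np.pi) / (gamma**2 + (beta * rho * np.sin(phi))**2)
    integrand = f * np.cos(phi)**2
    # simple Riemann/trapezoid sum; the integrand vanishes at the endpoints
    return beta * np.sum(integrand) * (phi[1] - phi[0])

beta, gamma = 0.5, 0.1
rho = np.sqrt(1 - 2 * gamma / beta)           # predicted amplitude (4.117)
print(mean_field_balance(rho, beta, gamma))   # -> 1.0
```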


Among other possibilities, there are “chimera” states, named after the Greek mythological creature with a discrepant anatomy, containing distinct groups of oscillators synchronized at different frequencies (Kotwal et al., 2015). This commonly takes place when the natural frequency distribution has two well-separated peaks. The term more often denotes separation between synchronized and noncoherent identical oscillators, which is observed in chains of locally coupled oscillators (Kuramoto and Battogtokh, 2002). In globally coupled systems, it may occur when interactions between members of two groups of oscillators are weaker than those between members of the same group (Abrams et al., 2008).

Further Reading

There is a vast body of literature on deterministic chaos, including many monographs: the well-established earlier sources by Schuster (1984), extended and enlarged in the latest edition (2005), and Ott (1993, 2002), and the later surveys by Cencini et al. (2010), Kuznetsov (2012), and Anishchenko et al. (2014). The books by Lichtenberg and Lieberman (1983, 1992) and Zaslavsky (2005) are comprehensive sources on Hamiltonian chaos. A detailed historical survey of dissipative chaotic models is given by Letellier and Messager (2010) and Letellier (2013). The topology and symmetry of various chaotic attractors are described in detail by Gilmore and Letellier (2007). Synchronization in various systems, including both regular and chaotic oscillators and interactions in large ensembles, is described in detail in the monographs by Pikovsky et al. (2001) and Osipov et al. (2007). Global interactions are reviewed by Pikovsky and Rosenblum (2015). Many references to original publications can be found in these sources.

5 Dynamical Systems in Space

Chapter Overview

The material of this chapter does not commonly appear in books on dynamical systems. Adding spatial dimensions to evolution in time leads us, generally, into the much wider realm of partial differential equations (PDEs). We do not go that far, but restrict ourselves to situations where a spatially dependent dynamical problem can be reduced to ODEs. This happens either in 1D problems or when more complicated systems can be reduced to ODEs through a bifurcation expansion. We start with the analysis of stationary or propagating 1D structures, taking reaction-diffusion systems (RDS) as the principal template. After briefly stopping at equilibria, we consider the simplest dynamical structures: propagating fronts that spread a globally stable state of a single variable. It might seem counterintuitive, but it is more difficult to understand the advance into an unstable state than into a metastable one. Passing to two-variable RDS, we first encounter the phenomenon of spontaneous symmetry breaking. We concentrate on a simple model system with widely separated time and length scales, which is capable of generating a variety of stationary and propagating structures. While still restricted to 1D, we will also be able to analyze their instabilities. Following this, we advance to the analysis of symmetry-breaking bifurcations in general RDS and consider the competition between various stationary and wave patterns described by amplitude equations. The last section gives a perspective of inhomogeneous and spreading 1D patterns, using as templates both the most bountiful amplitude equation and an ad hoc pattern-forming model.

5.1 Space-Dependent Equilibria

5.1.1 Basic Equations

Thermodynamic relations. Whereas dynamical systems originate in classical mechanics (Sect. 1.1.1), the source of spatially distributed systems is in classical thermodynamics. And as most interesting dynamic behavior is observed when basic assumptions


of classical mechanics are lifted, most interesting spatially dependent solutions are seen far from thermodynamic equilibrium. Unlike classical mechanics, where energy is conserved, the basic thermodynamic variable, free energy, monotonically decreases during relaxation to thermodynamic equilibrium. We start, therefore, not from a Lagrangian but from the free energy functional E, which now depends on an array of thermodynamic variables u with the components uᵢ and their spatial gradients ∇uᵢ:

E = ∫ [½ Σᵢⱼ Kᵢⱼ ∇uᵢ · ∇uⱼ + V(u)] dx.    (5.1)

Here Kᵢⱼ are elements of a rigidity matrix K, and the integral is taken over a closed volume. This form is similar to (1.4), with the substantial difference that it contains derivatives with respect to spatial coordinates, which, unlike time, can be multiple, denoted by the vector x. The variables uᵢ may be concentrations of chemical species or other thermodynamic variables, such as temperature and electric potential. The first term in the functional E is the distortion energy taking into account the “costs” of spatial inhomogeneities, dependent on the generalized rigidities Kᵢⱼ. Varying (5.1) with respect to uᵢ leads to the dynamic equations

∂uᵢ/∂t = Σⱼ ∇·(Kᵢⱼ ∇uⱼ) + fᵢ(u),    (5.2)

where f is the array with the components fᵢ(u) = −∂V/∂uᵢ. The energy functionals of thermodynamic theory have different forms applicable to processes carried out under constant volume, pressure, or number of particles. We avoid these distinctions at this point, but will return to dynamics restricted by conservation laws in Sect. 5.1.3. The gradient structure ensuring a monotonic decrease of energy is retained if the array K is symmetric, in accordance with the Onsager reciprocal relations.

Reaction-diffusion equations. A general reaction-diffusion system (RDS) has the same form as (5.2), with the substantial difference that the functions fᵢ(u) cannot be derived from a potential. This leads to a much richer dynamic behavior that can be realized in nonequilibrium systems. The rigidities Kᵢⱼ should now be replaced by diffusivities Dᵢⱼ, and the functions fᵢ(u) may include both rates of chemical reactions and distributed external fluxes, either stationary or time-dependent, that maintain the system in a nonequilibrium state. Generally, diffusivities depend on the variables uᵢ, and the diffusivity matrix is nondiagonal, so that the flux of a particular species also depends on gradients of other species. We avoid here such effects, which are commonly relatively weak, and study RDS with a constant diagonal array D, written in the form

∂u/∂t = D ∇²u + f(u; p).    (5.3)


Here ∇² is the Laplace operator, represented as the sum of second derivatives with respect to the coordinates x in Cartesian coordinates, or by a suitable expression in a curvilinear coordinate system, and the functions fᵢ(u) also contain, as in (2.1), an array of parameters p. Equation systems of this kind, involving diffusional transport, internal production or decay, and external supply or removal of the various species, arise in a variety of chemical and biological applications, and can be viewed as a straightforward generalization of dynamical systems to space-dependent processes.

Equilibria with a single variable. When there is a single variable, the substantial distinction between dynamics near and far from equilibrium is lost, since one can always assign to a single function f(u) a potential V = −∫f(u)du. The rigidity K or diffusivity D can be removed from a single equation of the type (5.2) by rescaling the variables as

t → t/τ,    x → x/√(Dτ),    u → u/u*,    f(u) → f(u)τ/u*,    (5.4)

where u* is a characteristic scale of the variable u(x, t) and τ = u*/f(u*) is a characteristic time scale. The resulting dimensionless form is

∂u/∂t = ∇²u + f(u).    (5.5)

This equation can also arise as a spatially dependent extension of an amplitude equation, e.g., (2.35). It is derived from the energy functional

∂u/∂t = −δE/δu,    E = ∫ [½|∇u|² + V(u)] dx,    (5.6)

where V(u) = −∫f(u)du. In nonequilibrium systems, the energy functional lacks a direct physical interpretation, but it can be defined formally as stated above. It is remarkable that the gradient structure emerges near a bifurcation point even when it is not present in the underlying system.
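The gradient structure (5.6) guarantees that E can only decrease along solutions of (5.5). This is easy to check with a crude explicit finite-difference scheme; the cubic nonlinearity f(u) = u − u³ with V(u) = ¼(1 − u²)² is chosen here purely for illustration:

```python
import numpy as np

L, n = 10.0, 201
x = np.linspace(-L/2, L/2, n)
dx = x[1] - x[0]
dt = 0.2 * dx**2                 # well below the explicit stability limit dx^2/2

f = lambda u: u - u**3
V = lambda u: 0.25 * (1 - u**2)**2

def energy(u):
    # Discrete version of E = integral of (1/2)|u'|^2 + V(u)
    return np.sum(0.5 * ((u[1:] - u[:-1]) / dx)**2) * dx + np.sum(V(u)) * dx

u = 0.9 * np.cos(np.pi * x / L)  # smooth, non-stationary initial profile
energies = [energy(u)]
for _ in range(2000):
    lap = np.empty_like(u)
    lap[1:-1] = (u[2:] - 2*u[1:-1] + u[:-2]) / dx**2
    lap[0] = 2 * (u[1] - u[0]) / dx**2      # no-flux (Neumann) boundaries
    lap[-1] = 2 * (u[-2] - u[-1]) / dx**2
    u = u + dt * (lap + f(u))
    energies.append(energy(u))

print(max(np.diff(energies)))    # negative: E decreases monotonically
```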

5.1.2 Stationary Solution in One Dimension

Analogy with classical mechanics. A stationary distribution of a variable depending on a single coordinate x is defined by (5.5), reduced to a single dimension and with the time dependence suspended:

u″(x) + f(u) = 0.    (5.7)

Recalling Sect. 1.1.2, this equation formally coincides with the equation of motion (1.12), with u replacing the coordinate of a particle of unit mass, x playing the role of


time, and f (u) interpreted as the force with the inverted sign. Its solution minimizes the energy integral  & ' 1  u (x)|2 + V (u) dx. (5.8) E= 2 Zeroes of f (u), which were defined as equilibria in Sect. 1.1.2 or as fixed points of a dynamical system in Sect. 2.1.1, now correspond to homogeneous stationary states (HSS). Remarkably, the same form arises in totally different contexts. Since (5.7) does not depend on the coordinate explicitly, its order can be reduced by using y = u (x) as a dependent, and u an independent variable. Then u (x) = y  (x) = y  (u)y =

1 d(y 2 ) , 2 du

(5.9)

and (5.7) reduces to a first-order equation

½ d(y²)/du + f(u) = 0.   (5.10)

It is integrated to

½ (du/dx)² − V(u) = V₀,   (5.11)

containing a constant V₀, so far indefinite. Here again, we recognize the energy expression (1.13) of classical mechanics, with the diffusional or distortion term playing the role of the kinetic energy and −V(u) of the potential energy. Their sum is constant, by analogy to the energy conservation law of classical mechanics.

Simplest solutions. Equation (5.7) should be supplemented by boundary conditions at both ends of a coordinate domain, say, −L < x < L, that would define the constant in (5.11) and an additional integration constant. The boundary conditions can be of either type: Dirichlet, fixing the value of u(±L); Neumann, assigning the derivative u′(x) at the boundary points; or mixed, setting a combination of the values of the function and its derivative. The most natural boundary condition for an isolated system is the no-flux condition u′(x) = 0 at x = ±L; it is most suitable when the length L is stretched to infinity. Here the analogy with classical mechanics ends: while a particle's trajectory is defined by its initial position and velocity, here we have a technically more difficult boundary value problem. In particular, it follows from (5.11) that in the case of no-flux boundary conditions the potentials V(u±) corresponding to the boundary values u± = u(±L) should be equal. This corresponds to a very special kind of motion that, starting from a state of rest, comes to rest again at a certain point.

A useful analytical form is the kink solution of the symmetric cubic equation with f(u) derived from the quartic potential

V(u) = ¼ (1 − u²)².   (5.12)

5.1. SPACE-DEPENDENT EQUILIBRIA


This lowest-order symmetric form appears as a result of expansion near a pitchfork bifurcation (Sect. 2.1.4), or a second-order phase transition. The solution in the infinite domain with no-flux boundary conditions is

y(u) = (1 − u²)/√2,   u(x) = tanh(x/√2).   (5.13)

At x → ±∞, the two alternative HSS u = ±1 are approached. The constant V₀ in (5.11) equals V(±1) = 0, so that the distortion and potential energies are equal. The total energy, which is accumulated in the O(1) transitional region and therefore plays the role of the surface energy σ of the boundary between the two HSS, is

E₀ = ∫₋∞^∞ u′(x)² dx = 2 ∫₀¹ y(u) du = √2 ∫₀¹ (1 − u²) du = 2√2/3.   (5.14)

This is the simplest example of the Maxwell construction – coexistence of two states with equal energy. Generally, it can be achieved by adjusting some parameter ν to a certain level ν = ν₀. The resulting nonuniform stationary state carries interfacial energy, as has been computed above for the special case of the cubic f(u). As the energy is minimized in the course of evolution, the total length of the boundary between the two alternative states should shrink. This is the process of coarsening, or Ostwald ripening, by which blobs occupied by a certain state either unite or collapse and disappear, so that eventually the boundary shrinks to a single point in one dimension, to the shortest line in two, or to a surface of the smallest area in three dimensions.

Instability of nonmonotonic solutions. As in Sect. 2.2.1, we can test the stability of a space-dependent solution, say u = u₀(x), by considering the system's reaction to a weak perturbation ũ(x), which can be presented, analogously to (2.22), as a combination of exponents

ũ(x, t) = Σᵢ e^{λᵢt} vᵢ(x),   (5.15)

where λᵢ are eigenvalues and vᵢ(x) are the corresponding eigenvectors. Of course, the spatial dependence makes the problem far more difficult, but it can be treated analytically in the case of a single equation on a line, so we further restrict ourselves to this case. Linearizing (5.5) in the vicinity of a stationary solution and using the representation (5.15), we arrive at the equation for the eigenfunctions vᵢ(x)

vᵢ″(x) + F(x)vᵢ = λᵢvᵢ,   (5.16)

where F(x) is the derivative f′(u) evaluated at the stationary solution u = u₀(x). This is a particular case of the classical Sturm–Liouville problem (coordinate-dependent functions would appear in the differential term as well if the diffusivity were coordinate-dependent). The operator on the left-hand side is self-adjoint, its eigenvalues λᵢ are real, and, what is most


important for our considerations, they can be ordered in such a way that λ₀ > λ₁ > λ₂ > . . . > −∞, and each corresponding eigenfunction vᵢ(x) has exactly i zeroes. Even though a general solution can only be obtained numerically, one eigenfunction is easily found by inspection. This is the function v(x) = u₀′(x): by differentiating (5.7) with respect to x, we clearly see that it satisfies (5.16) with the zero eigenvalue. But how should this function be labeled in the above sequence? If the function u₀(x) is monotonic, u₀′(x) has no zeroes, and its eigenvalue should be labeled λ₀. Accordingly, all other eigenvalues are negative, and the nonuniform stationary state is stable. In the infinite domain or on a closed contour, this eigenfunction corresponds to neutral stability of the stationary state to translation along the x axis. If, on the other hand, u₀(x) is not monotonic, u₀′(x) has at least one zero, and therefore there must exist another eigenfunction, free of zeroes, with a positive eigenvalue, which signifies that a stationary state having a maximum or a minimum must be unstable. This proves, in particular, the instability of spatially periodic solutions.

The simplest solution of this kind can be constructed on a finite interval with no-flux boundary conditions by combining two identical stationary fronts of opposite polarity, called a kink and an antikink, on the two half-intervals. Certainly, the distortion energy would vanish if the pair collapsed, so we expect the two fronts to attract each other – but this still does not prove instability to small perturbations, which can be established by the above argument. The "attraction" is illusory, since both fronts are also attracted to their antisymmetric images outside the finite interval. Instability may cause them to move in either direction, which would lead eventually to either one of the two competing HSS with identical energy.

On an interval far exceeding the characteristic front thickness, alternative HSS separated by kinks and antikinks may arise at different locations following a phase transition. Interaction of fronts of opposite polarity leading to their annihilation provides the mechanism of Ostwald ripening in one dimension. Stable nonmonotonic solutions are possible only in nonequilibrium systems with at least two variables, as we shall see in Sect. 5.3.1.
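The explicit kink (5.13) and its surface energy (5.14) can be verified numerically; a short check (the grid is an illustrative choice):

```python
import numpy as np

# Check that u = tanh(x/√2) solves u'' + u - u^3 = 0 (the symmetric cubic
# derived from the quartic potential (5.12)) and that E0 = ∫ u'^2 dx = 2√2/3.
x = np.linspace(-20.0, 20.0, 4001)
dx = x[1] - x[0]
u = np.tanh(x / np.sqrt(2.0))

uxx = np.gradient(np.gradient(u, dx), dx)         # second derivative, O(dx^2)
residual = np.max(np.abs(uxx + u - u**3)[2:-2])   # ignore one-sided edge stencils

ux = (1.0 - u**2) / np.sqrt(2.0)                  # exact slope y(u) from (5.13)
E0 = np.sum(ux**2) * dx                           # surface energy (5.14)
```

The residual is at the level of the discretization error, and E0 reproduces 2√2/3 ≈ 0.9428.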

5.1.3 Systems with Mass Conservation

Interphase boundaries. Two alternative states can coexist in spatially separated domains also when their energies are not equal, provided they are restricted by a global conservation law. The most common example of such a restriction is the conservation of mass of a certain chemical substance in a system relaxing toward equilibrium. This condition is enforced by adding to the energy functional a term containing the chemical potential μ as a Lagrange multiplier. In the case of a single conserved species, (5.1) is extended to

F = ∫ [ (K/2)|∇u|² + V(u) − μu ] dx.   (5.17)

We do not use here the scaling adopted in Sect. 5.1, which applies to a thin layer separating the two domains. In the following, we will be interested in the bulk transport


processes, which are scaled by a characteristic size of the system L. Accordingly, the coordinates should now be rescaled as x → x/L, and the rigidity coefficient K is not eliminated but rescaled as K → Kτ/L², with τ defined as in (5.4). The Euler–Lagrange equation δF/δu = 0 following from (5.17) is

K∇²u + f(u) + μ = 0.   (5.18)

In the absence of chemical reactions that would serve as material sources or sinks, the concentration evolves in time only by transport from neighboring locations:

∂u/∂t = −∇·j.   (5.19)

The flux j is proportional to the gradient of the chemical potential:

j = −Γ(u)∇(δE/δu) ≡ −Γ(u)∇μ,   (5.20)

where Γ > 0 is the mobility coefficient, which is, generally, dependent on u. This formula generalizes the common expression for the diffusional flux j = −D∇u, which is recovered when the system is near equilibrium, so that μ = −f(u), ∇μ = −f′(u)∇u, and the effective diffusivity is defined as D = −Γ(u)f′(u). On a branch of stable equilibria, f′(u) < 0, so that D > 0 and the resulting diffusion equation is well posed, although the diffusivity is generally variable.

Cahn–Hilliard equation. Equations (5.19), (5.20) combine to

∂u/∂t = ∇·[Γ(u)∇μ].   (5.21)

We can check that the free energy always decreases until a minimum is reached. This is proven by computing the change of the energy functional (5.1) in the course of evolution following (5.19), (5.20). After integrating by parts, we obtain:

dE/dt = ∫ (δE/δu)(∂u/∂t) dx = ∫ (δE/δu) ∇·[Γ(u)∇(δE/δu)] dx = −∫ Γ(u) |∇(δE/δu)|² dx ≤ 0.   (5.22)

Assuming Γ = const and using τ = 1/Γ as the time unit leads, after eliminating μ from (5.21) with the help of (5.18), to a single fourth-order equation due to Cahn and Hilliard (1959):

∂u/∂t = −∇²[K∇²u + f(u)].   (5.23)

If V(u) = −∫ f(u) du is a double-well potential and μ is fixed in such a way that f(u) + μ = 0 has two stable roots (HSS), spinodal decomposition takes place, and the system separates into two phases corresponding to the alternative HSS, which we will mark by the subscripts ±. Stationary solutions of the Cahn–Hilliard equation describing equilibrium interphase boundaries coincide with the stationary solutions of its nonconservative counterpart (5.5). When the total mass is conserved, a single planar interface cannot propagate, and therefore the Maxwell construction should be reached automatically.
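A discrete sketch of (5.23) makes the conservation law explicit: with periodic boundaries, the spatial average of u stays constant while the phases separate. Here f(u) = u − u³, and K, the grid, and the time step are illustrative choices, not taken from the book:

```python
import numpy as np

# 1D Cahn–Hilliard sketch, eq. (5.23), with periodic boundaries and explicit
# Euler stepping; K, grid, and time step are illustrative.
K = 1.0
N, dx, dt = 128, 0.5, 0.002
rng = np.random.default_rng(1)
u = 0.1 * rng.standard_normal(N)
mass0 = u.mean()                               # conserved spatial average

def lap(v):
    """Periodic second difference, the discrete Laplacian."""
    return (np.roll(v, 1) - 2.0 * v + np.roll(v, -1)) / dx**2

for _ in range(10000):
    mu = -(K * lap(u) + u - u**3)              # chemical potential, cf. (5.18)
    u = u + dt * lap(mu)                       # du/dt = ∇²μ, i.e. (5.23)

mass_drift = abs(u.mean() - mass0)             # should vanish: mass conservation
umax = float(np.max(np.abs(u)))                # grows toward ~1 as phases separate
```

Because each step adds a discrete divergence, the mean of u is conserved to roundoff, while the amplitude grows toward the two wells of the potential.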


Stationary front. The chemical potential can be viewed as an adjustable constant ensuring the existence of a stationary front solution. The value μ = μₛ required to maintain the equilibrium is obtained by multiplying (5.18) by ∇u and integrating across the front. The differential term vanishes upon integration, while the integral of the algebraic part yields the Maxwell condition

μₛ = [V(u₊) − V(u₋)]/(u₊ − u₋).   (5.24)

The position of the front on a finite interval −L < x < L is easily determined when the length L far exceeds the thickness of the transition zone, which is of O(√K) in the dimensionless variables based on the system size. Then the thin front can be neglected when computing the average concentration U, so that

U = (1/2L) ∫₋L^L u(x) dx = u₊L₊ + u₋L₋,   (5.25)

where L± is the fraction of the domain occupied by the state u±. This relation is resolved as

L₋ = (u₊ − U)/(u₊ − u₋),   L₊ = (U − u₋)/(u₊ − u₋).   (5.26)

These relations are trivially extended to a large area or volume in higher dimensions, with L± reinterpreted as area or volume fractions, but, of course, the shape of the boundary needs to be determined there so as to minimize the interfacial energy proportional to the length or area of the interface; its curvature significantly affects the energy only when the size of a domain is comparable by its order of magnitude to the thickness of the interfacial layer.
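The lever rule (5.26) is immediate to apply; for example, with the illustrative values u± = ±1 and a prescribed average U:

```python
# Lever rule (5.26): domain fractions occupied by the two coexisting phases
# for a conserved average concentration U; all values are illustrative.
u_plus, u_minus, U = 1.0, -1.0, 0.25
L_minus = (u_plus - U) / (u_plus - u_minus)       # fraction occupied by u_-
L_plus = (U - u_minus) / (u_plus - u_minus)       # fraction occupied by u_+

check_sum = L_plus + L_minus                      # fractions sum to 1
check_avg = u_plus * L_plus + u_minus * L_minus   # reproduces U, eq. (5.25)
```

The two fractions add up to unity and reproduce the prescribed average, as (5.25) requires.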

5.2 Propagating Fronts

5.2.1 Advance into a Metastable State

A front as a heteroclinic orbit. When mass is not conserved, the Maxwell construction is nongeneric, unless the problem possesses a symmetry, for example, a sign reversal symmetry, as in the case of the quartic potential (5.12). In the general case, the HSS corresponding to the zero of f(u) with the lowest value of V(u) is absolutely stable, and other stable HSS are metastable. If two states initially coexist, the transition from a metastable to the absolutely stable HSS does not require a strong perturbation, but can proceed by gradually increasing the area occupied by the state with the lower energy. The absolutely stable state will advance through propagation of a front separating the two domains.

A straight-line front normal to the x coordinate axis in an unrestricted area should move with a constant speed c. Since propagation is stationary, (5.5) can be rewritten


in the frame comoving with the front velocity c, as yet unknown. The steadily propagating solution depends on a single coordinate z = x − ct, and (5.5) reduces to the ordinary differential equation

cu′(z) + u″(z) + f(u) = 0.   (5.27)

Denoting the absolutely stable state as u₋ and the other state as u₊, we set the boundary conditions

u = u±,   u′(z) = 0   at z → ±∞.   (5.28)

This problem is overdetermined, so that the solution exists only at a single value of c; thus, the propagation speed is obtained as a solution of a nonlinear eigenvalue problem. This is made transparent when (5.27) is viewed as a dissipative (damped) mechanical system (recall Sect. 1.1.5), where the coordinate z plays the role of time. In this interpretation, −f(u) is interpreted, as in (1.23), as a force derived from the inverted potential −V(u), while c plays the role of the friction coefficient. Since the sign of the potential is interchanged, stable HSS of the reaction–diffusion equation are mapped onto unstable mechanical equilibria and vice versa; the energy integral (5.8) corresponds to the action integral of a mechanical system.

The front solution can be constructed as a heteroclinic trajectory connecting the equilibria u = u±. In mechanical terms, this would be a trajectory of a particle starting from the higher maximum of the potential −V(u), dissipating a part of its energy through friction, and stopping precisely at the lower maximum. This can be made possible only by adjusting the friction coefficient in a unique way.

A stricter argument is based on a "counting" principle. Linearizing (5.27) in the vicinity of either HSS yields the equation with constant coefficients

cu′(z) + u″(z) − αu = 0.   (5.29)

Figure 5.1. Generic trajectories in the phase plane (u, y = u′(z)) (a) and a nongeneric set of trajectories containing a heteroclinic orbit (b)


The coefficients α = −f′(u±) are positive, in view of the presumption that both equilibria are stable. The solutions of (5.29) are combinations of exponents exp(λ±z), with the eigenvalues

λ± = (c/2)[−1 ± √(1 + 4α/c²)].   (5.30)

With α > 0, both pairs of eigenvalues λ± are real and of different signs, so that both HSS are saddles when viewed as fixed points of (5.27). There is a single trajectory leaving or entering a saddle point. The front solution is a heteroclinic trajectory connecting both equilibria, as shown in Fig. 5.1. Such a trajectory does not exist generically, but can only be obtained by adjusting the propagation speed c, which is unknown a priori. Since the motion slows down near the equilibrium, the heteroclinic trajectory takes infinite time to traverse; accordingly, a steadily propagating solution exists, strictly speaking, only on an infinite line. The equilibrium is, however, approached exponentially, and therefore, for all practical purposes, the deviation from a HSS becomes negligible at distances of several basic length units from the propagating front.

Qualitative estimates. The sign of c can be determined by multiplying (5.27) by u′(z) and integrating over the infinite interval:

c ∫₋∞^∞ u′(z)² dz = −∫₋∞^∞ f(u)u′(z) dz = V(u₊) − V(u₋).   (5.31)

Since the integral on the left-hand side is positive, the sign of c coincides, as expected, with the sign of the difference between the energies of the two competing states. The state with the lower energy advances, and the energy of the entire system is lowered as the area occupied by the more favorable state increases. The potential difference equals the derivative of the energy with respect to the front position ζ(t) = ζ₀ + ct, taken with the inverted sign, and can be interpreted as a "thermodynamic force" acting on the front. The speed can therefore be presented as the ratio of the thermodynamic force to the dissipation integral E₀, which coincides with the surface energy of the front:

dζ/dt = −E₀⁻¹ ∂E/∂ζ,   E₀ = ∫₋∞^∞ u′(z)² dz.   (5.32)

The local propagation speed of a weakly curved front can be estimated by rewriting (5.27) in curvilinear coordinates. In the plane, those will be polar coordinates, so that (5.27) is modified to

(c + R⁻¹)u′(z) + u″(z) + f(u) = 0,   (5.33)

where R is the local curvature radius. In 3D, R⁻¹ is replaced by the local mean curvature κ. Clearly, the local propagation speed is shifted by the curvature in such a way that convex segments slow down and concave ones accelerate, so that a flat interface is restored. This tendency can also be understood from energy arguments: in the absence of geometric constraints, the flat interface has the minimum energy. Take note


that the stability of a flat front can be established only in the case of a single equation with no external fluxes. A familiar counterexample is the elaborate shapes of growing crystals. We will see an example of front instability already in a simple two-component system (Sect. 5.3.2).

Computation of the propagation speed. As a rule, the exact value of the propagation speed can only be computed numerically. As in Sect. 5.1.2, (5.27) is solved most easily by transforming to the phase plane variables, with u playing the role of an independent, and y = u′(z) of a dependent variable. Then (5.27) is rewritten as

cy + y y′(u) + f(u) = 0.   (5.34)

Since y vanishes when a HSS is approached, this equation has to be integrated with the boundary conditions y(u±) = 0. In this formulation, it is obvious that a first-order differential equation with two boundary conditions is overdetermined. One could argue that a simple transformation to the form (5.34) might have replaced the lengthy arguments of the preceding subsection; this transformation is, however, possible only when the unknown function u(z) is monotonic.

The propagation speed can now be found by integrating (5.34) starting from either u₊ or u₋ and finding the value of c by shooting. This nonautonomous equation is, however, singular at u = u±, since the coefficient at the derivative vanishes there. The numerical solution cannot start precisely at the fixed point. Instead, the initial point should be slightly shifted along the unstable manifold of the saddle point of (5.29). Another practical consideration is to start the numerical integration from the globally stable state; in this case, the propagation speed is positive and the numerical algorithm is more stable. Suppose this state is u = u₋ < u₊.

Figure 5.2. (a): Curves y(u) with underestimated (the upper curve), overestimated (the lower curve), and exact (the middle curve) values of the propagation speed. (b): The dependence of the propagation speed on the parameter ν for the cubic function f(u) = u − u³ − ν

Taking as a starting


point u = u₋ + δ with δ ≪ 1 and choosing the sign of y to be positive, we set the initial condition

y(u₋ + δ) = λ₊(u₋)δ,   (5.35)

where the eigenvalue λ₊(u₋) is defined by (5.30) with α = −f′(u₋). Using some chosen starting value of c, we continue the integration until either u reaches the maximum value u₊ or y vanishes. In the former case, the guessed value of c is too small, and in the latter, too large (see Fig. 5.2a). An efficient numerical procedure may start with upper and lower a priori bounds of c and converge to the true value by iteratively subdividing the interval, until y(u₊) drops below a set small target value. Once the function y(u) is found, the profile u(z) is computed (in an implicit form) by quadrature: z = ∫ y(u)⁻¹ du.

Explicit example: a cubic. The propagation speed can be computed analytically when f(u) is a cubic, presented in the most transparent form

f(u) = −(u − u₊)(u − u₋)(u − u₀),   (5.36)

where u₊, u₋ are the two stable HSS and u₀ is the unstable middle root. The solution satisfying both boundary conditions is sought in the form

y(u) = b(u₊ − u)(u − u₋).   (5.37)

When this ansatz is substituted in (5.34), the equation factors as

(u − u₊)(u − u₋)[(1 − 2b²)u + bc + b²(u₋ + u₊) − u₀] = 0.   (5.38)

The bracketed linear form has just enough adjustable coefficients to make it vanish. This yields

b = ±1/√2,   c = √2 [u₀ − ½(u₋ + u₊)].   (5.39)

The speed is assumed to be positive when the lower state advances. The two signs of b correspond to the two alternative orientations of the front. For the cubic in the form f(u) = u − u³ − ν, the speed can be written explicitly using the convenient trigonometric expression for the roots:

u± = −(2/√3) sin(ψ ∓ π/3),   u₀ = (2/√3) sin ψ,   ψ = (1/3) arcsin(3^{3/2}ν/2).   (5.40)

Using (5.40) in (5.39) yields

c = √6 sin[(1/3) arcsin(3^{3/2}ν/2)].   (5.41)

The dependence of the propagation speed on the parameter ν is shown in Fig. 5.2b. Take note that the curve becomes vertical at the bifurcation point ν = 2/3^{3/2}, where the roots u₀ and u₊ merge, terminating there at the maximum value cₘ = √(3/2). The curve continues antisymmetrically into the region ν < 0.

The good luck does not extend to higher-order polynomials. Take f(u) of order 2n − 1; it must be of odd order to prevent runaway to infinity. Then y(u) must be of order n to match the orders of f(u) and y(u)y′(u). After the boundary conditions are satisfied, n − 1 free parameters are left in the polynomial y(u). Those should be used, together with c, to cancel the 2(n − 1) terms of the polynomial remaining after factoring. The numbers match only for n = 2, i.e., for a cubic f(u).
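The shooting-and-bisection procedure described above can be checked against the exact result (5.41). A minimal sketch, with ν and all numerical parameters chosen for illustration:

```python
import numpy as np

# Front speed for f(u) = u - u^3 - nu by shooting in the phase plane (5.34),
# compared with the exact result (5.41); nu and step counts are illustrative.
nu = 0.2
psi = np.arcsin(0.5 * 3**1.5 * nu) / 3.0          # angle from (5.40)
u_minus = -2.0 / np.sqrt(3.0) * np.sin(psi + np.pi / 3.0)
u_plus = -2.0 / np.sqrt(3.0) * np.sin(psi - np.pi / 3.0)
u_zero = 2.0 / np.sqrt(3.0) * np.sin(psi)
c_exact = np.sqrt(6.0) * np.sin(psi)              # eq. (5.41)

def f(u):
    return u - u**3 - nu

def reaches_u_plus(c, steps=10000):
    """Euler-integrate dy/du = -c - f(u)/y from u_- + eps along the unstable
    manifold, cf. (5.35); True means c was too small (u_+ reached with y > 0)."""
    eps = 1e-6
    alpha = -(1.0 - 3.0 * u_minus**2)             # alpha = -f'(u_-) > 0
    lam = 0.5 * c * (-1.0 + np.sqrt(1.0 + 4.0 * alpha / c**2))  # (5.30)
    u, y = u_minus + eps, lam * eps
    du = (u_plus - u_minus - 2.0 * eps) / steps
    for _ in range(steps):
        y += du * (-c - f(u) / y)
        u += du
        if y <= 0.0:                              # y vanished first: c too large
            return False
    return True

lo, hi = 0.01, 2.0                                # a priori bounds on c
for _ in range(25):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if reaches_u_plus(mid) else (lo, mid)
c_shoot = 0.5 * (lo + hi)
```

For ν = 0.2, both the shooting estimate and (5.41) should come out close to 0.44.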

5.2.2 Propagation into an Unstable State

Continuum of propagating solutions. When a front propagates into an unstable state, its velocity has to be computed in a different way. This problem arises in a natural way in population dynamics. A stationary state where a competitively advantageous species is absent is formally unstable to infinitesimal perturbations but nevertheless will be preserved at any location until this species is introduced there. This would typically happen as its habitat's boundary propagates outward, i.e., via a propagating front connecting the stable and unstable states. The difficulty of this problem, which has motivated the pioneering work of Kolmogorov et al. (1937), lies in the nonuniqueness of a solution describing a steadily propagating front. Therefore the propagation speed cannot be computed in a straightforward way by solving an ODE. It is selected dynamically, and the full nonstationary problem should be considered to understand the selection mechanism.

Since the receding state is unstable, the coefficient α = −f′(u₀) in the linearized equation (5.29) is now negative; it is convenient to redefine α = f′(u₀) > 0, so that the eigenvalues (5.30) become λ± = (c/2)[−1 ± √(1 − 4α/c²)]. Both eigenvalues are real and negative at c > c₀ = 2√α; below this value they are complex with a negative real part. This means that at c > c₀ the unstable state, viewed as an equilibrium point of (5.27), is a stable node, and the unstable manifold of the advancing stable HSS connects generically to this point, as shown in Fig. 5.3a. The generic connection persists also at smaller positive velocities, but the approach to the equilibrium is then oscillatory (Fig. 5.3b), and is therefore inadmissible in the ecological context. Thus, there is a continuum of 1D front solutions, and restricting to monotonic solutions gives only a lower bound c = c₀. The question is which speed is actually chosen by a dynamically evolving system. Moreover, it is not evident a priori that the transition from an unstable to a stable state should be carried by a propagating front retaining its quasistationary shape.

Any perturbation of an unstable state is destined to grow; therefore, the dynamics of the transition from an unstable to a stable state should strongly depend on initial conditions. The kind of initial condition that is likely to generate a propagating front is a perturbation localized in a narrow region. This kind of perturbation is typical of many physical problems. When a perturbation of this kind is introduced into a system that had been rapidly quenched into an unstable state, the transition into a stable state is effected locally, and a sharp front propagating into the unstable


state is formed. In problems of population dynamics (which historically were the first to prompt interest in front propagation dynamics), a localized perturbation corresponds to a locally introduced successful mutant population that displaces an old genome. In other systems, the stable state may nucleate at a boundary or at a local inhomogeneity. In the absence of other perturbations, the global transition to the stable state is effected through front propagation.

Asymptotic theory of the leading edge. The method developed for the reaction–diffusion equation by Kolmogorov et al. (1937) is based on the asymptotic linear analysis of the leading edge of the propagating front. The front is called pulled when its propagation speed is determined by the growth of perturbations at its leading edge. Consider the growth of a perturbation of an unstable state u = u₀. While the perturbation remains small, the linear theory applies, and the solution can be presented as a Fourier integral

u(x, t) − u₀ = ∫₋∞^∞ ũ(k) e^{i[kx − ω(k)t]} dk.   (5.42)

The integration variable k is generally complex: its real part kᵣ is the wavenumber of spatial oscillations, and its imaginary part kᵢ quantifies the attenuation (when positive) or blowup (when negative) along the x axis. The dispersion relation ω(k) follows from the applicable evolution equation. In our particular case, this is (5.5) (with the Laplacian replaced by ∂²/∂x²) linearized in the vicinity of the unstable state u = u₀, leading to

ω(k) = i(α − k²).   (5.43)

Figure 5.3. Trajectories in the phase plane connecting a stable and an unstable equilibrium at c ≥ c₀ (a) and c < c₀ (b)

To determine the front speed c, we evaluate (5.42) asymptotically under the assumption that both x and t are large but the combination z = x − ct is of O(1). This assumption restricts the solution to the vicinity of the leading edge of the front (where the linear theory still applies), advancing with the speed c, as yet undetermined. The


integral (5.42) now takes the form

u(z, t) − u₀ = ∫₋∞^∞ ũ(k) e^{i[kz − Ω(k)t]} dk,   (5.44)

where Ω(k) = ω(k) − ck is the Doppler-shifted frequency. If the original perturbation is sufficiently localized, all Fourier modes are present, and ũ(k) should be a smooth function. The integral (5.44) can be evaluated using the method of steepest descent in the complex k-plane, which implies that Ω(k) should be stationary:

Ω′(k) = ω′(k) − c = 0.   (5.45)

This means that the front propagates with the group velocity c̃ = ω′(k). In addition, the imaginary part of Ω(k) must vanish, so that the solution expressed by (5.44) would neither grow nor decay in time. This implies

c = c̃ᵣ(k),   c̃ᵢ(k) = 0,   c = ωᵢ/kᵢ.   (5.46)

This condition can be satisfied simultaneously with (5.45) at some point k = k̃ in the complex k-plane. The approximation implies that the Fourier integral is dominated at late stages of evolution by this particular value of k. Separating explicitly the real and imaginary parts, the leading edge of the front can be presented as

u − u₀ = e^{i[k̃x − ω(k̃)t]} = e^{−k̃ᵢ(x − ct)} e^{i[k̃ᵣx − ωᵣ(k̃)t]}.   (5.47)

The first exponent, containing the imaginary part of k̃, determines the rate of decay in the direction of propagation. According to (5.45), the selected propagation speed coincides with the real part of the group velocity, which determines the speed of propagation of perturbations. In a frame moving with a speed faster than c, the solution would appear to be decaying in time; thus, one can say that propagation plays a stabilizing role, in the sense that one can outrun a growing perturbation by moving sufficiently fast. If the real part of c̃ were larger than the speed c with which the front moves, perturbations would run ahead of the front and cause the profile to be unstable. The selected speed is such that the perturbations neither grow nor decay, i.e., it satisfies the condition of marginal stability.

Recalling the dispersion relation (5.43), we see that the group velocity comes in this particular case to c̃ = ω′(k) = −2ik. Since it has to be real, the real part of k must vanish; thus, as expected, no oscillations arise here, and both ω and k = iq are purely imaginary. The two above expressions for the propagation velocity reduce to

c̃ = 2q,   c = ω/k = α/q + q.   (5.48)

The relation c = c̃ᵣ(k) is satisfied at q = √α. Thus, the marginally stable propagation speed c = 2√α turns out in this case to be also the slowest speed c₀ of a nonoscillating front.
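The marginal-stability selection can be tested by direct simulation: for ∂u/∂t = ∂²u/∂x² + u(1 − u), so that α = 1, a localized initial condition should develop a front whose speed approaches c₀ = 2 slowly from below. A minimal sketch with illustrative numerical parameters:

```python
import numpy as np

# Pulled (KPP) front: u_t = u_xx + u(1 - u), alpha = f'(0) = 1, so the
# selected speed should approach c0 = 2. Grid and time step are illustrative.
N, dx, dt = 2000, 0.1, 0.002
x = np.arange(N) * dx
u = np.where(x < 5.0, 1.0, 0.0)               # localized initial "inoculum"

positions, times = [], []
for n in range(40001):
    uxx = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
    u[1:-1] += dt * (uxx + u[1:-1] * (1.0 - u[1:-1]))
    # u[0] = 1 and u[-1] = 0 stay fixed (Dirichlet conditions)
    if n % 5000 == 0:
        positions.append(x[np.argmax(u < 0.5)])   # front position: u crosses 1/2
        times.append(n * dt)

# average speed over the second half of the run, after the transient
c_measured = (positions[-1] - positions[4]) / (times[-1] - times[4])
```

The measured speed comes out slightly below 2, consistent with the slow (algebraic in time) relaxation of a pulled front to its marginal speed.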


5.2.3 Pushed Fronts

An apparent contradiction. This simple result of the leading-edge theory becomes questionable under certain circumstances. Recall the exact solution (5.41) of (5.27) with the standard cubic nonlinearity f(u) = u − u³ − ν that yields the propagation speed of the front separating two stable states. As one of the competing states approaches the bifurcation point ν = 2/3^{3/2}, where it becomes marginally stable, the propagation speed grows, as shown in Fig. 5.2b, reaching the maximum value c = √(3/2) at the bifurcation point itself. On the other hand, the propagation speed into the same marginally stable state can be computed, as above, by approaching it along the branch of unstable states. Along this branch, the propagation speed predicted by the leading-edge theory, c = 2√α, decreases with the decreasing derivative α = f′(u₀) and vanishes at the marginally stable point.

The clue to this apparent contradiction is that the special propagating solution obtained in Sect. 5.2.1 is also applicable to a front separating an absolutely stable and an unstable state. Such a front, determined by the full nonlinear solution, is called pushed, as it is "pushed ahead" by the advancing stable state rather than "pulled along" by perturbations growing at the leading edge. If, say, the state u = u₊ is approached at x → −∞, the solution in the comoving frame is expressed, according to (5.37) and (5.39), as

y ≡ u′(z) = −(u − u₀)(u₊ − u)/√2,   c = (u₊ + u₀ − 2u₋)/√2,   (5.49)

where u±, u₀ are defined by (5.40). The asymptotic value of the slope at the leading edge following from (5.49) is

q̃ = |d ln(u − u₀)/dz| at z → ∞ = lim_{u→u₀} |y(u)|/(u − u₀) = (u₊ − u₀)/√2.   (5.50)

Figure 5.4. Asymptotic slope (a) and velocity (b) of pulled (blue curve) and pushed fronts. The upper (green) and lower (orange) branches correspond to the advance of the globally stable state into the unstable and metastable state, respectively


Figure 5.5. A nonlinear front trailing a pulled front (blue curve) and a pushed front formed after the nonlinear front has caught up (orange curve)

The slope q̃ increases as u₀ is shifted toward the marginally stable point ν = −2/3^{3/2} ≈ −0.3849, contrary to the decrease of q = √(f′(u₀)) predicted by the leading-edge theory. At ν > ν̄ ≈ −0.2078, the slope q̃ exceeds q, as shown in Fig. 5.4a.

Pulled or pushed? The system chooses the propagating solution with the steepest slope, which now, unlike the generic situation in the linear leading edge theory, is also moving faster (Fig. 5.4b). The plot of the propagation speed against μ defined by (5.49) is tangential at ν = ν to the plot of the generic propagation speed c = 2q. The transition between the generic pulled front and the pushed front determined by the special nonlinear solution (5.49) occurs at ν = ν (Ben Jacob et al., 1985). Beyond this value, the pushed front, which is still faster but has a lower slope than the pulled one, has no physical significance. The reason why the competition with nongeneric pushed fronts has not surfaced in the mathematically rigorous theory of Kolmogorov et al (1937) is that the special nonlinear solution (5.49), apparently, cannot be attained as a result of evolution starting from a localized initial condition with u ≥ u0 . In problems of population dynamics, which gave motivation to the early theories, states with u < u0 = 0 are nonphysical, and, in any case, solutions of (5.29) with α > 0 starting from strictly nonnegative initial conditions remain nonnegative at all times. On the other hand, if deviations from u = u0 of any sign are allowed, evolution to the pushed front can be understood in the following way (van Saarloos, 1988). Consider a propagating front of the state u = u− , which is metastable at ν slightly above the bifurcation value ν = −2/33/2 ≈ 0.3849, into the unstable state u = u0 , being trailed by a transition front from u− to the globally stable state u+ , as shown in Fig. 5.5. Sufficiently close to the bifurcation point, (actually, at |ν| > 0.3730 – see Fig. 5.4b), the trailing front moves faster, and catches on. As a result, the intermediate metastable state is exterminated, and u becomes a monotonically decreasing function of z, while the front evolves to the nongeneric profile (5.49). 
CHAPTER 5. DYNAMICAL SYSTEMS IN SPACE

A more complex mechanism of relaxation to a pushed front is required when a pulled front propagates into the unstable state faster than the globally stable state displaces the metastable one. This indicates an important role of perturbations bringing the system below the level u = u0 at some location.
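The selection of the generic pulled speed c = 2q is easy to observe in a direct simulation. The following Python sketch is an added illustration, not part of the original text; it assumes the classical Fisher–KPP reaction term f(u) = u(1 − u), for which q = √(f′(0)) = 1 and hence the pulled speed is c = 2, and measures the front speed by explicit finite differences:

```python
import numpy as np

# Sketch: pulled-front speed of the Fisher-KPP equation u_t = u_xx + u(1 - u).
# Leading-edge theory predicts c = 2*sqrt(f'(0)) = 2 for a localized initial state.
dx, dt = 0.5, 0.05
x = np.arange(0.0, 400.0, dx)
u = np.where(x < 10.0, 1.0, 0.0)   # localized initial condition

def front_position(u, x):
    # first grid point where u drops below 1/2, linearly interpolated
    i = np.argmax(u < 0.5)
    return x[i - 1] + dx * (u[i - 1] - 0.5) / (u[i - 1] - u[i])

positions = {}
t = 0.0
while t < 120.0 + dt / 2:
    lap = np.zeros_like(u)
    lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
    u = u + dt * (lap + u * (1.0 - u))
    u[0], u[-1] = 1.0, 0.0         # pin the invading and invaded states
    t += dt
    for t_mark in (60.0, 120.0):
        if abs(t - t_mark) < dt / 2:
            positions[t_mark] = front_position(u, x)

speed = (positions[120.0] - positions[60.0]) / 60.0
print(round(speed, 2))             # close to the pulled value c = 2
```

The measured speed approaches 2 from below, with the slow logarithmic transient typical of pulled fronts.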

5.3 Separated Time and Length Scales

5.3.1 Two-Component Reaction–Diffusion System

Turing instability. The minimal pattern-forming system is a two-component RDS:

u̇ = D1∇²u + f(u, v),   v̇ = D2∇²v + g(u, v),   (5.51)

where f(u, v), g(u, v) are source functions depending on the variables u and v, and D1, D2 are diffusivities. Suppose that the system has a HSS u = u0, v = v0 satisfying f(u0, v0) = g(u0, v0) = 0. Stability analysis of the chosen HSS to infinitesimal space-dependent perturbations with a wavenumber k, u, v ∼ exp(ik · x), leads to the Jacobi matrix

J = −Dk² + F,   F = ( fu  fv
                      gu  gv ),   (5.52)

where the indices at f, g denote the derivatives with respect to the variables u, v, and D is the diagonal matrix with the elements Di. The determinant of J is

Det J = (fu − D1k²)(gv − D2k²) − fv gu.   (5.53)

Differentiating this with respect to k² shows that the minimum of the determinant, corresponding to the most dangerous perturbations, is attained at the squared wavenumber

k² = ½ (fu/D1 + gv/D2).   (5.54)

This value should be positive, which is possible only in the presence of a positive feedback, or, in chemical terms, when at least one of the species is autocatalytic, say, fu > 0. This signals breaking of the spatial symmetry, called the Turing bifurcation in honor of the pioneering study of reaction-diffusion equations by Turing (1952). The trace of J is maximal at k = 0, so that a wave bifurcation at a purely imaginary eigenvalue and k ≠ 0 cannot happen here, and is only possible when there are at least three variables (more on this in Sect. 5.4.1). Breaking spatial symmetry preempts the Hopf bifurcation, which occurs at fu + gv = 0, provided only one of the species is autocatalytic, so that gv < 0. Moreover, the inhibiting action should prevail over the autocatalytic one, so that |gv| > fu. Therefore k² in (5.54) may be positive only when the autocatalytic species is less diffusive, D1 < D2. Thus, for spatial symmetry breaking in a two-component system, one needs a combination of a slowly diffusing activator and a rapidly diffusing inhibitor.
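These conditions are easy to probe numerically. The sketch below (an added illustration with hypothetical kinetic coefficients chosen to satisfy fu > 0, gv < 0, |gv| > fu, fu gv − fv gu > 0, and D1 < D2) evaluates the determinant (5.53) at the most dangerous wavenumber (5.54) and compares it with the closed-form minimum given below in (5.55):

```python
# Sanity check of the Turing conditions with hypothetical kinetic coefficients:
# autocatalytic activator (fu > 0), inhibitor (gv < 0, |gv| > fu),
# stable homogeneous state (fu*gv - fv*gu > 0, fu + gv < 0), and D1 < D2.
fu, fv, gu, gv = 1.0, -2.0, 2.0, -2.0
D1, D2 = 0.05, 1.0

def det_J(k2):
    # determinant (5.53) of the Jacobian J = -D k^2 + F
    return (fu - D1 * k2) * (gv - D2 * k2) - fv * gu

# most dangerous squared wavenumber, Eq. (5.54)
k2_max = 0.5 * (fu / D1 + gv / D2)

# closed-form minimum of the determinant, cf. Eq. (5.55)
det_min = 0.25 * (2 * fu * gv - 4 * fv * gu
                  - (D2 / D1) * fu**2 - (D1 / D2) * gv**2)

print(k2_max, det_J(k2_max), det_min)
```

With these illustrative values the determinant at k² = 9 is negative, so the homogeneous state, though stable to uniform perturbations, is Turing-unstable.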


It still remains to check whether the determinant can vanish under these conditions. Plugging (5.54) in (5.53) yields

Det J = ¼ [2 fu gv − 4 fv gu − (D2/D1) fu² − (D1/D2) gv²].   (5.55)

Since fu gv − fv gu should be positive for stability to space-independent perturbations, the sum of the terms that do not contain diffusivities is not less than −2 fu gv, and thus is positive. Therefore the determinant can be brought to zero by choosing parameters of a problem in a way that the negative terms containing the diffusivities prevail over this positive term. This short derivation contains, in effect, all results of the long original paper (Turing, 1952).

The development of a pattern can be understood qualitatively in the following way. A local upsurge of the activator concentration also increases the concentration of the inhibitor, which spreads out, suppressing the activator at neighboring locations. This, in turn, suppresses the inhibitor locally and, through inhibitor diffusion, enhances the activator further along the line, so that the inhomogeneous state spreads out. This scheme works with the roles of the activator and the inhibitor played, respectively, by prey and predator in population dynamics, or by growing plants and seeping moisture in ecology. Similar roles are played, respectively, by buoyancy and heat conduction in natural convection, which prompted the first theoretical study of a bifurcation of a stationary pattern (Rayleigh, 1916), although the applicable fluid-mechanical equations have a very different structure.

The model system. A reaction-diffusion model suitable for the study of a variety of patterns is a two-component system (5.51) with widely separated time and length scales. The scaling becomes more transparent when the system is rewritten in the form (2.38), where the functions f(u, v) and g(u, v) are renormalized in such a way that the derivatives |∂f/∂u| and |∂g/∂v| are of O(1). We define the capacitance factors γj, which have the dimension of time, as the inverses of these derivatives at some chosen values of u, v, and set f̂(u, v) = γ1 f(u, v), ĝ(u, v) = γ2 g(u, v). Then (5.51) is rewritten as

γ1 u̇ = γ1 D1 ∇²u + f̂(u, v),   γ2 v̇ = γ2 D2 ∇²v + ĝ(u, v).   (5.56)

The choice of characteristic time and length scales is linked to the expected behavior of the system. We choose the diffusional range of the inhibitor l∗ = √(γ2 D2) as the length unit; then the wide separation of length scales is attained when ϵ = √(γ1 D1)/l∗ ≪ 1. If the function f(u, v) has at a given value of v two stable zeros, say, u = u±(v), the short-scale variable can switch between these two values across a front of the characteristic thickness ϵ. The long-scale variable remains almost constant across a narrow front, and therefore the derivation of the front velocity in Sect. 5.2.1 remains in force. The front, generally, may propagate with a characteristic velocity


dependent on the ratio of characteristic length and time scales. For slow propagation, this scale ratio should be of O(1). This is achieved by using t∗ = √(γ1 γ2 D2/D1) as the time scale. This brings (5.56) to the form (omitting now the hats)

u̇ = ϵ²∇²u + f(u, v),   (5.57)
τ⁻¹v̇ = ∇²v + g(u, v).   (5.58)

The parameter τ = t∗/γ2 = √(γ1 D2/(γ2 D1)) is the ratio of the characteristic time that a front takes to advance a distance of O(1) and the characteristic time of the inhibitor. If τ ≪ 1, the inhibitor is fast and adjusts to the front motion quasistationarily. This case is conducive to the formation of stationary structures. In the opposite case τ ≫ 1, when the inhibitor is very inertial, immobilized fronts are unstable and mobile structures are typical. The transition between both types of behavior occurs at τ = O(1), i.e., under conditions when the activator is fast as well as short-range. We will concentrate upon this range in the two following subsections. Another scaling, suitable for rapidly propagating fronts, will be introduced in Sect. 5.3.4. We will explore in detail, as a particular form of (5.56) obeying this scaling, the FitzHugh–Nagumo (FN) equation, which we have met already, with the diffusional terms dropped, in Sect. 3.5.3:

u̇ = ϵ²∇²u + u − u³ − ϵv,   (5.59)
τ⁻¹v̇ = ∇²v − v − ν + μu.   (5.60)

The choice of the cubic function in (5.59) ensures the existence of two alternative HSS of this equation at v = O(1), and the bias is kept small to make the algebra simpler while retaining qualitative features of the model. The relevant range of parameters can be suggested by qualitative analysis based on the disposition of null-isoclines in the plane u, v in Fig. 3.17. At μ > 0, |ν| < μ, both global stationary states u = ±1 + O(ϵ), v = ±μ − ν would retreat when placed in contact with an alternative HSS of the short-scale equation at the same value of v. With this choice of signs, v is depleted in the down state u < 0 and produced in the upper state u > 0, while, contrariwise, the lower state advances at v > 0 and the upper one at v < 0. Such conditions are conducive to the formation of stable solitary structures. Consider, for example, a finite region with u < 0 immersed within a continuum of positive u. If it starts to shrink, the level of v would rise and, as a result, the front may be immobilized again. If the dimension is higher than one, the question is whether this stabilizing action wins over curvature-driven shrinking, which accelerates in inverse proportion to the radius. At μ < 0, neither global stationary state is excitable. Stationary solitary structures, which can be formally constructed also in this case, correspond to unstable critical nuclei of one of the alternative states. Only one state is excitable at μ > 0, |ν| > μ. The parameter τ plays, as we shall further see, a special role, influencing the transition from stationary to mobile structures.

5.3.2 Stationary and Mobile Fronts

A stationary front. Various stationary structures can be obtained in the limit ϵ → 0 following a common strategy. We solve the long-scale equation (5.60) replacing u by its quasistationary values us± = ±1 + O(ϵ) in the regions separated by a front, which is assumed to be of negligible thickness on the characteristic length scale of the long-range variable. The position of the front is further defined by requiring the values of v to annul the propagation speed given by (5.41). A 1D solution corresponding to a single stationary front on the infinite line verifies the equation

v″(x) − v − ν − μ = 0   at x ≤ 0,
v″(x) − v − ν + μ = 0   at x ≥ 0.   (5.61)

The matching conditions at the short-scale discontinuity x = 0 require v to be continuous and smooth. The solution is

v = μ(eˣ − 1) − ν   at x ≤ 0,
v = μ(1 − e⁻ˣ) − ν   at x ≥ 0.   (5.62)

The stationarity condition is just v(0) = 0, which implies ν = 0. Thus, as it should be clear from the outset, a single stationary front is possible only when the system is symmetric.

Stability of a stationary front. Unlike a front in a one-component system, the front solution we have just obtained may be unstable. Stability to infinitesimal displacements should be checked by taking into account their coupling with perturbations of the slow variable. An instantaneous displacement of the front from its unperturbed position toward the region occupied by the upper state, dependent on the transverse variable y, will be denoted as ζ(y, t) (having more than one transverse variable brings nothing new). As long as the amplitude of the perturbation is much smaller than its wavelength, the normal vector defining the direction of the front propagation is almost parallel to the x-axis, and the propagation speed is c = ζ̇. In the same approximation, the curvature of a perturbed front is given by κ = −ζ″(y). As we have seen in Sect. 5.2.1, the velocity of a weakly curved front is shifted by κ relative to that of a straight-line front under the same conditions. The latter is proportional to the level of the long-scale variable v(ζ) at the perturbed location. For the cubic function in (5.59), linearizing (5.41) with ν = v(ζ) ≪ 1 yields c0 = (3/√2) v(ζ). The infinitesimal shift of the local concentration at the perturbed front position is expressed as

v(ζ) = v1(ζ, y, t) + ζ v0′(x)|x=0,   (5.63)

where v0(x) is the stationary solution (5.62) and v1(x) is the perturbation of the slow variable due to an infinitesimal shift of the front position. The last term is just the


stationary flux at the front with the inverted sign, which equals μ according to (5.62). Thus, the equation of the front displacement is

ζ̇ = ζ″(y) + b[v1(ζ, y, t) + μζ].   (5.64)

The coefficient b = 3/√2 is the derivative of the front velocity c with respect to v at c = 0. This equation should be solved together with the equation of v1,

τ⁻¹v̇1 = ∇²v1 − v1 − 2μζ(y, t)δ(x).   (5.65)

The last term containing Dirac's delta-function is due to a shift of the front position by an infinitesimal increment ζ, which is equivalent to switching the sign of the source term in (5.60) in a narrow region near x = 0. This introduces a source localized at the unperturbed front position and proportional to its displacement. Although perturbations are habitually presumed to be infinitesimally small for the purpose of linear stability analysis, the method is actually also applicable to finite perturbations of the front position, provided they are small on the extended scale, i.e., ζ ≤ O(ϵ). Both perturbation variables are presented in the spectral form

ζ(y, t) = ζ̂(k, λ)e^{λt+iky},   v1(x, y, t) = v̂(x, k, λ)e^{λt+iky}.   (5.66)

The dispersion relation λ(k²) determines stability to infinitesimal perturbations. Using (5.66) in (5.64), (5.65) yields

(λ + k²)ζ̂ = b(v̂ + μζ̂),   (5.67)
v̂″(x) − q²v̂ − 2μζ̂(k, λ)δ(x) = 0,   (5.68)

where q² = 1 + λ/τ + k². The last equation is solved with the boundary conditions v̂(x) → 0 at x → ±∞. The derivative v̂′(x) is discontinuous at x = 0 due to the point source in (5.68), so that v̂′(+0) − v̂′(−0) = 2μζ̂(k, λ). Thus, the solution of this equation is

v̂(x, k) = −μζ̂(k, λ) q⁻¹ e^{−q|x|}.   (5.69)

Using (5.69) in (5.67) yields the implicit dispersion relation

λ = −k² + bμ(1 − q⁻¹).   (5.70)

As expected, λ vanishes at k = 0, which reflects the translational symmetry of the front. An explicit form of the implicit dispersion relation would be awkward, but it is not needed for locating a long-scale instability at the zero eigenvalue, which sets in when the derivative dλ/d(k²) > 0 at λ, k → 0. It is sufficient therefore to expand (5.70) assuming λ ∼ k² ≪ 1. This yields in the leading order

λ = −k²τ (bμ − 2)/(bμ − 2τ).   (5.71)
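The implicit relation (5.70) and its long-wave limit (5.71) can be cross-checked directly. The following Python sketch (an added illustration with illustrative values of μ and τ) finds the root of (5.70) by bisection at a small wavenumber and compares it with the expansion:

```python
import math

# Sketch: solve the implicit dispersion relation (5.70),
# lambda = -k^2 + b*mu*(1 - 1/q), q = sqrt(1 + lambda/tau + k^2),
# by bisection and compare with the long-wave expansion (5.71).
b = 3.0 / math.sqrt(2.0)
mu, tau, k = 0.3, 0.5, 0.01        # illustrative parameter values

def F(lam):
    q = math.sqrt(1.0 + lam / tau + k**2)
    return lam + k**2 - b * mu * (1.0 - 1.0 / q)

lo, hi = -1e-3, 0.0                # bracket of the long-wave root
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if F(lo) * F(mid) <= 0.0:
        hi = mid
    else:
        lo = mid
lam_num = 0.5 * (lo + hi)

lam_approx = -k**2 * tau * (b * mu - 2.0) / (b * mu - 2.0 * tau)
print(lam_num, lam_approx)
```

At k = 0.01 the numerical root and the leading-order expression agree to within higher-order corrections in k².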


The result indicates two instability regions:

μ > 2/b, μ < (2/b)τ   or   μ < 2/b, μ > (2/b)τ.   (5.72)

The character of the transitions is very much different. At μ = 2/b, the eigenvalue switches sign when passing zero, while at μ = 2τ/b it diverges before changing its sign, which raises a suspicion that the expansion we have used is invalid at this point. A more detailed analysis, resolving the dispersion relation at k → 0, shows that there is a positive root both at the line μ = 2/b, τ < 1 and at the line μ/τ = 2/b, τ > 1. Since the positive root cannot pass zero anywhere within the parametric domain μ/τ > 2/b, μ > 2/b, the front must be unstable within the entire domain.

Oscillatory instability. This still does not prove stability in the remaining parametric region μ/τ < 2/b, μ < 2/b. We still need to investigate whether an oscillatory instability at a purely imaginary eigenvalue could occur there. Setting λ = iωτ and separating the real and imaginary parts of (5.70) yields

k² = bμ(1 − χ⁻¹cos α) + i(−τω + bμχ⁻¹sin α),   (5.73)

where χ = [(1 + k²)² + ω²]^{1/4}, and α is the half of the phase angle of the complex number 1 + k² + iω:

α = ½ arg(1 + k² + iω) = arctan [ω/(1 + k² + √((1 + k²)² + ω²))],   |α| ≤ π/4.   (5.74)

The instability is possible when the imaginary part of the above expression vanishes, while its real part is positive. The imaginary part comes to zero at

bμ/τ = ωχ/sin α.   (5.75)

The lowest possible value of the ratio on the right-hand side is attained at ω = 0. Expanding χ and α in this limit shows that it is equal to 2(1 + k²)^{3/2}. Thus, an oscillatory instability is possible only at bμ > 2τ, and the front is stable in the entire domain bμ < 2τ, μ < 2/b. More details are found in the accompanying Mathematica notebook SeparatedScales.nb.

Mobile fronts. Besides a stationary single front, one can construct a family of fronts propagating with a speed c related to the level of v at the front by (5.41). The long-range variable should verify then the equation in the comoving frame z = x − ct:

τ⁻¹c v′(z) + v″(z) − v − ν − μ = 0   at z ≤ 0,
τ⁻¹c v′(z) + v″(z) − v − ν + μ = 0   at z ≥ 0.   (5.76)


The general solution that satisfies the matching conditions at the short-scale discontinuity z = 0 and does not diverge at z → ±∞ is

v = μ[(1 + c/s) e^{(s−c)z/(2τ)} − 1] − ν   at z ≤ 0,
v = μ[1 − (1 − c/s) e^{−(s+c)z/(2τ)}] − ν   at z ≥ 0,   (5.77)

where s² = c² + 4τ². The value of v on the front, v(0) = μc/s − ν, must be equal to its value corresponding to the propagation velocity. Inverting (5.41) with ν replaced by v yields the surprisingly simple dependence v = c/b − (c/b)³. Clearly, c should be of O(ϵ), since (5.59) is only weakly asymmetric. This yields the equation for c,

c/b − (c/b)³ = μc/√(c² + 4τ²) − ν.   (5.78)

In the first order, this equation is satisfied identically at μ = 2τ/b, which coincides in the symmetric case ν = 0 with the stability limit of a stationary front derived above. Asymmetric solutions with c = O(ϵ) exist only at small ν. A pair of solutions propagating in a "wrong" direction, one stable and another one unstable, emerges at a saddle-node bifurcation, which takes place when the derivative of (5.78) with respect to c vanishes.

The nature of the bifurcation. By analogy with magnetic domain walls, stationary and propagating fronts have been labeled, respectively, as Ising and Bloch fronts, and the transition between them as the Ising–Bloch (IB) bifurcation (Coullet et al., 1990). The term has taken root, although the specific structure of magnetic walls is in no way reproduced in reaction-diffusion models, and defining these fronts as, respectively, stationary and mobile is both rational and more suggestive. The velocities of the emerging fronts would grow to O(1) if the small parameter multiplying v in (5.59) were removed. The location of the bifurcation point would not change in this case, but solving (5.76) analytically would be impossible due to the nonlinear dependence u(v) in (5.59), which makes (5.60) nonlinear as well through the term μu. Hagberg and Meron (1994) computed a bifurcation diagram for a strongly biased FN model numerically. If (5.78) is viewed just as an algebraic equation, this is a pitchfork bifurcation point, as in Sect. 2.1.4, beyond which two symmetric nontrivial solutions exist, which correspond to fronts propagating in opposite directions. If we dealt with a standard dynamical system, we would expect traveling solutions to emerge when the stationary solution loses stability; here, however, they exist only where the stationary front is stable as well. Intuition borrowed from Ch. 2 might be misleading in the case of spatially distributed systems, which have an infinite number of degrees of freedom. The innocuous-looking FN system with widely separated scales is capable of generating chaotic patterns, and a repelling manifold separating the attraction basins of stationary and propagating fronts may be highly convoluted and practically impossible to locate.
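The structure of (5.78) is easy to explore numerically. The sketch below (an added illustration with illustrative values of τ and μ) locates the nontrivial roots in the symmetric case ν = 0 and confirms that moving (Bloch) fronts appear only where the stationary (Ising) front is stable, μ < 2τ/b:

```python
import math

# Sketch: roots of the velocity equation (5.78) with nu = 0,
#   c/b - (c/b)^3 = mu*c/sqrt(c^2 + 4*tau^2),
# for illustrative parameter values.
b = 3.0 / math.sqrt(2.0)
tau = 0.5

def h(c, mu):
    # Eq. (5.78) divided by c: zeros give the nonzero (Bloch) roots
    return 1.0 / b - (c / b)**2 / b - mu / math.sqrt(c**2 + 4.0 * tau**2)

mu = 0.3               # below 2*tau/b ~ 0.47: a pair of moving fronts exists
lo, hi = 1.0, 1.8      # h changes sign on this bracket
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if h(lo, mu) * h(mid, mu) <= 0.0:
        hi = mid
    else:
        lo = mid
c_star = 0.5 * (lo + hi)
print(c_star)          # the stationary front c = 0 coexists with +-c_star
```

For μ = 0.6, above 2τ/b, the function h stays negative for all c > 0, so only the stationary front c = 0 survives, in line with the discussion above.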


Stability of a mobile front. The coexistence of two kinds of stable fronts in the same parametric domains can be understood if we take note that at c ≪ 1 stability conditions for stationary and moving fronts cannot differ much. Repeating the above stability analysis for c ≠ 0, we arrive at the following equations for the Fourier transforms of the front displacement and the perturbation of the slow variable, modifying (5.67), (5.68):

(λ + k² − ick)ζ̂ = b(v̂ + μζ̂),   (5.79)
v̂″(z) + (c/τ)v̂′(z) − q²v̂ − 2μζ̂(k, λ)δ(z) = 0.   (5.80)

The solution of the last equation is

v̂(z, k) = −μζ̂(k, λ) Q⁻¹ e^{−Q|z|},   (5.81)

where Q² = q² + (c/2τ)². Using (5.81) in (5.79) yields the implicit dispersion relation

λ + k² − ick − bμ(1 − Q⁻¹) = 0.   (5.82)

The imaginary term proportional to c is irrelevant, and the rest of this expression should be expanded as before assuming λ ∼ k² ≪ 1. To match the orders of magnitude, we need to assume c ∼ k. This yields in the leading order

λ = τ⁻¹ [2k(k − ic)τ² − bμ((c/2)² + (kτ)²)]/(bμ − 2τ).   (5.83)

The instability condition dλr/d(k²) > 0 at λ, k → 0 does not depend on c and exactly coincides with (5.72). Oscillatory instability at a finite k is not sensitive to c ≪ 1 at all in the leading order, and is anyway confined to the region where mobile fronts do not exist. Thus, stability regions of stationary and slowly-moving fronts coincide.

5.3.3 Stationary and Mobile Bands

A solitary band. A solitary band of a width 2L enclosed by two fronts of opposite polarity can be viewed as a bound state of a kink and an antikink. For the down state inside the band, the profile of v should verify

v″(x) − v − ν − μ = 0   at |x| ≤ L,
v″(x) − v − ν + μ = 0   at |x| ≥ L.   (5.84)

There are now two symmetric sets of matching conditions at x = ±L, and the solution is

v = μ(2e⁻ᴸ cosh x − 1) − ν   at |x| ≤ L,
v = μ(1 − 2e^{−|x|} sinh L) − ν   at |x| ≥ L.   (5.85)

The fronts are stationary when v(±L) = μe^{−2L} − ν = 0, which gives L = ½ ln(μ/ν). The solutions exist at μ/ν > 1.


Dispersion relations. Similar to Sect. 5.3.2, we can write equations of motion for infinitesimal displacements ζ±L of the kink and the antikink at x = ±L that confine a solitary band. We take note that the sign of the bias term in the dependence of the displacement on the shift of v should be reversed when applied to the antikink. Using the expression for the stationary fluxes following from (5.85), we obtain

ζ̇±L = ζ″±L(y) + b[±v1(±L, t) + μ(1 − e^{−2L})ζ±L].   (5.86)

It is advantageous to consider the symmetric and antisymmetric combinations of the displacements, ζ± = ½(ζ+L ∓ ζ−L), which are coupled, respectively, to the symmetric and antisymmetric parts w± of the perturbation field v1:

ζ̇± = ζ″±(y) + b[w±(L, t) + μ(1 − e^{−2L})ζ±].   (5.87)

The equations for both perturbation fields w± are identical:

τ⁻¹ẇ± = ∇²w± − w± − 2μζ±(y, t)δ(x − L).   (5.88)

Presenting the perturbation variables in the spectral form w± = ŵ± e^{λt+iky}, ζ± = ζ̂± e^{λt+iky}, we obtain

(λ + k²)ζ̂± = b[ŵ± + μ(1 − e^{−2L})ζ̂±],   (5.89)
ŵ″±(x) − q²ŵ± − 2μζ̂±(k, λ)δ(x − L) = 0,   (5.90)

where q² = 1 + λ/τ + k². The last equations are solved with the boundary conditions ŵ±(x) → 0 at x → ±∞. The derivative ŵ′±(x) is discontinuous at x = ±L due to the point source in (5.90), so that ŵ′±(L + 0) − ŵ′±(L − 0) = 2μζ̂±(k, λ). The solution verifying these conditions is

ŵ+(x) = −2μζ̂+ q⁻¹ e^{−qL} cosh qx   at |x| ≤ L,
ŵ+(x) = −2μζ̂+ q⁻¹ e^{−q|x|} cosh qL   at |x| ≥ L;
ŵ−(x) = −2μζ̂− q⁻¹ e^{−qL} sinh qx   at |x| ≤ L,
ŵ−(x) = −2 sign(x) μζ̂− q⁻¹ e^{−q|x|} sinh qL   at |x| ≥ L.   (5.91)

Implicit expressions of the symmetric and antisymmetric branches of the dispersion relation are obtained by inserting (5.91) in (5.89):

λ± = −k² + bμ[1 − e^{−2L} − q⁻¹(1 ± e^{−2qL})].   (5.92)

Monotonic instability. The vanishing of λ− at k = 0 reflects the translational symmetry of the band, so that, similar to the case of a single front, a long-scale instability is expected. Expanding (5.92) at λ− ∼ k² ≪ 1 yields in the leading order

λ− = k²τ [2 − bμ + bμ(1 + 2L)e^{−2L}] / [bμ − 2τ − bμ(1 + 2L)e^{−2L}].   (5.93)
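As for the single front, the implicit relation (5.92) and its expansion (5.93) can be compared numerically. The sketch below (an added illustration; all parameter values are illustrative) bisects the antisymmetric branch at a small wavenumber:

```python
import math

# Sketch: long-wave root of the antisymmetric branch of (5.92),
#   lambda = -k^2 + b*mu*[1 - exp(-2L) - (1 - exp(-2qL))/q],
#   q = sqrt(1 + lambda/tau + k^2),
# found by bisection and compared with the expansion (5.93).
b = 3.0 / math.sqrt(2.0)
mu, tau, L, k = 0.3, 0.5, 1.0, 0.01   # illustrative values

def G(lam):
    q = math.sqrt(1.0 + lam / tau + k**2)
    return lam + k**2 - b * mu * (1.0 - math.exp(-2.0 * L)
                                  - (1.0 - math.exp(-2.0 * q * L)) / q)

lo, hi = -1e-3, 0.0                   # bracket of the long-wave root
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if G(lo) * G(mid) <= 0.0:
        hi = mid
    else:
        lo = mid
lam_num = 0.5 * (lo + hi)

e = (1.0 + 2.0 * L) * math.exp(-2.0 * L)
lam_approx = k**2 * tau * (2.0 - b * mu + b * mu * e) / (b * mu - 2.0 * tau - b * mu * e)
print(lam_num, lam_approx)
```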


As for a single front, the derivative dλ/d(k²) is positive, implying instability, in two triangles: either

bμ/(2τ) > f(L),   bμ/2 < f(L),   f(L) = [1 − (1 + 2L)e^{−2L}]⁻¹,   (5.94)

or with both signs reversed. In the limit L → ∞, the instability condition of a single front (5.72) is recovered. As elaborated in the notebook SeparatedScales.nb, we can prove in the same way as for a single front that the band is also unstable when both limits in (5.94) are exceeded. As L decreases, the remaining domain covers a wider parametric region. A representative example is shown in Fig. 5.6a, but, as we will presently see, the stable domain is encroached upon by the oscillatory instability.

Oscillatory instability. An instability at a purely imaginary eigenvalue is also located similarly to that of a single front. Setting λ = iωτ in (5.92) and separating the real and imaginary parts yields two equations for k² and ω, where the positive sign corresponds to the symmetric (varicose) and the negative sign to the antisymmetric (zigzag) instability:

k² = bμ[1 − e^{−2L} − χ⁻¹(cos α ± e^{−2Lχ cos α} cos(α + 2Lχ sin α))],   (5.95)
ωτ = bμχ⁻¹[sin α ± e^{−2Lχ cos α} sin(α + 2Lχ sin α)].   (5.96)

The notation follows (5.73), which is recovered at L → ∞. In the antisymmetric case, the lowest possible value of the ratio μ/τ satisfying (5.96) is again attained at ω → 0, and is expanded at ω ≪ 1 as

bμ/τ = 2(1 + k²)^{3/2} + ω√(1 + k²)[1 ± e^{−2L√(1+k²)} sin(2L√(1+k²))].   (5.97)
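The function f(L) entering the instability limits (5.94) is easily tabulated; the added sketch below confirms that it decreases monotonically with the half-thickness and approaches the single-front value 1 as L → ∞, recovering (5.72):

```python
import math

# The function f(L) of Eq. (5.94), as reconstructed above:
# f(L) = 1/(1 - (1 + 2L)*exp(-2L)); f -> 1 as L -> infinity,
# so thin bands have wider instability triangles than a single front.
def f(L):
    return 1.0 / (1.0 - (1.0 + 2.0 * L) * math.exp(-2.0 * L))

print(f(0.5), f(1.0), f(10.0))
```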

Figure 5.6. (a): The stable parametric domain of a single front, below the black lines, is compared with that of a band with the half-thickness L = 1. The limit set by the monotonic instability only is shown by orange lines. The zigzag instability cuts off the triangle above the red line. (b): The lower limit of the ratio μ/τ for the zigzag (blue curve) and varicose (orange curve) instability as a function of the half-thickness L. The dashed line shows the respective value for a single front


Figure 5.7. The range of possible wavenumbers and frequencies for the zigzag (a) and varicose (b) instability at L = 1 (shaded blue) and L = 1/2 (shaded orange)

The coefficient at ω is always positive, so that, as in the case of a single front, the lowest value of the right-hand side is 2. However, the right-hand side of (5.95) is negative in a range of small ω, k that shrinks as L increases. This, in turn, raises the minimum level of μ/τ satisfying (5.96), and therefore instability in the region stable to monotonic perturbations is possible. In the symmetric case, the expansion near ω = 0 does not yield a definite result, but numerical minimization in the notebook SeparatedScales.nb shows that also in this case the minimum is observed at zero wavenumber and small ω. The range of possible wavenumbers and frequencies, shown in Fig. 5.7, grows and shifts to higher ω as L decreases, and is wider for the antisymmetric (zigzag) than for the symmetric (varicose) instability. The former is generally more dangerous, except at L ≳ 1.5, when both are already close to the stability limit of a single front, as shown in Fig. 5.6b. The red line in Fig. 5.6a shows the limit of the zigzag instability at L = 1. Beyond the marginal instability locus, the maximum growth rate is observed at a finite wavelength. The zigzag instability is apt either to destroy the band, turning it into an array of spots, or to trigger spreading out into a labyrinthine pattern, but this can be determined only by full numerical simulation in 2D.

Mobile bands. The profile of the slow variable in a traveling band with the down state inside the band should verify equations similar to (5.84) when written in the comoving frame z = x − ct:

τ⁻¹c v′(z) + v″(z) − v − ν − μ = 0   at |z| ≤ L,
τ⁻¹c v′(z) + v″(z) − v − ν + μ = 0   at |z| ≥ L.   (5.98)

The solution, satisfying the usual matching conditions at z = ±L and decaying at infinity, is

v = μ[1 + 2(1 + c/s) e^{(s−c)z/(2τ)} sinh((c − s)L/(2τ))] − ν   at z ≤ −L,
v = μ[(1 + c/s) e^{−(s−c)(L−z)/(2τ)} + (1 − c/s) e^{−(s+c)(L+z)/(2τ)} − 1] − ν   at |z| ≤ L,
v = μ[1 − 2(1 − c/s) e^{−(s+c)z/(2τ)} sinh((c + s)L/(2τ))] − ν   at z ≥ L,   (5.99)

where s² = c² + 4τ². Naturally, the profile of v(z) is asymmetric, as illustrated by Fig. 5.8. The velocity, as for a single front, should fit the values of v on both sides of the band, and, as for a stationary band, L is fixed by the requirement that the kink and the antikink move with the same O(ϵ) speed. We can use again the simple dependence v = c/b − (c/b)³, which should be equated to v(±L) following from (5.99):

v(±L) = μ[(1 ∓ c/s) e^{−(s±c)L/τ} ± c/s] − ν.   (5.100)

Figure 5.8. A typical profile of the long-range variable for a moving band

Expanding this to the first order in c yields

v(±L) = μe^{−2L} − ν ± (μc/(2τ))[1 − (1 + 2L)e^{−2L}].   (5.101)

The zero-order term must vanish, which fixes L = ½ ln(μ/ν), exactly as for a stationary band. Equating the first-order term to the first-order term in the dependence v(c) (with the opposite signs for the kink and the antikink) yields precisely the limit of the monotonic instability of a stationary band (5.94). This is encouragingly similar to what we have observed for a single front, but we encounter here a severe technical problem. Expanding v(±L) further, we see, alongside odd-order terms having opposite signs, also even-order terms of the same sign, which do not fit the kinematic relation between the front velocity and the value of the slow variable at the front. The contradiction can be resolved by taking into account the finite thickness of the front, which is of O(ϵ) and therefore should influence higher-order terms. We stop at this point, but, leaving aside the higher orders of the expansion, it is clear that, as in the case of a single front at L → ∞, bands propagating in opposite directions should exist when the slope of the straight line at c = 0 lies below the limit of the monotonic instability of a stationary band, so that mobile and static bands coexist in a common parametric domain. Also, since c ∼ O(ϵ), the respective stability limits may not differ by more than an O(ϵ) shift.


Striped patterns. It is also possible to construct in this way a regular striped pattern in the infinite plane with the period 2L, L = L+ + L−, formed by alternating fronts of opposite polarity: kinks at x = 2nL and antikinks at x = 2(nL + L+). The stationary profile of the long-range variable is

v = μ[2 cosh(x − 2nL + L−) sinh L+/sinh L − 1] − ν   at 2nL − 2L− ≤ x ≤ 2nL,
v = μ[1 − 2 cosh(x − 2nL − L+) sinh L−/sinh L] − ν   at 2nL ≤ x ≤ 2nL + 2L+.   (5.102)

The equilibrium condition, which fixes the relation between the lengths L±, is v(0) = v(2L+) = 0. A short computation yields

ν = μ sinh(L+ − L−)/sinh L.   (5.103)

Limiting the bias by the inequality μ > |ν| is necessary for the existence of solutions. Provided this holds, there is a continuum of solutions with L and L+ − L− verifying (5.103). Striped patterns are subject to the same kinds of monotonic, zigzag, and varicose instabilities as a single band, which are analyzed in a similar way, though leading to more complicated expressions. However, the analysis becomes more involved if perturbations of the constituent kinks and antikinks are not restricted by symmetries. Mobile solutions can be constructed as well, but all this is not worth the trouble, since 2D patterns are more likely to be distorted.
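The profile (5.102) and the equilibrium condition (5.103) can be verified together. A quick added check with illustrative half-lengths (the values of L± below are hypothetical):

```python
import math

# Check of the striped-pattern profile (5.102) and the equilibrium
# condition (5.103): nu = mu*sinh(L+ - L-)/sinh(L), L = L+ + L-.
# Half-lengths are illustrative.
mu = 1.0
Lp, Lm = 0.8, 0.5
L = Lp + Lm
nu = mu * math.sinh(Lp - Lm) / math.sinh(L)   # Eq. (5.103)

def v_down(x):  # branch on 2nL - 2L- <= x <= 2nL, taken at n = 0
    return mu * (2.0 * math.cosh(x + Lm) * math.sinh(Lp) / math.sinh(L) - 1.0) - nu

def v_up(x):    # branch on 2nL <= x <= 2nL + 2L+, taken at n = 0
    return mu * (1.0 - 2.0 * math.cosh(x - Lp) * math.sinh(Lm) / math.sinh(L)) - nu

print(v_down(0.0), v_up(0.0), v_up(2.0 * Lp))
```

Both branches vanish at the kink x = 0 and at the antikink x = 2L+, confirming continuity and the equilibrium condition, and |ν| < μ holds automatically.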

5.3.4 Wave Trains

Advective scaling. The scaling of Sect. 5.3.2 is not suitable for the analysis of oscillatory or excitable systems, since strong coupling of the two variables in the FN system is necessary to eliminate one or both fixed points. Typical structures associated with oscillatory or excitable dynamics are excitation pulses and wave trains, which can be viewed as excitation spikes or oscillations of dynamical systems translated from the temporal to the spatial domain. A wave train in an oscillatory system would combine alternating transitions from the lower to the upper state and back (Fig. 3.17a). For an excitable system (Fig. 3.17b), both wave trains and solitary spikes are possible. Unlike stationary or slowly-evolving patterns, where the characteristic length scale is set by the diffusional range of the long-scale variable, the wavelength of a propagating pattern is tied to the propagation speed and remains finite even when the inhibitor is nondiffusive. Returning to the general two-variable system (5.56), we take as the time scale the characteristic relaxation time of the inhibitor γ2, and base the length scale upon the characteristic propagation speed of the activator front c∗ = √(D1/γ1). Using this "advective" length unit, l∗ = γ2 c∗ = γ2√(D1/γ1), we bring (5.56) to the


dimensionless form

εu̇ = ε²u″(x) + f(u, v),   (5.104)
v̇ = τ²v″(x) + g(u, v).   (5.105)

The parameter τ = γ1 D2/(γ2 D1) is the same one introduced in Sect. 5.3.1, and the new small parameter ε = γ1/γ2 is connected to the one introduced there through the ratio of diffusivities D2/D1.

Advective limit. The "inner" scale of the transitional layer, where the system switches between the two alternative activator states u = u±, is now set exclusively by the capacitance ratio, independently of the diffusivities, and, provided the activator is fast, remains small even when the inhibitor is less diffusive than the activator and the small parameter of Sect. 5.3.1 is not small; in particular, τ ≪ 1 whenever the inhibitor diffusion is negligible. Under these conditions, the inhibitor diffusion can be neglected, reducing (5.105) to v̇ = g(u, v). Although this equation contains no mechanism for healing discontinuities in v, the inhibitor field usually remains smooth in the course of evolution, barring pathological initial conditions or strongly localized perturbations. In this "advective" limit, with both diffusivities negligible, an undistorted excitation pulse or wave train propagating with a constant speed c along the x-axis is defined by a simple equation in the comoving frame z = x − ct:

c v′(z) + g±(v) = 0,   g±(v) = g(u±(v), v).   (5.106)

This equation is, generally, nonlinear even for a linear function g(u, v), in view of the nonlinear dependence of the quasistationary activator values on v. This technical difficulty is removed when the symmetry of the function f(u, v) is only weakly biased, as in (5.59), but this slows down the propagation speed in the same proportion. Another choice is a piecewise-linear function, as in (4.70). Then u± = ±1, and the propagation speed is c = βv0, where v0 is the value of the slow variable at the transition point and β = √2. With this assignment, and a linear g(u, v) in its usual form (5.60), the equation becomes linear and can be solved analytically:

c v′(z) − v − ν ± μ = 0.   (5.107)

This equation admits a continuum of periodic solutions parametrized by the velocity c. The relation between the velocity and the wavelength L, i.e., a dispersion relation, can be obtained by fixing the values of v at the transition fronts. Qualitatively similar solutions for nonlinear systems with phase portraits of the type shown in Fig. 3.17a,b can be routinely computed numerically.

Wave trains. Consider an oscillatory system with the null-isoclines shown in Fig. 5.9a. The only possible regular pattern in this system is a wave train combining alternating transitions from the lower to the upper state and back, as shown by the dashed lines in

CHAPTER 5. DYNAMICAL SYSTEMS IN SPACE









Figure 5.9. Null-isoclines of an oscillatory (a) and an excitable (b) system with the piecewise-linear function f(u, v). Dashed lines show transition fronts of the inhibitor in a wave train (a) or a single excitation pulse (b)

this figure. To conform to common usage, we shall call the upper state "excited." On its advancing edge, called the "front" side and located, say, at x = nL, n = …, −1, 0, 1, …, the propagation speed, by our usual sign convention, is negative. The reverse transition to the "quiescent" lower state takes place at the "back" side located at x = nL ∓ L±, where the propagation speed is positive; L± are the widths of the excited and quiescent regions. It is sufficient to consider a single period −L+ ≤ z ≤ L−. Clearly, the values of v at the front and back edges should be of opposite signs and equal in absolute value, so that the solution should satisfy v(∓L±) = v0 = c/β, v(0) = −v0 = −c/β. The usual matching conditions do not apply in the absence of diffusional terms. The problem is overdetermined, so that its solution should fix the period L as well. The solution of (5.107) satisfying the boundary condition at z = 0 is

v = μ − ν − (μ − ν + c/β) e^{z/c}   at −L+ < z < 0,
    −(μ + ν) + (μ + ν − c/β) e^{z/c}   at 0 < z < L−.   (5.108)

The widths L± of the excited and quiescent regions should be obtained from the remaining boundary conditions. This gives the relation between L± and the propagation speed c in the implicit form

μ = (c/β) (e^{L/c} − 1)/[(e^{L+/c} − 1)(e^{L−/c} − 1)],
ν = (c/β) (e^{L+/c} − e^{L−/c})/[(e^{L+/c} − 1)(e^{L−/c} − 1)].   (5.109)

The values of L± at particular levels of the parameters μ, ν can be read from Fig. 5.10a. In the symmetric case, ν = 0 and L± = L/2, and one can obtain an explicit dispersion relation

L = 4c arccoth(βμ/c).   (5.110)

The period diverges at c → βμ, when the trajectory approaches a heteroclinic orbit passing through the corners of the zigzag null-isocline in Fig. 5.9. At c → 0,
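The symmetric dispersion relation (5.110) can be cross-checked against the implicit form (5.109). The sketch below (illustrative parameter values, standard-library Python only) plugs the explicit wavelength back into the μ-relation of (5.109) with ν = 0 and L± = L/2:

```python
import math

BETA = math.sqrt(2.0)  # beta = sqrt(2) for the piecewise-linear model

def arccoth(x):
    # inverse hyperbolic cotangent, valid for |x| > 1
    return 0.5 * math.log((x + 1.0) / (x - 1.0))

def wavelength(c, mu):
    """Explicit dispersion relation (5.110), symmetric case nu = 0."""
    return 4.0 * c * arccoth(BETA * mu / c)

def mu_implicit(c, L_plus, L_minus):
    """mu expressed through the widths L+- by the implicit relation (5.109)."""
    L = L_plus + L_minus
    return (c / BETA) * math.expm1(L / c) / (
        math.expm1(L_plus / c) * math.expm1(L_minus / c))

# illustrative values; a finite period requires beta*mu/c > 1
c, mu = 0.3, 0.5
L = wavelength(c, mu)
mu_back = mu_implicit(c, L / 2, L / 2)   # recovers mu
```

The identity holds exactly, since (5.109) with equal widths reduces to (c/β) coth[L/(4c)].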



Figure 5.10. (a): Resolving (5.109). Black and red contours show the values of βμ/c and βν/c, respectively, as functions of L±/c. The labels of the βν/c contours are framed. (b): A profile of v for a typical propagating pulse in an excitable system. Inset: the dependence of the scaled width of the excited zone on the ratio ν/μ

(5.110) yields formally a vanishing period; however, the formula is inapplicable at small velocities, since diffusion then has to be taken into account. Periodic solutions constructed in this way are stable to uncorrelated displacements of individual fronts: any advance would bring a front into an unrelaxed zone where the value of v is unfavorable, and thus cause deceleration; conversely, any delay shifts the front further down the relaxation tail of the preceding transition and causes acceleration, restoring the original wavelength.

Excitation pulses. Similar patterns can be constructed for an excitable system with the dynamical phase portrait shown in Fig. 5.9b. The only limitation is that the front edge should lie above the fixed point in this picture. Generating such a pattern requires, however, special initial conditions. It could be sustained, in a more or less realistic setting, on a large ring, where excitation waves of finite period would chase one another. A more natural structure for this system is a single excitation pulse propagating into the quiescent state v = v0. The speed of the excitation front is c0 = c(v0). The reverse transition in the relaxation tail may be delayed, in the absence of perturbations, up to the turning point of the positive branch in Fig. 5.9b, but the back edge forming there would move faster and catch up to the level of v where both speeds equalize. The profile satisfying the linear equation (5.107) should be computed with c = βv0, v0 = −(ν + μ). An excitable state on the lower branch with −1 < v0 < 0 exists at −μ < ν < 1 − μ. The solution in the comoving frame with the front edge at the origin


and the back edge at −L, satisfying the boundary condition v(−L) = −v(0) = ν + μ, is

v = −(μ + ν)   at z > 0,
    μ − ν − 2μ exp[z/(β(μ + ν))]   at −L < z < 0,
    −(μ + ν) − 2μ(1 + μ/ν) exp[z/(β(μ + ν))]   at z < −L,   (5.111)

L = β(μ + ν) ln |μ/ν|.   (5.112)
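Continuity of the piecewise profile (5.111) at z = 0 and z = −L, together with the boundary conditions, is easy to verify numerically. The sketch below uses illustrative values μ = 0.5, ν = −0.2, chosen so that μ + ν > 0 and the width (5.112) is positive:

```python
import math

def pulse_profile(z, mu, nu, beta=math.sqrt(2.0)):
    """Piecewise pulse profile (5.111); the width L is given by (5.112)."""
    L = beta * (mu + nu) * math.log(abs(mu / nu))
    s = z / (beta * (mu + nu))
    if z > 0:          # quiescent state ahead of the front
        return -(mu + nu)
    if z > -L:         # excited zone
        return mu - nu - 2.0 * mu * math.exp(s)
    return -(mu + nu) - 2.0 * mu * (1.0 + mu / nu) * math.exp(s)  # relaxation tail

mu, nu = 0.5, -0.2
L = math.sqrt(2.0) * (mu + nu) * math.log(abs(mu / nu))
v_front = pulse_profile(0.0, mu, nu)   # boundary value -(mu + nu)
v_back = pulse_profile(-L, mu, nu)     # boundary value mu + nu
```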

A typical profile is shown in Fig. 5.10b, where the dependence of the scaled width of the excited zone L/(βν) on the ratio ν/μ is shown in the inset.

Chaotic wave trains. Replacing a generic system with the phase portrait in Fig. 3.17 by a system with a zigzag phase portrait as in Fig. 5.9 has its price, as it eliminates the oscillatory approach to the stationary state in the wake of a propagating excitation pulse. Oscillatory tails affect in a nontrivial way the interaction between successive pulses, which can lead to spatio-temporal complexity of the wave train structure (Elphick et al., 1988). We return now to the system (5.104), (5.105). In the limit τ → 0, the system rewritten in the comoving frame rescaled as z → εz reads

c u′(z) + u″(z) + f(u, v) = 0,   (5.113)

ε⁻¹ c v′(z) + g(u, v) = 0.   (5.114)

Suppose the system is excitable, having a single stationary state u0, v0. Linearizing (5.113) and (5.114) near this solution leads to a cubic equation for the eigenvalues λ, which characterize the exponential tails of the pulse. In a certain parametric region, this equation has a pair of complex conjugate roots, λ = λr ± iλi, in addition to a real root λ0, with λr and λ0 having opposite signs. Under these conditions, the fixed point (u0, v0) of (5.113), (5.114), viewed as a dynamical system with the coordinate z playing the role of time, is a saddle-focus. We have seen in Sect. 4.3 that trajectories passing near this point are chaotic. This indicates that a train of pulses with oscillatory tails may form a chaotic sequence. Elphick et al. (1988) described this sequence through the interaction of well-separated pulses centered at z = zj. Because of the exponential decay toward the fixed point both ahead of and behind each pulse, it is sufficient to take into account interactions between nearest neighbors only. For a train propagating in the negative direction, λr < 0 and λ0 > 0, and each pulse is influenced by the oscillatory tail of the preceding pulse located at z = zj−1 and the exponentially weak precursor of the trailing pulse located at z = zj+1. The corrections to the propagation speed due to these interactions can be computed by applying a solvability condition to the perturbation equations. In this way, we arrive at the sequence of equations

żj = a+ e^{−λ0(zj+1 − zj)} + a− e^{λr(zj − zj−1)} cos[λi(zj − zj−1) − θ],   (5.115)



Figure 5.11. (a): The pattern map (5.117) with r = 0.9, ω = 10, γ = θ = 0. Inset: blowup near the origin. (b): An iterated pattern map

where the coefficients a± and the phase θ can be related to the parameters of the original system by computing the above-mentioned solvability condition explicitly. For a steadily propagating train with a constant speed increment c̃ relative to the comoving frame, żj = c̃, this relation can be converted into a pattern map analogous to a Poincaré map of a dynamical system. Denoting

Zj = a+ e^{−λs(zj − zj−1)},   a = −a−/a+,   r = λr/λs,   ω = λi/λs,   γ = c̃/a+,   (5.116)

we present the pattern map in the form

Zj+1 = γ + a Zj^r cos(ω ln Zj − θ) ≡ f(Zj).   (5.117)

Fixed points of this map are solutions with equal spacing of the pulses. As the parameters change, these solutions appear or annihilate in pairs (one stable and one unstable) at saddle-node bifurcations. The fixed points accumulate at Z → 0, which corresponds to infinite spacing, as shown in Fig. 5.11a. The map (5.117) cannot be iterated to obtain chaotic sequences in the same way as Poincaré maps are, since it contains nonphysical segments with f(Z) < 0 (not shown in Fig. 5.11). One can, however, obtain solutions with unequal spacing as fixed points of iterated maps; e.g., period-two sequences are fixed points of the iterated map in Fig. 5.11b. Numerical simulations of (5.117) starting from an unstable equally spaced solution showed the tendency of the wave train to break into groups of pulses separated by "defects" with very large spacing (Elphick et al., 1988).
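The accumulation of fixed points of (5.117) at Z → 0 can be seen by counting sign changes of f(Z) − Z on a logarithmic grid. In the sketch below, r and ω are taken from Fig. 5.11, γ = θ = 0, while the value of a is an arbitrary illustrative assumption:

```python
import math

def pattern_map(Z, a=2.0, r=0.9, omega=10.0, gamma=0.0, theta=0.0):
    """The pattern map (5.117); a, gamma, theta are illustrative choices."""
    return gamma + a * Z**r * math.cos(omega * math.log(Z) - theta)

# sign changes of f(Z) - Z mark fixed points; they pile up toward Z -> 0,
# i.e., toward infinite pulse spacing
grid = [10.0 ** (-6.0 + 6.0 * i / 4000) for i in range(4001)]
g = [pattern_map(Z) - Z for Z in grid]
crossings = sum(1 for u, w in zip(g, g[1:]) if u * w < 0)
```

Each oscillation of the cosine in ln Z contributes a pair of fixed points, so the count grows without bound as the grid is extended toward Z = 0.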

5.4 Symmetry-Breaking Bifurcations

5.4.1 Amplitude Equations

Stability analysis. Homogeneous stationary states (HSS) of a space-dependent system, such as the reaction-diffusion system (RDS) (5.3), can exhibit a wider variety of bifurcations than stationary states of the dynamical system (2.1). We have already encountered a symmetry-breaking bifurcation in Sect. 5.3.1. Advancing now to a general n-component RDS, we extend the linear stability analysis of Sects. 2.2.1, 2.3.1 to consider stability of an HSS u = u0 to spatially dependent perturbations presented as a combination of Fourier modes with a wave vector k:

Φ(x, t) = exp(λt + ik · x) U(λ, k).   (5.118)

As usual, we test stability by linearizing (5.3) in the vicinity of the HSS,

∂Φ/∂t = D ∇²Φ + fu(u0)Φ,   (5.119)

and testing the eigenvalues of the Jacobi matrix J(k) = fu − k²D. In an isotropic system, J depends only on the absolute value of the wave vector, the wavenumber k = |k|, but not on its direction. As the parameters of the problem are shifted, a symmetry-breaking instability leading to a stationary pattern (Turing bifurcation) takes place when the determinant of J(k) vanishes at some k = k0 ≠ 0 at which Det J(k) is minimal. The eigenvector U with the zero eigenvalue is real and independent of the direction of the wave vector. In the case when the instability takes place at a purely imaginary eigenvalue λ = iω, the bifurcation leads to a wave propagating with the velocity c = ω/k. The bifurcation point is located in the same way as the Hopf bifurcation in Sect. 2.3.1, and, also here, a wave pattern, rather than uniform oscillations, will emerge when some k ≠ 0 minimizes the applicable bifurcation condition, e.g., the trace of J(k) for a two-component system, or the condition stemming from (2.28) for a three-component system. In this case, the eigenvector U with the purely imaginary eigenvalue is complex but, as above, independent of the direction of the wave vector.

Basic forms. We can anticipate the basic form of amplitude equations at bifurcations breaking spatial homogeneity. The amplitude of a single mode (5.118) with k ≠ 0, like the amplitude of a Hopf mode in Sect. 2.3.2, is complex even when ω = 0. Therefore the amplitude equation describing its dynamics in the vicinity of the bifurcation point cannot contain quadratic terms, and its space-independent form should coincide with (2.41). We can allow, however, the amplitude to vary on a long spatial as well as temporal scale. A long-scale diffusional term is added if the gradient operator is expanded as ∇ = ∇0 + ϵ∇1. The resulting lowest-order amplitude equation should have the general form (omitting now the index at ∇1)

ȧ = χ∇²a + κ1 a + κ3 |a|²a.   (5.120)
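The Turing condition above can be probed directly for a two-component RDS: with an assumed Jacobian fu that is stable to homogeneous perturbations (negative trace, positive determinant), Det J(k) = Det(fu − k²D) dips below zero at finite k once the inhibitor diffuses sufficiently faster than the activator. All numbers below are illustrative:

```python
def det_J(k2, fu, D):
    """det(fu - k^2 D) for a 2x2 Jacobian fu and diagonal diffusivities D."""
    a11, a12, a21, a22 = fu
    d1, d2 = D
    return (a11 - k2 * d1) * (a22 - k2 * d2) - a12 * a21

fu = (1.0, -2.0, 3.0, -4.0)   # trace = -3 < 0, det = 2 > 0: stable at k = 0
D = (1.0, 30.0)               # fast-diffusing inhibitor
min_det = min(det_J(0.01 * i, fu, D) for i in range(1, 500))
```

Here min_det < 0 signals a band of unstable wavenumbers, while det_J(0, fu, D) > 0 confirms stability of the homogeneous state.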


As usual, the dot denotes here the derivative with respect to the slow time variable, which has to be extended by ϵ⁻² to match the order of magnitude of the nonlinear term at a = O(ϵ). The coefficient κ1 depends on the O(ϵ²) parametric deviations from the bifurcation point, and the coefficient χ is the effective long-scale diffusivity. This equation is actually a shorthand for two equations, for the amplitude a and its complex conjugate ā. Its coefficients are real for a Turing and complex for a wave bifurcation. The real part of κ3 must be negative to avoid escape to infinity, and can be set to −1 by rescaling the amplitude. The imaginary part of the linear growth coefficient, say iμ, is irrelevant, since it can always be removed by the transformation u → u e^{iμt}, so that the added frequency is absorbed into the frequency of the underlying oscillations. Its real part can be reduced, in the case of a supercritical bifurcation, to unity by changing the time scale. The equation is well-posed when the real part of χ is positive, and the latter can then be reduced to unity by rescaling the spatial coordinates. As a result, (5.120) is reduced in the supercritical case to the standard form

ȧ = (1 + iη)∇²a + a − (1 + iν)|a|²a,   (5.121)

containing just two parameters η, ν in the case of a wave, and none at all in the case of a Turing bifurcation. It is often advantageous to work with the polar representation a = ρe^{iθ}. After separating the real and imaginary parts, (5.121) splits into two real equations for the real amplitude (modulus) ρ and the phase θ:

ρ̇ = ∇²ρ + ρ(1 − |∇θ|² − ρ²) − η(ρ∇²θ + 2∇ρ · ∇θ),   (5.122)

θ̇ = ∇²θ + 2ρ⁻¹∇ρ · ∇θ + η(ρ⁻¹∇²ρ − |∇θ|²) − νρ².   (5.123)

When an inhomogeneous solution bifurcates supercritically, the bifurcation diagram for the real amplitude has the same basic form as in Fig. 2.4a, or in Fig. 2.2c with the two stable branches identified by shifting the phase by π. The attractor is a harmonic pattern (stationary or wave) if the real part of κ1 is positive; otherwise, the attractor is the trivial state a = 0. If the real part of χ is negative but small, the derivation has to be modified by rescaling the coordinate by ϵ^{−1/2}, to arrive at an amplitude equation on a still longer scale containing a fourth-order spatial derivative, which, assuming now the coefficients to be real and properly rescaled, should have the general form

ȧ = −∇⁴a − χ∇²a + a − |a|²a.   (5.124)

As in Sect. 2.3.1, the amplitude equation can be extended to include the fifth-order term κ5|a|⁴a if Re κ3 > 0. When the coefficients are real, (5.120) has a gradient structure and is derivable from the energy functional that decreases monotonically in the course of evolution to a stationary pattern:

F = ∫ (χ∇a · ∇ā − κ1|a|² − ½κ3|a|⁴) dx,   ȧ = −δF/δā,   (5.125)


or, in the rescaled polar form,

F = ½ ∫ (|∇ρ|² + ρ²|∇θ|² − ρ² + ½ρ⁴) dx,   ρ̇ = −δF/δρ,   θ̇ = −δF/δθ.   (5.126)

The fourth-order equation (5.124) retains the gradient structure, with the energy functional

F = ∫ (∇²a ∇²ā − χ∇a · ∇ā − κ1|a|² + ½|a|⁴) dx.   (5.127)

Figure 5.12. A 3D plot of the real amplitude ρ solving (5.128)

Model equations. Equations of this kind arise in all kinds of physical systems capable of undergoing symmetry-breaking transitions, and have also been put forward and extensively studied as model equations, commonly used in a basic rescaled form. The simplest model, (5.120) with real coefficients, all reduced to unity, is known as the Ginzburg–Landau (GL) equation, first put forward in the context of the phenomenological superconductivity theory (Ginzburg and Landau, 1950). Its most remarkable 2D stationary solution is a vortex. In the polar representation, the phase θ equals the polar angle φ, while the radial dependence of the real amplitude ρ is defined by the equation

(1/r) d/dr (r dρ/dr) + (1 − 1/r² − ρ²)ρ = 0.   (5.128)

The function ρ(r), sketched in Fig. 5.12, is linear near the origin and approaches unity asymptotically at r → ∞. The complex form (5.121), which is obtained as the amplitude equation at a wave bifurcation, is known as the complex GL equation (CGL). It has extremely rich dynamics, "the world of the complex Ginzburg–Landau equation," as the title of the review by Aranson and Kramer (2002) puts it. The fourth-order equation (5.124) with the linear part written as −(1 − ∇²)² and a real variable u (which rules out an analytical solution) is known as the Swift–Hohenberg (SH) equation (Swift and Hohenberg, 1977). It is widely used, together with its extensions including also quadratic and/or quintic terms, as a representative model of pattern dynamics, and we will use it as well in Sects. 5.6.1, 5.6.2.
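The vortex profile defined by (5.128) is easily obtained by relaxation, evolving ρt = ρ″ + ρ′/r + (1 − 1/r² − ρ²)ρ to a steady state on a finite interval with ρ(0) = 0 and ρ(R) ≈ 1 imposed at an ad hoc outer radius. The grid and time step below are rough illustrative choices:

```python
# explicit relaxation to the steady vortex profile of (5.128)
N, R = 200, 20.0
h = R / N
rho = [i * h / (1.0 + i * h) for i in range(N + 1)]  # smooth monotone initial guess
rho[0], rho[N] = 0.0, 1.0                            # rho(0) = 0, rho(R) ~ 1
dt = 0.002                                           # stable for this grid spacing
for _ in range(10000):
    new = rho[:]
    for i in range(1, N):
        r = i * h
        lap = (rho[i + 1] - 2.0 * rho[i] + rho[i - 1]) / h**2
        grad = (rho[i + 1] - rho[i - 1]) / (2.0 * h)
        new[i] = rho[i] + dt * (lap + grad / r
                                + (1.0 - 1.0 / r**2 - rho[i]**2) * rho[i])
    rho = new
```

The result is linear near the origin and approaches unity far from the core, in agreement with the sketch in Fig. 5.12.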

5.4.2 Bifurcation Expansion

Derivation of amplitude equations near a symmetry-breaking bifurcation point follows the same routine of expanding in powers of a dummy small parameter ϵ as in Sects. 2.2.4, 2.3.2. Similar to Sect. 2.3.2, we can isolate the parameters affecting dynamics only by presenting the RDS in the form

Γ ∂u/∂t = D ∇²u + f(u; p),   (5.129)


and fix the parameters affecting homogeneous equilibria, varying only the matrix of diffusivities D and the capacitance matrix Γ, expanded as D = D0 + ϵD1 + ⋯, Γ = Γ0 + ϵΓ1 + ⋯. This suffices to locate a bifurcation point where the Jacobi matrix J = Γ0⁻¹(fu − D0k²) has a zero or a purely imaginary eigenvalue for a certain value of the wavenumber k, if it exists at all for this set of static parameters. We assume that this eigenvalue is nondegenerate and that all other bands of eigenvalues λ(k) lie entirely in the complex half-plane Re λ < 0, so that it is the leading eigenvalue. The first-order solution has the form (5.118), where U is the eigenvector of fu − k²D − iωΓ with zero eigenvalue. The amplitude is expected to evolve on slow time scales tn extended by the factors ϵ⁻ⁿ; the respective derivatives will be denoted as ∂n. We begin by considering a single mode with k ≠ 0. Let the first-order function u1 = aΦ(x, t), with

Φ(x, t) = exp[i(ωt + k · x)] U(k, ω),   (5.130)

together with its complex conjugate, be the leading eigenfunctions, with the eigenvalue λ = iω, of the operator

L = D0∇0² + fu − Γ0∂0.   (5.131)

The leading eigenfunctions of the adjoint operator L† are

Φ† = exp[i(ωt + k · x)] U†(k, ω)   (5.132)

and its complex conjugate. As in Sect. 2.3.2, we adopt the normalization condition Ū†Γ0U = 1, where the overline denotes the complex conjugate. The second-order equation is

Lu2 + (D1∇0² + 2D0∇0 · ∇1 − Γ1∂0 − Γ0∂1)u1 + ½ fuu[u1, u1] = 0.   (5.133)

The solvability condition of this equation is evaluated by taking the scalar product with the adjoint eigenfunction Φ†. The contribution of the quadratic term vanishes, and the solvability condition just requires all linear terms in (5.133) containing u1 to vanish. This is done by freezing the evolution on the time scale t1, suppressing the spatial dependence of the amplitude along the wave vector k, and relegating the parametric deviations to a higher order, so that D1 = Γ1 = 0. As discussed in Sect. 2.3.2, retaining these deviations would be equivalent to sliding along a hyperplane in the parametric space tangential to the bifurcation manifold. The second-order function u2 solving (5.133) contains more terms than (2.54). In addition to a real constant term V0 and the second harmonic exp[2i(ωt + k · x)]V2 together with its complex conjugate, it has at ω ≠ 0 terms containing exp(2iωt) and exp(2ik · x), and their complex conjugates. However, these terms do not contribute to the amplitude equation to be obtained in the next order of the expansion. The vectors V0, V2 verify the respective equations

fu V0 + |a|² fuu[U, Ū] = 0,   (fu − 4k²D0 − 2iωΓ0)V2 + ½ a² fuu[U, U] = 0.   (5.134)


The relevant part of the third-order equation containing the principal harmonic is

Lu3 + (D0∇1² + D2∇0² − Γ0∂2 − Γ2∂0)u1 + fuu[ū1, V2] + fuu[u1, V0] + ½ fuuu[u1, u1, ū1] = 0.   (5.135)

The solvability condition has the form (5.120) with the coefficients

χ = Ū†D0U,   κ1 = −Ū†(k²D2 + iωΓ2)U,
κ3 = Ū†(½ fuuu[U, U, Ū] + fuu[U, V0] + fuu[Ū, V2]).   (5.136)

5.4.3 Interacting Modes

Symmetry-breaking bifurcations in more than one dimension are degenerate due to spatial symmetries. In an isotropic system, an arbitrary number of differently directed modes with the same wavenumber k ≠ 0 can be excited beyond the bifurcation point. The respective eigenvectors U, U† are identical for all of them, being dependent on k only. A combination of these modes can give a variety of distinct planforms. The competition among the modes that determines the pattern selection is governed by nonlinear interactions. The second-order equations contain, besides the arrays V0, exp[2i(ωt + ki · x)]V2 in (5.134), which are identical for all modes, also mixed quadratic terms V0,ij exp[i(ki − kj) · x], V2,ij exp[i(2ωt + (ki + kj) · x)] that verify the equations

(fu − |ki − kj|²D0)V0,ij + ai āj fuu[U, Ū] = 0,
(fu − |ki + kj|²D0 − 2iωΓ0)V2,ij + ai aj fuu[U, U] = 0.   (5.137)

The coefficients χ, κ1, as well as the self-interaction coefficients κ3,ii = κ3, in the amplitude equations obtained in the third order are the same as in (5.136), and are identical for all modes, but the cross-interaction terms are different. The evolution equations for the amplitudes ai have the general form

ȧi = κ1 ai + κ3,ii|ai|²ai + ai Σ_{j≠i} κ3,ij|aj|²,   (5.138)

κ3,ij = Ū†(fuuu[U, U, Ū] + fuu[U, V0 + V0,ij] + fuu[Ū, V2 + V2,ij]).   (5.139)

All coefficients κ3,ij should have negative real parts to prevent a runaway to infinity. We omit here the long-scale spatial dependence, which does not involve the interaction terms. Although the above derivation applies to both stationary and wave patterns, the differences are substantial, since the coefficients in the amplitude equations are real for the former and complex for the latter. The system (5.138) for stationary (Turing), but not for wave, patterns has a gradient structure and is derivable from the potential

V = −Σ_i (κ1|ai|² + ½κ3,ii|ai|⁴) − ½ Σ_{i≠j} κ3,ij|ai|²|aj|².   (5.140)


The slow dynamic equations (5.138) are recovered by differentiating with respect to the amplitudes: ȧi = −∂V/∂āi. One can use (5.140) to compare the energies of different solutions. The energy is zero for the trivial solution (all aj = 0), and should be negative for solutions corresponding to stable patterns. The amplitude equations applicable to regular patterns effectively turn the underlying PDEs into dynamical systems for the amplitudes of the constituent modes. The preferred number of modes depends on the relative strength of nonlinear suppression by self- or cross-interactions. The answer to this question is more transparent in the case of stationary (Turing) modes. Then all coefficients in (5.138) are real, and so is the energy density in (5.140). Therefore, the answer can be secured both by linear stability analysis of solutions of (5.138) and by comparing the energies of different combinations of modes. Suppose we start with a single-mode solution. Its squared real amplitude defined by (5.138) is ρ1² = κ1/|κ3,11|. The equation for another mode, viewed as an infinitesimal perturbation of the single-mode solution, is

ȧ2 = κ1 a2 (1 − κ3,12/κ3,11).   (5.141)

Clearly, the one-mode state is stable when the cross-interaction coefficient is larger in absolute value than the self-interaction coefficient. This is confirmed by comparing the energies of one- and two-mode solutions. If there are two modes with equal constant real amplitudes ρ, (5.138) is solved by ρ² = κ1/|κ3,11 + κ3,12|, and the energy density in (5.140) is κ1²/(κ3,11 + κ3,12) < 0. For a single mode, the energy density is ½κ1²/κ3,11 < 0, which is lower, provided |κ3,ij| > |κ3,ii|. In the case when cross-mode suppression is weaker, the preferred number of modes is affected by the angular dependence of V0,ij, V2,ij.
Unfortunately, analytical expressions for the coefficients κ3,ij are too involved and lack transparency even in the simplest cases, but they can be evaluated numerically when all parameters of a problem are given. They depend on the angle between the interacting modes and are typically more suppressive than the self-interactions. Take, for example, two modes forming a small angle. Then the array V0,ij will be close to V0 when the modes are almost parallel, or to V2 when they are almost antiparallel; V2,ij, respectively, will be close to either V2 or V0. Comparing (5.139) with (5.136), we see that in the former the cubic interaction term is doubled and another quadratic interaction term is added. Since all these interactions tend to be suppressive, such a mode is unlikely to be excited. There are better chances that an additional mode will be directed at a larger angle, e.g., π/2, to form a regular square pattern, but in any case the combinatorics works to enhance mutual suppression.
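The selection rule above can be checked by integrating (5.138) for two real Turing modes with illustrative coefficients (κ1 = 1, κ3,11 = −1): a stronger cross-coupling yields winner-take-all, while a weaker one yields a two-mode state with ρ² = κ1/|κ3,11 + κ3,12|:

```python
def evolve(a1, a2, k1, k_self, k_cross, dt=0.01, steps=20000):
    """Forward-Euler integration of (5.138) for two real modes."""
    for _ in range(steps):
        da1 = k1 * a1 + k_self * a1**3 + k_cross * a1 * a2**2
        da2 = k1 * a2 + k_self * a2**3 + k_cross * a2 * a1**2
        a1, a2 = a1 + dt * da1, a2 + dt * da2
    return a1, a2

b1, b2 = evolve(0.6, 0.5, 1.0, -1.0, -2.0)   # cross-suppression dominates
c1, c2 = evolve(0.6, 0.5, 1.0, -1.0, -0.5)   # self-suppression dominates
```

In the first run the initially larger mode survives alone (b1 → 1, b2 → 0); in the second, both modes settle at the equal amplitude ρ = √(2/3).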


5.4.4 Plane Waves and their Stability

Plane wave solutions. Inhomogeneous solutions of the amplitude equations in 1D are plane waves, which are conveniently expressed in the polar representation

a = ρ(q) e^{i(q·x − ωt)}.   (5.142)

The modulus ρ, the wavenumber q = |q|, and the frequency ω of a plane wave are all constant, and their values are fixed by (5.122), (5.123):

ρ = √(1 − q²),   ω = ν + (η − ν)q².   (5.143)

In the case of the CGL equation, the waves are dispersive, and the group velocity is

c = ω′(q) = 2q(η − ν).   (5.144)
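The relations (5.143), (5.144) take a line each to check; a centered finite difference of ω(q) reproduces the group velocity:

```python
import math

def plane_wave(q, eta, nu):
    """Modulus and frequency (5.143) of a plane wave of the CGL equation."""
    return math.sqrt(1.0 - q * q), nu + (eta - nu) * q * q

def group_velocity(q, eta, nu, dq=1e-6):
    """Numerical derivative of omega(q); equals 2q(eta - nu) by (5.144)."""
    return (plane_wave(q + dq, eta, nu)[1]
            - plane_wave(q - dq, eta, nu)[1]) / (2.0 * dq)
```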

Stability analysis. Plane wave solutions are subject to instabilities, which can be investigated, as usual, by adding small time- and coordinate-dependent perturbations. As in the standard case of homogeneous states, the stability condition is Re λ(k) ≤ 0 for all k. In a spatially homogeneous infinitely extended system, there is always a neutrally stable translational mode Φ(x) = u0′(x). This is a consequence of the translational symmetry, equivalent to the neutral stability to a constant phase shift. Most commonly, the most dangerous perturbation modes have vanishing wavenumbers and can be viewed as long-scale distortions of neutrally stable translational modes. Loss of stability at k ≠ 0 may cause formation of more complicated structures, e.g., period doubling when k is half the wavenumber q of the basic structure. A 2D structure would form when stability is lost to a perturbation mode with k not collinear with the wave vector of the basic structure. It is advantageous to work in the polar representation (5.122), (5.123). We take a perturbed solution in the form

ρ = ρ0 + ϱ,   θ = qx − ωt + ϑ,   (5.145)

where ρ0(q), ω are defined by (5.143) and ϱ, ϑ are infinitesimally small, and linearize (5.122), (5.123):

ϱ̇ = ∇²ϱ − 2ρ0²ϱ − 2ρ0 q·∇ϑ − η(ρ0∇²ϑ + 2q·∇ϱ),   (5.146)

ϑ̇ = ∇²ϑ + 2ρ0⁻¹ q·∇ϱ − 2νρ0 ϱ + η(ρ0⁻¹∇²ϱ − 2q·∇ϑ).   (5.147)

The eigenvalues of this system, dependent on the perturbation wave vector k, can be found in a standard way by setting ϱ, ϑ ∝ exp(λt + ik·x) to obtain the Jacobi matrix

J = [ −(k² + 2ρ0² + 2iη k·q)      ρ0(ηk² − 2i k·q)
      −2νρ0 + ρ0⁻¹(2i k·q − ηk²)   −k² − 2iη k·q ].   (5.148)


Simple stability criteria we have used so far are applicable only to real matrices. This is the case when the wave vector k is normal to the wave vector q of the plane wave, so that k·q = 0. Then the trace is always negative, and instability may set in when the determinant of J vanishes. It is computed as

Det J = 2ρ0²(1 + ην)k² + (1 + η²)k⁴.   (5.149)
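As a verification sketch, one can assemble the matrix (5.148) numerically and confirm that for transverse perturbations (k·q = 0) its determinant is real and reduces to a quartic in k with the factor ρ0² = 1 − q² in the quadratic term:

```python
def jacobi(k, q, kq, eta, nu):
    """Entries of the Jacobi matrix (5.148); kq stands for the scalar product k.q."""
    rho2 = 1.0 - q * q
    rho = rho2 ** 0.5
    j11 = -(k * k + 2.0 * rho2 + 2j * eta * kq)
    j12 = rho * (eta * k * k - 2j * kq)
    j21 = -2.0 * nu * rho + (2j * kq - eta * k * k) / rho
    j22 = -k * k - 2j * eta * kq
    return j11, j12, j21, j22

def det_J(k, q, kq, eta, nu):
    j11, j12, j21, j22 = jacobi(k, q, kq, eta, nu)
    return j11 * j22 - j12 * j21
```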



Clearly, perturbations with small k are most dangerous, and the wave is unstable at 1 + ην < 0. This instability is called the self-focusing, or Benjamin–Feir, instability. The alternative are perturbations parallel to q, which modify the basic wavelength, k·q = ±kq. Then the characteristic equation for the eigenvalues λ becomes

λ² + 2λ(ρ0² + k² ± 2iηkq) + (1 + η²)k²(k² − 4q²) + 2ρ0²[k²(1 + ην) ± 2i(η − ν)kq] = 0.   (5.150)

The expressions for the eigenvalues are too cumbersome, but in the real case η = ν = 0 the Jacobian reduces to Det J = k²(2 + k² − 6q²). It increases monotonically with the perturbation wavenumber and vanishes at k → 0, q = 1/√3. This is known as the long-wave (Eckhaus) instability. The trace is always negative, so that it is the only instability in 1D. Since a long-wave instability tends to be prevalent, we can explore it in the complex case as well. Expanding the solution of (5.150) at k ≪ 1 up to the second order, we obtain for the root vanishing at k = 0:

λ = ∓2i(η − ν)qk + [(3 + ην + 2ν²)q² − (1 + ην)] k²/ρ0².   (5.151)

The plane wave is stable at

q < √[(1 + ην)/(3 + ην + 2ν²)],   1 + ην > 0.   (5.152)

Figure 5.13. The levels of limiting values of q at the long-scale instability. The dashed line marks the limit in the real case

The levels of limiting values of q are shown in Fig. 5.13; they reduce to the real case at η = ν, but the long-scale instability now signals the bifurcation to a quasiperiodic pattern stationary in the moving frame x − ct. Unlike the real case, this instability is convective: a perturbation does not necessarily destabilize the pattern but may be "washed away" downstream, and the absolute stability condition is less restrictive (Aranson et al., 1992).

Phase dynamics. This instability can best be understood in terms of phase dynamics, and it will be useful here to extend the arguments to 2D. Assuming that


the spatial scale of the perturbations is large, the spatial derivatives are rescaled by a small parameter ϵ: ∇ → ϵ∇. The dynamics on the slow time scale, extended by the factor ϵ⁻², is viewed in the coordinate frame moving with the group velocity c = O(ϵ). Under these conditions, perturbations of the real amplitude ϱ are "slaved" to the phase perturbations and can be expressed by expanding the rescaled equation (5.146) in ϵ as

ϱ = −ϵρ0⁻¹ q·∇ϑ + ϵ²[−½ηρ0⁻¹∇²ϑ + νρ0⁻³(q·∇)²ϑ].   (5.153)

Using this in the rescaled equation (5.147), we see that the O(ϵ) terms cancel exactly in the frame propagating with the group velocity (5.144), and obtain in the leading O(ϵ²) order the phase diffusion equation in the comoving frame:

ϑ̇ = (1 + νη)∇²ϑ − [2(1 + ν²)/(1 − q²)] (q·∇)²ϑ.   (5.154)

This is an anisotropic diffusion equation, which can be rewritten in the frame aligned with the wave vector q of the plane wave to distinguish distinct longitudinal and transverse diffusivities: D = 1 + νη − (3 + νη + 2ν 2 )

q2 , 1 − q2

D⊥ = 1 + νη.

(5.155)
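The diffusivities (5.155) are easy to evaluate numerically (a sketch; the sample values η = 0.5, ν = −0.3 are arbitrary); the longitudinal diffusivity vanishes exactly at the limit (5.152):

```python
import math

def diffusivities(q, eta, nu):
    # Longitudinal and transverse phase diffusivities of (5.155)
    D_par = (1 + nu*eta - (3 + nu*eta + 2*nu**2)*q**2)/(1 - q**2)
    D_perp = 1 + nu*eta
    return D_par, D_perp

eta, nu = 0.5, -0.3                                     # arbitrary sample values
q_c = math.sqrt((1 + eta*nu)/(3 + eta*nu + 2*nu**2))    # stability limit (5.152)
assert abs(diffusivities(q_c, eta, nu)[0]) < 1e-12      # D_par vanishes there
assert diffusivities(q_c - 1e-3, eta, nu)[0] > 0        # stable below the limit
assert diffusivities(q_c + 1e-3, eta, nu)[0] < 0        # unstable above it
```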

Both phase diffusivities should be positive for stability of a plane wave, and these conditions coincide with (5.152).

Nonlinear phase equations. The long-scale expansion can be extended to obtain nonlinear equations describing phase dynamics on the background of either plane waves or homogeneous oscillations in the comoving frame. For this purpose, the phase perturbation is not assumed anymore to be small, but only changing on a long scale, while perturbations of both the modulus and the wavenumber do not exceed O(ε). The modulus has to be now expanded to the second order: ρ = ρ₀ + ερ₁ + ε²ρ₂. Then (5.122) transformed to the comoving frame and expanded in ε yields

ρ₁ = −ρ₀⁻¹ q·∇ϑ,
ρ₂ = −ρ₀⁻¹ [½ η ∇²ϑ + ½ |∇ϑ|² + ν ρ₀⁻¹ q·∇ρ₁ + (3/2) ρ₀⁻¹ ρ₁²]
   = −(1/ρ₀) [½ η ∇²ϑ + ½ |∇ϑ|² − (ν/ρ₀²)(q·∇)²ϑ + (3/(2ρ₀²))(q·∇ϑ)²].  (5.156)

Using this in (5.123) transformed to the comoving frame yields in the leading second order of the expansion the equation describing relaxation of phase inhomogeneities on the slow time scale:

ϑ̇ = D∥ ∂²ϑ/∂x² + D⊥ ∂²ϑ/∂y² + [2ν/(1 − q²) − ν − η] (∂ϑ/∂x)² + (ν − η)(∂ϑ/∂y)²,  (5.157)


where x, y are extended coordinates directed, respectively, along and across q, and the parallel and transverse diffusivities are defined by (5.155). This is recognized as the anisotropic form of the Burgers equation. The isotropic form is recovered at q = 0:

ϑ̇ = (1 + νη)∇²ϑ + (ν − η)|∇ϑ|².  (5.158)

At k = 0, the expansion can be continued near the self-focusing instability limit D⊥ = 1 + νη = 0. The long spatial scale should be shortened now from O(ε⁻¹) to O(ε⁻¹ᐟ²). To make the expansion consistent, the nonlinear term also has to be reduced to O(ε), which requires the phase deviations to be small. The right-hand side of (5.157) vanishes after setting k = 0, η = −ν⁻¹(1 − εη̄) and rescaling ϑ → εϑ. The real amplitude has to be computed now by continuing the expansion of (5.122) to the third order:

ρ = 1 − ½ ε² η ∇²ϑ − ε³ (½ |∇ϑ|² + ½ η̄ ∇²ϑ).  (5.159)

Expanding likewise (5.123) yields in the leading third order the equation describing phase dynamics on the O(ε⁻²) time scale:

ϑ̇ = η̄ ∇²ϑ + (ν + ν⁻¹)|∇ϑ|² − ½ (1 + ν⁻²)∇⁴ϑ.  (5.160)

This equation, in a rescaled form with all coefficients eliminated by rescaling the phase, coordinates, and time, is known as the Kuramoto–Sivashinsky (KS) equation, although it was first put forward based on general scaling and symmetry considerations by Nepomnyashchy (1974). Kuramoto and Tsuzuki (1976) derived it by direct long-scale small-amplitude expansion of a reaction–diffusion system in the vicinity of a spatially homogeneous periodic solution. The coefficients of the KS equation derived in this way are related to the parameters of the underlying system through averaging over the oscillation period. Sivashinsky (1977) derived it in a completely unrelated fluid-dynamical context². The parametric range where the KS equation can be obtained by rational expansion of underlying equations is very limited; nevertheless, it became very popular as a representative equation of mild spatio-temporal chaos.

5.5 Resonant Interactions

5.5.1 Triplet Resonance

Second-order equations. Mutual suppression of different modes can be counterbalanced by special interactions. In stationary (Turing), but not wave patterns, the triple product of the amplitudes a(k) would appear in the potential (5.140) if the respective wave vectors add up to zero, kᵢ + kⱼ + kₖ = 0. This condition ensures that the modes in question are in resonance. Otherwise, the product of these modes rapidly oscillates and is averaged out when the amplitude equation is derived using the multiscale expansion procedure.

² The paper had been submitted to the leading fluid-mechanical journal and rejected after a two-year-long refereeing.

For a bifurcation at a certain wavenumber k, wave vectors of resonant modes form an equilateral triangle. The additional resonant term entering the equation (5.133) for an ith mode will be then fuu[uᵢ₊₁, uᵢ₋₁], where the modes are labeled modulo 3. This leads to the solvability conditions

∂₁aᵢ = κ₁aᵢ + κ₂ ā_{i+1} ā_{i−1},   i = {0, 1, 2},  (5.161)

κ₁ = −k² U† D₁ U,   κ₂ = U† fuu[U, U].  (5.162)

Take note that U, U† are real and direction-independent. By symmetry, we can expect all amplitudes aⱼ to have the same magnitude ρ and to differ only by their phases θⱼ. Presenting them in the polar form aⱼ = ρ exp(iθⱼ) and separating the real and imaginary parts reduces the three complex equations (5.161) to two real equations, since the phases appear only in the combination Θ = Σⱼ θⱼ:

ρ̇ = κ₁ρ + κ₂ρ² cos Θ,   Θ̇ = −3κ₂ρ sin Θ,  (5.163)

where the slow time derivative is denoted by the dot. The remaining two phase degrees of freedom are neutrally stable translational modes, since shifting phases in an arbitrary way while keeping their sum constant is equivalent to shifting the entire pattern in the plane. The total phase Θ is, however, dynamically important. Clearly, it relaxes to zero when κ₂ > 0 or to π when κ₂ < 0. In either case, the quadratic term in the equation of ρ is positive, driving the real amplitude to "infinity", i.e., to large deviations making the amplitude equation no longer applicable. This is why the hexagonal pattern comprising three resonant modes always bifurcates subcritically, and why hexagonal planforms are ubiquitous in 2D pattern-forming systems.

Third-order equations. Since quadratic terms are always destabilizing, cubic terms are necessary to obtain a stable finite-amplitude solution. In order to obtain a cubic amplitude equation with balanced orders of magnitudes, the coefficient κ₂ should be assumed small, of O(ε). Generically, this requires restricting the system parameters to the vicinity of a codimension-two manifold in the parametric space. Deviations of diffusivities should be again reduced to O(ε²), and the evolution on the O(ε⁻¹) time scale frozen, as it has been done before. Continuing to the third order, we should obtain amplitude equations in the form (5.138) with an added resonant term:

ȧᵢ = κ₁aᵢ + κ₂ ā_{i+1} ā_{i−1} + κ₃,ᵢᵢ |aᵢ|² aᵢ + aᵢ Σ_{j≠i} κ₃,ᵢⱼ |aⱼ|².  (5.164)
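The subcritical triplet dynamics can be illustrated by integrating (5.161) directly (a minimal explicit-Euler sketch; the values of κ₁, κ₂, the initial phases, and the cutoff amplitude are all illustrative choices): the total phase relaxes to zero while the common real amplitude runs away.

```python
import cmath, math

# Explicit Euler integration of the triplet equations (5.161):
# da_i/dt = kappa1*a_i + kappa2*conj(a_{i+1}*a_{i-1}), indices mod 3.
kappa1, kappa2 = -0.05, 1.0
a = [0.3*cmath.exp(1j*t) for t in (0.9, 0.7, 0.4)]   # equal moduli, Theta = 2.0
dt = 1e-3
for _ in range(100000):
    da = [kappa1*a[i] + kappa2*(a[(i+1) % 3]*a[(i-1) % 3]).conjugate()
          for i in range(3)]
    a = [x + dt*dx for x, dx in zip(a, da)]
    if abs(a[0]) > 2.0:          # runaway of the real amplitude
        break
Theta = sum(cmath.phase(z) for z in a)
Theta = (Theta + math.pi) % (2*math.pi) - math.pi    # wrap to (-pi, pi]
assert abs(Theta) < 0.05         # total phase relaxed to zero (kappa2 > 0)
assert abs(a[0]) > 2.0           # amplitude ran away: subcritical bifurcation
```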

In order to compute cross-interaction coefficients κ₃,ᵢⱼ, we need to take into account, as in Sect. 5.4.2, phase-dependent nonresonant terms in the second-order equations involving different modes. Since the angles between modes are equal, all of them contain harmonics with the same wavenumber √3 k. The respective arrays V₃,ᵢ verify the equations

(fu − 3k² D₀) V₃,ᵢ + aᵢ₊₁ aᵢ₋₁ fuu[U, U] = 0.  (5.165)


These terms contribute to the third-order equation of any ith mode the following resonant terms:

aᵢ₊₁ fuu[U, V₃,ᵢ₋₁] + aᵢ₋₁ fuu[U, V₃,ᵢ₊₁].  (5.166)

The resulting additional terms in the amplitude equations are phase-independent, and result in modifying the constants κ₃,ᵢⱼ in (5.139) to

κ₃,ᵢⱼ = U† {fuuu[U, U, U] + fuu[U, V₀ + V₂ + V₃,ᵢ₊₁ + V₃,ᵢ₋₁]}.  (5.167)

All these constants are equal, and can be denoted by a single symbol, say, κ̃₃, which, similar to κ₃, should be negative and is likely to exceed the latter by its absolute value. Transforming again to the polar form, we write the amplitude equations as

ρ̇ᵢ = κ₁ρᵢ + κ₂ρᵢ₊₁ρᵢ₋₁ cos Θ + κ₃ρᵢ³ + κ̃₃ρᵢ(ρ²ᵢ₊₁ + ρ²ᵢ₋₁).  (5.168)

Multimode patterns. Similar to (5.140), this system of amplitude equations has a gradient structure. Assuming κ₂ > 0 and Θ = 0 (as we have just seen, reversing the sign makes no difference), the applicable potential is

V = −[ Σ_{i=0}² (½ κ₁ρᵢ² + ¼ κ₃ρᵢ⁴) + κ₂ ∏_{i=0}² ρᵢ + ½ κ̃₃ Σ_{i≠j} ρᵢ²ρⱼ² ].  (5.169)

This representation is most transparent, since every cross-interaction term can be associated with a triangle or rhombus formed by the respective wave vectors. It can be

This representation is most transparent, since every cross-interaction term can be associated with a triangle or rhombus formed by the respective wave vectors. It can be

Figure 5.14. Two resonant triangles and the respective dodecagonal quasicrystalline pattern


easily extended to higher dimensions where triplet interactions favor a combination of modes forming one of the regular polyhedra with triangular faces – tetrahedron, octahedron, or icosahedron (Alexander and McTague, 1978). The former two correspond to the same body-centered cubic crystalline lattice, but the latter corresponds to a quasicrystalline structure. These structures should have the lowest energy close to the bifurcation point, in the same way as the hexagonal structure is preferable in two dimensions, and likewise should exist in the near subcritical region. Since the pattern boosted by the resonant terms can be stabilized only by nonuniversal higher-order interactions, other structures can actually be chosen by the evolving system, and both experiment and numerical simulations show that the face-centered cubic structure is usually favored (Groh and Mulder, 1999). The special character of the icosahedral combination of modes is apparently responsible for the formation of quasicrystals with pentagonal symmetry (Shechtman et al., 1984), which was seen at the time as a sensational discovery contradicting the dogma of classical crystallography. An exotic possibility is a "dodecagonal" pattern formed by combining two resonant triangles turned by π/6 relative to each other (Malomed et al., 1989). A piece of such a six-mode pattern, comprising almost symmetric dodecagonal and pentagonal "flowers" but never repeating itself, is shown in Fig. 5.14. Fixing two phase relations to satisfy resonance conditions in the two resonant triangles still leaves four phase degrees of freedom. Thus, in addition to the two translational modes, there are two neutrally stable "phasons" modifying the pattern in a nontrivial way. There are no resonant interactions between the two triangles, and therefore the suppressing cubic interactions between their constituent modes are likely to render the quasicrystalline structure unstable.
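A pattern of the type shown in Fig. 5.14 can be generated by directly superposing the six modes (a sketch; grid size, extent, and the cosine superposition with zero phases are our arbitrary choices):

```python
import numpy as np

# Two resonant triangles of unit wave vectors rotated by pi/6: six modes
# whose superposition gives a dodecagonal quasicrystalline pattern.
angles = [2*np.pi*j/3 + off for off in (0.0, np.pi/6) for j in range(3)]
kvecs = np.array([(np.cos(a), np.sin(a)) for a in angles])
x = np.linspace(-40, 40, 401)
X, Y = np.meshgrid(x, x)
u = sum(np.cos(k[0]*X + k[1]*Y) for k in kvecs)
assert np.allclose(kvecs[:3].sum(axis=0), 0)   # first triangle is resonant
assert np.allclose(kvecs[3:].sum(axis=0), 0)   # and so is the second
assert abs(abs(u).max() - 6.0) < 1e-9          # central "flower" of the pattern
```

Plotting u (e.g., as a density map) reproduces the non-repeating dodecagonal "flowers".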

5.5.2 Stripes–Hexagons Competition

Possible stationary solutions of (5.168) in two dimensions are single-mode (striped) and triple-mode (hexagonal) patterns. The former obeys (5.120) with real coefficients, leading, after rescaling that sets κ₃ → −1, to the stationary amplitude ρ = √κ₁, while the stationary amplitude of the hexagonal pattern can be found after setting Θ = 0, ρⱼ = idem = ρ from the equation

ρ̇ = κ₁ρ + κ₂ρ² − νρ³ = 0,  (5.170)

where ν = 1 + 2κ̃₃. The nontrivial stationary solutions are

ρₛ = (1/2ν) [κ₂ ± √(κ₂² + 4κ₁ν)].  (5.171)

Two positive solutions exist in the subcritical region 0 > κ₁ > −¼κ₂²/ν. The lower limit is the fold bifurcation point of (5.171), which is also the marginal stability point, so that the lower solution, intermediate between the stable trivial solution and the stable upper branch, must be unstable.
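The two branches of (5.171) are easy to check numerically (a sketch; the function name and sample values κ₂ = 1, ν = 2 are ours):

```python
import math

def rho_stationary(kappa1, kappa2, nu):
    # Positive roots of kappa1 + kappa2*rho - nu*rho^2 = 0, cf. (5.170)-(5.171)
    disc = kappa2**2 + 4*kappa1*nu
    if disc < 0:
        return ()                        # below the fold: no nontrivial solutions
    r = math.sqrt(disc)
    return tuple(x for x in ((kappa2 + r)/(2*nu), (kappa2 - r)/(2*nu)) if x > 0)

kappa2, nu = 1.0, 2.0                    # arbitrary sample values
fold = -kappa2**2/(4*nu)                 # fold bifurcation point of (5.171)
assert len(rho_stationary(fold + 1e-3, kappa2, nu)) == 2  # two subcritical branches
assert len(rho_stationary(fold - 1e-3, kappa2, nu)) == 0  # none below the fold
assert len(rho_stationary(0.1, kappa2, nu)) == 1   # only the upper branch at kappa1 > 0
for rho in rho_stationary(0.1, kappa2, nu):
    assert abs(0.1 + kappa2*rho - nu*rho**2) < 1e-12      # it solves (5.170)
```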


The lower branch of solutions terminates at the subcritical bifurcation point κ₁ = 0, while the upper branch continues into the supercritical region κ₁ > 0 where it coexists with the single-mode solution. Is the supercritical resonant solution stable to decay of a pair of modes? This can be decided by returning to (5.168) and checking its stability to infinitesimal perturbations ρᵢ = ρₛ + ρ̃ᵢ; the phase variable Θ can be kept at its zero value. The rescaled form of (5.168) is

ρ̇ᵢ = κ₁ρᵢ + κ₂ρᵢ₊₁ρᵢ₋₁ − ρᵢ³ − ½(ν − 1) ρᵢ(ρ²ᵢ₊₁ + ρ²ᵢ₋₁).  (5.172)

Two modes out of three should be excited or decay together, so their amplitudes can be assumed equal and denoted as ε(t), while keeping ρ(t) as the amplitude of the remaining mode. Then (5.172) is rewritten as

ρ̇ = κ₁ρ − ρ³ + κ₂ε² − (ν − 1)ρε²,
ε̇ = κ₁ε − ε³ + κ₂ρε − ½(ν − 1)ε(ρ² + ε²).  (5.173)

It is convenient to parametrize the computation by the stationary amplitude ρₛ instead of the bifurcation parameter, expressing the latter as κ₁ = ρₛ(νρₛ − κ₂). Then, linearizing (5.173) near the stationary solution ρ = ε = ρₛ, we arrive at the Jacobi matrix

J = ( −ρₛ(κ₂ + 2ρₛ)        2[κ₂ρₛ − ρₛ²(ν − 1)]
      κ₂ρₛ − ρₛ²(ν − 1)    −ρₛ²(ν + 1) ).  (5.174)


Figure 5.15. (a): Coexistence region of single-mode and triplet solutions in the plane ν, κ1 at κ2 = 1, bounded from above by the instability of the triplet to decay, and from below, by the instability of a single mode to excitation of a resonant pair. The area where the triplet solution is absolutely stable is shaded orange, and the area where the single-mode solution is absolutely stable is shaded blue. (b): Dependence of the real amplitude of the striped and hexagonal patterns on the bifurcation parameter κ1 at κ2 = 1, ν = 2. The solutions are shown in blue when stable and orange when unstable. The number of modes is shown at the respective curves


The trace of this matrix is negative, whereas the determinant is positive, indicating stability, at

κ₂/(2ν) < ρₛ < 2κ₂/(ν − 3),   or   −¼κ₂²/ν < κ₁ < 2κ₂²(ν + 3)/(ν − 3)².  (5.175)

The lower limit is trivial, as it coincides with the saddle-node bifurcation point in the subcritical region, and the upper limit indicates the instability of the resonant solution to decay of two modes. The resonant term facilitates excitation of additional modes, so that stability of the single-mode solution is no longer defined by (5.141). The Jacobi matrix obtained by linearizing the equation of ε(t) in (5.173) near the single-mode solution ρ = √κ₁, ε = 0 leads to the stability condition

κ₁ > 4κ₂²/(ν − 3)².  (5.176)
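The window (5.175) can be verified directly against the Jacobi matrix (5.174) (a sketch; the choice ν = 5, κ₂ = 1 is arbitrary): the determinant changes sign exactly at the limiting amplitudes.

```python
def jacobian(rho_s, kappa2, nu):
    # Jacobi matrix (5.174) at the hexagonal state rho = eps = rho_s
    off = kappa2*rho_s - rho_s**2*(nu - 1)
    return ((-rho_s*(kappa2 + 2*rho_s), 2*off),
            (off, -rho_s**2*(nu + 1)))

def det(J):
    return J[0][0]*J[1][1] - J[0][1]*J[1][0]

kappa2, nu = 1.0, 5.0                    # arbitrary sample values with nu > 3
rho_hi = 2*kappa2/(nu - 3)               # upper limit in (5.175)
rho_lo = kappa2/(2*nu)                   # fold point, lower limit in (5.175)
assert det(jacobian(rho_hi - 0.01, kappa2, nu)) > 0   # stable inside the window
assert det(jacobian(rho_hi + 0.01, kappa2, nu)) < 0   # decay instability above
assert det(jacobian(rho_lo + 0.01, kappa2, nu)) > 0
assert det(jacobian(rho_lo - 0.01, kappa2, nu)) < 0   # lower branch unstable
```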

Both limits (5.175) and (5.176) are proportional to κ₂², and they do not coincide. The two proportionality constants are drawn as functions of ν in Fig. 5.15a, showing the shaded area where the single-mode and triplet solutions coexist. This is possible only above the singular limit ν = 3 corresponding to κ̃₃ = 1, which means that coexistence is possible only when cross-interaction of modes is stronger than cubic self-interaction. As the bifurcation parameter κ₁ increases, the energy of the striped pattern, which is metastable at κ₁ just above the excitation limit, becomes at some stage lower than that of the hexagonal pattern, which turns from absolutely stable to metastable at this point before reaching the instability limit. Both solutions are shown in Fig. 5.15b.

5.5.3 Standing Waves

Resonant amplitude equations for waves. Triplet interactions are forbidden in wave patterns formed by modes with both k and ω differing from zero. Indeed, the quadratic resonance condition would now require, besides the vanishing sum of kⱼ, that the sum of ωⱼ also vanish for the waves forming a resonant triangle. Clearly, this is impossible when all ωⱼ are equal. However, another type of resonance involves in this case pairs of waves propagating in the opposite directions. Let a₁±, a₂± be the amplitudes of the four waves with identical ω and k propagating in the opposite directions along, respectively, the x₁ and x₂ axes. The mixed quadratic terms in (5.137) have now four varieties, together with their complex conjugates:

V₀,ᵢᵢ exp[2ikᵢxᵢ],   V₀,ᵢⱼ exp[i(kᵢxᵢ ± kⱼxⱼ)],
V₂,ᵢᵢ exp[2iωt],    V₂,ᵢⱼ exp[i(2ωt + kᵢxᵢ ± kⱼxⱼ)].  (5.177)

The expressions on the left involve waves propagating in the opposite, and those at


the right, in cross directions. All of them are computed by solving the equations

(fu − 4k²D₀) V₀,ᵢᵢ + aᵢ⁺aᵢ⁻ fuu[U, U] = 0,
(fu − 2iωI) V₂,ᵢᵢ + aᵢ⁺aᵢ⁻ fuu[U, U] = 0,  (5.178)

(fu − 2k²D₀) V₀,ᵢⱼ + aᵢ⁺aⱼ⁻ fuu[U, U] = 0,
(fu − 2k²D₀ − 2iωI) V₂,ᵢⱼ + aᵢaⱼ fuu[U, U] = 0.  (5.179)

Avoiding the plethora of indices, the amplitude equations are written as

ȧᵢ± = aᵢ± [κ − ν₊|aᵢ±|² − ν₋|aᵢ∓|² − μ(|aⱼ⁺|² + |aⱼ⁻|²)] + γ āᵢ∓ aⱼ⁺ aⱼ⁻.  (5.180)

The coefficients κ, ν₊, ν₋ are computed as, respectively, κ₁, κ₃,ᵢᵢ, κ₃,ᵢⱼ in (5.136), with the arrays V₀,ᵢⱼ, V₂,ᵢⱼ defined by (5.178), and the coefficient μ, as κ₃,ᵢⱼ as well, but with V₀,ᵢⱼ, V₂,ᵢⱼ defined by (5.137). The signs of all coefficients at the cubic terms are reversed, as we expect them to be stabilizing. The new coefficient at the resonant term is given by

γ = U† { fuuu[U, U, U] + fuu[U, V₀,ᵢⱼ + V₂,ᵢⱼ] + fuu[U, V₂,ᵢᵢ] }.  (5.181)

It is easy to see that both wave vectors and frequencies add up to zero along the rectangle formed by the four modes that correspond to the last term. While all other terms are phase-independent, the resonant term depends on the combination of phases Θ = θ₁⁺ + θ₁⁻ − θ₂⁺ − θ₂⁻ (Pismen, 1986; Hoyle, 1994). All coefficients in (5.181) are complex, so that this equation system does not have a gradient structure.

Standing wave patterns. A single standing wave is unstable when ν₋ > ν₊, which is likely, as we have already observed by comparing (5.139) with (5.136) in Sect. 5.4.2. Only resonant interactions are able to stabilize multimode patterns. Similar to Sect. 5.5.1, we expect them to impede decay of standing wave patterns into single propagating waves. Further on, we concentrate therefore on standing wave solutions with equal real amplitudes of the counter-propagating waves ρⱼ = |aⱼ±|. After transforming to the polar form, the real part of (5.180) takes the form

ρ̇₁ = ρ₁ [κr − νrρ₁² − 2μrρ₂² + Re(γe⁻ⁱΘ) ρ₂²],
ρ̇₂ = ρ₂ [κr − νrρ₂² − 2μrρ₁² + Re(γeⁱΘ) ρ₁²],  (5.182)

where ν = ν₊ + ν₋ and the complex coefficients are presented as ν = νr + iνi, etc. These equations can be rearranged by transforming to the new variables p = ρ₁² + ρ₂², q = ρ₁² − ρ₂²:

ṗ = 2κr p − νr(q² + p²) − (2μr − γr cos Θ)(p² − q²),
q̇ = 2κr q − 2νr pq − γi (p² − q²) sin Θ.  (5.183)


Figure 5.16. Oscillations at γ = 0.2 + i, κ = ν = μ = 1. The blue, green, and orange curves show, respectively, the dependence of p, q, and Θ/π on time.

The imaginary part of the polar form of (5.180) gives the equation of Θ:

Θ̇ = −2γr p sin Θ + 2q (2μi − νi + γi cos Θ).  (5.184)

Two easily computable stationary solutions of (5.183), (5.184) are synphase (Θ = 0) and antiphase (Θ = π) locked states, which exist in the supercritical region κr > 0:

q = 0,   cos Θ = ±1,   p = 2κr/(νr + 2μr ∓ γr).  (5.185)

Linearizing (5.183), (5.184) in the vicinity of these solutions, we obtain the Jacobi matrix

J = ( −2κr   0   0
      0   κr[2 − 4νr/(νr + 2μr ∓ γr)]   ∓4κr²γi/(νr + 2μr ∓ γr)²
      0   2(2μi − νi ± γi)   ∓4κrγr/(νr + 2μr ∓ γr) ).  (5.186)

Stability of the stationary solution in the supercritical region κr > 0 depends on the sign of the determinant and trace of the 2 × 2 matrix in the lower right. The trace depends only on real parts of the parameters, and oscillations arise at

2κr(2μr − νr ∓ 3γr)/(2μr + νr ∓ γr) > 0.  (5.187)

The determinant depends also on imaginary parts of the parameters, and a monotonic instability takes place at

∓γr(2μr − νr ∓ γr) ± γi(2μi − νi ± γi) = |γ|² ∓ Re[γ(2μ − ν)] < 0.  (5.188)

This means that stability of the two stationary states to monotonic perturbations is exclusive: one should be stable, and the other one, unstable.
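The Hopf condition (5.187) can be cross-checked against a finite-difference Jacobian of (5.183)–(5.184) at the synphase state (5.185), taking κ = ν = μ = 1 real and γ = 0.2 + i as in Fig. 5.16 (a Python sketch; the step h and tolerances are our choices):

```python
import math

# Parameters of Fig. 5.16: gamma = 0.2 + i, kappa = nu = mu = 1 (real)
kr, nur, nui, mur, mui = 1.0, 1.0, 0.0, 1.0, 0.0
gr, gi = 0.2, 1.0

def rhs(p, q, Th):
    # Right-hand sides of (5.183) and (5.184)
    pdot = 2*kr*p - nur*(q**2 + p**2) - (2*mur - gr*math.cos(Th))*(p**2 - q**2)
    qdot = 2*kr*q - 2*nur*p*q - gi*(p**2 - q**2)*math.sin(Th)
    Thdot = -2*gr*p*math.sin(Th) + 2*q*(2*mui - nui + gi*math.cos(Th))
    return pdot, qdot, Thdot

p0 = 2*kr/(nur + 2*mur - gr)             # synphase state (5.185), Theta = 0
assert all(abs(v) < 1e-12 for v in rhs(p0, 0.0, 0.0))   # it is stationary
h = 1e-6                                 # finite-difference step (ours)
dq_dq = (rhs(p0, h, 0.0)[1] - rhs(p0, -h, 0.0)[1])/(2*h)
dTh_dTh = (rhs(p0, 0.0, h)[2] - rhs(p0, 0.0, -h)[2])/(2*h)
predicted = 2*kr*(2*mur - nur - 3*gr)/(2*mur + nur - gr)  # trace in (5.187)
assert abs(dq_dq + dTh_dTh - predicted) < 1e-5
assert predicted > 0                     # oscillatory instability, as in Fig. 5.16
```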


A stable periodic orbit emerging beyond a Hopf bifurcation corresponds to a “twinkling” square wave pattern with the amplitudes of both waves oscillating on the long time scale of (5.180). A typical oscillation cycle is shown in Fig. 5.16. A variety of more elaborate patterns, sometimes dynamic, may arise at higher codimension bifurcations when two stationary (Turing) modes with different wavelength, or a stationary and symmetry-breaking mode, or a Turing and a wave mode emerge simultaneously.

5.6 Nonuniform Patterns

5.6.1 Propagation of a Stationary Pattern

Velocity selection at the leading edge. When a stable pattern advances into an unstable homogeneous state, the problem of selection of the propagation speed considered in Sect. 5.2.2 is augmented by the problem of wavelength selection. As a rule, there is a one-parametric family of patterned states parametrized by wavelength, which expands into a two-parametric family of propagating solutions. With all this, we expect that a unique propagation speed and a unique wavelength will be selected dynamically. The theory of Kolmogorov et al. (1937) was generalized to pattern-forming systems by Dee and Langer (1983) and Ben Jacob et al. (1985). The simplest nontrivial example of selection of a patterned state is given by the Swift–Hohenberg (SH) model briefly mentioned in Sect. 5.4.1, which we write here in the 1D form

u̇ = −(1 + ∂ₓ²)²u + μu − u³.  (5.189)

The trivial state u = 0 is unstable at μ > 0. We assume, as in Sect. 5.2.2, that selection is determined by the leading edge of the propagating pattern and use the dispersion relation computed analogous to (5.43):

ω(k) = i[μ − (1 − k²)²].  (5.190)

Using again the method of steepest descent in the complex k-plane to compute the integral (5.44), we come to the requirement that the function

Ω(k) = ω(k) − ck = i[μ − (1 − k²)²] − ck  (5.191)

should be minimal, so that (5.46) is retained, implying that the propagation speed coincides with the group velocity, the only distinction from Sect. 5.2.2 being a different expression for ω(k):

c = c̃ = ω′(k) = 4ik(1 − k²).  (5.192)

Using this expression, we find that the condition c̃ᵢ = 0, necessary to keep the perturbation stationary in time, has a nontrivial solution

kr = √(1 + 3kᵢ²).  (5.193)


The indices r and i denote here, as in Sect. 5.2.2, the real and imaginary parts, respectively. The analogy with propagation of a stationary state, where the real part of k must vanish, ends here, and the choice of the wavenumber has to be determined from the stability condition of the moving edge.

Wavelength selection. It is advantageous for this purpose to express k as a dynamic variable allowed to change slowly in space and time by setting u = e^{iθ(x,t)} and defining k = ∂θ/∂x as the derivative of the complex phase θ(x, t) (van Saarloos, 1988). The evolution equation of θ, rewritten in a frame z = x − ct propagating with the velocity c, is

θ̇ = ck − ω(k) − 2(1 − 3k²) k′(z) + (higher-order terms),  (5.194)

where we have used (5.190) and omitted terms containing higher spatial derivatives, as well as the nonlinear term, which is exponentially small at the leading edge of the propagating pattern. Since the amplitude of the propagating solution is monotonically decreasing in the direction of propagation, the imaginary part of the phase ϑ = θᵢ can be taken as an independent variable instead of z. The transformation to the new variable is carried out using the relations

(∂z)ₜ = kᵢ (∂ϑ)ₜ,   (∂ₜ)z = (∂ₜ)ϑ + (ϑₜ)z (∂ϑ)ₜ,  (5.195)

where (∂z)ₜ denotes differentiation with respect to z at t = const, etc. The imaginary part of the right-hand side of (5.194) should be substituted for (ϑₜ)z. Then differentiating (5.194) with respect to ϑ yields

k̇ = [c − c̃(k)] kᵢ ∂k/∂ϑ + (higher-order terms).  (5.196)

When small terms, containing higher-order derivatives with respect to ϑ and powers of kϑ, are omitted, we are left with what looks like a first-order quasilinear PDE, but is in fact a system of equations for kᵢ, kr. However, when the stability condition c̃ᵢ = 0 holds, the bracketed combination, which we denote as C(k), is real, and can be written as a function of kᵢ only. It follows from (5.196) that k is conserved along a characteristic ϑₜ = −kᵢC(kᵢ). Since kᵢ > 0, and C must be positive for a stable profile, the characteristics are directed toward decreasing ϑ, i.e., to larger amplitudes. This supports the notion that the leading edge plays the premier role in shaping the emerging pattern: the causal propagation along the characteristics is directed toward the advancing pattern. When the function C(kᵢ) is monotonically decreasing, steeper parts of the profile propagate back slower, and therefore dominate the leading edge of the front. Under these conditions, the front steepens in the course of evolution, and approaches the slope kᵢ corresponding to the marginal value C(kᵢ) = 0, which corresponds to the condition c = ωᵢ/kᵢ that follows from the vanishing of the imaginary part of Ω(k). The resulting relation

c = ωᵢ/kᵢ = μ/kᵢ + 4kᵢ(1 + 2kᵢ²) = 8(kᵢ + 4kᵢ³) = c̃r(k)  (5.197)



Figure 5.17. Dependence of the propagation velocity (a), of the wavenumber at the leading edge kr (b, blue curve), and of the selected wavenumber q (b, orange curve) on the supercriticality parameter μ

is resolved by

kᵢ = (1/(2√3)) [√(1 + 6μ) − 1]^{1/2}.  (5.198)

This defines the slope of the marginally stable amplitude profile. The propagation speed (5.192) is now evaluated as

c = c̃ = 8kᵢ(1 + 4kᵢ²) = (4/(3√3)) [2 + √(1 + 6μ)] [√(1 + 6μ) − 1]^{1/2}.  (5.199)

The wavenumber kr in (5.193), selected at the leading edge and not influenced by nonlinear interactions, cannot be identical to the wavenumber of the full-grown pattern formed behind the front. The latter can be computed assuming that the number of wave crests does not change in the wake of the propagating front. The frequency of the emerging wave is determined by ωr that is Doppler-shifted in the moving frame to Ωr = ωr − ckr, and the wave crests are convected into the interior of the advancing pattern with the speed c. The actual wavenumber of the stationary pattern left behind the propagating front must therefore be

q = −Ωr/c = kr − ωr/c = 3(3 + √(1 + 6μ))^{3/2} / [8(2 + √(1 + 6μ))].  (5.200)

It is remarkable that the selected wavenumber coincides with the optimal value minimizing the energy functional of the SH model only in the limit μ → 0. The dependence of kr, q, and c on μ is shown in Fig. 5.17. When the bifurcation point μ = 0 is approached, these values vary as

kᵢ ≈ ½√μ,   kr ≈ 1 + (3/8)μ,   q ≈ 1 + (1/8)μ,   c ≈ 4√μ.  (5.201)
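The selection formulas are easy to verify numerically (a sketch; the function name and the sample values of μ are ours):

```python
import math

def selected(mu):
    # Marginal-stability selection for the SH front, (5.198)-(5.200)
    s = math.sqrt(1 + 6*mu)
    ki = math.sqrt(s - 1)/(2*math.sqrt(3))                # (5.198)
    c = 4/(3*math.sqrt(3))*(2 + s)*math.sqrt(s - 1)       # (5.199)
    q = 3*(3 + s)**1.5/(8*(2 + s))                        # (5.200)
    return ki, c, q

ki, c, q = selected(0.5)
# ki solves the marginal condition (5.197), and c is the group velocity there
assert abs(0.5/ki + 4*ki*(1 + 2*ki**2) - 8*(ki + 4*ki**3)) < 1e-12
assert abs(c - 8*(ki + 4*ki**3)) < 1e-12
for mu in (1e-3, 1e-4):                  # small-mu asymptotics (5.201)
    ki, c, q = selected(mu)
    assert abs(c - 4*math.sqrt(mu)) < 10*mu
    assert abs(q - (1 + mu/8)) < mu
```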


These predictions were confirmed with high precision by direct computations (Ben Jacob et al., 1985). The above procedure is also applicable to other pattern-forming systems; it is enough to replace the dispersion relation ω(k). However, in spite of the success achieved in this and other simple cases, the wavelength selection at the leading edge may have only limited influence on the properties of the full-grown pattern, since nonlinear interactions behind the front can modify its geometry and affect the propagation speed to the extent that the selected pattern may even turn out to be unstable (van Saarloos, 2003).
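A minimal pseudospectral simulation of (5.189) illustrates such a direct computation, although it measures the wavenumber selected in the bulk from random initial conditions rather than behind a propagating front (a sketch; all numerical parameters — domain, resolution, time step, seed — are our arbitrary choices):

```python
import numpy as np

# Semi-implicit pseudospectral integration of the 1D SH equation (5.189)
rng = np.random.default_rng(0)
N, L, mu, dt = 256, 32*np.pi, 0.5, 0.05
k = 2*np.pi*np.fft.rfftfreq(N, d=L/N)
lin = mu - (1 - k**2)**2                  # linear growth rate, cf. (5.190)
u = 0.01*rng.standard_normal(N)           # small random seed
for _ in range(2000):                     # integrate to t = 100
    u_hat = (np.fft.rfft(u) + dt*np.fft.rfft(-u**3))/(1 - dt*lin)
    u = np.fft.irfft(u_hat, n=N)
spec = abs(np.fft.rfft(u))
k_peak = k[1:][spec[1:].argmax()]         # dominant wavenumber of the pattern
assert 0.7 < k_peak < 1.3                 # close to the critical value k = 1
assert 0.5 < abs(u).max() < 1.2           # saturated amplitude ~ sqrt(4*mu/3)
```

The linear term is treated implicitly, so the strongly damped high wavenumbers do not restrict the time step.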

5.6.2 Self-Induced Pinning

Pinned states. In the case when a stable homogeneous solution coexists with a stable periodic pattern, Pomeau (1986) has described the mechanism of robust existence of stable stationary fronts between the two states, in spite of one of them being absolutely stable, and the other one, metastable. The motion of this front is affected by the discrete structure of the pattern, which causes self-induced pinning hindering the retreat of a metastable state. The two possible depinning transitions, corresponding to "crystallization" or "melting" of the pattern, are shown schematically by thick lines in Fig. 5.18. To the right of the crystallization threshold C, the pattern advances by a periodic nucleation process which creates new elementary cells at the interface, while to the left of the melting limit M, the pattern recedes as elementary cells at the interface are destroyed. The minimal number of cells in a sustainable stationary structure increases as the melting threshold is approached; the corresponding melting limits, starting from the single-cell melting limit 1-M, are shown schematically by thin vertical lines in Fig. 5.18. The respective crystallization thresholds for clusters of finite size lie to the right of C, approaching this point as the cluster size increases. A variety of metastable stationary structures including different numbers of cells can be observed within the interval between the melting transition M and the single-cell spreading limit 1-C lying to the right of the crystallization threshold. We have already noticed in Sects. 5.2.1, 5.2.2 the correspondence between spatial structures and trajectories of a dynamical system in the simplest case of a second-order equation. This correspondence can be followed to higher orders, but we have seen, when advancing from Ch. 3 to Ch. 4, that dynamics becomes drastically more complicated as the

Figure 5.18. A scheme of the depinning transitions showing the crystallization (C) and melting (M) thresholds for an infinite cluster, as well as the corresponding limits for clusters of different sizes, terminating in single-cell limits 1-C, 1-M


number of degrees of freedom increases. The SH equation, a model pattern-forming system, is of the fourth order, and a panoply of solutions may emerge already in 1D. We can associate a regular pattern with a periodic orbit, a symmetric patterned inclusion with a homoclinic orbit, an asymmetric inclusion between two different HSS with a heteroclinic orbit, and a pinned front of a semi-infinite cluster with a heteroclinic connection between a fixed point and a periodic orbit. The structure of homoclinic or heteroclinic connections is now not as simple as in second-order systems. As mentioned in Sect. 4.1.2, stable and unstable trajectories near hyperbolic points of higher-order dynamical systems behave in a highly irregular way, leading to an infinite number of intersections. In the context of 1D patterns, this is translated to snaking trajectories that correspond to pinned periodic inclusions (Coullet et al., 2000). The simplest system suitable for the study of pinned structures is the 1D SH equation (5.189). A small-amplitude striped pattern with the wavenumber k = 1 first appears at the symmetry-breaking bifurcation point μ = 0. As the parameter μ further increases, the available wavenumber band widens, until it reaches the limiting value k = 0. This signals the appearance of a pair of nontrivial uniform states, u = ±√(μ − 1). The two symmetric states are stable to infinitesimal perturbations at μ > 3/2. Since the SH equation is a gradient system, one can use the expression for the energy functional

F = ∫ [ ½((1 + ∂ₓ²)u)² − ½μu² + ¼u⁴ ] dx  (5.202)

to compute the energy of stationary solutions obtained in numerical computations, and characterize them as either absolutely stable or metastable states. The energy of the uniform state is higher than that of the regular pattern with the optimal wavenumber at μ < μc ≈ 6.287.

Stepwise advance of a pattern.
The envelope of a pattern would advance or retreat, respectively, below or above this critical value, as in Sect. 5.2.1. However, due to the discrete structure of the pattern, creation or destruction of a cell at the moving interface requires overcoming a certain energy barrier. Therefore, in spite of the difference between the energies of the uniform and patterned state, the boundary between them remains immobile at both sides of μ = μc , and pinned fronts, as well as multiple localized states, remain stable to infinitesimal perturbations. Pinned structures may include a single-cell “soliton” immersed in a uniform state, a semi-infinite pattern coexisting with a uniform state, or a finite patterned inclusion, sandwiched between semi-infinite domains occupied by either the same uniform state or two alternative states of different signs. The general scheme of crystallization and melting thresholds is the same as in Fig. 5.18, with μ increasing from the right to the left. Numerical computation of the SH model (Aranson et al., 2000) shows that the depinning transition for a single cell (point 1-C) occurs at μ ≈ 1.74. The depinning threshold increases with the size of the patterned cluster, rapidly converging to the limiting crystallization threshold for the semi-infinite pattern μ = μ∞ ≈ 1.7574 (point C).
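The crossover value μc can be bracketed by a crude one-harmonic estimate of (5.202), comparing the uniform state with u = A cos x at the optimal amplitude A² = 4μ/3; this estimate gives μ = 3 + √6 ≈ 5.45, underestimating the numerically exact μc ≈ 6.287 (a sketch; both closed-form energy densities are derived here, not taken from the text):

```python
import math

def energy_uniform(mu):
    # Density of (5.202) for u = sqrt(mu - 1), mu > 1: here (1 + d_xx)u = u
    return -0.25*(mu - 1)**2

def energy_stripes(mu):
    # Period-averaged density for u = A*cos(x) with the optimal A^2 = 4*mu/3;
    # (1 + d_xx)u = 0, <cos^2> = 1/2, <cos^4> = 3/8
    return -mu**2/6

mu_cross = 3 + math.sqrt(6)               # one-harmonic estimate of mu_c
assert abs(energy_uniform(mu_cross) - energy_stripes(mu_cross)) < 1e-12
assert energy_uniform(5.0) > energy_stripes(5.0)   # pattern lower in energy
assert energy_uniform(6.0) < energy_stripes(6.0)   # uniform lower (in this estimate)
```

The single-harmonic ansatz neglects higher harmonics of the stationary pattern, which lower its energy further and shift the true crossover upward.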

CHAPTER 5. DYNAMICAL SYSTEMS IN SPACE


Self-induced pinning that impedes propagation of a pattern into a metastable uniform state (crystallization) or the reverse melting process can be attributed to the oscillatory character of the asymptotic perturbations of the uniform state. Front propagation takes the form of periodic nucleation events, well separated in time, of new “atoms” of the “crystalline” state at the front. This process resembles crystallization of equilibrium solids, with the important distinction that the new “atoms” are created directly from the metastable “vacuum” state. Unlike atoms, new cells on the edge of a pattern nucleate gradually, as a perturbation grows in time, while another cell is nucleated in its wake, as shown in Fig. 5.19a. Between successive nucleation events, the solution remains close to a stationary semi-infinite pattern at μ = μ∞. The time between consecutive nucleation events diverges as μ approaches the pinning threshold. The average front speed can be approximated by the relation c = 2.292√(μ∞ − μ) (Aranson et al., 2000). Near the pinning threshold of the semi-infinite pattern, such as that shown in Fig. 5.19b, Aranson et al. (2000) approximated the front dynamics at μ = μ∞ + ε
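The nucleation dynamics described above is straightforward to reproduce numerically. The sketch below (our own, assuming NumPy is available) advances the 1D SH equation u̇ = −(1 + ∂x²)²u + μu − u³ with a standard semi-implicit pseudo-spectral scheme; seeded with small noise, it saturates into a striped pattern, and starting instead from a half-filled domain at μ slightly below μ∞ would exhibit the stepwise front advance:

```python
import numpy as np

def sh_step(u, mu, dt, length):
    """One semi-implicit pseudo-spectral step of the 1D SH equation
    u_t = -(1 + d_xx)^2 u + mu*u - u^3 on a periodic domain:
    the linear part is treated implicitly in Fourier space,
    the cubic term explicitly."""
    n = u.size
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=length / n)
    lin = -(1.0 - k**2)**2 + mu      # Fourier symbol of the linear operator
    u_hat = np.fft.fft(u) - dt * np.fft.fft(u**3)
    return np.real(np.fft.ifft(u_hat / (1.0 - dt * lin)))

# demo: small noise at mu = 0.5 saturates into a striped pattern
rng = np.random.default_rng(0)
n, length, mu, dt = 256, 32.0 * np.pi, 0.5, 0.05
u = 1e-3 * rng.standard_normal(n)
for _ in range(4000):
    u = sh_step(u, mu, dt, length)
assert np.all(np.isfinite(u)) and 0.1 < np.max(np.abs(u)) < 2.0
```

The implicit treatment of the stiff operator −(1 + ∂x²)² keeps the scheme stable at moderate time steps; only the cubic term limits dt.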

Figure 5.19. (a): Space-time plot of a large cellular cluster expanding into the stable uniform phase at μ = 1.757 (only a part of the cluster near the advancing boundary is shown). (b): A semi-infinite pattern coexisting with a uniform state (solid line) and the perturbation mode φ(x) (dashed-dotted line). Inset: The stable (solid line) and unstable (dashed line) eigenvalues as functions of μ. Reprinted from Aranson et al. (2000). Copyright by the American Physical Society.

5.6. NONUNIFORM PATTERNS


analytically by presenting the solution in the form

u(x, t) = u0(x) + |ε|^{1/2} u1(x, t),  (5.203)

where u0(x) is the stationary front solution at μ = μ∞ and u1(x, t) is a dynamic correction. Plugging this ansatz into the SH equation (5.189) and expanding to the lowest order in ε, we obtain

u̇1 = [−(1 + ∂x²)² + μ∞ − 3u0²]u1 − 3|ε|^{1/2} u0 u1² − sign(ε) u0.  (5.204)

At ε > 0, the immobile front solution u0(x) is linearly stable. Numerical stability analysis shows that there exists an exponentially decaying mode u1 = φ(x)e^{λt} localized at the front, with a negative eigenvalue λ− which approaches zero as |ε|^{1/2}. In addition, there is also an unstable mode with a positive eigenvalue, which also approaches zero as |ε|^{1/2}, as shown in the inset of Fig. 5.19b. At the pinning threshold, these two modes collide and disappear via a saddle-node bifurcation. Both the stationary front solution u0(x) and the stable mode φ(x) are shown in Fig. 5.19b. The analysis is also applicable at μ < μ∞, when the front becomes nonstationary but the solution remains close to the stationary front solution u0(x) at all times, except for short intervals when a new cell nucleates.

It should be noted that the pinning phenomenon is restricted to a single spatial dimension, where the analogy between spatial patterns and trajectories of a dynamical system applies. The front of a pinned striped pattern in 2D, though stable to infinitesimal perturbations, is unstable to transverse distortions, which grow rather than decay beyond some critical size. The stripes depin through the zigzag instability at a wavelength exceeding that of the stripe pattern, so that the striped pattern evolves into a labyrinthine maze (Hagberg et al., 2006).
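The eigenvalue computation behind Fig. 5.19b can be imitated by discretizing the linearized operator −(1 + ∂x²)² + μ∞ − 3u0² on a periodic grid. As a sanity check (our own sketch, assuming NumPy), taking u0 to be the uniform state reproduces the analytic leading eigenvalue 3 − 2μ:

```python
import numpy as np

def sh_linearization(u0, mu, length):
    """Matrix of the linearized operator -(1 + d_xx)^2 + mu - 3*u0(x)^2
    (cf. the linear part of (5.204)) on a periodic grid,
    built with a spectral second derivative."""
    n = u0.size
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=length / n)
    F = np.fft.fft(np.eye(n), axis=0)        # matrix form of the DFT
    Finv = np.fft.ifft(np.eye(n), axis=0)    # and of its inverse
    D2 = np.real(Finv @ (np.diag(-k**2) @ F))
    I = np.eye(n)
    return -(I + D2) @ (I + D2) + mu * I - 3.0 * np.diag(u0**2)

# sanity check: about the uniform state u0 = sqrt(mu - 1) the leading
# eigenvalue must equal the maximum 3 - 2*mu of the Fourier symbol
mu, n, length = 2.0, 64, 8.0 * np.pi
u0 = np.sqrt(mu - 1.0) * np.ones(n)
lam = np.linalg.eigvals(sh_linearization(u0, mu, length))
assert abs(np.max(lam.real) - (3.0 - 2.0 * mu)) < 1e-6
```

Substituting a numerically computed front profile for u0 would yield the pair of near-zero eigenvalues discussed above.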

5.6.3 Propagating Wave Pattern

When a plane wave propagates into the unstable trivial state u = 0, the propagation speed is selected according to the same marginal stability criterion as in Sect. 5.2.2, so that (5.46) remains applicable. The representative example is the amplitude equation at the wave bifurcation, known also as the CGL equation. We will work with its rescaled form (5.121) reduced to 1D:

u̇ = (1 + iη)u″(x) + u − (1 + iν)|u|²u.  (5.205)

The nonlinear term is irrelevant at the leading edge of the propagating pattern. Linearizing (5.205) in the vicinity of the trivial state leads to the dispersion relation

ω(k) = i[1 − (1 + iη)k²].  (5.206)

Accordingly, the complex group velocity is

c̃ = ω′(k) = −2i(1 + iη)k = 2[ki + ηkr + i(ηki − kr)].  (5.207)


Figure 5.20. The function q(η, ν) (orange-shaded surface) shown together with the blue-shaded surfaces representing the selected wavenumber at the leading edge kr(η) (a) and the upper stability limit for a plane wave (b). The semitransparent shading allows lower-lying segments to be visible.

The value of the wavenumber verifying the stability condition c̃i = 0 is kr = ηki. It is used in the condition of equality of the phase and group velocities:

c = ωi/ki = 1/ki + ki(1 + η²) = 2ki(1 + η²) = c̃r.  (5.208)

This relation is satisfied at ki = (1 + η²)^{−1/2}, which defines the slope of the amplitude profile in accordance with the principle of marginal stability. The corresponding wavenumber and propagation speed are

kr = η(1 + η²)^{−1/2},   c = c̃r = 2(1 + η²)^{1/2}.  (5.209)
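The selection conditions can be verified from the dispersion relation (5.206) alone. The sketch below (ours) differentiates ω(k) numerically at the selected complex wavenumber and checks that the group velocity is real and equal both to the front speed and to ωi/ki:

```python
import math

def omega(k, eta):
    """Dispersion relation (5.206) of the CGL equation linearized
    about the trivial state (k may be complex)."""
    return 1j * (1.0 - (1.0 + 1j * eta) * k * k)

eta = 0.7
ki = 1.0 / math.sqrt(1.0 + eta * eta)   # slope selected by marginal stability
kr = eta * ki                           # wavenumber at the leading edge
c = 2.0 * math.sqrt(1.0 + eta * eta)    # selected propagation speed, (5.209)
k = kr + 1j * ki

h = 1e-6                                # numerical derivative of omega(k)
cg = (omega(k + h, eta) - omega(k - h, eta)) / (2.0 * h)
assert abs(cg.imag) < 1e-6              # stability condition: Im(group velocity) = 0
assert abs(cg.real - c) < 1e-6          # Re(group velocity) equals the front speed
assert abs(omega(k, eta).imag / ki - c) < 1e-9   # envelope condition omega_i/ki = c
```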

The wavenumber q of the pattern formed behind the propagating front does not coincide with kr, but can be computed with the help of the wave crest conservation condition as in Sect. 5.6.1. The entire pattern should be stationary in the comoving frame, except for a common factor e^{iΩt}. Far behind the front, the relation between the frequency and the wavenumber is the Doppler-shifted dispersion relation for plane waves (5.143),

Ω = ν + (η − ν)q² − cq.  (5.210)

On the other hand, near the leading edge the same frequency is computed as

Ω = ωr − ckr = −η.  (5.211)

Equating the two expressions and solving for the root satisfying the inequality |q| < 1

yields the wavenumber q selected by the pulled front:

q = |(√(1 + η²) − √(1 + ν²))/(η − ν)|.  (5.212)

Both kr(η) and q(η, ν) are plotted in Fig. 5.20a. The selected wavenumber may fall into the unstable range. This can be clearly seen in Fig. 5.20b: it happens when the orange-shaded surface representing the function q(η, ν) lies above the blue-shaded surface showing the upper stability limit for a plane wave (5.152). In this case, the wave pattern may become chaotic in the wake of the propagating front. Alternatively, the pattern may be stabilized further downstream through a wavelength change facilitated by nonlinear interactions.
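A quick numerical check (ours) confirms the frequency matching behind (5.212): the signed root of the quadratic, whose modulus is quoted in (5.212), reproduces the leading-edge frequency Ω = −η when inserted into the Doppler-shifted dispersion relation (5.210):

```python
import math

eta, nu = 0.5, -0.8
c = 2.0 * math.sqrt(1.0 + eta**2)       # selected front speed, (5.209)
# signed root of the frequency-matching quadratic; (5.212) quotes its modulus
q = (math.sqrt(1.0 + eta**2) - math.sqrt(1.0 + nu**2)) / (eta - nu)
# Doppler-shifted dispersion relation (5.210) behind the front must
# reproduce the leading-edge frequency (5.211), Omega = -eta
Omega = nu + (eta - nu) * q * q - c * q
assert abs(Omega + eta) < 1e-9
assert abs(q) < 1.0
```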

5.6.4 Nonuniform Wave Patterns

Converting the CGL equation into an ODE. There is a wide variety of nonuniform 1D wave patterns. Typical structures with spatially varying modulus and wavenumber are stationary in a frame propagating with a certain speed c and depend on the comoving coordinate z = x − ct only. The most prominent heteroclinic solution is a front describing the trivial state invaded by a wave train, considered above. Heteroclinic trajectories connecting domains with distinct wavenumbers are also called wavenumber kinks. Possible homoclinic solutions are a pulse (“soliton”) approaching the trivial state at both extremes, and oppositely directed plane waves colliding at a domain boundary. Such trajectories are also called holes or dark solitons when they involve a depression of the amplitude due to a strong phase rotation. The version of the CGL equation particularly convenient for finding nonuniform solutions and their perturbations is obtained by transforming the variable to the polar form, which is stationary in the comoving frame z = x − ct, except for oscillations with a constant frequency Ω: u = ρ(z)e^{i[θ(z)−Ωt]}.

(5.213)

We define the complex wavenumber as k ≡ kr + iki = −i d(ln u)/dz = θ′(z) − i d(ln ρ)/dz.

(5.214)

The derivatives entering the CGL equation (5.205) are then expressed as u⁻¹u̇ = cki − i(Ω + ckr) = −i(Ω + ck), u⁻¹u″(z) = ikr′(z) − ki′(z) + ki² − 2ikrki − kr² = ik′(z) − k².

(5.215)

The CGL equation is transformed using these expressions into a system of one real and one complex first-order dynamic equation, with the coordinate z playing the role of time:

ρ′(z) = −ki ρ,   ik′(z) = −[1 + i(Ω + ck)]/(1 + iη) + k² + [(1 + iν)/(1 + iη)]ρ².  (5.216)


The last equation becomes, after separating the real and imaginary parts,

kr′(z) = −[Ω + c(kr + ηki) − η]/(1 + η²) + 2krki + [(ν − η)/(1 + η²)]ρ²,  (5.217)
ki′(z) = [ηΩ + c(ηkr − ki) + 1]/(1 + η²) + ki² − kr² − [(1 + νη)/(1 + η²)]ρ².  (5.218)

Take note that this transformation, though similar, differs from that in Sects. 5.6.1, 5.6.3, which is applicable in the vicinity of the trivial solution, since the nonlinear term cannot be rationally expressed in terms of a phase variable, even a complex one.

Fixed points and heteroclinic connections. Plane waves are nonlinear fixed points of this system with ki = 0, kr = const = q, and ρ = ρ0 = √(1 − q²). The relation between the frequency and the real part of the wavenumber following from either (5.217) or (5.218) coincides with the Doppler-shifted dispersion relation (5.210). The linear fixed points correspond to the trivial state ρ = 0 with a nonzero k, which can be obtained by resolving (5.216):

k = [c ± √(c² + 4(η − i)(Ω − i))]/(2(η − i)).  (5.219)

Separating here the real and imaginary parts leads to awkward expressions, as does an attempt to resolve (5.217), (5.218), but implicit solutions can be obtained by resolving the latter with respect to Ω, c:

Ω = krki − η(kr² + ki²) − kr(1 − kr²)/ki,   c = ki + 2ηkr + (1 − kr²)/ki.  (5.220)
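As a consistency check (a sketch of ours), the leading-edge values of Sect. 5.6.3, kr = η(1 + η²)^{−1/2}, ki = (1 + η²)^{−1/2}, must be a linear fixed point with Ω = −η and c = 2(1 + η²)^{1/2}:

```python
import math

def omega_c_from_k(kr, ki, eta):
    """Implicit relations (5.220) giving Omega and c for a linear
    fixed point (trivial state rho = 0) with complex wavenumber kr + i*ki."""
    c = ki + 2.0 * eta * kr + (1.0 - kr**2) / ki
    Omega = kr * ki - eta * (kr**2 + ki**2) - kr * (1.0 - kr**2) / ki
    return Omega, c

eta = 1.3
s = 1.0 / math.sqrt(1.0 + eta**2)
Omega, c = omega_c_from_k(eta * s, s, eta)   # leading-edge values of Sect. 5.6.3
assert abs(Omega + eta) < 1e-12              # Omega = -eta, cf. (5.211)
assert abs(c - 2.0 * math.sqrt(1.0 + eta**2)) < 1e-12
```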

Nonuniform solutions of (5.217), (5.218) can be viewed as homoclinic or heteroclinic trajectories connecting the fixed points, and the parameters c, Ω can be adjusted to find various solutions. It might be possible to anticipate the existence and multiplicity of such solutions by counting the dimensions of the stable and unstable manifolds of the fixed points, determined by the real parts of the respective eigenvalues. For linear fixed points, the eigenvalues are expressed in a simple way:

λ0 = −ki,   λ± = 2ki − c/(1 + η²) ± i[2kr − ηc/(1 + η²)].  (5.221)

The fixed point is a saddle-focus with 1D stable and 2D unstable manifolds if c < cmax = 2ki(1 + η²), allowing for a connection with the stable manifold of a nonlinear fixed point, if one exists. We have actually found such a connection in another way in Sect. 5.6.3. Using the expressions for c, kr, ki at the leading edge derived there, we find that the eigenvalues λ± vanish. For nonlinear fixed points, linearizing (5.216) at the plane wave solution and computing the Jacobi matrix leads to a cubic equation determining the eigenvalues, which


depend on the parameters of the problem in a complicated way. The determinant of the Jacobi matrix is

J = 2(1 − q²)[c + 2q(ν − η)]/(1 + η²).  (5.222)

The real parts of the eigenvalues cannot have the same sign when J is negative, i.e., at c < 2q(η − ν). This ensures that both stable and unstable manifolds are present, allowing for a connection with a linear fixed point, and making it also possible to form homoclinic connections.

Nonuniform solutions. With good luck and ingenuity, nonuniform solutions can be obtained analytically if a proper ansatz is found. The simplest particular case is a static solution with oppositely directed but otherwise identical waves on both sides. By symmetry, this solution must be stationary (c = 0, Ω = ω, z = x). Nozaki and Bekki (1984) constructed this solution, which they called a hole, using the ansatz

ρ(x) = ρ0 tanh κx,   kr(x) = q tanh κx.  (5.223)

These formulas embody the desired structure, with both the real amplitude and the wavenumber approaching constant absolute values at x → ±∞. The change of the sign of ρ is irrelevant, as only its square enters (5.217), (5.218), and the sign can be inverted by shifting the phase by π. The change of the sign of kr indicates the inversion of the propagation direction. With q > 0, the hole is a sink of the converging waves, and with q < 0, it is a source. The imaginary part of the wavenumber, ki = −ρ′(x)/ρ, indicating the rate of change of the real amplitude, is maximal at the hole location. Serendipitously, (5.217), (5.218) can be satisfied identically by a proper choice of the parameters ρ0, q, κ, ω.

A heteroclinic trajectory connecting fixed points corresponding to two distinct wavenumbers q± describes a wavenumber kink propagating with some speed c, which has to be chosen to equalize the corresponding Doppler-shifted frequencies Ω(c, q±). In view of the dispersion relation for plane waves (5.210), this speed is expressed as

c = [ω(k+) − ω(k−)]/(k+ − k−) = (η − ν)(k+ + k−).  (5.224)
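The frequency-matching argument behind (5.224) reduces to elementary algebra and can be spot-checked numerically (our sketch):

```python
def omega_pw(q, eta, nu):
    """Plane-wave dispersion relation of the CGL equation, cf. (5.210)."""
    return nu + (eta - nu) * q * q

def kink_speed(kp, km, eta, nu):
    """Kink speed (5.224), equalizing the Doppler-shifted frequencies."""
    return (eta - nu) * (kp + km)

kp, km, eta, nu = 0.4, -0.1, 0.9, 0.2
c = kink_speed(kp, km, eta, nu)
# Doppler-shifted frequencies on both sides of the kink coincide
assert abs((omega_pw(kp, eta, nu) - c * kp)
           - (omega_pw(km, eta, nu) - c * km)) < 1e-12
# and c is the mean of the two group velocities 2*(eta - nu)*q
assert abs(c - 0.5 * (2*(eta - nu)*kp + 2*(eta - nu)*km)) < 1e-12
```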

This formula can be interpreted as a phase conservation relation. Comparing the last expression with the standard formula for the group velocity c̃ = ω′(k), we see that the kink propagates with the average group velocity of the two plane waves which are approached asymptotically at z → ±∞. Bekki and Nozaki (1985) constructed a family of heteroclinic solutions connecting nonlinear fixed points with wavenumbers q±, starting from the explicit ansatz

ρ(z)e^{iθ(z)} = ρ0 (1 + Ze^{−2κz})/(1 + e^{−2κz}) exp[(i/2) ∫^z (q+ + q− tanh κζ) dζ],  (5.225)


where Z is a complex parameter. Following a long ingenious chain of computations, they arrived at a continuous family of solutions parametrized by the speed c. These solutions are sources rather than sinks, and have a dip of the modulus, though not reaching zero as in the case of a standing hole. The constant κ decreases with c and vanishes at the maximum velocity cmax, where the solution merges with a plane wave having the group velocity c̃ = cmax. The earliest known nonuniform solution is a pulse (Hocking and Stewartson, 1972), but, unlike the hole and wavenumber kinks, it exists only at a restricted set of combinations of the basic parameters η, ν. Various quasiperiodic solutions (modulated amplitude waves) have been constructed numerically. It should be noted that all analytic solutions are subject to instabilities within a certain range of the parameters of the problem. Also, although they satisfy the CGL equation, they cannot be viewed as viable solutions of problems leading to this amplitude equation in the course of a bifurcation expansion (Sect. 5.4.2), since, as a rule, they are sensitive to small perturbations that are inadvertently caused by higher-order terms. A thorough study of 1D solutions of various generalizations of the CGL equation (with added fourth-derivative and quintic terms) has been carried out by van Saarloos and Hohenberg (1992).

Further Reading

The last decades of the 20th century were the summit of the theory of nonequilibrium patterns. The review by Cross and Hohenberg (1993) remains to this day the most cited survey of the field, which is far wider than the scope of this chapter, restricted to problems reducible to ODEs. Monographs emphasizing different aspects of nonequilibrium dynamics and pattern formation were published by Manneville (1990), Mikhailov (1991, 1994), Nicolis (1995), Walgraef (1997), Mori and Kuramoto (1998), Hoyle (2006), Pismen (2006), and Desai and Kapral (2009). Aranson and Kramer (2002) published a comprehensive review of solutions of the CGL equation not limited to a single dimension. Various localized stationary structures are reviewed by Knobloch (2015). The 21st century brought copious computational results, while many scientists active in the field have turned to biologically inspired applications.

Bibliography

Abrams DM, Mirollo R, Strogatz SH, and Wiley DA, 2008. Phys. Rev. Lett. 101, 084103
Alexander S and McTague J, 1978. Phys. Rev. Lett. 41, 702-705
Andronov AA, Vitt AA, and Khaikin SE, 1937; transl. 1966. Theory of Oscillators, Pergamon Press, Oxford
Anishchenko VS, Vadivasova TE, and Strelkova GS, 2014. Deterministic Nonlinear Systems, Springer
Anurag, Mondal B, Bhattacharjee JK, and Chakraborty S, 2020. Physica D 402, 132256
Aranson IS, Aranson L, Kramer L, and Weber A, 1992. Phys. Rev. A 46, R2992-R2995
Aranson IS, Malomed BA, Pismen LM, and Tsimring LS, 2000. Phys. Rev. E 62, R5-R8
Aranson IS and Kramer L, 2002. Rev. Mod. Phys. 74, 99-143
Aris R, 1965. Introduction to the Analysis of Chemical Reactors, Prentice-Hall, Englewood Cliffs, NJ
Arnold VI, 1963. Russ. Math. Surv. 18, 9-36
Arnold VI, 1964. Dokl. Akad. Nauk SSSR 156, 9-12
Arnold VI, 1984. Catastrophe Theory, Springer-Verlag, Berlin
Barrow-Green J, 1997. Poincaré and the Three-Body Problem, AMS series History of Mathematics, 11
Bartuccelli MV, Gentile G, and Georgiou KV, 2001. Proc. R. Soc. Lond. A 457, 3007-3022
Bekki N and Nozaki K, 1985. Phys. Lett. A 110, 133
Ben Jacob E, Brand HR, Dee G, Kramer L, and Langer JS, 1985. Physica D 14, 348-364
Benettin G, Galgani L, and Strelcyn JM, 1976. Phys. Rev. A 14, 2338-2345
Bennett M, Schatz MF, Rockwood H, and Wiesenfeld K, 2002. Proc. R. Soc. Lond. A 458, 563-579
Cahn JW and Hilliard JE, 1959. J. Chem. Phys. 31, 688-699
Cartwright ML and Littlewood JE, 1947. Ann. Math. 48, 472-494
Cencini M, Cecconi F, and Vulpiani A, 2010. Chaos: From Simple Models to Complex Systems, World Scientific, Hackensack, NJ

227

228

BIBLIOGRAPHY

Chirikov BV, 1979. Phys. Reports 52, 265-379
Coullet P, Lega J, Houchmanzadeh D, and Lajzerowicz J, 1990. Phys. Rev. Lett. 65, 1352-1355
Coullet P, Riera C, and Tresser C, 2000. Phys. Rev. Lett. 84, 3069-3072
Cross MC and Hohenberg P, 1993. Rev. Mod. Phys. 65, 851-1112
Dee GT and Langer JS, 1983. Phys. Rev. Lett. 50, 383-386
Desai RC and Kapral R, 2009. Dynamics of Self-Organized and Self-Assembled Structures, Cambridge University Press
De Wit A, 1999. Adv. Chem. Phys. 109, 435-513
Eckhaus W, 1983. In Asymptotic Analysis II, Springer Lecture Notes Math. 985, 449-494
Eckmann JP, Oliffson Kamphorst S, Ruelle D, and Ciliberto S, 1986. Phys. Rev. A 34, 4971-4979
Elphick C, Meron E, and Spiegel EA, 1988. Phys. Rev. Lett. 61, 496-499
Feigenbaum M, 1978. J. Stat. Phys. 19, 25-52
Gavrilov N and Shilnikov AL, 2000. AMS Transl. Series II, 200, 99-105
Gilmore R and Letellier C, 2007. The Symmetry of Chaos, Oxford University Press
Ginzburg VL and Landau LD, 1950. Zh. Eksp. Teor. Fiz. 20, 1064-1082
Golubitsky M and Schaeffer DG, 1985. Singularities and Groups in Bifurcation Theory, Volume I, Springer
Golubitsky M, Stewart I, and Schaeffer DG, 1988. Singularities and Groups in Bifurcation Theory, Volume II, Springer
Gopal R, Venkatesan A, and Lakshmanan M, 2013. Chaos 23, 023123
Gottwald GA and Melbourne I, 2004. Proc. R. Soc. London A 460, 603-611
Gottwald GA and Melbourne I, 2005. Physica D 212, 100-110
Grassberger P and Procaccia I, 1983. Physica D 9, 189-208
Grebogi C, Ott E, Pelikan S, and Yorke JA, 1984. Physica D 13, 261-268
Groh B and Mulder B, 1999. Phys. Rev. E 59, 5613-5620
Guckenheimer J and Holmes P, 1983. Nonlinear Oscillations, Dynamical Systems, and Bifurcations of Vector Fields, Springer-Verlag
Hadamard J, 1898. J. Math. Pures Appl. 4, 27-73
Hagberg A and Meron E, 1994. Phys. Rev. Lett. 72, 2494-2497
Hagberg A, Yochelis A, Yizhaq H, Elphick C, Pismen L, and Meron E, 2006. Physica D 217, 186-192
Haken H, 2004. Synergetics: Introduction and Advanced Topics, Springer-Verlag, Berlin
Hénon M and Heiles C, 1964. Astron. J. 69, 73-79
Hocking LM and Stewartson K, 1972. Proc. R. Soc. Lond. A 326, 289-313
Hopf E, 1937. Ergodentheorie, Springer-Verlag, Berlin
Hopf E, 1948. Comm. Pure Appl. Math. 1, 303-322
Horwitz L, Ben Zion Y, Lewkowicz M, Schiffer M, and Levitan J, 2007. Phys. Rev. Lett. 98, 234301
Hoyle RB, 1994. Phys. Rev. E 49, 2875-2880

BIBLIOGRAPHY

229

Hoyle RB, 2006. Pattern Formation: An Introduction to Methods, Cambridge University Press
Huygens C, 1673. Horologium Oscillatorium, Apud Muguet, Paris; transl. The Pendulum Clock, 1986, Iowa State University Press, Ames
Huynh HN, Nguyen PT, and Chew LY, 2013. Commun. Nonlinear Sci. Numer. Simulat. 18, 291-307
Kapitaniak M, Czolczynski K, Perlikowski P, Stefanski A, and Kapitaniak T, 2012. Physics Reports 517, 1-69
Kaplan JL and Yorke JA, 1979. Lect. Notes Math. 730, 204-227
Kevorkian J and Cole JD, 1996. Multiple Scale and Singular Perturbation Methods, Springer, New York
Khaikin SE, 1930. Zh. Prikl. Fiz. 7, 21
Knobloch E, 2015. Annu. Rev. Condens. Matter Phys. 6, 325-359
Kolmogorov AN, Petrovsky IG, and Piskunov NS, 1937. Bull. Moscow Univ. Math. Ser. A 1, 1-25
Kolmogorov AN, 1941. Dokl. Acad. Nauk SSSR 30, 301-305
Kolmogorov AN, 1954. Dokl. Acad. Nauk SSSR 98, 527-530
Kotwal T, Jiang X, and Abrams DA, 2017. Phys. Rev. Lett. 119, 264101
Kuang Y and Beretta E, 1998. J. Math. Biol. 36, 389-406
Kuramoto Y, 1984. Chemical Oscillations, Waves and Turbulence, Springer-Verlag, Berlin
Kuramoto Y and Tsuzuki T, 1976. Prog. Theor. Phys. 55, 356
Kuramoto Y and Battogtokh D, 2002. Nonlinear Phenom. Complex Syst. 5, 380
Kuznetsov SP, 2012. Hyperbolic Chaos, Springer
Kuznetsov YA, 1995. Elements of Applied Bifurcation Theory, Springer-Verlag, Berlin
Landau LD, 1944. Dokl. Acad. Nauk SSSR 44, 339-342
Landau LD and Lifshitz EM, 1960; 3rd ed. 1969. Mechanics, Pergamon Press, Oxford
Letellier C, Roulin E, and Rössler OE, 2006. Chaos Solit. Fract. 28, 337-360
Letellier C and Messager V, 2010. Int. J. Bifurcation and Chaos 20, 3585-3616
Letellier C, 2013. Chaos in Nature, World Scientific, Hackensack NJ
Levich VG, 1959; transl. 1962. Physicochemical Hydrodynamics, Prentice-Hall, Englewood Cliffs, NJ
Li TY and Yorke J, 1975. Amer. Math. Monthly 82, 985-992
Lichtenberg AJ and Lieberman MA, 1983; 2nd ed. 1992. Regular and Stochastic Motion, Springer-Verlag
Lorenz EN, 1963. J. Atmosph. Sci. 20, 130-141
Luongo A, Paolone A, and Di Egidio A, 2003. Nonlinear Dynamics 34, 269-291
Lyapunov AM, 1892; transl. 1992. Int. J. Control 55, 531-773
Malomed BA, Nepomnyashchy AA, and Tribelsky MI, 1989. Zh. Eksp. Teor. Fiz. 96, 684
Manneville P and Pomeau Y, 1979. Phys. Lett. A 75, 1-2
Manneville P, 1990. Dissipative Structures and Weak Turbulence, Academic Press, San Diego CA
Matsumoto T, Chua LO, and Tanaka S, 1984. Phys. Rev. A 30, 1155-1158

230

BIBLIOGRAPHY

May RM, 1974a. Stability and Complexity in Model Ecosystems, Princeton University Press, Princeton
May RM, 1974b. Science 186, 645-647
May RM, 1976. Nature 261, 459-467
Messias M, de Carvalho Braga D, and Fernando Mello L, 2009. Int. J. Bifurc. Chaos 19, 497-515
Mikhailov AS, 1991, 1994. Foundations of Synergetics. I: Distributed Active Systems; II (with Loskutov AY): Complex Patterns, Springer-Verlag, Berlin
Mori H and Kuramoto Y, 1998. Dissipative Structures and Chaos, Springer-Verlag, Berlin
Moser J, 1962. Nachricht Akad. Wiss. Göttingen II Math. Phys. K1, 1-20
Murray JD, 1989. Mathematical Biology, Springer, Berlin (3rd ed. 2002/2003)
Nepomnyashchy AA, 1974. Izv. AN SSSR, Mekh. Zhidk. i Gaza (3), 28-34; transl. Fluid Dyn. 9, 354-359
Nicolis G and Auchmuty JF, 1974. Proc. Natl. Acad. Sci. USA 71, 2748
Nicolis G, 1995. Introduction to Nonlinear Science, Cambridge University Press
Nozaki K and Bekki N, 1984. J. Phys. Soc. Jpn. 53, 1581-1582
Osipov GV, Kurths J, and Zhou C, 2007. Synchronization in Oscillatory Networks, Springer-Verlag, Berlin
Ott E, 1993. Chaos in Dynamical Systems (2nd ed. 2002), Cambridge University Press
Pikovsky A, Rosenblum M, and Kurths J, 2001. Synchronization: A Universal Concept in Nonlinear Sciences, Cambridge University Press
Pikovsky A and Rosenblum M, 2015. Chaos 25, 097616
Pismen LM, 1980. Chem. Eng. Sci. 35, 1950-1978
Pismen LM, 1984. Chem. Eng. Sci. 39, 1063-1077
Pismen LM, 1985. Chem. Eng. Sci. 40, 905-916
Pismen LM, 1986. Dynamics and Stability of Systems 1, 97-113
Pismen LM, 1987. Phys. Rev. A 35, 2709-2717
Pismen LM, 2006. Patterns and Interfaces in Dissipative Dynamics, Springer, Berlin
Poincaré H, 1885. J. Mathématiques 1, 167-224
Poincaré H, 1890. Acta Mathematica 13, 1-270
Pomeau Y and Manneville P, 1980. Comm. Math. Phys. 74, 189-197
Pomeau Y, 1986. Physica D 23, 3-11
Rayleigh, Lord, 1916. Proc. Roy. Soc. London A 93, 148-154
Rényi A, 1970. Probability Theory, North Holland, Amsterdam
Rosenstein MT, Collins JJ, and De Luca CJ, 1993. Physica D 65, 117-134
Rössler OE, 1976. Z. Naturforsch. A 31, 259-264
Rössler OE, 1979. Phys. Lett. A 71, 155-157
Ruelle D and Takens F, 1971. Commun. Math. Phys. 20, 167-192
Sano M and Sawada Y, 1985. Phys. Rev. Lett. 55, 1082-1085
Schuster HG, 1984. Deterministic Chaos (4th ed. 2005), Physik-Verlag, Weinheim
Shechtman D, Blech I, Gratias D, and Cahn J, 1984. Phys. Rev. Lett. 53, 1951-1953
Shilnikov LP, 1965. Sov. Math. Dokl. 6, 163-166

BIBLIOGRAPHY

231

Shilnikov LP, Shilnikov AL, Turaev DV, and Chua LO, 2001. Methods of Qualitative Theory in Nonlinear Dynamics, Part II, World Scientific, Hackensack NJ
Singer J, Wang YZ, and Bau HH, 1991. Phys. Rev. Lett. 66, 1123-1125
Sirovich L, 1987. Quart. Appl. Math. 45, 561-590
Sivashinsky GI, 1977. Acta Astronaut. 4, 1177
Smale S, 1967. Bull. Am. Math. Soc. 73, 747-817
Sparrow C, 1982. The Lorenz Equations: Bifurcations, Chaos, and Strange Attractors, Springer, New York
Sprott JC, 1994. Phys. Rev. E 50, R647-R650
Stachowiak T and Okada T, 2006. Chaos, Solitons and Fractals 29, 417-422
Swift J and Hohenberg P, 1977. Phys. Rev. A 15, 319-328
Szumiński W and Woźniak D, 2020. Commun. Nonlinear Sci. Numer. Simulat. 83, 105099
Taylor GI, 1935. Trans. Roy. Soc. London A 151, 412-478
Thom R, 1975. Structural Stability and Morphogenesis: An Outline of a General Theory of Models, Benjamin, Reading MA
Turing AM, 1952. Phil. Trans. Roy. Soc. London B 237, 37-72
van Saarloos W, 1988. Phys. Rev. A 37, 211-229
van Saarloos W, 2003. Physics Reports 386, 29-222
van Saarloos W and Hohenberg PC, 1992. Physica D 56, 303-367
Vo T, 2017. Physica D 356, 37-64
Walgraef D, 1997. Spatio-Temporal Pattern Formation, Springer-Verlag, New York
Wiener N, 1948. Cybernetics: or Control and Communication in the Animal and the Machine, 2nd ed. 1961, reissued 2019, MIT Press, Cambridge MA
Winfree AT, 1967. J. Theor. Biol. 16, 15-42
Wolpert L, 2011. How We Live and Why We Die: The Secret Lives of Cells, WW Norton & Co., New York
Zaslavsky GM, 2005. Hamiltonian Chaos and Fractional Dynamics, Oxford University Press, New York

Online Files

ChuaCircuit.nb
CoupledPendulums.nb
DoubleHopf.nb
Exothermic.nb
Forced.nb
HenonHeiles.nb
Lorenz.nb
PopulationDynamics.nb
Roessler.nb
SeparatedScales.nb
WeakDissipation.nb


Illustration Credits The following illustrations are published under the Creative Commons license.

1.3(a): By Catslash - Own work, Public Domain, https://commons.wikimedia.org/w/index.php?curid=11557949
1.3(d): By George Ioannidis - Own work, CC BY 3.0, https://commons.wikimedia.org/w/index.php?curid=7920826
1.8(b): By Chetvorno - Own work, CC0, https://commons.wikimedia.org/w/index.php?curid=31049859
1.9: By Alksub at the English language Wikipedia, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=2756944
4.7(a): By Jordan Pierce - Own work, CC0, https://commons.wikimedia.org/w/index.php?curid=16445229
4.7(b): By InXnI - Own work, CC0, https://commons.wikimedia.org/w/index.php?curid=18744163
4.21(a-c): By Linas - Own work, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=12053416-8
