Complexity and Evolution of Dissipative Systems: An Analytical Approach
ISBN 9783110268287, 9783110266481

This book focuses on the dynamic complexity of neural networks, genetic networks, and reaction-diffusion systems.


English · 311 [316] pages · 2013


Table of contents :
Preface
Contents
1. Introduction
2. Complex dynamics in neural and genetic networks
3. Complex patterns and attractors for reaction-diffusion systems
4. Random perturbations, evolution and complexity
Bibliography
Index


Sergey Vakulenko
Complexity and Evolution of Dissipative Systems

De Gruyter Series in Mathematics and Life Sciences

Edited by
Alexandra V. Antoniouk, Kiev, Ukraine
Roderick V. Nicolas Melnik, Waterloo, Ontario, Canada

Volume 4

Sergey Vakulenko

Complexity and Evolution of Dissipative Systems | An Analytical Approach

Mathematics Subject Classification 2010
35K57, 35K40, 60K37, 37D05, 37D10, 37D45, 92B05, 92B20, 92B25

Author
Prof. Dr. Sergey Vakulenko
Russian Academy of Sciences
Institute of Problems of Mechanical Engineering
Laboratory of Hydroelasticity
V.O., Bolshoj pr. 61
199178 St. Petersburg
RUSSIA
[email protected]

ISBN 978-3-11-026648-1
e-ISBN 978-3-11-026828-7
Set-ISBN 978-3-11-026829-4
ISSN 2195-5530

Library of Congress Cataloging-in-Publication Data
A CIP catalog record for this book has been applied for at the Library of Congress.

Bibliographic information published by the Deutsche Nationalbibliothek
The Deutsche Nationalbibliothek lists this publication in the Deutsche Nationalbibliografie; detailed bibliographic data are available on the Internet at http://dnb.dnb.de.

© 2014 Walter de Gruyter GmbH, Berlin/Boston
Typesetting: le-tex publishing services GmbH, Leipzig
Printing and binding: Hubert & Co. GmbH & Co. KG, Göttingen
♾ Printed on acid-free paper
Printed in Germany
www.degruyter.com

Preface

This book is focused on mathematical methods for dissipative system dynamics and mathematical biology. We consider the problem of the emergence of complexity in dissipative systems, and chaos, stability and evolution for genetic networks.

We investigate important classes of dynamical systems generating complicated patterns and strange (chaotic) attractors. This question is inspired by the famous paper by D. Ruelle and F. Takens [235], where the notion of the strange, or chaotic, attractor was introduced. We analytically prove the existence of strange attractors of arbitrarily high dimension for many fundamental models, such as Hopfield neural networks, genetic circuits, and basic systems of phase transition theory.

We also prove the existence of chaotic dynamics for large classes of reaction-diffusion systems, coupled oscillator systems and population dynamics systems. A general method is proposed that allows us to study chaotic behavior of unbounded complexity, i.e. the dimension of the corresponding attractors can go to infinity when we vary some system parameters. This approach is constructive and yields attractor control algorithms.

The second problem is increasing complexity in biological evolution. Charles Darwin formulated the following question of critical importance for evolution theory: How can a gradual evolution produce complex special organs functioning in a correct manner? ([57, Ch. VI]). This question has provoked a great discussion between creationists and scientists who accept Darwin's theory [26, 60, 91, 109, 175–177, 227, 231, 238]. Is Darwinian evolution fast enough to create complex structures? Is such evolution really feasible? Or, maybe, are biological structures actually not so complex?

In this book, we also develop a mathematical approach in order to explain increasing complexity. Using the viability theory [13–16], ideas proposed by M. Gromov and A. Carbone [97], attractor theory, Kolmogorov complexity and new methods for hard combinatorial problems (for example, [1, 184]), we consider some mathematically rigorous approaches to viability, pattern complexity, evolution rate and feasibility. We find a connection between the attractor complexity problem and the viability of biological systems.

Let us outline the results on attractor complexity in more detail. It is well known that, under fairly general conditions, dissipative infinite-dimensional dynamical systems can have finite-dimensional attractors and finite-dimensional invariant (or inertial) manifolds. However, excluding some narrow classes of systems (monotone and gradient systems, see [111–114, 256]), only upper estimates of attractor dimensions and of the dimensions of the invariant (inertial) manifolds can be obtained [18, 50, 61, 77, 78, 101, 108, 129, 155, 168, 170]. Many problems on the complexity of attractors and large time behavior remain open. In particular, an analytical proof of the existence of chaotic dynamics for fundamental physical systems such as the Navier–Stokes equations or reaction-diffusion systems is unknown. Chapter 1 presents a brief review of dynamical

system theory. In particular, we describe classes of systems having relatively simple large time behavior (monotone and gradient semiflows). These systems can have important biological applications.

In Chapters 2 and 3, our goal is to investigate some important classes of systems with complicated behavior. Chapter 2 considers coupled oscillator systems, and neural and genetic networks. In Chapter 3, we extend this approach to systems of partial differential equations. We describe a special method that allows us to find semiflows with complicated large time behavior. These semiflows can produce all possible finite-dimensional structurally stable dynamics when we vary some system parameters. Semiflows with this property form a maximally complicated family of semiflows, or, briefly, MC semiflows. The corresponding parameters can be called control parameters. The class of MC systems includes such fundamental models as Hopfield networks, genetic circuits and some systems of phase transition theory connected with the scalar Ginzburg–Landau equation. We also describe new classes of spatially localized chemical waves. These waves move in an inhomogeneous medium; their propagation speed is a complicated function of time, and the front form can vary in a periodic or even chaotic manner.

In many important reaction-diffusion systems, some reagents diffuse much faster than others. We can observe such situations in numerous biological systems, where large molecules (for example, proteins) are much less mobile than small ones (substrates, microRNA). In such systems, the parameters are diffusion and degradation coefficients. Moreover, these systems involve some additive spatially inhomogeneous external fluxes which do not depend on the unknown reagent densities. Beginning with the seminal paper [272], where the Turing instability was discovered, this class of systems has received great attention [103, 138, 178, 179, 193, 196].
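The coexistence of a fast and a slow diffuser can be illustrated with a minimal numerical sketch. This is our toy example, not a model from the book: the two equations, the grid and all parameter values are illustrative assumptions, chosen only so that the ratio r = D/d is large.

```python
import math

# Toy 1D reaction-diffusion sketch (illustrative parameters, not from the book):
#   u_t = d*u_xx - u + s(x)   (slow diffuser, driven by an inhomogeneous flux)
#   v_t = D*v_xx - v + u      (fast diffuser, driven by u)
# with r = D/d large: the fast component homogenizes in space, while the slow
# one retains the spatial structure imposed by the external flux s(x).

n, dx, dt = 64, 1.0, 0.01
d, D = 0.1, 10.0                         # r = D/d = 100
s = [1.0 + math.sin(2 * math.pi * 8 * i / n) for i in range(n)]  # external flux
u, v = [0.0] * n, [0.0] * n

def lap(w, i):  # discrete Laplacian with periodic boundary conditions
    return (w[(i - 1) % n] - 2.0 * w[i] + w[(i + 1) % n]) / dx ** 2

for _ in range(2000):                    # explicit Euler, up to near-steady state
    u = [u[i] + dt * (d * lap(u, i) - u[i] + s[i]) for i in range(n)]
    v = [v[i] + dt * (D * lap(v, i) - v[i] + u[i]) for i in range(n)]

def spread(w):  # max - min: a crude measure of spatial inhomogeneity
    return max(w) - min(w)

print(spread(u), spread(v))              # v comes out much flatter than u
```

Increasing r flattens v further; in the setting of this book it is precisely this ratio that controls how high-dimensional the attractor can be made.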
In Chapter 3, we consider such reaction-diffusion systems, taking as attractor control parameters the diffusion and degradation coefficients and the external fluxes. Then, we show that there exists an alternative: either the reaction-diffusion system induces a strongly monotone semiflow (and therefore we can observe no chaotic local attractors), or it induces a family of maximally dynamically complex semiflows (and thus can generate all structurally stable attractors, which can be chaotic). A criterion that guarantees maximal dynamical complexity admits a transparent chemical interpretation: there is a reagent which is neither an activator nor an inhibitor. The complexity depends on the parameter r = D/d, where D and d are diffusion rates. In order to obtain attractors of larger dimension, we must take larger r.

This result on reaction-diffusion systems has an interesting biological and physical interpretation. Biological systems convert spatial information contained in DNA sequences into complex behavior in time (and in space). This shows that the physico-chemical basis of such a transformation is as follows: (i) coexistence of mobile and slow reagents (components); (ii) a sufficient complexity of the reagent interactions: either we have reagents which are neither activators nor inhibitors, or we have a number of inhibitors and activators (this case of genetic networks is considered in Chapter 2). Such


systems with complicated dynamics can transform complicated spatial information (contained in spatially inhomogeneous terms) into complex behavior in time.

From the physical point of view, this result can be interpreted as follows. Monotone and gradient systems describe, in a sense, an "ordered" dynamics. For example, for gradient systems, there exists a Lyapunov function decreasing along the trajectories. Physically, this corresponds to systems having an entropy that is automatically monotone in time due to the laws of system functioning. Our result means that if such an ordering is absent (no entropy or other thermodynamic function), then we can create any prescribed hyperbolic chaos in the system by a variation of the external fluxes and the diffusion and degradation coefficients.

Although our constructions use some sophisticated mathematical methods (mainly the realization of vector fields proposed by P. Poláčik, see [212–214]), the basic idea behind this mathematics is simple and admits a transparent physical interpretation. We use the so-called slaving principle [100]: if the system dynamics can be decomposed into fast and slow modes, then for large times, the whole system dynamics is captured by the slow modes. To this well-known idea (which can be justified by invariant manifold theory), we add a new one: one can control the system dynamics by adjusting parameters that define the interaction between slow and fast modes.

In Chapter 2, we show how these two ideas work for neural and genetic networks. We exploit a special topology of the weighted graph that defines node interaction in the network (the nodes are genes or neurons). This topology can be called a "centralized topology," or "empire structure." In centralized networks, highly connected hubs play the role of organizing centers. The hubs receive and dispatch interactions. Each center interacts with many weakly connected nodes (satellites). We assume that the satellites do not interact with each other, but only receive orders from a center (the ancient Roman principle divide et impera, an ideal for some empires).

We study complex behavior and bifurcations in networks where the node interaction has this empire structure. We show that the corresponding dissipative semiflows are maximally dynamically complex. This means that, depending on the network parameters (thresholds, synaptic weights and the number of neurons), these semiflows can realize all structurally stable dynamics. These semiflows are capable of generating (up to an orbital topological equivalence) all structurally stable dynamics, including chaotic, periodic, etc., for example, all Anosov flows and Smale axiom A systems, Smale horseshoes and all hyperbolic dynamics. There is an explicit algorithm to construct a network with prescribed dynamics. The algorithm is based on the well-known theorem of neural network theory that multilayered perceptrons are universal approximators. The attractor control parameters are the coefficients that define the interaction between satellites and centers.

For centralized genetic networks, we also present a mathematical realization of Wolpert's famous idea: positional information for multicellular organisms [325]. We show that it is sufficient to have three morphogen gradients and a sufficient number of genes to create different complicated dynamics in different cells of an "organism." Then, a natural question arises: how can such complicated dynamics in such a multicellular structure be synchronized? We can answer this key question by using a combination of our methods with Kuramoto's ideas [152]. These results are presented in Chapter 4.

In Chapter 2, we also show that the dynamics of large classes of coupled oscillator systems with quadratic interactions is maximally complex. These classes, in particular, include the celebrated Lotka–Volterra model.

In Chapter 4, we consider the viability of the systems investigated in Chapters 2 and 3 under random fluctuations. This helps us to shed light on the problem of increasing complexity in evolution. We can outline the key ideas as follows.

One of the main characteristics of biological systems is that these systems support their own life functions. In particular, a biological system tries to keep the values of the main characteristics of each cell – such as temperature, pressure, pH (acidity measure), concentrations of different reagents – within a certain range of values that makes the biological processes possible. These domains of values are called viability domains, and the process of supporting the life functions – by keeping the values inside viability domains – is called homeostasis. The concept of homeostasis was first developed by the French physiologist Claude Bernard; it is now one of the main concepts of biology; see, e.g. [42].

The homeostasis process is notoriously difficult to describe in precise mathematical terms. At first glance, homeostasis is similar to the well-known and well-studied notion of stability: in both cases, once a system deviates from the desirable domain, it is pushed back. However, a more detailed analysis shows that these notions are actually different:
– the usual mathematical descriptions of stability mean that a system will indefinitely remain in the desired state, for time t → ∞, while
– a biological cell (and the whole living being) eventually dies.

This difference has been emphasized by M. Gromov and A. Carbone: "Homeostasis of an individual cell cannot be stable for a long time as it would be destroyed by random fluctuations within and off cell" ([97, p. 40]).

One might argue that while individuals die, their children survive and thus species remain. However, it turns out that biological species are unstable too. This conclusion was confirmed, e.g. by L. Van Valen on the basis of his analysis of empirical data; see, e.g. [227, 301]. Moreover, he concluded that the species extinction rate is approximately constant for all species. Species extinction does not necessarily mean complete extinction; it usually means that a species evolves and a new mutated better-fit species replaces the original one. From this viewpoint, the evolution is "stable," in the sense that it keeps life on Earth effectively functioning. However, as M. Gromov and A. Carbone mention, it is very difficult to describe this "stability" in precise terms: "There is no adequate mathematical formalism to express the intuitively clear idea of replicative stability of dynamical systems" ([97, p. 40]).
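A minimal simulation makes the contrast concrete. This is our toy sketch, not the book's formalism: the restoring strength, noise level and viability interval below are arbitrary choices. A noisy homeostatic variable is pulled back toward its set point, yet every trajectory eventually leaves the viability domain, so only the expected survival time is meaningful.

```python
import random

# Toy homeostasis sketch (illustrative parameters).  The variable x is pulled
# toward the set point 0 but kicked by Gaussian noise; we record how long it
# stays inside the viability domain (lo, hi).  Every run exits eventually:
# homeostasis is supported only on a bounded (random) time interval.

random.seed(1)
theta, sigma = 0.5, 0.6                  # restoring strength and noise level
lo, hi = -1.0, 1.0                       # viability domain

def exit_time(max_steps=100_000):
    x, t = 0.0, 0
    while lo < x < hi and t < max_steps:
        x += -theta * x + sigma * random.gauss(0.0, 1.0)
        t += 1
    return t

times = [exit_time() for _ in range(500)]
mean_survival = sum(times) / len(times)
print(mean_survival, max(times))         # finite mean survival time
```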


Specifically, we need to formalize two ideas:
– First, biological systems are unstable (in particular, under random perturbations).
– Second, these systems can be stabilized by replication (evolution).
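These two ideas can be contrasted in a toy branching-process sketch. It is ours, not the book's formalism, and the death and replication probabilities are arbitrary assumptions: a single mortal individual dies almost surely, while a replicating lineage with mean offspring above one survives forever with positive probability.

```python
import random

# Toy Galton-Watson sketch (illustrative probabilities).  Each individual dies
# in a step with probability p_die; a survivor may leave one offspring with
# probability p_repl.  Without replication a lone individual dies almost
# surely; with replication the process is supercritical and a positive
# fraction of lineages never goes extinct.

random.seed(2)
p_die, p_repl = 0.3, 0.8

def survives(replication, horizon=200, cap=1000):
    pop = 1
    for _ in range(horizon):
        if pop == 0:
            return False
        if pop > cap:                      # clearly established: count as survival
            return True
        new = 0
        for _ in range(pop):
            if random.random() > p_die:    # this individual survives the step
                new += 1
                if replication and random.random() < p_repl:
                    new += 1               # ... and leaves one offspring
        pop = new
    return pop > 0

trials = 300
alone = sum(survives(False) for _ in range(trials)) / trials
lineage = sum(survives(True) for _ in range(trials)) / trials
print(alone, lineage)                      # ~0 versus a clearly positive fraction
```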

Progress in solving both aspects of the viability problem can be achieved if we use the notion of Kolmogorov complexity. In our formalizations, we will use the basic concepts and ideas proposed by M. Gromov and A. Carbone [97], L. Van Valen [301], and L. Valiant [303, 304].

We introduce classes of random dynamical systems modeling biological systems under large fluctuations. For them, we mathematically formalize the homeostasis concept using the viability theory developed mainly by J.-P. Aubin and colleagues [13–16]. For these systems, we prove the first part of the Gromov–Carbone hypothesis: a "generic" system of our class is capable of supporting homeostasis only within a bounded time interval. We obtain a result which, informally speaking, states that reaction-diffusion systems are unviable under two generic multiplicative noises.

Some explicit estimates of the viability times and probabilities can be found for genetic networks. We can express the viability probability via the genetic network parameters. These estimates show that this approach is consistent with key experimental data [133, 134]. Namely, the interaction graph should contain strongly connected nodes: centrality correlates with lethality. Moreover, this result is in good accordance with the fact that species emerge in periods of ecological catastrophe [143] and with L. Van Valen's law on species evolution [301].

The next step is connected with the Kolmogorov complexity theory, starting with the seminal papers [145, 259] and developed in many works [64, 71, 315]; see [159, 187] for an overview. This theory is important for applications such as information compression and others [157, 333, 334]. We know that biological systems are coded by a discrete genetic code. Therefore, we can introduce the Kolmogorov complexity K(C) of such codes C. Estimating the complexity of DNA sequences is important for applications and for the comprehension of evolution, and the problems of gene complexity, organism complexity and complexity increase in evolution have received a great deal of attention in many fundamental works, for example, [3, 33, 34, 44, 58, 106, 147, 160, 180, 181, 185, 222, 230, 231, 238, 302, 318, 319]. Mathematically, a precise computation of the Kolmogorov complexity is an undecidable problem, i.e. it is impossible to invent a universal algorithm that is capable of computing the complexity for all DNAs; however, we can find upper estimates of the DNA complexity.

The next result states that the survival time Tsurv(C) of a system coded by C and the Kolmogorov complexity K(C) are connected. Roughly speaking, we have the following assertion: if K(C) is bounded by a constant K0, the survival time is bounded as well. This result shows (for a precise mathematical formulation, see Theorem 4.24 of Chapter 4) that the code complexity is a function of time which has a tendency to increase, i.e. this

function cannot be a priori bounded. (We say that a function K(t), where t ∈ (0, +∞), has a tendency to grow if for each number a, there is a time moment t(a) such that K(t(a)) > a.) The proof of this fact is based on results from the Gromov–Carbone problem and on the nonviability of "generic" systems under extremal fluctuations. To be viable under extremal events (strong fluctuations of the environment), the genetic code of biological systems should have large Kolmogorov complexity, and this complexity has a tendency to grow in evolution. Notice that the connection between the complexity of the genetic code and the complexity of the organism is not direct. Under some conditions, one can show that the attractor complexity also has a tendency to increase during evolution. Notice that the relation between organism complexity and the corresponding code complexity can be, in principle, arbitrary. Our assertion can be considered as a mathematical formulation of the arrow-of-complexity hypothesis [24]: the complex functional organization of the most complex products of open-ended evolutionary systems has a general tendency to increase with time.
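Since K(C) itself is uncomputable, a practical upper estimate is obtained from any lossless compressor: the compressed length of C bounds K(C) from above, up to an additive constant. A minimal sketch follows; it is our illustration, with zlib as the compressor and an arbitrary four-letter alphabet and length.

```python
import random
import zlib

# Computable upper estimate of Kolmogorov complexity via compression.
# A highly regular "code" compresses to almost nothing, while a random one
# stays near its entropy of 2 bits per symbol: compressed length separates
# low-complexity from high-complexity sequences even though K itself is
# uncomputable.

random.seed(3)
n = 10_000
regular = ("ACGT" * (n // 4)).encode()                       # periodic, low K
rnd = "".join(random.choice("ACGT") for _ in range(n)).encode()  # high K

def upper_bound_bits(code: bytes) -> int:
    """Compressed length in bits: an upper estimate of K(code)."""
    return 8 * len(zlib.compress(code, 9))

print(upper_bound_bits(regular))   # tiny: the pattern is fully compressible
print(upper_bound_bits(rnd))       # near 2*n bits: essentially incompressible
```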

So, it is shown that organisms have a tendency toward complexification, but the next fundamental question concerns the evolution rate. What do a "slow" evolution rate and a "fast" evolution rate mean? From a mathematical point of view, this question needs formalization. Consider an analogy: how can we distinguish a mountain from a hill? Fortunately, since the genetic code is discrete, for evolution we can overcome this difficulty without fuzzy procedures. These ideas are inspired by a remarkable paper by L. Valiant [303].

Assume we are looking for a code X of size |X| which satisfies a number M of constraints necessary for the organism's viability. If the numbers N and M are both large, we obtain a typical hard combinatorial problem. Important examples of such hard problems are given by the K-SAT (K-satisfiability) problem (considered one of the most famous due to the seminal work [51]), integer and Boolean linear programming, the problem of searching for a Hamiltonian cycle in a graph, and many others [52]. The list of such problems contains thousands of examples, and many of them are important for bioinformatics [83]. They have received great attention during the last decades [2, 49, 52, 56, 63, 80, 183, 184, 245].

For such problems as Boolean and integer programming and K-SAT, we introduce a parameter β = M/N that plays a crucial role: the ratio of the number of restrictions to the number of free variables, where N is usually proportional to |X|. In general, to resolve a hard combinatorial problem, we need exponentially large resources: if we use a bounded memory, the algorithm running time is more than O(exp(bN)) with b > 0; if the running time is bounded, then we must use an exponentially large memory. In some cases, however, we are capable of resolving the problem in Poly(N) elementary steps, where Poly denotes a polynomial in N. Since exp(bN)/Poly(N) → ∞ as N → ∞, we can distinguish two classes of problems: easy ones, where the running time is polynomial in the problem size, and


hard ones, where this time grows faster than any polynomial (usually the time is exponentially large). Using these ideas, we can say that an evolution algorithm is fast if this algorithm finds a correct genetic code (one that makes a viable organism) within a running time polynomial in N, and slow if the algorithm works, say, during an exponential running time. Then, the key question can be reformulated as follows: is there a fast "evolution" algorithm capable of resolving complicated evolution problems? As precise notions of an "evolution" algorithm, we can consider gradient or greedy algorithms, random search, etc., or combinations of gradient algorithms with random search (simulated annealing).

To answer this question, we apply new ideas proposed recently for hard combinatorial problems by mathematicians and physicists [1, 2, 49, 80, 183, 184]. Namely, in many hard combinatorial problems, we observe a phase transition. If the parameter β is less than some critical level, β < βc, there exist natural gradient algorithms that resolve the problem within Poly(N) running time. For large β, we have no solutions. Thus, we observe a phase transition in these hard combinatorial problems. This allows us to demonstrate that an evolution of Darwin's type can, in a gradual manner, create a complicated multicellular organism.

Here, two parameters play a decisive role. The first parameter, β, is, roughly speaking, the number of ecological constraints divided by the number of genes. The second one, K, can be interpreted as gene redundancy. If β < A(K)·2^K, where A(K) is a slowly growing function, i.e. we have sufficient genetic freedom (the "Freedom Principle" proposed by Prof. A. Kondrashov), evolution proceeds; otherwise it stops. Numerical experiments confirm this assertion, and thus "mutations plus natural selection have led to the marvels found in Nature."

In Chapter 4, we also discuss the robustness of the centralized networks whose dynamical properties were investigated in Chapter 2. Here, we discuss connections between viability, robustness and functioning rate. We show that centralized networks can be robust (viable) and flexible, i.e. have a number of local attractors. They can thus support a great multistationarity. However, we show that there is a slow-down effect: if a center controls a number of satellites, the network rate must be bounded and can be estimated via the network parameters. This effect restricts the possibilities of centralized networks controlled by the center. There are also other possible functioning regimes, when satellites control the center or satellites interact. These results are applied to estimate the viability of an "empire structure." For a simple illustration, we compare the contemporary Russian Federation and the "Stalin" empire.

Furthermore, we consider the so-called Standard model introduced in ecology to study the famous plankton paradox [124]. The plankton paradox is connected with the known ecological principle that a number of species cannot share a single resource. Actually, however, we sometimes observe that numerous species use the same resource. To resolve this paradox, the Standard model was introduced [124]. We study this model in a more general case than in previous works and consider a number of

species under random fluctuations. We find an asymptotic formula for the number of coexisting species that connects the viability, the number of species and some other parameters. We also note that when random perturbations are absent, the Standard model can be reduced to the Lotka–Volterra model with n resources (studied in Chapter 2) and, therefore, it also exhibits all kinds of dynamical chaos.

Finally, Chapter 4 can be summarized as follows. We connect the concepts of structural stability and genericity with the Kolmogorov complexity theory in order to explain the main properties of biological evolution. To describe biological systems mathematically, we consider the main classical models of mathematical biology (circuits, reaction-diffusion equations). Recall that R. Thom [268] proposed the concept of structural stability to describe complex structures observed in biology and other applications (the so-called "stability dogma," as it was named by J. Guckenheimer and P. Holmes [99]). These stability ideas have been successfully applied by many authors (catastrophe theory). However, this fundamental concept also meets some serious difficulties (see an interesting discussion in [254]). Quite opposite ideas were proposed in [301] and [97]. Based on some experimental data, L. Van Valen concluded that biological species are unstable, but evolution can stabilize them. This assertion (the so-called Red Queen hypothesis) drew upon the apparently constant probability of extinction in families of related organisms.

In Chapter 4, we propose a mathematical basis for the Van Valen–Gromov–Carbone instability ideas. Under large random perturbations, an organism with a fixed complicated structure is viable only within some bounded time interval; there is a relation between the complexity of the organism's genetic code and its viability. Although organisms of fixed structure are unviable, it is possible that populations of evolving organisms are viable eternally with nonzero probability. This evolution may be gradual and, nonetheless, in a sense, fast. Briefly, organisms are fragile and not eternal, but organism evolution may go on eternally.

Many results of Chapter 4 are joint with D. Yu. Grigoriev. Results on the Lotka–Volterra systems are joint with V. A. Kozlov, and results on centralized genetic networks have been obtained together with O. Radulescu. I have greatly benefited from the comments of many of my colleagues, in particular, A. K. Abramian, E. L. Aero, D. Yu. Grigoriev, S. Genieys, P. Gordon, V. A. Kozlov, I. A. Molotkov, N. N. Petrov, O. Radulescu, J. Reinitz, V. M. Schelkovich, A. D. Vilesov, V. Volpert and A. Weber. I am grateful to Vl. Kreinovich for all of his help. I am grateful to the Department of Applied Mathematics at the University Claude Bernard, Lyon, for their hospitality in 1998–2003, where the author obtained a part of these results while working with V. Volpert; to the Mathematical Institute of the University of Rennes I, where the author had a fruitful collaboration with D. Grigoriev and O. Radulescu in 2004–2010; and to Bonn University, where the author worked with A. Weber and D. Grigoriev in 2012–2013. I am thankful to the Department of Biology of Montpellier University for various invitations (2009, 2011 and 2012).

Contents

Preface | v

1 Introduction | 1
1.1 Flows and semiflows | 1
1.2 Dissipative semiflows. Attractors | 4
1.3 Invariant manifolds and slaving principle | 5
1.4 Relatively simple behavior: gradient systems | 7
1.5 Monotone systems | 10
1.6 Complicated large time behavior | 12
1.6.1 General facts and ideas | 12
1.6.2 Hyperbolic dynamics | 15
1.6.3 Persistence of elementary hyperbolic sets | 17
1.6.4 Chaotic dynamics | 18

2 Complex dynamics in neural and genetic networks | 22
2.1 Realization of vector fields (RVF) | 23
2.1.1 Some definitions | 23
2.1.2 Applications of RVF | 24
2.2 General scheme of RVF method for evolution equations | 25
2.3 Control of attractor and inertial dynamics for neural networks | 29
2.3.1 Attractors for neural networks with discrete time | 31
2.3.2 Graph growth | 33
2.3.3 Dynamics of time discrete centralized networks | 34
2.3.4 Bifurcations and chaos onset | 36
2.3.5 Realization of n-dimensional maps by time discrete centralized networks | 40
2.3.6 Attractors and inertial manifolds of the Hopfield system | 41
2.4 Complex dynamics for Lotka–Volterra systems | 43
2.4.1 Summary of the main results for the Lotka–Volterra system | 44
2.4.2 Lotka–Volterra model with n resources | 45
2.4.3 Change of variables | 46
2.4.4 Properties of fields from G | 47
2.4.5 Chaos in the Lotka–Volterra model with n resources | 50
2.4.6 Lotka–Volterra systems generating Lorenz dynamics | 51
2.4.7 Permanency and strong persistence | 53
2.4.8 Strong persistency and chaos | 56
2.4.9 Concluding remarks | 58
2.5 Standard model | 59
2.5.1 Model formulation | 59
2.5.2 General properties of Standard model | 60
2.5.3 Equilibria for the case of a single resource | 61
2.5.4 Numerical results for the case of a single resource | 62
2.5.5 Reductions of Standard model | 63
2.6 Systems of chemical kinetics | 66
2.6.1 Model | 67
2.6.2 Decomposition | 68
2.6.3 Reduction to shorted system by slow manifolds (quasiequilibria) | 69
2.6.4 Control of slow dynamics | 70
2.6.5 Checking oscillations, bifurcations, and chaos existence | 74
2.6.6 Some numerical results. Why are networks large? | 75
2.6.7 Algorithm | 77
2.7 Quadratic systems | 80
2.7.1 System (2.237) can be reduced to systems of Hopfield's type | 80
2.7.2 Auxiliary approximation lemma | 83
2.7.3 Invariant manifolds for the Hopfield system | 84
2.8 Morphogenesis by genetic networks | 87
2.8.1 Systems under consideration. Network models | 87
2.8.2 Patterning problems | 90
2.8.3 Patterning and hierarchical modular structure of genetic networks | 92
2.8.4 Generation of complicated patterns | 92
2.8.5 Approximation of reaction-diffusion systems by gene networks | 94
2.9 Centralized gene networks | 96
2.9.1 Existence of solutions | 98
2.9.2 Reduced dynamics | 99
2.9.3 Complex behavior in centralized gene networks | 100
2.9.4 How positional information can be transformed into the body plan of a multicellular organism | 102
2.9.5 Bifurcations of centralized network dynamics | 104
2.10 Computational power of neural networks and graph growth | 106
2.10.1 Realization of Turing machines by neural networks | 106
2.10.2 Emergence of Turing machines by networks of a random structure | 107
2.11 Appendix | 110
2.11.1 Proof of Proposition 2.16 | 110
2.11.2 Proof of Proposition 2.15 | 111
2.11.3 A proof of Lemma 2.9 | 112
2.11.4 Algorithm of neural dynamics control | 114
2.12 Summary | 115

3 Complex patterns and attractors for reaction-diffusion systems | 117
3.1 Whitham method for dissipative systems | 118
3.1.1 General ideas | 118
3.1.2 Quasiequilibrium (QE) approximation and entropy | 119
3.1.3 Applications to phase transition theory. Scalar Ginzburg–Landau equation | 120
3.1.4 Pattern formation in Drosophila | 124
3.2 Chaotic and periodic chemical waves | 130
3.2.1 Introduction | 130
3.2.2 A priori estimates, global existence and uniqueness | 132
3.2.3 Invariant manifold | 134
3.2.4 Coordinates in a neighborhood of M0 | 134
3.2.5 Change of variables | 136
3.2.6 A priori estimates | 138
3.2.7 Main theorem | 140
3.2.8 Periodic and chaotic waves | 142
3.2.9 Description of the model | 143
3.2.10 Transformation of the equations | 143
3.2.11 Existence of invariant manifolds | 147
3.2.12 Existence of periodic and chaotic waves | 147
3.3 Complicated large time behavior for reaction-diffusion systems of the Ginzburg–Landau type | 150
3.3.1 Mathematical model and physical background | 152
3.3.2 Control of kink dynamics | 154
3.3.3 Control of interactions in Hopfield equations | 156
3.3.4 Implementation of complicated dynamics and Turing machines | 157
3.3.5 Memory and performance rate | 157
3.4 Reaction-diffusion systems realizing all finite dimensional dynamics | 158
3.4.1 Introduction | 158
3.4.2 Statement of the problem | 161
3.4.3 Function spaces | 162
3.4.4 Assumptions to f and g | 163
3.4.5 Main results | 165
3.4.6 Strategy of proof | 167
3.4.7 Problem (3.118)–(3.124) defines a local semiflow | 169
3.4.8 Global semiflow existence | 169
3.4.9 Construction of special linear operator L_N | 170
3.4.10 Estimates for semigroups | 173
3.4.11 Reduction to a system with fast and slow variables | 176
3.4.12 Some preliminaries | 178
3.4.13 Estimates of solutions to system (3.208)–(3.210) | 180
3.4.14 Existence of the invariant manifold | 181
3.4.15 Reduction of dynamics to the invariant manifold | 183
3.4.16 Auxiliary estimates | 184
3.4.17 Lemma on control of matrices M (property D) | 186
3.4.18 Proof of theorems | 188
3.4.19 Conclusion | 189
3.5 Appendix: theorems on invariant manifolds | 189
3.6 Summary | 192

4 Random perturbations, evolution and complexity | 193
4.1 Introduction | 193
4.1.1 Viability problem | 194
4.1.2 Evolution, graphs and dynamical networks | 195
4.1.3 Main problems and some ideas | 197
4.2 Neural and genetic networks under random perturbations | 198
4.2.1 Systems under consideration | 198
4.2.2 Transition functions | 199
4.2.3 Assumptions on random processes ξ | 201
4.2.4 Evolution in the time discrete case | 201
4.2.5 Assumptions to fluctuations in the time continuous case | 203
4.2.6 Network viability under random fluctuations | 205
4.2.7 Complexity | 206
4.2.8 Evolution model for the time continuous case | 206
4.3 Instability in random environment | 207
4.3.1 Instability of circuits | 207
4.3.2 Instability of time continuous systems | 209
4.3.3 Viability for network models | 210
4.4 Robustness, attractor complexity and functioning rate | 216
4.4.1 Some toy models and numerical simulations | 216
4.4.2 Reductions for the toy model | 217
4.4.3 Multistationarity of the toy model | 218
4.4.4 Robustness and the stability of attractors | 220
4.4.5 Generalized toy model | 221
4.4.6 Results of simulations | 222
4.4.7 Why Empires fall | 223
4.5 Evolution as a computational problem | 226
4.5.1 Complex organ formation as a hard combinatorial problem | 226
4.5.2 Some facts about the k-SAT model | 228
4.5.3 Gene network model and morphogenesis | 229
4.5.4 Evolution | 230
4.5.5 Capacitors and centralized networks | 231
4.5.6 Hebb rule and canalization | 231

Contents |

4.5.7 4.5.8 4.5.9 4.5.10 4.6 4.6.1 4.6.2 4.6.3 4.6.4 4.7 4.7.1 4.7.2 4.7.3 4.7.4 4.7.5 4.8 4.8.1 4.8.2 4.8.3 4.9 4.10 4.10.1 4.10.2

xvii

Canalization and decanalization as a phase transition. Passage through the bottleneck | 233 Simulation of evolution and mutation effects | 234 Other NP-complete problems in evolution | 239 Evolution of circuit population | 240 Kolmogorov complexity and genetic code | 248 Model of evolution | 248 Genetic code complexity | 249 Viability and unviability | 249 Proof of Theorem 4.24 on the complexity of gene code | 251 Viability of reaction-diffusion systems | 251 Statement of problem | 253 Reaction-diffusion systems with random parameters | 254 Existence of solutions of noisy systems | 256 Unviability for generic noises g | 256 Biological evolution and complexity | 262 Synchronization in multicellular systems | 265 General approach | 265 Linear analysis of synchronization stability | 268 Space discrete case | 270 Summary | 272 Appendix | 273 Estimate of the number of genes m via complexity C1 | 275 Estimates of E and C2 | 276

Bibliography | 279 Index | 293

1 Introduction

This chapter contains prerequisite material, in particular, some basic concepts and definitions of dynamical system theory that play a central role in the book. Details can be found in [12, 101, 108, 129, 135, 204, 205, 207, 208, 211, 228, 232, 254] and other monographs and reviews. In this chapter, we consider only the results and definitions essential in what follows. We state definitions of attractors, hyperbolic sets and invariant manifolds, and formulate some important results from the theory of monotone and gradient dynamical systems.

1.1 Flows and semiflows

The simplest class of finite dimensional dynamical systems with continuous time is defined by systems of ordinary differential equations

x_t = F(x), (1.1)

where F is a sufficiently smooth vector field on a smooth finite dimensional compact manifold M, and x ∈ M. Typical examples of such manifolds are the n-dimensional torus T^n and the sphere S^n. Let us consider the Cauchy problem for (1.1) with the initial condition x(0) = y. Since F is a smooth field, this Cauchy problem has a unique solution for all t ∈ (−∞, +∞), and we obtain a trajectory t → x(t, y) such that x(0, y) = y. We can then define a family of maps S^t : X → X, where X is the phase space, by the relation S^t y = x(t, y).

To generalize this example, let us consider a family of maps S^t : X → X depending on t ∈ (−∞, +∞), where X is a Banach space. Assume this family has the following properties:

(i) S^0 = I, (1.2)
(ii) S^{t+τ} = S^t S^τ for all t, τ ∈ R, (1.3)
(iii) S^t ∈ C^0(X, X) for each fixed t, (1.4)
(iv) the map (t, x) → S^t x is continuous in (t, x) ∈ (−∞, ∞) × X, (1.5)

where I : X → X is the identity operator. A family S^t satisfying (i)–(iv) is said to be a flow in X. For (1.1), we take X = M. A flow can also be defined on a manifold M with a boundary ∂M if the vector field is tangent to the boundary.

Dynamical systems with discrete time t ∈ Z can be defined by diffeomorphisms x → F(x) such that F(M) ⊂ M. In some cases, we can obtain such a system from a flow S^t by the so-called Poincaré map [232]. The Poincaré maps are useful when we

are dealing with nonautonomous equations (1.1), where F = F(t, x) is a T-periodic function of t [208].

To investigate models defined by partial differential equations or by systems of coupled oscillators, we extend this approach, admitting that x lies in an infinite dimensional Banach space B (which serves as a phase space). In this case, we consider differential equations of the form

u_t = Au + F(u), (1.6)

where F is a sufficiently smooth (for example, C^r-smooth, r ≥ 1) map, F ∈ C^r(B, B), and A is a linear operator A : Dom A → B. For bounded operators A, equations (1.6) were first investigated in the pioneering work of Peter Bohl [32]. The theory of the evolution equations (1.6) with bounded operators A can be found, for example, in [54]. To study systems of partial differential equations, we must investigate (1.6) with unbounded linear operators A. Consider, for example, the parabolic equation

u_t = Δu + f(x, u, ∇u) (1.7)

with initial and boundary conditions

u(x, 0) = ϕ(x), (1.8)
u(x, t) = 0, x ∈ ∂Ω, (1.9)

where u(x, t) is an unknown function defined for x ∈ Ω, and Ω is a connected domain with a smooth boundary ∂Ω. Here, one can apply the following standard approach [108, 167]. Let B = L^p(Ω), p ∈ (1, ∞), be the Banach space of measurable functions u such that

‖u‖_p = ( ∫_Ω |u(x)|^p dx )^{1/p} < ∞,

where ‖·‖_p denotes the norm in these spaces. Then, the operator Au = Δu with domain Dom A ⊂ L^p(Ω) is sectorial [108, 167]. If p = 2, we deal with the Hilbert space H = L^2(Ω), and our operator is self-adjoint and negative definite in H. We can then introduce the fractional spaces associated with B by B^α = Dom A_1^α. These spaces are equipped with the norms

‖u‖_{p,α} = ‖A_1^α u‖_p,

where α ≥ 0, A_1 = A − aI, a > 0, and I is the identity operator. The theory of fractional spaces is well developed, see [108, 167, 260]. The operator A defines a semigroup exp(At) via the linear evolution equation

u_t = Au, u(0) = v, (1.10)


where exp(At)v = u(t). This allows us to rewrite (1.7) as the abstract evolution equation (1.6). Here, F is the map associated with the nonlinear term f in (1.7), i.e. F(u) : u → f(x, u, ∇u). The local in time existence of solutions of (1.6) can be obtained, for example, if, for some α ∈ (0, 1), F is a C^1-map from some bounded subdomain U ⊂ B^α to B. To check this property, one applies the Sobolev embeddings [260]. Rewriting (1.6) as the integral equation

u(t) = exp(At)u_0 + ∫_0^t exp(A(t − s)) F(u(s)) ds, (1.11)

we can establish the existence of solutions on a bounded time interval by the standard contracting map principle in B^α [108]. In this case, (1.6) defines a local semiflow S^t defined for t ∈ [0, T). To obtain the existence of solutions to (1.6) for all t > 0, we need an a priori estimate guaranteeing that our solutions are bounded in a weak norm. There are different methods that allow us to obtain such estimates (see, for example, [167, 258, 311]). Note, however, that in general f should satisfy some conditions; otherwise, the norms of solutions of (1.6) may increase to +∞ within a finite time interval (the blow-up effect). Blow-up effects are well studied. For most systems considered in this book, blow-up effects are excluded by a priori estimates that can be checked in a quite straightforward way.

If we have found an a priori estimate, then solutions of (1.6) exist for all times t, and our evolution equation defines a global semiflow (semigroup) in an appropriate Banach space B having the properties

(i) S^0 = I, (1.12)
(ii) S^{t+τ} = S^t S^τ for all t, τ ∈ R_+, (1.13)
(iii) S^t ∈ C^0(B, B) for each fixed t > 0, (1.14)
(iv) the map (t, x) → S^t x is continuous in (t, x) ∈ (0, ∞) × B. (1.15)

Ordinary differential equations (1.1) generate global semiflows under the following assumptions. Let us consider an open connected domain D ⊂ R^n with a smooth boundary ∂D (for example, a ball B^n_R in R^n of radius R centered at 0). Then, in order to obtain a global semiflow, we can suppose that the vector field F is directed strictly inward at the boundary ∂D:

n(x) · F(x) < 0 for each x ∈ ∂D, (1.16)

where n(x) is the outward normal vector to ∂D at the point x.

In the following sections, we review some basic notions of dynamical system theory.
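The flow properties (1.2)–(1.5) can be checked numerically for a concrete scalar field. The sketch below is not from the book: the field F(x) = x − x^3 and all numerical parameters are illustrative choices. It approximates the flow map S^t by a fixed-step Runge–Kutta integrator and verifies the semigroup identity S^{t+τ}y = S^t(S^τ y) up to integration error.

```python
def rk4_step(f, x, h):
    """One classical Runge-Kutta step for x' = f(x)."""
    k1 = f(x)
    k2 = f(x + 0.5 * h * k1)
    k3 = f(x + 0.5 * h * k2)
    k4 = f(x + h * k3)
    return x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def S(t, y, f, h=1e-3):
    """Approximate the flow map S^t y by RK4 with step h."""
    x = y
    for _ in range(int(round(t / h))):
        x = rk4_step(f, x, h)
    return x

F = lambda x: x - x ** 3        # illustrative field with equilibria -1, 0, 1
y = 0.2
a = S(1.5, y, F)                # S^{t+tau} y with t + tau = 1.5
b = S(0.5, S(1.0, y, F), F)     # S^t (S^tau y) with t = 0.5, tau = 1.0
assert abs(a - b) < 1e-9        # semigroup property (1.3)
```

For large t, the same trajectory approaches the stable equilibrium x = 1, a first glimpse of the attraction phenomena discussed next.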


1.2 Dissipative semiflows. Attractors

Many physical, chemical and biological effects, such as fluid viscosity, diffusion and protein degradation, lead to energy dissipation. The concept of a dissipative system is a mathematical formalization of phenomena that we observe in systems with dissipation. In dissipative systems, we often observe a typical picture of global semiflow trajectories in the phase space B, where a "small" set attracts all trajectories of the semiflow. To formulate this in more precise mathematical terms, we introduce different concepts of "attraction."

Let us consider a global semiflow S^t on X. The trajectory of a point x is the map t → S^t x from [0, +∞) to X, and the trajectory of a set B is the map t → S^t B, where S^t B denotes the image of B under the action of the semiflow at the time moment t. The positive orbit of the set B is the union of all S^t B with t > 0, and the orbit of B is the union of S^t B over all t ∈ R. A set I is positively invariant if S^t I ⊂ I for all t ≥ 0 [102]. A set I is invariant if S^t I = I for all t (of course, then S^t is a flow, or trajectories of points of I can be defined for all t < 0).

Definition 1.1. A set A ⊂ X attracts a set B if

lim_{t→+∞} dist_X(S^t B, A) = 0, (1.17)

where dist_X(B, A) denotes the distance between the sets B and A in the norm of X:

dist_X(B, A) = sup_{x∈B} inf_{y∈A} ‖x − y‖_X.

A set A attracts a point x if (1.17) holds with B = {x}.

Definition 1.2. We say that the global semiflow S^t is point dissipative if there is a bounded set D that attracts each point of X.

Definition 1.3. We say that the global semiflow S^t is dissipative if there is a bounded absorbing set A attracting each bounded subset of X.

The following fundamental concept is that of the attractor. There is a large variation in attractor definitions. We shall use the following definition, which is popular in mathematical physics [102] (see also [18, 101, 155]).

Definition 1.4. We say that the set A is a compact global attractor of the semiflow S^t if this set is compact, invariant under S^t and attracts each bounded subset of X.

Remark. The other main definition variants are as follows. We can require that an attractor attracts each point of X. Such an attractor is, in general, a subset of the global attractor. For example, in gradient systems, the global attractor contains not only all


stable stationary solutions (equilibria) and saddle solutions, but also repellers, and the stable and unstable manifolds connecting different equilibria. Physically, the unstable connecting manifolds can be interpreted as transient regimes. The attractor which attracts all points does not contain unstable manifolds. An interesting definition is given by J. Milnor [186]. Let us assume that the phase space X is endowed with a measure μ. Then, the Milnor attractor is a set A such that the basin of attraction B(A), consisting of all points whose orbits converge towards A, has a strictly positive measure. Moreover, for any closed proper subset A′ ⊂ A, the set difference B(A) \ B(A′) also has a strictly positive measure. The Milnor attractor does not contain saddle invariant sets, repellers or unstable manifolds. The statistical attractor was suggested by Yu. Ilyashenko [130].

The existence of global attractors can be established for large classes of dynamical systems. The following result is essentially due to V. Pliss [208], see also [102]:

Theorem 1.5. If S^t is a point dissipative global semiflow on a locally compact metric space X, then S^t has a compact global attractor A.

During the 1970s and 1980s, it was understood that many parabolic partial differential equations (PDEs), hyperbolic PDEs with dissipative terms and systems of PDEs generate semiflows with global attractors. Often, these attractors are not only compact, but also have finite Hausdorff and fractal dimensions [101, 102, 129, 155, 265]. The main physical reason behind attractor existence is that many dissipative semiflows defined by PDEs are determined by a few main modes. The first work where this fundamental concept was realized by a rigorous mathematical method, for the Navier–Stokes equations, is [77]. We state these ideas in the following section.
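A one-dimensional numerical illustration of Definitions 1.1–1.4 (not from the book; the field and all parameters are illustrative): for x_t = x − x^3, the global attractor is the interval A = [−1, 1], consisting of the three equilibria −1, 0, 1 and the orbits connecting them. The sketch evolves a bounded set B of initial points and watches the semidistance from (1.17) decay.

```python
def euler_flow(x, t, h=1e-3):
    """Integrate x' = x - x^3 up to time t by the explicit Euler scheme."""
    for _ in range(int(round(t / h))):
        x = x + h * (x - x ** 3)
    return x

def semidist_to_A(points):
    """sup_{x in B} inf_{y in A} |x - y| for the attractor A = [-1, 1]."""
    return max(max(abs(p) - 1.0, 0.0) for p in points)

B = [-3.0 + 6.0 * k / 40 for k in range(41)]        # a bounded set of initial data
d0 = semidist_to_A(B)                               # = 2.0 at t = 0
d1 = semidist_to_A([euler_flow(x, 1.0) for x in B])
d5 = semidist_to_A([euler_flow(x, 5.0) for x in B])
assert d0 == 2.0 and d1 < d0 and d5 < 1e-3          # dist(S^t B, A) -> 0
```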

1.3 Invariant manifolds and slaving principle

The following principle plays a central role in the investigation of many dissipative systems. Following H. Haken [100], we call it the slaving principle: under some natural conditions, the dynamics of fast modes is captured completely by the dynamics of slow modes. The mathematical formalization of this nonrigorous assertion is connected with the concepts of the invariant manifold, center manifold and slow manifold [50, 72, 108, 135, 208, 228, 232, 265]. To illustrate these ideas, let us consider a system where all variables u can be decomposed into a finite number of slow modes q and fast modes w. The fast mode can contain an infinite number of components, i.e. lie in a Banach or Hilbert space. The following system can be considered as a simple example:

q_t = ϵQ(q, w), (1.18)
w_t = Aw + F(q), (1.19)

where q ∈ R^n, Q and F are sufficiently smooth maps, A is a self-adjoint operator, w ∈ B, where B is a Banach phase space, and ϵ is a small positive parameter. Let us assume that A is negative definite, i.e. the spectrum of the operator A lies in the negative half plane and is separated by a barrier away from the imaginary axis:

Re Spec A < −b_0. (1.20)

We suppose that this barrier b_0 > 0 is independent of the small parameter ϵ. The barrier property leads to the estimate

‖exp(At)w‖ ≤ C exp(−b_0 t)‖w‖, C > 0. (1.21)

We can then represent solutions of (1.19) as follows:

w(t) = exp(At)w(0) + ∫_0^t exp(A(t − s)) F(q(s)) ds. (1.22)

To explain the main idea, let us assume, temporarily, that q(s) in F on the right-hand side of (1.19) does not depend on s. Then, in (1.22), F(q(s)) = F(q) is a constant, the integral in (1.22) can be computed, and we obtain

w(t) = exp(At)(w(0) + A^{−1}F(q)) − A^{−1}F(q). (1.23)

We see that for large times t ≫ b_0^{−1} and small ϵ, the fast component w is a function of the slow mode q:

w_as(t) = W(q) = −A^{−1}F(q(ϵt)). (1.24)

For small ϵ, this relation gives a good approximation for w that works for large times. One can expect that the precision of this formula is O(ϵ). The function w_as depends on t via q(ϵt), and it is a function of slow time.

Let us give some formal definitions. We say that a global semiflow S^t in a Banach space B has a finite dimensional positively invariant manifold M if M is a manifold and a positively invariant set, i.e. S^t M ⊂ M for each t ≥ 0. This means that the invariant manifold consists of semiorbits. We will consider invariant manifolds embedded in B by maps, i.e. manifolds that are graphs of sufficiently smooth maps:

M = {u = (q, w) : w = W(q), q ∈ U ⊂ R^n},

where W ∈ C^r(U, B) is a C^r-smooth map from U to B, and U is an open connected domain in R^n with a smooth boundary. Sometimes it is difficult to obtain an invariant manifold; however, we can construct a locally invariant one. We say that a global semiflow S^t in a Banach space B has a finite dimensional locally invariant manifold M if M is a finite dimensional manifold and if, for each u_0 ∈ M, there is some τ_0 > 0 such that u(t, u_0) = S^t u_0 ∈ M for t ∈ [0, τ_0), i.e. a part of the orbit of u_0 lies in M.


The manifold is locally attractive if there is an open neighborhood U of this manifold such that the manifold attracts all bounded sets B_0 ⊂ U (Definition 1.1). Sometimes one can prove the existence of a so-called inertial manifold, which attracts all bounded sets [50, 170, 265]. The existence of a smooth finite dimensional inertial manifold means that the global semiflow can be completely reduced to a semiflow defined by a finite dimensional system of differential equations. If system (1.18), (1.19) has an inertial manifold M_I with equation w = W(q), this means that for large times, the dynamics of this system can be described by a finite dimensional system of differential equations

q_t = ϵQ(q, W(q)) = ϵQ̄(q). (1.25)

This reduction can be considered as a mathematical formalization of the intuitive slaving principle. In this case, the global attractor exists, has a finite dimension d ≤ dim q = n, and all attractors of semiflow (1.18), (1.19) lie on M_I. However, the conditions that guarantee the existence of an inertial manifold are restrictive [50, 170, 265]. We can, however, find inertial manifolds with delay [61], which exist under essentially less restrictive conditions. Here, the reduced dynamics is defined by differential equations with a delay.

There are two main methods that allow us to prove invariant manifold existence: the Hadamard graph transform method and the Lyapunov–Perron method [108]. Both methods are based on the contracting map principle. In Section 3.5, one can find examples of theorems on invariant manifold existence.
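The slaving relation (1.24) can be seen numerically in a scalar version of (1.18), (1.19). The sketch below is not from the book: the choices Q(q, w) = −q + w, F(q) = sin q, and the scalar operator A = −4 (so b_0 = 4) are illustrative. After the fast transient of duration ≈ b_0^{−1}, the fast variable tracks the slow manifold w = W(q) = −A^{−1}F(q) = sin(q)/4 with O(ϵ) accuracy.

```python
import math

eps, a = 0.01, -4.0          # small parameter and the "fast" operator A = a < 0

def rhs(q, w):
    """Right-hand sides of (1.18), (1.19) for illustrative Q and F."""
    return eps * (-q + w), a * w + math.sin(q)

q, w, h = 2.0, 1.0, 1e-3
for _ in range(5000):        # integrate to t = 5, well past the fast transient
    dq, dw = rhs(q, w)
    q, w = q + h * dq, w + h * dw

# Slaving: w should track W(q) = -A^{-1} F(q) = sin(q) / 4 up to O(eps)
err = abs(w - math.sin(q) / 4.0)
assert err < 5e-3
```

The observed error is of order ϵ/|a|, in line with the O(ϵ) precision claimed for (1.24).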

1.4 Relatively simple behavior: gradient systems

Let us consider the initial boundary value problem (IBVP) defined by (1.7), (1.8) and (1.9). Our goal is to study the large time behavior of the trajectories. However, this is a difficult problem, especially in the more general situation where u is a vector valued function and we are dealing with a system of quasilinear parabolic PDEs of the second order. In the next two sections, we consider some cases where one can obtain qualitative information on the large time behavior of a semiflow. Consider the reaction-diffusion equation

u_t = ϵ^2 Δu + f(x, u), (1.26)

with initial and boundary conditions (1.8), (1.9). These equations have a number of applications, for example, in population dynamics, chemistry, liquid crystals, phase transitions and others [100, 108, 193, 196]. Let us assume that f ∈ C^1 satisfies a sign condition, for example, f(x, u)u < 0 for sufficiently large |u|. Then, solutions of (1.26) are a priori bounded due to the maximum principle [258], and we conclude that IBVP (1.26), (1.8), (1.9) defines a global semiflow S^t on B = L^p(Ω) with p > d, d = dim Ω [108].

This system belongs to the class of gradient systems. Indeed, let us define a functional V[u],

V[u] = ∫_Ω ( ϵ^2 (∇u)^2 / 2 + Φ(x, u) ) dx, (1.27)

where Φ_u(x, u) = −f(x, u), i.e. −Φ is an antiderivative of f with respect to u. Equation (1.26) implies

dV/dt = −∫_Ω u_t^2 dx ≤ 0 (1.28)

along the trajectories u(x, t) of the semiflow S^t. Therefore, the Lyapunov functional V[u(t)] strictly decreases along trajectories of S^t which are not equilibria. Since V is continuous on B^α and the orbits are relatively compact sets, V is a constant function on ω-limit sets. The ω-limit set is invariant, and therefore it can only contain equilibrium solutions [211]. We denote by E_f the set of all equilibria. There are two main cases when all trajectories converge to equilibria. Let us formulate the remarkable theorem of Simon [252].

Theorem 1.6. Assume f : Ω̄ × R → R is continuous and real analytic in u. Then, any bounded solution of IBVP (1.26), (1.9) converges to an equilibrium of (1.26), (1.9).

The proof uses the Łojasiewicz inequality for analytic functions (for an outline of this proof, see [211]).

An important case is when all equilibria are hyperbolic. This means the following. Assume u_eq(x) = U is an equilibrium, i.e. a stationary solution of (1.26), (1.9). Let us introduce the linearized evolution equation

v_t = ϵ^2 Δv + f_u(x, U)v = L_U v. (1.29)

Assume that the spectrum of L_U has no intersections with the imaginary axis, i.e. the exponential dichotomy property [54] holds. Then, we say that U is a hyperbolic equilibrium (for more details about hyperbolic equilibria and sets, see Section 1.6.2 and [135, 197, 228]). Results on invariant manifolds then show that with each hyperbolic equilibrium we can associate two smooth manifolds, namely, the unstable manifold M^u(U) and the stable manifold M^s(U). Locally, they are close to the corresponding linear subspaces L^u and L^s.

It is difficult to check this hyperbolicity property, in particular, for multidimensional problems, dim Ω > 1. However, if the nonlinear term f is not analytic, we can use the important result of Brunovský–Poláčik [41], which shows that, in a sense, almost all reaction-diffusion equations generate a "simple" behavior. Before formulating it, let us recall the notion of a Morse–Smale system. Finite dimensional Morse–Smale systems play an important role in dynamical system theory as an example of systems with a simple behavior [128, 135, 197, 228, 232, 254]. Systems with a "complicated" behavior can be obtained as perturbations of Morse–Smale systems [128]. This strategy will be used in this book.


We say that a dynamical system is Morse–Smale if
(i) there are only a finite number of equilibria, each hyperbolic with smooth stable and unstable manifolds;
(ii) there are only a finite number of periodic orbits, each hyperbolic with smooth stable and unstable manifolds;
(iii) the stable and unstable manifolds of equilibria and periodic orbits intersect transversally;
(iv) the union of the equilibria and periodic orbits coincides with the nonwandering set NW(S^t).

Recall that the nonwandering set is defined as the set NW of points x ∈ B such that for each neighborhood V of x and for each t_0, there exists a t > t_0 such that S^t V ∩ V is not empty. The Brunovský–Poláčik theorem can be formulated as follows.

Theorem 1.7. There is a residual set R ⊂ C^∞(Ω × U, R) of functions f such that, for every f ∈ R, system (1.26) is Morse–Smale, that is, each element of the equilibrium set E_f is hyperbolic and the corresponding stable and unstable manifolds intersect transversally.

Recall that a residual set in a topological space X is the complement of a set which is a countable union of nowhere dense sets.

In the one-dimensional case, when x ∈ [0, π], we can apply arguments of Sturm–Liouville type that allow us to verify hyperbolicity. As an example, let us consider the following Chafee–Infante problem:

u_t = u_xx + af(u), x ∈ [0, π], (1.30)
u(0, t) = u(π, t) = 0, (1.31)

where a > 0 is a parameter and f ∈ C^2 is a nonlinear function satisfying some conditions (which hold for the important cases f = u − u^3, f = sin u). Such a problem appears in many applications, in particular, in nematic liquid crystals, morphogenesis theory, and Euler's rod problem. In nematic liquid crystals, this problem describes the so-called Frederiks transition. Under some natural conditions on f, problem (1.30), (1.31) defines an infinite dimensional Morse–Smale system. The semiflow defined by problem (1.30), (1.31) has no periodic orbits since this semiflow has a Lyapunov function. The global attractor is a union of equilibria and some manifolds. Only a single equilibrium U_0 is stable. For a < a_c, where a_c is a critical value, this stationary solution is trivial: U_0 ≡ 0. For a > a_c, it is a nontrivial solution without zeroes in (0, π) (this bifurcation at a = a_c corresponds to the Frederiks transition in nematic crystals). Unstable equilibria u^(n)(x) have zeroes inside (0, π) and describe patterns oscillating periodically in x. The number of zeroes gives us the dimension of the corresponding unstable manifold M^u(u^(n)) = W^n. The global attractor is given by the formula A = ∪_n W^n, i.e. the attractor is a union of the unstable manifolds of all the equilibria.
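The bifurcation at a = a_c can be observed with a straightforward simulation. The sketch below is not from the book: it applies a standard explicit finite-difference scheme to (1.30), (1.31) with f = u − u^3, for which a_c = 1 (the first Dirichlet eigenvalue of −d^2/dx^2 on [0, π]); the grid sizes and the initial bump are illustrative, and the time step respects the usual stability bound dt ≤ dx^2/2.

```python
import math

def chafee_infante(a, T=40.0, n=40, dt=2e-3):
    """Explicit scheme for u_t = u_xx + a(u - u^3), u(0, t) = u(pi, t) = 0,
    starting from a small positive bump 0.1 sin(x); returns u at time T."""
    dx = math.pi / n
    u = [0.1 * math.sin(k * dx) for k in range(n + 1)]
    for _ in range(int(T / dt)):
        new = [0.0] * (n + 1)                  # boundary values stay 0
        for k in range(1, n):
            lap = (u[k - 1] - 2 * u[k] + u[k + 1]) / dx ** 2
            new[k] = u[k] + dt * (lap + a * (u[k] - u[k] ** 3))
        u = new
    return u

sup = lambda u: max(abs(v) for v in u)
small_a = sup(chafee_infante(0.5))   # a < a_c: trajectory decays to U_0 = 0
big_a = sup(chafee_infante(4.0))     # a > a_c: a nontrivial stable pattern
assert small_a < 1e-3 and big_a > 0.5
```

Increasing a further makes more sign-changing equilibria appear, in line with the growth of attractor complexity discussed below.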

In a similar way, we can also investigate the Neumann case, where u_x(x, t)|_{x=0,π} = 0. Here, stable equilibria are constants C such that f(C) = 0 and f′(C) < 0, and these solutions have no zeroes. For (1.30), (1.31), the stationary solutions (patterns) can be described analytically in two cases: a ≈ a_c and a = ϵ^{−2} ≫ 1, where ϵ is a small parameter.

This example shows that the attractor complexity increases in a (or, equivalently, in ϵ^{−1}). Indeed, the number of equilibria grows as a function of the parameter a, and the dimension N(ϵ) of the invariant manifolds also goes to ∞ as ϵ → 0.

Let us consider an important system of coupled oscillators which can also generate a gradient semiflow with a Lyapunov function. This system has the form

dq_i/dt = Σ_{j=1}^m K_ij σ(q_j) − b q_i + θ_i, i = 1, …, m, (1.32)

where q = (q_1, q_2, …, q_m) ∈ R^m, m > 0 is the number of oscillators (neurons), K is a matrix that determines the neuron interactions (synaptic matrix), the terms −b q_i with a parameter b > 0 define a dissipative force, and the θ_i are constant external forces (thresholds). The function σ(z) ∈ C^1(R) satisfies lim_{z→+∞} σ(z) = 1, lim_{z→−∞} σ(z) = 0. System (1.32) defines the famous Hopfield model [118], basic for the theory of attractor neural networks. If b > 0, equations (1.32) generate a global dissipative semiflow. If σ′(z) > 0 and K is symmetric, the dynamics (1.32) is gradient. In this case, the existence of an "energy" (Lyapunov function) can be applied for neural computations [118]. For nonsymmetric K, we can observe some nontrivial dynamical effects (Chapter 2).
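The gradient structure of (1.32) for symmetric K can be verified numerically. The sketch below is not from the book: the matrix K, the thresholds θ, and the logistic choice σ(z) = 1/(1 + e^{−z}) are illustrative, and the energy used is one standard Hopfield-type Lyapunov function, E(q) = −(1/2)Σ K_ij σ(q_i)σ(q_j) − Σ θ_i σ(q_i) + b Σ ∫_0^{q_i} z σ′(z) dz, whose last integral has a closed form for the logistic σ.

```python
import math

sig = lambda z: 1.0 / (1.0 + math.exp(-z))

K = [[0.0, 1.2, -0.7],
     [1.2, 0.0, 0.5],
     [-0.7, 0.5, 0.0]]                 # symmetric synaptic matrix (illustrative)
b, theta, m = 1.0, [0.1, -0.2, 0.3], 3

def qdot(q):
    return [sum(K[i][j] * sig(q[j]) for j in range(m)) - b * q[i] + theta[i]
            for i in range(m)]

def energy(q):
    s = [sig(x) for x in q]
    quad = -0.5 * sum(K[i][j] * s[i] * s[j] for i in range(m) for j in range(m))
    lin = -sum(theta[i] * s[i] for i in range(m))
    # b * int_0^q z sig'(z) dz = b * (q sig(q) - log(1 + e^q) + log 2) for each q_i
    leak = b * sum(q[i] * s[i] - math.log(1.0 + math.exp(q[i])) + math.log(2.0)
                   for i in range(m))
    return quad + lin + leak

q, h, energies = [2.0, -1.0, 0.5], 1e-3, []
for n in range(40000):                  # explicit Euler up to t = 40
    if n % 1000 == 0:
        energies.append(energy(q))
    d = qdot(q)
    q = [q[i] + h * d[i] for i in range(m)]

assert all(e2 <= e1 + 1e-9 for e1, e2 in zip(energies, energies[1:]))
assert max(abs(v) for v in qdot(q)) < 1e-4   # the trajectory reaches an equilibrium
```

The recorded energies are nonincreasing, so every trajectory of this symmetric network settles at an equilibrium, as the gradient structure predicts.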

1.5 Monotone systems

For the general quasilinear parabolic IBVP (1.7)–(1.9), a Lyapunov function does not exist. However, the corresponding semiflows have a remarkable property: they are monotone. This property restricts the trajectory behavior and gives us information about the large time dynamics. An abstract theory of monotone flows started with the seminal works of M. Hirsch [111, 112], and by now it is well developed [113, 114, 256, 266]. Here, we outline this theory following [211]. Monotone systems exhibit, in a sense, a relatively simple behavior.

Assume (1.7), (1.9) defines a semiflow in an appropriate Banach space B. The monotonicity of this semiflow is a consequence of the well-known comparison principle [258].

Lemma 1.8 (Comparison principle). Assume that for initial data ϕ, ϕ⁺ ∈ B^α the following inequality is fulfilled:

ϕ(x) ≤ ϕ⁺(x), x ∈ Ω.

Then, the corresponding solutions of (1.7), (1.9) satisfy the same inequality for all t > 0:

u(x, t, ϕ) ≤ u(x, t, ϕ⁺), x ∈ Ω, t > 0.

To formulate an analogue of this principle for abstract semiflows in a Banach space Y, we use the notion of an ordered cone. A cone Y⁺ is an ordered cone if it is a closed convex cone with nonempty interior int Y⁺ such that the intersection of Y⁺ and −Y⁺ is the singleton {0}. Then, we can introduce the following relations between elements of Y:

x ≤ y if y − x ∈ Y⁺, x < y if y − x ∈ Y⁺ and x ≠ y, x ≪ y if y − x ∈ int Y⁺.

A semiflow S^t on Y is monotone if x ≤ y implies S^t x ≤ S^t y for all t ≥ 0, and strongly monotone if x < y implies S^t x ≪ S^t y for all t > 0.

We say that a point x ∈ B is quasiconvergent relative to a semiflow if the corresponding orbit is relatively compact and the ω-limit set of this orbit consists of equilibria. Let us formulate the fundamental theorem by M. W. Hirsch [112].

Theorem 1.9. Let S be a compact strongly monotone semiflow on Y. Let D be an open set such that the corresponding orbit O(S^δ D) = {S^t D : t ≥ δ} is bounded. Then, the set of quasiconvergent points contains an open and dense subset of D.

So, we can expect that, in a sense, almost all trajectories converge to equilibria. For quasilinear problems (1.7)–(1.9) in the one-dimensional case x ∈ Ω = [a, b] ⊂ R, we have the following theorem, first established by T. Zelenyak [329] (see an overview in [211]). Here, Y = B = L^p(Ω) with an appropriate p.

Theorem 1.10. Assume that f satisfies the following conditions:
(N1) For some integer m ≥ 0, f : (x, u, ∇u) → f(x, u, ∇u) is continuous in x, u, ξ = ∇u together with all partial derivatives with respect to (u, ξ) up to order m;
(N2) If m = 0, f is locally Lipschitz continuous in (u, ξ).
Let u(t, ϕ(·)) be any solution of (1.7), (1.9) that is globally defined and bounded in B^α. Then, there is an equilibrium solution v(x) of (1.7), (1.9) such that

‖u(t, ϕ(·)) − v‖ → 0 as t → ∞.

This means that dynamics in one space dimension are always relatively simple: each trajectory is convergent. In the multidimensional case dim Ω > 1, this theorem is invalid. For the plane case dim Ω = 2, one can construct an example of chaotic dynamics, but the dynamics is realized on unstable invariant manifolds [55, 212–214]. However, this chaos is unstable and numerically nonrealizable.

Examples of monotone dynamics are given by so-called competitive and cooperative systems, which are important for biological and ecological applications. The system

dX_i/dt = F_i(X) (1.33)

is said to be cooperative [112, 114] if

∂F_i/∂x_j ≥ 0 for all j ≠ i.

For more information regarding competitive systems, see Subsection 3.4.4. Under some mild restrictions, almost all trajectories of cooperative systems are convergent (if they are bounded) [112, 114]. For example, the dynamics of the Hopfield system is cooperative if K_ij > 0 for i ≠ j. Many results for reaction-diffusion equations can be extended to monotone systems of reaction-diffusion equations [310, 311, 313].
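The order preservation underlying (1.33) can be demonstrated for a cooperative Hopfield system. The sketch below is not from the book: the positive off-diagonal matrix K and the two ordered initial conditions are illustrative. The componentwise ordering of the initial data is preserved along the trajectories, a discrete analogue of Lemma 1.8.

```python
import math

sig = lambda z: 1.0 / (1.0 + math.exp(-z))

K = [[0.0, 0.8, 0.3],
     [0.5, 0.0, 0.9],
     [0.4, 0.6, 0.0]]           # K_ij > 0 for i != j: a cooperative system
b, m = 1.0, 3

def f(x):
    return [sum(K[i][j] * sig(x[j]) for j in range(m)) - b * x[i]
            for i in range(m)]

def flow(x, T, h=1e-3):
    """Explicit Euler approximation of S^T x; the Euler map is itself
    monotone for small h, so the comparison property is inherited."""
    for _ in range(int(T / h)):
        d = f(x)
        x = [x[i] + h * d[i] for i in range(m)]
    return x

lo = [-1.0, 0.0, -0.5]
hi = [0.0, 0.5, 0.0]            # lo <= hi componentwise
for T in (0.5, 2.0, 10.0):
    a, c = flow(lo, T), flow(hi, T)
    assert all(ai <= ci + 1e-12 for ai, ci in zip(a, c))
```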

1.6 Complicated large time behavior

1.6.1 General facts and ideas

Naturally, we would like to have a description of the large time behavior of semiflows generated by fundamental PDEs and systems of PDEs. Theorems 1.7 and 1.10 show that the large time behavior of semiflows induced by reaction-diffusion equations and quasilinear parabolic equations of the second order is relatively simple. We have here an analogue of the Poincaré–Bendixson theory, which shows that for flows defined by systems of two ordinary differential equations on a compact smooth manifold, the ω-limit sets consist of equilibria and limit cycles.

In many applications in physics, biology, ecology and chemistry, we are dealing with a system of reaction-diffusion equations of the form

    ∂u_i/∂t = d_i Δu_i + f_i(x, u),    (1.34)

under the following boundary and initial conditions

    ∇u_i(x, t) · n = 0,  (x ∈ ∂Ω),    (1.35)

    u_i(x, 0) = u_i^0(x),    (1.36)

where u = (u_1, . . . , u_m), x ∈ Ω, f_i ∈ C^1(Ω × R), and n(x) is the outward normal vector to the boundary ∂Ω at x. Semiflows defined by the IBVP (1.34), (1.35), (1.36) are, in general, neither monotone nor gradient. For such systems, we can use the general theory of attractors of infinite dimensional dissipative systems developed in many works [18, 50, 101, 108, 128, 155, 265]. Under some conditions, the initial boundary value problem (1.34), (1.35), (1.36) generates a global semiflow possessing a compact global attractor of finite Hausdorff dimension. However, for applications, it would be interesting to understand the attractor structure. These general results only show that, if we fix the norm of f (|f|_{C^1} < C), then the upper estimates of the attractor dimension dim_H A < g(d̄), d̄ = min_i d_i, are given by functions g which grow as d̄ decreases. Therefore, to obtain a complicated attractor, we should investigate systems where some diffusion coefficients d_i are small, which presents a hard problem. In order to explain the strategy of this investigation, we shall review some fundamental concepts of finite dimensional dynamical system theory. Let us consider a system of ordinary differential equations

    du_i/dt = g_i(u),  g ∈ C^1(M),    (1.37)

where u lies in an n-dimensional smooth compact manifold M (say, a torus M = T^n) or in a ball B^n ⊂ R^n (in the second case, the field g should be directed inward on the boundary). These equations define a dynamical system (a global flow) in the case of a manifold and a global semiflow in the case of B^n (or of a compact smooth manifold with a smooth boundary).

The fundamental concept of structural stability was introduced by A. Andronov and L. Pontryagin in 1937. Roughly speaking, this stability means that small perturbations of a structurally stable (robust) dynamical system do not change the topological structure of the system trajectories.

Definition 1.11. We say that a dynamical system S^t on X is equivalent to a dynamical system T^t on Y if there is a homeomorphism h : X → Y which preserves orbits and the sense of direction in time.

Remark. One can use less restrictive definitions, where h is a homeomorphism connecting the corresponding attractors or the corresponding nonwandering sets, or neighborhoods of the attractors. We can also restrict h to some invariant sets (definitions and an interesting discussion can be found in [197]).

Definition 1.12. We say that a dynamical system S^t on a compact smooth manifold M defined by (1.37) is structurally stable if each perturbed field g + g̃ with |g̃|_{C^1(M)} < ϵ generates a dynamics that is equivalent to S^t, provided ϵ > 0 is small enough.

For two-dimensional fields, we have two fundamental theorems of Peixoto [128].

Theorem 1.13. A vector field on a two-dimensional smooth compact manifold (a surface) is structurally stable if and only if this field is Morse–Smale.

Theorem 1.14. For any integer r ≥ 1, the set of Morse–Smale fields of class C^r is open and dense in the set of all C^r vector fields.

So, the case n = 2 is relatively simple; however, for n > 2, formidable difficulties appear. It is impossible to find a classification, up to homeomorphisms, of finite dimensional dynamical systems. This fact follows from the next theorems due to S. Smale.

Theorem 1.15. There is a structurally stable system that is not Morse–Smale.

Theorem 1.16. The set of structurally stable fields of class C^r is not open and dense in the set of all C^r vector fields if dim M > 3.

Therefore, it is impossible to construct a general theory even in the finite dimensional case. Instead, one can use the following strategies. We can study systems whose dynamics are well understood, for example, systems with hyperbolic dynamics (for a definition and examples of hyperbolic sets, see above and [135, 197, 228]). Some particular cases, such as the Lorenz and Rössler systems, the Smale horseshoe and others, are well studied. We can also investigate small perturbations of Morse–Smale systems and bifurcations in such systems [128].

For neural and genetic networks and reaction-diffusion systems, we apply the following strategy. The main technical tools are the slaving principle and a special method (realization of vector fields, or, briefly, RVF; see Section 2.1) [55, 211–214]. We find that, under a special choice of the system parameters, the corresponding dynamics can be decomposed into slow and fast variables, and it can be reduced to a finite low dimensional dynamics by the slaving principle.
For some fundamental systems, we can show that this reduced low dimensional dynamics takes practically any form when we vary certain system parameters. Let us outline two examples of this strategy. The first example [217] shows that the dynamics of some exceptional parabolic equations (1.7) are complicated:

Theorem 1.17. Let us consider systems (1.7), (1.9) in the Sobolev space W^{1,q} for an appropriate q > dim Ω, where f is a C^1-function. For any given n-dimensional ODE (1.37) with g ∈ C^1(R^n), there is a parabolic equation with a center manifold on which the flow contains the flow of the ODE (1.37).

Such systems have specific forms since, according to Theorem 1.7, a generic quasilinear parabolic equation (1.7) has “simple” dynamics. Moreover, this realization of (1.37) uses an unstable center manifold, and therefore the realization itself is also unstable: if the initial data lie outside this manifold, the corresponding trajectory is convergent and we observe no chaos. Physically, this means that the theorem describes chaotic transient regimes that can be observed during a bounded time.

The second example is considered in detail in Chapter 3 and shows that the dynamics of generic two component reaction-diffusion systems can be very complicated. This realization is stable: the corresponding center manifold is locally attractive. Some examples of complicated dynamics can be obtained from the fact that the scalar singularly perturbed Ginzburg–Landau equation generates a Morse–Smale dynamics which is exponentially slow [43, 190].

1.6.2 Hyperbolic dynamics

Some dynamical systems have compact invariant sets (hyperbolic sets) where trajectories (if this set is not a cycle or a rest point) exhibit, in a sense, complicated (chaotic) behavior. In principle, this set can coincide with the whole manifold where the corresponding flow is defined. Notice that some important ideas on chaos in statistical mechanics were formulated in a nonrigorous form in N. Krylov's thesis, published as a monograph in 1950 (in Russian; for an English translation and comments, see [132, 151, 253]). These ideas were based on the property of exponential divergence of trajectories. Actually, dynamical systems with hyperbolic behavior are rare in real applications, but mathematicians hope that hyperbolic sets, such as the Smale horseshoe, can serve as good models of real chaotic systems which can be investigated numerically. Rigorous results obtained by dynamical system theory can be interpreted as diamonds found by excursions in a great and horrible Dark Realm of Chaos. Unfortunately, we do not yet have the tools to penetrate this Realm profoundly and explore all of its details, constructing a general theory. Nonetheless, we can investigate interesting physical, chemical and biological examples.

Here, we state a formal definition of hyperbolic sets. Notice that this definition is not essential in what follows; we will only use the theorem on the persistence (structural stability) of hyperbolic sets. We follow [232]. We consider the case of C^r maps and C^r semiflows with r ≥ 1. Let us first consider the case of maps. Assume that a map (x, t) → f^t x is continuous for t ≥ 0 and C^1 for t > 0, x ∈ M, where M is a compact smooth finite dimensional manifold. We can also consider the case x ∈ B, where B is an open bounded subset of R^n, for example a ball, with fB ⊂ B. Let the compact set K be invariant under the map f, i.e. fK = K, and assume that f restricted to K is a homeomorphism.

Definition 1.18. The set K is said to be hyperbolic for f if there is a continuous splitting T_K M = V^− + V^+ of the tangent bundle restricted to K such that

    (Tf)V^− ⊂ V^−,  (Tf)V^+ ⊂ V^+  (invariance),

where Tf|_{V^+} is invertible and there exist constants C and a > 1 such that

    max_{x∈K} ‖(Tf^n)|_{V^−}‖ ≤ Ca^{−n},  max_{x∈K} ‖(Tf^{−n})|_{V^+}‖ ≤ Ca^{−n},    (1.38)

for n ≥ 0.

The condition that f|_K is injective is not always natural. Without this condition, one can define so-called prehyperbolic sets, with hyperbolic cover K^+ (for details, see [232]). For semiflows S^t, the definition is analogous. We suppose that K is invariant and contains no fixed points, and that there exists a splitting (invariant under the flow) T_K M = V^0 + V^− + V^+, where V^0 is a one-dimensional bundle directed along a trajectory, i.e. along the tangent vector X = (d/dt) f^t x at t = 0, and, moreover, estimate (1.38) holds [135, 228, 232].

The simplest examples of hyperbolic sets are given by hyperbolic equilibria and hyperbolic limit cycles. Let us study the second situation in more detail. Consider a system of differential equations

    dx/dt = f(x),    (1.39)

where x ∈ D, D is an open subdomain of R^m with a smooth boundary, and f is a sufficiently smooth vector field on D (at least C^2). Assume S(t) is a periodic solution corresponding to a limit cycle, S(t + T) = S(t) for all t, where T is the minimal period. We consider solutions x close to S(t). These solutions can be represented as

    x(t) = S(t) + w̃(t),    (1.40)

where w̃ is a small correction. Then, we can rewrite (1.39) as

    dw̃/dt = L(t)w̃(t) + h(w̃),  L(t) = Df(S(t)),    (1.41)

where h satisfies the estimate |h(w̃)| < C|w̃|^2. Let us consider the linearized equation

    dv/dt = L(t)v(t).    (1.42)

This equation defines a linear map Π by v(t) → v(t + T). This map always has the eigenvalue 1 corresponding to the tangent solution S_t = S′(t) (in fact, S′(t) = S′(t + T) for all t). We say that the cycle S(t) is hyperbolic if the spectrum of Π intersects the unit circle S^1 ⊂ C only in the point 1. The eigenvalues of Π are called Floquet multiplicators. Among these multiplicators ω_i, i = 0, 1, . . . , m − 1, there is at least one that equals 1, but in the hyperbolic case the remaining multiplicators satisfy ω_i ∉ S^1. We can set ω_0 = 1. Let us consider the adjoint map Π* induced by the adjoint linear equation

    dv/dt = −L(t)^tr v(t).    (1.43)

The multiplicators of Π* are conjugate to the ω_j. This equation also has a solution S*(t) such that the corresponding multiplicator equals 1. We can assume, without any loss of generality, that

    S_t(t) · S*(t) = 1.    (1.44)

In fact, the scalar product μ(t) = S_t(t) · S*(t) is a constant c, since

    dμ/dt = S_tt · S* + S_t · S*_t = LS_t · S* − S_t · L^tr S* = 0.

Let us take c = 1, adjusting an appropriate S* (this solution is defined up to a multiplicative factor). This solution S* plays an important role in Chapter 4 (synchronization problems). If |ω_j| < 1 for all j > 0, the cycle is a locally attracting set (a stable cycle); otherwise, it is a saddle set or a repeller. Hyperbolic cycles are persistent under small C^1 perturbations of the vector field f.
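The Floquet multiplicators of an explicitly known cycle can be computed by integrating the linearized equation (1.42) over one period. The following sketch is our own illustration (the system is not taken from the book): for ẋ = x − y − x(x²+y²), ẏ = x + y − y(x²+y²), the unit circle S(t) = (cos t, sin t) is a hyperbolic stable cycle with period T = 2π, and the monodromy map Π has multiplicators 1 (tangent direction) and e^{−4π} (transversal contraction).

```python
import math

# Integrate Phi' = L(t) Phi, Phi(0) = I, with L(t) = Df(S(t)) along
# the cycle S(t) = (cos t, sin t); the eigenvalues of Phi(T) are the
# Floquet multiplicators (illustrative example, 2x2 done by hand).

def L(t):
    c, s = math.cos(t), math.sin(t)
    return [[-2.0 * c * c, -1.0 - 2.0 * c * s],
            [1.0 - 2.0 * c * s, -2.0 * s * s]]

def mul(A, B):      # 2x2 matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def axpy(A, B, h):  # A + h*B, entrywise
    return [[A[i][j] + h * B[i][j] for j in range(2)] for i in range(2)]

Phi = [[1.0, 0.0], [0.0, 1.0]]
N = 20000
h = 2.0 * math.pi / N
for n in range(N):  # classical RK4 for the matrix ODE
    t = n * h
    k1 = mul(L(t), Phi)
    k2 = mul(L(t + h / 2), axpy(Phi, k1, h / 2))
    k3 = mul(L(t + h / 2), axpy(Phi, k2, h / 2))
    k4 = mul(L(t + h), axpy(Phi, k3, h))
    inc = [[k1[i][j] + 2 * k2[i][j] + 2 * k3[i][j] + k4[i][j]
            for j in range(2)] for i in range(2)]
    Phi = axpy(Phi, inc, h / 6)

tr = Phi[0][0] + Phi[1][1]
det = Phi[0][0] * Phi[1][1] - Phi[0][1] * Phi[1][0]
disc = math.sqrt(tr * tr - 4.0 * det)
multiplicators = sorted([(tr - disc) / 2, (tr + disc) / 2])
```

Since the nontrivial multiplicator lies strictly inside the unit circle, the cycle is a locally attracting (stable) hyperbolic cycle, as described above.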

1.6.3 Persistence of elementary hyperbolic sets

It is important to understand the behavior of trajectories in a small neighborhood of a hyperbolic set. Here, we deal with the two simplest situations: an equilibrium and a limit cycle. Let us first consider the case of a hyperbolic equilibrium x(t) = x_0. For small |w|, where w = x − x_0 and f ∈ C^2, we have

    dw/dt = Lw + g(w),  |g(w)| ≤ c|w|^2,  L = Df(0).    (1.45)

Such a flow is locally topologically conjugate to the flow defined by the corresponding linear part w_t = Lw (the Grobman–Hartman theorem; see [12, 135, 197, 228, 232]). This means that locally, in a small neighborhood of x_0, the topological structure of the trajectories is completely determined by the linear operator L.

Let us turn to the cycle case. Here, we introduce w in a more sophisticated way. Instead of w alone, we use two variables, w and ϕ, where ϕ is a phase shift along the cycle S. We define these variables by the relations

    x = S(t + ϕ(t)) + w(t),  w(t) · S*(t) = 0.    (1.46)

So, these variables serve as coordinates in a small tubular neighborhood of the cycle: ϕ is a coordinate along the cycle, and w measures the deviation in the transversal direction to the cycle. Relation (1.46) implies

    w_t · S* = −w · S*_t = w · L^tr S* = Lw · S*.    (1.47)

Taking into account (1.44) and (1.47), one can show that equation (1.39) can be transformed to the system

    dϕ/dt = g(t + ϕ, w) · S*,    (1.48)
    dw/dt = L(t)w + b(t),    (1.49)

where L(t) = Df(S(t + ϕ)), b(t) = g(t + ϕ, w) − S_t(g(t + ϕ, w) · S*) and

    g = f(S(t + ϕ) + w) − f(S(t + ϕ)) − Df(S(t + ϕ))w.

Equation (1.49) has a solution bounded for all times t. To demonstrate this, let us formulate an important auxiliary lemma that will be applied in the asymptotic constructions of Section 4.8.

Lemma 1.19. Let S(t) be a hyperbolic cycle and consider the linear equation (1.49). This equation has a T-periodic solution if and only if the condition

    ∫_0^T b(t) · S*(t) dt = 0    (1.50)

is fulfilled.

To show that (1.50) is a necessary condition, we multiply both sides of (1.49) by S* and integrate the resulting relation over the interval [0, T]. Using (1.47), one obtains (1.50). This computation shows that (1.50) is a necessary condition for the existence of a periodic solution w for a general cycle S(t) (not necessarily hyperbolic). In the hyperbolic case, the existence of a periodic solution w follows from the fact that the spectrum of the operator L, restricted to the space of all w satisfying (1.46), does not intersect the unit circle.

Notice that b in the right-hand side of equation (1.49) satisfies (1.50). Thus, equation (1.49) has a single bounded time periodic solution, and the hyperbolic cycle is stable.

1.6.4 Chaotic dynamics

There is a large variation among the definitions of invariant sets with chaotic dynamics. The most well-known definition [207, 228, 232] uses two fundamental concepts: transitivity and sensitive dependence on initial conditions [228, 232]. Chaotic flows on compact invariant hyperbolic sets are transitive and have sensitive dependence on initial conditions. These two important properties can be considered as key features of chaos [228].

Definition 1.20. A semiflow is transitive on a compact invariant set Y provided the orbit of some point p ∈ Y is dense in Y.

Definition 1.21. A semiflow S^t on a metric space X with a metric d is said to have sensitive dependence on initial conditions provided there is an r > 0 (independent of the point) such that for each point x ∈ X and for each ϵ > 0, there is a point y ∈ X with d(x, y) < ϵ and a time moment T ≥ 0 such that d(S^T x, S^T y) > r.

Sensitive dependence on initial conditions is a physically important property, and, for hyperbolic sets, if x is close to y, the distance d(S^t x, S^t y) increases exponentially with t (at least while this distance remains small); see [232], p. 41 and [228] for more details and a discussion. Another interesting property of chaotic hyperbolic dynamics is as follows: these dynamics generate an infinite set of periodic trajectories [197, 232]. If a compact invariant hyperbolic set Γ with chaotic dynamics is a locally attracting set, we say that Γ is a chaotic (strange) attractor.

It is not easy to prove that a given system of differential equations induces a flow having an invariant set with chaotic dynamics. Nevertheless, some examples are known and well studied analytically. In particular, the Lorenz equations (Subsection 2.4.6, equations (2.108)) exhibit chaotic behavior. This fact was proven by W. Tucker [271] in 1999 by computer assisted analytical methods (although the Lorenz system had been suggested as far back as 1963). However, the Lorenz attractor is not purely hyperbolic: it is a so-called partially hyperbolic set. Such sets have been intensively studied during the last decades [309].

One of the first examples of hyperbolic dynamical behavior was found by M. L. Cartwright and J. E. Littlewood [45] for the second order differential equation describing a nonlinear oscillator under an external time periodic force:

    d²x/dt² − k(1 − x²) dx/dt + x = b sin(ωt + a).    (1.51)

This famous equation was invented by Van der Pol for electrical engineering; afterwards, this model was applied to biology and seismology.

Some explicit examples of chaotic and hyperbolic dynamics can be found for mappings (time discrete dynamical systems). The simplest examples of time discrete dynamics with chaotic behavior are given by the following maps of the interval [0, 1]:

    x → ax(1 − x),  a ∈ (0, 4],    (1.52)

    x → ax mod 1,  a ∈ (0, ∞).    (1.53)
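Sensitive dependence is easy to demonstrate for the logistic map (1.52) at the classical chaotic parameter a = 4, where the Lyapunov exponent lim (1/n) Σ_k log|f′(x_k)| is known to equal log 2, so nearby orbits separate roughly like 2^n until the separation saturates. The starting points in the sketch below are arbitrary choices of ours:

```python
import math

# Sensitive dependence on initial conditions for x -> 4x(1-x).

a = 4.0
f = lambda x: a * x * (1.0 - x)

# (i) the Lyapunov exponent along a long typical orbit is ~ log 2
x, acc, n = 0.1234, 0.0, 20000
for _ in range(n):
    acc += math.log(abs(a * (1.0 - 2.0 * x)))   # log |f'(x)|
    x = f(x)
lyap = acc / n

# (ii) two orbits starting 1e-12 apart become macroscopically separated
x, y, max_sep = 0.3, 0.3 + 1e-12, 0.0
for _ in range(100):
    x, y = f(x), f(y)
    max_sep = max(max_sep, abs(x - y))
```

After roughly 40 iterations the initial gap of 10⁻¹² has been amplified to order one, matching the exponential divergence of trajectories described in Definition 1.21.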

Another well-studied example is given by the following map defined on the torus T^2 = R^2/Z^2:

    x → a_11 x + a_12 y (mod 1),  y → a_21 x + a_22 y (mod 1),    (1.54)

where the 2 × 2 matrix A with the entries a_ij consists of integer elements and has determinant Det A equal to +1 or −1 (for details, see [197]). The condition |Det A| = 1 entails that A^{−1} is also a matrix with integer entries. Therefore, map (1.54) is a linear automorphism of the torus T^2. Similarly, we can define automorphisms of the n-dimensional torus T^n.

Let us consider the maps

    x → f(x),  f ∈ C^1(I),    (1.55)

defined on an interval I = [a, b] ⊂ R. We assume that f(I) ⊂ I. A simple criterion for the existence of chaos for such maps was suggested in [247] and [163]: a cycle of period three entails the existence of chaos. A complete and beautiful theory of the maps (1.55) is developed in [228, 247].

Hyperbolic sets having a fractal structure, like Cantor sets, can be found by bifurcation analysis [128] for maps and for time continuous dynamical systems. Probably, Poincaré was the first to notice the existence of such structures; afterwards, G. Birkhoff developed these ideas, and finally a rigorous proof of the existence of such a hyperbolic structure (the Smale horseshoe) was obtained by S. Smale (the Birkhoff–Smale theorem). Regarding this theorem, see the book [128], where one can find a general theory of local and nonlocal bifurcations.

In Chapters 2 and 3, we shall use the following property of hyperbolic sets, the so-called persistence [135, 197, 228, 232]. This means that the hyperbolic sets are, in a sense, stable (robust): if a system of differential equations

    dx/dt = F(x)

generates a semiflow S^t_F having a hyperbolic set Γ, and ϵ > 0 is sufficiently small, then the perturbed vector field

    dx/dt = F̃ = F(x) + ϵG(x),

with |G|_{C^1(M)} bounded, also generates a semiflow S^t_{F̃} with a hyperbolic set Γ̃. The semiflows S^t_F and S^t_{F̃}, restricted to Γ and Γ̃, respectively, are topologically orbitally equivalent. This means that there exists a homeomorphism h : Γ → Γ̃ which maps orbits of S^t_F|_Γ onto orbits of S^t_{F̃}|_{Γ̃} (for a definition of this equivalence and a discussion, see [135, 197, 228, 232]). In particular, the dynamics of (1.54) is robust, i.e. small perturbations of linear automorphisms of a torus have the same topological structure of trajectories as the original automorphisms.

Let us formulate the main theorem on persistence following [135]. Let M be a compact smooth finite dimensional manifold.

Theorem 1.22. Let Γ ⊂ M be a hyperbolic set of the smooth flow S^t on M. Then, for any open neighborhood V of Γ and every δ > 0, there exists ϵ > 0 such that if S̃^t is another smooth flow and d_{C^1}(S, S̃) < ϵ, then there is an invariant hyperbolic set Γ̃ for S̃ and a homeomorphism h : Γ → Γ̃ with d_{C^0}(Id, h) + d_{C^0}(Id, h^{−1}) < δ that is smooth along the orbits of S^t and establishes an orbit equivalence of S and S̃. Furthermore, the vector field

1.6 Complicated large time behavior

| 21

h_*(dS/dt) is C^0 close to dS̃/dt, and if h_1, h_2 are two such homeomorphisms, then h_2^{−1} ∘ h_1 is a time change of S (close to the identity).
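The robust dynamics of the torus automorphisms (1.54) mentioned above can be made concrete with the classical "cat map". The following sanity check is our own illustration: the integer matrix A = [[2, 1], [1, 1]] has Det A = 1, so A^{−1} is again an integer matrix and (1.54) is a genuine automorphism of T^2; it is hyperbolic since its eigenvalues (3 ± √5)/2 avoid the unit circle, and it permutes every grid of rational points, so all rational points are periodic.

```python
# Arnold's cat map x -> 2x + y (mod 1), y -> x + y (mod 1) on the
# rational grid {(i/q, j/q)}: the induced map on residues mod q is a
# permutation, hence every rational point is periodic.

A = [[2, 1], [1, 1]]
assert A[0][0] * A[1][1] - A[0][1] * A[1][0] == 1   # Det A = 1

def cat(point, q):
    # the map (1.54) restricted to fractions with denominator q
    i, j = point
    return ((A[0][0] * i + A[0][1] * j) % q,
            (A[1][0] * i + A[1][1] * j) % q)

q = 5
grid = {(i, j) for i in range(q) for j in range(q)}
assert {cat(p, q) for p in grid} == grid            # a permutation

# orbit length of the rational point (1/5, 2/5):
# (1/5, 2/5) -> (4/5, 3/5) -> (1/5, 2/5), so the period is 2
p0 = (1, 2)
p, period = cat(p0, q), 1
while p != p0:
    p, period = cat(p, q), period + 1
```

By the persistence theorem, a small smooth perturbation of this automorphism is orbit-equivalent to it, which is what makes such maps good models of robust chaos.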

One of the central questions discussed in this book is as follows: how does one check whether a family of semiflows can generate a given dynamics? We propose procedures for this checking when the prescribed dynamics is robust (Chapters 2 and 3). In Chapter 2, we show that if the prescribed dynamics is hyperbolic, then the above theorem on the persistence of hyperbolic sets reduces such checking to some approximation problems. If we are dealing with the difficult case of nonhyperbolic chaos [198, 309], this question becomes very hard.

2 Complex dynamics in neural and genetic networks

In this chapter, we first consider the method of realization of vector fields (RVF), the main tool for proving the existence of complicated large time behavior for some dissipative semiflows. We state a general scheme of the RVF application and introduce maximally complex families of semiflows (MCF). Such families are capable of realizing all finite dimensional vector fields with arbitrarily small accuracy, and therefore they can generate all structurally stable finite dimensional semiflows (for example, hyperbolic dynamics). MCF can produce chaotic attractors of any dimension. The RVF also gives us a constructive method to control dynamics in these systems.

In this chapter, coupled oscillator systems and gene, neural and metabolic networks are considered. The first example is the famous Hopfield neural network model, fundamental for pattern recognition theory and artificial intelligence. The attractor control procedure can be carried out by two methods. The first one is based on an algebraic trick: we can use a special representation for the matrix which defines the neuron interaction (the synaptic matrix). This is a generalization of the Hopfield idea [118]. The second approach exploits a special topology of the weighted graph that defines the synaptic matrix. This topology can be named “centralized topology,” or “empire structure.” In these centralized networks, highly connected hubs play the role of organizing centers. The hubs receive and dispatch interactions: each center interacts with many weakly connected nodes (satellites). We study complex large time behavior and bifurcations in the centralized networks.

The Hopfield networks involve sigmoidal functions, which are not rational. In the next sections, we extend the method to similar networks that involve rational functions, and furthermore, to general quadratic systems. Such quadratic systems occur in many applications. For example, the classical Lotka–Volterra system involves only quadratic nonlinearities. Also, quadratic systems appear as Galerkin approximations of weakly nonlinear PDEs and systems of PDEs, such as the Navier–Stokes equations and others. We show that the famous Lotka–Volterra systems define a MCF.

Further, we state a general theorem on systems of chemical kinetics where the nonlinearities consist of quadratic and linear parts. We show that these families are MCF ones. This result is a small improvement of the classical Korzuchin theorem (which, it seems, remains unknown to Western readers) [148, 330]. We state algorithms which allow one to investigate the dynamics of large metabolic networks.

We consider a model of genetic networks proposed by J. Reinitz, D. Sharp and E. Mjolsness for patterning in Drosophila [188]. We establish important and biologically natural facts: genetic networks are capable of generating attractors and patterns of arbitrary complexity, and this can be done using previously constructed patterns for the generation of new ones, because the genes are organized in blocks (modularity).

In the last sections, we also propose an approach for the estimation of the computational power of genetic and neural networks and describe algorithms of attractor control for these networks.
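The pattern-recognition role of the Hopfield model mentioned above can be sketched in a few lines. The following toy example (our own, with an arbitrary random pattern and corruption level; it uses the discrete-time threshold variant, not the continuous model analyzed in this chapter) stores one pattern ξ via the Hebbian synaptic matrix K_ij = ξ_i ξ_j / n and shows that a corrupted version of ξ is driven back to the stored pattern, i.e. ξ is an attractor:

```python
import random

# Minimal discrete-time Hopfield network with one stored pattern.

random.seed(1)
n = 40
xi = [random.choice((-1, 1)) for _ in range(n)]    # stored pattern

# Hebb rule: K_ij = xi_i * xi_j / n, zero diagonal
K = [[xi[i] * xi[j] / n if i != j else 0.0 for j in range(n)]
     for i in range(n)]

def step(s):
    # synchronous threshold dynamics s_i -> sign(sum_j K_ij s_j)
    return [1 if sum(K[i][j] * s[j] for j in range(n)) >= 0 else -1
            for i in range(n)]

s = xi[:]
for i in random.sample(range(n), 5):   # flip 5 of the 40 entries
    s[i] = -s[i]
for _ in range(5):
    s = step(s)

recalled = (s == xi)                   # the stored pattern is restored
```

With a single stored pattern and 5 corrupted entries out of 40, every local field keeps the sign of the corresponding ξ_i, so one synchronous step already restores the pattern exactly.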


2.1 Realization of vector fields (RVF)

2.1.1 Some definitions

In this section, we describe the method of realization of vector fields for dissipative systems (proposed in a rigorous mathematical form by P. Poláčik [212] and developed in [55, 214, 217, 219, 220, 236, 280, 281]). Let us denote by B^n(R) the ball of radius R in R^n centered at the origin x = 0:

    B^n(R) = {x ∈ R^n : |x| ≤ R},  x = (x_1, . . . , x_n).

For brevity, we write B^n = B^n(1). Let us consider evolution equations (1.6) possessing the following properties:

(A) These equations generate global semiflows S^t_P in an ambient Hilbert or Banach phase space H. These semiflows depend on some parameters P (which may be elements of another Banach space B). For some values of P, they have finite dimensional locally attracting and locally invariant C^1 smooth manifolds M_P;

(B) The dynamics S^t_P, restricted to the manifolds M_P, are, in a sense, almost completely controllable by the parameter P. Namely, assume the differential equations

    dx/dt = F(x),  F ∈ C^1(B^n),    (2.1)

define a global semiflow in the unit ball B^n ⊂ R^n. Then, for each n, F and ϵ > 0, we can choose suitable parameters P = P(n, F, ϵ) such that:

(B1) The semiflow S^t_P has an at least C^1-smooth locally attracting invariant manifold M_P diffeomorphic to B^n;

(B2) The restricted dynamics S^t_P|_{M_P} is defined by the equations

    dx/dt = F̃(x, P),  F̃ ∈ C^1(B^n),    (2.2)

where the estimate

    |F − F̃|_{C^1(B^n)} < ϵ    (2.3)
holds.

Remark. These manifolds M_P can be globally attracting, i.e. inertial, positively invariant and locally attracting, or invariant. The theory of invariant and inertial manifolds is well developed; see [54] for equations defined by bounded operators in a Banach space and [48, 50, 61, 78, 108, 168, 170, 265] for PDEs and systems of PDEs.

The families S^t_P of semiflows satisfying (A), (B), (B1) and (B2) will be referred to as maximally dynamically complex families of semiflows or, for brevity, simply maximally complex semiflows (MCFs).

According to the definition of structural stability (Section 1.6), all finite dimensional dynamics stable under small perturbations can be generated by a MCF (up to orbital topological equivalences). Notice that a structurally stable dynamics may be, in a sense, “chaotic” (Chapter 1). There is a rather wide variation among different definitions of “chaos.” In principle, one can use any concept of chaos: if this chaos is persistent under small C^1-perturbations, this kind of chaos occurs in the dynamics of a MCF. Here, following the classical mathematical tradition [65, 135, 194, 207, 232, 233, 235, 254], we focus our attention on compact invariant sets with hyperbolic chaotic dynamics. We only use the following basic property of hyperbolic sets, the so-called persistence [65, 194, 228, 232] (Section 1.6). This means that the hyperbolic sets are, in a sense, robust: if (2.1) generates a hyperbolic set Γ and ϵ is sufficiently small, then dynamics (2.2) also generates another hyperbolic set Γ̃. Dynamics (2.1) and (2.2), restricted to Γ and Γ̃, respectively, are topologically orbitally equivalent ([65, 135, 228, 232] and Chapter 1, Subsection 1.6.4).

Thus, all hyperbolic sets can appear in the dynamics of a MCF, for example, hyperbolic rest points, cycles and also chaotic hyperbolic sets: the Smale horseshoes, Anosov flows, the Ruelle–Takens–Newhouse chaos; see [65, 128, 135, 194, 228, 232]. Examples of MCF families are given by some reaction-diffusion equations [55] (in this case, however, the invariant manifolds are not locally attracting) and reaction-diffusion systems [280]. We consider these systems in Chapter 3. In this chapter, we first study neural and genetic networks, which present the simplest examples of MCF having fundamental applications.

2.1.2 Applications of RVF

The RVF method can be applied to systems whose dynamics admit a decomposition into slow and fast components (Chapter 1, Section 1.3). Such a decomposition can be obtained by many methods. In neural and genetic networks, one can exploit a special topology of the graph that defines the neuron (gene) interconnections (centralized networks). Another variant is to use a generalization of the known Hopfield substitution. This method is elementary for networks with discrete time, and we consider this case first.

For reaction-diffusion systems, the main strategy is to investigate small perturbations of Morse–Smale systems or monotone semiflows. If we use the scalar Ginzburg–Landau equation with a small diffusion coefficient, then there exist kink chain solutions, and the kink coordinates can be considered as slow coordinates on a center manifold [43]. This idea helps us to find examples of two component reaction-diffusion systems with complicated large time behavior (Section 3.3). By small perturbations of monotone reaction-diffusion systems, we can find dissipative waves with chaotic or periodic fronts (Section 3.2).

Notice that for the Navier–Stokes equations, complex Ginzburg–Landau equations and some other systems, it is difficult to find physically natural slow variables.


For some systems of coupled oscillators, we can take, as slow modes, the oscillator phases [276], following the beautiful book by Kuramoto [152].

An important application of this approach is given by large metabolic systems. Consider, for example, a system of, say, 7000 ordinary differential equations with polynomial right-hand sides. The coefficients of the polynomials (kinetic rates) are usually known only approximately. One can try to investigate this system numerically, substituting different values of these rates and performing direct numerical simulations. Taking into account the number of parameters involved (up to 10^4–10^5), such a procedure seems difficult if we are going to make an exhaustive search of all possible types of kinetic behavior. The RVF approach looks more promising. By purely algebraic methods, one can detect the different kinds of dynamics that can be generated by the network under a variation of kinetic rates. Moreover, one can find conditions on the kinetic rates which lead, for example, to chaotic behavior or to the Andronov–Hopf bifurcation.

Notice that, in a similar way, one can define maximally dynamically complex families of dynamical systems with discrete time.

2.2 General scheme of RVF method for evolution equations

This section describes a general construction of the RVF method for the case when we are looking for small solutions. Let us consider an evolution equation

    v_t = Lv + F(v) + γf,    (2.4)

where v lies in a Hilbert space H, L is a sectorial operator, F is a nonlinear operator, f ∈ H is independent of v, t (“an external force”), and γ > 0 is a small parameter. We use the standard fractional spaces [108]

    H^α = {v ∈ H : ‖v‖_α = ‖(I − L)^α v‖ < ∞}.

Assume F is a C^{1+r} map from H^α to H, r ∈ (0, 1). We set

    v(0) = v_0,  v_0 ∈ H^α.    (2.5)

We also suppose that the map F ∈ C^{1+r}(H^α, H), where r > 0, satisfies the conditions

    ‖F(v)‖ ≤ C_1 ‖v‖_α^2,  ‖DF(v)‖ ≤ C_2 ‖v‖_α.    (2.6)

Then, a unique solution of the Cauchy problem (2.4), (2.5) exists on some open time interval (0, t_0(v_0)), where t_0 > 0 [108]. The following assumption is essential since it allows us to pick out slow variables.

Assumption 2.1 (Spectral Gap Condition). Assume that the spectrum Spec L ⊂ C of L consists of two parts: Spec L = {0} ∪ S, where

    Re z < −c_0 < 0  for all z ∈ S,    (2.7)

and there exist N different e_j ∈ H such that

    Le_j = 0,  j = 1, . . . , N.    (2.8)

Let B₁ be the space B₁ = Span{e₁, ..., e_N}. This space contains the slow modes. Then, there is a space B₂ invariant under L such that H = B₁ + B₂, where B₁ + B₂ is a direct sum of the B_i ([108], Th. 1.5.2). We have two complementary projection operators P₁ and P₂ such that P₁ + P₂ = I, where I denotes the identity operator, and B_i = P_i H. Let us denote by L* the operator conjugate to L. If the operator L has a compact resolvent, then the spectra of L and L* are discrete and we have countable sets of eigenvectors e_j and ẽ_j of the operators L and L*, respectively, j ∈ ℕ = {1, 2, ...}. In this case, without loss of generality, one can assume that the e_i and ẽ_j are biorthogonal: ⟨e_i, ẽ_j⟩ = δ_ij, where δ_ij is the Kronecker symbol. Then, P₁ can be defined by

P₁u = Σ_{i=1}^{N} ⟨u, ẽ_i⟩ e_i,    (2.9)

where

L* ẽ_i = 0,  i = 1, ..., N.

Let us denote f_k = P_k f. Consider small solutions of (2.4) of the following form:

v = γX + w,  w(t) ∈ B₂,  X(t) = Σ_{i=1}^{N} X_i(t) e_i ∈ B₁.    (2.10)

Substituting (2.10) into equation (2.4), we obtain

X_t = γ⁻¹ P₁ F(γX + w) + f₁,    (2.11)

w_t = Lw + P₂ F(γX + w) + γf₂.    (2.12)

Assume

f₁ = γ f̄₁,  ‖f̄₁‖ < C̄₁,  ‖f₂‖ < C₂.

Let us set w = γw₀ + w̃, where w₀ is defined by w₀ = −L⁻¹ f₂. We consider (2.11), (2.12) in the domain

B_{R,γ,C} = {(X, w̃) : |X| < R, ‖w̃‖_α < Cγ²}.    (2.13)

Remember that B^N(R) is the N-dimensional ball B^N(R) = {X : |X| < R}. The following assertion is useful below.


Proposition 2.2. Assume r ∈ (0, 1) and C > C₀(R, F) is large enough. For sufficiently small positive γ < γ₀(r, F, C, R, N), system (2.11), (2.12) has a locally invariant manifold M_N defined by

w = γw₀ + W̃(X),    (2.14)

where the C^{1+r} smooth map W̃ : B^N(R) → H^α satisfies the estimates

sup_{X ∈ B^N(R)} ‖W̃‖_α < c₁γ²,    (2.15)

and

sup_{X ∈ B^N(R)} ‖D_X W̃‖_α < c₂γ².    (2.16)

Proof. This assertion is an immediate consequence of Theorem 6.1.7 from [108]. In the variables w̃, X, system (2.11), (2.12) takes the form

X_t = g(X, w̃),  g = γ⁻¹ P₁ F(γ(X + w₀) + w̃) + γ f̄₁,    (2.17)

w̃_t = L w̃ + f(X, w̃),  f = P₂ F(γ(X + w₀) + w̃).    (2.18)

Let us consider the semigroup exp(Lt). If w(0) ∈ B₂, we have [108]

‖exp(Lt) w(0)‖ ≤ M ‖w(0)‖ exp(−βt),    (2.19)

‖exp(Lt) w(0)‖_α ≤ M ‖w(0)‖ t^{−α} exp(−βt),    (2.20)

where M, β > 0 do not depend on γ. Moreover,

M₀ = sup_{(X,w̃) ∈ B_{R,γ,C}} ‖f‖ < c₂γ²,    (2.21)

λ = sup_{(X,w̃) ∈ B_{R,γ,C}} (‖D_X f‖ + ‖D_w̃ f‖) < c₃γ²,    (2.22)

M₂ = sup_{(X,w̃) ∈ B_{R,γ,C}} ‖D_w̃ g‖ < c₄γ,    (2.23)

μ = sup_{(X,w̃) ∈ B_{R,γ,C}} ‖D_X g‖ < c₄γ.    (2.24)

We set Δ = 2θ₁, where

θ_p = λM ∫₀^∞ u^{−α} exp(−(β + pμ′)u) du,  1 ≤ p ≤ 1 + r,    (2.25)

and μ′ = μ + ΔM₂. For sufficiently small γ, one has μ′ < β/2; therefore, the integral on the right-hand side of (2.25) converges and, according to (2.22), θ₁ < c₅γ² (since M = O(1) as γ → 0). We notice that for sufficiently small γ,

(1 + r)μ′ < β/2,  θ₁ < Δ(1 + Δ)⁻¹,  θ₁ < 1,  θ₁(1 + Δ)M₂ μ′⁻¹ < 1,

and

θ_p (1 + Δ) M₂ ((1 + r)μ′)⁻¹ < 1.

According to Theorem 6.1.7 [108], these estimates imply the existence of the C^{1+r}-smooth locally invariant manifold. Local attractivity of this manifold follows from the Spectral Gap condition. The proof is complete.

On the manifold M_N, the evolution equation (2.11) for the slow component X takes the following form:

dX/dτ = Q̄(X) + MX + f̂₁ + φ(X, γ),    (2.26)

where τ = γt and

Q̄(X) = P₁ γ⁻² (F(γ(X + w₀)) − γ DF(γw₀)X − F(γw₀)),    (2.27)

M is a bounded linear operator defined by

MX = P₁ γ⁻¹ DF(γw₀)X,    (2.28)

f̂₁ = f̄₁ + P₁ γ⁻² F(γw₀),    (2.29)

and φ is a small correction such that

|φ|_{C^{1+r}(B^N(R))} < c₅γ,  r > 0.    (2.30)

For quadratic nonlinearities F such that F(αv) = α² F(v), the relations for Q̄ and M can be simplified to

Q̄(X) = P₁ (F(X + w₀) − DF(w₀)X − F(w₀)),    (2.31)

M(w₀)X = P₁ DF(w₀)X.    (2.32)

Notice that if all the solutions X(t) to (2.26) are bounded for t ≥ 0, for example, X(t) ∈ B^N(R) on (0, ∞), then the manifold M_N is positively invariant. The key idea of the RVF method is to use w₀ as a parameter to adjust M. This works if the following property holds.

Assumption 2.3 (Linear operator density (LOD) condition). Let us consider the set O_F of all linear operators M(L⁻¹f) that can be obtained by (2.32) when f runs over the whole space B₂. We assume that this set O_F is dense in the set of all linear operators ℝ^N → ℝ^N.

Due to this property, the linear operator M can be considered as a parameter. In some cases, this property can be rewritten in a more explicit form. Let L have a compact resolvent, and let {e_j}_{j=1}^N, {ẽ_j}_{j=1}^N be the biorthogonal system mentioned above. Then, the Fredholm alternative gives the following reformulation of the LOD condition.

If for some real numbers Z_ij, where i, j = 1, ..., N, the equality

Σ_{i=1}^{N} Σ_{j=1}^{N} Z_ij ⟨DF(L⁻¹f) e_j, ẽ_i⟩ = 0    (2.33)

holds for all f ∈ B₂, then these numbers equal zero: Z_ij = 0 for all i, j.

System (2.26) is quadratic when F is a quadratic map or can be approximated by such a map. This is the typical case for “small” solutions. We study quadratic systems in Sections 2.4, 2.6 and 2.7 by special methods. First, we investigate simpler systems which are important for genetic and neural applications.
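The slow/fast splitting behind (2.10)–(2.12) can be illustrated numerically in a finite-dimensional toy model. The sketch below is our own illustration, not taken from the text: L has one zero eigenvalue (slow mode X) and a stable block (fast modes w), F is quadratic, and after a transient the fast component sits O(γ²)-close to γw₀ = −L⁻¹(γf₂), as Proposition 2.2 predicts.

```python
import numpy as np

# Toy finite-dimensional analogue of (2.4): one slow mode (eigenvalue 0)
# and two fast modes (eigenvalues -5, -8), a quadratic nonlinearity F and
# a small forcing gamma*f. All concrete numbers are illustrative choices.
L = np.diag([0.0, -5.0, -8.0])
gamma = 1e-2
f = np.array([0.0, 1.0, -1.0])            # here P1 f = 0, so f1 = 0

def F(v):                                 # quadratic: F(0) = 0, DF(0) = 0
    return np.array([v[1] * v[2], v[0] ** 2, v[0] * v[1]])

v = np.array([gamma, 0.1, -0.1])          # small initial data
dt = 1e-3
for _ in range(20000):                    # explicit Euler up to t = 20
    v = v + dt * (L @ v + F(v) + gamma * f)

# gamma*w0 with w0 = -L^{-1} f2, restricted to the fast block:
w_eq = -np.linalg.solve(L[1:, 1:], gamma * f[1:])
print(np.abs(v[1:] - w_eq).max())         # fast modes: O(gamma^2) deviation
```

The fast transient decays like exp(−5t), after which the deviation of w from γw₀ is of the order γ², in line with estimate (2.15).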

2.3 Control of attractor and inertial dynamics for neural networks

This section follows [276, 277, 280, 281, 287, 288, 290, 291, 295]. We consider the celebrated Hopfield system [118], which is a fundamental model of attractor neural network theory. The corresponding equations have the following form:

dx_i/dt = Σ_{j=1}^{m} K_ij σ(x_j) − b x_i + θ_i.    (2.34)

Here, the x_i(t) are continuous neuron states (membrane potentials), the matrix K (the matrix of synaptic weights) defines interactions between neurons (the entry K_ij measures the action of neuron j on neuron i), σ is a sigmoidal function that describes the dependence of a neuron's response on the membrane potentials, b > 0 is a coefficient, −bx_i is a dissipative term, and the θ_i are parameters. An analogue of (2.34) with discrete time has the form

x_i(t + 1) = σ( Σ_{j=1}^{m} K_ij x_j(t) − θ_i ),    (2.35)

where the parameters θ_i are thresholds. We assume that σ is a smooth monotone increasing function. If σ = sgn, we obtain the model with discrete time and discrete states

s_i(t + 1) = sgn( Σ_{j=1}^{m} K_ij s_j − θ_i ),    (2.36)

where s_i ∈ {−1, 1}. Such a model has interesting properties and complicated dynamics [278]. Other important examples of sigmoidal functions are as follows:
(i) The piecewise linear function

σ_PL(z) = 0 for z < 0,  σ_PL(z) = z for z ∈ (0, 1),  σ_PL(z) = 1 for z ≥ 1.    (2.37)

Such functions give us an excellent tool for the simulation of Turing machines by neural networks, see [30, 250, 251].
(ii) The Michaelis–Menten function, a popular model in fermentative kinetics [193]:

σ(z) = 0 for z < 0,  σ(z) = z(K + z)⁻¹ for z ≥ 0.    (2.38)

This relation for σ can be obtained from a usual quadratic reaction model under the assumption that some reactions are fast [193].
(iii) The Hill functions can be considered as a generalization of the Michaelis–Menten model:

σ(z) = z^m / (K^m + z^m),    (2.39)

or

σ(z) = (z / (K + z))^m.    (2.40)

These expressions can be obtained by simple biological arguments [332].
(iv) The logistic (Fermi) function

σ_a(z) = (1 + exp(−az))⁻¹,  a > 0,    (2.41)

where the parameter a defines the sharpness of the function's increase. In fact, σ_a(z) → H(z) as a → ∞, where H(z) is the Heaviside step function. Network models (2.34) and (2.35) have been investigated extensively [30, 81, 118, 142, 250, 251, 278].

First, consider the case of a symmetric matrix K. In this case, system (2.34) has an energy function (a Lyapunov function) decreasing along trajectories and, therefore, this system generates a gradient semiflow. This Lyapunov function is defined by

E(x) = Σ_{i,j=1}^{m} K_ij σ(x_i) σ(x_j) + Σ_{i=1}^{N} (λ T(x_i) + θ_i σ(x_i)),    (2.42)

where T(s) is defined by T(s) = −sσ(s). Then, one can expect that all or almost all trajectories converge to fixed points (equilibria) (Section 1.4). In order to simplify the investigation of these equilibria, one uses a special substitution for the matrix K invented by J. J. Hopfield [118]:

K_ij = Σ_{s=1}^{n} A_is A_js,    (2.43)

or, briefly, K = AAᵀ. Due to (2.43), selected patterns corresponding to equilibria become fixed points of the dynamics. This property is useful for pattern recognition and artificial intelligence problems [118]. In fact, the great advantage of a model such as the Hopfield system with symmetric K is that it can be studied analytically, because its properties are reminiscent of a common statistical mechanical system possessing an “energy.”
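The gradient-like character of (2.34) for symmetric K is easy to observe numerically. The two-neuron example below is entirely our own illustration (σ = tanh, b = 1, θ = 0, and a particular symmetric K): the trajectory settles to an equilibrium of x′ = Kσ(x) − x rather than oscillating.

```python
import numpy as np

# Two-neuron Hopfield system (2.34) with symmetric coupling:
# x' = K sigma(x) - x (b = 1, theta = 0). For symmetric K the flow is
# gradient-like, so the trajectory converges to a rest point; here the
# rest point solves s = 1.5*tanh(s), i.e. s ~ 1.29, on the diagonal.
K = np.array([[1.0, 0.5],
              [0.5, 1.0]])               # symmetric synaptic matrix
sigma = np.tanh

def rhs(x):
    return K @ sigma(x) - x

x = np.array([0.5, -0.2])
dt = 1e-2
for _ in range(10000):                   # explicit Euler up to t = 100
    x = x + dt * rhs(x)

print(x, np.linalg.norm(rhs(x)))         # near the symmetric equilibrium
```

Any initial condition off the antidiagonal x₁ = −x₂ (the stable manifold of the saddle at the origin) converges to one of the two symmetric equilibria.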


In Nature, however, we find nonsymmetric neuron interconnections, so, from a biological point of view, substitution (2.43) does not look too realistic. In a real situation, it is impossible to find a simple expression for the energy (the Lyapunov function). As a result of some investigations (see, for example, the works [81, 278, 280, 281]), one can conclude that the dynamical behavior of the Hopfield system is, in general, complex. In particular, numerical and analytical studies show that the large time behavior of networks may be periodic or chaotic. This will be proven below by the RVF approach. For Hopfield systems, the RVF method can be performed in a quite straightforward manner by the two main tools:
(a) a generalization of the substitution (2.43),

K_ij = Σ_{s=1}^{n} A_is B_sj,    (2.44)

where n < m. Briefly, this can be written as K = AB, where A, B are matrices. This relation means that rank K ≤ n, and every matrix of rank n admits such a representation;
(b) results on approximations of functions by multilayered functionals [21, 53, 107, 119].
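The rank constraint imposed by the factorization (2.44) can be checked in one line. A sketch with arbitrary test sizes (m = 50, n = 3 are our own choices):

```python
import numpy as np

# The factorization (2.44): K = A B with A of size m x n and B of size n x m
# forces rank K <= n; for generic A, B the rank is exactly n.
rng = np.random.default_rng(0)
m, n = 50, 3
A = rng.normal(size=(m, n))
B = rng.normal(size=(n, m))
K = A @ B                                  # low-rank synaptic matrix
print(np.linalg.matrix_rank(K))            # equals n for generic A, B
```

Conversely, any m × m matrix of rank n can be written as such a product, e.g. via its (truncated) singular value decomposition.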

2.3.1 Attractors for neural networks with discrete time

For completeness, and in order to introduce some technical tools, we first consider model (2.35) (see [278], where the case (2.36) is investigated as well). Model (2.35) is a paradigm in the theory of time recurrent neural networks. Indeed, in spite of the simplicity of (2.35), one can prove that this model is capable of simulating any Turing machine. This was shown first in [250, 251], see also [30, 142]. Denote by F the class of smooth functions σ(z) defined on ℝ and possessing the following properties:

σ_z ∈ S(ℝ),  lim_{z→+∞} σ(z) > lim_{z→−∞} σ(z),    (2.45)

where S(ℝ) denotes the L. Schwartz class of fast decreasing functions. We assume that σ ∈ F. Clearly, all such functions are bounded. The function tanh(z) presents a simple example. Let us fix an integer n > 0 and suppose that the interaction matrix K has the form (2.44). We define quantities q_i (hidden collective coordinates) by

q_i = Σ_{j=1}^{m} B_ij x_j,

where i = 1, 2, ..., n and x = (x₁, ..., x_m). Then, the q-dynamics is defined by the map

q_l(t + 1) = Σ_{k=1}^{m} B_lk σ( Σ_{r=1}^{n} A_kr q_r(t) − θ_k ) = F_l(q, P).    (2.46)

The neuron dynamics is governed by (2.46) and by the map

x_i(t + 1) = σ( Σ_{l=1}^{n} A_il q_l(t) + θ_i ),  i = 1, ..., m.    (2.47)

Thus, the time evolution of the variables x_j is determined completely by the q_i(t). Let us formulate a theorem on the family of maps (2.46). Remember that Bⁿ denotes the unit ball in ℝⁿ centered at 0. Let us consider the quadruple (m, A, B, θ) as the parameter P in the RVF method.

Theorem 2.4. The fields F defined by (2.46) are dense in the space of all vector fields on the unit ball Bⁿ endowed with the C¹-norm. Thus, the family F(q, P) of dynamical systems with discrete time is maximally dynamically complex.

In other words, this assertion can be explained as follows. For a given map G defined on the ball Bⁿ and such that G(Bⁿ) ⊂ Bⁿ, we can construct a neural network with a “hidden” q-dynamics that simulates this map G. Inside the ball Bⁿ, the map G can be approximated by F(q, P) with any given accuracy δ. The proof of this assertion is a straightforward application of well-known results on multilayered neural networks, see, for example, [21, 53, 107, 119]. We shall formulate a lemma that immediately implies our theorem. Let us define the family of vector fields

Ψ_i(q, P) = Σ_{p=1}^{m} B_ip σ( Σ_{j=1}^{n} A_pj q_j + θ_p ),  i = 1, 2, ..., n,    (2.48)

depending on the parameters P = (m, A, B, θ).

Lemma 2.5. Assume σ lies in the class F. Let G(q) be a C¹-field defined in the unit ball Bⁿ. Then, for any positive number δ, there exist a number m, matrices A, B, and an m-vector θ such that

|G(·) − Ψ(·, P)|_{C¹(Bⁿ)} < δ.    (2.49)

The proof of this technical lemma is standard. It follows from classical multilayered approximations [21, 53, 107, 119] and can be found in [287], see also Appendix 2.11. Theorem 2.4 has the following basic consequences.
(1) For any given T > 0 and ϵ > 0, choosing a sufficiently small δ(ϵ, T), one can ϵ-approximate any family of G-iterations q, G(q), G²(q), ..., G^T(q). This means that for any positive integer j ≤ T and any q ∈ Bⁿ, one has |F^j(q, P) − G^j(q)| < ϵ. Thus, we can control families of trajectories within bounded time intervals.
(2) The second corollary is that if the map G has some robust (structurally stable) local chaotic attractor or a robust invariant set (for example, a compact hyperbolic invariant set Γ [135, 228]), we can obtain a neural network with a hidden q-dynamics generating a topologically equivalent local attractor (invariant set) Γ̃. This equivalence means that a homeomorphism h exists that maps the G-trajectories on Γ onto the F-ones on Γ̃, see Section 1.6 and [135, 197, 228]. Consequently, all robust dynamics can be realized by networks (2.35). This fact allows one, by [30, 250, 251], to show that networks (2.35) can serve as a universal model for computing.

In the coming subsections, we consider an alternative approach to the problem of network dynamic complexity which uses the network topology and admits a more transparent interpretation.
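The reduction to the hidden q-dynamics is a purely algebraic identity and can be verified directly: with K = AB, the full m-dimensional update (2.35) drives q = Bx exactly according to the n-dimensional map (2.46). A sketch with arbitrary test matrices (all sizes and values are our own choices):

```python
import numpy as np

# Consistency check for the hidden collective coordinates q = B x:
# the full network step x(t+1) = sigma(K x(t) - theta), K = A B, induces
# the closed reduced map (2.46) on q.
rng = np.random.default_rng(2)
m, n = 20, 2
A = rng.normal(size=(m, n))
B = rng.normal(size=(n, m)) / m
theta = rng.normal(size=m)
sigma = np.tanh

x = rng.normal(size=m)
for _ in range(5):
    q = B @ x
    x = sigma(A @ (B @ x) - theta)         # full step; note K x = A (B x)
    q_reduced = B @ sigma(A @ q - theta)   # reduced map (2.46) applied to q
    assert np.allclose(B @ x, q_reduced)
print(B @ x)
```

The assertion holds identically at every step, which is exactly why the q-variables close into an autonomous n-dimensional system.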

2.3.2 Graph growth

With each circuit (2.35), we can associate a directed graph (V, E) as follows. Let V = {1, 2, ..., m} be the set of vertices (nodes), and denote by E the set of edges. We assume that e = {i, j} ∈ E if and only if K_ij ≠ 0. On the other hand, for each graph (V, E), we can construct the corresponding circuit (2.35) by an assignment of the weights K_ij, where K_ij = 0 if e = {i, j} ∉ E. Then, in (2.35), the number m = |V|, where |V| is the number of vertices of the graph.

Let us formulate special conditions on the graphs (V, E) important for ϵ-realizations of dynamics on the networks. Consider an infinite sequence of graphs G_N = (V_N, E_N) with increasing |V_N| = N, where N = 1, 2, .... Let us formulate some auxiliary definitions. We say that a graph (V_N, E_N) contains an (n, d) biclique if there are two disjoint subsets S_n = {v₁, ..., v_n} ⊂ V_N and S_d = {w₁, ..., w_d} ⊂ V_N of the nodes such that all w_i are mutually connected with all v_j: both edges (v_i, w_j) and (w_j, v_i) lie in E_N. It is not essential for us whether connections inside S_n and S_d exist or not. If such connections are absent, then the (n, d) biclique is a complete bipartite subgraph of the graph (V, E). Assume this graph sequence enjoys the following property.

Assumption 2.6 (Property nH (to have n hubs)). Let us fix a positive integer n. We say that a graph sequence S = {G_N}_{N=1}^∞ has property nH if for each integer N₀ > 0, there is a graph from S that contains an (n, d) biclique with d ≥ N₀.

This property can be interpreted as follows. Assume a sequence of growing graphs (V_N, E_N) emerges by a growth procedure (for example, by preferential attachment [7]). Our property holds, for example, if there is a fixed set of n “central” nodes connected with an infinitely increasing number d(N) of other nodes, d(N) → ∞ as N → ∞. If d = N − n, this means that there are n “central” nodes (hubs). The other nodes (“satellites”) are connected with the hubs. Such networks can be called centralized ones with n centers. These (n, d) bicliques can create feedback loops which appear in genetic networks [327]. For example, a single microRNA can regulate a number of transcription factors (TF). On the other hand, TFs influence microRNAs [327]. Another example of such a centralized (empire) structure is the Russian Federation, where n = 1 and a center (Moscow, the Kremlin) is trying to control everything. In this chapter, we only consider the dynamical behavior of empire structures; the stability of empires under fluctuations will be considered in Chapter 4.

Consider a sequence of random graphs (V_N, E_N) having scale-free structure ([7] and Section 4.1.2), and let N → ∞. Then, with probability 1, the 1H-property holds for this sequence, since scale-free graphs have hubs with a large number of adjacent nodes [7]. Clearly, the probability that a large random graph (V, E), with |V| ≫ n, d, contains an (n, d)-biclique is decreasing in both parameters (n, d). The number of (n, d)-bicliques arising in networks generated by preferential attachment growth can be investigated by numerical simulations (see below, Section 2.10). In the coming section, we consider the attractors of networks where the node interaction is defined by graphs with the nH property.
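The emergence of hubs under preferential attachment is easy to reproduce. The undirected toy growth model below (one edge per new node, Barabási–Albert style; all details are our own simplification, not the book's construction) shows that the maximal degree — and hence the d of a (1, d) star biclique — grows with the network size:

```python
import random

# Toy preferential attachment: each new node attaches to one old node chosen
# with probability proportional to its degree. The "stubs" list stores each
# node with multiplicity equal to its degree, so random.choice on it is a
# degree-proportional draw. Early nodes become hubs.
random.seed(0)
degree = {0: 1, 1: 1}            # start from a single edge
stubs = [0, 1]
for new in range(2, 2000):
    target = random.choice(stubs)
    degree[new] = 1
    degree[target] += 1
    stubs += [new, target]

hub_degree = max(degree.values())
print(hub_degree)                # a hub: its neighbourhood is a (1, d) star
```

For a directed network one would add both edge orientations between hub and satellites, matching the biclique definition above.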

2.3.3 Dynamics of time discrete centralized networks

The main assertion can be formulated as follows.

Theorem 2.7. Let q → F(q) be a C¹-map defined on the unit ball Bⁿ ⊂ ℝⁿ and such that F(Bⁿ) ⊂ Bⁿ. Let us consider a sequence of graphs (V_N, E_N), N = 1, 2, ..., with the nH-property. Then, for each ϵ > 0, there is a number N₀ = N₀(ϵ, F) such that a network (2.35), associated with the N₀-th graph, ϵ-realizes the dynamics q(t + 1) = F(q(t)) on Bⁿ by an appropriate choice of the entries K_ij and thresholds h_i. This means that there are indices (j₁, ..., j_n) ∈ {1, ..., N₀} such that the states q(t) = (x_{j₁}(t), ..., x_{j_n}(t)) of the corresponding nodes satisfy

q(t + 2) = G(q(t)),    (2.50)

where

|G(·) − F(·)|_{C¹(Bⁿ)} < ϵ.    (2.51)

Therefore, the dynamics of a sufficiently large random network from our sequence is controllable by the entries K_ij and the thresholds. The proof of this assertion uses multilayered network approximations and the centralized structure. To simplify our statement, we prove the assertion for n = 1, when q ∈ ℝ. The general case n ≥ 1 will be considered in subsection 2.3.5.

Proof of Theorem for n = 1. Let us take a graph sequence with the 1H-property. For each N₀, a graph from our sequence contains a node j* of degree C(j*) = d > N₀. Let us assign the K_ij as follows: K_{j*j} = a_j, K_{jj*} = b_j, and K_ij = 0 otherwise. Here, the a_j, b_j are new coefficients to adjust. Such a choice has a simple interpretation: we obtain a star subnetwork consisting of a center and a number of mutually disconnected nodes (satellites). Only the center and satellites are connected. This organization of interconnections corresponds to the classical power principle of the Roman Empire: divide et impera. Informally speaking, if the empire is well-divided, the emperor should only pull strings. To have a prescribed satellite behavior, he should assign appropriate connections with his subjects (satellites). Let us denote q = x_{j*}. Assume θ_{j*} = 0 and θ_j = h_j for j ≠ j*. Then, equations (2.35) reduce to

q(t + 1) = σ( Σ_{j=1}^{d} a_j x_j(t) ),    (2.52)

x_j(t + 1) = σ(b_j q(t) − h_j),  j = 1, ..., d.    (2.53)

Thus, the dynamics of the variable q is governed by q(t + 2) = Q(q(t)), where

Q(q) = σ(S(q)),  S(q, a, b, h) = Σ_{j=1}^{d} a_j σ(b_j q − h_j).    (2.54)

Since Q(q) ∈ (0, 1), we can restrict the map q → S(q) to the interval I = [0, 1]. Now, our next step is to approximate, with accuracy ϵ > 0, a smooth function f(v) defined on I by S(v, a, b, h). Due to the approximation Lemma 2.5, for each C¹-smooth function f(v) defined on [0, 1] and each ϵ > 0, there are d, a_j, b_j, h_j such that

sup_{v∈[0,1]} |f(v) − S(v, d, a, b, h)| < ϵ,    (2.55)

sup_{v∈[0,1]} |f′(v) − S′(v, d, a, b, h)| < ϵ.    (2.56)

By these inequalities, one can construct an ϵ-realization of the map F. Now, we can formulate a lemma.

Lemma 2.8. Let q → F(q) be a continuous map defined for q ∈ ℝ such that |F| < 1 and F ∈ C¹(I), where I = [0, 1]. Then, for any ϵ > 0, there are coefficients a_j, b_j and h_j such that the map q → Q(q) defined by (2.54) satisfies

q(t) ∈ I  for  t = 2, 4, ...,    (2.57)

sup_{q∈I} (|Q(q) − F(q)| + |Q′(q) − F′(q)|) < ϵ.    (2.58)

The proof of this lemma follows from (2.54), (2.55) and (2.56). The lemma entails Theorem 2.7 in the case n = 1.

This theorem shows that arbitrary one-dimensional maps Q on compact intervals can be realized, up to an arbitrarily small error, by a sequence of networks with the nH property. Even for n = 1, such networks can generate complicated periodic and chaotic attractors (subsection 1.6.4). For example, from the Sharkovskii theorem [163, 247], it is well known that for maps on an interval, “period 3 implies chaos.” Namely, if the map q → Q(q) has a trajectory of period 3, then the attractor of this map Q is, in a sense, chaotic, since this attractor contains an infinite set of periodic trajectories (orbits). If a smooth map Q has a periodic orbit of period 3 such that Q′(q_s) ≠ 1 at all points of this trajectory, then a map Q + ϵQ̃ (where Q̃ is a smooth, sufficiently small perturbation) also has a trajectory of period 3. Therefore, this new perturbed map also has a chaotic attractor. This shows that the network dynamics (2.35) can generate an infinite set of periodic orbits. Notice that relation (2.57) guarantees the compactness of images of the neural dynamics. Therefore, the dynamics is dissipative and always has a global attractor.

2.3.4 Bifurcations and chaos onset

Bifurcations
It is well known that the elimination of a hub can dramatically change the network dynamics [10]. Let us show that the elimination of a single weakly connected node can also produce sharp bifurcations. Let us consider a map q → Q(q) of the interval I = [0, 1] such that Q(I) ⊂ I, for example, Q = λq(1 − q) for λ ∈ (0, 4). Assume this map has a stable rest point q₀ ∈ (0, 1). Geometrically, this point is an intersection q₀ of the parabolic curve y = Q(x)

Figure 2.1. An equilibrium point is weakly perturbed by a node elimination. The parabola (the small dotted points) shows the map q → aq(1 − q); the intersection of the straight line and the parabola is the equilibrium point of this map; the deformed parabola (the star points) shows a map obtained as a result of a node elimination.



Figure 2.2. The rest point is shifted after a node elimination; the deformed parabola (the star points) is the map resulting from the node elimination.


Figure 2.3. Formation of new stable rest points as a result of a node elimination.

and the straight line y = x (Figures 2.1–2.3). The stability of q₀ means that |Q′(q₀)| < 1. This map Q can be approximated by a network (2.35) as above (Lemma 2.8). Let us consider such an approximation and the corresponding network. Let us eliminate a node in this network, say the j-th node. Then, we obtain a new map P(q) defined by P = Q − a_j σ(b_j σ(q) − h_j). Possible plots of P are shown in Figures 2.1–2.3. Three situations are possible:
(a) a point close to q₀ is a new rest point (Figure 2.1);
(b) we obtain a new rest point instead of q₀ (Figure 2.2), however, this point is not close to q₀;
(c) we obtain three new rest points (Figure 2.3).
Notice that sometimes the elimination of a number of nodes does not essentially influence the steady state q₀.

Network extension
By the results of the previous subsection and numerical simulations, one can show that a simple network extension can lead to oscillations and chaos onset. To see this, let us consider the well-known quadratic map q → Q = λq(1 − q). If λ < 3, the attractor is a stable rest point q₀ = Q(q₀) = (λ − 1)λ⁻¹. For λ > 3.57..., one has chaos, and for λ ∈ (3.4, 3.57), one observes complicated periodic cycles.

We can simulate Q by a network with N nodes and a single center. We use equations (2.52) and (2.53), assuming that d = N, the coefficients a_j are small, σ(0) = 0, and the derivative σ′(0) = 1. The assumption on the a_j admits a simple interpretation. Namely, it means that the center acts on the satellites in a hard manner, but the satellites respond to the center softly. Roughly speaking, we have a sharp center and soft satellites. Then, this equation takes the form

q(t + 1) = Σ_{j=1}^{N} a_j x_j(t),    (2.59)

x_j(t + 1) = σ(b_j q(t) − h_j),  j = 1, ..., N.    (2.60)

This implies

q(t + 2) = F(q(t)),    (2.61)

where

F(q) = Σ_{j=1}^{N} a_j σ(b_j q − h_j).    (2.62)

Remember that any continuous function Q on [0, 1] can be approximated by F(q); in particular, we can take Q = λq(1 − q), where λ is a parameter. Assume that our network grows as follows. We copy some nodes (excluding the center), and each new node has the same connections K_ji with the center as its prototype (in genetics, this process can be interpreted as a gene duplication). Then, we have a network with αN nodes, where α > 1. Such an extended network simulates the map q → F(q) = αQ(q) = αλq(1 − q). Let us take λ = 3 and α = 1.2. Then, the network extension leads to a transition from a globally convergent dynamics to chaos (Figures 2.4 and 2.5). To detect the complicated attractor structure, we apply an elementary probabilistic method. Let us decompose the interval [0, 1] into M disjoint intervals I_j of the same



Figure 2.4. The visiting frequency (density of q, where q(t) is the center state) for a network with a single center and 20 satellites.


Figure 2.5. The visiting frequency (q-density) for the network with a single center and 40 satellites.

length. Iterating Q, we compute the frequencies ω_j, which indicate how many times the trajectory enters I_j. These visiting frequencies give an approximation of the invariant probability measure induced by the map Q. If the map has a rest point attractor, this measure is localized at a point, while for chaotic attractors this measure is distributed over a large set.

The results of this subsection admit an interpretation in the framework of the Empire model. In Chapter 4, we shall show that Empires (centralized networks with a single center) should extend; otherwise, they will be destroyed by random fluctuations. This means that the number N of satellites is an unbounded function of time: N(t) → ∞ as t → ∞. However, then the effect of chaos onset arises, and the empire can fall as a result of internal chaos and disorder.
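The visiting-frequency diagnostic is easy to reproduce. In the sketch below the bin count M = 100 and the value λ = 2.9 for the convergent regime are our own choices (the text uses λ = 3 with α = 1.2, so the extended map has λα = 3.6); counting the occupied bins separates a localized invariant measure from a spread-out one:

```python
import numpy as np

# Split [0,1] into M bins and count visits of the iterated logistic map
# q -> lam*q*(1-q) after a burn-in. A rest-point attractor occupies one bin;
# a chaotic attractor occupies many.
def occupied_bins(lam, M=100, n_iter=20000, burn=2000, q0=0.31):
    q, counts = q0, np.zeros(M, dtype=int)
    for t in range(n_iter):
        q = lam * q * (1.0 - q)
        if t >= burn:
            counts[min(int(q * M), M - 1)] += 1
    return int(np.count_nonzero(counts))

stable, chaotic = occupied_bins(2.9), occupied_bins(3.6)
print(stable, chaotic)        # localized measure vs. measure on many bins
```

This mirrors the contrast between Figures 2.4 and 2.5: extending the network multiplies the effective map by α and pushes it into the chaotic regime.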

2.3.5 Realization of n-dimensional maps by time discrete centralized networks

Let us extend our approach in order to realize all multidimensional maps and to prove Theorem 2.7 for all n ≥ 1. Using the nH-property, we find a graph that contains an (n, d)-biclique with a sufficiently large d ≥ n. We assume the set S_n contains the nodes indexed by 1, 2, ..., n and the set S_d consists of the nodes indexed by n + 1, ..., n + d. Let us assign the entries K_ij as follows: K_lj = a_lj, K_jl = b_jl for l = 1, 2, ..., n and j such that n < j ≤ n + d, and otherwise K_ij = 0. Let us denote q_l = x_l, l = 1, 2, ..., n. Assume h_l = 0 for l = 1, ..., n. Then, equations (2.35) reduce to

q_l(t + 1) = σ( Σ_{j=n+1}^{n+d} a_lj x_j(t) ),  l = 1, 2, ..., n,    (2.63)

x_j(t + 1) = σ( Σ_{l=1}^{n} b_jl q_l(t) − h_j ),  j = n + 1, ..., n + d.    (2.64)

This gives q_l(t + 2) = Q_l(q(t)), where

Q_l(q) = σ(S_l(q)),  S_l(q) = Σ_{j=n+1}^{n+d} a_lj σ( Σ_{k=1}^{n} b_jk q_k − h_j ).    (2.65)

Let us show that system (2.65) ϵ-realizes all n-dimensional C¹-smooth time discrete dynamical systems defined on a compact connected domain D with a smooth boundary. To this end, it is sufficient to prove the following lemma.

Lemma 2.9. Assume σ(z) ∈ F, q = (q₁, ..., q_n) ∈ ℝⁿ. Then, for each C¹ vector valued function F(q) = (F₁, F₂, ..., F_n) defined on Iⁿ = [0, 1]ⁿ and any ϵ > 0, there are d, a_lj, b_jl, h_j such that

|F_l(q) − S_l(q, d, a, b, h)| < ϵ,  q ∈ Iⁿ = [0, 1]ⁿ,    (2.66)

|∇F_l(q) − ∇_q S_l(q, d, a, b, h)| < ϵ,  q ∈ Iⁿ = [0, 1]ⁿ.    (2.67)

This lemma is a multidimensional generalization of the one-dimensional approximation lemma. The proof is also standard and follows from classical multilayered approximations ([21, 53], [287] and Appendix 2.11). The lemma implies Theorem 2.7.
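As in the one-dimensional case, the two-step reduction (2.63)–(2.65) is an algebraic identity that can be checked directly. A sketch with arbitrary test coefficients (σ = tanh rather than a (0, 1)-valued sigmoid, purely for simplicity):

```python
import numpy as np

# The n hub states of the (n, d) biclique network (2.63)-(2.64), observed
# every second step, follow the closed map Q_l of (2.65). All coefficients
# below are arbitrary test values.
rng = np.random.default_rng(4)
n, d = 3, 40
a = rng.normal(size=(n, d)) / d
b = rng.normal(size=(d, n))
h = rng.normal(size=d)
sigma = np.tanh

def Q(q):                                   # closed form (2.65)
    return sigma(a @ sigma(b @ q - h))

q = rng.normal(size=n) * 0.1
x = sigma(b @ q - h)
history = [q]
for _ in range(8):
    q, x = sigma(a @ x), sigma(b @ q - h)   # hub and satellite updates
    history.append(q)

for t in range(len(history) - 2):
    assert np.allclose(history[t + 2], Q(history[t]))
print(history[-1])
```

The simultaneous (tuple) update matters: hubs read the previous satellite states while satellites read the previous hub states, which is what produces the two-step recursion.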


2.3.6 Attractors and inertial manifolds of the Hopfield system

Let us show that the dynamics of the Hopfield networks (2.34) is maximally dynamically complex, realizing all finite dimensional dynamics. The proof is organized as follows. We first find an important property of the Hopfield system. Namely, under some assumptions, this system has an inertial manifold (i.e. a globally attracting positively invariant manifold). We can change the dynamics on this manifold (the local inertial form) by adjusting the parameters m, K, θ and b of the Hopfield model. We then show that for any hyperbolic set Γ generated by a system of ordinary differential equations, there exists a Hopfield system (2.34) with a global attractor containing a topologically equivalent hyperbolic set Γ̃. This can be done by a choice of suitable network parameters P = (N(Γ), K(Γ), θ(Γ), b).

Notice that if σ ∈ F and sup |θ_i(t)| < C < ∞, then (2.34) gives rise to a global semiflow which is dissipative for b > 0. Indeed, all trajectories enter the ball B^m(R) of radius R = b⁻¹(|γ| + |θ|), where the vector γ is given by γ_j = sup_u |σ(u)| Σ_{i=1}^{m} |K_ij|. Moreover, no trajectories can leave this ball, which is an absorbing set.

We choose the matrix K in a special way. Namely, we use the generalized Hopfield substitution (2.44), where A and B are, respectively, m × n and n × m matrices, and n ≤ m. We denote by Ã the n × n matrix defined by

Ã_ij = A_ij,  i, j = 1, ..., n,    (2.68)

and suppose, for simplicity, that Ã is the unit matrix:

Ã = I.    (2.69)

(Notice that all main results hold for general nondegenerate matrices Ã, det Ã ≠ 0.) Let us formulate the first result.

Lemma 2.10. Let b = 1 and let us fix a positive integer n. Under conditions (2.44), (2.68) and (2.69), an n-dimensional inertial manifold M_H for (2.34) exists. The corresponding inertial dynamics is governed by

dq_i/dt = Φ_i(q) − q_i + θ_i,  i = 1, 2, ..., n,    (2.70)

with

Φ_j(q) = Σ_{k=1}^{m} B_jk σ( Σ_{l=1}^{n} A_kl q_l + h_k ),    (2.71)

where the h_k are defined by

h_i = −θ_i + Σ_{j=1}^{n} A_ij θ_j.    (2.72)

Proof. If K is defined by (2.44), system (2.34) takes the form

dx_i/dt = Σ_{j=1}^{n} A_ij Φ_j(x) − x_i + θ_i,    (2.73)

with

Φ_j(x) = Σ_{k=1}^{m} B_jk σ(x_k).

Introduce new variables q_k and z_i by

q_i = x_i,  i = 1, 2, ..., n,    (2.74)

and

z_i = x_i − Σ_{j=1}^{n} A_ij q_j + h_i,  i = n + 1, ..., m.    (2.75)

Let us define h i by (2.72). It follows then from (2.69) and (2.72)–(2.74) that n n dz i = A is Φ s (x) − x i + θ i − A ij (Φ j (x) − x j + θ j ). dt s =1 j =1

(2.76)

Thus, due to (2.76), dz i = −z i ( n < i ≤ m ) . (2.77) dt Therefore, z i are exponentially decreasing functions. Finally, we see that the inertial manifold MH is given by equations z i = 0. The inertial form is defined by (2.70) and (2.71). Remark. Consider a local semiflow in the unit ball B n = {q : |q| ≤ 1|} induced by the vector field with components Q i (q) = Ψ i (q) − q i . Assume that at the boundary S n of B n , the vector field is directed inward B n . Equation (2.70) implies then that all q-trajectories of this semiflow attain the unit ball B n and remain in this ball. The approximation Lemma 2.9 now yields the following main assertion about the Hop­ field systems. Theorem 2.11. The Hopfield system (2.34) generates a family of semiflows with maxi­ mally complex dynamics. Depending on parameters P, this system can generate any (up to an orbital topological equivalency) hyperbolic dynamics. Proof. Consider a system of ordinary differential equations defined by a C1 -smooth vector field. dq = F ( q) (2.78) dt in the unit ball B n . Assume that F is directed toward the ball at the sphere ∂B n . Then, this system generates a global semiflow. Suppose, moreover, that this semiflow has


a hyperbolic set Γ. Then, according to the Persistence Hyperbolic Set Theorem, there exists a positive δ(Γ) such that, for any C¹-field G on B^n satisfying

  |F − G|_{C¹(B^n)} < δ(Γ),   (2.79)

the system dq/dt = G(q) has the same (up to a homeomorphism h) hyperbolic set Γ̄: hΓ = Γ̄. If we take this number δ = δ(Γ) as ϵ in Lemma 2.9, then the parameters m, A, B and θ_i of the network may be found. Due to (2.70) and the previous lemma, dynamics (2.34) with these parameters generates a topologically equivalent hyperbolic set Γ̄.
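The collapse onto the inertial manifold of Lemma 2.10 is easy to observe numerically. The sketch below (illustrative parameter choices, not from the book) integrates system (2.73) with σ = tanh, b = 1 and Ã = I, and checks that the transverse coordinates z_i from (2.75) decay exactly like e^{−t}, as predicted by (2.77).

```python
import numpy as np

# Dimensions: m neurons, n-dimensional inertial manifold (n < m).
n, m = 2, 5
rng = np.random.default_rng(0)

# A is m x n with the top n x n block equal to the identity (condition (2.69)).
A = 0.3 * rng.standard_normal((m, n))
A[:n, :] = np.eye(n)
B = 0.3 * rng.standard_normal((n, m))   # B is n x m
theta = 0.5 * rng.standard_normal(m)    # thresholds

def rhs(x):
    """Right-hand side of (2.73): dx/dt = A Phi(x) - x + theta, Phi = B tanh(x)."""
    return A @ (B @ np.tanh(x)) - x + theta

def z_coords(x):
    """Transverse coordinates (2.75): z_i = x_i - sum_j A_ij x_j + h_i, i > n."""
    h = -theta[n:] + A[n:, :] @ theta[:n]   # h_i from (2.72)
    return x[n:] - A[n:, :] @ x[:n] + h

# Integrate with classical RK4.
x = rng.standard_normal(m)
dt, T = 1e-3, 3.0
z0 = np.linalg.norm(z_coords(x))
for _ in range(int(T / dt)):
    k1 = rhs(x); k2 = rhs(x + 0.5*dt*k1)
    k3 = rhs(x + 0.5*dt*k2); k4 = rhs(x + dt*k3)
    x += (dt/6) * (k1 + 2*k2 + 2*k3 + k4)

zT = np.linalg.norm(z_coords(x))
# By (2.77), z(t) = z(0) e^{-t}; the decay factor should be e^{-T}.
print(z0, zT, zT / z0)
```

The ratio zT/z0 matches e^{−3} to integration accuracy, independently of the random choices of A, B, θ, illustrating that the decay (2.77) is exact rather than asymptotic.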

2.4 Complex dynamics for Lotka–Volterra systems

The models studied in the previous subsection are similar to the classical Lotka–Volterra system, which describes competition and interaction between species. In this section, we consider the Lotka–Volterra system of equations for N species and n resources, where n ≪ N. According to the Poincaré–Bendixson theory, chaotic behavior in Lotka–Volterra systems may occur only for N ≥ 3. There is numerical evidence supporting the existence of chaotic motion for N ≥ 3 [263], but as far as we know, there is no theoretical proof of this fact. For competitive Kolmogorov systems, it was shown by Smale [255] that any type of dynamics can be realized when the number of species is sufficiently large. Moreover, in the pioneering book [203], it was shown that all autonomous systems of first-order ordinary differential equations with polynomial right-hand sides can be realized by large Lotka–Volterra systems. However, [203] contains no results on invariant manifolds, persistence and chaos. The large time behavior of solutions is understood relatively well for gradient and monotone systems (see, for example, [108, 111–114]). The Lotka–Volterra system, in general, is not monotone or gradient, or even dissipative. To overcome the mathematical difficulties in studying the large time behavior, we again use the method of realization of vector fields.

Another question considered in this section is how to describe the influence of resources on the dynamics of the system. The case of one resource was investigated already by V. Volterra [314], who showed that only one species survives. Other examples of such influence can be found in [115, 263]. The competitive exclusion principle [263] predicts that N species cannot coexist on fewer than N resources, although, in some realistic situations, many species do coexist using a small number of resources. This dilemma is the so-called paradox of plankton [126]. Chaotic dynamics have been proposed as an answer to this paradox [11, 124, 125]. The results of this section are joint with Prof. V. Kozlov (Linköping); see [149].

2.4.1 Summary of the main results for the Lotka–Volterra system

The main result is the existence of chaotic behavior for the Lotka–Volterra dynamics with a few resources and many species. We also show that for any prescribed finite family of hyperbolic (possibly chaotic) dynamics, there is a Lotka–Volterra system generating this family of dynamics. Moreover, this Lotka–Volterra system can be chosen to be strongly persistent. This means that the species abundances x_i(t) do not vanish as t → +∞ and the |x_i(t)| are bounded for all times t. We find a class of Lotka–Volterra systems with n < N resources which are strongly persistent. Certainly, the conditions for the exclusion principle ([115], Sec. 5.4) are violated here.

In the next subsection, we state the Lotka–Volterra model for N species and n resources. An important change of variables, which reduces the initial system to a system of n differential equations, is presented in subsection 2.4.3. This change was found in [37, 38, 75], where the generalized Lotka–Volterra system was investigated. There, we also describe some n-dimensional invariant manifolds Q_n of the Lotka–Volterra dynamics. The vector fields G determining this new system in n variables depend on the Lotka–Volterra system parameters and its initial conditions.

First, we study the family G of the vector fields G in R^n = {q : q = (q_1, …, q_n)} resulting from the above change of variables. We show, in particular, that the family G contains all polynomial fields and all polynomials of exponential functions exp(λ_k q_k). Using a classical theorem on the approximation of functions by exponential polynomials (Theorem 18 from [161]), we prove that an arbitrary vector field on the unit ball in R^n can be approximated by vector fields from G. Furthermore, using these approximation results, we state the main results on chaotic behavior in Lotka–Volterra systems: if a finite family of hyperbolic dynamics is given, a sufficiently large Lotka–Volterra model with appropriate parameters can generate this family by a variation of initial data.

Afterwards, we focus on ecological stability. We investigate the plankton paradox problem: how many species can share a bounded number of resources? Mathematically, it is connected with the important concepts of permanency and strong persistency, introduced by Schuster, Sigmund and Wolf, and by Freedman and Waltman, respectively ([79, 243]; for an overview, see [115, 263]). Strong persistency is weaker than permanency and means that for each individual trajectory x(t) (an individual life history, from a biological point of view), no species abundance tends to 0 as time t goes to +∞. We describe a construction of strongly persistent Lotka–Volterra systems which have chaotic behavior. A biological interpretation of these results can be found in the last subsection.


2.4.2 Lotka–Volterra model with n resources

The Lotka–Volterra system reads as

  dx_i/dt = x_i ( r_i − Σ_{j=1}^N K_ij x_j ),  i = 1, …, N,   (2.80)

in which N species with populations x_i compete for bounded resources; the coefficient r_i is the intrinsic growth (or decay) rate of the i-th species. The matrix K with entries K_ij determines the interaction between species. We consider this system in the positive cone R^N_> = {x = (x_1, …, x_N) : x_i > 0}. Notice that this cone is invariant under dynamics (2.80). Below, we assume that the initial data for (2.80) always lie in this cone:

  x(0) = ϕ ∈ R^N_>.   (2.81)

Our key assumptions are as follows. Suppose that the interaction matrix can be factorized as K = AB, where A, B are matrices of size N × n and n × N, respectively, i.e.

  K_ij = Σ_{s=1}^n A_is B_sj.   (2.82)

Here, 1 ≤ n ≤ N. Notice that each matrix K of rank n admits such a factorization, and the matrices A, B depend continuously on K. We also assume

  r_i = Σ_{k=1}^n A_ik μ_k   (2.83)

for certain μ_k, k = 1, …, n. We discuss a biological interpretation of this assumption below. Under the above assumptions, equations (2.80) can be represented as a model with n resources:

  dx_i/dt = x_i S_i(x),  i = 1, …, N,   (2.84)

where

  S_i(x) = Σ_{k=1}^n A_ik R_k(x),  R_k(x) = μ_k − Σ_{j=1}^N B_kj x_j.   (2.85)

This means that all the growth coefficients S_i depend on the resources R_k(x), which are linear functions of x.
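The passage from (2.80) to the resource form (2.84)–(2.85) is a purely algebraic identity and can be checked mechanically: whenever K = AB and r = Aμ, the growth rate r_i − (Kx)_i equals S_i(x) = Σ_k A_ik R_k(x). A minimal sketch with arbitrary illustrative matrices:

```python
import numpy as np

rng = np.random.default_rng(1)
N, n = 5, 2                       # N species, n resources

A = rng.standard_normal((N, n))   # N x n
B = rng.standard_normal((n, N))   # n x N
mu = rng.standard_normal(n)

K = A @ B                         # interaction matrix (2.82), rank <= n
r = A @ mu                        # growth rates satisfying (2.83)

x = rng.uniform(0.1, 1.0, N)      # a point in the positive cone

R = mu - B @ x                    # resources R_k(x) from (2.85)
S = A @ R                         # growth coefficients S_i(x)

# The resource form (2.84)-(2.85) agrees with the original form (2.80):
print(np.max(np.abs(S - (r - K @ x))))
```

The printed discrepancy is at the level of floating-point rounding, for any choice of A, B, μ and x.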

2.4.3 Change of variables

Let q = (q_1, …, q_n) ∈ R^n. We define a family of vector fields depending on the parameters A, B, μ and N by

  G_k(q) = G_k(q, μ, A, B, C, N) = −μ_k + Σ_{l=1}^N B_kl C_l exp( −Σ_{j=1}^n A_lj q_j ),   (2.86)

where C = (C_1, …, C_N) ∈ R^N_>. The next proposition describes a reduction of system (2.84) to a system of differential equations with n variables.

Proposition 2.12. (i) Assume p ∈ R^n. Let q be a solution to the Cauchy problem

  dq_k/dt = G_k(q, μ, A, B, C, N),  k = 1, …, n,  q(0) = p.   (2.87)

Then the functions

  x_i = C_i exp( −Σ_{j=1}^n A_ij q_j ),  i = 1, …, N,   (2.88)

satisfy (2.80) and the Cauchy data (2.81) with

  ϕ_i = C_i exp( −Σ_{j=1}^n A_ij p_j ),  i = 1, …, N.   (2.89)

(ii) Let x be a solution to the Cauchy problem (2.80), (2.81). If q solves (2.87) with C and p satisfying (2.89), then x and q are connected by (2.88).

Proof. (i) Differentiating (2.88) and using (2.87), we get

  dx_i/dt = x_i ( −Σ_{j=1}^n A_ij ( −μ_j + Σ_{m=1}^N B_jm C_m exp( −Σ_{k=1}^n A_mk q_k ) ) ),

which implies (2.80). Relation (2.89) follows from (2.88).
(ii) Let x be a solution to (2.80), (2.81). If we take q as a solution to (2.87), where C and p are subject to (2.89), then we observe that the vector function (2.88) also solves (2.80), (2.81). This proves the assertion.

Remark 1. Relation (2.89) shows that the constants C and p are defined nonuniquely. This is connected with the following fact: if q is a solution to (2.87) with certain C and p, then the vector function q + α = (q_1 + α_1, …, q_n + α_n) solves (2.87) with C_j replaced by C_j exp(−Σ_{k=1}^n A_jk α_k) and p_k by p_k + α_k. This means that there is a natural isomorphism between systems satisfying (2.89) for a fixed ϕ.


We denote by G(A, B, N, μ) the family of vector fields defined by (2.86) for all possible C ∈ R^N_>. Let Q_n(C) be the set of points x ∈ R^N_> defined by (2.88), where q ∈ R^n. Then Q_n(C) is a manifold of dimension n, invariant with respect to the semiflow generated by (2.84).
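Proposition 2.12 can be verified numerically: integrating the n-dimensional q-system (2.87) and mapping the result through (2.88) reproduces the N-dimensional Lotka–Volterra trajectory. The sketch below uses illustrative parameters (n = 2, N = 3, chosen only to keep the solution bounded on the integration interval):

```python
import numpy as np

n, N = 2, 3
A = np.array([[1.0, 0.5], [-0.3, 1.0], [0.2, -0.4]])   # N x n
B = np.array([[0.4, 0.1, 0.2], [0.1, 0.3, -0.2]])      # n x N
mu = np.array([0.5, 0.3])
C = np.array([1.0, 0.8, 1.2])

def G(q):
    """Field (2.86): G_k = -mu_k + sum_l B_kl C_l exp(-(A q)_l)."""
    return -mu + B @ (C * np.exp(-A @ q))

def lv(x):
    """Lotka-Volterra form (2.84)-(2.85): dx_i/dt = x_i (A(mu - B x))_i."""
    return x * (A @ (mu - B @ x))

def rk4(f, y, dt, steps):
    for _ in range(steps):
        k1 = f(y); k2 = f(y + 0.5*dt*k1)
        k3 = f(y + 0.5*dt*k2); k4 = f(y + dt*k3)
        y = y + (dt/6) * (k1 + 2*k2 + 2*k3 + k4)
    return y

dt, steps = 1e-3, 2000                 # integrate to t = 2
q = rk4(G, np.zeros(n), dt, steps)     # q(0) = 0, so x(0) = C by (2.88)-(2.89)
x = rk4(lv, C.copy(), dt, steps)

# The change of variables (2.88) maps q(t) onto the LV solution x(t).
print(np.max(np.abs(C * np.exp(-A @ q) - x)))
```

Note that the q-system lives in dimension n = 2 while the population system lives in dimension N = 3; the agreement of the two integrations illustrates that the Lotka–Volterra trajectory stays on the invariant manifold Q_n(C).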

2.4.4 Properties of fields from G

Let M be a positive integer and D an M × n matrix with real entries. We put M_n = {1, …, M}^n and consider the class E(D, M, n) consisting of vector fields F = (F_1, …, F_n) with components

  F_k(q) = −μ_k + Σ_{i∈M_n} b_ki exp( Σ_{j=1}^n D_{i_j j} q_j ),  k = 1, …, n.   (2.90)

Here, i = (i_1, …, i_n) is a multi-index, the b_ki are arbitrary coefficients, and the μ_l, l = 1, …, n, are fixed constants, the same as in (2.83).

In order to motivate the introduction of this class of fields, let us consider the following example. As is known, the functions

  Σ_{i=1}^M a_i^{(j)} exp(λ_i^{(j)} q_j),

where j is fixed, are dense in C[r_0, r_1] for any closed interval [r_0, r_1], r_0 < r_1, provided the exponents λ_i^{(j)} satisfy a certain growth condition (Appendix 2.11 and [161]). Therefore, the linear combinations of the functions

  a_{k_1}^{(1)} a_{k_2}^{(2)} ⋯ a_{k_n}^{(n)} exp( λ_{k_1}^{(1)} q_1 + ⋯ + λ_{k_n}^{(n)} q_n )

are dense in (C¹[r_0, r_1])^n. These combinations, however, are exactly of the form appearing in (2.90). The connection between the classes E(D, M, n) and G is given by the following:

Proposition 2.13. For every integer M > 0 and every M × n matrix D, there exist a positive integer N and matrices A and B of sizes N × n and n × N, respectively, such that E(D, M, n) ⊂ G(A, B, N, μ).

Proof. Let N = 2M^n n. First, we will use the multi-index i = (i_1, …, i_n) ∈ M_n instead of the index m = 1, …, M^n; for this purpose, we assume that an isomorphism m = m(i) is fixed. Second, every index l = 1, …, 2M^n n can be represented as l = (2m − 2)n + s or l = (2m − 1)n + s with certain m = 1, …, M^n and s = 1, …, n. With this notation, the sum (2.86) can be rewritten as

  G_k = −μ_k + S_1 + S_2,   (2.91)

where

  S_1 = Σ_{m=1}^{M^n} Σ_{s=1}^n B_{k,(2m−1)n+s} C_{(2m−1)n+s} exp( −Σ_{j=1}^n A_{(2m−1)n+s,j} q_j ),

  S_2 = Σ_{m=1}^{M^n} Σ_{s=1}^n B_{k,(2m−2)n+s} C_{(2m−2)n+s} exp( −Σ_{j=1}^n A_{(2m−2)n+s,j} q_j ).

Let us set

  A_{(2m−2)n+s,j} = A_{(2m−1)n+s,j} = −D_{i_j,j},   (2.92)

where i = i(m) is the multi-index corresponding to m, and

  B_{k,(2m−2)n+s} = κ⁽⁰⁾_km δ_{k,s},  B_{k,(2m−1)n+s} = κ⁽¹⁾_km δ_{k,s},  κ⁽⁰⁾_km κ⁽¹⁾_km < 0,   (2.93)

for all s, j, k = 1, …, n and m = 1, …, M^n. Here, δ_{s,k} is the Kronecker delta and i_j is the j-th component of i, m = m(i). Then formula (2.91) becomes

  G_k = −μ_k + Σ_{i∈M_n} Ĉ_{k,m(i)} exp( Σ_{j=1}^n D_{i_j,j} q_j ),   (2.94)

where

  Ĉ_{k,m} = κ⁽⁰⁾_km C_{(2m−2)n+k} + κ⁽¹⁾_km C_{(2m−1)n+k}.

Since κ⁽⁰⁾_km and κ⁽¹⁾_km have opposite signs, for every k the constants Ĉ_{k,1}, …, Ĉ_{k,M^n} run over all values in R^{M^n} as C runs over all values in R^N_>. The proof of the assertion is complete.
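The pairing device of (2.92)–(2.93) can be carried out explicitly for n = 1. The sketch below (illustrative target coefficients, not from the book) realizes a field F(q) = −μ + Σ_i b_i e^{D_i q} with sign-indefinite b_i as a field G of the form (2.86) with strictly positive abundances C_l, using one positive/negative species pair per exponent:

```python
import numpy as np

# Target field from E(D, M, n) with n = 1, M = 3: F(q) = -mu + sum_i b_i exp(D_i q).
mu = 0.3
D = np.array([-1.0, 0.5, 2.0])
b = np.array([0.7, -1.2, 0.4])          # arbitrary signs

M = len(D)
N = 2 * M                               # N = 2 M^n n species, here n = 1

# One pair of species per exponent: A rows repeat -D_i (so exp(-A q) = exp(D q)),
# B entries are kappa^0 = +1 and kappa^1 = -1 (opposite signs, as in (2.93)).
A = np.repeat(-D, 2)                    # A_{2i} = A_{2i+1} = -D_i
Bk = np.tile([1.0, -1.0], M)

# Choose positive C with C_{2i} - C_{2i+1} = b_i, both entries > 0.
C = np.empty(N)
C[0::2] = np.maximum(b, 0.0) + 1.0
C[1::2] = np.maximum(-b, 0.0) + 1.0

def F(q):
    return -mu + np.sum(b * np.exp(D * q))

def G(q):
    """Field (2.86) for n = 1: G = -mu + sum_l B_l C_l exp(-A_l q)."""
    return -mu + np.sum(Bk * C * np.exp(-A * q))

qs = np.linspace(-2.0, 2.0, 41)
print(max(abs(F(q) - G(q)) for q in qs))
```

Each target coefficient b_i, of either sign, is split as a difference of two positive abundances, which is exactly why the number of species doubles in the construction.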

Let us consider the family P_{k,n} of polynomial vector fields H = (H_1, …, H_n) of order k, i.e.

  H_m(z) = Σ_{|i|≤k} a_{m,i} z_1^{i_1} z_2^{i_2} ⋯ z_n^{i_n},  m = 1, …, n,   (2.95)

where i = (i_1, …, i_n), |i| = i_1 + i_2 + ⋯ + i_n, and the z_j ∈ R_+ are positive real numbers.

Proposition 2.14. For given positive integers k and n, there exist a number N, an N × n matrix A, an n × N matrix B and a vector μ ∈ R^n such that for each polynomial field H ∈ P_{k,n}, there are coefficients C_i such that system (2.87) reduces to the system

  dz_m/dt = H_m(z),  m = 1, …, n,   (2.96)

by the change of variables

  z_m = exp(q_m).   (2.97)

Proof. Consider the system

  dq_m/dt = F_m(q),  m = 1, …, n,   (2.98)


where the F_m are defined by (2.90) with μ_k = 0. The change of variables (2.97) transforms (2.98) into the system

  dz_m/dt = H̃_m(z) = Σ_{i∈M_n} b_{m,i} z_1^{D_{i_1,1}} z_2^{D_{i_2,2}} ⋯ z_n^{D_{i_n,n}} z_m,   (2.99)

where M = k + 2. Taking D_sj = s − 2, s ∈ {1, …, k + 2}, we have

  H̃_m = Σ_{i∈M_n} b_{m,i} z_1^{i_1−2} z_2^{i_2−2} ⋯ z_n^{i_n−2} z_m,  m = 1, …, n.   (2.100)

In general, H̃_m is not a polynomial because the monomials on the right-hand side of (2.100) can contain terms with negative exponents z_j^{−1}. However, by a choice of the b_{m,i}, we can turn the H̃_m into polynomials. In fact, let b_{m,i} = 0 if i_j = 1 for some j ≠ m, and b_{m,i} = 0 if i_j = k + 2. Then (2.100) is a polynomial of degree k, and all polynomials from P_{k,n} can be obtained in (2.100) by an appropriate choice of the b_{m,i}. The proof is complete.

In Appendix 2.11, we prove the following assertion.

Proposition 2.15. Assume a field F ∈ G(A, B, N, μ) has a stable hyperbolic rest point. Then the ω-limit set of (2.84) has fractal and Hausdorff dimension d ≥ N − n.

By B^n, we denote the closed ball of radius 1 in R^n centered at 0. Let D̄(K, n) be the class of infinite matrices D = {D_ij}, i ∈ N and j ∈ {1, …, n}, such that

  |D_ij| < K,  i = 1, 2, …,  j = 1, …, n.   (2.101)

If D ∈ D̄(K, n) and M̄ is a positive integer, then by D^{(M̄)} we denote the M̄ × n matrix with entries

  D^{(M̄)}_ij = D_ij,  i = 1, …, M̄,  j = 1, …, n.   (2.102)

The matrix D^{(M̄)} will be called the M̄-truncation of D.

Proposition 2.16. Let us fix an integer n > 0. For any K > 0, there exists a matrix D ∈ D̄(K, n) such that the space of all vector fields ∪_{M=1}^∞ E(D^{(M)}, M, n) is dense in the set of all C¹-smooth vector fields on B^n.

The proof uses a technical lemma and is relegated to Appendix 2.11. Inequality (2.101) has a transparent biological interpretation: the interactions K_ij between species can be chosen a priori bounded. In fact, boundedness of D in (2.90) and boundedness of the b_ki imply that the K_ij are bounded as N → ∞.
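The mechanism behind Proposition 2.14 — exponential fields in q become polynomial fields in z = e^q — can be checked on the simplest case n = 1. With the logistic field H(z) = z(1 − z), the substitution z = e^q gives dq/dt = 1 − e^q, which is of the exponential form (2.90). (An illustrative example, not from the book.)

```python
import numpy as np

def rk4(f, y, dt, steps):
    for _ in range(steps):
        k1 = f(y); k2 = f(y + 0.5*dt*k1)
        k3 = f(y + 0.5*dt*k2); k4 = f(y + dt*k3)
        y = y + (dt/6) * (k1 + 2*k2 + 2*k3 + k4)
    return y

H = lambda z: z * (1.0 - z)          # polynomial field: logistic growth
F = lambda q: 1.0 - np.exp(q)        # the same field after z = exp(q)

dt, steps = 1e-3, 5000               # integrate to t = 5
z = rk4(H, 0.1, dt, steps)           # z-system (2.96)
q = rk4(F, np.log(0.1), dt, steps)   # q-system (2.98)

# The trajectories are related by the change of variables (2.97): z(t) = exp(q(t)).
print(z, np.exp(q))
```

Here the q-system also illustrates why positivity is automatic in these coordinates: z = e^q is positive for any real q, matching the positive cone on which (2.95) is defined.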

2.4.5 Chaos in the Lotka–Volterra model with n resources

Let us recall some facts from the theory of dynamical systems that we are going to use below. Assume that a semiflow on B^n has a compact invariant hyperbolic set I (for the definition of compact invariant hyperbolic sets, see [135, 228, 232]). The simplest hyperbolic sets are hyperbolic rest points and limit cycles. Famous examples of hyperbolic sets with chaotic dynamics are given by the Smale horseshoe and the Anosov flows [135, 228]. Chaotic dynamics on a compact invariant hyperbolic set I is transitive and has sensitive dependence on initial conditions ([232], p. 41; [228], pp. 40 and 86). A flow S^t is transitive on I provided the orbit of some point p ∈ I is dense in I. The flow S^t on the set I ⊂ B^n is said to have sensitive dependence on initial conditions provided there is an r > 0 (independent of the point) such that for each point x ∈ I and each ϵ > 0, there is a point y ∈ I with |x − y| < ϵ and a time moment T ≥ 0 such that |S^T x − S^T y| > r [228].

Consider the Cauchy problem

  dq/dt = F(q),  q(0) = p,   (2.103)

where q, p ∈ B^n ⊂ R^n and F = (F_1, …, F_n) ∈ C¹(B^n). Assume this field F is directed inward on the boundary ∂B^n; then this system generates a global semiflow. Assume this semiflow has a compact hyperbolic invariant set I. Clearly, I must lie at a positive distance from ∂B^n. Let us consider a vector field F̃ which is sufficiently close to F in the C¹-norm, i.e.

  |F̃ − F|_{C¹(B^n)} < ϵ   (2.104)

for a sufficiently small ϵ > 0. Then the perturbed dynamics

  dq/dt = F̃(q)   (2.105)

also has a compact invariant hyperbolic set Ĩ close to I ([232], Sec. 15, and [135], Theorem 18.2.3). The flows generated by (2.103) and (2.105), restricted to I and Ĩ respectively, are topologically orbitally equivalent (regarding this equivalence, see [228], Sec. 4.7). This entails, in particular, that there is a homeomorphism h : I → Ĩ which tends to the identity as |F̃ − F|_{C¹(B^n)} → 0. Moreover, this homeomorphism maps the orbits of (2.103) restricted to I onto the orbits of (2.105) restricted to Ĩ (notice that these orbits are images of trajectories defined for all t ∈ (−∞, +∞)).

We denote by S^t_{F̃} the global semiflow on B^n generated by (2.105). If this semiflow has a compact invariant set I, then we denote by S^t_{F̃}|_I the flow which is the restriction of S^t_{F̃} to I. The semiflow generated by system (2.84) is denoted by S^t_{LV} and the corresponding restriction to an invariant set K by S^t_{LV}|_K. Now, we can formulate the main theorem.

Theorem 2.17. Let F^{(l)}, l = 1, …, p, be C¹-vector fields on B^n directed inward on ∂B^n and having compact invariant hyperbolic sets I^{(l)}. Then, a positive integer N, μ ∈ R^n,


matrices A, B of sizes N × n and n × N, respectively, and C^{(l)} ∈ R^N_> exist such that system (2.84) has compact invariant sets K(C^{(l)}) ⊂ Q_n(C^{(l)}) which are homeomorphic to I^{(l)}. These sets are hyperbolic for the flow S^t_{LV}|_{K(C^{(l)})} and, moreover, the flows S^t_{LV}|_{K(C^{(l)})} and S^t_{F^{(l)}}|_{I^{(l)}} are orbitally topologically equivalent.

In other words, if a finite family of hyperbolic dynamics is given, a sufficiently large Lotka–Volterra model with appropriate parameters can generate this family by a variation of initial data. These hyperbolic dynamics may be chaotic.

Proof. We derive this theorem from Prop. 2.12 and Prop. 2.16. According to Prop. 2.16, for each ϵ > 0, there exist a matrix D ∈ D̄(K, n), a number M and vector fields F̃^{(l)} ∈ E(D^{(M)}, M, n), l = 1, …, p, such that

  |F̃^{(l)} − F^{(l)}|_{C¹(B^n)} < ϵ.   (2.106)

These estimates imply the existence of compact invariant hyperbolic sets Ĩ^{(l)} for the dynamics

  dq/dt = F̃^{(l)}(q),

together with homeomorphisms h^{(l)} : I^{(l)} → Ĩ^{(l)}, which map orbits of I^{(l)} onto orbits of Ĩ^{(l)} and define topological orbital equivalencies, provided that ϵ > 0 is sufficiently small.

By Prop. 2.13, we can find matrices A, B and vectors C^{(l)} ∈ R^N_> such that

  F̃^{(l)}_k = −μ_k + Σ_{m=1}^N B_km C^{(l)}_m exp( −Σ_{j=1}^n A_mj q_j ).   (2.107)

Then the compact invariant sets K(C^{(l)}) are the images of Ĩ^{(l)} under the map q ↦ x(q) from B^n to R^N_> defined by relation (2.88). The proof is complete.

Remarks. (i) For C sufficiently close to C^{(l)}, the manifold Q_n(C) also contains a compact hyperbolic invariant set K(C) homeomorphic to I^{(l)}.
(ii) Since the matrix D̄ can be chosen uniformly bounded (Prop. 2.16), the entries of the matrices A, B (and hence the coefficients K_ij) are also uniformly bounded with respect to N. From the biological point of view, this means that one can generate complicated dynamics within a large population with restricted species interactions.

2.4.6 Lotka–Volterra systems generating Lorenz dynamics

As is known, the Lorenz system

  dx_1/dt = σ(x_2 − x_1),  dx_2/dt = x_1(r − x_3) − x_2,  dx_3/dt = x_1 x_2 − βx_3   (2.108)

with an appropriate choice of the constants r, σ, β has trajectories with chaotic behavior. This system is polynomial, but its trajectories may leave the cone x_i > 0, i = 1, 2, 3, whereas the polynomial system (2.96) is defined on this cone. We can, however, circumvent this difficulty by a shift of the variables x_i. Notice that the Lorenz system (2.108) has an absorbing set A_R defined by

  A_R = {x : x_1² + x_2² + (x_3 − σ − r)² < R²},

where R is large enough. The attractor of the Lorenz system lies in this set A_R; in equation (2.108), therefore, we can restrict x = (x_1, x_2, x_3) to this domain. Let us make the change z_i = x_i + R_i, where R_1, R_2 > R and R_3 > σ + r + R. Then, on the set A_R, all variables z_i ≥ δ > 0. The Lorenz system then takes the form

  dz_1/dt = σ(z_2 − z_1) + σ(R_1 − R_2),   (2.109)
  dz_2/dt = (r + R_3) z_1 + R_1 z_3 − z_1 z_3 − z_2 − R_3 R_1 − R_1 r + R_2,   (2.110)
  dz_3/dt = z_1 z_2 − βz_3 − R_2 z_1 − R_1 z_2 + R_1 R_2 + βR_3.   (2.111)

According to Prop. 2.14, this system can be obtained from a Lotka–Volterra system with 3 resources and 2M^n·n = 162 species, since here M = n = 3. However, one can find a realization of the Lorenz system by (2.84) involving only 10 species. To show this, we note that for sufficiently large R > R_0(R_1, R_2, R_3, σ, r, β), the system (2.109)–(2.111) has the absorbing set

  Ã_R = {z : (z_1 − R_1)² + (z_2 − R_2)² + (z_3 − R_3 − σ − r)² < R²},

which is obtained by a shift from the absorbing set A_R of the Lorenz system. Assume R_2 ≫ R_1, R_3, r. Let us make the variable change z_i = exp(q_i). System (2.109)–(2.111) takes the form

  dq_1/dt = σ(exp(q_2 − q_1) − 1) + σ(R_1 − R_2) exp(−q_1),   (2.112)
  dq_2/dt = (r + R_3) exp(q_1 − q_2) + R_1 exp(q_3 − q_2) − exp(q_1 + q_3 − q_2) − 1 + (−R_3 R_1 − R_1 r + R_2) exp(−q_2),   (2.113)
  dq_3/dt = exp(q_1 + q_2 − q_3) − β − R_2 exp(q_1 − q_3) − R_1 exp(q_2 − q_3) + (R_1 R_2 + βR_3) exp(−q_3).   (2.114)

We will write the right-hand side of this system as

  G_k(q_1, q_2, q_3) = −μ_k + Σ_{l=1}^{10} B_kl C_l exp( −Σ_{j=1}^3 A_lj q_j ),   (2.115)


where k = 1, 2, 3 is the number of the equation. We take μ_1 = σ, μ_2 = 1, μ_3 = β and C_1 = C_2 = ⋯ = C_10 = 1. Moreover, we choose

  A_11 = 1, A_12 = −1, A_13 = 0, A_21 = 1, A_22 = A_23 = 0,

and

  B_11 = σ, B_12 = σ(R_1 − R_2), B_13 = ⋯ = B_1,10 = 0.

For k = 2, we set

  A_32 = A_42 = A_52 = A_62 = 1, A_33 = A_41 = A_61 = A_63 = 0, A_31 = A_43 = A_51 = A_53 = −1,

and

  B_21 = B_22 = 0, B_23 = r + R_3, B_24 = R_1, B_25 = −1, B_26 = −R_3 R_1 − R_1 r + R_2.

Finally, for k = 3, we take

  A_73 = A_83 = A_93 = A_10,3 = 1, A_71 = A_72 = A_81 = A_92 = −1, A_82 = A_91 = A_10,1 = A_10,2 = 0,

and

  B_31 = ⋯ = B_36 = 0, B_37 = 1, B_38 = −R_2, B_39 = −R_1, B_3,10 = R_1 R_2 + βR_3.

We then obtain that the system dq_k/dt = G_k(q_1, q_2, q_3) coincides with system (2.112)–(2.114). We choose σ, β, r such that the corresponding Lorenz attractor is chaotic. The proof is complete.

Notice that when we vary the parameters β, r, σ, the Lorenz system demonstrates different transitions and dynamical effects, namely, Andronov–Hopf and pitchfork bifurcations, transient chaos, and bifurcations to strange attractors. Let us fix β = 8/3 and σ = 10. It is well known that for r < 1, the point x = (0, 0, 0) is a globally attracting rest point of the Lorenz system. Thus, z = (R_1, R_2, R_3) is a globally attracting rest point for (2.112)–(2.114). For r = 1, we have a pitchfork bifurcation, and for larger r, we obtain Andronov–Hopf bifurcations, intermittency and a strange attractor for r > r_0, where r_0 ≈ 24.06. The same effects can be observed in the Lotka–Volterra dynamics.
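The chain of substitutions above (shift z = x + R, then q = log z) can be verified numerically: integrating the exponential system (2.112)–(2.114) reproduces a Lorenz trajectory. The sketch below uses the classical values σ = 10, β = 8/3, r = 28 and illustrative shifts R_1 = 40, R_2 = 60, R_3 = 100 (chosen only so that z stays positive on the attractor); the comparison window is kept short because chaos amplifies integration error.

```python
import numpy as np

sigma, beta, r = 10.0, 8.0/3.0, 28.0
R1, R2, R3 = 40.0, 60.0, 100.0     # shifts keeping z = x + R positive

def lorenz(x):
    x1, x2, x3 = x
    return np.array([sigma*(x2 - x1), x1*(r - x3) - x2, x1*x2 - beta*x3])

def qfield(q):
    """Right-hand side of (2.112)-(2.114)."""
    q1, q2, q3 = q
    e = np.exp
    return np.array([
        sigma*(e(q2 - q1) - 1.0) + sigma*(R1 - R2)*e(-q1),
        (r + R3)*e(q1 - q2) + R1*e(q3 - q2) - e(q1 + q3 - q2)
            - 1.0 + (-R3*R1 - R1*r + R2)*e(-q2),
        e(q1 + q2 - q3) - beta - R2*e(q1 - q3) - R1*e(q2 - q3)
            + (R1*R2 + beta*R3)*e(-q3),
    ])

def rk4(f, y, dt, steps):
    for _ in range(steps):
        k1 = f(y); k2 = f(y + 0.5*dt*k1)
        k3 = f(y + 0.5*dt*k2); k4 = f(y + dt*k3)
        y = y + (dt/6) * (k1 + 2*k2 + 2*k3 + k4)
    return y

x0 = np.array([1.0, 1.0, 1.0])
dt, steps = 1e-3, 1000             # compare at t = 1

x = rk4(lorenz, x0.copy(), dt, steps)
q = rk4(qfield, np.log(x0 + np.array([R1, R2, R3])), dt, steps)
z = np.exp(q)                      # undo q = log z

# z - R should coincide with the Lorenz trajectory x.
print(np.max(np.abs(z - np.array([R1, R2, R3]) - x)))
```

Over longer times the two integrations visibly drift apart — not because the identity fails, but because sensitive dependence on initial conditions amplifies the tiny differences in truncation error between the two coordinate systems.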

2.4.7 Permanency and strong persistence

We say that the Lotka–Volterra dynamics (2.84) is strongly persistent if each trajectory x(t) has an ω-limit set which is compact in R^N_> (see [79]).

If the ω-limit sets of all trajectories are contained in a compact subset of R^N_>, we say that system (2.84) is permanent [115]. Strong persistency is weaker than permanency. Permanency and persistence are important concepts of mathematical ecology and have received a great deal of attention during the last decades; see the monographs [115, 263]. In [20], permanency, partial permanency and the global stability of rest points in Lotka–Volterra systems were studied. Some general sufficient conditions for permanency were found by S. Schreiber [241]. These conditions are not satisfied in our case because of (2.83). When relation (2.83) is not valid, it is shown in [115] that at least one of the x_i(t) tends to 0 or ∞ as t → +∞.

To see the biological sense of relation (2.83), let us recall some results of Volterra on the so-called competition exclusion principle. This principle asserts that in a population consisting of N species sharing a single resource, only a single dominant species survives. To describe this in more detail, let us consider the following Lotka–Volterra system with a single resource:

  dx_i/dt = x_i ( r_i − A_i Σ_{j=1}^N B_j x_j ),  i = 1, …, N.   (2.116)

Assume that r_N/A_N > r_{N−1}/A_{N−1} > ⋯ > r_1/A_1 and A_i, B_i > 0 for all i. These conditions mean that the N-th species is dominant. Volterra proved that solutions of (2.116) satisfy x_i(t) → 0 for i < N and x_N(t) → r_N/(A_N B_N) as t → +∞. Notice that for n = 1, condition (2.83) implies r_i/A_i = μ, which means that in case (2.83), there are no dominant species within the population.

Below, we describe some effects that can appear in the Lotka–Volterra dynamics with a few resources.

(I) The case n = 1, a single resource. Let us consider the function G(q) = G_1(q, μ, A, B, C, N) defined by (2.86). To simplify the notation, let us set

  q_1 = q ∈ R,  A_m1 = −a_m,  B_1m = b_m,  μ_1 = μ.

Then,

  G(q) = −μ + Σ_{m=1}^N b_m C_m exp(a_m q).   (2.117)

We introduce the quantities a_+ = max{a_m} and a_− = min{a_m} and denote by m_+, m_− the corresponding indices, i.e. a_{m_±} = a_±. Let us set b_± = b_{m_±}. To simplify our analysis, we suppose that

  a_− < a_j < a_+,  j ≠ m_− and j ≠ m_+.

It is straightforward to check the following assertions on the large time behavior of q(t) and x_i = C_i exp(a_i q).


(1) a_+ > 0, b_+ > 0. Here, we have a blow-up effect for trajectories with sufficiently large q(0): q(t) → +∞ as t → t_0 for a finite t_0. The species abundance tends to infinity, |x(t)| → ∞ as t → t_0.
(2) a_− < 0, b_− < 0. We have a blow-up effect for trajectories with sufficiently large −q(0): q(t) → −∞ as t → t_0. In this case, x_{m_−} tends to infinity as t → t_0.
(3) a_+ > 0, b_+ < 0 and a_− < 0, b_− > 0. The Lotka–Volterra system is strongly persistent.

(II) Persistence for arbitrary n. First, we observe that, by Prop. 2.12, for positive initial data ϕ > 0, the corresponding Lotka–Volterra dynamics is strongly persistent if the norms |q_i(t, ϕ)| are uniformly bounded. Let us introduce the quantities

  A_k^+ = min_{l=1,…,N} A_lk,  A_k^− = max_{l=1,…,N} A_lk.

Denote by l_±(k) the values of the indices such that

  A_{l_+(k)k} = A_k^+,  A_{l_−(k)k} = A_k^−.

Let us assume

  −A_k^+ > Σ_{j≠k} |A_{l_+(k)j}|   (2.118)

and

  A_k^− > Σ_{j≠k} |A_{l_−(k)j}|.   (2.119)

Moreover, let

  −(A_{l_+(k)k} − A_{lk}) > Σ_{j≠k} |A_{l_+(k)j} − A_lj|,  l ≠ l_+(k),   (2.120)

and

  A_{l_−(k)k} − A_{lk} > Σ_{j≠k} |A_{l_−(k)j} − A_lj|,  l ≠ l_−(k).   (2.121)

Finally, assume the sign conditions

  B_{k,l_+(k)} C_{l_+(k)} < 0,  B_{k,l_−(k)} C_{l_−(k)} > 0.   (2.122)

Proposition 2.18. If conditions (2.118)–(2.122) are fulfilled, then system (2.84) with n resources and N species is strongly persistent.

Proof. Let us prove that, under assumptions (2.118)–(2.122), all solutions q(t) of (2.87) are bounded. To this end, we show that for a sufficiently large a = a(p), the trajectory q(t, p) of the Cauchy problem (2.87) cannot leave the domain

  Π_a = {q : −a ≤ q_i ≤ a}.   (2.123)

In fact, let us assume that q(t, p) can attain the boundary of Π_a. This means that there exist a time moment t_0(a) and an index k such that

  q_k(t_0) = ±a,  |q_i(t)| ≤ a  (i ≠ k, 0 < t ≤ t_0),   (2.124)

and

  dq_k(t)/dt |_{t=t_0} ≥ 0.   (2.125)

Consider the case q_k(t_0) = a in (2.124); the case −a can be considered analogously. From (2.120) and the first inequality in (2.122), it follows that for all ϵ > 0, one has

  −B_{k,l_+(k)} C_{l_+(k)} exp( −Σ_{j=1}^n A_{l_+(k),j} q_j(t_0) ) > ϵ^{−1} |B_km C_m| exp( −Σ_{j=1}^n A_mj q_j(t_0) )   (2.126)

for any q(t_0) satisfying (2.124), for sufficiently large a > a_0(ϵ) and all m ≠ l_+(k). Let us consider the right-hand side of (2.87) at the moment t_0 and at the point q(t_0, p) satisfying (2.124). Due to (2.126) and (2.118), it is clear that G_k(q, μ, A, B, C, N) < 0 for sufficiently large a > 0. Therefore, dq_k(t_0)/dt < 0, and we have obtained a contradiction with (2.125). Similarly, one can show that −b < q_k(t) for some b > 0 and all k. Thus, the trajectories cannot leave Π_a.

2.4.8 Strong persistency and chaos

In this subsection, we show that for any N and n > 2, there are Lotka–Volterra systems with N species and n resources which are strongly persistent and, at the same time, exhibit chaotic behavior.

Theorem 2.19. Let F^{(l)}, l = 1, …, p, be C¹-vector fields on B^n directed inward on ∂B^n and having compact invariant hyperbolic sets I^{(l)}. Then there exist a positive integer N, μ ∈ R^n, and matrices A, B of sizes N × n and n × N, respectively, such that the dynamics defined by system (2.84) satisfies all conclusions of Theorem 2.17 and, moreover, this dynamics is strongly persistent.

Proof. We take the number N, the matrices A, B and the vector μ as in Theorem 2.17. We modify system (2.87) in the following way:

  dq_k/dt = G_k(q, μ, A, B, C, N) + ϵ( C_{N+2k} exp(−b q_k) − C_{N+2k−1} exp(b q_k) ),   (2.127)

where k = 1, …, n, ϵ > 0 is a small parameter, b is a sufficiently large parameter, and C_{N+1}, …, C_{N+2n} > 0. The parameter b should be chosen as follows. Each function G_k is a linear combination of the exponents exp(−Σ_{j=1}^n A_mj q_j); we set b > 2 max_m { Σ_{j=1}^n |A_mj| }. Consider the domain Π_a defined by (2.123), and assume that q(0) ∈ Π_a. Reasoning as in the proof of Prop. 2.18, one can show that for some sufficiently large a = a(ϵ, C_{N+1}, …, C_{N+2n}), the corresponding trajectory q(t, q(0)) of system (2.127) cannot leave Π_a.

Let us consider a Lotka–Volterra system (2.84) with n resources, with new matrices Ã, B̃ and a new species number Ñ = N + 2n, for which equations (2.127) give the corresponding q-system (2.87) (Prop. 2.12). Since all trajectories of system (2.127) are bounded, this new Lotka–Volterra system is strongly persistent. On the other hand, if q ∈ B^n and C_{N+1}, …, C_{N+2n} < 1, the vector field defining system (2.127) is a small smooth perturbation of the vector field defining system (2.87). Therefore, for sufficiently small ϵ, the structural stability of hyperbolic sets proves the assertion.

Let us note that transitions from convergent behavior to unbounded-in-time trajectories occur when we change the initial data $C$. The existence of such transformations does not formally follow from our previous results, but it can be shown by the following elementary example. Consider the case $n = 1$ (a single resource). Denote $q = q_1 \in \mathbb{R}$ and consider the following equation for $q$:

$$\frac{dq}{dt} = -\mu + f(q), \tag{2.128}$$

where

$$f(q) = G_1(q) = \kappa C_1 \exp(-\lambda q) - \tilde{\kappa}(C_3 - C_2)\exp(\tilde{\lambda} q). \tag{2.129}$$

Here, $N = 3$ and $\kappa, \tilde{\kappa}, \lambda, \tilde{\lambda} > 0$. If $C_3 > C_2$ and $\mu$ is sufficiently small, then dynamics (2.128) has a rest point as a global attractor. If $C_3 < C_2$, we have a blow-up. If the third species is removed from the population, i.e. $C_3 = 0$, we also obtain a blow-up.

We can notice that the coexistence of many similar species increases, in a sense, the stability. Let us consider, for example, the case of $2N + 1$ species and one resource with

$$f(q) = \kappa(C_1 + C_2 + \dots + C_N)\exp(-\lambda q) - \tilde{\kappa}(C_{N+1} + \dots + C_{2N} - C_{2N+1})\exp(\tilde{\lambda} q), \tag{2.130}$$

for some $\lambda, \tilde{\lambda} > 0$. Then, if all $C_i$ have the same order, for example, $c_0 < C_i < c_1$, where $c_0, c_1$ do not depend on $N$, removing $m \ll N$ species $C_i$ does not change the dynamics. Indeed, let us set $C_{i_k} = 0$ for some indices $i_1, i_2, \dots, i_m > 1$. If $N$ is sufficiently large, this does not influence the signs of $C_{N+1} + \dots + C_{2N} - C_{2N+1}$ and $C_1 + \dots + C_N$.
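The one-resource example (2.128)–(2.129) is easy to probe numerically. The following sketch (the parameter values are illustrative choices, not taken from the text) integrates (2.128) by the explicit Euler method; for $C_3 > C_2$ the trajectory converges to a rest point, and for $C_3 < C_2$ it blows up:

```python
import math

def f(q, dC, mu=0.1, kappa=1.0, kappa_t=1.0, lam=1.0, lam_t=1.0, C1=1.0):
    # right-hand side of (2.128) with f from (2.129); dC stands for C3 - C2
    return -mu + kappa * C1 * math.exp(-lam * q) - kappa_t * dC * math.exp(lam_t * q)

def integrate(dC, q0=0.0, dt=1e-3, T=50.0, blowup=50.0):
    """Explicit Euler; returns the final state, or inf if the trajectory blows up."""
    q = q0
    for _ in range(int(T / dt)):
        q += dt * f(q, dC)
        if q > blowup:
            return math.inf
    return q

print(integrate(0.5))   # C3 > C2: converges to the stable rest point (about 0.276)
print(integrate(-0.5))  # C3 < C2: blow-up
```

For these parameters the rest point solves $e^{-q} - 0.5\,e^{q} = 0.1$, i.e. $q^* = \ln 1.318 \approx 0.276$, and the Euler trajectory settles there.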

2.4.9 Concluding remarks

We have studied the Lotka–Volterra systems with $N$ species and $n$ resources. The main result is an analytical proof of the coexistence of many hyperbolic dynamics on some invariant sets in the same Lotka–Volterra system (the entries of the interaction matrix of these systems can be chosen uniformly bounded as $N \to \infty$). This allows one, in particular, to prove the existence of chaotic behavior for certain Lotka–Volterra systems. These Lotka–Volterra dynamics exhibit the main features of chaos, namely, sensitive dependence on initial conditions and transitivity. The $\omega$-limit sets can have a large dimension ($\geq N - n$), and the model dynamics in the whole $\mathbb{R}^N_>$ can have a long, nonfading memory. We also present a main example of a Lotka–Volterra system with $N = 10$ species and $n = 3$ resources that has the Lorenz dynamics. This example shows that in a Lotka–Volterra system, we can observe different bifurcations resulting from variations of the interaction matrix $K$.

Let us compare these Lotka–Volterra systems with the Hopfield model of attractor neural networks with $N$ neurons. This system is also defined by interaction matrices $K$. Similar to (2.82), a substitution allows one to show that the Hopfield system can generate different hyperbolic dynamics for appropriate $K$ and large $N$ [281, 288]. These Hopfield systems have global attractors of dimension $\leq n$. Moreover, in the Hopfield systems, in order to obtain a prescribed hyperbolic set, we have to vary $N$ and $K$. In contrast to this, some Lotka–Volterra systems with appropriate parameters can realize large classes of hyperbolic dynamics only by a variation of initial data (Prop. 2.16 and Theorem 2.17). On the other hand, the Lotka–Volterra systems with $n$ resources have no global attractor in the positive cone, and one can show that they are not structurally stable: small perturbations can break the multidimensional invariant manifolds generated by these Lotka–Volterra systems. In the next section, we consider a more general ecological model which, in certain cases, can be reduced to a weakly perturbed Lotka–Volterra system with $n$ resources and can have a global attractor in the positive cone.

The second question treated here is persistence and permanency for the Lotka–Volterra systems with $n$ resources. The investigation of the dynamics of systems of many species exploiting a few resources is an important ecological problem [105, 126]. The competitive exclusion principle asserts that many species cannot survive together on a few resources. The concepts of permanency and persistency were proposed in order to mathematically formulate the meaning of the survival of all species [79, 243]. In Nature, we observe that a number of species can share the same resources and survive together (phytoplankton can be considered as an example). We have presented a general method that allows one to find different examples of persistent Lotka–Volterra systems. It is shown that, under some conditions, the Lotka–Volterra model with $n$ resources and $N > n$ species exhibits a complicated large time behavior and, at the same time, this dynamics is strongly persistent. Therefore, large ecosystems may be stable, exhibit many kinds of chaotic behavior and have a long memory. Notice, however, that the strong property of ecological stability, namely permanence, is not shown.

2.5 Standard model

2.5.1 Model formulation

We consider the following model:

$$\frac{dx_i}{dt} = x_i\left(-r_i + \rho_i(v) - \sum_{j=1}^N \gamma_{ij} x_j\right), \tag{2.131}$$

$$\frac{dv}{dt} = D(S - v) - \sum_{j=1}^N c_j x_j \rho_j(v), \tag{2.132}$$

where

$$\rho_j(v) = \frac{a_j v}{K_j + v} \tag{2.133}$$

are Michaelis–Menten functions, $r_i$ are the species mortalities, $D$ is the resource turnover rate, $S$ is the supply of resource $v$, and $c_i$ is the content of the resource in the $i$-th species. The terms $\gamma_{ij} x_j$ define self-regulation and concurrence, the coefficients $a_i$ are specific growth rates, and $K_i$ are self-saturation constants. We assume that

$$c_i > 0, \quad \sum_{i=1}^N c_i = 1.$$

For $\gamma_{ij} = 0$, this system was considered in works [124, 125] devoted to the plankton paradox (see the previous section). For the case of $M$ resources $v_j$, we have the more complicated equations

$$\frac{dx_i}{dt} = x_i\left(-r_i + \phi_i(v) - \sum_{j=1}^N \gamma_{ij} x_j\right), \tag{2.134}$$

$$\frac{dv_j}{dt} = D_j(S_j - v_j) - \sum_{k=1}^N c_{jk} x_k \phi_k(v), \tag{2.135}$$

where $v = (v_1, v_2, \dots, v_M)$ and

$$\phi_j(v) = \min\left(\frac{a_j v_1}{K_{1j} + v_1}, \dots, \frac{a_j v_M}{K_{Mj} + v_M}\right). \tag{2.136}$$

This model is widely used for primary producers like phytoplankton, and it can also be applied to describe competition for terrestrial plants [124, 125, 269, 270]. Relation (2.136) corresponds to the von Liebig minimum law, but we can consider even more general $\phi_j$ subjected to the conditions

$$\phi_j(v) \in C^1, \quad |\phi_j(v)| \leq C_+, \tag{2.137}$$

where $C_+ > 0$ is a positive constant. We assume that

$$c_{ik} > 0, \quad \sum_{k=1}^N c_{ik} = 1.$$

The case of equations (2.131), (2.132) corresponds to a single resource, $M = 1$.
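A direct simulation of the single-resource model (2.131)–(2.132) is straightforward. The sketch below (NumPy; all parameter values are illustrative choices, not the book's experiments) uses explicit Euler with diagonal self-regulation; the solutions stay positive and the resource stays below its supply $S$:

```python
import numpy as np

def step(x, v, dt, r, gamma, a, K, c, D, S):
    # one Euler step of (2.131)-(2.132) with rho_i(v) = a_i v/(K_i + v)
    rho = a * v / (K + v)
    dx = x * (-r + rho - gamma * x)          # diagonal self-regulation gamma_i = gamma
    dv = D * (S - v) - np.sum(c * x * rho)
    return x + dt * dx, v + dt * dv

rng = np.random.default_rng(0)
N = 5
r = 1.0 + 0.1 * rng.random(N)                # mortalities
a = 2.0 + 0.1 * rng.random(N)                # specific growth rates
K = 4.0 + rng.random(N)                      # self-saturation constants
c = np.full(N, 1.0 / N)                      # sum_i c_i = 1
x, v = np.full(N, 0.1), 1.0
for _ in range(200000):                      # integrate to t = 200
    x, v = step(x, v, 1e-3, r, 1.0, a, K, c, 10.0, 50.0)
print(x, v)  # all species settle at positive levels; v stays below S = 50
```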

2.5.2 General properties of Standard model

Let us first describe some sufficient conditions which guarantee that systems (2.131), (2.132) and (2.134), (2.135) are dissipative and have a global attractor. Assume that the matrix $\Gamma$ with the entries $\gamma_{ij}$ satisfies the following condition (this is a classical assumption, see [115]):

Assumption. The matrix $\Gamma$ has a positive dominant diagonal:

$$\gamma_{ii} - \sum_{j \neq i} |\gamma_{ij}| = \rho_i > 0. \tag{2.138}$$

Then, we can assert that solutions to (2.134), (2.135) are nonnegative and bounded.

Lemma 2.20. Assume the $\phi_j$ satisfy (2.137). Let us consider for (2.134), (2.135) the Cauchy problem with positive initial data for $x$ and bounded positive initial resources:

$$x_i(0) > 0, \quad v_j(0) \in (0, S_j). \tag{2.139}$$

Then, solutions of this Cauchy problem are positive and a priori bounded, that is,

$$0 < v_j(t) < S_j, \quad 0 < x_i(t) < M_i, \quad t > 0, \tag{2.140}$$

where $M_i$ are positive constants.

Proof. The proof is standard. Let us prove that $v_j(t) > 0$. Assume that this fact is violated. Then, there exist an index $j_0$ and a time moment $t_0 > 0$ such that

$$v_{j_0}(t_0) = 0, \quad \frac{dv_{j_0}}{dt} \leq 0, \quad v_j(t_0) \geq 0 \text{ for all } j \neq j_0. \tag{2.141}$$

We substitute these inequalities into the $j_0$-th equation (2.135) and obtain a contradiction. In a similar way, we can prove that $v_j(t) < S_j$. Here, we use the positivity of $c_{jk}$ and $\phi_k$, which implies

$$\frac{dv_j}{dt} \leq D_j(S_j - v_j).$$

Positivity of $x_i$ follows from $x_i(t) = x_i(0)\exp(S_i(t))$. Let us prove that $x_i(t) < M_0$ for some $M_0 > 0$. Let $E(t) = \max\{x_1(t), \dots, x_N(t)\}$. Let us estimate $dE/dt$ for large $E$. Let $i_0(t)$ be an index such that $E(t) = x_{i_0}(t)$. According to (2.137), the $\phi_i$ are uniformly bounded by $C_+$. Therefore, within any open interval where $i_0$ is fixed, one has

$$\frac{dx_{i_0}}{dt} \leq x_{i_0} R_{i_0}, \quad R_{i_0} \leq C_+ - \rho_i x_{i_0}(t),$$

where $\rho_i > 0$ due to assumption (2.138) on $\Gamma$. Thus, if $E$ is large enough, $R_{i_0}$ is a negative number and $\frac{dx_{i_0}}{dt} < 0$. This implies that $E$ is bounded and completes the proof.

We then obtain the following corollary.

Lemma 2.21. Under the conditions of the previous lemma, system (2.134), (2.135) is dissipative and has a compact global attractor.

On the attractor structure, one can say more for the particular case (2.131), (2.132). Numerical simulations for this system show that all trajectories tend to equilibria. To understand this fact, let us recall the fundamental concept of cooperative systems (Section 1.5 and [112, 114]). Condition (1.33) does not hold for (2.131), (2.132), but if we make the change $y_i = -x_i$, then, in the new variables, this system becomes cooperative. Consequently, it is important to study equilibria for this model.

2.5.3 Equilibria for the case of a single resource

We consider the case when $\Gamma$ is purely diagonal:

$$\gamma_{ij} = \gamma_i \delta_{ij}, \quad \gamma_i > 0. \tag{2.142}$$

Let us fix an index $i$. The equation

$$\frac{dx_i}{dt} = x_i(b_i - \gamma_i x_i)$$

has a single stable rest point $x_i = 0$ when $b_i < 0$ and two rest points, $0$ and $b_i/\gamma_i$, when $b_i > 0$. In the second case, the nonzero rest point is stable and it is an attractor. Therefore, for (2.131), (2.132), we have at most $2^N$ equilibria.

Below, we use the notation $f_+$, setting $f_+ = f$ if $f \geq 0$, and $f_+ = 0$ if $f < 0$. Then, we can describe all stable equilibria of (2.131), (2.132) by the following nonlinear equation:

$$D(S - v_{eq}) = \sum_{i=1}^N c_i\, \frac{a_i v_{eq}}{K_i + v_{eq}}\, \gamma_i^{-1}\left(\frac{a_i v_{eq}}{K_i + v_{eq}} - r_i\right)_+, \quad 0 < v_{eq} < S. \tag{2.143}$$

The solutions of this equation define $x_i$ by

$$x_i = \gamma_i^{-1}\left(a_i v_{eq}(K_i + v_{eq})^{-1} - r_i\right)_+. \tag{2.144}$$

Figure 2.6. Plot of $x_1(t)$ without self-regulation, $t \in [0, 30\,000]$.

Notice that this equation is complicated and may describe the coexistence of $1, 2, \dots, N$ species. In fact, equation (2.143) is not polynomial even if

$$K_i \equiv K, \quad a_i = a, \quad \gamma_i = \gamma, \quad |r_i - r| \ll r, \tag{2.145}$$

where $r$ is a positive number, since this equation involves the operation $f \to f_+$. Results of a numerical investigation of equation (2.143) are presented below.

2.5.4 Numerical results for the case of a single resource

For model (2.131), (2.132), the main result, obtained numerically, is that a weak self-regulation (when $\gamma_{ij} = g_i \delta_{ij}$) leads to stabilization and permanency (coexistence of all species). Moreover, numerical results show that if $g_i = 0$, a single species survives while all the rest $x_i(t)$ vanish as $t \to \infty$ (Figures 2.6 and 2.7).

Equation (2.143) for the resource level was studied numerically for different values of the supply $S$. All the remaining parameters were fixed as follows: $K_i = 4 + \delta K_i$, $a_i = 2 + 0.5\,\delta a_i$ and $r_i = 1 + 0.5\,\delta r_i$, where $\delta K_i$, $\delta a_i$ and $\delta r_i$ are independent random numbers drawn from the uniform distribution on $[0, 1]$; $D$ takes the values $1$, $10$ and $100$; $\gamma_i = 1$. Figure 2.8 shows the dependence of the number $N_s$ of coexisting species on the resource supply $S$ for fixed $D$.

Figure 2.7. Plot of $x_1(t)$ for the self-regulated case.

Figure 2.8. Vertical axis: the number of coexisting species. Horizontal axis: supply level $S$. The turnover $D = 10$, $K_0 = 4$, $a_0 = 2$, $r_0 = 1$, $k_1 = 0.2$, $k_2 = 0.2$, $k_3 = 0.1$. The species number $N = 20$.
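The dependence shown in Figure 2.8 can be reproduced directly from equation (2.143): its right-hand side balance is monotone in $v_{eq}$, so the root can be found by bisection and the surviving species counted from (2.144). A sketch (the parameter distributions follow the text; the seed, the supply values, and the assumption $x_i = \gamma_i^{-1}(\rho_i(v_{eq}) - r_i)_+$ for the rest points are my choices):

```python
import numpy as np

def balance(v, S, D, c, a, K, r, gamma):
    # D(S - v) minus the equilibrium uptake in (2.143), with
    # x_i = gamma_i^-1 (rho_i(v) - r_i)_+ the rest point of each species equation
    rho = a * v / (K + v)
    return D * (S - v) - np.sum(c * rho * np.maximum(rho - r, 0.0) / gamma)

def coexisting(S, D=10.0, N=20, seed=1):
    rng = np.random.default_rng(seed)        # parameter distributions as in the text
    a = 2.0 + 0.5 * rng.random(N)
    K = 4.0 + rng.random(N)
    r = 1.0 + 0.5 * rng.random(N)
    c = np.full(N, 1.0 / N)
    gamma = np.ones(N)
    lo, hi = 0.0, S                          # balance(0) > 0 >= balance(S), decreasing in v
    for _ in range(60):                      # bisection for the root v_eq
        mid = 0.5 * (lo + hi)
        if balance(mid, S, D, c, a, K, r, gamma) > 0:
            lo = mid
        else:
            hi = mid
    v_eq = 0.5 * (lo + hi)
    return int(np.sum(a * v_eq / (K + v_eq) > r))   # species with x_i > 0 in (2.144)

print([coexisting(S) for S in (1.0, 5.0, 20.0, 100.0)])  # N_s grows with the supply
```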

2.5.5 Reductions of Standard model

We consider system (2.134), (2.135). Our goal is to simplify this system. Assume that the resource turnover is much faster than the species evolution. This case appears under the following conditions. We assume that all $\gamma_{ij}$ have the same order $\gamma$ and $D \gg 1$. Then, the $x_i$ will be slow modes, whereas the $v_i$ are fast. First, we solve equation (2.135), setting $dv_j/dt = 0$ and assuming that $x$ is “frozen.” Remarkably, this can be done in a very easy way. We obtain, for fixed $x$,

$$D_j(S_j - v_j) = \sum_{k=1}^N c_{jk} x_k \phi_k(v). \tag{2.146}$$

Since $D$ is large, we have $v_j \approx S_j$. Therefore, we can solve (2.146) by iterations. The first approximation is given by

$$v_j = S_j - \tilde{v}_j, \quad S = (S_1, \dots, S_M), \quad \tilde{v}_j = D_j^{-1}\sum_{k=1}^N c_{jk} x_k \phi_k(S). \tag{2.147}$$

We substitute this result in (2.134) and obtain

$$\frac{dx_i}{dt} = x_i\left(-r_i + \phi_i(S - \tilde{v}) - \sum_{j=1}^N \gamma_{ij} x_j\right). \tag{2.148}$$

We present $\phi_i(S - \tilde{v})$ as

$$\phi_i(S - \tilde{v}) = \phi_i(S) - \sum_{k=1}^M V_{ik}(S)\tilde{v}_k, \quad V_{ik} = \frac{\partial \phi_i(S)}{\partial S_k}. \tag{2.149}$$

Substituting (2.149) into (2.148), one finally has a Lotka–Volterra system for the $x_i$, that is,

$$\frac{dx_i}{dt} = x_i\left(R_i - \sum_{j=1}^N K_{ij} x_j\right). \tag{2.150}$$

Here,

$$R_i = -r_i + \phi_i(S) \tag{2.151}$$

and

$$K_{ij} = \gamma_{ij} + \phi_j(S)\sum_{l=1}^M V_{il}(S) D_l^{-1} c_{lj}. \tag{2.152}$$

Lemma 2.22. If all entries $\gamma_{ij} > 0$, then, for sufficiently large turnovers $D_j$, all of the species survive only under the condition

$$r_i < \phi_i(S). \tag{2.153}$$

This assertion is obvious, and we omit the proof.

It is interesting to find equilibria of (2.150) since, according to [115], ecological stability (permanence) is possible only if system (2.150) has a positive equilibrium $x^*$, $x^*_i > 0$. First, consider the simple case of a single resource, $M = 1$, and $\gamma_{ij} = \gamma_i \delta_{ij}$. We denote, as above, $c_{lj} = c_j$, $D_1 = D$, $S_1 = S$. We then obtain

$$K_{ij} = \gamma_i \delta_{ij} + D^{-1}\phi_j(S) c_j \frac{\partial \phi_i(S)}{\partial S}. \tag{2.154}$$

Thus, equilibrium states are defined by the equation

$$R_i = \gamma_i x_i + D^{-1}\frac{\partial \phi_i(S)}{\partial S}\bar{X}, \tag{2.155}$$

where

$$\bar{X} = \sum_{j=1}^N \phi_j(S) c_j x_j.$$

This equation has the solution

$$x_i = \left(R_i - D^{-1}\frac{\partial \phi_i(S)}{\partial S}\bar{X}\right)\gamma_i^{-1}, \tag{2.156}$$

where, by the definition of $\bar{X}$, one has

$$\sum_{i=1}^N c_i \phi_i(S)\left(R_i - D^{-1}\frac{\partial \phi_i(S)}{\partial S}\bar{X}\right)\gamma_i^{-1} = \bar{X},$$

and therefore

$$\bar{X} = \left(\sum_{i=1}^N c_i \phi_i(S) R_i \gamma_i^{-1}\right)\left(1 + D^{-1}\sum_{i=1}^N c_i \gamma_i^{-1}\phi_i(S)\frac{\partial \phi_i(S)}{\partial S}\right)^{-1}. \tag{2.157}$$

Permanency is possible only under the condition

$$R_i - D^{-1}\frac{\partial \phi_i(S)}{\partial S}\bar{X} > 0 \quad \text{for all } i = 1, \dots, N. \tag{2.158}$$

If all $\gamma_{ij} = 0$, system (2.150) becomes the Lotka–Volterra system with $M$ linear resources studied in the previous section. Then, the matrix $K$ has rank $M$ and it can be presented as $K = AB$, where the entries of $A$, $B$ are defined by

$$A_{il} = V_{il}(S)D_l^{-1}, \quad B_{lj} = c_{lj}\phi_j(S),$$

where $i = 1, \dots, N$, $l = 1, \dots, M$. These systems can be persistent only if the condition

$$R_i = \sum_{j=1}^M A_{ij}\nu_j$$

holds for some $\nu_j$ (Section 2.4). Notice that a chaotic behavior is possible only if some entries $B_{lj} < 0$, i.e. $\phi_j < 0$. This condition looks slightly exotic, but we can interpret it as follows: some resources are “poison” for some species. When the natural inequalities

$$\phi_i(S) > 0, \quad \frac{\partial \phi_i}{\partial S_j} \geq 0 \tag{2.159}$$

hold, we have monotone dynamics, and one can therefore expect that a chaotic behavior does not appear. One can suggest another interpretation of the condition $\phi_j < 0$: we can consider the variables $S_i$ as different factors of an environment that influence our species. A complicated structure of species-environment interaction, when (2.159) is violated, can lead to a chaotic attractor.
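The closed form (2.157) is easy to verify against the equilibrium system (2.155). A small NumPy check (random but fixed parameters, my choices; mortalities kept small so that the permanency condition (2.158) holds):

```python
import numpy as np

rng = np.random.default_rng(2)
N, D, S = 6, 50.0, 10.0                    # large turnover D, a single resource
a = 2.0 + rng.random(N)
K = 4.0 + rng.random(N)
r = 0.2 * rng.random(N)                    # small mortalities, so (2.158) holds here
gamma = 1.0 + rng.random(N)
c = rng.random(N); c /= c.sum()

phi = a * S / (K + S)                      # phi_i(S) for a Michaelis-Menten uptake
dphi = a * K / (K + S) ** 2                # d phi_i / dS
R = -r + phi                               # R_i from (2.151)

# closed form (2.157) for Xbar, then the equilibrium x from (2.156)
Xbar = np.sum(c * phi * R / gamma) / (1.0 + np.sum(c * phi * dphi / gamma) / D)
x = (R - dphi * Xbar / D) / gamma

# x must solve (2.155): R_i = gamma_i x_i + D^-1 phi_i'(S) * sum_j phi_j c_j x_j
assert np.allclose(R, gamma * x + dphi * np.sum(phi * c * x) / D)
print(x)  # a positive equilibrium
```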

2.6 Systems of chemical kinetics

Biochemical and chemical kinetic models can often be represented as systems of ordinary differential equations with quadratic nonlinearities. This quadratic nonlinearity is a consequence of the mass action law. As an example, we consider a system of equations describing the interaction of a noncoding RNA with many mRNAs [332]. Some ncRNAs, and especially miRNAs, have many (up to a few hundred) mRNA targets [22]. Direct experimental validation and reliable theoretical prediction of the targets is a difficult problem. The kinetic models including association of an ncRNA with two mRNAs and with many mRNAs were analyzed in [249] and [332], respectively. In these works, ncRNAs are considered as global regulators. For $m$ distinct mRNAs, the equations take the following form:

$$\frac{dN_i}{dt} = w_i - k_i N_i - r_i N_i N_*, \tag{2.160}$$

$$\frac{dN_*}{dt} = w - k_* N_* - N_*\sum_{i=1}^m r_i N_i, \tag{2.161}$$

where $N_i$ and $N_*$ are the mRNA and ncRNA populations, $w_i$ and $w$ are the transcription rates, $k_i > 0$ and $k_* > 0$ are the degradation rate constants, and $r_i > 0$ are association rate constants. Naturally, we are looking for positive solutions, assuming that $N_i(0), N_*(0) > 0$, since only such solutions have biological meaning.

This system was studied numerically in [332]. In the dynamics of this relatively simple system, oscillations are impossible. Indeed, it is easy to see that system (2.160), (2.161) defines a global dissipative strongly monotone semiflow in a box $\Pi \subset \mathbb{R}^{m+1}_+$, which follows from the condition $r_i > 0$. So, this system has convergent trajectories.

In this section, we shall show that, nonetheless, for general systems of chemical kinetics, the large time behavior can be very complicated, even if the nonlinearities only involve quadratic terms. This was first established by M. D. Korzuchin [148], see also [330]. In particular, his theorem explains how the famous Belousov–Zhabotinsky reaction can appear (for an overview of this topic, see the monograph [330]).

We consider a method (an algorithm) of dynamics control for large quadratic systems, for example, for large metabolic networks. We use (i) algebraic methods; (ii) the theory of slow invariant manifolds; and (iii) some special approximations.
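The convergence of (2.160)–(2.161) noted above is easy to confirm numerically; a sketch (rates and dimensions are illustrative choices):

```python
import numpy as np

def rhs(state, w, k, r, w_star, k_star):
    # (2.160)-(2.161): m mRNA populations N_i and one ncRNA population N_star
    N, N_star = state[:-1], state[-1]
    dN = w - k * N - r * N * N_star
    dN_star = w_star - k_star * N_star - N_star * np.sum(r * N)
    return np.append(dN, dN_star)

m = 4                                            # all rates below are illustrative
w = np.array([1.0, 0.5, 2.0, 1.5]); k = np.full(m, 0.3)
r = np.array([0.2, 0.1, 0.4, 0.3]); w_star, k_star = 1.0, 0.5
state = np.ones(m + 1)
for _ in range(100000):                          # explicit Euler, t in [0, 100]
    state = state + 1e-3 * rhs(state, w, k, r, w_star, k_star)
print(state)  # settles at a positive rest point; no oscillations
```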


We prove a theorem which can be interpreted as describing the “black power” of a class of large metabolic networks. Namely, this theorem asserts that such networks can approximate any prescribed dynamics with arbitrary given accuracy $\epsilon > 0$ when the reagent number $N$ is large enough and the kinetic rates are adjusted in a special way. This assertion is analogous to the Korzuchin theorem [148, 330]. However, this result slightly reinforces the Korzuchin theorem since we describe a method of complete control of dynamics for systems which contain a smaller number of free parameters, and we show how to realize all robust dynamics on infinite time intervals.

Notice that direct numerical simulations do not allow us to study all possible variants of large time dynamical behavior. Indeed, if a model includes 1000 differential equations and 10 000 different kinetic coefficients, it is difficult to investigate all possible situations by an exhaustive search. Our goal is to describe an algorithmic approach which allows us to avoid this primitive brute-force method.

2.6.1 Model

We consider the following system:

$$\frac{dx_i}{dt} = -x_i S_i(x) + R_i(x), \tag{2.162}$$

where $x = (x_1, \dots, x_N)^{tr}$, the $x_i$ are unknown reagent concentrations, and $R_i$, $S_i$ are some functions. We assume that the functions $R_i$ are polynomials:

$$R_i(x) = \sum_{\mathbf{j}} R_{i\mathbf{j}}\, x^{\mathbf{j}}, \quad R_{i\mathbf{j}} \geq 0, \tag{2.163}$$

where $\mathbf{j} = \mathrm{col}(j_1, \dots, j_N)$ are multiindices and $x^{\mathbf{j}} = x_1^{j_1}\cdots x_N^{j_N}$. We suppose that $R_i$ does not involve the factor $x_i$, i.e. $j_i = 0$. Notice that the fundamental theorem of chemical kinetics [104, 330] says that if system (2.162) describes chemical kinetics, the coefficients $R_{i\mathbf{j}}$ must be nonnegative:

$$R_{i\mathbf{j}} \geq 0. \tag{2.164}$$

Often, the $R_i$ are quadratic. Some data on real metabolic networks show that the $S_i$ are usually linear functions and have the form

$$S_i(x) = k_i + \sum_{j=1}^N K_{ij} x_j, \tag{2.165}$$

where

$$k_i > 0, \quad K_{ij} \geq 0. \tag{2.166}$$

Condition (2.166) helps to obtain estimates for solutions. In the book [330], it is shown that this condition follows from fundamental ideas of chemical kinetics.

We set the initial conditions to

$$x(0) = \phi. \tag{2.167}$$

We say that the vector $x$ is positive if it lies in the positive cone $\mathbb{R}^N_> = \{x : x_i > 0\}$. Let us show that the problem is well-posed and defines a chemical kinetics.

Lemma 2.23. For positive $\phi$, the Cauchy problem (2.162), (2.167) has positive solutions $x(t)$ on some open interval $(0, T(\phi))$. These solutions are unique. Therefore, problem (2.162), (2.167) defines a local semiflow in the positive cone.

2.6.2 Decomposition

We seek an integer $n \ll N$ and a decomposition of variables $x = z \otimes y$, where

$$z \in \mathbb{R}^{N-n}_>, \quad y \in \mathbb{R}^n_>, \tag{2.168}$$

such that, in the new variables $(z, y)$, the functions $S_i$ and $R_i$ take a special form. Let us denote by $\mathcal{K}_z$ the set of indices

$$\mathcal{K}_z = \{i_1, i_2, \dots, i_{N-n}\}, \quad z_l = x_{i_l},$$

corresponding to the $z$-subspace. Respectively,

$$\mathcal{K}_y = \{j_1, j_2, \dots, j_n\}, \quad y_l = x_{j_l}.$$

Assume that $S_i$ and $R_i$ can be represented in the following form:

$$R_i(z, y) = \sum_{\mathbf{j}} \bar{R}_{i\mathbf{j}}\, y^{\mathbf{j}} = T_i(y), \quad i \in \mathcal{K}_z, \tag{2.169}$$

$$R_i(z, y) = \sum_{\mathbf{j}}\sum_{\mathbf{k}} \tilde{R}_{i\mathbf{j}\mathbf{k}}\, y^{\mathbf{j}} z^{\mathbf{k}}, \quad i \in \mathcal{K}_y, \tag{2.170}$$

$$S_i(z, y) = m_i + \sum_{j=1}^n M_{ij} y_j = P_i(y), \quad i \in \mathcal{K}_z, \tag{2.171}$$

$$S_i(z, y) = \bar{k}_i + \sum_{j=1}^n \bar{K}_{ij} y_j + \sum_{j=1}^{N-n} \tilde{K}_{ij} z_j, \quad i \in \mathcal{K}_y. \tag{2.172}$$

One can suppose that the $y$-variables are hubs in the networks, i.e. they participate in many reactions and mediate them, whereas the $z$-variables are “satellites.” This structure is typical for many biochemical network systems [158, 331]. For quadratic systems (2.162) with linear $S_i$ of the form (2.165) and quadratic $R_i$ such that

$$R_i(x) = r_i + \sum_{j \neq i} \bar{R}_j x_j + \sum_{k \neq i}\sum_{l \neq i} \bar{R}_{kl}\, x_k x_l, \tag{2.173}$$

this decomposition can be made by the vertex cover method (see below).
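The role of the vertex cover can be illustrated with a greedy sketch: treat each quadratic term $x_k x_l$ in (2.173) as an edge and pick the hubs so that every edge touches a hub; then every quadratic monomial contains at least one $y$-variable, as the form (2.170) requires. (The sample graph and the greedy 2-approximation below are illustrative choices, not the book's algorithm.)

```python
def greedy_vertex_cover(edges):
    """Greedy 2-approximation: repeatedly take both ends of an uncovered edge."""
    cover, uncovered = set(), set(edges)
    while uncovered:
        u, v = uncovered.pop()
        cover |= {u, v}
        uncovered = {e for e in uncovered if u not in e and v not in e}
    return cover

# each quadratic term x_k * x_l occurring in some R_i gives an edge (k, l)
edges = {(0, 1), (0, 2), (0, 3), (2, 4), (4, 5)}
hubs = greedy_vertex_cover(edges)   # hub (y) variables; the rest are satellites (z)
assert all(u in hubs or v in hubs for u, v in edges)
print(sorted(hubs))
```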


2.6.3 Reduction to a shorted system by slow manifolds (quasiequilibria)

Our goal in this subsection is to reduce the dynamics of the whole model to a shorted $y$-dynamics. Notice first that if the right-hand sides of (2.162) can be transformed to the form (2.169)–(2.172), then finding equilibria simplifies; for the initial system (2.162), it is a formidable problem. Namely, we obtain

$$z_i = \frac{T_i(y)}{P_i(y)} = Z_i(y), \tag{2.174}$$

where $T_i$ and $P_i$ are defined by (2.169) and (2.171), respectively. After we substitute the $Z_i$ in the equations for $y$,

$$y_i S_i(y, Z(y)) = R_i(y, Z(y)). \tag{2.175}$$

The reduction of dynamics can be done under the following assumption:

Assumption (S). Assume that the kinetic coefficients involved in the $y$-equations are small with respect to the $k_i$ involved in the $z$-equations:

$$\sup_i |\bar{k}_i| + \sup_{i,j} |\bar{K}_{ij}| + \sup_{i,j} |\tilde{K}_{ij}| < \kappa(m_0), \tag{2.176}$$

where $m_0 = \min_j m_j$.

Biologically, this means that the dynamics of the hub variables is slow with respect to the dynamics of the remaining ones.

Lemma 2.24. Suppose Assumption (S) holds. Consider the equations

$$\frac{dy_i}{dt} = -y_i S_i(y, Z(y)) + R_i(y, Z(y)) = F_i(y). \tag{2.177}$$

Suppose that (2.177) defines a global semiflow in an open bounded connected domain $D \subset \mathbb{R}^n$ with a smooth boundary $\partial D$. For sufficiently small $\kappa$, a locally invariant and locally attracting manifold $\mathcal{Z}_n$ of dimension $n$, defined by

$$z_i = Z_i(y) + \tilde{Z}_i(y), \quad y \in D, \tag{2.178}$$

exists, where

$$|\tilde{Z}_i(y)|_{C^1(D)} \to 0, \quad \kappa \to 0. \tag{2.179}$$

Dynamics (2.162) restricted to $\mathcal{Z}_n$ is defined by the equations

$$\frac{dy_i}{dt} = F_i(y) + \tilde{F}_i(y), \tag{2.180}$$

where

$$|\tilde{F}_i(y)|_{C^1(D)} \to 0, \quad \kappa \to 0. \tag{2.181}$$

The proof is standard and follows from Theorem 6.1.7 of [108]. We can describe an algebraic procedure that allows us to construct more and more accurate approximations to the solution. Let us write down our system in the $(z, y)$ variables as

$$\frac{dz_i}{dt} = -z_i Z_i(y) + G_i(y), \tag{2.182}$$

$$\frac{dy_i}{dt} = \kappa F_i(y, z). \tag{2.183}$$

According to the slaving principle (Section 1.3), we can present solutions of this system by

$$z_i = z_i^{(0)} + \kappa z_i^{(1)} + \dots,$$

where the main term $z_i^{(0)}$ is defined by $z_i^{(0)} = G_i(y) Z_i(y)^{-1}$. For the first correction $z_i^{(1)}$, one has

$$\frac{dz_i^{(1)}}{dt} = -z_i^{(1)} Z_i(y) + \kappa\sum_{j=1}^n \frac{\partial z_i^{(0)}}{\partial y_j} F_j(y, z^{(0)}(y)), \tag{2.184}$$

giving

$$z^{(1)} = \sum_{j=1}^n \frac{\partial z_i^{(0)}}{\partial y_j} F_j(y, z^{(0)}(y)).$$

It is clear that this procedure can be continued, and it gives us an asymptotic series for the solution $z$. This last lemma shows that, under some assumptions, the dynamics of a global network can be reduced to $n$ equations. In the coming section, we describe the method of control of the dynamics.

2.6.4 Control of slow dynamics

The main idea behind the mathematical construction of control is as follows. We adjust the coefficients that define the interaction between the slow modes $y$ and the fast modes $z$. The number of these coefficients is much larger than the number $n$ of slow modes. This fact allows us to apply some special approximations. To simplify our statement, we consider the case when the polynomial $T_i$ is a linear function of $y$, i.e. $\deg(T_i) = 1$ and therefore

$$T_i(y) = \tau_i + \sum_{j=1}^n T_{ij} y_j, \quad i \in \mathcal{K}_z. \tag{2.185}$$

Then, the components $Z_i$ are given by

$$Z_i(y) = \frac{\tau_i + \sum_{j=1}^n T_{ij} y_j}{m_i + \sum_{j=1}^n M_{ij} y_j}. \tag{2.186}$$

Then, for $i \in \mathcal{K}_y$, one has

$$S_i(y) = \bar{k}_i + \sum_{j=1}^n \bar{K}_{ij} y_j + E_i(y, \tilde{K}, T, \tau), \tag{2.187}$$

where

$$E_i(y, \tilde{K}, T, \tau, m, M) = \sum_{j=1}^{N-n} \tilde{K}_{ij} Z_j(y). \tag{2.188}$$

We consider the field $E(y)$ with components $E_i$ in the domain $D$. Besides $E$, we consider more general fields $G(y)$ defined by

$$G_i(y, \tilde{K}, T, \tau, m, M, C) = E_i(y, \tilde{K}, T, \tau, m, M) + C_i, \tag{2.189}$$

depending on constants $C_i$. These constants can have any signs.

The possibility to control $E$, and thus $S$, by the parameters $\tilde{K}$, $T$, $\tau$ follows from the next assertion.

Lemma 2.25. Let us fix an integer $n > 0$. Consider the set $\mathcal{G}_n$ of all vector fields $G(y)$ defined by (2.189) on a bounded open subdomain $D \subset \mathbb{R}^n$ for all possible values of $N$ and all possible positive values of the coefficients $\tau_i$, $\tilde{K}_{ij}$, $T_{ij}$, $m_p$, $M_{pl}$, $C_i$, where $i, l = 1, \dots, n$, $j = 1, \dots, N - n$ and $p = 1, \dots, N - n$. This set $\mathcal{G}_n$ is dense in the space of all smooth vector fields $F$ endowed with the $C^1$-norm.

Proof. Step 1. The Fourier approximation and reduction to the one-dimensional case. We can assume that $D \subset \Pi$, where $\Pi = [0, 2\pi]^n$ is a box. Then, for each $\epsilon > 0$ and a given smooth field $H(y)$, we can construct an approximation

$$H_i(y) = \sum_{\mathbf{m} \in \mathcal{M} \subset \mathbb{N}^n}\left(H^+_{i,\mathbf{m}}\cos(\mathbf{m}\cdot y) + H^-_{i,\mathbf{m}}\sin(\mathbf{m}\cdot y)\right) \tag{2.190}$$

such that

$$|H_i(y) - G_i(y)|_{C^1(D)} < \epsilon. \tag{2.191}$$

Here, $\mathbf{m} = (m_1, \dots, m_n)$ is a multiindex, where $m_i \in \mathbb{N}$, $\mathcal{M}$ is a finite subset of $\mathbb{N}^n$, and $\mathbf{m}\cdot y = m_1 y_1 + m_2 y_2 + \dots + m_n y_n$. Relations (2.190), (2.188), (2.189) and (2.191) show that it is sufficient to approximate fields $H$ of the form

$$H_i(y) = H_i(z), \quad z = \mathbf{m}\cdot y. \tag{2.192}$$

Remark. For systems with quadratic terms, we can use the following elementary trick. We can present the term $y_i y_j$ with $i \neq j$ as

$$y_i y_j = \frac{1}{2}\left((y_i + y_j)^2 - y_i^2 - y_j^2\right). \tag{2.193}$$

It is simpler than the Fourier decomposition.

Step 2. One-dimensional case. The problem of approximation of fields (2.192) by $G_i$ can be formulated as follows. Given $\epsilon > 0$ and a smooth function $h(z)$ on some bounded interval $[0, \beta]$, find coefficients $k_j, C, r_i, a_j, b_j, \tau_j > 0$ such that the sum

$$g(q) = C + \sum_{i=1}^{N_0} k_i\,\frac{\tau_i + b_i q}{r_i + a_i q} \tag{2.194}$$

approximates $h$ on $[0, \beta]$:

$$|g(\cdot) - h(\cdot)|_{C^1([0,\beta])} < \epsilon. \tag{2.195}$$

Let us set $b_i = a_i$. Then, we can transform (2.194) as

$$g(q, \gamma, r, a, C') = C' + \tilde{g}(q), \quad \tilde{g} = \sum_{i=1}^{N_0} \frac{\gamma_i}{r_i + a_i q}, \tag{2.196}$$

where

$$C' = C + \sum_{i=1}^{N_0} k_i, \quad \gamma_i = k_i(\tau_i - r_i).$$

There are different approaches to resolving this approximation problem. We state two methods. The first approach uses the Weierstrass theorem and polynomial approximations, and the second one is a least squares procedure.

Method 1. According to the Weierstrass theorem, linear combinations

$$R(q) = C + \sum_{k=1}^M c_k(1 + q)^{-k}, \tag{2.197}$$

where $M$ is an arbitrary integer and the $c_k$ are arbitrary coefficients, are dense in the space of all smooth functions on $[0, \beta]$ endowed with the $C^1$-norm. Thus, it is sufficient to prove that the functions $P_k(q) = (1 + q)^{-k}$ can be approximated, within an arbitrary precision, by $g$ defined by (2.196). We can prove it by induction. It is clear that this fact holds for $k = 1$. Assume it holds for $k = p$. Let us prove it for $k = p + 1$. We use the estimate

$$\left|(1 + q)^{-p-1} - c_p h^{-1}\left((1 + q)^{-p} - (1 + (q - h))^{-p}\right)\right| < C_p h,$$

which is valid on $[0, \beta]$ for appropriate $c_p$ and $C_p > 0$. For small $h$, this estimate gives the needed approximation. The lemma is proven.

Method 2. We can apply the classical method of least squares to this approximation problem. Let us introduce the matrix $A$ with the entries

$$A_{ij} = \int_0^1 (a_i + q)^{-1}(a_j + q)^{-1}\,dq. \tag{2.198}$$

Notice that

$$A_{ij} = (a_j - a_i)^{-1}\left(\log\frac{a_i + 1}{a_i} - \log\frac{a_j + 1}{a_j}\right)$$

for $i \neq j$, and

$$A_{ii} = a_i^{-1} - (a_i + 1)^{-1}.$$

Let us define the vector $B$ by

$$B_i = \int_0^1 (a_i + q)^{-1} h(q)\,dq. \tag{2.199}$$

We solve the linear algebraic system

$$AX = B \tag{2.200}$$

and the solution $X$ gives the optimal approximation in the $L^2([0, \beta])$-norm.

Let us now consider the second term in the right-hand side of (2.177). We assume that the $R_i(y, z)$ are linear in $z$:

$$R_i(y, z) = \bar{r}_i + \sum_{l=1}^{N-n} \bar{R}_{il} z_l. \tag{2.201}$$

Then, a lemma analogous to the previous one holds. Let us consider the vector fields $J$ with components $J_i$ defined by

$$J_i(y) = \bar{r}_i + \sum_{l=1}^{N-n} \bar{R}_{il} Z_l(y). \tag{2.202}$$

Lemma 2.26. Let us fix an integer $n > 0$. Consider the set $\mathcal{J}_n$ of all vector fields $J(y)$ defined by (2.202) on a bounded open subdomain $D \subset \mathbb{R}^n$ for all possible values of $N$ and all possible positive values of the coefficients $\bar{r}_i$, $\bar{R}_{ij}$, $T_{ij}$, $m_p$, $M_{pl}$, where $i, l = 1, \dots, n$, $j = 1, \dots, N - n$ and $p = 1, \dots, N - n$. This set $\mathcal{J}_n$ is dense in the space of all smooth vector fields $F$ on $D$ endowed with the $C^1$-norm.

The proof is the same as that of Lemma 2.25.

Let us formulate the main theorem. We consider the class of simple systems of chemical kinetics:

$$\frac{dx_i}{dt} = -x_i\left(k_i + \sum_{j=1}^N K_{ij} x_j\right) + r_i + \sum_{j=1}^N R_{ij} x_j, \tag{2.203}$$

where $x = (x_1, \dots, x_N) \in \mathbb{R}^N$, $i = 1, \dots, N$ and

$$k_i > 0, \quad K_{ij} \geq 0, \quad r_i > 0, \quad R_{ij} \geq 0, \quad R_{ii} = 0. \tag{2.204}$$

Under conditions (2.204), it is easy to show that the Cauchy problem for this system defines a global semiflow in the positive cone $\mathbb{R}^N_>$. In fact, the solutions of (2.203) can be estimated by

$$x \leq C(r, k) + x(0)\exp(\lambda_{\max} t),$$

where $\lambda_{\max}$ is the maximal eigenvalue of the matrix $T$ with entries $T_{ij} = -k_i\delta_{ij} + R_{ij}$. If $\lambda_{\max} < 0$, then system (2.203) generates a dissipative semiflow because then the solutions are globally bounded as $t \to +\infty$:

$$x(t) \in D_\delta, \quad t > T_0(x(0), \delta),$$

where $D_\delta = \{x : 0 < x_i < r_i/k_i + \delta\}$ and $\delta > 0$ is an arbitrary positive number.

Further, we use the method of realization of vector fields, which implies the following assertion:

Theorem 2.27. Consider the family $S^t(\mathcal{P})$ of semiflows defined by system (2.203), where the parameters are the number $N$ and nonnegative coefficients $k_i > 0$, $K_{ij} \geq 0$, $r_i \geq 0$, $R_{ij} \geq 0$. Then, this family is maximally complex and thus generates all possible structurally stable dynamics.

The proof immediately follows from Lemmas 2.24, 2.25 and 2.26.

Let us make brief comments in connection with the fundamental Korzuchin theorem [148]. This result asserts that any dynamics defined by a system of ordinary differential equations can be realized, within a bounded time interval, by quadratic systems of chemical kinetics describing monomolecular and bimolecular reactions. These systems are more complicated than (2.203) and have the form

$$\frac{dx_i}{dt} = \sum_{j=1}^N A_{ij} x_j + \sum_{j,l=1}^N B_{ijl}\, x_j x_l, \tag{2.205}$$

where $A_{ij}$, $B_{ijl}$ satisfy some conditions of nonnegativity [330]. Theorem 2.27 can be considered as a variant of the Korzuchin theorem for this simplified model of chemical kinetics.

2.6.5 Checking oscillations, bifurcations, and chaos existence The method described in the previous subsections has met difficulties for large n since at the very beginning, we can obtain the number of the Fourier coefficients exponen­ tial in n. In fact, the set M can contain an exponential number of multiindices m. It is a typical effect first described by R. Bellman and called the “curse of dimensional­ ity.” How does one check then, by a really feasible algorithm, that a large metabolic network can exhibit oscillations or saddle-node bifurcations?


To avoid this curse, we propose to use the fundamental idea of a "normal form." We reduce system (2.177) to a simpler one, its normal form. For example, suppose we would like to check that a large network exhibits time-oscillating solutions. Simple systems that can demonstrate such behavior have the form [154]
$$\frac{dy_1}{dt} = a_0 + a_1 y_1 + a_2 y_2 + a_{11} y_1^2 + a_{12} y_1 y_2 + a_{22} y_2^2 + Y_1(y), \qquad (2.206)$$
$$\frac{dy_2}{dt} = b_0 + b_1 y_1 + b_2 y_2 + b_{11} y_1^2 + b_{12} y_1 y_2 + b_{22} y_2^2 + Y_2(y), \qquad (2.207)$$
$$\frac{dy_i}{dt} = -y_i S_i(y), \qquad n \ge i > 2, \qquad (2.208)$$
where $a_i, b_i, a_{ij}, b_{ij} \in \mathbb{R}$ are coefficients, $i, j = 1, 2$, and $S_i$, $Y_1$, $Y_2$ satisfy
$$S_i(y) > \kappa > 0, \qquad y \in D, \qquad (2.209)$$
$$|Y_i(y)| \le C_1 \sum_{i=3}^{n} |y_i|, \qquad y \in D. \qquad (2.210)$$
Then, for $i > 2$, the variables $y_i$ decrease exponentially in $t$, and we obtain that the large time behavior of system (2.206)-(2.208) is defined by the shortened equations
$$\frac{dy_1}{dt} = a_0 + a_1 y_1 + a_2 y_2 + a_{11} y_1^2 + a_{12} y_1 y_2 + a_{22} y_2^2, \qquad (2.211)$$
$$\frac{dy_2}{dt} = b_0 + b_1 y_1 + b_2 y_2 + b_{11} y_1^2 + b_{12} y_1 y_2 + b_{22} y_2^2, \qquad (2.212)$$
which are well-studied. These equations can exhibit an Andronov–Hopf bifurcation and limit cycles (for example, the coexistence of 4 cycles) [154]. It is clear that this system also exhibits saddle-node bifurcations, since the normal form of this bifurcation is
$$\frac{dy_1}{dt} = a_0 - a_{11} y_1^2, \qquad a_{11} > 0, \qquad (2.213)$$
$$\frac{dy_2}{dt} = -b_0 y_2 + O(y_2^2), \qquad b_0 > 0. \qquad (2.214)$$
This bifurcation occurs when $a_0$ passes through $0$, whereas $b_0$, $a_{11}$ are fixed. To obtain chaos, we can try to reduce (2.177) to the Lorenz system, which demonstrates complicated behavior, including global convergence of trajectories, Andronov–Hopf bifurcations and chaos.
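To make the normal-form discussion concrete, the following sketch (my illustration, not from the text; the step size and time horizon are arbitrary choices) integrates the saddle-node normal form (2.213) by the explicit Euler method: for $a_0 > 0$ the trajectory settles at the stable equilibrium $y_1 = +\sqrt{a_0}$, while for $a_0 < 0$ no equilibrium exists and the solution escapes.

```python
def simulate_saddle_node(a0, y0=0.0, dt=0.01, T=20.0):
    # Euler integration of the normal form dy/dt = a0 - y^2 (cf. (2.213))
    y = y0
    for _ in range(int(T / dt)):
        y = y + dt * (a0 - y * y)
        if abs(y) > 1e6:          # solution escapes: no equilibrium is reached
            return None
    return y

print(simulate_saddle_node(0.25))   # settles near +sqrt(0.25) = 0.5
print(simulate_saddle_node(-0.25))  # None: finite-time escape, no equilibrium
```

The pair of runs shows the two sides of the bifurcation at $a_0 = 0$.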

2.6.6 Some numerical results. Why are networks large?

Numerical realizations by the reduction method presented above show some interesting results. Let us consider realizations of quadratic vector fields. Then we can use the elementary identity (2.193), and the problem reduces to the approximation of quadratic polynomials $\mu(q) = c_1 q + c_2 q^2$ by sums
$$g(q) = \sum_{i=1}^{N_0} X_i (a_i + q)^{-1}$$
on some interval $[0, \beta]$, where $\beta > 0$. It is clear that, without loss of generality, one can set $\beta = 1$. Since the approximation problem is linear, we can consider the two cases $\mu_1 = q$ and $\mu_2 = q^2$ separately. We have applied the method of least squares (method 2). We set $a_i = i\bar{a}$, where $i = 1, \ldots, N_0$ and $N_0 = 2, 3, 5$. By a program in MATLAB 2009, we have estimated the following quantities:
$$d_0 = \sup_{q \in [0,1]} |\mu_i(q) - g(q)|, \qquad d_1 = \sup_{q \in [0,1]} \left| \frac{d\mu_i}{dq} - \frac{dg(q)}{dq} \right|, \qquad \bar{X} = \max_i |X_i|.$$
In these relations, $X(\bar{a}, N_0)$ are the optimal vectors that can be found by the method of least squares via relations (2.198), (2.199) and (2.200). The quantities $d_i$ describe the approximation precision, and $\bar{X}$ gives an estimate of the maximum of the kinetic rates involved in the $(y, z)$ interaction (it is a force of satellite action on the hubs). They depend on $\bar{a}$ and $N_0$. The number $N_0$ can be interpreted as a complexity of the network. The number $\bar{a}$ estimates the magnitude of some kinetic rates; these rates determine the action of a center on satellites. We have found that one can obtain a good approximation for $\mu_1$ with $N_0 = 2, 3$ and for $\mu_2$ with $N_0 = 3$. For $\mu_1 = q$ and $\bar{a} = 5$, $N_0 = 3$, we have obtained
$$d_0 \approx 0.002, \qquad d_1 \approx 0.024, \qquad \bar{X} \approx 1220, \qquad (2.215)$$
and for $\bar{a} = 20$, $N_0 = 3$, it is found that
$$d_0 \approx 10^{-5}, \qquad d_1 \approx 0.0015, \qquad \bar{X} \approx 17000. \qquad (2.216)$$

Similar results can be obtained for $\mu_2 = q^2$. We therefore observe that the approximation precision increases as $\bar{a}$ grows, but, on the other hand, the kinetic rates then increase as well. To see this dependence and obtain analytical results, let us consider the case $\bar{a} \gg 1$. Some rough estimates based on method 2 then show (we omit some technical details) that
$$d_i = O(\bar{a})\, N_0^{-1/2}, \qquad \bar{X} = O(\bar{a}^3)\, N_0^{-1}. \qquad (2.217)$$
Given a network complexity $N_0$, to obtain a prescribed dynamics of a network, it is necessary either to use a strong action of the centers on the satellites, or of the satellites on the centers (or both interactions should be strong). Notice that, in Nature, the kinetic rates are actually bounded by some constants and cannot be made arbitrarily large. Therefore, networks that are flexible and show different kinds of dynamics should be complex ($N_0 \gg 1$). By increasing $N_0$ (which is a polynomial function of $N$, though really it is the connectivity of hubs, see [7]), we can conserve the same precision of approximation with restricted kinetic rates.
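The least-squares computation described above is easy to reproduce. The sketch below is a minimal reconstruction under stated assumptions (a uniform grid on $[0, 1]$ and a plain $L^2$ fit, whereas "method 2" in the text may also weight the derivative): it fits $\mu_1(q) = q$ by $g(q) = \sum_{i=1}^{N_0} X_i (a_i + q)^{-1}$ with $a_i = i\bar{a}$.

```python
import numpy as np

def fit_rational(mu, a_bar, N0, n_grid=201):
    """Least-squares fit of mu(q) on [0,1] by g(q) = sum_i X_i/(a_i+q), a_i = i*a_bar."""
    q = np.linspace(0.0, 1.0, n_grid)
    a = a_bar * np.arange(1, N0 + 1)
    # design matrix: columns are the basis functions 1/(a_i + q)
    Phi = 1.0 / (a[None, :] + q[:, None])
    X, *_ = np.linalg.lstsq(Phi, mu(q), rcond=None)
    g = Phi @ X
    d0 = np.max(np.abs(mu(q) - g))   # sup-norm error on the grid
    X_bar = np.max(np.abs(X))        # magnitude of the "kinetic rates"
    return d0, X_bar

d0, X_bar = fit_rational(lambda q: q, a_bar=5.0, N0=3)
print(d0, X_bar)
```

For $\bar{a} = 5$, $N_0 = 3$, one should observe a small sup-error together with large coefficients (compare with (2.215)): good precision is bought at the price of strong interactions.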


2.6.7 Algorithm

Let us describe the algorithm.

Input
(1) A number $\epsilon > 0$ (the precision) is given.
(2) A large quadratic system
$$\frac{dx_i}{dt} = -x_i \Big( k_i + \sum_{j=1}^{N} K_{ij} x_j \Big) + r_i + \sum_{j \ne i} R_{ij} x_j \qquad (2.218)$$
is given. Here, $i = 1, \ldots, N$, and $N$ can be large.
(3) As well, we have a short target, or reference, system
$$\frac{du_k}{dt} = f_k(u), \qquad k = 1, \ldots, n, \qquad (2.219)$$
where the $f_k$ are given smooth functions in an open bounded connected domain $D \subset \mathbb{R}^n$ with a smooth boundary. This domain may be a ball, though we can also take $D$ to be a box. We assume that system (2.219) has a compact attractor and an absorbing set in $D$, and that $n \ll N$.

The algorithm goal
To check that the given large system can generate a dynamics close to (2.219) under some choice of the parameters $r_i$, $R_{ij}$.

Comment. The realization of this goal allows us to check that (2.218) can exhibit chaos, oscillations or saddle-node bifurcations. Indeed, even for $n = 3$, systems (2.219) can show these effects. We would like to have a purely algebraic algorithm.

Description of the Algorithm
Step 1. Vertex cover. We construct a graph $(V, E)$, where the substances $x_i$ are vertices, so that $V = \{1, 2, \ldots, N\}$. The edge $(i, j) \in E$ if and only if both $x_i$ and $x_j$ participate in a reaction, i.e. they are involved in a quadratic term in the right-hand sides of system (2.218). The algorithm works if there is a vertex cover $C = \{i_1, \ldots, i_m\}$ for $(V, E)$ which includes $m \ge n$ vertices and $m \ll N$ is not too large (we suppose that $m \in (1, 50)$). The problem of finding a minimal vertex cover is an NP-complete problem [52, 83]. However, there exists a simple greedy 2-approximation algorithm [52] which finds a cover that contains $\le 2m^*$ vertices (if the best cover contains $m^*$ vertices).
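The greedy 2-approximation mentioned here is the classical one: repeatedly take any uncovered edge and put both of its endpoints into the cover. A minimal sketch (the toy edge list is hypothetical, standing in for the pairs appearing in quadratic terms of (2.218)):

```python
def vertex_cover_2approx(edges):
    """Greedy 2-approximation: pick an uncovered edge, add both endpoints."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover

# toy reaction graph: substances 1..6; an edge means the pair occurs in a quadratic term
edges = [(1, 2), (1, 3), (2, 4), (3, 5), (5, 6)]
print(sorted(vertex_cover_2approx(edges)))  # [1, 2, 3, 5]
```

Every edge added contributes two vertices while any cover must contain at least one of them, which gives the factor-2 guarantee.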

If such a vertex cover exists, we make the decomposition $x = (y, z)$ described and studied above. We denote
$$y_k = x_{i_k}, \qquad k = 1, \ldots, m,$$
and the $z_l$, $l = 1, \ldots, N - m$, will be all the remaining $x_i$ such that $i \ne i_k$, $k = 1, \ldots, m$. System (2.218) takes the form
$$\frac{dz_i}{dt} = -z_i \Big( \tilde{k}_i + \sum_{j=1}^{m} \tilde{K}_{ij} y_j \Big) + r_i + \sum_{j \ne i} \tilde{R}_{ij} z_j + \sum_{j=1}^{m} \bar{R}_{ij} y_j, \qquad (2.220)$$
where $i = 1, 2, \ldots, N - m$, and
$$\frac{dy_j}{dt} = -y_j \Big( \bar{k}_j + \sum_{l \ne j} \tilde{M}_{jl} y_l + \sum_{l=1}^{N-m} S_{jl} z_l \Big) + r_j + \sum_{l=1}^{N-m} T_{jl} z_l + \sum_{l \ne j} P_{jl} y_l, \qquad (2.221)$$
where $j = 1, \ldots, m$. Now, we express $z$ via $y$ using (2.220), assuming that the $z_i$ are fast and the $y_j$ are slow. We obtain
$$z_i = Z_i(y), \qquad (2.222)$$
where the $Z_i$ are rational functions. If $\tilde{R}_{ij} = 0$, we have simple formulas for the $Z_i$:
$$Z_i(y) = \frac{r_i + \sum_{j=1}^{m} \bar{R}_{ij} y_j}{\tilde{k}_i + \sum_{j=1}^{m} \tilde{K}_{ij} y_j}. \qquad (2.223)$$
Notice that if we seek equilibria of (2.220), (2.221), then (2.222) is an exact relation. If we investigate dynamics, this relation is asymptotic, but we can make it precise step by step. Substituting (2.222) into (2.221), one has
$$\frac{dy_j}{dt} = -y_j \Big( \bar{k}_j + \sum_{l \ne j} \tilde{M}_{jl} y_l + \sum_{l=1}^{N-m} S_{jl} Z_l(y) \Big) + r_j + \sum_{l=1}^{N-m} T_{jl} Z_l(y) + \sum_{l \ne j} P_{jl} y_l. \qquad (2.224)$$
We call these equations the vertex cover system.

Step 2. If $m = n$, we do nothing. If $m > n$, we extend (2.219) to
$$\frac{du_k}{dt} = f_k(u) + \tilde{f}_k(u, v), \qquad k = 1, \ldots, n, \qquad (2.225)$$
$$\frac{dv_k}{dt} = -g_k(v), \qquad k = n + 1, \ldots, m, \qquad (2.226)$$
where the $g_k$ are linear functions defined in an $(m - n)$-dimensional open bounded box $D' \subset \mathbb{R}^{m-n}$ such that
$$g_k(v) = \sum_{k'=1}^{m-n} G_{kk'} v_{k'}, \qquad (2.227)$$
where $G$ is a positive definite matrix. Other variants of choosing the $g_k$ are possible.
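The quasi-stationary elimination of the fast variables can be checked directly: with $\tilde{R}_{ij} = 0$, the rational functions $Z_i(y)$ annihilate the right-hand side of the fast subsystem (2.220). A small numerical sketch with hypothetical coefficients (all numbers invented for illustration):

```python
import numpy as np

# toy sizes: two fast variables z, two slow variables y (hypothetical coefficients)
k_t = np.array([1.0, 2.0])                  # k~_i
K_t = np.array([[0.5, 0.2], [0.1, 0.3]])    # K~_ij
r   = np.array([0.4, 0.6])                  # r_i
R_b = np.array([[1.0, 0.0], [0.5, 1.5]])    # Rbar_ij

def Z(y):
    # quasi-equilibrium of the fast subsystem with R~_ij = 0:
    # 0 = -z_i (k~_i + sum_j K~_ij y_j) + r_i + sum_j Rbar_ij y_j
    return (r + R_b @ y) / (k_t + K_t @ y)

y = np.array([0.3, 0.7])
z = Z(y)
rhs = -z * (k_t + K_t @ y) + r + R_b @ y    # residual of the fast equations
print(np.max(np.abs(rhs)))                  # ~0 up to rounding
```

The residual vanishes by construction; for the full dynamics this relation is only asymptotic, as the text notes.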


Step 3. Here, we can proceed in different ways.

Variant 3A, the most straightforward: direct numerical simulation. We set initial data $y = y^0$ for the vertex cover system and check the attractor structure. However, it is difficult to handle all possible variants of the parameter values (kinetic rates) in this way.

Variant 3B, the method of realization of vector fields by least squares. We check that the given quadratic system (2.225), (2.226) coincides, up to a small $\epsilon$, with the vertex cover system (2.224). We choose the entries $S_{ij}$ and $T_{ij}$ as parameters. Let $y_1 = u_1, \ldots, y_n = u_n$, $y_{n+1} = v_1, \ldots, y_m = v_{m-n}$ (one can choose another order). We rewrite (2.225), (2.226) as
$$\frac{dy_i}{dt} = Y_i(y), \qquad y = (y_1, \ldots, y_m), \qquad (2.228)$$
where the $Y_i$ involve constant, linear and quadratic terms. The vertex cover system (2.224) can be presented as
$$\frac{dy_i}{dt} = W_{i0}(y) - y_i \sum_{j=1}^{N-m} S_{ij} Z_j(y) + \sum_{j=1}^{N-m} T_{ij} Z_j(y) = H_i(y). \qquad (2.229)$$
Here, the $S_{ij}$ can, in principle, be negative. Our idea is to check that
$$\|Y_i(y) - H_i(y)\|_{D} < \epsilon \qquad (2.230)$$
for some matrices $S$ and $T$. We can carry out this check by the least squares method. If we would like to have symbolic relations for the parameters $S_{ij}$, $T_{ij}$, we can proceed as follows.

Preliminaries. We approximate the monomials $1$, $y_i$, $y_i^2$ by
$$1 \approx \sum_{j=1}^{N-m} U^0_{ij} Z_j(y), \qquad y_i \approx \sum_{j=1}^{N-m} U^1_{ij} Z_j(y), \qquad (2.231)$$
$$y_i^2 \approx \sum_{j=1}^{N-m} U^2_{ij} Z_j(y). \qquad (2.232)$$
We present $y_i y_j$ as
$$y_i y_j = \frac{1}{2} \big( (y_i + y_j)^2 - y_i^2 - y_j^2 \big).$$
As a result, to check (2.230), we should resolve a system of linear equations of the form
$$\sum_{j=1}^{N-m} S_{ij} U^{l-1}_{ij} + \sum_{j=1}^{N-m} T_{ij} U^{l}_{ij} = a_i, \qquad (2.233)$$
where the $a_i$ are given, under the linear restrictions
$$T_{ij} \ge 0, \qquad j = 1, \ldots, N - m. \qquad (2.234)$$

Variant 3C, the purely algebraic and symbolic approach. It looks the most interesting; moreover, here we do not need Step 2. For example, suppose we would like to check that system (2.218) exhibits an Andronov–Hopf bifurcation.
(a) We seek an equilibrium $y^{eq}$ of the vertex cover system. Further, we use the Taylor series for the right-hand sides at $y^{eq}$, setting $w = y - y^{eq}$. Then, the vertex cover system takes the form
$$\frac{dw_i}{dt} = \xi_i(S, T) + \sum_{j=1}^{m} \eta_{ij}(S, T)\, w_j + \sum_{j,l=1}^{m} \phi_{ijl}(S, T)\, w_j w_l, \qquad (2.235)$$
where $\phi_{ijl}$, $\eta_{ij}$ and $\xi_i$ are linear functions of the $S_{ij}$, $T_{il}$.
(b) By symbolic algebra [35], we can find a condition that system (2.224) has an Andronov–Hopf bifurcation, in an explicit symbolic form via $\xi$, $\phi$, $\eta$, say
$$\eta_{11} < \phi^2 + 1.2. \qquad (2.236)$$
(c) Then, we substitute $\xi_i(S, T)$ and the other expressions into (2.236) and obtain a condition for the existence of the Andronov–Hopf bifurcation.
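A numerical stand-in for the symbolic Andronov–Hopf condition is to scan the parameter and watch a complex-conjugate eigenvalue pair of the Jacobian cross the imaginary axis. The sketch below uses a toy parameterized Jacobian (an assumption for illustration, not the vertex cover system itself), whose eigenvalues are $\mu \pm i$, so the crossing occurs at $\mu = 0$:

```python
import numpy as np

def jacobian(mu):
    # toy Jacobian at an equilibrium; eigenvalues are mu +/- i
    return np.array([[mu, -1.0], [1.0, mu]])

def detect_hopf(mus):
    """Scan a parameter range for a sign change of the real part of a
    complex-conjugate eigenvalue pair (numerical stand-in for (2.236))."""
    prev = None
    for mu in mus:
        ev = np.linalg.eigvals(jacobian(mu))
        re, im = ev.real.max(), np.abs(ev.imag).max()
        if prev is not None and prev < 0 <= re and im > 0:
            return mu
        prev = re
    return None

mu_star = detect_hopf(np.linspace(-0.5, 0.5, 101))
print(mu_star)  # close to 0.0
```

Such a scan complements the symbolic approach when the explicit condition is too unwieldy to derive.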

2.7 Quadratic systems

We consider quadratic systems
$$\frac{dX_i}{dt} = \bar{r} X_i^2 + \sum_{k=1}^{N} M_{ik} X_k = P_i(X), \qquad (2.237)$$
where $\bar{r} > 0$, and we consider the number $N$ and the $N \times N$ matrix $M$ as parameters. This system will occur in Chapter 3 as an approximation of systems of reaction-diffusion equations. Notice that this system defines a chemical kinetics only if $M_{ik} \ge 0$ for $i \ne k$; then the semiflow defined by equations (2.237) is monotone. Although equations (2.237) are simpler than the systems of chemical kinetics studied above, we can prove that they define a maximally complex family of semiflows. The proof follows the ideas of the previous section.

2.7.1 System (2.237) can be reduced to systems of Hopfield's type

Let us prove a lemma which describes a reduction of (2.237). Suppose $N = 2N_1$. We decompose the variables $X_i$ into two groups: $x_1, x_2, \ldots, x_{N_1}$ and $y_1, y_2, \ldots, y_{N_1}$. Let $\zeta > 0$ be a new auxiliary small parameter independent of $\gamma$.

Let us fix an integer $n > 0$. Consider the following system:
$$\frac{dx_i}{dt} = \zeta \Big( -\lambda x_i + \sum_{j=1}^{N_1} J_{ij} y_j \Big) + \bar{\zeta}^2 x_i^2, \qquad (2.238)$$

$$\frac{dy_i}{dt} = -y_i + a y_i^2 + x_i, \qquad (2.239)$$
where $a = a(n)$ is defined by $a = (100\sqrt{n})^{-1}$. It is clear that this system can be obtained from (2.237): to see this, we set $b_i = \zeta^2$ for some $i$ and $b_i = a\bar{r}^{-1}$ for all the rest of the $i$, and adjust $M_{ik}$ in a special way. We consider equations (2.238) and (2.239) in the domain $D_{n,\zeta} = \{x, y : |x_i| < \ldots\}$ for $t > 0$ and $x(0) \in \mathcal{H}_n(C_0)$, where $\mathcal{H}_n(C_0)$ is a smooth submanifold embedded in the ball $B^{N_1}(C_0) = \{x : |x| < C_0\}$, with a smooth boundary, such that
$$\inf_{i=1,\ldots,N_1,\ x \in \mathcal{H}_n} \big( 1 - 4a(n) x_i \big) > c(n) > 0. \qquad (2.243)$$
Assume
$$|J_{ik}| < C_1, \qquad 0 < \lambda < C_2. \qquad (2.244)$$
Then, if $\zeta > 0$ is sufficiently small, i.e. $\zeta < \zeta_0(n, N_1, C_1, C_2, C_3, C_0)$, the semiflow defined by (2.238), (2.239) has a locally invariant normally hyperbolic manifold $\mathcal{Y}_{N_1}$ of dimension $N_1$ defined by the equations
$$y_i = Y_i(x), \qquad x \in \mathcal{H}_n(C_0),$$

where the $Y_i$ are $C^{1+r}$-functions having the form $Y_i = \sigma_n(x_i) + \tilde{y}_i(x)$, with $|\tilde{y}_i|_{C^1(D_{n,\zeta})} < c\zeta$.

Proof. Let us introduce new variables by $y_i = z_i + \sigma_n(x_i)$, $x = (x_1, \ldots, x_{N_1})$, $z = (z_1, \ldots, z_{N_1})$. Assume that $z$ lies in the ball $B^{N_1}(C\zeta)$ of radius $C\zeta$, where $C > 0$, and $x \in B^{N_1}(R)$. Let us rewrite system (2.238) and (2.239) as an evolution equation with fast and slow variables:
$$\frac{dx}{dt} = F(x, z), \qquad (2.245)$$
$$\frac{dz}{dt} = A(x) z + G(x, z). \qquad (2.246)$$
Here, the functions $F$, $G$ have the form
$$F_i = \sum_{k=1}^{N_1} \zeta J_{ik} \big( z_k + \sigma_n(x_k) \big) - \zeta \lambda_i x_i + \zeta^2 x_i^2,$$
$$G_i = a z_i^2 - \sigma_n'(x_i) F_i.$$
The matrix $A$ is diagonal:
$$A_{ij} = \delta_{ij} \big( -1 + p_i(x) \big), \qquad (2.247)$$
where
$$p_i = 2a\sigma_n(x_i) = 1 - \sqrt{1 - 4ax_i}. \qquad (2.248)$$
Notice that $F, G \in C^{1+r}(D_{n,\zeta})$, and in the domain $D_{n,\zeta}$ the following estimates hold:
$$|F_i| < c_1 \zeta, \qquad |G_i| < c_2(|a|\zeta + 1)\zeta + \tilde{c}_2 \zeta^2 < c_2' \zeta.$$
For the derivatives of $F$ and $G$, in the same domain, one has
$$|DF_i| < c_3 \zeta, \qquad |DG_i| \le c_4 |a| + c_5 \zeta.$$
These estimates show that, for some $c > 0$, the field $A(x)z + G$ on the right-hand side of (2.246) is directed inside $B^{N_1}(C\zeta)$ for sufficiently small $\zeta < \zeta_0(n, N_1)$. Moreover, they also show that for small $\zeta$ and an appropriate $C$, the functions $F$ and $G$ are small in the $C^1$ norm in the domain $D_{R,\zeta}$. Due to our choice of $a(n)$ and to (2.243), (2.248), if a trajectory $x(t)$ lies in $\mathcal{H}_n(C_0)$ for $t \in (0, \infty)$, we have $|p_i(x(t))| < 1/2$


and, therefore, by (2.247), the solutions of the linear equation $z_t = A(x)z$ satisfy $|z(t)| < \exp(-t/2)\, |z(0)|$.

The standard theorems on invariant manifolds ([108] and Appendix 3.5) then yield the existence of an invariant manifold $\mathcal{Y}_{N_1}$ defined by a $C^{1+r}$-function $z = Z(x)$. This proves the lemma. Notice that if we restrict the semiflow defined by (2.238) and (2.239) to the invariant manifold $\mathcal{Y}_{N_1}$, we obtain the dynamics defined by (2.242). Our next goal is to show that systems (2.242) define a maximally complex family of semiflows.

2.7.2 Auxiliary approximation lemma

Let us consider a family of smooth vector fields $\Psi(q)$ on $\mathbb{R}^n$, where $q = (q_1, q_2, \ldots, q_n)$, of a special form, depending on the following parameters: a number $m \ge n$, an $n \times m$ matrix $B$ and an $m \times n$ matrix $A$. Let us set
$$\Psi_i(q, m, A, B, \bar{b}) = \sum_{k=1}^{m} B_{ik}\, \sigma_n \Big( \sum_{j=1}^{n} A_{kj} q_j \Big) + \bar{b}_i, \qquad (2.249)$$
where $i = 1, 2, \ldots, n$, $\Psi = (\Psi_1, \ldots, \Psi_n)$, and the function $\sigma_n(z)$ is given by
$$\sigma_n(z) = (2a)^{-1} \big( 1 - (1 - 4az)^{1/2} \big), \qquad a = \frac{1}{100\, n^{1/2}}. \qquad (2.250)$$
The function $\sigma_n(z)$ is defined on the interval $I_a = (-\infty, (4a)^{-1})$.

Lemma 2.29. For any $C^1$-vector field $Q$ defined on the ball $B^n$ and any positive numbers $\epsilon$, $\kappa$, there exist a number $m \ge n$, matrices $A$, $B$ and coefficients $\bar{b}$ such that
$$|Q_i(\cdot) - \Psi_i(\cdot, m, A, B, \bar{b})|_{C^1(B^n)} < \epsilon, \qquad (2.251)$$
and the relations
$$A_{ij} = \delta_{ij}, \qquad i, j \le n, \qquad (2.252)$$
hold together with the estimates
$$|A_{ij}| \le 1, \qquad |B_{ik}| < \kappa. \qquad (2.253)$$

2.7.3 Invariant manifolds for the Hopfield system

Consider system (2.242), where $\sigma_n(z)$ is defined by (2.241), $x = (x_1, x_2, \ldots, x_{N_1})$, and $\zeta$ is a small positive parameter. Let us denote $|x| = (x_1^2 + x_2^2 + \cdots + x_{N_1}^2)^{1/2}$. The unperturbed equations (2.242) with $g_i = 0$ are similar to the famous Hopfield system [118] studied above, but here the function $\sigma_n(z)$ is defined only for $z \le (4a(n))^{-1}$. To overcome this difficulty, let us restrict (2.242) to a special domain $W_n$ defined by
$$W_n = \{x : |x_i| < n^{1/2} + 1, \ i = 1, 2, \ldots, N_1\}.$$

(2.254)

Let us assume that, for some $r > 0$,
$$g_i(x) \in C^{1+r}(W_n), \qquad |g_i(x)|_{C^1(W_n)} < C_1. \qquad (2.255)$$

Then, equations (2.242) define a local smooth semiflow in $W_n$, since the $\sigma_n(x_i)$ are defined on $W_n$. We consider $N_1$, $J_{ij}$, $\lambda$, $\eta$ as parameters $\mathcal{P}$. Below, we shall show that, under a suitable choice of these parameters, $W_n$ contains an open domain invariant under this semiflow. This shows that equations (2.242) generate a global semiflow on $W_n$. Let us prove the following assertion.

Lemma 2.30. For each $\epsilon > 0$ and each vector field $Q$ on the unit ball $B^n$ satisfying (3.150), (3.151), the family of local semiflows defined by system (2.242) $\epsilon$-realizes this field. More precisely, for each $\epsilon > 0$ and $\lambda > 2$, there exist a number $N_1 = N_1(n, Q, \epsilon)$ and an $N_1 \times N_1$ matrix $J = J(n, Q, \epsilon)$ such that, for sufficiently small $\zeta > 0$, dynamics (2.242) has an $n$-dimensional invariant, locally attracting, normally hyperbolic manifold $\mathcal{H}_n \subset W_n$ defined by the almost linear equations
$$x_i = \sum_{s=1}^{n} A_{is} q_s + \zeta \tilde{X}_i(q), \qquad i = n + 1, \ldots, N_1,$$
$$x_i = q_i + \zeta \tilde{X}_i(q), \qquad i = 1, \ldots, n,$$
where the $q_i$ are coordinates on the manifold, $q \in B^n$, $A$ is a matrix satisfying (2.252), (2.253), and the $\tilde{X}_i(q)$ are uniformly bounded in the $C^1$ norm:
$$|\tilde{X}_i|_{C^1(B^n)} < C_2.$$
The dynamics of (2.242) reduced to the manifold $\mathcal{H}_n \subset W_n$ is defined by the differential equations
$$\frac{dq}{d\tau} = Q(q) + \tilde{Q}(q), \qquad \tau > 0, \qquad (2.256)$$
where
$$|\tilde{Q}|_{C^1(B^n)} < \epsilon. \qquad (2.257)$$

Proof. Let us set $\lambda > 3$. The first step is a special substitution for $J$. Let us suppose $N_1 > n$ and set
$$J = AB, \qquad (2.258)$$


where $A$ and $B$ are new matrices of size $N_1 \times n$ and $n \times N_1$, respectively. Then, system (2.242) takes the form
$$\frac{dx_i}{dt} = \sum_{s=1}^{n} A_{is} P_s(x) - \lambda x_i + \zeta g_i(x), \qquad P_s(x) = \sum_{j=1}^{N_1} B_{sj}\, \sigma_n(x_j), \qquad (2.259)$$
and we denote $x = (x_1, \ldots, x_{N_1})$. The next part of the proof is based on the two following key points. First, system (2.242) has an invariant manifold $\mathcal{H}_n$. Second, one can "control" the corresponding local inertial form by the parameters $N_1$, $B_{kj}$ and $A_{ik}$. The second point is based on Lemma 2.29. Let us introduce new variables $z_i$ and $q_k$ by
$$z_i = x_i - \sum_{k=1}^{n} A_{ik} q_k, \quad i = n + 1, \ldots, N_1, \qquad x_i = q_i, \quad i = 1, \ldots, n.$$

Then, an elementary calculation shows (see also [280]) that, under conditions (2.252), equations (2.259) take the form
$$\frac{dz_i}{dt} = -\lambda z_i + \zeta \bar{g}_i(x(q, z)), \qquad i > n, \qquad (2.260)$$
where
$$\bar{g}_i(x(q, z)) = \tilde{g}_i(x(q, z)) - \sum_{s=1}^{n} A_{is}\, \tilde{g}_s(x(q, z)),$$
and
$$\frac{dq_j}{dt} = P_j(x(q, z)) - \lambda q_j + \zeta \tilde{g}_j(x(q, z)), \qquad j = 1, \ldots, n, \qquad (2.261)$$
where
$$x_i(q, z) = \sum_{s=1}^{n} A_{is} q_s + z_i, \qquad i > n. \qquad (2.262)$$
We denote
$$x_i(q, z) = q_i, \qquad i \le n. \qquad (2.263)$$

Now, let us adjust the parameters $N_1$, $A_{ik}$, $B_{ik}$ in a special way. According to Lemma 2.29, one can choose $N_1$, $A_{is}$, $B_{sj}$ such that the estimates
$$\sup_{q \in B^n} |Q_i(q) + \lambda q_i - \Psi_i(q, N_1, A, B)| < \epsilon, \qquad (2.264)$$
$$\sup_{q \in B^n} \left| \frac{\partial Q_i(q)}{\partial q_j} - \frac{\partial \Psi_i(q, N_1, A, B)}{\partial q_j} \right| < \epsilon \qquad (2.265)$$
hold. If $\epsilon > 0$ is small enough and $\zeta < \zeta_0(n, A, B)$, then equations (2.242) are correctly defined in the domain $V_n$: $V_n \subset W_n$, and the trajectories of (2.242) which start at $t = 0$ in the domain $V_n$ do not leave this domain for $t > 0$.

Proof. Since $|A_{ij}| \le 1$, the Schwarz inequality shows that $\sum_s |A_{is} q_s| < \sqrt{n}$. This gives $|x_i| = |z_i + \sum_{s=1}^{n} A_{is} q_s| \le \sqrt{n} + 1$. Then, relations (2.262) and (2.263) entail that trajectories $x(t)$ of (2.242) starting from $V_n$ at $t = 0$ stay in the domain $W_n$ for all $t > 0$. Therefore, equations (2.242) are correctly defined in the invariant domain $V_n$. Let us show that the domain $V_n$ is invariant under our dynamics for sufficiently small $\zeta$. Indeed, suppose $|z_i(0)| < 1$ and $\zeta$ is small enough. Equations (2.260) then imply $|z_i(t)| < 1$ for all $t > 0$. Moreover, $|q(t)| < 1$ for small $\zeta$ and $\epsilon$, due to the assumption that $Q$ is directed inward on the unit ball $B^n$. Therefore, one concludes that the domain $V_n$ is invariant under dynamics (2.260), (2.261).

We can now consider our equations on V n , where they induce a global semiflow.

Case $\zeta = 0$. To simplify the argument, let us first consider the case $\zeta = 0$. Equations (2.260) give $|z_i(t)| < |z_i(0)| \exp(-\lambda t)$, $t > 0$. Thus, the manifold defined by the relations $z_i = 0$, $i = n + 1, \ldots, N_1$, $|q| < 1$ is a locally invariant and locally attracting manifold of dimension $n$ contained in $V_n$. Denote this manifold by $\mathcal{H}_n^0$. Let us consider the semiflow defined by (2.261) on the manifold $\mathcal{H}_n^0$. This restricted semiflow is defined by a vector field with components $P_j(x(q, z))|_{z=0}$. We observe that $P_j(x(q, 0)) = \Psi_j(q, N_1, A, B)$, $j = 1, \ldots, n$, where the functions $\Psi_j$ are defined by relations (2.249). Thus, the restricted semiflow is defined by
$$\frac{dq_i}{dt} = \Psi_i(q, N_1, A, B) - \lambda q_i = Q_i(q) + \tilde{Q}_i(q), \qquad (2.266)$$
where $i = 1, \ldots, n$. Estimates (2.257) hold due to (2.264) and (2.265). The manifold $\mathcal{H}_n^0$ is normally hyperbolic for sufficiently small $\epsilon$. In fact, the $C^1$-norm of the field $\Psi$ is less than 1. Then, estimates (2.264) and (2.265) imply that the divergence rate of trajectories on the invariant manifold $\mathcal{H}_n^0$ is less than the rate $\lambda > 2$ of convergence of trajectories to $\mathcal{H}_n^0$.


Small $\zeta > 0$. Due to the normal hyperbolicity and the estimates obtained, for sufficiently small $\zeta$ system (2.260), (2.261) also has an invariant manifold $\mathcal{H}_n$ defined by equations $z_i = \zeta Z_i(q)$, which are small $C^1$-perturbations of the equations $z_i = 0$. We can prove the existence of this manifold, for example, by Theorem A from Appendix 3.5 (which is a simplified variant of Theorem 9.1.1 of [108], Ch. 9). This completes the proof.

2.8 Morphogenesis by genetic networks

In this section, we consider genetic networks. We follow the works [282, 284–286, 293, 296].

2.8.1 Systems under consideration. Network models

Let us consider the model of genetic networks proposed in [188, 226], which has the following form:
$$\frac{\partial u_i}{\partial t} = R_i\, \sigma \Big( \sum_{j=1}^{m} K_{ij} u_j + \sum_{l=1}^{p} M_{il} \theta_l(x) - \eta_i \Big) - \lambda_i u_i + d_i \Delta u_i, \qquad (2.267)$$
where $m$ is the number of genes included in the circuit, $u_i(x, t)$ is the concentration of the $i$-th protein, the $\lambda_i$ are the protein decay rates, the $R_i$ are some positive coefficients, and the $d_i$ are the protein diffusion coefficients. We consider (2.267) in some bounded domain $\Omega$ with a boundary $\partial\Omega$. The real number $K_{ij}$ measures the influence of the $j$-th gene on the $i$-th one. The assumption that gene interactions can be expressed by a single real number per pair of genes is a simplification excluding complicated interactions between three, four and more genes. Clearly, such interactions are possible; however, in this case, the problem is much more complicated. Moreover, systems with pair interactions are sufficiently powerful to model all the main effects.

The parameters $\eta_i$ are activation thresholds, and $\sigma$ is a monotone function satisfying the following assumptions:
$$\sigma \in C^3(\mathbb{R}), \qquad \lim_{z \to -\infty} \sigma(z) = 0, \qquad \lim_{z \to +\infty} \sigma(z) = 1, \qquad (2.268)$$
$$\left| \frac{d\sigma}{dz} \right| < C \exp(-c|z|), \qquad \sigma(0) = 1/2. \qquad (2.269)$$
An elementary function satisfying these conditions is $\sigma(z) = (1 + \tanh z)/2$. The functions $\theta_l(x)$ can be interpreted as densities of proteins associated with the maternal genes. The matrix $M_{il}$ describes the action of the $l$-th maternal gene on the $i$-th gene.

In order to have a correctly posed problem, we add to (2.267) boundary and initial conditions. For the $u_i$, we set the zero Neumann boundary conditions
$$\nabla u_i(x, t) \cdot n(x) = 0, \qquad x \in \partial\Omega, \qquad (2.270)$$
where $n(x)$ is the normal vector to the smooth boundary $\partial\Omega$ at the point $x$. The initial data are
$$u_i(x, 0) = \phi_i(x), \qquad x \in \Omega, \qquad (2.271)$$
where $\phi \in C^2(\Omega)$. Notice that problem (2.267), (2.270), (2.271) is then well-posed: solutions exist for all $t > 0$ (i.e. we have a global semiflow $S^t$), and they are a priori bounded. The semiflow $S^t$ defined by this IBVP is dissipative and possesses a global finite dimensional attractor. These facts can be proved by standard methods [108, 258, 265].

This model takes into account three fundamental processes: (a) decay of gene products (the term $-\lambda_i u_i$); (b) exchange of gene products between cells (the term with $\Delta$); and (c) gene regulation and protein synthesis. Notice that if $d_i = 0$, this model of a gene circuit can be considered as a Hopfield neural network [118] with thresholds depending on $x$. This model was proposed to explain pattern formation in biosystems [188, 226]. To study (2.267), numerical simulations were applied; for example, the works [131, 188, 226] are devoted to segmentation in Drosophila. Let us fix a function $\sigma$ satisfying (2.268), (2.269) and functions $\theta_i$. On the contrary, we consider $m$, $K_{ij}$, $M_{il}$, $\lambda_i$, $d_i$, $R_i$ and $\eta_i$ as parameters $\mathcal{P}$ to be adjusted, $\mathcal{P} = \{m, K, M, \eta, \lambda, d, R\}$. Model (2.267) allows us to use experimental data on gene regulation to fit the parameters $\mathcal{P}$ [131, 188, 226]. For a rigorous formulation of the fitting problem, see the next subsection. The paper [237] has investigated complex patterns occurring under a random choice of the coefficients $K_{ij}$. Numerical results of [131, 188, 226] have elucidated gap-gene interactions in Drosophila. Let us recall that, during the development of the Drosophila embryo, 6 gap genes (Knirps, Hunchback, Kruppel, Tll, Caudal, Giant) and the pair-rule genes form a periodic pattern consisting of some segments (the segmentation process). The main maternal gene involved in this process is Bicoid (Bc).
An interesting experimental fact about this pattern formation process is that the segmentation process is remarkably stable with respect to mutations (elimination of some genes), fluctuations of bicoid concentration and variations of embryo size L [121]. To investigate this stability with respect to the morphogen concentrations and mu­ tations, we can consider random θ l or random thresholds η i . For example, one can set η i = η¯ i + ξ i (x, t) where ξ i are random fields. We consider these perturbed models in Chapter 4. The pattern stability problem is also considered in subsection 3.1.4. Equations (2.267) are very complex and thus it could be useful to consider simpli­ fied versions of these equations. Let us consider a variant where diffusion is removed,

2.8 Morphogenesis by genetic networks |

namely,

⎞ ⎛ p m ∂u i = R i σ ⎝ K ij u j + M il η l (x) − ξ i (t)⎠ − λ i u i , ∂t j =1 l =1

89

(2.272)

where x is involved as a parameter through thresholds η i (x), and ξ i (t) are random noises. A similar model was first introduced for biochemical applications in [87]. Another possible model is a dynamical system with discrete time, for example, defined by the following iterative process ⎛ ⎞ m t +1 t t (2.273) u i = r i σ ⎝ K ij u j + θ i (x) − ξ i ⎠ , j =1

where t = 0, 1, 2, . . . , T, T is an integer, and ξ it are random functions of discrete time t. Here, x ∈ Ω and we omit the boundary conditions (2.270), but initial data conserve the form (2.271). This model can be considered as a discrete time version of (2.267), where diffusion and degradation effects are removed. On the other hand, this system makes biological sense. In fact, if θ i are constants, then equations (2.273) reduce to the classical model of the neural network theory well-studied in last decades [30, 34, 67, 86, 142, 250, 251]. For patterning problems, this system describes pattern formation by the so-called sequential induction [8]. The signals that organize spatial patterns of an embryo typically act over short distances. We can consider concentrations u 1 (x) as a result of the first patterning round, u 2 (x) of the second round etc. In principle, for such a process K ij , and σ also can depend on t, but we do not consider this case. Biologi­ cally, this means that “it is chiefly through sequential inductions that the body plan of a developing animal, after being first roughed in miniature, becomes elaborated with finer and finer details as development proceeds ([8], p.1169).” If we assume that σ is the Heaviside step function, i. e. σ (z) = 0 for z < 0 and σ (z) = 1 for z > 0, and that r i = 1, then equations (2.273) present an example of a Boolean circuit. Such circuits are well-studied and applied to genetic regulation prob­ lems [34, 257, 267]. Such models are well-justified by gene properties since “like an input-output logic device, an individual gene is thus turned on and off according to the particular combination of proteins bound to its regulatory regions at each stage of development” ([8], p. 1187). One can also use general Boolean circuits with nonpair interactions, for example, u it+1 = σ i (u ti1 , u ti2 , . . . , u til , x) where σ i are Boolean functions of Boolean arguments, indices i k and integer l can depend on i. 
Since Kauffman's seminal work [136], Boolean networks have been widely employed to describe complex genetic networks [257, 267]. In this approach, each gene is represented as a node which can be active or inactive. Each node is connected with some other nodes. Time is discrete, and during each step, the states of all the nodes change depending on the states of the other nodes.
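As a toy illustration of such a circuit, the sketch below runs the Heaviside (Boolean) version of the discrete-time model (2.273) on a row of cells; every number here — the interaction matrix, the thresholds and the maternal inputs $\theta_i(x)$ — is invented for the example, not taken from any fitted network, and there is no noise term.

```python
import numpy as np

def heaviside(z):
    return (z > 0).astype(float)

# hypothetical 3-gene circuit on a 1D row of cells, synchronous updates
m, n_cells, T = 3, 20, 10
K = np.array([[0.0, -2.0, 1.0],
              [1.5,  0.0, -1.0],
              [-1.0, 2.0, 0.0]])                      # gene-gene interactions (assumed)
x = np.linspace(0.0, 1.0, n_cells)
theta = np.vstack([x, 1.0 - x, 0.5 * np.ones(n_cells)])  # maternal inputs theta_i(x)

u = np.zeros((m, n_cells))
for t in range(T):
    u = heaviside(K @ u + theta - 0.8)                # thresholds eta_i = 0.8

print(u[0])  # expression pattern of gene 1 along the axis
```

Because each cell updates independently of its neighbours (no diffusion), the final state is a spatial pattern determined by the graded maternal inputs, in the spirit of the sequential induction discussed above.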

2.8.2 Patterning problems

Mathematical approaches to the pattern formation problem in developmental biology started with the seminal work by A. M. Turing [272], devoted to pattern formation from a spatially uniform state. Turing's model is a system of two reaction-diffusion equations. After [272], similar phenomenological models were studied in numerous works (see [178, 179, 193] for a review). Computer simulations based on this mathematical approach produce patterns similar to actually observed ones [179]. However, there is no direct evidence of Turing patterning in any developing organism ([326], p. 347). The mathematical models are often selected to be mathematically tractable, and they do not take into account experimental genetic information.

Moreover, within the framework of the Turing–Meinhardt approach, some important theoretical questions are left open — for example, whether universal mathematical models and patterning algorithms exist that allow one to obtain any, even very complicated, patterns. In fact, a difficulty in using simple reaction-diffusion models with polynomial or rational nonlinearities is that we have no patterning algorithms. To obtain a given pattern, we first choose a reasonable model (often using intuitive ideas) and later adjust coefficients or nonlinear terms by numerical experiments (an excellent example of this approach is given by the book of H. Meinhardt on pigmentation in shells [178, 179]). To overcome this algorithmic difficulty, we use genetic circuit models. We show that they are capable of generating any spatiotemporal patterns and that there are different algorithms to resolve patterning problems. Let us now formulate the patterning problem.

Pattern generation problem for the gene circuit model
Let $T_0$, $T$ be two positive numbers with $T_0 < T$. Given functions $z_k(x, t)$, $x \in \Omega$, $t \in [0, T]$, $k = 1, \ldots, s$, and a positive number $\epsilon$, find parameters $\mathcal{P}$ such that the solution of system (2.267) with initial conditions $u_j(x, 0) = 0$ satisfies
$$\sup_{x, t} |z_k(x, t) - u_k(x, t)| < \epsilon, \qquad x \in \Omega, \quad t \in [T_0, T]. \qquad (2.274)$$

We refer to the functions $z_k$ as a target pattern. We can also consider another version of this problem, where we minimize the distance between $z$ and $u$. It can be formulated as follows:

Fitting problem for the gene circuit model
Let $T_0$, $T$ be two positive numbers with $T_0 < T$. Given functions $z_k(x, t)$, $x \in \Omega$, $t \in [0, T]$, $k = 1, \ldots, s$, and a positive number $\epsilon$, find parameters $\mathcal{P}$ such that the functions $z_k$ minimize the functional
$$F(\mathcal{P}) = \int_{T_0}^{T} \int_{\Omega} |z_k(x, t) - u_k(x, t)|^2 \, dx \, dt = \min. \qquad (2.275)$$

Similarly, we can formulate these problems for the discrete time version:

Pattern generation problem for time discrete gene circuits. Let T_0 > 0 and T_0 < T, where T_0, T are integers. Given functions z_k^t(x) ∈ [0, 1], x ∈ Ω, t = 0, 1, ..., T, k = 1, ..., s, and a positive ϵ, to find parameters P such that the functions u_k^t defined by relations (2.273) satisfy

sup |z_k^t(x) − u_k^t(x)| < ϵ,   x ∈ Ω,   t = T_0, ..., T,   k = 1, ..., s.   (2.276)

For logic (Boolean) networks z, u_m ∈ {0, 1}, and we can set ϵ = 0. Then, inequalities (2.276) can be replaced by the equalities

z_k^t(x) = u_k^t(x),   t = T_0, ..., T.
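To make the discrete-time setting concrete, here is a small numerical sketch. The update rule u^{t+1} = σ(W u^t + b θ(x) − h) is an assumed generic form of the circuit iteration (2.273), which is defined elsewhere in the chapter; the network size, the parameters P = (W, b, h) and the maternal density θ(x) = x are all illustrative.

```python
import numpy as np

def sigma(h):
    """Logistic sigmoid, the first example in (2.285)."""
    return 1.0 / (1.0 + np.exp(-h))

# One-dimensional domain Omega = [0, 1], sampled at 50 points.
x = np.linspace(0.0, 1.0, 50)
theta = x                      # a strictly monotone maternal density

# Hypothetical parameters P = (W, b, h) for a 3-gene circuit.
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 3))
b = rng.normal(size=3)
h = rng.normal(size=3)

def step(u):
    """One assumed discrete update: u^{t+1} = sigma(W u^t + b theta - h)."""
    return sigma(W @ u + np.outer(b, theta) - h[:, None])

u = np.zeros((3, x.size))      # initial conditions u_j(x) = 0
for t in range(10):
    u = step(u)

# The pattern generation problem asks for P making u_k close to a
# target z_k for t = T0..T; here we only check the values stay in [0, 1].
assert u.min() >= 0.0 and u.max() <= 1.0
```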

Let us consider a biological interpretation of these formulations. Among the variables u_i, we select special variables, say u_1, ..., u_s. They can be interpreted as morphogens. The cell states depend on the expression of these genes. All the remaining genes u_{s+1}, ..., u_m are hidden units, or regulatory genes. They are involved in the biochemical machinery, but they do not act directly on the cell states. Such an approach is consistent with experimental facts [8, 44, 324, 326].
Let us consider, for example, the pigmentation process, well studied for Drosophila melanogaster [324]. This patterning process is controlled by regulatory genes, which control the expression of other genes, and structural genes, which encode enzymes. These enzymes are involved in the biochemical pathways used for pigment synthesis. Different regulatory genes control the expression of the structural genes in different body regions. The activity of most enzymes is limited to the cells in which they are expressed [324].
Such a patterning problem statement reminds one of the classical approaches of neural network theory [195, 321], where, similarly, one distinguishes input, output and hidden neurons. This decomposition helps to resolve approximation and classification problems by multilayered networks. The output genes can change the cell states, and therefore they can predetermine an output pattern z. The hidden genes do not directly influence the cell states; they are only involved in the internal cellular gene regulation.
For stationary patterns z (independent of time), the solution of the patterning problems follows from the well-known results on multilayered neural networks [21, 53, 107, 119]. If s = m, i.e. without hidden genes, the pattern problem is the fitting problem

posed in [188]. In this case, we have experimental data on all gene concentrations, and we try to find circuit parameters which give the best approximation of the data on some time interval.
The patterning and fitting problems can have a number of different solutions. For the gap-gene system in Drosophila, the fitting problem was studied by numerical simulations [131, 188, 202, 226]. Even in this case, where m = 6, p = 1, the fitting problem is difficult: numerical simulations require a large computing time.

2.8.3 Patterning and hierarchical modular structure of genetic networks

The first main result on the patterning problem asserts that under natural conditions on the maternal gene densities θ_i, the pattern generation problem always has a solution. Moreover, there is a constructive and numerically effective algorithm that allows us to find a circuit generating a given pattern. We show that the modular hierarchical organization and sigmoidal interaction are effective tools to form complex hierarchical patterns. New, more refined patterns can be obtained from previously formed ones. Given a final pattern z^T(x), one can estimate the minimal number of genes in a network that generates this pattern. We give definitions of circuit complexity and pattern complexity. The Khovanski theory [139] then shows that a connection between these complexities exists: it is impossible to obtain a "complex" pattern by a "simple" circuit. These results can be found in [284–286, 296].
The second result is that for each reaction-diffusion model, one can find a gene network which simulates this model within a given accuracy. This means that the Turing–Meinhardt patterning models can be reformulated as gene circuits.

2.8.4 Generation of complicated patterns

Let us consider a pattern generation problem for time discrete gene circuits (2.273). To simplify the statement, let us set s = 1. We denote θ = (θ_1, ..., θ_m).

Theorem 2.32. Suppose that T_0 > 2 and continuous functions ϕ_l(θ), l = 1, ..., d defined on R^m exist such that x_l = ϕ_l(θ_1(x), ..., θ_m(x)) for each x ∈ Ω ⊂ R^d. Then, the pattern generation problem has a solution.

Remark 1. The assumption of the theorem implies that at least d functions θ_i are nontrivial: θ_i ≠ const. In the one-dimensional case d = 1, to satisfy this assumption, it is sufficient to suppose that at least one function θ_i is strictly monotone. Moreover, under the condition of the theorem, any function f(x_1, ..., x_d) can be represented as a function of θ = (θ_1, ..., θ_m). Indeed, f(x_1, ..., x_d) = f(ϕ_1(θ), ..., ϕ_d(θ)) = f̃(θ).


Remark 2. We also observe that the assumption on θ_i is necessary to approximate any sequences z^t(x). In fact, chain (2.273) can generate only such sequences z^t where each z^t(x) depends on x through θ(x) = (θ_1(x), ..., θ_m(x)). This means that for each z^t, there exists a function G^t(θ) such that z^t(x) = G^t(θ). If our assumption does not hold, the trivial target sequence z^t = x_k cannot be approximated by iterations (2.273). Consequently, the assumption of the theorem is sufficient and necessary to resolve the pattern generation problem for any outputs z^t.
This theorem can be considered as a generalization of the well-known results for multilayered neural networks (multilayered perceptrons) [21, 53, 107, 119, 195, 321] and time recurrent networks [81, 280, 287]. In fact, removing t from (2.272) and introducing the input v = u^t and the output w = u^{t+1}, we transform (2.272) into a multilayered network. The multilayered networks are capable of generating complicated patterns and of resolving classification problems [21, 53, 107, 119, 195, 321]. If we remove x from (2.272), we obtain a time recurrent network. It is well known that these networks can generate all possible time trajectories [81] and all possible structurally stable attractors (up to a topological orbital equivalency) [280, 287]. Roughly speaking, they can induce all time depending patterns. Theorem 2.32 can be considered as a development of these results: the networks generate all spatiotemporal patterns. This theorem can be derived from the following lemma.

Lemma 2.33 (Superposition principle). Consider a family consisting of p circuits (2.273) generating functions u_{i,s}^t, where t = 0, ..., T_1, s = 1, ..., p and i = 1, 2, ..., m_s (here, the index s marks the functions generated by the s-th circuit, and m_s is the number of the genes involved in the s-th circuit). Denote by u^t the vector with the components

u^t = (u_{1,1}^t, u_{2,1}^t, ..., u_{m_1,1}^t, u_{1,2}^t, ..., u_{m_2,2}^t, ..., u_{1,p}^t, ..., u_{m_p,p}^t).

Suppose that z^t(x) = F(u^t(x)), where F is a continuous function of N variables defined on the N-dimensional cube Q^N = [0, 1]^N, and N = Σ_{s=1}^{p} m_s is the number of functions involved in the circuits. (This means that the target pattern can be expressed through the patterns generated by our family.) Then, for any ϵ > 0, there exists a circuit (2.273) satisfying (2.276) with T_0 = 2 and T = T_1 + 2.

The main idea of the proof of this lemma is based on a biological fact: the gene networks have a modular hierarchical structure and are organized in blocks [106, 224]. As a mathematical basis, we use the following known approximation result: for each κ > 0, a number m and coefficients A_{kjs}, b_k, η_k exist such that

|σ^{−1}(F(u)) − Σ_{k=1}^{m} b_k σ( Σ_{s=1}^{p} Σ_{j=1}^{m_s} A_{kjs} u_{j,s} − η_k )| < κ,   u ∈ Q^N   (2.277)

(for more detail, see [296]). The theorem is an immediate consequence of this lemma, see [296]. This proof gives, moreover, an algorithm to resolve the pattern generation problem. Namely, the key step of the proof (approximation (2.277)) can be realized by a constructive procedure [280]. An explicit estimate of the gene number m can be obtained from this lemma under some supplementary assumptions on F and on z^t. Namely, we suppose that the functions F(u) and z^t(x) are Lipschitzian, with the Lipschitz constants Lip(F) and Lip(z^t). Analogues of these results can be obtained for the time continuous patterning problem (2.274), see [285, 286].
The superposition lemma confirms the famous law that "morphogenesis repeats evolution" [227]: new, more refined patterns can always be obtained from old patterns by a sequential induction.
Notice that this patterning algorithm, based on a superposition, is not unique. There are many other methods to resolve the patterning problem. Even for the gap-gene fitting problem, when we have experimental data on all the gene concentrations, numerical results show that there are a number of solutions. Finding these solutions needs a global search and takes a formidable computer processing time. For the fitting problem, the first works used simulated annealing [188, 226], while later asymptotic approaches were developed in order to diminish processing time [202]. Moreover, to handle experimental data correctly, we need an a priori hypothesis on the net structure. It is well known that such inverse problems for gene nets are very difficult, although different approaches have been proposed [136, 201, 202, 328]. Furthermore, we should take into account a fundamental robustness of a circuit with respect to variations in maternal gene concentrations, mutations, and embryo sizes. The output pattern must be proportional to the embryo length [121, 122]. There are different reaction-diffusion models allowing us to explain this stability observed in experiments [122, 123]. However, we think that they are still far from biological reality. The problem is far from being resolved: it is a topic for coming investigations (see an approach in subsection 3.1.4).

2.8.5 Approximation of reaction-diffusion systems by gene networks

This section follows the works [285, 286]. We consider, for simplicity, the case of two-component reaction-diffusion systems

∂u/∂t = d_1 Δu + f(u, v),   (2.278)
∂v/∂t = d_2 Δv + g(u, v).   (2.279)

In these equations, u = u_1 and v = u_2 are unknown functions of the space variables x = (x_1, x_2, x_3) defined in a bounded domain Ω. System (2.278), (2.279) must be complemented by standard initial and boundary conditions (2.270) and (2.271). The general multicomponent case can be studied in a similar way.
The phenomenological approach based on equations (2.278) and (2.279) gives excellent results for some pattern formation problems, see [178, 179, 193].
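For concreteness, here is a minimal explicit finite-difference integration of a system of the form (2.278)–(2.279) in one space dimension with zero-flux boundaries. The kinetics f, g and all numerical parameters are illustrative assumptions, not taken from the book; the sketch only shows the structure of such a phenomenological computation.

```python
import numpy as np

n, L = 100, 1.0
dx = L / (n - 1)
d1, d2 = 1e-3, 5e-3
dt = 0.2 * dx**2 / max(d1, d2)       # explicit stability margin

f = lambda u, v: u - u**3 - v        # illustrative kinetics
g = lambda u, v: 0.5 * (u - v)

def laplacian(w):
    """Second difference with Neumann (zero-flux) ends."""
    lap = np.empty_like(w)
    lap[1:-1] = (w[2:] - 2 * w[1:-1] + w[:-2]) / dx**2
    lap[0] = 2 * (w[1] - w[0]) / dx**2
    lap[-1] = 2 * (w[-2] - w[-1]) / dx**2
    return lap

rng = np.random.default_rng(2)
u = 0.1 * rng.standard_normal(n)     # small random initial data
v = np.zeros(n)
for _ in range(2000):
    u, v = (u + dt * (d1 * laplacian(u) + f(u, v)),
            v + dt * (d2 * laplacian(v) + g(u, v)))

assert np.all(np.isfinite(u)) and np.all(np.isfinite(v))
```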


Assume that the solutions of (2.278), (2.279) remain globally bounded, i.e. for some positive constants C_i, we have the estimate

|u(x, t)| < C_1,   |v(x, t)| < C_2

for all t > 0 if it holds for t = 0. Let us define the domain D_{C_1,C_2} as follows:

D_{C_1,C_2} = {(u, v) : 0 ≤ u < C_1, 0 ≤ v < C_2}.

We suppose that the initial data (ϕ_1(x), ϕ_2(x)) lie in D_{C_1,C_2} for each x.
We shall show that for a given reaction-diffusion system, we can always find an ϵ-equivalent circuit (2.267). Namely, for this equivalent circuit, a smooth map b(y) : (y_1, y_2, ..., y_m) → (u, v) exists, transforming the gene concentrations to the reagent concentrations, such that the time evolution of u, v is defined by a new reaction-diffusion system with nonlinearities Φ_1(u, v), Φ_2(u, v), ϵ-close to the nonlinearities f(u, v), g(u, v). Therefore, one can say that all reaction-diffusion systems can be realized as gene circuits.
To construct these circuits, we use the modular approach and the generalized Hopfield substitution (2.43). Let us consider a system (2.267) having a special structure. Namely, we assume that there exist two kinds of genes. We denote these groups of genes by y and z, where the vector y(x, t) contains m_1 components and z(x, t) contains m_2 components. Naturally, m = m_1 + m_2. Therefore, we shall consider a system (2.267) of the following form, that is,

∂y_i/∂t = σ(K_i^{yy} y + K_i^{yz} z − θ_i) + d_1 Δy_i,   (2.280)
∂z_i/∂t = σ(K_i^{zy} y + K_i^{zz} z − θ̄_i) + d_2 Δz_i.   (2.281)

Here, we use the notation K_i^{yy} y = Σ_{j=1}^{m_1} K_{ij}^{yy} y_j, and the matrices K^{yy}, K^{zz}, K^{zy} and K^{yz} describe interactions between the different groups of genes. In general, these interactions are not symmetric, i.e. K^{yz} is not equal to the transpose of K^{zy}. The coefficients d_1 and d_2 coincide with the diffusion coefficients in equations (2.278), (2.279). We choose the entries of the matrices K^{yy}, K^{zz}, K^{zy} and K^{yz} as

K_{ij}^{yy} = a_i b_j,   K_{ij}^{yz} = γ_i b̄_j,   K_{ij}^{zy} = γ̄_i b_j,   K_{ij}^{zz} = ā_i b̄_j,

where a_i, ā_i, γ_i, γ̄_i, b_i, b̄_i are unknown coefficients. Let us define collective variables u and v by

u = Σ_{i=1}^{m_1} b_i y_i,   v = Σ_{i=1}^{m_2} b̄_i z_i.

By elementary calculations [286], we obtain

∂u/∂t = d_1 Δu + Φ_1(u, v),   ∂v/∂t = d_2 Δv + Φ_2(u, v),

where

Φ_1(u, v) = Σ_{i=1}^{m_1} b_i σ(a_i u + γ_i v − θ_i),   Φ_2(u, v) = Σ_{i=1}^{m_2} b̄_i σ(ā_i v + γ̄_i u − θ̄_i).
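These elementary calculations can be checked numerically: with the rank-one entries chosen above, the right-hand side of each y- and z-equation depends on (y, z) only through the collective variables u = Σ b_i y_i and v = Σ b̄_i z_i, so u, v obey exactly the Φ-system (diffusion dropped). All coefficient values below are random and purely illustrative.

```python
import numpy as np

def sigma(h):
    return 1.0 / (1.0 + np.exp(-h))

rng = np.random.default_rng(3)
m1, m2 = 4, 5
a, b, gam, th = (rng.normal(size=m1) for _ in range(4))
abar, bbar, gambar, thbar = (rng.normal(size=m2) for _ in range(4))

y = rng.uniform(size=m1)
z = rng.uniform(size=m2)
u, v = b @ y, bbar @ z

# Right-hand sides of (2.280)-(2.281) without diffusion, using the
# rank-one entries K^yy_ij = a_i b_j, K^yz_ij = gamma_i bbar_j, etc.:
# K_i^yy y = a_i (b . y), K_i^yz z = gamma_i (bbar . z), and so on.
dy = sigma(a * (b @ y) + gam * (bbar @ z) - th)
dz = sigma(abar * (bbar @ z) + gambar * (b @ y) - thbar)

# Hence du/dt = sum_i b_i dy_i = Phi_1(u, v), dv/dt = Phi_2(u, v).
Phi1 = np.sum(b * sigma(a * u + gam * v - th))
Phi2 = np.sum(bbar * sigma(abar * v + gambar * u - thbar))
assert np.isclose(b @ dy, Phi1) and np.isclose(bbar @ dz, Phi2)
```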

The well-known approximation results of the neural network theory [21, 81, 107, 119] (see also Lemma 2.9) then entail that for any ϵ > 0, numbers m_1, m_2 and vectors a, b, ā, b̄, γ, γ̄ and θ, θ̄ exist such that

|Φ_1(u, v) − f(u, v)| < ϵ,   |Φ_2(u, v) − g(u, v)| < ϵ

for all u, v from some bounded domain. This proves the following result [285, 286]:

Theorem 2.34. Consider system (2.278), (2.279). Suppose solutions of this system remain in a domain D_{C_1,C_2}. Then, if the functions f and g are continuous, for any ϵ > 0 a circuit (2.267) exists with a sufficiently large number m and coefficients r = (r_1, r_2, ..., r_m) and s = (s_1, s_2, ..., s_m) such that the functions

u = Σ_{i=1}^{m} r_i y_i,   v = Σ_{i=1}^{m} s_i y_i

satisfy the system

u_t = d_1 Δu + f̃(u, v),   v_t = d_2 Δv + g̃(u, v),

where

|f(u, v) − f̃(u, v)| < ϵ,   |g(u, v) − g̃(u, v)| < ϵ

for (u, v) ∈ D_{C_1,C_2}.

This theorem shows that the pattern capacity of the gene circuits on bounded time intervals is not less than the pattern capacity of reaction-diffusion systems.

2.9 Centralized gene networks

This section follows [290, 291]. We consider a class of centralized genetic networks which are analogues of the centralized neural networks studied above. Such networks can emerge as a result of natural growth mechanisms which lead to a scale-free network topology. Notice that scale-free networks [7, 158] occur in many areas, for example, in economics, biology and sociology. In scale-free networks, the probability P(k) that a node is connected with k neighbors has the asymptotics Ck^{−γ} with γ ∈ (2, 3). Such networks typically contain a few strongly connected nodes and a number of satellite nodes. Hence, scale-free networks are, in a sense, centralized.

2.9 Centralized gene networks | 97

Centralized networks have been empirically identified in molecular biology, where the centers can be, for example, transcription factors, while the satellite regulators can be small regulatory molecules such as microRNAs.
In order to model the dynamics of centralized networks, we adapt a gene circuit model proposed to describe early stages of Drosophila (fruit fly) morphogenesis [188, 226]. To take into account the two types of nodes, we use distinct variables v_j, u_i for the centers and the satellites. The real matrix entry A_{ij} defines the intensity of the action of a center node j on a satellite node i. This action can be either a repression, A_{ij} < 0, or an activation, A_{ij} > 0. Similarly, the matrices B and C define the action of the centers on the centers and of the satellites on the centers, respectively. Let us assume that a satellite does not act directly on another satellite (divide et impera). We also assume that satellites respond more rapidly to perturbations and are more diffusive/mobile than the centers. Both of these assumptions seem natural when satellites are microRNAs.
Let M, N be positive integers, and let A, B and C be matrices of the sizes N × M, M × M and M × N, respectively. We denote by A_i, B_j and C_j the rows of these matrices. To simplify formulas, we use the notation

Σ_{j=1}^{M} A_{ij} v_j = A_i v,   Σ_{l=1}^{M} B_{jl} v_l = B_j v,   Σ_{k=1}^{N} C_{jk} u_k = C_j u.

Then, the centralized gene circuit model reads

∂u_i/∂t = d̃_i Δu_i + r̃_i σ(A_i v + b̃_i m(x) − h̃_i) − λ̃_i u_i,   (2.282)
∂v_j/∂t = d_j Δv_j + r_j σ(B_j v + C_j u + b_j m(x) − h_j) − λ_j v_j,   (2.283)

where m(x) represents the maternal morphogen gradient, i = 1, ..., N, j = 1, ..., M. We assume that the diffusion coefficients d_i, d̃_i and the amplitude coefficients r_i, r̃_i are nonnegative: d_i, d̃_i, r_i, r̃_i ≥ 0. Here, the morphogenetic field m(x) and the unknown gene concentrations u_i(x, t), v_j(x, t) are defined in a compact domain Ω (dim(Ω) ≤ 3) having a smooth boundary ∂Ω, x ∈ Ω, and σ is a monotone and smooth (at least twice differentiable) sigmoidal function such that

σ(−∞) = 0,   σ(+∞) = 1.   (2.284)

Typical examples can be given by

σ(h) = 1/(1 + exp(−h)),   σ(h) = (1/2)( h/√(1 + h²) + 1 ).   (2.285)

The function σ(βx) becomes a step-like function as its sharpness β tends to ∞. We assume that the centers do not diffuse or diffuse slowly. Then, for u_i, we set the zero Neumann boundary conditions

∇u(x, t) · n(x) = 0,   ∇v(x, t) · n(x) = 0,   x ∈ ∂Ω.   (2.286)

This means that the flux of each u-reagent through the boundary is zero (here, n denotes the unit normal vector to the boundary ∂Ω at the point x). Moreover, we set the initial conditions

u_i(x, 0) = ϕ̃_i(x) ≥ 0,   v_j(x, 0) = ϕ_j(x) ≥ 0,   x ∈ Ω.   (2.287)

It is natural to assume that all concentrations u_i(x, t) are nonnegative at the initial moment, and then it is easy to show that u_i(x, t) are nonnegative for all times t > 0 (see estimate (2.291) below).
By neglecting diffusion effects, we obtain from (2.282), (2.283) the following shorted system:

∂u_i/∂t = r̃_i σ(A_i v + b̃_i m(x) − h̃_i) − λ̃_i u_i,   (2.288)
∂v_j/∂t = r_j σ(B_j v + C_j u + b_j m(x) − h_j) − λ_j v_j.   (2.289)

This is a Hopfield-like network model [118] with thresholds depending on x (contrary to the Hopfield model, the interaction matrices are not necessarily symmetric). In this case, we remove all boundary conditions (2.286). If only d_i = 0, we remove the corresponding boundary conditions for v_i.
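A minimal integration sketch of the shorted system (2.288)–(2.289) at a single spatial point (m(x) fixed). The sizes, matrices and the choice r̃_i = λ̃_i = r_j = λ_j = 1 are illustrative assumptions; with unit rates, the expected absorbing bounds are simply 0 ≤ u_i, v_j ≤ 1.

```python
import numpy as np

def sigma(h):
    return 1.0 / (1.0 + np.exp(-h))

N, M = 6, 2                      # satellites, centers (illustrative)
rng = np.random.default_rng(4)
A = rng.normal(size=(N, M))      # centers -> satellites
B = np.zeros((M, M))             # centers -> centers (set to 0 here)
C = rng.normal(size=(M, N))      # satellites -> centers
bt, ht = rng.normal(size=N), rng.normal(size=N)
b, h = rng.normal(size=M), rng.normal(size=M)
lt, rt = np.ones(N), np.ones(N)  # unit satellite rates
lam, r = np.ones(M), np.ones(M)  # unit center rates
m = 0.7                          # morphogen value at this point

u, v = np.zeros(N), np.zeros(M)
dt = 0.01
for _ in range(5000):
    du = rt * sigma(A @ v + bt * m - ht) - lt * u
    dv = r * sigma(B @ v + C @ u + b * m - h) - lam * v
    u, v = u + dt * du, v + dt * dv

# Trajectories enter the absorbing set 0 <= u_i <= r~_i/lambda~_i,
# 0 <= v_j <= r_j/lambda_j, here the unit box.
assert np.all(u >= -1e-9) and np.all(u <= rt / lt + 1e-9)
assert np.all(v >= -1e-9) and np.all(v <= r / lam + 1e-9)
```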

2.9.1 Existence of solutions

Let us introduce some special functional spaces [108, 260]. Let us denote the Hilbert space of the vector valued functions w by H = L²(Ω)^n. This space is equipped with the standard L²-norm defined by ‖w‖² = ∫_Ω |w(x)|² dx, where |w|² = Σ w_i². For α > 0, we denote by H^α the space consisting of all functions w ∈ H such that the norm ‖w‖_α is bounded, where ‖w‖_α² = ‖(−Δ + I)^α w‖². These spaces are well studied (see [108, 167, 260] and references therein). The phase space of the system is ℋ = {w = (u, v) : u ∈ H, v ∈ H}; the corresponding natural fractional spaces are denoted by H^α and ℋ^α, and here H^0 = H and ℋ^0 = ℋ. Denote by B^α(R) the ball in H^α centered at the origin with radius R: B^α(R) = {w : w ∈ H^α, ‖w‖_α < R}.
In the case of system (2.282), (2.283), all f_i(w, x) are smooth in w, x, and therefore, the standard technique [108] shows that solutions of this system exist locally in time and are unique. In fact, system (2.282), (2.283) can be rewritten as an evolution equation of the form

w_t = Aw + f(w),   (2.290)

where f is a uniformly bounded C¹ map from ℋ^α to ℋ, and the linear self-adjoint negatively defined operator A generates a semigroup satisfying the estimate ‖exp(At)w‖ ≤ exp(−βt)‖w‖ with some β > 0.
To show local existence and uniqueness, we use the standard procedure [108], the embedding sup_{x∈Ω} |w| ≤ c‖w‖_α for α > 3/4, and the fact that the derivative σ′(z) is uniformly bounded in z.


Let us prove that the gene network dynamics defines a global dissipative semiflow. In fact, an absorbing set B exists that is defined by

B = {w = (u, v) : 0 ≤ v_j ≤ r_j λ_j^{−1}, 0 ≤ u_i ≤ r̃_i λ̃_i^{−1}, j = 1, ..., M, i = 1, ..., N}.

One can show, by super- and subsolutions, that

0 ≤ u_i(x, t) ≤ ϕ̃_i(x) exp(−λ̃_i t) + r̃_i λ̃_i^{−1} (1 − exp(−λ̃_i t)),
0 ≤ v_i(x, t) ≤ ϕ_i(x) exp(−λ_i t) + r_i λ_i^{−1} (1 − exp(−λ_i t)).   (2.291)

Therefore, solutions of (2.282), (2.283) not only exist for all times t, but they also enter the set B at some time moment t_0 and then stay in this set for all t > t_0. So, system (2.282), (2.283) defines a global dissipative semiflow.

2.9.2 Reduced dynamics

In this subsection, we describe an asymptotic reduction of the centralized system dynamics. It is possible under some assumptions. We suppose that the u-variables are fast and the v-variables are slow. We show then that the fast u variables are captured, for large times, by the slow v modes. More precisely, one has u = U(v) + ũ, where ũ is a small correction. This means that for large times, the satellite dynamics is defined almost completely by the center dynamics.
Our assumptions are as follows. Let us suppose that the parameters of system (2.282), (2.283) satisfy the following conditions:

|A_{jl}|, |B_{il}|, |C_{ij}|, |h̃_i|, |h_j| < C_0,   (2.292)

where i = 1, 2, ..., N, i, l = 1, ..., M, j = 1, ..., N,

0 < C_1 < λ̃_j,   d̃_j < C_2,   |b_j|, |b̃_i| < C_3,   (2.293)

sup_x |m(x)| < C_4,   (2.294)

and

r_i = κ R_i,   r̃_i = κ R̃_i,   (2.295)

where

|R_i|, |R̃_i| < C_5,   λ_i = κ λ̄_i,   |λ̄| < C_6,   (2.296)

d_j = κ d̄_j,   0 < d̄_j < C_7,   (2.297)

where κ is a small parameter, and all positive constants C_k are independent of κ.

Proposition 2.35. Assume the space dimension dim Ω ≤ 3. Under assumptions (2.292)–(2.295), for sufficiently small κ < κ_0, solutions (u, v) of (2.282), (2.283), (2.286), and (2.287) satisfy

u = U(x, v(x, t)) + ũ(x, t),   (2.298)

where the j-th component U_j of U is defined as the unique solution of the equation

d̃_j ΔU_j − λ̃_j U_j = κ G_j(v)   (2.299)

under the boundary conditions (2.286), where

G_j = R̃_j σ(A_j v(x, t) + b̃_j m(x) − h̃_j).

The function ũ satisfies the estimates

‖ũ‖ + ‖∇ũ‖ < c_1 κ² + R exp(−βt),   β > 0.   (2.300)

The v dynamics for large times t > C_1 |log κ| takes the form

∂v_i/∂t = κ F_i(u, v) + w_i,   (2.301)

where the w_i satisfy ‖w_i‖ < c_0 κ²

and

F_i(u, v) = d̄_i Δv_i + R_i σ(B_i v + C_i U(x, v) + b_i m − h_i) − λ̄_i v_i.

The constants c_0, c_1 do not depend on κ as κ → 0, but they may depend on R_i, R̃_i, C_i.

A proof of this assertion is basically straightforward; it is based on well-known results [108], see also the theorems from Appendix 2.11. We omit this proof.
An analogous assertion holds for the shorted system (2.288), (2.289). In this case, the functions U_i can be found by an explicit formula. Namely, one has

U_i(x, v(x, t)) = κ V_i,   (2.302)
V_i = R_i λ̃_i^{−1} σ(A_i v(x, t) + b̃_i m(x) − h̃_i).   (2.303)

For large times, the reduced v dynamics has the same form (2.301) with d i = 0.
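The fast–slow reduction can be probed numerically for the shorted system (morphogen omitted for brevity): with r̃ = κR̃, λ̃ = O(1) for the satellites and rates of order κ for the centers, the satellites should track the explicit quasi-equilibrium u ≈ κV of (2.302)–(2.303) up to O(κ²). All sizes, matrices and scalings below are assumed for illustration.

```python
import numpy as np

def sigma(h):
    return 1.0 / (1.0 + np.exp(-h))

kappa = 0.01
N, M = 5, 2
rng = np.random.default_rng(5)
A = rng.normal(size=(N, M))          # centers -> satellites
C = rng.normal(size=(M, N))          # satellites -> centers
Rt, ht, lt = np.ones(N), rng.normal(size=N), np.ones(N)   # satellites
Rb, hb, lb = np.ones(M), rng.normal(size=M), np.ones(M)   # centers

u, v = np.zeros(N), 0.3 * np.ones(M)
dt = 0.01
for _ in range(20000):
    du = kappa * Rt * sigma(A @ v - ht) - lt * u          # fast: O(1) decay
    dv = kappa * (Rb * sigma(C @ u - hb) - lb * v)        # slow: O(kappa)
    u, v = u + dt * du, v + dt * dv

# Quasi-steady satellite value kappa*V with V from (2.303).
V = Rt / lt * sigma(A @ v - ht)
qss_err = float(np.max(np.abs(u - kappa * V)))
assert qss_err < 10 * kappa**2       # tracking error of order kappa^2
```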

2.9.3 Complex behavior in centralized gene networks

Let us apply this approach to network dynamics using the results of the previous section. Let us assume

b_i = κ b̄_i,   h_i = κ h̄_i,   (2.304)
λ_i = κ² λ̄_i,   d_i = κ² d̄_i,   (2.305)


where all coefficients b̄_i and h̄_i are uniform in κ as κ → 0. These assumptions are useful for technical reasons. We also assume that all direct interactions between centers are absent, B = 0. This constraint is not essential, but it facilitates notation and calculations.
Since U_j = O(κ) for small κ, we can use the Taylor expansion for σ in (2.289). Then, these equations reduce to

∂v_i(x, τ)/∂τ = d̄_i Δv_i + ρ_i (C_i V(x, v) + b̄_i m(x) − h̄_i) − λ̄_i v_i + w̃_i(x, t),   (2.306)

where ρ_i(x) = r̄_i σ′(0), i = 1, 2, ..., M, and τ is the slow rescaled time τ = κ² t. Due to conditions (2.304) and (2.305), the corrections w̃_i satisfy ‖w̃_i‖ < cκ.
Now, let us focus our attention on the nonperturbed equation (2.306) with w̃_i = 0. Let us fix the number of centers M. The number of satellites N will be considered as a parameter.
The next lemma follows from the approximation theorems of the multilayered network theory [21, 53, 119]. It is a version of the approximation Lemma 2.9.

Lemma 2.36. Given a number δ > 0, an integer M and a vector field F = (F_1, ..., F_M) defined on the ball B^M = {v = (v_1, ..., v_M) ∈ R^M : |v| ≤ 1}, F_i ∈ C¹(B^M), there are a number N, an N × M matrix A, an M × N matrix C and coefficients h_i, i = 1, 2, ..., N, such that

|F_j(·) − C_j W(·)|_{C¹(B^M)} < δ,   (2.307)

where

W_i(v) = σ(A_i v − h_i).   (2.308)

This lemma gives us a tool to control network dynamics and patterns. First, we consider the case when the morphogens are absent. Formally, we can set b̃_i = b̄_j = d̄_i = 0. Assume h̄_i = 0. Then, equations (2.306) with w̃_i = 0 reduce to the Hopfield-like equations for the variables v_i ≡ v_i(τ) that depend only on τ:

dv_l/dτ = K_l W(v) − λ̄_l v_l,   (2.309)

where l = 1, ..., M, and the matrix K is defined by K_{lj} = ρ_l C_{lj} R_j λ̃_j^{−1}. The parameters P of (2.309) are K, M, h_j and λ̄_j. We then obtain the following result.
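To illustrate how the Hopfield-like system (2.309) can ϵ-realize a prescribed field, one can fit K so that K W(v) − v approximates a planar vector field with a stable limit cycle at |v| = 1. The target field, the random sigmoidal units, and the least-squares fit below are all illustrative assumptions, a rough stand-in for the constructive approximation of Lemma 2.36.

```python
import numpy as np

def sigma(h):
    return 1.0 / (1.0 + np.exp(-h))

# Prescribed field: rotation plus radial attraction to the unit circle.
F = lambda V: np.stack([V[:, 1], -V[:, 0]], axis=1) \
    + (1 - (V**2).sum(axis=1))[:, None] * V

rng = np.random.default_rng(6)
M, N = 2, 150
A = rng.normal(scale=2.5, size=(N, M))   # inner weights A_i
h = rng.normal(size=N)                   # thresholds h_i

pts = rng.uniform(-1.1, 1.1, size=(6000, M))
pts = pts[np.linalg.norm(pts, axis=1) <= 1.1]
Wmat = sigma(pts @ A.T - h)              # sigmoidal units W(v)
K, *_ = np.linalg.lstsq(Wmat, F(pts) + pts, rcond=None)
K = K.T                                  # so that K W(v) - v ~ F(v)

# Integrate dv/dtau = K W(v) - v (lambda-bar_l = 1).
v = np.array([0.2, 0.0])
dt = 0.01
for _ in range(20000):
    v = v + dt * (K @ sigma(A @ v - h) - v)
r = float(np.linalg.norm(v))
assert 0.5 < r < 1.5                     # settled near the target cycle
```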

Lemma 2.37. Let us consider a C¹-smooth vector field F(p) defined on a ball B^M ⊂ R^M and directed strictly inside this ball at the boundary ∂B^M:

F(p) · p < 0,   p ∈ ∂B^M.   (2.310)

Let us consider the system of differential equations defined by F:

dp/dt = F(p),   F ∈ C¹(B^M).   (2.311)

Then, for each ϵ > 0, there is a choice of parameters P such that (2.282), (2.283) ϵ-realizes system (2.311). This means that the semiflows defined by (2.282), (2.283) form an MDC family.

This assertion follows from Proposition 2.35 and Lemma 2.36. It implies the following important corollary: all structurally stable dynamics, including periodic and chaotic dynamics, can be realized by centralized networks. The proof of this fact uses the classical results on the persistence of hyperbolic sets, see [65, 135, 228, 232].

2.9.4 How positional information can be transformed into the body plan of a multicellular organism

Above, we have considered a spatially homogeneous case. It was shown that a centralized network can approximate any prescribed dynamics and, thus, the cells can be programmed to have arbitrarily complex dynamics. By network rewiring or by modulating the interactions, one can switch between various types of dynamics. During development, switches are position dependent, which induces cell differentiation into specific spatial arrangements.
In this section, we are going to demonstrate that centralized networks, coupled to morphogen gradients, can generate complicated patterns and produce a complicated multicellular organization. We consider the shorted dynamics (2.288), (2.289), reasonable for those developmental stages where cell walls prevent a free diffusion of regulatory molecules. Although other phenomena such as cell signaling could lead to cell coupling, we do not discuss these effects here.
Assume cell positions are centered at the points x ∈ X = {x_1, x_2, ..., x_k}, dim Ω = 1, where X is a discrete subset of [0, L]. Let us show that equations (2.288)–(2.289) can realize arbitrary dynamics at the different points x_l of the domain Ω = [0, L]. To prove this assertion, let us turn to equations (2.306), where, taking into account the biological arguments given above, we set d_i = 0, b̃_i = 0. Let us suppose, to simplify the formulas, that ρ_j = 1, h̄_j = 0 and λ̄_j = 1. Then,

V_j(v) = R_j λ̃_j^{−1} σ(A_j v − h̃_j).

Denote by Q_i the sums Q_i(v) = Σ_{j=1}^{N} C_{ij} V_j(v) = C_i V. Removing the terms w̃_i in (2.306), one obtains that equations (2.306) reduce to

∂v_i(x, τ)/∂τ = Q_i(v(x, τ)) + b̄_i m(x) − v_i(x, τ).   (2.312)


Let us fix x = x_l ∈ X. We substitute v_i(x_l, τ) = z_i(τ) + b̄_i m(x_l) in (2.312), which gives

dz_i(τ)/dτ = Q_i(z + b̄ m(x_l)) − z_i,   (2.313)

where b̄ = (b̄_1, ..., b̄_M), i = 1, ..., M. Now, we use the approximation Lemma 2.9. Let us consider a family of vector fields F^{(l)}, where l = 1, ..., k. We suppose that the C¹-smooth vector fields F^{(l)} are defined on the unit ball B^M = {z ∈ R^M, |z| ≤ 1} and directed strictly inside this ball at the boundary ∂B^M:

F^{(l)}(z) · z < 0,   z ∈ ∂B^M.   (2.314)

Assume m(x) is a strictly monotone function in x. The main idea is as follows: since all m(x_l) = μ_l and m(x_j) = μ_j are different for j ≠ l, the vector fields Q^{(l)}(z) = Q(z + b̄ μ_l) can approximate different vector fields F^{(l)}(ρ_0^{−1} z) for l = 1, ..., k and for z such that |z| < ρ_0, where

ρ_0 = (1/2) min_{i,j,l, j≠l} |b̄_i| |μ_j − μ_l|.

Indeed, for each ϵ > 0, we can find an approximation Q satisfying

sup_{|z|<ρ_0} |Q(z + b̄ m(x_l)) − (ρ_0 F^{(l)}(ρ_0^{−1} z) + z)| < ρ_0 ϵ   (2.315)

and

sup_{|z|<ρ_0} |∇(Q(z + b̄ m(x_l))) − ∇(ρ_0 F^{(l)}(ρ_0^{−1} z) + z)| < ρ_0 ϵ   (2.316)

for all l = 1, ..., k. Then, equation (2.313) reduces to

dz/dτ = ρ_0 F^{(l)}(ρ_0^{−1} z) + ρ_0 ϵ F̃^{(l)}(z),

where

sup_{z∈B^M} |F̃^{(l)}(z)| < 1,   sup_{z∈B^M} |∇F̃^{(l)}(z)| < 1.

We set z_i = ρ_0 p_i. This gives

dp/dτ = F^{(l)}(p) + ϵ F̃^{(l)}(p).   (2.317)

Let us notice that if ϵ is small enough, then for each index l, due to assumption (2.314), all trajectories p(t, p(0)) such that p(0) ∈ B^M stay in the ball B^M for all t > 0. Here, p(0) denotes the initial point of the trajectory. Consequently, approximations (2.315) give vector fields that, by (2.317), realize different prescribed dynamics at different x_l. Therefore, we can formulate the following:

Theorem 2.38 (On programming a multicellular organism). Suppose x ∈ [0, L] ⊂ R and m(x) is a strictly monotone smooth function. Assume that 0 < x_1 < x_2 < ... < x_k < L and that F^{(l)}(p), l = 1, 2, ..., k, is a family of C¹-smooth vector fields defined on a unit ball B^M ⊂ R^M. We assume that each field defines a dynamical system, i.e. the F^{(l)} are directed inwards on the boundary ∂B^M. Then, for each δ > 0, there is a choice of parameters P such that for the shorted dynamics (2.288)–(2.289), one has

u = U(x, v) + ũ,

where

|ũ| < C exp(−βτ) + cκ².

For x = x_l and for sufficiently large times, the dynamics of v(x_l, t) can be reduced to the form

dp_i/dτ = F̄_i(x_l, p),   (2.318)

where

sup_{p∈B^M} |F̄(x_l, p) − F^{(l)}(p)| < δ.   (2.319)

Here, p_i(τ) can be expressed linearly via v_i(x_l, τ) by

v_i(x_l, τ) − b̄_i m(x_l) = ρ_0 p_i(τ).

This result can be considered as a mathematical formalization of the positional information ideas [8, 326]. It describes a translation of positional information into complicated cell dynamics, where different cells have different dynamics. A synchronization of these dynamics is considered in Section 4.8.
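A one-dimensional toy version of this argument can be simulated: one network nonlinearity Q, read through two different morphogen values μ_1, μ_2, realizes two different prescribed one-dimensional dynamics on shifted windows. Everything here (the targets, window sizes, and fitting by least squares over random sigmoidal units) is an illustrative assumption.

```python
import numpy as np

def sigma(h):
    return 1.0 / (1.0 + np.exp(-h))

mu1, mu2, rho0 = 0.0, 2.0, 0.5
F1 = lambda p: -(p - 0.6)          # cell 1 target: stable point p = 0.6
F2 = lambda p: -(p + 0.6)          # cell 2 target: stable point p = -0.6

# Fit Q(w) = sum_k c_k sigma(a_k w - h_k) so that on the two disjoint
# windows w = z + mu_l, |z| < rho0, it matches rho0*Fl(z/rho0) + z,
# mimicking the approximation (2.315).
rng = np.random.default_rng(7)
K = 120
a = rng.normal(scale=4.0, size=K)
h = rng.normal(scale=6.0, size=K)  # thresholds spread over both windows
z = np.linspace(-rho0, rho0, 200)
w = np.concatenate([z + mu1, z + mu2])
target = np.concatenate([rho0 * F1(z / rho0) + z, rho0 * F2(z / rho0) + z])
Phi = sigma(np.outer(w, a) - h)
c, *_ = np.linalg.lstsq(Phi, target, rcond=None)

Q = lambda w: sigma(np.atleast_1d(w)[:, None] * a - h) @ c

def settle(mu):
    """Integrate dz/dtau = Q(z + mu) - z, cf. (2.313), return p = z/rho0."""
    zz, dt = 0.0, 0.01
    for _ in range(4000):
        zz = zz + dt * (Q(zz + mu)[0] - zz)
    return zz / rho0

p1, p2 = settle(mu1), settle(mu2)
assert abs(p1 - 0.6) < 0.1 and abs(p2 + 0.6) < 0.1
```

The same network function Q thus stores two different "programs", selected purely by the local morphogen value.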

2.9.5 Bifurcations of centralized network dynamics

It is well known [10] that removing a strongly connected center can sharply change the network attractor. In this section, we show that one can obtain transitions between different structurally stable attractors by a single mutation in a specially chosen weakly connected node (gene). Such a mutation may be a gene duplication or elimination. Therefore, such a node (gene) can serve as a switch between two kinds of network dynamical behavior.
To formulate these ideas more precisely, let us consider a system of ordinary differential equations

dp/dt = F(p, s),   p ∈ B^n ⊂ R^n,   (2.320)

depending on a real parameter s. Here, p = (p_1, p_2, ..., p_n), B^n is the unit ball centered at p = 0, and F is a C¹-smooth vector field directed inside the ball at the ball boundary for each s (cf. (2.314)). Suppose that two values s_0 ≠ s_1 exist such that (2.320) has different attractors A_0 and A_1 for s = s_0 and s = s_1, respectively.


Consider, for simplicity, the gene circuit model (2.282), (2.283) without diffusion and space variables, that is,

du_i/dt = r̃_i σ(A_i v − h̃_i) − λ̃_i u_i,  (2.321)
dv_j/dt = r_j σ(C_j u − h_j) − λ_j v_j,  (2.322)

where i = 1, ..., M + 1 and j = 1, ..., N. The parameters P of this system are M, N, h_j, h̃_i, λ̃_i, λ_j, r̃_i, r_j and the matrices A, C. We can assume, without loss of generality, that we remove the (M + 1)-th satellite node, i = M + 1. As a result of this elimination, we obtain a similar system with i = 1, ..., M and shortened matrices A, C. The following assertion means that bifurcations of arbitrary complexity can appear as a result of a single gene mutation, where this gene is a weakly connected satellite.

Theorem 2.39. For each ϵ > 0, there is a choice of the parameters P such that system (2.321), (2.322) with M satellite nodes ϵ-realizes (2.320) with s = s_0, and system (2.321), (2.322) with M + 1 satellite nodes ϵ-realizes (2.320) with s = s_1.

To prove this theorem, we use the following extended system

dp/dt = ρ F(p, s),  p ∈ B^n,  (2.323)
ds/dt = f(s, β) − νs,  s ∈ R,  (2.324)

where ν > 1, f(s, β) is a smooth function, and β, ρ > 0 are parameters. Then, equilibrium points s_eq of (2.324) are solutions of

f(s, β) = νs.  (2.325)

The point s_eq(β) is a local attractor if f_s(s_eq) < ν. If ρ > 0 is small enough and s_eq is a single stable rest point, the fast variable s approaches s_eq(β), and for large times t the dynamics of system (2.323), (2.324) is defined by the reduced equation

dp/dt = ρ F(p, s_eq(β)).  (2.326)

Let us set

f(s, β) = 2β² σ(b(s − h_0)),  (2.327)

where b is a large parameter. Then f is close to a step function with the step 2β². Then, for s_eq(β), one has the asymptotics s_eq = 2β² ν⁻¹ + O(exp(−b)) as b → ∞. Thus, we can adjust parameters β_0 and b > 0 such that (2.325) has a single stable root s_0 for β = β_0, and the equation

f(s, β_0/2) = νs  (2.328)

also has a single root s_1 ≠ s_0.

Dynamics (2.323), (2.324) with f from (2.327) can be realized by a network (2.321), (2.322). To this end, we decompose all satellites u_i into two subsets. The first set contains the satellites u_1, u_2, ..., u_{M−1}; the second one consists of the satellites u_M, u_{M+1} (to single out these variables, let us denote u_M = y_1, u_{M+1} = y_2). The main idea of this decomposition is as follows. We can linearize the equations for the centers v_j, assuming that the matrix C is small and B = 0. The y-satellites realize dynamics (2.324) via a center s:

ds/dt = −νs + β(y_1 + y_2),  (2.329)
dy_k/dt = −y_k + β σ(b(s − h)),  k = 1, 2.  (2.330)

Here, we assume that ν, β are small enough, and therefore, for large times, system (2.329), (2.330) reduces to (2.324) with f defined by (2.327). We see that the dynamics bifurcates to (2.324) with f = β² σ(b(s − h_0)) if we remove y_2 from the right-hand side of (2.329). The remaining equations, after a notation modification and linearization, take the following form:

du_i/dt = r̃_i σ(A_i v + D_i s − h̃_i) − λ̃_i u_i,  i = 1, ..., M − 1,  (2.331)
dv_j/dt = −λ_j v_j + r_j (C_j u − h_j),  j = 1, ..., N.  (2.332)

Equations (2.331), (2.332) can ϵ-realize arbitrary systems (2.320) with the parameter s. This can be shown as above, and it completes the proof.
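The switch mechanism used in this proof is easy to check numerically. The sketch below (Python; the logistic sigmoid and the parameter values ν = 2, β = 1, b = 50, h = −0.2 are our illustrative choices, picked so that the analogue of (2.325) has a single stable root) relaxes the center–satellite subsystem (2.329), (2.330) to its rest point, once with both y-satellites present and once with y_2 removed; eliminating the satellite halves the step height 2β² and therefore roughly halves the equilibrium value of s.

```python
import math

def sigma(z):
    return 1.0 / (1.0 + math.exp(-z))

def relax(n_sat, beta=1.0, nu=2.0, b=50.0, h=-0.2, dt=0.01, T=200.0):
    """Relax the center-satellite system (2.329)-(2.330) to its rest point.
    n_sat is the number of y-satellites kept (2 = intact, 1 = after removal)."""
    s = 0.0
    y = [0.0] * n_sat
    for _ in range(int(T / dt)):
        ds = -nu * s + beta * sum(y)
        dy = [-y[k] + beta * sigma(b * (s - h)) for k in range(n_sat)]
        s += dt * ds
        y = [y[k] + dt * dy[k] for k in range(n_sat)]
    return s

s_two = relax(2)  # both y-satellites present
s_one = relax(1)  # satellite y2 removed: the step height of f halves
print(s_two, s_one)
```

With these values s_eq drops from about 2β²/ν = 1 to about β²/ν = 0.5, i.e. a single weakly connected node acts as a switch for the slow parameter that controls (2.323).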

2.10 Computational power of neural networks and graph growth

2.10.1 Realization of Turing machines by neural networks

We first follow the beautiful paper [30] in order to recall the main definitions and facts about Turing machines. A Turing machine is a deterministic model of computation. A given Turing machine M has a finite set Q of internal states and operates on a doubly infinite tape over some finite alphabet ABC. The tape consists of squares indexed by the integers. At each step, the Turing machine scans the square indexed by 0. Depending on its internal state and the scanned symbol, the machine can replace the scanned symbol with a new symbol, focus attention on an adjacent square (by shifting the tape by one unit), and transfer to a new state. The instructions for the Turing machine are quintuples (q_i, s_j, s_k, D, q_l), where q_i and s_j are the present state and scanned symbol, respectively, s_k is the symbol to be printed in place of s_j, D is the direction of


motion (left-shift, right-shift, or no-shift of the tape), and q_l is a new internal state. If the Turing machine enters a state-symbol pair without a corresponding quintuple, the machine halts. We assume that ABC = {0, 1, ..., n − 1}, that the set of internal states is Q = {0, 1, ..., m − 1}, where n, m ∈ N, and that the Turing machine halts if and only if the internal state q is equal to 0, i.e. q = 0 is the halting state. The tape contents are defined by two infinite words w_1, w_2 ∈ ABC*, where * stands for the set of infinite words over the alphabet ABC: w_1 consists of the scanned symbol and the symbols to its right; w_2 consists of the symbols to the left of the scanned symbol, excluding the latter. The pair (w_1, w_2) and an internal state q ∈ Q form the configuration of the Turing machine. If a quintuple is applied to a configuration, the result is another configuration, a successor of the original. We thus have a successor function s⁺ : C → C, where C is the set of all configurations (the configuration space). It is impossible to decide whether a given Turing machine halts for every initial configuration.

A function f : R^N → R^M is piecewise affine if R^N can be represented as the union of a finite number of subsets X_i, where each set X_i is defined by the intersection of finitely many open or closed halfspaces of R^N, and the restriction of f to each X_i is affine. The following result belongs to [30].

Theorem 2.40. Let M be a Turing machine and let C = Σ × Σ × Q be its configuration space. There exist a piecewise affine map g_M : R² → R² and an encoding function ν : C → [0, 1]² such that g_M(ν(c)) = ν(c′) for all configurations c, c′ ∈ C with c′ = s⁺(c).

This theorem means that the action of a Turing machine M can be simulated by a piecewise affine map g_M of the plane. Below, we apply this result for an estimation of the computational power of randomly growing networks. Let us make some important remarks. This result is an assertion about machines with infinite memory and without any restrictions on running time. In real situations, running time and memory are restricted. For infinite resources, we can simulate the Turing machines by maps acting in the plane R²; however, it is unknown whether a simulation by maps R → R exists.

2.10.2 Emergence of Turing machines by networks of a random structure

Experimental data show that real neural networks grow in a random, diffuse way, and rewiring leads to the formation of effectively working structures. We are going to estimate the computational power of such networks. The main idea is to estimate how many Turing machines and finite automata can be simulated by such networks. We obtain such an estimate for large networks with N nodes.

To obtain this estimate, we use Theorem 2.40 and the constructions of subsections 2.3.2 and 2.3.3. All the Turing machines can be simulated by piecewise affine maps of the plane; in turn, such maps can be simulated by sequences of networks with bicliques (network sequences that have the 2H property). Therefore, one can expect for large N that the number of different Turing machines that can be obtained by a rewiring is proportional to the number H_2(N, d) of (2, d) bicliques with a large d.

(a) Realization of Turing machines by discrete time circuits (2.35) with saturated linear sigmoid

First, we consider networks where the sigmoidal function σ(z) is analogous to the saturated linear function defined by (2.37). If a sequence of growing networks has the 2H-property, all Turing machines can be simulated, in a sense, by networks from this infinite sequence, by appropriate choices of N, W_ij, h_i. This follows from the results of [30]. In fact, all Turing machines can be simulated by dynamics (2.35) with a piecewise linear σ = σ_PL and only N = 2 neurons [30]. Denote by q_1(t), q_2(t) the center states in a biclique, and consider all maps F_PL of the form

q_1(t + 1) = σ_PL(W_11 q_1(t) + W_12 q_2(t) − h_1),
q_2(t + 1) = σ_PL(W_21 q_1(t) + W_22 q_2(t) − h_2).

Such dynamics is defined on the square [0, 1]², which is a compact set. The simulation can be done by an encoding function and (2.35), as follows from [30]. Let p_1 be the content of the left part of the tape, and p_2 the content of the right part. The symbols p_i = a_0^i a_1^i ... a_n^i ... can be encoded as a real number by the relation [30]

x_i = Σ_{j=0}^{+∞} 2a_j^i / (2n)^{j+1}.  (2.333)

Here, n is the state number. Let us consider the machine configuration c = (p_1, p_2, q), where q is a machine state. Then, the corresponding point of [0, 1]² is X(c) = (x_1(p_1)/m + q/m, x_2(p_2)). The simulation property can be expressed mathematically as follows. The encoding map and the neural dynamics X → g(X) are connected by

g_PL(X(c)) = X(c′),  (2.334)

where c′ is the configuration that follows after c, i.e. c′ = successor(c). This construction shows that all possible computations (which eventually halt) can be made (asymptotically, as t → +∞) by networks from a sequence of growing networks with the 2H-property. The graph (V_N, E_N) can contain a number of different bicliques. If the sets S_d, S_n and S̃_d, S̃_n of two different bicliques are disjoint, the network can generate two different, completely independent dynamics. If there are p mutually disjoint (2, n) bicliques in a network graph, such a network can simulate p different Turing machines simultaneously. Here, we show that graph sequences with the nH property can emerge when a network grows by the preferential attachment rule [7]. This shows that feedback loops, which can produce complicated behavior, can appear in a natural way.
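The encoding (2.333) can be illustrated directly. The following sketch (Python; the truncation to a finite word prefix and all names are ours, and the shift map x ↦ 2n·x − 2a_0 is our reading of why the successor function becomes piecewise affine) encodes a tape word and checks that deleting the leading symbol is an affine operation on the code:

```python
def encode_word(symbols, n):
    """Truncated version of the encoding (2.333): the word a_0 a_1 a_2 ...
    over the alphabet {0, ..., n-1} becomes x = sum_j 2*a_j / (2n)^(j+1)."""
    x = 0.0
    for j, a in enumerate(symbols):
        assert 0 <= a < n
        x += 2.0 * a / (2 * n) ** (j + 1)
    return x

n = 2
word = [1, 0, 1]
x = encode_word(word, n)

# Deleting the leading symbol a_0 is the affine map x -> 2n*x - 2*a_0;
# on each "digit cell" the machine step therefore acts affinely.
x_tail = 2 * n * x - 2 * word[0]
print(x, x_tail, encode_word(word[1:], n))
```

Because only even numerators occur, distinct words land in well-separated Cantor-like cells of [0, 1), so the leading symbol can be read off by thresholds, which is exactly what a piecewise affine (or saturated-linear neural) map can do.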


Let us consider a small graph consisting of N0 nodes. Following [7], at each growth step, we add a new node with m ≥ 1 new connections. The probability that this new node is connected with a j-th node is proportional to the connectivity (degree) C(j) of this j node. This procedure gives us scale-free networks [7]. The case n = 2 is most important for us since (2, d)-bicliques can be used for the Turing machine simulation. Therefore, the number of such bicliques can be associated with the network comput­ ing power. On the other hand, it is clear that the number of (3, d) bicliques is much less than the number of (2, d) bicliques. The following results have been obtained. Let us denote K (N, d) as the averaged number of (2, d) bicliques. The averaged maximal value of d in such bicliques are de­ noted by τ(N ). Simulations show that for large N, the numbers K (N, d) are decreasing in d, and that τ(N ) is proportional to some small degree of N: τ(N ) ≈ N α , where it seems that α ∈ (1/4, 1/3). In the first series of simulations, the parameters m, N0 were m = 3, N0 = 10 and the small initial graph was constructed by random choice: W (i, j) = 1 with the probability p = 0.1, otherwise W ij = 0. Results for (2, d) bicliques are as follows. For N = 600, one has 21 bicliques with d = 4, 13 with d = 5, 7 with d = 6, with d = 7, 2 with d = 9. For N = 1000, one has 25 bicliques with d = 4, 6 with d = 5, 3 with d = 6, 2 with d = 7. For N = 3000, one has 23 bicliques with d = 4, 12 with d = 5, 3 with d = 6, 2 with d = 7 and 1 with d = 9. In the second simulation, the initial graph was taken as a star with a center (1-th node) and 10 satellite nodes. As a result of the preferential attachment growth, we obtain a network where 1-th node has a higher degree (for N = 1000 of order 80). It is interesting that in this case, τ(N ) is larger. Namely, for N = 300, one has 17 bicliques with d = 4, 12 with d = 5, 6 with d = 6, 5 with d = 7, 3 with d = 8, 2 with d = 9 and 1 with d = 10. 
For N = 1000, one has 30 bicliques with d = 4, 18 with d = 5, 11 with d = 6, 8 with d = 7 and so on. The maximal d is 11 and we obtained two bicliques with d = 11. These bicliques are independent or almost independent. For example, the inter­ section of the sets S d for two different bicliques is, typically, either the empty set, or a set with 1 − 3 nodes. This is important since this means that the intersection weakly affects dynamics associated with these bicliques which can exhibit asynchronous dy­ namics. So, the networks can generate many independent dynamics and Turing ma­ chines. (b) Realization of Turing machines by discrete time circuits (2.35) with a smooth sigmoid Simulations by smooth sigmoidal functions is a more difficult problem. We can construct approximative simulations, or simulations with a fading memory, as fol­ lows. Each map gPL , constructed above, can be ϵ-realized by a sequence of systems (2.35) with smooth σ and with 2H-property. If we denote the neural map by g, we then obtain that one can simulate any Turing machine with any accuracy ϵ > 0: gPL (X (c)) = X (c ) + ϵ.

(2.335)

Notice that for small ϵ > 0, the tape contents corresponding to X(c′) and X(c′) + ϵ differ only by symbols located far away from the square on the tape indexed by 0 (scanned by the machine).

Remark. Notice that ϵ depends on N (the number of the network nodes): ϵ = ϵ(N) and ϵ(N) → 0 as N → +∞. This shows that a randomly growing network can perform all computations "in the limit" N → +∞.

(c) Realization of Turing machines by continuous time circuits (2.34)

A good overview of different results on computations by ordinary differential equations can be found in the paper [92] (also see references therein). We propose another method, which allows one to find approximate simulations. It is based on a reduction of the continuous time circuits to discrete time ones. The main idea is to use a "clock." Let

X(t + 1) = g(X(t))  (2.336)

be an arbitrary dynamical map, where X = (x_1, x_2, ..., x_n) ∈ [0, 1]^n and g is a sufficiently smooth function. Let us consider the system

dx_i/dt = g_i(X) ρ_κ(t),  i = 1, ..., n,  (2.337)

where ρ_κ(t) is a smooth function depending on a small positive parameter κ such that

ρ_κ(t) → Σ_{j=1}^{+∞} δ(t − j)  as κ → 0  (2.338)

in the distribution theory sense (here, δ(t) is the Dirac δ-function). For small κ, dynamics (2.337) reduces to the map (2.336).
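A minimal numerical check of the clock idea: between pulses the state is frozen, and each unit-mass pulse of ρ_κ advances the state by one unit of the "internal" time s = ∫ρ_κ dt of the field g. The sketch below (Python; Euler integration and a Gaussian pulse of width κ = 0.02 are our choices of method and of ρ_κ, and g(x) = −x is a toy field whose unit-internal-time advance is x ↦ x e⁻¹) illustrates this concentration of the continuous dynamics into discrete kicks:

```python
import math

def rho(t, kappa=0.02, t0=1.0):
    # Smooth unit-mass pulse approximating the Dirac delta at t0 (our choice of rho_kappa)
    return math.exp(-((t - t0) ** 2) / (2.0 * kappa ** 2)) / (kappa * math.sqrt(2.0 * math.pi))

def g(x):
    # Toy linear field; one unit of internal time s = int rho dt maps x -> x * exp(-1)
    return -x

def integrate(x0, T=2.0, dt=1e-4):
    x, t = x0, 0.0
    while t < T:
        x += g(x) * rho(t) * dt  # Euler step of dx/dt = g(x) * rho_kappa(t), cf. (2.337)
        t += dt
    return x

x_final = integrate(0.8)
print(x_final, 0.8 * math.exp(-1.0))
```

Outside the pulse ρ_κ is negligibly small, so the trajectory is a staircase in t: flat plateaus separated by fast jumps at the clock ticks t = 1, 2, ...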

2.11 Appendix

2.11.1 Proof of Proposition 2.16

The following assertion is a consequence of Theorem 18, Sect. 7, Chapter 4 in [161].

Proposition 2.41. Let λ_k, k = 1, 2, ..., be an infinite sequence of real numbers. Assume that

0 < R < lim sup_{k→∞} k / (e |λ_k|).  (2.339)

Then, the set of linear combinations of the functions exp(λ_k x) is dense in C¹([−R, R]).


Now, let us choose a matrix D_ji, j = 1, 2, ..., i = 1, ..., n, such that for each i the sequence D_{j,i}, j = 1, 2, ..., is infinite, subject to (2.101), and satisfies the relation

lim sup_{j→∞} j / |D_{j,i}| = ∞.

Then, according to Prop. 2.41, the sums (2.90) are dense in C¹(B^n). This proves Prop. 2.16.

2.11.2 Proof of Proposition 2.15

Let the matrices A, B and the number N be fixed in (2.87), and let q* = q*(C^(0)) be a hyperbolic rest point of (2.87) for a certain C^(0). Then, there is a hyperbolic rest point of system (2.87) for every C subject to |C − C^(0)| ≤ δ with some positive δ. The image

X = { x = (x_1, ..., x_N) : x_i = C_i exp( −Σ_{j=1}^n A_ij q*_j(C) ) }

is an invariant set for system (2.84). Let us estimate its dimension. We have

∂x_i/∂C_k = ( δ_ik − C_i Σ_{j=1}^n A_ij ∂q*_j(C)/∂C_k ) h_i,

where h_i = exp( −Σ_{j=1}^n A_ij q*_j(C) ). The dimension of the set of vectors ξ = (ξ_1, ..., ξ_N) satisfying

Σ_{k=1}^N (∂q*_j(C)/∂C_k) ξ_k = 0,  j = 1, ..., n,

is ≥ N − n, and for such ξ,

Σ_{k=1}^N (∂x_i/∂C_k) ξ_k = h_i ξ_i  for each i = 1, ..., N.

Therefore, the dimension of the set of η = (η_1, ..., η_N) satisfying

Σ_{k=1}^N (∂x_i/∂C_k) η_k = 0  for each i = 1, ..., N

is ≤ n. Thus, the matrix ( ∂x_i/∂C_k )_{i,k=1}^N has at least N − n nonzero eigenvalues, and consequently dim X ≥ N − n, completing the proof.

2.11.3 A proof of Lemma 2.9

Let us consider the vector fields

Ψ_i(q, A, B, m, θ) = Σ_{p=1}^m B_ip σ( Σ_{j=1}^n A_pj q_j + θ_p ),  i = 1, 2, ..., n,  (2.340)

depending on the parameters P = (m, A, B, θ).

Step 1. The case n = 1. In this case, q ∈ R and we can omit some indices in the notation. Relation (2.340) takes the form

Ψ(q, A, B, m, θ) = Σ_{p=1}^m B_p σ(A_p q + θ_p).  (2.341)

We apply a method based on wavelet theory. Notice that this method is numerically effective and can be realized as an algorithm. Let us set θ_p = −A_p h_p and introduce the function

ψ(q) = σ′(q).  (2.342)

Since σ is a smooth, well-localized function and σ(q) → 0 as q → ±∞, we observe that

∫_{−∞}^{+∞} ψ(q) dq = 0.  (2.343)

Let ψ̂(k) be the Fourier transform of the function ψ. Then, one has

∫_{−∞}^{+∞} |ψ̂(k)| |k|⁻¹ dk = C_ψ < ∞.  (2.344)

Let us introduce the following family of functions indexed by the parameters a, h:

ψ_{a,h}(q) = |a|^{−1/2} ψ(a⁻¹(q − h)).  (2.345)

For any f ∈ L²(R), we define the wavelet coefficients T_f(a, h) of the function f by

T_f(a, h) = ⟨f, ψ_{a,h}⟩ = ∫_{−∞}^{+∞} f(q) ψ_{a,h}(q) dq.  (2.346)

Notice that for smooth f with finite support I_R, one has the asymptotics

|T_f(a, h)| < C sup|f| |a|^{−1/2} exp(−(x_0(a, h) − h) a⁻¹)  (2.347)

for some C > 0, where |x_0(a, h)| < R. In this estimate, the constant C is uniform in a, h.

The aforementioned properties of ψ imply that for any smooth function f with finite support I_R = (−R, R), one has the following fundamental relation:

f = C_ψ ∫_{−∞}^{+∞} ∫_{−∞}^{+∞} a⁻² da dh T_f(a, h) ψ_{a,h} = f_wav.  (2.348)

This equality holds in a weak sense: the left-hand side and the right-hand side define the same linear functionals on L²(R), i.e. for each smooth, well-localized g, one has ⟨f, g⟩ = ⟨f_wav, g⟩.

Let ϵ be a positive number. According to (2.347), we can find points a_1, ..., a_p and h_1, ..., h_p, and a constant C̄, such that the integral in the right-hand side of (2.348) can be approximated by a finite sum:

‖f_wav − f̄_wav‖ < ϵ,  (2.349)

where

f̄_wav = C̄_ψ Σ_{j=1}^p a_j⁻² T_f(a_j, h_j) ψ_{a_j, h_j}.

Let F and F̄_wav be antiderivatives of f and f̄_wav:

F = ∫_0^q f(x) dx,   F̄_wav = ∫_0^q f̄_wav(x) dx.

Let g = F − F̄_wav. Notice that

|g(q)| = | ∫_{−∞}^q g′(x) dx |.

By the Cauchy–Schwarz inequality, we use the following elementary estimate:

| ∫_0^q g′(x) dx | ≤ c √|q| ‖g′‖ ≤ c R^{1/2} ‖f_wav − f̄_wav‖.  (2.350)

Now, estimate (2.349) gives

|g(q)| ≤ c R^{1/2} ϵ.  (2.351)

This inequality proves the one-dimensional version of Lemma 2.9.

Step 2. The case n > 1. We use the estimates obtained for the one-dimensional case. We approximate the vector function F ∈ C¹(B^n) in norm by a C^∞ vector function periodic in q_1, ..., q_n. We extend this function to all of R^n so that the extended F ∈ C^∞(R^n) has support inside the ball B^n(R) of radius R centered at 0. Therefore, without loss of generality, one can assume that F is defined by

Q(q) = ∫_{R^n} Q̂(k) exp(i(k, q)) dⁿk,

where (k, q) = k_1 q_1 + k_2 q_2 + ... + k_n q_n. The last integral can be represented as follows:

Q(q) = ∫_{S^n} Ṽ_e((e, q)) dS,  (2.352)

where

Ṽ_e(z) = ∫_0^{+∞} Q̂(|k|, e) exp(i|k|z) d|k|,

S^n is the unit sphere in R^n, |e| = 1, and dS is the standard Lebesgue measure on the sphere. For each finite subset K of the sphere S^n, let us introduce the function

W_K(q) = Σ_{e∈K} Ṽ_e((e, q)).

Then, we can approximate the integral in (2.352) by this finite sum:

|Q(q) − W_K(q)| < ϵ_1,  (2.353)

where K is an appropriate finite subset of S^n. Relation (2.353) shows that the field Q can be represented as a sum of terms Ṽ_e(z_e) depending only on the variables z_e = (e, q) ∈ R. Since |e| ≤ 1 and |q| ≤ 1, one has |z_e| ≤ 1. Now, each term Ṽ_e(x), where x ∈ R, can be approximated by the algorithm stated above. This procedure gives us the needed approximation.

2.11.4 Algorithm of neural dynamics control

Let us consider the following problem: given a positive ϵ and a system of differential equations

dq/dt = Q(q),  (2.354)

where the vector field Q is C¹-smooth on the unit ball B^n and defines a structurally stable global semiflow, find a network (2.35) whose global attractor is ϵ-close to the attractor of (2.354).


For bounded dimensions n, the algorithm is based on the approximations of the previous section and the construction of the centralized network. For large dimensions n, the algorithm needs a modification, since in this case we face the curse of dimensionality (R. Bellman). Indeed, the set K from (2.353) contains approximately > exp(c_0 n) elements, where c_0 > 0. To overcome this difficulty, we can use the Monte Carlo method: we choose K randomly. This procedure gives an approximation of Q within accuracy O(M^{−1/2}), where M = card K is the number of elements in K. However, this approximation holds only with a probability p(M) such that p(M) → 1 as M → ∞.
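The Monte Carlo idea can be sketched as follows (Python; the integrand e ↦ e_1², whose exact spherical average is 1/n, is our toy stand-in for the terms Ṽ_e): averaging over M random directions on the sphere gives an O(M^{−1/2}) error regardless of the dimension n, which is what defeats the curse of dimensionality.

```python
import math
import random

random.seed(0)

def random_unit_vector(n):
    # Normalized Gaussian vector: uniformly distributed direction on the sphere
    v = [random.gauss(0.0, 1.0) for _ in range(n)]
    r = math.sqrt(sum(x * x for x in v))
    return [x / r for x in v]

def mc_estimate(f, n, M):
    # Monte Carlo average of f over the unit sphere with M random samples
    return sum(f(random_unit_vector(n)) for _ in range(M)) / M

n = 20
exact = 1.0 / n  # spherical average of e_1^2 by symmetry
err_small = abs(mc_estimate(lambda e: e[0] ** 2, n, 100) - exact)
err_large = abs(mc_estimate(lambda e: e[0] ** 2, n, 10000) - exact)
print(err_small, err_large)
```

A deterministic grid on S^n would need a number of points growing exponentially in n to reach the same accuracy; the random choice trades that for an error bound holding only with probability p(M) → 1.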

2.12 Summary

(A) The Hopfield neural networks have been investigated. It is shown that the corresponding dissipative semiflows are, in a sense, maximally complex, i.e. depending on the network parameters (thresholds, synaptic weights and the neuron number), they can realize all vector fields. This means that these semiflows are capable of generating (up to an orbital topological equivalency) all structurally stable dynamics, including chaotic, periodic ones etc., for example, all Anosov flows and Smale axiom A systems, Smale horseshoes and all hyperbolic dynamics. Moreover, there is an explicit algorithm to construct networks with a prescribed dynamics. The algorithm is based on the fundamental theorem of neural network theory which asserts that multilayered perceptrons can serve as universal approximators (Theorem on Universal Approximation). This algorithm exploits the centralized topology of the network (empire structure) and two ideas: "divide and conquer" and "Emperor pulls springs."

(B) The Lotka–Volterra systems have been considered. A special algebraic trick allows one to prove that the so-called Lotka–Volterra systems with n resources can generate all structurally stable dynamics defined by vector fields on n-dimensional balls. As a parameter, we use the matrix that determines species interaction. This theorem essentially extends previous results on chaos existence obtained for some particular examples of the Lotka–Volterra systems with 3 or 4 species. However, there is a difference between the Lotka–Volterra and Hopfield systems. For the Hopfield case, hyperbolic chaos can be realized on stable (locally attracting or even globally attracting) invariant manifolds. Therefore, the chaos may be stable. The dimension of this manifold is much less than the neuron number. For the Lotka–Volterra systems, the hyperbolic dynamics is realized on unstable invariant manifolds and therefore it is always unstable.
The global attractor of the Lotka–Volterra systems can contain many invariant sets and have a high dimension (of the same order as the species number). (C) The standard ecological model (SM) is considered. This model describes species sharing the same resources. The SM also exhibits all kinds of structurally stable dy­ namics and can be reduced to the Lotka–Volterra system.

(D) Algorithms for the control of dynamics in large metabolic networks are presented.

(E) It is shown that genetic networks can produce any spatiotemporal patterns. It is important to note that these patterns can be generated step-by-step. This means that a new pattern refines the previous one: a new pattern can be obtained from the previous one by a superposition procedure. Patterns can be controlled by morphogen gradients which determine positional information. Only three morphogen gradients suffice to create a complicated spatiotemporal pattern (if the gene number is large enough). Morphogens can create a "multicellular organism" consisting of cells, where each cell has its own behavior program (attractor). In evolution, mutations, in particular gene duplications, play an important role. We show that gene duplications can trigger attractors of genetic networks in a very complicated manner: we can obtain a bifurcation between two complex attractors. In centralized networks, it is sufficient to have a few simple morphogen gradients to induce, by a complicated gene interaction, a complex multicellular structure. In this multicellular pattern, each cell can function as prescribed by some attractor. These attractors can be different for different cells. We show, therefore, that centralized networks can be used to implement the Driesch–Wolpert positional information paradigm [325] in order to organize a multicellular organism. This organism consists of a number of specialized cells, each cell type being dynamically characterized by distinct attractors. These attractors can be programmed by morphogen gradients. Transitions between attractors can be performed by acting on key nodes of the network. Contrary to previous theories of random networks [10, 86, 136], these key nodes do not have to be hubs.

(F) Randomly growing networks possess a great computational power.
They can generate finite automata and all Turing machines.

3 Complex patterns and attractors for reaction-diffusion systems

This chapter considers pattern formation and complexity for models of biology, chemistry and physics defined by reaction-diffusion systems. In Section 3.1, we study systems having a variational nature. They often appear in applications, in particular in the phase transition theory. These systems generate gradient semiflows. We describe a physically transparent procedure producing asymptotic representations for equilibrium patterns. This method can be considered as a dissipative variant of the Whitham principle, which is well known in nonlinear wave theory [320]. Moreover, we state the quasiequilibrium approach proposed by A.N. Gorban et al. [88], which is more general. We describe applications of these approaches to pattern formation in biology using segmentation in Drosophila as an example. Here, we propose a new mechanism of morphogenesis, different from the classical Turing diffusion instability [272]. Furthermore, we consider reaction-diffusion systems with a chaotic large time behavior. In Section 3.2, we investigate small perturbations of gradient and monotone systems. We find new kinds of nonlinear waves and new effects in wave propagation. Beginning with the seminal works of Kolmogorov, Piskunov and Petrovski [146] and Fisher [76], great efforts were focused on the problem of traveling wave existence for reaction-diffusion equations and systems. For large classes of reaction-diffusion equations, one can prove that traveling waves always exist. Moreover, under some assumptions, one can show that all solutions converge, for large times, to a traveling wave or a system of traveling waves [311]. Such a property is connected with the fact that all reaction-diffusion equations generate monotone semiflows. In Section 3.2, we show that for reaction-diffusion systems with 2 and more components, the wave behavior is significantly different.
Namely, even small perturbations of monotone reaction-diffusion systems can produce dramatic effects in nonlinear wave propagation. In Section 3.3, we consider a large time behavior for reaction-diffusion systems. We investigate systems similar to fundamental Ginzburg–Landau systems and phase transition models. We prove that two-component reaction-diffusion systems can ex­ hibit complex behavior and generate all structurally stable attractors. This complexity is generated by cubic nonlinear terms and nonlinear boundary conditions. These re­ sults can have applications to natural computing since they allow to implement Turing machines in physical systems. In Section 3.4 we develop a general approach to chaotic behavior of reaction-dif­ fusion systems with two components. We apply the RVF method, where parameter choice has a transparent physical sense. Namely, we choose spatially inhomogeneous external sources, diffusion and degradation coefficients as parameters. We find a sim­ ple criterion for complicated attractors existence.


3.1 Whitham method for dissipative systems

3.1.1 General ideas

Let us consider equations which have the variational form

δD[u, u_t]/δu_t = −δF[u]/δu,  (3.1)

where D is a positively defined dissipation functional, F is an energy functional, u is an unknown function and δF/δu denotes the variational derivative. Many important equations of physics and chemistry can be presented in form (3.1). For example, reaction-diffusion equations

u_t = ϵ² Δu + f(u, x)  (3.2)

in the domain Ω ⊂ R^m under zero Neumann or Dirichlet conditions on the smooth boundary ∂Ω take the form (3.1) if we set

D = (1/2) ∫_Ω u_t² dᵐx,   F = ∫_Ω [ (ϵ²/2)(∇u)² − Φ(u, x) ] dᵐx,

where Φ(u, x) is defined by Φ_u = f(u, x). Relation (3.1) allows us to better understand the physics behind equation (3.2). Assume that for each u the functional D[u, u_t] is a positively defined quadratic form, homogeneous in u_t, and that our equation is defined in a Hilbert space endowed with the inner product ⟨ , ⟩. Then, according to the Euler theorem, (3.1) implies that

⟨δD[u, u_t]/δu_t, u_t⟩ = 2D[u, u_t] ≥ 0,

and

dF[u(t)]/dt = ⟨δF[u]/δu, u_t⟩ = −2D ≤ 0.

These relations show that the corresponding initial boundary value problems (IBVP) generate gradient semiflows, where F is a Lyapunov function decreasing along trajectories. From a physical point of view, this function can be interpreted as energy.

In many situations, we seek asymptotical solutions u of (3.2) that can describe nontrivial patterns. For example, for small ϵ, these solutions can describe spatially periodic patterns consisting of narrow interfaces. Our goal is to state a general heuristic principle describing the time evolution of slow parameters in an asymptotical formula that defines solutions. Let us assume that we


have found such an asymptotic formula (an approximation) u = U(x, q) for solutions depending on a parameter q(t). The time evolution of u is defined by the time evolution of this unknown parameter q(t); our problem is to find an equation for q(t). There exist numerous standard asymptotical methods to find the evolution of q. We propose a nonrigorous heuristic method having two advantages. First, this method simplifies calculations. Second, this approach usually admits a simple physical interpretation. In some cases, the results can be justified rigorously by theorems on invariant manifolds (Chapter 1, Appendix 3.5 and [43]). Let us introduce the Whitham averaged functionals

D̄(q, q_t) = D[U(x, q), U_t(x, q)],  (3.3)
F̄(q) = F[U(x, q)].  (3.4)

These functions D̄, F̄ are obtained by substituting our asymptotical solution U into D, F, and they depend only on the variables q and p = q_t. For reaction-diffusion equations (3.2), one has

D̄(q, q_t) = (1/2) ∫_Ω (dU(x, q)/dt)² dᵐx,
F̄(q) = ∫_Ω [ (ϵ²/2)(∇U(x, q))² − Φ(U(x, q), x) ] dᵐx.

Let us formulate an assertion which is actually a heuristic principle (a recipe for asymptotic computations).

Proposition 3.1 (Dissipative variant of the Whitham principle). The evolution of the parameter q is given, in a first approximation, by the equation

D̄_p = −F̄_q,  p = q_t.  (3.5)

So, asymptotic evolution equations have a variational form, repeating the original variational structure (3.1). To illustrate this method, in subsection 3.1.3 we shall consider an example (see also [69] and [274] for a rigorous justification for perturbed scalar Ginzburg–Landau equations).

3.1.2 Quasiequilibrium (QE) approximation and entropy

Recently, in the paper [88], a new method was proposed which can be considered as a generalization of the dissipative Whitham principle. For simplicity, let us consider a system of differential equations
$$\frac{dx}{dt} = F(x), \qquad x \in D_n, \tag{3.6}$$
where $D_n$ is a bounded domain in $\mathbb{R}^n$ with a smooth boundary. An entropy is a nondecreasing Lyapunov function S(x) for (3.6) such that the Hessian is nondegenerate:
$$\det\left(\frac{\partial^2 S}{\partial x_i \partial x_j}\right) \neq 0.$$
Then, the relation dS/dt ≥ 0 can be considered as the Second Law of Thermodynamics for (3.6). A slow variable M is a differentiable function of x: M = m(x). The QE approximation defines x as a solution of the following problem:
$$S(x) \to \max \quad \text{subject to} \quad m(x) = M. \tag{3.7}$$

We denote by x*(M) the solution of problem (3.7). The set of QE states x*(M) is the QE manifold. The corresponding value of the entropy S(M) = S(x*(M)) is the QE entropy. The evolution of the slow variables can be found from
$$\frac{dM}{dt} = m(F(x^*(M))) \tag{3.8}$$
and defines the QE dynamics. We have the following:

Theorem 3.2 (Theorem on preservation of entropy production [88]). Let us calculate dS(M)/dt at the point M according to the QE dynamics (3.8), and find dS(x)/dt at the point x*(M) due to the initial equation (3.6). The results always coincide:
$$\frac{dS(M)}{dt} = \frac{dS(x^*(M))}{dt}.$$
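Theorem 3.2 can be checked numerically on a toy example. In the sketch below, the entropy S(x) = −Σ x_i ln x_i, the slow variable m(x) = x₁, and the vector field F are all illustrative assumptions of ours, not taken from the text. With these choices, the QE state is x*(M) = (M, 1/e, 1/e), since −x ln x is maximized at x = 1/e:

```python
import numpy as np

E = np.e

def S(x):
    # toy entropy (illustrative assumption, not from the text)
    return -np.sum(x * np.log(x))

def F(x):
    # an arbitrary smooth vector field, also an assumption
    return np.array([x[1] - 2.0 * x[0], x[0] - x[2], x[2] * (x[1] - x[2])])

def x_star(M):
    # QE state: maximize S subject to m(x) = x_1 = M
    return np.array([M, 1.0 / E, 1.0 / E])

M = 0.3
# dS(M)/dt along the QE dynamics dM/dt = m(F(x*(M)))
dM = 1e-6
S_qe = lambda m_val: S(x_star(m_val))
dSdM = (S_qe(M + dM) - S_qe(M - dM)) / (2.0 * dM)
lhs = dSdM * F(x_star(M))[0]
# dS(x)/dt along the full dynamics (3.6), evaluated at x*(M)
x = x_star(M)
grad_S = -(np.log(x) + 1.0)
rhs = grad_S @ F(x)
print(lhs, rhs)  # the two entropy production rates coincide (Theorem 3.2)
```

The coincidence here reflects the general mechanism: at a QE point, the gradient of S is proportional to the gradient of the constraint, so the components of F transversal to the slow variable do not produce entropy.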

3.1.3 Applications to phase transition theory. Scalar Ginzburg–Landau equation

Let us consider the reaction-diffusion equation
$$u_t = \epsilon^2 u_{xx} + f(u) \tag{3.9}$$
on Ω = [0, 1] for small ϵ. Let us denote by Φ the potential defined by Φ′(u) = −f(u). We assume that Φ is a typical double well potential with two minima u_± such that Φ(u₊) = Φ(u₋) = 0, and a single maximum u₀. We suppose that u₋ < u₊. As an example, we can take
$$f = u - u^3, \qquad \Phi = \frac{(1 - u^2)^2}{4}.$$
Then, u_± = ±1 and u₀ = 0. This case corresponds to the scalar Ginzburg–Landau equation, one of the canonical models in phase transition theory.


Under such assumptions on Φ, there exists a kink solution w of the equation
$$w_{zz} + f(w) = 0, \qquad z \in (-\infty, \infty). \tag{3.10}$$
It is a special solution enjoying the following properties:
(i) w is a smooth monotone function of z;
(ii) lim w(z) = b₊ as z → +∞ and lim w(z) = b₋ as z → −∞, where b₊ = u₊, b₋ = u₋ for increasing w, and b₊ = u₋, b₋ = u₊ for decreasing w;
(iii) the solution is invariant with respect to shifts (translations): if w is a kink solution, then, for each fixed constant q, the function w(z − q) is also a kink solution;
(iv) the solution has exponential asymptotics as z → ±∞:
$$|w(z) - b_+ - C_+ \exp(-a_+ z)| < C_1 \exp(-c_+ z), \qquad z > 0, \tag{3.11}$$
$$|w(z) - b_- - C_- \exp(a_- z)| < C_2 \exp(c_- z), \qquad z < 0, \tag{3.12}$$
where c_± > a_± and the C_i are positive constants.

The kinks have a sigmoidal form, and for f = u − u³, we obtain the function w = tanh(z/√2). The stationary solutions w_ϵ of (3.9) for Ω = (−∞, ∞) can be obtained from w by a space rescaling: w_ϵ(x) = w(ϵ⁻¹(x − q)). These kink solutions have a topological charge s(U) = (b₊ − b₋)/2. We, however, are looking for asymptotic solutions of (3.9) in the case of a bounded interval Ω, and without loss of generality, one can set Ω = [0, 1]. To simplify computations, we assume u₀ = 0, u₊ = −u₋, and that f(u) is an odd function. This holds for the main case f = u − u³. Then, a kink solution centered at x = 0 is an even function of x, and
$$C_\pm = C_0, \qquad a_\pm = a_0.$$

We will consider special asymptotic solutions describing kink chains. To avoid some technical problems connected with boundary layers, we set the periodic boundary conditions u(0, t) = u(1, t), u_x(0, t) = u_x(1, t). Kink chains U(x, q₁, q₂, ..., q_n) = U(x, q), where 0 < q₁ < q₂ < ⋯ < q_n < 1 and
$$\delta q = \min_i |q_i - q_{i+1}| = O(1) \quad (\epsilon \to 0),$$
can be described as follows. We suppose that n is an even positive integer and consider a sequence of points q̄_i such that 0 < q̄₁ < q̄₂ < ⋯ < q̄_n < 1. The periodic boundary conditions formally imply q̄_{n+1} = q̄₁. On the interval I_i = [q̄_i, q̄_{i+1}), the solution has the form
$$U(x, q) = (-1)^i w(\epsilon^{-1}(x - q_i)) = U_i(x, q), \qquad x \in I_i, \tag{3.13}$$
where q_i = (q̄_i + q̄_{i+1})/2.
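The explicit kink introduced above can be verified directly. The short script below (grid size and tolerances are our own choices) checks numerically that w(z) = tanh(z/√2) satisfies w_zz + w − w³ = 0 and that its tail approaches b₊ = 1 at the exponential rate √2, in agreement with property (iv):

```python
import numpy as np

z = np.linspace(-10.0, 10.0, 4001)
h = z[1] - z[0]
w = np.tanh(z / np.sqrt(2.0))

# second derivative by central differences (interior points only)
w_zz = (w[2:] - 2.0 * w[1:-1] + w[:-2]) / h**2
residual = w_zz + w[1:-1] - w[1:-1]**3
print(np.max(np.abs(residual)))      # O(h^2) discretization error, small

# exponential tail: 1 - w(z) ~ 2 exp(-sqrt(2) z) for large z
mask = z > 5.0
tail = 1.0 - w[mask]
rate = -np.polyfit(z[mask], np.log(tail), 1)[0]
print(rate)                          # close to sqrt(2)
```

The fitted decay rate plays the role of the tail parameter a₊ of property (iv) for this particular f.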

Notice that s_i = (−1)^i are kink charges; they alternate, and the total charge of the chain is 0. This solution is continuous since the kinks U_i are even functions of x − q_i, but the first derivative U_x has exponentially small discontinuities at the points x = q̄_i. Nonetheless, the functionals $\bar{D}$, $\bar{F}$ are well-defined since they do not involve higher derivatives. This allows us to make simple calculations.

Let us start with $\bar{F}$. The integral for $\bar{F}$ can be decomposed as follows:
$$\bar{F} = \int_\Omega \left[\frac{\epsilon^2}{2} U_x(x, q)^2 + \Phi(U(x, q))\right] dx = \sum_{i=1}^{n} J_i(q), \tag{3.14}$$
where
$$J_i = \int_{I_i} \left[\frac{\epsilon^2}{2}(U_i(x, q_i)_x)^2 + \Phi(U_i(x, q_i))\right] dx.$$
Let us consider the integrals J_i. For small ϵ, we can compute them using the fact that the derivative of the function w(ϵ⁻¹(x − q_i)) is well-localized at x = q_i and the solution is exponentially close to ±1 outside of this zone of the interval I_i (this follows from (iv)). Then, one obtains
$$J_i = K - J_i^+ - J_i^-,$$
where
$$K = \int_{-\infty}^{+\infty} \left[\frac{\epsilon^2}{2}(U_i(x - q_i)_x)^2 + \Phi(U_i(x - q_i))\right] dx$$
is a constant independent of q_i due to the translation invariance (iii), and
$$J_i^+ = \int_{\bar{q}_{i+1}}^{+\infty} \left[\frac{\epsilon^2}{2} U_{ix}^2 + \Phi(U_i)\right] dx, \qquad J_i^- = \int_{-\infty}^{\bar{q}_i} \left[\frac{\epsilon^2}{2} U_{ix}^2 + \Phi(U_i)\right] dx.$$
Now, we use the asymptotic property (iv) and substitute (3.11) into the integrals that define J_i. We use the fact that at u_±, the potential Φ is almost quadratic:
$$\Phi(v + u_+) = c_0 v^2 + O(v^3) \quad (v \to 0).$$
Finally, we obtain
$$\bar{F} = C_0 \sum_{i=1}^{n} \exp(-a_0 \epsilon^{-1}(q_{i+1} - q_i)), \tag{3.15}$$
where a₀, C₀ are positive constants. This asymptotic formula for the energy admits a simple physical interpretation: the kinks interact through their exponentially small tails.

The calculation of $\bar{D}$ is straightforward and gives (up to exponentially small corrections)
$$\bar{D} = M \sum_{i=1}^{n} \left(\frac{dq_i}{dt}\right)^2, \tag{3.16}$$


where M is the so-called kink mass, given by
$$M = \frac{1}{2}\int_{-\infty}^{+\infty} w_x(x)^2\, dx.$$
Finally, we apply the dissipative Whitham principle 3.1 and obtain the following asymptotic equations:
$$M_1 \frac{dq_i}{dt} = \exp(-a_0 \epsilon^{-1}(q_{i+1} - q_i)) - \exp(-a_0 \epsilon^{-1}(q_i - q_{i-1})), \tag{3.17}$$
where M₁ > 0 is a constant.

A more rigorous derivation of equations (3.17) can be done by the invariant manifold theory, see [43]. The presented statement follows [190]. Let us now discuss some general properties of kink chain evolution. Denote by d_i = q_{i+1} − q_i the distances between the kinks. We obtain the following.

(a) All equilibria are periodic, i.e.
$$d_i = \mathrm{const} = 1/n. \tag{3.18}$$

(b) All the equilibria are unstable. To show this, we can linearize the kink chain equations (3.17) at an equilibrium. Let δq_i be small deviations of q_i. We obtain
$$\frac{d\,\delta q_i}{dt} = K_0(2\delta q_i - \delta q_{i+1} - \delta q_{i-1}) = -K_0(\Delta_L \delta q)_i, \tag{3.19}$$
where K₀ > 0 is a constant and Δ_L denotes the standard finite difference Laplace operator. This shows that the spectrum of −K₀Δ_L lies in the set λ > 0, i.e. our periodic pattern is unstable. A second method to find this instability is a straightforward solution of (3.19) by Fourier series. We can also explain this result by simple physical arguments: consider the energy $\bar{F}$; one can check that this energy decreases when the distance between the two nearest kinks decreases.

(c) The time evolution is exponentially slow for small ϵ. Therefore, a periodic kink chain can be considered as a long-lived (metastable) pattern. The asymptotics hold while min d_i ≫ ϵ.

The kink collisions can also be described by the Whitham principle [89]; however, the corresponding asymptotic relations are more sophisticated, and we omit these details. Qualitatively, a picture of kink collisions can be described as follows [43]. Consider a kink chain of an arbitrary form, where all distances d_i = q_{i+1} − q_i are different. First, the two nearest kinks approach. When the mutual distance becomes O(ϵ), the collision process strongly accelerates. The result of kink collision is a mutual

annihilation of both kinks. Notice that the topological charge s of the kink chain is conserved during this process. Moreover, the kink annihilation is energetically profitable since this process decreases the Lyapunov function. Furthermore, the next two nearest kinks meet, and the process continues up to a final annihilation, when the solution u becomes a constant. One can show that all stable stationary solutions of problem (3.9) with periodic (or zero Neumann) boundary conditions are the constants u₋ or u₊. Thus, all periodic patterns are unstable in such a model involving a single reaction-diffusion equation. How does one stabilize these patterns? There are many different possibilities. For example, we can obtain stable patterns using systems of reaction-diffusion equations. A classical method for the stabilization of periodic patterns is to consider a situation when one reagent diffuses much faster than the others. We will consider this case in Sections 3.3 and 3.4. It will be shown that such systems can exhibit any structurally stable dynamics with complicated spatiotemporal patterns. Another approach will be considered in the coming subsection, where we consider the segmentation process in Drosophila, where the diffusion coefficients of all reagents (proteins) have the same order. Here, stability is a result of a complicated gene regulation.
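The slow chain dynamics and the instability of the periodic equilibrium can be reproduced by integrating (3.17) directly. In the sketch below, M₁ = 1, and the values of a₀/ϵ, n, the initial perturbation, and the time step are our own illustrative choices; a small displacement of one kink makes the closest pair approach with increasing speed, in accordance with (b) and the collision scenario just described:

```python
import numpy as np

a_eps = 30.0                                  # a_0 / epsilon (illustrative value)

def gaps(q):
    # periodic interkink distances d_i on the unit circle
    return np.diff(np.r_[q, q[0] + 1.0])

def rhs(q):
    # equation (3.17) with M_1 = 1: attraction to the nearer neighbor
    d = gaps(q)
    return np.exp(-a_eps * d) - np.exp(-a_eps * np.roll(d, 1))

q = np.array([0.0, 0.25, 0.5, 0.75])          # periodic equilibrium, n = 4
q[1] += 0.03                                  # small perturbation of one kink
d0 = gaps(q).min()
t, dt = 0.0, 0.05
while t < 5000.0 and gaps(q).min() > 0.05:    # stop before the kinks collide
    q = q + dt * rhs(q)                       # explicit Euler step
    t += dt
print(d0, gaps(q).min(), t)                   # the closest pair has approached
```

Note that the sum of the right-hand sides of (3.17) telescopes to zero, so the center of mass of the chain is conserved; only the relative distances evolve, and the smallest gap collapses first.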

3.1.4 Pattern formation in Drosophila

This subsection follows the paper [294]. In the last decades, a great deal of attention has been given to problems of pattern formation and control. Some general patterning principles emerged from theories using normal forms for dynamical systems [98]. In particular, it was understood that the interaction of kinks, vortices, and generally localized modes (LMs) is important in stabilizing complex equilibrium and nonequilibrium structures such as dislocation patterns on grain boundaries [225], block-copolymer phases [300], and flow patterns of shear-banding fluids [223]. Typically, patterning proceeds in two stages. The first stage is the relatively fast growth of LMs, while the second one can be described as a slow motion of interacting modes [43]. Our goal is to apply these ideas to biological systems. There are two traditional approaches to the patterning of biological systems. Diffusion-driven Turing instabilities control patterns in systems containing at least two substances with very different mobility [178, 272]. Thresholding models are based on the existence of preestablished maternal morphogen gradients [325]. The maternal morphogen triggers zygotic gene expression in regions of the embryo where its concentration is larger than a threshold value. Our approach describes a new mechanism for patterning control. This mechanism is compatible with the thresholding hypothesis, which can be used to describe nucleation of the LMs at early patterning stages. However, the proposed interaction between mobile LMs represents a new patterning principle in biology. To illustrate these new concepts, we consider the segmentation of insects such as the
Drosophila (fruit fly), which is the focus of many works, biological, mathematical and physical [4, 122, 123, 150, 169]. First, let us outline the segmentation process. During this process, developmental genes are expressed in localized domains distributed along the anterior-posterior (AP) axis of the embryo. The sizes and positions of the domains evolve in time. In Drosophila, the patterning is influenced by a maternal protein, called Bicoid, whose concentration decreases exponentially from anterior to posterior. Bicoid directs the expression of the gap genes. After rejecting the Turing mechanism, many biologists now think that segmentation of Drosophila is governed by a thresholding mechanism. This hypothesis implies that the variation of position of zygotic gene expression domain borders should closely follow the variations of the maternal gradients. This simple explanation is disproved by recent quantitative studies [121]. Numerous theoretical works proposed hypothetical mobile determinants [4] or additional maternal gradients [122] to explain the paradox. We propose a new approach to these problems using LMs and their interaction. In segmentation processes, proteins produced by active genes are expressed in spatially localized domains. The local modes important for pattern stabilization are the kinks, representing transition regions between a domain expressing a gene and a neighboring domain where the same gene is not expressed. We start from the gene circuit model (GCM) largely used to describe patterning by gene networks [188]. Recently, we have used numerical simulation and the GCM to show that robust patterning in the embryo of Drosophila can be generated by the collective action of a genetic network [169]. Here, we show that the observed robustness of the model can be explained by the interaction of LMs. We compute the kink-kink interactions for the GCM and relate them to genetic interactions. These interactions depend on the interkink distances and stabilize a pattern that respects proportions. This mechanism also offers stability with respect to maternal gradient variations, since this gradient mainly serves as an initial condition for attractor selection, stimulating kink nucleation and growth.

Model To describe segmentation, we use a system of reaction-diffusion equations where m is the number of gene products (proteins). This model is a homogenized version of the space-discrete GCM [226]. We have introduced this model in subsection 2.8.1. The corresponding GCM equations are ⎛ ⎞ m ∂u i ∂2 u i = di + R i σ ⎝ T ij u j + m i μ (x) + h i ⎠ − c i u i , (3.20) ∂t ∂x2 j =1 where u i (x, t) are concentrations of proteins; d i , c i , R i are diffusion coefficients, degradation constants, and protein maximum production rates, respectively; σ is a fixed function of a typical sigmoidal form, smooth and monotonically increasing from 0 to 1; the matrix T describes pair interaction between induced (zygotic) genes;

126 | 3 Complex patterns and attractors for reaction-diffusion systems h i is the maternal morphogen concentration, m i defines the strength of the interac­ tions between morphogen and induced genes, and the parameters h i are thresholds. Solutions of (3.20) contain domains of relatively constant expression, encompassed by kink-antikink pairs representing the domain borders. For each gene i, there are as many kinks as borders of expression domains. To simplify calculations, we assume that σ is a step function. Then, kink positions are solutions of 0=

m

T ij u j + m i μ (x) + h i .

(3.21)

j =1

We show that kinks are mobile and interact. Equations for the slow motion of kink positions can be obtained by using the Whitham principle ([69], also see the previous subsection).

Slow kink motion, one kink Let us first consider a single gene expressing in an external field. In this case, we con­ sider the one reaction-diffusion equation ∂u ∂2 u = d 2 + Rσ (Tu + M (x) + h) − λu. ∂t ∂x

(3.22)

A kink solution of equation (3.21), corresponding to the right border of the expression domain, localized at q, is given by

1 u = Rλ−1 1 − exp(−γ(x − q)) , x > q, (3.23) 2 R (3.24) exp(−γ(q − x)), x < q, u= 2λ  where γ = λ/d is the kink tail parameter. In Drosophila, whose egg size is about 500 nm, kink tail parameters during cycle 14 are about 0.1 nm−1 . Antikink solutions, i.e. left borders of expression domains, are obtained by changing x − q to q − x in (3.23), (3.24). To obtain the slow kink motion, note first that equation (3.21) can be rewritten in a variational form (3.1), where D and F are the dissipation and energy functionals, respectively. The potential F can be obtained from (3.22). For the step function, we get Φ = RM /T + λu 2 /2,

u < −M /T,

and Φ = −Ru + λu 2 /2,

u > −M /T.

To find a moving kink solution u, depending on slow variables q, we substitute the so­ ¯ and F, ¯ de­ lution into our functionals and we get the Whitham averaged functionals D pending on q through U. The equation for q time evolution follows from the Whitham principle 3.1, also see equation (3.5).

3.1 Whitham method for dissipative systems

| 127

We obtain the equation of motion of a kink under the influence of the field M

M ( q) 1 dq =s + (3.25) r dt RT 2λ where r = γ(2λ)−2 is a parameter. The equation of motion also contains the topolog­ ical charge s. This takes two values s = 1 for kinks (right borders), and s = −1 for antikinks (left borders). Given the same field, kinks and antikinks move in opposite directions. In order to pass from single kink to multikink solutions, we apply a local field approximation. From equations (3.20) and (3.21), the total local field acting on a gene i at x is M i (x) = T ij u j (x) + μ (x)m i + h i . (3.26) j = i

The field M i gathers the influence of the maternal morphogen and of other zygotic genes on the gene i. Relation (3.26) is the asymptotic expression for the field produced by gene j (typically the field of a kink-antikink pair). Under the influence of the total local field, kinks of gene i move to the right and antikinks move to the left if the con­ dition M i > −R i T ii /2λ i is satisfied (equation (3.25)). This means that the expression domain of gene i expands. Also, if M i < −R i T ii /2λ i , the expression domain shrinks.

Interacting kinks, alternating cushions A pattern is specified by a sequence of expression domains wherein gene 1 is expressed in [0, q1 ], gene 2 is expressed in [q2 , q3 ], etc. We assume that zygotic genes are all weakly self-activated, T ii R i /λ i = T A > 0, and repress each other, T ij < 0, i = j. We also consider that there are two types of interactions: weakly repressive between genes in adjacent domain T ii R j /λ j = −α 1 T A < 0 and strongly repressive between genes in next-adjacent domains T ii R j /λ j = −α 2 T A < 0, where 0 < α 1  α 2 . This condition appears in the Drosophila gap gene system, where it is called “alternating cushions” (AC) [150]. In other insects that develop in a similar manner, there is limited informa­ tion about genetic interactions. Nevertheless, it is known that in the midge Clogmia, four gap genes that are paralogs of hb, kni, Kr, and gt are expressed in the strongly repressing pairs (hb, Kr) and (kni, gt), forming nonadjacent domains separated by “cushions.” Given that the minimal GCM model satisfying AC contains at least four genes, this fact indicates that this mechanism may be evolutionarily conserved. We obtain the equations of motion for a system of interacting kinks by replacing in (3.25), the field (3.26) produced by the other domains. The various contributions are obtained from (3.23), (3.24). For instance, the field acting on the right border of the domain i has the three contributions M (q2i−1 ) = Smat + S i+1 + S i+2 , where the maternal term Smat is defined by Smat = m i μ (q2i−1 ) + h i ,

(3.27)

128 | 3 Complex patterns and attractors for reaction-diffusion systems the contribution of i + 1-th domain is

1 S i+1 = −α 1 T A 1 − exp(−γ i+1 δ i+1 ) , 2 and the contribution of i + 2-th domain is 1 S i+2 = −α 2 T A exp(−γ i+2 Δ i+1 ). 2 A similar equation can be written for the left border of the domain. We get dq2i−1 ¯i) = v i (α 1 exp(−γ i+1 δ i+1 ) − α 2 exp(−γ i+2 Δ i+1 ) + h dt dq2i ¯ i ). = v i+1 (α 1 exp(−γ i−1 Δ i ) − α 2 exp(−γ i δ i+1 ) − h dt

(3.28) (3.29)

The distances δ i+1 = q2i−1 − q2i , Δ i+1 = q2i+2 − q2i−1 correspond to the overlap of neigh­ boring domains and to the distance between next neighbor domains, respectively. The  mobility v i = 4 λ i d i only depends on the gene in domain i. The terms h¯ i regroup contributions to the field whose variation is small on the scale of the kink “widths” (inverse tail parameter). Various terms in equations (3.28), (3.28) can be interpreted as forces acting on kinks. These forces are of two kinds. The force h¯ i , representing the balance between activation and mutual repression of overlapping domains, can be in­ terpreted by analogy as “pressure” that expands or shrinks the domains. If pressure terms are positive right and left kinks are pushed into directions opposed to the do­ main center, the domains expand. Pressure terms only weakly depend on the kinks po­ sitions, and so their contribution to pattern stability is small. The second category are forces depending exponentially on the distances between kinks. These exponential terms, ensuring the main contribution to stability, will be called by analogy, “springs.” AC architecture introduces two types of springs: between the border of next-adjacent domains that tend to prevent overlap, and between borders of adjacent domains that tend to increase overlap. Without springs, patterns dismantle. Domains with negative pressure shrink and disappear. Domains with positive pressure expand and occupy all available space. An important property of developmental systems is size regulation. For a patterning system, this means that pattern length scales adapt proportionally when the size of the embryo is changed. An interesting alternative explanation of size regulation [178], based on lateral induction and self-inhibition of the patterning substances, uses very mobile components. While such hypotheses may be sufficient in some cases, our model shows that they are not necessary in general. 
Our mechanism explains size regulation in a simplest way. When size varies, the kinks readjust their mutual dis­ tances by “spring” repulsion. Thus, if size increases, then interkink distances increase as well. For periodic patterns with only one interkink distance, this mechanism en­ sures perfect proportionality. The cases with several interkink distances should be analyzed more carefully. For instance, the size increase can lead to uneven increases
of domain widths and of domain overlaps. The size control of a pattern with two in­ terkink distances is studied analytically. Then, we numerically simulate the dynamics of interacting kinks using interkink distances observed in Drosophila.

Uniform pressure terms, two interkink distances

Size regulation is easy to study in the case of uniform pressure terms h̄_i = h. In this case, stationary patterns are periodic with two interkink distances δ_i = δ, Δ_i = Δ, where δ + Δ = L/N and N is the number of domains. Let ρ = δ/(δ + Δ) be the interkink distance ratio. Size regulation means that ρ is not sensitive to variations of L, i.e.
$$S_\rho = \left|\frac{d\log\rho}{d\log L}\right| < \epsilon,$$
where ϵ is a small number. If ϵ = 0.1, then a relative variation of 10 percent of L leads to less than a 1 percent variation of the ratio ρ. Simple calculations show that S_ρ can be very small. From the stationary equations of (3.28), (3.29), we obtain an implicit dependence of ρ on L, and
$$S_\rho = \left|\frac{\big(1 - (1-\rho)/\rho\big)\exp(\tilde{L}(2\rho - 1))}{1 + \rho_\alpha\exp(\tilde{L}(2\rho - 1))}\right|,$$
where L̃ = γL/N and ρ_α = α₁/α₂ < 1. The values of L̃ and ρ can be obtained from experimental data. The sensitivity S_ρ then only depends on the model parameter ρ_α. Experimental values (L̃, ρ) were obtained using gene expression data [335]. Computations show a weak sensitivity.

Summary

In this subsection, a new model of morphogenesis is proposed which presents an alternative to the classical Turing model [272]. A special gene interaction leads to the following picture. Gene expression domains appear as a result of a nonlinear gene interaction. The borders of the domains can be considered as kinks, since the diffusion rates of all gene products are small and of the same order (therefore, the diffusion instability is impossible). A special gene interaction leads to the alternating cushion effect, and we obtain a kink chain where we should take into account not only the interactions between nearest kinks, but also the interaction between next neighbors. This alternating cushion effect stabilizes the kink chain. Without this effect, the chain is unstable, as was shown in the previous subsection. The alternating cushion effect also explains the pattern proportionality: we always have the same number of layers independently of the embryo size L.


3.2 Chaotic and periodic chemical waves

3.2.1 Introduction

In this section, which follows [273, 275, 297, 298] with some modifications, we consider the problem of existence of traveling wave solutions for general reaction-diffusion systems. A traveling wave is a solution of the form U(x − Vt), where V is a constant speed. This issue was first considered in the seminal works [76, 146]; however, great attention was devoted to this domain after the remarkable paper [74]. An exhaustive survey of classical results can be found in [311], see also [258]. For some results for monotone systems, see [310, 312, 313]. In this section, we shall show that in the nonmonotone case, the wave can have a nontraveling form. The wave propagation exhibits interesting effects.

We consider the Cauchy problem for the parabolic system
$$\frac{\partial u}{\partial t} = D\frac{\partial^2 u}{\partial x^2} + F(u, x), \qquad x \in \mathbb{R},\ t \geq 0, \tag{3.30}$$
with the initial condition
$$u(x, 0) = u_0(x). \tag{3.31}$$

Here, u = (u₁, ..., u_n), F = (F₁, ..., F_n), and D is a diagonal matrix with positive diagonal elements d_i, i = 1, ..., n. The function F(u, x) has the form F(u, x) = f(u) + λg(u, x), where f = (f₁, ..., f_n), g = (g₁, ..., g_n), λ is a positive parameter, and f(u) satisfies the following conditions: f(u₊) = f(u₋) = 0 for some constant vectors u₊ and u₋ with u₊ < u₋ (the inequality is understood componentwise),
$$\frac{\partial f_i(u)}{\partial u_j} \geq 0, \qquad i, j = 1, \ldots, n,\ i \neq j,\ u \in B_\kappa, \tag{3.32}$$
and f(u) ∈ C²(B_κ). Here, B_κ denotes the set of u ∈ $\mathbb{R}^n$ satisfying the inequality u₊ − κ ≤ u ≤ u₋ + κ, where κ is a small positive number. For the scalar case (n = 1), condition (3.32) is not imposed. We assume further that the function g(u, x) is continuous and bounded uniformly in x together with its first derivative with respect to x and its second derivatives with respect to the u_i. The systems satisfying condition (3.32) (with λ = 0) are called monotone. We recall that this is a class of systems for which comparison theorems are valid: the inequality u₀₁(x) ≥ u₀₂(x), x ∈ R, for the initial conditions of the Cauchy problem implies the inequality u₁(x, t) ≥ u₂(x, t), x ∈ R, t ≥ 0, for the corresponding solutions. Systems of this type arise in numerous applications, in particular in chemical kinetics and combustion [311].


If λ = 0 and if we suppose, for simplicity, that u₀(±∞) = u_±, then the behavior of the solutions of the Cauchy problem (3.30), (3.31) is described by traveling wave solutions. We recall that a traveling wave is a solution of the form u(x, t) = w(x − ct). Here, c is an unknown constant, the wave velocity, and the function w(x) is a solution of the problem
$$Dw'' + cw' + f(w) = 0, \qquad w(\pm\infty) = u_\pm. \tag{3.33}$$
Existence and stability of traveling waves for monotone parabolic systems, i.e. for systems with condition (3.32), are known [311, 312]. If we consider the bistable case, where the matrices f′(u₊) and f′(u₋) have all eigenvalues in the left half-plane, and if we assume that at all other zeros u^{(i)} ∈ [u₊, u₋] of the function f(u) the corresponding matrices f′(u^{(i)}) have eigenvalues in the right half-plane, then a unique monotone traveling wave w(x) exists, i.e. a unique value of c for which problem (3.33) has a monotone solution. If we assume, moreover, that the matrix f′(w(x)) is functionally irreducible, then this wave is globally stable as a solution of the Cauchy problem (3.30), (3.31). In the monostable case, where one of the matrices f′(u₊) and f′(u₋) has all eigenvalues in the left half-plane and the other one has eigenvalues in the right half-plane, monotone traveling waves exist for all values of the velocity greater than or equal to some minimal velocity c₀, and they do not exist for c < c₀ [311]. This case is a generalization of the well-known results for the scalar equation studied first in [146] and in a number of works after that (see [74, 311] and references therein). The condition of functional irreducibility of the matrix f′(w(x)) mentioned above means that if we replace all its elements by their C-norms, then this numerical matrix is irreducible. If this condition is not satisfied, then the system of equations is in some sense decoupled.
The simplest example of this situation is provided by a system of n independent equations of the form (3.30). In this case, the matrix f′(w(x)) is diagonal. As we will see later, when the condition of irreducibility is not satisfied, the behavior of solutions of the Cauchy problem can be essentially different.

We study perturbed monotone systems (λ ≠ 0) using the RVF method and following the strategy discussed in Chapter 1 and in Section 2.1. We consider the bistable case and show that if the irreducibility condition is satisfied, then for small λ, there exist solutions of the Cauchy problem of the form
$$u(x, t) = w(x - q(t)) + \lambda v(x, q(t)), \tag{3.34}$$
where |v| is bounded. If c is the speed of the unperturbed wave, then the function q(t) is a solution of the ordinary differential equation
$$\frac{dq}{dt} = c - \lambda\Phi(q, \lambda), \tag{3.35}$$
where Φ(q, λ) = Φ₀(q) + O(λ),

and the principal term Φ₀ of this expansion is found explicitly. The large time behavior of solutions of (3.35) can be described completely. It can be easily shown that solutions of (3.35) defined for all t from −∞ to +∞ exist. We show that the corresponding solutions (3.34) lie on some invariant manifold, and therefore they are also defined for all t. We will call solutions of this type, i.e. solutions u(x, t) defined for all t and such that at ±∞ they are close to u_±, respectively, generalized traveling waves. Thus, for small λ, it is possible to prove the existence of generalized traveling waves. The behavior of solutions for the perturbed scalar equation (n = 1) is studied in [190, 191].

If the irreducibility condition is not satisfied, then generalized traveling waves can have a more complex structure. In the generic situation, solutions of the form (3.34) may not exist. Instead of them, we should consider solutions
$$u(x, t) = v(x, q_1(t), \ldots, q_n(t)), \tag{3.36}$$
where the functions q_i(t) satisfy the system of ordinary differential equations
$$\frac{dq_i}{dt} = c_i - \lambda\Phi_i(q, \lambda), \qquad i = 1, \ldots, n. \tag{3.37}$$

We can illustrate this situation with the unperturbed system of n independent equations. In this case, there exists a traveling wave solution for each component of the vector-valued function u(x, t): u_i(x, t) = w_i(x − c_i t), i = 1, ..., n, and q_i(t) = c_i t. Suppose now that all the constants c_i, i = 1, ..., n, are equal to each other and consider the perturbed system. Then, the different components of the solution begin to interact, which can generate a complex dynamics described by system (3.37). The dynamics depends on the perturbation g(u, x), and periodic (for n ≥ 2) and even chaotic (for n ≥ 3) solutions can exist. Moreover, for any given polynomial P(q) and ϵ > 0, it is possible to construct a perturbation g(u, x) such that
$$\sup_{|q| \leq 1} |\Phi(q, \lambda) - P(q)| \leq \epsilon.$$
This perturbation is also a polynomial with respect to the u_i of the same degree as P. In other words, any vector field in $\mathbb{R}^n$ can be approximated by the vector field Φ(q, λ), which describes the dynamics of solutions of (3.30). In particular, we show that generalized traveling waves can have chaotic behavior similar to that described by the Lorenz system. These results develop the ideas suggested in [190, 278].
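The chaotic case can be illustrated numerically. The sketch below does not construct the perturbation g itself; it simply integrates the reduced system (3.37) with c_i = 0 and λΦ chosen so that the right-hand side is exactly the classical Lorenz field (our own illustrative choice), and demonstrates the sensitive dependence of the front positions q_i on their initial values:

```python
import numpy as np

def lorenz(q, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # classical Lorenz vector field, standing in for c - lam * Phi(q)
    x, y, z = q
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def rk4_step(f, q, dt):
    # one fourth-order Runge-Kutta step
    k1 = f(q); k2 = f(q + 0.5 * dt * k1)
    k3 = f(q + 0.5 * dt * k2); k4 = f(q + dt * k3)
    return q + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

q1 = np.array([1.0, 1.0, 20.0])
q2 = q1 + 1e-8                 # nearby initial front positions
d_init = np.linalg.norm(q1 - q2)
sep_max = 0.0
for _ in range(4000):          # integrate up to t = 40
    q1 = rk4_step(lorenz, q1, 0.01)
    q2 = rk4_step(lorenz, q2, 0.01)
    sep_max = max(sep_max, float(np.linalg.norm(q1 - q2)))
print(sep_max / d_init)        # grows by many orders of magnitude
```

The front positions stay bounded (the Lorenz attractor is compact), yet two generalized waves started from almost identical positions become completely decorrelated, which is the hallmark of the chaotic regime described above.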

3.2.2 A priori estimates, global existence and uniqueness

We assume that the functions f(u) and g(u, x) satisfy the conditions of the previous subsection and the condition of functional irreducibility. We assume, moreover, that


all eigenvalues of the matrices f′(u_+) and f′(u_−) lie in the left half-plane and that there exists a finite number of zeros u^(1), . . . , u^(k) of the function f(u) in the interval (u_+, u_−) (i.e. for u_+ < u < u_−). We recall that the principal eigenvalues of the matrices f′(u^(i)) are real. We suppose that they are positive. As was already mentioned above (see (3.33)), in this case there exists a monotone [u_+, u_−]-wave, i.e. a wave with the limits u(±∞) = u_±. This wave is unique up to translation in space, and it is globally stable [310, 312].

Lemma 3.3. For sufficiently small λ > 0, there exists a constant vector b = (b_1, . . . , b_n) such that the solution u(x, t) of the Cauchy problem (2.34), (2.35) satisfies the inequality

u_+ − b ≤ u(x, t) ≤ u_− + b   (3.38)

for all x ∈ R and t ≥ 0 if

u_+ − b ≤ u_0(x) ≤ u_− + b,  x ∈ R.   (3.39)

Proof. The proof is standard and uses the principle of invariant domains [258]. We show that the rectangle J(b) = [u_+ − b, u_− + b] = {u : u_i^+ − b_i ≤ u_i ≤ u_i^− + b_i} is an invariant domain for (3.30) for some b > 0. To prove this, we must show that the vector field F(u, x) is directed inside J(b) on the boundary ∂J(b) for all x ∈ R. Indeed, since the principal eigenvalues of the matrices f′(u_+) and f′(u_−) are negative and (3.32) holds, we have f_i(u) < 0 for u ∈ ∂J(b) with u_i = u_i^− + b_i, and f_i(u) > 0 for u ∈ ∂J(b) with u_i = u_i^+ − b_i. Hence, the same inequalities hold for the functions F_i if λ is sufficiently small. The lemma is proved.

We note that the global existence and uniqueness of the classical solution of (3.30), (3.31) are well known (see, for example, [74, 146]). Denote by E = C(R) the Banach space of bounded continuous functions, and by C^2(R) the space of functions whose first and second derivatives belong to C(R). The operator Au = Du″ with the domain D(A) = C^2(R) is sectorial in C(R) [167]. (Here, ′ denotes the derivative with respect to x.) Hence, we can introduce the space E^α = D(A_1^α) with the norm ‖u‖_α = ‖A_1^α u‖, where ‖·‖ is the uniform norm, α ≥ 0, A_1 = A − aI, a > 0, and I is the identity operator [108]. Problem (3.30), (3.31) generates a semigroup S^t. If the initial condition u_0 belongs to E^δ for some δ > 0 and satisfies (3.39), then for any t > 0, the solution S^t u_0 belongs to E^α for any 0 < α < 1 and satisfies (3.38). We denote by ⟨· , ·⟩ the inner product in L_2^m(R) and in L_2(R).

3.2.3 Invariant manifold

In this subsection, we show the existence of an invariant, locally attracting manifold M_λ describing the large time behavior of solutions of the problem (3.30), (3.31). If λ = 0, this manifold consists of a one-parameter family of functions, M_0 = {u : u = w(x − x_0)}, where w(x) is a monotonically decreasing (componentwise) solution of (3.34). The shift x_0 can be considered as a coordinate on this manifold. The global semiflow S^t restricted to M_0 is described by the equation dx_0/dt = c, where c is the wave velocity. We show that for small λ ≠ 0, the invariant manifold M_λ still exists and is close to M_0. The construction of the invariant manifold requires several steps.

3.2.4 Coordinates in a neighborhood of M_0

Denote by W_δ^α the neighborhood of M_0 in the space E^α:

W_δ^α = {u ∈ E^α : inf_{x_0} ‖u(z) − w(z − x_0)‖_α < δ}.   (3.40)

In the case α = 0, we use the notation W_δ. We study solutions of (3.30) with initial conditions u_0 from W_δ^α, α > 0. Let us introduce new variables (q, v) as follows. For each u ∈ W_δ^α, we put

v(z − q) = u(z) − w(z − q) ∈ E,   (3.41)

where the real number q must satisfy

ρ(q, u) ≡ ⟨u(·) − w(· − q), Ũ(· − q)⟩ ≡ ∫_{−∞}^{∞} (u(z) − w(z − q)) Ũ(z − q) dz = 0.   (3.42)

Here, Ũ is the principal eigenfunction of the problem

Dψ″ − cψ′ + B*(z)ψ = μψ,  ψ(±∞) = 0,

where B*(z) is the transpose of the matrix B(z) = f′(w(z)). We note that this is the adjoint of the problem obtained by linearizing (3.33) about w(z). The principal eigenvalue of this problem is real, and the principal eigenfunction Ũ is positive [312]. Moreover, it decreases exponentially at infinity. Below, we denote by C all constants independent of u and λ.


Lemma 3.4. If δ is sufficiently small, then there exists a unique q satisfying (3.42).

Proof. From the definition of W_δ, it follows that there exists x_0 such that ‖u(z) − w(z − x_0)‖ ≤ δ. We have

lim_{q→+∞} ∫_{−∞}^{∞} (w(z − x_0) − w(z − q)) Ũ(z − q) dz = ∫_{−∞}^{∞} (w_+ − w(z)) Ũ(z) dz < 0,

lim_{q→−∞} ∫_{−∞}^{∞} (w(z − x_0) − w(z − q)) Ũ(z − q) dz = ∫_{−∞}^{∞} (w_− − w(z)) Ũ(z) dz > 0.

Since ρ(q, u) is a continuous function of q, it follows that for δ sufficiently small, ρ(q, u) = 0 for some q. Moreover, the absolute value |x_0 − q| can be estimated independently of δ for all δ sufficiently small.

Let us show that this value of q is unique. Suppose there are two different values q_1 and q_2 such that ρ(q_i, u) = 0, i = 1, 2. Denote ũ(z) = u(z) − w(z − x_0). Then,

∫_{−∞}^{∞} w(z − x_0)(Ũ(z − q_1) − Ũ(z − q_2)) dz = − ∫_{−∞}^{∞} ũ(z)(Ũ(z − q_1) − Ũ(z − q_2)) dz.

We have

I_1 ≡ − ∫_{−∞}^{∞} ũ(z)(Ũ(z − q_1) − Ũ(z − q_2)) dz = (q_1 − q_2) ∫_0^1 dt ∫_{−∞}^{∞} ũ(z) Ũ′(z − q_1 + t(q_1 − q_2)) dz.

The absolute value of I_1 can be estimated from above by C_1 δ|q_1 − q_2|. By virtue of the estimate |x_0 − q_i| ≤ M, from the equality

I_2 ≡ ∫_{−∞}^{∞} w(z − x_0)(Ũ(z − q_1) − Ũ(z − q_2)) dz = ∫_{−∞}^{∞} (w(z − x_0 + q_1) − w(z − x_0 + q_2)) Ũ(z) dz,

the monotonicity of w(z) and the positivity of Ũ(z), it follows that I_2 ≥ C_2|q_1 − q_2|. These estimates lead to a contradiction for small δ if q_1 ≠ q_2. The lemma is proved.

Lemma 3.5. The equality ρ(q, u) = 0 determines a C^1 mapping q(u) from W_δ^α to R.

Proof. The assertion of the lemma will follow from the implicit function theorem if we verify that ρ_q(q, u) ≠ 0 for u ∈ W_δ. We have

ρ(q, u) = ∫_{−∞}^{∞} u(z) Ũ(z − q) dz − ∫_{−∞}^{∞} w(z) Ũ(z) dz,

ρ_q = − ∫_{−∞}^{∞} u(z) Ũ′(z − q) dz = − ∫_{−∞}^{∞} (w(z − x_0) + ũ(z)) Ũ′(z − q) dz,

where ‖ũ‖ ≤ δ. Since

− ∫_{−∞}^{∞} w(z − x_0) Ũ′(z − q) dz = ∫_{−∞}^{∞} w′(z − x_0) Ũ(z − q) dz > 0,

∞

˜ (z − q)dz = (w(z − x0 ) − w(z − q))U −∞

˜ (z − q)dz. (w(z − x0 ) − u (z))U −∞

Since |u (z) − w(z − x0 )| ≤ δ, then the integral in the right-hand side can be estimated by C1 δ where the constant C1 is independent of u. From the same estimate for the integral in the left-hand side and by virtue of monotonicity of w(z) and positiveness ˜ (z), we conclude that |x0 − q| ≤ C2 δ. Hence, of U |v(z)| = |u (z) − w(z − q)| ≤ |u (z) − w(z − x0 )| + |w(z − q) − w(z − x0 )| ≤ C3 δ.

The lemma is proved.

3.2.5 Change of variables Consider the solution u (x, t) of the Cauchy problem (3.30), (3.31). Suppose that u (z, t) ∈ W δα for 0 ≤ t ≤ T for some T > 0. For each t fixed, we define q(t) and v(ξ, t) by equations ρ (q, u (·, t)) = 0,

v(ξ, t) = u (ξ + q(t), t) − w(ξ ).

(3.43)

3.2 Chaotic and periodic chemical waves

| 137

The function q(t) determines the shift of the unperturbed wave and v is a pertur­ bation of the wave. From (3.30), we have



∂v ∂2 w ∂2 v ∂v + F (w(ξ ) + v(ξ, t), ξ + q). − w + q (t) = D + (3.44) ∂t ∂ξ ∂ξ 2 ∂ξ 2 ∂v Let us add, to both sides of this equation, the term c(w + ∂ξ ). Taking into account (3.33), we obtain

∂2 v ∂v ∂v ∂v + + w (ξ ) (c − q (t)) = D 2 + c + F (w(ξ ) + v(ξ, t), ξ + q(t)) ∂t ∂ξ ∂ξ ∂ξ

− f (w(ξ )).

(3.45) We can now rewrite the last equality as

∂v ∂v − + w (ξ ) (q (t) − c) = Lv + γ(ξ, v) + λh(ξ, v, q), ∂t ∂ξ

(3.46)

where Lv = D

∂2 v ∂v +c + f (w(ξ ))v, ∂ξ 2 ∂ξ

γ(ξ, v) = f (w(ξ ) + v) − f (w(ξ )) − f (w(ξ ))v, h(ξ, v, q) = g(w(ξ ) + v(ξ, t), ξ + q). ˜ (ξ ) and integrate from −∞ to ∞. From the definition of q, We multiply (3.46) by U ∞

˜ (ξ ))dξ = 0 (v(ξ, t), U −∞

and after differentiation, ∞ −∞

Moreover,

∂v ˜ (ξ ) dξ = 0. (ξ, t), U ∂t

∞

˜ (ξ ))dξ = 0 (Lv(ξ, t), U −∞

˜ is the eigenfunction of the adjoint operator L∗ corresponding to the zero eigen­ since U value. Thus, we obtain ˜  + λ h, U ˜ ) ≡ c − Φ(q, v, λ) q (t) = c − τ−1 (v) ( γ, U

(3.47)

138 | 3 Complex patterns and attractors for reaction-diffusion systems $

where τ=

∂v ˜ w + ,U ∂ξ

%

˜  −  v, U ˜ . =  w , U

For δ, sufficiently small τ ≥ k > 0, where a constant k does not depend on v. We note that |γ(ξ, v)| < C|v|2 , |γ v (ξ, v)| < C|v| with a constant C independent of v and λ. Hence, the following estimate holds:     |Φ(q, v, λ)| ≤ C1 (|v|2 + λ), Φ q (q, v, λ) ≤ C1 λ, (3.48) where a constant C1 is also independent of v and λ. From (3.46), we have

∂v ∂v = Lv + γ(ξ, v) + λh(ξ, v, q) − + w (ξ ) Φ(q, v, λ) ∂t ∂ξ

(3.49)

≡ Lv + F (ξ, v, q, λ).

Thus, we have reduced problem (3.30), (3.31) to problem (3.47), (3.49). This reduc­ tion is possible in a small neighborhood of the manifold M0 . In other words, the per­ turbation v is supposed to be small. We obtain a priori estimates of solutions of the problem (3.47), (3.49) in the next subsection. These estimates show, in particular, that the norm v(·, t)α is small for all t ≥ 0, provided v(·, 0)α is small.

3.2.6 A priori estimates We can represent the space E as a direct sum E1 + E2 , where E1 is subspace of functions ϕ(ξ ) for which ∞ ˜ (ξ ))dξ = 0 (ϕ(ξ ), U −∞

and E2 is its supplement. By construction, solution v(ξ, t) of problem (3.47), (3.49) belongs to E1 . The reduction L1 of the operator L to E1 has all spectrums in the left half-plane. Moreover, it is separated from the imaginary axis, Re σ (L1 ) ≤ −β < 0. Since the operator L1 is sectorial and generates an analytic semigroup [167], one has e L 1 t ˜v  ≤ Me−βt ˜v, e

L1 t

˜vα ≤ M max(1, t

t ≥ 0, −α

)e

− βt

˜v,

(3.50) t≥0

(3.51)

with a constant M, independent of ˜v and 0 ≤ α < 1. We conserve below the notation v(t) for the function v(ξ, t) considered as an ele­ ment of the space E α . We also note that the operator L in (3.49) can be replaced by L1 . In the sequel, we shall consider system (3.47), (3.49) as a system of abstract evolution equations, (q, v) ∈ R × E α .

3.2 Chaotic and periodic chemical waves

| 139

Lemma 3.7. Let F : v(ξ ) → F (ξ, v(ξ ), q, λ) be the operator considered from E α to E for any q fixed. If α > 1/2, then F ∈ C1+ω for some ω ∈ (0, 1). Proof. Due to the converse Taylor theorem [9], it is sufficient to show that S(v, v¯ , q, λ) = F (v + v¯ , q, λ) − Fv (v, q, λ)v¯ − F (v, q, λ) ≤ Cv¯1α+ω for some C. Here, Fv (v, q, λ) is the Frechet derivative of F , which is a bounded linear operator from E α to E. We have ¯2 + C2 v¯ ξ v¯. S ≤ C1 (γ vv  +  h vv  +  v ξ + w ξ ) v

Taking into account the well-known embeddings [167] v¯  ≤ v¯α , v¯ ξ  ≤ v¯α valid for α > 1/2, we obtain S ≤ Cv¯ 2α . The lemma is proved. Lemma 3.8. Let δ be sufficiently small, 1/2 < α < 1, and v(0) ∈ W δα . Then, λ0 (δ) > 0 exists such that for all λ, 0 < λ < λ0 and some t0 > 0, we have v(t) ∈ W δα1 ,

t > 0,

v(t) ∈ W δα ,

t ≥ t0 > 0.

(3.52)

Here, λ0 depends on δ, t0 is independent of δ, δ1 = (M + ϵ)δ, ϵ > 0. Moreover, there are constants C, C0 and t1 (λ) < C0 | ln λ| such that α , v(t) ∈ W Cλ

t > t1 ( λ) .

(3.53)

Proof. We represent the solution of (3.49) in the form t

v(t) = e L 1 t v(0) +

e L 1 (t−s ) F (v(s), q(s)) ds,

(3.54)

0

where F (v(s), q(s)) denotes the function F (ξ, v(ξ, s), q(s)) considered as an ele­ ment of E. The representation (3.54) holds since F ∈ C1 (E α × R1 , E). This follows from Lemma 3.6 since α > 1/2. From (3.48), one has F (v(s), q(s)) ≤ C(λ + v2 ),

(3.55)

where the constant C is independent of v. Since v(0) ∈ W δα , then for t sufficiently small v(t) ∈ W δα1 . Suppose that v(t) does not belong to W δα1 for all t. Then, there exists T > 0 such that v(t)α < δ1 , 0 ≤ t < T,

v(T )α = δ1 .

140 | 3 Complex patterns and attractors for reaction-diffusion systems We have from (3.54), (3.55) (M + ϵ)δ = δ1 = v(T )α ≤ ≤ Me

− βT

T 2

δ + C(λ + sup v(t) ) 0< t ≤ T

T ≤ Mδ + C(λ + δ21 )

max(1, s−α ) e−βs ds ≤

0

(3.56)

max(1, s−α ) e−βs ds.

0

If λ and δ are sufficiently small, (3.56) gives a contradiction. We conclude from the same estimates that ˜v ∈ W δα for t sufficiently large. To prove the last assertion of the lemma, consider the following inequality, which follows from (3.54), (3.55) v(t)α ≤ Me

− βt

t

δ + Cλ + C2

max(1, (t − s)−α )e−β(t−s )v(s)2α ds.

0

Here, C2 is a positive constant. Let S be an integral operator acting on the set D δ1 of nonnegative functions X (t), supt X (t) ≤ δ1 defined by (SX )(t) = Me

− βt

t

δ + Cλ + C2

max(1, (t − s)−α )e−β(t−s ) X 2 (s)ds.

0

It is easy to verify that it is a contracting operator on the set of functions D δ1 with the supremum norm. Consider the set D β,k of nonnegative functions X (t) satisfying the inequality X (t) ≤ 2Cλ + ke−βt/2 Mδ. The constants k, β, and δ can be chosen such that D β,k ⊂ D δ1 and the operator S maps D β,k into itself. Hence, there exists a unique solution X0 (t) of the equation X0 (t) = (SX0 )(t) in D β,k . It remains to be noted that v(t)α ≤ X0 (t). The lemma is proved.

3.2.7 Main theorem The main result of this section is given by the following theorem. Theorem 3.9. System (3.47), (3.49) has an invariant locally attracting manifold M λ for all λ sufficiently small. This manifold is a C1 -graph ˜v = V (q, λ) in E α , where 1/2 < α < 1. The global semiflow reduced to M λ satisfies the ordinary differential equation q (t) = c − λΦ(q, λ),

(3.57)

3.2 Chaotic and periodic chemical waves

| 141

where Φ(q, λ) is continuous together with its first derivative with respect to q, Φ(q, λ) = Φ0 (q) + Φ1 (q, λ), ∞ 0 ˜ (z)dz, Φ (q, λ) = g( w( z) , z + q) U

(3.58) (3.59)

−∞

|Φ1 (q, λ)|, |Φ1 q (q, λ)| < Cλ.

(3.60)

Proof. System (3.47), (3.49) can be rewritten as dq = c + λQ(v, q, λ), dt dv = L1 v + λG(v, q, λ). dt

(3.61) (3.62)

Here, the operator L1 is sectorial and its spectrum σ (L1 ) lies in the half-plane Re σ ≤ −β (σ ∈ σ (L1 )), where β does not depend on λ. The maps Q and G belong to the classes C1+ω (E α × R, R) and C1+ω (E α × R, E), respectively (Lemma 3.6). We note that by virtue of Lemmas 3.6 and 3.7, all solutions starting from W δα cannot α leave the neighborhood W δα1 , enter the neighborhood W Cλ and remain in it. Hence, the α . Inside this domain, the Hölder constants of Q and G invariant manifold lies in W Cλ are uniformly bounded as λ → 0. Therefore, we can apply results on the existence of invariant manifolds [48, 50, α . For sufficiently small λ, 78, 108] to the system (3.61), (3.62) in the neighborhood W Cλ it can be represented in the form v = V (q, λ), where the function V (q, λ) admits the estimate V α < cλ and V q α < cλ. Moreover, the invariant manifold is locally attracting. Substituting v = V (q) into (3.61), and, omitting for simplicity, the dependence of the function V on λ, we obtain (3.57) where ˜ , Φ(q, λ) = λ−1 τ−1 (V ) γ(V (q)) + λh(w + V (q), q)U and τ ∈ R is a rescaled time, defined above. Let us denote 1 ˜ Φ0 = τ − 0  h ( w, q ) , U  ,

˜ , and Φ1 = Φ − Φ0 . Since γ(V ) = O(V 2 ) and |τ(v) − τ0 | ≤ where τ0 = τ(0) =  w , U Cvα , then |Φ1 | < cλ. We must still estimate the derivative Φ1 q . Since V q α ≤ cλ, then the derivative of the ˜  can be estimated by cλ2 . It can be easily verified that term τ−1 (V ) γ(V ), U ' d & −1 1 ˜ ≤ cλ. τ ( V ) h( w + V ( q) , q) − τ − 0 h ( w, q ) , U dq

This completes the proof of the theorem.

142 | 3 Complex patterns and attractors for reaction-diffusion systems We conclude this subsection with some remarks concerning the speed of propagation of generalized traveling waves. If λ = 0, then q(t) ≡ const is a shift of the traveling wave. For λ ≠ 0, we can also consider q(t) as a wave coordinate. If the perturbation g does not depend on the space variable, g(u, x) ≡ g(u ), then q(t) = c1 t with some constant c1 which converges to c as λ → 0. In this case, we have a usual traveling wave propagating with a constant velocity. If g depends on the space variable, there are two different cases. In the first case, c = 0 and the function Φ(s) has zeros. Then, there are stationary solutions of equation (3.57) which can be stable or unstable. In the second case, c = 0 or c = 0 and Φ(s) ≠ 0. Then, stationary solutions of (3.57) do not exist and the wave moves with a perturbed speed, which can depend on time. Let us note that the expression (3.60) gives, in an explicit form, the principal term of the expansion of the function Φ with respect to λ.

3.2.8 Periodic and chaotic waves In the previous subsection, we assumed that the nonlinearity f (u ) and the traveling wave w(x) were such that the matrix f (w(x)) was functionally irreducible. For exam­ ple, this condition is satisfied if the inequality (2.36) is strict. In this case, a small per­ turbation of the system leads to the appearance of perturbed traveling waves, which can be characterized by two functions, q(t) and v(x, t). The first one determines the location of the wave and the second one determines its profile. In this subsection, we do not assume that the matrix f (w(x)) is functionally irre­ ducible. In particular, this is the case if the matrix f (u ) is diagonal or, in other words, if we have a decoupled system of n equations. As we will see below, the dynamics of the perturbed system is described not by a scalar function q(t), in this case, but by a vector-valued function q(t) = (q1 (t), . . . , q n (t)). This function satisfies a system of ordinary differential equations and it can have complex behavior. In what follows, we restrict ourselves to the case where each component f i (u ) of the vector-valued function f (u ) only depends on u i . In this case, the system (3.33) is a decoupled system of n equations. Each of them can have a solution with its own value of c = c i . Generically, they are not equal to each other. It mean that for the un­ perturbed system (2.34) (λ = 0), traveling waves for different components propagate with different velocities and the distance between them tends to infinity. For the per­ turbed system, the corresponding perturbed waves can interact with each other. This effect can stabilize the solution in the sense that its different components are local­ ized in the same region. We shall show that it can lead to periodic and even to chaotic behavior of solutions.

3.2 Chaotic and periodic chemical waves

| 143

3.2.9 Description of the model We suppose that the nonlinearity f has the form f (u ) = (f1 (u 1 ), . . . , f n (u n )), where the functions f i (u i , λ) : R → R satisfy the following conditions: f i (u + i ) = f (u − i ) = 0,

f i (u ± i ) < 0.

(3.63)

We note that condition (3.32) is satisfied in this case. We assume that there exists a monotone in z solution of each of the following scalar problems d i w i + c i w i + f i ( w i ) = 0,

w(±∞) = u ± i .

(3.64)

Denote w(z) = (w1 (z), . . . , w n (z)). Conditions of existence of one-dimensional traveling waves for the scalar parabolic equation are well-studied and we do not discuss them here [311].

3.2.10 Transformation of the equations We define here the neighborhood W δα as follows:   W δα = u ∈ E α : inf u i (·) − w i (· − x i )α ≤ δ . x i ∈R

For each u = (u 1 , . . . , u n ) ∈ W δα and each t ≥ 0 considered as a parameter, we put v i (x − q i , t) = u i (x, t) − w i (x − q i (t)),

i = 1, . . . , n,

(3.65)

where a real number q i (t) is defined by ∞

˜ i (x − q i (t))dz = 0. (u i (x, t) − w i (x − q i (t)))U

(3.66)

−∞

˜ i (z) is the principal eigenfunction of the problem Here, U d i ψ − c i ψ + f i (w i ) ψ = μψ,

ψ(±∞) = 0.

We recall that it is positive, exponentially decreasing at infinity and corresponds to the zero eigenvalue μ = 0. We note that Lemmas 3.3–3.5 are applicable in this case. Let us fix some index i, i ∈ {1, 2 . . . n} and let us derive the equation for the i-th component v i (ξ, t). Opposite to the previous subsection, we use here, and in the se­ ˜ i (·) = quel, the notation  ,  for the inner product in L2 (R). Then, we have  v i (·, t), U ˜ i (·) = 0. 0 and, consequently,  v i t (·, t), U

144 | 3 Complex patterns and attractors for reaction-diffusion systems By setting ξ = x − q i , we obtain from the i-th equation of system (3.30)



∂v i ∂v i ∂2 w i ∂2 v i ∂v i − wi + q i (t) = d i + + f i (w i (ξ )+ + ci 2 2 ∂t ∂ξ ∂ξ ∂ξ ∂ξ

(3.67)

+ v i (ξ, q i ), ξ + q i ) + λh i (w, v, q, ξ ),

where v = (v1 , v2 , . . . , v n ), q = (q1 , q2 , . . . , q n ) and h i (w, v, q, ξ ) = g i (w1 (ξ + q i − q1 ) + v1 (ξ + q i − q1 , t), w2 ( ξ + q i − q2 ) + v1 ( ξ + q i − q2 , t) , . . . . . . , w n (ξ + q i − q n ) + v n (ξ + q i − q n , t), ξ + q i ). This expression for h i can be written in a more compact form if we introduce the follow­ ing shift operators T i (p) and T (q). The operator T i (p) acts on real-valued functions v i (z) and is defined by T i (p)v i (z) = v i (z − p), i.e. it shifts the argument on p. The operator T (q) (where q = (q1 , q2 , . . . , q n )) acts on vector-valued functions v(z) and is defined by (T (q)v)i (z) = v i (z − q i ), i.e. it shifts the argument of the i-th component of v on q i . Then, the term h i can be written as h i = T i (−q i )g i (T (q)(w + v), ξ ).

(3.68)

Let us note that T i and T are bounded linear operators acting from E to E and the operators T (q)v(z) = T (q)v z are bounded linear operators acting from E α to E if α > 1/2 [167]. As above, we have

∂v i ∂v i − + w i (ξ ) (q i (t) − c i ) = L i v i + γ i (ξ, v i ) + λh i (ξ, v, q), (3.69) ∂t ∂ξ where Li vi = Di

∂2 v i ∂v i + ci + f i (w i (ξ ))v i , ∂ξ 2 ∂ξ

γ i (ξ, v i ) = f i (w i (ξ ) + v i ) − f i (w i (ξ )) − f i (w i (ξ ))v i . The function γ i belongs to C2 and satisfies the estimates |γ i | < C|v i |2 ,

|γ i v i | < C|v i |,

where C is a positive constant independent of v i . Let us derive the equation for q i . As ˜ (ξ ) and integrate from −∞ to ∞. We obtain above, we multiply (3.72) by U 1 ˜ ˜ q i (t) = c i + τ− i ( v i ) ( γ i , U i  + λ g i , U i ) ≡ c i − λΦ i ( q, v, λ) ,

(3.70)

i ˜ ˜ ˜ where τ i =  w i + ∂v ∂ξ , U i . Below, we shall use the following formula for h i =  h i , U i :

∞

h˜ i (w, v, q) =

g i ( w1 ( ξ + q i − q1 ) + v1 ( ξ + q i − q1 , t) , . . . , −∞

˜ i (ξ )dξ. . . . , w n (ξ + q i − q n ) + v n (ξ + q i − q n , t), ξ + q i )U

(3.71)

3.2 Chaotic and periodic chemical waves

| 145

We have the following estimate, that is, |Φ(q, λ)| ≤ C(|v|2 + λ)

(3.72)

with a constant C independent of v and λ. We then obtain ∂v i = L i v i + Fi (v, q, ξ, λ), ∂t where

Fi (v, q, ξ, λ) = γ(ξ, v i ) + λh i (ξ, v, q, λ) −

(3.73)

∂v i + w i (ξ ) λΦ(v, q, λ). ∂ξ

Given q(t), we can now consider equation (3.73) as an abstract evolution equation for v i in some phase spaces of fractional powers associated with the operator L i , where α ∈ (1/2, 1). Repeating this construction for all i, we obtain the following system for q and v which will be considered below as an abstract evolution equation in the phase space Eα × Rn : q (t) = c − λΦ(q, λ), ∂v = Lv + F (v, q, λ), ∂t

(3.74) (3.75)

where L = (L1 , . . . , L n ), F = (F1 , . . . , Fn ) Φ = (Φ1 , . . . , Φ n ), c is the vector com­ posed by c i : c = (c1 , c2 , . . . , c n ), and Fi = γ i (v i ) + λh i (v, q) + (w i + v i )Φ i .

Due to the properties of the operators T (q) and T i (q i ), one can assert that F ∈ C1+ω (E α × R n , E) for some ω > 0. This assertion plays an important role below in the proof of existence of the invariant manifold. Lemma 3.10. The map F belongs to the Hölder class C1+ω (E α × R n , E). Proof. We first consider the map g i (w q + v q ) : E α × R n → E

(3.76)

for q ∈ R n fixed. Here, we denote w q = (w1 (ξ + q1 ), . . . , w n (ξ + q n )),

v q = (v1 (ξ + q1 ), . . . , v n (ξ + q n )).

The dependence of g i on ξ is omitted for the sake of simplicity. Since g i ∈ C2 , then     n   ∂g ( w + v ) i q q 2 1+ ω g i (w q + v q + v¯ q ) − g i (w q + v q ) − v¯q j    ≤ C|v¯ | ≤ Cv¯ α . ∂u   j j =1 (3.77)

146 | 3 Complex patterns and attractors for reaction-diffusion systems Here, C denotes any constant independent of v, v¯ , and q, and the norm v¯ α is sup­ posed to be sufficiently small. We now consider (3.76) for v ∈ E α fixed. We have     n   ∂g ( w + v ) i q q ¯ j (w q j + v q j )q S(q, v) ≡   g i (w q+q¯ + v q+q¯ ) − g i (w q + v q ) − ∂u j   j =1 ≤ S1 + S2 ,

where the prime “ ” denotes the derivative with respect to z,     n   ∂g ( w + v ) i q q  θq j  S1 (q, v) = g i (w q + v q + θ q ) − g i (w q + v q ) − , ∂u   j j =1 θ = w q¯ + v q¯ − w − v,       n ∂g i (w q + v q )  ¯ j − θq j  S2 (q, v) =  (w q j + v q j )q .  ∂u j   j =1 We have the following estimates: S1 (q, v) ≤ C|θ|2 ≤ C(|w | + |v |)2 |q¯ |2 , n         w q+q¯ j − w q j − w q j q¯ j  + v q+q¯ j − v q j − v q j q ¯ j . S2 (q, v) ≤ C j =1

We note that w ∈ C2 , v ∈ E α (α > 1/2) and |v|C1+ω ≤ Cvα for some 0 < ω < 1 [167]. Hence, S2 (q, v) ≤ C(wα + vα )|q¯ |1+ω and S(q, v) ≤ C(wα + vα )|q¯ |1+ω

(3.78)

(with another value of C). Applying estimates (3.77) and (3.78), one can obtain , by the converse Taylor theo­ rem [9], that g i ∈ C1+ω (E α × R n , E). Similarly, it can be shown that h i = T i (−q i )g i ∈ C 1 + ω ( E α × R n , E ). We must note that two other terms in Fi , γ i and (w i + v i )Φ i , can be considered as above. The lemma is proved. This reduction of the original problem to equations (3.74), (3.75) is possible only in a small neighborhood of the manifold M0 . In other words, the perturbation v is sup­ posed to be small. We obtain a priori estimates of solutions of the problem (3.74), (3.75), simply repeating the constructions described in previous subsections. These estimates show, in particular, that the norm v(·, t)α is small for all t ≥ 0, provided that v(·, 0)α is small. Thus, due to a priori estimates, the solutions of (3.74), (3.75) are defined for all t > 0.

3.2 Chaotic and periodic chemical waves

| 147

3.2.11 Existence of invariant manifolds The main result of this subsection is the following theorem. Theorem 3.11. System (3.74), (3.75) has an invariant locally attracting manifold M λ for all λ sufficiently small. This manifold is a C1 -graph ˜v = V (q, λ) in E α , where 1/2 < α < 1. The global semiflow reduced to M λ is defined by the system of the ordinary differen­ tial equations q i (t) = c i − λΦ i (q, λ),

(3.79)

where the function Φ(q, λ) ∈ C1 admits the following representation: Φ(q, λ) = Φ0 (q) + Φ1 (q, λ),

(3.80)

and where ∞

˜ i (z)dz, g i ( w1 ( z + q i − q1 ) , . . . , w n ( z + q i − q n ) , z + q i ) U

0

Φ i ( q) =

(3.81)

−∞

|Φ1 (q, λ)|,

|(Φ1 ) q (q, λ)| ≤ Cλ.

(3.82)

The proof of this theorem is similar to the proof of Theorem 3.9 and we omit it.

3.2.12 Existence of periodic and chaotic waves The previous theorem allows us to apply the method of realization of vector fields [212, 213] and to prove the existence of new types of nonlinear waves. If c i = c0 , then the system of equations (3.79) can describe complex nonlinear dynamics. We start with an auxiliary result. Proposition 3.12. Let B be the unit ball in R n . Consider a system of ordinary differential equations dq = Q( q) , q ∈ R n (3.83) dt where Q(q) ∈ C1 (R n ) and q · Q( q) =

n

q i Q i < −δ 1 ,

δ1 > 0,

if q ∈ ∂B

(3.84)

i =1

i.e. the corresponding vector field on the boundary ∂B goes inside B. Then, for any posi­ tive ϵ > 0, a perturbation g(u, x), a constant K and functions f i satisfying all conditions formulated above exist such that for sufficiently small λ |K Q(·) − Φ(·, λ)|C1 (B ) < ϵ,

q ∈ B.

Moreover, c i = 0 and the unperturbed waves are standing.

(3.85)

148 | 3 Complex patterns and attractors for reaction-diffusion systems Proof. The construction has a very transparent physical interpretation. We describe the scattering of a system of slow traveling waves on a well-localized space inhomo­ geneity. This scattering leads to the formation of a complicated connected state of the waves (a similar effect arises in quantum mechanics). This connected state can move chaotically or periodically in time. We define a family of functions Q δ = Q(δ−1 q) where δ is a positive constant. Note that |Q δ |C1 (B) < cδ−1 . Let us set g δi (u 1 , . . . , u n , x) = exp(−p2 x2 /2)Q δ (−u 1 , . . . , −u n ),

(3.86)

where p = p(δ) is a large parameter such that δp−1 → 0 as δ → 0. We put f i = 2δ2 (u i − ˜ i = cosh−2 (δx) and the integral that defines Φ0i u 3i ). Then, w i = tanh(δx), c i = 0, U involves the well-localized at z = −q i function h(p, z) = exp(−p2 (z + q i )2 /2) and can be estimated by standard asymptotic methods. We then obtain ( ) * pΦ0i (q) = C(δ) Q −δ−1 tanh(−δq1 ), . . . , −δ−1 tanh(−δq n ) × + × cosh−2 (δq i ) + ϕ0 (q) ,

where ϕ0 (q) is a small correction and C(δ) is a positive coefficient. Due to the above arguments on localization of h(p, z), one has |ϕ0 (q)|C1 (B ) < cδp−1 = o(1)

δ → 0.

Using the Taylor expansion for the hyperbolic tangent, for small δ, we obtain ˜ 0 (q, δ)], Φ0i (q) = Cp−1 δ[R i (q) + ϕ ˜ 0 (q, δ) vanish as δ → 0. We define K = C(δ)p−1 , where R i = Q i (q1 , . . . , q n ) and ϕ take a sufficiently small δ and obtain (3.85). The proof is complete. Let us formulate the main result of this subsection. We use here the structural stability of hyperbolic dynamics, see [135, 228, 232] and Chapter 1. Theorem 3.13. Consider a system of differential equations dq = Q( q) , dt

q = ( q1 , . . . , q n )

defined by C1 -smooth vector field Q on the unit ball B n ⊂ Rn . Assume that this system generates a hyperbolic set Γ. Then, there exists such a perturbation g and such a choice of f i that for small λ, the wave dynamics defined by (3.30), (3.31) also generates a topo­ ˜ logically equivalent hyperbolic set Γ. This theorem is an immediate corollary of Proposition 3.12, and for the persistence of hyperbolic sets, see Theorem 1.22.

3.2 Chaotic and periodic chemical waves

| 149

In particular, it implies that polynomial systems with two components can gener­ ate nonlinear fronts with a periodic time behavior, and that polynomial systems with more than two components can generate nonlinear fronts with a chaotic time behav­ ior. The following example shows how we can obtain the Lorenz dynamics for the wave front coordinates q i . Let us recall that the Lorenz system has the form dq1 = s( q2 − q1 ) , dt

dq2 = r( q1 − q2 − q1 q3 ) , dt

dq3 = q1 q2 − σq3 . dt

Depending on the parameters s, r and σ, the solution of these equations can have periodic or chaotic behavior. To obtain generalized traveling waves with a similar behavior, we can take, for example, f_i = 2p^{-2}(u_i − u_i^3) and the perturbation g(u, x) in the following form:

g_1(u, x) = s exp(−p^4 x^2)(u_2 − u_1),
g_2(u, x) = r exp(−p^4 x^2)(u_1 − u_2 − p u_1 u_3),
g_3(u, x) = s exp(−p^4 x^2)(p u_1 u_2 − σ u_3).

For sufficiently large p > 0 and λ < λ_0(p), system (3.37) for the q_i(t) coincides with the Lorenz system up to small corrections.

These results, together with the papers [90, 273, 275, 278, 298], show that for spatially inhomogeneous perturbations g(u, x), three-component reaction-diffusion systems can generate chaotic waves, and two-component ones can generate time-periodic waves. To realize the Lorenz system by chemical waves in a spatially homogeneous medium (when g = g(u) is independent of x), we can use four reagents. To realize a periodic behavior, one can consider a three-component system. This follows from the following assertion.

Proposition 3.14. Under the assumptions of the previous proposition, for any smooth Q and any ϵ > 0, there exist a smooth vector function g(u) = (g_1(u), . . . , g_{n+1}(u)) and constants K_i such that for sufficiently small λ, the wave coordinates q_i satisfy the equations

dq_i/dt = K_i Φ_i(q_i − q_1, . . . , q_i − q_{i−1}, q_i − q_{i+1}, . . . , q_i − q_{n+1}),   q ∈ R^{n+1},   (3.87)

where i = 1, . . . , n + 1, and for each i,

|K Q_i(z_1, . . . , z_n) − Φ_i(z_1, . . . , z_n)| < ϵ,   |z| < 1.   (3.88)

The proof presented here is new, and it is simpler than in the papers [90, 273, 275]. We set f_i = 2(u_i − u_i^3). Then, w_i = tanh x, c_i = 0 and Ũ_i = cosh^{-2}(x). We use the well-known relation

tanh(a − b) = (tanh a − tanh b)/(1 − tanh a tanh b).   (3.89)

The main idea is similar to the proof of the previous proposition and is as follows. Let us assume that the i-th component g_i(u) is a well-localized function in the variable u_i and regular enough in all the remaining variables u_1, . . . , u_{i−1}, u_{i+1}, . . . , u_{n+1}. For example, one can take g_i = exp(−p(λ) u_i^2/2) h_i(u_1, . . . , u_{i−1}, u_{i+1}, . . . , u_{n+1}), where p → +∞ as λ → 0 and the h_i are smooth functions. Let us fix an index i. In the integral on the right-hand side of (3.81) that defines Φ_i^0, we make the change of variable v = tanh(x − q_i). Then, by (3.89), this integral can be transformed to the form

Φ_i^0(q) = ∫_{−1}^{1} g_i((v − z_1)/(1 − v z_1), . . . , (v − z_{i−1})/(1 − v z_{i−1}), v, . . . , (v − z_{n+1})/(1 − v z_{n+1})) dv,

where z_j = tanh(q_j − q_i). Since g_i is well-localized in v, we obtain

Φ_i^0(q) = K(λ)(g_i(−z_1, . . . , −z_{i−1}, 0, . . . , −z_{n+1}) + o(1)),   λ → 0.

This relation proves the proposition after a change of variables. To conclude this section, let us notice the following. Assume we would like to have a very complicated wave motion, say, with an attractor of fractal dimension 100. Then, the method of this section needs a system with more than 100 reagents. A natural question arises: is it possible to obtain such a complicated behavior with only two reagents? In the next sections, we shall give a positive answer to this question when we consider reaction-diffusion systems in a multidimensional domain. Whether the same is true for the one-dimensional case is as yet unknown.
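The choice f_i = 2(u_i − u_i^3) in the proof above is exactly what makes w_i = tanh x the kink profile: tanh x solves w″ + 2(w − w³) = 0. A quick finite-difference check of this identity (an illustration, not from the book):

```python
import math

# Check that w(x) = tanh x solves w'' + 2*(w - w^3) = 0, the standing-kink
# equation for the nonlinearity f(u) = 2*(u - u^3) used in the proof above.
step = 1e-4
def residual(x):
    w = math.tanh(x)
    # central second difference for w''
    w2 = (math.tanh(x + step) - 2.0 * w + math.tanh(x - step)) / (step * step)
    return w2 + 2.0 * (w - w ** 3)

worst = max(abs(residual(j / 10.0)) for j in range(-30, 31))
print(worst)   # only finite-difference noise remains
```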

3.3 Complicated large time behavior for reaction-diffusion systems of the Ginzburg–Landau type

In the previous section and in Chapter 2, we have considered examples of systems that generate maximally complex, in a sense, large time dynamical behavior. All of these systems are complex because they involve many reagents (if they describe chemical kinetics), or many neurons, or a number of genes, and numerous species for the Lotka–Volterra case.

The goal of this section is to describe reaction-diffusion systems with only two components, similar to the fundamental Ginzburg–Landau model, which exhibit a complicated large time behavior. This system can work as a natural computer. Notice that the Ginzburg–Landau functional appears as a normal form for many physical models at the bifurcation point [98]. Therefore, it is an appropriate model for natural computing theory. The system considered in this section can realize all possible neural networks and all possible dynamics, and can simulate all Turing machines. This property may be useful, for example, in robotics, since the realization of different dynamics allows us to perform


any, even complicated, movements, and also for many other applications. To create a computationally effective and physically feasible model, we should satisfy the following main conditions:
(A) all interactions should be local (to provide parallel computations);
(B) the system should be based on fundamental physical effects;
(C) the system must support a number of local attractors (stable dynamics);
(D) attractor control should be performed by the boundary conditions (a boundary control).

The model that we are going to study in the coming subsections satisfies all of these conditions. Some important steps toward such a model have been made in the works [67, 81, 280, 287, 288]. Let us mention some physical ideas behind the formal mathematical arguments. All of the two-component systems considered here and in the next section are defined on bounded domains in R^d for d ≥ 2, i.e. we consider the multidimensional case. The complexity appears as a result of spatial inhomogeneities in a chemical medium, whereas earlier, the complexity occurred due to the presence of a number of reagents which interact in a special way. It is unknown how to extend the results of the next two sections to the case d = 1, as it seems that only in multidimensional media can we control the large time behavior by spatial inhomogeneities.

We start with the known Hopfield system, playing a key role in neural network theory [81]:

dq̃_i/dt = ω^{-1} Σ_{j=1}^{N} K_ij σ(ω q̃_j) − q̃_i + ω^{-1} h_i,   (3.90)

where the x_i = ω q̃_i are neuron states, N is the neuron number, σ(z) is a fixed increasing smooth function such that lim_{x→±∞} σ(x) = a_±, a_+ > a_− (for example, one can take σ(x) = tanh x), K is a matrix determining the neuron interaction, the h_i are coefficients (thresholds), and ω is a large parameter. We consider N, ω, the vector h and the matrix K as parameters, which we adjust in a special way. In particular, we take ω ≫ 1.

Comment. We can make the change ω q̃_i = x_i, which allows us to eliminate ω from (3.90) and obtain the usual Hopfield system. Then, it is clear that the attractor of this system lies in the region D = {x ∈ R^N : |x_i| < C(N, K, h)}. Therefore, the attractor of (3.90) lies in

D_ω = {q̃ ∈ R^N : |q̃_i| < C(N, K, h) ω^{-1}}.   (3.91)

In Section 2.3.6, it is shown that system (3.90) can generate, within arbitrary precision ϵ, any time bounded trajectories q(t), t ∈ (0, T), for some variables q_l, l = 1, 2, . . . , n. These variables can be expressed through the x_i. These results show that the Hopfield system possesses the property (C); however, it does not satisfy (A), since the interaction between neurons is nonlocal.
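Property (C) for the Hopfield system can be seen in a small numerical sketch (an illustration, not from the book). It works in the unscaled variables x_i = ω q̃_i, takes σ = tanh with zero thresholds, and stores two orthogonal ±1 patterns in a symmetric coupling matrix; different initial states then relax to different rest points:

```python
import math

# Two orthogonal ±1 patterns stored Hebbian-style; relaxation x' = K*tanh(x) - x
# (unscaled Hopfield variables, sigma = tanh, zero thresholds).
N, beta, dt, steps = 16, 2.0, 0.1, 400
xi1 = [1.0] * N                                          # pattern 1: all ones
xi2 = [1.0 if i % 2 == 0 else -1.0 for i in range(N)]    # pattern 2: alternating
K = [[beta / N * (xi1[i] * xi1[j] + xi2[i] * xi2[j]) for j in range(N)]
     for i in range(N)]

def relax(x):
    for _ in range(steps):     # explicit Euler relaxation
        s = [math.tanh(c) for c in x]
        x = [x[i] + dt * (sum(K[i][j] * s[j] for j in range(N)) - x[i])
             for i in range(N)]
    return x

a = relax([0.5 * c for c in xi1])    # started near pattern 1
b = relax([0.5 * c for c in xi2])    # started near pattern 2
print([1 if c > 0 else -1 for c in a])   # the sign pattern of a is xi1
```

Two distinct local attractors coexist: each initial state is recovered as a stable rest point aligned with the corresponding stored pattern.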


An idea of how to satisfy (A) and (B) can be found in [67]; another method, in [287, 288]. We consider a spatially extended system defined by a reaction-diffusion system with two components and space inhomogeneous coefficients (it describes an interaction of two chemical reagents in an inhomogeneous medium). The attractor control can be performed by the functions that define the space inhomogeneities. However, the models [67, 287, 288] do not satisfy (D). The main results of this section are as follows. First, a new model is proposed, based on physically fundamental equations, where a boundary attractor control can be realized. Implementation of the Hopfield networks can be done by explicit formulas. Second, we show that these systems can simulate all Turing machines. Third, we describe an explicit method of attractor control.

3.3.1 Mathematical model and physical background

We consider equations having the following form, namely,

u_t = ϵ^2 Δu + (1/2)(u − u^3) + ϵ^r (a v + b),   (3.92)
v_t = d Δv − b_0 v,   (3.93)

where u(x, y, z, t), v(x, y, z, t) are unknown functions defined in the box Ω = {0 < x < L_1, 0 < y < L_2, 0 < z < h}, and a(x, y, z), b(x, y, z) are some smooth functions that define space inhomogeneities. Let ϵ > 0 be a small parameter, b_0 a constant independent of ϵ, and d > 0 and r > 0 parameters. For u, we set the Neumann no-flux boundary conditions at the boundaries z = 0, z = h:

u_z(x, y, z)|_{z=0,h} = 0.   (3.94)

Here, u_z denotes the derivative with respect to z. Let us assume, for simplicity, that the whole structure is periodic in x, y, i.e. we seek solutions u, v such that

u(x + L_1, y, z, t) = u(x, y, z, t) = u(x, y + L_2, z, t),
v(x + L_1, y, z, t) = v(x, y, z, t) = v(x, y + L_2, z, t).

Naturally, then a and b are also periodic in x, y with the same periods. For v, we set the nonlinear boundary conditions

v_z(x, y, z, t)|_{z=h} = β(x, y, ϵ)(1 − u^2),   (3.95)
v_z(x, y, z, t)|_{z=0} = 0,   (3.96)

where β is a smooth function. We perform the attractor control by ϵ and β. Notice that condition (3.95) couples equations (3.92) and (3.93). Moreover, in the case a = 0, β = 0, equation (3.92) becomes a model which appears in many physical, biological and chemical applications and may describe layered pattern formation [5].


To understand the main physical background of (3.92), (3.93), let us first consider the simplest case, where a = 0 and L_1 = +∞ (an infinite medium). Then, equations (3.92), (3.93) are independent, and (3.92) has kink solutions u = tanh(ϵ^{-1}(x − q)) localized at q. Moreover, following [43, 261], we can find solutions describing kink chains (see Subsection 3.1.2). Namely, let us assume that q_1 < q_2 < · · · < q_N are kink positions, q̄_i = (q_{i−1} + q_i)/2, and one sets formally q_{N+1} = q_1. Then, for x ∈ (q̄_i, q̄_{i+1}), one has u = (−1)^i tanh(ϵ^{-1}(x − q_i)) + ũ(x, q, ϵ). The function ũ is an exponentially small correction, |ũ| < C exp(−c/ϵ). The positions q_i are very slowly evolving functions of time, dq_i/dt = O(exp(−c/ϵ)), having a simple physical interpretation. This time evolution is generated by an exponentially small kink interaction (kinks interact by their exponentially small tails). As was said above, this kink pattern is metastable, but it exists for an exponentially long time while |q_i − q_{i+1}| ≫ ϵ.

Let us now investigate another situation. We consider the term a v in (3.92) as a given external field. We suppose that v = O(1) and that the number r is large enough, for example, r > r_0, where r_0 is a critical value. Following standard estimates [43, 287], we obtain two basic facts:
(1) the kinks still conserve their structure, but now ũ(x, y, z, t) = O(ϵ^{r_1});
(2) the evolution of q is defined by the space inhomogeneities and the field v; the contribution of the direct kink-kink interaction is exponentially small and can be removed.

As a result, one has

M_0 dq_i/dt = ϵ^r ρ_i F_i(q, ϵ),   (3.97)

where r > 0 and ρ_i = (L_1 L_2)^{-1}(−1)^i. Up to small O(ϵ) corrections, the functions F_i are defined by

F_i = ∫_Ω (a(x, y, z) v(x, y, z, t) + b(x, y, z)) cosh^{-2}((x − q_i)/ϵ) dμ + o(1),   ϵ → 0.   (3.98)

Here, dμ = dx dy dz, the function θ_i = cosh^{-2}((x − q_i)/ϵ) is a Goldstone mode associated with the i-th kink, and the constant M_0 is a kink mass, which is independent of ϵ and q:

M_0 = ϵ^{-1} ∫_{−∞}^{∞} θ_i^2 dx = 4/3.

For small ϵ, relation (3.98) can be simplified, since cosh^{-2}((x − q_i)/ϵ) is localized in a small O(ϵ)-neighborhood of q_i:

F_i = 2ϵ ∫_0^{L_2} ∫_0^{h} (a(q_i, y, z) v(q_i, y, z, t) + b(q_i, y, z)) dy dz + o(ϵ),   ϵ → 0.   (3.99)
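The passage from (3.98) to (3.99) replaces the sech²-weighted integral by 2ϵ times the integrand evaluated at the kink position. This localization step can be checked by direct quadrature with a smooth test function (an illustration, not from the book; a(x) = cos x stands in for the x-dependence of the integrand):

```python
import math

# Check of the localization step (3.98) -> (3.99): for smooth a(x),
#   ∫ a(x) sech^2((x - q)/eps) dx  ≈  2*eps*a(q)   as eps -> 0.
# Here a(x) = cos x is an arbitrary smooth test function (an assumption).
def sech2(t):
    c = math.cosh(t)
    return 1.0 / (c * c)

def weighted_integral(eps, q, n=50000, half_width=30.0):
    lo = 	q - half_width * eps
    step = 2.0 * half_width * eps / n          # midpoint rule
    return sum(math.cos(lo + (k + 0.5) * step)
               * sech2((lo + (k + 0.5) * step - q) / eps)
               for k in range(n)) * step

q, eps = 0.7, 1e-2
val = weighted_integral(eps, q)
approx = 2.0 * eps * math.cos(q)
print(val, approx)   # agree to a relative accuracy of O(eps^2)
```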


154 | 3 Complex patterns and attractors for reaction-diffusion systems Relation (3.99) shows that in this approximation, the kinks move independently under the external field. Notice that parameter r defines the speed of kinks, and finally, a dependence of the performing rate of our natural computer on ϵ. Therefore, the critical value r0 is an im­ portant characteristic since it determines the maximal possible rate of kink evolution. If r < r0 , the kink mechanism does not work. Indeed, then the kinks lose their planar structures, and it becomes impossible to describe them by u ≈ tanh((x − q)ϵ−1 ) since the kink fronts become unstable. In this case, we must take into account the kink front curvatures.

3.3.2 Control of kink dynamics The main idea of the control mechanism is based on (3.97), (3.99) and a connection be­ tween u, v via boundary conditions (3.95). If β = 0, then, since kinks move slowly, one can, for large times, express v through u = U (x, y, z, q, ϵ), where U is a kink chain so­ lution described above. One then has v = V (x, y, z, q, ϵ). As a result, after substitution of such a V in (3.99), we obtain a system of ordinary differential equations describing a nonlocal kink interaction. This system has a structure like a Hopfield system (3.90). We shall describe these mathematics in the coming section. Here, we notice only that this mathematical construction admits a simple physical interpretation. Boundary condi­ tions (3.95) induce a nonlocal interaction between kinks. This interaction is stronger than the direct kink-kink one. It is generated by a feedback: kinks act on the field v, and the field v, in turn, acts on kinks. To understand attractor control, let us consider first the time evolution of the sec­ ond component v. This evolution is defined by (3.93), (3.95). Since the u-evolution is slow, we can, following the standard procedure, replace (3.93), (3.95) by an elliptic boundary value problem, where t is only involved as a parameter: dΔV − b 20 V = 0, V z (x, y, z)|z=h = R,

(3.100) V z (x, y, z, t)|z=0 = 0,

(3.101)

where R(x, y, q) = β (x, y)(1 − U 2 ),

U = U (x, y, z, q, ϵ).

Then, v = V + O(ϵ) and V depends on t only through q(t) since the kink chain so­ lution depends on slowly evolving kink coordinates q i . Denote by G(x, y, z, x0 , y0 ) the boundary Green function of problem (3.100), (3.101). The function G satisfies (3.100) and (3.101) with R = δ(x − x0 )δ(y − y0 ). Then, one has  G(x, y, z, x0 , y0 )R(x0 , y0 , q)dx0 dy0 . (3.102) V (x, y, z, q) = L1 L2
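The structure of the elliptic problem (3.100), (3.101) is already visible in its one-dimensional analogue in z: d V″ − b_0² V = 0 with V′(0) = 0 and V′(h) = R, whose closed form is V(z) = R cosh(kz)/(k sinh(kh)) with k = b_0/√d. A finite-difference residual check of this formula (an illustration, not from the book; the parameter values are arbitrary):

```python
import math

# 1D analogue in z of (3.100)-(3.101): d*V'' - b0^2*V = 0, V'(0) = 0, V'(h) = R.
# Candidate closed form: V(z) = R*cosh(k*z)/(k*sinh(k*h)), k = b0/sqrt(d).
d, b0, h, R = 0.5, 1.3, 2.0, 0.7          # arbitrary sample values
k = b0 / math.sqrt(d)

def V(z):
    return R * math.cosh(k * z) / (k * math.sinh(k * h))

dz = 1e-4
def residual(z):   # interior residual d*V'' - b0^2*V via central differences
    v2 = (V(z + dz) - 2.0 * V(z) + V(z - dz)) / (dz * dz)
    return d * v2 - b0 * b0 * V(z)

worst = max(abs(residual(0.1 + 0.01 * j)) for j in range(180))
slope_h = (V(h) - V(h - dz)) / dz      # one-sided V' at z = h, should be ~R
slope_0 = (V(dz) - V(0.0)) / dz        # one-sided V' at z = 0, should be ~0
print(worst, slope_h, slope_0)
```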


Since 1 − U^2 is well-localized at the points x = q_j, we can simplify (3.102), removing small terms:

V(x, y, z, q) = Σ_{j=1}^{N} W_j(x, y, z),   (3.103)

where

W_j = 2ϵ ∫_0^{L_2} G(x, y, z, q_j, y_0) β(q_j, y_0, ϵ) dy_0 + o(ϵ).

Substituting this relation into (3.99), one obtains the following system for the kink coordinates:

dq_i/dt = c̄_0 ρ_i ϵ^{r+1} ( Σ_{j=1}^{N} F_ij(q) + g_i(q_i) ),   (3.104)

where

F_ij = ∫_0^{L_2} ∫_0^{L_2} ∫_0^{h} G(q_i, y, z, q_j, y_0) a(q_i, y, z) β(q_j, y_0, ϵ) dy_0 dy dz,   (3.105)

and

g_i = ∫_0^{L_2} ∫_0^{h} b(q_i, y, z) dy dz + o(1).   (3.106)

Now, our goal is to show that
(I) this system, by a choice of β, can be reduced to the Hopfield system (3.90);
(II) we can completely control the neuron interconnections K_ij in (3.90) by β.

The main idea is as follows. We choose the parameters in such a way that the kinks move inside small ω^{-1}-neighborhoods of some points x̄_1 < · · · < x̄_j < · · · < x̄_N. Then, kink collisions are not possible, and the direct interaction between the kinks is exponentially small in ϵ (as described in Subsection 3.1.2). The nondirect, nonlocal interaction is given by equation (3.104). So, we assume that

d_0(ϵ) = inf_{i,j} |x̄_i − x̄_j| > ϵ^κ,   κ ∈ (0, 1/2).

These conditions help us to exclude the direct kink-kink interaction, which has the order O(exp(−c ϵ^{κ−1})). Let us set β(q_j, y) = σ(ω(q_j − x̄_j)) μ_j(y), where the μ_j are new unknown functions. Remember that ω is a large parameter involved in (3.90). Then,

F_ij = σ(ω(q_j − x̄_j)) ∫_0^{L_2} ∫_0^{L_2} ∫_0^{h} a(q_i, y, z) G(q_i, y, z, q_j, y_0) μ_j(y_0) dy_0 dy dz.   (3.107)

Let us now consider the Hopfield system with a sufficiently large ω. According to the comment, the quantities q̃_j = q_j − x̄_j lie in the domain D_ω defined by (3.91). Therefore,


the coordinates q_i can be presented as q_i = x̄_i + ω^{-1} q̃_i, where |q̃| < C, and C is independent of ω and ϵ. This means that the q_i are localized at the x̄_i. Using this localization property, one has the following asymptotics for F_ij:

F_ij = K̄_ij(μ) σ(ω q̃_j) + O(ω^{-1}),   (3.108)

where

K̄_ij(μ) = ∫_0^{L_2} ∫_0^{L_2} ∫_0^{h} G(x̄_i, y, z, x̄_j, y_0) a_i(y, z) μ_j(y_0) dy_0 dy dz.

Here, a_i(y, z) = a(x̄_i, y, z). To control K̄_ij, we have to resolve the N systems of N equations with respect to the unknown functions μ_j:

K̄_ij(μ_1(·), . . . , μ_N(·)) = K_ij.   (3.109)

Since all the μ_j are mutually independent, these systems are also independent. If we use the generalized Hopfield substitution (2.44), for instance, setting K = YZ, systems (3.109) take the form

∫_0^{L_2} ∫_0^{L_2} ∫_0^{h} G(x̄_l, y, z, x̄_j, y_0) a_l(y, z) μ_j(y_0) dy_0 dy dz = Σ_{k=1}^{p} Y_lk Z_jk,   (3.110)

where a l , μ j are unknown functions, l, j = 1, 2, . . . , N. In the coming subsection, we show that these systems are resolvable in an explicit way. Moreover, we choose b in order to obtain g i = −q˜ i . It is easy to show that it is possible.
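Numerically, a factorization K_lj = Σ_{k=1}^p Y_lk Z_jk of a rank-p interconnection matrix (the shape required by the substitution K = YZ) can be produced, for instance, by a truncated SVD. This is an illustrative choice, not the book's substitution (2.44):

```python
import numpy as np

# Factorizing an interconnection matrix as K[l, j] = sum_k Y[l, k]*Z[j, k]
# (in matrix form, K = Y @ Z.T). A truncated SVD is one convenient way to get
# rank-p factors; an illustration, not the book's substitution (2.44).
rng = np.random.default_rng(0)
N, p = 8, 2
K = rng.standard_normal((N, p)) @ rng.standard_normal((p, N))   # exactly rank p

U, s, Vt = np.linalg.svd(K)
Y = U[:, :p] * s[:p]       # N x p factor
Z = Vt[:p].T               # N x p factor
err = float(np.max(np.abs(K - Y @ Z.T)))
print(err)   # at machine precision, since K has exact rank p
```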

3.3.3 Control of interactions in Hopfield equations

Let us consider equations (3.110). Let us use the Fourier representation of the Green function G:

G(x̄_l, y, z, x̄_j, y_0) = Σ_{k_1=0}^{+∞} Σ_{k_2=0}^{+∞} exp(i k_1 γ_1 (x̄_l − x̄_j) + i γ_2 k_2 (y − y_0)) sinh(k̄ z)/sinh(k̄ h),

where γ_i = 2π L_i^{-1}, k̄^2 = k_1^2 + k_2^2 + b_0^2 and i = √−1. Let us set

â_l(k_2, k̄) = ∫_0^{h} ∫_0^{L_2} exp(i γ_2 k_2 y) a_l(y, z) sinh(k̄ z) dy dz,

and let μ̂_j(k) be the Fourier coefficients of μ_j,

μ̂_j(k) = ∫_0^{L_2} exp(i k γ_2 y_0) μ_j(y_0) dy_0.   (3.111)


Equation (3.110) can then be rewritten as

Σ_{k_1=0}^{+∞} Σ_{k_2=0}^{+∞} [exp(i k_1 (x̄_l − x̄_j)) / sinh(√(k_1^2 + k_2^2 + b_0^2) h)] μ̂_j(k_2) â_l(k_2, √(k_1^2 + k_2^2 + b_0^2)) = Σ_{k=1}^{p} Y_lk Z_jk.   (3.112)

Equations (3.112) can be resolved explicitly if we assume that h ≫ 1 is a large parameter. Then,

Σ_{k_1=0}^{+∞} Σ_{k_2=1}^{+∞} exp(i k_1 (x̄_l − x̄_j)) / sinh(h √(k_1^2 + k_2^2 + b_0^2)) ≈ sinh(k̃(k_2) h)^{-1},

where k̃(k_2) = √(k_2^2 + b_0^2). In this case, we set

μ̂_j(k) = Z_j(k),   k = 1, . . . , p,   (3.113)

and

â_l(k_2, k̃(k_2)) = Y_l(k_2),   k_2 = 1, . . . , p.   (3.114)

Notice that a rigorous justification of these asymptotic constructions can be done by a complicated, but straightforward, procedure based on the invariant manifold theory (see the next section, where more general systems are considered).
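The large-h step above keeps only the k_1 = 0 term of the sum, because sinh(h√(k_1² + k_2² + b_0²)) grows exponentially with k_1. A direct numerical check of this approximation (an illustration, not from the book; the values b_0 = k_2 = 1, h = 30 are arbitrary samples):

```python
import math

# Check the large-h reduction: for h >> 1,
#   sum_{k1>=0} cos(k1*delta)/sinh(h*sqrt(k1^2 + k2^2 + b0^2)) ≈ 1/sinh(h*ktilde),
# with ktilde = sqrt(k2^2 + b0^2): every k1 >= 1 term is exponentially smaller.
b0, k2, h, delta = 1.0, 1.0, 30.0, 0.4     # arbitrary sample values
ktilde = math.sqrt(k2 * k2 + b0 * b0)
lead = 1.0 / math.sinh(h * ktilde)         # the k1 = 0 term
total = sum(math.cos(k1 * delta)
            / math.sinh(h * math.sqrt(k1 * k1 + k2 * k2 + b0 * b0))
            for k1 in range(20))           # 20 terms: the rest underflow anyway
rel = abs(total / lead - 1.0)
print(rel)   # exponentially small in h
```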

3.3.4 Implementation of complicated dynamics and Turing machines

This result can be applied to implement Turing machines in our model. We use the beautiful results of [30, 142]. Let us consider the following system with discrete time:

X_i(t + 1) = σ_L( Σ_{j=1}^{m} w_ij X_j(t) − h_i ),   (3.115)

where X = (X_1, . . . , X_m) and σ_L is a piecewise linear sigmoidal function such that σ_L(S) = 0 for S < 0, σ_L(S) = S for S ∈ [0, 1] and σ_L(S) = 1 for S > 1. Such systems can simulate all Turing machines; even for n = 2, it is sufficient to adjust W and h, see [30] for details. For a prescribed Turing machine, let us adjust the w_ij and h_i. Let us set n = m + 1 in (2.337). Then, one can find a dynamical system (2.337) such that this system has a Poincaré map defined by (3.115), up to small corrections.
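A minimal implementation of the discrete-time system (3.115) (an illustration, not from the book: the weights and thresholds below are arbitrary sample values and do not encode any particular Turing machine):

```python
# The piecewise linear sigmoid of (3.115) and its iteration map.
# Weights/thresholds are arbitrary sample values (assumptions), not a Turing machine.
def sigma_L(s):
    return 0.0 if s < 0.0 else (s if s <= 1.0 else 1.0)

def step(X, w, thresholds):
    # X_i(t+1) = sigma_L( sum_j w_ij * X_j(t) - h_i )
    return [sigma_L(sum(wij * xj for wij, xj in zip(row, X)) - hi)
            for row, hi in zip(w, thresholds)]

w = [[0.5, 0.5], [1.0, -1.0]]
thresholds = [0.0, -0.25]
X = [0.2, 0.9]
for _ in range(5):
    X = step(X, w, thresholds)
print(X)   # iterates stay in [0, 1]^2 by construction of sigma_L
```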

3.3.5 Memory and performance rate

The problem of how many patterns can be stored by the Hopfield system has received a great deal of attention from physicists and mathematicians during the last decades.


For mathematically rigorous results, see [264]. For (3.90), the main results concern the symmetric case K_ij = K_ji. In this case, it is well known that the number of patterns Pat_H(N) generated by the Hopfield system is αN, where α > 0 is a small coefficient independent of N for large N. In the symmetric case, all local attractors of (3.90) are rest points (equilibria), but if the matrix K is not symmetric, all structurally stable attractors can appear in the dynamics of the Hopfield system [280, 288] when N, K, h run over all possible values. Naturally, the fractal dimensions n of these attractors are much less than N. Let us fix a natural n.

Let us denote by Pat_GL(n, L) the number of topologically different local attractors of dimension n that can be stored by system (3.92), (3.93). Since the kink number N that can be embedded in the system admits the rough estimate O(L/ϵ), one can expect that in our case, there is an estimate Pat_GL(n, L) = O(L/ϵ). It is clear that Pat_GL(n, L) > C(n)L/ϵ, where C(n) is a constant decreasing with n. One can suppose as well (although a rigorous proof is absent as of yet) that similar estimates hold even for, in a sense, "random" a(x, y, z) and ρ_j(y). Notice, however, that if the memory grows as ϵ → 0, the performance rate is a decreasing function of ϵ, since the kink motion speed is proportional to ϵ^{r_0}, where r_0 is a constant. Therefore, we observe here a trade-off typical for many algorithms, connecting the memory and the performance rate: increasing the memory leads to an increasing algorithm running time.

Finally, a nonlinear system based on fundamental physical models can store a number of complicated attractors and patterns, and may be useful for robotics problems. We do not concern ourselves here with training procedures for such a system. One can expect that gradient descent algorithms can be useful for it.
The model proposed uses effects of kink propagation in dissipative systems; this is a slow process, and the corresponding performance rate can be small. However, the physical mechanism of the model is quite general and is based on a feedback created by the interaction between well-localized modes (domain walls, interfaces) and some regular external field. This feedback creates a nonlinear interaction and nontrivial dynamics.

3.4 Reaction-diffusion systems realizing all finite dimensional dynamics

3.4.1 Introduction

In this section, we consider a general approach to the problem of chaotic large time behavior for reaction-diffusion systems. These systems play an important role in many applications, and exhibit a rich variety of phenomena in physical, chemical and biological applications [108, 196]. The idea that dissipative dynamical systems associated with fundamental models of physics, chemistry and biology can generate a chaotic dynamics was pioneered in the seminal work of D. Ruelle and F. Takens [235]. In this


section, we prove the existence of chaotic large time behavior for a large class of reaction-diffusion systems. The criterion for chaos existence is simple and has a transparent chemical interpretation. Let us recall that Korzuchin's theorem ([148, 330] and Section 2.6) asserts that in a homogeneous chemical reactor with a sufficient number of reagents, we can create any given dynamical behavior within a bounded time interval, and any structurally stable behavior for all positive times.

In this section, we prove the same assertion for generic reaction-diffusion systems with only two components, but in an inhomogeneous reactor. Similarly to the previous section, we create complexity by spatial inhomogeneities, whereas the nonlinear reagent interaction is fixed. To outline the main ideas of this section, let us first recall some facts. If general regularity assumptions hold, initial boundary value problems generated by reaction-diffusion systems define local semiflows. Under some conditions, they have finite dimensional global attractors and invariant manifolds. Upper estimates of the attractor and invariant manifold dimensions have been obtained in many works [18, 50, 61, 77, 78, 101, 108, 129, 155, 168, 170, 306]. However, the attractor structure and large time behavior can be described, to some extent, only for gradient and monotone semiflows, see [101, 108, 111–114, 173, 256, 266]. Strongly monotone semiflows have a striking property: if all semiorbits have compact closures, then convergence to equilibria is, in a sense, "generic." Semiflows associated with reaction-diffusion equations are strongly monotone, and the large time behavior of solutions is well-studied. The first result in this direction belongs, probably, to T. Zelenyuk [329], and by now we have a number of fundamental results (see [211] for an overview).
They show that chaotic attractors are forbidden for semiflows generated by generic quasilinear parabolic equations: almost all trajectories converge to periodic cycles or equilibria (the monotone case), or converge to equilibria (the gradient case). For reaction-diffusion equations with polynomial nonlinearities, all the trajectories converge to equilibria [252], see Theorem 1.6. For some quasilinear parabolic equations of a special form, one can prove the existence of chaotic behavior [211]. It is shown by the RVF method developed in the works [212–214, 219, 220, 236], which is also the main technical tool in this section. We follow the general schema of RVF application stated in Section 2.2.

In the works [280, 288], some reaction-diffusion systems with two components have been investigated. It was shown, by the RVF method, that these systems can generate all structurally stable semiflows, and that the corresponding invariant manifolds are stable. However, the systems studied in these papers involve nonpolynomial nonlinearities. Summing up these facts, we conclude that until now, there is no general theory concerning the large time dynamics of general reaction-diffusion systems. It is difficult to hope that such a theory could be constructed, taking into account the richness of phenomena and effects that reaction-diffusion systems can exhibit. Moreover, the most interesting cases occur for nongradient and nonmonotone systems.

Nonetheless, a combination of two ideas allows us to shed light on the problem of complicated large time behavior for general reaction-diffusion systems. The first idea is the aforementioned realization of vector fields method, and the second one is inspired by the classical Turing paper [272], where it was shown that two-component reaction-diffusion systems can create spatially periodic patterns if the diffusion coefficients of the reagents are different. The most interesting case is presented by the so-called shadow systems, when one reagent (v) diffuses much faster than the other one (u). The shadow systems are well-studied (see, for example, [103, 138]). Such systems occur in many biochemical applications, where substrates diffuse much faster than enzymes (proteins). They have the form

u_t = d Δu + f(u, v) − λ_1 u + η_1(x, y),   (3.116)
v_t = D Δv + g(u, v) − λ_2 v + η_2(x, y),   (3.117)

where 0 < d ≪ D and the λ_i are coefficients. The nonlinearities f, g may be polynomials or any sufficiently smooth functions. We consider these systems in a rectangle Ω ⊂ R^2, (x, y) ∈ Ω, and set some natural boundary conditions for u, v. The functions η_i are spatially inhomogeneous external fluxes.

We show that the known results on chaos existence can be essentially extended, and state a general theorem on the realization of vector fields for systems (3.116), (3.117). The main sufficient conditions, which provide chaos existence, can be formulated as follows: there is a nondegenerate root (u_∗, v_∗) of the derivative f_v(u, v), and g_u(u_∗, v_∗) ≠ 0. Notice that monotone and gradient systems do not satisfy these conditions. On the other hand, if the derivatives f_v and g_u have no roots, then system (3.116), (3.117) generates a strongly monotone semiflow. Moreover, the condition f_v(u_∗, v_∗) = 0 for some (u_∗, v_∗) has a transparent chemical (biological) interpretation: it means that the reagent v is neither an inhibitor nor an activator for u.

In the RVF method for (3.116), (3.117), we use the coefficients D, d, λ_i and the functions η_i(x, y) as the parameter P. The nonlinearities f, g satisfying the aforementioned conditions are fixed. We show that for each n and ϵ > 0, all n-dimensional C^1-smooth vector fields on the unit ball B^n can be realized with accuracy ϵ by semiflows induced by (3.116), (3.117). To obtain more and more complicated dynamics, we decrease d, increase D and adjust appropriate λ_i, η_i(x, y). The proof is quite straightforward and follows the key idea of attractor theory [101, 108, 129, 155, 265]: to extract a finite number of "main modes" X = (X_1, X_2, . . . , X_n), which, in a sense, capture the whole dynamics. This reduction leads to a system of ordinary differential equations defined by a smooth vector field Q(X) on a locally invariant and locally attractive manifold.
This field is quadratic in X (up to a small perturbation). A new technical point is an application of the RVF method for such quadratic systems (Chapter 2). Notice that the main result of this section admits a transparent interpretation: in systems of chemical kinetics, complicated large time dynamics can be produced by space inhomogeneous external sources. In a sense, spatial complexity of external

3.4 Reaction-diffusion systems realizing all finite dimensional dynamics

|

161

sources can be transformed to the temporal complexity of dynamics. A sufficiently general open system is capable of generating practically arbitrary complicated dy­ namics as a result of large variations in external fluxes. Notice that biological systems transform a complicated spatial information stored in DNA into a complicated dynam­ ics and they typically involve fast and slow diffusing reagents.

3.4.2 Statement of the problem

We consider the system

v_t = D Δv + g(u, v) + η_1(x, y),   (3.118)
u_t = d Δu + f(u, v) + η_2(x, y),   (3.119)

where u = u(x, y, t), v = v(x, y, t) are unknown functions defined on Ω × {t ≥ 0}, Ω is the strip (−∞, ∞) × [0, h] ⊂ R^2, d, D > 0 are diffusion coefficients, and the η_i(x, y) are smooth functions. We assume

g(u, v), f(u, v) ∈ C^3.   (3.120)

The initial conditions are given by

v(x, y, 0) = v_0(x, y),   u(x, y, 0) = u_0(x, y),   (3.121)

where u_0, v_0 ∈ C(Ω). Let us formulate the boundary conditions. To simplify some relations below, let us assume that the functions u, v are 2π-periodic in x:

v(x, y, t) = v(x + 2π, y, t),   u(x, y, t) = u(x + 2π, y, t),   (3.122)

and that η_i, u_0, v_0 are also 2π-periodic in x. At the boundaries y = 0, y = h, let us set the zero Dirichlet boundary conditions for v,

v(x, h, t) = v(x, 0, t) = 0,   (3.123)

and the zero Neumann boundary conditions for u,

u_y(x, y, t)|_{y=h, y=0} = 0.   (3.124)

We also consider the more general systems

v_t = D Δv + g(u, v) + ξ(x, y, w),   (3.125)
u_t = d Δu + f(u, v) + η(x, y, w),   (3.126)
w_t = Δw + z(u, v, w),   (3.127)

where η ∈ C^∞ and z ∈ C^3 are smooth functions. For u, v, we set the same initial and boundary conditions (3.121)–(3.124). For w, one has

w(x, y, 0) = w_0(x, y),   w_0 ∈ C(Ω),   (3.128)
w_y(x, y, t)|_{y=h, y=0} = 0,   w(x, y, t) = w(x + 2π, y, t).   (3.129)

In the coming subsection, we introduce function spaces and some auxiliary definitions.
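Before introducing the function spaces, note that IBVP (3.118)–(3.124) is straightforward to simulate. The sketch below (an illustration, not from the book: toy nonlinearities and fluxes, a coarse explicit scheme) integrates a small 2π-periodic-in-x grid with the Dirichlet condition for v and the Neumann condition for u:

```python
import math

# Coarse explicit finite-difference sketch of (3.118)-(3.119): 2π-periodic in x,
# v = 0 at y = 0 and y = h (Dirichlet), u_y = 0 there (Neumann). The nonlinearities
# and fluxes are toy assumptions, not the book's: f = v - u^3, g = u - v,
# eta1 = sin x, eta2 = cos x, with d = 0.05 << D = 1.0.
nx, ny, h = 16, 9, 1.0
dx, dy = 2.0 * math.pi / nx, h / (ny - 1)
d, D, dt = 0.05, 1.0, 5e-4

u = [[0.1] * ny for _ in range(nx)]
v = [[0.0] * ny for _ in range(nx)]

def lap(w, i, j, neumann):
    left, right = w[(i - 1) % nx][j], w[(i + 1) % nx][j]
    if neumann:                        # reflect across y-boundaries: w_y = 0
        down = w[i][1] if j == 0 else w[i][j - 1]
        up = w[i][ny - 2] if j == ny - 1 else w[i][j + 1]
    else:
        down, up = w[i][j - 1], w[i][j + 1]
    return ((left - 2.0 * w[i][j] + right) / dx ** 2
            + (down - 2.0 * w[i][j] + up) / dy ** 2)

for _ in range(400):
    un = [row[:] for row in u]
    vn = [row[:] for row in v]
    for i in range(nx):
        x = i * dx
        for j in range(ny):
            un[i][j] += dt * (d * lap(u, i, j, True) + v[i][j] - u[i][j] ** 3
                              + math.cos(x))
            if 0 < j < ny - 1:         # v stays pinned to 0 on the y-boundaries
                vn[i][j] += dt * (D * lap(v, i, j, False) + u[i][j] - v[i][j]
                                  + math.sin(x))
    u, v = un, vn

peak = max(abs(c) for row in u for c in row)
print(peak)   # remains O(1): the scheme is stable at this dt and does not blow up
```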

3.4.3 Function spaces

Initial boundary value problem (IBVP) (3.118)–(3.124) involves the numbers d, D and the functions η_j(x, y). Let us denote the problem parameters by P, where P = {d, D, η_1(·, ·), η_2(·, ·)}. By ⟨u, w⟩, we denote the inner scalar product in the space of measurable functions defined on Ω and 2π-periodic in x:

⟨u, w⟩ = ∫_0^{2π} ∫_0^{h} u(x, y) w(x, y) dx dy.   (3.130)

Let ‖u‖ be the corresponding L²-norm, i.e. ‖u‖² = ⟨u, u⟩. We denote by H the Hilbert space of measurable functions defined on Ω and 2π-periodic in x with bounded norms ‖·‖. We consider IBVP (3.118)–(3.124) in the space H = H × H, i.e. u ∈ H and v ∈ H. Let us introduce the standard function spaces [108]. We denote

H^α = {v ∈ H : ‖(−Δ_D)^α v‖ = ‖v‖_α < ∞},

where Δ_D is the Laplace operator under the Dirichlet boundary conditions with the definition domain

Dom Δ_D = {v ∈ H : ‖D² v‖ < ∞, v(x, 0) = v(x, h) = 0},

and α ≥ 0. Here, H^0 = H. Similarly,

H̃^α = {u ∈ H : ‖(−Δ_N)^α u‖ = ‖u‖_α < ∞},

where Δ_N is the Laplace operator under the Neumann boundary conditions with the definition domain

Dom Δ_N = {u ∈ H : ‖D² u‖ < ∞, u_y(x, y)|_{y=h, y=0} = 0}.

Let us denote the product H^α × H̃^α by H_α. Let B^n be the unit ball {q : |q| ≤ 1} in R^n centered at 0, where q = (q_1, q_2, . . . , q_n) and |q|² = q_1² + · · · + q_n².

3.4.4 Assumptions on f and g

Let us formulate conditions on f and g.

Assumption (MC). Assume that u∗, v∗ exist such that

f_u(u∗, v∗) = 0,  f_v(u∗, v∗) = 0  (3.131)

and

g_u(u∗, v∗) ≠ 0.  (3.132)

Moreover, the critical point u∗, v∗ is nondegenerate. This means that the Hessian matrix H_f, defined by

H_f = ( f_uu(u∗, v∗)  f_uv(u∗, v∗) ; f_uv(u∗, v∗)  f_vv(u∗, v∗) ),

satisfies

det H_f ≠ 0.  (3.133)

In chemical and biological applications, f and g often have a quadratic form:

f(u, v) = c11 u + c12 v + (f11/2) u² + f12 uv + (f22/2) v²,  (3.134)
g(u, v) = c21 u + c22 v + (g11/2) u² + g12 uv + (g22/2) v².  (3.135)

Then, it is natural to assume that the reagent concentrations satisfy u, v > 0 and, therefore, that the positive cone is invariant under the corresponding semiflow. In this case, assumption (MC) becomes

f0 = det H_f = f11 f22 − f12² ≠ 0,  (3.136)

u∗ = f0⁻¹(c12 f12 − c11 f22) > 0,  (3.137)

v∗ = f0⁻¹(c11 f12 − c12 f11) > 0,  (3.138)

and

c21 + g11 u∗ + g12 v∗ ≠ 0.  (3.139)

If f11 = f22 = 0, these conditions take the form

f12 ≠ 0,  c12 f12 < 0,  c11 f12 < 0.  (3.140)

If we assume that the positive cone R²_> is invariant under the dynamics

du/dt = f(u, v),  dv/dt = g(u, v),

then c12 > 0 and, therefore, conditions (3.140) reduce to c12 > 0, f12 < 0 and c11 > 0.
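The critical-point formulas (3.137)–(3.138) follow from solving the linear system f_u = f_v = 0 for the quadratic f in (3.134). A minimal numerical check, with illustrative coefficient values (not from the text):

```python
import numpy as np

# Hypothetical coefficients for the quadratic nonlinearity (3.134)
c11, c12 = 1.0, 2.0
f11, f12, f22 = 0.5, -1.5, 0.3

# Critical point of f: solve f_u = c11 + f11 u + f12 v = 0,
#                            f_v = c12 + f12 u + f22 v = 0.
H = np.array([[f11, f12], [f12, f22]])       # Hessian H_f of f
u_num, v_num = np.linalg.solve(H, [-c11, -c12])

# Closed-form expressions (3.137)-(3.138) with f0 = det H_f:
f0 = f11 * f22 - f12**2
u_star = (c12 * f12 - c11 * f22) / f0
v_star = (c11 * f12 - c12 * f11) / f0

assert abs(u_num - u_star) < 1e-12 and abs(v_num - v_star) < 1e-12
```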

Let us consider an example with cubic nonlinearities. The Brusselator is a classical system exhibiting the Turing instability and complicated spatial patterns [196]. We slightly generalize the Brusselator, setting

f(u, v) = bu − u²v + λv,  g(u, v) = a + u²v − (b + 1)u,  (3.141)

where a, b > 0 (the Brusselator arises when λ = 0). The equations f_u = f_v = 0 give

2uv = b,  u² = λ.  (3.142)
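Assumption (MC) for the generalized Brusselator can be verified directly: at u∗ = √λ, v∗ = b/(2√λ) one has f_u = f_v = 0, g_u = −1 ≠ 0 and det H_f = −4λ ≠ 0. A numerical sketch with illustrative parameter values:

```python
import numpy as np

# Check of assumption (MC) for the generalized Brusselator (3.141);
# b, lam, a are illustrative.
b, lam = 1.5, 0.25
f = lambda u, v: b * u - u**2 * v + lam * v
g = lambda u, v, a=1.0: a + u**2 * v - (b + 1) * u

# critical point of f from (3.142): u*^2 = lam, 2 u* v* = b
u0, v0 = np.sqrt(lam), b / (2 * np.sqrt(lam))

h = 1e-5
fu = (f(u0 + h, v0) - f(u0 - h, v0)) / (2 * h)
fv = (f(u0, v0 + h) - f(u0, v0 - h)) / (2 * h)
assert abs(fu) < 1e-8 and abs(fv) < 1e-8      # (3.131): f_u = f_v = 0

gu = (g(u0 + h, v0) - g(u0 - h, v0)) / (2 * h)
assert abs(gu) > 0.5                           # (3.132): here g_u = -1 != 0

# Hessian of f at (u*, v*): f_uu = -2 v*, f_uv = -2 u*, f_vv = 0
H = np.array([[-2 * v0, -2 * u0], [-2 * u0, 0.0]])
assert abs(np.linalg.det(H)) > 1e-6            # (3.133): det H_f = -4 lam != 0
```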

If λ > 0, then all assumptions (MC) hold.
This idea with a parameter λ can be applied to fairly general nonlinearities f and g as follows. Let us consider general smooth f(u, v) and g(u, v). In some cases, the dynamics of the corresponding reaction-diffusion system is strongly monotone and, therefore, stable chaos is not possible. These cases are:
(a) Cooperative systems,

f_v(u, v) ≥ 0,  g_u(u, v) ≥ 0,  (3.143)

where both variables are activators;
(b) Competitive systems,

f_v(u, v) ≤ 0,  g_u(u, v) ≤ 0,  (3.144)

in which both variables are inhibitors; and
(c)

f_v(u, v) ≥ 0,  g_u(u, v) ≤ 0  (3.145)

(or, conversely, f_v ≤ 0, g_u ≥ 0). Here, the variable v is an activator, whereas u is an inhibitor.
In all of these cases, the corresponding reaction-diffusion system

u_t = dΔu + f(u, v) − λ1 u + η1(x, y),  (3.146)
v_t = DΔv + g(u, v) − λ2 v + η2(x, y),  (3.147)

under the boundary and initial conditions (3.121)–(3.124) generates a strongly monotone semiflow [114]. In case (a), this is well known [114]. In cases (b) and (c), the changes of variables u → −u, v → −v and v → −v, respectively, reduce (b) and (c) to (a) [114].
We observe that if none of (a), (b) or (c) holds, then either f_v = 0 or g_u = 0 at some points. If we include λ_i in the parameter list, condition (3.131) becomes

f_v(u∗, v∗) = 0  (3.148)

for some u∗, v∗. Conditions (3.132) and (3.133) keep their form, and assumption (3.132) is fulfilled for generic g. Notice that assumptions (3.132), (3.133) and (3.148) admit transparent mathematical and physical interpretations. Mathematically, they mean that system (3.146)–(3.147) does not generate a strongly monotone semiflow. Moreover, notice that a generic system (3.146), (3.147) should satisfy these conditions if it does not induce a strongly monotone semiflow. From a chemical point of view, assumption (3.148) means that the variable v is neither an activator nor an inhibitor.

3.4.5 Main results

Let us formulate the main results.

Theorem 3.15. Consider IBVP (3.118)–(3.124) and assume that assumption (MC) holds. Then, the family of the local semiflows S_t(P) defined by this IBVP is maximally dynamically complex.

Corollary 3.16. Let us consider a system of differential equations

dq/dt = Q(q),  q ∈ B^n  (3.149)

defined on the unit ball B^n. Assume that

sup_{q ∈ B^n} |∇Q(q)| < 1  (3.150)

and that this field is directed inward at the sphere ∂B^n:

Q · n(q) < 0,  q ∈ ∂B^n.  (3.151)

(a) If system (3.149) generates a structurally stable (persistent under C¹ perturbations) global semiflow, then a choice of the parameters P = P(Q, n) exists such that IBVP (3.118)–(3.124) generates a local semiflow S_t(P) in H, which has a locally attracting positively invariant manifold M_n. The semiflow S_t(P)|_{M_n} restricted to this manifold is topologically orbitally equivalent to the semiflow (3.149).
(b) If system (3.149) has hyperbolic dynamics on a compact invariant hyperbolic set I ⊂ B^n, then for some P = P(Q, n), the semiflow S_t(P) also has a topologically equivalent compact invariant hyperbolic set, and the corresponding restricted semiflows are topologically orbitally equivalent.

For a definition of topological orbital equivalence, see [135, 228] and Section 1.6.4.

Notice that the solutions u, v can be described explicitly. Namely, there are functions u0(x, y), v0(x, y) and θ_i(x, y), V_i(x, y), i = 1, . . . , N, such that

u = u0 + γ Σ_{i=1}^{N} X_i(t) θ_i + ũ,  v = v0 + γ Σ_{i=1}^{N} X_i(t) V_i + ṽ,

where X_i(t) are time-dependent functions, γ > 0 is a small parameter, and ṽ, ũ are small corrections such that ‖ṽ‖_α, ‖ũ‖_α < cγ^{s0}, with s0 > 1 and α > 0. According to Theorem 3.15, the large-time dynamics of X_i(t) can be complicated when the problem parameters are adjusted in a special way. For X_i(t), we obtain a system of differential equations with quadratic nonlinearities.
In Theorem 3.15, we vary the diffusion coefficients D, d and the space sources (external fluxes) η_i. The following theorem concerns the case when we can vary some parameters in f and g.

Theorem 3.17. Assume the field Q satisfies conditions (3.150) and (3.151), and assumptions (3.148), (3.132) and (3.133) are fulfilled. Then, the family of the local semiflows S_t(P) defined by IBVP (3.146), (3.147), (3.121)–(3.124) is maximally dynamically complex (the parameters are the numbers d > 0, D > 0, λ1, λ2 and the smooth functions η1, η2).

The parameters λ_i are important in biological and biochemical applications, where they determine the degradation rates of reagents. This assertion, an obvious consequence of Theorem 3.15, shows that the following remarkable fact about systems (3.146), (3.147) holds: the family of semiflows defined by (3.146), (3.147) either induces only strongly monotone semiflows, or ϵ-realizes all finite dimensional vector fields when we vary the parameters D, d, λ_i, η_i(·, ·).
For some systems (3.125)–(3.127), it suffices to vary only D and d.

Theorem 3.18. Let us consider a countable sequence of vector fields Q^(l), where l = 1, 2, . . . . Assume the fields Q^(l) satisfy conditions (3.150) and (3.151). Consider the family of local semiflows S_t(P) defined by IBVP (3.121)–(3.129), where the parameters P are the coefficients D, d. Then, functions ξ, η and z exist enjoying the following property. For any ϵ > 0, there are D = D_l > 0 and d = d_l > 0 such that the corresponding local semiflow S_t(D_l, d_l) ϵ-realizes the field Q^(l).
This result is a consequence of Theorem 3.15. For spatially homogeneous reaction-diffusion systems, the main conclusion from these theorems is as follows. There exists a 5-component system of spatially homogeneous reaction-diffusion equations (a chemical reactor) which can generate any robust dynamics when we vary the diffusion coefficients (mixing rates in the reactor). In


fact, we can add to system (3.125)–(3.127) two equations and modify η, giving

v_t = DΔv + g(u, v) + η̄1(x, y, w),  (3.152)
u_t = dΔu + f(u, v) + η(z, ρ, w),  (3.153)
w_t = Δw + h(u, v, w),  (3.154)
z_t = Δz + 1,  ρ_t = Δρ + 1.  (3.155)

By setting appropriate boundary conditions for equations (3.155), we can obtain solutions z = z(x) and ρ = ρ(y), where z, ρ are increasing smooth functions. This implies, by Theorem 3.18, that for a nonpolynomial η, this system realizes all vector fields. For polynomial reactions, the question of the existence of a homogeneous reactor with a fixed number of reagents which "generates all" remains open.

3.4.6 Strategy of proof

The proof proceeds in several steps.

Step 1. Reduction to a system with quadratic nonlinearities.
We express the diffusion coefficients D, d via a small parameter γ, namely,

d = γ^{2s},  D = γ^{−1 + 3s/2 − s1/2},  (3.156)

where

s ∈ (0, 1/100),  0 < s1 < s/10.  (3.157)

Below, we use a standard convention: C_j, c_i, δ_k denote different positive constants independent of the small parameter γ. Sometimes, we shall use the same constants if it does not lead to confusion.
We linearize equations (3.118)–(3.119) in a small O(γ) neighborhood of some functions u0(x, y), v0(x, y). This linearization is defined by a linear operator L_N. The key property of L_N can be formulated as follows: L_N has N eigenvalues exponentially close to zero, while the rest of its spectrum lies below −cd, where the constants c, C > 0 are constant, independent of d as d → 0. The construction of L_N is not too complicated and is well known from previous works [43]. This property of L_N allows us to apply the known theorems on invariant

manifold existence [23, 48, 50, 108, 168, 209, 306, 322]. As a result, the dynamics defined by (3.118)–(3.124) can be reduced to the system

dX_i/dt' = P_i(X, γ),  (3.158)

where t' = γ^{1 − s/2 + s1/2} t is a rescaled time and the field P(X, γ) is quadratic:

P_i(X, γ) = r̄ X_i² + Σ_{k=1}^{N} M_ik X_k + γ^{s0} ρ_i(X, γ),  s0 > 0.  (3.159)

Here, i = 1, . . . , N, X = (X1, . . . , X_N), r̄ ≠ 0 is a coefficient and ρ_i(X, γ) are functions bounded in the C^{1+r}-norm:

|ρ_i(·, γ)|_{C^{1+r}(B^N(R))} < C2,  (3.160)

where B^N(R) is the ball in R^N of radius R centered at 0. It is important that R, r̄ and M_ik are independent of γ as γ → 0.
In (3.158), the matrix M is a linear function of the sources η1, η2. Thus, we have a linear map (η1, η2) → M(η1, η2) from the set of C^∞-smooth, 2π-periodic in x functions η_i defined on Ω into the set of all N × N matrices M. The following fact is essential in the RVF approach.
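The reduced system (3.158)–(3.159) is an ODE system that is easy to integrate once r̄ and M are fixed. A minimal sketch with the remainder ρ_i dropped (γ → 0) and illustrative values of r̄ and M chosen so that the origin is a spiral sink:

```python
import numpy as np

# Illustration of the reduced quadratic system (3.158)-(3.159) without the
# O(gamma^{s0}) remainder; rbar and M are illustrative choices.
rbar = 1.0
M = np.array([[-1.0, -0.5],
              [ 0.5, -1.0]])       # spectrum -1 +/- 0.5i: spiral sink at 0

def P(X):
    return rbar * X**2 + M @ X      # componentwise quadratic term rbar X_i^2

def rk4(X, dt=0.01, steps=2000):
    # classical fourth-order Runge-Kutta integration
    for _ in range(steps):
        k1 = P(X)
        k2 = P(X + 0.5 * dt * k1)
        k3 = P(X + 0.5 * dt * k2)
        k4 = P(X + dt * k3)
        X = X + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    return X

X_end = rk4(np.array([0.1, -0.1]))
assert np.linalg.norm(X_end) < 1e-4   # trajectory attracted to the equilibrium
```

The point of the RVF method is that, by choosing M suitably, this quadratic family can mimic far richer dynamics than this simple sink.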

D. Matrices M are dense. The image of the map (η1, η2) → M(η1, η2) is dense in the linear space of all N × N matrices M.

Therefore, the matrices M(η1, η2) can serve as parameters in the application of the RVF method to (3.158). A precise formulation of this density property can be found in subsections 3.4.16 and 3.4.17.
Our next step is an investigation of the quadratic systems (3.158). It is difficult to prove analytically the existence of chaotic behavior for concrete polynomial systems. For example, a rigorous proof of chaotic behavior of the Lorenz system [165], pioneered in 1963, was obtained much later, in 1999 [271]. The proof is complicated and uses computer calculations. To overcome this difficulty, we use a special approach.
We assume that N, M_ik are parameters and N is an even integer, N = 2N1 > 0. We apply the RVF to equations (3.158) as follows. We show that system (3.158), where N, M are parameters, ϵ-realizes all possible dynamics (3.161). The proof is based on a choice of M such that (3.158) can be reduced to a typical system with fast and slow variables, and on the well-known results about invariant manifolds [23, 48, 50, 108, 168, 306, 322]. To make this step, we exploit assertion D about the density of M. Furthermore, we consider a weakly perturbed Hopfield-type system

dx_i/dτ = Σ_{j=1}^{N1} J_ij σ_n(x_j) − λx_i + ζ g_i(x) = H_i(x),  (3.161)


where i = 1, 2, . . . , N1, x = (x1, x2, . . . , x_{N1}), σ_n(z) = (2a)^{−1}(1 − (1 − 4az)^{1/2}) with a = 1/(100 n^{1/2}), g_i(x) are C^{1+r} functions defined on some open bounded domain of R^{N1} with a smooth boundary, and ζ(γ) is a parameter such that ζ → 0 as γ → 0.
In (3.161), the number N1 and the matrix J are parameters. One can show that system (3.161) ϵ-realizes arbitrary dynamics (3.149) for any n and any ϵ > 0. This can be done following Chapter 2, Section 2.6.
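The Hopfield-type system (3.161) can be simulated directly. A minimal sketch with ζ = 0; the matrix J, the rate λ and the initial state are illustrative choices, not taken from the text (note that σ_n(z) ≈ z + a z² near z = 0):

```python
import numpy as np

# Sketch of the weakly perturbed Hopfield-type system (3.161) with zeta = 0.
n = 4
a = 1.0 / (100.0 * np.sqrt(n))

def sigma(z):
    # sigma_n(z) = (2a)^(-1) (1 - sqrt(1 - 4 a z)), applied elementwise
    return (1.0 - np.sqrt(1.0 - 4.0 * a * z)) / (2.0 * a)

rng = np.random.default_rng(0)
N1 = 6
J = 0.1 * rng.standard_normal((N1, N1))   # weak illustrative coupling
lam = 1.0

def step(x, dt=0.01):
    # explicit Euler step for dx/dtau = J sigma(x) - lam x
    return x + dt * (J @ sigma(x) - lam * x)

x = 0.1 * rng.standard_normal(N1)
for _ in range(5000):
    x = step(x)

# with this weak coupling the trajectory contracts toward the origin
assert np.linalg.norm(x) < 1e-3
```

With stronger, structured J, such networks are exactly the systems shown in Chapter 2 to realize arbitrary prescribed vector fields.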

3.4.7 Problem (3.118)–(3.124) defines a local semiflow

Let us prove that problem (3.118)–(3.124) is well-posed and defines a local semiflow. The proof is standard and follows [108]. Let us consider this problem in the phase space H defined in subsection 3.4.3. Denote v = (v, u)^tr and ‖v‖ = ‖u‖ + ‖v‖.
The initial boundary value problem (3.118)–(3.124) can be rewritten as an evolution equation [108, 167]

v_t = Av + F(v),  (3.162)

where A = diag(DΔ_D, dΔ_N) is a negatively defined self-adjoint operator in H and F = (g(u, v) + η2, f(u, v) + η1)^tr. Let us take α ∈ (3/4, 1). We use the embeddings

‖u‖_{L∞(Ω)} ≤ c0 ‖u‖_α,  ‖v‖_{L∞(Ω)} ≤ c1 ‖v‖_α,

and therefore,

‖v‖_{L∞(Ω)} ≤ c2 ‖v‖_{H^α} = c2 ‖v‖_α.

These estimates show that F is a C¹-map from H^α to H. This implies [108, 167] that equation (3.162) defines a family of local semiflows S_t(P).
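Such an IBVP is straightforward to explore numerically. Below is a minimal explicit finite-difference sketch of a two-component system of this type on the cylinder (2π-periodic in x, y ∈ (0, h), Neumann conditions for u and Dirichlet for v); the nonlinearities, sources and all parameter values are illustrative, not from the text:

```python
import numpy as np

# Explicit finite differences for u_t = d Lap u + f, v_t = D Lap v + g
# on (x, y) in [0, 2pi) x [0, h]; illustrative f, g and parameters.
nx, ny, h = 64, 32, 1.0
dx, dy = 2 * np.pi / nx, h / (ny - 1)
d, D, dt = 0.01, 0.1, 1e-4

f = lambda u, v: u - u**3 - v
g = lambda u, v: u - 0.5 * v

def lap(w, neumann):
    # periodic in x via np.roll; Neumann or Dirichlet rows in y
    wxx = (np.roll(w, 1, axis=0) - 2 * w + np.roll(w, -1, axis=0)) / dx**2
    wyy = np.zeros_like(w)
    wyy[:, 1:-1] = (w[:, :-2] - 2 * w[:, 1:-1] + w[:, 2:]) / dy**2
    if neumann:   # zero flux: mirror the interior neighbor
        wyy[:, 0] = 2 * (w[:, 1] - w[:, 0]) / dy**2
        wyy[:, -1] = 2 * (w[:, -2] - w[:, -1]) / dy**2
    return wxx + wyy  # Dirichlet rows are pinned to zero by the caller

rng = np.random.default_rng(1)
u = 0.01 * rng.standard_normal((nx, ny))
v = np.zeros((nx, ny))

for _ in range(2000):
    un = u + dt * (d * lap(u, True) + f(u, v))
    vn = v + dt * (D * lap(v, False) + g(u, v))
    vn[:, 0] = vn[:, -1] = 0.0    # Dirichlet conditions for v
    u, v = un, vn

assert np.isfinite(u).all() and np.isfinite(v).all()
assert np.abs(u).max() < 10       # solution stays bounded on this time span
```

The explicit time step respects the stability bound dt < 1/(2D(1/dx² + 1/dy²)) for these grid sizes.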

3.4.8 Global semiflow existence

To establish the existence of solutions on the infinite time interval (0, ∞), we need a priori estimates. These estimates do not hold for general f, g. To guarantee the global existence of solutions for all times, one can assume, for example, that there is an invariant rectangle Π = [a−, a+] × [b−, b+] for the vector field (f, g) in R² [258]:

f(a−, v) > 0,  f(a+, v) < 0,  (v ∈ [b−, b+]),  (3.163)
g(u, b−) > 0,  g(u, b+) < 0,  (u ∈ [a−, a+]).  (3.164)

In general, we do not suppose that these conditions are fulfilled. Therefore, the semiflows generated by the IBVPs are only local ones. We investigate these semiflows in small open subdomains of H^α, which are defined by two auxiliary functions u0, v0 and by the small parameter γ. All invariant manifolds, where vector fields should be realized, are positively invariant. Trajectories with initial data from these manifolds are defined for all positive times.

3.4.9 Construction of the special linear operator L_N

Let u0(x, y) and v0(x, y) be two smooth functions defined on Ω and 2π-periodic in x. Let us set

u = u0 + ũ,  v = v0 + ṽ.

We set

η1 = −DΔv0 − g(u0, v0),  (3.165)
η2 = −dΔu0 − f(u0, v0).  (3.166)

We consider the functions u0 and v0 as parameters. For the new unknown functions ũ, ṽ, we then obtain

ṽ_t = DΔṽ + S1 ũ + S2 ṽ + g̃(ũ, ṽ),  (3.167)
ũ_t = dΔũ + R1 ũ + R2 ṽ + f̃(ũ, ṽ),  (3.168)

where

R1 = f_u(u0, v0),  R2 = f_v(u0, v0),  (3.169)
S1 = g_u(u0, v0),  S2 = g_v(u0, v0).  (3.170)

The smooth functions f̃, g̃ satisfy the estimates

|f̃(ũ, ṽ)| + |g̃(ũ, ṽ)| ≤ C1(ũ² + ṽ²)  (3.171)

as ũ, ṽ → 0.

The construction of the operator L_N proceeds in two steps.

Step 1. An auxiliary one-dimensional Schrödinger operator with a single potential well and a single zero eigenvalue.
First, let us formulate some auxiliary assertions on Schrödinger operators. Let us introduce the function

Φ_b(x) = 3 cosh⁻²(x/b),  b = μ⁻¹ √(2d),  (3.172)

where

μ = γ^{s1},  (3.173)

and the exponent s1 satisfies (3.157). Let us consider the operator

H_b = d d²/dx² + μ²(Φ_b(x) − 2),  x ∈ (−∞, +∞),

defined on a domain Dom H_b ⊂ L²(R). It is well known [43] that this operator has an eigenfunction of the discrete spectrum

θ(x) = (3^{1/2}/(2 b^{1/2})) cosh⁻²(x/b),  ‖θ‖ = 1,  (3.174)

with zero eigenvalue λ = 0. Notice that the spectrum of H_b does not depend on b. In fact, making the change x = x'b, we obtain an operator of the form μ²H, where the operator H involves no small parameters. It is clear that the continuous spectrum of H_b lies in the domain (−∞, −2μ²). Therefore, the spectrum of H_b consists of 0 and a set that is separated by a barrier O(μ²) away from the imaginary axis Im λ = 0. We thus obtain the following estimate of the quadratic form associated with the operator H_b:

∫_{−∞}^{∞} (H_b w)(x) w(x) dx ≤ −c0 μ² ‖w‖²,  if  ∫_{−∞}^{∞} θ w dx = 0,  (3.175)
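That θ in (3.174) is a zero mode of H_b can be checked numerically by finite differences; the values of γ, s, s1 below are illustrative choices satisfying (3.157):

```python
import numpy as np

# Finite-difference check that theta(x) ~ cosh^-2(x/b) is (numerically) a
# zero mode of H_b = d d^2/dx^2 + mu^2 (3 cosh^-2(x/b) - 2).
gamma, s, s1 = 0.5, 0.009, 0.0008         # hypothetical, satisfy (3.157)
d = gamma ** (2 * s)
mu = gamma ** s1
b = np.sqrt(2 * d) / mu                    # well width from (3.172)

L, n = 40 * b, 8000
x, h = np.linspace(-L, L, n, retstep=True)
theta = np.cosh(x / b) ** -2
theta /= np.sqrt(np.sum(theta**2) * h)     # normalize: ||theta|| = 1

# second-order difference for the Laplacian (tails at +-L are negligible)
lap = (np.roll(theta, 1) - 2 * theta + np.roll(theta, -1)) / h**2
Htheta = d * lap + mu**2 * (3 * np.cosh(x / b) ** -2 - 2) * theta

residual = np.sqrt(np.sum(Htheta[1:-1] ** 2) * h)
assert residual < 1e-3 * mu**2   # H_b theta = 0 up to discretization error
```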

where the positive constant c0 is independent of γ.

Step 2. A two-dimensional Schrödinger operator with N potential wells.
Let x̄_i be N different points in (0, 2π), i = 1, . . . , N. Using Φ_b from (3.172), one can construct an operator having N eigenvalues exponentially small as d → 0. Roughly speaking, we use N potential wells localized at the points x̄_i. Let us set

Φ_{b,N}(x) = Σ_{j=1}^{N} Φ_b(x − x̄_j),  (3.176)

and

L_N u = dΔu + R1 u,  (3.177)

where

R1 = μ²(−2 + Φ_{b,N}).  (3.178)

Let us define

θ_j(x) = (3^{1/2}/(2 b^{1/2})) cosh⁻²((x − x̄_j)/b).  (3.179)

Notice that the norms ‖θ_j‖ are exponentially close to 1:

‖θ_j‖ = 1 + O(exp(−cγ^{−s})),  γ → 0.

Lemma 3.19. Let us consider the quadratic form Q_N(w) = ⟨L_N w, w⟩ defined for w ∈ H^{1/2}. If the function w is orthogonal to all θ_j,

⟨θ_j, w⟩ = 0,  (3.180)

this form is negatively defined:

Q_N(w) ≤ −c_N d ‖w‖²,  (3.181)

where the positive constant c_N is uniform in d as d → 0. Moreover,

‖L_N θ_i‖ ≤ C_N exp(−c0 d⁻¹).  (3.182)
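The spectral picture behind Lemma 3.19 — N eigenvalues collapsing to 0 and a gap of order μ² below them — can be seen numerically in the one-dimensional version of (3.177)–(3.178) on the periodic circle; all parameter values here are illustrative:

```python
import numpy as np

# Eigenvalues of L_N = d d^2/dx^2 + mu^2 (Phi_{b,N} - 2), periodic on (0, 2pi):
# expect N near-zero eigenvalues and the rest below about -1.5 mu^2.
d, mu, N = 4e-4, 0.5, 3
b = np.sqrt(2 * d) / mu
xbar = np.array([1.0, 3.0, 5.0])           # well positions, well separated

nx = 1500
x = np.linspace(0.0, 2 * np.pi, nx, endpoint=False)
dx = x[1] - x[0]

Phi = sum(3 / np.cosh((x - xj) / b) ** 2 for xj in xbar)
main = -2 * d / dx**2 + mu**2 * (Phi - 2)
off = d / dx**2
L = np.diag(main) + off * (np.eye(nx, k=1) + np.eye(nx, k=-1))
L[0, -1] = L[-1, 0] = off                  # periodic wrap-around

ev = np.sort(np.linalg.eigvalsh(L))[::-1]  # largest first
assert all(abs(ev[:N]) < 0.05)             # N eigenvalues clustered near 0
assert ev[N] < -0.3 * mu**2                # spectral gap below the cluster
```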

The proof uses estimate (3.175) and can be found in [43, 287]. Informally speaking, the quadratic form −Q_N(w) takes values O(d) on test functions w of the form w = θ_j(x) cos(kπy/h), where k are positive integers, and O(μ²) for all w = ψ(x) cos(nπy/h), where ψ is a function orthogonal to all θ_j, ‖ψ‖ = 1 and n ≥ 0 are integers. Notice that we use restriction (3.157) on the exponent s1, which implies that μ² ≫ d.
Let us consider the matrix operator v → 𝓛_N v defined by

𝓛_N = ( DΔ + S2   S1 ; R2   L_N ),

where v = (v, u)^tr. Below, we use the norm ‖v‖ = (‖u‖² + ‖v‖²)^{1/2}.

Lemma 3.19 includes the following useful assertion about the quadratic form as­ sociated with the operator LN . Lemma 3.20. Assume S1 , S2 and R2 are bounded continuous functions: |S i (x, y)|, |R2 (x, y)| < C1 ,

(x, y) ∈ Ω,

(3.183)

coefficients D and d satisfy (3.156). Let us consider the quadratic form Q(v) = LN v, v = −D∇v2 +  L N u, u 

(3.184)

defined for v = (v, u )tr ∈ H1/2 . Then, if γ is small enough and the function u is orthogonal to all θ j , i.e.  θ j , u  = 0, (3.185) the form Q is negatively defined as Q(v) ≤ −c¯0 dv2 ,

(3.186)

where c¯0 = c¯0 (N ) > 0 is uniform in γ as γ → 0. To prove this assertion, we use (3.181) and some estimates. In fact, let u be orthogonal to all θ j . Then, due to Lemma 3.19, Q(v) ≤ −D∇v2 + C1 u  v − c N du 2 . 1 −1 Using the inequality 2u v ≤ au 2 + a−1 v2 , where a = c− N dC 1 , the Poincaré −1 inequality and D  d (that holds for small γ due to (3.156) and (3.157)), one obtains (3.186).



3.4.10 Estimates for semigroups Let us consider estimates for semigroups generated by the operators B = DΔ, L N and LN . For the semigroup exp(Bt), one has the well-known estimate [108] (−Δ)α exp(Bt)v ≤ D−α b α,D (t)v

(3.187)

where the function b α,λ (t) is defined by b α,λ (t) = (teα −1 )−α ,

0 ≤ t ≤ α /λ,

α

b α,λ (t) = λ exp(−λt),

t ≥ α /λ.

From now, we assume that α ∈ (3/4, 5/6). If u satisfies (3.185) for exp(L N t), one obtains α (−Δ)α exp(L N t)u  ≤ d−α C1 (exp(−d c0 t) + b 1,d (t))u .

(3.188)

Estimate (3.187) is an immediate consequence of the Fourier decomposition. Let us prove (3.188). Notice that L N exp(L N t)u  ≤ c1 b 1,d (t))u ,

c1 > 0.

−1

We have Δ = d (L N − R1 ), where, according to (3.176), R1 is a bounded function: R1 < C3 μ 2 . Therefore, Δ exp(L N t)u  ≤ d−1 C2 (exp(−c2 d t) + b 1,d (t))u .

We now use the well-known inequality (Theorem 1.4.4, [108]) (−Δ)α w ≤ C0 Δwα w1−α ,

where w = exp(L N t)u. This implies (3.188). Let us now consider estimates for the matrix operator LN . Lemma 3.20 entails  exp(LN t)v ≤ exp(−c¯0 d t)v,

c¯0 > 0.

(3.189)

Let us estimate  exp(LN t)vα for α ∈ (0, 1). Again, we follow [108]. The operator exp(LN t) is defined by the evolution equations v t = DΔv + S1 v + S2 w,

(3.190)

w t = L N w + R2 v.

(3.191)

We rewrite these equations as integral ones by t

v(t) = exp(−Bt)v(0) +

exp(−B(t − τ))(S1 v(τ) + S2 w(τ))dτ,

(3.192)

exp(−L N (t − τ))R2 v(τ)dτ.

(3.193)

0

t

w(t) = exp(L N t)w(0) + 0

First, we estimate the norm of the right-hand side of (3.192). By (3.187) and (3.189), one obtains

‖v(t)‖_α = D^{−α} ( b_{α,D}(t)‖v(0)‖ + C1 ‖v(0)‖

t

b α,D (t − τ) exp(−d¯c0 τ)dτ ). (3.194) 0

We use the inequalities t

b α,D (t − τ) exp(−d¯c0 τ)dτ < 0

t (t − τ)

< c3

−α

t −t0

t−t0

< c4 D

exp(−d¯c0 τ)D α exp(−c1 D(t − τ))dτ
t0 = c2 D−1 and t

b α,D (t − τ) exp(−d¯c0 τ)dτ < c5 D−1+α exp(−d¯c0 t)

0 −1

for t ≤ c2 D , where c i = c i (α ) are positive constants independent of γ. Then, by these estimates and (3.194), one finds v(t)α ≤ (D−α b α,D (t) + C5 D−1 exp(−d¯c0 t))v(0).

(3.195)

Now, we substitute estimate (3.195) with α = 0 in (3.193). Notice that for α = 0, inequality (3.195) becomes v(t) ≤ (exp(−c2 Dt) + C5 D−1 exp(−d¯c0 t))v(0).

(3.196)

This entails α w(t)α = c5 d−α [(b 1,d (t) + exp(−d¯c0 t))w(0) + c6 v(0)I α ],

(3.197)

where t

α (t − τ ) + exp(−d¯c0 (t − τ )))(exp(−c2 Dτ ) + C5 D−1 exp(−d¯c0 τ ))dτ. I α = (b 1,d 0

We present this integral as a sum of 4 integrals: I α = I1,α + I2,α + I3,α + I4,α . Let us estimate these integrals. One has t

I1,α = 0

exp(−d¯c0 (t − τ)) exp(−Dc2 τ)dτ ≤ c11 D−1 exp(−d c¯0 t).



Moreover, for t > ¯c0 d−1 , one obtains t α b 1,d (t − τ ) exp(−Dc2 τ )dτ ≤

I2,α =

(3.198)

0 t − ¯c0 d −1

≤ c6 d α

exp(−αd¯c0 (t − τ)) exp(−Dc2 τ)dτ+ 0

(3.199)

t +

(t − τ) t − ¯c 0

−α

exp(−Dc2 τ)dτ.

d −1

One has t − ¯c0 d −1



exp(−αd¯c0 (t − τ)) exp(−Dc2 τ)dτ < c7 d α D−1 exp(−αd¯c0 t).

0

The second integral in the right-hand side of (3.199) can be estimated by the Hölder inequality t

(t − τ )−α exp(−Dc2 τ )dτ ≤ c13 D−1/q d(αp−1)/p exp(−c2 D(t − c¯0 d−1 )),

˜I = t − ¯c 0 d −1

where p, q > 0 such that

1 1 + = 1. p q

We take 1/p = α < 1, and 1/q = 1 − α. Thus, ˜I < c8 D−(1−α) exp(−c2 D(t − c¯0 d−1 )),

t > c¯0 d−1 .

One has exp(−c2 D(t − c¯0 d−1 )) < C10 exp(−c¯0 dt),

t > c¯0 d−1 ,

where C10 > 0 is a constant. This implies I2,α ≤ c9 (D−1 + D α−1 ) exp(−αd¯c0 t) ≤ c10 D α−1 exp(−αd¯c0 t). Moreover, I3,α = D−1

t

exp(−d¯c0 (t − τ)) exp(−c¯0 d τ)dτ ≤ c11 D−1 t exp(−d¯c0 t)
0,

if

X (0) ∈ D.

(3.250)

Let γ be small enough: γ < γ0 (α, s, s1 , N, C0 ). Then, the semiflow, generated by system (3.238), (3.239) and (3.240), has a positively invariant and locally attracting normally hyperbolic manifold W N of dimension N defined by ˆ (X ), v¯ = V

ˆ (X ), ¯ =W w

X∈D

(3.251)

ˆ α ≤ C1 , W

ˆ |||α ≤ C1 , |||D X W

(3.252)

ˆ α ≤ C2 , V

ˆ |||α ≤ C2 . |||D X V

(3.253)

ˆ W ˆ are C1+r maps bounded in the C1 norm: where V,

The proof uses the standard technique, see Theorems 6.1.3 and 6.1.7 [108] (Appendix ¯ )tr , F =(F¯ 1 , F¯ 2 )tr and G =(G1 , . . . , G N )tr ) 3.5). We rewrite our system as (here, v =(v¯ , w vt = LN v + F(v, X ),

(3.254)

X t = G(v, X ).

(3.255)

The function G is defined by ¯ + γθ(X ), V + γ2 v¯)+ G i (v, X ) = γ−1 ( θ i , R(V + γ2 v¯ ) + ˜f (W + γ2 w ¯ )). + ξ i (X, W + γ2 w

(3.256)

We consider this system in the domain Vα,C0, C¯ 0 defined by ¯ 0 }. ¯ α + v¯α < C ¯ ) : |X | < C 0 , w V α,C0 , C¯ 0 = {(X, v¯ , w

(3.257)

To apply Theorem 6.1.7 [108], let us estimate constants μ, M2 , M1 , M, β, λ, M F and Θ1 (see Theorem B in Appendix 3.5, which is a simplified variant of Theorem 6.1.7

from [108]). Here, the constants β, M estimate the evolution operator exp(𝓛_N t) by (3.306). Due to estimate (3.202) for the semigroup exp(𝓛_N t), one has

β > c0 d = c0 γ^{2s},

M = c 1 d −1 .

(3.258)

The constant M1 = 1. Let us estimate M2 , μ 0 defined by M2 = μ0 =

sup ¯ )∈Vα,C0 ,C¯ 0 ( X,¯v, w

sup ¯ )∈Vα,C0 ,C¯ 0 ( X,¯v, w

Dv G, D X G|.

One obtains, under our choice of d and D (see (3.156) and (3.157), by (3.211) and by D w¯ G ≤ C1 γ2 ,

D ¯v G ≤ C1 γ, −1

(3.259) −1

D X G ≤ C 1 (γ V X  + γ ) < C 1 D .

(3.260)

Since D−1  γ1/2 for small γ, these estimates yield μ 0 = C 2 γ 1 /2 ,

M 2 < C 3 γ 1 /2 .

(3.261)

The constants M F and λ estimate sup F and sup DF, respectively, in the do­ main Vα,C0 , C¯ 0 . One has MF = λ=

sup ¯ )∈Vα,C0 ,C¯ 0 ( X,¯v, w

sup ¯ )∈Vα,C0 ,C¯ 0 ( X,¯v, w

F  , DF.

Relations (3.241), (3.242), estimates (3.243), (3.244) and (3.236), (3.237) of the derivatives V t , W t imply M F < C 4 ( D − 2 γ − 1 + c 8 γ ) < C 5 γ 1 /2 . Moreover, by (3.225), (3.230), (3.228), (3.241), (3.242), (3.243), and (3.244), sup ¯ )∈Vα,C0 ,C¯ 0 ( X,¯v, w

D ¯v F
0 is independent of γ and k. Let us introduce auxiliary functions U j (x, y) = γ

s1 −s 2

which will be useful in the coming subsection.

¯j V

(3.283)

3.4.17 Lemma on control of matrices M (property D)

The property of density D is formulated by the following lemma. Notice that, due to relations (3.165) and (3.166), we can consider u0, v0 as parameters instead of η_i.

Lemma 3.23. Suppose assumption (MC) (i.e. conditions (3.131), (3.132) and (3.133)) holds. Then, for any N × N matrix M̄ with entries M̄_ij, there exist 2π-periodic in x smooth functions u0(x, y) and v0(x, y) such that for sufficiently small γ < γ0(f, g, M̄), one has

f_u(u0, v0) = R1(x, y),  |g_u(u0, v0)| > δ1 > 0

(3.284) (3.285)

and ˜ jl (u 0 , v0 ) − M ¯ jl | < c1 γ s 2 , |M

(3.286)

where j, l = 1, 2, . . . , N and s2 > 0. Notice that if this lemma holds, the linear and quadratic terms in (3.267) have the same order in γ. Proof. Step 1. The function R1 , defined by relations (3.172), (3.176) and (3.178), depends on two parameters: b and μ. Notice that sup |R1 (x, y)| < C1 μ 2 , where C1 > 0. Assume that R2 = μρ (x, y),

(3.287)

where a smooth function ρ satisfies sup |ρ (x, y)| + sup |∇ρ (x, y)| < C2 .

(3.288)

Then, since condition (3.133) holds, the equations f u ( u 0 , v0 ) = R1 ,

f v ( u 0 , v0 ) = R2

(3.289)

have solutions u 0 (x, y), v0 (x, y) for sufficiently small γ (since μ = γ s 1 ). These solu­ tions u 0 , v0 are close to the constants u ∗ , v∗ from Assumption (MC). We have sup |u 0 (x, y) − u ∗ | + |v(x, y) − v∗ | < c3 μ.

x,y ∈ Ω

(3.290)

˜ is defined by the relation Step 2. Recall that the matrix M ˜ ij = γ−s +s 1 M

h 2π

¯ j (x, y)θ i (x)dydx, ρ (x, y)V 0 0

(3.291)



¯ j are solutions of boundary problems (3.223), (3.224). Moreover, by (3.290) and where V the definition of S1 , we obtain S1 (x, y) = g u (u, v0 )|u=u0 = g¯ + O(μ ),

(3.292)

where g¯ = g u (u ∗ , v∗ ) = 0 due to (3.132). Now, we seek a function ρ satisfying (3.288) such that (3.291) holds. ¯ j , which Equation (3.291) can be simplified. In fact, we can estimate functions V are solutions of the boundary value problem (3.223), (3.224), by the functions Z j (x, y) ( (3.274) and (3.292)). Taking into account assumptions (MC), we can present the right-­ hand side of equation (3.223) as S1 (x, y)θ j (x) = (b¯ + O(μ ))θ j (x). Then, by (3.283), one obtains ⎛

˜ ij = γ(−s +s 1)/2 ⎜ M ⎝

h 2π

⎞ ⎟ ρ (x, y)U j (x, y)θ i (x)dydx + O(μ )⎠ .

(3.293)

0 0

Notice that θ j is well-localized at x = x j . Indeed,   |θ j (x)| < C1 exp −b −1 |x − x¯ j | ,

b = O(γ s ).

If γ is small enough, we can remove in the left-hand sides of equations (3.291) contri­ butions, vanishing as γ → 0. Since  θ i (x)dx = γ(−s +s 1)/2 (1 + O(γ s 3 )), s3 > 0, (γ → 0), we notice that to satisfy (3.291), it is sufficient to resolve the following system of linear equations for functions ρ i (y) = ρ (x¯ i , y) h

¯ ij . ρ i (y)U j (x¯ i , y)dy = c2 M

(3.294)

0

Let us show that the system of integral equations (3.294) has a solution. We use the Fourier decomposition 3.283) for U j . Let us present ρ i as a truncated finite Fourier sum ρˆ i (k ) sin(kπyh−1 ), ρ i (y) = k ∈KN

where ρ i are new unknown coefficients and K1 , K2 are numbers satisfying (3.277), (3.278). We substitute this presentation into (3.294). This gives N independent sys­ tems of linear algebraic equations ¯ ij , i, j = 1, . . . , N. ρˆ i (k )G k (x¯ i − x¯ j ) = M (3.295) k ∈KN

where G k are the Green functions defined by (3.279).

Let us show that this algebraic system of linear equations is resolvable. Indeed, since the points x̄_i can be adjusted in an arbitrary way, we can assume that all pairs x̄_i, x̄_j with i ≠ j satisfy |x̄_i − x̄_j| > δ0. Moreover, we suppose that for each i, all the numbers |x̄_i − x̄_j|, j = 1, . . . , N, are mutually different. Since k ∈ K_N and N is fixed, for large K1 we can use asymptotics (3.280). This shows that for small γ and large K1, system (3.295) is a small perturbation of the following system, where γ is not involved:
k ∈KN

h ¯ ij , ρˆ i (k ) exp(−k |x¯ i − x¯ j |) = M 2πk

(3.296)

where i, j = 1, . . . , N. All functions ρˆ i can be chosen independent for different i. Therefore, equations (3.296) reduce to N independent systems of linear algebraic equations. The matrices in the left-hand sides are the Van der Monde ones, and the corresponding determinants are not equal to zero since all distances |x¯ i − x¯ j | are different for a fixed i and different j. This proves that solutions of (3.296) and thus (3.295) exist and moreover, the norms maxi,k |ρ i (k )| are bounded as γ → 0 ¯ ), ˆ i (k )| < C(M |ρ

(3.297)

thus completing the proof.
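The solvability of the Vandermonde-type systems (3.296) for a fixed i is easy to confirm numerically; the well positions, the mode set K and the right-hand side below are illustrative choices, not from the text:

```python
import numpy as np

# Solve system (3.296) for one fixed i: unknowns rho_i(k), k in K;
# matrix entries (h / (2 pi k)) exp(-k |x_i - x_j|), distances distinct.
N, h = 4, 1.0
xbar = np.array([0.3, 0.5, 0.8, 1.2])      # mutually distinct distances
K = np.arange(1, N + 1)                    # small modes to keep conditioning mild
i = 0
dist = np.abs(xbar[i] - xbar)              # |x_i - x_j|, j = 1..N

# generalized Vandermonde matrix in the nodes t_j = exp(-dist_j)
A = (h / (2 * np.pi * K)) * np.exp(-np.outer(dist, K))

Mbar_row = np.array([1.0, -2.0, 0.5, 3.0])  # arbitrary target row of Mbar
rho = np.linalg.solve(A, Mbar_row)

assert np.allclose(A @ rho, Mbar_row, atol=1e-8)   # the system is resolvable
```

Because the distances |x̄_i − x̄_j| are mutually different, the nodes t_j are distinct and the determinant is nonzero, exactly as the Vandermonde argument in the proof asserts.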

3.4.18 Proof of theorems

Let us consider system (3.158), using the notation t again for the time. For small γ, this system is a weak perturbation of system (2.237) considered in Chapter 2. In Section 2.7.1, it was shown that system (2.237) realizes all n-dimensional vector fields with any prescribed accuracy ϵ. Therefore, choosing γ < γ0(ϵ) sufficiently small, one obtains that system (3.158) also realizes all these fields. Since our IBVP reduces to (3.158) for sufficiently small γ, Theorem 3.15 is proved.
Theorem 3.17 is a consequence of Theorem 3.15.
Let us give an elementary proof of Theorem 3.18. We set z(u, v, w) = sin w. Then, problem (3.127), (3.128) and (3.129) has trivial solutions w ≡ πk, where k is an integer. These solutions are local attractors for odd k = 2k' + 1. According to Theorem 3.15, each field Q^(l) can be ϵ-realized by a two-component system (3.118), (3.119). We thus have a sequence of functions η1^(l)(x, y), η2^(l)(x, y) and coefficients d^(l), D^(l) giving these realizations. We now set

ξ (x, y, 2l + 1) = η1 (x, y), This proves the theorem.

(l)

η(x, y, 2l + 1) = η2 (x, y).

3.5 Appendix: theorems on invariant manifolds

|

189

3.4.19 Conclusion We have the following picture of dynamical complexity for a system of reaction-dif­ fusion and quasilinear parabolic equations of the second order with inhomogeneous coefficients. Let n be the component number and m the space dimension. We only dis­ cussed the time autonomous case. For the time nonautonomous case, see paper [137]. (i) For n = 1 (i.e. we deal with a reaction-diffusion equation) and m = 1, all trajecto­ ries are convergent to an equilibrium or a cycle. If our equation is generic, or this equation has a polynomial nonlinearity, the same convergence holds for m > 1. Special adjusted equations can exhibit unstable chaotic behavior (generate all in­ variant hyperbolic sets) [211]. (ii) If n > 1 and m > 1, some special systems with nonpolynomial nonlinearities can exhibit a chaotic behavior (they generate all invariant hyperbolic sets) [280]. This chaos can be realized on an inertial manifold, and, therefore, it may be globally stable. (iii) It is shown that if n > 1 and m > 1, a generic reaction-diffusion system can gener­ ate all invariant hyperbolic sets when we vary some functions η i in the right-hand side and diffusion coefficients. This invariant sets lie on locally attracting invari­ ant manifolds and therefore they can be local attractors. However, there are important questions left open. Probably, the most difficult ques­ tion is as follows: prove (ii) or (iii) for the one-dimensional case, m = 1. Another prob­ lem exists, namely, when is it possible to obtain all structurally stable dynamics for a semiflow induced by a system of a fixed structure if we only vary diffusion coefficients? Theorem 3.18 gives a positive answer for n ≥ 3 and m > 1, however, one can expect that actually this holds for n = 2 and m = 1, or, at least, for n = 2 and m = 2.

3.5 Appendix: theorems on invariant manifolds

Let us formulate theorems on invariant manifold existence. The first theorem (Theorem A) is a particular case of Theorem 9.1.1 from [108] that is sufficient for our goals. The second one is a simplified, time-autonomous variant of Theorem 6.1.7 of [108]. Let us consider the system

q_t = Q(q) + Q̃(q, z),    (3.298)
z_t = A(q)z + F(q, z),    (3.299)

where q = (q_1, . . . , q_n) lies in R^n, and z lies in a Banach space B. Assume the sectorial operator A has the form

A(q) = A_0 + Ã(q).

Theorem A. Let B be a Banach space and A_0 a sectorial operator in B. Assume that

Ã(q) : R^n → L(B^α, B)    (3.300)

is a bounded map, differentiable with respect to q, that U is a neighborhood of 0 in B^α, and that

F, Q, Q̃ : R^n × U → B × R^n × R^n    (3.301)

are bounded maps, differentiable with respect to q and z. Moreover, let us assume that the following conditions are satisfied:
(i) There are constants M, μ > 0 such that if q satisfies the unperturbed equation q_t = Q(q), then

|q_1(t) − q_2(t)| ≤ M exp(μ|t|) |q_1(0) − q_2(0)| for all t.    (3.302)

(ii) If q satisfies the unperturbed equation q_t = Q(q), then for the operator A one has a trivial exponential dichotomy, i.e. the solutions of z_t = A(q(t))z satisfy

‖z(t)‖ ≤ M exp(−βt) ‖z(0)‖,   β > μ;    (3.303)

(iii) the maps F, F_q, F_z, Q̃, Q̃_q, Q̃_z, Q_q, Ã and Ã_q are uniformly bounded in the corresponding norms by the number M. There is a number θ ∈ (0, 1] such that μ(1 + θ) < β, and the maps (q, z) → F_q, F_z, Q̃_z, Q̃_q, Q_q, Ã_q lie in the Hölder class with the exponent θ and the constant M;
(iv) for some λ, one has the estimates

‖F(q, 0)‖ < λ,   ‖F_z(q, 0)‖ < λ,   ‖Q̃‖ < λ.    (3.304)

Then, for sufficiently small positive λ and r_0 that only depend on β, A_0, μ, θ, β − μ(1 + θ) and M, there exists an invariant manifold

z = Z(q),   Z ∈ C^1,

which is a maximal invariant subset of the set ‖z‖_α < r_0 and such that

‖Z‖, ‖Z_q‖ → 0   as λ → 0.    (3.305)
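As a rough numerical illustration of the situation described by Theorem A, consider a scalar toy system with illustrative choices of Q, A and F (my own construction, not an example from the text): q_t = −q and z_t = −βz + λ sin q, with β large and λ small. Trajectories started from very different z(0) collapse onto the same attracting graph z = Z(q):

```python
import math

# Toy illustration of Theorem A (illustrative choice of Q, A, F; not from the text):
#   q_t = Q(q) = -q,    z_t = -beta*z + lam*sin(q)
# For beta large and lam small, an attracting invariant manifold z = Z(q)
# exists, and trajectories with different z(0) collapse onto it.
def simulate(q0, z0, beta=5.0, lam=0.1, dt=1e-3, steps=20000):
    q, z = q0, z0
    for _ in range(steps):
        # explicit Euler step; old q is used in both updates
        q, z = q + dt * (-q), z + dt * (-beta * z + lam * math.sin(q))
    return q, z

_, z_a = simulate(1.0, 10.0)   # start far above the manifold
_, z_b = simulate(1.0, -10.0)  # start far below it
print(abs(z_a - z_b) < 1e-8)   # True: both ended on the same graph z = Z(q)
```

The difference |z_a − z_b| contracts like exp(−βt), which is the exponential dichotomy of condition (ii) at work.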

Theorem B. Let us consider (3.298), (3.299), where A(q) = A does not depend on q. Assume that for some β > 0,

‖exp(At)v‖ ≤ M ‖v‖ exp(−βt),   t > 0,    (3.306)
‖exp(At)v‖ ≤ M ‖v‖ t^{−α} exp(−βt),   t > 0.    (3.307)

Let us denote Q̄(q, z) = Q + Q̃. Assume (3.301) holds, Q̄, F ∈ C^{1+r}, r ∈ (0, 1), and, in Z = R^n × U,

sup_Z ‖F‖ < C_F,    (3.308)

‖F(q_1, z_1) − F(q_2, z_2)‖ ≤ λ(‖q_1 − q_2‖ + ‖z_1 − z_2‖).    (3.309)

Let us set

μ = sup_{(q,z)∈Z} ‖D_q Q̄‖,   M_2 = sup_{(q,z)∈Z} ‖D_z Q̄‖.    (3.310)

Let us introduce

Θ_p(Δ) = λM ∫_0^∞ u^{−α} exp(−(β − pμ_1)u) du,   p ∈ [1, 1 + r],

where r ∈ (0, 1), β − pμ_1 > 0 and μ_1 = μ + ΔM_2. Suppose that for some Δ and R_0, the following conditions and inequalities hold:

{v : ‖v‖_α < R_0} ⊂ U,    (3.311)

M C_F ∫_0^∞ u^{−α} exp(−βu) du < R_0,    (3.312)

μ_1 < β/2,   Θ_1 < Δ/(1 + Δ).


4 Random perturbations, evolution and complexity

4.1 Introduction

Since gene codes are discrete, we can consider evolution as a computational problem. The formation of complex organisms can then be viewed as a hard combinatorial problem (the famous k-SAT is one such problem). In Section 4.7, we consider reaction-diffusion systems with random perturbations. We formulate and prove the Gromov–Carbone hypothesis (i) for a large class of such systems and find a connection between viability and gene complexity. The next question (Section 4.8) is interesting for cancer and cell-cycle problems: how can a multicellular organism be synchronized? How can a complicated cell system survive under perturbations? We apply here the remarkable method of Kuramoto [152].

4.1.1 Viability problem

Our starting point is a remark from [97], where M. Gromov and A. Carbone have formulated the following problem: "Homeostasis of an individual cell cannot be stable for a long time as it would be destroyed by random fluctuations within and out of the cell. There is no adequate mathematical formalism to express the intuitively clear idea of replicative stability of dynamical systems ([97], p. 40)." This assertion contains two hypotheses: first, that functioning biological systems are unstable under random perturbations; second, that these systems can be stabilized by replication (evolution).

The goal of this chapter is to mathematically formulate and prove these hypotheses for some classes of systems important in biology, chemistry and other applications. We introduce a measure of homeostasis stability under random perturbations. For some important classes of systems, we show that almost all individual systems with fixed parameters are, in a sense, unstable for large times (T → +∞). However, populations of evolving systems with parameters changing from time to time can be stable even as T → ∞. Our approach to this homeostasis problem uses standard probabilistic methods and ideas from theoretical computer science.

The homeostasis concept was proposed by Claude Bernard [28]: "La fixité du milieu intérieur est la condition d'une vie libre et indépendante." (Constancy of the internal environment is the condition for a free and independent life.) This is the underlying principle: homeostasis entails supporting the life functions of a system. Biological molecules and chemical mechanisms in the cell are fragile. Thus, in order to support their functioning, the main characteristics of the cell (temperature, pressure, pH, reagent concentrations) must stay within a narrow domain [8, 25, 156], independently of external medium oscillations.
For example, the temperature of a human body must lie within 35–42 °C. Sharp changes in the external medium can destroy the system. Biological, economical and social systems can survive only when their states stay within some prescribed domains. We denote these viability domains by Π.

Based on these ideas, we consider some known and new mathematical models. These models contain a dynamical component and a stochastic part describing a random environment. For such models, a natural measure of stochastic stability can be introduced: the probability P_T(Π) that the system state stays in the domain Π within the time period t ∈ [0, T]. This characteristic is well known and studied [308]. For brevity, if the system state stays within Π for t ∈ [0, T], we say that our system survives on [0, T].

Besides this stability measure, the idea of a "generic" system plays an important role in this chapter. Two concepts of genericity are known.

Concept 1. Suppose a system depends on parameters P. Following standard ideas [39, 110] of differential topology, we say that a property holds for a generic system if this property holds for an open dense set in the space of possible values of the parameters P. This approach is popular in the theory of dissipative dynamical systems [128, 197, 268].

Concept 2. An alternative approach is to introduce a measure μ on the set of the parameters P. Then, a generic property is valid for all values P besides, perhaps, values from a set S such that μ(S) = 0. In other words, the generic property holds for almost all P with respect to the measure μ.

An interesting discussion of these two concepts can be found in [128]. The second concept is used in the KAM theory [12].
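The survival probability P_T(Π) introduced above is easy to estimate by straightforward Monte Carlo. The sketch below uses a damped random walk with Π = [−1, 1] as an illustrative stand-in (the dynamics and all parameters are my own toy choices, not a model from the text); it shows the characteristic monotone decay of P_T with the horizon T:

```python
import random

# Monte Carlo estimate of the survival probability P_T(Pi) for an
# illustrative noisy system u(t+1) = a*u(t) + noise with Pi = [-1, 1].
def survival_probability(T, trials=20000, a=0.8, sigma=0.3, seed=1):
    rng = random.Random(seed)
    survived = 0
    for _ in range(trials):
        u, alive = 0.0, True
        for _ in range(T):
            u = a * u + rng.gauss(0.0, sigma)
            if not -1.0 <= u <= 1.0:    # the state left Pi: unviable
                alive = False
                break
        survived += alive
    return survived / trials

p5, p20, p80 = (survival_probability(T) for T in (5, 20, 80))
print(p5 > p20 > p80)  # True: P_T decreases as the horizon T grows
```

For a fixed system of this kind, P_T(Π) tends to 0 as T → ∞, which is exactly the instability claimed in the first Gromov–Carbone hypothesis.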

4.1.2 Evolution, graphs and dynamical networks

Evolution can stabilize unstable systems. If we consider a set of unstable systems with parameters P_i(t), which can change from time to time, then the limit of the survival probability P_T(Π) as T → ∞ may be positive. Briefly, a fixed system is almost always unstable, but an infinite chain of evolving systems may be stable.

As a model of complex biological systems, we consider circuits (networks). This choice is natural since, during the last decades, much attention has been given to problems of global organization, stability and evolution of complex networks such as protein and gene networks, networks of metabolic reactions, neural and economical circuits, the Internet etc. (see [7, 106, 133, 134] for an overview).

The simplest mathematical model of such a network is a (directed) graph. For example, for a gene network, we can associate with this network a graph where a node describes a gene, and the i-th node is connected with the j-th one if the i-th gene acts on the j-th one. The evolution of such graphs can be considered as an algorithm adding or removing edges and nodes. Stability can be examined in different contexts. For example, we can examine how many edges (or nodes) must be eliminated in order to destroy the connectivity of the graph. In biological applications, such an elimination may simulate mutations.

The first theory of graph evolution was developed by Erdös and Rényi [7, 70, 144]. They assumed that at time moments 0, 1, 2, . . . , a new edge appears with probability p, where p is a fixed number. This theory leads to a Gaussian distribution C̄(k) of the valency of a node. Recall that the valency (degree) of a node is the number of the nodes adjacent to this node; the quantity C̄(k) is the probability that a node has k adjacent nodes [7]. This probability distribution describes network connectivity. It was found that in real networks, C̄(k) has another, non-Gaussian form. Networks have the so-called scale-free structure when C̄(k) ≈ const · k^{−γ}, with the exponent γ usually lying within (2, 3). Such networks have a few nodes with a great valency (connectivity), whereas most of the nodes have a small valency [7]. This shows that the network evolution is more complicated than described by the Erdös–Rényi model. The evolution algorithm leading to scale-free organization was proposed by Albert and Barabasi [7]. This algorithm uses the idea of preferential attachment: the probability that a new edge is incident to the i-th node is proportional to the degree of this node. Besides this algorithm, a number of other growth algorithms can lead to scale-free organization. In particular, an algorithm proposed in [224] generates a hierarchical modular structure observed in metabolic networks.

Interesting properties of graphs associated with actual biological, informational and economical systems can be described as follows. The graph diameter is restricted (the diameter is the maximal length of the shortest path connecting two nodes). A small diameter accelerates dynamical processes in the circuit. Moreover, it is well known that the averaged connectivity ⟨C⟩ increases during evolution. Here, ⟨C⟩ can be computed via C̄(k): ⟨C⟩ = Σ_{0≤k≤∞} k C̄(k). Another property, found experimentally for protein nets, is as follows: more connected proteins are more important for organisms, i.e. lethality correlates with connectivity [134]. Stability of the scale-free structures is high with respect to a random attack, when the nodes to eliminate are chosen randomly. However, this stability is weak with respect to a "terrorist" attack, when one eliminates the most connected nodes [7].

However, metabolic or gene networks cannot be described completely by simple graph models. They constitute complex dynamical systems where a scheme of interaction of substrates, ferments or genes can be associated with a graph. A part of the substrates enters this system from an external medium (input) and another part can be considered as an output (products). It is well known that these systems successfully support an output independent of the fluctuating input [8, 156].

It is difficult to propose a mathematical model describing metabolic reactions, neural or gene interactions in detail. Different circuit models were proposed ([67, 68, 86, 118, 136, 182, 188, 226, 237] among many others; see [34, 257] for an overview) to take into account the theoretical ideas and experimental information on gene interaction. Some gene net models [188, 237] can be considered as a generalization of the famous Hopfield model of the attractor neural network [118]. We consider a simplified version of [188] and take into account random fluctuations and the evolution of network parameters. We assume that gene (protein) interaction is pairwise and is defined by an m × m matrix K. A directed graph can be associated with this matrix: two nodes are connected if the corresponding entry of K is not equal to 0.

An evolution model associated with these networks can be considered as a combination of the graph evolution approaches described above and dynamical circuit models. It can be described as follows. One has a discrete set Y (finite or countable). This set can be considered as a "genetic code." We also have a map from Y to the set P of the network parameters (the number of genes m, the interaction matrix K etc.). "Evolution" is a time-continuous Markov process with values in Y, which changes system parameters.
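The preferential attachment rule mentioned above can be sketched in a few lines (a minimal version in which each new node attaches a single edge; the network size and thresholds are illustrative):

```python
import random

# Minimal sketch of Albert-Barabasi preferential attachment: each new node
# attaches one edge to an existing node chosen with probability proportional
# to that node's current degree.  The 'targets' list holds every node once
# per unit of degree, so a uniform pick from it is automatically degree-biased.
def preferential_attachment(n_nodes, seed=0):
    rng = random.Random(seed)
    degree = {0: 1, 1: 1}        # start from the single edge 0-1
    targets = [0, 1]
    for new in range(2, n_nodes):
        t = rng.choice(targets)  # P(t) proportional to degree[t]
        degree[new] = 1
        degree[t] += 1
        targets += [new, t]
    return degree

deg = preferential_attachment(5000)
low = sum(1 for d in deg.values() if d <= 3)
print(max(deg.values()) > 30, low > 0.7 * len(deg))  # a few hubs, many leaves
```

The heavy-tailed degree distribution (a few strongly connected hubs, many weakly connected nodes) is exactly the scale-free signature discussed in the text.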


4.1.3 Main problems and some ideas

This chapter is focused on the following problems.
(A) Viability of networks with respect to fluctuations describing internal noise and environment changes. We also consider viability of systems that can be described by reaction-diffusion models.
(B) We investigate stable evolution algorithms, for which lim_{T→∞} P_T(Π) > 0, i.e. a chain of evolving unstable systems has nonzero chances to survive for large times. Our goal is to explain the main properties of evolution (for example, why systems must make copies and the mutation probability is small, why the genetic code size cannot be bounded during the evolution process, why the evolution tree must be large and the networks should be scale-free).

To study these problems, we introduce a concept of a priori computational complexity of evolution problems. It allows us to apply some ideas and notions from complexity theory [83, 94, 95, 199, 239]. Indeed, it seems that many evolution problems are, in a certain sense, complex. Roughly speaking, since "almost all" systems are unviable, constructing a viable system is a "complex" problem. In fact, we shall show, in the framework of some simplified models, that many evolution problems are hard and NP-complete (for more information about NP-completeness, see the books [83, 199]; for biological applications, see [221]).
(C) Evolution rate problem: we estimate the running time of evolution algorithms that find viable structures with a large fitness among many possible structures. This cornerstone problem was posed by Charles Darwin [57]. In fact, it is not obvious how to obtain a complex, effectively working organ by an evolution using a local search based on random mutations and selection. For example, the cell can be considered as "a biological computer" processing complicated feedback [8, 192], and it is unclear how one can construct such a computer (consisting of unstable elements) by evolutionary mechanisms.
It is only clear that this problem is difficult. Let us turn to Charles Darwin:

"To suppose that the eye, with all its inimitable contrivances for adjusting the focus to different distances, for admitting different amounts of light, and for the correction of spherical and chromatic aberration, could have been formed by natural selection, seems, I freely confess, absurd in the highest degree. When it was first said that the sun stood still and the world turned round, the common sense of mankind declared the doctrine false; but the old saying of Vox populi, vox Dei, as every philosopher knows, cannot be trusted in science. Reason tells me, that if numerous gradations from a perfect and complex eye to one imperfect and simple, each grade being useful to its possessor, can be shown to exist; if further, the eye does vary ever so slightly and the variations be inherited, which is certainly the case; and if any variation or modification in the organ be ever useful to an animal under changing conditions of life, then the difficulty of believing that a perfect and complex eye could have been formed by natural selection, though insuperable by our imagination, can hardly be considered real. How a nerve comes to be sensitive to light, hardly concerns us more than how life itself first originated; but I may remark that, as some of the lowest organisms, in which nerves cannot be detected, are known to be sensitive to light, it does not seem impossible that certain elements in their tissues or sarcode should have become aggregated and developed into nerves endowed with special sensibility to its action." (Darwin, [57], Chapter VI)

This organ development problem has been considered in many books and papers; see, for example, [227] and references therein. In many works, the genes were considered as Boolean variables, and a gene can be expressed (turned on) or not expressed (turned off) [8, 34, 136, 257]. Using this fact and some ideas from theoretical computer science, we consider a mathematical formalization of this organ development enigma. This formalization is connected with the following mathematical problem: for some hard combinatorial problems of a random structure, do there exist algorithms [52, 83] which solve these problems in a relatively short running time for a certain subclass of instances? We formulate the notion "relatively short" using ideas proposed by L. Valiant [303, 304] and connect it with the famous P = NP problem [52].
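To make the k-SAT reference above concrete, here is a brute-force satisfiability check on a toy instance of my own choosing; the loop over all 2^n assignments is exactly the exponential search space that makes the problem hard:

```python
from itertools import product

# Brute-force SAT: a clause is a list of literals (variable index, sign),
# and a clause is satisfied if at least one literal matches the assignment.
def satisfiable(n_vars, clauses):
    for assign in product([False, True], repeat=n_vars):
        if all(any(assign[v] == sign for v, sign in clause) for clause in clauses):
            return True
    return False

# (x1 or not x2) and (x2 or x3) and (not x1 or not x3)
clauses = [[(0, True), (1, False)], [(1, True), (2, True)], [(0, False), (2, False)]]
print(satisfiable(3, clauses))                      # True: e.g. x1 = x2 = True, x3 = False
print(satisfiable(1, [[(0, True)], [(0, False)]]))  # False: x1 and not x1
```

No algorithm essentially better than such exhaustive search is known for general k-SAT with k ≥ 3, which is why "relatively short" running times can only be hoped for on restricted subclasses of instances.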

4.2 Neural and genetic networks under random perturbations

4.2.1 Systems under consideration

Let us describe the models that we will use to describe biological systems. In the following subsections, we focus our attention on time-discrete systems.

Let n denote the number of quantities that characterize the current state of a given system. This means that the state of the system can be described by a tuple u = (u_1, . . . , u_n) consisting of the values of all these characteristics. The set H of all possible states of a system is therefore H = R^n.

The state of a biological system evolves with time. In practice, even when we have measuring instruments that "continuously" monitor the state of a biological system, we can only perform a finite number of measurements in each time interval. So, in fact, we only know the values of the corresponding characteristics at certain moments of time. Thus, to get a good description of the observed data, it makes sense to consider discrete time models. Usually, there is a certain frequency with which we perform measurements, and so we get values measured at moments T_0, T_1 = T_0 + ΔT, T_2 = T_0 + 2ΔT, . . . , T_t = T_0 + t·ΔT, . . . . It is therefore convenient to call the integer index t of the moment T_t the t-th moment of time, and talk about the state u(t), t = 0, 1, . . . , at the t-th moment of time.

The state u(t) of a system at a given moment of time affects the state of the system u(t + 1) at the next moment of time. However, the next state of the system is determined not only by its previous state: biological systems operate in an environment in which unpredictable ("random") fluctuations occur all the time. Let m be the number of parameters that describe such fluctuations; then, the current state of these fluctuations can be described by an m-dimensional vector ξ(t) = (ξ_1(t), . . . , ξ_m(t)).


Once we know the current state of the system u(t) and the current state ξ(t) of all the external parameters that influence this system, we should be able to determine the next state u(t + 1). In other words, we consider the following dynamics:

u_i(t + 1) = f_i(u(t), ξ(t)),   t = 0, 1, . . . ,    (4.1)

with initial conditions u_i(0) = φ_i. To specify the evolution, we must therefore describe the transition functions f_i and the random process ξ(t). We will do this in the following two subsections.

4.2.2 Transition functions

The transition functions f_i describe physical processes and thus, ultimately, come from physics. Most equations of fundamental physics, for example, the equations of quantum mechanics, electrodynamics, and gravity, are partial differential equations with polynomial right-hand sides. Other physical phenomena are described by partial differential equations that use fundamental fields, i.e. solutions to the fundamental physics equations. The resulting dependencies can again be used in the right-hand sides of other physics equations. The resulting functions are known as Pfaffian functions; these functions are formally defined as follows [139]:

Definition 4.1. By a Pfaffian chain, we mean a sequence of real analytic functions f_1(x), f_2(x), . . . , f_r(x) defined on R^n which, for every j = 1, . . . , r, satisfy a system of partial differential equations

∂f_j/∂x_k = g_{kj}(x, f_1(x), . . . , f_j(x)),   k = 1, . . . , n,

with polynomials g_{kj}. For each Pfaffian chain, the integer r is called its length, and the largest of the degrees of the polynomials g_{kj} is called its degree. A function f(x) is called Pfaffian if it appears in a Pfaffian chain.

It is known that Pfaffian functions satisfy many important properties, in particular:
– the sum and the product of two Pfaffian functions f_1 and f_2 of lengths r_i and degrees d_i are again Pfaffian functions, of length r_1 + r_2 and degree d_1 + d_2;
– superpositions of Pfaffian functions are also Pfaffian.

Results from the theory of Pfaffian functions and the powerful computational tools that are based on these results are described in [82, 96, 139]. In this chapter, we consider dynamical systems (4.1) with Pfaffian functions f_i. The class of such systems will be denoted by Kh.

Class Kh. This class consists of systems (4.1) for which the f_i are Pfaffian functions.

We will also consider several subclasses of this class, subclasses which are known to be useful in applications. Two of these subclasses are related to the fact that when the fluctuations are small and/or the deviation of the state from a nominal state is small, we can expand the dependence f_i into a Taylor series and keep only the first few terms (or even only the first term) in this expansion, because higher-order terms can be safely ignored. In this case, we end up with a polynomial (or even linear) dependence. Usually, the deviation of the state of a biological system from its nominal state can be reasonably large, so terms which are quadratic in this dependence cannot be ignored; however, the random fluctuations can be small. When the random fluctuations are so small that we can keep only the terms which are linear in ξ, we get the following class, which is well studied in control theory:

Class Kl. This class consists of systems (4.1) in which the transition functions f_i have the form f_i(u, ξ) = g_{0i}(u) + Σ_{k=1}^{m} ξ_k g_{ki}(u), with polynomial g_{ki}.

When the fluctuations are larger and their squares can no longer be ignored, we get a more general class of systems:

Class Kp. This class consists of systems (4.1) in which the transition functions f_i are polynomial in u and ξ.

Comment. Here, l in Kl stands for linear (meaning linear dependence on ξ), while p in Kp stands for polynomial.

Another important class comes from the situation when a random fluctuation simply means selecting one of finitely many options.

Class Kr. Let us assume that we have a finite family of maps u → f̃^{(k)}(u) = (f̃_1^{(k)}(u), . . . , f̃_n^{(k)}(u)), u ∈ R^n, where k = 1, . . . , m. Assume that f_i = f̃_i^{(k(t))}(u) + λ·g_i(u, ξ), where λ > 0 is a parameter, f̃_i^{(k)} and g_i are Pfaffian, and k(t) is a random index: at each moment t, we make a random choice of k with probabilities p_k ≥ 0, p_1 + p_2 + · · · + p_m = 1 (these choices at different moments of time are made independently). In the particular case when all the maps u → f̃^{(k)}(u) are contractions and λ = 0, we obtain so-called iterated function systems; see, e.g. [127].

It is important to mention that the class Kh contains many neural and genetic circuit models. Genetic circuits were proposed to take into account theoretical ideas and experimental information on gene interaction; see, e.g. [86, 87, 158, 188, 267]; see [257] for a review. We consider the following circuit model:

u_i(t + τ) = σ( Σ_{j=1}^{N} K_{ij}(t) u_j(t) + h_i − ξ_i(t) ),   u_i(0) = x_i,    (4.2)

where t = 0, τ, 2τ, . . . , d·τ, i = 1, 2, . . . , N, d and N are positive integers, τ > 0 is a real parameter, and x = (x_1, . . . , x_N) is an initial condition. It is usually assumed that the function σ is a strictly monotone increasing function for which lim_{z→−∞} σ(z) = 0 and lim_{z→∞} σ(z) = 1.
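A direct simulation of the circuit model (4.2) is straightforward. The sketch below uses the standard logistic sigmoid; the particular matrix K, thresholds h and noise level are illustrative choices of mine, not values from the text:

```python
import math
import random

# One step of the circuit model (4.2):
#   u_i(t + tau) = sigma( sum_j K_ij u_j(t) + h_i - xi_i(t) )
def step(u, K, h, xi):
    sigma = lambda z: 1.0 / (1.0 + math.exp(-z))  # strictly increasing, range (0, 1)
    n = len(u)
    return [sigma(sum(K[i][j] * u[j] for j in range(n)) + h[i] - xi[i])
            for i in range(n)]

rng = random.Random(0)
n = 3
K = [[rng.uniform(-1.0, 1.0) for _ in range(n)] for _ in range(n)]
h = [0.1] * n
u = [0.5] * n
for _ in range(50):                     # d = 50 iterations of step tau
    xi = [rng.gauss(0.0, 0.05) for _ in range(n)]
    u = step(u, K, h, xi)
print(all(0.0 < ui < 1.0 for ui in u))  # True: sigma confines the state to (0, 1)
```

Because σ maps R into (0, 1), the state stays in the unit cube for all times regardless of the noise, which is why viability questions for such circuits concern finer domains Π inside that cube.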


Many functions σ used in practical applications are Pfaffian functions of length 1; moreover, they are solutions of a differential equation σ′ = P(σ), where P is a polynomial for which P(0) = 0, P(1) = 0, and P(z) > 0 for all z ∈ (0, 1).
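For instance, the logistic sigmoid σ(z) = 1/(1 + e^{−z}) satisfies σ′ = P(σ) with P(s) = s(1 − s), which indeed has P(0) = P(1) = 0 and P > 0 on (0, 1). A quick numerical check of this identity (the step size and grid are arbitrary choices):

```python
import math

# Verify numerically that sigma' = sigma * (1 - sigma) for the logistic
# function, comparing a central difference against the polynomial P(sigma).
sigma = lambda z: 1.0 / (1.0 + math.exp(-z))
dz = 1e-5
max_err = max(
    abs((sigma(z + dz) - sigma(z - dz)) / (2 * dz) - sigma(z) * (1.0 - sigma(z)))
    for z in [-3.0 + 0.1 * k for k in range(61)]
)
print(max_err < 1e-8)  # True: the polynomial identity holds to numerical accuracy
```

This single first-order polynomial ODE is what makes the logistic sigmoid a Pfaffian chain of length 1 and degree 2.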

4.2.3 Assumptions on random processes ξ

We assume that the fluctuations ξ_i(t)
– are Markov processes, i.e. the probabilities of different values of ξ(t) depend only on the previous values ξ_i(t − 1), and
– are strong, in the sense that there is a positive probability of moving into a small vicinity of any state.

Formally, this assumption of strong fluctuations can be described as follows. For δ > 0, let V(θ, δ) denote the δ-neighborhood of a point θ ∈ R^m.

Assumption 4.2. Assume ξ_i(t) are Markov processes with discrete time, t = 0, 1, 2, . . . . Moreover, let us suppose that for each θ, θ_0 and for each t > 0 and δ > 0, the probability that the process ξ(t) attains the neighborhood V(θ, δ) from the start point θ_0 is positive:

Prob{ξ(t) ∈ V(θ, δ) | ξ(0) = θ_0} > c(δ) > 0,

where the constant c(δ) is uniform in t.

Mathematically, this assumption is one of the versions of ergodicity of the Markov process. It holds for many stochastic processes that are used in modeling biological phenomena.

4.2.4 Evolution in the time discrete case

Systems (4.1) are well-suited to describe the dynamics of a single individual. Individuals belonging to different species s may have different dynamics. So, a more accurate way to describe the dynamics is to use

u_i(t + 1) = f_i(u(t), ξ(t), s),

where s describes the species. In biology, different species and subspecies can be characterized by their DNA, i.e. by a sequence of symbols. Without losing generality, we can always encode the four-letter language of DNA codons into a binary code, so we can assume that s is a finite binary sequence. In mathematical terms, we consider a discrete (finite or countable) set S with N(S) ≤ +∞ elements s. We assume that all elements of the set S are binary strings, i.e. that S ⊆ S_∞, where S_∞ denotes the set of all possible finite binary strings.
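The binary encoding mentioned above is elementary: two bits per letter suffice for the four-letter DNA alphabet (the particular bit assignment below is arbitrary):

```python
# Two bits per letter encode the 4-letter DNA alphabet as a binary string s.
# The specific assignment A->00, C->01, G->10, T->11 is an arbitrary choice.
ENC = {"A": "00", "C": "01", "G": "10", "T": "11"}
DEC = {v: k for k, v in ENC.items()}

def encode(dna):
    return "".join(ENC[c] for c in dna)

def decode(bits):
    return "".join(DEC[bits[i:i + 2]] for i in range(0, len(bits), 2))

s = encode("GATTACA")
print(s, decode(s) == "GATTACA")  # 10001111000100 True
```

Any such fixed-width code is invertible, so nothing is lost by treating the species label s as an element of S_∞.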

The fact that the transition functions f_i depend not only on u(t), but also on s can be described by saying that we extend our original phase space H of all the states u to a larger space H × S. In addition to dynamics within a species, we also have to take into account the possibility of mutation, when s changes. We assume that these transitions follow a Markov chain with transition probabilities p_{s′s}(u) to go from s to s′; in line with the biological applications, we take into account that the probability of different transitions (mutations) may depend on the state u.

To take into consideration that only states from a certain set Π ⊆ R^n (called the viability domain) are viable, we use the following standard construction. We formally introduce an absorbing state a such that p_{s′a}(u) = 0 for each s′ ≠ a. If u leaves the viability domain Π, then the system automatically reaches this absorbing state a.

So, our model is defined by:
(a) a family of random dynamical systems u_i(t + 1) = f_i(u(t), ξ(t), s), i = 1, 2, . . . , n, corresponding to different binary strings s ∈ S;
(b) a set Π ⊆ R^n;
(c) a Markov chain M with the state space S ∪ {a} and the transition matrix W(u) with entries p_{s′s}(u) (the transition probability from s to s′ depending on u) such that p_{s′a}(u) = 0 (if s′ ≠ a).

Regarding the dependence of f on s ∈ S, we assume the following. Let us consider a class C of dynamical systems (4.1) with f depending on parameters r ∈ P, where P is the set of possible values of the parameters. We assume that the set P is equipped with a measure ν. For example, if the functions f_i are defined by a sequence of polynomials (as is the case when f_i are Pfaffian functions), then P is the set of all tuples of coefficients of all these polynomials, and as ν, we can select the standard Lebesgue measure on this set. It is reasonable to assume that the parameters r are random functions of s.
To describe these random functions, we introduce, for every natural number l, a probability measure μ_l on the set of all possible mappings α from binary strings of length ≤ l to the set P. It is reasonable to make the following assumption:

Assumption 4.3. For every set A ⊆ P of ν-measure 0, for every integer l, and for every string s of length ≤ l, the probability that the parameters α(s) are in A is 0:

μ_l(B_l(s, A)) = 0,   where B_l(s, A) := {α : α(s) ∈ A}.

For systems from the class C, let us denote by P(Π, v, r, t) the conditional probability that at the next moment of time the system will still be viable (u(t + 1) ∈ Π) under the condition that its previous state is u(t) = v ∈ Π and that the previous value of the parameters was r(t) = r.


Definition 4.4. We say that a class of systems C from the general class Kh is generically unviable in Π if there exists a positive function κ(r) such that

sup_{u(t)∈Π, t=0,1,2,...} P(Π, u, r, t) = 1 − κ(r)    (4.3)

for ν-almost all values of the parameter r, and

inf_r κ(r) > 0.    (4.4)

This means that at every step, there is a nonzero probability ≥ κ(r) > 0 of moving into an unviable state, and since the unviable state is absorbing, we are guaranteed to eventually move into an unviable state.

For every viable state u_0 ∈ Π and for every integer T, let us denote by P_T(Π, u_0) the conditional probability that u(t) ∈ Π for all t = 1, 2, . . . , T under the condition that u(0) = u_0.

Definition 4.5. We say that the evolution is stable if there exists a real number δ > 0 for which P_T(Π, u_0) > δ for all integers T > 0 and all states u_0 ∈ A, where A is an open subset of Π. If such a real number δ and set A do not exist, we say that the evolution is unstable.
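Definitions 4.4 and 4.5 can be illustrated by a deliberately crude toy simulation (entirely my own construction, not a model from the text): a replicating population whose genotype s is fixed is eventually absorbed in the unviable state during a long environmental episode, whereas a population whose offspring mutate can re-adapt and survive:

```python
import random

# Toy illustration of unviability vs. stable evolution: individuals carry a
# genotype s in {0, 1}; survival is likely only when s matches the current
# environment.  Survivors leave two offspring; with mutate=True each
# offspring flips its genotype with probability p_mut.  All rates are
# illustrative choices of mine.
def population_survives(T, mutate, cap=100, p_mut=0.1, seed=0):
    rng = random.Random(seed)
    env = 0
    pop = [0] * cap
    for _ in range(T):
        if rng.random() < 0.1:          # random environment switch
            env = 1 - env
        survivors = [s for s in pop
                     if rng.random() < (0.95 if s == env else 0.3)]
        if not survivors:
            return False                # population absorbed: unviable
        pop = []
        for s in survivors:             # each survivor leaves two offspring
            for _ in range(2):
                child = 1 - s if (mutate and rng.random() < p_mut) else s
                pop.append(child)
        pop = pop[:cap]                 # resource limit
    return True

trials = 50
fixed = sum(population_survives(200, False, seed=k) for k in range(trials))
evolving = sum(population_survives(200, True, seed=k) for k in range(trials))
print(fixed, evolving)  # mutating populations survive far more often
```

In this caricature, a fixed genotype corresponds to a single system with frozen parameters (generically unviable), while mutation plus replication plays the role of the evolving chain of systems whose survival probability stays bounded away from zero.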

4.2.5 Assumptions on fluctuations in the time continuous case

Let us consider the time-continuous case, when our system is defined by differential equations. We can consider, for example, a simplified genetic network model (2.272) where the dependence on x is eliminated:

du_i/dt = R_i σ( Σ_{j=1}^{m} K_{ij} u_j − h_i + ξ_i(t) ) − λ_i u_i,    (4.5)

where the ξ_i are trajectories of a random process. We assume that the trajectories ξ_i are piecewise continuous. Then, the solutions of this system of differential equations are defined for all t ≥ 0. More general reaction-diffusion systems will be investigated at the end of this chapter.

Let us formulate some assumptions on the random process ξ_i(t). We suppose that this process is a time-homogeneous Markov process with values in R^p. Denote by P(t, x, B) the transition probability from x to the set B within time t, t ≥ 0:

P(t, x, B) = Prob{ξ(t) ∈ B | ξ(0) = x}.

The following Assumption 4.6 is important for establishing nonviability of time-continuous systems with fixed parameters. Notice that the standard Wiener process and many diffusion processes satisfy this assumption [84].

Let us define probabilistic measures μ_T on the set of all continuous trajectories t → ξ(t) ∈ S_r defined on [0, T]. These measures describe probabilities of different spatially distributed random fluctuations. The condition that ξ(t) lies in S_r means that our fluctuations are sufficiently smooth in x for each fixed t.

Assumption 4.6. For each T, any piecewise continuous curve t → z(t) ∈ R^p, t ∈ [0, T], and any number δ > 0, the probability that the trajectory ξ(t), t ∈ [0, T], lies in the tubular neighborhood V(δ, z(·), T) of z(t) is positive:

Prob_{μ_T}{ξ : ξ ∈ V(δ, z(·), T)} > κ(T, z, δ) > 0.

This assumption implies that the probability of a large deviation may be small, though it is not zero. This probability can be estimated by large deviation theory [308]. For the finite dimensional case dim H = n < ∞, this theory can be applied as follows. We consider a random model where the evolution of ξ is defined by Ito's stochastic equations

dξ_i = R_i(ξ) dt + λ^{1/2} Σ_{j=1}^M g_ij(ξ) dω_j,    (4.6)

where ω_j(t) are standard mutually independent Wiener processes (white noises), the matrix G is positive definite and is a smooth function of ξ, and R_i are smooth functions of ξ = (ξ_1, . . . , ξ_n). Here λ > 0 is a parameter.

Lemma 4.7. Let ξ̄(t), where t ∈ (t_0, T) and t_0 > 0, be a prescribed smooth curve in R^n. Let U_{ξ̄,ϵ} be an ϵ-small open tubular neighborhood of ξ̄:

U_{ξ̄,ϵ} = {U(t) : sup_{t∈[t_0,T]} |U(t) − ξ̄(t)| < ϵ}.    (4.7)

Then the probability P(t_0, T, ϵ, ξ̄) that the random process ξ(t) lies entirely in this neighborhood for all t ∈ [t_0, T] is positive and admits the exponential estimate

0 < P(t_0, T, ϵ, ξ̄) < C_2 exp(−c_2 λ^{−1})    (4.8)

if ξ̄ is not a trajectory of the corresponding deterministic system

dξ_i/dt = R_i(ξ).    (4.9)

Proof. Let us denote by f the vector valued function f = (f_1, . . . , f_n), f_i(t) = R_i(ξ̄(t)). The theory of large deviations [308] states that, for fixed ϵ and ξ̄, the asymptotics of log(P) is defined by the so-called action functional I, i.e.

λ log(P) → −I_s(ξ̄),  λ → 0,    (4.10)

where

I_s(ξ̄) = (1/2) ∫_0^T G^{−1}( dξ̄/dt − f(t) ) · ( dξ̄/dt − f(t) ) dt.    (4.11)

We see that P > 0, and if ξ̄ is not a trajectory of system (4.6) with λ = 0, then log(P) ≈ −c_3 λ^{−1}.

The second assumption means that the probability of sharply oscillating fluctuations is small enough:

Assumption 4.8. For each T, one has

Prob_{μ_T}{ ξ : ∫_0^T ‖ξ(t)‖_{S_r} dt > R } = Γ_T(R) → 0  as R → ∞.

4.2.6 Network viability under random fluctuations

We consider the viability problem for systems (4.5). Let us assume that the viability domain Π ⊂ R^m is a subset of the nonnegative cone R_+^m = {u_i ≥ 0}. If, moreover, Π is a subset of the cone

Con_{i_1,i_2,...,i_s} = {u : u ∈ R_+^m, u_{i_l} > 0, l = 1, 2, . . . , s}    (4.12)

for the maximal possible s, then we say that i_1, i_2, . . . , i_s ∈ K are key indices. Here, K stands for the set of the key indices.

Remark 1. If i is a key index, then u_i must be positive (the i-th node must be active). For biological applications, such a node may correspond to a gene important for organism functionality. Let us consider another, nonbiological example: let our circuit simulate a country whose nodes are cities; then key indices can correspond to important administrative centers. The idea behind the key node definition is that it is impossible for a network to be viable if all nodes must be active. We cannot conserve all nodes, but we can save the important ones, sacrificing some other nodes which are not so important.

As a measure of the stochastic stability of system (4.5) with the initial state u_0, we consider the probability

P(P, Π, u_0, T_1, T_2) = Prob{u(t) ∈ Π for each t ∈ [T_1, T_2]},    (4.13)

where u = (u_1, . . . , u_m). This probability depends on the circuit parameters P and the homeostasis domain Π. The quantity P(P, Π, u_0, T_1, T_2) can be called the survival probability on the time interval [T_1, T_2]. For brevity, we shall sometimes denote it by P(T_1, T_2) or P_T (if T_1 = 0, T_2 = T), omitting the dependence on the parameters P, u_0, Π.

4.2.7 Complexity

Let us define a complexity for systems (4.5). Each interaction matrix K generates a directed graph with m nodes and at most m(m − 1)/2 edges. We suppose that the i-th and j-th nodes are connected by a directed edge if the corresponding entry K_ij ≠ 0. It is well known that biological circuits are far from being completely connected [7]. The valency (degree, connectivity) of a node (vertex) is the number of edges that enter this node: V_i = Card(E_i) = |E_i|, E_i = {j : K_ij ≠ 0}. For each fixed node i, we have a valency V_i ≤ m: only V_i among the entries K_ij are not equal to zero. In applications, for scale-free networks, a typical node has valency V_i ≪ m [7]. We introduce the parameter V by

V = min_{i∈K} V_i,

where the minimum is taken over all key indices i_1, . . . , i_s ∈ K. Let N_key be the number of such indices. So, the parameter V is the minimal connectivity of the key nodes. We estimate the stochastic stability via V and the following parameters:

K_* = max_{ij} |K_ij|,  θ̄ = max_i |Σ_{l=1}^p M_il θ_l|,  r = max_i R_i λ_i^{−1}.    (4.14)

Definition 4.9. The complexity Comp_N of network (4.5) is the quadruple (V, K_*, θ̄, r).

It is difficult to define an analog of such a complexity for systems with general analytic or smooth nonlinearities; however, it is possible for Pfaffian systems. The complexity of a system with a Pfaffian nonlinearity F is the complexity of a Pfaffian chain that defines F (see above). In Appendix 4.10, we consider this Pfaffian complexity. The most interesting property of the estimates obtained in this Appendix is that they are uniform in the network parameters.

4.2.8 Evolution model for the time continuous case

Let us suppose that the parameters P of system (4.5) can depend on some variable y which defines the circuit internal parameters P by a map y → P(y). Let us consider the case where y takes discrete values y_i ∈ Ỹ, and Ỹ is a finite or countable set. If Ỹ is finite, let us denote by N(Ỹ) the number of such states, i = 1, 2, . . . , N.

Suppose that if a trajectory u(t) of system (4.5) leaves Π, our circuit homeostasis instantaneously collapses. To take this assumption into account, it is convenient to formally extend the set Ỹ by adding to Ỹ a marked state y_0 = {∞} [84]. Denote Y = Ỹ ∪ {∞}. The transition probabilities from this marked state to the other states y_i, i = 1, . . . , N, equal zero. The phase space of our evolution model is then H = R^m × Y.

Suppose that, for fixed y, the time evolution of u is defined by equations (4.5). Since the ξ_l satisfy condition (4.6), a solution of (4.5) exists, is unique, and (4.5) defines a Markov process with continuous trajectories u(t). Suppose, moreover, that the time evolution of y is defined by a continuous jump-like homogeneous Markov process with the state set Ỹ [84].

The joint evolution of y, u can be defined as follows. In each state y_i, i ≠ 0, the u-process is defined by (4.5) with the parameters P(y_i) while u ∈ Π and y = y_i. If u leaves Π, the process finishes at the absorbing state y_0 = {∞}. If the process makes a jump from y_i to y_j, j ≠ 0, at t = t_0, we assume that u is continuous: u(t_0−) = u(t_0+). Finally, if this jump involves new nodes, we assume that these new nodes are in zero states at t = t_0+. The process is defined by the transition probabilities P(t, B, i | u_0, j) (the probabilities that the random trajectory u(t, u_0, y_j) of (4.5) with the initial data u_0, y(0) = y_j enters B at the moment t with y(t) = y_i). Such processes are well-studied [84]. Notice that in this case so-called hybrid dynamical systems appear, which have received a great deal of attention during the last decades (see [240] and references therein).

4.3 Instability in random environment

4.3.1 Instability of circuits

Consider the viability problem for system (4.5). Suppose i_1, i_2, . . . , i_s are key indices. We denote by Π_K the corresponding viability domain, and P_T(K, u(0)) denotes the probability that the state u of the system lies in Π_K for all t ∈ [0, T]. We assume that the initial state u(0) ∈ Π_K.

Theorem 4.10. Suppose Assumption 4.6 holds. Then, the viability probability P_T of system (4.5) can be estimated uniformly in u(0) through Comp_N, i.e.

P_T(Π_K, u(0)) < g(T, V, K_*, θ̄, r),

where the function g converges to 0 as T → ∞ for all fixed values V, K_*, θ̄, r, and g is monotone increasing in V. Moreover, if V, K_*, θ̄, r are fixed, the survival probability P_T → 0 as T → ∞.

Proof. Suppose η(t) is a smooth function with values in R^p defined on [t_1, t_2], t_2 > t_1. Let V_{η,δ,t_1,t_2} be a tubular neighborhood of the trajectory η:

V_{η,δ,t_1,t_2} = {v : there is t ∈ [t_1, t_2] such that |v − η(t)| < δ}.    (4.15)

Lemma 4.11. Under condition (4.6), one has

P_{T,δ,η} = Prob{ξ : ξ(t) ∈ V_{η,δ,0,T} for each t ∈ [0, T]} > 0.    (4.16)

This lemma is an immediate consequence of Assumption 4.6.

The next step is an a priori estimate of the solutions of (4.5). One obtains

|u_i| < M_i = max{R_i λ_i^{−1}, s_i},    (4.17)

where s_i are the initial data: s_i = u_i(0). Using this estimate and the properties of σ, one has the inequality

Σ_{j=1}^m K_ij u_j(t) + θ_i − ξ_i(t) ≤ V_i K_* M + θ̄ − ξ_i(t),

where K_*, r are defined by (4.14) and M = max M_j. Let us introduce δ(a) > 0 by

δ(a) = σ(V_i K_* M + θ̄ − a),    (4.18)

and consider the set Ξ_a of trajectories ξ(t) such that

ξ_i(t) > a  for all t ∈ [T_1, T_2]

for some i. The probability that a trajectory ξ(t) ∈ Ξ_a for all t is positive due to Lemma 4.11. Consider system (4.5) on the interval [T_1, T_2]. Let us fix a key index i. Equations (4.5) imply

0 < u_i(t) < M_i exp(−λ_i(t − T_1)) + δ(a) R_i λ_i^{−1} (1 − exp(−λ_i(t − T_1))).

Notice that δ(a) > 0 converges to zero as a → ∞. For sufficiently large T_2 − T_1 and a, one then has that u_i(T_2) is small enough. Therefore, the system state u leaves the domain Π at the moment t = T_2. This shows that P_T < 1 for sufficiently large T, and P_T can be estimated through the network complexity by a function g of the parameters V, K_*, θ̄, r and the time interval length T. This estimate of P_T = P_T(u) < g < 1 is uniform with respect to the initial state u = u(0).

Let us prove that P_T → 0 as T → ∞. In fact, let us consider the sequence p_n = P_{nT}. Notice that by using conditional probabilities, one has

P_{T_1+T_2}(u) ≤ sup_u P_{T_1}(u) sup_v P_{T_2}(v),

and therefore,

0 ≤ p_n ≤ q^n,  0 < q = sup_u P_T(u) < 1.

Thus, one concludes that P_{nT} → 0 as n → +∞, and the theorem is proved.


4.3.2 Instability of time continuous systems

The well-studied models are given by stochastic differential equations (4.6) with white noises:

du = f(u) dt + λ^{1/2} Σ_{j=1}^p g_j(u) dω_j,    (4.19)

where ω_l are the standard mutually independent Wiener processes and λ > 0 is a parameter [120, 308]. The viability of systems (4.19) can be studied for small λ by the methods of large deviation theory (see, for example, [308]). Let us consider the action functional I_s(ξ̄) defined by (4.11), where ξ̄(t) is a smooth trajectory such that

ξ̄(0) = u(0),  u(0) ∈ Π,  ξ̄(T) ∉ Π.    (4.20)

Then, as above, we obtain that the probability P(ξ̄) that the trajectory u(t) of (4.19) lies in a small tubular neighborhood U of ξ̄ satisfies the estimate (4.10). Therefore, we see that the viability probability P_T(u(0)) can be expressed via P(ξ̄) by

P_T(u(0)) = 1 − sup_{ξ̄(·)} P(ξ̄(·)),    (4.21)

where the supremum is over all ξ̄(t) satisfying (4.20).

Let us consider stochastic differential equations

du/dt = f(u) + Σ_{l=1}^p η_l(t) g_l(u),    (4.22)

where η_l are random functions, and f and g_l are smooth functions. Here, it is more difficult to obtain an explicit estimate; however, we can prove a general theorem on nonviability for generic g_l and p > 1. One can show that if Π is an open bounded set with a smooth boundary and the number of noises p > 1, then a generic (in the sense of differential topology, see Concept I in Subsection 4.1.1) system (4.22) is unviable. A proof [289] is based on results of [164], which, in turn, use the fundamental results of R. Thom [110, 268]. The main idea is that for a strong noise ξ, we can remove the bounded term f from equations (4.22), and then system (4.22) can be reduced to a symmetric polydynamical system defined by the vector fields g_l, l = 1, . . . , p. This is a classical object of geometric control theory [164]. The great transversality theorem of R. Thom then allows us to show that for p > 1, a generic polydynamical system is completely controllable. This property shows that there is always a trajectory of this polydynamical system starting from any point u ∈ Π and leaving Π.

We consider system (4.22), where p > 1 (at least two independent noises). We assume that η_l are random processes satisfying Assumption 4.6.

Theorem 4.12. Let us consider system (4.22) and suppose p > 1. Assume Π is an open bounded set in R^n with a smooth boundary. Then, for generic functions g_l(u) defined on Π, the viability probability P_T(Π, u(0)) satisfies

P_T(Π, u(0)) < 1 − κ(T, g),    (4.23)

where the function κ(T, g) > 0.

Proof. See the last section of this chapter, where more general systems are considered.

Notice that if p = 1, i.e. one has a single noise, a generic system can be viable. As an example, one can take m = 1, u_1 = u, P_0(u) = −u, Π = [0, b], b > 0, and P_1 a polynomial in u having a root at a point c, c > b.

One can prove that this theorem does not hold for general Pfaffian systems with nonlinearly involved random functions ξ.

Example. Let us take the Pfaffian system

du/dt = u − u³ + Σ_{i=1}^p b_i σ(ξ_i),

where σ(z) = (1 + exp(z))^{−1}, and the b_i are sufficiently small:

(sup_{z∈R} σ(z)) Σ_{i=1}^p |b_i| < δ.

Indeed, the terms σ(ξ_i) are uniformly bounded. Then, a trajectory u(t) with |u(0)| < 1/2 stays in some interval Π_δ = [−1 + O(δ), 1 + O(δ)] for small δ.
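This boundedness is easy to confirm numerically. In the sketch below (plain Euler stepping; p = 3 noises, δ = 0.1, and the noise statistics inside the sigmoid are all illustrative choices), the perturbation is bounded by δ no matter how strong ξ is, and the trajectory never leaves a small enlargement of [−1, 1].

```python
import numpy as np

rng = np.random.default_rng(1)

# du/dt = u - u^3 + sum_i b_i * sigma(xi_i),  sigma(z) = 1/(1 + exp(z)) in (0, 1),
# so the total perturbation is bounded by sum |b_i| = delta, however wild xi is.
def sigma(z):
    return 1.0 / (1.0 + np.exp(z))

p, delta = 3, 0.1
b = np.full(p, delta / p)
u, dt = 0.4, 0.01                      # |u(0)| < 1/2
trajectory = [u]
for _ in range(20000):
    xi = rng.normal(0.0, 10.0, p)      # arbitrarily strong noise inside sigma
    u = u + dt * (u - u**3 + b @ sigma(xi))
    trajectory.append(u)

traj = np.array(trajectory)
# The trajectory stays in a small enlargement of [-1, 1] and never hits 0:
assert traj.max() < 1.0 + 2 * delta
assert traj.min() > 0.0
```

The point is that the noise enters only through the bounded nonlinearity σ, so a single Pfaffian term cannot push the state out of the attracting region, in contrast with the generic situation of Theorem 4.12.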

4.3.3 Viability for network models

We consider networks (4.5). Let P(y) be a mapping that transforms y ∈ Ỹ into a value P(y) of the network parameters.

Theorem 4.13. Assume that the parameters r, K_*, θ̄, V of network (4.5) are fixed and independent of y, whereas the matrix K and the number of nodes m depend on y. If the set Ỹ is finite, then the viability probability P_T → 0 as T → ∞.

The proof of this theorem is analogous to the proof of Theorem 4.10, since the estimate of P_T given by Theorem 4.10 is uniform in r, K_*, θ̄, V.

If the state set Ỹ is countable, then it is possible that the stochastic stability does not vanish for large times: P_T > P_* > 0 for all T > 0. In this case, the parameter V satisfies the following asymptotic relation:

Prob{V(y(t)) < A} → 0    (4.24)

as t → ∞ for any A. This means that in this case the circuit complexity is unbounded as t → ∞.

This assertion is trivial if there are no a priori restrictions on the averaged valency of the circuit and on the valency growth rates. To see this, let us consider a Markov process with a countable state set defined by a master (Kolmogorov) equation [84]. Denote by p_i(t) the probability of being in the state y_i at the moment t, and by p_∞ the probability of being in the absorbing set {∞}. The master equation has the form

dp_i/dt = Σ_{j≠i, j≠∞} w_ij p_j − Σ_{j≠i} w_ji p_i − w_∞i p_i,  i = 1, 2, . . . , N,    (4.25)

dp_∞/dt = Σ_{j≠∞} w_∞j p_j.    (4.26)

Here, w_ij are the transition probabilities of going to the state y_i from the state y_j per unit time. Due to (4.26), the function p_∞ is nondecreasing in time (a Lyapunov function). Moreover, if w_∞,i > δ > 0 (which always holds when N(Y) < ∞), one has p_∞ → 1 as t → ∞. These facts express "the second law of thermodynamics" for such systems. If w_∞i = 0 for all i, then the Lyapunov function is the entropy H = −Σ_i p_i log p_i.

To show the possibility of stable evolution, consider the case when the complexity C_i of the i-th state, defined by C_i = V(y(i)), is an increasing function of i. Suppose w_{i+1,i} > δ > 0, while the remaining entries w_ij = 0. Then, if C_i increases sufficiently fast with i, one has p_∞(t) < 1 as t → ∞. The number N(y) can be interpreted as a genome size. These results imply that the genome size is not bounded in time, or otherwise the evolution stops. One can say that the genome size has a tendency to grow. Moreover, the circuit size also has a tendency to grow, because V(y(t)) is not bounded in t.

To analyze the evolution process in more detail, let us consider the simplest model, where for each i, j, either K_ij = K_* or K_ij = 0. Then the time evolution of the matrix K can be considered as a time evolution of a directed graph associated with K, and vice versa, the growth of a directed graph generates an evolution of a network (4.5). The graph evolution can then be considered as an algorithm adding edges and nodes. Following [283], let us compare two evolution algorithms, the Erdös–Rényi one [70, 144] and the preferential attachment algorithm [7]. We suppose, in both cases, that the averaged (over the whole circuit) valency is bounded: V̄ = m^{−1} Σ_{i=1}^m V_i, where m = m(t) is the number of circuit nodes. In Erdös–Rényi's algorithm, at time moments 0, 1, 2, . . . , one adds a new edge to the graph with probability p. This leads to a binomial degree distribution, concentrated near the average.
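Returning to the master equation (4.25)–(4.26): the stable-evolution scenario can be integrated directly for a simple birth chain. In the sketch below, the only nonzero transitions are the jumps w_{i+1,i} = δ, and the absorption rate w_{∞i} is assumed to decay geometrically with the state index — one concrete way for the complexity C_i to "increase sufficiently fast". All rates, sizes, and the time horizon are illustrative.

```python
import numpy as np

# Birth chain: from state i jump to i+1 at rate delta, get absorbed at rate eps[i].
# If the escape rate decays along the chain ("complexity grows"), the absorbed
# mass p_inf saturates strictly below 1; with a flat escape rate, absorption wins.
def absorbed_mass(eps, delta=1.0, T=200.0, dt=0.01):
    N = len(eps)
    p = np.zeros(N); p[0] = 1.0
    p_inf = 0.0
    for _ in range(int(T / dt)):          # Euler integration of (4.25)-(4.26)
        flow_up = delta * p
        flow_out = eps * p
        dp = -flow_up - flow_out
        dp[1:] += flow_up[:-1]
        p = p + dt * dp
        p_inf += dt * flow_out.sum()
    return p_inf

N = 400
i = np.arange(N)
p_inf_growing = absorbed_mass(0.5 ** (i + 1))   # escape rate shrinks with the index
p_inf_flat    = absorbed_mass(np.full(N, 0.5))  # constant escape rate

assert p_inf_growing < 0.9     # a positive fraction of trajectories survives forever
assert p_inf_flat > 0.95       # flat escape rate: p_inf -> 1, evolution stops
```

The contrast illustrates the dichotomy in the text: unbounded growth of complexity is the only way for p_∞ to stay below 1.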
In the preferential attachment algorithm [7], the probability that a new edge goes to the i-th node is proportional to the valency (connectivity) of this node. It can be shown that the Erdös–Rényi growth mechanism is always unstable (under some natural assumptions) [283], whereas the Albert–Barabasi preferential attachment evolution can be stable. Indeed, in the Erdös–Rényi case, almost all the nodes have valency close to the average one, which is bounded, and the probability that the key nodes have a great valency is small. In the preferential attachment case, the key nodes can have a valency that grows in time if these nodes had a great valency at the initial moment.

This preferential evolution can be illustrated by the evolution of a country. Looking at the map of a country, we see that a few large cities and many small cities exist. In the development of some countries, we can observe the following effect: there are large administrative centers attracting a large part of the resources and population, and many small cities (such an example is given by the Russian Federation, where Moscow represents almost all of the power).

A more nontrivial problem is to demonstrate that a stable evolution satisfying (4.5) is possible under some natural restrictions. To simplify the statement, let us consider the particular case of time discrete networks (2.273):

u_i^{t+1} = σ( Σ_{j=1}^N K_ij u_j^t + θ_i(x) − ξ_i^t ),    (4.27)

where σ(z) is the step function such that σ(z) = 1 for z > 0 and σ(z) = 0 for z ≤ 0.

Let us formulate some natural restrictions on the circuit growth.

(R1) The averaged valency of the whole network is a priori bounded for all times:

lim_{t→∞} N(t)^{−1} Σ_{i=1}^{N(t)} V_i(t) < K_c,    (4.28)

where K_c is a positive constant, and N(t) is the number of nodes involved in the circuit. This assumption is consistent with experimental data [7, 133, 134]. Note that the averaged valency of the key nodes is not bounded; otherwise, the system is unviable.

(R2) The evolution rate is bounded, i.e. at each evolution time step we add at most one edge and one node to the graph associated with the circuit.

(R3) The noises ξ_i^t are mutually independent random processes with discrete time, satisfying

0 < P(ξ_i^t > a) = ϕ(a) < c_0 exp(−βa)    (4.29)

for each a > 0 and for each t, where c_0, β are positive constants independent of t.

Theorem 4.14. There is a growing circuit (4.27) satisfying (R1), (R2), (R3) such that the viability probability does not vanish for large times: P_T(u^0) > P_* > 0 for all T > 0 and some initial state u^0 ∈ R^{m(0)}.


Proof. The circuit (4.27) is a Boolean one, and u_i ∈ {0, 1} since σ is the step function. Let us set N_key = 1, r_i = 1, θ̄ = 1, K_* > 0. Let us suppose that at the initial moment we have N = N(0) nodes and the matrix K is defined by K_1j = K_*, K_j1 = K_*, K_jj = 0, where j = 1, 2, . . . , V_0. We set θ_i(x) = h > 0, and u_j^0 = 1 for all j.

At the time moment t, where t = 1, 2, . . . , we can add a node and one edge connecting this new node to the key node. For the new node, we have weights K_1j = K_*, K_j1 = K_*. Let us denote by V(t) the valency of the key node at the moment t. The total number N(t) of the nodes at the time moment t is N(t) = V(t) − 1. The nodes 2, 3, . . . , N(t) are usual ones.

Let us first find an upper estimate of the probability Q(t) that the circuit will be destroyed at the moment t. Suppose that, at the moment t, exactly k of the N usual nodes have values 0 and u_1^t = 1 (i.e. the key node is active). Then, the value u_1^{t+1} can become zero at the time moment t + 1 as a result of the noise ξ_1^t acting on the key node. If u_1^{t+1} = 0, this noise should satisfy the inequality

ξ_1^t > h + K_*(V(t) − k).    (4.30)

On the other hand, at the time moment t, the i-th usual node is not active only if the inequality

ξ_i^{t−1} > h + K_*    (4.31)

is fulfilled. Therefore, due to hypothesis (R3) and (4.30), (4.31), the probability Q(t) admits the estimate

Q(t) < Σ_{k=0,...,V(t)} C_{V(t)}^k exp(−β(h + K_*(V(t) − k))) exp(−β(h + K_*)k),

where C_{V(t)}^k are binomial coefficients. Summing over k, one obtains

Q(t) < exp(−β(K_* V(t) + h))(1 + exp(−βh))^{V(t)} = ρ^{V(t)} exp(−βh),    (4.32)

where ρ = exp(−βK_*)(1 + exp(−βh)). Suppose ρ < 1. One has

log P_T = Σ_{t=1,2,...,T} log(1 − Q(t)).    (4.33)

Using this relation and (4.32), one finds

log P_T > −βh + Σ_{t=1}^T log(1 − ρ^{V(t)}).    (4.34)

Assume now that the valency of the key node V(t) increases at least linearly: V(t) > αt + V(0), where α > 0. Then,

Σ_{t=1}^T log(1 − ρ^{V(t)}) > −C − 2 Σ_{t=1}^T ρ^{V(t)} > −C_1,

where C, C_1 are positive constants. This estimate, uniform in T, concludes the proof.

Remark 1. If V(0) is large, running this algorithm is similar to the preferential attachment. The preferential attachment can be considered as a generalization of the described algorithm.

Remark 2. We do not know whether this algorithm is optimal (gives the maximal value P_T for large T) or not. Moreover, other stable algorithms are possible. They depend on the properties of ϕ(a) and on the parameters h, K_*. To explain this dependence, let us consider the case where ϕ(a) has a threshold behavior: this function is not small for bounded a, say, on [0, a_0], and it decreases fast to zero for a > a_0. Let us compare the stability of two structures. The first one is described above: it consists of a single key node connected with V usual nodes. Each usual node is connected with the key node, but there are no connections between the usual nodes. The second structure is formed by a key node and by some clusters. Each usual node is involved in a cluster consisting of n usual nodes. Inside this cluster, all the nodes are completely interconnected; one has N_c = n(n − 1)/2 connections in each cluster. Moreover, each cluster contains a marked central node (this node is also a usual one). Only this central node is connected with the key node. Such a structure is proposed in [224]. Clearly, the second structure can be essentially more stable than the first one. The stability (viability) depends on K_*, h, n and a_0. For large h and small K_*, the first structure (no clusters) is more stable, and the second one is better for small positive or negative h and large K_*.

Remark 3. It is interesting to interpret the growth algorithm from Theorem 4.14 in the framework of our analogy with the development of a strongly centralized country (an Empire) consisting of a number of regions and a bureaucratic center.
The evolution goal is to conserve the center. The parameter K_* determines the intensity of the connection between the center and the regions. The noises can be considered as instability sources in the regions. We notice that the Empire must expand to survive. The described algorithm is as follows: the center obtains resources from all regions, returning some resources to each region. The regions are disconnected. The algorithm works successfully under the condition ρ < 1.

The algorithm does not work if the noises ξ_i are correlated. The appearance of correlated noise can be interpreted, for example, as globally vanishing resources, or a large rebellion. So, a great Empire can be destroyed by a correlated action. We investigate the viability of such empire structures in the coming section using more realistic models important for different applications in biology and economics.
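The star-circuit mechanism of Theorem 4.14 (and its Empire reading) is easy to simulate. In the sketch below, the exponential-tail noise of (R3) is taken as Exp(β) draws, and the growing circuit attaches one satellite per step, as in the proof; all numerical values (h = 1, K_* = 1, β = 2, chosen so that ρ < 1) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def survives(T, V0, grow, h=1.0, K=1.0, beta=2.0):
    """One Monte Carlo run of the star circuit; True if the key node stays active."""
    sat = np.ones(V0, dtype=bool)                # satellite states u_j^t
    key = True                                   # key node state u_1^t
    for _ in range(T):
        xi_key = rng.exponential(1.0 / beta)     # P(xi > a) = exp(-beta*a), cf. (R3)
        xi_sat = rng.exponential(1.0 / beta, sat.size)
        new_key = h + K * sat.sum() - xi_key > 0 # step-function update as in (4.27)
        sat = h + K * float(key) - xi_sat > 0
        key = bool(new_key)
        if not key:
            return False                         # homeostasis destroyed
        if grow:
            sat = np.append(sat, False)          # attach one new node, inactive at birth
    return True

trials, T = 200, 200
p_growing = np.mean([survives(T, V0=10, grow=True) for _ in range(trials)])
p_static  = np.mean([survives(T, V0=1, grow=False) for _ in range(trials)])

assert p_growing > 0.8   # linearly growing valency keeps the key node alive
assert p_static < 0.5    # a lone, non-growing connection fails with high probability
```

The growing star keeps the key node's activation margin increasing in time, which is exactly why the sum in (4.34) converges; the static circuit has a fixed per-step failure probability and dies geometrically fast.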


Remark 4. (Hard combinatorial problems, empires with many centers and greedy algorithms.) It is difficult to generalize this simplest algorithm to the case of many centers and arbitrary noises. We are going to show that the problem of optimal control of satellites by centers is extremely complicated, even if all satellites are separated and only receive orders from the centers. We assume that there are n centers and m satellites with states u_1(t), . . . , u_m(t). The centers emit binary signals ("orders") s_1, . . . , s_n for the satellites (one can say that we are dealing with an "evil empire," where the centers are rigid). The variables s_i ∈ {0, 1} can be considered as logical (Boolean: 0 is "false," 1 is "true"). The states u_i are defined by

u_i(t) = σ( Σ_{j=1}^n K_ij s_j(t) − h_i − ξ_i(t) ),  i = 1, . . . , m,  t = 0, 1, . . . ,    (4.35)

where ξ = (ξ_1(t), . . . , ξ_m(t)) is a vector valued random process, σ is the step function σ_H, with σ_H(z) = 1 for z > 0 and 0 otherwise, and h_i are thresholds. To simplify the problem, let us suppose that ξ(t) and ξ(t′) are independent for different times t ≠ t′. Therefore, at each time moment, ξ is a random vector defined by some fixed probability distribution ρ(ξ, t) (which can depend on time t). Assume that at each moment the centers try to maximize the satellite activity (a greedy algorithm). This leads to the following optimization problem:

s(t) = argmax_s Z(s, t),  Z(s, t) = Σ_{i=1}^m ∫ σ( Σ_{j=1}^n K_ij s_j(t) − h_i − ξ_i ) ρ(ξ, t) dξ.    (4.36)

Many simplifications have been made (we removed the mutual interaction of the satellites and a possible feedback "center–satellites"); nonetheless, problem (4.36) is formidable. Indeed, for large n and m, it is a hard combinatorial problem (we shall discuss this topic below, in Section 4.5). To see this, let us consider the particular case when, for each i, the row K_ij, j = 1, . . . , n, contains only k nonzero entries such that |K_ij| = a, h_i = aM(i), where M(i) is the number of negative entries K_ij for the given i, and ρ(ξ) = ρ_g(ξ_1) · · · ρ_g(ξ_m), where ρ_g are Gaussian distributions (this means that all ξ_i are independent). One can then show that for large a → +∞, the quantity Z converges to a quantity which can be considered as a sum of disjunctions. These disjunctions involve k literals, where each literal is either one of the variables s_i or the negation of this variable, NOT s_i = 1 − s_i. Problem (4.36) is then the so-called max k-SAT problem (how to find an assignment s such that the maximal number of disjunctions is satisfied, see [52, 83]). This is a famous NP-complete problem, and we discuss it in Section 4.5. Thus, we can conclude that even under many assumptions facilitating the model, the empire control is a hard problem.
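To make the max k-SAT connection concrete, here is a minimal sketch: a random 3-SAT instance stands in for the disjunctions arising from (4.36), and a naive greedy bit-flip pass plays the role of the centers' greedy algorithm. The instance sizes and the search procedure are illustrative assumptions; the greedy pass reaches only a 1-flip local optimum, which is exactly the hardness point.

```python
import numpy as np

rng = np.random.default_rng(3)

# Random max 3-SAT instance: a clause is satisfied if at least one of its
# three literals (a variable, possibly negated) is true.
n_vars, n_clauses, k = 30, 200, 3
vars_ = rng.integers(0, n_vars, (n_clauses, k))
neg = rng.integers(0, 2, (n_clauses, k)).astype(bool)

def num_satisfied(s):
    lits = s[vars_] ^ neg              # literal values under the assignment s
    return int(lits.any(axis=1).sum())

# Greedy local search: keep flipping single bits while that strictly helps.
s = rng.integers(0, 2, n_vars).astype(bool)
best = num_satisfied(s)
improved = True
while improved:
    improved = False
    for i in range(n_vars):
        s[i] = ~s[i]                   # try flipping bit i
        score = num_satisfied(s)
        if score > best:
            best, improved = score, True
        else:
            s[i] = ~s[i]               # undo a non-improving flip

assert best == num_satisfied(s)        # greedy stops at a 1-flip local optimum
assert best <= n_clauses
```

Such a greedy pass is cheap, but it carries no guarantee of reaching the global maximum of Z: certifying optimality for large n and m is the NP-hard part.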


4.4 Robustness, attractor complexity and functioning rate

In this section (which mainly follows the work [291]), we consider a simple centralized network that, nonetheless, can produce a number of local attractors. Due to this simple structure, we can investigate the robustness and viability of this network. We analyze situations where the satellites control the center, and the influence of satellite interaction.

4.4.1 Some toy models and numerical simulations

Let us consider a central node interacting with many satellites. This motif can appear as a subnetwork in a larger scale-free network. In order to study robustness, we add a noise to the model. In the case of a single center, we consider a slightly modified, more realistic model.

Let us consider the general model of the centralized gene network (2.282), (2.283) from Chapter 2. Sigmoidal functions can effectively represent the transcriptional regulation of the satellites by the center (equations (2.282)). In TF–microRNA networks, the action of the satellites (microRNAs) on the centers (TFs) is post-transcriptional and produces a modulation of the production rate of the center protein. This modulation can be modeled by soft sigmoidal functions σ or even by linear functions. Thus, we can replace the sigmoid in equations (2.283) by a linear function. Moreover, to simplify our model, we assume that all satellites are, in a sense, identical. The network dynamics can then be described by the following equations:

∂u_i/∂t = dΔu_i − λu_i + r_i σ(b(v − h_i) + ξ_i(x)),    (4.37)

∂v/∂t = d_0 Δv − νv + Q(u),    (4.38)

where Q(u) = r_0 + a Σ_{i=1}^n u_i. The parameters can be interpreted as follows. The coefficient λ > 0 is the satellite mobility (degradation rate), r_i > 0 are the satellite activities, b defines the sharpness of the center action on the i-th satellite, ν > 0 is the center mobility (degradation rate), and a is the strength of the feedback action of the satellites on the center. We set the zero Neumann boundary conditions (2.286) for u_i, v.

The random fields ξ_i summarize the effect of the environment or other extrinsic noise sources on the satellite expression. For instance, random variations of the maternal morphogen gradient are suitably represented by such fields. Intrinsic noises, resulting from stochastic gene expression, can be coarse grained to terms outside the sigmoid functions. In order to avoid technical complexities, we do not consider the intrinsic noises. Denote by ξ(x) = (ξ_1(x), ξ_2(x), . . . , ξ_n(x)) possible variations of the morphogenetic fields, continuous in x. We assume that these variations may be large, but they are


bounded:

sup_{i, x∈Ω} |ξ_i(x)| < C̄.    (4.39)

Such a class of perturbations is denoted by C_{C̄}.

4.4.2 Reductions for the toy model

The dynamics of networks with only a single center is relatively simple. Indeed, in dimension one, all attractors are point attractors, and periodic or chaotic dynamics is impossible. However, we can show that there is multistationarity, i.e. a coexistence of any number of point attractors.

Let us make a transformation reducing (4.37) and (4.38) to a system of two reaction-diffusion equations. Let us set

Z = Σ_{j=1}^N u_j,  G(v, x) = Σ_{j=1}^N r_j σ(b(v − h_j) + ξ_j).

Then, by summing equations (4.37), one obtains

∂Z/∂t = dΔZ − λZ + G(v, x),    (4.40)

∂v/∂t = d_0 Δv − νv + r_0 + aZ.    (4.41)

This system is relatively simple, and it can be studied analytically and numerically. Depending on the parameters, three main cases are possible:
– PC (power of the center): d_0, ν are small with respect to d, λ, respectively;
– PS (power of the satellites): d, λ are small with respect to d_0, ν, respectively;
– IC (intermediate case): d_0, d and ν, λ have the same order.

Let us formulate an assertion.

Theorem 4.15. Case PC. Let us fix d, λ > 0, r_i and b. Then, if ν, a, d_0, r_0 < κ and κ > 0 is sufficiently small, one has

Z = Z̄(v, x) + z,  sup |z| < c_1 κ,    (4.42)

where Z̄(v, x) is defined by

dΔZ̄ − λZ̄ + G(v, x) = 0.    (4.43)

Case PS. Let us fix ν, a, r_0 and d_0. Then, if λ, r_i, d < κ and κ > 0 is sufficiently small, one has

v = v̄ + w,  sup |w| < c_2 κ,    (4.44)

where v̄ is defined by

d_0 Δv̄ − νv̄ + r_0 + aZ = 0.    (4.45)

The proof is standard and we omit it. It follows immediately from the theorems stated in Appendix 3.5.

To express Z̄ via v, we can use the Green functions. Two situations lead to simpler expressions for Z̄. For small d, one has

Z̄ = Σ_{i=1}^N r_i λ^{−1} σ(b(v − h_i) + ξ_i).    (4.46)

For large d, one obtains

Z̄ = Σ_{i=1}^N r_i λ^{−1} ⟨σ(b(v − h_i) + ξ_i)⟩,    (4.47)

where ⟨f⟩ = Vol(Ω)^{−1} ∫_Ω f(x) dx. In the general case,

Z̄(x) = d^{−1} Σ_{i=1}^N r_i ∫_Ω G_γ(x, x′) σ(b(v(x′) − h_i) + ξ_i(x′)) dx′,    (4.48)

where G_γ is the Green function for the operator u → Δu − γ²u under the zero Neumann boundary conditions, with γ² = λ/d.

4.4.3 Multistationarity of the toy model

Let us first consider the case of small diffusion (the case PC). Substituting the asymptotics (4.46) into (4.38), one obtains, up to small contributions of the order κ,

∂v/∂t = d_0 Δv − νv + F(x, v),    (4.49)

where

F(x, v) = r_0 + Σ_{i=1}^{n} w_i σ(b(v − h_i) + ξ_i),    (4.50)

and w_i = a λ^{−1} r_i. Assume first, to simplify our analysis, that ξ_i(x) = 0 for each x. Then, we seek solutions v independent of x, removing the diffusion terms. Let P = {w_i, b, h_i, N, r_0} be free parameters that can be adjusted. We can control the nonlinearity F in (4.49) by P, using the fact that F(x, v) can approximate arbitrary smooth functions. The following assertion shows that multistationarity holds even under the repression condition a < 0. Here, x plays the role of a parameter, and we remove it from the notation.

Figure 4.1. The step curve is the plot of F(v), where v ∈ [0, 10]. The straight line is the plot of νv; the intersections give the equilibria.

Lemma 4.16. Assume a < 0 and ξ_i = d_0 = 0. Let n be a positive integer. Then, there are coefficients b, λ > 0, r_i, where i = 1, ..., n, ν > 0 and h_i, r_0, w_i such that the equation

−νv + F(v) = 0    (4.51)

has at least n + 1 stable roots, which can be placed in any given positions in the v-space.

The main idea of the proof is illustrated by Figure 4.1. The function F(v) is close to a step function with n steps, where the i-th step is given by the rescaled Heaviside function w_i H(b(v − h_i)). The position of the i-th step in the v-space is h_i, and its height w_i < 0 is negative. Under an appropriate choice of h_i, b, w_i, λ, this entails our assertion (Figure 4.1). It is useful to note that one gets n + 1 attractors on the horizontal segments of the step function, provided that the h_i decrease with i.

Remark. Notice that the conditions for flexibility (multistationarity) are not very restrictive. Namely, the sigmoidal functions σ(b(v − h_i)), which describe the center action on the satellites, should be sharp, i.e. b should be large. The coefficients ν, r_0 have to be small, whereas λ has to be large. This is natural from the biological point of view: the satellites (microRNAs), being small macromolecules, work much faster than the TFs, which are large macromolecules. The construction is robust: we can vary w_i, b, h_i, but the number of equilibria is conserved.
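The step construction behind the multistationarity argument is easy to check numerically. The sketch below is illustrative and not taken from the book: it uses activating steps (w_i > 0) so that the line νv crosses every horizontal segment, and all parameter values are invented; it counts the stable roots of −νv + F(v) = 0 by locating sign changes on a grid.

```python
import numpy as np

def fermi(z):
    # Fermi (logistic) sigmoid written via tanh to avoid overflow:
    # 1/(1+exp(-z)) == (1 + tanh(z/2))/2
    return 0.5 * (1.0 + np.tanh(z / 2.0))

# Hypothetical parameters: n = 3 sharp steps at positions h_i with heights w_i,
# chosen so that the line nu*v crosses each horizontal plateau of F.
b, nu, r0 = 50.0, 0.093, 0.16
h = np.array([2.0, 5.0, 8.0])
w = np.array([0.25, 0.25, 0.25])

def F(v):
    return r0 + sum(wi * fermi(b * (v - hi)) for wi, hi in zip(w, h))

v = np.linspace(0.0, 12.0, 120001)
g = -nu * v + F(v)

# Stable equilibrium: g crosses zero from above (there F'(v) < nu).
stable = [v[k] for k in range(len(v) - 1) if g[k] > 0 >= g[k + 1]]
# Unstable equilibrium (repeller): g crosses zero from below.
unstable = [v[k] for k in range(len(v) - 1) if g[k] < 0 <= g[k + 1]]

print(len(stable), len(unstable))  # n + 1 stable equilibria separated by n repellers
```

Each plateau of the step-like F contributes one stable intersection with the line νv, and each steep rising edge contributes one repeller between consecutive attractors.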

Figure 4.2. Robustness for a soft Fermi sigmoidal function; the perturbations ξ_i are not correlated for different i. The steplike curves are the plots of the nonperturbed and perturbed functions F(v), respectively. The straight line is the plot of νv; the intersections give the equilibria.

4.4.4 Robustness and the stability of attractors

Let us now consider the noisy case ξ_i ≠ 0. We are interested in the robustness of the number and positions of the attractors with respect to the noises ξ_i(x). We use the construction from Lemma 4.16, assuming that the constants h_i are perturbed by the terms ξ_i(x). First, we consider dynamic stability in the case of small diffusion (4.46), assuming, for simplicity, that r_0, h_i are independent of x. The dynamics of v is defined by (4.49), where the function F(x, v) is smooth in v. If d_0 = ε² is small enough, then, by super- and subsolutions, it is easy to show that the equilibrium solutions v_eq(x) of (4.49) are defined by

νw = F(x, w),    |v_eq − w| < c ε².    (4.52)

If v_eq is a local attractor, the following condition holds:

F_v(x, v_eq) < ν.    (4.53)

Roots satisfying this condition are local attractors, and they are dynamically stable; otherwise, they are repellers and unstable. For large b, the graph of F consists of repeating rectangles (Figure 4.2).

Each v_eq is given by an intersection of the straight line νv with the curve F(v). According to (4.53), if the intersection lies on a left vertical side, the corresponding equilibrium v_eq is unstable, a repeller (the case LV). If an intersection lies on a horizontal line, the corresponding equilibrium v_eq is stable, an attractor (the case H). If the intersection lies on a right vertical side, the corresponding equilibrium v_eq is an attractor (the case RV). Since the equilibria in the case LV are dynamically unstable, we only consider the cases H and RV.

The result is clear if we consider the plot of F (Figure 4.2). A perturbation of h_i by ξ_i induces a shift of h_i. The horizontal intersections then shift over small distances μ_i. To estimate these distances, we should know the asymptotics of σ(z) for z → ∞. In the case of the Fermi function, which has an exponentially decreasing tail, the μ_i are proportional to ξ_i exp(−c b̄) and are very small. For the Hill function σ, we obtain μ_i ≈ ξ_i b̄^{−1}. In both cases, the μ_i are small, and therefore we have robust situations here. In contrast, in the case RV, an intersection depends sharply on ξ_i, and then there is no robustness.

Notice that the centralized structure reinforces the robustness. Indeed, if the ξ_i are independent random contributions, then one can expect that the corresponding shifts S_i have different signs, which diminishes the sum δv = Σ_{i=1}^{n} S_i. This dependence of the robustness on the sharpness of the sigmoidal function and on the perturbations ξ_i is shown in Figure 4.2. The cumulative effect resulting from the central organization may be important for large perturbations: perturbations ξ_i correlated in i can destroy many equilibria.

For large diffusion coefficients d_i, we can repeat the above analysis. Notice that diffusion smooths out small spatial inhomogeneities and increases the robustness, and the v pattern is more regular.

In order to obtain robustness with respect to the ξ_i variations together with multistationarity, it is necessary to take b large; therefore, a should be small. The maximal possible density of equilibrium states then has the order O(b^{−1}).
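The contrast between the exponential and power-law tails can be illustrated numerically. The sketch below is hypothetical (a single satellite term with invented parameter values): it perturbs the threshold h by ξ and compares the shift of the stable equilibrium on the horizontal segment for a Fermi sigmoid and for a Hill-like sigmoid with a slowly decaying tail.

```python
import math

def fermi(z):
    # exponentially decaying tail: 1 - fermi(z) ~ exp(-z)
    return 1.0 / (1.0 + math.exp(-z))

def hill(z):
    # Hill-type sigmoid (order 1) with a power-law tail: 1 - hill(z) ~ 1/z
    return z / (1.0 + z) if z > 0 else 0.0

nu, r0, wgt, b, xi = 0.1, 0.1, 0.4, 4.0, 0.2

def equilibrium(sig, h):
    # Bisection for the root of g(v) = -nu*v + r0 + wgt*sig(b*(v - h))
    # on the upper horizontal segment; g(3) > 0 > g(8) for both sigmoids.
    lo, hi = 3.0, 8.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if -nu * mid + r0 + wgt * sig(b * (mid - h)) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

shift_fermi = abs(equilibrium(fermi, 2.0) - equilibrium(fermi, 2.0 + xi))
shift_hill = abs(equilibrium(hill, 2.0) - equilibrium(hill, 2.0 + xi))

print(shift_fermi, shift_hill)
```

With the exponential tail the equilibrium on the plateau barely moves, while the power-law tail produces a visibly larger (though still small) shift, in line with the estimates μ_i ∝ ξ_i exp(−c b̄) versus μ_i ≈ ξ_i b̄^{−1}.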

4.4.5 Generalized toy model

To take into account satellite interaction and memory effects, we investigate a more complicated system, namely,

∂Z/∂t = d ΔZ − λZ + G_1(w, Z, x),    (4.54)
∂v/∂t = d_0 Δv − νv + r_0 + aZ,    (4.55)
∂w/∂t = −α(w − v),    α > 0,    (4.56)

where

G_1(w, Z, x) = Σ_{j=1}^{N} r_j σ(b(w − h_j) + RZ + ξ_j).

The parameter R can be interpreted as follows. The term RZ means that each satellite interacts with all the others with the same intensity. For R > 0, we have mutual activation; for R < 0, we are dealing with mutual repression. Equation (4.56) can be resolved, and we obtain

w(t) = w_0 exp(−αt) + α ∫_0^t exp(−α(t − τ)) v(τ) dτ.

Let us consider the case d_0 = d = 0, ξ_j = 0. Some general properties of this system can be obtained by well-known ideas of dynamical systems theory. If a, b > 0, then the vector field F(v, Z) with the components F_1 = −λZ + G(v, Z, 0), F_2 = −νv + r_0 + aZ, which defines the right-hand sides of (4.40), (4.41), is cooperative, since

∂F_1/∂v > 0,    ∂F_2/∂Z > 0.    (4.57)

In this case, all the trajectories of our dynamical system converge to equilibria v*, Z* (stable or unstable). Let us denote by M the matrix that defines the linearized operator DF(v, Z) at v*, Z*. The stability depends on the quantities Tr M and Det M given by

Tr M = −ν − λ + bS,    S = Σ_{j=1}^{N} r_j σ′(b(v* − h_j) + μZ*),

Det M = ν(λ − μS) − abS.

If Tr M < 0 and Det M > 0, one has a stable rest point, and if Tr M < 0 and Det M < 0, we obtain a saddle equilibrium.

For a < 0, b < 0, we observe that our system is also a monotone dynamical system. Indeed, we can make the change v = −ṽ to see that, in the new variables ṽ, Z, the field F is cooperative. So, for ab > 0, our system defines a monotone dynamical system, and this property holds for d_0, d > 0 and arbitrary ξ_i(x).

For ab < 0 and μ ≠ 0, we can obtain, in principle, a periodic time behavior. The Hopf bifurcation occurs if Tr M changes sign and Det M > 0. The necessary conditions for this are ab < 0 and μ > 0.

In the numerical simulations, we set d = d_0 = 0, ξ_j = 0. We have investigated this system for different values of ν, λ, b, a, μ. In the numerical experiments, we observed that the trajectories converge to equilibria, sometimes after sharp oscillations.
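A minimal numerical sketch of the spatially homogeneous case (d = d_0 = 0, ξ_j = 0) of system (4.54)–(4.56); all parameter values below are illustrative, not the book's. Forward Euler integration shows the trajectory settling to a rest point, in line with the monotone-systems argument for the cooperative case a, b > 0.

```python
import math

# Hypothetical parameters: cooperative case a, b > 0, no satellite interaction (R = 0).
lam, nu, r0, a, alpha, b, R = 1.0, 1.0, 0.1, 0.5, 1.0, 2.0, 0.0
h = [0.2, 0.5, 0.8]
r = [1.0 / 3.0] * 3

def sigma(z):
    return 1.0 / (1.0 + math.exp(-z))

def rhs(Z, v, w):
    # Right-hand sides of (4.54)-(4.56) with d = d0 = 0, xi_j = 0
    G1 = sum(rj * sigma(b * (w - hj) + R * Z) for rj, hj in zip(r, h))
    return (-lam * Z + G1, -nu * v + r0 + a * Z, -alpha * (w - v))

Z, v, w = 0.0, 0.0, 0.0
dt = 0.01
for _ in range(20000):  # integrate up to t = 200
    dZ, dv, dw = rhs(Z, v, w)
    Z, v, w = Z + dt * dZ, v + dt * dv, w + dt * dw

residual = max(abs(x) for x in rhs(Z, v, w))
print(Z, v, w, residual)  # a rest point: the vector field nearly vanishes
```

Since Z is bounded by Σ r_j / λ = 1, the equilibrium satellite activity stays in (0, 1) here; with ab < 0 and μ > 0 one could instead look for the oscillatory regimes mentioned above.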

4.4.6 Results of simulations

Let us consider the case d = d_0 = 0. We have fixed the parameters by the following relations: λ = 1, a = ν, r_j = r = 1, b = 10, N = 6. We vary the noise level ε, the center mobility ν, and the interaction parameter R. Consider the following situations.

Figure 4.3. This plot illustrates a typical trajectory for R = 0, ν = 8, a = 4, the case of correlated noises, N = 20. The lower line is the center activity v(t); the upper line is the satellite activity Z(t).

(AL) The parameter ν is large; R > 0, R = 0, or R < 0; the noises are noncorrelated, i.e. the ξ_i are mutually independent white noises for different i.
(BL) The parameter ν is large; R > 0, R = 0, or R < 0; the noises are correlated, i.e. the ξ_i are identical white noises for different i: ξ_i = ξ.
(AS) The parameter ν is small; R > 0, R = 0, or R < 0; the noises are noncorrelated, i.e. the ξ_i are mutually independent white noises for different i.
(BS) The parameter ν is small; R > 0, R = 0, or R < 0; the noises are correlated, i.e. the ξ_i are identical white noises for different i: ξ_i = ξ.

The results of the numerical simulations are consistent with the aforementioned analytical arguments. We conclude that the centralized interaction topology allows one to compensate the action of noncorrelated noise (compare Figures 4.3 and 4.4). Moreover, for a mobile center, the center activity oscillates, repeating the oscillating motion of the satellite activity. For a slow center, the center activity is a smoother curve and does not repeat the oscillations of the satellite activity curve (Figure 4.5). An interesting effect is that mutual satellite repression leads to stabilization, and the oscillations almost vanish (Figure 4.6).

Figure 4.4. This plot illustrates a typical trajectory for R = 0, ν = 8, a = 4, the case of noncorrelated noises. The lower line is the center activity; the upper line is the satellite activity. We see that the oscillations are smaller.

Figure 4.5. This plot shows a typical trajectory for the mutually repressive interaction R = −2, ν = a = 10 (a mobile center), the case of correlated noises.

Figure 4.6. This plot shows a typical trajectory for R = 0, ν = a = 0.1 (a very slow center), the case of correlated noises.

4.4.7 Why Empires fall

Let us consider an economic interpretation of the results of this section. From an economic point of view, we can interpret our toy model as follows. We have an economic center connecting a number of regions. The economic efficacy (the coefficients r_i) of the regions is bounded. The regions have some internal resources (the parameters h_i). The parameter ν defines the center mobility (rate), and λ is an averaged regional economic mobility.

As an example, we can consider an Empire economically independent of the external world (an autarkeia), which is separated into weakly connected regions (divide et impera, or “divide and rule”). Random perturbations in the regions of such an Empire can be considered as noncorrelated noises ξ_i. Then, theoretically, the averaged viability time of this Empire is exponentially large in N for large N. This explains the Empire's tendency to expand: a large territory reinforces viability. On the other hand, the results of Subsection 2.3.4 show that a network extension can create bifurcations and dynamical chaos. This effect can appear if the center exercises rigid control but the system has only weak feedback: the inverse action of the satellites on the center is soft. Moreover, in the previous section, it was shown that in such a centralized system, we observe slow-down phenomena. Namely, rigid centralized control leads to a slow-down effect: the speed of the center's functioning should be restricted if the center wants to control the satellites (the bureaucrats should handle idiotic documents slowly, as it is necessary to control you!).

The important parameters that influence the Empire's viability are the Empire's size and connectivity. Let us consider relation (4.33) for the parameter ρ. This key parameter determines the system viability during the growth process. Decreasing ρ makes the Empire's growth more stable. The parameter ρ depends on K* and h. The parameter h can be interpreted as a resource; K* can be considered as a connectivity parameter. We see that as K* increases, the Empire's viability increases exponentially. We conclude that the Empire's internal connectivity is more important for its viability than resources.

If the random perturbations in the regions are correlated, then the viability time is not exponentially large. The same effect can be obtained if the system is not an autarkeia, i.e. if it is involved in a global market, which can fluctuate.

Finally, we see the following main reasons why an Empire can fall:
(A) fluctuations in the global world;
(B) bureaucratic slow-down: either power, or development;
(C) internal connectivity that is too small for successful development;
(D) rigid centralized control together with weak feedback.

It is obvious that all of these factors were involved in the dissolution of the Soviet Union. After Stalin, the USSR started to slowly integrate into the global market. The decision of the Soviet Union to participate in international trade, thus abandoning the idea of economic isolation, violated the autarkeia rule. Now, the Russian “Empire” is sharply immersed in the contemporary global economic system, and the Empire's economic state depends critically on oil and gas prices. These prices oscillate in a very sharp way, and these fluctuations can be considered as a correlated noise that affects the whole system. Therefore, the viability time for this structure is bounded and independent of the system size N. Centralized systems in an oscillating environment do not survive for a long time. As was shown above (equation (4.36)), even in the framework of extremely simplified models, effective control of centralized structures is a hard problem. Since the bureaucratic elite prefers stability and control, and, moreover, internal competition (the parameter R) is weak in the Russian economy, there is only a low rate of possible development for this economy.

However, a centralized system does not have only drawbacks. In the coming section, we shall show that in genetic networks, centers (hubs) can serve as capacitors. Capacitors are special genes which control pattern stability and the evolution rate [27, 162].

4.5 Evolution as a computational problem

4.5.1 Complex organ formation as a hard combinatorial problem

We consider this problem in the context of the key question of biology: how to explain the emergence of complexity and the formation of complex organs. In biology, the concept of “complexity” is anything but simple and transparently defined, and different approaches to complexity have been developed. Here, we exploit some ideas from theoretical computer science [303]. This allows us to formulate the problem in a rigorous mathematical way. Notice that the difficulty in organ evolution was well understood already by Darwin, who noted that if we could not explain how complex organs (for example, eyes) can appear as a result of small, slight modifications, then the evolution theory “absolutely breaks down” [57]. One of the problems here is how to explain that evolution can “foresee” the final structure of the organ. Completely formed complex organs may be functionally perfect, but undeveloped intermediate forms are possibly not so well adapted, and it is unclear how they can be obtained by slight modifications. Another problem is how to explain the simultaneous development of many important organism traits. It seems, therefore, that complex organ formation is not a “feasible” problem.

To shed light on this feasibility problem, we use an analogy between these evolution processes and hard combinatorial problems. This analogy helps to obtain an analytical relation for the evolution rate as a function of gene redundancy. In the last decades, these problems have received a great deal of attention from mathematicians and theoretical physicists, see [1, 49, 62, 63, 80, 183, 184], among many other works. The most fundamental hard combinatorial problem is the famous k-SAT. It was the first known NP-complete problem (this was shown by Stephen Cook [51]; for a review of some algorithms for k-SAT, see [56]). The k-SAT problem can be formulated as follows. Let us consider the set V_n = {x_1, ..., x_n} of Boolean variables x_i ∈ {0, 1} and a set C_m of m clauses. The clauses C_j are disjunctions involving k literals y_{i_1}, y_{i_2}, ..., y_{i_k}, where each y_i is either x_i or the negation x̄_i of x_i. The problem is to test whether one can satisfy all the clauses by an assignment of the Boolean variables. The famous unresolved problem of theoretical computer science, P = NP, is equivalent to the following question: does an algorithm exist that solves the k-SAT problem in polynomial time (in Poly(|X|) steps), where X is the k-SAT problem input and |X| = n is the size of this input? Suppose we are dealing with a randomly generated k-SAT formula and n ≫ 1. The parameter α = m/n defines the asymptotic behavior of k-SAT solutions as n → ∞. Biologically, this parameter can be interpreted as a “gene freedom.” Everywhere in this section, we assume that n ≫ 1.

The biological interpretation of k-SAT is transparent. The number n is the gene number. Each gene is involved in the formation of many phenotype traits, and a gene can be either turned on or turned off. We have m traits; therefore, the case m ≫ 1 corresponds to the formation of a complex organism with many traits. The number m can be interpreted, therefore, as a primitive measure of the phenotype complexity. The parameter k determines the genetic redundancy, namely, a trait can be formed by k different genes. The main difficulty of the k-SAT problem is that a logical variable x_i can be involved in different clauses as x_i and x̄_i; therefore, it is difficult to assign x_i in a correct way. Biologically, this effect corresponds to the pleiotropy of genes. Namely, activation of a gene can help to create one useful feature but, on the other hand, it becomes an obstacle for the formation of another useful trait.

This connection of the evolution problem with hard combinatorial problems allows us to formulate the meaning of feasible evolution. In the framework of this model, evolution is feasible if one can find a local search algorithm (for example, a greedy one [52]) that resolves the problem in Poly(n) elementary steps (for instance, single mutations). This idea connects the P = NP problem with evolutionary biology (probably, the first such connection was found in [303]). Notice that if P = NP (nobody, of course, believes in this equality), then evolution is always feasible.
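A tiny concrete instance may help fix this correspondence. In the sketch below (a hypothetical 3-SAT formula, not from the text), the genes are the Boolean variables and each clause is one trait; a trait is expressed exactly when its clause is satisfied.

```python
# A hypothetical 3-SAT instance with n = 4 genes and m = 3 traits (clauses).
# A literal is (gene index, negated?); a clause is satisfied if any literal holds.
clauses = [
    [(0, False), (1, True), (2, False)],   # x0 or not-x1 or x2
    [(0, True), (1, False), (3, False)],   # not-x0 or x1 or x3
    [(1, False), (2, True), (3, True)],    # x1 or not-x2 or not-x3
]

def satisfied(clause, u):
    # literal (i, neg) holds when u[i] = 1, neg = False, or u[i] = 0, neg = True
    return any(u[i] != neg for i, neg in clause)

def fitness(u):
    # number of expressed traits, the analogue of the fitness W_F(u) below
    return sum(satisfied(c, u) for c in clauses)

u = [1, 0, 1, 0]  # gene activity pattern: x0, x2 on; x1, x3 off
print(fitness(u))  # → 2 (the second trait is not expressed: pleiotropy of x0)
```

Note how gene x0 is pleiotropic here: turned on, it expresses the first trait but works against the second one, which is exactly the assignment difficulty described above.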

4.5.2 Some facts about the k-SAT model

Evolution can be considered as a problem involving a number of constraints. A natural similar computer science problem is how to make an assignment in a random CNF formula with m disjunctions so as to satisfy the maximal possible number of them. If the disjunctions have the same size k, we obtain the classical k-SAT problem. Many properties of k-SAT also hold for more general constraint problems, and it is useful to briefly discuss k-SAT.

We consider k-SAT of a random structure with m = αn clauses and n variables, assuming n ≫ 1. The basic facts about this problem are as follows. The k-SAT problem can always be resolved in Poly(n) steps for k = 2; we consider k > 2. For α > α_c(n, k), where α_c is a critical value, on average there are no solutions with probability close to 1, and for α < α_c(n, k), on average a solution exists with probability close to 1. (The words “on average” mean that we always take a formula (scheme of genetic regulation) of a random structure.) A natural plausible conjecture that

lim_{n→∞} α_c(n, k) = α_c(k)

is not proven yet, but a very close fact has been shown [1, 80]. For large k and n → +∞, we have α_c(n, k) ≈ (ln 2) 2^k.

An important second critical value α_d(n, k) < α_c(k) also exists. For large k and n → +∞, this value has the asymptotics α_d(n, k) ≈ 2^k / k. If α < α_d, then all solutions form a unique cluster. A cluster in the space {0, 1}^n of Boolean sequences of length n can be defined as a connected set; here, we assume that two sequences are connected (adjacent) if the Hamming distance between them equals 1. If the solutions form a single cluster, this means that any solution can be obtained from another one by a chain of flips of some variables. In this case, simple algorithms of local search are capable of finding solutions in Poly(n) time. Now, we know a number of such algorithms (WalkSat, GSat, DPLL, etc., see [141, 183, 184, 245, 246], among many other works). If α_c > α > α_d, the set of solutions is a union of exponentially many clusters. In this region, there is a very effective algorithm, the so-called Survey Propagation (SP), that allows us to find a solution (even for n = 10^6, it proceeds in minutes). SP was pioneered in [36, 183] and uses sophisticated methods of statistical physics. A connection of SP with a classical algorithm of artificial intelligence theory, belief propagation (BP) [200], has now been found. This connection between BP and SP is based on the representation of the k-SAT problem by a bipartite graph. We can identify the clauses with centers, and the logical variables with satellites. A satellite can either activate a center, repress it, or not act on this center (clause) at all. If we generalize this k-SAT bipartite structure, we can obtain a more complicated model of a Boolean circuit, where each disjunction involves a different number of variables. This case requires numerical simulations to obtain analogues of the relations for α_c, α_d.

4.5.3 Gene network model and morphogenesis

A number of papers have focused on mathematical models of morphogenesis (for example, [178, 179, 188, 193, 226], among many others), beginning with the seminal paper [272]. We assume, for simplicity and following many works (for example, [136, 267, 303, 304]), that the genes u_1, ..., u_n are Boolean variables, u_i ∈ {0, 1} (0: the gene is turned off; 1: the gene is turned on). We assume that each feature (trait) of an organism is controlled by a protein y_j ∈ (0, ∞), where j = 1, ..., m, and that this process can be described by a general differential model involving the gene activity Boolean string u = (u_1, ..., u_n):

dy_j(t)/dt = f_j(y, u) − λ_j y_j,    j = 1, ..., m,    (4.58)

where the f_j are terms that define the gene production components, f_j > 0, and the terms λ_j y_j, where λ_j > 0, define protein degradation. Choosing appropriate f_j, we obtain different models. Let us consider two examples.

(a) Let us set λ_j = 1 and

f_j(y, u) = z_j(u) = σ(w_{j1} u_1 + · · · + w_{jn} u_n − h_j),    (4.59)

where σ is the step function, σ(z) = 1 for z > 0 and σ(z) = 0 for z ≤ 0, and the coefficients w_ij describe either activation (w_ij = 1) or inhibition (w_ij = −1). We assume that if u = u* is a correct genetic pattern (without mutations), then all traits are expressed, i.e. for large t, the solutions of (4.58) tend to the equilibrium y_j = 1. The constraints y_j = 1 are then equivalent to the relations z_j(u) = 1. Therefore, the constraint satisfaction problem is k-SAT.

(b) If we set

f_j(y, u) = R_j(u) = r_j σ(w_{j1}(u) y_1 + · · · + w_{jn}(u) y_n − h_j),    (4.60)

where r_j > 0 and w_ij, y_j are real numbers, we obtain the Hopfield equations of a neural network. Taking into account the diffusion of the gene products y_j, we obtain the model proposed in [188] and studied in many papers (for example, [226]).

We mainly focus on case (a). We assume that the thresholds h_j are small: h_j ∈ (0, 1). We also suppose that the coefficients w_ji take the values 1, −1 or 0, and that each |w_ji| takes the value 1 with probability β/n, where β > 0 is a parameter. Moreover, if |w_ji| = 1, then, with probability 0.5, w_ji is either −1 or 1. Then, the z_j are disjunctions of random numbers of the variables u_1, ..., u_n. Each disjunction involves, on average, K = β variables. The parameter β is thus a redundancy parameter. Notice that we use disjunctions because these logical functions have an important advantage: they provide a maximal redundancy effect.
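The random-network construction above can be sampled directly. The sketch below is illustrative (hypothetical sizes and a fixed seed): it draws the coefficients w_ji as described, with |w_ji| = 1 with probability β/n and a random sign, and checks that a disjunction involves about K = β variables on average.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, beta = 200, 500, 3.0   # hypothetical gene and trait counts

# |w_ji| = 1 with probability beta/n; the sign is +1 or -1 with probability 0.5 each.
nonzero = rng.random((m, n)) < beta / n
signs = rng.choice([-1, 1], size=(m, n))
w = np.where(nonzero, signs, 0)

mean_len = nonzero.sum(axis=1).mean()   # average number of literals per disjunction
print(mean_len)  # concentrated around K = beta

# Trait expression for a random gene pattern u (small thresholds h_j, step sigma):
h = rng.uniform(0.0, 1.0, size=m)
u = rng.integers(0, 2, size=n)
z = (w @ u - h > 0).astype(int)
frac = z.mean()   # fraction of expressed traits z_j for this random pattern
```

The clause lengths fluctuate around β, so a few traits are controlled by no gene at all and some by many, which is the redundancy effect exploited below.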

4.5.4 Evolution

The goal of evolution in silico is to maximize the sum of satisfied features, that is,

W_F(u) = Σ_{j=1}^{m} z_j(u),    (4.61)

where the z_j are defined by (4.59). This sum can be considered as the fitness of an “organism.” Relation (4.61) can be interpreted as a random conjunctive normal form (CNF), where each disjunction involves, on average, β literals.

For simplicity, we consider evolution as a process of point mutations (although it is well known that evolution also involves more complicated processes, for example, horizontal gene transfer, recombination and gene duplications [8, 153]). We have a population of u consisting of N_pop members. Mathematically, each member u^l is a logical string u^l = (u^l_1, u^l_2, ..., u^l_n). An elementary evolution step is a change (a point mutation) of the value u^l_{j(l)} in the l-th population member for all l = 1, ..., N_pop, where we choose j_l randomly from the indices {1, ..., n} (all choices are independent). If the mutation increases W_F(u^l), we conserve this new value u^l; otherwise, we keep the old one. Moreover, from time to time, we can make a selection, which can be described as follows. We conserve in the population only an organism u* = u^l which has the maximal fitness W_F, and all mutants that can be obtained from u* by mutations in k_loc places. After the selection, we again continue the mutation process as above. This algorithm is a variant of the well-known GSAT, see [141, 245, 246].

Notice that, as a result of a large number of mutations, one obtains a pattern u_final which is unstable under mutations. Typically, mutations of this final pattern are either neutral, i.e. they conserve the fitness (W_F(u_mut) = W_F(u_final)), or lead to a fitness decrease (W_F(u_mut) < W_F(u_final)); only seldom can mutations increase the fitness (see Subsection 4.5.8 on numerical simulations). This pattern fragility occurs for any simple algorithm of local search. Therefore, the question of how to stabilize the pattern is nontrivial. Notice that any algorithm that increases fitness will produce a final pattern which is unstable under mutations.
We are faced with the central problem of evolution theory: how to account for both pattern stability and continued evolution in systems with stable patterns. To resolve this enigma, biologists suggested the concept of gene capacitors [27, 162]. Capacitors are special genes which can control the mutation rates and release hidden gene information. We discuss these ideas in the next subsection.
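The point-mutation dynamics described above can be sketched as a greedy local search (a GSAT-like hill climber) on a random CNF fitness. Everything below — the sizes, the seed, the single-member population, the acceptance rule — is an illustrative reconstruction, not the book's simulation code.

```python
import random

random.seed(1)
n, m, k, steps = 30, 90, 3, 3000   # hypothetical sizes

# Random k-CNF: each clause is a list of k literals (variable index, negated?).
clauses = [[(random.randrange(n), random.random() < 0.5) for _ in range(k)]
           for _ in range(m)]

def fitness(u):
    # W_F(u): the number of satisfied disjunctions
    return sum(any(u[i] != neg for i, neg in c) for c in clauses)

u = [random.randint(0, 1) for _ in range(n)]
history = [fitness(u)]
for _ in range(steps):
    j = random.randrange(n)   # a point mutation at a random locus
    u[j] ^= 1
    f = fitness(u)
    if f > history[-1]:
        history.append(f)     # keep the beneficial mutation
    else:
        u[j] ^= 1             # revert: neutral or deleterious mutation
        history.append(history[-1])

print(history[0], history[-1])  # fitness never decreases along the run
```

By construction the fitness trajectory is monotone, and the run typically ends at a local optimum where almost every single flip is neutral or deleterious — exactly the fragile final pattern discussed above.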


4.5.5 Capacitors and centralized networks

Let us add to (4.58) special terms Z_c(u) > 0 (we shall specify their form below). Then, these equations become

dy_j(t)/dt = (1 − b) f_j(y, u) − λ_j y_j + b Z_c(u),    (4.62)

where b ∈ (0, 1/2) is a buffering parameter. This model can be interpreted as follows. Each feature is controlled by a block of random structure consisting of β genes (on average). Moreover, an organizing center (hub) influences the morphogenesis process via the term bZ_c, which involves a number of genes. Here, the coefficient b regulates the buffering: the larger b is, the stronger the buffering. One can make an analogy with an Empire structure: the hubs perform an additional control of the patterning process in a manner that resembles the control of an Empire by its center.

Mathematically, the main idea can be formulated as follows. The term Z_c should satisfy the following property:

Z: Let u = u* be a correct genetic pattern satisfying all constraints. Then, if the mutation frequency is less than a critical level p_c, i.e. p < p_c, then Z_c(u) = 1 for all u ≠ u*. If p > p_c, then Z_c(u) = 0 for all u ≠ u*, and Z_c(u*) = 1.

For example, one can set

Z_c(u) = σ(v_1 u_1 + · · · + v_n u_n + h_0),    (4.63)

where the v_i are some appropriate coefficients. These coefficients, together with the threshold h_0, will be defined by a special procedure. Notice that such a construction of the buffering term is natural not only from the genetic point of view (buffering via hubs, see [27, 162]), but also from the standpoint of theoretical computer science: it can be obtained from the famous Valiant–Vazirani lemma (the isolation lemma, [305]).

4.5.6 Hebb rule and canalization

Robustness is an important property of biological systems. Wild-type organisms are buffered, or “canalized,” against genetic variations. On the other hand, organisms evolve successfully to adapt to environmental changes. Experimental data and biological concepts show the existence of special genes which can play the role of “capacitors”: they can regulate pattern stability and instability. These genes can buffer environmental variations, “canalizing” pattern formation. Canalization is an effect that allows a population to produce the same phenotype regardless of the variability of its environment or genotype, thereby providing robustness. The term was coined by C. H. Waddington [316], who formulated it as follows: “developmental reactions, as they occur in organisms submitted to natural selection ... are adjusted so as to bring about one definite end-result regardless of minor variations in conditions during the course of the reaction.”

In this subsection, we discuss how to find buffering terms Z_c(u) which determine an interaction between the canalization center and the proteins y_i. Let us first recall some fundamental ideas of neural network theory and brain modeling. Experiments show that a synaptic connection between two neurons increases if they are both active within a time period, and decreases if one neuron is active and the other one is passive (the Hebb rule). In other words, neurons that are active together have to be connected [8, 40]. In our case, we can make the standard assumption that, in (4.63), either v_i = 1 (activation of the i-th gene by the center) or v_i = −1 (inhibition of the i-th gene by the center). If we assume that an analogue of the Hebb rule also holds for the interaction between the genes and the center, and that the center is always active, we obtain the following algorithm:

(A1) If u_i = 1 within a long time period, then v_i = 1; otherwise v_i = −1.

Notice that evolution is a long process, and at the end of this process, most of the genes do not change. Then, the algorithm can be transformed as follows:

(A2) If, for the final pattern, u_i = 1, then v_i = 1; otherwise v_i = −1.

Let us describe a mathematical model for (A2) and a construction of Z_c. We introduce auxiliary variables X_i defined by the equations

dX_i/dt = −γ(X_i − u_i(t)),    (4.64)

where γ > 0 is a small memory parameter. We assume that the mutation process for gene u_i(t) is a Poisson process. To describe this process, let us suppose that in a “normal state” the gene u_i takes the value 1. If we consider a discrete approximation of the linear equation (4.64), we can suppose that at each time step, u_i = 1 with probability 1 − p and u_i = 0 with probability p, where p ≪ 1. Therefore, the expectation Eu_i of u_i is 1 − p. Notice that for small γ and large t, the value X_i(t) is close to the time average of u_i. Therefore, the expectations EX_i of X_i are also 1 − p. Using these auxiliary variables X_i, we define Z_c as follows:

Z_c(u, X) = σ_a(h_0 − S),    S = Σ_{i=1}^{n} |u_i − X_i|,    (4.65)

where σ_a(x) is a sigmoidal function, for example, σ_a(x) = (1 + exp(−ax))^{−1}, and h_0 is a threshold. Notice that S, and thus Z_c, involves all genes. It can be interpreted as follows:


in the gene networks, there is a hub which summarizes the contributions of all genes to stabilize the pattern. The terms w_i = |u_i − X_i| in relation (4.65) can be explained by the Hebb rule. In fact, if all u_i = X_i, then w_i = 0; otherwise w_i > const > 0. If all w_i = 0, we have S = 0, and hence Z_c ≈ 1 for any threshold h_0 > 0; if at least one w_i is positive and h_0 is small enough, then S > h_0 and Z_c ≈ 0.

4.5.7 Canalization and decanalization as a phase transition. Passage through the bottleneck

Let us show, under some assumptions, that there is a critical mutation frequency p_c such that for large t,

Z_c(u, X) ≈ 1  (p < p_c),    Z_c ≈ 0  (p > p_c).    (4.66)

We can obtain an approximation for Z_c for small p such that np ≪ 1 and for thresholds h_0 = 1 + δ, where 0 < δ ≪ 1. Let u_i = 1 be the normal state of the i-th gene. Then, the expectation of u_i(t) is 1 − p. Due to (4.64), one has

X_i(t) = X_i(0) exp(−γt) + γ ∫_0^t exp(−γ(t − τ)) u_i(τ) dτ.    (4.67)

Then, for small γ and large times t ≫ γ^{−1}, the quantity X_i(t) is almost constant. Taking the expectation, we obtain that EX_i(t) ≈ 1 − p. Then, for each time moment t, the difference |u_i(t) − X_i(t)| ≈ 0 with probability 1 − p and |u_i(t) − X_i(t)| ≈ 1 with probability p. Thus, we obtain that S = np < h_0 if all u_i = 1 (i.e. we have no mutant genes), and S = (n − 1)p + 1 − p = (n − 2)p + 1 for a single mutant gene. Since np ≪ 1, the probability that there are two mutant genes at the same time moment is very small. We obtain the same result if the normal state of the i-th gene is 0. Indeed, equation (4.67) then implies that X_i(t) ≈ p. If all u_i = 0, we have S = np. If there is a mutation, S = (n − 2)p + 1. So, for the critical probability p_c, we obtain

p_c ≈ (h_0 − 1)/n,    (n ≫ 1, np ≪ 1).    (4.68)
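As a numerical sanity check on this derivation, the scheme (4.64)–(4.66) is easy to simulate. The sketch below is illustrative and is not the book's code: the Euler discretization, the Bernoulli stand-in for the Poisson mutation process, and the parameter values (n = 50, h_0 = 1.2, a = 20) are all assumptions made here.

```python
import math
import random

def simulate_zc(p, n=50, h0=1.2, gamma=1.0, a=20.0, dt=0.1, steps=2000, seed=0):
    """Time-average of the buffering term Z_c = sigma_a(h0 - S),
    S = sum_i |u_i - X_i|, with X_i obeying (4.64) via an Euler scheme and
    each gene dropping to the mutant state 0 with probability p per step
    (a discrete stand-in for the Poisson mutation process)."""
    rng = random.Random(seed)
    X = [1.0] * n                       # memory variables; normal state is u_i = 1
    z_total = 0.0
    for _ in range(steps):
        u = [0 if rng.random() < p else 1 for _ in range(n)]
        X = [x + dt * gamma * (ui - x) for x, ui in zip(X, u)]
        S = sum(abs(ui - x) for ui, x in zip(u, X))
        z_total += 1.0 / (1.0 + math.exp(-a * (h0 - S)))
    return z_total / steps

p_c = (1.2 - 1.0) / 50                  # critical frequency from (4.68): (h0 - 1)/n
print(simulate_zc(0.1 * p_c))           # canalized regime: Z_c stays close to 1
print(simulate_zc(10 * p_c))            # decanalized regime: Z_c collapses
```

Below the critical frequency the averaged Z_c stays near 1; well above it, S typically exceeds the threshold h_0 and the buffering term collapses, reproducing the sharp fall seen in Figure 4.7.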

We find, therefore, a simple mechanism of decanalization which is consistent with experimental data. Indeed, these data show that stress may dampen the canalization effect, which leads to decanalization. This fact can be explained by the model as follows. When a population consisting of N members lives under stress conditions, the abundance N can diminish. The genetic drift effect is proportional to N^{−1/2} [8, 262]. Therefore, for small N, when the population goes through a bottleneck, the mutation frequency p_mut exceeds p_c, which can lead to the release of hidden gene information and the formation of a new species. Certainly, the stress can directly affect p if it is connected with chemical reagents, radiation, etc. Aging also leads to decanalization since, within a larger time period t ∈ (0, T), the probability that Z_c(t) ≪ 1 for some t ∈ (0, T) is larger.

Figure 4.7. This plot shows the averaged value of the buffering term Z_c(p) as a function of the mutation frequency p for b = 0.2. Here, the number of genes is n = 50, the trait number is m = 120 and the redundancy parameter β equals 6. The time interval is 2000 steps of length dt = 0.1, and γ = λ = 1. The horizontal axis is p, and Z_c is shown on the vertical axis. The theoretical critical probability p_c is ≈ 0.02.

The main idea of this model can be illustrated as follows. When p increases gradually, the quantities X_i also change in a gradual manner. However, the canalization term Z_c(p) decreases sharply. This fact is illustrated by Figure 4.7. We observe a sharp fall of Z_c at p ≈ 0.015–0.02, which is consistent with the theoretical value of the critical probability p_c = 0.02.

4.5.8 Simulation of evolution and mutation effects

Populations with 50–100 members are considered, and the redundancy parameter is β = 7. During the evolution process, the number m of constraints in a random CNF formula increases from m = 300 to m ∈ (800, 1800). In this way we simulate an increasingly demanding ecological environment: the gene assignment (pattern) should satisfy more and more constraints. Between these steps of environment changes, we have made 100–200 mutations for each population member. After each mutation series, we have checked the fitness values, the pattern stability level and the value of the buffering term.

The results are as follows. First, they are consistent with the main facts about k-SAT described in the previous section. A phase transition for α ∈ (10, 16) is found: for example, a gene system with β = 7 and n = 100 genes can satisfy m ∈ (1000, 1600)


constraints (to create an organism with ≈ 1500 features). The solutions form a giant cluster, and the Hamming distance between different solutions is not large (of order 10). Notice that for k = 7, the best estimate for the algorithmic phase transition is ≈ 33, see [1]. It is clear that in our case this value should be smaller, since we have some clauses (disjunctions) of size < 7 (our CNF formula is random, and the clause length equals β only on average).

Patterns u*, which maximize the fitness W_F(u), are fragile if the buffer mechanism is turned off (b_c = 0). To check the pattern robustness, we have produced 50 point random mutations and 50 double mutations. We have observed the following picture. Let ΔW_1 = W(u*) − W(u_mut) be a vector of length 50 that contains the fitness variations for single mutations, and let ΔW_2, ΔW_3 be the analogous vectors for double and triple mutations. Typical results for the last stage of evolution, when the constraint number is m = 1900, are

ΔW_1 = (1, 3, 2, 3, 1, 1, 0, 1, 1, 3, 1, 3, 1, 2, 3, 1, 3, 4, 3, 2, 0, 0, 1, 2, 1, 2, 1, 0, 2, 3, 1, 1, 1, 1, 3, 1, 2, 1, 3, 0, 1, 0, 1, 3, 3, 3, 2, 1, 2, 0)

and

ΔW_2 = (4, 3, 2, 4, 6, 2, 7, 1, 4, 4, 2, 3, 7, 7, 3, 5, 0, 2, 5, 6, 3, 2, 2, 5, 2, 4, 4, 3, 2, 3, 3, 2, 6, 9, 4, 7, 4, 3, 3, 5, 7, 4, 3, 2, 1, 4, 4, 8, 2, 4).

It is natural that single mutations are less dangerous than double and triple ones. Some mutations are neutral (the corresponding values of ΔW_i equal 0). This negative mutation effect is completely compensated by the buffering term b_c Z_c, which takes the value 50 for all mutations. At the final evolution step (m = 1900), the averaged number of satisfied constraints in the population (i.e. the fitness) is W_averaged = 1887, and the maximal number is W_max = 1892. For the initial evolution steps, when m = 800, we have

ΔW_1 = (0, 1, 1, 2, 1, 2, 1, −1, 3, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 0, 1, 0, 0, −1, 0, 0, 3, 0, 2, 2, 2, 1, 0, 2, 2, 1, 1, 1, 0, 0, 0, 1, 1, 0, 0, 1, 0, −1).
We observe here more neutral mutations and even some positive mutations (those with ΔW_1 < 0). The maximal number of satisfied constraints is close to the limit value of the fitness, W_max = 799. All the values of the buffering contribution b_c Z_c are 50. The fraction of double neutral mutations is given by the following data (an experiment with β = 7), which show that the fraction of neutral mutations decreases with the evolution step m, where m ∈ (200, 2000): 0.42, 0.38, 0.42, 0.28, 0.24, 0.20, 0.16, 0.30, 0.10, 0.10, 0.06, 0.12, 0.08, 0.16, 0.08, 0.04, 0.06, 0.08. Figures 4.8–4.13 present some properties of evolution.
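A miniature version of the simulation described above can be written in a few lines. The sketch below is an illustrative toy rather than the author's program: the random CNF construction (clauses drawn with replacement, so the clause length equals β only on average) and the greedy acceptance rule standing in for selection are assumptions made here.

```python
import random

def random_cnf(n_vars, n_clauses, beta, rng):
    """Random CNF: each constraint draws beta literals with replacement,
    so its effective length equals beta only on average."""
    formula = []
    for _ in range(n_clauses):
        chosen = {rng.randrange(n_vars) for _ in range(beta)}
        formula.append([(v, rng.random() < 0.5) for v in chosen])
    return formula

def fitness(u, formula):
    """W(u): the number of satisfied constraints for the gene assignment u."""
    return sum(any(bool(u[v]) != neg for v, neg in clause) for clause in formula)

def evolve(n=40, m=200, beta=7, pop=10, rounds=150, seed=3):
    """Each member receives one point mutation per round; the mutation is
    kept only if the fitness does not drop (a crude stand-in for selection)."""
    rng = random.Random(seed)
    formula = random_cnf(n, m, beta, rng)
    population = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop)]
    for _ in range(rounds):
        for u in population:
            i = rng.randrange(n)
            before = fitness(u, formula)
            u[i] ^= 1                      # point mutation: flip gene i
            if fitness(u, formula) < before:
                u[i] ^= 1                  # deleterious mutation is rejected
    return max(fitness(u, formula) for u in population)

print(evolve(), "of 200 constraints satisfied by the best member")
```

With β = 7, a random assignment already satisfies a clause with probability ≈ 1 − 2^{−7}, so even this greedy toy drives the best member close to the full constraint count, in line with the large-β effectiveness discussed in this section.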

Figure 4.8. The plot of the minimal (over the population) number of nonsatisfied constraints m_NS as a function of the number of constraints m. The horizontal axis is m, m ∈ [300, 2000], and the vertical axis is the value of m_NS. Each evolution step adds 100 new constraints. The population consists of 100 members. The parameter β = 7. All of the constraints are satisfied for m < 200.

Figure 4.9. The plot of the minimal (over the population) number of nonsatisfied constraints m_NS as a function of the number of constraints m. The horizontal axis is m, m ∈ [300, 2000], and the vertical axis is the value of m_NS. Each evolution step adds 100 new constraints. The number of mutations at each evolution step is 100, and the population contains 100 members. The parameter β = 8. We see that, with respect to the case β = 6, evolution is more effective (all of the constraints are satisfied for m < 1100).



Figure 4.10. This plot shows the fraction of positive double mutations as a function of m. The parameter β = 8. On the horizontal axis, the values m/100 are shown.


Figure 4.11. This plot shows the fraction of neutral double mutations as a function of the constraint number m (“organism complexity”). The parameter β = 8. On the horizontal axis, the values m/100 are shown.

So, we have presented an analytic approach to the canalization problem. This approach works for all morphogenesis models and is based on fundamental ideas of theoretical computer science and of gene and neural network theory. We assume that gene networks contain some hubs which control the canalization process (this assumption is confirmed by experimental data). The hub activity exhibits a sharp phase transition depending on the mutation frequency. For a favorable environment, when the mutation frequency is small, these hubs block all mutations and gene product concentrations do not change; thus, the pattern (phenotype) is stable. When the environment becomes harder and the population needs new genotypes, these hubs organize the passage through the bottleneck. Genetic drift and other effects then increase the mutation frequency. For frequencies larger than a critical value, the hubs are suppressed, and therefore all mutations can affect the phenotype. This phase transition effect can serve as an engine for evolution.

Figure 4.12. This plot shows the averaged value ȳ of the trait expression as a function of the mutation frequency p for b = 0.2 (the upper curve, a strong buffer effect) and b = 0.01 (the lower curve, a weak buffer effect). In the first case, we observe decanalization at p = 0.025. Here, the number of genes is n = 50, the trait number is m = 120 and the redundancy parameter is β = 6. The time interval consists of 2000 steps of length dt = 0.1, and γ = λ = 1. The horizontal axis is p, and the values of ȳ are shown on the vertical axis.

Figure 4.13. This plot shows the dependence of the averaged value ȳ of the trait expression on the redundancy parameter β. For β = 6, the values of ȳ are shown by the lower curve, and for β = 7, by the upper curve. Here, n = 50 and m = 100.

The second key question is how to explain the formation of complex organs during a sufficiently short time. We have used an approach inspired by [303, 304]. Morphogenesis is considered as a hard combinatorial problem. This problem involves two key parameters, β and α. The first one determines the genetic network redundancy, and the second one can be interpreted as a gene freedom. It is shown that the evolution effectiveness grows with the genetic redundancy β in an exponential manner, as E(β) = o(2^β/β). We conclude that a random evolution process is quite feasible while this genetic freedom is large enough (α > E(β)), but when the number of constraints becomes too large (a highly specialized organism), evolution stops.

4.5.9 Other NP-complete problems in evolution

Biological molecules consist of numerous polymer groups. During a chemical reaction, they lose (or accumulate) only one such group. This explains, in particular, why enzyme reactions proceed in many steps [156]. We conclude, therefore, that it is impossible, in general, to connect two arbitrary nodes. An analogous picture can be observed for other graphs corresponding to processes in Nature and society [7]. Taking into account possible restrictions on the matrix K fixed a priori, we can introduce a large graph (V, E), where V is a set of nodes and E is a set of edges. An entry K_ij in (2.272) can be nonzero only if it is prescribed by E, i.e. if v_i and v_j can a priori be connected ((v_i, v_j) ∈ E). Now, an evolution can be formally described as a time change of subgraphs (V, D_t), D_t ⊂ E, where t = 0, 1, 2, ... and D_0 ⊂ D_1 ⊂ D_2 ⊂ ..., together with time-dependent matrices K^t such that K_ij^t = 0 if (v_i, v_j) ∉ D_t.

Let us now consider some problems connected with such an evolution. Let us fix a time t. To obtain a chemical reaction that transforms a substrate s ∈ V into a product p ∈ V, we have to find a simple path in (V, D_t) leading from s to p. It is clear as well that the length of this path may be large, though a priori bounded by a number L_max. Otherwise, the relaxation processes would be very long and such a system would not survive.

Let us recall our main principle, namely, that the system must be stable in a stochastic environment. This implies, in particular, that the system should be stable with respect to mutations or the random vanishing of some substrates involved in the production of p. Mutations can eliminate some nodes or edges. So, evolution should form

more than one path from different nutrients to products [156]. The more different paths one has, the more stable the system is. Thus, these ideas lead us to the following problem:

Problem. Given a graph G = (V, E) and a collection of disjoint node pairs (s_1, s̄_1), ..., (s_k, s̄_k), does G contain J or more pairwise node-disjoint simple paths connecting s_i and s̄_i for each i = 1, ..., k?

This problem is NP-complete [83]. The given nodes s_i could correspond to nutrients (substrates), the nodes s̄_i could correspond to products, and the paths then correspond to some metabolic pathways. Suppose that a system, defined by the graph, survives if the environment contains at least one type of nutrient s_i. Environment fluctuations are eliminations of some nutrients.

Circuit stability ideas lead to another natural NP-connected problem that can be formulated as follows:

Problem. Given a graph (V, E) and positive integers K ≤ |V| and B ≤ |E|, is there a subset E′ ⊆ E with |E′| ≤ B such that the graph (V, E′) is K-connected, i.e. cannot be disconnected by removing fewer than K nodes?

This problem is simple for a fixed K, but it is NP-hard when K varies [83].

To conclude this subsection, let us mention the following. First, a number of practical problems of bioinformatics are quite complicated, see [221]. The second fact is fundamental. Some natural evolution problems are not only complicated; they are not decidable, i.e. there are no algorithms to resolve them. To show this, one can use the results proved in [30]. Let us consider iterations of some map, similar to map (2.273) (but without random noises). The problem is as follows: given a domain D in the phase space, do the iterations of this map enter D or not? This problem is not decidable according to [30]. So, one can expect that it is impossible to decide whether evolution stops or not. We are not capable of foreseeing the End of the World or the collapse of an Empire.
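For intuition, the K-connectivity problem can be checked by brute force on tiny graphs: test every node subset of size smaller than K. The sketch below is illustrative (and exponential in K, consistent with the NP-hardness for varying K); it is not an algorithm from the text.

```python
from itertools import combinations

def is_connected(nodes, edges):
    """Breadth-first connectivity test on an undirected graph."""
    nodes = list(nodes)
    if len(nodes) <= 1:
        return True
    adj = {v: set() for v in nodes}
    for a, b in edges:
        if a in adj and b in adj:
            adj[a].add(b)
            adj[b].add(a)
    seen, frontier = {nodes[0]}, [nodes[0]]
    while frontier:
        v = frontier.pop()
        for w in adj[v]:
            if w not in seen:
                seen.add(w)
                frontier.append(w)
    return len(seen) == len(nodes)

def is_k_connected(nodes, edges, k):
    """A graph is K-connected if removing any set of fewer than K nodes
    leaves it connected (checked here by exhaustion over all such sets)."""
    if len(nodes) <= k:
        return False
    return all(
        is_connected([v for v in nodes if v not in cut],
                     [(a, b) for a, b in edges if a not in cut and b not in cut])
        for r in range(k)
        for cut in combinations(nodes, r)
    )

# A 4-cycle is 2-connected but not 3-connected.
cycle = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(is_k_connected([0, 1, 2, 3], cycle, 2))
print(is_k_connected([0, 1, 2, 3], cycle, 3))
```

Removing the two opposite nodes of the 4-cycle disconnects it, so the test over all cuts of size < 3 fails, while every single-node removal leaves a connected path.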
The next NP-complete problem is connected with recognition. In fact, to be viable, a system must recognize the viability domain Π. For Boolean circuits, this problem can be difficult. The ideas of the next subsection develop the pioneering approach to evolution suggested by L. Valiant [303, 304] and follow [292].

4.5.10 Evolution of circuit population

In this section, we show that for circuits, stable evolution is possible. These circuits have the form

u_i(t + τ) = σ( Σ_{j=1}^{N} K_ij(t) u_j(t) + h_i − ξ_i(t) ),    u_i(0) = x_i,    (4.69)


where t = 0, τ, 2τ, ..., d·τ, i = 1, 2, ..., N, d and N are positive integers, τ > 0 is a real parameter, and x = (x_1, ..., x_N) is an initial condition. It is usually assumed that the function σ is a strictly monotone increasing function for which lim_{z→−∞} σ(z) = 0 and lim_{z→∞} σ(z) = 1. Here, ξ_i(t) are random Markov processes.

Specifically, we consider an evolution of a family (“population”) of circuits (4.69). To simplify our analysis, we consider the Boolean case, when the values u_i(t) are always 0s or 1s. In this case, σ is the step function, i.e. σ(z) = 1 for z > 0 and σ(z) = 0 for z ≤ 0. We also assume that for every time t, there is a positive integer b(t) called the connection intensity. For every i and j, the value K_ij(t) is equal either to b(t), or to −b(t), or to 0. In other words, the quantity u_j(t) either “excites” the value u_i(t) at the next moment of time, or inhibits it, or does not affect this value. We assume that h_i = (m_i + 1/2)·b, where the m_i are integers, and that the number N, in general, changes with time: N = N(t).

At every moment t, the situation can be described by a directed graph (V(t), E(t)) whose vertices correspond to the components u_i: V(t) = {1, 2, ..., N(t)}, and where there is an edge (i → j) ∈ E(t) if and only if K_ij(t) ≠ 0. Each graph represents a single circuit. Each step of the evolution of an individual circuit consists of one of the following changes in the graph and in the corresponding values K_ij:
(a) the graph (V, E) stays the same;
(b) one adds a node to V;
(c) one adds an edge i → j to E with a new weight K_ij;
(d) one changes a weight K_ij; when the new value of K_ij is 0, this change deletes the edge i → j.
Steps (b)–(d) will be called mutations. We will assume that mutations occur with a given probability μ > 0.
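A single synchronous update of the Boolean circuit (4.69) is just a thresholded weighted sum. The sketch below is illustrative: the AND-gate example, the two-sided exponential noise (chosen so that its tails are consistent with the bound in (R2) below), and the specific numbers are assumptions, not the book's code.

```python
import random

def circuit_step(u, K, h, rng=None, noise_beta=1.0):
    """One synchronous update of circuit (4.69) in the Boolean case:
    u_i <- step(sum_j K_ij u_j + h_i - xi_i). With rng=None the noise is
    turned off (the 'correct circuit' regime)."""
    out = []
    for i in range(len(u)):
        xi = 0.0
        if rng is not None:            # two-sided exponential noise (assumption)
            xi = rng.expovariate(noise_beta) * rng.choice((-1, 1))
        z = sum(kij * uj for kij, uj in zip(K[i], u)) + h[i] - xi
        out.append(1 if z > 0 else 0)
    return out

b = 10.0
# Node 2 computes AND(x1, x2): weights +b from both inputs and
# h_2 = (m + 1/2) b with m = -2, so it fires only when x1 = x2 = 1.
K = [[0, 0, 0], [0, 0, 0], [b, b, 0]]
h = [0.5 * b, 0.5 * b, (-2 + 0.5) * b]
for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, "->", circuit_step([x1, x2, 0], K, h)[2])
```

Note that with these weights the argument of the step function never comes closer than 0.5b to zero; this is exactly the margin min |Σ_j K_ij u_j + h_i| ≥ 0.5b exploited in Lemma 4.21 below, where the noise can flip an output only by exceeding a threshold of order b.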
Based on these individual changes, we can perform the following changes in the population:
– First, at each time step, we can simultaneously change many circuits in the population by performing changes (a)–(d) on different circuits.
– We can also replicate (make copies of) some circuits and delete (“destroy”) some other circuits.

We consider a population consisting of X(t) random circuits (4.2) of different structure and different depths d(t). Let us now describe the set Π. In this description, we will use several ideas from [303]. Suppose that a circuit Circ_j, a member of the population, survives at the moment t if and only if it gives a correct output y(x) as an answer to a Boolean input x: y = f(x, t), where the f(x, t) are given Boolean functions depending on t. The output y is the final state of some node: y(x) = u_1(τ·d),

where the u_i(t) are computed by the formulas (4.2) starting with u(0) = x. The whole population survives if it contains at least one circuit.

We suppose that the f are a priori unknown: to survive, circuits should “learn” correct answers. So, in effect, we are dealing here with notions from learning theory [304], though in a different context. Suppose that the correct answers are defined by a special piecewise constant sequence of Boolean functions

f(x, t) = f_j(x),    N(t) = N_j,    t ∈ ((j − 1)·T_e, j·T_e],    (4.70)
where T e is a positive number (the “length"of the j-th evolution stage) and x = (x1 , x2 , . . . , x N j ). Here, we also assume that each function f j belongs to a certain class C of Boolean circuits (4.69) (naturally, the values N, K, and d can depend on j). Assume that the parameter τ is small enough; thus, we should not take into account the time τ · d of the circuit reactions. The problem can be interpreted as a problem of an adaptive behavior of a large growing population of evolving circuits under the challenge of a “random environ­ ment."Let us now formulate our assumptions about this environment. Suppose that at the j-th evolution stage, the values x are chosen randomly by a probability distribution P j (x) on the set Ξ of all possible inputs x. We assume that each circuit obtains the values generated by the same distribution P j and that the values corresponding to different circuits are independent. We say that the circuit (4.69) is correct if when the noise is turned off (ξ (t) = 0), this circuit returns a correct answer for every input x. For each pair of functions f and f , we can define the probability of error def

Err(f, f ) = Prob{f (x) ≠ f (x)}, where the probability in the right-hand side is defined with respect to P j . We can then define, for every j, the probability def

Err = j

def

inf

f ≠ f j ,f ∈C

Err(f j , f ),

(4.71)

and δ j = Err(f j , 0), where 0 denotes a trivial circuit with output 0. Here, two drastically different situations are possible: (A) Passive environment: in this case, all the distributions P j are the same, P j = P. In this case, the environment does not actively interact with the circuits. (B) Active environment, an environment that tries to create as many difficulties as pos­ sible to the circuit population. This may correspond to a predator-prey type inter­ action, when a predator tries to learn the prey’s behavior and vice versa. In this case, the probability distributions P j can be different. (Here, interesting situations appear when for large j, the probabilities corresponding to the distributions P j are not computable in polynomial time.)

4.5 Evolution as a computational problem | 243

Our objective is to show that a stable evolution is possible. We will show that for the above-described population of circuits, a stable evolution is possible – provided that the circuit growth satisfies some natural conditions. These conditions are listed below. (R1) We assume that for all time moments t = 1, 2, . . . , Res(t) = |K ijC (t)| < Poly(t), (4.72) C i,j, ( i,j )∈E C( t )

where the first sum ranges over all circuits involved in the population, and Poly(t) is a polynomial. This assumption means that, within each time interval [0, T ], t = 1, 2, . . . , T, the evo­ lution process can only use resources whose total amount is bounded by a polynomial of T. (R2) There exists a value β > 0 such that the noises ξ i (t) corresponding to different i and t are independent identically distributed (i.i.d.) random quantities for which, for each a > 0, we have 0 < P(|ξ i (t)| > a) < exp(−β · a).

(4.73)

(R3) The population size is polynomially bounded: X (t) < Poly(t). Our main assumption about the functions f j can be described as follows. Let us assume that a conditional relative complexity of the correct outputs increases slowly in some reasonable sense; for example, we can assume that f j +1 = g ( f j , f j −1 , . . . , f 1 , x ) , d(g) = depth(g) ≤ dmax ,

g ∈ C, Comp(g) < Kmax ,

(4.74)

where dmax and Kmax are constant (independent on j), and Comp(g) denotes a circuit complexity, i.e. the number of elementary steps necessary to construct g. Let us first formulate a simple lemma showing that sometimes, one can survive without learning.  Lemma 4.17 (Survival without learning). If the series +∞ j =1 δ j converges, then for every value p0 ∈ (0, 1), there exists a circuit population that survives with the probability ≥ p0 , i.e. for which P T > p0 for all T. Proof. Let us take X identical circuits with ξ i = 0. For every input, each circuit gen­ erates 1. For such individual circuits, the probability P T to survive within the time in­ -T terval [0, T ] is then equal to P T = j=1 (1 − δ j )T e . By taking into account that δ j → 0 as j → ∞, one concludes that as T → ∞, the values P T are bounded, from below, by some value κ > 0. If κ < p0 , we increase X until we get κ ≥ p0 . The lemma is proven.

244 | 4 Random perturbations, evolution and complexity Theorem 4.18. Let us assume that for some real number ρ ∈ (0, 1), the functions f j satisfy the conditions (4.74) and Err > ρ (4.75) j

for all j. Then, there exist values μ and T e for which there exists an algorithm describing the evolution of circuits that satisfies the conditions (R1), (R2), and (R3), and for which P T > p0 > 0 for all T > 0. In other words, for this algorithm, the system remains stochastically stable for large time intervals. This theorem can be interpreted as follows: stable evolution is possible even in severe conditions (when a single error leads to destruction) – if the rate of change of the environment complexity is bounded. Proof. In this proof, we will use two lemmas from [52]. Recall that a Bernoulli process is a discrete-time stochastic process consisting of a sequence of independent random variables that take only two values: success and failure. For each integer M and real number p ∈ (0, 1), we can consider an M-trial Bernoulli process in which in each of the M trials, the probability of success is equal to p. Let us denote the total number of successes in all M trials by Y. Lemma 4.19. For k < M · p, we have Prob{Y < k } ≤

k · (1 − p ) · C kM · p k · (1 − p)M −k . M·p−k

(4.76)

Lemma 4.20. For r > M · p, we have

Prob{Y > M · p + r} ≤

M·p·e r

r

.

(4.77)

Since for our choice of K ij and h i , we have min |K ij · u j + h i | ≥ 0.5b, we can prove the following useful lemma. Lemma 4.21. Let y(x) be a circuit (4.2) of depth d and complexity KMax , for which K ij ∈ {b, 0, −b } and ξ = 0. Let ˜ y(x) be the same circuit with the noise ξ (which satisfies the condition (R2)). Then, sup Prob{y(x) = ˜y (x)} < exp(−c · β · b ), where c = c(d, KMax ) > 0.

(4.78)

x

Let us now describe a circuit evolution and a population growth that satisfies the con­ ditions (R1), (R2), and (R3). We will proceed in three stages. Our estimates are obtained by induction. Suppose that at j = m, the populations contain correct circuits that give correct answers with probabilities p m = 1 − exp(−c1 · m), where c1 > 0.

4.5 Evolution as a computational problem | 245

Stage I. Generation of new circuits by random mutations. Consider the time interval I m = [m · T e , m · T e + Kmax ], where T e > C · dmax for some large constant C > 1. Let us denote, by ˜x, a combined entry (x0 , x), where we use x0 = (f m , f m−1 , . . . , f1 ) as an additional entry component. Using steps (a)–(d), we construct all possible circuits of complexity ≤ Kmax and depths ≤ dmax . Among them, correct circuits may occur, i.e. circuits coinciding with g(f m , x). For t ∈ I m , we set b (t) = b ∗ , where b ∗ is a large constant independent of m. Such a correct circuit can be obtained with the probability K + p+ c ( b ∗ ) · μ , where μ > 0 is the mutation probability and p c ( b ∗ ) is a probability that an incorrect circuit gives a correct answer. We have already obtained the estimate p+ c ( b ∗ ) > exp(−c 2 · b ∗ ). Let us denote def

K κ = (μ · p+ c ( b ∗ )) . Then, one can expect that after K max steps, we shall have at least X+ = 0.5κ · X m correct circuits, where X m is the number of circuits at the moment m · T e . Indeed, using Lemma 4.19, one can prove the following result: def

Lemma 4.22. Let us consider the random number Z m = X+ (m · T e + Kmax − 1) of correct circuits X+ (t) at the moment t = m · T e + Kmax − 1. If the parameter β is small enough, then the probability that Z m < 0.5κ · X m can be bounded by the following expression: Prob{X+ (t) < 0.5κ · X m } < exp(−c3 · X m ),

(4.79)

where c3 (μ, p+ c , K ) is a positive constant that does not depend on X m . Stage II. Removing circuits and increasing b. For the following T1 = T e − Kmax − 1 time steps, we do nothing, no mutations. Many circuits die as a result of incorrect answers. On this stage, we increase the parameter b in these circuits (Step (d)) by setting b = b 2 = O(m). Let us denote ˜ – by P∗ 1 , the probability that at the moment t = T e ( m + 1), the number X + = X + ( m · ˜ T e + Kmax − 1) of correct circuits is smaller than the number X− = X− (m · T e + ˜ − , and ˜+ < X Kmax − 1) of incorrect ones: X ˜ + > 0. – by P∗ , the probability that there are correct circuits left, i.e. that X 0 Lemma 4.23. Values T1 , c4 , and c5 exist for which the probabilities P∗ i satisfy the fol­ lowing inequalities for all m: P∗ 0 < exp(−c 4 · m ) ,

P∗ 1 < exp(−c 5 · m ) .

(4.80)

Proof. The probability that a correct circuit survives after T1 trials is larger than q T1 , where q > 1 − exp(−c · b 2 ). Thus, the probability that all correct circuits die is (1 − q T1 )X+ < exp(−c · X m ); since X m = O(m), one gets the first estimate (4.80). Let us denote by Z the number of inputs x˜ among T1 inputs ˜x for which all incorrect circuits give an incorrect answer y(x) = f m+1 (x). We denote such inputs by x inc . If an incorrect circuit C inc obtains such an input, C inc dies with a probability p d close to 1: p d = 1 − exp(−c · b 2 ). The probability that a circuit obtains, as an input, some

246 | 4 Random perturbations, evolution and complexity x inc within T1 independent inputs, is p1 = 1 − (1 − ρ )T1 . Then, by Lemma 4.20, the probability that the number of surviving incorrect circuits is larger than 6(1 − ρ )T1 X m does not exceed 0.8O(X m ) . The number of the correct circuits will be close to X+ = c9 · X m , with a probability > 1 − exp(−c · X m ), where c9 depends on μ and b 2 , but does not depend on T1 . This observation gives the second estimate (4.80) for large enough values of T1 large. This completes the proof of the lemma. Stage III. Replications. We now come back to the design of the algorithm required in Theorem 4.18. Notice that it is not a priori known whether a given circuit is correct or not. However, one can investigate structures of circuits and one can find a group of circuits having the same structure. We preserve these circuits and remove all the others. Then, we replicate all the remaining circuits to obtain X m+1 = X (t) = O(m) up to the moment t = (m + 1)T e . By Lemma 4.21 one can show that for new noisy correct circuits, the probability of the incorrect output admits the upper bound exp(−c · m)), where c > 0 (we repress the noise by increasing b (t) on Stage II; at the other stages b (t) is a large constant b ∗ independent of m). We notice that the probability to survive within I m is larger than 1 − exp(−c1 m)), c1 > 0. The resources within I m are O(m|E(m)|) < O(m3 ). This finishes the proof of Theorem 4.18. Notice that Theorem 4.18 makes natural sense. It describes a survival with learn­ ing. To survive, the population should learn something about a Boolean black box. It is a difficult problem, but the population can recognize a black box step-by-step if the box’s complexity increases “slowly” (i.e. according to (4.74)). Example. An interesting example is given by the sequence of conjunctions f j = D i1 ∧ D i2 · · · ∧ D i k ( j ) ,

(4.81)

where each D_i is a disjunction of literals: D_i = x̃_{i_1} ∨ x̃_{i_2} ∨ · · · ∨ x̃_{i_K}, where x̃_i is either x_i or ¬x_i, i ∈ {1, . . . , N}. The integer K can be interpreted, biologically, as a redundancy parameter. The dependence of the number k(j) on j can be increasing, decreasing, or nonmonotonic (it depends on g in (4.74)). Notice that learning (4.81) is hard for large N [303]. In the case of (4.81), the evolution stability is connected with the K-SAT problem, which, for a few decades, has been a focus of many important research activities; see, e.g. [2, 52, 80, 183]. We can consider our evolution as a "game" of the population against an environment which becomes more and more complicated.
(1) If the dimension N of the inputs x is fixed, one can survive in a simple way (Lemma 4.18) if P_j = P and P is uniform. However, the survival probability may be exponentially small in N.
(2) Assume now that N = N(t) increases with time. Suppose that each new clause contains a new literal that is not used in the previous clauses. Then, for a passive environment with uniform P, it is possible to survive without learning (Lemma 4.17). For active environments, Theorem 4.18 holds if the distributions P_{m+1} have the following property: 1 − δ_0 > Prob{f_m(x) = 1} > δ_0, with δ_0 uniform in m, for x chosen randomly according to P_{m+1}.
(3) It is natural to assume that the conjunctions (4.81) are constructed randomly, i.e. all indices i_k ∈ {1, 2, . . . , N(t)} are chosen randomly (random K-SAT). For example, at each j, we choose a random i, and we add, with probability p, certain L disjunctions to f_m.
In this situation, our problem looks complex, and it is related to the results on phase transitions in hard combinatorial problems [80, 183]. We consider this relation in our forthcoming publications. In this section, we restrict ourselves to some simple observations. If K ≤ 2, one can expect that, inevitably, some new clause will be in contradiction with previous ones; thus, for the passive case (A), we can again use Lemma 4.18. For K > 2 and the active case (B), it is possible that the P_m are not computable in polynomial time Poly(m). Indeed, to implement the algorithm from Theorem 4.18, we should have P_j satisfying condition (4.75) (or, at least, such that Err_j > const · j^{−n} for some n > 0). For large j, it is possible that the number N_j of solutions of the K-SAT problem corresponding to f_j(x) is exponentially small in j; moreover, if P ≠ NP and j is a part of the input, there is no polynomial-time algorithm to find x such that f(x) = 1 [80, 183]. Nonetheless, even in such a situation, survival is possible if the population always preserves a trivial circuit.
It would be interesting to compare the results from this section with the real biological situation. A discussion of the problems of species extinction and complexity growth can be found, for example, in [227]. A change of f_j can be interpreted as a variation in ecological conditions.
It can be shown that, according to our model, such a change leads to a massive species extinction at an exponential rate (this fact is in good accordance with biological reality [227], Ch. 23). Another interesting conclusion is that evolution is possible, but we cannot reconstruct it. In fact, it is difficult to recognize a general Boolean function of a large number of variables [303, 304]. This section shows that an algorithm simulating a biological evolution can proceed step-by-step if this function is defined by a Boolean circuit slowly growing in time. However, we can understand such an evolution in detail only when we know exactly how this circuit was created.
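The role of the conjunctions (4.81) can be illustrated numerically. The following sketch (our own illustration, not the book's algorithm; all names are ours) builds a random K-SAT-style formula f = D_1 ∧ D_2 ∧ · · · and estimates Prob{f(x) = 1} under the uniform distribution P, showing how adding clauses shrinks the satisfying set and makes "survival" harder:

```python
import random

def random_clause(n_vars, k, rng):
    """A disjunction D_i of k literals over x_1..x_n; each literal is x_i or its negation."""
    idx = rng.sample(range(n_vars), k)
    return [(i, rng.random() < 0.5) for i in idx]  # (variable index, negated?)

def eval_formula(clauses, x):
    """f = D_1 AND D_2 AND ...: every clause must contain a true literal."""
    return all(any((not x[i]) if neg else x[i] for i, neg in clause) for clause in clauses)

def satisfying_fraction(clauses, n_vars, rng, trials=2000):
    """Monte Carlo estimate of Prob{f(x) = 1} under the uniform distribution P."""
    hits = sum(eval_formula(clauses, [rng.random() < 0.5 for _ in range(n_vars)])
               for _ in range(trials))
    return hits / trials

rng = random.Random(0)
n_vars, k = 20, 3
fracs = []
for n_clauses in (1, 5, 20):
    clauses = [random_clause(n_vars, k, rng) for _ in range(n_clauses)]
    fracs.append(satisfying_fraction(clauses, n_vars, rng))
# Each added clause shrinks the satisfying set: survival gets harder as f_j grows.
```

For K = 3 a single random clause is satisfied with probability 7/8, while a long random conjunction leaves only an exponentially small satisfying set, which is the mechanism behind the exponentially small survival probability in observation (1).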


4.6 Kolmogorov complexity and genetic code

4.6.1 Model of evolution

Let us consider some evolution models. Let a parameter y be connected with the genetic code and define the system structure. In this section, we suppose that the parameter y is continuous. Then, an evolution model consists of a random dynamical system, a viability domain Π_t, which can depend on time t, and a second random dynamical system governing the y-evolution:

du/dt = F(u, y, ξ),  u(0) ∈ Π_0,   (4.82)
dy/dt = Y(u, y, s, η),   (4.83)

where u is the system state that defines a time-dependent pattern, y is an evolving parameter, ξ, η are random noises, and s is a discrete parameter which can evolve in time. The evolution scheme is described above. The vector y can be interpreted as gene product (for example, protein) concentrations.
There are some reasons why we have chosen the case above, where y is discrete. First, a continuous evolution can, in principle, always be approximated by a discrete one. The classical example is the organism size L. One can assume that L is continuous because L is controlled by many genes [227]. The second reason is that it is difficult to describe a graph growth (for example, the formation of new connections) by equations (4.82), (4.83) without a discrete parameter s. Finally, the third argument (and the most important one) is as follows. To create a biological system effectively functioning in an extremal environment is a complicated problem (see above). Among all possible parameter values, only a small part gives biologically reasonable ones. Therefore, if, fortunately, evolution has found a sufficiently stable system, then it is necessary to fix the corresponding parameter y. This fixation is simpler when y is discrete.
So, one can assume that many years ago, when there were no complicated evolution algorithms and primitive biosystems lived in extremal conditions, they used a genetic code to survive. Fragile systems without such a code have vanished under fluctuations and chaos. Some fragile systems could survive by first using a primitive random search, and after that, more sophisticated evolution algorithms were found. Notice that a concrete mathematical model of genome evolution (the hypercycle) is proposed in the seminal work [66], see also [244].
We can simplify model (4.82), (4.83) by removing the y-equation (for example, if y is a fast variable); then we have a random dynamical system governing the u-evolution:

du/dt = F(u, s, ξ),  u(0) ∈ Π_0.   (4.84)

4.6 Kolmogorov complexity and genetic code | 249

4.6.2 Genetic code complexity

Below, we use the Kolmogorov complexity theory pioneered by R. Solomonoff, A. N. Kolmogorov and G. Chaitin [46, 64, 71, 145, 187, 259, 315]. For an arbitrary Turing machine F and for every string s, we denote by K_F(s) the Kolmogorov complexity of the string s relative to F, i.e. the shortest length of a program for which F generates s [187]. In the following text, by the Kolmogorov complexity of a string s, or simply its complexity (for short), we mean K_F(s) for a fixed Turing machine F.

Theorem 4.24. Let F be an arbitrary Turing machine. Let us consider a class of generically unviable systems (Definition 4.4). Assume that the Markov chain M and system (4.1) generate strings s with a priori bounded Kolmogorov complexities K_F(s) relative to F. Then, for almost all mappings α : s → r(s) of strings s to the parameters r, the evolution is unstable and the corresponding system is unviable: P_T → 0 as T → ∞.

Comment 1. Instead of K_F, one could take any function K satisfying the following property: for any n, there exist finitely many strings s with K(s) = n.

Comment 2. In this analysis, we only consider the Kolmogorov complexity of the codes s, and not of the states u themselves. The complexity of the states can also be studied for systems similar to Kh; see, e.g. [296].

Comment 3. The theorem says that the evolution is unstable for almost all mappings α, but it does not tell us whether a stable evolution is possible for some functions α.
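K_F is not computable, but any lossless compressor gives a computable upper bound on it, up to an additive constant depending on the machine F. The following small sketch (our illustration; using Python's zlib as a stand-in "description method") shows the qualitative distinction the theory formalizes — regular strings have short descriptions, irregular ones do not:

```python
import random
import zlib

def complexity_upper_bound(s: bytes) -> int:
    """Length of a zlib encoding of s: a computable upper bound (up to an
    additive constant) on the uncomputable Kolmogorov complexity K_F(s)."""
    return len(zlib.compress(s, 9))

regular = b"0" * 1000                                       # short description: "1000 zeros"
rng = random.Random(1)
irregular = bytes(rng.getrandbits(8) for _ in range(1000))  # incompressible-looking string

k_reg = complexity_upper_bound(regular)
k_irr = complexity_upper_bound(irregular)
# The regular string compresses to a few bytes; the irregular one hardly at all.
```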

4.6.3 Viability and unviability

It is difficult to determine when a given class of systems is generically unviable. Under Assumption 4.2 on ξ, a natural way of proving unviability is to consider ξ as a control and to use methods from control theory; see, e.g. [164]. This reduction to control leads to complex attainability and controllability problems. Let us describe several results that can nonetheless be obtained.

Theorem 4.25. Let us assume that we have a system (4.1) from the class Kp, with polynomials f_i(u, ξ) of positive degree d, and with m ≥ 2. Assume also that the viability domain Π_t is bounded, i.e. Π_t ⊆ B_R for a ball B_R of some radius R uniform in t. Then, for ν-almost all polynomials f_i, system (4.1) is stochastically unviable, i.e.

P_T(u(0)) → 0,  T → ∞,

where P_T(u(0)) is the probability that the system state u(t) lies in Π_t for each t ∈ [0, T].

Comment. Moreover, for almost all tuples of polynomials f_i, there exists a value κ(f) for which

P_Π(u, f, t) ≤ 1 − κ(f)   (4.85)

for all u ∈ Π and t.

We present a proof of this theorem at the end of this section. Before that, let us consider other types of systems. Systems from the class Kr can be both viable and unviable. Indeed, let Π be a bounded set.

Example 1. Suppose that for some R > 0, for every u, the range of the map ξ → g(u, ξ) contains the ball B_R = {g : |g| < R}. Then, one can show that for a sufficiently large λ > 0, the corresponding system is unviable.

Example 2. Let us consider the situation when the functions g_i are uniformly bounded and m = 1. Then, by the definition of m, the corresponding dynamical system u(t + 1) = f̃(u(t)) is deterministic (not random). In particular, we can consider the case when this system has an attractor consisting of hyperbolic equilibrium points, and this attractor is contained in the viability domain Π. Then, one can show that if the initial state u(0) is sufficiently close to the attractor, then, for sufficiently small values λ > 0, the corresponding system u(t + 1) = f̃(u(t)) + λg(u(t), ξ(t)) is viable.

Let us now prove Theorem 4.25 (a proof of Theorem 4.24 can be found in the next subsection). We start with the following preliminary lemma.

Lemma 4.26. Let Π be a compact set. Let us consider a system of polynomial equations

g_i(u) = 0,  i = 1, . . . , N,   (4.86)

where the g_i are polynomials, and the number of equations N is greater than the number of variables n. Then, the probability that this system has a solution u_∗ ∈ Π is equal to 0.

Proof. This lemma easily follows from the resultant theory; see, e.g. [317].

Proof of Theorem 4.25. Since the set Π is bounded, there exists a real number R > 0 such that if |u| > R, then u ∉ Π. For systems of class Kp, one has f(u, ξ) = Σ_{l : |l| < d} h_l(u) ξ^l, where l = (l_1, . . . , l_m) is a multiindex, the h_l(u) are polynomials in u, |l| = l_1 + · · · + l_m, and ξ^l = ξ_1^{l_1} · · · ξ_m^{l_m}. Let us consider a finite tuple a = (a_1, a_2, . . . , a_m), where the a_j are distinct positive numbers. Let us set ξ_j = a_j · z, and let z → +∞. Suppose |f(u, ξ)| < C for all ξ(z), where C > 0. Then, one can conclude that h_l(u) = 0 for all l for which |l| < d. The equations h_l(u) = 0 give at least n · (d + 1) polynomial equations with n unknowns u_i. Now, we apply Lemma 4.26 and conclude that, since in general such a system has no solutions, in general the values |f(u, ξ)| are not bounded as z → ∞. Thus, if u(t) ∈ Π_t, for some ξ we have |u(t + 1)| > R and consequently u(t + 1) ∉ Π. The theorem is proven.

4.6.4 Proof of Theorem 4.24 on the complexity of the gene code

First, let us show that a stable evolution is possible only when the code length is unbounded in time. Indeed, suppose that the lengths L(s) of all the codes s are a priori bounded by an integer l. The number of such codes is bounded, and thus, due to Assumption 4.3, for almost all maps α, one has

min_{s : L(s) ≤ l} κ(r(s)) > κ_0 > 0.   (4.87)

Indeed, one can observe that the set of all maps α for which κ(r(s)) = 0 for some string s of length ≤ l is contained in the finite union of the sets B_l(s) of measure 0: μ_l(B_l(s)) = 0. Then, since our process is a Markov one, according to Assumption 4.2, the probability P_T(Π) for ξ(t) to be in Π at the time moments 0, 1, . . . , T is smaller than (1 − κ_0)^T, and we conclude that the evolution is unstable. This proves Theorem 4.24 for the case when all strings have a priori bounded length.
Let us now note that the lengths L(s) of the strings of relative Kolmogorov complexity K_F(s) not exceeding K are a priori bounded, since there are only finitely many such strings: L(s) < N_K for some N_K. Therefore, all strings of complexity < K are contained in a finite set B_K of binary strings. The theorem is proven.

Comment. It is worth mentioning that while for every Turing machine F and for every integer K there exists an upper bound on the length of all the strings s with K_F(s) ≤ K, this upper bound is not always effectively computable. For example, for a universal Turing machine F, the impossibility of having an algorithm computing such upper bounds follows from the well-known theorem of Rabin [31].
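The finiteness of the set B_K used above is a pure counting bound: a fixed machine F has fewer than 2^K binary programs of length < K, and each program outputs at most one string, so fewer than 2^K strings can have complexity < K. A one-line check of this count (our illustration):

```python
def count_programs(max_len: int) -> int:
    """Number of binary strings of length 0, 1, ..., max_len - 1 (candidate programs)."""
    return sum(2 ** length for length in range(max_len))

# A fixed machine F maps each program to at most one output, so at most
# count_programs(K) = 2^K - 1 strings s can have complexity K_F(s) < K;
# hence the set B_K in the proof is finite.
bounds = {K: count_programs(K) for K in (1, 4, 10)}
```

Note that, as the Comment above stresses, the finiteness of B_K is effective, while an upper bound on the lengths of its elements is in general not computable.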

4.7 Viability of reaction-diffusion systems

We consider reaction-diffusion systems with random parameters. Reaction-diffusion systems are important as models of pattern formation in biology, chemistry and physics. Moreover, they can describe chaos, turbulence and oscillations. To investigate pattern stability, it is essential to take into account random fluctuations of parameters. Here, we also focus on the negative part (i) of the M. Gromov and A. Carbone hypothesis; see the Introduction above, Sec. 4.1.1 and [97].

For reaction-diffusion systems, a mathematical formulation of the homeostasis problem can be given, as above, via viability theory [13] and standard probabilistic approaches [308]. Let us assume that our system generates a global semiflow in an appropriate phase space H. If the random parameters are fixed (no fluctuations), then, due to many works [101, 155, 265], we now know that under some fairly natural assumptions, such semiflows possess finite-dimensional global attractors A ⊂ H. Under our assumptions on fluctuation effects, the random parameters perturb the dynamics, but solutions still exist globally in time.
Let us also assume that there is a special subset Π ⊂ H which gives us a viability domain. We assume the attractor is contained in the viability domain: A ⊂ Π. Let us also assume that as soon as the system state leaves Π, the system is destroyed. For example, for systems describing population dynamics, ecology, or metabolic reactions, it is natural to suppose that Π contains only the states u where space-averaged variables take values larger than some critically small thresholds. So, homeostasis holds and the system is viable at the moment t if the system state u(t) ∈ Π. The quantitative characteristic of the system viability is the probability P(T, Π, ρ(A)) that u(t) ∈ Π for all t ∈ [0, T] under the condition that the initial state u_0 is distributed on A according to a probabilistic measure ρ defined on A. If ρ is a delta-function ρ = δ(u − u_0), we denote P by P(T, Π, u_0). Clearly, P(T, Π, ρ(A)) is the average of P(T, Π, u_0) over A computed with the measure ρ.
Here, we establish a connection between pattern complexity and viability. We use natural viability conditions that allow us to simplify the statement. First, we prove that, under generic noises with at least two components, the reaction-diffusion system is unviable for large times: P(T, Π, ρ) → 0 as T → ∞. The generic noises are defined by some generic vector fields, allowing us to apply powerful mathematical tools from geometric control theory based on R. Thom's transversality theorem [164]. We essentially use Assumption 4.6. This assumption can be interpreted as a condition that the system has an extreme environment, i.e. environment parameters can strongly fluctuate. Indeed, experiments show that biological patterns stay stable under large variations of external and internal parameters (although the system components, biomolecules, are fragile).
Assume the number of components n of the reaction-diffusion system is fixed. We then show that if a "spatial complexity" Comp_s(ϕ) of the patterns ϕ ∈ A is bounded, then P(T, Π, ρ) admits the estimate

P(T, Π, ρ) > δ(C_s, T) > 0,   (4.88)

where δ(C_s, T) is a function decreasing in T and increasing in C_s = Comp_s. For example, for layered patterns, the stability can increase with the layer number.
The results are consistent with experimental facts. Let us consider a chemical system consisting, possibly, of a number of reagents which diffuse and interact. If we only have reagents with small diffusion coefficients of the order ϵ^2 (for example, proteins), then, even if the initial pattern ϕ(x) is complicated (for example, ∇ϕ = O(1/ϵ)), this pattern is unviable for a long time under the noise. Under certain natural conditions, we obtain that the probability P(T, Π, ϕ) to be in the viability domain Π within the time interval [0, T] is independent of ϵ as ϵ → 0 and converges to zero as T → ∞. In contrast to this, if our system involves reagents with small diffusion coefficients (for example, proteins) and reagents moving with larger diffusion rates (substrates), then, if the initial pattern ϕ(x) is complicated, this pattern becomes stabler under the noise for small ϵ. This means that P(T, Π, ϕ) increases and approaches 1 as ϵ → 0.
Systems with different diffusion coefficients have been under great attention beginning with the Turing paper [272]. They are well-studied [103, 138]. Typically, we observe smooth, regular patterns for fast reagents and nonuniform patterns formed by kinks (well-localized internal layers) for slow ones. Such structures can be observed in cells, where relatively regular temperature, pH and substrate fields help to support normally functioning enzyme reactions, which can effectively work only in some parameter domain. On the other hand, nonuniform protein densities support these fields against even strong fluctuations.
The second result is about genetic code complexity. Assume that the system structure depends on a discrete parameter s. One can suppose, without loss of generality, that this parameter s is a multidimensional Boolean variable s ∈ {0, 1}^M. Such a Boolean variable can be interpreted as a genetic code. Then, if the Kolmogorov complexity of the code is a priori bounded, for generic noises, the viability is also a priori bounded.

4.7.1 Statement of problem

Consider an evolution equation with random parameters ξ:

u_t = Au + F(u, ξ),  u(0) = ϕ,   (4.89)

where ϕ, u(t) ∈ H, H is a Hilbert (Banach) space (a phase space), A is a self-adjoint negatively defined linear operator, and F is a nonlinear operator depending on a random parameter ξ ∈ S, where S is a Banach space. We assume that almost all trajectories t → ξ(t) (with respect to a measure on S) are uniformly bounded in S, piecewise continuous functions of t.
For α ∈ (0, 1), we define the fractional spaces H^α = {u : ‖(−A)^α u‖ < ∞}. The corresponding norm will be denoted by ‖u‖_α. Assume that the viability domain Π ⊂ B(R) is an open bounded subset of H^α, for some α ∈ (0, 1). Let us suppose, moreover, that for each fixed ξ(·) and t, the operator F(u, ξ(t)) is a map from H^α to H. Denote by D_u F the derivative of this map F with respect to u. Suppose that for each fixed continuous ξ(t), one has

sup_{u ∈ Π} ‖F(u, ξ)‖ < M(ξ),  sup_{u ∈ Π} ‖D_u F(u, ξ)‖ < C(ξ).   (4.90)

The introduction of the bounded set Π is useful to overcome difficulties with the existence of global in time solutions and attractors. Namely, for many systems important in applications, uniform estimates (4.90) do not hold, and it is not easy to prove that solutions of (4.89) exist for all t ∈ (0, ∞), that the system is dissipative, and that a compact attractor exists. We need a uniform a priori estimate of some weak norms of u to prove these facts. However, for viability problems, we can avoid these technical difficulties. Since we consider the dynamics of (4.89) only in Π, we can modify F outside Π in such a way that the modified F coincides with the old F on Π and satisfies estimates (4.90) uniformly in u. This simple trick shows that no problems occur with a priori estimates and the global existence of solutions when we investigate viability problems for bounded domains Π.
For a fixed continuous ξ, equation (4.89) defines a global semiflow. Such evolution equations were investigated in many works [108, 155, 167, 265]. Using (4.90), we can prove the existence of solutions of (4.89) for all positive t [108] and for almost all trajectories ξ(t).
Part (i) of the Gromov–Carbone hypothesis can be formulated as

P(T, Π, ϕ) → 0  (T → +∞).   (4.91)

In the coming sections, we prove (4.91) for a large class of reaction-diffusion systems.
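The decay (4.91) can already be observed in a toy discrete-time analogue. The sketch below (our own construction, not one of the systems studied in this section) iterates a contracting map with bounded noise and estimates the probability of staying inside a viability interval — the per-step escape probability is bounded below by a positive constant, so the viability probability decays geometrically in T, exactly the mechanism used in the proofs:

```python
import random

def viability_probability(T, trials=4000, seed=2):
    """Fraction of noisy trajectories u(t+1) = u(t)/2 + xi(t), with xi uniform on
    [-1.5, 1.5], that stay in the viability domain Pi = [-1, 1] up to time T."""
    rng = random.Random(seed)
    alive = 0
    for _ in range(trials):
        u, ok = 0.0, True
        for _ in range(T):
            u = 0.5 * u + rng.uniform(-1.5, 1.5)
            if abs(u) > 1.0:     # the state left Pi: the system is destroyed
                ok = False
                break
        alive += ok
    return alive / trials

probs = [viability_probability(T) for T in (1, 5, 20)]
# probs decreases toward zero: a toy analogue of P(T, Pi, phi) -> 0 in (4.91).
```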

4.7.2 Reaction-diffusion systems with random parameters

Let us consider the following reaction-diffusion systems with a noise ξ:

∂u_i/∂t = d_i Δu_i + F_i(u, x, ξ),   (4.92)
F_i(u, x, t) = f_i(u, x) + g_i(u) · ξ(x, t),   (4.93)

where

g_i(u) · ξ = Σ_{k=1}^{p} g_i^{(k)}(u) ξ_k.

Here, u = (u_1, u_2, . . . , u_n), the u_i(x, t) are unknown functions, x ∈ Ω, t > 0, i = 1, 2, . . . , n, and the d_i > 0 are diffusion coefficients. The functions f_i(u, x) and g_i^{(k)}(u) lie in C^2(R^n × Ω). The set Ω ⊂ R^m is a bounded open connected domain with a smooth boundary ∂Ω.
We assume that ξ(x, t) = (ξ_1(x, t), . . . , ξ_p(x, t)), where the ξ_j(x, t) are random fields with the following properties. The fields ξ(x, t) are continuous in t and such that for each fixed t, the function ξ(·, t) lies in the function space

S_r(Ω) = {ξ ∈ L^2(Ω) : ‖(−Δ + I)^r ξ(·, t)‖^2 + ‖ξ(·, t)‖^2 < ∞},


where ‖f‖ denotes the standard L^2-norm:

‖f‖^2 = Σ_{i=1}^{p} ∫_Ω |f_i|^2(x) dx,

and r ≥ 0.
Let us set the Neumann boundary conditions

∂u_i(x, t)/∂n = 0,  x ∈ ∂Ω, t > 0,   (4.94)

and the initial conditions

u_i(x, 0) = ϕ_i(x),  |ϕ_i|_{C^2(Ω)} < C_0,   (4.95)

where n is a normal vector with respect to ∂Ω and ϕ_i ∈ C^2(Ω).
In many applications, the u_i are concentrations which should be nonnegative if the initial concentrations are nonnegative, ϕ_i ≥ 0. To provide this property, let us assume

f_i = u_i K_i(u, x) + Q_i,  Q_i(u, x) ≥ 0 for all u,   (4.96)
g_i^{(k)}(u) = u_i h_i^{(k)}(u),   (4.97)

where K, h, Q are C^1-functions. If, in addition,

K_i(u) < −m_0 < 0,  sup_u Q_i(u) < M_0,   (4.98)

where m_0, M_0 > 0, then system (4.92) is dissipative for ξ ≡ 0. In fact, multiplying the i-th equation of (4.92) by u_i and integrating over Ω, one obtains the inequality

d‖u‖^2/dt ≤ −c_1 ‖u‖^2 + c_2 M_0 ‖u‖,  c_1, c_2 > 0.   (4.99)

This entails

‖u(t)‖ < C_1 + ‖u(0)‖ exp(−c_1 t),

where C_1 is a sufficiently large positive constant and c_1 > 0. Therefore, systems (4.92) with nonlinearities defined by (4.96) are dissipative in the L^2-norm.
A typical example of a system satisfying (4.98), (4.96) is given by the Reinitz–Mjolness–Sharp (RMS) model of a genetic circuit, considered in Chapter 2. For the RMS model, one has

K_i = −λ_i < 0,  Q_i = σ(Σ_{j=1}^{n} W_{ij} u_j − m_i(x)),   (4.100)

where the λ_i are constants, and σ ∈ C^∞ is a sigmoidal function.
Under conditions (4.96) and (4.98), let us introduce the viability domain Π by

Π_δ = {u ∈ H : ‖u‖ ≥ δ},   (4.101)
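The dissipativity mechanism behind (4.98)–(4.100) — a bounded production term σ(·) against a linear decay −λ_i u_i — is easy to check numerically. The sketch below (our illustration; the matrix W, thresholds m and initial state are chosen at random, not taken from the book) integrates the RMS-type circuit without diffusion and records the L^2 norm:

```python
import math
import random

def rms_step(u, W, lam, m, dt=0.01):
    """One Euler step of the RMS-type circuit du_i/dt = sigma(sum_j W_ij u_j - m_i) - lam_i u_i."""
    sigma = lambda z: 1.0 / (1.0 + math.exp(-z))   # bounded sigmoid: 0 < sigma < 1
    n = len(u)
    return [u[i] + dt * (sigma(sum(W[i][j] * u[j] for j in range(n)) - m[i]) - lam[i] * u[i])
            for i in range(n)]

rng = random.Random(3)
n = 5
W = [[rng.uniform(-2.0, 2.0) for _ in range(n)] for _ in range(n)]
lam = [1.0] * n
m = [rng.uniform(-1.0, 1.0) for _ in range(n)]
u = [rng.uniform(-5.0, 5.0) for _ in range(n)]
norms = []
for _ in range(4000):                       # integrate up to t = 40
    u = rms_step(u, W, lam, m)
    norms.append(math.sqrt(sum(x * x for x in u)))
# Since 0 < sigma < 1 and lam_i > 0, each u_i is attracted into (0, 1):
# the norm stays bounded, as in the dissipativity estimate (4.99).
```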

where δ is a small positive constant. Here, Π is defined by a more natural integral condition, as opposed to the case studied in [279].
Let D ⊂ R^n be an open connected domain with a smooth boundary. Assume that outside D, the vector fields g vanish, i.e.

g^{(k)}(u) = 0,  u ∉ D.   (4.102)

Let us formulate assumptions on the random fields ξ. Let us denote by V(δ, z(·), T) a tubular neighborhood of the curve t → z(t), z ∈ R^p, t ∈ [0, T]:

V(δ, z(·), T) = {ξ ∈ S_r : dist(ξ(·, t), z(t)) < δ for all t ∈ [0, T]}.

Here, dist(f(·), a) = sup_{x ∈ Ω} |f(x) − a|. Let us assume that for each [0, T], the map t → ξ(t) is continuous with values in S_r and Assumption 4.6 holds.

4.7.3 Existence of solutions of noisy systems

Let us take r > 0 in the definition of the fractional space such that there holds a compact embedding S_r ⊂ C^0(Ω). It is well known that r depends on dim Ω = m; one needs r > m/4 [108]. In the previous subsection, we obtained an a priori estimate that guarantees global existence, dissipativity and the existence of the attractor for our system when the noise is absent (ξ ≡ 0). Under conditions (4.102) and Assumption 4.8, for all T > 0 and almost all trajectories t → ξ(t), t ∈ [0, T], we can show the existence of solutions. In fact, similarly to (4.99), one obtains

d‖u‖^2/dt ≤ −c_1 ‖u‖^2 + c_2 M_0 ‖u‖ + cW(t)‖u‖^2,   (4.103)

where W(t) = sup_x |ξ(t, x)|. Due to our embedding and Assumption 4.8, the probability that W(t) is not bounded on [0, T] equals zero. Therefore, almost always, there holds the a priori estimate

‖u(t)‖ < C_1 M_0 + ‖u(0)‖ exp(−c_1 t + c ∫_0^t W(s) ds).

This estimate proves the existence of solutions on [0, T] for all times T and for almost all realizations of ξ.
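A crude numerical counterpart of this a priori estimate (our sketch; the reaction terms K = −1, Q = 0.5 and the noise amplitude are hypothetical choices mimicking the structure (4.96)–(4.98), not the book's examples) is a one-dimensional explicit finite-difference scheme with Neumann conditions and a bounded multiplicative noise; the discrete L^2 norm stays bounded in time:

```python
import math
import random

def simulate_rd(n_x=50, steps=2000, d=0.1, dt=0.0005, noise=0.5, seed=4):
    """Explicit finite differences for u_t = d u_xx + K u + Q + xi(t) u on [0, 1]
    with Neumann conditions (mirror ghost points), K = -1 < 0 and Q = 0.5 >= 0."""
    rng = random.Random(seed)
    dx = 1.0 / (n_x - 1)
    u = [math.sin(3 * math.pi * i * dx) ** 2 for i in range(n_x)]  # nonnegative initial pattern
    K, Q = -1.0, 0.5
    norms = []
    for _ in range(steps):
        xi = rng.uniform(-noise, noise)                      # bounded noise, W(t) <= 0.5
        new = []
        for i in range(n_x):
            left = u[i - 1] if i > 0 else u[1]               # mirror at x = 0
            right = u[i + 1] if i < n_x - 1 else u[n_x - 2]  # mirror at x = 1
            lap = (left - 2.0 * u[i] + right) / dx ** 2
            new.append(u[i] + dt * (d * lap + K * u[i] + Q + xi * u[i]))
        u = new
        norms.append(math.sqrt(dx * sum(x * x for x in u)))
    return norms

norms = simulate_rd()
# The discrete L2 norm stays bounded for all times, matching the a priori estimate.
```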

4.7.4 Unviability for generic noises g

Using a seminal result of geometric control theory [164], in this subsection we prove the Gromov–Carbone hypothesis (4.91). We suppose here that Assumption 4.6 holds. We use two auxiliary lemmas.


Lemma 4.27. Let δ be a small number. Let us consider the system of ordinary differential equations

du_i/dt = g_i(u) · z(t),  u(0) = ϕ ∈ D,  i = 1, . . . , n,   (4.104)

in the open domain D ⊂ R^n with a smooth boundary ∂D. Assume that p ≥ 2, the vector fields h^{(k)} ∈ C^∞(D), k = 1, 2, . . . , p, are generic, and the corresponding fields g^{(k)} defined by (4.97) are tangent to ∂D. Then, for each ϕ, there exists a piecewise constant in t function z(t, ϕ) on [0, T_0] such that the solution u(t, ϕ) = u(t, z(·, ϕ), ϕ) of (4.104) satisfies

|u(T_0, ϕ)| < δ.   (4.105)

Proof. We make the change u_i = exp(v_i). Then, substituting piecewise constant in t functions z(t) in (4.104), we obtain a symmetric polydynamical system for v = (v_1, . . . , v_n) on D [6, 164]. Since the fields g^{(k)} are tangent to ∂D, we can consider the domain D as a compact smooth manifold. A symmetric polydynamical system defined by p generic vector fields on a compact smooth manifold enjoys the complete controllability (attainability) property, i.e. any two points can be connected by a trajectory [164]. In particular, for each ϕ, there is a trajectory that enters the small ball B_δ = {u ∈ D : |u| < δ}. This remark completes the proof.

The assertion of this lemma is not fulfilled for p = 1, i.e. if only a single noise perturbs our system [6, 164]. It is also useful to notice that if a function z(t, ϕ), t ∈ [0, T_0], satisfies (4.105), then the rescaled control z_a(t) = az(at), where t ∈ [0, T_0/a], also satisfies (4.105), since we have |u(T_0/a, ϕ)| < δ. Indeed, by the change τ = at, one obtains that the corresponding trajectory u(τ, ϕ) is defined by (4.104).

Lemma 4.28. Let us consider the system

du_i/dt = f(u) + g_i(u) · z(t),  u(0) = ϕ,   (4.106)

where f ∈ C^1(D) and where we consider z as a control. Assume that p ≥ 2 and the vector fields h^{(k)} ∈ C^∞(D) are generic. Then, for each ϕ, there exists a piecewise constant in t function z(t, ϕ) on [0, T_0] such that the corresponding solution u(t, ϕ) = u(t, z(·, ϕ), ϕ) satisfies (4.105).

Proof. For f = 0, according to Lemma 4.27 and the remark above, there is a control z̃(t) = az(at), t ∈ [0, T_0/a], such that |u(T_0/a, ϕ)| < δ/2. We denote by ũ the corresponding solution of (4.104). Let us consider the trajectory u of (4.106) with f and with the same z̃. Let us compare these trajectories, setting w(τ) = u − ũ, τ = at. Then, one has

dw/dτ = a^{−1} f(u) + (g(u) − g(ũ)) · z̃(τ),  0 ≤ τ ≤ T_0,  w(0) = 0.

Integrating this relation over [0, τ], one obtains

|w(τ)| ≤ a^{−1} C_f τ + C_0 ∫_0^τ |w(s)| ds.   (4.107)

Here, C_f = sup_{u ∈ D} |f|, C_0 = c_0 C_g, C_g = sup_{u ∈ D} |∇g|, and c_0 = sup_{τ ∈ [0, T_0]} |z(τ)|. Now, (4.107) entails, by the Gronwall inequality, that

|w(τ)| ≤ Ca^{−1} exp(C_0 τ),  τ ∈ [0, T_0].

Choosing a sufficiently large a, one has

|w(T_0)| ≤ C_2 a^{−1} < δ/2,

which completes the proof.

The controls z from this lemma depend on the start point ϕ. It is important below that one can assume that z(t, ϕ) is smooth in ϕ. To show this, we construct controls z_k = z(t, ϕ_k) for a sufficiently dense set K of points ϕ_k ∈ D such that the corresponding trajectories u(t, ϕ_k) satisfy |u(T_0, ϕ_k)| < δ/2. If the quantity sup_{ϕ ∈ D} dist(ϕ, K) is sufficiently small, then |u(T_0, ϕ)| < δ/2 for a function z(t, ϕ), which can be obtained from the z_k via a partition of unity [110]. Therefore, without loss of generality, one can assume that

|D^2_ϕ z(τ, ϕ)| + |D_ϕ z(τ, ϕ)| + |z| < M_1,  τ ∈ (0, T_0).   (4.108)
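The rescaling trick z_a(t) = az(at) used in the proof of Lemma 4.28 can be seen numerically: speeding up the control by a factor a makes the bounded drift f contribute only O(a^{−1}) to the terminal state. The following sketch (our illustration; the drift sin(u) + 0.5 and the constant control are hypothetical choices, with p = 1 and g ≡ 1 for simplicity of the picture only) integrates a scalar system under the rescaled control:

```python
import math

def steer_with_drift(phi, a, T0=1.0, n_steps=20000):
    """Integrate du/dt = f(u) + a z(a t) on [0, T0/a], where z(t) = -phi/T0 is the
    control steering the driftless system du/dt = z from phi to 0 at time T0."""
    f = lambda v: math.sin(v) + 0.5          # bounded drift, C_f <= 1.5
    z0 = -phi / T0                           # the (here constant) driftless control
    dt = (T0 / a) / n_steps
    u = phi
    for _ in range(n_steps):
        u += dt * (f(u) + a * z0)            # rescaled control z_a(t) = a z(a t)
    return abs(u)                            # distance to the target after time T0/a

errors = [steer_with_drift(2.0, a) for a in (1, 10, 100)]
# The terminal error decays like O(1/a): the drift contributes only a^{-1} C_f T0,
# as in the Gronwall estimate following (4.107).
```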

This allows us, under assumptions (4.97) and (4.96), to extend estimate (4.105) to the case of reaction-diffusion systems (4.92). First, we prove a lemma.

Lemma 4.29. Let us consider (4.92). Assume

sup_{x ∈ Ω, u ∈ D} |∂f(u, x)/∂u| < C,

and that the h^{(k)}(u, x) ∈ C^∞(D) × C^3(Ω) are generic. Then, for each δ > 0, there exists a ξ = z(x, t), t ∈ [0, T_1], such that

‖u(·, T_1)‖ < δ,   (4.109)

and for each t, the function z(x, t) lies in S_r.

Proof. Let us consider the control system (4.104), where x is involved as a parameter:

∂ũ_i/∂t = ξ(t, ϕ(x)) · g_i(ũ).   (4.110)

We use the method of Lemma 4.28. Let us choose δ_1 > 0 such that 2δ_1 meas(Ω) = δ. Consider the control ξ = z̃(t) = az(at), t ∈ [0, T_0/a], such that the corresponding solution ũ(t, ϕ) satisfies (4.105). The corresponding solution of the complete system (4.92) is denoted by u(x, t), omitting the index a. Let us make the time rescaling τ = at, and let us define a new variable w by w = u − ũ. For w, one has

∂w_i/∂τ = d_i a^{−1} Δw_i + a^{−1} f + z · (g_i(ũ) − g_i(u)) + a^{−1} ρ_i(x, τ),   (4.111)

where ρ_i(x, τ, a) = d_i Δũ_i. Let us estimate ρ_i. One observes that

D_x ũ = (D_ϕ ũ) ϕ_x.

Therefore, to estimate |∇ũ|, it suffices to estimate the derivatives of ũ(t, ϕ) with respect to the initial data ϕ. To this end, we observe that

(D_ϕ ũ)_t = (D_u f(ũ) + D_u g(ũ) · z(τ, ϕ)) D_ϕ ũ + g(ũ) · D_ϕ z(τ, ϕ).

Thus, by (4.108), one obtains

|D_ϕ ũ| ≤ C_1,   (4.112)

where the constant C_1 > 0 depends on f, g; however, C_1 is independent of a. In a similar way, we can estimate the second derivatives. We notice that

ρ_i(x, t) = d_i (Σ_{j=1}^{n} (∂ũ_i/∂ϕ_j) Δϕ_j + Σ_{j=1}^{n} Σ_{l=1}^{n} (∂^2 ũ_i/∂ϕ_j ∂ϕ_l)(∇ϕ_j · ∇ϕ_l)).   (4.113)

This relation, by (4.112), gives the estimate

‖ρ_i‖ ≤ C_2 d_i (‖Δϕ‖ + ‖∇ϕ‖^2),  0 ≤ t ≤ T_0/a.   (4.114)

One has

|g_i(ũ) − g_i(u)| ≤ c_7 |w|.   (4.115)

Now, let us multiply equations (4.111) by w_i, integrate over x and sum over i. This gives the inequality

d‖w‖^2/dτ ≤ C_3 a^{−1} ‖w‖ + C_4 ‖w‖^2,  w(0) = 0.

This gives ‖w(T_1)‖ < δ_1/2 for a large enough. Moreover, ‖ũ(T_1)‖ < δ due to our choice of δ_1. This completes the proof.

Theorem 4.30. Let us consider (4.92) under the conditions of Lemma 4.29 and Assumption 4.6. Then, (4.91) holds.

Proof. The proof uses the same arguments as in the case of Theorem 4.25. Due to the previous lemma, there exists a field ξ(x, t) such that the corresponding solution u(x, t, ξ(·, ·), ϕ) leaves Π_{δ/2} at t = T_0. Let us take a tubular neighborhood V(κ, ξ, T) of ξ, where κ = κ(δ) is small enough such that u(x, t, ξ(·, ·), ϕ) leaves Π_δ at t = T_0 for each ξ ∈ V. The measure of all such ξ is positive due to Assumption 4.6. Thus, the probability to leave Π_δ within any interval [t, t + T_0] is positive and admits a uniform estimate. This implies (4.91).

Let us make some comments. It is interesting to generalize this result to systems with nonlinearly involved noises, and also to understand the connection between the viability probability P(T, Π_δ, ϕ) and the pattern morphological complexity defined by

C_M(ϕ) = ‖∇ϕ‖^2 + ‖Δϕ‖.   (4.116)

Let us start with the second point. We can express the previous theorem more precisely as follows:

Theorem 4.31. Let us consider (4.92) under the conditions of Lemma 4.29 and Assumptions 4.6 and 4.8. Then, the probability P(T, Π_δ, ϕ) can be estimated through the pattern complexity C_M(ϕ), the number δ and the diffusion coefficients d = (d_1, . . . , d_n):

P(T, Π_δ, ϕ) < 1 − Q_1(T, d, C_M(ϕ), δ, n),   (4.117)

where

Q_1 ≥ 0,  Q_1(T, d, C_M(ϕ), δ, n) → 1  (T → +∞).

For each fixed T, the quantity Q_1 decreases as a function of the morphological complexity C_M(ϕ).

Notice that Q_1 is the probability that the system state leaves the viability domain Π at a time moment t ∈ (0, T]. To show this, we consider the proof of Theorem 4.30. Let us consider estimates (4.114), (4.112) and relation (4.113). They imply that

μ_i = ‖ρ_i‖ ≤ c_0 d_i C_M(ϕ).   (4.118)

The proof of Theorem (4.30) shows that the probability Q1 is a decreasing function of μ 1 , . . . , μ n . Therefore, the inequality (4.118) that the function Q1 is decreasing in C M (ϕ) and Q1 → 0 only if C M (ϕ) → +∞. It is natural to assume that the initial state ϕ lies in the global attractor of (4.92) for ξ = 0 (we assume the mean value of ξ (t) equals zero). Under assumptions (4.96) and (4.98), the semiflow defined by (4.92) with ξ ≡ 0 is dissipative and therefore the global attractor exists. We assume that ϕ is a stationary solution of (4.92) with ξ = 0. Let us introduce parameter Kd =

dmax , dmin

dmax = max d i , i =1,...,n

dmin = min d i . i =1,...,n

(4.119)
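As a numerical illustration (not from the book), the two quantities just introduced, the morphological complexity C_M(ϕ) and the diffusion contrast K_d, can be computed for a sampled one-dimensional pattern. The periodic grid, the test pattern sin(kx), and the choice of discrete norms (L² for the gradient term, L¹ for the Laplacian term) are our own assumptions.

```python
import numpy as np

def morphological_complexity(phi, dx):
    """C_M(phi) = ||grad phi||^2 + ||Delta phi|| on a periodic 1-D grid.
    We assume an L2 norm for the gradient term and an L1 norm for the Laplacian."""
    grad = (np.roll(phi, -1) - np.roll(phi, 1)) / (2 * dx)        # central difference
    lap = (np.roll(phi, -1) - 2 * phi + np.roll(phi, 1)) / dx**2  # discrete Laplacian
    return np.sum(grad**2) * dx + np.sum(np.abs(lap)) * dx

def K_d(diffusions):
    """Diffusion contrast K_d = d_max / d_min, eq. (4.119)."""
    d = np.asarray(diffusions, dtype=float)
    return d.max() / d.min()

L, n = 2 * np.pi, 400
x = np.linspace(0.0, L, n, endpoint=False)
dx = L / n
# A pattern oscillating k times in space is roughly k^2 times more "complex":
for k in (1, 4):
    print(k, morphological_complexity(np.sin(k * x), dx))
print(K_d([1.0, 0.01]))  # one fast and one slow reagent
```

For sin(kx) the exact values are k²π + 4k², so the two printed complexities differ by a factor of 16, illustrating how spatial oscillation drives C_M up.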


This parameter measures the difference in relative reagent mobility. We observe that

‖Δϕᵢ‖ < C(f) dᵢ⁻¹,    ‖∇ϕᵢ‖² < C₂(f) dᵢ⁻¹,

and thus

max |μᵢ| < c K_d.    (4.120)

This means that

P(T, Π, ϕ(·)) < 1 − Q₂(T, K_d, δ, n),    (4.121)

where Q₂ → 0 as K_d → +∞. We obtain the following fundamental consequences:

Only shadow reaction-diffusion systems with K_d ≫ 1 may be viable for large times. If the pattern morphological complexity C_M(ϕ) is bounded (i.e. the pattern does not oscillate strongly in x), then this pattern is not viable for a long time. The viability probability P_T can be estimated via a function increasing in C_M(ϕ).

Recall that in Chapter 3 it is shown that such shadow systems can generate maximally complex dynamical behavior.

Theorem 4.30 leads to another corollary, which means that the genome complexity has a tendency to increase during the evolution process. Let us assume that f, g depend on some discrete parameters s = {s₁, s₂, . . . , s_N}, sᵢ ∈ {0, 1}, s ∈ S_N = {0, 1}^N. Each s can be interpreted as a string of a genetic code that encodes the system structure. Assume that all the vector fields g(u, s) are generic for all s (this assumption seems natural since the s are discrete parameters). As above, in Subsection 4.6.2, we can formulate an assertion about genetic code complexity.

Corollary 4.32. Let F be an arbitrary Turing machine. Consider a class of generically unviable systems. Assume that an evolution generates strings s with a priori bounded Kolmogorov complexities K_F(s) relative to F. Then, for almost all mappings α : s → r(s) of strings s to the parameters r, the evolution is unstable and the corresponding system is unviable: P_T → 0 as T → ∞.

An interesting case where the noises enter in a nonlinear way and, nonetheless, Theorem 4.30 holds is given by the Reinitz–Sharp–Mjolsness model [188, 226], where the nonlinearities are defined by (4.100). We assume that the noise is induced by mᵢ(x) = m̄ᵢ + ξᵢ(x, t). Then, the SRM model can be written down as

∂uᵢ/∂t = dᵢ Δuᵢ + rᵢ σ( Σⱼ₌₁ᴺ Wᵢⱼ uⱼ + m̄ᵢ(x) + ξᵢ(x, t) ) − λᵢ uᵢ.

(4.122)

Here, the uᵢ are gene product concentrations (proteins produced by the genes), rᵢ > 0 are coefficients, Wᵢⱼ is a matrix describing the nonlinear interaction between proteins (genes), m̄ᵢ(x) are morphogen densities, ξᵢ are noises in these mᵢ, and λᵢ are degradation constants.
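A minimal simulation sketch of the noisy SRM system (4.122) on a periodic one-dimensional domain. All parameter values, the sigmoid choice σ(z) = 1/(1 + e^{−z}), the grid, and the noise realization are our own illustrative assumptions, not values from the book.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigma(z):
    return 1.0 / (1.0 + np.exp(-z))  # a common sigmoid choice (assumption)

# Two genes on a 1-D periodic domain; all values are illustrative.
N, n_x, dx, dt, steps = 2, 64, 0.1, 0.001, 2000
d = np.array([0.01, 1.0])                 # diffusion coefficients d_i
r = np.array([1.0, 1.0])                  # production rates r_i
lam = np.array([1.0, 1.0])                # degradation constants lambda_i
W = np.array([[2.0, -1.0], [1.0, 0.5]])   # gene-gene interaction matrix W_ij
x = np.arange(n_x) * dx
m_bar = np.vstack([np.sin(2 * np.pi * x / (n_x * dx)), np.zeros(n_x)])  # morphogens

u = 0.1 * np.ones((N, n_x))
for _ in range(steps):
    xi = 0.05 * rng.standard_normal((N, n_x))   # noise entering through the m_i
    lap = (np.roll(u, -1, axis=1) - 2 * u + np.roll(u, 1, axis=1)) / dx**2
    drive = sigma(W @ u + m_bar + xi)
    u += dt * (d[:, None] * lap + r[:, None] * drive - lam[:, None] * u)

# Concentrations stay near the a priori bound sup|u_i| <= R* = max(r_i / lambda_i):
print(u.min(), u.max(), (r / lam).max())
```

The printout illustrates the a priori estimate used in the proof of Theorem 4.33: since 0 < σ < 1, each uᵢ remains trapped near [0, rᵢ/λᵢ].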

Assume that the Wᵢⱼ are a priori bounded, |Wᵢⱼ| < W∗, and the noises ξᵢ satisfy Assumption 4.6. Let us denote by Vᵢ the input connectivity of the i-th reagent: Vᵢ equals the number of nonzero entries Wᵢⱼ when j runs over 1, 2, . . . , N. Now, one can prove the following analogue of the previous assertion:

Theorem 4.33. Let us consider (4.122) under Assumptions 4.6 and 4.8. Then, (4.91) holds. The viability probability P can be estimated via V∗ = maxᵢ Vᵢ, m∗ = max |mⱼ|, and R∗ = max(rᵢ λᵢ⁻¹); namely,

P(T, Π, ϕ(·)) < 1 − Q(T, V∗, m∗, R∗, δ),    Q ≥ 0,    (4.123)

where ϕ(x) ≥ 0 is a stationary pattern for ξ = 0, and Q is a function increasing in T and decreasing in V∗, m∗, R∗, δ. Moreover, Q(T) → 1 (T → +∞).

To prove it, first notice that the ϕᵢ satisfy the estimate sup_x |ϕᵢ| ≤ R∗, and the same estimate holds for all solutions of (4.122): sup_x |uᵢ(x, t)| ≤ R∗. Let us choose a large positive number a such that

R∗ σ(W∗ V∗ R∗ + m∗ − a) < δ λ∗,    λ∗ = meas(Ω)⁻¹ min_{i=1,...,N} λᵢ.

Let us define the subset D_a of ℝᵖ as follows: D_a = {y ∈ ℝᵖ : yᵢ < −a}. Let Q₁ be the probability that ξ(t) ∈ D_a for all t ∈ [0, T]. Under Assumption 4.6, this probability is positive for T large enough. Assume now that ξ(t) ∈ D_a for all t ∈ [0, T]. Then,

d‖uᵢ‖/dt ≤ −λᵢ ‖uᵢ‖ + δ λ∗.

This gives

‖uᵢ(t)‖ ≤ ‖ϕᵢ‖ exp(−λᵢ t) + δ,

and therefore our trajectory enters Π_δ for large t > T(δ). According to this estimate, the viability increases together with the degree V∗ of the nodes in the interaction graph associated with the protein network.

Notice that, in general, the case when the noises ξ enter the system in a nonlinear way is complicated. Indeed, the results of geometric control theory show that the case of nonsymmetric polydynamical systems, which we should analyze here, is extremely nontrivial [6]. Even local controllability problems have been resolved up to now only for n = 2 (the two-component case) [206].

4.7.5 Biological evolution and complexity

In this subsection, we briefly describe biological concepts and ideas concerning morphological and genome complexity. We also consider connections between these ideas and the mathematical results stated above.


A huge literature is devoted to the problem of complexity and the increase of complexity in biological evolution, beginning with Darwin [57] and the seminal book [242]. This brief overview does not pretend to completeness, but one can mention, for example, [3, 17, 19, 24, 26, 33, 59, 85, 91, 109, 171, 174–177, 180, 181, 185, 218, 227, 230, 231, 248, 319], among many others.

The increase of genome complexity in evolution is considered in [248]. Since, as mentioned above, the Kolmogorov complexity is not computable (although it can be estimated, see [157, 333, 334]), one can use simpler measures of complexity. From the biological point of view, a natural definition is given in [3]: namely, the size of the nonredundant functional genome |s|_RF(O) can serve as an indicator of the biological complexity of an organism O. In the work [248], the following relation is proposed as a plausible hypothesis:

log₁₀(|s|_RF) = 8.64 + 0.89 t,

(4.124)

where t is the time measured in billions (10⁹) of years. This regression describes the increase of the nonredundant functional genome size from prokaryotes to mammals.

These ideas have been developed in [171]. In this paper, the authors use the concept of the minimal genome size (MGS) in a lineage. They report that although within a lineage a strict relationship between genome size and organism complexity is absent, the MGS nonetheless increases during evolution in a hyperexponential manner. The authors suggest that this fact can be connected with positive feedbacks that accelerate evolution. For example, one can assume that such a feedback may result from an arms race (a co-evolution between a prey and a predator) [60]. The concept of the arms race was suggested in order to explain the famous Van Valen law and the Red Queen hypothesis [301].

In the work [24], the concept of the Darwin Arrow is proposed. It addresses the topic of a maximal genome size among the organisms present in the biosphere: "The hypothesis of the arrow of complexity asserts that the complex functional organization of the most complex products of open-ended evolutionary systems has a general tendency to increase with time." A good overview of this problem can be found in [185].

Notice that it is impossible to assert that the complexity of all organisms increased during evolution. For example, the genome complexity of many parasites (tapeworms, for example) can decrease in time. The genome complexity of prokaryotes is bounded. The majority of living organisms are much simpler than humans and anthropoids, and these lowest forms are abundant. It is not obvious that humans are better adapted to the environment than insects (by the way, the difference in genome size between insects and mammals is not so great: 13 000 genes of Drosophila versus 20 000−30 000 of mammals). Bonner [33] connects the complexity increase with the increase in body sizes.
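Taken at face value, regression (4.124) can be probed numerically; for instance, its slope implies a fixed doubling time of the nonredundant functional genome size. The computation below is ours, not the book's.

```python
import math

def log_genome_size(t_gyr):
    """log10 of |s|_RF as a function of time t (in 10^9 years), eq. (4.124)."""
    return 8.64 + 0.89 * t_gyr

# Doubling time implied by the slope 0.89 per Gyr: log10(2) / 0.89 Gyr.
doubling_gyr = math.log10(2.0) / 0.89
print(doubling_gyr)  # ~0.34 Gyr, i.e. roughly 340 million years per doubling
```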
Some other authors suppose that complexity can correlate with adaptivity (viability). However, some leading experts in biology [59, 60] deny that this correlation exists. Finally, following [185], one can conclude that ". . . the undeniable increase of maximum complexity in evolution is best explained as a passive trend that is a side consequence of an otherwise complexity-neutral process of reckless local adaptation."

In this chapter, we have obtained theorems on complexity. Theorem 4.25 concerns genome complexity, whereas Theorems 4.30 and 4.31 concern the morphological one, C_M(ϕ). Informally speaking, these theorems say that both complexities have a tendency to increase during the evolution process. The analysis of the proofs shows that there should be a correlation between viability (adaptivity) and genome complexity.

Comment. There are no direct connections between the genome and morphological (pattern) complexities. Some weak estimates for the Reinitz–Sharp–Mjolsness model can be found in Appendix 4.10. Notice, for example, that a pattern ϕ(x) with a large C_M(ϕ) may have a small Kolmogorov complexity (as an example, we can take a layered periodic pattern).

It seems that, in order to correctly interpret the result on genome complexity from a biological point of view, we should consider the total genetic information of the whole biosphere. In fact, if we apply Theorem 4.25 to separate populations or organisms, then this result leads to a contradiction with the aforementioned abundance of many lowest forms. One can assume that since all parts of the biosphere are globally interconnected, the existence of higher organisms with complex morphology and large genomes can be useful for the survival of simple organisms (an example is given by parasites). An interesting idea of J. Lovelock, namely the Gaia hypothesis, is worth mentioning here: it postulates that the biosphere is a self-regulating system with the capacity to control the chemical and physical environment [166].

Finally, if we summarize the results of this chapter, the "logic" of evolution can be described as follows. Biological systems are fragile.
In our models, we take this property into account, assuming that the random noises are large. We show that generic systems with fixed parameters are unstable under such noises, and therefore they must inevitably be destroyed. However, if we have a biosphere consisting of such fragile systems whose parameters depend on a discrete genetic code, and this code evolves in time, then such a biosphere can have nonzero chances to survive. This is possible only if the total genetic information contained in all codes has a tendency to increase without limit. Moreover, the morphological complexity of some organisms in the biosphere should also have a tendency to increase. Nonetheless, these increase conditions are only necessary; they are not sufficient. We should explain a paradox of the evolution rate, namely, why evolution has sufficient time to create more and more viable structures (since the creation of such structures can be a difficult task). According to Section 4.5, this is possible due to the following: (a) the evolution bricks (proteins and other gene products) are very flexible; (b) gene redundancy, whereby each gene product is controlled by many genes; (c) a special (centralized and modular) gene network topology.


The properties (a) and (b) provide a sufficiently fast evolution rate. Property (c) is important for obtaining complicated patterns. The network hubs that can serve as capacitors (Subsection 4.5.5) regulate pattern stability and the evolution rate.

Moreover, let us mention that the two main results of Chapter 3 (Theorem 3.15) and of this chapter (Corollary 4.32 and Theorem 4.31) can be used to explain the existence of the Darwin Arrow, at least for models defined by systems of reaction-diffusion equations. Indeed, the viability of a reaction-diffusion system depends on the parameter K_d: according to Theorem 4.31 and the comments to this result, in order to obtain a sufficiently viable system, we should take a sufficiently large K_d. However, generic systems with large K_d may have complicated attractors (since these systems generate maximally complex families of semiflows).

4.8 Synchronization in multicellular systems

4.8.1 General approach

First, we describe a general and beautiful approach proposed by Kuramoto [152]. Let us consider a general reaction-diffusion system with a multiplicative noise:

dwᵢ/dt = dᵢ Δwᵢ + fᵢ(w) + ϵ² ξᵢ(x, t) gᵢ(x, w),

(4.125)

where w = (w₁, w₂, . . . , w_n), wᵢ = wᵢ(x, t) are unknown concentrations, and x = (x₁, . . . , x_m) ∈ Ω ⊂ ℝᵐ are space variables. To make equation (4.125) more mathematically tractable, we assume that the domain Ω is a box and periodic boundary conditions are fulfilled for all components wᵢ:

wᵢ(x + a, t) = wᵢ(x, t)

for all x, t,

(4.126)

where a is a nonzero vector. The terms gᵢξᵢ describe fluctuation effects, and ϵ > 0 is a small parameter. We suppose that the ξᵢ are random fields continuous in x (space) and t (time). Assume that for ξᵢ = 0 and dᵢ = 0, the shortened system (4.125),

dwᵢ/dt = fᵢ(w),

(4.127)

has a limit cycle solution w = S(t) (with a minimal time period T > 0). For small dᵢ ≈ ϵ², such that dᵢ = ϵ² d̄ᵢ with d̄ᵢ = O(1), we have the following asymptotic solution [152]:

w = S(t + ϕ(x, τ)) + ϵ² w⁽¹⁾(x, t, τ) + . . . ,    τ = ϵ² t.

266 | 4 Random perturbations, evolution and complexity In this relation, ϕ is an unknown phase function, τ is a slow rescaling time and w(1) is a correction. To obtain an equation for this correction w(1) , let us linearize (4.125) at S(t), giving n (1) dw i (1) = L ij (S)w j + F i (x, t, τ), (4.128) dt j =1 where L ij =

∂f i , ∂w j

and F i are defined by F i (x, t, τ) = d¯ i (S i (∇ϕ)2 + S i Δϕ) + g i (x, S(t + ϕ))ξ i − S i ϕ τ .

(4.129)



Here, S (z) = dS/dz denotes the derivative with respect to z = t + ϕ. The linear homogeneous equation n dv i = L ij (S)v j dt j =1

has the periodic solution S , and therefore, the corresponding equation with the con­ ∗ jugated operator L tr = L∗ (where L∗ ji = L ij ) also has a periodic solution S ( t + ϕ ) (Subsection 1.6.2). The necessary condition that the correction w(1) is a bounded func­ tion for large times is given by Lemma 1.19. In this case, this condition takes the form ∗

 F (x, t, τ ) · S (t + ϕ) = T

−1

T n

F i (x, t, τ)S∗ i ( t + ϕ ) dt = 0,

(4.130)

0 i =1

where  f  denotes time averages of f over [0, T ]:  f  = T

−1

T

f (s)ds. 0

We assume that S′ · S* = 1 for each t (see (1.44)). Equations (4.129) and (4.130) then give the following main phase equation (MPE):

ϕ_τ = ā Δϕ + b̄ (∇ϕ)² + G(x, ϕ),    (4.131)

where

b̄ = Σᵢ₌₁ⁿ d̄ᵢ ⟨Sᵢ″ Sᵢ*⟩,    (4.132)

ā = Σᵢ₌₁ⁿ d̄ᵢ ⟨Sᵢ′ Sᵢ*⟩,    (4.133)

and

G(x, ϕ) = ⟨ Σᵢ₌₁ⁿ gᵢ(x, S(· + ϕ)) ξᵢ(x, ·) Sᵢ*(· + ϕ(x, τ)) ⟩.    (4.134)
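As a sanity check of formulas (4.132) and (4.133) (our own test case, not from the book), take the harmonic limit cycle S(t) = (cos t, sin t) with S* = S′, so that S′ · S* = 1 holds pointwise. The averages can then be evaluated numerically: one finds ā = (d̄₁ + d̄₂)/2 and b̄ = 0, so for this particular cycle unequal diffusivities do not generate the (∇ϕ)² term.

```python
import numpy as np

t = np.linspace(0.0, 2 * np.pi, 4000, endpoint=False)

S = np.vstack([np.cos(t), np.sin(t)])     # harmonic limit cycle (our test case)
Sp = np.vstack([-np.sin(t), np.cos(t)])   # S'
Spp = -S                                   # S''
Sstar = Sp                                 # S* = S' gives S'.S* = 1 pointwise

d_bar = np.array([1.0, 3.0])               # illustrative rescaled diffusion coefficients

a_bar = sum(d_bar[i] * np.mean(Sp[i] * Sstar[i]) for i in range(2))   # eq. (4.133)
b_bar = sum(d_bar[i] * np.mean(Spp[i] * Sstar[i]) for i in range(2))  # eq. (4.132)
print(a_bar, b_bar)  # (d1 + d2)/2 = 2.0 and 0.0
```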


Let us observe that the Cauchy problem for equation (4.131) is well-posed and defines a global semiflow. Indeed, the function G is bounded on each bounded time interval [0, T] (since we supposed that the ξᵢ are continuous). We can use the comparison principle (Chapter 1). As a supersolution and a subsolution on [0, T], we take

ϕ± = ±( sup_{x∈Ω} |ϕ(x, 0)| + sup_{x∈Ω, t∈[0,T]} |G(x, t)| t ).

Consequently, solutions of (4.131) exist for all t > 0 and are bounded on [0, T]. Thus, this periodic initial value problem generates a semiflow. Moreover, we obtain an important general conclusion:

Lemma 4.34. Let ā > 0. If the ξᵢ(x, t) are uniformly bounded in t ∈ (0, ∞), the phase function ϕ can be estimated by a linear function:

sup_{x∈Ω} |ϕ(x, t)| < C₁ + C₂ t

for some Cᵢ > 0.

Notice that if G ≡ 0, i.e. the noise is absent, then equation (4.131) becomes the famous Burgers equation

ϕ_τ = ā Δϕ + b̄ (∇ϕ)²    (4.135)

proposed by J. M. Burgers as a turbulence model [320]. Let us describe some general properties of this equation. The Cauchy problem for the Burgers equation in a periodic box is well-posed if ā > 0. Notice that ā > 0 always holds if all diffusion coefficients dᵢ are equal: d̄ᵢ = d̄. If ā < 0, the initial value problem is not well-posed and does not define a global semiflow (the case of antidiffusion). Here, while solutions exist, space oscillations in ϕ grow at an exponential rate. Therefore, this case corresponds to a fast desynchronization (a chemical turbulence). Such a situation is possible if the dᵢ are different, i.e. some reagents diffuse faster than others.

It is well known that for ā > 0, the Burgers equation can be linearized by the Hopf substitution [320]. Let us show that this is also possible in some other cases. If the ξᵢ are independent of t and the term G(x, ϕ) = g(x) is independent of the phase ϕ, then the main equation (4.131) can be transformed into an analogue of the Schrödinger equation by the substitution ϕ = A log ψ, where A = ā b̄⁻¹. We obtain

ψ_τ = ā Δψ + W(x)ψ = Hψ,    (4.136)

where W(x) is a potential defined by W(x) = ā⁻¹ g(x). Assume ā > 0. Then, equation (4.136), together with the initial data

ψ(x, 0) = ψ₀(x),    (4.137)

defines a well-posed Cauchy problem, and solutions exist for all t > 0. Thus, the Cauchy problem (4.136), (4.137) generates a global semiflow. The asymptotics of solutions of this equation are determined by a positive eigenfunction of the operator H with the maximal eigenvalue λ₀(H). There are three possible cases:

(i) λ₀(H) > 0. Solutions of (4.136) are exponentially increasing and thus ϕ is linearly increasing, and we have the asymptotics

ϕ(x, t) = λ₀ t + M(x, t)    (t → +∞),    (4.138)

where M is a small correction with respect to the leading term λ₀ t:

|M(x, t)| < λ₁ t,    0 < λ₁ < |λ₀|.    (4.139)

So, in this case, we observe a desynchronization defined by a linear time function.

(ii) λ₀(H) < 0. Solutions of (4.136) are exponentially decreasing and ϕ is linearly decreasing. Here, we obtain the same relations (4.138) and (4.139). Again, the desynchronization grows as a linear function of time.

(iii) λ₀(H) = 0. Equation (4.136) again defines a well-posed Cauchy problem and there is a global semiflow. Solutions of (4.136) tend to an equilibrium, and ϕ is constant in time for large t. Thus, in this case, we observe a synchronization, and we have ϕ(x, t) → ϕ_eq(x) as t → ∞. Here, in a generic situation, we obtain a spatially inhomogeneous synchronous state for large times. In the particular case W = 0, we have ϕ_eq = const, and then we obtain a spatially homogeneous synchronous state for large times. Notice, however, that situation (iii) is not generic.

Thus, we have obtained the following conclusion: a generic random space homogeneous perturbation leads to desynchronization.

For more general ξᵢ and gᵢ, this linearization by the Hopf substitution does not work, and it is difficult to resolve equation (4.131). If ā > 0 and there is an a priori estimate of solutions, then all trajectories are bounded for t > 0. We obtain a monotone semiflow. General theorems of Chapter 1 imply that almost all trajectories are convergent, i.e. for large positive times t, the solution ϕ(x, t) approaches a spatially inhomogeneous synchronous state ϕ_eq(x). To investigate this case and to see when we are dealing with synchronization, in the coming subsection we consider a simplified linear version of equation (4.131).
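The trichotomy above can be explored numerically: discretize H = ā Δ + W(x) on a periodic grid and inspect the sign of its maximal eigenvalue λ₀(H). The grid, ā, and the potentials below are our own choices.

```python
import numpy as np

def lambda0(a_bar, W, dx):
    """Maximal eigenvalue of H = a_bar*Delta + W(x) on a periodic 1-D grid."""
    n = len(W)
    lap = (np.eye(n, k=1) + np.eye(n, k=-1) - 2 * np.eye(n)) / dx**2
    lap[0, -1] = lap[-1, 0] = 1.0 / dx**2   # periodic boundary conditions
    H = a_bar * lap + np.diag(W)            # symmetric matrix
    return np.linalg.eigvalsh(H).max()

n, L = 128, 2 * np.pi
x = np.linspace(0.0, L, n, endpoint=False)
dx = L / n

print(lambda0(1.0, -0.5 + 0.1 * np.cos(x), dx))  # negative: case (ii), synchronization
print(lambda0(1.0, +0.5 + 0.1 * np.cos(x), dx))  # positive: case (i), desynchronization
```

Shifting the mean of the potential W moves λ₀ through zero, which is exactly the transition between the synchronizing and desynchronizing regimes.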

4.8.2 Linear analysis of synchronization stability

To investigate the stability of a synchronized state ϕ ≡ const, we can linearize equation (4.131), which leads to the Euclidean Schrödinger equation (with an imaginary time variable) with an external force:

ϕ_τ = ā Δϕ + W(x)ϕ + V(x) = Hϕ + V(x),    (4.140)


where the potential W is defined by

W(x) = G_ϕ(x, ϕ)|_{ϕ=0},    (4.141)

and

V(x) = G(x, 0).    (4.142)

The right-hand side of (4.140) involves the Schrödinger operator H = ā Δ + W(x). Now, we can repeat the above analysis of the solutions. We again have three cases.

(i) λ₀(H) > 0. Solutions of (4.140) are exponentially increasing as τ → +∞. So, we observe here a desynchronization.

(ii) λ₀(H) < 0. Solutions of (4.140) tend to a stationary solution as τ → +∞, i.e. ϕ(x, τ) → ϕ_eq(x). In this case, we have a synchronization.

(iii) λ₀(H) = 0. In this case, for generic V, the phase ϕ is a linear function of time for large τ (plus exponentially decreasing terms). So, we have a weak, logarithmic in t, desynchronization.

Notice that, at least formally, the relations of this section can be extended to the case when the shortened system has transitive (ergodic) dynamics on a compact attractor. In this case, we can use averaging over the infinite period:

⟨F(x, t, τ) · S*(t + ϕ)⟩ = lim_{T→∞} T⁻¹ ∫₀ᵀ Σᵢ₌₁ᴺ Fᵢ(x, t, τ) Sᵢ*(t + ϕ) dt.    (4.143)
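In case (ii) above, the linearized equation (4.140) relaxes to the stationary state ϕ_eq = −H⁻¹V, and a direct time integration converges to it. The discretization and the data below are our own assumptions.

```python
import numpy as np

n, L, a_bar = 128, 2 * np.pi, 1.0
x = np.linspace(0.0, L, n, endpoint=False)
dx = L / n

lap = (np.eye(n, k=1) + np.eye(n, k=-1) - 2 * np.eye(n)) / dx**2
lap[0, -1] = lap[-1, 0] = 1.0 / dx**2               # periodic boundary conditions
H = a_bar * lap + np.diag(-1.0 + 0.2 * np.cos(x))   # here lambda_0(H) <= -0.8 < 0
V = 0.3 * np.sin(x)

phi_eq = np.linalg.solve(H, -V)         # stationary solution of (4.140)

phi = np.zeros(n)
dt = 0.2 * dx**2 / a_bar                # stable explicit time step
for _ in range(50000):
    phi += dt * (H @ phi + V)

print(np.max(np.abs(phi - phi_eq)))     # small: the phase synchronizes to phi_eq
```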

In the next section, we consider a space discrete situation, where more rigorous methods can be developed. We are going to analyze the influence of the extracellular matrix (ECM). The ECM is a gel-like medium composed of complex protein filaments forming a global network between cells [8, 210]. It is joined to the cytoskeleton in each cell by transmembrane matrix receptors such as integrin. The ECM is not only a cementing material: interactions between the ECM and cells lead to signal transmission that controls cell proliferation, differentiation, and death inside the cells.

The ECM constitutes a global nonlocal system for signal transmission that affects cell proliferation for the following two reasons.
(a) The ECM is a global dynamic system having viscoelastic properties. It is well known that the ECM plays a direct role in morphogenesis;
(b) The ECM has a mechanism for mutual conversion between mechanical and chemical reactions. The cell-ECM system supports mechanochemical mechanisms.

We shall show that nonlocal interactions, induced by the ECM, are important for synchronization.

4.8.3 Space discrete case

The Kuramoto method, applied in the previous subsections, is a formal asymptotic approach. It is difficult to obtain rigorous theorems here since the small perturbation is singular: the operator dᵢΔwᵢ is not bounded. However, if we consider the space discrete case, where the x are elements of some discrete lattice, this perturbation is regular, and the corresponding operator is bounded and small. Therefore, in this case, the Kuramoto method can be justified by center manifold theorems (Appendix 3.5). Let us consider a reaction-diffusion system with noise on a lattice:

dwᵢ/dt = ϵ² Aᵢwᵢ + fᵢ(w) + ϵ² ξᵢ(x, t),    (4.144)

where w = (w₁, w₂, . . . , w_n), wᵢ = wᵢ(x, t) are unknown concentrations, x ∈ Ω, and Ω is a finite subset of the d-dimensional lattice ℤᵈ. For simplicity and mathematical tractability, we assume that Ω is a box and we set periodic boundary conditions. Here, ϵ > 0 is a small parameter. We assume, as above, that the ξᵢ are random fields continuous in t for each x. The points x can be interpreted as cell positions. The operator Aᵢ describes cell interactions. Nonlocal interactions, which can describe the influence of the extracellular matrix, can be represented by an operator

(Aᵢw)(x) = Σⱼ₌₁ⁿ Σ_{x′∈Ω} ρᵢⱼ(x, x′) wⱼ(x′),    (4.145)

where the ρᵢⱼ(z) are decreasing functions of z. This operator is a simple formal description of a nonlocal signaling system. Analogous operators appear in block copolymer models [5]. To obtain nonlocal interactions decreasing in space, we can take ρᵢⱼ(x, x′) = O(|x − x′|⁻ᵈ), where d > 0.

Let us assume that for ξᵢ = 0, the shortened system

dwᵢ/dt = fᵢ(w)    (4.146)

has a limit cycle solution w = S(x, t) with a time period T > 0. For small ϵ, we then have the following asymptotic solution:

w = S(t + ϕ(x, τ)) + ϵ² w⁽¹⁾(x, t, τ) + . . . ,    τ = ϵ² t,

where ϕ is an unknown phase, τ is a slow rescaled time, and w⁽¹⁾ is a correction. To obtain an equation for w⁽¹⁾, let us linearize (4.146) at S(t), which gives

dwᵢ⁽¹⁾/dt = Σⱼ₌₁ⁿ Lᵢⱼ(S) wⱼ⁽¹⁾ + Fᵢ(x, t, ϕ(τ)),    (4.147)

where Lᵢⱼ = ∂fᵢ/∂wⱼ, and the Fᵢ are defined by

Fᵢ(x, t, τ) = Aᵢ S(·, t + ϕ(·, τ))(x) + ξᵢ(x, t).    (4.148)
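The nonlocal interaction operator appearing in (4.145) and (4.148) can be implemented directly; here is a single-component sketch on a periodic one-dimensional lattice (the kernel and the sizes are our own choices).

```python
import numpy as np

def make_rho(n_sites, c=1.0, gamma=1.0):
    """Kernel rho(x, x') decaying with the periodic lattice distance |x - x'|."""
    idx = np.arange(n_sites)
    dist = np.minimum(np.abs(idx[:, None] - idx[None, :]),
                      n_sites - np.abs(idx[:, None] - idx[None, :]))
    return c / (1.0 + dist) ** gamma

def A(w, rho):
    """(A w)(x) = sum_{x'} rho(x, x') w(x'), a one-component version of (4.145)."""
    return rho @ w

n_sites = 16
rho = make_rho(n_sites)
w = np.sin(2 * np.pi * np.arange(n_sites) / n_sites)
print(A(w, rho))
```

Since A is just a kernel matrix acting on lattice fields, it is a bounded linear operator, which is exactly the property that makes the lattice version of the Kuramoto expansion rigorous.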


The condition that w⁽¹⁾(t) is bounded for large times t is given by Lemma 1.19. To write down this condition, let us introduce the auxiliary quantities

Mᵢⱼ(ϕ, ϕ′) = T⁻¹ ∫₀ᵀ Sⱼ(t + ϕ′) Sᵢ*(t + ϕ) dt = ⟨Sⱼ(t + ϕ′) Sᵢ*(t + ϕ)⟩,    (4.149)

R(x, x′, ϕ, ϕ′) = Σᵢ₌₁ⁿ Σⱼ₌₁ⁿ ρᵢⱼ(x, x′) Mᵢⱼ(ϕ, ϕ′).    (4.150)

For small ϕ = ϕ(x, τ), ϕ′ = ϕ(x′, τ), we can use a linearization of R. Let us introduce

R₀(x, x′) = Σᵢ₌₁ⁿ Σⱼ₌₁ⁿ ρᵢⱼ(x, x′) Pᵢⱼ(x, x′),    (4.151)

where

Pᵢⱼ(x, x′) = ⟨Sⱼ′ Sᵢ*(t)⟩,    Sⱼ′(t) = dSⱼ/dt.

Then, we obtain the relation

Σ_{x′∈Ω} R(x, x′, ϕ(x, τ), ϕ(x′, τ)) = M_c ϕ + O(ϕ²),    (4.152)

where the operator M_c is defined by

(M_c ϕ)(x, τ) = Σ_{x′∈Ω} R₀(x, x′) ϕ(x′, τ) − ϕ(x, τ) Σ_{x′∈Ω} R₀(x, x′).

This operator appears in the fundamental Master Equation of statistical physics. Under our assumptions, the kernel R₀ actually depends on x − x′. Straightforward calculations then give, by (4.151) and (4.150), the main equation for the phases ϕ:

ϕ_τ(x, τ) = Σ_{x′∈Ω} R(x, x′, ϕ(x, τ), ϕ(x′, τ)) + G(x, ϕ),    (4.153)

where the term G is defined by

G(x, ϕ) = ⟨ Σᵢ₌₁ⁿ ξᵢ(x, ·) Sᵢ*(· + ϕ(x, τ)) ⟩.    (4.154)
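The operator M_c defined above has the characteristic Master Equation structure: it annihilates constant phases, and for a symmetric kernel it conserves the total phase. A small sketch (the kernel below is our own choice):

```python
import numpy as np

def M_c(phi, R0):
    """(M_c phi)(x) = sum_x' R0(x, x') phi(x') - phi(x) * sum_x' R0(x, x')."""
    return R0 @ phi - phi * R0.sum(axis=1)

n = 12
idx = np.arange(n)
dist = np.minimum(np.abs(idx[:, None] - idx[None, :]),
                  n - np.abs(idx[:, None] - idx[None, :]))
R0 = 1.0 / (1.0 + dist)          # a symmetric kernel depending on x - x'
phi = np.cos(2 * np.pi * idx / n)

print(np.abs(M_c(np.ones(n), R0)).max())  # constants are annihilated
print(M_c(phi, R0).sum())                 # total phase is conserved (symmetric R0)
```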

Equation (4.153) is nonlinear and nonlocal. To analyze it, we linearize this equation at ϕ = 0 and repeat the analysis of the previous subsection to estimate stability. We then obtain the following linear equation:

ϕ_τ = Σ_{x′∈Ω} R₀(x, x′) ϕ(x′, τ) − ϕ(x, τ) Σ_{x′∈Ω} R₀(x, x′) + W(x) ϕ(x, τ),    (4.155)

where

W(x) = ⟨ Σᵢ₌₁ⁿ ξᵢ(x, ·) S̄ᵢ(·) ⟩,    S̄ᵢ(t) = dSᵢ*/dt.    (4.156)

Assume that our medium is spatially homogeneous. Then, R₀ = R₀(x − x′), and for W = 0 this equation can be solved by the Fourier method. Let us denote by R̂₀(k) and ϕ̂(k, τ) the Fourier transforms of the kernel R₀ and of ϕ, respectively. Then,

ϕ̂_τ(k, τ) = (R̂₀(k) − R̂₀(0)) ϕ̂(k, τ),    (4.157)

giving

ϕ̂(k, τ) = exp((R̂₀(k) − R̂₀(0)) τ) ϕ̂(k, 0).    (4.158)
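By (4.158), the Fourier mode ϕ̂(k) evolves with the exponent R̂₀(k) − R̂₀(0); a negative exponent means that the mode decays and phase differences flatten out. For a concrete kernel these exponents can be tabulated (the kernel and lattice size are our own choices):

```python
import numpy as np

n = 32
idx = np.arange(n)
dist = np.minimum(idx, n - idx)            # periodic lattice distance to site 0
R0 = 1.0 / (1.0 + dist) ** 3               # a power-law kernel, cf. (4.160)
R0 /= R0.sum()                             # normalization sum_z R(z) = 1

R_hat = np.fft.fft(R0).real                # even kernel: the transform is real
exponents = R_hat - R_hat[0]               # mode k evolves like exp(exponents[k]*tau)
print(exponents[:5])
print(np.all(exponents[1:] < 0))           # all nonzero modes decay here
```

For a nonnegative kernel, R̂₀(0) = Σ R₀(z) dominates every other |R̂₀(k)|, so all nonuniform modes decay for this example.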

Optimization of interaction. Using relations (4.157) and (4.158), we can seek interactions R(x − x′) which produce the best synchronization. Such a problem can be formulated as follows: under the restriction

Σ_{z∈Ω} R(z) = 1,

find R(x − x′) such that S(k) = R̂(k) − R̂(0) > 0 for all wave vectors k = π(n₁/L₁, n₂/L₂, n₃/L₃), where the nᵢ are integers, and

S₀ = max_{k = π(n₁/L₁, n₂/L₂, n₃/L₃), nᵢ ∈ ℤ} S(k)

is maximal. To simplify the problem, we can assume that R(x − x′) is radially symmetric, i.e. R(z) = R(|z|). Moreover, we can seek R as a function depending on parameters, for example,

R(z) = C(β) exp(−β |z|),    β > 0,    (4.159)

or

R(z) = C(β)(r₀ + |z|)⁻ᵞ,    γ > 3.    (4.160)

Computations show that the choice (4.160) with γ close to 3 is optimal.
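A minimal version of the comparison mentioned above (our own reading of the criterion, on a 1-D lattice rather than the book's 3-D box): normalize each kernel so that Σ_z R(z) = 1 and take the decay rate of the slowest nonzero Fourier mode, cf. (4.158), as the synchronization figure of merit.

```python
import numpy as np

def slowest_decay(R):
    """Decay rate of the slowest nonzero mode under (4.158):
    min over k > 0 of (R_hat(0) - R_hat(k)), after normalizing sum R = 1."""
    R = R / R.sum()
    R_hat = np.fft.fft(R).real
    return (R_hat[0] - R_hat[1:]).min()

n = 64
z = np.minimum(np.arange(n), n - np.arange(n))   # periodic lattice distance

expo = slowest_decay(np.exp(-1.0 * z))           # kernel (4.159) with beta = 1
power = slowest_decay(1.0 / (1.0 + z) ** 3.1)    # kernel (4.160) with gamma near 3
print(expo, power)
```

Both rates are positive, so both kernels synchronize; scanning β and γ in this way is a cheap stand-in for the computation reported in the text.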

4.9 Summary

(A) It is shown that centralized networks can be robust under fluctuations and flexible, i.e. they can have a number of local attractors. However, there is a slow-down effect: if a center controls a number of satellites, the network rate must be bounded. This rate can be estimated via network parameters.

(B) We have elucidated the main reasons why a centralized (empire) network can be destroyed by random fluctuations. They are as follows: connections between the center and the satellites that are too weak; the slow-down effect, which restricts the evolution rate; correlated fluctuations of the environment, which are dangerous; and an empire extension, which can lead to the onset of chaos. A centralized network may be viable under fluctuations on an infinite time interval only if this network grows by special growth algorithms (similar


to the preferential attachment). It is not easy to save an Empire (a centralized network), even one of a very primitive structure (separated satellites, rigid control, uncorrelated noises). It is a hard combinatorial problem similar to the famous k-SAT.

(C) We have proved a general theorem on the gene code complexity C(T) increasing in a stable evolution process. The complexity C(T) has a "tendency to increase," i.e. C(T) is not bounded as T → +∞. Further, the analogy between complex organ formation and hard combinatorial problems allows us to show how random algorithms of local search can create "complex organisms." We have shown that the evolution, in a sense, is feasible if the Freedom Principle holds. The Freedom Principle means that evolution occurs if the gene code has a sufficient gene redundancy K. This effect is exponential, i.e. redundancy K allows one to create an "organism" with m ≈ 2^K n traits by n genes. This can be performed by local search algorithms based on point mutations and selection. However, when the number of ecological constraints becomes larger than some critical value depending on the redundancy parameter and the gene number, the evolution stops. The evolution rate is regulated by special genes (capacitors) which are hubs in genetic networks. Capacitors stabilize patterns when a population lives in a stable environment, and they can provide an evolution acceleration when the population lives under stress conditions. The mechanism of this acceleration uses genetic drift and is based on a model of a phase transition. When the population size is reduced to a critical value, the capacitors begin their activity.

(D) It is shown that a generic reaction-diffusion system is unviable under random fluctuations. A connection exists between the system parameters and viability. Systems where some reagents diffuse much faster than others are more robust under random perturbations. The biological systems have such a structure: they contain molecules with large molecular weights (for example, proteins) and reagents with small weights (substrates, microRNAs). This result, together with theorems on chaotic dynamics from Chapter 3, may help to explain the existence of the so-called Darwin Arrow. In fact, it is shown that systems which are more viable under fluctuations can generate complicated long-time behavior and space patterns.

(E) The Kuramoto method is applied to study synchronization in multicellular systems under random fluctuations. We show how some biological mechanisms (for example, extracellular matrices) can support synchronization.

4.10 Appendix

Let us consider the following problem. Suppose we observe some sequence of patterns z_t(x), x ∈ Ω, t ∈ [0, T]. We would like to estimate the number of genes required to create this sequence.

To resolve this problem, we can use different characteristics of pattern complexity. We employ the following three quantities: C₁(z_t(·), c), C₂(z_t(·), c₁, c₂), E(z_t(·)). They are functions of the discrete time t. The quantity C₁ is the number of connected components of the set

D_{c,t} = {x : z_t(x) = c}.    (4.161)

To define C₂, let us consider a set D_{c₁,c₂,t} depending on two parameters c₁, c₂ and on t. Namely, let us define

D_{c₁,c₂,t} = {x : c₁ ≤ z_t(x) ≤ c₂}.    (4.162)

Then, C₂ is the number of connected components of this set.

Both complexity measures are discrete, whereas the third measure E is a continuous quantity defined by

E(t) = ∫_Ω |∇z_t|² dx.    (4.163)
Let us now discuss the biological sense of C1 , C2 and E, and relations between them. Organisms consist of cells and these cells can be in different states. Following the classical ideas [8, 192], we assume that different cell states appear as a result of ex­ pression of different genes. We consider here, for simplicity, the case of a single gene that can change cell states. Let u m be such a gene, which can be called a morphogen. Notice that the mathematical concept of the morphogen was pioneered by A. Turing in [272]. Let us consider, following [272], patterns formed by two kinds of cells: modi­ fied and the usual ones. If u m is expressed at x, then we have here a modified cell at x. Otherwise, the cell remains in a usual state. Following the threshold approach [272], we suppose that the gene u m is expressed if u m > c and it is not expressed in the oppo­ site case (u m ≤ c). In this case, we obtain, as a natural measure of pattern complexity, the quantity C1 , which is the number of interfaces between cells of different kinds. The measure C2 admits a similar interpretation. Here, we assume that u m is ex­ pressed if u m > c2 and it is not expressed if u m < c1 . In the case c1 < u m < c2 , we deal with an intermediate (transient) state. Thus, both measures C1 and C2 relate to the number of transitions between the cells of different types. Notice that using the Sard’s theorem [110], we can choose c, c1 , c2 in (4.161) and (4.162) such that, at least locally, the boundaries of the connected components will be smooth submanifolds of Ω of the codimension 1. In particular, if Ω is an interval, these components will be isolated points. Example. For a periodical in x function z t (x) (layered structure) C1 =C2 = number of layers (for appropriate c, c1 , c2 ). The third measure, the quantity E, can be interpreted as a mean value of “oscilla­ tions” of function z t in x. The results for C1 and C2 are quite different. 
To estimate m through C1 , we use the so-called Pfaffian chains [139], under some additional assumptions on σ. It allows us to obtain rough estimates of C1 by Khovanski’s results. Estimates of C2 and E can be derived in a simpler way and appear to be essentially better.

It is still unknown whether the Khovanskii bounds can be improved. The key difference between the estimates of C1, on the one hand, and those of C2 and E, on the other, is that the estimates of C2 and E depend, in particular, on the diameter diam(Ω) of the domain Ω, whereas the ones for C1 are independent of this diameter.
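For a pattern sampled on a one-dimensional grid, all three measures are easy to compute directly. The following sketch is illustrative only (the function names and the test pattern are not from the text): it counts C1 as the number of generic level crossings, C2 as the number of maximal runs of the threshold set, and approximates E by finite differences.

```python
import numpy as np

def C1(z, c):
    """Components of the level set {x : z(x) = c}; for generic c these are
    isolated points, counted here as sign changes of z - c."""
    s = np.sign(z - c)
    return int(np.sum(s[:-1] * s[1:] < 0))

def C2(z, c1, c2):
    """Connected components (maximal runs) of {x : c1 <= z(x) <= c2}."""
    m = ((z >= c1) & (z <= c2)).astype(int)
    return int(m[0] + np.sum((m[1:] == 1) & (m[:-1] == 0)))

def E(z, dx):
    """Dirichlet energy, a discrete version of (4.163)."""
    return float(np.sum(np.gradient(z, dx) ** 2) * dx)

x = np.linspace(0.0, 1.0, 2001)
z = np.sin(6 * np.pi * x)     # layered pattern: three full periods on [0, 1]
print(C1(z, 0.05))            # 6 crossings of the level c = 0.05
print(C2(z, -0.1, 0.1))       # 7 transition zones around the zeros of z
```

For this layered pattern both discrete measures count the interfaces between "expressed" and "non-expressed" regions, in agreement with the example above, and E ≈ 18π², the exact Dirichlet energy of sin(6πx).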

4.10.1 Estimate of the number of genes m via complexity C1

We use the key notion of a Pfaffian chain [93, 139], introduced above in Subsection 4.2.2. The Pfaffian functions are well studied. Consider some elementary examples. The exponential exp(ax), x ∈ R, is a Pfaffian function of length 1 and degree 2. More generally, any real analytic function f(z), z ∈ R, satisfying an equation

df/dz = P(z, f)   (4.164)

is a Pfaffian function of degree deg P. We thus observe that many classical sigmoidal functions are Pfaffian. For example, f = (1 + exp(z))^{−1} satisfies (4.164) with P = f² − f. The superposition σ(exp(ax)), for example, is also a Pfaffian function.
Let us show that equation (2.273) defines a Pfaffian chain if σ(z) = (1 + exp(−az))^{−1}, where a > 0. We introduce the complexity of the chain (2.273) as the tuple of integers

Comp = {m, T, r_θ, d_θ, deg P},   (4.165)

where r_θ is the sum of the lengths of the Pfaffian chains for the θ_i, d_θ is the maximum of the degrees of the Pfaffian chains determining the θ_i, and deg P is the degree of the polynomial from (4.164) that defines σ.
Using induction, let us now consider the functions u_i^1. Taking the derivatives, one has

∂u_i^1/∂x_l = σ′(μ_i θ_i − η_i) μ_i ∂θ_i(x)/∂x_l.

Consequently, by (4.164), one obtains

∂u_i^1/∂x_l = P(μ_i θ_i − η_i, u_i^1) μ_i P_{i,l}(x, v_1^i, v_2^i, …, θ_i),

where the P_{j,l} are appropriate polynomials, and the v_k^j are functions of the chains determining θ_j. Thus, the u_i^1 and θ_j form a chain of degree d_θ + deg P and length r_θ + m. Repeating these calculations, we conclude that u_i^t, u_i^{t−1}, …, θ_i form a chain of degree d_θ + t deg P and length r_t = r_θ + tm.
The complexity of the pattern u_m^T(x) can be estimated by the known results on Pfaffian chains ([139], see also [93], Proposition A4).

Theorem 4.35. The number C1 of the connected components of the pattern u_m^T(x) generated by (2.273) can be bounded from above by

C1 < 2^{(r_θ+Tm)²} (d_θ + T deg P)^{O(r_θ+Tm+n)}.   (4.166)

Thus, given C1, we can bound R = r_θ + Tm from below roughly as (log₂ C1)^{1/2}, provided that log(deg P), log(d_θ) and n^{1/2} are less than r_θ + Tm. The quantity R can be interpreted as a complexity of the gene circuit (2.273). This estimate does not look optimal, but in the general case there are, as yet, no methods that could improve it.
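The fact used above, that the sigmoid is Pfaffian, is easy to check numerically. The sketch below is an illustration, not part of the original text: it compares a finite-difference derivative of f(z) = (1 + exp(z))^{−1} with the polynomial right-hand side P = f² − f of (4.164).

```python
import numpy as np

# f(z) = (1 + exp(z))**(-1) satisfies the Pfaffian equation
# df/dz = P(z, f) with the polynomial P = f**2 - f (length 1, degree 2).
z = np.linspace(-5.0, 5.0, 1001)
f = 1.0 / (1.0 + np.exp(z))
dfdz = np.gradient(f, z)                     # numerical derivative of f
residual = np.max(np.abs(dfdz - (f**2 - f)))
print(residual)    # only the finite-difference discretization error remains
```

The residual is of the order of the discretization error, confirming that the derivative of the sigmoid is a polynomial in the sigmoid itself.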

4.10.2 Estimates of E and C2

The estimates of the previous subsection were independent of max_{i,j} |K_{ij}| and of the diameter diam Ω. Throughout this subsection, we assume that the domain Ω is open and topologically trivial (contractible). The bounds on E and C2 obtained here are stronger than the ones on C1 from the previous subsection, but they hold under the conditions

max_{i,j} |K_{ij}| ≤ K∗,   diam Ω = δ > 0.   (4.167)

Other parameters involved in our estimates are V (the circuit valency defined above) and

ρ = sup_{i,k} |∂θ_i/∂x_k|.   (4.168)

Let us denote sup σ′(z) = C_σ. Now, we can estimate ∇u_i^t inductively. Indeed, denote sup_{i,x} |∇u_i^t| = μ_t. Then

μ_{t+1} ≤ C_σ (V K∗ μ_t + ρ),   t = 0, 1, …,   (4.169)

where μ_0 = 0. Therefore,

μ_t ≤ ρ C_σ ((C_σ V K∗)^t − 1) / (C_σ V K∗ − 1)   (4.170)

if a = C_σ V K∗ ≠ 1, and

μ_t ≤ t ρ C_σ   (4.171)

if a = 1. We can suppose without any loss of generality that a ≠ 1. It is obvious that

E(u_m^t) < c δ^n (ρ C_σ ((C_σ V K∗)^t − 1) / (C_σ V K∗ − 1))²,   n = dim Ω.   (4.172)
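The passage from the recursion (4.169) to the closed form (4.170) is a standard geometric-series summation. It can be checked by iterating the recursion with equality; the parameter values below are illustrative assumptions only.

```python
# Iterate mu_{t+1} = C_sigma * (V * K_star * mu_t + rho) with mu_0 = 0 and
# compare with the closed form mu_t = rho * C_sigma * (a**t - 1) / (a - 1),
# where a = C_sigma * V * K_star (the case a != 1).
C_sigma, V, K_star, rho = 0.25, 3, 2.0, 0.5
a = C_sigma * V * K_star                   # here a = 1.5
mu = 0.0
for t in range(1, 11):
    mu = C_sigma * (V * K_star * mu + rho)
    closed_form = rho * C_sigma * (a**t - 1) / (a - 1)
    assert abs(mu - closed_form) < 1e-12   # recursion matches the closed form
print(mu)    # grows geometrically, like a**t for large t
```

The same iteration with a = 1 reproduces the linear growth μ_t = tρC_σ of (4.171).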

Now, we proceed to an estimate of C2 and begin with the one-dimensional case. The inequality C2 > k, where k is an integer, entails that there are two points x1, x2 such that

|x1 − x2| < δ/k,   u_m^t(x1) = c1,   u_m^t(x2) = c2.   (4.173)

Thus, there is a point ξ such that

|du_m^t/dx (ξ)| > (c2 − c1) C2 / δ.   (4.174)

However, by (4.170), we then obtain

Proposition 4.36. If Ω is an interval, the following estimate of the pattern complexity via the circuit complexity holds:

C2(u_m^t, c1, c2) < diam Ω (c2 − c1)^{−1} ρ C_σ ((C_σ V K∗)^t − 1) / (C_σ V K∗ − 1).   (4.175)

This gives us the required estimate. Let us notice that an analogue of this estimate also holds for the continuous Reinitz–Mjolsness–Sharp model. Its deduction is similar, and we leave it to the reader.
Let us now turn to the case n = dim Ω > 1.

Theorem 4.37. If Ω is a topologically trivial domain with a smooth boundary, then, for generic c1 and c2, we have

C2(u_m^T, c1, c2) < const mes Ω (ρ C_σ ((C_σ V K∗)^T − 1) / (C_σ V K∗ − 1))^n.   (4.176)

We start with an elementary assertion: if each connected component contains a ball of radius r, then the number of connected components satisfies

C2 < const mes Ω r^{−n},   (4.177)

where the factor const depends on n. Now, to prove the theorem, we are going to estimate r. First, using Sard's theorem, we choose c1, c2 such that they are regular values of the smooth function u_m^T. Consider a connected component D_k of the set defined by (4.162). Then the boundary ∂D_k is a union of two disjoint smooth manifolds B_i of codimension 1, B_i = {x : u_m^T(x) = c_i}, i = 1, 2; here we employ the theorem on a regular value, see [110]. Since the boundaries are compact, there are two points x1 ∈ B1, x2 ∈ B2 such that

dist(x1, x2) = inf_{x∈B1, y∈B2} dist(x, y).   (4.178)

Let us set 2r = dist(x1, x2) and show that the open ball B, which has the interval [x1, x2] with the endpoints x1, x2 as a diameter, is contained in D_k.
Indeed, we have just two possibilities: either B lies completely in D_k or completely outside of D_k. Otherwise, B would contain some points of the boundary ∂D_k, for example, a point z where u_m^T(z) = c1. However, then dist(z, x2) < 2r gives us a contradiction with (4.178). Let us check now that the second possibility (B is outside of D_k) also leads to a contradiction.

Let us denote by W the unique connected component of B1 which contains the point x1 ∈ W. Since W is a smooth submanifold of codimension 1, due to Alexander duality [172] the complement Ω \ W consists of two connected components U0, U1 (taking into account the topological triviality of Ω). Then D_k lies completely in one of U0, U1; let D_k ⊂ U0 for definiteness. The interval (x1, x2] (with the deleted endpoint x1) does not intersect W (due to (4.178)), and therefore this interval is contained completely either in U0 or in U1. On the other hand, the point x2 ∈ D_k ⊂ U0, and hence the whole interval (x1, x2] ⊂ U0.
For a small enough ball B_{x1}(e) centered at x1, the complement B_{x1}(e) \ W has two connected components (again, we make use of the fact that W is a smooth submanifold of codimension 1 and a connected component of the boundary of D_k). One of these two components coincides with B_{x1}(e) ∩ D_k and the other one with B_{x1}(e) \ D_k. This partition is the same as the partition of B_{x1}(e) \ W into the two connected components B_{x1}(e) ∩ U0 and B_{x1}(e) ∩ U1. Because D_k ⊂ U0, we conclude that B_{x1}(e) ∩ D_k = B_{x1}(e) ∩ U0. Therefore, a suitable beginning (x1, x3] ⊂ (x1, x2] of the interval (x1, x2] is contained in B_{x1}(e) ∩ D_k (see the previous paragraph). Taking into account that the open interval (x1, x2) does not intersect the boundary of D_k thanks to (4.178), this finally implies that (x1, x2) ⊂ D_k, which contradicts the assumption that B is outside of D_k.
To conclude the proof, it is sufficient now to estimate r. Using the Lagrange theorem, we obtain c2 − c1 = 2r |(n · ∇u_m)|, where n is a unit vector directed along the diameter [x1, x2]. This relation entails r^{−n} ≤ C sup |∇u_m|^n. Applying estimates (4.170) and (4.177), we obtain (4.176).
Notice that the complexities C1 and C2 are stable under small perturbations.

Lemma 4.38. For generic c and c_i, the complexities C1, C2 of the pattern u_m^t(x) are conserved under small smooth perturbations: the complexities of the pattern u_m^t coincide with the corresponding complexities of u_m^t + z̃(x) if ‖z̃‖_{C^1} < ϵ and ϵ is small enough.

Proof. Consider the case of C2. The connected components are disjoint. Since they are compact, the distances d_k between these components are positive. If c1, c2 are regular values of u_m, the boundaries of the components are smooth submanifolds of codimension 1. If ϵ is sufficiently small, the perturbations of these level submanifolds are small due to the regularity of the values c_i. Thus, since inf d_k > 0, the perturbed connected components remain disjoint.
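Lemma 4.38 can be illustrated numerically in the one-dimensional case: a perturbation that is small in the C¹ norm moves the level sets only slightly, so the component count is unchanged. The sketch below is illustrative only; the pattern, the perturbation, and the thresholds are not from the text.

```python
import numpy as np

def C2(u, c1, c2):
    """Number of connected components (maximal runs) of {x : c1 <= u <= c2}."""
    m = ((u >= c1) & (u <= c2)).astype(int)
    return int(m[0] + np.sum((m[1:] == 1) & (m[:-1] == 0)))

x = np.linspace(0.0, 1.0, 4001)
u = np.sin(4 * np.pi * x)                  # five transition zones for |u| <= 0.2
z_tilde = 0.01 * np.cos(2 * np.pi * x)     # smooth perturbation, small C^1 norm
# The perturbation is far below the 0.2 threshold margin, so the count is stable.
assert C2(u, -0.2, 0.2) == C2(u + z_tilde, -0.2, 0.2) == 5
```

A perturbation comparable in size to the threshold margin would, of course, be able to merge or split components; the lemma only concerns perturbations small relative to that margin.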

Bibliography

[1] Achlioptas, D. Lower bounds for random 3-SAT via differential equations, Theor. Comp. Sci., 265 (2001), 159–185.
[2] Achlioptas, D. and Moore, C. The asymptotic order of the random k-SAT threshold, IEEE Comput. Soc. Press, Los Alamitos, CA (2002).
[3] Adami, C. What is complexity? BioEssays, 24 (2002), 1085–1094.
[4] Aegerter-Wilmsen, T., Aegerter, C. M. and Bisseling, T. J. Theor. Biology, 234 (2005), 13.
[5] Aero, E. L., Vakulenko, S. A. and Vilesov, A. D. Kinetic theory of macrophase separation in block copolymers, Journal de Physique France, 51 (1990), 2205–2226.
[6] Agrachev, A. A. and Sachkov, Yu. Control Theory from the Geometric Viewpoint, Encyclopaedia of Mathematical Sciences, Vol. 87, Control Theory and Optimization, Springer (2004), XIV, 412 p.
[7] Albert, R. and Barabási, A. L. Statistical mechanics of complex networks, Rev. Modern Physics, 74 (2002), 47–97.
[8] Alberts, B., Bray, D., Lewis, J., Raff, M., Roberts, K. and Walter, P. Molecular Biology of the Cell, 4th ed., Garland Publishing, Inc., New York (2002).
[9] Albrecht, F. and Diamond, H. G. The converse Taylor theorem, Indiana Math. Journal, 21 (1971), 347.
[10] Aldana, M. Boolean dynamics of networks with scale-free topology, Physica D, 185 (2003), 45–66.
[11] Arneodo, A., Coullet, P., Peyraud, J. and Tresser, C. Strange attractors in Volterra equations for species in competition, J. of Math. Biology, 14 (1982), 153–157.
[12] Arnol'd, V. I. Geometric Methods in the Theory of Ordinary Differential Equations, 2nd ed., Springer, New York (1988).
[13] Aubin, J.-P., Bayen, A., Bonneuil, N. and Saint-Pierre, P. Viability, Control and Games: Regulation of Complex Evolutionary Systems under Uncertainty and Viability Constraints, Springer-Verlag (2005).
[14] Aubin, J.-P. Mutational and Morphological Analysis: Tools for Shape Regulation and Morphogenesis, Birkhäuser (2000).
[15] Aubin, J.-P. Dynamic Economic Theory: A Viability Approach, Springer-Verlag (1997).
[16] Aubin, J.-P. Neural Networks and Qualitative Physics: A Viability Approach, Cambridge University Press (1996).
[17] Avery, J. Information Theory and Evolution, World Scientific, Singapore (2003).
[18] Babin, A. B. and Vishik, M. I. Regular attractors of semigroups and evolution equations, J. Math. Pures Appl., 62 (1983), 441–491.
[19] Badii, R. and Politi, A. Complexity. Hierarchical Structures and Scaling in Physics, Cambridge University Press (1997).
[20] Baigent, S. and Hou, Z. On the global stability of fixed points for Lotka–Volterra systems, Differential Equations and Dynamical Systems, 20, no. 1 (2012), 23–66.
[21] Barron, A. Universal approximation bounds for superpositions of a sigmoidal function, IEEE Trans. on Inf. Theory, 39 (1993), 930–945.
[22] Bartel, D. P. MicroRNAs: target recognition and regulatory functions, Cell, 136 (2009), 215–233.
[23] Bates, P. W., Lu, K. and Zeng, C. Existence and persistence of invariant manifolds for semiflows in Banach space, Memoirs Amer. Math. Soc., 645 (1998), 1–129.
[24] Bedau, M. Four puzzles about life, Artificial Life, 4 (1998), 125–140.

[25] Begon, M., Harper, J. L. and Townsend, C. R. Ecology, Vol. 2, Blackwell Scientific Publications, Oxford, London, Edinburgh (1986).
[26] Behe, M. Darwin's Black Box: The Biochemical Challenge to Evolution, The Free Press, New York, NY (1996).
[27] Bergman, A. and Siegal, M. L. Evolution capacitancy as emergent property of evolving networks, Nature, 424 (2003), 549–553.
[28] Bernard, C. An Introduction to the Study of Experimental Medicine, Macmillan and Co., Ltd. (1927).
[29] Bhattacharya, R. and Majumdar, M. Random Dynamical Systems. Theory and Applications, Cambridge University Press, Cambridge (2007).
[30] Blondel, V., Bournez, O., Koiran, P. and Tsitsiklis, J. The stability of saturated linear dynamical systems is undecidable, Journal of Computer and System Sciences, 62 (2001), 442–462.
[31] Blum, M. A machine-independent theory of the complexity of recursive functions, J. Assoc. Comput. Machin., 14 (1967), 322–336.
[32] Bohl, P. Über Differentialungleichungen, J. f. reine und angew. Math., 144 (1913), 284–318.
[33] Bonner, J. The Evolution of Complexity by Means of Natural Selection, Princeton University Press, Princeton, NJ (1988).
[34] Bornholdt, S. Modeling genetic networks and their evolution: a complex dynamical systems perspective, Biol. Chem., 382 (2001), 1289–1299.
[35] Boulier, F., Shiu, A. J., Sturm, T. and Weber, A. Symbolic Methods for Chemical Reaction Networks (Dagstuhl Seminar 12462), in: Dagstuhl Reports, Schloss Dagstuhl–Leibniz-Zentrum für Informatik, 2, no. 11 (2013).
[36] Braunstein, A., Mézard, M. and Zecchina, R. Survey propagation: an algorithm for satisfiability, Random Structures and Algorithms, 27, no. 2 (2005), 201–226, doi:10.1002/rsa.20057.
[37] Brenig, L. Complete factorization and analytic solution of generalized Lotka–Volterra equations, Physics Letters A, 133 (1988), 378–382.
[38] Brenig, L. and Goriely, A. Universal canonical forms for time-continuous dynamical systems, Physical Review A, 40 (1989), 4119–4122.
[39] Brocker, T. and Lander, K. Differentiable Germs and Catastrophes, Cambridge University Press (1975).
[40] Brunel, N., Carusi, F. and Fusi, S. Slow stochastic Hebbian learning of classes in recurrent neural networks, Network: Computation in Neural Systems, 9 (1998), 123–152.
[41] Brunovsky, P. and Poláčik, P. The Morse–Smale structure of a generic reaction-diffusion equation, J. of Diff. Equations, 135 (1997), 129–181.
[42] Cannon, W. B. The Wisdom of the Body, W. W. Norton Co., New York (1932).
[43] Carr, J. and Pego, R. L. Metastable patterns in solutions of u_t = ε²u_xx + f(u), Comm. Pure Appl. Math., 42 (1989), 523–576.
[44] Carroll, S. B., Grenier, J. K. and Weatherbee, S. D. From DNA to Diversity: Molecular Genetics and the Evolution of Animal Design, Blackwell Science (2001).
[45] Cartwright, M. L. and Littlewood, J. E. On nonlinear differential equations of the second order: I. The equation ÿ − k(1 − y²)ẏ + y = bλk cos(λt + a), k large, J. of the London Math. Society, 20 (1945), 180–189.
[46] Chaitin, G. J. On the length of programs for computing finite binary sequences, J. ACM, 13 (1966), 547–569.
[47] Chen, X. Lorenz equations, Part III: Existence of hyperbolic sets, preprint (1995).
[48] Chow, S. N. and Lu, K. Invariant manifolds for flows in Banach spaces, J. Diff. Equations, 74 (1988), 285–317.
[49] Cocco, S., Monasson, R., Montanari, A. and Semerjian, S. Approximate analysis of search algorithms with "physical" methods, chapter in: Phase Transitions and Algorithmic Complexity, G. Istrate, C. Moore and A. Percus (eds.) (2004).

[50] Constantin, P., Foias, C., Nicolaenko, B. and Temam, R. Integral Manifolds and Inertial Manifolds for Dissipative Differential Equations, Springer, New York (1989).
[51] Cook, S. The complexity of theorem proving procedures, in: Proceedings of the Third Annual ACM Symposium on Theory of Computing, 151–155 (1971).
[52] Cormen, T. H., Leiserson, C. E., Rivest, R. L. and Stein, C. Introduction to Algorithms, 2nd ed., MIT Press (2001).
[53] Cybenko, G. Approximation by superpositions of a sigmoidal function, Mathematics of Control, Signals and Systems, 2 (1989), 303–314.
[54] Daletskii, Ju. and Krein, S. Stability of Solutions of Differential Equations in Banach Spaces, Trans. Math. Monographs 43, Am. Math. Soc., Providence, RI (1974).
[55] Dancer, E. N. and Poláčik, P. Realization of vector fields and dynamics of spatially homogeneous parabolic equations, Memoirs of Amer. Math. Society, 140, no. 668 (1999).
[56] Dantsin, E., Hirsch, E. A., Ivanov, S. and Vsemirnov, M. Algorithms for SAT and upper bounds on their complexity, Journal of Mathem. Sciences, 118, no. 2 (2003), 4948–4962.
[57] Darwin, C. On the Origin of Species, 1st ed., John Murray, London (1859).
[58] Darwinism and Philosophy, Hösle, V. and Illies, Ch. (eds.), University of Notre Dame, Notre Dame, Indiana (2005).
[59] Dawkins, R. The Blind Watchmaker, Penguin, New York (1986).
[60] Dawkins, R. and Krebs, J. R. Arms races between and within species, Proceedings of the Royal Society of London, Series B, 205 (1979), 489–511.
[61] Debussche, A. and Temam, R. Some new generalizations of inertial manifolds, Discrete and Continuous Dynamical Systems, 2 (1996), 543–558.
[62] Deroulers, C. and Monasson, R. Criticality and universality in the unit-propagation search rule, Eur. Phys. Journal, B49 (2006), 339.
[63] Deroulers, C. and Monasson, R. Critical behaviour of combinatorial search algorithms, and the unitary-propagation universality class, Europhys. Lett., 68 (2004), 153.
[64] Durand, B. and Zvonkine, A. Kolmogorov complexity, in: E. Charpentier, A. Lesne and N. Nikolski (eds.), Kolmogorov's Heritage in Mathematics, Springer, Berlin, 281–300 (2007).
[65] Dynamical Systems with Hyperbolic Behaviour, D. V. Anosov (ed.), Dynamical Systems 9, Encyclopedia of Mathematical Sciences, Vol. 66, translated from Russian, Springer, Berlin, Heidelberg, New York (1995).
[66] Eigen, M. and Schuster, P. The Hypercycle – A Principle of Natural Self-Organization, Springer, Berlin, Heidelberg, New York (1979).
[67] Edwards, R. Approximation of neural network dynamics by reaction-diffusion equations, Mathematical Methods in Applied Sciences, 19 (1996), 651–677.
[68] Edwards, R., Siegelmann, H. T., Aziza, K. and Glass, L. Symbolic dynamics and computation in model gene networks, Chaos, 11 (2001), 160–169.
[69] EPAPS Document no. E-PRLTAO-103-041944 for Whitham's principle.
[70] Erdös, P. and Rényi, A. On the evolution of random graphs, Publ. Math. Inst. Hungarian Academy of Sciences, 5 (1960), 17–61.
[71] Falcioni, M., Loreto, V. and Vulpiani, A. Kolmogorov's legacy about entropy, chaos and complexity, in: A. Vulpiani and R. Livi (eds.), The Kolmogorov Legacy in Physics, Springer, Berlin, 85–108 (2003).
[72] Fenichel, N. Persistence and smoothness of invariant manifolds for flows, Indiana Univ. Math. J., 21 (1971), 193–225.
[73] Ficici, S. G. and Pollack, J. B. Challenges in coevolutionary learning: arms-race dynamics, open-endedness, and mediocre stable states, in: Artificial Life VI: Proceedings of the Sixth International Workshop on the Synthesis and Simulation of Living Systems, 238–247, MIT Press, Cambridge, MA (1998).

[74] Fife, P. C. and McLeod, J. B. The approach of solutions of nonlinear diffusion equations to travelling front solutions, Arch. Rat. Mech. Anal., 65 (1977), 335–361.
[75] Figueiredo, A., Rocha Filho, T. M. and Brenig, L. Algebraic structures and invariant manifolds of differential systems, J. Math. Physics, 39, no. 5 (1998), 2929–2946.
[76] Fisher, R. A. The wave of advance of advantageous genes, Ann. Eugenics, 7 (1937), 353–369.
[77] Foias, C. and Prodi, G. Sur le comportement global des solutions non stationnaires des équations de Navier–Stokes en dimension 2, Rend. Semin. Math. Univ. di Padova, 39 (1967), 1–34.
[78] Foias, C., Sell, G. and Temam, R. Inertial manifolds for nonlinear evolutionary equations, J. Diff. Equations, 73 (1988), 309–353.
[79] Freedman, H. I. and Waltman, P. Mathematical analysis of some three species food chain models, Math. Biosci., 33 (1977), 257–273.
[80] Friedgut, E. Sharp thresholds of graph properties, and the k-SAT problem, J. Amer. Math. Soc., 12 (1999), 1017–1054.
[81] Funahashi, K. and Nakamura, Y. Approximation of dynamical systems by continuous time recurrent neural networks, Neural Networks, 6 (1993), 801–806.
[82] Gabrielov, A. and Vorobjov, N. Complexity of computations with Pfaffian and Noetherian functions, in: Normal Forms, Bifurcations and Finiteness Problems in Differential Equations, 211–250, Kluwer (2004).
[83] Garey, M. R. and Johnson, D. S. Computers and Intractability: A Guide to the Theory of NP-Completeness, W. H. Freeman and Co., New York, NY, USA (1979), ISBN 071671044.
[84] Gichman, I. I. and Scorochod, A. B. Introduction to the Theory of Random Processes (in Russian), Nauka, Moscow (1977).
[85] Gingerich, P. Rates of evolution: effects of time and temporal scaling, Science, 222 (1983), 159–161.
[86] Glass, L. and Kauffman, S. The logical analysis of continuous, nonlinear biochemical control networks, J. Theor. Biology, 34 (1973), 103–129.
[87] Glass, L. Combinatorial and topological methods in nonlinear chemical kinetics, J. Chem. Phys., 63, no. 4 (1975), 1325–1335.
[88] Gorban, A. N. and Shahzad, M. The Michaelis–Menten–Stueckelberg theorem, Entropy, 13 (2011), 966–1019.
[89] Gordon, P. V. and Vakulenko, S. A. Merging and interacting wave fronts for reaction-diffusion equations, Arch. Mech. (Arch. Mech. Stos.), 51, no. 5 (1999), 547–558.
[90] Gordon, P. V. and Vakulenko, S. A. Periodic kinks in reaction-diffusion systems, J. Phys. A: Math. Gen., 31, no. 3 (1998), L67–L70.
[91] Gould, S. The evolution of life on earth, Scientific American, 271 (1994), 84–91.
[92] Graca, D. S., Campagnolo, M. L. and Buescu, J. Computability with polynomial differential equations, Advances in Applied Mathematics, 40 (2008), 330–349.
[93] Grigoriev, D. and Vorobjov, N. Complexity lower bounds for computation trees with elementary transcendental function gates, Theoret. Comput. Sci., 157 (1996), 185–214.
[94] Grigoriev, D. Complexity of deciding Tarski algebra, J. Symb. Computations, 5 (1988), 65–108.
[95] Grigoriev, D. Application of separability and independence notions for proving lower bounds of circuit complexity, J. Soviet Math., 14 (1980), 1450–1456.
[96] Grigoriev, D. Deviation theorems for solutions of linear ordinary differential equations and applications to parallel complexity of sigmoids, St. Petersburg Math. J., 6 (1995), 89–106.
[97] Gromov, M. and Carbone, A. Mathematical slices of molecular biology, preprint IHES/M/01/03 (2001).
[98] Gross, M. and Hohenberg, P. Pattern formation outside equilibrium, Review Modern Physics, 65 (1993), 851–1112.

[99] Guckenheimer, J. and Holmes, P. Nonlinear Oscillations, Dynamical Systems, and Bifurcations of Vector Fields, Springer, New York (1981).
[100] Haken, H. Synergetics, An Introduction, 3rd ed., Springer, Berlin, Heidelberg, New York (1983).
[101] Hale, J. K. Asymptotic Behavior of Dissipative Systems, American Mathematical Society, Providence (1988).
[102] Hale, J. K. Stability and gradient dynamical systems, Rev. Mat. Comput., 17 (2004), 7–57.
[103] Hale, J. K. and Sakamoto, K. Shadow systems and attractors in reaction-diffusion equations, Appl. Anal., 32 (1989), 287–304.
[104] Halmschlager, A., Szenthe, L. and Tóth, J. Invariants of kinetic differential equations, Electronic Journal of Qualitative Theory of Differential Equations, Proc. 7th Coll. QTDE, no. 14, 1–14 (2004).
[105] Hardin, G. The competitive exclusion principle, Science, 131 (1960), 1291–1297.
[106] Hartwell, L. H., Hopfield, J. J., Leibler, S. and Murray, A. W. From molecular to modular cell biology, Nature (London), 402 (1999), C47–C52.
[107] Hecht-Nielsen, R. Kolmogorov's mapping neural network existence theorem, IEEE International Conference on Neural Networks, San Diego: SOS Printing, 11–14 (1987).
[108] Henry, D. Geometric Theory of Semilinear Parabolic Equations, Lecture Notes in Mathematics, Vol. 840, Springer, Berlin (1981).
[109] Heylighen, F., Bollen, J. and Riegler, A. (eds.) The Evolution of Complexity, Kluwer Academic, Dordrecht (1999).
[110] Hirsch, M. Differential Topology, Springer-Verlag, New York, Heidelberg, Berlin (1976).
[111] Hirsch, M. Differential equations and convergence almost everywhere in strongly monotone flows, Contemp. Math., 17 (1983), 267–285.
[112] Hirsch, M. Systems of differential equations that are competitive or cooperative II: convergence almost everywhere, SIAM J. Math. Anal., 16 (1985), 423–439.
[113] Hirsch, M. Stability and convergence in strongly monotone dynamical systems, J. Reine Angew. Math., 383 (1988), 1–53.
[114] Hirsch, M. and Smith, H. L. Monotone dynamical systems, in: A. Canada, P. Drabek and A. Fonda (eds.), Handbook of Differential Equations, Ordinary Differential Equations, second volume, Elsevier, Amsterdam (2005).
[115] Hofbauer, J. and Sigmund, K. Evolutionary Games and Population Dynamics, Cambridge University Press (1988).
[116] Hooper, P. K. The undecidability of the Turing machine immortality problem, J. of Symbolic Logic, 31 (1966), 219–234.
[117] Hopcroft, J. E. and Ullman, J. D. Introduction to Automata Theory, Languages and Computation, Addison-Wesley (1979).
[118] Hopfield, J. J. Neural networks and physical systems with emergent collective computational abilities, Proc. Natl. Acad. Sci. USA, 79 (1982), 2554–2558.
[119] Hornik, K., Stinchcombe, M. and White, H. Multilayer feedforward networks are universal approximators, Neural Networks, 2 (1989), 359–366.
[120] Horsthemke, W. and Lefever, R. Noise-Induced Transitions, Springer-Verlag, Berlin (1984).
[121] Houchmandzadeh, B., Wieschaus, E. and Leibler, S. Establishment of developmental precision and proportions in the early Drosophila embryo, Nature, 415 (2002), 798–802.
[122] Howard, M. and ten Wolde, P. R. Phys. Rev. Letters, 95 (2005), 208103.
[123] Howard, M. and ten Wolde, P. R. Finding the center reliably: robust patterns of developmental gene expression, arXiv: q-bio.TO/0511011 v1 (2005).
[124] Huisman, J. and Weissing, F. J. Biodiversity of plankton by species oscillations and chaos, Nature, 402 (1999), 407–410.

[125] Huisman, J. and Weissing, F. J. Fundamental unpredictability in multispecies competition, Amer. Naturalist, 157 (2001), 488–494.
[126] Hutchinson, G. E. The paradox of the plankton, Am. Nat., 95 (1961), 137–145.
[127] Hutchinson, J. E. Fractals and self similarity, Indiana Univ. Math. J., 30 (1981), 713–747.
[128] Ilyashenko, Yu. and Weigu Li, Nonlocal Bifurcations, Amer. Math. Soc., Mathematical Surveys and Monographs, no. 66 (1999).
[129] Il'yashenko, Yu. Weakly contracting systems and attractors of Galerkin approximations for the Navier–Stokes equation on a two-dimensional torus, Uspechi Mechanics, 1 (1982), 31–63.
[130] Ilyashenko, Yu. Minimal attractors, in: EQUADIFF 2003, 421–428, World Sci. Publ., Hackensack, NJ (2005).
[131] Jaeger, J., Blagov, M., Kosman, D., Kozlov, K. N., Manu, Myasnikova, E., Surkova, S., Vanario-Alonso, C. E., Samsonova, M., Sharp, D. H. and Reinitz, J. Dynamical analysis of regulatory interactions in the gap gene system of Drosophila melanogaster, Genetics, 167 (2004), 1721–1737.
[132] Jaynes, E. T. "Works on the Foundations of Statistical Physics" by N. S. Krylov, J. Am. Stat. Assn., 76 (1981), 742.
[133] Jeong, H., Tombor, B., Albert, R., Oltvai, Z. N. and Barabási, A. L. The large-scale organisation of metabolic networks, Nature (London), 407 (2000), 651–654.
[134] Jeong, H., Mason, S. P., Barabási, A. L. and Oltvai, Z. N. Lethality and centrality in protein networks, Nature (London), 411 (2001), 41–42.
[135] Katok, A. B. and Hasselblatt, B. Introduction to the Modern Theory of Dynamical Systems, Cambridge University Press, Encyclopedia of Mathematics and Its Applications, Vol. 54 (1995).
[136] Kauffman, S. A. Metabolic stability and epigenesis in randomly constructed genetic nets, J. Theor. Biology, 22 (1969), 437–467.
[137] Kening Lu, Qiudong Wang and Lai-Sang Young, Strange attractors for periodically forced parabolic equations, Memoirs of the American Math. Society, 224, no. 1054 (2013).
[138] Kerner, B. S. and Osipov, V. V. Autosolitons: A New Approach to Problems of Self-Organization and Turbulence (Fundamental Theories of Physics), Kluwer, Dordrecht (1994).
[139] Khovanskii, A. Fewnomials, Translations of Mathem. Monographs, Amer. Math. Soc., 88 (1991).
[140] Kinyon, M. and Sagle, A. A. Quadratic dynamical systems and algebras, J. Diff. Eq., 117, no. 1 (1995), 67–126.
[141] Kirkpatrick, S. and Selman, B. Critical behaviour in the satisfiability of random Boolean expressions, Science, 264 (1994), 1297–1301.
[142] Koiran, P. and Moore, C. Closed-form analytic maps in one and two dimensions can simulate Turing machines, Theoretical Computer Science, 210, no. 1 (1999), 217–223.
[143] Kolchanov, N. A., Nedosekina, N. A., Ananko, E. A. et al. GeneNet database: description and modeling of gene networks, In Silico Biol., 2, no. 2 (2002), 97–110.
[144] Kolchin, V. F. Random Graphs (in Russian), FizMatLit, Moscow (2004).
[145] Kolmogorov, A. N. Three approaches to the quantitative definition of information, Prob. Inf. Transm., 1 (1965), 1–7.
[146] Kolmogorov, A. N., Petrovskii, G. I. and Piskunov, N. S. A study of the equation of diffusion with increase in the quantity of matter, with application to a biological problem, Bull. Moscow Univ., Sec. A, no. 1, 1 (1937).
[147] Kondrashov, A. S. Deleterious mutations and the evolution of sexual reproduction, Nature, 336 (1988), 435–440.
[148] Korzuchin, M. D. Kolebatelnie processi v biol. i chimich. sistemach, Ed. G. M. Frank, Moscow, p. 231 (1967) (in Russian).

Bibliography

Kozlov, V. and Vakulenko, S. On chaos in Lotka–Volterra systems: an analytical approach, Nonlinearity, 26 (2013), 2299–2314.
Kraut, R. and Levine, M. Mutually repressive interactions between the gap genes giant and Krüppel define the middle body regions in the Drosophila embryo, Development, 111 (1991), 611.
Krylov, N. S. Works on the Foundations of Statistical Physics, Princeton University Press, Princeton, New Jersey (1979).
Kuramoto, Y. Chemical Oscillations, Waves, and Turbulence, Springer (1984).
Kurland, C. G., Canback, B. and Berg, O. G. “Horizontal gene transfer. A critical view,” Proc. Natl. Acad. Sci. USA, 100 (2003), 9658–9662.
Kuznetsov, N. V. and Leonov, G. A. Lyapunov quantities and limit cycles of two-dimensional dynamical systems (chapter in “Dynamics and Control of Hybrid Mechanical Systems”), World Scientific, 7–28 (2010).
Ladyzhenskaya, O. A. Finding minimal global attractors for the Navier–Stokes equations and other partial differential equations, Uspekhi Mat. Nauk, 42 (1987), 25–60.
Lehninger, A. L., Nelson, D. L. and Cox, M. M. Principles of Biochemistry, 2nd ed., Worth, New York (1993).
Lempel, A. and Ziv, J. On the complexity of finite sequences. IEEE Trans. Inf. Th., 22 (1976), 75–81.
Lesne, A. Complex networks: from graph theory to biology. Letters in Math. Phys., 78 (2006), 235–262.
Lesne, A. Shannon entropy: a rigorous mathematical notion at the crossroads between probability, information theory, dynamical systems and statistical physics. Preprint IHES M-11-04.
Lesne, A. and Benecke, A. Feature context-dependency and complexity reduction in probability landscapes for integrative genomics. Theor. Biol. Med. Mod., 5 (2008), 21.
Levin, B. Ya. Lectures on entire functions, Translations of Mathematical Monographs, Amer. Math. Society (1996).
Levy, S. and Siegal, M. L. Network Hubs Buffer Environmental Variations in Saccharomyces cerevisiae, PLoS Biol., 6 (11) (2008).
Li, T. and Yorke, J. A. Period three implies chaos. Amer. Math. Monthly, 82 (1975), 985–992.
Lobry, C. Une propriété générique des couples de champs de vecteurs, Czechoslovak Mathematical Journal, 22, no. 97 (1972), 230–237.
Lorenz, E. N. Deterministic nonperiodic flow. Journal of Atmospheric Sciences, 20 (1963), 130–141.
Lovelock, J. The Vanishing Face of Gaia: A Final Warning: Enjoy It While You Can. Allen Lane (2009), ISBN 978-1-84614-185-0.
Lunardi, A. Analytic Semigroups and Optimal Regularity in Parabolic Problems, Birkhäuser, Basel–Boston–Berlin (1995).
Mañé, R. Reduction of semilinear parabolic equations to finite dimensional C^1 flows, Geometry and Topology, Lecture Notes in Mathematics, no. 597, Springer-Verlag, New York, 361–378 (1977).
Manu, Surkova, S., Spirov, A. V., Gursky, V. V., Janssens, H., Radulescu, O., Samsonova, M., Sharp, D. H. and Reinitz, J. Canalization of Gene Expression in the Drosophila Blastoderm by Gap Gene Cross Regulation, PLoS Biol., 7 (2009), e49.
Marion, M. Approximate inertial manifolds for reaction-diffusion equations in high space dimension, J. Dyn. Diff. Equations, 1 (1989), 245–267.
Markov, V. A., Anisimov, V. A. and Korotyaev, A. V. Relation between Genome Size and Organismal Complexity in the Lineage Leading from Prokaryotes to Mammals, Paleontological Journal, 44, no. 4 (2010), 363–373.
Massey, W. S. A basic course in algebraic topology, Springer (1991).


Matano, H. Convergence of solutions of one-dimensional semilinear parabolic equations, J. Math. Kyoto Univ., 18 (1978), 221–227.
May, R. Will a large complex system be stable?, Nature (London), 238 (1972), 413–414.
Maynard Smith, J. and Szathmary, E. The major transitions in evolution. Oxford University Press, Oxford, UK (1995).
McShea, D. Perspective: Metazoan complexity and evolution: Is there a trend? Evolution, 50 (1996), 477–492.
McShea, D. The evolution of complexity without natural selection, a possible large-scale trend of the fourth kind. Paleobiology, 31 (2005), 146–156.
Meinhardt, H. Models of Biological Pattern Formation, Academic Press (1982).
Meinhardt, H. The Algorithmic Beauty of Sea Shells, 2nd enlarged edition, Springer, Heidelberg, New York (1998).
Melkikh, A. V. Deterministic mechanism of molecular evolution. Proceedings of the International Moscow Conference on Computational Molecular Biology, 227–228 (2005).
Melkikh, A. V. Could life evolve by random mutations? Biofizika, 50 (2005), 959.
Mendoza, L. and Alvarez-Buylla, E. R. Dynamics of Genetic Regulatory Networks for Arabidopsis thaliana Flower Morphogenesis, J. Theor. Biol., 193 (1998), 307–319.
Mertens, S., Mézard, M. and Zecchina, R. Threshold values of Random K-SAT from the cavity method, Random Structures and Algorithms, 28 (2006), 340–373.
Mézard, M. and Zecchina, R. Random K-satisfiability: from an analytic solution to a new efficient algorithm. Phys. Rev. E, 66 (2002), 056126.
Miconi, T. Evolution and Complexity: The Double-Edged Sword, Artificial Life, 14, no. 3 (2008), 325–344.
Milnor, J. On the concept of attractor: Correction and remarks, Comm. Math. Phys., 102, no. 3 (1985), 517–519.
Ming Li and Vitanyi, P. An Introduction to Kolmogorov Complexity and Its Applications, 2nd edition, Springer-Verlag, New York (1997).
Mjolsness, E., Sharp, D. H. and Reinitz, J. A Connectionist Model of Development, J. Theor. Biol., 152 (1991), 429–453.
Molotkov, I. A. and Vakulenko, S. A. Autowave propagation for general reaction-diffusion systems. Wave Motion, 17, no. 3 (1993), 255–266.
Molotkov, I. A., Vakulenko, S. A. Sosredotochennye nelineinye volny [Localized nonlinear waves], Leningrad University Publisher, Leningrad, 240 pp. (1988), ISBN 5-288-00057-83 (in Russian).
Molotkov, I. A., Vakulenko, S. A., Bisyarin, M. A. Nelineinye lokalizovannye volnovye protsessy [Nonlinear localized wave processes], Yanus-K, Moscow (1999) (in Russian).
Monod, J., Jacob, F. General conclusions: teleonomic mechanisms in cellular metabolism, growth and differentiation, Cold Spring Harb. Symp. Quant. Biol., 26 (1961), 389–401.
Murray, J. D. Mathematical Biology, Springer, New York, Berlin, Heidelberg (1993).
Newhouse, S., Ruelle, D. and Takens, F. Occurrence of strange Axiom A attractors near quasiperiodic flows, Comm. Math. Phys., 64 (1978), 35–40.
Nguyen, D. and Widrow, B. The truck backer-upper: An example of self-learning in neural networks. In W. T. Miller, R. Sutton and P. Werbos, Eds., Neural Networks for Robotics and Control, Cambridge, MA: MIT Press (1990).
Nicolis, G. and Prigogine, I. Self-organization in nonequilibrium systems, Wiley, New York (1977).
Nitecki, Z. Differentiable Dynamics, MIT Press, Cambridge (1971).
Palis, J. and Yoccoz, J. C. Fers à cheval non uniformément hyperboliques engendrés par une bifurcation homocline et densité nulle des attracteurs. Comptes Rendus Acad. Sci. Paris Sér. Math., 333 (2001), 867–871.


Papadimitriou, C. H. and Steiglitz, K. Combinatorial Optimization: Algorithms and Complexity, Prentice-Hall, Englewood Cliffs, New Jersey (1982).
Pearl, J. “Reverend Bayes on inference engines: A distributed hierarchical approach”. Proceedings of the Second National Conference on Artificial Intelligence, AAAI-82, Pittsburgh, PA. Menlo Park, California: AAAI Press, 133–136 (1982). https://www.aaai.org/Papers/AAAI/1982/AAAI82--032.pdf.
Perkins, T. J., Hallett, M., Glass, L. Dynamical properties of model gene networks and implications for the inverse problem, Biosystems, 84 (2006), 115–123.
Perkins, T. J., Jaeger, J., Reinitz, J., Glass, L. Reverse Engineering the Gap Gene Network of Drosophila melanogaster, PLoS Computational Biology, 2 (2006), 0417–0428.
Peschel, M. and Mende, W. The Predator-Prey Model: Do We Live in a Volterra World?, Springer, Wien–New York (1986).
Pesin, Y. Characteristic Lyapunov exponents and smooth ergodic theory, Russ. Math. Surveys, 32 (1977), 55–114.
Pesin, Y. Dimension theory in dynamical systems. Contemporary views and applications, University of Chicago Press, Chicago (1997).
Petrov, N. N. Lokalnaya upravlyaemost avtonomnych sistem [Local controllability of autonomous systems], Differentsialnye uravneniya, 4, no. 7 (1968), 1218–1232.
Pilyugin, S. Yu. Spaces of dynamical systems. Moscow–Izhevsk, “Regular and Chaotic Dynamics” (2008).
Pliss, V. Integral manifolds of periodical systems. Nauka (1970).
Pliss, V. A. and Sell, G. R. Perturbations of normally hyperbolic invariant manifolds with applications to the Navier–Stokes equations, J. Differential Equations, 169 (2001), 396–492.
Plopper, G. The extracellular matrix and cell adhesion. In: Cells (eds Lewin, B., Cassimeris, L., Lingappa, V., Plopper, G.), Sudbury, MA: Jones and Bartlett (2007), ISBN 0-7637-3905-7.
Poláčik, P. Parabolic equations: Asymptotic behaviour and dynamics on invariant manifolds, Ch. 16, pp. 835–883, in: Handbook of Dynamical Systems, Vol. 2, edited by B. Fiedler (2002).
Poláčik, P. Realization of any finite jet in a scalar semilinear parabolic equation on the ball in R^2, Annali Scuola Norm. Pisa, 17 (1991), 83–102.
Poláčik, P. Complicated dynamics in scalar semilinear parabolic equations in higher space dimensions, J. of Diff. Eq., 89 (1991), 244–271.
Poláčik, P. High-dimensional ω-limit sets and chaos in scalar parabolic equations, J. Diff. Equations, 119 (1995), 24–53.
Poláčik, P. and Terescak, I. Convergence to cycles as a typical asymptotic behavior in smooth discrete-time strongly monotone dynamical systems, Arch. Rat. Mech. Anal., 116 (1991), 339–360.
Poláčik, P. and Terescak, I. Exponential separation and invariant bundles for maps in ordered Banach spaces with applications to parabolic equations, J. Dynamics Diff. Equations, 5 (1993), 279–303.
Poláčik, P. and Rybakowski, K. P. Imbedding vector fields in scalar parabolic Dirichlet BVPs, Ann. Scuola Norm. Sup. Pisa, XXI (1995), 737–749.
Prigogine, I. Thermodynamics of irreversible processes, Interscience Publishers, New York (1967).
Prizzi, M. Complicated dynamics in semilinear parabolic equations, Ph.D. thesis, ISAS, Trieste (1997).
Prizzi, M. and Rybakowski, K. P. Complicated dynamics of parabolic equations with simple gradient dependence, Trans. Amer. Math. Soc., 350 (1998), 3119–3130.


Proceedings of the WCB05 Workshop on Constraint Based Methods for Bioinformatics, eds. R. Backofen and A. Dovier, Spain (2005).
Pross, A. The driving force for life’s emergence: kinetic and thermodynamic considerations, J. Theor. Biol., 220 (2003), 393–406.
Radulescu, O., Olmsted, P. D. and Lu, C.-Y. D. Rheol. Acta, 36 (1999), 606.
Ravasz, E., Somera, A. L., Mongru, D. A., Oltvai, Z. N., Barabási, A.-L. Hierarchical Organization of Modularity in Metabolic Networks, Science, 297 (2002), 1551–1555.
Read, W. T. and Shockley, W. Dislocation Model of Crystal Grain Boundaries, Phys. Rev., 78 (1950), 275–289.
Reinitz, J. and Sharp, D. H. Mechanism of formation of eve stripes, Mechanisms of Development, 49 (1995), 133–158.
Ridley, M. Evolution, 2nd ed., Blackwell Scientific Publications Ltd, Oxford (1996).
Robinson, C. Dynamical Systems: Stability, Symbolic Dynamics, and Chaos, CRC Press (1999).
Roques, L. and Chekroun, M. Probing chaos and biodiversity in a simple competition model. Ecological Complexity, 8, no. 1 (2011), 98–104.
Rosenberg, S. M. Evolving responsively: adaptive mutation. Nature Reviews Genetics, 2 (2001), 505–515.
Rosslenbroich, B. The notion of progress in evolutionary biology – the unresolved problem and an empirical suggestion. Biology and Philosophy, 21 (2006), 41–70.
Ruelle, D. Elements of differentiable dynamics and bifurcation theory, Acad. Press, Boston (1989).
Ruelle, D. A measure associated with Axiom A attractors, Amer. J. Math., 98 (1976), 619–654.
Ruelle, D. Ergodic theory of differentiable dynamical systems, Publ. Math., Inst. Hautes Etud. Sci., 50 (1979), 27–58.
Ruelle, D. and Takens, F. On the nature of turbulence, Comm. Math. Phys., 20 (1971), 167–192.
Rybakowski, K. P. Realization of arbitrary vector fields on center manifolds of parabolic Dirichlet BVPs, J. Differential Equations, 114 (1994), 199–221.
Salazar-Ciudad, I., Garcia-Fernandez, J. and Solé, R. V. Gene Networks Capable of Pattern Formation: From Induction to Reaction-Diffusion, J. Theor. Biology, 205 (2000), 587–603.
Sanford, J. C. Genetic Entropy and The Mystery of the Genome, Ivan Press, Lima, New York (2005).
Savage, J. E. Models of Computation: Exploring the Power of Computing, Addison-Wesley (1997).
van der Schaft, A. and Schumacher, H. An Introduction to Hybrid Dynamical Systems (Lecture Notes in Control and Information Sciences, 251) (1999).
Schreiber, S. Criteria for C^r robust permanence, J. Differential Equations, 162 (2000), 400–426.
Schrödinger, E. What is life? The physical aspect of the living cell, Cambridge University Press (1944).
Schuster, P., Sigmund, K. and Wolff, R. On ω-limits for competition between three species, SIAM J. Appl. Math., 37 (1979), 49–55.
Schuster, P., Sigmund, K. and Wolff, R. Dynamical systems under constant organization. Part 3: Cooperative and competitive behaviour of hypercycles, J. Differential Equations, 32 (1979), 357–368.
Seitz, S., Alava, M. and Orponen, P. Focused local search for random 3-satisfiability. J. Statistical Mechanics, 27 (2005), P06006.
Selman, B., Levesque, H. and Mitchell, D. A New Method for Solving Hard Satisfiability Problems, Proceedings AAAI-92, San Jose, 440–446 (1992).
Sharkovsky, A. Coexistence of cycles of a continuous map of the line into itself, Ukrainskii Mat. Zhurnal, 16 (1964), 61–71.


Sharov, A. Genome increase as a clock for the origin and evolution of life, Biology Direct, 1 (2006), 17.
Shimoni, Y., Friedlander, G., Hetzroni, G. et al. Molec. Syst. Biol., 3 (2007), 138.
Siegelmann, H. T. and Sontag, E. D. Turing computability with neural networks. Appl. Math. Lett., 4 (1991), 6.
Siegelmann, H. T. and Sontag, E. D. On the computational power of neural nets. J. Comp. Syst. Sci., 50 (1995), 132–150.
Simon, L. Asymptotics for a class of nonlinear evolution equations, with applications to geometric problems, Ann. of Math., 118 (1983), 525–571.
Sinai, Ya. G. Development of Krylov’s ideas, in: N. S. Krylov, Works on the Foundations of Statistical Physics, Princeton University Press, Princeton, New Jersey (1979).
Smale, S. The Mathematics of Time, Springer, New York (1980).
Smale, S. On the differential equations of species in competition, J. Math. Biol., 3 (1976), 5–7.
Smith, H. L. and Thieme, H. R. Convergence for strongly order preserving semiflows, SIAM J. Math. Anal., 22 (1991), 1081–1101.
Smolen, P., Baxter, D. and Byrne, J. H. Mathematical modelling of gene networks, review in Neuron, 25 (2000), 247–292.
Smoller, J. Shock Waves and Reaction-Diffusion Equations, Springer-Verlag, New York (1983).
Solomonoff, R. Complexity-based induction systems: comparisons and convergence theorems. IEEE Trans. Inf. Th., 24 (1978), 422–432.
Stein, E. M. Singular integrals and differentiability properties of functions, Princeton University Press, Princeton, New Jersey (1970).
Stuart, D. M. A. Perturbation theory for kinks, Communications in Mathematical Physics, 149 (1992), 433–462.
Svirezhev, Ju. M. and Pasekov, V. P. Foundations of theoretical genetics. Nauka, Moscow (1982).
Takeuchi, Y. Global dynamical properties of Lotka–Volterra systems, World Scientific, Singapore (1996).
Talagrand, M. Spin Glasses: A Challenge for Mathematicians, Springer-Verlag (2003).
Temam, R. Infinite-Dimensional Dynamical Systems in Mechanics and Physics, Springer-Verlag, New York (1998).
Tereščák, I. Dynamics of C^1 strongly monotone discrete-time dynamical systems. Preprint, Comenius University, Bratislava (1994).
Thieffry, D. and Thomas, R. Dynamical behaviour of biological regulatory networks, II. Immunity control in bacteriophage lambda, Bull. Math. Biology, 57 (1995), 277–295.
Thom, R. Stabilité structurelle et morphogénèse, New York, Benjamin (1972).
Tilman, D. Resource competition between plankton algae: an experimental and theoretical approach, Ecology, 58 (1977), 338–348.
Tilman, D. Resource competition and community structure. Princeton University Press, Princeton, N. J. (1982).
Tucker, W. A Rigorous ODE Solver and Smale’s 14th Problem, Found. Comp. Math., 2 (2002), 53–117. http://www.math.uu.se/~warwick/main/rodes.html.
Turing, A. M. The chemical basis of morphogenesis, Phil. Trans. Roy. Soc. B, 237 (1952), 37–72.
Vakulenko, S. A. The oscillating wave fronts, Nonlinear Analysis TMA, 19 (1992), 1033–1046.
Vakulenko, S. A. Justification of asymptotic solutions for one-dimensional nonlinear parabolic equations. Mat. Zametki, 52, no. 3 (1992), 10–16, 157 (in Russian); translation in Math. Notes, 52, no. 3/4 (1992), 875–880.
Vakulenko, S. A. Existence of chemical waves with a complex motion of the front. Zh. Vychisl. Mat. i Mat. Fiz., 31, no. 5 (1991), 735–744 (in Russian); translation in Comput. Math. Math. Phys., 31, no. 5 (1992), 68–76.


Vakulenko, S. A. A system of coupled oscillators can have arbitrary prescribed attractors. J. Phys. A, 27, no. 7 (1994), 2335–2349.
Vakulenko, S. A. and Gordon, P. V. Neural networks with prescribed large time behaviour. J. Phys. A: Math. Gen., 31, no. 47 (1998), 9555–9570.
Vakulenko, S. A. and Gordon, P. V. Propagating and scattering kinks in inhomogeneous nonlinear media, Theoretical and Mathematical Physics, 112 (1997), 269.
Vakulenko, S. A. and Cherkai, M. V. Destruction of dissipative structures under random action, Theoretical and Mathematical Physics, 165, no. 1 (2010), 1387–139.
Vakulenko, S. A. Dissipative systems generating any structurally stable chaos, Advances in Diff. Equations, 5 (2000), 1139–1178.
Vakoulenko, S. Complexité dynamique de réseaux de Hopfield, C. R. Acad. Sci. Paris Sér. I Math., t. 335 (2002).
Vakulenko, S. and Grigoriev, D. Complexity of patterns generated by genetic circuits and Pfaffian functions, Preprint IHES, M/03/23 (2003).
Vakulenko, S. and Grigoriev, D. Stable growth of complex systems, Proceedings of the Fifth Workshop on Simulation, 705–709 (2005).
Vakulenko, S. and Grigoriev, D. Complexity of gene circuits, Pfaffian functions and the morphogenesis problem, C. R. Acad. Sci., Ser. I, 337 (2003), 721–724.
Vakulenko, S. and Genieys, S. Pattern programming by genetic networks, in: Patterns and Waves, Collection of papers, eds. A. Abramian, S. Vakulenko and V. Volpert, St. Petersburg, 346–366 (2003).
Vakulenko, S. and Genieys, S. Patterning by genetic networks, Mathematical Methods in Applied Sciences, 29 (2005), 173–190.
Vakulenko, S. A. Reaction-diffusion systems with prescribed large time behaviour. Ann. Inst. H. Poincaré, 66, no. 4 (1997), 373–410.
Vakulenko, S. Neural networks and reaction-diffusion systems with prescribed dynamics. C. R. Acad. Sci. Paris Sér. I Math., 324, no. 5 (1997), 509–513.
Vakulenko, S. and Grigoriev, D. Evolution in random environment and structural stability, Zapiski seminarov POMI RAN, 325 (2005), 28–60.
Vakulenko, S. and Radulescu, O. Flexible and Robust Patterning by Centralized Gene Networks, Fundamenta Informaticae, 119 (2012), 1–25. DOI 10.3233/FI-2012-712, IOS Press.
Vakulenko, S. and Radulescu, O. Flexible and robust networks, Journal of Bioinformatics and Computational Biology, 10, no. 2 (2012), 1241011–27.
Vakulenko, S. and Grigoriev, D. Complexity and stable evolution of circuits, in: Proofs, Categories and Computations. Essays in honor of Grigori Mints, edited by Solomon Feferman and Wilfried Sieg, College Publications, Tributes Series, Dov Gabbay (2010).
Vakulenko, S. and Grigoriev, D. Instability, Evolution and Morphogenesis, in: Progress of Theor. Biology, 55–100, Nova Publishers (2008).
Vakulenko, S., Radulescu, O., Manu and Reinitz, J. Size Regulation in the Segmentation of Drosophila: Interacting Interfaces between Localized Domains of Gene Expression Ensure Robust Spatial Patterning, Phys. Review Letters, 103 (2009), 168102–168106.
Vakulenko, S. and Zimin, M. An analytically tractable model of large networks: dynamics and reliability, Int. Journal of Nanotechnology and Molecular Computation (2010).
Vakulenko, S. and Grigoriev, D. Algorithms and complexity in biological pattern formation problems, Annals of Pure and Applied Logic, 141 (2006), 421–428.
Vakulenko, S. and Volpert, V. Generalized traveling waves for perturbed monotone reaction-diffusion systems. Nonlinear Anal. Ser. A: Theory Methods, 46, no. 6 (2001), 757–776.
Vakulenko, S. and Volpert, V. New effects in propagation of waves for reaction-diffusion systems, Asymptotic Analysis, 38 (2004), 11–33.


Vakulenko, S., Kazmierczak, B. and Genieys, S. Pattern formation capacity of spatially extended systems. Phys. Rev. E, 69, no. 1 (2004), 016215.
Vakulenko, S., Vilesov, A., Stühn, B. and Frenkel, S. Kinetics of superstructure formation in block copolymers, J. Chem. Physics, 106 (1997), 3412.
Van Valen, L. A new evolutionary law, Evolutionary Theory, 1 (1973), 1–30.
Valentine, J., Collins, A. and Meyer, C. Morphological complexity increase in metazoans. Paleobiology, 20 (1994), 131–142.
Valiant, L. G. Evolvability. Lect. Notes Comput. Sci., 4708 (2007), 22–43.
Valiant, L. G. A theory of the learnable. Comm. ACM, 27 (1984), 1134–1142.
Valiant, L. and Vazirani, V. NP is as easy as detecting unique solutions, Theoretical Computer Science, 47 (1986), 85–93. doi:10.1016/0304-3975(86)90135-0.
Vanderbauwhede, A. and Iooss, G. Center manifold theory in infinite dimensions, Dynamics Reported: Expositions in Dynamical Systems, Springer, Berlin, 125–163 (1992).
Vano, J. A., Wildenberg, J. C., Anderson, M. B., Noel, J. K., Sprott, J. C. Chaos in low-dimensional Lotka–Volterra models of competition. Nonlinearity, 19 (2006), 2391–2404.
Ventsel, A. D. and Freidlin, M. I. Random Perturbations of Dynamical Systems, Springer, New York (1984).
Viana, M. Multidimensional nonhyperbolic attractors, Publ. IHES, 85 (1997), 63–96.
Volpert, A. I. and Volpert, V. A. Stability of waves described by parabolic systems of equations. In: Boundary value problems for partial differential equations, Acad. Nauk SSSR, Sibirsk. Otdel., Inst. Mat., Novosibirsk, Ed. S. K. Godunov, 20 (1995).
Volpert, A., Volpert, Vit. and Volpert, Vl. Traveling wave solutions of parabolic systems. Translations of Mathematical Monographs, Vol. 140, Amer. Math. Society, Providence (1994).
Volpert, A. I. and Volpert, V. A. Applications of the rotation theory of vector fields to the study of wave solutions of parabolic equations. Trans. Moscow Math. Soc., 52 (1990), 59.
Volpert, A. I. and Volpert, V. A. Location of spectrum and stability of solutions for monotone parabolic systems. Adv. Diff. Eq., 2 (1997), 811.
Volterra, V. Leçons sur la théorie mathématique de la lutte pour la vie, Gauthier-Villars (1931).
Vovk, V. and Shafer, G. Kolmogorov’s contributions to the foundations of probability. Prob. Inf. Transm., 39 (2003), 21–31.
Waddington, C. H. Canalization of development and the inheritance of acquired characters, Nature, 150, no. 3811 (1942), 563–565, doi: 10.1038/150563a0.
van der Waerden, B. L. Algebra, Volumes I, II. Springer, New York (2003).
Wagner, A. Redundant gene functions and natural selection, J. Evol. Biol., 12 (1999), 1–16.
Wagner, G. and Altenberg, L. Complex adaptations and the evolution of evolvability. Evolution, 50 (1996), 967–976.
Whitham, G. B. Linear and Nonlinear Waves, John Wiley and Sons Inc. (1974).
Widrow, B. and Lehr, M. A. 30 years of adaptive neural networks: perceptron, madaline, and backpropagation, Proceedings of the IEEE, 78, no. 9 (1990), 1415–1442.
Wiggins, S. Normally Hyperbolic Invariant Manifolds in Dynamical Systems, Springer, New York (1994).
Wiggins, S. Global Bifurcations and Chaos, Springer, New York (1988).
Wittkopp, P. J., Carroll, S. B. and Kopp, A. Evolution in black and white: genetic control of pigment patterns in Drosophila, Trends in Genetics, 19, no. 9 (2003), 495–504.
Wolpert, L. Positional information and the spatial pattern of cellular differentiation, Journal Theor. Biol., 25, no. 1 (1969), 1–47, doi:10.1016/S0022-5193(69)80016-0.
Wolpert, L., Beddington, R., Jessell, T., Lawrence, P., Meyerowitz, E. and Smith, J. Principles of Development, Oxford University Press (2002).


Wu, S., Huang, S., Ding, J., Zhao, Y., Liang, L., Liu, T., Zhan, R. and He, X. Multiple microRNAs modulate p21Cip1/Waf1 expression by directly targeting its 3' untranslated region. Oncogene, 29 (2010), 2302–2308.
Yeung, M. K. S., Tegnér, J. and Collins, J. J. Reverse engineering gene networks using singular value decomposition and robust regression, Proc. Natl. Acad. Sci. USA, 99 (2002), 6163–6168.
Zelenyak, T. I. Stabilization of solutions of boundary nonlinear problems for a second-order parabolic equation with one space variable, Diff. Equations, 4 (1968), 17–22.
Zhabotinsky, A. M. Konzentrazionnie avtokolebania, Nauka, Moscow (1974) (in Russian).
Zhao, J., Yu, H., Luo, J. H., Cao, Z. W. and Li, Y. X. Hierarchical modularity of nested bow-ties in metabolic networks. BMC Bioinformatics, 7 (2006), 386. doi:10.1186/1471-2105-7-386. PMC 1560398. PMID 16916470.
Zhdanov, V. P. Kinetic models of gene expression including non-coding RNAs, Physics Reports, 500 (2011), 1–42.
Ziv, J. and Lempel, A. A universal algorithm for sequential data compression. IEEE Trans. Inf. Th., 23 (1977), 337–343.
Ziv, J. and Lempel, A. Compression of individual sequences via variable-rate coding. IEEE Trans. Inf. Th., 24 (1978), 530–536.
http://flyex.ams.sunysb.edu/FlyEx/.

Index

A
absorbing set 4
attraction 4
attractor 4

B
Boolean circuits 89
Brusselator 164
Burgers equation 267

C
canalization 233
capacitors 231
centralized gene network 97
centralized network 216
Chaffee–Infante problem 9
chaos in Lotka–Volterra systems with n resources 50
chaotic torus automorphism 20
competition exclusion 54
competitive systems 164
cooperative systems 12, 164

D
Darwin Arrow 263
dissipative 4
dissipative semiflow 4

E
evolution feasibility problem 227

F
flow 1

G
generalized Hopfield substitution 31
generic systems 194
genome complexity 264
global semiflow 3
graph 33
graph growth 195
Gromov–Carbone hypothesis 194

H
Hebb rule 232
Hill function 30
homeostasis 194
Hopfield substitution 30
Hopfield system 10, 29
hyperbolic equilibrium 8
hyperbolic set 15

I
interval map 20
invariant manifold 7
invariant set 4

K
k-SAT 228
kink 121
Kolmogorov complexity 249
Korzuchin theorem 67, 159

L
Lorenz system 51
Lotka–Volterra system 45
Lotka–Volterra system with n resources 45

M
main phase equation 267
maximally complex semiflows 23
Michaelis–Menten function 30
Milnor attractor 5
monotone semiflow 11
morphological complexity 260
Morse–Smale systems 9
multistationarity 218

N
network complexity 206
network viability 205
nonwandering set 9
NP-complete problems 239

O
orbital topological equivalency 20

P
pattern generation problem for gene circuits 90
Peixoto theorems 14
periodic and chaotic waves 148
permanency 53
persistence and permanency 44
persistence of hyperbolic sets 20
Pfaffian chain 199
plankton paradox 44
polydynamical system 257
preferential attachment 196

Q
quasiconvergence 11
quasiequilibrium method 119

R
realization of vector fields 23
Reinitz–Mjolsness–Sharp model 87
Reinitz–Sharp–Mjolsness model 261
robustness 220

S
scale-free networks 195
Schrödinger equation 267
sensitive dependence on initial conditions 19
shadow system 160
Sharkovsky theorem 20
sigmoidal functions 29
Smale theorems 14
stable evolution 203
standard ecological model 59
strong persistence 53
strong persistency and chaos 56
structural stability 13
synchronization and desynchronization 268
systems of chemical kinetics 67

T
theorem on attractor existence 5
trajectory 4
transitive semiflow 18
Turing machine 106

V
viability domain 194

W
Whitham principle for dissipative systems 119

De Gruyter Series in Mathematics and Life Sciences

Volume 4
Sergey Vakulenko
Complexity and Evolution of Dissipative Systems: An Analytical Approach, 2013
ISBN 978-3-11-026648-1, e-ISBN 978-3-11-026828-7, Set-ISBN 978-3-11-026829-4

Volume 3
Zoran Nikoloski, Sergio Grimbs
Network-based Molecular Biology: Data-driven Modeling and Analysis, 2013
ISBN 978-3-11-026256-8, e-ISBN 978-3-11-026266-7, Set-ISBN 978-3-11-916541-9

Volume 2
Shair Ahmad, Ivanka M. Stamova (Eds.)
Lotka-Volterra and Related Systems: Recent Developments in Population Dynamics, 2013
ISBN 978-3-11-026951-2, e-ISBN 978-3-11-026984-0, Set-ISBN 978-02698-5-7

Volume 1
Alexandra V. Antoniouk, Roderick V. N. Melnik (Eds.)
Mathematics and Life Sciences, 2012
ISBN 978-3-11-027372-4, e-ISBN 978-3-11-028853-7, Set-ISBN 978-3-11-028854-4

www.degruyter.com